From xen-devel-bounces@lists.xen.org Wed Aug 01 00:00:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 00:00:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwMLu-0004UD-Id; Tue, 31 Jul 2012 23:59:50 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lists+xen@internecto.net>) id 1SwMLt-0004U8-SB
	for xen-devel@lists.xen.org; Tue, 31 Jul 2012 23:59:50 +0000
Received: from [85.158.139.83:50347] by server-4.bemta-5.messagelabs.com id
	BA/37-27831-47178105; Tue, 31 Jul 2012 23:59:48 +0000
X-Env-Sender: lists+xen@internecto.net
X-Msg-Ref: server-2.tower-182.messagelabs.com!1343779188!29618397!1
X-Originating-IP: [176.9.245.29]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17018 invoked from network); 31 Jul 2012 23:59:48 -0000
Received: from polaris.internecto.net (HELO mx1.internecto.net) (176.9.245.29)
	by server-2.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 31 Jul 2012 23:59:48 -0000
Received: from localhost (unknown [127.0.0.1])
	by mx1.internecto.net (Postfix) with ESMTP id EEB52A02F0;
	Tue, 31 Jul 2012 23:59:47 +0000 (UTC)
X-Virus-Scanned: Debian amavisd-new at mail.internecto.net
Received: from mx1.internecto.net ([127.0.0.1])
	by localhost (mail.polaris.internecto.net [127.0.0.1]) (amavisd-new,
	port 10024)
	with ESMTP id gu8ExOIE6eOM; Tue, 31 Jul 2012 23:59:42 +0000 (UTC)
Received: from internecto.net (5ED4FDEB.cm-7-5d.dynamic.ziggo.nl
	[94.212.253.235])
	(using TLSv1 with cipher DHE-RSA-AES128-SHA (128/128 bits))
	(Client did not present a certificate)
	(Authenticated sender: lists@internecto.net)
	by mx1.internecto.net (Postfix) with ESMTPSA id 9E1C1A02EC;
	Tue, 31 Jul 2012 23:59:42 +0000 (UTC)
Date: Wed, 1 Aug 2012 01:59:41 +0200
From: Mark van Dijk <lists+xen@internecto.net>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Message-ID: <20120801015941.22f773d3@internecto.net>
In-Reply-To: <20120731134054.GD4789@phenom.dumpdata.com>
References: <20120722181611.7ae03506@internecto.net>
	<20120723142025.GB793@phenom.dumpdata.com>
	<20120726022527.4d3f6e4a@internecto.net>
	<20120726140552.GD28024@phenom.dumpdata.com>
	<501176DD0200007800090A32@nat28.tlf.novell.com>
	<20120726145535.GA5743@phenom.dumpdata.com>
	<20120728145515.50bf4d45@internecto.net>
	<501657C70200007800091357@nat28.tlf.novell.com>
	<20120731134054.GD4789@phenom.dumpdata.com>
Organization: Internecto SIS
X-Mailer: Claws Mail 3.8.0 (GTK+ 2.24.10; i686-pc-linux-gnu)
Mime-Version: 1.0
Cc: Jan Beulich <JBeulich@suse.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Kernel crash with acpi_processor,
 cpu_idle and intel_idle =y
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Message 20120731134054.GD4789@phenom.dumpdata.com contained:

>Duh! That is easy enough to fix. Mark, can you please try testing with
>this patch (and obviously enable the CONFIG_INTEL_IDLE)
>
>diff --git a/drivers/idle/intel_idle.c b/drivers/idle/intel_idle.c
>index d0f59c3..46a9884 100644
>--- a/drivers/idle/intel_idle.c
>+++ b/drivers/idle/intel_idle.c
>@@ -557,7 +557,7 @@ static int __init intel_idle_init(void)
> 	retval = cpuidle_register_driver(&intel_idle_driver);
> 	if (retval) {
> 		printk(KERN_DEBUG PREFIX "intel_idle yielding to %s",
>-			cpuidle_get_driver()->name);
>+			cpuidle_get_driver() ?
>cpuidle_get_driver()->name : "none");
> 		return retval;
> 	}
> 

Good going, Jan and Konrad. The above patch works: no more crashes. For
fun I have also tested the patch from my previous email (the lkml link;
it has been acked, so it was worth a try). I tested it in combination
with your patch and I still see no crashes.



-- 
Stay in touch,
Mark van Dijk.               ,--------------------------------
----------------------------'        Tue Jul 31 22:19 UTC 2012
Today is Pungenday, the 67th day of Confusion in the YOLD 3178

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 00:05:18 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 00:05:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwMQY-00059n-9r; Wed, 01 Aug 2012 00:04:38 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1SwMQW-00059g-EX
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 00:04:36 +0000
Received: from [85.158.143.35:36778] by server-2.bemta-4.messagelabs.com id
	D9/CB-17938-39278105; Wed, 01 Aug 2012 00:04:35 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1343779473!4969469!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDY3MTAxNw==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7759 invoked from network); 1 Aug 2012 00:04:35 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-5.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 1 Aug 2012 00:04:35 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7104RbQ014343
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 1 Aug 2012 00:04:28 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7104RQ0018617
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 1 Aug 2012 00:04:27 GMT
Received: from abhmt118.oracle.com (abhmt118.oracle.com [141.146.116.70])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7104Quk009031; Tue, 31 Jul 2012 19:04:26 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 31 Jul 2012 17:04:26 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 7BB0F402B2; Tue, 31 Jul 2012 19:55:27 -0400 (EDT)
Date: Tue, 31 Jul 2012 19:55:27 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Mark van Dijk <lists+xen@internecto.net>
Message-ID: <20120731235527.GM32698@phenom.dumpdata.com>
References: <20120722181611.7ae03506@internecto.net>
	<20120723142025.GB793@phenom.dumpdata.com>
	<20120726022527.4d3f6e4a@internecto.net>
	<20120726140552.GD28024@phenom.dumpdata.com>
	<501176DD0200007800090A32@nat28.tlf.novell.com>
	<20120726145535.GA5743@phenom.dumpdata.com>
	<20120728145515.50bf4d45@internecto.net>
	<501657C70200007800091357@nat28.tlf.novell.com>
	<20120731134054.GD4789@phenom.dumpdata.com>
	<20120801015941.22f773d3@internecto.net>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20120801015941.22f773d3@internecto.net>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Jan Beulich <JBeulich@suse.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Kernel crash with acpi_processor,
 cpu_idle and intel_idle =y
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Aug 01, 2012 at 01:59:41AM +0200, Mark van Dijk wrote:
> Message 20120731134054.GD4789@phenom.dumpdata.com contained:
> 
> >Duh! That is easy enough to fix. Mark, can you please try testing with
> >this patch (and obviously enable the CONFIG_INTEL_IDLE)
> >
> >diff --git a/drivers/idle/intel_idle.c b/drivers/idle/intel_idle.c
> >index d0f59c3..46a9884 100644
> >--- a/drivers/idle/intel_idle.c
> >+++ b/drivers/idle/intel_idle.c
> >@@ -557,7 +557,7 @@ static int __init intel_idle_init(void)
> > 	retval = cpuidle_register_driver(&intel_idle_driver);
> > 	if (retval) {
> > 		printk(KERN_DEBUG PREFIX "intel_idle yielding to %s",
> >-			cpuidle_get_driver()->name);
> >+			cpuidle_get_driver() ?
> >cpuidle_get_driver()->name : "none");
> > 		return retval;
> > 	}
> > 
> 
> Good going, Jan and Konrad. The above patch works: no more crashes. For
> fun I have also tested the patch from my previous email (the lkml link;
> it has been acked, so it was worth a try). I tested it in combination
> with your patch and I still see no crashes.

Ok, is it Ok if I include your name as Reported-by and Tested-by?
Thx
> 
> 
> 
> -- 
> Stay in touch,
> Mark van Dijk.               ,--------------------------------
> ----------------------------'        Tue Jul 31 22:19 UTC 2012
> Today is Pungenday, the 67th day of Confusion in the YOLD 3178

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 00:20:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 00:20:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwMfW-0005WC-D7; Wed, 01 Aug 2012 00:20:06 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SwMfU-0005W7-J9
	for xen-devel@lists.xensource.com; Wed, 01 Aug 2012 00:20:04 +0000
Received: from [85.158.138.51:17911] by server-1.bemta-3.messagelabs.com id
	42/4F-31934-33678105; Wed, 01 Aug 2012 00:20:03 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-13.tower-174.messagelabs.com!1343780403!9519046!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc2Mjc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9441 invoked from network); 1 Aug 2012 00:20:03 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-13.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 00:20:03 -0000
X-IronPort-AV: E=Sophos;i="4.77,690,1336348800"; d="scan'208";a="13792857"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 00:20:02 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 1 Aug 2012 01:20:02 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1SwMfS-0008OE-Ig;
	Wed, 01 Aug 2012 00:20:02 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1SwMfS-0001i5-Ha;
	Wed, 01 Aug 2012 01:20:02 +0100
To: xen-devel@lists.xensource.com
Message-ID: <osstest-13528-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 1 Aug 2012 01:20:02 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.0-testing test] 13528: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13528 xen-4.0-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13528/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-sedf      8 debian-fixup                fail pass in 13524
 test-amd64-i386-xl-win-vcpus1  5 xen-boot                   fail pass in 13524
 test-i386-i386-xl-win         5 xen-boot           fail in 13524 pass in 13528
 test-amd64-i386-xl-win7-amd64  7 windows-install   fail in 13524 pass in 13528

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-credit2   14 guest-localmigrate/x10       fail   like 13514

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  8 debian-fixup                fail never pass
 test-amd64-i386-xl           15 guest-stop                   fail   never pass
 test-i386-i386-xl            15 guest-stop                   fail   never pass
 test-amd64-amd64-xl          15 guest-stop                   fail   never pass
 test-amd64-i386-xl-multivcpu 15 guest-stop                   fail   never pass
 test-amd64-amd64-xl-sedf-pin 15 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win7-amd64  8 guest-saverestore            fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64  8 guest-saverestore      fail never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-rhel6hvm-intel  7 redhat-install               fail never pass
 test-amd64-i386-rhel6hvm-amd  7 redhat-install               fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-i386-qemuu-rhel6hvm-intel  7 redhat-install         fail never pass
 test-amd64-i386-qemuu-rhel6hvm-amd  7 redhat-install           fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-i386-i386-xl-win         7 windows-install              fail   never pass
 test-amd64-amd64-xl-winxpsp3  7 windows-install              fail   never pass
 test-i386-i386-xl-qemuu-winxpsp3  7 windows-install            fail never pass
 test-amd64-i386-xl-win7-amd64  8 guest-saverestore            fail  never pass
 test-i386-i386-xl-winxpsp3    7 windows-install              fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1  7 windows-install          fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3  7 windows-install          fail never pass
 test-amd64-amd64-xl-win       7 windows-install              fail   never pass
 test-amd64-amd64-xl-sedf     15 guest-stop            fail in 13524 never pass
 test-amd64-i386-xl-credit2   15 guest-stop            fail in 13524 never pass
 test-amd64-i386-xl-win-vcpus1  7 windows-install      fail in 13524 never pass

version targeted for testing:
 xen                  6d7ae840463c
baseline version:
 xen                  82fcf3a5dc3a

------------------------------------------------------------
People who touched revisions under test:
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Joe Jin <joe.jin@oracle.com>
  Keir Fraser <keir@xen.org>
  Yang Zhang <yang.z.zhang@Intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-i386-i386-xl                                            fail    
 test-amd64-i386-rhel6hvm-amd                                 fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   fail    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-i386-xl-multivcpu                                 fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-4.0-testing
+ revision=6d7ae840463c
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-4.0-testing 6d7ae840463c
+ branch=xen-4.0-testing
+ revision=6d7ae840463c
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-4.0-testing
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/staging/xen-4.0-testing.hg
++ : git://xenbits.xen.org/staging/qemu-xen-4.0-testing.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-4.0-testing
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-4.0-testing.git
++ : daily-cron.xen-4.0-testing
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-4.0-testing.git
+ info_linux_tree xen-4.0-testing
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-4.0-testing.hg
+ hg push -r 6d7ae840463c ssh://xen@xenbits.xensource.com/HG/xen-4.0-testing.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-4.0-testing.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 3 changesets with 3 changes to 3 files
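
The trace above shows the guard-variable re-exec idiom used by cri-lock-repos: on first entry the guard variable is empty, so the script exports it and re-executes itself under with-lock-ex; on re-entry the guard already names the lock file, the test fails, and execution falls through with the lock held. A minimal sketch of the same idiom, with flock(1) standing in for osstest's with-lock-ex helper (function and variable names here are illustrative, not osstest's):

```shell
#!/bin/sh
# need_lock LOCKFILE: true when this process has not yet re-executed
# itself under the lock, i.e. the guard variable does not name LOCKFILE.
# Mirrors the  '[' x"$LOCKED" '!=' x$lockfile ']'  test in the trace.
need_lock() {
    [ "x$REPOS_LOCK_LOCKED" != "x$1" ]
}

# take_lock LOCKFILE: if the lock is not yet held, record it in the
# environment and re-exec the current script under flock. On re-entry
# need_lock is false and the caller simply continues past this point.
take_lock() {
    if need_lock "$1"; then
        REPOS_LOCK_LOCKED=$1
        export REPOS_LOCK_LOCKED
        exec flock -x "$1" "$0"
    fi
}
```

The point of the re-exec (rather than just calling flock and continuing) is that the lock holder is the script's own process for its entire remaining lifetime, so the lock is released automatically whenever the script exits, however it exits.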

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 00:20:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 00:20:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwMfW-0005WC-D7; Wed, 01 Aug 2012 00:20:06 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SwMfU-0005W7-J9
	for xen-devel@lists.xensource.com; Wed, 01 Aug 2012 00:20:04 +0000
Received: from [85.158.138.51:17911] by server-1.bemta-3.messagelabs.com id
	42/4F-31934-33678105; Wed, 01 Aug 2012 00:20:03 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-13.tower-174.messagelabs.com!1343780403!9519046!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc2Mjc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9441 invoked from network); 1 Aug 2012 00:20:03 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-13.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 00:20:03 -0000
X-IronPort-AV: E=Sophos;i="4.77,690,1336348800"; d="scan'208";a="13792857"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 00:20:02 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 1 Aug 2012 01:20:02 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1SwMfS-0008OE-Ig;
	Wed, 01 Aug 2012 00:20:02 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1SwMfS-0001i5-Ha;
	Wed, 01 Aug 2012 01:20:02 +0100
To: xen-devel@lists.xensource.com
Message-ID: <osstest-13528-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 1 Aug 2012 01:20:02 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.0-testing test] 13528: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13528 xen-4.0-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13528/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-sedf      8 debian-fixup                fail pass in 13524
 test-amd64-i386-xl-win-vcpus1  5 xen-boot                   fail pass in 13524
 test-i386-i386-xl-win         5 xen-boot           fail in 13524 pass in 13528
 test-amd64-i386-xl-win7-amd64  7 windows-install   fail in 13524 pass in 13528

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-credit2   14 guest-localmigrate/x10       fail   like 13514

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  8 debian-fixup                fail never pass
 test-amd64-i386-xl           15 guest-stop                   fail   never pass
 test-i386-i386-xl            15 guest-stop                   fail   never pass
 test-amd64-amd64-xl          15 guest-stop                   fail   never pass
 test-amd64-i386-xl-multivcpu 15 guest-stop                   fail   never pass
 test-amd64-amd64-xl-sedf-pin 15 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win7-amd64  8 guest-saverestore            fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64  8 guest-saverestore      fail never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-rhel6hvm-intel  7 redhat-install               fail never pass
 test-amd64-i386-rhel6hvm-amd  7 redhat-install               fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-i386-qemuu-rhel6hvm-intel  7 redhat-install         fail never pass
 test-amd64-i386-qemuu-rhel6hvm-amd  7 redhat-install           fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-i386-i386-xl-win         7 windows-install              fail   never pass
 test-amd64-amd64-xl-winxpsp3  7 windows-install              fail   never pass
 test-i386-i386-xl-qemuu-winxpsp3  7 windows-install            fail never pass
 test-amd64-i386-xl-win7-amd64  8 guest-saverestore            fail  never pass
 test-i386-i386-xl-winxpsp3    7 windows-install              fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1  7 windows-install          fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3  7 windows-install          fail never pass
 test-amd64-amd64-xl-win       7 windows-install              fail   never pass
 test-amd64-amd64-xl-sedf     15 guest-stop            fail in 13524 never pass
 test-amd64-i386-xl-credit2   15 guest-stop            fail in 13524 never pass
 test-amd64-i386-xl-win-vcpus1  7 windows-install      fail in 13524 never pass

version targeted for testing:
 xen                  6d7ae840463c
baseline version:
 xen                  82fcf3a5dc3a

------------------------------------------------------------
People who touched revisions under test:
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Joe Jin <joe.jin@oracle.com>
  Keir Fraser <keir@xen.org>
  Yang Zhang <yang.z.zhang@Intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-i386-i386-xl                                            fail    
 test-amd64-i386-rhel6hvm-amd                                 fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   fail    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-i386-xl-multivcpu                                 fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-4.0-testing
+ revision=6d7ae840463c
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-4.0-testing 6d7ae840463c
+ branch=xen-4.0-testing
+ revision=6d7ae840463c
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-4.0-testing
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/staging/xen-4.0-testing.hg
++ : git://xenbits.xen.org/staging/qemu-xen-4.0-testing.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-4.0-testing
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-4.0-testing.git
++ : daily-cron.xen-4.0-testing
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-4.0-testing.git
+ info_linux_tree xen-4.0-testing
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-4.0-testing.hg
+ hg push -r 6d7ae840463c ssh://xen@xenbits.xensource.com/HG/xen-4.0-testing.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-4.0-testing.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 3 changesets with 3 changes to 3 files

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 00:26:07 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 00:26:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwMkl-0005lb-Fo; Wed, 01 Aug 2012 00:25:31 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lists+xen@internecto.net>) id 1SwMkj-0005lV-AA
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 00:25:29 +0000
Received: from [85.158.139.83:6627] by server-7.bemta-5.messagelabs.com id
	F4/4E-28276-87778105; Wed, 01 Aug 2012 00:25:28 +0000
X-Env-Sender: lists+xen@internecto.net
X-Msg-Ref: server-5.tower-182.messagelabs.com!1343780727!29683820!1
X-Originating-IP: [176.9.245.29]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9608 invoked from network); 1 Aug 2012 00:25:27 -0000
Received: from polaris.internecto.net (HELO mx1.internecto.net) (176.9.245.29)
	by server-5.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 1 Aug 2012 00:25:27 -0000
Received: from localhost (unknown [127.0.0.1])
	by mx1.internecto.net (Postfix) with ESMTP id 46321A02F0;
	Wed,  1 Aug 2012 00:25:27 +0000 (UTC)
X-Virus-Scanned: Debian amavisd-new at mail.internecto.net
Received: from mx1.internecto.net ([127.0.0.1])
	by localhost (mail.polaris.internecto.net [127.0.0.1]) (amavisd-new,
	port 10024)
	with ESMTP id KLlHB3+0za8t; Wed,  1 Aug 2012 00:25:20 +0000 (UTC)
Received: from internecto.net (5ED4FDEB.cm-7-5d.dynamic.ziggo.nl
	[94.212.253.235])
	(using TLSv1 with cipher DHE-RSA-AES128-SHA (128/128 bits))
	(Client did not present a certificate)
	(Authenticated sender: lists@internecto.net)
	by mx1.internecto.net (Postfix) with ESMTPSA id A5D30A02EC;
	Wed,  1 Aug 2012 00:25:20 +0000 (UTC)
Date: Wed, 1 Aug 2012 02:25:18 +0200
From: Mark van Dijk <lists+xen@internecto.net>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Message-ID: <20120801022518.212fc429@internecto.net>
In-Reply-To: <20120731235527.GM32698@phenom.dumpdata.com>
References: <20120722181611.7ae03506@internecto.net>
	<20120723142025.GB793@phenom.dumpdata.com>
	<20120726022527.4d3f6e4a@internecto.net>
	<20120726140552.GD28024@phenom.dumpdata.com>
	<501176DD0200007800090A32@nat28.tlf.novell.com>
	<20120726145535.GA5743@phenom.dumpdata.com>
	<20120728145515.50bf4d45@internecto.net>
	<501657C70200007800091357@nat28.tlf.novell.com>
	<20120731134054.GD4789@phenom.dumpdata.com>
	<20120801015941.22f773d3@internecto.net>
	<20120731235527.GM32698@phenom.dumpdata.com>
Organization: Internecto SIS
X-Mailer: Claws Mail 3.8.0 (GTK+ 2.24.10; i686-pc-linux-gnu)
Mime-Version: 1.0
Cc: Jan Beulich <JBeulich@suse.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Kernel crash with acpi_processor,
 cpu_idle and intel_idle =y
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Message 20120731235527.GM32698@phenom.dumpdata.com contained:

>Ok, is it Ok if I include your name as Reported-by and Tested-by?

Yes, that's fine. Thanks for asking. If you want to include my email
address please substitute 'lists+xen' with 'mark'.

-- 
Stay in touch,
Mark van Dijk.               ,--------------------------------
----------------------------'        Wed Aug 01 00:22 UTC 2012
Today is Pungenday, the 67th day of Confusion in the YOLD 3178

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 02:52:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 02:52:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwP2J-0001t6-8D; Wed, 01 Aug 2012 02:51:47 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <shakeel.butt@gmail.com>) id 1SwP2H-0001sy-IY
	for xen-devel@lists.xensource.com; Wed, 01 Aug 2012 02:51:45 +0000
Received: from [85.158.143.99:63458] by server-1.bemta-4.messagelabs.com id
	A9/F2-24392-0C998105; Wed, 01 Aug 2012 02:51:44 +0000
X-Env-Sender: shakeel.butt@gmail.com
X-Msg-Ref: server-16.tower-216.messagelabs.com!1343789504!17989458!1
X-Originating-IP: [209.85.215.171]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2038 invoked from network); 1 Aug 2012 02:51:44 -0000
Received: from mail-ey0-f171.google.com (HELO mail-ey0-f171.google.com)
	(209.85.215.171)
	by server-16.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 02:51:44 -0000
Received: by eaal12 with SMTP id l12so843977eaa.30
	for <xen-devel@lists.xensource.com>;
	Tue, 31 Jul 2012 19:51:43 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=171nj3Xpeg56l3ss52pTMoNOL7hb0CIqVoUGMRXCMC8=;
	b=uJ00rDdZHl2TzrYWNE87x8JUyH7sIHw2DMg5EcJTWD+N8GMuLs2GUO0fRTKYn0UNmc
	0rz9D6vqT1nqe3jRZXI6jDpCDPfjUCyGpM51M0elrffr35Cc9nFuCGzBz9P+6naWihPi
	rnzV2MMiJOntK/Y+iv4qIy+1bv6FB1XJAY5LYVv2DZGSSlJzY7xX98yi/KbEyeHHxHnU
	qhOKXXLKUgAQlDvqXolcJq/x2skm8NKTt5htA8wnBS3hD+/zXPLBo+wnDdSd2jiX62wP
	yCH62RMT/MTfFaVFUDjALXyn9aquc4VOAj5ngKM7VbJydl6P7zHDiAoMkPOVi5WCWkfM
	hr/g==
MIME-Version: 1.0
Received: by 10.14.203.132 with SMTP id f4mr20648516eeo.24.1343789503850; Tue,
	31 Jul 2012 19:51:43 -0700 (PDT)
Received: by 10.14.94.200 with HTTP; Tue, 31 Jul 2012 19:51:43 -0700 (PDT)
In-Reply-To: <5016E519.9080304@tycho.nsa.gov>
References: <CAGj-7pU7Y3mwwyOKkO3X5ZpUn+-_LQdz_vqcrRErbkW81sT+3w@mail.gmail.com>
	<5016E519.9080304@tycho.nsa.gov>
Date: Tue, 31 Jul 2012 22:51:43 -0400
Message-ID: <CAGj-7pW8Etp8h-ubmtorfezWrNvW3tgCdgJoY6jVPhC7HDyNgw@mail.gmail.com>
From: Shakeel Butt <shakeel.butt@gmail.com>
To: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Cc: xen-devel <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] XSM instead of IS_PRIV
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> Once this is done, it will make sense to change the XSM header file so
> that the appropriate xsm_* hooks resolve to IS_PRIV when XSM is not
> enabled. This would allow the IS_PRIV hooks covered by XSM to be
> completely removed, assuming the Xen developers agree that the XSM hooks
> cover all the important code.

Thanks for replying. So, is there consensus among the Xen community
to adopt this patch (xsm hooks replacing IS_PRIV) on upstream Xen?

thanks,
Shakeel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 04:31:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 04:31:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwQaJ-0002tG-Sh; Wed, 01 Aug 2012 04:30:59 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lmw@satx.rr.com>) id 1SwLYR-0003gy-RZ
	for xen-devel@lists.xen.org; Tue, 31 Jul 2012 23:08:43 +0000
Received: from [85.158.138.51:16643] by server-10.bemta-3.messagelabs.com id
	13/06-21993-A7568105; Tue, 31 Jul 2012 23:08:42 +0000
X-Env-Sender: lmw@satx.rr.com
X-Msg-Ref: server-12.tower-174.messagelabs.com!1343776121!20250725!1
X-Originating-IP: [75.180.132.120]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA3NS4xODAuMTMyLjEyMCA9PiA5ODA3NQ==\n,sa_preprocessor: 
	QmFkIElQOiA3NS4xODAuMTMyLjEyMCA9PiA5ODA3NQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12580 invoked from network); 31 Jul 2012 23:08:42 -0000
Received: from cdptpa-omtalb.mail.rr.com (HELO cdptpa-omtalb.mail.rr.com)
	(75.180.132.120) by server-12.tower-174.messagelabs.com with SMTP;
	31 Jul 2012 23:08:42 -0000
Authentication-Results: cdptpa-omtalb.mail.rr.com smtp.user=lmw@satx.rr.com;
	auth=pass (LOGIN)
X-Authority-Analysis: v=2.0 cv=IuCcgcDg c=1 sm=0 a=05ChyHeVI94A:10
	a=IkcTkHD0fZMA:10 a=ayC55rCoAAAA:8 a=VZa10c6xo_XutLS05KUA:9
	a=QEXdDO2ut3YA:10 a=FXLSPIJfNss/Zw1CXsBq9w==:117
X-Cloudmark-Score: 0
Received: from [10.127.132.171] ([10.127.132.171:56289] helo=cdptpa-web20-z02)
	by cdptpa-oedge03.mail.rr.com (envelope-from <lmw@satx.rr.com>)
	(ecelerity 2.2.3.46 r()) with ESMTPA
	id 81/00-17657-97568105; Tue, 31 Jul 2012 23:08:41 +0000
Message-ID: <20120731230841.M5H9R.154709.root@cdptpa-web20-z02>
Date: Tue, 31 Jul 2012 23:08:41 +0000
From: <lmw@satx.rr.com>
To: xen-devel@lists.xen.org
MIME-Version: 1.0
X-Priority: 3 (Normal)
Sensitivity: Normal
X-Originating-IP: 
X-Mailman-Approved-At: Wed, 01 Aug 2012 04:30:59 +0000
Subject: [Xen-devel] serial="pty"
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

When adding the keyword serial="pty" to a Xen cfg file, I am able to open /dev/ttyS0 in my dom-U Linux kernel environment and send serial data to a /dev/pts/x port in my dom-0 Linux environment.  If I happen to have multiple dom-Us configured this way, what is the best way to determine how these ports will be mapped, so I will know which /dev/pts port belongs to which dom-U's /dev/ttyS0 port?
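[Editor's note: one way to recover the mapping is to ask xenstore, where the toolstack records the pty it allocated for each domain. This is a sketch, assuming the xl toolstack and the conventional xenstore layout; verify the exact key on your installation with xenstore-ls.]

```shell
# Helper: the xenstore key that conventionally holds the pty assigned to a
# domain's first emulated serial port.  PV consoles use
# /local/domain/<domid>/console/tty instead -- check with xenstore-ls.
serial_pty_path() {
    printf '/local/domain/%s/serial/0/tty\n' "$1"
}

# On dom0, resolve each running domU's pty (requires xl and xenstore-read;
# guarded so the snippet is a no-op on machines without the toolstack):
if command -v xl >/dev/null 2>&1; then
    xl list | awk 'NR > 1 && $2 != 0 { print $1, $2 }' |
    while read -r name domid; do
        pty=$(xenstore-read "$(serial_pty_path "$domid")" 2>/dev/null)
        echo "$name (domid $domid): ${pty:-no serial pty}"
    done
fi
```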

Thanks,
Larry White

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 04:31:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 04:31:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwQaK-0002tU-LA; Wed, 01 Aug 2012 04:31:00 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <tamas.k.lengyel@gmail.com>) id 1SwPQn-0002Va-55
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 03:17:05 +0000
X-Env-Sender: tamas.k.lengyel@gmail.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1343791015!3197403!1
X-Originating-IP: [209.85.160.173]
X-SpamReason: No, hits=0.9 required=7.0 tests=HTML_40_50,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5869 invoked from network); 1 Aug 2012 03:16:57 -0000
Received: from mail-gh0-f173.google.com (HELO mail-gh0-f173.google.com)
	(209.85.160.173)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 03:16:57 -0000
Received: by ghrr14 with SMTP id r14so7569247ghr.32
	for <xen-devel@lists.xen.org>; Tue, 31 Jul 2012 20:16:55 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:content-type; bh=TmdQUIdOm+0Gcg2PCFtQUccf9aQ0Qsj+aT5oEvl5xME=;
	b=AMEXvtdbw/sEGLooikPtsUmpHwmNsFxEpHH97nH+d7gAygAHvOgL80SE91V5So90c5
	xnHbnOpXzBv5xe7ZRE/YyG8ypQeP5pZ/YM8ClnGnr4nDm3NvFNXgtxDohi4iD4HVkTCZ
	zzxknPSB4tprDy6WXpnX8vN14GzzDiwwIZyDs62YiQIyzg7pMnRkRLLuehUV0Ic3VfLO
	/filNbvFehWmvd/sSeEefQ6fNKsSD6+FkGknkmI6vjIC7BnRPdDe1IZafzSWM16sKbXz
	LFkXwNWPCKzHJ4qFvUjw9meesYG19PZGhyzMUpSMy/fqBkmrJePY9wlZzqwgYDzB+VAc
	NMJA==
MIME-Version: 1.0
Received: by 10.50.202.68 with SMTP id kg4mr3909513igc.43.1343791015059; Tue,
	31 Jul 2012 20:16:55 -0700 (PDT)
Received: by 10.64.126.199 with HTTP; Tue, 31 Jul 2012 20:16:55 -0700 (PDT)
In-Reply-To: <CABfawh=yoidWLbcYqs4JOD+b30vxYrrT1Q7a2QBNttwx4U9=Ug@mail.gmail.com>
References: <CABfawh=yoidWLbcYqs4JOD+b30vxYrrT1Q7a2QBNttwx4U9=Ug@mail.gmail.com>
Date: Tue, 31 Jul 2012 23:16:55 -0400
Message-ID: <CABfawh=1NC-VypsYLNr-J6EkvRS8PBXO5spF8w9GQdaUaso+jQ@mail.gmail.com>
From: Tamas Lengyel <tamas.k.lengyel@gmail.com>
To: xen-devel@lists.xen.org
X-Mailman-Approved-At: Wed, 01 Aug 2012 04:30:59 +0000
Subject: Re: [Xen-devel] libxl config datastructures
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0433309395694513612=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============0433309395694513612==
Content-Type: multipart/alternative; boundary=f46d0447950b1ae12404c62bbacb

--f46d0447950b1ae12404c62bbacb
Content-Type: text/plain; charset=ISO-8859-1

Dear libxl developers,

Currently there is no way to automatically parse a config file into the
libxl_domain_config data structure, so any libxl_* function that requires
that structure as input is effectively unusable from outside xl. The only
implementation that creates the structure is the private function
parse_config_data in xl_cmdimpl.c. As far as I can see, it relies on the
XLU_Config structure to build the libxl_domain_config structure. It is
very impractical to replicate that code in third-party tools just to be
able to use, for example, libxl_domain_restore.
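[Editor's note: to make the duplicated work concrete, here is a toy, self-contained sketch, not the real libxl/libxlu API, of the kind of key = value translation that xl's private parse_config_data performs. The struct and field names below are invented for illustration.]

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Invented stand-in for libxl_domain_config, for illustration only. */
struct toy_domain_config {
    char name[64];
    int  memory_mb;
};

/* Parse a buffer of xl-style lines such as:
 *   name = "guest1"
 *   memory = 512
 * This is the translation step that today exists only privately in
 * xl_cmdimpl.c and that third-party tools have to re-implement. */
static void toy_parse(const char *text, struct toy_domain_config *cfg)
{
    char line[256];

    while (*text) {
        size_t n = strcspn(text, "\n");   /* length of the current line */
        if (n >= sizeof line)
            n = sizeof line - 1;
        memcpy(line, text, n);
        line[n] = '\0';

        char sval[64];
        int ival;
        if (sscanf(line, " name = \"%63[^\"]\"", sval) == 1)
            snprintf(cfg->name, sizeof cfg->name, "%s", sval);
        else if (sscanf(line, " memory = %d", &ival) == 1)
            cfg->memory_mb = ival;

        text += n;
        if (*text == '\n')
            text++;
    }
}
```

A public equivalent in libxl.h would let external tools skip exactly this kind of re-implementation.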

My request is either:

   - Create a publicly accessible function in libxl.h that parses a
   file/char* to libxl_domain_config structure (essentially making
   parse_config_data public).
   - Or just use XLU_Config everywhere. I'm not entirely sure why there is
   a need to have two separate formats and the whole translation in-between.

Thanks,
Tamas

--f46d0447950b1ae12404c62bbacb--


--===============0433309395694513612==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0433309395694513612==--


From xen-devel-bounces@lists.xen.org Wed Aug 01 04:31:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 04:31:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwQaK-0002tN-99; Wed, 01 Aug 2012 04:31:00 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <tamas.k.lengyel@gmail.com>) id 1SwOwQ-0001s5-IO
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 02:45:42 +0000
Received: from [85.158.138.51:39789] by server-10.bemta-3.messagelabs.com id
	B4/7F-21993-55898105; Wed, 01 Aug 2012 02:45:41 +0000
X-Env-Sender: tamas.k.lengyel@gmail.com
X-Msg-Ref: server-15.tower-174.messagelabs.com!1343789139!28036830!1
X-Originating-IP: [209.85.213.45]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_10_20,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11047 invoked from network); 1 Aug 2012 02:45:41 -0000
Received: from mail-yw0-f45.google.com (HELO mail-yw0-f45.google.com)
	(209.85.213.45)
	by server-15.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 02:45:41 -0000
Received: by yhpp34 with SMTP id p34so7589393yhp.32
	for <xen-devel@lists.xen.org>; Tue, 31 Jul 2012 19:45:39 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=eo5g3rWsFRrmOm9akwP0aKgSnoZlx+IUrhDBVZmeROI=;
	b=iNhsv84wXFtnngCwtgN5VZZF1l88m5JkuL/goNCMh0aMn4zTsDg2Hnjw0so7dbYWWQ
	fY6ybAglRaFLeodY1albG8JSJv+q+wR12EsD3oAtyVyb07TWhp8f/4vgx+Gb7UyKxxp+
	+1egRguJ1lbAK4j3OzE0MGWNH3wVmTAtDpAP0Zoeohz2trjfv52Mwn9gLhkohxKmAiwi
	kxJqhchIUTys7lCAewEeEiICQ/6HhGT53u/8FBZKz5BcR3GBl6dvuk3ek7xc05YH+nH5
	JQVg5B8uLAwhqMkOoZotEh0s693CnSUyrM8chF1hC4OG4vF9+Q0vIfz1lu2Ngs1zy3oB
	4R6A==
MIME-Version: 1.0
Received: by 10.50.185.129 with SMTP id fc1mr2585712igc.64.1343789139502; Tue,
	31 Jul 2012 19:45:39 -0700 (PDT)
Received: by 10.64.126.199 with HTTP; Tue, 31 Jul 2012 19:45:39 -0700 (PDT)
Date: Tue, 31 Jul 2012 22:45:39 -0400
Message-ID: <CABfawh=yoidWLbcYqs4JOD+b30vxYrrT1Q7a2QBNttwx4U9=Ug@mail.gmail.com>
From: Tamas Lengyel <tamas.k.lengyel@gmail.com>
To: xen-devel@lists.xen.org
X-Mailman-Approved-At: Wed, 01 Aug 2012 04:30:59 +0000
Subject: [Xen-devel] libxl config datastructures
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6504664477599872048=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============6504664477599872048==
Content-Type: multipart/alternative; boundary=14dae934114350249b04c62b4a0e

--14dae934114350249b04c62b4a0e
Content-Type: text/plain; charset=ISO-8859-1

Dear libxl developers,

Currently there is no way to automatically parse a config file into the
libxl_domain_config data structure, so any libxl_* function that requires
that structure as input is effectively unusable from outside xl. The only
implementation that creates the structure is the private function
parse_config_data in xl_cmdimpl.c. As far as I can see, it relies on the
XLU_Config structure to build the libxl_domain_config structure. It is
very impractical to replicate that code in third-party tools just to be
able to use, for example, libxl_domain_restore.

My request is either:

   - Create a publicly accessible function in libxl.h that parses a
   file/char* to libxl_domain_config structure (essentially making
   parse_config_data public).
   - Or just use XLU_Config everywhere. I'm not entirely sure why there is
   a need to have two separate formats and the whole translation in-between.

Thanks,
Tamas

--14dae934114350249b04c62b4a0e--


--===============6504664477599872048==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6504664477599872048==--


From xen-devel-bounces@lists.xen.org Wed Aug 01 04:31:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 04:31:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwQaJ-0002t9-HX; Wed, 01 Aug 2012 04:30:59 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <anirbanchak@gmail.com>) id 1SwKKd-0002pf-3q
	for xen-devel@lists.xensource.com; Tue, 31 Jul 2012 21:50:23 +0000
Received: from [85.158.143.35:7474] by server-2.bemta-4.messagelabs.com id
	65/A0-17938-D1358105; Tue, 31 Jul 2012 21:50:21 +0000
X-Env-Sender: anirbanchak@gmail.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1343771417!16207705!1
X-Originating-IP: [209.85.216.171]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14738 invoked from network); 31 Jul 2012 21:50:18 -0000
Received: from mail-qc0-f171.google.com (HELO mail-qc0-f171.google.com)
	(209.85.216.171)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Jul 2012 21:50:18 -0000
Received: by qcad1 with SMTP id d1so4062130qca.30
	for <xen-devel@lists.xensource.com>;
	Tue, 31 Jul 2012 14:50:17 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=9zZplSDoVMDRUprRXY3N70jXN8V9e7X1BkED/9tC4+k=;
	b=nINIWhIprh5KHiR9OI7APZFthO1qYa6lmRV8a0EQFTTEWbYVWmQletWvOujOO/mMws
	+PuawUS40/cFrLWXo90tYOPSDkXJIwikwey7TjmfQJhoXeEY0cK9E6n6+PdcN9rHQire
	VjEoCz4uMcoFsPxq8yzcshZuBHX82XJVqNKYCMOnt0n0WO2iwMdOrvJL5TvHaEZwQR/q
	WsAJnd++9uqr6u3vcIcV8v1AeX6Q8Jas9K8TVJNU2mVKm8XQSyfGnvrKwrf4a7y26a7v
	tpP5M+if/5+B/qhv8A6++2oc0XZ04dh6i7tJOW74XaNLTxEpFCqgKlStdi8KOrL//TWz
	1AUg==
MIME-Version: 1.0
Received: by 10.224.32.205 with SMTP id e13mr32198691qad.69.1343771417404;
	Tue, 31 Jul 2012 14:50:17 -0700 (PDT)
Received: by 10.229.56.97 with HTTP; Tue, 31 Jul 2012 14:50:17 -0700 (PDT)
In-Reply-To: <1332842573.2485.88.camel@leeds.uk.xensource.com>
References: <1332838109433-5597283.post@n5.nabble.com>
	<1332840330.2485.70.camel@leeds.uk.xensource.com>
	<1332841414026-5597400.post@n5.nabble.com>
	<1332842573.2485.88.camel@leeds.uk.xensource.com>
Date: Tue, 31 Jul 2012 14:50:17 -0700
Message-ID: <CAB=E=HAfU-U-QXbomc3a29tUghGk48NQcMm4JBV3fe5qcFFKWQ@mail.gmail.com>
From: Anirban Chakraborty <anirbanchak@gmail.com>
To: Wei Liu <wei.liu2@citrix.com>
X-Mailman-Approved-At: Wed, 01 Aug 2012 04:30:59 +0000
Cc: Kristoffer Harthing Egefelt <k@itoc.dk>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] no-carrier on qlogic 8242 10gig with linux 3.x
 running xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Do not worry about the failed card response code. It indicates that the
driver was not able to get the FW dump template during initialization,
as that CDRP command is not supported by the onboard FW. To get line
rate, it is important that the driver runs with all of its MSI-X
vectors; it looks like the driver could get only a legacy interrupt
in this case.
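One way to check how many vectors the driver actually obtained is to count the device's lines in /proc/interrupts; a minimal sketch (the sample data below is illustrative, not from this system):

```shell
# Count how many interrupt vectors a qlcnic port owns.
# 'sample' stands in for the real output of: grep eth0 /proc/interrupts
sample='  98:  1024  0  PCI-MSI-edge  eth0[0]
  99:  2048  0  PCI-MSI-edge  eth0[1]
 100:  4096  0  PCI-MSI-edge  eth0[2]'
vectors=$(printf '%s\n' "$sample" | grep -c 'eth0')
printf 'eth0 interrupt vectors: %s\n' "$vectors"
# A single vector, or a line without PCI-MSI, means MSI-X was not in use.
```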

-Anirban

On Tue, Mar 27, 2012 at 3:02 AM, Wei Liu <wei.liu2@citrix.com> wrote:
> On Tue, 2012-03-27 at 10:43 +0100, Kristoffer Harthing Egefelt wrote:
>> Please find dmesg below - do you see any hints?
>> The onboard 1gig network cards are in fabric A, the 10gigs are in fabric B.
>> I have disabled the onboard NICs so the 10gigs are eth0 and eth1.
>> I have tried enabling the onboard NICs, making the 10gigs eth2 and eth3,
>> without luck.
>>
>> Thanks
>>
>> root@node0301:/var/log# cat dmesg.0
>
> <snip>
>
>> [   12.646309] input: PC Speaker as /devices/platform/pcspkr/input/input1
>> [   12.696167] QLogic 1/10 GbE Converged/Intelligent Ethernet Driver v5.0.25
>> [   12.696278] xen: registering gsi 38 triggering 0 polarity 1
>> [   12.696304] xen: --> pirq=38 -> irq=38 (gsi=38)
>> [   12.696340] qlcnic 0000:03:00.0: PCI INT A -> GSI 38 (level, low) -> IRQ
>> 38
>
> This one.
>
>> [   12.696355] qlcnic 0000:03:00.0: setting latency timer to 64
>> [   12.697214] qlcnic 0000:03:00.0: 2MB memory map
>
> <snip>
>
>> [   13.077174] ata1: SATA link down (SStatus 0 SControl 300)
>> [   13.077266] ata2: SATA link down (SStatus 0 SControl 300)
>> [   13.221115] usb 2-3: new high-speed USB device number 3 using ehci_hcd
>> [   13.345114] qlcnic 0000:03:00.0: phy port: 0 switch_mode: 0,
>> [   13.345116]  max_tx_q: 1 max_rx_q: 8 min_tx_bw: 0x0,
>> [   13.345117]  max_tx_bw: 0x64 max_mtu:0x2580, capabilities: 0x6affae
>> [   13.361103] qlcnic 0000:03:00.0: failed card response code:0x10
>> [   13.361162] qlcnic 0000:03:00.0: Can't get template size 16
>> [   13.361167] qlcnic 0000:03:00.0: firmware v4.7.83
>
> Failed. :-(
>
>> [   13.371640] usb 2-3: New USB device found, idVendor=046b, idProduct=ff90
>> [   13.371646] usb 2-3: New USB device strings: Mfr=1, Product=2,
>> SerialNumber=3
>> [   13.371650] usb 2-3: Product: Composite Device
>> [   13.371653] usb 2-3: Manufacturer: American Megatrends Inc.
>> [   13.371657] usb 2-3: SerialNumber: serial
>> [   13.393138] qlcnic: 24:b6:fd:64:1e:45: QME8242-k 10GbE Dual Port
>> Mezzanine Card Board Chip rev 0x54
>> [   13.393460] usbcore: registered new interface driver uas
>> [   13.393559] qlcnic 0000:03:00.0: using msi-x interrupts
>> [   13.394251] qlcnic 0000:03:00.0: eth0: XGbE port initialized
>
> And this one.
>
>> [   13.394290] xen: registering gsi 38 triggering 0 polarity 1
>> [   13.394304] xen_map_pirq_gsi: returning irq 38 for gsi 38
>> [   13.394308] xen: --> pirq=38 -> irq=38 (gsi=38)
>> [   13.394311] Already setup the GSI :38
>> [   13.394316] qlcnic 0000:03:00.1: PCI INT A -> GSI 38 (level, low) -> IRQ
>> 38
>> [   13.394331] qlcnic 0000:03:00.1: setting latency timer to 64
>> [   13.395317] qlcnic 0000:03:00.1: 2MB memory map
>> [   13.404052] Initializing USB Mass Storage driver...
>> [   13.404264] scsi3 : usb-storage 2-3:1.0
>> [   13.404487] scsi4 : usb-storage 2-3:1.1
>> [   13.404693] scsi5 : usb-storage 2-3:1.2
>> [   13.404799] usbcore: registered new interface driver usb-storage
>> [   13.404802] USB Mass Storage support registered.
>> [   13.409136] qlcnic 0000:03:00.1: phy port: 1 switch_mode: 0,
>> [   13.409138]  max_tx_q: 1 max_rx_q: 8 min_tx_bw: 0x0,
>> [   13.409139]  max_tx_bw: 0x64 max_mtu:0x2580, capabilities: 0x6affae
>> [   13.425118] qlcnic 0000:03:00.1: failed card response code:0x10
>> [   13.425174] qlcnic 0000:03:00.1: Can't get template size 16
>> [   13.425178] qlcnic 0000:03:00.1: firmware v4.7.83
>> [   13.449525] qlcnic 0000:03:00.1: using msi-x interrupts
>> [   13.450132] qlcnic 0000:03:00.1: eth1: XGbE port initialized
>
> And this one.
>
>> [   13.677120] usb 5-2: new full-speed USB device number 2 using uhci_hcd
>> [   13.840290] usb 5-2: New USB device found, idVendor=046b, idProduct=ff10
>> [   13.840296] usb 5-2: New USB device strings: Mfr=1, Product=2,
>> SerialNumber=3
>
> <snip>
>
>> [   18.325633] ip_tables: (C) 2000-2006 Netfilter Core Team
>> [   18.434036] device xenbr0 entered promiscuous mode
>> [   18.445825] device eth1 entered promiscuous mode
>> [   18.446092] device eth0 entered promiscuous mode
>
> I have no idea about the "failed card response", but both eth0 and
> eth1 are present. :-)
>
> And it looks like they are both bridged to xenbr0 by the userspace
> tools?
>
>
> Wei.
>
>
>
>
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 05:27:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 05:27:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwRRn-0004Ge-9u; Wed, 01 Aug 2012 05:26:15 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1SwRRl-0004GY-SN
	for xen-devel@lists.xensource.com; Wed, 01 Aug 2012 05:26:14 +0000
Received: from [85.158.138.51:39221] by server-1.bemta-3.messagelabs.com id
	8F/AE-31934-4FDB8105; Wed, 01 Aug 2012 05:26:12 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-9.tower-174.messagelabs.com!1343798772!29855466!1
X-Originating-IP: [209.85.212.171]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2583 invoked from network); 1 Aug 2012 05:26:12 -0000
Received: from mail-wi0-f171.google.com (HELO mail-wi0-f171.google.com)
	(209.85.212.171)
	by server-9.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 05:26:12 -0000
Received: by wibhq4 with SMTP id hq4so3151845wib.6
	for <xen-devel@lists.xensource.com>;
	Tue, 31 Jul 2012 22:26:12 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:cc:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=w9FlkgwPoVSkUBV5OUAQzEQ4ts+BvEOpsjL54NaK3uI=;
	b=oc4JwXXBU0mQleAMAMquP5+j9vFcxeBVUZaXgul6M/cdN/VOCcPgH5D7tWK5NOpP6B
	gnls77GPQEFXkHBwew9UASoWrSyOXCEpjP/Lw/pdYbtIREmyuWrqSejkBy2cDAT9d7Dl
	beY6KTvRRWhq26ZsoC0ff0eEcJm3a3ynZ76SbmFd/C+vIeQ7QyTVsSyukNv2jZDsEm8B
	+lizRo+qmO2UDUYW+ce25JweGm85kdJEMrNzchESQ0Oso5RLeh5ybkY+27Gw8I+46PM8
	6C4JzwzwS6DL0YGLSog5A2KZuDUhV5eTUYMF3Oabtqoa6bV8fD1MHSPZRbo9eogkwlhg
	i7RQ==
Received: by 10.180.83.106 with SMTP id p10mr8743847wiy.21.1343798772042;
	Tue, 31 Jul 2012 22:26:12 -0700 (PDT)
Received: from [192.168.1.68] (host86-178-46-238.range86-178.btcentralplus.com.
	[86.178.46.238])
	by mx.google.com with ESMTPS id o2sm5913874wiz.11.2012.07.31.22.26.10
	(version=SSLv3 cipher=OTHER); Tue, 31 Jul 2012 22:26:11 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.32.0.111121
Date: Wed, 01 Aug 2012 06:26:09 +0100
From: Keir Fraser <keir.xen@gmail.com>
To: Shakeel Butt <shakeel.butt@gmail.com>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>
Message-ID: <CC3E7C81.3A25F%keir.xen@gmail.com>
Thread-Topic: [Xen-devel] XSM instead of IS_PRIV
Thread-Index: Ac1vpigA6j7XhULvN0iZLp+YldCAMQ==
In-Reply-To: <CAGj-7pW8Etp8h-ubmtorfezWrNvW3tgCdgJoY6jVPhC7HDyNgw@mail.gmail.com>
Mime-version: 1.0
Cc: xen-devel <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] XSM instead of IS_PRIV
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/08/2012 03:51, "Shakeel Butt" <shakeel.butt@gmail.com> wrote:

>> Once this is done, it will make sense to change the XSM header file so
>> that the appropriate xsm_* hooks resolve to IS_PRIV when XSM is not
>> enabled. This would allow the IS_PRIV hooks covered by XSM to be
>> completely removed, assuming the Xen developers agree that the XSM hooks
>> cover all the important code.
> 
> Thanks for replying. So, is there consensus among the Xen community
> to adopt this patch (xsm hooks replacing IS_PRIV) on upstream Xen?

Assuming it can be done cleanly and comprehensively, with the same behaviour
as IS_PRIV for an out-of-the-box setup, yes. What's not to like about that? :)

 -- Keir

> thanks,
> Shakeel
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 05:58:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 05:58:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwRwL-0004ZT-Va; Wed, 01 Aug 2012 05:57:49 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <hufan@websense.com>) id 1SwRwJ-0004ZI-Ji
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 05:57:48 +0000
Received: from [85.158.143.35:57619] by server-2.bemta-4.messagelabs.com id
	24/27-17938-A55C8105; Wed, 01 Aug 2012 05:57:46 +0000
X-Env-Sender: hufan@websense.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1343800665!16247174!1
X-Originating-IP: [208.87.234.190]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA4Ljg3LjIzNC4xOTAgPT4gMTY4NDUxNw==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4308 invoked from network); 1 Aug 2012 05:57:45 -0000
Received: from cluster-h.mailcontrol.com (HELO cluster-h.mailcontrol.com)
	(208.87.234.190)
	by server-15.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 1 Aug 2012 05:57:45 -0000
Received: from rly29h.srv.mailcontrol.com (localhost.localdomain [127.0.0.1])
	by rly29h.srv.mailcontrol.com (MailControl) with ESMTP id
	q715vd1o002408; Wed, 1 Aug 2012 06:57:42 +0100
Received: from localhost.localdomain (localhost.localdomain [127.0.0.1])
	by rly29h.srv.mailcontrol.com (MailControl) id q715vT5F001307;
	Wed, 1 Aug 2012 06:57:29 +0100
Received: from SSDEXCH1A.websense.com ([204.15.64.107])
	by rly29h-eth0.srv.mailcontrol.com (envelope-sender
	<hufan@websense.com>) (MIMEDefang) with ESMTP id q715vOMG000567
	(TLS bits=128 verify=FAIL); Wed, 01 Aug 2012 06:57:29 +0100 (BST)
Received: from SBJEXCH2B.websense.com (10.32.8.112) by SSDEXCH1A.websense.com
	(10.8.1.91) with Microsoft SMTP Server (TLS) id 14.2.283.3;
	Tue, 31 Jul 2012 22:57:18 -0700
Received: from SBJEXCH1A.websense.com ([169.254.1.209]) by
	SBJEXCH2B.websense.com ([::1]) with mapi id 14.02.0283.003;
	Wed, 1 Aug 2012 13:57:14 +0800
From: "Fan, Huaxiang" <hufan@websense.com>
To: "xen-users@lists.xen.org" <xen-users@lists.xen.org>
Thread-Topic: PCI passthrough for domU allocated with more than 4G memory
Thread-Index: Ac1vp0xm5UJayF9xRx2DE9F/z2HnhA==
Date: Wed, 1 Aug 2012 05:57:13 +0000
Message-ID: <E71FC5D6F96C3C4B93FC8FF942D924C675ADD043@SBJEXCH1A.websense.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.32.134.111]
MIME-Version: 1.0
X-Scanned-By: MailControl 8316.0 (www.mailcontrol.com) on 10.72.1.139
Cc: "Konrad Rzeszutek
	Wilk \(konrad.wilk@oracle.com\)" <konrad.wilk@oracle.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: [Xen-devel] PCI passthrough for domU allocated with more than 4G
	memory
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6773783635368204994=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============6773783635368204994==
Content-Language: en-US
Content-Type: multipart/alternative;
	boundary="_000_E71FC5D6F96C3C4B93FC8FF942D924C675ADD043SBJEXCH1Awebsen_"

--_000_E71FC5D6F96C3C4B93FC8FF942D924C675ADD043SBJEXCH1Awebsen_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable

Hi,

I have encountered some strange problems while trying to pass through
Broadcom 5709/5716 NICs (PCI passthrough) to domUs allocated with more than
4G of memory. Please see below for details.

My environment is:
Hardware Platform: DELL R210 with 2 Broadcom 5709 NICs and 2 Broadcom 5716 NICs
Xen: xen 4.2 unstable (64-bit hypervisor, 32-bit tools)
Kernel for both dom0 and domUs: xenified kernel 2.6.32.57 (32-bit)
OS: CentOS 6.2 (32-bit)

General information about Xen can be obtained with the command below:

# xl info

host                   : 7.8
release                : 2.6.32.57
version                : #1 SMP Fri Jul 6 18:44:16 CST 2012
machine                : i686
nr_cpus                : 8
max_cpu_id             : 31
nr_nodes               : 1
cores_per_socket       : 4
threads_per_core       : 2
cpu_mhz                : 2660
hw_caps                : bfebfbff:28100800:00000000:00003b40:0098e3fd:00000000:00000001:00000000
virt_caps              : hvm hvm_directio
total_memory           : 8182
free_memory            : 7046
sharing_freed_memory   : 0
sharing_used_memory    : 0
free_cpus              : 0
xen_major              : 4
xen_minor              : 2
xen_extra              : -unstable
xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
xen_scheduler          : credit
xen_pagesize           : 4096
platform_params        : virt_start=0xff400000
xen_changeset          : unavailable
xen_commandline        : dom0_mem=1024M dom0_max_vcpus=2 dom0_vcpus_pin
cc_compiler            : gcc version 4.4.6 20110731 (Red Hat 4.4.6-3) (GCC)
cc_compile_by          : root
cc_compile_domain      :
cc_compile_date        : Thu Jul 12 11:20:56 CST 2012
xend_config_format     : 4

Case 1: I specified the PV domU config below:

memory=3072
maxmem=6144
name="bs"
vif=['ip=169.254.254.1,script=vif-nat',]
disk=['file:/root/bs.img,xvda1,w']
kernel='/root/vmlinuz'
extra="iommu=soft console=hvc0"
ramdisk='/root/initrd.img'
root="/dev/xvda1 ro"
pci=['01:00.0','01:00.1']

and then started the domU. After that I executed the command below:

# xl list

Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0  1024     2     r-----      88.7
bs                                           2  3072     1     -b----       1.1

It seemed normal to me. But then I logged on to the bs domU and executed the command below:

# cat /proc/meminfo | head

MemTotal:        6158940 kB
MemFree:         2944776 kB
Buffers:            5108 kB
Cached:            32292 kB
SwapCached:            0 kB
Active:            21456 kB
Inactive:          22936 kB
Active(anon):       7000 kB
Inactive(anon):      108 kB
Active(file):      14456 kB

It indicated that the total memory was 6G. Why?
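For reference, a PV guest's MemTotal tracking maxmem= rather than memory= is plausible when the guest kernel sizes itself for the maximum; the arithmetic (a rough check, using the values quoted above) is:

```shell
# Compare maxmem=6144 (MB) from the domU config against the MemTotal the
# guest reported; the shortfall is memory the guest kernel reserved at boot.
maxmem_kb=$((6144 * 1024))
memtotal_kb=6158940             # from /proc/meminfo above
delta_kb=$((maxmem_kb - memtotal_kb))
echo "maxmem:   ${maxmem_kb} kB"
echo "MemTotal: ${memtotal_kb} kB"
echo "delta:    ${delta_kb} kB"
```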
When I went back to dom0, I executed the commands below:

# xl mem-set bs 6144

# xl list

Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0  1024     2     r-----      93.5
bs                                           2  6144     1     -b----      10.5

It seemed normal to me. But when I logged on to the bs domU again and executed the command below:

# cat /proc/meminfo | head

MemTotal:        9304668 kB
MemFree:         6087540 kB
Buffers:            5168 kB
Cached:            32464 kB
SwapCached:            0 kB
Active:            22300 kB
Inactive:          22408 kB
Active(anon):       7080 kB
Inactive(anon):      108 kB
Active(file):      15220 kB



It indicated that the total memory was 9G (6G + 3G). That was weird. Any idea about this?

Case 2: I specified the PV domU config below:

memory=6144
maxmem=6144
name="bs"
vif=['ip=169.254.254.1,script=vif-nat',]
disk=['file:/root/bs.img,xvda1,w']
kernel='/root/vmlinuz'
extra="iommu=soft console=hvc0"
ramdisk='/root/initrd.img'
root="/dev/xvda1 ro"
pci=['01:00.0','01:00.1']

and then started the domU. After that, I executed the command below:

# xl list

Name                                        ID   Mem VCPUs      State   Time(s)

Domain-0                                     0   648     2     r-----     120.5

bs                                           3  3360     1     -b----       7.0

The output was very confusing. Why had dom0's memory shrunk to 648M, and why was only 3360M assigned to the bs domU?

My own analysis:
I extracted the BIOS e820 memory map on the bs domU, as below:

[    0.000000] BIOS-provided physical RAM map:

[    0.000000]  Xen: 0000000000000000 - 00000000000a0000 (usable)

[    0.000000]  Xen: 00000000000a0000 - 0000000000100000 (reserved)

[    0.000000]  Xen: 0000000000100000 - 00000000bf699000 (usable)

[    0.000000]  Xen: 00000000bf699000 - 00000000bf6af000 (reserved)

[    0.000000]  Xen: 00000000bf6af000 - 00000000bf6ce000 (ACPI data)

[    0.000000]  Xen: 00000000bf6ce000 - 00000000c0000000 (reserved)

[    0.000000]  Xen: 00000000e0000000 - 00000000f0000000 (reserved)

[    0.000000]  Xen: 00000000fe000000 - 0000000100000000 (reserved)

[    0.000000]  Xen: 0000000180000000 - 00000001c33ec000 (usable)

I think the root cause might be related to the holes between c0000000 and e0000000, between f0000000 and fe000000, and between 100000000 and 180000000. And I think the e820_host option is set, according to my tracking.

Thanks in advance
HUAXIANG FAN
Software Engineer II

WEBSENSE NETWORK SECURITY TECHNOLOGY R&D (BEIJING) CO. LTD.
ph: +8610.5884.4327
fax: +8610.5884.4727
www.websense.cn<http://www.websense.cn>

Websense TRITON(tm)
For Essential Information Protection(tm)
Web Security<http://www.websense.com/content/Regional/SCH/WebSecurityOverview.aspx> | Data Security<http://www.websense.com/content/Regional/SCH/DataSecurity.aspx> | Email Security<http://www.websense.com/content/Regional/SCH/MessagingSecurity.aspx>



 Protected by Websense Hosted Email Security -- www.websense.com

--_000_E71FC5D6F96C3C4B93FC8FF942D924C675ADD043SBJEXCH1Awebsen_--


--===============6773783635368204994==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6773783635368204994==--


From xen-devel-bounces@lists.xen.org Wed Aug 01 05:58:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 05:58:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwRwL-0004ZT-Va; Wed, 01 Aug 2012 05:57:49 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <hufan@websense.com>) id 1SwRwJ-0004ZI-Ji
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 05:57:48 +0000
Received: from [85.158.143.35:57619] by server-2.bemta-4.messagelabs.com id
	24/27-17938-A55C8105; Wed, 01 Aug 2012 05:57:46 +0000
X-Env-Sender: hufan@websense.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1343800665!16247174!1
X-Originating-IP: [208.87.234.190]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA4Ljg3LjIzNC4xOTAgPT4gMTY4NDUxNw==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4308 invoked from network); 1 Aug 2012 05:57:45 -0000
Received: from cluster-h.mailcontrol.com (HELO cluster-h.mailcontrol.com)
	(208.87.234.190)
	by server-15.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 1 Aug 2012 05:57:45 -0000
Received: from rly29h.srv.mailcontrol.com (localhost.localdomain [127.0.0.1])
	by rly29h.srv.mailcontrol.com (MailControl) with ESMTP id
	q715vd1o002408; Wed, 1 Aug 2012 06:57:42 +0100
Received: from localhost.localdomain (localhost.localdomain [127.0.0.1])
	by rly29h.srv.mailcontrol.com (MailControl) id q715vT5F001307;
	Wed, 1 Aug 2012 06:57:29 +0100
Received: from SSDEXCH1A.websense.com ([204.15.64.107])
	by rly29h-eth0.srv.mailcontrol.com (envelope-sender
	<hufan@websense.com>) (MIMEDefang) with ESMTP id q715vOMG000567
	(TLS bits=128 verify=FAIL); Wed, 01 Aug 2012 06:57:29 +0100 (BST)
Received: from SBJEXCH2B.websense.com (10.32.8.112) by SSDEXCH1A.websense.com
	(10.8.1.91) with Microsoft SMTP Server (TLS) id 14.2.283.3;
	Tue, 31 Jul 2012 22:57:18 -0700
Received: from SBJEXCH1A.websense.com ([169.254.1.209]) by
	SBJEXCH2B.websense.com ([::1]) with mapi id 14.02.0283.003;
	Wed, 1 Aug 2012 13:57:14 +0800
From: "Fan, Huaxiang" <hufan@websense.com>
To: "xen-users@lists.xen.org" <xen-users@lists.xen.org>
Thread-Topic: PCI passthrough for domU allocated with more than 4G memory
Thread-Index: Ac1vp0xm5UJayF9xRx2DE9F/z2HnhA==
Date: Wed, 1 Aug 2012 05:57:13 +0000
Message-ID: <E71FC5D6F96C3C4B93FC8FF942D924C675ADD043@SBJEXCH1A.websense.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.32.134.111]
MIME-Version: 1.0
X-Scanned-By: MailControl 8316.0 (www.mailcontrol.com) on 10.72.1.139
Cc: "Konrad Rzeszutek
	Wilk \(konrad.wilk@oracle.com\)" <konrad.wilk@oracle.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: [Xen-devel] PCI passthrough for domU allocated with more than 4G
	memory
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6773783635368204994=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============6773783635368204994==
Content-Language: en-US
Content-Type: multipart/alternative;
	boundary="_000_E71FC5D6F96C3C4B93FC8FF942D924C675ADD043SBJEXCH1Awebsen_"

--_000_E71FC5D6F96C3C4B93FC8FF942D924C675ADD043SBJEXCH1Awebsen_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable

Hi,

I have encountered some strange problems when trying to PCI-passthrough Broadcom 5709/5716 NICs to domUs allocated with more than 4G of memory. Please see below for details.

My environment is:
Hardware Platform: DELL R210 with 2 Broadcom 5709 NICs and 2 Broadcom 5716 NICs
Xen: 4.2-unstable (64-bit hypervisor, 32-bit tools)
Kernel for both dom0 and domUs: xenified kernel 2.6.32.57 (32-bit)
OS: CentOS 6.2 (32-bit)

The general info regarding Xen can be obtained via the command below:

# xl info

host                   : 7.8

release                : 2.6.32.57

version                : #1 SMP Fri Jul 6 18:44:16 CST 2012

machine                : i686

nr_cpus                : 8

max_cpu_id             : 31

nr_nodes               : 1

cores_per_socket       : 4

threads_per_core       : 2

cpu_mhz                : 2660

hw_caps                : bfebfbff:28100800:00000000:00003b40:0098e3fd:00000000:00000001:00000000

virt_caps              : hvm hvm_directio

total_memory           : 8182

free_memory            : 7046

sharing_freed_memory   : 0

sharing_used_memory    : 0

free_cpus              : 0

xen_major              : 4

xen_minor              : 2

xen_extra              : -unstable

xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64

xen_scheduler          : credit

xen_pagesize           : 4096

platform_params        : virt_start=0xff400000

xen_changeset          : unavailable

xen_commandline        : dom0_mem=1024M dom0_max_vcpus=2 dom0_vcpus_pin

cc_compiler            : gcc version 4.4.6 20110731 (Red Hat 4.4.6-3) (GCC)

cc_compile_by          : root

cc_compile_domain      :

cc_compile_date        : Thu Jul 12 11:20:56 CST 2012

xend_config_format     : 4

Case 1> I specified the PV-domU config as below:

memory=3072

maxmem=6144

name="bs"

vif=['ip=169.254.254.1,script=vif-nat',]

disk=['file:/root/bs.img,xvda1,w']

kernel='/root/vmlinuz'

extra="iommu=soft console=hvc0"

ramdisk='/root/initrd.img'

root="/dev/xvda1 ro"

pci=['01:00.0','01:00.1']

and then started the domU. After that, I executed the command below:

# xl list

Name                                        ID   Mem VCPUs      State   Time(s)

Domain-0                                     0  1024     2     r-----      88.7

bs                                           2  3072     1     -b----       1.1

It seemed normal to me. But when I logged on to the bs domU and executed the command below:

# cat /proc/meminfo | head

MemTotal:        6158940 kB

MemFree:         2944776 kB

Buffers:            5108 kB

Cached:            32292 kB

SwapCached:            0 kB

Active:            21456 kB

Inactive:          22936 kB

Active(anon):       7000 kB

Inactive(anon):      108 kB

Active(file):      14456 kB

It indicated the total memory was 6G. Why?
When I went back to dom0, I executed the commands below:

# xl mem-set bs 6144

# xl list

Name                                        ID   Mem VCPUs      State   Time(s)

Domain-0                                     0  1024     2     r-----      93.5

bs                                           2  6144     1     -b----      10.5
It seemed normal to me. But when I logged on to the bs domU again and executed the command below:

# cat /proc/meminfo | head

MemTotal:        9304668 kB

MemFree:         6087540 kB

Buffers:            5168 kB

Cached:            32464 kB

SwapCached:            0 kB

Active:            22300 kB

Inactive:          22408 kB

Active(anon):       7080 kB

Inactive(anon):      108 kB

Active(file):      15220 kB



It indicated the total memory was 9G (6G + 3G). It was weird. Any idea about this?
Case 2> I specified the PV-domU config as below:

memory=6144

maxmem=6144

name="bs"

vif=['ip=169.254.254.1,script=vif-nat',]

disk=['file:/root/bs.img,xvda1,w']

kernel='/root/vmlinuz'

extra="iommu=soft console=hvc0"

ramdisk='/root/initrd.img'

root="/dev/xvda1 ro"

pci=['01:00.0','01:00.1']

and then started the domU. After that, I executed the command below:

# xl list

Name                                        ID   Mem VCPUs      State   Time(s)

Domain-0                                     0   648     2     r-----     120.5

bs                                           3  3360     1     -b----       7.0

The output was very confusing. Why had dom0's memory shrunk to 648M, and why was only 3360M assigned to the bs domU?

My own analysis:
I extracted the BIOS e820 memory map on the bs domU, as below:

[    0.000000] BIOS-provided physical RAM map:

[    0.000000]  Xen: 0000000000000000 - 00000000000a0000 (usable)

[    0.000000]  Xen: 00000000000a0000 - 0000000000100000 (reserved)

[    0.000000]  Xen: 0000000000100000 - 00000000bf699000 (usable)

[    0.000000]  Xen: 00000000bf699000 - 00000000bf6af000 (reserved)

[    0.000000]  Xen: 00000000bf6af000 - 00000000bf6ce000 (ACPI data)

[    0.000000]  Xen: 00000000bf6ce000 - 00000000c0000000 (reserved)

[    0.000000]  Xen: 00000000e0000000 - 00000000f0000000 (reserved)

[    0.000000]  Xen: 00000000fe000000 - 0000000100000000 (reserved)

[    0.000000]  Xen: 0000000180000000 - 00000001c33ec000 (usable)

I think the root cause might be related to the holes between c0000000 and e0000000, between f0000000 and fe000000, and between 100000000 and 180000000. From my tracing, I also think the e820_host option is involved.
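
One way to probe that theory is to total the "usable" e820 regions and compare with what xl reports. A minimal sketch (the ranges are hard-coded from the dump above; on the domU itself one could parse `dmesg` instead):

```python
# Sum the "(usable)" regions from the domU's e820 map quoted above.
# Each entry is a (start, end) pair in bytes, copied from the Xen dump.
usable = [
    (0x0000000000000000, 0x00000000000a0000),
    (0x0000000000100000, 0x00000000bf699000),
    (0x0000000180000000, 0x00000001c33ec000),
]

total_bytes = sum(end - start for start, end in usable)
total_mib = total_bytes / (1 << 20)

print(total_bytes)          # 4339159040
print(round(total_mib, 1))  # about 4138.1 MiB of usable RAM in the map
```

The usable total (~4138 MiB) matches neither the 6144 MiB requested nor the 3360 MiB that xl reports, which is consistent with the suspicion that the holes in the host map are being accounted for inconsistently.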

Thanks in advance
HUAXIANG FAN
Software Engineer II

WEBSENSE NETWORK SECURITY TECHNOLOGY R&D (BEIJING) CO. LTD.
ph: +8610.5884.4327
fax: +8610.5884.4727
www.websense.cn<http://www.websense.cn>

Websense TRITON(tm)
For Essential Information Protection(tm)
Web Security<http://www.websense.com/content/Regional/SCH/WebSecurityOverview.aspx> | Data Security<http://www.websense.com/content/Regional/SCH/DataSecurity.aspx> | Email Security<http://www.websense.com/content/Regional/SCH/MessagingSecurity.aspx>



 Protected by Websense Hosted Email Security -- www.websense.com

--_000_E71FC5D6F96C3C4B93FC8FF942D924C675ADD043SBJEXCH1Awebsen_--


--===============6773783635368204994==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6773783635368204994==--


From xen-devel-bounces@lists.xen.org Wed Aug 01 06:43:34 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 06:43:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwSdu-0005mr-GL; Wed, 01 Aug 2012 06:42:50 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SwSds-0005mm-JQ
	for xen-devel@lists.xensource.com; Wed, 01 Aug 2012 06:42:48 +0000
Received: from [85.158.143.99:52732] by server-3.bemta-4.messagelabs.com id
	5A/21-01511-7EFC8105; Wed, 01 Aug 2012 06:42:47 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-7.tower-216.messagelabs.com!1343803366!26322869!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc2Mjc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13975 invoked from network); 1 Aug 2012 06:42:47 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-7.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 06:42:47 -0000
X-IronPort-AV: E=Sophos;i="4.77,691,1336348800"; d="scan'208";a="13795248"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 06:42:46 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 1 Aug 2012 07:42:46 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1SwSdp-0002ch-VW;
	Wed, 01 Aug 2012 06:42:46 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1SwSdp-0004fm-R7;
	Wed, 01 Aug 2012 07:42:45 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13533-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 1 Aug 2012 07:42:45 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13533: tolerable FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13533 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13533/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13527
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13527
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13527
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13527

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  cf0e661cb321
baseline version:
 xen                  cf0e661cb321

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  cf0e661cb321
baseline version:
 xen                  cf0e661cb321

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 07:49:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 07:49:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwTg5-0006s3-QL; Wed, 01 Aug 2012 07:49:09 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SwTg4-0006rx-LQ
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 07:49:08 +0000
Received: from [85.158.139.83:22967] by server-9.bemta-5.messagelabs.com id
	1C/48-01069-37FD8105; Wed, 01 Aug 2012 07:49:07 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-4.tower-182.messagelabs.com!1343807347!26977002!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM3NTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27476 invoked from network); 1 Aug 2012 07:49:07 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-4.tower-182.messagelabs.com with SMTP;
	1 Aug 2012 07:49:07 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 01 Aug 2012 08:49:06 +0100
Message-Id: <5018FBB60200007800091CCB@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Wed, 01 Aug 2012 08:49:42 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>,
	"Ian Jackson" <Ian.Jackson@eu.citrix.com>
References: <patchbomb.1343749916@andrewcoop.uk.xensource.com>
	<ae32690d0d740d3aba01.1343749920@andrewcoop.uk.xensource.com>
	<20504.3594.340410.435942@mariner.uk.xensource.com>
In-Reply-To: <20504.3594.340410.435942@mariner.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "Keir \(Xen.org\)" <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 4 of 5] xen/makefile: Allow XEN_CHANGESET to
 be set externally
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 31.07.12 at 18:55, Ian Jackson <Ian.Jackson@eu.citrix.com> wrote:
> Andrew Cooper writes ("[PATCH 4 of 5] xen/makefile: Allow XEN_CHANGESET to be 
> set externally"):
> ..
>> diff -r 6db5c184a777 -r ae32690d0d74 xen/Makefile
>> --- a/xen/Makefile
>> +++ b/xen/Makefile
>> @@ -13,6 +13,7 @@ export BASEDIR := $(CURDIR)
>>  export XEN_ROOT := $(BASEDIR)/..
>>  
>>  EFI_MOUNTPOINT ?= /boot/efi
>> +XEN_CHANGESET  ?= $(shell hg root &> /dev/null && hg parents --template 
> "{date|date} {rev}:{node|short}" || echo "unavailable" )
>>  
>>  .PHONY: default
>>  default: build
>> @@ -107,7 +108,7 @@ include/xen/compile.h: include/xen/compi
>>  	    -e 's/@@version@@/$(XEN_VERSION)/g' \
>>  	    -e 's/@@subversion@@/$(XEN_SUBVERSION)/g' \
>>  	    -e 's/@@extraversion@@/$(XEN_EXTRAVERSION)/g' \
>> -	    -e 's!@@changeset@@!$(shell ((hg parents --template "{date|date} 
> {rev}:{node|short}" >/dev/null && hg parents --template "{date|date} 
> {rev}:{node|short}") || echo "unavailable") 2>/dev/null)!g' \
>> +	    -e 's!@@changeset@@!$(XEN_CHANGESET)!g' \
>>  	    < include/xen/compile.h.in > $@.new
>>  	@grep \" .banner >> $@.new
>>  	@grep -v \" .banner
> 
> We need to check how many times, and at which point, this gets
> executed, when this patch is applied.  I think it's OK...

If in doubt, perhaps the better option would be to execute this
once unconditionally (via := assignment to a helper variable), or
to use := here but wrap the assignment in an if construct so that
it is skipped when the changeset was already specified (assuming
that an empty specification is pointless, simply checking for it
to be non-empty should suffice).
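For illustration, Jan's second option could be sketched roughly as below (a hypothetical fragment, not the actual patch; it reuses the hg invocation from Andrew's hunk, but spelled portably, since the patch's `&> /dev/null` is a bash-ism that make's default /bin/sh does not understand):

```makefile
# Hypothetical sketch: run the hg query at most once, at parse time,
# and only when the caller did not already supply XEN_CHANGESET.
ifeq ($(XEN_CHANGESET),)
XEN_CHANGESET := $(shell hg root >/dev/null 2>&1 && \
	hg parents --template "{date|date} {rev}:{node|short}" || \
	echo "unavailable")
endif
```

Unlike `?=` (which for a recursively expanded variable would re-run the shell command on every expansion), the `:=` inside the ifeq guard evaluates the command exactly once while still allowing `make XEN_CHANGESET=...` to override it.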

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 07:54:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 07:54:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwTkB-0006zj-Ij; Wed, 01 Aug 2012 07:53:23 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <t.wagner@inode.at>) id 1SwTgU-0006sz-8M
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 07:49:34 +0000
Received: from [85.158.139.83:24818] by server-4.bemta-5.messagelabs.com id
	96/A5-27831-D8FD8105; Wed, 01 Aug 2012 07:49:33 +0000
X-Env-Sender: t.wagner@inode.at
X-Msg-Ref: server-10.tower-182.messagelabs.com!1343807371!29959999!1
X-Originating-IP: [213.47.214.141]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9326 invoked from network); 1 Aug 2012 07:49:32 -0000
Received: from webmail.inode.at (HELO webmail.inode.at) (213.47.214.141)
	by server-10.tower-182.messagelabs.com with SMTP;
	1 Aug 2012 07:49:32 -0000
Received: from [127.0.0.1] (helo=inode.at) by webmail with smtp (Exim 4.67)
	(envelope-from <t.wagner@inode.at>) id 1SwTgR-0003VH-Je
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 09:49:31 +0200
Received: from 85.126.176.235
	(SquirrelMail authenticated user t.wagner@inode.at)
	by webmail.inode.at with HTTP; Wed, 1 Aug 2012 09:49:31 +0200 (CEST)
Message-ID: <85.126.176.235.1343807371.wm@webmail.inode.at>
Date: Wed, 1 Aug 2012 09:49:31 +0200 (CEST)
From: "Dipl.-Ing. Thomas Wagner" <t.wagner@inode.at>
To: <xen-devel@lists.xen.org>
X-Priority: 3
Importance: Normal
X-Mailer: SquirrelMail (version 1.2.8)
MIME-Version: 1.0
X-Mailman-Approved-At: Wed, 01 Aug 2012 07:53:22 +0000
Subject: [Xen-devel] XEN 4.1.2 and kernel 3.4.7
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello


Once or twice a day I get a kernel trace in dom0 like this.

Aug  1 05:52:08 zeus kernel: INFO: rcu_bh self-detected stall on CPU { 3} (t=0 jiffies)
Aug  1 05:52:08 zeus kernel: Pid: 0, comm: swapper/3 Not tainted
3.4.7-4-xen #1
Aug  1 05:52:08 zeus kernel: Call Trace:
Aug  1 05:52:08 zeus kernel: <IRQ>  [<ffffffff81096841>] ?
__rcu_pending+0x1a1/0x4b0
Aug  1 05:52:08 zeus kernel: [<ffffffff8106f150>] ?
tick_init_highres+0x10/0x10
Aug  1 05:52:08 zeus kernel: [<ffffffff8109771b>] ?
rcu_check_callbacks+0xbb/0xd0
Aug  1 05:52:08 zeus kernel: [<ffffffff81042c9f>] ?
update_process_times+0x3f/0x80
Aug  1 05:52:08 zeus kernel: [<ffffffff8106f1a4>] ?
tick_sched_timer+0x54/0x130
Aug  1 05:52:08 zeus kernel: [<ffffffff81055469>] ?
__run_hrtimer.isra.34+0x59/0xf0
Aug  1 05:52:08 zeus kernel: [<ffffffff81055aac>] ?
hrtimer_interrupt+0xec/0x230
Aug  1 05:52:08 zeus kernel: [<ffffffff81006caa>] ?
xen_timer_interrupt+0x2a/0x1a0
Aug  1 05:52:08 zeus kernel: [<ffffffff810422b3>] ? cascade+0x73/0x90
Aug  1 05:52:08 zeus kernel: [<ffffffff81090fea>] ?
handle_irq_event_percpu+0x3a/0x150
Aug  1 05:52:08 zeus kernel: [<ffffffff8109403f>] ?
handle_percpu_irq+0x3f/0x70
Aug  1 05:52:08 zeus kernel: [<ffffffff8103d23d>] ? __do_softirq+0xbd/0x130
Aug  1 05:52:08 zeus kernel: [<ffffffff811f35f4>] ?
__xen_evtchn_do_upcall+0x194/0x240
Aug  1 05:52:08 zeus kernel: [<ffffffff811f55f2>] ?
xen_evtchn_do_upcall+0x22/0x40
Aug  1 05:52:08 zeus kernel: [<ffffffff813244ee>] ?
xen_do_hypervisor_callback+0x1e/0x30
Aug  1 05:52:08 zeus kernel: <EOI>  [<ffffffff810013aa>] ?
hypercall_page+0x3aa/0x1000
Aug  1 05:52:08 zeus kernel: [<ffffffff810013aa>] ?
hypercall_page+0x3aa/0x1000
Aug  1 05:52:08 zeus kernel: [<ffffffff81006afc>] ? xen_safe_halt+0xc/0x20
Aug  1 05:52:08 zeus kernel: [<ffffffff81012453>] ? default_idle+0x23/0x40
Aug  1 05:52:08 zeus kernel: [<ffffffff81012e86>] ? cpu_idle+0xa6/0xc0


I am using an HP DL585 G2 running openSUSE 12.1.

zeus:~ # xm info
host                   : zeus
release                : 3.4.7-4-xen
version                : #1 SMP Mon Jul 30 15:31:08 CEST 2012
machine                : x86_64
nr_cpus                : 8
nr_nodes               : 4
cores_per_socket       : 2
threads_per_core       : 1
cpu_mhz                : 2612
hw_caps                :
178bf3ff:ebc3fbff:00000000:00000010:00002001:00000000:0000001f:00000000
virt_caps              : hvm
total_memory           : 24572
free_memory            : 14810
free_cpus              : 0
max_free_memory        : 14810
max_para_memory        : 14806
max_hvm_memory         : 14763
xen_major              : 4
xen_minor              : 1
xen_extra              : .2_17-199.1
xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32
hvm-3.0-x86_32p hvm-3.0-x86_64
xen_scheduler          : credit
xen_pagesize           : 4096
platform_params        : virt_start=0xffff800000000000
xen_changeset          : 23174
xen_commandline        : dom0_mem=max:768M
cc_compiler            : gcc version 4.6.2 (SUSE Linux)
cc_compile_by          : abuild
cc_compile_domain      :
cc_compile_date        : Tue Jul 10 14:09:23 UTC 2012
xend_config_format     : 4


zeus:~ # xm dmesg
 __  __            _  _    _   ____      _ _____  _  ___   ___   _
 \ \/ /___ _ __   | || |  / | |___ \    / |___  |/ |/ _ \ / _ \ / |
  \  // _ \ '_ \  | || |_ | |   __) |   | |  / /_| | (_) | (_) || |
  /  \  __/ | | | |__   _|| |_ / __/    | | / /__| |\__, |\__, || |
 /_/\_\___|_| |_|    |_|(_)_(_)_____|___|_|/_/   |_|  /_/   /_(_)_|
                                   |_____|
(XEN) Xen version 4.1.2_17-199.1 (abuild@) (gcc version 4.6.2 (SUSE Linux)
) Tue Jul 10 14:09:23 UTC 2012
(XEN) Latest ChangeSet: 23174
(XEN) Bootloader: GNU GRUB 0.97
(XEN) Command line: dom0_mem=max:768M
(XEN) Video information:
(XEN)  VGA is text mode 80x25, font 8x16
(XEN)  VBE/DDC methods: V2; EDID transfer time: 2 seconds
(XEN) Disc information:
(XEN)  Found 1 MBR signatures
(XEN)  Found 1 EDD information structures
(XEN) Xen-e820 RAM map:
(XEN)  0000000000000000 - 000000000009f400 (usable)
(XEN)  000000000009f400 - 00000000000a0000 (reserved)
(XEN)  00000000000f0000 - 0000000000100000 (reserved)
(XEN)  0000000000100000 - 000000007fd4e000 (usable)
(XEN)  000000007fd4e000 - 000000007fd56000 (ACPI data)
(XEN)  000000007fd56000 - 000000007fd57000 (usable)
(XEN)  000000007fd57000 - 0000000080000000 (reserved)
(XEN)  00000000e0000000 - 00000000f0000000 (reserved)
(XEN)  00000000fec00000 - 00000000fed00000 (reserved)
(XEN)  00000000fee00000 - 00000000fee10000 (reserved)
(XEN)  00000000ffc00000 - 0000000100000000 (reserved)
(XEN)  0000000100000000 - 000000067ffff000 (usable)
(XEN) ACPI: RSDP 000F4F00, 0024 (r2 HP    )
(XEN) ACPI: XSDT 7FD4ED80, 0074 (r1 HP     ProLiant        2   Ò     162E)
(XEN) ACPI: FACP 7FD4EE00, 00F4 (r3 HP     A07             2   Ò     162E)
(XEN) ACPI: DSDT 7FD4EF00, 4A6D (r1 HP         DSDT        1 INTL 20030228)
(XEN) ACPI: FACS 7FD4E100, 0040
(XEN) ACPI: SPCR 7FD4E140, 0050 (r1 HP     SPCRRBSU        1   Ò     162E)
(XEN) ACPI: HPET 7FD4E1C0, 0038 (r1 HP     ProLiant        2   Ò     162E)
(XEN) ACPI: SPMI 7FD4E200, 0040 (r5 HP     ProLiant        1   Ò     162E)
(XEN) ACPI: ERST 7FD4E240, 01D0 (r1 HP     ProLiant        1   Ò     162E)
(XEN) ACPI: APIC 7FD4E440, 00C6 (r1 HP     ProLiant        2             0)
(XEN) ACPI: SRAT 7FD4E600, 01A0 (r1 AMD    FAM_F_10        2 AMD         1)
(XEN) ACPI: FFFF 7FD4EA00, 0176 (r1 HP     ProLiant        1   Ò     162E)
(XEN) ACPI: BERT 7FD4EB80, 0030 (r1 HP     ProLiant        1   Ò     162E)
(XEN) ACPI: HEST 7FD4EBC0, 0170 (r1 HP     ProLiant        1   Ò     162E)
(XEN) System RAM: 24572MB (25162676kB)
(XEN) Domain heap initialised DMA width 31 bits
(XEN) Processor #0 15:1 APIC version 16
(XEN) Processor #2 15:1 APIC version 16
(XEN) Processor #4 15:1 APIC version 16
(XEN) Processor #6 15:1 APIC version 16
(XEN) Processor #1 15:1 APIC version 16
(XEN) Processor #3 15:1 APIC version 16
(XEN) Processor #5 15:1 APIC version 16
(XEN) Processor #7 15:1 APIC version 16
(XEN) IOAPIC[0]: apic_id 8, version 17, address 0xd9af0000, GSI 0-23
(XEN) IOAPIC[1]: apic_id 9, version 17, address 0xd9fd0000, GSI 24-30
(XEN) IOAPIC[2]: apic_id 10, version 17, address 0xd9fe0000, GSI 31-37
(XEN) IOAPIC[3]: apic_id 11, version 17, address 0xd9ff0000, GSI 38-61
(XEN) Enabling APIC mode:  Flat.  Using 4 I/O APICs
(XEN) ERST table is invalid
(XEN) Using scheduler: SMP Credit Scheduler (credit)
(XEN) Detected 2612.090 MHz processor.
(XEN) Initing memory sharing.
(XEN) AMD-Vi: IOMMU not found!
(XEN) I/O virtualisation disabled
(XEN) ENABLING IO-APIC IRQs
(XEN)  -> Using new ACK method
(XEN) Platform timer is 25.000MHz HPET
(XEN) Allocated console ring of 16 KiB.
(XEN) HVM: ASIDs disabled.
(XEN) SVM: Supported advanced features:
(XEN)  - none
(XEN) HVM: SVM enabled
(XEN) AMD: Disabling C1 Clock Ramping Node #0
(XEN) AMD: Disabling C1 Clock Ramping Node #1
(XEN) AMD: Disabling C1 Clock Ramping Node #2
(XEN) AMD: Disabling C1 Clock Ramping Node #3
(XEN) Brought up 8 CPUs
(XEN) *** LOADING DOMAIN 0 ***
(XEN)  Xen  kernel: 64-bit, lsb, compat32
(XEN)  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x1a01000
(XEN) PHYSICAL MEMORY ARRANGEMENT:
(XEN)  Dom0 alloc.:   000000043c000000->000000043e000000 (182523 pages to
be allocated)
(XEN)  Init. ramdisk: 000000067e6fb000->000000067fdffa00
(XEN) VIRTUAL MEMORY ARRANGEMENT:
(XEN)  Loaded kernel: ffffffff81000000->ffffffff81a01000
(XEN)  Init. ramdisk: ffffffff81a01000->ffffffff83105a00
(XEN)  Phys-Mach map: ffffffff83106000->ffffffff83286000
(XEN)  Start info:    ffffffff83286000->ffffffff832864b4
(XEN)  Page tables:   ffffffff83287000->ffffffff832a4000
(XEN)  Boot stack:    ffffffff832a4000->ffffffff832a5000
(XEN)  TOTAL:         ffffffff80000000->ffffffff83400000
(XEN)  ENTRY ADDRESS: ffffffff81496200
(XEN) Dom0 has maximum 8 VCPUs
(XEN) Scrubbing Free RAM:
................................................................................................................................................................................................................................done.
(XEN) Std. Loglevel: Errors and warnings
(XEN) Guest Loglevel: Nothing (Rate-limited: Errors and warnings)
(XEN) Xen is relinquishing VGA console.
(XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input
to Xen)
(XEN) Freed 232kB init memory.
(XEN) traps.c:2527:d0 Domain attempted WRMSR 00000000c0010004 from
0x0000d6be5955d450 to 0x000000000000abcd.
(XEN) physdev.c:159: dom0: wrong map_pirq type 3
(XEN) traps.c:2527:d1 Domain attempted WRMSR 00000000c0010004 from
0x0000d6be5955d450 to 0x000000000000abcd.
(XEN) traps.c:2527:d2 Domain attempted WRMSR 00000000c0010004 from
0x0000a5bc3314d6f8 to 0x000000000000abcd.
(XEN) traps.c:2527:d3 Domain attempted WRMSR 00000000c0010004 from
0x000092bdbb01c033 to 0x000000000000abcd.
(XEN) traps.c:2527:d4 Domain attempted WRMSR 00000000c0010004 from
0x000095bff892b9bd to 0x000000000000abcd.



There are some Linux guests (running kernel 3.4.6) and Windows systems. I
don't see any problems in the guest logs at those moments.


Does anybody else have these problems too?



regards
Thomas




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 07:54:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 07:54:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwTkB-0006zj-Ij; Wed, 01 Aug 2012 07:53:23 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <t.wagner@inode.at>) id 1SwTgU-0006sz-8M
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 07:49:34 +0000
Received: from [85.158.139.83:24818] by server-4.bemta-5.messagelabs.com id
	96/A5-27831-D8FD8105; Wed, 01 Aug 2012 07:49:33 +0000
X-Env-Sender: t.wagner@inode.at
X-Msg-Ref: server-10.tower-182.messagelabs.com!1343807371!29959999!1
X-Originating-IP: [213.47.214.141]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9326 invoked from network); 1 Aug 2012 07:49:32 -0000
Received: from webmail.inode.at (HELO webmail.inode.at) (213.47.214.141)
	by server-10.tower-182.messagelabs.com with SMTP;
	1 Aug 2012 07:49:32 -0000
Received: from [127.0.0.1] (helo=inode.at) by webmail with smtp (Exim 4.67)
	(envelope-from <t.wagner@inode.at>) id 1SwTgR-0003VH-Je
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 09:49:31 +0200
Received: from 85.126.176.235
	(SquirrelMail authenticated user t.wagner@inode.at)
	by webmail.inode.at with HTTP; Wed, 1 Aug 2012 09:49:31 +0200 (CEST)
Message-ID: <85.126.176.235.1343807371.wm@webmail.inode.at>
Date: Wed, 1 Aug 2012 09:49:31 +0200 (CEST)
From: "Dipl.-Ing. Thomas Wagner" <t.wagner@inode.at>
To: <xen-devel@lists.xen.org>
X-Priority: 3
Importance: Normal
X-Mailer: SquirrelMail (version 1.2.8)
MIME-Version: 1.0
X-Mailman-Approved-At: Wed, 01 Aug 2012 07:53:22 +0000
Subject: [Xen-devel] XEN 4.1.2 and kernel 3.4.7
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello


Once or twice a day I get a kernel trace in dom0 like this.

Aug  1 05:52:08 zeus kernel: INFO: rcu_bh self-detected stall on CPU { 3} =

(t=3D0 jiffies)
Aug  1 05:52:08 zeus kernel: Pid: 0, comm: swapper/3 Not tainted
3.4.7-4-xen #1
Aug  1 05:52:08 zeus kernel: Call Trace:
Aug  1 05:52:08 zeus kernel: <IRQ>  [<ffffffff81096841>] ?
__rcu_pending+0x1a1/0x4b0
Aug  1 05:52:08 zeus kernel: [<ffffffff8106f150>] ?
tick_init_highres+0x10/0x10
Aug  1 05:52:08 zeus kernel: [<ffffffff8109771b>] ?
rcu_check_callbacks+0xbb/0xd0
Aug  1 05:52:08 zeus kernel: [<ffffffff81042c9f>] ?
update_process_times+0x3f/0x80
Aug  1 05:52:08 zeus kernel: [<ffffffff8106f1a4>] ?
tick_sched_timer+0x54/0x130
Aug  1 05:52:08 zeus kernel: [<ffffffff81055469>] ?
__run_hrtimer.isra.34+0x59/0xf0
Aug  1 05:52:08 zeus kernel: [<ffffffff81055aac>] ?
hrtimer_interrupt+0xec/0x230
Aug  1 05:52:08 zeus kernel: [<ffffffff81006caa>] ?
xen_timer_interrupt+0x2a/0x1a0
Aug  1 05:52:08 zeus kernel: [<ffffffff810422b3>] ? cascade+0x73/0x90
Aug  1 05:52:08 zeus kernel: [<ffffffff81090fea>] ?
handle_irq_event_percpu+0x3a/0x150
Aug  1 05:52:08 zeus kernel: [<ffffffff8109403f>] ?
handle_percpu_irq+0x3f/0x70
Aug  1 05:52:08 zeus kernel: [<ffffffff8103d23d>] ? __do_softirq+0xbd/0x130
Aug  1 05:52:08 zeus kernel: [<ffffffff811f35f4>] ?
__xen_evtchn_do_upcall+0x194/0x240
Aug  1 05:52:08 zeus kernel: [<ffffffff811f55f2>] ?
xen_evtchn_do_upcall+0x22/0x40
Aug  1 05:52:08 zeus kernel: [<ffffffff813244ee>] ?
xen_do_hypervisor_callback+0x1e/0x30
Aug  1 05:52:08 zeus kernel: <EOI>  [<ffffffff810013aa>] ?
hypercall_page+0x3aa/0x1000
Aug  1 05:52:08 zeus kernel: [<ffffffff810013aa>] ?
hypercall_page+0x3aa/0x1000
Aug  1 05:52:08 zeus kernel: [<ffffffff81006afc>] ? xen_safe_halt+0xc/0x20
Aug  1 05:52:08 zeus kernel: [<ffffffff81012453>] ? default_idle+0x23/0x40
Aug  1 05:52:08 zeus kernel: [<ffffffff81012e86>] ? cpu_idle+0xa6/0xc0


Im am using a HP DL585 G2 running openSuse 12.1.

zeus:~ # xm info
host                   : zeus
release                : 3.4.7-4-xen
version                : #1 SMP Mon Jul 30 15:31:08 CEST 2012
machine                : x86_64
nr_cpus                : 8
nr_nodes               : 4
cores_per_socket       : 2
threads_per_core       : 1
cpu_mhz                : 2612
hw_caps                :
178bf3ff:ebc3fbff:00000000:00000010:00002001:00000000:0000001f:00000000
virt_caps              : hvm
total_memory           : 24572
free_memory            : 14810
free_cpus              : 0
max_free_memory        : 14810
max_para_memory        : 14806
max_hvm_memory         : 14763
xen_major              : 4
xen_minor              : 1
xen_extra              : .2_17-199.1
xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32
hvm-3.0-x86_32p hvm-3.0-x86_64
xen_scheduler          : credit
xen_pagesize           : 4096
platform_params        : virt_start=0xffff800000000000
xen_changeset          : 23174
xen_commandline        : dom0_mem=max:768M
cc_compiler            : gcc version 4.6.2 (SUSE Linux)
cc_compile_by          : abuild
cc_compile_domain      :
cc_compile_date        : Tue Jul 10 14:09:23 UTC 2012
xend_config_format     : 4


zeus:~ # xm dmesg
 __  __            _  _    _   ____      _ _____  _  ___   ___   _
 \ \/ /___ _ __   | || |  / | |___ \    / |___  |/ |/ _ \ / _ \ / |
  \  // _ \ '_ \  | || |_ | |   __) |   | |  / /_| | (_) | (_) || |
  /  \  __/ | | | |__   _|| |_ / __/    | | / /__| |\__, |\__, || |
 /_/\_\___|_| |_|    |_|(_)_(_)_____|___|_|/_/   |_|  /_/   /_(_)_|
                                   |_____|
(XEN) Xen version 4.1.2_17-199.1 (abuild@) (gcc version 4.6.2 (SUSE Linux)
) Tue Jul 10 14:09:23 UTC 2012
(XEN) Latest ChangeSet: 23174
(XEN) Bootloader: GNU GRUB 0.97
(XEN) Command line: dom0_mem=max:768M
(XEN) Video information:
(XEN)  VGA is text mode 80x25, font 8x16
(XEN)  VBE/DDC methods: V2; EDID transfer time: 2 seconds
(XEN) Disc information:
(XEN)  Found 1 MBR signatures
(XEN)  Found 1 EDD information structures
(XEN) Xen-e820 RAM map:
(XEN)  0000000000000000 - 000000000009f400 (usable)
(XEN)  000000000009f400 - 00000000000a0000 (reserved)
(XEN)  00000000000f0000 - 0000000000100000 (reserved)
(XEN)  0000000000100000 - 000000007fd4e000 (usable)
(XEN)  000000007fd4e000 - 000000007fd56000 (ACPI data)
(XEN)  000000007fd56000 - 000000007fd57000 (usable)
(XEN)  000000007fd57000 - 0000000080000000 (reserved)
(XEN)  00000000e0000000 - 00000000f0000000 (reserved)
(XEN)  00000000fec00000 - 00000000fed00000 (reserved)
(XEN)  00000000fee00000 - 00000000fee10000 (reserved)
(XEN)  00000000ffc00000 - 0000000100000000 (reserved)
(XEN)  0000000100000000 - 000000067ffff000 (usable)
(XEN) ACPI: RSDP 000F4F00, 0024 (r2 HP    )
(XEN) ACPI: XSDT 7FD4ED80, 0074 (r1 HP     ProLiant        2   =D2     162E)
(XEN) ACPI: FACP 7FD4EE00, 00F4 (r3 HP     A07             2   =D2     162E)
(XEN) ACPI: DSDT 7FD4EF00, 4A6D (r1 HP         DSDT        1 INTL 20030228)
(XEN) ACPI: FACS 7FD4E100, 0040
(XEN) ACPI: SPCR 7FD4E140, 0050 (r1 HP     SPCRRBSU        1   =D2     162E)
(XEN) ACPI: HPET 7FD4E1C0, 0038 (r1 HP     ProLiant        2   =D2     162E)
(XEN) ACPI: SPMI 7FD4E200, 0040 (r5 HP     ProLiant        1   =D2     162E)
(XEN) ACPI: ERST 7FD4E240, 01D0 (r1 HP     ProLiant        1   =D2     162E)
(XEN) ACPI: APIC 7FD4E440, 00C6 (r1 HP     ProLiant        2             0)
(XEN) ACPI: SRAT 7FD4E600, 01A0 (r1 AMD    FAM_F_10        2 AMD         1)
(XEN) ACPI: FFFF 7FD4EA00, 0176 (r1 HP     ProLiant        1   =D2     162E)
(XEN) ACPI: BERT 7FD4EB80, 0030 (r1 HP     ProLiant        1   =D2     162E)
(XEN) ACPI: HEST 7FD4EBC0, 0170 (r1 HP     ProLiant        1   =D2     162E)
(XEN) System RAM: 24572MB (25162676kB)
(XEN) Domain heap initialised DMA width 31 bits
(XEN) Processor #0 15:1 APIC version 16
(XEN) Processor #2 15:1 APIC version 16
(XEN) Processor #4 15:1 APIC version 16
(XEN) Processor #6 15:1 APIC version 16
(XEN) Processor #1 15:1 APIC version 16
(XEN) Processor #3 15:1 APIC version 16
(XEN) Processor #5 15:1 APIC version 16
(XEN) Processor #7 15:1 APIC version 16
(XEN) IOAPIC[0]: apic_id 8, version 17, address 0xd9af0000, GSI 0-23
(XEN) IOAPIC[1]: apic_id 9, version 17, address 0xd9fd0000, GSI 24-30
(XEN) IOAPIC[2]: apic_id 10, version 17, address 0xd9fe0000, GSI 31-37
(XEN) IOAPIC[3]: apic_id 11, version 17, address 0xd9ff0000, GSI 38-61
(XEN) Enabling APIC mode:  Flat.  Using 4 I/O APICs
(XEN) ERST table is invalid
(XEN) Using scheduler: SMP Credit Scheduler (credit)
(XEN) Detected 2612.090 MHz processor.
(XEN) Initing memory sharing.
(XEN) AMD-Vi: IOMMU not found!
(XEN) I/O virtualisation disabled
(XEN) ENABLING IO-APIC IRQs
(XEN)  -> Using new ACK method
(XEN) Platform timer is 25.000MHz HPET
(XEN) Allocated console ring of 16 KiB.
(XEN) HVM: ASIDs disabled.
(XEN) SVM: Supported advanced features:
(XEN)  - none
(XEN) HVM: SVM enabled
(XEN) AMD: Disabling C1 Clock Ramping Node #0
(XEN) AMD: Disabling C1 Clock Ramping Node #1
(XEN) AMD: Disabling C1 Clock Ramping Node #2
(XEN) AMD: Disabling C1 Clock Ramping Node #3
(XEN) Brought up 8 CPUs
(XEN) *** LOADING DOMAIN 0 ***
(XEN)  Xen  kernel: 64-bit, lsb, compat32
(XEN)  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x1a01000
(XEN) PHYSICAL MEMORY ARRANGEMENT:
(XEN)  Dom0 alloc.:   000000043c000000->000000043e000000 (182523 pages to
be allocated)
(XEN)  Init. ramdisk: 000000067e6fb000->000000067fdffa00
(XEN) VIRTUAL MEMORY ARRANGEMENT:
(XEN)  Loaded kernel: ffffffff81000000->ffffffff81a01000
(XEN)  Init. ramdisk: ffffffff81a01000->ffffffff83105a00
(XEN)  Phys-Mach map: ffffffff83106000->ffffffff83286000
(XEN)  Start info:    ffffffff83286000->ffffffff832864b4
(XEN)  Page tables:   ffffffff83287000->ffffffff832a4000
(XEN)  Boot stack:    ffffffff832a4000->ffffffff832a5000
(XEN)  TOTAL:         ffffffff80000000->ffffffff83400000
(XEN)  ENTRY ADDRESS: ffffffff81496200
(XEN) Dom0 has maximum 8 VCPUs
(XEN) Scrubbing Free RAM:
...........................................................................
...........................................................................
...........................................................................
.........done.
(XEN) Std. Loglevel: Errors and warnings
(XEN) Guest Loglevel: Nothing (Rate-limited: Errors and warnings)
(XEN) Xen is relinquishing VGA console.
(XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input
to Xen)
(XEN) Freed 232kB init memory.
(XEN) traps.c:2527:d0 Domain attempted WRMSR 00000000c0010004 from
0x0000d6be5955d450 to 0x000000000000abcd.
(XEN) physdev.c:159: dom0: wrong map_pirq type 3
(XEN) traps.c:2527:d1 Domain attempted WRMSR 00000000c0010004 from
0x0000d6be5955d450 to 0x000000000000abcd.
(XEN) traps.c:2527:d2 Domain attempted WRMSR 00000000c0010004 from
0x0000a5bc3314d6f8 to 0x000000000000abcd.
(XEN) traps.c:2527:d3 Domain attempted WRMSR 00000000c0010004 from
0x000092bdbb01c033 to 0x000000000000abcd.
(XEN) traps.c:2527:d4 Domain attempted WRMSR 00000000c0010004 from
0x000095bff892b9bd to 0x000000000000abcd.



There are some Linux guests (running kernel 3.4.6) and Windows systems. I
don't see any problems in the guest logs at these moments.


Is anybody else seeing these problems too?



regards
Thomas




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 08:05:47 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 08:05:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwTvR-0007hr-GM; Wed, 01 Aug 2012 08:05:01 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SwTvP-0007hm-HQ
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 08:04:59 +0000
Received: from [85.158.139.83:22551] by server-10.bemta-5.messagelabs.com id
	C3/5E-02190-A23E8105; Wed, 01 Aug 2012 08:04:58 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-13.tower-182.messagelabs.com!1343808298!29118965!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM3NTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13504 invoked from network); 1 Aug 2012 08:04:58 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-13.tower-182.messagelabs.com with SMTP;
	1 Aug 2012 08:04:58 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 01 Aug 2012 09:04:57 +0100
Message-Id: <5018FF720200007800091CE3@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Wed, 01 Aug 2012 09:05:38 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Dipl.-Ing. Thomas Wagner" <t.wagner@inode.at>
References: <85.126.176.235.1343807371.wm@webmail.inode.at>
In-Reply-To: <85.126.176.235.1343807371.wm@webmail.inode.at>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] XEN 4.1.2 and kernel 3.4.7
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 01.08.12 at 09:49, "Dipl.-Ing. Thomas Wagner" <t.wagner@inode.at> wrote:
> Aug  1 05:52:08 zeus kernel: INFO: rcu_bh self-detected stall on CPU { 3} 
> (t=0 jiffies)
> Aug  1 05:52:08 zeus kernel: Pid: 0, comm: swapper/3 Not tainted
> 3.4.7-4-xen #1
> Aug  1 05:52:08 zeus kernel: Call Trace:
> Aug  1 05:52:08 zeus kernel: <IRQ>  [<ffffffff81096841>] ?
> __rcu_pending+0x1a1/0x4b0
> Aug  1 05:52:08 zeus kernel: [<ffffffff8106f150>] ?
> tick_init_highres+0x10/0x10
> Aug  1 05:52:08 zeus kernel: [<ffffffff8109771b>] ?
> rcu_check_callbacks+0xbb/0xd0
> Aug  1 05:52:08 zeus kernel: [<ffffffff81042c9f>] ?
> update_process_times+0x3f/0x80
> Aug  1 05:52:08 zeus kernel: [<ffffffff8106f1a4>] ?
> tick_sched_timer+0x54/0x130
> Aug  1 05:52:08 zeus kernel: [<ffffffff81055469>] ?
> __run_hrtimer.isra.34+0x59/0xf0
> Aug  1 05:52:08 zeus kernel: [<ffffffff81055aac>] ?
> hrtimer_interrupt+0xec/0x230
> Aug  1 05:52:08 zeus kernel: [<ffffffff81006caa>] ?
> xen_timer_interrupt+0x2a/0x1a0
> Aug  1 05:52:08 zeus kernel: [<ffffffff810422b3>] ? cascade+0x73/0x90
> Aug  1 05:52:08 zeus kernel: [<ffffffff81090fea>] ?
> handle_irq_event_percpu+0x3a/0x150
> Aug  1 05:52:08 zeus kernel: [<ffffffff8109403f>] ?
> handle_percpu_irq+0x3f/0x70
> Aug  1 05:52:08 zeus kernel: [<ffffffff8103d23d>] ? __do_softirq+0xbd/0x130
> Aug  1 05:52:08 zeus kernel: [<ffffffff811f35f4>] ?
> __xen_evtchn_do_upcall+0x194/0x240
> Aug  1 05:52:08 zeus kernel: [<ffffffff811f55f2>] ?
> xen_evtchn_do_upcall+0x22/0x40
> Aug  1 05:52:08 zeus kernel: [<ffffffff813244ee>] ?
> xen_do_hypervisor_callback+0x1e/0x30
> Aug  1 05:52:08 zeus kernel: <EOI>  [<ffffffff810013aa>] ?
> hypercall_page+0x3aa/0x1000
> Aug  1 05:52:08 zeus kernel: [<ffffffff810013aa>] ?
> hypercall_page+0x3aa/0x1000
> Aug  1 05:52:08 zeus kernel: [<ffffffff81006afc>] ? xen_safe_halt+0xc/0x20
> Aug  1 05:52:08 zeus kernel: [<ffffffff81012453>] ? default_idle+0x23/0x40
> Aug  1 05:52:08 zeus kernel: [<ffffffff81012e86>] ? cpu_idle+0xa6/0xc0
> 
> 
> Im am using a HP DL585 G2 running openSuse 12.1.

Yet the above trace appears to be from a pv-ops kernel, which
12.1 doesn't include (nor does it ship a 3.4-based kernel in the
first place). Please provide complete, consistent information.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 08:11:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 08:11:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwU1E-0007ru-9L; Wed, 01 Aug 2012 08:11:00 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <fantonifabio@tiscali.it>) id 1SwU1C-0007ro-Rj
	for xen-devel@lists.xensource.com; Wed, 01 Aug 2012 08:10:59 +0000
Received: from [85.158.139.83:4927] by server-2.bemta-5.messagelabs.com id
	D1/98-04598-194E8105; Wed, 01 Aug 2012 08:10:57 +0000
X-Env-Sender: fantonifabio@tiscali.it
X-Msg-Ref: server-4.tower-182.messagelabs.com!1343808655!26981424!1
X-Originating-IP: [94.23.245.208]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11344 invoked from network); 1 Aug 2012 08:10:55 -0000
Received: from lnx3.fantu.it (HELO lnx3.fantu.it) (94.23.245.208)
	by server-4.tower-182.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 1 Aug 2012 08:10:55 -0000
Received: from localhost (localhost.localdomain [127.0.0.1])
	by lnx3.fantu.it (Postfix) with ESMTP id 034FF402208
	for <xen-devel@lists.xensource.com>;
	Wed,  1 Aug 2012 10:10:55 +0200 (CEST)
X-Virus-Scanned: Debian amavisd-new at lnx3.fantu.it
Received: from lnx3.fantu.it ([127.0.0.1])
	by localhost (lnx3.fantu.it [127.0.0.1]) (amavisd-new, port 10024)
	with ESMTP id zFjatR9ZoyLe for <xen-devel@lists.xensource.com>;
	Wed,  1 Aug 2012 10:10:54 +0200 (CEST)
Received: from [192.168.178.50]
	(host73-164-dynamic.56-82-r.retail.telecomitalia.it [82.56.164.73])
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(No client certificate requested)
	(Authenticated sender: prova@fantu.it)
	by lnx3.fantu.it (Postfix) with ESMTPSA id 53CF440197E
	for <xen-devel@lists.xensource.com>;
	Wed,  1 Aug 2012 10:10:54 +0200 (CEST)
Message-ID: <5018E489.6080601@tiscali.it>
Date: Wed, 01 Aug 2012 10:10:49 +0200
From: Fabio Fantoni <fantonifabio@tiscali.it>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:14.0) Gecko/20120713 Thunderbird/14.0
MIME-Version: 1.0
To: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: [Xen-devel] Build error qemu upstream 1.2 with xen-unstable
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: fantonifabio@tiscali.it
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2539503632737551075=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a digitally signed message in MIME format.

--===============2539503632737551075==
Content-Type: multipart/signed; protocol="application/pkcs7-signature"; micalg=sha1; boundary="------------ms070800000102020409050900"

This is a digitally signed message in MIME format.

--------------ms070800000102020409050900
Content-Type: text/plain; charset=ISO-8859-15; format=flowed
Content-Transfer-Encoding: quoted-printable

I tried to build xen 4.2.0-rc1 from source on Wheezy with qemu-xen from
git (commit 0b22ef0f57a8910d849602bef0940edcd0553d2c).
I encountered this build error:
   CC    i386-softmmu/hw/i386/../xen_pt.o
/mnt/vm/xen/xen-unstable.hg/tools/qemu-xen-dir/hw/i386/../xen_pt.c: In
function 'xen_pci_passthrough_class_init':
/mnt/vm/xen/xen-unstable.hg/tools/qemu-xen-dir/hw/i386/../xen_pt.c:832:13:
error: assignment from incompatible pointer type [-Werror]
cc1: all warnings being treated as errors
make[4]: *** [hw/i386/../xen_pt.o] Error 1
make[3]: *** [subdir-i386-softmmu] Error 2
make[3]: Leaving directory
`/mnt/vm/xen/xen-unstable.hg/tools/qemu-xen-dir-remote'
make[2]: *** [subdir-all-qemu-xen-dir] Error 2
make[2]: Leaving directory `/mnt/vm/xen/xen-unstable.hg/tools'
make[1]: *** [subdirs-install] Error 2
make[1]: Leaving directory `/mnt/vm/xen/xen-unstable.hg/tools'
make: *** [install-tools] Error 2


--------------ms070800000102020409050900--


--===============2539503632737551075==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2539503632737551075==--


From xen-devel-bounces@lists.xen.org Wed Aug 01 08:11:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 08:11:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwU1E-0007ru-9L; Wed, 01 Aug 2012 08:11:00 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <fantonifabio@tiscali.it>) id 1SwU1C-0007ro-Rj
	for xen-devel@lists.xensource.com; Wed, 01 Aug 2012 08:10:59 +0000
Received: from [85.158.139.83:4927] by server-2.bemta-5.messagelabs.com id
	D1/98-04598-194E8105; Wed, 01 Aug 2012 08:10:57 +0000
X-Env-Sender: fantonifabio@tiscali.it
X-Msg-Ref: server-4.tower-182.messagelabs.com!1343808655!26981424!1
X-Originating-IP: [94.23.245.208]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11344 invoked from network); 1 Aug 2012 08:10:55 -0000
Received: from lnx3.fantu.it (HELO lnx3.fantu.it) (94.23.245.208)
	by server-4.tower-182.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 1 Aug 2012 08:10:55 -0000
Received: from localhost (localhost.localdomain [127.0.0.1])
	by lnx3.fantu.it (Postfix) with ESMTP id 034FF402208
	for <xen-devel@lists.xensource.com>;
	Wed,  1 Aug 2012 10:10:55 +0200 (CEST)
X-Virus-Scanned: Debian amavisd-new at lnx3.fantu.it
Received: from lnx3.fantu.it ([127.0.0.1])
	by localhost (lnx3.fantu.it [127.0.0.1]) (amavisd-new, port 10024)
	with ESMTP id zFjatR9ZoyLe for <xen-devel@lists.xensource.com>;
	Wed,  1 Aug 2012 10:10:54 +0200 (CEST)
Received: from [192.168.178.50]
	(host73-164-dynamic.56-82-r.retail.telecomitalia.it [82.56.164.73])
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(No client certificate requested)
	(Authenticated sender: prova@fantu.it)
	by lnx3.fantu.it (Postfix) with ESMTPSA id 53CF440197E
	for <xen-devel@lists.xensource.com>;
	Wed,  1 Aug 2012 10:10:54 +0200 (CEST)
Message-ID: <5018E489.6080601@tiscali.it>
Date: Wed, 01 Aug 2012 10:10:49 +0200
From: Fabio Fantoni <fantonifabio@tiscali.it>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:14.0) Gecko/20120713 Thunderbird/14.0
MIME-Version: 1.0
To: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: [Xen-devel] Build error qemu upstream 1.2 with xen-unstable
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: fantonifabio@tiscali.it
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2539503632737551075=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Questo Ã¨ un messaggio firmato digitalmente in formato MIME.

--===============2539503632737551075==
Content-Type: multipart/signed; protocol="application/pkcs7-signature"; micalg=sha1; boundary="------------ms070800000102020409050900"

Questo Ã¨ un messaggio firmato digitalmente in formato MIME.

--------------ms070800000102020409050900
Content-Type: text/plain; charset=ISO-8859-15; format=flowed
Content-Transfer-Encoding: 7bit

I tried to build xen 4.2.0-rc1 from source on Wheezy with qemu-xen from
git (commit 0b22ef0f57a8910d849602bef0940edcd0553d2c) and encountered
this build error:
   CC    i386-softmmu/hw/i386/../xen_pt.o
/mnt/vm/xen/xen-unstable.hg/tools/qemu-xen-dir/hw/i386/../xen_pt.c: In
function 'xen_pci_passthrough_class_init':
/mnt/vm/xen/xen-unstable.hg/tools/qemu-xen-dir/hw/i386/../xen_pt.c:832:13:
error: assignment from incompatible pointer type [-Werror]
cc1: all warnings being treated as errors
make[4]: *** [hw/i386/../xen_pt.o] Error 1
make[3]: *** [subdir-i386-softmmu] Error 2
make[3]: Leaving directory
`/mnt/vm/xen/xen-unstable.hg/tools/qemu-xen-dir-remote'
make[2]: *** [subdir-all-qemu-xen-dir] Error 2
make[2]: Leaving directory `/mnt/vm/xen/xen-unstable.hg/tools'
make[1]: *** [subdirs-install] Error 2
make[1]: Leaving directory `/mnt/vm/xen/xen-unstable.hg/tools'
make: *** [install-tools] Error 2


--------------ms070800000102020409050900--


--===============2539503632737551075==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2539503632737551075==--


From xen-devel-bounces@lists.xen.org Wed Aug 01 08:22:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 08:22:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwUBU-00084E-QL; Wed, 01 Aug 2012 08:21:36 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SwUBT-000849-KC
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 08:21:35 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1343809283!10923935!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc2Mjc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7345 invoked from network); 1 Aug 2012 08:21:24 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 08:21:24 -0000
X-IronPort-AV: E=Sophos;i="4.77,691,1336348800"; d="scan'208";a="13796740"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 08:21:23 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Wed, 1 Aug 2012
	09:21:23 +0100
Message-ID: <1343809281.27221.14.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Tamas Lengyel <tamas.k.lengyel@gmail.com>
Date: Wed, 1 Aug 2012 09:21:21 +0100
In-Reply-To: <CABfawh=1NC-VypsYLNr-J6EkvRS8PBXO5spF8w9GQdaUaso+jQ@mail.gmail.com>
References: <CABfawh=yoidWLbcYqs4JOD+b30vxYrrT1Q7a2QBNttwx4U9=Ug@mail.gmail.com>
	<CABfawh=1NC-VypsYLNr-J6EkvRS8PBXO5spF8w9GQdaUaso+jQ@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] libxl config datastructures
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-08-01 at 04:16 +0100, Tamas Lengyel wrote:
> Dear libxl developers,
> currently there is no way to automatically parse a config file into
> the libxl_domain_config datastructure and therefore using any libxl_*
> function that requires that datastructure as an input is unusable. The
> only implementation that creates that structure is in xl_cmdimpl.c in
> the private function parse_config_data. As I see, it relies on the
> XLU_Config structure to create the libxl_domain_config structure. It
> is very impractical to replicate that code for third party tools just
> to be able to use for example libxl_domain_restore. 

There is a deliberate split in functionality here, which is part of the
design of libxl.

libxl is a library which can be used to implement a toolstack. It is not
in and of itself a toolstack. I often say that it implements the common
"bottom third" of a toolstack.

xl is an actual toolstack which uses libxl and aims for, among other
things, xm compatibility which implies the ability to parse xm style
configuration files (and a similar command line syntax, etc).

The parsing of xl/xm style configuration files is specific to the
toolstack and therefore belongs in xl.

It is incorrect to say that a "libxl_* function that requires that
datastructure as an input is unusable": it is obviously entirely
possible to fill in this data structure without reference to an xm/xl
style configuration file. If you are writing your own toolstack then it
is expected that you will have your own configuration format and will
parse that instead; there is certainly no general requirement that all
toolstacks use xm/xl style configuration files.

For example, the libxl backend for libvirt takes libvirt configuration
and uses that to drive libxl in order to implement the required
functionality, and we expect that in the future the xapi toolstack will
similarly use settings in its database to drive libxl.

In actual fact the parser is in the libxlu (utils) library.
This is because, although the configuration file format is particular to
xm/xl, we acknowledge that some of the functionality of xl, such as the
ability to parse simple key=value configuration files or xl disk
configuration specifications, might be useful to other toolstacks.

> My requests is either:

It would be useful if you would explain what you are actually trying to
achieve here. Are you writing your own toolstack which strives for xm
compatibility? Or are you perhaps trying to add functionality to libxl
which actually belongs in xl?

If you explain what you are doing then we can try and figure out how
best to move forward.

>       * Create a publicly accessible function in libxl.h that parses a
>         file/char* to libxl_domain_config structure (essentially
>         making parse_config_data public).
>       * Or just use XLU_Config everywhere. I'm not entirely sure why
>         there is a need to have two separate formats and the whole
>         translation in-between.

I don't think either of these would be correct or in keeping with the
design decisions which we have made for libxl/xl.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 08:28:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 08:28:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwUH7-0008Ct-JF; Wed, 01 Aug 2012 08:27:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1SwUH5-0008Cn-J5
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 08:27:23 +0000
Received: from [85.158.143.99:56351] by server-3.bemta-4.messagelabs.com id
	EC/1E-01511-A68E8105; Wed, 01 Aug 2012 08:27:22 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-13.tower-216.messagelabs.com!1343809641!28799224!1
X-Originating-IP: [192.89.123.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjg5LjEyMy4yNSA9PiA0Njg3OTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 764 invoked from network); 1 Aug 2012 08:27:22 -0000
Received: from smtp.tele.fi (HELO smtp.tele.fi) (192.89.123.25)
	by server-13.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 1 Aug 2012 08:27:22 -0000
X-Originating-Ip: [194.89.68.22]
Received: from ydin.reaktio.net (reaktio.net [194.89.68.22])
	by smtp.tele.fi (Postfix) with ESMTP id 9C45A278A;
	Wed,  1 Aug 2012 11:27:20 +0300 (EEST)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id 70F7A2005D; Wed,  1 Aug 2012 11:27:20 +0300 (EEST)
Date: Wed, 1 Aug 2012 11:27:20 +0300
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: lmw@satx.rr.com
Message-ID: <20120801082720.GC19851@reaktio.net>
References: <20120731230841.M5H9R.154709.root@cdptpa-web20-z02>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20120731230841.M5H9R.154709.root@cdptpa-web20-z02>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] serial="pty"
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jul 31, 2012 at 11:08:41PM +0000, lmw@satx.rr.com wrote:
> When adding the keyword serial="pty" to a xen cfg file, I am able to open /dev/ttyS0 in my dom-U linux kernel environment and send serial data to a /dev/pts/x port in my dom-0 linux environment.  If I happen to have have multiple dom-U's configured this way, what is the best way to determine how these ports will be mapped so I will know which /dev/pts port belongs to which dom-U's /dev/ttyS0 port?
> 

You can see that info at least from xenstore; I don't know if there are other ways.
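A sketch of that xenstore lookup, assuming the conventional /local/domain/<domid>/console/tty layout and the xenstore-read utility shipped with the xen tools (the helper name below is made up):

```shell
# Build the xenstore key that holds the dom0 pty assigned to a domU's
# emulated serial console, given its domain id.
console_tty_path() {
    echo "/local/domain/$1/console/tty"
}

# On a live dom0 one would then run, per domU (here domid 3):
#   xenstore-read "$(console_tty_path 3)"
# which prints the matching /dev/pts/x node for that domU's /dev/ttyS0.
console_tty_path 3
```

Iterating over `xl list` (or `xm list`) output and reading this key per domid gives the full domU-to-pty mapping.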

-- Pasi


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 08:35:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 08:35:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwUOd-0008TA-Fn; Wed, 01 Aug 2012 08:35:11 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SwUOd-0008T5-0q
	for xen-devel@lists.xensource.com; Wed, 01 Aug 2012 08:35:11 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1343810102!4930616!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc2Mjc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2525 invoked from network); 1 Aug 2012 08:35:02 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 08:35:02 -0000
X-IronPort-AV: E=Sophos;i="4.77,691,1336348800"; d="scan'208";a="13796976"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 08:34:31 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Wed, 1 Aug 2012
	09:34:31 +0100
Message-ID: <1343810069.27221.16.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "fantonifabio@tiscali.it" <fantonifabio@tiscali.it>
Date: Wed, 1 Aug 2012 09:34:29 +0100
In-Reply-To: <5018E489.6080601@tiscali.it>
References: <5018E489.6080601@tiscali.it>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Anthony Perard <anthony.perard@citrix.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>
Subject: Re: [Xen-devel] Build error qemu upstream 1.2 with xen-unstable
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-08-01 at 09:10 +0100, Fabio Fantoni wrote:
> I had tried build from source xen 4.2.0-rc1 on Wheezy with qemu-xen from 
> git (commit 0b22ef0f57a8910d849602bef0940edcd0553d2c)
> I had encounter this build error:
>    CC    i386-softmmu/hw/i386/../xen_pt.o
> /mnt/vm/xen/xen-unstable.hg/tools/qemu-xen-dir/hw/i386/../xen_pt.c: In 
> function 'xen_pci_passthrough_class_init':
> /mnt/vm/xen/xen-unstable.hg/tools/qemu-xen-dir/hw/i386/../xen_pt.c:832:13: 
> error: assignment from incompatible pointer type [-Werror]
> cc1: all warnings being treated as errors

This is the same as the buildbot failure report on qemu-devel yesterday
(<20120731050658.5E215BFDC@buildbot.b1-systems.de>). I presume Anthony
(CCd) is aware, but he's just gone on holiday so CCing Stefano too.

> make[4]: *** [hw/i386/../xen_pt.o] Error 1
> make[3]: *** [subdir-i386-softmmu] Error 2
> make[3]: Leaving directory 
> `/mnt/vm/xen/xen-unstable.hg/tools/qemu-xen-dir-remote'
> make[2]: *** [subdir-all-qemu-xen-dir] Error 2
> make[2]: Leaving directory `/mnt/vm/xen/xen-unstable.hg/tools'
> make[1]: *** [subdirs-install] Error 2
> make[1]: Leaving directory `/mnt/vm/xen/xen-unstable.hg/tools'
> make: *** [install-tools] Error 2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-08-01 at 09:10 +0100, Fabio Fantoni wrote:
> I had tried build from source xen 4.2.0-rc1 on Wheezy with qemu-xen from 
> git (commit 0b22ef0f57a8910d849602bef0940edcd0553d2c)
> I had encounter this build error:
>    CC    i386-softmmu/hw/i386/../xen_pt.o
> /mnt/vm/xen/xen-unstable.hg/tools/qemu-xen-dir/hw/i386/../xen_pt.c: In 
> function 'xen_pci_passthrough_class_init':
> /mnt/vm/xen/xen-unstable.hg/tools/qemu-xen-dir/hw/i386/../xen_pt.c:832:13: 
> error: assignment from incompatible pointer type [-Werror]
> cc1: all warnings being treated as errors

This is the same as the buildbot failure report on qemu-devel yesterday
(<20120731050658.5E215BFDC@buildbot.b1-systems.de>). I presume Anthony
(CCd) is aware, but he's just gone on holiday so CCing Stefano too.

> make[4]: *** [hw/i386/../xen_pt.o] Error 1
> make[3]: *** [subdir-i386-softmmu] Error 2
> make[3]: Leaving directory 
> `/mnt/vm/xen/xen-unstable.hg/tools/qemu-xen-dir-remote'
> make[2]: *** [subdir-all-qemu-xen-dir] Error 2
> make[2]: Leaving directory `/mnt/vm/xen/xen-unstable.hg/tools'
> make[1]: *** [subdirs-install] Error 2
> make[1]: Leaving directory `/mnt/vm/xen/xen-unstable.hg/tools'
> make: *** [install-tools] Error 2
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 08:36:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 08:36:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwUPO-0008VM-Ta; Wed, 01 Aug 2012 08:35:58 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SwUPN-0008V9-CZ
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 08:35:57 +0000
Received: from [85.158.143.35:36968] by server-1.bemta-4.messagelabs.com id
	F2/AA-24392-C6AE8105; Wed, 01 Aug 2012 08:35:56 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1343810151!12899808!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc2Mjc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15380 invoked from network); 1 Aug 2012 08:35:52 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 08:35:52 -0000
X-IronPort-AV: E=Sophos;i="4.77,691,1336348800"; d="scan'208";a="13796996"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 08:35:51 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Wed, 1 Aug 2012
	09:35:51 +0100
Message-ID: <1343810149.27221.18.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Pasi =?ISO-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
Date: Wed, 1 Aug 2012 09:35:49 +0100
In-Reply-To: <20120801082720.GC19851@reaktio.net>
References: <20120731230841.M5H9R.154709.root@cdptpa-web20-z02>
	<20120801082720.GC19851@reaktio.net>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "lmw@satx.rr.com" <lmw@satx.rr.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] serial="pty"
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-08-01 at 09:27 +0100, Pasi Kärkkäinen wrote:
> On Tue, Jul 31, 2012 at 11:08:41PM +0000, lmw@satx.rr.com wrote:
> > When adding the keyword serial="pty" to a xen cfg file, I am able to open /dev/ttyS0 in my dom-U linux kernel environment and send serial data to a /dev/pts/x port in my dom-0 linux environment.  If I happen to have have multiple dom-U's configured this way, what is the best way to determine how these ports will be mapped so I will know which /dev/pts port belongs to which dom-U's /dev/ttyS0 port?
> > 
> 
> You can see that info at least from xenstore, dunno if there are other ways..

If you are using xl the "xl console <dom>" will get you the serial
console by default (use -t option to force). I don't know if "xm
console" had the same behaviour.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
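
[Editorial note: a minimal dom0 sketch of the xenstore approach Pasi mentions. It assumes the path /local/domain/<domid>/serial/0/tty, which is what xend/qemu have traditionally used for an emulated serial port; the exact layout can vary by toolstack and version, so treat the paths as assumptions and check with xenstore-ls:]

```shell
#!/bin/sh
# Map each running domU to the dom0 pty backing its emulated serial port.
# Assumed xenstore layout: /local/domain/<domid>/serial/0/tty (PV consoles
# appear under .../console/tty instead).
echo "domU serial ptys (from xenstore):"
if ! command -v xenstore-list >/dev/null 2>&1; then
    echo "  (xenstore tools not found; run this in a Xen dom0)"
    exit 0
fi
for domid in $(xenstore-list /local/domain); do
    [ "$domid" = "0" ] && continue            # skip dom0 itself
    name=$(xenstore-read "/local/domain/$domid/name" 2>/dev/null)
    tty=$(xenstore-read "/local/domain/$domid/serial/0/tty" 2>/dev/null)
    [ -n "$tty" ] && echo "  ${name:-domid $domid}: $tty"
done
```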

From xen-devel-bounces@lists.xen.org Wed Aug 01 09:15:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 09:15:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwV0p-0000X4-50; Wed, 01 Aug 2012 09:14:39 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wdauchy@gmail.com>) id 1SwV0o-0000Wz-0y
	for xen-devel@lists.xensource.com; Wed, 01 Aug 2012 09:14:38 +0000
Received: from [85.158.143.35:41211] by server-3.bemta-4.messagelabs.com id
	40/D2-01511-D73F8105; Wed, 01 Aug 2012 09:14:37 +0000
X-Env-Sender: wdauchy@gmail.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1343812474!5423133!1
X-Originating-IP: [209.85.160.43]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18334 invoked from network); 1 Aug 2012 09:14:36 -0000
Received: from mail-pb0-f43.google.com (HELO mail-pb0-f43.google.com)
	(209.85.160.43)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 09:14:36 -0000
Received: by pbcwz7 with SMTP id wz7so943824pbc.30
	for <xen-devel@lists.xensource.com>;
	Wed, 01 Aug 2012 02:14:34 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type;
	bh=10SgRzKCMoeFLEld/mZTgYLNznTiWcl23IrkjnjozK0=;
	b=BIUeL8dpfYN92gqYfh0x3g7VohF0FXrWCxP6yn19IUZv40M/c4Y2iaUfswba5zSSLS
	BNiHHsDvOt2av2GQ1eeSOMoX9fnIGiR061rM7F+DLnDuAZn4Y1TX6wVeWC3RJ59ooutj
	FfvYPyJuBWmCz6r3Jb3qQNXcWBDNylhx/7UCnLE/EPIV0sbAPDcGqRtXFuIvhRUvV9ip
	PnFKJOQfBmgtbJ1v2xw+T/VvVwySfDbXqlRQyCxrkMJEqxZ+1lAH5yS4tH4dgJOLu9pN
	e1W+oCzet1E2PNvt76IL3dHxX0fH7GAkl3dcdD0gjc51xn1BvmVwbnD6wKwD/DLb1Cw6
	D1dA==
Received: by 10.68.222.9 with SMTP id qi9mr32544237pbc.164.1343812474198; Wed,
	01 Aug 2012 02:14:34 -0700 (PDT)
MIME-Version: 1.0
Received: by 10.68.135.229 with HTTP; Wed, 1 Aug 2012 02:14:13 -0700 (PDT)
In-Reply-To: <20120731230830.GA32698@phenom.dumpdata.com>
References: <CAJ75kXaiMeXUu-8E7-vk7WeX7N_FgHn=Orak4kZ6zqhJ0CQBHQ@mail.gmail.com>
	<20120731230830.GA32698@phenom.dumpdata.com>
From: William Dauchy <wdauchy@gmail.com>
Date: Wed, 1 Aug 2012 11:14:13 +0200
Message-ID: <CAJ75kXY-ME0opxgM65Ryn1Q+uHPUG0O5zzJk2eQjSssS1o5BHQ@mail.gmail.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] kobject (null) without dirent 3.x domU
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Aug 1, 2012 at 1:08 AM, Konrad Rzeszutek Wilk
<konrad.wilk@oracle.com> wrote:
> Do you see this with v3.5?

Thanks for your answer.
Sorry, in fact I found the issue: it was a race in my API script
between CPU hotplug and the start of the VM.

Regards,
-- 
William

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 09:18:12 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 09:18:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwV3W-0000cc-NU; Wed, 01 Aug 2012 09:17:26 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SwV3V-0000cX-4I
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 09:17:25 +0000
Received: from [85.158.143.35:57923] by server-1.bemta-4.messagelabs.com id
	4B/30-24392-424F8105; Wed, 01 Aug 2012 09:17:24 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1343812643!15572403!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM3NjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32218 invoked from network); 1 Aug 2012 09:17:23 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-3.tower-21.messagelabs.com with SMTP;
	1 Aug 2012 09:17:23 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 01 Aug 2012 10:17:22 +0100
Message-Id: <5019106B0200007800091D22@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Wed, 01 Aug 2012 10:18:03 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part8FBECA5B.0__="
Subject: [Xen-devel] linux-2.6.18/x86: fix context switch on debug
 hypervisor after 1183:5e3c342a325e
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part8FBECA5B.0__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 8bit
Content-Disposition: inline

Looking at anything but the result field of a multicall structure after
issuing the multicall is invalid - the debug hypervisor intentionally
clobbers all other fields.

On i386 also remove some left over debugging code.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/arch/i386/kernel/process-xen.c
+++ b/arch/i386/kernel/process-xen.c
@@ -548,7 +548,7 @@ struct task_struct fastcall * __switch_t
 {
 	struct thread_struct *prev = &prev_p->thread,
 				 *next = &next_p->thread;
-	int cpu = smp_processor_id();
+	int cpu = smp_processor_id(), cr0_ts;
 #ifndef CONFIG_X86_NO_TSS
 	struct tss_struct *tss = &per_cpu(init_tss, cpu);
 #endif
@@ -575,9 +575,6 @@ struct task_struct fastcall * __switch_t
 		mcl->args[0] = 1;
 		mcl++;
 	}
-#if 0 /* lazy fpu sanity check */
-	else BUG_ON(!(read_cr0() & 8));
-#endif
 
 	/*
 	 * Reload esp0.
@@ -639,11 +636,14 @@ struct task_struct fastcall * __switch_t
 	BUG_ON(pdo > _pdo + ARRAY_SIZE(_pdo));
 #endif
 	BUG_ON(mcl > _mcl + ARRAY_SIZE(_mcl));
-	if (_mcl->op == __HYPERVISOR_fpu_taskswitch)
+	if (_mcl->op == __HYPERVISOR_fpu_taskswitch) {
 		__get_cpu_var(xen_x86_cr0_upd) = X86_CR0_TS;
+		cr0_ts = 1;
+	} else
+		cr0_ts = 0;
 	if (unlikely(HYPERVISOR_multicall_check(_mcl, mcl - _mcl, NULL)))
 		BUG();
-	if (_mcl->op == __HYPERVISOR_fpu_taskswitch) {
+	if (cr0_ts) {
 		__get_cpu_var(xen_x86_cr0) |= X86_CR0_TS;
 		xen_clear_cr0_upd();
 	}
--- a/arch/x86_64/kernel/process-xen.c
+++ b/arch/x86_64/kernel/process-xen.c
@@ -486,7 +486,7 @@ __switch_to(struct task_struct *prev_p, 
 {
 	struct thread_struct *prev = &prev_p->thread,
 				 *next = &next_p->thread;
-	int cpu = smp_processor_id();  
+	int cpu = smp_processor_id(), cr0_ts;
 #ifndef CONFIG_X86_NO_TSS
 	struct tss_struct *tss = &per_cpu(init_tss, cpu);
 #endif
@@ -572,11 +572,14 @@ __switch_to(struct task_struct *prev_p, 
 	BUG_ON(pdo > _pdo + ARRAY_SIZE(_pdo));
 #endif
 	BUG_ON(mcl > _mcl + ARRAY_SIZE(_mcl));
-	if (_mcl->op == __HYPERVISOR_fpu_taskswitch)
+	if (_mcl->op == __HYPERVISOR_fpu_taskswitch) {
 		__get_cpu_var(xen_x86_cr0_upd) = X86_CR0_TS;
+		cr0_ts = 1;
+	} else
+		cr0_ts = 0;
 	if (unlikely(HYPERVISOR_multicall_check(_mcl, mcl - _mcl, NULL)))
 		BUG();
-	if (_mcl->op == __HYPERVISOR_fpu_taskswitch) {
+	if (cr0_ts) {
 		__get_cpu_var(xen_x86_cr0) |= X86_CR0_TS;
 		xen_clear_cr0_upd();
 	}




--=__Part8FBECA5B.0__=
Content-Type: text/plain; name="xen-x86-cr0-fix.patch"
Content-Transfer-Encoding: 8bit
Content-Disposition: attachment; filename="xen-x86-cr0-fix.patch"

x86: fix context switch on debug hypervisor after 1183:5e3c342a325e

Looking at anything but the result field of a multicall structure after
issuing the multicall is invalid - the debug hypervisor intentionally
clobbers all other fields.

On i386 also remove some left over debugging code.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/arch/i386/kernel/process-xen.c
+++ b/arch/i386/kernel/process-xen.c
@@ -548,7 +548,7 @@ struct task_struct fastcall * __switch_t
 {
 	struct thread_struct *prev = &prev_p->thread,
 				 *next = &next_p->thread;
-	int cpu = smp_processor_id();
+	int cpu = smp_processor_id(), cr0_ts;
 #ifndef CONFIG_X86_NO_TSS
 	struct tss_struct *tss = &per_cpu(init_tss, cpu);
 #endif
@@ -575,9 +575,6 @@ struct task_struct fastcall * __switch_t
 		mcl->args[0] = 1;
 		mcl++;
 	}
-#if 0 /* lazy fpu sanity check */
-	else BUG_ON(!(read_cr0() & 8));
-#endif
 
 	/*
 	 * Reload esp0.
@@ -639,11 +636,14 @@ struct task_struct fastcall * __switch_t
 	BUG_ON(pdo > _pdo + ARRAY_SIZE(_pdo));
 #endif
 	BUG_ON(mcl > _mcl + ARRAY_SIZE(_mcl));
-	if (_mcl->op == __HYPERVISOR_fpu_taskswitch)
+	if (_mcl->op == __HYPERVISOR_fpu_taskswitch) {
 		__get_cpu_var(xen_x86_cr0_upd) = X86_CR0_TS;
+		cr0_ts = 1;
+	} else
+		cr0_ts = 0;
 	if (unlikely(HYPERVISOR_multicall_check(_mcl, mcl - _mcl, NULL)))
 		BUG();
-	if (_mcl->op == __HYPERVISOR_fpu_taskswitch) {
+	if (cr0_ts) {
 		__get_cpu_var(xen_x86_cr0) |= X86_CR0_TS;
 		xen_clear_cr0_upd();
 	}
--- a/arch/x86_64/kernel/process-xen.c
+++ b/arch/x86_64/kernel/process-xen.c
@@ -486,7 +486,7 @@ __switch_to(struct task_struct *prev_p, 
 {
 	struct thread_struct *prev = &prev_p->thread,
 				 *next = &next_p->thread;
-	int cpu = smp_processor_id();  
+	int cpu = smp_processor_id(), cr0_ts;
 #ifndef CONFIG_X86_NO_TSS
 	struct tss_struct *tss = &per_cpu(init_tss, cpu);
 #endif
@@ -572,11 +572,14 @@ __switch_to(struct task_struct *prev_p, 
 	BUG_ON(pdo > _pdo + ARRAY_SIZE(_pdo));
 #endif
 	BUG_ON(mcl > _mcl + ARRAY_SIZE(_mcl));
-	if (_mcl->op == __HYPERVISOR_fpu_taskswitch)
+	if (_mcl->op == __HYPERVISOR_fpu_taskswitch) {
 		__get_cpu_var(xen_x86_cr0_upd) = X86_CR0_TS;
+		cr0_ts = 1;
+	} else
+		cr0_ts = 0;
 	if (unlikely(HYPERVISOR_multicall_check(_mcl, mcl - _mcl, NULL)))
 		BUG();
-	if (_mcl->op == __HYPERVISOR_fpu_taskswitch) {
+	if (cr0_ts) {
 		__get_cpu_var(xen_x86_cr0) |= X86_CR0_TS;
 		xen_clear_cr0_upd();
 	}
--=__Part8FBECA5B.0__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part8FBECA5B.0__=--


From xen-devel-bounces@lists.xen.org Wed Aug 01 09:18:12 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 09:18:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwV3W-0000cc-NU; Wed, 01 Aug 2012 09:17:26 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SwV3V-0000cX-4I
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 09:17:25 +0000
Received: from [85.158.143.35:57923] by server-1.bemta-4.messagelabs.com id
	4B/30-24392-424F8105; Wed, 01 Aug 2012 09:17:24 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1343812643!15572403!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM3NjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32218 invoked from network); 1 Aug 2012 09:17:23 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-3.tower-21.messagelabs.com with SMTP;
	1 Aug 2012 09:17:23 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 01 Aug 2012 10:17:22 +0100
Message-Id: <5019106B0200007800091D22@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Wed, 01 Aug 2012 10:18:03 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part8FBECA5B.0__="
Subject: [Xen-devel] linux-2.6.18/x86: fix context switch on debug
 hypervisor after 1183:5e3c342a325e
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part8FBECA5B.0__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

Looking at anything but the result field of a multicall structure after
issuing the multicall is invalid - the debug hypervisor intentionally
clobbers all other fields.

On i386 also remove some left over debugging code.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/arch/i386/kernel/process-xen.c
+++ b/arch/i386/kernel/process-xen.c
@@ -548,7 +548,7 @@ struct task_struct fastcall * __switch_t
 {
 	struct thread_struct *prev = &prev_p->thread,
 				 *next = &next_p->thread;
-	int cpu = smp_processor_id();
+	int cpu = smp_processor_id(), cr0_ts;
 #ifndef CONFIG_X86_NO_TSS
 	struct tss_struct *tss = &per_cpu(init_tss, cpu);
 #endif
@@ -575,9 +575,6 @@ struct task_struct fastcall * __switch_t
 		mcl->args[0] = 1;
 		mcl++;
 	}
-#if 0 /* lazy fpu sanity check */
-	else BUG_ON(!(read_cr0() & 8));
-#endif
 
 	/*
 	 * Reload esp0.
@@ -639,11 +636,14 @@ struct task_struct fastcall * __switch_t
 	BUG_ON(pdo > _pdo + ARRAY_SIZE(_pdo));
 #endif
 	BUG_ON(mcl > _mcl + ARRAY_SIZE(_mcl));
-	if (_mcl->op == __HYPERVISOR_fpu_taskswitch)
+	if (_mcl->op == __HYPERVISOR_fpu_taskswitch) {
 		__get_cpu_var(xen_x86_cr0_upd) = X86_CR0_TS;
+		cr0_ts = 1;
+	} else
+		cr0_ts = 0;
 	if (unlikely(HYPERVISOR_multicall_check(_mcl, mcl - _mcl, NULL)))
 		BUG();
-	if (_mcl->op == __HYPERVISOR_fpu_taskswitch) {
+	if (cr0_ts) {
 		__get_cpu_var(xen_x86_cr0) |= X86_CR0_TS;
 		xen_clear_cr0_upd();
 	}
--- a/arch/x86_64/kernel/process-xen.c
+++ b/arch/x86_64/kernel/process-xen.c
@@ -486,7 +486,7 @@ __switch_to(struct task_struct *prev_p, 
 {
 	struct thread_struct *prev = &prev_p->thread,
 				 *next = &next_p->thread;
-	int cpu = smp_processor_id();
+	int cpu = smp_processor_id(), cr0_ts;
 #ifndef CONFIG_X86_NO_TSS
 	struct tss_struct *tss = &per_cpu(init_tss, cpu);
 #endif
@@ -572,11 +572,14 @@ __switch_to(struct task_struct *prev_p, 
 	BUG_ON(pdo > _pdo + ARRAY_SIZE(_pdo));
 #endif
 	BUG_ON(mcl > _mcl + ARRAY_SIZE(_mcl));
-	if (_mcl->op == __HYPERVISOR_fpu_taskswitch)
+	if (_mcl->op == __HYPERVISOR_fpu_taskswitch) {
 		__get_cpu_var(xen_x86_cr0_upd) = X86_CR0_TS;
+		cr0_ts = 1;
+	} else
+		cr0_ts = 0;
 	if (unlikely(HYPERVISOR_multicall_check(_mcl, mcl - _mcl, NULL)))
 		BUG();
-	if (_mcl->op == __HYPERVISOR_fpu_taskswitch) {
+	if (cr0_ts) {
 		__get_cpu_var(xen_x86_cr0) |= X86_CR0_TS;
 		xen_clear_cr0_upd();
 	}




--=__Part8FBECA5B.0__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part8FBECA5B.0__=--


From xen-devel-bounces@lists.xen.org Wed Aug 01 09:18:39 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 09:18:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwV46-0000eT-4i; Wed, 01 Aug 2012 09:18:02 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SwV44-0000eE-W8
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 09:18:01 +0000
Received: from [85.158.143.35:20570] by server-2.bemta-4.messagelabs.com id
	B8/EA-17938-844F8105; Wed, 01 Aug 2012 09:18:00 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1343812679!16867356!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15098 invoked from network); 1 Aug 2012 09:17:59 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 09:17:59 -0000
X-IronPort-AV: E=Sophos;i="4.77,693,1336348800"; d="scan'208";a="13797858"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 09:17:59 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Wed, 1 Aug 2012
	10:17:59 +0100
Message-ID: <1343812677.27221.40.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "greg@enjellic.com" <greg@enjellic.com>
Date: Wed, 1 Aug 2012 10:17:57 +0100
In-Reply-To: <201207312141.q6VLfJje012656@wind.enjellic.com>
References: <201207312141.q6VLfJje012656@wind.enjellic.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Blktap fixes and kernel patch.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-07-31 at 22:41 +0100, Dr. Greg Wettstein wrote:

> After becoming far more familiar with the XEN vbd infrastructure than
> I had intended I was finally able to run down the problem with the
> blktap2 driver which has plagued 4.1.2.  Most specifically the
> orphaning of the tapdisk2 driver process and after Ian's fix for that,
> the hang and orphaning of a tapdisk minor which occurs when libxl
> attempts to shutdown the driver process.

Thanks for digging into this Greg.

> In the process I updated the blktap2 kernel driver to patch cleanly
> into the Linux 3.4 kernel.  These fixes have been validated against
> the 3.4 kernel as well as the 3.2 kernel.

Just to be clear, this is just a straightforward port; there's no part
of the deadlock fix in here?

> The first patch is one which was done by Ian for the development tree
> with minor corrections for 4.1.2.  I'm including it for completeness
> for those who want a trouble free patch set for a 4.1.2 distribution.
> This patch fixes the orphaning of the tapdisk2 driver process when xl
> shuts down.

This is a fairly straight backport of a patch in unstable?

If you send a mail with a subject "Xen 4.1.x backport request
<commit-id>" explaining which commit it is and CC keir@xen.org &
ian.jackson@eu.citrix.com then we can see about getting this into a
future 4.1.x (perhaps even 4.1.3, not sure which stage of rcs we are at
there).

If the backport is reasonably trivial then there is often no need to
include it but since you have done so you might as well include the
patch for reference.

> The second patch corrects the deadlock which occurs between the
> blktap2 kernel driver and the blktap2 userspace control plane.  The
> deadlock causes a delay in the shutdown of a XEN guest and results in
> the 'orphaning' of tapdisk minor number allocations.  As seems to be
> typical with these types of things the fix was trivially straight
> forward once I finally figured out what was going on.

Thanks for this.

Am I right that the important functional change here is that the xs_rm
needs to come after we read the params node but before tap_ctl_destroy?
Obviously removing the node before calling libxl__device_destroy_tapdisk
is wrong since libxl__device_destroy_tapdisk reads from be_path!

Looking at 4.2.0-rc1 I see that libxl__device_destroy removes the
backend before calling libxl__device_destroy_tapdisk, so I think that a
fix is needed there too.

I'm less sure about the usage in libxl__initiate_device_remove. I wonder
if the call to libxl__device_destroy_tapdisk there needs to move to
device_hotplug_done, right after the transaction which cleans up the
backend?

The code which used to be in libxl__devices_destroy is now in
libxl__initiate_device_remove so I expect that fixing that would be
sufficient.

I'd really appreciate it if you could validate whether 4.2.0-rc1 works
for you or not; I suspect not. We would usually want to fix the
development version before considering fixes for the stable branches
(even if the actual patch ends up looking totally different), otherwise
we run the risk of regressions in the next version.

Is there a simple command which will list the leaked tap devices? If so
we can consider adding it to the leak-check phase of the automated tests
(although I'm not sure how much use these make of blktap).

For future reference if you intend for a patch to be applied it is best
to submit it in the form described in
http://wiki.xen.org/wiki/Submitting_Xen_Patches, that is one patch per
email, with a changelog specific to that change and a Signed-off-by. In
this sort of scenario (a patch going to 4.1 which isn't a backport) the
changelog should also mention why the patch isn't a backport.

> Ian for your reference the following change which you introduced to
> address this issue:
> 
> 79e3dbe4b659e78408a9eea76c51a601bd4a383a
> tapdisk: respond to destroy request before tearing down the commuication channel
> 
> Is not needed and does not provide formally correct behavior in the
> presence of the two patches noted above.

Is it incorrect (i.e. should be reverted) or is it just incomplete/not
helpful?

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 09:18:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 09:18:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwV47-0000eo-LE; Wed, 01 Aug 2012 09:18:03 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SwV45-0000eF-Hk
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 09:18:01 +0000
Received: from [85.158.143.35:60123] by server-2.bemta-4.messagelabs.com id
	DE/EA-17938-844F8105; Wed, 01 Aug 2012 09:18:00 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1343812679!12910268!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM3NjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20890 invoked from network); 1 Aug 2012 09:17:59 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-7.tower-21.messagelabs.com with SMTP;
	1 Aug 2012 09:17:59 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 01 Aug 2012 10:17:59 +0100
Message-Id: <501910900200007800091D2F@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Wed, 01 Aug 2012 10:18:40 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__PartB485F160.0__="
Subject: [Xen-devel] [PATCH] linux-2.6.18/mm: properly frame Xen additions
 with CONFIG_XEN conditionals
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__PartB485F160.0__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

There's no need and no good reason to affect native kernels built from
the same sources.

Also eliminate a compiler warning triggered by various versions, and
adjust some white space.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -209,14 +209,15 @@ struct vm_operations_struct {
 	/* notification that a previously read-only page is about to become
 	 * writable, if an error is returned it will cause a SIGBUS */
 	int (*page_mkwrite)(struct vm_area_struct *vma, struct page *page);
+#ifdef CONFIG_XEN
 	/* Area-specific function for clearing the PTE at @ptep. Returns the
 	 * original value of @ptep. */
-	pte_t (*zap_pte)(struct vm_area_struct *vma, 
+	pte_t (*zap_pte)(struct vm_area_struct *vma,
 			 unsigned long addr, pte_t *ptep, int is_fullmm);
 
-        /* called before close() to indicate no more pages should be mapped */
-        void (*unmap)(struct vm_area_struct *area);
-
+	/* called before close() to indicate no more pages should be mapped */
+	void (*unmap)(struct vm_area_struct *area);
+#endif
 #ifdef CONFIG_NUMA
 	int (*set_policy)(struct vm_area_struct *vma, struct mempolicy *new);
 	struct mempolicy *(*get_policy)(struct vm_area_struct *vma,
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -409,7 +409,9 @@ struct page *vm_normal_page(struct vm_ar
 	 * and that the resulting page looks ok.
 	 */
 	if (unlikely(!pfn_valid(pfn))) {
+#ifdef CONFIG_XEN
 		if (!(vma->vm_flags & VM_RESERVED))
+#endif
 			print_bad_pte(vma, pte, addr);
 		return NULL;
 	}
@@ -665,10 +667,12 @@ static unsigned long zap_pte_range(struc
 				     page->index > details->last_index))
 					continue;
 			}
+#ifdef CONFIG_XEN
 			if (unlikely(vma->vm_ops && vma->vm_ops->zap_pte))
 				ptent = vma->vm_ops->zap_pte(vma, addr, pte,
 							     tlb->fullmm);
 			else
+#endif
 				ptent = ptep_get_and_clear_full(mm, addr, pte,
 								tlb->fullmm);
 			tlb_remove_tlb_entry(tlb, pte, addr);
@@ -1425,7 +1429,7 @@ static inline int apply_to_pte_range(str
 	spinlock_t *ptl;
 
 	pte = (mm == &init_mm) ?
-		pte_alloc_kernel(pmd, addr) :
+		ptl = NULL, pte_alloc_kernel(pmd, addr) :
 		pte_alloc_map_lock(mm, pmd, addr, &ptl);
 	if (!pte)
 		return -ENOMEM;
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1689,8 +1689,10 @@ static void unmap_region(struct mm_struc
 
 static inline void unmap_vma(struct vm_area_struct *vma)
 {
+#ifdef CONFIG_XEN
 	if (unlikely(vma->vm_ops && vma->vm_ops->unmap))
 		vma->vm_ops->unmap(vma);
+#endif
 }
 
 /*
@@ -1966,7 +1968,7 @@ EXPORT_SYMBOL(do_brk);
 void exit_mmap(struct mm_struct *mm)
 {
 	struct mmu_gather *tlb;
-	struct vm_area_struct *vma_tmp, *vma = mm->mmap;
+	struct vm_area_struct *vma = mm->mmap;
 	unsigned long nr_accounted = 0;
 	unsigned long end;
 
@@ -1974,8 +1976,11 @@ void exit_mmap(struct mm_struct *mm)
 	arch_exit_mmap(mm);
 #endif
 
-	for (vma_tmp = mm->mmap; vma_tmp; vma_tmp = vma_tmp->vm_next)
-		unmap_vma(vma_tmp);
+#ifdef CONFIG_XEN
+	for (; vma; vma = vma->vm_next)
+		unmap_vma(vma);
+	vma = mm->mmap;
+#endif
 
 	lru_add_drain();
 	flush_cache_mm(mm);




--=__PartB485F160.0__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__PartB485F160.0__=--


From xen-devel-bounces@lists.xen.org Wed Aug 01 09:18:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 09:18:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwV47-0000eo-LE; Wed, 01 Aug 2012 09:18:03 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SwV45-0000eF-Hk
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 09:18:01 +0000
Received: from [85.158.143.35:60123] by server-2.bemta-4.messagelabs.com id
	DE/EA-17938-844F8105; Wed, 01 Aug 2012 09:18:00 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1343812679!12910268!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM3NjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20890 invoked from network); 1 Aug 2012 09:17:59 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-7.tower-21.messagelabs.com with SMTP;
	1 Aug 2012 09:17:59 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 01 Aug 2012 10:17:59 +0100
Message-Id: <501910900200007800091D2F@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Wed, 01 Aug 2012 10:18:40 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__PartB485F160.0__="
Subject: [Xen-devel] [PATCH] linux-2.6.18/mm: properly frame Xen additions
 with CONFIG_XEN conditionals
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__PartB485F160.0__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

There's no need and no good reason to affect native kernels built from
the same sources.

Also eliminate a compiler warning triggered by various versions, and
adjust some white space.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -209,14 +209,15 @@ struct vm_operations_struct {
 	/* notification that a previously read-only page is about to become
 	 * writable, if an error is returned it will cause a SIGBUS */
 	int (*page_mkwrite)(struct vm_area_struct *vma, struct page *page);
+#ifdef CONFIG_XEN
 	/* Area-specific function for clearing the PTE at @ptep. Returns the
 	 * original value of @ptep. */
-	pte_t (*zap_pte)(struct vm_area_struct *vma, 
+	pte_t (*zap_pte)(struct vm_area_struct *vma,
 			 unsigned long addr, pte_t *ptep, int is_fullmm);
 
-        /* called before close() to indicate no more pages should be mapped */
-        void (*unmap)(struct vm_area_struct *area);
-
+	/* called before close() to indicate no more pages should be mapped */
+	void (*unmap)(struct vm_area_struct *area);
+#endif
 #ifdef CONFIG_NUMA
 	int (*set_policy)(struct vm_area_struct *vma, struct mempolicy *new);
 	struct mempolicy *(*get_policy)(struct vm_area_struct *vma,
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -409,7 +409,9 @@ struct page *vm_normal_page(struct vm_ar
 	 * and that the resulting page looks ok.
 	 */
 	if (unlikely(!pfn_valid(pfn))) {
+#ifdef CONFIG_XEN
 		if (!(vma->vm_flags & VM_RESERVED))
+#endif
 			print_bad_pte(vma, pte, addr);
 		return NULL;
 	}
@@ -665,10 +667,12 @@ static unsigned long zap_pte_range(struc
 				     page->index > details->last_index))
 					continue;
 			}
+#ifdef CONFIG_XEN
 			if (unlikely(vma->vm_ops && vma->vm_ops->zap_pte))
 				ptent = vma->vm_ops->zap_pte(vma, addr, pte,
 							     tlb->fullmm);
 			else
+#endif
 				ptent = ptep_get_and_clear_full(mm, addr, pte,
 								tlb->fullmm);
 			tlb_remove_tlb_entry(tlb, pte, addr);
@@ -1425,7 +1429,7 @@ static inline int apply_to_pte_range(str
 	spinlock_t *ptl;
 
 	pte = (mm == &init_mm) ?
-		pte_alloc_kernel(pmd, addr) :
+		ptl = NULL, pte_alloc_kernel(pmd, addr) :
 		pte_alloc_map_lock(mm, pmd, addr, &ptl);
 	if (!pte)
 		return -ENOMEM;
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1689,8 +1689,10 @@ static void unmap_region(struct mm_struc
 
 static inline void unmap_vma(struct vm_area_struct *vma)
 {
+#ifdef CONFIG_XEN
 	if (unlikely(vma->vm_ops && vma->vm_ops->unmap))
 		vma->vm_ops->unmap(vma);
+#endif
 }
 
 /*
@@ -1966,7 +1968,7 @@ EXPORT_SYMBOL(do_brk);
 void exit_mmap(struct mm_struct *mm)
 {
 	struct mmu_gather *tlb;
-	struct vm_area_struct *vma_tmp, *vma = mm->mmap;
+	struct vm_area_struct *vma = mm->mmap;
 	unsigned long nr_accounted = 0;
 	unsigned long end;
 
@@ -1974,8 +1976,11 @@ void exit_mmap(struct mm_struct *mm)
 	arch_exit_mmap(mm);
 #endif
 
-	for (vma_tmp = mm->mmap; vma_tmp; vma_tmp = vma_tmp->vm_next)
-		unmap_vma(vma_tmp);
+#ifdef CONFIG_XEN
+	for (; vma; vma = vma->vm_next)
+		unmap_vma(vma);
+	vma = mm->mmap;
+#endif
 
 	lru_add_drain();
 	flush_cache_mm(mm);
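
The framing pattern above can be illustrated outside the kernel tree. The sketch below uses demo stand-ins (NOT the real vm_operations_struct or kernel call sites) to show the point of the patch: members and call sites wrapped in #ifdef CONFIG_XEN disappear entirely from native builds of the same sources.

```c
#include <stddef.h>

/* Demo stand-ins for illustration only: a struct with a Xen-only hook and a
 * call site framed the same way the patch frames zap_pte/unmap. */
struct vm_ops_demo {
	int (*page_mkwrite)(void *page);
#ifdef CONFIG_XEN
	/* Xen-only member: compiled away entirely on native builds. */
	void (*unmap)(void *area);
#endif
};

/* On a native build (no -DCONFIG_XEN) this body reduces to nothing, so
 * native kernels built from the same sources are unaffected. */
static inline void demo_unmap(struct vm_ops_demo *ops, void *area)
{
#ifdef CONFIG_XEN
	if (ops && ops->unmap)
		ops->unmap(area);
#else
	(void)ops;
	(void)area;
#endif
}
```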




--=__PartB485F160.0__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__PartB485F160.0__=--


From xen-devel-bounces@lists.xen.org Wed Aug 01 09:35:24 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 09:35:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwVKC-0001C6-8q; Wed, 01 Aug 2012 09:34:40 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1SwVKA-0001C1-Ot
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 09:34:38 +0000
Received: from [85.158.143.35:32204] by server-2.bemta-4.messagelabs.com id
	33/4D-17938-E28F8105; Wed, 01 Aug 2012 09:34:38 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1343813614!16871342!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjEzMjU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3548 invoked from network); 1 Aug 2012 09:33:37 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 09:33:37 -0000
X-IronPort-AV: E=Sophos;i="4.77,693,1336363200"; d="scan'208";a="33174790"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 05:33:34 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 1 Aug 2012 05:33:33 -0400
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1SwVJ7-0004fa-Ip;
	Wed, 01 Aug 2012 10:33:33 +0100
Message-ID: <5018F7ED.3010100@citrix.com>
Date: Wed, 1 Aug 2012 10:33:33 +0100
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120714 Thunderbird/14.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <patchbomb.1343749916@andrewcoop.uk.xensource.com>
	<ae32690d0d740d3aba01.1343749920@andrewcoop.uk.xensource.com>
	<20504.3594.340410.435942@mariner.uk.xensource.com>
	<5018FBB60200007800091CCB@nat28.tlf.novell.com>
In-Reply-To: <5018FBB60200007800091CCB@nat28.tlf.novell.com>
X-Enigmail-Version: 1.4.3
Cc: "Keir \(Xen.org\)" <keir@xen.org>, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 4 of 5] xen/makefile: Allow XEN_CHANGESET to
	be set externally
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On 01/08/12 08:49, Jan Beulich wrote:
>>>> On 31.07.12 at 18:55, Ian Jackson <Ian.Jackson@eu.citrix.com> wrote:
>> Andrew Cooper writes ("[PATCH 4 of 5] xen/makefile: Allow XEN_CHANGESET to be 
>> set externally"):
>> ..
>>> diff -r 6db5c184a777 -r ae32690d0d74 xen/Makefile
>>> --- a/xen/Makefile
>>> +++ b/xen/Makefile
>>> @@ -13,6 +13,7 @@ export BASEDIR := $(CURDIR)
>>>  export XEN_ROOT := $(BASEDIR)/..
>>>  
>>>  EFI_MOUNTPOINT ?= /boot/efi
>>> +XEN_CHANGESET  ?= $(shell hg root &> /dev/null && hg parents --template 
>> "{date|date} {rev}:{node|short}" || echo "unavailable" )
>>>  
>>>  .PHONY: default
>>>  default: build
>>> @@ -107,7 +108,7 @@ include/xen/compile.h: include/xen/compi
>>>  	    -e 's/@@version@@/$(XEN_VERSION)/g' \
>>>  	    -e 's/@@subversion@@/$(XEN_SUBVERSION)/g' \
>>>  	    -e 's/@@extraversion@@/$(XEN_EXTRAVERSION)/g' \
>>> -	    -e 's!@@changeset@@!$(shell ((hg parents --template "{date|date} 
>> {rev}:{node|short}" >/dev/null && hg parents --template "{date|date} 
>> {rev}:{node|short}") || echo "unavailable") 2>/dev/null)!g' \
>>> +	    -e 's!@@changeset@@!$(XEN_CHANGESET)!g' \
>>>  	    < include/xen/compile.h.in > $@.new
>>>  	@grep \" .banner >> $@.new
>>>  	@grep -v \" .banner
>> We need to check how many times, and at which point, this gets
>> executed, when this patch is applied.  I think it's OK...
> If in doubt, perhaps the better option would be to execute this
> once unconditionally (via := assignment to a helper variable), or
> to use := here but frame the assignment with some if construct
> excluding it to be executed when the changeset was already
> specified (assuming that an empty specification is pointless,
> simply checking for it to be non-empty should suffice).
>
> Jan
>

The rule looks like:

# compile.h contains dynamic build info. Rebuilt on every 'make' invocation.

include/xen/compile.h: include/xen/compile.h.in .banner
        @sed -e 's/@@date@@/$(shell LC_ALL=C date)/g' \

So it should only be executed once (unless someone is messing around
deleting compile.h).

Having said that, even if it were executed more than once, there is no
reasonable circumstance during which the contents of XEN_CHANGESET
should change.
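
For reference, the $(shell ...) fallback under discussion boils down to the
sh logic below (a sketch, not the Makefile itself; note that the patch's
'&> /dev/null' is a bashism, while make runs $(shell ...) through /bin/sh,
so the portable spelling is '>/dev/null 2>&1'):

```shell
# Mirror of the XEN_CHANGESET fallback in plain POSIX sh: if hg is missing
# or we are not inside a Mercurial checkout, report "unavailable" instead
# of failing the build.
get_changeset() {
    hg root >/dev/null 2>&1 \
        && hg parents --template "{date|date} {rev}:{node|short}" \
        || echo "unavailable"
}

get_changeset
```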

-- 
Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
T: +44 (0)1223 225 900, http://www.citrix.com


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 09:47:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 09:47:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwVVW-0001Pl-GR; Wed, 01 Aug 2012 09:46:22 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <fantonifabio@tiscali.it>) id 1SwVVV-0001Pg-7f
	for xen-devel@lists.xensource.com; Wed, 01 Aug 2012 09:46:21 +0000
Received: from [85.158.138.51:23090] by server-3.bemta-3.messagelabs.com id
	F3/04-08301-CEAF8105; Wed, 01 Aug 2012 09:46:20 +0000
X-Env-Sender: fantonifabio@tiscali.it
X-Msg-Ref: server-7.tower-174.messagelabs.com!1343814379!20934277!1
X-Originating-IP: [94.23.245.208]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31782 invoked from network); 1 Aug 2012 09:46:19 -0000
Received: from lnx3.fantu.it (HELO lnx3.fantu.it) (94.23.245.208)
	by server-7.tower-174.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 1 Aug 2012 09:46:19 -0000
Received: from localhost (localhost.localdomain [127.0.0.1])
	by lnx3.fantu.it (Postfix) with ESMTP id C38F64018E2
	for <xen-devel@lists.xensource.com>;
	Wed,  1 Aug 2012 11:46:18 +0200 (CEST)
X-Virus-Scanned: Debian amavisd-new at lnx3.fantu.it
Received: from lnx3.fantu.it ([127.0.0.1])
	by localhost (lnx3.fantu.it [127.0.0.1]) (amavisd-new, port 10024)
	with ESMTP id NRDMJd468-oy for <xen-devel@lists.xensource.com>;
	Wed,  1 Aug 2012 11:46:18 +0200 (CEST)
Received: from [192.168.178.50]
	(host73-164-dynamic.56-82-r.retail.telecomitalia.it [82.56.164.73])
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(No client certificate requested)
	(Authenticated sender: prova@fantu.it)
	by lnx3.fantu.it (Postfix) with ESMTPSA id F1B754017B0
	for <xen-devel@lists.xensource.com>;
	Wed,  1 Aug 2012 11:46:17 +0200 (CEST)
Message-ID: <5018FAE5.6070305@tiscali.it>
Date: Wed, 01 Aug 2012 11:46:13 +0200
From: Fabio Fantoni <fantonifabio@tiscali.it>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:14.0) Gecko/20120713 Thunderbird/14.0
MIME-Version: 1.0
To: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: [Xen-devel] Bug report about Windows 7 pro 64 bit domU on
 xen-unstable dom0 with qemu traditional
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: fantonifabio@tiscali.it
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8250863752639961411=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a digitally signed message in MIME format.

--===============8250863752639961411==
Content-Type: multipart/signed; protocol="application/pkcs7-signature"; micalg=sha1; boundary="------------ms090008010005060308030904"

This is a digitally signed message in MIME format.

--------------ms090008010005060308030904
Content-Type: text/plain; charset=ISO-8859-15; format=flowed
Content-Transfer-Encoding: quoted-printable

Because save/restore and cd-insert on qemu-xen are not working for now,
I had to fall back on qemu-xen-traditional.
Dom0 is Wheezy 64 bit with Xen 4.2.0-rc1 built from source.
I tested a Windows 7 Pro 64 bit domU with GPLPV drivers (last build, 357).
xl create is working.
xl shutdown is working.
vnc is working, but not with a vfb line in the configuration file.
save/restore is working, but after a restore the network is up yet not functional.
The cdrom is not working: the device appears empty even when a disc is present.

Below are the domU configuration file and the log; if you need more
information, tell me and I will post it.

------------------------------------------------------------------------------
/etc/xen/W7.cfg
------------------------------------------------------------------------------
name='W7'
builder="hvm"
memory=2048
vcpus=2
vif=['bridge=xenbr0']
#vfb=['vnc=1,vncunused=1,vnclisten=0.0.0.0,keymap=it']
disk=['/mnt/vm/disks/W7.disk1.xm,raw,hda,rw','/dev/sr0,raw,hdb,ro,cdrom']
boot='cd'
device_model_version="qemu-xen-traditional"
vnc=1
vncunused=1
vnclisten="0.0.0.0"
keymap="it"
#spice=1
#spicehost="0.0.0.0"
spiceport=6000
#spicepasswd='test'
#spicedisable_ticketing=1
#spiceagent_mouse = 0
#on_poweroff="destroy"
on_reboot="restart"
on_crash="destroy"
#stdvga=0
#qxl=1
------------------------------------------------------------------------------

------------------------------------------------------------------------------
/var/log/xen/qemu-dm-W7.log
------------------------------------------------------------------------------
domid: 9
-videoram option does not work with cirrus vga device model. Videoram set to 4M.
Using file /dev/xen/blktap-2/tapdev0 in read-write mode
Using file /dev/sr0 in read-only mode
Watching /local/domain/0/device-model/9/logdirty/cmd
Watching /local/domain/0/device-model/9/command
Watching /local/domain/9/cpu
qemu_map_cache_init nr_buckets = 10000 size 4194304
shared page at pfn feffd
buffered io page at pfn feffb
Guest uuid = 6dd9db74-ec31-47a4-a2af-43e382d2ec21
populating video RAM at ff000000
mapping video RAM from ff000000
Register xen platform.
Done register platform.
platform_fixed_ioport: changed ro/rw state of ROM memory area. now is rw state.
xs_read(/local/domain/0/device-model/9/xen_extended_power_mgmt): read error
xs_read(): vncpasswd get error. /vm/6dd9db74-ec31-47a4-a2af-43e382d2ec21/vncpasswd.
medium change watch on `hdb' (index: 1): /dev/sr0
I/O request not ready: 0, ptr: 0, port: 0, data: 0, count: 0, size: 0
Log-dirty: no command yet.
I/O request not ready: 0, ptr: 0, port: 0, data: 0, count: 0, size: 0
vcpu-set: watch node error.
xs_read(/local/domain/9/log-throttling): read error
qemu: ignoring not-understood drive `/local/domain/9/log-throttling'
medium change watch on `/local/domain/9/log-throttling' - unknown device, ignored
cirrus vga map change while on lfb mode
mapping vram to f0000000 - f0400000
platform_fixed_ioport: changed ro/rw state of ROM memory area. now is rw state.
platform_fixed_ioport: changed ro/rw state of ROM memory area. now is ro state.
Unknown PV product 2 loaded in guest
PV driver build 1
region type 1 at [c100,c200).
region type 0 at [f3001000,f3001100).
squash iomem [f3001000, f3001100).
------------------------------------------------------------------------------


--------------ms090008010005060308030904--


--===============8250863752639961411==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8250863752639961411==--


From xen-devel-bounces@lists.xen.org Wed Aug 01 09:47:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 09:47:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwVVW-0001Pl-GR; Wed, 01 Aug 2012 09:46:22 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <fantonifabio@tiscali.it>) id 1SwVVV-0001Pg-7f
	for xen-devel@lists.xensource.com; Wed, 01 Aug 2012 09:46:21 +0000
Received: from [85.158.138.51:23090] by server-3.bemta-3.messagelabs.com id
	F3/04-08301-CEAF8105; Wed, 01 Aug 2012 09:46:20 +0000
X-Env-Sender: fantonifabio@tiscali.it
X-Msg-Ref: server-7.tower-174.messagelabs.com!1343814379!20934277!1
X-Originating-IP: [94.23.245.208]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31782 invoked from network); 1 Aug 2012 09:46:19 -0000
Received: from lnx3.fantu.it (HELO lnx3.fantu.it) (94.23.245.208)
	by server-7.tower-174.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 1 Aug 2012 09:46:19 -0000
Received: from localhost (localhost.localdomain [127.0.0.1])
	by lnx3.fantu.it (Postfix) with ESMTP id C38F64018E2
	for <xen-devel@lists.xensource.com>;
	Wed,  1 Aug 2012 11:46:18 +0200 (CEST)
X-Virus-Scanned: Debian amavisd-new at lnx3.fantu.it
Received: from lnx3.fantu.it ([127.0.0.1])
	by localhost (lnx3.fantu.it [127.0.0.1]) (amavisd-new, port 10024)
	with ESMTP id NRDMJd468-oy for <xen-devel@lists.xensource.com>;
	Wed,  1 Aug 2012 11:46:18 +0200 (CEST)
Received: from [192.168.178.50]
	(host73-164-dynamic.56-82-r.retail.telecomitalia.it [82.56.164.73])
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(No client certificate requested)
	(Authenticated sender: prova@fantu.it)
	by lnx3.fantu.it (Postfix) with ESMTPSA id F1B754017B0
	for <xen-devel@lists.xensource.com>;
	Wed,  1 Aug 2012 11:46:17 +0200 (CEST)
Message-ID: <5018FAE5.6070305@tiscali.it>
Date: Wed, 01 Aug 2012 11:46:13 +0200
From: Fabio Fantoni <fantonifabio@tiscali.it>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:14.0) Gecko/20120713 Thunderbird/14.0
MIME-Version: 1.0
To: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: [Xen-devel] Bug report about Windows 7 pro 64 bit domU on
 xen-unstable dom0 with qemu traditional
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: fantonifabio@tiscali.it
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8250863752639961411=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a digitally signed message in MIME format.

--===============8250863752639961411==
Content-Type: multipart/signed; protocol="application/pkcs7-signature"; micalg=sha1; boundary="------------ms090008010005060308030904"

This is a digitally signed message in MIME format.

--------------ms090008010005060308030904
Content-Type: text/plain; charset=ISO-8859-15; format=flowed
Content-Transfer-Encoding: quoted-printable

Because save/restore and cd-insert are not working on qemu-xen for now, I had
to fall back to qemu-xen-traditional.
Dom0 is Wheezy 64 bit with Xen 4.2.0-rc1 built from source.
I tested a Windows 7 Pro 64 bit domU with GPLPV (latest build, 357).
xl create is working.
xl shutdown is working.
vnc is working, but not with a vfb line in the configuration file.
save/restore is working, but after a restore the network is up yet not working.
cdrom is not working: the device appears empty even when a cdrom is present.

Below are the contents of the domU configuration file and the log; if you
need more information, tell me and I will post it.

------------------------------------------------------------------------------------------------
/etc/xen/W7.cfg
------------------------------------------------------------------------------------------------
name='W7'
builder="hvm"
memory=2048
vcpus=2
vif=['bridge=xenbr0']
#vfb=['vnc=1,vncunused=1,vnclisten=0.0.0.0,keymap=it']
disk=['/mnt/vm/disks/W7.disk1.xm,raw,hda,rw','/dev/sr0,raw,hdb,ro,cdrom']
boot='cd'
device_model_version="qemu-xen-traditional"
vnc=1
vncunused=1
vnclisten="0.0.0.0"
keymap="it"
#spice=1
#spicehost="0.0.0.0"
spiceport=6000
#spicepasswd='test'
#spicedisable_ticketing=1
#spiceagent_mouse = 0
#on_poweroff="destroy"
on_reboot="restart"
on_crash="destroy"
#stdvga=0
#qxl=1
------------------------------------------------------------------------------------------------

------------------------------------------------------------------------------------------------
/var/log/xen/qemu-dm-W7.log
------------------------------------------------------------------------------------------------
domid: 9
-videoram option does not work with cirrus vga device model. Videoram set to 4M.
Using file /dev/xen/blktap-2/tapdev0 in read-write mode
Using file /dev/sr0 in read-only mode
Watching /local/domain/0/device-model/9/logdirty/cmd
Watching /local/domain/0/device-model/9/command
Watching /local/domain/9/cpu
qemu_map_cache_init nr_buckets = 10000 size 4194304
shared page at pfn feffd
buffered io page at pfn feffb
Guest uuid = 6dd9db74-ec31-47a4-a2af-43e382d2ec21
populating video RAM at ff000000
mapping video RAM from ff000000
Register xen platform.
Done register platform.
platform_fixed_ioport: changed ro/rw state of ROM memory area. now is rw state.
xs_read(/local/domain/0/device-model/9/xen_extended_power_mgmt): read error
xs_read(): vncpasswd get error. /vm/6dd9db74-ec31-47a4-a2af-43e382d2ec21/vncpasswd.
medium change watch on `hdb' (index: 1): /dev/sr0
I/O request not ready: 0, ptr: 0, port: 0, data: 0, count: 0, size: 0
Log-dirty: no command yet.
I/O request not ready: 0, ptr: 0, port: 0, data: 0, count: 0, size: 0
vcpu-set: watch node error.
xs_read(/local/domain/9/log-throttling): read error
qemu: ignoring not-understood drive `/local/domain/9/log-throttling'
medium change watch on `/local/domain/9/log-throttling' - unknown device, ignored
cirrus vga map change while on lfb mode
mapping vram to f0000000 - f0400000
platform_fixed_ioport: changed ro/rw state of ROM memory area. now is rw state.
platform_fixed_ioport: changed ro/rw state of ROM memory area. now is ro state.
Unknown PV product 2 loaded in guest
PV driver build 1
region type 1 at [c100,c200).
region type 0 at [f3001000,f3001100).
squash iomem [f3001000, f3001100).
------------------------------------------------------------------------------------------------


--------------ms090008010005060308030904
Content-Type: application/pkcs7-signature; name="smime.p7s"
Content-Transfer-Encoding: base64
Content-Disposition: attachment; filename="smime.p7s"
Content-Description: S/MIME cryptographic signature

MIAGCSqGSIb3DQEHAqCAMIACAQExCzAJBgUrDgMCGgUAMIAGCSqGSIb3DQEHAQAAoIINhjCC
BjQwggQcoAMCAQICASAwDQYJKoZIhvcNAQEFBQAwfTELMAkGA1UEBhMCSUwxFjAUBgNVBAoT
DVN0YXJ0Q29tIEx0ZC4xKzApBgNVBAsTIlNlY3VyZSBEaWdpdGFsIENlcnRpZmljYXRlIFNp
Z25pbmcxKTAnBgNVBAMTIFN0YXJ0Q29tIENlcnRpZmljYXRpb24gQXV0aG9yaXR5MB4XDTA3
MTAyNDIxMDI1NVoXDTE3MTAyNDIxMDI1NVowgYwxCzAJBgNVBAYTAklMMRYwFAYDVQQKEw1T
dGFydENvbSBMdGQuMSswKQYDVQQLEyJTZWN1cmUgRGlnaXRhbCBDZXJ0aWZpY2F0ZSBTaWdu
aW5nMTgwNgYDVQQDEy9TdGFydENvbSBDbGFzcyAyIFByaW1hcnkgSW50ZXJtZWRpYXRlIENs
aWVudCBDQTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMsohUWcASz7GfKrpTOM
KqANy9BV7V0igWdGxA8IU77L3aTxErQ+fcxtDYZ36Z6GH0YFn7fq5RADteP0AYzrCA+EQTfi
8q1+kA3m0nwtwXG94M5sIqsvs7lRP1aycBke/s5g9hJHryZ2acScnzczjBCAo7X1v5G3yw8M
DP2m2RCye0KfgZ4nODerZJVzhAlOD9YejvAXZqHksw56HzElVIoYSZ3q4+RJuPXXfIoyby+Y
2m1E+YzX5iCZXBx05gk6MKAW1vaw4/v2OOLy6FZH3XHHtOkzUreG//CsFnB9+uaYSlR65cdG
zTsmoIK8WH1ygoXhRBm98SD7Hf/r3FELNvUCAwEAAaOCAa0wggGpMA8GA1UdEwEB/wQFMAMB
Af8wDgYDVR0PAQH/BAQDAgEGMB0GA1UdDgQWBBSuVYNv7DHKufcd+q9rMfPIHeOsuzAfBgNV
HSMEGDAWgBROC+8apEBbpRdphzDKNGhD0EGu8jBmBggrBgEFBQcBAQRaMFgwJwYIKwYBBQUH
MAGGG2h0dHA6Ly9vY3NwLnN0YXJ0c3NsLmNvbS9jYTAtBggrBgEFBQcwAoYhaHR0cDovL3d3
dy5zdGFydHNzbC5jb20vc2ZzY2EuY3J0MFsGA1UdHwRUMFIwJ6AloCOGIWh0dHA6Ly93d3cu
c3RhcnRzc2wuY29tL3Nmc2NhLmNybDAnoCWgI4YhaHR0cDovL2NybC5zdGFydHNzbC5jb20v
c2ZzY2EuY3JsMIGABgNVHSAEeTB3MHUGCysGAQQBgbU3AQIBMGYwLgYIKwYBBQUHAgEWImh0
dHA6Ly93d3cuc3RhcnRzc2wuY29tL3BvbGljeS5wZGYwNAYIKwYBBQUHAgEWKGh0dHA6Ly93
d3cuc3RhcnRzc2wuY29tL2ludGVybWVkaWF0ZS5wZGYwDQYJKoZIhvcNAQEFBQADggIBADqp
Jw3I07QWke9plNBpxUxcffc7nUrIQpJHDci91DFG7fVhHRkMZ1J+BKg5UNUxIFJ2Z9B90Mic
c/NXcs7kPBRdn6XGO/vPc87Y6R+cWS9Nc9+fp3Enmsm94OxOwI9wn8qnr/6o3mD4noP9Jphw
UPTXwHovjavRnhUQHLfo/i2NG0XXgTHXS2Xm0kVUozXqpYpAdumMiB/vezj1QHQJDmUdPYMc
p+reg9901zkyT3fDW/ivJVv6pWtkh6Pw2ytZT7mvg7YhX3V50Nv860cV11mocUVcqBLv0gcT
+HBDYtbuvexNftwNQKD5193A7zN4vG7CTYkXxytSjKuXrpEatEiFPxWgb84nVj25SU5q/r1X
hwby6mLhkbaXslkVtwEWT3Van49rKjlK4XrUKYYWtnfzq6aSak5u0Vpxd1rY79tWhD3EdCvO
hNz/QplNa+VkIsrcp7+8ZhP1l1b2U6MaxIVteuVMD3X0vziIwr7jxYae9FZjbxlpUemqXjcC
0QaFfN7qI0JsQMALL7iGRBg7K0CoOBzECdD3fuZil5kU/LP9cr1BK31U0Uy651bFnAMMMkqh
AChIbn0ei72VnbpSsrrSdF0BAGYQ8vyHae5aCg+H75dVCV33K6FuxZrf09yTz+Vx/PkdRUYk
XmZz/OTfyJXsUOUXrym6KvI2rYpccSk5MIIHSjCCBjKgAwIBAgICHmMwDQYJKoZIhvcNAQEF
BQAwgYwxCzAJBgNVBAYTAklMMRYwFAYDVQQKEw1TdGFydENvbSBMdGQuMSswKQYDVQQLEyJT
ZWN1cmUgRGlnaXRhbCBDZXJ0aWZpY2F0ZSBTaWduaW5nMTgwNgYDVQQDEy9TdGFydENvbSBD
bGFzcyAyIFByaW1hcnkgSW50ZXJtZWRpYXRlIENsaWVudCBDQTAeFw0xMjAzMTgyMjE0MzBa
Fw0xNDAzMjAwODU3MDlaMIGMMRkwFwYDVQQNExBlQjZPRTM3UlJOUHlsNW0yMQswCQYDVQQG
EwJJVDEQMA4GA1UECBMHQmVyZ2FtbzEQMA4GA1UEBxMHUm92ZXR0YTEWMBQGA1UEAxMNRmFi
aW8gRmFudG9uaTEmMCQGCSqGSIb3DQEJARYXZmFudG9uaWZhYmlvQHRpc2NhbGkuaXQwggEi
MA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1XhckXsX23vgJq76s2f0KT8U8Msov5QgV
10eQBb2wL/TzcmqtZotI7ztKVhio3ehHg+mfu+3EqOkX9Umgut8rP0bPi7AGjkPXbOTT/cSU
Xz2Kw31VGOmiOVoUFGvpQitp3weCkhUJLBipI8EpNyBXpjtQ9yCpnIAqfuc77ybfSnCy7tTR
MBq1BUkfjH1+GL45riosuS4+F+MSUvlYzLiT4rAduAX1Y2IuORDsf9Bce8GBxa6syP9rCyzl
Vk7DIX5k8j2vlnyRATIypn5CQLQxGT6e0f6ac4gvWOHwO2QEBsmZKKs1ZidE4q/9OoNXYX6A
jnHtp1H1vcrek/vVcs19AgMBAAGjggOyMIIDrjAJBgNVHRMEAjAAMAsGA1UdDwQEAwIEsDAd
BgNVHSUEFjAUBggrBgEFBQcDAgYIKwYBBQUHAwQwHQYDVR0OBBYEFFan8cbEWWBmSTWFtLk2
YNdAcGUbMB8GA1UdIwQYMBaAFK5Vg2/sMcq59x36r2sx88gd46y7MCIGA1UdEQQbMBmBF2Zh
bnRvbmlmYWJpb0B0aXNjYWxpLml0MIICIQYDVR0gBIICGDCCAhQwggIQBgsrBgEEAYG1NwEC
AjCCAf8wLgYIKwYBBQUHAgEWImh0dHA6Ly93d3cuc3RhcnRzc2wuY29tL3BvbGljeS5wZGYw
NAYIKwYBBQUHAgEWKGh0dHA6Ly93d3cuc3RhcnRzc2wuY29tL2ludGVybWVkaWF0ZS5wZGYw
gfcGCCsGAQUFBwICMIHqMCcWIFN0YXJ0Q29tIENlcnRpZmljYXRpb24gQXV0aG9yaXR5MAMC
AQEagb5UaGlzIGNlcnRpZmljYXRlIHdhcyBpc3N1ZWQgYWNjb3JkaW5nIHRvIHRoZSBDbGFz
cyAyIFZhbGlkYXRpb24gcmVxdWlyZW1lbnRzIG9mIHRoZSBTdGFydENvbSBDQSBwb2xpY3ks
IHJlbGlhbmNlIG9ubHkgZm9yIHRoZSBpbnRlbmRlZCBwdXJwb3NlIGluIGNvbXBsaWFuY2Ug
b2YgdGhlIHJlbHlpbmcgcGFydHkgb2JsaWdhdGlvbnMuMIGcBggrBgEFBQcCAjCBjzAnFiBT
dGFydENvbSBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTADAgECGmRMaWFiaWxpdHkgYW5kIHdh
cnJhbnRpZXMgYXJlIGxpbWl0ZWQhIFNlZSBzZWN0aW9uICJMZWdhbCBhbmQgTGltaXRhdGlv
bnMiIG9mIHRoZSBTdGFydENvbSBDQSBwb2xpY3kuMDYGA1UdHwQvMC0wK6ApoCeGJWh0dHA6
Ly9jcmwuc3RhcnRzc2wuY29tL2NydHUyLWNybC5jcmwwgY4GCCsGAQUFBwEBBIGBMH8wOQYI
KwYBBQUHMAGGLWh0dHA6Ly9vY3NwLnN0YXJ0c3NsLmNvbS9zdWIvY2xhc3MyL2NsaWVudC9j
YTBCBggrBgEFBQcwAoY2aHR0cDovL2FpYS5zdGFydHNzbC5jb20vY2VydHMvc3ViLmNsYXNz
Mi5jbGllbnQuY2EuY3J0MCMGA1UdEgQcMBqGGGh0dHA6Ly93d3cuc3RhcnRzc2wuY29tLzAN
BgkqhkiG9w0BAQUFAAOCAQEAjzHNqifpDVMkH1TSPFZVIiQ4fh49/V5JMpstgqEZPDaDe5r8
h+fMBZtUa6LLMco03Z9BNEXlqlXKiFk8feVYB8obEjz7YYq1XhO9q7JUmkSs0WGIH4xU0XB1
kPC8T8H+5E//84poYSFHE4pA+Ff68UANP2/EuFJWMjegiefnOr8aM42OAcUkjEWSlautIIX8
oD2GizwQYjWdDDjEonbuMKFP6rY2xGI3PSLI3IVU2opb0/itNhQui3WRxafloJqTlriY8m8+
qSLr2HGftbBlbyzVWB8o//aW0H0LMabjkIvrm7Zmh2vcCxiSxGBwYASuSYXGuQiKAgGptUs1
XJLZuzGCA9owggPWAgEBMIGTMIGMMQswCQYDVQQGEwJJTDEWMBQGA1UEChMNU3RhcnRDb20g
THRkLjErMCkGA1UECxMiU2VjdXJlIERpZ2l0YWwgQ2VydGlmaWNhdGUgU2lnbmluZzE4MDYG
A1UEAxMvU3RhcnRDb20gQ2xhc3MgMiBQcmltYXJ5IEludGVybWVkaWF0ZSBDbGllbnQgQ0EC
Ah5jMAkGBSsOAwIaBQCgggIbMBgGCSqGSIb3DQEJAzELBgkqhkiG9w0BBwEwHAYJKoZIhvcN
AQkFMQ8XDTEyMDgwMTA5NDYxM1owIwYJKoZIhvcNAQkEMRYEFBA0Lk4SYZz1pEhOnwPbjEjz
+4ZFMGwGCSqGSIb3DQEJDzFfMF0wCwYJYIZIAWUDBAEqMAsGCWCGSAFlAwQBAjAKBggqhkiG
9w0DBzAOBggqhkiG9w0DAgICAIAwDQYIKoZIhvcNAwICAUAwBwYFKw4DAgcwDQYIKoZIhvcN
AwICASgwgaQGCSsGAQQBgjcQBDGBljCBkzCBjDELMAkGA1UEBhMCSUwxFjAUBgNVBAoTDVN0
YXJ0Q29tIEx0ZC4xKzApBgNVBAsTIlNlY3VyZSBEaWdpdGFsIENlcnRpZmljYXRlIFNpZ25p
bmcxODA2BgNVBAMTL1N0YXJ0Q29tIENsYXNzIDIgUHJpbWFyeSBJbnRlcm1lZGlhdGUgQ2xp
ZW50IENBAgIeYzCBpgYLKoZIhvcNAQkQAgsxgZaggZMwgYwxCzAJBgNVBAYTAklMMRYwFAYD
VQQKEw1TdGFydENvbSBMdGQuMSswKQYDVQQLEyJTZWN1cmUgRGlnaXRhbCBDZXJ0aWZpY2F0
ZSBTaWduaW5nMTgwNgYDVQQDEy9TdGFydENvbSBDbGFzcyAyIFByaW1hcnkgSW50ZXJtZWRp
YXRlIENsaWVudCBDQQICHmMwDQYJKoZIhvcNAQEBBQAEggEAfFN3ReNQBhevfSpAfTyYelXv
0WFNAlYO1jYp+MCoroN68KmfyVUu+C/Ff++ZdVpZaeBhfm8zddQq6bhTzfMt10FcZn6qL0Nk
U8H2YhE0TkM58RkrnaExlTQbTOGIuIW2IvfLELo4V0UE+LPKE4YN2cy39Idq1HK4/97sKNJU
lWyVzoPnSL/7VZ6NqAfiPuQvp3kRz33mqExTmQWiBoScy01Q+1YPXCY2uklvR6ZtCpQHTEM5
Pxl+0N96DmDcQIfanzVuWeU0VEEzFKUr/RGu4+k8/4YKXaAwwdZ25LnphsyKuqDtx+jUb9jP
vaH85xacohWAPk87ZZ09CCw/L9MjrAAAAAAAAA==
--------------ms090008010005060308030904--


--===============8250863752639961411==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8250863752639961411==--


From xen-devel-bounces@lists.xen.org Wed Aug 01 09:59:22 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 09:59:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwVhW-0001ZW-Nn; Wed, 01 Aug 2012 09:58:46 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SwVhU-0001ZR-Ji
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 09:58:45 +0000
Received: from [85.158.143.99:12269] by server-1.bemta-4.messagelabs.com id
	DE/8C-24392-3DDF8105; Wed, 01 Aug 2012 09:58:43 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-216.messagelabs.com!1343815119!29389983!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNjg2MjA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30320 invoked from network); 1 Aug 2012 09:58:40 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 09:58:40 -0000
X-IronPort-AV: E=Sophos;i="4.77,693,1336363200"; d="scan'208";a="203779554"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 05:57:38 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 1 Aug 2012 05:57:38 -0400
Received: from cosworth.uk.xensource.com ([10.80.16.52] ident=ianc)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1SwVgP-00055K-UY;
	Wed, 01 Aug 2012 10:57:37 +0100
MIME-Version: 1.0
X-Mercurial-Node: 84f0686ebcbfb0fa3a437def65513038be389736
Message-ID: <84f0686ebcbfb0fa3a43.1343815057@cosworth.uk.xensource.com>
User-Agent: Mercurial-patchbomb/1.6.4
Date: Wed, 1 Aug 2012 10:57:37 +0100
From: Ian Campbell <ian.campbell@citrix.com>
To: xen-devel@lists.xen.org
Cc: ian.jackson@citrix.com, Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH V2] libxl: support custom block hotplug scripts
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1343815041 -3600
# Node ID 84f0686ebcbfb0fa3a437def65513038be389736
# Parent  006588b1bbb609186df45770a3c16d3028b54778
libxl: support custom block hotplug scripts

These are provided using the "script=" syntax described in
docs/misc/xl-disk-configuration.txt.
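
For concreteness, a disk stanza using this syntax might look like the
following (the target value is hypothetical; the accepted keys are the ones
documented in docs/misc/xl-disk-configuration.txt):

```
disk = [ 'vdev=xvda,access=w,script=block-iscsi,target=iqn.2001-05.com.example:store0' ]
```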

The existing hotplug scripts currently conflate two different concepts, namely
that of making a datapath available in the backend domain (logging into iSCSI
LUNs and the like) and that of actually connecting that datapath to a Xen
backend path (e.g. writing "physical-device" node in xenstore to bring up
blkback).

For this reason the script support implemented here is only supported in
conjunction with backendtype=phy.

Eventually we hope to rework the hotplug scripts to separate the two concepts,
but that is not 4.2 material.

In addition there are some other subtleties:

 - Previously in the blktap case we would add "script = .../blktap" to the
   backend flex array, but then jumped to the PHY case which added
   "script = .../block" too. The block one takes precedence since it comes
   second.

   This was, accidentally, correct. The blktap script is for blktap1 devices
   and not blktap2 devices. libxl completely manages the blktap2 side of things
   without resorting to hotplug scripts and creates a blkback device directly.
   Therefore the "block" script is always the correct one to call. Custom
   scripts are not supported in this context.

 - libxl should not write the "physical-device" node. This is the
   responsibility of the block script. Writing the "physical-device" node in
   libxl basically completely short-cuts the standard block hotplug script
   which uses "physical-device" to know if it has run already or not.

   In the case of more complex scripts libxl cannot know the right value to
   write here anyway, in particular the device may not exist until after the
   script is called.

   This change has the side effect of re-enabling the device-sharing checks
   in the default block script, which I have tested and which now cause
   libxl to abort properly, because libxl now checks for hotplug script
   errors.

   There is no sharing check for blktap2 since even if you reuse the same vhd
   the resulting tap device is different. I would have preferred to simply
   write the "physical-device" node for the blktap2 case but the hotplug script
   infrastructure is not currently setup to handle LIBXL__DEVICE_KIND_VBD
   devices without a hotplug script (backendtype phy and tap both end up as
   KIND_VBD). Changing this was more surgery than I was happy doing for 4.2 and
   therefore I have simply hardcoded to the block script for the
   LIBXL_DISK_BACKEND_TAP case.
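
   The "already ran" marker role of "physical-device" can be pictured with a
   small stand-alone sketch; xenstore_read below is a stub standing in for the
   real xenstore-read tool, and the paths are invented for illustration:

```shell
#!/bin/sh
# Stub for xenstore-read: pretend exactly one backend already has the node.
xenstore_read() {
    case "$1" in
        backend/vbd/1/51712/physical-device) echo "fe:0" ;;
        *) return 1 ;;
    esac
}

# A block script can use the presence of "physical-device" to decide whether
# a previous invocation (or, formerly, libxl itself) already set up the device.
hotplug() {
    if xenstore_read "$1/physical-device" >/dev/null 2>&1; then
        echo "skip $1"       # node present: nothing to do
    else
        echo "configure $1"  # node absent: do the real hotplug work
    fi
}

hotplug backend/vbd/1/51712   # prints: skip backend/vbd/1/51712
hotplug backend/vbd/2/51728   # prints: configure backend/vbd/2/51728
```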

 - libxl__device_disk_set_backend running against a phy device with a script
   cannot stat the device to check its properties since it may not exist until
   the script is run. Therefore I have special cased this in disk_try_backend
   to simply assume that backend == phy is always ok if a script was
   configured.  Similarly the other backend types are always rejected if a
   script was configured.

   Note that the reason for implementing the default script behaviour in
   device_disk_add instead of libxl__device_disk_setdefault is because we need
   to be able to tell when the script was user-supplied rather than defaulted
   by libxl in order to correctly implement the above. The setdefault function
   must be idempotent so we cannot simply update disk->script.

   I suspect that for 4.3 a script member should be added to libxl__device,
   this would also help in the case above of handling devices with no script in
   a consistent manner. This is not 4.2 material.

 - When the block script falls through and shells out to a block-$type script
   it used to pass "$node"; however, the only place this was assigned was in the
   remove+phy case (in which case it contains the file:// derived /dev/loopN
   device), and in that case the script exits without falling through to the
   block-$type case.

   Since libxl never creates a type other than phy this never happens in
   practice anyway and we now call the correct block-$type script directly.
   But fix it up anyway since it is confusing.

 - The block-nbd and block-enbd scripts which we supply appear to be broken WRT
   the hotplug calling convention, in that they seem to expect a command line
   parameter (perhaps the $node described above) rather than reading the
   appropriate node from xenstore.

   I rather suspect this was broken by 7774:e2e7f47e6f79 in November 2005. I
   think it is safe to say no one is using these scripts! I haven't fixed this
   here. It would be good to track down some working scripts and either
   incorporate them or defer to them in their existing home (e.g. if they live
   somewhere useful like the nbd tools package).

 - Added a few block script related entries to check-xl-disk-parse from
   http://backdrift.org/xen-block-iscsi-script-with-multipath-support and
   http://lists.linbit.com/pipermail/drbd-user/2008-September/010221.html /
   http://www.drbd.org/users-guide-emb/s-xen-configure-domu.html
   (and snuck in another interesting empty CDROM case)

   This highlighted two bugs in the libxlu disk parser handling of the
   deprecated "<script>:" prefix:

   - It was failing to prefix with "block-" to construct the actual script name

   - The regex for matching iscsi or drdb or e?nbd was incorrect

 - Use libxl__abs_path for the nic script too. Just because the existing code
   nearly tricked me into repeating the mistake.
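
The first of those parser bugs can be illustrated with a tiny sketch of the
intended name construction (the drbd spec reuses the example from the tests
above; the iscsi target is made up):

```shell
#!/bin/sh
# A deprecated "<script>:" disk spec selects hotplug script "block-<script>":
# strip everything from the first ':' onwards and prepend "block-".
script_for() { echo "block-${1%%:*}"; }

script_for "drbd:app01,hda,w"                # prints: block-drbd
script_for "iscsi:iqn.example:lun0,xvda,w"   # prints: block-iscsi
```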

I have tested with a custom block script which uses "lvchange -a" to
dynamically add/remove the referenced device (simulates iSCSI login/logout
without requiring me to faff around setting up an iSCSI target). I also tested
on a blktap2 system.
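
A toy model of such a script, with the volume name invented and echo standing
in for actually running lvchange (a real script would also read its
parameters from xenstore):

```shell
#!/bin/sh
# Activate the backing LV on "add", deactivate it on "remove".
block_lv() {
    case "$1" in
        add)    echo "lvchange -a y vg0/guest-disk" ;;
        remove) echo "lvchange -a n vg0/guest-disk" ;;
        *)      echo "usage: add|remove" >&2; return 1 ;;
    esac
}

block_lv add      # prints: lvchange -a y vg0/guest-disk
block_lv remove   # prints: lvchange -a n vg0/guest-disk
```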

I haven't directly tested anything more complex like iscsi: or nbd: other than
what check-xl-disk-parse exercises.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
v2:
  - observe that script= requires backendtype=phy and substantially rework to
    correctly reflect that.
  - remove unintentional braces change in SAVESTRING macro

diff -r 006588b1bbb6 -r 84f0686ebcbf tools/hotplug/Linux/block
--- a/tools/hotplug/Linux/block	Thu Jul 26 17:11:07 2012 +0100
+++ b/tools/hotplug/Linux/block	Wed Aug 01 10:57:21 2012 +0100
@@ -342,4 +342,4 @@ esac
 
 # If we've reached here, $t is neither phy nor file, so fire a helper script.
 [ -x ${XEN_SCRIPT_DIR}/block-"$t" ] && \
-  ${XEN_SCRIPT_DIR}/block-"$t" "$command" $node
+  ${XEN_SCRIPT_DIR}/block-"$t" "$command"
diff -r 006588b1bbb6 -r 84f0686ebcbf tools/libxl/check-xl-disk-parse
--- a/tools/libxl/check-xl-disk-parse	Thu Jul 26 17:11:07 2012 +0100
+++ b/tools/libxl/check-xl-disk-parse	Wed Aug 01 10:57:21 2012 +0100
@@ -142,5 +142,44 @@ disk: {
 
 EOF
 one 0 vdev=hdc,access=r,devtype=cdrom,format=empty
+one 0 vdev=hdc,access=r,devtype=cdrom
+
+expected <<EOF
+disk: {
+    "backend_domid": 0,
+    "pdev_path": "iqn.2001-05.com.equallogic:0-8a0906-23fe93404-c82797962054a96d-examplehost",
+    "vdev": "xvda",
+    "backend": "unknown",
+    "format": "raw",
+    "script": "block-iscsi",
+    "removable": 0,
+    "readwrite": 1,
+    "is_cdrom": 0
+}
+
+EOF
+
+# http://backdrift.org/xen-block-iscsi-script-with-multipath-support
+one 0 iscsi:iqn.2001-05.com.equallogic:0-8a0906-23fe93404-c82797962054a96d-examplehost,xvda,w
+one 0 vdev=xvda,access=w,script=block-iscsi,target=iqn.2001-05.com.equallogic:0-8a0906-23fe93404-c82797962054a96d-examplehost
+
+expected <<EOF
+disk: {
+    "backend_domid": 0,
+    "pdev_path": "app01",
+    "vdev": "hda",
+    "backend": "unknown",
+    "format": "raw",
+    "script": "block-drbd",
+    "removable": 0,
+    "readwrite": 1,
+    "is_cdrom": 0
+}
+
+EOF
+
+# http://lists.linbit.com/pipermail/drbd-user/2008-September/010221.html
+# http://www.drbd.org/users-guide-emb/s-xen-configure-domu.html
+one 0 drbd:app01,hda,w
 
 complete
diff -r 006588b1bbb6 -r 84f0686ebcbf tools/libxl/libxl.c
--- a/tools/libxl/libxl.c	Thu Jul 26 17:11:07 2012 +0100
+++ b/tools/libxl/libxl.c	Wed Aug 01 10:57:21 2012 +0100
@@ -1795,9 +1795,9 @@ static void device_disk_add(libxl__egc *
     STATE_AO_GC(aodev->ao);
     flexarray_t *front = NULL;
     flexarray_t *back = NULL;
-    char *dev;
+    char *dev, *script;
     libxl__device *device;
-    int major, minor, rc;
+    int rc;
     libxl_ctx *ctx = gc->owner;
     xs_transaction_t t = XBT_NULL;
 
@@ -1832,13 +1832,6 @@ static void device_disk_add(libxl__egc *
             goto out_free;
         }
 
-        if (disk->script) {
-            LIBXL__LOG(ctx, LIBXL__LOG_ERROR, "External block scripts"
-                       " not yet supported, sorry");
-            rc = ERROR_INVAL;
-            goto out_free;
-        }
-
         GCNEW(device);
         rc = libxl__device_from_disk(gc, domid, disk, device);
         if (rc != 0) {
@@ -1850,18 +1843,16 @@ static void device_disk_add(libxl__egc *
         switch (disk->backend) {
             case LIBXL_DISK_BACKEND_PHY:
                 dev = disk->pdev_path;
+
+                script = libxl__abs_path(gc, disk->script ?: "block",
+                                         libxl__xen_script_dir_path());
+
         do_backend_phy:
-                libxl__device_physdisk_major_minor(dev, &major, &minor);
-                flexarray_append(back, "physical-device");
-                flexarray_append(back, libxl__sprintf(gc, "%x:%x", major, minor));
-
                 flexarray_append(back, "params");
                 flexarray_append(back, dev);
 
-                flexarray_append(back, "script");
-                flexarray_append(back, GCSPRINTF("%s/%s",
-                                                 libxl__xen_script_dir_path(),
-                                                 "block"));
+                assert(script);
+                flexarray_append_pair(back, "script", script);
 
                 assert(device->backend_kind == LIBXL__DEVICE_KIND_VBD);
                 break;
@@ -1878,10 +1869,12 @@ static void device_disk_add(libxl__egc *
                     libxl__device_disk_string_of_format(disk->format),
                     disk->pdev_path));
 
-                flexarray_append(back, "script");
-                flexarray_append(back, GCSPRINTF("%s/%s",
-                                                 libxl__xen_script_dir_path(),
-                                                 "blktap"));
+                /*
+                 * tap devices do not support custom block scripts and
+                 * always use the plain block script.
+                 */
+                script = libxl__abs_path(gc, "block",
+                                         libxl__xen_script_dir_path());
 
                 /* now create a phy device to export the device to the guest */
                 goto do_backend_phy;
@@ -2581,13 +2574,10 @@ void libxl__device_nic_add(libxl__egc *e
     flexarray_append(back, "1");
     flexarray_append(back, "state");
     flexarray_append(back, libxl__sprintf(gc, "%d", 1));
-    if (nic->script) {
-        flexarray_append(back, "script");
-        flexarray_append(back, nic->script[0]=='/' ? nic->script
-                         : libxl__sprintf(gc, "%s/%s",
-                                          libxl__xen_script_dir_path(),
-                                          nic->script));
-    }
+    if (nic->script)
+        flexarray_append_pair(back, "script",
+                              libxl__abs_path(gc, nic->script,
+                                              libxl__xen_script_dir_path()));
 
     if (nic->ifname) {
         flexarray_append(back, "vifname");
diff -r 006588b1bbb6 -r 84f0686ebcbf tools/libxl/libxl_device.c
--- a/tools/libxl/libxl_device.c	Thu Jul 26 17:11:07 2012 +0100
+++ b/tools/libxl/libxl_device.c	Wed Aug 01 10:57:21 2012 +0100
@@ -191,18 +191,26 @@ typedef struct {
 } disk_try_backend_args;
 
 static int disk_try_backend(disk_try_backend_args *a,
-                            libxl_disk_backend backend) {
+                            libxl_disk_backend backend)
+ {
+    libxl__gc *gc = a->gc;
     /* returns 0 (ie, DISK_BACKEND_UNKNOWN) on failure, or
      * backend on success */
-    libxl_ctx *ctx = libxl__gc_owner(a->gc);
+    libxl_ctx *ctx = libxl__gc_owner(gc);
+
     switch (backend) {
-
     case LIBXL_DISK_BACKEND_PHY:
         if (!(a->disk->format == LIBXL_DISK_FORMAT_RAW ||
               a->disk->format == LIBXL_DISK_FORMAT_EMPTY)) {
             goto bad_format;
         }
 
+        if (a->disk->script) {
+            LOG(DEBUG, "Disk vdev=%s, uses script=... assuming phy backend",
+                a->disk->vdev);
+            return backend;
+        }
+
         if (libxl__try_phy_backend(a->stab.st_mode))
             return backend;
 
@@ -212,6 +220,8 @@ static int disk_try_backend(disk_try_bac
         return 0;
 
     case LIBXL_DISK_BACKEND_TAP:
+        if (a->disk->script) goto bad_script;
+
         if (!libxl__blktap_enabled(a->gc)) {
             LIBXL__LOG(ctx, LIBXL__LOG_DEBUG, "Disk vdev=%s, backend tap"
                        " unsuitable because blktap not available",
@@ -225,6 +235,7 @@ static int disk_try_backend(disk_try_bac
         return backend;
 
     case LIBXL_DISK_BACKEND_QDISK:
+        if (a->disk->script) goto bad_script;
         return backend;
 
     default:
@@ -242,6 +253,11 @@ static int disk_try_backend(disk_try_bac
                libxl_disk_backend_to_string(backend),
                libxl_disk_format_to_string(a->disk->format));
     return 0;
+
+ bad_script:
+    LOG(DEBUG, "Disk vdev=%s, backend %s not compatible with script=...",
+        a->disk->vdev, libxl_disk_backend_to_string(backend));
+    return 0;
 }
 
 int libxl__device_disk_set_backend(libxl__gc *gc, libxl_device_disk *disk) {
@@ -264,7 +280,7 @@ int libxl__device_disk_set_backend(libxl
             return ERROR_INVAL;
         }
         memset(&a.stab, 0, sizeof(a.stab));
-    } else {
+    } else if (!disk->script) {
         if (stat(disk->pdev_path, &a.stab)) {
             LIBXL__LOG_ERRNO(ctx, LIBXL__LOG_ERROR, "Disk vdev=%s "
                              "failed to stat: %s",
diff -r 006588b1bbb6 -r 84f0686ebcbf tools/libxl/libxlu_disk_l.c
--- a/tools/libxl/libxlu_disk_l.c	Thu Jul 26 17:11:07 2012 +0100
+++ b/tools/libxl/libxlu_disk_l.c	Wed Aug 01 10:57:21 2012 +0100
@@ -366,7 +366,7 @@ struct yy_trans_info
 	flex_int32_t yy_verify;
 	flex_int32_t yy_nxt;
 	};
-static yyconst flex_int16_t yy_acclist[456] =
+static yyconst flex_int16_t yy_acclist[447] =
     {   0,
        24,   24,   26,   22,   23,   25, 8193,   22,   23,   25,
     16385, 8193,   22,   25,16385,   22,   23,   25,   23,   25,
@@ -379,77 +379,76 @@ static yyconst flex_int16_t yy_acclist[4
      8213,   22,16405,   22,   22,   22,   22,   22,   22,   22,
        22,   22,   22,   22,   22,   22,   22,   22,   22,   22,
 
-       22,   24, 8193,   22, 8193,   22, 8193, 8213,   22, 8213,
-       22, 8213,   12,   22,   22,   22,   22,   22,   22,   22,
+       22,   22,   24, 8193,   22, 8193,   22, 8193, 8213,   22,
+     8213,   22, 8213,   12,   22,   22,   22,   22,   22,   22,
        22,   22,   22,   22,   22,   22,   22,   22,   22,   22,
-     8213,   22, 8213,   22, 8213,   12,   22,   17, 8213,   22,
-    16405,   22,   22,   22,   22,   22,   22,   22, 8213,   22,
-    16405,   20, 8213,   22,16405,   22, 8205, 8213,   22,16397,
-    16405,   22,   22, 8208, 8213,   22,16400,16405,   22,   22,
-       22,   22,   17, 8213,   22,   17, 8213,   22,   17,   22,
-       17, 8213,   22,    3,   22,   22,   19, 8213,   22,16405,
-       22,   22,   22,   22,   20, 8213,   22,   20, 8213,   22,
+       22,   22, 8213,   22, 8213,   22, 8213,   12,   22,   17,
+     8213,   22,16405,   22,   22,   22,   22,   22,   22,   22,
+     8206, 8213,   22,16398,16405,   20, 8213,   22,16405,   22,
+     8205, 8213,   22,16397,16405,   22,   22, 8208, 8213,   22,
+    16400,16405,   22,   22,   22,   22,   17, 8213,   22,   17,
+     8213,   22,   17,   22,   17, 8213,   22,    3,   22,   22,
+       19, 8213,   22,16405,   22,   22, 8206, 8213,   22, 8206,
 
-       20,   22,   20, 8213, 8205, 8213,   22, 8205, 8213,   22,
-     8205,   22, 8205, 8213,   22, 8208, 8213,   22, 8208, 8213,
-       22, 8208,   22, 8208, 8213,   22,   22,    9,   22,   17,
-     8213,   22,   17, 8213,   22,   17, 8213,   17,   22,   17,
-       22,    3,   22,   22,   19, 8213,   22,   19, 8213,   22,
-       19,   22,   19, 8213,   22,   18, 8213,   22,16405, 8206,
-     8213,   22,16398,16405,   22,   20, 8213,   22,   20, 8213,
-       22,   20, 8213,   20,   22,   20, 8205, 8213,   22, 8205,
-     8213,   22, 8205, 8213, 8205,   22, 8205,   22, 8208, 8213,
-       22, 8208, 8213,   22, 8208, 8213, 8208,   22, 8208,   22,
+     8213,   22, 8206,   22, 8206, 8213,   20, 8213,   22,   20,
+     8213,   22,   20,   22,   20, 8213, 8205, 8213,   22, 8205,
+     8213,   22, 8205,   22, 8205, 8213,   22, 8208, 8213,   22,
+     8208, 8213,   22, 8208,   22, 8208, 8213,   22,   22,    9,
+       22,   17, 8213,   22,   17, 8213,   22,   17, 8213,   17,
+       22,   17,   22,    3,   22,   22,   19, 8213,   22,   19,
+     8213,   22,   19,   22,   19, 8213,   22,   18, 8213,   22,
+    16405, 8206, 8213,   22, 8206, 8213,   22, 8206, 8213, 8206,
+       22, 8206,   20, 8213,   22,   20, 8213,   22,   20, 8213,
+       20,   22,   20, 8205, 8213,   22, 8205, 8213,   22, 8205,
 
-       22,    9,   12,    9,    7,   22,   22,   19, 8213,   22,
-       19, 8213,   22,   19, 8213,   19,   22,   19,    2,   18,
-     8213,   22,   18, 8213,   22,   18,   22,   18, 8213, 8206,
-     8213,   22, 8206, 8213,   22, 8206,   22, 8206, 8213,   22,
-       10,   22,   11,    9,    9,   12,    7,   12,    7,   22,
-        6,    2,   12,    2,   18, 8213,   22,   18, 8213,   22,
-       18, 8213,   18,   22,   18, 8206, 8213,   22, 8206, 8213,
-       22, 8206, 8213, 8206,   22, 8206,   22,   10,   12,   10,
-       15, 8213,   22,16405,   11,   12,   11,    7,    7,   12,
-       22,    6,   12,    6,    6,   12,    6,   12,    2,    2,
+     8213, 8205,   22, 8205,   22, 8208, 8213,   22, 8208, 8213,
+       22, 8208, 8213, 8208,   22, 8208,   22,   22,    9,   12,
+        9,    7,   22,   22,   19, 8213,   22,   19, 8213,   22,
+       19, 8213,   19,   22,   19,    2,   18, 8213,   22,   18,
+     8213,   22,   18,   22,   18, 8213,   10,   22,   11,    9,
+        9,   12,    7,   12,    7,   22,    6,    2,   12,    2,
+       18, 8213,   22,   18, 8213,   22,   18, 8213,   18,   22,
+       18,   10,   12,   10,   15, 8213,   22,16405,   11,   12,
+       11,    7,    7,   12,   22,    6,   12,    6,    6,   12,
+        6,   12,    2,    2,   12,   10,   10,   12,   15, 8213,
 
-       12, 8206,   22,16398,   10,   10,   12,   15, 8213,   22,
-       15, 8213,   22,   15,   22,   15, 8213,   11,   12,   22,
-        6,    6,   12,    6,    6,   15, 8213,   22,   15, 8213,
-       22,   15, 8213,   15,   22,   15,   22,    6,    6,    8,
-        6,    5,    6,    8,   12,    8,    4,    6,    5,    6,
-        8,    8,   12,    4,    6
+       22,   15, 8213,   22,   15,   22,   15, 8213,   11,   12,
+       22,    6,    6,   12,    6,    6,   15, 8213,   22,   15,
+     8213,   22,   15, 8213,   15,   22,   15,   22,    6,    6,
+        8,    6,    5,    6,    8,   12,    8,    4,    6,    5,
+        6,    8,    8,   12,    4,    6
     } ;
 
-static yyconst flex_int16_t yy_accept[257] =
+static yyconst flex_int16_t yy_accept[252] =
     {   0,
         1,    1,    1,    2,    3,    4,    7,   12,   16,   19,
        21,   24,   27,   30,   33,   36,   39,   42,   45,   48,
        51,   54,   57,   60,   63,   66,   68,   69,   70,   71,
        73,   76,   78,   79,   80,   81,   84,   84,   85,   86,
        87,   88,   89,   90,   91,   92,   93,   94,   95,   96,
-       97,   98,   99,  100,  101,  102,  103,  105,  107,  108,
-      110,  112,  113,  114,  115,  116,  117,  118,  119,  120,
+       97,   98,   99,  100,  101,  102,  103,  104,  106,  108,
+      109,  111,  113,  114,  115,  116,  117,  118,  119,  120,
       121,  122,  123,  124,  125,  126,  127,  128,  129,  130,
-      131,  133,  135,  136,  137,  138,  142,  143,  144,  145,
-      146,  147,  148,  149,  152,  156,  157,  162,  163,  164,
+      131,  132,  133,  135,  137,  138,  139,  140,  144,  145,
+      146,  147,  148,  149,  150,  151,  156,  160,  161,  166,
 
-      169,  170,  171,  172,  173,  176,  179,  181,  183,  184,
-      186,  187,  191,  192,  193,  194,  195,  198,  201,  203,
-      205,  208,  211,  213,  215,  216,  219,  222,  224,  226,
-      227,  228,  229,  230,  233,  236,  238,  240,  241,  242,
-      244,  245,  248,  251,  253,  255,  256,  260,  265,  266,
-      269,  272,  274,  276,  277,  280,  283,  285,  287,  288,
-      289,  292,  295,  297,  299,  300,  301,  302,  304,  305,
-      306,  307,  308,  311,  314,  316,  318,  319,  320,  323,
-      326,  328,  330,  333,  336,  338,  340,  341,  342,  343,
-      344,  345,  347,  349,  350,  351,  352,  354,  355,  358,
+      167,  168,  173,  174,  175,  176,  177,  180,  183,  185,
+      187,  188,  190,  191,  195,  196,  197,  200,  203,  205,
+      207,  210,  213,  215,  217,  220,  223,  225,  227,  228,
+      231,  234,  236,  238,  239,  240,  241,  242,  245,  248,
+      250,  252,  253,  254,  256,  257,  260,  263,  265,  267,
+      268,  272,  275,  278,  280,  282,  283,  286,  289,  291,
+      293,  294,  297,  300,  302,  304,  305,  306,  309,  312,
+      314,  316,  317,  318,  319,  321,  322,  323,  324,  325,
+      328,  331,  333,  335,  336,  337,  340,  343,  345,  347,
+      348,  349,  350,  351,  353,  355,  356,  357,  358,  360,
 
-      361,  363,  365,  366,  369,  372,  374,  376,  377,  378,
-      380,  381,  385,  387,  388,  389,  391,  392,  394,  395,
-      397,  399,  400,  402,  405,  406,  408,  411,  414,  416,
-      418,  420,  421,  422,  424,  425,  426,  429,  432,  434,
-      436,  437,  438,  439,  440,  441,  442,  444,  446,  447,
-      449,  451,  452,  454,  456,  456
+      361,  364,  367,  369,  371,  372,  374,  375,  379,  381,
+      382,  383,  385,  386,  388,  389,  391,  393,  394,  396,
+      397,  399,  402,  405,  407,  409,  411,  412,  413,  415,
+      416,  417,  420,  423,  425,  427,  428,  429,  430,  431,
+      432,  433,  435,  437,  438,  440,  442,  443,  445,  447,
+      447
     } ;
 
 static yyconst flex_int32_t yy_ec[256] =
@@ -492,244 +491,238 @@ static yyconst flex_int32_t yy_meta[34] 
         1,    1,    1
     } ;
 
-static yyconst flex_int16_t yy_base[313] =
+static yyconst flex_int16_t yy_base[308] =
     {   0,
-        0,    0,  572,  560,  559,  551,   32,   35,  662,  662,
-       44,   62,   30,   40,   32,   50,  533,   49,   47,   59,
-       68,  525,   69,  517,   72,    0,  662,  515,  662,   83,
-       91,    0,    0,  100,  501,  109,    0,   78,   51,   86,
-       89,   74,   96,  105,  109,  110,  111,  112,  117,   73,
-      119,  118,  121,  120,  122,    0,  134,    0,    0,  138,
-        0,    0,  495,  130,  144,  129,  143,  145,  146,  147,
-      148,  149,  153,  154,  155,  158,  161,  165,  166,  170,
-      180,    0,    0,  662,  171,  201,  176,  175,  178,  183,
-      465,  182,  190,  455,  212,  188,  221,  208,  224,  234,
+        0,    0,  546,  538,  533,  521,   32,   35,  656,  656,
+       44,   62,   30,   41,   50,   51,  507,   64,   47,   66,
+       67,  499,   68,  487,   72,    0,  656,  465,  656,   87,
+       91,    0,    0,  100,  452,  109,    0,   74,   95,   87,
+       32,   96,  105,  110,   77,   97,   40,  113,  116,  112,
+      118,  120,  121,  122,  123,  125,    0,  137,    0,    0,
+      147,    0,    0,  449,  129,  126,  134,  143,  145,  147,
+      148,  149,  151,  153,  156,  160,  155,  167,  162,  175,
+      168,  159,  188,    0,    0,  656,  166,  197,  179,  185,
+      176,  200,  435,  186,  193,  216,  225,  205,  234,  221,
 
-      209,  230,  236,  221,  244,    0,  247,    0,  184,  248,
-      244,  269,  231,  247,  251,  258,  272,    0,  279,    0,
-      283,    0,  286,    0,  255,  290,    0,  293,    0,  270,
-      281,  455,  254,  297,    0,    0,    0,    0,  294,  662,
-      295,  308,    0,  310,    0,  257,  319,  328,  304,  331,
-        0,    0,    0,    0,  335,    0,    0,    0,    0,  316,
-      338,    0,    0,    0,    0,  333,  336,  447,  662,  429,
-      338,  340,  348,    0,    0,    0,    0,  428,  351,    0,
-      355,    0,  359,    0,  362,    0,  357,  427,  308,  369,
-      426,  662,  425,  662,  346,  365,  423,  662,  371,    0,
+      237,  247,  204,  230,  244,  213,  254,    0,  256,    0,
+      251,  258,  254,  279,  256,  259,  267,    0,  269,    0,
+      286,    0,  288,    0,  290,    0,  297,    0,  267,  299,
+        0,  301,    0,  288,  297,  421,  302,  310,    0,    0,
+        0,    0,  305,  656,  307,  319,    0,  321,    0,  322,
+      332,  335,    0,    0,    0,    0,  339,    0,    0,    0,
+        0,  342,    0,    0,    0,    0,  340,  349,    0,    0,
+        0,    0,  337,  345,  420,  656,  419,  346,  350,  358,
+        0,    0,    0,    0,  418,  360,    0,  362,    0,  417,
+      319,  369,  416,  656,  415,  656,  276,  364,  414,  656,
 
-        0,    0,    0,  378,    0,    0,    0,    0,  380,  421,
-      662,  388,  420,    0,  419,  662,  373,  418,  662,  372,
-      382,  417,  662,  398,  416,  662,  400,    0,  402,    0,
-        0,  385,  415,  662,  390,  275,  409,    0,    0,    0,
-        0,  405,  404,  406,  264,  412,  224,  129,  662,   87,
-      662,   47,  662,  662,  662,  434,  438,  441,  445,  449,
-      453,  457,  461,  465,  469,  473,  477,  481,  485,  489,
-      493,  497,  501,  505,  509,  513,  517,  521,  525,  529,
-      533,  537,  541,  545,  549,  553,  557,  561,  565,  569,
-      573,  577,  581,  585,  589,  593,  597,  601,  605,  609,
+      375,    0,    0,    0,    0,  413,  656,  384,  412,    0,
+      410,  656,  370,  409,  656,  370,  378,  408,  656,  366,
+      656,  394,    0,  396,    0,    0,  380,  316,  656,  377,
+      387,  398,    0,    0,    0,    0,  399,  402,  407,  271,
+      406,  228,  200,  656,  175,  656,   77,  656,  656,  656,
+      428,  432,  435,  439,  443,  447,  451,  455,  459,  463,
+      467,  471,  475,  479,  483,  487,  491,  495,  499,  503,
+      507,  511,  515,  519,  523,  527,  531,  535,  539,  543,
+      547,  551,  555,  559,  563,  567,  571,  575,  579,  583,
+      587,  591,  595,  599,  603,  607,  611,  615,  619,  623,
 
-      613,  617,  621,  625,  629,  633,  637,  641,  645,  649,
-      653,  657
+      627,  631,  635,  639,  643,  647,  651
     } ;
 
-static yyconst flex_int16_t yy_def[313] =
+static yyconst flex_int16_t yy_def[308] =
     {   0,
-      255,    1,  256,  256,  255,  257,  258,  258,  255,  255,
-      259,  259,   12,   12,   12,   12,   12,   12,   12,   12,
-       12,   12,   12,   12,   12,  260,  255,  257,  255,  261,
-      258,  262,  262,  263,   12,  257,  264,   12,   12,   12,
+      250,    1,  251,  251,  250,  252,  253,  253,  250,  250,
+      254,  254,   12,   12,   12,   12,   12,   12,   12,   12,
+       12,   12,   12,   12,   12,  255,  250,  252,  250,  256,
+      253,  257,  257,  258,   12,  252,  259,   12,   12,   12,
        12,   12,   12,   12,   12,   12,   12,   12,   12,   12,
-       12,   12,   12,   12,   12,  260,  261,  262,  262,  265,
-      266,  266,  255,   12,   12,   12,   12,   12,   12,   12,
+       12,   12,   12,   12,   12,   12,  255,  256,  257,  257,
+      260,  261,  261,  250,   12,   12,   12,   12,   12,   12,
        12,   12,   12,   12,   12,   12,   12,   12,   12,   12,
-      265,  266,  266,  255,   12,  267,   12,   12,   12,   12,
-       12,   12,   12,   36,  268,   12,  269,   12,   12,  270,
+       12,   12,  260,  261,  261,  250,   12,  262,   12,   12,
+       12,   12,   12,   12,   12,  263,  264,   12,  265,   12,
 
-       12,   12,   12,   12,  271,  272,  267,  272,   12,   12,
-       12,  273,   12,   12,   12,  257,  274,  275,  268,  275,
-      276,  277,  269,  277,   12,  278,  279,  270,  279,   12,
-       12,  280,   12,  271,  272,  272,  281,  281,   12,  255,
-       12,  282,  283,  273,  283,   12,  284,  285,  257,  274,
-      275,  275,  286,  286,  276,  277,  277,  287,  287,   12,
-      278,  279,  279,  288,  288,   12,   12,  289,  255,  290,
-       12,   12,  282,  283,  283,  291,  291,  292,  293,  294,
-      284,  294,  295,  296,  285,  296,  257,  297,   12,  298,
-      289,  255,  299,  255,   12,  300,  301,  255,  293,  294,
+       12,  266,   12,   12,   12,   12,  267,  268,  262,  268,
+       12,   12,   12,  269,   12,   12,  270,  271,  263,  271,
+      272,  273,  264,  273,  274,  275,  265,  275,   12,  276,
+      277,  266,  277,   12,   12,  278,   12,  267,  268,  268,
+      279,  279,   12,  250,   12,  280,  281,  269,  281,   12,
+      282,  270,  271,  271,  283,  283,  272,  273,  273,  284,
+      284,  274,  275,  275,  285,  285,   12,  276,  277,  277,
+      286,  286,   12,   12,  287,  250,  288,   12,   12,  280,
+      281,  281,  289,  289,  290,  291,  292,  282,  292,  293,
+       12,  294,  287,  250,  295,  250,   12,  296,  297,  250,
 
-      294,  302,  302,  295,  296,  296,  303,  303,  257,  304,
-      255,  305,  306,  306,  299,  255,   12,  307,  255,  307,
-      307,  301,  255,  285,  304,  255,  308,  309,  305,  309,
-      306,   12,  307,  255,  307,  307,  308,  309,  309,  310,
-      310,   12,  307,  307,  311,  307,  307,  312,  255,  307,
-      255,  312,  255,  255,    0,  255,  255,  255,  255,  255,
-      255,  255,  255,  255,  255,  255,  255,  255,  255,  255,
-      255,  255,  255,  255,  255,  255,  255,  255,  255,  255,
-      255,  255,  255,  255,  255,  255,  255,  255,  255,  255,
-      255,  255,  255,  255,  255,  255,  255,  255,  255,  255,
+      291,  292,  292,  298,  298,  299,  250,  300,  301,  301,
+      295,  250,   12,  302,  250,  302,  302,  297,  250,  299,
+      250,  303,  304,  300,  304,  301,   12,  302,  250,  302,
+      302,  303,  304,  304,  305,  305,   12,  302,  302,  306,
+      302,  302,  307,  250,  302,  250,  307,  250,  250,    0,
+      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
+      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
+      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
+      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
+      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
 
-      255,  255,  255,  255,  255,  255,  255,  255,  255,  255,
-      255,  255
+      250,  250,  250,  250,  250,  250,  250
     } ;
 
-static yyconst flex_int16_t yy_nxt[696] =
+static yyconst flex_int16_t yy_nxt[690] =
     {   0,
         6,    7,    8,    9,    6,    6,    6,    6,   10,   11,
        12,   13,   14,   15,   16,   17,   17,   18,   17,   17,
        17,   17,   19,   17,   20,   21,   22,   23,   24,   17,
        25,   17,   17,   31,   31,   32,   31,   31,   32,   35,
        33,   35,   41,   33,   28,   28,   28,   29,   34,   35,
-      249,   36,   37,   42,   43,   38,   35,   48,   35,   35,
-       35,   39,   28,   28,   28,   29,   34,   44,   35,   36,
-       37,   40,   46,   45,   65,   49,   47,   35,   35,   50,
-       52,   35,   35,   35,   54,   28,   58,   35,   55,   64,
-      254,   59,   31,   31,   32,   35,   75,   66,   35,   33,
+       35,   36,   37,   73,   42,   38,   35,   49,   68,   35,
+       35,   39,   28,   28,   28,   29,   34,   43,   45,   36,
+       37,   40,   44,   35,   46,   35,   35,   35,   51,   53,
+      244,   35,   50,   35,   55,   65,   35,   47,   56,   28,
+       59,   48,   31,   31,   32,   60,   35,   71,   67,   33,
 
-       28,   28,   28,   29,   68,   35,   48,   28,   37,   60,
-       60,   60,   61,   60,   35,   67,   60,   62,   35,   35,
-       35,   35,   72,   71,   73,   69,   35,   35,   35,   35,
-       35,   35,  253,   80,   76,   70,   28,   58,   35,   35,
-       28,   82,   59,   85,   77,   78,   83,   79,   87,   74,
-       76,   86,   35,   35,   35,   35,   35,   35,   35,   90,
-       94,   95,   35,   35,   35,   97,   88,   35,   91,   92,
-       35,   99,  100,   89,   35,   35,   93,  101,   98,   35,
-       35,  102,   28,   82,   35,   35,   96,   35,   83,  109,
-      112,   35,   35,   35,   76,   97,  110,   35,  104,   35,
+       28,   28,   28,   29,   35,   35,   35,   28,   37,   61,
+       61,   61,   62,   61,   35,   70,   61,   63,   66,   35,
+       49,   35,   35,   72,   74,   35,   69,   35,   75,   35,
+       35,   35,   35,   88,   35,   35,   82,   78,   35,   28,
+       59,   77,   87,   35,   76,   60,   80,   79,   81,   28,
+       84,   78,   35,   89,   35,   85,   35,   35,   35,   75,
+       35,   92,   35,   96,   35,   35,   90,   97,   35,   35,
+       93,   35,   94,   91,   99,   35,   35,   35,  249,  100,
+       95,  101,  102,  104,   35,   35,   98,  103,   35,  105,
+       28,   84,  111,  106,   35,   35,   85,  107,  107,   61,
 
-      103,  105,  105,   60,  106,  105,  139,  115,  105,  108,
-      111,  114,  117,  117,   60,  118,  117,   35,   35,  117,
-      120,  121,  121,   60,  122,  121,  130,  251,  121,  124,
-       35,  100,  125,   35,  126,  126,   60,  127,  126,   35,
-       35,  126,  129,  131,  132,   35,   28,  135,  133,   28,
-      137,  140,  136,   35,  147,  138,   35,   35,  148,  146,
-       35,   29,  170,   35,   35,  178,   35,  249,  141,  142,
-      142,   60,  143,  142,   28,  151,  142,  145,  219,   35,
-      152,   28,  153,  160,  149,   28,  156,  154,   28,  158,
-       35,  157,   28,  162,  159,   28,  164,  166,  163,   28,
+      108,  107,   35,  248,  107,  110,  112,  114,  113,   35,
+       75,   78,   99,   35,   35,  116,  117,  117,   61,  118,
+      117,  134,   35,  117,  120,  121,  121,   61,  122,  121,
+       35,  246,  121,  124,  125,  125,   61,  126,  125,   35,
+      137,  125,  128,  135,  102,  129,   35,  130,  130,   61,
+      131,  130,  136,   35,  130,  133,   28,  139,   28,  141,
+       35,  144,  140,   35,  142,   35,  151,   35,   35,   28,
+      153,   28,  155,  143,  244,  154,   35,  156,  145,  146,
+      146,   61,  147,  146,  150,   35,  146,  149,   28,  158,
+       28,  160,   28,  163,  159,  167,  161,   35,  164,   28,
 
-      135,  165,  244,   35,   35,  136,  171,   29,  172,  167,
-       28,  174,   28,  176,  187,  212,  175,   35,  177,  179,
-      179,   60,  180,  179,  188,   35,  179,  182,  183,  183,
-       60,  184,  183,   28,  151,  183,  186,   28,  156,  152,
-       28,  162,   35,  157,  190,   35,  163,   35,  196,   35,
-       28,  174,  189,   28,  200,   35,  175,   28,  202,  201,
-       29,   28,  205,  203,   28,  207,  195,  206,  219,  209,
-      208,   63,  214,   28,  200,  234,  220,  221,  217,  201,
-       28,  205,   35,   29,  235,  234,  206,  224,  227,  227,
-       60,  228,  227,  219,   35,  227,  230,  232,  242,  236,
+      165,   28,  169,   28,  171,  166,   35,  170,  213,  172,
+      177,   35,   28,  139,   35,  173,   35,  178,  140,  215,
+      179,   28,  181,   28,  183,  174,  208,  182,   35,  184,
+      185,   35,  186,  186,   61,  187,  186,   28,  153,  186,
+      189,   28,  158,  154,   28,  163,   35,  159,  190,   35,
+      164,   28,  169,  192,   35,   35,  191,  170,  198,   35,
+       28,  181,   28,  202,   28,  204,  182,  215,  203,  207,
+      205,   64,  210,  229,  197,  216,  217,   28,  202,   35,
+      215,  229,  230,  203,  222,  222,   61,  223,  222,   35,
+      215,  222,  225,  237,  227,  231,   28,  233,   28,  235,
 
-       28,  207,   28,  238,   28,  240,  208,  219,  239,  219,
-      241,   28,  238,  245,   35,  219,  243,  239,  219,  211,
-      198,  234,  194,  231,  226,  247,  223,  246,  216,  169,
-      211,  198,  194,  250,   26,   26,   26,   26,   28,   28,
-       28,   30,   30,   30,   30,   35,   35,   35,   35,   56,
-      192,   56,   56,   57,   57,   57,   57,   59,  169,   59,
-       59,   34,   34,   34,   34,   63,   63,  116,   63,   81,
-       81,   81,   81,   83,  113,   83,   83,  107,  107,  107,
-      107,  119,  119,  119,  119,  123,  123,  123,  123,  128,
-      128,  128,  128,  134,  134,  134,  134,  136,   84,  136,
+       28,  233,  234,  238,  236,  215,  234,  240,   35,  215,
+      215,  200,  229,  196,  239,  226,  221,  219,  212,  176,
+      207,  200,  196,  194,  176,  241,  242,  245,   26,   26,
+       26,   26,   28,   28,   28,   30,   30,   30,   30,   35,
+       35,   35,   35,   57,  115,   57,   57,   58,   58,   58,
+       58,   60,   86,   60,   60,   34,   34,   34,   34,   64,
+       64,   35,   64,   83,   83,   83,   83,   85,   29,   85,
+       85,  109,  109,  109,  109,  119,  119,  119,  119,  123,
+      123,  123,  123,  127,  127,  127,  127,  132,  132,  132,
+      132,  138,  138,  138,  138,  140,   54,  140,  140,  148,
 
-      136,  144,  144,  144,  144,  150,  150,  150,  150,  152,
-       35,  152,  152,  155,  155,  155,  155,  157,   29,  157,
-      157,  161,  161,  161,  161,  163,   53,  163,  163,  168,
-      168,  168,  168,  138,   51,  138,  138,  173,  173,  173,
-      173,  175,   35,  175,  175,  181,  181,  181,  181,  185,
-      185,  185,  185,  154,   29,  154,  154,  159,  255,  159,
-      159,  165,   27,  165,  165,  191,  191,  191,  191,  193,
-      193,  193,  193,  177,   27,  177,  177,  197,  197,  197,
-      197,  199,  199,  199,  199,  201,  255,  201,  201,  204,
-      204,  204,  204,  206,  255,  206,  206,  210,  210,  210,
+      148,  148,  148,  152,  152,  152,  152,  154,   52,  154,
+      154,  157,  157,  157,  157,  159,   35,  159,  159,  162,
+      162,  162,  162,  164,   29,  164,  164,  168,  168,  168,
+      168,  170,  250,  170,  170,  175,  175,  175,  175,  142,
+       27,  142,  142,  180,  180,  180,  180,  182,   27,  182,
+      182,  188,  188,  188,  188,  156,  250,  156,  156,  161,
+      250,  161,  161,  166,  250,  166,  166,  172,  250,  172,
+      172,  193,  193,  193,  193,  195,  195,  195,  195,  184,
+      250,  184,  184,  199,  199,  199,  199,  201,  201,  201,
+      201,  203,  250,  203,  203,  206,  206,  206,  206,  209,
 
-      210,  213,  213,  213,  213,  215,  215,  215,  215,  218,
-      218,  218,  218,  222,  222,  222,  222,  203,  255,  203,
-      203,  208,  255,  208,  208,  225,  225,  225,  225,  229,
-      229,  229,  229,  214,  255,  214,  214,  233,  233,  233,
-      233,  237,  237,  237,  237,  239,  255,  239,  239,  241,
-      255,  241,  241,  248,  248,  248,  248,  252,  252,  252,
-      252,    5,  255,  255,  255,  255,  255,  255,  255,  255,
-      255,  255,  255,  255,  255,  255,  255,  255,  255,  255,
-      255,  255,  255,  255,  255,  255,  255,  255,  255,  255,
-      255,  255,  255,  255,  255
-
+      209,  209,  209,  211,  211,  211,  211,  214,  214,  214,
+      214,  218,  218,  218,  218,  205,  250,  205,  205,  220,
+      220,  220,  220,  224,  224,  224,  224,  210,  250,  210,
+      210,  228,  228,  228,  228,  232,  232,  232,  232,  234,
+      250,  234,  234,  236,  250,  236,  236,  243,  243,  243,
+      243,  247,  247,  247,  247,    5,  250,  250,  250,  250,
+      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
+      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
+      250,  250,  250,  250,  250,  250,  250,  250,  250
     } ;
 
-static yyconst flex_int16_t yy_chk[696] =
+static yyconst flex_int16_t yy_chk[690] =
     {   0,
         1,    1,    1,    1,    1,    1,    1,    1,    1,    1,
         1,    1,    1,    1,    1,    1,    1,    1,    1,    1,
         1,    1,    1,    1,    1,    1,    1,    1,    1,    1,
         1,    1,    1,    7,    7,    7,    8,    8,    8,   13,
-        7,   15,   13,    8,   11,   11,   11,   11,   11,   14,
-      252,   11,   11,   14,   15,   11,   19,   19,   18,   16,
-       39,   11,   12,   12,   12,   12,   12,   16,   20,   12,
-       12,   12,   18,   16,   39,   20,   18,   21,   23,   21,
-       23,   25,   50,   42,   25,   30,   30,   38,   25,   38,
-      250,   30,   31,   31,   31,   40,   50,   40,   41,   31,
+        7,   41,   13,    8,   11,   11,   11,   11,   11,   47,
+       14,   11,   11,   47,   14,   11,   19,   19,   41,   15,
+       16,   11,   12,   12,   12,   12,   12,   14,   16,   12,
+       12,   12,   15,   18,   16,   20,   21,   23,   21,   23,
+      247,   25,   20,   38,   25,   38,   45,   18,   25,   30,
+       30,   18,   31,   31,   31,   30,   40,   45,   40,   31,
 
-       34,   34,   34,   34,   42,   43,   43,   34,   34,   36,
-       36,   36,   36,   36,   44,   41,   36,   36,   45,   46,
-       47,   48,   47,   46,   48,   44,   49,   52,   51,   54,
-       53,   55,  248,   54,   55,   45,   57,   57,   66,   64,
-       60,   60,   57,   64,   52,   53,   60,   53,   66,   49,
-       51,   65,   67,   65,   68,   69,   70,   71,   72,   69,
-       73,   74,   73,   74,   75,   76,   67,   76,   70,   71,
-       77,   78,   78,   68,   78,   79,   72,   78,   77,   80,
-       85,   79,   81,   81,   88,   87,   75,   89,   81,   87,
-       90,   92,   90,  109,   96,   96,   88,   96,   85,   93,
+       34,   34,   34,   34,   39,   42,   46,   34,   34,   36,
+       36,   36,   36,   36,   43,   43,   36,   36,   39,   44,
+       44,   50,   48,   46,   48,   49,   42,   51,   49,   52,
+       53,   54,   55,   66,   56,   66,   55,   56,   65,   58,
+       58,   51,   65,   67,   50,   58,   54,   53,   54,   61,
+       61,   52,   68,   67,   69,   61,   70,   71,   72,   70,
+       73,   71,   74,   75,   77,   75,   68,   76,   82,   76,
+       72,   79,   73,   69,   78,   87,   78,   81,  245,   79,
+       74,   80,   80,   81,   80,   91,   77,   80,   89,   82,
+       83,   83,   89,   87,   90,   94,   83,   88,   88,   88,
 
-       80,   86,   86,   86,   86,   86,  109,   93,   86,   86,
-       89,   92,   95,   95,   95,   95,   95,   98,  101,   95,
-       95,   97,   97,   97,   97,   97,  101,  247,   97,   97,
-      104,   99,   98,   99,  100,  100,  100,  100,  100,  102,
-      113,  100,  100,  102,  103,  103,  105,  105,  104,  107,
-      107,  110,  105,  111,  114,  107,  114,  110,  115,  113,
-      115,  116,  133,  133,  125,  146,  146,  245,  111,  112,
-      112,  112,  112,  112,  117,  117,  112,  112,  236,  130,
-      117,  119,  119,  125,  116,  121,  121,  119,  123,  123,
-      131,  121,  126,  126,  123,  128,  128,  130,  126,  134,
+       88,   88,   95,  243,   88,   88,   90,   92,   91,   92,
+       95,   98,   98,  103,   98,   94,   96,   96,   96,   96,
+       96,  103,  106,   96,   96,   97,   97,   97,   97,   97,
+      100,  242,   97,   97,   99,   99,   99,   99,   99,  104,
+      106,   99,   99,  104,  101,  100,  101,  102,  102,  102,
+      102,  102,  105,  105,  102,  102,  107,  107,  109,  109,
+      111,  112,  107,  113,  109,  115,  116,  112,  116,  117,
+      117,  119,  119,  111,  240,  117,  129,  119,  113,  114,
+      114,  114,  114,  114,  115,  197,  114,  114,  121,  121,
+      123,  123,  125,  125,  121,  129,  123,  134,  125,  127,
 
-      134,  128,  236,  139,  141,  134,  139,  149,  141,  131,
-      142,  142,  144,  144,  149,  189,  142,  189,  144,  147,
-      147,  147,  147,  147,  160,  160,  147,  147,  148,  148,
-      148,  148,  148,  150,  150,  148,  148,  155,  155,  150,
-      161,  161,  166,  155,  167,  167,  161,  171,  172,  172,
-      173,  173,  166,  179,  179,  195,  173,  181,  181,  179,
-      187,  183,  183,  181,  185,  185,  171,  183,  196,  187,
-      185,  190,  190,  199,  199,  220,  196,  196,  195,  199,
-      204,  204,  217,  209,  220,  221,  204,  209,  212,  212,
-      212,  212,  212,  235,  232,  212,  212,  217,  232,  221,
+      127,  130,  130,  132,  132,  127,  135,  130,  197,  132,
+      137,  137,  138,  138,  143,  134,  145,  143,  138,  228,
+      145,  146,  146,  148,  148,  135,  191,  146,  191,  148,
+      150,  150,  151,  151,  151,  151,  151,  152,  152,  151,
+      151,  157,  157,  152,  162,  162,  173,  157,  167,  167,
+      162,  168,  168,  174,  174,  178,  173,  168,  179,  179,
+      180,  180,  186,  186,  188,  188,  180,  198,  186,  220,
+      188,  192,  192,  216,  178,  198,  198,  201,  201,  213,
+      230,  217,  216,  201,  208,  208,  208,  208,  208,  227,
+      231,  208,  208,  227,  213,  217,  222,  222,  224,  224,
 
-      224,  224,  227,  227,  229,  229,  224,  243,  227,  244,
-      229,  237,  237,  242,  242,  246,  235,  237,  233,  225,
-      222,  218,  215,  213,  210,  244,  197,  243,  193,  191,
-      188,  178,  170,  246,  256,  256,  256,  256,  257,  257,
-      257,  258,  258,  258,  258,  259,  259,  259,  259,  260,
-      168,  260,  260,  261,  261,  261,  261,  262,  132,  262,
-      262,  263,  263,  263,  263,  264,  264,   94,  264,  265,
-      265,  265,  265,  266,   91,  266,  266,  267,  267,  267,
-      267,  268,  268,  268,  268,  269,  269,  269,  269,  270,
-      270,  270,  270,  271,  271,  271,  271,  272,   63,  272,
+      232,  232,  222,  230,  224,  238,  232,  237,  237,  241,
+      239,  218,  214,  211,  231,  209,  206,  199,  195,  193,
+      190,  185,  177,  175,  136,  238,  239,  241,  251,  251,
+      251,  251,  252,  252,  252,  253,  253,  253,  253,  254,
+      254,  254,  254,  255,   93,  255,  255,  256,  256,  256,
+      256,  257,   64,  257,  257,  258,  258,  258,  258,  259,
+      259,   35,  259,  260,  260,  260,  260,  261,   28,  261,
+      261,  262,  262,  262,  262,  263,  263,  263,  263,  264,
+      264,  264,  264,  265,  265,  265,  265,  266,  266,  266,
+      266,  267,  267,  267,  267,  268,   24,  268,  268,  269,
 
-      272,  273,  273,  273,  273,  274,  274,  274,  274,  275,
-       35,  275,  275,  276,  276,  276,  276,  277,   28,  277,
-      277,  278,  278,  278,  278,  279,   24,  279,  279,  280,
-      280,  280,  280,  281,   22,  281,  281,  282,  282,  282,
-      282,  283,   17,  283,  283,  284,  284,  284,  284,  285,
-      285,  285,  285,  286,    6,  286,  286,  287,    5,  287,
-      287,  288,    4,  288,  288,  289,  289,  289,  289,  290,
-      290,  290,  290,  291,    3,  291,  291,  292,  292,  292,
-      292,  293,  293,  293,  293,  294,    0,  294,  294,  295,
-      295,  295,  295,  296,    0,  296,  296,  297,  297,  297,
+      269,  269,  269,  270,  270,  270,  270,  271,   22,  271,
+      271,  272,  272,  272,  272,  273,   17,  273,  273,  274,
+      274,  274,  274,  275,    6,  275,  275,  276,  276,  276,
+      276,  277,    5,  277,  277,  278,  278,  278,  278,  279,
+        4,  279,  279,  280,  280,  280,  280,  281,    3,  281,
+      281,  282,  282,  282,  282,  283,    0,  283,  283,  284,
+        0,  284,  284,  285,    0,  285,  285,  286,    0,  286,
+      286,  287,  287,  287,  287,  288,  288,  288,  288,  289,
+        0,  289,  289,  290,  290,  290,  290,  291,  291,  291,
+      291,  292,    0,  292,  292,  293,  293,  293,  293,  294,
 
-      297,  298,  298,  298,  298,  299,  299,  299,  299,  300,
-      300,  300,  300,  301,  301,  301,  301,  302,    0,  302,
-      302,  303,    0,  303,  303,  304,  304,  304,  304,  305,
-      305,  305,  305,  306,    0,  306,  306,  307,  307,  307,
-      307,  308,  308,  308,  308,  309,    0,  309,  309,  310,
-        0,  310,  310,  311,  311,  311,  311,  312,  312,  312,
-      312,  255,  255,  255,  255,  255,  255,  255,  255,  255,
-      255,  255,  255,  255,  255,  255,  255,  255,  255,  255,
-      255,  255,  255,  255,  255,  255,  255,  255,  255,  255,
-      255,  255,  255,  255,  255
-
+      294,  294,  294,  295,  295,  295,  295,  296,  296,  296,
+      296,  297,  297,  297,  297,  298,    0,  298,  298,  299,
+      299,  299,  299,  300,  300,  300,  300,  301,    0,  301,
+      301,  302,  302,  302,  302,  303,  303,  303,  303,  304,
+        0,  304,  304,  305,    0,  305,  305,  306,  306,  306,
+      306,  307,  307,  307,  307,  250,  250,  250,  250,  250,
+      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
+      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
+      250,  250,  250,  250,  250,  250,  250,  250,  250
     } ;
 
 #define YY_TRAILING_MASK 0x2000
@@ -885,7 +878,7 @@ static int vdev_and_devtype(DiskParseCon
 #define DPC ((DiskParseContext*)yyextra)
 
 
-#line 889 "libxlu_disk_l.c"
+#line 882 "libxlu_disk_l.c"
 
 #define INITIAL 0
 #define LEXERR 1
@@ -1126,7 +1119,7 @@ YY_DECL
 
  /*----- the scanner rules which do the parsing -----*/
 
-#line 1130 "libxlu_disk_l.c"
+#line 1123 "libxlu_disk_l.c"
 
 	if ( !yyg->yy_init )
 		{
@@ -1190,14 +1183,14 @@ yy_match:
 			while ( yy_chk[yy_base[yy_current_state] + yy_c] != yy_current_state )
 				{
 				yy_current_state = (int) yy_def[yy_current_state];
-				if ( yy_current_state >= 256 )
+				if ( yy_current_state >= 251 )
 					yy_c = yy_meta[(unsigned int) yy_c];
 				}
 			yy_current_state = yy_nxt[yy_base[yy_current_state] + (unsigned int) yy_c];
 			*yyg->yy_state_ptr++ = yy_current_state;
 			++yy_cp;
 			}
-		while ( yy_current_state != 255 );
+		while ( yy_current_state != 250 );
 
 yy_find_action:
 		yy_current_state = *--yyg->yy_state_ptr;
@@ -1331,22 +1324,33 @@ case 14:
 YY_RULE_SETUP
 #line 191 "libxlu_disk_l.l"
 {
-		    STRIP(':');
+                    STRIP(':');
                     DPC->had_depr_prefix=1; DEPRECATE("use `script=...'");
-		    SAVESTRING("script", script, yytext);
-		}
+                    if (DPC->disk->script) {
+                        if (*DPC->disk->script) {
+                            xlu__disk_err(DPC,yytext,"script respecified");
+                            return 0;
+                        }
+                        /* do not complain about overwriting empty strings */
+                        free(DPC->disk->script);
+                    }
+                    DPC->disk->script = malloc(strlen("block-")
+                                               +strlen(yytext) + 1);
+                    strcpy(DPC->disk->script, "block-");
+                    strcat(DPC->disk->script, yytext);
+                }
 	YY_BREAK
 case 15:
 *yy_cp = yyg->yy_hold_char; /* undo effects of setting up yytext */
 yyg->yy_c_buf_p = yy_cp = yy_bp + 8;
 YY_DO_BEFORE_ACTION; /* set up yytext again */
 YY_RULE_SETUP
-#line 197 "libxlu_disk_l.l"
+#line 208 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
 case 16:
 YY_RULE_SETUP
-#line 198 "libxlu_disk_l.l"
+#line 209 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
 case 17:
@@ -1354,7 +1358,7 @@ case 17:
 yyg->yy_c_buf_p = yy_cp = yy_bp + 4;
 YY_DO_BEFORE_ACTION; /* set up yytext again */
 YY_RULE_SETUP
-#line 199 "libxlu_disk_l.l"
+#line 210 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
 case 18:
@@ -1362,7 +1366,7 @@ case 18:
 yyg->yy_c_buf_p = yy_cp = yy_bp + 6;
 YY_DO_BEFORE_ACTION; /* set up yytext again */
 YY_RULE_SETUP
-#line 200 "libxlu_disk_l.l"
+#line 211 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
 case 19:
@@ -1370,7 +1374,7 @@ case 19:
 yyg->yy_c_buf_p = yy_cp = yy_bp + 5;
 YY_DO_BEFORE_ACTION; /* set up yytext again */
 YY_RULE_SETUP
-#line 201 "libxlu_disk_l.l"
+#line 212 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
 case 20:
@@ -1378,13 +1382,13 @@ case 20:
 yyg->yy_c_buf_p = yy_cp = yy_bp + 4;
 YY_DO_BEFORE_ACTION; /* set up yytext again */
 YY_RULE_SETUP
-#line 202 "libxlu_disk_l.l"
+#line 213 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
 case 21:
 /* rule 21 can match eol */
 YY_RULE_SETUP
-#line 204 "libxlu_disk_l.l"
+#line 215 "libxlu_disk_l.l"
 {
 		  xlu__disk_err(DPC,yytext,"unknown deprecated disk prefix");
 		  return 0;
@@ -1394,7 +1398,7 @@ YY_RULE_SETUP
 case 22:
 /* rule 22 can match eol */
 YY_RULE_SETUP
-#line 211 "libxlu_disk_l.l"
+#line 222 "libxlu_disk_l.l"
 {
     STRIP(',');
 
@@ -1423,7 +1427,7 @@ YY_RULE_SETUP
 	YY_BREAK
 case 23:
 YY_RULE_SETUP
-#line 237 "libxlu_disk_l.l"
+#line 248 "libxlu_disk_l.l"
 {
     BEGIN(LEXERR);
     yymore();
@@ -1431,17 +1435,17 @@ YY_RULE_SETUP
 	YY_BREAK
 case 24:
 YY_RULE_SETUP
-#line 241 "libxlu_disk_l.l"
+#line 252 "libxlu_disk_l.l"
 {
     xlu__disk_err(DPC,yytext,"bad disk syntax"); return 0;
 }
 	YY_BREAK
 case 25:
 YY_RULE_SETUP
-#line 244 "libxlu_disk_l.l"
+#line 255 "libxlu_disk_l.l"
 YY_FATAL_ERROR( "flex scanner jammed" );
 	YY_BREAK
-#line 1445 "libxlu_disk_l.c"
+#line 1449 "libxlu_disk_l.c"
 			case YY_STATE_EOF(INITIAL):
 			case YY_STATE_EOF(LEXERR):
 				yyterminate();
@@ -1705,7 +1709,7 @@ static int yy_get_next_buffer (yyscan_t 
 		while ( yy_chk[yy_base[yy_current_state] + yy_c] != yy_current_state )
 			{
 			yy_current_state = (int) yy_def[yy_current_state];
-			if ( yy_current_state >= 256 )
+			if ( yy_current_state >= 251 )
 				yy_c = yy_meta[(unsigned int) yy_c];
 			}
 		yy_current_state = yy_nxt[yy_base[yy_current_state] + (unsigned int) yy_c];
@@ -1729,11 +1733,11 @@ static int yy_get_next_buffer (yyscan_t 
 	while ( yy_chk[yy_base[yy_current_state] + yy_c] != yy_current_state )
 		{
 		yy_current_state = (int) yy_def[yy_current_state];
-		if ( yy_current_state >= 256 )
+		if ( yy_current_state >= 251 )
 			yy_c = yy_meta[(unsigned int) yy_c];
 		}
 	yy_current_state = yy_nxt[yy_base[yy_current_state] + (unsigned int) yy_c];
-	yy_is_jam = (yy_current_state == 255);
+	yy_is_jam = (yy_current_state == 250);
 	if ( ! yy_is_jam )
 		*yyg->yy_state_ptr++ = yy_current_state;
 
@@ -2533,4 +2537,4 @@ void xlu__disk_yyfree (void * ptr , yysc
 
 #define YYTABLES_NAME "yytables"
 
-#line 244 "libxlu_disk_l.l"
+#line 255 "libxlu_disk_l.l"
diff -r 006588b1bbb6 -r 84f0686ebcbf tools/libxl/libxlu_disk_l.h
--- a/tools/libxl/libxlu_disk_l.h	Thu Jul 26 17:11:07 2012 +0100
+++ b/tools/libxl/libxlu_disk_l.h	Wed Aug 01 10:57:21 2012 +0100
@@ -340,7 +340,7 @@ extern int xlu__disk_yylex (yyscan_t yys
 #undef YY_DECL
 #endif
 
-#line 244 "libxlu_disk_l.l"
+#line 255 "libxlu_disk_l.l"
 
 #line 346 "libxlu_disk_l.h"
 #undef xlu__disk_yyIN_HEADER
diff -r 006588b1bbb6 -r 84f0686ebcbf tools/libxl/libxlu_disk_l.l
--- a/tools/libxl/libxlu_disk_l.l	Thu Jul 26 17:11:07 2012 +0100
+++ b/tools/libxl/libxlu_disk_l.l	Wed Aug 01 10:57:21 2012 +0100
@@ -188,11 +188,22 @@ target=.*	{ STRIP(','); SAVESTRING("targ
                     setformat(DPC, yytext);
                  }
 
-iscsi:|e?nbd:drbd:/.* {
-		    STRIP(':');
+(iscsi|e?nbd|drbd):/.* {
+                    STRIP(':');
                     DPC->had_depr_prefix=1; DEPRECATE("use `script=...'");
-		    SAVESTRING("script", script, yytext);
-		}
+                    if (DPC->disk->script) {
+                        if (*DPC->disk->script) {
+                            xlu__disk_err(DPC,yytext,"script respecified");
+                            return 0;
+                        }
+                        /* do not complain about overwriting empty strings */
+                        free(DPC->disk->script);
+                    }
+                    DPC->disk->script = malloc(strlen("block-")
+                                               +strlen(yytext) + 1);
+                    strcpy(DPC->disk->script, "block-");
+                    strcat(DPC->disk->script, yytext);
+                }
 
 tapdisk:/.*	{ DPC->had_depr_prefix=1; DEPRECATE(0); }
 tap2?:/.*	{ DPC->had_depr_prefix=1; DEPRECATE(0); }

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 09:59:22 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 09:59:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwVhW-0001ZW-Nn; Wed, 01 Aug 2012 09:58:46 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SwVhU-0001ZR-Ji
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 09:58:45 +0000
Received: from [85.158.143.99:12269] by server-1.bemta-4.messagelabs.com id
	DE/8C-24392-3DDF8105; Wed, 01 Aug 2012 09:58:43 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-216.messagelabs.com!1343815119!29389983!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNjg2MjA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30320 invoked from network); 1 Aug 2012 09:58:40 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 09:58:40 -0000
X-IronPort-AV: E=Sophos;i="4.77,693,1336363200"; d="scan'208";a="203779554"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 05:57:38 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 1 Aug 2012 05:57:38 -0400
Received: from cosworth.uk.xensource.com ([10.80.16.52] ident=ianc)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1SwVgP-00055K-UY;
	Wed, 01 Aug 2012 10:57:37 +0100
MIME-Version: 1.0
X-Mercurial-Node: 84f0686ebcbfb0fa3a437def65513038be389736
Message-ID: <84f0686ebcbfb0fa3a43.1343815057@cosworth.uk.xensource.com>
User-Agent: Mercurial-patchbomb/1.6.4
Date: Wed, 1 Aug 2012 10:57:37 +0100
From: Ian Campbell <ian.campbell@citrix.com>
To: xen-devel@lists.xen.org
Cc: ian.jackson@citrix.com, Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH V2] libxl: support custom block hotplug scripts
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1343815041 -3600
# Node ID 84f0686ebcbfb0fa3a437def65513038be389736
# Parent  006588b1bbb609186df45770a3c16d3028b54778
libxl: support custom block hotplug scripts

These are provided using the "script=" syntax described in
docs/misc/xl-disk-configuration.txt.
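For reference, a domain config disk entry using this syntax might look like the following sketch. It is modelled on the check-xl-disk-parse entries added by this patch; the iSCSI target value is just the example IQN used there:

```
# xl domain config: pdev_path ("target") is handed to the named hotplug script
disk = [ "vdev=xvda,access=w,script=block-iscsi,target=iqn.2001-05.com.equallogic:0-8a0906-23fe93404-c82797962054a96d-examplehost" ]
```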

The existing hotplug scripts currently conflate two different concepts, namely
that of making a datapath available in the backend domain (logging into iSCSI
LUNs and the like) and that of actually connecting that datapath to a Xen
backend path (e.g. writing "physical-device" node in xenstore to bring up
blkback).

For this reason the script support implemented here is only supported in
conjunction with backendtype=phy.

Eventually we hope to rework the hotplug scripts to separate the two concepts,
but that is not 4.2 material.

In addition there are some other subtleties:

 - Previously in the blktap case we would add "script = .../blktap" to the
   backend flex array, but then jumped to the PHY case which added
   "script = .../block" too. The block one takes precedence since it comes
   second.

   This was, accidentally, correct. The blktap script is for blktap1 devices
   and not blktap2 devices. libxl completely manages the blktap2 side of things
   without resorting to hotplug scripts and creates a blkback device directly.
   Therefore the "block" script is always the correct one to call. Custom
   scripts are not supported in this context.

 - libxl should not write the "physical-device" node. This is the
   responsibility of the block script. Writing the "physical-device" node in
   libxl basically completely short-cuts the standard block hotplug script
   which uses "physical-device" to know if it has run already or not.

   In the case of more complex scripts libxl cannot know the right value to
   write here anyway, in particular the device may not exist until after the
   script is called.

   This change has the side effect of re-enabling the device sharing checks
   in the default block script. I have tested these; they now cause libxl to
   abort cleanly, since libxl properly checks for hotplug script errors.

   There is no sharing check for blktap2 since even if you reuse the same vhd
   the resulting tap device is different. I would have preferred to simply
   write the "physical-device" node for the blktap2 case but the hotplug script
   infrastructure is not currently setup to handle LIBXL__DEVICE_KIND_VBD
   devices without a hotplug script (backendtype phy and tap both end up as
   KIND_VBD). Changing this was more surgery than I was happy doing for 4.2 and
   therefore I have simply hardcoded to the block script for the
   LIBXL_DISK_BACKEND_TAP case.
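To illustrate the division of labour described above, here is a minimal sketch of what a custom hotplug script could look like. This is not one of the shipped scripts; the device-mapper path and the target handling are hypothetical, and only the general Xen hotplug conventions (the backend xenstore path in $XENBUS_PATH, "add"/"remove" as the first argument) are assumed:

```shell
#!/bin/sh
# Hypothetical custom block hotplug script sketch: the script, not libxl,
# is responsible for writing "physical-device" once the device exists.
command="$1"   # "add" or "remove", passed by the hotplug machinery

case "$command" in
  add)
    target=$(xenstore-read "$XENBUS_PATH/params")  # e.g. an iSCSI IQN
    # ... log into the target here; only now does the device node exist ...
    dev=/dev/mapper/"$target"                      # assumed device mapping
    physdev=$(stat -L -c '%t:%T' "$dev")           # major:minor in hex
    xenstore-write "$XENBUS_PATH/physical-device" "$physdev"
    ;;
  remove)
    # ... log out of the target ...
    ;;
esac
```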

 - libxl__device_disk_set_backend running against a phy device with a script
   cannot stat the device to check its properties since it may not exist until
   the script is run. Therefore I have special cased this in disk_try_backend
   to simply assume that backend == phy is always ok if a script was
   configured.  Similarly the other backend types are always rejected if a
   script was configured.

   Note that the reason for implementing the default script behaviour in
   device_disk_add instead of libxl__device_disk_setdefault is that we need
   to be able to tell when the script was user-supplied rather than defaulted
   by libxl in order to correctly implement the above. The setdefault function
   must be idempotent so we cannot simply update disk->script.

   I suspect that for 4.3 a script member should be added to libxl__device,
   this would also help in the case above of handling devices with no script in
   a consistent manner. This is not 4.2 material.

 - When the block script falls through and shells out to a block-$type script
   it used to pass "$node" however the only place this was assigned was in the
   remove+phy case (in which case it contains the file:// derived /dev/loopN
   device), and in that case the script exits without falling through to the
   block-$type case.

   Since libxl never creates a type other than phy this never happens in
   practice anyway and we now call the correct block-$type script directly.
   But fix it up anyway since it is confusing.

 - The block-nbd and block-enbd scripts which we supply appear to be broken WRT
   the hotplug calling convention, in that they seem to expect a command line
   parameter (perhaps the $node described above) rather than reading the
   appropriate node from xenstore.

   I rather suspect this was broken by 7774:e2e7f47e6f79 in November 2005. I
   think it is safe to say no one is using these scripts! I haven't fixed this
   here. It would be good to track down some working scripts and either
   incorproate them or defer to them in their existing home (e.g. if they live
   somewhere useful like the nbd tools package).

 - Added a few block script related entries to check-xl-disk-parse from
   http://backdrift.org/xen-block-iscsi-script-with-multipath-support and
   http://lists.linbit.com/pipermail/drbd-user/2008-September/010221.html /
   http://www.drbd.org/users-guide-emb/s-xen-configure-domu.html
   (and snuck in another interesting empty CDROM case)

   This highlighted two bugs in the libxlu disk parser handling of the
   deprecated "<script>:" prefix:

   - It was failing to prefix with "block-" to construct the actual script name

   - The regex for matching iscsi or drbd or e?nbd was incorrect
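The regex bug is a precedence problem, visible in the diff below where `iscsi:|e?nbd:drbd:/.*` becomes `(iscsi|e?nbd|drbd):/.*`. An ERE approximation via grep -E (used purely for illustration; flex resolves `|` the same way) shows the effect:

```shell
# In the old pattern "|" binds loosely, so it means
#   "iscsi:"  OR  "e?nbd:drbd:/.*"
# and a plain "nbd:/dev/nd0" matches neither branch.
echo 'nbd:/dev/nd0' | grep -qE '^(iscsi:|e?nbd:drbd:/.*)$' || echo 'old: no match'
# The fixed pattern groups the prefixes before the ":":
echo 'nbd:/dev/nd0' | grep -qE '^(iscsi|e?nbd|drbd):/.*$' && echo 'new: match'
```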

 - Use libxl__abs_path for the nic script too. Just because the existing code
   nearly tricked me into repeating the mistake

I have tested with a custom block script which uses "lvchange -a" to
dynamically add and remove the referenced device (simulating iSCSI login/logout
without requiring me to faff around setting up an iSCSI target). I also tested
on a blktap2 system.

I haven't directly tested anything more complex like iscsi: or nbd: other than
what check-xl-disk-parse exercises.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
v2:
  - observe that script= requires backendtype=phy and substantially rework to
    correctly reflect that.
  - remove unintentional braces change in SAVESTRING macro

diff -r 006588b1bbb6 -r 84f0686ebcbf tools/hotplug/Linux/block
--- a/tools/hotplug/Linux/block	Thu Jul 26 17:11:07 2012 +0100
+++ b/tools/hotplug/Linux/block	Wed Aug 01 10:57:21 2012 +0100
@@ -342,4 +342,4 @@ esac
 
 # If we've reached here, $t is neither phy nor file, so fire a helper script.
 [ -x ${XEN_SCRIPT_DIR}/block-"$t" ] && \
-  ${XEN_SCRIPT_DIR}/block-"$t" "$command" $node
+  ${XEN_SCRIPT_DIR}/block-"$t" "$command"
diff -r 006588b1bbb6 -r 84f0686ebcbf tools/libxl/check-xl-disk-parse
--- a/tools/libxl/check-xl-disk-parse	Thu Jul 26 17:11:07 2012 +0100
+++ b/tools/libxl/check-xl-disk-parse	Wed Aug 01 10:57:21 2012 +0100
@@ -142,5 +142,44 @@ disk: {
 
 EOF
 one 0 vdev=hdc,access=r,devtype=cdrom,format=empty
+one 0 vdev=hdc,access=r,devtype=cdrom
+
+expected <<EOF
+disk: {
+    "backend_domid": 0,
+    "pdev_path": "iqn.2001-05.com.equallogic:0-8a0906-23fe93404-c82797962054a96d-examplehost",
+    "vdev": "xvda",
+    "backend": "unknown",
+    "format": "raw",
+    "script": "block-iscsi",
+    "removable": 0,
+    "readwrite": 1,
+    "is_cdrom": 0
+}
+
+EOF
+
+# http://backdrift.org/xen-block-iscsi-script-with-multipath-support
+one 0 iscsi:iqn.2001-05.com.equallogic:0-8a0906-23fe93404-c82797962054a96d-examplehost,xvda,w
+one 0 vdev=xvda,access=w,script=block-iscsi,target=iqn.2001-05.com.equallogic:0-8a0906-23fe93404-c82797962054a96d-examplehost
+
+expected <<EOF
+disk: {
+    "backend_domid": 0,
+    "pdev_path": "app01",
+    "vdev": "hda",
+    "backend": "unknown",
+    "format": "raw",
+    "script": "block-drbd",
+    "removable": 0,
+    "readwrite": 1,
+    "is_cdrom": 0
+}
+
+EOF
+
+# http://lists.linbit.com/pipermail/drbd-user/2008-September/010221.html
+# http://www.drbd.org/users-guide-emb/s-xen-configure-domu.html
+one 0 drbd:app01,hda,w
 
 complete
diff -r 006588b1bbb6 -r 84f0686ebcbf tools/libxl/libxl.c
--- a/tools/libxl/libxl.c	Thu Jul 26 17:11:07 2012 +0100
+++ b/tools/libxl/libxl.c	Wed Aug 01 10:57:21 2012 +0100
@@ -1795,9 +1795,9 @@ static void device_disk_add(libxl__egc *
     STATE_AO_GC(aodev->ao);
     flexarray_t *front = NULL;
     flexarray_t *back = NULL;
-    char *dev;
+    char *dev, *script;
     libxl__device *device;
-    int major, minor, rc;
+    int rc;
     libxl_ctx *ctx = gc->owner;
     xs_transaction_t t = XBT_NULL;
 
@@ -1832,13 +1832,6 @@ static void device_disk_add(libxl__egc *
             goto out_free;
         }
 
-        if (disk->script) {
-            LIBXL__LOG(ctx, LIBXL__LOG_ERROR, "External block scripts"
-                       " not yet supported, sorry");
-            rc = ERROR_INVAL;
-            goto out_free;
-        }
-
         GCNEW(device);
         rc = libxl__device_from_disk(gc, domid, disk, device);
         if (rc != 0) {
@@ -1850,18 +1843,16 @@ static void device_disk_add(libxl__egc *
         switch (disk->backend) {
             case LIBXL_DISK_BACKEND_PHY:
                 dev = disk->pdev_path;
+
+                script = libxl__abs_path(gc, disk->script ?: "block",
+                                         libxl__xen_script_dir_path());
+
         do_backend_phy:
-                libxl__device_physdisk_major_minor(dev, &major, &minor);
-                flexarray_append(back, "physical-device");
-                flexarray_append(back, libxl__sprintf(gc, "%x:%x", major, minor));
-
                 flexarray_append(back, "params");
                 flexarray_append(back, dev);
 
-                flexarray_append(back, "script");
-                flexarray_append(back, GCSPRINTF("%s/%s",
-                                                 libxl__xen_script_dir_path(),
-                                                 "block"));
+                assert(script);
+                flexarray_append_pair(back, "script", script);
 
                 assert(device->backend_kind == LIBXL__DEVICE_KIND_VBD);
                 break;
@@ -1878,10 +1869,12 @@ static void device_disk_add(libxl__egc *
                     libxl__device_disk_string_of_format(disk->format),
                     disk->pdev_path));
 
-                flexarray_append(back, "script");
-                flexarray_append(back, GCSPRINTF("%s/%s",
-                                                 libxl__xen_script_dir_path(),
-                                                 "blktap"));
+                /*
+                 * tap devices do not support custom block scripts and
+                 * always use the plain block script.
+                 */
+                script = libxl__abs_path(gc, "block",
+                                         libxl__xen_script_dir_path());
 
                 /* now create a phy device to export the device to the guest */
                 goto do_backend_phy;
@@ -2581,13 +2574,10 @@ void libxl__device_nic_add(libxl__egc *e
     flexarray_append(back, "1");
     flexarray_append(back, "state");
     flexarray_append(back, libxl__sprintf(gc, "%d", 1));
-    if (nic->script) {
-        flexarray_append(back, "script");
-        flexarray_append(back, nic->script[0]=='/' ? nic->script
-                         : libxl__sprintf(gc, "%s/%s",
-                                          libxl__xen_script_dir_path(),
-                                          nic->script));
-    }
+    if (nic->script)
+        flexarray_append_pair(back, "script",
+                              libxl__abs_path(gc, nic->script,
+                                              libxl__xen_script_dir_path()));
 
     if (nic->ifname) {
         flexarray_append(back, "vifname");
diff -r 006588b1bbb6 -r 84f0686ebcbf tools/libxl/libxl_device.c
--- a/tools/libxl/libxl_device.c	Thu Jul 26 17:11:07 2012 +0100
+++ b/tools/libxl/libxl_device.c	Wed Aug 01 10:57:21 2012 +0100
@@ -191,18 +191,26 @@ typedef struct {
 } disk_try_backend_args;
 
 static int disk_try_backend(disk_try_backend_args *a,
-                            libxl_disk_backend backend) {
+                            libxl_disk_backend backend)
+ {
+    libxl__gc *gc = a->gc;
     /* returns 0 (ie, DISK_BACKEND_UNKNOWN) on failure, or
      * backend on success */
-    libxl_ctx *ctx = libxl__gc_owner(a->gc);
+    libxl_ctx *ctx = libxl__gc_owner(gc);
+
     switch (backend) {
-
     case LIBXL_DISK_BACKEND_PHY:
         if (!(a->disk->format == LIBXL_DISK_FORMAT_RAW ||
               a->disk->format == LIBXL_DISK_FORMAT_EMPTY)) {
             goto bad_format;
         }
 
+        if (a->disk->script) {
+            LOG(DEBUG, "Disk vdev=%s, uses script=... assuming phy backend",
+                a->disk->vdev);
+            return backend;
+        }
+
         if (libxl__try_phy_backend(a->stab.st_mode))
             return backend;
 
@@ -212,6 +220,8 @@ static int disk_try_backend(disk_try_bac
         return 0;
 
     case LIBXL_DISK_BACKEND_TAP:
+        if (a->disk->script) goto bad_script;
+
         if (!libxl__blktap_enabled(a->gc)) {
             LIBXL__LOG(ctx, LIBXL__LOG_DEBUG, "Disk vdev=%s, backend tap"
                        " unsuitable because blktap not available",
@@ -225,6 +235,7 @@ static int disk_try_backend(disk_try_bac
         return backend;
 
     case LIBXL_DISK_BACKEND_QDISK:
+        if (a->disk->script) goto bad_script;
         return backend;
 
     default:
@@ -242,6 +253,11 @@ static int disk_try_backend(disk_try_bac
                libxl_disk_backend_to_string(backend),
                libxl_disk_format_to_string(a->disk->format));
     return 0;
+
+ bad_script:
+    LOG(DEBUG, "Disk vdev=%s, backend %s not compatible with script=...",
+        a->disk->vdev, libxl_disk_backend_to_string(backend));
+    return 0;
 }
 
 int libxl__device_disk_set_backend(libxl__gc *gc, libxl_device_disk *disk) {
@@ -264,7 +280,7 @@ int libxl__device_disk_set_backend(libxl
             return ERROR_INVAL;
         }
         memset(&a.stab, 0, sizeof(a.stab));
-    } else {
+    } else if (!disk->script) {
         if (stat(disk->pdev_path, &a.stab)) {
             LIBXL__LOG_ERRNO(ctx, LIBXL__LOG_ERROR, "Disk vdev=%s "
                              "failed to stat: %s",
diff -r 006588b1bbb6 -r 84f0686ebcbf tools/libxl/libxlu_disk_l.c
--- a/tools/libxl/libxlu_disk_l.c	Thu Jul 26 17:11:07 2012 +0100
+++ b/tools/libxl/libxlu_disk_l.c	Wed Aug 01 10:57:21 2012 +0100
@@ -366,7 +366,7 @@ struct yy_trans_info
 	flex_int32_t yy_verify;
 	flex_int32_t yy_nxt;
 	};
-static yyconst flex_int16_t yy_acclist[456] =
+static yyconst flex_int16_t yy_acclist[447] =
     {   0,
        24,   24,   26,   22,   23,   25, 8193,   22,   23,   25,
     16385, 8193,   22,   25,16385,   22,   23,   25,   23,   25,
@@ -379,77 +379,76 @@ static yyconst flex_int16_t yy_acclist[4
      8213,   22,16405,   22,   22,   22,   22,   22,   22,   22,
        22,   22,   22,   22,   22,   22,   22,   22,   22,   22,
 
-       22,   24, 8193,   22, 8193,   22, 8193, 8213,   22, 8213,
-       22, 8213,   12,   22,   22,   22,   22,   22,   22,   22,
+       22,   22,   24, 8193,   22, 8193,   22, 8193, 8213,   22,
+     8213,   22, 8213,   12,   22,   22,   22,   22,   22,   22,
        22,   22,   22,   22,   22,   22,   22,   22,   22,   22,
-     8213,   22, 8213,   22, 8213,   12,   22,   17, 8213,   22,
-    16405,   22,   22,   22,   22,   22,   22,   22, 8213,   22,
-    16405,   20, 8213,   22,16405,   22, 8205, 8213,   22,16397,
-    16405,   22,   22, 8208, 8213,   22,16400,16405,   22,   22,
-       22,   22,   17, 8213,   22,   17, 8213,   22,   17,   22,
-       17, 8213,   22,    3,   22,   22,   19, 8213,   22,16405,
-       22,   22,   22,   22,   20, 8213,   22,   20, 8213,   22,
+       22,   22, 8213,   22, 8213,   22, 8213,   12,   22,   17,
+     8213,   22,16405,   22,   22,   22,   22,   22,   22,   22,
+     8206, 8213,   22,16398,16405,   20, 8213,   22,16405,   22,
+     8205, 8213,   22,16397,16405,   22,   22, 8208, 8213,   22,
+    16400,16405,   22,   22,   22,   22,   17, 8213,   22,   17,
+     8213,   22,   17,   22,   17, 8213,   22,    3,   22,   22,
+       19, 8213,   22,16405,   22,   22, 8206, 8213,   22, 8206,
 
-       20,   22,   20, 8213, 8205, 8213,   22, 8205, 8213,   22,
-     8205,   22, 8205, 8213,   22, 8208, 8213,   22, 8208, 8213,
-       22, 8208,   22, 8208, 8213,   22,   22,    9,   22,   17,
-     8213,   22,   17, 8213,   22,   17, 8213,   17,   22,   17,
-       22,    3,   22,   22,   19, 8213,   22,   19, 8213,   22,
-       19,   22,   19, 8213,   22,   18, 8213,   22,16405, 8206,
-     8213,   22,16398,16405,   22,   20, 8213,   22,   20, 8213,
-       22,   20, 8213,   20,   22,   20, 8205, 8213,   22, 8205,
-     8213,   22, 8205, 8213, 8205,   22, 8205,   22, 8208, 8213,
-       22, 8208, 8213,   22, 8208, 8213, 8208,   22, 8208,   22,
+     8213,   22, 8206,   22, 8206, 8213,   20, 8213,   22,   20,
+     8213,   22,   20,   22,   20, 8213, 8205, 8213,   22, 8205,
+     8213,   22, 8205,   22, 8205, 8213,   22, 8208, 8213,   22,
+     8208, 8213,   22, 8208,   22, 8208, 8213,   22,   22,    9,
+       22,   17, 8213,   22,   17, 8213,   22,   17, 8213,   17,
+       22,   17,   22,    3,   22,   22,   19, 8213,   22,   19,
+     8213,   22,   19,   22,   19, 8213,   22,   18, 8213,   22,
+    16405, 8206, 8213,   22, 8206, 8213,   22, 8206, 8213, 8206,
+       22, 8206,   20, 8213,   22,   20, 8213,   22,   20, 8213,
+       20,   22,   20, 8205, 8213,   22, 8205, 8213,   22, 8205,
 
-       22,    9,   12,    9,    7,   22,   22,   19, 8213,   22,
-       19, 8213,   22,   19, 8213,   19,   22,   19,    2,   18,
-     8213,   22,   18, 8213,   22,   18,   22,   18, 8213, 8206,
-     8213,   22, 8206, 8213,   22, 8206,   22, 8206, 8213,   22,
-       10,   22,   11,    9,    9,   12,    7,   12,    7,   22,
-        6,    2,   12,    2,   18, 8213,   22,   18, 8213,   22,
-       18, 8213,   18,   22,   18, 8206, 8213,   22, 8206, 8213,
-       22, 8206, 8213, 8206,   22, 8206,   22,   10,   12,   10,
-       15, 8213,   22,16405,   11,   12,   11,    7,    7,   12,
-       22,    6,   12,    6,    6,   12,    6,   12,    2,    2,
+     8213, 8205,   22, 8205,   22, 8208, 8213,   22, 8208, 8213,
+       22, 8208, 8213, 8208,   22, 8208,   22,   22,    9,   12,
+        9,    7,   22,   22,   19, 8213,   22,   19, 8213,   22,
+       19, 8213,   19,   22,   19,    2,   18, 8213,   22,   18,
+     8213,   22,   18,   22,   18, 8213,   10,   22,   11,    9,
+        9,   12,    7,   12,    7,   22,    6,    2,   12,    2,
+       18, 8213,   22,   18, 8213,   22,   18, 8213,   18,   22,
+       18,   10,   12,   10,   15, 8213,   22,16405,   11,   12,
+       11,    7,    7,   12,   22,    6,   12,    6,    6,   12,
+        6,   12,    2,    2,   12,   10,   10,   12,   15, 8213,
 
-       12, 8206,   22,16398,   10,   10,   12,   15, 8213,   22,
-       15, 8213,   22,   15,   22,   15, 8213,   11,   12,   22,
-        6,    6,   12,    6,    6,   15, 8213,   22,   15, 8213,
-       22,   15, 8213,   15,   22,   15,   22,    6,    6,    8,
-        6,    5,    6,    8,   12,    8,    4,    6,    5,    6,
-        8,    8,   12,    4,    6
+       22,   15, 8213,   22,   15,   22,   15, 8213,   11,   12,
+       22,    6,    6,   12,    6,    6,   15, 8213,   22,   15,
+     8213,   22,   15, 8213,   15,   22,   15,   22,    6,    6,
+        8,    6,    5,    6,    8,   12,    8,    4,    6,    5,
+        6,    8,    8,   12,    4,    6
     } ;
 
-static yyconst flex_int16_t yy_accept[257] =
+static yyconst flex_int16_t yy_accept[252] =
     {   0,
         1,    1,    1,    2,    3,    4,    7,   12,   16,   19,
        21,   24,   27,   30,   33,   36,   39,   42,   45,   48,
        51,   54,   57,   60,   63,   66,   68,   69,   70,   71,
        73,   76,   78,   79,   80,   81,   84,   84,   85,   86,
        87,   88,   89,   90,   91,   92,   93,   94,   95,   96,
-       97,   98,   99,  100,  101,  102,  103,  105,  107,  108,
-      110,  112,  113,  114,  115,  116,  117,  118,  119,  120,
+       97,   98,   99,  100,  101,  102,  103,  104,  106,  108,
+      109,  111,  113,  114,  115,  116,  117,  118,  119,  120,
       121,  122,  123,  124,  125,  126,  127,  128,  129,  130,
-      131,  133,  135,  136,  137,  138,  142,  143,  144,  145,
-      146,  147,  148,  149,  152,  156,  157,  162,  163,  164,
+      131,  132,  133,  135,  137,  138,  139,  140,  144,  145,
+      146,  147,  148,  149,  150,  151,  156,  160,  161,  166,
 
-      169,  170,  171,  172,  173,  176,  179,  181,  183,  184,
-      186,  187,  191,  192,  193,  194,  195,  198,  201,  203,
-      205,  208,  211,  213,  215,  216,  219,  222,  224,  226,
-      227,  228,  229,  230,  233,  236,  238,  240,  241,  242,
-      244,  245,  248,  251,  253,  255,  256,  260,  265,  266,
-      269,  272,  274,  276,  277,  280,  283,  285,  287,  288,
-      289,  292,  295,  297,  299,  300,  301,  302,  304,  305,
-      306,  307,  308,  311,  314,  316,  318,  319,  320,  323,
-      326,  328,  330,  333,  336,  338,  340,  341,  342,  343,
-      344,  345,  347,  349,  350,  351,  352,  354,  355,  358,
+      167,  168,  173,  174,  175,  176,  177,  180,  183,  185,
+      187,  188,  190,  191,  195,  196,  197,  200,  203,  205,
+      207,  210,  213,  215,  217,  220,  223,  225,  227,  228,
+      231,  234,  236,  238,  239,  240,  241,  242,  245,  248,
+      250,  252,  253,  254,  256,  257,  260,  263,  265,  267,
+      268,  272,  275,  278,  280,  282,  283,  286,  289,  291,
+      293,  294,  297,  300,  302,  304,  305,  306,  309,  312,
+      314,  316,  317,  318,  319,  321,  322,  323,  324,  325,
+      328,  331,  333,  335,  336,  337,  340,  343,  345,  347,
+      348,  349,  350,  351,  353,  355,  356,  357,  358,  360,
 
-      361,  363,  365,  366,  369,  372,  374,  376,  377,  378,
-      380,  381,  385,  387,  388,  389,  391,  392,  394,  395,
-      397,  399,  400,  402,  405,  406,  408,  411,  414,  416,
-      418,  420,  421,  422,  424,  425,  426,  429,  432,  434,
-      436,  437,  438,  439,  440,  441,  442,  444,  446,  447,
-      449,  451,  452,  454,  456,  456
+      361,  364,  367,  369,  371,  372,  374,  375,  379,  381,
+      382,  383,  385,  386,  388,  389,  391,  393,  394,  396,
+      397,  399,  402,  405,  407,  409,  411,  412,  413,  415,
+      416,  417,  420,  423,  425,  427,  428,  429,  430,  431,
+      432,  433,  435,  437,  438,  440,  442,  443,  445,  447,
+      447
     } ;
 
 static yyconst flex_int32_t yy_ec[256] =
@@ -492,244 +491,238 @@ static yyconst flex_int32_t yy_meta[34] 
         1,    1,    1
     } ;
 
-static yyconst flex_int16_t yy_base[313] =
+static yyconst flex_int16_t yy_base[308] =
     {   0,
-        0,    0,  572,  560,  559,  551,   32,   35,  662,  662,
-       44,   62,   30,   40,   32,   50,  533,   49,   47,   59,
-       68,  525,   69,  517,   72,    0,  662,  515,  662,   83,
-       91,    0,    0,  100,  501,  109,    0,   78,   51,   86,
-       89,   74,   96,  105,  109,  110,  111,  112,  117,   73,
-      119,  118,  121,  120,  122,    0,  134,    0,    0,  138,
-        0,    0,  495,  130,  144,  129,  143,  145,  146,  147,
-      148,  149,  153,  154,  155,  158,  161,  165,  166,  170,
-      180,    0,    0,  662,  171,  201,  176,  175,  178,  183,
-      465,  182,  190,  455,  212,  188,  221,  208,  224,  234,
+        0,    0,  546,  538,  533,  521,   32,   35,  656,  656,
+       44,   62,   30,   41,   50,   51,  507,   64,   47,   66,
+       67,  499,   68,  487,   72,    0,  656,  465,  656,   87,
+       91,    0,    0,  100,  452,  109,    0,   74,   95,   87,
+       32,   96,  105,  110,   77,   97,   40,  113,  116,  112,
+      118,  120,  121,  122,  123,  125,    0,  137,    0,    0,
+      147,    0,    0,  449,  129,  126,  134,  143,  145,  147,
+      148,  149,  151,  153,  156,  160,  155,  167,  162,  175,
+      168,  159,  188,    0,    0,  656,  166,  197,  179,  185,
+      176,  200,  435,  186,  193,  216,  225,  205,  234,  221,
 
-      209,  230,  236,  221,  244,    0,  247,    0,  184,  248,
-      244,  269,  231,  247,  251,  258,  272,    0,  279,    0,
-      283,    0,  286,    0,  255,  290,    0,  293,    0,  270,
-      281,  455,  254,  297,    0,    0,    0,    0,  294,  662,
-      295,  308,    0,  310,    0,  257,  319,  328,  304,  331,
-        0,    0,    0,    0,  335,    0,    0,    0,    0,  316,
-      338,    0,    0,    0,    0,  333,  336,  447,  662,  429,
-      338,  340,  348,    0,    0,    0,    0,  428,  351,    0,
-      355,    0,  359,    0,  362,    0,  357,  427,  308,  369,
-      426,  662,  425,  662,  346,  365,  423,  662,  371,    0,
+      237,  247,  204,  230,  244,  213,  254,    0,  256,    0,
+      251,  258,  254,  279,  256,  259,  267,    0,  269,    0,
+      286,    0,  288,    0,  290,    0,  297,    0,  267,  299,
+        0,  301,    0,  288,  297,  421,  302,  310,    0,    0,
+        0,    0,  305,  656,  307,  319,    0,  321,    0,  322,
+      332,  335,    0,    0,    0,    0,  339,    0,    0,    0,
+        0,  342,    0,    0,    0,    0,  340,  349,    0,    0,
+        0,    0,  337,  345,  420,  656,  419,  346,  350,  358,
+        0,    0,    0,    0,  418,  360,    0,  362,    0,  417,
+      319,  369,  416,  656,  415,  656,  276,  364,  414,  656,
 
-        0,    0,    0,  378,    0,    0,    0,    0,  380,  421,
-      662,  388,  420,    0,  419,  662,  373,  418,  662,  372,
-      382,  417,  662,  398,  416,  662,  400,    0,  402,    0,
-        0,  385,  415,  662,  390,  275,  409,    0,    0,    0,
-        0,  405,  404,  406,  264,  412,  224,  129,  662,   87,
-      662,   47,  662,  662,  662,  434,  438,  441,  445,  449,
-      453,  457,  461,  465,  469,  473,  477,  481,  485,  489,
-      493,  497,  501,  505,  509,  513,  517,  521,  525,  529,
-      533,  537,  541,  545,  549,  553,  557,  561,  565,  569,
-      573,  577,  581,  585,  589,  593,  597,  601,  605,  609,
+      375,    0,    0,    0,    0,  413,  656,  384,  412,    0,
+      410,  656,  370,  409,  656,  370,  378,  408,  656,  366,
+      656,  394,    0,  396,    0,    0,  380,  316,  656,  377,
+      387,  398,    0,    0,    0,    0,  399,  402,  407,  271,
+      406,  228,  200,  656,  175,  656,   77,  656,  656,  656,
+      428,  432,  435,  439,  443,  447,  451,  455,  459,  463,
+      467,  471,  475,  479,  483,  487,  491,  495,  499,  503,
+      507,  511,  515,  519,  523,  527,  531,  535,  539,  543,
+      547,  551,  555,  559,  563,  567,  571,  575,  579,  583,
+      587,  591,  595,  599,  603,  607,  611,  615,  619,  623,
 
-      613,  617,  621,  625,  629,  633,  637,  641,  645,  649,
-      653,  657
+      627,  631,  635,  639,  643,  647,  651
     } ;
 
-static yyconst flex_int16_t yy_def[313] =
+static yyconst flex_int16_t yy_def[308] =
     {   0,
-      255,    1,  256,  256,  255,  257,  258,  258,  255,  255,
-      259,  259,   12,   12,   12,   12,   12,   12,   12,   12,
-       12,   12,   12,   12,   12,  260,  255,  257,  255,  261,
-      258,  262,  262,  263,   12,  257,  264,   12,   12,   12,
+      250,    1,  251,  251,  250,  252,  253,  253,  250,  250,
+      254,  254,   12,   12,   12,   12,   12,   12,   12,   12,
+       12,   12,   12,   12,   12,  255,  250,  252,  250,  256,
+      253,  257,  257,  258,   12,  252,  259,   12,   12,   12,
        12,   12,   12,   12,   12,   12,   12,   12,   12,   12,
-       12,   12,   12,   12,   12,  260,  261,  262,  262,  265,
-      266,  266,  255,   12,   12,   12,   12,   12,   12,   12,
+       12,   12,   12,   12,   12,   12,  255,  256,  257,  257,
+      260,  261,  261,  250,   12,   12,   12,   12,   12,   12,
        12,   12,   12,   12,   12,   12,   12,   12,   12,   12,
-      265,  266,  266,  255,   12,  267,   12,   12,   12,   12,
-       12,   12,   12,   36,  268,   12,  269,   12,   12,  270,
+       12,   12,  260,  261,  261,  250,   12,  262,   12,   12,
+       12,   12,   12,   12,   12,  263,  264,   12,  265,   12,
 
-       12,   12,   12,   12,  271,  272,  267,  272,   12,   12,
-       12,  273,   12,   12,   12,  257,  274,  275,  268,  275,
-      276,  277,  269,  277,   12,  278,  279,  270,  279,   12,
-       12,  280,   12,  271,  272,  272,  281,  281,   12,  255,
-       12,  282,  283,  273,  283,   12,  284,  285,  257,  274,
-      275,  275,  286,  286,  276,  277,  277,  287,  287,   12,
-      278,  279,  279,  288,  288,   12,   12,  289,  255,  290,
-       12,   12,  282,  283,  283,  291,  291,  292,  293,  294,
-      284,  294,  295,  296,  285,  296,  257,  297,   12,  298,
-      289,  255,  299,  255,   12,  300,  301,  255,  293,  294,
+       12,  266,   12,   12,   12,   12,  267,  268,  262,  268,
+       12,   12,   12,  269,   12,   12,  270,  271,  263,  271,
+      272,  273,  264,  273,  274,  275,  265,  275,   12,  276,
+      277,  266,  277,   12,   12,  278,   12,  267,  268,  268,
+      279,  279,   12,  250,   12,  280,  281,  269,  281,   12,
+      282,  270,  271,  271,  283,  283,  272,  273,  273,  284,
+      284,  274,  275,  275,  285,  285,   12,  276,  277,  277,
+      286,  286,   12,   12,  287,  250,  288,   12,   12,  280,
+      281,  281,  289,  289,  290,  291,  292,  282,  292,  293,
+       12,  294,  287,  250,  295,  250,   12,  296,  297,  250,
 
-      294,  302,  302,  295,  296,  296,  303,  303,  257,  304,
-      255,  305,  306,  306,  299,  255,   12,  307,  255,  307,
-      307,  301,  255,  285,  304,  255,  308,  309,  305,  309,
-      306,   12,  307,  255,  307,  307,  308,  309,  309,  310,
-      310,   12,  307,  307,  311,  307,  307,  312,  255,  307,
-      255,  312,  255,  255,    0,  255,  255,  255,  255,  255,
-      255,  255,  255,  255,  255,  255,  255,  255,  255,  255,
-      255,  255,  255,  255,  255,  255,  255,  255,  255,  255,
-      255,  255,  255,  255,  255,  255,  255,  255,  255,  255,
-      255,  255,  255,  255,  255,  255,  255,  255,  255,  255,
+      291,  292,  292,  298,  298,  299,  250,  300,  301,  301,
+      295,  250,   12,  302,  250,  302,  302,  297,  250,  299,
+      250,  303,  304,  300,  304,  301,   12,  302,  250,  302,
+      302,  303,  304,  304,  305,  305,   12,  302,  302,  306,
+      302,  302,  307,  250,  302,  250,  307,  250,  250,    0,
+      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
+      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
+      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
+      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
+      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
 
-      255,  255,  255,  255,  255,  255,  255,  255,  255,  255,
-      255,  255
+      250,  250,  250,  250,  250,  250,  250
     } ;
 
-static yyconst flex_int16_t yy_nxt[696] =
+static yyconst flex_int16_t yy_nxt[690] =
     {   0,
         6,    7,    8,    9,    6,    6,    6,    6,   10,   11,
        12,   13,   14,   15,   16,   17,   17,   18,   17,   17,
        17,   17,   19,   17,   20,   21,   22,   23,   24,   17,
        25,   17,   17,   31,   31,   32,   31,   31,   32,   35,
        33,   35,   41,   33,   28,   28,   28,   29,   34,   35,
-      249,   36,   37,   42,   43,   38,   35,   48,   35,   35,
-       35,   39,   28,   28,   28,   29,   34,   44,   35,   36,
-       37,   40,   46,   45,   65,   49,   47,   35,   35,   50,
-       52,   35,   35,   35,   54,   28,   58,   35,   55,   64,
-      254,   59,   31,   31,   32,   35,   75,   66,   35,   33,
+       35,   36,   37,   73,   42,   38,   35,   49,   68,   35,
+       35,   39,   28,   28,   28,   29,   34,   43,   45,   36,
+       37,   40,   44,   35,   46,   35,   35,   35,   51,   53,
+      244,   35,   50,   35,   55,   65,   35,   47,   56,   28,
+       59,   48,   31,   31,   32,   60,   35,   71,   67,   33,
 
-       28,   28,   28,   29,   68,   35,   48,   28,   37,   60,
-       60,   60,   61,   60,   35,   67,   60,   62,   35,   35,
-       35,   35,   72,   71,   73,   69,   35,   35,   35,   35,
-       35,   35,  253,   80,   76,   70,   28,   58,   35,   35,
-       28,   82,   59,   85,   77,   78,   83,   79,   87,   74,
-       76,   86,   35,   35,   35,   35,   35,   35,   35,   90,
-       94,   95,   35,   35,   35,   97,   88,   35,   91,   92,
-       35,   99,  100,   89,   35,   35,   93,  101,   98,   35,
-       35,  102,   28,   82,   35,   35,   96,   35,   83,  109,
-      112,   35,   35,   35,   76,   97,  110,   35,  104,   35,
+       28,   28,   28,   29,   35,   35,   35,   28,   37,   61,
+       61,   61,   62,   61,   35,   70,   61,   63,   66,   35,
+       49,   35,   35,   72,   74,   35,   69,   35,   75,   35,
+       35,   35,   35,   88,   35,   35,   82,   78,   35,   28,
+       59,   77,   87,   35,   76,   60,   80,   79,   81,   28,
+       84,   78,   35,   89,   35,   85,   35,   35,   35,   75,
+       35,   92,   35,   96,   35,   35,   90,   97,   35,   35,
+       93,   35,   94,   91,   99,   35,   35,   35,  249,  100,
+       95,  101,  102,  104,   35,   35,   98,  103,   35,  105,
+       28,   84,  111,  106,   35,   35,   85,  107,  107,   61,
 
-      103,  105,  105,   60,  106,  105,  139,  115,  105,  108,
-      111,  114,  117,  117,   60,  118,  117,   35,   35,  117,
-      120,  121,  121,   60,  122,  121,  130,  251,  121,  124,
-       35,  100,  125,   35,  126,  126,   60,  127,  126,   35,
-       35,  126,  129,  131,  132,   35,   28,  135,  133,   28,
-      137,  140,  136,   35,  147,  138,   35,   35,  148,  146,
-       35,   29,  170,   35,   35,  178,   35,  249,  141,  142,
-      142,   60,  143,  142,   28,  151,  142,  145,  219,   35,
-      152,   28,  153,  160,  149,   28,  156,  154,   28,  158,
-       35,  157,   28,  162,  159,   28,  164,  166,  163,   28,
+      108,  107,   35,  248,  107,  110,  112,  114,  113,   35,
+       75,   78,   99,   35,   35,  116,  117,  117,   61,  118,
+      117,  134,   35,  117,  120,  121,  121,   61,  122,  121,
+       35,  246,  121,  124,  125,  125,   61,  126,  125,   35,
+      137,  125,  128,  135,  102,  129,   35,  130,  130,   61,
+      131,  130,  136,   35,  130,  133,   28,  139,   28,  141,
+       35,  144,  140,   35,  142,   35,  151,   35,   35,   28,
+      153,   28,  155,  143,  244,  154,   35,  156,  145,  146,
+      146,   61,  147,  146,  150,   35,  146,  149,   28,  158,
+       28,  160,   28,  163,  159,  167,  161,   35,  164,   28,
 
-      135,  165,  244,   35,   35,  136,  171,   29,  172,  167,
-       28,  174,   28,  176,  187,  212,  175,   35,  177,  179,
-      179,   60,  180,  179,  188,   35,  179,  182,  183,  183,
-       60,  184,  183,   28,  151,  183,  186,   28,  156,  152,
-       28,  162,   35,  157,  190,   35,  163,   35,  196,   35,
-       28,  174,  189,   28,  200,   35,  175,   28,  202,  201,
-       29,   28,  205,  203,   28,  207,  195,  206,  219,  209,
-      208,   63,  214,   28,  200,  234,  220,  221,  217,  201,
-       28,  205,   35,   29,  235,  234,  206,  224,  227,  227,
-       60,  228,  227,  219,   35,  227,  230,  232,  242,  236,
+      165,   28,  169,   28,  171,  166,   35,  170,  213,  172,
+      177,   35,   28,  139,   35,  173,   35,  178,  140,  215,
+      179,   28,  181,   28,  183,  174,  208,  182,   35,  184,
+      185,   35,  186,  186,   61,  187,  186,   28,  153,  186,
+      189,   28,  158,  154,   28,  163,   35,  159,  190,   35,
+      164,   28,  169,  192,   35,   35,  191,  170,  198,   35,
+       28,  181,   28,  202,   28,  204,  182,  215,  203,  207,
+      205,   64,  210,  229,  197,  216,  217,   28,  202,   35,
+      215,  229,  230,  203,  222,  222,   61,  223,  222,   35,
+      215,  222,  225,  237,  227,  231,   28,  233,   28,  235,
 
-       28,  207,   28,  238,   28,  240,  208,  219,  239,  219,
-      241,   28,  238,  245,   35,  219,  243,  239,  219,  211,
-      198,  234,  194,  231,  226,  247,  223,  246,  216,  169,
-      211,  198,  194,  250,   26,   26,   26,   26,   28,   28,
-       28,   30,   30,   30,   30,   35,   35,   35,   35,   56,
-      192,   56,   56,   57,   57,   57,   57,   59,  169,   59,
-       59,   34,   34,   34,   34,   63,   63,  116,   63,   81,
-       81,   81,   81,   83,  113,   83,   83,  107,  107,  107,
-      107,  119,  119,  119,  119,  123,  123,  123,  123,  128,
-      128,  128,  128,  134,  134,  134,  134,  136,   84,  136,
+       28,  233,  234,  238,  236,  215,  234,  240,   35,  215,
+      215,  200,  229,  196,  239,  226,  221,  219,  212,  176,
+      207,  200,  196,  194,  176,  241,  242,  245,   26,   26,
+       26,   26,   28,   28,   28,   30,   30,   30,   30,   35,
+       35,   35,   35,   57,  115,   57,   57,   58,   58,   58,
+       58,   60,   86,   60,   60,   34,   34,   34,   34,   64,
+       64,   35,   64,   83,   83,   83,   83,   85,   29,   85,
+       85,  109,  109,  109,  109,  119,  119,  119,  119,  123,
+      123,  123,  123,  127,  127,  127,  127,  132,  132,  132,
+      132,  138,  138,  138,  138,  140,   54,  140,  140,  148,
 
-      136,  144,  144,  144,  144,  150,  150,  150,  150,  152,
-       35,  152,  152,  155,  155,  155,  155,  157,   29,  157,
-      157,  161,  161,  161,  161,  163,   53,  163,  163,  168,
-      168,  168,  168,  138,   51,  138,  138,  173,  173,  173,
-      173,  175,   35,  175,  175,  181,  181,  181,  181,  185,
-      185,  185,  185,  154,   29,  154,  154,  159,  255,  159,
-      159,  165,   27,  165,  165,  191,  191,  191,  191,  193,
-      193,  193,  193,  177,   27,  177,  177,  197,  197,  197,
-      197,  199,  199,  199,  199,  201,  255,  201,  201,  204,
-      204,  204,  204,  206,  255,  206,  206,  210,  210,  210,
+      148,  148,  148,  152,  152,  152,  152,  154,   52,  154,
+      154,  157,  157,  157,  157,  159,   35,  159,  159,  162,
+      162,  162,  162,  164,   29,  164,  164,  168,  168,  168,
+      168,  170,  250,  170,  170,  175,  175,  175,  175,  142,
+       27,  142,  142,  180,  180,  180,  180,  182,   27,  182,
+      182,  188,  188,  188,  188,  156,  250,  156,  156,  161,
+      250,  161,  161,  166,  250,  166,  166,  172,  250,  172,
+      172,  193,  193,  193,  193,  195,  195,  195,  195,  184,
+      250,  184,  184,  199,  199,  199,  199,  201,  201,  201,
+      201,  203,  250,  203,  203,  206,  206,  206,  206,  209,
 
-      210,  213,  213,  213,  213,  215,  215,  215,  215,  218,
-      218,  218,  218,  222,  222,  222,  222,  203,  255,  203,
-      203,  208,  255,  208,  208,  225,  225,  225,  225,  229,
-      229,  229,  229,  214,  255,  214,  214,  233,  233,  233,
-      233,  237,  237,  237,  237,  239,  255,  239,  239,  241,
-      255,  241,  241,  248,  248,  248,  248,  252,  252,  252,
-      252,    5,  255,  255,  255,  255,  255,  255,  255,  255,
-      255,  255,  255,  255,  255,  255,  255,  255,  255,  255,
-      255,  255,  255,  255,  255,  255,  255,  255,  255,  255,
-      255,  255,  255,  255,  255
-
+      209,  209,  209,  211,  211,  211,  211,  214,  214,  214,
+      214,  218,  218,  218,  218,  205,  250,  205,  205,  220,
+      220,  220,  220,  224,  224,  224,  224,  210,  250,  210,
+      210,  228,  228,  228,  228,  232,  232,  232,  232,  234,
+      250,  234,  234,  236,  250,  236,  236,  243,  243,  243,
+      243,  247,  247,  247,  247,    5,  250,  250,  250,  250,
+      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
+      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
+      250,  250,  250,  250,  250,  250,  250,  250,  250
     } ;
 
-static yyconst flex_int16_t yy_chk[696] =
+static yyconst flex_int16_t yy_chk[690] =
     {   0,
         1,    1,    1,    1,    1,    1,    1,    1,    1,    1,
         1,    1,    1,    1,    1,    1,    1,    1,    1,    1,
         1,    1,    1,    1,    1,    1,    1,    1,    1,    1,
         1,    1,    1,    7,    7,    7,    8,    8,    8,   13,
-        7,   15,   13,    8,   11,   11,   11,   11,   11,   14,
-      252,   11,   11,   14,   15,   11,   19,   19,   18,   16,
-       39,   11,   12,   12,   12,   12,   12,   16,   20,   12,
-       12,   12,   18,   16,   39,   20,   18,   21,   23,   21,
-       23,   25,   50,   42,   25,   30,   30,   38,   25,   38,
-      250,   30,   31,   31,   31,   40,   50,   40,   41,   31,
+        7,   41,   13,    8,   11,   11,   11,   11,   11,   47,
+       14,   11,   11,   47,   14,   11,   19,   19,   41,   15,
+       16,   11,   12,   12,   12,   12,   12,   14,   16,   12,
+       12,   12,   15,   18,   16,   20,   21,   23,   21,   23,
+      247,   25,   20,   38,   25,   38,   45,   18,   25,   30,
+       30,   18,   31,   31,   31,   30,   40,   45,   40,   31,
 
-       34,   34,   34,   34,   42,   43,   43,   34,   34,   36,
-       36,   36,   36,   36,   44,   41,   36,   36,   45,   46,
-       47,   48,   47,   46,   48,   44,   49,   52,   51,   54,
-       53,   55,  248,   54,   55,   45,   57,   57,   66,   64,
-       60,   60,   57,   64,   52,   53,   60,   53,   66,   49,
-       51,   65,   67,   65,   68,   69,   70,   71,   72,   69,
-       73,   74,   73,   74,   75,   76,   67,   76,   70,   71,
-       77,   78,   78,   68,   78,   79,   72,   78,   77,   80,
-       85,   79,   81,   81,   88,   87,   75,   89,   81,   87,
-       90,   92,   90,  109,   96,   96,   88,   96,   85,   93,
+       34,   34,   34,   34,   39,   42,   46,   34,   34,   36,
+       36,   36,   36,   36,   43,   43,   36,   36,   39,   44,
+       44,   50,   48,   46,   48,   49,   42,   51,   49,   52,
+       53,   54,   55,   66,   56,   66,   55,   56,   65,   58,
+       58,   51,   65,   67,   50,   58,   54,   53,   54,   61,
+       61,   52,   68,   67,   69,   61,   70,   71,   72,   70,
+       73,   71,   74,   75,   77,   75,   68,   76,   82,   76,
+       72,   79,   73,   69,   78,   87,   78,   81,  245,   79,
+       74,   80,   80,   81,   80,   91,   77,   80,   89,   82,
+       83,   83,   89,   87,   90,   94,   83,   88,   88,   88,
 
-       80,   86,   86,   86,   86,   86,  109,   93,   86,   86,
-       89,   92,   95,   95,   95,   95,   95,   98,  101,   95,
-       95,   97,   97,   97,   97,   97,  101,  247,   97,   97,
-      104,   99,   98,   99,  100,  100,  100,  100,  100,  102,
-      113,  100,  100,  102,  103,  103,  105,  105,  104,  107,
-      107,  110,  105,  111,  114,  107,  114,  110,  115,  113,
-      115,  116,  133,  133,  125,  146,  146,  245,  111,  112,
-      112,  112,  112,  112,  117,  117,  112,  112,  236,  130,
-      117,  119,  119,  125,  116,  121,  121,  119,  123,  123,
-      131,  121,  126,  126,  123,  128,  128,  130,  126,  134,
+       88,   88,   95,  243,   88,   88,   90,   92,   91,   92,
+       95,   98,   98,  103,   98,   94,   96,   96,   96,   96,
+       96,  103,  106,   96,   96,   97,   97,   97,   97,   97,
+      100,  242,   97,   97,   99,   99,   99,   99,   99,  104,
+      106,   99,   99,  104,  101,  100,  101,  102,  102,  102,
+      102,  102,  105,  105,  102,  102,  107,  107,  109,  109,
+      111,  112,  107,  113,  109,  115,  116,  112,  116,  117,
+      117,  119,  119,  111,  240,  117,  129,  119,  113,  114,
+      114,  114,  114,  114,  115,  197,  114,  114,  121,  121,
+      123,  123,  125,  125,  121,  129,  123,  134,  125,  127,
 
-      134,  128,  236,  139,  141,  134,  139,  149,  141,  131,
-      142,  142,  144,  144,  149,  189,  142,  189,  144,  147,
-      147,  147,  147,  147,  160,  160,  147,  147,  148,  148,
-      148,  148,  148,  150,  150,  148,  148,  155,  155,  150,
-      161,  161,  166,  155,  167,  167,  161,  171,  172,  172,
-      173,  173,  166,  179,  179,  195,  173,  181,  181,  179,
-      187,  183,  183,  181,  185,  185,  171,  183,  196,  187,
-      185,  190,  190,  199,  199,  220,  196,  196,  195,  199,
-      204,  204,  217,  209,  220,  221,  204,  209,  212,  212,
-      212,  212,  212,  235,  232,  212,  212,  217,  232,  221,
+      127,  130,  130,  132,  132,  127,  135,  130,  197,  132,
+      137,  137,  138,  138,  143,  134,  145,  143,  138,  228,
+      145,  146,  146,  148,  148,  135,  191,  146,  191,  148,
+      150,  150,  151,  151,  151,  151,  151,  152,  152,  151,
+      151,  157,  157,  152,  162,  162,  173,  157,  167,  167,
+      162,  168,  168,  174,  174,  178,  173,  168,  179,  179,
+      180,  180,  186,  186,  188,  188,  180,  198,  186,  220,
+      188,  192,  192,  216,  178,  198,  198,  201,  201,  213,
+      230,  217,  216,  201,  208,  208,  208,  208,  208,  227,
+      231,  208,  208,  227,  213,  217,  222,  222,  224,  224,
 
-      224,  224,  227,  227,  229,  229,  224,  243,  227,  244,
-      229,  237,  237,  242,  242,  246,  235,  237,  233,  225,
-      222,  218,  215,  213,  210,  244,  197,  243,  193,  191,
-      188,  178,  170,  246,  256,  256,  256,  256,  257,  257,
-      257,  258,  258,  258,  258,  259,  259,  259,  259,  260,
-      168,  260,  260,  261,  261,  261,  261,  262,  132,  262,
-      262,  263,  263,  263,  263,  264,  264,   94,  264,  265,
-      265,  265,  265,  266,   91,  266,  266,  267,  267,  267,
-      267,  268,  268,  268,  268,  269,  269,  269,  269,  270,
-      270,  270,  270,  271,  271,  271,  271,  272,   63,  272,
+      232,  232,  222,  230,  224,  238,  232,  237,  237,  241,
+      239,  218,  214,  211,  231,  209,  206,  199,  195,  193,
+      190,  185,  177,  175,  136,  238,  239,  241,  251,  251,
+      251,  251,  252,  252,  252,  253,  253,  253,  253,  254,
+      254,  254,  254,  255,   93,  255,  255,  256,  256,  256,
+      256,  257,   64,  257,  257,  258,  258,  258,  258,  259,
+      259,   35,  259,  260,  260,  260,  260,  261,   28,  261,
+      261,  262,  262,  262,  262,  263,  263,  263,  263,  264,
+      264,  264,  264,  265,  265,  265,  265,  266,  266,  266,
+      266,  267,  267,  267,  267,  268,   24,  268,  268,  269,
 
-      272,  273,  273,  273,  273,  274,  274,  274,  274,  275,
-       35,  275,  275,  276,  276,  276,  276,  277,   28,  277,
-      277,  278,  278,  278,  278,  279,   24,  279,  279,  280,
-      280,  280,  280,  281,   22,  281,  281,  282,  282,  282,
-      282,  283,   17,  283,  283,  284,  284,  284,  284,  285,
-      285,  285,  285,  286,    6,  286,  286,  287,    5,  287,
-      287,  288,    4,  288,  288,  289,  289,  289,  289,  290,
-      290,  290,  290,  291,    3,  291,  291,  292,  292,  292,
-      292,  293,  293,  293,  293,  294,    0,  294,  294,  295,
-      295,  295,  295,  296,    0,  296,  296,  297,  297,  297,
+      269,  269,  269,  270,  270,  270,  270,  271,   22,  271,
+      271,  272,  272,  272,  272,  273,   17,  273,  273,  274,
+      274,  274,  274,  275,    6,  275,  275,  276,  276,  276,
+      276,  277,    5,  277,  277,  278,  278,  278,  278,  279,
+        4,  279,  279,  280,  280,  280,  280,  281,    3,  281,
+      281,  282,  282,  282,  282,  283,    0,  283,  283,  284,
+        0,  284,  284,  285,    0,  285,  285,  286,    0,  286,
+      286,  287,  287,  287,  287,  288,  288,  288,  288,  289,
+        0,  289,  289,  290,  290,  290,  290,  291,  291,  291,
+      291,  292,    0,  292,  292,  293,  293,  293,  293,  294,
 
-      297,  298,  298,  298,  298,  299,  299,  299,  299,  300,
-      300,  300,  300,  301,  301,  301,  301,  302,    0,  302,
-      302,  303,    0,  303,  303,  304,  304,  304,  304,  305,
-      305,  305,  305,  306,    0,  306,  306,  307,  307,  307,
-      307,  308,  308,  308,  308,  309,    0,  309,  309,  310,
-        0,  310,  310,  311,  311,  311,  311,  312,  312,  312,
-      312,  255,  255,  255,  255,  255,  255,  255,  255,  255,
-      255,  255,  255,  255,  255,  255,  255,  255,  255,  255,
-      255,  255,  255,  255,  255,  255,  255,  255,  255,  255,
-      255,  255,  255,  255,  255
-
+      294,  294,  294,  295,  295,  295,  295,  296,  296,  296,
+      296,  297,  297,  297,  297,  298,    0,  298,  298,  299,
+      299,  299,  299,  300,  300,  300,  300,  301,    0,  301,
+      301,  302,  302,  302,  302,  303,  303,  303,  303,  304,
+        0,  304,  304,  305,    0,  305,  305,  306,  306,  306,
+      306,  307,  307,  307,  307,  250,  250,  250,  250,  250,
+      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
+      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
+      250,  250,  250,  250,  250,  250,  250,  250,  250
     } ;
 
 #define YY_TRAILING_MASK 0x2000
@@ -885,7 +878,7 @@ static int vdev_and_devtype(DiskParseCon
 #define DPC ((DiskParseContext*)yyextra)
 
 
-#line 889 "libxlu_disk_l.c"
+#line 882 "libxlu_disk_l.c"
 
 #define INITIAL 0
 #define LEXERR 1
@@ -1126,7 +1119,7 @@ YY_DECL
 
  /*----- the scanner rules which do the parsing -----*/
 
-#line 1130 "libxlu_disk_l.c"
+#line 1123 "libxlu_disk_l.c"
 
 	if ( !yyg->yy_init )
 		{
@@ -1190,14 +1183,14 @@ yy_match:
 			while ( yy_chk[yy_base[yy_current_state] + yy_c] != yy_current_state )
 				{
 				yy_current_state = (int) yy_def[yy_current_state];
-				if ( yy_current_state >= 256 )
+				if ( yy_current_state >= 251 )
 					yy_c = yy_meta[(unsigned int) yy_c];
 				}
 			yy_current_state = yy_nxt[yy_base[yy_current_state] + (unsigned int) yy_c];
 			*yyg->yy_state_ptr++ = yy_current_state;
 			++yy_cp;
 			}
-		while ( yy_current_state != 255 );
+		while ( yy_current_state != 250 );
 
 yy_find_action:
 		yy_current_state = *--yyg->yy_state_ptr;
@@ -1331,22 +1324,33 @@ case 14:
 YY_RULE_SETUP
 #line 191 "libxlu_disk_l.l"
 {
-		    STRIP(':');
+                    STRIP(':');
                     DPC->had_depr_prefix=1; DEPRECATE("use `script=...'");
-		    SAVESTRING("script", script, yytext);
-		}
+                    if (DPC->disk->script) {
+                        if (*DPC->disk->script) {
+                            xlu__disk_err(DPC,yytext,"script respecified");
+                            return 0;
+                        }
+                        /* do not complain about overwriting empty strings */
+                        free(DPC->disk->script);
+                    }
+                    DPC->disk->script = malloc(strlen("block-")
+                                               +strlen(yytext) + 1);
+                    strcpy(DPC->disk->script, "block-");
+                    strcat(DPC->disk->script, yytext);
+                }
 	YY_BREAK
 case 15:
 *yy_cp = yyg->yy_hold_char; /* undo effects of setting up yytext */
 yyg->yy_c_buf_p = yy_cp = yy_bp + 8;
 YY_DO_BEFORE_ACTION; /* set up yytext again */
 YY_RULE_SETUP
-#line 197 "libxlu_disk_l.l"
+#line 208 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
 case 16:
 YY_RULE_SETUP
-#line 198 "libxlu_disk_l.l"
+#line 209 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
 case 17:
@@ -1354,7 +1358,7 @@ case 17:
 yyg->yy_c_buf_p = yy_cp = yy_bp + 4;
 YY_DO_BEFORE_ACTION; /* set up yytext again */
 YY_RULE_SETUP
-#line 199 "libxlu_disk_l.l"
+#line 210 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
 case 18:
@@ -1362,7 +1366,7 @@ case 18:
 yyg->yy_c_buf_p = yy_cp = yy_bp + 6;
 YY_DO_BEFORE_ACTION; /* set up yytext again */
 YY_RULE_SETUP
-#line 200 "libxlu_disk_l.l"
+#line 211 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
 case 19:
@@ -1370,7 +1374,7 @@ case 19:
 yyg->yy_c_buf_p = yy_cp = yy_bp + 5;
 YY_DO_BEFORE_ACTION; /* set up yytext again */
 YY_RULE_SETUP
-#line 201 "libxlu_disk_l.l"
+#line 212 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
 case 20:
@@ -1378,13 +1382,13 @@ case 20:
 yyg->yy_c_buf_p = yy_cp = yy_bp + 4;
 YY_DO_BEFORE_ACTION; /* set up yytext again */
 YY_RULE_SETUP
-#line 202 "libxlu_disk_l.l"
+#line 213 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
 case 21:
 /* rule 21 can match eol */
 YY_RULE_SETUP
-#line 204 "libxlu_disk_l.l"
+#line 215 "libxlu_disk_l.l"
 {
 		  xlu__disk_err(DPC,yytext,"unknown deprecated disk prefix");
 		  return 0;
@@ -1394,7 +1398,7 @@ YY_RULE_SETUP
 case 22:
 /* rule 22 can match eol */
 YY_RULE_SETUP
-#line 211 "libxlu_disk_l.l"
+#line 222 "libxlu_disk_l.l"
 {
     STRIP(',');
 
@@ -1423,7 +1427,7 @@ YY_RULE_SETUP
 	YY_BREAK
 case 23:
 YY_RULE_SETUP
-#line 237 "libxlu_disk_l.l"
+#line 248 "libxlu_disk_l.l"
 {
     BEGIN(LEXERR);
     yymore();
@@ -1431,17 +1435,17 @@ YY_RULE_SETUP
 	YY_BREAK
 case 24:
 YY_RULE_SETUP
-#line 241 "libxlu_disk_l.l"
+#line 252 "libxlu_disk_l.l"
 {
     xlu__disk_err(DPC,yytext,"bad disk syntax"); return 0;
 }
 	YY_BREAK
 case 25:
 YY_RULE_SETUP
-#line 244 "libxlu_disk_l.l"
+#line 255 "libxlu_disk_l.l"
 YY_FATAL_ERROR( "flex scanner jammed" );
 	YY_BREAK
-#line 1445 "libxlu_disk_l.c"
+#line 1449 "libxlu_disk_l.c"
 			case YY_STATE_EOF(INITIAL):
 			case YY_STATE_EOF(LEXERR):
 				yyterminate();
@@ -1705,7 +1709,7 @@ static int yy_get_next_buffer (yyscan_t 
 		while ( yy_chk[yy_base[yy_current_state] + yy_c] != yy_current_state )
 			{
 			yy_current_state = (int) yy_def[yy_current_state];
-			if ( yy_current_state >= 256 )
+			if ( yy_current_state >= 251 )
 				yy_c = yy_meta[(unsigned int) yy_c];
 			}
 		yy_current_state = yy_nxt[yy_base[yy_current_state] + (unsigned int) yy_c];
@@ -1729,11 +1733,11 @@ static int yy_get_next_buffer (yyscan_t 
 	while ( yy_chk[yy_base[yy_current_state] + yy_c] != yy_current_state )
 		{
 		yy_current_state = (int) yy_def[yy_current_state];
-		if ( yy_current_state >= 256 )
+		if ( yy_current_state >= 251 )
 			yy_c = yy_meta[(unsigned int) yy_c];
 		}
 	yy_current_state = yy_nxt[yy_base[yy_current_state] + (unsigned int) yy_c];
-	yy_is_jam = (yy_current_state == 255);
+	yy_is_jam = (yy_current_state == 250);
 	if ( ! yy_is_jam )
 		*yyg->yy_state_ptr++ = yy_current_state;
 
@@ -2533,4 +2537,4 @@ void xlu__disk_yyfree (void * ptr , yysc
 
 #define YYTABLES_NAME "yytables"
 
-#line 244 "libxlu_disk_l.l"
+#line 255 "libxlu_disk_l.l"
diff -r 006588b1bbb6 -r 84f0686ebcbf tools/libxl/libxlu_disk_l.h
--- a/tools/libxl/libxlu_disk_l.h	Thu Jul 26 17:11:07 2012 +0100
+++ b/tools/libxl/libxlu_disk_l.h	Wed Aug 01 10:57:21 2012 +0100
@@ -340,7 +340,7 @@ extern int xlu__disk_yylex (yyscan_t yys
 #undef YY_DECL
 #endif
 
-#line 244 "libxlu_disk_l.l"
+#line 255 "libxlu_disk_l.l"
 
 #line 346 "libxlu_disk_l.h"
 #undef xlu__disk_yyIN_HEADER
diff -r 006588b1bbb6 -r 84f0686ebcbf tools/libxl/libxlu_disk_l.l
--- a/tools/libxl/libxlu_disk_l.l	Thu Jul 26 17:11:07 2012 +0100
+++ b/tools/libxl/libxlu_disk_l.l	Wed Aug 01 10:57:21 2012 +0100
@@ -188,11 +188,22 @@ target=.*	{ STRIP(','); SAVESTRING("targ
                     setformat(DPC, yytext);
                  }
 
-iscsi:|e?nbd:drbd:/.* {
-		    STRIP(':');
+(iscsi|e?nbd|drbd):/.* {
+                    STRIP(':');
                     DPC->had_depr_prefix=1; DEPRECATE("use `script=...'");
-		    SAVESTRING("script", script, yytext);
-		}
+                    if (DPC->disk->script) {
+                        if (*DPC->disk->script) {
+                            xlu__disk_err(DPC,yytext,"script respecified");
+                            return 0;
+                        }
+                        /* do not complain about overwriting empty strings */
+                        free(DPC->disk->script);
+                    }
+                    DPC->disk->script = malloc(strlen("block-")
+                                               +strlen(yytext) + 1);
+                    strcpy(DPC->disk->script, "block-");
+                    strcat(DPC->disk->script, yytext);
+                }
 
 tapdisk:/.*	{ DPC->had_depr_prefix=1; DEPRECATE(0); }
 tap2?:/.*	{ DPC->had_depr_prefix=1; DEPRECATE(0); }

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 10:00:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 10:00:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwVj5-0001iS-DP; Wed, 01 Aug 2012 10:00:23 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SwVj3-0001i8-DW
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 10:00:21 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1343814944!6136032!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24576 invoked from network); 1 Aug 2012 09:55:48 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 09:55:48 -0000
X-IronPort-AV: E=Sophos;i="4.77,693,1336348800"; d="scan'208,217";a="13798638"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 09:55:44 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Wed, 1 Aug 2012
	10:55:44 +0100
Message-ID: <1343814942.27221.45.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Wed, 1 Aug 2012 10:55:42 +0100
In-Reply-To: <20502.48288.351664.168722@mariner.uk.xensource.com>
References: <20502.48288.351664.168722@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] docs: document hotplug script protocol
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> diff --git a/docs/misc/xl-disk-configuration.txt b/docs/misc/xl-disk-configuration.txt
> index 5da5e11..86c16be 100644
> --- a/docs/misc/xl-disk-configuration.txt
> +++ b/docs/misc/xl-disk-configuration.txt
> @@ -160,7 +160,10 @@ script=<script>
>  ---------------
>  
>  Specifies that <target> is not a normal host path, but rather
> -information to be interpreted by /etc/xen/scripts/block-<script>.
> +information to be interpreted by the executable program <script>,
> +(looked for in /etc/xen/scripts, if it doesn't contain a slash).
> +
> +These scripts are normally called "block-<script>".
>  
> 
> 
> @@ -204,7 +207,7 @@ Supported values:      iscsi:  nbd:  enbd:  drbd:
>  In xend and old versions of libxl it was necessary to specify the
>  "script" (see above) with a prefix.  For compatibility, these four
>  prefixes are recognised as specifying the corresponding script.  They
> -are equivalent to "script=<script>".
> +are equivalent to "script=block-<script>".
>  
> 
>  <deprecated-prefix>:

Should I pull these hunks into my block-script patch?



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 10:07:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 10:07:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwVpE-000238-7a; Wed, 01 Aug 2012 10:06:44 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SwVpD-000233-1V
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 10:06:43 +0000
Received: from [85.158.143.35:19793] by server-1.bemta-4.messagelabs.com id
	18/81-24392-2BFF8105; Wed, 01 Aug 2012 10:06:42 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1343815563!5435020!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9473 invoked from network); 1 Aug 2012 10:06:04 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 10:06:04 -0000
X-IronPort-AV: E=Sophos;i="4.77,693,1336348800"; d="scan'208";a="13798826"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 10:05:19 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Wed, 1 Aug 2012
	11:05:19 +0100
Message-ID: <1343815518.27221.49.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Date: Wed, 1 Aug 2012 11:05:18 +0100
In-Reply-To: <84f0686ebcbfb0fa3a43.1343815057@cosworth.uk.xensource.com>
References: <84f0686ebcbfb0fa3a43.1343815057@cosworth.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH V2] libxl: support custom block hotplug
 scripts
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-08-01 at 10:57 +0100, Ian Campbell wrote:
> I have tested with a custom block script which uses "lvchange -a" to
> dynamically add remove the referenced device (simulates iSCSI login/logout
> without requiring me to faff around setting up an iSCSI target).

Script below for reference. Configured with:
 'access=w,vdev=xvde,script=block-lvm,target=VG=VG LV=trash'

Once you've done the initial "lvchange -a n /dev/VG/trash" then you
should find that /dev/VG/trash exists only when the domain exists.

I deliberately chose a target with = in it to validate the "target= eats
to end of line" use case. As expected it works: the params node ends up
as "VG=VG LV=trash". This also exercised the behaviour of not stat()ing
the device in libxl.
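
(Aside, not part of the patch: the reason the embedded = survives is that
the script's evalVariables call word-splits the params value on whitespace
and evals each NAME=VALUE pair. A minimal stand-alone sketch of that step,
outside block-common.sh:)

```shell
# Hypothetical illustration of evalVariables-style parsing of the
# target string from the config above:
params="VG=VG LV=trash"
for kv in $params; do          # word-split on whitespace
    eval "$kv"                 # sets VG=VG, then LV=trash
done
DEV=/dev/$VG/$LV
echo "$DEV"                    # prints /dev/VG/trash
```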

8<-------------------------------------------


#!/bin/bash

dir=$(dirname "$0")
. "$dir/block-common.sh"

p=$(xenstore_read "$XENBUS_PATH/params")
evalVariables $p

exec 1>>/tmp/block-lvm.log
DEV=/dev/$VG/$LV

echo block-lvm $command on `date`
echo VG=$VG LV=$LV DEV=$DEV

case "$command" in
  add)
    if [ -e $DEV ] ; then
        fatal "$DEV already active, disable with \`lvchange -a n $DEV\'"
    fi
    lvchange -a y $DEV
    write_dev $DEV
    exit 0
    ;;
  remove)
    lvchange -a n $DEV
    exit 0
    ;;
esac



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 10:10:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 10:10:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwVs0-000294-TI; Wed, 01 Aug 2012 10:09:36 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SwVrz-00028v-R5
	for xen-devel@lists.xensource.com; Wed, 01 Aug 2012 10:09:36 +0000
Received: from [85.158.143.35:46908] by server-2.bemta-4.messagelabs.com id
	8F/90-17938-F5009105; Wed, 01 Aug 2012 10:09:35 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1343815770!4829895!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20097 invoked from network); 1 Aug 2012 10:09:30 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 10:09:30 -0000
X-IronPort-AV: E=Sophos;i="4.77,693,1336348800"; d="scan'208";a="13798937"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 10:09:29 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 1 Aug 2012 11:09:29 +0100
Date: Wed, 1 Aug 2012 11:09:12 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <20120731182551.GA1559@phenom.dumpdata.com>
Message-ID: <alpine.DEB.2.02.1208011108590.4645@kaball.uk.xensource.com>
References: <1343743223-30092-1-git-send-email-konrad.wilk@oracle.com>
	<1343743223-30092-3-git-send-email-konrad.wilk@oracle.com>
	<alpine.DEB.2.02.1207311542520.4645@kaball.uk.xensource.com>
	<20120731182551.GA1559@phenom.dumpdata.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "fujita.tomonori@lab.ntt.co.jp" <fujita.tomonori@lab.ntt.co.jp>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"JBeulich@suse.com" <JBeulich@suse.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 2/5] xen/swiotlb: With more than 4GB on
 64-bit, disable the native SWIOTLB.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 31 Jul 2012, Konrad Rzeszutek Wilk wrote:
> commit 21ef55f4ab2b6d63eb0ed86abbc959d31377853b
> Author: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Date:   Fri Jul 27 20:16:00 2012 -0400
> 
>     xen/swiotlb: With more than 4GB on 64-bit, disable the native SWIOTLB.
>     
>     If a PV guest is booted the native SWIOTLB should not be
>     turned on. It does not help us (we don't have any PCI devices)
>     and it eats 64MB of good memory. In the case of PV guests
>     with PCI devices we need the Xen-SWIOTLB one.
>    
>     [v1: Rewrite comment per Stefano's suggestion] 
>     Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>


Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

> diff --git a/arch/x86/xen/pci-swiotlb-xen.c b/arch/x86/xen/pci-swiotlb-xen.c
> index b6a5340..1c17227 100644
> --- a/arch/x86/xen/pci-swiotlb-xen.c
> +++ b/arch/x86/xen/pci-swiotlb-xen.c
> @@ -8,6 +8,11 @@
>  #include <xen/xen.h>
>  #include <asm/iommu_table.h>
>  
> +#ifdef CONFIG_X86_64
> +#include <asm/iommu.h>
> +#include <asm/dma.h>
> +#endif
> +
>  int xen_swiotlb __read_mostly;
>  
>  static struct dma_map_ops xen_swiotlb_dma_ops = {
> @@ -49,6 +54,15 @@ int __init pci_xen_swiotlb_detect(void)
>  	 * the 'swiotlb' flag is the only one turning it on. */
>  	swiotlb = 0;
>  
> +#ifdef CONFIG_X86_64
> +	/* pci_swiotlb_detect_4gb turns on native SWIOTLB if no_iommu == 0
> +	 * (so no iommu=X command line over-writes).
> +	 * Considering that PV guests do not want the *native SWIOTLB* but
> +	 * only Xen SWIOTLB it is not useful to us so set no_iommu=1 here.
> +	 */
> +	if (max_pfn > MAX_DMA32_PFN)
> +		no_iommu = 1;
> +#endif
>  	return xen_swiotlb;
>  }
>  
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 10:11:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 10:11:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwVt6-0002Dv-BQ; Wed, 01 Aug 2012 10:10:44 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SwVt4-0002Dk-Ia
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 10:10:42 +0000
Received: from [85.158.138.51:35209] by server-9.bemta-3.messagelabs.com id
	FC/91-27628-1A009105; Wed, 01 Aug 2012 10:10:41 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-174.messagelabs.com!1343815841!25808433!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20752 invoked from network); 1 Aug 2012 10:10:41 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-10.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 10:10:41 -0000
X-IronPort-AV: E=Sophos;i="4.77,693,1336348800"; d="scan'208";a="13798975"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 10:10:41 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Wed, 1 Aug 2012
	11:10:41 +0100
Message-ID: <1343815839.27221.50.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Tamas Lengyel <tamas.k.lengyel@gmail.com>
Date: Wed, 1 Aug 2012 11:10:39 +0100
In-Reply-To: <CABfawhmrJ=Cb37AqGgyEEXXmtyyTyui0h6-29iqEybvbNVsXxQ@mail.gmail.com>
References: <CABfawh=yoidWLbcYqs4JOD+b30vxYrrT1Q7a2QBNttwx4U9=Ug@mail.gmail.com>
	<CABfawh=1NC-VypsYLNr-J6EkvRS8PBXO5spF8w9GQdaUaso+jQ@mail.gmail.com>
	<1343809281.27221.14.camel@zakaz.uk.xensource.com>
	<CABfawhmrJ=Cb37AqGgyEEXXmtyyTyui0h6-29iqEybvbNVsXxQ@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] libxl config datastructures
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-08-01 at 11:06 +0100, Tamas Lengyel wrote:
> Hi Ian,
> thanks for the quick reply. 
> 
> 
> >  Are you writing your own toolstack which strives for
> xm compatibility?
> 
> 
> In essence, yes. My goal is to checkpoint a running VM and be able to
> restore the VM with a slightly modified configuration (which is
> currently XM style). The config modification I can do easily with
> XLU_Config, but turning that into libxl_domain_config would require,
> as you said, to write my own translator.

Why not add this functionality to xl directly if it is useful?

Actually "xl restore" and "xl migrate" already support passing in an
updated configuration file to use on the other end, IIRC.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 10:15:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 10:15:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwVxF-0002Sk-1C; Wed, 01 Aug 2012 10:15:01 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <tamas.k.lengyel@gmail.com>) id 1SwVpU-00023x-TE
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 10:07:01 +0000
Received: from [85.158.143.35:23539] by server-3.bemta-4.messagelabs.com id
	AE/72-01511-4CFF8105; Wed, 01 Aug 2012 10:07:00 +0000
X-Env-Sender: tamas.k.lengyel@gmail.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1343815618!16246938!1
X-Originating-IP: [209.85.213.173]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_50_60,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15301 invoked from network); 1 Aug 2012 10:06:59 -0000
Received: from mail-yx0-f173.google.com (HELO mail-yx0-f173.google.com)
	(209.85.213.173)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 10:06:59 -0000
Received: by yenl1 with SMTP id l1so7924234yen.32
	for <xen-devel@lists.xen.org>; Wed, 01 Aug 2012 03:06:58 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=6gLc2ujRUdCJJbn3aBRONUlT5r5Atmz8h1v9q61tI5I=;
	b=IlQuaNHDjVMeel4NqelndX4ZG0n2pCSa2d1j772d6i+jO/jknrInLEgF1ZViFuoov4
	tDwfuYYLWzxdSCGoGXz0Dzpn1RNTkJ8Y2mBcg2I+dMjL34SJtQDLNaBg7gwLJwjRmEn6
	vpARApp4ym10UClYSfgSyyD1osTHJEnfb5TOdrWoXArP9WmEi6JjqmPjhD/0ol5uAJXs
	TRMWNvraxLPRtjahKBlm/SxoyQWrsMjr13ETWtVlxQixhDf1mVwu/Vlq2ReueSg7hF5c
	4ouyl4g2xRxAkh/6ajRrbf2LTOZR1/KlPhv5N4PQnmKV2o+Ic2agQPs9DgY2z2yYWwuH
	lmHQ==
MIME-Version: 1.0
Received: by 10.50.170.65 with SMTP id ak1mr4685173igc.43.1343815617620; Wed,
	01 Aug 2012 03:06:57 -0700 (PDT)
Received: by 10.64.126.199 with HTTP; Wed, 1 Aug 2012 03:06:57 -0700 (PDT)
In-Reply-To: <1343809281.27221.14.camel@zakaz.uk.xensource.com>
References: <CABfawh=yoidWLbcYqs4JOD+b30vxYrrT1Q7a2QBNttwx4U9=Ug@mail.gmail.com>
	<CABfawh=1NC-VypsYLNr-J6EkvRS8PBXO5spF8w9GQdaUaso+jQ@mail.gmail.com>
	<1343809281.27221.14.camel@zakaz.uk.xensource.com>
Date: Wed, 1 Aug 2012 06:06:57 -0400
Message-ID: <CABfawhmrJ=Cb37AqGgyEEXXmtyyTyui0h6-29iqEybvbNVsXxQ@mail.gmail.com>
From: Tamas Lengyel <tamas.k.lengyel@gmail.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
X-Mailman-Approved-At: Wed, 01 Aug 2012 10:15:00 +0000
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] libxl config datastructures
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5222377758237263627=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============5222377758237263627==
Content-Type: multipart/alternative; boundary=e89a8f23595b882b0d04c63174ab

--e89a8f23595b882b0d04c63174ab
Content-Type: text/plain; charset=ISO-8859-1

Hi Ian,
thanks for the quick reply.

>  Are you writing your own toolstack which strives for xm compatibility?

In essence, yes. My goal is to checkpoint a running VM and be able to
restore the VM with a slightly modified configuration (which is currently
XM style). The config modification I can do easily with XLU_Config, but
turning that into libxl_domain_config would require, as you said, to write
my own translator.

> The parsing of xl/xm style configuration files is specific to the toolstack
and therefore belongs in xl.

While I understand the design decision now for keeping config formats
general, since the code is already written for xl, might as well let other
developers access it through libxlu when it's convenient for them. It would
make (at least my) life easier.

Thanks,
Tamas

--e89a8f23595b882b0d04c63174ab--


--===============5222377758237263627==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5222377758237263627==--


From xen-devel-bounces@lists.xen.org Wed Aug 01 10:20:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 10:20:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwW2Q-0002cR-SG; Wed, 01 Aug 2012 10:20:22 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SwW2P-0002cG-7r
	for xen-devel@lists.xensource.com; Wed, 01 Aug 2012 10:20:21 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1343816372!1822825!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19067 invoked from network); 1 Aug 2012 10:19:35 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 10:19:35 -0000
X-IronPort-AV: E=Sophos;i="4.77,693,1336348800"; d="scan'208";a="13799227"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 10:19:27 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 1 Aug 2012 11:19:27 +0100
Date: Wed, 1 Aug 2012 11:19:09 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: <qemu-devel@nongnu.org>
Message-ID: <alpine.DEB.2.02.1208011114490.4645@kaball.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: xen-devel@lists.xensource.com, Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	fantonifabio@tiscali.it
Subject: [Xen-devel] [PATCH] fix Xen compilation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

xen_pt_unregister_device is used as PCIUnregisterFunc, so it should
match the type.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

diff --git a/hw/xen_pt.c b/hw/xen_pt.c
index fdf68aa..307119a 100644
--- a/hw/xen_pt.c
+++ b/hw/xen_pt.c
@@ -764,7 +764,7 @@ out:
     return 0;
 }
 
-static int xen_pt_unregister_device(PCIDevice *d)
+static void xen_pt_unregister_device(PCIDevice *d)
 {
     XenPCIPassthroughState *s = DO_UPCAST(XenPCIPassthroughState, dev, d);
     uint8_t machine_irq = s->machine_irq;
@@ -814,8 +814,6 @@ static int xen_pt_unregister_device(PCIDevice *d)
     memory_listener_unregister(&s->memory_listener);
 
     xen_host_pci_device_put(&s->real_device);
-
-    return 0;
 }
 
 static Property xen_pci_passthrough_properties[] = {

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 10:29:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 10:29:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwWAQ-0002md-Qq; Wed, 01 Aug 2012 10:28:38 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SwWAP-0002mY-Io
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 10:28:37 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1343816883!9278564!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10753 invoked from network); 1 Aug 2012 10:28:05 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 10:28:05 -0000
X-IronPort-AV: E=Sophos;i="4.77,693,1336348800"; d="scan'208";a="13799389"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 10:28:03 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Wed, 1 Aug 2012
	11:28:03 +0100
Message-ID: <1343816881.27221.51.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Date: Wed, 1 Aug 2012 11:28:01 +0100
In-Reply-To: <alpine.DEB.2.02.1207261715230.26163@kaball.uk.xensource.com>
References: <1343057360.5797.59.camel@zakaz.uk.xensource.com>
	<1343057475-9258-2-git-send-email-ian.campbell@citrix.com>
	<alpine.DEB.2.02.1207261715230.26163@kaball.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "Tim \(Xen.org\)" <tim@xen.org>, David Vrabel <david.vrabel@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH V2 2/3] libxc: add ARM support to xc_dom (PV
 domain building)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-07-26 at 17:30 +0100, Stefano Stabellini wrote:
> On Mon, 23 Jul 2012, Ian Campbell wrote:
> > Includes ARM zImage support.
> >
> > Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> > ---
> > v2:
> >  - add comments for rambase_pfn and p2m_host to clarify usage
> >  - remove opencoded guest base address of 0x80000000 (still hardcoded, but now
> >    with a comment!).
> >  - comments around Linux boot protocol setup.
> 
> It looks better now.
> I think we should take it as is, and then replace ntohl with
> be32_to_cpu when we have it.

OK. I've applied this whole series to my "arm-for-4.3" branch, which I'll
announce shortly.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 10:29:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 10:29:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwWAq-0002nk-6J; Wed, 01 Aug 2012 10:29:04 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SwWAp-0002nd-DU
	for xen-devel@lists.xensource.com; Wed, 01 Aug 2012 10:29:03 +0000
Received: from [85.158.143.99:33969] by server-2.bemta-4.messagelabs.com id
	C2/2A-17938-EE409105; Wed, 01 Aug 2012 10:29:02 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-216.messagelabs.com!1343816942!18054149!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23598 invoked from network); 1 Aug 2012 10:29:02 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 10:29:02 -0000
X-IronPort-AV: E=Sophos;i="4.77,693,1336348800"; d="scan'208";a="13799418"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 10:29:02 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Wed, 1 Aug 2012
	11:29:02 +0100
Message-ID: <1343816940.27221.52.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Date: Wed, 1 Aug 2012 11:29:00 +0100
In-Reply-To: <alpine.DEB.2.02.1207261949150.26163@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1207261949150.26163@kaball.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>, Tim
	Deegan <Tim.Deegan@xen.org>
Subject: Re: [Xen-devel] [PATCH v3 0/4] xen/arm: grant_table
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-07-26 at 19:53 +0100, Stefano Stabellini wrote:
> Hi all,
> this patch series implements the basic mechanisms needed by the grant
> table to work properly.

I've applied all 4 of these to my "arm-for-4.3" branch, which I'll
announce shortly.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 10:33:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 10:33:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwWEr-00032g-S0; Wed, 01 Aug 2012 10:33:13 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SwWEr-00032X-3J
	for xen-devel@lists.xensource.com; Wed, 01 Aug 2012 10:33:13 +0000
Received: from [85.158.143.35:36397] by server-3.bemta-4.messagelabs.com id
	12/0F-01511-8E509105; Wed, 01 Aug 2012 10:33:12 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1343817190!16253182!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 841 invoked from network); 1 Aug 2012 10:33:10 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 10:33:10 -0000
X-IronPort-AV: E=Sophos;i="4.77,693,1336348800"; d="scan'208";a="13799512"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 10:33:10 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Wed, 1 Aug 2012
	11:33:10 +0100
Message-ID: <1343817188.27221.53.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Wed, 1 Aug 2012 11:33:08 +0100
In-Reply-To: <1342795744-3768-4-git-send-email-stefano.stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1207201539380.26163@kaball.uk.xensource.com>
	<1342795744-3768-1-git-send-email-stefano.stabellini@eu.citrix.com>
	<1342795744-3768-4-git-send-email-stefano.stabellini@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>, "Tim
	\(Xen.org\)" <tim@xen.org>
Subject: Re: [Xen-devel] [PATCH v3 4/4] libxc/arm: allocate xenstore and
	console pages
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-07-20 at 15:49 +0100, Stefano Stabellini wrote:
> Allocate two additional pages at the end of the guest physical memory
> for xenstore and console.
> Set HVM_PARAM_STORE_PFN and HVM_PARAM_CONSOLE_PFN to the corresponding
> values.

Applied to arm-for-xen-4.3.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 10:52:07 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 10:52:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwWWZ-0003KM-OS; Wed, 01 Aug 2012 10:51:31 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1SwWWX-0003KH-Vv
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 10:51:30 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1343817322!11689487!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjY5Nzc4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2156 invoked from network); 1 Aug 2012 10:35:30 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-13.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 1 Aug 2012 10:35:30 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q71AZG9U008383
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 1 Aug 2012 10:35:17 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q71AZGPu028189
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 1 Aug 2012 10:35:16 GMT
Received: from abhmt101.oracle.com (abhmt101.oracle.com [141.146.116.53])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q71AZFcD009371; Wed, 1 Aug 2012 05:35:15 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 01 Aug 2012 03:35:15 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 49DC0402B2; Wed,  1 Aug 2012 06:26:15 -0400 (EDT)
Date: Wed, 1 Aug 2012 06:26:15 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: James Regan <jamregan1@yahoo.com>, anthony.perard@citrix.com
Message-ID: <20120801102615.GA7227@phenom.dumpdata.com>
References: <1341776002.45483.YahooMailNeo@web114518.mail.gq1.yahoo.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1341776002.45483.YahooMailNeo@web114518.mail.gq1.yahoo.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen pci passthru support for pci-e 4kB config space
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, Jul 08, 2012 at 12:33:22PM -0700, James Regan wrote:
> Hi:
> 
> I have successfully leased (passed through) an ATI HD 6450 to a Win7 HVM
> domU as the primary gfx, but I don't seem to have full functionality
> (e.g. HDCP).  Currently, I can see the 256-byte PCI configuration space
> in the domU (say, if I leased the card to a Linux guest and ran
> lspci -xxxx), but is there a way to pass the full 4 KB PCI-e
> configuration space to an HVM guest?  My theory is that Win7 utilizes
> this full configuration space to complete the HDCP handshake.  I am
> running Xen 4.1.1-r2 on a Gentoo dom0 (kernel 3.3.8) with a VT-d,
> VT-x enabled chipset (MSI Z68 and Intel i7 2600S).  I have applied
> the ATI gfx primary GPU patches.  Any thoughts?  Thank you.

It sounds like it would require using the upstream version of QEMU,
which has a PCIe subsystem. Perhaps Anthony might know more.

> 

> Jim

> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 10:52:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 10:52:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwWX8-0003Lt-4m; Wed, 01 Aug 2012 10:52:06 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1SwWX6-0003Lh-V1
	for xen-devel@lists.xensource.com; Wed, 01 Aug 2012 10:52:05 +0000
Received: from [85.158.143.35:28168] by server-3.bemta-4.messagelabs.com id
	37/81-01511-45A09105; Wed, 01 Aug 2012 10:52:04 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-10.tower-21.messagelabs.com!1343818318!10339443!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDY3MjQwNA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3434 invoked from network); 1 Aug 2012 10:51:59 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-10.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 1 Aug 2012 10:51:59 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q71Apesm023175
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 1 Aug 2012 10:51:41 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q71Apdxq023155
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 1 Aug 2012 10:51:39 GMT
Received: from abhmt109.oracle.com (abhmt109.oracle.com [141.146.116.61])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q71Apctr008462; Wed, 1 Aug 2012 05:51:38 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 01 Aug 2012 03:51:38 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 62F2F402B2; Wed,  1 Aug 2012 06:42:37 -0400 (EDT)
Date: Wed, 1 Aug 2012 06:42:37 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20120801104237.GB7227@phenom.dumpdata.com>
References: <alpine.DEB.2.02.1207251741470.26163@kaball.uk.xensource.com>
	<1343316846-25860-1-git-send-email-stefano.stabellini@eu.citrix.com>
	<20120726163020.GB9222@phenom.dumpdata.com>
	<alpine.DEB.2.02.1207271246080.26163@kaball.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.02.1207271246080.26163@kaball.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linaro-dev@lists.linaro.org" <linaro-dev@lists.linaro.org>,
	Ian Campbell <Ian.Campbell@citrix.com>, "arnd@arndb.de" <arnd@arndb.de>,
	"catalin.marinas@arm.com" <catalin.marinas@arm.com>,
	"Tim \(Xen.org\)" <tim@xen.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [PATCH 01/24] arm: initial Xen support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> > > +struct pvclock_wall_clock {
> > > +	u32   version;
> > > +	u32   sec;
> > > +	u32   nsec;
> > > +} __attribute__((__packed__));
> > 
> > That is weird. It is 4+4+4 = 12 bytes? Don't you want it to be 16 bytes?
> 
> I agree that 16 bytes would be a better choice, but it needs to match
> the struct in Xen that is defined as follows:
> 
>     uint32_t wc_version;      /* Version counter: see vcpu_time_info_t. */
>     uint32_t wc_sec;          /* Secs  00:00:00 UTC, Jan 1, 1970.  */
>     uint32_t wc_nsec;         /* Nsecs 00:00:00 UTC, Jan 1, 1970.  */

Would it make sense to at least add some padding then? In both
cases? Or is it too late for this?

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 11:07:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 11:07:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwWkw-0003is-He; Wed, 01 Aug 2012 11:06:22 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <kristian@hagsted.dk>) id 1SwWku-0003in-Ug
	for xen-devel@lists.xensource.com; Wed, 01 Aug 2012 11:06:21 +0000
Received: from [85.158.143.35:48992] by server-3.bemta-4.messagelabs.com id
	33/9B-01511-CAD09105; Wed, 01 Aug 2012 11:06:20 +0000
X-Env-Sender: kristian@hagsted.dk
X-Msg-Ref: server-16.tower-21.messagelabs.com!1343819179!15612411!1
X-Originating-IP: [80.160.77.98]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogODAuMTYwLjc3Ljk4ID0+IDI3OTg3OQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4740 invoked from network); 1 Aug 2012 11:06:19 -0000
Received: from pasmtpb.tele.dk (HELO pasmtpB.tele.dk) (80.160.77.98)
	by server-16.tower-21.messagelabs.com with SMTP;
	1 Aug 2012 11:06:19 -0000
Received: from hagsted.dk (unknown [2.108.99.186])
	by pasmtpB.tele.dk (Postfix) with ESMTP id 262E6101004A;
	Wed,  1 Aug 2012 13:06:18 +0200 (CEST)
Received: from Hagsted-Aserver.hagsted.dk ([192.168.1.98]) by hagsted-aserver
	([192.168.1.98]) with mapi; Wed, 1 Aug 2012 12:19:33 +0200
From: Kristian Hagsted Rasmussen <kristian@hagsted.dk>
To: "fantonifabio@tiscali.it" <fantonifabio@tiscali.it>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Date: Wed, 1 Aug 2012 12:19:31 +0200
Thread-Topic: [Xen-devel] Bug report about Windows 7 pro 64 bit domU on
	xen-unstable dom0 with qemu traditional
Thread-Index: Ac1vxCosba0oSUyUSRqwrugv/2Ny5wACQYtK
Message-ID: <56EBEBACEA93434C80F27BB249CB677601929FFFD935@hagsted-aserver>
References: <5018FAE5.6070305@tiscali.it>
In-Reply-To: <5018FAE5.6070305@tiscali.it>
Accept-Language: da-DK
Content-Language: da-DK
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: da-DK
MIME-Version: 1.0
Subject: Re: [Xen-devel] Bug report about Windows 7 pro 64 bit domU on
 xen-unstable dom0 with qemu traditional
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>Because the save / restore and cd-insert on qemu-xen for now are not
>working I had to fall back on qemu-xen-traditional.
>Dom0 is Wheezy 64 bit with xen 4.2.0-rc1 from source
>I tested Windows 7 pro 64 bit domU with gplpv (last build 357).
>xl create is working
>xl shutdown is working
>vnc is working, but with vfb line on configuration file is not working
>save/restore is working but on restore network is up but not working
>cdrom is not working, see empty device also if there is cdrom

I have seen the same problem with the CD-ROM in Windows 7 64-bit. When I check Device Manager,
the CD or DVD drive is listed with the right size of the media, but the filesystem is listed as RAW.
If I exchange the real device (/dev/sr0) for an iso file, Windows is able to read from the cd-drive
without problems. I am also using qemu-xen-traditional because I use PCI passthrough. For me the
problem is there both with and without pv-drivers installed.
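For reference, the swap described above amounts to changing the cdrom entry in
the domU disk line; a sketch, with the iso path purely illustrative:

```shell
# Illustrative only: the disk line from the config below, first with the
# physical drive and then with an iso image in its place.
#
#   disk=['/mnt/vm/disks/W7.disk1.xm,raw,hda,rw','/dev/sr0,raw,hdb,ro,cdrom']
#   disk=['/mnt/vm/disks/W7.disk1.xm,raw,hda,rw','/srv/iso/test.iso,raw,hdb,ro,cdrom']
```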

>After there are the content of domU configuration file and log, if you
>need more information tell me and I will post it.

>------------------------------------------------------------------------------------------------
>/etc/xen/W7.cfg
>------------------------------------------------------------------------------------------------
>name='W7'
>builder="hvm"
>memory=2048
>vcpus=2
>vif=['bridge=xenbr0']
>#vfb=['vnc=1,vncunused=1,vnclisten=0.0.0.0,keymap=it']
>disk=['/mnt/vm/disks/W7.disk1.xm,raw,hda,rw','/dev/sr0,raw,hdb,ro,cdrom']
>boot='cd'
>device_model_version="qemu-xen-traditional"
>vnc=1
>vncunused=1
>vnclisten="0.0.0.0"
>keymap="it"
>#spice=1
>#spicehost="0.0.0.0"
>spiceport=6000
>#spicepasswd='test'
>#spicedisable_ticketing=1
>#spiceagent_mouse = 0
>#on_poweroff="destroy"
>on_reboot="restart"
>on_crash="destroy"
>#stdvga=0
>#qxl=1
>------------------------------------------------------------------------------------------------

>------------------------------------------------------------------------------------------------
>/var/log/xen/qemu-dm-W7.log
>------------------------------------------------------------------------------------------------
>domid: 9
>-videoram option does not work with cirrus vga device model. Videoram
>set to 4M.
>Using file /dev/xen/blktap-2/tapdev0 in read-write mode
>Using file /dev/sr0 in read-only mode
>Watching /local/domain/0/device-model/9/logdirty/cmd
>Watching /local/domain/0/device-model/9/command
>Watching /local/domain/9/cpu
>qemu_map_cache_init nr_buckets = 10000 size 4194304
>shared page at pfn feffd
>buffered io page at pfn feffb
>Guest uuid = 6dd9db74-ec31-47a4-a2af-43e382d2ec21
>populating video RAM at ff000000
>mapping video RAM from ff000000
>Register xen platform.
>Done register platform.
>platform_fixed_ioport: changed ro/rw state of ROM memory area. now is rw
>state.
>xs_read(/local/domain/0/device-model/9/xen_extended_power_mgmt): read error
>xs_read(): vncpasswd get error.
>/vm/6dd9db74-ec31-47a4-a2af-43e382d2ec21/vncpasswd.
>medium change watch on `hdb' (index: 1): /dev/sr0
>I/O request not ready: 0, ptr: 0, port: 0, data: 0, count: 0, size: 0
>Log-dirty: no command yet.
>I/O request not ready: 0, ptr: 0, port: 0, data: 0, count: 0, size: 0
>vcpu-set: watch node error.
>xs_read(/local/domain/9/log-throttling): read error
>qemu: ignoring not-understood drive `/local/domain/9/log-throttling'
>medium change watch on `/local/domain/9/log-throttling' - unknown
>device, ignored
>cirrus vga map change while on lfb mode
>mapping vram to f0000000 - f0400000
>platform_fixed_ioport: changed ro/rw state of ROM memory area. now is rw
>state.
>platform_fixed_ioport: changed ro/rw state of ROM memory area. now is ro
>state.
>Unknown PV product 2 loaded in guest
>PV driver build 1
>region type 1 at [c100,c200).
>region type 0 at [f3001000,f3001100).
>squash iomem [f3001000, f3001100).
>------------------------------------------------------------------------------------------------

BR Kristian Hagsted
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 11:15:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 11:15:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwWt3-0003vL-Gb; Wed, 01 Aug 2012 11:14:45 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SwWt2-0003vG-Ct
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 11:14:44 +0000
Received: from [85.158.139.83:46500] by server-3.bemta-5.messagelabs.com id
	06/E5-03367-3AF09105; Wed, 01 Aug 2012 11:14:43 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-2.tower-182.messagelabs.com!1343819682!29697769!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26646 invoked from network); 1 Aug 2012 11:14:43 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-2.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 11:14:43 -0000
X-IronPort-AV: E=Sophos;i="4.77,693,1336348800"; d="scan'208";a="13800421"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 11:13:20 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 1 Aug 2012 12:13:20 +0100
Date: Wed, 1 Aug 2012 12:13:03 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: David Erickson <halcyon1981@gmail.com>
In-Reply-To: <CANKx4w_9sX9hsgSkwV6Vb8xSVQ_4n+_sFjfXSDEPHQ1seH=9=g@mail.gmail.com>
Message-ID: <alpine.DEB.2.02.1208011150330.4645@kaball.uk.xensource.com>
References: <CACA08Dwza8TjixUYz1o6PWzfaPwfz1jMiGVNyZE9KcJZ_Ln2oA@mail.gmail.com>
	<CANKx4w9XJGnD5u4hN+RhyOy7OUmF5nh-G8549ER1W19jy__u3Q@mail.gmail.com>
	<CANKx4w8Azvf1s0aqGSRSn9_f-Zq_uL_CAN2rfh5kd=udsT2YDg@mail.gmail.com>
	<CACA08Dw0B3xK_Lr6WQiCSgDewt9ED6GPm-_KaTZzQarsELPrcg@mail.gmail.com>
	<CANKx4w8Y0Uddmre-nsm0TT+GgP3AMiCuoXfbc2-2L+PG3TUgbg@mail.gmail.com>
	<CANKx4w9O0bW=GJknP=hnYW5nUAas8AyKhDVjXGMmgeBkRBon5w@mail.gmail.com>
	<CAHyyzzTMX4wcg+DNEL91dWmo0R-6oGJLNH5O50bSUeHkmTWAwQ@mail.gmail.com>
	<CANKx4w9BVBrmQwaCtJr6CsZ6OK=jV+pb35QtNLHp+Jp6=739aA@mail.gmail.com>
	<1342682536.18848.50.camel@dagon.hellion.org.uk>
	<alpine.DEB.2.02.1207191258580.23783@kaball.uk.xensource.com>
	<CANKx4w-UKwt9H45N9Kbox708OBaQEqG3niELp593h2P7oc+pjw@mail.gmail.com>
	<CANKx4w9=X7iXvQBrORgh6aNHEQ--+4QeFwXDBG0HV-DYk7HpxQ@mail.gmail.com>
	<alpine.DEB.2.02.1207311119500.4645@kaball.uk.xensource.com>
	<CANKx4w_9sX9hsgSkwV6Vb8xSVQ_4n+_sFjfXSDEPHQ1seH=9=g@mail.gmail.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Anthony Perard <anthony.perard@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	jacek burghardt <jaceksburghardt@gmail.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] LSI SAS2008 Option Rom Failure
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 31 Jul 2012, David Erickson wrote:
> On Tue, Jul 31, 2012 at 4:39 AM, Stefano Stabellini
> <stefano.stabellini@eu.citrix.com> wrote:
> > On Tue, 31 Jul 2012, David Erickson wrote:
> >> Just got back in town, following up on the prior discussion.  I
> >> successfully compiled the latest code (25688 and qemu upstream
> >> 5e3bc7144edd6e4fa2824944e5eb16c28197dd5a), but am still having
> >> problems during initialization of the card in the guest, in particular
> >> the unsupported delivery mode 3 which seems to cause interrupt related
> >> problems during init.  I've again attached the qemu-dm-log, and xl
> >> dmesg log files, and additionally screenshots of the guest dmesg and
> >> also for comparison starting the same livecd natively on the box.
> >
> > "unsupported delivery mode 3" means that the Linux guest is trying to
> > remap the MSI onto an event channel but Xen is still trying to deliver
> > the MSI using the emulated code path anyway.
> >
> > Adding
> >
> > #define XEN_PT_LOGGING_ENABLED 1
> >
> > at the top of hw/xen_pt.h and posting the additional QEMU logs could
> > be helpful.
> >
> > The full Xen logs might also be useful. I would add some more tracing to
> > the hypervisor too:
> >
> > diff --git a/xen/drivers/passthrough/io.c b/xen/drivers/passthrough/io.c
> > index b5975d1..08f4ab7 100644
> > --- a/xen/drivers/passthrough/io.c
> > +++ b/xen/drivers/passthrough/io.c
> > @@ -474,6 +474,11 @@ static void hvm_pci_msi_assert(
> >  {
> >      struct pirq *pirq = dpci_pirq(pirq_dpci);
> >
> > +    printk("DEBUG %s pirq=%d hvm_domain_use_pirq=%d emuirq=%d\n", __func__,
> > +            pirq->pirq,
> > +            hvm_domain_use_pirq(d, pirq),
> > +            pirq->arch.hvm.emuirq);
> > +
> >      if ( hvm_domain_use_pirq(d, pirq) )
> >          send_guest_pirq(d, pirq);
> >      else
> 
> Hi Stefano-
> I made the modifications (it looks like that define hasn't been used
> in a while; it caused a few compilation issues, and I had to prefix most of
> the logged variables with s->hostaddr), and am attaching the
> qemu-dm-ubuntu.log and the dmesg from xl.  You referred to full Xen logs;
> where do I find those?

Thanks for the logs!
You can get the full Xen logs from the serial console, but you can also
grab the last few lines with "xl dmesg", as you did; that seems to be
enough in this case.


The initial MSI remapping has been done:

[00:05.0] msi_msix_setup: requested pirq 4 for MSI-X (vec: 0, entry: 0)
[00:05.0] msi_msix_update: Updating MSI-X with pirq 4 gvec 0 gflags 0x3037 (entry: 0)

But the guest is not issuing the EVTCHNOP_bind_pirq hypercall that is
necessary to be able to receive event notifications (emuirq=-1 in the
Xen logs).

Now we need to figure out why; we still need more logs, this time on the
guest side.
What kernel version are you using in the guest?
Could you please add "debug loglevel=9" to the guest kernel command line
and then post the guest dmesg again?
It would be great if you could use the emulated serial to get the logs
rather than a picture. You can do that by adding serial='pty' to the VM
config file and console=ttyS0 to the guest kernel command line.
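The serial-console setup above can be sketched as a config fragment; the grub
file location is an assumption and depends on the guest distribution:

```shell
# In the domU config file:
#   serial='pty'
# On the guest kernel command line (for a Debian-style guest, e.g. via
# GRUB_CMDLINE_LINUX in /etc/default/grub followed by update-grub):
#   console=ttyS0 debug loglevel=9
# Then attach to the emulated serial from dom0 and capture the output:
#   xl console <domain> | tee guest-serial.log
```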
This additional Xen change could also tell us if the EVTCHNOP_bind_pirq
has been done:


diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index 53777f8..d65a97a 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -405,6 +405,8 @@ static long evtchn_bind_pirq(evtchn_bind_pirq_t *bind)
 #ifdef CONFIG_X86
     if ( is_hvm_domain(d) && domain_pirq_to_irq(d, pirq) > 0 )
         map_domain_emuirq_pirq(d, pirq, IRQ_PT);
+    printk("DEBUG %s %d pirq=%d irq=%d emuirq=%d\n", __func__, __LINE__,
+            pirq, domain_pirq_to_irq(d, pirq), domain_pirq_to_emuirq(d, pirq));
 #endif
 
  out:

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> >
> > Adding
> >
> > #define XEN_PT_LOGGING_ENABLED 1
> >
> > at the top of hw/xen_pt.h and posting the additional QEMU logs could
> > be helpful.
> >
> > The full Xen logs might also be useful. I would add some more tracing to
> > the hypervisor too:
> >
> > diff --git a/xen/drivers/passthrough/io.c b/xen/drivers/passthrough/io.c
> > index b5975d1..08f4ab7 100644
> > --- a/xen/drivers/passthrough/io.c
> > +++ b/xen/drivers/passthrough/io.c
> > @@ -474,6 +474,11 @@ static void hvm_pci_msi_assert(
> >  {
> >      struct pirq *pirq = dpci_pirq(pirq_dpci);
> >
> > +    printk("DEBUG %s pirq=%d hvm_domain_use_pirq=%d emuirq=%d\n", __func__,
> > +            pirq->pirq,
> > +            hvm_domain_use_pirq(d, pirq),
> > +            pirq->arch.hvm.emuirq);
> > +
> >      if ( hvm_domain_use_pirq(d, pirq) )
> >          send_guest_pirq(d, pirq);
> >      else
> 
> Hi Stefano-
> I made the modifications (it looks like that #define hasn't been used
> in a while; it caused a few compilation issues, and I had to prefix
> most of the logged variables with s->hostaddr), and am attaching
> qemu-dm-ubuntu.log and the dmesg from xl.  You referred to full Xen
> logs; where do I find those?

Thanks for the logs!
You can get the full Xen logs from the serial console, but you can also
grab the last few lines with "xl dmesg", as you did; that seems to be
enough in this case.


The initial MSI remapping has been done:

[00:05.0] msi_msix_setup: requested pirq 4 for MSI-X (vec: 0, entry: 0)
[00:05.0] msi_msix_update: Updating MSI-X with pirq 4 gvec 0 gflags 0x3037 (entry: 0)

But the guest is not issuing the EVTCHNOP_bind_pirq hypercall that is
necessary to be able to receive event notifications (emuirq=-1 in the
Xen logs).

Now we need to figure out why: we still need more logs, this time on the
guest side.
What is the kernel version that you are using in the guest?
Could you please add "debug loglevel=9" to the guest kernel command line
and then post the guest dmesg again? 
It would be great if you could use the emulated serial to get the logs
rather than a picture. You can do that by adding serial='pty' to the VM
config file and console=ttyS0 to the guest command line.
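For reference, a minimal sketch of what that looks like (assuming xl
config syntax; for an HVM guest booting a livecd, the console=ttyS0 part
is typed at the guest's own boot prompt or set in its bootloader rather
than in the VM config file):

```
# VM config file: expose the emulated serial port as a host pty
serial = 'pty'

# Guest kernel command line (at the guest bootloader / livecd prompt):
#   console=ttyS0 debug loglevel=9
```

After boot, "xl console <domain>" (or the pty named in the logs) should
then show the guest serial output.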
This additional Xen change could also tell us if the EVTCHNOP_bind_pirq
has been done:


diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index 53777f8..d65a97a 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -405,6 +405,8 @@ static long evtchn_bind_pirq(evtchn_bind_pirq_t *bind)
 #ifdef CONFIG_X86
     if ( is_hvm_domain(d) && domain_pirq_to_irq(d, pirq) > 0 )
         map_domain_emuirq_pirq(d, pirq, IRQ_PT);
+    printk("DEBUG %s %d pirq=%d irq=%d emuirq=%d\n", __func__, __LINE__,
+            pirq, domain_pirq_to_irq(d, pirq), domain_pirq_to_emuirq(d, pirq));
 #endif
 
  out:

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 11:27:07 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 11:27:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwX4S-0004Bp-AX; Wed, 01 Aug 2012 11:26:32 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SwX4R-0004Bj-5u
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 11:26:31 +0000
Received: from [85.158.138.51:40124] by server-4.bemta-3.messagelabs.com id
	AC/8F-29069-66219105; Wed, 01 Aug 2012 11:26:30 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-174.messagelabs.com!1343820389!29830768!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13033 invoked from network); 1 Aug 2012 11:26:29 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 11:26:29 -0000
X-IronPort-AV: E=Sophos;i="4.77,693,1336348800"; d="scan'208";a="13800690"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 11:26:29 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Wed, 1 Aug 2012
	12:26:29 +0100
Message-ID: <1343820387.27221.66.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Olaf Hering <olaf@aepfle.de>
Date: Wed, 1 Aug 2012 12:26:27 +0100
In-Reply-To: <20120731125021.GA11261@aepfle.de>
References: <ccbebdbe44da04604081.1343666104@probook.site>
	<1343723296.15432.62.camel@zakaz.uk.xensource.com>
	<20120731125021.GA11261@aepfle.de>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] stubdom: disable parallel build
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-07-31 at 13:50 +0100, Olaf Hering wrote:
> On Tue, Jul 31, Ian Campbell wrote:
> 
> > I suspect something is not quite right in the top level Makefile WRT the
> > dependencies between tools and stubdom builds.
> 
> Doing a s@$(CROSS_MAKE)@$(MAKE) DESTDIR=@g in stubdom/Makefile fixes it
> for me:

Can anyone who understands Make explain why this should make a
difference?

I would have expected that
	FOO := $(BAR) baz
...
	$(FOO) bif
ought to be identical to
	$(BAR) baz bif

Is that not the case?

Ian.
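For what it's worth, the expansion identity Ian describes does hold for
simply-expanded (:=) variables; a toy check (hypothetical file under
/tmp, not the actual stubdom Makefile):

```shell
# A simply-expanded (:=) variable behaves like pasting its value in
# place: both recipe lines below run the identical command.
printf 'BAR := echo\nFOO := $(BAR) baz\n\nall:\n\t$(FOO) bif\n\t$(BAR) baz bif\n' > /tmp/expand-demo.mk
make -s -f /tmp/expand-demo.mk
# Both recipe lines print the same thing: "baz bif"
```

So the difference Olaf saw is unlikely to be variable expansion itself;
note his substitution also appends DESTDIR= (clearing any inherited
DESTDIR), which is a behavioural change in its own right, and recipe
lines invoking $(MAKE) get special sub-make handling (MAKEFLAGS and
jobserver passing) that may interact here.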


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 11:33:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 11:33:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwXB5-0004Wm-5J; Wed, 01 Aug 2012 11:33:23 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SwXB4-0004Wg-29
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 11:33:22 +0000
Received: from [85.158.143.35:48983] by server-3.bemta-4.messagelabs.com id
	80/B1-01511-10419105; Wed, 01 Aug 2012 11:33:21 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1343820800!15605832!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8255 invoked from network); 1 Aug 2012 11:33:20 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 11:33:20 -0000
X-IronPort-AV: E=Sophos;i="4.77,693,1336348800"; d="scan'208";a="13800807"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 11:33:19 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Wed, 1 Aug 2012
	12:33:19 +0100
Message-ID: <1343820798.27221.71.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: xen-devel <xen-devel@lists.xen.org>
Date: Wed, 1 Aug 2012 12:33:18 +0100
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Tim Deegan <tim@xen.org>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	David Vrabel <david.vrabel@citrix.com>
Subject: [Xen-devel] Git branch for ARM patches pending for 4.3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

We've got a lot of patches floating about which are a necessary baseline
for ongoing ARM port work. Now that we have released 4.2.0-rc1 I think
it is inappropriate to keep committing ARM patches to mainline (even
those which touch only ARM code; for a while I've not been committing
ARM patches which touch generic bits).

Rather than have everyone collate all the necessary patches themselves,
I have created a branch where I intend to collect patches which are
basically ready but need to wait for the 4.3 dev cycle to open before
they can go in (with a couple of exceptions for necessary but
still-in-the-HACK-phase patches).

The branch is:
        git://xenbits.xen.org/people/ianc/xen-unstable.git arm-for-4.3
and is based off:
        git://xenbits.xen.org/people/aperard/xen.git staging

The arm-for-4.3 branch contains patches which will need to be swapped
out for non-HACK versions at some point and is therefore potentially
rebasing.
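Since a rebasing branch rewrites history, consumers would typically
refresh with a fetch plus hard reset rather than a merge. A
self-contained sketch of that workflow (throwaway repos under a temp
dir, hypothetical commit names; not the actual xenbits tree):

```shell
# Following a branch that rebases: fetch + hard reset instead of merge.
set -e
tmp=$(mktemp -d)

git init -q "$tmp/upstream"
git -C "$tmp/upstream" -c user.name=u -c user.email=u@example.com \
    commit -q --allow-empty -m 'arm patch v1'

git clone -q "$tmp/upstream" "$tmp/consumer"

# Upstream rebases: here the commit is simply rewritten in place.
git -C "$tmp/upstream" -c user.name=u -c user.email=u@example.com \
    commit -q --amend --allow-empty -m 'arm patch v2'

# The consumer cannot fast-forward; it re-fetches and resets to the new tip.
git -C "$tmp/consumer" fetch -q origin
git -C "$tmp/consumer" reset -q --hard origin/HEAD
git -C "$tmp/consumer" log --oneline -1   # shows 'arm patch v2'
```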

In fact I'm considering rebasing as a matter of course (rather than
merging) such that we have a branch which is current against 4.3 when we
come to sweep it in. Opinions from the potential consumers of the branch
will be considered ;-)

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 11:47:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 11:47:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwXOV-0004z7-3R; Wed, 01 Aug 2012 11:47:15 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SwXOU-0004z2-B6
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 11:47:14 +0000
Received: from [85.158.143.99:33844] by server-1.bemta-4.messagelabs.com id
	BD/35-24392-14719105; Wed, 01 Aug 2012 11:47:13 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-216.messagelabs.com!1343821633!28841903!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25640 invoked from network); 1 Aug 2012 11:47:13 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-13.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 11:47:13 -0000
X-IronPort-AV: E=Sophos;i="4.77,693,1336348800"; d="scan'208";a="13801117"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 11:47:12 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Wed, 1 Aug 2012
	12:47:12 +0100
Message-ID: <1343821631.27221.75.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Wed, 1 Aug 2012 12:47:11 +0100
In-Reply-To: <20504.7228.128503.451291@mariner.uk.xensource.com>
References: <20504.7228.128503.451291@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v3] libxl: enforce prohibitions of internal
 callers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-07-31 at 18:56 +0100, Ian Jackson wrote:
> Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
> Cc: Roger Pau Monne <roger.pau@citrix.com>
> Acked-by: Ian Campbell <ian.campbell@citrix.com>

Applied.

> -

Can you make this "---"? Then git am does the right thing...
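For context: git am (via git mailinfo) treats a line consisting of
exactly three dashes as the end of the commit message, so reviewer
notes and the diffstat placed below it stay out of the recorded commit.
A sketch of the convention (hypothetical body content):

```
Subject: [PATCH v3] libxl: enforce prohibitions of internal callers

Commit message body.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
v3: notes for reviewers; git am discards everything from "---" to the diff
 tools/libxl/libxl.c | 2 +-
 (diffstat, then the diff itself, follow here)
```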



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 11:47:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 11:47:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwXOj-00051B-Ec; Wed, 01 Aug 2012 11:47:29 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SwXOh-00050p-P7
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 11:47:27 +0000
Received: from [85.158.143.99:34728] by server-3.bemta-4.messagelabs.com id
	D4/DD-01511-F4719105; Wed, 01 Aug 2012 11:47:27 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-216.messagelabs.com!1343821639!23357071!3
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19627 invoked from network); 1 Aug 2012 11:47:26 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-10.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 11:47:26 -0000
X-IronPort-AV: E=Sophos;i="4.77,693,1336348800"; d="scan'208";a="13801125"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 11:47:26 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Wed, 1 Aug 2012
	12:47:26 +0100
Message-ID: <1343821645.27221.78.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Wed, 1 Aug 2012 12:47:25 +0100
In-Reply-To: <20504.748.620597.428570@mariner.uk.xensource.com>
References: <patchbomb.1343749916@andrewcoop.uk.xensource.com>
	<7012e0d68f3b10be6bdd.1343749917@andrewcoop.uk.xensource.com>
	<20504.748.620597.428570@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>,
	"Keir \(Xen.org\)" <keir@xen.org>, Jan Beulich <jbeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 1 of 5] tools/ocaml: ignore and clean .spot
 and .spit files
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-07-31 at 17:08 +0100, Ian Jackson wrote:
> Andrew Cooper writes ("[PATCH 1 of 5] tools/ocaml: ignore and clean .spot and .spit files"):
> > Newer ocaml toolchains generate .spot and .spit files which are ocaml metadata
> > about their respective source files.
> > 
> > Add them to the clean rules as well as the .{hg,git}ignore files.
> > 
> > Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> 
> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
> 
> This is good for 4.2 IMO

Agreed, applied.




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-07-31 at 17:08 +0100, Ian Jackson wrote:
> Andrew Cooper writes ("[PATCH 1 of 5] tools/ocaml: ignore and clean .spot and .spit files"):
> > Newer ocaml toolchains generate .spot and .spit files which are ocaml metadata
> > about their respective source files.
> > 
> > Add them to the clean rules as well as the .{hg,git}ignore files.
> > 
> > Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> 
> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
> 
> This is good for 4.2 IMO

Agreed, applied.




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 11:47:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 11:47:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwXOc-000504-KX; Wed, 01 Aug 2012 11:47:22 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SwXOa-0004zr-Tg
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 11:47:21 +0000
Received: from [85.158.143.99:34235] by server-3.bemta-4.messagelabs.com id
	11/AD-01511-84719105; Wed, 01 Aug 2012 11:47:20 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-216.messagelabs.com!1343821639!23357071!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19199 invoked from network); 1 Aug 2012 11:47:19 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-10.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 11:47:19 -0000
X-IronPort-AV: E=Sophos;i="4.77,693,1336348800"; d="scan'208";a="13801120"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 11:47:19 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Wed, 1 Aug 2012
	12:47:19 +0100
Message-ID: <1343821638.27221.76.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Wed, 1 Aug 2012 12:47:18 +0100
In-Reply-To: <20504.900.812718.65308@mariner.uk.xensource.com>
References: <patchbomb.1343749916@andrewcoop.uk.xensource.com>
	<2d5cac99caf7572c2a3c.1343749918@andrewcoop.uk.xensource.com>
	<20504.900.812718.65308@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>,
	"Keir \(Xen.org\)" <keir@xen.org>, Jan Beulich <jbeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 2 of 5] tools/config: Allow building of
 components to be controlled from .config
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-07-31 at 17:10 +0100, Ian Jackson wrote:
> Andrew Cooper writes ("[PATCH 2 of 5] tools/config: Allow building of components to be controlled from .config"):
> > For build systems which build certain Xen components separately, allow certain
> > components to be conditionally built based on .config, rather than always
> > building them.
> > 
> > This patch allows qemu and blktap to be configured in this manner.
> 
> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

Applied.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 11:47:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 11:47:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwXOh-00050f-0J; Wed, 01 Aug 2012 11:47:27 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SwXOf-00050N-Ox
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 11:47:25 +0000
Received: from [85.158.143.99:21004] by server-2.bemta-4.messagelabs.com id
	36/6C-17938-D4719105; Wed, 01 Aug 2012 11:47:25 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-216.messagelabs.com!1343821639!23357071!2
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19570 invoked from network); 1 Aug 2012 11:47:24 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-10.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 11:47:24 -0000
X-IronPort-AV: E=Sophos;i="4.77,693,1336348800"; d="scan'208";a="13801122"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 11:47:24 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Wed, 1 Aug 2012
	12:47:24 +0100
Message-ID: <1343821643.27221.77.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Wed, 1 Aug 2012 12:47:23 +0100
In-Reply-To: <20504.988.608606.973125@mariner.uk.xensource.com>
References: <patchbomb.1343749916@andrewcoop.uk.xensource.com>
	<6db5c184a77782717a89.1343749919@andrewcoop.uk.xensource.com>
	<20504.988.608606.973125@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>,
	"Keir \(Xen.org\)" <keir@xen.org>, Jan Beulich <jbeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 3 of 5] config: Split debug build from debug
	symbols
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-07-31 at 17:12 +0100, Ian Jackson wrote:
> Andrew Cooper writes ("[PATCH 3 of 5] config: Split debug build from debug symbols"):
> > RPM based packaging systems expect binaries to have debug symbols which get
> > placed in a separate debuginfo RPM.
> > 
> > Split the concept of a debug build up so that binaries can be built with
> > debugging symbols without having the other gubbins which $(debug) implies, most
> > notably frame pointers.
> 
> This looks plausible (for 4.2) but I would like to run a test build
> and check the results.

OK, I'll wait for you to do that before applying.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 11:48:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 11:48:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwXOt-00054h-Mx; Wed, 01 Aug 2012 11:47:39 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SwXOs-00052m-VV
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 11:47:39 +0000
Received: from [85.158.143.99:35590] by server-1.bemta-4.messagelabs.com id
	F5/16-24392-A5719105; Wed, 01 Aug 2012 11:47:38 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-216.messagelabs.com!1343821639!23357071!6
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19976 invoked from network); 1 Aug 2012 11:47:38 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-10.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 11:47:38 -0000
X-IronPort-AV: E=Sophos;i="4.77,693,1336348800"; d="scan'208";a="13801135"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 11:47:38 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Wed, 1 Aug 2012
	12:47:38 +0100
Message-ID: <1343821656.27221.81.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Christoph Egger <Christoph.Egger@amd.com>
Date: Wed, 1 Aug 2012 12:47:36 +0100
In-Reply-To: <50128E67.7000706@amd.com>
References: <1343332476-33765-1-git-send-email-roger.pau@citrix.com>
	<1343332476-33765-5-git-send-email-roger.pau@citrix.com>
	<50128E67.7000706@amd.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH v2 4/5] libxl: call hotplug scripts from xl
 for NetBSD
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-07-27 at 13:49 +0100, Christoph Egger wrote:
> On 07/26/12 21:54, Roger Pau Monne wrote:
> 
> > Add the missing NetBSD functions to call hotplug scripts, and disable
> > xenbackendd if libxl/disable_udev is not set.
> > 
> > Cc: Christoph Egger <Christoph.Egger@amd.com>
> > Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> > Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
> 
> 
> Acked-by: Christoph Egger <Christoph.Egger@amd.com>

Applied, thanks.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 11:48:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 11:48:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwXOn-00052K-U7; Wed, 01 Aug 2012 11:47:33 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SwXOm-00051q-Ah
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 11:47:32 +0000
Received: from [85.158.143.99:21484] by server-2.bemta-4.messagelabs.com id
	31/EC-17938-35719105; Wed, 01 Aug 2012 11:47:31 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-216.messagelabs.com!1343821639!23357071!4
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19884 invoked from network); 1 Aug 2012 11:47:31 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-10.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 11:47:31 -0000
X-IronPort-AV: E=Sophos;i="4.77,693,1336348800"; d="scan'208";a="13801131"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 11:47:31 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Wed, 1 Aug 2012
	12:47:31 +0100
Message-ID: <1343821649.27221.79.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Olaf Hering <olaf@aepfle.de>
Date: Wed, 1 Aug 2012 12:47:29 +0100
In-Reply-To: <db8adce4f09307a90f96.1343666158@probook.site>
References: <db8adce4f09307a90f96.1343666158@probook.site>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] libxl: fix unitialized variables in
 libxl__primary_console_find
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2012-07-30 at 17:35 +0100, Olaf Hering wrote:
> # HG changeset patch
> # User Olaf Hering <olaf@aepfle.de>
> # Date 1343662357 -7200
> # Node ID db8adce4f09307a90f96103f7fd67efa97fc9ac0
> # Parent  cf0e661cb321b1c898c9008dc17ba21db434c976
> libxl: fix unitialized variables in libxl__primary_console_find
> 
> gcc 4.5 as shipped with openSuSE 11.4 does not recognize the case of
> LIBXL_DOMAIN_TYPE_INVALID properly:
> 
> cc1: warnings being treated as errors
> libxl.c: In function 'libxl_primary_console_exec':
> libxl.c:1408:14: error: 'domid' may be used uninitialized in this function
> libxl.c:1409:9: error: 'cons_num' may be used uninitialized in this function
> libxl.c:1410:24: error: 'type' may be used uninitialized in this function
> libxl.c: In function 'libxl_primary_console_get_tty':
> libxl.c:1421:14: error: 'domid' may be used uninitialized in this function
> libxl.c:1422:9: error: 'cons_num' may be used uninitialized in this function
> libxl.c:1423:24: error: 'type' may be used uninitialized in this function
> make[3]: *** [libxl.o] Error 1
> 
> Fix this by adding a default case.
> 
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

Applied, thanks.

> 
> diff -r cf0e661cb321 -r db8adce4f093 tools/libxl/libxl.c
> --- a/tools/libxl/libxl.c
> +++ b/tools/libxl/libxl.c
> @@ -1590,6 +1590,7 @@ static int libxl__primary_console_find(l
>          case LIBXL_DOMAIN_TYPE_INVALID:
>              rc = ERROR_INVAL;
>              goto out;
> +        default: abort();
>          }
>      }
>  
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

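For context, the warning the patch silences arises because older gcc does not assume a switch over an enum is exhaustive: a variable assigned only inside the cases still looks potentially uninitialized on the path where no case matches. The following is a minimal standalone sketch of the pattern and the `default: abort()` cure; the names (`domain_type`, `console_id`) are hypothetical stand-ins, not the actual libxl code.

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical stand-in for the libxl domain-type enum. */
enum domain_type { TYPE_HVM, TYPE_PV, TYPE_INVALID };

static int console_id(enum domain_type t)
{
    int id;

    /* Without the default case, gcc 4.5 with -Wuninitialized -Werror can
     * report "'id' may be used uninitialized": it does not assume the
     * argument is limited to the three named enumerators, so it sees a
     * path that reaches the return without assigning 'id'. */
    switch (t) {
    case TYPE_HVM:
        id = 1;
        break;
    case TYPE_PV:
        id = 2;
        break;
    case TYPE_INVALID:
        id = -1;
        break;
    default:
        /* Unreachable for valid input; its presence convinces the
         * compiler that every surviving path has assigned 'id'. */
        abort();
    }
    return id;
}
```

The same effect is why the one-line `default: abort();` hunk above is enough to unbreak the build: it closes the "no case taken" path rather than changing any real behaviour.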


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 11:48:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 11:48:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwXOn-00052K-U7; Wed, 01 Aug 2012 11:47:33 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SwXOm-00051q-Ah
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 11:47:32 +0000
Received: from [85.158.143.99:21484] by server-2.bemta-4.messagelabs.com id
	31/EC-17938-35719105; Wed, 01 Aug 2012 11:47:31 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-216.messagelabs.com!1343821639!23357071!4
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19884 invoked from network); 1 Aug 2012 11:47:31 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-10.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 11:47:31 -0000
X-IronPort-AV: E=Sophos;i="4.77,693,1336348800"; d="scan'208";a="13801131"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 11:47:31 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Wed, 1 Aug 2012
	12:47:31 +0100
Message-ID: <1343821649.27221.79.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Olaf Hering <olaf@aepfle.de>
Date: Wed, 1 Aug 2012 12:47:29 +0100
In-Reply-To: <db8adce4f09307a90f96.1343666158@probook.site>
References: <db8adce4f09307a90f96.1343666158@probook.site>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] libxl: fix uninitialized variables in
 libxl__primary_console_find
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2012-07-30 at 17:35 +0100, Olaf Hering wrote:
> # HG changeset patch
> # User Olaf Hering <olaf@aepfle.de>
> # Date 1343662357 -7200
> # Node ID db8adce4f09307a90f96103f7fd67efa97fc9ac0
> # Parent  cf0e661cb321b1c898c9008dc17ba21db434c976
> libxl: fix uninitialized variables in libxl__primary_console_find
> 
> gcc 4.5 as shipped with openSuSE 11.4 does not recognize the case of
> LIBXL_DOMAIN_TYPE_INVALID properly:
> 
> cc1: warnings being treated as errors
> libxl.c: In function 'libxl_primary_console_exec':
> libxl.c:1408:14: error: 'domid' may be used uninitialized in this function
> libxl.c:1409:9: error: 'cons_num' may be used uninitialized in this function
> libxl.c:1410:24: error: 'type' may be used uninitialized in this function
> libxl.c: In function 'libxl_primary_console_get_tty':
> libxl.c:1421:14: error: 'domid' may be used uninitialized in this function
> libxl.c:1422:9: error: 'cons_num' may be used uninitialized in this function
> libxl.c:1423:24: error: 'type' may be used uninitialized in this function
> make[3]: *** [libxl.o] Error 1
> 
> Fix this by adding a default case.
> 
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

Applied, thanks.

> 
> diff -r cf0e661cb321 -r db8adce4f093 tools/libxl/libxl.c
> --- a/tools/libxl/libxl.c
> +++ b/tools/libxl/libxl.c
> @@ -1590,6 +1590,7 @@ static int libxl__primary_console_find(l
>          case LIBXL_DOMAIN_TYPE_INVALID:
>              rc = ERROR_INVAL;
>              goto out;
> +        default: abort();
>          }
>      }
>  
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 11:48:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 11:48:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwXOq-000533-9j; Wed, 01 Aug 2012 11:47:36 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SwXOp-00052m-M5
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 11:47:35 +0000
Received: from [85.158.143.99:21697] by server-1.bemta-4.messagelabs.com id
	DA/F5-24392-75719105; Wed, 01 Aug 2012 11:47:35 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-216.messagelabs.com!1343821639!23357071!5
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19933 invoked from network); 1 Aug 2012 11:47:34 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-10.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 11:47:34 -0000
X-IronPort-AV: E=Sophos;i="4.77,693,1336348800"; d="scan'208";a="13801133"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 11:47:34 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Wed, 1 Aug 2012
	12:47:34 +0100
Message-ID: <1343821653.27221.80.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Christoph Egger <Christoph.Egger@amd.com>
Date: Wed, 1 Aug 2012 12:47:33 +0100
In-Reply-To: <5012AA86.3080704@amd.com>
References: <1343332476-33765-1-git-send-email-roger.pau@citrix.com>
	<1343332476-33765-6-git-send-email-roger.pau@citrix.com>
	<50128E8E.9020702@amd.com>
	<20498.42441.778072.807967@mariner.uk.xensource.com>
	<5012AA86.3080704@amd.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v2 5/5] init/NetBSD: move xenbackendd to
 xend init script
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-07-27 at 15:49 +0100, Christoph Egger wrote:
> On 07/27/12 16:29, Ian Jackson wrote:
> 
> > Christoph Egger writes ("Re: [PATCH v2 5/5] init/NetBSD: move xenbackendd to xend init script"):
> >> On 07/26/12 21:54, Roger Pau Monne wrote:
> >>
> >>> xenbackendd is not needed by the xl toolstack, so move its launch to
> >>> the xend script.
> >>>
> >>> We have to iterate until we are sure there are no xend processes left,
> >>> since doing a single pkill usually leaves xend processes running.
> >>>
> >>> Changes since v1:
> >>>
> >>>  * Use pgrep and pkill instead of the convoluted shell expression.
> >>>
> >>> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> >>> Cc: Christoph Egger <Christoph.Egger@amd.com>
> >>
> >>
> >> Acked-by: Christoph Egger <Christoph.Egger@amd.com>
> > 
> > Thanks for the review.
> > 
> > Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
> 
> 
> I can also give a
> 
> Tested-by: Christoph Egger <Christoph.Egger@amd.com>

Applied, thanks everyone.

> 
>  
> 
> 
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 11:48:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 11:48:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwXP0-00057H-4Z; Wed, 01 Aug 2012 11:47:46 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SwXOy-00055l-Ge
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 11:47:44 +0000
Received: from [85.158.143.99:35849] by server-2.bemta-4.messagelabs.com id
	10/4D-17938-E5719105; Wed, 01 Aug 2012 11:47:42 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-216.messagelabs.com!1343821639!23357071!7
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20345 invoked from network); 1 Aug 2012 11:47:41 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-10.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 11:47:41 -0000
X-IronPort-AV: E=Sophos;i="4.77,693,1336348800"; d="scan'208";a="13801139"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 11:47:41 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Wed, 1 Aug 2012
	12:47:41 +0100
Message-ID: <1343821660.27221.82.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Roger Pau Monne <roger.pau@citrix.com>
Date: Wed, 1 Aug 2012 12:47:40 +0100
In-Reply-To: <1343378909.6812.86.camel@zakaz.uk.xensource.com>
References: <1343332476-33765-1-git-send-email-roger.pau@citrix.com>
	<1343332476-33765-2-git-send-email-roger.pau@citrix.com>
	<1343378909.6812.86.camel@zakaz.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Christoph Egger <Christoph.Egger@amd.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v2 1/5] tools/build: fix pygrub linking
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-07-27 at 09:48 +0100, Ian Campbell wrote:
> On Thu, 2012-07-26 at 20:54 +0100, Roger Pau Monne wrote:
> > Prevent creating a symlink to $(DESTDIR)/$(BINDIR) if it is the same
> > as $(PRIVATE_BINDIR)
> > 
> > This fixes NetBSD install, where $(DESTDIR)/$(BINDIR) ==
> > $(PRIVATE_BINDIR).
> 
> [...]
> > Changes since v1:
> > 
> >  * Do the check in shell instead of Makefile.
> > 
> > Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> > Cc: Christoph Egger <Christoph.Egger@amd.com>
> > Signed-off-by: Roger Pau Monne <roger.pau@citrix.com>
> 
> Acked-by: Ian Campbell <ian.campbell@citrix.com>

Applied, thanks.

> 
> > ---
> >  tools/pygrub/Makefile |    5 ++++-
> >  1 files changed, 4 insertions(+), 1 deletions(-)
> > 
> > diff --git a/tools/pygrub/Makefile b/tools/pygrub/Makefile
> > index bd22dd4..8c99e11 100644
> > --- a/tools/pygrub/Makefile
> > +++ b/tools/pygrub/Makefile
> > @@ -14,7 +14,10 @@ install: all
> >  		$(PYTHON_PREFIX_ARG) --root="$(DESTDIR)" \
> >  		--install-scripts=$(PRIVATE_BINDIR) --force
> >  	$(INSTALL_DIR) $(DESTDIR)/var/run/xend/boot
> > -	ln -sf $(PRIVATE_BINDIR)/pygrub $(DESTDIR)/$(BINDIR)
> > +	set -e; if [ `readlink -f $(DESTDIR)/$(BINDIR)` != \
> > +	             `readlink -f $(PRIVATE_BINDIR)` ]; then \
> > +	    ln -sf $(PRIVATE_BINDIR)/pygrub $(DESTDIR)/$(BINDIR); \
> > +	fi
> >  
> >  .PHONY: clean
> >  clean:
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 11:48:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 11:48:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwXP0-00057c-HK; Wed, 01 Aug 2012 11:47:46 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SwXOy-00056W-VA
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 11:47:45 +0000
Received: from [85.158.143.99:36015] by server-3.bemta-4.messagelabs.com id
	35/7E-01511-06719105; Wed, 01 Aug 2012 11:47:44 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-216.messagelabs.com!1343821639!23357071!8
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20418 invoked from network); 1 Aug 2012 11:47:43 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-10.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 11:47:43 -0000
X-IronPort-AV: E=Sophos;i="4.77,693,1336348800"; d="scan'208";a="13801142"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 11:47:43 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Wed, 1 Aug 2012
	12:47:43 +0100
Message-ID: <1343821662.27221.83.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Dario Faggioli <dario.faggioli@citrix.com>
Date: Wed, 1 Aug 2012 12:47:42 +0100
In-Reply-To: <f83f8c98692f2cbc5178.1343319438@Solace>
References: <f83f8c98692f2cbc5178.1343319438@Solace>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Andre Przywara <andre.przywara@amd.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	George Dunlap <George.Dunlap@eu.citrix.com>,
	Juergen Gross <juergen.gross@ts.fujitsu.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v9] Some automatic NUMA placement
	documentation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-07-26 at 17:17 +0100, Dario Faggioli wrote:
> About rationale, usage and (some small bits of) API.
> 
> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
> Acked-by: Ian Campbell <ian.campbell@citrix.com>

Applied, thanks.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 12:00:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 12:00:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwXbB-0006TT-DP; Wed, 01 Aug 2012 12:00:21 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SwXb9-0006TN-KI
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 12:00:19 +0000
Received: from [85.158.143.99:53061] by server-2.bemta-4.messagelabs.com id
	74/95-17938-35A19105; Wed, 01 Aug 2012 12:00:19 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-216.messagelabs.com!1343822416!19979718!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNjg2MjA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13654 invoked from network); 1 Aug 2012 12:00:17 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 12:00:17 -0000
X-IronPort-AV: E=Sophos;i="4.77,693,1336363200"; d="scan'208";a="203787013"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 08:00:13 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 1 Aug 2012 08:00:15 -0400
Received: from cosworth.uk.xensource.com ([10.80.16.52] ident=ianc)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1SwXT9-0007rD-Hp;
	Wed, 01 Aug 2012 12:52:03 +0100
MIME-Version: 1.0
X-Mercurial-Node: 5ba5402335fe0365d2d0110df34a8bd58c5381da
Message-ID: <5ba5402335fe0365d2d0.1343821923@cosworth.uk.xensource.com>
User-Agent: Mercurial-patchbomb/1.6.4
Date: Wed, 1 Aug 2012 12:52:03 +0100
From: Ian Campbell <ian.campbell@citrix.com>
To: xen-devel@lists.xen.org
Cc: ian.jackson@citrix.com, Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH] libxl: only read script once in libxl__hotplug_*
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1343821803 -3600
# Node ID 5ba5402335fe0365d2d0110df34a8bd58c5381da
# Parent  3aecd311802744b325a4bf246f1b4507baf2d932
libxl: only read script once in libxl__hotplug_*

Instead of duplicating the error handling etc. in get_hotplug_env, just pass
down the script already read by the caller.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

diff -r 3aecd3118027 -r 5ba5402335fe tools/libxl/libxl_linux.c
--- a/tools/libxl/libxl_linux.c	Wed Aug 01 12:50:03 2012 +0100
+++ b/tools/libxl/libxl_linux.c	Wed Aug 01 12:50:03 2012 +0100
@@ -80,22 +80,14 @@ char *libxl__devid_to_localdev(libxl__gc
 
 /* Hotplug scripts helpers */
 
-static char **get_hotplug_env(libxl__gc *gc, libxl__device *dev)
+static char **get_hotplug_env(libxl__gc *gc,
+                              char *script, libxl__device *dev)
 {
-    char *be_path = libxl__device_backend_path(gc, dev);
-    char *script;
     const char *type = libxl__device_kind_to_string(dev->backend_kind);
     char **env;
     int nr = 0;
     libxl_nic_type nictype;
 
-    script = libxl__xs_read(gc, XBT_NULL,
-                            GCSPRINTF("%s/%s", be_path, "script"));
-    if (!script) {
-        LOGEV(ERROR, errno, "unable to read script from %s", be_path);
-        return NULL;
-    }
-
     const int arraysize = 13;
     GCNEW_ARRAY(env, arraysize);
     env[nr++] = "script";
@@ -170,7 +162,7 @@ static int libxl__hotplug_nic(libxl__gc 
         goto out;
     }
 
-    *env = get_hotplug_env(gc, dev);
+    *env = get_hotplug_env(gc, script, dev);
     if (!env) {
         rc = ERROR_FAIL;
         goto out;
@@ -212,7 +204,7 @@ static int libxl__hotplug_disk(libxl__gc
         goto error;
     }
 
-    *env = get_hotplug_env(gc, dev);
+    *env = get_hotplug_env(gc, script, dev);
     if (!*env) {
         rc = ERROR_FAIL;
         goto error;

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 12:07:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 12:07:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwXh8-0006mn-8c; Wed, 01 Aug 2012 12:06:30 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SwXh7-0006mi-1o
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 12:06:29 +0000
Received: from [85.158.143.35:33727] by server-1.bemta-4.messagelabs.com id
	EC/48-24392-4CB19105; Wed, 01 Aug 2012 12:06:28 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1343822786!12948715!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM3NjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22899 invoked from network); 1 Aug 2012 12:06:26 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-7.tower-21.messagelabs.com with SMTP;
	1 Aug 2012 12:06:26 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 01 Aug 2012 13:06:26 +0100
Message-Id: <501938070200007800091E24@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Wed, 01 Aug 2012 13:07:03 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__PartCDFC89F7.0__="
Subject: [Xen-devel] [PATCH] x86: also allow disabling LAPIC NMI watchdog on
 newer CPUs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__PartCDFC89F7.0__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

This complements c/s 9146:941897e98591, and also replaces a literal
zero with a proper manifest constant.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/nmi.c
+++ b/xen/arch/x86/nmi.c
@@ -175,15 +175,9 @@ static void disable_lapic_nmi_watchdog(v
     case X86_VENDOR_INTEL:
         switch (boot_cpu_data.x86) {
         case 6:
-            if (boot_cpu_data.x86_model > 0xd)
-                break;
-
             wrmsr(MSR_P6_EVNTSEL0, 0, 0);
             break;
         case 15:
-            if (boot_cpu_data.x86_model > 0x4)
-                break;
-
             wrmsr(MSR_P4_IQ_CCCR0, 0, 0);
             wrmsr(MSR_P4_CRU_ESCR0, 0, 0);
             break;
@@ -192,7 +186,7 @@ static void disable_lapic_nmi_watchdog(v
     }
     nmi_active =3D -1;
     /* tell do_nmi() and others that we're not active any more */
-    nmi_watchdog =3D 0;
+    nmi_watchdog =3D NMI_NONE;
 }
=20
 static void enable_lapic_nmi_watchdog(void)




--=__PartCDFC89F7.0__=
Content-Type: text/plain; name="x86-NMI-LAPIC-watchdog-disable.patch"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment; filename="x86-NMI-LAPIC-watchdog-disable.patch"

x86: also allow disabling LAPIC NMI watchdog on newer CPUs=0A=0AThis =
complements c/s 9146:941897e98591, and also replaces a literal=0Azero with =
a proper manifest constant.=0A=0ASigned-off-by: Jan Beulich <jbeulich@suse.=
com>=0A=0A--- a/xen/arch/x86/nmi.c=0A+++ b/xen/arch/x86/nmi.c=0A@@ -175,15 =
+175,9 @@ static void disable_lapic_nmi_watchdog(v=0A     case X86_VENDOR_I=
NTEL:=0A         switch (boot_cpu_data.x86) {=0A         case 6:=0A-       =
     if (boot_cpu_data.x86_model > 0xd)=0A-                break;=0A-=0A   =
          wrmsr(MSR_P6_EVNTSEL0, 0, 0);=0A             break;=0A         =
case 15:=0A-            if (boot_cpu_data.x86_model > 0x4)=0A-             =
   break;=0A-=0A             wrmsr(MSR_P4_IQ_CCCR0, 0, 0);=0A             =
wrmsr(MSR_P4_CRU_ESCR0, 0, 0);=0A             break;=0A@@ -192,7 +186,7 @@ =
static void disable_lapic_nmi_watchdog(v=0A     }=0A     nmi_active =3D =
-1;=0A     /* tell do_nmi() and others that we're not active any more =
*/=0A-    nmi_watchdog =3D 0;=0A+    nmi_watchdog =3D NMI_NONE;=0A }=0A =
=0A static void enable_lapic_nmi_watchdog(void)=0A
--=__PartCDFC89F7.0__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__PartCDFC89F7.0__=--


From xen-devel-bounces@lists.xen.org Wed Aug 01 12:09:07 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 12:09:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwXj9-0006re-PK; Wed, 01 Aug 2012 12:08:35 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lars.kurth.xen@gmail.com>) id 1SwXj7-0006rV-W5
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 12:08:34 +0000
Received: from [85.158.138.51:8769] by server-9.bemta-3.messagelabs.com id
	2B/0E-27628-14C19105; Wed, 01 Aug 2012 12:08:33 +0000
X-Env-Sender: lars.kurth.xen@gmail.com
X-Msg-Ref: server-5.tower-174.messagelabs.com!1343822912!29962474!1
X-Originating-IP: [209.85.215.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6451 invoked from network); 1 Aug 2012 12:08:32 -0000
Received: from mail-ey0-f173.google.com (HELO mail-ey0-f173.google.com)
	(209.85.215.173)
	by server-5.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 12:08:32 -0000
Received: by eaah1 with SMTP id h1so1821465eaa.32
	for <xen-devel@lists.xen.org>; Wed, 01 Aug 2012 05:08:32 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:date:from:reply-to:user-agent:mime-version:to
	:subject:content-type:content-transfer-encoding;
	bh=LpDikkCADoh1ui5m5W2g/oTCnPfFWdlJZD0Cz15yMUI=;
	b=UHn7vl8/ki0WTnQf+iOE7+aS5dMCpyNXgys98cmX/0TXo1BOrpOIS/XfGQqwSvr2IV
	x1toDPde+fdEcdnG2wqxi5BWv6T2duT6i3J3F5pbLBixDnnJK6Vys1tmJtGmPvNnLL6R
	4XOd/pBicJHI0Xi4Q8J1UHv/66+P3G9ps/W95pmmw3AjWlQXH2k8EwOYLcXaP9UYx0O3
	jpNrO/jJwDmz0swZdBfKvIJhxEaa87LZHDxRJ/o++rrcSCGCtDxP3LlVG7zZGFt6oQRU
	PJqfuSnqo8iuaf+3+DIYsbsOs3LRS84Yuyg4909Zf7fnzk7QmWmVwVKD+NCxjE6jJB4f
	5tgQ==
Received: by 10.14.179.71 with SMTP id g47mr22235445eem.21.1343822912258;
	Wed, 01 Aug 2012 05:08:32 -0700 (PDT)
Received: from [172.16.26.11] (b01bc490.bb.sky.com. [176.27.196.144])
	by mx.google.com with ESMTPS id g46sm8265235eep.15.2012.08.01.05.08.31
	(version=SSLv3 cipher=OTHER); Wed, 01 Aug 2012 05:08:31 -0700 (PDT)
Message-ID: <50191C3E.4050003@xen.org>
Date: Wed, 01 Aug 2012 13:08:30 +0100
From: Lars Kurth <lars.kurth@xen.org>
User-Agent: Mozilla/5.0 (Windows NT 6.1;
	rv:14.0) Gecko/20120713 Thunderbird/14.0
MIME-Version: 1.0
To: xen-devel@lists.xen.org
Subject: [Xen-devel] Proposal: Xen Test Days
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: lars.kurth@xen.org
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi everybody,

At OSCON I had a couple of discussions regarding Fedora-like Xen Test
Days. It may be a little bit late in this release cycle to put all the
documentation together (i.e. a TODO list of what we want the community,
and the distros which consume Xen, to test) and pull this off.

But I wanted to raise this as a possibility and maybe something to
build into future release cycles. If I look at
http://wiki.xen.org/wiki/Xen_4.2 there is fairly little (in fact almost
nothing) on how the new functionality would be tested. My gut feeling
is that the biggest benefit of a Xen Test Day for 4.2 may be in testing
XL. There were some improvements last Monday, but I am not sure this is
enough.

If I look at https://fedoraproject.org/wiki/QA/Test_Days, they have
spent quite a bit of effort on this, and it would probably take one
person a week or two full-time to pull this together. Also see
https://fedoraproject.org/wiki/Test_Day:Current

I just wanted to put this out there to see whether we should try for
this release cycle, and to gather views. It would require a volunteer
to step up. If the view is that this is not doable for 4.2, it may be a
good thing to try for patch releases as well as for Xen 4.3.

Regards
Lars

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 12:19:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 12:19:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwXso-00076Q-Ts; Wed, 01 Aug 2012 12:18:34 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1SwXsn-00076L-O2
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 12:18:33 +0000
Received: from [85.158.138.51:14128] by server-4.bemta-3.messagelabs.com id
	31/A6-29069-89E19105; Wed, 01 Aug 2012 12:18:32 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-5.tower-174.messagelabs.com!1343823512!29964522!1
X-Originating-IP: [81.169.146.162]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MiA9PiAzNTIwNzM=\n,sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MiA9PiAzNTIwNzM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22205 invoked from network); 1 Aug 2012 12:18:32 -0000
Received: from mo-p00-ob.rzone.de (HELO mo-p00-ob.rzone.de) (81.169.146.162)
	by server-5.tower-174.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 1 Aug 2012 12:18:32 -0000
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+zrwiavkK6tmQaLfmztM8TOFGjy0HFcM=
X-RZG-CLASS-ID: mo00
Received: from probook.site
	(dslb-088-065-081-096.pools.arcor-ip.net [88.65.81.96])
	by smtp.strato.de (josoe mo16) (RZmta 30.2 DYNA|AUTH)
	with (DHE-RSA-AES256-SHA encrypted) ESMTPA id L0042fo71Bo9xk ;
	Wed, 1 Aug 2012 14:18:32 +0200 (CEST)
Received: by probook.site (Postfix, from userid 1000)
	id 7668D18639; Wed,  1 Aug 2012 14:18:31 +0200 (CEST)
Date: Wed, 1 Aug 2012 14:18:31 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20120801121831.GA24090@aepfle.de>
References: <ccbebdbe44da04604081.1343666104@probook.site>
	<1343723296.15432.62.camel@zakaz.uk.xensource.com>
	<20120731125021.GA11261@aepfle.de>
	<1343820387.27221.66.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1343820387.27221.66.camel@zakaz.uk.xensource.com>
User-Agent: Mutt/1.5.21.rev5555 (2012-07-20)
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] stubdom: disable parallel build
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Aug 01, Ian Campbell wrote:

> On Tue, 2012-07-31 at 13:50 +0100, Olaf Hering wrote:
> > On Tue, Jul 31, Ian Campbell wrote:
> > 
> > > I suspect something is not quite right in the top level Makefile WRT the
> > > dependencies between tools and stubdom builds.
> > 
> > Doing a s@$(CROSS_MAKE)@$(MAKE) DESTDIR=@g in stubdom/Makefile fixes it
> > for me:
> 
> Can anyone who understands Make explain why this should make a
> difference?
> 
> I would have expected that
> 	FOO := $(BAR) baz
> ...
> 	$(FOO) bif
> ought to be identical to
> 	$(BAR) baz bif
> 
> Is that not the case?

MAKE is handled specially when written in a recipe:
http://www.gnu.org/software/make/manual/html_node/MAKE-Variable.html

That does not really explain why it makes a difference in the case of
stubdom.
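
The documented behaviour can be sketched with a small throwaway
Makefile (the name INDIRECT is made up for illustration; this is not
the stubdom build itself). GNU make only applies the recursive-make
special handling to a recipe line when the $(MAKE) reference appears
literally in it, not when it is reached through another variable such
as $(CROSS_MAKE):

```shell
#!/bin/sh
# Sketch, assuming GNU make >= 3.82 (.RECIPEPREFIX is used only to avoid
# literal tabs in the heredoc). A recipe line that literally contains
# $(MAKE) is treated as a recursive make and is still executed under
# "make -n"; the same command reached via $(INDIRECT) is only echoed.
tmp=$(mktemp -d)
cat > "$tmp/Makefile" <<'EOF'
.RECIPEPREFIX = >
INDIRECT := $(MAKE)
direct:
> $(MAKE) inner
indirect:
> $(INDIRECT) inner
inner:
> @echo inner-recipe
EOF
# "direct": the sub-make actually runs (inheriting -n) and prints the
# inner recipe text. "indirect": the command line is echoed, but the
# sub-make never starts, so "inner-recipe" does not appear.
out_direct=$(make -C "$tmp" -n direct 2>&1)
out_indirect=$(make -C "$tmp" -n indirect 2>&1)
printf 'direct:\n%s\n\nindirect:\n%s\n' "$out_direct" "$out_indirect"
rm -rf "$tmp"
```

The same literal-$(MAKE) (or '+' prefix) detection also governs
jobserver pass-through, which would explain why substituting $(MAKE)
for $(CROSS_MAKE) changes the stubdom parallel build.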

Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


On Wed, Aug 01, Ian Campbell wrote:

> On Tue, 2012-07-31 at 13:50 +0100, Olaf Hering wrote:
> > On Tue, Jul 31, Ian Campbell wrote:
> > 
> > > I suspect something is not quite right in the top level Makefile WRT the
> > > dependencies between tools and stubdom builds.
> > 
> > Doing a s@$(CROSS_MAKE)@$(MAKE) DESTDIR=@g in stubdom/Makefile fixes it
> > for me:
> 
> Can anyone who understands Make explain why this should make a
> difference?
> 
> I would have expected that
> 	FOO := $(BAR) baz
> ...
> 	$(FOO) bif
> ought to be identical to
> 	$(BAR) baz bif
> 
> Is that not the case?

MAKE is handled specially when written in a recipe:
http://www.gnu.org/software/make/manual/html_node/MAKE-Variable.html

That does not really explain why it makes a difference in the case of
stubdom.
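
A minimal Makefile sketch of the documented behaviour (an illustration
only, based on the manual page linked above: GNU make looks for the
literal text '$(MAKE)' in a recipe line when deciding whether to apply
its special sub-make handling, e.g. passing the -j jobserver):

```make
# Illustration only; recipe lines must begin with a tab.
CROSS_MAKE := $(MAKE) DESTDIR=

all:
	$(MAKE) -C subdir        # literal $(MAKE): treated as a sub-make
	$(CROSS_MAKE) -C subdir  # ':=' expanded the literal away, so the
	                         # special treatment may not apply here
	+$(CROSS_MAKE) -C subdir # a '+' prefix forces sub-make treatment
```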

Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 12:24:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 12:24:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwXxc-0007FC-MO; Wed, 01 Aug 2012 12:23:32 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <fantonifabio@tiscali.it>) id 1SwXxc-0007F7-1Z
	for xen-devel@lists.xensource.com; Wed, 01 Aug 2012 12:23:32 +0000
Received: from [85.158.139.83:62005] by server-10.bemta-5.messagelabs.com id
	DE/A4-02190-3CF19105; Wed, 01 Aug 2012 12:23:31 +0000
X-Env-Sender: fantonifabio@tiscali.it
X-Msg-Ref: server-11.tower-182.messagelabs.com!1343823809!22514826!1
X-Originating-IP: [94.23.245.208]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21599 invoked from network); 1 Aug 2012 12:23:30 -0000
Received: from lnx3.fantu.it (HELO lnx3.fantu.it) (94.23.245.208)
	by server-11.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 1 Aug 2012 12:23:30 -0000
Received: from localhost (localhost.localdomain [127.0.0.1])
	by lnx3.fantu.it (Postfix) with ESMTP id CA6E7402990;
	Wed,  1 Aug 2012 14:23:29 +0200 (CEST)
X-Virus-Scanned: Debian amavisd-new at lnx3.fantu.it
Received: from lnx3.fantu.it ([127.0.0.1])
	by localhost (lnx3.fantu.it [127.0.0.1]) (amavisd-new, port 10024)
	with ESMTP id 77edd4Fzc0iq; Wed,  1 Aug 2012 14:23:29 +0200 (CEST)
Received: from [192.168.178.50]
	(host73-164-dynamic.56-82-r.retail.telecomitalia.it [82.56.164.73])
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(No client certificate requested)
	(Authenticated sender: prova@fantu.it)
	by lnx3.fantu.it (Postfix) with ESMTPSA id C988740298E;
	Wed,  1 Aug 2012 14:23:28 +0200 (CEST)
Message-ID: <50191FBC.8070208@tiscali.it>
Date: Wed, 01 Aug 2012 14:23:24 +0200
From: Fabio Fantoni <fantonifabio@tiscali.it>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:14.0) Gecko/20120713 Thunderbird/14.0
MIME-Version: 1.0
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
References: <4FC766F7.2030802@tiscali.it>
	<20434.1848.747678.259199@mariner.uk.xensource.com>
In-Reply-To: <20434.1848.747678.259199@mariner.uk.xensource.com>
Cc: xen-devel <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [PATCH] tools/hotplug/Linux/init.d/: added other
 xen kernel modules on xencommons start
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: fantonifabio@tiscali.it
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3722033863519370756=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a cryptographically signed message in MIME format.

--===============3722033863519370756==
Content-Type: multipart/signed; protocol="application/pkcs7-signature"; micalg=sha1; boundary="------------ms080207080306020700070001"

This is a cryptographically signed message in MIME format.

--------------ms080207080306020700070001
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: quoted-printable

On 08/06/2012 16:07, Ian Jackson wrote:
> Fabio Fantoni writes ("[Xen-devel] [PATCH] tools/hotplug/Linux/init.d/: added other xen kernel modules on xencommons start"):
>> # HG changeset patch
>> # User Fabio Fantoni
>> # Date 1338467204 -7200
>> # Node ID 1f57503f10112718ecbbe424fa8fc9c55785f4c0
>> # Parent  dd7319230f4ac295b6d14ce2e2a3dccf82bb87d8
>> tools/hotplug/Linux/init.d/: added other xen kernel modules on
>> xencommons start
> This looks at least harmless to me.
>
> I'm surprised, however, that these things aren't loaded automatically.
> For example, shouldn't the xenbus driver's enumeration automatically
> load blkback too ?
>
> Having said that, I'm inclined to apply this unless someone
> explains that it's a bad idea.
>
> Ian.
>
>
> -----
> No virus found in this message.
> Checked by AVG - www.avg.com
> Version: 2012.0.2177 / Virus Database: 2433/5056 - Release Date: 08/06/2012
>
>
I have noticed that this patch wasn't committed; is there some problem
with it?
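
For reference, the kind of change the patch under discussion makes is
roughly the following (a sketch only, not the patch itself; the exact
module list and script layout are in the patch, and these module names
are assumptions):

```shell
# Hypothetical sketch of loading Xen backend modules from the
# xencommons init script; failures are ignored for built-in drivers.
for mod in xen-evtchn xen-gntdev xen-netback xen-blkback; do
	modprobe "$mod" >/dev/null 2>&1 || true
done
```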


--------------ms080207080306020700070001
Content-Type: application/pkcs7-signature; name="smime.p7s"
Content-Transfer-Encoding: base64
Content-Disposition: attachment; filename="smime.p7s"
Content-Description: S/MIME Cryptographic Signature

MIAGCSqGSIb3DQEHAqCAMIACAQExCzAJBgUrDgMCGgUAMIAGCSqGSIb3DQEHAQAAoIINhjCC
BjQwggQcoAMCAQICASAwDQYJKoZIhvcNAQEFBQAwfTELMAkGA1UEBhMCSUwxFjAUBgNVBAoT
DVN0YXJ0Q29tIEx0ZC4xKzApBgNVBAsTIlNlY3VyZSBEaWdpdGFsIENlcnRpZmljYXRlIFNp
Z25pbmcxKTAnBgNVBAMTIFN0YXJ0Q29tIENlcnRpZmljYXRpb24gQXV0aG9yaXR5MB4XDTA3
MTAyNDIxMDI1NVoXDTE3MTAyNDIxMDI1NVowgYwxCzAJBgNVBAYTAklMMRYwFAYDVQQKEw1T
dGFydENvbSBMdGQuMSswKQYDVQQLEyJTZWN1cmUgRGlnaXRhbCBDZXJ0aWZpY2F0ZSBTaWdu
aW5nMTgwNgYDVQQDEy9TdGFydENvbSBDbGFzcyAyIFByaW1hcnkgSW50ZXJtZWRpYXRlIENs
aWVudCBDQTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMsohUWcASz7GfKrpTOM
KqANy9BV7V0igWdGxA8IU77L3aTxErQ+fcxtDYZ36Z6GH0YFn7fq5RADteP0AYzrCA+EQTfi
8q1+kA3m0nwtwXG94M5sIqsvs7lRP1aycBke/s5g9hJHryZ2acScnzczjBCAo7X1v5G3yw8M
DP2m2RCye0KfgZ4nODerZJVzhAlOD9YejvAXZqHksw56HzElVIoYSZ3q4+RJuPXXfIoyby+Y
2m1E+YzX5iCZXBx05gk6MKAW1vaw4/v2OOLy6FZH3XHHtOkzUreG//CsFnB9+uaYSlR65cdG
zTsmoIK8WH1ygoXhRBm98SD7Hf/r3FELNvUCAwEAAaOCAa0wggGpMA8GA1UdEwEB/wQFMAMB
Af8wDgYDVR0PAQH/BAQDAgEGMB0GA1UdDgQWBBSuVYNv7DHKufcd+q9rMfPIHeOsuzAfBgNV
HSMEGDAWgBROC+8apEBbpRdphzDKNGhD0EGu8jBmBggrBgEFBQcBAQRaMFgwJwYIKwYBBQUH
MAGGG2h0dHA6Ly9vY3NwLnN0YXJ0c3NsLmNvbS9jYTAtBggrBgEFBQcwAoYhaHR0cDovL3d3
dy5zdGFydHNzbC5jb20vc2ZzY2EuY3J0MFsGA1UdHwRUMFIwJ6AloCOGIWh0dHA6Ly93d3cu
c3RhcnRzc2wuY29tL3Nmc2NhLmNybDAnoCWgI4YhaHR0cDovL2NybC5zdGFydHNzbC5jb20v
c2ZzY2EuY3JsMIGABgNVHSAEeTB3MHUGCysGAQQBgbU3AQIBMGYwLgYIKwYBBQUHAgEWImh0
dHA6Ly93d3cuc3RhcnRzc2wuY29tL3BvbGljeS5wZGYwNAYIKwYBBQUHAgEWKGh0dHA6Ly93
d3cuc3RhcnRzc2wuY29tL2ludGVybWVkaWF0ZS5wZGYwDQYJKoZIhvcNAQEFBQADggIBADqp
Jw3I07QWke9plNBpxUxcffc7nUrIQpJHDci91DFG7fVhHRkMZ1J+BKg5UNUxIFJ2Z9B90Mic
c/NXcs7kPBRdn6XGO/vPc87Y6R+cWS9Nc9+fp3Enmsm94OxOwI9wn8qnr/6o3mD4noP9Jphw
UPTXwHovjavRnhUQHLfo/i2NG0XXgTHXS2Xm0kVUozXqpYpAdumMiB/vezj1QHQJDmUdPYMc
p+reg9901zkyT3fDW/ivJVv6pWtkh6Pw2ytZT7mvg7YhX3V50Nv860cV11mocUVcqBLv0gcT
+HBDYtbuvexNftwNQKD5193A7zN4vG7CTYkXxytSjKuXrpEatEiFPxWgb84nVj25SU5q/r1X
hwby6mLhkbaXslkVtwEWT3Van49rKjlK4XrUKYYWtnfzq6aSak5u0Vpxd1rY79tWhD3EdCvO
hNz/QplNa+VkIsrcp7+8ZhP1l1b2U6MaxIVteuVMD3X0vziIwr7jxYae9FZjbxlpUemqXjcC
0QaFfN7qI0JsQMALL7iGRBg7K0CoOBzECdD3fuZil5kU/LP9cr1BK31U0Uy651bFnAMMMkqh
AChIbn0ei72VnbpSsrrSdF0BAGYQ8vyHae5aCg+H75dVCV33K6FuxZrf09yTz+Vx/PkdRUYk
XmZz/OTfyJXsUOUXrym6KvI2rYpccSk5MIIHSjCCBjKgAwIBAgICHmMwDQYJKoZIhvcNAQEF
BQAwgYwxCzAJBgNVBAYTAklMMRYwFAYDVQQKEw1TdGFydENvbSBMdGQuMSswKQYDVQQLEyJT
ZWN1cmUgRGlnaXRhbCBDZXJ0aWZpY2F0ZSBTaWduaW5nMTgwNgYDVQQDEy9TdGFydENvbSBD
bGFzcyAyIFByaW1hcnkgSW50ZXJtZWRpYXRlIENsaWVudCBDQTAeFw0xMjAzMTgyMjE0MzBa
Fw0xNDAzMjAwODU3MDlaMIGMMRkwFwYDVQQNExBlQjZPRTM3UlJOUHlsNW0yMQswCQYDVQQG
EwJJVDEQMA4GA1UECBMHQmVyZ2FtbzEQMA4GA1UEBxMHUm92ZXR0YTEWMBQGA1UEAxMNRmFi
aW8gRmFudG9uaTEmMCQGCSqGSIb3DQEJARYXZmFudG9uaWZhYmlvQHRpc2NhbGkuaXQwggEi
MA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1XhckXsX23vgJq76s2f0KT8U8Msov5QgV
10eQBb2wL/TzcmqtZotI7ztKVhio3ehHg+mfu+3EqOkX9Umgut8rP0bPi7AGjkPXbOTT/cSU
Xz2Kw31VGOmiOVoUFGvpQitp3weCkhUJLBipI8EpNyBXpjtQ9yCpnIAqfuc77ybfSnCy7tTR
MBq1BUkfjH1+GL45riosuS4+F+MSUvlYzLiT4rAduAX1Y2IuORDsf9Bce8GBxa6syP9rCyzl
Vk7DIX5k8j2vlnyRATIypn5CQLQxGT6e0f6ac4gvWOHwO2QEBsmZKKs1ZidE4q/9OoNXYX6A
jnHtp1H1vcrek/vVcs19AgMBAAGjggOyMIIDrjAJBgNVHRMEAjAAMAsGA1UdDwQEAwIEsDAd
BgNVHSUEFjAUBggrBgEFBQcDAgYIKwYBBQUHAwQwHQYDVR0OBBYEFFan8cbEWWBmSTWFtLk2
YNdAcGUbMB8GA1UdIwQYMBaAFK5Vg2/sMcq59x36r2sx88gd46y7MCIGA1UdEQQbMBmBF2Zh
bnRvbmlmYWJpb0B0aXNjYWxpLml0MIICIQYDVR0gBIICGDCCAhQwggIQBgsrBgEEAYG1NwEC
AjCCAf8wLgYIKwYBBQUHAgEWImh0dHA6Ly93d3cuc3RhcnRzc2wuY29tL3BvbGljeS5wZGYw
NAYIKwYBBQUHAgEWKGh0dHA6Ly93d3cuc3RhcnRzc2wuY29tL2ludGVybWVkaWF0ZS5wZGYw
gfcGCCsGAQUFBwICMIHqMCcWIFN0YXJ0Q29tIENlcnRpZmljYXRpb24gQXV0aG9yaXR5MAMC
AQEagb5UaGlzIGNlcnRpZmljYXRlIHdhcyBpc3N1ZWQgYWNjb3JkaW5nIHRvIHRoZSBDbGFz
cyAyIFZhbGlkYXRpb24gcmVxdWlyZW1lbnRzIG9mIHRoZSBTdGFydENvbSBDQSBwb2xpY3ks
IHJlbGlhbmNlIG9ubHkgZm9yIHRoZSBpbnRlbmRlZCBwdXJwb3NlIGluIGNvbXBsaWFuY2Ug
b2YgdGhlIHJlbHlpbmcgcGFydHkgb2JsaWdhdGlvbnMuMIGcBggrBgEFBQcCAjCBjzAnFiBT
dGFydENvbSBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTADAgECGmRMaWFiaWxpdHkgYW5kIHdh
cnJhbnRpZXMgYXJlIGxpbWl0ZWQhIFNlZSBzZWN0aW9uICJMZWdhbCBhbmQgTGltaXRhdGlv
bnMiIG9mIHRoZSBTdGFydENvbSBDQSBwb2xpY3kuMDYGA1UdHwQvMC0wK6ApoCeGJWh0dHA6
Ly9jcmwuc3RhcnRzc2wuY29tL2NydHUyLWNybC5jcmwwgY4GCCsGAQUFBwEBBIGBMH8wOQYI
KwYBBQUHMAGGLWh0dHA6Ly9vY3NwLnN0YXJ0c3NsLmNvbS9zdWIvY2xhc3MyL2NsaWVudC9j
YTBCBggrBgEFBQcwAoY2aHR0cDovL2FpYS5zdGFydHNzbC5jb20vY2VydHMvc3ViLmNsYXNz
Mi5jbGllbnQuY2EuY3J0MCMGA1UdEgQcMBqGGGh0dHA6Ly93d3cuc3RhcnRzc2wuY29tLzAN
BgkqhkiG9w0BAQUFAAOCAQEAjzHNqifpDVMkH1TSPFZVIiQ4fh49/V5JMpstgqEZPDaDe5r8
h+fMBZtUa6LLMco03Z9BNEXlqlXKiFk8feVYB8obEjz7YYq1XhO9q7JUmkSs0WGIH4xU0XB1
kPC8T8H+5E//84poYSFHE4pA+Ff68UANP2/EuFJWMjegiefnOr8aM42OAcUkjEWSlautIIX8
oD2GizwQYjWdDDjEonbuMKFP6rY2xGI3PSLI3IVU2opb0/itNhQui3WRxafloJqTlriY8m8+
qSLr2HGftbBlbyzVWB8o//aW0H0LMabjkIvrm7Zmh2vcCxiSxGBwYASuSYXGuQiKAgGptUs1
XJLZuzGCA9owggPWAgEBMIGTMIGMMQswCQYDVQQGEwJJTDEWMBQGA1UEChMNU3RhcnRDb20g
THRkLjErMCkGA1UECxMiU2VjdXJlIERpZ2l0YWwgQ2VydGlmaWNhdGUgU2lnbmluZzE4MDYG
A1UEAxMvU3RhcnRDb20gQ2xhc3MgMiBQcmltYXJ5IEludGVybWVkaWF0ZSBDbGllbnQgQ0EC
Ah5jMAkGBSsOAwIaBQCgggIbMBgGCSqGSIb3DQEJAzELBgkqhkiG9w0BBwEwHAYJKoZIhvcN
AQkFMQ8XDTEyMDgwMTEyMjMyNFowIwYJKoZIhvcNAQkEMRYEFGs0AGilQcFEQjN2n+f1WlCE
UT/EMGwGCSqGSIb3DQEJDzFfMF0wCwYJYIZIAWUDBAEqMAsGCWCGSAFlAwQBAjAKBggqhkiG
9w0DBzAOBggqhkiG9w0DAgICAIAwDQYIKoZIhvcNAwICAUAwBwYFKw4DAgcwDQYIKoZIhvcN
AwICASgwgaQGCSsGAQQBgjcQBDGBljCBkzCBjDELMAkGA1UEBhMCSUwxFjAUBgNVBAoTDVN0
YXJ0Q29tIEx0ZC4xKzApBgNVBAsTIlNlY3VyZSBEaWdpdGFsIENlcnRpZmljYXRlIFNpZ25p
bmcxODA2BgNVBAMTL1N0YXJ0Q29tIENsYXNzIDIgUHJpbWFyeSBJbnRlcm1lZGlhdGUgQ2xp
ZW50IENBAgIeYzCBpgYLKoZIhvcNAQkQAgsxgZaggZMwgYwxCzAJBgNVBAYTAklMMRYwFAYD
VQQKEw1TdGFydENvbSBMdGQuMSswKQYDVQQLEyJTZWN1cmUgRGlnaXRhbCBDZXJ0aWZpY2F0
ZSBTaWduaW5nMTgwNgYDVQQDEy9TdGFydENvbSBDbGFzcyAyIFByaW1hcnkgSW50ZXJtZWRp
YXRlIENsaWVudCBDQQICHmMwDQYJKoZIhvcNAQEBBQAEggEAKUX5V2WhlI68+pZnXrofJjMw
b0fGTuvpvafGGftJ7m3bSdXpXaXWtxTWbLi7FaV2sZfxK3XLgLh+azJJlB1VtnkFeJUpR1I8
7J62w80Z6i1eDAmBG2QzHsBGksRR6VuYDMOoNjzHx0kmDm9dQS+uJNNOdsRwaX4uGO5kNuyL
SI47FgDUbIx/QC/2TEn354MMMlXsrDYj1m5NLCk8xIG+hzm/0UDUw8lEVuVhK3Yd5QrAKTzM
lTgRffeSdxQp9OkjMe/R08MNiWBCMJJdgQCmF6n7p6bkHstUmkLLatA1WsfCZbWihQe4GIgz
vfoTRORWvosWwd2dnlTM6Qo1Ab+/zgAAAAAAAA==
--------------ms080207080306020700070001--


--===============3722033863519370756==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3722033863519370756==--


From xen-devel-bounces@lists.xen.org Wed Aug 01 12:25:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 12:25:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwXz4-0007K7-5w; Wed, 01 Aug 2012 12:25:02 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SwXz2-0007Jw-JK
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 12:25:00 +0000
Received: from [85.158.138.51:31581] by server-6.bemta-3.messagelabs.com id
	BC/46-20447-B1029105; Wed, 01 Aug 2012 12:24:59 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-174.messagelabs.com!1343823897!29856637!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNjg2MjA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26164 invoked from network); 1 Aug 2012 12:24:58 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 12:24:58 -0000
X-IronPort-AV: E=Sophos;i="4.77,693,1336363200"; d="scan'208";a="203789229"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 08:24:57 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 1 Aug 2012 08:24:57 -0400
Received: from cosworth.uk.xensource.com ([10.80.16.52] ident=ianc)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1SwXyy-0000B5-J8;
	Wed, 01 Aug 2012 13:24:56 +0100
MIME-Version: 1.0
X-Mercurial-Node: 6b09cb00e9f4d2dcea48edd1fe01fb29bbc04500
Message-ID: <6b09cb00e9f4d2dcea48.1343823896@cosworth.uk.xensource.com>
User-Agent: Mercurial-patchbomb/1.6.4
Date: Wed, 1 Aug 2012 13:24:56 +0100
From: Ian Campbell <ian.campbell@citrix.com>
To: xen-devel@lists.xen.org
Cc: ian.jackson@citrix.com, Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH V2] libxl: make libxl_device_pci_{add, remove,
 destroy} interfaces asynchronous
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1343823885 -3600
# Node ID 6b09cb00e9f4d2dcea48edd1fe01fb29bbc04500
# Parent  59bbb5dec3b88e32916aab620323aa9aa71289a3
libxl: make libxl_device_pci_{add,remove,destroy} interfaces asynchronous

This does not make the implementation fully asynchronous but just
updates the API to support asynchrony in the future.

Currently although these functions do not call hotplug scripts etc and
therefore are not "slow" (per the comment about ao machinery in
libxl_internal.h) they do interact with the device model and so are
not quite "fast" either. We can live with this for now.
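
For illustration, an external caller of the updated API might look
like this (a sketch, not part of the patch; per libxl's ao convention,
passing NULL for ao_how asks the call to complete synchronously
before returning):

```c
/* Sketch only: exercising the new asynchronous signature. */
libxl_device_pci pcidev;
libxl_device_pci_init(&pcidev);
pcidev.domain = 0; pcidev.bus = 3; pcidev.dev = 0; pcidev.func = 0;
rc = libxl_device_pci_add(ctx, domid, &pcidev, NULL /* ao_how */);
libxl_device_pci_dispose(&pcidev);
```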

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
v2 - rebase on top of "libxl: enforce prohibitions of internal callers" adding
     the appropriate annotations.

diff -r 59bbb5dec3b8 -r 6b09cb00e9f4 tools/libxl/libxl.h
From xen-devel-bounces@lists.xen.org Wed Aug 01 12:25:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 12:25:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwXz4-0007K7-5w; Wed, 01 Aug 2012 12:25:02 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SwXz2-0007Jw-JK
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 12:25:00 +0000
Received: from [85.158.138.51:31581] by server-6.bemta-3.messagelabs.com id
	BC/46-20447-B1029105; Wed, 01 Aug 2012 12:24:59 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-174.messagelabs.com!1343823897!29856637!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNjg2MjA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26164 invoked from network); 1 Aug 2012 12:24:58 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 12:24:58 -0000
X-IronPort-AV: E=Sophos;i="4.77,693,1336363200"; d="scan'208";a="203789229"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 08:24:57 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 1 Aug 2012 08:24:57 -0400
Received: from cosworth.uk.xensource.com ([10.80.16.52] ident=ianc)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1SwXyy-0000B5-J8;
	Wed, 01 Aug 2012 13:24:56 +0100
MIME-Version: 1.0
X-Mercurial-Node: 6b09cb00e9f4d2dcea48edd1fe01fb29bbc04500
Message-ID: <6b09cb00e9f4d2dcea48.1343823896@cosworth.uk.xensource.com>
User-Agent: Mercurial-patchbomb/1.6.4
Date: Wed, 1 Aug 2012 13:24:56 +0100
From: Ian Campbell <ian.campbell@citrix.com>
To: xen-devel@lists.xen.org
Cc: ian.jackson@citrix.com, Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH V2] libxl: make libxl_device_pci_{add, remove,
 destroy} interfaces asynchronous
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1343823885 -3600
# Node ID 6b09cb00e9f4d2dcea48edd1fe01fb29bbc04500
# Parent  59bbb5dec3b88e32916aab620323aa9aa71289a3
libxl: make libxl_device_pci_{add,remove,destroy} interfaces asynchronous

This does not make the implementation fully asynchronous; it only
updates the API so that asynchrony can be supported in the future.

Currently, although these functions do not call hotplug scripts etc. and
are therefore not "slow" (per the comment about the ao machinery in
libxl_internal.h), they do interact with the device model and so are
not quite "fast" either. We can live with this for now.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
v2 - rebase on top of "libxl: enforce prohibitions of internal callers" adding
     the appropriate annotations.

diff -r 59bbb5dec3b8 -r 6b09cb00e9f4 tools/libxl/libxl.h
--- a/tools/libxl/libxl.h	Wed Aug 01 12:50:03 2012 +0100
+++ b/tools/libxl/libxl.h	Wed Aug 01 13:24:45 2012 +0100
@@ -757,10 +757,21 @@ int libxl_device_vfb_destroy(libxl_ctx *
                              LIBXL_EXTERNAL_CALLERS_ONLY;
 
 /* PCI Passthrough */
-int libxl_device_pci_add(libxl_ctx *ctx, uint32_t domid, libxl_device_pci *pcidev);
-int libxl_device_pci_remove(libxl_ctx *ctx, uint32_t domid, libxl_device_pci *pcidev);
-int libxl_device_pci_destroy(libxl_ctx *ctx, uint32_t domid, libxl_device_pci *pcidev);
-libxl_device_pci *libxl_device_pci_list(libxl_ctx *ctx, uint32_t domid, int *num);
+int libxl_device_pci_add(libxl_ctx *ctx, uint32_t domid,
+                         libxl_device_pci *pcidev,
+                         const libxl_asyncop_how *ao_how)
+                         LIBXL_EXTERNAL_CALLERS_ONLY;
+int libxl_device_pci_remove(libxl_ctx *ctx, uint32_t domid,
+                            libxl_device_pci *pcidev,
+                            const libxl_asyncop_how *ao_how)
+                            LIBXL_EXTERNAL_CALLERS_ONLY;
+int libxl_device_pci_destroy(libxl_ctx *ctx, uint32_t domid,
+                             libxl_device_pci *pcidev,
+                             const libxl_asyncop_how *ao_how)
+                             LIBXL_EXTERNAL_CALLERS_ONLY;
+
+libxl_device_pci *libxl_device_pci_list(libxl_ctx *ctx, uint32_t domid,
+                                        int *num);
 
 /*
  * Functions related to making devices assignable -- that is, bound to
diff -r 59bbb5dec3b8 -r 6b09cb00e9f4 tools/libxl/libxl_pci.c
--- a/tools/libxl/libxl_pci.c	Wed Aug 01 12:50:03 2012 +0100
+++ b/tools/libxl/libxl_pci.c	Wed Aug 01 13:24:45 2012 +0100
@@ -1010,13 +1010,15 @@ int libxl__device_pci_setdefault(libxl__
     return 0;
 }
 
-int libxl_device_pci_add(libxl_ctx *ctx, uint32_t domid, libxl_device_pci *pcidev)
+int libxl_device_pci_add(libxl_ctx *ctx, uint32_t domid,
+                         libxl_device_pci *pcidev,
+                         const libxl_asyncop_how *ao_how)
 {
-    GC_INIT(ctx);
+    AO_CREATE(ctx, domid, ao_how);
     int rc;
     rc = libxl__device_pci_add(gc, domid, pcidev, 0);
-    GC_FREE;
-    return rc;
+    libxl__ao_complete(egc, ao, rc);
+    return AO_INPROGRESS;
 }
 
 static int libxl_pcidev_assignable(libxl_ctx *ctx, libxl_device_pci *pcidev)
@@ -1150,6 +1152,9 @@ static int qemu_pci_remove_xenstore(libx
     return 0;
 }
 
+static int libxl__device_pci_remove_common(libxl__gc *gc, uint32_t domid,
+                                           libxl_device_pci *pcidev, int force);
+
 static int do_pci_remove(libxl__gc *gc, uint32_t domid,
                          libxl_device_pci *pcidev, int force)
 {
@@ -1263,10 +1268,7 @@ out:
     stubdomid = libxl_get_stubdom_id(ctx, domid);
     if (stubdomid != 0) {
         libxl_device_pci pcidev_s = *pcidev;
-        if (force)
-                libxl_device_pci_destroy(ctx, stubdomid, &pcidev_s);
-        else
-                libxl_device_pci_remove(ctx, stubdomid, &pcidev_s);
+        libxl__device_pci_remove_common(gc, stubdomid, &pcidev_s, force);
     }
 
     libxl__device_pci_remove_xenstore(gc, domid, pcidev);
@@ -1313,27 +1315,31 @@ out:
     return rc;
 }
 
-int libxl_device_pci_remove(libxl_ctx *ctx, uint32_t domid, libxl_device_pci *pcidev)
+int libxl_device_pci_remove(libxl_ctx *ctx, uint32_t domid,
+                            libxl_device_pci *pcidev,
+                            const libxl_asyncop_how *ao_how)
+
 {
-    GC_INIT(ctx);
+    AO_CREATE(ctx, domid, ao_how);
     int rc;
 
     rc = libxl__device_pci_remove_common(gc, domid, pcidev, 0);
 
-    GC_FREE;
-    return rc;
+    libxl__ao_complete(egc, ao, rc);
+    return AO_INPROGRESS;
 }
 
 int libxl_device_pci_destroy(libxl_ctx *ctx, uint32_t domid,
-                                  libxl_device_pci *pcidev)
+                             libxl_device_pci *pcidev,
+                             const libxl_asyncop_how *ao_how)
 {
-    GC_INIT(ctx);
+    AO_CREATE(ctx, domid, ao_how);
     int rc;
 
     rc = libxl__device_pci_remove_common(gc, domid, pcidev, 1);
 
-    GC_FREE;
-    return rc;
+    libxl__ao_complete(egc, ao, rc);
+    return AO_INPROGRESS;
 }
 
 static void libxl__device_pci_from_xs_be(libxl__gc *gc,
@@ -1415,7 +1421,7 @@ int libxl__device_pci_destroy_all(libxl_
          * respond to SCI interrupt because the guest kernel has shut down the
          * devices by the time we even get here!
          */
-        if (libxl_device_pci_destroy(ctx, domid, pcidevs + i) < 0)
+        if (libxl__device_pci_remove_common(gc, domid, pcidevs + i, 1) < 0)
             rc = ERROR_FAIL;
     }
 
diff -r 59bbb5dec3b8 -r 6b09cb00e9f4 tools/libxl/xl_cmdimpl.c
--- a/tools/libxl/xl_cmdimpl.c	Wed Aug 01 12:50:03 2012 +0100
+++ b/tools/libxl/xl_cmdimpl.c	Wed Aug 01 13:24:45 2012 +0100
@@ -2389,9 +2389,9 @@ static void pcidetach(const char *dom, c
         exit(2);
     }
     if (force)
-        libxl_device_pci_destroy(ctx, domid, &pcidev);
+        libxl_device_pci_destroy(ctx, domid, &pcidev, 0);
     else
-        libxl_device_pci_remove(ctx, domid, &pcidev);
+        libxl_device_pci_remove(ctx, domid, &pcidev, 0);
 
     libxl_device_pci_dispose(&pcidev);
     xlu_cfg_destroy(config);
@@ -2435,7 +2435,7 @@ static void pciattach(const char *dom, c
         fprintf(stderr, "pci-attach: malformed BDF specification \"%s\"\n", bdf);
         exit(2);
     }
-    libxl_device_pci_add(ctx, domid, &pcidev);
+    libxl_device_pci_add(ctx, domid, &pcidev, 0);
 
     libxl_device_pci_dispose(&pcidev);
     xlu_cfg_destroy(config);
diff -r 59bbb5dec3b8 -r 6b09cb00e9f4 tools/ocaml/libs/xl/xenlight_stubs.c
--- a/tools/ocaml/libs/xl/xenlight_stubs.c	Wed Aug 01 12:50:03 2012 +0100
+++ b/tools/ocaml/libs/xl/xenlight_stubs.c	Wed Aug 01 13:24:45 2012 +0100
@@ -423,7 +423,7 @@ value stub_xl_device_pci_add(value info,
 	device_pci_val(&gc, &lg, &c_info, info);
 
 	INIT_CTX();
-	ret = libxl_device_pci_add(ctx, Int_val(domid), &c_info);
+	ret = libxl_device_pci_add(ctx, Int_val(domid), &c_info, 0);
 	if (ret != 0)
 		failwith_xl("pci_add", &lg);
 	FREE_CTX();
@@ -441,7 +441,7 @@ value stub_xl_device_pci_remove(value in
 	device_pci_val(&gc, &lg, &c_info, info);
 
 	INIT_CTX();
-	ret = libxl_device_pci_remove(ctx, Int_val(domid), &c_info);
+	ret = libxl_device_pci_remove(ctx, Int_val(domid), &c_info, 0);
 	if (ret != 0)
 		failwith_xl("pci_remove", &lg);
 	FREE_CTX();
diff -r 59bbb5dec3b8 -r 6b09cb00e9f4 tools/python/xen/lowlevel/xl/xl.c
--- a/tools/python/xen/lowlevel/xl/xl.c	Wed Aug 01 12:50:03 2012 +0100
+++ b/tools/python/xen/lowlevel/xl/xl.c	Wed Aug 01 13:24:45 2012 +0100
@@ -497,7 +497,7 @@ static PyObject *pyxl_pci_add(XlObject *
         return NULL;
     }
     pci = (Py_device_pci *)obj;
-    if ( libxl_device_pci_add(self->ctx, domid, &pci->obj) ) {
+    if ( libxl_device_pci_add(self->ctx, domid, &pci->obj, 0) ) {
         PyErr_SetString(xl_error_obj, "cannot add pci device");
         return NULL;
     }
@@ -519,12 +519,12 @@ static PyObject *pyxl_pci_del(XlObject *
     }
     pci = (Py_device_pci *)obj;
     if ( force ) {
-        if ( libxl_device_pci_destroy(self->ctx, domid, &pci->obj) ) {
+        if ( libxl_device_pci_destroy(self->ctx, domid, &pci->obj, 0) ) {
             PyErr_SetString(xl_error_obj, "cannot remove pci device");
             return NULL;
         }
     } else {
-        if ( libxl_device_pci_remove(self->ctx, domid, &pci->obj) ) {
+        if ( libxl_device_pci_remove(self->ctx, domid, &pci->obj, 0) ) {
             PyErr_SetString(xl_error_obj, "cannot remove pci device");
             return NULL;
         }

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 12:28:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 12:28:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwY21-0007VK-Ua; Wed, 01 Aug 2012 12:28:05 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SwY20-0007VA-6H
	for xen-devel@lists.xensource.com; Wed, 01 Aug 2012 12:28:04 +0000
Received: from [85.158.143.99:30570] by server-3.bemta-4.messagelabs.com id
	9D/F9-01511-3D029105; Wed, 01 Aug 2012 12:28:03 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-216.messagelabs.com!1343824082!26388530!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27289 invoked from network); 1 Aug 2012 12:28:03 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-7.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 12:28:03 -0000
X-IronPort-AV: E=Sophos;i="4.77,693,1336348800"; d="scan'208";a="13802045"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 12:28:02 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Wed, 1 Aug 2012
	13:28:02 +0100
Message-ID: <1343824081.27221.84.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Wed, 1 Aug 2012 13:28:01 +0100
In-Reply-To: <20434.1848.747678.259199@mariner.uk.xensource.com>
References: <4FC766F7.2030802@tiscali.it>
	<20434.1848.747678.259199@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: xen-devel <xen-devel@lists.xensource.com>,
	"fantonifabio@tiscali.it" <fantonifabio@tiscali.it>
Subject: Re: [Xen-devel] [PATCH] tools/hotplug/Linux/init.d/: added other
 xen kernel modules on xencommons start
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-06-08 at 15:07 +0100, Ian Jackson wrote:
> Fabio Fantoni writes ("[Xen-devel] [PATCH] tools/hotplug/Linux/init.d/: added other xen kernel modules on xencommons start"):
> > # HG changeset patch
> > # User Fabio Fantoni
> > # Date 1338467204 -7200
> > # Node ID 1f57503f10112718ecbbe424fa8fc9c55785f4c0
> > # Parent  dd7319230f4ac295b6d14ce2e2a3dccf82bb87d8
> > tools/hotplug/Linux/init.d/: added other xen kernel modules on 
> > xencommons start
> 
> This looks at least harmless to me.
> 
> I'm surprised, however, that these things aren't loaded automatically.
> For example, shouldn't the xenbus driver's enumeration automatically
> load blkback too ?

Yes, it should; there is autoloading support for all the backends.

Not sure about gntalloc. I suspect not.

> 
> Having said that, I'm inclined to apply this unless someone 
> explains that it's a bad idea.
> 
> Ian.
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 12:40:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 12:40:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwYDV-0007vb-QC; Wed, 01 Aug 2012 12:39:57 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <fantonifabio@tiscali.it>) id 1SwYDU-0007vJ-DQ
	for xen-devel@lists.xensource.com; Wed, 01 Aug 2012 12:39:56 +0000
X-Env-Sender: fantonifabio@tiscali.it
X-Msg-Ref: server-4.tower-27.messagelabs.com!1343824658!8612383!1
X-Originating-IP: [94.23.245.208]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3150 invoked from network); 1 Aug 2012 12:37:38 -0000
Received: from lnx3.fantu.it (HELO lnx3.fantu.it) (94.23.245.208)
	by server-4.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 1 Aug 2012 12:37:38 -0000
Received: from localhost (localhost.localdomain [127.0.0.1])
	by lnx3.fantu.it (Postfix) with ESMTP id 37E43402991;
	Wed,  1 Aug 2012 14:37:38 +0200 (CEST)
X-Virus-Scanned: Debian amavisd-new at lnx3.fantu.it
Received: from lnx3.fantu.it ([127.0.0.1])
	by localhost (lnx3.fantu.it [127.0.0.1]) (amavisd-new, port 10024)
	with ESMTP id D7QzLxYhwUJd; Wed,  1 Aug 2012 14:37:37 +0200 (CEST)
Received: from [192.168.178.50]
	(host73-164-dynamic.56-82-r.retail.telecomitalia.it [82.56.164.73])
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(No client certificate requested)
	(Authenticated sender: prova@fantu.it)
	by lnx3.fantu.it (Postfix) with ESMTPSA id D5043402990;
	Wed,  1 Aug 2012 14:37:36 +0200 (CEST)
Message-ID: <5019230C.8010602@tiscali.it>
Date: Wed, 01 Aug 2012 14:37:32 +0200
From: Fabio Fantoni <fantonifabio@tiscali.it>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:14.0) Gecko/20120713 Thunderbird/14.0
MIME-Version: 1.0
To: Kristian Hagsted Rasmussen <kristian@hagsted.dk>
References: <5018FAE5.6070305@tiscali.it>
	<56EBEBACEA93434C80F27BB249CB677601929FFFD935@hagsted-aserver>
In-Reply-To: <56EBEBACEA93434C80F27BB249CB677601929FFFD935@hagsted-aserver>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] Bug report about Windows 7 pro 64 bit domU on
 xen-unstable dom0 with qemu traditional
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: fantonifabio@tiscali.it
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5716622120492766473=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Il 01/08/2012 12:19, Kristian Hagsted Rasmussen ha scritto:
>> Because the save / restore and cd-insert on qemu-xen for now are not
>> working I had to fall back onqemu-xen-traditional.
>> Dom0 is Wheezy 64 bit with xen 4.2.0-rc1 from source
>> I tested Windows 7 pro 64 bit domU with gplpv (last build 357).
>> xl create is working
>> xl shutdown is working
>> vnc is working, but with vfb line on configuration file is not working=

>> save/restore is working but on restore network is up but not working
>> cdrom is not working, see empty device also if there is cdrom
> I have seen the same problem, with cd-rom in windows 7 64 bit. When I c=
heck device manager
> the CD or DVD drive is listed with the right size of the media, but the=
 filesystem is listed as RAW.
> If I exchange the real device (/dev/sr0) with an iso file, windows is a=
ble to read from the cd-drive
> without problems. I am also using qemu-xen-traditional as I use PCI-pas=
sthrough. For me the
> problem is there both with and without pv-drivers installed.
>
>> After there are the content of domU configuration file and log, if you=

>> need more information tell me and I will post it.
>> ----------------------------------------------------------------------=
--------------------------
>> /etc/xen/W7.cfg
>> ----------------------------------------------------------------------=
--------------------------
>> name=3D'W7'
>> builder=3D"hvm"
>> memory=3D2048
>> vcpus=3D2
>> vif=3D['bridge=3Dxenbr0']
>> #vfb=3D['vnc=3D1,vncunused=3D1,vnclisten=3D0.0.0.0,keymap=3Dit']
>> disk=3D['/mnt/vm/disks/W7.disk1.xm,raw,hda,rw','/dev/sr0,raw,hdb,ro,cd=
rom']
>> boot=3D'cd'
>> device_model_version=3D"qemu-xen-traditional"
>> vnc=3D1
>> vncunused=3D1
>> vnclisten=3D"0.0.0.0"
>> keymap=3D"it"
>> #spice=3D1
>> #spicehost=3D"0.0.0.0"
>> spiceport=3D6000
>> #spicepasswd=3D'test'
>> #spicedisable_ticketing=3D1
>> #spiceagent_mouse =3D 0
>> #on_poweroff=3D"destroy"
>> on_reboot=3D"restart"
>> on_crash=3D"destroy"
>> #stdvga=3D0
>> #qxl=3D1
>> ----------------------------------------------------------------------=
--------------------------
>> ----------------------------------------------------------------------=
--------------------------
>> /var/log/xen/qemu-dm-W7.log
>> ----------------------------------------------------------------------=
--------------------------
>> domid: 9
>> -videoram option does not work with cirrus vga device model. Videoram
>> set to 4M.
>> Using file /dev/xen/blktap-2/tapdev0 in read-write mode
>> Using file /dev/sr0 in read-only mode
>> Watching /local/domain/0/device-model/9/logdirty/cmd
>> Watching /local/domain/0/device-model/9/command
>> Watching /local/domain/9/cpu
>> qemu_map_cache_init nr_buckets = 10000 size 4194304
>> shared page at pfn feffd
>> buffered io page at pfn feffb
>> Guest uuid = 6dd9db74-ec31-47a4-a2af-43e382d2ec21
>> populating video RAM at ff000000
>> mapping video RAM from ff000000
>> Register xen platform.
>> Done register platform.
>> platform_fixed_ioport: changed ro/rw state of ROM memory area. now is rw state.
>> xs_read(/local/domain/0/device-model/9/xen_extended_power_mgmt): read error
>> xs_read(): vncpasswd get error.
>> /vm/6dd9db74-ec31-47a4-a2af-43e382d2ec21/vncpasswd.
>> medium change watch on `hdb' (index: 1): /dev/sr0
>> I/O request not ready: 0, ptr: 0, port: 0, data: 0, count: 0, size: 0
>> Log-dirty: no command yet.
>> I/O request not ready: 0, ptr: 0, port: 0, data: 0, count: 0, size: 0
>> vcpu-set: watch node error.
>> xs_read(/local/domain/9/log-throttling): read error
>> qemu: ignoring not-understood drive `/local/domain/9/log-throttling'
>> medium change watch on `/local/domain/9/log-throttling' - unknown
>> device, ignored
>> cirrus vga map change while on lfb mode
>> mapping vram to f0000000 - f0400000
>> platform_fixed_ioport: changed ro/rw state of ROM memory area. now is rw state.
>> platform_fixed_ioport: changed ro/rw state of ROM memory area. now is ro state.
>> Unknown PV product 2 loaded in guest
>> PV driver build 1
>> region type 1 at [c100,c200).
>> region type 0 at [f3001000,f3001100).
>> squash iomem [f3001000, f3001100).
>> ------------------------------------------------------------------------------------------------
> BR Kristian Hagsted
>
> -----
> No viruses in this message.
> Checked by AVG - www.avg.com
> Version: 2012.0.2196 / Virus database: 2437/5168 - Release date: 31/07/2012
>
>
>
Thanks for the reply. I have tried an iso instead of the physical cdrom and it works.
I have also tried a hotplug cd change, but there is a bug:

xl -vvv cd-eject W7 hdb
libxl: debug: libxl.c:2137:libxl_cdrom_insert: ao 0xb15980: create: how=(nil) callback=(nil) poller=0xb159e0
Errore di segmentazione (segmentation fault)
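The working iso-based setup mentioned above differs from the failing one only in the cdrom entry of the disk list. A sketch of the corresponding /etc/xen/W7.cfg line, with an illustrative iso path standing in for /dev/sr0:

```
disk=['/mnt/vm/disks/W7.disk1.xm,raw,hda,rw','/path/to/install.iso,raw,hdb,ro,cdrom']
```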




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel



From xen-devel-bounces@lists.xen.org Wed Aug 01 12:40:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 12:40:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwYDV-0007vb-QC; Wed, 01 Aug 2012 12:39:57 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <fantonifabio@tiscali.it>) id 1SwYDU-0007vJ-DQ
	for xen-devel@lists.xensource.com; Wed, 01 Aug 2012 12:39:56 +0000
X-Env-Sender: fantonifabio@tiscali.it
X-Msg-Ref: server-4.tower-27.messagelabs.com!1343824658!8612383!1
X-Originating-IP: [94.23.245.208]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3150 invoked from network); 1 Aug 2012 12:37:38 -0000
Received: from lnx3.fantu.it (HELO lnx3.fantu.it) (94.23.245.208)
	by server-4.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 1 Aug 2012 12:37:38 -0000
Received: from localhost (localhost.localdomain [127.0.0.1])
	by lnx3.fantu.it (Postfix) with ESMTP id 37E43402991;
	Wed,  1 Aug 2012 14:37:38 +0200 (CEST)
X-Virus-Scanned: Debian amavisd-new at lnx3.fantu.it
Received: from lnx3.fantu.it ([127.0.0.1])
	by localhost (lnx3.fantu.it [127.0.0.1]) (amavisd-new, port 10024)
	with ESMTP id D7QzLxYhwUJd; Wed,  1 Aug 2012 14:37:37 +0200 (CEST)
Received: from [192.168.178.50]
	(host73-164-dynamic.56-82-r.retail.telecomitalia.it [82.56.164.73])
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(No client certificate requested)
	(Authenticated sender: prova@fantu.it)
	by lnx3.fantu.it (Postfix) with ESMTPSA id D5043402990;
	Wed,  1 Aug 2012 14:37:36 +0200 (CEST)
Message-ID: <5019230C.8010602@tiscali.it>
Date: Wed, 01 Aug 2012 14:37:32 +0200
From: Fabio Fantoni <fantonifabio@tiscali.it>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:14.0) Gecko/20120713 Thunderbird/14.0
MIME-Version: 1.0
To: Kristian Hagsted Rasmussen <kristian@hagsted.dk>
References: <5018FAE5.6070305@tiscali.it>
	<56EBEBACEA93434C80F27BB249CB677601929FFFD935@hagsted-aserver>
In-Reply-To: <56EBEBACEA93434C80F27BB249CB677601929FFFD935@hagsted-aserver>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] Bug report about Windows 7 pro 64 bit domU on
 xen-unstable dom0 with qemu traditional
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: fantonifabio@tiscali.it
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5716622120492766473=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a digitally signed message in MIME format.

On 01/08/2012 12:19, Kristian Hagsted Rasmussen wrote:
>> Because the save / restore and cd-insert on qemu-xen for now are not
>> working I had to fall back on qemu-xen-traditional.
>> Dom0 is Wheezy 64 bit with xen 4.2.0-rc1 from source
>> I tested Windows 7 pro 64 bit domU with gplpv (last build 357).
>> xl create is working
>> xl shutdown is working
>> vnc is working, but with the vfb line in the configuration file it is not working
>> save/restore is working but on restore network is up but not working
>> cdrom is not working, see empty device also if there is cdrom
> I have seen the same problem, with cd-rom in windows 7 64 bit. When I check device manager
> the CD or DVD drive is listed with the right size of the media, but the filesystem is listed as RAW.
> If I exchange the real device (/dev/sr0) with an iso file, windows is able to read from the cd-drive
> without problems. I am also using qemu-xen-traditional as I use PCI-passthrough. For me the
> problem is there both with and without pv-drivers installed.
>
>> Following are the contents of the domU configuration file and log; if you
>> need more information, tell me and I will post it.
>> ------------------------------------------------------------------------------------------------
>> /etc/xen/W7.cfg
>> ------------------------------------------------------------------------------------------------
>> name='W7'
>> builder="hvm"
>> memory=2048
>> vcpus=2
>> vif=['bridge=xenbr0']
>> #vfb=['vnc=1,vncunused=1,vnclisten=0.0.0.0,keymap=it']
>> disk=['/mnt/vm/disks/W7.disk1.xm,raw,hda,rw','/dev/sr0,raw,hdb,ro,cdrom']
>> boot='cd'
>> device_model_version="qemu-xen-traditional"
>> vnc=1
>> vncunused=1
>> vnclisten="0.0.0.0"
>> keymap="it"
>> #spice=1
>> #spicehost="0.0.0.0"
>> spiceport=6000
>> #spicepasswd='test'
>> #spicedisable_ticketing=1
>> #spiceagent_mouse = 0
>> #on_poweroff="destroy"
>> on_reboot="restart"
>> on_crash="destroy"
>> #stdvga=0
>> #qxl=1
>> ------------------------------------------------------------------------------------------------
>> ------------------------------------------------------------------------------------------------
>> /var/log/xen/qemu-dm-W7.log
>> ------------------------------------------------------------------------------------------------
>> domid: 9
>> -videoram option does not work with cirrus vga device model. Videoram
>> set to 4M.
>> Using file /dev/xen/blktap-2/tapdev0 in read-write mode
>> Using file /dev/sr0 in read-only mode
>> Watching /local/domain/0/device-model/9/logdirty/cmd
>> Watching /local/domain/0/device-model/9/command
>> Watching /local/domain/9/cpu
>> qemu_map_cache_init nr_buckets = 10000 size 4194304
>> shared page at pfn feffd
>> buffered io page at pfn feffb
>> Guest uuid = 6dd9db74-ec31-47a4-a2af-43e382d2ec21
>> populating video RAM at ff000000
>> mapping video RAM from ff000000
>> Register xen platform.
>> Done register platform.
>> platform_fixed_ioport: changed ro/rw state of ROM memory area. now is rw state.
>> xs_read(/local/domain/0/device-model/9/xen_extended_power_mgmt): read error
>> xs_read(): vncpasswd get error.
>> /vm/6dd9db74-ec31-47a4-a2af-43e382d2ec21/vncpasswd.
>> medium change watch on `hdb' (index: 1): /dev/sr0
>> I/O request not ready: 0, ptr: 0, port: 0, data: 0, count: 0, size: 0
>> Log-dirty: no command yet.
>> I/O request not ready: 0, ptr: 0, port: 0, data: 0, count: 0, size: 0
>> vcpu-set: watch node error.
>> xs_read(/local/domain/9/log-throttling): read error
>> qemu: ignoring not-understood drive `/local/domain/9/log-throttling'
>> medium change watch on `/local/domain/9/log-throttling' - unknown
>> device, ignored
>> cirrus vga map change while on lfb mode
>> mapping vram to f0000000 - f0400000
>> platform_fixed_ioport: changed ro/rw state of ROM memory area. now is rw state.
>> platform_fixed_ioport: changed ro/rw state of ROM memory area. now is ro state.
>> Unknown PV product 2 loaded in guest
>> PV driver build 1
>> region type 1 at [c100,c200).
>> region type 0 at [f3001000,f3001100).
>> squash iomem [f3001000, f3001100).
>> ------------------------------------------------------------------------------------------------
> BR Kristian Hagsted
>
> -----
> No viruses in this message.
> Checked by AVG - www.avg.com
> Version: 2012.0.2196 / Virus database: 2437/5168 - Release date: 31/07/2012
>
>
>
Thanks for the reply. I have tried an iso instead of the physical cdrom and it works.
I have also tried a hotplug cd change, but there is a bug:

xl -vvv cd-eject W7 hdb
libxl: debug: libxl.c:2137:libxl_cdrom_insert: ao 0xb15980: create: how=(nil) callback=(nil) poller=0xb159e0
Errore di segmentazione (segmentation fault)






From xen-devel-bounces@lists.xen.org Wed Aug 01 13:26:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 13:26:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwYwG-0008Sl-KS; Wed, 01 Aug 2012 13:26:12 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lars.kurth.xen@gmail.com>) id 1SwYwE-0008Sd-Dh
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 13:26:10 +0000
Received: from [85.158.143.35:50997] by server-3.bemta-4.messagelabs.com id
	26/33-01511-17E29105; Wed, 01 Aug 2012 13:26:09 +0000
X-Env-Sender: lars.kurth.xen@gmail.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1343827566!5073777!1
X-Originating-IP: [74.125.83.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26608 invoked from network); 1 Aug 2012 13:26:07 -0000
Received: from mail-ee0-f45.google.com (HELO mail-ee0-f45.google.com)
	(74.125.83.45)
	by server-5.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 13:26:07 -0000
Received: by eeke53 with SMTP id e53so1987416eek.32
	for <xen-devel@lists.xen.org>; Wed, 01 Aug 2012 06:26:06 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:date:from:reply-to:user-agent:mime-version:to
	:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=2sxTJNdhtJFw/p8IIVVMASFSNSWktsmHzSqzqtbsf+E=;
	b=MA9XYkAn8fQG0R+k3lqMrSdp9EzdHiJNg3feCEWyBkhmXl5i5i5A7rHFJhrRYLRC7K
	enLVTBNGsdRDn5RaQZCqBxykYkEv8V+EfmhvfOeqSAN4IiA71dQ0FyGN4YIlFP0bCgy5
	yp+sM62ETzF8YhHrKDkYBv01xaZXD1f36upIWEay4arJ9AA54DolXdmFrxS0z6YoNq6j
	9jaNt0TQEpfDSSdqsRLySPHWM7YpR3o5pRo0tlaEYCu6WzaYDJHgPGUglYprB9O6NuF2
	O5Qrw+yu98jhdsKSitZNI89AzAh79MjRPWQnrKiVU7dAKqx7wbjPD8HZupjhVnWq6vgg
	AYHA==
Received: by 10.14.178.67 with SMTP id e43mr22369166eem.44.1343827566077;
	Wed, 01 Aug 2012 06:26:06 -0700 (PDT)
Received: from [172.16.26.11] (b01bc490.bb.sky.com. [176.27.196.144])
	by mx.google.com with ESMTPS id g42sm8870802eem.14.2012.08.01.06.26.04
	(version=SSLv3 cipher=OTHER); Wed, 01 Aug 2012 06:26:05 -0700 (PDT)
Message-ID: <50192E6C.6020606@xen.org>
Date: Wed, 01 Aug 2012 14:26:04 +0100
From: Lars Kurth <lars.kurth@xen.org>
User-Agent: Mozilla/5.0 (Windows NT 6.1;
	rv:14.0) Gecko/20120713 Thunderbird/14.0
MIME-Version: 1.0
To: xen-devel@lists.xen.org
References: <1343820798.27221.71.camel@zakaz.uk.xensource.com>
In-Reply-To: <1343820798.27221.71.camel@zakaz.uk.xensource.com>
Subject: Re: [Xen-devel] Git branch for ARM patches pending for 4.3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: lars.kurth@xen.org
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Sounds like some of the info in here should be added to 
http://wiki.xen.org/wiki/Xen_ARMv7_with_Virtualization_Extensions
Lars

On 01/08/2012 12:33, Ian Campbell wrote:
> We've got a lot of patches floating about which are a necessary baseline
> for ongoing ARM port work. Now that we have released 4.2.0-rc1 I think
> it is inappropriate to keep committing ARM patches to mainline (even those
> which touch only ARM code; for a while I've not been committing ARM
> patches which touch generic bits).
>
> Rather than have everyone collate all the necessary patches themselves, I
> have created a branch where I intend to collect patches which are
> basically ready but need to wait for the 4.3 dev cycle to open before
> they can go in (with a couple of exceptions for necessary but
> still-in-the-HACK-phase patches).
>
> The branch is:
>          git://xenbits.xen.org/people/ianc/xen-unstable.git arm-for-4.3
> and is based off:
>          git://xenbits.xen.org/people/aperard/xen.git staging
>
> The arm-for-4.3 branch contains patches which will need to be swapped
> out for non-HACK versions at some point and is therefore potentially
> rebasing.
>
> In fact I'm considering rebasing as a matter of course (rather than
> merging) such that we have a branch which is current against 4.3 when we
> come to sweep it in. Opinions from the potential consumers of the branch
> will be considered ;-)
>
> Ian.
>
>
>


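Ian's point above (the branch may rebase, so consumers should reset rather than merge) can be sketched with plain git commands. The repo below is a local throwaway stand-in for the real xenbits remotes, so the commands are self-contained; all names and paths are illustrative:

```shell
# Sketch: following a branch that rebases, like arm-for-4.3.
set -e
tmp=$(mktemp -d)
# Wrapper so commits work without global git identity configured.
g() { git -c user.name=demo -c user.email=demo@example.org "$@"; }

# Simulated upstream with an arm-for-4.3 branch carrying a HACK patch.
g init -q "$tmp/upstream"
g -C "$tmp/upstream" commit -q --allow-empty -m "base"
g -C "$tmp/upstream" checkout -q -b arm-for-4.3
g -C "$tmp/upstream" commit -q --allow-empty -m "HACK: interim patch"

# Consumer clones and checks out the branch.
g clone -q "$tmp/upstream" "$tmp/work"
g -C "$tmp/work" checkout -q arm-for-4.3

# Upstream rebases (here: rewrites the tip), replacing the HACK commit.
g -C "$tmp/upstream" commit -q --amend --allow-empty -m "non-HACK: final patch"

# Because the branch rebases, the consumer resets instead of merging.
g -C "$tmp/work" fetch -q origin
g -C "$tmp/work" reset -q --hard origin/arm-for-4.3
g -C "$tmp/work" log -1 --format=%s
```

A plain `git pull` here would try to merge the old and rewritten histories; `fetch` followed by `reset --hard` keeps the local branch an exact copy of the rebased upstream, which matches the "rebasing as a matter of course" model described above.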

From xen-devel-bounces@lists.xen.org Wed Aug 01 13:26:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 13:26:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwYwG-0008Sl-KS; Wed, 01 Aug 2012 13:26:12 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lars.kurth.xen@gmail.com>) id 1SwYwE-0008Sd-Dh
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 13:26:10 +0000
Received: from [85.158.143.35:50997] by server-3.bemta-4.messagelabs.com id
	26/33-01511-17E29105; Wed, 01 Aug 2012 13:26:09 +0000
X-Env-Sender: lars.kurth.xen@gmail.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1343827566!5073777!1
X-Originating-IP: [74.125.83.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26608 invoked from network); 1 Aug 2012 13:26:07 -0000
Received: from mail-ee0-f45.google.com (HELO mail-ee0-f45.google.com)
	(74.125.83.45)
	by server-5.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 13:26:07 -0000
Received: by eeke53 with SMTP id e53so1987416eek.32
	for <xen-devel@lists.xen.org>; Wed, 01 Aug 2012 06:26:06 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:date:from:reply-to:user-agent:mime-version:to
	:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=2sxTJNdhtJFw/p8IIVVMASFSNSWktsmHzSqzqtbsf+E=;
	b=MA9XYkAn8fQG0R+k3lqMrSdp9EzdHiJNg3feCEWyBkhmXl5i5i5A7rHFJhrRYLRC7K
	enLVTBNGsdRDn5RaQZCqBxykYkEv8V+EfmhvfOeqSAN4IiA71dQ0FyGN4YIlFP0bCgy5
	yp+sM62ETzF8YhHrKDkYBv01xaZXD1f36upIWEay4arJ9AA54DolXdmFrxS0z6YoNq6j
	9jaNt0TQEpfDSSdqsRLySPHWM7YpR3o5pRo0tlaEYCu6WzaYDJHgPGUglYprB9O6NuF2
	O5Qrw+yu98jhdsKSitZNI89AzAh79MjRPWQnrKiVU7dAKqx7wbjPD8HZupjhVnWq6vgg
	AYHA==
Received: by 10.14.178.67 with SMTP id e43mr22369166eem.44.1343827566077;
	Wed, 01 Aug 2012 06:26:06 -0700 (PDT)
Received: from [172.16.26.11] (b01bc490.bb.sky.com. [176.27.196.144])
	by mx.google.com with ESMTPS id g42sm8870802eem.14.2012.08.01.06.26.04
	(version=SSLv3 cipher=OTHER); Wed, 01 Aug 2012 06:26:05 -0700 (PDT)
Message-ID: <50192E6C.6020606@xen.org>
Date: Wed, 01 Aug 2012 14:26:04 +0100
From: Lars Kurth <lars.kurth@xen.org>
User-Agent: Mozilla/5.0 (Windows NT 6.1;
	rv:14.0) Gecko/20120713 Thunderbird/14.0
MIME-Version: 1.0
To: xen-devel@lists.xen.org
References: <1343820798.27221.71.camel@zakaz.uk.xensource.com>
In-Reply-To: <1343820798.27221.71.camel@zakaz.uk.xensource.com>
Subject: Re: [Xen-devel] Git branch for ARM patches pending for 4.3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: lars.kurth@xen.org
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Sounds like some of the info in here should be added to 
http://wiki.xen.org/wiki/Xen_ARMv7_with_Virtualization_Extensions
Lars

On 01/08/2012 12:33, Ian Campbell wrote:
> We've got a lot of patches floating about which are a necessary baseline
> for ongoing ARM port work. Now that we have released 4.2.0-rc1 I think
> it is inappropriate to keep committing ARM patches to mainline (even
> those which touch only ARM code; for a while I've not been committing
> ARM patches which touch generic bits).
>
> Rather than have everyone collate all the necessary patches themselves, I
> have created a branch where I intend to collect patches which are
> basically ready but need to wait for the 4.3 dev cycle to open before
> they can go in (with a couple of exceptions for patches which are
> necessary but still in the HACK phase).
>
> The branch is:
>          git://xenbits.xen.org/people/ianc/xen-unstable.git arm-for-4.3
> and is based off:
>          git://xenbits.xen.org/people/aperard/xen.git staging
>
> The arm-for-4.3 branch contains patches which will need to be swapped
> out for non-HACK versions at some point and is therefore potentially
> rebasing.
>
> In fact I'm considering rebasing as a matter of course (rather than
> merging) such that we have a branch which is current against 4.3 when we
> come to sweep it in. Opinions from the potential consumers of the branch
> will be considered ;-)
>
> Ian.
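A consumer of Ian's branch might track it along these lines (a sketch; because the branch may be rebased, re-syncing uses fetch plus a hard reset rather than a plain pull):

```shell
# Start from Anthony Perard's staging tree, which arm-for-4.3 is based on.
git clone git://xenbits.xen.org/people/aperard/xen.git xen
cd xen
git checkout staging

# Add Ian's tree and check out the (potentially rebasing) branch.
git remote add ianc git://xenbits.xen.org/people/ianc/xen-unstable.git
git fetch ianc
git checkout -b arm-for-4.3 ianc/arm-for-4.3

# Later, to pick up a rebased version of the branch, do not merge;
# re-point the local branch at the new upstream tip instead.
git fetch ianc
git reset --hard ianc/arm-for-4.3
```

Local work on top of the branch would need to be carried across each reset with `git rebase --onto`, which is the usual cost of tracking a rebasing branch.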


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 13:31:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 13:31:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwZ0z-000089-BQ; Wed, 01 Aug 2012 13:31:05 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SwZ0x-000082-TE
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 13:31:03 +0000
Received: from [85.158.143.99:25448] by server-3.bemta-4.messagelabs.com id
	64/9B-01511-79F29105; Wed, 01 Aug 2012 13:31:03 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-216.messagelabs.com!1343827861!26401006!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8582 invoked from network); 1 Aug 2012 13:31:02 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-7.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 13:31:02 -0000
X-IronPort-AV: E=Sophos;i="4.77,694,1336348800"; d="scan'208";a="13803735"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 13:31:01 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Wed, 1 Aug 2012
	14:31:01 +0100
Message-ID: <1343827860.27221.85.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "lars.kurth@xen.org" <lars.kurth@xen.org>
Date: Wed, 1 Aug 2012 14:31:00 +0100
In-Reply-To: <50192E6C.6020606@xen.org>
References: <1343820798.27221.71.camel@zakaz.uk.xensource.com>
	<50192E6C.6020606@xen.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Git branch for ARM patches pending for 4.3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-08-01 at 14:26 +0100, Lars Kurth wrote:
> Sounds like some of the info in here should be added to 
> http://wiki.xen.org/wiki/Xen_ARMv7_with_Virtualization_Extensions

Good point. Adding now.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 13:34:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 13:34:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwZ3l-0000Ho-UT; Wed, 01 Aug 2012 13:33:57 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <afaerber@suse.de>) id 1SwZ3k-0000Hi-BH
	for xen-devel@lists.xensource.com; Wed, 01 Aug 2012 13:33:56 +0000
Received: from [85.158.139.83:15426] by server-10.bemta-5.messagelabs.com id
	6C/4B-02190-34039105; Wed, 01 Aug 2012 13:33:55 +0000
X-Env-Sender: afaerber@suse.de
X-Msg-Ref: server-11.tower-182.messagelabs.com!1343828034!22530060!1
X-Originating-IP: [195.135.220.15]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6432 invoked from network); 1 Aug 2012 13:33:54 -0000
Received: from cantor2.suse.de (HELO mx2.suse.de) (195.135.220.15)
	by server-11.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 1 Aug 2012 13:33:54 -0000
Received: from relay1.suse.de (unknown [195.135.220.254])
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(No client certificate requested)
	by mx2.suse.de (Postfix) with ESMTP id AE5A4A39CE;
	Wed,  1 Aug 2012 15:33:53 +0200 (CEST)
Message-ID: <5019303E.30508@suse.de>
Date: Wed, 01 Aug 2012 15:33:50 +0200
From: =?ISO-8859-1?Q?Andreas_F=E4rber?= <afaerber@suse.de>
Organization: SUSE LINUX Products GmbH
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120713 Thunderbird/14.0
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1208011114490.4645@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1208011114490.4645@kaball.uk.xensource.com>
X-Enigmail-Version: 1.5a1pre
Cc: Anthony Liguori <anthony@codemonkey.ws>, xen-devel@lists.xensource.com,
	qemu-devel@nongnu.org, fantonifabio@tiscali.it,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [Qemu-devel] [PATCH] fix Xen compilation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01.08.2012 12:19, Stefano Stabellini wrote:
> xen_pt_unregister_device is used as PCIUnregisterFunc, so it should
> match the type.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

Tested-by: Andreas Färber <afaerber@suse.de>

/-F

> 
> diff --git a/hw/xen_pt.c b/hw/xen_pt.c
> index fdf68aa..307119a 100644
> --- a/hw/xen_pt.c
> +++ b/hw/xen_pt.c
> @@ -764,7 +764,7 @@ out:
>      return 0;
>  }
>  
> -static int xen_pt_unregister_device(PCIDevice *d)
> +static void xen_pt_unregister_device(PCIDevice *d)
>  {
>      XenPCIPassthroughState *s = DO_UPCAST(XenPCIPassthroughState, dev, d);
>      uint8_t machine_irq = s->machine_irq;
> @@ -814,8 +814,6 @@ static int xen_pt_unregister_device(PCIDevice *d)
>      memory_listener_unregister(&s->memory_listener);
>  
>      xen_host_pci_device_put(&s->real_device);
> -
> -    return 0;
> }
>  
>  static Property xen_pci_passthrough_properties[] = {
> 

-- 
SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer; HRB 16746 AG Nürnberg

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 13:41:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 13:41:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwZAb-0000aJ-G4; Wed, 01 Aug 2012 13:41:01 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SwZAa-0000aA-LM
	for xen-devel@lists.xensource.com; Wed, 01 Aug 2012 13:41:00 +0000
Received: from [85.158.143.99:23014] by server-1.bemta-4.messagelabs.com id
	35/D8-24392-BE139105; Wed, 01 Aug 2012 13:40:59 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-13.tower-216.messagelabs.com!1343828459!28864148!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15837 invoked from network); 1 Aug 2012 13:40:59 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-13.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 13:40:59 -0000
X-IronPort-AV: E=Sophos;i="4.77,694,1336348800"; d="scan'208";a="13803985"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 13:40:58 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 1 Aug 2012 14:40:58 +0100
Date: Wed, 1 Aug 2012 14:40:41 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: =?UTF-8?Q?Andreas_F=C3=A4rber?= <afaerber@suse.de>
In-Reply-To: <5019303E.30508@suse.de>
Message-ID: <alpine.DEB.2.02.1208011439540.4645@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208011114490.4645@kaball.uk.xensource.com>
	<5019303E.30508@suse.de>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Content-Type: multipart/mixed;
	boundary="1342847746-1969835623-1343828412=:4645"
Content-ID: <alpine.DEB.2.02.1208011440190.4645@kaball.uk.xensource.com>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	"fantonifabio@tiscali.it" <fantonifabio@tiscali.it>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	Anthony Liguori <anthony@codemonkey.ws>
Subject: Re: [Xen-devel] [Qemu-devel] [PATCH] fix Xen compilation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--1342847746-1969835623-1343828412=:4645
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8BIT
Content-ID: <alpine.DEB.2.02.1208011440191.4645@kaball.uk.xensource.com>

On Wed, 1 Aug 2012, Andreas Färber wrote:
> On 01.08.2012 12:19, Stefano Stabellini wrote:
> > xen_pt_unregister_device is used as PCIUnregisterFunc, so it should
> > match the type.
> > 
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> 
> Tested-by: Andreas Färber <afaerber@suse.de>
> 

Thanks!
I have another old Xen fix to configure in my backlog, so I am just going
to send a pull request with both fixes in it.
--1342847746-1969835623-1343828412=:4645
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--1342847746-1969835623-1343828412=:4645--


From xen-devel-bounces@lists.xen.org Wed Aug 01 13:53:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 13:53:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwZMC-0000zU-Sc; Wed, 01 Aug 2012 13:53:00 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SwZMA-0000zK-TU
	for xen-devel@lists.xensource.com; Wed, 01 Aug 2012 13:52:59 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1343829172!6179549!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12187 invoked from network); 1 Aug 2012 13:52:52 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 13:52:52 -0000
X-IronPort-AV: E=Sophos;i="4.77,694,1336348800"; d="scan'208";a="13804269"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 13:52:52 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 1 Aug 2012 14:52:52 +0100
Date: Wed, 1 Aug 2012 14:52:35 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: <qemu-devel@nongnu.org>
Message-ID: <alpine.DEB.2.02.1208011447240.4645@kaball.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Anthony.Perard@citrix.com, xen-devel@lists.xensource.com,
	Anthony Liguori <anthony@codemonkey.ws>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: [Xen-devel] [PULL] Xen fixes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Anthony,
please pull a couple of simple Xen compilation fixes from:

git://xenbits.xen.org/people/sstabellini/qemu-dm.git xen-fixes-20120801

Anthony PERARD (1):
      configure: Fix xen probe with Xen 4.2 and later

Stefano Stabellini (1):
      fix Xen compilation

 configure   |    1 -
 hw/xen_pt.c |    4 +---
 2 files changed, 1 insertions(+), 4 deletions(-)

Cheers,

Stefano
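On the receiving side, a pull request like this is typically honored by fetching and merging the named branch into the maintainer's own tree, roughly (a sketch of the usual flow, not a transcript of what Anthony ran):

```shell
# In the maintainer's qemu checkout: fetch and merge the offered branch.
git pull git://xenbits.xen.org/people/sstabellini/qemu-dm.git xen-fixes-20120801

# Review what came in before pushing it out.
git log --stat ORIG_HEAD..
```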

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 14:02:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 14:02:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwZUp-0001E9-SB; Wed, 01 Aug 2012 14:01:55 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <nyerup@one.com>) id 1SwZUo-0001E4-9c
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 14:01:54 +0000
Received: from [85.158.143.99:19114] by server-1.bemta-4.messagelabs.com id
	81/7C-24392-1D639105; Wed, 01 Aug 2012 14:01:53 +0000
X-Env-Sender: nyerup@one.com
X-Msg-Ref: server-5.tower-216.messagelabs.com!1343829712!29437820!1
X-Originating-IP: [195.47.247.16]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29347 invoked from network); 1 Aug 2012 14:01:52 -0000
Received: from kontorsmtp1.one.com (HELO kontorsmtp1.one.com) (195.47.247.16)
	by server-5.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 1 Aug 2012 14:01:52 -0000
Received: from one.com (unknown [46.30.211.1])
	(using TLSv1 with cipher DHE-RSA-AES128-SHA (128/128 bits))
	(No client certificate requested)
	by kontorsmtp1.one.com (Postfix) with ESMTP id 5710D2F802A3D;
	Wed,  1 Aug 2012 16:01:52 +0200 (CEST)
Date: Wed, 1 Aug 2012 16:01:38 +0200
From: Jesper Dahl Nyerup <nyerup@one.com>
To: xen-devel@lists.xen.org
Message-ID: <20120801140137.GA4866@one.com>
MIME-Version: 1.0
Organization: One.com
User-Agent: Mutt/1.5.20 (2009-06-14)
Subject: [Xen-devel] File system passthrough using v9fs?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: nyerup@one.com
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1289011196830554502=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--===============1289011196830554502==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="HcAYCG3uE/tztfnV"
Content-Disposition: inline


--HcAYCG3uE/tztfnV
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

Hi,

I need to deploy a bunch of VMs that need data from NFS shares residing
on a network I don't trust my VMs to connect to directly.

What would it take for me to mount the shares on the hosts, and export
them to my VMs using v9fs, for instance?

In practice, only one of my VMs will access a portion of the NFS at a
time, and the host won't touch it at all, so I'm pretty confident that
the VMs' VFS caching and locking won't be an issue.

I understand that KVM can do v9fs exports using virtio and qemu[1], and
I was wondering if this was possible with Xen as well, as Xen also makes
use of qemu. Supposedly, using virtio devices should be possible for
HVM guests, but I'm not sure whether this has been implemented in
Xen's qemu.

I don't have a preference for v9fs at all, so any hints or insights
into similar solutions will be greatly appreciated.

Yours,

Jesper.

[1]: http://www.linux-kvm.org/page/9p_virtio
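
For context, the KVM setup referenced in [1] wires a host directory into
a guest over virtio-9p roughly as follows; the path and mount tag below
are illustrative placeholders, not values from this thread:

```shell
# Host side: attach a local directory as a virtio-9p export
# (path and mount_tag are made-up examples).
qemu-system-x86_64 \
  -m 1024 -drive file=guest.img,if=virtio \
  -virtfs local,path=/srv/export,mount_tag=shared0,security_model=passthrough,id=shared0

# Guest side: mount the export via the 9p filesystem over the virtio transport.
mount -t 9p -o trans=virtio,version=9p2000.L shared0 /mnt/shared
```

Whether Xen's qemu supports this at all is exactly the open question here.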

--HcAYCG3uE/tztfnV
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: Digital signature
Content-Disposition: inline

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.10 (GNU/Linux)

iEYEARECAAYFAlAZNsEACgkQtzA4yjN/Kb2N5wCgkYUvpUz126X1K7UrNDUTWX/W
wlgAnRYR6N+XkD/coPcsiNdphlYvAIYr
=bOWd
-----END PGP SIGNATURE-----

--HcAYCG3uE/tztfnV--


--===============1289011196830554502==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1289011196830554502==--


From xen-devel-bounces@lists.xen.org Wed Aug 01 14:26:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 14:26:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwZs1-0001Ss-7b; Wed, 01 Aug 2012 14:25:53 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1SwZs0-0001Sn-Ae
	for xen-devel@lists.xensource.com; Wed, 01 Aug 2012 14:25:52 +0000
Received: from [85.158.143.35:9176] by server-1.bemta-4.messagelabs.com id
	33/E6-24392-F6C39105; Wed, 01 Aug 2012 14:25:51 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1343831148!12812735!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDY3MjQwNA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22000 invoked from network); 1 Aug 2012 14:25:50 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-11.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 1 Aug 2012 14:25:50 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q71EPRHw008943
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 1 Aug 2012 14:25:28 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q71EPQRG016287
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 1 Aug 2012 14:25:26 GMT
Received: from abhmt114.oracle.com (abhmt114.oracle.com [141.146.116.66])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q71EPPOg024152; Wed, 1 Aug 2012 09:25:25 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 01 Aug 2012 07:25:25 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 65E20402B2; Wed,  1 Aug 2012 10:16:24 -0400 (EDT)
Date: Wed, 1 Aug 2012 10:16:24 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20120801141624.GD7227@phenom.dumpdata.com>
References: <alpine.DEB.2.02.1207251741470.26163@kaball.uk.xensource.com>
	<1343316846-25860-5-git-send-email-stefano.stabellini@eu.citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1343316846-25860-5-git-send-email-stefano.stabellini@eu.citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, catalin.marinas@arm.com,
	tim@xen.org, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH 05/24] xen/arm: empty implementation of
 grant_table arch specific functions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jul 26, 2012 at 04:33:47PM +0100, Stefano Stabellini wrote:
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> ---
>  arch/arm/xen/Makefile      |    2 +-
>  arch/arm/xen/grant-table.c |   53 ++++++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 54 insertions(+), 1 deletions(-)
>  create mode 100644 arch/arm/xen/grant-table.c
> 
> diff --git a/arch/arm/xen/Makefile b/arch/arm/xen/Makefile
> index b9d6acc..4384103 100644
> --- a/arch/arm/xen/Makefile
> +++ b/arch/arm/xen/Makefile
> @@ -1 +1 @@
> -obj-y		:= enlighten.o hypercall.o
> +obj-y		:= enlighten.o hypercall.o grant-table.o
> diff --git a/arch/arm/xen/grant-table.c b/arch/arm/xen/grant-table.c
> new file mode 100644
> index 0000000..0a4ee80
> --- /dev/null
> +++ b/arch/arm/xen/grant-table.c
> @@ -0,0 +1,53 @@
> +/******************************************************************************
> + * grant_table.c
> + * ARM specific part
> + *
> + * Granting foreign access to our memory reservation.
> + *
> + * This program is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU General Public License version 2
> + * as published by the Free Software Foundation; or, when distributed
> + * separately from the Linux kernel or incorporated into other
> + * software packages, subject to the following license:
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a copy
> + * of this source file (the "Software"), to deal in the Software without
> + * restriction, including without limitation the rights to use, copy, modify,
> + * merge, publish, distribute, sublicense, and/or sell copies of the Software,
> + * and to permit persons to whom the Software is furnished to do so, subject to
> + * the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be included in
> + * all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
> + * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
> + * IN THE SOFTWARE.
> + */
> +
> +#include <xen/interface/xen.h>
> +#include <xen/page.h>
> +#include <xen/grant_table.h>
> +
> +int arch_gnttab_map_shared(unsigned long *frames, unsigned long nr_gframes,
> +			   unsigned long max_nr_gframes,
> +			   void **__shared)
> +{
> +	return -1;

-ENOSYS
> +}
> +
> +void arch_gnttab_unmap(void *shared, unsigned long nr_gframes)
> +{
> +	return;
> +}
> +
> +int arch_gnttab_map_status(uint64_t *frames, unsigned long nr_gframes,
> +			   unsigned long max_nr_gframes,
> +			   grant_status_t **__shared)
> +{
> +	return -1;

Same here -ENOSYS
> +}
> -- 
> 1.7.2.5

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 14:30:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 14:30:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwZvU-0001ZJ-Rv; Wed, 01 Aug 2012 14:29:28 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1SwZvT-0001Z3-C3
	for xen-devel@lists.xensource.com; Wed, 01 Aug 2012 14:29:27 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1343831359!1851647!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjY5Nzc4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25651 invoked from network); 1 Aug 2012 14:29:21 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-11.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 1 Aug 2012 14:29:21 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q71ET0e1011727
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 1 Aug 2012 14:29:01 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q71ET0pX020827
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 1 Aug 2012 14:29:00 GMT
Received: from abhmt113.oracle.com (abhmt113.oracle.com [141.146.116.65])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q71ESxbh026972; Wed, 1 Aug 2012 09:28:59 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 01 Aug 2012 07:28:59 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 1E423402B2; Wed,  1 Aug 2012 10:19:59 -0400 (EDT)
Date: Wed, 1 Aug 2012 10:19:59 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20120801141959.GE7227@phenom.dumpdata.com>
References: <alpine.DEB.2.02.1207251741470.26163@kaball.uk.xensource.com>
	<1343316846-25860-7-git-send-email-stefano.stabellini@eu.citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1343316846-25860-7-git-send-email-stefano.stabellini@eu.citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, catalin.marinas@arm.com,
	tim@xen.org, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH 07/24] xen/arm: Xen detection and
	shared_info page mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jul 26, 2012 at 04:33:49PM +0100, Stefano Stabellini wrote:
> Check for a "/xen" node in the device tree, if it is present set
> xen_domain_type to XEN_HVM_DOMAIN and continue initialization.
> 
> Map the real shared info page using XENMEM_add_to_physmap with
> XENMAPSPACE_shared_info.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> ---
>  arch/arm/xen/enlighten.c |   56 ++++++++++++++++++++++++++++++++++++++++++++++
>  1 files changed, 56 insertions(+), 0 deletions(-)
> 
> diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
> index d27c2a6..8c923af 100644
> --- a/arch/arm/xen/enlighten.c
> +++ b/arch/arm/xen/enlighten.c
> @@ -5,6 +5,9 @@
>  #include <asm/xen/hypervisor.h>
>  #include <asm/xen/hypercall.h>
>  #include <linux/module.h>
> +#include <linux/of.h>
> +#include <linux/of_irq.h>
> +#include <linux/of_address.h>
>  
>  struct start_info _xen_start_info;
>  struct start_info *xen_start_info = &_xen_start_info;
> @@ -33,3 +36,56 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
>  	return -ENOSYS;
>  }
>  EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_range);
> +
> +/*
> + * == Xen Device Tree format ==
> + * - /xen node;
> + * - compatible "arm,xen";
> + * - one interrupt for Xen event notifications;
> + * - one memory region to map the grant_table.
> + */
> +static int __init xen_guest_init(void)
> +{
> +	int cpu;
> +	struct xen_add_to_physmap xatp;
> +	static struct shared_info *shared_info_page = 0;
> +	struct device_node *node;
> +
> +	node = of_find_compatible_node(NULL, NULL, "arm,xen");
> +	if (!node) {
> +		pr_info("No Xen support\n");

I don't think the pr_info is appropriate here?
> +		return 0;

Should this be -ENODEV?

> +	}
> +	xen_domain_type = XEN_HVM_DOMAIN;
> +
> +	if (!shared_info_page)
> +		shared_info_page = (struct shared_info *)
> +			get_zeroed_page(GFP_KERNEL);
> +	if (!shared_info_page) {
> +		pr_err("not enough memory");

\n

> +		return -ENOMEM;
> +	}
> +	xatp.domid = DOMID_SELF;
> +	xatp.idx = 0;
> +	xatp.space = XENMAPSPACE_shared_info;
> +	xatp.gpfn = __pa(shared_info_page) >> PAGE_SHIFT;
> +	if (HYPERVISOR_memory_op(XENMEM_add_to_physmap, &xatp))
> +		BUG();
> +
> +	HYPERVISOR_shared_info = (struct shared_info *)shared_info_page;
> +
> +	/* xen_vcpu is a pointer to the vcpu_info struct in the shared_info
> +	 * page, we use it in the event channel upcall and in some pvclock
> +	 * related functions. We don't need the vcpu_info placement
> +	 * optimizations because we don't use any pv_mmu or pv_irq op on
> +	 * HVM.
> +	 * When xen_hvm_init_shared_info is run at boot time only vcpu 0 is
> +	 * online but xen_hvm_init_shared_info is run at resume time too and
> +	 * in that case multiple vcpus might be online. */
> +	for_each_online_cpu(cpu) {
> +		per_cpu(xen_vcpu, cpu) =
> +			&HYPERVISOR_shared_info->vcpu_info[cpu];
> +	}
> +	return 0;

The above looks strikingly similar to the x86 one. Could it be
abstracted away to share the same code? Or is that something that
ought to be done later on when there is more meat on the bone?


> +}
> +core_initcall(xen_guest_init);
> -- 
> 1.7.2.5

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jul 26, 2012 at 04:33:49PM +0100, Stefano Stabellini wrote:
> Check for a "/xen" node in the device tree, if it is present set
> xen_domain_type to XEN_HVM_DOMAIN and continue initialization.
> 
> Map the real shared info page using XENMEM_add_to_physmap with
> XENMAPSPACE_shared_info.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> ---
>  arch/arm/xen/enlighten.c |   56 ++++++++++++++++++++++++++++++++++++++++++++++
>  1 files changed, 56 insertions(+), 0 deletions(-)
> 
> diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
> index d27c2a6..8c923af 100644
> --- a/arch/arm/xen/enlighten.c
> +++ b/arch/arm/xen/enlighten.c
> @@ -5,6 +5,9 @@
>  #include <asm/xen/hypervisor.h>
>  #include <asm/xen/hypercall.h>
>  #include <linux/module.h>
> +#include <linux/of.h>
> +#include <linux/of_irq.h>
> +#include <linux/of_address.h>
>  
>  struct start_info _xen_start_info;
>  struct start_info *xen_start_info = &_xen_start_info;
> @@ -33,3 +36,56 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
>  	return -ENOSYS;
>  }
>  EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_range);
> +
> +/*
> + * == Xen Device Tree format ==
> + * - /xen node;
> + * - compatible "arm,xen";
> + * - one interrupt for Xen event notifications;
> + * - one memory region to map the grant_table.
> + */
> +static int __init xen_guest_init(void)
> +{
> +	int cpu;
> +	struct xen_add_to_physmap xatp;
> +	static struct shared_info *shared_info_page = 0;
> +	struct device_node *node;
> +
> +	node = of_find_compatible_node(NULL, NULL, "arm,xen");
> +	if (!node) {
> +		pr_info("No Xen support\n");

I don't think the pr_info is appropriate here?
> +		return 0;

Should this be -ENODEV?

> +	}
> +	xen_domain_type = XEN_HVM_DOMAIN;
> +
> +	if (!shared_info_page)
> +		shared_info_page = (struct shared_info *)
> +			get_zeroed_page(GFP_KERNEL);
> +	if (!shared_info_page) {
> +		pr_err("not enough memory");

\n

> +		return -ENOMEM;
> +	}
> +	xatp.domid = DOMID_SELF;
> +	xatp.idx = 0;
> +	xatp.space = XENMAPSPACE_shared_info;
> +	xatp.gpfn = __pa(shared_info_page) >> PAGE_SHIFT;
> +	if (HYPERVISOR_memory_op(XENMEM_add_to_physmap, &xatp))
> +		BUG();
> +
> +	HYPERVISOR_shared_info = (struct shared_info *)shared_info_page;
> +
> +	/* xen_vcpu is a pointer to the vcpu_info struct in the shared_info
> +	 * page, we use it in the event channel upcall and in some pvclock
> +	 * related functions. We don't need the vcpu_info placement
> +	 * optimizations because we don't use any pv_mmu or pv_irq op on
> +	 * HVM.
> +	 * When xen_hvm_init_shared_info is run at boot time only vcpu 0 is
> +	 * online but xen_hvm_init_shared_info is run at resume time too and
> +	 * in that case multiple vcpus might be online. */
> +	for_each_online_cpu(cpu) {
> +		per_cpu(xen_vcpu, cpu) =
> +			&HYPERVISOR_shared_info->vcpu_info[cpu];
> +	}
> +	return 0;

The above looks strikingly similar to the x86 one. Could it be
abstracted away to share the same code? Or is that something that
ought to be done later on when there is more meat on the bone?


> +}
> +core_initcall(xen_guest_init);
> -- 
> 1.7.2.5

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 14:32:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 14:32:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwZyF-0001hS-E7; Wed, 01 Aug 2012 14:32:19 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1SwZyD-0001hL-7R
	for xen-devel@lists.xensource.com; Wed, 01 Aug 2012 14:32:17 +0000
Received: from [85.158.138.51:46597] by server-7.bemta-3.messagelabs.com id
	93/3F-21158-0FD39105; Wed, 01 Aug 2012 14:32:16 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-4.tower-174.messagelabs.com!1343831533!29881756!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjY5Nzc4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14960 invoked from network); 1 Aug 2012 14:32:15 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-4.tower-174.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 1 Aug 2012 14:32:15 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q71EVp72015431
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 1 Aug 2012 14:31:52 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q71EVous026386
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 1 Aug 2012 14:31:51 GMT
Received: from abhmt113.oracle.com (abhmt113.oracle.com [141.146.116.65])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q71EVo8L007520; Wed, 1 Aug 2012 09:31:50 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 01 Aug 2012 07:31:50 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 6BE6E402B2; Wed,  1 Aug 2012 10:22:49 -0400 (EDT)
Date: Wed, 1 Aug 2012 10:22:49 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20120801142249.GF7227@phenom.dumpdata.com>
References: <alpine.DEB.2.02.1207251741470.26163@kaball.uk.xensource.com>
	<1343316846-25860-8-git-send-email-stefano.stabellini@eu.citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1343316846-25860-8-git-send-email-stefano.stabellini@eu.citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, catalin.marinas@arm.com,
	tim@xen.org, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH 08/24] xen/arm: Introduce xen_pfn_t for pfn
	and mfn types
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jul 26, 2012 at 04:33:50PM +0100, Stefano Stabellini wrote:
> All the original Xen headers have xen_pfn_t as mfn and pfn type, however
> when they have been imported in Linux, xen_pfn_t has been replaced with
> unsigned long. That might work for x86 and ia64 but it does not for arm.

How come?
> Bring back xen_pfn_t and let each architecture define xen_pfn_t as they
> see fit.

I am OK with this as long as you include a comment in both of the
interface.h files saying why this is needed. I am curious why 'unsigned
long' won't work? Is it b/c on ARM you always want a 64-bit type?

> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> ---
>  arch/arm/include/asm/xen/interface.h  |    2 ++
>  arch/ia64/include/asm/xen/interface.h |    2 +-
>  arch/x86/include/asm/xen/interface.h  |    2 ++
>  include/xen/interface/grant_table.h   |    4 ++--
>  include/xen/interface/memory.h        |    6 +++---
>  include/xen/interface/platform.h      |    4 ++--
>  include/xen/interface/xen.h           |    6 +++---
>  include/xen/privcmd.h                 |    2 --
>  8 files changed, 15 insertions(+), 13 deletions(-)
> 
> diff --git a/arch/arm/include/asm/xen/interface.h b/arch/arm/include/asm/xen/interface.h
> index 6c3ab59..76b1ebe 100644
> --- a/arch/arm/include/asm/xen/interface.h
> +++ b/arch/arm/include/asm/xen/interface.h
> @@ -25,6 +25,7 @@
>  	} while (0)
>  
>  #ifndef __ASSEMBLY__
> +typedef uint64_t xen_pfn_t;
>  /* Guest handles for primitive C types. */
>  __DEFINE_GUEST_HANDLE(uchar, unsigned char);
>  __DEFINE_GUEST_HANDLE(uint,  unsigned int);
> @@ -35,6 +36,7 @@ DEFINE_GUEST_HANDLE(long);
>  DEFINE_GUEST_HANDLE(void);
>  DEFINE_GUEST_HANDLE(uint64_t);
>  DEFINE_GUEST_HANDLE(uint32_t);
> +DEFINE_GUEST_HANDLE(xen_pfn_t);
>  
>  /* Maximum number of virtual CPUs in multi-processor guests. */
>  #define MAX_VIRT_CPUS 1
> diff --git a/arch/ia64/include/asm/xen/interface.h b/arch/ia64/include/asm/xen/interface.h
> index 09d5f7f..9efa068 100644
> --- a/arch/ia64/include/asm/xen/interface.h
> +++ b/arch/ia64/include/asm/xen/interface.h
> @@ -67,6 +67,7 @@
>  #define set_xen_guest_handle(hnd, val)	do { (hnd).p = val; } while (0)
>  
>  #ifndef __ASSEMBLY__
> +typedef unsigned long xen_pfn_t;
>  /* Guest handles for primitive C types. */
>  __DEFINE_GUEST_HANDLE(uchar, unsigned char);
>  __DEFINE_GUEST_HANDLE(uint, unsigned int);
> @@ -79,7 +80,6 @@ DEFINE_GUEST_HANDLE(void);
>  DEFINE_GUEST_HANDLE(uint64_t);
>  DEFINE_GUEST_HANDLE(uint32_t);
>  
> -typedef unsigned long xen_pfn_t;
>  DEFINE_GUEST_HANDLE(xen_pfn_t);
>  #define PRI_xen_pfn	"lx"
>  #endif
> diff --git a/arch/x86/include/asm/xen/interface.h b/arch/x86/include/asm/xen/interface.h
> index cbf0c9d..24c1b07 100644
> --- a/arch/x86/include/asm/xen/interface.h
> +++ b/arch/x86/include/asm/xen/interface.h
> @@ -47,6 +47,7 @@
>  #endif
>  
>  #ifndef __ASSEMBLY__
> +typedef unsigned long xen_pfn_t;
>  /* Guest handles for primitive C types. */
>  __DEFINE_GUEST_HANDLE(uchar, unsigned char);
>  __DEFINE_GUEST_HANDLE(uint,  unsigned int);
> @@ -57,6 +58,7 @@ DEFINE_GUEST_HANDLE(long);
>  DEFINE_GUEST_HANDLE(void);
>  DEFINE_GUEST_HANDLE(uint64_t);
>  DEFINE_GUEST_HANDLE(uint32_t);
> +DEFINE_GUEST_HANDLE(xen_pfn_t);
>  #endif
>  
>  #ifndef HYPERVISOR_VIRT_START
> diff --git a/include/xen/interface/grant_table.h b/include/xen/interface/grant_table.h
> index a17d844..7da811b 100644
> --- a/include/xen/interface/grant_table.h
> +++ b/include/xen/interface/grant_table.h
> @@ -338,7 +338,7 @@ DEFINE_GUEST_HANDLE_STRUCT(gnttab_dump_table);
>  #define GNTTABOP_transfer                4
>  struct gnttab_transfer {
>      /* IN parameters. */
> -    unsigned long mfn;
> +    xen_pfn_t mfn;
>      domid_t       domid;
>      grant_ref_t   ref;
>      /* OUT parameters. */
> @@ -375,7 +375,7 @@ struct gnttab_copy {
>  	struct {
>  		union {
>  			grant_ref_t ref;
> -			unsigned long   gmfn;
> +			xen_pfn_t   gmfn;
>  		} u;
>  		domid_t  domid;
>  		uint16_t offset;
> diff --git a/include/xen/interface/memory.h b/include/xen/interface/memory.h
> index eac3ce1..abbbff0 100644
> --- a/include/xen/interface/memory.h
> +++ b/include/xen/interface/memory.h
> @@ -31,7 +31,7 @@ struct xen_memory_reservation {
>       *   OUT: GMFN bases of extents that were allocated
>       *   (NB. This command also updates the mach_to_phys translation table)
>       */
> -    GUEST_HANDLE(ulong) extent_start;
> +    GUEST_HANDLE(xen_pfn_t) extent_start;
>  
>      /* Number of extents, and size/alignment of each (2^extent_order pages). */
>      unsigned long  nr_extents;
> @@ -130,7 +130,7 @@ struct xen_machphys_mfn_list {
>       * any large discontiguities in the machine address space, 2MB gaps in
>       * the machphys table will be represented by an MFN base of zero.
>       */
> -    GUEST_HANDLE(ulong) extent_start;
> +    GUEST_HANDLE(xen_pfn_t) extent_start;
>  
>      /*
>       * Number of extents written to the above array. This will be smaller
> @@ -172,7 +172,7 @@ struct xen_add_to_physmap {
>      unsigned long idx;
>  
>      /* GPFN where the source mapping page should appear. */
> -    unsigned long gpfn;
> +    xen_pfn_t gpfn;
>  };
>  DEFINE_GUEST_HANDLE_STRUCT(xen_add_to_physmap);
>  
> diff --git a/include/xen/interface/platform.h b/include/xen/interface/platform.h
> index 486653f..0bea470 100644
> --- a/include/xen/interface/platform.h
> +++ b/include/xen/interface/platform.h
> @@ -54,7 +54,7 @@ DEFINE_GUEST_HANDLE_STRUCT(xenpf_settime_t);
>  #define XENPF_add_memtype         31
>  struct xenpf_add_memtype {
>  	/* IN variables. */
> -	unsigned long mfn;
> +	xen_pfn_t mfn;
>  	uint64_t nr_mfns;
>  	uint32_t type;
>  	/* OUT variables. */
> @@ -84,7 +84,7 @@ struct xenpf_read_memtype {
>  	/* IN variables. */
>  	uint32_t reg;
>  	/* OUT variables. */
> -	unsigned long mfn;
> +	xen_pfn_t mfn;
>  	uint64_t nr_mfns;
>  	uint32_t type;
>  };
> diff --git a/include/xen/interface/xen.h b/include/xen/interface/xen.h
> index 4f29f33..d59a991 100644
> --- a/include/xen/interface/xen.h
> +++ b/include/xen/interface/xen.h
> @@ -192,7 +192,7 @@ struct mmuext_op {
>  	unsigned int cmd;
>  	union {
>  		/* [UN]PIN_TABLE, NEW_BASEPTR, NEW_USER_BASEPTR */
> -		unsigned long mfn;
> +		xen_pfn_t mfn;
>  		/* INVLPG_LOCAL, INVLPG_ALL, SET_LDT */
>  		unsigned long linear_addr;
>  	} arg1;
> @@ -432,11 +432,11 @@ struct start_info {
>  	unsigned long nr_pages;     /* Total pages allocated to this domain.  */
>  	unsigned long shared_info;  /* MACHINE address of shared info struct. */
>  	uint32_t flags;             /* SIF_xxx flags.                         */
> -	unsigned long store_mfn;    /* MACHINE page number of shared page.    */
> +	xen_pfn_t store_mfn;        /* MACHINE page number of shared page.    */
>  	uint32_t store_evtchn;      /* Event channel for store communication. */
>  	union {
>  		struct {
> -			unsigned long mfn;  /* MACHINE page number of console page.   */
> +			xen_pfn_t mfn;      /* MACHINE page number of console page.   */
>  			uint32_t  evtchn;   /* Event channel for console page.        */
>  		} domU;
>  		struct {
> diff --git a/include/xen/privcmd.h b/include/xen/privcmd.h
> index 4d58881..45c1aa1 100644
> --- a/include/xen/privcmd.h
> +++ b/include/xen/privcmd.h
> @@ -37,8 +37,6 @@
>  #include <linux/compiler.h>
>  #include <xen/interface/xen.h>
>  
> -typedef unsigned long xen_pfn_t;
> -
>  struct privcmd_hypercall {
>  	__u64 op;
>  	__u64 arg[5];
> -- 
> 1.7.2.5

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 14:38:39 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 14:38:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swa3m-0001vy-7J; Wed, 01 Aug 2012 14:38:02 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1Swa3j-0001vr-UR
	for xen-devel@lists.xensource.com; Wed, 01 Aug 2012 14:38:00 +0000
Received: from [85.158.139.83:36407] by server-3.bemta-5.messagelabs.com id
	38/CB-03367-74F39105; Wed, 01 Aug 2012 14:37:59 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-6.tower-182.messagelabs.com!1343831877!25893629!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjY5Nzc4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20430 invoked from network); 1 Aug 2012 14:37:58 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-6.tower-182.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 1 Aug 2012 14:37:58 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q71EbhnX022791
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 1 Aug 2012 14:37:43 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q71Ebf7p021378
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 1 Aug 2012 14:37:42 GMT
Received: from abhmt114.oracle.com (abhmt114.oracle.com [141.146.116.66])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q71Ebfe4012623; Wed, 1 Aug 2012 09:37:41 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 01 Aug 2012 07:37:41 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 5D5AF402B2; Wed,  1 Aug 2012 10:28:40 -0400 (EDT)
Date: Wed, 1 Aug 2012 10:28:40 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20120801142840.GG7227@phenom.dumpdata.com>
References: <alpine.DEB.2.02.1207251741470.26163@kaball.uk.xensource.com>
	<1343316846-25860-9-git-send-email-stefano.stabellini@eu.citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1343316846-25860-9-git-send-email-stefano.stabellini@eu.citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, catalin.marinas@arm.com,
	tim@xen.org, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH 09/24] xen/arm: compile and run xenbus
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jul 26, 2012 at 04:33:51PM +0100, Stefano Stabellini wrote:
> bind_evtchn_to_irqhandler can legitimately return 0 (irq 0): it is not
> an error.
> 
> If Linux is running as an HVM domain and is running as Dom0, use
> xenstored_local_init to initialize the xenstore page and event channel.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> ---
>  drivers/xen/xenbus/xenbus_comms.c |    2 +-
>  drivers/xen/xenbus/xenbus_probe.c |   27 +++++++++++++++++----------
>  drivers/xen/xenbus/xenbus_xs.c    |    1 +
>  3 files changed, 19 insertions(+), 11 deletions(-)
> 
> diff --git a/drivers/xen/xenbus/xenbus_comms.c b/drivers/xen/xenbus/xenbus_comms.c
> index 52fe7ad..c5aa55c 100644
> --- a/drivers/xen/xenbus/xenbus_comms.c
> +++ b/drivers/xen/xenbus/xenbus_comms.c
> @@ -224,7 +224,7 @@ int xb_init_comms(void)
>  		int err;
>  		err = bind_evtchn_to_irqhandler(xen_store_evtchn, wake_waiting,
>  						0, "xenbus", &xb_waitq);
> -		if (err <= 0) {
> +		if (err < 0) {
>  			printk(KERN_ERR "XENBUS request irq failed %i\n", err);
>  			return err;
>  		}
> diff --git a/drivers/xen/xenbus/xenbus_probe.c b/drivers/xen/xenbus/xenbus_probe.c
> index b793723..3ae47c2 100644
> --- a/drivers/xen/xenbus/xenbus_probe.c
> +++ b/drivers/xen/xenbus/xenbus_probe.c
> @@ -729,16 +729,23 @@ static int __init xenbus_init(void)
>  	xenbus_ring_ops_init();
>  
>  	if (xen_hvm_domain()) {
> -		uint64_t v = 0;
> -		err = hvm_get_parameter(HVM_PARAM_STORE_EVTCHN, &v);
> -		if (err)
> -			goto out_error;
> -		xen_store_evtchn = (int)v;
> -		err = hvm_get_parameter(HVM_PARAM_STORE_PFN, &v);
> -		if (err)
> -			goto out_error;
> -		xen_store_mfn = (unsigned long)v;
> -		xen_store_interface = ioremap(xen_store_mfn << PAGE_SHIFT, PAGE_SIZE);
> +		if (xen_initial_domain()) {
> +			err = xenstored_local_init();
> +			xen_store_interface =
> +				phys_to_virt(xen_store_mfn << PAGE_SHIFT);
> +		} else {
> +			uint64_t v = 0;
> +			err = hvm_get_parameter(HVM_PARAM_STORE_EVTCHN, &v);
> +			if (err)
> +				goto out_error;
> +			xen_store_evtchn = (int)v;
> +			err = hvm_get_parameter(HVM_PARAM_STORE_PFN, &v);
> +			if (err)
> +				goto out_error;
> +			xen_store_mfn = (unsigned long)v;
> +			xen_store_interface =
> +				ioremap(xen_store_mfn << PAGE_SHIFT, PAGE_SIZE);
> +		}

This, together with the Hybrid PV dom0 series (not yet posted, but it does
similar manipulation here), is turning into more and more of a rat's nest.


Any chance we can abstract the three different ways of accessing XenStore
and just have something like this:

	enum {
		USE_UNKNOWN,
		USE_HVM,
		USE_PV,
		USE_LOCAL,
		USE_ALREADY_INIT,
	};
	int usage = USE_UNKNOWN;
	if (xen_pv_domain())
		usage = USE_PV;
	if (xen_hvm_domain())
		usage = USE_HVM;
	if (xen_initial_domain())
		usage = USE_LOCAL;

	if (xen_start_info->store_evtchn)
		usage = USE_ALREADY_INIT;
	
	.. other overwrites..

	switch (usage) {
		.. blah blah.
	}


>  		xen_store_evtchn = xen_start_info->store_evtchn;
>  		xen_store_mfn = xen_start_info->store_mfn;
> diff --git a/drivers/xen/xenbus/xenbus_xs.c b/drivers/xen/xenbus/xenbus_xs.c
> index d1c217b..f7feb3d 100644
> --- a/drivers/xen/xenbus/xenbus_xs.c
> +++ b/drivers/xen/xenbus/xenbus_xs.c
> @@ -44,6 +44,7 @@
>  #include <linux/rwsem.h>
>  #include <linux/module.h>
>  #include <linux/mutex.h>
> +#include <asm/xen/hypervisor.h>
>  #include <xen/xenbus.h>
>  #include <xen/xen.h>
>  #include "xenbus_comms.h"
> -- 
> 1.7.2.5

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 14:45:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 14:45:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swa9u-00027A-5v; Wed, 01 Aug 2012 14:44:22 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1Swa9s-000275-N9
	for xen-devel@lists.xensource.com; Wed, 01 Aug 2012 14:44:21 +0000
Received: from [85.158.139.83:27976] by server-9.bemta-5.messagelabs.com id
	5E/2D-01069-3C049105; Wed, 01 Aug 2012 14:44:19 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-7.tower-182.messagelabs.com!1343832258!24076910!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDY3MjQwNA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2944 invoked from network); 1 Aug 2012 14:44:19 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-7.tower-182.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 1 Aug 2012 14:44:19 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q71Ei3sp031394
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 1 Aug 2012 14:44:04 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q71Ei0K4024028
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 1 Aug 2012 14:44:01 GMT
Received: from abhmt101.oracle.com (abhmt101.oracle.com [141.146.116.53])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q71EhxKh018218; Wed, 1 Aug 2012 09:44:00 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 01 Aug 2012 07:43:59 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 1E802402B2; Wed,  1 Aug 2012 10:34:59 -0400 (EDT)
Date: Wed, 1 Aug 2012 10:34:59 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20120801143459.GH7227@phenom.dumpdata.com>
References: <alpine.DEB.2.02.1207251741470.26163@kaball.uk.xensource.com>
	<1343316846-25860-11-git-send-email-stefano.stabellini@eu.citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1343316846-25860-11-git-send-email-stefano.stabellini@eu.citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, catalin.marinas@arm.com,
	tim@xen.org, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH 11/24] xen/arm: introduce CONFIG_XEN on ARM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jul 26, 2012 at 04:33:53PM +0100, Stefano Stabellini wrote:
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> ---
>  arch/arm/Kconfig |   10 ++++++++++
>  1 files changed, 10 insertions(+), 0 deletions(-)
> 
> diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
> index a91009c..9c54cb4 100644
> --- a/arch/arm/Kconfig
> +++ b/arch/arm/Kconfig
> @@ -2228,6 +2228,16 @@ config NEON
>  	  Say Y to include support code for NEON, the ARMv7 Advanced SIMD
>  	  Extension.
>  
> +config XEN_DOM0
> +	def_bool y

What is the benefit of this? I was hoping at some point to rip out all of those
XEN_DOM0 options and just have, mostly,
	CONFIG_XEN_BACKEND_SUPPORT
		(which would compile whatever is needed for HVM or PV guests to run
		blkback/netback/grant/grantalloc/etc)
	CONFIG_XEN_FRONTEND_SUPPORT
		(the vice-versa)

	CONFIG_XEN_PCI
		which would have the PCI support, the ACPI routing (which is
		predominantly most of the dom0 support), VGA text support, and
		whatever else is in there.

In that fashion you could compile a kernel with CONFIG_XEN_BACKEND_SUPPORT
without any CONFIG_XEN_PCI and drop it in as an HVM device driver domain.
Though maybe that wouldn't really work: if you do PCI passthrough to such a
domain, you are going to need the PCI support and ACPI routing. The VGA text
maybe not...

OK, never mind - we should brainstorm it and figure out how to make this
nicely work. In the meantime this is OK.
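For illustration, the split described above might look roughly like this as Kconfig (the option names are the proposals from this mail; nothing like this exists in-tree, and the dependencies are guesses):

```kconfig
config XEN_BACKEND_SUPPORT
	bool "Xen backend driver support"
	depends on XEN
	help
	  Build whatever is needed for HVM or PV guests to run backends
	  (blkback/netback, grant tables, grant allocation, etc).

config XEN_FRONTEND_SUPPORT
	bool "Xen frontend driver support"
	depends on XEN
	help
	  The reverse: build only the frontend drivers.

config XEN_PCI
	bool "Xen PCI and ACPI routing support"
	depends on XEN && PCI
	help
	  PCI support, ACPI interrupt routing (most of the dom0
	  support), VGA text console, and related pieces.
```

A backend-only device driver domain would then select XEN_BACKEND_SUPPORT without XEN_PCI, modulo the PCI-passthrough caveat above.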

> +
> +config XEN
> +	bool "Xen guest support on ARM"
> +	depends on ARM && OF
> +	select XEN_DOM0
> +	help
> +	  Say Y if you want to run Linux in a Virtual Machine on Xen on ARM.
> +
>  endmenu
>  
>  menu "Userspace binary formats"
> -- 
> 1.7.2.5
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 14:45:47 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 14:45:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwaAj-000296-KG; Wed, 01 Aug 2012 14:45:13 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1SwaAi-00028y-Gz
	for xen-devel@lists.xensource.com; Wed, 01 Aug 2012 14:45:12 +0000
Received: from [85.158.143.99:12704] by server-3.bemta-4.messagelabs.com id
	03/D1-01511-7F049105; Wed, 01 Aug 2012 14:45:11 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-11.tower-216.messagelabs.com!1343832308!22256103!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDY3MjQwNA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10473 invoked from network); 1 Aug 2012 14:45:10 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-11.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 1 Aug 2012 14:45:10 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q71Eirbs032142
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 1 Aug 2012 14:44:54 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q71EiqOg023204
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 1 Aug 2012 14:44:52 GMT
Received: from abhmt117.oracle.com (abhmt117.oracle.com [141.146.116.69])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q71EiqnZ008497; Wed, 1 Aug 2012 09:44:52 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 01 Aug 2012 07:44:52 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id B021B402B2; Wed,  1 Aug 2012 10:35:51 -0400 (EDT)
Date: Wed, 1 Aug 2012 10:35:51 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20120801143551.GI7227@phenom.dumpdata.com>
References: <alpine.DEB.2.02.1207251741470.26163@kaball.uk.xensource.com>
	<1343316846-25860-4-git-send-email-stefano.stabellini@eu.citrix.com>
	<20120726163759.GE9222@phenom.dumpdata.com>
	<1343381305.6812.116.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1343381305.6812.116.camel@zakaz.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linaro-dev@lists.linaro.org" <linaro-dev@lists.linaro.org>,
	"arnd@arndb.de" <arnd@arndb.de>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"catalin.marinas@arm.com" <catalin.marinas@arm.com>,
	"Tim \(Xen.org\)" <tim@xen.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [PATCH 04/24] xen/arm: sync_bitops
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jul 27, 2012 at 10:28:25AM +0100, Ian Campbell wrote:
> On Thu, 2012-07-26 at 17:37 +0100, Konrad Rzeszutek Wilk wrote:
> > On Thu, Jul 26, 2012 at 04:33:46PM +0100, Stefano Stabellini wrote:
> > > sync_bitops functions are equivalent to the SMP implementation of the
> > > original functions, independently from CONFIG_SMP being defined.
> > 
> > So why can't the code be changed to use that? Is it that
> > the _set_bit, _clear_bit, etc are not available with !CONFIG_SMP?
> 
> _set_bit etc are not SMP safe if !CONFIG_SMP. But under Xen you might be
> communicating with a completely external entity who might be on another
> CPU (e.g. two uniprocessor guests communicating via event channels and
> grant tables). So we need a variant of the bit ops which are SMP safe
> even on a UP kernel.
> 
> The users are common code and the sync_foo vs foo distinction matters on
> some platforms (e.g. x86 where a UP kernel would omit the LOCK prefix
> for the normal ones).

OK, that makes sense. Stefano, can you include that comment in the git
commit description and in the sync_bitops.h file, please?
> 
> > 
> > > 
> > > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > > ---
> > >  arch/arm/include/asm/sync_bitops.h |   17 +++++++++++++++++
> > >  1 files changed, 17 insertions(+), 0 deletions(-)
> > >  create mode 100644 arch/arm/include/asm/sync_bitops.h
> > > 
> > > diff --git a/arch/arm/include/asm/sync_bitops.h b/arch/arm/include/asm/sync_bitops.h
> > > new file mode 100644
> > > index 0000000..d975092903
> > > --- /dev/null
> > > +++ b/arch/arm/include/asm/sync_bitops.h
> > > @@ -0,0 +1,17 @@
> > > +#ifndef __ASM_SYNC_BITOPS_H__
> > > +#define __ASM_SYNC_BITOPS_H__
> > > +
> > > +#include <asm/bitops.h>
> > > +#include <asm/system.h>
> > > +
> > > +#define sync_set_bit(nr, p)		_set_bit(nr, p)
> > > +#define sync_clear_bit(nr, p)		_clear_bit(nr, p)
> > > +#define sync_change_bit(nr, p)		_change_bit(nr, p)
> > > +#define sync_test_and_set_bit(nr, p)	_test_and_set_bit(nr, p)
> > > +#define sync_test_and_clear_bit(nr, p)	_test_and_clear_bit(nr, p)
> > > +#define sync_test_and_change_bit(nr, p)	_test_and_change_bit(nr, p)
> > > +#define sync_test_bit(nr, addr)		test_bit(nr, addr)
> > > +#define sync_cmpxchg			cmpxchg
> > > +
> > > +
> > > +#endif
> > > -- 
> > > 1.7.2.5
> > > 
> > > 
> > > _______________________________________________
> > > Xen-devel mailing list
> > > Xen-devel@lists.xen.org
> > > http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jul 27, 2012 at 10:28:25AM +0100, Ian Campbell wrote:
> On Thu, 2012-07-26 at 17:37 +0100, Konrad Rzeszutek Wilk wrote:
> > On Thu, Jul 26, 2012 at 04:33:46PM +0100, Stefano Stabellini wrote:
> > > sync_bitops functions are equivalent to the SMP implementation of the
> > > original functions, independently of CONFIG_SMP being defined.
> > 
> > So why can't the code be changed to use that? Is it that
> > the _set_bit, _clear_bit, etc are not available with !CONFIG_SMP?
> 
> _set_bit etc are not SMP safe if !CONFIG_SMP. But under Xen you might be
> communicating with a completely external entity who might be on another
> CPU (e.g. two uniprocessor guests communicating via event channels and
> grant tables). So we need a variant of the bit ops which are SMP safe
> even on a UP kernel.
> 
> The users are common code and the sync_foo vs foo distinction matters on
> some platforms (e.g. x86 where a UP kernel would omit the LOCK prefix
> for the normal ones).

OK, that makes sense. Stefano, can you include that comment in the git
commit description and in the sync_bitops.h file, please?
> 
> > 
> > > 
> > > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > > ---
> > >  arch/arm/include/asm/sync_bitops.h |   17 +++++++++++++++++
> > >  1 files changed, 17 insertions(+), 0 deletions(-)
> > >  create mode 100644 arch/arm/include/asm/sync_bitops.h
> > > 
> > > diff --git a/arch/arm/include/asm/sync_bitops.h b/arch/arm/include/asm/sync_bitops.h
> > > new file mode 100644
> > > index 0000000..d975092903
> > > --- /dev/null
> > > +++ b/arch/arm/include/asm/sync_bitops.h
> > > @@ -0,0 +1,17 @@
> > > +#ifndef __ASM_SYNC_BITOPS_H__
> > > +#define __ASM_SYNC_BITOPS_H__
> > > +
> > > +#include <asm/bitops.h>
> > > +#include <asm/system.h>
> > > +
> > > +#define sync_set_bit(nr, p)		_set_bit(nr, p)
> > > +#define sync_clear_bit(nr, p)		_clear_bit(nr, p)
> > > +#define sync_change_bit(nr, p)		_change_bit(nr, p)
> > > +#define sync_test_and_set_bit(nr, p)	_test_and_set_bit(nr, p)
> > > +#define sync_test_and_clear_bit(nr, p)	_test_and_clear_bit(nr, p)
> > > +#define sync_test_and_change_bit(nr, p)	_test_and_change_bit(nr, p)
> > > +#define sync_test_bit(nr, addr)		test_bit(nr, addr)
> > > +#define sync_cmpxchg			cmpxchg
> > > +
> > > +
> > > +#endif
> > > -- 
> > > 1.7.2.5
> > > 
> > > 
> > > _______________________________________________
> > > Xen-devel mailing list
> > > Xen-devel@lists.xen.org
> > > http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 14:48:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 14:48:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwaD6-0002If-5k; Wed, 01 Aug 2012 14:47:40 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1SwaD4-0002IW-CE
	for xen-devel@lists.xensource.com; Wed, 01 Aug 2012 14:47:38 +0000
Received: from [85.158.139.83:26320] by server-3.bemta-5.messagelabs.com id
	12/AE-03367-98149105; Wed, 01 Aug 2012 14:47:37 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-16.tower-182.messagelabs.com!1343832455!22423431!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDY3MjQwNA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23839 invoked from network); 1 Aug 2012 14:47:37 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-16.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 1 Aug 2012 14:47:37 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q71ElOOL002536
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 1 Aug 2012 14:47:24 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q71ElNAc027823
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 1 Aug 2012 14:47:23 GMT
Received: from abhmt103.oracle.com (abhmt103.oracle.com [141.146.116.55])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q71ElNaf010419; Wed, 1 Aug 2012 09:47:23 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 01 Aug 2012 07:47:23 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 78C1C402B2; Wed,  1 Aug 2012 10:38:22 -0400 (EDT)
Date: Wed, 1 Aug 2012 10:38:22 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20120801143822.GJ7227@phenom.dumpdata.com>
References: <alpine.DEB.2.02.1207251741470.26163@kaball.uk.xensource.com>
	<1343316846-25860-12-git-send-email-stefano.stabellini@eu.citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1343316846-25860-12-git-send-email-stefano.stabellini@eu.citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, catalin.marinas@arm.com,
	tim@xen.org, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH 12/24] xen/arm: Introduce xen_guest_init
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jul 26, 2012 at 04:33:54PM +0100, Stefano Stabellini wrote:
> We used to rely on a core_initcall to initialize Xen on ARM, however
> core_initcalls are actually called after early consoles are initialized.
> That means that hvc_xen.c is going to be initialized before Xen.
> 
> Given the lack of a better alternative, just call a new Xen
> initialization function (xen_guest_init) from xen_cons_init.
> 
> xen_guest_init has to be arch independent, so write both an ARM and an
> x86 implementation. The x86 implementation is currently empty because we
> can be sure that xen_hvm_guest_init is called early enough.

Should the ARM version then no longer be registered as a core_initcall?

> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> ---
>  arch/arm/xen/enlighten.c  |    7 ++++++-
>  arch/x86/xen/enlighten.c  |    8 ++++++++
>  drivers/tty/hvc/hvc_xen.c |    7 ++++++-
>  include/xen/xen.h         |    2 ++
>  4 files changed, 22 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
> index 8c923af..dc68074 100644
> --- a/arch/arm/xen/enlighten.c
> +++ b/arch/arm/xen/enlighten.c
> @@ -44,7 +44,7 @@ EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_range);
>   * - one interrupt for Xen event notifications;
>   * - one memory region to map the grant_table.
>   */
> -static int __init xen_guest_init(void)
> +int __init xen_guest_init(void)
>  {
>  	int cpu;
>  	struct xen_add_to_physmap xatp;
> @@ -58,6 +58,10 @@ static int __init xen_guest_init(void)
>  	}
>  	xen_domain_type = XEN_HVM_DOMAIN;
>  
> +	/* already setup */
> +	if (shared_info_page != 0 && HYPERVISOR_shared_info == shared_info_page)
> +		return 0;
> +
>  	if (!shared_info_page)
>  		shared_info_page = (struct shared_info *)
>  			get_zeroed_page(GFP_KERNEL);
> @@ -88,4 +92,5 @@ static int __init xen_guest_init(void)
>  	}
>  	return 0;
>  }
> +EXPORT_SYMBOL_GPL(xen_guest_init);

Why the export symbols?

>  core_initcall(xen_guest_init);
> diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
> index ff962d4..6131d43 100644
> --- a/arch/x86/xen/enlighten.c
> +++ b/arch/x86/xen/enlighten.c
> @@ -1567,4 +1567,12 @@ const struct hypervisor_x86 x86_hyper_xen_hvm __refconst = {
>  	.init_platform		= xen_hvm_guest_init,
>  };
>  EXPORT_SYMBOL(x86_hyper_xen_hvm);
> +
> +int __init xen_guest_init(void)
> +{
> +	/* do nothing: rely on x86_hyper_xen_hvm for the initialization */
> +	return 0;
> +	
> +}
> +EXPORT_SYMBOL_GPL(xen_guest_init);
>  #endif
> diff --git a/drivers/tty/hvc/hvc_xen.c b/drivers/tty/hvc/hvc_xen.c
> index dc07f56..3c04fb8 100644
> --- a/drivers/tty/hvc/hvc_xen.c
> +++ b/drivers/tty/hvc/hvc_xen.c
> @@ -577,6 +577,12 @@ static void __exit xen_hvc_fini(void)
>  static int xen_cons_init(void)
>  {
>  	const struct hv_ops *ops;
> +	int r;
> +
> +	/* retrieve xen infos  */
> +	r = xen_guest_init();
> +	if (r < 0)
> +		return r;
>  
>  	if (!xen_domain())
>  		return 0;
> @@ -584,7 +590,6 @@ static int xen_cons_init(void)
>  	if (xen_initial_domain())
>  		ops = &dom0_hvc_ops;
>  	else {
> -		int r;
>  		ops = &domU_hvc_ops;
>  
>  		if (xen_hvm_domain())
> diff --git a/include/xen/xen.h b/include/xen/xen.h
> index 2c0d3a5..792a4d2 100644
> --- a/include/xen/xen.h
> +++ b/include/xen/xen.h
> @@ -9,8 +9,10 @@ enum xen_domain_type {
>  
>  #ifdef CONFIG_XEN
>  extern enum xen_domain_type xen_domain_type;
> +int xen_guest_init(void);
>  #else
>  #define xen_domain_type		XEN_NATIVE
> +static inline int xen_guest_init(void) { return 0; }
>  #endif
>  
>  #define xen_domain()		(xen_domain_type != XEN_NATIVE)
> -- 
> 1.7.2.5

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 14:49:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 14:49:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwaEW-0002QT-LC; Wed, 01 Aug 2012 14:49:08 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1SwaEV-0002QH-F3
	for xen-devel@lists.xensource.com; Wed, 01 Aug 2012 14:49:07 +0000
Received: from [85.158.139.83:64765] by server-12.bemta-5.messagelabs.com id
	5C/5E-25233-2E149105; Wed, 01 Aug 2012 14:49:06 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-3.tower-182.messagelabs.com!1343832544!27833077!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDY3MjQwNA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31883 invoked from network); 1 Aug 2012 14:49:05 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-3.tower-182.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 1 Aug 2012 14:49:05 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q71Emop5004154
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 1 Aug 2012 14:48:51 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q71EmlPE003027
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 1 Aug 2012 14:48:48 GMT
Received: from abhmt104.oracle.com (abhmt104.oracle.com [141.146.116.56])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q71EmltO021922; Wed, 1 Aug 2012 09:48:47 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 01 Aug 2012 07:48:47 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 8E3A8402B2; Wed,  1 Aug 2012 10:39:46 -0400 (EDT)
Date: Wed, 1 Aug 2012 10:39:46 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20120801143946.GK7227@phenom.dumpdata.com>
References: <alpine.DEB.2.02.1207251741470.26163@kaball.uk.xensource.com>
	<1343316846-25860-13-git-send-email-stefano.stabellini@eu.citrix.com>
	<1343382276.6812.126.camel@zakaz.uk.xensource.com>
	<alpine.DEB.2.02.1207271514140.26163@kaball.uk.xensource.com>
	<1343399630.25096.4.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1343399630.25096.4.camel@zakaz.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linaro-dev@lists.linaro.org" <linaro-dev@lists.linaro.org>,
	"arnd@arndb.de" <arnd@arndb.de>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"catalin.marinas@arm.com" <catalin.marinas@arm.com>,
	"Tim \(Xen.org\)" <tim@xen.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [PATCH 13/24] xen/arm: get privilege status
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jul 27, 2012 at 03:33:50PM +0100, Ian Campbell wrote:
> On Fri, 2012-07-27 at 15:25 +0100, Stefano Stabellini wrote:
> > On Fri, 27 Jul 2012, Ian Campbell wrote:
> > > On Thu, 2012-07-26 at 16:33 +0100, Stefano Stabellini wrote:
> > > > Use Xen features to figure out if we are privileged.
> > > > 
> > > > XENFEAT_dom0 was introduced by 23735 in xen-unstable.hg.
> > > > 
> > > > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > > > ---
> > > >  arch/arm/xen/enlighten.c         |    7 +++++++
> > > >  include/xen/interface/features.h |    3 +++
> > > >  2 files changed, 10 insertions(+), 0 deletions(-)
> > > > 
> > > > diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
> > > > index dc68074..2e013cf 100644
> > > > --- a/arch/arm/xen/enlighten.c
> > > > +++ b/arch/arm/xen/enlighten.c
> > > > @@ -2,6 +2,7 @@
> > > >  #include <xen/interface/xen.h>
> > > >  #include <xen/interface/memory.h>
> > > >  #include <xen/platform_pci.h>
> > > > +#include <xen/features.h>
> > > >  #include <asm/xen/hypervisor.h>
> > > >  #include <asm/xen/hypercall.h>
> > > >  #include <linux/module.h>
> > > > @@ -58,6 +59,12 @@ int __init xen_guest_init(void)
> > > >  	}
> > > >  	xen_domain_type = XEN_HVM_DOMAIN;
> > > >  
> > > > +	xen_setup_features();
> > > > +	if (xen_feature(XENFEAT_dom0))
> > > > +		xen_start_info->flags |= SIF_INITDOMAIN|SIF_PRIVILEGED;
> > > > +	else
> > > > +		xen_start_info->flags &= ~(SIF_INITDOMAIN|SIF_PRIVILEGED);
> > > 
> > > What happens here on platforms prior to hypervisor changeset 23735?
> > 
> > It wouldn't work.
> > Considering that we are certainly not going to backport ARM support to
> > Xen 4.1, and that both ARM and XENFEAT_dom0 will be present in Xen 4.2,
> > do we really need to support the Xen unstable changesets between when
> > ARM support was introduced and when XENFEAT_dom0 appeared?

So should it just panic and say "AAAAAAH"?

> 
> Sorry, I missed the "arm" in the path.
> 
> Ian.
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 14:49:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 14:49:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwaEW-0002QT-LC; Wed, 01 Aug 2012 14:49:08 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1SwaEV-0002QH-F3
	for xen-devel@lists.xensource.com; Wed, 01 Aug 2012 14:49:07 +0000
Received: from [85.158.139.83:64765] by server-12.bemta-5.messagelabs.com id
	5C/5E-25233-2E149105; Wed, 01 Aug 2012 14:49:06 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-3.tower-182.messagelabs.com!1343832544!27833077!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDY3MjQwNA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31883 invoked from network); 1 Aug 2012 14:49:05 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-3.tower-182.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 1 Aug 2012 14:49:05 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q71Emop5004154
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 1 Aug 2012 14:48:51 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q71EmlPE003027
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 1 Aug 2012 14:48:48 GMT
Received: from abhmt104.oracle.com (abhmt104.oracle.com [141.146.116.56])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q71EmltO021922; Wed, 1 Aug 2012 09:48:47 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 01 Aug 2012 07:48:47 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 8E3A8402B2; Wed,  1 Aug 2012 10:39:46 -0400 (EDT)
Date: Wed, 1 Aug 2012 10:39:46 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20120801143946.GK7227@phenom.dumpdata.com>
References: <alpine.DEB.2.02.1207251741470.26163@kaball.uk.xensource.com>
	<1343316846-25860-13-git-send-email-stefano.stabellini@eu.citrix.com>
	<1343382276.6812.126.camel@zakaz.uk.xensource.com>
	<alpine.DEB.2.02.1207271514140.26163@kaball.uk.xensource.com>
	<1343399630.25096.4.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1343399630.25096.4.camel@zakaz.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linaro-dev@lists.linaro.org" <linaro-dev@lists.linaro.org>,
	"arnd@arndb.de" <arnd@arndb.de>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"catalin.marinas@arm.com" <catalin.marinas@arm.com>,
	"Tim \(Xen.org\)" <tim@xen.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [PATCH 13/24] xen/arm: get privilege status
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jul 27, 2012 at 03:33:50PM +0100, Ian Campbell wrote:
> On Fri, 2012-07-27 at 15:25 +0100, Stefano Stabellini wrote:
> > On Fri, 27 Jul 2012, Ian Campbell wrote:
> > > On Thu, 2012-07-26 at 16:33 +0100, Stefano Stabellini wrote:
> > > > Use Xen features to figure out if we are privileged.
> > > > 
> > > > XENFEAT_dom0 was introduced by 23735 in xen-unstable.hg.
> > > > 
> > > > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > > > ---
> > > >  arch/arm/xen/enlighten.c         |    7 +++++++
> > > >  include/xen/interface/features.h |    3 +++
> > > >  2 files changed, 10 insertions(+), 0 deletions(-)
> > > > 
> > > > diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
> > > > index dc68074..2e013cf 100644
> > > > --- a/arch/arm/xen/enlighten.c
> > > > +++ b/arch/arm/xen/enlighten.c
> > > > @@ -2,6 +2,7 @@
> > > >  #include <xen/interface/xen.h>
> > > >  #include <xen/interface/memory.h>
> > > >  #include <xen/platform_pci.h>
> > > > +#include <xen/features.h>
> > > >  #include <asm/xen/hypervisor.h>
> > > >  #include <asm/xen/hypercall.h>
> > > >  #include <linux/module.h>
> > > > @@ -58,6 +59,12 @@ int __init xen_guest_init(void)
> > > >  	}
> > > >  	xen_domain_type = XEN_HVM_DOMAIN;
> > > >  
> > > > +	xen_setup_features();
> > > > +	if (xen_feature(XENFEAT_dom0))
> > > > +		xen_start_info->flags |= SIF_INITDOMAIN|SIF_PRIVILEGED;
> > > > +	else
> > > > +		xen_start_info->flags &= ~(SIF_INITDOMAIN|SIF_PRIVILEGED);
> > > 
> > > What happens here on platforms prior to hypervisor changeset 23735?
> > 
> > It wouldn't work.
> > Considering that we are certainly not going to backport ARM support to
> > Xen 4.1, and that both ARM support and XENFEAT_dom0 will be present in
> > Xen 4.2, do we really need to support the xen-unstable changesets between
> > when ARM support was introduced and when XENFEAT_dom0 appeared?

So should it just panic and say "AAAAAAH"?

> 
> Sorry, I missed the "arm" in the path.
> 
> Ian.
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 14:50:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 14:50:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwaFg-0002X7-8D; Wed, 01 Aug 2012 14:50:20 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1SwaFe-0002Wp-97
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 14:50:18 +0000
Received: from [85.158.139.83:62530] by server-1.bemta-5.messagelabs.com id
	F0/E8-29759-92249105; Wed, 01 Aug 2012 14:50:17 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-10.tower-182.messagelabs.com!1343832616!30036836!1
X-Originating-IP: [74.125.83.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15756 invoked from network); 1 Aug 2012 14:50:16 -0000
Received: from mail-ee0-f45.google.com (HELO mail-ee0-f45.google.com)
	(74.125.83.45)
	by server-10.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 14:50:16 -0000
Received: by eeke53 with SMTP id e53so2020359eek.32
	for <xen-devel@lists.xen.org>; Wed, 01 Aug 2012 07:50:16 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=N95NmBSLxMWxUmZyGLC0FtzLeAr1QlCtH04Jb+HoDtY=;
	b=BUTLT4DRTwMUTZwUcACUaiIosO3eFkkQcWY1TjMFcH7TeU5qb/gDy02mUnD+jH0y4a
	qGzq7E350pna/I5XjoDzOU/2MWKm1EYK6ToUJGoDytyfdllTIoPZr+0lvE8X75z0PLyN
	Kra7AYJ4/GQOGUupg4CpY/WTryKaXDPaljiAQpe3r+0fiZJTAHQr7z9UOrVmgXcTMXTM
	tAHJq7BaVOdcUPZrkNuStdXhm9Hi3GGxP18/2pl1eiy7YZgLBROzcaxBX5GEgO4pHdtZ
	VqYw/SwFxm1J9VY4k4P9P7JK1cbu99ItnQVKtyaXWpC9y7/JwDiR+5L27xd7ayw7HBxX
	I8gA==
MIME-Version: 1.0
Received: by 10.14.214.197 with SMTP id c45mr22709578eep.37.1343832616243;
	Wed, 01 Aug 2012 07:50:16 -0700 (PDT)
Received: by 10.14.206.135 with HTTP; Wed, 1 Aug 2012 07:50:16 -0700 (PDT)
In-Reply-To: <50191C3E.4050003@xen.org>
References: <50191C3E.4050003@xen.org>
Date: Wed, 1 Aug 2012 15:50:16 +0100
X-Google-Sender-Auth: 30_idxGyLqGjOf1gm4YmYYBHBPI
Message-ID: <CAFLBxZaWGBv5dxC9rYKJqrKV7faewhmEuGxM2XsTZ633QuMong@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: lars.kurth@xen.org
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Proposal: Xen Test Days
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Aug 1, 2012 at 1:08 PM, Lars Kurth <lars.kurth@xen.org> wrote:
> Hi everybody,
>
> at OSCON I had a couple of discussions regarding Fedora-like Xen Test Days.
> It may be a little bit late in this release cycle to put all the
> documentation together (i.e. a TODO list of what we want the community,
> and the distros which consume Xen, to test) to pull this off for this release.
>
> But I wanted to raise this as a possibility and maybe something to build into
> future release cycles.  If I look at http://wiki.xen.org/wiki/Xen_4.2 there
> is fairly little (in fact almost nothing) on how new functionality would be
> tested. My gut feeling is that the biggest benefit of a Xen Test Day for 4.2
> may be in testing XL. There were some improvements last
> Monday, but I am not sure this is enough.
>
> If I look at https://fedoraproject.org/wiki/QA/Test_Days they have spent
> quite a bit of effort on this, and it would probably take one person a week
> or two full-time to pull this together. Also see,
> https://fedoraproject.org/wiki/Test_Day:Current
>
> I just wanted to put this out there to see whether we should try this for
> this release cycle, and to gather views. It would require a volunteer to step
> up. If the view is that this is not doable for 4.2, it may be a good thing to
> try for patch releases, as well as maybe for Xen 4.3.

I think for 4.2, the key thing we want to test is the xm -> xl
transition; and the instructions for that are really simple --
basically, "Do what you normally do using xl instead of xm". :-)
Secondary things we want tested involve just installing it on
different software setups (e.g., distros), and hardware testing.  But
I think those will come as a matter of course with the first one.

 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 14:50:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 14:50:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwaFj-0002XY-LC; Wed, 01 Aug 2012 14:50:23 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1SwaFi-0002Wl-2P
	for xen-devel@lists.xensource.com; Wed, 01 Aug 2012 14:50:22 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1343832614!9328866!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjY5Nzc4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28274 invoked from network); 1 Aug 2012 14:50:15 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-5.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 1 Aug 2012 14:50:15 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q71Eo1oN004204
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 1 Aug 2012 14:50:02 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q71Eo07V029399
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 1 Aug 2012 14:50:00 GMT
Received: from abhmt116.oracle.com (abhmt116.oracle.com [141.146.116.68])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q71Eo0be012745; Wed, 1 Aug 2012 09:50:00 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 01 Aug 2012 07:50:00 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 9A9F5402B2; Wed,  1 Aug 2012 10:40:59 -0400 (EDT)
Date: Wed, 1 Aug 2012 10:40:59 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20120801144059.GL7227@phenom.dumpdata.com>
References: <alpine.DEB.2.02.1207251741470.26163@kaball.uk.xensource.com>
	<1343316846-25860-14-git-send-email-stefano.stabellini@eu.citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1343316846-25860-14-git-send-email-stefano.stabellini@eu.citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, catalin.marinas@arm.com,
	tim@xen.org, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH 14/24] xen/arm: initialize grant_table on ARM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jul 26, 2012 at 04:33:56PM +0100, Stefano Stabellini wrote:
> Initialize the grant table mapping at the address specified at index 0
> in the DT under the /xen node.

Is it always index 0? If so, should there be a #define for it (and for the
other index values)?

> After the grant table is initialized, call xenbus_probe (if not dom0).
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> ---
>  arch/arm/xen/enlighten.c  |   13 +++++++++++++
>  drivers/xen/grant-table.c |    2 +-
>  2 files changed, 14 insertions(+), 1 deletions(-)
> 
> diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
> index 2e013cf..854af1e 100644
> --- a/arch/arm/xen/enlighten.c
> +++ b/arch/arm/xen/enlighten.c
> @@ -1,8 +1,12 @@
>  #include <xen/xen.h>
>  #include <xen/interface/xen.h>
>  #include <xen/interface/memory.h>
> +#include <xen/interface/hvm/params.h>
>  #include <xen/platform_pci.h>
>  #include <xen/features.h>
> +#include <xen/grant_table.h>
> +#include <xen/hvm.h>
> +#include <xen/xenbus.h>
>  #include <asm/xen/hypervisor.h>
>  #include <asm/xen/hypercall.h>
>  #include <linux/module.h>
> @@ -51,12 +55,16 @@ int __init xen_guest_init(void)
>  	struct xen_add_to_physmap xatp;
>  	static struct shared_info *shared_info_page = 0;
>  	struct device_node *node;
> +	struct resource res;
>  
>  	node = of_find_compatible_node(NULL, NULL, "arm,xen");
>  	if (!node) {
>  		pr_info("No Xen support\n");
>  		return 0;
>  	}
> +	if (of_address_to_resource(node, 0, &res))
> +		return -EINVAL;
> +	xen_hvm_resume_frames = res.start >> PAGE_SHIFT;
>  	xen_domain_type = XEN_HVM_DOMAIN;
>  
>  	xen_setup_features();
> @@ -97,6 +105,11 @@ int __init xen_guest_init(void)
>  		per_cpu(xen_vcpu, cpu) =
>  			&HYPERVISOR_shared_info->vcpu_info[cpu];
>  	}
> +
> +	gnttab_init();
> +	if (!xen_initial_domain())
> +		xenbus_probe(NULL);
> +
>  	return 0;
>  }
>  EXPORT_SYMBOL_GPL(xen_guest_init);
> diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
> index 1d0d95e..fd2137a 100644
> --- a/drivers/xen/grant-table.c
> +++ b/drivers/xen/grant-table.c
> @@ -62,7 +62,7 @@
>  
>  static grant_ref_t **gnttab_list;
>  static unsigned int nr_grant_frames;
> -static unsigned int boot_max_nr_grant_frames;
> +static unsigned int boot_max_nr_grant_frames = 1;

Is this going to impact the x86 version?

>  static int gnttab_free_count;
>  static grant_ref_t gnttab_free_head;
>  static DEFINE_SPINLOCK(gnttab_list_lock);
> -- 
> 1.7.2.5

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 14:54:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 14:54:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwaIz-0002u2-DT; Wed, 01 Aug 2012 14:53:45 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1SwaIx-0002tJ-KP
	for xen-devel@lists.xensource.com; Wed, 01 Aug 2012 14:53:43 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1343832816!10283087!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjY5Nzc4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27456 invoked from network); 1 Aug 2012 14:53:37 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-8.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 1 Aug 2012 14:53:37 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q71ErKws008050
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 1 Aug 2012 14:53:20 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q71ErJAu010189
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 1 Aug 2012 14:53:19 GMT
Received: from abhmt111.oracle.com (abhmt111.oracle.com [141.146.116.63])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q71ErIeZ025850; Wed, 1 Aug 2012 09:53:18 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 01 Aug 2012 07:53:18 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 20DE7402B2; Wed,  1 Aug 2012 10:44:18 -0400 (EDT)
Date: Wed, 1 Aug 2012 10:44:18 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20120801144418.GM7227@phenom.dumpdata.com>
References: <alpine.DEB.2.02.1207251741470.26163@kaball.uk.xensource.com>
	<1343316846-25860-15-git-send-email-stefano.stabellini@eu.citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1343316846-25860-15-git-send-email-stefano.stabellini@eu.citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, catalin.marinas@arm.com,
	tim@xen.org, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH 15/24] xen/arm: receive Xen events on ARM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jul 26, 2012 at 04:33:57PM +0100, Stefano Stabellini wrote:
> Compile events.c on ARM.
> Parse, map and enable the IRQ to get event notifications from the device
> tree (node "/xen").
> 
> On ARM Linux irqs are not enabled by default:
> 
> - call enable_percpu_irq for xen_events_irq (drivers are supposed
> to call enable_irq after request_irq);
> 
> - reset the IRQ_NOAUTOEN and IRQ_NOREQUEST flags that are set by
> default on ARM. If IRQ_NOAUTOEN is set, __setup_irq doesn't call
> irq_startup, which is responsible for calling irq_unmask at startup time.
> As a result event channels remain masked.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> ---
>  arch/arm/xen/enlighten.c |   33 +++++++++++++++++++++++++++++++++
>  arch/x86/xen/enlighten.c |    1 +
>  arch/x86/xen/irq.c       |    1 +
>  arch/x86/xen/xen-ops.h   |    1 -
>  drivers/xen/events.c     |   18 +++++++++++++++---
>  include/xen/events.h     |    2 ++
>  6 files changed, 52 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
> index 854af1e..60d6d36 100644
> --- a/arch/arm/xen/enlighten.c
> +++ b/arch/arm/xen/enlighten.c
> @@ -7,8 +7,11 @@
>  #include <xen/grant_table.h>
>  #include <xen/hvm.h>
>  #include <xen/xenbus.h>
> +#include <xen/events.h>
>  #include <asm/xen/hypervisor.h>
>  #include <asm/xen/hypercall.h>
> +#include <linux/interrupt.h>
> +#include <linux/irqreturn.h>
>  #include <linux/module.h>
>  #include <linux/of.h>
>  #include <linux/of_irq.h>
> @@ -33,6 +36,8 @@ EXPORT_SYMBOL_GPL(xen_have_vector_callback);
>  int xen_platform_pci_unplug = XEN_UNPLUG_ALL;
>  EXPORT_SYMBOL_GPL(xen_platform_pci_unplug);
>  
> +static __read_mostly int xen_events_irq = -1;
> +
>  int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
>  			       unsigned long addr,
>  			       unsigned long mfn, int nr,
> @@ -65,6 +70,9 @@ int __init xen_guest_init(void)
>  	if (of_address_to_resource(node, 0, &res))
>  		return -EINVAL;
>  	xen_hvm_resume_frames = res.start >> PAGE_SHIFT;
> +	xen_events_irq = irq_of_parse_and_map(node, 0);
> +	pr_info("Xen support found, events_irq=%d gnttab_frame_pfn=%lx\n",
> +			xen_events_irq, xen_hvm_resume_frames);
>  	xen_domain_type = XEN_HVM_DOMAIN;
>  
>  	xen_setup_features();
> @@ -114,3 +122,28 @@ int __init xen_guest_init(void)
>  }
>  EXPORT_SYMBOL_GPL(xen_guest_init);
>  core_initcall(xen_guest_init);
> +
> +static irqreturn_t xen_arm_callback(int irq, void *arg)
> +{
> +	xen_hvm_evtchn_do_upcall();
> +	return 0;

Um, IRQ_HANDLED?

> +}
> +
> +static int __init xen_init_events(void)
> +{
> +	if (!xen_domain() || xen_events_irq < 0)
> +		return -ENODEV;
> +
> +	xen_init_IRQ();
> +
> +	if (request_percpu_irq(xen_events_irq, xen_arm_callback,
> +			"events", xen_vcpu)) {
> +		pr_err("Error requesting IRQ %d\n", xen_events_irq);
> +		return -EINVAL;
> +	}
> +
> +	enable_percpu_irq(xen_events_irq, 0);
> +
> +	return 0;
> +}
> +postcore_initcall(xen_init_events);
> diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
> index 6131d43..5a30502 100644
> --- a/arch/x86/xen/enlighten.c
> +++ b/arch/x86/xen/enlighten.c
> @@ -33,6 +33,7 @@
>  #include <linux/memblock.h>
>  
>  #include <xen/xen.h>
> +#include <xen/events.h>
>  #include <xen/interface/xen.h>
>  #include <xen/interface/version.h>
>  #include <xen/interface/physdev.h>
> diff --git a/arch/x86/xen/irq.c b/arch/x86/xen/irq.c
> index 1573376..01a4dc0 100644
> --- a/arch/x86/xen/irq.c
> +++ b/arch/x86/xen/irq.c
> @@ -5,6 +5,7 @@
>  #include <xen/interface/xen.h>
>  #include <xen/interface/sched.h>
>  #include <xen/interface/vcpu.h>
> +#include <xen/events.h>
>  
>  #include <asm/xen/hypercall.h>
>  #include <asm/xen/hypervisor.h>
> diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h
> index 202d4c1..2368295 100644
> --- a/arch/x86/xen/xen-ops.h
> +++ b/arch/x86/xen/xen-ops.h
> @@ -35,7 +35,6 @@ void xen_set_pat(u64);
>  
>  char * __init xen_memory_setup(void);
>  void __init xen_arch_setup(void);
> -void __init xen_init_IRQ(void);
>  void xen_enable_sysenter(void);
>  void xen_enable_syscall(void);
>  void xen_vcpu_restore(void);
> diff --git a/drivers/xen/events.c b/drivers/xen/events.c
> index 7da65d3..9b506b2 100644
> --- a/drivers/xen/events.c
> +++ b/drivers/xen/events.c
> @@ -31,14 +31,16 @@
>  #include <linux/irqnr.h>
>  #include <linux/pci.h>
>  
> +#ifdef CONFIG_X86
>  #include <asm/desc.h>
>  #include <asm/ptrace.h>
>  #include <asm/irq.h>
>  #include <asm/idle.h>
>  #include <asm/io_apic.h>
> -#include <asm/sync_bitops.h>
>  #include <asm/xen/page.h>
>  #include <asm/xen/pci.h>
> +#endif
> +#include <asm/sync_bitops.h>
>  #include <asm/xen/hypercall.h>
>  #include <asm/xen/hypervisor.h>
>  
> @@ -50,6 +52,9 @@
>  #include <xen/interface/event_channel.h>
>  #include <xen/interface/hvm/hvm_op.h>
>  #include <xen/interface/hvm/params.h>
> +#include <xen/interface/physdev.h>
> +#include <xen/interface/sched.h>
> +#include <asm/hw_irq.h>
>  
>  /*
>   * This lock protects updates to the following mapping and reference-count
> @@ -834,6 +839,7 @@ int bind_evtchn_to_irq(unsigned int evtchn)
>  		struct irq_info *info = info_for_irq(irq);
>  		WARN_ON(info == NULL || info->type != IRQT_EVTCHN);
>  	}
> +	irq_clear_status_flags(irq, IRQ_NOREQUEST|IRQ_NOAUTOEN);

I feel that this should be its own commit. I am not certain of the
implications of this on x86, and I think it deserves some explanation.


>  
>  out:
>  	mutex_unlock(&irq_mapping_update_lock);
> @@ -1377,7 +1383,9 @@ void xen_evtchn_do_upcall(struct pt_regs *regs)
>  {
>  	struct pt_regs *old_regs = set_irq_regs(regs);
>  
> +#ifdef CONFIG_X86
>  	exit_idle();
> +#endif

Does it not exist on ARM? Or is it that ARM does not need it?

>  	irq_enter();
>  
>  	__xen_evtchn_do_upcall();
> @@ -1786,9 +1794,9 @@ void xen_callback_vector(void)
>  void xen_callback_vector(void) {}
>  #endif
>  
> -void __init xen_init_IRQ(void)
> +void xen_init_IRQ(void)
>  {
> -	int i, rc;
> +	int i;
>  
>  	evtchn_to_irq = kcalloc(NR_EVENT_CHANNELS, sizeof(*evtchn_to_irq),
>  				    GFP_KERNEL);
> @@ -1804,6 +1812,7 @@ void __init xen_init_IRQ(void)
>  
>  	pirq_needs_eoi = pirq_needs_eoi_flag;
>  
> +#ifdef CONFIG_X86
>  	if (xen_hvm_domain()) {
>  		xen_callback_vector();
>  		native_init_IRQ();
> @@ -1811,6 +1820,7 @@ void __init xen_init_IRQ(void)
>  		 * __acpi_register_gsi can point at the right function */
>  		pci_xen_hvm_init();
>  	} else {
> +		int rc;
>  		struct physdev_pirq_eoi_gmfn eoi_gmfn;
>  
>  		irq_ctx_init(smp_processor_id());
> @@ -1826,4 +1836,6 @@ void __init xen_init_IRQ(void)
>  		} else
>  			pirq_needs_eoi = pirq_check_eoi_map;
>  	}
> +#endif
>  }
> +EXPORT_SYMBOL_GPL(xen_init_IRQ);
> diff --git a/include/xen/events.h b/include/xen/events.h
> index 04399b2..c6bfe01 100644
> --- a/include/xen/events.h
> +++ b/include/xen/events.h
> @@ -109,4 +109,6 @@ int xen_irq_from_gsi(unsigned gsi);
>  /* Determine whether to ignore this IRQ if it is passed to a guest. */
>  int xen_test_irq_shared(int irq);
>  
> +/* initialize Xen IRQ subsystem */
> +void xen_init_IRQ(void);
>  #endif	/* _XEN_EVENTS_H */
> -- 
> 1.7.2.5

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 14:54:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 14:54:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwaIz-0002u2-DT; Wed, 01 Aug 2012 14:53:45 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1SwaIx-0002tJ-KP
	for xen-devel@lists.xensource.com; Wed, 01 Aug 2012 14:53:43 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1343832816!10283087!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjY5Nzc4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27456 invoked from network); 1 Aug 2012 14:53:37 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-8.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 1 Aug 2012 14:53:37 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q71ErKws008050
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 1 Aug 2012 14:53:20 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q71ErJAu010189
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 1 Aug 2012 14:53:19 GMT
Received: from abhmt111.oracle.com (abhmt111.oracle.com [141.146.116.63])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q71ErIeZ025850; Wed, 1 Aug 2012 09:53:18 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 01 Aug 2012 07:53:18 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 20DE7402B2; Wed,  1 Aug 2012 10:44:18 -0400 (EDT)
Date: Wed, 1 Aug 2012 10:44:18 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20120801144418.GM7227@phenom.dumpdata.com>
References: <alpine.DEB.2.02.1207251741470.26163@kaball.uk.xensource.com>
	<1343316846-25860-15-git-send-email-stefano.stabellini@eu.citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1343316846-25860-15-git-send-email-stefano.stabellini@eu.citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, catalin.marinas@arm.com,
	tim@xen.org, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH 15/24] xen/arm: receive Xen events on ARM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jul 26, 2012 at 04:33:57PM +0100, Stefano Stabellini wrote:
> Compile events.c on ARM.
> Parse, map and enable the IRQ to get event notifications from the device
> tree (node "/xen").
> 
> On ARM Linux irqs are not enabled by default:
> 
> - call enable_percpu_irq for xen_events_irq (drivers are supposed
> to call enable_irq after request_irq);
> 
> - reset the IRQ_NOAUTOEN and IRQ_NOREQUEST flags, which are set by
> default on ARM. If IRQ_NOAUTOEN is set, __setup_irq doesn't call
> irq_startup, which is responsible for calling irq_unmask at startup time.
> As a result event channels remain masked.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> ---
>  arch/arm/xen/enlighten.c |   33 +++++++++++++++++++++++++++++++++
>  arch/x86/xen/enlighten.c |    1 +
>  arch/x86/xen/irq.c       |    1 +
>  arch/x86/xen/xen-ops.h   |    1 -
>  drivers/xen/events.c     |   18 +++++++++++++++---
>  include/xen/events.h     |    2 ++
>  6 files changed, 52 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
> index 854af1e..60d6d36 100644
> --- a/arch/arm/xen/enlighten.c
> +++ b/arch/arm/xen/enlighten.c
> @@ -7,8 +7,11 @@
>  #include <xen/grant_table.h>
>  #include <xen/hvm.h>
>  #include <xen/xenbus.h>
> +#include <xen/events.h>
>  #include <asm/xen/hypervisor.h>
>  #include <asm/xen/hypercall.h>
> +#include <linux/interrupt.h>
> +#include <linux/irqreturn.h>
>  #include <linux/module.h>
>  #include <linux/of.h>
>  #include <linux/of_irq.h>
> @@ -33,6 +36,8 @@ EXPORT_SYMBOL_GPL(xen_have_vector_callback);
>  int xen_platform_pci_unplug = XEN_UNPLUG_ALL;
>  EXPORT_SYMBOL_GPL(xen_platform_pci_unplug);
>  
> +static __read_mostly int xen_events_irq = -1;
> +
>  int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
>  			       unsigned long addr,
>  			       unsigned long mfn, int nr,
> @@ -65,6 +70,9 @@ int __init xen_guest_init(void)
>  	if (of_address_to_resource(node, 0, &res))
>  		return -EINVAL;
>  	xen_hvm_resume_frames = res.start >> PAGE_SHIFT;
> +	xen_events_irq = irq_of_parse_and_map(node, 0);
> +	pr_info("Xen support found, events_irq=%d gnttab_frame_pfn=%lx\n",
> +			xen_events_irq, xen_hvm_resume_frames);
>  	xen_domain_type = XEN_HVM_DOMAIN;
>  
>  	xen_setup_features();
> @@ -114,3 +122,28 @@ int __init xen_guest_init(void)
>  }
>  EXPORT_SYMBOL_GPL(xen_guest_init);
>  core_initcall(xen_guest_init);
> +
> +static irqreturn_t xen_arm_callback(int irq, void *arg)
> +{
> +	xen_hvm_evtchn_do_upcall();
> +	return 0;

Um, IRQ_HANDLED?

> +}
> +
> +static int __init xen_init_events(void)
> +{
> +	if (!xen_domain() || xen_events_irq < 0)
> +		return -ENODEV;
> +
> +	xen_init_IRQ();
> +
> +	if (request_percpu_irq(xen_events_irq, xen_arm_callback,
> +			"events", xen_vcpu)) {
> +		pr_err("Error requesting IRQ %d\n", xen_events_irq);
> +		return -EINVAL;
> +	}
> +
> +	enable_percpu_irq(xen_events_irq, 0);
> +
> +	return 0;
> +}
> +postcore_initcall(xen_init_events);
> diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
> index 6131d43..5a30502 100644
> --- a/arch/x86/xen/enlighten.c
> +++ b/arch/x86/xen/enlighten.c
> @@ -33,6 +33,7 @@
>  #include <linux/memblock.h>
>  
>  #include <xen/xen.h>
> +#include <xen/events.h>
>  #include <xen/interface/xen.h>
>  #include <xen/interface/version.h>
>  #include <xen/interface/physdev.h>
> diff --git a/arch/x86/xen/irq.c b/arch/x86/xen/irq.c
> index 1573376..01a4dc0 100644
> --- a/arch/x86/xen/irq.c
> +++ b/arch/x86/xen/irq.c
> @@ -5,6 +5,7 @@
>  #include <xen/interface/xen.h>
>  #include <xen/interface/sched.h>
>  #include <xen/interface/vcpu.h>
> +#include <xen/events.h>
>  
>  #include <asm/xen/hypercall.h>
>  #include <asm/xen/hypervisor.h>
> diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h
> index 202d4c1..2368295 100644
> --- a/arch/x86/xen/xen-ops.h
> +++ b/arch/x86/xen/xen-ops.h
> @@ -35,7 +35,6 @@ void xen_set_pat(u64);
>  
>  char * __init xen_memory_setup(void);
>  void __init xen_arch_setup(void);
> -void __init xen_init_IRQ(void);
>  void xen_enable_sysenter(void);
>  void xen_enable_syscall(void);
>  void xen_vcpu_restore(void);
> diff --git a/drivers/xen/events.c b/drivers/xen/events.c
> index 7da65d3..9b506b2 100644
> --- a/drivers/xen/events.c
> +++ b/drivers/xen/events.c
> @@ -31,14 +31,16 @@
>  #include <linux/irqnr.h>
>  #include <linux/pci.h>
>  
> +#ifdef CONFIG_X86
>  #include <asm/desc.h>
>  #include <asm/ptrace.h>
>  #include <asm/irq.h>
>  #include <asm/idle.h>
>  #include <asm/io_apic.h>
> -#include <asm/sync_bitops.h>
>  #include <asm/xen/page.h>
>  #include <asm/xen/pci.h>
> +#endif
> +#include <asm/sync_bitops.h>
>  #include <asm/xen/hypercall.h>
>  #include <asm/xen/hypervisor.h>
>  
> @@ -50,6 +52,9 @@
>  #include <xen/interface/event_channel.h>
>  #include <xen/interface/hvm/hvm_op.h>
>  #include <xen/interface/hvm/params.h>
> +#include <xen/interface/physdev.h>
> +#include <xen/interface/sched.h>
> +#include <asm/hw_irq.h>
>  
>  /*
>   * This lock protects updates to the following mapping and reference-count
> @@ -834,6 +839,7 @@ int bind_evtchn_to_irq(unsigned int evtchn)
>  		struct irq_info *info = info_for_irq(irq);
>  		WARN_ON(info == NULL || info->type != IRQT_EVTCHN);
>  	}
> +	irq_clear_status_flags(irq, IRQ_NOREQUEST|IRQ_NOAUTOEN);

I feel that this should be its own commit. I am not certain of the
implications of this on x86, and I think it deserves some explanation.


>  
>  out:
>  	mutex_unlock(&irq_mapping_update_lock);
> @@ -1377,7 +1383,9 @@ void xen_evtchn_do_upcall(struct pt_regs *regs)
>  {
>  	struct pt_regs *old_regs = set_irq_regs(regs);
>  
> +#ifdef CONFIG_X86
>  	exit_idle();
> +#endif

Does it not exist on ARM? Or is it that ARM does not need it?

>  	irq_enter();
>  
>  	__xen_evtchn_do_upcall();
> @@ -1786,9 +1794,9 @@ void xen_callback_vector(void)
>  void xen_callback_vector(void) {}
>  #endif
>  
> -void __init xen_init_IRQ(void)
> +void xen_init_IRQ(void)
>  {
> -	int i, rc;
> +	int i;
>  
>  	evtchn_to_irq = kcalloc(NR_EVENT_CHANNELS, sizeof(*evtchn_to_irq),
>  				    GFP_KERNEL);
> @@ -1804,6 +1812,7 @@ void __init xen_init_IRQ(void)
>  
>  	pirq_needs_eoi = pirq_needs_eoi_flag;
>  
> +#ifdef CONFIG_X86
>  	if (xen_hvm_domain()) {
>  		xen_callback_vector();
>  		native_init_IRQ();
> @@ -1811,6 +1820,7 @@ void __init xen_init_IRQ(void)
>  		 * __acpi_register_gsi can point at the right function */
>  		pci_xen_hvm_init();
>  	} else {
> +		int rc;
>  		struct physdev_pirq_eoi_gmfn eoi_gmfn;
>  
>  		irq_ctx_init(smp_processor_id());
> @@ -1826,4 +1836,6 @@ void __init xen_init_IRQ(void)
>  		} else
>  			pirq_needs_eoi = pirq_check_eoi_map;
>  	}
> +#endif
>  }
> +EXPORT_SYMBOL_GPL(xen_init_IRQ);
> diff --git a/include/xen/events.h b/include/xen/events.h
> index 04399b2..c6bfe01 100644
> --- a/include/xen/events.h
> +++ b/include/xen/events.h
> @@ -109,4 +109,6 @@ int xen_irq_from_gsi(unsigned gsi);
>  /* Determine whether to ignore this IRQ if it is passed to a guest. */
>  int xen_test_irq_shared(int irq);
>  
> +/* initialize Xen IRQ subsystem */
> +void xen_init_IRQ(void);
>  #endif	/* _XEN_EVENTS_H */
> -- 
> 1.7.2.5

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 14:57:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 14:57:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwaMN-00035u-1B; Wed, 01 Aug 2012 14:57:15 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1SwaML-00035Z-Ge
	for xen-devel@lists.xensource.com; Wed, 01 Aug 2012 14:57:13 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1343833025!3307665!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDY3MjQwNA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5268 invoked from network); 1 Aug 2012 14:57:07 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-7.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 1 Aug 2012 14:57:07 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q71EurQA013503
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 1 Aug 2012 14:56:54 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q71Eur3T011200
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 1 Aug 2012 14:56:53 GMT
Received: from abhmt110.oracle.com (abhmt110.oracle.com [141.146.116.62])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q71EuqlA018549; Wed, 1 Aug 2012 09:56:52 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 01 Aug 2012 07:56:52 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id EDFE9402B2; Wed,  1 Aug 2012 10:47:51 -0400 (EDT)
Date: Wed, 1 Aug 2012 10:47:51 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20120801144751.GN7227@phenom.dumpdata.com>
References: <alpine.DEB.2.02.1207251741470.26163@kaball.uk.xensource.com>
	<1343316846-25860-17-git-send-email-stefano.stabellini@eu.citrix.com>
	<5012598C0200007800090DB9@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1207271502480.26163@kaball.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.02.1207271502480.26163@kaball.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linaro-dev@lists.linaro.org" <linaro-dev@lists.linaro.org>,
	Ian Campbell <Ian.Campbell@citrix.com>, "arnd@arndb.de" <arnd@arndb.de>,
	"catalin.marinas@arm.com" <catalin.marinas@arm.com>,
	"Tim \(Xen.org\)" <tim@xen.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Jan Beulich <JBeulich@suse.com>, "linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [PATCH 17/24] xen: allow privcmd for HVM guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jul 27, 2012 at 03:10:13PM +0100, Stefano Stabellini wrote:
> On Fri, 27 Jul 2012, Jan Beulich wrote:
> > >>> On 26.07.12 at 17:33, Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
> > > In order for privcmd mmap to work correctly, xen_remap_domain_mfn_range
> > > needs to be implemented for HVM guests.
> > > If it is not, mmap is going to fail later on.
> > 
> > Somehow, for me at least, this description doesn't connect to the
> > actual change.
> 
> We can remove the "return -ENOSYS" from privcmd_mmap but the actual mmap
> is still not going to work unless xen_remap_domain_mfn_range is
> implemented correctly.
> The x86 implementation of xen_remap_domain_mfn_range is PV only so it is
> not going to work for HVM or auto_translated_physmap guests.
> As a result mmap_batch_fn is going to fail.

So what you are saying is that this check is redundant, because the same
check is made earlier in the call stack?

I am not seeing it, though. What I do see is:

289         if (!xen_initial_domain())
290                 return -EPERM;

But that would still work.

Perhaps adding:

	if (xen_hvm_domain())
		return -ENOSYS;

is more appropriate in privcmd_ioctl_mmap_batch?

Irrespective of HVM guests, I recall that it is possible to run PV guests
with XENFEAT_auto_translated_physmap. How will those be impacted?

> 
> 
> > > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > > ---
> > >  drivers/xen/privcmd.c |    4 ----
> > >  1 files changed, 0 insertions(+), 4 deletions(-)
> > > 
> > > diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
> > > index ccee0f1..85226cb 100644
> > > --- a/drivers/xen/privcmd.c
> > > +++ b/drivers/xen/privcmd.c
> > > @@ -380,10 +380,6 @@ static struct vm_operations_struct privcmd_vm_ops = {
> > >  
> > >  static int privcmd_mmap(struct file *file, struct vm_area_struct *vma)
> > >  {
> > > -	/* Unsupported for auto-translate guests. */
> > > -	if (xen_feature(XENFEAT_auto_translated_physmap))
> > > -		return -ENOSYS;
> > > -
> > 
> > Is this safe on x86?
> > 
> 
> It is safe in the sense that it is not going to crash dom0 or the
> hypervisor, but it is not going to work.
> 
> Actually in order for it to be safe we need this additional change:
> 
> diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
> index 3a73785..885a223 100644
> --- a/arch/x86/xen/mmu.c
> +++ b/arch/x86/xen/mmu.c
> @@ -2310,6 +2310,9 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
>  	unsigned long range;
>  	int err = 0;
>  
> +	if (xen_feature(XENFEAT_auto_translated_physmap))
> +		return -EINVAL;
> +
>  	prot = __pgprot(pgprot_val(prot) | _PAGE_IOMAP);
>  
>  	BUG_ON(!((vma->vm_flags & (VM_PFNMAP | VM_RESERVED | VM_IO)) ==
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 14:58:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 14:58:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwaMh-00037d-E9; Wed, 01 Aug 2012 14:57:35 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1SwaMg-00037J-1C
	for xen-devel@lists.xensource.com; Wed, 01 Aug 2012 14:57:34 +0000
Received: from [85.158.138.51:21575] by server-1.bemta-3.messagelabs.com id
	F5/07-31934-DD349105; Wed, 01 Aug 2012 14:57:33 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-14.tower-174.messagelabs.com!1343833051!23561129!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDY3MjQwNA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20024 invoked from network); 1 Aug 2012 14:57:32 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-14.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 1 Aug 2012 14:57:32 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q71EvK8L014108
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 1 Aug 2012 14:57:21 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q71EvJG6017143
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 1 Aug 2012 14:57:19 GMT
Received: from abhmt105.oracle.com (abhmt105.oracle.com [141.146.116.57])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q71EvJj7029271; Wed, 1 Aug 2012 09:57:19 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 01 Aug 2012 07:57:18 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 4F56A402B2; Wed,  1 Aug 2012 10:48:18 -0400 (EDT)
Date: Wed, 1 Aug 2012 10:48:18 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20120801144818.GO7227@phenom.dumpdata.com>
References: <alpine.DEB.2.02.1207251741470.26163@kaball.uk.xensource.com>
	<1343316846-25860-18-git-send-email-stefano.stabellini@eu.citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1343316846-25860-18-git-send-email-stefano.stabellini@eu.citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, catalin.marinas@arm.com,
	tim@xen.org, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH 18/24] xen/arm: compile blkfront and blkback
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jul 26, 2012 at 04:34:00PM +0100, Stefano Stabellini wrote:
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> ---
>  drivers/block/xen-blkback/blkback.c  |    1 +
>  include/xen/interface/io/protocols.h |    3 +++
>  2 files changed, 4 insertions(+), 0 deletions(-)
> 
> diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
> index 73f196c..63dd5b9 100644
> --- a/drivers/block/xen-blkback/blkback.c
> +++ b/drivers/block/xen-blkback/blkback.c
> @@ -42,6 +42,7 @@
>  
>  #include <xen/events.h>
>  #include <xen/page.h>
> +#include <xen/xen.h>
>  #include <asm/xen/hypervisor.h>
>  #include <asm/xen/hypercall.h>
>  #include "common.h"
> diff --git a/include/xen/interface/io/protocols.h b/include/xen/interface/io/protocols.h
> index 01fc8ae..0eafaf2 100644
> --- a/include/xen/interface/io/protocols.h
> +++ b/include/xen/interface/io/protocols.h
> @@ -5,6 +5,7 @@
>  #define XEN_IO_PROTO_ABI_X86_64     "x86_64-abi"
>  #define XEN_IO_PROTO_ABI_IA64       "ia64-abi"
>  #define XEN_IO_PROTO_ABI_POWERPC64  "powerpc64-abi"
> +#define XEN_IO_PROTO_ABI_ARM        "arm-abi"

So one that has all of the 32/64 issues worked out? Nice.

>  
>  #if defined(__i386__)
>  # define XEN_IO_PROTO_ABI_NATIVE XEN_IO_PROTO_ABI_X86_32
> @@ -14,6 +15,8 @@
>  # define XEN_IO_PROTO_ABI_NATIVE XEN_IO_PROTO_ABI_IA64
>  #elif defined(__powerpc64__)
>  # define XEN_IO_PROTO_ABI_NATIVE XEN_IO_PROTO_ABI_POWERPC64
> +#elif defined(__arm__)
> +# define XEN_IO_PROTO_ABI_NATIVE XEN_IO_PROTO_ABI_ARM
>  #else
>  # error arch fixup needed here
>  #endif
> -- 
> 1.7.2.5

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Wed Aug 01 15:02:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 15:02:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwaQd-0003P7-3o; Wed, 01 Aug 2012 15:01:39 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1SwaQc-0003Op-D1
	for xen-devel@lists.xensource.com; Wed, 01 Aug 2012 15:01:38 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1343833289!3383439!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDY3MjQwNA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26841 invoked from network); 1 Aug 2012 15:01:32 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-10.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 1 Aug 2012 15:01:32 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q71F1HEa018867
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 1 Aug 2012 15:01:17 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q71F1GE4022990
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 1 Aug 2012 15:01:16 GMT
Received: from abhmt107.oracle.com (abhmt107.oracle.com [141.146.116.59])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q71F1G35022660; Wed, 1 Aug 2012 10:01:16 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 01 Aug 2012 08:01:16 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 34172402B2; Wed,  1 Aug 2012 10:52:15 -0400 (EDT)
Date: Wed, 1 Aug 2012 10:52:15 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Mukesh Rathor <mukesh.rathor@oracle.com>
Message-ID: <20120801145215.GP7227@phenom.dumpdata.com>
References: <alpine.DEB.2.02.1207251741470.26163@kaball.uk.xensource.com>
	<1343316846-25860-20-git-send-email-stefano.stabellini@eu.citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1343316846-25860-20-git-send-email-stefano.stabellini@eu.citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, catalin.marinas@arm.com,
	tim@xen.org, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH 20/24] xen: update xen_add_to_physmap
	interface
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jul 26, 2012 at 04:34:02PM +0100, Stefano Stabellini wrote:
> Update struct xen_add_to_physmap to be in sync with Xen's version of the
> structure.
> The size field was introduced by:
> 
> changeset:   24164:707d27fe03e7
> user:        Jean Guyader <jean.guyader@eu.citrix.com>
> date:        Fri Nov 18 13:42:08 2011 +0000
> summary:     mm: New XENMEM space, XENMAPSPACE_gmfn_range
> 
> According to the comment:
> 
> "This new field .size is located in the 16 bits padding between .domid
> and .space in struct xen_add_to_physmap to stay compatible with older
> versions."
> 
> This is not true on ARM, where there is no padding, but it is valid on
> X86, so introducing size is safe on X86 and it is going to fix the
> interface for ARM.

Has this actually been checked for backwards compatibility? It sounds
like it should work just fine with Xen 4.0, right?

I believe this also helps Mukesh's patches, so CC-ing him here for
his Ack.

I can put this in the tree right now if we are 100% sure it's compatible with 4.0.

> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> ---
>  include/xen/interface/memory.h |    3 +++
>  1 files changed, 3 insertions(+), 0 deletions(-)
> 
> diff --git a/include/xen/interface/memory.h b/include/xen/interface/memory.h
> index abbbff0..d8e33a9 100644
> --- a/include/xen/interface/memory.h
> +++ b/include/xen/interface/memory.h
> @@ -163,6 +163,9 @@ struct xen_add_to_physmap {
>      /* Which domain to change the mapping for. */
>      domid_t domid;
>  
> +    /* Number of pages to go through for gmfn_range */
> +    uint16_t    size;
> +
>      /* Source mapping space. */
>  #define XENMAPSPACE_shared_info 0 /* shared info page */
>  #define XENMAPSPACE_grant_table 1 /* grant table page */
> -- 
> 1.7.2.5

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 15:03:26 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 15:03:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwaRZ-0003Ty-II; Wed, 01 Aug 2012 15:02:37 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1SwaRX-0003TR-Vx
	for xen-devel@lists.xensource.com; Wed, 01 Aug 2012 15:02:36 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1343833335!10351858!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDY3MjQwNA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11565 invoked from network); 1 Aug 2012 15:02:19 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-3.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 1 Aug 2012 15:02:19 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q71F20DW019707
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 1 Aug 2012 15:02:00 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q71F1xM9014144
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 1 Aug 2012 15:01:59 GMT
Received: from abhmt110.oracle.com (abhmt110.oracle.com [141.146.116.62])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q71F1wF8001225; Wed, 1 Aug 2012 10:01:58 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 01 Aug 2012 08:01:58 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 181F6402B2; Wed,  1 Aug 2012 10:52:58 -0400 (EDT)
Date: Wed, 1 Aug 2012 10:52:58 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20120801145257.GQ7227@phenom.dumpdata.com>
References: <alpine.DEB.2.02.1207251741470.26163@kaball.uk.xensource.com>
	<1343316846-25860-21-git-send-email-stefano.stabellini@eu.citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1343316846-25860-21-git-send-email-stefano.stabellini@eu.citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, catalin.marinas@arm.com,
	tim@xen.org, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH 21/24] arm/v2m: initialize arch_timers even
 if v2m_timer is not present
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jul 26, 2012 at 04:34:03PM +0100, Stefano Stabellini wrote:
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

Should the v2m maintainer be CC-ed here?
This looks like a bug-fix in its own right?

> ---
>  arch/arm/mach-vexpress/v2m.c |   11 ++++++-----
>  1 files changed, 6 insertions(+), 5 deletions(-)
> 
> diff --git a/arch/arm/mach-vexpress/v2m.c b/arch/arm/mach-vexpress/v2m.c
> index fde26ad..dee1451 100644
> --- a/arch/arm/mach-vexpress/v2m.c
> +++ b/arch/arm/mach-vexpress/v2m.c
> @@ -637,16 +637,17 @@ static void __init v2m_dt_timer_init(void)
>  	node = of_find_compatible_node(NULL, NULL, "arm,sp810");
>  	v2m_sysctl_init(of_iomap(node, 0));
>  
> -	err = of_property_read_string(of_aliases, "arm,v2m_timer", &path);
> -	if (WARN_ON(err))
> -		return;
> -	node = of_find_node_by_path(path);
> -	v2m_sp804_init(of_iomap(node, 0), irq_of_parse_and_map(node, 0));
>  	if (arch_timer_of_register() != 0)
>  		twd_local_timer_of_register();
>  
>  	if (arch_timer_sched_clock_init() != 0)
>  		versatile_sched_clock_init(v2m_sysreg_base + V2M_SYS_24MHZ, 24000000);
> +
> +	err = of_property_read_string(of_aliases, "arm,v2m_timer", &path);
> +	if (WARN_ON(err))
> +		return;
> +	node = of_find_node_by_path(path);
> +	v2m_sp804_init(of_iomap(node, 0), irq_of_parse_and_map(node, 0));
>  }
>  
>  static struct sys_timer v2m_dt_timer = {
> -- 
> 1.7.2.5

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 15:04:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 15:04:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwaSX-0003cI-5b; Wed, 01 Aug 2012 15:03:37 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1SwaSV-0003bH-8D
	for xen-devel@lists.xensource.com; Wed, 01 Aug 2012 15:03:35 +0000
Received: from [85.158.138.51:47686] by server-1.bemta-3.messagelabs.com id
	9D/F1-31934-64549105; Wed, 01 Aug 2012 15:03:34 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-12.tower-174.messagelabs.com!1343833412!20392170!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjY5Nzc4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23312 invoked from network); 1 Aug 2012 15:03:33 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-12.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 1 Aug 2012 15:03:33 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q71F3E9H020161
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 1 Aug 2012 15:03:15 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q71F3EGY024998
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 1 Aug 2012 15:03:14 GMT
Received: from abhmt111.oracle.com (abhmt111.oracle.com [141.146.116.63])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q71F3EM4002912; Wed, 1 Aug 2012 10:03:14 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 01 Aug 2012 08:03:14 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 64908402B2; Wed,  1 Aug 2012 10:54:13 -0400 (EDT)
Date: Wed, 1 Aug 2012 10:54:13 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20120801145413.GR7227@phenom.dumpdata.com>
References: <alpine.DEB.2.02.1207251741470.26163@kaball.uk.xensource.com>
	<1343316846-25860-23-git-send-email-stefano.stabellini@eu.citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1343316846-25860-23-git-send-email-stefano.stabellini@eu.citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, catalin.marinas@arm.com,
	tim@xen.org, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH 23/24] hvc_xen: allow dom0_write_console for
	HVM guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jul 26, 2012 at 04:34:05PM +0100, Stefano Stabellini wrote:
> On ARM all guests are HVM guests, including Dom0.
> Allow dom0_write_console to be called by an HVM domain.

Um, but xen_hvm_domain() != xen_pv_domain() so won't this return without
printing anything?

> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> ---
>  drivers/tty/hvc/hvc_xen.c |    5 +----
>  1 files changed, 1 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/tty/hvc/hvc_xen.c b/drivers/tty/hvc/hvc_xen.c
> index 3c04fb8..949edc2 100644
> --- a/drivers/tty/hvc/hvc_xen.c
> +++ b/drivers/tty/hvc/hvc_xen.c
> @@ -616,12 +616,9 @@ static void xenboot_write_console(struct console *console, const char *string,
>  	unsigned int linelen, off = 0;
>  	const char *pos;
>  
> -	if (!xen_pv_domain())
> -		return;
> -
>  	dom0_write_console(0, string, len);
>  
> -	if (xen_initial_domain())
> +	if (!xen_pv_domain())
>  		return;
>  
>  	domU_write_console(0, "(early) ", 8);
> -- 
> 1.7.2.5
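
The before/after control flow being discussed can be modeled in a standalone sketch; the flags and writer functions below are stand-ins for the kernel's `xen_pv_domain()`, `xen_initial_domain()`, `dom0_write_console()` and `domU_write_console()`, not the real symbols:

```c
#include <assert.h>
#include <string.h>

static int pv_domain;       /* stands in for xen_pv_domain() */
static int initial_domain;  /* stands in for xen_initial_domain() */
static char out[64];        /* records which writers actually ran */

static void dom0_write(void) { strcat(out, "dom0 "); }
static void domU_write(void) { strcat(out, "domU "); }

/* Before the patch: a non-PV domain (all ARM guests, dom0 included)
 * bails out before anything is written. */
static void xenboot_write_old(void)
{
	if (!pv_domain)
		return;
	dom0_write();
	if (initial_domain)
		return;
	domU_write();
}

/* After the patch: dom0_write_console runs unconditionally, and only
 * the "(early) " domU path remains PV-only. */
static void xenboot_write_new(void)
{
	dom0_write();
	if (!pv_domain)
		return;
	domU_write();
}
```

For an ARM HVM dom0 (`pv_domain == 0`, `initial_domain == 1`) the old version writes nothing at all, while the new one reaches the dom0 console write before returning.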

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Wed Aug 01 15:06:12 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 15:06:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwaUa-0003pw-N4; Wed, 01 Aug 2012 15:05:44 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1SwaUa-0003pU-0A
	for xen-devel@lists.xensource.com; Wed, 01 Aug 2012 15:05:44 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1343833525!11001244!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDY3MjQwNA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1726 invoked from network); 1 Aug 2012 15:05:26 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-12.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 1 Aug 2012 15:05:26 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q71F5CpQ023672
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 1 Aug 2012 15:05:13 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q71F5BuB021553
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 1 Aug 2012 15:05:11 GMT
Received: from abhmt102.oracle.com (abhmt102.oracle.com [141.146.116.54])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q71F5BNo004419; Wed, 1 Aug 2012 10:05:11 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 01 Aug 2012 08:05:10 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 1CA3C402B2; Wed,  1 Aug 2012 10:56:10 -0400 (EDT)
Date: Wed, 1 Aug 2012 10:56:10 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Mukesh Rathor <mukesh.rathor@oracle.com>
Message-ID: <20120801145610.GS7227@phenom.dumpdata.com>
References: <alpine.DEB.2.02.1207251741470.26163@kaball.uk.xensource.com>
	<1343316846-25860-24-git-send-email-stefano.stabellini@eu.citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1343316846-25860-24-git-send-email-stefano.stabellini@eu.citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, catalin.marinas@arm.com,
	tim@xen.org, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH 24/24] [HACK] xen/arm: implement
 xen_remap_domain_mfn_range
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jul 26, 2012 at 04:34:06PM +0100, Stefano Stabellini wrote:
> From: Ian Campbell <Ian.Campbell@citrix.com>
> 
> Do not apply!

Mukesh, I believe this is similar to what you had in mind.

> 
> This is a simple, hacky implementation of xen_remap_domain_mfn_range,
> using XENMAPSPACE_gmfn_foreign.
> 
> It should use the same interface as hybrid x86.

Yeah.. We should get this done irrespective of this ARM patchset as
it will certainly benefit the HVM domains.

So what is with the 0x9000 values?

> 
> Signed-off-by: Ian Campbell <Ian.Campbell@citrix.com>
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> ---
>  arch/arm/xen/enlighten.c       |   79 +++++++++++++++++++++++++++++++++++++++-
>  drivers/xen/privcmd.c          |   16 +++++----
>  drivers/xen/xenfs/super.c      |    7 ++++
>  include/xen/interface/memory.h |   10 ++++--
>  4 files changed, 101 insertions(+), 11 deletions(-)
> 
> diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
> index 1476b0b..7092015 100644
> --- a/arch/arm/xen/enlighten.c
> +++ b/arch/arm/xen/enlighten.c
> @@ -16,6 +16,10 @@
>  #include <linux/of.h>
>  #include <linux/of_irq.h>
>  #include <linux/of_address.h>
> +#include <linux/mm.h>
> +#include <linux/ioport.h>
> +
> +#include <asm/pgtable.h>
>  
>  struct start_info _xen_start_info;
>  struct start_info *xen_start_info = &_xen_start_info;
> @@ -38,12 +42,85 @@ EXPORT_SYMBOL_GPL(xen_platform_pci_unplug);
>  
>  static __read_mostly int xen_events_irq = -1;
>  
> +#define FOREIGN_MAP_BUFFER 0x90000000UL
> +#define FOREIGN_MAP_BUFFER_SIZE 0x10000000UL
> +struct resource foreign_map_resource = {
> +	.start = FOREIGN_MAP_BUFFER,
> +	.end = FOREIGN_MAP_BUFFER + FOREIGN_MAP_BUFFER_SIZE,
> +	.name = "Xen foreign map buffer",
> +	.flags = 0,
> +};
> +
> +static unsigned long foreign_map_buffer_pfn = FOREIGN_MAP_BUFFER >> PAGE_SHIFT;
> +
> +struct remap_data {
> +	struct mm_struct *mm;
> +	unsigned long mfn;
> +	pgprot_t prot;
> +};
> +
> +static int remap_area_mfn_pte_fn(pte_t *ptep, pgtable_t token,
> +				 unsigned long addr, void *data)
> +{
> +	struct remap_data *rmd = data;
> +	pte_t pte = pfn_pte(rmd->mfn, rmd->prot);
> +
> +	if (rmd->mfn < 0x90010)
> +		pr_crit("%s: ptep %p addr %#lx => %#x / %#lx\n",
> +		       __func__, ptep, addr, pte_val(pte), rmd->mfn);
> +
> +	set_pte_at(rmd->mm, addr, ptep, pte);
> +
> +	rmd->mfn++;
> +	return 0;
> +}
> +
>  int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
>  			       unsigned long addr,
>  			       unsigned long mfn, int nr,
>  			       pgprot_t prot, unsigned domid)
>  {
> -	return -ENOSYS;
> +	int i, rc = 0;
> +	struct remap_data rmd = {
> +		.mm = vma->vm_mm,
> +		.prot = prot,
> +	};
> +	struct xen_add_to_physmap xatp = {
> +		.domid = DOMID_SELF,
> +		.space = XENMAPSPACE_gmfn_foreign,
> +
> +		.foreign_domid = domid,
> +	};
> +
> +	if (foreign_map_buffer_pfn + nr > ((FOREIGN_MAP_BUFFER +
> +					FOREIGN_MAP_BUFFER_SIZE)>>PAGE_SHIFT)) {
> +		pr_crit("RAM out of foreign map buffers...\n");
> +		return -EBUSY;
> +	}
> +
> +	for (i = 0; i < nr; i++) {
> +		xatp.idx = mfn + i;
> +		xatp.gpfn = foreign_map_buffer_pfn + i;
> +		rc = HYPERVISOR_memory_op(XENMEM_add_to_physmap, &xatp);
> +		if (rc != 0) {
> +			pr_crit("foreign map add_to_physmap failed, err=%d\n", rc);
> +			goto out;
> +		}
> +	}
> +
> +	rmd.mfn = foreign_map_buffer_pfn;
> +	rc = apply_to_page_range(vma->vm_mm,
> +				 addr,
> +				 (unsigned long)nr << PAGE_SHIFT,
> +				 remap_area_mfn_pte_fn, &rmd);
> +	if (rc != 0) {
> +		pr_crit("apply_to_page_range failed rc=%d\n", rc);
> +		goto out;
> +	}
> +
> +	foreign_map_buffer_pfn += nr;
> +out:
> +	return rc;
>  }
>  EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_range);
>  
> diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
> index 85226cb..3e15c22 100644
> --- a/drivers/xen/privcmd.c
> +++ b/drivers/xen/privcmd.c
> @@ -20,6 +20,8 @@
>  #include <linux/pagemap.h>
>  #include <linux/seq_file.h>
>  #include <linux/miscdevice.h>
> +#include <linux/resource.h>
> +#include <linux/ioport.h>
>  
>  #include <asm/pgalloc.h>
>  #include <asm/pgtable.h>
> @@ -196,9 +198,6 @@ static long privcmd_ioctl_mmap(void __user *udata)
>  	LIST_HEAD(pagelist);
>  	struct mmap_mfn_state state;
>  
> -	if (!xen_initial_domain())
> -		return -EPERM;
> -
>  	if (copy_from_user(&mmapcmd, udata, sizeof(mmapcmd)))
>  		return -EFAULT;
>  
> @@ -286,9 +285,6 @@ static long privcmd_ioctl_mmap_batch(void __user *udata)
>  	LIST_HEAD(pagelist);
>  	struct mmap_batch_state state;
>  
> -	if (!xen_initial_domain())
> -		return -EPERM;
> -
>  	if (copy_from_user(&m, udata, sizeof(m)))
>  		return -EFAULT;
>  
> @@ -365,6 +361,11 @@ static long privcmd_ioctl(struct file *file,
>  	return ret;
>  }
>  
> +static void privcmd_close(struct vm_area_struct *vma)
> +{
> +	/* TODO: unmap VMA */
> +}
> +
>  static int privcmd_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
>  {
>  	printk(KERN_DEBUG "privcmd_fault: vma=%p %lx-%lx, pgoff=%lx, uv=%p\n",
> @@ -375,7 +376,8 @@ static int privcmd_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
>  }
>  
>  static struct vm_operations_struct privcmd_vm_ops = {
> -	.fault = privcmd_fault
> +	.fault = privcmd_fault,
> +	.close = privcmd_close,
>  };
>  
>  static int privcmd_mmap(struct file *file, struct vm_area_struct *vma)
> diff --git a/drivers/xen/xenfs/super.c b/drivers/xen/xenfs/super.c
> index a84b53c..edbe22f 100644
> --- a/drivers/xen/xenfs/super.c
> +++ b/drivers/xen/xenfs/super.c
> @@ -12,6 +12,7 @@
>  #include <linux/module.h>
>  #include <linux/fs.h>
>  #include <linux/magic.h>
> +#include <linux/ioport.h>
>  
>  #include <xen/xen.h>
>  
> @@ -80,6 +81,8 @@ static const struct file_operations capabilities_file_ops = {
>  	.llseek = default_llseek,
>  };
>  
> +extern struct resource foreign_map_resource;
> +
>  static int xenfs_fill_super(struct super_block *sb, void *data, int silent)
>  {
>  	static struct tree_descr xenfs_files[] = {
> @@ -100,6 +103,10 @@ static int xenfs_fill_super(struct super_block *sb, void *data, int silent)
>  				  &xsd_kva_file_ops, NULL, S_IRUSR|S_IWUSR);
>  		xenfs_create_file(sb, sb->s_root, "xsd_port",
>  				  &xsd_port_file_ops, NULL, S_IRUSR|S_IWUSR);
> +		rc = request_resource(&iomem_resource, &foreign_map_resource);
> +		if (rc < 0)
> +			pr_crit("failed to register foreign map resource\n");
> +		rc = 0; /* ignore */
>  	}
>  
>  	return rc;
> diff --git a/include/xen/interface/memory.h b/include/xen/interface/memory.h
> index d8e33a9..ec68945 100644
> --- a/include/xen/interface/memory.h
> +++ b/include/xen/interface/memory.h
> @@ -167,9 +167,13 @@ struct xen_add_to_physmap {
>      uint16_t    size;
>  
>      /* Source mapping space. */
> -#define XENMAPSPACE_shared_info 0 /* shared info page */
> -#define XENMAPSPACE_grant_table 1 /* grant table page */
> -    unsigned int space;
> +#define XENMAPSPACE_shared_info  0 /* shared info page */
> +#define XENMAPSPACE_grant_table  1 /* grant table page */
> +#define XENMAPSPACE_gmfn         2 /* GMFN */
> +#define XENMAPSPACE_gmfn_range   3 /* GMFN range */
> +#define XENMAPSPACE_gmfn_foreign 4 /* GMFN from another guest */
> +    uint16_t space;
> +    domid_t foreign_domid; /* IFF gmfn_foreign */
>  
>      /* Index into source mapping space. */
>      unsigned long idx;
> -- 
> 1.7.2.5
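
The `0x90000000` values are the fixed 256 MB foreign-map window the patch carves out, consumed by a bump pointer that is never rewound (hence the `-EBUSY` path and the empty `privcmd_close` TODO). That arithmetic can be modeled standalone; the constants mirror the patch, while `reserve_foreign_pfns` is a hypothetical helper for illustration only:

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT              12
#define FOREIGN_MAP_BUFFER      0x90000000UL
#define FOREIGN_MAP_BUFFER_SIZE 0x10000000UL

static unsigned long next_pfn = FOREIGN_MAP_BUFFER >> PAGE_SHIFT;

/* Returns the first guest pfn of the reservation, or 0 when the window
 * is exhausted -- the patch returns -EBUSY in that case and never frees
 * previously handed-out ranges. */
static unsigned long reserve_foreign_pfns(int nr)
{
	unsigned long limit =
		(FOREIGN_MAP_BUFFER + FOREIGN_MAP_BUFFER_SIZE) >> PAGE_SHIFT;
	unsigned long start = next_pfn;

	if (next_pfn + nr > limit)
		return 0;
	next_pfn += nr;
	return start;
}
```

The window holds `0x10000` pfns (pfns `0x90000` through `0x9ffff`), so once enough mappings accumulate, every further `xen_remap_domain_mfn_range` call fails until the TODO unmap is implemented.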

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 15:23:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 15:23:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwalC-00047v-BD; Wed, 01 Aug 2012 15:22:54 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1SwalB-00047q-35
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 15:22:53 +0000
Received: from [85.158.138.51:17933] by server-9.bemta-3.messagelabs.com id
	72/C4-27628-CC949105; Wed, 01 Aug 2012 15:22:52 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-3.tower-174.messagelabs.com!1343834571!21823413!1
X-Originating-IP: [209.85.215.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2991 invoked from network); 1 Aug 2012 15:22:51 -0000
Received: from mail-ey0-f173.google.com (HELO mail-ey0-f173.google.com)
	(209.85.215.173)
	by server-3.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 15:22:51 -0000
Received: by eaah1 with SMTP id h1so1895688eaa.32
	for <xen-devel@lists.xen.org>; Wed, 01 Aug 2012 08:22:51 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=yc7HBA485ajS9y3z/jpf1Q96ALif33XCZTdtfxQJ7qc=;
	b=xnOE2uArBf7L3iDabg3+TPzeHFr6li2Dqn60gRykM5EQySRxPiKIxh/gyt6s64kozn
	nhEZCf60tB9AWFHzXEHX796nJd4MN2oLK4jUgdmVDS+rxMiHPwBIkCDGNxXVfRQcadD/
	Fw4zRUDp5ljFfcNiMq9M5/+K0EwJFh7cpL89/5OV9ppZY9qO3qgPS9Uqwk+MCp58ZRPP
	+nX6TiHYkvA348Ue6IHSgLuEQSbRmd68Xg0OrIxlUfFC5LjEWWduxedadaqJ0Pn6srne
	ZyIO2+fxSmoJrrPmgaXG+xiIWsPuDvpy77zKKlcT8B8bntcWnv8zqJaYV/t2YQ5K0ozI
	HVvQ==
Received: by 10.14.223.9 with SMTP id u9mr6838787eep.10.1343834571476;
	Wed, 01 Aug 2012 08:22:51 -0700 (PDT)
Received: from [192.168.1.68] (host86-178-46-238.range86-178.btcentralplus.com.
	[86.178.46.238])
	by mx.google.com with ESMTPS id w3sm9797041eep.2.2012.08.01.08.22.50
	(version=SSLv3 cipher=OTHER); Wed, 01 Aug 2012 08:22:51 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.32.0.111121
Date: Wed, 01 Aug 2012 16:22:48 +0100
From: Keir Fraser <keir.xen@gmail.com>
To: Jan Beulich <JBeulich@suse.com>,
	xen-devel <xen-devel@lists.xen.org>
Message-ID: <CC3F0858.3A2BC%keir.xen@gmail.com>
Thread-Topic: [Xen-devel] [PATCH] x86: also allow disabling LAPIC NMI watchdog
	on newer CPUs
Thread-Index: Ac1v+YHeoux2wZ/5W0SuaLaSn9cGKw==
In-Reply-To: <501938070200007800091E24@nat28.tlf.novell.com>
Mime-version: 1.0
Subject: Re: [Xen-devel] [PATCH] x86: also allow disabling LAPIC NMI
 watchdog on newer CPUs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/08/2012 13:07, "Jan Beulich" <JBeulich@suse.com> wrote:

> This complements c/s 9146:941897e98591, and also replaces a literal
> zero with a proper manifest constant.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Keir Fraser <keir@xen.org>

> --- a/xen/arch/x86/nmi.c
> +++ b/xen/arch/x86/nmi.c
> @@ -175,15 +175,9 @@ static void disable_lapic_nmi_watchdog(v
>      case X86_VENDOR_INTEL:
>          switch (boot_cpu_data.x86) {
>          case 6:
> -            if (boot_cpu_data.x86_model > 0xd)
> -                break;
> -
>              wrmsr(MSR_P6_EVNTSEL0, 0, 0);
>              break;
>          case 15:
> -            if (boot_cpu_data.x86_model > 0x4)
> -                break;
> -
>              wrmsr(MSR_P4_IQ_CCCR0, 0, 0);
>              wrmsr(MSR_P4_CRU_ESCR0, 0, 0);
>              break;
> @@ -192,7 +186,7 @@ static void disable_lapic_nmi_watchdog(v
>      }
>      nmi_active = -1;
>      /* tell do_nmi() and others that we're not active any more */
> -    nmi_watchdog = 0;
> +    nmi_watchdog = NMI_NONE;
>  }
>  
>  static void enable_lapic_nmi_watchdog(void)
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 15:25:42 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 15:25:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwanN-0004CZ-SD; Wed, 01 Aug 2012 15:25:09 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1SwanM-0004CO-Cd
	for Xen-devel@lists.xensource.com; Wed, 01 Aug 2012 15:25:08 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1343834702!3313263!1
X-Originating-IP: [74.125.83.43]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20576 invoked from network); 1 Aug 2012 15:25:02 -0000
Received: from mail-ee0-f43.google.com (HELO mail-ee0-f43.google.com)
	(74.125.83.43)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 15:25:02 -0000
Received: by eekd4 with SMTP id d4so2000934eek.30
	for <Xen-devel@lists.xensource.com>;
	Wed, 01 Aug 2012 08:25:01 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=6No64RXyZKCa3GyRdD3+6Y3yWr5CHYEs+DKZUVNQY7I=;
	b=IWfZHlqQu5vGpJDeYt9mMw9CoTwTTt8uaETtoJeRVsNoocq43NoHbe6Jb5MFQeS9Mt
	6nO4nX0X7qRiwt+aNvrdLkuR4GxuQOexcIrlALvMAXveHNGUPxZlo05HZoQbVusm5Sir
	ut4A+Rq9RN0B80bM5N1N35/iza3HQ/WaF0JjjgtcwRMV1qeAlqvy4hNbMeLYL9hHip3+
	kQDla1HYVav2XAWhQcjjPh+G4g/FNTbfQ90nWlpKyxhQuCX/3+cxzynk4HH3nok5ecfn
	aqJ2Se0F/gsiPtMftEB46/QXJjxvYRyLElXCGlBZu6x8OfJcTuFUve14DWYH85frDGOZ
	FxVg==
MIME-Version: 1.0
Received: by 10.14.175.130 with SMTP id z2mr23045789eel.0.1343834701843; Wed,
	01 Aug 2012 08:25:01 -0700 (PDT)
Received: by 10.14.206.135 with HTTP; Wed, 1 Aug 2012 08:25:01 -0700 (PDT)
In-Reply-To: <20120626181707.4203d336@mantra.us.oracle.com>
References: <20120626181707.4203d336@mantra.us.oracle.com>
Date: Wed, 1 Aug 2012 16:25:01 +0100
X-Google-Sender-Auth: J-KxhWwK50-0JFa-Ea0UPCnTvws
Message-ID: <CAFLBxZagm=53bZ=KaWmd7XQkkAduD+znECJRsaJUqrGJDZgkQQ@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: Mukesh Rathor <mukesh.rathor@oracle.com>
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Keir Fraser <keir.xen@gmail.com>,
	"stefano.stabellini@eu.citrix.com" <stefano.stabellini@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [HYBRID]: status update...
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I hope this isn't bikeshedding, but I don't like "Hybrid" as a name
for this feature, mainly for "marketing" reasons.  I think it will
probably give people the wrong idea about what the technology does.
PV domains are one of Xen's really distinct advantages -- much simpler
interface, lighter weight (no qemu, no legacy boot), &c &c.  As I
understand it, the mode you've been calling "hybrid" still has all of
these advantages -- it just uses some of the HVM hardware extensions
to make the interface even simpler / faster.  I'm afraid "hybrid" may
be seen as, "Even Xen has had to give up on PV."

Can I suggest something like "PVH" instead?  That (at least to me)
makes it clear that PV domains are still fully PV, but just use some
HVM extensions.

Thoughts?

 -George

On Wed, Jun 27, 2012 at 2:17 AM, Mukesh Rathor <mukesh.rathor@oracle.com> wrote:
> Hi Guys,
>
> Just a quick status update. I refreshed my trees and then debugged as
> the code had changed a lot. I'm again a few weeks behind the latest
> tree on both linux and xen. After the refresh, I ran into a few issues:
>
>    - xenstored is using a gnttab interface that will not work for hybrid.
>      For now I just disabled it.
>
>    - libxl has changed a lot, so for now, I'm only supporting
>           disk = ['phy:/dev/loop1,xvda,w']
>
>    - the structs pv_vcpu and hvm_vcpu are in a union now. I added a new
>      type hyb_vcpu and am now going thru the code changing all refs.
>
>    - on the linux side I managed to remove all changes to non-xen files,
>      which should help a lot.
>
> Once I finish the changes for the hyb_vcpu union, I should be able to get
> things working again. Then I'll refresh the linux tree, keeping xen the
> same, test it all out, and submit the linux patch. After that I'll refresh
> the xen tree, keeping linux the same, test it out, and submit that patch.
>
> Thanks,
> Mukesh
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 15:34:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 15:34:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwawJ-0004Vn-0q; Wed, 01 Aug 2012 15:34:23 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1SwawH-0004Vi-1C
	for Xen-devel@lists.xensource.com; Wed, 01 Aug 2012 15:34:21 +0000
Received: from [85.158.143.99:27870] by server-1.bemta-4.messagelabs.com id
	E2/B2-24392-C7C49105; Wed, 01 Aug 2012 15:34:20 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-15.tower-216.messagelabs.com!1343835258!29182550!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDY3MjQwNA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23226 invoked from network); 1 Aug 2012 15:34:19 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-15.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 1 Aug 2012 15:34:19 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q71FYAWC029229
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 1 Aug 2012 15:34:11 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q71FY9tp025573
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 1 Aug 2012 15:34:10 GMT
Received: from abhmt105.oracle.com (abhmt105.oracle.com [141.146.116.57])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q71FY8o0028317; Wed, 1 Aug 2012 10:34:08 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 01 Aug 2012 08:34:08 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 438F4402B2; Wed,  1 Aug 2012 11:25:08 -0400 (EDT)
Date: Wed, 1 Aug 2012 11:25:08 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: George Dunlap <George.Dunlap@eu.citrix.com>
Message-ID: <20120801152508.GA7132@phenom.dumpdata.com>
References: <20120626181707.4203d336@mantra.us.oracle.com>
	<CAFLBxZagm=53bZ=KaWmd7XQkkAduD+znECJRsaJUqrGJDZgkQQ@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAFLBxZagm=53bZ=KaWmd7XQkkAduD+znECJRsaJUqrGJDZgkQQ@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Keir Fraser <keir.xen@gmail.com>, Ian Campbell <Ian.Campbell@citrix.com>,
	"stefano.stabellini@eu.citrix.com" <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [HYBRID]: status update...
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Aug 01, 2012 at 04:25:01PM +0100, George Dunlap wrote:
> I hope this isn't bikeshedding, but I don't like "Hybrid" as a name
> for this feature, mainly for "marketing" reasons.  I think it will
> probably give people the wrong idea about what the technology does.
> PV domains are one of Xen's really distinct advantages -- much simpler
> interface, lighter weight (no qemu, no legacy boot), &c &c.  As I
> understand it, the mode you've been calling "hybrid" still has all of
> these advantages -- it just uses some of the HVM hardware extensions
> to make the interface even simpler / faster.  I'm afraid "hybrid" may
> be seen as, "Even Xen has had to give up on PV."
> 
> Can I suggest something like "PVH" instead?  That (at least to me)
> makes it clear that PV domains are still fully PV, but just use some
> HVM extensions.

if (xen_pvh_domain()) ?

if (xen_pv_h_domain()) ?

if (xen_h_domain()) ?

if (xen_pvplus_domain()) ?

if (xen_pv_ext_domain()) ?

I think I like 'pv+'?

> 
> Thoughts?
> 
>  -George
> 
> On Wed, Jun 27, 2012 at 2:17 AM, Mukesh Rathor <mukesh.rathor@oracle.com> wrote:
> > Hi Guys,
> >
> > Just a quick status update. I refreshed my trees and then debugged as
> > the code had changed a lot. I'm again a few weeks behind the latest
> > tree on both linux and xen. After the refresh, I ran into a few issues:
> >
> >    - xenstored is using a gnttab interface that will not work for hybrid.
> >      For now I just disabled it.
> >
> >    - libxl has changed a lot, so for now, I'm only supporting
> >           disk = ['phy:/dev/loop1,xvda,w']
> >
> >    - the structs pv_vcpu and hvm_vcpu are in a union now. I added a new
> >      type hyb_vcpu and am now going thru the code changing all refs.
> >
> >    - on the linux side I managed to remove all changes to non-xen files,
> >      which should help a lot.
> >
> > Once I finish the changes for the hyb_vcpu union, I should be able to get
> > things working again. Then I'll refresh the linux tree, keeping xen the
> > same, test it all out, and submit the linux patch. After that I'll refresh
> > the xen tree, keeping linux the same, test it out, and submit that patch.
> >
> > Thanks,
> > Mukesh
> >
> >
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 15:43:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 15:43:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swb4b-0004fT-54; Wed, 01 Aug 2012 15:42:57 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <anthony@codemonkey.ws>) id 1Swb4a-0004fO-4T
	for xen-devel@lists.xensource.com; Wed, 01 Aug 2012 15:42:56 +0000
Received: from [85.158.138.51:41468] by server-5.bemta-3.messagelabs.com id
	1C/99-28237-E7E49105; Wed, 01 Aug 2012 15:42:54 +0000
X-Env-Sender: anthony@codemonkey.ws
X-Msg-Ref: server-15.tower-174.messagelabs.com!1343835773!28167912!1
X-Originating-IP: [209.85.160.171]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29505 invoked from network); 1 Aug 2012 15:42:54 -0000
Received: from mail-gh0-f171.google.com (HELO mail-gh0-f171.google.com)
	(209.85.160.171)
	by server-15.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 15:42:54 -0000
Received: by ghy10 with SMTP id 10so9332437ghy.30
	for <xen-devel@lists.xensource.com>;
	Wed, 01 Aug 2012 08:42:53 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=from:to:cc:subject:in-reply-to:references:user-agent:date
	:message-id:mime-version:content-type:x-gm-message-state;
	bh=C7PPgvjWGu1mFmXAU9UjXUM530qIBuErvEJkwjS7R3U=;
	b=cDPw5RJy4fc/9cu0aRToK7w6SLooyxnwOS+qGpEOGAwgyyTVGB1SQdgiuSHzANvodI
	L8MHeRZ05YjB5R/PCrpfUqqxc372iwle1FeBCX0nItoEJe3ayAJcZusGAMJ5i2Hz6vxV
	CUSnjIYe27Itn5+27aax+zQrHHpExd2Bf+e1vA7HzRUx2+Z3qJ4035Lj6OA26Ch5nHvL
	SL7LW0CoW/oSV5nPDeMHSL2IClR2RpU4kiDZ6RwDzPHsY6zldFxEUPElQz+1X/rtJZSv
	17eYma7TS2wJbrPXheM+d0NT9qrr3GdfkHTLVens3W46kaXHml1qUaJhrGC90XE/JmN0
	jqIQ==
Received: by 10.60.2.131 with SMTP id 3mr29530924oeu.59.1343835772860;
	Wed, 01 Aug 2012 08:42:52 -0700 (PDT)
Received: from titi.smtp.gmail.com (cpe-70-123-145-39.austin.res.rr.com.
	[70.123.145.39])
	by mx.google.com with ESMTPS id ac8sm2853070obc.11.2012.08.01.08.42.51
	(version=TLSv1/SSLv3 cipher=OTHER);
	Wed, 01 Aug 2012 08:42:52 -0700 (PDT)
From: Anthony Liguori <anthony@codemonkey.ws>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	qemu-devel@nongnu.org
In-Reply-To: <alpine.DEB.2.02.1208011114490.4645@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208011114490.4645@kaball.uk.xensource.com>
User-Agent: Notmuch/0.13.2+93~ged93d79 (http://notmuchmail.org) Emacs/23.3.1
	(x86_64-pc-linux-gnu)
Date: Wed, 01 Aug 2012 10:42:50 -0500
Message-ID: <873946mxdx.fsf@codemonkey.ws>
MIME-Version: 1.0
X-Gm-Message-State: ALoCoQk7L9IF4Rgv6Rd2kGgkb3V9UqK9z3qJj9md7d6wp6+Fu1JtBcGGnqYORrg1NqDYttIW6mXC
Cc: xen-devel@lists.xensource.com, Ian Campbell <Ian.Campbell@citrix.com>,
	fantonifabio@tiscali.it,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [Qemu-devel] [PATCH] fix Xen compilation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Stefano Stabellini <stefano.stabellini@eu.citrix.com> writes:

> xen_pt_unregister_device is used as PCIUnregisterFunc, so it should
> match the type.
>
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

Applied.  Thanks.

Regards,

Anthony Liguori

>
> diff --git a/hw/xen_pt.c b/hw/xen_pt.c
> index fdf68aa..307119a 100644
> --- a/hw/xen_pt.c
> +++ b/hw/xen_pt.c
> @@ -764,7 +764,7 @@ out:
>      return 0;
>  }
>  
> -static int xen_pt_unregister_device(PCIDevice *d)
> +static void xen_pt_unregister_device(PCIDevice *d)
>  {
>      XenPCIPassthroughState *s = DO_UPCAST(XenPCIPassthroughState, dev, d);
>      uint8_t machine_irq = s->machine_irq;
> @@ -814,8 +814,6 @@ static int xen_pt_unregister_device(PCIDevice *d)
>      memory_listener_unregister(&s->memory_listener);
>  
>      xen_host_pci_device_put(&s->real_device);
> -
> -    return 0;
>  }
>  
>  static Property xen_pci_passthrough_properties[] = {
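The reason the patch above changes the return type: QEMU stores the device exit hook as a PCIUnregisterFunc, which returns void, so a function returning int cannot be assigned to it without an incompatible-pointer cast. A minimal illustration (types are reduced to the bare minimum needed to show the mismatch, not QEMU's real declarations):

```c
#include <assert.h>

/* Simplified stand-ins for the QEMU types involved. */
typedef struct PCIDevice PCIDevice;
typedef void (*PCIUnregisterFunc)(PCIDevice *d);

/* Old signature: returns int, so it does not match PCIUnregisterFunc. */
static int  old_unregister(PCIDevice *d) { (void)d; return 0; }

/* Fixed signature from the patch: returns void and matches the typedef. */
static void new_unregister(PCIDevice *d) { (void)d; }

/* Assigning new_unregister compiles cleanly; assigning old_unregister
 * would need a cast, and calling through the mismatched pointer type is
 * undefined behaviour. */
static PCIUnregisterFunc exit_hook = new_unregister;
```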

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 15:46:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 15:46:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swb7d-0004nZ-TC; Wed, 01 Aug 2012 15:46:05 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Swb7c-0004nU-0D
	for xen-devel@lists.xensource.com; Wed, 01 Aug 2012 15:46:04 +0000
Received: from [85.158.143.35:6211] by server-3.bemta-4.messagelabs.com id
	CD/D4-01511-B3F49105; Wed, 01 Aug 2012 15:46:03 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1343835957!12992150!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6389 invoked from network); 1 Aug 2012 15:46:00 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 15:46:00 -0000
X-IronPort-AV: E=Sophos;i="4.77,694,1336348800"; d="scan'208";a="13807298"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 15:45:57 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 1 Aug 2012 16:45:57 +0100
Date: Wed, 1 Aug 2012 16:45:40 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <20120801141959.GE7227@phenom.dumpdata.com>
Message-ID: <alpine.DEB.2.02.1208011641410.4645@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1207251741470.26163@kaball.uk.xensource.com>
	<1343316846-25860-7-git-send-email-stefano.stabellini@eu.citrix.com>
	<20120801141959.GE7227@phenom.dumpdata.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linaro-dev@lists.linaro.org" <linaro-dev@lists.linaro.org>,
	Ian Campbell <Ian.Campbell@citrix.com>, "arnd@arndb.de" <arnd@arndb.de>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"catalin.marinas@arm.com" <catalin.marinas@arm.com>,
	"Tim \(Xen.org\)" <tim@xen.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [PATCH 07/24] xen/arm: Xen detection and
 shared_info page mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 1 Aug 2012, Konrad Rzeszutek Wilk wrote:
> On Thu, Jul 26, 2012 at 04:33:49PM +0100, Stefano Stabellini wrote:
> > Check for a "/xen" node in the device tree, if it is present set
> > xen_domain_type to XEN_HVM_DOMAIN and continue initialization.
> > 
> > Map the real shared info page using XENMEM_add_to_physmap with
> > XENMAPSPACE_shared_info.
> > 
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > ---
> >  arch/arm/xen/enlighten.c |   56 ++++++++++++++++++++++++++++++++++++++++++++++
> >  1 files changed, 56 insertions(+), 0 deletions(-)
> > 
> > diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
> > index d27c2a6..8c923af 100644
> > --- a/arch/arm/xen/enlighten.c
> > +++ b/arch/arm/xen/enlighten.c
> > @@ -5,6 +5,9 @@
> >  #include <asm/xen/hypervisor.h>
> >  #include <asm/xen/hypercall.h>
> >  #include <linux/module.h>
> > +#include <linux/of.h>
> > +#include <linux/of_irq.h>
> > +#include <linux/of_address.h>
> >  
> >  struct start_info _xen_start_info;
> >  struct start_info *xen_start_info = &_xen_start_info;
> > @@ -33,3 +36,56 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
> >  	return -ENOSYS;
> >  }
> >  EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_range);
> > +
> > +/*
> > + * == Xen Device Tree format ==
> > + * - /xen node;
> > + * - compatible "arm,xen";
> > + * - one interrupt for Xen event notifications;
> > + * - one memory region to map the grant_table.
> > + */
> > +static int __init xen_guest_init(void)
> > +{
> > +	int cpu;
> > +	struct xen_add_to_physmap xatp;
> > +	static struct shared_info *shared_info_page = 0;
> > +	struct device_node *node;
> > +
> > +	node = of_find_compatible_node(NULL, NULL, "arm,xen");
> > +	if (!node) {
> > +		pr_info("No Xen support\n");
> 
> I don't think the pr_info is appropriate here?

Yes, you are right. In fact I had already turned it into a pr_debug.

> > +		return 0;
> 
> Should this be -ENODEV?

Considering that xen_guest_init is called by a core_initcall, I didn't
want to return an error just because Xen is not present on the platform.
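The convention being discussed can be shown with a mocked-up version of the probe path (of_find_compatible_node and the initcall machinery are replaced by trivial stand-ins so the fragment is self-contained):

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in for of_find_compatible_node(): pretend there is no /xen node,
 * i.e. the kernel is not running as a Xen guest on this platform. */
static void *of_find_compatible_node_mock(const char *compat)
{
    (void)compat;
    return NULL;
}

static int xen_guest_init_mock(void)
{
    void *node = of_find_compatible_node_mock("arm,xen");
    if (!node)
        return 0;   /* absence of Xen is not an error for a core_initcall */
    /* ... the real function would map the shared_info page here ... */
    return 0;
}
```

Returning -ENODEV here would make every non-Xen boot report a failed initcall, which is why the quiet `return 0` is kept.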


> > +	}
> > +	xen_domain_type = XEN_HVM_DOMAIN;
> > +
> > +	if (!shared_info_page)
> > +		shared_info_page = (struct shared_info *)
> > +			get_zeroed_page(GFP_KERNEL);
> > +	if (!shared_info_page) {
> > +		pr_err("not enough memory");
> 
> \n

OK

> > +		return -ENOMEM;
> > +	}
> > +	xatp.domid = DOMID_SELF;
> > +	xatp.idx = 0;
> > +	xatp.space = XENMAPSPACE_shared_info;
> > +	xatp.gpfn = __pa(shared_info_page) >> PAGE_SHIFT;
> > +	if (HYPERVISOR_memory_op(XENMEM_add_to_physmap, &xatp))
> > +		BUG();
> > +
> > +	HYPERVISOR_shared_info = (struct shared_info *)shared_info_page;
> > +
> > +	/* xen_vcpu is a pointer to the vcpu_info struct in the shared_info
> > +	 * page, we use it in the event channel upcall and in some pvclock
> > +	 * related functions. We don't need the vcpu_info placement
> > +	 * optimizations because we don't use any pv_mmu or pv_irq op on
> > +	 * HVM.
> > +	 * When xen_hvm_init_shared_info is run at boot time only vcpu 0 is
> > +	 * online but xen_hvm_init_shared_info is run at resume time too and
> > +	 * in that case multiple vcpus might be online. */
> > +	for_each_online_cpu(cpu) {
> > +		per_cpu(xen_vcpu, cpu) =
> > +			&HYPERVISOR_shared_info->vcpu_info[cpu];
> > +	}
> > +	return 0;
> 
> The above looks strongly similar to the x86 one. Could it be
> abstracted away to share the same code? Or is that something that
> ought to be done later on when there is more meat on the bone?

Actually I had to remove these three lines because on ARM we are going
to have just one vcpu_info struct in the shared_info page and then rely
on VCPUOP_register_vcpu_info.
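The per-vcpu registration Stefano refers to can be sketched with a mocked hypercall. The struct layout follows Xen's public vcpu interface (xen/include/public/vcpu.h), but the hypercall and its name here are stand-ins so the flow can be shown standalone:

```c
#include <assert.h>
#include <stdint.h>

/* Layout per Xen's public interface: where to find this vcpu's vcpu_info. */
struct vcpu_register_vcpu_info {
    uint64_t mfn;      /* mfn of the page containing the vcpu_info */
    uint32_t offset;   /* offset of the vcpu_info within that page */
    uint32_t rsvd;
};

#define VCPUOP_register_vcpu_info 10   /* value from public/vcpu.h */

static struct vcpu_register_vcpu_info last_reg;

/* Mock of HYPERVISOR_vcpu_op(): the real hypercall tells Xen to place the
 * given vcpu's vcpu_info at the registered address instead of in the
 * fixed shared_info array. */
static int hypervisor_vcpu_op_mock(int cmd, int vcpu,
                                   struct vcpu_register_vcpu_info *info)
{
    (void)vcpu;
    if (cmd != VCPUOP_register_vcpu_info)
        return -1;
    last_reg = *info;
    return 0;
}
```

With this in place only vcpu 0 needs the shared_info slot; every other vcpu registers its own vcpu_info page, which is why the for_each_online_cpu loop in the patch goes away.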

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 15:46:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 15:46:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swb7v-0004ok-9F; Wed, 01 Aug 2012 15:46:23 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Swb7u-0004oc-M0
	for xen-devel@lists.xensource.com; Wed, 01 Aug 2012 15:46:22 +0000
Received: from [85.158.139.83:19764] by server-3.bemta-5.messagelabs.com id
	E1/FF-03367-D4F49105; Wed, 01 Aug 2012 15:46:21 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-9.tower-182.messagelabs.com!1343835980!29072295!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9250 invoked from network); 1 Aug 2012 15:46:21 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-9.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 15:46:21 -0000
X-IronPort-AV: E=Sophos;i="4.77,694,1336348800"; d="scan'208";a="13807333"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 15:46:20 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 1 Aug 2012 16:46:20 +0100
Date: Wed, 1 Aug 2012 16:46:02 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <20120801141624.GD7227@phenom.dumpdata.com>
Message-ID: <alpine.DEB.2.02.1208011641110.4645@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1207251741470.26163@kaball.uk.xensource.com>
	<1343316846-25860-5-git-send-email-stefano.stabellini@eu.citrix.com>
	<20120801141624.GD7227@phenom.dumpdata.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linaro-dev@lists.linaro.org" <linaro-dev@lists.linaro.org>,
	Ian Campbell <Ian.Campbell@citrix.com>, "arnd@arndb.de" <arnd@arndb.de>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"catalin.marinas@arm.com" <catalin.marinas@arm.com>,
	"Tim \(Xen.org\)" <tim@xen.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [PATCH 05/24] xen/arm: empty implementation of
 grant_table arch specific functions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 1 Aug 2012, Konrad Rzeszutek Wilk wrote:
> On Thu, Jul 26, 2012 at 04:33:47PM +0100, Stefano Stabellini wrote:
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > ---
> >  arch/arm/xen/Makefile      |    2 +-
> >  arch/arm/xen/grant-table.c |   53 ++++++++++++++++++++++++++++++++++++++++++++
> >  2 files changed, 54 insertions(+), 1 deletions(-)
> >  create mode 100644 arch/arm/xen/grant-table.c
> > 
> > diff --git a/arch/arm/xen/Makefile b/arch/arm/xen/Makefile
> > index b9d6acc..4384103 100644
> > --- a/arch/arm/xen/Makefile
> > +++ b/arch/arm/xen/Makefile
> > @@ -1 +1 @@
> > -obj-y		:= enlighten.o hypercall.o
> > +obj-y		:= enlighten.o hypercall.o grant-table.o
> > diff --git a/arch/arm/xen/grant-table.c b/arch/arm/xen/grant-table.c
> > new file mode 100644
> > index 0000000..0a4ee80
> > --- /dev/null
> > +++ b/arch/arm/xen/grant-table.c
> > @@ -0,0 +1,53 @@
> > +/******************************************************************************
> > + * grant_table.c
> > + * ARM specific part
> > + *
> > + * Granting foreign access to our memory reservation.
> > + *
> > + * This program is free software; you can redistribute it and/or
> > + * modify it under the terms of the GNU General Public License version 2
> > + * as published by the Free Software Foundation; or, when distributed
> > + * separately from the Linux kernel or incorporated into other
> > + * software packages, subject to the following license:
> > + *
> > + * Permission is hereby granted, free of charge, to any person obtaining a copy
> > + * of this source file (the "Software"), to deal in the Software without
> > + * restriction, including without limitation the rights to use, copy, modify,
> > + * merge, publish, distribute, sublicense, and/or sell copies of the Software,
> > + * and to permit persons to whom the Software is furnished to do so, subject to
> > + * the following conditions:
> > + *
> > + * The above copyright notice and this permission notice shall be included in
> > + * all copies or substantial portions of the Software.
> > + *
> > + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> > + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> > + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
> > + * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> > + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> > + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
> > + * IN THE SOFTWARE.
> > + */
> > +
> > +#include <xen/interface/xen.h>
> > +#include <xen/page.h>
> > +#include <xen/grant_table.h>
> > +
> > +int arch_gnttab_map_shared(unsigned long *frames, unsigned long nr_gframes,
> > +			   unsigned long max_nr_gframes,
> > +			   void **__shared)
> > +{
> > +	return -1;
> 
> -ENOSYS
> > +}
> > +
> > +void arch_gnttab_unmap(void *shared, unsigned long nr_gframes)
> > +{
> > +	return;
> > +}
> > +
> > +int arch_gnttab_map_status(uint64_t *frames, unsigned long nr_gframes,
> > +			   unsigned long max_nr_gframes,
> > +			   grant_status_t **__shared)
> > +{
> > +	return -1;
> 
> Same here -ENOSYS

OK, I'll do that
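The revised stubs with Konrad's suggestion applied would look roughly like this; the grant_status_t typedef is reproduced (it is uint16_t in Xen's public headers) only so the fragment is self-contained:

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

typedef uint16_t grant_status_t;

int arch_gnttab_map_shared(unsigned long *frames, unsigned long nr_gframes,
                           unsigned long max_nr_gframes, void **shared)
{
    (void)frames; (void)nr_gframes; (void)max_nr_gframes; (void)shared;
    return -ENOSYS;   /* grant-table mapping not implemented on ARM yet */
}

int arch_gnttab_map_status(uint64_t *frames, unsigned long nr_gframes,
                           unsigned long max_nr_gframes,
                           grant_status_t **shared)
{
    (void)frames; (void)nr_gframes; (void)max_nr_gframes; (void)shared;
    return -ENOSYS;   /* a named errno, unlike the bare -1, tells callers
                       * the operation is unimplemented rather than failed */
}
```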

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 1 Aug 2012, Konrad Rzeszutek Wilk wrote:
> On Thu, Jul 26, 2012 at 04:33:47PM +0100, Stefano Stabellini wrote:
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > ---
> >  arch/arm/xen/Makefile      |    2 +-
> >  arch/arm/xen/grant-table.c |   53 ++++++++++++++++++++++++++++++++++++++++++++
> >  2 files changed, 54 insertions(+), 1 deletions(-)
> >  create mode 100644 arch/arm/xen/grant-table.c
> > 
> > diff --git a/arch/arm/xen/Makefile b/arch/arm/xen/Makefile
> > index b9d6acc..4384103 100644
> > --- a/arch/arm/xen/Makefile
> > +++ b/arch/arm/xen/Makefile
> > @@ -1 +1 @@
> > -obj-y		:= enlighten.o hypercall.o
> > +obj-y		:= enlighten.o hypercall.o grant-table.o
> > diff --git a/arch/arm/xen/grant-table.c b/arch/arm/xen/grant-table.c
> > new file mode 100644
> > index 0000000..0a4ee80
> > --- /dev/null
> > +++ b/arch/arm/xen/grant-table.c
> > @@ -0,0 +1,53 @@
> > +/******************************************************************************
> > + * grant_table.c
> > + * ARM specific part
> > + *
> > + * Granting foreign access to our memory reservation.
> > + *
> > + * This program is free software; you can redistribute it and/or
> > + * modify it under the terms of the GNU General Public License version 2
> > + * as published by the Free Software Foundation; or, when distributed
> > + * separately from the Linux kernel or incorporated into other
> > + * software packages, subject to the following license:
> > + *
> > + * Permission is hereby granted, free of charge, to any person obtaining a copy
> > + * of this source file (the "Software"), to deal in the Software without
> > + * restriction, including without limitation the rights to use, copy, modify,
> > + * merge, publish, distribute, sublicense, and/or sell copies of the Software,
> > + * and to permit persons to whom the Software is furnished to do so, subject to
> > + * the following conditions:
> > + *
> > + * The above copyright notice and this permission notice shall be included in
> > + * all copies or substantial portions of the Software.
> > + *
> > + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> > + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> > + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
> > + * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> > + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> > + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
> > + * IN THE SOFTWARE.
> > + */
> > +
> > +#include <xen/interface/xen.h>
> > +#include <xen/page.h>
> > +#include <xen/grant_table.h>
> > +
> > +int arch_gnttab_map_shared(unsigned long *frames, unsigned long nr_gframes,
> > +			   unsigned long max_nr_gframes,
> > +			   void **__shared)
> > +{
> > +	return -1;
> 
> -ENOSYS
> > +}
> > +
> > +void arch_gnttab_unmap(void *shared, unsigned long nr_gframes)
> > +{
> > +	return;
> > +}
> > +
> > +int arch_gnttab_map_status(uint64_t *frames, unsigned long nr_gframes,
> > +			   unsigned long max_nr_gframes,
> > +			   grant_status_t **__shared)
> > +{
> > +	return -1;
> 
> Same here -ENOSYS

OK, I'll do that
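For the record, a self-contained sketch of the stubs with the suggested
-ENOSYS change applied (the grant_status_t typedef here is a stand-in for
the real one from <xen/grant_table.h>, just so the snippet compiles on its
own):

```c
#include <errno.h>   /* ENOSYS */
#include <stdint.h>

/* Stand-in for the type normally provided by <xen/grant_table.h>. */
typedef uint16_t grant_status_t;

/* Empty ARM implementations; returning -ENOSYS (rather than a bare -1)
 * tells callers explicitly that the operation is not implemented. */
int arch_gnttab_map_shared(unsigned long *frames, unsigned long nr_gframes,
			   unsigned long max_nr_gframes, void **__shared)
{
	return -ENOSYS;
}

void arch_gnttab_unmap(void *shared, unsigned long nr_gframes)
{
}

int arch_gnttab_map_status(uint64_t *frames, unsigned long nr_gframes,
			   unsigned long max_nr_gframes,
			   grant_status_t **__shared)
{
	return -ENOSYS;
}
```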

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 15:51:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 15:51:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwbCn-00057T-0X; Wed, 01 Aug 2012 15:51:25 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SwbCl-00057D-E2
	for xen-devel@lists.xensource.com; Wed, 01 Aug 2012 15:51:23 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1343836277!10361212!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23641 invoked from network); 1 Aug 2012 15:51:17 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 15:51:17 -0000
X-IronPort-AV: E=Sophos;i="4.77,694,1336348800"; d="scan'208";a="13807442"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 15:51:17 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 1 Aug 2012 16:51:16 +0100
Date: Wed, 1 Aug 2012 16:50:59 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <20120801142249.GF7227@phenom.dumpdata.com>
Message-ID: <alpine.DEB.2.02.1208011646560.4645@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1207251741470.26163@kaball.uk.xensource.com>
	<1343316846-25860-8-git-send-email-stefano.stabellini@eu.citrix.com>
	<20120801142249.GF7227@phenom.dumpdata.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linaro-dev@lists.linaro.org" <linaro-dev@lists.linaro.org>,
	Ian Campbell <Ian.Campbell@citrix.com>, "arnd@arndb.de" <arnd@arndb.de>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"catalin.marinas@arm.com" <catalin.marinas@arm.com>,
	"Tim \(Xen.org\)" <tim@xen.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [PATCH 08/24] xen/arm: Introduce xen_pfn_t for pfn
 and mfn types
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 1 Aug 2012, Konrad Rzeszutek Wilk wrote:
> On Thu, Jul 26, 2012 at 04:33:50PM +0100, Stefano Stabellini wrote:
> > All the original Xen headers use xen_pfn_t as the mfn and pfn type; however,
> > when they were imported into Linux, xen_pfn_t was replaced with
> > unsigned long. That might work for x86 and ia64, but it does not for ARM.
> 
> How come?

see below

> > Bring back xen_pfn_t and let each architecture define xen_pfn_t as they
> > see fit.
> 
> I am OK with this as long as you include a comment in both of the
> interface.h files saying why this is needed. I am curious why 'unsigned long'
> won't work? Is it b/c on ARM you always want a 64-bit type?

Yes, we would like to make the same interface work for 32-bit and 64-bit
guests, without any need for a "compat layer" like the one we currently have
on x86.
In fact I am going to add another patch to explicitly size all the
other unsigned long fields that we have in the public interface.
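To illustrate the width argument (hypothetical definitions, not the actual
patch): with a fixed-width xen_pfn_t the guest-visible ABI has the same
layout for 32-bit and 64-bit ARM guests, whereas an unsigned long is 4 bytes
on arm32 and 8 on arm64:

```c
#include <stdint.h>

/* Hypothetical per-arch typedef, shown only to illustrate the point:
 * uint64_t has the same size for 32-bit and 64-bit ARM guests, so the
 * hypercall ABI needs no compat layer; 'unsigned long' would differ
 * between arm32 (4 bytes) and arm64 (8 bytes). */
#if defined(__arm__) || defined(__aarch64__)
typedef uint64_t xen_pfn_t;
#else
typedef unsigned long xen_pfn_t;   /* historical x86/ia64 choice */
#endif
```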

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 16:00:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 16:00:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwbKw-0005I7-W7; Wed, 01 Aug 2012 15:59:51 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1SwbKv-0005I2-NH
	for xen-devel@lists.xensource.com; Wed, 01 Aug 2012 15:59:49 +0000
Received: from [85.158.139.83:41070] by server-12.bemta-5.messagelabs.com id
	83/CD-25233-47259105; Wed, 01 Aug 2012 15:59:48 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-4.tower-182.messagelabs.com!1343836786!27068506!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDY3MjQwNA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3538 invoked from network); 1 Aug 2012 15:59:48 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-4.tower-182.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 1 Aug 2012 15:59:48 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q71FxfXe025482
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 1 Aug 2012 15:59:42 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q71FxfaM017096
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 1 Aug 2012 15:59:41 GMT
Received: from abhmt105.oracle.com (abhmt105.oracle.com [141.146.116.57])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q71FxefB014549; Wed, 1 Aug 2012 10:59:40 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 01 Aug 2012 08:59:40 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 4EBC1402B2; Wed,  1 Aug 2012 11:50:40 -0400 (EDT)
Date: Wed, 1 Aug 2012 11:50:40 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: xen-devel@lists.xensource.com, jbeulich@suse.com,
	stefano.stabellini@eu.citrix.com
Message-ID: <20120801155040.GB15812@phenom.dumpdata.com>
References: <1343745804-28028-1-git-send-email-konrad.wilk@oracle.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1343745804-28028-1-git-send-email-konrad.wilk@oracle.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Subject: Re: [Xen-devel] [PATCH] Boot PV guests with more than 128GB (v2)
 for 3.7
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jul 31, 2012 at 10:43:18AM -0400, Konrad Rzeszutek Wilk wrote:
> Changelog:
> Since v1: [http://lists.xen.org/archives/html/xen-devel/2012-07/msg01561.html]
>  - added more comments, and #ifdefs
>  - squashed The L4 and L4, L3, and L2 recycle patches together
>  - Added Acked-by's
> 
> The explanation of these patches is exactly what v1 had:
> 
> The details of this problem are nicely explained in:
> 
>  [PATCH 4/6] xen/p2m: Add logic to revector a P2M tree to use __va
>  [PATCH 5/6] xen/mmu: Copy and revector the P2M tree.
>  [PATCH 6/6] xen/mmu: Remove from __ka space PMD entries for
> 
> and the supporting patches are just nice optimizations. Pasting in
> what those patches mentioned:

With these patches I've gotten it to boot up to 384GB. Around that area
something weird happens - mainly the pagetables that the toolstack allocated
seem to have missing data. I haven't looked into the details, but this is what
the domain builder tells me:


xc_dom_alloc_segment:   ramdisk      : 0xffffffff82278000 -> 0xffffffff930b4000  (pfn 0x2278 + 0x10e3c pages)
xc_dom_malloc            : 1621 kB
xc_dom_pfn_to_ptr: domU mapping: pfn 0x2278+0x10e3c at 0x7fb0853a2000
xc_dom_do_gunzip: unzip ok, 0x4ba831c -> 0x10e3be10
xc_dom_alloc_segment:   phys2mach    : 0xffffffff930b4000 -> 0xffffffffc30b4000  (pfn 0x130b4 + 0x30000 pages)
xc_dom_malloc            : 4608 kB
xc_dom_pfn_to_ptr: domU mapping: pfn 0x130b4+0x30000 at 0x7fb0553a2000
xc_dom_alloc_page   :   start info   : 0xffffffffc30b4000 (pfn 0x430b4)
xc_dom_alloc_page   :   xenstore     : 0xffffffffc30b5000 (pfn 0x430b5)
xc_dom_alloc_page   :   console      : 0xffffffffc30b6000 (pfn 0x430b6)
nr_page_tables: 0x0000ffffffffffff/48: 0xffff000000000000 -> 0xffffffffffffffff, 1 table(s)
nr_page_tables: 0x0000007fffffffff/39: 0xffffff8000000000 -> 0xffffffffffffffff, 1 table(s)
nr_page_tables: 0x000000003fffffff/30: 0xffffffff80000000 -> 0xffffffffffffffff, 2 table(s)
nr_page_tables: 0x00000000001fffff/21: 0xffffffff80000000 -> 0xffffffffc33fffff, 538 table(s)
xc_dom_alloc_segment:   page tables  : 0xffffffffc30b7000 -> 0xffffffffc32d5000  (pfn 0x430b7 + 0x21e pages)
xc_dom_pfn_to_ptr: domU mapping: pfn 0x430b7+0x21e at 0x7fb055184000
xc_dom_alloc_page   :   boot stack   : 0xffffffffc32d5000 (pfn 0x432d5)
xc_dom_build_image  : virt_alloc_end : 0xffffffffc32d6000
xc_dom_build_image  : virt_pgtab_end : 0xffffffffc3400000

Note it is 0xffffffffc30b4000 - so already past the level2_kernel_pgt range
(L3[510]) and in level2_fixmap_pgt territory (L3[511]).

Hypervisor tells me:

(XEN) Pagetable walk from ffffffffc32d5ff8:
(XEN)  L4[0x1ff] = 000000b9804d9067 00000000000430b8
(XEN)  L3[0x1ff] = 0000000000000000 ffffffffffffffff
(XEN) domain_crash_sync called from entry.S
(XEN) Domain 13 (vcpu#0) crashed on cpu#121:
(XEN) ----[ Xen-4.1.2-OVM  x86_64  debug=n  Not tainted ]----
(XEN) CPU:    121
(XEN) RIP:    e033:[<ffffffff818a4200>]
(XEN) RFLAGS: 0000000000010202   EM: 1   CONTEXT: pv guest
(XEN) rax: 0000000000000000   rbx: 0000000000000000   rcx: 0000000000000000
(XEN) rdx: 0000000000000000   rsi: ffffffffc30b4000   rdi: 0000000000000000
(XEN) rbp: 0000000000000000   rsp: ffffffffc32d6000   r8:  0000000000000000
(XEN) r9:  0000000000000000   r10: 0000000000000000   r11: 0000000000000000
(XEN) r12: 0000000000000000   r13: 0000000000000000   r14: 0000000000000000
(XEN) r15: 0000000000000000   cr0: 000000008005003b   cr4: 00000000000026f0
(XEN) cr3: 000000b9804da000   cr2: ffffffffc32d5ff8
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e02b   cs: e033
(XEN) Guest stack trace from rsp=ffffffffc32d6000:
(XEN)   Fault while accessing guest memory.

And that EIP translates to ffffffff818a4200 T startup_xen
which does:

ENTRY(startup_xen)
        cld      
ffffffff818a4200:       fc                      cld      
#ifdef CONFIG_X86_32
        mov %esi,xen_start_info
        mov $init_thread_union+THREAD_SIZE,%esp
#else
        mov %rsi,xen_start_info
ffffffff818a4201:       48 89 34 25 48 92 94    mov    %rsi,0xffffffff81949248
ffffffff818a4208:       81       


At that stage we are still operating using the Xen-provided pagetables - which
look to have L4[511]->L3[511] empty! That sounds to me like a Xen tool-stack
problem. Jan, have you seen something similar to this?
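The index arithmetic behind that reading can be checked with a small helper
(not from any of the patches; just the standard x86-64 4-level decode):

```c
#include <stdint.h>

/* Return the page-table index a virtual address uses at a given level:
 * level 4 -> bits 47:39 (L4/PML4), level 3 -> bits 38:30 (L3/PDPT),
 * level 2 -> bits 29:21 (L2/PD). Each level indexes 512 entries. */
static unsigned int pt_index(uint64_t va, int level)
{
	return (unsigned int)(va >> (12 + 9 * (level - 1))) & 0x1ff;
}
```

For the faulting address 0xffffffffc32d5ff8 this gives 0x1ff at both level 4
and level 3, matching the hypervisor's pagetable walk above.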

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 16:00:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 16:00:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwbL7-0005Ii-Cs; Wed, 01 Aug 2012 16:00:01 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1SwbL5-0005IT-LW
	for Xen-devel@lists.xensource.com; Wed, 01 Aug 2012 15:59:59 +0000
Received: from [85.158.143.99:18236] by server-3.bemta-4.messagelabs.com id
	D5/08-01511-E7259105; Wed, 01 Aug 2012 15:59:58 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-14.tower-216.messagelabs.com!1343836798!20165691!1
X-Originating-IP: [74.125.83.43]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_SEX,RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2241 invoked from network); 1 Aug 2012 15:59:58 -0000
Received: from mail-ee0-f43.google.com (HELO mail-ee0-f43.google.com)
	(74.125.83.43)
	by server-14.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 15:59:58 -0000
Received: by eekd4 with SMTP id d4so2012572eek.30
	for <Xen-devel@lists.xensource.com>;
	Wed, 01 Aug 2012 08:59:58 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=MaVKvTlPd0nA/UMaM7PA0nwI5ZoRxJ2naeeZUBJfdB0=;
	b=jPJhfED9M3UW5rbrJL6U6E2dAE0PXw+7EbSqO5J7+n5586i/WkxhAzCEA4HGZn93k/
	M/78oc7qlH322+rrHGSNcxNG22iTK2MB4ed6iHW43uyIxyWnsmcop5wJjbuTYTWKdO3b
	TE7rly9gwoDh0hsc3tw1lHQOSclTZfNOTjKoXzP3ZtKkXbQpiONWfG4KriKmYnb1X5nE
	Cy4AxE66EJhjxGSTbB4gFHeawWTPtwJdBX/pXEcJ7mtbtj5d3hyAyLw9D9oojNtzWwaX
	7H/hRxWoEgZbyXQiJ321+dgI0JuoTCFsXD6n83nxZFY5G3uWt7pW4QizrKu3bce8tFQF
	z9rg==
MIME-Version: 1.0
Received: by 10.14.175.130 with SMTP id z2mr23174751eel.0.1343836798168; Wed,
	01 Aug 2012 08:59:58 -0700 (PDT)
Received: by 10.14.206.135 with HTTP; Wed, 1 Aug 2012 08:59:58 -0700 (PDT)
In-Reply-To: <20120801152508.GA7132@phenom.dumpdata.com>
References: <20120626181707.4203d336@mantra.us.oracle.com>
	<CAFLBxZagm=53bZ=KaWmd7XQkkAduD+znECJRsaJUqrGJDZgkQQ@mail.gmail.com>
	<20120801152508.GA7132@phenom.dumpdata.com>
Date: Wed, 1 Aug 2012 16:59:58 +0100
X-Google-Sender-Auth: prLNgjGQkOY79UwW9O-W0FyEDiE
Message-ID: <CAFLBxZb=yzcmxewCei49M6DX16x7eKZRGmxMUDbaKOTXgGZCGA@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Keir Fraser <keir.xen@gmail.com>,
	"stefano.stabellini@eu.citrix.com" <stefano.stabellini@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [HYBRID]: status update...
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Aug 1, 2012 at 4:25 PM, Konrad Rzeszutek Wilk
<konrad.wilk@oracle.com> wrote:
> On Wed, Aug 01, 2012 at 04:25:01PM +0100, George Dunlap wrote:
>> I hope this isn't bikeshedding; but I don't like "Hybrid" as a name
>> for this feature, mainly for "marketing" reasons.  I think it will
>> probably give people the wrong idea about what the technology does.
>> PV domains is one of Xen's really distinct advantages -- much simpler
>> interface, lighter-weight (no qemu, legacy boot), &c &c.  As I
>> understand it, the mode you've been calling "hybrid" still has all of
>> these advantages -- it just uses some of the HVM hardware extensions
>> to make the interface even simpler / faster.  I'm afraid "hybrid" may
>> be seen as, "Even Xen has had to give up on PV."
>>
>> Can I suggest something like "PVH" instead?  That (at least to me)
>> makes it clear that PV domains are still fully PV, but just use some
>> HVM extensions.
>
> if (xen_pvh_domain()?
>
> if (xen_pv_h_domain()?
>
> if (xen_h_domain()) ?
>
> if (xen_pvplus_domain()) ?
>
> if (xen_pv_ext_domain()) ?
>
> I think I like 'pv+'?

I could deal with pv+.  However, in general I dislike that kind of
"now even better!" marketing.  PV+, EPV (Enhanced / extended PV), PVX
(Extreme PV!) -- they all sound cool when they come out, but five
years later, when they're not so new or sexy anymore, they all sound
lame.  PVH is just descriptive -- it will always be PV with HVM
extensions, so it will age much better. :-)

 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 16:03:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 16:03:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwbNW-0005rG-VF; Wed, 01 Aug 2012 16:02:30 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1SwbNV-0005qt-Lh
	for Xen-devel@lists.xensource.com; Wed, 01 Aug 2012 16:02:29 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-11.tower-27.messagelabs.com!1343836934!1868720!1
X-Originating-IP: [188.40.164.121]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17796 invoked from network); 1 Aug 2012 16:02:14 -0000
Received: from static.121.164.40.188.clients.your-server.de (HELO
	smtp.eikelenboom.it) (188.40.164.121)
	by server-11.tower-27.messagelabs.com with AES256-SHA encrypted SMTP;
	1 Aug 2012 16:02:14 -0000
Received: from 225-68-ftth.onsneteindhoven.nl ([88.159.68.225]:49989
	helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.72) (envelope-from <linux@eikelenboom.it>)
	id 1SwbJm-0004u7-PR; Wed, 01 Aug 2012 17:58:38 +0200
Date: Wed, 1 Aug 2012 18:02:09 +0200
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <1732333982.20120801180209@eikelenboom.it>
To: George Dunlap <George.Dunlap@eu.citrix.com>
In-Reply-To: <CAFLBxZagm=53bZ=KaWmd7XQkkAduD+znECJRsaJUqrGJDZgkQQ@mail.gmail.com>
References: <20120626181707.4203d336@mantra.us.oracle.com>
	<CAFLBxZagm=53bZ=KaWmd7XQkkAduD+znECJRsaJUqrGJDZgkQQ@mail.gmail.com>
MIME-Version: 1.0
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Keir Fraser <keir.xen@gmail.com>, Ian Campbell <Ian.Campbell@citrix.com>,
	"stefano.stabellini@eu.citrix.com" <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [HYBRID]: status update...
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Wednesday, August 1, 2012, 5:25:01 PM, you wrote:

> I hope this isn't bikeshedding; but I don't like "Hybrid" as a name
> for this feature, mainly for "marketing" reasons.  I think it will
> probably give people the wrong idea about what the technology does.
> PV domains is one of Xen's really distinct advantages -- much simpler
> interface, lighter-weight (no qemu, legacy boot), &c &c.  As I
> understand it, the mode you've been calling "hybrid" still has all of
> these advantages -- it just uses some of the HVM hardware extensions
> to make the interface even simpler / faster.  I'm afraid "hybrid" may
> be seen as, "Even Xen has had to give up on PV."

Hmm, I must say I was indeed under the impression that hybrid == HVM + PV drivers.
But from what I read above it would be much more interesting: avoiding the qemu stuff, but still using the HVM hardware extensions.

So I think you have a point for people who don't dive too deep into what the actual features are.

--
Sander

> Can I suggest something like "PVH" instead?  That (at least to me)
> makes it clear that PV domains are still fully PV, but just use some
> HVM extensions.

> Thoughts?

>  -George

> On Wed, Jun 27, 2012 at 2:17 AM, Mukesh Rathor <mukesh.rathor@oracle.com> wrote:
>> Hi Guys,
>>
>> Just a quick status update. I refreshed my trees and then debugged as
>> the code had changed a lot. I'm again few weeks behind from the latest
>> tree on both linux and xen. After the refresh, I ran into few issues:
>>
>>    - xenstored is using gnttab interface that will not work for hybrid
>>      For now I just disabled it.
>>
>>    - libxl has changed a lot, so for now, I'm only supporting
>>           disk = ['phy:/dev/loop1,xvda,w']
>>
>>    - the structs pv_vcpu and hvm_vcpu are in a union now. I added a new
>>      type hyb_vcpu and am now going thru the code changing all refs.
>>
>>    - on the linux side I managed to remove all changes to non-xen files,
>>      this should help a lot.
>>
>> Once I finish the changes for hyb_vcpu union, I should be able to get
>> things working again. Then I'll refresh the linux tree, keeping xen the
>> same, and test it all out and submit linux patch. After that I'll
>> refresh xen tree and keeping same linux, test it out, and submit patch.
>>
>> Thanks,
>> Mukesh
>>
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 16:05:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 16:05:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwbQL-00068V-MN; Wed, 01 Aug 2012 16:05:25 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SwbQJ-000689-TB
	for xen-devel@lists.xensource.com; Wed, 01 Aug 2012 16:05:24 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1343837117!8649861!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24093 invoked from network); 1 Aug 2012 16:05:17 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 16:05:17 -0000
X-IronPort-AV: E=Sophos;i="4.77,694,1336348800"; d="scan'208";a="13807715"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 16:05:02 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 1 Aug 2012 17:05:02 +0100
Date: Wed, 1 Aug 2012 17:04:45 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <20120801145413.GR7227@phenom.dumpdata.com>
Message-ID: <alpine.DEB.2.02.1208011704000.4645@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1207251741470.26163@kaball.uk.xensource.com>
	<1343316846-25860-23-git-send-email-stefano.stabellini@eu.citrix.com>
	<20120801145413.GR7227@phenom.dumpdata.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linaro-dev@lists.linaro.org" <linaro-dev@lists.linaro.org>,
	Ian Campbell <Ian.Campbell@citrix.com>, "arnd@arndb.de" <arnd@arndb.de>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"catalin.marinas@arm.com" <catalin.marinas@arm.com>,
	"Tim \(Xen.org\)" <tim@xen.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [PATCH 23/24] hvc_xen: allow dom0_write_console for
	HVM guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 1 Aug 2012, Konrad Rzeszutek Wilk wrote:
> On Thu, Jul 26, 2012 at 04:34:05PM +0100, Stefano Stabellini wrote:
> > On ARM all guests are HVM guests, including Dom0.
> > Allow dom0_write_console to be called by an HVM domain.
> 
> Um, but xen_hvm_domain() != xen_pv_domain() so won't this return without
> printing anything?

Nope, it would call dom0_write_console, which issues a console hypercall.
However, I am going to remove this patch and rely on the simple serial
emulator we have in Xen for the early_printk stuff.

> > 
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > ---
> >  drivers/tty/hvc/hvc_xen.c |    5 +----
> >  1 files changed, 1 insertions(+), 4 deletions(-)
> > 
> > diff --git a/drivers/tty/hvc/hvc_xen.c b/drivers/tty/hvc/hvc_xen.c
> > index 3c04fb8..949edc2 100644
> > --- a/drivers/tty/hvc/hvc_xen.c
> > +++ b/drivers/tty/hvc/hvc_xen.c
> > @@ -616,12 +616,9 @@ static void xenboot_write_console(struct console *console, const char *string,
> >  	unsigned int linelen, off = 0;
> >  	const char *pos;
> >  
> > -	if (!xen_pv_domain())
> > -		return;
> > -
> >  	dom0_write_console(0, string, len);
> >  
> > -	if (xen_initial_domain())
> > +	if (!xen_pv_domain())
> >  		return;
> >  
> >  	domU_write_console(0, "(early) ", 8);
> > -- 
> > 1.7.2.5
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 16:06:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 16:06:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwbQq-0006Ap-3U; Wed, 01 Aug 2012 16:05:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1SwbQo-0006Ae-E0
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 16:05:54 +0000
Received: from [85.158.143.99:9784] by server-2.bemta-4.messagelabs.com id
	47/DA-17938-1E359105; Wed, 01 Aug 2012 16:05:53 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-10.tower-216.messagelabs.com!1343837151!23405633!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjEzMjU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6852 invoked from network); 1 Aug 2012 16:05:53 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 16:05:53 -0000
X-IronPort-AV: E=Sophos;i="4.77,694,1336363200"; d="scan'208";a="33224827"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 12:05:51 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 1 Aug 2012 12:05:51 -0400
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1SwbQk-0004ec-Qf	for
	xen-devel@lists.xen.org; Wed, 01 Aug 2012 17:05:50 +0100
Message-ID: <501953DE.9000509@citrix.com>
Date: Wed, 1 Aug 2012 17:05:50 +0100
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120714 Thunderbird/14.0
MIME-Version: 1.0
To: xen-devel@lists.xen.org
References: <20120626181707.4203d336@mantra.us.oracle.com>
	<CAFLBxZagm=53bZ=KaWmd7XQkkAduD+znECJRsaJUqrGJDZgkQQ@mail.gmail.com>
	<20120801152508.GA7132@phenom.dumpdata.com>
	<CAFLBxZb=yzcmxewCei49M6DX16x7eKZRGmxMUDbaKOTXgGZCGA@mail.gmail.com>
In-Reply-To: <CAFLBxZb=yzcmxewCei49M6DX16x7eKZRGmxMUDbaKOTXgGZCGA@mail.gmail.com>
X-Enigmail-Version: 1.4.3
Subject: Re: [Xen-devel] [HYBRID]: status update...
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On 01/08/12 16:59, George Dunlap wrote:
> On Wed, Aug 1, 2012 at 4:25 PM, Konrad Rzeszutek Wilk
> <konrad.wilk@oracle.com> wrote:
>> On Wed, Aug 01, 2012 at 04:25:01PM +0100, George Dunlap wrote:
>>> I hope this isn't bikeshedding; but I don't like "Hybrid" as a name
>>> for this feature, mainly for "marketing" reasons.  I think it will
>>> probably give people the wrong idea about what the technology does.
>>> PV domains is one of Xen's really distinct advantages -- much simpler
>>> interface, lighter-weight (no qemu, legacy boot), &c &c.  As I
>>> understand it, the mode you've been calling "hybrid" still has all of
>>> these advantages -- it just uses some of the HVM hardware extensions
>>> to make the interface even simpler / faster.  I'm afraid "hybrid" may
>>> be seen as, "Even Xen has had to give up on PV."
>>>
>>> Can I suggest something like "PVH" instead?  That (at least to me)
>>> makes it clear that PV domains are still fully PV, but just use some
>>> HVM extensions.
>> if (xen_pvh_domain()) ?
>>
>> if (xen_pv_h_domain()) ?
>>
>> if (xen_h_domain()) ?
>>
>> if (xen_pvplus_domain()) ?
>>
>> if (xen_pv_ext_domain()) ?
>>
>> I think I like 'pv+'?
> I could deal with pv+.  However, in general I dislike that kind of
> "now even better!" marketing.  PV+, EPV (Enhanced / extended PV), PVX
> (Extreme PV!) -- they all sound cool when they come out, but five
> years later, when they're not so new or sexy anymore, they all sound
> lame.  PVH is just descriptive -- it will always be PV with HVM
> extensions, so it will age much better. :-)
>
>  -George

See, for a perfect example, USB LowSpeed, FullSpeed, HiSpeed and now SuperSpeed.

~Andrew

>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

-- 
Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
T: +44 (0)1223 225 900, http://www.citrix.com


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 16:07:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 16:07:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwbS9-0006Jy-Ic; Wed, 01 Aug 2012 16:07:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SwbS8-0006Jm-MI
	for xen-devel@lists.xensource.com; Wed, 01 Aug 2012 16:07:16 +0000
Received: from [85.158.143.35:8213] by server-2.bemta-4.messagelabs.com id
	9C/7C-17938-43459105; Wed, 01 Aug 2012 16:07:16 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1343837222!5505951!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32016 invoked from network); 1 Aug 2012 16:07:08 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 16:07:08 -0000
X-IronPort-AV: E=Sophos;i="4.77,694,1336348800"; d="scan'208";a="13807760"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 16:07:00 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 1 Aug 2012 17:07:00 +0100
Date: Wed, 1 Aug 2012 17:06:42 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <20120801145257.GQ7227@phenom.dumpdata.com>
Message-ID: <alpine.DEB.2.02.1208011705100.4645@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1207251741470.26163@kaball.uk.xensource.com>
	<1343316846-25860-21-git-send-email-stefano.stabellini@eu.citrix.com>
	<20120801145257.GQ7227@phenom.dumpdata.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linaro-dev@lists.linaro.org" <linaro-dev@lists.linaro.org>,
	Ian Campbell <Ian.Campbell@citrix.com>, "arnd@arndb.de" <arnd@arndb.de>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"catalin.marinas@arm.com" <catalin.marinas@arm.com>,
	"Tim \(Xen.org\)" <tim@xen.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [PATCH 21/24] arm/v2m: initialize arch_timers even
 if v2m_timer is not present
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 1 Aug 2012, Konrad Rzeszutek Wilk wrote:
> On Thu, Jul 26, 2012 at 04:34:03PM +0100, Stefano Stabellini wrote:
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> 
> Should the maintainer of the v2m be CC-ed here?
> This looks like a bug-fix in itself?

I think so. I'll CC Russell King next time.

> > ---
> >  arch/arm/mach-vexpress/v2m.c |   11 ++++++-----
> >  1 files changed, 6 insertions(+), 5 deletions(-)
> > 
> > diff --git a/arch/arm/mach-vexpress/v2m.c b/arch/arm/mach-vexpress/v2m.c
> > index fde26ad..dee1451 100644
> > --- a/arch/arm/mach-vexpress/v2m.c
> > +++ b/arch/arm/mach-vexpress/v2m.c
> > @@ -637,16 +637,17 @@ static void __init v2m_dt_timer_init(void)
> >  	node = of_find_compatible_node(NULL, NULL, "arm,sp810");
> >  	v2m_sysctl_init(of_iomap(node, 0));
> >  
> > -	err = of_property_read_string(of_aliases, "arm,v2m_timer", &path);
> > -	if (WARN_ON(err))
> > -		return;
> > -	node = of_find_node_by_path(path);
> > -	v2m_sp804_init(of_iomap(node, 0), irq_of_parse_and_map(node, 0));
> >  	if (arch_timer_of_register() != 0)
> >  		twd_local_timer_of_register();
> >  
> >  	if (arch_timer_sched_clock_init() != 0)
> >  		versatile_sched_clock_init(v2m_sysreg_base + V2M_SYS_24MHZ, 24000000);
> > +
> > +	err = of_property_read_string(of_aliases, "arm,v2m_timer", &path);
> > +	if (WARN_ON(err))
> > +		return;
> > +	node = of_find_node_by_path(path);
> > +	v2m_sp804_init(of_iomap(node, 0), irq_of_parse_and_map(node, 0));
> >  }
> >  
> >  static struct sys_timer v2m_dt_timer = {
> > -- 
> > 1.7.2.5
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 16:08:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 16:08:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwbSt-0006P4-0K; Wed, 01 Aug 2012 16:08:03 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SwbSr-0006Om-GQ
	for xen-devel@lists.xensource.com; Wed, 01 Aug 2012 16:08:01 +0000
Received: from [85.158.139.83:30889] by server-5.bemta-5.messagelabs.com id
	08/E6-02722-06459105; Wed, 01 Aug 2012 16:08:00 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-15.tower-182.messagelabs.com!1343837280!29698775!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29663 invoked from network); 1 Aug 2012 16:08:00 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 16:08:00 -0000
X-IronPort-AV: E=Sophos;i="4.77,694,1336348800"; d="scan'208";a="13807782"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 16:07:59 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 1 Aug 2012 17:08:00 +0100
Date: Wed, 1 Aug 2012 17:07:42 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <20120801143551.GI7227@phenom.dumpdata.com>
Message-ID: <alpine.DEB.2.02.1208011707280.4645@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1207251741470.26163@kaball.uk.xensource.com>
	<1343316846-25860-4-git-send-email-stefano.stabellini@eu.citrix.com>
	<20120726163759.GE9222@phenom.dumpdata.com>
	<1343381305.6812.116.camel@zakaz.uk.xensource.com>
	<20120801143551.GI7227@phenom.dumpdata.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linaro-dev@lists.linaro.org" <linaro-dev@lists.linaro.org>,
	Ian Campbell <Ian.Campbell@citrix.com>, "arnd@arndb.de" <arnd@arndb.de>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"catalin.marinas@arm.com" <catalin.marinas@arm.com>,
	"Tim \(Xen.org\)" <tim@xen.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [PATCH 04/24] xen/arm: sync_bitops
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 1 Aug 2012, Konrad Rzeszutek Wilk wrote:
> On Fri, Jul 27, 2012 at 10:28:25AM +0100, Ian Campbell wrote:
> > On Thu, 2012-07-26 at 17:37 +0100, Konrad Rzeszutek Wilk wrote:
> > > On Thu, Jul 26, 2012 at 04:33:46PM +0100, Stefano Stabellini wrote:
> > > > sync_bitops functions are equivalent to the SMP implementation of the
> > > > original functions, regardless of whether CONFIG_SMP is defined.
> > > 
> > > So why can't the code be changed to use that? Is it that
> > > the _set_bit, _clear_bit, etc are not available with !CONFIG_SMP?
> > 
> > _set_bit etc are not SMP safe if !CONFIG_SMP. But under Xen you might be
> > communicating with a completely external entity who might be on another
> > CPU (e.g. two uniprocessor guests communicating via event channels and
> > grant tables). So we need a variant of the bit ops which are SMP safe
> > even on a UP kernel.
> > 
> > The users are common code and the sync_foo vs foo distinction matters on
> > some platforms (e.g. x86 where a UP kernel would omit the LOCK prefix
> > for the normal ones).
> 
> OK, that makes sense. Stefano can you include that comment in the git
> commit description and in the sync_bitops.h file please?

Yep, I'll do that.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 16:09:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 16:09:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwbTg-0006Wn-FP; Wed, 01 Aug 2012 16:08:52 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SwbTe-0006VZ-AT
	for Xen-devel@lists.xensource.com; Wed, 01 Aug 2012 16:08:50 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1343837318!1692150!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5427 invoked from network); 1 Aug 2012 16:08:39 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 16:08:39 -0000
X-IronPort-AV: E=Sophos;i="4.77,694,1336348800"; d="scan'208";a="13807790"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 16:08:38 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Wed, 1 Aug 2012
	17:08:38 +0100
Message-ID: <1343837316.27221.87.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: George Dunlap <George.Dunlap@eu.citrix.com>
Date: Wed, 1 Aug 2012 17:08:36 +0100
In-Reply-To: <CAFLBxZb=yzcmxewCei49M6DX16x7eKZRGmxMUDbaKOTXgGZCGA@mail.gmail.com>
References: <20120626181707.4203d336@mantra.us.oracle.com>
	<CAFLBxZagm=53bZ=KaWmd7XQkkAduD+znECJRsaJUqrGJDZgkQQ@mail.gmail.com>
	<20120801152508.GA7132@phenom.dumpdata.com>
	<CAFLBxZb=yzcmxewCei49M6DX16x7eKZRGmxMUDbaKOTXgGZCGA@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Keir Fraser <keir.xen@gmail.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [HYBRID]: status update...
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-08-01 at 16:59 +0100, George Dunlap wrote:
> On Wed, Aug 1, 2012 at 4:25 PM, Konrad Rzeszutek Wilk
> > I think I like 'pv+'?
> 
> I could deal with pv+. 

+ is a bad idea because it is not valid in a C identifier name (or
indeed in most languages), which means you would actually need to call it
something else in the code.

Plus (no pun intended) the reasons George mentions.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 16:15:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 16:15:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwbZD-0006ts-8d; Wed, 01 Aug 2012 16:14:35 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1SwbZC-0006tn-56
	for Xen-devel@lists.xensource.com; Wed, 01 Aug 2012 16:14:34 +0000
Received: from [85.158.138.51:55356] by server-8.bemta-3.messagelabs.com id
	64/40-30925-4E559105; Wed, 01 Aug 2012 16:14:28 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-4.tower-174.messagelabs.com!1343837666!29899698!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDY3MjQwNA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14817 invoked from network); 1 Aug 2012 16:14:28 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-4.tower-174.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 1 Aug 2012 16:14:28 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q71GEHmS010025
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 1 Aug 2012 16:14:18 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q71GEH98018229
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 1 Aug 2012 16:14:17 GMT
Received: from abhmt108.oracle.com (abhmt108.oracle.com [141.146.116.60])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q71GEG0i026560; Wed, 1 Aug 2012 11:14:16 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 01 Aug 2012 09:14:16 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id C132E402B2; Wed,  1 Aug 2012 12:05:15 -0400 (EDT)
Date: Wed, 1 Aug 2012 12:05:15 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: George Dunlap <George.Dunlap@eu.citrix.com>
Message-ID: <20120801160515.GA16155@phenom.dumpdata.com>
References: <20120626181707.4203d336@mantra.us.oracle.com>
	<CAFLBxZagm=53bZ=KaWmd7XQkkAduD+znECJRsaJUqrGJDZgkQQ@mail.gmail.com>
	<20120801152508.GA7132@phenom.dumpdata.com>
	<CAFLBxZb=yzcmxewCei49M6DX16x7eKZRGmxMUDbaKOTXgGZCGA@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAFLBxZb=yzcmxewCei49M6DX16x7eKZRGmxMUDbaKOTXgGZCGA@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	"Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Keir Fraser <keir.xen@gmail.com>,
	"stefano.stabellini@eu.citrix.com" <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [HYBRID]: status update...
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Aug 01, 2012 at 04:59:58PM +0100, George Dunlap wrote:
> On Wed, Aug 1, 2012 at 4:25 PM, Konrad Rzeszutek Wilk
> <konrad.wilk@oracle.com> wrote:
> > On Wed, Aug 01, 2012 at 04:25:01PM +0100, George Dunlap wrote:
> >> I hope this isn't bikeshedding; but I don't like "Hybrid" as a name
> >> for this feature, mainly for "marketing" reasons.  I think it will
> >> probably give people the wrong idea about what the technology does.
> >> PV domains is one of Xen's really distinct advantages -- much simpler
> >> interface, lighter-weight (no qemu, legacy boot), &c &c.  As I
> >> understand it, the mode you've been calling "hybrid" still has all of
> >> these advantages -- it just uses some of the HVM hardware extensions
> >> to make the interface even simpler / faster.  I'm afraid "hybrid" may
> >> be seen as, "Even Xen has had to give up on PV."
> >>
> >> Can I suggest something like "PVH" instead?  That (at least to me)
> >> makes it clear that PV domains are still fully PV, but just use some
> >> HVM extensions.
> >
> > if (xen_pvh_domain()?
> >
> > if (xen_pv_h_domain()?
> >
> > if (xen_h_domain()) ?
> >
> > if (xen_pvplus_domain()) ?
> >
> > if (xen_pv_ext_domain()) ?
> >
> > I think I like 'pv+'?
> 
> I could deal with pv+.  However, in general I dislike that kind of
> "now even better!" marketing.  PV+, EPV (Enhanced / extended PV), PVX
> (Extreme PV!) -- they all sound cool when they come out, but five
> years later, when they're not so new or sexy anymore, they all sound
> lame.  PVH is just descriptive -- it will always be PV with HVM
> extensions, so it will age much better. :-)

How about pv_with_mmu_in_hvm_container_domain() ?

Ok, that is a bit too lengthy. How about then:

if (xen_pvhvm_ext_domain()) ?

The 'if (xen_pvh_domain())' is just one character away from 'xen_pv_domain()'
and one might not notice the difference. Perhaps then 'if (xen_pv_h_domain())'?

> 
>  -George
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 16:17:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 16:17:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwbbN-0006zH-Q1; Wed, 01 Aug 2012 16:16:49 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin.df@gmail.com>) id 1SwbbM-0006z6-5G
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 16:16:48 +0000
Received: from [85.158.143.99:20959] by server-2.bemta-4.messagelabs.com id
	15/77-17938-F6659105; Wed, 01 Aug 2012 16:16:47 +0000
X-Env-Sender: raistlin.df@gmail.com
X-Msg-Ref: server-5.tower-216.messagelabs.com!1343837806!29461918!1
X-Originating-IP: [74.125.82.51]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
  RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24883 invoked from network); 1 Aug 2012 16:16:46 -0000
Received: from mail-wg0-f51.google.com (HELO mail-wg0-f51.google.com)
	(74.125.82.51)
	by server-5.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 16:16:46 -0000
Received: by wgbed3 with SMTP id ed3so5217212wgb.32
	for <xen-devel@lists.xen.org>; Wed, 01 Aug 2012 09:16:46 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:subject:from:to:cc:date:content-type:x-mailer
	:mime-version; bh=XEBTEzK9OI3kI1lLe71ZZ1VQQan5YhXPrA3/2U7VCHI=;
	b=ju2Iu/w7ztjxwWElqQK4VoW8xrpKeh07bZ4vtWWgO1ypbwkWfxabI+bf3lxOSRA1D4
	Ji70AwURhEfXDfz9RG13o303wvoIhAnXxYABRG79POmTPdRtgn4fMDucQ94J4BvBKVis
	GDnXEbKyYIiT0ExxJFckc2XrAUkJWZaluA7HpxxH4H9EQu6PZHjLAPsZtyntMypHmse1
	wT05awHhgRPhjZsko3iJ25JyjdsG732tUWigWupWeWPN/s7YtsB5d6U7PhpEuTkYOTiC
	XgKORGH8EopntS4Sb96XjXCLBg6yNHlReV5UyG76YYQ7tyZxnddkOaQesC7Zv0XjL8rC
	v+7g==
Received: by 10.180.98.138 with SMTP id ei10mr17801264wib.1.1343837806183;
	Wed, 01 Aug 2012 09:16:46 -0700 (PDT)
Received: from [192.168.0.40] (ip-176-206.sn2.eutelia.it. [83.211.176.206])
	by mx.google.com with ESMTPS id ep14sm35389459wid.0.2012.08.01.09.16.42
	(version=TLSv1/SSLv3 cipher=OTHER);
	Wed, 01 Aug 2012 09:16:45 -0700 (PDT)
Message-ID: <1343837796.4958.32.camel@Solace>
From: Dario Faggioli <raistlin@linux.it>
To: xen-devel <xen-devel@lists.xen.org>
Date: Wed, 01 Aug 2012 18:16:36 +0200
X-Mailer: Evolution 3.4.3-1
Mime-Version: 1.0
Cc: Andre Przywara <andre.przywara@amd.com>,
	Anil Madhavapeddy <anil@recoil.org>, George Dunlap <dunlapg@gmail.com>,
	Jan Beulich <JBeulich@suse.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>, "Zhang,
	Yang Z" <yang.z.zhang@intel.com>
Subject: [Xen-devel] NUMA TODO-list for xen-devel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7610754884113169368=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--===============7610754884113169368==
Content-Type: multipart/signed; micalg="pgp-sha1"; protocol="application/pgp-signature";
	boundary="=-Cqdwpdx1Fkmz9VQNIpOo"


--=-Cqdwpdx1Fkmz9VQNIpOo
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hi everyone,

With automatic placement finally landing in xen-unstable, I started
thinking about what I could work on next, still in the field of
improving Xen's NUMA support. Well, it turned out that running out of
things to do is not an option! :-O

In fact, I can think of quite a few open issues in that area, which I'm
just braindumping here. If anyone has thoughts, ideas or feedback,
I'd be happy to serve as a collector of them. I've already
created a Wiki page to help with the tracking. You can see it here
(for now it basically replicates this e-mail):

 http://wiki.xen.org/wiki/Xen_NUMA_Roadmap

I'm putting a [D] (standing for Dario) near the points I've started
working on or looking at, and again, I'd be happy to try tracking this
too, i.e., keeping the list of "who-is-doing-what" updated, in order to
ease collaboration.

So, let's cut the talking:

    - Automatic placement at guest creation time. Basics are there and
      will be shipping with 4.2. However, a lot of other things are
      missing and/or can be improved, for instance:
[D]    * automated verification and testing of the placement;
       * benchmarks and improvements of the placement heuristic;
[D]    * choosing/building up some measure of node load (more accurate
         than just counting vcpus) onto which to rely during placement;
       * consider IONUMA during placement;
       * automatic placement of Dom0, if possible (my current series is
         only affecting DomU)
       * having internal Xen data structures honour the placement (e.g.,
         I've been told that right now vcpu stacks are always allocated
         on node 0... Andrew?).

[D] - NUMA aware scheduling in Xen. Don't pin vcpus on nodes' pcpus,
      just have them _prefer_ running on the nodes where their memory
      is.

[D] - Dynamic memory migration between different nodes of the host. As
      the counter-part of the NUMA-aware scheduler.

    - Virtual NUMA topology exposure to guests (a.k.a. guest-NUMA). If a
      guest ends up on more than one node, make sure it knows it's
      running on a NUMA platform (smaller than the actual host, but
      still NUMA). This interacts with some of the above points:
       * consider this during automatic placement for
         resuming/migrating domains (if they have a virtual topology,
         better not to change it);
       * consider this during memory migration (it can change the
         actual topology, should we update it on-line or disable memory
         migration?)

    - NUMA and ballooning and memory sharing. In some more detail:
       * page sharing on NUMA boxes: it's probably sane to make it
         possible to disable sharing pages across nodes;
       * ballooning and its interaction with placement (races, amount of
         memory needed and reported being different at different time,
         etc.).

    - Inter-VM dependencies and communication issues. If a workload is
      made up of more than just one VM and they all share the same (NUMA)
      host, it might be best to have them share the nodes as much as
      possible, or perhaps do exactly the opposite, depending on the
      specific characteristics of the workload itself; this might be
      considered during placement, memory migration and perhaps
      scheduling.

    - Benchmarking and performance evaluation in general. Meaning both
      agreeing on a (set of) relevant workload(s) and on how to extract
      meaningful performance data from there (and maybe how to do that
      automatically?).

So, what do you think?

Thanks and Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)

--=-Cqdwpdx1Fkmz9VQNIpOo
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iEYEABECAAYFAlAZVmQACgkQk4XaBE3IOsRmvgCfcQc7ZdaunJkfSMidV7BqI6/n
UcgAnA/TrzD6BFDlVIttm3MfUDi+rxDg
=DuYq
-----END PGP SIGNATURE-----

--=-Cqdwpdx1Fkmz9VQNIpOo--



--===============7610754884113169368==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7610754884113169368==--



From xen-devel-bounces@lists.xen.org Wed Aug 01 16:17:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 16:17:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwbbN-0006zH-Q1; Wed, 01 Aug 2012 16:16:49 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin.df@gmail.com>) id 1SwbbM-0006z6-5G
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 16:16:48 +0000
Received: from [85.158.143.99:20959] by server-2.bemta-4.messagelabs.com id
	15/77-17938-F6659105; Wed, 01 Aug 2012 16:16:47 +0000
X-Env-Sender: raistlin.df@gmail.com
X-Msg-Ref: server-5.tower-216.messagelabs.com!1343837806!29461918!1
X-Originating-IP: [74.125.82.51]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
  RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24883 invoked from network); 1 Aug 2012 16:16:46 -0000
Received: from mail-wg0-f51.google.com (HELO mail-wg0-f51.google.com)
	(74.125.82.51)
	by server-5.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 16:16:46 -0000
Received: by wgbed3 with SMTP id ed3so5217212wgb.32
	for <xen-devel@lists.xen.org>; Wed, 01 Aug 2012 09:16:46 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:subject:from:to:cc:date:content-type:x-mailer
	:mime-version; bh=XEBTEzK9OI3kI1lLe71ZZ1VQQan5YhXPrA3/2U7VCHI=;
	b=ju2Iu/w7ztjxwWElqQK4VoW8xrpKeh07bZ4vtWWgO1ypbwkWfxabI+bf3lxOSRA1D4
	Ji70AwURhEfXDfz9RG13o303wvoIhAnXxYABRG79POmTPdRtgn4fMDucQ94J4BvBKVis
	GDnXEbKyYIiT0ExxJFckc2XrAUkJWZaluA7HpxxH4H9EQu6PZHjLAPsZtyntMypHmse1
	wT05awHhgRPhjZsko3iJ25JyjdsG732tUWigWupWeWPN/s7YtsB5d6U7PhpEuTkYOTiC
	XgKORGH8EopntS4Sb96XjXCLBg6yNHlReV5UyG76YYQ7tyZxnddkOaQesC7Zv0XjL8rC
	v+7g==
Received: by 10.180.98.138 with SMTP id ei10mr17801264wib.1.1343837806183;
	Wed, 01 Aug 2012 09:16:46 -0700 (PDT)
Received: from [192.168.0.40] (ip-176-206.sn2.eutelia.it. [83.211.176.206])
	by mx.google.com with ESMTPS id ep14sm35389459wid.0.2012.08.01.09.16.42
	(version=TLSv1/SSLv3 cipher=OTHER);
	Wed, 01 Aug 2012 09:16:45 -0700 (PDT)
Message-ID: <1343837796.4958.32.camel@Solace>
From: Dario Faggioli <raistlin@linux.it>
To: xen-devel <xen-devel@lists.xen.org>
Date: Wed, 01 Aug 2012 18:16:36 +0200
X-Mailer: Evolution 3.4.3-1
Mime-Version: 1.0
Cc: Andre Przywara <andre.przywara@amd.com>,
	Anil Madhavapeddy <anil@recoil.org>, George Dunlap <dunlapg@gmail.com>,
	Jan Beulich <JBeulich@suse.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>, "Zhang,
	Yang Z" <yang.z.zhang@intel.com>
Subject: [Xen-devel] NUMA TODO-list for xen-devel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7610754884113169368=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--===============7610754884113169368==
Content-Type: multipart/signed; micalg="pgp-sha1"; protocol="application/pgp-signature";
	boundary="=-Cqdwpdx1Fkmz9VQNIpOo"


--=-Cqdwpdx1Fkmz9VQNIpOo
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hi everyone,

With automatic placement finally landing into xen-unstable, I stated
thinking about what I could work on next, still in the field of
improving Xen's NUMA support. Well, it turned out that running out of
things to do is not an option! :-O

In fact, I can think of quite a bit of open issues in that area, that I'm
just braindumping here. If anyone has thoughts or idea or feedback or
whatever, I'd be happy to serve as a collector of them. I've already
created a Wiki page to help with the tracking. You can see it here
(for now it basically replicates this e-mail):

 http://wiki.xen.org/wiki/Xen_NUMA_Roadmap

I'm putting a [D] (standing for Dario) near the points I've started
working on or looking at, and again, I'd be happy to try tracking this
too, i.e., keeping the list of "who-is-doing-what" updated, in order to
ease collaboration.

So, let's cut the talking:

    - Automatic placement at guest creation time. Basics are there and
      will be shipping with 4.2. However, a lot of other things are
      missing and/or can be improved, for instance:
[D]    * automated verification and testing of the placement;
       * benchmarks and improvements of the placement heuristic;
[D]    * choosing/building up some measure of node load (more accurate
         than just counting vcpus) onto which to rely during placement;
       * consider IONUMA during placement;
       * automatic placement of Dom0, if possible (my current series is
         only affecting DomU)
       * having internal xen data structure honour the placement (e.g.,=20
         I've been told that right now vcpu stacks are always allocated
         on node 0... Andrew?).

[D] - NUMA aware scheduling in Xen. Don't pin vcpus on nodes' pcpus,
      just have them _prefer_ running on the nodes where their memory
      is.

[D] - Dynamic memory migration between different nodes of the host. As
      the counter-part of the NUMA-aware scheduler.

    - Virtual NUMA topology exposure to guests (a.k.a guest-numa). If a
      guest ends up on more than one nodes, make sure it knows it's
      running on a NUMA platform (smaller than the actual host, but
      still NUMA). This interacts with some of the above points:
       * consider this during automatic placement for
         resuming/migrating domains (if they have a virtual topology,
         better not to change it);
       * consider this during memory migration (it can change the
         actual topology; should we update it on-line or disable memory
         migration?)

    - NUMA and ballooning and memory sharing. In some more details:
       * page sharing on NUMA boxes: it's probably sane to make it
         possible to disable page sharing across nodes;
       * ballooning and its interaction with placement (races, amount of
         memory needed and reported being different at different time,
         etc.).

    - Inter-VM dependencies and communication issues. If a workload is
      made up of more than just a VM and they all share the same (NUMA)
      host, it might be best to have them share the nodes as much as
      possible, or perhaps do exactly the opposite, depending on the
      specific characteristics of the workload itself; this might be
      considered during placement, memory migration and perhaps
      scheduling.

    - Benchmarking and performance evaluation in general. This means both
      agreeing on a (set of) relevant workload(s) and on how to extract
      meaningful performance data from them (and maybe how to do that
      automatically?).

So, what do you think?

Thanks and Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)







_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel




From xen-devel-bounces@lists.xen.org Wed Aug 01 16:19:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 16:19:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwbdN-00078q-GM; Wed, 01 Aug 2012 16:18:53 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SwbdL-00078W-8z
	for xen-devel@lists.xensource.com; Wed, 01 Aug 2012 16:18:51 +0000
Received: from [85.158.139.83:5488] by server-9.bemta-5.messagelabs.com id
	3C/73-01069-8E659105; Wed, 01 Aug 2012 16:18:48 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-8.tower-182.messagelabs.com!1343837927!18500504!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27104 invoked from network); 1 Aug 2012 16:18:47 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 16:18:47 -0000
X-IronPort-AV: E=Sophos;i="4.77,695,1336348800"; d="scan'208";a="13807950"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 16:18:47 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 1 Aug 2012 17:18:47 +0100
Date: Wed, 1 Aug 2012 17:18:29 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <20120801145215.GP7227@phenom.dumpdata.com>
Message-ID: <alpine.DEB.2.02.1208011716040.4645@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1207251741470.26163@kaball.uk.xensource.com>
	<1343316846-25860-20-git-send-email-stefano.stabellini@eu.citrix.com>
	<20120801145215.GP7227@phenom.dumpdata.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linaro-dev@lists.linaro.org" <linaro-dev@lists.linaro.org>,
	Ian Campbell <Ian.Campbell@citrix.com>, "arnd@arndb.de" <arnd@arndb.de>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"catalin.marinas@arm.com" <catalin.marinas@arm.com>,
	"Tim \(Xen.org\)" <tim@xen.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [PATCH 20/24] xen: update xen_add_to_physmap
	interface
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 1 Aug 2012, Konrad Rzeszutek Wilk wrote:
> On Thu, Jul 26, 2012 at 04:34:02PM +0100, Stefano Stabellini wrote:
> > Update struct xen_add_to_physmap to be in sync with Xen's version of the
> > structure.
> > The size field was introduced by:
> > 
> > changeset:   24164:707d27fe03e7
> > user:        Jean Guyader <jean.guyader@eu.citrix.com>
> > date:        Fri Nov 18 13:42:08 2011 +0000
> > summary:     mm: New XENMEM space, XENMAPSPACE_gmfn_range
> > 
> > According to the comment:
> > 
> > "This new field .size is located in the 16 bits padding between .domid
> > and .space in struct xen_add_to_physmap to stay compatible with older
> > versions."
> > 
> > This is not true on ARM, where there is no padding, but it is valid on
> > x86, so introducing size is safe on x86 and is going to fix the
> > interface for ARM.
> 
> Has this actually been checked for backwards compatibility? It sounds
> like it should work just fine with Xen 4.0, right?
> 
> I believe this also helps Mukesh's patches, so CC-ing him here for
> his Ack.
> 
> I can put this in the tree right now if we are 100% sure it's compatible with 4.0.

Yes, it is: 4-byte integers are 4-byte aligned on both 32-bit and
64-bit x86.


From xen-devel-bounces@lists.xen.org Wed Aug 01 16:20:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 16:20:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwbeI-0007DW-UJ; Wed, 01 Aug 2012 16:19:50 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SwbeI-0007D1-8P
	for xen-devel@lists.xensource.com; Wed, 01 Aug 2012 16:19:50 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1343837984!11014579!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17688 invoked from network); 1 Aug 2012 16:19:44 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 16:19:44 -0000
X-IronPort-AV: E=Sophos;i="4.77,695,1336348800"; d="scan'208";a="13807960"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 16:19:44 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 1 Aug 2012 17:19:44 +0100
Date: Wed, 1 Aug 2012 17:19:26 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <20120801144818.GO7227@phenom.dumpdata.com>
Message-ID: <alpine.DEB.2.02.1208011719050.4645@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1207251741470.26163@kaball.uk.xensource.com>
	<1343316846-25860-18-git-send-email-stefano.stabellini@eu.citrix.com>
	<20120801144818.GO7227@phenom.dumpdata.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linaro-dev@lists.linaro.org" <linaro-dev@lists.linaro.org>,
	Ian Campbell <Ian.Campbell@citrix.com>, "arnd@arndb.de" <arnd@arndb.de>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"catalin.marinas@arm.com" <catalin.marinas@arm.com>,
	"Tim \(Xen.org\)" <tim@xen.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [PATCH 18/24] xen/arm: compile blkfront and blkback
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 1 Aug 2012, Konrad Rzeszutek Wilk wrote:
> On Thu, Jul 26, 2012 at 04:34:00PM +0100, Stefano Stabellini wrote:
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > ---
> >  drivers/block/xen-blkback/blkback.c  |    1 +
> >  include/xen/interface/io/protocols.h |    3 +++
> >  2 files changed, 4 insertions(+), 0 deletions(-)
> > 
> > diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
> > index 73f196c..63dd5b9 100644
> > --- a/drivers/block/xen-blkback/blkback.c
> > +++ b/drivers/block/xen-blkback/blkback.c
> > @@ -42,6 +42,7 @@
> >  
> >  #include <xen/events.h>
> >  #include <xen/page.h>
> > +#include <xen/xen.h>
> >  #include <asm/xen/hypervisor.h>
> >  #include <asm/xen/hypercall.h>
> >  #include "common.h"
> > diff --git a/include/xen/interface/io/protocols.h b/include/xen/interface/io/protocols.h
> > index 01fc8ae..0eafaf2 100644
> > --- a/include/xen/interface/io/protocols.h
> > +++ b/include/xen/interface/io/protocols.h
> > @@ -5,6 +5,7 @@
> >  #define XEN_IO_PROTO_ABI_X86_64     "x86_64-abi"
> >  #define XEN_IO_PROTO_ABI_IA64       "ia64-abi"
> >  #define XEN_IO_PROTO_ABI_POWERPC64  "powerpc64-abi"
> > +#define XEN_IO_PROTO_ABI_ARM        "arm-abi"
> 
> So one that has all of the 32/64 issues worked out? Nice.

Yes, that is the idea, but it needs another patch to actually achieve
the goal :)

> >  
> >  #if defined(__i386__)
> >  # define XEN_IO_PROTO_ABI_NATIVE XEN_IO_PROTO_ABI_X86_32
> > @@ -14,6 +15,8 @@
> >  # define XEN_IO_PROTO_ABI_NATIVE XEN_IO_PROTO_ABI_IA64
> >  #elif defined(__powerpc64__)
> >  # define XEN_IO_PROTO_ABI_NATIVE XEN_IO_PROTO_ABI_POWERPC64
> > +#elif defined(__arm__)
> > +# define XEN_IO_PROTO_ABI_NATIVE XEN_IO_PROTO_ABI_ARM
> >  #else
> >  # error arch fixup needed here
> >  #endif
> > -- 
> > 1.7.2.5
> 


From xen-devel-bounces@lists.xen.org Wed Aug 01 16:21:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 16:21:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swbfl-0007MX-DS; Wed, 01 Aug 2012 16:21:21 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Swbfj-0007MI-LU
	for xen-devel@lists.xensource.com; Wed, 01 Aug 2012 16:21:19 +0000
Received: from [85.158.138.51:37696] by server-1.bemta-3.messagelabs.com id
	71/2E-31934-E7759105; Wed, 01 Aug 2012 16:21:18 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-15.tower-174.messagelabs.com!1343838078!28174258!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11961 invoked from network); 1 Aug 2012 16:21:18 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 16:21:18 -0000
X-IronPort-AV: E=Sophos;i="4.77,695,1336348800"; d="scan'208";a="13807986"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 16:21:17 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 1 Aug 2012 17:21:17 +0100
Date: Wed, 1 Aug 2012 17:21:00 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <20120801143946.GK7227@phenom.dumpdata.com>
Message-ID: <alpine.DEB.2.02.1208011719570.4645@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1207251741470.26163@kaball.uk.xensource.com>
	<1343316846-25860-13-git-send-email-stefano.stabellini@eu.citrix.com>
	<1343382276.6812.126.camel@zakaz.uk.xensource.com>
	<alpine.DEB.2.02.1207271514140.26163@kaball.uk.xensource.com>
	<1343399630.25096.4.camel@zakaz.uk.xensource.com>
	<20120801143946.GK7227@phenom.dumpdata.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linaro-dev@lists.linaro.org" <linaro-dev@lists.linaro.org>,
	Ian Campbell <Ian.Campbell@citrix.com>, "arnd@arndb.de" <arnd@arndb.de>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"catalin.marinas@arm.com" <catalin.marinas@arm.com>,
	"Tim \(Xen.org\)" <tim@xen.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [PATCH 13/24] xen/arm: get privilege status
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 1 Aug 2012, Konrad Rzeszutek Wilk wrote:
> On Fri, Jul 27, 2012 at 03:33:50PM +0100, Ian Campbell wrote:
> > On Fri, 2012-07-27 at 15:25 +0100, Stefano Stabellini wrote:
> > > On Fri, 27 Jul 2012, Ian Campbell wrote:
> > > > On Thu, 2012-07-26 at 16:33 +0100, Stefano Stabellini wrote:
> > > > > Use Xen features to figure out if we are privileged.
> > > > > 
> > > > > XENFEAT_dom0 was introduced by 23735 in xen-unstable.hg.
> > > > > 
> > > > > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > > > > ---
> > > > >  arch/arm/xen/enlighten.c         |    7 +++++++
> > > > >  include/xen/interface/features.h |    3 +++
> > > > >  2 files changed, 10 insertions(+), 0 deletions(-)
> > > > > 
> > > > > diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
> > > > > index dc68074..2e013cf 100644
> > > > > --- a/arch/arm/xen/enlighten.c
> > > > > +++ b/arch/arm/xen/enlighten.c
> > > > > @@ -2,6 +2,7 @@
> > > > >  #include <xen/interface/xen.h>
> > > > >  #include <xen/interface/memory.h>
> > > > >  #include <xen/platform_pci.h>
> > > > > +#include <xen/features.h>
> > > > >  #include <asm/xen/hypervisor.h>
> > > > >  #include <asm/xen/hypercall.h>
> > > > >  #include <linux/module.h>
> > > > > @@ -58,6 +59,12 @@ int __init xen_guest_init(void)
> > > > >  	}
> > > > >  	xen_domain_type = XEN_HVM_DOMAIN;
> > > > >  
> > > > > +	xen_setup_features();
> > > > > +	if (xen_feature(XENFEAT_dom0))
> > > > > +		xen_start_info->flags |= SIF_INITDOMAIN|SIF_PRIVILEGED;
> > > > > +	else
> > > > > +		xen_start_info->flags &= ~(SIF_INITDOMAIN|SIF_PRIVILEGED);
> > > > 
> > > > What happens here on platforms prior to hypervisor changeset 23735?
> > > 
> > > It wouldn't work.
> > > Considering that we are certainly not going to backport ARM support to
> > > Xen 4.1, and that both ARM and XENFEAT_dom0 will be present in Xen 4.2,
> > > do we really need to support the Xen unstable changesets between ARM was
> > > introduced and XENFEAT_dom0 appeared?
> 
> So should it just panic and say "AAAAAAH"?

I could panic if I found out that XENFEAT_dom0 is unimplemented, but
in practice I only get to know whether it is set, not whether it is
implemented at all...


> > > > > +#include <xen/features.h>
> > > > >  #include <asm/xen/hypervisor.h>
> > > > >  #include <asm/xen/hypercall.h>
> > > > >  #include <linux/module.h>
> > > > > @@ -58,6 +59,12 @@ int __init xen_guest_init(void)
> > > > >  	}
> > > > >  	xen_domain_type = XEN_HVM_DOMAIN;
> > > > >  
> > > > > +	xen_setup_features();
> > > > > +	if (xen_feature(XENFEAT_dom0))
> > > > > +		xen_start_info->flags |= SIF_INITDOMAIN|SIF_PRIVILEGED;
> > > > > +	else
> > > > > +		xen_start_info->flags &= ~(SIF_INITDOMAIN|SIF_PRIVILEGED);
> > > > 
> > > > What happens here on platforms prior to hypervisor changeset 23735?
> > > 
> > > It wouldn't work.
> > > Considering that we are certainly not going to backport ARM support to
> > > Xen 4.1, and that both ARM and XENFEAT_dom0 will be present in Xen 4.2,
> > > do we really need to support the Xen unstable changesets between when ARM
> > > support was introduced and when XENFEAT_dom0 appeared?
> 
> So should it just panic and say "AAAAAAH"?

I could panic if I found out that XENFEAT_dom0 is unimplemented, but in
practice I can only learn whether it is available...
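The limitation described here can be sketched in a small standalone C program. This is only an illustration: the names mirror the kernel's xen_features bitmap and xen_feature() accessor, and XENFEAT_dom0's index, but the bitmap is filled in by hand rather than by a real XENVER_get_features hypercall.

```c
#include <stdint.h>

/* Illustrative feature index, mirroring XENFEAT_dom0 from
 * include/xen/interface/features.h (introduced by hypervisor
 * changeset 23735). */
#define XENFEAT_dom0 11
#define XENFEAT_NR   32

/* Guest-side copy of the hypervisor's feature bitmap, as
 * xen_setup_features() would populate it. Here it is filled in
 * manually for illustration. */
static uint8_t xen_features[XENFEAT_NR];

static int xen_feature(int flag)
{
    return xen_features[flag];
}

/* The point raised in the thread: the guest can only observe whether
 * the bit is set. A hypervisor that predates XENFEAT_dom0 simply
 * leaves the bit clear, which is indistinguishable from "not
 * privileged", so there is no error condition to panic on. */
static const char *privilege_status(void)
{
    return xen_feature(XENFEAT_dom0)
        ? "privileged (dom0)"
        : "unprivileged domU (or pre-23735 hypervisor)";
}
```

Both the "old hypervisor" and the "ordinary domU" cases collapse into the same cleared-bit state, which is why the code can fall back to unprivileged but cannot detect the unsupported hypervisor.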

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 16:22:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 16:22:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwbgP-0007Re-Rk; Wed, 01 Aug 2012 16:22:01 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1SwbgO-0007RL-7V
	for Xen-devel@lists.xensource.com; Wed, 01 Aug 2012 16:22:00 +0000
Received: from [85.158.143.35:45898] by server-3.bemta-4.messagelabs.com id
	77/24-01511-7A759105; Wed, 01 Aug 2012 16:21:59 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1343838118!18190638!1
X-Originating-IP: [74.125.83.43]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_SEX,RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3732 invoked from network); 1 Aug 2012 16:21:58 -0000
Received: from mail-ee0-f43.google.com (HELO mail-ee0-f43.google.com)
	(74.125.83.43)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 16:21:58 -0000
Received: by eekd4 with SMTP id d4so2019456eek.30
	for <Xen-devel@lists.xensource.com>;
	Wed, 01 Aug 2012 09:21:58 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=mSgy6nbmXqeMYcfTtReatgfKzqCX9f2ydiBsy+ywloQ=;
	b=K7rSMxToGTPFE1/IkzaBAl1iLQFKQhHqOKTBAI9JNR1pNTkRjnEFHATAoHJlkIsDil
	A26y1mZj4twgekmIT7jVV9VQfdSOectKdqvHOVLrPNKGXrsE+A7MSsKNgUxkZtBwY1+v
	EjI0U0+wd5ryuUwy4R42sWibTmMvAAQoRfYefMAR5IuY2GgyzcKIdIpyrahDoop5ZiAB
	VG4CxYqB8H/YcileyHmjX9j/cDnKNHmoUyhzuNtb18qzcAK/21ouQa1fKMChJtnR67uC
	A2vqto31V7nXg5GtbeTN3wpPnPJ3U9zt5rTgxEO6hSUjaCZLz4xaqq0A0jS/G3QzQUZF
	ZLFw==
MIME-Version: 1.0
Received: by 10.14.175.130 with SMTP id z2mr23253855eel.0.1343838117988; Wed,
	01 Aug 2012 09:21:57 -0700 (PDT)
Received: by 10.14.206.135 with HTTP; Wed, 1 Aug 2012 09:21:57 -0700 (PDT)
In-Reply-To: <20120801160515.GA16155@phenom.dumpdata.com>
References: <20120626181707.4203d336@mantra.us.oracle.com>
	<CAFLBxZagm=53bZ=KaWmd7XQkkAduD+znECJRsaJUqrGJDZgkQQ@mail.gmail.com>
	<20120801152508.GA7132@phenom.dumpdata.com>
	<CAFLBxZb=yzcmxewCei49M6DX16x7eKZRGmxMUDbaKOTXgGZCGA@mail.gmail.com>
	<20120801160515.GA16155@phenom.dumpdata.com>
Date: Wed, 1 Aug 2012 17:21:57 +0100
X-Google-Sender-Auth: a8D1VtAZOQh4k7AZHgsiuaNAf2M
Message-ID: <CAFLBxZYXOiWhAni3X23O62DbbigzFECMbvpUFnGs38y12h2V0g@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Keir Fraser <keir.xen@gmail.com>,
	"Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	"stefano.stabellini@eu.citrix.com" <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [HYBRID]: status update...
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Aug 1, 2012 at 5:05 PM, Konrad Rzeszutek Wilk
<konrad.wilk@oracle.com> wrote:
> On Wed, Aug 01, 2012 at 04:59:58PM +0100, George Dunlap wrote:
>> On Wed, Aug 1, 2012 at 4:25 PM, Konrad Rzeszutek Wilk
>> <konrad.wilk@oracle.com> wrote:
>> > On Wed, Aug 01, 2012 at 04:25:01PM +0100, George Dunlap wrote:
>> >> I hope this isn't bikeshedding; but I don't like "Hybrid" as a name
>> >> for this feature, mainly for "marketing" reasons.  I think it will
>> >> probably give people the wrong idea about what the technology does.
>> >> PV domains is one of Xen's really distinct advantages -- much simpler
>> >> interface, lighter-weight (no qemu, legacy boot), &c &c.  As I
>> >> understand it, the mode you've been calling "hybrid" still has all of
>> >> these advantages -- it just uses some of the HVM hardware extensions
>> >> to make the interface even simpler / faster.  I'm afraid "hybrid" may
>> >> be seen as, "Even Xen has had to give up on PV."
>> >>
>> >> Can I suggest something like "PVH" instead?  That (at least to me)
>> >> makes it clear that PV domains are still fully PV, but just use some
>> >> HVM extensions.
>> >
>> > if (xen_pvh_domain()?
>> >
>> > if (xen_pv_h_domain()?
>> >
>> > if (xen_h_domain()) ?
>> >
>> > if (xen_pvplus_domain()) ?
>> >
>> > if (xen_pv_ext_domain()) ?
>> >
>> > I think I like 'pv+'?
>>
>> I could deal with pv+.  However, in general I dislike that kind of
>> "now even better!" marketing.  PV+, EPV (Enhanced / extended PV), PVX
>> (Extreme PV!) -- they all sound cool when they come out, but five
>> years later, when they're not so new or sexy anymore, they all sound
>> lame.  PVH is just descriptive -- it will always be PV with HVM
>> extensions, so it will age much better. :-)
>
> How about pv_with_mmu_in_hvm_container_domain() ?
>
> Ok, that is a bit too lengthy. How about then:
>
> if (xen_pvhvm_ext_domain()) ?
>
> The 'if (xen_pvh_domain())' is just one character short of 'xen_pv_domain()'
> and one might not notice it. Perhaps then 'if (xen_pv_h_domain()' ?

Hmm -- that's an interesting issue I hadn't thought of.  "PVHVM" has
already been sort of taken by Stefano's extensions to allow Linux
kernels booted in HVM mode to use some of the PV extensions.  I tend
to think "xen_pvh_domain()" is probably OK, but maybe calling it
"pvext" (or "pvhext") in the code, and "PVH" in documentation /
stories?  Just using "pvext" everywhere could work as well; it's a
little bit "now even better!", but not as much as pvplus.

 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 16:24:24 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 16:24:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwbiG-0007eE-DU; Wed, 01 Aug 2012 16:23:56 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lars.kurth.xen@gmail.com>) id 1SwbiE-0007dk-KF
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 16:23:54 +0000
X-Env-Sender: lars.kurth.xen@gmail.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1343838226!1693728!1
X-Originating-IP: [209.85.215.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15685 invoked from network); 1 Aug 2012 16:23:46 -0000
Received: from mail-ey0-f173.google.com (HELO mail-ey0-f173.google.com)
	(209.85.215.173)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 16:23:46 -0000
Received: by eaah1 with SMTP id h1so1917047eaa.32
	for <xen-devel@lists.xen.org>; Wed, 01 Aug 2012 09:23:46 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:date:from:reply-to:user-agent:mime-version:cc
	:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=8AP15gn9BOVHsblqAtvjLmhbhxEyY73hv7Lvhobtrao=;
	b=qLKEdlvUxx6NBtQlMosiCcf+a5K2IOKYj8L/FNtr0vRochjGD3S7mZ79i02i3/lOrb
	cjc0VOysrYF0rzl+DwfTdrHC150VH78rNGFHf/1Zlq7ZProOXuICUo2mdgcMs6ACit6l
	UGMT9Fh1TPRfQoHgYUmlQ3yJdtImMp4C0pKGv9ZRy699XGFy/vzq3RqjTu7dJYBpwtcp
	bPfoyRvxZntwtEGpniyhRugEKhVqvyqJgeOFtdQl4X0ILrjarw3p5BTRgAmJTI+5qW6E
	k68aMycl1VzfCAbpHfxuHSOI7vNkDt7orKT2+I7dnfUnvNWFVki28lXYGrJj5IlGm5lC
	dzhw==
Received: by 10.14.215.129 with SMTP id e1mr3039837eep.46.1343838226680;
	Wed, 01 Aug 2012 09:23:46 -0700 (PDT)
Received: from [172.16.26.11] (b01bc490.bb.sky.com. [176.27.196.144])
	by mx.google.com with ESMTPS id w3sm10222911eep.2.2012.08.01.09.23.44
	(version=SSLv3 cipher=OTHER); Wed, 01 Aug 2012 09:23:45 -0700 (PDT)
Message-ID: <5019580C.8040905@xen.org>
Date: Wed, 01 Aug 2012 17:23:40 +0100
From: Lars Kurth <lars.kurth@xen.org>
User-Agent: Mozilla/5.0 (Windows NT 6.1;
	rv:14.0) Gecko/20120713 Thunderbird/14.0
MIME-Version: 1.0
CC: xen-devel@lists.xen.org
References: <50191C3E.4050003@xen.org>
	<CAFLBxZaWGBv5dxC9rYKJqrKV7faewhmEuGxM2XsTZ633QuMong@mail.gmail.com>
In-Reply-To: <CAFLBxZaWGBv5dxC9rYKJqrKV7faewhmEuGxM2XsTZ633QuMong@mail.gmail.com>
Subject: Re: [Xen-devel] Proposal: Xen Test Days
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: lars.kurth@xen.org
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/08/2012 15:50, George Dunlap wrote:
> I think for 4.2, the key thing we want to test is the xm -> xl 
> transition; and the instructions for that are really simple -- 
> basically, "Do what you normally do using xl instead of xm". :-) 
> Secondary things we want tested involve just installing it on 
> different software setups (e.g., distros), and hardware testing. But I 
> think those will come as a matter of course with the first one. -George 
I created http://wiki.xen.org/wiki/Xen_Test_Days (proposing the 13th,
though maybe the 14th is better if we do an RC on the 13th).
I guess we do need to provide some guidance and help for what people can 
and should test in http://wiki.xen.org/wiki/Xen_Test_Days/TODO

Lars

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 16:24:47 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 16:24:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swbik-0007jE-6z; Wed, 01 Aug 2012 16:24:26 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Swbij-0007iU-Cf
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 16:24:25 +0000
Received: from [85.158.139.83:6355] by server-4.bemta-5.messagelabs.com id
	F1/85-27831-83859105; Wed, 01 Aug 2012 16:24:24 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-8.tower-182.messagelabs.com!1343838262!18501401!4
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13652 invoked from network); 1 Aug 2012 16:24:24 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 16:24:24 -0000
X-IronPort-AV: E=Sophos;i="4.77,695,1336348800"; d="scan'208";a="13808039"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 16:24:22 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 1 Aug 2012 17:24:22 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Swbif-0007Ir-M3; Wed, 01 Aug 2012 16:24:21 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Swbif-0004cw-Ja;
	Wed, 01 Aug 2012 17:24:21 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xen.org>
Date: Wed, 1 Aug 2012 17:24:13 +0100
Message-ID: <1343838260-17725-5-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1343838260-17725-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1343838260-17725-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [Xen-devel] [PATCH 04/11] libxl: fix formatting of
	DEFINE_DEVICES_ADD
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

These lines were exactly 80 columns wide, which produces hideous wrap
damage in an 80-column emacs.  Reformat using emacs's C-c \, which
puts the \ in column 72 (by default) where possible.

Whitespace change only.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 tools/libxl/libxl_device.c |   26 +++++++++++++-------------
 1 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/tools/libxl/libxl_device.c b/tools/libxl/libxl_device.c
index 544a861..319f0e8 100644
--- a/tools/libxl/libxl_device.c
+++ b/tools/libxl/libxl_device.c
@@ -483,19 +483,19 @@ void libxl__multidev_prepared(libxl__egc *egc, libxl__ao_devices *aodevs,
  * libxl__add_nics
  */
 
-#define DEFINE_DEVICES_ADD(type)                                               \
-    void libxl__add_##type##s(libxl__egc *egc, libxl__ao *ao, uint32_t domid,  \
-                              int start, libxl_domain_config *d_config,        \
-                              libxl__ao_devices *aodevs)                       \
-    {                                                                          \
-        AO_GC;                                                                 \
-        int i;                                                                 \
-        int end = start + d_config->num_##type##s;                             \
-        for (i = start; i < end; i++) {                                        \
-            libxl__ao_device *aodev = libxl__multidev_prepare(aodevs);         \
-            libxl__device_##type##_add(egc, domid, &d_config->type##s[i-start],\
-                                       aodev);                                 \
-        }                                                                      \
+#define DEFINE_DEVICES_ADD(type)                                        \
+    void libxl__add_##type##s(libxl__egc *egc, libxl__ao *ao, uint32_t domid, \
+                              int start, libxl_domain_config *d_config, \
+                              libxl__ao_devices *aodevs)                \
+    {                                                                   \
+        AO_GC;                                                          \
+        int i;                                                          \
+        int end = start + d_config->num_##type##s;                      \
+        for (i = start; i < end; i++) {                                 \
+            libxl__ao_device *aodev = libxl__multidev_prepare(aodevs);  \
+            libxl__device_##type##_add(egc, domid, &d_config->type##s[i-start], \
+                                       aodev);                          \
+        }                                                               \
     }
 
 DEFINE_DEVICES_ADD(disk)
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

These lines were exactly 80 columns wide, which produced hideous wrap
damage in an 80 column emacs.  Reformat using emacs's C-c \,
which puts the \ in column 72 (by default) where possible.

Whitespace change only.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 tools/libxl/libxl_device.c |   26 +++++++++++++-------------
 1 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/tools/libxl/libxl_device.c b/tools/libxl/libxl_device.c
index 544a861..319f0e8 100644
--- a/tools/libxl/libxl_device.c
+++ b/tools/libxl/libxl_device.c
@@ -483,19 +483,19 @@ void libxl__multidev_prepared(libxl__egc *egc, libxl__ao_devices *aodevs,
  * libxl__add_nics
  */
 
-#define DEFINE_DEVICES_ADD(type)                                               \
-    void libxl__add_##type##s(libxl__egc *egc, libxl__ao *ao, uint32_t domid,  \
-                              int start, libxl_domain_config *d_config,        \
-                              libxl__ao_devices *aodevs)                       \
-    {                                                                          \
-        AO_GC;                                                                 \
-        int i;                                                                 \
-        int end = start + d_config->num_##type##s;                             \
-        for (i = start; i < end; i++) {                                        \
-            libxl__ao_device *aodev = libxl__multidev_prepare(aodevs);         \
-            libxl__device_##type##_add(egc, domid, &d_config->type##s[i-start],\
-                                       aodev);                                 \
-        }                                                                      \
+#define DEFINE_DEVICES_ADD(type)                                        \
+    void libxl__add_##type##s(libxl__egc *egc, libxl__ao *ao, uint32_t domid, \
+                              int start, libxl_domain_config *d_config, \
+                              libxl__ao_devices *aodevs)                \
+    {                                                                   \
+        AO_GC;                                                          \
+        int i;                                                          \
+        int end = start + d_config->num_##type##s;                      \
+        for (i = start; i < end; i++) {                                 \
+            libxl__ao_device *aodev = libxl__multidev_prepare(aodevs);  \
+            libxl__device_##type##_add(egc, domid, &d_config->type##s[i-start], \
+                                       aodev);                          \
+        }                                                               \
     }
 
 DEFINE_DEVICES_ADD(disk)
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 16:24:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 16:24:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swbii-0007iK-QK; Wed, 01 Aug 2012 16:24:24 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Swbii-0007i4-1F
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 16:24:24 +0000
Received: from [85.158.139.83:3020] by server-1.bemta-5.messagelabs.com id
	30/95-29759-73859105; Wed, 01 Aug 2012 16:24:23 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-8.tower-182.messagelabs.com!1343838262!18501401!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13572 invoked from network); 1 Aug 2012 16:24:22 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 16:24:22 -0000
X-IronPort-AV: E=Sophos;i="4.77,695,1336348800"; d="scan'208";a="13808035"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 16:24:21 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 1 Aug 2012 17:24:21 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Swbif-0007Ih-Eo; Wed, 01 Aug 2012 16:24:21 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Swbif-0004ci-E0;
	Wed, 01 Aug 2012 17:24:21 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xen.org>
Date: Wed, 1 Aug 2012 17:24:09 +0100
Message-ID: <1343838260-17725-1-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.2.5
MIME-Version: 1.0
Subject: [Xen-devel] [PATCH v4 00/11] libxl: Assorted bugfixes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

These are various bugfix and debugging patches.  I have now tested
this series up to 10/11.  I haven't tested 11/11 because it doesn't
build.

Bugfixes:

 02/11 libxl: react correctly to bootloader pty master POLLHUP
 03/11 libxl: fix device counting race in libxl__devices_destroy
 04/11 libxl: fix formatting of DEFINE_DEVICES_ADD
 07/11 libxl: do not blunder on if bootloader fails (again)

Cleanups:

 01/11 libxl: unify libxl__device_destroy and device_hotplug_done
 05/11 libxl: abolish useless `start' parameter to libxl__add_*
 06/11 libxl: rename aodevs to multidev
 09/11 libxl: remus: mark TODOs more clearly
 10/11 libxl: remove an unused numainfo parameter

DO NOT APPLY:

 08/11 Debugging machinery for synthesising POLLHUP
 11/11 libxl: -Wunused-parameter


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 16:24:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 16:24:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swbik-0007jh-Ii; Wed, 01 Aug 2012 16:24:26 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Swbij-0007iJ-9r
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 16:24:25 +0000
Received: from [85.158.139.83:3098] by server-10.bemta-5.messagelabs.com id
	CA/8E-02190-83859105; Wed, 01 Aug 2012 16:24:24 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-8.tower-182.messagelabs.com!1343838262!18501401!3
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13632 invoked from network); 1 Aug 2012 16:24:23 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 16:24:23 -0000
X-IronPort-AV: E=Sophos;i="4.77,695,1336348800"; d="scan'208";a="13808037"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 16:24:22 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 1 Aug 2012 17:24:21 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Swbif-0007Il-I1; Wed, 01 Aug 2012 16:24:21 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Swbif-0004co-Fs;
	Wed, 01 Aug 2012 17:24:21 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xen.org>
Date: Wed, 1 Aug 2012 17:24:11 +0100
Message-ID: <1343838260-17725-3-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1343838260-17725-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1343838260-17725-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 02/11] libxl: react correctly to bootloader pty
	master POLLHUP
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Receiving POLLHUP on the bootloader master pty is not an error.
Hopefully it means that the bootloader has exited and therefore the
pty slave side has no process group any more.  (At least NetBSD
indicates POLLHUP on the master in this case.)

So send the bootloader SIGTERM; if it has already exited then this has
no effect (except that on some versions of NetBSD it erroneously
returns ESRCH and we print a harmless warning) and we will then
collect the bootloader's exit status and be satisfied.

However, we remember that we have done this so that, if we got POLLHUP
for some reason other than the bootloader exiting, we can report
something resembling a useful message.

In order to implement this we need to provide a way for users of
datacopier to handle POLLHUP rather than treating it as fatal.

We rename bootloader_abort to bootloader_stop since it no longer
applies only to error situations.

Signed-off-by: Roger Pau Monne <roger.pau@citrix.com>
Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>

-
Changes in v4:
 * Track whether we sent SIGTERM due to POLLHUP so we can report
   messages properly.

Changes in v3:
 * datacopier provides new interface for handling POLLHUP
 * Do not ignore errors on the xenconsole pty
 * Rename bootloader_abort.
---
 tools/libxl/libxl_aoutils.c    |   23 +++++++++++++++++++++++
 tools/libxl/libxl_bootloader.c |   39 +++++++++++++++++++++++++++++----------
 tools/libxl/libxl_internal.h   |    7 +++++--
 3 files changed, 57 insertions(+), 12 deletions(-)

diff --git a/tools/libxl/libxl_aoutils.c b/tools/libxl/libxl_aoutils.c
index 99972a2..4bd5484 100644
--- a/tools/libxl/libxl_aoutils.c
+++ b/tools/libxl/libxl_aoutils.c
@@ -97,11 +97,31 @@ void libxl__datacopier_prefixdata(libxl__egc *egc, libxl__datacopier_state *dc,
     LIBXL_TAILQ_INSERT_TAIL(&dc->bufs, buf, entry);
 }
 
+static int datacopier_pollhup_handled(libxl__egc *egc,
+                                      libxl__datacopier_state *dc,
+                                      short revents, int onwrite)
+{
+    STATE_AO_GC(dc->ao);
+
+    if (dc->callback_pollhup && (revents & POLLHUP)) {
+        LOG(DEBUG, "received POLLHUP on %s during copy of %s",
+            onwrite ? dc->writewhat : dc->readwhat,
+            dc->copywhat);
+        libxl__datacopier_kill(dc);
+        dc->callback(egc, dc, onwrite, -1);
+        return 1;
+    }
+    return 0;
+}
+
 static void datacopier_readable(libxl__egc *egc, libxl__ev_fd *ev,
                                 int fd, short events, short revents) {
     libxl__datacopier_state *dc = CONTAINER_OF(ev, *dc, toread);
     STATE_AO_GC(dc->ao);
 
+    if (datacopier_pollhup_handled(egc, dc, revents, 0))
+        return;
+
     if (revents & ~POLLIN) {
         LOG(ERROR, "unexpected poll event 0x%x (should be POLLIN)"
             " on %s during copy of %s", revents, dc->readwhat, dc->copywhat);
@@ -163,6 +183,9 @@ static void datacopier_writable(libxl__egc *egc, libxl__ev_fd *ev,
     libxl__datacopier_state *dc = CONTAINER_OF(ev, *dc, towrite);
     STATE_AO_GC(dc->ao);
 
+    if (datacopier_pollhup_handled(egc, dc, revents, 1))
+        return;
+
     if (revents & ~POLLOUT) {
         LOG(ERROR, "unexpected poll event 0x%x (should be POLLOUT)"
             " on %s during copy of %s", revents, dc->writewhat, dc->copywhat);
diff --git a/tools/libxl/libxl_bootloader.c b/tools/libxl/libxl_bootloader.c
index ef5a91b..bfc1b56 100644
--- a/tools/libxl/libxl_bootloader.c
+++ b/tools/libxl/libxl_bootloader.c
@@ -215,6 +215,7 @@ void libxl__bootloader_init(libxl__bootloader_state *bl)
     libxl__domaindeathcheck_init(&bl->deathcheck);
     bl->keystrokes.ao = bl->ao;  libxl__datacopier_init(&bl->keystrokes);
     bl->display.ao = bl->ao;     libxl__datacopier_init(&bl->display);
+    bl->got_pollhup = 0;
 }
 
 static void bootloader_cleanup(libxl__egc *egc, libxl__bootloader_state *bl)
@@ -275,7 +276,7 @@ static void bootloader_local_detached_cb(libxl__egc *egc,
 }
 
 /* might be called at any time, provided it's init'd */
-static void bootloader_abort(libxl__egc *egc,
+static void bootloader_stop(libxl__egc *egc,
                              libxl__bootloader_state *bl, int rc)
 {
     STATE_AO_GC(bl->ao);
@@ -285,8 +286,8 @@ static void bootloader_abort(libxl__egc *egc,
     libxl__datacopier_kill(&bl->display);
     if (libxl__ev_child_inuse(&bl->child)) {
         r = kill(bl->child.pid, SIGTERM);
-        if (r) LOGE(WARN, "after failure, failed to kill bootloader [%lu]",
-                    (unsigned long)bl->child.pid);
+        if (r) LOGE(WARN, "%sfailed to kill bootloader [%lu]",
+                    rc ? "after failure, " : "", (unsigned long)bl->child.pid);
     }
     bl->rc = rc;
 }
@@ -508,7 +509,10 @@ static void bootloader_gotptys(libxl__egc *egc, libxl__openpty_state *op)
     bl->keystrokes.maxsz = BOOTLOADER_BUF_OUT;
     bl->keystrokes.copywhat =
         GCSPRINTF("bootloader input for domain %"PRIu32, bl->domid);
-    bl->keystrokes.callback = bootloader_keystrokes_copyfail;
+    bl->keystrokes.callback =         bootloader_keystrokes_copyfail;
+    bl->keystrokes.callback_pollhup = bootloader_keystrokes_copyfail;
+        /* pollhup gets called with errnoval==-1 which is not otherwise
+         * possible since errnos are nonnegative, so it's unambiguous */
     rc = libxl__datacopier_start(&bl->keystrokes);
     if (rc) goto out;
 
@@ -516,7 +520,8 @@ static void bootloader_gotptys(libxl__egc *egc, libxl__openpty_state *op)
     bl->display.maxsz = BOOTLOADER_BUF_IN;
     bl->display.copywhat =
         GCSPRINTF("bootloader output for domain %"PRIu32, bl->domid);
-    bl->display.callback = bootloader_display_copyfail;
+    bl->display.callback =         bootloader_display_copyfail;
+    bl->display.callback_pollhup = bootloader_display_copyfail;
     rc = libxl__datacopier_start(&bl->display);
     if (rc) goto out;
 
@@ -562,30 +567,42 @@ static void bootloader_gotptys(libxl__egc *egc, libxl__openpty_state *op)
 
 /* perhaps one of these will be called, but perhaps not */
 static void bootloader_copyfail(libxl__egc *egc, const char *which,
-       libxl__bootloader_state *bl, int onwrite, int errnoval)
+        libxl__bootloader_state *bl, int ondisplay, int onwrite, int errnoval)
 {
     STATE_AO_GC(bl->ao);
+    int rc = ERROR_FAIL;
+
+    if (errnoval==-1) {
+        /* POLLHUP */
+        if (!!ondisplay != !!onwrite) {
+            rc = 0;
+            bl->got_pollhup = 1;
+        } else {
+            LOG(ERROR, "unexpected POLLHUP on %s", which);
+        }
+    }
     if (!onwrite && !errnoval)
         LOG(ERROR, "unexpected eof copying %s", which);
-    bootloader_abort(egc, bl, ERROR_FAIL);
+
+    bootloader_stop(egc, bl, rc);
 }
 static void bootloader_keystrokes_copyfail(libxl__egc *egc,
        libxl__datacopier_state *dc, int onwrite, int errnoval)
 {
     libxl__bootloader_state *bl = CONTAINER_OF(dc, *bl, keystrokes);
-    bootloader_copyfail(egc, "bootloader input", bl, onwrite, errnoval);
+    bootloader_copyfail(egc, "bootloader input", bl, 0, onwrite, errnoval);
 }
 static void bootloader_display_copyfail(libxl__egc *egc,
        libxl__datacopier_state *dc, int onwrite, int errnoval)
 {
     libxl__bootloader_state *bl = CONTAINER_OF(dc, *bl, display);
-    bootloader_copyfail(egc, "bootloader output", bl, onwrite, errnoval);
+    bootloader_copyfail(egc, "bootloader output", bl, 1, onwrite, errnoval);
 }
 
 static void bootloader_domaindeath(libxl__egc *egc, libxl__domaindeathcheck *dc)
 {
     libxl__bootloader_state *bl = CONTAINER_OF(dc, *bl, deathcheck);
-    bootloader_abort(egc, bl, ERROR_FAIL);
+    bootloader_stop(egc, bl, ERROR_FAIL);
 }
 
 static void bootloader_finished(libxl__egc *egc, libxl__ev_child *child,
@@ -599,6 +616,8 @@ static void bootloader_finished(libxl__egc *egc, libxl__ev_child *child,
     libxl__datacopier_kill(&bl->display);
 
     if (status) {
+        if (bl->got_pollhup && WIFSIGNALED(status) && WTERMSIG(status)==SIGTERM)
+            LOG(ERROR, "got POLLHUP, sent SIGTERM");
         LOG(ERROR, "bootloader failed - consult logfile %s", bl->logfile);
         libxl_report_child_exitstatus(CTX, XTL_ERROR, "bootloader",
                                       pid, status);
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index 691b4f6..c57503f 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -2070,7 +2070,9 @@ typedef struct libxl__datacopier_buf libxl__datacopier_buf;
  *     errnoval==0 means we got eof and all data was written
  *     errnoval!=0 means we had a read error, logged
  * onwrite==-1 means some other internal failure, errnoval not valid, logged
- * in all cases copier is killed before calling this callback */
+ * If we get POLLHUP, we call callback_pollhup(..., onwrite, -1);
+ * or if callback_pollhup==0 this is an internal failure, as above.
+ * In all cases copier is killed before calling this callback */
 typedef void libxl__datacopier_callback(libxl__egc *egc,
      libxl__datacopier_state *dc, int onwrite, int errnoval);
 
@@ -2089,6 +2091,7 @@ struct libxl__datacopier_state {
     const char *copywhat, *readwhat, *writewhat; /* for error msgs */
     FILE *log; /* gets a copy of everything */
     libxl__datacopier_callback *callback;
+    libxl__datacopier_callback *callback_pollhup;
     /* remaining fields are private to datacopier */
     libxl__ev_fd toread, towrite;
     ssize_t used;
@@ -2273,7 +2276,7 @@ struct libxl__bootloader_state {
     int nargs, argsspace;
     const char **args;
     libxl__datacopier_state keystrokes, display;
-    int rc;
+    int rc, got_pollhup;
 };
 
 _hidden void libxl__bootloader_init(libxl__bootloader_state *bl);
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

+static void bootloader_stop(libxl__egc *egc,
                              libxl__bootloader_state *bl, int rc)
 {
     STATE_AO_GC(bl->ao);
@@ -285,8 +286,8 @@ static void bootloader_abort(libxl__egc *egc,
     libxl__datacopier_kill(&bl->display);
     if (libxl__ev_child_inuse(&bl->child)) {
         r = kill(bl->child.pid, SIGTERM);
-        if (r) LOGE(WARN, "after failure, failed to kill bootloader [%lu]",
-                    (unsigned long)bl->child.pid);
+        if (r) LOGE(WARN, "%sfailed to kill bootloader [%lu]",
+                    rc ? "after failure, " : "", (unsigned long)bl->child.pid);
     }
     bl->rc = rc;
 }
@@ -508,7 +509,10 @@ static void bootloader_gotptys(libxl__egc *egc, libxl__openpty_state *op)
     bl->keystrokes.maxsz = BOOTLOADER_BUF_OUT;
     bl->keystrokes.copywhat =
         GCSPRINTF("bootloader input for domain %"PRIu32, bl->domid);
-    bl->keystrokes.callback = bootloader_keystrokes_copyfail;
+    bl->keystrokes.callback =         bootloader_keystrokes_copyfail;
+    bl->keystrokes.callback_pollhup = bootloader_keystrokes_copyfail;
+        /* pollhup gets called with errnoval==-1 which is not otherwise
+         * possible since errnos are nonnegative, so it's unambiguous */
     rc = libxl__datacopier_start(&bl->keystrokes);
     if (rc) goto out;
 
@@ -516,7 +520,8 @@ static void bootloader_gotptys(libxl__egc *egc, libxl__openpty_state *op)
     bl->display.maxsz = BOOTLOADER_BUF_IN;
     bl->display.copywhat =
         GCSPRINTF("bootloader output for domain %"PRIu32, bl->domid);
-    bl->display.callback = bootloader_display_copyfail;
+    bl->display.callback =         bootloader_display_copyfail;
+    bl->display.callback_pollhup = bootloader_display_copyfail;
     rc = libxl__datacopier_start(&bl->display);
     if (rc) goto out;
 
@@ -562,30 +567,42 @@ static void bootloader_gotptys(libxl__egc *egc, libxl__openpty_state *op)
 
 /* perhaps one of these will be called, but perhaps not */
 static void bootloader_copyfail(libxl__egc *egc, const char *which,
-       libxl__bootloader_state *bl, int onwrite, int errnoval)
+        libxl__bootloader_state *bl, int ondisplay, int onwrite, int errnoval)
 {
     STATE_AO_GC(bl->ao);
+    int rc = ERROR_FAIL;
+
+    if (errnoval==-1) {
+        /* POLLHUP */
+        if (!!ondisplay != !!onwrite) {
+            rc = 0;
+            bl->got_pollhup = 1;
+        } else {
+            LOG(ERROR, "unexpected POLLHUP on %s", which);
+        }
+    }
     if (!onwrite && !errnoval)
         LOG(ERROR, "unexpected eof copying %s", which);
-    bootloader_abort(egc, bl, ERROR_FAIL);
+
+    bootloader_stop(egc, bl, rc);
 }
 static void bootloader_keystrokes_copyfail(libxl__egc *egc,
        libxl__datacopier_state *dc, int onwrite, int errnoval)
 {
     libxl__bootloader_state *bl = CONTAINER_OF(dc, *bl, keystrokes);
-    bootloader_copyfail(egc, "bootloader input", bl, onwrite, errnoval);
+    bootloader_copyfail(egc, "bootloader input", bl, 0, onwrite, errnoval);
 }
 static void bootloader_display_copyfail(libxl__egc *egc,
        libxl__datacopier_state *dc, int onwrite, int errnoval)
 {
     libxl__bootloader_state *bl = CONTAINER_OF(dc, *bl, display);
-    bootloader_copyfail(egc, "bootloader output", bl, onwrite, errnoval);
+    bootloader_copyfail(egc, "bootloader output", bl, 1, onwrite, errnoval);
 }
 
 static void bootloader_domaindeath(libxl__egc *egc, libxl__domaindeathcheck *dc)
 {
     libxl__bootloader_state *bl = CONTAINER_OF(dc, *bl, deathcheck);
-    bootloader_abort(egc, bl, ERROR_FAIL);
+    bootloader_stop(egc, bl, ERROR_FAIL);
 }
 
 static void bootloader_finished(libxl__egc *egc, libxl__ev_child *child,
@@ -599,6 +616,8 @@ static void bootloader_finished(libxl__egc *egc, libxl__ev_child *child,
     libxl__datacopier_kill(&bl->display);
 
     if (status) {
+        if (bl->got_pollhup && WIFSIGNALED(status) && WTERMSIG(status)==SIGTERM)
+            LOG(ERROR, "got POLLHUP, sent SIGTERM");
         LOG(ERROR, "bootloader failed - consult logfile %s", bl->logfile);
         libxl_report_child_exitstatus(CTX, XTL_ERROR, "bootloader",
                                       pid, status);
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index 691b4f6..c57503f 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -2070,7 +2070,9 @@ typedef struct libxl__datacopier_buf libxl__datacopier_buf;
  *     errnoval==0 means we got eof and all data was written
  *     errnoval!=0 means we had a read error, logged
  * onwrite==-1 means some other internal failure, errnoval not valid, logged
- * in all cases copier is killed before calling this callback */
+ * If we get POLLHUP, we call callback_pollhup(..., onwrite, -1);
+ * or if callback_pollhup==0 this is an internal failure, as above.
+ * In all cases copier is killed before calling this callback */
 typedef void libxl__datacopier_callback(libxl__egc *egc,
      libxl__datacopier_state *dc, int onwrite, int errnoval);
 
@@ -2089,6 +2091,7 @@ struct libxl__datacopier_state {
     const char *copywhat, *readwhat, *writewhat; /* for error msgs */
     FILE *log; /* gets a copy of everything */
     libxl__datacopier_callback *callback;
+    libxl__datacopier_callback *callback_pollhup;
     /* remaining fields are private to datacopier */
     libxl__ev_fd toread, towrite;
     ssize_t used;
@@ -2273,7 +2276,7 @@ struct libxl__bootloader_state {
     int nargs, argsspace;
     const char **args;
     libxl__datacopier_state keystrokes, display;
-    int rc;
+    int rc, got_pollhup;
 };
 
 _hidden void libxl__bootloader_init(libxl__bootloader_state *bl);
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 16:24:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 16:24:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swbim-0007li-Q8; Wed, 01 Aug 2012 16:24:28 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Swbik-0007iq-0Z
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 16:24:26 +0000
Received: from [85.158.139.83:6406] by server-5.bemta-5.messagelabs.com id
	5B/01-02722-93859105; Wed, 01 Aug 2012 16:24:25 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-8.tower-182.messagelabs.com!1343838262!18501401!6
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13675 invoked from network); 1 Aug 2012 16:24:24 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 16:24:24 -0000
X-IronPort-AV: E=Sophos;i="4.77,695,1336348800"; d="scan'208";a="13808041"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 16:24:22 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 1 Aug 2012 17:24:22 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Swbif-0007Ix-R4; Wed, 01 Aug 2012 16:24:21 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Swbif-0004d4-No;
	Wed, 01 Aug 2012 17:24:21 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xen.org>
Date: Wed, 1 Aug 2012 17:24:15 +0100
Message-ID: <1343838260-17725-7-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1343838260-17725-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1343838260-17725-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [Xen-devel] [PATCH 06/11] libxl: rename aodevs to multidev
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

To be consistent with the new function naming, rename
libxl__ao_devices to libxl__multidev and all variables aodevs to
multidev.

No functional change.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 tools/libxl/libxl_create.c   |   30 +++++++++---------
 tools/libxl/libxl_device.c   |   68 +++++++++++++++++++++---------------------
 tools/libxl/libxl_dm.c       |   30 +++++++++---------
 tools/libxl/libxl_internal.h |   26 ++++++++--------
 4 files changed, 77 insertions(+), 77 deletions(-)

diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index 5275373..5f0d26f 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -599,10 +599,10 @@ static void domcreate_bootloader_done(libxl__egc *egc,
                                       libxl__bootloader_state *bl,
                                       int rc);
 
-static void domcreate_launch_dm(libxl__egc *egc, libxl__ao_devices *aodevs,
+static void domcreate_launch_dm(libxl__egc *egc, libxl__multidev *aodevs,
                                 int ret);
 
-static void domcreate_attach_pci(libxl__egc *egc, libxl__ao_devices *aodevs,
+static void domcreate_attach_pci(libxl__egc *egc, libxl__multidev *aodevs,
                                  int ret);
 
 static void domcreate_console_available(libxl__egc *egc,
@@ -909,10 +909,10 @@ static void domcreate_rebuild_done(libxl__egc *egc,
 
     store_libxl_entry(gc, domid, &d_config->b_info);
 
-    libxl__multidev_begin(ao, &dcs->aodevs);
-    dcs->aodevs.callback = domcreate_launch_dm;
-    libxl__add_disks(egc, ao, domid, d_config, &dcs->aodevs);
-    libxl__multidev_prepared(egc, &dcs->aodevs, 0);
+    libxl__multidev_begin(ao, &dcs->multidev);
+    dcs->multidev.callback = domcreate_launch_dm;
+    libxl__add_disks(egc, ao, domid, d_config, &dcs->multidev);
+    libxl__multidev_prepared(egc, &dcs->multidev, 0);
 
     return;
 
@@ -921,10 +921,10 @@ static void domcreate_rebuild_done(libxl__egc *egc,
     domcreate_complete(egc, dcs, ret);
 }
 
-static void domcreate_launch_dm(libxl__egc *egc, libxl__ao_devices *aodevs,
+static void domcreate_launch_dm(libxl__egc *egc, libxl__multidev *multidev,
                                 int ret)
 {
-    libxl__domain_create_state *dcs = CONTAINER_OF(aodevs, *dcs, aodevs);
+    libxl__domain_create_state *dcs = CONTAINER_OF(multidev, *dcs, multidev);
     STATE_AO_GC(dcs->ao);
     int i;
 
@@ -1039,14 +1039,14 @@ static void domcreate_devmodel_started(libxl__egc *egc,
     /* Plug nic interfaces */
     if (d_config->num_nics > 0) {
         /* Attach nics */
-        libxl__multidev_begin(ao, &dcs->aodevs);
-        dcs->aodevs.callback = domcreate_attach_pci;
-        libxl__add_nics(egc, ao, domid, d_config, &dcs->aodevs);
-        libxl__multidev_prepared(egc, &dcs->aodevs, 0);
+        libxl__multidev_begin(ao, &dcs->multidev);
+        dcs->multidev.callback = domcreate_attach_pci;
+        libxl__add_nics(egc, ao, domid, d_config, &dcs->multidev);
+        libxl__multidev_prepared(egc, &dcs->multidev, 0);
         return;
     }
 
-    domcreate_attach_pci(egc, &dcs->aodevs, 0);
+    domcreate_attach_pci(egc, &dcs->multidev, 0);
     return;
 
 error_out:
@@ -1054,10 +1054,10 @@ error_out:
     domcreate_complete(egc, dcs, ret);
 }
 
-static void domcreate_attach_pci(libxl__egc *egc, libxl__ao_devices *aodevs,
+static void domcreate_attach_pci(libxl__egc *egc, libxl__multidev *multidev,
                                  int ret)
 {
-    libxl__domain_create_state *dcs = CONTAINER_OF(aodevs, *dcs, aodevs);
+    libxl__domain_create_state *dcs = CONTAINER_OF(multidev, *dcs, multidev);
     STATE_AO_GC(dcs->ao);
     int i;
     libxl_ctx *ctx = CTX;
diff --git a/tools/libxl/libxl_device.c b/tools/libxl/libxl_device.c
index 41d527b..84fa06c 100644
--- a/tools/libxl/libxl_device.c
+++ b/tools/libxl/libxl_device.c
@@ -403,13 +403,13 @@ void libxl__prepare_ao_device(libxl__ao *ao, libxl__ao_device *aodev)
 
 /* multidev */
 
-void libxl__multidev_begin(libxl__ao *ao, libxl__ao_devices *aodevs)
+void libxl__multidev_begin(libxl__ao *ao, libxl__multidev *multidev)
 {
     AO_GC;
 
-    aodevs->ao = ao;
-    aodevs->array = 0;
-    aodevs->used = aodevs->allocd = 0;
+    multidev->ao = ao;
+    multidev->array = 0;
+    multidev->used = multidev->allocd = 0;
 
     /* We allocate an aodev to represent the operation of preparing
      * all of the other operations.  This operation is completed when
@@ -422,25 +422,25 @@ void libxl__multidev_begin(libxl__ao *ao, libxl__ao_devices *aodevs)
      *  (iii) we have a nice consistent way to deal with any
      *      error that might occur while deciding what to initiate
      */
-    aodevs->preparation = libxl__multidev_prepare(aodevs);
+    multidev->preparation = libxl__multidev_prepare(multidev);
 }
 
 static void multidev_one_callback(libxl__egc *egc, libxl__ao_device *aodev);
 
-libxl__ao_device *libxl__multidev_prepare(libxl__ao_devices *aodevs) {
-    STATE_AO_GC(aodevs->ao);
+libxl__ao_device *libxl__multidev_prepare(libxl__multidev *multidev) {
+    STATE_AO_GC(multidev->ao);
     libxl__ao_device *aodev;
 
     GCNEW(aodev);
-    aodev->aodevs = aodevs;
+    aodev->multidev = multidev;
     aodev->callback = multidev_one_callback;
     libxl__prepare_ao_device(ao, aodev);
 
-    if (aodevs->used >= aodevs->allocd) {
-        aodevs->allocd = aodevs->used * 2 + 5;
-        GCREALLOC_ARRAY(aodevs->array, aodevs->allocd);
+    if (multidev->used >= multidev->allocd) {
+        multidev->allocd = multidev->used * 2 + 5;
+        GCREALLOC_ARRAY(multidev->array, multidev->allocd);
     }
-    aodevs->array[aodevs->used++] = aodev;
+    multidev->array[multidev->used++] = aodev;
 
     return aodev;
 }
@@ -448,28 +448,28 @@ libxl__ao_device *libxl__multidev_prepare(libxl__ao_devices *aodevs) {
 static void multidev_one_callback(libxl__egc *egc, libxl__ao_device *aodev)
 {
     STATE_AO_GC(aodev->ao);
-    libxl__ao_devices *aodevs = aodev->aodevs;
+    libxl__multidev *multidev = aodev->multidev;
     int i, error = 0;
 
     aodev->active = 0;
 
-    for (i = 0; i < aodevs->used; i++) {
-        if (aodevs->array[i]->active)
+    for (i = 0; i < multidev->used; i++) {
+        if (multidev->array[i]->active)
             return;
 
-        if (aodevs->array[i]->rc)
-            error = aodevs->array[i]->rc;
+        if (multidev->array[i]->rc)
+            error = multidev->array[i]->rc;
     }
 
-    aodevs->callback(egc, aodevs, error);
+    multidev->callback(egc, multidev, error);
     return;
 }
 
-void libxl__multidev_prepared(libxl__egc *egc, libxl__ao_devices *aodevs,
-                              int rc)
+void libxl__multidev_prepared(libxl__egc *egc,
+                              libxl__multidev *multidev, int rc)
 {
-    aodevs->preparation->rc = rc;
-    multidev_one_callback(egc, aodevs->preparation);
+    multidev->preparation->rc = rc;
+    multidev_one_callback(egc, multidev->preparation);
 }
 
 /******************************************************************************/
@@ -486,12 +486,12 @@ void libxl__multidev_prepared(libxl__egc *egc, libxl__ao_devices *aodevs,
 #define DEFINE_DEVICES_ADD(type)                                        \
     void libxl__add_##type##s(libxl__egc *egc, libxl__ao *ao, uint32_t domid, \
                               libxl_domain_config *d_config,            \
-                              libxl__ao_devices *aodevs)                \
+                              libxl__multidev *multidev)                \
     {                                                                   \
         AO_GC;                                                          \
         int i;                                                          \
         for (i = 0; i < d_config->num_##type##s; i++) {                 \
-            libxl__ao_device *aodev = libxl__multidev_prepare(aodevs);  \
+            libxl__ao_device *aodev = libxl__multidev_prepare(multidev);  \
             libxl__device_##type##_add(egc, domid, &d_config->type##s[i], \
                                        aodev);                          \
         }                                                               \
@@ -531,8 +531,8 @@ out:
 
 /* Callback for device destruction */
 
-static void devices_remove_callback(libxl__egc *egc, libxl__ao_devices *aodevs,
-                                    int rc);
+static void devices_remove_callback(libxl__egc *egc,
+                                    libxl__multidev *multidev, int rc);
 
 void libxl__devices_destroy(libxl__egc *egc, libxl__devices_remove_state *drs)
 {
@@ -544,12 +544,12 @@ void libxl__devices_destroy(libxl__egc *egc, libxl__devices_remove_state *drs)
     char **kinds = NULL, **devs = NULL;
     int i, j, rc = 0;
     libxl__device *dev;
-    libxl__ao_devices *aodevs = &drs->aodevs;
+    libxl__multidev *multidev = &drs->multidev;
     libxl__ao_device *aodev;
     libxl__device_kind kind;
 
-    libxl__multidev_begin(ao, aodevs);
-    aodevs->callback = devices_remove_callback;
+    libxl__multidev_begin(ao, multidev);
+    multidev->callback = devices_remove_callback;
 
     path = libxl__sprintf(gc, "/local/domain/%d/device", domid);
     kinds = libxl__xs_directory(gc, XBT_NULL, path, &num_kinds);
@@ -586,7 +586,7 @@ void libxl__devices_destroy(libxl__egc *egc, libxl__devices_remove_state *drs)
                     libxl__device_destroy(gc, dev);
                     continue;
                 }
-                aodev = libxl__multidev_prepare(aodevs);
+                aodev = libxl__multidev_prepare(multidev);
                 aodev->action = DEVICE_DISCONNECT;
                 aodev->dev = dev;
                 aodev->force = drs->force;
@@ -612,7 +612,7 @@ void libxl__devices_destroy(libxl__egc *egc, libxl__devices_remove_state *drs)
     }
 
 out:
-    libxl__multidev_prepared(egc, aodevs, rc);
+    libxl__multidev_prepared(egc, multidev, rc);
 }
 
 /* Callbacks for device related operations */
@@ -1002,10 +1002,10 @@ static void device_hotplug_clean(libxl__gc *gc, libxl__ao_device *aodev)
     assert(!libxl__ev_child_inuse(&aodev->child));
 }
 
-static void devices_remove_callback(libxl__egc *egc, libxl__ao_devices *aodevs,
-                                    int rc)
+static void devices_remove_callback(libxl__egc *egc,
+                                    libxl__multidev *multidev, int rc)
 {
-    libxl__devices_remove_state *drs = CONTAINER_OF(aodevs, *drs, aodevs);
+    libxl__devices_remove_state *drs = CONTAINER_OF(multidev, *drs, multidev);
     STATE_AO_GC(drs->ao);
 
     drs->callback(egc, drs, rc);
diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
index 66aa45e..0c0084f 100644
--- a/tools/libxl/libxl_dm.c
+++ b/tools/libxl/libxl_dm.c
@@ -714,10 +714,10 @@ static void spawn_stubdom_pvqemu_cb(libxl__egc *egc,
                                 int rc);
 
 static void spawn_stub_launch_dm(libxl__egc *egc,
-                                 libxl__ao_devices *aodevs, int ret);
+                                 libxl__multidev *aodevs, int ret);
 
 static void stubdom_pvqemu_cb(libxl__egc *egc,
-                              libxl__ao_devices *aodevs,
+                              libxl__multidev *aodevs,
                               int rc);
 
 static void spaw_stubdom_pvqemu_destroy_cb(libxl__egc *egc,
@@ -856,10 +856,10 @@ retry_transaction:
         if (errno == EAGAIN)
             goto retry_transaction;
 
-    libxl__multidev_begin(ao, &sdss->aodevs);
-    sdss->aodevs.callback = spawn_stub_launch_dm;
-    libxl__add_disks(egc, ao, dm_domid, dm_config, &sdss->aodevs);
-    libxl__multidev_prepared(egc, &sdss->aodevs, 0);
+    libxl__multidev_begin(ao, &sdss->multidev);
+    sdss->multidev.callback = spawn_stub_launch_dm;
+    libxl__add_disks(egc, ao, dm_domid, dm_config, &sdss->multidev);
+    libxl__multidev_prepared(egc, &sdss->multidev, 0);
 
     free(args);
     return;
@@ -872,9 +872,9 @@ out:
 }
 
 static void spawn_stub_launch_dm(libxl__egc *egc,
-                                 libxl__ao_devices *aodevs, int ret)
+                                 libxl__multidev *multidev, int ret)
 {
-    libxl__stub_dm_spawn_state *sdss = CONTAINER_OF(aodevs, *sdss, aodevs);
+    libxl__stub_dm_spawn_state *sdss = CONTAINER_OF(multidev, *sdss, multidev);
     STATE_AO_GC(sdss->dm.spawn.ao);
     libxl_ctx *ctx = libxl__gc_owner(gc);
     int i, num_console = STUBDOM_SPECIAL_CONSOLES;
@@ -982,22 +982,22 @@ static void spawn_stubdom_pvqemu_cb(libxl__egc *egc,
     if (rc) goto out;
 
     if (d_config->num_nics > 0) {
-        libxl__multidev_begin(ao, &sdss->aodevs);
-        sdss->aodevs.callback = stubdom_pvqemu_cb;
-        libxl__add_nics(egc, ao, dm_domid, d_config, &sdss->aodevs);
-        libxl__multidev_prepared(egc, &sdss->aodevs, 0);
+        libxl__multidev_begin(ao, &sdss->multidev);
+        sdss->multidev.callback = stubdom_pvqemu_cb;
+        libxl__add_nics(egc, ao, dm_domid, d_config, &sdss->multidev);
+        libxl__multidev_prepared(egc, &sdss->multidev, 0);
         return;
     }
 
 out:
-    stubdom_pvqemu_cb(egc, &sdss->aodevs, rc);
+    stubdom_pvqemu_cb(egc, &sdss->multidev, rc);
 }
 
 static void stubdom_pvqemu_cb(libxl__egc *egc,
-                              libxl__ao_devices *aodevs,
+                              libxl__multidev *multidev,
                               int rc)
 {
-    libxl__stub_dm_spawn_state *sdss = CONTAINER_OF(aodevs, *sdss, aodevs);
+    libxl__stub_dm_spawn_state *sdss = CONTAINER_OF(multidev, *sdss, multidev);
     STATE_AO_GC(sdss->dm.spawn.ao);
     uint32_t dm_domid = sdss->pvqemu.guest_domid;
 
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index 450dbe5..9315ae0 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -1791,7 +1791,7 @@ typedef enum {
 } libxl__device_action;
 
 typedef struct libxl__ao_device libxl__ao_device;
-typedef struct libxl__ao_devices libxl__ao_devices;
+typedef struct libxl__multidev libxl__multidev;
 typedef void libxl__device_callback(libxl__egc*, libxl__ao_device*);
 
 /* This functions sets the necessary libxl__ao_device struct values to use
@@ -1821,7 +1821,7 @@ struct libxl__ao_device {
     int rc;
     /* private for multidev */
     int active;
-    libxl__ao_devices *aodevs; /* reference to the containing multidev */
+    libxl__multidev *multidev; /* reference to the containing multidev */
     /* private for add/remove implementation */
     libxl__ev_devstate backend_ds;
     /* Bodge for Qemu devices, also used for timeout of hotplug execution */
@@ -1847,12 +1847,12 @@ struct libxl__ao_device {
  */
 
 /* Starts preparing to add/remove a bunch of devices. */
-_hidden void libxl__multidev_begin(libxl__ao *ao, libxl__ao_devices*);
+_hidden void libxl__multidev_begin(libxl__ao *ao, libxl__multidev*);
 
 /* Prepares to add/remove one of many devices.  Returns a libxl__ao_device
  * which has had libxl__prepare_ao_device called, and which has also
  * had ->callback set.  The user should not mess with aodev->callback. */
-_hidden libxl__ao_device *libxl__multidev_prepare(libxl__ao_devices*);
+_hidden libxl__ao_device *libxl__multidev_prepare(libxl__multidev*);
 
 /* Notifies the multidev machinery that we have now finished preparing
  * and initiating devices.  multidev->callback may then be called as
@@ -1860,10 +1860,10 @@ _hidden libxl__ao_device *libxl__multidev_prepare(libxl__ao_devices*);
  * outstanding, perhaps reentrantly.  If rc!=0 (error should have been
  * logged) multidev->callback will get a non-zero rc.
  * callback may be set by the user at any point before prepared. */
-_hidden void libxl__multidev_prepared(libxl__egc*, libxl__ao_devices*, int rc);
+_hidden void libxl__multidev_prepared(libxl__egc*, libxl__multidev*, int rc);
 
-typedef void libxl__devices_callback(libxl__egc*, libxl__ao_devices*, int rc);
-struct libxl__ao_devices {
+typedef void libxl__devices_callback(libxl__egc*, libxl__multidev*, int rc);
+struct libxl__multidev {
     /* set by user: */
     libxl__devices_callback *callback;
     /* for private use by libxl__...ao_devices... machinery: */
@@ -2336,7 +2336,7 @@ struct libxl__devices_remove_state {
     libxl__devices_remove_callback *callback;
     int force; /* libxl_device_TYPE_destroy rather than _remove */
     /* private */
-    libxl__ao_devices aodevs;
+    libxl__multidev multidev;
     int num_devices;
 };
 
@@ -2380,7 +2380,7 @@ _hidden void libxl__devices_destroy(libxl__egc *egc,
                                     libxl__devices_remove_state *drs);
 
 /* Helper function to add a bunch of disks. This should be used when
- * the caller is inside an async op. "devices" will NOT be prepared by
+ * the caller is inside an async op. "multidev" will NOT be prepared by
  * this function, so the caller must make sure to call
  * libxl__multidev_begin before calling this function.
  *
@@ -2389,11 +2389,11 @@ _hidden void libxl__devices_destroy(libxl__egc *egc,
  */
 _hidden void libxl__add_disks(libxl__egc *egc, libxl__ao *ao, uint32_t domid,
                               libxl_domain_config *d_config,
-                              libxl__ao_devices *aodevs);
+                              libxl__multidev *multidev);
 
 _hidden void libxl__add_nics(libxl__egc *egc, libxl__ao *ao, uint32_t domid,
                              libxl_domain_config *d_config,
-                             libxl__ao_devices *aodevs);
+                             libxl__multidev *multidev);
 
 /*----- device model creation -----*/
 
@@ -2429,7 +2429,7 @@ typedef struct {
     libxl__domain_build_state dm_state;
     libxl__dm_spawn_state pvqemu;
     libxl__destroy_domid_state dis;
-    libxl__ao_devices aodevs;
+    libxl__multidev multidev;
 } libxl__stub_dm_spawn_state;
 
 _hidden void libxl__spawn_stub_dm(libxl__egc *egc, libxl__stub_dm_spawn_state*);
@@ -2461,7 +2461,7 @@ struct libxl__domain_create_state {
     libxl__save_helper_state shs;
     /* necessary if the domain creation failed and we have to destroy it */
     libxl__domain_destroy_state dds;
-    libxl__ao_devices aodevs;
+    libxl__multidev multidev;
 };
 
 /*----- Domain suspend (save) functions -----*/
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 16:24:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 16:24:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swbim-0007li-Q8; Wed, 01 Aug 2012 16:24:28 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Swbik-0007iq-0Z
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 16:24:26 +0000
Received: from [85.158.139.83:6406] by server-5.bemta-5.messagelabs.com id
	5B/01-02722-93859105; Wed, 01 Aug 2012 16:24:25 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-8.tower-182.messagelabs.com!1343838262!18501401!6
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13675 invoked from network); 1 Aug 2012 16:24:24 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 16:24:24 -0000
X-IronPort-AV: E=Sophos;i="4.77,695,1336348800"; d="scan'208";a="13808041"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 16:24:22 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 1 Aug 2012 17:24:22 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Swbif-0007Ix-R4; Wed, 01 Aug 2012 16:24:21 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Swbif-0004d4-No;
	Wed, 01 Aug 2012 17:24:21 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xen.org>
Date: Wed, 1 Aug 2012 17:24:15 +0100
Message-ID: <1343838260-17725-7-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1343838260-17725-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1343838260-17725-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [Xen-devel] [PATCH 06/11] libxl: rename aodevs to multidev
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

To be consistent with the new function naming, rename
libxl__ao_devices to libxl__multidev, and rename all variables called
aodevs to multidev.

No functional change.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 tools/libxl/libxl_create.c   |   30 +++++++++---------
 tools/libxl/libxl_device.c   |   68 +++++++++++++++++++++---------------------
 tools/libxl/libxl_dm.c       |   30 +++++++++---------
 tools/libxl/libxl_internal.h |   26 ++++++++--------
 4 files changed, 77 insertions(+), 77 deletions(-)

diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index 5275373..5f0d26f 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -599,10 +599,10 @@ static void domcreate_bootloader_done(libxl__egc *egc,
                                       libxl__bootloader_state *bl,
                                       int rc);
 
-static void domcreate_launch_dm(libxl__egc *egc, libxl__ao_devices *aodevs,
+static void domcreate_launch_dm(libxl__egc *egc, libxl__multidev *aodevs,
                                 int ret);
 
-static void domcreate_attach_pci(libxl__egc *egc, libxl__ao_devices *aodevs,
+static void domcreate_attach_pci(libxl__egc *egc, libxl__multidev *aodevs,
                                  int ret);
 
 static void domcreate_console_available(libxl__egc *egc,
@@ -909,10 +909,10 @@ static void domcreate_rebuild_done(libxl__egc *egc,
 
     store_libxl_entry(gc, domid, &d_config->b_info);
 
-    libxl__multidev_begin(ao, &dcs->aodevs);
-    dcs->aodevs.callback = domcreate_launch_dm;
-    libxl__add_disks(egc, ao, domid, d_config, &dcs->aodevs);
-    libxl__multidev_prepared(egc, &dcs->aodevs, 0);
+    libxl__multidev_begin(ao, &dcs->multidev);
+    dcs->multidev.callback = domcreate_launch_dm;
+    libxl__add_disks(egc, ao, domid, d_config, &dcs->multidev);
+    libxl__multidev_prepared(egc, &dcs->multidev, 0);
 
     return;
 
@@ -921,10 +921,10 @@ static void domcreate_rebuild_done(libxl__egc *egc,
     domcreate_complete(egc, dcs, ret);
 }
 
-static void domcreate_launch_dm(libxl__egc *egc, libxl__ao_devices *aodevs,
+static void domcreate_launch_dm(libxl__egc *egc, libxl__multidev *multidev,
                                 int ret)
 {
-    libxl__domain_create_state *dcs = CONTAINER_OF(aodevs, *dcs, aodevs);
+    libxl__domain_create_state *dcs = CONTAINER_OF(multidev, *dcs, multidev);
     STATE_AO_GC(dcs->ao);
     int i;
 
@@ -1039,14 +1039,14 @@ static void domcreate_devmodel_started(libxl__egc *egc,
     /* Plug nic interfaces */
     if (d_config->num_nics > 0) {
         /* Attach nics */
-        libxl__multidev_begin(ao, &dcs->aodevs);
-        dcs->aodevs.callback = domcreate_attach_pci;
-        libxl__add_nics(egc, ao, domid, d_config, &dcs->aodevs);
-        libxl__multidev_prepared(egc, &dcs->aodevs, 0);
+        libxl__multidev_begin(ao, &dcs->multidev);
+        dcs->multidev.callback = domcreate_attach_pci;
+        libxl__add_nics(egc, ao, domid, d_config, &dcs->multidev);
+        libxl__multidev_prepared(egc, &dcs->multidev, 0);
         return;
     }
 
-    domcreate_attach_pci(egc, &dcs->aodevs, 0);
+    domcreate_attach_pci(egc, &dcs->multidev, 0);
     return;
 
 error_out:
@@ -1054,10 +1054,10 @@ error_out:
     domcreate_complete(egc, dcs, ret);
 }
 
-static void domcreate_attach_pci(libxl__egc *egc, libxl__ao_devices *aodevs,
+static void domcreate_attach_pci(libxl__egc *egc, libxl__multidev *multidev,
                                  int ret)
 {
-    libxl__domain_create_state *dcs = CONTAINER_OF(aodevs, *dcs, aodevs);
+    libxl__domain_create_state *dcs = CONTAINER_OF(multidev, *dcs, multidev);
     STATE_AO_GC(dcs->ao);
     int i;
     libxl_ctx *ctx = CTX;
diff --git a/tools/libxl/libxl_device.c b/tools/libxl/libxl_device.c
index 41d527b..84fa06c 100644
--- a/tools/libxl/libxl_device.c
+++ b/tools/libxl/libxl_device.c
@@ -403,13 +403,13 @@ void libxl__prepare_ao_device(libxl__ao *ao, libxl__ao_device *aodev)
 
 /* multidev */
 
-void libxl__multidev_begin(libxl__ao *ao, libxl__ao_devices *aodevs)
+void libxl__multidev_begin(libxl__ao *ao, libxl__multidev *multidev)
 {
     AO_GC;
 
-    aodevs->ao = ao;
-    aodevs->array = 0;
-    aodevs->used = aodevs->allocd = 0;
+    multidev->ao = ao;
+    multidev->array = 0;
+    multidev->used = multidev->allocd = 0;
 
     /* We allocate an aodev to represent the operation of preparing
      * all of the other operations.  This operation is completed when
@@ -422,25 +422,25 @@ void libxl__multidev_begin(libxl__ao *ao, libxl__ao_devices *aodevs)
      *  (iii) we have a nice consistent way to deal with any
      *      error that might occur while deciding what to initiate
      */
-    aodevs->preparation = libxl__multidev_prepare(aodevs);
+    multidev->preparation = libxl__multidev_prepare(multidev);
 }
 
 static void multidev_one_callback(libxl__egc *egc, libxl__ao_device *aodev);
 
-libxl__ao_device *libxl__multidev_prepare(libxl__ao_devices *aodevs) {
-    STATE_AO_GC(aodevs->ao);
+libxl__ao_device *libxl__multidev_prepare(libxl__multidev *multidev) {
+    STATE_AO_GC(multidev->ao);
     libxl__ao_device *aodev;
 
     GCNEW(aodev);
-    aodev->aodevs = aodevs;
+    aodev->multidev = multidev;
     aodev->callback = multidev_one_callback;
     libxl__prepare_ao_device(ao, aodev);
 
-    if (aodevs->used >= aodevs->allocd) {
-        aodevs->allocd = aodevs->used * 2 + 5;
-        GCREALLOC_ARRAY(aodevs->array, aodevs->allocd);
+    if (multidev->used >= multidev->allocd) {
+        multidev->allocd = multidev->used * 2 + 5;
+        GCREALLOC_ARRAY(multidev->array, multidev->allocd);
     }
-    aodevs->array[aodevs->used++] = aodev;
+    multidev->array[multidev->used++] = aodev;
 
     return aodev;
 }
@@ -448,28 +448,28 @@ libxl__ao_device *libxl__multidev_prepare(libxl__ao_devices *aodevs) {
 static void multidev_one_callback(libxl__egc *egc, libxl__ao_device *aodev)
 {
     STATE_AO_GC(aodev->ao);
-    libxl__ao_devices *aodevs = aodev->aodevs;
+    libxl__multidev *multidev = aodev->multidev;
     int i, error = 0;
 
     aodev->active = 0;
 
-    for (i = 0; i < aodevs->used; i++) {
-        if (aodevs->array[i]->active)
+    for (i = 0; i < multidev->used; i++) {
+        if (multidev->array[i]->active)
             return;
 
-        if (aodevs->array[i]->rc)
-            error = aodevs->array[i]->rc;
+        if (multidev->array[i]->rc)
+            error = multidev->array[i]->rc;
     }
 
-    aodevs->callback(egc, aodevs, error);
+    multidev->callback(egc, multidev, error);
     return;
 }
 
-void libxl__multidev_prepared(libxl__egc *egc, libxl__ao_devices *aodevs,
-                              int rc)
+void libxl__multidev_prepared(libxl__egc *egc,
+                              libxl__multidev *multidev, int rc)
 {
-    aodevs->preparation->rc = rc;
-    multidev_one_callback(egc, aodevs->preparation);
+    multidev->preparation->rc = rc;
+    multidev_one_callback(egc, multidev->preparation);
 }
 
 /******************************************************************************/
@@ -486,12 +486,12 @@ void libxl__multidev_prepared(libxl__egc *egc, libxl__ao_devices *aodevs,
 #define DEFINE_DEVICES_ADD(type)                                        \
     void libxl__add_##type##s(libxl__egc *egc, libxl__ao *ao, uint32_t domid, \
                               libxl_domain_config *d_config,            \
-                              libxl__ao_devices *aodevs)                \
+                              libxl__multidev *multidev)                \
     {                                                                   \
         AO_GC;                                                          \
         int i;                                                          \
         for (i = 0; i < d_config->num_##type##s; i++) {                 \
-            libxl__ao_device *aodev = libxl__multidev_prepare(aodevs);  \
+            libxl__ao_device *aodev = libxl__multidev_prepare(multidev);  \
             libxl__device_##type##_add(egc, domid, &d_config->type##s[i], \
                                        aodev);                          \
         }                                                               \
@@ -531,8 +531,8 @@ out:
 
 /* Callback for device destruction */
 
-static void devices_remove_callback(libxl__egc *egc, libxl__ao_devices *aodevs,
-                                    int rc);
+static void devices_remove_callback(libxl__egc *egc,
+                                    libxl__multidev *multidev, int rc);
 
 void libxl__devices_destroy(libxl__egc *egc, libxl__devices_remove_state *drs)
 {
@@ -544,12 +544,12 @@ void libxl__devices_destroy(libxl__egc *egc, libxl__devices_remove_state *drs)
     char **kinds = NULL, **devs = NULL;
     int i, j, rc = 0;
     libxl__device *dev;
-    libxl__ao_devices *aodevs = &drs->aodevs;
+    libxl__multidev *multidev = &drs->multidev;
     libxl__ao_device *aodev;
     libxl__device_kind kind;
 
-    libxl__multidev_begin(ao, aodevs);
-    aodevs->callback = devices_remove_callback;
+    libxl__multidev_begin(ao, multidev);
+    multidev->callback = devices_remove_callback;
 
     path = libxl__sprintf(gc, "/local/domain/%d/device", domid);
     kinds = libxl__xs_directory(gc, XBT_NULL, path, &num_kinds);
@@ -586,7 +586,7 @@ void libxl__devices_destroy(libxl__egc *egc, libxl__devices_remove_state *drs)
                     libxl__device_destroy(gc, dev);
                     continue;
                 }
-                aodev = libxl__multidev_prepare(aodevs);
+                aodev = libxl__multidev_prepare(multidev);
                 aodev->action = DEVICE_DISCONNECT;
                 aodev->dev = dev;
                 aodev->force = drs->force;
@@ -612,7 +612,7 @@ void libxl__devices_destroy(libxl__egc *egc, libxl__devices_remove_state *drs)
     }
 
 out:
-    libxl__multidev_prepared(egc, aodevs, rc);
+    libxl__multidev_prepared(egc, multidev, rc);
 }
 
 /* Callbacks for device related operations */
@@ -1002,10 +1002,10 @@ static void device_hotplug_clean(libxl__gc *gc, libxl__ao_device *aodev)
     assert(!libxl__ev_child_inuse(&aodev->child));
 }
 
-static void devices_remove_callback(libxl__egc *egc, libxl__ao_devices *aodevs,
-                                    int rc)
+static void devices_remove_callback(libxl__egc *egc,
+                                    libxl__multidev *multidev, int rc)
 {
-    libxl__devices_remove_state *drs = CONTAINER_OF(aodevs, *drs, aodevs);
+    libxl__devices_remove_state *drs = CONTAINER_OF(multidev, *drs, multidev);
     STATE_AO_GC(drs->ao);
 
     drs->callback(egc, drs, rc);
diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
index 66aa45e..0c0084f 100644
--- a/tools/libxl/libxl_dm.c
+++ b/tools/libxl/libxl_dm.c
@@ -714,10 +714,10 @@ static void spawn_stubdom_pvqemu_cb(libxl__egc *egc,
                                 int rc);
 
 static void spawn_stub_launch_dm(libxl__egc *egc,
-                                 libxl__ao_devices *aodevs, int ret);
+                                 libxl__multidev *aodevs, int ret);
 
 static void stubdom_pvqemu_cb(libxl__egc *egc,
-                              libxl__ao_devices *aodevs,
+                              libxl__multidev *aodevs,
                               int rc);
 
 static void spaw_stubdom_pvqemu_destroy_cb(libxl__egc *egc,
@@ -856,10 +856,10 @@ retry_transaction:
         if (errno == EAGAIN)
             goto retry_transaction;
 
-    libxl__multidev_begin(ao, &sdss->aodevs);
-    sdss->aodevs.callback = spawn_stub_launch_dm;
-    libxl__add_disks(egc, ao, dm_domid, dm_config, &sdss->aodevs);
-    libxl__multidev_prepared(egc, &sdss->aodevs, 0);
+    libxl__multidev_begin(ao, &sdss->multidev);
+    sdss->multidev.callback = spawn_stub_launch_dm;
+    libxl__add_disks(egc, ao, dm_domid, dm_config, &sdss->multidev);
+    libxl__multidev_prepared(egc, &sdss->multidev, 0);
 
     free(args);
     return;
@@ -872,9 +872,9 @@ out:
 }
 
 static void spawn_stub_launch_dm(libxl__egc *egc,
-                                 libxl__ao_devices *aodevs, int ret)
+                                 libxl__multidev *multidev, int ret)
 {
-    libxl__stub_dm_spawn_state *sdss = CONTAINER_OF(aodevs, *sdss, aodevs);
+    libxl__stub_dm_spawn_state *sdss = CONTAINER_OF(multidev, *sdss, multidev);
     STATE_AO_GC(sdss->dm.spawn.ao);
     libxl_ctx *ctx = libxl__gc_owner(gc);
     int i, num_console = STUBDOM_SPECIAL_CONSOLES;
@@ -982,22 +982,22 @@ static void spawn_stubdom_pvqemu_cb(libxl__egc *egc,
     if (rc) goto out;
 
     if (d_config->num_nics > 0) {
-        libxl__multidev_begin(ao, &sdss->aodevs);
-        sdss->aodevs.callback = stubdom_pvqemu_cb;
-        libxl__add_nics(egc, ao, dm_domid, d_config, &sdss->aodevs);
-        libxl__multidev_prepared(egc, &sdss->aodevs, 0);
+        libxl__multidev_begin(ao, &sdss->multidev);
+        sdss->multidev.callback = stubdom_pvqemu_cb;
+        libxl__add_nics(egc, ao, dm_domid, d_config, &sdss->multidev);
+        libxl__multidev_prepared(egc, &sdss->multidev, 0);
         return;
     }
 
 out:
-    stubdom_pvqemu_cb(egc, &sdss->aodevs, rc);
+    stubdom_pvqemu_cb(egc, &sdss->multidev, rc);
 }
 
 static void stubdom_pvqemu_cb(libxl__egc *egc,
-                              libxl__ao_devices *aodevs,
+                              libxl__multidev *multidev,
                               int rc)
 {
-    libxl__stub_dm_spawn_state *sdss = CONTAINER_OF(aodevs, *sdss, aodevs);
+    libxl__stub_dm_spawn_state *sdss = CONTAINER_OF(multidev, *sdss, multidev);
     STATE_AO_GC(sdss->dm.spawn.ao);
     uint32_t dm_domid = sdss->pvqemu.guest_domid;
 
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index 450dbe5..9315ae0 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -1791,7 +1791,7 @@ typedef enum {
 } libxl__device_action;
 
 typedef struct libxl__ao_device libxl__ao_device;
-typedef struct libxl__ao_devices libxl__ao_devices;
+typedef struct libxl__multidev libxl__multidev;
 typedef void libxl__device_callback(libxl__egc*, libxl__ao_device*);
 
 /* This functions sets the necessary libxl__ao_device struct values to use
@@ -1821,7 +1821,7 @@ struct libxl__ao_device {
     int rc;
     /* private for multidev */
     int active;
-    libxl__ao_devices *aodevs; /* reference to the containing multidev */
+    libxl__multidev *multidev; /* reference to the containing multidev */
     /* private for add/remove implementation */
     libxl__ev_devstate backend_ds;
     /* Bodge for Qemu devices, also used for timeout of hotplug execution */
@@ -1847,12 +1847,12 @@ struct libxl__ao_device {
  */
 
 /* Starts preparing to add/remove a bunch of devices. */
-_hidden void libxl__multidev_begin(libxl__ao *ao, libxl__ao_devices*);
+_hidden void libxl__multidev_begin(libxl__ao *ao, libxl__multidev*);
 
 /* Prepares to add/remove one of many devices.  Returns a libxl__ao_device
  * which has had libxl__prepare_ao_device called, and which has also
  * had ->callback set.  The user should not mess with aodev->callback. */
-_hidden libxl__ao_device *libxl__multidev_prepare(libxl__ao_devices*);
+_hidden libxl__ao_device *libxl__multidev_prepare(libxl__multidev*);
 
 /* Notifies the multidev machinery that we have now finished preparing
  * and initiating devices.  multidev->callback may then be called as
@@ -1860,10 +1860,10 @@ _hidden libxl__ao_device *libxl__multidev_prepare(libxl__ao_devices*);
  * outstanding, perhaps reentrantly.  If rc!=0 (error should have been
  * logged) multidev->callback will get a non-zero rc.
  * callback may be set by the user at any point before prepared. */
-_hidden void libxl__multidev_prepared(libxl__egc*, libxl__ao_devices*, int rc);
+_hidden void libxl__multidev_prepared(libxl__egc*, libxl__multidev*, int rc);
 
-typedef void libxl__devices_callback(libxl__egc*, libxl__ao_devices*, int rc);
-struct libxl__ao_devices {
+typedef void libxl__devices_callback(libxl__egc*, libxl__multidev*, int rc);
+struct libxl__multidev {
     /* set by user: */
     libxl__devices_callback *callback;
     /* for private use by libxl__...ao_devices... machinery: */
@@ -2336,7 +2336,7 @@ struct libxl__devices_remove_state {
     libxl__devices_remove_callback *callback;
     int force; /* libxl_device_TYPE_destroy rather than _remove */
     /* private */
-    libxl__ao_devices aodevs;
+    libxl__multidev multidev;
     int num_devices;
 };
 
@@ -2380,7 +2380,7 @@ _hidden void libxl__devices_destroy(libxl__egc *egc,
                                     libxl__devices_remove_state *drs);
 
 /* Helper function to add a bunch of disks. This should be used when
- * the caller is inside an async op. "devices" will NOT be prepared by
+ * the caller is inside an async op. "multidev" will NOT be prepared by
  * this function, so the caller must make sure to call
  * libxl__multidev_begin before calling this function.
  *
@@ -2389,11 +2389,11 @@ _hidden void libxl__devices_destroy(libxl__egc *egc,
  */
 _hidden void libxl__add_disks(libxl__egc *egc, libxl__ao *ao, uint32_t domid,
                               libxl_domain_config *d_config,
-                              libxl__ao_devices *aodevs);
+                              libxl__multidev *multidev);
 
 _hidden void libxl__add_nics(libxl__egc *egc, libxl__ao *ao, uint32_t domid,
                              libxl_domain_config *d_config,
-                             libxl__ao_devices *aodevs);
+                             libxl__multidev *multidev);
 
 /*----- device model creation -----*/
 
@@ -2429,7 +2429,7 @@ typedef struct {
     libxl__domain_build_state dm_state;
     libxl__dm_spawn_state pvqemu;
     libxl__destroy_domid_state dis;
-    libxl__ao_devices aodevs;
+    libxl__multidev multidev;
 } libxl__stub_dm_spawn_state;
 
 _hidden void libxl__spawn_stub_dm(libxl__egc *egc, libxl__stub_dm_spawn_state*);
@@ -2461,7 +2461,7 @@ struct libxl__domain_create_state {
     libxl__save_helper_state shs;
     /* necessary if the domain creation failed and we have to destroy it */
     libxl__domain_destroy_state dds;
-    libxl__ao_devices aodevs;
+    libxl__multidev multidev;
 };
 
 /*----- Domain suspend (save) functions -----*/
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 16:24:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 16:24:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swbin-0007mH-Aj; Wed, 01 Aug 2012 16:24:29 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Swbik-0007is-R7
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 16:24:27 +0000
Received: from [85.158.139.83:3184] by server-11.bemta-5.messagelabs.com id
	7D/C5-20400-93859105; Wed, 01 Aug 2012 16:24:25 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-8.tower-182.messagelabs.com!1343838262!18501401!8
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13714 invoked from network); 1 Aug 2012 16:24:25 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 16:24:25 -0000
X-IronPort-AV: E=Sophos;i="4.77,695,1336348800"; d="scan'208";a="13808046"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 16:24:23 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 1 Aug 2012 17:24:22 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Swbig-0007JA-5A; Wed, 01 Aug 2012 16:24:22 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Swbig-0004dO-2e;
	Wed, 01 Aug 2012 17:24:22 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xen.org>
Date: Wed, 1 Aug 2012 17:24:20 +0100
Message-ID: <1343838260-17725-12-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1343838260-17725-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1343838260-17725-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 11/11] libxl: -Wunused-parameter
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

*** DO NOT APPLY ***

We have recently had a couple of bugs which involved ignoring the rc
parameter to a callback function.  I thought I would try
-Wunused-parameter.  Here are the results.

I found three further problems:

 * libxl_wait_for_free_memory takes a domid parameter but its
   semantics don't seem to call for that.  This function is
   going to have a big warning put on it for 4.2 and that
   should happen soon.

 * qmp_synchronous_send has an `ask_timeout' parameter which is
   ignored.

 * The autogenerated function libxl_event_init_type ignores the type
   parameter.

Things I needed to do to get the rest of the code to compile:

 * Remove one harmless unused parameter from an internal function.
   (Earlier in this series.)

 * Add an assert to make the error handling in the broken remus code
   slightly less broken.  (Earlier in this series.)

 * Provide machinery in the Makefile for passing different CFLAGS to
   libxl as opposed to xl and libxlu.  The flex- and bison-generated
   files in libxlu can't be compiled with -Wunused-parameter.

 * Define a new helper macro
        #define USE(var) ((void)(var))
   and use it 43 times.  The pattern is something like
        USE(egc);
   in a function which takes egc but doesn't need it.  If the
   parameter is later used, this is harmless.  In functions
   which are placeholders the USE statement should be placed in the
   middle of the function where the parameter would be used if the
   function is changed later, so that the USE gets deleted by the
   patch introducing the implementation.

 * Define a new helper macro for use only in other macros
        #define MAYBE_UNUSED __attribute__((unused))
   and use it in 10 different places.

 * Define new macros for helping declare common types of callback
   functions.  For example:

        #define EV_XSWATCH_CALLBACK_PARAMS(egc, watch, wpath, epath)  \
            libxl__egc *egc MAYBE_UNUSED,                             \
            libxl__ev_xswatch *watch MAYBE_UNUSED,                    \
            const char *wpath MAYBE_UNUSED,                           \
            const char *epath MAYBE_UNUSED

   which is used like this:

        -static void some_callback(libxl__egc *egc, libxl__ev_xswatch *watch,
        -                          const char *wpath, const char *epath)
        +static void some_callback(EV_XSWATCH_CALLBACK_PARAMS
        +                          (egc, watch, wpath, epath))
        {
            ... now we use (or not) egc, watch, wpath, etc. or not as we like

   This somewhat resembles a traditional K&R C typeless function
   definition.  The types of the parameters are of course still
   defined for the compiler, along with the information that the
   parameters might be unused.

   There are 4 macros of this kind with 22 call sites.

IMO the cost (65 places in ordinary code where we have to write
something somewhat ugly) is worth the benefit (finding, if we had
deployed this right away, around 6 bugs).  But it's arguable.

*** DO NOT APPLY ***

Anyway, this patch must not be applied right now because it causes the
build to fail.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
---
 tools/libxl/Makefile             |    4 +++-
 tools/libxl/libxl.c              |   29 +++++++++++++++++++++++++----
 tools/libxl/libxl_aoutils.c      |    8 ++++----
 tools/libxl/libxl_blktap2.c      |    1 +
 tools/libxl/libxl_bootloader.c   |    6 ++++++
 tools/libxl/libxl_create.c       |    2 ++
 tools/libxl/libxl_device.c       |    9 +++++----
 tools/libxl/libxl_dm.c           |    4 ++++
 tools/libxl/libxl_dom.c          |    8 ++++----
 tools/libxl/libxl_event.c        |   22 +++++++++++++---------
 tools/libxl/libxl_exec.c         |    8 ++++----
 tools/libxl/libxl_fork.c         |    1 +
 tools/libxl/libxl_internal.h     |   24 ++++++++++++++++++++++--
 tools/libxl/libxl_pci.c          |    6 ++++++
 tools/libxl/libxl_qmp.c          |   17 +++++++++--------
 tools/libxl/libxl_save_callout.c |    4 ++--
 tools/libxl/libxl_utils.c        |    2 ++
 17 files changed, 113 insertions(+), 42 deletions(-)

diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
index 424a7ee..7d1ebf9 100644
--- a/tools/libxl/Makefile
+++ b/tools/libxl/Makefile
@@ -13,6 +13,8 @@ XLUMINOR = 0
 
 CFLAGS += -Werror -Wno-format-zero-length -Wmissing-declarations \
 	-Wno-declaration-after-statement -Wformat-nonliteral
+CFLAGS_FOR_LIBXL = -Wunused-parameter
+
 CFLAGS += -I. -fPIC
 
 ifeq ($(CONFIG_Linux),y)
@@ -71,7 +73,7 @@ LIBXL_OBJS = flexarray.o libxl.o libxl_create.o libxl_dm.o libxl_pci.o \
 			libxl_qmp.o libxl_event.o libxl_fork.o $(LIBXL_OBJS-y)
 LIBXL_OBJS += _libxl_types.o libxl_flask.o _libxl_types_internal.o
 
-$(LIBXL_OBJS): CFLAGS += $(CFLAGS_libxenctrl) $(CFLAGS_libxenguest) $(CFLAGS_libxenstore) $(CFLAGS_libblktapctl) -include $(XEN_ROOT)/tools/config.h
+$(LIBXL_OBJS): CFLAGS += $(CFLAGS_FOR_LIBXL) $(CFLAGS_libxenctrl) $(CFLAGS_libxenguest) $(CFLAGS_libxenstore) $(CFLAGS_libblktapctl) -include $(XEN_ROOT)/tools/config.h
 
 AUTOINCS= libxlu_cfg_y.h libxlu_cfg_l.h _libxl_list.h _paths.h \
 	_libxl_save_msgs_callout.h _libxl_save_msgs_helper.h
diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index 00ddc0e..3ad83ea 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -28,6 +28,8 @@ int libxl_ctx_alloc(libxl_ctx **pctx, int version,
     struct stat stat_buf;
     int rc;
 
+    USE(flags);
+
     if (version != LIBXL_VERSION) { rc = ERROR_VERSION; goto out; }
 
     ctx = malloc(sizeof(*ctx));
@@ -700,6 +702,8 @@ int libxl_domain_remus_start(libxl_ctx *ctx, libxl_domain_remus_info *info,
     libxl__domain_suspend_state *dss;
     int rc;
 
+    USE(recv_fd); /* TODO get rid of this and actually use it! */
+
     libxl_domain_type type = libxl__domain_type(gc, domid);
     if (type == LIBXL_DOMAIN_TYPE_INVALID) {
         rc = ERROR_FAIL;
@@ -979,8 +983,9 @@ static void domain_death_occurred(libxl__egc *egc,
     LIBXL_TAILQ_INSERT_HEAD(&CTX->death_reported, evg, entry);
 }
 
-static void domain_death_xswatch_callback(libxl__egc *egc, libxl__ev_xswatch *w,
-                                        const char *wpath, const char *epath) {
+static void domain_death_xswatch_callback(EV_XSWATCH_CALLBACK_PARAMS
+                                          (egc, watch, wpath, epath))
+{
     EGC_GC;
     libxl_evgen_domain_death *evg;
     uint32_t domid;
@@ -1137,8 +1142,9 @@ void libxl_evdisable_domain_death(libxl_ctx *ctx,
     GC_FREE;
 }
 
-static void disk_eject_xswatch_callback(libxl__egc *egc, libxl__ev_xswatch *w,
-                                        const char *wpath, const char *epath) {
+static void disk_eject_xswatch_callback(EV_XSWATCH_CALLBACK_PARAMS
+                                        (egc, w, wpath, epath))
+{
     EGC_GC;
     libxl_evgen_disk_eject *evg = (void*)w;
     char *backend;
@@ -2524,6 +2530,8 @@ static int libxl__device_from_nic(libxl__gc *gc, uint32_t domid,
                                   libxl_device_nic *nic,
                                   libxl__device *device)
 {
+    USE(gc);
+
     device->backend_devid    = nic->devid;
     device->backend_domid    = nic->backend_domid;
     device->backend_kind     = LIBXL__DEVICE_KIND_VIF;
@@ -2902,6 +2910,9 @@ out:
 
 int libxl__device_vkb_setdefault(libxl__gc *gc, libxl_device_vkb *vkb)
 {
+    USE(gc);
+
+    USE(vkb);
     return 0;
 }
 
@@ -2909,6 +2920,7 @@ static int libxl__device_from_vkb(libxl__gc *gc, uint32_t domid,
                                   libxl_device_vkb *vkb,
                                   libxl__device *device)
 {
+    USE(gc);
     device->backend_devid = vkb->devid;
     device->backend_domid = vkb->backend_domid;
     device->backend_kind = LIBXL__DEVICE_KIND_VKBD;
@@ -2990,6 +3002,7 @@ out:
 
 int libxl__device_vfb_setdefault(libxl__gc *gc, libxl_device_vfb *vfb)
 {
+    USE(gc);
     libxl_defbool_setdefault(&vfb->vnc.enable, true);
     if (libxl_defbool_val(vfb->vnc.enable)) {
         if (!vfb->vnc.listen) {
@@ -3010,6 +3023,7 @@ static int libxl__device_from_vfb(libxl__gc *gc, uint32_t domid,
                                   libxl_device_vfb *vfb,
                                   libxl__device *device)
 {
+    USE(gc);
     device->backend_devid = vfb->devid;
     device->backend_domid = vfb->backend_domid;
     device->backend_kind = LIBXL__DEVICE_KIND_VFB;
@@ -3936,8 +3950,13 @@ libxl_scheduler libxl_get_scheduler(libxl_ctx *ctx)
 static int sched_arinc653_domain_set(libxl__gc *gc, uint32_t domid,
                                      const libxl_domain_sched_params *scinfo)
 {
+    USE(gc);
+
     /* Currently, the ARINC 653 scheduler does not take any domain-specific
          configuration, so we simply return success. */
+    USE(domid);
+    USE(scinfo);
+
     return 0;
 }
 
@@ -4385,6 +4404,8 @@ int libxl_xen_console_read_line(libxl_ctx *ctx,
 void libxl_xen_console_read_finish(libxl_ctx *ctx,
                                    libxl_xen_console_reader *cr)
 {
+    USE(ctx);
+
     free(cr->buffer);
     free(cr);
 }
diff --git a/tools/libxl/libxl_aoutils.c b/tools/libxl/libxl_aoutils.c
index 4bd5484..e91dc9c 100644
--- a/tools/libxl/libxl_aoutils.c
+++ b/tools/libxl/libxl_aoutils.c
@@ -114,8 +114,8 @@ static int datacopier_pollhup_handled(libxl__egc *egc,
     return 0;
 }
 
-static void datacopier_readable(libxl__egc *egc, libxl__ev_fd *ev,
-                                int fd, short events, short revents) {
+static void datacopier_readable(EV_FD_CALLBACK_PARAMS
+                                (egc, ev, fd, events, revents)) {
     libxl__datacopier_state *dc = CONTAINER_OF(ev, *dc, toread);
     STATE_AO_GC(dc->ao);
 
@@ -178,8 +178,8 @@ static void datacopier_readable(libxl__egc *egc, libxl__ev_fd *ev,
     datacopier_check_state(egc, dc);
 }
 
-static void datacopier_writable(libxl__egc *egc, libxl__ev_fd *ev,
-                                int fd, short events, short revents) {
+static void datacopier_writable(EV_FD_CALLBACK_PARAMS(
+                                egc, ev, fd, events, revents)) {
     libxl__datacopier_state *dc = CONTAINER_OF(ev, *dc, towrite);
     STATE_AO_GC(dc->ao);
 
diff --git a/tools/libxl/libxl_blktap2.c b/tools/libxl/libxl_blktap2.c
index 2c40182..660a669 100644
--- a/tools/libxl/libxl_blktap2.c
+++ b/tools/libxl/libxl_blktap2.c
@@ -19,6 +19,7 @@
 
 int libxl__blktap_enabled(libxl__gc *gc)
 {
+    USE(gc);
     const char *msg;
     return !tap_ctl_check(&msg);
 }
diff --git a/tools/libxl/libxl_bootloader.c b/tools/libxl/libxl_bootloader.c
index 2e65383..bfae328 100644
--- a/tools/libxl/libxl_bootloader.c
+++ b/tools/libxl/libxl_bootloader.c
@@ -88,6 +88,7 @@ static void make_bootloader_args(libxl__gc *gc, libxl__bootloader_state *bl,
 static int setup_xenconsoled_pty(libxl__egc *egc, libxl__bootloader_state *bl,
                                  char *slave_path, size_t slave_path_len)
 {
+    USE(egc);
     STATE_AO_GC(bl->ao);
     struct termios termattr;
     int r, rc;
@@ -141,6 +142,7 @@ static const char *bootloader_result_command(libxl__gc *gc, const char *buf,
 static int parse_bootloader_result(libxl__egc *egc,
                                    libxl__bootloader_state *bl)
 {
+    USE(egc);
     STATE_AO_GC(bl->ao);
     char buf[PATH_MAX*2];
     FILE *f = 0;
@@ -221,6 +223,7 @@ void libxl__bootloader_init(libxl__bootloader_state *bl)
 
 static void bootloader_cleanup(libxl__egc *egc, libxl__bootloader_state *bl)
 {
+    USE(egc);
     STATE_AO_GC(bl->ao);
     int i;
 
@@ -256,6 +259,8 @@ static void bootloader_local_detached_cb(libxl__egc *egc,
 static void bootloader_callback(libxl__egc *egc, libxl__bootloader_state *bl,
                                 int rc)
 {
+    USE(egc);
+
     if (!bl->rc)
         bl->rc = rc;
 
@@ -285,6 +290,7 @@ static void bootloader_local_detached_cb(libxl__egc *egc,
 static void bootloader_stop(libxl__egc *egc,
                              libxl__bootloader_state *bl, int rc)
 {
+    USE(egc);
     STATE_AO_GC(bl->ao);
     int r;
 
diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index 5f0d26f..5d56d67 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -62,6 +62,8 @@ void libxl_domain_config_dispose(libxl_domain_config *d_config)
 int libxl__domain_create_info_setdefault(libxl__gc *gc,
                                          libxl_domain_create_info *c_info)
 {
+    USE(gc);
+
     if (!c_info->type)
         return ERROR_INVAL;
 
diff --git a/tools/libxl/libxl_device.c b/tools/libxl/libxl_device.c
index 84fa06c..85df199 100644
--- a/tools/libxl/libxl_device.c
+++ b/tools/libxl/libxl_device.c
@@ -44,6 +44,8 @@ int libxl__parse_backend_path(libxl__gc *gc,
                               const char *path,
                               libxl__device *dev)
 {
+    USE(gc);
+
     /* /local/domain/<domid>/backend/<kind>/<domid>/<devid> */
     char strkind[16]; /* Longest is actually "console" */
     int rc = sscanf(path, "/local/domain/%d/backend/%15[^/]/%u/%d",
@@ -795,8 +797,7 @@ out:
     return;
 }
 
-static void device_qemu_timeout(libxl__egc *egc, libxl__ev_time *ev,
-                                const struct timeval *requested_abs)
+static void device_qemu_timeout(EV_TIME_CALLBACK_PARAMS(egc, ev, requested_abs))
 {
     libxl__ao_device *aodev = CONTAINER_OF(ev, *aodev, timeout);
     STATE_AO_GC(aodev->ao);
@@ -918,8 +919,8 @@ out:
     return;
 }
 
-static void device_hotplug_timeout_cb(libxl__egc *egc, libxl__ev_time *ev,
-                                      const struct timeval *requested_abs)
+static void device_hotplug_timeout_cb(EV_TIME_CALLBACK_PARAMS
+                                      (egc, ev, requested_abs))
 {
     libxl__ao_device *aodev = CONTAINER_OF(ev, *aodev, timeout);
     STATE_AO_GC(aodev->ao);
diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
index 0c0084f..6995306 100644
--- a/tools/libxl/libxl_dm.c
+++ b/tools/libxl/libxl_dm.c
@@ -640,6 +640,7 @@ static int libxl__vfb_and_vkb_from_hvm_guest_config(libxl__gc *gc,
                                         libxl_device_vfb *vfb,
                                         libxl_device_vkb *vkb)
 {
+    USE(gc);
     const libxl_domain_build_info *b_info = &guest_config->b_info;
 
     if (b_info->type != LIBXL_DOMAIN_TYPE_HVM)
@@ -1177,6 +1178,7 @@ static void device_model_confirm(libxl__egc *egc, libxl__spawn_state *spawn,
 {
     libxl__dm_spawn_state *dmss = CONTAINER_OF(spawn, *dmss, spawn);
     STATE_AO_GC(spawn->ao);
+    USE(egc);
 
     if (!xsdata)
         return;
@@ -1262,6 +1264,7 @@ int libxl__need_xenpv_qemu(libxl__gc *gc,
         int nr_vfbs, libxl_device_vfb *vfbs,
         int nr_disks, libxl_device_disk *disks)
 {
+    USE(gc);
     int i, ret = 0;
 
     /*
@@ -1283,6 +1286,7 @@ int libxl__need_xenpv_qemu(libxl__gc *gc,
     }
 
     if (nr_vfbs > 0) {
+        USE(vfbs);
         ret = 1;
         goto out;
     }
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index 06d5e4f..4a36652 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -756,8 +756,8 @@ void libxl__domain_suspend_common_switch_qemu_logdirty
     switch_logdirty_done(egc,dss,-1);
 }
 
-static void switch_logdirty_timeout(libxl__egc *egc, libxl__ev_time *ev,
-                                    const struct timeval *requested_abs)
+static void switch_logdirty_timeout(EV_TIME_CALLBACK_PARAMS
+                                    (egc, ev, requested_abs))
 {
     libxl__domain_suspend_state *dss = CONTAINER_OF(ev, *dss, logdirty.timeout);
     STATE_AO_GC(dss->ao);
@@ -765,8 +765,8 @@ static void switch_logdirty_timeout(libxl__egc *egc, libxl__ev_time *ev,
     switch_logdirty_done(egc,dss,-1);
 }
 
-static void switch_logdirty_xswatch(libxl__egc *egc, libxl__ev_xswatch *watch,
-                            const char *watch_path, const char *event_path)
+static void switch_logdirty_xswatch(EV_XSWATCH_CALLBACK_PARAMS
+                                    (egc, watch, wpath, epath))
 {
     libxl__domain_suspend_state *dss =
         CONTAINER_OF(watch, *dss, logdirty.watch);
diff --git a/tools/libxl/libxl_event.c b/tools/libxl/libxl_event.c
index 1af64c8..296303c 100644
--- a/tools/libxl/libxl_event.c
+++ b/tools/libxl/libxl_event.c
@@ -197,6 +197,8 @@ static void time_done_debug(libxl__gc *gc, const char *func,
                "ev_time=%p done rc=%d .func=%p infinite=%d abs=%lu.%06lu",
                ev, rc, ev->func, ev->infinite,
                (unsigned long)ev->abs.tv_sec, (unsigned long)ev->abs.tv_usec);
+#else
+    USE(gc); USE(func); USE(ev); USE(rc);
 #endif
 }
 
@@ -381,8 +383,7 @@ static void libxl__set_watch_slot_contents(libxl__ev_watch_slot *slot,
     slot->empty.sle_next = (void*)w;
 }
 
-static void watchfd_callback(libxl__egc *egc, libxl__ev_fd *ev,
-                             int fd, short events, short revents)
+static void watchfd_callback(EV_FD_CALLBACK_PARAMS(egc,ev, fd,events,revents))
 {
     EGC_GC;
 
@@ -571,8 +572,8 @@ void libxl__ev_xswatch_deregister(libxl__gc *gc, libxl__ev_xswatch *w)
  * waiting for device state
  */
 
-static void devstate_watch_callback(libxl__egc *egc, libxl__ev_xswatch *watch,
-                                const char *watch_path, const char *event_path)
+static void devstate_watch_callback(EV_XSWATCH_CALLBACK_PARAMS
+                                    (egc, watch, watch_path, event_path))
 {
     EGC_GC;
     libxl__ev_devstate *ds = CONTAINER_OF(watch, *ds, watch);
@@ -605,8 +606,7 @@ static void devstate_watch_callback(libxl__egc *egc, libxl__ev_xswatch *watch,
     ds->callback(egc, ds, rc);
 }
 
-static void devstate_timeout(libxl__egc *egc, libxl__ev_time *ev,
-                             const struct timeval *requested_abs)
+static void devstate_timeout(EV_TIME_CALLBACK_PARAMS(egc, ev, requested_abs))
 {
     EGC_GC;
     libxl__ev_devstate *ds = CONTAINER_OF(ev, *ds, timeout);
@@ -662,8 +662,8 @@ int libxl__ev_devstate_wait(libxl__gc *gc, libxl__ev_devstate *ds,
  * futile.
  */
 
-static void domaindeathcheck_callback(libxl__egc *egc, libxl__ev_xswatch *w,
-                            const char *watch_path, const char *event_path)
+static void domaindeathcheck_callback(EV_XSWATCH_CALLBACK_PARAMS
+                                      (egc, w, watch_path, event_path))
 {
     libxl__domaindeathcheck *dc = CONTAINER_OF(w, *dc, watch);
     EGC_GC;
@@ -1019,6 +1019,7 @@ void libxl_osevent_occurred_fd(libxl_ctx *ctx, void *for_libxl,
 
     assert(fd == ev->fd);
     revents &= ev->events;
+    USE(events); /* we use our own idea of what we asked for */
     if (revents)
         ev->func(egc, ev, fd, ev->events, revents);
 
@@ -1151,6 +1152,8 @@ void libxl__event_occurred(libxl__egc *egc, libxl_event *event)
 
 void libxl_event_free(libxl_ctx *ctx, libxl_event *event)
 {
+    USE(ctx);
+
     /* Exceptionally, this function may be called from libxl, with ctx==0 */
     libxl_event_dispose(event);
     free(event);
@@ -1645,7 +1648,8 @@ int libxl__ao_inprogress(libxl__ao *ao,
  * for how.  But we want to copy *how.  So we have this dummy function
  * whose address is stored in callback if the app passed how==NULL. */
 static void dummy_asyncprogress_callback_ignore
-  (libxl_ctx *ctx, libxl_event *ev, void *for_callback) { }
+  (libxl_ctx *ctx, libxl_event *ev, void *for_callback)
+    { USE(ctx); USE(ev); USE(for_callback); }
 
 void libxl__ao_progress_gethow(libxl_asyncprogress_how *in_state,
                                const libxl_asyncprogress_how *from_app) {
diff --git a/tools/libxl/libxl_exec.c b/tools/libxl/libxl_exec.c
index 0477386..ed6b44e 100644
--- a/tools/libxl/libxl_exec.c
+++ b/tools/libxl/libxl_exec.c
@@ -280,6 +280,7 @@ void libxl__spawn_init(libxl__spawn_state *ss)
 
 int libxl__spawn_spawn(libxl__egc *egc, libxl__spawn_state *ss)
 {
+    USE(egc);
     STATE_AO_GC(ss->ao);
     int r;
     pid_t child;
@@ -387,8 +388,7 @@ static void spawn_fail(libxl__egc *egc, libxl__spawn_state *ss)
     spawn_detach(gc, ss);
 }
 
-static void spawn_timeout(libxl__egc *egc, libxl__ev_time *ev,
-                          const struct timeval *requested_abs)
+static void spawn_timeout(EV_TIME_CALLBACK_PARAMS(egc, ev, requested_abs))
 {
     /* Before event, was Attached. */
     EGC_GC;
@@ -397,8 +397,8 @@ static void spawn_timeout(libxl__egc *egc, libxl__ev_time *ev,
     spawn_fail(egc, ss); /* must be last */
 }
 
-static void spawn_watch_event(libxl__egc *egc, libxl__ev_xswatch *xsw,
-                              const char *watch_path, const char *event_path)
+static void spawn_watch_event(EV_XSWATCH_CALLBACK_PARAMS
+                              (egc, xsw, wpath, epath))
 {
     /* On entry, is Attached. */
     EGC_GC;
diff --git a/tools/libxl/libxl_fork.c b/tools/libxl/libxl_fork.c
index 044ddad..0379604 100644
--- a/tools/libxl/libxl_fork.c
+++ b/tools/libxl/libxl_fork.c
@@ -157,6 +157,7 @@ int libxl__carefd_fd(const libxl__carefd *cf)
 
 static void sigchld_handler(int signo)
 {
+    USE(signo);
     int e = libxl__self_pipe_wakeup(sigchld_owner->sigchld_selfpipe[1]);
     assert(!e); /* errors are probably EBADF, very bad */
 }
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index 9315ae0..2687382 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -95,6 +95,8 @@
 #define DISABLE_UDEV_PATH "libxl/disable_udev"
 
 #define ARRAY_SIZE(a) (sizeof(a) / sizeof(a[0]))
+#define USE(var) ((void)(var))
+#define MAYBE_UNUSED __attribute__((unused))
 
 #define LIBXL__LOGGING_ENABLED
 
@@ -147,6 +149,14 @@ typedef void libxl__ev_fd_callback(libxl__egc *egc, libxl__ev_fd *ev,
    * It is not permitted to listen for the same or overlapping events
    * on the same fd using multiple different libxl__ev_fd's.
    */
+
+/* Declare your callback functions with this helper and you avoid unused
+ * parameter warnings (and don't have to list all the types either): */
+#define EV_FD_CALLBACK_PARAMS(egc, ev, fd, events, revents)             \
+     libxl__egc *egc MAYBE_UNUSED, libxl__ev_fd *ev MAYBE_UNUSED,       \
+     int fd MAYBE_UNUSED,                                               \
+     short events MAYBE_UNUSED, short revents MAYBE_UNUSED
+
 struct libxl__ev_fd {
     /* caller should include this in their own struct */
     /* read-only for caller, who may read only when registered: */
@@ -162,6 +172,11 @@ struct libxl__ev_fd {
 typedef struct libxl__ev_time libxl__ev_time;
 typedef void libxl__ev_time_callback(libxl__egc *egc, libxl__ev_time *ev,
                                      const struct timeval *requested_abs);
+
+#define EV_TIME_CALLBACK_PARAMS(egc, ev, requested_abs)                 \
+    libxl__egc *egc MAYBE_UNUSED, libxl__ev_time *ev MAYBE_UNUSED,      \
+    const struct timeval *requested_abs MAYBE_UNUSED
+
 struct libxl__ev_time {
     /* caller should include this in their own struct */
     /* read-only for caller, who may read only when registered: */
@@ -174,8 +189,13 @@ struct libxl__ev_time {
 };
 
 typedef struct libxl__ev_xswatch libxl__ev_xswatch;
-typedef void libxl__ev_xswatch_callback(libxl__egc *egc, libxl__ev_xswatch*,
-                            const char *watch_path, const char *event_path);
+typedef void libxl__ev_xswatch_callback(libxl__egc *egc,
+    libxl__ev_xswatch *watch, const char *watch_path, const char *event_path);
+
+#define EV_XSWATCH_CALLBACK_PARAMS(egc, watch, wpath, epath)    \
+    libxl__egc *egc MAYBE_UNUSED, libxl__ev_xswatch *watch MAYBE_UNUSED,            \
+    const char *wpath MAYBE_UNUSED, const char *epath MAYBE_UNUSED
+
 struct libxl__ev_xswatch {
     /* caller should include this in their own struct */
     /* read-only for caller, who may read only when registered: */
diff --git a/tools/libxl/libxl_pci.c b/tools/libxl/libxl_pci.c
index 9c92ae6..a094965 100644
--- a/tools/libxl/libxl_pci.c
+++ b/tools/libxl/libxl_pci.c
@@ -800,6 +800,10 @@ static int pci_ins_check(libxl__gc *gc, uint32_t domid, const char *state, void
 {
     char *orig_state = priv;
 
+    USE(gc);
+    USE(domid);
+    USE(priv);
+
     if ( !strcmp(state, "pci-insert-failed") )
         return -1;
     if ( !strcmp(state, "pci-inserted") )
@@ -1007,6 +1011,8 @@ static int libxl__device_pci_reset(libxl__gc *gc, unsigned int domain, unsigned
 
 int libxl__device_pci_setdefault(libxl__gc *gc, libxl_device_pci *pci)
 {
+    USE(gc);
+    USE(pci);
     return 0;
 }
 
diff --git a/tools/libxl/libxl_qmp.c b/tools/libxl/libxl_qmp.c
index e33b130..c354c71 100644
--- a/tools/libxl/libxl_qmp.c
+++ b/tools/libxl/libxl_qmp.c
@@ -46,6 +46,10 @@
 typedef int (*qmp_callback_t)(libxl__qmp_handler *qmp,
                               const libxl__json_object *tree,
                               void *opaque);
+#define QMP_CALLBACK_PARAMS(qmp, tree, opaque)          \
+    libxl__qmp_handler *qmp MAYBE_UNUSED,               \
+    const libxl__json_object *tree MAYBE_UNUSED,        \
+    void *opaque MAYBE_UNUSED
 
 typedef struct qmp_request_context {
     int rc;
@@ -109,9 +113,7 @@ static int store_serial_port_info(libxl__qmp_handler *qmp,
     return ret;
 }
 
-static int register_serials_chardev_callback(libxl__qmp_handler *qmp,
-                                             const libxl__json_object *o,
-                                             void *unused)
+static int register_serials_chardev_callback(QMP_CALLBACK_PARAMS(qmp,o,unused))
 {
     const libxl__json_object *obj = NULL;
     const libxl__json_object *label = NULL;
@@ -165,9 +167,7 @@ static int qmp_write_domain_console_item(libxl__gc *gc, int domid,
     return libxl__xs_write(gc, XBT_NULL, path, "%s", value);
 }
 
-static int qmp_register_vnc_callback(libxl__qmp_handler *qmp,
-                                     const libxl__json_object *o,
-                                     void *unused)
+static int qmp_register_vnc_callback(QMP_CALLBACK_PARAMS(qmp,o,unused))
 {
     GC_INIT(qmp->ctx);
     const libxl__json_object *obj;
@@ -203,8 +203,7 @@ out:
     return rc;
 }
 
-static int qmp_capabilities_callback(libxl__qmp_handler *qmp,
-                                     const libxl__json_object *o, void *unused)
+static int qmp_capabilities_callback(QMP_CALLBACK_PARAMS(qmp,o,unused))
 {
     qmp->connected = true;
 
@@ -228,6 +227,8 @@ static int enable_qmp_capabilities(libxl__qmp_handler *qmp)
 static libxl__qmp_message_type qmp_response_type(libxl__qmp_handler *qmp,
                                                  const libxl__json_object *o)
 {
+    USE(qmp);
+
     libxl__qmp_message_type type;
     libxl__json_map_node *node = NULL;
     int i = 0;
diff --git a/tools/libxl/libxl_save_callout.c b/tools/libxl/libxl_save_callout.c
index 078b7ee..78bb67e 100644
--- a/tools/libxl/libxl_save_callout.c
+++ b/tools/libxl/libxl_save_callout.c
@@ -252,8 +252,8 @@ static void helper_failed(libxl__egc *egc, libxl__save_helper_state *shs,
                 (unsigned long)shs->child.pid);
 }
 
-static void helper_stdout_readable(libxl__egc *egc, libxl__ev_fd *ev,
-                                   int fd, short events, short revents)
+static void helper_stdout_readable(EV_FD_CALLBACK_PARAMS
+                                   (egc, ev, fd, events, revents))
 {
     libxl__save_helper_state *shs = CONTAINER_OF(ev, *shs, readable);
     STATE_AO_GC(shs->ao);
diff --git a/tools/libxl/libxl_utils.c b/tools/libxl/libxl_utils.c
index f7b44a0..a7c34a9 100644
--- a/tools/libxl/libxl_utils.c
+++ b/tools/libxl/libxl_utils.c
@@ -230,6 +230,7 @@ out:
 
 int libxl_string_to_backend(libxl_ctx *ctx, char *s, libxl_disk_backend *backend)
 {
+    USE(ctx);
     char *p;
     int rc = 0;
 
@@ -513,6 +514,7 @@ void libxl_bitmap_dispose(libxl_bitmap *map)
 void libxl_bitmap_copy(libxl_ctx *ctx, libxl_bitmap *dptr,
                        const libxl_bitmap *sptr)
 {
+    USE(ctx);
     int sz;
 
     assert(dptr->size == sptr->size);
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 16:24:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 16:24:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swbin-0007mH-Aj; Wed, 01 Aug 2012 16:24:29 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Swbik-0007is-R7
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 16:24:27 +0000
Received: from [85.158.139.83:3184] by server-11.bemta-5.messagelabs.com id
	7D/C5-20400-93859105; Wed, 01 Aug 2012 16:24:25 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-8.tower-182.messagelabs.com!1343838262!18501401!8
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13714 invoked from network); 1 Aug 2012 16:24:25 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 16:24:25 -0000
X-IronPort-AV: E=Sophos;i="4.77,695,1336348800"; d="scan'208";a="13808046"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 16:24:23 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 1 Aug 2012 17:24:22 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Swbig-0007JA-5A; Wed, 01 Aug 2012 16:24:22 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Swbig-0004dO-2e;
	Wed, 01 Aug 2012 17:24:22 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xen.org>
Date: Wed, 1 Aug 2012 17:24:20 +0100
Message-ID: <1343838260-17725-12-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1343838260-17725-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1343838260-17725-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 11/11] libxl: -Wunused-parameter
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

*** DO NOT APPLY ***

We have recently had a couple of bugs which basically involved
ignoring the rc parameter to a callback function.  I thought I would
try -Wunused-parameter.  Here are the results.

I found three further problems:

 * libxl_wait_for_free_memory takes a domid parameter but its
   semantics don't seem to call for that.  This function is
   going to have a big warning put on it for 4.2 and that
   should happen soon.

 * qmp_synchronous_send has an `ask_timeout' parameter which is
   ignored.

 * The autogenerated function libxl_event_init_type ignores the type
   parameter.

Things I needed to do to get the rest of the code to compile:

 * Remove one harmless unused parameter from an internal function.
   (Earlier in this series.)

 * Add an assert to make the error handling in the broken remus code
   slightly less broken.  (Earlier in this series.)

 * Provide machinery in the Makefile for passing different CFLAGS to
   libxl as opposed to xl and libxlu.  The flex- and bison-generated
   files in libxlu can't be compiled with -Wunused-parameter.

 * Define a new helper macro
        #define USE(var) ((void)(var))
   and use it 43 times.  The pattern is something like
        USE(egc);
   in a function which takes egc but doesn't need it.  If the
   parameter is later used, this is harmless.  In functions which
   are placeholders, the USE statement should be placed in the
   middle of the function, where the parameter would be used if the
   function were fleshed out later, so that the USE gets deleted
   naturally by the patch introducing the real implementation.

 * Define a new helper macro for use only in other macros
        #define MAYBE_UNUSED __attribute__((unused))
   and use it in 10 different places.

 * Define new macros for helping declare common types of callback
   functions.  For example:

        #define EV_XSWATCH_CALLBACK_PARAMS(egc, watch, wpath, epath)  \
            libxl__egc *egc MAYBE_UNUSED,                             \
            libxl__ev_xswatch *watch MAYBE_UNUSED,                    \
            const char *wpath MAYBE_UNUSED,                           \
            const char *epath MAYBE_UNUSED

   which is used like this:

        -static void some_callback(libxl__egc *egc, libxl__ev_xswatch *watch,
        -                          const char *wpath, const char *epath)
        +static void some_callback(EV_XSWATCH_CALLBACK_PARAMS
        +                          (egc, watch, wpath, epath))
        {
            ... now we use (or not) egc, watch, wpath, etc. as we like

   This somewhat resembles a traditional K&R C typeless function
   definition.  The parameter types are of course still defined
   for the compiler, along with the information that the
   parameters might be unused.

   There are 4 macros of this kind with 22 call sites.

IMO the cost (65 places in ordinary code where we have to write
something somewhat ugly) is worth the benefit (around 6 bugs which
would have been found, had we deployed this right away).  But it's
arguable.

*** DO NOT APPLY ***

Anyway, this patch must not be applied right now because it causes the
build to fail.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
---
 tools/libxl/Makefile             |    4 +++-
 tools/libxl/libxl.c              |   29 +++++++++++++++++++++++++----
 tools/libxl/libxl_aoutils.c      |    8 ++++----
 tools/libxl/libxl_blktap2.c      |    1 +
 tools/libxl/libxl_bootloader.c   |    6 ++++++
 tools/libxl/libxl_create.c       |    2 ++
 tools/libxl/libxl_device.c       |    9 +++++----
 tools/libxl/libxl_dm.c           |    4 ++++
 tools/libxl/libxl_dom.c          |    8 ++++----
 tools/libxl/libxl_event.c        |   22 +++++++++++++---------
 tools/libxl/libxl_exec.c         |    8 ++++----
 tools/libxl/libxl_fork.c         |    1 +
 tools/libxl/libxl_internal.h     |   24 ++++++++++++++++++++++--
 tools/libxl/libxl_pci.c          |    6 ++++++
 tools/libxl/libxl_qmp.c          |   17 +++++++++--------
 tools/libxl/libxl_save_callout.c |    4 ++--
 tools/libxl/libxl_utils.c        |    2 ++
 17 files changed, 113 insertions(+), 42 deletions(-)

diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
index 424a7ee..7d1ebf9 100644
--- a/tools/libxl/Makefile
+++ b/tools/libxl/Makefile
@@ -13,6 +13,8 @@ XLUMINOR = 0
 
 CFLAGS += -Werror -Wno-format-zero-length -Wmissing-declarations \
 	-Wno-declaration-after-statement -Wformat-nonliteral
+CFLAGS_FOR_LIBXL = -Wunused-parameter
+
 CFLAGS += -I. -fPIC
 
 ifeq ($(CONFIG_Linux),y)
@@ -71,7 +73,7 @@ LIBXL_OBJS = flexarray.o libxl.o libxl_create.o libxl_dm.o libxl_pci.o \
 			libxl_qmp.o libxl_event.o libxl_fork.o $(LIBXL_OBJS-y)
 LIBXL_OBJS += _libxl_types.o libxl_flask.o _libxl_types_internal.o
 
-$(LIBXL_OBJS): CFLAGS += $(CFLAGS_libxenctrl) $(CFLAGS_libxenguest) $(CFLAGS_libxenstore) $(CFLAGS_libblktapctl) -include $(XEN_ROOT)/tools/config.h
+$(LIBXL_OBJS): CFLAGS += $(CFLAGS_FOR_LIBXL) $(CFLAGS_libxenctrl) $(CFLAGS_libxenguest) $(CFLAGS_libxenstore) $(CFLAGS_libblktapctl) -include $(XEN_ROOT)/tools/config.h
 
 AUTOINCS= libxlu_cfg_y.h libxlu_cfg_l.h _libxl_list.h _paths.h \
 	_libxl_save_msgs_callout.h _libxl_save_msgs_helper.h
diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index 00ddc0e..3ad83ea 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -28,6 +28,8 @@ int libxl_ctx_alloc(libxl_ctx **pctx, int version,
     struct stat stat_buf;
     int rc;
 
+    USE(flags);
+
     if (version != LIBXL_VERSION) { rc = ERROR_VERSION; goto out; }
 
     ctx = malloc(sizeof(*ctx));
@@ -700,6 +702,8 @@ int libxl_domain_remus_start(libxl_ctx *ctx, libxl_domain_remus_info *info,
     libxl__domain_suspend_state *dss;
     int rc;
 
+    USE(recv_fd); /* TODO get rid of this and actually use it! */
+
     libxl_domain_type type = libxl__domain_type(gc, domid);
     if (type == LIBXL_DOMAIN_TYPE_INVALID) {
         rc = ERROR_FAIL;
@@ -979,8 +983,9 @@ static void domain_death_occurred(libxl__egc *egc,
     LIBXL_TAILQ_INSERT_HEAD(&CTX->death_reported, evg, entry);
 }
 
-static void domain_death_xswatch_callback(libxl__egc *egc, libxl__ev_xswatch *w,
-                                        const char *wpath, const char *epath) {
+static void domain_death_xswatch_callback(EV_XSWATCH_CALLBACK_PARAMS
+                                          (egc, watch, wpath, epath))
+{
     EGC_GC;
     libxl_evgen_domain_death *evg;
     uint32_t domid;
@@ -1137,8 +1142,9 @@ void libxl_evdisable_domain_death(libxl_ctx *ctx,
     GC_FREE;
 }
 
-static void disk_eject_xswatch_callback(libxl__egc *egc, libxl__ev_xswatch *w,
-                                        const char *wpath, const char *epath) {
+static void disk_eject_xswatch_callback(EV_XSWATCH_CALLBACK_PARAMS
+                                        (egc, w, wpath, epath))
+{
     EGC_GC;
     libxl_evgen_disk_eject *evg = (void*)w;
     char *backend;
@@ -2524,6 +2530,8 @@ static int libxl__device_from_nic(libxl__gc *gc, uint32_t domid,
                                   libxl_device_nic *nic,
                                   libxl__device *device)
 {
+    USE(gc);
+
     device->backend_devid    = nic->devid;
     device->backend_domid    = nic->backend_domid;
     device->backend_kind     = LIBXL__DEVICE_KIND_VIF;
@@ -2902,6 +2910,9 @@ out:
 
 int libxl__device_vkb_setdefault(libxl__gc *gc, libxl_device_vkb *vkb)
 {
+    USE(gc);
+
+    USE(vkb);
     return 0;
 }
 
@@ -2909,6 +2920,7 @@ static int libxl__device_from_vkb(libxl__gc *gc, uint32_t domid,
                                   libxl_device_vkb *vkb,
                                   libxl__device *device)
 {
+    USE(gc);
     device->backend_devid = vkb->devid;
     device->backend_domid = vkb->backend_domid;
     device->backend_kind = LIBXL__DEVICE_KIND_VKBD;
@@ -2990,6 +3002,7 @@ out:
 
 int libxl__device_vfb_setdefault(libxl__gc *gc, libxl_device_vfb *vfb)
 {
+    USE(gc);
     libxl_defbool_setdefault(&vfb->vnc.enable, true);
     if (libxl_defbool_val(vfb->vnc.enable)) {
         if (!vfb->vnc.listen) {
@@ -3010,6 +3023,7 @@ static int libxl__device_from_vfb(libxl__gc *gc, uint32_t domid,
                                   libxl_device_vfb *vfb,
                                   libxl__device *device)
 {
+    USE(gc);
     device->backend_devid = vfb->devid;
     device->backend_domid = vfb->backend_domid;
     device->backend_kind = LIBXL__DEVICE_KIND_VFB;
@@ -3936,8 +3950,13 @@ libxl_scheduler libxl_get_scheduler(libxl_ctx *ctx)
 static int sched_arinc653_domain_set(libxl__gc *gc, uint32_t domid,
                                      const libxl_domain_sched_params *scinfo)
 {
+    USE(gc);
+
     /* Currently, the ARINC 653 scheduler does not take any domain-specific
          configuration, so we simply return success. */
+    USE(domid);
+    USE(scinfo);
+
     return 0;
 }
 
@@ -4385,6 +4404,8 @@ int libxl_xen_console_read_line(libxl_ctx *ctx,
 void libxl_xen_console_read_finish(libxl_ctx *ctx,
                                    libxl_xen_console_reader *cr)
 {
+    USE(ctx);
+
     free(cr->buffer);
     free(cr);
 }
diff --git a/tools/libxl/libxl_aoutils.c b/tools/libxl/libxl_aoutils.c
index 4bd5484..e91dc9c 100644
--- a/tools/libxl/libxl_aoutils.c
+++ b/tools/libxl/libxl_aoutils.c
@@ -114,8 +114,8 @@ static int datacopier_pollhup_handled(libxl__egc *egc,
     return 0;
 }
 
-static void datacopier_readable(libxl__egc *egc, libxl__ev_fd *ev,
-                                int fd, short events, short revents) {
+static void datacopier_readable(EV_FD_CALLBACK_PARAMS
+                                (egc, ev, fd, events, revents)) {
     libxl__datacopier_state *dc = CONTAINER_OF(ev, *dc, toread);
     STATE_AO_GC(dc->ao);
 
@@ -178,8 +178,8 @@ static void datacopier_readable(libxl__egc *egc, libxl__ev_fd *ev,
     datacopier_check_state(egc, dc);
 }
 
-static void datacopier_writable(libxl__egc *egc, libxl__ev_fd *ev,
-                                int fd, short events, short revents) {
+static void datacopier_writable(EV_FD_CALLBACK_PARAMS(
+                                egc, ev, fd, events, revents)) {
     libxl__datacopier_state *dc = CONTAINER_OF(ev, *dc, towrite);
     STATE_AO_GC(dc->ao);
 
diff --git a/tools/libxl/libxl_blktap2.c b/tools/libxl/libxl_blktap2.c
index 2c40182..660a669 100644
--- a/tools/libxl/libxl_blktap2.c
+++ b/tools/libxl/libxl_blktap2.c
@@ -19,6 +19,7 @@
 
 int libxl__blktap_enabled(libxl__gc *gc)
 {
+    USE(gc);
     const char *msg;
     return !tap_ctl_check(&msg);
 }
diff --git a/tools/libxl/libxl_bootloader.c b/tools/libxl/libxl_bootloader.c
index 2e65383..bfae328 100644
--- a/tools/libxl/libxl_bootloader.c
+++ b/tools/libxl/libxl_bootloader.c
@@ -88,6 +88,7 @@ static void make_bootloader_args(libxl__gc *gc, libxl__bootloader_state *bl,
 static int setup_xenconsoled_pty(libxl__egc *egc, libxl__bootloader_state *bl,
                                  char *slave_path, size_t slave_path_len)
 {
+    USE(egc);
     STATE_AO_GC(bl->ao);
     struct termios termattr;
     int r, rc;
@@ -141,6 +142,7 @@ static const char *bootloader_result_command(libxl__gc *gc, const char *buf,
 static int parse_bootloader_result(libxl__egc *egc,
                                    libxl__bootloader_state *bl)
 {
+    USE(egc);
     STATE_AO_GC(bl->ao);
     char buf[PATH_MAX*2];
     FILE *f = 0;
@@ -221,6 +223,7 @@ void libxl__bootloader_init(libxl__bootloader_state *bl)
 
 static void bootloader_cleanup(libxl__egc *egc, libxl__bootloader_state *bl)
 {
+    USE(egc);
     STATE_AO_GC(bl->ao);
     int i;
 
@@ -256,6 +259,8 @@ static void bootloader_local_detached_cb(libxl__egc *egc,
 static void bootloader_callback(libxl__egc *egc, libxl__bootloader_state *bl,
                                 int rc)
 {
+    USE(egc);
+
     if (!bl->rc)
         bl->rc = rc;
 
@@ -285,6 +290,7 @@ static void bootloader_local_detached_cb(libxl__egc *egc,
 static void bootloader_stop(libxl__egc *egc,
                              libxl__bootloader_state *bl, int rc)
 {
+    USE(egc);
     STATE_AO_GC(bl->ao);
     int r;
 
diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index 5f0d26f..5d56d67 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -62,6 +62,8 @@ void libxl_domain_config_dispose(libxl_domain_config *d_config)
 int libxl__domain_create_info_setdefault(libxl__gc *gc,
                                          libxl_domain_create_info *c_info)
 {
+    USE(gc);
+
     if (!c_info->type)
         return ERROR_INVAL;
 
diff --git a/tools/libxl/libxl_device.c b/tools/libxl/libxl_device.c
index 84fa06c..85df199 100644
--- a/tools/libxl/libxl_device.c
+++ b/tools/libxl/libxl_device.c
@@ -44,6 +44,8 @@ int libxl__parse_backend_path(libxl__gc *gc,
                               const char *path,
                               libxl__device *dev)
 {
+    USE(gc);
+
     /* /local/domain/<domid>/backend/<kind>/<domid>/<devid> */
     char strkind[16]; /* Longest is actually "console" */
     int rc = sscanf(path, "/local/domain/%d/backend/%15[^/]/%u/%d",
@@ -795,8 +797,7 @@ out:
     return;
 }
 
-static void device_qemu_timeout(libxl__egc *egc, libxl__ev_time *ev,
-                                const struct timeval *requested_abs)
+static void device_qemu_timeout(EV_TIME_CALLBACK_PARAMS(egc, ev, requested_abs))
 {
     libxl__ao_device *aodev = CONTAINER_OF(ev, *aodev, timeout);
     STATE_AO_GC(aodev->ao);
@@ -918,8 +919,8 @@ out:
     return;
 }
 
-static void device_hotplug_timeout_cb(libxl__egc *egc, libxl__ev_time *ev,
-                                      const struct timeval *requested_abs)
+static void device_hotplug_timeout_cb(EV_TIME_CALLBACK_PARAMS
+                                      (egc, ev, requested_abs))
 {
     libxl__ao_device *aodev = CONTAINER_OF(ev, *aodev, timeout);
     STATE_AO_GC(aodev->ao);
diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
index 0c0084f..6995306 100644
--- a/tools/libxl/libxl_dm.c
+++ b/tools/libxl/libxl_dm.c
@@ -640,6 +640,7 @@ static int libxl__vfb_and_vkb_from_hvm_guest_config(libxl__gc *gc,
                                         libxl_device_vfb *vfb,
                                         libxl_device_vkb *vkb)
 {
+    USE(gc);
     const libxl_domain_build_info *b_info = &guest_config->b_info;
 
     if (b_info->type != LIBXL_DOMAIN_TYPE_HVM)
@@ -1177,6 +1178,7 @@ static void device_model_confirm(libxl__egc *egc, libxl__spawn_state *spawn,
 {
     libxl__dm_spawn_state *dmss = CONTAINER_OF(spawn, *dmss, spawn);
     STATE_AO_GC(spawn->ao);
+    USE(egc);
 
     if (!xsdata)
         return;
@@ -1262,6 +1264,7 @@ int libxl__need_xenpv_qemu(libxl__gc *gc,
         int nr_vfbs, libxl_device_vfb *vfbs,
         int nr_disks, libxl_device_disk *disks)
 {
+    USE(gc);
     int i, ret = 0;
 
     /*
@@ -1283,6 +1286,7 @@ int libxl__need_xenpv_qemu(libxl__gc *gc,
     }
 
     if (nr_vfbs > 0) {
+        USE(vfbs);
         ret = 1;
         goto out;
     }
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index 06d5e4f..4a36652 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -756,8 +756,8 @@ void libxl__domain_suspend_common_switch_qemu_logdirty
     switch_logdirty_done(egc,dss,-1);
 }
 
-static void switch_logdirty_timeout(libxl__egc *egc, libxl__ev_time *ev,
-                                    const struct timeval *requested_abs)
+static void switch_logdirty_timeout(EV_TIME_CALLBACK_PARAMS
+                                    (egc, ev, requested_abs))
 {
     libxl__domain_suspend_state *dss = CONTAINER_OF(ev, *dss, logdirty.timeout);
     STATE_AO_GC(dss->ao);
@@ -765,8 +765,8 @@ static void switch_logdirty_timeout(libxl__egc *egc, libxl__ev_time *ev,
     switch_logdirty_done(egc,dss,-1);
 }
 
-static void switch_logdirty_xswatch(libxl__egc *egc, libxl__ev_xswatch *watch,
-                            const char *watch_path, const char *event_path)
+static void switch_logdirty_xswatch(EV_XSWATCH_CALLBACK_PARAMS
+                                    (egc, watch, wpath, epath))
 {
     libxl__domain_suspend_state *dss =
         CONTAINER_OF(watch, *dss, logdirty.watch);
diff --git a/tools/libxl/libxl_event.c b/tools/libxl/libxl_event.c
index 1af64c8..296303c 100644
--- a/tools/libxl/libxl_event.c
+++ b/tools/libxl/libxl_event.c
@@ -197,6 +197,8 @@ static void time_done_debug(libxl__gc *gc, const char *func,
                "ev_time=%p done rc=%d .func=%p infinite=%d abs=%lu.%06lu",
                ev, rc, ev->func, ev->infinite,
                (unsigned long)ev->abs.tv_sec, (unsigned long)ev->abs.tv_usec);
+#else
+    USE(gc); USE(func); USE(ev); USE(rc);
 #endif
 }
 
@@ -381,8 +383,7 @@ static void libxl__set_watch_slot_contents(libxl__ev_watch_slot *slot,
     slot->empty.sle_next = (void*)w;
 }
 
-static void watchfd_callback(libxl__egc *egc, libxl__ev_fd *ev,
-                             int fd, short events, short revents)
+static void watchfd_callback(EV_FD_CALLBACK_PARAMS(egc,ev, fd,events,revents))
 {
     EGC_GC;
 
@@ -571,8 +572,8 @@ void libxl__ev_xswatch_deregister(libxl__gc *gc, libxl__ev_xswatch *w)
  * waiting for device state
  */
 
-static void devstate_watch_callback(libxl__egc *egc, libxl__ev_xswatch *watch,
-                                const char *watch_path, const char *event_path)
+static void devstate_watch_callback(EV_XSWATCH_CALLBACK_PARAMS
+                                    (egc, watch, watch_path, event_path))
 {
     EGC_GC;
     libxl__ev_devstate *ds = CONTAINER_OF(watch, *ds, watch);
@@ -605,8 +606,7 @@ static void devstate_watch_callback(libxl__egc *egc, libxl__ev_xswatch *watch,
     ds->callback(egc, ds, rc);
 }
 
-static void devstate_timeout(libxl__egc *egc, libxl__ev_time *ev,
-                             const struct timeval *requested_abs)
+static void devstate_timeout(EV_TIME_CALLBACK_PARAMS(egc, ev, requested_abs))
 {
     EGC_GC;
     libxl__ev_devstate *ds = CONTAINER_OF(ev, *ds, timeout);
@@ -662,8 +662,8 @@ int libxl__ev_devstate_wait(libxl__gc *gc, libxl__ev_devstate *ds,
  * futile.
  */
 
-static void domaindeathcheck_callback(libxl__egc *egc, libxl__ev_xswatch *w,
-                            const char *watch_path, const char *event_path)
+static void domaindeathcheck_callback(EV_XSWATCH_CALLBACK_PARAMS
+                                      (egc, w, watch_path, event_path))
 {
     libxl__domaindeathcheck *dc = CONTAINER_OF(w, *dc, watch);
     EGC_GC;
@@ -1019,6 +1019,7 @@ void libxl_osevent_occurred_fd(libxl_ctx *ctx, void *for_libxl,
 
     assert(fd == ev->fd);
     revents &= ev->events;
+    USE(events); /* we use our own idea of what we asked for */
     if (revents)
         ev->func(egc, ev, fd, ev->events, revents);
 
@@ -1151,6 +1152,8 @@ void libxl__event_occurred(libxl__egc *egc, libxl_event *event)
 
 void libxl_event_free(libxl_ctx *ctx, libxl_event *event)
 {
+    USE(ctx);
+
     /* Exceptionally, this function may be called from libxl, with ctx==0 */
     libxl_event_dispose(event);
     free(event);
@@ -1645,7 +1648,8 @@ int libxl__ao_inprogress(libxl__ao *ao,
  * for how.  But we want to copy *how.  So we have this dummy function
  * whose address is stored in callback if the app passed how==NULL. */
 static void dummy_asyncprogress_callback_ignore
-  (libxl_ctx *ctx, libxl_event *ev, void *for_callback) { }
+  (libxl_ctx *ctx, libxl_event *ev, void *for_callback)
+    { USE(ctx); USE(ev); USE(for_callback); }
 
 void libxl__ao_progress_gethow(libxl_asyncprogress_how *in_state,
                                const libxl_asyncprogress_how *from_app) {
diff --git a/tools/libxl/libxl_exec.c b/tools/libxl/libxl_exec.c
index 0477386..ed6b44e 100644
--- a/tools/libxl/libxl_exec.c
+++ b/tools/libxl/libxl_exec.c
@@ -280,6 +280,7 @@ void libxl__spawn_init(libxl__spawn_state *ss)
 
 int libxl__spawn_spawn(libxl__egc *egc, libxl__spawn_state *ss)
 {
+    USE(egc);
     STATE_AO_GC(ss->ao);
     int r;
     pid_t child;
@@ -387,8 +388,7 @@ static void spawn_fail(libxl__egc *egc, libxl__spawn_state *ss)
     spawn_detach(gc, ss);
 }
 
-static void spawn_timeout(libxl__egc *egc, libxl__ev_time *ev,
-                          const struct timeval *requested_abs)
+static void spawn_timeout(EV_TIME_CALLBACK_PARAMS(egc, ev, requested_abs))
 {
     /* Before event, was Attached. */
     EGC_GC;
@@ -397,8 +397,8 @@ static void spawn_timeout(libxl__egc *egc, libxl__ev_time *ev,
     spawn_fail(egc, ss); /* must be last */
 }
 
-static void spawn_watch_event(libxl__egc *egc, libxl__ev_xswatch *xsw,
-                              const char *watch_path, const char *event_path)
+static void spawn_watch_event(EV_XSWATCH_CALLBACK_PARAMS
+                              (egc, xsw, wpath, epath))
 {
     /* On entry, is Attached. */
     EGC_GC;
diff --git a/tools/libxl/libxl_fork.c b/tools/libxl/libxl_fork.c
index 044ddad..0379604 100644
--- a/tools/libxl/libxl_fork.c
+++ b/tools/libxl/libxl_fork.c
@@ -157,6 +157,7 @@ int libxl__carefd_fd(const libxl__carefd *cf)
 
 static void sigchld_handler(int signo)
 {
+    USE(signo);
     int e = libxl__self_pipe_wakeup(sigchld_owner->sigchld_selfpipe[1]);
     assert(!e); /* errors are probably EBADF, very bad */
 }
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index 9315ae0..2687382 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -95,6 +95,8 @@
 #define DISABLE_UDEV_PATH "libxl/disable_udev"
 
 #define ARRAY_SIZE(a) (sizeof(a) / sizeof(a[0]))
+#define USE(var) ((void)(var))
+#define MAYBE_UNUSED __attribute__((unused))
 
 #define LIBXL__LOGGING_ENABLED
 
@@ -147,6 +149,14 @@ typedef void libxl__ev_fd_callback(libxl__egc *egc, libxl__ev_fd *ev,
    * It is not permitted to listen for the same or overlapping events
    * on the same fd using multiple different libxl__ev_fd's.
    */
+
+/* Declare your callback functions with this helper and you avoid unused
+ * parameter warnings (and don't have to list all the types either): */
+#define EV_FD_CALLBACK_PARAMS(egc, ev, fd, events, revents)             \
+     libxl__egc *egc MAYBE_UNUSED, libxl__ev_fd *ev MAYBE_UNUSED,       \
+     int fd MAYBE_UNUSED,                                               \
+     short events MAYBE_UNUSED, short revents MAYBE_UNUSED
+
 struct libxl__ev_fd {
     /* caller should include this in their own struct */
     /* read-only for caller, who may read only when registered: */
@@ -162,6 +172,11 @@ struct libxl__ev_fd {
 typedef struct libxl__ev_time libxl__ev_time;
 typedef void libxl__ev_time_callback(libxl__egc *egc, libxl__ev_time *ev,
                                      const struct timeval *requested_abs);
+
+#define EV_TIME_CALLBACK_PARAMS(egc, ev, requested_abs)                 \
+    libxl__egc *egc MAYBE_UNUSED, libxl__ev_time *ev MAYBE_UNUSED,      \
+    const struct timeval *requested_abs MAYBE_UNUSED
+
 struct libxl__ev_time {
     /* caller should include this in their own struct */
     /* read-only for caller, who may read only when registered: */
@@ -174,8 +189,13 @@ struct libxl__ev_time {
 };
 
 typedef struct libxl__ev_xswatch libxl__ev_xswatch;
-typedef void libxl__ev_xswatch_callback(libxl__egc *egc, libxl__ev_xswatch*,
-                            const char *watch_path, const char *event_path);
+typedef void libxl__ev_xswatch_callback(libxl__egc *egc,
+    libxl__ev_xswatch *watch, const char *watch_path, const char *event_path);
+
+#define EV_XSWATCH_CALLBACK_PARAMS(egc, watch, wpath, epath)    \
+    libxl__egc *egc MAYBE_UNUSED, libxl__ev_xswatch *watch MAYBE_UNUSED,            \
+    const char *wpath MAYBE_UNUSED, const char *epath MAYBE_UNUSED
+
 struct libxl__ev_xswatch {
     /* caller should include this in their own struct */
     /* read-only for caller, who may read only when registered: */
diff --git a/tools/libxl/libxl_pci.c b/tools/libxl/libxl_pci.c
index 9c92ae6..a094965 100644
--- a/tools/libxl/libxl_pci.c
+++ b/tools/libxl/libxl_pci.c
@@ -800,6 +800,10 @@ static int pci_ins_check(libxl__gc *gc, uint32_t domid, const char *state, void
 {
     char *orig_state = priv;
 
+    USE(gc);
+    USE(domid);
+    USE(priv);
+
     if ( !strcmp(state, "pci-insert-failed") )
         return -1;
     if ( !strcmp(state, "pci-inserted") )
@@ -1007,6 +1011,8 @@ static int libxl__device_pci_reset(libxl__gc *gc, unsigned int domain, unsigned
 
 int libxl__device_pci_setdefault(libxl__gc *gc, libxl_device_pci *pci)
 {
+    USE(gc);
+    USE(pci);
     return 0;
 }
 
diff --git a/tools/libxl/libxl_qmp.c b/tools/libxl/libxl_qmp.c
index e33b130..c354c71 100644
--- a/tools/libxl/libxl_qmp.c
+++ b/tools/libxl/libxl_qmp.c
@@ -46,6 +46,10 @@
 typedef int (*qmp_callback_t)(libxl__qmp_handler *qmp,
                               const libxl__json_object *tree,
                               void *opaque);
+#define QMP_CALLBACK_PARAMS(qmp, tree, opaque)          \
+    libxl__qmp_handler *qmp MAYBE_UNUSED,               \
+    const libxl__json_object *tree MAYBE_UNUSED,        \
+    void *opaque MAYBE_UNUSED
 
 typedef struct qmp_request_context {
     int rc;
@@ -109,9 +113,7 @@ static int store_serial_port_info(libxl__qmp_handler *qmp,
     return ret;
 }
 
-static int register_serials_chardev_callback(libxl__qmp_handler *qmp,
-                                             const libxl__json_object *o,
-                                             void *unused)
+static int register_serials_chardev_callback(QMP_CALLBACK_PARAMS(qmp,o,unused))
 {
     const libxl__json_object *obj = NULL;
     const libxl__json_object *label = NULL;
@@ -165,9 +167,7 @@ static int qmp_write_domain_console_item(libxl__gc *gc, int domid,
     return libxl__xs_write(gc, XBT_NULL, path, "%s", value);
 }
 
-static int qmp_register_vnc_callback(libxl__qmp_handler *qmp,
-                                     const libxl__json_object *o,
-                                     void *unused)
+static int qmp_register_vnc_callback(QMP_CALLBACK_PARAMS(qmp,o,unused))
 {
     GC_INIT(qmp->ctx);
     const libxl__json_object *obj;
@@ -203,8 +203,7 @@ out:
     return rc;
 }
 
-static int qmp_capabilities_callback(libxl__qmp_handler *qmp,
-                                     const libxl__json_object *o, void *unused)
+static int qmp_capabilities_callback(QMP_CALLBACK_PARAMS(qmp,o,unused))
 {
     qmp->connected = true;
 
@@ -228,6 +227,8 @@ static int enable_qmp_capabilities(libxl__qmp_handler *qmp)
 static libxl__qmp_message_type qmp_response_type(libxl__qmp_handler *qmp,
                                                  const libxl__json_object *o)
 {
+    USE(qmp);
+
     libxl__qmp_message_type type;
     libxl__json_map_node *node = NULL;
     int i = 0;
diff --git a/tools/libxl/libxl_save_callout.c b/tools/libxl/libxl_save_callout.c
index 078b7ee..78bb67e 100644
--- a/tools/libxl/libxl_save_callout.c
+++ b/tools/libxl/libxl_save_callout.c
@@ -252,8 +252,8 @@ static void helper_failed(libxl__egc *egc, libxl__save_helper_state *shs,
                 (unsigned long)shs->child.pid);
 }
 
-static void helper_stdout_readable(libxl__egc *egc, libxl__ev_fd *ev,
-                                   int fd, short events, short revents)
+static void helper_stdout_readable(EV_FD_CALLBACK_PARAMS
+                                   (egc, ev, fd, events, revents))
 {
     libxl__save_helper_state *shs = CONTAINER_OF(ev, *shs, readable);
     STATE_AO_GC(shs->ao);
diff --git a/tools/libxl/libxl_utils.c b/tools/libxl/libxl_utils.c
index f7b44a0..a7c34a9 100644
--- a/tools/libxl/libxl_utils.c
+++ b/tools/libxl/libxl_utils.c
@@ -230,6 +230,7 @@ out:
 
 int libxl_string_to_backend(libxl_ctx *ctx, char *s, libxl_disk_backend *backend)
 {
+    USE(ctx);
     char *p;
     int rc = 0;
 
@@ -513,6 +514,7 @@ void libxl_bitmap_dispose(libxl_bitmap *map)
 void libxl_bitmap_copy(libxl_ctx *ctx, libxl_bitmap *dptr,
                        const libxl_bitmap *sptr)
 {
+    USE(ctx);
     int sz;
 
     assert(dptr->size == sptr->size);
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 16:24:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 16:24:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swbin-0007mr-Ri; Wed, 01 Aug 2012 16:24:29 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Swbil-0007jo-4X
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 16:24:27 +0000
Received: from [85.158.138.51:63115] by server-12.bemta-3.messagelabs.com id
	1D/CD-15259-A3859105; Wed, 01 Aug 2012 16:24:26 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-16.tower-174.messagelabs.com!1343838263!29827278!3
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18688 invoked from network); 1 Aug 2012 16:24:25 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 16:24:25 -0000
X-IronPort-AV: E=Sophos;i="4.77,695,1336348800"; d="scan'208";a="13808044"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 16:24:23 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 1 Aug 2012 17:24:22 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Swbig-0007J7-2t; Wed, 01 Aug 2012 16:24:22 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Swbig-0004dK-0z;
	Wed, 01 Aug 2012 17:24:22 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xen.org>
Date: Wed, 1 Aug 2012 17:24:19 +0100
Message-ID: <1343838260-17725-11-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1343838260-17725-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1343838260-17725-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [Xen-devel] [PATCH 10/11] libxl: remove an unused numainfo parameter
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 tools/libxl/libxl_numa.c |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/tools/libxl/libxl_numa.c b/tools/libxl/libxl_numa.c
index 5301ec4..2c8e59f 100644
--- a/tools/libxl/libxl_numa.c
+++ b/tools/libxl/libxl_numa.c
@@ -231,7 +231,7 @@ static int nodemap_to_nr_vcpus(libxl__gc *gc, libxl_cputopology *tinfo,
  * candidates with just one node).
  */
 static int count_cpus_per_node(libxl_cputopology *tinfo, int nr_cpus,
-                               libxl_numainfo *ninfo, int nr_nodes)
+                               int nr_nodes)
 {
     int cpus_per_node = 0;
     int j, i;
@@ -340,7 +340,7 @@ int libxl__get_numa_candidate(libxl__gc *gc,
     if (!min_nodes) {
         int cpus_per_node;
 
-        cpus_per_node = count_cpus_per_node(tinfo, nr_cpus, ninfo, nr_nodes);
+        cpus_per_node = count_cpus_per_node(tinfo, nr_cpus, nr_nodes);
         if (cpus_per_node == 0)
             min_nodes = 1;
         else
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 16:24:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 16:24:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swbil-0007l2-Vx; Wed, 01 Aug 2012 16:24:27 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Swbik-0007is-53
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 16:24:26 +0000
Received: from [85.158.139.83:3148] by server-11.bemta-5.messagelabs.com id
	AA/C5-20400-93859105; Wed, 01 Aug 2012 16:24:25 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-8.tower-182.messagelabs.com!1343838262!18501401!2
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13609 invoked from network); 1 Aug 2012 16:24:23 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 16:24:23 -0000
X-IronPort-AV: E=Sophos;i="4.77,695,1336348800"; d="scan'208";a="13808036"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 16:24:21 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 1 Aug 2012 17:24:21 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Swbif-0007Ii-GE; Wed, 01 Aug 2012 16:24:21 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Swbif-0004ck-ES;
	Wed, 01 Aug 2012 17:24:21 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xen.org>
Date: Wed, 1 Aug 2012 17:24:10 +0100
Message-ID: <1343838260-17725-2-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1343838260-17725-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1343838260-17725-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [Xen-devel] [PATCH 01/11] libxl: unify libxl__device_destroy and
	device_hotplug_done
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

device_hotplug_done contains an open-coded but improved version of
libxl__device_destroy.  So move the contents of device_hotplug_done
into libxl__device_destroy, deleting the old code there, and replace
the open-coded version at its old location with a function call.

Also fix the error handling: the rc from the destroy should be
propagated into the aodev.

Reported-by: Ian Campbell <Ian.Campbell@citrix.com>
Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 tools/libxl/libxl_device.c |   35 ++++++++++++-----------------------
 1 files changed, 12 insertions(+), 23 deletions(-)

diff --git a/tools/libxl/libxl_device.c b/tools/libxl/libxl_device.c
index da0c3ea..3658bd1 100644
--- a/tools/libxl/libxl_device.c
+++ b/tools/libxl/libxl_device.c
@@ -513,17 +513,18 @@ int libxl__device_destroy(libxl__gc *gc, libxl__device *dev)
     char *be_path = libxl__device_backend_path(gc, dev);
     char *fe_path = libxl__device_frontend_path(gc, dev);
     xs_transaction_t t = 0;
-    int rc = 0;
+    int rc;
+
+    for (;;) {
+        rc = libxl__xs_transaction_start(gc, &t);
+        if (rc) goto out;
 
-    do {
-        t = xs_transaction_start(CTX->xsh);
         libxl__xs_path_cleanup(gc, t, fe_path);
         libxl__xs_path_cleanup(gc, t, be_path);
-        rc = !xs_transaction_end(CTX->xsh, t, 0);
-    } while (rc && errno == EAGAIN);
-    if (rc) {
-        LOGE(ERROR, "unable to finish transaction");
-        goto out;
+
+        rc = libxl__xs_transaction_commit(gc, &t);
+        if (!rc) break;
+        if (rc < 0) goto out;
     }
 
     libxl__device_destroy_tapdisk(gc, be_path);
@@ -993,29 +994,17 @@ error:
 static void device_hotplug_done(libxl__egc *egc, libxl__ao_device *aodev)
 {
     STATE_AO_GC(aodev->ao);
-    char *be_path = libxl__device_backend_path(gc, aodev->dev);
-    char *fe_path = libxl__device_frontend_path(gc, aodev->dev);
-    xs_transaction_t t = 0;
     int rc;
 
     device_hotplug_clean(gc, aodev);
 
     /* Clean xenstore if it's a disconnection */
     if (aodev->action == DEVICE_DISCONNECT) {
-        for (;;) {
-            rc = libxl__xs_transaction_start(gc, &t);
-            if (rc) goto out;
-
-            libxl__xs_path_cleanup(gc, t, fe_path);
-            libxl__xs_path_cleanup(gc, t, be_path);
-
-            rc = libxl__xs_transaction_commit(gc, &t);
-            if (!rc) break;
-            if (rc < 0) goto out;
-        }
+        rc = libxl__device_destroy(gc, aodev->dev);
+        if (!aodev->rc)
+            aodev->rc = rc;
     }
 
-out:
     aodev->callback(egc, aodev);
     return;
 }
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 16:24:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 16:24:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swbil-0007l2-Vx; Wed, 01 Aug 2012 16:24:27 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Swbik-0007is-53
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 16:24:26 +0000
Received: from [85.158.139.83:3148] by server-11.bemta-5.messagelabs.com id
	AA/C5-20400-93859105; Wed, 01 Aug 2012 16:24:25 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-8.tower-182.messagelabs.com!1343838262!18501401!2
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13609 invoked from network); 1 Aug 2012 16:24:23 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 16:24:23 -0000
X-IronPort-AV: E=Sophos;i="4.77,695,1336348800"; d="scan'208";a="13808036"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 16:24:21 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 1 Aug 2012 17:24:21 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Swbif-0007Ii-GE; Wed, 01 Aug 2012 16:24:21 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Swbif-0004ck-ES;
	Wed, 01 Aug 2012 17:24:21 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xen.org>
Date: Wed, 1 Aug 2012 17:24:10 +0100
Message-ID: <1343838260-17725-2-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1343838260-17725-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1343838260-17725-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [Xen-devel] [PATCH 01/11] libxl: unify libxl__device_destroy and
	device_hotplug_done
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

device_hotplug_done contains an open-coded but improved version of
libxl__device_destroy.  So move the contents of device_hotplug_done
into libxl__device_destroy, deleting the old code, and replace it at
its old location with a function call.

Also fix the error handling: the rc from the destroy should be
propagated into the aodev.

Reported-by: Ian Campbell <Ian.Campbell@citrix.com>
Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 tools/libxl/libxl_device.c |   35 ++++++++++++-----------------------
 1 files changed, 12 insertions(+), 23 deletions(-)

diff --git a/tools/libxl/libxl_device.c b/tools/libxl/libxl_device.c
index da0c3ea..3658bd1 100644
--- a/tools/libxl/libxl_device.c
+++ b/tools/libxl/libxl_device.c
@@ -513,17 +513,18 @@ int libxl__device_destroy(libxl__gc *gc, libxl__device *dev)
     char *be_path = libxl__device_backend_path(gc, dev);
     char *fe_path = libxl__device_frontend_path(gc, dev);
     xs_transaction_t t = 0;
-    int rc = 0;
+    int rc;
+
+    for (;;) {
+        rc = libxl__xs_transaction_start(gc, &t);
+        if (rc) goto out;
 
-    do {
-        t = xs_transaction_start(CTX->xsh);
         libxl__xs_path_cleanup(gc, t, fe_path);
         libxl__xs_path_cleanup(gc, t, be_path);
-        rc = !xs_transaction_end(CTX->xsh, t, 0);
-    } while (rc && errno == EAGAIN);
-    if (rc) {
-        LOGE(ERROR, "unable to finish transaction");
-        goto out;
+
+        rc = libxl__xs_transaction_commit(gc, &t);
+        if (!rc) break;
+        if (rc < 0) goto out;
     }
 
     libxl__device_destroy_tapdisk(gc, be_path);
@@ -993,29 +994,17 @@ error:
 static void device_hotplug_done(libxl__egc *egc, libxl__ao_device *aodev)
 {
     STATE_AO_GC(aodev->ao);
-    char *be_path = libxl__device_backend_path(gc, aodev->dev);
-    char *fe_path = libxl__device_frontend_path(gc, aodev->dev);
-    xs_transaction_t t = 0;
     int rc;
 
     device_hotplug_clean(gc, aodev);
 
     /* Clean xenstore if it's a disconnection */
     if (aodev->action == DEVICE_DISCONNECT) {
-        for (;;) {
-            rc = libxl__xs_transaction_start(gc, &t);
-            if (rc) goto out;
-
-            libxl__xs_path_cleanup(gc, t, fe_path);
-            libxl__xs_path_cleanup(gc, t, be_path);
-
-            rc = libxl__xs_transaction_commit(gc, &t);
-            if (!rc) break;
-            if (rc < 0) goto out;
-        }
+        rc = libxl__device_destroy(gc, aodev->dev);
+        if (!aodev->rc)
+            aodev->rc = rc;
     }
 
-out:
     aodev->callback(egc, aodev);
     return;
 }
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 16:24:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 16:24:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swbil-0007ke-Jo; Wed, 01 Aug 2012 16:24:27 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Swbij-0007in-PR
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 16:24:26 +0000
Received: from [85.158.138.51:63019] by server-10.bemta-3.messagelabs.com id
	C5/5E-21993-83859105; Wed, 01 Aug 2012 16:24:24 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-16.tower-174.messagelabs.com!1343838263!29827278!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18357 invoked from network); 1 Aug 2012 16:24:24 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 16:24:24 -0000
X-IronPort-AV: E=Sophos;i="4.77,695,1336348800"; d="scan'208";a="13808038"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 16:24:22 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 1 Aug 2012 17:24:22 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Swbif-0007Io-Ju; Wed, 01 Aug 2012 16:24:21 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Swbif-0004cs-Hj;
	Wed, 01 Aug 2012 17:24:21 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xen.org>
Date: Wed, 1 Aug 2012 17:24:12 +0100
Message-ID: <1343838260-17725-4-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1343838260-17725-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1343838260-17725-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Cc: Ian Campbell <ian.campbell@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 03/11] libxl: fix device counting race in
	libxl__devices_destroy
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Don't have a fixed number of devices in the aodevs array, and instead
size it depending on the devices present in xenstore.  Somewhat
formalise the multiple device addition/removal machinery to make this
clearer and easier to do.

As a side-effect we fix a few "lost thread of control" bugs which would
occur if there were no devices of a particular kind.  (Various if
statements which checked for there being no devices have become
redundant, but are retained to avoid making the patch bigger.)

Specifically:

 * Users of libxl__ao_devices are no longer expected to know in
   advance how many device operations they are going to do.  Instead
   they can initiate them one at a time, between bracketing calls to
   "begin" and "prepared".

 * The array of aodevs used for this is dynamically sized; to support
   this it's an array of pointers rather than of structs.

 * Users of libxl__ao_devices are presented with a more opaque interface.
   They are no longer expected to, themselves,
      - look into the array of aodevs (this is now private)
      - know that the individual addition/removal completions are
        handled by libxl__ao_devices_callback (this callback function
        is now a private function for the multidev machinery)
      - ever deal with populating the contents of an aodevs

 * The doc comments relating to some of the members of
   libxl__ao_device are clarified.  (And the member `aodevs' is moved
   to put it with the other members with the same status.)

 * The multidev machinery allocates an aodev to represent the
   operation of preparing all of the other operations.  See
   the comment in libxl__multidev_begin.
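The "preparation placeholder" trick from the last bullet can be sketched in
miniature.  Everything here (the mdev_* names, struct op, MAXOPS) is invented
for illustration and is not the real libxl machinery:

```c
/* Model of the preparation placeholder: begin() creates a sentinel
 * operation that stays active until prepared() is called, so the final
 * callback cannot fire early and fires exactly once even when zero real
 * operations are started.  All names are invented for this sketch. */
#include <assert.h>

#define MAXOPS 8

struct op { int active; int rc; };

struct mdev {
    struct op ops[MAXOPS];
    int used;
    int done_calls;   /* how many times the final callback fired */
    int done_rc;      /* rc passed to it */
};

static struct op *mdev_prepare(struct mdev *m)
{
    struct op *o = &m->ops[m->used++];
    o->active = 1;
    o->rc = 0;
    return o;
}

static void mdev_begin(struct mdev *m)
{
    m->used = m->done_calls = m->done_rc = 0;
    mdev_prepare(m);            /* ops[0] is the preparation placeholder */
}

/* Completion path shared by every op, like multidev_one_callback. */
static void mdev_op_done(struct mdev *m, struct op *o, int rc)
{
    int i, error = 0;

    o->rc = rc;
    o->active = 0;
    for (i = 0; i < m->used; i++) {
        if (m->ops[i].active) return;      /* someone still outstanding */
        if (m->ops[i].rc) error = m->ops[i].rc;
    }
    m->done_calls++;                       /* all done: fire final callback */
    m->done_rc = error;
}

/* Marks the placeholder complete, like libxl__multidev_prepared. */
static void mdev_prepared(struct mdev *m, int rc)
{
    mdev_op_done(m, &m->ops[0], rc);
}
```

This demonstrates points (i) and (ii) of the comment added in
libxl__multidev_begin: an operation completing while others are still being
prepared does not trigger the final callback, and with zero real operations
the callback still fires as soon as prepared() runs.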

A wrinkle is that the functions are called "multidev" but the structs
are called "libxl__ao_devices" and "aodevs".  I have given these
functions this name to distinguish them from "libxl__ao_device" and
"aodev" and so forth by more than just the use of the plural "s"
suffix.

In the next patch we will rename the structs.

Signed-off-by: Roger Pau Monne <roger.pau@citrix.com>
Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Ian Campbell <ian.campbell@eu.citrix.com>

-
Changes in v4:
 * Actually honour errors in rc argument to libxl__multidev_prepared.
 * Fix the doc comment for libxl__add_*.
 * In comments, consistently use "multidev" not "multidevs".

Changes in v3:
 * New multidev interfaces - extensive changes.
---
 tools/libxl/libxl_create.c   |    8 +-
 tools/libxl/libxl_device.c   |  129 +++++++++++++++++++-----------------------
 tools/libxl/libxl_dm.c       |    8 +-
 tools/libxl/libxl_internal.h |   75 +++++++++++++++----------
 4 files changed, 111 insertions(+), 109 deletions(-)

diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index aafacd8..3265d69 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -909,10 +909,10 @@ static void domcreate_rebuild_done(libxl__egc *egc,
 
     store_libxl_entry(gc, domid, &d_config->b_info);
 
-    dcs->aodevs.size = d_config->num_disks;
+    libxl__multidev_begin(ao, &dcs->aodevs);
     dcs->aodevs.callback = domcreate_launch_dm;
-    libxl__prepare_ao_devices(ao, &dcs->aodevs);
     libxl__add_disks(egc, ao, domid, 0, d_config, &dcs->aodevs);
+    libxl__multidev_prepared(egc, &dcs->aodevs, 0);
 
     return;
 
@@ -1039,10 +1039,10 @@ static void domcreate_devmodel_started(libxl__egc *egc,
     /* Plug nic interfaces */
     if (d_config->num_nics > 0) {
         /* Attach nics */
-        dcs->aodevs.size = d_config->num_nics;
+        libxl__multidev_begin(ao, &dcs->aodevs);
         dcs->aodevs.callback = domcreate_attach_pci;
-        libxl__prepare_ao_devices(ao, &dcs->aodevs);
         libxl__add_nics(egc, ao, domid, 0, d_config, &dcs->aodevs);
+        libxl__multidev_prepared(egc, &dcs->aodevs, 0);
         return;
     }
 
diff --git a/tools/libxl/libxl_device.c b/tools/libxl/libxl_device.c
index 3658bd1..544a861 100644
--- a/tools/libxl/libxl_device.c
+++ b/tools/libxl/libxl_device.c
@@ -58,50 +58,6 @@ int libxl__parse_backend_path(libxl__gc *gc,
     return libxl__device_kind_from_string(strkind, &dev->backend_kind);
 }
 
-static int libxl__num_devices(libxl__gc *gc, uint32_t domid)
-{
-    char *path;
-    unsigned int num_kinds, num_devs;
-    char **kinds = NULL, **devs = NULL;
-    int i, j, rc = 0;
-    libxl__device dev;
-    libxl__device_kind kind;
-    int numdevs = 0;
-
-    path = GCSPRINTF("/local/domain/%d/device", domid);
-    kinds = libxl__xs_directory(gc, XBT_NULL, path, &num_kinds);
-    if (!kinds) {
-        if (errno != ENOENT) {
-            LOGE(ERROR, "unable to get xenstore device listing %s", path);
-            rc = ERROR_FAIL;
-            goto out;
-        }
-        num_kinds = 0;
-    }
-    for (i = 0; i < num_kinds; i++) {
-        if (libxl__device_kind_from_string(kinds[i], &kind))
-            continue;
-        if (kind == LIBXL__DEVICE_KIND_CONSOLE)
-            continue;
-
-        path = GCSPRINTF("/local/domain/%d/device/%s", domid, kinds[i]);
-        devs = libxl__xs_directory(gc, XBT_NULL, path, &num_devs);
-        if (!devs)
-            continue;
-        for (j = 0; j < num_devs; j++) {
-            path = GCSPRINTF("/local/domain/%d/device/%s/%s/backend",
-                             domid, kinds[i], devs[j]);
-            path = libxl__xs_read(gc, XBT_NULL, path);
-            if (path && libxl__parse_backend_path(gc, path, &dev) == 0) {
-                numdevs++;
-            }
-        }
-    }
-out:
-    if (rc) return rc;
-    return numdevs;
-}
-
 int libxl__nic_type(libxl__gc *gc, libxl__device *dev, libxl_nic_type *nictype)
 {
     char *snictype, *be_path;
@@ -445,40 +401,81 @@ void libxl__prepare_ao_device(libxl__ao *ao, libxl__ao_device *aodev)
     libxl__ev_child_init(&aodev->child);
 }
 
-void libxl__prepare_ao_devices(libxl__ao *ao, libxl__ao_devices *aodevs)
+/* multidev */
+
+void libxl__multidev_begin(libxl__ao *ao, libxl__ao_devices *aodevs)
 {
     AO_GC;
 
-    GCNEW_ARRAY(aodevs->array, aodevs->size);
-    for (int i = 0; i < aodevs->size; i++) {
-        aodevs->array[i].aodevs = aodevs;
-        libxl__prepare_ao_device(ao, &aodevs->array[i]);
+    aodevs->ao = ao;
+    aodevs->array = 0;
+    aodevs->used = aodevs->allocd = 0;
+
+    /* We allocate an aodev to represent the operation of preparing
+     * all of the other operations.  This operation is completed when
+     * we have started all the others (ie, when the user calls
+     * _prepared).  That arranges automatically that
+     *  (i) we do not think we have finished even if one of the
+     *      operations completes while we are still preparing
+     *  (ii) if we are starting zero operations, we do still
+     *      make the callback as soon as we know this fact
+     *  (iii) we have a nice consistent way to deal with any
+     *      error that might occur while deciding what to initiate
+     */
+    aodevs->preparation = libxl__multidev_prepare(aodevs);
+}
+
+static void multidev_one_callback(libxl__egc *egc, libxl__ao_device *aodev);
+
+libxl__ao_device *libxl__multidev_prepare(libxl__ao_devices *aodevs) {
+    STATE_AO_GC(aodevs->ao);
+    libxl__ao_device *aodev;
+
+    GCNEW(aodev);
+    aodev->aodevs = aodevs;
+    aodev->callback = multidev_one_callback;
+    libxl__prepare_ao_device(ao, aodev);
+
+    if (aodevs->used >= aodevs->allocd) {
+        aodevs->allocd = aodevs->used * 2 + 5;
+        GCREALLOC_ARRAY(aodevs->array, aodevs->allocd);
     }
+    aodevs->array[aodevs->used++] = aodev;
+
+    return aodev;
 }
 
-void libxl__ao_devices_callback(libxl__egc *egc, libxl__ao_device *aodev)
+static void multidev_one_callback(libxl__egc *egc, libxl__ao_device *aodev)
 {
     STATE_AO_GC(aodev->ao);
     libxl__ao_devices *aodevs = aodev->aodevs;
     int i, error = 0;
 
     aodev->active = 0;
-    for (i = 0; i < aodevs->size; i++) {
-        if (aodevs->array[i].active)
+
+    for (i = 0; i < aodevs->used; i++) {
+        if (aodevs->array[i]->active)
             return;
 
-        if (aodevs->array[i].rc)
-            error = aodevs->array[i].rc;
+        if (aodevs->array[i]->rc)
+            error = aodevs->array[i]->rc;
     }
 
     aodevs->callback(egc, aodevs, error);
     return;
 }
 
+void libxl__multidev_prepared(libxl__egc *egc, libxl__ao_devices *aodevs,
+                              int rc)
+{
+    aodevs->preparation->rc = rc;
+    multidev_one_callback(egc, aodevs->preparation);
+}
+
 /******************************************************************************/
 
 /* Macro for defining the functions that will add a bunch of disks when
- * inside an async op.
+ * inside an async op with multidev.
  * This macro is added to prevent repetition of code.
  *
  * The following functions are defined:
@@ -495,9 +492,9 @@ void libxl__ao_devices_callback(libxl__egc *egc, libxl__ao_device *aodev)
         int i;                                                                 \
         int end = start + d_config->num_##type##s;                             \
         for (i = start; i < end; i++) {                                        \
-            aodevs->array[i].callback = libxl__ao_devices_callback;            \
+            libxl__ao_device *aodev = libxl__multidev_prepare(aodevs);         \
             libxl__device_##type##_add(egc, domid, &d_config->type##s[i-start],\
-                                       &aodevs->array[i]);                     \
+                                       aodev);                                 \
         }                                                                      \
     }
 
@@ -546,20 +543,13 @@ void libxl__devices_destroy(libxl__egc *egc, libxl__devices_remove_state *drs)
     char *path;
     unsigned int num_kinds, num_dev_xsentries;
     char **kinds = NULL, **devs = NULL;
-    int i, j, numdev = 0, rc = 0;
+    int i, j, rc = 0;
     libxl__device *dev;
     libxl__ao_devices *aodevs = &drs->aodevs;
     libxl__ao_device *aodev;
     libxl__device_kind kind;
 
-    aodevs->size = libxl__num_devices(gc, drs->domid);
-    if (aodevs->size < 0) {
-        LOG(ERROR, "unable to get number of devices for domain %u", drs->domid);
-        rc = aodevs->size;
-        goto out;
-    }
-
-    libxl__prepare_ao_devices(drs->ao, aodevs);
+    libxl__multidev_begin(ao, aodevs);
     aodevs->callback = devices_remove_callback;
 
     path = libxl__sprintf(gc, "/local/domain/%d/device", domid);
@@ -597,13 +587,11 @@ void libxl__devices_destroy(libxl__egc *egc, libxl__devices_remove_state *drs)
                     libxl__device_destroy(gc, dev);
                     continue;
                 }
-                aodev = &aodevs->array[numdev];
+                aodev = libxl__multidev_prepare(aodevs);
                 aodev->action = DEVICE_DISCONNECT;
                 aodev->dev = dev;
-                aodev->callback = libxl__ao_devices_callback;
                 aodev->force = drs->force;
                 libxl__initiate_device_remove(egc, aodev);
-                numdev++;
             }
         }
     }
@@ -625,8 +613,7 @@ void libxl__devices_destroy(libxl__egc *egc, libxl__devices_remove_state *drs)
     }
 
 out:
-    if (!numdev) drs->callback(egc, drs, rc);
-    return;
+    libxl__multidev_prepared(egc, aodevs, rc);
 }
 
 /* Callbacks for device related operations */
diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
index f2e9572..177642b 100644
--- a/tools/libxl/libxl_dm.c
+++ b/tools/libxl/libxl_dm.c
@@ -856,10 +856,10 @@ retry_transaction:
         if (errno == EAGAIN)
             goto retry_transaction;
 
-    sdss->aodevs.size = dm_config->num_disks;
+    libxl__multidev_begin(ao, &sdss->aodevs);
     sdss->aodevs.callback = spawn_stub_launch_dm;
-    libxl__prepare_ao_devices(ao, &sdss->aodevs);
     libxl__add_disks(egc, ao, dm_domid, 0, dm_config, &sdss->aodevs);
+    libxl__multidev_prepared(egc, &sdss->aodevs, 0);
 
     free(args);
     return;
@@ -982,10 +982,10 @@ static void spawn_stubdom_pvqemu_cb(libxl__egc *egc,
     if (rc) goto out;
 
     if (d_config->num_nics > 0) {
-        sdss->aodevs.size = d_config->num_nics;
+        libxl__multidev_begin(ao, &sdss->aodevs);
         sdss->aodevs.callback = stubdom_pvqemu_cb;
-        libxl__prepare_ao_devices(ao, &sdss->aodevs);
         libxl__add_nics(egc, ao, dm_domid, 0, d_config, &sdss->aodevs);
+        libxl__multidev_prepared(egc, &sdss->aodevs, 0);
         return;
     }
 
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index c57503f..a5978b0 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -1810,20 +1810,6 @@ typedef void libxl__device_callback(libxl__egc*, libxl__ao_device*);
  */
 _hidden void libxl__prepare_ao_device(libxl__ao *ao, libxl__ao_device *aodev);
 
-/* Prepare a bunch of devices for addition/removal. Every ao_device in
- * ao_devices is set to 'active', and the ao_device 'base' field is set to
- * the one pointed by aodevs.
- */
-_hidden void libxl__prepare_ao_devices(libxl__ao *ao,
-                                       libxl__ao_devices *aodevs);
-
-/* Generic callback to use when adding/removing several devices, this will
- * check if the given aodev is the last one, and call the callback in the
- * parent libxl__ao_devices struct, passing the appropriate error if found.
- */
-_hidden void libxl__ao_devices_callback(libxl__egc *egc,
-                                        libxl__ao_device *aodev);
-
 struct libxl__ao_device {
     /* filled in by user */
     libxl__ao *ao;
@@ -1831,32 +1817,60 @@ struct libxl__ao_device {
     libxl__device *dev;
     int force;
     libxl__device_callback *callback;
-    /* private for implementation */
-    int active;
+    /* return value, zeroed by user on entry, is valid on callback */
     int rc;
+    /* private for multidev */
+    int active;
+    libxl__ao_devices *aodevs; /* reference to the containing multidev */
+    /* private for add/remove implementation */
     libxl__ev_devstate backend_ds;
     /* Bodge for Qemu devices, also used for timeout of hotplug execution */
     libxl__ev_time timeout;
-    /* Used internally to have a reference to the upper libxl__ao_devices
-     * struct when present */
-    libxl__ao_devices *aodevs;
     /* device hotplug execution */
     const char *what;
     int num_exec;
     libxl__ev_child child;
 };
 
-/* Helper struct to simply the plug/unplug of multiple devices at the same
- * time.
- *
- * This structure holds several devices, and the callback is only called
- * when all the devices inside of the array have finished.
- */
+/*
+ * Multiple devices "multidev" handling.
+ *
+ * Firstly, you should
+ *    libxl__multidev_begin
+ *    multidev->callback = ...
+ * Then zero or more times
+ *    libxl__multidev_prepare
+ *    libxl__initiate_device_{remove/addition}.
+ * Finally, once
+ *    libxl__multidev_prepared
+ * which will result (perhaps reentrantly) in one call to callback().
+ */
+
+/* Starts preparing to add/remove a bunch of devices. */
+_hidden void libxl__multidev_begin(libxl__ao *ao, libxl__ao_devices*);
+
+/* Prepares to add/remove one of many devices.  Returns a libxl__ao_device
+ * which has had libxl__prepare_ao_device called, and which has also
+ * had ->callback set.  The user should not mess with aodev->callback. */
+_hidden libxl__ao_device *libxl__multidev_prepare(libxl__ao_devices*);
+
+/* Notifies the multidev machinery that we have now finished preparing
+ * and initiating devices.  multidev->callback may then be called as
+ * soon as there are no prepared but not completed operations
+ * outstanding, perhaps reentrantly.  If rc!=0 (error should have been
+ * logged) multidev->callback will get a non-zero rc.
+ * callback may be set by the user at any point before prepared. */
+_hidden void libxl__multidev_prepared(libxl__egc*, libxl__ao_devices*, int rc);
+
 typedef void libxl__devices_callback(libxl__egc*, libxl__ao_devices*, int rc);
 struct libxl__ao_devices {
-    libxl__ao_device *array;
-    int size;
+    /* set by user: */
     libxl__devices_callback *callback;
+    /* for private use by libxl__...ao_devices... machinery: */
+    libxl__ao *ao;
+    libxl__ao_device **array;
+    int used, allocd;
+    libxl__ao_device *preparation;
 };
 
 /*
@@ -2366,10 +2380,11 @@ _hidden void libxl__devices_destroy(libxl__egc *egc,
                                     libxl__devices_remove_state *drs);
 
 /* Helper function to add a bunch of disks. This should be used when
- * the caller is inside an async op. "devices" will NOT be prepared by this
- * function, so the caller must make sure to call _prepare before calling this
- * function. The start parameter contains the position inside the aodevs array
- * that should be used to store the state of this devices.
+ * the caller is inside an async op. "devices" will NOT be prepared by
+ * this function, so the caller must make sure to call
+ * libxl__multidev_begin before calling this function. The start
+ * parameter contains the position inside the aodevs array that should
+ * be used to store the state of these devices.
  *
  * The "callback" will be called for each device, and the user is responsible
  * for calling libxl__ao_device_check_last on the callback.
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 16:24:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 16:24:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swbil-0007ke-Jo; Wed, 01 Aug 2012 16:24:27 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Swbij-0007in-PR
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 16:24:26 +0000
Received: from [85.158.138.51:63019] by server-10.bemta-3.messagelabs.com id
	C5/5E-21993-83859105; Wed, 01 Aug 2012 16:24:24 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-16.tower-174.messagelabs.com!1343838263!29827278!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18357 invoked from network); 1 Aug 2012 16:24:24 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 16:24:24 -0000
X-IronPort-AV: E=Sophos;i="4.77,695,1336348800"; d="scan'208";a="13808038"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 16:24:22 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 1 Aug 2012 17:24:22 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Swbif-0007Io-Ju; Wed, 01 Aug 2012 16:24:21 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Swbif-0004cs-Hj;
	Wed, 01 Aug 2012 17:24:21 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xen.org>
Date: Wed, 1 Aug 2012 17:24:12 +0100
Message-ID: <1343838260-17725-4-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1343838260-17725-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1343838260-17725-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Cc: Ian Campbell <ian.campbell@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 03/11] libxl: fix device counting race in
	libxl__devices_destroy
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Don't have a fixed number of devices in the aodevs array, and instead
size it depending on the devices present in xenstore.  Somewhat
formalise the multiple device addition/removal machinery to make this
clearer and easier to do.

As a side-effect we fix a few "lost thread of control" bug which would
occur if there were no devices of a particular kind.  (Various if
statements which checked for there being no devices have become
redundant, but are retained to avoid making the patch bigger.)

Specifically:

 * Users of libxl__ao_devices are no longer expected to know in
   advance how many device operations they are going to do.  Instead
   they can initiate them one at a time, between bracketing calls to
   "begin" and "prepared".

 * The array of aodevs used for this is dynamically sized; to support
   this it's an array of pointers rather than of structs.

 * Users of libxl__ao_devices are presented with a more opaque interface.
   They are no longer expected to, themselves,
      - look into the array of aodevs (this is now private)
      - know that the individual addition/removal completions are
        handled by libxl__ao_devices_callback (this callback function
        is now a private function for the multidev machinery)
      - ever deal with populating the contents of an aodevs

 * The doc comments relating to some of the members of
   libxl__ao_device are clarified.  (And the member `aodevs' is moved
   to put it with the other members with the same status.)

 * The multidev machinery allocates an aodev to represent the
   operation of preparing all of the other operations.  See
   the comment in libxl__multidev_begin.

A wrinkle is that the functions are called "multidev" but the structs
are called "libxl__ao_devices" and "aodevs".  I have given these
functions this name to distinguish them from "libxl__ao_device" and
"aodev" and so forth by more than just the use of the plural "s"
suffix.

In the next patch we will rename the structs.

Signed-off-by: Roger Pau Monne <roger.pau@citrix.com>
Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Ian Campbell <ian.campbell@eu.citrix.com>

---
Changes in v4:
 * Actually honour errors in rc argument to libxl__multidev_prepared.
 * Fix the doc comment for libxl__add_*.
 * In comments, consistently use "multidev" not "multidevs".

Changes in v3:
 * New multidev interfaces - extensive changes.
---
 tools/libxl/libxl_create.c   |    8 +-
 tools/libxl/libxl_device.c   |  129 +++++++++++++++++++-----------------------
 tools/libxl/libxl_dm.c       |    8 +-
 tools/libxl/libxl_internal.h |   75 +++++++++++++++----------
 4 files changed, 111 insertions(+), 109 deletions(-)

diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index aafacd8..3265d69 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -909,10 +909,10 @@ static void domcreate_rebuild_done(libxl__egc *egc,
 
     store_libxl_entry(gc, domid, &d_config->b_info);
 
-    dcs->aodevs.size = d_config->num_disks;
+    libxl__multidev_begin(ao, &dcs->aodevs);
     dcs->aodevs.callback = domcreate_launch_dm;
-    libxl__prepare_ao_devices(ao, &dcs->aodevs);
     libxl__add_disks(egc, ao, domid, 0, d_config, &dcs->aodevs);
+    libxl__multidev_prepared(egc, &dcs->aodevs, 0);
 
     return;
 
@@ -1039,10 +1039,10 @@ static void domcreate_devmodel_started(libxl__egc *egc,
     /* Plug nic interfaces */
     if (d_config->num_nics > 0) {
         /* Attach nics */
-        dcs->aodevs.size = d_config->num_nics;
+        libxl__multidev_begin(ao, &dcs->aodevs);
         dcs->aodevs.callback = domcreate_attach_pci;
-        libxl__prepare_ao_devices(ao, &dcs->aodevs);
         libxl__add_nics(egc, ao, domid, 0, d_config, &dcs->aodevs);
+        libxl__multidev_prepared(egc, &dcs->aodevs, 0);
         return;
     }
 
diff --git a/tools/libxl/libxl_device.c b/tools/libxl/libxl_device.c
index 3658bd1..544a861 100644
--- a/tools/libxl/libxl_device.c
+++ b/tools/libxl/libxl_device.c
@@ -58,50 +58,6 @@ int libxl__parse_backend_path(libxl__gc *gc,
     return libxl__device_kind_from_string(strkind, &dev->backend_kind);
 }
 
-static int libxl__num_devices(libxl__gc *gc, uint32_t domid)
-{
-    char *path;
-    unsigned int num_kinds, num_devs;
-    char **kinds = NULL, **devs = NULL;
-    int i, j, rc = 0;
-    libxl__device dev;
-    libxl__device_kind kind;
-    int numdevs = 0;
-
-    path = GCSPRINTF("/local/domain/%d/device", domid);
-    kinds = libxl__xs_directory(gc, XBT_NULL, path, &num_kinds);
-    if (!kinds) {
-        if (errno != ENOENT) {
-            LOGE(ERROR, "unable to get xenstore device listing %s", path);
-            rc = ERROR_FAIL;
-            goto out;
-        }
-        num_kinds = 0;
-    }
-    for (i = 0; i < num_kinds; i++) {
-        if (libxl__device_kind_from_string(kinds[i], &kind))
-            continue;
-        if (kind == LIBXL__DEVICE_KIND_CONSOLE)
-            continue;
-
-        path = GCSPRINTF("/local/domain/%d/device/%s", domid, kinds[i]);
-        devs = libxl__xs_directory(gc, XBT_NULL, path, &num_devs);
-        if (!devs)
-            continue;
-        for (j = 0; j < num_devs; j++) {
-            path = GCSPRINTF("/local/domain/%d/device/%s/%s/backend",
-                             domid, kinds[i], devs[j]);
-            path = libxl__xs_read(gc, XBT_NULL, path);
-            if (path && libxl__parse_backend_path(gc, path, &dev) == 0) {
-                numdevs++;
-            }
-        }
-    }
-out:
-    if (rc) return rc;
-    return numdevs;
-}
-
 int libxl__nic_type(libxl__gc *gc, libxl__device *dev, libxl_nic_type *nictype)
 {
     char *snictype, *be_path;
@@ -445,40 +401,81 @@ void libxl__prepare_ao_device(libxl__ao *ao, libxl__ao_device *aodev)
     libxl__ev_child_init(&aodev->child);
 }
 
-void libxl__prepare_ao_devices(libxl__ao *ao, libxl__ao_devices *aodevs)
+/* multidev */
+
+void libxl__multidev_begin(libxl__ao *ao, libxl__ao_devices *aodevs)
 {
     AO_GC;
 
-    GCNEW_ARRAY(aodevs->array, aodevs->size);
-    for (int i = 0; i < aodevs->size; i++) {
-        aodevs->array[i].aodevs = aodevs;
-        libxl__prepare_ao_device(ao, &aodevs->array[i]);
+    aodevs->ao = ao;
+    aodevs->array = 0;
+    aodevs->used = aodevs->allocd = 0;
+
+    /* We allocate an aodev to represent the operation of preparing
+     * all of the other operations.  This operation is completed when
+     * we have started all the others (ie, when the user calls
+     * _prepared).  That arranges automatically that
+     *  (i) we do not think we have finished even if one of the
+     *      operations completes while we are still preparing
+     *  (ii) if we are starting zero operations, we do still
+     *      make the callback as soon as we know this fact
+     *  (iii) we have a nice consistent way to deal with any
+     *      error that might occur while deciding what to initiate
+     */
+    aodevs->preparation = libxl__multidev_prepare(aodevs);
+}
+
+static void multidev_one_callback(libxl__egc *egc, libxl__ao_device *aodev);
+
+libxl__ao_device *libxl__multidev_prepare(libxl__ao_devices *aodevs) {
+    STATE_AO_GC(aodevs->ao);
+    libxl__ao_device *aodev;
+
+    GCNEW(aodev);
+    aodev->aodevs = aodevs;
+    aodev->callback = multidev_one_callback;
+    libxl__prepare_ao_device(ao, aodev);
+
+    if (aodevs->used >= aodevs->allocd) {
+        aodevs->allocd = aodevs->used * 2 + 5;
+        GCREALLOC_ARRAY(aodevs->array, aodevs->allocd);
     }
+    aodevs->array[aodevs->used++] = aodev;
+
+    return aodev;
 }
 
-void libxl__ao_devices_callback(libxl__egc *egc, libxl__ao_device *aodev)
+static void multidev_one_callback(libxl__egc *egc, libxl__ao_device *aodev)
 {
     STATE_AO_GC(aodev->ao);
     libxl__ao_devices *aodevs = aodev->aodevs;
     int i, error = 0;
 
     aodev->active = 0;
-    for (i = 0; i < aodevs->size; i++) {
-        if (aodevs->array[i].active)
+
+    for (i = 0; i < aodevs->used; i++) {
+        if (aodevs->array[i]->active)
             return;
 
-        if (aodevs->array[i].rc)
-            error = aodevs->array[i].rc;
+        if (aodevs->array[i]->rc)
+            error = aodevs->array[i]->rc;
     }
 
     aodevs->callback(egc, aodevs, error);
     return;
 }
 
+void libxl__multidev_prepared(libxl__egc *egc, libxl__ao_devices *aodevs,
+                              int rc)
+{
+    aodevs->preparation->rc = rc;
+    multidev_one_callback(egc, aodevs->preparation);
+}
+
 /******************************************************************************/
 
 /* Macro for defining the functions that will add a bunch of disks when
- * inside an async op.
+ * inside an async op with multidev.
  * This macro is added to prevent repetition of code.
  *
  * The following functions are defined:
@@ -495,9 +492,9 @@ void libxl__ao_devices_callback(libxl__egc *egc, libxl__ao_device *aodev)
         int i;                                                                 \
         int end = start + d_config->num_##type##s;                             \
         for (i = start; i < end; i++) {                                        \
-            aodevs->array[i].callback = libxl__ao_devices_callback;            \
+            libxl__ao_device *aodev = libxl__multidev_prepare(aodevs);         \
             libxl__device_##type##_add(egc, domid, &d_config->type##s[i-start],\
-                                       &aodevs->array[i]);                     \
+                                       aodev);                                 \
         }                                                                      \
     }
 
@@ -546,20 +543,13 @@ void libxl__devices_destroy(libxl__egc *egc, libxl__devices_remove_state *drs)
     char *path;
     unsigned int num_kinds, num_dev_xsentries;
     char **kinds = NULL, **devs = NULL;
-    int i, j, numdev = 0, rc = 0;
+    int i, j, rc = 0;
     libxl__device *dev;
     libxl__ao_devices *aodevs = &drs->aodevs;
     libxl__ao_device *aodev;
     libxl__device_kind kind;
 
-    aodevs->size = libxl__num_devices(gc, drs->domid);
-    if (aodevs->size < 0) {
-        LOG(ERROR, "unable to get number of devices for domain %u", drs->domid);
-        rc = aodevs->size;
-        goto out;
-    }
-
-    libxl__prepare_ao_devices(drs->ao, aodevs);
+    libxl__multidev_begin(ao, aodevs);
     aodevs->callback = devices_remove_callback;
 
     path = libxl__sprintf(gc, "/local/domain/%d/device", domid);
@@ -597,13 +587,11 @@ void libxl__devices_destroy(libxl__egc *egc, libxl__devices_remove_state *drs)
                     libxl__device_destroy(gc, dev);
                     continue;
                 }
-                aodev = &aodevs->array[numdev];
+                aodev = libxl__multidev_prepare(aodevs);
                 aodev->action = DEVICE_DISCONNECT;
                 aodev->dev = dev;
-                aodev->callback = libxl__ao_devices_callback;
                 aodev->force = drs->force;
                 libxl__initiate_device_remove(egc, aodev);
-                numdev++;
             }
         }
     }
@@ -625,8 +613,7 @@ void libxl__devices_destroy(libxl__egc *egc, libxl__devices_remove_state *drs)
     }
 
 out:
-    if (!numdev) drs->callback(egc, drs, rc);
-    return;
+    libxl__multidev_prepared(egc, aodevs, rc);
 }
 
 /* Callbacks for device related operations */
diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
index f2e9572..177642b 100644
--- a/tools/libxl/libxl_dm.c
+++ b/tools/libxl/libxl_dm.c
@@ -856,10 +856,10 @@ retry_transaction:
         if (errno == EAGAIN)
             goto retry_transaction;
 
-    sdss->aodevs.size = dm_config->num_disks;
+    libxl__multidev_begin(ao, &sdss->aodevs);
     sdss->aodevs.callback = spawn_stub_launch_dm;
-    libxl__prepare_ao_devices(ao, &sdss->aodevs);
     libxl__add_disks(egc, ao, dm_domid, 0, dm_config, &sdss->aodevs);
+    libxl__multidev_prepared(egc, &sdss->aodevs, 0);
 
     free(args);
     return;
@@ -982,10 +982,10 @@ static void spawn_stubdom_pvqemu_cb(libxl__egc *egc,
     if (rc) goto out;
 
     if (d_config->num_nics > 0) {
-        sdss->aodevs.size = d_config->num_nics;
+        libxl__multidev_begin(ao, &sdss->aodevs);
         sdss->aodevs.callback = stubdom_pvqemu_cb;
-        libxl__prepare_ao_devices(ao, &sdss->aodevs);
         libxl__add_nics(egc, ao, dm_domid, 0, d_config, &sdss->aodevs);
+        libxl__multidev_prepared(egc, &sdss->aodevs, 0);
         return;
     }
 
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index c57503f..a5978b0 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -1810,20 +1810,6 @@ typedef void libxl__device_callback(libxl__egc*, libxl__ao_device*);
  */
 _hidden void libxl__prepare_ao_device(libxl__ao *ao, libxl__ao_device *aodev);
 
-/* Prepare a bunch of devices for addition/removal. Every ao_device in
- * ao_devices is set to 'active', and the ao_device 'base' field is set to
- * the one pointed by aodevs.
- */
-_hidden void libxl__prepare_ao_devices(libxl__ao *ao,
-                                       libxl__ao_devices *aodevs);
-
-/* Generic callback to use when adding/removing several devices, this will
- * check if the given aodev is the last one, and call the callback in the
- * parent libxl__ao_devices struct, passing the appropriate error if found.
- */
-_hidden void libxl__ao_devices_callback(libxl__egc *egc,
-                                        libxl__ao_device *aodev);
-
 struct libxl__ao_device {
     /* filled in by user */
     libxl__ao *ao;
@@ -1831,32 +1817,60 @@ struct libxl__ao_device {
     libxl__device *dev;
     int force;
     libxl__device_callback *callback;
-    /* private for implementation */
-    int active;
+    /* return value, zeroed by user on entry, is valid on callback */
     int rc;
+    /* private for multidev */
+    int active;
+    libxl__ao_devices *aodevs; /* reference to the containing multidev */
+    /* private for add/remove implementation */
     libxl__ev_devstate backend_ds;
     /* Bodge for Qemu devices, also used for timeout of hotplug execution */
     libxl__ev_time timeout;
-    /* Used internally to have a reference to the upper libxl__ao_devices
-     * struct when present */
-    libxl__ao_devices *aodevs;
     /* device hotplug execution */
     const char *what;
     int num_exec;
     libxl__ev_child child;
 };
 
-/* Helper struct to simply the plug/unplug of multiple devices at the same
- * time.
- *
- * This structure holds several devices, and the callback is only called
- * when all the devices inside of the array have finished.
- */
+/*
+ * Multiple devices "multidev" handling.
+ *
+ * Firstly, you should
+ *    libxl__multidev_begin
+ *    multidev->callback = ...
+ * Then zero or more times
+ *    libxl__multidev_prepare
+ *    libxl__initiate_device_{remove/addition}.
+ * Finally, once
+ *    libxl__multidev_prepared
+ * which will result (perhaps reentrantly) in one call to callback().
+ */
+
+/* Starts preparing to add/remove a bunch of devices. */
+_hidden void libxl__multidev_begin(libxl__ao *ao, libxl__ao_devices*);
+
+/* Prepares to add/remove one of many devices.  Returns a libxl__ao_device
+ * which has had libxl__prepare_ao_device called, and which has also
+ * had ->callback set.  The user should not mess with aodev->callback. */
+_hidden libxl__ao_device *libxl__multidev_prepare(libxl__ao_devices*);
+
+/* Notifies the multidev machinery that we have now finished preparing
+ * and initiating devices.  multidev->callback may then be called as
+ * soon as there are no prepared but not completed operations
+ * outstanding, perhaps reentrantly.  If rc!=0 (error should have been
+ * logged) multidev->callback will get a non-zero rc.
+ * callback may be set by the user at any point before prepared. */
+_hidden void libxl__multidev_prepared(libxl__egc*, libxl__ao_devices*, int rc);
+
 typedef void libxl__devices_callback(libxl__egc*, libxl__ao_devices*, int rc);
 struct libxl__ao_devices {
-    libxl__ao_device *array;
-    int size;
+    /* set by user: */
     libxl__devices_callback *callback;
+    /* for private use by libxl__...ao_devices... machinery: */
+    libxl__ao *ao;
+    libxl__ao_device **array;
+    int used, allocd;
+    libxl__ao_device *preparation;
 };
 
 /*
@@ -2366,10 +2380,11 @@ _hidden void libxl__devices_destroy(libxl__egc *egc,
                                     libxl__devices_remove_state *drs);
 
 /* Helper function to add a bunch of disks. This should be used when
- * the caller is inside an async op. "devices" will NOT be prepared by this
- * function, so the caller must make sure to call _prepare before calling this
- * function. The start parameter contains the position inside the aodevs array
- * that should be used to store the state of this devices.
+ * the caller is inside an async op. "devices" will NOT be prepared by
+ * this function, so the caller must make sure to call
+ * libxl__multidev_begin before calling this function. The start
+ * parameter contains the position inside the aodevs array that should
+ * be used to store the state of these devices.
  *
  * The "callback" will be called for each device, and the user is responsible
  * for calling libxl__ao_device_check_last on the callback.
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 16:24:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 16:24:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swbil-0007kE-5E; Wed, 01 Aug 2012 16:24:27 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Swbij-0007ik-Mp
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 16:24:25 +0000
Received: from [85.158.138.51:55214] by server-9.bemta-3.messagelabs.com id
	7C/F6-27628-83859105; Wed, 01 Aug 2012 16:24:24 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-16.tower-174.messagelabs.com!1343838263!29827278!2
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18382 invoked from network); 1 Aug 2012 16:24:24 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 16:24:24 -0000
X-IronPort-AV: E=Sophos;i="4.77,695,1336348800"; d="scan'208";a="13808042"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 16:24:22 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 1 Aug 2012 17:24:22 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Swbif-0007Iy-TF; Wed, 01 Aug 2012 16:24:21 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Swbif-0004d8-QL;
	Wed, 01 Aug 2012 17:24:21 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xen.org>
Date: Wed, 1 Aug 2012 17:24:16 +0100
Message-ID: <1343838260-17725-8-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1343838260-17725-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1343838260-17725-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [Xen-devel] [PATCH 07/11] libxl: do not blunder on if bootloader
	fails (again)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Do not lose the rc value passed to bootloader_callback.  Do not lose
the rc value from the bl when the local disk detach succeeds.

While we're here, rationalise the use of bl->rc to make things clearer.
Set it to zero at the start and always update it conditionally; copy
it into bootloader_callback's argument each time.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 tools/libxl/libxl_bootloader.c |   11 +++++++++--
 1 files changed, 9 insertions(+), 2 deletions(-)

diff --git a/tools/libxl/libxl_bootloader.c b/tools/libxl/libxl_bootloader.c
index bfc1b56..e103ee9 100644
--- a/tools/libxl/libxl_bootloader.c
+++ b/tools/libxl/libxl_bootloader.c
@@ -206,6 +206,7 @@ static int parse_bootloader_result(libxl__egc *egc,
 void libxl__bootloader_init(libxl__bootloader_state *bl)
 {
     assert(bl->ao);
+    bl->rc = 0;
     bl->dls.diskpath = NULL;
     bl->openpty.ao = bl->ao;
     bl->dls.ao = bl->ao;
@@ -255,6 +256,9 @@ static void bootloader_local_detached_cb(libxl__egc *egc,
 static void bootloader_callback(libxl__egc *egc, libxl__bootloader_state *bl,
                                 int rc)
 {
+    if (!bl->rc)
+        bl->rc = rc;
+
     bootloader_cleanup(egc, bl);
 
     bl->dls.callback = bootloader_local_detached_cb;
@@ -270,9 +274,11 @@ static void bootloader_local_detached_cb(libxl__egc *egc,
 
     if (rc) {
         LOG(ERROR, "unable to detach locally attached disk");
+        if (!bl->rc)
+            bl->rc = rc;
     }
 
-    bl->callback(egc, bl, rc);
+    bl->callback(egc, bl, bl->rc);
 }
 
 /* might be called at any time, provided it's init'd */
@@ -289,7 +295,8 @@ static void bootloader_stop(libxl__egc *egc,
         if (r) LOGE(WARN, "%sfailed to kill bootloader [%lu]",
                     rc ? "after failure, " : "", (unsigned long)bl->child.pid);
     }
-    bl->rc = rc;
+    if (!bl->rc)
+        bl->rc = rc;
 }
 
 /*----- main flow of control -----*/
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 16:24:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 16:24:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swbim-0007lL-EE; Wed, 01 Aug 2012 16:24:28 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Swbik-0007ix-8Q
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 16:24:26 +0000
Received: from [85.158.138.51:63056] by server-3.bemta-3.messagelabs.com id
	12/BB-08301-93859105; Wed, 01 Aug 2012 16:24:25 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-15.tower-174.messagelabs.com!1343838264!28174756!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21456 invoked from network); 1 Aug 2012 16:24:25 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 16:24:25 -0000
X-IronPort-AV: E=Sophos;i="4.77,695,1336348800"; d="scan'208";a="13808045"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 16:24:23 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 1 Aug 2012 17:24:22 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Swbig-0007J6-1g; Wed, 01 Aug 2012 16:24:22 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Swbif-0004dG-Ur;
	Wed, 01 Aug 2012 17:24:22 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xen.org>
Date: Wed, 1 Aug 2012 17:24:18 +0100
Message-ID: <1343838260-17725-10-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1343838260-17725-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1343838260-17725-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [Xen-devel] [PATCH 09/11] libxl: remus: mark TODOs more clearly
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Change the TODOs in the remus code to "REMUS TODO", which will make
them easier to grep for later.  AIUI all of these are essential for
use of remus in production.

Also add a new TODO and a new assert, to check rc on entry to
remus_checkpoint_dm_saved.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 tools/libxl/libxl_dom.c |    9 +++++----
 1 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index d749983..06d5e4f 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -1110,7 +1110,7 @@ int libxl__toolstack_save(uint32_t domid, uint8_t **buf,
 
 static int libxl__remus_domain_suspend_callback(void *data)
 {
-    /* TODO: Issue disk and network checkpoint reqs. */
+    /* REMUS TODO: Issue disk and network checkpoint reqs. */
     return libxl__domain_suspend_common_callback(data);
 }
 
@@ -1124,7 +1124,7 @@ static int libxl__remus_domain_resume_callback(void *data)
     if (libxl_domain_resume(CTX, dss->domid, /* Fast Suspend */1))
         return 0;
 
-    /* TODO: Deal with disk. Start a new network output buffer */
+    /* REMUS TODO: Deal with disk. Start a new network output buffer */
     return 1;
 }
 
@@ -1151,8 +1151,9 @@ static void libxl__remus_domain_checkpoint_callback(void *data)
 static void remus_checkpoint_dm_saved(libxl__egc *egc,
                                       libxl__domain_suspend_state *dss, int rc)
 {
-    /* TODO: Wait for disk and memory ack, release network buffer */
-    /* TODO: make this asynchronous */
+    /* REMUS TODO: Wait for disk and memory ack, release network buffer */
+    /* REMUS TODO: make this asynchronous */
+    assert(!rc); /* REMUS TODO handle this error properly */
     usleep(dss->interval * 1000);
     libxl__xc_domain_saverestore_async_callback_done(egc, &dss->shs, 1);
 }
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 16:24:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 16:24:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swbio-0007nX-Cb; Wed, 01 Aug 2012 16:24:30 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Swbil-0007is-Nh
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 16:24:27 +0000
Received: from [85.158.139.83:3201] by server-11.bemta-5.messagelabs.com id
	ED/C5-20400-93859105; Wed, 01 Aug 2012 16:24:25 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-8.tower-182.messagelabs.com!1343838262!18501401!7
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13695 invoked from network); 1 Aug 2012 16:24:25 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 16:24:25 -0000
X-IronPort-AV: E=Sophos;i="4.77,695,1336348800"; d="scan'208";a="13808043"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 16:24:23 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 1 Aug 2012 17:24:22 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Swbif-0007J3-Vd; Wed, 01 Aug 2012 16:24:22 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Swbif-0004dC-SV;
	Wed, 01 Aug 2012 17:24:21 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xen.org>
Date: Wed, 1 Aug 2012 17:24:17 +0100
Message-ID: <1343838260-17725-9-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1343838260-17725-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1343838260-17725-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [Xen-devel] [PATCH 08/11] Debugging machinery for synthesising
	POLLHUP
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

---
 tools/libxl/libxl_bootloader.c |   16 ++++++++++++++++
 1 files changed, 16 insertions(+), 0 deletions(-)

diff --git a/tools/libxl/libxl_bootloader.c b/tools/libxl/libxl_bootloader.c
index e103ee9..2e65383 100644
--- a/tools/libxl/libxl_bootloader.c
+++ b/tools/libxl/libxl_bootloader.c
@@ -447,6 +447,19 @@ static void bootloader_disk_attached_cb(libxl__egc *egc,
     bootloader_callback(egc, bl, rc);
 }
 
+static int tst_blfd;
+static void tst_sigh(int dummy) {
+    int r, e = errno;
+    int p[2];
+    write(2,"tst_sigh\n",9);
+    r = pipe(p);  assert(!r);
+    close(p[1]);
+if (getenv("TST_EXTRADUP")) dup(tst_blfd);
+    dup2(p[0], tst_blfd);
+    close(p[0]);
+    errno = e;
+}
+
 static void bootloader_gotptys(libxl__egc *egc, libxl__openpty_state *op)
 {
     libxl__bootloader_state *bl = CONTAINER_OF(op, *bl, openpty);
@@ -503,6 +516,9 @@ static void bootloader_gotptys(libxl__egc *egc, libxl__openpty_state *op)
     int bootloader_master = libxl__carefd_fd(bl->ptys[0].master);
     int xenconsole_master = libxl__carefd_fd(bl->ptys[1].master);
 
+tst_blfd = bootloader_master;
+signal(SIGUSR2,tst_sigh);
+
     libxl_fd_set_nonblock(CTX, bootloader_master, 1);
     libxl_fd_set_nonblock(CTX, xenconsole_master, 1);
 
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 16:24:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 16:24:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swbio-0007oA-OW; Wed, 01 Aug 2012 16:24:30 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Swbil-0007kJ-QW
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 16:24:28 +0000
Received: from [85.158.139.83:3128] by server-7.bemta-5.messagelabs.com id
	DC/06-28276-83859105; Wed, 01 Aug 2012 16:24:24 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-8.tower-182.messagelabs.com!1343838262!18501401!5
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13665 invoked from network); 1 Aug 2012 16:24:24 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 16:24:24 -0000
X-IronPort-AV: E=Sophos;i="4.77,695,1336348800"; d="scan'208";a="13808040"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 16:24:22 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 1 Aug 2012 17:24:22 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Swbif-0007Iu-Ov; Wed, 01 Aug 2012 16:24:21 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Swbif-0004d0-Li;
	Wed, 01 Aug 2012 17:24:21 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xen.org>
Date: Wed, 1 Aug 2012 17:24:14 +0100
Message-ID: <1343838260-17725-6-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1343838260-17725-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1343838260-17725-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [Xen-devel] [PATCH 05/11] libxl: abolish useless `start' parameter
	to libxl__add_*
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

0 is always passed for this parameter, and the code doesn't actually
use it now.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 tools/libxl/libxl_create.c   |    4 ++--
 tools/libxl/libxl_device.c   |    7 +++----
 tools/libxl/libxl_dm.c       |    4 ++--
 tools/libxl/libxl_internal.h |    8 +++-----
 4 files changed, 10 insertions(+), 13 deletions(-)

diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index 3265d69..5275373 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -911,7 +911,7 @@ static void domcreate_rebuild_done(libxl__egc *egc,
 
     libxl__multidev_begin(ao, &dcs->aodevs);
     dcs->aodevs.callback = domcreate_launch_dm;
-    libxl__add_disks(egc, ao, domid, 0, d_config, &dcs->aodevs);
+    libxl__add_disks(egc, ao, domid, d_config, &dcs->aodevs);
     libxl__multidev_prepared(egc, &dcs->aodevs, 0);
 
     return;
@@ -1041,7 +1041,7 @@ static void domcreate_devmodel_started(libxl__egc *egc,
         /* Attach nics */
         libxl__multidev_begin(ao, &dcs->aodevs);
         dcs->aodevs.callback = domcreate_attach_pci;
-        libxl__add_nics(egc, ao, domid, 0, d_config, &dcs->aodevs);
+        libxl__add_nics(egc, ao, domid, d_config, &dcs->aodevs);
         libxl__multidev_prepared(egc, &dcs->aodevs, 0);
         return;
     }
diff --git a/tools/libxl/libxl_device.c b/tools/libxl/libxl_device.c
index 319f0e8..41d527b 100644
--- a/tools/libxl/libxl_device.c
+++ b/tools/libxl/libxl_device.c
@@ -485,15 +485,14 @@ void libxl__multidev_prepared(libxl__egc *egc, libxl__ao_devices *aodevs,
 
 #define DEFINE_DEVICES_ADD(type)                                        \
     void libxl__add_##type##s(libxl__egc *egc, libxl__ao *ao, uint32_t domid, \
-                              int start, libxl_domain_config *d_config, \
+                              libxl_domain_config *d_config,            \
                               libxl__ao_devices *aodevs)                \
     {                                                                   \
         AO_GC;                                                          \
         int i;                                                          \
-        int end = start + d_config->num_##type##s;                      \
-        for (i = start; i < end; i++) {                                 \
+        for (i = 0; i < d_config->num_##type##s; i++) {                 \
             libxl__ao_device *aodev = libxl__multidev_prepare(aodevs);  \
-            libxl__device_##type##_add(egc, domid, &d_config->type##s[i-start], \
+            libxl__device_##type##_add(egc, domid, &d_config->type##s[i], \
                                        aodev);                          \
         }                                                               \
     }
diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
index 177642b..66aa45e 100644
--- a/tools/libxl/libxl_dm.c
+++ b/tools/libxl/libxl_dm.c
@@ -858,7 +858,7 @@ retry_transaction:
 
     libxl__multidev_begin(ao, &sdss->aodevs);
     sdss->aodevs.callback = spawn_stub_launch_dm;
-    libxl__add_disks(egc, ao, dm_domid, 0, dm_config, &sdss->aodevs);
+    libxl__add_disks(egc, ao, dm_domid, dm_config, &sdss->aodevs);
     libxl__multidev_prepared(egc, &sdss->aodevs, 0);
 
     free(args);
@@ -984,7 +984,7 @@ static void spawn_stubdom_pvqemu_cb(libxl__egc *egc,
     if (d_config->num_nics > 0) {
         libxl__multidev_begin(ao, &sdss->aodevs);
         sdss->aodevs.callback = stubdom_pvqemu_cb;
-        libxl__add_nics(egc, ao, dm_domid, 0, d_config, &sdss->aodevs);
+        libxl__add_nics(egc, ao, dm_domid, d_config, &sdss->aodevs);
         libxl__multidev_prepared(egc, &sdss->aodevs, 0);
         return;
     }
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index a5978b0..450dbe5 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -2382,19 +2382,17 @@ _hidden void libxl__devices_destroy(libxl__egc *egc,
 /* Helper function to add a bunch of disks. This should be used when
  * the caller is inside an async op. "devices" will NOT be prepared by
  * this function, so the caller must make sure to call
- * libxl__multidev_begin before calling this function. The start
- * parameter contains the position inside the aodevs array that should
- * be used to store the state of this devices.
+ * libxl__multidev_begin before calling this function.
  *
  * The "callback" will be called for each device, and the user is responsible
  * for calling libxl__ao_device_check_last on the callback.
  */
 _hidden void libxl__add_disks(libxl__egc *egc, libxl__ao *ao, uint32_t domid,
-                              int start, libxl_domain_config *d_config,
+                              libxl_domain_config *d_config,
                               libxl__ao_devices *aodevs);
 
 _hidden void libxl__add_nics(libxl__egc *egc, libxl__ao *ao, uint32_t domid,
-                             int start, libxl_domain_config *d_config,
+                             libxl_domain_config *d_config,
                              libxl__ao_devices *aodevs);
 
 /*----- device model creation -----*/
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 16:24:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 16:24:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swbiy-0007wo-60; Wed, 01 Aug 2012 16:24:40 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin.df@gmail.com>) id 1Swbix-0007vX-6e
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 16:24:39 +0000
Received: from [85.158.143.35:4912] by server-3.bemta-4.messagelabs.com id
	EA/67-01511-64859105; Wed, 01 Aug 2012 16:24:38 +0000
X-Env-Sender: raistlin.df@gmail.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1343838277!15663983!1
X-Originating-IP: [209.85.212.179]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14293 invoked from network); 1 Aug 2012 16:24:37 -0000
Received: from mail-wi0-f179.google.com (HELO mail-wi0-f179.google.com)
	(209.85.212.179)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 16:24:37 -0000
Received: by wibhq4 with SMTP id hq4so3441633wib.14
	for <xen-devel@lists.xen.org>; Wed, 01 Aug 2012 09:24:37 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:subject:from:to:cc:date:in-reply-to:references
	:content-type:x-mailer:mime-version;
	bh=pGNqrHwKYL3hwx0KWBXV052UZO48YiB5S5BnhR926AI=;
	b=NI6nivh6IZcQnChFhNJcJ9FwjVk2Fdm6kd1H0TcVW8Zua08DM0hpw6rK+hPbPHzAE4
	RECZiaVO37qtVGfHxozB8izxWhVcLXIEPM2YK+Jup0fZ8mEubG3nCkQbZcvhYK72jE7t
	L9Lhygzv6uX2Y7TBgtAfWKtTAVdxx2/+LAVdr+v/igGHF+EQATem9i/Qfl/DdCudIVcc
	5U2cob9cjz2a6MPZYjh/XYwIpMegJfmlYoB6IAG6dedea1s5M1eaSaihyw13WpIQ0ChV
	iUQWR7Tq/KTvSKdUI0j77NSqJJiHvQW/78rRqo61vjoeKHdIupeAjr9lr6aZuMKf6zT5
	dCoA==
Received: by 10.180.78.37 with SMTP id y5mr13168945wiw.16.1343838277644;
	Wed, 01 Aug 2012 09:24:37 -0700 (PDT)
Received: from [192.168.0.40] (ip-176-206.sn2.eutelia.it. [83.211.176.206])
	by mx.google.com with ESMTPS id ex20sm9773771wid.7.2012.08.01.09.24.35
	(version=TLSv1/SSLv3 cipher=OTHER);
	Wed, 01 Aug 2012 09:24:36 -0700 (PDT)
Message-ID: <1343838268.4958.35.camel@Solace>
From: Dario Faggioli <raistlin@linux.it>
To: xen-devel <xen-devel@lists.xen.org>
Date: Wed, 01 Aug 2012 18:24:28 +0200
In-Reply-To: <1343837796.4958.32.camel@Solace>
References: <1343837796.4958.32.camel@Solace>
X-Mailer: Evolution 3.4.3-1
Mime-Version: 1.0
Cc: Andre Przywara <andre.przywara@amd.com>,
	Anil Madhavapeddy <anil@recoil.org>, George Dunlap <dunlapg@gmail.com>,
	Jan Beulich <JBeulich@suse.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>, "Zhang,
	Yang Z" <yang.z.zhang@intel.com>
Subject: Re: [Xen-devel] NUMA TODO-list for xen-devel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8891354066491479664=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--===============8891354066491479664==
Content-Type: multipart/signed; micalg="pgp-sha1"; protocol="application/pgp-signature";
	boundary="=-BFKi8jmqiErRLWj2Yehm"


--=-BFKi8jmqiErRLWj2Yehm
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Wed, 2012-08-01 at 18:16 +0200, Dario Faggioli wrote:
> Hi everyone,
>
Quite a bad subject... I put it there just as a placeholder and then
forgot to change it into something sensible. :-(

Sorry for that. I hope the content can still get some attention. :-P

Thanks again and Regards,
Dario

--=20
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-BFKi8jmqiErRLWj2Yehm
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iEYEABECAAYFAlAZWDwACgkQk4XaBE3IOsQevwCgmtws9H6is+3H3qB3ClLN3V0v
t4gAmwfnm40tj5DaFOMQ7Rxp1NYRpyno
=CVSX
-----END PGP SIGNATURE-----

--=-BFKi8jmqiErRLWj2Yehm--



--===============8891354066491479664==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8891354066491479664==--



From xen-devel-bounces@lists.xen.org Wed Aug 01 16:31:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 16:31:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swbp8-0001Kv-1k; Wed, 01 Aug 2012 16:31:02 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1Swbp6-0001Kn-EZ
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 16:31:00 +0000
Received: from [85.158.139.83:4513] by server-1.bemta-5.messagelabs.com id
	35/8F-29759-3C959105; Wed, 01 Aug 2012 16:30:59 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-3.tower-182.messagelabs.com!1343838656!27850795!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNjg2MjA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10026 invoked from network); 1 Aug 2012 16:30:57 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 16:30:57 -0000
X-IronPort-AV: E=Sophos;i="4.77,695,1336363200"; 
	d="scan'208,217";a="203831834"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 12:30:55 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 1 Aug 2012 12:30:55 -0400
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1Swbp1-00050z-0k;
	Wed, 01 Aug 2012 17:30:55 +0100
Message-ID: <501959BE.60801@citrix.com>
Date: Wed, 1 Aug 2012 17:30:54 +0100
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120714 Thunderbird/14.0
MIME-Version: 1.0
To: Dario Faggioli <raistlin@linux.it>
References: <1343837796.4958.32.camel@Solace>
In-Reply-To: <1343837796.4958.32.camel@Solace>
X-Enigmail-Version: 1.4.3
Cc: Andre Przywara <andre.przywara@amd.com>,
	Anil Madhavapeddy <anil@recoil.org>, George Dunlap <dunlapg@gmail.com>,
	xen-devel <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>, "Zhang, Yang Z" <yang.z.zhang@intel.com>
Subject: Re: [Xen-devel] NUMA TODO-list for xen-devel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7985365601985832137=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7985365601985832137==
Content-Type: multipart/alternative;
	boundary="------------090302080202010206030406"

--------------090302080202010206030406
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit


On 01/08/12 17:16, Dario Faggioli wrote:
> Hi everyone,
>
> With automatic placement finally landing into xen-unstable, I started
> thinking about what I could work on next, still in the field of
> improving Xen's NUMA support. Well, it turned out that running out of
> things to do is not an option! :-O
>
> In fact, I can think of quite a few open issues in that area, that I'm
> just braindumping here. If anyone has thoughts or ideas or feedback or
> whatever, I'd be happy to serve as a collector of them. I've already
> created a Wiki page to help with the tracking. You can see it here
> (for now it basically replicates this e-mail):
>
> http://wiki.xen.org/wiki/Xen_NUMA_Roadmap
>
> I'm putting a [D] (standing for Dario) near the points I've started
> working on or looking at, and again, I'd be happy to try tracking this
> too, i.e., keeping the list of "who-is-doing-what" updated, in order to
> ease collaboration.
>
> So, let's cut the talking:
>
> - Automatic placement at guest creation time. Basics are there and
> will be shipping with 4.2. However, a lot of other things are
> missing and/or can be improved, for instance:
> [D] * automated verification and testing of the placement;
> * benchmarks and improvements of the placement heuristic;
> [D] * choosing/building up some measure of node load (more accurate
> than just counting vcpus) onto which to rely during placement;
> * consider IONUMA during placement;
> * automatic placement of Dom0, if possible (my current series is
> only affecting DomU)
> * having internal xen data structures honour the placement (e.g.,
> I've been told that right now vcpu stacks are always allocated
> on node 0... Andrew?).
>
> [D] - NUMA aware scheduling in Xen. Don't pin vcpus on nodes' pcpus,
> just have them _prefer_ running on the nodes where their memory
> is.
>
> [D] - Dynamic memory migration between different nodes of the host. As
> the counter-part of the NUMA-aware scheduler.
>
> - Virtual NUMA topology exposure to guests (a.k.a guest-numa). If a
> guest ends up on more than one node, make sure it knows it's
> running on a NUMA platform (smaller than the actual host, but
> still NUMA). This interacts with some of the above points:
> * consider this during automatic placement for
> resuming/migrating domains (if they have a virtual topology,
> better not to change it);
> * consider this during memory migration (it can change the
> actual topology, should we update it on-line or disable memory
> migration?)
>
> - NUMA and ballooning and memory sharing. In some more details:
> * page sharing on NUMA boxes: it's probably sane to make it
> possible to disable sharing pages across nodes;
> * ballooning and its interaction with placement (races, amount of
> memory needed and reported being different at different time,
> etc.).
>
> - Inter-VM dependencies and communication issues. If a workload is
> made up of more than just a VM and they all share the same (NUMA)
> host, it might be best to have them sharing the nodes as much as
> possible, or perhaps do right the opposite, depending on the
> specific characteristics of the workload itself, and this might be
> considered during placement, memory migration and perhaps
> scheduling.
>
> - Benchmarking and performance evaluation in general. Meaning both
> agreeing on a (set of) relevant workload(s) and on how to extract
> meaningful performance data from there (and maybe how to do that
> automatically?).

- Xen NUMA internals.  Placing items such as the per-cpu stacks and data
area on the local NUMA node, rather than unconditionally on node 0 at
the moment.  As part of this, there will be changes to
alloc_{dom,xen}heap_page() to allow specification of which node(s) to
allocate memory from.

~Andrew

>
>
> So, what do you think?
>
> Thanks and Regards,
> Dario
>

-- 
Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
T: +44 (0)1223 225 900, http://www.citrix.com


--------------090302080202010206030406--


--===============7985365601985832137==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7985365601985832137==--


From xen-devel-bounces@lists.xen.org Wed Aug 01 16:31:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 16:31:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swbp8-0001Kv-1k; Wed, 01 Aug 2012 16:31:02 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1Swbp6-0001Kn-EZ
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 16:31:00 +0000
Received: from [85.158.139.83:4513] by server-1.bemta-5.messagelabs.com id
	35/8F-29759-3C959105; Wed, 01 Aug 2012 16:30:59 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-3.tower-182.messagelabs.com!1343838656!27850795!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNjg2MjA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10026 invoked from network); 1 Aug 2012 16:30:57 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 16:30:57 -0000
X-IronPort-AV: E=Sophos;i="4.77,695,1336363200"; 
	d="scan'208,217";a="203831834"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 12:30:55 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 1 Aug 2012 12:30:55 -0400
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1Swbp1-00050z-0k;
	Wed, 01 Aug 2012 17:30:55 +0100
Message-ID: <501959BE.60801@citrix.com>
Date: Wed, 1 Aug 2012 17:30:54 +0100
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120714 Thunderbird/14.0
MIME-Version: 1.0
To: Dario Faggioli <raistlin@linux.it>
References: <1343837796.4958.32.camel@Solace>
In-Reply-To: <1343837796.4958.32.camel@Solace>
X-Enigmail-Version: 1.4.3
Cc: Andre Przywara <andre.przywara@amd.com>,
	Anil Madhavapeddy <anil@recoil.org>, George Dunlap <dunlapg@gmail.com>,
	xen-devel <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>, "Zhang, Yang Z" <yang.z.zhang@intel.com>
Subject: Re: [Xen-devel] NUMA TODO-list for xen-devel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7985365601985832137=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7985365601985832137==
Content-Type: multipart/alternative;
	boundary="------------090302080202010206030406"

--------------090302080202010206030406
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit


On 01/08/12 17:16, Dario Faggioli wrote:
> Hi everyone,
>
> With automatic placement finally landing into xen-unstable, I stated
> thinking about what I could work on next, still in the field of
> improving Xen's NUMA support. Well, it turned out that running out of
> things to do is not an option! :-O
>
> In fact, I can think of quite a bit of open issues in that area, that I'm
> just braindumping here. If anyone has thoughts or idea or feedback or
> whatever, I'd be happy to serve as a collector of them. I've already
> created a Wiki page to help with the tracking. You can see it here
> (for now it basically replicates this e-mail):
>
> http://wiki.xen.org/wiki/Xen_NUMA_Roadmap
>
> I'm putting a [D] (standing for Dario) near the points I've started
> working on or looking at, and again, I'd be happy to try tracking this
> too, i.e., keeping the list of "who-is-doing-what" updated, in order to
> ease collaboration.
>
> So, let's cut the talking:
>
> - Automatic placement at guest creation time. Basics are there and
> will be shipping with 4.2. However, a lot of other things are
> missing and/or can be improved, for instance:
> [D] * automated verification and testing of the placement;
> * benchmarks and improvements of the placement heuristic;
> [D] * choosing/building up some measure of node load (more accurate
> than just counting vcpus) onto which to rely during placement;
> * consider IONUMA during placement;
> * automatic placement of Dom0, if possible (my current series is
> only affecting DomU)
> * having internal xen data structure honour the placement (e.g.,
> I've been told that right now vcpu stacks are always allocated
> on node 0... Andrew?).
>
> [D] - NUMA aware scheduling in Xen. Don't pin vcpus on nodes' pcpus,
> just have them _prefer_ running on the nodes where their memory
> is.
>
> [D] - Dynamic memory migration between different nodes of the host. As
> the counter-part of the NUMA-aware scheduler.
>
> - Virtual NUMA topology exposure to guests (a.k.a guest-numa). If a
> guest ends up on more than one nodes, make sure it knows it's
> running on a NUMA platform (smaller than the actual host, but
> still NUMA). This interacts with some of the above points:
> * consider this during automatic placement for
> resuming/migrating domains (if they have a virtual topology,
> better not to change it);
> * consider this during memory migration (it can change the
> actual topology, should we update it on-line or disable memory
> migration?)
>
> - NUMA and ballooning and memory sharing. In some more details:
> * page sharing on NUMA boxes: it's probably sane to make it
> possible disabling sharing pages across nodes;
> * ballooning and its interaction with placement (races, amount of
> memory needed and reported being different at different time,
> etc.).
>
> - Inter-VM dependencies and communication issues. If a workload is
> made up of more than just a VM and they all share the same (NUMA)
> host, it might be best to have them sharing the nodes as much as
> possible, or perhaps do right the opposite, depending on the
> specific characteristics of he workload itself, and this might be
> considered during placement, memory migration and perhaps
> scheduling.
>
> - Benchmarking and performance evaluation in general. Meaning both
> agreeing on a (set of) relevant workload(s) and on how to extract
> meaningful performance data from there (and maybe how to do that
> automatically?).

- Xen NUMA internals.  Placing items such as the per-cpu stacks and data
area on the local NUMA node, rather than unconditionally on node 0 as
happens at the moment.  As part of this, there will be changes to
alloc_{dom,xen}heap_page() to allow specifying which node(s) to
allocate memory from.

~Andrew

>
>
> So, what do you think?
>
> Thanks and Regards,
> Dario
>

-- 
Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
T: +44 (0)1223 225 900, http://www.citrix.com


--------------090302080202010206030406--


--===============7985365601985832137==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7985365601985832137==--


From xen-devel-bounces@lists.xen.org Wed Aug 01 16:32:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 16:32:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwbqF-0001Rf-Lm; Wed, 01 Aug 2012 16:32:11 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <anil@recoil.org>) id 1SwbqE-0001RT-5p
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 16:32:10 +0000
Received: from [85.158.138.51:40443] by server-5.bemta-3.messagelabs.com id
	A6/E1-28237-90A59105; Wed, 01 Aug 2012 16:32:09 +0000
X-Env-Sender: anil@recoil.org
X-Msg-Ref: server-16.tower-174.messagelabs.com!1343838728!29828406!1
X-Originating-IP: [89.16.177.154]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14114 invoked from network); 1 Aug 2012 16:32:08 -0000
Received: from recoil.dh.bytemark.co.uk (HELO dark.recoil.org) (89.16.177.154)
	by server-16.tower-174.messagelabs.com with SMTP;
	1 Aug 2012 16:32:08 -0000
Received: (qmail 17066 invoked by uid 634); 1 Aug 2012 16:32:07 -0000
X-Spam-Level: *
X-Spam-Status: No, hits=-1.0 required=5.0
	tests=ALL_TRUSTED
X-Spam-Check-By: dark.recoil.org
Received: from cpc7-cmbg14-2-0-cust238.5-4.cable.virginmedia.com (HELO
	[192.168.1.38]) (86.30.244.239)
	(smtp-auth username remote@recoil.org, mechanism cram-md5)
	by dark.recoil.org (qpsmtpd/0.84) with ESMTPA;
	Wed, 01 Aug 2012 17:32:06 +0100
Mime-Version: 1.0 (Mac OS X Mail 6.0 \(1485\))
From: Anil Madhavapeddy <anil@recoil.org>
In-Reply-To: <1343837796.4958.32.camel@Solace>
Date: Wed, 1 Aug 2012 17:32:04 +0100
Message-Id: <A178E46B-25C1-4251-BB86-292B4CE3082D@recoil.org>
References: <1343837796.4958.32.camel@Solace>
To: Dario Faggioli <raistlin@linux.it>
X-Mailer: Apple Mail (2.1485)
X-Virus-Checked: Checked by ClamAV on dark.recoil.org
Cc: Andre Przywara <andre.przywara@amd.com>,
	Steven Smith <steven.smith@cl.cam.ac.uk>,
	George Dunlap <dunlapg@gmail.com>,
	Malte Schwarzkopf <malte.schwarzkopf@cl.cam.ac.uk>,
	xen-devel <xen-devel@lists.xen.org>, Jan Beulich <JBeulich@suse.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>, "Zhang,
	Yang Z" <yang.z.zhang@intel.com>
Subject: Re: [Xen-devel] NUMA TODO-list for xen-devel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 1 Aug 2012, at 17:16, Dario Faggioli <raistlin@linux.it> wrote:

>    - Inter-VM dependencies and communication issues. If a workload is
>      made up of more than just a VM and they all share the same (NUMA)
>      host, it might be best to have them sharing the nodes as much as
>      possible, or perhaps do right the opposite, depending on the
>      specific characteristics of the workload itself, and this might be
>      considered during placement, memory migration and perhaps
>      scheduling.
> 
>    - Benchmarking and performance evaluation in general. Meaning both
>      agreeing on a (set of) relevant workload(s) and on how to extract
>      meaningful performance data from there (and maybe how to do that
>      automatically?).

I haven't tried out the latest Xen NUMA features yet, but we've been
keeping track of the IPC benchmarks as we get newer machines here:

http://www.cl.cam.ac.uk/research/srg/netos/ipc-bench/results.html

The newer chipsets (Sandy Bridge and AMD Valencia) both have quite
different inter-core/socket/MPM performance characteristics from their
respective previous generations; e.g.

http://www.cl.cam.ac.uk/research/srg/netos/ipc-bench/details/tmpfCBrYh.html
http://www.cl.cam.ac.uk/research/srg/netos/ipc-bench/details/tmppI61nX.html

Happy to share the raw data if you have cycles to figure out the best
way to auto-place multiple VMs so they are near each other from a memory
latency perspective.  We haven't run many macro-benchmarks though, so
in practice it might not matter; it would be nice to settle on a good
set of benchmarks to determine that for sure.

-anil

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 16:33:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 16:33:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swbqa-0001U9-1l; Wed, 01 Aug 2012 16:32:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <trashsee@gmail.com>) id 1SwbqY-0001TG-BT
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 16:32:30 +0000
Received: from [85.158.143.35:57365] by server-3.bemta-4.messagelabs.com id
	66/80-01511-E1A59105; Wed, 01 Aug 2012 16:32:30 +0000
X-Env-Sender: trashsee@gmail.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1343838747!5910045!1
X-Originating-IP: [209.85.160.173]
X-SpamReason: No, hits=1.7 required=7.0 tests=INFO_TLD,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32404 invoked from network); 1 Aug 2012 16:32:28 -0000
Received: from mail-gh0-f173.google.com (HELO mail-gh0-f173.google.com)
	(209.85.160.173)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 16:32:28 -0000
Received: by ghrr14 with SMTP id r14so8306449ghr.32
	for <xen-devel@lists.xen.org>; Wed, 01 Aug 2012 09:32:27 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=hj4nvKvA0H4ttFwfkfvz6C3PVg8Gu9GXFu4EquowPX4=;
	b=rG4wlcZChr7JGj7+h97q71UuHRiuaoK8mX8Veg8B+wgneGiJDCxWjBzMuMtwPgemUK
	ERqM2kqufKtvm3eIRdOYPkrjCgdKYikzbiHLgMTBRjMykdhEebBUUsIJ3QCmk9bka4Q5
	kgdwGLnv92vkL5EiizvgezjdTW2eL8GtQuy3yxK40t7Rlr/4hyFlC2L3TGlEfhr32EXe
	acDMSRDMrAndgwbh0RvEBMwtKradHa2HbIb8NGTeaK8lcDhAtfC6oUKXFB7Ke3dOg+yf
	ji3NjO45SoCnacche/RrqepCPingct5QPFpd6lEJIN2lQFFwsMGhrPcdHBf+kCu9Vgi2
	c58w==
MIME-Version: 1.0
Received: by 10.50.189.167 with SMTP id gj7mr4298968igc.32.1343838747159; Wed,
	01 Aug 2012 09:32:27 -0700 (PDT)
Received: by 10.64.22.10 with HTTP; Wed, 1 Aug 2012 09:32:26 -0700 (PDT)
In-Reply-To: <alpine.DEB.2.02.1207301934540.4645@kaball.uk.xensource.com>
References: <CAPny0soyuQkUmAU+kYrBvG+w_jxKUsY8YxCrxBA=7cwmdwV6Xw@mail.gmail.com>
	<alpine.DEB.2.02.1207301934540.4645@kaball.uk.xensource.com>
Date: Wed, 1 Aug 2012 20:32:26 +0400
Message-ID: <CAPny0soV4Z0R_PADtjn4JpCFMPkU-m+O4vBWA+DJRb9GVV36=g@mail.gmail.com>
From: Alexey Klimov <trashsee@gmail.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>, 
	Ian Campbell <Ian.Campbell@citrix.com>
Content-Type: multipart/mixed; boundary=14dae9340efb28ec1704c636d706
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [questions] Dom0/DomU on ARM under Xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--14dae9340efb28ec1704c636d706
Content-Type: text/plain; charset=ISO-8859-1

Hello Stefano and Ian,

2012/7/31 Stefano Stabellini <stefano.stabellini@eu.citrix.com>:
> On Mon, 30 Jul 2012, Alexey Klimov wrote:
>> I'm trying to run DomU and Dom0 on ARM under Xen and have some
> >> problems (maybe a question of configuration).
>
> It is great to see interest in our project!
>
>
>> I'm using:
>> - unstable Xen mercurial repository with your "grant table" patches
>> and few patches from Ian Campbell (xcbuild,
>> xen_remap_domain_mfn_range, XENMAPSPACE_gmfn_foreign,  ARM support to
>> xc_dom).
>
> You also need "libxc/arm: allocate xenstore and console pages".
>
> Unfortunately with the 4.2 tree frozen we still have a few missing pieces
> here and there in the Xen hypervisor and tools.
> I think that Ian intended to set up a Xen tree to be used for development
> with all the currently unapplied patches that are actually needed on top
> of xen-unstable.

I found the patch and applied it.

> Also the xcbuild patch posted by Ian is quite limited, I am attaching
> the xcbuild.c that I am currently using for my tests with PV disk and
> network support.

Thank you very much. I renamed Ian's early version of xcbuild to
xcbuild-old, added your file, and fixed the Makefile to build both
xcbuilds. I found your patch from June 22; it looks like I missed it
back then, my bad:
http://lists.xen.org/archives/html/xen-devel/2012-06/msg01338.html

>> - your (Stefano's) linux kernel git repository
>> git://xenbits.xen.org/people/sstabellini/linux-pvhvm.git with head
>> 3.5-rc7-arm-1. I hope all patches to Linux kernel from Stefano letters
>> are there.
>
> You might also need:
>
> "xen/events: fix unmask_evtchn for PV on HVM guests"
>
> this is the last version that I posted:
>
> http://marc.info/?l=linux-kernel&m=134263575132006&w=2

I also applied it on top of Stefano's kernel tree cloned on my machine.

>> - Fast Models with few models created as described in wiki page
>> http://wiki.xen.org/wiki/Xen_ARMv7_with_Virtualization_Extensions/FastModels
>> - device trees dts files (vexpress-v2p-ca15-tc1.dts and
>> vexpress-virt.dts) from Stefano letter on 26 July. v2p-ca15-tc1 is
>> attached to Xen using CONFIG_DTB_FILE and vexpress-virt.dtb is
>> attached to DomU zImage.
>
> That's correct.
>
>
>> Well, kernel hangs after message (Calibrating delay loop...) when
>> running on models RTSM_VE/Build_Cortex-A15x4 and
>> RTSM_VE/Build_Cortex-A15x2. I attached logs (Dom0-A15x2 and A15x4).
>> Logs also shows problems with device trees (HBI and arch timer).
>>
>> I can boot Dom0 on Cortex-A15x1 model (log file Dom0-A15x1 with
> >> warning/problems about DT and HBI) and when I'm trying to boot zImage
>> using xcbuild utility then it also hangs with message from Xen "Guest
>> data abort: Translation fault at level 3". Log file is also attached.
>>
>> Could you please take a look and help?
>
> I have been testing on the Cortex-A15x1 model exclusively so far, so I
> am not surprised if there are any errors on the other models.
> Also I know that there are still a few warnings on boot, but I haven't got
> around to fixing them yet.
>
>
> >> Maybe I'm missing an important config option in the Linux kernel or in Xen.
>>
> >> Is it okay that vexpress-virt describes the V2P-AEMv7A platform and not
>> V2P-CA15?
>
> That should be OK.
>
>
>> It looks that vexpress-v2p-ca15-tc1.dts includes
>> vexpress-v2m-rs1-rtsm.dtsi. Could you please also share this file if
>> it has specific options?
>
> I am attaching it. I think you might be missing an important change there.

Thanks, I checked it and used it to build the final dtb file, but it
looks like there were no changes compared with the file I was using.

> >> And what can be the reason for the errors about
> >> HBI/arch_timers when running Xen + Linux
> >> kernel + vexpress-v2p-ca15-tc1.dts on the Cortex-A15x2 model?
>
> I am not sure yet, but I'll take a look. I'll try to fix them in one of
> the following versions of my series.
>
>
>> I can provide/send other info if you want. Thanks in advance.
>
> Let me know if the missing patches and the new
> vexpress-v2m-rs1-rtsm.dtsi fix the issue!

Thank you very much for help.

Well, I still have problems after the two additional patches (for Xen
and the kernel). Logs are attached: add_to_physmap failed with -22 in
both xcbuild versions, and there are bad p2m lookups and a translation
fault at level 2 in Xen.

As I understand from your email it's better to use the Cortex-A15x1
model, so I will use that model for tests.

I also saw that Ian set up a git repository for Xen with the latest
ARM patches, so I'll try to use that repository.

Best regards,
Alexey Klimov.

--14dae9340efb28ec1704c636d706
Content-Type: application/octet-stream; 
	name="xcbuild-Dom0+U-A15x1_01082012.log"
Content-Disposition: attachment; 
	filename="xcbuild-Dom0+U-A15x1_01082012.log"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_h5clc7280

CihYRU4pIAooWEVOKSBERlNSIDAgREZBUiAwCihYRU4pIElGU1IgMCBJRkFSIDAKKFhFTikgCihY
RU4pIEdVRVNUIFNUQUNLIEdPRVMgSEVSRQooWEVOKSAKKFhFTikgKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKgooWEVOKSBQYW5pYyBvbiBDUFUgMDoKKFhFTikgVW5oYW5k
bGVkIGd1ZXN0IGRhdGEgYWJvcnQKKFhFTikgKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKgooWEVOKSAKKFhFTikgUmVib290IGluIGZpdmUgc2Vjb25kcy4uLgoKCgoKcm9v
dEB0dXo6L3Vzci9saWIveGVuL2JpbiMgLi94Y2J1aWxkIC9ib290L3pJbWFnZWR0YiAKSW1hZ2U6
IC9ib290L3pJbWFnZWR0YgpNZW1vcnk6IDI2NDE5MktCCnhjX2RvbWFpbl9jcmVhdGU6IDAgKDAp
CmJ1aWxkaW5nIGRvbTEKdXNpbmcgeGNfZG9tIHRvIGJ1aWxkIGltYWdlIC9ib290L3pJbWFnZWR0
Ygpkb21haW5idWlsZGVyOiBkZXRhaWw6IHhjX2RvbV9hbGxvY2F0ZTogY21kbGluZT0iIiwgZmVh
dHVyZXM9IihudWxsKSIKZG9tYWluYnVpbGRlcjogZGV0YWlsOiB4Y19kb21fa2VybmVsX2ZpbGU6
IGZpbGVuYW1lPSIvYm9vdC96SW1hZ2VkdGIiCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogeGNfZG9t
X21hbGxvY19maWxlbWFwICAgIDogMjAwNSBrQgpkb21haW5idWlsZGVyOiBkZXRhaWw6IHhjX2Rv
bV9ib290X3hlbl9pbml0OiB2ZXIgNC4yLCBjYXBzIHhlbi0zLjAtYXJtdjdsIApkb21haW5idWls
ZGVyOiBkZXRhaWw6IHhjX2RvbV9wYXJzZV9pbWFnZTogY2FsbGVkCmRvbWFpbmJ1aWxkZXI6IGRl
dGFpbDogeGNfZG9tX2ZpbmRfbG9hZGVyOiB0cnlpbmcgbXVsdGlib290LWJpbmFyeSBsb2FkZXIg
Li4uIApkb21haW5idWlsZGVyOiBkZXRhaWw6IGxvYWRlciBwcm9iZSBmYWlsZWQKZG9tYWluYnVp
bGRlcjogZGV0YWlsOiB4Y19kb21fZmluZF9sb2FkZXI6IHRyeWluZyBMaW51eCB6SW1hZ2UgKEFS
TSkgbG9hZGVyIC4uLiAKZG9tYWluYnVpbGRlcjogZGV0YWlsOiB4Y19kb21fcHJvYmVfemltYWdl
X2tlcm5lbDogZm91bmQgYW4gYXBwZW5kZWQgRFRCCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogbG9h
ZGVyIHByb2JlIE9LCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogeGNfZG9tX3BhcnNlX3ppbWFnZV9r
ZXJuZWw6IGNhbGxlZApkb21haW5idWlsZGVyOiBkZXRhaWw6IHhjX2RvbV9wYXJzZV96aW1hZ2Vf
a2VybmVsOiB4ZW4tMy4wLWFybXY3bDogUkFNIHN0YXJ0cyBhdCA4MDAwMApkb21haW5idWlsZGVy
OiBkZXRhaWw6IHhjX2RvbV9wYXJzZV96aW1hZ2Vfa2VybmVsOiB4ZW4tMy4wLWFybXY3bDogMHg4
MDAwODAwMCAtPiAweDgwMWZkN2YxCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogeGNfZG9tX21lbV9p
bml0OiBtZW0gMjU2IE1CLCBwYWdlcyAweDEwMDAwIHBhZ2VzLCA0ayBlYWNoCmRvbWFpbmJ1aWxk
ZXI6IGRldGFpbDogeGNfZG9tX21lbV9pbml0OiAweDEwMDAwIHBhZ2VzCmRvbWFpbmJ1aWxkZXI6
IGRldGFpbDogeGNfZG9tX2Jvb3RfbWVtX2luaXQ6IGNhbGxlZApkb21haW5idWlsZGVyOiBkZXRh
aWw6IHhjX2RvbV9tYWxsb2MgICAgICAgICAgICA6IDUxMiBrQgpkb21haW5idWlsZGVyOiBkZXRh
aWw6IHJlbWFwX2FyZWFfbWZuX3B0ZV9mbjogcHRlcCA4NzIxYzZlNCBhZGRyIDB4NzY5YjkwMDAg
PT4gMHg5MDAwMDMwZiAvIDB4OTAwMDAKeGNfZG9tX2J1aWxkX2ltYXJlbWFwX2FyZWFfbWZuX3B0
ZV9mbjogcHRlcCA4NzIxYzZlOCBhZGRyIDB4NzY5YmEwMDAgPT4gMHg5MDAwMTMwZiAvIDB4OTAw
MDEKZ2U6IGNhbGxlZApkb21hcmVtYXBfYXJlYV9tZm5fcHRlX2ZuOiBwdGVwIDg3MjFjNmVjIGFk
ZHIgMHg3NjliYjAwMCA9PiAweDkwMDAyMzBmIC8gMHg5MDAwMgppbmJ1aWxkZXI6IGRldGFpcmVt
YXBfYXJlYV9tZm5fcHRlX2ZuOiBwdGVwIDg3MjFjNmYwIGFkZHIgMHg3NjliYzAwMCA9PiAweDkw
MDAzMzBmIC8gMHg5MDAwMwpsOiB4Y19kb21fYWxsb2NfcmVtYXBfYXJlYV9tZm5fcHRlX2ZuOiBw
dGVwIDg3MjFjNmY0IGFkZHIgMHg3NjliZDAwMCA9PiAweDkwMDA0MzBmIC8gMHg5MDAwNApzZWdt
ZW50OiAgIGtlcm5lcmVtYXBfYXJlYV9tZm5fcHRlX2ZuOiBwdGVwIDg3MjFjNmY4IGFkZHIgMHg3
NjliZTAwMCA9PiAweDkwMDA1MzBmIC8gMHg5MDAwNQpsICAgICAgIDogMHg4MDAwcmVtYXBfYXJl
YV9tZm5fcHRlX2ZuOiBwdGVwIDg3MjFjNmZjIGFkZHIgMHg3NjliZjAwMCA9PiAweDkwMDA2MzBm
IC8gMHg5MDAwNgo4MDAwIC0+IDB4ODAxZmUwcmVtYXBfYXJlYV9tZm5fcHRlX2ZuOiBwdGVwIDg3
MjFjNzAwIGFkZHIgMHg3NjljMDAwMCA9PiAweDkwMDA3MzBmIC8gMHg5MDAwNwowMCAgKHBmbiAw
eDgwMDA4cmVtYXBfYXJlYV9tZm5fcHRlX2ZuOiBwdGVwIDg3MjFjNzA0IGFkZHIgMHg3NjljMTAw
MCA9PiAweDkwMDA4MzBmIC8gMHg5MDAwOApyZW1hcF9hcmVhX21mbl9wdGVfZm46IHB0ZXAgODcy
MWM3MDggYWRkciAweDc2OWMyMDAwID0+IDB4OTAwMDkzMGYgLyAweDkwMDA5CgpyZW1hcF9hcmVh
X21mbl9wdGVfZm46IHB0ZXAgODcyMWM3MGMgYWRkciAweDc2OWMzMDAwID0+IDB4OTAwMGEzMGYg
LyAweDkwMDBhCnJlbWFwX2FyZWFfbWZuX3B0ZV9mbjogcHRlcCA4NzIxYzcxMCBhZGRyIDB4NzY5
YzQwMDAgPT4gMHg5MDAwYjMwZiAvIDB4OTAwMGIKcmVtYXBfYXJlYV9tZm5fcHRlX2ZuOiBwdGVw
IDg3MjFjNzE0IGFkZHIgMHg3NjljNTAwMCA9PiAweDkwMDBjMzBmIC8gMHg5MDAwYwpyZW1hcF9h
cmVhX21mbl9wdGVfZm46IHB0ZXAgODcyMWM3MTggYWRkciAweDc2OWM2MDAwID0+IDB4OTAwMGQz
MGYgLyAweDkwMDBkCnJlbWFwX2FyZWFfbWZuX3B0ZV9mbjogcHRlcCA4NzIxYzcxYyBhZGRyIDB4
NzY5YzcwMDAgPT4gMHg5MDAwZTMwZiAvIDB4OTAwMGUKcmVtYXBfYXJlYV9tZm5fcHRlX2ZuOiBw
dGVwIDg3MjFjNzIwIGFkZHIgMHg3NjljODAwMCA9PiAweDkwMDBmMzBmIC8gMHg5MDAwZgpkb21h
aW5idWlsZGVyOiBkZXRhaWw6IHhjX2RvbV9wZm5fdG9fcHRyOiBkb21VIG1hcHBpbmc6IHBmbiAw
eDgwMDA4KzB4MWY2IGF0IDB4NzY5YjkwMDAKZG9tYWluYnVpbGRlcjogZGV0YWlsOiB4Y19kb21f
bG9hZF96aW1hZ2Vfa2VybmVsOiBjYWxsZWQKZG9tYWluYnVpbGRlcjogZGV0YWlsOiB4Y19kb21f
bG9hZF96aW1hZ2Vfa2VybmVsOiBrZXJuZWwgc2VkIDB4ODAwMDgwMDAtMHg4MDFmZTAwMApkb21h
aW5idWlsZGVyOiBkZXRhaWw6IHhjX2RvbV9sb2FkX3ppbWFnZV9rZXJuZWw6IGNvcHkgMjA1NDEy
OSBieXRlcyBmcm9tIGJsb2IgMHg3NmMzMDAwMCB0byBkc3QgMHg3NjliOTAwMApkb21haW5idWls
ZGVyOiBkZXRhaWw6IGFsbG9jX21hZm9yZWlnbiBtYXAgYWRkX3RvX3BoeXNtYXAgZmFpbGVkLCBl
cnI9LTIyCmdpY19wYWdlczogY2FsbGVkCmZvcmVpZ24gbWFwIGFkZF90b19waHlzbWFwIGZhaWxl
ZCwgZXJyPS0yMgpkb21haW5idWlsZGVyOiBkZXRhaWw6IGNvdW50X3BndGFibGVzX2FybTogY2Fs
bGVkCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogeGNfZG9tX2J1aWxkX2ltYWdlICA6IHZpcnRfYWxs
b2NfZW5kIDogMHg4MDFmZTAwMApkb21haW5idWlsZGVyOiBkZXRhaWw6IHhjX2RvbV9idWlsZF9p
bWFnZSAgOiB2aXJ0X3BndGFiX2VuZCA6IDB4MApmb3JlaWduIG1hcCBhZGRfdG9fcGh5c21hcCBm
YWlsZWQsIGVycj0tMjIKZG9tYWluYnVpbGRlcjogZGV0YWlsOiB4Y19kb21fYm9vdF9pbWFnZTog
Y2FsbGVkCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogYXJjaF9zZXR1cF9ib290ZWFybHk6IGRvaW5n
IG5vdGhpbmcKZG9tYWluYnVpbGRlcjogZGV0YWlsOiB4Y19kb21fY29tcGF0X2NoZWNrOiBzdXBw
b3J0ZWQgZ3Vlc3QgdHlwZTogeGVuLTMuMC1hcm12N2wgPD0gbWF0Y2hlcwpkb21haW5idWlsZGVy
OiBkZXRhaWw6IHNldHVwX3BndGFibGVzX2FybTogY2FsbGVkCmRvbWFpbmJ1aWxkZXI6IGRldGFp
bDogc3RhcnRfaW5mb19hcm06IGNhbGxlZApkb21haW5idWlsZGVyOiBkZXRhaWw6IGRvbWFpbiBi
dWlsZGVyIG1lbW9yeSBmb290cHJpbnQKZG9tYWluYnVpbGRlcjogZGV0YWlsOiAgICBhbGxvY2F0
ZWQKZG9tYWluYnVpbGRlcjogZGV0YWlsOiAgICAgICBtYWxsb2MgICAgICAgICAgICAgOiA1MjUg
a0IKZG9tYWluYnVpbGRlcjogZGV0YWlsOiAgICAgICBhbm9uIG1tYXAgICAgICAgICAgOiAwIGJ5
dGVzCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogICAgbWFwcGVkCmRvbWFpbmJ1aWxkZXI6IGRldGFp
bDogICAgICAgZmlsZSBtbWFwICAgICAgICAgIDogMjAwNSBrQgpkb21haW5idWlsZGVyOiBkZXRh
aWw6ICAgICAgIGRvbVUgbW1hcCAgICAgICAgICA6IDIwMDgga0IKZG9tYWluYnVpbGRlcjogZGV0
YWlsOiB2Y3B1X2FybTogY2FsbGVkCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogSW5pdGlhbCBzdGF0
ZSBDUFNSIDB4MWQzIFBDIDB4ODAwMDgwMDAKZG9tYWluYnVpbGRlcjogZGV0YWlsOiBsYXVuY2hf
dm06IGNhbGxlZCwgY3R4dD0weDdlZjNlN2JjCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogeGNfZG9t
X3JlbGVhc2U6IGNhbGxlZAp4YzogZGVidWc6IGh5cGVyY2FsbCBidWZmZXI6IHRvdGFsIGFsbG9j
YXRpb25zOjIwIHRvdGFsIHJlbGVhc2VzOjIwCnhjOiBkZWJ1ZzogaHlwZXJjYWxsIGJ1ZmZlcjog
Y3VycmVudCBhbGxvY2F0aW9uczowIG1heGltdW0gYWxsb2NhdGlvbnM6Mgp4YzogZGVidWc6IGh5
cGVyY2FsbCBidWZmZXI6IGNhY2hlIGN1cnJlbnQgc2l6ZToyCnhjOiBkZWJ1ZzogaHlwZXJjYWxs
IGJ1ZmZlcjogY2FjaGUgaGl0czoxNyBtaXNzZXM6MiB0b29iaWc6MQpyb290QHR1ejovdXNyL2xp
Yi94ZW4vYmluIyB2YmQgdmJkLTEtNTE3MTI6IDIgY3JlYXRpbmcgdmJkIHN0cnVjdHVyZQoKCg==
--14dae9340efb28ec1704c636d706
Content-Type: application/octet-stream; 
	name="xcbuild-old-Dom0+U-A15x1_01082012.log"
Content-Disposition: attachment; 
	filename="xcbuild-old-Dom0+U-A15x1_01082012.log"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_h5clcb3j1

VHJ5aW5nIDEyNy4wLjAuMS4uLgpDb25uZWN0ZWQgdG8gbG9jYWxob3N0LgooWEVOKSBFc2NhcGUg
Y2hhcmFjdGVyIGlzICdeXScuCi0gVUFSVCBlbmFibGVkIC0KLSBDUFUgMDAwMDAwMDAgYm9vdGlu
ZyAtCi0gU3RhcnRlZCBpbiBTZWN1cmUgc3RhdGUgLQotIEVudGVyaW5nIEh5cCBtb2RlIC0KLSBT
ZXR0aW5nIHVwIGNvbnRyb2wgcmVnaXN0ZXJzIC0KLSBUdXJuaW5nIG9uIHBhZ2luZyAtCi0gUmVh
ZHkgLQpSQU06IDAwMDAwMDAwODAwMDAwMDAgLSAwMDAwMDAwMGZmZmZmZmZmCiBfXyAgX18gICAg
ICAgICAgICBfICBfICAgIF9fX18gICAgICAgICAgICAgICAgICAgICBfICAgICAgICBfICAgICBf
ICAgICAgCiBcIFwvIC9fX18gXyBfXyAgIHwgfHwgfCAgfF9fXyBcICAgIF8gICBfIF8gX18gIF9f
X3wgfF8gX18gX3wgfF9fIHwgfCBfX18gCiAgXCAgLy8gXyBcICdfIFwgIHwgfHwgfF8gICBfXykg
fF9ffCB8IHwgfCAnXyBcLyBfX3wgX18vIF9gIHwgJ18gXHwgfC8gXyBcCiAgLyAgXCAgX18vIHwg
fCB8IHxfXyAgIF98IC8gX18vfF9ffCB8X3wgfCB8IHwgXF9fIFwgfHwgKF98IHwgfF8pIHwgfCAg
X18vCiAvXy9cX1xfX198X3wgfF98ICAgIHxffChfKV9fX19ffCAgIFxfXyxffF98IHxffF9fXy9c
X19cX18sX3xfLl9fL3xffFxfX198CiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgCihYRU4pIFhlbiB2ZXJzaW9u
IDQuMi11bnN0YWJsZSAocm9vdEAobm9uZSkpIChnY2MgdmVyc2lvbiA0LjYuMyAoVWJ1bnR1L0xp
bmFybyA0LjYuMy0xdWJ1bnR1NSkgKSBXZWQgQXVnICAxIDA5OjM0OjMwIFVUQyAyMDEyCkxhdGVz
dCBDaGFuZ2VTZXQ6IHVuYXZhaWxhYmxlCihYRU4pIEdJQzogNjQgbGluZXMsIDEgY3B1LCBzZWN1
cmUgKElJRCAwMDAwMDQzYikuCihYRU4pIFdhaXRpbmcgZm9yIDAgb3RoZXIgQ1BVcyB0byBiZSBy
ZWFkeQooWEVOKSBVc2luZyBnZW5lcmljIHRpbWVyIGF0IDEwMDAwMDAwMCBIegooWEVOKSBYZW4g
aGVhcDogMjYyMTQ0IHBhZ2VzICBEb20gaGVhcDogMjUzOTUyIHBhZ2VzCihYRU4pIERvbWFpbiBo
ZWFwIGluaXRpYWxpc2VkCihYRU4pIFNldCBoeXAgdmVjdG9yIGJhc2UgdG8gMjNjZTQwIChleHBl
Y3RlZCAwMDIzY2U0MCkKKFhFTikgUHJvY2Vzc29yIEZlYXR1cmVzOiAwMDAwMTEzMSAwMDAwMTEz
MQooWEVOKSBEZWJ1ZyBGZWF0dXJlczogMDIwMTA1NTUKKFhFTikgQXV4aWxpYXJ5IEZlYXR1cmVz
OiAwMDAwMDAwMAooWEVOKSBNZW1vcnkgTW9kZWwgRmVhdHVyZXM6IDEwMjAxMTA1IDIwMDAwMDAw
IDAxMjQwMDAwIDAyMTAyMjExCihYRU4pIElTQSBGZWF0dXJlczogMDIxMDExMTAgMTMxMTIxMTEg
MjEyMzIwNDEgMTExMTIxMzEgMTAwMTExNDIgMDAwMDAwMDAKKFhFTikgVXNpbmcgc2NoZWR1bGVy
OiBTTVAgQ3JlZGl0IFNjaGVkdWxlciAoY3JlZGl0KQooWEVOKSBBbGxvY2F0ZWQgY29uc29sZSBy
aW5nIG9mIDE2IEtpQi4KKFhFTikgQnJvdWdodCB1cCAxIENQVXMKKFhFTikgKioqIExPQURJTkcg
RE9NQUlOIDAgKioqCihYRU4pIFBvcHVsYXRlIFAyTSAweDgwMDAwMDAwLT4weDg4MDAwMDAwCihY
RU4pIE1hcCBDUzIgTU1JTyByZWdpb25zIDE6MSBpbiB0aGUgUDJNIDB4MTgwMDAwMDAtPjB4MWJm
ZmZmZmYKKFhFTikgTWFwIENTMyBNTUlPIHJlZ2lvbnMgMToxIGluIHRoZSBQMk0gMHgxYzAwMDAw
MC0+MHgxZmZmZmZmZgooWEVOKSBNYXAgVkdJQyBNTUlPIHJlZ2lvbnMgMToxIGluIHRoZSBQMk0g
MHgyYzAwODAwMC0+MHgyZGZmZmZmZgooWEVOKSBSb3V0aW5nIHBlcmlwaGVyYWwgaW50ZXJydXB0
cyB0byBndWVzdAooWEVOKSBMb2FkaW5nIDAwMDAwMDAwMDAxZjhjZTAgYnl0ZSB6SW1hZ2UgZnJv
bSBmbGFzaCAwMDAwMDAwMDAwMDAwMDAwIHRvIDAwMDAwMDAwODAwMDgwMDAtMDAwMDAwMDA4MDIw
MGNlMDogWy4uXQooWEVOKSBTdGQuIExvZ2xldmVsOiBBbGwKKFhFTikgR3Vlc3QgTG9nbGV2ZWw6
IEFsbAooWEVOKSAqKiogU2VyaWFsIGlucHV0IC0+IERPTTAgKHR5cGUgJ0NUUkwtYScgdGhyZWUg
dGltZXMgdG8gc3dpdGNoIGlucHV0IHRvIFhlbikKKFhFTikgRnJlZWQgMjA0a0IgaW5pdCBtZW1v
cnkuClVuY29tcHJlc3NpbmcgTGludXguLi4gZG9uZSwgYm9vdGluZyB0aGUga2VybmVsLgpCb290
aW5nIExpbnV4IG9uIHBoeXNpY2FsIENQVSAwCkxpbnV4IHZlcnNpb24gMy41LjAtcmM3KyAocm9v
dEB0dXopIChnY2MgdmVyc2lvbiA0LjYuMyAoVWJ1bnR1L0xpbmFybyA0LjYuMy0xdWJ1bnR1NSkg
KSAjMTEgV2VkIEF1ZyAxIDEzOjM4OjQ2IE1TSyAyMDEyCkNQVTogQVJNdjcgUHJvY2Vzc29yIFs0
MTJmYzBmMF0gcmV2aXNpb24gMCAoQVJNdjcpLCBjcj0xMGM1M2M3ZApDUFU6IFBJUFQgLyBWSVBU
IG5vbmFsaWFzaW5nIGRhdGEgY2FjaGUsIFBJUFQgaW5zdHJ1Y3Rpb24gY2FjaGUKTWFjaGluZTog
QVJNLVZlcnNhdGlsZSBFeHByZXNzLCBtb2RlbDogVjJQLUNBMTUKYm9vdGNvbnNvbGUgW3hlbmJv
b3QwXSBlbmFibGVkCk1lbW9yeSBwb2xpY3k6IEVDQyBkaXNhYmxlZCwgRGF0YSBjYWNoZSB3cml0
ZWJhY2sKT24gbm9kZSAwIHRvdGFscGFnZXM6IDMyNzY4CmZyZWVfYXJlYV9pbml0X25vZGU6IG5v
ZGUgMCwgcGdkYXQgODAzZWFlOTQsIG5vZGVfbWVtX21hcCA4MDQwODAwMAogIE5vcm1hbCB6b25l
OiAyNTYgcGFnZXMgdXNlZCBmb3IgbWVtbWFwCiAgTm9ybWFsIHpvbmU6IDAgcGFnZXMgcmVzZXJ2
ZWQKICBOb3JtYWwgem9uZTogMzI1MTIgcGFnZXMsIExJRk8gYmF0Y2g6NwotLS0tLS0tLS0tLS1b
IGN1dCBoZXJlIF0tLS0tLS0tLS0tLS0KV0FSTklORzogYXQgYXJjaC9hcm0vbWFjaC12ZXhwcmVz
cy92Mm0uYzo2MTMgdjJtX2R0X2luaXRfZWFybHkrMHhhYy8weGVjKCkKTW9kdWxlcyBsaW5rZWQg
aW46CkJhY2t0cmFjZTogCls8ODAwMTFiMGM+XSAoZHVtcF9iYWNrdHJhY2UrMHgwLzB4MTBjKSBm
cm9tIFs8ODAyZDdlZDA+XSAoZHVtcF9zdGFjaysweDE4LzB4MWMpCiByNjowMDAwMDI2NSByNTo4
MDNhNmU5YyByNDowMDAwMDAwMCByMzo4MDNjZjkzYwpbPDgwMmQ3ZWI4Pl0gKGR1bXBfc3RhY2sr
MHgwLzB4MWMpIGZyb20gWzw4MDAxYjFkYz5dICh3YXJuX3Nsb3dwYXRoX2NvbW1vbisweDU0LzB4
NmMpCgpbPDgwMDFiMTg4Pl0gKHdhcm5fc2xvd3BhdGhfY29tbW9uKzB4MC8weDZjKSBmcm9tIFs8
ODAwMWIyMTg+XSAod2Fybl9zbG93cGF0aF9udWxsKzB4MjQvMHgyYykKIHI4OjgwM2NkMzM4IHI3
OjgwNTA4NDQwIHI2OjgwMDAwMjAwIHI1OjgwM2YzYjg4IHI0OjAwMDAwMDAwCnIzOjAwMDAwMDA5
Cls8ODAwMWIxZjQ+XSAod2Fybl9zbG93cGF0aF9udWxsKzB4MC8weDJjKSBmcm9tIFs8ODAzYTZl
OWM+XSAodjJtX2R0X2luaXRfZWFybHkrMHhhYy8weGVjKQpbPDgwM2E2ZGYwPl0gKHYybV9kdF9p
bml0X2Vhcmx5KzB4MC8weGVjKSBmcm9tIFs8ODAzYTM2OWM+XSAoc2V0dXBfYXJjaCsweDcxMC8w
eDdmYykKIHI0OjgwM2JhOWFjCls8ODAzYTJmOGM+XSAoc2V0dXBfYXJjaCsweDAvMHg3ZmMpIGZy
b20gWzw4MDNhMTU5Yz5dIChzdGFydF9rZXJuZWwrMHg3OC8weDI2YykKWzw4MDNhMTUyND5dIChz
dGFydF9rZXJuZWwrMHgwLzB4MjZjKSBmcm9tIFs8ODAwMDgwNDA+XSAoMHg4MDAwODA0MCkKIHI3
OjgwM2NkMjg0IHI2OjgwM2JiZDUwIHI1OjgwM2NhMDU0IHI0OjEwYzUzYzdkCi0tLVsgZW5kIHRy
YWNlIDFiNzViMzFhMjcxOWVkMWMgXS0tLQp2ZXhwcmVzczogRFQgSEJJICgyMzcpIGlzIG5vdCBt
YXRjaGluZyBoYXJkd2FyZSAoMCkhCnBjcHUtYWxsb2M6IHMwIHIwIGQzMjc2OCB1MzI3NjggYWxs
b2M9MSozMjc2OApwY3B1LWFsbG9jOiBbMF0gMCAKQnVpbHQgMSB6b25lbGlzdHMgaW4gWm9uZSBv
cmRlciwgbW9iaWxpdHkgZ3JvdXBpbmcgb24uICBUb3RhbCBwYWdlczogMzI1MTIKS2VybmVsIGNv
bW1hbmQgbGluZTogZWFybHlwcmludGs9eGVuYm9vdCBjb25zb2xlPXR0eUFNQTEgcm9vdD0vZGV2
L21tY2JsazAgZGVidWcgcncKUElEIGhhc2ggdGFibGUgZW50cmllczogNTEyIChvcmRlcjogLTEs
IDIwNDggYnl0ZXMpCkRlbnRyeSBjYWNoZSBoYXNoIHRhYmxlIGVudHJpZXM6IDE2Mzg0IChvcmRl
cjogNCwgNjU1MzYgYnl0ZXMpCklub2RlLWNhY2hlIGhhc2ggdGFibGUgZW50cmllczogODE5MiAo
b3JkZXI6IDMsIDMyNzY4IGJ5dGVzKQpNZW1vcnk6IDEyOE1CID0gMTI4TUIgdG90YWwKTWVtb3J5
OiAxMjU3NjRrLzEyNTc2NGsgYXZhaWxhYmxlLCA1MzA4ayByZXNlcnZlZCwgMEsgaGlnaG1lbQpW
aXJ0dWFsIGtlcm5lbCBtZW1vcnkgbGF5b3V0OgogICAgdmVjdG9yICA6IDB4ZmZmZjAwMDAgLSAw
eGZmZmYxMDAwICAgKCAgIDQga0IpCiAgICBmaXhtYXAgIDogMHhmZmYwMDAwMCAtIDB4ZmZmZTAw
MDAgICAoIDg5NiBrQikKICAgIHZtYWxsb2MgOiAweDg4ODAwMDAwIC0gMHhmZjAwMDAwMCAgICgx
ODk2IE1CKQogICAgbG93bWVtICA6IDB4ODAwMDAwMDAgLSAweDg4MDAwMDAwICAgKCAxMjggTUIp
CiAgICBtb2R1bGVzIDogMHg3ZjAwMDAwMCAtIDB4ODAwMDAwMDAgICAoICAxNiBNQikKICAgICAg
LnRleHQgOiAweDgwMDA4MDAwIC0gMHg4MDNhMTAwMCAgICgzNjg0IGtCKQogICAgICAuaW5pdCA6
IDB4ODAzYTEwMDAgLSAweDgwM2MxNWU4ICAgKCAxMzAga0IpCiAgICAgIC5kYXRhIDogMHg4MDNj
MjAwMCAtIDB4ODAzZWI1YzAgICAoIDE2NiBrQikKICAgICAgIC5ic3MgOiAweDgwM2ViNWU0IC0g
MHg4MDQwNzE2NCAgICggMTExIGtCKQpTTFVCOiBHZW5zbGFicz0xMSwgSFdhbGlnbj02NCwgT3Jk
ZXI9MC0zLCBNaW5PYmplY3RzPTAsIENQVXM9MSwgTm9kZXM9MQpOUl9JUlFTOjI1NgphcmNoX3Rp
bWVyOiBjYW4ndCBmaW5kIERUIG5vZGUKQXJjaGl0ZWN0ZWQgbG9jYWwgdGltZXIgcnVubmluZyBh
dCAxMDAuMDBNSHouCnNjaGVkX2Nsb2NrOiAzMiBiaXRzIGF0IDEwME1IeiwgcmVzb2x1dGlvbiAx
MG5zLCB3cmFwcyBldmVyeSA0Mjk0OW1zCkNvbnNvbGU6IGNvbG91ciBkdW1teSBkZXZpY2UgODB4
MzAKWGVuIHN1cHBvcnQgZm91bmQsIGV2ZW50c19pcnE9MzEgZ250dGFiX2ZyYW1lX3Bmbj1iMDAw
MApHcmFudCB0YWJsZXMgdXNpbmcgdmVyc2lvbiAxIGxheW91dC4KR3JhbnQgdGFibGUgaW5pdGlh
bGl6ZWQKQ2FsaWJyYXRpbmcgZGVsYXkgbG9vcC4uLiA5OC43MSBCb2dvTUlQUyAobHBqPTQ5MzU2
OCkKcGlkX21heDogZGVmYXVsdDogMzI3NjggbWluaW11bTogMzAxCk1vdW50LWNhY2hlIGhhc2gg
dGFibGUgZW50cmllczogNTEyCkNQVTogVGVzdGluZyB3cml0ZSBidWZmZXIgY29oZXJlbmN5OiBv
awpTZXR0aW5nIHVwIHN0YXRpYyBpZGVudGl0eSBtYXAgZm9yIDB4ODAyZGMxMzggLSAweDgwMmRj
MTZjClhlbiBzdXBwb3J0IGZvdW5kLCBldmVudHNfaXJxPTMxIGdudHRhYl9mcmFtZV9wZm49YjAw
MDAKTkVUOiBSZWdpc3RlcmVkIHByb3RvY29sIGZhbWlseSAxNgotLS0tLS0tLS0tLS1bIGN1dCBo
ZXJlIF0tLS0tLS0tLS0tLS0KV0FSTklORzogYXQga2VybmVsL2lycS9pcnFkb21haW4uYzoxMzUg
aXJxX2RvbWFpbl9sZWdhY3lfcmV2bWFwKzB4MjgvMHg1MCgpCk1vZHVsZXMgbGlua2VkIGluOgpC
YWNrdHJhY2U6IAoKWzw4MDAxMWIwYz5dIChkdW1wX2JhY2t0cmFjZSsweDAvMHgxMGMpIGZyb20g
Wzw4MDJkN2VkMD5dIChkdW1wX3N0YWNrKzB4MTgvMHgxYykKIHI2OjAwMDAwMDg3IHI1OjgwMDUx
MzIwIHI0OjAwMDAwMDAwIHIzOjgwM2NmOTNjCls8ODAyZDdlYjg+XSAoZHVtcF9zdGFjaysweDAv
MHgxYykgZnJvbSBbPDgwMDFiMWRjPl0gKHdhcm5fc2xvd3BhdGhfY29tbW9uKzB4NTQvMHg2YykK
Wzw4MDAxYjE4OD5dICh3YXJuX3Nsb3dwYXRoX2NvbW1vbisweDAvMHg2YykgZnJvbSBbPDgwMDFi
MjE4Pl0gKHdhcm5fc2xvd3BhdGhfbnVsbCsweDI0LzB4MmMpCiByODo4Nzg0ZTEwMCByNzowMDAw
MDAwMyByNjowMDAwMDA2NCByNTo4NzgwMDQ0MCByNDo4MDNkODk1MApyMzowMDAwMDAwOQpbPDgw
MDFiMWY0Pl0gKHdhcm5fc2xvd3BhdGhfbnVsbCsweDAvMHgyYykgZnJvbSBbPDgwMDUxMzIwPl0g
KGlycV9kb21haW5fbGVnYWN5X3Jldm1hcCsweDI4LzB4NTApCls8ODAwNTEyZjg+XSAoaXJxX2Rv
bWFpbl9sZWdhY3lfcmV2bWFwKzB4MC8weDUwKSBmcm9tIFs8ODAwNTEzZTg+XSAoaXJxX2ZpbmRf
bWFwcGluZysweGEwLzB4ZDApCls8ODAwNTEzNDg+XSAoaXJxX2ZpbmRfbWFwcGluZysweDAvMHhk
MCkgZnJvbSBbPDgwMDUxODI0Pl0gKGlycV9jcmVhdGVfbWFwcGluZysweDI4LzB4MTI4KQogcjg6
ODc4NGUxMDAgcjc6MDAwMDAwMDMgcjY6MDAwMDAwNjQgcjU6ODc4MzFkZDAgcjQ6ODc4MDA0NDAK
cjM6ODc4MzFkYTQKWzw4MDA1MTdmYz5dIChpcnFfY3JlYXRlX21hcHBpbmcrMHgwLzB4MTI4KSBm
cm9tIFs8ODAwNTE5YTg+XSAoaXJxX2NyZWF0ZV9vZl9tYXBwaW5nKzB4ODQvMHhmOCkKIHI3OjAw
MDAwMDAzIHI2OjgwNTA4OGE4IHI1Ojg3ODMxZGQwIHI0Ojg3ODAwNDQwCls8ODAwNTE5MjQ+XSAo
aXJxX2NyZWF0ZV9vZl9tYXBwaW5nKzB4MC8weGY4KSBmcm9tIFs8ODAyM2NkYTg+XSAoaXJxX29m
X3BhcnNlX2FuZF9tYXArMHgzNC8weDNjKQogcjc6MDAwMDAwMDAgcjY6ODA1MDg5ZGMgcjU6MDAw
MDAwMDAgcjQ6MDAwMDAwMDAKWzw4MDIzY2Q3ND5dIChpcnFfb2ZfcGFyc2VfYW5kX21hcCsweDAv
MHgzYykgZnJvbSBbPDgwMjNjZGQwPl0gKG9mX2lycV90b19yZXNvdXJjZSsweDIwLzB4N2MpCls8
ODAyM2NkYjA+XSAob2ZfaXJxX3RvX3Jlc291cmNlKzB4MC8weDdjKSBmcm9tIFs8ODAyM2NlNTg+
XSAob2ZfaXJxX2NvdW50KzB4MmMvMHgzYykKIHI3OjAwMDAwMDAwIHI2OjgwNTA4OWRjIHI1Ojgw
NTA4OWRjIHI0OjAwMDAwMDAwCls8ODAyM2NlMmM+XSAob2ZfaXJxX2NvdW50KzB4MC8weDNjKSBm
cm9tIFs8ODAyM2Q0MWM+XSAob2ZfZGV2aWNlX2FsbG9jKzB4NWMvMHgxNWMpCiByNTowMDAwMDAw
MCByNDowMDAwMDAwMApbPDgwMjNkM2MwPl0gKG9mX2RldmljZV9hbGxvYysweDAvMHgxNWMpIGZy
b20gWzw4MDIzZDU1OD5dIChvZl9wbGF0Zm9ybV9kZXZpY2VfY3JlYXRlX3BkYXRhKzB4M2MvMHg4
OCkKWzw4MDIzZDUxYz5dIChvZl9wbGF0Zm9ybV9kZXZpY2VfY3JlYXRlX3BkYXRhKzB4MC8weDg4
KSBmcm9tIFs8ODAyM2Q2Nzg+XSAob2ZfcGxhdGZvcm1fYnVzX2NyZWF0ZSsweGQ0LzB4Mjc4KQog
cjc6MDAwMDAwMDEgcjY6MDAwMDAwMDAgcjU6ODAzYmMyNjAgcjQ6ODA1MDg5ZGMKWzw4MDIzZDVh
ND5dIChvZl9wbGF0Zm9ybV9idXNfY3JlYXRlKzB4MC8weDI3OCkgZnJvbSBbPDgwMjNkODg0Pl0g
KG9mX3BsYXRmb3JtX3BvcHVsYXRlKzB4NjgvMHhhMCkKWzw4MDIzZDgxYz5dIChvZl9wbGF0Zm9y
bV9wb3B1bGF0ZSsweDAvMHhhMCkgZnJvbSBbPDgwM2E2YzM0Pl0gKHYybV9kdF9pbml0KzB4MmMv
MHg0YykKWzw4MDNhNmMwOD5dICh2Mm1fZHRfaW5pdCsweDAvMHg0YykgZnJvbSBbPDgwM2EyYzBj
Pl0gKGN1c3RvbWl6ZV9tYWNoaW5lKzB4MjQvMHgzMCkKWzw4MDNhMmJlOD5dIChjdXN0b21pemVf
bWFjaGluZSsweDAvMHgzMCkgZnJvbSBbPDgwMDA4NjNjPl0gKGRvX29uZV9pbml0Y2FsbCsweDQw
LzB4MTg0KQpbPDgwMDA4NWZjPl0gKGRvX29uZV9pbml0Y2FsbCsweDAvMHgxODQpIGZyb20gWzw4
MDNhMTg4MD5dIChrZXJuZWxfaW5pdCsweGYwLzB4MWFjKQpbPDgwM2ExNzkwPl0gKGtlcm5lbF9p
bml0KzB4MC8weDFhYykgZnJvbSBbPDgwMDFmN2I0Pl0gKGRvX2V4aXQrMHgwLzB4NmJjKQotLS1b
IGVuZCB0cmFjZSAxYjc1YjMxYTI3MTllZDFkIF0tLS0KLS0tLS0tLS0tLS0tWyBjdXQgaGVyZSBd
LS0tLS0tLS0tLS0tCldBUk5JTkc6IGF0IGtlcm5lbC9pcnEvaXJxZG9tYWluLmM6MTM1IGlycV9k
b21haW5fbGVnYWN5X3Jldm1hcCsweDI4LzB4NTAoKQpNb2R1bGVzIGxpbmtlZCBpbjoKQmFja3Ry
YWNlOiAKWzw4MDAxMWIwYz5dIChkdW1wX2JhY2t0cmFjZSsweDAvMHgxMGMpIGZyb20gWzw4MDJk
N2VkMD5dIChkdW1wX3N0YWNrKzB4MTgvMHgxYykKIHI2OjAwMDAwMDg3IHI1OjgwMDUxMzIwIHI0
OjAwMDAwMDAwIHIzOjgwM2NmOTNjCls8ODAyZDdlYjg+XSAoZHVtcF9zdGFjaysweDAvMHgxYykg
ZnJvbSBbPDgwMDFiMWRjPl0gKHdhcm5fc2xvd3BhdGhfY29tbW9uKzB4NTQvMHg2YykKWzw4MDAx
YjE4OD5dICh3YXJuX3Nsb3dwYXRoX2NvbW1vbisweDAvMHg2YykgZnJvbSBbPDgwMDFiMjE4Pl0g
KHdhcm5fc2xvd3BhdGhfbnVsbCsweDI0LzB4MmMpCiByODo4Nzg0ZTEwMCByNzowMDAwMDAwMyBy
NjowMDAwMDA2NCByNTowMDAwMDAwMCByNDo4NzgwMDQ0MApyMzowMDAwMDAwOQpbPDgwMDFiMWY0
Pl0gKHdhcm5fc2xvd3BhdGhfbnVsbCsweDAvMHgyYykgZnJvbSBbPDgwMDUxMzIwPl0gKGlycV9k
b21haW5fbGVnYWN5X3Jldm1hcCsweDI4LzB4NTApCls8ODAwNTEyZjg+XSAoaXJxX2RvbWFpbl9s
ZWdhY3lfcmV2bWFwKzB4MC8weDUwKSBmcm9tIFs8ODAwNTE4YWM+XSAoaXJxX2NyZWF0ZV9tYXBw
aW5nKzB4YjAvMHgxMjgpCls8ODAwNTE3ZmM+XSAoaXJxX2NyZWF0ZV9tYXBwaW5nKzB4MC8weDEy
OCkgZnJvbSBbPDgwMDUxOWE4Pl0gKGlycV9jcmVhdGVfb2ZfbWFwcGluZysweDg0LzB4ZjgpCiBy
NzowMDAwMDAwMyByNjo4MDUwODhhOCByNTo4NzgzMWRkMCByNDo4NzgwMDQ0MApbPDgwMDUxOTI0
Pl0gKGlycV9jcmVhdGVfb2ZfbWFwcGluZysweDAvMHhmOCkgZnJvbSBbPDgwMjNjZGE4Pl0gKGly
cV9vZl9wYXJzZV9hbmRfbWFwKzB4MzQvMHgzYykKIHI3OjAwMDAwMDAwIHI2OjgwNTA4OWRjIHI1
OjAwMDAwMDAwIHI0OjAwMDAwMDAwCls8ODAyM2NkNzQ+XSAoaXJxX29mX3BhcnNlX2FuZF9tYXAr
MHgwLzB4M2MpIGZyb20gWzw4MDIzY2RkMD5dIChvZl9pcnFfdG9fcmVzb3VyY2UrMHgyMC8weDdj
KQpbPDgwMjNjZGIwPl0gKG9mX2lycV90b19yZXNvdXJjZSsweDAvMHg3YykgZnJvbSBbPDgwMjNj
ZTU4Pl0gKG9mX2lycV9jb3VudCsweDJjLzB4M2MpCiByNzowMDAwMDAwMCByNjo4MDUwODlkYyBy
NTo4MDUwODlkYyByNDowMDAwMDAwMApbPDgwMjNjZTJjPl0gKG9mX2lycV9jb3VudCsweDAvMHgz
YykgZnJvbSBbPDgwMjNkNDFjPl0gKG9mX2RldmljZV9hbGxvYysweDVjLzB4MTVjKQogcjU6MDAw
MDAwMDAgcjQ6MDAwMDAwMDAKWzw4MDIzZDNjMD5dIChvZl9kZXZpY2VfYWxsb2MrMHgwLzB4MTVj
KSBmcm9tIFs8ODAyM2Q1NTg+XSAob2ZfcGxhdGZvcm1fZGV2aWNlX2NyZWF0ZV9wZGF0YSsweDNj
LzB4ODgpCls8ODAyM2Q1MWM+XSAob2ZfcGxhdGZvcm1fZGV2aWNlX2NyZWF0ZV9wZGF0YSsweDAv
MHg4OCkgZnJvbSBbPDgwMjNkNjc4Pl0gKG9mX3BsYXRmb3JtX2J1c19jcmVhdGUrMHhkNC8weDI3
OCkKIHI3OjAwMDAwMDAxIHI2OjAwMDAwMDAwIHI1OjgwM2JjMjYwIHI0OjgwNTA4OWRjCls8ODAy
M2Q1YTQ+XSAob2ZfcGxhdGZvcm1fYnVzX2NyZWF0ZSsweDAvMHgyNzgpIGZyb20gWzw4MDIzZDg4
ND5dIChvZl9wbGF0Zm9ybV9wb3B1bGF0ZSsweDY4LzB4YTApCls8ODAyM2Q4MWM+XSAob2ZfcGxh
dGZvcm1fcG9wdWxhdGUrMHgwLzB4YTApIGZyb20gWzw4MDNhNmMzND5dICh2Mm1fZHRfaW5pdCsw
eDJjLzB4NGMpCls8ODAzYTZjMDg+XSAodjJtX2R0X2luaXQrMHgwLzB4NGMpIGZyb20gWzw4MDNh
MmMwYz5dIChjdXN0b21pemVfbWFjaGluZSsweDI0LzB4MzApCls8ODAzYTJiZTg+XSAoY3VzdG9t
aXplX21hY2hpbmUrMHgwLzB4MzApIGZyb20gWzw4MDAwODYzYz5dIChkb19vbmVfaW5pdGNhbGwr
MHg0MC8weDE4NCkKWzw4MDAwODVmYz5dIChkb19vbmVfaW5pdGNhbGwrMHgwLzB4MTg0KSBmcm9t
IFs8ODAzYTE4ODA+XSAoa2VybmVsX2luaXQrMHhmMC8weDFhYykKWzw4MDNhMTc5MD5dIChrZXJu
ZWxfaW5pdCsweDAvMHgxYWMpIGZyb20gWzw4MDAxZjdiND5dIChkb19leGl0KzB4MC8weDZiYykK
LS0tWyBlbmQgdHJhY2UgMWI3NWIzMWEyNzE5ZWQxZSBdLS0tClNlcmlhbDogQU1CQSBQTDAxMSBV
QVJUIGRyaXZlcgoxYzA5MDAwMC51YXJ0OiB0dHlBTUEwIGF0IE1NSU8gMHgxYzA5MDAwMCAoaXJx
ID0gMzcpIGlzIGEgUEwwMTEgcmV2MQoxYzBhMDAwMC51YXJ0OiB0dHlBTUExIGF0IE1NSU8gMHgx
YzBhMDAwMCAoaXJxID0gMzgpIGlzIGEgUEwwMTEgcmV2MQpjb25zb2xlIFt0dHlBTUExXSBlbmFi
bGVkLCBib290Y29uc29sZSBkaXNhYmxlZAooWEVOKSBiYWQgcDJtIGxvb2t1cAooWEVOKSBkb20x
MTMgSVBBIDB4MDAwMDAwMDA5MDAwMDAwMAooWEVOKSBQMk0gQCAwMmZmY2FjMCBtZm46MHhmZmU1
NgooWEVOKSAxU1RbMHgyXSA9IDB4MDAwMDAwMDBmM2Y2ODZmZgooWEVOKSAyTkRbMHg4MF0gPSAw
eDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgYmFkIHAybSBsb29rdXAKKFhFTikgZG9tMTEzIElQQSAw
eDAwMDAwMDAwOTAwMDEwMDAKKFhFTikgUDJNIEAgMDJmZmNhYzAgbWZuOjB4ZmZlNTYKKFhFTikg
MVNUWzB4Ml0gPSAweDAwMDAwMDAwZjNmNjg2ZmYKKFhFTikgMk5EWzB4ODBdID0gMHgwMDAwMDAw
MDAwMDAwMDAwCihYRU4pIERPTTExMzogVW5jb21wcmVzc2luZyBMaW51eC4uLiBkb25lLCBib290
aW5nIHRoZSBrZXJuZWwuCkJvb3RpbmcgTGludXggb24gcGh5c2ljYWwgQ1BVIDAKTGludXggdmVy
c2lvbiAzLjUuMC1yYzcrIChyb290QHR1eikgKGdjYyB2ZXJzaW9uIDQuNi4zIChVYnVudHUvTGlu
YXJvIDQuNi4zLTF1YnVudHU1KSApICMxMyBXZWQgQXVnIDEgMTc6MzI6MjAgTVNLIDIwMTIKQ1BV
OiBBUk12NyBQcm9jZXNzb3IgWzQxMmZjMGYwXSByZXZpc2lvbiAwIChBUk12NyksIGNyPTEwYzUz
YzdkCkNQVTogUElQVCAvIFZJUFQgbm9uYWxpYXNpbmcgZGF0YSBjYWNoZSwgUElQVCBpbnN0cnVj
dGlvbiBjYWNoZQpNYWNoaW5lOiBBUk0tVmVyc2F0aWxlIEV4cHJlc3MsIG1vZGVsOiBWMlAtQUVN
djdBCmJvb3Rjb25zb2xlIFt4ZW5ib290MF0gZW5hYmxlZApNZW1vcnkgcG9saWN5OiBFQ0MgZGlz
YWJsZWQsIERhdGEgY2FjaGUgd3JpdGViYWNrCk9uIG5vZGUgMCB0b3RhbHBhZ2VzOiAzMjc2OApm
cmVlX2FyZWFfaW5pdF9ub2RlOiBub2RlIDAsIHBnZGF0IDgwM2U0ZDI0LCBub2RlX21lbV9tYXAg
ODA0MDEwMDAKICBOb3JtYWwgem9uZTogMjU2IHBhZ2VzIHVzZWQgZm9yIG1lbW1hcAogIE5vcm1h
bCB6b25lOiAwIHBhZ2VzIHJlc2VydmVkCiAgTm9ybWFsIHpvbmU6IDMyNTEyIHBhZ2VzLCBMSUZP
IGJhdGNoOjcKLS0tLS0tLS0tLS0tWyBjdXQgaGVyZSBdLS0tLS0tLS0tLS0tCgpXQVJOSU5HOiBh
dCBhcmNoL2FybS9tYWNoLXZleHByZXNzL3YybS5jOjYwMyB2Mm1fZHRfaW5pdF9lYXJseSsweDQ0
LzB4ZWMoKQpNb2R1bGVzIGxpbmtlZCBpbjoKQmFja3RyYWNlOiAKWzw4MDAxMWIwYz5dIChkdW1w
X2JhY2t0cmFjZSsweDAvMHgxMGMpIGZyb20gWzw4MDJkMWU1Yz5dIChkdW1wX3N0YWNrKzB4MTgv
MHgxYykKIHI2OjAwMDAwMjViIHI1OjgwM2EwZTM0IHI0OjAwMDAwMDAwIHIzOjgwM2M5OTNjCls8
ODAyZDFlNDQ+XSAoZHVtcF9zdGFjaysweDAvMHgxYykgZnJvbSBbPDgwMDFiMWRjPl0gKHdhcm5f
c2xvd3BhdGhfY29tbW9uKzB4NTQvMHg2YykKWzw4MDAxYjE4OD5dICh3YXJuX3Nsb3dwYXRoX2Nv
bW1vbisweDAvMHg2YykgZnJvbSBbPDgwMDFiMjE4Pl0gKHdhcm5fc2xvd3BhdGhfbnVsbCsweDI0
LzB4MmMpCiByODo4MDNjNzMzOCByNzo4MDUwMTQ0MCByNjo4MDAwMDIwMCByNTo4MDNlZGEwOCBy
NDo4MDNlNWViOApyMzowMDAwMDAwOQpbPDgwMDFiMWY0Pl0gKHdhcm5fc2xvd3BhdGhfbnVsbCsw
eDAvMHgyYykgZnJvbSBbPDgwM2EwZTM0Pl0gKHYybV9kdF9pbml0X2Vhcmx5KzB4NDQvMHhlYykK
Wzw4MDNhMGRmMD5dICh2Mm1fZHRfaW5pdF9lYXJseSsweDAvMHhlYykgZnJvbSBbPDgwMzlkNjlj
Pl0gKHNldHVwX2FyY2grMHg3MTAvMHg3ZmMpCiByNDo4MDNiNGFjYwpbPDgwMzljZjhjPl0gKHNl
dHVwX2FyY2grMHgwLzB4N2ZjKSBmcm9tIFs8ODAzOWI1OWM+XSAoc3RhcnRfa2VybmVsKzB4Nzgv
MHgyNmMpCls8ODAzOWI1MjQ+XSAoc3RhcnRfa2VybmVsKzB4MC8weDI2YykgZnJvbSBbPDgwMDA4
MDQwPl0gKDB4ODAwMDgwNDApCiByNzo4MDNjNzI4NCByNjo4MDNiNWUzMCByNTo4MDNjNDA1NCBy
NDoxMGM1M2M3ZAotLS1bIGVuZCB0cmFjZSAxYjc1YjMxYTI3MTllZDFjIF0tLS0KcGNwdS1hbGxv
YzogczAgcjAgZDMyNzY4IHUzMjc2OCBhbGxvYz0xKjMyNzY4CnBjcHUtYWxsb2M6IFswXSAwIApC
dWlsdCAxIHpvbmVsaXN0cyBpbiBab25lIG9yZGVyLCBtb2JpbGl0eSBncm91cGluZyBvbi4gIFRv
dGFsIHBhZ2VzOiAzMjUxMgpLZXJuZWwgY29tbWFuZCBsaW5lOiBlYXJseXByaW50az14ZW4gZGVi
dWcgbG9nbGV2ZWw9OSBjb25zb2xlPWh2YzAgcm9vdD0vZGV2L3h2ZGEKUElEIGhhc2ggdGFibGUg
ZW50cmllczogNTEyIChvcmRlcjogLTEsIDIwNDggYnl0ZXMpCkRlbnRyeSBjYWNoZSBoYXNoIHRh
YmxlIGVudHJpZXM6IDE2Mzg0IChvcmRlcjogNCwgNjU1MzYgYnl0ZXMpCklub2RlLWNhY2hlIGhh
c2ggdGFibGUgZW50cmllczogODE5MiAob3JkZXI6IDMsIDMyNzY4IGJ5dGVzKQpNZW1vcnk6IDEy
OE1CID0gMTI4TUIgdG90YWwKTWVtb3J5OiAxMjU3OTZrLzEyNTc5NmsgYXZhaWxhYmxlLCA1Mjc2
ayByZXNlcnZlZCwgMEsgaGlnaG1lbQpWaXJ0dWFsIGtlcm5lbCBtZW1vcnkgbGF5b3V0OgogICAg
dmVjdG9yICA6IDB4ZmZmZjAwMDAgLSAweGZmZmYxMDAwICAgKCAgIDQga0IpCiAgICBmaXhtYXAg
IDogMHhmZmYwMDAwMCAtIDB4ZmZmZTAwMDAgICAoIDg5NiBrQikKICAgIHZtYWxsb2MgOiAweDg4
ODAwMDAwIC0gMHhmZjAwMDAwMCAgICgxODk2IE1CKQogICAgbG93bWVtICA6IDB4ODAwMDAwMDAg
LSAweDg4MDAwMDAwICAgKCAxMjggTUIpCiAgICBtb2R1bGVzIDogMHg3ZjAwMDAwMCAtIDB4ODAw
MDAwMDAgICAoICAxNiBNQikKICAgICAgLnRleHQgOiAweDgwMDA4MDAwIC0gMHg4MDM5YjAwMCAg
ICgzNjYwIGtCKQogICAgICAuaW5pdCA6IDB4ODAzOWIwMDAgLSAweDgwM2JiNmM0ICAgKCAxMzAg
a0IpCiAgICAgIC5kYXRhIDogMHg4MDNiYzAwMCAtIDB4ODAzZTU0NDAgICAoIDE2NiBrQikKICAg
ICAgIC5ic3MgOiAweDgwM2U1NDY0IC0gMHg4MDQwMGZlNCAgICggMTExIGtCKQpTTFVCOiBHZW5z
bGFicz0xMSwgSFdhbGlnbj02NCwgT3JkZXI9MC0zLCBNaW5PYmplY3RzPTAsIENQVXM9MSwgTm9k
ZXM9MQpOUl9JUlFTOjI1NgotLS0tLS0tLS0tLS1bIGN1dCBoZXJlIF0tLS0tLS0tLS0tLS0KV0FS
TklORzogYXQgYXJjaC9hcm0vbWFjaC12ZXhwcmVzcy92Mm0uYzo2MiB2Mm1fc3lzY3RsX2luaXQr
MHgyMC8weDU4KCkKTW9kdWxlcyBsaW5rZWQgaW46CkJhY2t0cmFjZTogCls8ODAwMTFiMGM+XSAo
ZHVtcF9iYWNrdHJhY2UrMHgwLzB4MTBjKSBmcm9tIFs8ODAyZDFlNWM+XSAoZHVtcF9zdGFjaysw
eDE4LzB4MWMpCiByNjowMDAwMDAzZSByNTo4MDNhMGEyNCByNDowMDAwMDAwMCByMzo4MDNjOTkz
YwpbPDgwMmQxZTQ0Pl0gKGR1bXBfc3RhY2srMHgwLzB4MWMpIGZyb20gWzw4MDAxYjFkYz5dICh3
YXJuX3Nsb3dwYXRoX2NvbW1vbisweDU0LzB4NmMpCls8ODAwMWIxODg+XSAod2Fybl9zbG93cGF0
aF9jb21tb24rMHgwLzB4NmMpIGZyb20gWzw4MDAxYjIxOD5dICh3YXJuX3Nsb3dwYXRoX251bGwr
MHgyNC8weDJjKQogcjg6ODAwMDQwNTkgcjc6ODA1MDFiODAgcjY6ODAzYjVlMzQgcjU6ODAzZTU0
ODAgcjQ6MDAwMDAwMDAKcjM6MDAwMDAwMDkKWzw4MDAxYjFmND5dICh3YXJuX3Nsb3dwYXRoX251
bGwrMHgwLzB4MmMpIGZyb20gWzw4MDNhMGEyND5dICh2Mm1fc3lzY3RsX2luaXQrMHgyMC8weDU4
KQpbPDgwM2EwYTA0Pl0gKHYybV9zeXNjdGxfaW5pdCsweDAvMHg1OCkgZnJvbSBbPDgwM2EwYjI0
Pl0gKHYybV9kdF90aW1lcl9pbml0KzB4MmMvMHhjYykKIHI1OjgwM2U1NDgwIHI0OmZmZmZmZmZm
Cls8ODAzYTBhZjg+XSAodjJtX2R0X3RpbWVyX2luaXQrMHgwLzB4Y2MpIGZyb20gWzw4MDM5ZDgz
MD5dICh0aW1lX2luaXQrMHgyOC8weDM4KQogcjY6ODAzYjVlMzQgcjU6ODAzZTU0ODAgcjQ6ZmZm
ZmZmZmYKWzw4MDM5ZDgwOD5dICh0aW1lX2luaXQrMHgwLzB4MzgpIGZyb20gWzw4MDM5YjZhYz5d
IChzdGFydF9rZXJuZWwrMHgxODgvMHgyNmMpCls8ODAzOWI1MjQ+XSAoc3RhcnRfa2VybmVsKzB4
MC8weDI2YykgZnJvbSBbPDgwMDA4MDQwPl0gKDB4ODAwMDgwNDApCiByNzo4MDNjNzI4NCByNjo4
MDNiNWUzMCByNTo4MDNjNDA1NCByNDoxMGM1M2M3ZAotLS1bIGVuZCB0cmFjZSAxYjc1YjMxYTI3
MTllZDFkIF0tLS0KYXJjaF90aW1lcjogZm91bmQgdGltZXIgaXJxcyAyOSAzMApBcmNoaXRlY3Rl
ZCBsb2NhbCB0aW1lciBydW5uaW5nIGF0IDEwMC4wME1Iei4Kc2NoZWRfY2xvY2s6IDMyIGJpdHMg
YXQgMTAwTUh6LCByZXNvbHV0aW9uIDEwbnMsIHdyYXBzIGV2ZXJ5IDQyOTQ5bXMKLS0tLS0tLS0t
LS0tWyBjdXQgaGVyZSBdLS0tLS0tLS0tLS0tCldBUk5JTkc6IGF0IGFyY2gvYXJtL21hY2gtdmV4
cHJlc3MvdjJtLmM6NjQ3IHYybV9kdF90aW1lcl9pbml0KzB4N2MvMHhjYygpCk1vZHVsZXMgbGlu
a2VkIGluOgpCYWNrdHJhY2U6IApbPDgwMDExYjBjPl0gKGR1bXBfYmFja3RyYWNlKzB4MC8weDEw
YykgZnJvbSBbPDgwMmQxZTVjPl0gKGR1bXBfc3RhY2srMHgxOC8weDFjKQogcjY6MDAwMDAyODcg
cjU6ODAzYTBiNzQgcjQ6MDAwMDAwMDAgcjM6ODAzYzk5M2MKWzw4MDJkMWU0ND5dIChkdW1wX3N0
YWNrKzB4MC8weDFjKSBmcm9tIFs8ODAwMWIxZGM+XSAod2Fybl9zbG93cGF0aF9jb21tb24rMHg1
NC8weDZjKQpbPDgwMDFiMTg4Pl0gKHdhcm5fc2xvd3BhdGhfY29tbW9uKzB4MC8weDZjKSBmcm9t
IFs8ODAwMWIyMTg+XSAod2Fybl9zbG93cGF0aF9udWxsKzB4MjQvMHgyYykKIHI4OjgwMDA0MDU5
IHI3OjgwNTAxYjgwIHI2OjgwM2I1ZTM0IHI1OjgwM2U1NDgwIHI0OmZmZmZmZmVhCnIzOjAwMDAw
MDA5Cls8ODAwMWIxZjQ+XSAod2Fybl9zbG93cGF0aF9udWxsKzB4MC8weDJjKSBmcm9tIFs8ODAz
YTBiNzQ+XSAodjJtX2R0X3RpbWVyX2luaXQrMHg3Yy8weGNjKQpbPDgwM2EwYWY4Pl0gKHYybV9k
dF90aW1lcl9pbml0KzB4MC8weGNjKSBmcm9tIFs8ODAzOWQ4MzA+XSAodGltZV9pbml0KzB4Mjgv
MHgzOCkKIHI2OjgwM2I1ZTM0IHI1OjgwM2U1NDgwIHI0OmZmZmZmZmZmCls8ODAzOWQ4MDg+XSAo
dGltZV9pbml0KzB4MC8weDM4KSBmcm9tIFs8ODAzOWI2YWM+XSAoc3RhcnRfa2VybmVsKzB4MTg4
LzB4MjZjKQpbPDgwMzliNTI0Pl0gKHN0YXJ0X2tlcm5lbCsweDAvMHgyNmMpIGZyb20gWzw4MDAw
ODA0MD5dICgweDgwMDA4MDQwKQogcjc6ODAzYzcyODQgcjY6ODAzYjVlMzAgcjU6ODAzYzQwNTQg
cjQ6MTBjNTNjN2QKLS0tWyBlbmQgdHJhY2UgMWI3NWIzMWEyNzE5ZWQxZSBdLS0tCkNvbnNvbGU6
IGNvbG91ciBkdW1teSBkZXZpY2UgODB4MzAKWGVuIHN1cHBvcnQgZm91bmQsIGV2ZW50c19pcnE9
MzEgZ250dGFiX2ZyYW1lX3Bmbj1iMDAwMApHcmFudCB0YWJsZXMgdXNpbmcgdmVyc2lvbiAxIGxh
eW91dC4KR3JhbnQgdGFibGUgaW5pdGlhbGl6ZWQKQ2FsaWJyYXRpbmcgZGVsYXkgbG9vcC4uLiA5
OS4yMiBCb2dvTUlQUyAobHBqPTQ5NjEyOCkKcGlkX21heDogZGVmYXVsdDogMzI3NjggbWluaW11
bTogMzAxCk1vdW50LWNhY2hlIGhhc2ggdGFibGUgZW50cmllczogNTEyCkNQVTogVGVzdGluZyB3
cml0ZSBidWZmZXIgY29oZXJlbmN5OiBvawpTZXR0aW5nIHVwIHN0YXRpYyBpZGVudGl0eSBtYXAg
Zm9yIDB4ODAyZDYwYzAgLSAweDgwMmQ2MGY0ClhlbiBzdXBwb3J0IGZvdW5kLCBldmVudHNfaXJx
PTMxIGdudHRhYl9mcmFtZV9wZm49YjAwMDAKTkVUOiBSZWdpc3RlcmVkIHByb3RvY29sIGZhbWls
eSAxNgooWEVOKSBHdWVzdCBkYXRhIGFib3J0OiBUcmFuc2xhdGlvbiBmYXVsdCBhdCBsZXZlbCAy
CihYRU4pICAgICBndmE9ODg4MDg4MDQKKFhFTikgICAgIGdwYT0wMDAwMDAwMDkwMDAxODA0CihY
RU4pICAgICBzaXplPTIgc2lnbj0wIHdyaXRlPTAgcmVnPTIKKFhFTikgICAgIGVhdD0wIGNtPTAg
czFwdHc9MCBkZnNjPTYKKFhFTikgZG9tMTEzIElQQSAweDAwMDAwMDAwOTAwMDE4MDQKKFhFTikg
UDJNIEAgMDJmZmNhYzAgbWZuOjB4ZmZlNTYKKFhFTikgMVNUWzB4Ml0gPSAweDAwMDAwMDAwZjNm
Njg2ZmYKKFhFTikgMk5EWzB4ODBdID0gMHgwMDAwMDAwMDAwMDAwMDAwCihYRU4pIC0tLS1bIFhl
bi00LjItdW5zdGFibGUgIHg4Nl82NCAgZGVidWc9eSAgTm90IHRhaW50ZWQgXS0tLS0KKFhFTikg
Q1BVOiAgICAwCihYRU4pIFBDOiAgICAgODAxNzU2MWMKKFhFTikgQ1BTUjogICAyMDAwMDAxMyBN
T0RFOlNWQwooWEVOKSAgICAgIFIwOiA4MDNmOTkyNCBSMTogODAzNjFhOTggUjI6IDgwM2Y5OTNj
IFIzOiA4MDNmOTk0NAooWEVOKSAgICAgIFI0OiA4ODgwODAwMCBSNTogODAzZjk5M2MgUjY6IDAw
MDA3ZmYwIFI3OiAwMDAwMDAwMQooWEVOKSAgICAgIFI4OiA4MDM5YjI1YyBSOTogODAzYmIzOGMg
UjEwOjgwM2U1NDgwIFIxMTo4NzgyZmYwYyBSMTI6ODc4MmZmMTAKKFhFTikgVVNSOiBTUDogMDAw
MDAwMDAgTFI6IDAwMDAwMDAwIENQU1I6MjAwMDAwMTMKKFhFTikgU1ZDOiBTUDogODc4MmZlZTgg
TFI6IDgwMTc2OTg4IFNQU1I6MDAwMDAwOTMKKFhFTikgQUJUOiBTUDogODAzZTVjMGMgTFI6IDgw
M2U1YzBjIFNQU1I6MDAwMDAwMDAKKFhFTikgVU5EOiBTUDogODAzZTVjMTggTFI6IDgwM2U1YzE4
IFNQU1I6MDAwMDAwMDAKKFhFTikgSVJROiBTUDogODAzZTVjMDAgTFI6IDgwMDBkZmMwIFNQU1I6
NjAwMDAxOTMKKFhFTikgRklROiBTUDogMDAwMDAwMDAgTFI6IDAwMDAwMDAwIFNQU1I6MDAwMDAw
MDAKKFhFTikgRklROiBSODogMDAwMDAwMDAgUjk6IDAwMDAwMDAwIFIxMDowMDAwMDAwMCBSMTE6
MDAwMDAwMDAgUjEyOjAwMDAwMDAwCihYRU4pIAooWEVOKSBUVEJSMCA4MDAwNDA1OSBUVEJSMSA4
MDAwNDA1OSBUVEJDUiAwMDAwMDAwMAooWEVOKSBTQ1RMUiAxMGM1M2M3ZAooWEVOKSBWVFRCUiA3
MjAwMDBmZmU1NjAwMAooWEVOKSAKKFhFTikgSFRUQlIgZmZlYzIwMDAKKFhFTikgSERGQVIgODg4
MDg4MDQKKFhFTikgSElGQVIgMAooWEVOKSBIUEZBUiA5MDAwMTAKKFhFTikgSENSIDAwMDAwODM1
CihYRU4pIEhTUiAgIDkzODIwMDA2CihYRU4pIAooWEVOKSBERlNSIDAgREZBUiAwCihYRU4pIElG
U1IgMCBJRkFSIDAKKFhFTikgCihYRU4pIEdVRVNUIFNUQUNLIEdPRVMgSEVSRQooWEVOKSAKKFhF
TikgKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKgooWEVOKSBQYW5pYyBv
biBDUFUgMDoKKFhFTikgVW5oYW5kbGVkIGd1ZXN0IGRhdGEgYWJvcnQKKFhFTikgKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKgooWEVOKSAKKFhFTikgUmVib290IGluIGZp
dmUgc2Vjb25kcy4uLgoKCnJvb3RAdHV6Oi91c3IvbGliL3hlbi9iaW4jIC4veGNidWlsZC1vbGQg
L2Jvb3QvekltYWdlZHRiIApJbWFnZTogL2Jvb3QvekltYWdlZHRiCk1lbW9yeTogMjY0MTkyS0IK
eGNfZG9tYWluX2NyZWF0ZTogMCAoMCkKYnVpbGRpbmcgZG9tMTEzCmRvbWFpbmJ1aWxkZXI6IGRl
dGFpbDogeGNfZG9tX2FsbG9jYXRlOiBjbWRsaW5lPSIiLCBmZWF0dXJlcz0iKG51bGwpIgpkb21h
aW5idWlsZGVyOiBkZXRhaWw6IHhjX2RvbV9rZXJuZWxfZmlsZTogZmlsZW5hbWU9Ii9ib290L3pJ
bWFnZWR0YiIKZG9tYWluYnVpbGRlcjogZGV0YWlsOiB4Y19kb21fbWFsbG9jX2ZpbGVtYXAgICAg
OiAyMDA1IGtCCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogeGNfZG9tX2Jvb3RfeGVuX2luaXQ6IHZl
ciA0LjIsIGNhcHMgeGVuLTMuMC1hcm12N2wgCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogeGNfZG9t
X3BhcnNlX2ltYWdlOiBjYWxsZWQKZG9tYWluYnVpbGRlcjogZGV0YWlsOiB4Y19kb21fZmluZF9s
b2FkZXI6IHRyeWluZyBtdWx0aWJvb3QtYmluYXJ5IGxvYWRlciAuLi4gCmRvbWFpbmJ1aWxkZXI6
IGRldGFpbDogbG9hZGVyIHByb2JlIGZhaWxlZApkb21haW5idWlsZGVyOiBkZXRhaWw6IHhjX2Rv
bV9maW5kX2xvYWRlcjogdHJ5aW5nIExpbnV4IHpJbWFnZSAoQVJNKSBsb2FkZXIgLi4uIApkb21h
aW5idWlsZGVyOiBkZXRhaWw6IHhjX2RvbV9wcm9iZV96aW1hZ2Vfa2VybmVsOiBmb3VuZCBhbiBh
cHBlbmRlZCBEVEIKZG9tYWluYnVpbGRlcjogZGV0YWlsOiBsb2FkZXIgcHJvYmUgT0sKZG9tYWlu
YnVpbGRlcjogZGV0YWlsOiB4Y19kb21fcGFyc2VfemltYWdlX2tlcm5lbDogY2FsbGVkCmRvbWFp
bmJ1aWxkZXI6IGRldGFpbDogeGNfZG9tX3BhcnNlX3ppbWFnZV9rZXJuZWw6IHhlbi0zLjAtYXJt
djdsOiBSQU0gc3RhcnRzIGF0IDgwMDAwCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogeGNfZG9tX3Bh
cnNlX3ppbWFnZV9rZXJuZWw6IHhlbi0zLjAtYXJtdjdsOiAweDgwMDA4MDAwIC0+IDB4ODAxZmQ3
ZjEKZG9tYWluYnVpbGRlcjogZGV0YWlsOiB4Y19kb21fbWVtX2luaXQ6IG1lbSAyNTYgTUIsIHBh
Z2VzIDB4MTAwMDAgcGFnZXMsIDRrIGVhY2gKZG9tYWluYnVpbGRlcjogZGV0YWlsOiB4Y19kb21f
bWVtX2luaXQ6IDB4MTAwMDAgcGFnZXMKZG9tYWluYnVpbGRlcjogZGV0YWlsOiB4Y19kb21fYm9v
dF9tZW1faW5pdDogY2FsbGVkCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogeGNfZG9tX21hbGxvYyAg
ICAgICAgICAgIDogNTEyIGtCCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogcmVtYXBfYXJlYV9tZm5f
cHRlX2ZuOiBwdGVwIDg3MTIxNmEwIGFkZHIgMHg3NjlhODAwMCA9PiAweDkwMDAwMzBmIC8gMHg5
MDAwMAp4Y19kb21fYnVpbGRfaW1hcmVtYXBfYXJlYV9tZm5fcHRlX2ZuOiBwdGVwIDg3MTIxNmE0
IGFkZHIgMHg3NjlhOTAwMCA9PiAweDkwMDAxMzBmIC8gMHg5MDAwMQpnZTogY2FsbGVkCmRvbWFy
ZW1hcF9hcmVhX21mbl9wdGVfZm46IHB0ZXAgODcxMjE2YTggYWRkciAweDc2OWFhMDAwID0+IDB4
OTAwMDIzMGYgLyAweDkwMDAyCmluYnVpbGRlcjogZGV0YWlyZW1hcF9hcmVhX21mbl9wdGVfZm46
IHB0ZXAgODcxMjE2YWMgYWRkciAweDc2OWFiMDAwID0+IDB4OTAwMDMzMGYgLyAweDkwMDAzCmw6
IHhjX2RvbV9hbGxvY19yZW1hcF9hcmVhX21mbl9wdGVfZm46IHB0ZXAgODcxMjE2YjAgYWRkciAw
eDc2OWFjMDAwID0+IDB4OTAwMDQzMGYgLyAweDkwMDA0CnNlZ21lbnQ6ICAga2VybmVyZW1hcF9h
cmVhX21mbl9wdGVfZm46IHB0ZXAgODcxMjE2YjQgYWRkciAweDc2OWFkMDAwID0+IDB4OTAwMDUz
MGYgLyAweDkwMDA1CmwgICAgICAgOiAweDgwMDByZW1hcF9hcmVhX21mbl9wdGVfZm46IHB0ZXAg
ODcxMjE2YjggYWRkciAweDc2OWFlMDAwID0+IDB4OTAwMDYzMGYgLyAweDkwMDA2CjgwMDAgLT4g
MHg4MDFmZTByZW1hcF9hcmVhX21mbl9wdGVfZm46IHB0ZXAgODcxMjE2YmMgYWRkciAweDc2OWFm
MDAwID0+IDB4OTAwMDczMGYgLyAweDkwMDA3CjAwICAocGZuIDB4ODAwMDhyZW1hcF9hcmVhX21m
bl9wdGVfZm46IHB0ZXAgODcxMjE2YzAgYWRkciAweDc2OWIwMDAwID0+IDB4OTAwMDgzMGYgLyAw
eDkwMDA4CnJlbWFwX2FyZWFfbWZuX3B0ZV9mbjogcHRlcCA4NzEyMTZjNCBhZGRyIDB4NzY5YjEw
MDAgPT4gMHg5MDAwOTMwZiAvIDB4OTAwMDkKCnJlbWFwX2FyZWFfbWZuX3B0ZV9mbjogcHRlcCA4
NzEyMTZjOCBhZGRyIDB4NzY5YjIwMDAgPT4gMHg5MDAwYTMwZiAvIDB4OTAwMGEKcmVtYXBfYXJl
YV9tZm5fcHRlX2ZuOiBwdGVwIDg3MTIxNmNjIGFkZHIgMHg3NjliMzAwMCA9PiAweDkwMDBiMzBm
IC8gMHg5MDAwYgpyZW1hcF9hcmVhX21mbl9wdGVfZm46IHB0ZXAgODcxMjE2ZDAgYWRkciAweDc2
OWI0MDAwID0+IDB4OTAwMGMzMGYgLyAweDkwMDBjCnJlbWFwX2FyZWFfbWZuX3B0ZV9mbjogcHRl
cCA4NzEyMTZkNCBhZGRyIDB4NzY5YjUwMDAgPT4gMHg5MDAwZDMwZiAvIDB4OTAwMGQKcmVtYXBf
YXJlYV9tZm5fcHRlX2ZuOiBwdGVwIDg3MTIxNmQ4IGFkZHIgMHg3NjliNjAwMCA9PiAweDkwMDBl
MzBmIC8gMHg5MDAwZQpyZW1hcF9hcmVhX21mbl9wdGVfZm46IHB0ZXAgODcxMjE2ZGMgYWRkciAw
eDc2OWI3MDAwID0+IDB4OTAwMGYzMGYgLyAweDkwMDBmCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDog
eGNfZG9tX3Bmbl90b19wdHI6IGRvbVUgbWFwcGluZzogcGZuIDB4ODAwMDgrMHgxZjYgYXQgMHg3
NjlhODAwMApkb21haW5idWlsZGVyOiBkZXRhaWw6IHhjX2RvbV9sb2FkX3ppbWFnZV9rZXJuZWw6
IGNhbGxlZApkb21haW5idWlsZGVyOiBkZXRhaWw6IHhjX2RvbV9sb2FkX3ppbWFnZV9rZXJuZWw6
IGtlcmZvcmVpZ24gbWFwIGFkZF90b19waHlzbWFwIGZhaWxlZCwgZXJyPS0yMgpuZWwgc2VkIDB4
ODAwMDgwMDAtMHg4MDFmZTAwMApkZm9yZWlnbiBtYXAgYWRkX3RvX3BoeXNtYXAgZmFpbGVkLCBl
cnI9LTIyCm9tYWluYnVpbGRlcjogZGV0YWlsOiB4Y19kb21fbG9hZF96aW1hZ2Vfa2VybmVsOiBj
b3B5IDIwNTQxMjkgYnl0ZXMgZnJvbSBibG9iIDB4NzZjMWYwMDAgdG8gZHN0IDB4NzY5YTgwMDAK
ZG9tYWluYnVpbGRlcjogZGV0YWlsOiBhbGxvY19tYWdpY19wYWdlczogY2FsbGVkCmRvbWFpbmJ1
aWxkZXI6IGRldGFpbDogY291bnRfcGd0YWJsZXNfYXJtOiBjYWxsZWQKZG9tYWluYnVpbGRlcjog
ZGV0YWlsOiB4Y19kb21fYnVpbGRfaW1hZ2UgIDogdmlydF9hbGxvY19lbmQgOiAweDgwMWZlMDAw
CmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogeGNfZG9tX2J1aWxkX2ltYWdlICA6IHZpcnRfcGd0YWJf
ZW5kIDogMHgwCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogeGNfZG9tX2Jvb3RfaW1hZ2U6IGNhbGxl
ZApkb21haW5idWlsZGVyOiBkZXRhaWw6IGFyY2hfc2V0dXBfYm9vdGVhcmx5OiBkb2luZyBub3Ro
aW5nCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogeGNfZG9tX2NvbXBhdF9jaGVjazogc3VwcG9ydGVk
IGd1ZXN0IHR5cGU6IHhlbi0zLjAtYXJtdjdsIDw9IG1hdGNoZXMKZG9tYWluYnVpbGRlcjogZGV0
YWlsOiBzZXR1cF9wZ3RhYmxlc19hcm06IGNhbGxlZApkb21haW5idWlsZGVyOiBkZXRhaWw6IHN0
YXJ0X2luZm9fYXJtOiBjYWxsZWQKZG9tYWluYnVpbGRlcjogZGV0YWlsOiBkb21haW4gYnVpbGRl
ciBtZW1vcnkgZm9vdHByaW50CmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogICAgYWxsb2NhdGVkCmRv
bWFpbmJ1aWxkZXI6IGRldGFpbDogICAgICAgbWFsbG9jICAgICAgICAgICAgIDogNTI1IGtCCmRv
bWFpbmJ1aWxkZXI6IGRldGFpbDogICAgICAgYW5vbiBtbWFwICAgICAgICAgIDogMCBieXRlcwpk
b21haW5idWlsZGVyOiBkZXRhaWw6ICAgIG1hcHBlZApkb21haW5idWlsZGVyOiBkZXRhaWw6ICAg
ICAgIGZpbGUgbW1hcCAgICAgICAgICA6IDIwMDUga0IKZG9tYWluYnVpbGRlcjogZGV0YWlsOiAg
ICAgICBkb21VIG1tYXAgICAgICAgICAgOiAyMDA4IGtCCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDog
dmNwdV9hcm06IGNhbGxlZApkb21haW5idWlsZGVyOiBkZXRhaWw6IEluaXRpYWwgc3RhdGUgQ1BT
UiAweDFkMyBQQyAweDgwMDA4MDAwCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogbGF1bmNoX3ZtOiBj
YWxsZWQsIGN0eHQ9MHg3ZWFmMmEwYwpkb21haW5idWlsZGVyOiBkZXRhaWw6IHhjX2RvbV9yZWxl
YXNlOiBjYWxsZWQKeGM6IGRlYnVnOiBoeXBlcmNhbGwgYnVmZmVyOiB0b3RhbCBhbGxvY2F0aW9u
czoxNCB0b3RhbCByZWxlYXNlczoxNAp4YzogZGVidWc6IGh5cGVyY2FsbCBidWZmZXI6IGN1cnJl
bnQgYWxsb2NhdGlvbnM6MCBtYXhpbXVtIGFsbG9jYXRpb25zOjIKeGM6IGRlYnVnOiBoeXBlcmNh
bGwgYnVmZmVyOiBjYWNoZSBjdXJyZW50IHNpemU6Mgp4YzogZGVidWc6IGh5cGVyY2FsbCBidWZm
ZXI6IGNhY2hlIGhpdHM6MTEgbWlzc2VzOjIgdG9vYmlnOjEKCgo=
--14dae9340efb28ec1704c636d706
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--14dae9340efb28ec1704c636d706--


From xen-devel-bounces@lists.xen.org Wed Aug 01 16:33:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 16:33:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swbqa-0001U9-1l; Wed, 01 Aug 2012 16:32:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <trashsee@gmail.com>) id 1SwbqY-0001TG-BT
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 16:32:30 +0000
Received: from [85.158.143.35:57365] by server-3.bemta-4.messagelabs.com id
	66/80-01511-E1A59105; Wed, 01 Aug 2012 16:32:30 +0000
X-Env-Sender: trashsee@gmail.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1343838747!5910045!1
X-Originating-IP: [209.85.160.173]
X-SpamReason: No, hits=1.7 required=7.0 tests=INFO_TLD,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32404 invoked from network); 1 Aug 2012 16:32:28 -0000
Received: from mail-gh0-f173.google.com (HELO mail-gh0-f173.google.com)
	(209.85.160.173)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 16:32:28 -0000
Received: by ghrr14 with SMTP id r14so8306449ghr.32
	for <xen-devel@lists.xen.org>; Wed, 01 Aug 2012 09:32:27 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=hj4nvKvA0H4ttFwfkfvz6C3PVg8Gu9GXFu4EquowPX4=;
	b=rG4wlcZChr7JGj7+h97q71UuHRiuaoK8mX8Veg8B+wgneGiJDCxWjBzMuMtwPgemUK
	ERqM2kqufKtvm3eIRdOYPkrjCgdKYikzbiHLgMTBRjMykdhEebBUUsIJ3QCmk9bka4Q5
	kgdwGLnv92vkL5EiizvgezjdTW2eL8GtQuy3yxK40t7Rlr/4hyFlC2L3TGlEfhr32EXe
	acDMSRDMrAndgwbh0RvEBMwtKradHa2HbIb8NGTeaK8lcDhAtfC6oUKXFB7Ke3dOg+yf
	ji3NjO45SoCnacche/RrqepCPingct5QPFpd6lEJIN2lQFFwsMGhrPcdHBf+kCu9Vgi2
	c58w==
MIME-Version: 1.0
Received: by 10.50.189.167 with SMTP id gj7mr4298968igc.32.1343838747159; Wed,
	01 Aug 2012 09:32:27 -0700 (PDT)
Received: by 10.64.22.10 with HTTP; Wed, 1 Aug 2012 09:32:26 -0700 (PDT)
In-Reply-To: <alpine.DEB.2.02.1207301934540.4645@kaball.uk.xensource.com>
References: <CAPny0soyuQkUmAU+kYrBvG+w_jxKUsY8YxCrxBA=7cwmdwV6Xw@mail.gmail.com>
	<alpine.DEB.2.02.1207301934540.4645@kaball.uk.xensource.com>
Date: Wed, 1 Aug 2012 20:32:26 +0400
Message-ID: <CAPny0soV4Z0R_PADtjn4JpCFMPkU-m+O4vBWA+DJRb9GVV36=g@mail.gmail.com>
From: Alexey Klimov <trashsee@gmail.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>, 
	Ian Campbell <Ian.Campbell@citrix.com>
Content-Type: multipart/mixed; boundary=14dae9340efb28ec1704c636d706
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [questions] Dom0/DomU on ARM under Xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--14dae9340efb28ec1704c636d706
Content-Type: text/plain; charset=ISO-8859-1

Hello Stefano and Ian,

2012/7/31 Stefano Stabellini <stefano.stabellini@eu.citrix.com>:
> On Mon, 30 Jul 2012, Alexey Klimov wrote:
>> I'm trying to run DomU and Dom0 on ARM under Xen and have some
>> problems (maybe a question of configuration).
>
> It is great to see interest in our project!
>
>
>> I'm using:
>> - unstable Xen mercurial repository with your "grant table" patches
>> and a few patches from Ian Campbell (xcbuild,
>> xen_remap_domain_mfn_range, XENMAPSPACE_gmfn_foreign,  ARM support to
>> xc_dom).
>
> You also need "libxc/arm: allocate xenstore and console pages".
>
> Unfortunately with the 4.2 tree frozen we still have a few missing pieces
> here and there in the Xen hypervisor and tools.
> I think that Ian intended to set up a Xen tree to be used for development
> with all the currently unapplied patches that are actually needed on top
> of xen-unstable.

I found the patch and applied it.

> Also the xcbuild patch posted by Ian is quite limited, I am attaching
> the xcbuild.c that I am currently using for my tests with PV disk and
> network support.

Thank you very much. I renamed Ian's early version of xcbuild to
xcbuild-old, added your file, and fixed the Makefile to build both xcbuilds.
It looks like I missed your patch from June 22, my bad:
http://lists.xen.org/archives/html/xen-devel/2012-06/msg01338.html

>> - your (Stefano's) linux kernel git repository
>> git://xenbits.xen.org/people/sstabellini/linux-pvhvm.git with head
>> 3.5-rc7-arm-1. I hope all the patches to the Linux kernel from Stefano's
>> letters are there.
>
> You might also need:
>
> "xen/events: fix unmask_evtchn for PV on HVM guests"
>
> this is the last version that I posted:
>
> http://marc.info/?l=linux-kernel&m=134263575132006&w=2

I also applied it on top of Stefano's kernel tree cloned on my machine.

>> - Fast Models with a few models created as described in the wiki page
>> http://wiki.xen.org/wiki/Xen_ARMv7_with_Virtualization_Extensions/FastModels
>> - device trees dts files (vexpress-v2p-ca15-tc1.dts and
>> vexpress-virt.dts) from Stefano's letter of 26 July. v2p-ca15-tc1 is
>> attached to Xen using CONFIG_DTB_FILE and vexpress-virt.dtb is
>> attached to DomU zImage.
>
> That's correct.
>
>
>> Well, the kernel hangs after the message (Calibrating delay loop...) when
>> running on the RTSM_VE/Build_Cortex-A15x4 and
>> RTSM_VE/Build_Cortex-A15x2 models. I attached logs (Dom0-A15x2 and A15x4).
>> The logs also show problems with the device trees (HBI and arch timer).
>>
>> I can boot Dom0 on the Cortex-A15x1 model (log file Dom0-A15x1, with
>> warnings/problems about DT and HBI), and when I try to boot a zImage
>> using the xcbuild utility it also hangs, with a message from Xen: "Guest
>> data abort: Translation fault at level 3". The log file is also attached.
>>
>> Could you please take a look and help?
>
> I have been testing on the Cortex-A15x1 model exclusively so far, so I
> am not surprised if there are any errors on the other models.
> Also I know that there are still a few warnings on boot, but I haven't got
> around to fixing them yet.
>
>
>> Maybe I'm missing an important config option in the Linux kernel or in Xen.
>>
>> Is it okay that vexpress-virt describes the V2P-AEMv7A platform and not
>> V2P-CA15?
>
> That should be OK.
>
>
>> It looks like vexpress-v2p-ca15-tc1.dts includes
>> vexpress-v2m-rs1-rtsm.dtsi. Could you please also share this file if
>> it has specific options?
>
> I am attaching it. I think you might be missing an important change there.

Thanks, I checked it and used it to build the final dtb file, but it looks
like there were no changes compared to the file I had been using.
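For what it's worth, the "found an appended DTB" line in the builder log refers to the usual convention of simply concatenating the .dtb after the zImage, after which a loader can probe for the DTB magic word past the end of the kernel image. A minimal sketch of that layout (the byte contents here are stand-ins, not a real kernel or device tree):

```python
# Sketch of the "appended DTB" layout that xc_dom_probe_zimage_kernel
# detects: the DTB is concatenated directly after the zImage, and flattened
# device trees always begin with the magic word 0xd00dfeed.
DTB_MAGIC = b"\xd0\x0d\xfe\xed"

zimage = b"KERNELDATA"           # stand-in for the real zImage bytes
dtb = DTB_MAGIC + b"\x00" * 4    # stand-in DTB: magic word plus padding

# Equivalent of: cat zImage vexpress-virt.dtb > zImagedtb
zimagedtb = zimage + dtb

# A loader finds the appended DTB by scanning for the magic word:
offset = zimagedtb.find(DTB_MAGIC)
print(offset)  # 10, i.e. len(zimage)
```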

>> And what can be the reason for the errors about
>> HBI/arch_timers when running Xen+Linux
>> kernel+vexpress-v2p-ca15-tc1.dts on the Cortex-A15x2 model?
>
> I am not sure yet, but I'll take a look. I'll try to fix them in one of
> the following versions of my series.
>
>
>> I can provide/send other info if you want. Thanks in advance.
>
> Let me know if the missing patches and the new
> vexpress-v2m-rs1-rtsm.dtsi fix the issue!

Thank you very much for the help.

Well, I still have problems after the two additional patches (for Xen and
the kernel). Logs are attached: add_to_physmap fails with -22 in both
xcbuild versions, and there are bad p2m lookups and a translation fault at
level 2 in Xen.
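A side note on the err=-22 in the logs: assuming standard Linux errno numbering, -22 is -EINVAL ("Invalid argument"), which would mean the hypercall rejected its arguments rather than, say, running out of memory. A quick way to check the number:

```python
# -22 in the xcbuild log, read as a negated Linux errno value:
# 22 is EINVAL, "Invalid argument".
import errno
import os

code = 22  # from "add_to_physmap failed, err=-22" in the log
print(errno.errorcode[code])   # EINVAL
print(os.strerror(code))       # Invalid argument
```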

As I understand from the email, it's better to use the Cortex-A15x1 model,
so I will use that model for my tests.

I also saw that Ian set up a git repository for Xen with the latest ARM
patches, so I'll try using that repository.

Best regards,
Alexey Klimov.

--14dae9340efb28ec1704c636d706
Content-Type: application/octet-stream; 
	name="xcbuild-Dom0+U-A15x1_01082012.log"
Content-Disposition: attachment; 
	filename="xcbuild-Dom0+U-A15x1_01082012.log"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_h5clc7280

KFhFTikgYmFkIHAybSBsb29rdXAKKFhFTikgZG9tMSBJUEEgMHgwMDAwMDAwMDkwMDAwMDAwCihY
RU4pIFAyTSBAIDAyZmZjYWMwIG1mbjoweGZmZTU2CihYRU4pIDFTVFsweDJdID0gMHgwMDAwMDAw
MGYzZjY4NmZmCihYRU4pIDJORFsweDgwXSA9IDB4MDAwMDAwMDAwMDAwMDAwMAooWEVOKSBiYWQg
cDJtIGxvb2t1cAooWEVOKSBkb20xIElQQSAweDAwMDAwMDAwOTAwMDEwMDAKKFhFTikgUDJNIEAg
MDJmZmNhYzAgbWZuOjB4ZmZlNTYKKFhFTikgMVNUWzB4Ml0gPSAweDAwMDAwMDAwZjNmNjg2ZmYK
KFhFTikgMk5EWzB4ODBdID0gMHgwMDAwMDAwMDAwMDAwMDAwCihYRU4pIGJhZCBwMm0gbG9va3Vw
CihYRU4pIGRvbTEgSVBBIDB4MDAwMDAwMDA5MDAwMTAwMAooWEVOKSBQMk0gQCAwMmZmY2FjMCBt
Zm46MHhmZmU1NgooWEVOKSAxU1RbMHgyXSA9IDB4MDAwMDAwMDBmM2Y2ODZmZgooWEVOKSAyTkRb
MHg4MF0gPSAweDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgRE9NMTogVW5jb21wcmVzc2luZyBMaW51
eC4uLiBkb25lLCBib290aW5nIHRoZSBrZXJuZWwuCkJvb3RpbmcgTGludXggb24gcGh5c2ljYWwg
Q1BVIDAKTGludXggdmVyc2lvbiAzLjUuMC1yYzcrIChyb290QHR1eikgKGdjYyB2ZXJzaW9uIDQu
Ni4zIChVYnVudHUvTGluYXJvIDQuNi4zLTF1YnVudHU1KSApICMxMyBXZWQgQXVnIDEgMTc6MzI6
MjAgTVNLIDIwMTIKQ1BVOiBBUk12NyBQcm9jZXNzb3IgWzQxMmZjMGYwXSByZXZpc2lvbiAwIChB
Uk12NyksIGNyPTEwYzUzYzdkCkNQVTogUElQVCAvIFZJUFQgbm9uYWxpYXNpbmcgZGF0YSBjYWNo
ZSwgUElQVCBpbnN0cnVjdGlvbiBjYWNoZQpNYWNoaW5lOiBBUk0tVmVyc2F0aWxlIEV4cHJlc3Ms
IG1vZGVsOiBWMlAtQUVNdjdBCmJvb3Rjb25zb2xlIFt4ZW5ib290MF0gZW5hYmxlZApNZW1vcnkg
cG9saWN5OiBFQ0MgZGlzYWJsZWQsIERhdGEgY2FjaGUgd3JpdGViYWNrCk9uIG5vZGUgMCB0b3Rh
bHBhZ2VzOiAzMjc2OApmcmVlX2FyZWFfaW5pdF9ub2RlOiBub2RlIDAsIHBnZGF0IDgwM2U0ZDI0
LCBub2RlX21lbV9tYXAgODA0MDEwMDAKICBOb3JtYWwgem9uZTogMjU2IHBhZ2VzIHVzZWQgZm9y
IG1lbW1hcAogIE5vcm1hbCB6b25lOiAwIHBhZ2VzIHJlc2VydmVkCiAgTm9ybWFsIHpvbmU6IDMy
NTEyIHBhZ2VzLCBMSUZPIGJhdGNoOjcKLS0tLS0tLS0tLS0tWyBjdXQgaGVyZSBdLS0tLS0tLS0t
LS0tCldBUk5JTkc6IGF0IGFyY2gvYXJtL21hY2gtdmV4cHJlc3MvdjJtLmM6NjAzIHYybV9kdF9p
bml0X2Vhcmx5KzB4NDQvMHhlYygpCk1vZHVsZXMgbGlua2VkIGluOgpCYWNrdHJhY2U6IApbPDgw
MDExYjBjPl0gKGR1bXBfYmFja3RyYWNlKzB4MC8weDEwYykgZnJvbSBbPDgwMmQxZTVjPl0gKGR1
bXBfc3RhY2srMHgxOC8weDFjKQogcjY6MDAwMDAyNWIgcjU6ODAzYTBlMzQgcjQ6MDAwMDAwMDAg
cjM6ODAzYzk5M2MKWzw4MDJkMWU0ND5dIChkdW1wX3N0YWNrKzB4MC8weDFjKSBmcm9tIFs8ODAw
MWIxZGM+XSAod2Fybl9zbG93cGF0aF9jb21tb24rMHg1NC8weDZjKQpbPDgwMDFiMTg4Pl0gKHdh
cm5fc2xvd3BhdGhfY29tbW9uKzB4MC8weDZjKSBmcm9tIFs8ODAwMWIyMTg+XSAod2Fybl9zbG93
cGF0aF9udWxsKzB4MjQvMHgyYykKIHI4OjgwM2M3MzM4IHI3OjgwNTAxNDQwIHI2OjgwMDAwMjAw
IHI1OjgwM2VkYTA4IHI0OjgwM2U1ZWI4CnIzOjAwMDAwMDA5Cls8ODAwMWIxZjQ+XSAod2Fybl9z
bG93cGF0aF9udWxsKzB4MC8weDJjKSBmcm9tIFs8ODAzYTBlMzQ+XSAodjJtX2R0X2luaXRfZWFy
bHkrMHg0NC8weGVjKQpbPDgwM2EwZGYwPl0gKHYybV9kdF9pbml0X2Vhcmx5KzB4MC8weGVjKSBm
cm9tIFs8ODAzOWQ2OWM+XSAoc2V0dXBfYXJjaCsweDcxMC8weDdmYykKIHI0OjgwM2I0YWNjCls8
ODAzOWNmOGM+XSAoc2V0dXBfYXJjaCsweDAvMHg3ZmMpIGZyb20gWzw4MDM5YjU5Yz5dIChzdGFy
dF9rZXJuZWwrMHg3OC8weDI2YykKWzw4MDM5YjUyND5dIChzdGFydF9rZXJuZWwrMHgwLzB4MjZj
KSBmcm9tIFs8ODAwMDgwNDA+XSAoMHg4MDAwODA0MCkKIHI3OjgwM2M3Mjg0IHI2OjgwM2I1ZTMw
IHI1OjgwM2M0MDU0IHI0OjEwYzUzYzdkCi0tLVsgZW5kIHRyYWNlIDFiNzViMzFhMjcxOWVkMWMg
XS0tLQpwY3B1LWFsbG9jOiBzMCByMCBkMzI3NjggdTMyNzY4IGFsbG9jPTEqMzI3NjgKcGNwdS1h
bGxvYzogWzBdIDAgCkJ1aWx0IDEgem9uZWxpc3RzIGluIFpvbmUgb3JkZXIsIG1vYmlsaXR5IGdy
b3VwaW5nIG9uLiAgVG90YWwgcGFnZXM6IDMyNTEyCktlcm5lbCBjb21tYW5kIGxpbmU6IGVhcmx5
cHJpbnRrPXhlbiBkZWJ1ZyBsb2dsZXZlbD05IGNvbnNvbGU9aHZjMCByb290PS9kZXYveHZkYQpQ
SUQgaGFzaCB0YWJsZSBlbnRyaWVzOiA1MTIgKG9yZGVyOiAtMSwgMjA0OCBieXRlcykKRGVudHJ5
IGNhY2hlIGhhc2ggdGFibGUgZW50cmllczogMTYzODQgKG9yZGVyOiA0LCA2NTUzNiBieXRlcykK
SW5vZGUtY2FjaGUgaGFzaCB0YWJsZSBlbnRyaWVzOiA4MTkyIChvcmRlcjogMywgMzI3NjggYnl0
ZXMpCk1lbW9yeTogMTI4TUIgPSAxMjhNQiB0b3RhbApNZW1vcnk6IDEyNTc5NmsvMTI1Nzk2ayBh
dmFpbGFibGUsIDUyNzZrIHJlc2VydmVkLCAwSyBoaWdobWVtClZpcnR1YWwga2VybmVsIG1lbW9y
eSBsYXlvdXQ6CiAgICB2ZWN0b3IgIDogMHhmZmZmMDAwMCAtIDB4ZmZmZjEwMDAgICAoICAgNCBr
QikKICAgIGZpeG1hcCAgOiAweGZmZjAwMDAwIC0gMHhmZmZlMDAwMCAgICggODk2IGtCKQogICAg
dm1hbGxvYyA6IDB4ODg4MDAwMDAgLSAweGZmMDAwMDAwICAgKDE4OTYgTUIpCiAgICBsb3dtZW0g
IDogMHg4MDAwMDAwMCAtIDB4ODgwMDAwMDAgICAoIDEyOCBNQikKICAgIG1vZHVsZXMgOiAweDdm
MDAwMDAwIC0gMHg4MDAwMDAwMCAgICggIDE2IE1CKQogICAgICAudGV4dCA6IDB4ODAwMDgwMDAg
LSAweDgwMzliMDAwICAgKDM2NjAga0IpCiAgICAgIC5pbml0IDogMHg4MDM5YjAwMCAtIDB4ODAz
YmI2YzQgICAoIDEzMCBrQikKICAgICAgLmRhdGEgOiAweDgwM2JjMDAwIC0gMHg4MDNlNTQ0MCAg
ICggMTY2IGtCKQogICAgICAgLmJzcyA6IDB4ODAzZTU0NjQgLSAweDgwNDAwZmU0ICAgKCAxMTEg
a0IpClNMVUI6IEdlbnNsYWJzPTExLCBIV2FsaWduPTY0LCBPcmRlcj0wLTMsIE1pbk9iamVjdHM9
MCwgQ1BVcz0xLCBOb2Rlcz0xCk5SX0lSUVM6MjU2Ci0tLS0tLS0tLS0tLVsgY3V0IGhlcmUgXS0t
LS0tLS0tLS0tLQpXQVJOSU5HOiBhdCBhcmNoL2FybS9tYWNoLXZleHByZXNzL3YybS5jOjYyIHYy
bV9zeXNjdGxfaW5pdCsweDIwLzB4NTgoKQpNb2R1bGVzIGxpbmtlZCBpbjoKQmFja3RyYWNlOiAK
Wzw4MDAxMWIwYz5dIChkdW1wX2JhY2t0cmFjZSsweDAvMHgxMGMpIGZyb20gWzw4MDJkMWU1Yz5d
IChkdW1wX3N0YWNrKzB4MTgvMHgxYykKIHI2OjAwMDAwMDNlIHI1OjgwM2EwYTI0IHI0OjAwMDAw
MDAwIHIzOjgwM2M5OTNjCls8ODAyZDFlNDQ+XSAoZHVtcF9zdGFjaysweDAvMHgxYykgZnJvbSBb
PDgwMDFiMWRjPl0gKHdhcm5fc2xvd3BhdGhfY29tbW9uKzB4NTQvMHg2YykKWzw4MDAxYjE4OD5d
ICh3YXJuX3Nsb3dwYXRoX2NvbW1vbisweDAvMHg2YykgZnJvbSBbPDgwMDFiMjE4Pl0gKHdhcm5f
c2xvd3BhdGhfbnVsbCsweDI0LzB4MmMpCiByODo4MDAwNDA1OSByNzo4MDUwMWI4MCByNjo4MDNi
NWUzNCByNTo4MDNlNTQ4MCByNDowMDAwMDAwMApyMzowMDAwMDAwOQpbPDgwMDFiMWY0Pl0gKHdh
cm5fc2xvd3BhdGhfbnVsbCsweDAvMHgyYykgZnJvbSBbPDgwM2EwYTI0Pl0gKHYybV9zeXNjdGxf
aW5pdCsweDIwLzB4NTgpCls8ODAzYTBhMDQ+XSAodjJtX3N5c2N0bF9pbml0KzB4MC8weDU4KSBm
cm9tIFs8ODAzYTBiMjQ+XSAodjJtX2R0X3RpbWVyX2luaXQrMHgyYy8weGNjKQogcjU6ODAzZTU0
ODAgcjQ6ZmZmZmZmZmYKWzw4MDNhMGFmOD5dICh2Mm1fZHRfdGltZXJfaW5pdCsweDAvMHhjYykg
ZnJvbSBbPDgwMzlkODMwPl0gKHRpbWVfaW5pdCsweDI4LzB4MzgpCiByNjo4MDNiNWUzNCByNTo4
MDNlNTQ4MCByNDpmZmZmZmZmZgpbPDgwMzlkODA4Pl0gKHRpbWVfaW5pdCsweDAvMHgzOCkgZnJv
bSBbPDgwMzliNmFjPl0gKHN0YXJ0X2tlcm5lbCsweDE4OC8weDI2YykKWzw4MDM5YjUyND5dIChz
dGFydF9rZXJuZWwrMHgwLzB4MjZjKSBmcm9tIFs8ODAwMDgwNDA+XSAoMHg4MDAwODA0MCkKIHI3
OjgwM2M3Mjg0IHI2OjgwM2I1ZTMwIHI1OjgwM2M0MDU0IHI0OjEwYzUzYzdkCi0tLVsgZW5kIHRy
YWNlIDFiNzViMzFhMjcxOWVkMWQgXS0tLQphcmNoX3RpbWVyOiBmb3VuZCB0aW1lciBpcnFzIDI5
IDMwCkFyY2hpdGVjdGVkIGxvY2FsIHRpbWVyIHJ1bm5pbmcgYXQgMTAwLjAwTUh6LgpzY2hlZF9j
bG9jazogMzIgYml0cyBhdCAxMDBNSHosIHJlc29sdXRpb24gMTBucywgd3JhcHMgZXZlcnkgNDI5
NDltcwotLS0tLS0tLS0tLS1bIGN1dCBoZXJlIF0tLS0tLS0tLS0tLS0KV0FSTklORzogYXQgYXJj
aC9hcm0vbWFjaC12ZXhwcmVzcy92Mm0uYzo2NDcgdjJtX2R0X3RpbWVyX2luaXQrMHg3Yy8weGNj
KCkKTW9kdWxlcyBsaW5rZWQgaW46CkJhY2t0cmFjZTogCls8ODAwMTFiMGM+XSAoZHVtcF9iYWNr
dHJhY2UrMHgwLzB4MTBjKSBmcm9tIFs8ODAyZDFlNWM+XSAoZHVtcF9zdGFjaysweDE4LzB4MWMp
CiByNjowMDAwMDI4NyByNTo4MDNhMGI3NCByNDowMDAwMDAwMCByMzo4MDNjOTkzYwpbPDgwMmQx
ZTQ0Pl0gKGR1bXBfc3RhY2srMHgwLzB4MWMpIGZyb20gWzw4MDAxYjFkYz5dICh3YXJuX3Nsb3dw
YXRoX2NvbW1vbisweDU0LzB4NmMpCls8ODAwMWIxODg+XSAod2Fybl9zbG93cGF0aF9jb21tb24r
MHgwLzB4NmMpIGZyb20gWzw4MDAxYjIxOD5dICh3YXJuX3Nsb3dwYXRoX251bGwrMHgyNC8weDJj
KQogcjg6ODAwMDQwNTkgcjc6ODA1MDFiODAgcjY6ODAzYjVlMzQgcjU6ODAzZTU0ODAgcjQ6ZmZm
ZmZmZWEKcjM6MDAwMDAwMDkKWzw4MDAxYjFmND5dICh3YXJuX3Nsb3dwYXRoX251bGwrMHgwLzB4
MmMpIGZyb20gWzw4MDNhMGI3ND5dICh2Mm1fZHRfdGltZXJfaW5pdCsweDdjLzB4Y2MpCls8ODAz
YTBhZjg+XSAodjJtX2R0X3RpbWVyX2luaXQrMHgwLzB4Y2MpIGZyb20gWzw4MDM5ZDgzMD5dICh0
aW1lX2luaXQrMHgyOC8weDM4KQogcjY6ODAzYjVlMzQgcjU6ODAzZTU0ODAgcjQ6ZmZmZmZmZmYK
Wzw4MDM5ZDgwOD5dICh0aW1lX2luaXQrMHgwLzB4MzgpIGZyb20gWzw4MDM5YjZhYz5dIChzdGFy
dF9rZXJuZWwrMHgxODgvMHgyNmMpCls8ODAzOWI1MjQ+XSAoc3RhcnRfa2VybmVsKzB4MC8weDI2
YykgZnJvbSBbPDgwMDA4MDQwPl0gKDB4ODAwMDgwNDApCiByNzo4MDNjNzI4NCByNjo4MDNiNWUz
MCByNTo4MDNjNDA1NCByNDoxMGM1M2M3ZAotLS1bIGVuZCB0cmFjZSAxYjc1YjMxYTI3MTllZDFl
IF0tLS0KQ29uc29sZTogY29sb3VyIGR1bW15IGRldmljZSA4MHgzMApYZW4gc3VwcG9ydCBmb3Vu
ZCwgZXZlbnRzX2lycT0zMSBnbnR0YWJfZnJhbWVfcGZuPWIwMDAwCkdyYW50IHRhYmxlcyB1c2lu
ZyB2ZXJzaW9uIDEgbGF5b3V0LgpHcmFudCB0YWJsZSBpbml0aWFsaXplZAooWEVOKSBHdWVzdCBk
YXRhIGFib3J0OiBUcmFuc2xhdGlvbiBmYXVsdCBhdCBsZXZlbCAyCihYRU4pICAgICBndmE9ODg4
MDRjMDgKKFhFTikgICAgIGdwYT0wMDAwMDAwMDkwMDAwYzA4CihYRU4pICAgICBzaXplPTIgc2ln
bj0wIHdyaXRlPTAgcmVnPTkKKFhFTikgICAgIGVhdD0wIGNtPTAgczFwdHc9MCBkZnNjPTYKKFhF
TikgZG9tMSBJUEEgMHgwMDAwMDAwMDkwMDAwYzA4CihYRU4pIFAyTSBAIDAyZmZjYWMwIG1mbjow
eGZmZTU2CihYRU4pIDFTVFsweDJdID0gMHgwMDAwMDAwMGYzZjY4NmZmCihYRU4pIDJORFsweDgw
XSA9IDB4MDAwMDAwMDAwMDAwMDAwMAooWEVOKSAtLS0tWyBYZW4tNC4yLXVuc3RhYmxlICB4ODZf
NjQgIGRlYnVnPXkgIE5vdCB0YWludGVkIF0tLS0tCihYRU4pIENQVTogICAgMAooWEVOKSBQQzog
ICAgIDgwMTk0OWU0CihYRU4pIENQU1I6ICAgMjAwMDAxZDMgTU9ERTpTVkMKKFhFTikgICAgICBS
MDogMDA1ODY1NmUgUjE6IDAwNTg2NTZlIFIyOiAwMDAwMDAxMCBSMzogODc4MDA5ODAKKFhFTikg
ICAgICBSNDogMDAwMDAwMTAgUjU6IDAwMDAwN2ZmIFI2OiAwMDAwMDAxMCBSNzogMDAwMDAwMWQK
KFhFTikgICAgICBSODogMDAwMDAwMTAgUjk6IDgwM2U2ODVjIFIxMDo4ODgwNDAwMCBSMTE6ODAz
YmRkZWMgUjEyOjgwM2JkZGYwCihYRU4pIFVTUjogU1A6IDAwMDAwMDAwIExSOiAwMDAwMDAwMCBD
UFNSOjIwMDAwMWQzCihYRU4pIFNWQzogU1A6IDgwM2JkZGE4IExSOiA4MDE5M2I5YyBTUFNSOjAw
MDAwMTUzCihYRU4pIEFCVDogU1A6IDgwM2U1YzBjIExSOiA4MDNlNWMwYyBTUFNSOjAwMDAwMDAw
CihYRU4pIFVORDogU1A6IDgwM2U1YzE4IExSOiA4MDNlNWMxOCBTUFNSOjAwMDAwMDAwCihYRU4p
IElSUTogU1A6IDgwM2U1YzAwIExSOiA4MDAwZGZjMCBTUFNSOjAwMDAwMWQzCihYRU4pIEZJUTog
U1A6IDAwMDAwMDAwIExSOiAwMDAwMDAwMCBTUFNSOjAwMDAwMDAwCihYRU4pIEZJUTogUjg6IDAw
MDAwMDAwIFI5OiAwMDAwMDAwMCBSMTA6MDAwMDAwMDAgUjExOjAwMDAwMDAwIFIxMjowMDAwMDAw
MAooWEVOKSAKKFhFTikgVFRCUjAgODAwMDQwNTkgVFRCUjEgODAwMDQwNTkgVFRCQ1IgMDAwMDAw
MDAKKFhFTikgU0NUTFIgMTBjNTNjN2QKKFhFTikgVlRUQlIgMjAwMDBmZmU1NjAwMAooWEVOKSAK
KFhFTikgSFRUQlIgZmZlYzIwMDAKKFhFTikgSERGQVIgODg4MDRjMDgKKFhFTikgSElGQVIgMAoo
WEVOKSBIUEZBUiA5MDAwMDAKKFhFTikgSENSIDAwMDAwODM1CihYRU4pIEhTUiAgIDkzODkwMDA2
CihYRU4pIAooWEVOKSBERlNSIDAgREZBUiAwCihYRU4pIElGU1IgMCBJRkFSIDAKKFhFTikgCihY
RU4pIEdVRVNUIFNUQUNLIEdPRVMgSEVSRQooWEVOKSAKKFhFTikgKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKgooWEVOKSBQYW5pYyBvbiBDUFUgMDoKKFhFTikgVW5oYW5k
bGVkIGd1ZXN0IGRhdGEgYWJvcnQKKFhFTikgKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKgooWEVOKSAKKFhFTikgUmVib290IGluIGZpdmUgc2Vjb25kcy4uLgoKCgoKcm9v
dEB0dXo6L3Vzci9saWIveGVuL2JpbiMgLi94Y2J1aWxkIC9ib290L3pJbWFnZWR0YiAKSW1hZ2U6
IC9ib290L3pJbWFnZWR0YgpNZW1vcnk6IDI2NDE5MktCCnhjX2RvbWFpbl9jcmVhdGU6IDAgKDAp
CmJ1aWxkaW5nIGRvbTEKdXNpbmcgeGNfZG9tIHRvIGJ1aWxkIGltYWdlIC9ib290L3pJbWFnZWR0
Ygpkb21haW5idWlsZGVyOiBkZXRhaWw6IHhjX2RvbV9hbGxvY2F0ZTogY21kbGluZT0iIiwgZmVh
dHVyZXM9IihudWxsKSIKZG9tYWluYnVpbGRlcjogZGV0YWlsOiB4Y19kb21fa2VybmVsX2ZpbGU6
IGZpbGVuYW1lPSIvYm9vdC96SW1hZ2VkdGIiCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogeGNfZG9t
X21hbGxvY19maWxlbWFwICAgIDogMjAwNSBrQgpkb21haW5idWlsZGVyOiBkZXRhaWw6IHhjX2Rv
bV9ib290X3hlbl9pbml0OiB2ZXIgNC4yLCBjYXBzIHhlbi0zLjAtYXJtdjdsIApkb21haW5idWls
ZGVyOiBkZXRhaWw6IHhjX2RvbV9wYXJzZV9pbWFnZTogY2FsbGVkCmRvbWFpbmJ1aWxkZXI6IGRl
dGFpbDogeGNfZG9tX2ZpbmRfbG9hZGVyOiB0cnlpbmcgbXVsdGlib290LWJpbmFyeSBsb2FkZXIg
Li4uIApkb21haW5idWlsZGVyOiBkZXRhaWw6IGxvYWRlciBwcm9iZSBmYWlsZWQKZG9tYWluYnVp
bGRlcjogZGV0YWlsOiB4Y19kb21fZmluZF9sb2FkZXI6IHRyeWluZyBMaW51eCB6SW1hZ2UgKEFS
TSkgbG9hZGVyIC4uLiAKZG9tYWluYnVpbGRlcjogZGV0YWlsOiB4Y19kb21fcHJvYmVfemltYWdl
X2tlcm5lbDogZm91bmQgYW4gYXBwZW5kZWQgRFRCCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogbG9h
ZGVyIHByb2JlIE9LCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogeGNfZG9tX3BhcnNlX3ppbWFnZV9r
ZXJuZWw6IGNhbGxlZApkb21haW5idWlsZGVyOiBkZXRhaWw6IHhjX2RvbV9wYXJzZV96aW1hZ2Vf
a2VybmVsOiB4ZW4tMy4wLWFybXY3bDogUkFNIHN0YXJ0cyBhdCA4MDAwMApkb21haW5idWlsZGVy
OiBkZXRhaWw6IHhjX2RvbV9wYXJzZV96aW1hZ2Vfa2VybmVsOiB4ZW4tMy4wLWFybXY3bDogMHg4
MDAwODAwMCAtPiAweDgwMWZkN2YxCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogeGNfZG9tX21lbV9p
bml0OiBtZW0gMjU2IE1CLCBwYWdlcyAweDEwMDAwIHBhZ2VzLCA0ayBlYWNoCmRvbWFpbmJ1aWxk
ZXI6IGRldGFpbDogeGNfZG9tX21lbV9pbml0OiAweDEwMDAwIHBhZ2VzCmRvbWFpbmJ1aWxkZXI6
IGRldGFpbDogeGNfZG9tX2Jvb3RfbWVtX2luaXQ6IGNhbGxlZApkb21haW5idWlsZGVyOiBkZXRh
aWw6IHhjX2RvbV9tYWxsb2MgICAgICAgICAgICA6IDUxMiBrQgpkb21haW5idWlsZGVyOiBkZXRh
aWw6IHJlbWFwX2FyZWFfbWZuX3B0ZV9mbjogcHRlcCA4NzIxYzZlNCBhZGRyIDB4NzY5YjkwMDAg
PT4gMHg5MDAwMDMwZiAvIDB4OTAwMDAKeGNfZG9tX2J1aWxkX2ltYXJlbWFwX2FyZWFfbWZuX3B0
ZV9mbjogcHRlcCA4NzIxYzZlOCBhZGRyIDB4NzY5YmEwMDAgPT4gMHg5MDAwMTMwZiAvIDB4OTAw
MDEKZ2U6IGNhbGxlZApkb21hcmVtYXBfYXJlYV9tZm5fcHRlX2ZuOiBwdGVwIDg3MjFjNmVjIGFk
ZHIgMHg3NjliYjAwMCA9PiAweDkwMDAyMzBmIC8gMHg5MDAwMgppbmJ1aWxkZXI6IGRldGFpcmVt
YXBfYXJlYV9tZm5fcHRlX2ZuOiBwdGVwIDg3MjFjNmYwIGFkZHIgMHg3NjliYzAwMCA9PiAweDkw
MDAzMzBmIC8gMHg5MDAwMwpsOiB4Y19kb21fYWxsb2NfcmVtYXBfYXJlYV9tZm5fcHRlX2ZuOiBw
dGVwIDg3MjFjNmY0IGFkZHIgMHg3NjliZDAwMCA9PiAweDkwMDA0MzBmIC8gMHg5MDAwNApzZWdt
ZW50OiAgIGtlcm5lcmVtYXBfYXJlYV9tZm5fcHRlX2ZuOiBwdGVwIDg3MjFjNmY4IGFkZHIgMHg3
NjliZTAwMCA9PiAweDkwMDA1MzBmIC8gMHg5MDAwNQpsICAgICAgIDogMHg4MDAwcmVtYXBfYXJl
YV9tZm5fcHRlX2ZuOiBwdGVwIDg3MjFjNmZjIGFkZHIgMHg3NjliZjAwMCA9PiAweDkwMDA2MzBm
IC8gMHg5MDAwNgo4MDAwIC0+IDB4ODAxZmUwcmVtYXBfYXJlYV9tZm5fcHRlX2ZuOiBwdGVwIDg3
MjFjNzAwIGFkZHIgMHg3NjljMDAwMCA9PiAweDkwMDA3MzBmIC8gMHg5MDAwNwowMCAgKHBmbiAw
eDgwMDA4cmVtYXBfYXJlYV9tZm5fcHRlX2ZuOiBwdGVwIDg3MjFjNzA0IGFkZHIgMHg3NjljMTAw
MCA9PiAweDkwMDA4MzBmIC8gMHg5MDAwOApyZW1hcF9hcmVhX21mbl9wdGVfZm46IHB0ZXAgODcy
MWM3MDggYWRkciAweDc2OWMyMDAwID0+IDB4OTAwMDkzMGYgLyAweDkwMDA5CgpyZW1hcF9hcmVh
X21mbl9wdGVfZm46IHB0ZXAgODcyMWM3MGMgYWRkciAweDc2OWMzMDAwID0+IDB4OTAwMGEzMGYg
LyAweDkwMDBhCnJlbWFwX2FyZWFfbWZuX3B0ZV9mbjogcHRlcCA4NzIxYzcxMCBhZGRyIDB4NzY5
YzQwMDAgPT4gMHg5MDAwYjMwZiAvIDB4OTAwMGIKcmVtYXBfYXJlYV9tZm5fcHRlX2ZuOiBwdGVw
IDg3MjFjNzE0IGFkZHIgMHg3NjljNTAwMCA9PiAweDkwMDBjMzBmIC8gMHg5MDAwYwpyZW1hcF9h
cmVhX21mbl9wdGVfZm46IHB0ZXAgODcyMWM3MTggYWRkciAweDc2OWM2MDAwID0+IDB4OTAwMGQz
MGYgLyAweDkwMDBkCnJlbWFwX2FyZWFfbWZuX3B0ZV9mbjogcHRlcCA4NzIxYzcxYyBhZGRyIDB4
NzY5YzcwMDAgPT4gMHg5MDAwZTMwZiAvIDB4OTAwMGUKcmVtYXBfYXJlYV9tZm5fcHRlX2ZuOiBw
dGVwIDg3MjFjNzIwIGFkZHIgMHg3NjljODAwMCA9PiAweDkwMDBmMzBmIC8gMHg5MDAwZgpkb21h
aW5idWlsZGVyOiBkZXRhaWw6IHhjX2RvbV9wZm5fdG9fcHRyOiBkb21VIG1hcHBpbmc6IHBmbiAw
eDgwMDA4KzB4MWY2IGF0IDB4NzY5YjkwMDAKZG9tYWluYnVpbGRlcjogZGV0YWlsOiB4Y19kb21f
bG9hZF96aW1hZ2Vfa2VybmVsOiBjYWxsZWQKZG9tYWluYnVpbGRlcjogZGV0YWlsOiB4Y19kb21f
bG9hZF96aW1hZ2Vfa2VybmVsOiBrZXJuZWwgc2VkIDB4ODAwMDgwMDAtMHg4MDFmZTAwMApkb21h
aW5idWlsZGVyOiBkZXRhaWw6IHhjX2RvbV9sb2FkX3ppbWFnZV9rZXJuZWw6IGNvcHkgMjA1NDEy
OSBieXRlcyBmcm9tIGJsb2IgMHg3NmMzMDAwMCB0byBkc3QgMHg3NjliOTAwMApkb21haW5idWls
ZGVyOiBkZXRhaWw6IGFsbG9jX21hZm9yZWlnbiBtYXAgYWRkX3RvX3BoeXNtYXAgZmFpbGVkLCBl
cnI9LTIyCmdpY19wYWdlczogY2FsbGVkCmZvcmVpZ24gbWFwIGFkZF90b19waHlzbWFwIGZhaWxl
ZCwgZXJyPS0yMgpkb21haW5idWlsZGVyOiBkZXRhaWw6IGNvdW50X3BndGFibGVzX2FybTogY2Fs
bGVkCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogeGNfZG9tX2J1aWxkX2ltYWdlICA6IHZpcnRfYWxs
b2NfZW5kIDogMHg4MDFmZTAwMApkb21haW5idWlsZGVyOiBkZXRhaWw6IHhjX2RvbV9idWlsZF9p
bWFnZSAgOiB2aXJ0X3BndGFiX2VuZCA6IDB4MApmb3JlaWduIG1hcCBhZGRfdG9fcGh5c21hcCBm
YWlsZWQsIGVycj0tMjIKZG9tYWluYnVpbGRlcjogZGV0YWlsOiB4Y19kb21fYm9vdF9pbWFnZTog
Y2FsbGVkCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogYXJjaF9zZXR1cF9ib290ZWFybHk6IGRvaW5n
IG5vdGhpbmcKZG9tYWluYnVpbGRlcjogZGV0YWlsOiB4Y19kb21fY29tcGF0X2NoZWNrOiBzdXBw
b3J0ZWQgZ3Vlc3QgdHlwZTogeGVuLTMuMC1hcm12N2wgPD0gbWF0Y2hlcwpkb21haW5idWlsZGVy
OiBkZXRhaWw6IHNldHVwX3BndGFibGVzX2FybTogY2FsbGVkCmRvbWFpbmJ1aWxkZXI6IGRldGFp
bDogc3RhcnRfaW5mb19hcm06IGNhbGxlZApkb21haW5idWlsZGVyOiBkZXRhaWw6IGRvbWFpbiBi
dWlsZGVyIG1lbW9yeSBmb290cHJpbnQKZG9tYWluYnVpbGRlcjogZGV0YWlsOiAgICBhbGxvY2F0
ZWQKZG9tYWluYnVpbGRlcjogZGV0YWlsOiAgICAgICBtYWxsb2MgICAgICAgICAgICAgOiA1MjUg
a0IKZG9tYWluYnVpbGRlcjogZGV0YWlsOiAgICAgICBhbm9uIG1tYXAgICAgICAgICAgOiAwIGJ5
dGVzCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogICAgbWFwcGVkCmRvbWFpbmJ1aWxkZXI6IGRldGFp
bDogICAgICAgZmlsZSBtbWFwICAgICAgICAgIDogMjAwNSBrQgpkb21haW5idWlsZGVyOiBkZXRh
aWw6ICAgICAgIGRvbVUgbW1hcCAgICAgICAgICA6IDIwMDgga0IKZG9tYWluYnVpbGRlcjogZGV0
YWlsOiB2Y3B1X2FybTogY2FsbGVkCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogSW5pdGlhbCBzdGF0
ZSBDUFNSIDB4MWQzIFBDIDB4ODAwMDgwMDAKZG9tYWluYnVpbGRlcjogZGV0YWlsOiBsYXVuY2hf
dm06IGNhbGxlZCwgY3R4dD0weDdlZjNlN2JjCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogeGNfZG9t
X3JlbGVhc2U6IGNhbGxlZAp4YzogZGVidWc6IGh5cGVyY2FsbCBidWZmZXI6IHRvdGFsIGFsbG9j
YXRpb25zOjIwIHRvdGFsIHJlbGVhc2VzOjIwCnhjOiBkZWJ1ZzogaHlwZXJjYWxsIGJ1ZmZlcjog
Y3VycmVudCBhbGxvY2F0aW9uczowIG1heGltdW0gYWxsb2NhdGlvbnM6Mgp4YzogZGVidWc6IGh5
cGVyY2FsbCBidWZmZXI6IGNhY2hlIGN1cnJlbnQgc2l6ZToyCnhjOiBkZWJ1ZzogaHlwZXJjYWxs
IGJ1ZmZlcjogY2FjaGUgaGl0czoxNyBtaXNzZXM6MiB0b29iaWc6MQpyb290QHR1ejovdXNyL2xp
Yi94ZW4vYmluIyB2YmQgdmJkLTEtNTE3MTI6IDIgY3JlYXRpbmcgdmJkIHN0cnVjdHVyZQoKCg==
--14dae9340efb28ec1704c636d706
Content-Type: application/octet-stream; 
	name="xcbuild-old-Dom0+U-A15x1_01082012.log"
Content-Disposition: attachment; 
	filename="xcbuild-old-Dom0+U-A15x1_01082012.log"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_h5clcb3j1

VHJ5aW5nIDEyNy4wLjAuMS4uLgpDb25uZWN0ZWQgdG8gbG9jYWxob3N0LgooWEVOKSBFc2NhcGUg
Y2hhcmFjdGVyIGlzICdeXScuCi0gVUFSVCBlbmFibGVkIC0KLSBDUFUgMDAwMDAwMDAgYm9vdGlu
ZyAtCi0gU3RhcnRlZCBpbiBTZWN1cmUgc3RhdGUgLQotIEVudGVyaW5nIEh5cCBtb2RlIC0KLSBT
ZXR0aW5nIHVwIGNvbnRyb2wgcmVnaXN0ZXJzIC0KLSBUdXJuaW5nIG9uIHBhZ2luZyAtCi0gUmVh
ZHkgLQpSQU06IDAwMDAwMDAwODAwMDAwMDAgLSAwMDAwMDAwMGZmZmZmZmZmCiBfXyAgX18gICAg
ICAgICAgICBfICBfICAgIF9fX18gICAgICAgICAgICAgICAgICAgICBfICAgICAgICBfICAgICBf
ICAgICAgCiBcIFwvIC9fX18gXyBfXyAgIHwgfHwgfCAgfF9fXyBcICAgIF8gICBfIF8gX18gIF9f
X3wgfF8gX18gX3wgfF9fIHwgfCBfX18gCiAgXCAgLy8gXyBcICdfIFwgIHwgfHwgfF8gICBfXykg
fF9ffCB8IHwgfCAnXyBcLyBfX3wgX18vIF9gIHwgJ18gXHwgfC8gXyBcCiAgLyAgXCAgX18vIHwg
fCB8IHxfXyAgIF98IC8gX18vfF9ffCB8X3wgfCB8IHwgXF9fIFwgfHwgKF98IHwgfF8pIHwgfCAg
X18vCiAvXy9cX1xfX198X3wgfF98ICAgIHxffChfKV9fX19ffCAgIFxfXyxffF98IHxffF9fXy9c
X19cX18sX3xfLl9fL3xffFxfX198CiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgCihYRU4pIFhlbiB2ZXJzaW9u
IDQuMi11bnN0YWJsZSAocm9vdEAobm9uZSkpIChnY2MgdmVyc2lvbiA0LjYuMyAoVWJ1bnR1L0xp
bmFybyA0LjYuMy0xdWJ1bnR1NSkgKSBXZWQgQXVnICAxIDA5OjM0OjMwIFVUQyAyMDEyCkxhdGVz
dCBDaGFuZ2VTZXQ6IHVuYXZhaWxhYmxlCihYRU4pIEdJQzogNjQgbGluZXMsIDEgY3B1LCBzZWN1
cmUgKElJRCAwMDAwMDQzYikuCihYRU4pIFdhaXRpbmcgZm9yIDAgb3RoZXIgQ1BVcyB0byBiZSBy
ZWFkeQooWEVOKSBVc2luZyBnZW5lcmljIHRpbWVyIGF0IDEwMDAwMDAwMCBIegooWEVOKSBYZW4g
aGVhcDogMjYyMTQ0IHBhZ2VzICBEb20gaGVhcDogMjUzOTUyIHBhZ2VzCihYRU4pIERvbWFpbiBo
ZWFwIGluaXRpYWxpc2VkCihYRU4pIFNldCBoeXAgdmVjdG9yIGJhc2UgdG8gMjNjZTQwIChleHBl
Y3RlZCAwMDIzY2U0MCkKKFhFTikgUHJvY2Vzc29yIEZlYXR1cmVzOiAwMDAwMTEzMSAwMDAwMTEz
MQooWEVOKSBEZWJ1ZyBGZWF0dXJlczogMDIwMTA1NTUKKFhFTikgQXV4aWxpYXJ5IEZlYXR1cmVz
OiAwMDAwMDAwMAooWEVOKSBNZW1vcnkgTW9kZWwgRmVhdHVyZXM6IDEwMjAxMTA1IDIwMDAwMDAw
IDAxMjQwMDAwIDAyMTAyMjExCihYRU4pIElTQSBGZWF0dXJlczogMDIxMDExMTAgMTMxMTIxMTEg
MjEyMzIwNDEgMTExMTIxMzEgMTAwMTExNDIgMDAwMDAwMDAKKFhFTikgVXNpbmcgc2NoZWR1bGVy
OiBTTVAgQ3JlZGl0IFNjaGVkdWxlciAoY3JlZGl0KQooWEVOKSBBbGxvY2F0ZWQgY29uc29sZSBy
aW5nIG9mIDE2IEtpQi4KKFhFTikgQnJvdWdodCB1cCAxIENQVXMKKFhFTikgKioqIExPQURJTkcg
RE9NQUlOIDAgKioqCihYRU4pIFBvcHVsYXRlIFAyTSAweDgwMDAwMDAwLT4weDg4MDAwMDAwCihY
RU4pIE1hcCBDUzIgTU1JTyByZWdpb25zIDE6MSBpbiB0aGUgUDJNIDB4MTgwMDAwMDAtPjB4MWJm
ZmZmZmYKKFhFTikgTWFwIENTMyBNTUlPIHJlZ2lvbnMgMToxIGluIHRoZSBQMk0gMHgxYzAwMDAw
MC0+MHgxZmZmZmZmZgooWEVOKSBNYXAgVkdJQyBNTUlPIHJlZ2lvbnMgMToxIGluIHRoZSBQMk0g
MHgyYzAwODAwMC0+MHgyZGZmZmZmZgooWEVOKSBSb3V0aW5nIHBlcmlwaGVyYWwgaW50ZXJydXB0
cyB0byBndWVzdAooWEVOKSBMb2FkaW5nIDAwMDAwMDAwMDAxZjhjZTAgYnl0ZSB6SW1hZ2UgZnJv
bSBmbGFzaCAwMDAwMDAwMDAwMDAwMDAwIHRvIDAwMDAwMDAwODAwMDgwMDAtMDAwMDAwMDA4MDIw
MGNlMDogWy4uXQooWEVOKSBTdGQuIExvZ2xldmVsOiBBbGwKKFhFTikgR3Vlc3QgTG9nbGV2ZWw6
IEFsbAooWEVOKSAqKiogU2VyaWFsIGlucHV0IC0+IERPTTAgKHR5cGUgJ0NUUkwtYScgdGhyZWUg
dGltZXMgdG8gc3dpdGNoIGlucHV0IHRvIFhlbikKKFhFTikgRnJlZWQgMjA0a0IgaW5pdCBtZW1v
cnkuClVuY29tcHJlc3NpbmcgTGludXguLi4gZG9uZSwgYm9vdGluZyB0aGUga2VybmVsLgpCb290
aW5nIExpbnV4IG9uIHBoeXNpY2FsIENQVSAwCkxpbnV4IHZlcnNpb24gMy41LjAtcmM3KyAocm9v
dEB0dXopIChnY2MgdmVyc2lvbiA0LjYuMyAoVWJ1bnR1L0xpbmFybyA0LjYuMy0xdWJ1bnR1NSkg
KSAjMTEgV2VkIEF1ZyAxIDEzOjM4OjQ2IE1TSyAyMDEyCkNQVTogQVJNdjcgUHJvY2Vzc29yIFs0
MTJmYzBmMF0gcmV2aXNpb24gMCAoQVJNdjcpLCBjcj0xMGM1M2M3ZApDUFU6IFBJUFQgLyBWSVBU
IG5vbmFsaWFzaW5nIGRhdGEgY2FjaGUsIFBJUFQgaW5zdHJ1Y3Rpb24gY2FjaGUKTWFjaGluZTog
QVJNLVZlcnNhdGlsZSBFeHByZXNzLCBtb2RlbDogVjJQLUNBMTUKYm9vdGNvbnNvbGUgW3hlbmJv
b3QwXSBlbmFibGVkCk1lbW9yeSBwb2xpY3k6IEVDQyBkaXNhYmxlZCwgRGF0YSBjYWNoZSB3cml0
ZWJhY2sKT24gbm9kZSAwIHRvdGFscGFnZXM6IDMyNzY4CmZyZWVfYXJlYV9pbml0X25vZGU6IG5v
ZGUgMCwgcGdkYXQgODAzZWFlOTQsIG5vZGVfbWVtX21hcCA4MDQwODAwMAogIE5vcm1hbCB6b25l
OiAyNTYgcGFnZXMgdXNlZCBmb3IgbWVtbWFwCiAgTm9ybWFsIHpvbmU6IDAgcGFnZXMgcmVzZXJ2
ZWQKICBOb3JtYWwgem9uZTogMzI1MTIgcGFnZXMsIExJRk8gYmF0Y2g6NwotLS0tLS0tLS0tLS1b
IGN1dCBoZXJlIF0tLS0tLS0tLS0tLS0KV0FSTklORzogYXQgYXJjaC9hcm0vbWFjaC12ZXhwcmVz
cy92Mm0uYzo2MTMgdjJtX2R0X2luaXRfZWFybHkrMHhhYy8weGVjKCkKTW9kdWxlcyBsaW5rZWQg
aW46CkJhY2t0cmFjZTogCls8ODAwMTFiMGM+XSAoZHVtcF9iYWNrdHJhY2UrMHgwLzB4MTBjKSBm
cm9tIFs8ODAyZDdlZDA+XSAoZHVtcF9zdGFjaysweDE4LzB4MWMpCiByNjowMDAwMDI2NSByNTo4
MDNhNmU5YyByNDowMDAwMDAwMCByMzo4MDNjZjkzYwpbPDgwMmQ3ZWI4Pl0gKGR1bXBfc3RhY2sr
MHgwLzB4MWMpIGZyb20gWzw4MDAxYjFkYz5dICh3YXJuX3Nsb3dwYXRoX2NvbW1vbisweDU0LzB4
NmMpCgpbPDgwMDFiMTg4Pl0gKHdhcm5fc2xvd3BhdGhfY29tbW9uKzB4MC8weDZjKSBmcm9tIFs8
ODAwMWIyMTg+XSAod2Fybl9zbG93cGF0aF9udWxsKzB4MjQvMHgyYykKIHI4OjgwM2NkMzM4IHI3
OjgwNTA4NDQwIHI2OjgwMDAwMjAwIHI1OjgwM2YzYjg4IHI0OjAwMDAwMDAwCnIzOjAwMDAwMDA5
Cls8ODAwMWIxZjQ+XSAod2Fybl9zbG93cGF0aF9udWxsKzB4MC8weDJjKSBmcm9tIFs8ODAzYTZl
OWM+XSAodjJtX2R0X2luaXRfZWFybHkrMHhhYy8weGVjKQpbPDgwM2E2ZGYwPl0gKHYybV9kdF9p
bml0X2Vhcmx5KzB4MC8weGVjKSBmcm9tIFs8ODAzYTM2OWM+XSAoc2V0dXBfYXJjaCsweDcxMC8w
eDdmYykKIHI0OjgwM2JhOWFjCls8ODAzYTJmOGM+XSAoc2V0dXBfYXJjaCsweDAvMHg3ZmMpIGZy
b20gWzw4MDNhMTU5Yz5dIChzdGFydF9rZXJuZWwrMHg3OC8weDI2YykKWzw4MDNhMTUyND5dIChz
dGFydF9rZXJuZWwrMHgwLzB4MjZjKSBmcm9tIFs8ODAwMDgwNDA+XSAoMHg4MDAwODA0MCkKIHI3
OjgwM2NkMjg0IHI2OjgwM2JiZDUwIHI1OjgwM2NhMDU0IHI0OjEwYzUzYzdkCi0tLVsgZW5kIHRy
YWNlIDFiNzViMzFhMjcxOWVkMWMgXS0tLQp2ZXhwcmVzczogRFQgSEJJICgyMzcpIGlzIG5vdCBt
YXRjaGluZyBoYXJkd2FyZSAoMCkhCnBjcHUtYWxsb2M6IHMwIHIwIGQzMjc2OCB1MzI3NjggYWxs
b2M9MSozMjc2OApwY3B1LWFsbG9jOiBbMF0gMCAKQnVpbHQgMSB6b25lbGlzdHMgaW4gWm9uZSBv
cmRlciwgbW9iaWxpdHkgZ3JvdXBpbmcgb24uICBUb3RhbCBwYWdlczogMzI1MTIKS2VybmVsIGNv
bW1hbmQgbGluZTogZWFybHlwcmludGs9eGVuYm9vdCBjb25zb2xlPXR0eUFNQTEgcm9vdD0vZGV2
L21tY2JsazAgZGVidWcgcncKUElEIGhhc2ggdGFibGUgZW50cmllczogNTEyIChvcmRlcjogLTEs
IDIwNDggYnl0ZXMpCkRlbnRyeSBjYWNoZSBoYXNoIHRhYmxlIGVudHJpZXM6IDE2Mzg0IChvcmRl
cjogNCwgNjU1MzYgYnl0ZXMpCklub2RlLWNhY2hlIGhhc2ggdGFibGUgZW50cmllczogODE5MiAo
b3JkZXI6IDMsIDMyNzY4IGJ5dGVzKQpNZW1vcnk6IDEyOE1CID0gMTI4TUIgdG90YWwKTWVtb3J5
OiAxMjU3NjRrLzEyNTc2NGsgYXZhaWxhYmxlLCA1MzA4ayByZXNlcnZlZCwgMEsgaGlnaG1lbQpW
aXJ0dWFsIGtlcm5lbCBtZW1vcnkgbGF5b3V0OgogICAgdmVjdG9yICA6IDB4ZmZmZjAwMDAgLSAw
eGZmZmYxMDAwICAgKCAgIDQga0IpCiAgICBmaXhtYXAgIDogMHhmZmYwMDAwMCAtIDB4ZmZmZTAw
MDAgICAoIDg5NiBrQikKICAgIHZtYWxsb2MgOiAweDg4ODAwMDAwIC0gMHhmZjAwMDAwMCAgICgx
ODk2IE1CKQogICAgbG93bWVtICA6IDB4ODAwMDAwMDAgLSAweDg4MDAwMDAwICAgKCAxMjggTUIp
CiAgICBtb2R1bGVzIDogMHg3ZjAwMDAwMCAtIDB4ODAwMDAwMDAgICAoICAxNiBNQikKICAgICAg
LnRleHQgOiAweDgwMDA4MDAwIC0gMHg4MDNhMTAwMCAgICgzNjg0IGtCKQogICAgICAuaW5pdCA6
IDB4ODAzYTEwMDAgLSAweDgwM2MxNWU4ICAgKCAxMzAga0IpCiAgICAgIC5kYXRhIDogMHg4MDNj
MjAwMCAtIDB4ODAzZWI1YzAgICAoIDE2NiBrQikKICAgICAgIC5ic3MgOiAweDgwM2ViNWU0IC0g
MHg4MDQwNzE2NCAgICggMTExIGtCKQpTTFVCOiBHZW5zbGFicz0xMSwgSFdhbGlnbj02NCwgT3Jk
ZXI9MC0zLCBNaW5PYmplY3RzPTAsIENQVXM9MSwgTm9kZXM9MQpOUl9JUlFTOjI1NgphcmNoX3Rp
bWVyOiBjYW4ndCBmaW5kIERUIG5vZGUKQXJjaGl0ZWN0ZWQgbG9jYWwgdGltZXIgcnVubmluZyBh
dCAxMDAuMDBNSHouCnNjaGVkX2Nsb2NrOiAzMiBiaXRzIGF0IDEwME1IeiwgcmVzb2x1dGlvbiAx
MG5zLCB3cmFwcyBldmVyeSA0Mjk0OW1zCkNvbnNvbGU6IGNvbG91ciBkdW1teSBkZXZpY2UgODB4
MzAKWGVuIHN1cHBvcnQgZm91bmQsIGV2ZW50c19pcnE9MzEgZ250dGFiX2ZyYW1lX3Bmbj1iMDAw
MApHcmFudCB0YWJsZXMgdXNpbmcgdmVyc2lvbiAxIGxheW91dC4KR3JhbnQgdGFibGUgaW5pdGlh
bGl6ZWQKQ2FsaWJyYXRpbmcgZGVsYXkgbG9vcC4uLiA5OC43MSBCb2dvTUlQUyAobHBqPTQ5MzU2
OCkKcGlkX21heDogZGVmYXVsdDogMzI3NjggbWluaW11bTogMzAxCk1vdW50LWNhY2hlIGhhc2gg
dGFibGUgZW50cmllczogNTEyCkNQVTogVGVzdGluZyB3cml0ZSBidWZmZXIgY29oZXJlbmN5OiBv
awpTZXR0aW5nIHVwIHN0YXRpYyBpZGVudGl0eSBtYXAgZm9yIDB4ODAyZGMxMzggLSAweDgwMmRj
MTZjClhlbiBzdXBwb3J0IGZvdW5kLCBldmVudHNfaXJxPTMxIGdudHRhYl9mcmFtZV9wZm49YjAw
MDAKTkVUOiBSZWdpc3RlcmVkIHByb3RvY29sIGZhbWlseSAxNgotLS0tLS0tLS0tLS1bIGN1dCBo
ZXJlIF0tLS0tLS0tLS0tLS0KV0FSTklORzogYXQga2VybmVsL2lycS9pcnFkb21haW4uYzoxMzUg
aXJxX2RvbWFpbl9sZWdhY3lfcmV2bWFwKzB4MjgvMHg1MCgpCk1vZHVsZXMgbGlua2VkIGluOgpC
YWNrdHJhY2U6IAoKWzw4MDAxMWIwYz5dIChkdW1wX2JhY2t0cmFjZSsweDAvMHgxMGMpIGZyb20g
Wzw4MDJkN2VkMD5dIChkdW1wX3N0YWNrKzB4MTgvMHgxYykKIHI2OjAwMDAwMDg3IHI1OjgwMDUx
MzIwIHI0OjAwMDAwMDAwIHIzOjgwM2NmOTNjCls8ODAyZDdlYjg+XSAoZHVtcF9zdGFjaysweDAv
MHgxYykgZnJvbSBbPDgwMDFiMWRjPl0gKHdhcm5fc2xvd3BhdGhfY29tbW9uKzB4NTQvMHg2YykK
Wzw4MDAxYjE4OD5dICh3YXJuX3Nsb3dwYXRoX2NvbW1vbisweDAvMHg2YykgZnJvbSBbPDgwMDFi
MjE4Pl0gKHdhcm5fc2xvd3BhdGhfbnVsbCsweDI0LzB4MmMpCiByODo4Nzg0ZTEwMCByNzowMDAw
MDAwMyByNjowMDAwMDA2NCByNTo4NzgwMDQ0MCByNDo4MDNkODk1MApyMzowMDAwMDAwOQpbPDgw
MDFiMWY0Pl0gKHdhcm5fc2xvd3BhdGhfbnVsbCsweDAvMHgyYykgZnJvbSBbPDgwMDUxMzIwPl0g
KGlycV9kb21haW5fbGVnYWN5X3Jldm1hcCsweDI4LzB4NTApCls8ODAwNTEyZjg+XSAoaXJxX2Rv
bWFpbl9sZWdhY3lfcmV2bWFwKzB4MC8weDUwKSBmcm9tIFs8ODAwNTEzZTg+XSAoaXJxX2ZpbmRf
bWFwcGluZysweGEwLzB4ZDApCls8ODAwNTEzNDg+XSAoaXJxX2ZpbmRfbWFwcGluZysweDAvMHhk
MCkgZnJvbSBbPDgwMDUxODI0Pl0gKGlycV9jcmVhdGVfbWFwcGluZysweDI4LzB4MTI4KQogcjg6
ODc4NGUxMDAgcjc6MDAwMDAwMDMgcjY6MDAwMDAwNjQgcjU6ODc4MzFkZDAgcjQ6ODc4MDA0NDAK
cjM6ODc4MzFkYTQKWzw4MDA1MTdmYz5dIChpcnFfY3JlYXRlX21hcHBpbmcrMHgwLzB4MTI4KSBm
cm9tIFs8ODAwNTE5YTg+XSAoaXJxX2NyZWF0ZV9vZl9tYXBwaW5nKzB4ODQvMHhmOCkKIHI3OjAw
MDAwMDAzIHI2OjgwNTA4OGE4IHI1Ojg3ODMxZGQwIHI0Ojg3ODAwNDQwCls8ODAwNTE5MjQ+XSAo
aXJxX2NyZWF0ZV9vZl9tYXBwaW5nKzB4MC8weGY4KSBmcm9tIFs8ODAyM2NkYTg+XSAoaXJxX29m
X3BhcnNlX2FuZF9tYXArMHgzNC8weDNjKQogcjc6MDAwMDAwMDAgcjY6ODA1MDg5ZGMgcjU6MDAw
MDAwMDAgcjQ6MDAwMDAwMDAKWzw4MDIzY2Q3ND5dIChpcnFfb2ZfcGFyc2VfYW5kX21hcCsweDAv
MHgzYykgZnJvbSBbPDgwMjNjZGQwPl0gKG9mX2lycV90b19yZXNvdXJjZSsweDIwLzB4N2MpCls8
ODAyM2NkYjA+XSAob2ZfaXJxX3RvX3Jlc291cmNlKzB4MC8weDdjKSBmcm9tIFs8ODAyM2NlNTg+
XSAob2ZfaXJxX2NvdW50KzB4MmMvMHgzYykKIHI3OjAwMDAwMDAwIHI2OjgwNTA4OWRjIHI1Ojgw
NTA4OWRjIHI0OjAwMDAwMDAwCls8ODAyM2NlMmM+XSAob2ZfaXJxX2NvdW50KzB4MC8weDNjKSBm
cm9tIFs8ODAyM2Q0MWM+XSAob2ZfZGV2aWNlX2FsbG9jKzB4NWMvMHgxNWMpCiByNTowMDAwMDAw
MCByNDowMDAwMDAwMApbPDgwMjNkM2MwPl0gKG9mX2RldmljZV9hbGxvYysweDAvMHgxNWMpIGZy
b20gWzw4MDIzZDU1OD5dIChvZl9wbGF0Zm9ybV9kZXZpY2VfY3JlYXRlX3BkYXRhKzB4M2MvMHg4
OCkKWzw4MDIzZDUxYz5dIChvZl9wbGF0Zm9ybV9kZXZpY2VfY3JlYXRlX3BkYXRhKzB4MC8weDg4
KSBmcm9tIFs8ODAyM2Q2Nzg+XSAob2ZfcGxhdGZvcm1fYnVzX2NyZWF0ZSsweGQ0LzB4Mjc4KQog
cjc6MDAwMDAwMDEgcjY6MDAwMDAwMDAgcjU6ODAzYmMyNjAgcjQ6ODA1MDg5ZGMKWzw4MDIzZDVh
ND5dIChvZl9wbGF0Zm9ybV9idXNfY3JlYXRlKzB4MC8weDI3OCkgZnJvbSBbPDgwMjNkODg0Pl0g
KG9mX3BsYXRmb3JtX3BvcHVsYXRlKzB4NjgvMHhhMCkKWzw4MDIzZDgxYz5dIChvZl9wbGF0Zm9y
bV9wb3B1bGF0ZSsweDAvMHhhMCkgZnJvbSBbPDgwM2E2YzM0Pl0gKHYybV9kdF9pbml0KzB4MmMv
MHg0YykKWzw4MDNhNmMwOD5dICh2Mm1fZHRfaW5pdCsweDAvMHg0YykgZnJvbSBbPDgwM2EyYzBj
Pl0gKGN1c3RvbWl6ZV9tYWNoaW5lKzB4MjQvMHgzMCkKWzw4MDNhMmJlOD5dIChjdXN0b21pemVf
bWFjaGluZSsweDAvMHgzMCkgZnJvbSBbPDgwMDA4NjNjPl0gKGRvX29uZV9pbml0Y2FsbCsweDQw
LzB4MTg0KQpbPDgwMDA4NWZjPl0gKGRvX29uZV9pbml0Y2FsbCsweDAvMHgxODQpIGZyb20gWzw4
MDNhMTg4MD5dIChrZXJuZWxfaW5pdCsweGYwLzB4MWFjKQpbPDgwM2ExNzkwPl0gKGtlcm5lbF9p
bml0KzB4MC8weDFhYykgZnJvbSBbPDgwMDFmN2I0Pl0gKGRvX2V4aXQrMHgwLzB4NmJjKQotLS1b
IGVuZCB0cmFjZSAxYjc1YjMxYTI3MTllZDFkIF0tLS0KLS0tLS0tLS0tLS0tWyBjdXQgaGVyZSBd
LS0tLS0tLS0tLS0tCldBUk5JTkc6IGF0IGtlcm5lbC9pcnEvaXJxZG9tYWluLmM6MTM1IGlycV9k
b21haW5fbGVnYWN5X3Jldm1hcCsweDI4LzB4NTAoKQpNb2R1bGVzIGxpbmtlZCBpbjoKQmFja3Ry
YWNlOiAKWzw4MDAxMWIwYz5dIChkdW1wX2JhY2t0cmFjZSsweDAvMHgxMGMpIGZyb20gWzw4MDJk
N2VkMD5dIChkdW1wX3N0YWNrKzB4MTgvMHgxYykKIHI2OjAwMDAwMDg3IHI1OjgwMDUxMzIwIHI0
OjAwMDAwMDAwIHIzOjgwM2NmOTNjCls8ODAyZDdlYjg+XSAoZHVtcF9zdGFjaysweDAvMHgxYykg
ZnJvbSBbPDgwMDFiMWRjPl0gKHdhcm5fc2xvd3BhdGhfY29tbW9uKzB4NTQvMHg2YykKWzw4MDAx
YjE4OD5dICh3YXJuX3Nsb3dwYXRoX2NvbW1vbisweDAvMHg2YykgZnJvbSBbPDgwMDFiMjE4Pl0g
KHdhcm5fc2xvd3BhdGhfbnVsbCsweDI0LzB4MmMpCiByODo4Nzg0ZTEwMCByNzowMDAwMDAwMyBy
NjowMDAwMDA2NCByNTowMDAwMDAwMCByNDo4NzgwMDQ0MApyMzowMDAwMDAwOQpbPDgwMDFiMWY0
Pl0gKHdhcm5fc2xvd3BhdGhfbnVsbCsweDAvMHgyYykgZnJvbSBbPDgwMDUxMzIwPl0gKGlycV9k
b21haW5fbGVnYWN5X3Jldm1hcCsweDI4LzB4NTApCls8ODAwNTEyZjg+XSAoaXJxX2RvbWFpbl9s
ZWdhY3lfcmV2bWFwKzB4MC8weDUwKSBmcm9tIFs8ODAwNTE4YWM+XSAoaXJxX2NyZWF0ZV9tYXBw
aW5nKzB4YjAvMHgxMjgpCls8ODAwNTE3ZmM+XSAoaXJxX2NyZWF0ZV9tYXBwaW5nKzB4MC8weDEy
OCkgZnJvbSBbPDgwMDUxOWE4Pl0gKGlycV9jcmVhdGVfb2ZfbWFwcGluZysweDg0LzB4ZjgpCiBy
NzowMDAwMDAwMyByNjo4MDUwODhhOCByNTo4NzgzMWRkMCByNDo4NzgwMDQ0MApbPDgwMDUxOTI0
Pl0gKGlycV9jcmVhdGVfb2ZfbWFwcGluZysweDAvMHhmOCkgZnJvbSBbPDgwMjNjZGE4Pl0gKGly
cV9vZl9wYXJzZV9hbmRfbWFwKzB4MzQvMHgzYykKIHI3OjAwMDAwMDAwIHI2OjgwNTA4OWRjIHI1
OjAwMDAwMDAwIHI0OjAwMDAwMDAwCls8ODAyM2NkNzQ+XSAoaXJxX29mX3BhcnNlX2FuZF9tYXAr
MHgwLzB4M2MpIGZyb20gWzw4MDIzY2RkMD5dIChvZl9pcnFfdG9fcmVzb3VyY2UrMHgyMC8weDdj
KQpbPDgwMjNjZGIwPl0gKG9mX2lycV90b19yZXNvdXJjZSsweDAvMHg3YykgZnJvbSBbPDgwMjNj
ZTU4Pl0gKG9mX2lycV9jb3VudCsweDJjLzB4M2MpCiByNzowMDAwMDAwMCByNjo4MDUwODlkYyBy
NTo4MDUwODlkYyByNDowMDAwMDAwMApbPDgwMjNjZTJjPl0gKG9mX2lycV9jb3VudCsweDAvMHgz
YykgZnJvbSBbPDgwMjNkNDFjPl0gKG9mX2RldmljZV9hbGxvYysweDVjLzB4MTVjKQogcjU6MDAw
MDAwMDAgcjQ6MDAwMDAwMDAKWzw4MDIzZDNjMD5dIChvZl9kZXZpY2VfYWxsb2MrMHgwLzB4MTVj
KSBmcm9tIFs8ODAyM2Q1NTg+XSAob2ZfcGxhdGZvcm1fZGV2aWNlX2NyZWF0ZV9wZGF0YSsweDNj
LzB4ODgpCls8ODAyM2Q1MWM+XSAob2ZfcGxhdGZvcm1fZGV2aWNlX2NyZWF0ZV9wZGF0YSsweDAv
MHg4OCkgZnJvbSBbPDgwMjNkNjc4Pl0gKG9mX3BsYXRmb3JtX2J1c19jcmVhdGUrMHhkNC8weDI3
OCkKIHI3OjAwMDAwMDAxIHI2OjAwMDAwMDAwIHI1OjgwM2JjMjYwIHI0OjgwNTA4OWRjCls8ODAy
M2Q1YTQ+XSAob2ZfcGxhdGZvcm1fYnVzX2NyZWF0ZSsweDAvMHgyNzgpIGZyb20gWzw4MDIzZDg4
ND5dIChvZl9wbGF0Zm9ybV9wb3B1bGF0ZSsweDY4LzB4YTApCls8ODAyM2Q4MWM+XSAob2ZfcGxh
dGZvcm1fcG9wdWxhdGUrMHgwLzB4YTApIGZyb20gWzw4MDNhNmMzND5dICh2Mm1fZHRfaW5pdCsw
eDJjLzB4NGMpCls8ODAzYTZjMDg+XSAodjJtX2R0X2luaXQrMHgwLzB4NGMpIGZyb20gWzw4MDNh
MmMwYz5dIChjdXN0b21pemVfbWFjaGluZSsweDI0LzB4MzApCls8ODAzYTJiZTg+XSAoY3VzdG9t
aXplX21hY2hpbmUrMHgwLzB4MzApIGZyb20gWzw4MDAwODYzYz5dIChkb19vbmVfaW5pdGNhbGwr
MHg0MC8weDE4NCkKWzw4MDAwODVmYz5dIChkb19vbmVfaW5pdGNhbGwrMHgwLzB4MTg0KSBmcm9t
IFs8ODAzYTE4ODA+XSAoa2VybmVsX2luaXQrMHhmMC8weDFhYykKWzw4MDNhMTc5MD5dIChrZXJu
ZWxfaW5pdCsweDAvMHgxYWMpIGZyb20gWzw4MDAxZjdiND5dIChkb19leGl0KzB4MC8weDZiYykK
LS0tWyBlbmQgdHJhY2UgMWI3NWIzMWEyNzE5ZWQxZSBdLS0tClNlcmlhbDogQU1CQSBQTDAxMSBV
QVJUIGRyaXZlcgoxYzA5MDAwMC51YXJ0OiB0dHlBTUEwIGF0IE1NSU8gMHgxYzA5MDAwMCAoaXJx
ID0gMzcpIGlzIGEgUEwwMTEgcmV2MQoxYzBhMDAwMC51YXJ0OiB0dHlBTUExIGF0IE1NSU8gMHgx
YzBhMDAwMCAoaXJxID0gMzgpIGlzIGEgUEwwMTEgcmV2MQpjb25zb2xlIFt0dHlBTUExXSBlbmFi
bGVkLCBib290Y29uc29sZSBkaXNhYmxlZAooWEVOKSBiYWQgcDJtIGxvb2t1cAooWEVOKSBkb20x
MTMgSVBBIDB4MDAwMDAwMDA5MDAwMDAwMAooWEVOKSBQMk0gQCAwMmZmY2FjMCBtZm46MHhmZmU1
NgooWEVOKSAxU1RbMHgyXSA9IDB4MDAwMDAwMDBmM2Y2ODZmZgooWEVOKSAyTkRbMHg4MF0gPSAw
eDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgYmFkIHAybSBsb29rdXAKKFhFTikgZG9tMTEzIElQQSAw
eDAwMDAwMDAwOTAwMDEwMDAKKFhFTikgUDJNIEAgMDJmZmNhYzAgbWZuOjB4ZmZlNTYKKFhFTikg
MVNUWzB4Ml0gPSAweDAwMDAwMDAwZjNmNjg2ZmYKKFhFTikgMk5EWzB4ODBdID0gMHgwMDAwMDAw
MDAwMDAwMDAwCihYRU4pIERPTTExMzogVW5jb21wcmVzc2luZyBMaW51eC4uLiBkb25lLCBib290
aW5nIHRoZSBrZXJuZWwuCkJvb3RpbmcgTGludXggb24gcGh5c2ljYWwgQ1BVIDAKTGludXggdmVy
c2lvbiAzLjUuMC1yYzcrIChyb290QHR1eikgKGdjYyB2ZXJzaW9uIDQuNi4zIChVYnVudHUvTGlu
YXJvIDQuNi4zLTF1YnVudHU1KSApICMxMyBXZWQgQXVnIDEgMTc6MzI6MjAgTVNLIDIwMTIKQ1BV
OiBBUk12NyBQcm9jZXNzb3IgWzQxMmZjMGYwXSByZXZpc2lvbiAwIChBUk12NyksIGNyPTEwYzUz
YzdkCkNQVTogUElQVCAvIFZJUFQgbm9uYWxpYXNpbmcgZGF0YSBjYWNoZSwgUElQVCBpbnN0cnVj
dGlvbiBjYWNoZQpNYWNoaW5lOiBBUk0tVmVyc2F0aWxlIEV4cHJlc3MsIG1vZGVsOiBWMlAtQUVN
djdBCmJvb3Rjb25zb2xlIFt4ZW5ib290MF0gZW5hYmxlZApNZW1vcnkgcG9saWN5OiBFQ0MgZGlz
YWJsZWQsIERhdGEgY2FjaGUgd3JpdGViYWNrCk9uIG5vZGUgMCB0b3RhbHBhZ2VzOiAzMjc2OApm
cmVlX2FyZWFfaW5pdF9ub2RlOiBub2RlIDAsIHBnZGF0IDgwM2U0ZDI0LCBub2RlX21lbV9tYXAg
ODA0MDEwMDAKICBOb3JtYWwgem9uZTogMjU2IHBhZ2VzIHVzZWQgZm9yIG1lbW1hcAogIE5vcm1h
bCB6b25lOiAwIHBhZ2VzIHJlc2VydmVkCiAgTm9ybWFsIHpvbmU6IDMyNTEyIHBhZ2VzLCBMSUZP
IGJhdGNoOjcKLS0tLS0tLS0tLS0tWyBjdXQgaGVyZSBdLS0tLS0tLS0tLS0tCgpXQVJOSU5HOiBh
dCBhcmNoL2FybS9tYWNoLXZleHByZXNzL3YybS5jOjYwMyB2Mm1fZHRfaW5pdF9lYXJseSsweDQ0
LzB4ZWMoKQpNb2R1bGVzIGxpbmtlZCBpbjoKQmFja3RyYWNlOiAKWzw4MDAxMWIwYz5dIChkdW1w
X2JhY2t0cmFjZSsweDAvMHgxMGMpIGZyb20gWzw4MDJkMWU1Yz5dIChkdW1wX3N0YWNrKzB4MTgv
MHgxYykKIHI2OjAwMDAwMjViIHI1OjgwM2EwZTM0IHI0OjAwMDAwMDAwIHIzOjgwM2M5OTNjCls8
ODAyZDFlNDQ+XSAoZHVtcF9zdGFjaysweDAvMHgxYykgZnJvbSBbPDgwMDFiMWRjPl0gKHdhcm5f
c2xvd3BhdGhfY29tbW9uKzB4NTQvMHg2YykKWzw4MDAxYjE4OD5dICh3YXJuX3Nsb3dwYXRoX2Nv
bW1vbisweDAvMHg2YykgZnJvbSBbPDgwMDFiMjE4Pl0gKHdhcm5fc2xvd3BhdGhfbnVsbCsweDI0
LzB4MmMpCiByODo4MDNjNzMzOCByNzo4MDUwMTQ0MCByNjo4MDAwMDIwMCByNTo4MDNlZGEwOCBy
NDo4MDNlNWViOApyMzowMDAwMDAwOQpbPDgwMDFiMWY0Pl0gKHdhcm5fc2xvd3BhdGhfbnVsbCsw
eDAvMHgyYykgZnJvbSBbPDgwM2EwZTM0Pl0gKHYybV9kdF9pbml0X2Vhcmx5KzB4NDQvMHhlYykK
Wzw4MDNhMGRmMD5dICh2Mm1fZHRfaW5pdF9lYXJseSsweDAvMHhlYykgZnJvbSBbPDgwMzlkNjlj
Pl0gKHNldHVwX2FyY2grMHg3MTAvMHg3ZmMpCiByNDo4MDNiNGFjYwpbPDgwMzljZjhjPl0gKHNl
dHVwX2FyY2grMHgwLzB4N2ZjKSBmcm9tIFs8ODAzOWI1OWM+XSAoc3RhcnRfa2VybmVsKzB4Nzgv
MHgyNmMpCls8ODAzOWI1MjQ+XSAoc3RhcnRfa2VybmVsKzB4MC8weDI2YykgZnJvbSBbPDgwMDA4
MDQwPl0gKDB4ODAwMDgwNDApCiByNzo4MDNjNzI4NCByNjo4MDNiNWUzMCByNTo4MDNjNDA1NCBy
NDoxMGM1M2M3ZAotLS1bIGVuZCB0cmFjZSAxYjc1YjMxYTI3MTllZDFjIF0tLS0KcGNwdS1hbGxv
YzogczAgcjAgZDMyNzY4IHUzMjc2OCBhbGxvYz0xKjMyNzY4CnBjcHUtYWxsb2M6IFswXSAwIApC
dWlsdCAxIHpvbmVsaXN0cyBpbiBab25lIG9yZGVyLCBtb2JpbGl0eSBncm91cGluZyBvbi4gIFRv
dGFsIHBhZ2VzOiAzMjUxMgpLZXJuZWwgY29tbWFuZCBsaW5lOiBlYXJseXByaW50az14ZW4gZGVi
dWcgbG9nbGV2ZWw9OSBjb25zb2xlPWh2YzAgcm9vdD0vZGV2L3h2ZGEKUElEIGhhc2ggdGFibGUg
ZW50cmllczogNTEyIChvcmRlcjogLTEsIDIwNDggYnl0ZXMpCkRlbnRyeSBjYWNoZSBoYXNoIHRh
YmxlIGVudHJpZXM6IDE2Mzg0IChvcmRlcjogNCwgNjU1MzYgYnl0ZXMpCklub2RlLWNhY2hlIGhh
c2ggdGFibGUgZW50cmllczogODE5MiAob3JkZXI6IDMsIDMyNzY4IGJ5dGVzKQpNZW1vcnk6IDEy
OE1CID0gMTI4TUIgdG90YWwKTWVtb3J5OiAxMjU3OTZrLzEyNTc5NmsgYXZhaWxhYmxlLCA1Mjc2
ayByZXNlcnZlZCwgMEsgaGlnaG1lbQpWaXJ0dWFsIGtlcm5lbCBtZW1vcnkgbGF5b3V0OgogICAg
dmVjdG9yICA6IDB4ZmZmZjAwMDAgLSAweGZmZmYxMDAwICAgKCAgIDQga0IpCiAgICBmaXhtYXAg
IDogMHhmZmYwMDAwMCAtIDB4ZmZmZTAwMDAgICAoIDg5NiBrQikKICAgIHZtYWxsb2MgOiAweDg4
ODAwMDAwIC0gMHhmZjAwMDAwMCAgICgxODk2IE1CKQogICAgbG93bWVtICA6IDB4ODAwMDAwMDAg
LSAweDg4MDAwMDAwICAgKCAxMjggTUIpCiAgICBtb2R1bGVzIDogMHg3ZjAwMDAwMCAtIDB4ODAw
MDAwMDAgICAoICAxNiBNQikKICAgICAgLnRleHQgOiAweDgwMDA4MDAwIC0gMHg4MDM5YjAwMCAg
ICgzNjYwIGtCKQogICAgICAuaW5pdCA6IDB4ODAzOWIwMDAgLSAweDgwM2JiNmM0ICAgKCAxMzAg
a0IpCiAgICAgIC5kYXRhIDogMHg4MDNiYzAwMCAtIDB4ODAzZTU0NDAgICAoIDE2NiBrQikKICAg
ICAgIC5ic3MgOiAweDgwM2U1NDY0IC0gMHg4MDQwMGZlNCAgICggMTExIGtCKQpTTFVCOiBHZW5z
bGFicz0xMSwgSFdhbGlnbj02NCwgT3JkZXI9MC0zLCBNaW5PYmplY3RzPTAsIENQVXM9MSwgTm9k
ZXM9MQpOUl9JUlFTOjI1NgotLS0tLS0tLS0tLS1bIGN1dCBoZXJlIF0tLS0tLS0tLS0tLS0KV0FS
TklORzogYXQgYXJjaC9hcm0vbWFjaC12ZXhwcmVzcy92Mm0uYzo2MiB2Mm1fc3lzY3RsX2luaXQr
MHgyMC8weDU4KCkKTW9kdWxlcyBsaW5rZWQgaW46CkJhY2t0cmFjZTogCls8ODAwMTFiMGM+XSAo
ZHVtcF9iYWNrdHJhY2UrMHgwLzB4MTBjKSBmcm9tIFs8ODAyZDFlNWM+XSAoZHVtcF9zdGFjaysw
eDE4LzB4MWMpCiByNjowMDAwMDAzZSByNTo4MDNhMGEyNCByNDowMDAwMDAwMCByMzo4MDNjOTkz
YwpbPDgwMmQxZTQ0Pl0gKGR1bXBfc3RhY2srMHgwLzB4MWMpIGZyb20gWzw4MDAxYjFkYz5dICh3
YXJuX3Nsb3dwYXRoX2NvbW1vbisweDU0LzB4NmMpCls8ODAwMWIxODg+XSAod2Fybl9zbG93cGF0
aF9jb21tb24rMHgwLzB4NmMpIGZyb20gWzw4MDAxYjIxOD5dICh3YXJuX3Nsb3dwYXRoX251bGwr
MHgyNC8weDJjKQogcjg6ODAwMDQwNTkgcjc6ODA1MDFiODAgcjY6ODAzYjVlMzQgcjU6ODAzZTU0
ODAgcjQ6MDAwMDAwMDAKcjM6MDAwMDAwMDkKWzw4MDAxYjFmND5dICh3YXJuX3Nsb3dwYXRoX251
bGwrMHgwLzB4MmMpIGZyb20gWzw4MDNhMGEyND5dICh2Mm1fc3lzY3RsX2luaXQrMHgyMC8weDU4
KQpbPDgwM2EwYTA0Pl0gKHYybV9zeXNjdGxfaW5pdCsweDAvMHg1OCkgZnJvbSBbPDgwM2EwYjI0
Pl0gKHYybV9kdF90aW1lcl9pbml0KzB4MmMvMHhjYykKIHI1OjgwM2U1NDgwIHI0OmZmZmZmZmZm
Cls8ODAzYTBhZjg+XSAodjJtX2R0X3RpbWVyX2luaXQrMHgwLzB4Y2MpIGZyb20gWzw4MDM5ZDgz
MD5dICh0aW1lX2luaXQrMHgyOC8weDM4KQogcjY6ODAzYjVlMzQgcjU6ODAzZTU0ODAgcjQ6ZmZm
ZmZmZmYKWzw4MDM5ZDgwOD5dICh0aW1lX2luaXQrMHgwLzB4MzgpIGZyb20gWzw4MDM5YjZhYz5d
IChzdGFydF9rZXJuZWwrMHgxODgvMHgyNmMpCls8ODAzOWI1MjQ+XSAoc3RhcnRfa2VybmVsKzB4
MC8weDI2YykgZnJvbSBbPDgwMDA4MDQwPl0gKDB4ODAwMDgwNDApCiByNzo4MDNjNzI4NCByNjo4
MDNiNWUzMCByNTo4MDNjNDA1NCByNDoxMGM1M2M3ZAotLS1bIGVuZCB0cmFjZSAxYjc1YjMxYTI3
MTllZDFkIF0tLS0KYXJjaF90aW1lcjogZm91bmQgdGltZXIgaXJxcyAyOSAzMApBcmNoaXRlY3Rl
ZCBsb2NhbCB0aW1lciBydW5uaW5nIGF0IDEwMC4wME1Iei4Kc2NoZWRfY2xvY2s6IDMyIGJpdHMg
YXQgMTAwTUh6LCByZXNvbHV0aW9uIDEwbnMsIHdyYXBzIGV2ZXJ5IDQyOTQ5bXMKLS0tLS0tLS0t
LS0tWyBjdXQgaGVyZSBdLS0tLS0tLS0tLS0tCldBUk5JTkc6IGF0IGFyY2gvYXJtL21hY2gtdmV4
cHJlc3MvdjJtLmM6NjQ3IHYybV9kdF90aW1lcl9pbml0KzB4N2MvMHhjYygpCk1vZHVsZXMgbGlu
a2VkIGluOgpCYWNrdHJhY2U6IApbPDgwMDExYjBjPl0gKGR1bXBfYmFja3RyYWNlKzB4MC8weDEw
YykgZnJvbSBbPDgwMmQxZTVjPl0gKGR1bXBfc3RhY2srMHgxOC8weDFjKQogcjY6MDAwMDAyODcg
cjU6ODAzYTBiNzQgcjQ6MDAwMDAwMDAgcjM6ODAzYzk5M2MKWzw4MDJkMWU0ND5dIChkdW1wX3N0
YWNrKzB4MC8weDFjKSBmcm9tIFs8ODAwMWIxZGM+XSAod2Fybl9zbG93cGF0aF9jb21tb24rMHg1
NC8weDZjKQpbPDgwMDFiMTg4Pl0gKHdhcm5fc2xvd3BhdGhfY29tbW9uKzB4MC8weDZjKSBmcm9t
IFs8ODAwMWIyMTg+XSAod2Fybl9zbG93cGF0aF9udWxsKzB4MjQvMHgyYykKIHI4OjgwMDA0MDU5
IHI3OjgwNTAxYjgwIHI2OjgwM2I1ZTM0IHI1OjgwM2U1NDgwIHI0OmZmZmZmZmVhCnIzOjAwMDAw
MDA5Cls8ODAwMWIxZjQ+XSAod2Fybl9zbG93cGF0aF9udWxsKzB4MC8weDJjKSBmcm9tIFs8ODAz
YTBiNzQ+XSAodjJtX2R0X3RpbWVyX2luaXQrMHg3Yy8weGNjKQpbPDgwM2EwYWY4Pl0gKHYybV9k
dF90aW1lcl9pbml0KzB4MC8weGNjKSBmcm9tIFs8ODAzOWQ4MzA+XSAodGltZV9pbml0KzB4Mjgv
MHgzOCkKIHI2OjgwM2I1ZTM0IHI1OjgwM2U1NDgwIHI0OmZmZmZmZmZmCls8ODAzOWQ4MDg+XSAo
dGltZV9pbml0KzB4MC8weDM4KSBmcm9tIFs8ODAzOWI2YWM+XSAoc3RhcnRfa2VybmVsKzB4MTg4
LzB4MjZjKQpbPDgwMzliNTI0Pl0gKHN0YXJ0X2tlcm5lbCsweDAvMHgyNmMpIGZyb20gWzw4MDAw
ODA0MD5dICgweDgwMDA4MDQwKQogcjc6ODAzYzcyODQgcjY6ODAzYjVlMzAgcjU6ODAzYzQwNTQg
cjQ6MTBjNTNjN2QKLS0tWyBlbmQgdHJhY2UgMWI3NWIzMWEyNzE5ZWQxZSBdLS0tCkNvbnNvbGU6
IGNvbG91ciBkdW1teSBkZXZpY2UgODB4MzAKWGVuIHN1cHBvcnQgZm91bmQsIGV2ZW50c19pcnE9
MzEgZ250dGFiX2ZyYW1lX3Bmbj1iMDAwMApHcmFudCB0YWJsZXMgdXNpbmcgdmVyc2lvbiAxIGxh
eW91dC4KR3JhbnQgdGFibGUgaW5pdGlhbGl6ZWQKQ2FsaWJyYXRpbmcgZGVsYXkgbG9vcC4uLiA5
OS4yMiBCb2dvTUlQUyAobHBqPTQ5NjEyOCkKcGlkX21heDogZGVmYXVsdDogMzI3NjggbWluaW11
bTogMzAxCk1vdW50LWNhY2hlIGhhc2ggdGFibGUgZW50cmllczogNTEyCkNQVTogVGVzdGluZyB3
cml0ZSBidWZmZXIgY29oZXJlbmN5OiBvawpTZXR0aW5nIHVwIHN0YXRpYyBpZGVudGl0eSBtYXAg
Zm9yIDB4ODAyZDYwYzAgLSAweDgwMmQ2MGY0ClhlbiBzdXBwb3J0IGZvdW5kLCBldmVudHNfaXJx
PTMxIGdudHRhYl9mcmFtZV9wZm49YjAwMDAKTkVUOiBSZWdpc3RlcmVkIHByb3RvY29sIGZhbWls
eSAxNgooWEVOKSBHdWVzdCBkYXRhIGFib3J0OiBUcmFuc2xhdGlvbiBmYXVsdCBhdCBsZXZlbCAy
CihYRU4pICAgICBndmE9ODg4MDg4MDQKKFhFTikgICAgIGdwYT0wMDAwMDAwMDkwMDAxODA0CihY
RU4pICAgICBzaXplPTIgc2lnbj0wIHdyaXRlPTAgcmVnPTIKKFhFTikgICAgIGVhdD0wIGNtPTAg
czFwdHc9MCBkZnNjPTYKKFhFTikgZG9tMTEzIElQQSAweDAwMDAwMDAwOTAwMDE4MDQKKFhFTikg
UDJNIEAgMDJmZmNhYzAgbWZuOjB4ZmZlNTYKKFhFTikgMVNUWzB4Ml0gPSAweDAwMDAwMDAwZjNm
Njg2ZmYKKFhFTikgMk5EWzB4ODBdID0gMHgwMDAwMDAwMDAwMDAwMDAwCihYRU4pIC0tLS1bIFhl
bi00LjItdW5zdGFibGUgIHg4Nl82NCAgZGVidWc9eSAgTm90IHRhaW50ZWQgXS0tLS0KKFhFTikg
Q1BVOiAgICAwCihYRU4pIFBDOiAgICAgODAxNzU2MWMKKFhFTikgQ1BTUjogICAyMDAwMDAxMyBN
T0RFOlNWQwooWEVOKSAgICAgIFIwOiA4MDNmOTkyNCBSMTogODAzNjFhOTggUjI6IDgwM2Y5OTNj
IFIzOiA4MDNmOTk0NAooWEVOKSAgICAgIFI0OiA4ODgwODAwMCBSNTogODAzZjk5M2MgUjY6IDAw
MDA3ZmYwIFI3OiAwMDAwMDAwMQooWEVOKSAgICAgIFI4OiA4MDM5YjI1YyBSOTogODAzYmIzOGMg
UjEwOjgwM2U1NDgwIFIxMTo4NzgyZmYwYyBSMTI6ODc4MmZmMTAKKFhFTikgVVNSOiBTUDogMDAw
MDAwMDAgTFI6IDAwMDAwMDAwIENQU1I6MjAwMDAwMTMKKFhFTikgU1ZDOiBTUDogODc4MmZlZTgg
TFI6IDgwMTc2OTg4IFNQU1I6MDAwMDAwOTMKKFhFTikgQUJUOiBTUDogODAzZTVjMGMgTFI6IDgw
M2U1YzBjIFNQU1I6MDAwMDAwMDAKKFhFTikgVU5EOiBTUDogODAzZTVjMTggTFI6IDgwM2U1YzE4
IFNQU1I6MDAwMDAwMDAKKFhFTikgSVJROiBTUDogODAzZTVjMDAgTFI6IDgwMDBkZmMwIFNQU1I6
NjAwMDAxOTMKKFhFTikgRklROiBTUDogMDAwMDAwMDAgTFI6IDAwMDAwMDAwIFNQU1I6MDAwMDAw
MDAKKFhFTikgRklROiBSODogMDAwMDAwMDAgUjk6IDAwMDAwMDAwIFIxMDowMDAwMDAwMCBSMTE6
MDAwMDAwMDAgUjEyOjAwMDAwMDAwCihYRU4pIAooWEVOKSBUVEJSMCA4MDAwNDA1OSBUVEJSMSA4
MDAwNDA1OSBUVEJDUiAwMDAwMDAwMAooWEVOKSBTQ1RMUiAxMGM1M2M3ZAooWEVOKSBWVFRCUiA3
MjAwMDBmZmU1NjAwMAooWEVOKSAKKFhFTikgSFRUQlIgZmZlYzIwMDAKKFhFTikgSERGQVIgODg4
MDg4MDQKKFhFTikgSElGQVIgMAooWEVOKSBIUEZBUiA5MDAwMTAKKFhFTikgSENSIDAwMDAwODM1
CihYRU4pIEhTUiAgIDkzODIwMDA2CihYRU4pIAooWEVOKSBERlNSIDAgREZBUiAwCihYRU4pIElG
U1IgMCBJRkFSIDAKKFhFTikgCihYRU4pIEdVRVNUIFNUQUNLIEdPRVMgSEVSRQooWEVOKSAKKFhF
TikgKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKgooWEVOKSBQYW5pYyBv
biBDUFUgMDoKKFhFTikgVW5oYW5kbGVkIGd1ZXN0IGRhdGEgYWJvcnQKKFhFTikgKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKgooWEVOKSAKKFhFTikgUmVib290IGluIGZp
dmUgc2Vjb25kcy4uLgoKCnJvb3RAdHV6Oi91c3IvbGliL3hlbi9iaW4jIC4veGNidWlsZC1vbGQg
L2Jvb3QvekltYWdlZHRiIApJbWFnZTogL2Jvb3QvekltYWdlZHRiCk1lbW9yeTogMjY0MTkyS0IK
eGNfZG9tYWluX2NyZWF0ZTogMCAoMCkKYnVpbGRpbmcgZG9tMTEzCmRvbWFpbmJ1aWxkZXI6IGRl
dGFpbDogeGNfZG9tX2FsbG9jYXRlOiBjbWRsaW5lPSIiLCBmZWF0dXJlcz0iKG51bGwpIgpkb21h
aW5idWlsZGVyOiBkZXRhaWw6IHhjX2RvbV9rZXJuZWxfZmlsZTogZmlsZW5hbWU9Ii9ib290L3pJ
bWFnZWR0YiIKZG9tYWluYnVpbGRlcjogZGV0YWlsOiB4Y19kb21fbWFsbG9jX2ZpbGVtYXAgICAg
OiAyMDA1IGtCCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogeGNfZG9tX2Jvb3RfeGVuX2luaXQ6IHZl
ciA0LjIsIGNhcHMgeGVuLTMuMC1hcm12N2wgCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogeGNfZG9t
X3BhcnNlX2ltYWdlOiBjYWxsZWQKZG9tYWluYnVpbGRlcjogZGV0YWlsOiB4Y19kb21fZmluZF9s
b2FkZXI6IHRyeWluZyBtdWx0aWJvb3QtYmluYXJ5IGxvYWRlciAuLi4gCmRvbWFpbmJ1aWxkZXI6
IGRldGFpbDogbG9hZGVyIHByb2JlIGZhaWxlZApkb21haW5idWlsZGVyOiBkZXRhaWw6IHhjX2Rv
bV9maW5kX2xvYWRlcjogdHJ5aW5nIExpbnV4IHpJbWFnZSAoQVJNKSBsb2FkZXIgLi4uIApkb21h
aW5idWlsZGVyOiBkZXRhaWw6IHhjX2RvbV9wcm9iZV96aW1hZ2Vfa2VybmVsOiBmb3VuZCBhbiBh
cHBlbmRlZCBEVEIKZG9tYWluYnVpbGRlcjogZGV0YWlsOiBsb2FkZXIgcHJvYmUgT0sKZG9tYWlu
YnVpbGRlcjogZGV0YWlsOiB4Y19kb21fcGFyc2VfemltYWdlX2tlcm5lbDogY2FsbGVkCmRvbWFp
bmJ1aWxkZXI6IGRldGFpbDogeGNfZG9tX3BhcnNlX3ppbWFnZV9rZXJuZWw6IHhlbi0zLjAtYXJt
djdsOiBSQU0gc3RhcnRzIGF0IDgwMDAwCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogeGNfZG9tX3Bh
cnNlX3ppbWFnZV9rZXJuZWw6IHhlbi0zLjAtYXJtdjdsOiAweDgwMDA4MDAwIC0+IDB4ODAxZmQ3
ZjEKZG9tYWluYnVpbGRlcjogZGV0YWlsOiB4Y19kb21fbWVtX2luaXQ6IG1lbSAyNTYgTUIsIHBh
Z2VzIDB4MTAwMDAgcGFnZXMsIDRrIGVhY2gKZG9tYWluYnVpbGRlcjogZGV0YWlsOiB4Y19kb21f
bWVtX2luaXQ6IDB4MTAwMDAgcGFnZXMKZG9tYWluYnVpbGRlcjogZGV0YWlsOiB4Y19kb21fYm9v
dF9tZW1faW5pdDogY2FsbGVkCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogeGNfZG9tX21hbGxvYyAg
ICAgICAgICAgIDogNTEyIGtCCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogcmVtYXBfYXJlYV9tZm5f
cHRlX2ZuOiBwdGVwIDg3MTIxNmEwIGFkZHIgMHg3NjlhODAwMCA9PiAweDkwMDAwMzBmIC8gMHg5
MDAwMAp4Y19kb21fYnVpbGRfaW1hcmVtYXBfYXJlYV9tZm5fcHRlX2ZuOiBwdGVwIDg3MTIxNmE0
IGFkZHIgMHg3NjlhOTAwMCA9PiAweDkwMDAxMzBmIC8gMHg5MDAwMQpnZTogY2FsbGVkCmRvbWFy
ZW1hcF9hcmVhX21mbl9wdGVfZm46IHB0ZXAgODcxMjE2YTggYWRkciAweDc2OWFhMDAwID0+IDB4
OTAwMDIzMGYgLyAweDkwMDAyCmluYnVpbGRlcjogZGV0YWlyZW1hcF9hcmVhX21mbl9wdGVfZm46
IHB0ZXAgODcxMjE2YWMgYWRkciAweDc2OWFiMDAwID0+IDB4OTAwMDMzMGYgLyAweDkwMDAzCmw6
IHhjX2RvbV9hbGxvY19yZW1hcF9hcmVhX21mbl9wdGVfZm46IHB0ZXAgODcxMjE2YjAgYWRkciAw
eDc2OWFjMDAwID0+IDB4OTAwMDQzMGYgLyAweDkwMDA0CnNlZ21lbnQ6ICAga2VybmVyZW1hcF9h
cmVhX21mbl9wdGVfZm46IHB0ZXAgODcxMjE2YjQgYWRkciAweDc2OWFkMDAwID0+IDB4OTAwMDUz
MGYgLyAweDkwMDA1CmwgICAgICAgOiAweDgwMDByZW1hcF9hcmVhX21mbl9wdGVfZm46IHB0ZXAg
ODcxMjE2YjggYWRkciAweDc2OWFlMDAwID0+IDB4OTAwMDYzMGYgLyAweDkwMDA2CjgwMDAgLT4g
MHg4MDFmZTByZW1hcF9hcmVhX21mbl9wdGVfZm46IHB0ZXAgODcxMjE2YmMgYWRkciAweDc2OWFm
MDAwID0+IDB4OTAwMDczMGYgLyAweDkwMDA3CjAwICAocGZuIDB4ODAwMDhyZW1hcF9hcmVhX21m
bl9wdGVfZm46IHB0ZXAgODcxMjE2YzAgYWRkciAweDc2OWIwMDAwID0+IDB4OTAwMDgzMGYgLyAw
eDkwMDA4CnJlbWFwX2FyZWFfbWZuX3B0ZV9mbjogcHRlcCA4NzEyMTZjNCBhZGRyIDB4NzY5YjEw
MDAgPT4gMHg5MDAwOTMwZiAvIDB4OTAwMDkKCnJlbWFwX2FyZWFfbWZuX3B0ZV9mbjogcHRlcCA4
NzEyMTZjOCBhZGRyIDB4NzY5YjIwMDAgPT4gMHg5MDAwYTMwZiAvIDB4OTAwMGEKcmVtYXBfYXJl
YV9tZm5fcHRlX2ZuOiBwdGVwIDg3MTIxNmNjIGFkZHIgMHg3NjliMzAwMCA9PiAweDkwMDBiMzBm
IC8gMHg5MDAwYgpyZW1hcF9hcmVhX21mbl9wdGVfZm46IHB0ZXAgODcxMjE2ZDAgYWRkciAweDc2
OWI0MDAwID0+IDB4OTAwMGMzMGYgLyAweDkwMDBjCnJlbWFwX2FyZWFfbWZuX3B0ZV9mbjogcHRl
cCA4NzEyMTZkNCBhZGRyIDB4NzY5YjUwMDAgPT4gMHg5MDAwZDMwZiAvIDB4OTAwMGQKcmVtYXBf
YXJlYV9tZm5fcHRlX2ZuOiBwdGVwIDg3MTIxNmQ4IGFkZHIgMHg3NjliNjAwMCA9PiAweDkwMDBl
MzBmIC8gMHg5MDAwZQpyZW1hcF9hcmVhX21mbl9wdGVfZm46IHB0ZXAgODcxMjE2ZGMgYWRkciAw
eDc2OWI3MDAwID0+IDB4OTAwMGYzMGYgLyAweDkwMDBmCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDog
eGNfZG9tX3Bmbl90b19wdHI6IGRvbVUgbWFwcGluZzogcGZuIDB4ODAwMDgrMHgxZjYgYXQgMHg3
NjlhODAwMApkb21haW5idWlsZGVyOiBkZXRhaWw6IHhjX2RvbV9sb2FkX3ppbWFnZV9rZXJuZWw6
IGNhbGxlZApkb21haW5idWlsZGVyOiBkZXRhaWw6IHhjX2RvbV9sb2FkX3ppbWFnZV9rZXJuZWw6
IGtlcmZvcmVpZ24gbWFwIGFkZF90b19waHlzbWFwIGZhaWxlZCwgZXJyPS0yMgpuZWwgc2VkIDB4
ODAwMDgwMDAtMHg4MDFmZTAwMApkZm9yZWlnbiBtYXAgYWRkX3RvX3BoeXNtYXAgZmFpbGVkLCBl
cnI9LTIyCm9tYWluYnVpbGRlcjogZGV0YWlsOiB4Y19kb21fbG9hZF96aW1hZ2Vfa2VybmVsOiBj
b3B5IDIwNTQxMjkgYnl0ZXMgZnJvbSBibG9iIDB4NzZjMWYwMDAgdG8gZHN0IDB4NzY5YTgwMDAK
ZG9tYWluYnVpbGRlcjogZGV0YWlsOiBhbGxvY19tYWdpY19wYWdlczogY2FsbGVkCmRvbWFpbmJ1
aWxkZXI6IGRldGFpbDogY291bnRfcGd0YWJsZXNfYXJtOiBjYWxsZWQKZG9tYWluYnVpbGRlcjog
ZGV0YWlsOiB4Y19kb21fYnVpbGRfaW1hZ2UgIDogdmlydF9hbGxvY19lbmQgOiAweDgwMWZlMDAw
CmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogeGNfZG9tX2J1aWxkX2ltYWdlICA6IHZpcnRfcGd0YWJf
ZW5kIDogMHgwCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogeGNfZG9tX2Jvb3RfaW1hZ2U6IGNhbGxl
ZApkb21haW5idWlsZGVyOiBkZXRhaWw6IGFyY2hfc2V0dXBfYm9vdGVhcmx5OiBkb2luZyBub3Ro
aW5nCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogeGNfZG9tX2NvbXBhdF9jaGVjazogc3VwcG9ydGVk
IGd1ZXN0IHR5cGU6IHhlbi0zLjAtYXJtdjdsIDw9IG1hdGNoZXMKZG9tYWluYnVpbGRlcjogZGV0
YWlsOiBzZXR1cF9wZ3RhYmxlc19hcm06IGNhbGxlZApkb21haW5idWlsZGVyOiBkZXRhaWw6IHN0
YXJ0X2luZm9fYXJtOiBjYWxsZWQKZG9tYWluYnVpbGRlcjogZGV0YWlsOiBkb21haW4gYnVpbGRl
ciBtZW1vcnkgZm9vdHByaW50CmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogICAgYWxsb2NhdGVkCmRv
bWFpbmJ1aWxkZXI6IGRldGFpbDogICAgICAgbWFsbG9jICAgICAgICAgICAgIDogNTI1IGtCCmRv
bWFpbmJ1aWxkZXI6IGRldGFpbDogICAgICAgYW5vbiBtbWFwICAgICAgICAgIDogMCBieXRlcwpk
b21haW5idWlsZGVyOiBkZXRhaWw6ICAgIG1hcHBlZApkb21haW5idWlsZGVyOiBkZXRhaWw6ICAg
ICAgIGZpbGUgbW1hcCAgICAgICAgICA6IDIwMDUga0IKZG9tYWluYnVpbGRlcjogZGV0YWlsOiAg
ICAgICBkb21VIG1tYXAgICAgICAgICAgOiAyMDA4IGtCCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDog
dmNwdV9hcm06IGNhbGxlZApkb21haW5idWlsZGVyOiBkZXRhaWw6IEluaXRpYWwgc3RhdGUgQ1BT
UiAweDFkMyBQQyAweDgwMDA4MDAwCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogbGF1bmNoX3ZtOiBj
YWxsZWQsIGN0eHQ9MHg3ZWFmMmEwYwpkb21haW5idWlsZGVyOiBkZXRhaWw6IHhjX2RvbV9yZWxl
YXNlOiBjYWxsZWQKeGM6IGRlYnVnOiBoeXBlcmNhbGwgYnVmZmVyOiB0b3RhbCBhbGxvY2F0aW9u
czoxNCB0b3RhbCByZWxlYXNlczoxNAp4YzogZGVidWc6IGh5cGVyY2FsbCBidWZmZXI6IGN1cnJl
bnQgYWxsb2NhdGlvbnM6MCBtYXhpbXVtIGFsbG9jYXRpb25zOjIKeGM6IGRlYnVnOiBoeXBlcmNh
bGwgYnVmZmVyOiBjYWNoZSBjdXJyZW50IHNpemU6Mgp4YzogZGVidWc6IGh5cGVyY2FsbCBidWZm
ZXI6IGNhY2hlIGhpdHM6MTEgbWlzc2VzOjIgdG9vYmlnOjEKCgo=
--14dae9340efb28ec1704c636d706
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--14dae9340efb28ec1704c636d706--


From xen-devel-bounces@lists.xen.org Wed Aug 01 16:44:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 16:44:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swc1H-0001zF-E8; Wed, 01 Aug 2012 16:43:35 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <t.wagner@inode.at>) id 1Swbsn-0001kd-0H
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 16:34:51 +0000
X-Env-Sender: t.wagner@inode.at
X-Msg-Ref: server-2.tower-27.messagelabs.com!1343838850!11782792!1
X-Originating-IP: [213.47.214.141]
X-SpamReason: No, hits=0.0 required=7.0 tests=spamassassin: 
	dGltZW91dCB3b3JraW5nIG9uOiAobm8gZmlsZSksIHJ1bGUgX19NTF80MTlfTk9SSVNLLAo=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4019 invoked from network); 1 Aug 2012 16:34:10 -0000
Received: from webmail.inode.at (HELO webmail.inode.at) (213.47.214.141)
	by server-2.tower-27.messagelabs.com with SMTP;
	1 Aug 2012 16:34:10 -0000
Received: from [127.0.0.1] (helo=inode.at) by webmail with smtp (Exim 4.67)
	(envelope-from <t.wagner@inode.at>) id 1Swbrt-00015E-UL
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 18:33:55 +0200
Received: from 85.126.176.235
	(SquirrelMail authenticated user t.wagner@inode.at)
	by webmail.inode.at with HTTP; Wed, 1 Aug 2012 18:33:53 +0200 (CEST)
Message-ID: <85.126.176.235.1343838833.wm@webmail.inode.at>
Date: Wed, 1 Aug 2012 18:33:53 +0200 (CEST)
From: "Dipl.-Ing. Thomas Wagner" <t.wagner@inode.at>
To: <xen-devel@lists.xen.org>
X-Priority: 3
Importance: Normal
X-Mailer: SquirrelMail (version 1.2.8)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="----=_20120801183353_31106"
X-Mailman-Approved-At: Wed, 01 Aug 2012 16:43:32 +0000
Subject: [Xen-devel] [Fwd: Re:  XEN 4.1.2 and kernel 3.4.7]
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

------=_20120801183353_31106
Content-Type: text/plain; charset=iso-8859-1
Content-Transfer-Encoding: 8bit

I am building the kernel myself. I have attached the config and System.map
of the running kernel.


regards
Thomas

>>>> On 01.08.12 at 09:49, "Dipl.-Ing. Thomas Wagner"
>>>> <t.wagner@inode.at> wrote:
>> Aug  1 05:52:08 zeus kernel: INFO: rcu_bh self-detected stall on CPU
>> { 3}  (t=0 jiffies)
>> Aug  1 05:52:08 zeus kernel: Pid: 0, comm: swapper/3 Not tainted
>> 3.4.7-4-xen #1
>> Aug  1 05:52:08 zeus kernel: Call Trace:
>> Aug  1 05:52:08 zeus kernel: <IRQ>  [<ffffffff81096841>] ?
>> __rcu_pending+0x1a1/0x4b0
>> Aug  1 05:52:08 zeus kernel: [<ffffffff8106f150>] ?
>> tick_init_highres+0x10/0x10
>> Aug  1 05:52:08 zeus kernel: [<ffffffff8109771b>] ?
>> rcu_check_callbacks+0xbb/0xd0
>> Aug  1 05:52:08 zeus kernel: [<ffffffff81042c9f>] ?
>> update_process_times+0x3f/0x80
>> Aug  1 05:52:08 zeus kernel: [<ffffffff8106f1a4>] ?
>> tick_sched_timer+0x54/0x130
>> Aug  1 05:52:08 zeus kernel: [<ffffffff81055469>] ?
>> __run_hrtimer.isra.34+0x59/0xf0
>> Aug  1 05:52:08 zeus kernel: [<ffffffff81055aac>] ?
>> hrtimer_interrupt+0xec/0x230
>> Aug  1 05:52:08 zeus kernel: [<ffffffff81006caa>] ?
>> xen_timer_interrupt+0x2a/0x1a0
>> Aug  1 05:52:08 zeus kernel: [<ffffffff810422b3>] ? cascade+0x73/0x90
>> Aug  1 05:52:08 zeus kernel: [<ffffffff81090fea>] ?
>> handle_irq_event_percpu+0x3a/0x150
>> Aug  1 05:52:08 zeus kernel: [<ffffffff8109403f>] ?
>> handle_percpu_irq+0x3f/0x70
>> Aug  1 05:52:08 zeus kernel: [<ffffffff8103d23d>] ?
>> __do_softirq+0xbd/0x130
>> Aug  1 05:52:08 zeus kernel: [<ffffffff811f35f4>] ?
>> __xen_evtchn_do_upcall+0x194/0x240
>> Aug  1 05:52:08 zeus kernel: [<ffffffff811f55f2>] ?
>> xen_evtchn_do_upcall+0x22/0x40
>> Aug  1 05:52:08 zeus kernel: [<ffffffff813244ee>] ?
>> xen_do_hypervisor_callback+0x1e/0x30
>> Aug  1 05:52:08 zeus kernel: <EOI>  [<ffffffff810013aa>] ?
>> hypercall_page+0x3aa/0x1000
>> Aug  1 05:52:08 zeus kernel: [<ffffffff810013aa>] ?
>> hypercall_page+0x3aa/0x1000
>> Aug  1 05:52:08 zeus kernel: [<ffffffff81006afc>] ?
>> xen_safe_halt+0xc/0x20
>> Aug  1 05:52:08 zeus kernel: [<ffffffff81012453>] ? default_idle+0x23/0x40
>> Aug  1 05:52:08 zeus kernel: [<ffffffff81012e86>] ? cpu_idle+0xa6/0xc0
>>
>>
>> I am using an HP DL585 G2 running openSUSE 12.1.
>
> Yet the above trace appears to be from a pv-ops kernel, which
> 12.1 doesn't include (nor does it ship a 3.4-based kernel in the first
> place). Please provide complete, consistent information.
>
> Jan



------=_20120801183353_31106
Content-Type: text/plain; name="config-3.4.7"
Content-Disposition: attachment; filename="config-3.4.7"

#
# Automatically generated file; DO NOT EDIT.
# Linux/x86_64 3.4.7 Kernel Configuration
#
CONFIG_64BIT=y
# CONFIG_X86_32 is not set
CONFIG_X86_64=y
CONFIG_X86=y
CONFIG_INSTRUCTION_DECODER=y
CONFIG_OUTPUT_FORMAT="elf64-x86-64"
CONFIG_ARCH_DEFCONFIG="arch/x86/configs/x86_64_defconfig"
CONFIG_GENERIC_CMOS_UPDATE=y
CONFIG_CLOCKSOURCE_WATCHDOG=y
CONFIG_GENERIC_CLOCKEVENTS=y
CONFIG_ARCH_CLOCKSOURCE_DATA=y
CONFIG_GENERIC_CLOCKEVENTS_BROADCAST=y
CONFIG_LOCKDEP_SUPPORT=y
CONFIG_STACKTRACE_SUPPORT=y
CONFIG_HAVE_LATENCYTOP_SUPPORT=y
CONFIG_MMU=y
CONFIG_NEED_DMA_MAP_STATE=y
CONFIG_NEED_SG_DMA_LENGTH=y
CONFIG_GENERIC_ISA_DMA=y
CONFIG_GENERIC_BUG=y
CONFIG_GENERIC_BUG_RELATIVE_POINTERS=y
CONFIG_GENERIC_HWEIGHT=y
CONFIG_ARCH_MAY_HAVE_PC_FDC=y
# CONFIG_RWSEM_GENERIC_SPINLOCK is not set
CONFIG_RWSEM_XCHGADD_ALGORITHM=y
CONFIG_ARCH_HAS_CPU_IDLE_WAIT=y
CONFIG_GENERIC_CALIBRATE_DELAY=y
CONFIG_GENERIC_TIME_VSYSCALL=y
CONFIG_ARCH_HAS_CPU_RELAX=y
CONFIG_ARCH_HAS_DEFAULT_IDLE=y
CONFIG_ARCH_HAS_CACHE_LINE_SIZE=y
CONFIG_ARCH_HAS_CPU_AUTOPROBE=y
CONFIG_HAVE_SETUP_PER_CPU_AREA=y
CONFIG_NEED_PER_CPU_EMBED_FIRST_CHUNK=y
CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK=y
CONFIG_ARCH_HIBERNATION_POSSIBLE=y
CONFIG_ARCH_SUSPEND_POSSIBLE=y
CONFIG_ZONE_DMA32=y
CONFIG_AUDIT_ARCH=y
CONFIG_ARCH_SUPPORTS_OPTIMIZED_INLINING=y
CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC=y
CONFIG_X86_64_SMP=y
CONFIG_X86_HT=y
CONFIG_ARCH_HWEIGHT_CFLAGS="-fcall-saved-rdi -fcall-saved-rsi -fcall-saved-rdx -fcall-saved-rcx -fcall-saved-r8 -fcall-saved-r9 -fcall-saved-r10 -fcall-saved-r11"
# CONFIG_KTIME_SCALAR is not set
CONFIG_ARCH_CPU_PROBE_RELEASE=y
CONFIG_DEFCONFIG_LIST="/lib/modules/$UNAME_RELEASE/.config"
CONFIG_HAVE_IRQ_WORK=y
CONFIG_IRQ_WORK=y

#
# General setup
#
CONFIG_EXPERIMENTAL=y
CONFIG_INIT_ENV_ARG_LIMIT=32
CONFIG_CROSS_COMPILE=""
CONFIG_LOCALVERSION="-4-xen"
# CONFIG_LOCALVERSION_AUTO is not set
CONFIG_HAVE_KERNEL_GZIP=y
CONFIG_HAVE_KERNEL_BZIP2=y
CONFIG_HAVE_KERNEL_LZMA=y
CONFIG_HAVE_KERNEL_XZ=y
CONFIG_HAVE_KERNEL_LZO=y
CONFIG_KERNEL_GZIP=y
# CONFIG_KERNEL_BZIP2 is not set
# CONFIG_KERNEL_LZMA is not set
# CONFIG_KERNEL_XZ is not set
# CONFIG_KERNEL_LZO is not set
CONFIG_DEFAULT_HOSTNAME="(none)"
CONFIG_SWAP=y
CONFIG_SYSVIPC=y
CONFIG_SYSVIPC_SYSCTL=y
CONFIG_POSIX_MQUEUE=y
CONFIG_POSIX_MQUEUE_SYSCTL=y
CONFIG_BSD_PROCESS_ACCT=y
CONFIG_BSD_PROCESS_ACCT_V3=y
# CONFIG_FHANDLE is not set
CONFIG_TASKSTATS=y
CONFIG_TASK_DELAY_ACCT=y
CONFIG_TASK_XACCT=y
CONFIG_TASK_IO_ACCOUNTING=y
CONFIG_AUDIT=y
CONFIG_AUDITSYSCALL=y
CONFIG_AUDIT_WATCH=y
CONFIG_AUDIT_TREE=y
# CONFIG_AUDIT_LOGINUID_IMMUTABLE is not set
CONFIG_HAVE_GENERIC_HARDIRQS=y

#
# IRQ subsystem
#
CONFIG_GENERIC_HARDIRQS=y
CONFIG_GENERIC_IRQ_PROBE=y
CONFIG_GENERIC_IRQ_SHOW=y
CONFIG_GENERIC_PENDING_IRQ=y
CONFIG_IRQ_FORCED_THREADING=y
CONFIG_SPARSE_IRQ=y

#
# RCU Subsystem
#
CONFIG_TREE_RCU=y
# CONFIG_PREEMPT_RCU is not set
CONFIG_RCU_FANOUT=64
# CONFIG_RCU_FANOUT_EXACT is not set
# CONFIG_TREE_RCU_TRACE is not set
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_LOG_BUF_SHIFT=18
CONFIG_HAVE_UNSTABLE_SCHED_CLOCK=y
CONFIG_CGROUPS=y
# CONFIG_CGROUP_DEBUG is not set
CONFIG_CGROUP_FREEZER=y
CONFIG_CGROUP_DEVICE=y
CONFIG_CPUSETS=y
# CONFIG_PROC_PID_CPUSET is not set
CONFIG_CGROUP_CPUACCT=y
CONFIG_RESOURCE_COUNTERS=y
# CONFIG_CGROUP_MEM_RES_CTLR is not set
# CONFIG_CGROUP_PERF is not set
CONFIG_CGROUP_SCHED=y
CONFIG_FAIR_GROUP_SCHED=y
CONFIG_CFS_BANDWIDTH=y
CONFIG_RT_GROUP_SCHED=y
CONFIG_BLK_CGROUP=y
# CONFIG_DEBUG_BLK_CGROUP is not set
# CONFIG_CHECKPOINT_RESTORE is not set
CONFIG_NAMESPACES=y
CONFIG_UTS_NS=y
CONFIG_IPC_NS=y
# CONFIG_USER_NS is not set
CONFIG_PID_NS=y
CONFIG_NET_NS=y
# CONFIG_SCHED_AUTOGROUP is not set
# CONFIG_SYSFS_DEPRECATED is not set
# CONFIG_RELAY is not set
CONFIG_BLK_DEV_INITRD=y
CONFIG_INITRAMFS_SOURCE=""
CONFIG_RD_GZIP=y
CONFIG_RD_BZIP2=y
CONFIG_RD_LZMA=y
CONFIG_RD_XZ=y
CONFIG_RD_LZO=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
CONFIG_SYSCTL=y
CONFIG_ANON_INODES=y
# CONFIG_EXPERT is not set
CONFIG_UID16=y
# CONFIG_SYSCTL_SYSCALL is not set
CONFIG_KALLSYMS=y
CONFIG_HOTPLUG=y
CONFIG_PRINTK=y
CONFIG_BUG=y
CONFIG_ELF_CORE=y
CONFIG_PCSPKR_PLATFORM=y
CONFIG_HAVE_PCSPKR_PLATFORM=y
CONFIG_BASE_FULL=y
CONFIG_FUTEX=y
CONFIG_EPOLL=y
CONFIG_SIGNALFD=y
CONFIG_TIMERFD=y
CONFIG_EVENTFD=y
CONFIG_SHMEM=y
CONFIG_AIO=y
# CONFIG_EMBEDDED is not set
CONFIG_HAVE_PERF_EVENTS=y

#
# Kernel Performance Events And Counters
#
CONFIG_PERF_EVENTS=y
# CONFIG_PERF_COUNTERS is not set
CONFIG_VM_EVENT_COUNTERS=y
CONFIG_PCI_QUIRKS=y
CONFIG_COMPAT_BRK=y
CONFIG_SLAB=y
# CONFIG_SLUB is not set
# CONFIG_PROFILING is not set
CONFIG_HAVE_OPROFILE=y
CONFIG_OPROFILE_NMI_TIMER=y
# CONFIG_KPROBES is not set
# CONFIG_JUMP_LABEL is not set
CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS=y
CONFIG_HAVE_IOREMAP_PROT=y
CONFIG_HAVE_KPROBES=y
CONFIG_HAVE_KRETPROBES=y
CONFIG_HAVE_OPTPROBES=y
CONFIG_HAVE_ARCH_TRACEHOOK=y
CONFIG_HAVE_DMA_ATTRS=y
CONFIG_USE_GENERIC_SMP_HELPERS=y
CONFIG_HAVE_REGS_AND_STACK_ACCESS_API=y
CONFIG_HAVE_DMA_API_DEBUG=y
CONFIG_HAVE_HW_BREAKPOINT=y
CONFIG_HAVE_MIXED_BREAKPOINTS_REGS=y
CONFIG_HAVE_USER_RETURN_NOTIFIER=y
CONFIG_HAVE_PERF_EVENTS_NMI=y
CONFIG_HAVE_ARCH_JUMP_LABEL=y
CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG=y
CONFIG_HAVE_CMPXCHG_LOCAL=y
CONFIG_HAVE_CMPXCHG_DOUBLE=y
CONFIG_ARCH_WANT_OLD_COMPAT_IPC=y

#
# GCOV-based kernel profiling
#
# CONFIG_HAVE_GENERIC_DMA_COHERENT is not set
CONFIG_SLABINFO=y
CONFIG_RT_MUTEXES=y
CONFIG_BASE_SMALL=0
CONFIG_MODULES=y
CONFIG_MODULE_FORCE_LOAD=y
CONFIG_MODULE_UNLOAD=y
# CONFIG_MODULE_FORCE_UNLOAD is not set
# CONFIG_MODVERSIONS is not set
# CONFIG_MODULE_SRCVERSION_ALL is not set
CONFIG_STOP_MACHINE=y
CONFIG_BLOCK=y
CONFIG_BLK_DEV_BSG=y
CONFIG_BLK_DEV_BSGLIB=y
# CONFIG_BLK_DEV_INTEGRITY is not set
# CONFIG_BLK_DEV_THROTTLING is not set

#
# Partition Types
#
# CONFIG_PARTITION_ADVANCED is not set
CONFIG_MSDOS_PARTITION=y
CONFIG_BLOCK_COMPAT=y

#
# IO Schedulers
#
CONFIG_IOSCHED_NOOP=y
# CONFIG_IOSCHED_DEADLINE is not set
CONFIG_IOSCHED_CFQ=y
CONFIG_CFQ_GROUP_IOSCHED=y
CONFIG_DEFAULT_CFQ=y
# CONFIG_DEFAULT_NOOP is not set
CONFIG_DEFAULT_IOSCHED="cfq"
# CONFIG_INLINE_SPIN_TRYLOCK is not set
# CONFIG_INLINE_SPIN_TRYLOCK_BH is not set
# CONFIG_INLINE_SPIN_LOCK is not set
# CONFIG_INLINE_SPIN_LOCK_BH is not set
# CONFIG_INLINE_SPIN_LOCK_IRQ is not set
# CONFIG_INLINE_SPIN_LOCK_IRQSAVE is not set
# CONFIG_INLINE_SPIN_UNLOCK_BH is not set
CONFIG_INLINE_SPIN_UNLOCK_IRQ=y
# CONFIG_INLINE_SPIN_UNLOCK_IRQRESTORE is not set
# CONFIG_INLINE_READ_TRYLOCK is not set
# CONFIG_INLINE_READ_LOCK is not set
# CONFIG_INLINE_READ_LOCK_BH is not set
# CONFIG_INLINE_READ_LOCK_IRQ is not set
# CONFIG_INLINE_READ_LOCK_IRQSAVE is not set
CONFIG_INLINE_READ_UNLOCK=y
# CONFIG_INLINE_READ_UNLOCK_BH is not set
CONFIG_INLINE_READ_UNLOCK_IRQ=y
# CONFIG_INLINE_READ_UNLOCK_IRQRESTORE is not set
# CONFIG_INLINE_WRITE_TRYLOCK is not set
# CONFIG_INLINE_WRITE_LOCK is not set
# CONFIG_INLINE_WRITE_LOCK_BH is not set
# CONFIG_INLINE_WRITE_LOCK_IRQ is not set
# CONFIG_INLINE_WRITE_LOCK_IRQSAVE is not set
CONFIG_INLINE_WRITE_UNLOCK=y
# CONFIG_INLINE_WRITE_UNLOCK_BH is not set
CONFIG_INLINE_WRITE_UNLOCK_IRQ=y
# CONFIG_INLINE_WRITE_UNLOCK_IRQRESTORE is not set
CONFIG_MUTEX_SPIN_ON_OWNER=y
CONFIG_FREEZER=y

#
# Processor type and features
#
CONFIG_ZONE_DMA=y
CONFIG_TICK_ONESHOT=y
# CONFIG_NO_HZ is not set
CONFIG_HIGH_RES_TIMERS=y
CONFIG_GENERIC_CLOCKEVENTS_BUILD=y
CONFIG_GENERIC_CLOCKEVENTS_MIN_ADJUST=y
CONFIG_SMP=y
CONFIG_X86_MPPARSE=y
# CONFIG_X86_EXTENDED_PLATFORM is not set
CONFIG_X86_SUPPORTS_MEMORY_FAILURE=y
# CONFIG_SCHED_OMIT_FRAME_POINTER is not set
CONFIG_PARAVIRT_GUEST=y
# CONFIG_PARAVIRT_TIME_ACCOUNTING is not set
CONFIG_XEN=y
CONFIG_XEN_DOM0=y
CONFIG_XEN_PRIVILEGED_GUEST=y
CONFIG_XEN_PVHVM=y
CONFIG_XEN_MAX_DOMAIN_MEMORY=500
CONFIG_XEN_SAVE_RESTORE=y
# CONFIG_KVM_CLOCK is not set
# CONFIG_KVM_GUEST is not set
CONFIG_PARAVIRT=y
CONFIG_PARAVIRT_SPINLOCKS=y
CONFIG_PARAVIRT_CLOCK=y
CONFIG_NO_BOOTMEM=y
# CONFIG_MEMTEST is not set
CONFIG_MK8=y
# CONFIG_MPSC is not set
# CONFIG_MCORE2 is not set
# CONFIG_MATOM is not set
# CONFIG_GENERIC_CPU is not set
CONFIG_X86_INTERNODE_CACHE_SHIFT=6
CONFIG_X86_CMPXCHG=y
CONFIG_X86_L1_CACHE_SHIFT=6
CONFIG_X86_XADD=y
CONFIG_X86_WP_WORKS_OK=y
CONFIG_X86_INTEL_USERCOPY=y
CONFIG_X86_USE_PPRO_CHECKSUM=y
CONFIG_X86_TSC=y
CONFIG_X86_CMPXCHG64=y
CONFIG_X86_CMOV=y
CONFIG_X86_MINIMUM_CPU_FAMILY=64
CONFIG_X86_DEBUGCTLMSR=y
CONFIG_CPU_SUP_INTEL=y
CONFIG_CPU_SUP_AMD=y
CONFIG_CPU_SUP_CENTAUR=y
CONFIG_HPET_TIMER=y
CONFIG_HPET_EMULATE_RTC=y
CONFIG_DMI=y
CONFIG_GART_IOMMU=y
# CONFIG_CALGARY_IOMMU is not set
CONFIG_SWIOTLB=y
CONFIG_IOMMU_HELPER=y
CONFIG_NR_CPUS=8
# CONFIG_SCHED_SMT is not set
CONFIG_SCHED_MC=y
# CONFIG_IRQ_TIME_ACCOUNTING is not set
CONFIG_PREEMPT_NONE=y
# CONFIG_PREEMPT_VOLUNTARY is not set
# CONFIG_PREEMPT is not set
CONFIG_X86_LOCAL_APIC=y
CONFIG_X86_IO_APIC=y
CONFIG_X86_REROUTE_FOR_BROKEN_BOOT_IRQS=y
CONFIG_X86_MCE=y
# CONFIG_X86_MCE_INTEL is not set
CONFIG_X86_MCE_AMD=y
CONFIG_X86_MCE_THRESHOLD=y
# CONFIG_X86_MCE_INJECT is not set
# CONFIG_I8K is not set
CONFIG_MICROCODE=m
# CONFIG_MICROCODE_INTEL is not set
CONFIG_MICROCODE_AMD=y
CONFIG_MICROCODE_OLD_INTERFACE=y
CONFIG_X86_MSR=m
CONFIG_X86_CPUID=m
CONFIG_ARCH_PHYS_ADDR_T_64BIT=y
CONFIG_ARCH_DMA_ADDR_T_64BIT=y
CONFIG_DIRECT_GBPAGES=y
CONFIG_NUMA=y
CONFIG_AMD_NUMA=y
CONFIG_X86_64_ACPI_NUMA=y
CONFIG_NODES_SPAN_OTHER_NODES=y
# CONFIG_NUMA_EMU is not set
CONFIG_NODES_SHIFT=8
CONFIG_ARCH_SPARSEMEM_ENABLE=y
CONFIG_ARCH_SPARSEMEM_DEFAULT=y
CONFIG_ARCH_SELECT_MEMORY_MODEL=y
CONFIG_ARCH_PROC_KCORE_TEXT=y
CONFIG_ILLEGAL_POINTER_VALUE=0xdead000000000000
CONFIG_SELECT_MEMORY_MODEL=y
CONFIG_SPARSEMEM_MANUAL=y
CONFIG_SPARSEMEM=y
CONFIG_NEED_MULTIPLE_NODES=y
CONFIG_HAVE_MEMORY_PRESENT=y
CONFIG_SPARSEMEM_EXTREME=y
CONFIG_SPARSEMEM_VMEMMAP_ENABLE=y
CONFIG_SPARSEMEM_ALLOC_MEM_MAP_TOGETHER=y
CONFIG_SPARSEMEM_VMEMMAP=y
CONFIG_HAVE_MEMBLOCK=y
CONFIG_HAVE_MEMBLOCK_NODE_MAP=y
CONFIG_ARCH_DISCARD_MEMBLOCK=y
# CONFIG_MEMORY_HOTPLUG is not set
CONFIG_PAGEFLAGS_EXTENDED=y
CONFIG_SPLIT_PTLOCK_CPUS=4
CONFIG_COMPACTION=y
CONFIG_MIGRATION=y
CONFIG_PHYS_ADDR_T_64BIT=y
CONFIG_ZONE_DMA_FLAG=1
CONFIG_BOUNCE=y
CONFIG_VIRT_TO_BUS=y
CONFIG_MMU_NOTIFIER=y
# CONFIG_KSM is not set
CONFIG_DEFAULT_MMAP_MIN_ADDR=65536
CONFIG_ARCH_SUPPORTS_MEMORY_FAILURE=y
CONFIG_MEMORY_FAILURE=y
CONFIG_TRANSPARENT_HUGEPAGE=y
# CONFIG_TRANSPARENT_HUGEPAGE_ALWAYS is not set
CONFIG_TRANSPARENT_HUGEPAGE_MADVISE=y
CONFIG_CLEANCACHE=y
# CONFIG_X86_CHECK_BIOS_CORRUPTION is not set
CONFIG_X86_RESERVE_LOW=64
CONFIG_MTRR=y
CONFIG_MTRR_SANITIZER=y
CONFIG_MTRR_SANITIZER_ENABLE_DEFAULT=0
CONFIG_MTRR_SANITIZER_SPARE_REG_NR_DEFAULT=1
CONFIG_X86_PAT=y
CONFIG_ARCH_USES_PG_UNCACHED=y
CONFIG_ARCH_RANDOM=y
# CONFIG_EFI is not set
# CONFIG_SECCOMP is not set
# CONFIG_CC_STACKPROTECTOR is not set
# CONFIG_HZ_100 is not set
CONFIG_HZ_250=y
# CONFIG_HZ_300 is not set
# CONFIG_HZ_1000 is not set
CONFIG_HZ=250
CONFIG_SCHED_HRTICK=y
# CONFIG_KEXEC is not set
# CONFIG_CRASH_DUMP is not set
CONFIG_PHYSICAL_START=0x1000000
# CONFIG_RELOCATABLE is not set
CONFIG_PHYSICAL_ALIGN=0x1000000
CONFIG_HOTPLUG_CPU=y
# CONFIG_COMPAT_VDSO is not set
# CONFIG_CMDLINE_BOOL is not set
CONFIG_ARCH_ENABLE_MEMORY_HOTPLUG=y
CONFIG_USE_PERCPU_NUMA_NODE_ID=y

#
# Power management and ACPI options
#
# CONFIG_SUSPEND is not set
CONFIG_HIBERNATE_CALLBACKS=y
# CONFIG_HIBERNATION is not set
CONFIG_PM_SLEEP=y
CONFIG_PM_SLEEP_SMP=y
CONFIG_PM_RUNTIME=y
CONFIG_PM=y
# CONFIG_PM_DEBUG is not set
CONFIG_ACPI=y
CONFIG_ACPI_PROCFS=y
CONFIG_ACPI_PROCFS_POWER=y
# CONFIG_ACPI_EC_DEBUGFS is not set
CONFIG_ACPI_PROC_EVENT=y
# CONFIG_ACPI_AC is not set
# CONFIG_ACPI_BATTERY is not set
CONFIG_ACPI_BUTTON=m
CONFIG_ACPI_FAN=m
# CONFIG_ACPI_DOCK is not set
CONFIG_ACPI_PROCESSOR=m
CONFIG_ACPI_IPMI=m
CONFIG_ACPI_HOTPLUG_CPU=y
CONFIG_ACPI_PROCESSOR_AGGREGATOR=m
CONFIG_ACPI_THERMAL=m
CONFIG_ACPI_NUMA=y
CONFIG_ACPI_CUSTOM_DSDT_FILE=""
# CONFIG_ACPI_CUSTOM_DSDT is not set
CONFIG_ACPI_BLACKLIST_YEAR=0
# CONFIG_ACPI_DEBUG is not set
# CONFIG_ACPI_PCI_SLOT is not set
CONFIG_X86_PM_TIMER=y
CONFIG_ACPI_CONTAINER=m
# CONFIG_ACPI_SBS is not set
CONFIG_ACPI_HED=y
# CONFIG_ACPI_BGRT is not set
CONFIG_ACPI_APEI=y
CONFIG_ACPI_APEI_GHES=y
CONFIG_ACPI_APEI_PCIEAER=y
# CONFIG_ACPI_APEI_MEMORY_FAILURE is not set
# CONFIG_ACPI_APEI_ERST_DEBUG is not set
# CONFIG_SFI is not set

#
# CPU Frequency scaling
#
CONFIG_CPU_FREQ=y
CONFIG_CPU_FREQ_TABLE=y
CONFIG_CPU_FREQ_STAT=y
# CONFIG_CPU_FREQ_STAT_DETAILS is not set
CONFIG_CPU_FREQ_DEFAULT_GOV_PERFORMANCE=y
# CONFIG_CPU_FREQ_DEFAULT_GOV_USERSPACE is not set
# CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND is not set
# CONFIG_CPU_FREQ_DEFAULT_GOV_CONSERVATIVE is not set
CONFIG_CPU_FREQ_GOV_PERFORMANCE=y
# CONFIG_CPU_FREQ_GOV_POWERSAVE is not set
# CONFIG_CPU_FREQ_GOV_USERSPACE is not set
# CONFIG_CPU_FREQ_GOV_ONDEMAND is not set
# CONFIG_CPU_FREQ_GOV_CONSERVATIVE is not set

#
# x86 CPU frequency scaling drivers
#
# CONFIG_X86_PCC_CPUFREQ is not set
# CONFIG_X86_ACPI_CPUFREQ is not set
CONFIG_X86_POWERNOW_K8=m
# CONFIG_X86_SPEEDSTEP_CENTRINO is not set
# CONFIG_X86_P4_CLOCKMOD is not set

#
# shared options
#
# CONFIG_X86_SPEEDSTEP_LIB is not set
CONFIG_CPU_IDLE=y
CONFIG_CPU_IDLE_GOV_LADDER=y
# CONFIG_INTEL_IDLE is not set

#
# Memory power savings
#
# CONFIG_I7300_IDLE is not set

#
# Bus options (PCI etc.)
#
CONFIG_PCI=y
CONFIG_PCI_DIRECT=y
CONFIG_PCI_MMCONFIG=y
CONFIG_PCI_XEN=y
CONFIG_PCI_DOMAINS=y
# CONFIG_PCI_CNB20LE_QUIRK is not set
CONFIG_PCIEPORTBUS=y
CONFIG_PCIEAER=y
# CONFIG_PCIE_ECRC is not set
# CONFIG_PCIEAER_INJECT is not set
CONFIG_PCIEASPM=y
# CONFIG_PCIEASPM_DEBUG is not set
CONFIG_PCIEASPM_DEFAULT=y
# CONFIG_PCIEASPM_POWERSAVE is not set
# CONFIG_PCIEASPM_PERFORMANCE is not set
CONFIG_PCIE_PME=y
CONFIG_ARCH_SUPPORTS_MSI=y
CONFIG_PCI_MSI=y
# CONFIG_PCI_REALLOC_ENABLE_AUTO is not set
CONFIG_PCI_STUB=m
# CONFIG_XEN_PCIDEV_FRONTEND is not set
CONFIG_HT_IRQ=y
CONFIG_PCI_ATS=y
CONFIG_PCI_IOV=y
CONFIG_PCI_PRI=y
CONFIG_PCI_PASID=y
CONFIG_PCI_IOAPIC=y
CONFIG_PCI_LABEL=y
CONFIG_ISA_DMA_API=y
CONFIG_AMD_NB=y
# CONFIG_PCCARD is not set
# CONFIG_HOTPLUG_PCI is not set
# CONFIG_RAPIDIO is not set

#
# Executable file formats / Emulations
#
CONFIG_BINFMT_ELF=y
CONFIG_COMPAT_BINFMT_ELF=y
CONFIG_ARCH_BINFMT_ELF_RANDOMIZE_PIE=y
CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS=y
# CONFIG_HAVE_AOUT is not set
# CONFIG_BINFMT_MISC is not set
CONFIG_IA32_EMULATION=y
# CONFIG_IA32_AOUT is not set
# CONFIG_X86_X32 is not set
CONFIG_COMPAT=y
CONFIG_COMPAT_FOR_U64_ALIGNMENT=y
CONFIG_SYSVIPC_COMPAT=y
CONFIG_HAVE_TEXT_POKE_SMP=y
CONFIG_NET=y

#
# Networking options
#
CONFIG_PACKET=m
CONFIG_UNIX=y
CONFIG_UNIX_DIAG=m
CONFIG_XFRM=y
CONFIG_XFRM_USER=m
# CONFIG_XFRM_SUB_POLICY is not set
# CONFIG_XFRM_MIGRATE is not set
# CONFIG_XFRM_STATISTICS is not set
CONFIG_NET_KEY=m
# CONFIG_NET_KEY_MIGRATE is not set
CONFIG_INET=y
# CONFIG_IP_MULTICAST is not set
CONFIG_IP_ADVANCED_ROUTER=y
# CONFIG_IP_FIB_TRIE_STATS is not set
# CONFIG_IP_MULTIPLE_TABLES is not set
# CONFIG_IP_ROUTE_MULTIPATH is not set
# CONFIG_IP_ROUTE_VERBOSE is not set
# CONFIG_IP_PNP is not set
# CONFIG_NET_IPIP is not set
# CONFIG_NET_IPGRE_DEMUX is not set
# CONFIG_ARPD is not set
CONFIG_SYN_COOKIES=y
# CONFIG_INET_AH is not set
# CONFIG_INET_ESP is not set
# CONFIG_INET_IPCOMP is not set
# CONFIG_INET_XFRM_TUNNEL is not set
# CONFIG_INET_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set
CONFIG_INET_LRO=y
CONFIG_INET_DIAG=m
CONFIG_INET_TCP_DIAG=m
# CONFIG_INET_UDP_DIAG is not set
# CONFIG_TCP_CONG_ADVANCED is not set
CONFIG_TCP_CONG_CUBIC=y
CONFIG_DEFAULT_TCP_CONG="cubic"
# CONFIG_TCP_MD5SIG is not set
# CONFIG_IPV6 is not set
# CONFIG_NETWORK_SECMARK is not set
# CONFIG_NETWORK_PHY_TIMESTAMPING is not set
# CONFIG_NETFILTER is not set
# CONFIG_IP_DCCP is not set
# CONFIG_IP_SCTP is not set
# CONFIG_RDS is not set
# CONFIG_TIPC is not set
# CONFIG_ATM is not set
# CONFIG_L2TP is not set
CONFIG_STP=m
CONFIG_BRIDGE=m
CONFIG_BRIDGE_IGMP_SNOOPING=y
# CONFIG_NET_DSA is not set
# CONFIG_VLAN_8021Q is not set
# CONFIG_DECNET is not set
CONFIG_LLC=m
# CONFIG_LLC2 is not set
# CONFIG_IPX is not set
# CONFIG_ATALK is not set
# CONFIG_X25 is not set
# CONFIG_LAPB is not set
# CONFIG_ECONET is not set
# CONFIG_WAN_ROUTER is not set
# CONFIG_PHONET is not set
# CONFIG_IEEE802154 is not set
# CONFIG_NET_SCHED is not set
# CONFIG_DCB is not set
# CONFIG_BATMAN_ADV is not set
# CONFIG_OPENVSWITCH is not set
CONFIG_RPS=y
CONFIG_RFS_ACCEL=y
CONFIG_XPS=y
CONFIG_NETPRIO_CGROUP=m
CONFIG_BQL=y
CONFIG_HAVE_BPF_JIT=y
# CONFIG_BPF_JIT is not set

#
# Network testing
#
# CONFIG_NET_PKTGEN is not set
# CONFIG_HAMRADIO is not set
# CONFIG_CAN is not set
# CONFIG_IRDA is not set
# CONFIG_BT is not set
# CONFIG_AF_RXRPC is not set
# CONFIG_WIRELESS is not set
# CONFIG_WIMAX is not set
# CONFIG_RFKILL is not set
# CONFIG_NET_9P is not set
# CONFIG_CAIF is not set
# CONFIG_CEPH_LIB is not set
# CONFIG_NFC is not set

#
# Device Drivers
#

#
# Generic Driver Options
#
CONFIG_UEVENT_HELPER_PATH=""
CONFIG_DEVTMPFS=y
CONFIG_DEVTMPFS_MOUNT=y
# CONFIG_STANDALONE is not set
CONFIG_PREVENT_FIRMWARE_BUILD=y
CONFIG_FW_LOADER=y
# CONFIG_FIRMWARE_IN_KERNEL is not set
CONFIG_EXTRA_FIRMWARE=""
CONFIG_SYS_HYPERVISOR=y
# CONFIG_GENERIC_CPU_DEVICES is not set
# CONFIG_DMA_SHARED_BUFFER is not set
# CONFIG_CONNECTOR is not set
# CONFIG_MTD is not set
# CONFIG_PARPORT is not set
CONFIG_PNP=y
# CONFIG_PNP_DEBUG_MESSAGES is not set

#
# Protocols
#
CONFIG_PNPACPI=y
CONFIG_BLK_DEV=y
# CONFIG_BLK_DEV_FD is not set
# CONFIG_BLK_DEV_PCIESSD_MTIP32XX is not set
CONFIG_BLK_CPQ_DA=m
CONFIG_BLK_CPQ_CISS_DA=m
# CONFIG_CISS_SCSI_TAPE is not set
# CONFIG_BLK_DEV_DAC960 is not set
# CONFIG_BLK_DEV_UMEM is not set
# CONFIG_BLK_DEV_COW_COMMON is not set
CONFIG_BLK_DEV_LOOP=m
CONFIG_BLK_DEV_LOOP_MIN_COUNT=8
# CONFIG_BLK_DEV_CRYPTOLOOP is not set

#
# DRBD disabled because PROC_FS, INET or CONNECTOR not selected
#
# CONFIG_BLK_DEV_NBD is not set
# CONFIG_BLK_DEV_NVME is not set
# CONFIG_BLK_DEV_SX8 is not set
# CONFIG_BLK_DEV_UB is not set
# CONFIG_BLK_DEV_RAM is not set
# CONFIG_CDROM_PKTCDVD is not set
# CONFIG_ATA_OVER_ETH is not set
# CONFIG_XEN_BLKDEV_FRONTEND is not set
CONFIG_XEN_BLKDEV_BACKEND=m
# CONFIG_BLK_DEV_HD is not set
# CONFIG_BLK_DEV_RBD is not set

#
# Misc devices
#
# CONFIG_SENSORS_LIS3LV02D is not set
# CONFIG_IBM_ASM is not set
# CONFIG_PHANTOM is not set
# CONFIG_INTEL_MID_PTI is not set
# CONFIG_SGI_IOC4 is not set
# CONFIG_TIFM_CORE is not set
# CONFIG_ENCLOSURE_SERVICES is not set
CONFIG_HP_ILO=m
# CONFIG_VMWARE_BALLOON is not set
# CONFIG_PCH_PHUB is not set
# CONFIG_C2PORT is not set

#
# EEPROM support
#
# CONFIG_EEPROM_93CX6 is not set
# CONFIG_CB710_CORE is not set

#
# Texas Instruments shared transport line discipline
#

#
# Altera FPGA firmware download module
#
CONFIG_HAVE_IDE=y
# CONFIG_IDE is not set

#
# SCSI device support
#
CONFIG_SCSI_MOD=y
# CONFIG_RAID_ATTRS is not set
CONFIG_SCSI=y
CONFIG_SCSI_DMA=y
CONFIG_SCSI_TGT=m
# CONFIG_SCSI_NETLINK is not set
CONFIG_SCSI_PROC_FS=y

#
# SCSI support type (disk, tape, CD-ROM)
#
CONFIG_BLK_DEV_SD=y
CONFIG_CHR_DEV_ST=m
# CONFIG_CHR_DEV_OSST is not set
CONFIG_BLK_DEV_SR=m
# CONFIG_BLK_DEV_SR_VENDOR is not set
CONFIG_CHR_DEV_SG=m
# CONFIG_CHR_DEV_SCH is not set
CONFIG_SCSI_MULTI_LUN=y
CONFIG_SCSI_CONSTANTS=y
CONFIG_SCSI_LOGGING=y
# CONFIG_SCSI_SCAN_ASYNC is not set
CONFIG_SCSI_WAIT_SCAN=m

#
# SCSI Transports
#
CONFIG_SCSI_SPI_ATTRS=m
# CONFIG_SCSI_FC_ATTRS is not set
CONFIG_SCSI_ISCSI_ATTRS=m
CONFIG_SCSI_SAS_ATTRS=m
# CONFIG_SCSI_SAS_LIBSAS is not set
# CONFIG_SCSI_SRP_ATTRS is not set
CONFIG_SCSI_LOWLEVEL=y
CONFIG_ISCSI_TCP=m
CONFIG_ISCSI_BOOT_SYSFS=m
# CONFIG_SCSI_CXGB3_ISCSI is not set
# CONFIG_SCSI_CXGB4_ISCSI is not set
# CONFIG_SCSI_BNX2_ISCSI is not set
# CONFIG_SCSI_BNX2X_FCOE is not set
# CONFIG_BE2ISCSI is not set
# CONFIG_BLK_DEV_3W_XXXX_RAID is not set
# CONFIG_SCSI_HPSA is not set
# CONFIG_SCSI_3W_9XXX is not set
# CONFIG_SCSI_3W_SAS is not set
# CONFIG_SCSI_ACARD is not set
# CONFIG_SCSI_AACRAID is not set
# CONFIG_SCSI_AIC7XXX is not set
# CONFIG_SCSI_AIC7XXX_OLD is not set
# CONFIG_SCSI_AIC79XX is not set
# CONFIG_SCSI_AIC94XX is not set
# CONFIG_SCSI_MVSAS is not set
# CONFIG_SCSI_MVUMI is not set
# CONFIG_SCSI_DPT_I2O is not set
# CONFIG_SCSI_ADVANSYS is not set
# CONFIG_SCSI_ARCMSR is not set
# CONFIG_MEGARAID_NEWGEN is not set
# CONFIG_MEGARAID_LEGACY is not set
# CONFIG_MEGARAID_SAS is not set
# CONFIG_SCSI_MPT2SAS is not set
# CONFIG_SCSI_UFSHCD is not set
# CONFIG_SCSI_HPTIOP is not set
# CONFIG_SCSI_BUSLOGIC is not set
# CONFIG_VMWARE_PVSCSI is not set
# CONFIG_LIBFC is not set
# CONFIG_LIBFCOE is not set
# CONFIG_FCOE is not set
# CONFIG_FCOE_FNIC is not set
# CONFIG_SCSI_DMX3191D is not set
# CONFIG_SCSI_EATA is not set
# CONFIG_SCSI_FUTURE_DOMAIN is not set
# CONFIG_SCSI_GDTH is not set
# CONFIG_SCSI_ISCI is not set
# CONFIG_SCSI_IPS is not set
# CONFIG_SCSI_INITIO is not set
# CONFIG_SCSI_INIA100 is not set
# CONFIG_SCSI_STEX is not set
# CONFIG_SCSI_SYM53C8XX_2 is not set
# CONFIG_SCSI_IPR is not set
# CONFIG_SCSI_QLOGIC_1280 is not set
# CONFIG_SCSI_QLA_FC is not set
# CONFIG_SCSI_QLA_ISCSI is not set
# CONFIG_SCSI_LPFC is not set
# CONFIG_SCSI_DC395x is not set
# CONFIG_SCSI_DC390T is not set
# CONFIG_SCSI_DEBUG is not set
# CONFIG_SCSI_PMCRAID is not set
# CONFIG_SCSI_PM8001 is not set
# CONFIG_SCSI_SRP is not set
# CONFIG_SCSI_BFA_FC is not set
# CONFIG_XEN_SCSI_FRONTEND is not set
CONFIG_XEN_SCSI_BACKEND=m
# CONFIG_SCSI_DH is not set
# CONFIG_SCSI_OSD_INITIATOR is not set
CONFIG_ATA=y
# CONFIG_ATA_NONSTANDARD is not set
CONFIG_ATA_VERBOSE_ERROR=y
CONFIG_ATA_ACPI=y
# CONFIG_SATA_PMP is not set

#
# Controllers with non-SFF native interface
#
# CONFIG_SATA_AHCI is not set
# CONFIG_SATA_AHCI_PLATFORM is not set
# CONFIG_SATA_INIC162X is not set
# CONFIG_SATA_ACARD_AHCI is not set
# CONFIG_SATA_SIL24 is not set
CONFIG_ATA_SFF=y

#
# SFF controllers with custom DMA interface
#
# CONFIG_PDC_ADMA is not set
# CONFIG_SATA_QSTOR is not set
# CONFIG_SATA_SX4 is not set
CONFIG_ATA_BMDMA=y

#
# SATA SFF controllers with BMDMA
#
# CONFIG_ATA_PIIX is not set
# CONFIG_SATA_MV is not set
# CONFIG_SATA_NV is not set
# CONFIG_SATA_PROMISE is not set
# CONFIG_SATA_SIL is not set
# CONFIG_SATA_SIS is not set
# CONFIG_SATA_SVW is not set
# CONFIG_SATA_ULI is not set
# CONFIG_SATA_VIA is not set
# CONFIG_SATA_VITESSE is not set

#
# PATA SFF controllers with BMDMA
#
# CONFIG_PATA_ALI is not set
CONFIG_PATA_AMD=m
# CONFIG_PATA_ARTOP is not set
# CONFIG_PATA_ATIIXP is not set
# CONFIG_PATA_ATP867X is not set
# CONFIG_PATA_CMD64X is not set
# CONFIG_PATA_CS5520 is not set
# CONFIG_PATA_CS5530 is not set
# CONFIG_PATA_CS5536 is not set
# CONFIG_PATA_CYPRESS is not set
# CONFIG_PATA_EFAR is not set
# CONFIG_PATA_HPT366 is not set
# CONFIG_PATA_HPT37X is not set
# CONFIG_PATA_HPT3X2N is not set
# CONFIG_PATA_HPT3X3 is not set
# CONFIG_PATA_IT8213 is not set
# CONFIG_PATA_IT821X is not set
# CONFIG_PATA_JMICRON is not set
# CONFIG_PATA_MARVELL is not set
# CONFIG_PATA_NETCELL is not set
# CONFIG_PATA_NINJA32 is not set
# CONFIG_PATA_NS87415 is not set
# CONFIG_PATA_OLDPIIX is not set
# CONFIG_PATA_OPTIDMA is not set
# CONFIG_PATA_PDC2027X is not set
# CONFIG_PATA_PDC_OLD is not set
# CONFIG_PATA_RADISYS is not set
# CONFIG_PATA_RDC is not set
# CONFIG_PATA_SC1200 is not set
# CONFIG_PATA_SCH is not set
# CONFIG_PATA_SERVERWORKS is not set
# CONFIG_PATA_SIL680 is not set
# CONFIG_PATA_SIS is not set
# CONFIG_PATA_TOSHIBA is not set
# CONFIG_PATA_TRIFLEX is not set
# CONFIG_PATA_VIA is not set
# CONFIG_PATA_WINBOND is not set

#
# PIO-only SFF controllers
#
# CONFIG_PATA_CMD640_PCI is not set
# CONFIG_PATA_MPIIX is not set
# CONFIG_PATA_NS87410 is not set
# CONFIG_PATA_OPTI is not set
# CONFIG_PATA_RZ1000 is not set

#
# Generic fallback / legacy drivers
#
CONFIG_PATA_ACPI=m
CONFIG_ATA_GENERIC=m
# CONFIG_PATA_LEGACY is not set
# CONFIG_MD is not set
# CONFIG_TARGET_CORE is not set
CONFIG_FUSION=y
CONFIG_FUSION_SPI=m
# CONFIG_FUSION_FC is not set
# CONFIG_FUSION_SAS is not set
CONFIG_FUSION_MAX_SGE=128
# CONFIG_FUSION_CTL is not set
# CONFIG_FUSION_LOGGING is not set

#
# IEEE 1394 (FireWire) support
#
# CONFIG_FIREWIRE is not set
# CONFIG_FIREWIRE_NOSY is not set
# CONFIG_I2O is not set
# CONFIG_MACINTOSH_DRIVERS is not set
CONFIG_NETDEVICES=y
CONFIG_NET_CORE=y
# CONFIG_BONDING is not set
# CONFIG_DUMMY is not set
# CONFIG_EQUALIZER is not set
# CONFIG_NET_FC is not set
# CONFIG_MII is not set
# CONFIG_NET_TEAM is not set
# CONFIG_MACVLAN is not set
# CONFIG_NETCONSOLE is not set
# CONFIG_NETPOLL is not set
# CONFIG_NET_POLL_CONTROLLER is not set
CONFIG_TUN=m
# CONFIG_VETH is not set
# CONFIG_ARCNET is not set

#
# CAIF transport drivers
#
CONFIG_ETHERNET=y
# CONFIG_NET_VENDOR_3COM is not set
# CONFIG_NET_VENDOR_ADAPTEC is not set
# CONFIG_NET_VENDOR_ALTEON is not set
# CONFIG_NET_VENDOR_AMD is not set
# CONFIG_NET_VENDOR_ATHEROS is not set
CONFIG_NET_VENDOR_BROADCOM=y
# CONFIG_B44 is not set
CONFIG_BNX2=m
# CONFIG_CNIC is not set
# CONFIG_TIGON3 is not set
# CONFIG_BNX2X is not set
# CONFIG_NET_VENDOR_BROCADE is not set
# CONFIG_NET_CALXEDA_XGMAC is not set
# CONFIG_NET_VENDOR_CHELSIO is not set
# CONFIG_NET_VENDOR_CISCO is not set
# CONFIG_DNET is not set
# CONFIG_NET_VENDOR_DEC is not set
# CONFIG_NET_VENDOR_DLINK is not set
# CONFIG_NET_VENDOR_EMULEX is not set
# CONFIG_NET_VENDOR_EXAR is not set
# CONFIG_NET_VENDOR_HP is not set
CONFIG_NET_VENDOR_INTEL=y
# CONFIG_E100 is not set
CONFIG_E1000=m
CONFIG_E1000E=m
# CONFIG_IGB is not set
# CONFIG_IGBVF is not set
# CONFIG_IXGB is not set
# CONFIG_IXGBE is not set
# CONFIG_IXGBEVF is not set
# CONFIG_NET_VENDOR_I825XX is not set
# CONFIG_IP1000 is not set
# CONFIG_JME is not set
# CONFIG_NET_VENDOR_MARVELL is not set
# CONFIG_NET_VENDOR_MELLANOX is not set
# CONFIG_NET_VENDOR_MICREL is not set
# CONFIG_NET_VENDOR_MYRI is not set
# CONFIG_FEALNX is not set
# CONFIG_NET_VENDOR_NATSEMI is not set
# CONFIG_NET_VENDOR_NVIDIA is not set
# CONFIG_NET_VENDOR_OKI is not set
# CONFIG_ETHOC is not set
# CONFIG_NET_PACKET_ENGINE is not set
# CONFIG_NET_VENDOR_QLOGIC is not set
# CONFIG_NET_VENDOR_REALTEK is not set
# CONFIG_NET_VENDOR_RDC is not set
# CONFIG_NET_VENDOR_SEEQ is not set
# CONFIG_NET_VENDOR_SILAN is not set
# CONFIG_NET_VENDOR_SIS is not set
# CONFIG_SFC is not set
# CONFIG_NET_VENDOR_SMSC is not set
# CONFIG_NET_VENDOR_STMICRO is not set
# CONFIG_NET_VENDOR_SUN is not set
# CONFIG_NET_VENDOR_TEHUTI is not set
# CONFIG_NET_VENDOR_TI is not set
# CONFIG_NET_VENDOR_VIA is not set
# CONFIG_FDDI is not set
# CONFIG_HIPPI is not set
# CONFIG_NET_SB1000 is not set
# CONFIG_PHYLIB is not set
# CONFIG_PPP is not set
# CONFIG_SLIP is not set
# CONFIG_TR is not set

#
# USB Network Adapters
#
# CONFIG_USB_CATC is not set
# CONFIG_USB_KAWETH is not set
# CONFIG_USB_PEGASUS is not set
# CONFIG_USB_RTL8150 is not set
# CONFIG_USB_USBNET is not set
# CONFIG_USB_IPHETH is not set
# CONFIG_WLAN is not set

#
# Enable WiMAX (Networking options) to see the WiMAX drivers
#
# CONFIG_WAN is not set
# CONFIG_XEN_NETDEV_FRONTEND is not set
CONFIG_XEN_NETDEV_BACKEND=m
# CONFIG_VMXNET3 is not set
# CONFIG_ISDN is not set

#
# Input device support
#
CONFIG_INPUT=y
# CONFIG_INPUT_FF_MEMLESS is not set
# CONFIG_INPUT_POLLDEV is not set
# CONFIG_INPUT_SPARSEKMAP is not set

#
# Userland interfaces
#
CONFIG_INPUT_MOUSEDEV=y
# CONFIG_INPUT_MOUSEDEV_PSAUX is not set
CONFIG_INPUT_MOUSEDEV_SCREEN_X=1024
CONFIG_INPUT_MOUSEDEV_SCREEN_Y=768
# CONFIG_INPUT_JOYDEV is not set
# CONFIG_INPUT_EVDEV is not set
# CONFIG_INPUT_EVBUG is not set

#
# Input Device Drivers
#
CONFIG_INPUT_KEYBOARD=y
CONFIG_KEYBOARD_ATKBD=y
# CONFIG_KEYBOARD_LKKBD is not set
# CONFIG_KEYBOARD_NEWTON is not set
# CONFIG_KEYBOARD_OPENCORES is not set
# CONFIG_KEYBOARD_STOWAWAY is not set
# CONFIG_KEYBOARD_SUNKBD is not set
# CONFIG_KEYBOARD_OMAP4 is not set
# CONFIG_KEYBOARD_XTKBD is not set
CONFIG_INPUT_MOUSE=y
CONFIG_MOUSE_PS2=y
CONFIG_MOUSE_PS2_ALPS=y
CONFIG_MOUSE_PS2_LOGIPS2PP=y
CONFIG_MOUSE_PS2_SYNAPTICS=y
CONFIG_MOUSE_PS2_LIFEBOOK=y
CONFIG_MOUSE_PS2_TRACKPOINT=y
# CONFIG_MOUSE_PS2_ELANTECH is not set
# CONFIG_MOUSE_PS2_SENTELIC is not set
# CONFIG_MOUSE_PS2_TOUCHKIT is not set
# CONFIG_MOUSE_SERIAL is not set
# CONFIG_MOUSE_APPLETOUCH is not set
# CONFIG_MOUSE_BCM5974 is not set
# CONFIG_MOUSE_VSXXXAA is not set
# CONFIG_MOUSE_SYNAPTICS_USB is not set
# CONFIG_INPUT_JOYSTICK is not set
# CONFIG_INPUT_TABLET is not set
# CONFIG_INPUT_TOUCHSCREEN is not set
CONFIG_INPUT_MISC=y
# CONFIG_INPUT_AD714X is not set
CONFIG_INPUT_PCSPKR=m
# CONFIG_INPUT_ATLAS_BTNS is not set
# CONFIG_INPUT_ATI_REMOTE2 is not set
# CONFIG_INPUT_KEYSPAN_REMOTE is not set
# CONFIG_INPUT_POWERMATE is not set
# CONFIG_INPUT_YEALINK is not set
# CONFIG_INPUT_CM109 is not set
# CONFIG_INPUT_UINPUT is not set
# CONFIG_INPUT_ADXL34X is not set
# CONFIG_INPUT_CMA3000 is not set
CONFIG_INPUT_XEN_KBDDEV_FRONTEND=m

#
# Hardware I/O ports
#
CONFIG_SERIO=y
CONFIG_SERIO_I8042=y
# CONFIG_SERIO_SERPORT is not set
# CONFIG_SERIO_CT82C710 is not set
# CONFIG_SERIO_PCIPS2 is not set
CONFIG_SERIO_LIBPS2=y
# CONFIG_SERIO_RAW is not set
# CONFIG_SERIO_ALTERA_PS2 is not set
# CONFIG_SERIO_PS2MULT is not set
# CONFIG_GAMEPORT is not set

#
# Character devices
#
CONFIG_VT=y
CONFIG_CONSOLE_TRANSLATIONS=y
CONFIG_VT_CONSOLE=y
CONFIG_VT_CONSOLE_SLEEP=y
CONFIG_HW_CONSOLE=y
CONFIG_VT_HW_CONSOLE_BINDING=y
CONFIG_UNIX98_PTYS=y
# CONFIG_DEVPTS_MULTIPLE_INSTANCES is not set
# CONFIG_LEGACY_PTYS is not set
# CONFIG_SERIAL_NONSTANDARD is not set
# CONFIG_NOZOMI is not set
# CONFIG_N_GSM is not set
# CONFIG_TRACE_SINK is not set
# CONFIG_DEVKMEM is not set

#
# Serial drivers
#
CONFIG_SERIAL_8250=m
CONFIG_FIX_EARLYCON_MEM=y
CONFIG_SERIAL_8250_PCI=m
CONFIG_SERIAL_8250_PNP=m
CONFIG_SERIAL_8250_NR_UARTS=4
CONFIG_SERIAL_8250_RUNTIME_UARTS=4
# CONFIG_SERIAL_8250_EXTENDED is not set

#
# Non-8250 serial port support
#
# CONFIG_SERIAL_MFD_HSU is not set
CONFIG_SERIAL_CORE=m
# CONFIG_SERIAL_JSM is not set
# CONFIG_SERIAL_TIMBERDALE is not set
# CONFIG_SERIAL_ALTERA_JTAGUART is not set
# CONFIG_SERIAL_ALTERA_UART is not set
# CONFIG_SERIAL_PCH_UART is not set
# CONFIG_SERIAL_XILINX_PS_UART is not set
CONFIG_HVC_DRIVER=y
CONFIG_HVC_IRQ=y
CONFIG_HVC_XEN=y
CONFIG_HVC_XEN_FRONTEND=y
CONFIG_IPMI_HANDLER=m
CONFIG_IPMI_PANIC_EVENT=y
CONFIG_IPMI_PANIC_STRING=y
CONFIG_IPMI_DEVICE_INTERFACE=m
CONFIG_IPMI_SI=m
CONFIG_IPMI_WATCHDOG=m
CONFIG_IPMI_POWEROFF=m
# CONFIG_HW_RANDOM is not set
CONFIG_NVRAM=y
# CONFIG_R3964 is not set
# CONFIG_APPLICOM is not set
# CONFIG_MWAVE is not set
# CONFIG_RAW_DRIVER is not set
# CONFIG_HPET is not set
# CONFIG_HANGCHECK_TIMER is not set
# CONFIG_TCG_TPM is not set
# CONFIG_TELCLOCK is not set
CONFIG_DEVPORT=y
# CONFIG_RAMOOPS is not set
# CONFIG_I2C is not set
# CONFIG_SPI is not set
# CONFIG_HSI is not set

#
# PPS support
#
# CONFIG_PPS is not set

#
# PPS generators support
#

#
# PTP clock support
#

#
# Enable Device Drivers -> PPS to see the PTP clock options.
#
CONFIG_ARCH_WANT_OPTIONAL_GPIOLIB=y
# CONFIG_GPIOLIB is not set
# CONFIG_W1 is not set
# CONFIG_POWER_SUPPLY is not set
CONFIG_HWMON=m
# CONFIG_HWMON_VID is not set
# CONFIG_HWMON_DEBUG_CHIP is not set

#
# Native drivers
#
# CONFIG_SENSORS_ABITUGURU is not set
# CONFIG_SENSORS_ABITUGURU3 is not set
CONFIG_SENSORS_K8TEMP=m
# CONFIG_SENSORS_K10TEMP is not set
# CONFIG_SENSORS_FAM15H_POWER is not set
# CONFIG_SENSORS_I5K_AMB is not set
# CONFIG_SENSORS_F71805F is not set
# CONFIG_SENSORS_F71882FG is not set
# CONFIG_SENSORS_CORETEMP is not set
# CONFIG_SENSORS_IBMAEM is not set
# CONFIG_SENSORS_IBMPEX is not set
# CONFIG_SENSORS_IT87 is not set
# CONFIG_SENSORS_NTC_THERMISTOR is not set
# CONFIG_SENSORS_PC87360 is not set
# CONFIG_SENSORS_PC87427 is not set
# CONFIG_SENSORS_SIS5595 is not set
# CONFIG_SENSORS_SMSC47M1 is not set
# CONFIG_SENSORS_SMSC47B397 is not set
# CONFIG_SENSORS_SCH56XX_COMMON is not set
# CONFIG_SENSORS_SCH5627 is not set
# CONFIG_SENSORS_SCH5636 is not set
# CONFIG_SENSORS_VIA_CPUTEMP is not set
# CONFIG_SENSORS_VIA686A is not set
# CONFIG_SENSORS_VT1211 is not set
# CONFIG_SENSORS_VT8231 is not set
# CONFIG_SENSORS_W83627HF is not set
# CONFIG_SENSORS_W83627EHF is not set
# CONFIG_SENSORS_APPLESMC is not set

#
# ACPI drivers
#
# CONFIG_SENSORS_ACPI_POWER is not set
# CONFIG_SENSORS_ATK0110 is not set
CONFIG_THERMAL=m
CONFIG_THERMAL_HWMON=y
CONFIG_WATCHDOG=y
CONFIG_WATCHDOG_CORE=y
# CONFIG_WATCHDOG_NOWAYOUT is not set

#
# Watchdog Device Drivers
#
# CONFIG_SOFT_WATCHDOG is not set
# CONFIG_ACQUIRE_WDT is not set
# CONFIG_ADVANTECH_WDT is not set
# CONFIG_ALIM1535_WDT is not set
# CONFIG_ALIM7101_WDT is not set
# CONFIG_F71808E_WDT is not set
# CONFIG_SP5100_TCO is not set
# CONFIG_SC520_WDT is not set
# CONFIG_SBC_FITPC2_WATCHDOG is not set
# CONFIG_EUROTECH_WDT is not set
# CONFIG_IB700_WDT is not set
# CONFIG_IBMASR is not set
# CONFIG_WAFER_WDT is not set
# CONFIG_I6300ESB_WDT is not set
# CONFIG_ITCO_WDT is not set
# CONFIG_IT8712F_WDT is not set
# CONFIG_IT87_WDT is not set
CONFIG_HP_WATCHDOG=m
CONFIG_HPWDT_NMI_DECODING=y
# CONFIG_SC1200_WDT is not set
# CONFIG_PC87413_WDT is not set
# CONFIG_NV_TCO is not set
# CONFIG_60XX_WDT is not set
# CONFIG_SBC8360_WDT is not set
# CONFIG_CPU5_WDT is not set
# CONFIG_SMSC_SCH311X_WDT is not set
# CONFIG_SMSC37B787_WDT is not set
# CONFIG_VIA_WDT is not set
# CONFIG_W83627HF_WDT is not set
# CONFIG_W83697HF_WDT is not set
# CONFIG_W83697UG_WDT is not set
# CONFIG_W83877F_WDT is not set
# CONFIG_W83977F_WDT is not set
# CONFIG_MACHZ_WDT is not set
# CONFIG_SBC_EPX_C3_WATCHDOG is not set
CONFIG_XEN_WDT=m

#
# PCI-based Watchdog Cards
#
# CONFIG_PCIPCWATCHDOG is not set
# CONFIG_WDTPCI is not set

#
# USB-based Watchdog Cards
#
# CONFIG_USBPCWATCHDOG is not set
CONFIG_SSB_POSSIBLE=y

#
# Sonics Silicon Backplane
#
# CONFIG_SSB is not set
CONFIG_BCMA_POSSIBLE=y

#
# Broadcom specific AMBA
#
# CONFIG_BCMA is not set

#
# Multifunction device drivers
#
# CONFIG_MFD_CORE is not set
# CONFIG_MFD_SM501 is not set
# CONFIG_HTC_PASIC3 is not set
# CONFIG_MFD_TMIO is not set
# CONFIG_ABX500_CORE is not set
# CONFIG_MFD_CS5535 is not set
# CONFIG_LPC_SCH is not set
# CONFIG_MFD_RDC321X is not set
# CONFIG_MFD_JANZ_CMODIO is not set
# CONFIG_MFD_VX855 is not set
# CONFIG_REGULATOR is not set
# CONFIG_MEDIA_SUPPORT is not set

#
# Graphics support
#
CONFIG_AGP=m
CONFIG_AGP_AMD64=m
# CONFIG_AGP_INTEL is not set
# CONFIG_AGP_SIS is not set
# CONFIG_AGP_VIA is not set
CONFIG_VGA_ARB=y
CONFIG_VGA_ARB_MAX_GPUS=16
# CONFIG_VGA_SWITCHEROO is not set
# CONFIG_DRM is not set
# CONFIG_STUB_POULSBO is not set
CONFIG_VGASTATE=m
# CONFIG_VIDEO_OUTPUT_CONTROL is not set
CONFIG_FB=y
# CONFIG_FIRMWARE_EDID is not set
# CONFIG_FB_DDC is not set
CONFIG_FB_BOOT_VESA_SUPPORT=y
CONFIG_FB_CFB_FILLRECT=y
CONFIG_FB_CFB_COPYAREA=y
CONFIG_FB_CFB_IMAGEBLIT=y
# CONFIG_FB_CFB_REV_PIXELS_IN_BYTE is not set
# CONFIG_FB_SYS_FILLRECT is not set
# CONFIG_FB_SYS_COPYAREA is not set
# CONFIG_FB_SYS_IMAGEBLIT is not set
# CONFIG_FB_FOREIGN_ENDIAN is not set
# CONFIG_FB_SYS_FOPS is not set
# CONFIG_FB_WMT_GE_ROPS is not set
# CONFIG_FB_SVGALIB is not set
# CONFIG_FB_MACMODES is not set
# CONFIG_FB_BACKLIGHT is not set
CONFIG_FB_MODE_HELPERS=y
# CONFIG_FB_TILEBLITTING is not set

#
# Frame buffer hardware drivers
#
# CONFIG_FB_CIRRUS is not set
# CONFIG_FB_PM2 is not set
# CONFIG_FB_CYBER2000 is not set
# CONFIG_FB_ARC is not set
# CONFIG_FB_ASILIANT is not set
# CONFIG_FB_IMSTT is not set
CONFIG_FB_VGA16=m
CONFIG_FB_VESA=y
# CONFIG_FB_N411 is not set
# CONFIG_FB_HGA is not set
# CONFIG_FB_S1D13XXX is not set
# CONFIG_FB_NVIDIA is not set
# CONFIG_FB_RIVA is not set
# CONFIG_FB_I740 is not set
# CONFIG_FB_LE80578 is not set
# CONFIG_FB_MATROX is not set
# CONFIG_FB_RADEON is not set
# CONFIG_FB_ATY128 is not set
# CONFIG_FB_ATY is not set
# CONFIG_FB_S3 is not set
# CONFIG_FB_SAVAGE is not set
# CONFIG_FB_SIS is not set
# CONFIG_FB_VIA is not set
# CONFIG_FB_NEOMAGIC is not set
# CONFIG_FB_KYRO is not set
# CONFIG_FB_3DFX is not set
# CONFIG_FB_VOODOO1 is not set
# CONFIG_FB_VT8623 is not set
# CONFIG_FB_TRIDENT is not set
# CONFIG_FB_ARK is not set
# CONFIG_FB_PM3 is not set
# CONFIG_FB_CARMINE is not set
# CONFIG_FB_GEODE is not set
# CONFIG_FB_SMSCUFX is not set
# CONFIG_FB_UDL is not set
# CONFIG_FB_VIRTUAL is not set
# CONFIG_XEN_FBDEV_FRONTEND is not set
# CONFIG_FB_METRONOME is not set
# CONFIG_FB_MB862XX is not set
# CONFIG_FB_BROADSHEET is not set
# CONFIG_EXYNOS_VIDEO is not set
# CONFIG_BACKLIGHT_LCD_SUPPORT is not set

#
# Console display driver support
#
CONFIG_VGA_CONSOLE=y
# CONFIG_VGACON_SOFT_SCROLLBACK is not set
CONFIG_DUMMY_CONSOLE=y
CONFIG_FRAMEBUFFER_CONSOLE=y
CONFIG_FRAMEBUFFER_CONSOLE_DETECT_PRIMARY=y
# CONFIG_FRAMEBUFFER_CONSOLE_ROTATION is not set
# CONFIG_FONTS is not set
CONFIG_FONT_8x8=y
CONFIG_FONT_8x16=y
CONFIG_LOGO=y
CONFIG_LOGO_LINUX_MONO=y
CONFIG_LOGO_LINUX_VGA16=y
CONFIG_LOGO_LINUX_CLUT224=y
# CONFIG_SOUND is not set
CONFIG_HID_SUPPORT=y
CONFIG_HID=y
# CONFIG_HIDRAW is not set

#
# USB Input Devices
#
CONFIG_USB_HID=m
# CONFIG_HID_PID is not set
# CONFIG_USB_HIDDEV is not set

#
# Special HID drivers
#
CONFIG_HID_A4TECH=m
# CONFIG_HID_ACRUX is not set
CONFIG_HID_APPLE=m
CONFIG_HID_BELKIN=m
CONFIG_HID_CHERRY=m
CONFIG_HID_CHICONY=m
CONFIG_HID_CYPRESS=m
# CONFIG_HID_DRAGONRISE is not set
# CONFIG_HID_EMS_FF is not set
CONFIG_HID_EZKEY=m
# CONFIG_HID_HOLTEK is not set
# CONFIG_HID_KEYTOUCH is not set
# CONFIG_HID_KYE is not set
# CONFIG_HID_UCLOGIC is not set
# CONFIG_HID_WALTOP is not set
# CONFIG_HID_GYRATION is not set
# CONFIG_HID_TWINHAN is not set
CONFIG_HID_KENSINGTON=m
# CONFIG_HID_LCPOWER is not set
CONFIG_HID_LOGITECH=m
# CONFIG_HID_LOGITECH_DJ is not set
# CONFIG_LOGITECH_FF is not set
# CONFIG_LOGIRUMBLEPAD2_FF is not set
# CONFIG_LOGIG940_FF is not set
# CONFIG_LOGIWHEELS_FF is not set
CONFIG_HID_MICROSOFT=m
CONFIG_HID_MONTEREY=m
# CONFIG_HID_MULTITOUCH is not set
# CONFIG_HID_NTRIG is not set
# CONFIG_HID_ORTEK is not set
# CONFIG_HID_PANTHERLORD is not set
# CONFIG_HID_PETALYNX is not set
# CONFIG_HID_PICOLCD is not set
# CONFIG_HID_PRIMAX is not set
# CONFIG_HID_ROCCAT is not set
# CONFIG_HID_SAITEK is not set
# CONFIG_HID_SAMSUNG is not set
# CONFIG_HID_SONY is not set
# CONFIG_HID_SPEEDLINK is not set
# CONFIG_HID_SUNPLUS is not set
# CONFIG_HID_GREENASIA is not set
# CONFIG_HID_SMARTJOYPLUS is not set
# CONFIG_HID_TIVO is not set
# CONFIG_HID_TOPSEED is not set
# CONFIG_HID_THRUSTMASTER is not set
# CONFIG_HID_ZEROPLUS is not set
# CONFIG_HID_ZYDACRON is not set
CONFIG_USB_ARCH_HAS_OHCI=y
CONFIG_USB_ARCH_HAS_EHCI=y
CONFIG_USB_ARCH_HAS_XHCI=y
CONFIG_USB_SUPPORT=y
CONFIG_USB_COMMON=m
CONFIG_USB_ARCH_HAS_HCD=y
CONFIG_USB=m
# CONFIG_USB_DEBUG is not set
CONFIG_USB_ANNOUNCE_NEW_DEVICES=y

#
# Miscellaneous USB options
#
CONFIG_USB_DEVICEFS=y
# CONFIG_USB_DEVICE_CLASS is not set
# CONFIG_USB_DYNAMIC_MINORS is not set
CONFIG_USB_SUSPEND=y
# CONFIG_USB_OTG is not set
CONFIG_USB_MON=m
# CONFIG_USB_WUSB_CBAF is not set

#
# USB Host Controller Drivers
#
# CONFIG_USB_C67X00_HCD is not set
# CONFIG_USB_XHCI_HCD is not set
CONFIG_USB_EHCI_HCD=m
# CONFIG_USB_EHCI_ROOT_HUB_TT is not set
# CONFIG_USB_EHCI_TT_NEWSCHED is not set
# CONFIG_USB_OXU210HP_HCD is not set
# CONFIG_USB_ISP116X_HCD is not set
# CONFIG_USB_ISP1760_HCD is not set
# CONFIG_USB_ISP1362_HCD is not set
CONFIG_USB_OHCI_HCD=m
# CONFIG_USB_OHCI_HCD_PLATFORM is not set
# CONFIG_USB_EHCI_HCD_PLATFORM is not set
# CONFIG_USB_OHCI_BIG_ENDIAN_DESC is not set
# CONFIG_USB_OHCI_BIG_ENDIAN_MMIO is not set
CONFIG_USB_OHCI_LITTLE_ENDIAN=y
CONFIG_USB_UHCI_HCD=m
# CONFIG_USB_SL811_HCD is not set
# CONFIG_USB_R8A66597_HCD is not set
# CONFIG_XEN_USBDEV_FRONTEND is not set
CONFIG_XEN_USBDEV_BACKEND=m

#
# USB Device Class drivers
#
# CONFIG_USB_ACM is not set
# CONFIG_USB_PRINTER is not set
# CONFIG_USB_WDM is not set
# CONFIG_USB_TMC is not set

#
# NOTE: USB_STORAGE depends on SCSI but BLK_DEV_SD may
#

#
# also be needed; see USB_STORAGE Help for more info
#
CONFIG_USB_STORAGE=m
# CONFIG_USB_STORAGE_DEBUG is not set
# CONFIG_USB_STORAGE_REALTEK is not set
# CONFIG_USB_STORAGE_DATAFAB is not set
# CONFIG_USB_STORAGE_FREECOM is not set
# CONFIG_USB_STORAGE_ISD200 is not set
# CONFIG_USB_STORAGE_USBAT is not set
# CONFIG_USB_STORAGE_SDDR09 is not set
# CONFIG_USB_STORAGE_SDDR55 is not set
# CONFIG_USB_STORAGE_JUMPSHOT is not set
# CONFIG_USB_STORAGE_ALAUDA is not set
# CONFIG_USB_STORAGE_ONETOUCH is not set
# CONFIG_USB_STORAGE_KARMA is not set
# CONFIG_USB_STORAGE_CYPRESS_ATACB is not set
# CONFIG_USB_STORAGE_ENE_UB6250 is not set
# CONFIG_USB_UAS is not set
# CONFIG_USB_LIBUSUAL is not set

#
# USB Imaging devices
#
# CONFIG_USB_MDC800 is not set
# CONFIG_USB_MICROTEK is not set

#
# USB port drivers
#
# CONFIG_USB_SERIAL is not set

#
# USB Miscellaneous drivers
#
# CONFIG_USB_EMI62 is not set
# CONFIG_USB_EMI26 is not set
# CONFIG_USB_ADUTUX is not set
# CONFIG_USB_SEVSEG is not set
# CONFIG_USB_RIO500 is not set
# CONFIG_USB_LEGOTOWER is not set
# CONFIG_USB_LCD is not set
# CONFIG_USB_LED is not set
# CONFIG_USB_CYPRESS_CY7C63 is not set
# CONFIG_USB_CYTHERM is not set
# CONFIG_USB_IDMOUSE is not set
# CONFIG_USB_FTDI_ELAN is not set
# CONFIG_USB_APPLEDISPLAY is not set
# CONFIG_USB_SISUSBVGA is not set
# CONFIG_USB_LD is not set
# CONFIG_USB_TRANCEVIBRATOR is not set
# CONFIG_USB_IOWARRIOR is not set
# CONFIG_USB_TEST is not set
# CONFIG_USB_ISIGHTFW is not set
# CONFIG_USB_YUREX is not set
# CONFIG_USB_GADGET is not set

#
# OTG and related infrastructure
#
# CONFIG_NOP_USB_XCEIV is not set
# CONFIG_UWB is not set
# CONFIG_MMC is not set
# CONFIG_MEMSTICK is not set
# CONFIG_NEW_LEDS is not set
# CONFIG_ACCESSIBILITY is not set
# CONFIG_INFINIBAND is not set
CONFIG_EDAC=y

#
# Reporting subsystems
#
# CONFIG_EDAC_DEBUG is not set
CONFIG_EDAC_DECODE_MCE=m
CONFIG_EDAC_MCE_INJ=m
CONFIG_EDAC_MM_EDAC=m
CONFIG_EDAC_AMD64=m
# CONFIG_EDAC_AMD64_ERROR_INJECTION is not set
# CONFIG_EDAC_E752X is not set
# CONFIG_EDAC_I82975X is not set
# CONFIG_EDAC_I3000 is not set
# CONFIG_EDAC_I3200 is not set
# CONFIG_EDAC_X38 is not set
# CONFIG_EDAC_I5400 is not set
# CONFIG_EDAC_I5000 is not set
# CONFIG_EDAC_I5100 is not set
# CONFIG_EDAC_I7300 is not set
CONFIG_RTC_LIB=y
CONFIG_RTC_CLASS=y
CONFIG_RTC_HCTOSYS=y
CONFIG_RTC_HCTOSYS_DEVICE="rtc0"
# CONFIG_RTC_DEBUG is not set

#
# RTC interfaces
#
CONFIG_RTC_INTF_SYSFS=y
CONFIG_RTC_INTF_PROC=y
CONFIG_RTC_INTF_DEV=y
# CONFIG_RTC_INTF_DEV_UIE_EMUL is not set
# CONFIG_RTC_DRV_TEST is not set

#
# SPI RTC drivers
#

#
# Platform RTC drivers
#
CONFIG_RTC_DRV_CMOS=y
# CONFIG_RTC_DRV_DS1286 is not set
# CONFIG_RTC_DRV_DS1511 is not set
# CONFIG_RTC_DRV_DS1553 is not set
# CONFIG_RTC_DRV_DS1742 is not set
# CONFIG_RTC_DRV_STK17TA8 is not set
# CONFIG_RTC_DRV_M48T86 is not set
# CONFIG_RTC_DRV_M48T35 is not set
# CONFIG_RTC_DRV_M48T59 is not set
# CONFIG_RTC_DRV_MSM6242 is not set
# CONFIG_RTC_DRV_BQ4802 is not set
# CONFIG_RTC_DRV_RP5C01 is not set
# CONFIG_RTC_DRV_V3020 is not set

#
# on-CPU RTC drivers
#
# CONFIG_DMADEVICES is not set
# CONFIG_AUXDISPLAY is not set
# CONFIG_UIO is not set

#
# Virtio drivers
#
# CONFIG_VIRTIO_PCI is not set
# CONFIG_VIRTIO_BALLOON is not set
# CONFIG_VIRTIO_MMIO is not set

#
# Microsoft Hyper-V guest support
#
# CONFIG_HYPERV is not set

#
# Xen driver support
#
CONFIG_XEN_BALLOON=y
CONFIG_XEN_SELFBALLOONING=y
CONFIG_XEN_SCRUB_PAGES=y
CONFIG_XEN_DEV_EVTCHN=m
CONFIG_XEN_BACKEND=y
CONFIG_XENFS=m
CONFIG_XEN_COMPAT_XENFS=y
CONFIG_XEN_SYS_HYPERVISOR=y
CONFIG_XEN_XENBUS_FRONTEND=y
CONFIG_XEN_GNTDEV=m
CONFIG_XEN_GRANT_DEV_ALLOC=m
CONFIG_SWIOTLB_XEN=y
CONFIG_XEN_TMEM=y
CONFIG_XEN_PCIDEV_BACKEND=m
CONFIG_XEN_PRIVCMD=m
CONFIG_XEN_ACPI_PROCESSOR=m
# CONFIG_STAGING is not set
# CONFIG_X86_PLATFORM_DEVICES is not set

#
# Hardware Spinlock drivers
#
CONFIG_CLKEVT_I8253=y
CONFIG_I8253_LOCK=y
CONFIG_CLKBLD_I8253=y
CONFIG_IOMMU_API=y
CONFIG_IOMMU_SUPPORT=y
CONFIG_AMD_IOMMU=y
# CONFIG_AMD_IOMMU_STATS is not set
# CONFIG_INTEL_IOMMU is not set
# CONFIG_IRQ_REMAP is not set

#
# Remoteproc drivers (EXPERIMENTAL)
#

#
# Rpmsg drivers (EXPERIMENTAL)
#
# CONFIG_VIRT_DRIVERS is not set
# CONFIG_PM_DEVFREQ is not set

#
# Firmware Drivers
#
# CONFIG_EDD is not set
CONFIG_FIRMWARE_MEMMAP=y
# CONFIG_DELL_RBU is not set
# CONFIG_DCDBAS is not set
# CONFIG_DMIID is not set
CONFIG_DMI_SYSFS=m
CONFIG_ISCSI_IBFT_FIND=y
# CONFIG_ISCSI_IBFT is not set
# CONFIG_GOOGLE_FIRMWARE is not set

#
# File systems
#
CONFIG_DCACHE_WORD_ACCESS=y
# CONFIG_EXT2_FS is not set
# CONFIG_EXT3_FS is not set
CONFIG_EXT4_FS=m
CONFIG_EXT4_USE_FOR_EXT23=y
CONFIG_EXT4_FS_XATTR=y
CONFIG_EXT4_FS_POSIX_ACL=y
CONFIG_EXT4_FS_SECURITY=y
# CONFIG_EXT4_DEBUG is not set
CONFIG_JBD2=m
CONFIG_FS_MBCACHE=m
# CONFIG_REISERFS_FS is not set
# CONFIG_JFS_FS is not set
# CONFIG_XFS_FS is not set
# CONFIG_GFS2_FS is not set
# CONFIG_OCFS2_FS is not set
# CONFIG_BTRFS_FS is not set
# CONFIG_NILFS2_FS is not set
CONFIG_FS_POSIX_ACL=y
CONFIG_FILE_LOCKING=y
CONFIG_FSNOTIFY=y
CONFIG_DNOTIFY=y
CONFIG_INOTIFY_USER=y
CONFIG_FANOTIFY=y
# CONFIG_QUOTA is not set
# CONFIG_QUOTACTL is not set
CONFIG_AUTOFS4_FS=m
# CONFIG_FUSE_FS is not set
CONFIG_GENERIC_ACL=y

#
# Caches
#
# CONFIG_FSCACHE is not set

#
# CD-ROM/DVD Filesystems
#
CONFIG_ISO9660_FS=m
CONFIG_JOLIET=y
CONFIG_ZISOFS=y
CONFIG_UDF_FS=m
CONFIG_UDF_NLS=y

#
# DOS/FAT/NT Filesystems
#
# CONFIG_MSDOS_FS is not set
# CONFIG_VFAT_FS is not set
# CONFIG_NTFS_FS is not set

#
# Pseudo filesystems
#
CONFIG_PROC_FS=y
CONFIG_PROC_KCORE=y
CONFIG_PROC_SYSCTL=y
CONFIG_PROC_PAGE_MONITOR=y
CONFIG_SYSFS=y
CONFIG_TMPFS=y
CONFIG_TMPFS_POSIX_ACL=y
CONFIG_TMPFS_XATTR=y
CONFIG_HUGETLBFS=y
CONFIG_HUGETLB_PAGE=y
CONFIG_CONFIGFS_FS=y
CONFIG_MISC_FILESYSTEMS=y
# CONFIG_ADFS_FS is not set
# CONFIG_AFFS_FS is not set
# CONFIG_HFS_FS is not set
# CONFIG_HFSPLUS_FS is not set
# CONFIG_BEFS_FS is not set
# CONFIG_BFS_FS is not set
# CONFIG_EFS_FS is not set
# CONFIG_LOGFS is not set
# CONFIG_CRAMFS is not set
# CONFIG_SQUASHFS is not set
# CONFIG_VXFS_FS is not set
# CONFIG_MINIX_FS is not set
# CONFIG_OMFS_FS is not set
# CONFIG_HPFS_FS is not set
# CONFIG_QNX4FS_FS is not set
# CONFIG_QNX6FS_FS is not set
# CONFIG_ROMFS_FS is not set
CONFIG_PSTORE=y
# CONFIG_SYSV_FS is not set
# CONFIG_UFS_FS is not set
# CONFIG_NETWORK_FILESYSTEMS is not set
CONFIG_NLS=y
CONFIG_NLS_DEFAULT="utf8"
CONFIG_NLS_CODEPAGE_437=m
# CONFIG_NLS_CODEPAGE_737 is not set
# CONFIG_NLS_CODEPAGE_775 is not set
CONFIG_NLS_CODEPAGE_850=m
CONFIG_NLS_CODEPAGE_852=m
# CONFIG_NLS_CODEPAGE_855 is not set
# CONFIG_NLS_CODEPAGE_857 is not set
# CONFIG_NLS_CODEPAGE_860 is not set
# CONFIG_NLS_CODEPAGE_861 is not set
# CONFIG_NLS_CODEPAGE_862 is not set
# CONFIG_NLS_CODEPAGE_863 is not set
# CONFIG_NLS_CODEPAGE_864 is not set
# CONFIG_NLS_CODEPAGE_865 is not set
# CONFIG_NLS_CODEPAGE_866 is not set
# CONFIG_NLS_CODEPAGE_869 is not set
# CONFIG_NLS_CODEPAGE_936 is not set
# CONFIG_NLS_CODEPAGE_950 is not set
# CONFIG_NLS_CODEPAGE_932 is not set
# CONFIG_NLS_CODEPAGE_949 is not set
# CONFIG_NLS_CODEPAGE_874 is not set
# CONFIG_NLS_ISO8859_8 is not set
CONFIG_NLS_CODEPAGE_1250=m
# CONFIG_NLS_CODEPAGE_1251 is not set
CONFIG_NLS_ASCII=m
CONFIG_NLS_ISO8859_1=m
CONFIG_NLS_ISO8859_2=m
# CONFIG_NLS_ISO8859_3 is not set
# CONFIG_NLS_ISO8859_4 is not set
# CONFIG_NLS_ISO8859_5 is not set
# CONFIG_NLS_ISO8859_6 is not set
# CONFIG_NLS_ISO8859_7 is not set
# CONFIG_NLS_ISO8859_9 is not set
# CONFIG_NLS_ISO8859_13 is not set
# CONFIG_NLS_ISO8859_14 is not set
# CONFIG_NLS_ISO8859_15 is not set
# CONFIG_NLS_KOI8_R is not set
# CONFIG_NLS_KOI8_U is not set
CONFIG_NLS_UTF8=m
# CONFIG_DLM is not set

#
# Kernel hacking
#
CONFIG_TRACE_IRQFLAGS_SUPPORT=y
# CONFIG_PRINTK_TIME is not set
CONFIG_DEFAULT_MESSAGE_LOGLEVEL=4
# CONFIG_ENABLE_WARN_DEPRECATED is not set
# CONFIG_ENABLE_MUST_CHECK is not set
CONFIG_FRAME_WARN=2048
# CONFIG_MAGIC_SYSRQ is not set
# CONFIG_STRIP_ASM_SYMS is not set
# CONFIG_UNUSED_SYMBOLS is not set
# CONFIG_DEBUG_FS is not set
# CONFIG_HEADERS_CHECK is not set
# CONFIG_DEBUG_SECTION_MISMATCH is not set
# CONFIG_DEBUG_KERNEL is not set
# CONFIG_HARDLOCKUP_DETECTOR is not set
# CONFIG_SPARSE_RCU_POINTER is not set
CONFIG_DEBUG_BUGVERBOSE=y
CONFIG_DEBUG_MEMORY_INIT=y
CONFIG_ARCH_WANT_FRAME_POINTERS=y
# CONFIG_FRAME_POINTER is not set
CONFIG_RCU_CPU_STALL_TIMEOUT=60
CONFIG_USER_STACKTRACE_SUPPORT=y
CONFIG_HAVE_FUNCTION_TRACER=y
CONFIG_HAVE_FUNCTION_GRAPH_TRACER=y
CONFIG_HAVE_FUNCTION_GRAPH_FP_TEST=y
CONFIG_HAVE_FUNCTION_TRACE_MCOUNT_TEST=y
CONFIG_HAVE_DYNAMIC_FTRACE=y
CONFIG_HAVE_FTRACE_MCOUNT_RECORD=y
CONFIG_HAVE_SYSCALL_TRACEPOINTS=y
CONFIG_HAVE_C_RECORDMCOUNT=y
CONFIG_TRACING_SUPPORT=y
# CONFIG_FTRACE is not set
# CONFIG_PROVIDE_OHCI1394_DMA_INIT is not set
# CONFIG_DMA_API_DEBUG is not set
# CONFIG_ATOMIC64_SELFTEST is not set
# CONFIG_SAMPLES is not set
CONFIG_HAVE_ARCH_KGDB=y
CONFIG_HAVE_ARCH_KMEMCHECK=y
# CONFIG_TEST_KSTRTOX is not set
# CONFIG_STRICT_DEVMEM is not set
CONFIG_X86_VERBOSE_BOOTUP=y
CONFIG_EARLY_PRINTK=y
# CONFIG_EARLY_PRINTK_DBGP is not set
# CONFIG_DEBUG_SET_MODULE_RONX is not set
# CONFIG_IOMMU_STRESS is not set
CONFIG_HAVE_MMIOTRACE_SUPPORT=y
CONFIG_IO_DELAY_TYPE_0X80=0
CONFIG_IO_DELAY_TYPE_0XED=1
CONFIG_IO_DELAY_TYPE_UDELAY=2
CONFIG_IO_DELAY_TYPE_NONE=3
CONFIG_IO_DELAY_0X80=y
# CONFIG_IO_DELAY_0XED is not set
# CONFIG_IO_DELAY_UDELAY is not set
# CONFIG_IO_DELAY_NONE is not set
CONFIG_DEFAULT_IO_DELAY_TYPE=0
# CONFIG_OPTIMIZE_INLINING is not set

#
# Security options
#
# CONFIG_KEYS is not set
# CONFIG_SECURITY_DMESG_RESTRICT is not set
# CONFIG_SECURITY is not set
# CONFIG_SECURITYFS is not set
CONFIG_DEFAULT_SECURITY_DAC=y
CONFIG_DEFAULT_SECURITY=""
CONFIG_CRYPTO=y

#
# Crypto core or helper
#
CONFIG_CRYPTO_ALGAPI=y
CONFIG_CRYPTO_ALGAPI2=y
CONFIG_CRYPTO_AEAD2=y
CONFIG_CRYPTO_BLKCIPHER=m
CONFIG_CRYPTO_BLKCIPHER2=y
CONFIG_CRYPTO_HASH=y
CONFIG_CRYPTO_HASH2=y
CONFIG_CRYPTO_RNG=m
CONFIG_CRYPTO_RNG2=y
CONFIG_CRYPTO_PCOMP=m
CONFIG_CRYPTO_PCOMP2=y
CONFIG_CRYPTO_MANAGER=y
CONFIG_CRYPTO_MANAGER2=y
# CONFIG_CRYPTO_USER is not set
CONFIG_CRYPTO_MANAGER_DISABLE_TESTS=y
# CONFIG_CRYPTO_GF128MUL is not set
CONFIG_CRYPTO_NULL=m
# CONFIG_CRYPTO_PCRYPT is not set
CONFIG_CRYPTO_WORKQUEUE=y
# CONFIG_CRYPTO_CRYPTD is not set
# CONFIG_CRYPTO_AUTHENC is not set
# CONFIG_CRYPTO_TEST is not set

#
# Authenticated Encryption with Associated Data
#
# CONFIG_CRYPTO_CCM is not set
# CONFIG_CRYPTO_GCM is not set
# CONFIG_CRYPTO_SEQIV is not set

#
# Block modes
#
# CONFIG_CRYPTO_CBC is not set
# CONFIG_CRYPTO_CTR is not set
# CONFIG_CRYPTO_CTS is not set
# CONFIG_CRYPTO_ECB is not set
# CONFIG_CRYPTO_LRW is not set
# CONFIG_CRYPTO_PCBC is not set
# CONFIG_CRYPTO_XTS is not set

#
# Hash modes
#
# CONFIG_CRYPTO_HMAC is not set
# CONFIG_CRYPTO_XCBC is not set
# CONFIG_CRYPTO_VMAC is not set

#
# Digest
#
CONFIG_CRYPTO_CRC32C=y
# CONFIG_CRYPTO_CRC32C_INTEL is not set
# CONFIG_CRYPTO_GHASH is not set
# CONFIG_CRYPTO_MD4 is not set
CONFIG_CRYPTO_MD5=m
# CONFIG_CRYPTO_MICHAEL_MIC is not set
# CONFIG_CRYPTO_RMD128 is not set
# CONFIG_CRYPTO_RMD160 is not set
# CONFIG_CRYPTO_RMD256 is not set
# CONFIG_CRYPTO_RMD320 is not set
# CONFIG_CRYPTO_SHA1 is not set
# CONFIG_CRYPTO_SHA1_SSSE3 is not set
# CONFIG_CRYPTO_SHA256 is not set
# CONFIG_CRYPTO_SHA512 is not set
# CONFIG_CRYPTO_TGR192 is not set
# CONFIG_CRYPTO_WP512 is not set
# CONFIG_CRYPTO_GHASH_CLMUL_NI_INTEL is not set

#
# Ciphers
#
CONFIG_CRYPTO_AES=y
CONFIG_CRYPTO_AES_X86_64=m
# CONFIG_CRYPTO_AES_NI_INTEL is not set
# CONFIG_CRYPTO_ANUBIS is not set
# CONFIG_CRYPTO_ARC4 is not set
CONFIG_CRYPTO_BLOWFISH=m
CONFIG_CRYPTO_BLOWFISH_COMMON=m
# CONFIG_CRYPTO_BLOWFISH_X86_64 is not set
# CONFIG_CRYPTO_CAMELLIA is not set
# CONFIG_CRYPTO_CAMELLIA_X86_64 is not set
# CONFIG_CRYPTO_CAST5 is not set
# CONFIG_CRYPTO_CAST6 is not set
CONFIG_CRYPTO_DES=m
# CONFIG_CRYPTO_FCRYPT is not set
# CONFIG_CRYPTO_KHAZAD is not set
# CONFIG_CRYPTO_SALSA20 is not set
# CONFIG_CRYPTO_SALSA20_X86_64 is not set
CONFIG_CRYPTO_SEED=m
# CONFIG_CRYPTO_SERPENT is not set
# CONFIG_CRYPTO_SERPENT_SSE2_X86_64 is not set
# CONFIG_CRYPTO_TEA is not set
# CONFIG_CRYPTO_TWOFISH is not set
CONFIG_CRYPTO_TWOFISH_COMMON=m
CONFIG_CRYPTO_TWOFISH_X86_64=m
# CONFIG_CRYPTO_TWOFISH_X86_64_3WAY is not set

#
# Compression
#
# CONFIG_CRYPTO_DEFLATE is not set
CONFIG_CRYPTO_ZLIB=m
CONFIG_CRYPTO_LZO=m

#
# Random Number Generation
#
CONFIG_CRYPTO_ANSI_CPRNG=m
CONFIG_CRYPTO_USER_API=m
CONFIG_CRYPTO_USER_API_HASH=m
CONFIG_CRYPTO_USER_API_SKCIPHER=m
# CONFIG_CRYPTO_HW is not set
CONFIG_HAVE_KVM=y
# CONFIG_VIRTUALIZATION is not set
# CONFIG_BINARY_PRINTF is not set

#
# Library routines
#
CONFIG_BITREVERSE=y
CONFIG_GENERIC_FIND_FIRST_BIT=y
CONFIG_GENERIC_PCI_IOMAP=y
CONFIG_GENERIC_IOMAP=y
CONFIG_GENERIC_IO=y
# CONFIG_CRC_CCITT is not set
CONFIG_CRC16=y
# CONFIG_CRC_T10DIF is not set
CONFIG_CRC_ITU_T=m
CONFIG_CRC32=y
# CONFIG_CRC32_SELFTEST is not set
CONFIG_CRC32_SLICEBY8=y
# CONFIG_CRC32_SLICEBY4 is not set
# CONFIG_CRC32_SARWATE is not set
# CONFIG_CRC32_BIT is not set
# CONFIG_CRC7 is not set
# CONFIG_LIBCRC32C is not set
# CONFIG_CRC8 is not set
CONFIG_ZLIB_INFLATE=y
CONFIG_ZLIB_DEFLATE=m
CONFIG_LZO_COMPRESS=m
CONFIG_LZO_DECOMPRESS=y
CONFIG_XZ_DEC=y
CONFIG_XZ_DEC_X86=y
CONFIG_XZ_DEC_POWERPC=y
CONFIG_XZ_DEC_IA64=y
CONFIG_XZ_DEC_ARM=y
CONFIG_XZ_DEC_ARMTHUMB=y
CONFIG_XZ_DEC_SPARC=y
CONFIG_XZ_DEC_BCJ=y
# CONFIG_XZ_DEC_TEST is not set
CONFIG_DECOMPRESS_GZIP=y
CONFIG_DECOMPRESS_BZIP2=y
CONFIG_DECOMPRESS_LZMA=y
CONFIG_DECOMPRESS_XZ=y
CONFIG_DECOMPRESS_LZO=y
CONFIG_GENERIC_ALLOCATOR=y
CONFIG_HAS_IOMEM=y
CONFIG_HAS_IOPORT=y
CONFIG_HAS_DMA=y
CONFIG_CPU_RMAP=y
CONFIG_DQL=y
CONFIG_NLATTR=y
# CONFIG_AVERAGE is not set
# CONFIG_CORDIC is not set

# Attachment: System.map-3.4.7

0000000000000000 A VDSO32_PRELINK
0000000000000000 D __per_cpu_start
0000000000000000 D irq_stack_union
0000000000000000 A xen_irq_disable_direct_reloc
0000000000000000 A xen_save_fl_direct_reloc
0000000000000040 A VDSO32_vsyscall_eh_frame_size
00000000000001f0 A VDSO32_NOTE_MASK
0000000000000400 A VDSO32_sigreturn
0000000000000410 A VDSO32_rt_sigreturn
0000000000000420 A VDSO32_vsyscall
0000000000000430 A VDSO32_SYSENTER_RETURN
0000000000004000 D gdt_page
0000000000005000 d exception_stacks
000000000000b000 d tlb_vector_offset
000000000000b040 d cpu_loops_per_jiffy
000000000000b080 D xen_vcpu_info
000000000000b0c0 D xen_vcpu
000000000000b0c8 d idt_desc
000000000000b0d8 d xen_cr0_value
000000000000b0e0 D xen_mc_irq_flags
000000000000b100 d mc_buffer
000000000000bd10 D xen_current_cr3
000000000000bd18 D xen_cr3
000000000000bd40 d xen_runstate
000000000000bd80 d xen_clock_events
000000000000be40 d xen_runstate_snapshot
000000000000be70 d xen_residual_stolen
000000000000be78 d xen_residual_blocked
000000000000be80 d xen_resched_irq
000000000000be84 d xen_callfunc_irq
000000000000be88 d xen_debug_irq
000000000000be8c d xen_callfuncsingle_irq
000000000000be90 d lock_kicker_irq
000000000000be98 d lock_spinners
000000000000bec0 D old_rsp
000000000000bec8 D irq_regs
000000000000bed0 d update_debug_stack
000000000000bed8 d last_nmi_rip
000000000000bee0 d swallow_nmi
000000000000bef0 d nmi_stats
000000000000bf00 D vector_irq
000000000000c300 d cpu_devices
000000000000c580 D cpu_dr7
000000000000c5a0 d bp_per_reg
000000000000c5c0 d cpu_debugreg
000000000000c5e0 D cyc2ns_offset
000000000000c5e8 D cyc2ns
000000000000c5f0 d is_idle
000000000000c600 d ici_cpuid4_info
000000000000c608 d ici_index_kobject
000000000000c610 d ici_cache_kobject
000000000000c640 D debug_stack_usage
000000000000c660 D orig_ist
000000000000c698 D fpu_owner_task
000000000000c6a0 D irq_count
000000000000c6a8 D irq_stack_ptr
000000000000c6b0 D kernel_stack
000000000000c6c0 D current_task
000000000000c6c8 d debug_stack_addr
000000000000c6d0 d old_perf_sched
000000000000c6e0 D cpu_hw_events
000000000000d600 d pmc_prev_left
000000000000d800 D mce_device
000000000000d808 D mce_poll_count
000000000000d810 D mce_irq_work
000000000000d840 D injectm
000000000000d898 D mce_poll_banks
000000000000d8a0 D mce_exception_count
000000000000d8c0 d mces_seen
000000000000d920 d mce_ring
000000000000d9c0 d mce_work
000000000000d9e0 d mce_timer
000000000000da18 d mce_next_interval
000000000000da20 d bank_map
000000000000da40 d threshold_banks
000000000000da70 D cpu_llc_shared_map
000000000000da78 D cpu_core_map
000000000000da80 D cpu_sibling_map
000000000000da88 D cpu_llc_id
000000000000da8c D cpu_state
000000000000da90 d idle_thread_array
000000000000da98 D this_cpu_off
000000000000daa0 D cpu_number
000000000000dac0 D x86_bios_cpu_apicid
000000000000dac2 D x86_cpu_to_apicid
000000000000db00 d lapic_events
000000000000dbc0 d cpu_hpet_dev
000000000000dbc8 d paravirt_lazy_mode
000000000000dbcc D x86_cpu_to_node_map
000000000000dc00 D process_counts
000000000000dc20 d printk_pending
000000000000dc40 d printk_sched_buf
000000000000de40 D softirq_work_list
000000000000dee0 D ksoftirqd
000000000000def0 d tasklet_vec
000000000000df00 d tasklet_hi_vec
000000000000df10 d tvec_bases
000000000000df40 d global_cwq
000000000000e240 D hrtimer_bases
000000000000e340 D sd_llc_id
000000000000e348 D sd_llc
000000000000e360 D kernel_cpustat
000000000000e3c0 D kstat
000000000000e3f0 D load_balance_tmpmask
000000000000e3f8 d local_cpu_mask
000000000000e400 D tick_cpu_device
000000000000e420 d tick_cpu_sched
000000000000e500 d cpu_stopper
000000000000e520 d stop_cpus_work
000000000000e560 D rcu_dynticks
000000000000e580 D rcu_bh_data
000000000000e6a0 D rcu_sched_data
000000000000e7b0 d rcu_barrier_head
000000000000e7c0 d listener_array
000000000000e7f0 d taskstats_seqnum
000000000000e7f8 d irq_work_list
000000000000e800 d perf_cgroup_events
000000000000e804 d perf_branch_stack_events
000000000000e810 d rotation_list
000000000000e820 d perf_throttled_seq
000000000000e828 d perf_throttled_count
000000000000e840 d swevent_htable
000000000000e880 d callchain_recursion
000000000000e890 d nr_cpu_bp_pinned
000000000000e898 d nr_task_bp_pinned
000000000000e8a0 d nr_bp_flexible
000000000000e8c0 D numa_node
000000000000e8e0 d boot_pageset
000000000000e948 D dirty_throttle_leaks
000000000000e94c d bdp_ratelimits
000000000000e960 d lru_rotate_pvecs
000000000000e9e0 d activate_page_pvecs
000000000000ea60 d lru_add_pvecs
000000000000ece0 d lru_deactivate_pvecs
000000000000ed60 D vm_event_states
000000000000ef60 d vmstat_work
000000000000efc0 d vmap_block_queue
000000000000efe0 d slab_reap_work
000000000000f038 d slab_reap_node
000000000000f040 d memory_failure_cpu
000000000000f180 D files_lglock_lock
000000000000f184 d nr_dentry
000000000000f188 d nr_unused
000000000000f18c d nr_inodes
000000000000f190 d last_ino
000000000000f1a0 d fdtable_defer_list
000000000000f1d0 D vfsmount_lock_lock
000000000000f1e0 d bh_lrus
000000000000f220 d bh_accounting
000000000000f230 d blk_cpu_done
000000000000f240 d blk_cpu_iopoll
000000000000f260 d radix_tree_preloads
000000000000f2c0 d net_rand_state
000000000000f2e0 d cpu_evtchn_mask
000000000000f4e0 d virq_to_irq
000000000000f540 d ipi_to_irq
000000000000f550 d xed_nesting_count
000000000000f554 d current_word_idx
000000000000f558 d current_bit_idx
000000000000f560 D get_random_int_hash
000000000000f570 d trickle_count
000000000000f578 d cpu_sys_devices
000000000000f580 d cpufreq_cpu_data
000000000000f588 d cpufreq_policy_cpu
000000000000f5a0 d cpu_policy_rwsem
000000000000f5c0 d cpufreq_cpu_governor
000000000000f5d0 d cpufreq_stats_table
000000000000f5d8 d cpufreq_show_table
000000000000f5e0 D cpuidle_devices
000000000000f600 d ladder_devices
000000000000f6e0 d sockets_in_use
000000000000f6e4 d xmit_recursion
000000000000f700 d rt_cache_stat
000000000000f740 d ipv4_cookie_scratch
000000000000f800 D init_tss
0000000000011ac0 D irq_stat
0000000000011b00 D cpu_info
0000000000011bc0 D cpu_tlbstate
0000000000011c00 d gcwq_nr_running
0000000000011c40 D runqueues
0000000000012600 d sched_clock_data
0000000000012640 d call_single_queue
0000000000012680 d cfd_data
00000000000126c0 d csd_data
0000000000012700 D softnet_data
0000000000012840 D __per_cpu_end
0000000001000000 A phys_startup_64
ffffffff81000000 T _text
ffffffff81000000 T startup_64
ffffffff810000b7 t ident_complete
ffffffff81000100 T secondary_startup_64
ffffffff8100018a t bad_address
ffffffff81000190 T _stext
ffffffff81001000 T hypercall_page
ffffffff81002000 T do_one_initcall
ffffffff81002170 t match_dev_by_uuid
ffffffff810021a0 T name_to_dev_t
ffffffff810025b0 t native_read_msr_safe
ffffffff810025d0 t native_read_pmc
ffffffff810025f0 t native_read_cr4
ffffffff81002600 t native_read_cr4_safe
ffffffff81002610 t native_wbinvd
ffffffff81002620 t native_store_gdt
ffffffff81002630 t native_store_idt
ffffffff81002640 t xen_cpuid
ffffffff81002720 t xen_set_debugreg
ffffffff81002730 t xen_get_debugreg
ffffffff81002740 t xen_store_tr
ffffffff81002750 t xen_io_delay
ffffffff81002760 t xen_get_apic_id
ffffffff81002770 t xen_apic_read
ffffffff810027f0 t xen_apic_icr_read
ffffffff81002800 t xen_apic_wait_icr_idle
ffffffff81002810 t xen_safe_apic_wait_icr_idle
ffffffff81002820 t xen_write_cr4
ffffffff81002830 t xen_set_iopl_mask
ffffffff81002890 t xen_set_apic_id
ffffffff810028b0 t xen_apic_icr_write
ffffffff810028d0 t xen_apic_write
ffffffff810028f0 t xen_end_context_switch
ffffffff81002910 t xen_set_ldt
ffffffff810029d0 t xen_load_sp0
ffffffff81002a60 t xen_clts
ffffffff81002ae0 t xen_write_cr0
ffffffff81002b70 t set_aliased_prot
ffffffff81002c10 t xen_free_ldt
ffffffff81002c50 t xen_alloc_ldt
ffffffff81002c90 t load_TLS_descriptor
ffffffff81002d00 t xen_load_tls
ffffffff81002db0 t xen_load_gdt
ffffffff81002f00 t xen_patch
ffffffff810030b0 T xen_hvm_need_lapic
ffffffff810030e0 t xen_read_cr0
ffffffff81003100 t xen_reboot
ffffffff81003130 t xen_emergency_restart
ffffffff81003140 t xen_crash_shutdown
ffffffff81003150 t xen_machine_power_off
ffffffff81003170 t xen_machine_halt
ffffffff81003180 t xen_restart
ffffffff81003190 t xen_panic_event
ffffffff810031a0 t xen_load_gs_index
ffffffff810031c0 t xen_vcpu_setup
ffffffff810032b0 t cvt_gate_to_trap.part.17
ffffffff810033a0 t xen_convert_trap_info
ffffffff81003460 t xen_write_idt_entry
ffffffff81003530 t xen_load_idt
ffffffff810035a0 t xen_write_msr_safe
ffffffff81003670 t xen_write_gdt_entry
ffffffff810036c0 t xen_write_ldt_entry
ffffffff81003710 T xen_vcpu_restore
ffffffff810037b0 T xen_copy_trap_info
ffffffff810037d0 T xen_setup_shared_info
ffffffff81003840 T xen_setup_vcpu_info_placement
ffffffff810038d0 T xen_panic_handler_init
ffffffff810038f0 T xen_mc_flush
ffffffff81003a40 T __xen_mc_entry
ffffffff81003ae0 T xen_mc_extend_args
ffffffff81003b50 T xen_mc_callback
ffffffff81003bb0 t __raw_callee_save_xen_pte_val
ffffffff81003bce t __raw_callee_save_xen_pgd_val
ffffffff81003bec t __raw_callee_save_xen_make_pte
ffffffff81003c0a t __raw_callee_save_xen_make_pgd
ffffffff81003c28 t __raw_callee_save_xen_pmd_val
ffffffff81003c46 t __raw_callee_save_xen_make_pmd
ffffffff81003c64 t __raw_callee_save_xen_pud_val
ffffffff81003c82 t __raw_callee_save_xen_make_pud
ffffffff81003ca0 t __ptep_modify_prot_start
ffffffff81003cc0 t __ptep_modify_prot_commit
ffffffff81003cd0 t xen_pte_unlock
ffffffff81003ce0 t xen_write_cr2
ffffffff81003cf0 t xen_read_cr2
ffffffff81003d00 t xen_read_cr3
ffffffff81003d10 t set_current_cr3
ffffffff81003d20 t xen_exchange_memory
ffffffff81003dc0 t remap_area_mfn_pte_fn
ffffffff81003e90 t xen_get_user_pgd
ffffffff81003ee0 t xen_page_pinned
ffffffff81003f20 t __xen_pgd_walk
ffffffff810041d0 t xen_flush_tlb
ffffffff81004270 t xen_flush_tlb_single
ffffffff81004320 t xen_release_pte
ffffffff810044c0 t xen_release_pmd
ffffffff810045c0 t xen_release_pud
ffffffff810046c0 T xen_set_domain_pte
ffffffff810047e0 t xen_zap_pfn_range
ffffffff81004930 t xen_remap_exchanged_ptes
ffffffff81004a60 t xen_set_fixmap
ffffffff81004ba0 t xen_leave_lazy_mmu
ffffffff81004bc0 t xen_extend_mmu_update
ffffffff81004c40 t __xen_set_pgd_hyper
ffffffff81004cb0 t xen_set_pgd
ffffffff81004da0 t xen_batched_set_pte
ffffffff81004e90 t xen_set_pte
ffffffff81004ec0 t xen_set_pte_at
ffffffff81004f00 t xen_extend_mmuext_op
ffffffff81004f80 t xen_do_pin
ffffffff81004fd0 t xen_pgd_free
ffffffff81005000 t xen_pgd_alloc
ffffffff81005100 t xen_flush_tlb_others
ffffffff81005230 t xen_alloc_pte
ffffffff810053d0 t xen_alloc_pmd
ffffffff810054f0 t xen_alloc_pud
ffffffff81005610 t pte_pfn_to_mfn
ffffffff810056a0 t xen_make_pud
ffffffff810056b0 t xen_make_pmd
ffffffff810056c0 t xen_make_pgd
ffffffff810056d0 t xen_make_pte
ffffffff81005740 t pte_mfn_to_pfn
ffffffff81005830 t xen_pud_val
ffffffff81005840 t xen_pmd_val
ffffffff81005850 t xen_pgd_val
ffffffff81005860 t xen_pte_val
ffffffff81005890 T xen_remap_domain_mfn_range
ffffffff81005980 t xen_hvm_exit_mmap
ffffffff810059e0 T xen_destroy_contiguous_region
ffffffff81005b10 T xen_create_contiguous_region
ffffffff81005c40 t drop_other_mm_ref
ffffffff81005cd0 t xen_unpin_page
ffffffff81005e10 t __xen_pgd_unpin
ffffffff81005ef0 t xen_exit_mmap
ffffffff81006020 t xen_pin_page
ffffffff81006140 t __xen_pgd_pin
ffffffff81006270 t xen_dup_mmap
ffffffff810062c0 t xen_activate_mm
ffffffff81006310 t __xen_write_cr3
ffffffff81006400 t xen_write_cr3
ffffffff810064b0 T arbitrary_virt_to_machine
ffffffff81006540 t xen_set_pud_hyper
ffffffff810065c0 t xen_set_pud
ffffffff81006610 t xen_set_pmd_hyper
ffffffff81006690 t xen_set_pmd
ffffffff810066e0 T arbitrary_virt_to_mfn
ffffffff81006700 T make_lowmem_page_readonly
ffffffff81006740 T make_lowmem_page_readwrite
ffffffff81006780 T set_pte_mfn
ffffffff810067b0 T xen_ptep_modify_prot_start
ffffffff810067c0 T xen_ptep_modify_prot_commit
ffffffff81006890 T xen_set_pat
ffffffff810068c0 T xen_mm_pin_all
ffffffff81006980 T xen_mm_unpin_all
ffffffff81006a30 T xen_read_cr2_direct
ffffffff81006a40 t __raw_callee_save_xen_save_fl
ffffffff81006a5e t __raw_callee_save_xen_restore_fl
ffffffff81006a7c t __raw_callee_save_xen_irq_disable
ffffffff81006a9a t __raw_callee_save_xen_irq_enable
ffffffff81006ac0 t xen_save_fl
ffffffff81006ae0 t xen_irq_disable
ffffffff81006af0 t xen_safe_halt
ffffffff81006b10 t xen_halt
ffffffff81006b40 t xen_irq_enable
ffffffff81006b60 t xen_restore_fl
ffffffff81006b90 T xen_force_evtchn_callback
ffffffff81006ba0 t xen_read_wallclock
ffffffff81006bd0 t xen_get_wallclock
ffffffff81006bf0 t xen_tsc_khz
ffffffff81006c00 t xen_vcpuop_set_mode
ffffffff81006c80 t xen_timer_interrupt
ffffffff81006e20 T xen_clocksource_read
ffffffff81006e40 t xen_set_wallclock
ffffffff81006ef0 t xen_clocksource_get_cycles
ffffffff81006f00 t xen_timerop_set_mode
ffffffff81006f30 t xen_timerop_set_next_event
ffffffff81006f70 t xen_vcpuop_set_next_event
ffffffff81006fd0 T xen_vcpu_stolen
ffffffff81006ff0 T xen_setup_runstate_info
ffffffff81007030 T xen_setup_timer
ffffffff81007170 T xen_teardown_timer
ffffffff810071a0 T xen_setup_cpu_clockevents
ffffffff810071c0 t xen_hvm_setup_cpu_clockevents
ffffffff810071e0 T xen_timer_resume
ffffffff81007240 T xen_irq_enable_direct
ffffffff81007255 T xen_irq_enable_direct_reloc
ffffffff81007259 T xen_irq_enable_direct_end
ffffffff81007260 T xen_irq_disable_direct
ffffffff81007269 T xen_irq_disable_direct_end
ffffffff81007270 T xen_save_fl_direct
ffffffff8100727e T xen_save_fl_direct_end
ffffffff81007280 T xen_restore_fl_direct
ffffffff8100729b T xen_restore_fl_direct_reloc
ffffffff8100729f T xen_restore_fl_direct_end
ffffffff810072a0 t check_events
ffffffff810072c0 T xen_adjust_exception_frame
ffffffff810072d0 T xen_iret
ffffffff810072d3 T xen_iret_reloc
ffffffff810072d7 T xen_iret_end
ffffffff810072e0 T xen_sysexit
ffffffff810072ee T xen_sysexit_reloc
ffffffff810072f2 T xen_sysexit_end
ffffffff81007300 T xen_sysret64
ffffffff81007327 T xen_sysret64_reloc
ffffffff8100732b T xen_sysret64_end
ffffffff81007330 T xen_sysret32
ffffffff81007354 T xen_sysret32_reloc
ffffffff81007358 T xen_sysret32_end
ffffffff81007360 T xen_syscall_target
ffffffff81007380 T xen_syscall32_target
ffffffff810073a0 T xen_sysenter_target
ffffffff810073c0 t map_pte_fn
ffffffff81007420 t map_pte_fn_status
ffffffff81007480 t unmap_pte_fn
ffffffff810074b0 T arch_gnttab_map_shared
ffffffff81007520 T arch_gnttab_map_status
ffffffff81007590 T arch_gnttab_unmap
ffffffff810075b0 t xen_vcpu_notify_restore
ffffffff810075d0 T xen_arch_pre_suspend
ffffffff810077a0 T xen_arch_hvm_post_suspend
ffffffff81007800 T xen_arch_post_suspend
ffffffff810078d0 T xen_arch_resume
ffffffff810078f0 T xen_unplug_emulated_devices
ffffffff81007990 T get_phys_to_machine
ffffffff810079f0 t p2m_mid_mfn_init
ffffffff81007a60 T xen_setup_mfn_list_list
ffffffff81007ad0 T __set_phys_to_machine
ffffffff81007bc0 T set_phys_to_machine
ffffffff81007ef0 T m2p_remove_override
ffffffff810081b0 T m2p_add_override
ffffffff81008400 T m2p_find_override
ffffffff810084b0 T m2p_find_override_pfn
ffffffff810084f0 t xen_smp_cpus_done
ffffffff81008500 t xen_smp_send_reschedule
ffffffff81008510 t xen_send_IPI_mask
ffffffff81008560 t xen_smp_send_call_function_single_ipi
ffffffff81008590 t xen_hvm_cpu_die
ffffffff81008610 t xen_call_function_single_interrupt
ffffffff81008640 t xen_call_function_interrupt
ffffffff81008670 t xen_reschedule_interrupt
ffffffff81008690 t xen_cpu_die
ffffffff81008780 t xen_cpu_disable
ffffffff810087c0 t stop_self
ffffffff81008800 t xen_stop_other_cpus
ffffffff81008810 t xen_smp_send_call_function_ipi
ffffffff81008870 t xen_spin_is_locked
ffffffff81008880 t xen_spin_is_contended
ffffffff81008890 t xen_spin_trylock
ffffffff810088a0 t xen_spin_unlock
ffffffff810088c0 t xen_spin_lock
ffffffff81008920 t xen_spin_lock_flags
ffffffff81008990 t dummy_handler
ffffffff810089a0 T xen_uninit_lock_cpu
ffffffff810089c0 T set_personality_ia32
ffffffff81008a60 t start_thread_common.constprop.5
ffffffff81008b00 T __show_regs
ffffffff81008d80 T release_thread
ffffffff81008dc0 T prepare_to_copy
ffffffff81008dd0 T start_thread
ffffffff81008de0 T start_thread_ia32
ffffffff81008e20 T __switch_to
ffffffff81009230 T set_personality_64bit
ffffffff81009290 T get_wchan
ffffffff81009350 T do_arch_prctl
ffffffff810096b0 T copy_thread
ffffffff810098a0 T sys_arch_prctl
ffffffff810098c0 T KSTK_ESP
ffffffff810098f0 T restore_sigcontext
ffffffff81009a40 T setup_sigcontext
ffffffff81009b90 t do_signal
ffffffff8100a180 T sys_sigaltstack
ffffffff8100a1a0 T do_notify_resume
ffffffff8100a1f0 T signal_fault
ffffffff8100a2e0 T sys_rt_sigreturn
ffffffff8100a3b0 T math_state_restore
ffffffff8100a480 t do_trap
ffffffff8100a660 T do_divide_error
ffffffff8100a700 T do_overflow
ffffffff8100a790 T do_bounds
ffffffff8100a820 T do_invalid_op
ffffffff8100a8c0 T do_coprocessor_segment_overrun
ffffffff8100a950 T do_invalid_TSS
ffffffff8100a9e0 T do_segment_not_present
ffffffff8100aa70 T do_alignment_check
ffffffff8100ab10 T do_stack_segment
ffffffff8100abc0 T do_double_fault
ffffffff8100ac20 T do_general_protection
ffffffff8100ada0 T do_int3
ffffffff8100ae80 T sync_regs
ffffffff8100aee0 T do_debug
ffffffff8100b0a0 T math_error
ffffffff8100b340 T do_coprocessor_error
ffffffff8100b350 T do_simd_coprocessor_error
ffffffff8100b360 T do_spurious_interrupt_bug
ffffffff8100b380 W smp_thermal_interrupt
ffffffff8100b3a0 T do_device_not_available
ffffffff8100b3b0 T ack_bad_irq
ffffffff8100b3f0 T arch_show_interrupts
ffffffff8100bad0 T arch_irq_stat_cpu
ffffffff8100bb60 T arch_irq_stat
ffffffff8100bb80 T do_IRQ
ffffffff8100bc50 T smp_x86_platform_ipi
ffffffff8100bcb0 T fixup_irqs
ffffffff8100bf20 T handle_irq
ffffffff8100bf40 T do_softirq
ffffffff8100bfe0 T dump_trace
ffffffff8100c250 T show_stack_log_lvl
ffffffff8100c3e0 T show_registers
ffffffff8100c630 T is_valid_bugaddr
ffffffff8100c650 t timer_interrupt
ffffffff8100c670 T profile_pc
ffffffff8100c6e0 T sys_ioperm
ffffffff8100c8c0 T sys_iopl
ffffffff8100c950 t flush_ldt
ffffffff8100c990 t alloc_ldt.part.3
ffffffff8100cb80 t write_ldt
ffffffff8100cd70 T init_new_context
ffffffff8100ce60 T destroy_context
ffffffff8100cee0 T sys_modify_ldt
ffffffff8100d060 t print_trace_stack
ffffffff8100d080 T oops_begin
ffffffff8100d130 T print_context_stack_bp
ffffffff8100d1d0 T print_context_stack
ffffffff8100d2a0 T printk_address
ffffffff8100d2d0 t print_trace_address
ffffffff8100d310 T show_trace_log_lvl
ffffffff8100d380 T show_trace
ffffffff8100d390 T show_stack
ffffffff8100d3b0 T oops_end
ffffffff8100d460 T __die
ffffffff8100d550 T die
ffffffff8100d5e0 T unregister_nmi_handler
ffffffff8100d6f0 T register_nmi_handler
ffffffff8100d880 T do_nmi
ffffffff8100dc30 T stop_nmi
ffffffff8100dc40 T restart_nmi
ffffffff8100dc50 T local_touch_nmi
ffffffff8100dc60 T default_cpu_present_to_apicid
ffffffff8100dca0 T default_check_phys_apicid_present
ffffffff8100dcb0 t is_ISA_range
ffffffff8100dcd0 t default_get_nmi_reason
ffffffff8100dce0 T iommu_shutdown_noop
ffffffff8100dcf0 T wallclock_init_noop
ffffffff8100dd00 t default_nmi_init
ffffffff8100dd10 t default_i8042_detect
ffffffff8100dd20 t i8259A_suspend
ffffffff8100dd40 t i8259A_shutdown
ffffffff8100dd50 t legacy_pic_noop
ffffffff8100dd60 t legacy_pic_uint_noop
ffffffff8100dd70 t legacy_pic_int_noop
ffffffff8100dd80 t legacy_pic_irq_pending_noop
ffffffff8100dd90 t make_8259A_irq
ffffffff8100ddd0 t i8259A_irq_pending
ffffffff8100de40 t unmask_8259A
ffffffff8100de80 t mask_8259A
ffffffff8100deb0 t unmask_8259A_irq
ffffffff8100df10 t enable_8259A_irq
ffffffff8100df20 t mask_8259A_irq
ffffffff8100df80 t disable_8259A_irq
ffffffff8100df90 t init_8259A
ffffffff8100e0b0 t i8259A_resume
ffffffff8100e0e0 t mask_and_ack_8259A
ffffffff8100e1e0 T vector_used_by_percpu_irq
ffffffff8100e250 T setup_vector_irq
ffffffff8100e260 T smp_irq_work_interrupt
ffffffff8100e2a0 T arch_irq_work_raise
ffffffff8100e2e0 t find_oprom
ffffffff8100e670 T pci_biosrom_size
ffffffff8100e690 T pci_unmap_biosrom
ffffffff8100e6a0 T pci_map_biosrom
ffffffff8100e6d0 T align_addr
ffffffff8100e740 T sys_mmap
ffffffff8100e760 T arch_get_unmapped_area
ffffffff8100ea50 T arch_get_unmapped_area_topdown
ffffffff8100ed30 t write_ok_or_segv
ffffffff8100ed60 t warn_bad_vsyscall
ffffffff8100ee20 T update_vsyscall_tz
ffffffff8100ee30 T update_vsyscall
ffffffff8100ef00 T emulate_vsyscall
ffffffff8100f1c0 T e820_any_mapped
ffffffff8100f220 T dma_supported
ffffffff8100f2d0 T dma_set_mask
ffffffff8100f330 T dma_generic_alloc_coherent
ffffffff8100f480 t force_disable_hpet_msi
ffffffff8100f490 t old_ich_force_enable_hpet
ffffffff8100f5c0 t old_ich_force_enable_hpet_user
ffffffff8100f5e0 t nvidia_force_enable_hpet
ffffffff8100f690 t vt8237_force_enable_hpet
ffffffff8100f7d0 t ich_force_enable_hpet
ffffffff8100f970 t ati_force_enable_hpet
ffffffff8100fb60 T force_hpet_resume
ffffffff8100fd30 T arch_unregister_cpu
ffffffff8100fd50 t add_nops
ffffffff8100fda0 T alternatives_smp_module_del
ffffffff8100fe90 T alternatives_text_reserved
ffffffff8100ff20 T text_poke_early
ffffffff8100ff80 T apply_paravirt
ffffffff81010040 T apply_alternatives
ffffffff810101c0 T text_poke
ffffffff810103d0 t stop_machine_text_poke
ffffffff81010460 t alternatives_smp_unlock
ffffffff81010510 T alternatives_smp_module_add
ffffffff810106b0 T alternatives_smp_switch
ffffffff81010890 T text_poke_smp
ffffffff810108f0 T text_poke_smp_batch
ffffffff81010930 t nommu_sync_single_for_device
ffffffff81010940 t nommu_sync_sg_for_device
ffffffff81010950 t nommu_free_coherent
ffffffff81010970 t check_addr
ffffffff810109c0 t nommu_map_sg
ffffffff81010aa0 t nommu_map_page
ffffffff81010b20 T aout_dump_debugregs
ffffffff81010bf0 T hw_breakpoint_restore
ffffffff81010c80 T encode_dr7
ffffffff81010cb0 T decode_dr7
ffffffff81010cf0 T arch_install_hw_breakpoint
ffffffff81010de0 T arch_uninstall_hw_breakpoint
ffffffff81010ea0 T arch_check_bp_in_kernelspace
ffffffff81010f70 T arch_bp_generic_fields
ffffffff81011020 T arch_validate_hwbkpt_settings
ffffffff81011120 T flush_ptrace_hw_breakpoint
ffffffff81011160 T hw_breakpoint_exceptions_notify
ffffffff810112c0 T hw_breakpoint_pmu_read
ffffffff810112d0 T check_tsc_unstable
ffffffff810112e0 T recalibrate_cpu_khz
ffffffff810112f0 t read_tsc
ffffffff81011310 t resume_tsc
ffffffff81011320 t tsc_read_refs
ffffffff810113b0 t tsc_refine_calibration_work
ffffffff81011560 t set_cyc2ns_scale
ffffffff81011650 T mark_tsc_unstable
ffffffff810116c0 t time_cpufreq_notifier
ffffffff810117c0 T native_sched_clock
ffffffff81011830 T sched_clock
ffffffff81011840 T native_calibrate_tsc
ffffffff81011de0 T tsc_save_sched_clock_state
ffffffff81011e00 T tsc_restore_sched_clock_state
ffffffff81011ea0 T native_io_delay
ffffffff81011ef0 T rtc_cmos_read
ffffffff81011f00 T rtc_cmos_write
ffffffff81011f10 T native_read_tsc
ffffffff81011f20 T mach_set_rtc_mmss
ffffffff810120d0 T mach_get_cmos_time
ffffffff81012270 T update_persistent_clock
ffffffff81012290 T read_persistent_clock
ffffffff810122b0 T arch_remove_reservations
ffffffff810123e0 t hard_disable_TSC
ffffffff81012400 t hard_enable_TSC
ffffffff81012420 t do_nothing
ffffffff81012430 T default_idle
ffffffff81012470 t mwait_idle
ffffffff810124f0 t poll_idle
ffffffff81012520 t amd_e400_idle
ffffffff81012630 T cpu_idle_wait
ffffffff81012650 T kernel_thread
ffffffff810126e0 T idle_notifier_unregister
ffffffff810126f0 T idle_notifier_register
ffffffff81012700 T arch_dup_task_struct
ffffffff810127a0 T free_thread_xstate
ffffffff810127d0 T free_thread_info
ffffffff810127f0 T arch_task_cache_init
ffffffff81012820 T exit_thread
ffffffff810128c0 T show_regs
ffffffff810128e0 T show_regs_common
ffffffff810129f0 T flush_thread
ffffffff81012a80 T disable_TSC
ffffffff81012ab0 T get_tsc_mode
ffffffff81012ae0 T set_tsc_mode
ffffffff81012b30 T __switch_to_xtra
ffffffff81012c90 T sys_fork
ffffffff81012cc0 T sys_vfork
ffffffff81012cf0 T sys_clone
ffffffff81012d10 T sys_execve
ffffffff81012d80 T enter_idle
ffffffff81012da0 T exit_idle
ffffffff81012de0 T cpu_idle
ffffffff81012ea0 T set_pm_idle_to_default
ffffffff81012ec0 T stop_this_cpu
ffffffff81012ef0 T mwait_usable
ffffffff81012f50 T amd_e400_remove_cpu
ffffffff81012f60 T arch_align_stack
ffffffff81012fc0 T arch_randomize_brk
ffffffff81012ff0 T irq_fpu_usable
ffffffff81013050 T kernel_fpu_end
ffffffff81013070 t convert_to_fxsr
ffffffff81013110 t restore_i387_fxsave
ffffffff810131b0 t convert_from_fxsr
ffffffff81013340 t save_i387_fxsave
ffffffff810133f0 T kernel_fpu_begin
ffffffff810134b0 T fpu_finit
ffffffff810134e0 T unlazy_fpu
ffffffff81013570 T init_fpu
ffffffff81013610 T fpregs_active
ffffffff81013620 T xfpregs_active
ffffffff81013630 T xfpregs_get
ffffffff810136e0 T xfpregs_set
ffffffff810137c0 T xstateregs_get
ffffffff810138d0 T xstateregs_set
ffffffff810139d0 T fpregs_get
ffffffff81013ad0 T dump_fpu
ffffffff81013b10 T fpregs_set
ffffffff81013c40 T save_i387_xstate_ia32
ffffffff81013d50 T restore_i387_xstate_ia32
ffffffff81013f50 t xstate_enable
ffffffff81013f90 T __sanitize_i387_state
ffffffff810141e0 T check_for_xstate
ffffffff81014260 T save_i387_xstate
ffffffff81014460 T restore_i387_xstate
ffffffff81014620 t set_flags
ffffffff81014670 t ptrace_triggered
ffffffff810146c0 t ioperm_active
ffffffff810146d0 t set_segment_reg
ffffffff81014910 t putreg
ffffffff81014a40 t ioperm_get
ffffffff81014ab0 t ptrace_get_debugreg
ffffffff81014b40 t get_segment_reg
ffffffff81014c10 t getreg
ffffffff81014d20 t getreg32
ffffffff81014f60 t genregs_get
ffffffff81015010 t genregs_set
ffffffff810150b0 t genregs32_get
ffffffff81015170 t ptrace_modify_breakpoint.isra.13
ffffffff810151f0 t ptrace_set_debugreg
ffffffff810154d0 t putreg32
ffffffff810156e0 t genregs32_set
ffffffff81015780 T regs_query_register_offset
ffffffff810157e0 T regs_query_register_name
ffffffff81015820 T ptrace_disable
ffffffff81015840 T arch_ptrace
ffffffff81015b60 T compat_arch_ptrace
ffffffff81015d60 T update_regset_xstate_info
ffffffff81015d80 T task_user_regset_view
ffffffff81015db0 T user_single_step_siginfo
ffffffff81015e80 T send_sigtrap
ffffffff81015ef0 T syscall_trace_enter
ffffffff81016090 T syscall_trace_leave
ffffffff810161b0 t set_tls_desc
ffffffff81016330 t fill_user_desc
ffffffff81016400 T do_set_thread_area
ffffffff810164e0 T sys_set_thread_area
ffffffff81016500 T do_get_thread_area
ffffffff81016570 T sys_get_thread_area
ffffffff81016590 T regset_tls_active
ffffffff810165c0 T regset_tls_get
ffffffff810166a0 T regset_tls_set
ffffffff81016730 T convert_ip_to_linear
ffffffff810167f0 t enable_step
ffffffff81016a00 T user_enable_single_step
ffffffff81016a10 T user_enable_block_step
ffffffff81016a20 T user_disable_single_step
ffffffff81016ac0 t i8237A_resume
ffffffff81016b80 t show_shared_cpu_map
ffffffff81016b90 t show_shared_cpu_list
ffffffff81016ba0 t show
ffffffff81016bf0 t store
ffffffff81016c40 t show_shared_cpu_map_func
ffffffff81016c90 t show_size
ffffffff81016cc0 t show_number_of_sets
ffffffff81016cf0 t show_ways_of_associativity
ffffffff81016d20 t show_physical_line_partition
ffffffff81016d50 t show_coherency_line_size
ffffffff81016d80 t show_level
ffffffff81016db0 t store_subcaches
ffffffff81016e60 t show_subcaches
ffffffff81016ea0 t show_type
ffffffff81016f20 T amd_get_l3_disable_slot
ffffffff81016f70 t show_cache_disable.isra.7
ffffffff81016fd0 t show_cache_disable_1
ffffffff81016fe0 t show_cache_disable_0
ffffffff81016ff0 T amd_set_l3_disable_slot
ffffffff810170f0 t store_cache_disable
ffffffff810171e0 t store_cache_disable_1
ffffffff810171f0 t store_cache_disable_0
ffffffff81017200 t c_stop
ffffffff81017210 t show_cpuinfo
ffffffff810175d0 t c_start
ffffffff81017640 t c_next
ffffffff81017650 T load_percpu_segment
ffffffff81017680 T switch_to_new_gdt
ffffffff810176c0 T syscall_init
ffffffff81017740 T is_debug_stack
ffffffff810177a0 T debug_stack_set_zero
ffffffff810177b0 T debug_stack_reset
ffffffff810177c0 t vmware_get_tsc_khz
ffffffff81017880 T arch_scale_freq_power
ffffffff81017980 T arch_scale_smt_power
ffffffff810179a0 t read_hv_clock
ffffffff810179c0 T x86_match_cpu
ffffffff81017a80 T arch_print_cpu_modalias
ffffffff81017b50 T arch_cpu_uevent
ffffffff81017bc0 T amd_get_nb_id
ffffffff81017be0 T cpu_has_amd_erratum
ffffffff81017d00 t x86_pmu_extra_regs
ffffffff81017da0 t x86_pmu_disable
ffffffff81017df0 t collect_events
ffffffff81017e90 t x86_pmu_read
ffffffff81017ea0 t x86_pmu_event_idx
ffffffff81017ee0 t x86_pmu_flush_branch_stack
ffffffff81017f00 t backtrace_stack
ffffffff81017f10 t backtrace_address
ffffffff81017f30 T perf_get_x86_pmu_capability
ffffffff81017f70 t set_attr_rdpmc
ffffffff81017fc0 t get_attr_rdpmc
ffffffff81017ff0 t x86_pmu_cancel_txn
ffffffff81018020 t x86_pmu_commit_txn
ffffffff810180e0 t x86_pmu_start_txn
ffffffff81018110 t x86_pmu_add
ffffffff81018250 t allocate_fake_cpuc
ffffffff810182e0 t x86_pmu_event_init
ffffffff81018610 t perf_event_nmi_handler
ffffffff81018630 t change_rdpmc
ffffffff81018690 t hw_perf_event_destroy
ffffffff81018700 T x86_perf_event_update
ffffffff810187c0 T x86_pmu_stop
ffffffff81018890 t x86_pmu_del
ffffffff81018980 T x86_setup_perfctr
ffffffff81018b10 T x86_pmu_hw_config
ffffffff81018c40 T x86_pmu_disable_all
ffffffff81018ce0 T x86_pmu_enable_all
ffffffff81018da0 T x86_schedule_events
ffffffff81019290 T x86_perf_event_set_period
ffffffff810193e0 t x86_pmu_start
ffffffff81019510 T x86_pmu_enable_event
ffffffff810195a0 T perf_event_print_debug
ffffffff81019810 T x86_pmu_handle_irq
ffffffff81019930 T perf_events_lapic_init
ffffffff81019970 t x86_pmu_enable
ffffffff81019be0 T arch_perf_update_userpage
ffffffff81019c50 T perf_callchain_kernel
ffffffff81019cc0 T perf_callchain_user
ffffffff81019e40 T perf_instruction_pointer
ffffffff81019e70 T perf_misc_flags
ffffffff81019ed0 t x86_pmu_disable_event
ffffffff81019ef0 t amd_pmu_event_map
ffffffff81019f00 t amd_put_event_constraints
ffffffff81019f80 t amd_get_event_constraints
ffffffff8101a070 T amd_pmu_disable_virt
ffffffff8101a0b0 T amd_pmu_enable_virt
ffffffff8101a0e0 t cmask_show
ffffffff8101a100 t inv_show
ffffffff8101a120 t edge_show
ffffffff8101a140 t umask_show
ffffffff8101a160 t event_show
ffffffff8101a190 t amd_pmu_cpu_dead
ffffffff8101a1e0 t amd_get_event_constraints_f15h
ffffffff8101a400 t amd_pmu_cpu_prepare
ffffffff8101a500 t amd_pmu_cpu_starting
ffffffff8101a630 t amd_pmu_hw_config
ffffffff8101a6c0 t p6_pmu_event_map
ffffffff8101a6d0 t p6_pmu_disable_all
ffffffff8101a720 t p6_pmu_enable_all
ffffffff8101a770 t p6_pmu_disable_event
ffffffff8101a7b0 t p6_pmu_enable_event
ffffffff8101a800 t cmask_show
ffffffff8101a820 t inv_show
ffffffff8101a840 t pc_show
ffffffff8101a860 t edge_show
ffffffff8101a880 t umask_show
ffffffff8101a8a0 t event_show
ffffffff8101a8c0 t p4_pmu_event_map
ffffffff8101a910 t p4_pmu_disable_event
ffffffff8101a940 t ht_show
ffffffff8101a960 t escr_show
ffffffff8101a980 t cccr_show
ffffffff8101a9a0 t p4_pmu_disable_all
ffffffff8101aa20 t p4_pmu_enable_event
ffffffff8101aba0 t p4_pmu_enable_all
ffffffff8101ac10 t p4_pmu_handle_irq
ffffffff8101ae00 t p4_pmu_schedule_events
ffffffff8101b240 t p4_hw_config
ffffffff8101b540 t __intel_pmu_lbr_disable
ffffffff8101b590 t branch_type
ffffffff8101b7c0 T intel_pmu_lbr_reset
ffffffff8101b870 T intel_pmu_lbr_enable
ffffffff8101b8e0 T intel_pmu_lbr_disable
ffffffff8101b960 T intel_pmu_lbr_enable_all
ffffffff8101ba10 T intel_pmu_lbr_disable_all
ffffffff8101ba40 T intel_pmu_lbr_read
ffffffff8101bdf0 T intel_pmu_setup_lbr_filter
ffffffff8101beb0 T intel_pmu_lbr_init_core
ffffffff8101bef0 T intel_pmu_lbr_init_nhm
ffffffff8101bf40 T intel_pmu_lbr_init_snb
ffffffff8101bf90 T intel_pmu_lbr_init_atom
ffffffff8101bff0 t release_bts_buffer
ffffffff8101c040 t release_pebs_buffer
ffffffff8101c090 t release_ds_buffer
ffffffff8101c0d0 t intel_pmu_pebs_fixup_ip.isra.2
ffffffff8101c230 t __intel_pmu_pebs_event
ffffffff8101c3a0 t intel_pmu_drain_pebs_nhm
ffffffff8101c530 t intel_pmu_drain_pebs_core
ffffffff8101c660 T init_debug_store_on_cpu
ffffffff8101c6a0 T fini_debug_store_on_cpu
ffffffff8101c6e0 T release_ds_buffers
ffffffff8101c790 T reserve_ds_buffers
ffffffff8101caf0 T intel_pmu_enable_bts
ffffffff8101cb70 T intel_pmu_disable_bts
ffffffff8101cbe0 T intel_pmu_drain_bts_buffer
ffffffff8101cd70 T intel_pebs_constraints
ffffffff8101cdf0 T intel_pmu_pebs_enable
ffffffff8101ce30 T intel_pmu_pebs_disable
ffffffff8101ce90 T intel_pmu_pebs_enable_all
ffffffff8101cec0 T intel_pmu_pebs_disable_all
ffffffff8101cef0 T intel_ds_init
ffffffff8101cfd0 t x86_pmu_disable_event
ffffffff8101cff0 t intel_pmu_event_map
ffffffff8101d000 T perf_guest_get_msrs
ffffffff8101d020 t intel_guest_get_msrs
ffffffff8101d080 t offcore_rsp_show
ffffffff8101d0a0 t any_show
ffffffff8101d0c0 t cmask_show
ffffffff8101d0e0 t inv_show
ffffffff8101d100 t pc_show
ffffffff8101d120 t edge_show
ffffffff8101d140 t umask_show
ffffffff8101d160 t event_show
ffffffff8101d180 t intel_pmu_flush_branch_stack
ffffffff8101d1a0 t intel_pmu_cpu_dying
ffffffff8101d210 t intel_pmu_cpu_starting
ffffffff8101d320 t intel_pmu_disable_event
ffffffff8101d450 t intel_pmu_enable_event
ffffffff8101d660 t intel_pmu_enable_all
ffffffff8101d700 t intel_pmu_nhm_enable_all
ffffffff8101d8a0 t intel_pmu_disable_all
ffffffff8101d900 t core_guest_get_msrs
ffffffff8101da00 t core_pmu_enable_all
ffffffff8101dad0 t core_pmu_enable_event
ffffffff8101daf0 t intel_put_event_constraints
ffffffff8101db60 t intel_pmu_hw_config
ffffffff8101dc70 t __intel_shared_reg_get_constraints.isra.7
ffffffff8101dd90 t intel_get_event_constraints
ffffffff8101df10 T intel_pmu_save_and_restart
ffffffff8101df30 t intel_pmu_handle_irq
ffffffff8101e210 T x86_get_event_constraints
ffffffff8101e280 T allocate_shared_regs
ffffffff8101e2d0 t intel_pmu_cpu_prepare
ffffffff8101e340 t collect_tscs
ffffffff8101e380 T register_mce_write_callback
ffffffff8101e390 T mce_chrdev_write
ffffffff8101e3b0 t mce_disable_error_reporting
ffffffff8101e410 t mce_syscore_suspend
ffffffff8101e420 t mce_syscore_shutdown
ffffffff8101e430 t mce_chrdev_release
ffffffff8101e470 t mce_chrdev_ioctl
ffffffff8101e530 t mce_device_release
ffffffff8101e540 t __mce_read_apei
ffffffff8101e5d0 t mce_chrdev_read
ffffffff8101e880 t set_cmci_disabled
ffffffff8101e910 t __mcheck_cpu_init_timer
ffffffff8101e9d0 t set_trigger
ffffffff8101ea60 t show_trigger
ffffffff8101ead0 t show_bank
ffffffff8101eaf0 t mce_timer_delete_all
ffffffff8101eb50 t mce_restart
ffffffff8101eb70 t set_bank
ffffffff8101ebd0 t store_int_with_restart
ffffffff8101ebf0 t set_ignore_ce
ffffffff8101ec90 t unexpected_machine_check
ffffffff8101ecb0 t mce_schedule_work
ffffffff8101ecf0 t mce_process_work
ffffffff8101ed40 t mce_do_trigger
ffffffff8101ed90 t print_mce
ffffffff8101ef00 T mce_unregister_decode_chain
ffffffff8101ef10 T mce_register_decode_chain
ffffffff8101f020 t mce_chrdev_open
ffffffff8101f0c0 t mce_chrdev_poll
ffffffff8101f120 T mce_notify_irq
ffffffff8101f1b0 t mce_irq_work_cb
ffffffff8101f1d0 t mce_panic
ffffffff8101f380 t mce_timed_out
ffffffff8101f400 t mce_rdmsrl
ffffffff8101f500 t mce_read_aux
ffffffff8101f590 t mce_wrmsrl.constprop.25
ffffffff8101f630 T mce_setup
ffffffff8101f780 T mce_log
ffffffff8101f830 T do_machine_check
ffffffff810200d0 T machine_check_poll
ffffffff81020230 t __mcheck_cpu_init_generic
ffffffff81020310 t mce_syscore_resume
ffffffff81020350 T mce_available
ffffffff81020380 t mce_enable_ce
ffffffff810203b0 t mce_start_timer
ffffffff810204d0 t mce_disable_cmci
ffffffff810204f0 t mce_cpu_restart
ffffffff81020530 T mce_notify_process
ffffffff81020600 T mce_severity
ffffffff810206d0 t local_error_count_handler
ffffffff81020720 t show
ffffffff81020740 t store
ffffffff81020760 t show_threshold_limit
ffffffff81020790 t show_interrupt_enable
ffffffff810207c0 t store_error_count
ffffffff81020810 t show_error_count
ffffffff81020860 t threshold_restart_bank
ffffffff810209e0 t store_threshold_limit
ffffffff81020a80 t store_interrupt_enable
ffffffff81020b10 t amd_threshold_interrupt
ffffffff81020c90 T mce_amd_feature_init
ffffffff81020ee0 t default_threshold_interrupt
ffffffff81020f00 T smp_threshold_interrupt
ffffffff81020f40 T apei_mce_report_mem_error
ffffffff81020f90 T apei_write_mce
ffffffff81021170 T apei_read_mce
ffffffff81021370 T apei_check_mce
ffffffff81021380 T apei_clear_mce
ffffffff81021390 t mtrr_save
ffffffff810213e0 t set_mtrr
ffffffff81021420 t mtrr_restore
ffffffff81021480 t mtrr_rendezvous_handler
ffffffff810214e0 T set_mtrr_ops
ffffffff81021500 T mtrr_add_page
ffffffff81021940 T mtrr_add
ffffffff810219b0 T mtrr_del_page
ffffffff81021b30 T mtrr_del
ffffffff81021ba0 T mtrr_ap_init
ffffffff81021c10 T mtrr_save_state
ffffffff81021c30 T set_mtrr_aps_delayed_init
ffffffff81021c50 T mtrr_aps_init
ffffffff81021ca0 T mtrr_bp_restore
ffffffff81021cc0 t mtrr_close
ffffffff81021d60 t mtrr_open
ffffffff81021da0 t mtrr_seq_show
ffffffff81021ea0 t mtrr_write
ffffffff810220f0 t mtrr_file_add.isra.1
ffffffff810221c0 t mtrr_file_del.isra.2
ffffffff81022220 t mtrr_ioctl
ffffffff81022870 T mtrr_attrib_to_str
ffffffff81022890 T generic_get_free_region
ffffffff81022910 t generic_have_wrcomb
ffffffff81022930 T generic_validate_add_page
ffffffff81022a10 t generic_get_mtrr
ffffffff81022b40 t __mtrr_type_lookup.part.2
ffffffff81022dd0 T mtrr_type_lookup
ffffffff81022ec0 T fill_mtrr_var_range
ffffffff81022ee0 T mtrr_wrmsr
ffffffff81022f50 t prepare_set
ffffffff81022ff0 t post_set
ffffffff81023060 t generic_set_mtrr
ffffffff81023180 t generic_set_all
ffffffff81023460 t get_fixed_ranges.constprop.5
ffffffff81023590 T mtrr_save_fixed_ranges
ffffffff810235b0 T positive_have_wrcomb
ffffffff810235c0 T reserve_perfctr_nmi
ffffffff81023640 T reserve_evntsel_nmi
ffffffff810236c0 T release_perfctr_nmi
ffffffff81023730 T release_evntsel_nmi
ffffffff810237a0 T avail_to_resrv_perfctr_nmi_bit
ffffffff810237c0 t perf_ibs_init
ffffffff810237e0 t perf_ibs_add
ffffffff810237f0 t perf_ibs_del
ffffffff81023800 T get_ibs_caps
ffffffff81023810 t setup_APIC_ibs
ffffffff81023870 T acpi_register_ioapic
ffffffff81023880 T acpi_unregister_ioapic
ffffffff81023890 t acpi_register_gsi_pic
ffffffff810238b0 T acpi_unmap_lsapic
ffffffff810238e0 t gsi_to_irq
ffffffff81023910 T acpi_gsi_to_irq
ffffffff81023960 T acpi_isa_irq_to_gsi
ffffffff81023980 T acpi_register_gsi
ffffffff810239a0 T mp_register_gsi
ffffffff81023b80 t acpi_register_gsi_ioapic
ffffffff81023b90 T __acpi_acquire_global_lock
ffffffff81023bc0 T __acpi_release_global_lock
ffffffff81023be0 T acpi_processor_power_init_bm_check
ffffffff81023c40 T acpi_processor_ffh_cstate_probe
ffffffff81023d10 t acpi_processor_ffh_cstate_probe_cpu
ffffffff81023dd0 T mwait_idle_with_hints
ffffffff81023e40 T acpi_processor_ffh_cstate_enter
ffffffff81023e70 t native_machine_power_off
ffffffff81023eb0 t crash_nmi_callback
ffffffff81023f00 t native_machine_halt
ffffffff81023f20 t native_machine_restart
ffffffff81023f60 T native_machine_shutdown
ffffffff81023fe0 t vmxoff_nmi
ffffffff81024050 W mach_reboot_fixups
ffffffff81024060 T machine_power_off
ffffffff81024070 T machine_shutdown
ffffffff81024080 T machine_emergency_restart
ffffffff810240a0 T machine_restart
ffffffff810240b0 T machine_halt
ffffffff810240c0 T nmi_shootdown_cpus
ffffffff81024160 t native_machine_emergency_restart
ffffffff81024370 T native_send_call_func_single_ipi
ffffffff810243a0 T native_send_call_func_ipi
ffffffff81024410 t smp_stop_nmi_callback
ffffffff81024440 t native_smp_send_reschedule
ffffffff810244a0 t native_nmi_stop_other_cpus
ffffffff810245a0 t native_irq_stop_other_cpus
ffffffff81024630 T smp_reboot_interrupt
ffffffff81024660 T smp_reschedule_interrupt
ffffffff81024690 T smp_call_function_interrupt
ffffffff810246d0 T smp_call_function_single_interrupt
ffffffff81024710 T cpu_hotplug_driver_lock
ffffffff81024720 T cpu_hotplug_driver_unlock
ffffffff81024730 T arch_cpu_probe
ffffffff81024740 T arch_cpu_release
ffffffff81024750 T cpu_coregroup_mask
ffffffff810247b0 T __inquire_remote_apic
ffffffff81024900 T arch_disable_smp_support
ffffffff81024910 T arch_disable_nonboot_cpus_begin
ffffffff81024920 T arch_disable_nonboot_cpus_end
ffffffff81024930 T arch_enable_nonboot_cpus_begin
ffffffff81024940 T arch_enable_nonboot_cpus_end
ffffffff81024950 T cpu_disable_common
ffffffff81024ab0 T native_cpu_disable
ffffffff81024ae0 T native_cpu_die
ffffffff81024bd0 T play_dead_common
ffffffff81024c20 T native_play_dead
ffffffff81024d20 t lapic_next_event
ffffffff81024d40 t lapic_timer_broadcast
ffffffff81024d60 T setup_APIC_eilvt
ffffffff81024ef0 t __setup_APIC_LVTT
ffffffff81024f80 t lapic_timer_setup
ffffffff81025010 t lapic_resume
ffffffff81025260 T native_apic_wait_icr_idle
ffffffff81025290 T native_safe_apic_wait_icr_idle
ffffffff810252e0 T native_apic_icr_write
ffffffff81025310 T native_apic_icr_read
ffffffff81025350 T lapic_get_maxlvt
ffffffff81025380 T smp_apic_timer_interrupt
ffffffff81025420 T setup_profiling_timer
ffffffff81025430 T clear_local_APIC
ffffffff81025650 T disable_local_APIC
ffffffff810256b0 t lapic_suspend
ffffffff81025840 T lapic_shutdown
ffffffff81025890 T smp_spurious_interrupt
ffffffff81025900 T smp_error_interrupt
ffffffff81025a00 T disconnect_bsp_APIC
ffffffff81025ab0 T hard_smp_processor_id
ffffffff81025ae0 T default_init_apic_ldr
ffffffff81025b40 t physid_set_mask_of_physid
ffffffff81025be0 t default_apic_id_valid
ffffffff81025bf0 t default_cpu_mask_to_apicid
ffffffff81025c00 t default_cpu_mask_to_apicid_and
ffffffff81025c10 t default_ioapic_phys_id_map
ffffffff81025c30 t noop_init_apic_ldr
ffffffff81025c40 t noop_send_IPI_mask
ffffffff81025c50 t noop_send_IPI_mask_allbutself
ffffffff81025c60 t noop_send_IPI_allbutself
ffffffff81025c70 t noop_send_IPI_all
ffffffff81025c80 t noop_send_IPI_self
ffffffff81025c90 t noop_apic_wait_icr_idle
ffffffff81025ca0 t noop_apic_icr_write
ffffffff81025cb0 t noop_wakeup_secondary_cpu
ffffffff81025cc0 t noop_safe_apic_wait_icr_idle
ffffffff81025cd0 t noop_apic_icr_read
ffffffff81025ce0 t noop_phys_pkg_id
ffffffff81025cf0 t noop_get_apic_id
ffffffff81025d00 t noop_probe
ffffffff81025d10 t noop_apic_id_registered
ffffffff81025d20 t noop_target_cpus
ffffffff81025d30 t noop_vector_allocation_domain
ffffffff81025d70 t noop_apic_write
ffffffff81025dc0 t noop_apic_read
ffffffff81025e10 t noop_check_apicid_present
ffffffff81025e20 t noop_check_apicid_used
ffffffff81025e30 T default_send_IPI_mask_sequence_phys
ffffffff81025f00 T default_send_IPI_mask_allbutself_phys
ffffffff81025fe0 t __io_apic_read
ffffffff81026020 t __io_apic_write
ffffffff81026060 t __io_apic_modify
ffffffff810260b0 t __ioapic_write_entry
ffffffff81026100 t io_apic_sync
ffffffff81026140 t mask_lapic_irq
ffffffff81026180 t unmask_lapic_irq
ffffffff810261c0 t ack_lapic_irq
ffffffff810261e0 t ioapic_read_entry
ffffffff81026260 T save_ioapic_entries
ffffffff81026300 t ioapic_write_entry
ffffffff81026360 t ioapic_retrigger_irq
ffffffff81026400 t irq_cfg
ffffffff81026420 t pin_2_irq
ffffffff810264c0 t free_irq_at
ffffffff81026510 t __assign_irq_vector
ffffffff81026710 t io_apic_modify_irq.isra.17
ffffffff81026770 t mask_ioapic
ffffffff810267d0 t mask_ioapic_irq
ffffffff810267e0 t unmask_ioapic
ffffffff81026830 t unmask_ioapic_irq
ffffffff81026840 t startup_ioapic_irq
ffffffff810268e0 t __eoi_ioapic_pin.isra.18.part.19
ffffffff81026950 t clear_IO_APIC_pin
ffffffff81026b10 t clear_IO_APIC
ffffffff81026b70 t __add_pin_to_irq_node
ffffffff81026c10 t alloc_irq_cfg.isra.26
ffffffff81026c50 t irq_trigger
ffffffff81026cc0 t irq_polarity
ffffffff81026d20 T IO_APIC_get_PCI_irq_vector
ffffffff81026f60 t __irq_complete_move
ffffffff81026fc0 t ack_apic_level
ffffffff81027170 t ack_apic_edge
ffffffff810271b0 t find_irq_entry.constprop.36
ffffffff81027230 T mpc_ioapic_id
ffffffff81027250 T mpc_ioapic_addr
ffffffff81027270 T mp_ioapic_gsi_routing
ffffffff81027290 T disable_ioapic_support
ffffffff810272b0 T mp_save_irq
ffffffff81027380 T mask_ioapic_entries
ffffffff81027410 T restore_ioapic_entries
ffffffff81027480 t ioapic_resume
ffffffff81027510 T lock_vector_lock
ffffffff81027520 T unlock_vector_lock
ffffffff81027530 T assign_irq_vector
ffffffff81027590 t io_apic_setup_irq_pin
ffffffff81027830 t msi_compose_msg.isra.33
ffffffff81027920 T __setup_vector_irq
ffffffff81027a40 T disable_IO_APIC
ffffffff81027af0 T send_cleanup_vector
ffffffff81027b30 T __ioapic_set_affinity
ffffffff81027bc0 t ioapic_set_affinity
ffffffff81027c90 t ht_set_affinity
ffffffff81027d30 t hpet_msi_set_affinity
ffffffff81027da0 t msi_set_affinity
ffffffff81027e10 T smp_irq_move_cleanup_interrupt
ffffffff81027f30 T irq_force_complete_move
ffffffff81027f60 T create_irq_nr
ffffffff81028070 T create_irq
ffffffff810280a0 T destroy_irq
ffffffff810281f0 T native_setup_msi_irqs
ffffffff81028340 T native_teardown_msi_irq
ffffffff81028350 T arch_setup_hpet_msi
ffffffff810283c0 T arch_setup_ht_irq
ffffffff810284f0 T io_apic_setup_irq_pin_once
ffffffff81028560 T get_nr_irqs_gsi
ffffffff81028570 T io_apic_set_pci_routing
ffffffff810285e0 T mp_find_ioapic
ffffffff81028650 T mp_find_ioapic_pin
ffffffff810286b0 T acpi_get_override_irq
ffffffff81028750 T setup_IO_APIC_irq_extra
ffffffff81028820 t default_inquire_remote_apic
ffffffff81028840 t native_apic_mem_write
ffffffff81028850 t native_apic_mem_read
ffffffff81028860 t default_apic_id_valid
ffffffff81028870 t default_cpu_mask_to_apicid
ffffffff81028880 t default_cpu_mask_to_apicid_and
ffffffff81028890 t flat_acpi_madt_oem_check
ffffffff810288a0 t flat_target_cpus
ffffffff810288b0 t flat_vector_allocation_domain
ffffffff810288c0 T flat_init_apic_ldr
ffffffff81028920 t flat_get_apic_id
ffffffff81028930 t set_apic_id
ffffffff81028940 t flat_apic_id_registered
ffffffff81028970 t flat_phys_pkg_id
ffffffff81028980 t flat_probe
ffffffff81028990 t physflat_target_cpus
ffffffff810289a0 t physflat_send_IPI_allbutself
ffffffff810289b0 t physflat_send_IPI_mask_allbutself
ffffffff810289c0 t physflat_send_IPI_mask
ffffffff810289d0 t physflat_send_IPI_all
ffffffff810289e0 t physflat_cpu_mask_to_apicid_and
ffffffff81028a40 t physflat_cpu_mask_to_apicid
ffffffff81028a90 t physflat_vector_allocation_domain
ffffffff81028aa0 t physflat_probe
ffffffff81028ad0 t flat_send_IPI_mask
ffffffff81028b60 t flat_send_IPI_mask_allbutself
ffffffff81028c10 t flat_send_IPI_allbutself
ffffffff81028ce0 t flat_send_IPI_all
ffffffff81028d40 t physflat_acpi_madt_oem_check
ffffffff81028dc0 t apicid_phys_pkg_id
ffffffff81028dd0 T apic_send_IPI_self
ffffffff81028e10 T module_alloc
ffffffff81028e70 T apply_relocate_add
ffffffff81028fc0 T module_finalize
ffffffff81029110 T module_arch_cleanup
ffffffff81029120 t early_vga_write
ffffffff81029290 t early_serial_write
ffffffff81029350 T early_printk
ffffffff810293e0 T is_hpet_enabled
ffffffff81029410 t hpet_restart_counter
ffffffff81029460 t hpet_legacy_next_event
ffffffff810294a0 t hpet_msi_next_event
ffffffff810294f0 t read_hpet
ffffffff81029500 T hpet_register_irq_handler
ffffffff81029540 T hpet_unregister_irq_handler
ffffffff81029580 T hpet_set_alarm_time
ffffffff810295d0 T hpet_rtc_dropped_irq
ffffffff81029600 t _hpet_print_config
ffffffff81029740 t hpet_setup_msi_irq
ffffffff81029770 t hpet_cpuhp_notify
ffffffff81029890 t hpet_work
ffffffff81029a70 t hpet_set_mode
ffffffff81029c40 t hpet_msi_set_mode
ffffffff81029c50 t hpet_legacy_set_mode
ffffffff81029c60 T hpet_rtc_interrupt
ffffffff81029fb0 t hpet_resume_counter
ffffffff81029fd0 T hpet_set_periodic_freq
ffffffff8102a050 t hpet_interrupt_handler
ffffffff8102a080 T hpet_mask_rtc_irq_bit
ffffffff8102a0e0 T hpet_rtc_timer_init
ffffffff8102a1e0 T hpet_set_rtc_irq_bit
ffffffff8102a250 T EVT_TO_HPET_DEV
ffffffff8102a260 T hpet_readl
ffffffff8102a270 T hpet_msi_unmask
ffffffff8102a2b0 T hpet_msi_mask
ffffffff8102a2f0 T hpet_msi_write
ffffffff8102a330 T hpet_msi_read
ffffffff8102a370 T hpet_disable
ffffffff8102a3c0 t next_northbridge
ffffffff8102a410 T amd_flush_garts
ffffffff8102a560 T amd_cache_northbridges
ffffffff8102a730 T amd_get_mmconfig_range
ffffffff8102a7b0 T amd_get_subcaches
ffffffff8102a840 T amd_set_subcaches
ffffffff8102aa00 t native_read_tscp
ffffffff8102aa20 t native_read_msr_safe
ffffffff8102aa40 t native_write_msr_safe
ffffffff8102aa50 t native_read_pmc
ffffffff8102aa70 t native_clts
ffffffff8102aa80 t native_read_cr0
ffffffff8102aa90 t native_write_cr0
ffffffff8102aaa0 t native_read_cr2
ffffffff8102aab0 t native_write_cr2
ffffffff8102aac0 t native_read_cr3
ffffffff8102aad0 t native_write_cr3
ffffffff8102aae0 t native_read_cr4
ffffffff8102aaf0 t native_read_cr4_safe
ffffffff8102ab00 t native_write_cr4
ffffffff8102ab10 t native_read_cr8
ffffffff8102ab20 t native_write_cr8
ffffffff8102ab30 t native_wbinvd
ffffffff8102ab40 t native_save_fl
ffffffff8102ab50 t native_restore_fl
ffffffff8102ab60 t native_irq_disable
ffffffff8102ab70 t native_irq_enable
ffffffff8102ab80 t native_safe_halt
ffffffff8102ab90 t native_halt
ffffffff8102aba0 t native_cpuid
ffffffff8102abc0 t native_set_iopl_mask
ffffffff8102abd0 t native_load_sp0
ffffffff8102abe0 t native_swapgs
ffffffff8102abf0 t native_set_pte
ffffffff8102ac00 t native_set_pmd
ffffffff8102ac10 t native_set_pud
ffffffff8102ac20 t native_set_pgd
ffffffff8102ac30 t native_set_pte_at
ffffffff8102ac40 t native_set_pmd_at
ffffffff8102ac50 t __ptep_modify_prot_start
ffffffff8102ac70 t __ptep_modify_prot_commit
ffffffff8102ac80 t native_get_debugreg
ffffffff8102acf0 t native_set_debugreg
ffffffff8102ad60 t native_write_idt_entry
ffffffff8102ad80 t native_write_ldt_entry
ffffffff8102ad90 t native_write_gdt_entry
ffffffff8102adc0 t native_set_ldt
ffffffff8102ae60 t native_load_tr_desc
ffffffff8102ae70 t native_load_gdt
ffffffff8102ae80 t native_load_idt
ffffffff8102ae90 t native_store_gdt
ffffffff8102aea0 t native_store_idt
ffffffff8102aeb0 t native_store_tr
ffffffff8102aec0 t native_load_tls
ffffffff8102aef0 t __paravirt_pgd_alloc
ffffffff8102af00 T _paravirt_nop
ffffffff8102af10 T _paravirt_ident_32
ffffffff8102af20 T _paravirt_ident_64
ffffffff8102af30 t get_call_destination
ffffffff8102b080 t native_flush_tlb
ffffffff8102b090 t native_flush_tlb_global
ffffffff8102b0c0 t native_flush_tlb_single
ffffffff8102b0d0 t native_steal_clock
ffffffff8102b0e0 T paravirt_patch_nop
ffffffff8102b0f0 T paravirt_patch_ignore
ffffffff8102b100 T paravirt_patch_call
ffffffff8102b130 T paravirt_patch_jmp
ffffffff8102b150 T paravirt_patch_insns
ffffffff8102b180 T paravirt_patch_default
ffffffff8102b2f0 T paravirt_disable_iospace
ffffffff8102b310 T paravirt_enter_lazy_mmu
ffffffff8102b330 T paravirt_leave_lazy_mmu
ffffffff8102b350 T paravirt_start_context_switch
ffffffff8102b3a0 T paravirt_end_context_switch
ffffffff8102b3e0 T paravirt_get_lazy_mode
ffffffff8102b410 T arch_flush_lazy_mmu_mode
ffffffff8102b450 t start_pv_irq_ops_irq_disable
ffffffff8102b451 t end_pv_irq_ops_irq_disable
ffffffff8102b451 t start_pv_irq_ops_irq_enable
ffffffff8102b452 t end_pv_irq_ops_irq_enable
ffffffff8102b452 t start_pv_irq_ops_restore_fl
ffffffff8102b454 t end_pv_irq_ops_restore_fl
ffffffff8102b454 t start_pv_irq_ops_save_fl
ffffffff8102b456 t end_pv_irq_ops_save_fl
ffffffff8102b456 t start_pv_cpu_ops_iret
ffffffff8102b458 t end_pv_cpu_ops_iret
ffffffff8102b458 t start_pv_mmu_ops_read_cr2
ffffffff8102b45b t end_pv_mmu_ops_read_cr2
ffffffff8102b45b t start_pv_mmu_ops_read_cr3
ffffffff8102b45e t end_pv_mmu_ops_read_cr3
ffffffff8102b45e t start_pv_mmu_ops_write_cr3
ffffffff8102b461 t end_pv_mmu_ops_write_cr3
ffffffff8102b461 t start_pv_mmu_ops_flush_tlb_single
ffffffff8102b464 t end_pv_mmu_ops_flush_tlb_single
ffffffff8102b464 t start_pv_cpu_ops_clts
ffffffff8102b466 t end_pv_cpu_ops_clts
ffffffff8102b466 t start_pv_cpu_ops_wbinvd
ffffffff8102b468 t end_pv_cpu_ops_wbinvd
ffffffff8102b468 t start_pv_cpu_ops_irq_enable_sysexit
ffffffff8102b46e t end_pv_cpu_ops_irq_enable_sysexit
ffffffff8102b46e t start_pv_cpu_ops_usergs_sysret64
ffffffff8102b474 t end_pv_cpu_ops_usergs_sysret64
ffffffff8102b474 t start_pv_cpu_ops_usergs_sysret32
ffffffff8102b479 t end_pv_cpu_ops_usergs_sysret32
ffffffff8102b479 t start_pv_cpu_ops_swapgs
ffffffff8102b47c t end_pv_cpu_ops_swapgs
ffffffff8102b47c t start__mov32
ffffffff8102b47e t end__mov32
ffffffff8102b47e t start__mov64
ffffffff8102b481 t end__mov64
ffffffff8102b490 T paravirt_patch_ident_32
ffffffff8102b4b0 T paravirt_patch_ident_64
ffffffff8102b4d0 T native_patch
ffffffff8102b640 t __ticket_spin_lock
ffffffff8102b670 t __ticket_spin_trylock
ffffffff8102b6a0 t __ticket_spin_unlock
ffffffff8102b6b0 t __ticket_spin_is_locked
ffffffff8102b6c0 t __ticket_spin_is_contended
ffffffff8102b6e0 t default_spin_lock_flags
ffffffff8102b6f0 T pvclock_set_flags
ffffffff8102b700 T pvclock_tsc_khz
ffffffff8102b740 T pvclock_resume
ffffffff8102b750 T pvclock_clocksource_read
ffffffff8102b840 T pvclock_read_wallclock
ffffffff8102b8c0 t x86_swiotlb_free_coherent
ffffffff8102b8d0 t x86_swiotlb_alloc_coherent
ffffffff8102b950 T audit_classify_arch
ffffffff8102b960 T audit_classify_syscall
ffffffff8102b990 t gart_mapping_error
ffffffff8102b9a0 t alloc_iommu
ffffffff8102bb10 t flush_gart
ffffffff8102bb60 t gart_iommu_shutdown
ffffffff8102bc10 t __dma_map_cont.isra.7
ffffffff8102bd40 t iommu_full
ffffffff8102bdb0 t dma_map_area
ffffffff8102bed0 t gart_map_page
ffffffff8102bf50 t gart_unmap_page
ffffffff8102c040 t gart_unmap_sg
ffffffff8102c0b0 t gart_free_coherent
ffffffff8102c100 t enable_gart_translations.part.10
ffffffff8102c1d0 t gart_resume
ffffffff8102c2b0 t gart_map_sg
ffffffff8102c700 t gart_alloc_coherent
ffffffff8102c850 T set_up_gart_resume
ffffffff8102c870 t __raw_callee_save_vsmp_save_fl
ffffffff8102c88e t __raw_callee_save_vsmp_restore_fl
ffffffff8102c8ac t __raw_callee_save_vsmp_irq_disable
ffffffff8102c8ca t __raw_callee_save_vsmp_irq_enable
ffffffff8102c8f0 t vsmp_save_fl
ffffffff8102c910 t vsmp_restore_fl
ffffffff8102c930 t vsmp_irq_disable
ffffffff8102c950 t vsmp_irq_enable
ffffffff8102c960 t vsmp_patch
ffffffff8102c980 T is_vsmp_box
ffffffff8102c9d0 T devmem_is_allowed
ffffffff8102ca10 T free_init_pages
ffffffff8102cbe0 T free_initmem
ffffffff8102cc00 T free_initrd_mem
ffffffff8102cc20 t fill_pte
ffffffff8102cd20 t fill_pmd
ffffffff8102ce40 T sync_global_pgds
ffffffff8102cfb0 T set_pte_vaddr_pud
ffffffff8102d010 T set_pte_vaddr
ffffffff8102d070 T kern_addr_valid
ffffffff8102d190 T get_gate_vma
ffffffff8102d1c0 T in_gate_area
ffffffff8102d1f0 T in_gate_area_no_mm
ffffffff8102d210 T arch_vma_name
ffffffff8102d250 T vmalloc_sync_all
ffffffff8102d270 T do_page_fault
ffffffff8102d6a0 t __ioremap_caller
ffffffff8102da30 T ioremap_prot
ffffffff8102da40 T ioremap_cache
ffffffff8102da50 T ioremap_nocache
ffffffff8102da60 T iounmap
ffffffff8102db30 T ioremap_wc
ffffffff8102db60 T ioremap_change_attr
ffffffff8102db90 T xlate_dev_mem_ptr
ffffffff8102dc10 T unxlate_dev_mem_ptr
ffffffff8102dc40 T fixup_exception
ffffffff8102dca0 T clflush_cache_range
ffffffff8102dcd0 T lookup_address
ffffffff8102ddf0 t __cpa_flush_all
ffffffff8102de30 t __cpa_flush_range
ffffffff8102de50 t __cpa_process_fault
ffffffff8102df00 t __change_page_attr_set_clr
ffffffff8102e9e0 t change_page_attr_set_clr
ffffffff8102ee20 T set_memory_x
ffffffff8102ee70 T set_pages_x
ffffffff8102eeb0 T set_memory_nx
ffffffff8102ef00 T set_pages_nx
ffffffff8102ef40 T set_memory_ro
ffffffff8102ef70 T set_memory_rw
ffffffff8102efa0 T set_pages_array_wb
ffffffff8102f060 T set_memory_array_wb
ffffffff8102f0e0 t _set_pages_array
ffffffff8102f230 T set_pages_array_wc
ffffffff8102f240 T set_pages_array_uc
ffffffff8102f250 t _set_memory_array
ffffffff8102f370 T set_memory_array_wc
ffffffff8102f380 T set_memory_array_uc
ffffffff8102f390 T update_page_count
ffffffff8102f3e0 T arch_report_meminfo
ffffffff8102f450 T _set_memory_uc
ffffffff8102f480 T set_memory_uc
ffffffff8102f530 T set_pages_uc
ffffffff8102f570 T _set_memory_wc
ffffffff8102f5d0 T set_memory_wc
ffffffff8102f6a0 T _set_memory_wb
ffffffff8102f6d0 T set_memory_wb
ffffffff8102f730 T set_pages_wb
ffffffff8102f770 T set_memory_np
ffffffff8102f7a0 T set_memory_4k
ffffffff8102f7d0 T set_pages_ro
ffffffff8102f810 T set_pages_rw
ffffffff8102f850 t mmap_rnd
ffffffff8102f8b0 t stack_maxrandom_size
ffffffff8102f900 T arch_pick_mmap_layout
ffffffff8102fb70 T pgprot_writecombine
ffffffff8102fba0 t pat_pagerange_is_ram
ffffffff8102fc20 t lookup_memtype
ffffffff8102fd00 T pat_init
ffffffff8102fde0 T reserve_memtype
ffffffff810301b0 T free_memtype
ffffffff81030310 T io_free_memtype
ffffffff81030320 T phys_mem_access_prot
ffffffff81030330 T phys_mem_access_prot_allowed
ffffffff810303e0 T kernel_map_sync_memtype
ffffffff810304c0 t reserve_pfn_range
ffffffff810306b0 T io_reserve_memtype
ffffffff810307d0 T track_pfn_vma_copy
ffffffff81030890 T track_pfn_vma_new
ffffffff81030920 T untrack_pfn_vma
ffffffff81030990 T pte_alloc_one_kernel
ffffffff810309a0 T pte_alloc_one
ffffffff810309e0 T ___pte_free_tlb
ffffffff81030a70 T ___pmd_free_tlb
ffffffff81030ae0 T ___pud_free_tlb
ffffffff81030b50 T pgd_page_get_mm
ffffffff81030b60 T pgd_alloc
ffffffff81030d20 T pgd_free
ffffffff81030dd0 T ptep_set_access_flags
ffffffff81030e40 T pmdp_set_access_flags
ffffffff81030eb0 T ptep_test_and_clear_young
ffffffff81030ee0 T pmdp_test_and_clear_young
ffffffff81030f10 T ptep_clear_flush_young
ffffffff81030f50 T pmdp_clear_flush_young
ffffffff81030f80 T pmdp_splitting_flush
ffffffff81030fb0 T __native_set_fixmap
ffffffff81030fe0 T native_set_fixmap
ffffffff81031020 T __phys_addr
ffffffff81031050 T __virt_addr_valid
ffffffff810310f0 t gup_huge_pmd
ffffffff810311d0 t gup_huge_pud
ffffffff810312b0 t gup_pte_range
ffffffff810313e0 t gup_pud_range
ffffffff81031590 T __get_user_pages_fast
ffffffff810316a0 T get_user_pages_fast
ffffffff81031850 t memtype_rb_augment_cb
ffffffff81031890 t memtype_rb_lowest_match.constprop.2
ffffffff810318f0 T rbt_memtype_check_insert
ffffffff81031ab0 T rbt_memtype_erase
ffffffff81031b40 T rbt_memtype_lookup
ffffffff81031b50 T leave_mm
ffffffff81031b90 t do_flush_tlb_all
ffffffff81031be0 T smp_invalidate_interrupt
ffffffff81031ca0 T native_flush_tlb_others
ffffffff81031dd0 T flush_tlb_current_task
ffffffff81031e40 T flush_tlb_mm
ffffffff81031ee0 T flush_tlb_page
ffffffff81031f80 T flush_tlb_all
ffffffff81031fa0 T huge_pmd_unshare
ffffffff81032120 T huge_pte_offset
ffffffff810321c0 T huge_pte_alloc
ffffffff810325a0 T follow_huge_addr
ffffffff810325b0 T pmd_huge
ffffffff810325c0 T pud_huge
ffffffff810325d0 T follow_huge_pmd
ffffffff81032620 T follow_huge_pud
ffffffff81032670 T __node_distance
ffffffff810326b0 T arch_setup_additional_pages
ffffffff81032800 T syscall32_cpu_init
ffffffff81032880 T syscall32_setup_pages
ffffffff810329a0 t cp_stat64
ffffffff81032b00 T sys32_truncate64
ffffffff81032b10 T sys32_ftruncate64
ffffffff81032b20 T sys32_stat64
ffffffff81032b50 T sys32_lstat64
ffffffff81032b80 T sys32_fstat64
ffffffff81032bb0 T sys32_fstatat
ffffffff81032be0 T sys32_mmap
ffffffff81032c40 T sys32_mprotect
ffffffff81032c50 T sys32_rt_sigaction
ffffffff81032db0 T sys32_sigaction
ffffffff81032ed0 T sys32_alarm
ffffffff81032ee0 T sys32_waitpid
ffffffff81032ef0 T sys32_sysfs
ffffffff81032f00 T sys32_sched_rr_get_interval
ffffffff81032f80 T sys32_rt_sigpending
ffffffff81033030 T sys32_rt_sigqueueinfo
ffffffff810330e0 T sys32_pread
ffffffff81033100 T sys32_pwrite
ffffffff81033120 T sys32_personality
ffffffff81033160 T sys32_sendfile
ffffffff81033220 T sys32_execve
ffffffff81033290 T sys32_clone
ffffffff810332b0 T sys32_lseek
ffffffff810332c0 T sys32_kill
ffffffff810332d0 T sys32_fadvise64_64
ffffffff810332f0 T sys32_vm86_warning
ffffffff81033350 T sys32_lookup_dcookie
ffffffff81033370 T sys32_readahead
ffffffff81033390 T sys32_sync_file_range
ffffffff810333b0 T sys32_fadvise64
ffffffff810333d0 T sys32_fallocate
ffffffff810333f0 T sys32_fanotify_mark
ffffffff81033410 t ia32_setup_sigcontext
ffffffff81033510 t ia32_restore_sigcontext
ffffffff81033690 t get_sigframe.isra.0
ffffffff81033770 T copy_siginfo_to_user32
ffffffff810338b0 T copy_siginfo_from_user32
ffffffff81033940 T sys32_sigsuspend
ffffffff810339b0 T sys32_sigaltstack
ffffffff81033b90 T sys32_sigreturn
ffffffff81033c50 T sys32_rt_sigreturn
ffffffff81033d60 T ia32_setup_frame
ffffffff81033f70 T ia32_setup_rt_frame
ffffffff81034200 T compat_ni_syscall
ffffffff81034210 T sys32_ipc
ffffffff81034300 T ia32_classify_syscall
ffffffff81034340 t unshare_fd
ffffffff810343b0 T get_task_mm
ffffffff81034420 t sighand_ctor
ffffffff81034450 t account_kernel_stack
ffffffff810344b0 T free_task
ffffffff810344e0 T __put_task_struct
ffffffff810345d0 t mm_init.isra.27
ffffffff81034730 T __mmdrop
ffffffff810347b0 T nr_processes
ffffffff81034810 T mm_alloc
ffffffff810348f0 T added_exe_file_vma
ffffffff81034900 T removed_exe_file_vma
ffffffff81034940 T set_mm_exe_file
ffffffff81034990 T mmput
ffffffff81034a90 T get_mm_exe_file
ffffffff81034ae0 T mm_access
ffffffff81034b90 T mm_release
ffffffff81034cc0 T dup_mm
ffffffff81035250 T __cleanup_sighand
ffffffff81035280 t copy_process
ffffffff81036640 T sys_set_tid_address
ffffffff81036670 T do_fork
ffffffff81036940 T sys_unshare
ffffffff81036ba0 T unshare_files
ffffffff81036c40 t execdomains_proc_open
ffffffff81036c60 t execdomains_proc_show
ffffffff81036cd0 T __set_personality
ffffffff81036df0 t default_handler
ffffffff81036e70 T unregister_exec_domain
ffffffff81036ef0 T register_exec_domain
ffffffff81036f70 T sys_personality
ffffffff81036fa0 t no_blink
ffffffff81036fb0 t init_oops_id
ffffffff81036ff0 T add_taint
ffffffff81037030 t do_oops_enter_exit.part.3
ffffffff81037110 T test_taint
ffffffff81037120 W panic_smp_self_stop
ffffffff81037130 T print_tainted
ffffffff810371e0 T get_taint
ffffffff810371f0 T oops_may_print
ffffffff81037200 T oops_enter
ffffffff81037230 T print_oops_end_marker
ffffffff81037260 t warn_slowpath_common
ffffffff81037320 T warn_slowpath_null
ffffffff81037340 T warn_slowpath_fmt_taint
ffffffff81037390 T warn_slowpath_fmt
ffffffff810373e0 T oops_exit
ffffffff81037410 t __call_console_drivers
ffffffff810374b0 T kmsg_dump_register
ffffffff81037540 T kmsg_dump_unregister
ffffffff810375c0 T printk_timed_ratelimit
ffffffff81037620 T __printk_ratelimit
ffffffff81037630 t _call_console_drivers
ffffffff810376a0 t emit_log_char
ffffffff81037710 t log_prefix.part.5
ffffffff810377e0 T console_trylock
ffffffff81037830 T console_unlock
ffffffff81037a80 T vprintk
ffffffff81037f00 T console_lock
ffffffff81037f50 T unregister_console
ffffffff81037ff0 T console_start
ffffffff81038010 T console_stop
ffffffff81038030 T register_console
ffffffff810383c0 t __add_preferred_console.constprop.8
ffffffff810384a0 T do_syslog
ffffffff81038a40 T sys_syslog
ffffffff81038a60 T add_preferred_console
ffffffff81038a70 T update_console_cmdline
ffffffff81038b20 T suspend_console
ffffffff81038b60 T resume_console
ffffffff81038ba0 T is_console_locked
ffffffff81038bb0 T printk_tick
ffffffff81038c20 T printk_needs_cpu
ffffffff81038c50 T wake_up_klogd
ffffffff81038c70 T console_unblank
ffffffff81038cf0 T console_device
ffffffff81038d50 T kmsg_dump
ffffffff81038e30 t __cpu_notify
ffffffff81038e60 T get_online_cpus
ffffffff81038ea0 t cpu_hotplug_begin
ffffffff81038ef0 T put_online_cpus
ffffffff81038f50 t cpu_notify_nofail
ffffffff81038f70 T cpu_maps_update_begin
ffffffff81038f80 T cpu_maps_update_done
ffffffff81038fb0 T disable_nonboot_cpus
ffffffff810390c0 T cpu_hotplug_disable_before_freeze
ffffffff810390e0 T cpu_hotplug_enable_after_thaw
ffffffff81039100 t cpu_hotplug_pm_callback
ffffffff81039150 T set_cpu_possible
ffffffff81039170 T set_cpu_present
ffffffff81039190 T set_cpu_online
ffffffff810391b0 T set_cpu_active
ffffffff810391d0 T init_cpu_present
ffffffff810391e0 T init_cpu_possible
ffffffff810391f0 T init_cpu_online
ffffffff81039200 t will_become_orphaned_pgrp
ffffffff81039290 t kill_orphaned_pgrp
ffffffff81039340 t exit_mm
ffffffff81039460 T disallow_signal
ffffffff810394e0 T allow_signal
ffffffff81039570 t delayed_put_task_struct
ffffffff810395d0 t task_stopped_code
ffffffff81039610 t child_wait_callback
ffffffff81039670 T release_task
ffffffff81039a70 t wait_consider_task
ffffffff8103a4a0 t do_wait
ffffffff8103a6b0 T session_of_pgrp
ffffffff8103a6f0 T is_current_pgrp_orphaned
ffffffff8103a730 T __set_special_pids
ffffffff8103a7b0 T get_files_struct
ffffffff8103a800 T put_files_struct
ffffffff8103a920 T reset_files_struct
ffffffff8103a990 T exit_files
ffffffff8103aa10 T do_exit
ffffffff8103b300 T complete_and_exit
ffffffff8103b320 T daemonize
ffffffff8103b5e0 T sys_exit
ffffffff8103b600 T do_group_exit
ffffffff8103b6b0 T sys_exit_group
ffffffff8103b6d0 T __wake_up_parent
ffffffff8103b6f0 T sys_waitid
ffffffff8103b850 T sys_wait4
ffffffff8103b950 T sys_waitpid
ffffffff8103b960 t set_cpu_itimer
ffffffff8103bb60 t get_cpu_itimer
ffffffff8103bc60 T do_getitimer
ffffffff8103bd90 T sys_getitimer
ffffffff8103bde0 T it_real_fn
ffffffff8103be00 T do_setitimer
ffffffff8103c050 T alarm_setitimer
ffffffff8103c0b0 T sys_setitimer
ffffffff8103c1b0 T jiffies_to_msecs
ffffffff8103c1c0 T jiffies_to_usecs
ffffffff8103c1d0 T timespec_trunc
ffffffff8103c210 T mktime
ffffffff8103c2b0 T set_normalized_timespec
ffffffff8103c2f0 T msecs_to_jiffies
ffffffff8103c310 T usecs_to_jiffies
ffffffff8103c350 T timespec_to_jiffies
ffffffff8103c3a0 T jiffies_to_timespec
ffffffff8103c3e0 T timeval_to_jiffies
ffffffff8103c430 T jiffies_to_timeval
ffffffff8103c470 T jiffies_to_clock_t
ffffffff8103c490 T clock_t_to_jiffies
ffffffff8103c4e0 T jiffies_64_to_clock_t
ffffffff8103c500 T current_fs_time
ffffffff8103c550 T ns_to_timespec
ffffffff8103c5d0 T ns_to_timeval
ffffffff8103c610 T sys_time
ffffffff8103c660 T sys_stime
ffffffff8103c6b0 T sys_gettimeofday
ffffffff8103c730 T do_sys_settimeofday
ffffffff8103c820 T sys_settimeofday
ffffffff8103c8c0 T sys_adjtimex
ffffffff8103c950 T nsec_to_clock_t
ffffffff8103c970 T nsecs_to_jiffies64
ffffffff8103c990 T nsecs_to_jiffies
ffffffff8103c9b0 T timespec_add_safe
ffffffff8103ca30 T tasklet_init
ffffffff8103ca50 t tasklet_hi_action
ffffffff8103cb30 t tasklet_action
ffffffff8103cc10 t wakeup_softirqd
ffffffff8103cc30 T tasklet_hrtimer_init
ffffffff8103cc90 t __tasklet_hrtimer_trampoline
ffffffff8103ccc0 T tasklet_kill
ffffffff8103cd40 T local_bh_disable
ffffffff8103cd60 T local_bh_enable_ip
ffffffff8103ce00 T local_bh_enable
ffffffff8103cea0 t __local_bh_enable
ffffffff8103cf20 T _local_bh_enable
ffffffff8103cf30 T __tasklet_hi_schedule_first
ffffffff8103cf60 t __local_trigger
ffffffff8103cfc0 T __send_remote_softirq
ffffffff8103d010 T send_remote_softirq
ffffffff8103d040 t remote_softirq_receive
ffffffff8103d070 T __tasklet_hi_schedule
ffffffff8103d0e0 t __hrtimer_tasklet_trampoline
ffffffff8103d110 T __tasklet_schedule
ffffffff8103d180 T __do_softirq
ffffffff8103d2b0 t run_ksoftirqd
ffffffff8103d420 T irq_enter
ffffffff8103d490 T irq_exit
ffffffff8103d500 T raise_softirq_irqoff
ffffffff8103d540 T raise_softirq
ffffffff8103d5a0 T __raise_softirq_irqoff
ffffffff8103d5c0 T open_softirq
ffffffff8103d5d0 T tasklet_kill_immediate
ffffffff8103d650 t r_stop
ffffffff8103d660 t __request_resource
ffffffff8103d6b0 t __is_ram
ffffffff8103d6c0 t simple_align_resource
ffffffff8103d6d0 t devm_region_match
ffffffff8103d700 t iomem_open
ffffffff8103d730 t ioports_open
ffffffff8103d760 t r_show
ffffffff8103d7e0 t __insert_resource
ffffffff8103d8f0 T adjust_resource
ffffffff8103d9c0 T release_resource
ffffffff8103da30 T __release_region
ffffffff8103db10 t devm_region_release
ffffffff8103db30 T __request_region
ffffffff8103dd00 T __devm_request_region
ffffffff8103ddd0 T __check_region
ffffffff8103de10 t r_next
ffffffff8103de40 t r_start
ffffffff8103deb0 t __release_child_resources.isra.5
ffffffff8103df30 T __devm_release_region
ffffffff8103df90 T release_child_resources
ffffffff8103dfc0 T request_resource_conflict
ffffffff8103e000 T request_resource
ffffffff8103e020 T walk_system_ram_range
ffffffff8103e130 W page_is_ram
ffffffff8103e170 t __find_resource
ffffffff8103e340 T reallocate_resource
ffffffff8103e500 T allocate_resource
ffffffff8103e5d0 T lookup_resource
ffffffff8103e620 T insert_resource_conflict
ffffffff8103e660 T insert_resource
ffffffff8103e680 T insert_resource_expand_to_fit
ffffffff8103e720 T resource_alignment
ffffffff8103e760 T iomem_map_sanity_check
ffffffff8103e830 T iomem_is_exclusive
ffffffff8103e8e0 t proc_put_long
ffffffff8103e980 T proc_dostring
ffffffff8103eb60 t proc_put_char
ffffffff8103eb90 t do_proc_dointvec_conv
ffffffff8103ebe0 t do_proc_dointvec_minmax_conv
ffffffff8103ec50 t do_proc_dointvec_jiffies_conv
ffffffff8103ecd0 t do_proc_dointvec_ms_jiffies_conv
ffffffff8103ed40 t do_proc_dointvec_userhz_jiffies_conv
ffffffff8103edc0 t proc_get_long.constprop.8
ffffffff8103ef30 t __do_proc_doulongvec_minmax
ffffffff8103f2f0 T proc_doulongvec_ms_jiffies_minmax
ffffffff8103f330 T proc_doulongvec_minmax
ffffffff8103f370 t proc_taint
ffffffff8103f460 t __do_proc_dointvec.isra.5
ffffffff8103f800 T proc_dointvec_ms_jiffies
ffffffff8103f840 T proc_dointvec_userhz_jiffies
ffffffff8103f880 T proc_dointvec_jiffies
ffffffff8103f8c0 T proc_dointvec_minmax
ffffffff8103f910 t proc_dointvec_minmax_sysadmin
ffffffff8103f970 T proc_dointvec
ffffffff8103f9b0 t proc_do_cad_pid
ffffffff8103fa70 T proc_do_large_bitmap
ffffffff8103fee0 t do_sysctl.isra.1.part.2
ffffffff81040030 T sys_sysctl
ffffffff810400f0 T compat_sys_sysctl
ffffffff810401a0 T ns_capable
ffffffff810401f0 T capable
ffffffff81040200 t cap_validate_magic
ffffffff810402f0 T sys_capget
ffffffff81040440 T sys_capset
ffffffff810405e0 T has_ns_capability
ffffffff81040600 T has_capability
ffffffff81040610 T has_ns_capability_noaudit
ffffffff81040630 T has_capability_noaudit
ffffffff81040640 T nsown_capable
ffffffff81040650 t ptrace_get_task_struct
ffffffff81040680 t ptrace_trapping_sleep_fn
ffffffff81040690 t ptrace_getsiginfo
ffffffff81040700 t ptrace_setsiginfo
ffffffff81040770 t ptrace_regset
ffffffff810408b0 t ptrace_resume
ffffffff81040960 t ptrace_has_cap
ffffffff810409a0 t __ptrace_detach.part.7
ffffffff81040ab0 T __ptrace_link
ffffffff81040b00 t ptrace_traceme
ffffffff81040b90 T __ptrace_unlink
ffffffff81040c90 T ptrace_check_attach
ffffffff81040da0 T __ptrace_may_access
ffffffff81040e70 t ptrace_attach
ffffffff810410f0 T ptrace_may_access
ffffffff81041140 T exit_ptrace
ffffffff810412b0 T ptrace_readdata
ffffffff81041360 T ptrace_writedata
ffffffff81041410 T sys_ptrace
ffffffff810414f0 T generic_ptrace_peekdata
ffffffff81041540 T generic_ptrace_pokedata
ffffffff81041570 T ptrace_request
ffffffff810419e0 T compat_ptrace_request
ffffffff81041bf0 T compat_sys_ptrace
ffffffff81041cd0 T ptrace_get_breakpoints
ffffffff81041d20 T ptrace_put_breakpoints
ffffffff81041d40 T __round_jiffies
ffffffff81041da0 T __round_jiffies_relative
ffffffff81041e00 T round_jiffies
ffffffff81041e70 T round_jiffies_relative
ffffffff81041e80 T __round_jiffies_up
ffffffff81041ed0 T __round_jiffies_up_relative
ffffffff81041f30 T round_jiffies_up
ffffffff81041f80 T round_jiffies_up_relative
ffffffff81041fe0 T set_timer_slack
ffffffff81041ff0 t internal_add_timer
ffffffff810420f0 T init_timer_key
ffffffff81042120 T init_timer_deferrable_key
ffffffff81042150 T usleep_range
ffffffff81042190 T add_timer_on
ffffffff81042230 t process_timeout
ffffffff81042240 t cascade
ffffffff810422d0 t run_timer_softirq
ffffffff81042510 t lock_timer_base.isra.28
ffffffff81042580 T mod_timer_pinned
ffffffff810426e0 T mod_timer
ffffffff810428d0 T add_timer
ffffffff810428f0 T mod_timer_pending
ffffffff81042a30 T try_to_del_timer_sync
ffffffff81042ac0 T del_timer_sync
ffffffff81042b10 T msleep
ffffffff81042b40 T msleep_interruptible
ffffffff81042b80 T del_timer
ffffffff81042c00 T setup_deferrable_timer_on_stack_key
ffffffff81042c40 T run_local_timers
ffffffff81042c60 T update_process_times
ffffffff81042ce0 T sys_alarm
ffffffff81042cf0 T sys_getpid
ffffffff81042d20 T sys_getppid
ffffffff81042d50 T sys_getuid
ffffffff81042d70 T sys_geteuid
ffffffff81042d90 T sys_getgid
ffffffff81042db0 T sys_getegid
ffffffff81042dd0 T sys_gettid
ffffffff81042df0 T do_sysinfo
ffffffff81042f70 T sys_sysinfo
ffffffff81042fb0 t uid_hash_find
ffffffff81042ff0 T find_user
ffffffff81043050 T free_uid
ffffffff81043100 T alloc_uid
ffffffff81043230 t recalc_sigpending_tsk
ffffffff81043280 T block_all_signals
ffffffff81043310 t __sigqueue_alloc
ffffffff810433e0 T recalc_sigpending
ffffffff81043430 T unblock_all_signals
ffffffff810434a0 t __sigqueue_free
ffffffff810434e0 t __dequeue_signal
ffffffff810436f0 T dequeue_signal
ffffffff81043870 t __flush_itimer_signals
ffffffff810438f0 t rm_from_queue_full.part.15
ffffffff81043970 t rm_from_queue
ffffffff81043a00 t check_kill_permission.part.17
ffffffff81043b40 T next_signal
ffffffff81043b70 T task_set_jobctl_pending
ffffffff81043bd0 T task_clear_jobctl_trapping
ffffffff81043c00 T task_clear_jobctl_pending
ffffffff81043c40 t task_participate_group_stop
ffffffff81043d10 T flush_sigqueue
ffffffff81043d50 T __flush_signals
ffffffff81043d80 T flush_signals
ffffffff81043de0 T flush_itimer_signals
ffffffff81043e50 T ignore_signals
ffffffff81043e80 T flush_signal_handlers
ffffffff81043ee0 T unhandled_signal
ffffffff81043f20 T signal_wake_up
ffffffff81043f60 t __set_task_blocked
ffffffff81043fe0 t prepare_signal
ffffffff810441f0 t complete_signal
ffffffff81044440 t __send_signal
ffffffff81044780 t send_signal
ffffffff81044810 T recalc_sigpending_and_wake
ffffffff81044830 T __group_send_sig_info
ffffffff81044840 t do_notify_parent_cldstop
ffffffff810449e0 t ptrace_stop
ffffffff81044c50 t ptrace_do_notify
ffffffff81044d00 t do_signal_stop
ffffffff81044ee0 T force_sig_info
ffffffff81044fe0 T force_sig
ffffffff81044ff0 T zap_other_threads
ffffffff81045080 T __lock_task_sighand
ffffffff81045120 T kill_pid_info_as_cred
ffffffff81045250 T do_send_sig_info
ffffffff810452e0 t do_send_specific
ffffffff810453b0 t do_tkill
ffffffff81045430 T send_sig_info
ffffffff81045450 T send_sig
ffffffff81045470 T group_send_sig_info
ffffffff810454f0 T __kill_pgrp_info
ffffffff81045550 T kill_pgrp
ffffffff810455b0 T kill_pid_info
ffffffff81045610 T kill_pid
ffffffff81045630 T kill_proc_info
ffffffff81045660 T force_sigsegv
ffffffff810456c0 T sigqueue_alloc
ffffffff810456f0 T sigqueue_free
ffffffff81045780 T send_sigqueue
ffffffff810458d0 T do_notify_parent
ffffffff81045b40 T ptrace_notify
ffffffff81045bc0 T get_signal_to_deliver
ffffffff810460d0 T exit_signals
ffffffff81046200 T sys_restart_syscall
ffffffff81046220 T do_no_restart_syscall
ffffffff81046230 T set_current_blocked
ffffffff810462a0 T sigprocmask
ffffffff81046330 T block_sigmask
ffffffff81046380 T sys_rt_sigprocmask
ffffffff81046420 T do_sigpending
ffffffff810464e0 T sys_rt_sigpending
ffffffff810464f0 T copy_siginfo_to_user
ffffffff810466d0 T do_sigtimedwait
ffffffff81046890 T sys_rt_sigtimedwait
ffffffff81046980 T sys_kill
ffffffff81046b40 T sys_tgkill
ffffffff81046b70 T sys_tkill
ffffffff81046ba0 T sys_rt_sigqueueinfo
ffffffff81046c60 T do_rt_tgsigqueueinfo
ffffffff81046ce0 T sys_rt_tgsigqueueinfo
ffffffff81046d60 T do_sigaction
ffffffff81046f70 T do_sigaltstack
ffffffff810470c0 T sys_sigpending
ffffffff810470d0 T sys_sigprocmask
ffffffff810471c0 T sys_rt_sigaction
ffffffff81047280 T sys_sgetmask
ffffffff810472a0 T sys_ssetmask
ffffffff810472e0 T sys_signal
ffffffff81047320 T sys_pause
ffffffff81047360 T sys_rt_sigsuspend
ffffffff81047400 t argv_cleanup
ffffffff81047410 t kernel_shutdown_prepare
ffffffff81047450 T kernel_power_off
ffffffff810474a0 T kernel_halt
ffffffff810474e0 T unregister_reboot_notifier
ffffffff810474f0 T register_reboot_notifier
ffffffff81047500 T emergency_restart
ffffffff81047520 t set_one_prio
ffffffff81047600 t override_release
ffffffff810476a0 t set_user.isra.12
ffffffff81047730 T orderly_poweroff
ffffffff81047810 T sys_setpriority
ffffffff81047a50 T sys_getpriority
ffffffff81047ca0 T kernel_restart_prepare
ffffffff81047ce0 T kernel_restart
ffffffff81047d30 t deferred_cad
ffffffff81047d40 T sys_reboot
ffffffff81047f60 T ctrl_alt_del
ffffffff81047f90 T sys_setregid
ffffffff81048090 T sys_setgid
ffffffff81048140 T sys_setreuid
ffffffff81048280 T sys_setuid
ffffffff81048370 T sys_setresuid
ffffffff810484e0 T sys_getresuid
ffffffff81048530 T sys_setresgid
ffffffff81048660 T sys_getresgid
ffffffff810486b0 T sys_setfsuid
ffffffff81048780 T sys_setfsgid
ffffffff81048820 T do_sys_times
ffffffff810488f0 T sys_times
ffffffff81048940 T sys_setpgid
ffffffff81048b10 T sys_getpgid
ffffffff81048b70 T sys_getpgrp
ffffffff81048b80 T sys_getsid
ffffffff81048be0 T sys_setsid
ffffffff81048cb0 T sys_newuname
ffffffff81048d70 T sys_uname
ffffffff81048e40 T sys_olduname
ffffffff81048fc0 T sys_sethostname
ffffffff810490b0 T sys_gethostname
ffffffff81049150 T sys_setdomainname
ffffffff81049250 T sys_old_getrlimit
ffffffff81049330 T do_prlimit
ffffffff81049530 T sys_getrlimit
ffffffff81049590 T sys_prlimit64
ffffffff81049780 T sys_setrlimit
ffffffff810497d0 T getrusage
ffffffff81049ba0 T sys_getrusage
ffffffff81049be0 T sys_umask
ffffffff81049c00 T sys_prctl
ffffffff81049f00 T sys_getcpu
ffffffff81049f60 T call_usermodehelper_setfns
ffffffff81049f70 t proc_cap_handler
ffffffff8104a150 T call_usermodehelper_setup
ffffffff8104a1d0 t ____call_usermodehelper
ffffffff8104a2f0 T usermodehelper_read_unlock
ffffffff8104a300 T usermodehelper_read_lock_wait
ffffffff8104a3b0 T usermodehelper_read_trylock
ffffffff8104a480 T call_usermodehelper_freeinfo
ffffffff8104a4a0 T call_usermodehelper_exec
ffffffff8104a5a0 t umh_complete
ffffffff8104a5c0 t wait_for_helper
ffffffff8104a660 t __call_usermodehelper
ffffffff8104a710 t free_modprobe_argv
ffffffff8104a730 T __request_module
ffffffff8104a8c0 T __usermodehelper_set_disable_depth
ffffffff8104a900 T __usermodehelper_disable
ffffffff8104a9f0 t need_to_create_worker
ffffffff8104aa40 t move_linked_works
ffffffff8104aad0 t cwq_activate_first_delayed
ffffffff8104ab60 t send_mayday
ffffffff8104abc0 t wake_up_worker
ffffffff8104abe0 t alloc_worker
ffffffff8104ac40 t wq_clamp_max_active
ffffffff8104acb0 t do_work_for_cpu
ffffffff8104acd0 t wq_barrier_func
ffffffff8104ace0 t worker_enter_idle
ffffffff8104ae10 t start_worker
ffffffff8104ae30 t destroy_worker
ffffffff8104af00 t create_worker
ffffffff8104b0d0 t gcwq_mayday_timeout
ffffffff8104b140 t idle_worker_timeout
ffffffff8104b1c0 T work_on_cpu
ffffffff8104b280 t get_cwq
ffffffff8104b2c0 T workqueue_congested
ffffffff8104b2e0 T workqueue_set_max_active
ffffffff8104b460 t flush_workqueue_prep_cwqs
ffffffff8104b630 T flush_workqueue
ffffffff8104b9a0 T flush_scheduled_work
ffffffff8104b9b0 T drain_workqueue
ffffffff8104bb80 t get_work_gcwq
ffffffff8104bbd0 T work_cpu
ffffffff8104bbf0 T work_busy
ffffffff8104bc60 T queue_delayed_work_on
ffffffff8104bd60 T schedule_delayed_work_on
ffffffff8104bd80 t insert_work
ffffffff8104bdf0 t insert_wq_barrier
ffffffff8104bea0 t start_flush_work
ffffffff8104bf90 T flush_work
ffffffff8104bfc0 t wait_on_work
ffffffff8104c130 T flush_work_sync
ffffffff8104c190 t __queue_work
ffffffff8104c500 t delayed_work_timer_fn
ffffffff8104c530 T queue_work_on
ffffffff8104c550 T schedule_work_on
ffffffff8104c560 T queue_work
ffffffff8104c580 T schedule_work
ffffffff8104c590 T flush_delayed_work_sync
ffffffff8104c5d0 T flush_delayed_work
ffffffff8104c610 T execute_in_process_context
ffffffff8104c670 t worker_maybe_bind_and_lock.isra.28
ffffffff8104c750 t worker_rebind_fn
ffffffff8104c830 t cwq_dec_nr_in_flight
ffffffff8104c8d0 t process_one_work
ffffffff8104cc40 t process_scheduled_works
ffffffff8104cc80 t rescuer_thread
ffffffff8104ceb0 t manage_workers.isra.30
ffffffff8104d0c0 t worker_thread
ffffffff8104d400 t free_cwqs
ffffffff8104d430 T __alloc_workqueue_key
ffffffff8104d8a0 T destroy_workqueue
ffffffff8104da50 t __cancel_work_timer
ffffffff8104dba0 T cancel_delayed_work_sync
ffffffff8104dbb0 T cancel_work_sync
ffffffff8104dbc0 T queue_delayed_work
ffffffff8104dbe0 T schedule_delayed_work
ffffffff8104dc00 T wq_worker_waking_up
ffffffff8104dc30 T wq_worker_sleeping
ffffffff8104dcc0 T schedule_on_each_cpu
ffffffff8104ddb0 T keventd_up
ffffffff8104ddc0 T freeze_workqueues_begin
ffffffff8104df20 T freeze_workqueues_busy
ffffffff8104e030 T thaw_workqueues
ffffffff8104e1e0 T is_container_init
ffffffff8104e210 T find_pid_ns
ffffffff8104e2b0 T find_vpid
ffffffff8104e2d0 T pid_task
ffffffff8104e310 T get_task_pid
ffffffff8104e340 T get_pid_task
ffffffff8104e390 T find_get_pid
ffffffff8104e3a0 T task_active_pid_ns
ffffffff8104e3d0 T put_pid
ffffffff8104e430 t delayed_put_pid
ffffffff8104e440 t free_pidmap.isra.1
ffffffff8104e470 T next_pidmap
ffffffff8104e500 T free_pid
ffffffff8104e5a0 t __change_pid
ffffffff8104e610 T alloc_pid
ffffffff8104ea50 T attach_pid
ffffffff8104ea90 T detach_pid
ffffffff8104eaa0 T change_pid
ffffffff8104eb10 T transfer_pid
ffffffff8104eb70 T find_task_by_pid_ns
ffffffff8104eba0 T find_task_by_vpid
ffffffff8104ebc0 T pid_nr_ns
ffffffff8104ebf0 T task_tgid_nr_ns
ffffffff8104ec10 T __task_pid_nr_ns
ffffffff8104ec70 T pid_vnr
ffffffff8104ec90 T find_ge_pid
ffffffff8104ecd0 T do_trace_rcu_torture_read
ffffffff8104ece0 T wait_rcu_gp
ffffffff8104ed30 t wakeme_after_rcu
ffffffff8104ed40 T search_exception_tables
ffffffff8104ed80 T core_kernel_text
ffffffff8104edc0 T core_kernel_data
ffffffff8104ede0 T __kernel_text_address
ffffffff8104ee60 T kernel_text_address
ffffffff8104eec0 T func_ptr_is_kernel_text
ffffffff8104ef20 t param_array_free
ffffffff8104ef70 t module_attr_show
ffffffff8104efa0 t module_attr_store
ffffffff8104efd0 t uevent_filter
ffffffff8104efe0 t param_array_get
ffffffff8104f090 T param_get_string
ffffffff8104f0b0 t maybe_kfree_parameter
ffffffff8104f120 t param_free_charp
ffffffff8104f130 T __kernel_param_lock
ffffffff8104f140 t param_attr_store
ffffffff8104f1c0 T __kernel_param_unlock
ffffffff8104f1d0 t param_attr_show
ffffffff8104f250 T param_get_invbool
ffffffff8104f270 T param_get_bool
ffffffff8104f290 T param_get_charp
ffffffff8104f2b0 T param_get_ulong
ffffffff8104f2d0 T param_get_long
ffffffff8104f2f0 T param_get_uint
ffffffff8104f310 T param_get_int
ffffffff8104f330 T param_get_ushort
ffffffff8104f350 T param_get_short
ffffffff8104f370 T param_get_byte
ffffffff8104f390 T param_set_copystring
ffffffff8104f400 T param_set_bool
ffffffff8104f420 T param_set_bint
ffffffff8104f470 T param_set_invbool
ffffffff8104f4b0 T param_set_byte
ffffffff8104f4f0 T param_set_ushort
ffffffff8104f530 T param_set_uint
ffffffff8104f570 T param_set_ulong
ffffffff8104f5a0 T param_set_short
ffffffff8104f5f0 T param_set_int
ffffffff8104f630 T param_set_long
ffffffff8104f660 t add_sysfs_param.isra.3
ffffffff8104f8a0 T param_set_charp
ffffffff8104f990 t param_array_set
ffffffff8104faa0 T parameqn
ffffffff8104faf0 T parameq
ffffffff8104fb60 T parse_args
ffffffff8104fe80 T module_param_sysfs_setup
ffffffff8104ff50 T module_param_sysfs_remove
ffffffff8104ffa0 T destroy_params
ffffffff8104ffe0 T __modver_version_show
ffffffff81050010 t posix_clock_realtime_adj
ffffffff81050020 t posix_clock_realtime_get
ffffffff81050040 t posix_clock_realtime_set
ffffffff81050050 t posix_get_boottime
ffffffff81050070 t posix_get_monotonic_coarse
ffffffff81050090 t posix_get_realtime_coarse
ffffffff810500b0 t posix_get_coarse_res
ffffffff810500e0 t posix_get_monotonic_raw
ffffffff81050100 t common_timer_get
ffffffff81050210 t common_timer_del
ffffffff81050230 t common_timer_create
ffffffff81050250 t common_timer_set
ffffffff810503f0 t common_nsleep
ffffffff81050420 t posix_ktime_get_ts
ffffffff81050440 t release_posix_timer
ffffffff810504c0 t k_itimer_rcu_free
ffffffff810504d0 t __lock_timer
ffffffff81050550 T posix_timer_event
ffffffff81050590 t posix_timer_fn
ffffffff81050660 t clockid_to_kclock
ffffffff810506b0 T posix_timers_register_clock
ffffffff81050720 T do_schedule_next_timer
ffffffff81050800 T sys_timer_create
ffffffff81050be0 T sys_timer_gettime
ffffffff81050ca0 T sys_timer_getoverrun
ffffffff81050ce0 T sys_timer_settime
ffffffff81050e80 T sys_timer_delete
ffffffff81050fa0 T exit_itimers
ffffffff81051090 T sys_clock_settime
ffffffff81051110 T sys_clock_gettime
ffffffff81051180 T sys_clock_adjtime
ffffffff81051250 T sys_clock_getres
ffffffff810512c0 T sys_clock_nanosleep
ffffffff81051380 T clock_nanosleep_restart
ffffffff810513d0 T kthread_should_stop
ffffffff810513f0 T __init_kthread_worker
ffffffff81051410 t kthread_flush_work_fn
ffffffff81051420 T flush_kthread_work
ffffffff810514b0 T queue_kthread_work
ffffffff81051550 T flush_kthread_worker
ffffffff810515f0 T kthread_worker_fn
ffffffff81051790 T kthread_freezable_should_stop
ffffffff810517f0 t kthread
ffffffff81051880 T kthread_stop
ffffffff810518f0 T kthread_create_on_node
ffffffff81051a10 T kthread_bind
ffffffff81051a90 T kthread_data
ffffffff81051aa0 T tsk_fork_get_node
ffffffff81051ac0 T kthreadd
ffffffff81051c30 T __init_waitqueue_head
ffffffff81051c50 T bit_waitqueue
ffffffff81051d20 T __wake_up_bit
ffffffff81051d50 T wake_up_bit
ffffffff81051d90 T prepare_to_wait_exclusive
ffffffff81051e20 T prepare_to_wait
ffffffff81051eb0 T remove_wait_queue
ffffffff81051f10 T add_wait_queue_exclusive
ffffffff81051f60 T add_wait_queue
ffffffff81051fc0 T abort_exclusive_wait
ffffffff81052070 T autoremove_wake_function
ffffffff810520a0 T wake_bit_function
ffffffff810520d0 T finish_wait
ffffffff81052160 T __kfifo_len_r
ffffffff81052190 T __kfifo_skip_r
ffffffff810521d0 T __kfifo_dma_in_finish_r
ffffffff81052220 T __kfifo_dma_out_finish_r
ffffffff81052260 T __kfifo_init
ffffffff810522c0 T __kfifo_free
ffffffff81052300 T __kfifo_alloc
ffffffff810523a0 t setup_sgl_buf.part.4
ffffffff81052540 t setup_sgl.isra.5
ffffffff81052610 T __kfifo_dma_out_prepare
ffffffff81052650 T __kfifo_dma_in_prepare
ffffffff81052690 t kfifo_copy_to_user.isra.6
ffffffff81052760 T __kfifo_to_user_r
ffffffff81052820 T __kfifo_to_user
ffffffff810528a0 t kfifo_copy_from_user.isra.7
ffffffff81052980 T __kfifo_from_user
ffffffff81052a00 T __kfifo_from_user_r
ffffffff81052ae0 t kfifo_copy_out.isra.8
ffffffff81052b70 t kfifo_out_copy_r
ffffffff81052bd0 T __kfifo_out_peek
ffffffff81052c00 T __kfifo_out
ffffffff81052c10 T __kfifo_out_r
ffffffff81052c60 T __kfifo_out_peek_r
ffffffff81052c80 t kfifo_copy_in.isra.11
ffffffff81052d10 T __kfifo_in
ffffffff81052d50 T __kfifo_in_r
ffffffff81052df0 T __kfifo_dma_out_prepare_r
ffffffff81052e60 T __kfifo_dma_in_prepare_r
ffffffff81052ed0 T __kfifo_max_r
ffffffff81052ef0 W compat_sys_ipc
ffffffff81052ef0 W compat_sys_kexec_load
ffffffff81052ef0 W compat_sys_keyctl
ffffffff81052ef0 W compat_sys_open_by_handle_at
ffffffff81052ef0 W ppc_rtas
ffffffff81052ef0 W sys32_quotactl
ffffffff81052ef0 W sys_add_key
ffffffff81052ef0 W sys_ipc
ffffffff81052ef0 W sys_kexec_load
ffffffff81052ef0 W sys_keyctl
ffffffff81052ef0 W sys_lookup_dcookie
ffffffff81052ef0 W sys_name_to_handle_at
ffffffff81052ef0 T sys_ni_syscall
ffffffff81052ef0 W sys_open_by_handle_at
ffffffff81052ef0 W sys_pciconfig_iobase
ffffffff81052ef0 W sys_pciconfig_read
ffffffff81052ef0 W sys_pciconfig_write
ffffffff81052ef0 W sys_quotactl
ffffffff81052ef0 W sys_request_key
ffffffff81052ef0 W sys_spu_create
ffffffff81052ef0 W sys_spu_run
ffffffff81052ef0 W sys_subpage_prot
ffffffff81052ef0 W sys_vm86
ffffffff81052ef0 W sys_vm86old
ffffffff81052f00 t bump_cpu_timer
ffffffff81052fe0 t cleanup_timers
ffffffff81053100 t arm_timer
ffffffff81053220 t process_cpu_nsleep_restart
ffffffff81053230 t cpu_clock_sample
ffffffff810532a0 t sample_to_timespec
ffffffff810532e0 t posix_cpu_timer_del
ffffffff810533f0 t posix_cpu_timer_create
ffffffff810534b0 t thread_cpu_timer_create
ffffffff810534c0 t process_cpu_timer_create
ffffffff810534d0 t check_clock
ffffffff81053540 t posix_cpu_clock_set
ffffffff81053560 t posix_cpu_clock_getres
ffffffff810535c0 t thread_cpu_clock_getres
ffffffff810535d0 t process_cpu_clock_getres
ffffffff810535e0 t check_cpu_itimer.part.3
ffffffff81053680 T thread_group_cputime
ffffffff81053720 t cpu_clock_sample_group
ffffffff810537b0 t posix_cpu_clock_get
ffffffff810538e0 t thread_cpu_clock_get
ffffffff810538f0 t process_cpu_clock_get
ffffffff81053900 T thread_group_cputimer
ffffffff810539f0 t cpu_timer_sample_group
ffffffff81053a80 t posix_cpu_timer_get
ffffffff81053c10 T posix_cpu_timers_exit
ffffffff81053c40 T posix_cpu_timers_exit_group
ffffffff81053c80 T posix_cpu_timer_schedule
ffffffff81053df0 t cpu_timer_fire
ffffffff81053e60 t posix_cpu_timer_set
ffffffff81054190 t do_cpu_nanosleep
ffffffff81054380 t posix_cpu_nsleep_restart
ffffffff81054420 t posix_cpu_nsleep
ffffffff81054550 t process_cpu_nsleep
ffffffff81054560 T run_posix_cpu_timers
ffffffff81054d30 T set_process_cpu_timer
ffffffff81054e40 T update_rlimit_cpu
ffffffff81054ea0 T __mutex_init
ffffffff81054ed0 T atomic_dec_and_mutex_lock
ffffffff81054f70 T ktime_add_safe
ffffffff81054fb0 T hrtimer_init_sleeper
ffffffff81054fc0 T hrtimer_forward
ffffffff81055080 t enqueue_hrtimer
ffffffff810550d0 T hrtimer_get_res
ffffffff81055110 t update_rmtp
ffffffff81055170 t hrtimer_wakeup
ffffffff810551a0 T hrtimer_init
ffffffff81055290 t hrtimer_force_reprogram
ffffffff81055300 t __remove_hrtimer
ffffffff810553b0 t retrigger_next_event
ffffffff81055410 t __run_hrtimer.isra.34
ffffffff81055500 t lock_hrtimer_base.isra.36
ffffffff81055560 T hrtimer_get_remaining
ffffffff810555b0 T hrtimer_try_to_cancel
ffffffff81055630 T hrtimer_cancel
ffffffff81055650 T clock_was_set_delayed
ffffffff81055680 T clock_was_set
ffffffff810556a0 T hrtimers_resume
ffffffff810556f0 T __hrtimer_start_range_ns
ffffffff810559a0 T hrtimer_start
ffffffff810559b0 T hrtimer_start_range_ns
ffffffff810559c0 T hrtimer_interrupt
ffffffff81055bf0 t __hrtimer_peek_ahead_timers.part.37
ffffffff81055c20 T hrtimer_peek_ahead_timers
ffffffff81055c70 t run_hrtimer_softirq
ffffffff81055cb0 T hrtimer_run_pending
ffffffff81055db0 T hrtimer_run_queues
ffffffff81055f00 T hrtimer_nanosleep
ffffffff81056050 T sys_nanosleep
ffffffff810560c0 T down_read_trylock
ffffffff810560e0 T down_write_trylock
ffffffff81056100 T up_read
ffffffff81056120 T up_write
ffffffff81056140 T downgrade_write
ffffffff81056160 t create_new_namespaces
ffffffff81056300 T free_nsproxy
ffffffff81056390 T copy_namespaces
ffffffff81056440 T unshare_nsproxy_namespaces
ffffffff810564d0 T switch_task_namespaces
ffffffff81056500 T exit_task_namespaces
ffffffff81056510 T sys_setns
ffffffff81056620 T __srcu_read_lock
ffffffff81056650 T __srcu_read_unlock
ffffffff81056670 T srcu_batches_completed
ffffffff81056680 T init_srcu_struct
ffffffff810566c0 t srcu_readers_active_idx.isra.0
ffffffff81056730 t __synchronize_srcu
ffffffff810567e0 T synchronize_srcu_expedited
ffffffff810567f0 T synchronize_srcu
ffffffff81056800 T cleanup_srcu_struct
ffffffff81056890 T down_trylock
ffffffff810568d0 T down
ffffffff81056910 T down_interruptible
ffffffff81056970 T down_killable
ffffffff810569d0 T down_timeout
ffffffff81056a30 T up
ffffffff81056a70 t notifier_call_chain
ffffffff81056ae0 T __atomic_notifier_call_chain
ffffffff81056af0 T atomic_notifier_call_chain
ffffffff81056b10 T raw_notifier_chain_register
ffffffff81056b50 T raw_notifier_chain_unregister
ffffffff81056b90 T __raw_notifier_call_chain
ffffffff81056ba0 T raw_notifier_call_chain
ffffffff81056bb0 T __srcu_notifier_call_chain
ffffffff81056c30 T srcu_notifier_call_chain
ffffffff81056c40 T __blocking_notifier_call_chain
ffffffff81056ca0 T blocking_notifier_call_chain
ffffffff81056cb0 T blocking_notifier_chain_cond_register
ffffffff81056d10 T atomic_notifier_chain_register
ffffffff81056d70 T register_die_notifier
ffffffff81056d90 T atomic_notifier_chain_unregister
ffffffff81056e00 T unregister_die_notifier
ffffffff81056e10 T srcu_init_notifier_head
ffffffff81056e40 T srcu_notifier_chain_register
ffffffff81056ee0 T srcu_notifier_chain_unregister
ffffffff81056fa0 T blocking_notifier_chain_unregister
ffffffff81057050 T blocking_notifier_chain_register
ffffffff810570f0 T notify_die
ffffffff81057130 t uevent_helper_store
ffffffff810571a0 t notes_read
ffffffff810571c0 t uevent_helper_show
ffffffff810571f0 t uevent_seqnum_show
ffffffff81057220 t fscaps_show
ffffffff81057250 T override_creds
ffffffff81057270 T set_security_override
ffffffff81057280 T set_security_override_from_ctx
ffffffff81057290 T set_create_files_as
ffffffff810572a0 T prepare_creds
ffffffff810573a0 t put_cred_rcu
ffffffff81057420 T __put_cred
ffffffff81057460 T revert_creds
ffffffff81057490 T abort_creds
ffffffff810574c0 T commit_creds
ffffffff810575e0 T exit_creds
ffffffff81057670 T get_task_cred
ffffffff810576b0 T prepare_kernel_cred
ffffffff81057730 T cred_alloc_blank
ffffffff81057760 T prepare_exec_creds
ffffffff81057790 T copy_creds
ffffffff81057860 t lowest_in_progress
ffffffff810578c0 T async_synchronize_cookie_domain
ffffffff810579e0 T async_synchronize_cookie
ffffffff810579f0 T async_synchronize_full
ffffffff81057a30 T async_synchronize_full_domain
ffffffff81057a40 t __async_schedule
ffffffff81057b80 T async_schedule_domain
ffffffff81057b90 T async_schedule
ffffffff81057ba0 t async_run_entry_fn
ffffffff81057d00 t cmp_range
ffffffff81057d10 T add_range
ffffffff81057d30 T add_range_with_merge
ffffffff81057db0 T subtract_range
ffffffff81057ee0 T clean_sort_range
ffffffff81057fe0 T sort_range
ffffffff81058000 T groups_free
ffffffff81058050 T set_groups
ffffffff810581d0 T set_current_groups
ffffffff81058240 T groups_alloc
ffffffff81058340 T groups_search
ffffffff810583a0 T in_egroup_p
ffffffff810583d0 T in_group_p
ffffffff81058400 T sys_getgroups
ffffffff81058500 T sys_setgroups
ffffffff81058620 T tg_nop
ffffffff81058630 T kick_process
ffffffff81058680 t __wake_up_common
ffffffff81058700 T __wake_up_locked
ffffffff81058710 T __wake_up_locked_key
ffffffff81058730 T task_nice
ffffffff81058740 t cpu_allnodes_mask
ffffffff81058750 t cpu_cpu_mask
ffffffff81058780 t sd_init_CPU
ffffffff81058820 t sd_init_NODE
ffffffff810588a0 t sd_init_MC
ffffffff81058940 t cpu_shares_read_u64
ffffffff81058950 t cpu_cfs_quota_read_s64
ffffffff81058990 t cpu_cfs_period_read_u64
ffffffff810589c0 t tg_cfs_schedulable_down
ffffffff81058aa0 t cpu_stats_show
ffffffff81058b00 t cpu_rt_runtime_read
ffffffff81058b30 t cpu_rt_period_read_uint
ffffffff81058b60 t cpuacct_populate
ffffffff81058b80 t cpu_cgroup_populate
ffffffff81058ba0 t cpuusage_read
ffffffff81058c10 t cpuusage_write
ffffffff81058c70 t cpuacct_stats_show
ffffffff81058d50 t cpuacct_percpu_seq_read
ffffffff81058dd0 t cpuacct_destroy
ffffffff81058e00 t free_sched_groups
ffffffff81058e70 t free_sched_domain
ffffffff81058ee0 t sd_degenerate
ffffffff81058f30 t sd_init_ALLNODES
ffffffff81058fd0 t cpuacct_create
ffffffff81059070 t cpu_shares_write_u64
ffffffff81059090 t tg_rt_schedulable
ffffffff81059270 t cpu_cgroup_can_attach
ffffffff810592f0 T completion_done
ffffffff81059330 T try_wait_for_completion
ffffffff81059390 T complete_all
ffffffff810593f0 T complete
ffffffff81059450 T __wake_up_sync_key
ffffffff810594d0 T __wake_up_sync
ffffffff810594e0 T __wake_up
ffffffff81059540 t task_rq_lock
ffffffff810595e0 t get_group
ffffffff81059660 t free_sched_group_rcu
ffffffff81059690 t free_rootdomain
ffffffff810596c0 t __hrtick_start
ffffffff81059710 t cpuset_cpu_active
ffffffff81059740 t cpuset_cpu_inactive
ffffffff81059770 t hotplug_hrtick
ffffffff810597e0 t sched_mc_power_savings_show
ffffffff81059800 t find_process_by_pid
ffffffff81059820 t finish_task_switch
ffffffff810598f0 t set_rq_online.part.49
ffffffff81059940 t set_rq_offline.part.50
ffffffff810599a0 t rq_attach_root
ffffffff81059ac0 t cpu_attach_domain
ffffffff81059cc0 t cpu_node_mask
ffffffff81059df0 T start_bandwidth_timer
ffffffff81059e50 T update_rq_clock
ffffffff81059ea0 t dequeue_task
ffffffff81059f30 t enqueue_task
ffffffff81059fa0 t hrtick
ffffffff8105a030 T hrtick_start
ffffffff8105a0d0 T resched_task
ffffffff8105a140 T set_user_nice
ffffffff8105a2c0 T resched_cpu
ffffffff8105a360 T sched_avg_update
ffffffff8105a3b0 T walk_tg_tree_from
ffffffff8105a470 t tg_set_cfs_bandwidth
ffffffff8105a680 t tg_set_rt_bandwidth
ffffffff8105a7a0 T activate_task
ffffffff8105a7c0 T deactivate_task
ffffffff8105a7e0 T task_curr
ffffffff8105a810 T check_preempt_curr
ffffffff8105a8b0 t ttwu_do_wakeup
ffffffff8105a940 t __cond_resched
ffffffff8105a980 T __cond_resched_lock
ffffffff8105a9d0 t ttwu_do_activate.constprop.69
ffffffff8105aa40 t sched_ttwu_pending
ffffffff8105aaa0 T set_task_cpu
ffffffff8105ab80 t __migrate_task
ffffffff8105ace0 t migration_cpu_stop
ffffffff8105ad10 T wait_task_inactive
ffffffff8105ae10 T scheduler_ipi
ffffffff8105ae50 T cpus_share_cache
ffffffff8105ae80 T sched_fork
ffffffff8105b0a0 T schedule_tail
ffffffff8105b140 T nr_running
ffffffff8105b1b0 T nr_uninterruptible
ffffffff8105b230 T nr_context_switches
ffffffff8105b290 T nr_iowait
ffffffff8105b300 T nr_iowait_cpu
ffffffff8105b320 T this_cpu_load
ffffffff8105b340 T get_avenrun
ffffffff8105b380 T calc_global_load
ffffffff8105b470 T update_cpu_load
ffffffff8105b5c0 T sched_exec
ffffffff8105b680 T task_delta_exec
ffffffff8105b710 T task_sched_runtime
ffffffff8105b7b0 T account_user_time
ffffffff8105b890 T account_system_time
ffffffff8105ba60 T account_steal_time
ffffffff8105ba80 T account_idle_time
ffffffff8105bac0 T account_process_tick
ffffffff8105bc70 T account_steal_ticks
ffffffff8105bc90 T account_idle_ticks
ffffffff8105bcd0 T task_times
ffffffff8105bd80 T thread_group_times
ffffffff8105be30 T scheduler_tick
ffffffff8105bf60 T get_parent_ip
ffffffff8105bf90 T mutex_spin_on_owner
ffffffff8105bff0 T rt_mutex_setprio
ffffffff8105c1f0 T can_nice
ffffffff8105c230 t __sched_setscheduler
ffffffff8105c730 T sched_setscheduler
ffffffff8105c740 t do_sched_setscheduler
ffffffff8105c7b0 T sys_nice
ffffffff8105c890 T task_prio
ffffffff8105c8a0 T idle_cpu
ffffffff8105c8f0 T idle_task
ffffffff8105c910 T sched_setscheduler_nocheck
ffffffff8105c920 T sched_set_stop_task
ffffffff8105c9b0 T sys_sched_setscheduler
ffffffff8105c9d0 T sys_sched_setparam
ffffffff8105c9f0 T sys_sched_getscheduler
ffffffff8105ca30 T sys_sched_getparam
ffffffff8105caa0 T sched_getaffinity
ffffffff8105cb20 T sys_sched_getaffinity
ffffffff8105cbc0 T sys_sched_yield
ffffffff8105cc20 T sys_sched_get_priority_max
ffffffff8105cc60 T sys_sched_get_priority_min
ffffffff8105cca0 T sys_sched_rr_get_interval
ffffffff8105cd60 T sched_show_task
ffffffff8105ce50 T show_state_filter
ffffffff8105cef0 T do_set_cpus_allowed
ffffffff8105cf40 t try_to_wake_up
ffffffff8105d1e0 T default_wake_function
ffffffff8105d1f0 T wake_up_process
ffffffff8105d210 T wake_up_state
ffffffff8105d220 T wake_up_new_task
ffffffff8105d350 T set_cpus_allowed_ptr
ffffffff8105d480 T sched_setaffinity
ffffffff8105d5b0 T sys_sched_setaffinity
ffffffff8105d610 T idle_task_exit
ffffffff8105d6d0 W arch_sd_sibling_asym_packing
ffffffff8105d6e0 T build_sched_domain
ffffffff8105d7c0 t build_sched_domains
ffffffff8105e120 W arch_update_cpu_topology
ffffffff8105e130 T alloc_sched_domains
ffffffff8105e150 T free_sched_domains
ffffffff8105e160 T partition_sched_domains
ffffffff8105e4d0 t sched_mc_power_savings_store
ffffffff8105e540 T in_sched_functions
ffffffff8105e580 T sched_create_group
ffffffff8105e6e0 t cpu_cgroup_create
ffffffff8105e720 T sched_destroy_group
ffffffff8105e7f0 t cpu_cgroup_destroy
ffffffff8105e800 T sched_move_task
ffffffff8105e940 t cpu_cgroup_exit
ffffffff8105e960 t cpu_cgroup_attach
ffffffff8105e9c0 T sched_group_set_rt_runtime
ffffffff8105e9f0 t cpu_rt_runtime_write
ffffffff8105ea10 T sched_group_rt_runtime
ffffffff8105ea40 T sched_group_set_rt_period
ffffffff8105ea70 t cpu_rt_period_write_uint
ffffffff8105ea90 T sched_group_rt_period
ffffffff8105eac0 T sched_rt_can_attach
ffffffff8105eae0 T sched_rt_handler
ffffffff8105ec70 T tg_set_cfs_quota
ffffffff8105eca0 t cpu_cfs_quota_write_s64
ffffffff8105ecc0 T tg_get_cfs_quota
ffffffff8105ecf0 T tg_set_cfs_period
ffffffff8105ed10 t cpu_cfs_period_write_u64
ffffffff8105ed30 T tg_get_cfs_period
ffffffff8105ed60 T cpuacct_charge
ffffffff8105ede0 t sched_clock_local
ffffffff8105ee60 T sched_clock_init
ffffffff8105eed0 T sched_clock_cpu
ffffffff8105efc0 T local_clock
ffffffff8105f000 T cpu_clock
ffffffff8105f030 T sched_clock_idle_sleep_event
ffffffff8105f040 T sched_clock_tick
ffffffff8105f0e0 T sched_clock_idle_wakeup_event
ffffffff8105f100 t select_task_rq_idle
ffffffff8105f110 t pick_next_task_idle
ffffffff8105f120 t put_prev_task_idle
ffffffff8105f130 t task_tick_idle
ffffffff8105f140 t set_curr_task_idle
ffffffff8105f150 t get_rr_interval_idle
ffffffff8105f160 t prio_changed_idle
ffffffff8105f170 t switched_to_idle
ffffffff8105f180 t check_preempt_curr_idle
ffffffff8105f190 t dequeue_task_idle
ffffffff8105f1e0 t update_cfs_load
ffffffff8105f390 t tg_throttle_down
ffffffff8105f3d0 t task_waking_fair
ffffffff8105f3f0 t tg_load_down
ffffffff8105f460 t task_move_group_fair
ffffffff8105f510 t calc_delta_mine
ffffffff8105f5b0 t set_next_entity
ffffffff8105f630 t update_sysctl
ffffffff8105f6a0 t rq_offline_fair
ffffffff8105f6b0 t rq_online_fair
ffffffff8105f6c0 t __enqueue_entity
ffffffff8105f730 t move_task
ffffffff8105f780 t clear_buddies
ffffffff8105f860 t effective_load.isra.33
ffffffff8105f8f0 t select_task_rq_fair
ffffffff81060190 t wakeup_preempt_entity.isra.37
ffffffff810601e0 t sched_slice.isra.45
ffffffff81060260 t get_rr_interval_fair
ffffffff810602a0 t place_entity
ffffffff81060340 t switched_from_fair
ffffffff810603a0 t can_migrate_task.part.47
ffffffff81060430 t active_load_balance_cpu_stop
ffffffff81060680 t prio_changed_fair
ffffffff810606c0 t switched_to_fair
ffffffff810606f0 t hrtick_start_fair
ffffffff810607e0 t hrtick_update
ffffffff81060840 t pick_next_task_fair
ffffffff81060980 T sched_init_granularity
ffffffff81060990 T __pick_first_entity
ffffffff810609b0 T account_cfs_bandwidth_used
ffffffff810609c0 T __refill_cfs_bandwidth_runtime
ffffffff810609f0 T init_cfs_bandwidth
ffffffff81060a70 T __start_cfs_bandwidth
ffffffff81060ae0 t __account_cfs_rq_runtime
ffffffff81060c60 t update_curr
ffffffff81060dc0 t task_fork_fair
ffffffff81060f50 t update_cfs_shares
ffffffff810610f0 t update_shares
ffffffff810611e0 t tg_unthrottle_up
ffffffff81061240 t yield_task_fair
ffffffff810612c0 t yield_to_task_fair
ffffffff81061330 t dequeue_entity
ffffffff810615a0 t throttle_cfs_rq
ffffffff810616d0 t dequeue_task_fair
ffffffff810617c0 t put_prev_task_fair
ffffffff81061850 t check_preempt_wakeup
ffffffff81061ab0 t task_tick_fair
ffffffff81061bd0 t set_curr_task_fair
ffffffff81061c30 t enqueue_entity
ffffffff81061e10 t enqueue_task_fair
ffffffff81061eb0 T unthrottle_cfs_rq
ffffffff81062010 t distribute_cfs_runtime
ffffffff810620e0 t sched_cfs_slack_timer
ffffffff810621d0 t sched_cfs_period_timer
ffffffff81062310 T unthrottle_offline_cfs_rqs
ffffffff81062390 T default_scale_freq_power
ffffffff810623b0 T default_scale_smt_power
ffffffff810623d0 T scale_rt_power
ffffffff81062440 T update_group_power
ffffffff81062590 t find_busiest_group
ffffffff81063170 t load_balance
ffffffff81063860 t run_rebalance_domains
ffffffff810639e0 T idle_balance
ffffffff81063b20 T update_max_interval
ffffffff81063b50 T trigger_load_balance
ffffffff81063b90 T init_cfs_rq
ffffffff81063bb0 T free_fair_sched_group
ffffffff81063c50 T unregister_fair_sched_group
ffffffff81063d00 T init_tg_cfs_entry
ffffffff81063da0 T alloc_fair_sched_group
ffffffff81063f00 T sched_group_set_shares
ffffffff81064010 t get_rr_interval_rt
ffffffff81064030 t pick_next_pushable_task
ffffffff81064090 t __enable_runtime
ffffffff81064170 t __disable_runtime
ffffffff81064360 t pull_rt_task
ffffffff81064660 t pre_schedule_rt
ffffffff81064680 t find_lowest_rq
ffffffff810647c0 t select_task_rq_rt
ffffffff81064830 t update_rt_migration
ffffffff810648f0 t rq_offline_rt
ffffffff81064950 t rq_online_rt
ffffffff810649c0 t dequeue_pushable_task
ffffffff81064a30 t set_curr_task_rt
ffffffff81064a50 t pick_next_task_rt
ffffffff81064b30 t enqueue_pushable_task
ffffffff81064bd0 t set_cpus_allowed_rt
ffffffff81064cd0 t requeue_task_rt.isra.25
ffffffff81064d60 t yield_task_rt
ffffffff81064d70 t dequeue_rt_stack
ffffffff81065050 t balance_runtime.part.30
ffffffff810651f0 t prio_changed_rt
ffffffff81065270 t switched_from_rt
ffffffff81065290 t check_preempt_curr_rt
ffffffff81065350 t push_rt_task.part.35
ffffffff810655a0 t post_schedule_rt
ffffffff810655c0 t switched_to_rt
ffffffff81065640 t task_woken_rt
ffffffff810656b0 t __enqueue_rt_entity
ffffffff81065920 t dequeue_rt_entity
ffffffff81065970 t update_curr_rt
ffffffff81065c20 t task_tick_rt
ffffffff81065ce0 t put_prev_task_rt
ffffffff81065d40 t dequeue_task_rt
ffffffff81065d90 t sched_rt_period_timer
ffffffff81066130 t enqueue_task_rt
ffffffff810661c0 T init_rt_bandwidth
ffffffff810661f0 T init_rt_rq
ffffffff81066290 T free_rt_sched_group
ffffffff81066320 T init_tg_rt_entry
ffffffff810663a0 T alloc_rt_sched_group
ffffffff81066530 T update_runtime
ffffffff81066600 T init_sched_rt_class
ffffffff81066660 t select_task_rq_stop
ffffffff81066670 t check_preempt_curr_stop
ffffffff81066680 t pick_next_task_stop
ffffffff810666a0 t enqueue_task_stop
ffffffff810666b0 t dequeue_task_stop
ffffffff810666c0 t put_prev_task_stop
ffffffff810666d0 t task_tick_stop
ffffffff810666e0 t set_curr_task_stop
ffffffff810666f0 t get_rr_interval_stop
ffffffff81066700 t prio_changed_stop
ffffffff81066710 t switched_to_stop
ffffffff81066720 t yield_task_stop
ffffffff81066730 T cpupri_find
ffffffff81066820 T cpupri_set
ffffffff810668b0 T cpupri_init
ffffffff810669b0 T cpupri_cleanup
ffffffff810669c0 T pm_qos_request
ffffffff810669e0 T pm_qos_request_active
ffffffff810669f0 t pm_qos_power_read
ffffffff81066ad0 T pm_qos_remove_notifier
ffffffff81066af0 T pm_qos_add_notifier
ffffffff81066b10 T pm_qos_read_value
ffffffff81066b20 T pm_qos_update_target
ffffffff81066cc0 T pm_qos_remove_request
ffffffff81066e00 t pm_qos_power_release
ffffffff81066e20 T pm_qos_update_request
ffffffff81066eb0 t pm_qos_power_write
ffffffff81066f80 t pm_qos_work_fn
ffffffff81066f90 T pm_qos_add_request
ffffffff81067060 t pm_qos_power_open
ffffffff810670e0 T pm_qos_update_request_timeout
ffffffff810671c0 t state_show
ffffffff810671d0 t wakeup_count_store
ffffffff81067230 t wakeup_count_show
ffffffff81067270 t pm_async_show
ffffffff810672a0 t pm_async_store
ffffffff810672f0 t state_store
ffffffff81067360 T unregister_pm_notifier
ffffffff81067370 T register_pm_notifier
ffffffff81067380 T pm_notifier_call_chain
ffffffff810673b0 T pm_prepare_console
ffffffff810673f0 T pm_restore_console
ffffffff81067420 t try_to_freeze_tasks
ffffffff81067710 T thaw_processes
ffffffff810677e0 T freeze_processes
ffffffff81067880 T thaw_kernel_threads
ffffffff81067930 T freeze_kernel_threads
ffffffff810679a0 T freezing_slow_path
ffffffff81067a00 T __refrigerator
ffffffff81067af0 T set_freezable
ffffffff81067b50 T freeze_task
ffffffff81067c10 T __thaw_task
ffffffff81067c50 t timekeeper_setup_internals
ffffffff81067cf0 t timekeeping_forward_now
ffffffff81067db0 T get_seconds
ffffffff81067dc0 T monotonic_to_bootbased
ffffffff81067e00 T getboottime
ffffffff81067e30 T getrawmonotonic
ffffffff81067ed0 T current_kernel_time
ffffffff81067f00 T ktime_get_monotonic_offset
ffffffff81067f50 t update_rt_offset
ffffffff81067fb0 t __timekeeping_inject_sleeptime
ffffffff810680b0 t timekeeping_update
ffffffff81068100 t change_clocksource
ffffffff810681b0 T get_monotonic_boottime
ffffffff810682c0 T ktime_get_boottime
ffffffff81068300 T ktime_get_ts
ffffffff810683e0 T ktime_get
ffffffff810684d0 T getnstimeofday
ffffffff81068590 T ktime_get_real
ffffffff810685d0 T do_gettimeofday
ffffffff81068620 T timekeeping_inject_offset
ffffffff81068710 T do_settimeofday
ffffffff810687f0 T timekeeping_notify
ffffffff81068820 T timekeeping_valid_for_hres
ffffffff81068850 T timekeeping_max_deferment
ffffffff81068890 t timekeeping_resume
ffffffff810689a0 t timekeeping_suspend
ffffffff81068ad0 W read_boot_clock
ffffffff81068ae0 T timekeeping_inject_sleeptime
ffffffff81068b90 T __current_kernel_time
ffffffff81068bb0 T get_monotonic_coarse
ffffffff81068c20 T do_timer
ffffffff81069150 T get_xtime_and_monotonic_and_sleep_offset
ffffffff810691b0 T ktime_get_update_offsets
ffffffff81069290 T xtime_update
ffffffff810692d0 t ntp_update_frequency
ffffffff81069340 t sync_cmos_clock
ffffffff81069410 T ntp_clear
ffffffff81069480 T ntp_tick_length
ffffffff810694b0 T second_overflow
ffffffff81069780 T do_adjtimex
ffffffff81069d30 T timecounter_init
ffffffff81069d70 T timecounter_read
ffffffff81069db0 t clocksource_enqueue
ffffffff81069e10 t sysfs_show_available_clocksources
ffffffff81069ee0 t sysfs_show_current_clocksources
ffffffff81069f30 t clocksource_max_deferment
ffffffff81069f80 t clocksource_enqueue_watchdog
ffffffff8106a0d0 t clocksource_watchdog_work
ffffffff8106a110 t clocksource_watchdog
ffffffff8106a360 T timecounter_cyc2time
ffffffff8106a3c0 t clocksource_select
ffffffff8106a490 t sysfs_override_clocksource
ffffffff8106a520 T clocksource_change_rating
ffffffff8106a5a0 t clocksource_watchdog_kthread
ffffffff8106a740 T clocksource_unregister
ffffffff8106a8e0 T clocksource_register
ffffffff8106a990 T clocks_calc_mult_shift
ffffffff8106a9f0 T __clocksource_updatefreq_scale
ffffffff8106aaf0 T __clocksource_register_scale
ffffffff8106ab30 T clocksource_mark_unstable
ffffffff8106abd0 T clocksource_suspend
ffffffff8106ac10 T clocksource_resume
ffffffff8106ac50 T clocksource_touch_watchdog
ffffffff8106ac60 t jiffies_read
ffffffff8106ac70 t timer_list_open
ffffffff8106ac90 t print_name_offset
ffffffff8106ad30 t print_tickdevice
ffffffff8106b0c0 t timer_list_show
ffffffff8106bbb0 T sysrq_timer_list_show
ffffffff8106bbc0 T timecompare_transform
ffffffff8106bbf0 T timecompare_offset
ffffffff8106bdb0 T __timecompare_update
ffffffff8106be70 T time_to_tm
ffffffff8106c210 t delete_clock
ffffffff8106c230 t put_clock_desc
ffffffff8106c250 t posix_clock_release
ffffffff8106c2c0 t posix_clock_open
ffffffff8106c380 T posix_clock_register
ffffffff8106c400 t get_posix_clock.isra.0
ffffffff8106c450 t get_clock_desc
ffffffff8106c4f0 t pc_timer_gettime
ffffffff8106c540 t pc_timer_delete
ffffffff8106c590 t pc_timer_settime
ffffffff8106c610 t pc_timer_create
ffffffff8106c660 t pc_clock_adjtime
ffffffff8106c6c0 t pc_clock_gettime
ffffffff8106c710 t pc_clock_settime
ffffffff8106c770 t pc_clock_getres
ffffffff8106c7c0 t posix_clock_fasync
ffffffff8106c850 t posix_clock_mmap
ffffffff8106c8c0 t posix_clock_compat_ioctl
ffffffff8106c940 t posix_clock_ioctl
ffffffff8106c9c0 t posix_clock_poll
ffffffff8106ca30 t posix_clock_read
ffffffff8106cac0 T posix_clock_unregister
ffffffff8106cb40 t alarmtimer_get_rtcdev
ffffffff8106cb70 t alarmtimer_freezerset
ffffffff8106cbc0 t alarmtimer_suspend
ffffffff8106cd50 t alarmtimer_rtc_add_device
ffffffff8106cdf0 t alarmtimer_fired
ffffffff8106cee0 t alarm_timer_get
ffffffff8106cf50 t alarm_clock_get
ffffffff8106cfb0 t alarm_timer_create
ffffffff8106d020 t update_rmtp
ffffffff8106d090 t alarmtimer_nsleep_wakeup
ffffffff8106d0c0 t alarm_clock_getres
ffffffff8106d130 t alarmtimer_remove
ffffffff8106d1b0 T alarm_init
ffffffff8106d1e0 T alarm_start
ffffffff8106d2b0 T alarm_try_to_cancel
ffffffff8106d340 t alarm_timer_del
ffffffff8106d370 t alarm_timer_set
ffffffff8106d440 T alarm_cancel
ffffffff8106d460 t alarmtimer_do_nsleep
ffffffff8106d500 t alarm_timer_nsleep
ffffffff8106d6b0 T alarm_forward
ffffffff8106d700 t alarm_handle_timer
ffffffff8106d780 T clockevents_notify
ffffffff8106d900 T clockevents_register_device
ffffffff8106da30 T clockevent_delta2ns
ffffffff8106daa0 t clockevents_program_min_delta
ffffffff8106dba0 t clockevents_config.part.0
ffffffff8106dc20 T clockevents_set_mode
ffffffff8106dc90 T clockevents_shutdown
ffffffff8106dcb0 T clockevents_program_event
ffffffff8106ddc0 T clockevents_register_notifier
ffffffff8106de20 T clockevents_config_and_register
ffffffff8106de40 T clockevents_update_freq
ffffffff8106de80 T clockevents_handle_noop
ffffffff8106de90 T clockevents_exchange_device
ffffffff8106df40 t tick_periodic
ffffffff8106dfb0 T tick_get_device
ffffffff8106dfd0 T tick_is_oneshot_available
ffffffff8106e010 T tick_handle_periodic
ffffffff8106e070 T tick_setup_periodic
ffffffff8106e100 t tick_notify
ffffffff8106e510 t tick_broadcast_clear_oneshot
ffffffff8106e520 t tick_broadcast_set_event
ffffffff8106e580 t tick_broadcast_start_periodic
ffffffff8106e5a0 t tick_do_broadcast.constprop.1
ffffffff8106e620 t tick_handle_oneshot_broadcast
ffffffff8106e700 t tick_do_periodic_broadcast
ffffffff8106e740 t tick_handle_periodic_broadcast
ffffffff8106e790 T tick_get_broadcast_device
ffffffff8106e7a0 T tick_get_broadcast_mask
ffffffff8106e7b0 T tick_check_broadcast_device
ffffffff8106e820 T tick_is_broadcast_device
ffffffff8106e840 T tick_device_uses_broadcast
ffffffff8106e8e0 T tick_set_periodic_handler
ffffffff8106e900 T tick_shutdown_broadcast
ffffffff8106e970 T tick_suspend_broadcast
ffffffff8106e9b0 T tick_resume_broadcast
ffffffff8106ea70 T tick_get_broadcast_oneshot_mask
ffffffff8106ea80 T tick_resume_broadcast_oneshot
ffffffff8106eaa0 T tick_check_oneshot_broadcast
ffffffff8106ead0 T tick_broadcast_oneshot_control
ffffffff8106ec10 T tick_broadcast_setup_oneshot
ffffffff8106ed10 T tick_broadcast_on_off
ffffffff8106ef30 T tick_broadcast_switch_to_oneshot
ffffffff8106ef70 T tick_shutdown_broadcast_oneshot
ffffffff8106efa0 T tick_broadcast_oneshot_active
ffffffff8106efb0 T tick_broadcast_oneshot_available
ffffffff8106efd0 T tick_program_event
ffffffff8106eff0 T tick_resume_oneshot
ffffffff8106f020 T tick_setup_oneshot
ffffffff8106f060 T tick_switch_to_oneshot
ffffffff8106f110 T tick_oneshot_mode_active
ffffffff8106f140 T tick_init_highres
ffffffff8106f150 t tick_sched_timer
ffffffff8106f280 T tick_get_tick_sched
ffffffff8106f2a0 T tick_check_idle
ffffffff8106f2b0 T tick_setup_sched_timer
ffffffff8106f380 T tick_cancel_sched_timer
ffffffff8106f3b0 T tick_clock_notify
ffffffff8106f410 T tick_oneshot_notify
ffffffff8106f430 T tick_check_oneshot_change
ffffffff8106f490 t hash_futex
ffffffff8106f550 t get_futex_value_locked
ffffffff8106f580 t cmpxchg_futex_value_locked
ffffffff8106f5d0 t futex_wait_queue_me
ffffffff8106f6e0 t __unqueue_futex
ffffffff8106f740 t wake_futex
ffffffff8106f7b0 t fault_in_user_writeable
ffffffff8106f830 t lookup_pi_state
ffffffff8106fa80 t futex_lock_pi_atomic
ffffffff8106fbe0 t get_futex_key_refs.isra.11
ffffffff8106fc20 t get_futex_key
ffffffff8106fea0 t drop_futex_key_refs.isra.13
ffffffff8106ff10 t futex_wait_setup
ffffffff81070010 t futex_wait
ffffffff81070250 t futex_wait_restart
ffffffff81070290 t futex_wake
ffffffff810703b0 t fixup_pi_state_owner.isra.15
ffffffff81070570 t fixup_owner
ffffffff81070680 t free_pi_state
ffffffff81070720 t futex_requeue
ffffffff81070f60 t unqueue_me_pi
ffffffff81070f90 t futex_wait_requeue_pi
ffffffff81071370 t futex_lock_pi.isra.19
ffffffff81071670 T exit_pi_state_list
ffffffff810717f0 T sys_set_robust_list
ffffffff81071830 T sys_get_robust_list
ffffffff81071910 T handle_futex_death
ffffffff810719d0 T exit_robust_list
ffffffff81071b40 T do_futex
ffffffff810725f0 T sys_futex
ffffffff810727b0 T compat_exit_robust_list
ffffffff81072910 T compat_sys_set_robust_list
ffffffff81072950 T compat_sys_get_robust_list
ffffffff81072a40 T compat_sys_futex
ffffffff81072bf0 T __rt_mutex_init
ffffffff81072c10 t __rt_mutex_adjust_prio
ffffffff81072c40 t rt_mutex_adjust_prio_chain
ffffffff81073020 t task_blocks_on_rt_mutex
ffffffff81073210 t try_to_take_rt_mutex
ffffffff81073360 T rt_mutex_destroy
ffffffff81073380 T rt_mutex_timed_lock
ffffffff810733b0 T rt_mutex_getprio
ffffffff810733e0 T rt_mutex_adjust_pi
ffffffff81073460 T rt_mutex_init_proxy_locked
ffffffff81073480 T rt_mutex_proxy_unlock
ffffffff810734a0 T rt_mutex_start_proxy_lock
ffffffff81073550 T rt_mutex_next_owner
ffffffff81073570 T rt_mutex_finish_proxy_lock
ffffffff81073640 T request_dma
ffffffff81073670 t proc_dma_open
ffffffff81073690 t proc_dma_show
ffffffff810736f0 T free_dma
ffffffff81073730 t hotplug_cfd
ffffffff81073770 t csd_unlock.isra.4
ffffffff810737a0 t generic_exec_single
ffffffff81073850 T smp_call_function_single
ffffffff810739a0 T smp_call_function_many
ffffffff81073c20 T smp_call_function
ffffffff81073c50 T on_each_cpu
ffffffff81073cc0 T smp_call_function_any
ffffffff81073dc0 T on_each_cpu_mask
ffffffff81073e30 T on_each_cpu_cond
ffffffff81073ec0 T generic_smp_call_function_interrupt
ffffffff81074040 T generic_smp_call_function_single_interrupt
ffffffff81074150 T __smp_call_function_single
ffffffff81074220 T ipi_call_lock
ffffffff81074230 T ipi_call_unlock
ffffffff81074240 T ipi_call_lock_irq
ffffffff81074250 T ipi_call_unlock_irq
ffffffff81074280 T in_lock_functions
ffffffff810742a0 T sys_chown16
ffffffff810742c0 T sys_lchown16
ffffffff810742e0 T sys_fchown16
ffffffff81074300 T sys_setregid16
ffffffff81074320 T sys_setgid16
ffffffff81074340 T sys_setreuid16
ffffffff81074360 T sys_setuid16
ffffffff81074380 T sys_setresuid16
ffffffff810743b0 T sys_getresuid16
ffffffff81074430 T sys_setresgid16
ffffffff81074460 T sys_getresgid16
ffffffff810744e0 T sys_setfsuid16
ffffffff81074500 T sys_setfsgid16
ffffffff81074520 T sys_getgroups16
ffffffff810745b0 T sys_setgroups16
ffffffff810746c0 T sys_getuid16
ffffffff810746f0 T sys_geteuid16
ffffffff81074720 T sys_getgid16
ffffffff81074750 T sys_getegid16
ffffffff81074780 t modinfo_version_exists
ffffffff81074790 t modinfo_srcversion_exists
ffffffff810747a0 t show_taint
ffffffff81074810 t module_flags
ffffffff810748f0 T __module_address
ffffffff81074990 T __module_text_address
ffffffff810749f0 t modules_open
ffffffff81074a00 t m_next
ffffffff81074a10 t m_stop
ffffffff81074a20 t m_start
ffffffff81074a40 t store_uevent
ffffffff81074a90 t get_ksymbol
ffffffff81074cf0 t mod_find_symname
ffffffff81074d60 t cmp_name
ffffffff81074d70 t find_sec
ffffffff81074df0 t section_addr
ffffffff81074e10 t section_objs
ffffffff81074e70 T find_module
ffffffff81074ee0 t find_symbol_in_section
ffffffff81074fc0 t __unlink_module
ffffffff81075000 t free_modinfo_srcversion
ffffffff81075020 t free_modinfo_version
ffffffff81075040 t module_notes_read
ffffffff81075060 t show_initsize
ffffffff81075090 t show_coresize
ffffffff810750c0 t show_initstate
ffffffff81075100 t show_modinfo_srcversion
ffffffff81075130 t show_modinfo_version
ffffffff81075160 t module_sect_show
ffffffff81075190 t setup_modinfo_srcversion
ffffffff810751b0 t setup_modinfo_version
ffffffff810751d0 T module_refcount
ffffffff81075280 t __try_stop_module
ffffffff810752c0 t m_show
ffffffff81075440 t show_refcnt
ffffffff81075470 t free_notes_attrs
ffffffff810754d0 T try_module_get
ffffffff81075500 T __module_get
ffffffff81075520 T unregister_module_notifier
ffffffff81075530 T register_module_notifier
ffffffff81075540 t each_symbol_section.part.12
ffffffff81075690 T each_symbol_section
ffffffff81075710 T find_symbol
ffffffff81075770 T __symbol_get
ffffffff810757c0 t verify_export_symbols
ffffffff810758b0 t get_modinfo.isra.34
ffffffff81075970 T module_put
ffffffff810759a0 t module_unload_free
ffffffff81075a80 T __module_put_and_exit
ffffffff81075aa0 T ref_module
ffffffff81075ba0 t resolve_symbol.isra.41
ffffffff81075c50 t simplify_symbols
ffffffff81075f80 T symbol_put_addr
ffffffff81075fb0 T __symbol_put
ffffffff81075fe0 T is_module_percpu_address
ffffffff810760a0 W module_free
ffffffff810760c0 t free_module
ffffffff810762b0 T sys_delete_module
ffffffff810764e0 W apply_relocate
ffffffff81076520 W arch_mod_section_prepend
ffffffff81076530 t get_offset.isra.46
ffffffff810765c0 t module_alloc_update_bounds
ffffffff81076640 W module_frob_arch_sections
ffffffff81076660 T sys_init_module
ffffffff81078180 T module_address_lookup
ffffffff81078250 T lookup_module_symbol_name
ffffffff81078310 T lookup_module_symbol_attrs
ffffffff81078410 T module_get_kallsym
ffffffff81078570 T module_kallsyms_lookup_name
ffffffff81078620 T module_kallsyms_on_each_symbol
ffffffff810786c0 T search_module_extables
ffffffff81078750 T is_module_address
ffffffff81078760 T is_module_text_address
ffffffff81078770 T print_modules
ffffffff81078800 t kallsyms_expand_symbol
ffffffff81078880 t s_stop
ffffffff81078890 t s_show
ffffffff81078930 t update_iter
ffffffff81078a30 t s_next
ffffffff81078a60 t s_start
ffffffff81078a80 t kallsyms_open
ffffffff81078b00 T kallsyms_on_each_symbol
ffffffff81078b80 T kallsyms_lookup_name
ffffffff81078c00 t is_ksym_addr
ffffffff81078c50 t get_symbol_pos
ffffffff81078d40 T kallsyms_lookup_size_offset
ffffffff81078db0 T kallsyms_lookup
ffffffff81078ea0 t __sprint_symbol
ffffffff81078f70 T sprint_symbol
ffffffff81078f80 T __print_symbol
ffffffff81078fb0 T lookup_symbol_name
ffffffff81079040 T lookup_symbol_attrs
ffffffff81079130 T sprint_backtrace
ffffffff81079140 t encode_comp_t
ffffffff81079190 t check_free_space
ffffffff81079350 t do_acct_process
ffffffff810796a0 t acct_file_reopen
ffffffff810797e0 T sys_acct
ffffffff81079a00 T acct_auto_close_mnt
ffffffff81079a70 T acct_auto_close
ffffffff81079ae0 T acct_exit_ns
ffffffff81079b30 T acct_collect
ffffffff81079ce0 T acct_process
ffffffff81079d80 T sigset_from_compat
ffffffff81079d90 T compat_alloc_user_space
ffffffff81079e10 t compat_put_timex
ffffffff81079f80 T put_compat_timespec
ffffffff81079fe0 T compat_put_timespec
ffffffff81079ff0 T get_compat_timespec
ffffffff8107a050 T compat_get_timespec
ffffffff8107a060 T put_compat_timeval
ffffffff8107a0c0 T compat_put_timeval
ffffffff8107a0d0 T get_compat_timeval
ffffffff8107a130 T compat_get_timeval
ffffffff8107a140 t compat_get_timex
ffffffff8107a380 t compat_clock_nanosleep_restart
ffffffff8107a440 t compat_nanosleep_restart
ffffffff8107a4d0 T compat_sys_gettimeofday
ffffffff8107a570 T compat_sys_settimeofday
ffffffff8107a600 T compat_sys_nanosleep
ffffffff8107a700 T compat_sys_getitimer
ffffffff8107a790 T compat_sys_setitimer
ffffffff8107a8d0 T compat_sys_times
ffffffff8107a9e0 T compat_sys_sigpending
ffffffff8107aa50 T compat_sys_sigprocmask
ffffffff8107aaf0 T compat_sys_setrlimit
ffffffff8107aba0 T compat_sys_old_getrlimit
ffffffff8107ac80 T compat_sys_getrlimit
ffffffff8107ad20 T put_compat_rusage
ffffffff8107ae60 T compat_sys_getrusage
ffffffff8107af00 T compat_sys_wait4
ffffffff8107b000 T compat_sys_waitid
ffffffff8107b120 T get_compat_itimerspec
ffffffff8107b160 T put_compat_itimerspec
ffffffff8107b1a0 T compat_sys_timer_settime
ffffffff8107b280 T compat_sys_timer_gettime
ffffffff8107b310 T compat_sys_clock_settime
ffffffff8107b380 T compat_sys_clock_gettime
ffffffff8107b410 T compat_sys_clock_adjtime
ffffffff8107b4d0 T compat_sys_clock_getres
ffffffff8107b550 T compat_sys_clock_nanosleep
ffffffff8107b640 T get_compat_sigevent
ffffffff8107b750 T compat_sys_timer_create
ffffffff8107b7f0 T compat_get_bitmap
ffffffff8107b8a0 T compat_sys_sched_setaffinity
ffffffff8107b8f0 T compat_put_bitmap
ffffffff8107b990 T compat_sys_sched_getaffinity
ffffffff8107ba30 T compat_sys_rt_sigtimedwait
ffffffff8107bb20 T compat_sys_rt_tgsigqueueinfo
ffffffff8107bb90 T compat_sys_time
ffffffff8107bbe0 T compat_sys_stime
ffffffff8107bc30 T compat_sys_adjtimex
ffffffff8107bca0 T compat_sys_move_pages
ffffffff8107bd60 T compat_sys_migrate_pages
ffffffff8107bed0 T compat_sys_sysinfo
ffffffff8107c060 T cgroup_lock_is_held
ffffffff8107c070 t css_set_hash
ffffffff8107c0e0 t cgroup_delete
ffffffff8107c0f0 T cgroup_taskset_cur_cgroup
ffffffff8107c100 T cgroup_taskset_size
ffffffff8107c120 t cgroup_seqfile_show
ffffffff8107c170 t cgroup_file_release
ffffffff8107c190 t started_after
ffffffff8107c1d0 t cmppid
ffffffff8107c1e0 t cgroup_pidlist_next
ffffffff8107c210 t cgroup_read_notify_on_release
ffffffff8107c220 t cgroup_clone_children_read
ffffffff8107c230 T css_id
ffffffff8107c250 T css_depth
ffffffff8107c270 t cgroup_test_super
ffffffff8107c2e0 t cgroup_open
ffffffff8107c300 t cgroupstats_open
ffffffff8107c320 t cgroup_lock_hierarchy
ffffffff8107c380 T cgroup_lock
ffffffff8107c390 t cgroup_map_add
ffffffff8107c3b0 t cgroup_pidlist_show
ffffffff8107c3c0 t cgroup_unlock_hierarchy
ffffffff8107c420 t proc_cgroupstats_show
ffffffff8107c4b0 t cgroup_show_options
ffffffff8107c5c0 T cgroup_unlock
ffffffff8107c5d0 T cgroup_lock_live_group
ffffffff8107c600 t cgroup_release_agent_show
ffffffff8107c660 t cgroup_seqfile_release
ffffffff8107c690 t free_cg_links
ffffffff8107c6f0 t allocate_cg_links
ffffffff8107c760 t task_cgroup_from_root
ffffffff8107c7f0 t cgroup_rename
ffffffff8107c830 t cgroup_event_wake
ffffffff8107c8e0 t cgroup_clear_directory
ffffffff8107c9f0 t get_new_cssid
ffffffff8107cb10 t cgroup_file_read
ffffffff8107cbe0 t cgroup_new_inode
ffffffff8107cc60 t cgroup_release_agent_write
ffffffff8107ccd0 t cgroup_write_event_control
ffffffff8107d010 t cgroup_event_remove
ffffffff8107d070 t cgroup_event_ptable_queue_proc
ffffffff8107d080 T cgroup_taskset_next
ffffffff8107d0c0 T cgroup_unload_subsys
ffffffff8107d220 t cgroup_pidlist_stop
ffffffff8107d230 t cgroup_pidlist_start
ffffffff8107d2f0 t pidlist_free
ffffffff8107d320 t drop_parsed_module_refcounts
ffffffff8107d370 t rebind_subsystems
ffffffff8107d6d0 t parse_cgroupfs_options
ffffffff8107db00 t cgroup_diput
ffffffff8107dbe0 t cgroup_init_idr
ffffffff8107dc40 t init_root_id
ffffffff8107dcf0 T cgroup_path
ffffffff8107ddd0 t proc_cgroup_show
ffffffff8107dfa0 t cgroup_release_agent
ffffffff8107e140 t cgroup_lookup
ffffffff8107e170 t cgroup_write_notify_on_release
ffffffff8107e190 t cgroup_clone_children_write
ffffffff8107e1b0 t init_cgroup_css.isra.11
ffffffff8107e200 t link_css_set
ffffffff8107e260 t find_css_set
ffffffff8107e4b0 T free_css_id
ffffffff8107e550 t cgroup_file_open
ffffffff8107e640 t cgroup_file_write
ffffffff8107e8f0 t cgroup_create_file
ffffffff8107e9d0 T cgroup_add_file
ffffffff8107ebc0 T cgroup_add_files
ffffffff8107ec20 t cgroup_populate_dir
ffffffff8107ed30 t cgroup_mkdir
ffffffff8107f140 t cgroup_remount
ffffffff8107f270 T cgroup_taskset_first
ffffffff8107f2a0 t cgroup_wakeup_rmdir_waiter
ffffffff8107f2d0 t cgroup_release_pid_array
ffffffff8107f3a0 t cgroup_pidlist_release
ffffffff8107f3e0 T css_lookup
ffffffff8107f410 t cgroup_drop_root
ffffffff8107f460 t cgroup_kill_sb
ffffffff8107f5f0 t cgroup_mount
ffffffff8107fb50 t cgroup_set_super
ffffffff8107fbd0 T cgroup_load_subsys
ffffffff8107fec0 t check_for_release
ffffffff8107ffc0 t cgroup_rmdir
ffffffff810804c0 t __put_css_set
ffffffff81080690 t cgroup_task_migrate.isra.21
ffffffff810807a0 T __css_put
ffffffff81080810 T cgroup_is_removed
ffffffff81080820 T cgroup_exclude_rmdir
ffffffff81080830 T cgroup_release_and_wakeup_rmdir
ffffffff81080860 T cgroup_attach_task
ffffffff81080a30 t attach_task_by_pid
ffffffff81080f10 t cgroup_procs_write
ffffffff81080f20 t cgroup_tasks_write
ffffffff81080f30 T cgroup_attach_task_all
ffffffff81080fb0 T cgroup_task_count
ffffffff81081000 T cgroup_iter_start
ffffffff81081170 T cgroup_iter_next
ffffffff810811e0 t cgroup_pidlist_open
ffffffff810815f0 t cgroup_procs_open
ffffffff81081600 t cgroup_tasks_open
ffffffff81081610 T cgroup_iter_end
ffffffff81081620 T cgroup_scan_tasks
ffffffff810817f0 T cgroupstats_build
ffffffff810818c0 T cgroup_fork
ffffffff810818f0 T cgroup_fork_callbacks
ffffffff81081930 T cgroup_post_fork
ffffffff810819a0 T cgroup_exit
ffffffff81081ae0 T cgroup_is_descendant
ffffffff81081b40 T css_is_ancestor
ffffffff81081b80 T css_get_next
ffffffff81081c30 T cgroup_css_from_dir
ffffffff81081c80 t freezer_populate
ffffffff81081cb0 t is_task_frozen_enough
ffffffff81081ce0 t freezer_fork
ffffffff81081d60 t freezer_destroy
ffffffff81081d80 t freezer_create
ffffffff81081dc0 t update_if_frozen.isra.2
ffffffff81081e70 t freezer_read
ffffffff81081f30 t freezer_write
ffffffff81082100 t freezer_can_attach
ffffffff81082180 T cgroup_freezing
ffffffff810821a0 t update_domain_attr_tree
ffffffff81082270 t cpuset_test_cpumask
ffffffff81082290 t cpuset_change_flag
ffffffff810822e0 t cpuset_post_clone
ffffffff81082380 t cpuset_populate
ffffffff810823f0 t async_rebuild_sched_domains
ffffffff81082410 t cpuset_write_s64
ffffffff810824b0 t generate_sched_domains
ffffffff81082880 t do_rebuild_sched_domains
ffffffff810828c0 t cpuset_create
ffffffff81082980 t is_cpuset_subset
ffffffff81082a10 t fmeter_update
ffffffff81082a90 t cpuset_read_u64
ffffffff81082bf0 t cpuset_change_cpumask
ffffffff81082c00 t guarantee_online_mems
ffffffff81082cb0 t cpuset_migrate_mm
ffffffff81082d10 t cpuset_common_file_read
ffffffff81082e30 t cpuset_spread_node
ffffffff81082eb0 T cpuset_mem_spread_node
ffffffff81082f00 t cpuset_do_move_task
ffffffff81082f10 t cpuset_mount
ffffffff81082ff0 t cpuset_read_s64
ffffffff81083010 t guarantee_online_cpus
ffffffff81083070 t cpuset_can_attach
ffffffff81083150 t validate_change
ffffffff810832b0 t update_flag
ffffffff81083460 t cpuset_write_u64
ffffffff810835a0 t cpuset_destroy
ffffffff810835d0 t cpuset_write_resmask
ffffffff81083930 t cpuset_change_task_nodemask
ffffffff81083ac0 t cpuset_change_nodemask
ffffffff81083b80 t cpuset_attach
ffffffff81083d50 t scan_for_empty_cpusets.constprop.12
ffffffff81084000 T rebuild_sched_domains
ffffffff81084010 T current_cpuset_is_being_rebound
ffffffff81084040 T cpuset_update_active_cpus
ffffffff810840b0 T cpuset_cpus_allowed
ffffffff81084120 T cpuset_cpus_allowed_fallback
ffffffff81084150 T cpuset_init_current_mems_allowed
ffffffff81084180 T cpuset_mems_allowed
ffffffff81084220 T cpuset_nodemask_valid_mems_allowed
ffffffff81084240 T __cpuset_node_allowed_softwall
ffffffff81084340 T __cpuset_node_allowed_hardwall
ffffffff81084390 T cpuset_unlock
ffffffff810843a0 T cpuset_slab_spread_node
ffffffff810843f0 T cpuset_mems_allowed_intersects
ffffffff81084410 T cpuset_print_task_mems_allowed
ffffffff810844b0 T __cpuset_memory_pressure_bump
ffffffff81084550 T cpuset_task_status_allowed
ffffffff810845d0 t utsns_get
ffffffff81084620 T free_uts_ns
ffffffff81084630 t utsns_install
ffffffff81084690 t utsns_put
ffffffff810846b0 T copy_utsname
ffffffff81084870 t pid_ns_ctl_handler
ffffffff81084930 T free_pid_ns
ffffffff81084990 T copy_pid_ns
ffffffff81084ca0 T zap_pid_ns_processes
ffffffff81084d70 T reboot_pid_ns
ffffffff81084e00 t ikconfig_read_current
ffffffff81084e20 T res_counter_init
ffffffff81084e40 T res_counter_charge_locked
ffffffff81084e70 T res_counter_charge_nofail
ffffffff81084f60 T res_counter_uncharge_locked
ffffffff81084f90 T res_counter_charge
ffffffff81085080 T res_counter_uncharge
ffffffff810850f0 T res_counter_read
ffffffff810851b0 T res_counter_read_u64
ffffffff810851f0 T res_counter_memparse_write_strategy
ffffffff81085280 T res_counter_write
ffffffff81085340 t cpu_stop_signal_done
ffffffff81085370 t cpu_stopper_thread
ffffffff810854e0 t cpu_stop_init_done
ffffffff81085600 t cpu_stop_queue_work
ffffffff81085680 t queue_stop_cpus_work
ffffffff81085740 t stop_machine_cpu_stop
ffffffff81085820 t __stop_cpus
ffffffff81085880 T stop_one_cpu
ffffffff81085900 T stop_one_cpu_nowait
ffffffff81085940 T stop_cpus
ffffffff81085990 T try_stop_cpus
ffffffff81085a00 T __stop_machine
ffffffff81085af0 T stop_machine
ffffffff81085b30 T stop_machine_from_inactive_cpu
ffffffff81085c20 t audit_send_reply_thread
ffffffff81085c60 t audit_hold_skb
ffffffff81085c90 t audit_buffer_free
ffffffff81085d20 T audit_panic
ffffffff81085d80 T audit_log_lost
ffffffff81085e40 t audit_log_vformat
ffffffff81086010 T audit_log_format
ffffffff81086060 t kauditd_send_skb
ffffffff810860d0 t audit_printk_skb
ffffffff81086140 t kauditd_thread
ffffffff810862c0 T audit_log_end
ffffffff810863d0 T audit_send_list
ffffffff81086450 T audit_make_reply
ffffffff81086540 t audit_send_reply.constprop.21
ffffffff81086670 T audit_serial
ffffffff810866b0 T audit_log_start
ffffffff81086b30 T audit_log
ffffffff81086b90 t audit_log_common_recv_msg.part.16.constprop.18
ffffffff81086bf0 t audit_log_config_change.constprop.19
ffffffff81086ca0 t audit_do_config_change.constprop.20
ffffffff81086d10 T audit_log_n_hex
ffffffff81086e80 T audit_log_n_string
ffffffff81087010 T audit_string_contains_control
ffffffff81087070 T audit_log_n_untrustedstring
ffffffff810870c0 T audit_log_untrustedstring
ffffffff810870f0 t audit_receive
ffffffff81087bb0 T audit_log_d_path
ffffffff81087c90 T audit_log_key
ffffffff81087cf0 T audit_free_rule_rcu
ffffffff81087d70 t audit_match_signal
ffffffff81087e90 t audit_rule_to_entry
ffffffff810882a0 t audit_compare_rule
ffffffff81088470 t audit_find_rule
ffffffff81088570 t audit_log_rule_change.isra.11.part.12
ffffffff810886a0 T audit_unpack_string
ffffffff81088740 t audit_data_to_entry
ffffffff81088c50 T audit_match_class
ffffffff81088c90 T audit_dupe_rule
ffffffff81088eb0 T audit_receive_filter
ffffffff81089980 T audit_comparator
ffffffff810899e0 T audit_compare_dname_path
ffffffff81089ab0 T audit_filter_user
ffffffff81089bd0 T audit_filter_type
ffffffff81089c80 T audit_update_lsm_rules
ffffffff81089d00 T audit_log_task_context
ffffffff81089d10 t audit_log_cap
ffffffff81089d70 t audit_log_abend
ffffffff81089e40 t audit_copy_inode
ffffffff81089ec0 t audit_alloc_name
ffffffff8108a000 t unroll_tree_refs
ffffffff8108a100 T __audit_inode_child
ffffffff8108a390 t audit_log_pid_context.isra.10
ffffffff8108a450 t audit_compare_id.isra.12
ffffffff8108a4c0 t audit_filter_rules.isra.14
ffffffff8108b130 t audit_filter_syscall
ffffffff8108b210 t audit_log_exit
ffffffff8108c310 T audit_filter_inodes
ffffffff8108c410 T audit_alloc
ffffffff8108c5d0 T __audit_free
ffffffff8108c870 T __audit_syscall_entry
ffffffff8108cbb0 T __audit_syscall_exit
ffffffff8108cfd0 T __audit_getname
ffffffff8108d080 T audit_putname
ffffffff8108d0c0 T __audit_inode
ffffffff8108d2c0 T auditsc_get_stamp
ffffffff8108d340 T audit_set_loginuid
ffffffff8108d430 T __audit_mq_open
ffffffff8108d570 T __audit_mq_sendrecv
ffffffff8108d5e0 T __audit_mq_notify
ffffffff8108d620 T __audit_mq_getsetattr
ffffffff8108d6a0 T __audit_ipc_obj
ffffffff8108d6f0 T __audit_ipc_set_perm
ffffffff8108d730 T __audit_bprm
ffffffff8108d7b0 T __audit_socketcall
ffffffff8108d7f0 T __audit_fd_pair
ffffffff8108d810 T __audit_sockaddr
ffffffff8108d8a0 T __audit_ptrace
ffffffff8108d910 T __audit_signal_info
ffffffff8108db20 T __audit_log_bprm_fcaps
ffffffff8108dc50 T __audit_log_capset
ffffffff8108dca0 T __audit_mmap_fd
ffffffff8108dcd0 T audit_core_dumps
ffffffff8108dd40 T __audit_seccomp
ffffffff8108ddb0 T audit_killed_trees
ffffffff8108dde0 t audit_watch_should_send_event
ffffffff8108ddf0 t audit_init_watch
ffffffff8108de40 t audit_get_nd
ffffffff8108df80 t audit_free_parent
ffffffff8108dfb0 t audit_watch_free_mark
ffffffff8108dfc0 t audit_watch_log_rule_change.isra.2.part.3
ffffffff8108e0a0 T audit_get_watch
ffffffff8108e0b0 T audit_put_watch
ffffffff8108e110 t audit_remove_watch
ffffffff8108e170 t audit_update_watch
ffffffff8108e4c0 t audit_watch_handle_event
ffffffff8108e710 T audit_watch_path
ffffffff8108e720 T audit_watch_compare
ffffffff8108e750 T audit_to_watch
ffffffff8108e7d0 T audit_add_watch
ffffffff8108ea20 T audit_remove_watch_rule
ffffffff8108eab0 t compare_root
ffffffff8108eac0 t audit_tree_send_event
ffffffff8108ead0 t kill_rules
ffffffff8108ec60 t audit_tree_destroy_watch
ffffffff8108ec80 t alloc_chunk
ffffffff8108ed20 t free_chunk
ffffffff8108ed90 t untag_chunk
ffffffff8108f1e0 t audit_tree_handle_event
ffffffff8108f1f0 t audit_schedule_prune
ffffffff8108f230 t audit_tree_freeing_mark
ffffffff8108f420 t tag_mount
ffffffff8108f950 t prune_one
ffffffff8108f9c0 t prune_tree_thread
ffffffff8108fa40 t trim_marked
ffffffff8108fb50 T audit_tree_path
ffffffff8108fb60 T audit_put_chunk
ffffffff8108fb80 t __put_chunk
ffffffff8108fb90 T audit_tree_lookup
ffffffff8108fc00 T audit_tree_match
ffffffff8108fc50 T audit_remove_tree_rule
ffffffff8108fd60 T audit_trim_trees
ffffffff8108ff60 T audit_make_tree
ffffffff81090080 T audit_put_tree
ffffffff810900a0 T audit_add_tree_rule
ffffffff81090300 T audit_tag_tree
ffffffff810906d0 T audit_kill_trees
ffffffff81090750 t desc_set_defaults
ffffffff81090820 t alloc_desc
ffffffff810908d0 T irq_to_desc
ffffffff810908e0 t free_desc
ffffffff81090950 T generic_handle_irq
ffffffff81090980 T irq_free_descs
ffffffff81090a20 T irq_reserve_irqs
ffffffff81090ac0 T irq_get_next_irq
ffffffff81090ae0 T __irq_get_desc_lock
ffffffff81090b80 T __irq_put_desc_unlock
ffffffff81090be0 T irq_set_percpu_devid
ffffffff81090c50 T dynamic_irq_cleanup
ffffffff81090cc0 T kstat_irqs_cpu
ffffffff81090cf0 T kstat_irqs
ffffffff81090d60 T handle_bad_irq
ffffffff81090fa0 T no_action
ffffffff81090fb0 T handle_irq_event_percpu
ffffffff81091100 T handle_irq_event
ffffffff81091180 t irq_default_primary_handler
ffffffff81091190 t set_irq_wake_real
ffffffff810911d0 t irq_nested_primary_handler
ffffffff81091200 t __free_percpu_irq
ffffffff81091360 T irq_set_affinity_hint
ffffffff810913b0 T irq_set_irq_wake
ffffffff810914d0 T irq_set_affinity_notifier
ffffffff810915c0 t irq_affinity_notify
ffffffff81091680 T synchronize_irq
ffffffff81091750 t __free_irq
ffffffff81091900 T free_irq
ffffffff810919e0 T remove_irq
ffffffff81091a50 t wake_threads_waitq
ffffffff81091aa0 t irq_thread
ffffffff81091c20 t irq_finalize_oneshot.part.30
ffffffff81091d30 t irq_thread_fn
ffffffff81091d80 t irq_forced_thread_fn
ffffffff81091de0 T irq_can_set_affinity
ffffffff81091e20 T irq_set_thread_affinity
ffffffff81091e50 t setup_affinity
ffffffff81091f30 T __irq_set_affinity_locked
ffffffff81092020 T irq_set_affinity
ffffffff810920a0 T irq_select_affinity_usr
ffffffff81092120 T __disable_irq
ffffffff81092160 t __disable_irq_nosync
ffffffff810921c0 T disable_irq
ffffffff810921e0 T disable_irq_nosync
ffffffff810921f0 T __enable_irq
ffffffff810922a0 T enable_irq
ffffffff81092320 T can_request_irq
ffffffff81092380 T __irq_set_trigger
ffffffff810924b0 t __setup_irq
ffffffff810928f0 T request_threaded_irq
ffffffff81092a80 T request_any_context_irq
ffffffff81092b30 T setup_irq
ffffffff81092bd0 T exit_irq_thread
ffffffff81092c80 T enable_percpu_irq
ffffffff81092d30 T disable_percpu_irq
ffffffff81092d90 T remove_percpu_irq
ffffffff81092df0 T free_percpu_irq
ffffffff81092e80 T setup_percpu_irq
ffffffff81092f10 T request_percpu_irq
ffffffff81093020 T noirqdebug_setup
ffffffff81093050 t try_one_irq
ffffffff81093160 t poll_spurious_irqs
ffffffff81093200 t __report_bad_irq
ffffffff810932d0 T irq_wait_for_poll
ffffffff81093370 T note_interrupt
ffffffff810935a0 T check_irq_resend
ffffffff810935e0 T irq_get_irq_data
ffffffff810935f0 T irq_set_handler_data
ffffffff81093630 T irq_set_chip_data
ffffffff81093670 T irq_modify_status
ffffffff81093710 t irq_check_poll
ffffffff81093730 T handle_simple_irq
ffffffff810937b0 T handle_edge_irq
ffffffff810938c0 T handle_nested_irq
ffffffff810939a0 T irq_set_irq_type
ffffffff81093a20 T irq_set_chip
ffffffff81093a80 t cond_unmask_irq
ffffffff81093ac0 T handle_level_irq
ffffffff81093b70 T irq_set_msi_desc
ffffffff81093bc0 T irq_shutdown
ffffffff81093c10 T irq_enable
ffffffff81093c50 T irq_startup
ffffffff81093cc0 T __irq_set_handler
ffffffff81093e00 T irq_disable
ffffffff81093e30 T irq_percpu_enable
ffffffff81093e80 T irq_percpu_disable
ffffffff81093ed0 T mask_irq
ffffffff81093ef0 T unmask_irq
ffffffff81093f10 T handle_fasteoi_irq
ffffffff81094000 T handle_percpu_irq
ffffffff81094070 T handle_percpu_devid_irq
ffffffff81094120 T irq_set_chip_and_handler_name
ffffffff81094160 T irq_cpu_online
ffffffff81094200 T irq_cpu_offline
ffffffff810942a0 t noop
ffffffff810942b0 t noop_ret
ffffffff810942c0 t ack_bad
ffffffff810944e0 t devm_irq_match
ffffffff81094500 T devm_free_irq
ffffffff81094570 t devm_irq_release
ffffffff81094580 T devm_request_threaded_irq
ffffffff81094650 T probe_irq_off
ffffffff81094710 T probe_irq_mask
ffffffff810947d0 T probe_irq_on
ffffffff810949c0 t irq_node_proc_show
ffffffff810949f0 t default_affinity_open
ffffffff81094a10 t irq_spurious_proc_open
ffffffff81094a30 t irq_node_proc_open
ffffffff81094a50 t irq_affinity_list_proc_open
ffffffff81094a70 t irq_affinity_hint_proc_open
ffffffff81094a90 t irq_affinity_proc_open
ffffffff81094ab0 t default_affinity_show
ffffffff81094ae0 t irq_affinity_hint_proc_show
ffffffff81094b70 t default_affinity_write
ffffffff81094bd0 t irq_spurious_proc_show
ffffffff81094c30 t show_irq_affinity.isra.4
ffffffff81094c90 t irq_affinity_list_proc_show
ffffffff81094ca0 t irq_affinity_proc_show
ffffffff81094cb0 t write_irq_affinity.isra.5
ffffffff81094d80 t irq_affinity_list_proc_write
ffffffff81094da0 t irq_affinity_proc_write
ffffffff81094dc0 T register_handler_proc
ffffffff81094ef0 T register_irq_proc
ffffffff81095020 T unregister_irq_proc
ffffffff810950e0 T unregister_handler_proc
ffffffff81095110 T init_irq_proc
ffffffff810951a0 T show_interrupts
ffffffff81095490 T irq_move_masked_irq
ffffffff81095580 T irq_move_irq
ffffffff810955c0 t resume_irqs
ffffffff81095660 t irq_pm_syscore_resume
ffffffff81095670 T resume_device_irqs
ffffffff81095680 T suspend_device_irqs
ffffffff81095740 T check_wakeup_irqs
ffffffff810957c0 T rcu_note_context_switch
ffffffff810957e0 T rcu_batches_completed_sched
ffffffff810957f0 T rcu_batches_completed_bh
ffffffff81095800 T rcutorture_record_test_transition
ffffffff81095820 T rcutorture_record_progress
ffffffff81095830 t dyntick_save_progress_counter
ffffffff81095850 t rcu_panic
ffffffff81095860 t synchronize_sched_expedited_cpu_stop
ffffffff81095870 t rcu_barrier_func
ffffffff810958a0 T rcu_batches_completed
ffffffff810958b0 t rcu_barrier_callback
ffffffff810958d0 T synchronize_rcu_bh
ffffffff81095900 T synchronize_sched
ffffffff81095930 T synchronize_sched_expedited
ffffffff81095a50 T synchronize_rcu_expedited
ffffffff81095a60 t rcu_start_gp
ffffffff81095d50 t rcu_report_qs_rnp
ffffffff81095f00 t force_qs_rnp
ffffffff81096060 t rcu_implicit_dynticks_qs
ffffffff810960e0 t rcu_process_gp_end.isra.32
ffffffff810961b0 t check_for_new_grace_period.isra.34
ffffffff81096290 t force_quiescent_state
ffffffff81096470 t __call_rcu
ffffffff81096630 T kfree_call_rcu
ffffffff81096650 T call_rcu_bh
ffffffff81096660 T call_rcu_sched
ffffffff81096670 T rcu_sched_force_quiescent_state
ffffffff81096680 T rcu_force_quiescent_state
ffffffff81096690 T rcu_bh_force_quiescent_state
ffffffff810966a0 t __rcu_pending
ffffffff81096b50 t rcu_report_qs_rdp.isra.36
ffffffff81096c00 t __rcu_process_callbacks
ffffffff81096fc0 t rcu_process_callbacks
ffffffff81097000 t _rcu_barrier.isra.40
ffffffff810970b0 T rcu_barrier_sched
ffffffff810970c0 T rcu_barrier
ffffffff810970d0 T rcu_barrier_bh
ffffffff810970e0 t rcu_idle_exit_common.isra.43
ffffffff81097190 T rcu_idle_exit
ffffffff81097250 t rcu_idle_enter_common.isra.45
ffffffff81097310 T rcu_idle_enter
ffffffff810973d0 T rcu_sched_qs
ffffffff810973f0 T rcu_bh_qs
ffffffff81097410 T rcu_irq_exit
ffffffff810974b0 T rcu_irq_enter
ffffffff81097550 T rcu_nmi_enter
ffffffff810975b0 T rcu_nmi_exit
ffffffff81097610 T rcu_is_cpu_rrupt_from_idle
ffffffff81097630 T rcu_cpu_stall_reset
ffffffff81097660 T rcu_check_callbacks
ffffffff81097730 T rcu_scheduler_starting
ffffffff81097790 T rcu_needs_cpu
ffffffff810977d0 t proc_do_uts_string
ffffffff810978f0 T uts_proc_notify
ffffffff81097910 t delayacct_end
ffffffff810979b0 T __delayacct_tsk_init
ffffffff810979e0 T delayacct_init
ffffffff81097a40 T __delayacct_blkio_start
ffffffff81097a60 T __delayacct_blkio_end
ffffffff81097ab0 T __delayacct_add_tsk
ffffffff81097c30 T __delayacct_blkio_ticks
ffffffff81097c90 T __delayacct_freepages_start
ffffffff81097cb0 T __delayacct_freepages_end
ffffffff81097ce0 t prepare_reply
ffffffff81097db0 t send_reply
ffffffff81097e10 t cgroupstats_user_cmd
ffffffff81097f10 t parse
ffffffff81097fd0 t add_del_listener
ffffffff810981e0 t mk_reply
ffffffff810982a0 t fill_stats
ffffffff810983b0 t taskstats_user_cmd
ffffffff810987a0 T taskstats_exit
ffffffff81098bf0 T bacct_add_tsk
ffffffff81098da0 T xacct_add_tsk
ffffffff81098ef0 T acct_update_integrals
ffffffff81098fd0 T acct_clear_integrals
ffffffff81099000 W elf_core_extra_phdrs
ffffffff81099010 W elf_core_write_extra_phdrs
ffffffff81099020 W elf_core_write_extra_data
ffffffff81099030 W elf_core_extra_data_size
ffffffff81099040 T irq_work_sync
ffffffff81099090 T irq_work_run
ffffffff81099130 T irq_work_queue
ffffffff810991c0 t perf_ctx_unlock
ffffffff810991f0 t update_event_times
ffffffff81099270 t update_group_times
ffffffff810992a0 t perf_event__header_size
ffffffff81099330 t __perf_event_mark_enabled
ffffffff81099390 t perf_mmap_open
ffffffff810993b0 T perf_register_guest_info_callbacks
ffffffff810993c0 T perf_unregister_guest_info_callbacks
ffffffff810993d0 T perf_swevent_get_recursion_context
ffffffff81099440 t perf_swevent_read
ffffffff81099450 t perf_swevent_del
ffffffff81099480 t perf_swevent_start
ffffffff81099490 t perf_swevent_stop
ffffffff810994a0 t perf_swevent_event_idx
ffffffff810994b0 t perf_pmu_nop_void
ffffffff810994c0 t perf_pmu_nop_int
ffffffff810994d0 t perf_pmu_start_txn
ffffffff81099500 t perf_pmu_commit_txn
ffffffff81099530 t perf_pmu_cancel_txn
ffffffff81099560 t perf_event_idx_default
ffffffff81099570 t type_show
ffffffff810995a0 t pmu_dev_alloc
ffffffff81099660 t pmu_dev_release
ffffffff81099670 t cpu_function_call
ffffffff810996b0 t perf_event_exit_cpu
ffffffff810997b0 t perf_swevent_add
ffffffff810998e0 t perf_event_for_each_child
ffffffff81099980 t perf_ctx_lock
ffffffff810999b0 t perf_reboot
ffffffff81099a00 t task_clock_event_read
ffffffff81099a40 t cpu_clock_event_read
ffffffff81099a60 t task_clock_event_stop
ffffffff81099ad0 t task_clock_event_del
ffffffff81099ae0 t cpu_clock_event_stop
ffffffff81099b40 t cpu_clock_event_del
ffffffff81099b50 t alloc_perf_context
ffffffff81099be0 t perf_poll
ffffffff81099cf0 t perf_lock_task_context
ffffffff81099da0 t perf_unpin_context
ffffffff81099de0 t free_event_rcu
ffffffff81099e20 t ring_buffer_detach
ffffffff81099ee0 t rb_free_rcu
ffffffff81099ef0 t perf_event_pid
ffffffff81099f10 t perf_event_tid
ffffffff81099f40 t __perf_event_header__init_id
ffffffff8109a030 t task_function_call
ffffffff8109a080 T perf_event_enable
ffffffff8109a1b0 T perf_event_refresh
ffffffff8109a1e0 T perf_event_disable
ffffffff8109a2a0 t perf_output_read
ffffffff8109a620 t perf_fasync
ffffffff8109a6a0 t perf_mmap_fault
ffffffff8109a780 t perf_fget_light
ffffffff8109a7e0 t put_ctx
ffffffff8109a850 t update_context_time.isra.27
ffffffff8109a890 t __perf_event_read
ffffffff8109a940 t perf_event_read
ffffffff8109aa00 T perf_event_read_value
ffffffff8109aad0 t list_del_event.part.30
ffffffff8109ab50 t perf_remove_from_context
ffffffff8109ac10 t perf_group_detach.part.31
ffffffff8109acf0 t event_sched_out.isra.33
ffffffff8109ae20 t __perf_remove_from_context
ffffffff8109aef0 t __perf_event_exit_context
ffffffff8109afb0 t group_sched_out
ffffffff8109b080 t __perf_event_disable
ffffffff8109b160 t ctx_sched_out
ffffffff8109b2a0 t task_ctx_sched_out
ffffffff8109b300 t perf_adjust_period
ffffffff8109b4c0 t swevent_hlist_get_cpu.isra.51
ffffffff8109b560 t swevent_hlist_put_cpu.isra.52
ffffffff8109b5d0 t sw_perf_event_destroy
ffffffff8109b650 t perf_pmu_rotate_start.isra.56
ffffffff8109b6f0 t add_event_to_ctx
ffffffff8109b890 t perf_install_in_context
ffffffff8109b940 t get_ctx
ffffffff8109b990 t find_get_context
ffffffff8109bb70 t perf_swevent_start_hrtimer.part.60
ffffffff8109bbd0 t task_clock_event_start
ffffffff8109bc00 t task_clock_event_add
ffffffff8109bc20 t cpu_clock_event_start
ffffffff8109bc50 t cpu_clock_event_add
ffffffff8109bc70 t perf_swevent_init_hrtimer
ffffffff8109bce0 t task_clock_event_init
ffffffff8109bd20 t cpu_clock_event_init
ffffffff8109bd60 t perf_swevent_init
ffffffff8109bea0 t perf_event_read_group.isra.63
ffffffff8109c020 t perf_read
ffffffff8109c170 t ring_buffer_put
ffffffff8109c240 t free_event
ffffffff8109c350 t perf_free_event
ffffffff8109c450 T perf_event_release_kernel
ffffffff8109c510 t perf_release
ffffffff8109c5d0 t perf_event_set_output
ffffffff8109c730 t perf_ioctl
ffffffff8109c9e0 t perf_mmap_close
ffffffff8109cad0 t remote_function
ffffffff8109cb10 T perf_proc_update_handler
ffffffff8109cb60 W perf_pmu_name
ffffffff8109cb70 T perf_cgroup_switch
ffffffff8109cb80 T perf_pmu_disable
ffffffff8109cbb0 T perf_pmu_enable
ffffffff8109cbe0 T perf_event_task_enable
ffffffff8109cc60 T perf_event_task_disable
ffffffff8109ccf0 T perf_event_update_userpage
ffffffff8109ce00 t perf_mmap
ffffffff8109d0e0 t perf_event_reset
ffffffff8109d100 T __perf_event_task_sched_out
ffffffff8109d3a0 T perf_event_wakeup
ffffffff8109d440 t perf_pending_event
ffffffff8109d4b0 T perf_event_header__init_id
ffffffff8109d4d0 T perf_event__output_id_sample
ffffffff8109d5b0 t perf_event_task_output
ffffffff8109d6d0 t perf_event_task_ctx
ffffffff8109d760 t perf_event_task
ffffffff8109d8b0 t perf_event_read_event
ffffffff8109d980 t __perf_event_exit_task
ffffffff8109db00 t perf_log_throttle
ffffffff8109dbe0 t event_sched_in.isra.67
ffffffff8109dd20 t group_sched_in
ffffffff8109ded0 t ctx_sched_in.isra.68
ffffffff8109e060 t perf_event_sched_in.isra.71
ffffffff8109e0f0 t __perf_install_in_context
ffffffff8109e240 t perf_event_context_sched_in.isra.72
ffffffff8109e320 T __perf_event_task_sched_in
ffffffff8109e4b0 t __perf_event_enable
ffffffff8109e650 t perf_adjust_freq_unthr_context.part.73
ffffffff8109e800 T perf_event_task_tick
ffffffff8109ead0 t perf_event_mmap_output
ffffffff8109ec20 t perf_event_mmap_ctx
ffffffff8109ecc0 t perf_event_comm_output
ffffffff8109ee00 t perf_event_comm_ctx
ffffffff8109ee90 T perf_output_sample
ffffffff8109f250 T perf_prepare_sample
ffffffff8109f3b0 t __perf_event_overflow
ffffffff8109f5d0 t perf_swevent_overflow
ffffffff8109f680 t perf_swevent_event
ffffffff8109f700 T perf_event_fork
ffffffff8109f710 T perf_event_comm
ffffffff8109f970 T perf_event_mmap
ffffffff8109fc80 T perf_event_overflow
ffffffff8109fc90 t perf_swevent_hrtimer
ffffffff8109fdb0 T perf_swevent_put_recursion_context
ffffffff8109fdd0 T __perf_sw_event
ffffffff8109ff60 T perf_bp_event
ffffffff8109ffe0 T perf_pmu_register
ffffffff810a0350 T perf_pmu_unregister
ffffffff810a0480 T perf_init_event
ffffffff810a0550 t perf_event_alloc
ffffffff810a0980 t inherit_event.isra.75
ffffffff810a0b60 t inherit_task_group.isra.77.part.78
ffffffff810a0c30 T perf_event_create_kernel_counter
ffffffff810a0d20 T sys_perf_event_open
ffffffff810a15b0 T perf_event_exit_task
ffffffff810a1790 T perf_event_free_task
ffffffff810a1870 T perf_event_delayed_put
ffffffff810a18d0 T perf_event_init_context
ffffffff810a1b00 T perf_event_init_task
ffffffff810a1b70 t perf_mmap_free_page
ffffffff810a1bb0 t perf_mmap_alloc_page
ffffffff810a1c50 t perf_output_put_handle
ffffffff810a1cd0 T perf_output_copy
ffffffff810a1d60 T perf_output_begin
ffffffff810a1f20 T perf_output_end
ffffffff810a1f30 T perf_mmap_to_page
ffffffff810a1f80 T rb_alloc
ffffffff810a20a0 T rb_free
ffffffff810a20f0 t release_callchain_buffers_rcu
ffffffff810a2160 T get_callchain_buffers
ffffffff810a2300 T put_callchain_buffers
ffffffff810a2350 T perf_callchain
ffffffff810a2480 t hw_breakpoint_start
ffffffff810a2490 t hw_breakpoint_stop
ffffffff810a24a0 t hw_breakpoint_event_idx
ffffffff810a24b0 t hw_breakpoint_del
ffffffff810a24c0 t hw_breakpoint_add
ffffffff810a24e0 T register_user_hw_breakpoint
ffffffff810a2500 T unregister_hw_breakpoint
ffffffff810a2520 T unregister_wide_hw_breakpoint
ffffffff810a2590 T register_wide_hw_breakpoint
ffffffff810a26d0 t validate_hw_breakpoint.part.4
ffffffff810a2700 T modify_user_hw_breakpoint
ffffffff810a2800 W hw_breakpoint_weight
ffffffff810a2810 t task_bp_pinned.isra.5.constprop.6
ffffffff810a2870 t toggle_bp_task_slot.constprop.8
ffffffff810a28f0 t toggle_bp_slot.constprop.9
ffffffff810a2a50 t __reserve_bp_slot
ffffffff810a2c80 t __release_bp_slot
ffffffff810a2ca0 W arch_unregister_hw_breakpoint
ffffffff810a2cb0 T reserve_bp_slot
ffffffff810a2cf0 T release_bp_slot
ffffffff810a2d20 t bp_perf_event_destroy
ffffffff810a2d30 T dbg_reserve_bp_slot
ffffffff810a2d50 T dbg_release_bp_slot
ffffffff810a2d70 T register_perf_hw_breakpoint
ffffffff810a2dd0 t hw_breakpoint_event_init
ffffffff810a2e10 t page_waitqueue
ffffffff810a2ea0 T iov_iter_single_seg_count
ffffffff810a2ed0 T generic_write_checks
ffffffff810a3090 T pagecache_write_begin
ffffffff810a30a0 T iov_iter_fault_in_readable
ffffffff810a3100 T generic_segment_checks
ffffffff810a31b0 T iov_iter_copy_from_user_atomic
ffffffff810a3250 T pagecache_write_end
ffffffff810a32d0 T file_read_actor
ffffffff810a3430 T iov_iter_copy_from_user
ffffffff810a34a0 T should_remove_suid
ffffffff810a3510 T file_remove_suid
ffffffff810a35b0 T generic_file_mmap
ffffffff810a3610 T generic_file_readonly_mmap
ffffffff810a3630 T find_get_pages_tag
ffffffff810a37a0 T find_get_pages_contig
ffffffff810a3900 T find_get_page
ffffffff810a3990 T __lock_page_killable
ffffffff810a3a00 T __lock_page
ffffffff810a3a70 t sleep_on_page
ffffffff810a3a80 t sleep_on_page_killable
ffffffff810a3ac0 T unlock_page
ffffffff810a3ae0 T find_lock_page
ffffffff810a3b60 T add_page_wait_queue
ffffffff810a3bc0 T wait_on_page_bit
ffffffff810a3c40 t wait_on_page_read
ffffffff810a3ca0 T __page_cache_alloc
ffffffff810a3da0 T filemap_fdatawait_range
ffffffff810a3f10 T filemap_fdatawait
ffffffff810a3f40 T iov_iter_advance
ffffffff810a3fd0 T generic_file_buffered_write
ffffffff810a4270 T try_to_release_page
ffffffff810a42b0 T end_page_writeback
ffffffff810a4300 T add_to_page_cache_locked
ffffffff810a43f0 T add_to_page_cache_lru
ffffffff810a4430 T grab_cache_page_write_begin
ffffffff810a4520 t do_read_cache_page
ffffffff810a4690 T read_cache_page_gfp
ffffffff810a46c0 T read_cache_page_async
ffffffff810a46d0 T read_cache_page
ffffffff810a46f0 T grab_cache_page_nowait
ffffffff810a4780 T find_or_create_page
ffffffff810a4830 T __delete_from_page_cache
ffffffff810a4990 T replace_page_cache_page
ffffffff810a4ac0 T delete_from_page_cache
ffffffff810a4b40 T __filemap_fdatawrite_range
ffffffff810a4bd0 T filemap_fdatawrite
ffffffff810a4bf0 T filemap_write_and_wait
ffffffff810a4c40 T filemap_flush
ffffffff810a4c60 T filemap_write_and_wait_range
ffffffff810a4ce0 T generic_file_direct_write
ffffffff810a4e80 T __generic_file_aio_write
ffffffff810a5290 T generic_file_aio_write
ffffffff810a5370 T generic_file_aio_read
ffffffff810a5a60 T filemap_fdatawrite_range
ffffffff810a5a70 T wait_on_page_bit_killable
ffffffff810a5af0 T __lock_page_or_retry
ffffffff810a5bb0 T filemap_fault
ffffffff810a6020 T find_get_pages
ffffffff810a6170 T sys_readahead
ffffffff810a6220 T mempool_free_pages
ffffffff810a6230 T mempool_alloc_pages
ffffffff810a6240 T mempool_kfree
ffffffff810a6250 T mempool_kmalloc
ffffffff810a6260 T mempool_alloc_slab
ffffffff810a6270 T mempool_free_slab
ffffffff810a6280 t add_element
ffffffff810a62a0 T mempool_free
ffffffff810a6360 t remove_element.isra.3
ffffffff810a6380 T mempool_destroy
ffffffff810a63d0 T mempool_create_node
ffffffff810a64c0 T mempool_create
ffffffff810a64d0 T mempool_alloc
ffffffff810a6620 T mempool_resize
ffffffff810a67b0 T unregister_oom_notifier
ffffffff810a67c0 T register_oom_notifier
ffffffff810a67d0 t oom_unkillable_task.isra.5
ffffffff810a6880 T compare_swap_oom_score_adj
ffffffff810a6900 T test_set_oom_score_adj
ffffffff810a6970 T find_lock_task_mm
ffffffff810a69f0 t dump_header.isra.8
ffffffff810a6b80 T oom_badness
ffffffff810a6cc0 t oom_kill_process.part.10.constprop.11
ffffffff810a6f40 T try_set_zonelist_oom
ffffffff810a7040 T clear_zonelist_oom
ffffffff810a70b0 T out_of_memory
ffffffff810a7610 T pagefault_out_of_memory
ffffffff810a7730 T sys_fadvise64_64
ffffffff810a7980 T sys_fadvise64
ffffffff810a7990 T __probe_kernel_read
ffffffff810a7990 W probe_kernel_read
ffffffff810a7a00 T __probe_kernel_write
ffffffff810a7a00 W probe_kernel_write
ffffffff810a7a70 t move_freepages_block
ffffffff810a7bd0 t zlc_zone_worth_trying
ffffffff810a7c10 t calculate_totalreserve_pages
ffffffff810a7ca0 t setup_per_zone_lowmem_reserve
ffffffff810a7d60 t setup_pageset
ffffffff810a7e50 T si_meminfo
ffffffff810a7ea0 t nr_free_zone_pages
ffffffff810a7f20 T nr_free_buffer_pages
ffffffff810a7f30 T __get_free_pages
ffffffff810a7f80 T get_zeroed_page
ffffffff810a7f90 t __parse_numa_zonelist_order
ffffffff810a8010 t zone_batchsize.isra.54
ffffffff810a8070 t build_zonelists_node.constprop.65
ffffffff810a80e0 T pm_restore_gfp_mask
ffffffff810a8120 T pm_restrict_gfp_mask
ffffffff810a8180 T pm_suspended_storage
ffffffff810a81a0 T prep_compound_page
ffffffff810a8210 T drain_all_pages
ffffffff810a82d0 T split_page
ffffffff810a8330 T zone_watermark_ok
ffffffff810a83f0 T zone_watermark_ok_safe
ffffffff810a8550 T warn_alloc_failed
ffffffff810a86b0 T nr_free_pagecache_pages
ffffffff810a86c0 T si_meminfo_node
ffffffff810a8740 T skip_free_areas_node
ffffffff810a8780 T show_free_areas
ffffffff810a8f40 T numa_zonelist_order_handler
ffffffff810a9040 T zone_pcp_update
ffffffff810a9060 T sysctl_min_unmapped_ratio_sysctl_handler
ffffffff810a90d0 T sysctl_min_slab_ratio_sysctl_handler
ffffffff810a9140 T lowmem_reserve_ratio_sysctl_handler
ffffffff810a9160 T percpu_pagelist_fraction_sysctl_handler
ffffffff810a9240 T get_pageblock_flags_group
ffffffff810a92d0 t __count_immobile_pages
ffffffff810a93f0 T set_pageblock_flags_group
ffffffff810a9480 t set_pageblock_migratetype
ffffffff810a94a0 T setup_per_zone_wmarks
ffffffff810a9750 T min_free_kbytes_sysctl_handler
ffffffff810a9770 t __rmqueue
ffffffff810a9b60 T split_free_page
ffffffff810a9cd0 T is_pageblock_removable_nolock
ffffffff810a9d60 T set_migratetype_isolate
ffffffff810a9e20 T unset_migratetype_isolate
ffffffff810a9ed0 T is_free_buddy_page
ffffffff810a9f80 T dump_page
ffffffff810aa050 t bad_page
ffffffff810aa150 t free_pages_prepare
ffffffff810aa200 t get_page_from_freelist
ffffffff810aaab0 t destroy_compound_page
ffffffff810aab60 t free_pcppages_bulk
ffffffff810aaf00 t drain_pages
ffffffff810aaf90 t page_alloc_cpu_notify
ffffffff810aafd0 T drain_local_pages
ffffffff810aafe0 T __alloc_pages_nodemask
ffffffff810ab980 t __zone_pcp_update
ffffffff810aba20 T drain_zone_pages
ffffffff810aba80 t free_one_page
ffffffff810abde0 t __free_pages_ok
ffffffff810abec0 t free_compound_page
ffffffff810abee0 T free_hot_cold_page
ffffffff810ac080 T __free_pages
ffffffff810ac0b0 T free_pages
ffffffff810ac110 T free_pages_exact
ffffffff810ac150 t make_alloc_exact
ffffffff810ac200 T alloc_pages_exact_nid
ffffffff810ac2c0 T alloc_pages_exact
ffffffff810ac310 T free_hot_cold_page_list
ffffffff810ac360 T mapping_tagged
ffffffff810ac370 T account_page_redirty
ffffffff810ac400 T account_page_writeback
ffffffff810ac410 T test_set_page_writeback
ffffffff810ac5a0 T bdi_writeout_inc
ffffffff810ac610 T tag_pages_for_writeback
ffffffff810ac6c0 T bdi_set_max_ratio
ffffffff810ac750 t __writepage
ffffffff810ac780 T set_page_dirty
ffffffff810ac7e0 T clear_page_dirty_for_io
ffffffff810ac8c0 T write_one_page
ffffffff810aca10 T write_cache_pages
ffffffff810acdd0 T generic_writepages
ffffffff810ace30 T set_page_dirty_lock
ffffffff810ace70 t bdi_position_ratio.isra.17
ffffffff810acf80 T account_page_dirtied
ffffffff810ad040 T __set_page_dirty_nobuffers
ffffffff810ad1a0 T redirty_page_for_writepage
ffffffff810ad1c0 T global_dirtyable_memory
ffffffff810ad1f0 t calc_period_shift
ffffffff810ad240 T global_dirty_limits
ffffffff810ad360 T zone_dirty_ok
ffffffff810ad450 T dirty_background_ratio_handler
ffffffff810ad470 T dirty_background_bytes_handler
ffffffff810ad490 T bdi_set_min_ratio
ffffffff810ad510 T bdi_dirty_limit
ffffffff810ad5c0 T __bdi_update_bandwidth
ffffffff810ad910 T balance_dirty_pages_ratelimited_nr
ffffffff810adfe0 T set_page_dirty_balance
ffffffff810ae050 T throttle_vm_writeout
ffffffff810ae0f0 T dirty_writeback_centisecs_handler
ffffffff810ae110 T laptop_mode_timer_fn
ffffffff810ae190 T laptop_io_completion
ffffffff810ae1b0 T laptop_sync_completion
ffffffff810ae200 T writeback_set_ratelimit
ffffffff810ae250 t update_completion_period
ffffffff810ae270 T dirty_bytes_handler
ffffffff810ae2d0 T dirty_ratio_handler
ffffffff810ae330 T do_writepages
ffffffff810ae360 T __set_page_dirty_no_writeback
ffffffff810ae380 T test_clear_page_writeback
ffffffff810ae4f0 T file_ra_state_init
ffffffff810ae510 t __do_page_cache_readahead
ffffffff810ae720 t get_init_ra_size
ffffffff810ae770 t read_cache_pages_invalidate_page
ffffffff810ae7c0 T read_cache_pages
ffffffff810ae8e0 T max_sane_readahead
ffffffff810ae990 T force_page_cache_readahead
ffffffff810aea20 T ra_submit
ffffffff810aea50 t ondemand_readahead
ffffffff810aec80 T page_cache_async_readahead
ffffffff810aed30 T page_cache_sync_readahead
ffffffff810aed70 T pagevec_lookup_tag
ffffffff810aed90 T pagevec_lookup
ffffffff810aedb0 t __activate_page
ffffffff810aef30 t lru_deactivate_fn
ffffffff810af130 t __pagevec_lru_add_fn
ffffffff810af200 t pagevec_move_tail_fn
ffffffff810af290 t __page_cache_release.part.12
ffffffff810af3a0 t __put_compound_page
ffffffff810af3c0 t __put_single_page
ffffffff810af3e0 t put_compound_page
ffffffff810af520 T release_pages
ffffffff810af700 t pagevec_lru_move_fn
ffffffff810af7f0 T __pagevec_lru_add
ffffffff810af800 t pagevec_move_tail
ffffffff810af830 T put_page
ffffffff810af860 T put_pages_list
ffffffff810af8b0 T __get_page_tail
ffffffff810af980 T __lru_cache_add
ffffffff810afa10 T rotate_reclaimable_page
ffffffff810afac0 T activate_page
ffffffff810afb60 T mark_page_accessed
ffffffff810afbb0 T lru_cache_add_lru
ffffffff810afbf0 T add_page_to_unevictable_list
ffffffff810afcc0 T lru_add_drain_cpu
ffffffff810afdb0 T deactivate_page
ffffffff810afe30 T lru_add_drain
ffffffff810afe40 T __pagevec_release
ffffffff810afe70 t lru_add_drain_per_cpu
ffffffff810afe80 T lru_add_drain_all
ffffffff810afe90 T lru_add_page_tail
ffffffff810affe0 T invalidate_inode_pages2_range
ffffffff810b0330 T invalidate_inode_pages2
ffffffff810b0340 T cancel_dirty_page
ffffffff810b03f0 T do_invalidatepage
ffffffff810b0410 T truncate_inode_page
ffffffff810b04b0 T truncate_inode_pages_range
ffffffff810b0900 T truncate_pagecache_range
ffffffff810b0970 T truncate_inode_pages
ffffffff810b0980 T truncate_pagecache
ffffffff810b09f0 T truncate_setsize
ffffffff810b0a10 T vmtruncate
ffffffff810b0a80 T generic_error_remove_page
ffffffff810b0ab0 T invalidate_inode_page
ffffffff810b0b70 T invalidate_mapping_pages
ffffffff810b0cd0 T vmtruncate_range
ffffffff810b0db0 t zone_pagecache_reclaimable
ffffffff810b0e30 t move_active_pages_to_lru
ffffffff810b0fd0 t __remove_mapping
ffffffff810b1110 T unregister_shrinker
ffffffff810b1160 T register_shrinker
ffffffff810b11b0 t warn_scan_unevictable_pages
ffffffff810b11e0 t write_scan_unevictable_node
ffffffff810b1200 t read_scan_unevictable_node
ffffffff810b1220 t update_isolated_counts.isra.53
ffffffff810b1370 t sleeping_prematurely.part.55
ffffffff810b1450 T shrink_slab
ffffffff810b1610 T remove_mapping
ffffffff810b1650 T __isolate_lru_page
ffffffff810b1760 t isolate_lru_pages.isra.57
ffffffff810b1a90 T isolate_lru_page
ffffffff810b1c00 T wakeup_kswapd
ffffffff810b1ce0 T global_reclaimable_pages
ffffffff810b1d30 T zone_reclaimable_pages
ffffffff810b1d80 T kswapd_run
ffffffff810b1e40 T kswapd_stop
ffffffff810b1e80 T page_evictable
ffffffff810b1f10 T putback_lru_page
ffffffff810b1fe0 t shrink_page_list
ffffffff810b28d0 t putback_inactive_pages
ffffffff810b2ba0 t shrink_inactive_list
ffffffff810b2fa0 t shrink_active_list.isra.59
ffffffff810b3300 t shrink_mem_cgroup_zone
ffffffff810b3850 t __zone_reclaim
ffffffff810b3a00 T zone_reclaim
ffffffff810b3ad0 t do_try_to_free_pages
ffffffff810b3f70 T try_to_free_pages
ffffffff810b3ff0 t balance_pgdat
ffffffff810b46a0 t kswapd
ffffffff810b4970 T check_move_unevictable_pages
ffffffff810b4b00 T scan_unevictable_handler
ffffffff810b4b50 T scan_unevictable_register_node
ffffffff810b4b60 T scan_unevictable_unregister_node
ffffffff810b4b70 t shmem_follow_short_symlink
ffffffff810b4b90 t shmem_get_parent
ffffffff810b4ba0 t shmem_match
ffffffff810b4bd0 t shmem_write_end
ffffffff810b4c30 t shmem_reserve_inode
ffffffff810b4cb0 t shmem_free_inode
ffffffff810b4d00 t shmem_recalc_inode
ffffffff810b4db0 t shmem_get_policy
ffffffff810b4de0 t shmem_swapin
ffffffff810b4e80 t shmem_alloc_page
ffffffff810b4ee0 t shmem_set_policy
ffffffff810b4f10 t shmem_mmap
ffffffff810b4f50 t shmem_put_link
ffffffff810b4f80 t shmem_get_inode
ffffffff810b51a0 t shmem_mknod
ffffffff810b5270 t shmem_mkdir
ffffffff810b52a0 t shmem_create
ffffffff810b52b0 t shmem_alloc_inode
ffffffff810b52e0 t shmem_unlink
ffffffff810b5370 t shmem_rename
ffffffff810b5480 t shmem_rmdir
ffffffff810b54f0 t shmem_link
ffffffff810b55b0 t shmem_xattr_validate
ffffffff810b5610 t shmem_xattr_set
ffffffff810b57d0 t shmem_listxattr
ffffffff810b58b0 t shmem_mount
ffffffff810b58c0 t shmem_init_inode
ffffffff810b58d0 t shmem_show_options
ffffffff810b59e0 t shmem_statfs
ffffffff810b5a70 t shmem_destroy_inode
ffffffff810b5ab0 t shmem_destroy_callback
ffffffff810b5ad0 t shmem_fh_to_dentry
ffffffff810b5b30 t shmem_encode_fh
ffffffff810b5bf0 t shmem_parse_options
ffffffff810b5ef0 t shmem_remount_fs
ffffffff810b6020 t shmem_put_super
ffffffff810b6070 T shmem_fill_super
ffffffff810b6200 t shmem_find_get_pages_and_swap
ffffffff810b6310 t shmem_radix_tree_replace
ffffffff810b63a0 t shmem_writepage
ffffffff810b65e0 t shmem_free_swap
ffffffff810b6660 t shmem_add_to_page_cache
ffffffff810b67c0 t shmem_getpage_gfp
ffffffff810b6d20 t shmem_fault
ffffffff810b6d90 t shmem_write_begin
ffffffff810b6dc0 t shmem_follow_link
ffffffff810b6e60 t shmem_file_splice_read
ffffffff810b72c0 t shmem_symlink
ffffffff810b74d0 T shmem_truncate_range
ffffffff810b7a70 t shmem_setattr
ffffffff810b7ba0 t shmem_evict_inode
ffffffff810b7cd0 T shmem_read_mapping_page_gfp
ffffffff810b7d20 T shmem_file_setup
ffffffff810b7f00 t shmem_removexattr
ffffffff810b7f90 t shmem_getxattr
ffffffff810b80d0 t shmem_setxattr
ffffffff810b8190 t shmem_file_aio_read
ffffffff810b8500 T shmem_unlock_mapping
ffffffff810b85f0 T shmem_unuse
ffffffff810b8790 T shmem_lock
ffffffff810b8870 T shmem_zero_setup
ffffffff810b88d0 T vma_prio_tree_add
ffffffff810b8990 T vma_prio_tree_insert
ffffffff810b89f0 T vma_prio_tree_remove
ffffffff810b8b20 T vma_prio_tree_next
ffffffff810b8c50 T kzfree
ffffffff810b8c80 T __krealloc
ffffffff810b8d10 T krealloc
ffffffff810b8d60 T kmemdup
ffffffff810b8db0 T memdup_user
ffffffff810b8e30 T strndup_user
ffffffff810b8ea0 T kstrndup
ffffffff810b8f10 T kstrdup
ffffffff810b8f80 T __vma_link_list
ffffffff810b8fc0 T vm_is_stack
ffffffff810b90a0 T first_online_pgdat
ffffffff810b90e0 T next_online_pgdat
ffffffff810b9130 T next_zone
ffffffff810b9170 T next_zones_zonelist
ffffffff810b91d0 T mod_zone_page_state
ffffffff810b9240 T inc_zone_page_state
ffffffff810b92d0 T dec_zone_page_state
ffffffff810b9360 t frag_stop
ffffffff810b9370 t vmstat_next
ffffffff810b9390 t zoneinfo_open
ffffffff810b93a0 t vmstat_open
ffffffff810b93b0 t pagetypeinfo_open
ffffffff810b93c0 t fragmentation_open
ffffffff810b93d0 t walk_zones_in_node
ffffffff810b9470 t zoneinfo_show
ffffffff810b9490 t frag_show
ffffffff810b94b0 t vmstat_show
ffffffff810b94e0 t pagetypeinfo_showfree_print
ffffffff810b95a0 t frag_show_print
ffffffff810b9600 t frag_next
ffffffff810b9610 t frag_start
ffffffff810b9650 t vmstat_stop
ffffffff810b9670 t pagetypeinfo_showblockcount_print
ffffffff810b97e0 t zoneinfo_show_print
ffffffff810b99a0 T __mod_zone_page_state
ffffffff810b99f0 T all_vm_events
ffffffff810b9b00 t vmstat_start
ffffffff810b9ba0 t pagetypeinfo_show
ffffffff810b9cc0 T vm_events_fold_cpu
ffffffff810b9d00 T calculate_pressure_threshold
ffffffff810b9d40 T calculate_normal_threshold
ffffffff810b9d90 T refresh_zone_stat_thresholds
ffffffff810b9e40 T set_pgdat_percpu_threshold
ffffffff810b9ef0 T __inc_zone_state
ffffffff810b9f40 T __inc_zone_page_state
ffffffff810b9f70 T __dec_zone_state
ffffffff810b9fc0 T __dec_zone_page_state
ffffffff810b9ff0 T inc_zone_state
ffffffff810ba060 T refresh_cpu_vm_stats
ffffffff810ba1f0 t vmstat_update
ffffffff810ba230 T zone_statistics
ffffffff810ba2d0 T fragmentation_index
ffffffff810ba360 t bdi_sched_wait
ffffffff810ba370 t bdi_sync_supers
ffffffff810ba3d0 t read_ahead_kb_store
ffffffff810ba450 t max_ratio_store
ffffffff810ba4e0 t max_ratio_show
ffffffff810ba510 t min_ratio_show
ffffffff810ba540 t read_ahead_kb_show
ffffffff810ba570 t min_ratio_store
ffffffff810ba600 T wait_iff_congested
ffffffff810ba710 T congestion_wait
ffffffff810ba7e0 T clear_bdi_congested
ffffffff810ba850 T bdi_init
ffffffff810baac0 t wakeup_timer_fn
ffffffff810bab20 T bdi_unregister
ffffffff810baca0 T bdi_register
ffffffff810bade0 T bdi_register_dev
ffffffff810bae00 t bdi_clear_pending
ffffffff810bae20 t bdi_forker_thread
ffffffff810bb260 T set_bdi_congested
ffffffff810bb280 T bdi_lock_two
ffffffff810bb2e0 T bdi_destroy
ffffffff810bb440 T bdi_setup_and_register
ffffffff810bb4e0 T bdi_has_dirty_io
ffffffff810bb520 T bdi_arm_supers_timer
ffffffff810bb570 t sync_supers_timer_fn
ffffffff810bb590 T bdi_wakeup_thread_delayed
ffffffff810bb5d0 T start_isolate_page_range
ffffffff810bb680 T undo_isolate_page_range
ffffffff810bb710 T test_pages_isolated
ffffffff810bb890 T mminit_verify_zonelist
ffffffff810bba00 T unuse_mm
ffffffff810bba80 T use_mm
ffffffff810bbc00 t pcpu_mem_zalloc
ffffffff810bbc60 t pcpu_get_pages_and_bitmap
ffffffff810bbd70 t pcpu_next_unpop
ffffffff810bbde0 t pcpu_next_pop
ffffffff810bbe50 t pcpu_unmap_pages
ffffffff810bbff0 t pcpu_free_chunk
ffffffff810bc040 t pcpu_extend_area_map
ffffffff810bc170 t pcpu_chunk_slot
ffffffff810bc1c0 t pcpu_chunk_relocate
ffffffff810bc260 t pcpu_free_area
ffffffff810bc3c0 T free_percpu
ffffffff810bc520 t pcpu_alloc_area
ffffffff810bc7d0 t pcpu_free_pages.isra.14
ffffffff810bc870 t pcpu_alloc
ffffffff810bd270 T __alloc_percpu
ffffffff810bd280 t pcpu_reclaim
ffffffff810bd4e0 T __alloc_reserved_percpu
ffffffff810bd4f0 T is_kernel_percpu_address
ffffffff810bd580 T per_cpu_ptr_to_phys
ffffffff810bd690 T sys_remap_file_pages
ffffffff810bdbb0 T sys_madvise
ffffffff810be330 t print_bad_pte
ffffffff810be5c0 t add_mm_counter_fast
ffffffff810be5f0 t __do_fault
ffffffff810beb20 t __follow_pte.isra.47
ffffffff810becd0 T follow_pfn
ffffffff810bed40 T sync_mm_rss
ffffffff810bed90 T tlb_gather_mmu
ffffffff810bedf0 T tlb_flush_mmu
ffffffff810bee70 T tlb_finish_mmu
ffffffff810beec0 T __tlb_remove_page
ffffffff810bef50 T pgd_clear_bad
ffffffff810befa0 T pud_clear_bad
ffffffff810beff0 T pmd_clear_bad
ffffffff810bf040 T free_pgd_range
ffffffff810bf4f0 T free_pgtables
ffffffff810bf5f0 T __pte_alloc
ffffffff810bf760 T __pte_alloc_kernel
ffffffff810bf830 T vm_normal_page
ffffffff810bf8b0 t do_wp_page
ffffffff810c00e0 t unmap_single_vma
ffffffff810c0900 t zap_page_range_single
ffffffff810c0a10 T zap_vma_ptes
ffffffff810c0a40 T unmap_mapping_range
ffffffff810c0bb0 T copy_pte_range
ffffffff810c1090 T unmap_vmas
ffffffff810c1140 T zap_page_range
ffffffff810c1220 T follow_page
ffffffff810c1720 T handle_pte_fault
ffffffff810c2050 T __pud_alloc
ffffffff810c2130 T __pmd_alloc
ffffffff810c2210 T remap_pfn_range
ffffffff810c2690 T apply_to_page_range
ffffffff810c2ab0 T copy_page_range
ffffffff810c2f20 T __get_locked_pte
ffffffff810c3100 t insert_pfn.isra.40
ffffffff810c31d0 T vm_insert_mixed
ffffffff810c3200 T vm_insert_pfn
ffffffff810c3330 T vm_insert_page
ffffffff810c34b0 T handle_mm_fault
ffffffff810c3830 T fixup_user_fault
ffffffff810c38f0 T __get_user_pages
ffffffff810c3e40 T get_dump_page
ffffffff810c3e90 T get_user_pages
ffffffff810c3ee0 t __access_remote_vm
ffffffff810c40c0 T make_pages_present
ffffffff810c4190 T follow_phys
ffffffff810c4240 T generic_access_phys
ffffffff810c4300 T access_remote_vm
ffffffff810c4320 T access_process_vm
ffffffff810c43b0 T print_vma_addr
ffffffff810c44f0 T clear_huge_page
ffffffff810c46a0 T copy_user_huge_page
ffffffff810c4920 t mincore_hugetlb_page_range
ffffffff810c49b0 t mincore_page
ffffffff810c4a00 t mincore_unmapped_range
ffffffff810c4ad0 T sys_mincore
ffffffff810c5200 T can_do_mlock
ffffffff810c5240 t __mlock_vma_pages_range.isra.4
ffffffff810c52a0 t do_mlock_pages
ffffffff810c53d0 T __clear_page_mlock
ffffffff810c5430 T mlock_vma_page
ffffffff810c5480 T munlock_vma_page
ffffffff810c5500 T mlock_vma_pages_range
ffffffff810c55b0 T munlock_vma_pages_range
ffffffff810c5650 t mlock_fixup
ffffffff810c57e0 t do_mlockall
ffffffff810c5870 t do_mlock
ffffffff810c5940 T sys_mlock
ffffffff810c5a70 T sys_munlock
ffffffff810c5b00 T sys_mlockall
ffffffff810c5c90 T sys_munlockall
ffffffff810c5cf0 T user_shm_lock
ffffffff810c5da0 T user_shm_unlock
ffffffff810c5e00 T vm_get_page_prot
ffffffff810c5e10 t find_vma_prepare
ffffffff810c5e70 t can_vma_merge_before
ffffffff810c5ee0 T arch_unmap_area
ffffffff810c5f40 T find_vma
ffffffff810c5fb0 t special_mapping_close
ffffffff810c5fc0 t special_mapping_fault
ffffffff810c6050 t __vma_link_file
ffffffff810c60d0 t unmap_region
ffffffff810c6210 t remove_vma
ffffffff810c6290 t reusable_anon_vma
ffffffff810c6370 T get_unmapped_area
ffffffff810c6490 t __remove_shared_vm_struct.isra.24
ffffffff810c64f0 T __vm_enough_memory
ffffffff810c6670 T unlink_file_vma
ffffffff810c6700 T __vma_link_rb
ffffffff810c6730 t vma_link
ffffffff810c6810 T vma_adjust
ffffffff810c6d50 t __split_vma
ffffffff810c6fd0 T vma_merge
ffffffff810c7310 T find_mergeable_anon_vma
ffffffff810c7350 T vm_stat_account
ffffffff810c73b0 T do_munmap
ffffffff810c7730 T vm_munmap
ffffffff810c77b0 t do_brk
ffffffff810c7ae0 T vm_brk
ffffffff810c7b40 T sys_brk
ffffffff810c7ca0 T vma_wants_writenotify
ffffffff810c7d20 T mmap_region
ffffffff810c8250 t do_mmap_pgoff
ffffffff810c8610 T sys_mmap_pgoff
ffffffff810c8820 T do_mmap
ffffffff810c8850 T vm_mmap
ffffffff810c88d0 T arch_unmap_area_topdown
ffffffff810c88f0 T find_vma_prev
ffffffff810c8940 T expand_downwards
ffffffff810c8af0 T expand_stack
ffffffff810c8b00 T find_extend_vma
ffffffff810c8b80 T split_vma
ffffffff810c8ba0 T sys_munmap
ffffffff810c8bb0 T exit_mmap
ffffffff810c8cf0 T insert_vm_struct
ffffffff810c8db0 T copy_vma
ffffffff810c9020 T may_expand_vm
ffffffff810c9050 T install_special_mapping
ffffffff810c9180 T mm_drop_all_locks
ffffffff810c9270 T mm_take_all_locks
ffffffff810c93e0 T mprotect_fixup
ffffffff810c9bf0 T sys_mprotect
ffffffff810c9e10 t vma_to_resize
ffffffff810c9fb0 T move_page_tables
ffffffff810ca580 t move_vma
ffffffff810ca7d0 T do_mremap
ffffffff810cad10 T sys_mremap
ffffffff810cad90 T sys_msync
ffffffff810caf70 t anon_vma_chain_free
ffffffff810caf80 t anon_vma_ctor
ffffffff810cafb0 t __hugepage_set_anon_rmap
ffffffff810cb010 t __page_set_anon_rmap
ffffffff810cb070 T anon_vma_moveto_tail
ffffffff810cb160 T page_unlock_anon_vma
ffffffff810cb170 T vma_address
ffffffff810cb1e0 T page_address_in_vma
ffffffff810cb2e0 T __page_check_address
ffffffff810cb4c0 T page_mkclean
ffffffff810cb6b0 T page_mapped_in_vma
ffffffff810cb750 T page_referenced_one
ffffffff810cb930 T page_move_anon_rmap
ffffffff810cb980 T do_page_add_anon_rmap
ffffffff810cba00 T page_add_anon_rmap
ffffffff810cba10 T page_add_new_anon_rmap
ffffffff810cbab0 T page_add_file_rmap
ffffffff810cbad0 T page_remove_rmap
ffffffff810cbb50 T try_to_unmap_one
ffffffff810cbfe0 t try_to_unmap_file
ffffffff810cc660 T is_vma_temporary_stack
ffffffff810cc680 T __put_anon_vma
ffffffff810cc710 T anon_vma_prepare
ffffffff810cc890 T unlink_anon_vmas
ffffffff810cca40 T anon_vma_clone
ffffffff810ccbd0 T anon_vma_fork
ffffffff810ccd10 T page_get_anon_vma
ffffffff810ccda0 T page_lock_anon_vma
ffffffff810ccec0 t try_to_unmap_anon
ffffffff810ccfe0 T try_to_munlock
ffffffff810cd010 T try_to_unmap
ffffffff810cd070 T page_referenced
ffffffff810cd2e0 T rmap_walk
ffffffff810cd500 T hugepage_add_anon_rmap
ffffffff810cd540 T hugepage_add_new_anon_rmap
ffffffff810cd560 T vmalloc_to_page
ffffffff810cd640 T vmalloc_to_pfn
ffffffff810cd670 t f
ffffffff810cd690 t s_next
ffffffff810cd6a0 t s_stop
ffffffff810cd6b0 t s_start
ffffffff810cd6f0 t lazy_max_pages
ffffffff810cd720 t vmalloc_open
ffffffff810cd790 t pvm_determine_end
ffffffff810cd800 t find_vmap_area
ffffffff810cd860 t find_vm_area
ffffffff810cd890 t __free_vmap_area
ffffffff810cd990 t pvm_find_next_prev
ffffffff810cda40 t __insert_vmap_area
ffffffff810cdb10 t insert_vmalloc_vmlist
ffffffff810cdb70 T remap_vmalloc_range
ffffffff810cdc50 t s_show
ffffffff810cde80 t vunmap_page_range
ffffffff810ce120 T unmap_kernel_range_noflush
ffffffff810ce130 t free_vmap_block
ffffffff810ce1b0 t purge_fragmented_blocks
ffffffff810ce510 t __purge_vmap_area_lazy
ffffffff810ce6e0 t purge_vmap_area_lazy
ffffffff810ce710 T pcpu_get_vm_areas
ffffffff810cec10 t alloc_vmap_area
ffffffff810cef80 t __get_vm_area_node
ffffffff810cf100 T __get_vm_area
ffffffff810cf140 t free_vmap_area_noflush
ffffffff810cf1a0 T vm_unmap_aliases
ffffffff810cf320 T vm_unmap_ram
ffffffff810cf4d0 t vmap_page_range_noflush
ffffffff810cf840 T map_vm_area
ffffffff810cf880 T vm_map_ram
ffffffff810cfda0 T is_vmalloc_or_module_addr
ffffffff810cfde0 T set_iounmap_nonlazy
ffffffff810cfdf0 T map_kernel_range_noflush
ffffffff810cfe00 T unmap_kernel_range
ffffffff810cfe20 T __get_vm_area_caller
ffffffff810cfe50 T get_vm_area
ffffffff810cfea0 T get_vm_area_caller
ffffffff810cfee0 T remove_vm_area
ffffffff810cff70 T free_vm_area
ffffffff810cff90 T alloc_vm_area
ffffffff810d0000 t __vunmap
ffffffff810d00e0 T vunmap
ffffffff810d0100 T vmap
ffffffff810d0190 T vfree
ffffffff810d01c0 T __vmalloc_node_range
ffffffff810d03f0 t __vmalloc_node
ffffffff810d0430 T vmalloc
ffffffff810d0460 T vzalloc
ffffffff810d0490 T vzalloc_node
ffffffff810d04c0 T vmalloc_32_user
ffffffff810d0500 T vmalloc_32
ffffffff810d0530 T vmalloc_node
ffffffff810d0560 T vmalloc_user
ffffffff810d05a0 T __vmalloc
ffffffff810d05c0 T vmalloc_exec
ffffffff810d05f0 T vread
ffffffff810d0840 T vwrite
ffffffff810d0a40 T pcpu_free_vm_areas
ffffffff810d0a70 T walk_page_range
ffffffff810d0f60 T ptep_clear_flush
ffffffff810d0fc0 T pmdp_clear_flush
ffffffff810d1000 t process_vm_rw_core.isra.2
ffffffff810d1660 t process_vm_rw
ffffffff810d17e0 T sys_process_vm_readv
ffffffff810d1800 T sys_process_vm_writev
ffffffff810d1820 T compat_process_vm_rw
ffffffff810d1a00 T compat_sys_process_vm_readv
ffffffff810d1a20 T compat_sys_process_vm_writev
ffffffff810d1a40 t bounce_end_io
ffffffff810d1ae0 t __bounce_end_io_read
ffffffff810d1bd0 t bounce_end_io_read_isa
ffffffff810d1be0 t bounce_end_io_read
ffffffff810d1bf0 t bounce_end_io_write_isa
ffffffff810d1c00 t bounce_end_io_write
ffffffff810d1c10 t mempool_alloc_pages_isa
ffffffff810d1c20 T blk_queue_bounce
ffffffff810d1f30 T init_emergency_isa_pool
ffffffff810d1f90 t get_swap_bio
ffffffff810d2020 t end_swap_bio_write
ffffffff810d20a0 T end_swap_bio_read
ffffffff810d2120 T swap_writepage
ffffffff810d21e0 T swap_readpage
ffffffff810d2230 t __add_to_swap_cache
ffffffff810d22e0 T show_swap_cache_info
ffffffff810d2360 T add_to_swap_cache
ffffffff810d23b0 T __delete_from_swap_cache
ffffffff810d2400 T add_to_swap
ffffffff810d24a0 T delete_from_swap_cache
ffffffff810d2500 T free_page_and_swap_cache
ffffffff810d2550 T free_pages_and_swap_cache
ffffffff810d2600 T lookup_swap_cache
ffffffff810d2630 T read_swap_cache_async
ffffffff810d27a0 T swapin_readahead
ffffffff810d2860 t swaps_poll
ffffffff810d28b0 t swap_next
ffffffff810d2900 t swaps_open
ffffffff810d2930 t swap_stop
ffffffff810d2940 t swap_start
ffffffff810d29a0 t enable_swap_info
ffffffff810d2a80 t swap_info_get
ffffffff810d2b50 t destroy_swap_extents
ffffffff810d2bb0 t wait_for_discard
ffffffff810d2bc0 t swap_show
ffffffff810d2c90 t swap_count_continued.isra.20
ffffffff810d2fb0 t swap_entry_free
ffffffff810d30f0 t __swap_duplicate
ffffffff810d3270 t add_swap_extent
ffffffff810d3340 T sys_swapon
ffffffff810d3ed0 T swap_free
ffffffff810d3f00 t unuse_mm
ffffffff810d44c0 T swapcache_free
ffffffff810d44f0 T reuse_swap_page
ffffffff810d45d0 T try_to_free_swap
ffffffff810d4680 t scan_swap_map
ffffffff810d4c20 T get_swap_page_of_type
ffffffff810d4cc0 T get_swap_page
ffffffff810d4dc0 T free_swap_and_cache
ffffffff810d4ee0 T map_swap_page
ffffffff810d4f40 T sys_swapoff
ffffffff810d5920 T si_swapinfo
ffffffff810d59a0 T swap_shmem_alloc
ffffffff810d59b0 T swapcache_prepare
ffffffff810d59c0 T add_swap_count_continuation
ffffffff810d5b50 T swap_duplicate
ffffffff810d5b90 T grab_swap_token
ffffffff810d5ca0 T __put_swap_token
ffffffff810d5ce0 T disable_swap_token
ffffffff810d5d60 t dmam_pool_match
ffffffff810d5d70 T dma_pool_free
ffffffff810d5e60 t show_pools
ffffffff810d5fa0 T dma_pool_create
ffffffff810d6180 T dmam_pool_create
ffffffff810d6240 T dma_pool_destroy
ffffffff810d63e0 T dmam_pool_destroy
ffffffff810d6420 t dmam_pool_release
ffffffff810d6430 T dma_pool_alloc
ffffffff810d66c0 t make_huge_pte
ffffffff810d6750 t hugetlb_vm_op_fault
ffffffff810d6760 t is_hugetlb_entry_hwpoisoned
ffffffff810d67a0 t hugetlb_vm_op_open
ffffffff810d67e0 t region_truncate
ffffffff810d68b0 t resv_map_release
ffffffff810d68d0 t resv_map_put
ffffffff810d6900 t region_add
ffffffff810d69e0 t hugepage_subpool_get_pages
ffffffff810d6a40 t hugepage_subpool_put_pages
ffffffff810d6ac0 t prep_new_huge_page
ffffffff810d6b30 t update_and_free_page
ffffffff810d6bb0 t alloc_buddy_huge_page
ffffffff810d6d00 t hugetlb_sysfs_add_hstate
ffffffff810d6d70 T hugetlb_unregister_node
ffffffff810d6e50 T hugetlb_register_node
ffffffff810d6f80 t region_chg
ffffffff810d7070 T vma_kernel_pagesize
ffffffff810d70b0 T PageHuge
ffffffff810d70e0 t next_node_allowed
ffffffff810d7150 t get_valid_node_allowed
ffffffff810d7170 t vma_needs_reservation
ffffffff810d7220 t alloc_huge_page
ffffffff810d75e0 t kobj_to_hstate
ffffffff810d76a0 t nr_overcommit_hugepages_store
ffffffff810d7730 t surplus_hugepages_show
ffffffff810d7780 t resv_hugepages_show
ffffffff810d77b0 t free_hugepages_show
ffffffff810d7800 t nr_overcommit_hugepages_show
ffffffff810d7830 t hstate_next_node_to_free.isra.35
ffffffff810d7880 t free_pool_huge_page
ffffffff810d7950 t return_unused_surplus_pages.part.36
ffffffff810d7990 t hugetlb_acct_memory
ffffffff810d7d10 t hugetlb_vm_op_close
ffffffff810d7df0 t hstate_next_node_to_alloc.isra.37
ffffffff810d7e40 t alloc_fresh_huge_page
ffffffff810d7ef0 t adjust_pool_surplus
ffffffff810d7fc0 t set_max_huge_pages.part.38
ffffffff810d8110 t hugetlb_sysctl_handler_common
ffffffff810d8210 t nr_hugepages_show_common.isra.39
ffffffff810d8260 t nr_hugepages_mempolicy_show
ffffffff810d8270 t nr_hugepages_show
ffffffff810d8280 t nr_hugepages_store_common.isra.40
ffffffff810d8390 t nr_hugepages_mempolicy_store
ffffffff810d83a0 t nr_hugepages_store
ffffffff810d83b0 T hugepage_new_subpool
ffffffff810d83f0 T hugepage_put_subpool
ffffffff810d8480 T linear_hugepage_index
ffffffff810d84c0 T vma_mmu_pagesize
ffffffff810d84d0 T reset_vma_resv_huge_pages
ffffffff810d84f0 T size_to_hstate
ffffffff810d8560 t free_huge_page
ffffffff810d86a0 T copy_huge_page
ffffffff810d8980 T alloc_huge_page_node
ffffffff810d8a50 T hugetlb_sysctl_handler
ffffffff810d8a70 T hugetlb_mempolicy_sysctl_handler
ffffffff810d8a90 T hugetlb_treat_movable_handler
ffffffff810d8ac0 T hugetlb_overcommit_handler
ffffffff810d8b70 T hugetlb_report_meminfo
ffffffff810d8bd0 T hugetlb_report_node_meminfo
ffffffff810d8c20 T hugetlb_total_pages
ffffffff810d8c50 T copy_hugetlb_page_range
ffffffff810d8e10 T __unmap_hugepage_range
ffffffff810d90d0 t hugetlb_cow
ffffffff810d94c0 T unmap_hugepage_range
ffffffff810d9530 T hugetlb_fault
ffffffff810d9bd0 T follow_hugetlb_page
ffffffff810d9ee0 T hugetlb_change_protection
ffffffff810da070 T hugetlb_reserve_pages
ffffffff810da210 T hugetlb_unreserve_pages
ffffffff810da2e0 T dequeue_hwpoisoned_huge_page
ffffffff810da4c0 t mpol_rebind_default
ffffffff810da4d0 t mpol_new_interleave
ffffffff810da530 t sp_alloc
ffffffff810da5a0 t mpol_new_preferred
ffffffff810da620 t mpol_new_bind
ffffffff810da6e0 t interleave_nodes
ffffffff810da780 t policy_zonelist
ffffffff810da810 t mpol_relative_nodemask
ffffffff810da880 t mpol_rebind_preferred
ffffffff810da990 t mpol_set_nodemask
ffffffff810dab10 t mpol_rebind_nodemask
ffffffff810dada0 t mpol_rebind_policy
ffffffff810dae60 t new_node_page
ffffffff810daea0 t alloc_page_interleave
ffffffff810daf30 t get_nodes
ffffffff810db030 t check_range
ffffffff810db5d0 t mpol_new
ffffffff810db690 t sp_lookup.isra.17
ffffffff810db700 t sp_insert
ffffffff810db760 t policy_nodemask
ffffffff810db7b0 T alloc_pages_current
ffffffff810db8e0 T __mpol_put
ffffffff810db900 t do_set_mempolicy
ffffffff810dbaa0 T mpol_rebind_task
ffffffff810dbab0 T mpol_rebind_mm
ffffffff810dbb00 T mpol_fix_fork_child_flag
ffffffff810dbb20 T do_migrate_pages
ffffffff810dbd90 T sys_set_mempolicy
ffffffff810dbdf0 T sys_migrate_pages
ffffffff810dc030 T sys_get_mempolicy
ffffffff810dc520 T compat_sys_get_mempolicy
ffffffff810dc630 T compat_sys_set_mempolicy
ffffffff810dc6e0 T get_vma_policy
ffffffff810dc740 T slab_node
ffffffff810dc7e0 T node_random
ffffffff810dc840 T huge_zonelist
ffffffff810dc960 T init_nodemask_of_mempolicy
ffffffff810dca50 T mempolicy_nodemask_intersects
ffffffff810dcaf0 T alloc_pages_vma
ffffffff810dccd0 t new_vma_page
ffffffff810dcd30 T __mpol_dup
ffffffff810dce40 T __mpol_cond_copy
ffffffff810dce70 T __mpol_equal
ffffffff810dcf40 t do_mbind
ffffffff810dd360 T sys_mbind
ffffffff810dd410 T compat_sys_mbind
ffffffff810dd4f0 T mpol_shared_policy_lookup
ffffffff810dd570 T mpol_set_shared_policy
ffffffff810dd750 T mpol_shared_policy_init
ffffffff810dd8b0 T mpol_free_shared_policy
ffffffff810dd940 T numa_default_policy
ffffffff810dd950 T mpol_parse_str
ffffffff810ddce0 T mpol_to_str
ffffffff810ddf20 T __section_nr
ffffffff810ddf80 T sparse_decode_mem_map
ffffffff810ddfa0 T usemap_size
ffffffff810ddfb0 t suitable_migration_target
ffffffff810ddff0 t compaction_alloc
ffffffff810de360 T compaction_suitable
ffffffff810de410 t compact_zone
ffffffff810debf0 t __compact_pgdat
ffffffff810ded40 t compact_node
ffffffff810ded80 T sysfs_compact_node
ffffffff810dedd0 T try_to_compact_pages
ffffffff810def60 T compact_pgdat
ffffffff810def90 T sysctl_compaction_handler
ffffffff810df020 T sysctl_extfrag_handler
ffffffff810df030 T compaction_register_node
ffffffff810df040 T compaction_unregister_node
ffffffff810df050 T mmu_notifier_unregister
ffffffff810df110 t do_mmu_notifier_register
ffffffff810df240 T __mmu_notifier_register
ffffffff810df250 T mmu_notifier_register
ffffffff810df260 T __mmu_notifier_release
ffffffff810df300 T __mmu_notifier_clear_flush_young
ffffffff810df360 T __mmu_notifier_test_young
ffffffff810df3b0 T __mmu_notifier_change_pte
ffffffff810df430 T __mmu_notifier_invalidate_page
ffffffff810df480 T __mmu_notifier_invalidate_range_start
ffffffff810df4e0 T __mmu_notifier_invalidate_range_end
ffffffff810df540 T __mmu_notifier_mm_destroy
ffffffff810df570 t cache_estimate
ffffffff810df610 T kmem_cache_size
ffffffff810df620 t do_ccupdate_local
ffffffff810df650 t slabinfo_open
ffffffff810df660 t s_show
ffffffff810df9b0 t s_next
ffffffff810df9c0 t s_stop
ffffffff810df9d0 t s_start
ffffffff810dfa70 T ksize
ffffffff810dfae0 t transfer_objects
ffffffff810dfb50 t slab_out_of_memory
ffffffff810dfcc0 t kmem_getpages
ffffffff810dfe40 t kmem_flagcheck.isra.41
ffffffff810dfe60 t kmem_freepages.isra.43
ffffffff810dff90 t slab_destroy
ffffffff810e0020 t free_block
ffffffff810e01a0 t __drain_alien_cache
ffffffff810e0240 t drain_alien_cache
ffffffff810e0300 T kfree
ffffffff810e0520 t free_alien_cache
ffffffff810e05b0 T kmem_cache_free
ffffffff810e0760 t __kmem_cache_destroy
ffffffff810e0850 t do_drain
ffffffff810e08e0 t drain_freelist
ffffffff810e09b0 t kmem_rcu_free
ffffffff810e0a10 t cache_grow
ffffffff810e0ca0 t ____cache_alloc_node
ffffffff810e0de0 t fallback_alloc
ffffffff810e1040 T kmem_cache_alloc_node
ffffffff810e1140 T __kmalloc_node
ffffffff810e1190 t alloc_arraycache
ffffffff810e11f0 t alloc_alien_cache
ffffffff810e1330 T __kmalloc
ffffffff810e1490 T kmem_cache_alloc
ffffffff810e1590 t do_tune_cpucache
ffffffff810e1a80 t slabinfo_write
ffffffff810e1c10 t enable_cpucache
ffffffff810e1cd0 T kmem_cache_create
ffffffff810e2200 t drain_array
ffffffff810e2300 t cache_reap
ffffffff810e2500 t __cache_shrink
ffffffff810e26e0 T kmem_cache_destroy
ffffffff810e27d0 T kmem_cache_shrink
ffffffff810e2830 T slab_is_available
ffffffff810e2840 T fail_migrate_page
ffffffff810e2850 t new_page_node
ffffffff810e28c0 t do_pages_stat
ffffffff810e2a20 t remove_migration_pte
ffffffff810e2d40 t buffer_migrate_lock_buffers
ffffffff810e2dd0 t migrate_page_move_mapping.part.15
ffffffff810e3010 T migrate_prep
ffffffff810e3020 T migrate_prep_local
ffffffff810e3030 T putback_lru_pages
ffffffff810e30c0 T migration_entry_wait
ffffffff810e3250 T migrate_huge_page_move_mapping
ffffffff810e33e0 T migrate_page_copy
ffffffff810e3590 T migrate_page
ffffffff810e3600 t move_to_new_page
ffffffff810e3890 T buffer_migrate_page
ffffffff810e39e0 T migrate_pages
ffffffff810e3e30 T migrate_huge_pages
ffffffff810e40b0 T sys_move_pages
ffffffff810e4660 T migrate_vmas
ffffffff810e46c0 t pages_to_scan_store
ffffffff810e4710 t khugepaged_max_ptes_none_store
ffffffff810e4760 t alloc_sleep_millisecs_store
ffffffff810e47c0 t scan_sleep_millisecs_store
ffffffff810e4820 t alloc_sleep_millisecs_show
ffffffff810e4850 t scan_sleep_millisecs_show
ffffffff810e4880 t full_scans_show
ffffffff810e48b0 t pages_collapsed_show
ffffffff810e48e0 t pages_to_scan_show
ffffffff810e4910 t khugepaged_max_ptes_none_show
ffffffff810e4940 t khugepaged_defrag_show
ffffffff810e4970 t khugepaged_alloc_sleep
ffffffff810e4ab0 t collect_mm_slot
ffffffff810e4b80 t set_recommended_min_free_kbytes
ffffffff810e4c10 t start_khugepaged
ffffffff810e4d40 t khugepaged_wait_event
ffffffff810e4d70 t double_flag_store.isra.27
ffffffff810e4e60 t defrag_store
ffffffff810e4e80 t enabled_store
ffffffff810e4ee0 t double_flag_show.isra.28
ffffffff810e4f90 t defrag_show
ffffffff810e4fb0 t enabled_show
ffffffff810e4fc0 t prepare_pmd_huge_pte
ffffffff810e5030 t khugepaged
ffffffff810e64b0 t khugepaged_defrag_store
ffffffff810e6510 T copy_huge_pmd
ffffffff810e6780 T get_pmd_huge_pte
ffffffff810e67f0 T do_huge_pmd_wp_page
ffffffff810e6fc0 T follow_trans_huge_pmd
ffffffff810e70f0 T __pmd_trans_huge_lock
ffffffff810e71c0 T change_huge_pmd
ffffffff810e72b0 T move_huge_pmd
ffffffff810e73a0 T mincore_huge_pmd
ffffffff810e7420 T zap_huge_pmd
ffffffff810e7540 T page_check_address_pmd
ffffffff810e7650 T split_huge_page
ffffffff810e7da0 T __khugepaged_enter
ffffffff810e7ee0 T do_huge_pmd_anonymous_page
ffffffff810e8240 T khugepaged_enter_vma_merge
ffffffff810e82e0 T hugepage_madvise
ffffffff810e8360 T __khugepaged_exit
ffffffff810e84a0 T __split_huge_page_pmd
ffffffff810e8590 t split_huge_page_address
ffffffff810e8620 T __vma_adjust_trans_huge
ffffffff810e8710 T hwpoison_filter
ffffffff810e8720 t me_kernel
ffffffff810e8730 t me_unknown
ffffffff810e8750 t action_result
ffffffff810e87b0 t delete_from_lru_cache
ffffffff810e87e0 t me_swapcache_dirty
ffffffff810e8800 t set_page_hwpoison_huge_page
ffffffff810e88a0 t me_huge_page
ffffffff810e8900 T unpoison_memory
ffffffff810e8ba0 t new_page
ffffffff810e8c40 T memory_failure_queue
ffffffff810e8d20 t me_swapcache_clean
ffffffff810e8d40 t add_to_kill
ffffffff810e8e50 t get_any_page.part.9
ffffffff810e8f40 t me_pagecache_clean
ffffffff810e9050 t me_pagecache_dirty
ffffffff810e9090 t kill_proc.isra.11
ffffffff810e9230 T shake_page
ffffffff810e92d0 T memory_failure
ffffffff810e9de0 t memory_failure_work_func
ffffffff810e9e70 T soft_offline_page
ffffffff810ea2b0 T cleancache_register_ops
ffffffff810ea360 T __cleancache_init_fs
ffffffff810ea380 T __cleancache_init_shared_fs
ffffffff810ea3a0 t cleancache_get_key
ffffffff810ea420 T __cleancache_get_page
ffffffff810ea4e0 T __cleancache_put_page
ffffffff810ea570 T __cleancache_invalidate_page
ffffffff810ea600 T __cleancache_invalidate_inode
ffffffff810ea670 T __cleancache_invalidate_fs
ffffffff810ea6a0 T generic_file_open
ffffffff810ea6c0 T nonseekable_open
ffffffff810ea6d0 T put_unused_fd
ffffffff810ea740 T filp_open
ffffffff810ea810 T file_open_root
ffffffff810ea930 t chmod_common
ffffffff810ea9e0 T filp_close
ffffffff810eaa70 T sys_close
ffffffff810eab80 T fd_install
ffffffff810eac00 t __dentry_open.isra.17
ffffffff810eaf00 T lookup_instantiate_filp
ffffffff810eafc0 T dentry_open
ffffffff810eb040 t chown_common.isra.19
ffffffff810eb0d0 T do_truncate
ffffffff810eb160 T sys_truncate
ffffffff810eb320 T sys_ftruncate
ffffffff810eb430 T do_fallocate
ffffffff810eb510 T sys_fallocate
ffffffff810eb580 T sys_faccessat
ffffffff810eb740 T sys_access
ffffffff810eb750 T sys_chdir
ffffffff810eb7d0 T sys_fchdir
ffffffff810eb870 T sys_chroot
ffffffff810eb910 T sys_fchmod
ffffffff810eb980 T sys_fchmodat
ffffffff810eb9d0 T sys_chmod
ffffffff810eb9e0 T sys_chown
ffffffff810eba70 T sys_fchownat
ffffffff810ebb20 T sys_lchown
ffffffff810ebbb0 T sys_fchown
ffffffff810ebc70 T nameidata_to_filp
ffffffff810ebcc0 T do_sys_open
ffffffff810ebe90 T sys_open
ffffffff810ebeb0 T sys_openat
ffffffff810ebec0 T sys_creat
ffffffff810ebed0 T sys_vhangup
ffffffff810ebf00 T noop_llseek
ffffffff810ebf10 T no_llseek
ffffffff810ebf20 T vfs_llseek
ffffffff810ebf50 T iov_shorten
ffffffff810ebfa0 t wait_on_retry_sync_kiocb
ffffffff810ec000 T do_sync_write
ffffffff810ec100 T do_sync_read
ffffffff810ec200 T default_llseek
ffffffff810ec320 T generic_file_llseek_size
ffffffff810ec460 T generic_file_llseek
ffffffff810ec480 T sys_lseek
ffffffff810ec510 T sys_llseek
ffffffff810ec5e0 T rw_verify_area
ffffffff810ec690 t do_sendfile
ffffffff810ec8a0 T vfs_write
ffffffff810eca20 T vfs_read
ffffffff810ecb90 T sys_read
ffffffff810ecc20 T sys_write
ffffffff810eccb0 T sys_pread64
ffffffff810ecd50 T sys_pwrite64
ffffffff810ecdf0 T do_sync_readv_writev
ffffffff810ecee0 T do_loop_readv_writev
ffffffff810ecf90 T rw_copy_check_uvector
ffffffff810ed0c0 t do_readv_writev
ffffffff810ed2c0 T vfs_writev
ffffffff810ed310 T vfs_readv
ffffffff810ed360 T sys_readv
ffffffff810ed420 T sys_writev
ffffffff810ed4e0 T sys_preadv
ffffffff810ed5a0 T sys_pwritev
ffffffff810ed660 T sys_sendfile
ffffffff810ed6d0 T sys_sendfile64
ffffffff810ed770 T files_lglock_local_lock
ffffffff810ed790 T files_lglock_local_unlock
ffffffff810ed7b0 T files_lglock_local_lock_cpu
ffffffff810ed7d0 T files_lglock_local_unlock_cpu
ffffffff810ed7f0 t file_free_rcu
ffffffff810ed830 T get_max_files
ffffffff810ed840 T files_lglock_global_lock_online
ffffffff810ed8a0 T files_lglock_global_unlock_online
ffffffff810ed910 T files_lglock_global_lock
ffffffff810ed970 T files_lglock_global_unlock
ffffffff810ed9d0 T fget
ffffffff810eda50 T fget_raw
ffffffff810edad0 t files_lglock_lg_cpu_callback
ffffffff810edb60 T files_lglock_lock_init
ffffffff810edc00 T proc_nr_files
ffffffff810edc20 T get_empty_filp
ffffffff810edd60 T alloc_file
ffffffff810ede30 T fget_light
ffffffff810edef0 T fget_raw_light
ffffffff810edf90 T file_sb_list_add
ffffffff810edff0 T file_sb_list_del
ffffffff810ee030 T put_filp
ffffffff810ee080 T fput
ffffffff810ee2e0 T mark_files_ro
ffffffff810ee3c0 t ns_test_super
ffffffff810ee3d0 t set_bdev_super
ffffffff810ee400 t test_bdev_super
ffffffff810ee410 t compare_single
ffffffff810ee420 T lock_super
ffffffff810ee430 T unlock_super
ffffffff810ee440 T free_anon_bdev
ffffffff810ee490 T get_anon_bdev
ffffffff810ee590 T set_anon_super
ffffffff810ee5b0 t ns_set_super
ffffffff810ee5c0 T generic_shutdown_super
ffffffff810ee6b0 T kill_block_super
ffffffff810ee730 T kill_anon_super
ffffffff810ee750 T kill_litter_super
ffffffff810ee770 t __put_super
ffffffff810ee7f0 t put_super
ffffffff810ee820 T drop_super
ffffffff810ee840 T deactivate_locked_super
ffffffff810ee8d0 T thaw_super
ffffffff810ee990 T get_super
ffffffff810eea60 T get_super_thawed
ffffffff810eeb50 T iterate_supers_type
ffffffff810eec30 t grab_super
ffffffff810eecd0 T sget
ffffffff810ef1a0 T mount_nodev
ffffffff810ef260 T mount_bdev
ffffffff810ef460 T mount_ns
ffffffff810ef550 T freeze_super
ffffffff810ef660 T deactivate_super
ffffffff810ef6c0 T grab_super_passive
ffffffff810ef750 t prune_super
ffffffff810ef8f0 T sync_supers
ffffffff810ef9f0 T iterate_supers
ffffffff810efae0 T get_active_super
ffffffff810efb70 T user_get_super
ffffffff810efc30 T do_remount_sb
ffffffff810efdb0 T mount_single
ffffffff810efe90 t do_emergency_remount
ffffffff810effa0 T emergency_remount
ffffffff810efff0 T mount_fs
ffffffff810f00c0 t exact_match
ffffffff810f00d0 t cdev_purge
ffffffff810f0130 t cdev_default_release
ffffffff810f0140 t cdev_get
ffffffff810f01a0 t exact_lock
ffffffff810f01c0 T cdev_init
ffffffff810f0280 t cdev_dynamic_release
ffffffff810f02a0 T cdev_alloc
ffffffff810f02f0 T cdev_del
ffffffff810f0310 T cdev_add
ffffffff810f0350 t __unregister_chrdev_region
ffffffff810f03f0 T __unregister_chrdev
ffffffff810f0420 T unregister_chrdev_region
ffffffff810f0470 t __register_chrdev_region
ffffffff810f05e0 T __register_chrdev
ffffffff810f06f0 T alloc_chrdev_region
ffffffff810f0720 T register_chrdev_region
ffffffff810f07d0 t base_probe
ffffffff810f0810 T chrdev_show
ffffffff810f0890 T cdev_put
ffffffff810f08b0 t chrdev_open
ffffffff810f0a50 T cd_forget
ffffffff810f0ab0 T generic_fillattr
ffffffff810f0b40 T vfs_getattr
ffffffff810f0b80 T inode_set_bytes
ffffffff810f0ba0 T inode_sub_bytes
ffffffff810f0c30 T inode_get_bytes
ffffffff810f0c90 T inode_add_bytes
ffffffff810f0d20 t cp_new_stat
ffffffff810f0e20 t cp_old_stat
ffffffff810f0f60 T vfs_fstatat
ffffffff810f0fc0 T vfs_lstat
ffffffff810f0fe0 T vfs_stat
ffffffff810f1000 T vfs_fstat
ffffffff810f1060 T sys_stat
ffffffff810f1090 T sys_lstat
ffffffff810f10c0 T sys_fstat
ffffffff810f10f0 T sys_newstat
ffffffff810f1120 T sys_newlstat
ffffffff810f1150 T sys_newfstatat
ffffffff810f1180 T sys_newfstat
ffffffff810f11b0 T sys_readlinkat
ffffffff810f1260 T sys_readlink
ffffffff810f1280 T __inode_add_bytes
ffffffff810f12c0 T dump_write
ffffffff810f1310 T dump_seek
ffffffff810f13f0 t zap_process
ffffffff810f14a0 t cn_printf
ffffffff810f15b0 t umh_pipe_setup
ffffffff810f16a0 T set_binfmt
ffffffff810f1710 T search_binary_handler
ffffffff810f1990 T install_exec_creds
ffffffff810f19d0 T would_dump
ffffffff810f1a00 T get_task_comm
ffffffff810f1a60 T kernel_read
ffffffff810f1ac0 T prepare_binprm
ffffffff810f1c70 T open_exec
ffffffff810f1d60 t shift_arg_pages
ffffffff810f1f00 T setup_arg_pages
ffffffff810f2120 T unregister_binfmt
ffffffff810f2170 t acct_arg_size.isra.22
ffffffff810f21a0 t get_arg_page
ffffffff810f2280 t copy_strings.isra.32
ffffffff810f24f0 T copy_strings_kernel
ffffffff810f2540 T remove_arg_zero
ffffffff810f2640 T flush_old_exec
ffffffff810f2d40 T __register_binfmt
ffffffff810f2dd0 t count.isra.31.constprop.37
ffffffff810f2e80 T sys_uselib
ffffffff810f2fe0 T bprm_mm_init
ffffffff810f31d0 T set_task_comm
ffffffff810f3250 T prepare_bprm_creds
ffffffff810f32d0 T free_bprm
ffffffff810f3310 t do_execve_common.isra.36
ffffffff810f3750 T do_execve
ffffffff810f3770 T compat_do_execve
ffffffff810f3790 T set_dumpable
ffffffff810f3800 T setup_new_exec
ffffffff810f3a40 T get_dumpable
ffffffff810f3a60 T do_coredump
ffffffff810f46b0 T generic_pipe_buf_confirm
ffffffff810f46c0 t bad_pipe_r
ffffffff810f46d0 t bad_pipe_w
ffffffff810f46e0 t pipe_poll
ffffffff810f4780 t pipefs_mount
ffffffff810f47a0 t pipefs_dname
ffffffff810f47c0 t iov_fault_in_pages_read
ffffffff810f4850 t pipe_rdwr_fasync
ffffffff810f4920 t pipe_rdwr_open
ffffffff810f49b0 t pipe_write_fasync
ffffffff810f4a30 t pipe_write_open
ffffffff810f4a90 t pipe_read_fasync
ffffffff810f4b10 t pipe_read_open
ffffffff810f4b70 t pipe_ioctl
ffffffff810f4c00 T pipe_unlock
ffffffff810f4c20 t anon_pipe_buf_release
ffffffff810f4c60 T generic_pipe_buf_release
ffffffff810f4c70 t pipe_iov_copy_from_user
ffffffff810f4d20 T generic_pipe_buf_get
ffffffff810f4d50 T generic_pipe_buf_steal
ffffffff810f4da0 T generic_pipe_buf_map
ffffffff810f4e20 T generic_pipe_buf_unmap
ffffffff810f4e40 t pipe_lock_nested.isra.7
ffffffff810f4e60 T pipe_lock
ffffffff810f4e70 T pipe_double_lock
ffffffff810f4ec0 T pipe_wait
ffffffff810f4f30 t pipe_write
ffffffff810f54a0 t pipe_read
ffffffff810f59d0 T alloc_pipe_info
ffffffff810f5a60 T __free_pipe_info
ffffffff810f5ad0 T free_pipe_info
ffffffff810f5af0 t pipe_release
ffffffff810f5bc0 t pipe_rdwr_release
ffffffff810f5be0 t pipe_write_release
ffffffff810f5bf0 t pipe_read_release
ffffffff810f5c00 T create_write_pipe
ffffffff810f5db0 T free_write_pipe
ffffffff810f5de0 T create_read_pipe
ffffffff810f5e50 T do_pipe_flags
ffffffff810f5f70 T sys_pipe2
ffffffff810f5fd0 T sys_pipe
ffffffff810f5fe0 T pipe_proc_fn
ffffffff810f6030 T get_pipe_info
ffffffff810f6060 T pipe_fcntl
ffffffff810f6260 t get_write_access
ffffffff810f6290 T full_name_hash
ffffffff810f62f0 T __page_symlink
ffffffff810f6410 T page_symlink
ffffffff810f6430 T page_put_link
ffffffff810f6450 T path_get
ffffffff810f64a0 t unlazy_walk
ffffffff810f66b0 t follow_dotdot_rcu
ffffffff810f6850 T follow_down
ffffffff810f6950 T follow_down_one
ffffffff810f69d0 T path_put
ffffffff810f69f0 t terminate_walk
ffffffff810f6a20 t complete_walk
ffffffff810f6b40 T unlock_rename
ffffffff810f6bb0 t follow_managed
ffffffff810f6e60 t __lookup_hash
ffffffff810f6f70 t lookup_hash
ffffffff810f6f80 T vfs_readlink
ffffffff810f7000 T generic_readlink
ffffffff810f70b0 T follow_up
ffffffff810f7160 t follow_dotdot
ffffffff810f72a0 T dentry_unhash
ffffffff810f7300 t getname_flags
ffffffff810f7560 T getname
ffffffff810f7570 T generic_permission
ffffffff810f77c0 T inode_permission
ffffffff810f78c0 T vfs_mkdir
ffffffff810f79c0 T vfs_link
ffffffff810f7b70 t path_init
ffffffff810f7f30 T lookup_one_len
ffffffff810f8030 T putname
ffffffff810f8070 T vfs_create
ffffffff810f8160 T vfs_symlink
ffffffff810f8240 t page_getlink.isra.30
ffffffff810f82d0 T page_follow_link_light
ffffffff810f8310 T page_readlink
ffffffff810f8370 t may_delete
ffffffff810f84e0 T vfs_rename
ffffffff810f8990 t path_put_conditional.isra.33
ffffffff810f89e0 t do_lookup
ffffffff810f8d20 t link_path_walk
ffffffff810f95a0 T vfs_follow_link
ffffffff810f96b0 t path_lookupat
ffffffff810f9dc0 t do_path_lookup
ffffffff810f9e80 T kern_path_create
ffffffff810f9fa0 T user_path_create
ffffffff810fa000 T kern_path
ffffffff810fa040 t user_path_parent
ffffffff810fa0c0 t do_last
ffffffff810fa960 T vfs_path_lookup
ffffffff810fa9c0 T vfs_unlink
ffffffff810faac0 t do_unlinkat
ffffffff810fac80 T vfs_rmdir
ffffffff810fada0 t do_rmdir
ffffffff810faeb0 T vfs_mknod
ffffffff810fb010 T lock_rename
ffffffff810fb0e0 T release_open_intent
ffffffff810fb120 t path_openat
ffffffff810fb510 T kern_path_parent
ffffffff810fb530 T user_path_at_empty
ffffffff810fb5e0 T user_path_at
ffffffff810fb5f0 T do_filp_open
ffffffff810fb6a0 T do_file_open_root
ffffffff810fb770 T sys_mknodat
ffffffff810fb990 T sys_mknod
ffffffff810fb9b0 T sys_mkdirat
ffffffff810fba80 T sys_mkdir
ffffffff810fba90 T sys_rmdir
ffffffff810fbaa0 T sys_unlinkat
ffffffff810fbad0 T sys_unlink
ffffffff810fbae0 T sys_symlinkat
ffffffff810fbba0 T sys_symlink
ffffffff810fbbb0 T sys_linkat
ffffffff810fbd20 T sys_link
ffffffff810fbd40 T sys_renameat
ffffffff810fbf80 T sys_rename
ffffffff810fbfa0 t fasync_free_rcu
ffffffff810fbfb0 t send_sigio_to_task
ffffffff810fc080 t f_modown
ffffffff810fc140 T __f_setown
ffffffff810fc150 T f_setown
ffffffff810fc1a0 T set_close_on_exec
ffffffff810fc220 T sys_dup3
ffffffff810fc390 T sys_dup2
ffffffff810fc3d0 T sys_dup
ffffffff810fc440 T f_delown
ffffffff810fc450 T f_getown
ffffffff810fc480 T sys_fcntl
ffffffff810fca00 T send_sigio
ffffffff810fcae0 T kill_fasync
ffffffff810fcb80 T send_sigurg
ffffffff810fcc70 T fasync_remove_entry
ffffffff810fcd30 T fasync_alloc
ffffffff810fcd50 T fasync_free
ffffffff810fcd60 T fasync_insert_entry
ffffffff810fce50 T fasync_helper
ffffffff810fcee0 T fiemap_check_flags
ffffffff810fcf00 T fiemap_fill_next_extent
ffffffff810fcff0 T __generic_block_fiemap
ffffffff810fd270 T generic_block_fiemap
ffffffff810fd2e0 T ioctl_preallocate
ffffffff810fd370 T do_vfs_ioctl
ffffffff810fd860 T sys_ioctl
ffffffff810fd8f0 t filldir
ffffffff810fd9c0 t filldir64
ffffffff810fda90 t fillonedir
ffffffff810fdb30 T vfs_readdir
ffffffff810fdc00 T sys_old_readdir
ffffffff810fdc60 T sys_getdents
ffffffff810fdd60 T sys_getdents64
ffffffff810fde50 T poll_initwait
ffffffff810fde90 t poll_select_copy_remaining
ffffffff810fdfd0 T poll_schedule_timeout
ffffffff810fe030 T poll_freewait
ffffffff810fe0e0 t __pollwait
ffffffff810fe1f0 t pollwake
ffffffff810fe260 T select_estimate_accuracy
ffffffff810fe360 T poll_select_set_timeout
ffffffff810fe3e0 T do_select
ffffffff810fea00 T core_sys_select
ffffffff810fed20 T sys_select
ffffffff810fee20 T sys_pselect6
ffffffff810ff020 T do_sys_poll
ffffffff810ff450 t do_restart_poll
ffffffff810ff4b0 T sys_poll
ffffffff810ff5a0 T sys_ppoll
ffffffff810ff720 t wake_up_partner.isra.0
ffffffff810ff740 t wait_for_partner.isra.1
ffffffff810ff7a0 t fifo_open
ffffffff810ffa40 t dentry_lru_del
ffffffff810ffad0 t dentry_lru_prune
ffffffff810ffb70 t __d_find_alias
ffffffff810ffc90 T d_find_alias
ffffffff810ffcf0 t __d_find_any_alias
ffffffff810ffd60 T d_find_any_alias
ffffffff810ffdb0 t dentry_lock_for_move
ffffffff810ffe70 t try_to_ascend
ffffffff810fff00 T d_genocide
ffffffff811000a0 T have_submounts
ffffffff81100270 t __d_shrink
ffffffff81100300 t prepend
ffffffff81100340 t switch_names
ffffffff81100400 t __dentry_path
ffffffff811004e0 T dentry_path_raw
ffffffff81100540 t prepend_path
ffffffff81100700 t path_with_deleted
ffffffff811007b0 T d_path
ffffffff811008c0 t __d_instantiate
ffffffff811009d0 t __d_instantiate_unique
ffffffff81100ad0 t __d_rehash
ffffffff81100b20 t _d_rehash
ffffffff81100b50 T d_rehash
ffffffff81100b90 T dentry_update_name_case
ffffffff81100c10 T dget_parent
ffffffff81100c80 T d_instantiate_unique
ffffffff81100d10 T d_set_d_op
ffffffff81100dc0 t __d_free
ffffffff81100e30 t dentry_unlock_parents_for_move.isra.8
ffffffff81100e60 T d_validate
ffffffff81100f00 T __d_drop
ffffffff81100f30 T d_clear_need_lookup
ffffffff81100f80 T d_drop
ffffffff81100fc0 T d_delete
ffffffff81101160 t __d_move
ffffffff811013e0 T d_move
ffffffff81101430 T d_instantiate
ffffffff811014b0 T d_splice_alias
ffffffff81101590 t d_free
ffffffff811015e0 t d_kill
ffffffff81101720 t shrink_dentry_list
ffffffff811018f0 T shrink_dcache_sb
ffffffff811019a0 T shrink_dcache_parent
ffffffff81101c40 T d_invalidate
ffffffff81101d00 T dput
ffffffff81101eb0 T d_prune_aliases
ffffffff81101f70 T d_materialise_unique
ffffffff81102420 t shrink_dcache_for_umount_subtree
ffffffff81102600 T proc_nr_dentry
ffffffff811026a0 T prune_dcache_sb
ffffffff81102820 T shrink_dcache_for_umount
ffffffff81102880 T __d_alloc
ffffffff81102a00 T d_obtain_alias
ffffffff81102bb0 T d_make_root
ffffffff81102c10 T d_alloc_pseudo
ffffffff81102c30 T d_alloc
ffffffff81102cc0 T d_alloc_name
ffffffff81102d10 T __d_lookup_rcu
ffffffff81102e90 T __d_lookup
ffffffff81103000 T d_lookup
ffffffff81103060 T d_hash_and_lookup
ffffffff811030d0 T find_inode_number
ffffffff81103100 T d_add_ci
ffffffff81103230 T d_ancestor
ffffffff81103250 T __d_path
ffffffff811032e0 T d_absolute_path
ffffffff81103390 T d_path_with_unreachable
ffffffff811034b0 T dynamic_dname
ffffffff81103550 T dentry_path
ffffffff81103630 T sys_getcwd
ffffffff81103820 T is_subdir
ffffffff81103870 T generic_delete_inode
ffffffff81103880 T bmap
ffffffff811038a0 T inode_init_owner
ffffffff81103900 T inode_wait
ffffffff81103910 t inode_lru_list_del
ffffffff81103980 T inode_sb_list_add
ffffffff811039e0 T __insert_inode_hash
ffffffff81103aa0 T __remove_inode_hash
ffffffff81103b30 T iunique
ffffffff81103c40 T file_update_time
ffffffff81103da0 T touch_atime
ffffffff81103f10 T get_next_ino
ffffffff81103f40 T inc_nlink
ffffffff81103f80 T unlock_new_inode
ffffffff81104000 t i_callback
ffffffff81104020 T free_inode_nonrcu
ffffffff81104030 t __wait_on_freeing_inode
ffffffff81104110 t find_inode_fast
ffffffff811041b0 T ilookup
ffffffff81104280 t find_inode
ffffffff81104320 T ilookup5_nowait
ffffffff811043c0 T ilookup5
ffffffff81104400 T end_writeback
ffffffff811044a0 T address_space_init_once
ffffffff811045b0 T inode_init_once
ffffffff811046c0 t init_once
ffffffff811046d0 T inode_init_always
ffffffff81104870 t alloc_inode
ffffffff81104900 T __destroy_inode
ffffffff811049d0 t get_nr_inodes
ffffffff81104a30 T clear_nlink
ffffffff81104a50 T set_nlink
ffffffff81104a80 T inode_needs_sync
ffffffff81104ad0 T inode_owner_or_capable
ffffffff81104b10 T init_special_inode
ffffffff81104ba0 T ihold
ffffffff81104bd0 T drop_nlink
ffffffff81104c10 t destroy_inode
ffffffff81104c60 t evict
ffffffff81104e00 t dispose_list
ffffffff81104e40 T iget_locked
ffffffff81104ff0 T iget5_locked
ffffffff811051e0 T iput
ffffffff81105400 T insert_inode_locked4
ffffffff81105590 T insert_inode_locked
ffffffff81105730 T igrab
ffffffff81105790 T get_nr_dirty_inodes
ffffffff81105810 T proc_nr_inodes
ffffffff811058c0 T __iget
ffffffff811058d0 T evict_inodes
ffffffff811059e0 T invalidate_inodes
ffffffff81105b20 T prune_icache_sb
ffffffff81105e40 T new_inode_pseudo
ffffffff81105eb0 T new_inode
ffffffff81105ee0 T notify_change
ffffffff81106220 T setattr_copy
ffffffff81106330 T inode_newsize_ok
ffffffff811063a0 T inode_change_ok
ffffffff81106520 t bad_file_llseek
ffffffff81106530 t bad_file_read
ffffffff81106540 t bad_file_write
ffffffff81106550 t bad_file_aio_read
ffffffff81106560 t bad_file_aio_write
ffffffff81106570 t bad_file_readdir
ffffffff81106580 t bad_file_poll
ffffffff81106590 t bad_file_unlocked_ioctl
ffffffff811065a0 t bad_file_compat_ioctl
ffffffff811065b0 t bad_file_mmap
ffffffff811065c0 t bad_file_open
ffffffff811065d0 t bad_file_flush
ffffffff811065e0 t bad_file_release
ffffffff811065f0 t bad_file_fsync
ffffffff81106600 t bad_file_aio_fsync
ffffffff81106610 t bad_file_fasync
ffffffff81106620 t bad_file_lock
ffffffff81106630 t bad_file_sendpage
ffffffff81106640 t bad_file_get_unmapped_area
ffffffff81106650 t bad_file_check_flags
ffffffff81106660 t bad_file_flock
ffffffff81106670 t bad_file_splice_write
ffffffff81106680 t bad_file_splice_read
ffffffff81106690 t bad_inode_create
ffffffff811066a0 t bad_inode_lookup
ffffffff811066b0 t bad_inode_link
ffffffff811066c0 t bad_inode_unlink
ffffffff811066d0 t bad_inode_symlink
ffffffff811066e0 t bad_inode_mkdir
ffffffff811066f0 t bad_inode_rmdir
ffffffff81106700 t bad_inode_mknod
ffffffff81106710 t bad_inode_rename
ffffffff81106720 t bad_inode_readlink
ffffffff81106730 t bad_inode_permission
ffffffff81106740 t bad_inode_getattr
ffffffff81106750 t bad_inode_setattr
ffffffff81106760 t bad_inode_setxattr
ffffffff81106770 t bad_inode_getxattr
ffffffff81106780 t bad_inode_listxattr
ffffffff81106790 t bad_inode_removexattr
ffffffff811067a0 T is_bad_inode
ffffffff811067b0 T make_bad_inode
ffffffff81106800 T iget_failed
ffffffff81106820 t free_fdmem
ffffffff81106850 t __free_fdtable
ffffffff81106870 t free_fdtable_work
ffffffff811068c0 t alloc_fdmem
ffffffff811068f0 t alloc_fdtable
ffffffff811069c0 T free_fdtable_rcu
ffffffff81106ac0 T expand_files
ffffffff81106c80 T dup_fd
ffffffff81106f50 T alloc_fd
ffffffff81107060 T get_unused_fd
ffffffff81107070 t filesystems_proc_open
ffffffff81107090 t filesystems_proc_show
ffffffff81107110 t find_filesystem
ffffffff81107180 t __get_fs_type
ffffffff811071d0 T get_fs_type
ffffffff811072a0 T unregister_filesystem
ffffffff81107320 T register_filesystem
ffffffff811073b0 T get_filesystem
ffffffff811073c0 T put_filesystem
ffffffff811073d0 T sys_sysfs
ffffffff81107580 T vfsmount_lock_local_lock
ffffffff811075a0 T vfsmount_lock_local_unlock
ffffffff811075c0 T vfsmount_lock_local_lock_cpu
ffffffff811075e0 T vfsmount_lock_local_unlock_cpu
ffffffff81107600 T mnt_drop_write
ffffffff81107610 T mnt_drop_write_file
ffffffff81107620 T mntget
ffffffff81107640 t m_show
ffffffff81107650 t m_next
ffffffff81107670 t m_stop
ffffffff81107680 t m_start
ffffffff811076c0 t alloc_mnt_ns
ffffffff81107740 t vfsmount_lock_lg_cpu_callback
ffffffff811077d0 t touch_mnt_namespace
ffffffff81107810 t commit_tree
ffffffff81107910 T vfsmount_lock_global_lock_online
ffffffff81107970 T vfsmount_lock_global_unlock_online
ffffffff811079e0 T mnt_pin
ffffffff81107a00 T mnt_unpin
ffffffff81107a30 T mnt_set_expiry
ffffffff81107a90 T vfsmount_lock_global_lock
ffffffff81107af0 T vfsmount_lock_global_unlock
ffffffff81107b50 t mnt_alloc_group_id
ffffffff81107ba0 T may_umount
ffffffff81107be0 T replace_mount_options
ffffffff81107c10 T generic_show_options
ffffffff81107c70 T vfsmount_lock_lock_init
ffffffff81107d10 T __mnt_is_readonly
ffffffff81107d30 T mnt_want_write
ffffffff81107d80 T mnt_clone_write
ffffffff81107da0 t mnt_get_writers.isra.12
ffffffff81107df0 T mnt_want_write_file
ffffffff81107e40 t dentry_reset_mounted
ffffffff81107eb0 t detach_mnt
ffffffff81107f10 t unlock_mount.isra.18
ffffffff81107f40 T save_mount_options
ffffffff81107f70 t mnt_free_id.isra.20
ffffffff81107fb0 t alloc_vfsmnt
ffffffff81108160 t free_vfsmnt
ffffffff811081a0 t clone_mnt
ffffffff81108410 T vfs_kern_mount
ffffffff811084f0 T kern_mount_data
ffffffff81108520 T mnt_release_group_id
ffffffff81108570 t cleanup_group_ids
ffffffff81108600 t invent_group_ids
ffffffff811086b0 T mnt_get_count
ffffffff81108700 t mntput_no_expire
ffffffff81108840 T mntput
ffffffff81108870 t do_kern_mount
ffffffff81108990 t create_mnt_ns
ffffffff811089f0 T may_umount_tree
ffffffff81108aa0 T sb_prepare_remount_readonly
ffffffff81108b70 T __lookup_mnt
ffffffff81108be0 T lookup_mnt
ffffffff81108c30 t lock_mount
ffffffff81108d00 T mnt_set_mountpoint
ffffffff81108d90 t attach_mnt
ffffffff81108e20 t attach_recursive_mnt
ffffffff81108ff0 t graft_tree
ffffffff81109060 t do_add_mount
ffffffff81109130 T release_mounts
ffffffff811091c0 T umount_tree
ffffffff811093b0 T mark_mounts_for_expiry
ffffffff81109500 T sys_umount
ffffffff81109890 T sys_oldumount
ffffffff811098a0 T copy_tree
ffffffff81109b00 T collect_mounts
ffffffff81109b50 T drop_collected_mounts
ffffffff81109bb0 T iterate_mounts
ffffffff81109c10 T finish_automount
ffffffff81109cf0 T copy_mount_options
ffffffff81109e50 T copy_mount_string
ffffffff81109e90 T do_mount
ffffffff8110a6f0 T mnt_make_longterm
ffffffff8110a700 T mnt_make_shortterm
ffffffff8110a760 T kern_unmount
ffffffff8110a790 T sys_mount
ffffffff8110a880 T is_path_reachable
ffffffff8110a8e0 T path_is_under
ffffffff8110a930 T sys_pivot_root
ffffffff8110abc0 T put_mnt_ns
ffffffff8110ac30 T mount_subtree
ffffffff8110acc0 T copy_mnt_ns
ffffffff8110afa0 T our_mnt
ffffffff8110afc0 t single_start
ffffffff8110afd0 t single_next
ffffffff8110afe0 t single_stop
ffffffff8110aff0 T seq_putc
ffffffff8110b020 T seq_list_start
ffffffff8110b050 T seq_list_next
ffffffff8110b070 T seq_hlist_start
ffffffff8110b090 T seq_hlist_next
ffffffff8110b0b0 T seq_hlist_start_rcu
ffffffff8110b0e0 T seq_hlist_next_rcu
ffffffff8110b100 T seq_write
ffffffff8110b160 T seq_puts
ffffffff8110b1d0 T seq_release
ffffffff8110b1f0 T seq_release_private
ffffffff8110b240 T single_release
ffffffff8110b270 T seq_bitmap_list
ffffffff8110b2c0 T seq_bitmap
ffffffff8110b310 t traverse
ffffffff8110b500 T mangle_path
ffffffff8110b5b0 T seq_path
ffffffff8110b660 T seq_escape
ffffffff8110b770 T seq_printf
ffffffff8110b800 T seq_lseek
ffffffff8110b910 T seq_read
ffffffff8110bcc0 T seq_open
ffffffff8110be10 T __seq_open_private
ffffffff8110be80 T seq_open_private
ffffffff8110bea0 T single_open
ffffffff8110bf50 T seq_list_start_head
ffffffff8110bfa0 T seq_hlist_start_head
ffffffff8110bfe0 T seq_hlist_start_head_rcu
ffffffff8110c020 T seq_put_decimal_ull
ffffffff8110c0a0 T seq_put_decimal_ll
ffffffff8110c100 T seq_path_root
ffffffff8110c1d0 T seq_dentry
ffffffff8110c280 T xattr_getsecurity
ffffffff8110c290 T vfs_listxattr
ffffffff8110c2b0 t xattr_resolve_name
ffffffff8110c320 T generic_getxattr
ffffffff8110c3a0 T generic_listxattr
ffffffff8110c470 T generic_setxattr
ffffffff8110c500 T generic_removexattr
ffffffff8110c550 t listxattr
ffffffff8110c690 t xattr_permission
ffffffff8110c790 T vfs_removexattr
ffffffff8110c8b0 t removexattr
ffffffff8110c900 T vfs_getxattr
ffffffff8110c990 t getxattr
ffffffff8110caf0 T __vfs_setxattr_noperm
ffffffff8110cbd0 T vfs_setxattr
ffffffff8110cca0 t setxattr
ffffffff8110ce50 T vfs_getxattr_alloc
ffffffff8110cf70 T vfs_xattr_cmp
ffffffff8110d000 T sys_setxattr
ffffffff8110d0b0 T sys_lsetxattr
ffffffff8110d160 T sys_fsetxattr
ffffffff8110d250 T sys_getxattr
ffffffff8110d2d0 T sys_lgetxattr
ffffffff8110d350 T sys_fgetxattr
ffffffff8110d400 T sys_listxattr
ffffffff8110d470 T sys_llistxattr
ffffffff8110d4e0 T sys_flistxattr
ffffffff8110d580 T sys_removexattr
ffffffff8110d610 T sys_lremovexattr
ffffffff8110d6a0 T sys_fremovexattr
ffffffff8110d760 T simple_statfs
ffffffff8110d780 t simple_delete_dentry
ffffffff8110d790 T generic_read_dir
ffffffff8110d7a0 T simple_open
ffffffff8110d7c0 T noop_fsync
ffffffff8110d7d0 T generic_check_addressable
ffffffff8110d810 T generic_file_fsync
ffffffff8110d8b0 T generic_fh_to_parent
ffffffff8110d900 T generic_fh_to_dentry
ffffffff8110d940 T simple_write_to_buffer
ffffffff8110d9d0 T simple_attr_write
ffffffff8110dac0 T simple_attr_release
ffffffff8110dae0 T simple_attr_open
ffffffff8110dbc0 T simple_transaction_release
ffffffff8110dbe0 T simple_empty
ffffffff8110dc80 T dcache_readdir
ffffffff8110dec0 T dcache_dir_lseek
ffffffff8110e060 T memory_read_from_buffer
ffffffff8110e0d0 T simple_read_from_buffer
ffffffff8110e160 T simple_transaction_read
ffffffff8110e190 T simple_attr_read
ffffffff8110e280 T simple_release_fs
ffffffff8110e2f0 T simple_pin_fs
ffffffff8110e3b0 T dcache_dir_close
ffffffff8110e3d0 T simple_fill_super
ffffffff8110e590 T simple_write_end
ffffffff8110e6a0 T simple_write_begin
ffffffff8110e7a0 T simple_readpage
ffffffff8110e810 T simple_setattr
ffffffff8110e8c0 T simple_unlink
ffffffff8110e930 T simple_rmdir
ffffffff8110e990 T simple_rename
ffffffff8110ea90 T simple_link
ffffffff8110eb20 T mount_pseudo
ffffffff8110ecb0 T dcache_dir_open
ffffffff8110ece0 T simple_getattr
ffffffff8110ed30 T simple_transaction_get
ffffffff8110ee10 T simple_transaction_set
ffffffff8110ee30 T simple_lookup
ffffffff8110ee70 t redirty_tail
ffffffff8110eef0 t inode_wait_for_writeback
ffffffff8110efd0 t bdi_queue_work
ffffffff8110f060 T writeback_inodes_sb_nr
ffffffff8110f110 T sync_inodes_sb
ffffffff8110f2b0 t get_nr_dirty_pages
ffffffff8110f2e0 T writeback_inodes_sb
ffffffff8110f310 T __mark_inode_dirty
ffffffff8110f560 t over_bground_thresh
ffffffff8110f5e0 t queue_io
ffffffff8110f790 t requeue_io
ffffffff8110f810 t writeback_single_inode
ffffffff8110fae0 T sync_inode
ffffffff8110fba0 T sync_inode_metadata
ffffffff8110fc00 T write_inode_now
ffffffff8110fd50 t writeback_sb_inodes
ffffffff8110ffd0 t __writeback_inodes_wb
ffffffff81110080 t wb_writeback
ffffffff81110260 t wb_check_old_data_flush
ffffffff81110310 t __bdi_start_writeback
ffffffff811103e0 T writeback_inodes_sb_nr_if_idle
ffffffff81110450 T writeback_inodes_sb_if_idle
ffffffff811104b0 T writeback_in_progress
ffffffff811104c0 T bdi_start_writeback
ffffffff811104d0 T bdi_start_background_writeback
ffffffff81110530 T inode_wb_list_del
ffffffff811105c0 T writeback_inodes_wb
ffffffff81110660 T wb_do_writeback
ffffffff81110790 T bdi_writeback_thread
ffffffff811108d0 T wakeup_flusher_threads
ffffffff81110970 t propagation_next
ffffffff81110a00 T get_dominating_id
ffffffff81110a90 T change_mnt_propagation
ffffffff81110d00 T propagate_mnt
ffffffff81110f10 T propagate_mount_busy
ffffffff81111020 T propagate_umount
ffffffff811110e0 t drop_pagecache_sb
ffffffff811111d0 T drop_caches_sysctl_handler
ffffffff81111250 T splice_from_pipe_begin
ffffffff81111260 t pipe_to_sendpage
ffffffff811112d0 T splice_from_pipe_feed
ffffffff81111400 t page_cache_pipe_buf_confirm
ffffffff81111470 t page_cache_pipe_buf_steal
ffffffff81111560 t page_cache_pipe_buf_release
ffffffff81111580 T spd_release_page
ffffffff81111590 t wakeup_pipe_readers
ffffffff811115d0 t wakeup_pipe_writers
ffffffff81111610 T splice_from_pipe_end
ffffffff81111620 T splice_from_pipe_next
ffffffff811116d0 T __splice_from_pipe
ffffffff81111740 t do_splice_from
ffffffff81111810 t direct_splice_actor
ffffffff81111830 t do_splice_to
ffffffff811118f0 t write_pipe_buf
ffffffff811119b0 t pipe_to_user
ffffffff81111ae0 T splice_direct_to_actor
ffffffff81111c90 T generic_file_splice_write
ffffffff81111e10 T pipe_to_file
ffffffff81111f90 t ipipe_prep.part.5
ffffffff81112040 t opipe_prep.part.6
ffffffff81112100 t user_page_pipe_buf_steal
ffffffff81112120 T splice_to_pipe
ffffffff81112350 T splice_grow_spd
ffffffff811123e0 T splice_shrink_spd
ffffffff81112410 t vmsplice_to_pipe
ffffffff811126f0 T default_file_splice_read
ffffffff81112ae0 t __generic_file_splice_read
ffffffff81113030 T generic_file_splice_read
ffffffff811130a0 T splice_from_pipe
ffffffff81113140 t default_file_splice_write
ffffffff81113160 T generic_splice_sendpage
ffffffff81113170 T do_splice_direct
ffffffff81113200 T sys_vmsplice
ffffffff81113450 T sys_splice
ffffffff81113a10 T sys_tee
ffffffff81113d00 T vfs_fsync_range
ffffffff81113d20 T vfs_fsync
ffffffff81113d50 T generic_write_sync
ffffffff81113dc0 t do_fsync
ffffffff81113e40 t do_sync_work
ffffffff81113ea0 t __sync_filesystem
ffffffff81113f30 t sync_one_sb
ffffffff81113f40 T sync_filesystem
ffffffff81113f90 T sys_sync
ffffffff81113ff0 T emergency_sync
ffffffff81114040 T sys_syncfs
ffffffff811140d0 T sys_fsync
ffffffff811140f0 T sys_fdatasync
ffffffff81114110 T sys_sync_file_range
ffffffff81114290 T sys_sync_file_range2
ffffffff811142a0 t utimes_common
ffffffff811143f0 T do_utimes
ffffffff811144e0 T sys_utime
ffffffff81114550 T sys_utimensat
ffffffff811145d0 T sys_futimesat
ffffffff81114680 T sys_utimes
ffffffff81114690 T fsstack_copy_inode_size
ffffffff811146b0 T fsstack_copy_attr_all
ffffffff81114720 T current_umask
ffffffff81114740 T set_fs_root
ffffffff811147e0 T set_fs_pwd
ffffffff81114880 T chroot_fs_refs
ffffffff81114a30 T free_fs_struct
ffffffff81114a70 T exit_fs
ffffffff81114b20 T copy_fs_struct
ffffffff81114bf0 T unshare_fs_struct
ffffffff81114cb0 T daemonize_fs_struct
ffffffff81114d80 t statfs_by_dentry
ffffffff81114e80 t do_statfs64
ffffffff81114ed0 t do_statfs_native
ffffffff81114f20 T vfs_statfs
ffffffff81114fb0 T user_statfs
ffffffff81115000 T fd_statfs
ffffffff81115060 T sys_statfs
ffffffff81115090 T sys_statfs64
ffffffff811150d0 T sys_fstatfs
ffffffff81115100 T sys_fstatfs64
ffffffff81115140 T vfs_ustat
ffffffff811151a0 T sys_ustat
ffffffff81115250 T init_buffer
ffffffff81115260 T mark_buffer_async_write
ffffffff81115270 t has_bh_in_lru
ffffffff811152b0 T generic_block_bmap
ffffffff81115300 T block_is_partially_uptodate
ffffffff811153a0 t __remove_assoc_queue
ffffffff81115400 T invalidate_inode_buffers
ffffffff81115470 t drop_buffers
ffffffff81115520 T submit_bh
ffffffff81115630 t end_bio_bh_io_sync
ffffffff81115670 t attach_nobh_buffers
ffffffff81115700 t quiet_error
ffffffff81115740 t __find_get_block_slow
ffffffff811158c0 T invalidate_bh_lrus
ffffffff811158e0 t free_more_memory
ffffffff811159b0 t __set_page_dirty
ffffffff81115a90 T mark_buffer_dirty
ffffffff81115b20 T mark_buffer_dirty_inode
ffffffff81115be0 T __set_page_dirty_buffers
ffffffff81115cd0 t do_thaw_all
ffffffff81115d00 t do_thaw_one
ffffffff81115d60 T __wait_on_buffer
ffffffff81115d90 t sleep_on_buffer
ffffffff81115da0 T unlock_buffer
ffffffff81115db0 T ll_rw_block
ffffffff81115e50 t __end_buffer_read_notouch
ffffffff81115e70 t end_buffer_read_nobh
ffffffff81115e80 T end_buffer_read_sync
ffffffff81115e90 T __lock_buffer
ffffffff81115ec0 t recalc_bh_state
ffffffff81115f40 T alloc_buffer_head
ffffffff81115f90 T set_bh_page
ffffffff81115fe0 T free_buffer_head
ffffffff81116010 T try_to_free_buffers
ffffffff811160c0 T alloc_page_buffers
ffffffff81116190 T create_empty_buffers
ffffffff81116250 T block_truncate_page
ffffffff811164d0 T nobh_truncate_page
ffffffff811167b0 T generic_cont_expand_simple
ffffffff81116830 t buffer_io_error.isra.14
ffffffff81116860 t end_buffer_async_read
ffffffff811169a0 T block_read_full_page
ffffffff81116cc0 T end_buffer_async_write
ffffffff81116e00 T end_buffer_write_sync
ffffffff81116e70 t init_page_buffers.isra.15
ffffffff81116f00 T __brelse
ffffffff81116f30 t invalidate_bh_lru
ffffffff81116f80 t buffer_cpu_notify
ffffffff81117000 T unmap_underlying_metadata
ffffffff81117050 t __block_write_full_page
ffffffff811173d0 T block_write_full_page_endio
ffffffff81117500 T block_write_full_page
ffffffff81117510 T nobh_writepage
ffffffff81117640 T __find_get_block
ffffffff81117830 T __getblk
ffffffff81117ac0 T __bforget
ffffffff81117b40 T __breadahead
ffffffff81117b80 T __bread
ffffffff81117c10 t __block_commit_write.isra.17
ffffffff81117ce0 T block_commit_write
ffffffff81117d10 T page_zero_new_buffers
ffffffff81117e60 T block_write_end
ffffffff81117ee0 T generic_write_end
ffffffff81117f80 T nobh_write_end
ffffffff81118100 T __block_write_begin
ffffffff81118580 T __block_page_mkwrite
ffffffff811186c0 T block_page_mkwrite
ffffffff811187c0 T block_write_begin
ffffffff81118860 T cont_write_begin
ffffffff81118b60 T nobh_write_begin
ffffffff81118f50 T bh_submit_read
ffffffff81118fc0 T bh_uptodate_or_lock
ffffffff81119000 T __sync_dirty_buffer
ffffffff811190c0 T sync_dirty_buffer
ffffffff811190d0 T write_dirty_buffer
ffffffff81119140 T sync_mapping_buffers
ffffffff811193c0 T block_invalidatepage
ffffffff811194e0 T inode_has_buffers
ffffffff81119500 T emergency_thaw_all
ffffffff81119550 T write_boundary_block
ffffffff811195a0 T remove_inode_buffers
ffffffff81119640 T sys_bdflush
ffffffff811196b0 T bio_phys_segments
ffffffff811196d0 T bio_get_nr_vecs
ffffffff81119710 T bio_endio
ffffffff81119750 T bio_sector_offset
ffffffff81119800 t bio_free_map_data
ffffffff81119820 t bio_kmalloc_destructor
ffffffff81119830 T bio_split
ffffffff81119950 T __bio_clone
ffffffff811199b0 T zero_fill_bio
ffffffff81119a50 T bio_init
ffffffff81119af0 T bio_kmalloc
ffffffff81119b60 T bioset_free
ffffffff81119c40 T bioset_create
ffffffff81119ea0 T bio_put
ffffffff81119ed0 t bio_map_kern_endio
ffffffff81119ee0 T bio_unmap_user
ffffffff81119f40 t bio_copy_kern_endio
ffffffff8111a010 T bio_pair_release
ffffffff8111a050 t bio_pair_end_2
ffffffff8111a070 t bio_pair_end_1
ffffffff8111a080 t __bio_copy_iov.isra.21
ffffffff8111a230 T bio_uncopy_user
ffffffff8111a2b0 t __bio_add_page.part.22
ffffffff8111a510 T bio_add_page
ffffffff8111a560 T bio_add_pc_page
ffffffff8111a590 T bio_map_kern
ffffffff8111a6d0 T bvec_nr_vecs
ffffffff8111a6f0 T bvec_free_bs
ffffffff8111a730 T bio_free
ffffffff8111a780 t bio_fs_destructor
ffffffff8111a790 T bvec_alloc_bs
ffffffff8111a880 T bio_alloc_bioset
ffffffff8111a980 T bio_clone
ffffffff8111a9d0 T bio_alloc
ffffffff8111aa00 T bio_copy_user_iov
ffffffff8111ae30 T bio_copy_user
ffffffff8111ae60 T bio_copy_kern
ffffffff8111af30 T bio_map_user_iov
ffffffff8111b250 T bio_map_user
ffffffff8111b280 T bio_set_pages_dirty
ffffffff8111b2d0 t bio_dirty_fn
ffffffff8111b370 T bio_check_pages_dirty
ffffffff8111b430 T I_BDEV
ffffffff8111b440 t bdev_test
ffffffff8111b450 t bdev_set
ffffffff8111b460 T bd_set_size
ffffffff8111b500 t block_ioctl
ffffffff8111b550 T ioctl_by_bdev
ffffffff8111b5a0 t block_llseek
ffffffff8111b660 t bdev_inode_switch_bdi
ffffffff8111b750 T bd_unlink_disk_holder
ffffffff8111b850 t bdev_alloc_inode
ffffffff8111b880 T bd_link_disk_holder
ffffffff8111ba50 T bdput
ffffffff8111ba60 T bdget
ffffffff8111bb90 t blkdev_direct_IO
ffffffff8111bbf0 t blkdev_releasepage
ffffffff8111bc30 t blkdev_write_end
ffffffff8111bc70 t blkdev_write_begin
ffffffff8111bc90 t blkdev_readpage
ffffffff8111bca0 t blkdev_writepage
ffffffff8111bcb0 t bd_mount
ffffffff8111bcd0 t bdev_evict_inode
ffffffff8111bd80 t bdev_destroy_inode
ffffffff8111bda0 t bdev_i_callback
ffffffff8111bdc0 t init_once
ffffffff8111bed0 T blkdev_fsync
ffffffff8111bf20 T thaw_bdev
ffffffff8111bfb0 T kill_bdev
ffffffff8111bfe0 T invalidate_bdev
ffffffff8111c040 T __invalidate_device
ffffffff8111c0c0 t flush_disk
ffffffff8111c150 T check_disk_change
ffffffff8111c1d0 T check_disk_size_change
ffffffff8111c250 T revalidate_disk
ffffffff8111c2e0 t bd_may_claim
ffffffff8111c320 T blkdev_aio_write
ffffffff8111c3b0 t bd_acquire
ffffffff8111c4a0 T lookup_bdev
ffffffff8111c540 t blkdev_get_blocks
ffffffff8111c600 t blkdev_get_block
ffffffff8111c670 T blkdev_max_block
ffffffff8111c6b0 T __sync_blockdev
ffffffff8111c6e0 T sync_blockdev
ffffffff8111c6f0 t __blkdev_put
ffffffff8111c8b0 T blkdev_put
ffffffff8111ca10 t blkdev_close
ffffffff8111ca30 t __blkdev_get
ffffffff8111ceb0 T blkdev_get
ffffffff8111d1a0 t blkdev_open
ffffffff8111d210 T blkdev_get_by_dev
ffffffff8111d270 T blkdev_get_by_path
ffffffff8111d2e0 T freeze_bdev
ffffffff8111d3b0 T fsync_bdev
ffffffff8111d410 T set_blocksize
ffffffff8111d4b0 T sb_set_blocksize
ffffffff8111d510 T sb_min_blocksize
ffffffff8111d550 T bdgrab
ffffffff8111d570 T nr_blockdev_pages
ffffffff8111d5d0 T bd_forget
ffffffff8111d670 t dio_bio_complete
ffffffff8111d730 t dio_bio_end_io
ffffffff8111d7c0 T inode_dio_wait
ffffffff8111d890 T inode_dio_done
ffffffff8111d8c0 t dio_complete
ffffffff8111d980 T __blockdev_direct_IO
ffffffff81120f20 t dio_bio_end_aio
ffffffff81121000 T dio_end_io
ffffffff81121020 t mpage_alloc
ffffffff811210b0 t __mpage_writepage
ffffffff81121650 T mpage_writepage
ffffffff811216b0 t mpage_end_io
ffffffff81121750 T mpage_writepages
ffffffff811217f0 t do_mpage_readpage
ffffffff81121da0 T mpage_readpage
ffffffff81121e20 T mpage_readpages
ffffffff81121f50 T set_task_ioprio
ffffffff81122000 T sys_ioprio_set
ffffffff81122270 T ioprio_best
ffffffff811222b0 T sys_ioprio_get
ffffffff81122560 t mounts_open_common
ffffffff81122750 t mountstats_open
ffffffff81122760 t mountinfo_open
ffffffff81122770 t mounts_open
ffffffff81122780 t mounts_release
ffffffff811227d0 t mounts_poll
ffffffff81122840 t show_mnt_opts.isra.2
ffffffff81122890 t show_sb_opts.isra.3
ffffffff811228e0 t show_type.isra.4
ffffffff81122950 t show_vfsstat
ffffffff81122a90 t show_mountinfo
ffffffff81122d50 t show_vfsmnt
ffffffff81122e90 T __fsnotify_inode_delete
ffffffff81122ea0 t send_to_group.isra.1
ffffffff81123080 T fsnotify
ffffffff81123350 T __fsnotify_vfsmount_delete
ffffffff81123360 T __fsnotify_update_child_dentry_flags
ffffffff811234a0 T __fsnotify_parent
ffffffff81123590 T fsnotify_get_cookie
ffffffff811235a0 T fsnotify_notify_queue_is_empty
ffffffff811235c0 T fsnotify_get_event
ffffffff811235d0 T fsnotify_put_event
ffffffff81123630 T fsnotify_alloc_event_holder
ffffffff81123650 T fsnotify_destroy_event_holder
ffffffff81123670 T fsnotify_remove_priv_from_event
ffffffff811236f0 T fsnotify_add_notify_event
ffffffff81123930 T fsnotify_remove_notify_event
ffffffff811239d0 T fsnotify_peek_notify_event
ffffffff811239f0 T fsnotify_flush_notify
ffffffff81123aa0 T fsnotify_replace_event
ffffffff81123b70 T fsnotify_clone_event
ffffffff81123cf0 T fsnotify_create_event
ffffffff81123e60 T fsnotify_final_destroy_group
ffffffff81123e90 T fsnotify_put_group
ffffffff81123ed0 T fsnotify_alloc_group
ffffffff81123f80 t fsnotify_recalc_inode_mask_locked
ffffffff81123fc0 T fsnotify_recalc_inode_mask
ffffffff81124010 T fsnotify_destroy_inode_mark
ffffffff811240b0 T fsnotify_clear_marks_by_inode
ffffffff81124190 T fsnotify_clear_inode_marks_by_group
ffffffff811241a0 T fsnotify_find_inode_mark_locked
ffffffff81124220 T fsnotify_find_inode_mark
ffffffff81124270 T fsnotify_set_inode_mark_mask_locked
ffffffff811242d0 T fsnotify_add_inode_mark
ffffffff81124460 T fsnotify_unmount_inodes
ffffffff811245f0 T fsnotify_get_mark
ffffffff81124600 T fsnotify_put_mark
ffffffff81124620 t fsnotify_mark_destroy
ffffffff81124780 T fsnotify_destroy_mark
ffffffff81124910 T fsnotify_set_mark_mask_locked
ffffffff81124970 T fsnotify_set_mark_ignored_mask_locked
ffffffff811249b0 T fsnotify_add_mark
ffffffff81124bb0 T fsnotify_clear_marks_by_group_flags
ffffffff81124ca0 T fsnotify_clear_marks_by_group
ffffffff81124cb0 T fsnotify_duplicate_mark
ffffffff81124d10 T fsnotify_init_mark
ffffffff81124db0 t fsnotify_recalc_vfsmount_mask_locked
ffffffff81124df0 T fsnotify_clear_marks_by_mount
ffffffff81124ed0 T fsnotify_clear_vfsmount_marks_by_group
ffffffff81124ee0 T fsnotify_recalc_vfsmount_mask
ffffffff81124f10 T fsnotify_destroy_vfsmount_mark
ffffffff81124fa0 T fsnotify_find_vfsmount_mark
ffffffff81125020 T fsnotify_add_vfsmount_mark
ffffffff811251a0 t dnotify_should_send_event
ffffffff811251c0 t dnotify_free_mark
ffffffff811251e0 t dnotify_recalc_inode_mask
ffffffff81125250 t dnotify_handle_event
ffffffff81125300 T dnotify_flush
ffffffff81125420 T fcntl_dirnotify
ffffffff81125790 t inotify_should_send_event
ffffffff811257d0 t inotify_freeing_mark
ffffffff811257e0 t inotify_free_group_priv
ffffffff81125840 t inotify_merge
ffffffff81125920 T inotify_free_event_priv
ffffffff81125930 t idr_callback
ffffffff811259a0 t inotify_handle_event
ffffffff81125a80 t inotify_fasync
ffffffff81125ab0 t inotify_release
ffffffff81125ad0 t inotify_ioctl
ffffffff81125b90 t inotify_poll
ffffffff81125bf0 t inotify_read
ffffffff81125ef0 t inotify_free_mark
ffffffff81125f00 t inotify_idr_find_locked
ffffffff81125f70 t inotify_remove_from_idr
ffffffff81126190 T inotify_ignored_and_remove_idr
ffffffff81126260 T sys_inotify_init1
ffffffff81126370 T sys_inotify_init
ffffffff81126380 T sys_inotify_add_watch
ffffffff811266d0 T sys_inotify_rm_watch
ffffffff811267a0 t fanotify_free_group_priv
ffffffff811267b0 t fanotify_handle_event
ffffffff811267f0 t fanotify_merge
ffffffff811268f0 t fanotify_should_send_event
ffffffff811269a0 t fanotify_write
ffffffff811269b0 t fanotify_release
ffffffff811269d0 t fanotify_ioctl
ffffffff81126a70 t fanotify_poll
ffffffff81126ad0 t fanotify_mark_add_to_mask
ffffffff81126b90 t fanotify_free_mark
ffffffff81126ba0 t fanotify_mark_remove_from_mask
ffffffff81126c40 t fanotify_read
ffffffff81126fb0 T sys_fanotify_init
ffffffff811271a0 T sys_fanotify_mark
ffffffff81127750 t ep_read_events_proc
ffffffff81127800 t ep_send_events_proc
ffffffff81127930 t ep_poll_wakeup_proc
ffffffff81127960 t ep_ptable_queue_proc
ffffffff81127a10 t ep_unregister_pollwait.isra.6
ffffffff81127a80 t ep_remove
ffffffff81127b30 t ep_call_nested.constprop.8
ffffffff81127c20 t reverse_path_check_proc
ffffffff81127d00 t ep_loop_check_proc
ffffffff81127e40 t ep_eventpoll_poll
ffffffff81127ea0 t ep_poll_safewake
ffffffff81127ed0 t ep_free
ffffffff81127f80 t ep_eventpoll_release
ffffffff81127fa0 t ep_scan_ready_list.isra.7
ffffffff81128120 t ep_poll_readyevents_proc
ffffffff81128130 t ep_poll
ffffffff81128530 t ep_poll_callback
ffffffff81128650 T eventpoll_release_file
ffffffff811286e0 T sys_epoll_create1
ffffffff81128830 T sys_epoll_create
ffffffff81128850 T sys_epoll_ctl
ffffffff81129050 T sys_epoll_wait
ffffffff81129130 T sys_epoll_pwait
ffffffff81129230 t anon_set_page_dirty
ffffffff81129240 t anon_inodefs_dname
ffffffff81129260 t anon_inodefs_mount
ffffffff81129390 T anon_inode_getfile
ffffffff81129510 T anon_inode_getfd
ffffffff811295a0 t signalfd_release
ffffffff811295c0 t signalfd_poll
ffffffff811296b0 t signalfd_read
ffffffff81129ae0 T signalfd_cleanup
ffffffff81129b10 T sys_signalfd4
ffffffff81129cb0 T sys_signalfd
ffffffff81129cc0 t timerfd_fget
ffffffff81129d00 t timerfd_poll
ffffffff81129d60 t timerfd_read
ffffffff81129fc0 t timerfd_tmrproc
ffffffff8112a030 t timerfd_remove_cancel.part.6
ffffffff8112a080 t timerfd_release
ffffffff8112a0c0 T timerfd_clock_was_set
ffffffff8112a160 T sys_timerfd_create
ffffffff8112a250 T sys_timerfd_settime
ffffffff8112a570 T sys_timerfd_gettime
ffffffff8112a6b0 t eventfd_poll
ffffffff8112a730 T eventfd_ctx_remove_wait_queue
ffffffff8112a800 T eventfd_signal
ffffffff8112a8b0 t eventfd_write
ffffffff8112aac0 T eventfd_ctx_read
ffffffff8112acd0 t eventfd_read
ffffffff8112ad30 t eventfd_free
ffffffff8112ad40 T eventfd_fget
ffffffff8112ad80 T eventfd_ctx_get
ffffffff8112adc0 T eventfd_ctx_fdget
ffffffff8112ae10 T eventfd_ctx_fileget
ffffffff8112ae40 T eventfd_ctx_put
ffffffff8112ae60 t eventfd_release
ffffffff8112ae90 T eventfd_file_create
ffffffff8112af60 T sys_eventfd2
ffffffff8112afd0 T sys_eventfd
ffffffff8112afe0 t aio_fdsync
ffffffff8112b020 t aio_fsync
ffffffff8112b050 t lookup_ioctx
ffffffff8112b0d0 t aio_rw_vect_retry
ffffffff8112b290 T wait_on_sync_kiocb
ffffffff8112b310 t aio_queue_work
ffffffff8112b340 t aio_read_evt
ffffffff8112b4c0 t ctx_rcu_free
ffffffff8112b4e0 t __aio_put_req
ffffffff8112b6c0 T aio_put_req
ffffffff8112b720 t aio_fput_routine
ffffffff8112b890 t timeout_func
ffffffff8112b8a0 t aio_free_ring
ffffffff8112b940 t __put_ioctx
ffffffff8112b9e0 t aio_setup_vectored_rw
ffffffff8112ba70 t kill_ctx
ffffffff8112bc10 t io_destroy
ffffffff8112bcd0 T aio_complete
ffffffff8112bed0 t aio_run_iocb
ffffffff8112c050 t __aio_run_iocbs
ffffffff8112c110 t read_events
ffffffff8112c4f0 t aio_kick_handler
ffffffff8112c5f0 T kick_iocb
ffffffff8112c700 T exit_aio
ffffffff8112c7a0 T sys_io_setup
ffffffff8112cc20 T sys_io_destroy
ffffffff8112cc70 T do_io_submit
ffffffff8112d6d0 T sys_io_submit
ffffffff8112d6e0 T sys_io_cancel
ffffffff8112d860 T sys_io_getevents
ffffffff8112d8f0 T unlock_flocks
ffffffff8112d900 T locks_release_private
ffffffff8112d950 T __locks_copy_lock
ffffffff8112d9a0 T locks_copy_lock
ffffffff8112da70 t flock_to_posix_lock
ffffffff8112db80 t posix_same_owner
ffffffff8112dbd0 t locks_insert_lock
ffffffff8112dc20 T vfs_cancel_lock
ffffffff8112dc50 t locks_stop
ffffffff8112dc60 t locks_open
ffffffff8112dc80 t lock_get_status
ffffffff8112dfa0 t locks_show
ffffffff8112e010 t locks_next
ffffffff8112e030 t locks_wake_up_blocks
ffffffff8112e0d0 T lock_flocks
ffffffff8112e0e0 T locks_delete_block
ffffffff8112e130 T posix_unblock_lock
ffffffff8112e190 T lock_may_read
ffffffff8112e220 T lock_may_write
ffffffff8112e2a0 t locks_start
ffffffff8112e2f0 t lease_break_callback
ffffffff8112e310 t lease_release_private_callback
ffffffff8112e330 T locks_init_lock
ffffffff8112e3f0 T locks_alloc_lock
ffffffff8112e450 t posix_locks_conflict
ffffffff8112e4c0 T posix_test_lock
ffffffff8112e590 T vfs_test_lock
ffffffff8112e5d0 t locks_insert_block
ffffffff8112e620 T lease_get_mtime
ffffffff8112e680 T locks_free_lock
ffffffff8112e6c0 t locks_delete_lock
ffffffff8112e750 T lease_modify
ffffffff8112e7e0 t time_out_leases
ffffffff8112e870 T flock_lock_file_wait
ffffffff8112eb60 t lease_alloc
ffffffff8112ec30 T __break_lease
ffffffff8112ef90 t __posix_lock_file
ffffffff8112f440 T locks_mandatory_area
ffffffff8112f5c0 T posix_lock_file
ffffffff8112f5d0 T posix_lock_file_wait
ffffffff8112f6a0 T vfs_lock_file
ffffffff8112f6d0 T locks_remove_posix
ffffffff8112f780 T locks_mandatory_locked
ffffffff8112f7f0 T fcntl_getlease
ffffffff8112f890 T generic_add_lease
ffffffff8112f9c0 T generic_delete_lease
ffffffff8112fa20 T generic_setlease
ffffffff8112fb40 t __vfs_setlease
ffffffff8112fb60 T vfs_setlease
ffffffff8112fba0 t do_fcntl_delete_lease
ffffffff8112fc30 T fcntl_setlease
ffffffff8112fd70 T sys_flock
ffffffff8112ff20 T fcntl_getlk
ffffffff81130040 T fcntl_setlk
ffffffff81130310 T locks_remove_flock
ffffffff81130420 t compat_set_fd_set
ffffffff811304b0 t compat_filldir
ffffffff811305a0 t compat_filldir64
ffffffff81130670 t compat_fillonedir
ffffffff81130730 t poll_select_copy_remaining
ffffffff81130850 t compat_get_fd_set
ffffffff81130930 t cp_compat_stat
ffffffff81130aa0 t put_compat_statfs64
ffffffff81130b90 t put_compat_statfs
ffffffff81130cd0 T compat_printk
ffffffff81130d30 T compat_sys_utime
ffffffff81130da0 T compat_sys_utimensat
ffffffff81130e50 T compat_sys_futimesat
ffffffff81130f00 T compat_sys_utimes
ffffffff81130f10 T compat_sys_newstat
ffffffff81130f40 T compat_sys_newlstat
ffffffff81130f70 T compat_sys_newfstatat
ffffffff81130fa0 T compat_sys_newfstat
ffffffff81130fd0 T compat_sys_statfs
ffffffff81131000 T compat_sys_fstatfs
ffffffff81131030 T compat_sys_statfs64
ffffffff81131070 T compat_sys_fstatfs64
ffffffff811310b0 T compat_sys_ustat
ffffffff81131150 T compat_sys_fcntl64
ffffffff81131470 T compat_sys_fcntl
ffffffff81131490 T compat_sys_io_setup
ffffffff81131520 T compat_sys_io_getevents
ffffffff81131610 T compat_rw_copy_check_uvector
ffffffff81131780 t compat_do_readv_writev
ffffffff811319b0 t compat_writev
ffffffff81131a30 t compat_readv
ffffffff81131aa0 T compat_sys_io_submit
ffffffff81131b90 T compat_sys_mount
ffffffff81131e00 T compat_sys_old_readdir
ffffffff81131e60 T compat_sys_getdents
ffffffff81131f60 T compat_sys_getdents64
ffffffff81132050 T compat_sys_readv
ffffffff811320d0 T compat_sys_preadv64
ffffffff81132170 T compat_sys_preadv
ffffffff81132190 T compat_sys_writev
ffffffff81132210 T compat_sys_pwritev64
ffffffff811322b0 T compat_sys_pwritev
ffffffff811322d0 T compat_sys_vmsplice
ffffffff81132400 T compat_sys_open
ffffffff81132420 T compat_sys_openat
ffffffff81132430 T compat_core_sys_select
ffffffff811326f0 T compat_sys_select
ffffffff81132800 T compat_sys_old_select
ffffffff81132850 T compat_sys_pselect6
ffffffff81132a60 T compat_sys_ppoll
ffffffff81132bf0 T compat_sys_epoll_pwait
ffffffff81132d00 T compat_sys_signalfd4
ffffffff81132da0 T compat_sys_signalfd
ffffffff81132db0 T compat_sys_timerfd_settime
ffffffff81132e90 T compat_sys_timerfd_gettime
ffffffff81132f20 T compat_sys_ioctl
ffffffff81133f60 t load_script
ffffffff811341d0 t vma_dump_size
ffffffff81134300 t padzero
ffffffff81134340 t load_elf_library
ffffffff81134550 t load_elf_binary
ffffffff81135da0 t notesize.isra.7
ffffffff81135dc0 t elf_core_dump
ffffffff81136f80 t cputime_to_compat_timeval
ffffffff81136fb0 t vma_dump_size
ffffffff811370e0 t padzero
ffffffff81137120 t load_elf_library
ffffffff81137360 t notesize.isra.7
ffffffff81137380 t elf_core_dump
ffffffff811384d0 t load_elf_binary
ffffffff81139e20 T posix_acl_init
ffffffff81139e30 T posix_acl_valid
ffffffff81139f30 T posix_acl_equiv_mode
ffffffff81139fe0 T posix_acl_chmod
ffffffff8113a170 T posix_acl_create
ffffffff8113a320 T posix_acl_alloc
ffffffff8113a350 T posix_acl_from_mode
ffffffff8113a3c0 T posix_acl_permission
ffffffff8113a530 T posix_acl_to_xattr
ffffffff8113a5a0 T posix_acl_from_xattr
ffffffff8113a6e0 t generic_acl_set
ffffffff8113a8e0 t generic_acl_get
ffffffff8113a9c0 t generic_acl_list
ffffffff8113aad0 T generic_acl_init
ffffffff8113ac90 T generic_acl_chmod
ffffffff8113add0 T get_vmalloc_info
ffffffff8113aeb0 t pagemap_pte_hole
ffffffff8113af30 t gather_stats
ffffffff8113afb0 t pad_len_spaces
ffffffff8113afe0 t show_numa_map
ffffffff8113b440 t show_tid_numa_map
ffffffff8113b450 t show_pid_numa_map
ffffffff8113b460 t can_gather_numa_stats
ffffffff8113b4c0 t gather_pte_stats
ffffffff8113b680 t m_next
ffffffff8113b700 t m_stop
ffffffff8113b780 t m_start
ffffffff8113b8f0 t numa_maps_open
ffffffff8113b990 t tid_numa_maps_open
ffffffff8113b9a0 t pid_numa_maps_open
ffffffff8113b9b0 t do_maps_open
ffffffff8113ba50 t tid_smaps_open
ffffffff8113ba60 t pid_smaps_open
ffffffff8113ba70 t tid_maps_open
ffffffff8113ba80 t pid_maps_open
ffffffff8113ba90 t pagemap_read
ffffffff8113bda0 t pagemap_hugetlb_range
ffffffff8113be50 t pagemap_pte_range
ffffffff8113c130 t clear_refs_write
ffffffff8113c2f0 t clear_refs_pte_range
ffffffff8113c490 t show_map_vma
ffffffff8113c730 t show_smap
ffffffff8113c900 t show_tid_smap
ffffffff8113c910 t show_pid_smap
ffffffff8113c920 t show_tid_map
ffffffff8113c990 t show_pid_map
ffffffff8113ca00 t gather_hugetbl_stats
ffffffff8113ca60 t smaps_pte_entry.isra.9
ffffffff8113cb50 t smaps_pte_range
ffffffff8113ccf0 T task_mem
ffffffff8113ce30 T task_vsize
ffffffff8113ce40 T task_statm
ffffffff8113ceb0 t proc_show_options
ffffffff8113cf20 t proc_evict_inode
ffffffff8113cf90 t proc_destroy_inode
ffffffff8113cfb0 t proc_i_callback
ffffffff8113cfd0 t proc_alloc_inode
ffffffff8113d070 t proc_reg_open
ffffffff8113d1f0 t init_once
ffffffff8113d200 T pde_users_dec
ffffffff8113d260 t proc_reg_compat_ioctl
ffffffff8113d320 t proc_reg_release
ffffffff8113d490 t proc_reg_mmap
ffffffff8113d540 t proc_reg_unlocked_ioctl
ffffffff8113d600 t proc_reg_poll
ffffffff8113d6c0 t proc_reg_write
ffffffff8113d790 t proc_reg_read
ffffffff8113d860 t proc_reg_llseek
ffffffff8113d930 T proc_get_inode
ffffffff8113da40 T proc_fill_super
ffffffff8113dac0 t proc_test_super
ffffffff8113dad0 t proc_root_readdir
ffffffff8113db40 t proc_root_getattr
ffffffff8113db80 t proc_root_lookup
ffffffff8113dbe0 t proc_kill_sb
ffffffff8113dc20 t proc_parse_options.isra.0
ffffffff8113dd20 t proc_mount
ffffffff8113de80 t proc_set_super
ffffffff8113def0 T proc_remount
ffffffff8113df30 T pid_ns_prepare_proc
ffffffff8113df60 T pid_ns_release_proc
ffffffff8113df70 T mem_lseek
ffffffff8113dfa0 t pid_delete_dentry
ffffffff8113dfc0 t fake_filldir
ffffffff8113dfd0 t proc_self_put_link
ffffffff8113dff0 t proc_self_readlink
ffffffff8113e070 t proc_self_follow_link
ffffffff8113e110 t proc_base_instantiate
ffffffff8113e260 t proc_loginuid_write
ffffffff8113e370 t mem_release
ffffffff8113e3a0 t comm_open
ffffffff8113e3c0 t proc_single_open
ffffffff8113e3e0 t task_dumpable
ffffffff8113e440 t proc_fd_permission
ffffffff8113e470 t do_io_accounting
ffffffff8113e680 t proc_tid_io_accounting
ffffffff8113e690 t proc_tgid_io_accounting
ffffffff8113e6a0 T pid_getattr
ffffffff8113e790 t proc_oom_score
ffffffff8113e7f0 t proc_pid_wchan
ffffffff8113e890 t proc_pid_cmdline
ffffffff8113e9b0 t proc_pid_limits
ffffffff8113eb70 t proc_cwd_link
ffffffff8113ec40 t proc_root_link
ffffffff8113ed20 t proc_single_show
ffffffff8113edc0 t proc_fd_access_allowed
ffffffff8113ee20 t proc_pid_follow_link
ffffffff8113ee80 t proc_pid_readlink
ffffffff8113ef70 t proc_coredump_filter_write
ffffffff8113f0c0 t proc_coredump_filter_read
ffffffff8113f1c0 t oom_score_adj_read
ffffffff8113f2c0 t oom_adjust_read
ffffffff8113f3c0 t proc_sessionid_read
ffffffff8113f480 t proc_loginuid_read
ffffffff8113f540 t proc_info_read
ffffffff8113f640 t oom_score_adj_write
ffffffff8113f860 t oom_adjust_write
ffffffff8113fab0 t mem_open
ffffffff8113fb60 t comm_show
ffffffff8113fc00 t comm_write
ffffffff8113fcd0 t proc_fd_info
ffffffff8113fe50 t proc_fdinfo_read
ffffffff8113ff00 t proc_fd_link
ffffffff8113ff10 t tid_fd_revalidate
ffffffff81140090 T pid_revalidate
ffffffff81140170 t proc_pid_permission
ffffffff81140270 t proc_task_getattr
ffffffff811402e0 t proc_exe_link
ffffffff81140390 t next_tgid
ffffffff81140430 T proc_setattr
ffffffff811404c0 t name_to_int.isra.6
ffffffff81140540 t proc_lookupfd_common
ffffffff811405e0 t proc_lookupfd
ffffffff811405f0 t proc_lookupfdinfo
ffffffff81140600 t mem_rw.isra.9
ffffffff81140770 t mem_write
ffffffff81140790 t mem_read
ffffffff811407a0 t lock_trace
ffffffff81140810 t proc_pid_personality
ffffffff81140880 t proc_pid_syscall
ffffffff811409b0 T mm_for_maps
ffffffff811409c0 t environ_read
ffffffff81140b90 t proc_pid_auxv
ffffffff81140c10 T proc_pid_make_inode
ffffffff81140cb0 t proc_pid_instantiate
ffffffff81140d80 t proc_fdinfo_instantiate
ffffffff81140e20 t proc_fd_instantiate
ffffffff81140ed0 t proc_task_instantiate
ffffffff81140fa0 t proc_task_lookup
ffffffff81141090 t proc_pident_instantiate
ffffffff81141140 t proc_pident_lookup
ffffffff81141220 t proc_tid_base_lookup
ffffffff81141240 t proc_tgid_base_lookup
ffffffff81141260 T proc_fill_cache
ffffffff811413d0 t proc_readfd_common
ffffffff81141590 t proc_readfdinfo
ffffffff811415a0 t proc_readfd
ffffffff811415b0 t proc_task_readdir
ffffffff81141900 t proc_pident_readdir
ffffffff81141ab0 t proc_tid_base_readdir
ffffffff81141ad0 t proc_tgid_base_readdir
ffffffff81141af0 T proc_flush_task
ffffffff81141cb0 T proc_pid_lookup
ffffffff81141df0 T proc_pid_readdir
ffffffff81141ff0 t proc_file_lseek
ffffffff81142030 t proc_follow_link
ffffffff81142050 t proc_delete_dentry
ffffffff81142060 t proc_file_write
ffffffff81142120 t proc_getattr
ffffffff81142160 t proc_notify_change
ffffffff81142210 t proc_register
ffffffff81142420 t __xlate_proc_name
ffffffff81142510 t __proc_create
ffffffff81142700 T proc_create_data
ffffffff811427d0 T create_proc_entry
ffffffff81142870 T proc_net_mkdir
ffffffff811428e0 T proc_mkdir_mode
ffffffff81142940 T proc_mkdir
ffffffff81142950 T proc_symlink
ffffffff811429f0 t proc_file_read
ffffffff81142d40 T pde_put
ffffffff81142de0 T remove_proc_entry
ffffffff81143050 T proc_readdir_de
ffffffff81143200 T proc_readdir
ffffffff81143220 T proc_lookup_de
ffffffff81143330 T proc_lookup
ffffffff81143350 t do_task_stat
ffffffff81143d60 t render_sigset_t
ffffffff81143df0 t render_cap_t
ffffffff81143e50 T proc_pid_status
ffffffff811444a0 T proc_tid_stat
ffffffff811444b0 T proc_tgid_stat
ffffffff811444c0 T proc_pid_statm
ffffffff811445b0 t tty_drivers_open
ffffffff811445c0 t show_tty_range
ffffffff811447e0 t show_tty_driver
ffffffff811449a0 t t_next
ffffffff811449b0 t t_stop
ffffffff811449c0 t t_start
ffffffff811449e0 T proc_tty_register_driver
ffffffff81144a30 T proc_tty_unregister_driver
ffffffff81144a60 t cmdline_proc_open
ffffffff81144a80 t cmdline_proc_show
ffffffff81144aa0 t c_next
ffffffff81144ab0 t consoles_open
ffffffff81144ac0 t show_console_dev
ffffffff81144c10 t c_stop
ffffffff81144c20 t c_start
ffffffff81144c60 t cpuinfo_open
ffffffff81144c70 t devinfo_start
ffffffff81144c90 t devinfo_next
ffffffff81144cb0 t devinfo_stop
ffffffff81144cc0 t devinfo_open
ffffffff81144cd0 t devinfo_show
ffffffff81144d60 t int_seq_start
ffffffff81144d80 t int_seq_next
ffffffff81144da0 t int_seq_stop
ffffffff81144db0 t interrupts_open
ffffffff81144dc0 t loadavg_proc_open
ffffffff81144de0 t loadavg_proc_show
ffffffff81144eb0 t meminfo_proc_open
ffffffff81144ee0 t meminfo_proc_show
ffffffff811453a0 t stat_open
ffffffff81145450 t show_stat
ffffffff81145a30 t uptime_proc_open
ffffffff81145a50 t uptime_proc_show
ffffffff81145b40 t version_proc_open
ffffffff81145b60 t version_proc_show
ffffffff81145ba0 t softirqs_open
ffffffff81145bc0 t show_softirqs
ffffffff81145cd0 t proc_ns_instantiate
ffffffff81145da0 t proc_ns_dir_lookup
ffffffff81145ec0 t proc_ns_dir_readdir
ffffffff81146090 T proc_ns_fget
ffffffff811460d0 t init_header
ffffffff81146150 t proc_sys_revalidate
ffffffff81146170 t proc_sys_delete
ffffffff81146190 t count_subheaders
ffffffff81146200 t first_usable_entry
ffffffff81146240 t sysctl_head_finish
ffffffff81146290 t proc_sys_make_inode
ffffffff81146380 t sysctl_perm
ffffffff81146400 t proc_sys_setattr
ffffffff81146490 t erase_header
ffffffff81146500 t append_path
ffffffff81146580 t sysctl_err
ffffffff811465f0 t sysctl_head_grab
ffffffff81146630 t grab_header
ffffffff81146650 t proc_sys_open
ffffffff811466b0 t proc_sys_poll
ffffffff811467a0 t proc_sys_permission
ffffffff81146840 t proc_sys_getattr
ffffffff811468c0 t find_entry.isra.7
ffffffff81146980 t find_subdir
ffffffff811469d0 t get_links
ffffffff81146ae0 t xlate_dir.isra.8
ffffffff81146b60 t sysctl_follow_link
ffffffff81146c80 t proc_sys_lookup
ffffffff81146e00 t proc_sys_compare
ffffffff81146eb0 t proc_sys_fill_cache.isra.12
ffffffff81147000 t proc_sys_readdir
ffffffff81147300 t proc_sys_call_handler.isra.13
ffffffff811473e0 t proc_sys_write
ffffffff811473f0 t proc_sys_read
ffffffff81147400 t sysctl_print_dir.isra.14
ffffffff81147430 t put_links
ffffffff81147550 t drop_sysctl_table
ffffffff81147630 T unregister_sysctl_table
ffffffff811476e0 t insert_header
ffffffff81147a60 T proc_sys_poll_notify
ffffffff81147a90 T sysctl_head_put
ffffffff81147ad0 T register_sysctl_root
ffffffff81147ae0 T __register_sysctl_table
ffffffff81147f80 t register_leaf_sysctl_tables
ffffffff81148190 T register_sysctl
ffffffff811481b0 T __register_sysctl_paths
ffffffff81148380 T register_sysctl_paths
ffffffff811483a0 T register_sysctl_table
ffffffff811483b0 T setup_sysctl_set
ffffffff81148460 T retire_sysctl_set
ffffffff81148480 t get_proc_task_net
ffffffff811484c0 t proc_tgid_net_readdir
ffffffff81148550 t proc_tgid_net_getattr
ffffffff811485c0 t proc_tgid_net_lookup
ffffffff81148640 T proc_net_remove
ffffffff81148650 t proc_net_ns_exit
ffffffff81148670 t proc_net_ns_init
ffffffff81148700 T proc_net_fops_create
ffffffff81148720 T single_release_net
ffffffff81148760 T seq_release_net
ffffffff811487a0 T single_open_net
ffffffff81148840 T seq_open_net
ffffffff811488f0 t free_kclist_ents
ffffffff81148950 t storenote
ffffffff811489d0 t kcore_update_ram
ffffffff81148c20 t notesize.isra.3
ffffffff81148c40 t elf_kcore_store_hdr
ffffffff81148ef0 t read_kcore
ffffffff81149220 t open_kcore
ffffffff811492b0 t kclist_add_private
ffffffff811494e0 T kclist_add
ffffffff81149530 t kmsg_release
ffffffff81149550 t kmsg_open
ffffffff81149570 t kmsg_poll
ffffffff811495c0 t kmsg_read
ffffffff81149620 t kpagecount_read
ffffffff81149730 T stable_page_flags
ffffffff811498e0 t kpageflags_read
ffffffff81149a00 T sysfs_setxattr
ffffffff81149a40 t sysfs_refresh_inode
ffffffff81149ab0 T sysfs_permission
ffffffff81149b30 T sysfs_getattr
ffffffff81149b90 T sysfs_sd_setattr
ffffffff81149cb0 T sysfs_setattr
ffffffff81149d50 T sysfs_get_inode
ffffffff81149e90 T sysfs_evict_inode
ffffffff81149f00 T sysfs_hash_and_remove
ffffffff81149fa0 t sysfs_release
ffffffff8114a060 t sysfs_poll
ffffffff8114a110 t sysfs_write_file
ffffffff8114a270 t sysfs_schedule_callback_work
ffffffff8114a2e0 T sysfs_remove_file_from_group
ffffffff8114a370 T sysfs_notify_dirent
ffffffff8114a3d0 T sysfs_notify
ffffffff8114a470 t sysfs_open_file
ffffffff8114a6b0 t sysfs_read_file
ffffffff8114a840 T sysfs_schedule_callback
ffffffff8114a9e0 T sysfs_attr_ns
ffffffff8114aa70 T sysfs_remove_file
ffffffff8114aac0 T sysfs_remove_files
ffffffff8114ab00 T sysfs_chmod_file
ffffffff8114abc0 T sysfs_add_file_mode
ffffffff8114aca0 T sysfs_add_file
ffffffff8114acb0 T sysfs_add_file_to_group
ffffffff8114ad40 T sysfs_create_file
ffffffff8114ad60 T sysfs_create_files
ffffffff8114ade0 t sysfs_dentry_delete
ffffffff8114adf0 t sysfs_name_hash
ffffffff8114aec0 t sysfs_dentry_revalidate
ffffffff8114af90 t sysfs_unlink_sibling
ffffffff8114afb0 t sysfs_link_sibling
ffffffff8114b080 t sysfs_pathname.isra.7
ffffffff8114b0e0 T sysfs_get_active
ffffffff8114b130 T sysfs_put_active
ffffffff8114b160 T release_sysfs_dirent
ffffffff8114b220 t sysfs_dir_release
ffffffff8114b250 t sysfs_dir_pos
ffffffff8114b350 t sysfs_readdir
ffffffff8114b590 t sysfs_dentry_iput
ffffffff8114b5d0 T sysfs_new_dirent
ffffffff8114b6f0 T sysfs_addrm_start
ffffffff8114b710 T __sysfs_add_one
ffffffff8114b820 T sysfs_add_one
ffffffff8114b8f0 T sysfs_remove_one
ffffffff8114b970 T sysfs_addrm_finish
ffffffff8114ba20 t create_dir
ffffffff8114bb00 T sysfs_find_dirent
ffffffff8114bbf0 t sysfs_lookup
ffffffff8114bd10 T sysfs_get_dirent
ffffffff8114bd80 T sysfs_create_subdir
ffffffff8114bda0 T sysfs_create_dir
ffffffff8114be70 T sysfs_remove_subdir
ffffffff8114bea0 T sysfs_remove_dir
ffffffff8114bf60 T sysfs_rename
ffffffff8114c0f0 T sysfs_rename_dir
ffffffff8114c150 T sysfs_move_dir
ffffffff8114c1c0 t sysfs_put_link
ffffffff8114c1e0 t sysfs_follow_link
ffffffff8114c390 T sysfs_rename_link
ffffffff8114c460 T sysfs_remove_link
ffffffff8114c480 t sysfs_do_create_link
ffffffff8114c6b0 T sysfs_create_link
ffffffff8114c6c0 T sysfs_create_link_nowarn
ffffffff8114c6d0 T sysfs_delete_link
ffffffff8114c750 t sysfs_test_super
ffffffff8114c770 T sysfs_put
ffffffff8114c7a0 T sysfs_get
ffffffff8114c7e0 t sysfs_kill_sb
ffffffff8114c810 t sysfs_mount
ffffffff8114ca00 t sysfs_set_super
ffffffff8114ca40 t release
ffffffff8114cab0 t mmap
ffffffff8114cbe0 t bin_migrate
ffffffff8114cc90 t bin_get_policy
ffffffff8114cd40 t bin_set_policy
ffffffff8114cde0 t bin_access
ffffffff8114cea0 t bin_page_mkwrite
ffffffff8114cf40 t bin_fault
ffffffff8114cfd0 t bin_vma_open
ffffffff8114d060 t open
ffffffff8114d1b0 t write
ffffffff8114d360 t read
ffffffff8114d540 T sysfs_remove_bin_file
ffffffff8114d550 T sysfs_create_bin_file
ffffffff8114d570 T unmap_bin_file
ffffffff8114d5d0 T sysfs_unmerge_group
ffffffff8114d640 T sysfs_merge_group
ffffffff8114d700 t remove_files.isra.1
ffffffff8114d740 T sysfs_remove_group
ffffffff8114d860 t internal_create_group
ffffffff8114da60 T sysfs_update_group
ffffffff8114da70 T sysfs_create_group
ffffffff8114da80 T configfs_setattr
ffffffff8114dc70 T configfs_new_inode
ffffffff8114dd70 T configfs_create
ffffffff8114de90 T configfs_get_name
ffffffff8114ded0 T configfs_drop_dentry
ffffffff8114df60 T configfs_hash_and_remove
ffffffff8114e080 T configfs_inode_exit
ffffffff8114e090 t configfs_release
ffffffff8114e100 t configfs_write_file
ffffffff8114e220 t configfs_read_file
ffffffff8114e340 t configfs_open_file
ffffffff8114e510 T configfs_add_file
ffffffff8114e5b0 T configfs_create_file
ffffffff8114e5d0 t configfs_d_delete
ffffffff8114e5e0 t configfs_init_file
ffffffff8114e600 t init_symlink
ffffffff8114e610 t configfs_dir_set_ready
ffffffff8114e660 t configfs_dir_lseek
ffffffff8114e7c0 t configfs_dir_close
ffffffff8114e850 t configfs_new_dirent
ffffffff8114e940 t configfs_readdir
ffffffff8114ebb0 t configfs_depend_prep
ffffffff8114ec30 t unlink_obj
ffffffff8114ec80 t unlink_group
ffffffff8114ecd0 t link_obj
ffffffff8114ed10 t init_dir
ffffffff8114ed40 t configfs_d_iput
ffffffff8114edf0 T configfs_depend_item
ffffffff8114eed0 t configfs_detach_prep.isra.6
ffffffff8114ef90 t configfs_detach_rollback.isra.7
ffffffff8114eff0 t client_disconnect_notify
ffffffff8114f020 T configfs_undepend_item
ffffffff8114f060 t client_drop_item
ffffffff8114f090 t detach_attrs.isra.12
ffffffff8114f1c0 t configfs_remove_dir.isra.13
ffffffff8114f2f0 t detach_groups.isra.14
ffffffff8114f3f0 t configfs_detach_group
ffffffff8114f410 t configfs_rmdir
ffffffff8114f6a0 T configfs_unregister_subsystem
ffffffff8114f7f0 t link_group
ffffffff8114f860 t configfs_attach_item.isra.16.part.17
ffffffff8114f940 T configfs_make_dirent
ffffffff8114f9c0 t create_dir
ffffffff8114fb50 t configfs_attach_group.isra.18
ffffffff8114fd20 T configfs_register_subsystem
ffffffff8114fe90 T configfs_dirent_is_ready
ffffffff8114fec0 t configfs_dir_open
ffffffff8114ff60 t configfs_mkdir
ffffffff81150370 t configfs_lookup
ffffffff811504e0 T configfs_create_link
ffffffff811505c0 t configfs_put_link
ffffffff811505e0 t configfs_follow_link
ffffffff81150870 T configfs_symlink
ffffffff81150ba0 T configfs_unlink
ffffffff81150d70 t configfs_do_mount
ffffffff81150d80 t configfs_fill_super
ffffffff81150e40 T configfs_is_root
ffffffff81150e50 T configfs_pin_fs
ffffffff81150e90 T configfs_release_fs
ffffffff81150eb0 T config_item_init
ffffffff81150ed0 T config_group_init
ffffffff81150ef0 T config_item_get
ffffffff81150f30 T config_group_find_item
ffffffff81150fa0 t config_item_release
ffffffff81151060 T config_item_put
ffffffff81151080 T config_item_set_name
ffffffff811511b0 T config_group_init_type_name
ffffffff81151200 T config_item_init_type_name
ffffffff81151250 t devpts_kill_sb
ffffffff81151270 t devpts_mount
ffffffff81151280 t devpts_show_options
ffffffff811512f0 t devpts_fill_super
ffffffff81151410 t devpts_remount
ffffffff81151530 T devpts_new_index
ffffffff81151610 T devpts_kill_index
ffffffff81151660 T devpts_pty_new
ffffffff811518b0 T devpts_get_tty
ffffffff81151920 T devpts_pty_kill
ffffffff811519b0 t ramfs_kill_sb
ffffffff811519d0 t rootfs_mount
ffffffff811519f0 T ramfs_mount
ffffffff81151a00 T ramfs_get_inode
ffffffff81151b40 T ramfs_fill_super
ffffffff81151c50 t ramfs_mknod
ffffffff81151ce0 t ramfs_mkdir
ffffffff81151d10 t ramfs_create
ffffffff81151d20 t ramfs_symlink
ffffffff81151df0 t hugetlbfs_write_begin
ffffffff81151e00 t hugetlbfs_mount
ffffffff81151e10 t hugetlbfs_statfs
ffffffff81151ee0 t hugetlbfs_put_super
ffffffff81151f20 t hugetlbfs_set_page_dirty
ffffffff81151f40 t hugetlbfs_write_end
ffffffff81151f50 t truncate_hugepages
ffffffff811520e0 t hugetlbfs_evict_inode
ffffffff81152100 t hugetlbfs_i_callback
ffffffff81152120 t hugetlbfs_fill_super
ffffffff81152520 t hugetlbfs_get_inode
ffffffff81152660 t hugetlbfs_mknod
ffffffff811526f0 t hugetlbfs_mkdir
ffffffff81152720 t hugetlbfs_create
ffffffff81152730 t hugetlbfs_migrate_page
ffffffff81152770 t hugetlbfs_symlink
ffffffff81152840 t init_once
ffffffff81152850 t hugetlb_get_unmapped_area
ffffffff81152c00 t hugetlbfs_file_mmap
ffffffff81152d40 t hugetlbfs_read
ffffffff81153090 t hugetlbfs_inc_free_inodes.part.17
ffffffff811530d0 t hugetlbfs_destroy_inode
ffffffff81153110 t hugetlbfs_alloc_inode
ffffffff811531b0 t hugetlbfs_setattr
ffffffff81153320 T hugetlb_file_setup
ffffffff811535b0 T utf8_to_utf32
ffffffff81153690 T utf32_to_utf8
ffffffff81153740 t uni2char
ffffffff81153790 t char2uni
ffffffff811537b0 T unload_nls
ffffffff811537d0 t find_nls
ffffffff81153860 T utf8s_to_utf16s
ffffffff81153a00 T utf16s_to_utf8s
ffffffff81153b40 T register_nls
ffffffff81153bc0 T unregister_nls
ffffffff81153c40 T load_nls
ffffffff81153c70 T load_nls_default
ffffffff81153ca0 t pstore_kill_sb
ffffffff81153cc0 t pstore_mount
ffffffff81153cd0 t pstore_unlink
ffffffff81153d20 t pstore_evict_inode
ffffffff81153d90 t parse_options
ffffffff81153e00 t pstore_remount
ffffffff81153e20 t pstore_get_inode
ffffffff81153e60 T pstore_fill_super
ffffffff81153f10 t pstore_file_read
ffffffff81153f30 T pstore_is_mounted
ffffffff81153f40 T pstore_mkfile
ffffffff81154220 t pstore_timefunc
ffffffff81154270 t pstore_dump
ffffffff811544d0 T pstore_set_kmsg_bytes
ffffffff811544e0 T pstore_get_records
ffffffff81154600 T pstore_register
ffffffff81154730 t pstore_dowork
ffffffff81154740 t do_compat_semctl
ffffffff81154b20 T compat_sys_semctl
ffffffff81154b50 T compat_sys_msgsnd
ffffffff81154b90 T compat_sys_msgrcv
ffffffff81154c70 T compat_sys_msgctl
ffffffff811550d0 T compat_sys_shmat
ffffffff81155110 T compat_sys_shmctl
ffffffff811556e0 T compat_sys_semtimedop
ffffffff81155780 t sysvipc_find_ipc
ffffffff81155850 t sysvipc_proc_next
ffffffff811558c0 t ipc_schedule_free
ffffffff811558f0 t ipc_do_vfree
ffffffff81155900 t sysvipc_proc_release
ffffffff81155940 t sysvipc_proc_open
ffffffff811559f0 t sysvipc_proc_show
ffffffff81155a20 t sysvipc_proc_stop
ffffffff81155a80 t sysvipc_proc_start
ffffffff81155af0 T ipc_init_ids
ffffffff81155b30 T ipc_get_maxid
ffffffff81155ba0 T ipc_addid
ffffffff81155c90 T ipc_rmid
ffffffff81155ce0 T ipc_alloc
ffffffff81155d00 T ipc_free
ffffffff81155d20 T ipc_rcu_alloc
ffffffff81155d90 T ipc_rcu_getref
ffffffff81155da0 T ipc_rcu_putref
ffffffff81155de0 T ipcperms
ffffffff81155ee0 T kernel_to_ipc64_perm
ffffffff81155f10 T ipc64_perm_to_ipc_perm
ffffffff81155f40 T ipc_lock
ffffffff81155fa0 T ipc_lock_check
ffffffff81155fe0 T ipcget
ffffffff811561f0 T ipc_update_perm
ffffffff81156220 T ipcctl_pre_down
ffffffff81156370 T store_msg
ffffffff81156410 T free_msg
ffffffff81156450 T load_msg
ffffffff811565a0 t msg_security
ffffffff811565b0 t ss_wakeup
ffffffff811565f0 t expunge_all
ffffffff81156650 t freeque
ffffffff811566e0 t newque
ffffffff81156810 t sysvipc_msg_proc_show
ffffffff81156890 t msgctl_down.constprop.7
ffffffff811569f0 T recompute_msgmni
ffffffff81156a70 T msg_init_ns
ffffffff81156ab0 T msg_exit_ns
ffffffff81156ad0 T sys_msgget
ffffffff81156b30 T sys_msgctl
ffffffff81156e30 T do_msgsnd
ffffffff811571d0 T sys_msgsnd
ffffffff81157200 T do_msgrcv
ffffffff811575c0 T sys_msgrcv
ffffffff81157610 t sem_security
ffffffff81157620 t sem_more_checks
ffffffff81157640 t lookup_undo
ffffffff811576e0 t wake_up_sem_queue_do
ffffffff81157720 t newary
ffffffff81157880 t sysvipc_sem_proc_show
ffffffff811578e0 t try_atomic_semop.isra.6
ffffffff81157a80 t unlink_queue.isra.7
ffffffff81157af0 t freeary
ffffffff81157c90 t update_queue
ffffffff81157ef0 t do_smart_update
ffffffff81157fe0 t semctl_main.isra.11
ffffffff811585d0 t semctl_down.constprop.12
ffffffff811586e0 t copy_semid_to_user.constprop.13
ffffffff81158700 t semctl_nolock.constprop.14
ffffffff81158930 T sem_init_ns
ffffffff81158970 T sem_exit_ns
ffffffff81158990 T sys_semget
ffffffff81158a00 T sys_semctl
ffffffff81158aa0 T sys_semtimedop
ffffffff811592f0 T sys_semop
ffffffff81159300 T copy_semundo
ffffffff811593c0 T exit_sem
ffffffff81159640 t shm_fault
ffffffff81159660 t shm_set_policy
ffffffff81159690 t shm_fsync
ffffffff811596c0 t shm_get_unmapped_area
ffffffff811596e0 t shm_security
ffffffff811596f0 t shm_more_checks
ffffffff81159700 t shm_open
ffffffff81159770 t shm_release
ffffffff811597c0 t shm_get_policy
ffffffff811597f0 t shm_mmap
ffffffff81159860 t shm_add_rss_swap.isra.13
ffffffff81159910 t sysvipc_shm_proc_show
ffffffff811599e0 t shm_destroy
ffffffff81159a80 t shm_close
ffffffff81159b50 t do_shm_rmid
ffffffff81159b80 t shmctl_down.constprop.18
ffffffff81159c90 t shm_try_destroy_current
ffffffff81159cf0 t shm_try_destroy_orphaned
ffffffff81159d50 t newseg
ffffffff81159fa0 T shm_init_ns
ffffffff81159fe0 T shm_exit_ns
ffffffff8115a010 T shm_destroy_orphaned
ffffffff8115a070 T exit_shm
ffffffff8115a100 T is_file_shm_hugepages
ffffffff8115a110 T sys_shmget
ffffffff8115a170 T sys_shmctl
ffffffff8115a6a0 T do_shmat
ffffffff8115aa80 T sys_shmat
ffffffff8115aaa0 T sys_shmdt
ffffffff8115ac10 t ipcns_callback
ffffffff8115ac40 T register_ipcns_notifier
ffffffff8115ac90 T cond_register_ipcns_notifier
ffffffff8115ace0 T unregister_ipcns_notifier
ffffffff8115ad10 T ipcns_notify
ffffffff8115ad30 t proc_ipcauto_dointvec_minmax
ffffffff8115ae50 t proc_ipc_dointvec
ffffffff8115aed0 t proc_ipc_dointvec_minmax_orphans
ffffffff8115af80 t proc_ipc_doulongvec_minmax
ffffffff8115b000 t proc_ipc_callback_dointvec.part.1
ffffffff8115b030 t proc_ipc_callback_dointvec
ffffffff8115b100 t msg_insert
ffffffff8115b190 t mqueue_mount
ffffffff8115b1c0 t mqueue_poll_file
ffffffff8115b250 t mqueue_destroy_inode
ffffffff8115b270 t mqueue_i_callback
ffffffff8115b290 t mqueue_alloc_inode
ffffffff8115b2c0 t mqueue_unlink
ffffffff8115b330 t remove_notification
ffffffff8115b390 t mqueue_flush_file
ffffffff8115b400 t mqueue_read_file
ffffffff8115b560 t init_once
ffffffff8115b570 t prepare_timeout
ffffffff8115b5f0 t wq_sleep
ffffffff8115b770 t __do_notify
ffffffff8115b8a0 t mqueue_evict_inode
ffffffff8115ba00 t mqueue_get_inode.isra.6
ffffffff8115bd60 t mqueue_fill_super
ffffffff8115bdc0 t mqueue_create
ffffffff8115bf50 t do_open.isra.9
ffffffff8115c020 T sys_mq_open
ffffffff8115c450 T sys_mq_unlink
ffffffff8115c5c0 T sys_mq_timedsend
ffffffff8115c8c0 T sys_mq_timedreceive
ffffffff8115cc00 T sys_mq_notify
ffffffff8115cfb0 T sys_mq_getsetattr
ffffffff8115d1d0 T mq_init_ns
ffffffff8115d230 T mq_clear_sbinfo
ffffffff8115d250 T mq_put_mnt
ffffffff8115d260 t compat_prepare_timeout
ffffffff8115d2e0 T compat_sys_mq_open
ffffffff8115d410 T compat_sys_mq_timedsend
ffffffff8115d480 T compat_sys_mq_timedreceive
ffffffff8115d4f0 T compat_sys_mq_notify
ffffffff8115d580 T compat_sys_mq_getsetattr
ffffffff8115d720 t ipcns_get
ffffffff8115d750 T copy_ipcs
ffffffff8115d830 T free_ipcs
ffffffff8115d8c0 T put_ipc_ns
ffffffff8115d940 t ipcns_install
ffffffff8115d990 t ipcns_put
ffffffff8115d9a0 t proc_mq_dointvec_minmax
ffffffff8115da20 t proc_mq_dointvec
ffffffff8115daa0 T mq_register_sysctl_table
ffffffff8115dab0 t cap_safe_nice
ffffffff8115db10 T cap_netlink_send
ffffffff8115db20 T cap_capable
ffffffff8115db80 T cap_settime
ffffffff8115dba0 T cap_ptrace_access_check
ffffffff8115dc10 T cap_ptrace_traceme
ffffffff8115dc80 T cap_capget
ffffffff8115dca0 T cap_capset
ffffffff8115ddd0 T cap_inode_need_killpriv
ffffffff8115de10 T cap_inode_killpriv
ffffffff8115de40 T get_vfs_caps_from_disk
ffffffff8115df30 T cap_bprm_set_creds
ffffffff8115e3a0 T cap_bprm_secureexec
ffffffff8115e3f0 T cap_inode_setxattr
ffffffff8115e460 T cap_inode_removexattr
ffffffff8115e4d0 T cap_task_fix_setuid
ffffffff8115e630 T cap_task_setscheduler
ffffffff8115e640 T cap_task_setioprio
ffffffff8115e650 T cap_task_setnice
ffffffff8115e660 T cap_task_prctl
ffffffff8115e830 T cap_vm_enough_memory
ffffffff8115e890 T cap_file_mmap
ffffffff8115e8e0 T mmap_min_addr_handler
ffffffff8115e950 T ipv4_skb_to_auditdata
ffffffff8115ea00 T common_lsm_audit
ffffffff8115f120 t devcgroup_populate
ffffffff8115f140 t devcgroup_seq_read
ffffffff8115f280 t devcgroup_destroy
ffffffff8115f300 t devcgroup_access_write
ffffffff8115f7b0 t devcgroup_can_attach
ffffffff8115f7f0 t devcgroup_create
ffffffff8115f9b0 T __devcgroup_inode_permission
ffffffff8115fab0 T devcgroup_inode_mknod
ffffffff8115fbb0 T crypto_find_alg
ffffffff8115fbf0 T crypto_shoot_alg
ffffffff8115fc20 T crypto_larval_alloc
ffffffff8115fce0 T crypto_mod_put
ffffffff8115fd10 T crypto_mod_get
ffffffff8115fd40 t crypto_larval_wait
ffffffff8115fe00 t __crypto_alg_lookup
ffffffff8115ff00 T crypto_alg_lookup
ffffffff8115ff50 T crypto_larval_lookup
ffffffff811600f0 t crypto_exit_ops
ffffffff81160140 T crypto_create_tfm
ffffffff81160210 T crypto_alloc_tfm
ffffffff811602f0 T __crypto_alloc_tfm
ffffffff81160450 T crypto_destroy_tfm
ffffffff811604d0 T crypto_probing_notify
ffffffff81160550 T crypto_larval_kill
ffffffff811605d0 T crypto_alg_mod_lookup
ffffffff81160650 T crypto_has_alg
ffffffff81160680 T crypto_alloc_base
ffffffff81160730 t crypto_larval_destroy
ffffffff81160760 t cipher_decrypt_unaligned
ffffffff81160790 t cipher_encrypt_unaligned
ffffffff811607c0 t setkey
ffffffff81160900 T crypto_init_cipher_ops
ffffffff81160950 T crypto_exit_cipher_ops
ffffffff81160960 t crypto_compress
ffffffff81160970 t crypto_decompress
ffffffff81160980 T crypto_init_compress_ops
ffffffff811609a0 T crypto_exit_compress_ops
ffffffff811609b0 T crypto_remove_final
ffffffff81160a20 T crypto_get_attr_type
ffffffff81160a50 T crypto_attr_u32
ffffffff81160a90 T crypto_init_queue
ffffffff81160ab0 T crypto_tfm_in_queue
ffffffff81160af0 T crypto_xor
ffffffff81160b50 T crypto_inc
ffffffff81160bc0 T crypto_check_attr_type
ffffffff81160c10 T __crypto_dequeue_request
ffffffff81160c70 T crypto_dequeue_request
ffffffff81160c80 T crypto_enqueue_request
ffffffff81160cc0 T crypto_alloc_instance2
ffffffff81160d90 T crypto_unregister_notifier
ffffffff81160da0 T crypto_register_notifier
ffffffff81160db0 T crypto_drop_spawn
ffffffff81160e10 T crypto_init_spawn
ffffffff81160e90 T crypto_alloc_instance
ffffffff81160ef0 T crypto_init_spawn2
ffffffff81160f10 T crypto_register_template
ffffffff81160fb0 t crypto_check_alg
ffffffff81161020 t __crypto_register_alg
ffffffff811611e0 t __crypto_lookup_template
ffffffff81161260 t crypto_destroy_instance
ffffffff81161280 T crypto_larval_error
ffffffff811612d0 T crypto_attr_alg_name
ffffffff81161310 T crypto_attr_alg2
ffffffff81161350 t crypto_spawn_alg.isra.8
ffffffff811613d0 T crypto_spawn_tfm2
ffffffff81161440 T crypto_spawn_tfm
ffffffff811614b0 T crypto_remove_spawns
ffffffff811617d0 t crypto_remove_alg
ffffffff81161830 T crypto_unregister_instance
ffffffff81161910 T crypto_unregister_template
ffffffff81161a00 T crypto_alg_tested
ffffffff81161bf0 t crypto_wait_for_test
ffffffff81161c60 T crypto_unregister_alg
ffffffff81161cd0 T crypto_unregister_algs
ffffffff81161d40 T crypto_lookup_template
ffffffff81161d70 T crypto_register_instance
ffffffff81161e50 T crypto_register_alg
ffffffff81161ec0 T crypto_register_algs
ffffffff81161f30 T scatterwalk_map
ffffffff81161fa0 T scatterwalk_start
ffffffff81161fc0 t scatterwalk_pagedone
ffffffff81162040 T scatterwalk_copychunks
ffffffff81162100 T scatterwalk_done
ffffffff81162140 T scatterwalk_map_and_copy
ffffffff81162210 t crypto_info_open
ffffffff81162220 t c_show
ffffffff811623c0 t c_next
ffffffff811623d0 t c_stop
ffffffff811623e0 t c_start
ffffffff81162400 T crypto_aead_setauthsize
ffffffff81162460 t crypto_aead_ctxsize
ffffffff81162470 t no_givcrypt
ffffffff81162480 t crypto_init_aead_ops
ffffffff81162520 t aead_null_givencrypt
ffffffff81162530 t aead_null_givdecrypt
ffffffff81162540 t crypto_init_nivaead_ops
ffffffff811625c0 t crypto_nivaead_report
ffffffff81162660 t crypto_aead_report
ffffffff81162710 t crypto_nivaead_show
ffffffff811627c0 t crypto_aead_show
ffffffff81162870 t setkey
ffffffff81162970 t crypto_nivaead_default
ffffffff81162b40 T crypto_lookup_aead
ffffffff81162c00 T crypto_alloc_aead
ffffffff81162cc0 T crypto_grab_aead
ffffffff81162d20 T aead_geniv_exit
ffffffff81162d30 T aead_geniv_init
ffffffff81162d70 T aead_geniv_free
ffffffff81162d90 T aead_geniv_alloc
ffffffff81163100 t crypto_ablkcipher_ctxsize
ffffffff81163110 T skcipher_null_givencrypt
ffffffff81163120 T skcipher_null_givdecrypt
ffffffff81163130 t crypto_init_ablkcipher_ops
ffffffff81163190 t no_givdecrypt
ffffffff811631a0 t crypto_init_givcipher_ops
ffffffff81163220 t skcipher_module_exit
ffffffff81163230 t crypto_givcipher_report
ffffffff811632e0 t crypto_ablkcipher_report
ffffffff81163390 t crypto_givcipher_show
ffffffff81163460 t crypto_ablkcipher_show
ffffffff81163530 t setkey
ffffffff81163660 t crypto_givcipher_default
ffffffff81163890 T crypto_lookup_skcipher
ffffffff811639a0 T crypto_alloc_ablkcipher
ffffffff81163a60 T crypto_grab_skcipher
ffffffff81163ac0 T __ablkcipher_walk_complete
ffffffff81163b40 T ablkcipher_walk_done
ffffffff81163d30 t ablkcipher_walk_next
ffffffff81163f80 T ablkcipher_walk_phys
ffffffff81164160 T crypto_default_geniv
ffffffff811641b0 t async_encrypt
ffffffff811641f0 t async_decrypt
ffffffff81164230 t crypto_blkcipher_ctxsize
ffffffff81164260 t crypto_blkcipher_report
ffffffff81164310 t crypto_blkcipher_show
ffffffff811643b0 t setkey
ffffffff811644e0 t async_setkey
ffffffff811644f0 T skcipher_geniv_exit
ffffffff81164500 T skcipher_geniv_init
ffffffff81164540 T skcipher_geniv_free
ffffffff81164560 T skcipher_geniv_alloc
ffffffff811649a0 T blkcipher_walk_done
ffffffff81164bd0 t blkcipher_walk_next
ffffffff81164f70 t blkcipher_walk_first
ffffffff81165150 T blkcipher_walk_virt_block
ffffffff81165160 T blkcipher_walk_phys
ffffffff81165180 T blkcipher_walk_virt
ffffffff811651a0 t crypto_init_blkcipher_ops
ffffffff81165250 t chainiv_module_exit
ffffffff81165260 t chainiv_free
ffffffff81165280 t chainiv_alloc
ffffffff81165360 t async_chainiv_init
ffffffff811653c0 t chainiv_init
ffffffff811653e0 t chainiv_givencrypt
ffffffff811654e0 t chainiv_givencrypt_first
ffffffff81165580 t async_chainiv_exit
ffffffff811655a0 t async_chainiv_schedule_work
ffffffff811655e0 t async_chainiv_givencrypt_tail
ffffffff81165680 t async_chainiv_do_postponed
ffffffff81165720 t async_chainiv_givencrypt
ffffffff81165860 t async_chainiv_givencrypt_first
ffffffff811658f0 t eseqiv_free
ffffffff81165910 t eseqiv_init
ffffffff81165940 t eseqiv_complete2
ffffffff81165970 t eseqiv_complete
ffffffff811659a0 t eseqiv_givencrypt
ffffffff81165d90 t eseqiv_givencrypt_first
ffffffff81165e30 t eseqiv_alloc
ffffffff81165ed0 t hash_walk_next
ffffffff81165f50 t hash_walk_new_entry
ffffffff81165f90 t ahash_nosetkey
ffffffff81165fa0 t ahash_no_export
ffffffff81165fb0 t ahash_no_import
ffffffff81165fc0 t crypto_ahash_extsize
ffffffff81165fe0 t crypto_ahash_report
ffffffff81166040 t crypto_ahash_show
ffffffff811660b0 t crypto_ahash_init_tfm
ffffffff81166140 t ahash_def_finup_finish2
ffffffff81166190 t ahash_def_finup
ffffffff81166280 t ahash_def_finup_done1
ffffffff81166300 t ahash_def_finup_done2
ffffffff81166350 t ahash_op_unaligned_finish
ffffffff811663a0 t crypto_ahash_op
ffffffff81166460 T crypto_ahash_digest
ffffffff81166470 T crypto_ahash_finup
ffffffff81166480 T crypto_ahash_final
ffffffff81166490 t ahash_op_unaligned_done
ffffffff811664e0 T ahash_attr_alg
ffffffff81166510 T crypto_init_ahash_spawn
ffffffff81166520 T ahash_free_instance
ffffffff81166540 T crypto_unregister_ahash
ffffffff81166550 T crypto_register_ahash
ffffffff81166590 T crypto_alloc_ahash
ffffffff811665a0 T crypto_hash_walk_done
ffffffff811666a0 T crypto_hash_walk_first
ffffffff811666e0 T crypto_ahash_setkey
ffffffff811667d0 T ahash_register_instance
ffffffff81166810 T crypto_hash_walk_first_compat
ffffffff81166840 t shash_no_setkey
ffffffff81166850 t shash_async_init
ffffffff81166870 t shash_async_export
ffffffff81166890 t shash_async_import
ffffffff811668b0 t shash_compat_init
ffffffff811668d0 t crypto_shash_ctxsize
ffffffff811668e0 t crypto_shash_init_tfm
ffffffff811668f0 t crypto_shash_extsize
ffffffff81166900 T shash_attr_alg
ffffffff81166930 t crypto_shash_report
ffffffff81166990 t crypto_shash_show
ffffffff811669f0 t crypto_exit_shash_ops_async
ffffffff81166a00 t crypto_init_shash_ops
ffffffff81166b00 t crypto_exit_shash_ops_compat
ffffffff81166b20 T crypto_init_shash_spawn
ffffffff81166b30 T shash_free_instance
ffffffff81166b50 T shash_register_instance
ffffffff81166c00 t shash_default_import
ffffffff81166c20 t shash_default_export
ffffffff81166c40 T crypto_shash_setkey
ffffffff81166d40 t shash_compat_setkey
ffffffff81166d50 t shash_async_setkey
ffffffff81166d60 T crypto_unregister_shash
ffffffff81166d70 T crypto_register_shash
ffffffff81166e20 T crypto_alloc_shash
ffffffff81166e30 t shash_final_unaligned
ffffffff81166ed0 T crypto_shash_final
ffffffff81166ef0 t shash_compat_final
ffffffff81166f00 t shash_async_final
ffffffff81166f10 t shash_update_unaligned
ffffffff81167000 T crypto_shash_update
ffffffff81167020 t shash_compat_update
ffffffff81167070 t shash_finup_unaligned
ffffffff811670c0 T crypto_shash_finup
ffffffff811670f0 t shash_digest_unaligned
ffffffff81167170 T crypto_shash_digest
ffffffff811671a0 t shash_compat_digest
ffffffff811672e0 T shash_ahash_update
ffffffff81167320 t shash_async_update
ffffffff81167330 T shash_ahash_finup
ffffffff811673b0 t shash_async_finup
ffffffff811673d0 T shash_ahash_digest
ffffffff811674e0 t shash_async_digest
ffffffff81167500 T crypto_init_shash_ops_async
ffffffff811675d0 t crypto_pcomp_init
ffffffff811675e0 t crypto_pcomp_extsize
ffffffff811675f0 t crypto_pcomp_init_tfm
ffffffff81167600 T crypto_unregister_pcomp
ffffffff81167610 T crypto_register_pcomp
ffffffff81167630 t crypto_pcomp_report
ffffffff81167680 t crypto_pcomp_show
ffffffff81167690 T crypto_alloc_pcomp
ffffffff811676a0 t cryptomgr_notify
ffffffff81167b40 t cryptomgr_probe
ffffffff81167c30 t cryptomgr_test
ffffffff81167c50 T alg_test
ffffffff81167c60 T crypto_aes_expand_key
ffffffff81168040 T crypto_aes_set_key
ffffffff81168070 t aes_encrypt
ffffffff81168e30 t aes_decrypt
ffffffff81169c50 t chksum_init
ffffffff81169c60 t chksum_setkey
ffffffff81169c80 t chksum_final
ffffffff81169c90 t crc32c_cra_init
ffffffff81169ca0 t chksum_digest
ffffffff81169cc0 t chksum_finup
ffffffff81169ce0 t chksum_update
ffffffff81169d00 t crypto_init_rng_ops
ffffffff81169d20 t crypto_rng_ctxsize
ffffffff81169d30 t crypto_rng_report
ffffffff81169d90 t crypto_rng_show
ffffffff81169de0 t rngapi_reset
ffffffff81169e70 T crypto_put_default_rng
ffffffff81169ec0 T crypto_get_default_rng
ffffffff81169f70 t krng_reset
ffffffff81169f80 t krng_get_random
ffffffff81169fa0 T elv_rb_find
ffffffff81169fe0 T elv_dispatch_sort
ffffffff8116a0c0 T elv_dispatch_add_tail
ffffffff8116a130 T elv_rb_latter_request
ffffffff8116a160 T elv_rb_former_request
ffffffff8116a190 t elevator_find
ffffffff8116a200 t elevator_get
ffffffff8116a2c0 t elevator_release
ffffffff8116a2f0 t elv_attr_store
ffffffff8116a3a0 t elv_attr_show
ffffffff8116a440 T elevator_exit
ffffffff8116a490 T elv_unregister
ffffffff8116a500 T elv_register
ffffffff8116a6b0 T elv_unregister_queue
ffffffff8116a700 T elv_abort_queue
ffffffff8116a750 T elv_rb_add
ffffffff8116a7b0 t elevator_alloc.isra.12
ffffffff8116a860 T elevator_init
ffffffff8116a9a0 t elv_rqhash_find.isra.13
ffffffff8116aa80 t elv_rqhash_add.isra.14
ffffffff8116ab00 T elv_rb_del
ffffffff8116ab60 T elv_rq_merge_ok
ffffffff8116abc0 T elv_merge
ffffffff8116acd0 T elv_merged_request
ffffffff8116ad60 T elv_merge_requests
ffffffff8116ae30 T elv_bio_merged
ffffffff8116ae50 T elv_drain_elevator
ffffffff8116aeb0 T __elv_add_request
ffffffff8116b090 T elv_add_request
ffffffff8116b0f0 T elv_requeue_request
ffffffff8116b180 T elv_quiesce_start
ffffffff8116b1d0 T elv_quiesce_end
ffffffff8116b200 T elv_latter_request
ffffffff8116b220 T elv_former_request
ffffffff8116b240 T elv_set_request
ffffffff8116b260 T elv_put_request
ffffffff8116b280 T elv_may_queue
ffffffff8116b2a0 T elv_completed_request
ffffffff8116b2f0 T __elv_register_queue
ffffffff8116b380 T elv_register_queue
ffffffff8116b390 T elevator_change
ffffffff8116b520 T elv_iosched_store
ffffffff8116b580 T elv_iosched_show
ffffffff8116b6b0 T blk_get_backing_dev_info
ffffffff8116b6e0 T blk_add_request_payload
ffffffff8116b780 T blk_unprep_request
ffffffff8116b7b0 T blk_lld_busy
ffffffff8116b7d0 T blk_start_plug
ffffffff8116b820 t plug_rq_cmp
ffffffff8116b830 T part_round_stats
ffffffff8116b960 T __blk_run_queue
ffffffff8116b980 T kblockd_schedule_delayed_work
ffffffff8116b990 T kblockd_schedule_work
ffffffff8116b9a0 T blk_rq_unprep_clone
ffffffff8116b9d0 T blk_start_queue
ffffffff8116ba20 T blk_run_queue
ffffffff8116ba70 T blk_dump_rq_flags
ffffffff8116bb70 t drive_stat_acct
ffffffff8116bd00 t handle_bad_sector
ffffffff8116bd90 t generic_make_request_checks
ffffffff8116bf60 t blk_delay_work
ffffffff8116bfa0 T blk_get_queue
ffffffff8116bfd0 T blk_alloc_queue_node
ffffffff8116c1d0 T blk_alloc_queue
ffffffff8116c1e0 T blk_put_queue
ffffffff8116c1f0 T blk_stop_queue
ffffffff8116c220 T blk_run_queue_async
ffffffff8116c260 T blk_sync_queue
ffffffff8116c280 T blk_delay_queue
ffffffff8116c2b0 T blk_rq_init
ffffffff8116c410 T blk_rq_prep_clone
ffffffff8116c570 T blk_rq_err_bytes
ffffffff8116c5d0 t queue_unplugged.isra.41
ffffffff8116c620 T blk_rq_check_limits
ffffffff8116c6d0 T blk_insert_cloned_request
ffffffff8116c7b0 t req_bio_endio.isra.43
ffffffff8116c870 T blk_update_request
ffffffff8116cc70 t blk_update_bidi_request
ffffffff8116cd00 T generic_make_request
ffffffff8116cde0 T submit_bio
ffffffff8116ced0 t bio_attempt_back_merge
ffffffff8116cf60 t bio_attempt_front_merge
ffffffff8116d050 t __freed_request
ffffffff8116d100 t freed_request
ffffffff8116d170 t get_request
ffffffff8116d600 t get_request_wait
ffffffff8116d7c0 T __blk_put_request
ffffffff8116d8b0 t blk_finish_request
ffffffff8116db30 t blk_end_bidi_request
ffffffff8116dbb0 T blk_end_request
ffffffff8116dbc0 T blk_end_request_err
ffffffff8116dc10 T blk_end_request_cur
ffffffff8116dc40 T blk_put_request
ffffffff8116dca0 T blk_end_request_all
ffffffff8116dcd0 T blk_requeue_request
ffffffff8116dd30 T blk_get_request
ffffffff8116ddb0 T blk_make_request
ffffffff8116de40 T blk_queue_congestion_threshold
ffffffff8116de80 T blk_init_allocated_queue
ffffffff8116dfa0 T blk_drain_queue
ffffffff8116e080 T blk_cleanup_queue
ffffffff8116e150 T blk_init_queue_node
ffffffff8116e1c0 T blk_init_queue
ffffffff8116e1d0 T blk_dequeue_request
ffffffff8116e250 T blk_start_request
ffffffff8116e290 T __blk_end_bidi_request
ffffffff8116e2d0 T __blk_end_request_all
ffffffff8116e300 T blk_peek_request
ffffffff8116e4c0 T blk_fetch_request
ffffffff8116e4f0 T __blk_end_request
ffffffff8116e500 T __blk_end_request_err
ffffffff8116e550 T __blk_end_request_cur
ffffffff8116e580 T blk_rq_bio_prep
ffffffff8116e630 T init_request_from_bio
ffffffff8116e690 T blk_flush_plug_list
ffffffff8116e8b0 T blk_finish_plug
ffffffff8116e8e0 T blk_queue_bio
ffffffff8116ec30 T blk_queue_free_tags
ffffffff8116ec40 T blk_queue_invalidate_tags
ffffffff8116ec80 T blk_queue_find_tag
ffffffff8116ecb0 T blk_queue_start_tag
ffffffff8116edc0 T blk_queue_end_tag
ffffffff8116eea0 t init_tag_map
ffffffff8116ef70 T blk_queue_resize_tags
ffffffff8116f040 t __blk_queue_init_tags
ffffffff8116f0b0 T blk_queue_init_tags
ffffffff8116f160 T blk_init_tags
ffffffff8116f170 t __blk_free_tags
ffffffff8116f1f0 T blk_free_tags
ffffffff8116f210 T __blk_queue_free_tags
ffffffff8116f240 t queue_var_store
ffffffff8116f290 t queue_ra_store
ffffffff8116f2c0 t queue_store_random
ffffffff8116f330 t queue_store_iostats
ffffffff8116f3a0 t queue_rq_affinity_store
ffffffff8116f440 t queue_nomerges_store
ffffffff8116f4c0 t queue_store_nonrot
ffffffff8116f540 t queue_max_sectors_store
ffffffff8116f5e0 t queue_var_show
ffffffff8116f600 t queue_show_random
ffffffff8116f610 t queue_show_iostats
ffffffff8116f620 t queue_rq_affinity_show
ffffffff8116f650 t queue_nomerges_show
ffffffff8116f680 t queue_show_nonrot
ffffffff8116f6a0 t queue_discard_zeroes_data_show
ffffffff8116f6c0 t queue_discard_granularity_show
ffffffff8116f6d0 t queue_io_opt_show
ffffffff8116f6e0 t queue_io_min_show
ffffffff8116f6f0 t queue_physical_block_size_show
ffffffff8116f700 t queue_logical_block_size_show
ffffffff8116f730 t queue_max_integrity_segments_show
ffffffff8116f740 t queue_max_segments_show
ffffffff8116f750 t queue_max_sectors_show
ffffffff8116f760 t queue_max_hw_sectors_show
ffffffff8116f770 t queue_ra_show
ffffffff8116f780 t queue_requests_show
ffffffff8116f790 t queue_discard_max_show
ffffffff8116f7c0 t queue_requests_store
ffffffff8116f970 t queue_attr_store
ffffffff8116fa30 t queue_attr_show
ffffffff8116fae0 t blk_release_queue
ffffffff8116fb90 t queue_max_segment_size_show
ffffffff8116fbc0 T blk_register_queue
ffffffff8116fca0 T blk_unregister_queue
ffffffff8116fd20 T blkdev_issue_flush
ffffffff8116fdf0 t bio_end_flush
ffffffff8116fe20 t blk_flush_complete_seq
ffffffff811700c0 t flush_end_io
ffffffff81170200 t flush_data_end_io
ffffffff81170230 T blk_insert_flush
ffffffff81170340 T blk_abort_flushes
ffffffff81170470 T blk_queue_prep_rq
ffffffff81170480 T blk_queue_unprep_rq
ffffffff81170490 T blk_queue_merge_bvec
ffffffff811704a0 T blk_queue_softirq_done
ffffffff811704b0 T blk_queue_rq_timeout
ffffffff811704c0 T blk_queue_rq_timed_out
ffffffff811704d0 T blk_queue_lld_busy
ffffffff811704e0 T blk_set_default_limits
ffffffff81170560 T blk_set_stacking_limits
ffffffff811705e0 T blk_queue_max_discard_sectors
ffffffff811705f0 T blk_queue_logical_block_size
ffffffff81170620 T blk_queue_physical_block_size
ffffffff81170650 T blk_queue_alignment_offset
ffffffff81170670 T blk_limits_io_min
ffffffff81170690 T blk_queue_io_min
ffffffff811706c0 T blk_limits_io_opt
ffffffff811706d0 T blk_queue_io_opt
ffffffff811706e0 T blk_queue_dma_pad
ffffffff811706f0 T blk_queue_update_dma_pad
ffffffff81170700 T blk_queue_dma_alignment
ffffffff81170710 T blk_queue_flush_queueable
ffffffff81170730 T blk_queue_flush
ffffffff811707c0 T blk_queue_segment_boundary
ffffffff81170800 T blk_queue_max_segment_size
ffffffff81170840 T blk_queue_max_segments
ffffffff81170880 T blk_queue_dma_drain
ffffffff811708f0 T blk_limits_max_hw_sectors
ffffffff81170940 T blk_queue_max_hw_sectors
ffffffff81170950 T blk_stack_limits
ffffffff81170ce0 T bdev_stack_limits
ffffffff81170d10 T blk_queue_stack_limits
ffffffff81170d30 T blk_queue_bounce_limit
ffffffff81170db0 T blk_queue_make_request
ffffffff81170eb0 T blk_queue_update_dma_alignment
ffffffff81170ed0 T disk_stack_limits
ffffffff81170f40 T ioc_cgroup_changed
ffffffff81170f80 t icq_free_icq_rcu
ffffffff81170f90 T ioc_lookup_icq
ffffffff81170fe0 t ioc_destroy_icq
ffffffff811710b0 t ioc_release_fn
ffffffff81171160 T put_io_context
ffffffff81171220 T icq_get_changed
ffffffff81171270 T get_io_context
ffffffff81171280 T exit_io_context
ffffffff81171390 T ioc_clear_queue
ffffffff811713f0 T create_io_context_slowpath
ffffffff811714e0 T get_task_io_context
ffffffff811715c0 T ioc_create_icq
ffffffff811717d0 T ioc_set_icq_flags
ffffffff811717f0 T ioc_ioprio_changed
ffffffff81171840 t __blk_rq_unmap_user
ffffffff81171870 T blk_rq_unmap_user
ffffffff811718c0 T blk_rq_map_user_iov
ffffffff81171a10 T blk_rq_append_bio
ffffffff81171a70 T blk_rq_map_kern
ffffffff81171bd0 T blk_rq_map_user
ffffffff81171e30 t blk_end_sync_rq
ffffffff81171e60 T blk_execute_rq_nowait
ffffffff81171f90 T blk_execute_rq
ffffffff81172040 t __blk_recalc_rq_segments
ffffffff81172290 T blk_recount_segments
ffffffff811722d0 T blk_rq_map_sg
ffffffff81172640 T blk_recalc_rq_segments
ffffffff81172660 T ll_back_merge_fn
ffffffff81172760 T ll_front_merge_fn
ffffffff81172860 T blk_rq_set_mixed_merge
ffffffff811728e0 t attempt_merge
ffffffff81172cd0 T attempt_back_merge
ffffffff81172d30 T attempt_front_merge
ffffffff81172d90 T blk_attempt_req_merge
ffffffff81172da0 T blk_rq_merge_ok
ffffffff81172e10 T blk_try_merge
ffffffff81172e40 t blk_done_softirq
ffffffff81172ed0 t trigger_softirq
ffffffff81172f30 T __blk_complete_request
ffffffff81173050 T blk_complete_request
ffffffff81173070 T blk_delete_timer
ffffffff811730a0 T blk_add_timer
ffffffff81173160 t blk_rq_timed_out
ffffffff811731c0 T blk_abort_request
ffffffff81173200 T blk_abort_queue
ffffffff81173310 T blk_rq_timed_out_timer
ffffffff81173430 T __blk_iopoll_complete
ffffffff81173460 T blk_iopoll_complete
ffffffff811734c0 t blk_iopoll_softirq
ffffffff811735c0 T blk_iopoll_sched
ffffffff81173610 T blk_iopoll_init
ffffffff81173720 T blk_iopoll_disable
ffffffff81173770 T blk_iopoll_enable
ffffffff81173780 T blkdev_issue_zeroout
ffffffff81173900 t bio_batch_end_io
ffffffff81173940 T blkdev_issue_discard
ffffffff81173b10 T __blkdev_driver_ioctl
ffffffff81173b40 t blkpg_ioctl
ffffffff81173dc0 T blkdev_ioctl
ffffffff81174500 T disk_part_iter_init
ffffffff81174540 T disk_map_sector_rcu
ffffffff811745b0 t exact_match
ffffffff811745c0 t block_devnode
ffffffff811745e0 T set_device_ro
ffffffff811745f0 T bdev_read_only
ffffffff81174610 t partitions_open
ffffffff81174620 t diskstats_open
ffffffff81174630 t disk_seqf_next
ffffffff81174660 t disk_seqf_stop
ffffffff81174690 t disk_capability_show
ffffffff811746c0 t disk_discard_alignment_show
ffffffff81174700 t disk_alignment_offset_show
ffffffff81174740 t disk_ro_show
ffffffff81174770 t disk_removable_show
ffffffff811747a0 t disk_ext_range_show
ffffffff811747d0 t disk_range_show
ffffffff81174800 t disk_events_poll_msecs_show
ffffffff81174830 t disk_release
ffffffff811748f0 t disk_seqf_start
ffffffff81174960 t show_partition_start
ffffffff811749d0 t __disk_unblock_events
ffffffff81174af0 t disk_events_workfn
ffffffff81174c40 T put_disk
ffffffff81174c60 T get_disk
ffffffff81174cd0 t exact_lock
ffffffff81174cf0 T unregister_blkdev
ffffffff81174db0 T disk_part_iter_exit
ffffffff81174dd0 t __disk_events_show
ffffffff81174e70 t disk_events_async_show
ffffffff81174e80 t disk_events_show
ffffffff81174e90 T blk_unregister_region
ffffffff81174eb0 T blk_register_region
ffffffff81174ee0 T register_blkdev
ffffffff81175010 T disk_part_iter_next
ffffffff811750f0 t diskstats_show
ffffffff81175510 T set_disk_ro
ffffffff811755c0 T disk_get_part
ffffffff81175600 T blk_lookup_devt
ffffffff811756d0 T bdget_disk
ffffffff81175710 T invalidate_partition
ffffffff81175750 t base_probe
ffffffff81175790 T get_gendisk
ffffffff81175870 t show_partition
ffffffff81175960 T blkdev_show
ffffffff811759e0 T blk_alloc_devt
ffffffff81175ab0 T add_disk
ffffffff81175ef0 T blk_free_devt
ffffffff81175f40 T disk_expand_part_tbl
ffffffff81176020 T alloc_disk_node
ffffffff81176110 T alloc_disk
ffffffff81176120 T disk_block_events
ffffffff811761e0 T del_gendisk
ffffffff81176420 t disk_events_poll_msecs_store
ffffffff811764b0 T disk_unblock_events
ffffffff811764d0 T disk_flush_events
ffffffff81176560 t disk_events_set_dfl_poll_msecs
ffffffff811765b0 T disk_clear_events
ffffffff811766e0 T scsi_verify_blk_ioctl
ffffffff811767b0 T blk_verify_command
ffffffff81176810 T sg_scsi_ioctl
ffffffff81176b70 t sg_io
ffffffff81176fb0 t __blk_send_generic.constprop.12
ffffffff81177050 T scsi_cmd_ioctl
ffffffff811774c0 T scsi_cmd_blk_ioctl
ffffffff81177540 t whole_disk_show
ffffffff81177550 t part_discard_alignment_show
ffffffff81177580 t part_alignment_offset_show
ffffffff811775b0 t part_ro_show
ffffffff811775e0 t part_start_show
ffffffff81177610 t part_partition_show
ffffffff81177640 T part_inflight_show
ffffffff81177670 T part_size_show
ffffffff811776a0 t part_release
ffffffff811776d0 T read_dev_sector
ffffffff81177770 t disk_unlock_native_capacity
ffffffff811777f0 t delete_partition_rcu_cb
ffffffff811778e0 T part_stat_show
ffffffff81177c50 T __bdevname
ffffffff81177c80 T disk_name
ffffffff81177d30 T bdevname
ffffffff81177d50 T __delete_partition
ffffffff81177d70 T delete_partition
ffffffff81177e10 t drop_partitions.isra.13.part.14
ffffffff81177e60 T add_partition
ffffffff81178300 T rescan_partitions
ffffffff81178580 T invalidate_partitions
ffffffff81178620 T check_partition
ffffffff81178820 t parse_solaris_x86
ffffffff81178830 t parse_freebsd
ffffffff81178840 t parse_netbsd
ffffffff81178850 t parse_openbsd
ffffffff81178860 t parse_unixware
ffffffff81178870 t parse_minix
ffffffff81178880 t parse_extended
ffffffff81178b20 T msdos_partition
ffffffff81179040 t bsg_poll
ffffffff81179120 t bsg_get_done_cmd
ffffffff81179260 t blk_complete_sgv4_hdr_rq
ffffffff81179410 t bsg_free_command
ffffffff81179470 t bsg_kref_release_function
ffffffff81179490 t bsg_release
ffffffff811796c0 t bsg_open
ffffffff81179980 t bsg_rq_end_io
ffffffff81179a30 t bsg_devnode
ffffffff81179a60 T bsg_register_queue
ffffffff81179d80 T bsg_unregister_queue
ffffffff81179e30 t bsg_read
ffffffff81179fd0 t bsg_map_hdr.isra.7
ffffffff8117a310 t bsg_ioctl
ffffffff8117a550 t bsg_write
ffffffff8117a850 T bsg_remove_queue
ffffffff8117a8f0 T bsg_setup_queue
ffffffff8117a970 t bsg_softirq_done
ffffffff8117a9b0 t bsg_map_buffer
ffffffff8117aa20 T bsg_request_fn
ffffffff8117abf0 T bsg_goose_queue
ffffffff8117ac10 T bsg_job_done
ffffffff8117aca0 T cgroup_to_blkio_cgroup
ffffffff8117acb0 T task_blkio_cgroup
ffffffff8117acc0 T blkiocg_update_dispatch_stats
ffffffff8117ad50 T blkiocg_update_io_merged_stats
ffffffff8117adc0 T blkiocg_lookup_group
ffffffff8117ae00 T blkio_policy_register
ffffffff8117ae40 T blkio_policy_unregister
ffffffff8117ae80 t blkiocg_reset_stats
ffffffff8117b090 t blkio_get_key_name
ffffffff8117b1c0 t blkiocg_create
ffffffff8117b230 t blkio_read_policy_node_files
ffffffff8117b350 t blkiocg_file_read
ffffffff8117b3a0 t blkiocg_populate
ffffffff8117b3c0 t blkiocg_attach
ffffffff8117b440 t blkiocg_can_attach
ffffffff8117b4e0 T blkcg_get_weight
ffffffff8117b550 T blkiocg_update_timeslice_used
ffffffff8117b5a0 T blkiocg_update_io_add_stats
ffffffff8117b630 t blkiocg_destroy
ffffffff8117b7b0 T blkiocg_del_blkio_group
ffffffff8117b840 T blkiocg_add_blkio_group
ffffffff8117b910 T blkio_alloc_blkg_stats
ffffffff8117b940 T blkiocg_update_completion_stats
ffffffff8117ba40 T blkiocg_update_io_remove_stats
ffffffff8117bb10 t blkio_read_stat_cpu.isra.11
ffffffff8117bb90 t blkiocg_file_write_u64
ffffffff8117bce0 t blkiocg_file_read_u64
ffffffff8117bd00 t blkio_delete_rule_command
ffffffff8117bd50 t blkiocg_file_write
ffffffff8117c370 t blkio_read_blkg_stats.isra.16
ffffffff8117c620 t blkiocg_file_read_map
ffffffff8117c740 T blkcg_get_read_bps
ffffffff8117c7d0 T blkcg_get_write_bps
ffffffff8117c860 T blkcg_get_read_iops
ffffffff8117c8e0 T blkcg_get_write_iops
ffffffff8117c960 t noop_merged_requests
ffffffff8117c980 t noop_add_request
ffffffff8117c9a0 t noop_former_request
ffffffff8117c9c0 t noop_latter_request
ffffffff8117c9e0 t noop_init_queue
ffffffff8117ca10 t noop_dispatch
ffffffff8117ca50 t noop_exit_queue
ffffffff8117ca60 t cfq_update_blkio_group_weight
ffffffff8117ca80 t cfq_activate_request
ffffffff8117caa0 t cfq_init_icq
ffffffff8117cab0 t __cfq_update_io_thinktime
ffffffff8117cb10 t cfq_var_store
ffffffff8117cb50 t cfq_target_latency_store
ffffffff8117cb90 t cfq_low_latency_store
ffffffff8117cbd0 t cfq_group_idle_store
ffffffff8117cc10 t cfq_slice_idle_store
ffffffff8117cc50 t cfq_slice_async_rq_store
ffffffff8117cc80 t cfq_slice_async_store
ffffffff8117ccc0 t cfq_slice_sync_store
ffffffff8117cd00 t cfq_back_seek_penalty_store
ffffffff8117cd30 t cfq_back_seek_max_store
ffffffff8117cd60 t cfq_fifo_expire_async_store
ffffffff8117cda0 t cfq_fifo_expire_sync_store
ffffffff8117cde0 t cfq_quantum_store
ffffffff8117ce10 t cfq_var_show
ffffffff8117ce30 t cfq_target_latency_show
ffffffff8117ce60 t cfq_low_latency_show
ffffffff8117ce70 t cfq_group_idle_show
ffffffff8117cea0 t cfq_slice_idle_show
ffffffff8117ced0 t cfq_slice_async_rq_show
ffffffff8117cee0 t cfq_slice_async_show
ffffffff8117cf10 t cfq_slice_sync_show
ffffffff8117cf40 t cfq_back_seek_penalty_show
ffffffff8117cf50 t cfq_back_seek_max_show
ffffffff8117cf60 t cfq_fifo_expire_async_show
ffffffff8117cf90 t cfq_fifo_expire_sync_show
ffffffff8117cfc0 t cfq_quantum_show
ffffffff8117cfd0 t cfq_should_idle
ffffffff8117d070 t __cfq_set_active_queue
ffffffff8117d100 t cfq_deactivate_request
ffffffff8117d140 t cfq_rb_erase
ffffffff8117d1a0 t cfq_del_cfqq_rr
ffffffff8117d280 t cfq_group_service_tree_add
ffffffff8117d330 t cfq_prio_tree_add
ffffffff8117d3f0 t cfq_rb_first
ffffffff8117d440 t cfq_get_next_cfqg
ffffffff8117d490 t cfq_init_queue
ffffffff8117d840 t cfq_kick_queue
ffffffff8117d8a0 t cfq_allow_merge
ffffffff8117d940 t cfq_init_prio_data
ffffffff8117da70 t cfq_may_queue
ffffffff8117daf0 t cfq_find_cfqg
ffffffff8117db90 t cfq_bio_merged
ffffffff8117dbc0 t cfq_merge
ffffffff8117dc80 t cfqq_type
ffffffff8117dca0 t cfq_service_tree_add
ffffffff8117df80 t __cfq_slice_expired
ffffffff8117e3a0 t cfq_idle_slice_timer
ffffffff8117e460 t cfq_choose_req.isra.73
ffffffff8117e570 t cfq_find_next_rq
ffffffff8117e640 t cfq_remove_request
ffffffff8117e760 t cfq_merged_requests
ffffffff8117e870 t cfq_dispatch_insert
ffffffff8117e940 t cfq_add_rq_rb
ffffffff8117ea30 t cfq_put_cfqg
ffffffff8117eaf0 t cfq_put_queue
ffffffff8117ebc0 t cfq_exit_cfqq
ffffffff8117ec70 t cfq_exit_icq
ffffffff8117ecd0 t cfq_destroy_cfqg.isra.87
ffffffff8117ed20 t cfq_unlink_blkio_group
ffffffff8117ed90 t cfq_exit_queue
ffffffff8117eee0 t cfq_insert_request
ffffffff8117f3b0 t cfq_put_request
ffffffff8117f440 t cfq_get_queue
ffffffff8117f9f0 t cfq_set_request
ffffffff8117fc90 t cfq_close_cooperator
ffffffff8117fe20 t cfq_dispatch_requests
ffffffff811808a0 t cfq_completed_request
ffffffff81180f20 t cfq_merged_request
ffffffff81180fe0 T compat_blkdev_ioctl
ffffffff81182230 T argv_free
ffffffff81182270 T argv_split
ffffffff81182390 T module_bug_finalize
ffffffff81182460 T module_bug_cleanup
ffffffff811824a0 T find_bug
ffffffff81182570 T report_bug
ffffffff81182690 T memparse
ffffffff81182720 T get_option
ffffffff811827c0 T get_options
ffffffff81182860 T cpumask_next_and
ffffffff811828a0 T __next_cpu
ffffffff811828d0 T __first_cpu
ffffffff811828f0 T cpumask_any_but
ffffffff81182930 T _atomic_dec_and_lock
ffffffff811829a0 T decompress_method
ffffffff81182a10 t cmp_ex
ffffffff81182a30 T sort_extable
ffffffff81182a50 T trim_init_extable
ffffffff81182b20 T search_extable
ffffffff81182b60 T idr_for_each
ffffffff81182c40 T idr_get_next
ffffffff81182d10 T idr_init
ffffffff81182d30 T ida_init
ffffffff81182d60 t get_from_free_list
ffffffff81182dc0 t free_bitmap
ffffffff81182e30 T idr_replace
ffffffff81182ee0 T idr_destroy
ffffffff81182f20 T ida_destroy
ffffffff81182f40 t idr_layer_rcu_free
ffffffff81182f60 t idr_get_empty_slot
ffffffff81183270 T ida_get_new_above
ffffffff81183550 T ida_get_new
ffffffff81183560 t idr_get_new_above_int
ffffffff811835e0 T idr_get_new
ffffffff81183610 T idr_get_new_above
ffffffff81183640 T idr_pre_get
ffffffff811836d0 T idr_remove_all
ffffffff811837c0 T idr_remove
ffffffff81183980 T ida_remove
ffffffff81183a80 T idr_find
ffffffff81183b20 T ida_pre_get
ffffffff81183ba0 T ida_simple_get
ffffffff81183ca0 T ida_simple_remove
ffffffff81183d00 T int_sqrt
ffffffff81183d50 T ioremap_page_range
ffffffff81184060 t kobj_attr_show
ffffffff81184080 t kobj_attr_store
ffffffff811840a0 t kset_release
ffffffff811840b0 t dynamic_kobj_release
ffffffff811840c0 T kobject_get
ffffffff81184100 T kobject_put
ffffffff81184160 t kobj_kset_leave
ffffffff811841c0 T kset_unregister
ffffffff811841e0 T kobject_del
ffffffff81184210 t kobject_release
ffffffff811842d0 t kobject_add_internal
ffffffff811844f0 T kobject_init
ffffffff811845a0 T kobject_get_path
ffffffff81184650 T kobject_rename
ffffffff81184790 T kset_register
ffffffff81184800 T kobject_set_name_vargs
ffffffff81184870 T kobject_init_and_add
ffffffff81184910 T kobject_add
ffffffff811849e0 T kobject_set_name
ffffffff81184a30 T kset_create_and_add
ffffffff81184ad0 T kobject_move
ffffffff81184c20 T kobject_create
ffffffff81184c60 T kobject_create_and_add
ffffffff81184ce0 T kset_init
ffffffff81184d20 T kset_find_obj
ffffffff81184da0 T kobj_ns_type_register
ffffffff81184e10 T kobj_ns_type_registered
ffffffff81184e60 T kobj_child_ns_ops
ffffffff81184e80 T kobj_ns_ops
ffffffff81184ea0 T kobj_ns_grab_current
ffffffff81184f00 T kobj_ns_netlink
ffffffff81184f70 T kobj_ns_initial
ffffffff81184fd0 T kobj_ns_drop
ffffffff81185030 t uevent_net_exit
ffffffff811850d0 t uevent_net_init
ffffffff81185180 T add_uevent_var
ffffffff81185290 t kobj_bcast_filter
ffffffff811852f0 T kobject_uevent_env
ffffffff81185770 T kobject_uevent
ffffffff81185780 T kobject_action_type
ffffffff81185840 T plist_add
ffffffff81185940 T plist_del
ffffffff811859b0 T heap_init
ffffffff81185a10 T heap_free
ffffffff81185a20 T heap_insert
ffffffff81185b60 t get_index.isra.1
ffffffff81185ba0 t iter_walk_down
ffffffff81185c00 t prio_tree_left
ffffffff81185c70 t prio_tree_right
ffffffff81185d10 T prio_tree_replace
ffffffff81185d70 T prio_tree_remove
ffffffff81185e60 T prio_tree_insert
ffffffff81186070 T prio_tree_next
ffffffff811862d0 t prop_norm_single
ffffffff811863d0 t prop_norm_percpu
ffffffff81186550 T prop_descriptor_init
ffffffff811865e0 T prop_change_shift
ffffffff81186700 T prop_local_init_percpu
ffffffff81186730 T prop_local_destroy_percpu
ffffffff81186740 T __prop_inc_percpu
ffffffff811867b0 T __prop_inc_percpu_max
ffffffff81186890 T prop_fraction_percpu
ffffffff81186920 T prop_local_init_single
ffffffff81186940 T prop_local_destroy_single
ffffffff81186950 T __prop_inc_single
ffffffff811869b0 T prop_fraction_single
ffffffff81186a40 t radix_tree_lookup_element
ffffffff81186ad0 T radix_tree_lookup_slot
ffffffff81186ae0 T radix_tree_lookup
ffffffff81186af0 T radix_tree_next_hole
ffffffff81186b30 T radix_tree_prev_hole
ffffffff81186b70 T radix_tree_tagged
ffffffff81186b80 t radix_tree_callback
ffffffff81186bf0 t radix_tree_node_rcu_free
ffffffff81186c30 t radix_tree_node_ctor
ffffffff81186cb0 T radix_tree_tag_clear
ffffffff81186d90 T radix_tree_range_tag_if_tagged
ffffffff81186f60 T radix_tree_next_chunk
ffffffff81187160 T radix_tree_gang_lookup_tag_slot
ffffffff81187220 T radix_tree_gang_lookup_tag
ffffffff811872f0 T radix_tree_gang_lookup_slot
ffffffff811873b0 T radix_tree_gang_lookup
ffffffff81187470 T radix_tree_tag_set
ffffffff81187510 T radix_tree_delete
ffffffff81187750 T radix_tree_preload
ffffffff811877e0 T radix_tree_tag_get
ffffffff811878a0 t radix_tree_node_alloc
ffffffff81187900 T radix_tree_insert
ffffffff81187b50 T radix_tree_locate_item
ffffffff81187c70 T ___ratelimit
ffffffff81187d90 t __rb_rotate_left
ffffffff81187e10 t __rb_rotate_right
ffffffff81187e90 T rb_insert_color
ffffffff81187fd0 T rb_erase
ffffffff811882c0 t rb_augment_path
ffffffff81188330 T rb_augment_insert
ffffffff81188360 T rb_augment_erase_end
ffffffff81188380 T rb_first
ffffffff811883a0 T rb_last
ffffffff811883c0 T rb_replace_node
ffffffff81188430 T rb_next
ffffffff81188480 T rb_augment_erase_begin
ffffffff811884e0 T rb_prev
ffffffff81188530 T reciprocal_value
ffffffff81188550 T __init_rwsem
ffffffff81188570 t __rwsem_do_wake
ffffffff81188720 T rwsem_downgrade_wake
ffffffff81188790 T rwsem_wake
ffffffff811887f0 T show_mem
ffffffff811889b0 T strcasecmp
ffffffff81188a00 T strncasecmp
ffffffff81188a60 T strcpy
ffffffff81188a80 T strncpy
ffffffff81188ab0 T strcat
ffffffff81188af0 T strcmp
ffffffff81188b20 T strncmp
ffffffff81188b80 T strchr
ffffffff81188bc0 T strrchr
ffffffff81188c00 T strnchr
ffffffff81188c40 T skip_spaces
ffffffff81188c70 T strlen
ffffffff81188c90 T strnlen
ffffffff81188cd0 T strspn
ffffffff81188d30 T strcspn
ffffffff81188d80 T strpbrk
ffffffff81188de0 T strsep
ffffffff81188e50 T sysfs_streq
ffffffff81188ec0 T strtobool
ffffffff81188f00 T memcmp
ffffffff81188f50 T memscan
ffffffff81188f80 T strstr
ffffffff81188ff0 T strnstr
ffffffff81189050 T memchr
ffffffff81189080 T memchr_inv
ffffffff81189190 T strlcpy
ffffffff811891e0 T strnicmp
ffffffff81189250 T strncat
ffffffff811892a0 T strim
ffffffff81189310 T strlcat
ffffffff81189390 T timerqueue_iterate_next
ffffffff811893b0 T timerqueue_del
ffffffff81189440 T timerqueue_add
ffffffff811894f0 t skip_atoi
ffffffff81189530 t put_dec_trunc
ffffffff81189630 t put_dec_full
ffffffff811896f0 t put_dec
ffffffff81189750 t ip4_string
ffffffff81189830 t ip6_string
ffffffff811898c0 t format_decode
ffffffff81189c70 t ip6_compressed_string
ffffffff81189e80 T simple_strtoull
ffffffff81189ed0 T simple_strtoul
ffffffff81189ee0 t number.isra.2
ffffffff8118a1d0 t string.isra.4
ffffffff8118a280 t mac_address_string.isra.5
ffffffff8118a320 t ip4_addr_string.isra.6
ffffffff8118a390 t uuid_string.isra.7
ffffffff8118a4a0 t symbol_string.isra.8
ffffffff8118a590 t resource_string.isra.9
ffffffff8118a8b0 t ip6_addr_string.isra.10
ffffffff8118a950 t pointer.isra.11
ffffffff8118ac10 T vsnprintf
ffffffff8118b200 T sprintf
ffffffff8118b250 T vsprintf
ffffffff8118b260 T snprintf
ffffffff8118b2a0 T vscnprintf
ffffffff8118b2c0 T scnprintf
ffffffff8118b300 T simple_strtoll
ffffffff8118b330 T simple_strtol
ffffffff8118b360 T vsscanf
ffffffff8118bbb0 T sscanf
ffffffff8118bc00 T num_to_str
ffffffff8118bc60 T clear_page_c
ffffffff8118bc70 T clear_page_c_e
ffffffff8118bc80 T clear_page
ffffffff8118bcc0 t copy_page_c
ffffffff8118bcd0 T copy_page
ffffffff8118bdb0 T _copy_to_user
ffffffff8118bde0 T _copy_from_user
ffffffff8118be10 T copy_user_generic_unrolled
ffffffff8118bec0 T copy_user_generic_string
ffffffff8118bf00 T copy_user_enhanced_fast_string
ffffffff8118bf10 T __copy_user_nocache
ffffffff8118bfd0 T csum_partial
ffffffff8118c160 T ip_compute_csum
ffffffff8118c180 t delay_loop
ffffffff8118c1b0 T __delay
ffffffff8118c1c0 T __const_udelay
ffffffff8118c1f0 T __udelay
ffffffff8118c220 T __ndelay
ffffffff8118c250 t delay_tsc
ffffffff8118c2c0 T use_tsc_delay
ffffffff8118c2d0 T __get_user_1
ffffffff8118c2f0 T __get_user_2
ffffffff8118c320 T __get_user_4
ffffffff8118c350 T __get_user_8
ffffffff8118c373 t bad_get_user
ffffffff8118c380 T insn_init
ffffffff8118c440 T insn_get_prefixes
ffffffff8118c690 T insn_get_opcode
ffffffff8118c810 T insn_get_modrm
ffffffff8118c950 T insn_rip_relative
ffffffff8118c990 T insn_get_sib
ffffffff8118ca10 T insn_get_displacement
ffffffff8118cb10 T insn_get_immediate
ffffffff8118ced0 T insn_get_length
ffffffff8118cf10 T __memcpy
ffffffff8118cf10 T memcpy
ffffffff8118d020 T memmove
ffffffff8118d1c0 T __memset
ffffffff8118d1c0 T memset
ffffffff8118d270 T __put_user_1
ffffffff8118d290 T __put_user_2
ffffffff8118d2c0 T __put_user_4
ffffffff8118d2f0 T __put_user_8
ffffffff8118d313 t bad_put_user
ffffffff8118d320 T __write_lock_failed
ffffffff8118d340 T __read_lock_failed
ffffffff8118d350 T call_rwsem_down_read_failed
ffffffff8118d380 T call_rwsem_down_write_failed
ffffffff8118d3a0 T call_rwsem_wake
ffffffff8118d3d0 T call_rwsem_downgrade_wake
ffffffff8118d400 T strncpy_from_user
ffffffff8118d520 T copy_from_user_nmi
ffffffff8118d610 T __clear_user
ffffffff8118d650 T __strnlen_user
ffffffff8118d6a0 T strlen_user
ffffffff8118d6f0 T copy_in_user
ffffffff8118d740 T strnlen_user
ffffffff8118d770 T clear_user
ffffffff8118d7a0 T copy_user_handle_tail
ffffffff8118d820 T inat_get_opcode_attribute
ffffffff8118d830 T inat_get_last_prefix_id
ffffffff8118d850 T inat_get_escape_attribute
ffffffff8118d8a0 T inat_get_group_attribute
ffffffff8118d910 T inat_get_avx_attribute
ffffffff8118d970 T bcd2bin
ffffffff8118d990 T bin2bcd
ffffffff8118d9b0 T iter_div_u64_rem
ffffffff8118d9d0 t u32_swap
ffffffff8118d9e0 t generic_swap
ffffffff8118da10 T sort
ffffffff8118dc40 T match_strlcpy
ffffffff8118dca0 T match_strdup
ffffffff8118dd00 t match_number
ffffffff8118ddc0 T match_hex
ffffffff8118ddd0 T match_octal
ffffffff8118dde0 T match_int
ffffffff8118ddf0 T match_token
ffffffff8118dff0 T half_md4_transform
ffffffff8118e2c0 T debug_locks_off
ffffffff8118e2f0 T prandom32
ffffffff8118e340 T random32
ffffffff8118e360 T srandom32
ffffffff8118e3d0 W bust_spinlocks
ffffffff8118e410 T hex_dump_to_buffer
ffffffff8118e750 T print_hex_dump
ffffffff8118e8a0 T print_hex_dump_bytes
ffffffff8118e8e0 T hex_to_bin
ffffffff8118e920 T hex2bin
ffffffff8118e990 T kvasprintf
ffffffff8118ea20 T kasprintf
ffffffff8118ea70 T __bitmap_empty
ffffffff8118eae0 T __bitmap_full
ffffffff8118eb50 T __bitmap_equal
ffffffff8118ebd0 T __bitmap_complement
ffffffff8118ec40 T __bitmap_and
ffffffff8118ec80 T __bitmap_or
ffffffff8118ecb0 T __bitmap_xor
ffffffff8118ece0 T __bitmap_andnot
ffffffff8118ed20 T __bitmap_intersects
ffffffff8118edb0 T __bitmap_subset
ffffffff8118ee50 T bitmap_set
ffffffff8118ef10 T bitmap_clear
ffffffff8118efd0 t __reg_op
ffffffff8118f0c0 T bitmap_find_free_region
ffffffff8118f160 T bitmap_release_region
ffffffff8118f170 T bitmap_allocate_region
ffffffff8118f1d0 T bitmap_copy_le
ffffffff8118f200 T __bitmap_weight
ffffffff8118f270 t __bitmap_parselist
ffffffff8118f410 T bitmap_fold
ffffffff8118f480 T bitmap_onto
ffffffff8118f500 T __bitmap_shift_left
ffffffff8118f620 T __bitmap_shift_right
ffffffff8118f790 T bitmap_parselist_user
ffffffff8118f7d0 T bitmap_parselist
ffffffff8118f830 T bitmap_scnprintf
ffffffff8118f8f0 T __bitmap_parse
ffffffff8118fae0 T bitmap_parse_user
ffffffff8118fb20 T bitmap_find_next_zero_area
ffffffff8118fba0 T bitmap_scnlistprintf
ffffffff8118fca0 t bitmap_pos_to_ord
ffffffff8118fd10 T bitmap_ord_to_pos
ffffffff8118fd90 T bitmap_bitremap
ffffffff8118fe30 T bitmap_remap
ffffffff8118ff10 T __sg_free_table
ffffffff8118ff90 T sg_free_table
ffffffff8118ffb0 T sg_next
ffffffff8118ffd0 T sg_last
ffffffff81190010 T sg_miter_stop
ffffffff811900a0 T sg_miter_start
ffffffff81190170 T sg_init_table
ffffffff811901c0 T __sg_alloc_table
ffffffff81190300 t sg_kfree
ffffffff81190320 t sg_kmalloc
ffffffff81190350 T sg_init_one
ffffffff811903d0 T sg_miter_next
ffffffff81190510 t sg_copy_buffer
ffffffff811905e0 T sg_copy_to_buffer
ffffffff811905f0 T sg_copy_from_buffer
ffffffff81190600 T sg_alloc_table
ffffffff81190650 T string_get_size
ffffffff811907e0 T gcd
ffffffff81190810 T lcm
ffffffff81190850 T list_sort
ffffffff81190b50 t __uuid_gen_common
ffffffff81190b90 T uuid_be_gen
ffffffff81190bb0 T uuid_le_gen
ffffffff81190bd0 T flex_array_get
ffffffff81190c30 T flex_array_get_ptr
ffffffff81190c50 t __fa_get_part
ffffffff81190d40 T flex_array_alloc
ffffffff81190ee0 T flex_array_shrink
ffffffff81190f70 T flex_array_free_parts
ffffffff81190fb0 T flex_array_free
ffffffff81190fd0 T flex_array_prealloc
ffffffff811910a0 T flex_array_clear
ffffffff81191120 T flex_array_put
ffffffff811911d0 T bsearch
ffffffff81191260 T find_last_bit
ffffffff811912c0 T find_next_bit
ffffffff81191380 T find_next_zero_bit
ffffffff81191460 T find_first_bit
ffffffff811914e0 T find_first_zero_bit
ffffffff81191560 T llist_add_batch
ffffffff81191590 T llist_del_first
ffffffff811915e0 T _parse_integer_fixup_radix
ffffffff81191650 T _parse_integer
ffffffff81191710 t _kstrtoull
ffffffff81191790 T kstrtoull
ffffffff811917a0 T kstrtoul_from_user
ffffffff81191810 T kstrtoull_from_user
ffffffff81191880 T kstrtou8
ffffffff811918c0 T kstrtou8_from_user
ffffffff81191930 T kstrtou16
ffffffff81191970 T kstrtou16_from_user
ffffffff811919e0 T kstrtouint
ffffffff81191a20 T kstrtouint_from_user
ffffffff81191a90 T _kstrtoul
ffffffff81191ac0 T kstrtoll
ffffffff81191b30 T kstrtol_from_user
ffffffff81191ba0 T kstrtoll_from_user
ffffffff81191c10 T kstrtos8
ffffffff81191c50 T kstrtos8_from_user
ffffffff81191cc0 T kstrtos16
ffffffff81191d00 T kstrtos16_from_user
ffffffff81191d70 T kstrtoint
ffffffff81191db0 T kstrtoint_from_user
ffffffff81191e20 T _kstrtol
ffffffff81191e50 T ioport_map
ffffffff81191e70 T ioport_unmap
ffffffff81191e80 t bad_io_access
ffffffff81191ec0 T pci_iounmap
ffffffff81191ef0 T iowrite32
ffffffff81191f30 T iowrite16
ffffffff81191f70 T iowrite8
ffffffff81191fb0 T ioread32be
ffffffff81192000 T ioread32
ffffffff81192040 T ioread16
ffffffff81192090 T ioread8
ffffffff811920e0 T iowrite32_rep
ffffffff81192140 T iowrite16_rep
ffffffff811921a0 T iowrite8_rep
ffffffff81192200 T ioread32_rep
ffffffff81192260 T ioread16_rep
ffffffff811922c0 T ioread8_rep
ffffffff81192320 T ioread16be
ffffffff81192370 T iowrite32be
ffffffff811923b0 T iowrite16be
ffffffff811923f0 T pci_iomap
ffffffff811924c0 W __iowrite64_copy
ffffffff811924f0 t devm_ioremap_match
ffffffff81192500 t devm_ioport_map_match
ffffffff81192510 t pcim_iomap_release
ffffffff81192550 t devm_ioport_map_release
ffffffff81192560 T devm_ioport_map
ffffffff81192600 T devm_iounmap
ffffffff81192640 T devm_ioremap_release
ffffffff81192650 T devm_ioremap
ffffffff811926f0 T devm_ioremap_nocache
ffffffff81192790 T devm_request_and_ioremap
ffffffff811928d0 T pcim_iomap_table
ffffffff81192930 T pcim_iounmap
ffffffff81192990 T pcim_iounmap_regions
ffffffff81192a00 T pcim_iomap
ffffffff81192a80 T pcim_iomap_regions
ffffffff81192b90 T pcim_iomap_regions_request_all
ffffffff81192c10 T devm_ioport_unmap
ffffffff81192c80 T __sw_hweight32
ffffffff81192cd0 T __sw_hweight16
ffffffff81192d10 T __sw_hweight8
ffffffff81192d50 T __sw_hweight64
ffffffff81192dd0 T bitrev16
ffffffff81192df0 T bitrev32
ffffffff81192e40 T crc16
ffffffff81192e70 T crc32_le
ffffffff81192f80 T __crc32c_le
ffffffff81193090 T crc32_be
ffffffff811931b0 t bitmap_clear_ll
ffffffff811932d0 T gen_pool_virt_to_phys
ffffffff81193340 T gen_pool_for_each_chunk
ffffffff811933a0 T gen_pool_avail
ffffffff811933e0 T gen_pool_size
ffffffff81193420 T gen_pool_free
ffffffff811934c0 T gen_pool_alloc
ffffffff811936c0 T gen_pool_create
ffffffff81193700 T gen_pool_add_virt
ffffffff811937c0 T gen_pool_destroy
ffffffff81193870 T inflate_fast
ffffffff81193e90 t zlib_updatewindow
ffffffff81193f80 T zlib_inflate_workspacesize
ffffffff81193f90 T zlib_inflateReset
ffffffff81194040 T zlib_inflateInit2
ffffffff811940a0 T zlib_inflate
ffffffff81195730 T zlib_inflateEnd
ffffffff81195750 T zlib_inflateIncomp
ffffffff811959c0 T zlib_inflate_blob
ffffffff81195ac0 T zlib_inflate_table
ffffffff81195ff0 T lzo1x_decompress_safe
ffffffff811965c0 t fill_temp
ffffffff81196650 t dec_vli.isra.2
ffffffff811966d0 t index_update.isra.4
ffffffff81196700 T xz_dec_reset
ffffffff81196910 T xz_dec_init
ffffffff811969b0 T xz_dec_run
ffffffff81197270 T xz_dec_end
ffffffff811972b0 t lzma_len
ffffffff811974f0 t dict_repeat.part.2
ffffffff81197570 t lzma_main
ffffffff81198010 T xz_dec_lzma2_run
ffffffff81198890 T xz_dec_lzma2_create
ffffffff81198920 T xz_dec_lzma2_reset
ffffffff811989c0 T xz_dec_lzma2_end
ffffffff811989e0 t bcj_flush
ffffffff81198a60 t bcj_apply
ffffffff81198fd0 T xz_dec_bcj_run
ffffffff811991c0 T xz_dec_bcj_create
ffffffff811991e0 T xz_dec_bcj_reset
ffffffff81199220 T percpu_counter_destroy
ffffffff81199290 T __percpu_counter_init
ffffffff81199310 T __percpu_counter_add
ffffffff81199390 T __percpu_counter_sum
ffffffff81199400 T percpu_counter_compare
ffffffff81199480 T percpu_counter_set
ffffffff811994f0 T swiotlb_nr_tbl
ffffffff81199500 t is_swiotlb_buffer
ffffffff81199540 T swiotlb_dma_mapping_error
ffffffff81199560 T swiotlb_dma_supported
ffffffff81199580 t swiotlb_full
ffffffff81199620 T swiotlb_bounce
ffffffff81199660 T swiotlb_tbl_sync_single
ffffffff811996d0 T swiotlb_tbl_unmap_single
ffffffff811997c0 T swiotlb_free_coherent
ffffffff81199870 T swiotlb_tbl_map_single
ffffffff81199ab0 t map_single
ffffffff81199b10 T swiotlb_map_page
ffffffff81199c40 T swiotlb_alloc_coherent
ffffffff81199d70 t swiotlb_sync_single
ffffffff81199e10 T swiotlb_sync_sg_for_device
ffffffff81199e70 T swiotlb_sync_sg_for_cpu
ffffffff81199ec0 T swiotlb_sync_single_for_device
ffffffff81199ed0 T swiotlb_sync_single_for_cpu
ffffffff81199ee0 t unmap_single
ffffffff81199f70 T swiotlb_unmap_page
ffffffff81199f80 T swiotlb_unmap_sg_attrs
ffffffff81199fd0 T swiotlb_unmap_sg
ffffffff81199fe0 T swiotlb_map_sg_attrs
ffffffff8119a110 T swiotlb_map_sg
ffffffff8119a120 T swiotlb_print_info
ffffffff8119a1b0 T swiotlb_late_init_with_default_size
ffffffff8119a4c0 T iommu_is_span_boundary
ffffffff8119a4f0 T iommu_area_alloc
ffffffff8119a570 t collect_syscall
ffffffff8119a670 T task_current_syscall
ffffffff8119a760 T nla_policy_len
ffffffff8119a7b0 T nla_find
ffffffff8119a830 T nla_append
ffffffff8119a880 T nla_memcpy
ffffffff8119a8a0 T __nla_reserve_nohdr
ffffffff8119a8e0 T __nla_put_nohdr
ffffffff8119a920 T nla_put_nohdr
ffffffff8119a960 T nla_reserve_nohdr
ffffffff8119a990 T __nla_reserve
ffffffff8119aa00 T __nla_put
ffffffff8119aa40 T nla_put
ffffffff8119aa80 T nla_reserve
ffffffff8119aab0 T nla_strlcpy
ffffffff8119ab30 T nla_strcmp
ffffffff8119ab90 t validate_nla
ffffffff8119ad80 T nla_parse
ffffffff8119ae50 T nla_validate
ffffffff8119aee0 T nla_memcmp
ffffffff8119af00 t irq_cpu_rmap_release
ffffffff8119af10 T free_irq_cpu_rmap
ffffffff8119af60 T alloc_cpu_rmap
ffffffff8119b000 t cpu_rmap_copy_neigh
ffffffff8119b090 T cpu_rmap_update
ffffffff8119b220 t irq_cpu_rmap_notify
ffffffff8119b250 T cpu_rmap_add
ffffffff8119b280 T irq_cpu_rmap_add
ffffffff8119b310 T dql_reset
ffffffff8119b360 T dql_init
ffffffff8119b3c0 T dql_completed
ffffffff8119b520 t __rdmsr_on_cpu
ffffffff8119b570 t __wrmsr_on_cpu
ffffffff8119b5b0 t __rdmsr_safe_on_cpu
ffffffff8119b5e0 t __wrmsr_safe_on_cpu
ffffffff8119b600 t __rdmsr_safe_regs_on_cpu
ffffffff8119b620 t __wrmsr_safe_regs_on_cpu
ffffffff8119b640 T wrmsr_safe_regs_on_cpu
ffffffff8119b670 T rdmsr_safe_regs_on_cpu
ffffffff8119b6a0 T wrmsr_safe_on_cpu
ffffffff8119b6f0 T rdmsr_safe_on_cpu
ffffffff8119b760 T wrmsr_on_cpu
ffffffff8119b7b0 T rdmsr_on_cpu
ffffffff8119b810 t __rwmsr_on_cpus
ffffffff8119b880 T wrmsr_on_cpus
ffffffff8119b890 T rdmsr_on_cpus
ffffffff8119b8a0 t __wbinvd
ffffffff8119b8b0 T wbinvd_on_all_cpus
ffffffff8119b8d0 T wbinvd_on_cpu
ffffffff8119b8f0 T msrs_free
ffffffff8119b900 T msrs_alloc
ffffffff8119b940 T native_rdmsr_safe_regs
ffffffff8119b990 T native_wrmsr_safe_regs
ffffffff8119b9e0 T __iowrite32_copy
ffffffff8119b9f0 T pci_read_vpd
ffffffff8119ba20 T pci_write_vpd
ffffffff8119ba50 T pci_vpd_truncate
ffffffff8119ba90 T pci_cfg_access_unlock
ffffffff8119bb10 T pci_cfg_access_trylock
ffffffff8119bb70 T pci_bus_set_ops
ffffffff8119bbd0 T pci_bus_write_config_byte
ffffffff8119bc50 T pci_bus_read_config_dword
ffffffff8119bcf0 T pci_bus_read_config_word
ffffffff8119bd90 T pci_bus_read_config_byte
ffffffff8119be20 t pci_wait_cfg
ffffffff8119bee0 T pci_cfg_access_lock
ffffffff8119bf30 t pci_vpd_pci22_release
ffffffff8119bf40 T pci_bus_write_config_dword
ffffffff8119bfe0 T pci_bus_write_config_word
ffffffff8119c070 T pci_user_read_config_byte
ffffffff8119c120 T pci_user_read_config_word
ffffffff8119c1e0 t pci_vpd_pci22_wait
ffffffff8119c2d0 T pci_user_read_config_dword
ffffffff8119c390 T pci_user_write_config_byte
ffffffff8119c420 T pci_user_write_config_word
ffffffff8119c4d0 t pci_vpd_pci22_read
ffffffff8119c630 T pci_user_write_config_dword
ffffffff8119c6e0 t pci_vpd_pci22_write
ffffffff8119c840 T pci_vpd_pci22_init
ffffffff8119c8e0 T pci_walk_bus
ffffffff8119c9a0 T pci_enable_bridges
ffffffff8119ca20 T pci_free_resource_list
ffffffff8119ca80 T pci_bus_resource_n
ffffffff8119cae0 T pci_bus_alloc_resource
ffffffff8119cbe0 T pci_bus_add_device
ffffffff8119cc30 T pci_add_resource_offset
ffffffff8119ccb0 T pci_add_resource
ffffffff8119ccc0 T pci_bus_add_resource
ffffffff8119cd50 T pci_bus_remove_resources
ffffffff8119cd80 T pci_bus_add_child
ffffffff8119cdb0 T pci_bus_add_devices
ffffffff8119cec0 t find_anything
ffffffff8119ced0 T pcibios_resource_to_bus
ffffffff8119cf80 T pcibios_bus_to_resource
ffffffff8119d050 T pcie_update_link_speed
ffffffff8119d070 t next_trad_fn
ffffffff8119d080 t no_next_fn
ffffffff8119d090 t release_pcibus_dev
ffffffff8119d0c0 t pci_release_bus_bridge_dev
ffffffff8119d0d0 t pci_alloc_bus
ffffffff8119d140 T alloc_pci_dev
ffffffff8119d170 t next_ari_fn
ffffffff8119d1e0 t pci_release_dev
ffffffff8119d220 t pci_read_irq
ffffffff8119d280 T no_pci_devices
ffffffff8119d2b0 t pcie_find_smpss
ffffffff8119d320 t pcie_bus_configure_set
ffffffff8119d4c0 T pcie_bus_configure_settings
ffffffff8119d560 T pci_bus_read_dev_vendor_id
ffffffff8119d640 T __pci_read_base
ffffffff8119da60 t pci_read_bases
ffffffff8119db00 T set_pcie_port_type
ffffffff8119dba0 T set_pcie_hotplug_bridge
ffffffff8119dc10 T pci_cfg_space_size_ext
ffffffff8119dc50 T pci_cfg_space_size
ffffffff8119dcd0 T pci_setup_device
ffffffff8119e110 T pci_device_add
ffffffff8119e210 T pci_scan_slot
ffffffff8119e320 T pci_create_root_bus
ffffffff8119e700 T pci_stop_bus_device
ffffffff8119e7a0 T pci_remove_bus
ffffffff8119e810 t __pci_remove_behind_bridge.isra.2
ffffffff8119e860 T __pci_remove_bus_device
ffffffff8119e920 T pci_stop_and_remove_bus_device
ffffffff8119e940 T pci_stop_and_remove_behind_bridge
ffffffff8119e9a0 T pci_bus_max_busnr
ffffffff8119e9e0 T pci_pme_capable
ffffffff8119ea00 T pci_target_state
ffffffff8119eae0 T pci_dev_run_wake
ffffffff8119eb40 T pci_set_dma_max_seg_size
ffffffff8119eb60 T pci_set_dma_seg_boundary
ffffffff8119eb80 T pci_select_bars
ffffffff8119ebb0 W pci_fixup_cardbus
ffffffff8119ebc0 T pcie_get_readrq
ffffffff8119ec00 t __pci_bus_find_cap_start
ffffffff8119ec40 T pci_enable_ido
ffffffff8119ecd0 t __pci_set_master
ffffffff8119ed40 T pci_clear_master
ffffffff8119ed50 T pci_clear_mwi
ffffffff8119eda0 T pci_intx_mask_supported
ffffffff8119ee70 t __pci_find_next_cap_ttl
ffffffff8119ef20 T pci_find_capability
ffffffff8119ef80 T pcix_set_mmrbc
ffffffff8119f0b0 T pcix_get_mmrbc
ffffffff8119f100 T pcix_get_max_mmrbc
ffffffff8119f160 T pci_msi_off
ffffffff8119f210 T pci_find_next_capability
ffffffff8119f240 t __pci_find_next_ht_cap
ffffffff8119f2f0 T pci_find_ht_capability
ffffffff8119f350 T pci_find_next_ht_capability
ffffffff8119f360 T pci_bus_find_capability
ffffffff8119f3e0 t find_pci_dr
ffffffff8119f410 T pci_intx
ffffffff8119f490 T pcim_pin_device
ffffffff8119f4f0 T pci_set_cacheline_size
ffffffff8119f5b0 t __pci_request_region
ffffffff8119f700 T pci_request_region_exclusive
ffffffff8119f710 T pci_request_region
ffffffff8119f720 T pci_release_region
ffffffff8119f7f0 T pci_release_selected_regions
ffffffff8119f840 T pci_release_regions
ffffffff8119f850 T pci_enable_obff
ffffffff8119f990 t pci_add_cap_save_buffer
ffffffff8119fa20 T pci_load_saved_state
ffffffff8119fb10 T pci_load_and_free_saved_state
ffffffff8119fb40 T pci_store_saved_state
ffffffff8119fc30 t pci_restore_config_space_range
ffffffff8119fcf0 T pci_choose_state
ffffffff8119fd90 T pci_find_parent_resource
ffffffff8119fe20 T pci_find_ext_capability
ffffffff8119fee0 T pci_disable_ido
ffffffff8119ff70 T pci_disable_obff
ffffffff8119ffe0 T pci_ltr_supported
ffffffff811a0020 T pci_enable_ltr
ffffffff811a00b0 T pci_set_ltr
ffffffff811a01c0 T pci_disable_ltr
ffffffff811a0240 t pci_dev_d3_sleep.isra.22
ffffffff811a0250 t pci_dev_reset
ffffffff811a0630 T __pci_reset_function_locked
ffffffff811a0640 T __pci_reset_function
ffffffff811a0650 T pci_save_state
ffffffff811a0930 t pci_check_and_set_intx_mask.isra.24
ffffffff811a0a10 T pci_check_and_unmask_intx
ffffffff811a0a20 T pci_check_and_mask_intx
ffffffff811a0a40 T pci_set_mwi
ffffffff811a0ab0 T pci_try_set_mwi
ffffffff811a0ac0 T pci_pme_active
ffffffff811a0c40 T __pci_enable_wake
ffffffff811a0d80 T pci_wake_from_d3
ffffffff811a0db0 T pci_restore_state
ffffffff811a1100 T pci_reset_function
ffffffff811a1160 T pci_ioremap_bar
ffffffff811a11d0 T pci_bus_find_ext_capability
ffffffff811a1290 T pci_set_platform_pm
ffffffff811a12d0 T pci_update_current_state
ffffffff811a1310 t pci_platform_power_transition
ffffffff811a13c0 T __pci_complete_power_transition
ffffffff811a13e0 T pci_set_power_state
ffffffff811a16b0 T pci_back_from_sleep
ffffffff811a16d0 T pci_prepare_to_sleep
ffffffff811a1760 t do_pci_enable_device
ffffffff811a17d0 t __pci_enable_device_flags
ffffffff811a18a0 T pci_enable_device
ffffffff811a18b0 T pcim_enable_device
ffffffff811a1980 T pci_enable_device_mem
ffffffff811a1990 T pci_enable_device_io
ffffffff811a19a0 T pci_reenable_device
ffffffff811a19d0 t do_pci_disable_device
ffffffff811a1a30 T pci_disable_device
ffffffff811a1a70 t pcim_release
ffffffff811a1b30 T pci_disable_enabled_device
ffffffff811a1b50 W pcibios_set_pcie_reset_state
ffffffff811a1b60 T pci_set_pcie_reset_state
ffffffff811a1b70 T pci_check_pme_status
ffffffff811a1c00 t pci_pme_wakeup
ffffffff811a1c50 t pci_pme_list_scan
ffffffff811a1d10 T pci_pme_wakeup_bus
ffffffff811a1d30 T pci_finish_runtime_suspend
ffffffff811a1db0 T pci_pm_init
ffffffff811a1ff0 T platform_pci_wakeup_init
ffffffff811a2040 T pci_allocate_cap_save_buffers
ffffffff811a20a0 T pci_free_cap_save_buffers
ffffffff811a20d0 T pci_enable_ari
ffffffff811a21b0 T pci_request_acs
ffffffff811a21c0 T pci_enable_acs
ffffffff811a2260 T pci_swizzle_interrupt_pin
ffffffff811a22a0 T pci_get_interrupt_pin
ffffffff811a22e0 T pci_common_swizzle
ffffffff811a2320 T __pci_request_selected_regions
ffffffff811a23b0 T pci_request_selected_regions_exclusive
ffffffff811a23c0 T pci_request_regions_exclusive
ffffffff811a23d0 T pci_request_selected_regions
ffffffff811a23e0 T pci_request_regions
ffffffff811a23f0 W pcibios_set_master
ffffffff811a24a0 T pci_set_master
ffffffff811a24c0 T pci_probe_reset_function
ffffffff811a24d0 T pcie_get_mps
ffffffff811a2510 T pcie_set_readrq
ffffffff811a25f0 T pcie_set_mps
ffffffff811a26c0 T pci_resource_bar
ffffffff811a2760 T pci_set_vga_state
ffffffff811a2880 T pci_specified_resource_alignment
ffffffff811a2a40 T pci_is_reassigndev
ffffffff811a2a60 T pci_reassigndev_resource_alignment
ffffffff811a2bf0 T pci_set_resource_alignment_param
ffffffff811a2c60 t pci_resource_alignment_store
ffffffff811a2c70 T pci_get_resource_alignment_param
ffffffff811a2cc0 t pci_resource_alignment_show
ffffffff811a2ce0 T pci_match_id
ffffffff811a2d70 t pci_device_shutdown
ffffffff811a2e00 t pci_match_device
ffffffff811a2ed0 t pci_bus_match
ffffffff811a2f00 t local_pci_probe
ffffffff811a2fd0 t pci_restore_standard_config
ffffffff811a3010 t pci_pm_runtime_resume
ffffffff811a30c0 t pci_legacy_suspend
ffffffff811a31a0 t pci_legacy_suspend_late
ffffffff811a3290 t pci_pm_runtime_suspend
ffffffff811a33a0 t pci_pm_reenable_device
ffffffff811a33d0 t pci_legacy_resume
ffffffff811a3430 t pci_pm_complete
ffffffff811a3470 t pci_pm_prepare
ffffffff811a3510 T pci_dev_put
ffffffff811a3530 t pci_device_remove
ffffffff811a3640 T pci_dev_get
ffffffff811a3660 t pci_device_probe
ffffffff811a3790 t store_remove_id
ffffffff811a38e0 T pci_unregister_driver
ffffffff811a3990 T pci_add_dynid
ffffffff811a3aa0 t store_new_id
ffffffff811a3ba0 t pci_pm_runtime_idle
ffffffff811a3be0 T pci_dev_driver
ffffffff811a3c20 t pci_has_legacy_pm_support.isra.6
ffffffff811a3c80 t pci_pm_thaw_noirq
ffffffff811a3d30 t pci_pm_freeze_noirq
ffffffff811a3df0 t pci_pm_poweroff_noirq
ffffffff811a3ec0 t pci_pm_restore
ffffffff811a3fc0 t pci_pm_thaw
ffffffff811a4040 t pci_pm_poweroff
ffffffff811a4100 t pci_pm_freeze
ffffffff811a41a0 t pci_pm_restore_noirq
ffffffff811a4260 T __pci_register_driver
ffffffff811a4320 t pci_do_find_bus
ffffffff811a4370 t match_pci_dev_by_id
ffffffff811a43e0 t pci_get_dev_by_id
ffffffff811a4480 T pci_dev_present
ffffffff811a44f0 T pci_get_class
ffffffff811a4580 T pci_get_subsys
ffffffff811a4630 T pci_get_device
ffffffff811a4640 T pci_get_domain_bus_and_slot
ffffffff811a4690 T pci_get_slot
ffffffff811a4720 T pci_find_next_bus
ffffffff811a4790 T pci_find_bus
ffffffff811a47e0 T pci_find_upstream_pcie_bridge
ffffffff811a4850 t pci_bus_show_cpumaskaffinity
ffffffff811a4860 t pci_bus_show_cpulistaffinity
ffffffff811a4870 t pci_write_rom
ffffffff811a48a0 t boot_vga_show
ffffffff811a48d0 t msi_bus_show
ffffffff811a4910 t broken_parity_status_show
ffffffff811a4940 t is_enabled_show
ffffffff811a4970 t consistent_dma_mask_bits_show
ffffffff811a49a0 t dma_mask_bits_show
ffffffff811a49d0 t numa_node_show
ffffffff811a4a00 t modalias_show
ffffffff811a4a50 t irq_show
ffffffff811a4a80 t class_show
ffffffff811a4ab0 t subsystem_device_show
ffffffff811a4ae0 t subsystem_vendor_show
ffffffff811a4b10 t device_show
ffffffff811a4b40 t vendor_show
ffffffff811a4b70 t resource_show
ffffffff811a4bf0 t local_cpulist_show
ffffffff811a4c40 t local_cpus_show
ffffffff811a4c90 t broken_parity_status_store
ffffffff811a4cf0 t dev_bus_rescan_store
ffffffff811a4d90 t dev_rescan_store
ffffffff811a4e10 t remove_store
ffffffff811a4e90 t remove_callback
ffffffff811a4ec0 t msi_bus_store
ffffffff811a4f70 t is_enabled_store
ffffffff811a4ff0 t bus_rescan_store
ffffffff811a5060 t pci_remove_resource_files
ffffffff811a50d0 t pci_write_config
ffffffff811a52b0 t pci_read_config
ffffffff811a5500 t reset_store
ffffffff811a5550 t pci_create_attr
ffffffff811a56e0 t write_vpd_attr
ffffffff811a5710 t read_vpd_attr
ffffffff811a5740 t pci_read_rom
ffffffff811a5810 t pci_bus_show_cpuaffinity.isra.8
ffffffff811a5860 t pci_resource_io.isra.9
ffffffff811a5960 t pci_read_resource_io
ffffffff811a5980 t pci_write_resource_io
ffffffff811a59a0 T pci_mmap_fits
ffffffff811a5a50 t pci_mmap_resource.isra.10
ffffffff811a5c00 t pci_mmap_resource_uc
ffffffff811a5c20 t pci_mmap_resource_wc
ffffffff811a5c40 W pcibios_add_platform_entries
ffffffff811a5c50 T pci_create_sysfs_dev_files
ffffffff811a6050 T pci_remove_sysfs_dev_files
ffffffff811a61b0 T pci_disable_rom
ffffffff811a61f0 T pci_enable_rom
ffffffff811a6260 T pci_unmap_rom
ffffffff811a6290 T pci_get_rom_size
ffffffff811a6330 T pci_map_rom
ffffffff811a64a0 T pci_cleanup_rom
ffffffff811a64e0 t _pci_assign_resource
ffffffff811a6650 T pci_claim_resource
ffffffff811a66e0 T pci_update_resource
ffffffff811a6890 T pci_disable_bridge_window
ffffffff811a6910 T pci_reassign_resource
ffffffff811a69e0 T pci_assign_resource
ffffffff811a6c10 T pci_enable_resources
ffffffff811a6d30 t pci_note_irq_problem
ffffffff811a6dd0 T pci_lost_interrupt
ffffffff811a6e60 T pci_vpd_find_tag
ffffffff811a6ed0 T pci_vpd_find_info_keyword
ffffffff811a6f30 t proc_bus_pci_ioctl
ffffffff811a6fc0 t pci_seq_next
ffffffff811a6fe0 t pci_seq_start
ffffffff811a7010 t proc_bus_pci_dev_open
ffffffff811a7020 t show_device
ffffffff811a7170 t pci_seq_stop
ffffffff811a7190 t proc_bus_pci_release
ffffffff811a71b0 t proc_bus_pci_open
ffffffff811a71f0 t proc_bus_pci_mmap
ffffffff811a7280 t proc_bus_pci_write
ffffffff811a74b0 t proc_bus_pci_read
ffffffff811a7700 t proc_bus_pci_lseek
ffffffff811a77c0 T pci_proc_attach_device
ffffffff811a78b0 T pci_proc_detach_device
ffffffff811a78e0 T pci_proc_detach_bus
ffffffff811a7910 t pci_slot_attr_show
ffffffff811a7930 t pci_slot_attr_store
ffffffff811a7950 T pci_destroy_slot
ffffffff811a7980 T pci_renumber_slot
ffffffff811a7a00 t pci_slot_release
ffffffff811a7a90 t cur_speed_read_file
ffffffff811a7ad0 t max_speed_read_file
ffffffff811a7b10 t make_slot_name
ffffffff811a7be0 T pci_create_slot
ffffffff811a7e00 t pci_slot_init
ffffffff811a7e50 t address_read_file
ffffffff811a7ec0 t quirk_intel_pcie_pm
ffffffff811a7ed0 t quirk_nvidia_ck804_pcie_aer_ext_cap
ffffffff811a7f40 t quirk_sis_96x_smbus
ffffffff811a7fb0 t quirk_sis_503
ffffffff811a8050 t quirk_mediagx_master
ffffffff811a80c0 t quirk_via_vt8237_bypass_apic_deassert
ffffffff811a8120 t quirk_via_ioapic
ffffffff811a8180 t quirk_passive_release
ffffffff811a8200 t quirk_vialatency
ffffffff811a82c0 t quirk_cardbus_legacy
ffffffff811a82e0 t quirk_amd_ordering
ffffffff811a8380 t quirk_via_vlink
ffffffff811a8440 t reset_intel_generic_dev
ffffffff811a84b0 t reset_intel_82599_sfp_virtfn
ffffffff811a8520 T pci_fixup_device
ffffffff811a86a0 t quirk_via_bridge
ffffffff811a8730 t quirk_jmicron_ata
ffffffff811a8870 t quirk_disable_amd_8111_boot_interrupt
ffffffff811a8910 t quirk_disable_amd_813x_boot_interrupt
ffffffff811a8990 t quirk_disable_broadcom_boot_interrupt
ffffffff811a8a40 t quirk_disable_intel_boot_interrupt
ffffffff811a8ab0 t quirk_reroute_to_boot_interrupts_intel
ffffffff811a8b00 t quirk_disable_pxb
ffffffff811a8b70 t asus_hides_ac97_lpc
ffffffff811a8c20 t asus_hides_smbus_lpc_ich6_resume_early
ffffffff811a8c50 t asus_hides_smbus_lpc_ich6_resume
ffffffff811a8ca0 t asus_hides_smbus_lpc
ffffffff811a8d40 t asus_hides_smbus_lpc_ich6_suspend
ffffffff811a8db0 t asus_hides_smbus_lpc_ich6
ffffffff811a8dd0 T pci_dev_specific_reset
ffffffff811a8e30 T pcie_aspm_enabled
ffffffff811a8e40 T pcie_aspm_support_enabled
ffffffff811a8e50 t pcie_aspm_get_policy
ffffffff811a8eb0 t pcie_set_clkpm
ffffffff811a8f80 t pcie_config_aspm_dev
ffffffff811a8ff0 t pcie_config_aspm_link
ffffffff811a90f0 t pcie_config_aspm_path
ffffffff811a9140 t pcie_aspm_set_policy
ffffffff811a9280 t __pci_disable_link_state
ffffffff811a93d0 T pci_disable_link_state
ffffffff811a93e0 T pci_disable_link_state_locked
ffffffff811a93f0 t pcie_get_aspm_reg
ffffffff811a9480 t pcie_aspm_check_latency.isra.6.part.7
ffffffff811a9550 t pcie_update_aspm_capable
ffffffff811a9630 T pcie_aspm_init_link_state
ffffffff811a9e30 T pcie_aspm_exit_link_state
ffffffff811a9f80 T pcie_aspm_pm_state_change
ffffffff811aa000 T pcie_aspm_powersave_config_link
ffffffff811aa0a0 T pcie_clear_aspm
ffffffff811aa0e0 T pcie_no_aspm
ffffffff811aa100 t pcie_port_shutdown_service
ffffffff811aa110 T pcie_port_service_unregister
ffffffff811aa120 T pcie_port_service_register
ffffffff811aa170 t pcie_port_remove_service
ffffffff811aa1e0 t pcie_port_probe_service
ffffffff811aa260 t cleanup_service_irqs
ffffffff811aa2a0 t release_pcie_device
ffffffff811aa2b0 t suspend_iter
ffffffff811aa2f0 t resume_iter
ffffffff811aa330 t remove_iter
ffffffff811aa360 T pcie_port_device_register
ffffffff811aa8c0 T pcie_port_device_suspend
ffffffff811aa8d0 T pcie_port_device_resume
ffffffff811aa8e0 T pcie_port_device_remove
ffffffff811aa910 t pcie_portdrv_err_resume
ffffffff811aa930 t pcie_portdrv_mmio_enabled
ffffffff811aa960 t pcie_portdrv_error_detected
ffffffff811aa9a0 t pcie_portdrv_slot_reset
ffffffff811aaa20 t pcie_portdrv_remove
ffffffff811aaa40 t error_detected_iter
ffffffff811aaab0 t mmio_enabled_iter
ffffffff811aab30 t slot_reset_iter
ffffffff811aabb0 t resume_iter
ffffffff811aac00 T pcie_clear_root_pme_status
ffffffff811aac50 t pcie_port_resume_noirq
ffffffff811aac80 t pcie_port_bus_match
ffffffff811aacd0 T pcie_port_bus_register
ffffffff811aacf0 T pcie_port_bus_unregister
ffffffff811aad00 T pcie_port_acpi_setup
ffffffff811aad90 T cper_severity_to_aer
ffffffff811aadb0 T aer_print_error
ffffffff811ab110 T aer_print_port_info
ffffffff811ab150 T cper_print_aer
ffffffff811ab3f0 t find_device_iter
ffffffff811ab560 t get_device_error_info
ffffffff811ab6f0 T pci_cleanup_aer_uncorrect_error_status
ffffffff811ab760 t broadcast_error_message
ffffffff811ab850 T aer_recover_queue
ffffffff811ab930 T pci_disable_pcie_error_reporting
ffffffff811ab9a0 T pci_enable_pcie_error_reporting
ffffffff811aba40 t report_mmio_enabled
ffffffff811aba90 t report_slot_reset
ffffffff811abae0 t report_resume
ffffffff811abb20 t find_aer_service_iter
ffffffff811abb60 t report_error_detected
ffffffff811abc00 t find_source_device
ffffffff811abc90 t handle_error_source.isra.13.part.14
ffffffff811abcf0 T aer_do_secondary_bus_reset
ffffffff811abd70 t do_recovery
ffffffff811abfc0 t aer_recover_work_func
ffffffff811ac060 T aer_isr
ffffffff811ac400 T aer_init
ffffffff811ac450 t aer_error_detected
ffffffff811ac460 t aer_error_resume
ffffffff811ac520 t aer_root_reset
ffffffff811ac600 t set_device_error_reporting
ffffffff811ac640 t set_downstream_devices_error_reporting
ffffffff811ac680 T aer_irq
ffffffff811ac7d0 t aer_remove
ffffffff811ac930 T pci_no_aer
ffffffff811ac940 T pci_aer_available
ffffffff811ac970 t aer_hest_parse
ffffffff811aca50 t aer_hest_parse_aff
ffffffff811aca80 T pcie_aer_get_firmware_first
ffffffff811acb10 T aer_acpi_firmware_first
ffffffff811acb40 t pcie_pme_set_native
ffffffff811acb70 t pcie_pme_walk_bus
ffffffff811acc00 t pcie_pme_from_pci_bridge.part.9
ffffffff811acc80 T pcie_pme_interrupt_enable
ffffffff811acd00 t pcie_pme_resume
ffffffff811acd60 t pcie_pme_suspend
ffffffff811acdd0 t pcie_pme_remove
ffffffff811acdf0 t pcie_pme_probe
ffffffff811acf70 t pcie_pme_irq
ffffffff811ad010 t pcie_pme_work_fn
ffffffff811ad290 T pci_uevent
ffffffff811ad390 t msi_irq_attr_show
ffffffff811ad3b0 T pci_msi_enabled
ffffffff811ad3c0 t show_msi_mode
ffffffff811ad400 t free_msi_irqs
ffffffff811ad530 t pci_intx_for_msi
ffffffff811ad550 t alloc_msi_entry
ffffffff811ad580 t populate_msi_sysfs
ffffffff811ad690 t __msi_mask_irq
ffffffff811ad6c0 t msi_set_mask_bit
ffffffff811ad720 T msi_kobj_release
ffffffff811ad730 t msi_set_enable
ffffffff811ad7b0 T pci_restore_msi_state
ffffffff811ad990 t pci_msi_check_device
ffffffff811ada10 T pci_enable_msi_block
ffffffff811adcd0 t msix_set_enable.constprop.14
ffffffff811add40 T arch_msi_check_device
ffffffff811add50 T default_teardown_msi_irqs
ffffffff811addd0 T mask_msi_irq
ffffffff811adde0 T unmask_msi_irq
ffffffff811addf0 T __read_msi_msg
ffffffff811adef0 T read_msi_msg
ffffffff811adf20 T __get_cached_msi_msg
ffffffff811adf40 T get_cached_msi_msg
ffffffff811adf70 T __write_msi_msg
ffffffff811ae0b0 T write_msi_msg
ffffffff811ae0e0 T default_restore_msi_irqs
ffffffff811ae160 T pci_msi_shutdown
ffffffff811ae260 T pci_disable_msi
ffffffff811ae2b0 T pci_msix_table_size
ffffffff811ae300 T pci_enable_msix
ffffffff811ae710 T pci_msix_shutdown
ffffffff811ae790 T pci_disable_msix
ffffffff811ae7e0 T msi_remove_pci_irq_vectors
ffffffff811ae810 T pci_no_msi
ffffffff811ae820 T pci_msi_init_pci_dev
ffffffff811ae860 T ht_destroy_irq
ffffffff811ae8b0 T __ht_create_irq
ffffffff811aea00 T ht_create_irq
ffffffff811aea10 T write_ht_irq_msg
ffffffff811aeb10 T fetch_ht_irq_msg
ffffffff811aeb30 T mask_ht_irq
ffffffff811aeb60 T unmask_ht_irq
ffffffff811aeb90 T pci_max_pasids
ffffffff811aebe0 T pci_pasid_features
ffffffff811aec20 T pci_pri_status
ffffffff811aec90 T pci_pri_stopped
ffffffff811aed10 T pci_pri_enabled
ffffffff811aed50 T pci_disable_pasid
ffffffff811aed80 T pci_enable_pasid
ffffffff811aee50 T pci_reset_pri
ffffffff811aeec0 T pci_disable_pri
ffffffff811aef30 T pci_enable_pri
ffffffff811af010 T pci_disable_ats
ffffffff811af110 t ats_alloc_one
ffffffff811af1c0 T pci_enable_ats
ffffffff811af310 T pci_ats_queue_depth
ffffffff811af380 T pci_restore_ats_state
ffffffff811af3f0 T pci_num_vf
ffffffff811af410 T pci_sriov_migration
ffffffff811af480 t virtfn_remove_bus
ffffffff811af4d0 t virtfn_remove
ffffffff811af660 T pci_disable_sriov
ffffffff811af780 t virtfn_add
ffffffff811afb60 T pci_enable_sriov
ffffffff811b0060 t sriov_migration_task
ffffffff811b0170 T pci_iov_init
ffffffff811b0500 T pci_iov_release
ffffffff811b0550 T pci_iov_resource_bar
ffffffff811b0580 T pci_sriov_resource_alignment
ffffffff811b05c0 T pci_restore_iov_state
ffffffff811b06a0 T pci_iov_bus_range
ffffffff811b0720 t get_res_add_size
ffffffff811b07a0 t free_list
ffffffff811b0800 T pci_setup_cardbus
ffffffff811b09d0 t pci_bus_dump_resources
ffffffff811b0a70 t find_free_bus_resource
ffffffff811b0ae0 t add_to_list
ffffffff811b0b90 t assign_requested_resources_sorted
ffffffff811b0c60 t __assign_resources_sorted
ffffffff811b0fd0 t __pci_setup_bridge
ffffffff811b1300 T pci_setup_bridge
ffffffff811b1310 T pci_cardbus_resource_alignment
ffffffff811b1340 t __dev_sort_resources
ffffffff811b1580 t pbus_size_mem
ffffffff811b1a30 T pci_assign_unassigned_bridge_resources
ffffffff811b1b70 t acpi_pci_can_wakeup
ffffffff811b1b90 t acpi_pci_choose_state
ffffffff811b1bc0 t acpi_pci_set_power_state
ffffffff811b1c70 t acpi_pci_power_manageable
ffffffff811b1c90 t acpi_pci_find_root_bridge
ffffffff811b1cf0 t acpi_pci_find_device
ffffffff811b1d30 t remove_pm_notifier
ffffffff811b1db0 t pci_acpi_wake_bus
ffffffff811b1dd0 t add_pm_notifier
ffffffff811b1e60 t acpi_pci_run_wake
ffffffff811b1ed0 t acpi_pci_sleep_wake
ffffffff811b1f50 t pci_acpi_wake_dev
ffffffff811b1ff0 T pci_acpi_add_bus_pm_notifier
ffffffff811b2000 T pci_acpi_remove_bus_pm_notifier
ffffffff811b2010 T pci_acpi_add_pm_notifier
ffffffff811b2020 T pci_acpi_remove_pm_notifier
ffffffff811b2030 t find_smbios_instance_string.isra.2
ffffffff811b20f0 t smbiosinstance_show
ffffffff811b2110 t smbioslabel_show
ffffffff811b2130 t smbios_instance_string_exist
ffffffff811b2160 t dsm_get_label.constprop.3
ffffffff811b22d0 t acpilabel_show
ffffffff811b2320 t acpiindex_show
ffffffff811b2370 t device_has_dsm.isra.1
ffffffff811b23b0 t acpi_index_string_exist
ffffffff811b23d0 T pci_create_firmware_label_files
ffffffff811b2410 T pci_remove_firmware_label_files
ffffffff811b2450 T fb_notifier_call_chain
ffffffff811b2470 T fb_unregister_client
ffffffff811b2480 T fb_register_client
ffffffff811b2490 T fb_get_color_depth
ffffffff811b24d0 T fb_pad_aligned_buffer
ffffffff811b2510 T fb_pad_unaligned_buffer
ffffffff811b25f0 T fb_get_buffer_offset
ffffffff811b26a0 t fb_seq_next
ffffffff811b26c0 T fb_pan_display
ffffffff811b2830 t fb_seq_start
ffffffff811b2860 t fb_seq_stop
ffffffff811b2870 T lock_fb_info
ffffffff811b28d0 t fb_mmap
ffffffff811b2ac0 T fb_set_suspend
ffffffff811b2b10 T fb_blank
ffffffff811b2b80 t fb_write
ffffffff811b2da0 t fb_read
ffffffff811b2fd0 t proc_fb_open
ffffffff811b2fe0 t fb_seq_show
ffffffff811b3020 T fb_get_options
ffffffff811b3110 T unlink_framebuffer
ffffffff811b3180 T fb_set_var
ffffffff811b3610 t do_fb_ioctl
ffffffff811b3b90 t fb_compat_ioctl
ffffffff811b3f00 t fb_ioctl
ffffffff811b3f40 t put_fb_info
ffffffff811b3f70 t fb_release
ffffffff811b3fd0 t do_unregister_framebuffer
ffffffff811b40c0 T unregister_framebuffer
ffffffff811b4100 t do_remove_conflicting_framebuffers
ffffffff811b4260 T register_framebuffer
ffffffff811b4520 T remove_conflicting_framebuffers
ffffffff811b4570 t get_fb_info.part.14
ffffffff811b45a0 t fb_open
ffffffff811b46e0 t fb_set_logocmap.isra.15
ffffffff811b4800 T fb_show_logo
ffffffff811b5010 T fb_prepare_logo
ffffffff811b51b0 T fb_new_modelist
ffffffff811b5300 t copy_string
ffffffff811b5380 t get_detailed_timing
ffffffff811b5580 t fb_timings_vfreq
ffffffff811b5620 T fb_validate_mode
ffffffff811b5760 T fb_firmware_edid
ffffffff811b5770 T fb_destroy_modedb
ffffffff811b5780 t fb_timings_dclk
ffffffff811b5850 T fb_get_mode
ffffffff811b5ce0 t calc_mode_timings
ffffffff811b5dd0 t get_std_timing
ffffffff811b5f40 t check_edid
ffffffff811b60c0 t fix_edid
ffffffff811b61e0 t edid_checksum
ffffffff811b6230 t edid_check_header
ffffffff811b6270 t fb_create_modedb
ffffffff811b6b50 T fb_edid_add_monspecs
ffffffff811b6e90 T fb_parse_edid
ffffffff811b70b0 T fb_edid_to_monspecs
ffffffff811b7810 T fb_invert_cmaps
ffffffff811b78b0 T fb_copy_cmap
ffffffff811b79c0 T fb_set_cmap
ffffffff811b7af0 T fb_dealloc_cmap
ffffffff811b7b50 T fb_default_cmap
ffffffff811b7b80 T fb_alloc_cmap_gfp
ffffffff811b7ca0 T fb_alloc_cmap
ffffffff811b7cb0 T fb_cmap_to_user
ffffffff811b7de0 T fb_set_user_cmap
ffffffff811b7f40 t show_blank
ffffffff811b7f50 t store_console
ffffffff811b7f60 t show_console
ffffffff811b7f70 t store_cursor
ffffffff811b7f80 t show_cursor
ffffffff811b7f90 t store_fbstate
ffffffff811b8020 t show_fbstate
ffffffff811b8050 t show_rotate
ffffffff811b8080 t show_stride
ffffffff811b80b0 t show_name
ffffffff811b80e0 t show_virtual
ffffffff811b8110 t show_pan
ffffffff811b8140 t mode_string
ffffffff811b81d0 t show_modes
ffffffff811b8230 t show_mode
ffffffff811b8260 t show_bpp
ffffffff811b8290 t activate
ffffffff811b82e0 t store_rotate
ffffffff811b8370 t store_virtual
ffffffff811b8460 t store_bpp
ffffffff811b84f0 t store_pan
ffffffff811b85e0 t store_modes
ffffffff811b86f0 t store_mode
ffffffff811b8850 t store_blank
ffffffff811b88d0 T framebuffer_release
ffffffff811b88f0 T framebuffer_alloc
ffffffff811b8970 T fb_init_device
ffffffff811b8a00 T fb_cleanup_device
ffffffff811b8a50 t fb_try_mode
ffffffff811b8ae0 T fb_videomode_to_var
ffffffff811b8b50 T fb_mode_is_equal
ffffffff811b8ba0 T fb_find_best_mode
ffffffff811b8c20 T fb_find_nearest_mode
ffffffff811b8cd0 T fb_find_best_display
ffffffff811b8dd0 T fb_destroy_modelist
ffffffff811b8e30 T fb_find_mode
ffffffff811b9500 T fb_var_to_videomode
ffffffff811b95c0 T fb_match_mode
ffffffff811b9620 T fb_add_videomode
ffffffff811b96d0 T fb_videomode_to_modelist
ffffffff811b9730 T fb_delete_videomode
ffffffff811b97c0 T fb_find_mode_cvt
ffffffff811b9ef0 t dummycon_startup
ffffffff811b9f00 t dummycon_dummy
ffffffff811b9f10 t dummycon_init
ffffffff811b9f40 T vgacon_text_force
ffffffff811b9f50 t vgacon_build_attr
ffffffff811b9ff0 t vgacon_invert_region
ffffffff811ba070 t vga_set_palette
ffffffff811ba1c0 t vgacon_set_palette
ffffffff811ba200 t vgacon_dummy
ffffffff811ba210 t vgacon_scrolldelta
ffffffff811ba340 t vgacon_deinit
ffffffff811ba3d0 t vgacon_init
ffffffff811ba4f0 t vgacon_startup
ffffffff811ba8e0 t vgacon_set_origin
ffffffff811ba960 t vgacon_save_screen
ffffffff811ba9d0 t vgacon_doresize.isra.5
ffffffff811bac30 t vgacon_switch
ffffffff811bad50 t vgacon_resize
ffffffff811badc0 t vgacon_set_cursor_size.isra.7
ffffffff811baf20 t vgacon_cursor
ffffffff811bb130 t vgacon_scroll
ffffffff811bb3b0 t vgacon_do_font_op.isra.9.constprop.17
ffffffff811bb8c0 t vgacon_font_set
ffffffff811bbb20 t vgacon_font_get
ffffffff811bbb70 t vgacon_blank
ffffffff811bc3b0 t fbcon_clear
ffffffff811bc5b0 t fbcon_clear_margins
ffffffff811bc640 t fbcon_bmove_rec
ffffffff811bc820 t fbcon_bmove
ffffffff811bc8f0 t fbcon_debug_leave
ffffffff811bc940 t fbcon_getxy
ffffffff811bca30 t fbcon_invert_region
ffffffff811bcae0 t fbcon_add_cursor_timer
ffffffff811bcbc0 t cursor_timer_handler
ffffffff811bcbf0 t var_to_display
ffffffff811bcc90 t fbcon_set_palette
ffffffff811bcdf0 t fbcon_debug_enter
ffffffff811bce50 t display_to_var
ffffffff811bcee0 t fbcon_get_font
ffffffff811bd0c0 t fbcon_set_disp
ffffffff811bd3c0 t fbcon_prepare_logo
ffffffff811bd7d0 t fbcon_takeover
ffffffff811bd880 t fbcon_new_modelist
ffffffff811bd950 t updatescrollmode.isra.14
ffffffff811bdb50 t fbcon_resize
ffffffff811bdd90 t fbcon_screen_pos
ffffffff811bde00 t fbcon_del_cursor_timer.isra.17
ffffffff811bde40 t fbcon_deinit
ffffffff811be050 t get_color.isra.18
ffffffff811be1b0 t fb_flashcursor
ffffffff811be300 t fbcon_putcs
ffffffff811be4a0 t fbcon_putc
ffffffff811be4d0 t fbcon_set_origin
ffffffff811be4f0 t fbcon_cursor
ffffffff811be6c0 t fbcon_scrolldelta
ffffffff811bed50 t fbcon_blank
ffffffff811bf040 t fbcon_do_set_font
ffffffff811bf410 t fbcon_copy_font
ffffffff811bf450 t fbcon_set_def_font
ffffffff811bf4e0 t fbcon_set_font
ffffffff811bf710 t set_blitting_type.isra.23
ffffffff811bf750 t fbcon_modechanged
ffffffff811bf960 t fbcon_switch
ffffffff811bfe90 t con2fb_acquire_newinfo
ffffffff811bff60 t fbcon_startup
ffffffff811c0230 t fbcon_init
ffffffff811c07f0 t fbcon_redraw_blit.isra.25
ffffffff811c09c0 t fbcon_redraw_move.isra.26
ffffffff811c0b00 t fbcon_redraw.isra.27
ffffffff811c0cd0 t fbcon_scroll
ffffffff811c1a20 t set_con2fb_map
ffffffff811c1ed0 t fbcon_event_notify
ffffffff811c2730 t store_cursor_blink
ffffffff811c27f0 t store_rotate_all
ffffffff811c2860 t store_rotate
ffffffff811c28d0 t show_cursor_blink
ffffffff811c2980 t show_rotate
ffffffff811c2a20 t bit_bmove
ffffffff811c2a80 t bit_clear
ffffffff811c2b70 t bit_clear_margins
ffffffff811c2cf0 T fbcon_set_bitops
ffffffff811c2d30 t bit_update_start
ffffffff811c2d80 t update_attr.isra.3
ffffffff811c2e10 t bit_cursor
ffffffff811c3390 t bit_putcs
ffffffff811c3850 T get_default_font
ffffffff811c38e0 T find_font
ffffffff811c3930 T soft_cursor
ffffffff811c3b70 T cfb_fillrect
ffffffff811c3eb0 t bitfill_aligned
ffffffff811c3fe0 t bitfill_unaligned
ffffffff811c4190 t bitfill_aligned_rev
ffffffff811c42f0 t bitfill_unaligned_rev
ffffffff811c44b0 T cfb_copyarea
ffffffff811c4f00 T cfb_imageblit
ffffffff811c53a0 t vesafb_pan_display
ffffffff811c53b0 t vesafb_destroy
ffffffff811c5400 t vesafb_setcolreg
ffffffff811c5580 T acpi_table_print_madt_entry
ffffffff811c57ec t acpi_map_lookup
ffffffff811c5834 T acpi_resources_are_enforced
ffffffff811c5841 T acpi_os_wait_events_complete
ffffffff811c585b t acpi_os_execute_deferred
ffffffff811c5884 t __acpi_os_execute
ffffffff811c593d T acpi_os_execute
ffffffff811c5944 T acpi_os_get_iomem
ffffffff811c598b t acpi_irq
ffffffff811c59b4 t acpi_osi_handler
ffffffff811c5a25 T acpi_os_write_port
ffffffff811c5a4b T acpi_os_read_port
ffffffff811c5a90 T acpi_check_resource_conflict
ffffffff811c5b08 T acpi_check_region
ffffffff811c5b49 t acpi_os_map_cleanup.part.11
ffffffff811c5b7d T acpi_os_unmap_generic_address
ffffffff811c5c39 T acpi_os_map_generic_address
ffffffff811c5c7e T acpi_os_vprintf
ffffffff811c5ca7 T acpi_os_printf
ffffffff811c5cef T acpi_os_predefined_override
ffffffff811c5d51 T acpi_os_table_override
ffffffff811c5d6b T acpi_os_physical_table_override
ffffffff811c5d71 T acpi_os_install_interrupt_handler
ffffffff811c5e2e T acpi_os_remove_interrupt_handler
ffffffff811c5e5a T acpi_os_sleep
ffffffff811c5e6a T acpi_os_stall
ffffffff811c5e96 T acpi_os_get_timer
ffffffff811c5ec4 T acpi_os_read_memory
ffffffff811c5f79 T acpi_os_write_memory
ffffffff811c6013 T acpi_os_read_pci_configuration
ffffffff811c6093 T acpi_os_write_pci_configuration
ffffffff811c60f5 T acpi_os_hotplug_execute
ffffffff811c6107 T acpi_os_create_semaphore
ffffffff811c6168 T acpi_os_delete_semaphore
ffffffff811c618a T acpi_os_wait_semaphore
ffffffff811c61d7 T acpi_os_signal_semaphore
ffffffff811c61fc T acpi_os_signal
ffffffff811c6215 T acpi_os_delete_lock
ffffffff811c621a T acpi_os_acquire_lock
ffffffff811c621f T acpi_os_release_lock
ffffffff811c6224 T acpi_os_create_cache
ffffffff811c6245 T acpi_os_purge_cache
ffffffff811c6250 T acpi_os_delete_cache
ffffffff811c625a T acpi_os_release_object
ffffffff811c6266 T acpi_os_terminate
ffffffff811c62d7 T acpi_os_prepare_sleep
ffffffff811c630e T acpi_os_set_prepare_sleep
ffffffff811c6318 T acpi_evaluate_integer
ffffffff811c6367 T acpi_evaluate_reference
ffffffff811c6476 T acpi_extract_package
ffffffff811c668c T acpi_reboot
ffffffff811c6740 T acpi_nvs_register
ffffffff811c6795 T acpi_nvs_for_each_region
ffffffff811c67d4 T acpi_enable_wakeup_devices
ffffffff811c6880 T acpi_disable_wakeup_devices
ffffffff811c6924 t acpi_power_off
ffffffff811c6953 t acpi_power_off_prepare
ffffffff811c6981 t set_param_wake_flag
ffffffff811c69d8 t tts_notify_reboot
ffffffff811c6a3e T acpi_suspend
ffffffff811c6a8f T acpi_pm_device_sleep_state
ffffffff811c6b27 T acpi_pm_device_run_wake
ffffffff811c6bad T acpi_pm_device_sleep_wake
ffffffff811c6c34 T acpi_bus_private_data_handler
ffffffff811c6c35 t set_copy_dsdt
ffffffff811c6c53 T unregister_acpi_bus_notifier
ffffffff811c6c62 T register_acpi_bus_notifier
ffffffff811c6c71 t acpi_print_osc_error
ffffffff811c6d20 T acpi_run_osc
ffffffff811c6fcb t __acpi_bus_get_power
ffffffff811c7056 t __acpi_bus_set_power
ffffffff811c71c4 T acpi_bus_get_private_data
ffffffff811c71f8 T acpi_bus_get_device
ffffffff811c722b T acpi_bus_can_wakeup
ffffffff811c7255 T acpi_bus_power_manageable
ffffffff811c727c T acpi_bus_update_power
ffffffff811c72d1 T acpi_bus_set_power
ffffffff811c7303 T acpi_bus_generate_proc_event4
ffffffff811c73c8 T acpi_bus_generate_proc_event
ffffffff811c73eb T acpi_bus_get_status_handle
ffffffff811c7414 T acpi_bus_get_status
ffffffff811c7445 t acpi_bus_check_device
ffffffff811c7482 t acpi_bus_notify
ffffffff811c74fb T acpi_bus_init_power
ffffffff811c7558 T acpi_bus_receive_event
ffffffff811c76a8 t acpi_glue_data_handler
ffffffff811c76a9 t acpi_platform_notify
ffffffff811c782c T acpi_get_physical_device
ffffffff811c7858 t acpi_platform_notify_remove
ffffffff811c7900 T acpi_get_child
ffffffff811c794c t do_acpi_find_child
ffffffff811c798d T register_acpi_bus_type
ffffffff811c79ff T unregister_acpi_bus_type
ffffffff811c7a60 t acpi_device_suspend
ffffffff811c7a81 t acpi_device_resume
ffffffff811c7aa2 t acpi_device_notify
ffffffff811c7ab5 t acpi_device_notify_fixed
ffffffff811c7acc t acpi_start_single_object
ffffffff811c7b11 T acpi_bus_data_handler
ffffffff811c7b12 T acpi_device_hid
ffffffff811c7b2b t acpi_device_remove
ffffffff811c7bc4 t acpi_device_probe
ffffffff811c7cd1 t create_modalias
ffffffff811c7d5a t acpi_device_uevent
ffffffff811c7dda t acpi_device_modalias_show
ffffffff811c7e08 t acpi_bus_extract_wakeup_device_power_package
ffffffff811c7f5b t acpi_device_release
ffffffff811c7fa8 T acpi_bus_get_ejd
ffffffff811c8023 T acpi_match_device_ids
ffffffff811c807b t acpi_bus_match
ffffffff811c8096 t acpi_add_id
ffffffff811c80f6 t acpi_bus_set_run_wake_flags
ffffffff811c81b7 t acpi_eject_store
ffffffff811c822a t acpi_device_hid_show
ffffffff811c8252 t acpi_device_path_show
ffffffff811c82b1 T acpi_bus_unregister_driver
ffffffff811c82bd T acpi_bus_register_driver
ffffffff811c82f8 t acpi_add_single_object
ffffffff811c8df8 t acpi_bus_check_add
ffffffff811c8f86 t acpi_bus_scan
ffffffff811c8ffc T acpi_bus_start
ffffffff811c9037 T acpi_bus_add
ffffffff811c9060 t acpi_device_unregister.isra.7
ffffffff811c9164 t acpi_bus_remove.part.8
ffffffff811c9191 T acpi_bus_trim
ffffffff811c9299 t acpi_bus_hot_remove_device
ffffffff811c9398 T acpi_get_cpuid
ffffffff811c9598 t start_transaction
ffffffff811c95bf T ec_get_handle
ffffffff811c95d1 t ec_skip_dsdt_scan
ffffffff811c95de t ec_validate_ecdt
ffffffff811c95eb T acpi_ec_remove_query_handler
ffffffff811c9661 t ec_flag_msi
ffffffff811c968a t acpi_ec_remove
ffffffff811c97bb t advance_transaction
ffffffff811c9879 t ec_transaction_done
ffffffff811c98b1 t acpi_ec_transaction_unlocked
ffffffff811c9a8d T acpi_ec_add_query_handler
ffffffff811c9b16 t acpi_ec_sync_query
ffffffff811c9bca t acpi_ec_gpe_query
ffffffff811c9bfa t acpi_ec_run
ffffffff811c9c33 t acpi_ec_transaction
ffffffff811c9e65 t acpi_ec_burst_enable
ffffffff811c9e9c T ec_burst_enable
ffffffff811c9eb0 t acpi_ec_read
ffffffff811c9f0a T ec_read
ffffffff811c9f44 t acpi_ec_write
ffffffff811c9f83 T ec_write
ffffffff811c9fa1 t acpi_ec_burst_disable
ffffffff811c9fdb t acpi_ec_space_handler
ffffffff811ca0ce T ec_burst_disable
ffffffff811ca0e5 T ec_transaction
ffffffff811ca135 t make_acpi_ec
ffffffff811ca19f t ec_parse_device
ffffffff811ca238 t acpi_ec_register_query_methods
ffffffff811ca2a9 t ec_parse_io_ports
ffffffff811ca2d7 t acpi_ec_gpe_handler
ffffffff811ca345 t acpi_ec_add
ffffffff811ca4c4 T acpi_ec_block_transactions
ffffffff811ca4f4 T acpi_ec_unblock_transactions
ffffffff811ca523 T acpi_ec_unblock_transactions_early
ffffffff811ca538 T acpi_pci_register_driver
ffffffff811ca592 T acpi_get_pci_rootbridge_handle
ffffffff811ca5c1 t acpi_pci_bridge_scan
ffffffff811ca613 T acpi_pci_find_root
ffffffff811ca634 t acpi_pci_root_start
ffffffff811ca649 t acpi_pci_root_remove
ffffffff811ca674 t acpi_pci_run_osc
ffffffff811ca6da t acpi_pci_query_osc
ffffffff811ca74c T acpi_pci_osc_control_set
ffffffff811ca865 T acpi_is_root_bridge
ffffffff811ca898 T acpi_get_pci_dev
ffffffff811ca9d2 t get_root_bridge_busnr_callback
ffffffff811caa18 T acpi_pci_unregister_driver
ffffffff811caa78 t acpi_pci_link_remove
ffffffff811caacd t acpi_pci_link_check_possible
ffffffff811cabe6 t acpi_pci_link_get_current
ffffffff811caca9 t acpi_pci_link_add
ffffffff811cae68 t acpi_pci_link_set
ffffffff811cb037 t irqrouter_resume
ffffffff811cb070 t acpi_pci_link_check_current
ffffffff811cb0de T acpi_pci_link_allocate_irq
ffffffff811cb2e7 T acpi_pci_link_free_irq
ffffffff811cb380 T acpi_penalize_isa_irq
ffffffff811cb3a8 t acpi_pci_irq_find_prt_entry
ffffffff811cb442 t acpi_pci_irq_lookup
ffffffff811cb582 T acpi_pci_irq_add_prt
ffffffff811cb82d T acpi_pci_irq_del_prt
ffffffff811cb8d5 T acpi_pci_irq_enable
ffffffff811cba3e W acpi_unregister_gsi
ffffffff811cba3f T acpi_pci_irq_disable
ffffffff811cba78 t acpi_pci_unbind
ffffffff811cbad0 t acpi_pci_bind
ffffffff811cbb62 T acpi_pci_bind_root
ffffffff811cbb7c t acpi_power_get_state
ffffffff811cbbf6 t acpi_power_remove
ffffffff811cbc16 t acpi_power_get_context
ffffffff811cbc80 t acpi_power_off
ffffffff811cbd0b t acpi_power_add
ffffffff811cbe68 T acpi_power_resource_unregister_device
ffffffff811cbf30 T acpi_power_resource_register_device
ffffffff811cc078 T acpi_device_sleep_wake
ffffffff811cc140 T acpi_disable_wakeup_device_power
ffffffff811cc1f4 T acpi_power_get_inferred_state
ffffffff811cc2cd t __acpi_power_on
ffffffff811cc371 t acpi_power_resume
ffffffff811cc3df t acpi_power_on
ffffffff811cc44e t acpi_power_on_list
ffffffff811cc495 T acpi_enable_wakeup_device_power
ffffffff811cc55b T acpi_power_on_resources
ffffffff811cc586 T acpi_power_transition
ffffffff811cc634 t acpi_system_poll_event
ffffffff811cc668 t acpi_system_close_event
ffffffff811cc698 t acpi_system_read_event
ffffffff811cc779 T acpi_bus_generate_netlink_event
ffffffff811cc8a2 T unregister_acpi_notifier
ffffffff811cc8b1 T register_acpi_notifier
ffffffff811cc8c0 T acpi_notifier_call_chain
ffffffff811cc928 t acpi_system_open_event
ffffffff811cc978 t param_get_acpica_version
ffffffff811cc98b t acpi_show_profile
ffffffff811cc9a8 t delete_gpe_attr_array
ffffffff811cca01 t acpi_table_attr_init
ffffffff811ccad4 t acpi_sysfs_table_handler
ffffffff811ccb62 t acpi_table_show
ffffffff811ccbea t acpi_gbl_event_handler
ffffffff811ccc3f t get_status
ffffffff811ccccd t counter_set
ffffffff811cced0 t counter_show
ffffffff811ccfde T acpi_irq_stats_init
ffffffff811cd20c T pxm_to_node
ffffffff811cd21e T node_to_pxm
ffffffff811cd230 T __acpi_map_pxm_to_node
ffffffff811cd265 T acpi_map_pxm_to_node
ffffffff811cd2cb T acpi_get_pxm
ffffffff811cd311 T acpi_get_node
ffffffff811cd32c T acpi_unlock_battery_dir
ffffffff811cd38a T acpi_unlock_ac_dir
ffffffff811cd3e8 T acpi_lock_battery_dir
ffffffff811cd455 T acpi_lock_ac_dir
ffffffff811cd4c4 t acpi_ds_execute_arguments
ffffffff811cd5f2 T acpi_ds_get_buffer_field_arguments
ffffffff811cd61a T acpi_ds_get_bank_field_arguments
ffffffff811cd664 T acpi_ds_get_buffer_arguments
ffffffff811cd6b1 T acpi_ds_get_package_arguments
ffffffff811cd6fe T acpi_ds_get_region_arguments
ffffffff811cd754 T acpi_ds_exec_begin_control_op
ffffffff811cd7e9 T acpi_ds_exec_end_control_op
ffffffff811cda04 t acpi_ds_get_field_names
ffffffff811cdc22 T acpi_ds_create_buffer_field
ffffffff811cdd68 T acpi_ds_create_field
ffffffff811cde12 T acpi_ds_init_field_objects
ffffffff811cdf3c T acpi_ds_create_bank_field
ffffffff811ce025 T acpi_ds_create_index_field
ffffffff811ce0f4 t acpi_ds_init_one_object
ffffffff811ce16b T acpi_ds_initialize_objects
ffffffff811ce21c T acpi_ds_method_error
ffffffff811ce27a T acpi_ds_begin_method_execution
ffffffff811ce428 T acpi_ds_restart_control_method
ffffffff811ce49c T acpi_ds_terminate_control_method
ffffffff811ce5b0 T acpi_ds_call_control_method
ffffffff811ce72c T acpi_ds_method_data_init
ffffffff811ce7b8 T acpi_ds_method_data_delete_all
ffffffff811ce825 T acpi_ds_method_data_get_node
ffffffff811ce8c7 T acpi_ds_method_data_init_args
ffffffff811ce935 T acpi_ds_method_data_get_value
ffffffff811cea35 T acpi_ds_store_object_to_local
ffffffff811ceb9c T acpi_ds_build_internal_buffer_obj
ffffffff811cecc3 T acpi_ds_init_object_from_op
ffffffff811cef05 t acpi_ds_build_internal_object
ffffffff811cf081 T acpi_ds_create_node
ffffffff811cf0f0 T acpi_ds_build_internal_package_obj
ffffffff811cf2ec t acpi_ds_init_buffer_field
ffffffff811cf554 T acpi_ds_initialize_region
ffffffff811cf565 T acpi_ds_eval_buffer_field_operands
ffffffff811cf649 T acpi_ds_eval_region_operands
ffffffff811cf6f0 T acpi_ds_eval_table_region_operands
ffffffff811cf7db T acpi_ds_eval_data_object_operands
ffffffff811cf8d3 T acpi_ds_eval_bank_field_operands
ffffffff811cf968 T acpi_ds_clear_implicit_return
ffffffff811cf993 T acpi_ds_do_implicit_return
ffffffff811cf9f4 T acpi_ds_is_result_used
ffffffff811cfafc T acpi_ds_delete_result_if_not_used
ffffffff811cfb5a T acpi_ds_resolve_operands
ffffffff811cfb8b T acpi_ds_clear_operands
ffffffff811cfbc5 T acpi_ds_create_operand
ffffffff811cfdd4 T acpi_ds_create_operands
ffffffff811cfe80 T acpi_ds_evaluate_name_path
ffffffff811cff74 T acpi_ds_get_predicate_value
ffffffff811d00eb T acpi_ds_exec_begin_op
ffffffff811d0205 T acpi_ds_exec_end_op
ffffffff811d05d8 T acpi_ds_load1_end_op
ffffffff811d075a T acpi_ds_load1_begin_op
ffffffff811d09a2 T acpi_ds_init_callbacks
ffffffff811d0a14 T acpi_ds_load2_begin_op
ffffffff811d0d22 T acpi_ds_load2_end_op
ffffffff811d106c T acpi_ds_scope_stack_clear
ffffffff811d108f T acpi_ds_scope_stack_push
ffffffff811d111a T acpi_ds_scope_stack_pop
ffffffff811d1148 T acpi_ds_result_pop
ffffffff811d123c T acpi_ds_result_push
ffffffff811d136a T acpi_ds_obj_stack_push
ffffffff811d13bf T acpi_ds_obj_stack_pop
ffffffff811d1416 T acpi_ds_obj_stack_pop_and_delete
ffffffff811d145c T acpi_ds_get_current_walk_state
ffffffff811d1468 T acpi_ds_push_walk_state
ffffffff811d1474 T acpi_ds_pop_walk_state
ffffffff811d1485 T acpi_ds_create_walk_state
ffffffff811d151b T acpi_ds_init_aml_walk
ffffffff811d161e T acpi_ds_delete_walk_state
ffffffff811d16e0 T acpi_ev_initialize_events
ffffffff811d1779 T acpi_ev_install_xrupt_handlers
ffffffff811d17d4 T acpi_ev_fixed_event_detect
ffffffff811d18c8 t acpi_ev_asynch_execute_gpe_method
ffffffff811d1a70 T acpi_ev_update_gpe_enable_mask
ffffffff811d1aa9 T acpi_ev_enable_gpe
ffffffff811d1ace T acpi_ev_add_gpe_reference
ffffffff811d1b04 T acpi_ev_remove_gpe_reference
ffffffff811d1b3e T acpi_ev_low_get_gpe_info
ffffffff811d1b62 T acpi_ev_get_gpe_event_info
ffffffff811d1ba5 T acpi_ev_finish_gpe
ffffffff811d1bc9 t acpi_ev_asynch_enable_gpe
ffffffff811d1bdb T acpi_ev_gpe_dispatch
ffffffff811d1d05 T acpi_ev_gpe_detect
ffffffff811d1df4 T acpi_ev_delete_gpe_block
ffffffff811d1ea7 T acpi_ev_create_gpe_block
ffffffff811d21b8 T acpi_ev_initialize_gpe_block
ffffffff811d2250 T acpi_ev_match_gpe_method
ffffffff811d233b T acpi_ev_gpe_initialize
ffffffff811d24bc T acpi_ev_update_gpes
ffffffff811d259c T acpi_ev_walk_gpe_list
ffffffff811d2623 T acpi_ev_valid_gpe_event
ffffffff811d2663 T acpi_ev_get_gpe_device
ffffffff811d268d T acpi_ev_get_gpe_xrupt_block
ffffffff811d2780 T acpi_ev_delete_gpe_xrupt
ffffffff811d2805 T acpi_ev_delete_gpe_handlers
ffffffff811d2868 t acpi_ev_global_lock_handler
ffffffff811d28cd T acpi_ev_init_global_lock_handler
ffffffff811d296a T acpi_ev_remove_global_lock_handler
ffffffff811d2982 T acpi_ev_acquire_global_lock
ffffffff811d2a5c T acpi_ev_release_global_lock
ffffffff811d2ad8 t acpi_ev_notify_dispatch
ffffffff811d2b47 T acpi_ev_is_notify_object
ffffffff811d2b67 T acpi_ev_queue_notify_request
ffffffff811d2c3d T acpi_ev_terminate
ffffffff811d2d10 T acpi_ev_execute_reg_method
ffffffff811d2e03 t acpi_ev_reg_run
ffffffff811d2e4c T acpi_ev_address_space_dispatch
ffffffff811d3048 T acpi_ev_detach_region
ffffffff811d317c T acpi_ev_attach_region
ffffffff811d31a2 t acpi_ev_install_handler
ffffffff811d323a T acpi_ev_install_space_handler
ffffffff811d347c T acpi_ev_install_region_handlers
ffffffff811d34e5 T acpi_ev_execute_reg_methods
ffffffff811d3618 T acpi_ev_initialize_op_regions
ffffffff811d3698 T acpi_ev_system_memory_region_setup
ffffffff811d3725 T acpi_ev_io_space_region_setup
ffffffff811d3734 T acpi_ev_pci_config_region_setup
ffffffff811d399f T acpi_ev_pci_bar_region_setup
ffffffff811d39a2 T acpi_ev_cmos_region_setup
ffffffff811d39a5 T acpi_ev_default_region_setup
ffffffff811d39b4 T acpi_ev_initialize_region
ffffffff811d3af4 t acpi_ev_sci_xrupt_handler
ffffffff811d3b17 T acpi_ev_gpe_xrupt_handler
ffffffff811d3b1c T acpi_ev_install_sci_handler
ffffffff811d3b36 T acpi_ev_remove_sci_handler
ffffffff811d3b4c T acpi_remove_notify_handler
ffffffff811d3d37 T acpi_release_global_lock
ffffffff811d3d58 T acpi_acquire_global_lock
ffffffff811d3dab T acpi_remove_gpe_handler
ffffffff811d3e7a T acpi_install_global_event_handler
ffffffff811d3ece T acpi_install_gpe_handler
ffffffff811d4028 T acpi_remove_fixed_event_handler
ffffffff811d409e T acpi_install_fixed_event_handler
ffffffff811d4152 T acpi_install_notify_handler
ffffffff811d43b8 T acpi_disable
ffffffff811d43f3 T acpi_get_event_status
ffffffff811d4471 T acpi_clear_event
ffffffff811d4493 T acpi_disable_event
ffffffff811d450d T acpi_enable_event
ffffffff811d458a T acpi_enable
ffffffff811d4644 T acpi_update_all_gpes
ffffffff811d4687 T acpi_get_gpe_status
ffffffff811d46f6 T acpi_clear_gpe
ffffffff811d474a T acpi_set_gpe_wake_mask
ffffffff811d4803 T acpi_disable_gpe
ffffffff811d4857 T acpi_enable_gpe
ffffffff811d48ab T acpi_get_gpe_device
ffffffff811d490d T acpi_remove_gpe_block
ffffffff811d4981 T acpi_install_gpe_block
ffffffff811d4a87 T acpi_enable_all_runtime_gpes
ffffffff811d4aad T acpi_disable_all_gpes
ffffffff811d4ad3 T acpi_setup_gpe_for_wake
ffffffff811d4bbc T acpi_remove_address_space_handler
ffffffff811d4c8f T acpi_install_address_space_handler
ffffffff811d4d40 t acpi_ex_region_read
ffffffff811d4d92 t acpi_ex_add_table
ffffffff811d4e3a T acpi_ex_unload_table
ffffffff811d4ed0 T acpi_ex_load_op
ffffffff811d5168 T acpi_ex_load_table_op
ffffffff811d5344 t acpi_ex_convert_to_ascii
ffffffff811d5432 T acpi_ex_convert_to_integer
ffffffff811d54e2 T acpi_ex_convert_to_buffer
ffffffff811d5574 T acpi_ex_convert_to_string
ffffffff811d56f5 T acpi_ex_convert_to_target_type
ffffffff811d57d0 T acpi_ex_create_alias
ffffffff811d583b T acpi_ex_create_event
ffffffff811d58a9 T acpi_ex_create_mutex
ffffffff811d592e T acpi_ex_create_region
ffffffff811d5a29 T acpi_ex_create_processor
ffffffff811d5ab0 T acpi_ex_create_power_resource
ffffffff811d5b28 T acpi_ex_create_method
ffffffff811d5bc4 T acpi_ex_do_debug_object
ffffffff811d5e9c T acpi_ex_read_data_from_field
ffffffff811d5ff1 T acpi_ex_write_data_to_field
ffffffff811d61b4 T acpi_ex_access_region
ffffffff811d643a T acpi_ex_insert_into_field
ffffffff811d663e t acpi_ex_field_datum_io
ffffffff811d67dc T acpi_ex_extract_from_field
ffffffff811d69db T acpi_ex_write_with_update_rule
ffffffff811d6a94 T acpi_ex_unlink_mutex
ffffffff811d6acf T acpi_ex_acquire_mutex_object
ffffffff811d6b2d T acpi_ex_acquire_mutex
ffffffff811d6c22 T acpi_ex_release_mutex_object
ffffffff811d6c7f T acpi_ex_release_mutex
ffffffff811d6dd8 T acpi_ex_release_all_mutexes
ffffffff811d6e40 t acpi_ex_allocate_name_string
ffffffff811d6ef8 t acpi_ex_name_segment
ffffffff811d6fda T acpi_ex_get_name_string
ffffffff811d71c8 T acpi_ex_opcode_0A_0T_1R
ffffffff811d7248 T acpi_ex_opcode_1A_0T_0R
ffffffff811d72e4 T acpi_ex_opcode_1A_1T_0R
ffffffff811d732b T acpi_ex_opcode_1A_1T_1R
ffffffff811d77e5 T acpi_ex_opcode_1A_0T_1R
ffffffff811d7c6c T acpi_ex_opcode_2A_0T_0R
ffffffff811d7cf8 T acpi_ex_opcode_2A_2T_1R
ffffffff811d7e17 T acpi_ex_opcode_2A_1T_1R
ffffffff811d8158 T acpi_ex_opcode_2A_0T_1R
ffffffff811d8294 T acpi_ex_opcode_3A_0T_0R
ffffffff811d8346 T acpi_ex_opcode_3A_1T_1R
ffffffff811d84e8 t acpi_ex_do_match
ffffffff811d857b T acpi_ex_opcode_6A_0T_1R
ffffffff811d872c T acpi_ex_prep_common_field_object
ffffffff811d87c5 T acpi_ex_prep_field_value
ffffffff811d8a2c T acpi_ex_get_object_reference
ffffffff811d8af4 T acpi_ex_concat_template
ffffffff811d8ba3 T acpi_ex_do_concatenate
ffffffff811d8d61 T acpi_ex_do_math_op
ffffffff811d8dd7 T acpi_ex_do_logical_numeric_op
ffffffff811d8e12 T acpi_ex_do_logical_op
ffffffff811d8f78 T acpi_ex_system_memory_space_handler
ffffffff811d9145 T acpi_ex_system_io_space_handler
ffffffff811d9184 T acpi_ex_pci_config_space_handler
ffffffff811d91b8 T acpi_ex_cmos_space_handler
ffffffff811d91bb T acpi_ex_pci_bar_space_handler
ffffffff811d91be T acpi_ex_data_table_space_handler
ffffffff811d91ec T acpi_ex_resolve_node_to_value
ffffffff811d9414 T acpi_ex_resolve_to_value
ffffffff811d9626 T acpi_ex_resolve_multiple
ffffffff811d9834 t acpi_ex_check_object_type
ffffffff811d98a4 T acpi_ex_resolve_operands
ffffffff811d9d74 T acpi_ex_store_object_to_node
ffffffff811d9e4d T acpi_ex_store
ffffffff811da090 T acpi_ex_resolve_object
ffffffff811da191 T acpi_ex_store_object_to_object
ffffffff811da2a8 T acpi_ex_store_buffer_to_buffer
ffffffff811da342 T acpi_ex_store_string_to_string
ffffffff811da3f4 T acpi_ex_system_wait_semaphore
ffffffff811da43d T acpi_ex_system_wait_mutex
ffffffff811da486 T acpi_ex_system_do_stall
ffffffff811da4bb T acpi_ex_system_do_sleep
ffffffff811da4e5 T acpi_ex_system_signal_event
ffffffff811da4fb T acpi_ex_system_wait_event
ffffffff811da513 T acpi_ex_system_reset_event
ffffffff811da54c T acpi_ex_enter_interpreter
ffffffff811da575 T acpi_ex_reacquire_interpreter
ffffffff811da584 T acpi_ex_exit_interpreter
ffffffff811da5af T acpi_ex_relinquish_interpreter
ffffffff811da5be T acpi_ex_truncate_for32bit_table
ffffffff811da5de T acpi_ex_acquire_global_lock
ffffffff811da624 T acpi_ex_release_global_lock
ffffffff811da65c T acpi_ex_eisa_id_to_string
ffffffff811da70c T acpi_ex_integer_to_string
ffffffff811da786 T acpi_is_valid_space_id
ffffffff811da7a0 T acpi_hw_set_mode
ffffffff811da84d T acpi_hw_get_mode
ffffffff811da88c T acpi_hw_execute_sleep_method
ffffffff811da8f9 T acpi_hw_extended_sleep
ffffffff811da99a T acpi_hw_extended_wake_prep
ffffffff811da9f4 T acpi_hw_extended_wake
ffffffff811daa4c t acpi_hw_enable_wakeup_gpe_block
ffffffff811daa87 T acpi_hw_enable_runtime_gpe_block
ffffffff811daac2 T acpi_hw_clear_gpe_block
ffffffff811daaf3 T acpi_hw_disable_gpe_block
ffffffff811dab25 T acpi_hw_get_gpe_register_bit
ffffffff811dab37 T acpi_hw_low_set_gpe
ffffffff811dabdc T acpi_hw_clear_gpe
ffffffff811dac01 T acpi_hw_get_gpe_status
ffffffff811dac71 T acpi_hw_disable_all_gpes
ffffffff811dac8f T acpi_hw_enable_all_runtime_gpes
ffffffff811dac9d T acpi_hw_enable_all_wakeup_gpes
ffffffff811dacac T acpi_hw_derive_pci_id
ffffffff811dae70 T acpi_hw_validate_register
ffffffff811daf20 T acpi_hw_read
ffffffff811daf77 t acpi_hw_read_multiple
ffffffff811dafd7 T acpi_hw_write
ffffffff811db01d t acpi_hw_write_multiple
ffffffff811db049 T acpi_hw_get_bit_register_info
ffffffff811db079 T acpi_hw_write_pm1_control
ffffffff811db0a7 T acpi_hw_register_read
ffffffff811db17c T acpi_hw_register_write
ffffffff811db280 T acpi_hw_clear_acpi_status
ffffffff811db2d8 T acpi_hw_legacy_sleep
ffffffff811db467 T acpi_hw_legacy_wake_prep
ffffffff811db52a T acpi_hw_legacy_wake
ffffffff811db5c0 t acpi_hw_validate_io_request
ffffffff811db689 T acpi_hw_read_port
ffffffff811db727 T acpi_hw_write_port
ffffffff811db7bc T acpi_read_bit_register
ffffffff811db800 T acpi_write_bit_register
ffffffff811db8ab T acpi_write
ffffffff811db922 T acpi_get_sleep_type_data
ffffffff811dbafa T acpi_read
ffffffff811dbba2 T acpi_reset
ffffffff811dbbec t acpi_hw_sleep_dispatch
ffffffff811dbc1f T acpi_leave_sleep_state_prep
ffffffff811dbc2e T acpi_leave_sleep_state
ffffffff811dbc3b T acpi_set_firmware_waking_vector
ffffffff811dbc5c T acpi_set_firmware_waking_vector64
ffffffff811dbc82 T acpi_enter_sleep_state
ffffffff811dbcd3 T acpi_enter_sleep_state_prep
ffffffff811dbd63 T acpi_enter_sleep_state_s4bios
ffffffff811dbde4 T acpi_ns_lookup
ffffffff811dc143 T acpi_ns_root_initialize
ffffffff811dc3f0 T acpi_ns_create_node
ffffffff811dc42f T acpi_ns_delete_node
ffffffff811dc475 T acpi_ns_remove_node
ffffffff811dc4a5 T acpi_ns_install_node
ffffffff811dc4fd T acpi_ns_delete_children
ffffffff811dc557 T acpi_ns_delete_namespace_subtree
ffffffff811dc5ce T acpi_ns_delete_namespace_by_owner
ffffffff811dc694 T acpi_ns_evaluate
ffffffff811dc82e T acpi_ns_exec_module_code_list
ffffffff811dc9a0 t acpi_ns_init_one_device
ffffffff811dca5e t acpi_ns_init_one_object
ffffffff811dcb5b t acpi_ns_find_ini_methods
ffffffff811dcbab T acpi_ns_initialize_objects
ffffffff811dcc07 T acpi_ns_initialize_devices
ffffffff811dcd44 T acpi_ns_load_table
ffffffff811dcdc0 T acpi_ns_build_external_path
ffffffff811dce30 T acpi_ns_get_pathname_length
ffffffff811dce87 T acpi_ns_get_external_pathname
ffffffff811dcf11 T acpi_ns_handle_to_pathname
ffffffff811dcf68 T acpi_ns_detach_object
ffffffff811dcfca T acpi_ns_attach_object
ffffffff811dd0b4 T acpi_ns_get_attached_object
ffffffff811dd0fd T acpi_ns_get_secondary_object
ffffffff811dd120 T acpi_ns_attach_data
ffffffff811dd1a1 T acpi_ns_detach_data
ffffffff811dd1e4 T acpi_ns_get_attached_data
ffffffff811dd20c T acpi_ns_one_complete_parse
ffffffff811dd31d T acpi_ns_parse_table
ffffffff811dd350 t acpi_ns_check_object_type
ffffffff811dd516 t acpi_ns_check_package_elements
ffffffff811dd5aa t acpi_ns_check_package_list
ffffffff811dd7bb T acpi_ns_check_parameter_count
ffffffff811dd8a8 T acpi_ns_check_for_predefined_name
ffffffff811dd8d8 T acpi_ns_check_predefined_names
ffffffff811ddcd0 T acpi_ns_repair_null_element
ffffffff811ddd38 T acpi_ns_remove_null_elements
ffffffff811ddd8a T acpi_ns_wrap_with_package
ffffffff811dddc4 T acpi_ns_repair_object
ffffffff811de01c t acpi_ns_repair_HID
ffffffff811de0cb t acpi_ns_repair_CID
ffffffff811de14f t acpi_ns_repair_FDE
ffffffff811de1fc t acpi_ns_check_sorted_list.isra.0.part.1
ffffffff811de314 t acpi_ns_repair_ALR
ffffffff811de339 t acpi_ns_repair_TSS
ffffffff811de38e t acpi_ns_repair_PSS
ffffffff811de437 T acpi_ns_complex_repairs
ffffffff811de468 T acpi_ns_search_one_scope
ffffffff811de4a4 T acpi_ns_search_and_enter
ffffffff811de61c T acpi_ns_print_node_pathname
ffffffff811de68c T acpi_ns_valid_root_prefix
ffffffff811de694 T acpi_ns_get_type
ffffffff811de6be T acpi_ns_local
ffffffff811de6f6 T acpi_ns_get_internal_name_length
ffffffff811de761 T acpi_ns_build_internal_name
ffffffff811de847 T acpi_ns_internalize_name
ffffffff811de8e5 T acpi_ns_externalize_name
ffffffff811deab1 T acpi_ns_validate_handle
ffffffff811dead4 T acpi_ns_terminate
ffffffff811deb03 T acpi_ns_opens_scope
ffffffff811deb3b T acpi_ns_get_node
ffffffff811debdc T acpi_ns_get_next_node
ffffffff811debeb T acpi_ns_get_next_node_typed
ffffffff811dec11 T acpi_ns_walk_namespace
ffffffff811ded88 T acpi_evaluate_object
ffffffff811defca T acpi_evaluate_object_typed
ffffffff811df098 T acpi_get_data
ffffffff811df106 T acpi_detach_data
ffffffff811df160 T acpi_attach_data
ffffffff811df1cf T acpi_get_devices
ffffffff811df244 t acpi_ns_get_device_callback
ffffffff811df3a5 T acpi_walk_namespace
ffffffff811df468 T acpi_get_object_info
ffffffff811df7bb T acpi_get_handle
ffffffff811df85f T acpi_install_method
ffffffff811dfa56 T acpi_get_name
ffffffff811dfb08 T acpi_get_next_object
ffffffff811dfb98 T acpi_get_parent
ffffffff811dfbfe T acpi_get_type
ffffffff811dfc61 T acpi_get_id
ffffffff811dfcb8 t acpi_ps_get_next_package_length.isra.0
ffffffff811dfcf4 T acpi_ps_get_next_package_end
ffffffff811dfd09 T acpi_ps_get_next_namestring
ffffffff811dfd68 T acpi_ps_get_next_namepath
ffffffff811dff2f T acpi_ps_get_next_simple_arg
ffffffff811e0010 T acpi_ps_get_next_arg
ffffffff811e03ec t acpi_ps_complete_op
ffffffff811e064b T acpi_ps_parse_loop
ffffffff811e0f84 T acpi_ps_get_opcode_info
ffffffff811e0fcf T acpi_ps_get_opcode_name
ffffffff811e0fd7 T acpi_ps_get_argument_count
ffffffff811e0fe8 T acpi_ps_get_opcode_size
ffffffff811e0ff4 T acpi_ps_peek_opcode
ffffffff811e1009 T acpi_ps_complete_this_op
ffffffff811e1187 T acpi_ps_next_parse_state
ffffffff811e126b T acpi_ps_parse_aml
ffffffff811e14cc T acpi_ps_get_parent_scope
ffffffff811e14d5 T acpi_ps_has_completed_scope
ffffffff811e14f2 T acpi_ps_init_scope
ffffffff811e153a T acpi_ps_push_scope
ffffffff811e15a5 T acpi_ps_pop_scope
ffffffff811e1615 T acpi_ps_cleanup_scope
ffffffff811e1644 T acpi_ps_get_arg
ffffffff811e167f T acpi_ps_append_arg
ffffffff811e16fc T acpi_ps_init_op
ffffffff811e1705 T acpi_ps_alloc_op
ffffffff811e179e T acpi_ps_create_scope_op
ffffffff811e17b7 T acpi_ps_free_op
ffffffff811e17d3 T acpi_ps_is_leading_char
ffffffff811e17ea T acpi_ps_is_prefix_char
ffffffff811e17fd T acpi_ps_set_name
ffffffff811e1808 T acpi_ps_delete_parse_tree
ffffffff811e1860 t acpi_ps_update_parameter_list.isra.2
ffffffff811e1893 T acpi_debug_trace
ffffffff811e18f2 T acpi_ps_execute_method
ffffffff811e1b58 T acpi_rs_get_address_common
ffffffff811e1bac T acpi_rs_set_address_common
ffffffff811e1bfc T acpi_rs_get_aml_length
ffffffff811e1ce9 T acpi_rs_get_list_length
ffffffff811e1edf T acpi_rs_get_pci_routing_table_length
ffffffff811e1fcc T acpi_buffer_to_resource
ffffffff811e207b T acpi_rs_create_resource_list
ffffffff811e20df T acpi_rs_create_pci_routing_table
ffffffff811e23a8 T acpi_rs_create_aml_resources
ffffffff811e23f4 T acpi_rs_convert_aml_to_resources
ffffffff811e24c7 T acpi_rs_convert_resources_to_aml
ffffffff811e25c4 T acpi_rs_convert_aml_to_resource
ffffffff811e2937 T acpi_rs_convert_resource_to_aml
ffffffff811e2bc4 T acpi_rs_decode_bitmask
ffffffff811e2be3 T acpi_rs_encode_bitmask
ffffffff811e2c08 T acpi_rs_move_data
ffffffff811e2c57 T acpi_rs_set_resource_length
ffffffff811e2c86 T acpi_rs_set_resource_header
ffffffff811e2c95 T acpi_rs_get_resource_source
ffffffff811e2d20 T acpi_rs_set_resource_source
ffffffff811e2d55 T acpi_rs_get_prt_method_data
ffffffff811e2d9c T acpi_rs_get_crs_method_data
ffffffff811e2de3 T acpi_rs_get_aei_method_data
ffffffff811e2e2a T acpi_rs_get_method_data
ffffffff811e2e6a T acpi_rs_set_srs_method_data
ffffffff811e2f6c T acpi_get_event_resources
ffffffff811e2fb4 T acpi_set_current_resources
ffffffff811e3012 T acpi_get_current_resources
ffffffff811e305d T acpi_get_irq_routing_table
ffffffff811e30a6 T acpi_walk_resources
ffffffff811e3170 T acpi_get_vendor_resource
ffffffff811e31b5 t acpi_rs_match_vendor_resource
ffffffff811e3233 T acpi_resource_to_address64
ffffffff811e3330 T acpi_tb_create_local_fadt
ffffffff811e3787 T acpi_tb_parse_fadt
ffffffff811e3810 T acpi_tb_find_table
ffffffff811e3970 T acpi_tb_verify_table
ffffffff811e39bf T acpi_tb_resize_root_table_list
ffffffff811e3a94 T acpi_tb_store_table
ffffffff811e3b11 T acpi_tb_delete_table
ffffffff811e3b4a T acpi_tb_table_override
ffffffff811e3c73 T acpi_tb_add_table
ffffffff811e3dfa T acpi_tb_terminate
ffffffff811e3e61 T acpi_tb_delete_namespace_by_owner
ffffffff811e3eec T acpi_tb_allocate_owner_id
ffffffff811e3f2f T acpi_tb_release_owner_id
ffffffff811e3f72 T acpi_tb_get_owner_id
ffffffff811e3fb8 T acpi_tb_is_table_loaded
ffffffff811e3ff2 T acpi_tb_set_table_loaded_flag
ffffffff811e4034 t acpi_tb_fix_string
ffffffff811e405b T acpi_tb_initialize_facs
ffffffff811e4083 T acpi_tb_tables_loaded
ffffffff811e408e T acpi_tb_print_table_header
ffffffff811e41e1 T acpi_tb_checksum
ffffffff811e41f5 T acpi_tb_verify_checksum
ffffffff811e4234 T acpi_tb_check_dsdt_header
ffffffff811e42bd T acpi_tb_copy_dsdt
ffffffff811e4358 T acpi_tb_install_table
ffffffff811e4484 T acpi_remove_table_handler
ffffffff811e44ca T acpi_load_tables
ffffffff811e4623 T acpi_unload_table_id
ffffffff811e467b T acpi_install_table_handler
ffffffff811e46cf T acpi_get_table_by_index
ffffffff811e4743 T acpi_get_table_header
ffffffff811e47f0 T acpi_load_table
ffffffff811e4845 T acpi_allocate_root_table
ffffffff811e4857 T acpi_reallocate_root_table
ffffffff811e48db T acpi_get_table_with_size
ffffffff811e497d T acpi_get_table
ffffffff811e4990 t acpi_tb_scan_memory_for_rsdp
ffffffff811e49ef T acpi_find_root_pointer
ffffffff811e4b44 T acpi_ut_add_address_range
ffffffff811e4bf7 T acpi_ut_remove_address_range
ffffffff811e4c40 T acpi_ut_check_address_range
ffffffff811e4cf4 T acpi_ut_delete_address_lists
ffffffff811e4d30 T acpi_ut_create_caches
ffffffff811e4ddb T acpi_ut_delete_caches
ffffffff811e4e55 T acpi_ut_validate_buffer
ffffffff811e4e7b T acpi_ut_initialize_buffer
ffffffff811e4f00 t acpi_ut_copy_simple_object
ffffffff811e5047 t acpi_ut_copy_ielement_to_ielement
ffffffff811e5105 t acpi_ut_copy_isimple_to_esimple
ffffffff811e524c t acpi_ut_copy_ielement_to_eelement
ffffffff811e52c7 T acpi_ut_copy_iobject_to_eobject
ffffffff811e5359 T acpi_ut_copy_eobject_to_iobject
ffffffff811e5561 T acpi_ut_copy_iobject_to_iobject
ffffffff811e5660 T acpi_ut_dump_buffer2
ffffffff811e5819 T acpi_ut_dump_buffer
ffffffff811e5830 T acpi_format_exception
ffffffff811e5860 T acpi_ut_hex_to_ascii_char
ffffffff811e5870 T acpi_ut_get_region_name
ffffffff811e58b0 T acpi_ut_get_event_name
ffffffff811e58c7 T acpi_ut_get_type_name
ffffffff811e58de T acpi_ut_get_object_type_name
ffffffff811e5903 T acpi_ut_get_node_name
ffffffff811e5944 T acpi_ut_get_descriptor_name
ffffffff811e596b T acpi_ut_get_reference_name
ffffffff811e59ac T acpi_ut_valid_object_type
ffffffff811e59b4 T acpi_ut_remove_reference
ffffffff811e59dc t acpi_ut_delete_internal_obj
ffffffff811e5b7c t acpi_ut_update_ref_count
ffffffff811e5c18 T acpi_ut_update_object_reference
ffffffff811e5d6e T acpi_ut_add_reference
ffffffff811e5d88 T acpi_ut_delete_internal_object_list
ffffffff811e5db0 T acpi_ut_evaluate_object
ffffffff811e5f39 T acpi_ut_evaluate_numeric_object
ffffffff811e5f7c T acpi_ut_execute_STA
ffffffff811e5fcb T acpi_ut_execute_power_methods
ffffffff811e6040 T acpi_ut_init_globals
ffffffff811e6288 T acpi_ut_execute_HID
ffffffff811e634e T acpi_ut_execute_UID
ffffffff811e6414 T acpi_ut_execute_CID
ffffffff811e6594 T acpi_ut_subsystem_shutdown
ffffffff811e6604 T acpi_ut_create_rw_lock
ffffffff811e663b T acpi_ut_delete_rw_lock
ffffffff811e6668 T acpi_ut_acquire_read_lock
ffffffff811e66bb T acpi_ut_release_read_lock
ffffffff811e6708 T acpi_ut_acquire_write_lock
ffffffff811e671a T acpi_ut_release_write_lock
ffffffff811e6728 T acpi_ut_short_divide
ffffffff811e677e T acpi_ut_divide
ffffffff811e67d4 T acpi_ut_validate_exception
ffffffff811e6866 T acpi_ut_is_pci_root_bridge
ffffffff811e6894 T acpi_ut_is_aml_table
ffffffff811e68b2 T acpi_ut_allocate_owner_id
ffffffff811e69a4 T acpi_ut_release_owner_id
ffffffff811e6a29 T acpi_ut_strupr
ffffffff811e6a4c T acpi_ut_print_string
ffffffff811e6ba4 T acpi_ut_dword_byte_swap
ffffffff811e6bc8 T acpi_ut_set_integer_width
ffffffff811e6bfa T acpi_ut_valid_acpi_char
ffffffff811e6c23 T acpi_ut_valid_acpi_name
ffffffff811e6c4e T acpi_ut_repair_name
ffffffff811e6c87 T acpi_ut_strtoul64
ffffffff811e6e57 T acpi_ut_create_update_state_and_push
ffffffff811e6e85 T acpi_ut_walk_package_tree
ffffffff811e6fa0 T acpi_ut_mutex_initialize
ffffffff811e709d T acpi_ut_mutex_terminate
ffffffff811e7100 T acpi_ut_acquire_mutex
ffffffff811e7189 T acpi_ut_release_mutex
ffffffff811e71e8 t acpi_ut_get_simple_object_size
ffffffff811e7309 t acpi_ut_get_element_length
ffffffff811e7356 T acpi_ut_valid_internal_object
ffffffff811e7365 T acpi_ut_allocate_object_desc_dbg
ffffffff811e73d3 T acpi_ut_delete_object_desc
ffffffff811e7413 T acpi_ut_create_internal_object_dbg
ffffffff811e748e T acpi_ut_create_string_object
ffffffff811e751d T acpi_ut_create_buffer_object
ffffffff811e75b5 T acpi_ut_create_integer_object
ffffffff811e75df T acpi_ut_create_package_object
ffffffff811e7657 T acpi_ut_get_object_size
ffffffff811e76b4 T acpi_ut_initialize_interfaces
ffffffff811e7711 T acpi_ut_interface_terminate
ffffffff811e7769 T acpi_ut_install_interface
ffffffff811e7821 T acpi_ut_remove_interface
ffffffff811e78a2 T acpi_ut_get_interface
ffffffff811e78d2 T acpi_ut_osi_implementation
ffffffff811e79a8 T acpi_ut_get_resource_type
ffffffff811e79b5 T acpi_ut_get_resource_length
ffffffff811e79c5 T acpi_ut_validate_resource
ffffffff811e7ad8 T acpi_ut_get_resource_header_length
ffffffff811e7ae3 T acpi_ut_get_descriptor_length
ffffffff811e7afd T acpi_ut_walk_aml_resources
ffffffff811e7c03 T acpi_ut_get_resource_end_tag
ffffffff811e7c20 T acpi_ut_push_generic_state
ffffffff811e7c2a T acpi_ut_pop_generic_state
ffffffff811e7c39 T acpi_ut_create_generic_state
ffffffff811e7c85 T acpi_ut_create_thread_state
ffffffff811e7cce T acpi_ut_create_update_state
ffffffff811e7cf1 T acpi_ut_create_pkg_state
ffffffff811e7d25 T acpi_ut_create_pkg_state_and_push
ffffffff811e7d4b T acpi_ut_create_control_state
ffffffff811e7d64 T acpi_ut_delete_generic_state
ffffffff811e7d7c T acpi_install_interface_handler
ffffffff811e7dcc T acpi_purge_cached_objects
ffffffff811e7e01 T acpi_enable_subsystem
ffffffff811e7e8c T acpi_check_address_range
ffffffff811e7ee0 T acpi_remove_interface
ffffffff811e7f2c T acpi_install_interface
ffffffff811e7f9d T acpi_terminate
ffffffff811e7fe9 T acpi_initialize_objects
ffffffff811e802c T acpi_info
ffffffff811e808e T acpi_warning
ffffffff811e8105 T acpi_exception
ffffffff811e8181 T acpi_error
ffffffff811e81f8 T acpi_ut_predefined_warning
ffffffff811e8273 T acpi_ut_predefined_info
ffffffff811e82ee T acpi_ut_namespace_error
ffffffff811e83b8 T acpi_ut_method_error
ffffffff811e8468 t acpi_ut_get_mutex_object.part.0
ffffffff811e84ba T acpi_acquire_mutex
ffffffff811e84fa T acpi_release_mutex
ffffffff811e853c t acpi_hed_notify
ffffffff811e854c T unregister_acpi_hed_notifier
ffffffff811e855b T register_acpi_hed_notifier
ffffffff811e8570 T apei_exec_ctx_init
ffffffff811e8580 T apei_exec_noop
ffffffff811e8590 T apei_get_debugfs_dir
ffffffff811e85c0 T apei_osc_setup
ffffffff811e8690 t apei_res_clean
ffffffff811e86f0 T apei_resources_fini
ffffffff811e8700 t apei_exec_for_each_entry
ffffffff811e87a0 T apei_exec_collect_resources
ffffffff811e87c0 T apei_exec_post_unmap_gars
ffffffff811e87d0 T __apei_exec_run
ffffffff811e88b0 t apei_check_gar
ffffffff811e8980 T apei_exec_pre_map_gars
ffffffff811e89e0 t apei_res_add
ffffffff811e8b10 t apei_get_nvs_callback
ffffffff811e8b30 t collect_res_callback
ffffffff811e8bc0 t apei_res_sub
ffffffff811e8d20 T apei_write
ffffffff811e8dc0 T apei_read
ffffffff811e8e60 T apei_map_generic_address
ffffffff811e8e90 t post_unmap_gar_callback
ffffffff811e8ec0 T apei_resources_add
ffffffff811e8ed0 T apei_resources_sub
ffffffff811e8f20 t pre_map_gar_callback
ffffffff811e8f50 T apei_resources_release
ffffffff811e8fe0 T apei_resources_request
ffffffff811e9290 T __apei_exec_read_register
ffffffff811e92e0 T apei_exec_read_register
ffffffff811e9320 T apei_exec_read_register_value
ffffffff811e9360 T __apei_exec_write_register
ffffffff811e93f0 T apei_exec_write_register
ffffffff811e9400 T apei_exec_write_register_value
ffffffff811e9420 T apei_hest_parse
ffffffff811e9550 T apei_estatus_check_header
ffffffff811e9580 T cper_next_record_id
ffffffff811e95c0 T apei_estatus_check
ffffffff811e9650 T cper_print_bits
ffffffff811e9750 t apei_estatus_print_section
ffffffff811ea070 T apei_estatus_print
ffffffff811ea110 t erst_exec_add
ffffffff811ea120 t erst_exec_subtract
ffffffff811ea130 t erst_exec_goto
ffffffff811ea140 t erst_exec_set_dst_address_base
ffffffff811ea150 t erst_exec_set_src_address_base
ffffffff811ea160 t erst_exec_skip_next_instruction_if_true
ffffffff811ea1a0 t erst_exec_load_var2
ffffffff811ea1b0 t erst_exec_load_var1
ffffffff811ea1c0 t erst_exec_move_data
ffffffff811ea2a0 t erst_exec_stall
ffffffff811ea300 t erst_exec_subtract_value
ffffffff811ea350 t erst_exec_add_value
ffffffff811ea3a0 t erst_exec_store_var1
ffffffff811ea3b0 T erst_get_record_count
ffffffff811ea430 T erst_get_record_id_next
ffffffff811ea750 t __erst_record_id_cache_compact.part.5
ffffffff811ea7a0 t erst_timedout
ffffffff811ea7e0 t erst_exec_stall_while_true
ffffffff811ea880 T erst_get_record_id_begin
ffffffff811ea8d0 t erst_open_pstore
ffffffff811ea8f0 t pr_unimpl_nvram
ffffffff811ea920 T erst_clear
ffffffff811eab80 t erst_clearer
ffffffff811eab90 T erst_get_record_id_end
ffffffff811eabe0 t erst_close_pstore
ffffffff811eabf0 T erst_read
ffffffff811eae00 t erst_reader
ffffffff811eb120 T erst_write
ffffffff811eb390 t erst_writer
ffffffff811eb6e0 t ghes_copy_tofrom_phys
ffffffff811eb8e0 t ghes_read_estatus
ffffffff811eba60 t ghes_estatus_cached
ffffffff811ebb10 t ghes_estatus_cache_free
ffffffff811ebb50 t ghes_estatus_cache_add
ffffffff811ebcd0 t ghes_estatus_cache_rcu_free
ffffffff811ebce0 t ghes_do_proc
ffffffff811ebef0 t ghes_add_timer
ffffffff811ebf40 t ghes_estatus_pool_free_chunk_page
ffffffff811ebf50 t ghes_clear_estatus
ffffffff811ebf90 t __ghes_print_estatus.isra.6
ffffffff811ec020 t ghes_print_estatus.constprop.7
ffffffff811ec090 t ghes_notify_nmi
ffffffff811ec2b0 t ghes_proc_in_irq
ffffffff811ec370 t ghes_proc
ffffffff811ec3e0 t ghes_notify_sci
ffffffff811ec450 t ghes_irq_func
ffffffff811ec470 t ghes_poll_func
ffffffff811ec4a0 T pnp_alloc
ffffffff811ec4d0 T pnp_register_protocol
ffffffff811ec5a0 T pnp_unregister_protocol
ffffffff811ec600 T pnp_free_resource
ffffffff811ec630 T pnp_free_resources
ffffffff811ec680 t pnp_release_device
ffffffff811ec6d0 T pnp_alloc_dev
ffffffff811ec7e0 T __pnp_add_device
ffffffff811ec8a0 T pnp_add_device
ffffffff811ec9b0 T __pnp_remove_device
ffffffff811eca40 t card_remove
ffffffff811eca50 t card_suspend
ffffffff811eca80 t card_resume
ffffffff811ecab0 T pnp_unregister_card_driver
ffffffff811ecb10 t card_remove_first
ffffffff811ecb80 t pnp_release_card
ffffffff811ecbc0 T pnp_release_card_device
ffffffff811ecbf0 t card_probe
ffffffff811ecd60 T pnp_request_card_device
ffffffff811ece90 t pnp_show_card_ids
ffffffff811ecee0 t pnp_show_card_name
ffffffff811ecf10 T pnp_register_card_driver
ffffffff811ed000 T pnp_alloc_card
ffffffff811ed170 T pnp_add_card
ffffffff811ed2f0 T pnp_add_card_device
ffffffff811ed390 T pnp_remove_card_device
ffffffff811ed400 T pnp_remove_card
ffffffff811ed4e0 t pnp_device_shutdown
ffffffff811ed500 t pnp_bus_resume
ffffffff811ed590 t pnp_bus_suspend
ffffffff811ed620 T pnp_unregister_driver
ffffffff811ed630 T pnp_register_driver
ffffffff811ed650 T pnp_device_detach
ffffffff811ed690 t pnp_device_remove
ffffffff811ed6d0 T pnp_device_attach
ffffffff811ed720 T compare_pnp_id
ffffffff811ed800 t match_device.isra.2
ffffffff811ed860 t pnp_device_probe
ffffffff811ed930 t pnp_bus_match
ffffffff811ed960 T pnp_add_id
ffffffff811eda50 t pnp_test_handler
ffffffff811eda60 T pnp_get_resource
ffffffff811edab0 T pnp_range_reserved
ffffffff811edb20 T pnp_possible_config
ffffffff811edbd0 t pnp_new_resource
ffffffff811edc10 T pnp_build_option
ffffffff811edc80 T pnp_register_irq_resource
ffffffff811edd20 T pnp_register_dma_resource
ffffffff811edd80 T pnp_register_port_resource
ffffffff811ede20 T pnp_register_mem_resource
ffffffff811edec0 T pnp_free_options
ffffffff811edf30 T pnp_check_port
ffffffff811ee130 T pnp_check_mem
ffffffff811ee330 T pnp_check_irq
ffffffff811ee5a0 T pnp_check_dma
ffffffff811ee770 T pnp_resource_type
ffffffff811ee780 T pnp_add_irq_resource
ffffffff811ee820 T pnp_add_dma_resource
ffffffff811ee8c0 T pnp_add_io_resource
ffffffff811ee970 T pnp_add_mem_resource
ffffffff811eea20 T pnp_add_bus_resource
ffffffff811eeac0 t pnp_clean_resource_table
ffffffff811eeb10 t pnp_assign_resources
ffffffff811ef0c0 T pnp_stop_dev
ffffffff811ef120 T pnp_disable_dev
ffffffff811ef190 T pnp_start_dev
ffffffff811ef200 T pnp_init_resources
ffffffff811ef210 T pnp_auto_config_dev
ffffffff811ef2a0 T pnp_activate_dev
ffffffff811ef2e0 T pnp_is_active
ffffffff811ef3f0 T pnp_eisa_id_to_string
ffffffff811ef470 T pnp_resource_type_name
ffffffff811ef4e0 T dbg_pnp_show_resources
ffffffff811ef510 T pnp_option_priority_name
ffffffff811ef530 T dbg_pnp_show_option
ffffffff811ef830 t pnp_show_current_ids
ffffffff811ef880 t pnp_printf
ffffffff811ef920 t pnp_show_options
ffffffff811efec0 t pnp_set_current_resources
ffffffff811f02a0 t pnp_show_current_resources
ffffffff811f0440 t quirk_ad1815_mpu_resources
ffffffff811f04b0 t quirk_sb16audio_resources
ffffffff811f0540 t quirk_amd_mmconfig_area
ffffffff811f0620 t quirk_system_pci_resources
ffffffff811f07a0 t quirk_add_irq_optional_dependent_sets
ffffffff811f0970 t quirk_awe32_add_ports
ffffffff811f0a40 t quirk_awe32_resources
ffffffff811f0ad0 t quirk_cmi8330_resources
ffffffff811f0bb0 T pnp_fixup_device
ffffffff811f0c10 t reserve_range
ffffffff811f0d60 t system_pnp_probe
ffffffff811f0e10 t pnpacpi_get_resources
ffffffff811f0e20 t pnpacpi_disable_resources
ffffffff811f0ea0 t pnpacpi_set_resources
ffffffff811f0f70 t pnpacpi_count_resources
ffffffff811f0f90 t decode_irq_flags
ffffffff811f1050 t dma_flags
ffffffff811f1110 t pnpacpi_parse_allocated_ioresource
ffffffff811f1150 t pnpacpi_parse_allocated_memresource
ffffffff811f1190 t pnpacpi_type_resources
ffffffff811f11d0 t pnpacpi_parse_allocated_irqresource
ffffffff811f1350 t pnpacpi_allocated_resource
ffffffff811f1720 T pnpacpi_parse_allocated_resource
ffffffff811f1790 T pnpacpi_build_resource_template
ffffffff811f1880 T pnpacpi_encode_resources
ffffffff811f1d90 t gnttab_update_entry_v1
ffffffff811f1dd0 t gnttab_update_entry_v2
ffffffff811f1e00 T gnttab_grant_foreign_access_ref
ffffffff811f1e20 T gnttab_update_subpage_entry_v2
ffffffff811f1e70 T gnttab_grant_foreign_access_subpage_ref
ffffffff811f1eb0 T gnttab_subpage_grants_available
ffffffff811f1ec0 T gnttab_update_trans_entry_v2
ffffffff811f1f00 T gnttab_grant_foreign_access_trans_ref
ffffffff811f1f40 T gnttab_trans_grants_available
ffffffff811f1f50 t gnttab_query_foreign_access_v1
ffffffff811f1f70 t gnttab_query_foreign_access_v2
ffffffff811f1f90 T gnttab_query_foreign_access
ffffffff811f1fa0 t gnttab_end_foreign_access_ref_v2
ffffffff811f1fd0 T gnttab_end_foreign_access_ref
ffffffff811f1fe0 T gnttab_grant_foreign_transfer_ref
ffffffff811f2000 T gnttab_end_foreign_transfer_ref
ffffffff811f2010 T gnttab_empty_grant_references
ffffffff811f2020 T gnttab_release_grant_reference
ffffffff811f2050 T gnttab_max_grant_frames
ffffffff811f2090 T gnttab_claim_grant_reference
ffffffff811f20c0 t gnttab_map
ffffffff811f21d0 t gnttab_unmap_frames_v1
ffffffff811f21f0 t gnttab_unmap_frames_v2
ffffffff811f2240 t gnttab_map_frames_v2
ffffffff811f23b0 T gnttab_unmap_refs
ffffffff811f2420 T gnttab_map_refs
ffffffff811f2610 T gnttab_cancel_free_callback
ffffffff811f2680 T gnttab_request_free_callback
ffffffff811f2720 t get_free_entries
ffffffff811f2a50 T gnttab_alloc_grant_references
ffffffff811f2a70 T gnttab_grant_foreign_transfer
ffffffff811f2ad0 T gnttab_grant_foreign_access
ffffffff811f2b40 t put_free_entry
ffffffff811f2bb0 T gnttab_free_grant_reference
ffffffff811f2bc0 T gnttab_end_foreign_transfer
ffffffff811f2bf0 T gnttab_grant_foreign_access_trans
ffffffff811f2ca0 T gnttab_grant_foreign_access_subpage
ffffffff811f2d60 t gnttab_end_foreign_access_ref_v1
ffffffff811f2db0 t gnttab_end_foreign_transfer_ref_v1
ffffffff811f2e40 t gnttab_end_foreign_transfer_ref_v2
ffffffff811f2ed0 t gnttab_map_frames_v1
ffffffff811f2f10 T gnttab_free_grant_references
ffffffff811f2fd0 T gnttab_end_foreign_access
ffffffff811f3050 T gnttab_resume
ffffffff811f3170 T gnttab_init
ffffffff811f3360 T gnttab_suspend
ffffffff811f3380 T xen_setup_features
ffffffff811f33d0 T irq_from_evtchn
ffffffff811f33e0 T xen_irq_from_gsi
ffffffff811f3430 T xen_set_callback_via
ffffffff811f3460 t __xen_evtchn_do_upcall
ffffffff811f36a0 T xen_hvm_evtchn_do_upcall
ffffffff811f36b0 t init_evtchn_cpu_bindings
ffffffff811f3750 t info_for_irq
ffffffff811f3770 t cpu_from_evtchn
ffffffff811f37a0 t unmask_evtchn
ffffffff811f3870 T xen_test_irq_shared
ffffffff811f38f0 t bind_evtchn_to_cpu
ffffffff811f39b0 T evtchn_make_refcounted
ffffffff811f3a10 t xen_free_irq
ffffffff811f3aa0 t evtchn_from_irq
ffffffff811f3ae0 T xen_clear_irq_pending
ffffffff811f3b10 t enable_dynirq
ffffffff811f3b40 t disable_dynirq
ffffffff811f3b70 t disable_pirq
ffffffff811f3b80 t retrigger_dynirq
ffffffff811f3bd0 t shutdown_pirq
ffffffff811f3c80 T notify_remote_via_irq
ffffffff811f3cd0 t ack_dynirq
ffffffff811f3d20 t mask_ack_dynirq
ffffffff811f3d30 T evtchn_get
ffffffff811f3da0 t xen_irq_init
ffffffff811f3e20 t xen_allocate_irq_dynamic
ffffffff811f3e80 t set_affinity_irq
ffffffff811f3f40 t pirq_from_irq
ffffffff811f3f70 t pirq_check_eoi_map
ffffffff811f3f90 T xen_pirq_from_irq
ffffffff811f3fa0 t pirq_query_unmask
ffffffff811f4050 t eoi_pirq
ffffffff811f4100 t mask_ack_pirq
ffffffff811f4120 t __startup_pirq
ffffffff811f4250 t startup_pirq
ffffffff811f4260 t enable_pirq
ffffffff811f4270 t pirq_needs_eoi_flag
ffffffff811f4290 t virq_from_irq
ffffffff811f42c0 t ipi_from_irq
ffffffff811f42e0 t unbind_from_irq
ffffffff811f4440 T evtchn_put
ffffffff811f4470 T unbind_from_irqhandler
ffffffff811f4480 T xen_poll_irq_timeout
ffffffff811f44d0 t xen_irq_info_common_init.constprop.10
ffffffff811f4500 T bind_evtchn_to_irq
ffffffff811f45a0 T bind_interdomain_evtchn_to_irqhandler
ffffffff811f4680 T bind_evtchn_to_irqhandler
ffffffff811f4710 T xen_bind_pirq_gsi_to_irq
ffffffff811f4960 T xen_allocate_pirq_msi
ffffffff811f4a00 T xen_bind_pirq_msi_to_irq
ffffffff811f4b00 T xen_destroy_irq
ffffffff811f4c30 T xen_irq_from_pirq
ffffffff811f4ca0 T bind_virq_to_irq
ffffffff811f4f10 T bind_virq_to_irqhandler
ffffffff811f4f90 T bind_ipi_to_irqhandler
ffffffff811f5190 T xen_send_IPI_one
ffffffff811f51c0 T xen_debug_interrupt
ffffffff811f55d0 T xen_evtchn_do_upcall
ffffffff811f5610 T rebind_evtchn_irq
ffffffff811f56b0 T resend_irq_on_evtchn
ffffffff811f5700 T xen_set_irq_pending
ffffffff811f5730 T xen_test_irq_pending
ffffffff811f5760 T xen_poll_irq
ffffffff811f5770 T xen_irq_resume
ffffffff811f5be0 T xen_callback_vector
ffffffff811f5ce0 T xen_setup_shutdown_event
ffffffff811f5d10 t do_suspend
ffffffff811f5eb0 t xen_suspend
ffffffff811f5f50 t xen_post_suspend
ffffffff811f5f70 t xen_pre_suspend
ffffffff811f5f90 t xen_hvm_post_suspend
ffffffff811f5fb0 t do_reboot
ffffffff811f5fc0 t do_poweroff
ffffffff811f5fe0 t shutdown_event
ffffffff811f6010 t shutdown_handler
ffffffff811f6160 t balloon_retrieve
ffffffff811f61d0 T balloon_set_new_target
ffffffff811f61f0 T free_xenballooned_pages
ffffffff811f62a0 t decrease_reservation
ffffffff811f6580 t balloon_process
ffffffff811f68e0 T alloc_xenballooned_pages
ffffffff811f69f0 T xenbus_strstate
ffffffff811f6a10 T xenbus_map_ring_valloc
ffffffff811f6a20 T xenbus_unmap_ring_vfree
ffffffff811f6a30 T xenbus_read_driver_state
ffffffff811f6a70 t xenbus_va_dev_error
ffffffff811f6ba0 T xenbus_dev_error
ffffffff811f6be0 t xenbus_unmap_ring_vfree_pv
ffffffff811f6d50 T xenbus_free_evtchn
ffffffff811f6de0 t __xenbus_switch_state
ffffffff811f6f00 T xenbus_switch_state
ffffffff811f6f10 T xenbus_dev_fatal
ffffffff811f6f60 t xenbus_map_ring_valloc_pv
ffffffff811f70f0 T xenbus_bind_evtchn
ffffffff811f71b0 T xenbus_alloc_evtchn
ffffffff811f7260 T xenbus_grant_ring
ffffffff811f72b0 T xenbus_frontend_closed
ffffffff811f72d0 t xenbus_switch_fatal
ffffffff811f7340 T xenbus_watch_path
ffffffff811f73c0 T xenbus_watch_pathfmt
ffffffff811f7490 T xenbus_unmap_ring
ffffffff811f7520 t xenbus_unmap_ring_vfree_hvm
ffffffff811f76a0 T xenbus_map_ring
ffffffff811f7760 t xenbus_map_ring_valloc_hvm
ffffffff811f78c0 t wake_waiting
ffffffff811f7910 T xb_write
ffffffff811f7b20 T xb_data_to_read
ffffffff811f7b40 T xb_wait_for_data_to_read
ffffffff811f7c10 T xb_read
ffffffff811f7d50 T xb_init_comms
ffffffff811f7e20 t transaction_start
ffffffff811f7e50 t xs_error
ffffffff811f7e70 t split
ffffffff811f7f60 t find_watch
ffffffff811f7fb0 t xenbus_thread
ffffffff811f8240 t xenwatch_thread
ffffffff811f83a0 t read_reply
ffffffff811f84f0 t xs_talkv
ffffffff811f86c0 t xs_watch
ffffffff811f8720 t xs_single
ffffffff811f8780 T unregister_xenbus_watch
ffffffff811f8970 T register_xenbus_watch
ffffffff811f8a70 t join
ffffffff811f8ac0 T xenbus_rm
ffffffff811f8b10 T xenbus_mkdir
ffffffff811f8b60 T xenbus_write
ffffffff811f8c00 T xenbus_printf
ffffffff811f8cc0 T xenbus_read
ffffffff811f8d10 T xenbus_gather
ffffffff811f8e80 T xenbus_scanf
ffffffff811f8f00 T xenbus_directory
ffffffff811f8f90 T xenbus_exists
ffffffff811f8fc0 t transaction_end
ffffffff811f8ff0 T xenbus_dev_request_and_reply
ffffffff811f90a0 T xenbus_transaction_end
ffffffff811f90e0 T xenbus_transaction_start
ffffffff811f9140 T xs_suspend
ffffffff811f91f0 T xs_resume
ffffffff811f9280 T xs_suspend_cancel
ffffffff811f92c0 T xs_init
ffffffff811f9420 T xenbus_dev_cancel
ffffffff811f9430 T xenbus_dev_suspend
ffffffff811f9480 t modalias_show
ffffffff811f94b0 t devtype_show
ffffffff811f94e0 t nodename_show
ffffffff811f9510 T xenbus_probe
ffffffff811f9530 T unregister_xenstore_notifier
ffffffff811f9540 t talk_to_otherend
ffffffff811f95b0 t xenbus_dev_release
ffffffff811f95d0 T xenbus_dev_resume
ffffffff811f96f0 t cleanup_dev
ffffffff811f9790 T xenbus_probe_devices
ffffffff811f9880 t cmp_dev
ffffffff811f98d0 T xenbus_match
ffffffff811f9930 T xenbus_unregister_driver
ffffffff811f9940 T xenbus_register_driver_common
ffffffff811f9960 T xenbus_dev_remove
ffffffff811f99f0 T xenbus_dev_shutdown
ffffffff811f9a90 T xenbus_dev_probe
ffffffff811f9bc0 T xenbus_otherend_changed
ffffffff811f9cb0 T xenbus_read_otherend_details
ffffffff811f9d80 T register_xenstore_notifier
ffffffff811f9dc0 T xenbus_probe_node
ffffffff811f9f20 T xenbus_device_find
ffffffff811f9f60 T xenbus_dev_changed
ffffffff811fa100 t backend_probe_and_watch
ffffffff811fa130 t backend_changed
ffffffff811fa150 t xenbus_uevent_backend
ffffffff811fa210 t frontend_changed
ffffffff811fa220 t xenbus_probe_backend
ffffffff811fa330 t backend_bus_id
ffffffff811fa4b0 T xenbus_register_backend
ffffffff811fa4d0 t read_frontend_details
ffffffff811fa4f0 T xenbus_dev_is_online
ffffffff811fa530 t xenbus_file_poll
ffffffff811fa580 t free_watch_adapter
ffffffff811fa5a0 t queue_cleanup
ffffffff811fa5f0 t xenbus_file_release
ffffffff811fa730 t xenbus_file_open
ffffffff811fa800 t xenbus_file_read
ffffffff811faa20 t queue_reply
ffffffff811faab0 t watch_fired
ffffffff811fac50 t xenbus_file_write
ffffffff811fb170 t xenbus_backend_ioctl
ffffffff811fb1d0 t xenbus_backend_open
ffffffff811fb230 t xenbus_backend_mmap
ffffffff811fb2c0 t xenbus_uevent_frontend
ffffffff811fb2f0 t backend_changed
ffffffff811fb300 t is_device_connecting
ffffffff811fb3a0 t non_essential_device_connecting
ffffffff811fb3b0 t essential_device_connecting
ffffffff811fb3c0 t xenbus_probe_frontend
ffffffff811fb450 t wait_loop
ffffffff811fb4d0 t wait_for_devices
ffffffff811fb5c0 t frontend_changed
ffffffff811fb5e0 t frontend_probe_and_watch
ffffffff811fb920 t xenbus_reset_backend_state_changed
ffffffff811fb980 T xenbus_register_frontend
ffffffff811fb9c0 t read_backend_details
ffffffff811fb9e0 t frontend_bus_id
ffffffff811fba90 t print_device_status
ffffffff811fbb00 T xen_biovec_phys_mergeable
ffffffff811fbc70 t vcpu_online
ffffffff811fbd50 t handle_vcpu_hotplug_event
ffffffff811fbe20 t setup_cpu_watcher
ffffffff811fbe90 t balloon_exit
ffffffff811fbea0 t watch_target
ffffffff811fbef0 t show_high_kb
ffffffff811fbf20 t show_low_kb
ffffffff811fbf50 t show_current_kb
ffffffff811fbf80 t show_target
ffffffff811fbfb0 t show_target_kb
ffffffff811fbfe0 t store_target
ffffffff811fc040 t store_target_kb
ffffffff811fc0a0 t balloon_init_watcher
ffffffff811fc0d0 t selfballoon_process
ffffffff811fc210 T register_xen_selfballooning
ffffffff811fc220 t store_selfballoon_min_usable_mb
ffffffff811fc280 t store_selfballoon_uphys
ffffffff811fc2e0 t store_selfballoon_downhys
ffffffff811fc340 t store_selfballoon_interval
ffffffff811fc3a0 t store_selfballooning
ffffffff811fc450 t show_selfballoon_min_usable_mb
ffffffff811fc480 t show_selfballoon_uphys
ffffffff811fc4b0 t show_selfballoon_downhys
ffffffff811fc4e0 t show_selfballoon_interval
ffffffff811fc510 t show_selfballooning
ffffffff811fc540 t hyp_sysfs_show
ffffffff811fc560 t hyp_sysfs_store
ffffffff811fc580 t type_show
ffffffff811fc590 t minor_show
ffffffff811fc5d0 t major_show
ffffffff811fc610 t features_show
ffffffff811fc670 t pagesize_show
ffffffff811fc6b0 t extra_show
ffffffff811fc740 t compile_date_show
ffffffff811fc7d0 t compiled_by_show
ffffffff811fc860 t compiler_show
ffffffff811fc8f0 t virtual_start_show
ffffffff811fc980 t changeset_show
ffffffff811fca10 t capabilities_show
ffffffff811fcaa0 t uuid_show
ffffffff811fcb50 t do_hvm_evtchn_intr
ffffffff811fcb70 t platform_pci_resume
ffffffff811fcbd0 T alloc_xen_mmio
ffffffff811fcc00 t tmem_cleancache_flush_page
ffffffff811fcc70 t tmem_cleancache_flush_inode
ffffffff811fcce0 t tmem_cleancache_flush_fs
ffffffff811fcd50 t tmem_cleancache_init_fs
ffffffff811fcdb0 t tmem_cleancache_init_shared_fs
ffffffff811fce10 t tmem_cleancache_put_page
ffffffff811fcf10 t tmem_cleancache_get_page
ffffffff811fd030 T xen_swiotlb_dma_mapping_error
ffffffff811fd040 t xen_phys_to_bus
ffffffff811fd090 t xen_virt_to_bus
ffffffff811fd0b0 T xen_swiotlb_dma_supported
ffffffff811fd0d0 t range_straddles_page_boundary
ffffffff811fd1c0 t xen_bus_to_phys
ffffffff811fd290 t is_xen_swiotlb_buffer
ffffffff811fd3d0 T xen_swiotlb_map_page
ffffffff811fd4e0 T xen_swiotlb_free_coherent
ffffffff811fd5a0 T xen_swiotlb_alloc_coherent
ffffffff811fd730 t xen_swiotlb_sync_single
ffffffff811fd7f0 T xen_swiotlb_sync_sg_for_device
ffffffff811fd850 T xen_swiotlb_sync_sg_for_cpu
ffffffff811fd8a0 T xen_swiotlb_sync_single_for_device
ffffffff811fd8b0 T xen_swiotlb_sync_single_for_cpu
ffffffff811fd8c0 t xen_unmap_single
ffffffff811fd970 T xen_swiotlb_unmap_page
ffffffff811fd980 T xen_swiotlb_unmap_sg_attrs
ffffffff811fd9d0 T xen_swiotlb_unmap_sg
ffffffff811fd9e0 T xen_swiotlb_map_sg_attrs
ffffffff811fdb60 T xen_swiotlb_map_sg
ffffffff811fdb70 t xen_add_device
ffffffff811fdee0 t xen_pci_notifier
ffffffff811fe040 t hung_up_tty_read
ffffffff811fe050 t hung_up_tty_write
ffffffff811fe060 t hung_up_tty_poll
ffffffff811fe070 t hung_up_tty_ioctl
ffffffff811fe090 t hung_up_tty_compat_ioctl
ffffffff811fe0b0 T tty_hung_up_p
ffffffff811fe0c0 T tty_pair_get_tty
ffffffff811fe0e0 T tty_pair_get_pty
ffffffff811fe100 t dev_match_devt
ffffffff811fe110 T tty_put_char
ffffffff811fe150 T tty_set_operations
ffffffff811fe160 T tty_devnum
ffffffff811fe180 t tty_devnode
ffffffff811fe1b0 t show_cons_active
ffffffff811fe280 T get_current_tty
ffffffff811fe310 T tty_get_pgrp
ffffffff811fe360 T tty_register_device
ffffffff811fe470 t check_tty_count
ffffffff811fe520 T tty_free_termios
ffffffff811fe560 T tty_shutdown
ffffffff811fe5b0 T __alloc_tty_driver
ffffffff811fe600 T tty_unregister_driver
ffffffff811fe680 T tty_unregister_device
ffffffff811fe6a0 T tty_register_driver
ffffffff811fe990 t destruct_tty_driver
ffffffff811fea50 T tty_driver_kref_put
ffffffff811fea70 T put_tty_driver
ffffffff811fea80 T do_SAK
ffffffff811feaa0 T tty_hangup
ffffffff811feab0 t queue_release_one_tty
ffffffff811feb10 T tty_kref_put
ffffffff811feb30 t release_tty
ffffffff811feb70 t __proc_set_tty
ffffffff811fecd0 T stop_tty
ffffffff811feda0 T tty_init_termios
ffffffff811fee90 T tty_wakeup
ffffffff811fef10 T start_tty
ffffffff811fefe0 T tty_standard_install
ffffffff811ff050 T tty_check_change
ffffffff811ff160 T tty_name
ffffffff811ff1a0 T alloc_tty_struct
ffffffff811ff1c0 T free_tty_struct
ffffffff811ff1f0 t release_one_tty
ffffffff811ff2b0 T tty_alloc_file
ffffffff811ff2e0 T tty_add_file
ffffffff811ff350 T tty_free_file
ffffffff811ff370 T tty_del_file
ffffffff811ff3f0 T tty_paranoia_check
ffffffff811ff460 t __tty_fasync
ffffffff811ff5d0 t tty_fasync
ffffffff811ff610 t tty_compat_ioctl
ffffffff811ff6f0 t tty_poll
ffffffff811ff790 t tty_read
ffffffff811ff890 T __tty_hangup
ffffffff811ffc30 t do_tty_hangup
ffffffff811ffc40 T tty_vhangup
ffffffff811ffc50 T tty_vhangup_self
ffffffff811ffc80 T tty_write_unlock
ffffffff811ffcb0 T tty_write_lock
ffffffff811ffd10 t tty_write
ffffffff811fff90 T redirected_tty_write
ffffffff81200060 t send_break
ffffffff81200150 T tty_write_message
ffffffff812001e0 T tty_driver_remove_tty
ffffffff81200210 T tty_do_resize
ffffffff812002e0 T __do_SAK
ffffffff812004f0 t do_SAK_work
ffffffff81200500 T initialize_tty_struct
ffffffff812007b0 T tty_init_dev
ffffffff812008e0 T deinitialize_tty_struct
ffffffff812008f0 T proc_clear_tty
ffffffff81200960 t session_clear_tty
ffffffff81200990 T tty_release
ffffffff81200ef0 t tty_open
ffffffff812014d0 T disassociate_ctty
ffffffff81201730 T no_tty
ffffffff81201760 T tty_ioctl
ffffffff812022b0 T tty_default_fops
ffffffff81202310 T console_sysfs_notify
ffffffff81202340 t add_echo_byte
ffffffff812023e0 T n_tty_inherit_ops
ffffffff81202410 t put_tty_queue
ffffffff812024a0 t n_tty_chars_in_buffer
ffffffff81202540 t echo_char_raw
ffffffff812025c0 t echo_set_canon_col
ffffffff81202610 t echo_char
ffffffff812026d0 t do_output_char
ffffffff812028c0 t process_output
ffffffff81202920 t n_tty_poll
ffffffff81202af0 t copy_from_read_buf
ffffffff81202c40 t n_tty_write_wakeup
ffffffff81202c80 t process_echoes
ffffffff81202f60 t n_tty_write
ffffffff812033c0 t n_tty_set_room
ffffffff81203430 t n_tty_set_termios
ffffffff81203810 t reset_buffer_flags
ffffffff81203950 t n_tty_flush_buffer
ffffffff812039f0 t n_tty_receive_buf
ffffffff81204b30 t n_tty_close
ffffffff81204b80 t n_tty_open
ffffffff81204c40 t n_tty_ioctl
ffffffff81204cf0 t n_tty_read
ffffffff81205530 T is_ignored
ffffffff81205570 T tty_chars_in_buffer
ffffffff81205590 T tty_driver_flush_buffer
ffffffff812055b0 T tty_termios_baud_rate
ffffffff81205600 T tty_termios_input_baud_rate
ffffffff81205660 T tty_termios_encode_baud_rate
ffffffff81205790 T tty_encode_baud_rate
ffffffff812057a0 T tty_termios_copy_hw
ffffffff812057d0 t send_prio_char
ffffffff81205890 t copy_termios
ffffffff81205900 t tty_change_softcar
ffffffff812059b0 T tty_unthrottle
ffffffff81205a10 T tty_throttle
ffffffff81205a70 t get_termio
ffffffff81205b10 T tty_set_termios
ffffffff81205e00 T tty_wait_until_sent
ffffffff81205f20 t set_termiox
ffffffff81206000 T tty_write_room
ffffffff81206020 T tty_termios_hw_change
ffffffff81206050 T tty_perform_flush
ffffffff81206110 T tty_get_baud_rate
ffffffff81206160 t set_termios
ffffffff81206380 T tty_mode_ioctl
ffffffff81206850 T n_tty_compat_ioctl_helper
ffffffff81206880 T n_tty_ioctl_helper
ffffffff81206a80 t tty_ldiscs_seq_start
ffffffff81206aa0 t tty_ldiscs_seq_next
ffffffff81206ac0 t tty_ldiscs_seq_stop
ffffffff81206ad0 t proc_tty_ldiscs_open
ffffffff81206ae0 t tty_ldisc_try
ffffffff81206b40 T tty_ldisc_ref
ffffffff81206b50 T tty_unregister_ldisc
ffffffff81206bc0 T tty_register_ldisc
ffffffff81206c30 t get_ldops
ffffffff81206cb0 t tty_ldiscs_seq_show
ffffffff81206d50 t put_ldisc
ffffffff81206e10 T tty_ldisc_deref
ffffffff81206e20 t tty_set_termios_ldisc
ffffffff81206e70 t tty_ldisc_halt
ffffffff81206e90 t tty_ldisc_flush_works
ffffffff81206ec0 T tty_ldisc_flush
ffffffff81206f10 t tty_ldisc_close.isra.3
ffffffff81206f80 t tty_ldisc_open.isra.4
ffffffff81206ff0 t tty_ldisc_get
ffffffff812070d0 t tty_ldisc_reinit
ffffffff81207140 t tty_ldisc_restore.isra.6
ffffffff81207210 t tty_ldisc_wait_idle.isra.7
ffffffff812072b0 T tty_ldisc_ref_wait
ffffffff81207350 T tty_ldisc_enable
ffffffff81207380 T tty_set_ldisc
ffffffff81207670 T tty_ldisc_hangup
ffffffff81207940 T tty_ldisc_setup
ffffffff812079d0 T tty_ldisc_release
ffffffff81207a50 T tty_ldisc_init
ffffffff81207a80 T tty_ldisc_deinit
ffffffff81207aa0 T tty_ldisc_begin
ffffffff81207ab0 t __tty_buffer_flush
ffffffff81207b40 t flush_to_ldisc
ffffffff81207d00 T tty_schedule_flip
ffffffff81207d60 T tty_buffer_request_room
ffffffff81207f10 T tty_prepare_flip_string_flags
ffffffff81207f70 T tty_prepare_flip_string
ffffffff81207fe0 T tty_insert_flip_string_flags
ffffffff812080c0 T tty_insert_flip_string_fixed_flag
ffffffff81208190 T tty_flip_buffer_push
ffffffff81208210 T tty_buffer_free_all
ffffffff81208280 T tty_buffer_flush
ffffffff81208370 T tty_flush_to_ldisc
ffffffff81208380 T tty_buffer_init
ffffffff812083e0 T tty_port_carrier_raised
ffffffff81208400 T tty_port_raise_dtr_rts
ffffffff81208420 T tty_port_lower_dtr_rts
ffffffff81208440 t tty_port_shutdown
ffffffff812084a0 T tty_port_close_end
ffffffff81208570 T tty_port_close_start
ffffffff81208780 T tty_port_block_til_ready
ffffffff81208a50 T tty_port_hangup
ffffffff81208b00 T tty_port_tty_get
ffffffff81208b70 T tty_port_tty_set
ffffffff81208c00 T tty_port_open
ffffffff81208d00 T tty_port_free_xmit_buf
ffffffff81208d60 t tty_port_destructor
ffffffff81208dd0 T tty_port_put
ffffffff81208e00 T tty_port_alloc_xmit_buf
ffffffff81208e70 T tty_port_init
ffffffff81208fb0 T tty_port_close
ffffffff81209020 t pty_chars_in_buffer
ffffffff81209030 t pty_open
ffffffff81209090 t pty_set_termios
ffffffff812090b0 t ptm_unix98_lookup
ffffffff812090c0 t ptm_unix98_remove
ffffffff812090d0 t pts_unix98_remove
ffffffff812090e0 t pts_unix98_lookup
ffffffff81209110 t pty_flush_buffer
ffffffff812091c0 t pty_unthrottle
ffffffff812091e0 t pty_write
ffffffff81209260 t pty_unix98_shutdown
ffffffff81209280 t pty_close
ffffffff812093c0 t pty_unix98_install
ffffffff812095c0 T pty_resize
ffffffff81209700 t pty_write_room
ffffffff81209730 t pty_unix98_ioctl
ffffffff81209820 t ptmx_open
ffffffff81209960 t tty_audit_log
ffffffff81209a60 t tty_audit_buf_push_current
ffffffff81209ad0 t tty_audit_buf_free
ffffffff81209b00 t tty_audit_buf_put
ffffffff81209b20 T tty_audit_exit
ffffffff81209bb0 T tty_audit_fork
ffffffff81209c20 T tty_audit_tiocsti
ffffffff81209d70 T tty_audit_push_task
ffffffff81209ee0 T tty_audit_add_data
ffffffff8120a240 T tty_audit_push
ffffffff8120a370 T pm_set_vt_switch
ffffffff8120a390 t vt_event_wait
ffffffff8120a4b0 t vt_event_wait_ioctl
ffffffff8120a520 T vt_event_post
ffffffff8120a5e0 T vt_waitactive
ffffffff8120a630 T reset_vc
ffffffff8120a6c0 t complete_change_console
ffffffff8120a7b0 T vt_ioctl
ffffffff8120b9e0 T vc_SAK
ffffffff8120ba10 T vt_compat_ioctl
ffffffff8120be30 T change_console
ffffffff8120bef0 T vt_move_to_console
ffffffff8120bf90 t vcs_release
ffffffff8120bfc0 t vcs_open
ffffffff8120c010 t vcs_vc
ffffffff8120c090 t vcs_size
ffffffff8120c100 t vcs_lseek
ffffffff8120c1b0 t vcs_write
ffffffff8120c780 t vcs_read
ffffffff8120cbe0 t vcs_poll_data_get.part.3
ffffffff8120ccc0 t vcs_fasync
ffffffff8120cd20 t vcs_poll
ffffffff8120cda0 t vcs_notifier
ffffffff8120ce10 T vcs_make_sysfs
ffffffff8120ce70 T vcs_remove_sysfs
ffffffff8120ceb0 t sel_pos
ffffffff8120cee0 T clear_selection
ffffffff8120cf30 T sel_loadlut
ffffffff8120cf90 T set_selection
ffffffff8120d610 T paste_selection
ffffffff8120d770 t fn_compose
ffffffff8120d780 t k_ignore
ffffffff8120d790 t kbd_bh
ffffffff8120d7f0 t kd_nosound
ffffffff8120d810 t kbd_disconnect
ffffffff8120d830 t kbd_connect
ffffffff8120d8d0 t fn_SAK
ffffffff8120d8f0 t fn_send_intr
ffffffff8120d9a0 T vt_get_leds
ffffffff8120da00 t k_lowercase
ffffffff8120da10 t k_cons
ffffffff8120da30 t fn_lastcons
ffffffff8120da40 t fn_spawn_con
ffffffff8120dab0 t fn_inc_console
ffffffff8120db10 t fn_dec_console
ffffffff8120db70 t fn_boot_it
ffffffff8120db80 t fn_scroll_back
ffffffff8120db90 t fn_scroll_forw
ffffffff8120dba0 t fn_hold
ffffffff8120dbe0 t fn_show_state
ffffffff8120dbf0 t fn_show_mem
ffffffff8120dc00 t fn_show_ptregs
ffffffff8120dc20 t do_compute_shiftstate
ffffffff8120dce0 t fn_null
ffffffff8120dcf0 t kbd_update_leds_helper
ffffffff8120dd80 t kbd_start
ffffffff8120ddc0 t kbd_rate_helper
ffffffff8120de40 t getkeycode_helper
ffffffff8120de60 t setkeycode_helper
ffffffff8120de80 T unregister_keyboard_notifier
ffffffff8120de90 T register_keyboard_notifier
ffffffff8120dea0 t put_queue.isra.1
ffffffff8120df50 t puts_queue.isra.2
ffffffff8120e010 t applkey
ffffffff8120e040 t to_utf8
ffffffff8120e110 t k_shift
ffffffff8120e200 t handle_diacr
ffffffff8120e2f0 t k_dead2
ffffffff8120e330 t k_dead
ffffffff8120e370 t kbd_event
ffffffff8120ea40 t fn_caps_toggle
ffffffff8120ea70 t fn_caps_on
ffffffff8120ea90 t fn_bare_num
ffffffff8120eac0 t fn_num
ffffffff8120eae0 t k_spec
ffffffff8120eb30 t k_fn
ffffffff8120eb60 t k_cur
ffffffff8120eb90 t k_pad
ffffffff8120ed30 t k_meta
ffffffff8120eda0 t k_ascii
ffffffff8120edf0 t k_lock
ffffffff8120ee20 t k_slock
ffffffff8120ee90 t kbd_match
ffffffff8120ef00 t k_unicode.part.17
ffffffff8120ef70 t k_self
ffffffff8120efc0 t fn_enter
ffffffff8120f050 t kd_sound_helper
ffffffff8120f0f0 T kd_mksound
ffffffff8120f160 t k_brlcommit.constprop.24
ffffffff8120f1f0 t k_brl
ffffffff8120f390 T kbd_rate
ffffffff8120f3d0 T compute_shiftstate
ffffffff8120f400 T getledstate
ffffffff8120f410 T setledstate
ffffffff8120f4a0 T vt_set_led_state
ffffffff8120f4b0 T vt_kbd_con_start
ffffffff8120f4e0 T vt_kbd_con_stop
ffffffff8120f510 T vt_do_diacrit
ffffffff8120f920 T vt_do_kdskbmode
ffffffff8120fa20 T vt_do_kdskbmeta
ffffffff8120faa0 T vt_do_kbkeycode_ioctl
ffffffff8120fcd0 T vt_do_kdsk_ioctl
ffffffff81210030 T vt_do_kdgkb_ioctl
ffffffff81210490 T vt_do_kdskled
ffffffff81210610 T vt_do_kdgkbmode
ffffffff81210640 T vt_do_kdgkbmeta
ffffffff81210660 T vt_reset_unicode
ffffffff812106b0 T vt_get_shift_state
ffffffff812106c0 T vt_reset_keyboard
ffffffff81210730 T vt_get_kbd_mode_bit
ffffffff81210750 T vt_set_kbd_mode_bit
ffffffff812107b0 T vt_clr_kbd_mode_bit
ffffffff81210820 t con_release_unimap
ffffffff812108f0 t con_insert_unipair
ffffffff81210a80 T inverse_translate
ffffffff81210af0 t set_inverse_trans_unicode.isra.2
ffffffff81210c20 t con_unify_unimap.isra.3
ffffffff81210d80 T set_translate
ffffffff81210da0 T con_get_trans_new
ffffffff81210e00 T con_free_unimap
ffffffff81210e50 T con_copy_unimap
ffffffff81210ec0 T con_clear_unimap
ffffffff81210f90 T con_get_unimap
ffffffff81211080 T con_protect_unimap
ffffffff812110a0 T conv_8bit_to_uni
ffffffff812110c0 T conv_uni_to_8bit
ffffffff81211110 T conv_uni_to_pc
ffffffff812111c0 t set_inverse_transl
ffffffff81211300 T con_set_default_unimap
ffffffff81211480 T con_set_unimap
ffffffff812116a0 t update_user_maps
ffffffff81211720 T con_set_trans_new
ffffffff81211780 T con_set_trans_old
ffffffff812117f0 T con_get_trans_old
ffffffff81211890 t do_update_region
ffffffff81211a00 t add_softcursor
ffffffff81211ab0 t gotoxy
ffffffff81211b60 t gotoxay
ffffffff81211b80 t vt_console_device
ffffffff81211ba0 t con_write_room
ffffffff81211bc0 t con_chars_in_buffer
ffffffff81211bd0 t con_throttle
ffffffff81211be0 t con_close
ffffffff81211bf0 T con_is_bound
ffffffff81211c20 T con_debug_enter
ffffffff81211c90 T con_debug_leave
ffffffff81211d00 T screen_glyph
ffffffff81211d40 t vtconsole_init_device
ffffffff81211db0 t show_name
ffffffff81211df0 t show_bind
ffffffff81211e40 t visual_init
ffffffff81211f50 T register_con_driver
ffffffff81212080 t set_origin
ffffffff81212140 t hide_cursor
ffffffff812121d0 t scrup
ffffffff812122f0 t notify_write
ffffffff81212320 t lf
ffffffff81212380 t notify_update
ffffffff812123b0 T do_blank_screen
ffffffff81212630 T unregister_con_driver
ffffffff81212720 T give_up_console
ffffffff81212730 t con_start
ffffffff81212760 t con_stop
ffffffff81212790 t con_unthrottle
ffffffff812127b0 t respond_string
ffffffff81212880 t show_tty_active
ffffffff812128b0 T unregister_vt_notifier
ffffffff812128c0 T register_vt_notifier
ffffffff812128d0 t build_attr
ffffffff812129b0 t update_attr
ffffffff81212a60 t insert_char
ffffffff81212bc0 t set_palette
ffffffff81212c20 t set_get_cmap
ffffffff81212d80 t set_cursor
ffffffff81212e00 T redraw_screen
ffffffff81213090 t bind_con_driver
ffffffff81213430 T take_over_console
ffffffff812134a0 T unbind_con_driver
ffffffff812136b0 t store_bind
ffffffff812138e0 t csi_J
ffffffff81213b30 t reset_terminal
ffffffff81213d80 t vc_init
ffffffff81213e50 t vc_do_resize
ffffffff812142b0 t vt_resize
ffffffff81214310 T vc_resize
ffffffff81214320 t vt_console_print
ffffffff812146f0 t con_flush_chars
ffffffff81214740 T update_region
ffffffff812147f0 t blank_screen_t
ffffffff81214840 t con_shutdown
ffffffff81214880 T do_unblank_screen
ffffffff81214a40 T unblank_screen
ffffffff81214a50 T schedule_console_callback
ffffffff81214a60 T invert_screen
ffffffff81214c80 t set_mode
ffffffff81214f00 T complement_pos
ffffffff81215080 T vc_cons_allocated
ffffffff812150b0 T vc_allocate
ffffffff81215270 t con_open
ffffffff81215350 T vc_deallocate
ffffffff81215450 T scrollback
ffffffff81215470 T scrollfront
ffffffff81215490 T mouse_report
ffffffff812154e0 T mouse_reporting
ffffffff81215510 T set_console
ffffffff81215580 T vt_kmsg_redirect
ffffffff812155a0 T tioclinux
ffffffff81215830 T poke_blanked_console
ffffffff81215900 t console_callback
ffffffff81215a40 T con_set_cmap
ffffffff81215a70 T con_get_cmap
ffffffff81215aa0 T reset_palette
ffffffff81215ae0 t do_con_write.part.17
ffffffff81217960 t con_put_char
ffffffff812179a0 t con_write
ffffffff812179f0 T con_font_op
ffffffff81217e50 T screen_pos
ffffffff81217e90 T getconsxy
ffffffff81217eb0 T putconsxy
ffffffff81217ef0 T vcs_scr_readw
ffffffff81217f10 T vcs_scr_writew
ffffffff81217f30 T vcs_scr_updated
ffffffff81217f40 t hvc_console_device
ffffffff81217f60 t hvc_write_room
ffffffff81217f80 t hvc_chars_in_buffer
ffffffff81217fa0 t hvc_tiocmget
ffffffff81217fd0 t hvc_tiocmset
ffffffff81218000 t hvc_console_print
ffffffff81218110 t hvc_push
ffffffff81218190 t hvc_get_by_index
ffffffff81218250 t destroy_hvc_struct
ffffffff812182e0 T hvc_remove
ffffffff81218380 T hvc_kick
ffffffff812183a0 t hvc_unthrottle
ffffffff812183b0 t hvc_open
ffffffff81218530 t hvc_hangup
ffffffff81218600 t hvc_write
ffffffff812186e0 T hvc_alloc
ffffffff81218980 t hvc_set_winsz
ffffffff81218a30 T __hvc_resize
ffffffff81218a40 T hvc_poll
ffffffff81218c70 t khvcd
ffffffff81218db0 T hvc_instantiate
ffffffff81218e70 t hvc_close
ffffffff81218f80 t hvc_handle_interrupt
ffffffff81218fa0 T notifier_add_irq
ffffffff81218fe0 T notifier_del_irq
ffffffff81219010 T notifier_hangup_irq
ffffffff81219020 t dom0_read_console
ffffffff81219040 t dom0_write_console
ffffffff81219060 t domU_write_console
ffffffff812191f0 t domU_read_console
ffffffff81219300 t xen_hvm_console_init
ffffffff81219490 t xen_pv_console_init
ffffffff81219630 t xencons_disconnect_backend
ffffffff812196a0 t xencons_connect_backend
ffffffff81219920 t xencons_resume
ffffffff812199d0 t xenboot_write_console
ffffffff81219ab0 t xen_cons_init
ffffffff81219b20 t xen_console_remove
ffffffff81219bd0 t xencons_remove
ffffffff81219bf0 t xencons_backend_changed
ffffffff81219c20 T xen_console_resume
ffffffff81219c90 T xen_raw_console_write
ffffffff81219cb0 T xen_raw_printk
ffffffff81219d20 t read_null
ffffffff81219d30 t write_null
ffffffff81219d40 t pipe_to_null
ffffffff81219d50 t write_full
ffffffff81219d60 t null_lseek
ffffffff81219d70 t memory_open
ffffffff81219df0 t mem_devnode
ffffffff81219e20 t write_port
ffffffff81219ec0 t read_port
ffffffff81219f60 t kmsg_writev
ffffffff8121a040 t mmap_zero
ffffffff8121a060 t splice_write_null
ffffffff8121a070 t open_port
ffffffff8121a090 t memory_lseek
ffffffff8121a120 t read_zero
ffffffff8121a1f0 t write_mem
ffffffff8121a2d0 t read_mem
ffffffff8121a3b0 t mmap_mem
ffffffff8121a450 t random_poll
ffffffff8121a4d0 t mix_pool_bytes_extract
ffffffff8121a640 t account
ffffffff8121a770 t extract_buf
ffffffff8121a8a0 t random_fasync
ffffffff8121a8b0 t write_pool
ffffffff8121a940 t init_std_data
ffffffff8121aa00 t rand_initialize
ffffffff8121aa30 t credit_entropy_bits.part.3
ffffffff8121aae0 t add_timer_randomness
ffffffff8121ac00 t random_ioctl
ffffffff8121ad90 T add_input_randomness
ffffffff8121add0 t xfer_secondary_pool.part.5
ffffffff8121ae60 t xfer_secondary_pool
ffffffff8121aea0 t extract_entropy
ffffffff8121af30 T get_random_bytes
ffffffff8121afc0 T generate_random_uuid
ffffffff8121aff0 t proc_do_uuid
ffffffff8121b110 t extract_entropy_user
ffffffff8121b200 t urandom_read
ffffffff8121b210 t random_write
ffffffff8121b270 t random_read
ffffffff8121b3a0 T add_interrupt_randomness
ffffffff8121b3d0 T add_disk_randomness
ffffffff8121b400 T rand_initialize_irq
ffffffff8121b460 T rand_initialize_disk
ffffffff8121b490 T get_random_int
ffffffff8121b510 T randomize_range
ffffffff8121b570 t misc_seq_stop
ffffffff8121b580 t misc_open
ffffffff8121b710 t misc_seq_open
ffffffff8121b720 t misc_seq_show
ffffffff8121b750 t misc_seq_next
ffffffff8121b760 t misc_seq_start
ffffffff8121b780 t misc_devnode
ffffffff8121b7c0 T misc_deregister
ffffffff8121b870 T misc_register
ffffffff8121b9a0 t nvram_llseek
ffffffff8121b9f0 t nvram_open
ffffffff8121ba80 t nvram_release
ffffffff8121bad0 t nvram_proc_open
ffffffff8121baf0 T __nvram_write_byte
ffffffff8121bb00 T nvram_write_byte
ffffffff8121bb50 T __nvram_read_byte
ffffffff8121bb60 t __nvram_set_checksum
ffffffff8121bbb0 t nvram_ioctl
ffffffff8121bc70 T __nvram_check_checksum
ffffffff8121bcc0 t nvram_write
ffffffff8121be10 T nvram_check_checksum
ffffffff8121be60 t nvram_read
ffffffff8121bf60 T nvram_read_byte
ffffffff8121bfb0 t nvram_proc_read
ffffffff8121c320 T vga_client_register
ffffffff8121c3d0 t vga_arb_open
ffffffff8121c470 t __vga_tryget
ffffffff8121c670 t __vga_put
ffffffff8121c730 t __vga_set_legacy_decoding
ffffffff8121c930 T vga_set_legacy_decoding
ffffffff8121c940 T vga_put
ffffffff8121c9c0 t vga_arb_release
ffffffff8121cac0 t vga_arb_read
ffffffff8121cc80 t vga_arb_fpoll
ffffffff8121ccd0 t vga_arbiter_notify_clients.part.8
ffffffff8121ce90 T vga_tryget
ffffffff8121cf70 T vga_get
ffffffff8121d120 t vga_str_to_iostate.isra.10
ffffffff8121d1b0 t vga_arb_write
ffffffff8121d710 t vga_arbiter_add_pci_device.part.13
ffffffff8121da30 t pci_notify
ffffffff8121dba0 T vga_default_device
ffffffff8121dbb0 t dev_attr_store
ffffffff8121dbd0 t device_namespace
ffffffff8121dc00 t dev_uevent_filter
ffffffff8121dc30 t class_dir_child_ns_type
ffffffff8121dc40 t __match_devt
ffffffff8121dc50 T get_device
ffffffff8121dc70 t klist_children_get
ffffffff8121dc80 T put_device
ffffffff8121dca0 t klist_children_put
ffffffff8121dcb0 t class_dir_release
ffffffff8121dcc0 t device_create_release
ffffffff8121dcd0 t root_device_release
ffffffff8121dce0 T device_rename
ffffffff8121ddd0 T dev_set_name
ffffffff8121de20 t device_release
ffffffff8121dea0 T device_find_child
ffffffff8121df20 T device_for_each_child
ffffffff8121df80 t show_uevent
ffffffff8121e080 t show_dev
ffffffff8121e0b0 t device_remove_sys_dev_entry
ffffffff8121e110 t device_remove_class_symlinks
ffffffff8121e190 t device_remove_groups
ffffffff8121e1d0 t device_add_groups
ffffffff8121e250 T device_initialize
ffffffff8121e2d0 T device_schedule_callback_owner
ffffffff8121e2f0 T device_remove_bin_file
ffffffff8121e310 t device_remove_bin_attributes
ffffffff8121e360 T device_create_bin_file
ffffffff8121e380 T device_remove_file
ffffffff8121e3a0 t device_remove_attributes
ffffffff8121e3e0 t device_remove_attrs
ffffffff8121e470 T device_create_file
ffffffff8121e490 T device_show_int
ffffffff8121e4c0 T device_show_ulong
ffffffff8121e4f0 T device_store_int
ffffffff8121e560 T device_store_ulong
ffffffff8121e5c0 T dev_driver_string
ffffffff8121e600 t dev_attr_show
ffffffff8121e650 t dev_uevent_name
ffffffff8121e670 T __dev_printk
ffffffff8121e700 T _dev_info
ffffffff8121e760 T dev_notice
ffffffff8121e7c0 T dev_warn
ffffffff8121e820 T dev_err
ffffffff8121e880 t store_uevent
ffffffff8121e8e0 T dev_crit
ffffffff8121e940 T dev_alert
ffffffff8121e9a0 T dev_emerg
ffffffff8121ea00 T dev_printk
ffffffff8121ea50 t get_device_parent.isra.13
ffffffff8121ec00 t cleanup_glue_dir.isra.14
ffffffff8121ec30 T device_move
ffffffff8121ee80 T device_del
ffffffff8121f010 T device_unregister
ffffffff8121f030 T device_destroy
ffffffff8121f070 T root_device_unregister
ffffffff8121f0a0 T device_private_init
ffffffff8121f110 T device_add
ffffffff8121f7b0 T device_register
ffffffff8121f7d0 T device_create_vargs
ffffffff8121f8e0 T device_create
ffffffff8121f920 T __root_device_register
ffffffff8121fa10 T device_get_devnode
ffffffff8121fb30 t dev_uevent
ffffffff8121fc90 T to_root_device
ffffffff8121fca0 T device_shutdown
ffffffff8121fd70 t drv_attr_show
ffffffff8121fd90 t drv_attr_store
ffffffff8121fdc0 t bus_attr_show
ffffffff8121fde0 t bus_attr_store
ffffffff8121fe10 t bus_uevent_filter
ffffffff8121fe20 t store_drivers_autoprobe
ffffffff8121fe50 T bus_get_kset
ffffffff8121fe60 T bus_get_device_klist
ffffffff8121fe70 t klist_devices_put
ffffffff8121fe80 t system_root_device_release
ffffffff8121fe90 t driver_release
ffffffff8121fea0 t bus_put
ffffffff8121fec0 t bus_get
ffffffff8121fee0 T subsys_dev_iter_exit
ffffffff8121fef0 T subsys_dev_iter_next
ffffffff8121ff30 t next_device
ffffffff8121ff50 T subsys_dev_iter_init
ffffffff8121ffa0 T subsys_interface_unregister
ffffffff81220040 T subsys_interface_register
ffffffff81220100 T bus_for_each_drv
ffffffff81220180 T bus_for_each_dev
ffffffff81220200 T bus_rescan_devices
ffffffff81220210 T bus_sort_breadthfirst
ffffffff81220390 T bus_unregister_notifier
ffffffff812203a0 T bus_register_notifier
ffffffff812203b0 t bus_uevent_store
ffffffff81220400 t driver_uevent_store
ffffffff81220450 t bus_rescan_devices_helper
ffffffff812204d0 t show_drivers_autoprobe
ffffffff81220500 t klist_devices_get
ffffffff81220510 T subsys_find_device_by_id
ffffffff812205e0 T bus_find_device
ffffffff81220660 T bus_find_device_by_name
ffffffff81220670 t store_drivers_probe
ffffffff812206b0 T device_reprobe
ffffffff81220700 t driver_unbind
ffffffff812207c0 t driver_bind
ffffffff812208d0 t match_name
ffffffff81220910 T bus_remove_file
ffffffff81220970 T bus_unregister
ffffffff81220a20 T bus_create_file
ffffffff81220a80 T __bus_register
ffffffff81220d70 t device_remove_attrs.isra.5
ffffffff81220db0 T subsys_system_register
ffffffff81220ea0 T bus_add_device
ffffffff81221010 T bus_probe_device
ffffffff812210c0 T bus_remove_device
ffffffff812211d0 T bus_add_driver
ffffffff81221410 T bus_remove_driver
ffffffff812214d0 T dev_get_drvdata
ffffffff81221500 t deferred_probe_work_func
ffffffff81221590 T dev_set_drvdata
ffffffff812215d0 t driver_sysfs_remove
ffffffff81221610 t __device_release_driver
ffffffff812216f0 T device_release_driver
ffffffff81221730 T driver_attach
ffffffff81221750 T wait_for_device_probe
ffffffff812217d0 t driver_deferred_probe_trigger.part.2
ffffffff81221850 t deferred_probe_initcall
ffffffff812218b0 t driver_sysfs_add
ffffffff81221950 T driver_deferred_probe_del
ffffffff812219a0 t driver_bound
ffffffff81221a40 T device_bind_driver
ffffffff81221a70 T device_attach
ffffffff81221b30 T driver_probe_done
ffffffff81221b50 T driver_probe_device
ffffffff81221d50 t __driver_attach
ffffffff81221df0 t __device_attach
ffffffff81221e50 T driver_detach
ffffffff81221f10 T unregister_syscore_ops
ffffffff81221f60 T register_syscore_ops
ffffffff81221fa0 T syscore_resume
ffffffff81222070 T syscore_suspend
ffffffff812221a0 T syscore_shutdown
ffffffff81222210 T driver_find
ffffffff81222250 T driver_register
ffffffff81222370 T driver_remove_file
ffffffff81222390 T driver_create_file
ffffffff812223b0 T driver_find_device
ffffffff81222440 T driver_for_each_device
ffffffff812224c0 T driver_unregister
ffffffff81222540 t class_attr_show
ffffffff81222560 t class_attr_store
ffffffff81222580 t class_attr_namespace
ffffffff812225a0 t class_child_ns_type
ffffffff812225b0 T class_compat_remove_link
ffffffff81222610 T class_compat_create_link
ffffffff812226b0 T class_compat_unregister
ffffffff812226d0 t class_create_release
ffffffff812226e0 t class_release
ffffffff81222700 T class_compat_register
ffffffff81222770 T show_class_attr_string
ffffffff812227a0 t klist_class_dev_get
ffffffff812227b0 T class_dev_iter_exit
ffffffff812227c0 T class_dev_iter_next
ffffffff81222800 T class_dev_iter_init
ffffffff81222850 T class_interface_unregister
ffffffff812228f0 T class_interface_register
ffffffff812229b0 t klist_class_dev_put
ffffffff812229c0 T class_remove_file
ffffffff812229e0 T class_unregister
ffffffff81222a40 T class_destroy
ffffffff81222a60 T class_create_file
ffffffff81222a80 T __class_register
ffffffff81222c10 T __class_create
ffffffff81222cb0 T class_find_device
ffffffff81222d50 T class_for_each_device
ffffffff81222df0 T platform_get_resource
ffffffff81222e50 T platform_get_irq
ffffffff81222ed0 t platform_drv_probe
ffffffff81222ef0 t platform_drv_probe_fail
ffffffff81222f00 t platform_drv_remove
ffffffff81222f20 t platform_drv_shutdown
ffffffff81222f40 T platform_pm_freeze
ffffffff81222f90 T platform_pm_thaw
ffffffff81222fd0 T platform_pm_poweroff
ffffffff81223020 T platform_pm_restore
ffffffff81223060 T dma_get_required_mask
ffffffff812230b0 t modalias_show
ffffffff812230f0 t platform_uevent
ffffffff81223120 T platform_get_resource_byname
ffffffff812231b0 T platform_get_irq_byname
ffffffff812231e0 T platform_driver_unregister
ffffffff812231f0 T platform_driver_register
ffffffff81223230 T platform_driver_probe
ffffffff812232d0 t platform_device_release
ffffffff81223310 T platform_device_del
ffffffff81223390 T platform_device_add
ffffffff81223560 T platform_device_add_data
ffffffff812235d0 T platform_device_add_resources
ffffffff81223650 T platform_device_put
ffffffff81223670 T platform_device_unregister
ffffffff81223690 t platform_match
ffffffff81223710 W arch_setup_pdev_archdata
ffffffff81223720 T platform_device_register
ffffffff81223740 T platform_add_devices
ffffffff812237b0 T platform_device_alloc
ffffffff81223850 T platform_create_bundle
ffffffff81223940 T platform_device_register_full
ffffffff81223a30 t cpu_device_release
ffffffff81223a40 T get_cpu_device
ffffffff81223a80 T cpu_is_hotpluggable
ffffffff81223ac0 t print_cpus_kernel_max
ffffffff81223af0 t show_cpus_attr
ffffffff81223b20 t print_cpus_offline
ffffffff81223c10 t cpu_release_store
ffffffff81223c20 t cpu_probe_store
ffffffff81223c30 t show_online
ffffffff81223c70 T unregister_cpu
ffffffff81223cf0 T kobj_map
ffffffff81223e90 T kobj_unmap
ffffffff81223f70 T kobj_lookup
ffffffff812240d0 T kobj_map_init
ffffffff81224170 t group_open_release
ffffffff81224180 t group_close_release
ffffffff81224190 t find_dr
ffffffff81224230 t find_group
ffffffff81224290 t devm_kzalloc_release
ffffffff812242a0 t devm_kzalloc_match
ffffffff812242b0 T devres_alloc
ffffffff81224310 T devres_remove
ffffffff812243b0 T devres_find
ffffffff81224440 T devres_remove_group
ffffffff81224500 t release_nodes
ffffffff812246d0 T devres_release_group
ffffffff81224780 t add_dr
ffffffff812247b0 T devres_close_group
ffffffff81224840 T devres_open_group
ffffffff81224910 T devres_add
ffffffff81224970 T devm_kzalloc
ffffffff812249e0 T devres_free
ffffffff81224a00 T devres_destroy
ffffffff81224a30 T devres_get
ffffffff81224ae0 T devm_kfree
ffffffff81224b20 T devres_release_all
ffffffff81224b70 T attribute_container_classdev_to_container
ffffffff81224b80 T attribute_container_find_class_device
ffffffff81224be0 t internal_container_klist_get
ffffffff81224bf0 t attribute_container_release
ffffffff81224c10 t internal_container_klist_put
ffffffff81224c20 T attribute_container_unregister
ffffffff81224cc0 T attribute_container_register
ffffffff81224d20 T attribute_container_device_trigger
ffffffff81224df0 T attribute_container_trigger
ffffffff81224e60 T attribute_container_add_attrs
ffffffff81224ee0 T attribute_container_add_class_device
ffffffff81224f00 T attribute_container_add_device
ffffffff81225050 T attribute_container_add_class_device_adapter
ffffffff81225060 T attribute_container_remove_attrs
ffffffff812250e0 T attribute_container_remove_device
ffffffff812251d0 T attribute_container_class_device_del
ffffffff812251f0 t anon_transport_dummy_function
ffffffff81225200 t transport_setup_classdev
ffffffff81225220 t transport_configure
ffffffff81225240 T transport_destroy_device
ffffffff81225250 t transport_destroy_classdev
ffffffff81225280 T transport_remove_device
ffffffff81225290 T transport_configure_device
ffffffff812252a0 T transport_add_device
ffffffff812252b0 t transport_remove_classdev
ffffffff81225320 T transport_setup_device
ffffffff81225330 T anon_transport_class_register
ffffffff81225370 T transport_class_unregister
ffffffff81225380 T transport_class_register
ffffffff81225390 t transport_add_class_device
ffffffff812253e0 T anon_transport_class_unregister
ffffffff81225400 t show_cpumap
ffffffff81225450 t show_thread_cpumask
ffffffff81225470 t show_thread_cpumask_list
ffffffff81225490 t show_core_cpumask
ffffffff812254b0 t show_core_cpumask_list
ffffffff812254d0 t show_core_id
ffffffff81225510 t show_physical_package_id
ffffffff81225550 t dev_mount
ffffffff81225560 t dev_rmdir
ffffffff81225630 t handle_remove
ffffffff81225880 t handle_create.isra.2
ffffffff81225ac0 t devtmpfsd
ffffffff81225c00 T devtmpfs_create_node
ffffffff81225d10 T devtmpfs_delete_node
ffffffff81225de0 T devtmpfs_mount
ffffffff81225e50 t wake_show
ffffffff81225ea0 t autosuspend_delay_ms_show
ffffffff81225ee0 t control_show
ffffffff81225f20 t rtpm_status_show
ffffffff81225fc0 t pm_qos_latency_show
ffffffff81225ff0 t wakeup_active_show
ffffffff81226090 t wakeup_hit_count_show
ffffffff81226130 t wakeup_active_count_show
ffffffff812261d0 t wakeup_count_show
ffffffff81226270 t wakeup_last_time_show
ffffffff81226350 t wakeup_max_time_show
ffffffff81226430 t wakeup_total_time_show
ffffffff81226510 t wake_store
ffffffff812265d0 t autosuspend_delay_ms_store
ffffffff81226660 t rtpm_active_time_show
ffffffff812266e0 t rtpm_suspended_time_show
ffffffff81226760 t control_store
ffffffff81226850 t pm_qos_latency_store
ffffffff812268a0 T dpm_sysfs_add
ffffffff81226950 T wakeup_sysfs_add
ffffffff81226960 T wakeup_sysfs_remove
ffffffff81226970 T pm_qos_sysfs_add
ffffffff81226980 T pm_qos_sysfs_remove
ffffffff81226990 T rpm_sysfs_remove
ffffffff812269a0 T dpm_sysfs_remove
ffffffff812269d0 T pm_generic_runtime_suspend
ffffffff81226a00 T pm_generic_runtime_resume
ffffffff81226a30 T pm_generic_suspend_noirq
ffffffff81226a60 T pm_generic_suspend_late
ffffffff81226a90 T pm_generic_suspend
ffffffff81226ac0 T pm_generic_freeze_noirq
ffffffff81226af0 T pm_generic_freeze_late
ffffffff81226b20 T pm_generic_freeze
ffffffff81226b50 T pm_generic_poweroff_noirq
ffffffff81226b80 T pm_generic_poweroff_late
ffffffff81226bb0 T pm_generic_poweroff
ffffffff81226be0 T pm_generic_thaw_noirq
ffffffff81226c10 T pm_generic_thaw_early
ffffffff81226c40 T pm_generic_thaw
ffffffff81226c70 T pm_generic_resume_noirq
ffffffff81226ca0 T pm_generic_resume_early
ffffffff81226cd0 T pm_generic_resume
ffffffff81226d00 T pm_generic_restore_noirq
ffffffff81226d30 T pm_generic_restore_early
ffffffff81226d60 T pm_generic_restore
ffffffff81226d90 T pm_generic_runtime_idle
ffffffff81226dd0 T pm_generic_prepare
ffffffff81226e00 T pm_generic_complete
ffffffff81226e30 T dev_pm_put_subsys_data
ffffffff81226ec0 T dev_pm_get_subsys_data
ffffffff81226f70 T dev_pm_qos_remove_global_notifier
ffffffff81226f80 T dev_pm_qos_add_global_notifier
ffffffff81226f90 T dev_pm_qos_remove_notifier
ffffffff81226ff0 T dev_pm_qos_add_notifier
ffffffff81227050 t apply_constraint
ffffffff812270c0 T dev_pm_qos_remove_request
ffffffff81227270 T dev_pm_qos_hide_latency_limit
ffffffff812272a0 T dev_pm_qos_update_request
ffffffff81227350 T dev_pm_qos_add_request
ffffffff81227500 T dev_pm_qos_add_ancestor_request
ffffffff81227540 T dev_pm_qos_expose_latency_limit
ffffffff81227610 T __dev_pm_qos_read_value
ffffffff81227630 T dev_pm_qos_read_value
ffffffff81227690 T dev_pm_qos_constraints_init
ffffffff812276d0 T dev_pm_qos_constraints_destroy
ffffffff81227890 t pm_op
ffffffff812278e0 t pm_late_early_op
ffffffff81227930 t pm_noirq_op
ffffffff81227980 t dpm_wait
ffffffff812279c0 T device_pm_wait_for_dev
ffffffff812279f0 t dpm_wait_fn
ffffffff81227a10 t pm_dev_err
ffffffff81227ac0 t initcall_debug_start
ffffffff81227b30 t initcall_debug_report
ffffffff81227b90 t dpm_show_time
ffffffff81227cd0 T __suspend_report_result
ffffffff81227cf0 t dpm_run_callback.isra.5
ffffffff81227d70 t dpm_resume_early
ffffffff81227f70 t dpm_resume_noirq
ffffffff81228180 T dpm_suspend_end
ffffffff81228660 T dpm_resume_start
ffffffff81228670 t __device_suspend
ffffffff81228890 t async_suspend
ffffffff81228920 t device_resume
ffffffff81228a80 t async_resume
ffffffff81228ac0 T device_pm_init
ffffffff81228b60 T device_pm_lock
ffffffff81228b70 T device_pm_unlock
ffffffff81228b80 T device_pm_add
ffffffff81228c10 T device_pm_remove
ffffffff81228c80 T device_pm_move_before
ffffffff81228cd0 T device_pm_move_after
ffffffff81228d20 T device_pm_move_last
ffffffff81228d60 T dpm_resume
ffffffff81228f70 T dpm_complete
ffffffff81229110 T dpm_resume_end
ffffffff81229120 T dpm_suspend
ffffffff81229340 T dpm_prepare
ffffffff81229520 T dpm_suspend_start
ffffffff81229570 t wakeup_source_activate
ffffffff812295a0 t pm_wakeup_update_hit_counts
ffffffff81229600 T __pm_stay_awake
ffffffff81229690 T pm_stay_awake
ffffffff81229700 T device_set_wakeup_capable
ffffffff81229780 T wakeup_source_remove
ffffffff812297e0 T wakeup_source_add
ffffffff81229870 T wakeup_source_prepare
ffffffff81229930 T wakeup_source_create
ffffffff81229990 T wakeup_source_register
ffffffff812299c0 t wakeup_source_deactivate
ffffffff81229a30 T __pm_wakeup_event
ffffffff81229b20 T pm_wakeup_event
ffffffff81229bb0 T __pm_relax
ffffffff81229c20 T pm_relax
ffffffff81229c90 T wakeup_source_drop
ffffffff81229cc0 T wakeup_source_destroy
ffffffff81229cf0 T wakeup_source_unregister
ffffffff81229d10 T device_wakeup_disable
ffffffff81229d90 T device_wakeup_enable
ffffffff81229e50 T device_set_wakeup_enable
ffffffff81229e80 t pm_wakeup_timer_fn
ffffffff81229ef0 T device_init_wakeup
ffffffff81229f20 T pm_wakeup_pending
ffffffff81229fb0 T pm_get_wakeup_count
ffffffff8122a030 T pm_save_wakeup_count
ffffffff8122a0a0 t rpm_check_suspend_allowed
ffffffff8122a140 t rpm_update_qos_constraint
ffffffff8122a1e0 t pm_runtime_cancel_pending
ffffffff8122a220 t __rpm_callback
ffffffff8122a2b0 T pm_runtime_no_callbacks
ffffffff8122a320 T pm_runtime_enable
ffffffff8122a3b0 t __pm_runtime_barrier
ffffffff8122a530 T pm_runtime_autosuspend_expiration
ffffffff8122a5c0 t rpm_suspend
ffffffff8122ac20 t pm_suspend_timer_fn
ffffffff8122acb0 t rpm_idle
ffffffff8122ae70 T pm_runtime_allow
ffffffff8122aef0 T __pm_runtime_idle
ffffffff8122af70 t rpm_resume
ffffffff8122b520 T pm_runtime_forbid
ffffffff8122b590 T __pm_runtime_disable
ffffffff8122b6c0 T pm_runtime_barrier
ffffffff8122b790 T __pm_runtime_resume
ffffffff8122b800 T pm_runtime_irq_safe
ffffffff8122b860 T __pm_runtime_set_status
ffffffff8122baa0 T __pm_runtime_suspend
ffffffff8122bb20 T pm_schedule_suspend
ffffffff8122bc00 t pm_runtime_work
ffffffff8122bcc0 t update_autosuspend
ffffffff8122bd30 T __pm_runtime_use_autosuspend
ffffffff8122bdb0 T pm_runtime_set_autosuspend_delay
ffffffff8122be20 T update_pm_runtime_accounting
ffffffff8122be60 T pm_runtime_init
ffffffff8122bf50 T pm_runtime_remove
ffffffff8122bfa0 T pm_runtime_update_max_time_suspended
ffffffff8122c020 t dmam_coherent_release
ffffffff8122c0d0 t dmam_noncoherent_release
ffffffff8122c180 T dmam_free_noncoherent
ffffffff8122c260 T dmam_free_coherent
ffffffff8122c340 T dmam_alloc_noncoherent
ffffffff8122c4a0 T dmam_alloc_coherent
ffffffff8122c600 t dmam_match
ffffffff8122c650 t firmware_timeout_store
ffffffff8122c680 t firmware_timeout_show
ffffffff8122c6b0 t firmware_loading_show
ffffffff8122c6e0 t fw_dev_release
ffffffff8122c730 t firmware_uevent
ffffffff8122c7c0 T request_firmware_nowait
ffffffff8122c8c0 t fw_load_abort
ffffffff8122c8d0 t firmware_class_timeout
ffffffff8122c8e0 t _request_firmware_load
ffffffff8122cac0 t firmware_free_data
ffffffff8122cb30 T release_firmware
ffffffff8122cb90 t firmware_loading_store
ffffffff8122cd20 t firmware_data_read
ffffffff8122ce40 t firmware_data_write
ffffffff8122d080 t _request_firmware_prepare.isra.7
ffffffff8122d250 t request_firmware_work_func
ffffffff8122d320 T request_firmware
ffffffff8122d410 t node_read_cpumask
ffffffff8122d420 t node_read_cpulist
ffffffff8122d430 t show_node_state
ffffffff8122d480 t node_read_vmstat
ffffffff8122d540 t node_read_numastat
ffffffff8122d7b0 t node_read_distance
ffffffff8122d880 t node_read_meminfo
ffffffff8122e8f0 t node_read_cpumap
ffffffff8122e940 T register_hugetlbfs_with_node
ffffffff8122e950 T register_node
ffffffff8122ea20 T unregister_node
ffffffff8122eaa0 T register_cpu_under_node
ffffffff8122eb60 T unregister_cpu_under_node
ffffffff8122ebd0 T register_one_node
ffffffff8122ec70 T unregister_one_node
ffffffff8122ec90 T module_add_driver
ffffffff8122ed90 T module_remove_driver
ffffffff8122ee20 T scsi_device_type
ffffffff8122ee50 T scsi_cmd_get_serial
ffffffff8122ee80 T __scsi_device_lookup_by_target
ffffffff8122eec0 T __scsi_device_lookup
ffffffff8122ef10 T __starget_for_each_device
ffffffff8122efc0 T scsi_device_put
ffffffff8122f020 T scsi_device_get
ffffffff8122f080 T scsi_device_lookup
ffffffff8122f130 T scsi_device_lookup_by_target
ffffffff8122f1f0 T __scsi_iterate_devices
ffffffff8122f280 T starget_for_each_device
ffffffff8122f340 t scsi_vpd_inquiry
ffffffff8122f3c0 T scsi_get_vpd_page
ffffffff8122f460 T scsi_finish_command
ffffffff8122f580 t scsi_done
ffffffff8122f590 t scsi_get_host_cmd_pool
ffffffff8122f630 t scsi_pool_alloc_command
ffffffff8122f6b0 T scsi_allocate_command
ffffffff8122f6d0 T scsi_adjust_queue_depth
ffffffff8122f810 T scsi_track_queue_full
ffffffff8122f8d0 t scsi_pool_free_command.isra.9
ffffffff8122f930 T __scsi_put_command
ffffffff8122f9d0 t scsi_host_alloc_command
ffffffff8122fa50 T __scsi_get_command
ffffffff8122fb40 T scsi_get_command
ffffffff8122fc00 T scsi_put_command
ffffffff8122fc70 t scsi_put_host_cmd_pool
ffffffff8122fce0 T scsi_free_command
ffffffff8122fd30 T scsi_setup_command_freelist
ffffffff8122fde0 T scsi_destroy_command_freelist
ffffffff8122fe60 T scsi_log_send
ffffffff8122ff80 T scsi_log_completion
ffffffff812301b0 T scsi_dispatch_cmd
ffffffff812303d0 t __scsi_host_match
ffffffff812303e0 T scsi_is_host_device
ffffffff812303f0 t scsi_host_dev_release
ffffffff812304d0 t scsi_host_cls_release
ffffffff812304e0 T scsi_host_put
ffffffff812304f0 T scsi_unregister
ffffffff81230540 T scsi_host_get
ffffffff81230580 T scsi_host_lookup
ffffffff812305d0 T scsi_host_alloc
ffffffff812308e0 T scsi_register
ffffffff81230960 T scsi_host_set_state
ffffffff81230a60 T scsi_add_host_with_dma
ffffffff81230d10 T scsi_remove_host
ffffffff81230e10 T scsi_flush_work
ffffffff81230e50 T scsi_queue_work
ffffffff81230ea0 T scsi_init_hosts
ffffffff81230ec0 T scsi_exit_hosts
ffffffff81230ed0 T scsi_nonblockable_ioctl
ffffffff81230fc0 t ioctl_internal_command.constprop.4
ffffffff81231170 T scsi_set_medium_removal
ffffffff81231200 T scsi_ioctl
ffffffff812315e0 T scsi_sense_key_string
ffffffff81231600 T scsi_show_result
ffffffff81231650 T scsi_print_result
ffffffff812316d0 T scsi_show_sense_hdr
ffffffff81231760 T scsi_print_status
ffffffff81231790 t print_opcode_name
ffffffff812319e0 T __scsi_print_command
ffffffff81231a70 T scsi_extd_sense_format
ffffffff81231af0 T scsi_show_extd_sense
ffffffff81231be0 T scsi_cmd_print_sense_hdr
ffffffff81231cf0 T scsi_print_sense_hdr
ffffffff81231d50 T scsi_print_command
ffffffff81231e30 t scsi_decode_sense_buffer.part.3
ffffffff81231ed0 t scsi_decode_sense_extras
ffffffff81232120 T scsi_print_sense
ffffffff81232250 T __scsi_print_sense
ffffffff812322f0 T scsi_partsize
ffffffff812323f0 T scsi_bios_ptable
ffffffff81232520 T scsicam_bios_param
ffffffff812326a0 t __scsi_report_device_reset
ffffffff812326b0 t scsi_try_bus_device_reset
ffffffff81232700 T scsi_eh_restore_cmnd
ffffffff81232760 T scsi_eh_finish_cmd
ffffffff812327a0 T scsi_report_bus_reset
ffffffff812327f0 T scsi_report_device_reset
ffffffff81232840 t scsi_reset_provider_done_command
ffffffff81232850 T scsi_sense_desc_find
ffffffff812328e0 T scsi_build_sense_buffer
ffffffff81232910 t scsi_try_bus_reset
ffffffff81232a20 t scsi_try_host_reset
ffffffff81232b30 t scsi_try_target_reset
ffffffff81232bd0 T scsi_reset_provider
ffffffff81232dc0 t scsi_handle_queue_ramp_up
ffffffff81232eb0 t scsi_handle_queue_full
ffffffff81232f30 t scsi_eh_done
ffffffff81232f90 t eh_lock_door_done
ffffffff81232fa0 T scsi_eh_prep_cmnd
ffffffff812331d0 T scsi_block_when_processing_errors
ffffffff812332c0 T scsi_get_sense_info_fld
ffffffff81233390 T scsi_normalize_sense
ffffffff81233450 T scsi_command_normalize_sense
ffffffff81233470 t scsi_check_sense
ffffffff812337e0 t scsi_send_eh_cmnd
ffffffff81233b90 t scsi_eh_try_stu
ffffffff81233c00 t scsi_eh_tur
ffffffff81233ca0 t scsi_eh_test_devices
ffffffff81233e00 T scsi_eh_ready_devs
ffffffff812346e0 T scsi_eh_wakeup
ffffffff81234730 T scsi_schedule_eh
ffffffff81234790 T scsi_eh_scmd_add
ffffffff81234870 T scsi_times_out
ffffffff81234900 T scsi_noretry_cmd
ffffffff812349a0 T scsi_eh_flush_done_q
ffffffff81234ac0 T scsi_decide_disposition
ffffffff81234c60 T scsi_eh_get_sense
ffffffff81234e50 T scsi_error_handler
ffffffff812354e0 t scsi_lld_busy
ffffffff81235530 T scsi_block_requests
ffffffff81235540 T scsi_kunmap_atomic_sg
ffffffff81235550 T scsi_kmap_atomic_sg
ffffffff812356f0 T scsi_target_resume
ffffffff81235700 T scsi_target_quiesce
ffffffff81235710 T scsi_internal_device_unblock
ffffffff81235790 t device_unblock
ffffffff812357a0 t scsi_run_queue
ffffffff81235a00 T sdev_evt_alloc
ffffffff81235a40 T sdev_evt_send
ffffffff81235ad0 T scsi_device_set_state
ffffffff81235c00 T scsi_internal_device_block
ffffffff81235c60 t device_block
ffffffff81235c70 T scsi_device_resume
ffffffff81235ca0 t device_resume_fn
ffffffff81235cb0 t scsi_get_cmd_from_req
ffffffff81235d00 t __scsi_release_buffers
ffffffff81235e10 T scsi_release_buffers
ffffffff81235e20 t scsi_requeue_command
ffffffff81235eb0 T scsi_execute
ffffffff81236020 T scsi_execute_req
ffffffff81236140 T scsi_test_unit_ready
ffffffff81236260 T scsi_mode_sense
ffffffff81236590 T scsi_mode_select
ffffffff81236780 T scsi_calculate_bounce_limit
ffffffff812367d0 T __scsi_alloc_queue
ffffffff812368e0 t target_unblock
ffffffff81236910 t target_block
ffffffff81236940 T scsi_target_unblock
ffffffff81236980 T scsi_target_block
ffffffff812369c0 T scsi_prep_state_check
ffffffff81236a50 T sdev_evt_send_simple
ffffffff81236ac0 T scsi_device_quiesce
ffffffff81236b10 t device_quiesce_fn
ffffffff81236b20 t scsi_init_cmd_errh
ffffffff81236c10 t scsi_kill_request.isra.29
ffffffff81236d40 t scsi_request_fn
ffffffff81237190 t scsi_init_sgtable
ffffffff81237220 T scsi_init_io
ffffffff812372d0 t scsi_sg_free
ffffffff81237310 t scsi_sg_alloc
ffffffff81237350 T scsi_prep_return
ffffffff812373f0 T scsi_setup_fs_cmnd
ffffffff81237480 T scsi_setup_blk_pc_cmnd
ffffffff81237590 T scsi_prep_fn
ffffffff812375e0 T scsi_device_unbusy
ffffffff812376b0 t __scsi_queue_insert
ffffffff81237800 T scsi_queue_insert
ffffffff81237810 t scsi_softirq_done
ffffffff81237970 T scsi_requeue_run_queue
ffffffff81237980 T scsi_next_command
ffffffff812379e0 T scsi_run_host_queues
ffffffff81237a20 T scsi_unblock_requests
ffffffff81237a30 T scsi_io_completion
ffffffff81238090 T scsi_alloc_queue
ffffffff812380f0 T scsi_free_queue
ffffffff81238160 T scsi_exit_queue
ffffffff812381b0 T scsi_evt_thread
ffffffff812382c0 T scsi_dma_map
ffffffff81238390 T scsi_dma_unmap
ffffffff812383f0 T scsi_is_target_device
ffffffff81238400 t sanitize_inquiry_string
ffffffff81238450 T scsilun_to_int
ffffffff81238470 t scsi_target_dev_release
ffffffff81238490 t scsi_target_destroy
ffffffff81238540 t scsi_alloc_target
ffffffff81238820 t scsi_alloc_sdev
ffffffff81238aa0 T scsi_complete_async_scans
ffffffff81238be0 T int_to_scsilun
ffffffff81238c10 T scsi_rescan_device
ffffffff81238c70 t scsi_target_reap_usercontext
ffffffff81238cb0 T scsi_free_host_dev
ffffffff81238cd0 t scsi_probe_and_add_lun
ffffffff81239930 T scsi_target_reap
ffffffff812399f0 T scsi_get_host_dev
ffffffff81239aa0 t __scsi_scan_target
ffffffff8123a170 t scsi_scan_channel.part.8
ffffffff8123a1e0 T scsi_scan_target
ffffffff8123a2e0 T __scsi_add_device
ffffffff8123a420 T scsi_add_device
ffffffff8123a450 T scsi_scan_host_selected
ffffffff8123a620 t do_scsi_scan_host
ffffffff8123a6c0 t do_scan_async
ffffffff8123a830 T scsi_scan_host
ffffffff8123aa50 T scsi_forget_host
ffffffff8123aac0 T scsi_is_sdev_device
ffffffff8123aad0 t store_host_reset
ffffffff8123aba0 t sdev_store_queue_type_rw
ffffffff8123ac90 t show_prot_guard_type
ffffffff8123acc0 t show_prot_capabilities
ffffffff8123acf0 t show_proc_name
ffffffff8123ad30 t show_unchecked_isa_dma
ffffffff8123ad70 t show_sg_prot_tablesize
ffffffff8123ada0 t show_sg_tablesize
ffffffff8123add0 t show_can_queue
ffffffff8123ae00 t show_cmd_per_lun
ffffffff8123ae30 t show_host_busy
ffffffff8123ae60 t show_unique_id
ffffffff8123ae90 t sdev_show_evt_media_change
ffffffff8123aec0 t sdev_show_modalias
ffffffff8123aef0 t show_iostat_ioerr_cnt
ffffffff8123af20 t show_iostat_iodone_cnt
ffffffff8123af50 t show_iostat_iorequest_cnt
ffffffff8123af80 t show_iostat_counterbits
ffffffff8123afb0 t sdev_show_timeout
ffffffff8123aff0 t sdev_show_rev
ffffffff8123b020 t sdev_show_model
ffffffff8123b050 t sdev_show_vendor
ffffffff8123b080 t sdev_show_scsi_level
ffffffff8123b0b0 t sdev_show_type
ffffffff8123b0e0 t sdev_show_device_blocked
ffffffff8123b110 t show_queue_type_field
ffffffff8123b160 t sdev_show_queue_depth
ffffffff8123b190 t show_state_field
ffffffff8123b200 t show_shost_state
ffffffff8123b280 t show_shost_mode
ffffffff8123b310 t show_shost_supported_mode
ffffffff8123b340 t check_set
ffffffff8123b3a0 t store_scan
ffffffff8123b480 t sdev_store_evt_media_change
ffffffff8123b4e0 t sdev_store_queue_depth_rw
ffffffff8123b580 t sdev_store_timeout
ffffffff8123b5e0 t store_state_field
ffffffff8123b690 t sdev_store_delete
ffffffff8123b6b0 t store_rescan_field
ffffffff8123b6c0 t scsi_device_dev_release
ffffffff8123b6e0 t scsi_device_cls_release
ffffffff8123b6f0 t scsi_device_dev_release_usercontext
ffffffff8123b8a0 t store_shost_state
ffffffff8123b950 T scsi_register_interface
ffffffff8123b960 T scsi_register_driver
ffffffff8123b970 t sdev_store_queue_ramp_up_period
ffffffff8123b9c0 t sdev_show_queue_ramp_up_period
ffffffff8123b9f0 t scsi_bus_match
ffffffff8123ba20 t show_shost_active_mode
ffffffff8123ba70 t scsi_bus_uevent
ffffffff8123bab0 T scsi_device_state_name
ffffffff8123baf0 T scsi_host_state_name
ffffffff8123bb30 T scsi_sysfs_register
ffffffff8123bb80 T scsi_sysfs_unregister
ffffffff8123bba0 T scsi_sysfs_add_sdev
ffffffff8123beb0 T __scsi_remove_device
ffffffff8123bf80 T scsi_remove_device
ffffffff8123bfc0 t sdev_store_delete_callback
ffffffff8123bfd0 t __scsi_remove_target
ffffffff8123c0c0 T scsi_remove_target
ffffffff8123c100 t __remove_child
ffffffff8123c120 T scsi_sysfs_add_host
ffffffff8123c1b0 T scsi_sysfs_device_initialize
ffffffff8123c300 t proc_scsi_devinfo_open
ffffffff8123c310 t devinfo_seq_show
ffffffff8123c380 t devinfo_seq_next
ffffffff8123c3f0 t devinfo_seq_stop
ffffffff8123c400 t devinfo_seq_start
ffffffff8123c490 T scsi_dev_info_remove_list
ffffffff8123c540 T scsi_get_device_flags_keyed
ffffffff8123c710 T scsi_dev_info_list_del_keyed
ffffffff8123c8e0 T scsi_dev_info_add_list
ffffffff8123c980 t scsi_strcpy_devinfo
ffffffff8123ca70 T scsi_dev_info_list_add_keyed
ffffffff8123cbf0 t scsi_dev_info_list_add_str
ffffffff8123cd00 t proc_scsi_devinfo_write
ffffffff8123cdc0 T scsi_get_device_flags
ffffffff8123cdd0 T scsi_exit_devinfo
ffffffff8123cdf0 T scsi_exit_sysctl
ffffffff8123ce00 t proc_scsi_read
ffffffff8123ce50 t always_match
ffffffff8123ce60 t proc_scsi_open
ffffffff8123ce70 t scsi_seq_show
ffffffff8123d080 t scsi_seq_start
ffffffff8123d0e0 t scsi_seq_next
ffffffff8123d120 t scsi_seq_stop
ffffffff8123d130 t proc_scsi_write_proc
ffffffff8123d1f0 t proc_scsi_write
ffffffff8123d4a0 T scsi_proc_hostdir_add
ffffffff8123d530 T scsi_proc_hostdir_rm
ffffffff8123d5a0 T scsi_proc_host_add
ffffffff8123d660 T scsi_proc_host_rm
ffffffff8123d6b0 T scsi_exit_procfs
ffffffff8123d6e0 T scsi_trace_parse_cdb
ffffffff8123d780 t scsi_dev_type_resume
ffffffff8123d7d0 t scsi_runtime_resume
ffffffff8123d7f0 t scsi_dev_type_suspend
ffffffff8123d850 t scsi_bus_suspend_common
ffffffff8123d8d0 t scsi_bus_poweroff
ffffffff8123d8f0 t scsi_bus_freeze
ffffffff8123d910 t scsi_bus_suspend
ffffffff8123d930 T scsi_autopm_get_device
ffffffff8123d990 T scsi_autopm_put_device
ffffffff8123d9b0 t scsi_runtime_idle
ffffffff8123d9e0 t scsi_runtime_suspend
ffffffff8123da50 t scsi_bus_resume_common
ffffffff8123dad0 t scsi_bus_prepare
ffffffff8123db10 T scsi_autopm_get_target
ffffffff8123db20 T scsi_autopm_put_target
ffffffff8123db30 T scsi_autopm_get_host
ffffffff8123db90 T scsi_autopm_put_host
ffffffff8123dbb0 t sd_config_discard
ffffffff8123dcf0 t sd_unlock_native_capacity
ffffffff8123dd20 t sd_store_max_medium_access_timeouts
ffffffff8123dd90 t sd_show_max_medium_access_timeouts
ffffffff8123ddc0 t sd_show_provisioning_mode
ffffffff8123de00 t sd_show_thin_provisioning
ffffffff8123de40 t sd_show_app_tag_own
ffffffff8123de70 t sd_show_protection_type
ffffffff8123dea0 t sd_show_manage_start_stop
ffffffff8123dee0 t sd_show_allow_restart
ffffffff8123df20 t sd_show_fua
ffffffff8123df60 t sd_show_cache_type
ffffffff8123dfb0 t sd_store_provisioning_mode
ffffffff8123e0d0 t sd_store_manage_start_stop
ffffffff8123e140 t sd_store_allow_restart
ffffffff8123e1c0 t sd_eh_action
ffffffff8123e3a0 t sd_completed_bytes
ffffffff8123e4a0 t sd_done
ffffffff8123e880 t __scsi_disk_get
ffffffff8123e8b0 t scsi_disk_get_from_dev
ffffffff8123e900 t scsi_disk_put
ffffffff8123e950 t sd_rescan
ffffffff8123e980 t scsi_disk_release
ffffffff8123ea00 t sd_probe
ffffffff8123ed40 t sd_getgeo
ffffffff8123edd0 t sd_compat_ioctl
ffffffff8123ee70 t sd_ioctl
ffffffff8123eff0 t sd_release
ffffffff8123f0d0 t sd_open
ffffffff8123f2b0 t sd_prep_fn
ffffffff81240060 t media_not_present
ffffffff812400d0 t sd_check_events
ffffffff812402a0 t sd_show_protection_mode
ffffffff81240320 t sd_print_sense_hdr.isra.24
ffffffff81240410 t sd_store_cache_type
ffffffff812405d0 t sd_print_result.isra.25
ffffffff81240630 t sd_start_stop_device
ffffffff81240760 t sd_resume
ffffffff812407f0 t sd_sync_cache
ffffffff812408e0 t sd_suspend
ffffffff81240a10 t sd_shutdown
ffffffff81240b60 t sd_remove
ffffffff81240c10 t read_capacity_error
ffffffff81240d40 t read_capacity_10
ffffffff81240f30 t read_capacity_16
ffffffff81241410 t sd_revalidate_disk
ffffffff81243020 t sd_unprep_fn
ffffffff81243050 t sd_major
ffffffff81243080 t sd_probe_async
ffffffff81243250 T atapi_cmd_type
ffffffff812432d0 T ata_tf_to_fis
ffffffff81243360 T ata_tf_from_fis
ffffffff812433b0 t ata_rwcmd_protocol
ffffffff81243430 T ata_pack_xfermask
ffffffff81243450 T ata_unpack_xfermask
ffffffff81243490 T ata_xfer_mask2mode
ffffffff812434e0 T ata_xfer_mode2mask
ffffffff81243540 T ata_xfer_mode2shift
ffffffff81243580 T ata_mode_string
ffffffff812435b0 T ata_id_xfermask
ffffffff81243690 T ata_cable_40wire
ffffffff812436a0 T ata_cable_80wire
ffffffff812436b0 T ata_cable_unknown
ffffffff812436c0 T ata_cable_ignore
ffffffff812436d0 T ata_cable_sata
ffffffff812436e0 T ata_dev_pair
ffffffff81243730 T ata_timing_merge
ffffffff81243800 T ata_timing_find_mode
ffffffff81243840 T ata_timing_cycle2mode
ffffffff81243940 t glob_match
ffffffff81243a60 T ata_std_qc_defer
ffffffff81243aa0 T ata_noop_qc_prep
ffffffff81243ab0 T ata_sg_init
ffffffff81243ad0 T sata_scr_valid
ffffffff81243af0 T sata_scr_read
ffffffff81243b20 T sata_scr_write
ffffffff81243b50 T sata_set_spd
ffffffff81243bd0 T sata_scr_write_flush
ffffffff81243c50 T ata_host_suspend
ffffffff81243c60 T ata_host_resume
ffffffff81243c70 t ata_dummy_qc_issue
ffffffff81243c80 t ata_dummy_error_handler
ffffffff81243c90 t ata_port_runtime_idle
ffffffff81243ca0 T ata_print_version
ffffffff81243cc0 T ata_dev_printk
ffffffff81243d30 T ata_link_printk
ffffffff81243dc0 T ata_port_printk
ffffffff81243e20 T ata_msleep
ffffffff81243e80 T ata_wait_register
ffffffff81243f30 T sata_link_debounce
ffffffff81244030 T sata_link_resume
ffffffff81244170 T ata_ratelimit
ffffffff81244190 t ata_host_stop
ffffffff81244210 T ata_pci_device_do_resume
ffffffff81244280 T ata_pci_device_resume
ffffffff812442d0 T pci_test_config_bits
ffffffff81244380 T ata_host_detach
ffffffff81244470 T ata_pci_remove_one
ffffffff81244490 T ata_host_init
ffffffff81244500 t ata_finalize_port_ops
ffffffff812445d0 t ata_host_release
ffffffff81244650 T ata_dev_next
ffffffff81244710 T ata_link_next
ffffffff812447b0 T ata_timing_compute
ffffffff81244af0 T sata_link_scr_lpm
ffffffff81244c20 t ata_qc_complete_internal
ffffffff81244c30 T ata_dev_classify
ffffffff81244c70 t ata_id_n_sectors
ffffffff81244d40 T ata_pio_need_iordy
ffffffff81244dc0 T ata_pci_device_do_suspend
ffffffff81244e20 T ata_pci_device_suspend
ffffffff81244e60 T ata_host_start
ffffffff81245000 T ata_id_string
ffffffff81245040 T ata_id_c_string
ffffffff81245080 t ata_dev_same_device
ffffffff812451d0 t ata_dev_blacklisted
ffffffff81245270 t ata_port_request_pm.constprop.41
ffffffff81245370 t ata_port_suspend_common
ffffffff81245390 t ata_port_poweroff
ffffffff812453c0 t ata_port_do_freeze
ffffffff81245400 t ata_port_suspend
ffffffff81245430 t ata_port_resume_common
ffffffff81245460 t ata_port_resume
ffffffff812454b0 T ata_dev_phys_link
ffffffff812454e0 T ata_force_cbl
ffffffff81245550 T ata_tf_read_block
ffffffff81245640 T ata_build_rw_tf
ffffffff81245910 T sata_spd_string
ffffffff81245930 T ata_tf_to_lba48
ffffffff81245970 T ata_tf_to_lba
ffffffff812459a0 T sata_down_spd_limit
ffffffff81245ac0 T ata_down_xfermask_limit
ffffffff81245d20 T ata_sg_clean
ffffffff81245df0 T atapi_check_dma
ffffffff81245e30 T swap_buf_le16
ffffffff81245e40 T ata_qc_new_init
ffffffff81245f60 T ata_qc_free
ffffffff81245fb0 T __ata_qc_complete
ffffffff812460f0 T ata_qc_complete
ffffffff81246330 T ata_qc_complete_multiple
ffffffff812463f0 T ata_qc_issue
ffffffff812467a0 T ata_exec_internal_sg
ffffffff81246cf0 T ata_exec_internal
ffffffff81246da0 T ata_dev_set_feature
ffffffff81246e20 T ata_do_dev_read_id
ffffffff81246e50 T ata_dev_read_id
ffffffff81247340 T ata_dev_reread_id
ffffffff81247490 T ata_dev_configure
ffffffff81248820 T ata_dev_revalidate
ffffffff81248a10 T ata_do_set_mode
ffffffff81249360 T ata_bus_probe
ffffffff812496b0 T ata_do_simple_cmd
ffffffff81249730 T ata_phys_link_online
ffffffff81249760 T ata_link_online
ffffffff812497f0 T ata_std_postreset
ffffffff812498d0 T ata_phys_link_offline
ffffffff81249900 T ata_link_offline
ffffffff81249980 T ata_wait_ready
ffffffff81249b20 T ata_wait_after_reset
ffffffff81249b70 T sata_link_hardreset
ffffffff81249d50 T sata_std_hardreset
ffffffff81249d90 T ata_std_prereset
ffffffff81249e10 T ata_dev_init
ffffffff81249f70 T ata_link_init
ffffffff8124a0b0 T ata_slave_link_init
ffffffff8124a150 T sata_link_init_spd
ffffffff8124a2b0 T ata_host_register
ffffffff8124a530 T ata_host_activate
ffffffff8124a660 T ata_port_alloc
ffffffff8124a7e0 T ata_host_alloc
ffffffff8124a8c0 T ata_host_alloc_pinfo
ffffffff8124a970 T __ata_port_probe
ffffffff8124a9d0 T ata_port_probe
ffffffff8124aa00 t async_port_probe
ffffffff8124aa70 t ata_scsi_em_message_store
ffffffff8124aaa0 t ata_scsi_em_message_show
ffffffff8124aad0 t ata_scsi_flush_xlat
ffffffff8124ab00 t scsi_16_lba_len
ffffffff8124ab80 t ata_scsiop_inq_00
ffffffff8124abb0 t ata_scsiop_inq_b1
ffffffff8124ac70 t ata_scsiop_inq_b2
ffffffff8124ac80 t ata_scsiop_noop
ffffffff8124ac90 t ata_msense_caching
ffffffff8124ad10 t ata_scsiop_read_cap
ffffffff8124af90 t ata_scsiop_report_luns
ffffffff8124afa0 T ata_sas_port_start
ffffffff8124afc0 T ata_sas_port_stop
ffffffff8124afd0 t ata_scsi_find_dev
ffffffff8124b050 t ata_scsi_activity_store
ffffffff8124b100 t ata_scsi_activity_show
ffffffff8124b180 t ata_scsi_em_message_type_show
ffffffff8124b1c0 t ata_scsi_lpm_show
ffffffff8124b210 t ata_scsi_park_store
ffffffff8124b3a0 t ata_scsi_park_show
ffffffff8124b490 t ata_scsi_lpm_store
ffffffff8124b550 t ata_scsi_translate
ffffffff8124b6c0 t ata_to_sense_error
ffffffff8124b880 t ata_gen_passthru_sense
ffffffff8124ba40 t atapi_sense_complete
ffffffff8124ba80 t atapi_xlat
ffffffff8124bc10 t atapi_qc_complete
ffffffff8124bf80 t ata_scsi_rbuf_fill
ffffffff8124c040 t ata_scsiop_inq_b0
ffffffff8124c0d0 t ata_scsi_dev_config
ffffffff8124c2d0 T ata_sas_slave_configure
ffffffff8124c300 T ata_sas_port_destroy
ffffffff8124c320 T ata_sas_sync_probe
ffffffff8124c330 T ata_sas_async_probe
ffffffff8124c340 T ata_sas_port_alloc
ffffffff8124c3a0 t ata_scsi_handle_link_detach
ffffffff8124c500 t ata_scsiop_inq_83
ffffffff8124c5e0 t ata_scsiop_inq_80
ffffffff8124c620 t ata_scsiop_inq_std
ffffffff8124c6c0 t ata_scsiop_inq_89
ffffffff8124c810 T ata_sas_port_init
ffffffff8124c840 t ata_scsi_qc_complete
ffffffff8124cc70 t atapi_drain_needed
ffffffff8124ccb0 t ata_scsi_set_sense.constprop.21
ffffffff8124ccd0 t ata_scsiop_mode_sense
ffffffff8124cfb0 t ata_scsi_rw_xlat
ffffffff8124d1d0 t ata_scsi_pass_thru
ffffffff8124d490 t ata_scsi_invalid_field
ffffffff8124d4b0 t ata_scsi_write_same_xlat
ffffffff8124d630 t ata_scsi_verify_xlat
ffffffff8124d8c0 t ata_scsi_start_stop_xlat
ffffffff8124d9a0 T ata_std_bios_param
ffffffff8124d9e0 T ata_scsi_unlock_native_capacity
ffffffff8124da70 T ata_cmd_ioctl
ffffffff8124dd00 T ata_task_ioctl
ffffffff8124df00 T ata_sas_scsi_ioctl
ffffffff8124e1a0 T ata_scsi_ioctl
ffffffff8124e1c0 T ata_scsi_slave_config
ffffffff8124e250 T ata_scsi_slave_destroy
ffffffff8124e350 T __ata_change_queue_depth
ffffffff8124e490 T ata_scsi_change_queue_depth
ffffffff8124e4b0 T ata_scsi_simulate
ffffffff8124e6e0 T ata_sas_queuecmd
ffffffff8124e920 T ata_scsi_queuecmd
ffffffff8124eb90 T ata_scsi_add_hosts
ffffffff8124ecb0 T ata_scsi_scan_host
ffffffff8124ee60 T ata_scsi_offline_dev
ffffffff8124ee90 T ata_scsi_media_change_notify
ffffffff8124eeb0 T ata_scsi_hotplug
ffffffff8124ef80 T ata_scsi_user_scan
ffffffff8124f0a0 T ata_scsi_dev_rescan
ffffffff8124f1c0 t ata_eh_scsidone
ffffffff8124f1d0 t ata_eh_categorize_error
ffffffff8124f230 t ata_do_reset
ffffffff8124f2c0 t ata_ering_record
ffffffff8124f330 t ata_eh_clear_action
ffffffff8124f430 t __ata_port_freeze
ffffffff8124f470 t atapi_eh_request_sense
ffffffff8124f5e0 t ata_eh_park_issue_cmd
ffffffff8124f710 t __ata_eh_qc_complete
ffffffff8124f7a0 T ata_port_wait_eh
ffffffff8124f870 T ata_scsi_cmd_error_handler
ffffffff8124f9d0 t __ata_ehi_pushv_desc
ffffffff8124fa20 t ata_eh_set_pending
ffffffff8124fad0 T __ata_ehi_push_desc
ffffffff8124fb20 T ata_ehi_push_desc
ffffffff8124fba0 T ata_ehi_clear_desc
ffffffff8124fbb0 T ata_port_desc
ffffffff8124fc60 T ata_port_pbar_desc
ffffffff8124fd20 T ata_internal_cmd_timeout
ffffffff8124fd90 T ata_internal_cmd_timed_out
ffffffff8124fe10 T ata_ering_map
ffffffff8124fe70 T ata_ering_clear_cb
ffffffff8124fe80 T ata_eh_acquire
ffffffff8124fee0 T ata_eh_release
ffffffff8124ff40 T ata_scsi_timed_out
ffffffff81250030 T ata_qc_schedule_eh
ffffffff812500d0 T ata_port_schedule_eh
ffffffff81250120 t ata_do_link_abort
ffffffff812501f0 T ata_link_abort
ffffffff81250200 T ata_port_abort
ffffffff81250210 T ata_port_freeze
ffffffff81250250 T ata_eh_fastdrain_timerfn
ffffffff81250370 T sata_async_notification
ffffffff812503f0 T ata_eh_freeze_port
ffffffff81250450 T ata_eh_thaw_port
ffffffff812504c0 T ata_eh_qc_complete
ffffffff812504d0 T ata_eh_qc_retry
ffffffff812504f0 T ata_dev_disable
ffffffff812505b0 T ata_eh_detach_dev
ffffffff81250680 t ata_eh_schedule_probe
ffffffff81250820 T ata_eh_about_to_do
ffffffff812508b0 T ata_eh_done
ffffffff812508d0 T ata_eh_analyze_ncq_error
ffffffff81250b70 t ata_eh_link_autopsy
ffffffff812513b0 T ata_eh_autopsy
ffffffff81251470 T ata_get_cmd_descript
ffffffff812514b0 T ata_eh_report
ffffffff81251ea0 T ata_eh_reset
ffffffff81252bd0 T ata_set_mode
ffffffff81252ce0 T ata_link_nr_enabled
ffffffff81252d20 T ata_eh_recover
ffffffff81254020 T ata_eh_finish
ffffffff812540d0 T ata_scsi_port_error_handler
ffffffff812548b0 T ata_scsi_error
ffffffff81254980 T ata_do_eh
ffffffff81254a30 T ata_std_error_handler
ffffffff81254ab0 t ata_tport_match
ffffffff81254ae0 t ata_tlink_match
ffffffff81254b10 t ata_tdev_match
ffffffff81254b40 t show_ata_dev_gscr
ffffffff81254bd0 t show_ata_dev_id
ffffffff81254c50 t show_ata_dev_spdn_cnt
ffffffff81254c80 t show_ata_port_idle_irq
ffffffff81254cb0 t show_ata_port_nr_pmp_links
ffffffff81254ce0 t show_ata_dev_ering
ffffffff81254d20 t ata_show_ering
ffffffff81254df0 t get_ata_xfer_names
ffffffff81254e80 t show_ata_dev_xfer_mode
ffffffff81254ea0 t show_ata_dev_dma_mode
ffffffff81254ec0 t show_ata_dev_pio_mode
ffffffff81254ee0 t show_ata_dev_class
ffffffff81254f50 t show_ata_link_sata_spd
ffffffff81254f90 t show_ata_link_sata_spd_limit
ffffffff81254fd0 t show_ata_link_hw_sata_spd_limit
ffffffff81255010 t ata_tdev_release
ffffffff81255020 t ata_tlink_release
ffffffff81255030 t ata_tport_release
ffffffff81255040 T ata_is_port
ffffffff81255060 T ata_is_link
ffffffff81255080 T ata_tlink_delete
ffffffff81255110 T ata_tport_delete
ffffffff81255150 T ata_tlink_add
ffffffff81255310 T ata_tport_add
ffffffff81255420 T ata_is_ata_dev
ffffffff81255440 T ata_attach_transport
ffffffff81255930 T ata_release_transport
ffffffff81255970 t ata_sff_check_ready
ffffffff812559a0 T ata_sff_qc_fill_rtf
ffffffff812559d0 T ata_sff_std_ports
ffffffff81255a20 T ata_pci_bmdma_clear_simplex
ffffffff81255a50 t ata_bmdma_nodma
ffffffff81255ab0 T ata_bmdma_status
ffffffff81255ad0 T ata_sff_check_status
ffffffff81255af0 T ata_pci_bmdma_init
ffffffff81255c80 T ata_bmdma_port_start
ffffffff81255cd0 T ata_bmdma_port_start32
ffffffff81255ce0 T ata_bmdma_start
ffffffff81255d10 T ata_bmdma_irq_clear
ffffffff81255d40 t ata_sff_set_devctl
ffffffff81255d70 T ata_sff_freeze
ffffffff81255df0 t ata_devchk
ffffffff81255e90 T ata_sff_tf_read
ffffffff81255fa0 T ata_sff_tf_load
ffffffff81256140 T ata_bmdma_setup
ffffffff812561d0 T ata_bmdma_post_internal_cmd
ffffffff81256250 T ata_bmdma_dumb_qc_prep
ffffffff81256350 t ata_pio_sector
ffffffff81256460 T ata_pci_sff_activate_host
ffffffff81256690 T ata_pci_sff_init_host
ffffffff81256870 T ata_pci_sff_prepare_host
ffffffff81256930 T ata_pci_bmdma_prepare_host
ffffffff81256960 t ata_pci_init_one
ffffffff81256b00 T ata_pci_bmdma_init_one
ffffffff81256b10 T ata_pci_sff_init_one
ffffffff81256b20 T ata_sff_error_handler
ffffffff81256c40 T ata_bmdma_error_handler
ffffffff81256d60 T ata_sff_drain_fifo
ffffffff81256dd0 T ata_sff_postreset
ffffffff81256e70 T ata_sff_dev_classify
ffffffff81256f80 T ata_sff_busy_sleep
ffffffff81257120 T ata_sff_queue_delayed_work
ffffffff81257140 T ata_sff_queue_pio_task
ffffffff812571b0 T ata_sff_queue_work
ffffffff812571c0 T ata_sff_data_xfer
ffffffff81257290 T ata_sff_data_xfer32
ffffffff812573a0 T ata_sff_data_xfer_noirq
ffffffff812573d0 T ata_sff_wait_ready
ffffffff812573e0 T ata_sff_wait_after_reset
ffffffff81257530 T ata_sff_softreset
ffffffff812576d0 t ata_sff_altstatus
ffffffff81257710 T ata_sff_dma_pause
ffffffff81257730 T ata_bmdma_stop
ffffffff81257780 t ata_sff_sync
ffffffff812577c0 T ata_sff_pause
ffffffff812577e0 T ata_sff_exec_command
ffffffff81257800 T ata_sff_dev_select
ffffffff81257830 t ata_pio_sectors
ffffffff812578e0 T ata_sff_irq_on
ffffffff81257990 T ata_sff_thaw
ffffffff812579c0 t ata_hsm_qc_complete
ffffffff81257af0 T ata_sff_hsm_move
ffffffff81258230 t ata_sff_pio_task
ffffffff812583a0 T ata_bmdma_qc_prep
ffffffff81258470 T sata_sff_hardreset
ffffffff812584f0 t __ata_sff_port_intr
ffffffff81258600 T ata_bmdma_port_intr
ffffffff81258720 T ata_bmdma_interrupt
ffffffff81258900 T ata_sff_port_intr
ffffffff81258910 T ata_sff_interrupt
ffffffff81258af0 T ata_sff_lost_interrupt
ffffffff81258b80 T ata_sff_prereset
ffffffff81258c10 t ata_dev_select.constprop.20
ffffffff81258cc0 T ata_sff_qc_issue
ffffffff81258e90 T ata_bmdma_qc_issue
ffffffff81259020 T ata_sff_flush_pio_task
ffffffff81259070 T ata_sff_port_init
ffffffff812590c0 T ata_sff_exit
ffffffff812590d0 t ata_dev_get_GTF
ffffffff812592b0 T ata_acpi_gtm_xfermask
ffffffff81259340 T ata_acpi_cbl_80wire
ffffffff812593d0 T ata_acpi_stm
ffffffff812594b0 T ata_acpi_gtm
ffffffff812595a0 t ata_acpi_run_tf
ffffffff81259a10 T ata_acpi_associate_sata_port
ffffffff81259a60 T ata_acpi_associate
ffffffff81259ba0 T ata_acpi_dissociate
ffffffff81259bf0 T ata_acpi_on_suspend
ffffffff81259c00 T ata_acpi_on_resume
ffffffff81259d40 T ata_acpi_set_state
ffffffff81259e00 T ata_acpi_on_devcfg
ffffffff8125a0f0 T ata_acpi_on_disable
ffffffff8125a110 t always_on
ffffffff8125a120 t loopback_setup
ffffffff8125a1b0 t loopback_net_init
ffffffff8125a250 t loopback_get_stats64
ffffffff8125a2c0 t loopback_xmit
ffffffff8125a350 t loopback_dev_init
ffffffff8125a380 t loopback_dev_free
ffffffff8125a3a0 T usb_is_intel_switchable_xhci
ffffffff8125a3d0 T usb_enable_xhci_ports
ffffffff8125a400 T uhci_reset_hc
ffffffff8125a490 T uhci_check_and_reset_hc
ffffffff8125a500 t usb_amd_quirk_pll
ffffffff8125a910 T usb_amd_quirk_pll_enable
ffffffff8125a920 T usb_amd_quirk_pll_disable
ffffffff8125a930 T usb_amd_dev_put
ffffffff8125aa20 T usb_amd_find_chipset_info
ffffffff8125aca0 T usb_is_intel_ppt_switchable_xhci
ffffffff8125acc0 T usb_is_intel_lpt_switchable_xhci
ffffffff8125ace0 t serio_match_port
ffffffff8125ad40 t serio_bus_match
ffffffff8125ad70 t serio_cleanup
ffffffff8125adc0 t serio_suspend
ffffffff8125ade0 t serio_shutdown
ffffffff8125adf0 t serio_driver_probe
ffffffff8125ae50 t serio_disconnect_driver
ffffffff8125aea0 t serio_driver_remove
ffffffff8125aec0 t serio_set_drv
ffffffff8125af10 T serio_open
ffffffff8125af70 T serio_close
ffffffff8125af90 t serio_find_driver
ffffffff8125b000 t serio_release_port
ffffffff8125b020 t serio_queue_event
ffffffff8125b140 t serio_resume
ffffffff8125b160 T serio_reconnect
ffffffff8125b170 T serio_rescan
ffffffff8125b180 t serio_remove_pending_events
ffffffff8125b230 t serio_destroy_port
ffffffff8125b390 t serio_disconnect_port
ffffffff8125b420 T serio_unregister_child_port
ffffffff8125b4a0 T serio_unregister_port
ffffffff8125b4d0 t serio_remove_duplicate_events
ffffffff8125b580 t serio_driver_set_bind_mode
ffffffff8125b5f0 t serio_set_bind_mode
ffffffff8125b670 t serio_driver_show_bind_mode
ffffffff8125b6b0 t serio_driver_show_description
ffffffff8125b6f0 t serio_show_bind_mode
ffffffff8125b730 t serio_show_modalias
ffffffff8125b770 t serio_show_description
ffffffff8125b7a0 t serio_show_id_extra
ffffffff8125b7d0 t serio_show_id_id
ffffffff8125b800 t serio_show_id_proto
ffffffff8125b830 t serio_show_id_type
ffffffff8125b860 T serio_interrupt
ffffffff8125b900 T serio_unregister_driver
ffffffff8125b9a0 T __serio_register_port
ffffffff8125bab0 t serio_reconnect_port
ffffffff8125bb40 t serio_reconnect_subtree
ffffffff8125bbd0 t serio_rebind_driver
ffffffff8125be10 t serio_handle_event
ffffffff8125c030 t serio_uevent
ffffffff8125c100 T __serio_register_driver
ffffffff8125c1b0 t i8042_start
ffffffff8125c1c0 T i8042_check_port_owner
ffffffff8125c1e0 t i8042_panic_blink
ffffffff8125c310 t i8042_wait_write
ffffffff8125c370 T i8042_remove_filter
ffffffff8125c3d0 T i8042_install_filter
ffffffff8125c430 t i8042_flush
ffffffff8125c4e0 t i8042_kbd_write
ffffffff8125c570 t i8042_interrupt
ffffffff8125c910 t i8042_pm_thaw
ffffffff8125c930 t i8042_pnp_aux_probe
ffffffff8125cac0 t i8042_pnp_kbd_probe
ffffffff8125cc60 t i8042_stop
ffffffff8125cc90 T i8042_unlock_chip
ffffffff8125cca0 T i8042_lock_chip
ffffffff8125ccb0 t __i8042_command
ffffffff8125ceb0 T i8042_command
ffffffff8125cf10 t i8042_dritek_enable
ffffffff8125cf50 t i8042_set_mux_mode
ffffffff8125d020 t i8042_aux_write
ffffffff8125d060 t i8042_port_close
ffffffff8125d140 t i8042_controller_selftest
ffffffff8125d1f0 t i8042_controller_reset
ffffffff8125d290 t i8042_pm_reset
ffffffff8125d2b0 t i8042_pm_suspend
ffffffff8125d2d0 t i8042_shutdown
ffffffff8125d2e0 t i8042_enable_aux_port
ffffffff8125d340 t i8042_enable_mux_ports
ffffffff8125d380 t i8042_enable_kbd_port
ffffffff8125d3e0 t i8042_controller_resume
ffffffff8125d530 t i8042_pm_restore
ffffffff8125d540 t i8042_pm_resume
ffffffff8125d550 T ps2_cmd_aborted
ffffffff8125d590 T ps2_init
ffffffff8125d5f0 T ps2_sendbyte
ffffffff8125d700 T ps2_is_keyboard_id
ffffffff8125d730 T __ps2_command
ffffffff8125db60 T ps2_end_command
ffffffff8125db80 T ps2_begin_command
ffffffff8125dbb0 T ps2_command
ffffffff8125dbf0 T ps2_drain
ffffffff8125dcf0 T ps2_handle_response
ffffffff8125dd80 T ps2_handle_ack
ffffffff8125de90 T input_scancode_to_scalar
ffffffff8125ded0 t input_default_getkeycode
ffffffff8125df60 t input_default_setkeycode
ffffffff8125e0d0 t input_proc_devices_poll
ffffffff8125e110 T input_handler_for_each_handle
ffffffff8125e170 t input_attach_handler
ffffffff8125e360 t input_seq_stop
ffffffff8125e380 T input_register_handle
ffffffff8125e470 T input_flush_device
ffffffff8125e4e0 T input_grab_device
ffffffff8125e550 t input_proc_handlers_open
ffffffff8125e560 t input_proc_devices_open
ffffffff8125e570 t input_handlers_seq_show
ffffffff8125e5e0 t input_handlers_seq_next
ffffffff8125e600 t input_devices_seq_next
ffffffff8125e610 t input_print_modalias_bits
ffffffff8125e6d0 t input_print_modalias
ffffffff8125e8c0 t input_dev_show_modalias
ffffffff8125e900 t input_devnode
ffffffff8125e930 t __input_release_device
ffffffff8125e9c0 T input_release_device
ffffffff8125ea10 T input_open_device
ffffffff8125eac0 T input_unregister_handle
ffffffff8125eb40 T input_close_device
ffffffff8125ebc0 T input_register_handler
ffffffff8125ecc0 T input_unregister_handler
ffffffff8125ed90 t input_dev_toggle
ffffffff8125eec0 t input_dev_suspend
ffffffff8125ef20 T input_register_device
ffffffff8125f350 T input_get_keycode
ffffffff8125f3c0 T input_set_capability
ffffffff8125f480 T input_free_device
ffffffff8125f4a0 t input_dev_show_id_version
ffffffff8125f4d0 t input_dev_show_id_product
ffffffff8125f500 t input_dev_show_id_vendor
ffffffff8125f530 t input_dev_show_id_bustype
ffffffff8125f560 t input_dev_show_uniq
ffffffff8125f5a0 t input_dev_show_phys
ffffffff8125f5e0 t input_dev_show_name
ffffffff8125f620 t input_dev_release
ffffffff8125f680 T input_allocate_device
ffffffff8125f720 t input_pass_event
ffffffff8125f800 T input_set_keycode
ffffffff8125f8f0 t input_repeat_key
ffffffff8125f9b0 t input_dev_release_keys.part.5
ffffffff8125fa10 T input_unregister_device
ffffffff8125fb80 T input_reset_device
ffffffff8125fc20 t input_dev_resume
ffffffff8125fc40 t input_open_file
ffffffff8125fd80 t input_handlers_seq_start
ffffffff8125fdf0 t input_devices_seq_start
ffffffff8125fe60 t input_bits_to_string
ffffffff8125ff70 t input_seq_print_bitmap
ffffffff81260050 t input_devices_seq_show
ffffffff81260380 t input_print_bitmap
ffffffff81260490 t input_dev_show_cap_sw
ffffffff812604d0 t input_dev_show_cap_ff
ffffffff81260510 t input_dev_show_cap_snd
ffffffff81260550 t input_dev_show_cap_led
ffffffff81260590 t input_dev_show_cap_msc
ffffffff812605d0 t input_dev_show_cap_abs
ffffffff81260610 t input_dev_show_cap_rel
ffffffff81260650 t input_dev_show_cap_key
ffffffff81260690 t input_dev_show_cap_ev
ffffffff812606d0 t input_dev_show_properties
ffffffff81260710 t input_add_uevent_bm_var
ffffffff812607b0 t input_dev_uevent
ffffffff81260b00 T input_alloc_absinfo
ffffffff81260b50 t input_handle_event
ffffffff81261030 T input_inject_event
ffffffff812610f0 T input_set_abs_params
ffffffff812611a0 T input_event
ffffffff81261260 T input_event_from_user
ffffffff812612f0 T input_ff_effect_from_user
ffffffff81261370 T input_event_to_user
ffffffff812613d0 T input_mt_report_finger_count
ffffffff81261460 T input_mt_report_pointer_emulation
ffffffff81261550 T input_mt_destroy_slots
ffffffff81261590 T input_mt_report_slot_state
ffffffff81261620 T input_mt_init_slots
ffffffff81261700 T input_ff_event
ffffffff81261790 T input_ff_destroy
ffffffff81261800 T input_ff_create
ffffffff81261920 t erase_effect
ffffffff81261a40 t flush_effects
ffffffff81261aa0 T input_ff_erase
ffffffff81261b10 T input_ff_upload
ffffffff81261df0 t mousedev_packet
ffffffff81261f70 t mousedev_poll
ffffffff81261fe0 t mousedev_fasync
ffffffff81261ff0 t mousedev_free
ffffffff81262020 t mousedev_detach_client
ffffffff81262080 t mousedev_close_device
ffffffff81262150 t mousedev_release
ffffffff812621b0 t mousedev_open_device
ffffffff812622a0 t mousedev_open
ffffffff81262430 t mousedev_cleanup
ffffffff81262510 t mousedev_write
ffffffff81262790 t mousedev_read
ffffffff81262990 t mousedev_notify_readers
ffffffff81262b90 t mousedev_event
ffffffff81263080 t mousedev_create
ffffffff81263270 t mousedev_destroy
ffffffff812632c0 t mousedev_disconnect
ffffffff81263350 t mousedev_connect
ffffffff81263480 t atkbd_reset_state
ffffffff812634d0 t atkbd_select_set
ffffffff81263660 t atkbd_set_leds
ffffffff81263760 t atkbd_set_repeat_rate
ffffffff81263910 t atkbd_attr_show_helper
ffffffff81263940 t atkbd_do_show_err_count
ffffffff81263950 t atkbd_do_show_softraw
ffffffff81263960 t atkbd_do_show_softrepeat
ffffffff81263970 t atkbd_do_show_set
ffffffff81263980 t atkbd_do_show_scroll
ffffffff81263990 t atkbd_do_show_force_release
ffffffff812639a0 t atkbd_do_show_extra
ffffffff812639b0 t atkbd_cleanup
ffffffff81263a00 t atkbd_show_err_count
ffffffff81263a30 t atkbd_show_softraw
ffffffff81263a60 t atkbd_show_softrepeat
ffffffff81263a90 t atkbd_show_set
ffffffff81263ac0 t atkbd_show_scroll
ffffffff81263af0 t atkbd_show_extra
ffffffff81263b20 t atkbd_attr_set_helper
ffffffff81263bf0 t atkbd_do_set_softraw
ffffffff81263c10 t atkbd_do_set_softrepeat
ffffffff81263c30 t atkbd_do_set_set
ffffffff81263c50 t atkbd_do_set_scroll
ffffffff81263c70 t atkbd_do_set_force_release
ffffffff81263c90 t atkbd_do_set_extra
ffffffff81263cb0 t atkbd_disconnect
ffffffff81263d60 t atkbd_set_device_attrs
ffffffff81263f50 t atkbd_set_softraw
ffffffff81264040 t atkbd_set_softrepeat
ffffffff81264170 t atkbd_schedule_event_work
ffffffff81264200 t atkbd_event
ffffffff81264280 t atkbd_set_keycode_table
ffffffff812646a0 t atkbd_set_scroll
ffffffff812647b0 t atkbd_set_force_release
ffffffff81264850 t atkbd_show_force_release
ffffffff81264880 t atkbd_event_work
ffffffff81264920 t atkbd_probe
ffffffff81264a20 t atkbd_interrupt
ffffffff81265070 t atkbd_apply_forced_release_keylist
ffffffff812650b0 t atkbd_oqo_01plus_scancode_fixup
ffffffff812650e0 t atkbd_activate
ffffffff81265120 t atkbd_set_set
ffffffff81265280 t atkbd_set_extra
ffffffff812653e0 t atkbd_reconnect
ffffffff81265520 t atkbd_connect
ffffffff812657a0 t hgpk_detect
ffffffff812657b0 t touchkit_ps2_detect
ffffffff812657c0 t elantech_detect
ffffffff812657d0 T fsp_detect
ffffffff812657e0 t psmouse_show_int_attr
ffffffff81265810 t genius_detect
ffffffff812658f0 t psmouse_poll
ffffffff81265910 t psmouse_set_rate
ffffffff81265970 T psmouse_set_resolution
ffffffff812659d0 t psmouse_protocol_by_name
ffffffff81265a80 t psmouse_set_maxproto
ffffffff81265ad0 T psmouse_attr_show_helper
ffffffff81265b00 t psmouse_set_int_attr
ffffffff81265b60 t psmouse_attr_set_resolution
ffffffff81265bc0 t psmouse_attr_set_rate
ffffffff81265c20 t psmouse_apply_defaults
ffffffff81265d90 t psmouse_do_detect
ffffffff81265de0 T psmouse_process_byte
ffffffff81266100 t ps2bare_detect
ffffffff81266150 t cortron_detect
ffffffff81266190 t psmouse_protocol_by_type
ffffffff81266200 t psmouse_get_maxproto
ffffffff81266230 t psmouse_attr_show_protocol
ffffffff81266260 t intellimouse_detect
ffffffff81266360 t im_explorer_detect
ffffffff812664b0 t thinking_detect
ffffffff81266590 t psmouse_initialize
ffffffff812665e0 t psmouse_probe
ffffffff81266680 t psmouse_handle_byte
ffffffff812667c0 T fsp_init
ffffffff812667d0 T psmouse_queue_work
ffffffff812667e0 t psmouse_interrupt
ffffffff81266b10 T psmouse_set_state
ffffffff81266b90 T psmouse_sliced_command
ffffffff81266c10 T psmouse_reset
ffffffff81266c50 t psmouse_extensions
ffffffff81266fd0 t psmouse_switch_protocol
ffffffff81267160 t psmouse_attr_set_protocol
ffffffff81267420 T psmouse_activate
ffffffff81267480 T psmouse_deactivate
ffffffff812674e0 t psmouse_cleanup
ffffffff812675f0 t psmouse_disconnect
ffffffff81267760 t psmouse_reconnect
ffffffff812678a0 t psmouse_connect
ffffffff81267b40 t psmouse_resync
ffffffff81267d50 T psmouse_attr_set_helper
ffffffff81267e80 t synaptics_mode_cmd
ffffffff81267ec0 t synaptics_set_disable_gesture
ffffffff81267f70 t synaptics_set_rate
ffffffff81267fb0 T synaptics_reset
ffffffff81267fc0 t synaptics_send_cmd
ffffffff81268000 t synaptics_set_mode
ffffffff81268110 t synaptics_show_disable_gesture
ffffffff81268140 t synaptics_query_hardware
ffffffff812684f0 t set_abs_position_params
ffffffff81268610 t __synaptics_init
ffffffff81268b90 t synaptics_pt_write
ffffffff81268bf0 t synaptics_pt_start
ffffffff81268c60 t synaptics_pt_stop
ffffffff81268cc0 t synaptics_disconnect
ffffffff81268d30 t synaptics_report_slot
ffffffff81268dc0 t synaptics_report_buttons.isra.2
ffffffff81268ef0 t synaptics_validate_byte
ffffffff81268f90 t synaptics_pt_activate
ffffffff81269010 t synaptics_report_semi_mt_slot
ffffffff812690c0 t synaptics_process_byte
ffffffff81269f00 T synaptics_detect
ffffffff81269fc0 t synaptics_reconnect
ffffffff8126a130 T synaptics_init
ffffffff8126a140 T synaptics_init_relative
ffffffff8126a150 T synaptics_supported
ffffffff8126a160 t alps_enter_command_mode
ffffffff8126a230 t alps_get_model
ffffffff8126a4c0 t alps_passthrough_mode_v2
ffffffff8126a560 t alps_poll
ffffffff8126a660 t alps_disconnect
ffffffff8126a690 t alps_report_buttons.isra.2
ffffffff8126a760 t alps_report_bare_ps2_packet.isra.3
ffffffff8126a810 t alps_command_mode_send_nibble
ffffffff8126a850 t __alps_command_mode_write_reg
ffffffff8126a890 t alps_command_mode_set_addr
ffffffff8126a900 t alps_command_mode_read_reg
ffffffff8126a980 t alps_passthrough_mode_v3
ffffffff8126a9d0 t alps_command_mode_write_reg
ffffffff8126aa20 t alps_process_packet
ffffffff8126b930 t alps_process_byte
ffffffff8126bb00 t alps_flush_packet
ffffffff8126bb60 t alps_hw_init
ffffffff8126c300 t alps_reconnect
ffffffff8126c330 T alps_init
ffffffff8126c680 T alps_detect
ffffffff8126c6f0 t ps2pp_attr_show_smartscroll
ffffffff8126c720 t ps2pp_cmd
ffffffff8126c760 t ps2pp_set_smartscroll
ffffffff8126c7e0 t ps2pp_attr_set_smartscroll
ffffffff8126c860 t ps2pp_disconnect
ffffffff8126c880 t ps2pp_process_byte
ffffffff8126cb40 t ps2pp_set_resolution
ffffffff8126cbe0 T ps2pp_init
ffffffff8126d020 t lifebook_limit_serio3
ffffffff8126d040 t lifebook_set_6byte_proto
ffffffff8126d050 t lifebook_set_resolution
ffffffff8126d0b0 t lifebook_absolute_mode
ffffffff8126d100 t lifebook_disconnect
ffffffff8126d150 t lifebook_process_byte
ffffffff8126d480 T lifebook_detect
ffffffff8126d500 T lifebook_init
ffffffff8126d710 t trackpoint_write
ffffffff8126d7a0 t trackpoint_start_protocol
ffffffff8126d7f0 t trackpoint_read
ffffffff8126d850 t trackpoint_set_int_attr
ffffffff8126d8e0 t trackpoint_show_int_attr
ffffffff8126d920 t trackpoint_disconnect
ffffffff8126d950 t trackpoint_toggle_bit
ffffffff8126d9f0 t trackpoint_set_bit_attr
ffffffff8126daa0 t trackpoint_sync
ffffffff8126dca0 t trackpoint_reconnect
ffffffff8126dcd0 T trackpoint_detect
ffffffff8126de90 T rtc_month_days
ffffffff8126def0 T rtc_year_days
ffffffff8126df60 T rtc_time_to_tm
ffffffff8126e100 T rtc_valid_tm
ffffffff8126e170 T rtc_ktime_to_tm
ffffffff8126e1e0 T rtc_tm_to_time
ffffffff8126e210 T rtc_tm_to_ktime
ffffffff8126e250 T rtc_device_unregister
ffffffff8126e2e0 t rtc_device_release
ffffffff8126e300 T rtc_device_register
ffffffff8126e5c0 t rtc_suspend
ffffffff8126e6f0 t rtc_resume.part.5
ffffffff8126e7f0 t rtc_resume
ffffffff8126e830 T rtc_update_irq
ffffffff8126e840 T rtc_irq_set_freq
ffffffff8126e940 T rtc_irq_set_state
ffffffff8126ea20 T rtc_irq_register
ffffffff8126ead0 T rtc_irq_unregister
ffffffff8126eb40 T rtc_class_close
ffffffff8126eb60 T rtc_class_open
ffffffff8126ebb0 t __rtc_match
ffffffff8126ebe0 T rtc_read_alarm
ffffffff8126ece0 t __rtc_read_time.isra.4
ffffffff8126ed40 t __rtc_set_alarm.part.5
ffffffff8126ed70 t __rtc_set_alarm
ffffffff8126ee00 t rtc_timer_remove
ffffffff8126ef20 t rtc_timer_enqueue
ffffffff8126f030 T rtc_update_irq_enable
ffffffff8126f160 T rtc_alarm_irq_enable
ffffffff8126f210 T rtc_set_alarm
ffffffff8126f300 T rtc_set_time
ffffffff8126f3f0 T rtc_read_time
ffffffff8126f460 T rtc_initialize_alarm
ffffffff8126f5a0 T rtc_set_mmss
ffffffff8126f6a0 T __rtc_read_alarm
ffffffff8126f980 T rtc_handle_legacy_irq
ffffffff8126fa50 T rtc_aie_update_irq
ffffffff8126fa60 T rtc_uie_update_irq
ffffffff8126fa70 T rtc_pie_update_irq
ffffffff8126fae0 T rtc_timer_do_work
ffffffff8126fca0 T rtc_timer_init
ffffffff8126fcd0 T rtc_timer_start
ffffffff8126fd60 T rtc_timer_cancel
ffffffff8126fdc0 t rtc_dev_poll
ffffffff8126fe00 t rtc_dev_fasync
ffffffff8126fe20 t rtc_dev_open
ffffffff8126feb0 t rtc_dev_ioctl
ffffffff81270390 t rtc_dev_release
ffffffff812703e0 t rtc_dev_read
ffffffff81270590 T rtc_dev_prepare
ffffffff812705e0 T rtc_dev_add_device
ffffffff81270630 T rtc_dev_del_device
ffffffff81270650 t rtc_proc_release
ffffffff81270670 t rtc_proc_open
ffffffff812706e0 t rtc_proc_show
ffffffff81270a10 T rtc_proc_add_device
ffffffff81270a40 T rtc_proc_del_device
ffffffff81270a60 t rtc_sysfs_set_max_user_freq
ffffffff81270ab0 t rtc_sysfs_show_max_user_freq
ffffffff81270ae0 t rtc_sysfs_show_name
ffffffff81270b10 t rtc_sysfs_show_time
ffffffff81270b50 t rtc_sysfs_show_date
ffffffff81270ba0 t rtc_sysfs_show_since_epoch
ffffffff81270bf0 t rtc_sysfs_show_wakealarm
ffffffff81270c50 t rtc_sysfs_set_wakealarm
ffffffff81270d40 t rtc_sysfs_show_hctosys
ffffffff81270d90 T rtc_sysfs_add_device
ffffffff81270de0 T rtc_sysfs_del_device
ffffffff81270e10 t cmos_resume
ffffffff81270f00 t cmos_pnp_resume
ffffffff81270f10 t rtc_handler
ffffffff81270f40 t rtc_wake_off
ffffffff81270f50 t rtc_wake_on
ffffffff81270f70 t cmos_nvram_write
ffffffff81271060 t cmos_nvram_read
ffffffff81271130 t cmos_procfs
ffffffff81271260 t cmos_set_time
ffffffff81271440 t cmos_read_alarm
ffffffff81271620 t cmos_read_time
ffffffff81271760 t cmos_interrupt
ffffffff81271840 t cmos_checkintr.isra.3
ffffffff812718c0 t cmos_suspend
ffffffff812719a0 t cmos_pnp_suspend
ffffffff812719b0 t cmos_irq_disable
ffffffff81271a10 t cmos_do_shutdown
ffffffff81271a50 t cmos_platform_shutdown
ffffffff81271a80 t cmos_pnp_shutdown
ffffffff81271ab0 t cmos_irq_enable.constprop.5
ffffffff81271b10 t cmos_set_alarm
ffffffff81271cf0 t cmos_alarm_irq_enable
ffffffff81271d70 T watchdog_unregister_device
ffffffff81271da0 T watchdog_register_device
ffffffff81271e20 t watchdog_ping
ffffffff81271e50 t watchdog_start
ffffffff81271e80 t watchdog_open
ffffffff81271f40 t watchdog_write
ffffffff81271fb0 t watchdog_stop
ffffffff81272000 t watchdog_ioctl
ffffffff81272260 t watchdog_release
ffffffff812722f0 T watchdog_dev_register
ffffffff81272390 T watchdog_dev_unregister
ffffffff812723f0 T edac_handler_set
ffffffff81272410 T edac_atomic_assert_error
ffffffff81272420 T edac_put_sysfs_subsys
ffffffff81272440 T edac_get_sysfs_subsys
ffffffff81272490 T __cpufreq_driver_target
ffffffff812724e0 T cpufreq_cpu_put
ffffffff81272500 t __find_governor
ffffffff81272570 t store_scaling_setspeed
ffffffff812725e0 t show_scaling_available_governors
ffffffff812726a0 t show_scaling_driver
ffffffff812726d0 t show_cpus
ffffffff81272780 t show_affected_cpus
ffffffff81272790 T cpufreq_register_governor
ffffffff81272820 t show_scaling_max_freq
ffffffff81272850 t show_scaling_min_freq
ffffffff81272880 t show_cpuinfo_transition_latency
ffffffff812728b0 t show_cpuinfo_max_freq
ffffffff812728e0 t show_cpuinfo_min_freq
ffffffff81272910 t show_bios_limit
ffffffff81272980 t show_scaling_cur_freq
ffffffff812729b0 t cpufreq_sysfs_release
ffffffff812729c0 T cpufreq_register_driver
ffffffff81272b30 T cpufreq_notify_transition
ffffffff81272bf0 t __cpufreq_get
ffffffff81272ca0 T cpufreq_cpu_get
ffffffff81272d60 t cpufreq_bp_resume
ffffffff81272dc0 t cpufreq_bp_suspend
ffffffff81272e30 T cpufreq_get_policy
ffffffff81272f30 T __cpufreq_driver_getavg
ffffffff81272fb0 T cpufreq_quick_get_max
ffffffff81272fd0 T cpufreq_quick_get
ffffffff81272ff0 T cpufreq_unregister_driver
ffffffff81273050 t lock_policy_rwsem_write
ffffffff812730e0 t unlock_policy_rwsem_write
ffffffff81273120 t store
ffffffff812731d0 T cpufreq_driver_target
ffffffff81273250 t __cpufreq_governor
ffffffff81273300 t __cpufreq_set_policy
ffffffff81273490 t cpufreq_add_dev_interface
ffffffff81273720 t cpufreq_add_dev
ffffffff81273b50 t __cpufreq_remove_dev
ffffffff81273e10 T cpufreq_update_policy
ffffffff81273f10 t handle_update
ffffffff81273f20 t store_scaling_governor
ffffffff81274100 t store_scaling_max_freq
ffffffff812741b0 t store_scaling_min_freq
ffffffff81274260 t show_scaling_setspeed
ffffffff812742a0 t show_scaling_governor
ffffffff81274340 t show_related_cpus
ffffffff81274350 t lock_policy_rwsem_read
ffffffff812743e0 t unlock_policy_rwsem_read
ffffffff81274420 t show
ffffffff812744c0 T cpufreq_get
ffffffff81274520 T cpufreq_unregister_notifier
ffffffff81274560 T cpufreq_register_notifier
ffffffff812745f0 t show_cpuinfo_cur_freq
ffffffff81274640 t cpufreq_remove_dev
ffffffff81274690 T cpufreq_unregister_governor
ffffffff81274790 T cpufreq_disabled
ffffffff812747a0 T disable_cpufreq
ffffffff812747b0 t show_total_trans
ffffffff81274800 t cpufreq_stats_update
ffffffff81274870 t show_time_in_state
ffffffff81274900 t cpufreq_stat_notifier_trans
ffffffff812749b0 t cpufreq_stat_notifier_policy
ffffffff81274c50 t cpufreq_governor_performance
ffffffff81274c80 T cpufreq_frequency_table_cpuinfo
ffffffff81274d00 T cpufreq_frequency_table_verify
ffffffff81274e30 T cpufreq_frequency_table_target
ffffffff81274f30 T cpufreq_frequency_table_get_attr
ffffffff81274f50 T cpufreq_frequency_table_put_attr
ffffffff81274f70 T cpufreq_frequency_get_table
ffffffff81274f90 t show_available_freqs
ffffffff81275020 t cpuidle_enter
ffffffff81275040 t cpuidle_enter_tk
ffffffff81275050 t smp_callback
ffffffff81275060 t cpuidle_latency_notify
ffffffff81275090 T cpuidle_resume_and_unlock
ffffffff812750b0 t __cpuidle_register_device
ffffffff812751b0 T cpuidle_enable_device
ffffffff81275330 t poll_idle
ffffffff812753b0 T cpuidle_disable_device
ffffffff81275430 T cpuidle_register_device
ffffffff812754a0 T cpuidle_disabled
ffffffff812754b0 T disable_cpuidle
ffffffff812754c0 T cpuidle_play_dead
ffffffff81275550 T cpuidle_idle_call
ffffffff81275640 T cpuidle_install_idle_handler
ffffffff81275660 T cpuidle_uninstall_idle_handler
ffffffff81275680 T cpuidle_pause_and_lock
ffffffff812756a0 T cpuidle_unregister_device
ffffffff81275770 T cpuidle_wrap_enter
ffffffff81275800 T cpuidle_get_driver
ffffffff81275810 T cpuidle_unregister_driver
ffffffff81275870 T cpuidle_register_driver
ffffffff81275910 T cpuidle_switch_governor
ffffffff81275a00 T cpuidle_register_governor
ffffffff81275af0 T cpuidle_unregister_governor
ffffffff81275ba0 t cpuidle_state_show
ffffffff81275bd0 t cpuidle_state_store
ffffffff81275c00 t cpuidle_store
ffffffff81275c80 t cpuidle_show
ffffffff81275cf0 t cpuidle_sysfs_release
ffffffff81275d00 t cpuidle_state_sysfs_release
ffffffff81275d10 t store_state_disable
ffffffff81275d90 t show_state_disable
ffffffff81275dc0 t show_state_time
ffffffff81275de0 t show_state_usage
ffffffff81275e00 t show_state_power_usage
ffffffff81275e30 t show_state_exit_latency
ffffffff81275e60 t show_current_governor
ffffffff81275ec0 t show_current_driver
ffffffff81275f40 t store_current_governor
ffffffff81276030 t show_available_governors
ffffffff812760d0 t show_state_desc
ffffffff81276120 t show_state_name
ffffffff81276170 T cpuidle_add_interface
ffffffff812761a0 T cpuidle_remove_interface
ffffffff812761b0 T cpuidle_add_state_sysfs
ffffffff81276330 T cpuidle_remove_state_sysfs
ffffffff812763a0 T cpuidle_add_sysfs
ffffffff81276400 T cpuidle_remove_sysfs
ffffffff81276430 t ladder_enable_device
ffffffff812764d0 t ladder_reflect
ffffffff812764f0 t ladder_select_state
ffffffff81276680 t dmi_table
ffffffff81276730 T dmi_get_system_info
ffffffff81276740 T dmi_match
ffffffff81276780 T dmi_find_device
ffffffff812767f0 T dmi_walk
ffffffff81276880 T dmi_get_date
ffffffff81276a20 T dmi_name_in_vendors
ffffffff81276a80 t dmi_matches
ffffffff81276b40 T dmi_first_match
ffffffff81276b80 T dmi_check_system
ffffffff81276bd0 T dmi_name_in_serial
ffffffff81276c00 t memmap_attr_show
ffffffff81276c10 t type_show
ffffffff81276c40 t end_show
ffffffff81276c70 t start_show
ffffffff81276ca0 t acpi_pm_read
ffffffff81276cb0 T acpi_pm_read_verified
ffffffff81276d10 t acpi_pm_read_slow
ffffffff81276d20 t init_pit_timer
ffffffff81276de0 t pit_next_event
ffffffff81276e20 t hid_uevent
ffffffff81276ee0 T hid_register_report
ffffffff81276fa0 t store_new_id
ffffffff812770a0 T hid_unregister_driver
ffffffff81277140 t hid_device_release
ffffffff81277210 T hid_destroy_device
ffffffff81277260 T hid_allocate_device
ffffffff812773a0 T hid_disconnect
ffffffff812773f0 t hid_device_remove
ffffffff81277490 t hid_parser_local
ffffffff81277790 t hid_parser_global
ffffffff81277d10 t hid_add_field
ffffffff81278020 t extract
ffffffff812780a0 t implement
ffffffff812781a0 T hid_output_report
ffffffff81278300 t hid_parser_main
ffffffff812785e0 t hid_process_event
ffffffff81278720 T hid_parse_report
ffffffff81278a10 t snto32
ffffffff81278a50 T hid_set_field
ffffffff81278b30 T hid_report_raw_event
ffffffff81278ef0 T hid_input_report
ffffffff81279160 T hid_check_keys_pressed
ffffffff812791c0 t hid_parser_reserved
ffffffff812791f0 T __hid_register_driver
ffffffff81279280 t read_report_descriptor
ffffffff812792d0 T hid_match_id
ffffffff81279330 t hid_match_device
ffffffff812793e0 t hid_bus_match
ffffffff81279490 T hid_add_device
ffffffff81279680 T hid_connect
ffffffff812799b0 t hid_device_probe
ffffffff81279b20 t match_scancode
ffffffff81279b30 t match_keycode
ffffffff81279b50 t match_index
ffffffff81279b60 t hidinput_find_key
ffffffff81279ca0 T hidinput_find_field
ffffffff81279d30 T hidinput_get_led_field
ffffffff81279db0 T hidinput_count_leds
ffffffff81279e40 T hidinput_disconnect
ffffffff81279eb0 t hidinput_open
ffffffff81279ee0 t hidinput_close
ffffffff81279f10 T hidinput_connect
ffffffff8127ca60 T hidinput_report_event
ffffffff8127cab0 t hidinput_locate_usage
ffffffff8127cb30 t hidinput_getkeycode
ffffffff8127cb90 t hidinput_setkeycode
ffffffff8127cc90 T hidinput_hid_event
ffffffff8127d080 T iommu_present
ffffffff8127d090 T iommu_device_group
ffffffff8127d0c0 T iommu_domain_has_cap
ffffffff8127d0e0 T iommu_iova_to_phys
ffffffff8127d100 T iommu_detach_device
ffffffff8127d110 T iommu_attach_device
ffffffff8127d130 T iommu_unmap
ffffffff8127d1e0 T iommu_map
ffffffff8127d320 T iommu_domain_free
ffffffff8127d340 T iommu_domain_alloc
ffffffff8127d3b0 t show_iommu_group
ffffffff8127d410 T iommu_set_fault_handler
ffffffff8127d420 T bus_set_iommu
ffffffff8127d460 t add_iommu_group
ffffffff8127d4c0 t iommu_device_notifier
ffffffff8127d540 t amd_iommu_domain_has_cap
ffffffff8127d550 t amd_iommu_device_group
ffffffff8127d5b0 t build_inv_iommu_pages
ffffffff8127d630 T amd_iommu_device_info
ffffffff8127d730 t set_dte_entry
ffffffff8127d860 t build_completion_wait
ffffffff8127d8e0 t find_protection_domain
ffffffff8127d970 t add_domain_to_list
ffffffff8127d9b0 t del_domain_from_list
ffffffff8127da00 t __get_gcr3_pte
ffffffff8127daa0 t alloc_pte
ffffffff8127dca0 t free_gcr3_tbl_level1
ffffffff8127dd00 T amd_iommu_unregister_ppr_notifier
ffffffff8127dd10 T amd_iommu_register_ppr_notifier
ffffffff8127dd20 t protection_domain_free
ffffffff8127ddb0 t dma_ops_area_alloc
ffffffff8127df00 t check_device
ffffffff8127df50 t amd_iommu_dma_supported
ffffffff8127df60 t fetch_pte.isra.10
ffffffff8127e090 t amd_iommu_iova_to_phys
ffffffff8127e120 T amd_iommu_enable_device_erratum
ffffffff8127e160 t dma_ops_domain_unmap.part.14
ffffffff8127e1c0 t find_dev_data
ffffffff8127e2b0 t iommu_init_device
ffffffff8127e3d0 t wait_on_sem
ffffffff8127e430 t iommu_queue_command_sync
ffffffff8127e570 t iommu_flush_dte
ffffffff8127e5a0 T amd_iommu_complete_ppr
ffffffff8127e620 t iommu_completion_wait
ffffffff8127e680 t domain_flush_complete
ffffffff8127e6d0 t __flush_pasid
ffffffff8127e890 T amd_iommu_domain_clear_gcr3
ffffffff8127e940 T amd_iommu_domain_set_gcr3
ffffffff8127ea10 T amd_iommu_flush_tlb
ffffffff8127ea80 T amd_iommu_flush_page
ffffffff8127eaf0 t device_flush_iotlb.isra.18
ffffffff8127eb90 t device_flush_dte
ffffffff8127ebd0 t do_attach
ffffffff8127ec50 t __attach_device
ffffffff8127ed20 t domain_for_device
ffffffff8127edb0 t do_detach
ffffffff8127ee40 t __detach_device
ffffffff8127ef20 t detach_device
ffffffff8127efd0 t amd_iommu_detach_device
ffffffff8127f060 t __domain_flush_pages
ffffffff8127f140 t domain_flush_tlb_pde
ffffffff8127f160 t amd_iommu_unmap
ffffffff8127f2f0 t update_domain
ffffffff8127f360 T amd_iommu_domain_enable_v2
ffffffff8127f430 t iommu_map_page
ffffffff8127f5a0 t amd_iommu_map
ffffffff8127f630 t alloc_new_range
ffffffff8127f9a0 t free_pagetable.isra.20
ffffffff8127fa70 T amd_iommu_domain_direct_map
ffffffff8127fad0 t dma_ops_domain_free
ffffffff8127fb40 t amd_iommu_domain_destroy
ffffffff8127fc90 t domain_id_alloc
ffffffff8127fcf0 t protection_domain_alloc
ffffffff8127fd60 t amd_iommu_domain_init
ffffffff8127fdd0 t dma_ops_domain_alloc
ffffffff8127fea0 t dma_ops_free_addresses
ffffffff8127fee0 t __map_single
ffffffff81280330 t __unmap_single.isra.26
ffffffff81280450 t device_change_notifier
ffffffff812805b0 T amd_iommu_int_thread
ffffffff81280a60 T amd_iommu_int_handler
ffffffff81280a70 T iommu_flush_all_caches
ffffffff81280b30 T pci_pri_tlp_required
ffffffff81280b70 t attach_device
ffffffff81280d40 t get_domain
ffffffff81280e40 T amd_iommu_get_v2_domain
ffffffff81280e70 t unmap_sg
ffffffff81280f20 t map_sg
ffffffff81281110 t unmap_page
ffffffff812811b0 t map_page
ffffffff812812b0 t free_coherent
ffffffff81281360 t alloc_coherent
ffffffff81281510 t amd_iommu_attach_device
ffffffff812815a0 T amd_iommu_init_notifier
ffffffff812815c0 t iommu_disable
ffffffff81281610 t disable_iommus
ffffffff81281640 t amd_iommu_suspend
ffffffff81281650 T amd_iommu_v2_supported
ffffffff81281660 t amd_iommu_enable_interrupts
ffffffff81281750 T amd_iommu_reset_cmd_buffer
ffffffff81281790 t enable_iommus
ffffffff81281b00 t amd_iommu_resume
ffffffff81281d10 T amd_iommu_apply_erratum_63
ffffffff81281d50 T pcibios_align_resource
ffffffff81281da0 t pcibios_fwaddrmap_lookup
ffffffff81281e00 T pcibios_retrieve_fw_addr
ffffffff81281e70 T pci_mmap_page_range
ffffffff81281f10 t pci_dev_base
ffffffff81281f60 t pci_mmcfg_write
ffffffff81282000 t pci_mmcfg_read
ffffffff812820b0 t pci_conf1_write
ffffffff812821c0 t pci_conf1_read
ffffffff812822e0 t pci_conf2_write
ffffffff81282420 t pci_conf2_read
ffffffff81282580 T pci_mmconfig_lookup
ffffffff812825d0 T xen_find_device_domain_owner
ffffffff81282640 T xen_register_device_domain_owner
ffffffff81282710 t xen_initdom_restore_msi_irqs
ffffffff81282890 t xen_teardown_msi_irq
ffffffff812828a0 t xen_initdom_setup_msi_irqs
ffffffff81282b70 t xen_hvm_setup_msi_irqs
ffffffff81282cb0 t xen_setup_msi_irqs
ffffffff81282e00 t xen_teardown_msi_irqs
ffffffff81282e50 t xen_pcifront_enable_irq
ffffffff81282f30 T xen_unregister_device_domain_owner
ffffffff81282fe0 t xen_register_pirq
ffffffff81283120 t acpi_register_gsi_xen_hvm
ffffffff81283140 t xen_register_gsi.part.7
ffffffff81283230 t acpi_register_gsi_xen
ffffffff81283250 t sb600_disable_hpet_bar
ffffffff812832a0 t pci_early_fixup_cyrix_5530
ffffffff812832f0 t pcie_rootport_aspm_quirk
ffffffff812833b0 t quirk_pcie_aspm_write
ffffffff81283410 t quirk_pcie_aspm_read
ffffffff81283440 t pci_fixup_via_northbridge_bug
ffffffff81283560 t pci_fixup_nforce2
ffffffff812835d0 t resource_to_addr
ffffffff81283720 t setup_resource
ffffffff812838d0 t count_resource
ffffffff81283900 t pirq_serverworks_get
ffffffff81283910 t pirq_serverworks_set
ffffffff81283930 t pirq_pico_get
ffffffff81283950 t pirq_pico_set
ffffffff81283980 t read_config_nybble
ffffffff812839d0 t pirq_via_get
ffffffff812839f0 t pirq_opti_get
ffffffff81283a00 t pirq_cyrix_get
ffffffff81283a10 t pirq_amd756_get
ffffffff81283a70 t pirq_piix_get
ffffffff81283aa0 t pirq_sis_get
ffffffff81283ae0 t write_config_nybble
ffffffff81283b80 t pirq_via_set
ffffffff81283bb0 t pirq_opti_set
ffffffff81283bd0 t pirq_cyrix_set
ffffffff81283bf0 t pirq_piix_set
ffffffff81283c10 t pirq_sis_set
ffffffff81283c90 t pirq_via586_set
ffffffff81283d00 t pirq_via586_get
ffffffff81283d60 t pirq_ite_set
ffffffff81283dd0 t pirq_ite_get
ffffffff81283e30 t pirq_ali_set
ffffffff81283ea0 t pirq_ali_get
ffffffff81283f00 t pirq_amd756_set
ffffffff81283f70 t pirq_vlsi_set
ffffffff81283fe0 t pirq_vlsi_get
ffffffff81284050 T eisa_set_level_irq
ffffffff812840d0 t pcibios_lookup_irq
ffffffff812845e0 t pirq_enable_irq
ffffffff81284830 T pcibios_penalize_isa_irq
ffffffff81284870 T raw_pci_read
ffffffff812848b0 t pci_read
ffffffff812848e0 T raw_pci_write
ffffffff81284920 t pci_write
ffffffff81284950 T pcibios_assign_all_busses
ffffffff81284960 T pcibios_enable_device
ffffffff81284990 T pcibios_disable_device
ffffffff812849b0 T pci_ext_cfg_avail
ffffffff812849c0 T set_mp_bus_to_node
ffffffff812849e0 T get_mp_bus_to_node
ffffffff81284a20 T read_pci_config
ffffffff81284a50 T read_pci_config_byte
ffffffff81284a90 T read_pci_config_16
ffffffff81284ad0 T write_pci_config
ffffffff81284b00 T write_pci_config_byte
ffffffff81284b40 T write_pci_config_16
ffffffff81284b80 T early_pci_allowed
ffffffff81284ba0 T early_dump_pci_device
ffffffff81284c60 T early_dump_pci_devices
ffffffff81284d50 T x86_pci_root_bus_resources
ffffffff81284e70 T save_processor_state
ffffffff81284fa0 T restore_processor_state
ffffffff812851c0 T fb_is_primary_device
ffffffff812851e0 t sock_no_open
ffffffff812851f0 T sock_tx_timestamp
ffffffff81285230 t sock_poll
ffffffff81285250 t sock_mmap
ffffffff81285270 T kernel_bind
ffffffff81285280 T kernel_listen
ffffffff81285290 T kernel_connect
ffffffff812852a0 T kernel_getsockname
ffffffff812852b0 T kernel_getpeername
ffffffff812852c0 T kernel_sock_ioctl
ffffffff81285310 T kernel_sock_shutdown
ffffffff81285320 t sockfs_mount
ffffffff81285340 t sockfs_dname
ffffffff81285360 t sock_destroy_inode
ffffffff81285390 t sock_alloc_inode
ffffffff81285460 t init_once
ffffffff81285470 t sock_splice_read
ffffffff812854e0 T kernel_sendpage
ffffffff81285540 t sock_sendpage
ffffffff81285570 T kernel_setsockopt
ffffffff812855c0 T kernel_getsockopt
ffffffff81285610 t sock_fasync
ffffffff812856b0 t sock_do_ioctl
ffffffff81285710 t routing_ioctl
ffffffff81285930 t sock_ioctl
ffffffff81285bd0 t compat_sock_ioctl
ffffffff81286650 T dlci_ioctl_set
ffffffff81286680 T vlan_ioctl_set
ffffffff812866b0 T brioctl_set
ffffffff812866e0 t sock_aio_dtor
ffffffff812866f0 t sock_recvmsg_nosec
ffffffff81286820 t sock_sendmsg_nosec
ffffffff81286940 T sock_recvmsg
ffffffff81286a70 T kernel_recvmsg
ffffffff81286ad0 T sock_sendmsg
ffffffff81286bf0 T kernel_sendmsg
ffffffff81286c40 t sockfd_lookup_light
ffffffff81286cc0 t __sys_sendmsg
ffffffff81287050 t sock_alloc
ffffffff812870b0 T sock_create_lite
ffffffff812870f0 t sock_alloc_file
ffffffff81287210 T sock_map_fd
ffffffff81287240 T sock_wake_async
ffffffff812872f0 T __sock_recv_timestamp
ffffffff81287540 T sock_release
ffffffff812875d0 T __sock_create
ffffffff812877d0 T sock_create_kern
ffffffff812877f0 T sock_create
ffffffff81287820 T sockfd_lookup
ffffffff81287870 t move_addr_to_user
ffffffff81287930 t __sys_recvmsg
ffffffff81287bc0 t sock_aio_read.part.23
ffffffff81287cf0 t sock_aio_read
ffffffff81287d20 t sock_aio_write
ffffffff81287e70 T sock_unregister
ffffffff81287ec0 T sock_register
ffffffff81287f60 T __sock_recv_wifi_status
ffffffff81287fb0 T __sock_recv_ts_and_drops
ffffffff81288100 t sock_close
ffffffff81288130 T kernel_accept
ffffffff81288200 T move_addr_to_kernel
ffffffff81288290 T sys_socket
ffffffff812882f0 T sys_socketpair
ffffffff812884c0 T sys_bind
ffffffff81288580 T sys_listen
ffffffff81288600 T sys_accept4
ffffffff812887f0 T sys_accept
ffffffff81288800 T sys_connect
ffffffff812888d0 T sys_getsockname
ffffffff812889a0 T sys_getpeername
ffffffff81288a80 T sys_sendto
ffffffff81288c20 T sys_send
ffffffff81288c30 T sys_recvfrom
ffffffff81288dc0 T sys_recv
ffffffff81288dd0 T sys_setsockopt
ffffffff81288ea0 T sys_getsockopt
ffffffff81288f60 T sys_shutdown
ffffffff81288fd0 T sys_sendmsg
ffffffff81289050 T __sys_sendmmsg
ffffffff81289190 T sys_sendmmsg
ffffffff812891a0 T sys_recvmsg
ffffffff81289220 T __sys_recvmmsg
ffffffff81289430 T sys_recvmmsg
ffffffff81289500 T sys_socketcall
ffffffff81289770 T socket_seq_show
ffffffff812897f0 T sk_reset_txq
ffffffff81289800 T sock_update_classid
ffffffff81289860 T sock_rfree
ffffffff81289890 T __sk_mem_schedule
ffffffff81289ae0 T sock_no_bind
ffffffff81289af0 T sock_no_connect
ffffffff81289b00 T sock_no_socketpair
ffffffff81289b10 T sock_no_accept
ffffffff81289b20 T sock_no_getname
ffffffff81289b30 T sock_no_poll
ffffffff81289b40 T sock_no_ioctl
ffffffff81289b50 T sock_no_listen
ffffffff81289b60 T sock_no_shutdown
ffffffff81289b70 T sock_no_setsockopt
ffffffff81289b80 T sock_no_getsockopt
ffffffff81289b90 T sock_no_sendmsg
ffffffff81289ba0 T sock_no_recvmsg
ffffffff81289bb0 T sock_no_mmap
ffffffff81289bc0 T sock_common_getsockopt
ffffffff81289bd0 T compat_sock_common_getsockopt
ffffffff81289bf0 T sock_common_recvmsg
ffffffff81289c40 T sock_common_setsockopt
ffffffff81289c50 T compat_sock_common_setsockopt
ffffffff81289c70 t proto_exit_net
ffffffff81289c80 t proto_init_net
ffffffff81289cb0 t proto_seq_open
ffffffff81289cd0 t proto_seq_next
ffffffff81289ce0 t proto_seq_stop
ffffffff81289cf0 t proto_seq_start
ffffffff81289d10 t sock_inuse_exit_net
ffffffff81289d20 t sock_inuse_init_net
ffffffff81289d50 t sock_def_destruct
ffffffff81289d60 T sock_kfree_s
ffffffff81289da0 T proto_unregister
ffffffff81289ea0 T sock_prot_inuse_add
ffffffff81289ec0 T proto_register
ffffffff8128a0f0 T sock_prot_inuse_get
ffffffff8128a160 t __lock_sock
ffffffff8128a1f0 T lock_sock_nested
ffffffff8128a240 t sock_def_wakeup
ffffffff8128a270 T release_sock
ffffffff8128a380 T sock_init_data
ffffffff8128a560 T sock_no_sendpage
ffffffff8128a5e0 T sk_wait_data
ffffffff8128a6c0 T sock_wmalloc
ffffffff8128a730 T sock_alloc_send_pskb
ffffffff8128aa40 T sock_alloc_send_skb
ffffffff8128aa50 T sock_kmalloc
ffffffff8128aab0 T sock_i_ino
ffffffff8128ab00 T sock_i_uid
ffffffff8128ab50 t __sk_free
ffffffff8128aca0 T sock_wfree
ffffffff8128ad00 T sk_free
ffffffff8128ad20 T sk_common_release
ffffffff8128adf0 T sk_dst_check
ffffffff8128ae90 T __sk_dst_check
ffffffff8128af00 T sk_prot_clear_portaddr_nulls
ffffffff8128af50 T sk_release_kernel
ffffffff8128afa0 T cred_to_ucred
ffffffff8128aff0 T sk_receive_skb
ffffffff8128b120 T sock_queue_rcv_skb
ffffffff8128b2b0 T sock_update_netprioidx
ffffffff8128b300 T sk_setup_caps
ffffffff8128b3c0 T __sk_mem_reclaim
ffffffff8128b420 t proto_seq_show
ffffffff8128b880 T lock_sock_fast
ffffffff8128b8e0 t sock_def_error_report
ffffffff8128b940 t sock_def_write_space
ffffffff8128b9c0 t sock_def_readable
ffffffff8128ba20 T sk_stop_timer
ffffffff8128ba40 T sk_reset_timer
ffffffff8128ba60 T sk_send_sigurg
ffffffff8128bab0 t sk_prot_alloc.isra.43
ffffffff8128bc20 T sk_alloc
ffffffff8128bcd0 T sk_clone_lock
ffffffff8128bf80 t sock_warn_obsolete_bsdism
ffffffff8128c000 t sock_set_timeout
ffffffff8128c100 T sock_getsockopt
ffffffff8128c790 T sock_rmalloc
ffffffff8128c820 T sock_enable_timestamp
ffffffff8128c850 T sock_get_timestampns
ffffffff8128c900 T sock_get_timestamp
ffffffff8128c9b0 T sock_setsockopt
ffffffff8128d1f0 T reqsk_queue_alloc
ffffffff8128d2d0 T __reqsk_queue_destroy
ffffffff8128d300 T reqsk_queue_destroy
ffffffff8128d3f0 t sock_pipe_buf_steal
ffffffff8128d400 T skb_add_rx_frag
ffffffff8128d450 T skb_prepare_seq_read
ffffffff8128d480 T skb_abort_seq_read
ffffffff8128d4a0 t skb_ts_finish
ffffffff8128d4c0 T skb_find_text
ffffffff8128d570 t sock_rmem_free
ffffffff8128d590 T skb_seq_read
ffffffff8128d770 t skb_ts_get_next_block
ffffffff8128d780 t __skb_to_sgvec
ffffffff8128d9b0 T skb_to_sgvec
ffffffff8128d9e0 t __copy_skb_header
ffffffff8128db40 t copy_skb_header
ffffffff8128dbc0 t __skb_clone
ffffffff8128dca0 T skb_store_bits
ffffffff8128df40 T skb_copy_bits
ffffffff8128e200 t skb_release_head_state
ffffffff8128e2a0 T skb_recycle
ffffffff8128e390 T skb_recycle_check
ffffffff8128e410 t sock_pipe_buf_get
ffffffff8128e440 T skb_pull_rcsum
ffffffff8128e4b0 T skb_checksum
ffffffff8128e770 T skb_append_datato_frags
ffffffff8128e910 t __skb_splice_bits
ffffffff8128ee50 t sock_pipe_buf_release
ffffffff8128ee60 t sock_spd_release
ffffffff8128ee70 T skb_insert
ffffffff8128eee0 T skb_append
ffffffff8128ef50 T skb_unlink
ffffffff8128efc0 T skb_queue_tail
ffffffff8128f020 T sock_queue_err_skb
ffffffff8128f0d0 T skb_queue_head
ffffffff8128f130 T skb_dequeue_tail
ffffffff8128f1b0 T skb_dequeue
ffffffff8128f230 T skb_copy_and_csum_bits
ffffffff8128f510 T skb_copy_and_csum_dev
ffffffff8128f5f0 T build_skb
ffffffff8128f770 T __alloc_skb
ffffffff8128f9a0 T dev_alloc_skb
ffffffff8128f9e0 T skb_gro_receive
ffffffff8128fe00 T __netdev_alloc_skb
ffffffff8128fe40 T skb_pull
ffffffff8128fe70 T skb_trim
ffffffff8128feb0 T __skb_warn_lro_forwarding
ffffffff8128fee0 T skb_partial_csum_set
ffffffff8128ff80 T skb_push
ffffffff81290000 T skb_put
ffffffff81290090 T skb_split
ffffffff81290340 T skb_copy_expand
ffffffff81290470 T skb_copy
ffffffff81290520 t skb_release_data
ffffffff81290610 T skb_morph
ffffffff81290640 T __kfree_skb
ffffffff812906d0 T consume_skb
ffffffff81290700 T kfree_skb
ffffffff81290730 T skb_complete_wifi_ack
ffffffff812907b0 T skb_queue_purge
ffffffff812907d0 t skb_drop_list
ffffffff81290800 T skb_copy_ubufs
ffffffff81290a40 T pskb_expand_head
ffffffff81290d30 t skb_prepare_for_shift
ffffffff81290d70 T __pskb_copy
ffffffff81290f50 T skb_clone
ffffffff81291000 T skb_tstamp_tx
ffffffff812910e0 T skb_segment
ffffffff81291700 T __pskb_pull_tail
ffffffff81291a50 T skb_pad
ffffffff81291b50 T skb_cow_data
ffffffff81291e60 T ___pskb_trim
ffffffff812920e0 T skb_realloc_headroom
ffffffff81292160 T skb_splice_bits
ffffffff81292310 T skb_shift
ffffffff812926f0 T memcpy_fromiovec
ffffffff81292760 T memcpy_fromiovecend
ffffffff81292800 T csum_partial_copy_fromiovecend
ffffffff81292a20 T memcpy_toiovec
ffffffff81292aa0 T memcpy_toiovecend
ffffffff81292b30 T verify_iovec
ffffffff81292bf0 T datagram_poll
ffffffff81292cd0 t skb_copy_and_csum_datagram
ffffffff81293010 T __skb_checksum_complete_head
ffffffff81293080 T __skb_checksum_complete
ffffffff81293090 T skb_copy_datagram_from_iovec
ffffffff812932d0 T skb_copy_datagram_const_iovec
ffffffff81293510 T skb_copy_datagram_iovec
ffffffff81293720 T skb_copy_and_csum_datagram_iovec
ffffffff81293850 T skb_kill_datagram
ffffffff81293940 T skb_free_datagram_locked
ffffffff81293a20 T skb_free_datagram
ffffffff81293a60 T __skb_recv_datagram
ffffffff81293d80 T skb_recv_datagram
ffffffff81293dc0 t receiver_wake_function
ffffffff81293de0 T sk_stream_kill_queues
ffffffff81293f00 T sk_stream_wait_memory
ffffffff81294140 T sk_stream_wait_connect
ffffffff812942e0 T sk_stream_write_space
ffffffff812943a0 T sk_stream_error
ffffffff81294400 T sk_stream_wait_close
ffffffff812944e0 T scm_fp_dup
ffffffff81294540 T put_cmsg
ffffffff81294650 T __scm_destroy
ffffffff81294740 T __scm_send
ffffffff81294b30 T scm_detach_fds
ffffffff81294cb0 T gnet_stats_copy_queue
ffffffff81294d10 T gnet_stats_copy_app
ffffffff81294d70 T gnet_stats_finish_copy
ffffffff81294df0 T gnet_stats_copy_basic
ffffffff81294e60 T gnet_stats_start_copy_compat
ffffffff81294fc0 T gnet_stats_start_copy
ffffffff81294fd0 T gnet_stats_copy_rate_est
ffffffff81295060 T gen_estimator_active
ffffffff81295100 T gen_kill_estimator
ffffffff812951c0 T gen_new_estimator
ffffffff812953a0 T gen_replace_estimator
ffffffff812953f0 t est_timer
ffffffff81295550 t netns_get
ffffffff81295570 t net_alloc_generic
ffffffff812955a0 T get_net_ns_by_pid
ffffffff812955e0 T __put_net
ffffffff81295640 t netns_put
ffffffff81295660 t netns_install
ffffffff812956a0 t ops_exit_list.isra.2
ffffffff81295710 t ops_free_list
ffffffff81295790 t unregister_pernet_operations
ffffffff81295860 T unregister_pernet_device
ffffffff812958a0 T unregister_pernet_subsys
ffffffff812958d0 t ops_init
ffffffff81295a10 t setup_net
ffffffff81295b00 t register_pernet_operations
ffffffff81295c90 T register_pernet_device
ffffffff81295cf0 T register_pernet_subsys
ffffffff81295d30 T net_drop_ns
ffffffff81295d70 t cleanup_net
ffffffff81295f10 T copy_net_ns
ffffffff81296020 T get_net_ns_by_fd
ffffffff81296070 T secure_ipv4_port_ephemeral
ffffffff812960b0 T secure_ip_id
ffffffff812960f0 T secure_ipv6_id
ffffffff81296120 T secure_tcp_sequence_number
ffffffff81296170 T skb_flow_dissect
ffffffff812964e0 t sysctl_core_net_init
ffffffff81296580 t rps_sock_flow_sysctl
ffffffff81296740 t sysctl_core_net_exit
ffffffff81296770 T __dev_get_by_index
ffffffff812967c0 T dev_get_by_index_rcu
ffffffff81296820 T dev_get_by_index
ffffffff81296880 T dev_getfirstbyhwtype
ffffffff81296900 T dev_get_by_flags_rcu
ffffffff81296980 T net_disable_timestamp
ffffffff81296990 T rps_may_expire_flow
ffffffff81296a10 T napi_gro_flush
ffffffff81296a60 T netif_napi_add
ffffffff81296ac0 T register_gifconf
ffffffff81296ae0 T dev_seq_next
ffffffff81296b60 T dev_seq_stop
ffffffff81296b70 t softnet_seq_start
ffffffff81296bd0 t softnet_seq_next
ffffffff81296c40 t softnet_seq_stop
ffffffff81296c50 t ptype_get_idx
ffffffff81296d00 t ptype_seq_stop
ffffffff81296d10 T dev_get_flags
ffffffff81296d90 T dev_set_group
ffffffff81296da0 t dev_new_index
ffffffff81296e00 T netdev_increment_features
ffffffff81296e40 T __skb_tx_hash
ffffffff81296f20 T init_dummy_netdev
ffffffff81296fe0 t netdev_exit
ffffffff81297000 t netdev_create_hash
ffffffff81297040 t netdev_init
ffffffff812970a0 t dev_proc_net_exit
ffffffff812970d0 t dev_proc_net_init
ffffffff81297170 t ptype_seq_open
ffffffff81297190 t dev_seq_open
ffffffff812971b0 t softnet_seq_show
ffffffff81297210 t softnet_seq_open
ffffffff81297220 T netdev_refcnt_read
ffffffff81297270 T net_enable_timestamp
ffffffff812972b0 t rps_trigger_softirq
ffffffff812972f0 T __napi_schedule
ffffffff81297340 t net_tx_action
ffffffff812974c0 T netif_napi_del
ffffffff81297530 t dev_gso_skb_destructor
ffffffff81297570 t __netif_receive_skb
ffffffff812978a0 T dev_add_pack
ffffffff81297920 T __dev_remove_pack
ffffffff812979d0 t flush_backlog
ffffffff81297ad0 t enqueue_to_backlog
ffffffff81297c70 T netdev_set_master
ffffffff81297d20 T netdev_rx_handler_unregister
ffffffff81297d70 T netdev_rx_handler_register
ffffffff81297df0 T __dev_getfirstbyhwtype
ffffffff81297e80 t unlist_netdevice
ffffffff81297f70 t list_netdevice
ffffffff812980a0 T synchronize_net
ffffffff812980d0 T dev_remove_pack
ffffffff812980f0 T free_netdev
ffffffff812981d0 T alloc_netdev_mqs
ffffffff812984c0 T netdev_stats_to_stats64
ffffffff81298560 T dev_get_stats
ffffffff81298690 t dev_seq_printf_stats
ffffffff81298780 T netif_stacked_transfer_operstate
ffffffff81298810 t __dev_set_promiscuity
ffffffff812989a0 T dev_getbyhwaddr_rcu
ffffffff81298a40 T __skb_get_rxhash
ffffffff81298b00 t get_rps_cpu
ffffffff81298e30 T netif_receive_skb
ffffffff81298eb0 t napi_gro_complete
ffffffff81298f80 T dev_gro_receive
ffffffff812991f0 T skb_gso_segment
ffffffff81299470 T skb_checksum_help
ffffffff812995d0 T netif_set_real_num_rx_queues
ffffffff81299660 T netif_set_real_num_tx_queues
ffffffff81299820 T call_netdevice_notifiers
ffffffff81299880 t __dev_close_many
ffffffff81299950 t __dev_close
ffffffff812999a0 t dev_close_many
ffffffff81299ab0 t rollback_registered_many
ffffffff81299d00 T unregister_netdevice_many
ffffffff81299d60 t rollback_registered
ffffffff81299db0 T dev_set_mac_address
ffffffff81299e20 T dev_set_mtu
ffffffff81299ea0 T netdev_bonding_change
ffffffff81299eb0 T netdev_features_change
ffffffff81299ec0 T unregister_netdevice_notifier
ffffffff81299fb0 T register_netdevice_notifier
ffffffff8129a160 T dev_get_by_name_rcu
ffffffff8129a1e0 T dev_get_by_name
ffffffff8129a200 T __dev_get_by_name
ffffffff8129a280 T netdev_boot_setup_check
ffffffff8129a310 T __netif_schedule
ffffffff8129a380 T netif_device_detach
ffffffff8129a410 t harmonize_features.isra.73
ffffffff8129a450 T netif_skb_features
ffffffff8129a4d0 T skb_gro_reset_offset
ffffffff8129a550 T napi_frags_skb
ffffffff8129a670 T napi_get_frags
ffffffff8129a6a0 T dev_seq_start
ffffffff8129a730 t ptype_seq_start
ffffffff8129a750 T __napi_complete
ffffffff8129a7a0 T napi_complete
ffffffff8129a810 T dev_kfree_skb_irq
ffffffff8129a870 T dev_kfree_skb_any
ffffffff8129a8a0 t ptype_seq_next
ffffffff8129a930 t ptype_seq_show
ffffffff8129a9f0 t net_rps_action_and_irq_enable.isra.86
ffffffff8129aa60 t net_rx_action
ffffffff8129abb0 t process_backlog
ffffffff8129ad30 T __netdev_printk
ffffffff8129adb0 T netdev_info
ffffffff8129ae10 T netdev_notice
ffffffff8129ae70 T netdev_warn
ffffffff8129aed0 T netdev_err
ffffffff8129af30 T netdev_crit
ffffffff8129af90 T netdev_alert
ffffffff8129aff0 T netdev_emerg
ffffffff8129b050 T netdev_printk
ffffffff8129b0a0 T netdev_set_bond_master
ffffffff8129b140 t dev_seq_show
ffffffff8129b170 T napi_frags_finish
ffffffff8129b250 T napi_gro_frags
ffffffff8129b380 T napi_skb_finish
ffffffff8129b3c0 T napi_gro_receive
ffffffff8129b4e0 T netif_rx
ffffffff8129b570 t dev_cpu_callback
ffffffff8129b700 T netif_rx_ni
ffffffff8129b730 T dev_forward_skb
ffffffff8129b890 T netdev_rx_csum_fault
ffffffff8129b8d0 T netif_device_attach
ffffffff8129b970 T unregister_netdevice_queue
ffffffff8129ba50 T unregister_netdev
ffffffff8129ba80 t default_device_exit_batch
ffffffff8129bb60 T dev_close
ffffffff8129bbb0 T netdev_state_change
ffffffff8129bbe0 T dev_load
ffffffff8129bc60 T dev_alloc_name
ffffffff8129be00 T dev_valid_name
ffffffff8129beb0 t dev_get_valid_name
ffffffff8129bf70 T dev_change_net_namespace
ffffffff8129c130 t default_device_exit
ffffffff8129c200 T netdev_boot_base
ffffffff8129c290 T dev_change_name
ffffffff8129c480 T dev_set_alias
ffffffff8129c560 T dev_hard_start_xmit
ffffffff8129cad0 T dev_queue_xmit
ffffffff8129d0e0 T __dev_set_rx_mode
ffffffff8129d190 T dev_set_rx_mode
ffffffff8129d1d0 t __dev_open
ffffffff8129d2b0 T dev_open
ffffffff8129d310 T dev_set_allmulti
ffffffff8129d3f0 T dev_set_promiscuity
ffffffff8129d440 T __dev_change_flags
ffffffff8129d5c0 T __dev_notify_flags
ffffffff8129d650 T dev_change_flags
ffffffff8129d6c0 t dev_ifsioc
ffffffff8129dab0 T dev_ioctl
ffffffff8129e040 T __netdev_update_features
ffffffff8129e220 T register_netdevice
ffffffff8129e450 T register_netdev
ffffffff8129e480 T netdev_change_features
ffffffff8129e4a0 T netdev_update_features
ffffffff8129e4c0 T dev_disable_lro
ffffffff8129e520 T netdev_run_todo
ffffffff8129e790 T dev_ingress_queue_create
ffffffff8129e7a0 T netdev_drivername
ffffffff8129e7e0 T ethtool_op_get_link
ffffffff8129e7f0 t __ethtool_get_flags
ffffffff8129e840 t ethtool_set_coalesce
ffffffff8129e8a0 t ethtool_set_channels
ffffffff8129e900 t ethtool_set_value
ffffffff8129e950 t ethtool_flash_device
ffffffff8129e9b0 t ethtool_set_rxnfc
ffffffff8129eaa0 t ethtool_get_coalesce
ffffffff8129eb30 t ethtool_get_channels
ffffffff8129ebb0 t ethtool_get_value
ffffffff8129ec00 t ethtool_get_drvinfo
ffffffff8129ed90 t ethtool_get_rxfh_indir
ffffffff8129eec0 t ethtool_set_rxfh_indir
ffffffff8129f0d0 t ethtool_get_rxnfc
ffffffff8129f250 t ethtool_get_sset_info
ffffffff8129f3b0 t __ethtool_set_flags
ffffffff8129f430 T __ethtool_get_settings
ffffffff8129f5d0 t ethtool_get_feature_mask
ffffffff8129f620 T dev_ethtool
ffffffff812a0e80 T __hw_addr_init
ffffffff812a0e90 T dev_uc_init
ffffffff812a0eb0 T dev_mc_init
ffffffff812a0ed0 t dev_mc_net_exit
ffffffff812a0ee0 t dev_mc_net_init
ffffffff812a0f10 t dev_mc_seq_open
ffffffff812a0f30 t dev_mc_seq_show
ffffffff812a1000 T __hw_addr_flush
ffffffff812a1060 T dev_addr_flush
ffffffff812a1080 T dev_uc_flush
ffffffff812a10d0 T dev_mc_flush
ffffffff812a1120 t __hw_addr_del_ex
ffffffff812a11f0 t __dev_mc_del
ffffffff812a1270 T dev_mc_del_global
ffffffff812a1280 T dev_mc_del
ffffffff812a1290 T dev_uc_del
ffffffff812a1310 T __hw_addr_unsync
ffffffff812a13a0 T __hw_addr_del_multiple
ffffffff812a1400 T dev_addr_del_multiple
ffffffff812a14a0 T dev_addr_del
ffffffff812a1550 T dev_mc_unsync
ffffffff812a1610 T dev_uc_unsync
ffffffff812a16d0 t __hw_addr_add_ex
ffffffff812a17d0 t __dev_mc_add
ffffffff812a1850 T dev_mc_add_global
ffffffff812a1860 T dev_mc_add
ffffffff812a1870 T dev_uc_add
ffffffff812a18f0 T __hw_addr_sync
ffffffff812a19b0 T dev_mc_sync
ffffffff812a1a50 T dev_uc_sync
ffffffff812a1af0 T __hw_addr_add_multiple
ffffffff812a1ba0 T dev_addr_add_multiple
ffffffff812a1c60 T dev_addr_init
ffffffff812a1ce0 T dev_addr_add
ffffffff812a1d70 T __dst_destroy_metrics_generic
ffffffff812a1da0 T dst_cow_metrics_generic
ffffffff812a1e40 T dst_alloc
ffffffff812a1fb0 T dst_destroy
ffffffff812a20a0 t dst_gc_task
ffffffff812a22a0 T __dst_free
ffffffff812a2360 T dst_discard
ffffffff812a2370 t dst_ifdown
ffffffff812a2440 t dst_dev_event
ffffffff812a2570 T skb_dst_set_noref
ffffffff812a2590 T dst_release
ffffffff812a2610 T call_netevent_notifiers
ffffffff812a2630 T unregister_netevent_notifier
ffffffff812a2640 T register_netevent_notifier
ffffffff812a2650 t neigh_get_first
ffffffff812a2730 t neigh_get_next
ffffffff812a2800 t pneigh_get_first
ffffffff812a2860 t pneigh_get_next
ffffffff812a28e0 t neigh_stat_seq_start
ffffffff812a2960 t neigh_stat_seq_next
ffffffff812a29c0 t neigh_stat_seq_stop
ffffffff812a29d0 t neightbl_set
ffffffff812a2da0 T neigh_seq_stop
ffffffff812a2db0 T neigh_for_each
ffffffff812a2e60 t neightbl_fill_parms
ffffffff812a31a0 t neigh_fill_info
ffffffff812a33e0 T neigh_sysctl_unregister
ffffffff812a3420 t neigh_rcu_free_parms
ffffffff812a3440 T neigh_sysctl_register
ffffffff812a3730 t proc_unres_qlen
ffffffff812a37e0 t neigh_blackhole
ffffffff812a3800 t pneigh_queue_purge
ffffffff812a3830 t neigh_hash_free_rcu
ffffffff812a3890 t neigh_stat_seq_open
ffffffff812a38e0 t neigh_hash_alloc
ffffffff812a3990 t neigh_proxy_process
ffffffff812a3ac0 T pneigh_enqueue
ffffffff812a3c10 T neigh_direct_output
ffffffff812a3c20 T neigh_connected_output
ffffffff812a3d30 T neigh_compat_output
ffffffff812a3dd0 t __pneigh_lookup_1
ffffffff812a3e30 T pneigh_lookup
ffffffff812a4010 T __pneigh_lookup
ffffffff812a4060 T neigh_lookup_nodev
ffffffff812a4150 T neigh_lookup
ffffffff812a4240 t neigh_invalidate
ffffffff812a42e0 t neigh_probe
ffffffff812a4350 T neigh_seq_start
ffffffff812a4470 T neigh_seq_next
ffffffff812a44f0 T neigh_parms_release
ffffffff812a45a0 t neigh_stat_seq_show
ffffffff812a4640 t neigh_del_timer
ffffffff812a46a0 T neigh_destroy
ffffffff812a4780 t neigh_add_timer
ffffffff812a47c0 T __neigh_event_send
ffffffff812a49f0 T neigh_resolve_output
ffffffff812a4bf0 T neigh_rand_reach_time
ffffffff812a4c20 T neigh_table_init_no_netlink
ffffffff812a4dd0 T neigh_parms_alloc
ffffffff812a4ee0 T neigh_table_init
ffffffff812a4f50 t neigh_dump_info
ffffffff812a53d0 t neightbl_fill_info.constprop.39
ffffffff812a5720 t neightbl_dump_info
ffffffff812a5980 t __neigh_notify.constprop.41
ffffffff812a5a80 t neigh_update_notify
ffffffff812a5aa0 T neigh_update
ffffffff812a5f70 t neigh_timer_handler
ffffffff812a61f0 t neigh_cleanup_and_release
ffffffff812a6230 T __neigh_for_each_release
ffffffff812a62e0 T neigh_create
ffffffff812a68c0 t neigh_add
ffffffff812a6be0 T neigh_event_ns
ffffffff812a6cb0 t neigh_periodic_work
ffffffff812a6e70 t neigh_flush_dev.isra.37
ffffffff812a6f70 T neigh_ifdown
ffffffff812a7050 T neigh_table_clear
ffffffff812a7170 T neigh_changeaddr
ffffffff812a71c0 T pneigh_delete
ffffffff812a72d0 t neigh_delete
ffffffff812a7490 T rtnl_is_locked
ffffffff812a74a0 T __rtnl_af_register
ffffffff812a74c0 T __rtnl_af_unregister
ffffffff812a74f0 t validate_linkmsg
ffffffff812a7620 t rtnetlink_net_exit
ffffffff812a7640 t rtnetlink_net_init
ffffffff812a7680 t rtnl_dump_all
ffffffff812a7860 T __rtnl_link_unregister
ffffffff812a7930 t rtnl_dellink
ffffffff812a7a50 t set_operstate
ffffffff812a7af0 t rtnl_link_ops_get
ffffffff812a7b50 T __rtnl_link_register
ffffffff812a7ba0 T __rtnl_register
ffffffff812a7c80 T rtnl_register
ffffffff812a7cc0 t if_nlmsg_size
ffffffff812a7e80 t rtnl_calcit
ffffffff812a7f70 T rtnetlink_put_metrics
ffffffff812a8070 t rtnl_fill_ifinfo
ffffffff812a8dc0 t rtnl_dump_ifinfo
ffffffff812a8f60 T rtnl_create_link
ffffffff812a90f0 T rtnl_put_cacheinfo
ffffffff812a91e0 T rtnl_set_sk_err
ffffffff812a9200 T rtnl_notify
ffffffff812a9230 T rtnl_unicast
ffffffff812a9260 t rtnl_getlink
ffffffff812a9450 T __rta_fill
ffffffff812a94e0 T rtnl_trylock
ffffffff812a94f0 T rtnl_unlock
ffffffff812a9500 T rtnl_lock
ffffffff812a9510 t rtnetlink_rcv
ffffffff812a9540 T rtnl_af_unregister
ffffffff812a9580 T rtnl_af_register
ffffffff812a95b0 T rtnl_link_unregister
ffffffff812a95e0 T rtnl_link_register
ffffffff812a9610 T rtnl_unregister_all
ffffffff812a9640 T rtnl_unregister
ffffffff812a96a0 T rtnl_link_get_net
ffffffff812a96e0 t do_setlink
ffffffff812aa080 t rtnl_setlink
ffffffff812aa180 T __rtnl_unlock
ffffffff812aa190 t rtnetlink_rcv_msg
ffffffff812aa4d0 T rtnetlink_send
ffffffff812aa560 T rtmsg_ifinfo
ffffffff812aa680 t rtnetlink_event
ffffffff812aa6d0 T rtnl_configure_link
ffffffff812aa750 t rtnl_newlink
ffffffff812aacd0 T in4_pton
ffffffff812aae40 T inet_proto_csum_replace4
ffffffff812aaf10 T in6_pton
ffffffff812ab2e0 T in_aton
ffffffff812ab350 T net_ratelimit
ffffffff812ab370 T mac_pton
ffffffff812ab440 t linkwatch_do_dev
ffffffff812ab510 t linkwatch_schedule_work
ffffffff812ab5b0 T linkwatch_fire_event
ffffffff812ab710 t __linkwatch_run_queue
ffffffff812ab8f0 t linkwatch_event
ffffffff812ab920 T linkwatch_forget_dev
ffffffff812ab990 T linkwatch_run_queue
ffffffff812ab9a0 T sk_detach_filter
ffffffff812aba00 T sk_filter_release_rcu
ffffffff812aba10 T sk_chk_filter
ffffffff812abd10 T sk_attach_filter
ffffffff812abe80 T bpf_internal_load_pointer_neg_helper
ffffffff812abef0 T sk_run_filter
ffffffff812ac3e0 T sk_filter
ffffffff812ac450 T sock_diag_check_cookie
ffffffff812ac490 T sock_diag_save_cookie
ffffffff812ac4a0 t sock_diag_rcv
ffffffff812ac4d0 T sock_diag_register
ffffffff812ac540 T sock_diag_unregister_inet_compat
ffffffff812ac570 T sock_diag_register_inet_compat
ffffffff812ac5a0 T sock_diag_put_meminfo
ffffffff812ac630 t sock_diag_rcv_msg
ffffffff812ac760 T sock_diag_unregister
ffffffff812ac7d0 t flow_cache_gc_task
ffffffff812ac890 t flow_cache_new_hashrnd
ffffffff812ac900 t flow_cache_flush_per_cpu
ffffffff812ac940 t flow_cache_queue_garbage.isra.5.part.6
ffffffff812ac9a0 t flow_cache_flush_tasklet
ffffffff812acaf0 t __flow_cache_shrink.isra.7
ffffffff812acc30 T flow_cache_lookup
ffffffff812ad050 T flow_cache_flush
ffffffff812ad0f0 t flow_cache_flush_task
ffffffff812ad100 T flow_cache_flush_deferred
ffffffff812ad110 t change_tx_queue_len
ffffffff812ad120 t rx_queue_attr_show
ffffffff812ad140 t rx_queue_attr_store
ffffffff812ad160 t netdev_queue_attr_show
ffffffff812ad180 t netdev_queue_attr_store
ffffffff812ad1a0 t net_grab_current_ns
ffffffff812ad1c0 t net_initial_ns
ffffffff812ad1d0 t net_netlink_ns
ffffffff812ad1e0 t net_namespace
ffffffff812ad1f0 t change_group
ffffffff812ad200 t format_group
ffffffff812ad230 t format_tx_queue_len
ffffffff812ad260 t format_flags
ffffffff812ad290 t format_mtu
ffffffff812ad2c0 t show_operstate
ffffffff812ad330 t show_dormant
ffffffff812ad370 t show_carrier
ffffffff812ad3c0 t format_link_mode
ffffffff812ad3f0 t format_type
ffffffff812ad420 t format_ifindex
ffffffff812ad450 t format_iflink
ffffffff812ad480 t format_dev_id
ffffffff812ad4b0 t format_addr_len
ffffffff812ad4e0 t format_addr_assign_type
ffffffff812ad510 t bql_show_inflight
ffffffff812ad540 t bql_show_limit_min
ffffffff812ad570 t bql_show_limit_max
ffffffff812ad5a0 t bql_show_limit
ffffffff812ad5d0 t show_rps_dev_flow_table_cnt
ffffffff812ad600 t change_flags
ffffffff812ad610 t change_mtu
ffffffff812ad620 t show_broadcast
ffffffff812ad650 t show_address
ffffffff812ad6b0 T netdev_class_remove_file
ffffffff812ad6c0 T netdev_class_create_file
ffffffff812ad6d0 t bql_set_hold_time
ffffffff812ad740 t bql_show_hold_time
ffffffff812ad770 t bql_set
ffffffff812ad800 t bql_set_limit_min
ffffffff812ad820 t bql_set_limit_max
ffffffff812ad840 t bql_set_limit
ffffffff812ad860 t store_xps_map
ffffffff812adcd0 t netdev_queue_release
ffffffff812aded0 t show_xps_map
ffffffff812adff0 t show_rps_map
ffffffff812ae070 t show_trans_timeout
ffffffff812ae0e0 t rx_queue_release
ffffffff812ae1c0 t store_rps_dev_flow_table_cnt
ffffffff812ae2e0 t rps_dev_flow_table_release
ffffffff812ae310 t rps_dev_flow_table_release_work
ffffffff812ae320 t store_rps_map
ffffffff812ae460 t netdev_store.isra.11
ffffffff812ae540 t store_group
ffffffff812ae560 t store_tx_queue_len
ffffffff812ae580 t store_flags
ffffffff812ae5a0 t store_mtu
ffffffff812ae5c0 t netdev_show.isra.13
ffffffff812ae620 t show_group
ffffffff812ae630 t show_tx_queue_len
ffffffff812ae640 t show_flags
ffffffff812ae650 t show_mtu
ffffffff812ae660 t show_link_mode
ffffffff812ae670 t show_type
ffffffff812ae680 t show_ifindex
ffffffff812ae690 t show_iflink
ffffffff812ae6a0 t show_dev_id
ffffffff812ae6b0 t show_addr_len
ffffffff812ae6c0 t show_addr_assign_type
ffffffff812ae6d0 t show_ifalias
ffffffff812ae750 t show_duplex
ffffffff812ae810 t show_speed
ffffffff812ae8d0 t store_ifalias
ffffffff812ae990 t netdev_release
ffffffff812ae9c0 t netdev_uevent
ffffffff812aea30 t netstat_show.isra.20
ffffffff812aeaf0 t show_tx_compressed
ffffffff812aeb00 t show_rx_compressed
ffffffff812aeb10 t show_tx_window_errors
ffffffff812aeb20 t show_tx_heartbeat_errors
ffffffff812aeb30 t show_tx_fifo_errors
ffffffff812aeb40 t show_tx_carrier_errors
ffffffff812aeb50 t show_tx_aborted_errors
ffffffff812aeb60 t show_rx_missed_errors
ffffffff812aeb70 t show_rx_fifo_errors
ffffffff812aeb80 t show_rx_frame_errors
ffffffff812aeb90 t show_rx_crc_errors
ffffffff812aeba0 t show_rx_over_errors
ffffffff812aebb0 t show_rx_length_errors
ffffffff812aebc0 t show_collisions
ffffffff812aebd0 t show_multicast
ffffffff812aebe0 t show_tx_dropped
ffffffff812aebf0 t show_rx_dropped
ffffffff812aec00 t show_tx_errors
ffffffff812aec10 t show_rx_errors
ffffffff812aec20 t show_tx_bytes
ffffffff812aec30 t show_rx_bytes
ffffffff812aec40 t show_tx_packets
ffffffff812aec50 t show_rx_packets
ffffffff812aec60 T net_rx_queue_update_kobjects
ffffffff812aed40 T netdev_queue_update_kobjects
ffffffff812aee60 T netdev_unregister_kobject
ffffffff812aeee0 T netdev_register_kobject
ffffffff812af030 T netdev_kobject_init
ffffffff812af060 T compat_mc_setsockopt
ffffffff812af3d0 T compat_sock_get_timestampns
ffffffff812af490 T compat_sock_get_timestamp
ffffffff812af550 T compat_mc_getsockopt
ffffffff812af890 T get_compat_msghdr
ffffffff812af930 T verify_compat_iovec
ffffffff812afa40 T cmsghdr_from_user_compat_to_kern
ffffffff812afce0 T put_cmsg_compat
ffffffff812afe80 T scm_detach_fds_compat
ffffffff812affb0 T compat_sys_setsockopt
ffffffff812b01c0 T compat_sys_getsockopt
ffffffff812b0370 T compat_sys_sendmsg
ffffffff812b0380 T compat_sys_sendmmsg
ffffffff812b03a0 T compat_sys_recvmsg
ffffffff812b03b0 T compat_sys_recv
ffffffff812b03c0 T compat_sys_recvfrom
ffffffff812b03d0 T compat_sys_recvmmsg
ffffffff812b04a0 T compat_sys_socketcall
ffffffff812b06c0 T eth_change_mtu
ffffffff812b06e0 T eth_validate_addr
ffffffff812b0710 T sysfs_format_mac
ffffffff812b07d0 T alloc_etherdev_mqs
ffffffff812b07f0 T ether_setup
ffffffff812b0860 T eth_mac_addr
ffffffff812b08b0 T eth_header_cache_update
ffffffff812b08c0 T eth_header_cache
ffffffff812b0910 T eth_header_parse
ffffffff812b0930 T eth_type_trans
ffffffff812b0a00 T eth_rebuild_header
ffffffff812b0a80 T eth_header
ffffffff812b0b60 T dev_trans_start
ffffffff812b0bc0 t noop_dequeue
ffffffff812b0bd0 t pfifo_fast_peek
ffffffff812b0c20 t pfifo_fast_init
ffffffff812b0c70 t transition_one_qdisc
ffffffff812b0cb0 t pfifo_fast_dump
ffffffff812b0d10 t pfifo_fast_dequeue
ffffffff812b0de0 t pfifo_fast_reset
ffffffff812b0e80 t noop_enqueue
ffffffff812b0ea0 T qdisc_reset
ffffffff812b0ee0 t dev_watchdog
ffffffff812b1150 T dev_graft_qdisc
ffffffff812b11e0 T qdisc_destroy
ffffffff812b1290 t qdisc_rcu_free
ffffffff812b12b0 T netif_notify_peers
ffffffff812b12d0 t pfifo_fast_enqueue
ffffffff812b1370 T netif_carrier_off
ffffffff812b13a0 t dev_deactivate_queue.constprop.31
ffffffff812b1420 T sch_direct_xmit
ffffffff812b1600 T __qdisc_run
ffffffff812b1730 T __netdev_watchdog_up
ffffffff812b17a0 T netif_carrier_on
ffffffff812b17e0 T qdisc_alloc
ffffffff812b18d0 T qdisc_create_dflt
ffffffff812b1940 T dev_activate
ffffffff812b1af0 T dev_deactivate_many
ffffffff812b1d60 T dev_deactivate
ffffffff812b1db0 T dev_init_scheduler
ffffffff812b1e40 T dev_shutdown
ffffffff812b1f00 t mq_select_queue
ffffffff812b1f50 t mq_leaf
ffffffff812b1f90 t mq_get
ffffffff812b1fe0 t mq_put
ffffffff812b1ff0 t mq_dump_class
ffffffff812b2040 t mq_walk
ffffffff812b20a0 t mq_dump_class_stats
ffffffff812b2120 t mq_graft
ffffffff812b21c0 t mq_dump
ffffffff812b22e0 t mq_attach
ffffffff812b2350 t mq_destroy
ffffffff812b23c0 t mq_init
ffffffff812b24b0 t netlink_update_listeners
ffffffff812b2530 t netlink_update_subscriptions
ffffffff812b25b0 t netlink_getname
ffffffff812b2610 t netlink_overrun
ffffffff812b2650 t netlink_getsockopt
ffffffff812b2750 T netlink_set_nonroot
ffffffff812b2770 t netlink_seq_socket_idx
ffffffff812b2800 t netlink_seq_next
ffffffff812b28c0 t netlink_seq_stop
ffffffff812b28d0 t netlink_net_exit
ffffffff812b28e0 t netlink_net_init
ffffffff812b2910 t netlink_seq_open
ffffffff812b2930 t netlink_lookup
ffffffff812b2a00 t __netlink_create
ffffffff812b2ae0 t netlink_create
ffffffff812b2ce0 t netlink_update_socket_mc
ffffffff812b2d30 T netlink_set_err
ffffffff812b2e20 T __nlmsg_put
ffffffff812b2ec0 t netlink_dump
ffffffff812b30d0 t netlink_recvmsg
ffffffff812b3460 t netlink_data_ready
ffffffff812b3470 t netlink_sock_destruct
ffffffff812b3540 T netlink_dump_start
ffffffff812b36d0 T netlink_unregister_notifier
ffffffff812b36e0 T netlink_register_notifier
ffffffff812b36f0 T netlink_kernel_release
ffffffff812b3700 t netlink_trim
ffffffff812b37c0 T netlink_broadcast_filtered
ffffffff812b3b60 T netlink_broadcast
ffffffff812b3b80 t netlink_seq_show
ffffffff812b3c40 t netlink_seq_start
ffffffff812b3ca0 T netlink_has_listeners
ffffffff812b3ce0 T netlink_table_grab
ffffffff812b3dc0 T netlink_table_ungrab
ffffffff812b3df0 t netlink_insert
ffffffff812b4010 t netlink_autobind.isra.23
ffffffff812b4140 t netlink_connect
ffffffff812b4250 t netlink_realloc_groups
ffffffff812b4340 t netlink_setsockopt
ffffffff812b4500 t netlink_bind
ffffffff812b4660 t netlink_release
ffffffff812b48d0 T netlink_kernel_create
ffffffff812b4ae0 T netlink_getsockbyfilp
ffffffff812b4b30 T netlink_attachskb
ffffffff812b4d50 T netlink_sendskb
ffffffff812b4da0 T netlink_unicast
ffffffff812b4fa0 T nlmsg_notify
ffffffff812b5060 t netlink_sendmsg
ffffffff812b5380 T netlink_ack
ffffffff812b54d0 T netlink_rcv_skb
ffffffff812b5580 T netlink_detachskb
ffffffff812b55b0 T __netlink_change_ngroups
ffffffff812b5680 T netlink_change_ngroups
ffffffff812b56b0 T __netlink_clear_multicast_users
ffffffff812b56f0 T netlink_clear_multicast_users
ffffffff812b5720 t genl_family_find_byid
ffffffff812b5770 t genl_pernet_exit
ffffffff812b5790 t genl_pernet_init
ffffffff812b57f0 t genl_family_find_byname
ffffffff812b5870 T genl_notify
ffffffff812b58a0 t genlmsg_mcast
ffffffff812b59a0 T genlmsg_multicast_allns
ffffffff812b59b0 T genlmsg_put
ffffffff812b5a20 t ctrl_fill_info
ffffffff812b5df0 t ctrl_dumpfamily
ffffffff812b5f30 t ctrl_build_family_msg
ffffffff812b5fe0 T genl_unlock
ffffffff812b5ff0 T genl_lock
ffffffff812b6000 t genl_rcv
ffffffff812b6030 t genl_rcv_msg
ffffffff812b62a0 t ctrl_getfamily
ffffffff812b63a0 t genl_ctrl_event
ffffffff812b66a0 t __genl_unregister_mc_group
ffffffff812b6750 T genl_unregister_mc_group
ffffffff812b6780 T genl_unregister_family
ffffffff812b68a0 T genl_register_family
ffffffff812b6a40 T genl_unregister_ops
ffffffff812b6ae0 T genl_register_ops
ffffffff812b6bb0 T genl_register_family_with_ops
ffffffff812b6c10 T genl_register_mc_group
ffffffff812b6dd0 t dst_rcu_free
ffffffff812b6e00 t ipv4_dst_ifdown
ffffffff812b6e10 t rt_cpu_seq_next
ffffffff812b6e70 t rt_cpu_seq_stop
ffffffff812b6e80 t ipv4_blackhole_dst_check
ffffffff812b6e90 t ipv4_blackhole_mtu
ffffffff812b6eb0 t ipv4_rt_blackhole_update_pmtu
ffffffff812b6ec0 t ipv4_rt_blackhole_cow_metrics
ffffffff812b6ed0 t ipv4_mtu
ffffffff812b6f30 t rt_cache_get_first
ffffffff812b6fe0 t rt_cache_get_next
ffffffff812b7090 t rt_cache_seq_next
ffffffff812b70c0 t rt_cache_seq_stop
ffffffff812b70d0 t ipv4_neigh_lookup
ffffffff812b71e0 t check_peer_pmtu
ffffffff812b72a0 t ipv4_link_failure
ffffffff812b7330 t ipv4_dst_destroy
ffffffff812b73b0 t rt_genid_init
ffffffff812b73e0 t sysctl_route_net_init
ffffffff812b7470 t ip_rt_do_proc_exit
ffffffff812b74a0 t rt_cpu_seq_open
ffffffff812b74b0 t rt_cpu_seq_show
ffffffff812b7690 t rt_cache_seq_show
ffffffff812b7850 t rt_cache_seq_open
ffffffff812b7870 t ip_rt_bug
ffffffff812b78d0 t rt_do_flush
ffffffff812b7a10 t rt_dst_alloc
ffffffff812b7a50 t rt_cache_invalidate
ffffffff812b7a90 t rt_cache_seq_start
ffffffff812b7b00 t rt_cpu_seq_start
ffffffff812b7b80 t rt_may_expire.part.32
ffffffff812b7bf0 t rt_garbage_collect
ffffffff812b7fd0 t rt_intern_hash
ffffffff812b8620 t rt_worker_func
ffffffff812b8920 t ipv4_negative_advice
ffffffff812b8b20 t ipv4_default_advmss
ffffffff812b8b60 t rt_set_nexthop.isra.40
ffffffff812b8d40 t check_peer_redir.isra.41
ffffffff812b8e30 t sysctl_route_net_exit
ffffffff812b8e60 t ip_rt_do_proc_init
ffffffff812b8ed0 t rt_fill_info.isra.44.constprop.48
ffffffff812b92d0 T rt_cache_flush
ffffffff812b9330 t ipv4_sysctl_rtcache_flush
ffffffff812b93b0 T rt_cache_flush_batch
ffffffff812b93d0 T rt_bind_peer
ffffffff812b9430 t ip_rt_update_pmtu
ffffffff812b9500 t ipv4_cow_metrics
ffffffff812b9610 t ipv4_validate_peer
ffffffff812b96b0 t ipv4_dst_check
ffffffff812b96e0 T __ip_route_output_key
ffffffff812b9fa0 T ip_route_output_flow
ffffffff812ba010 T ip_route_input_common
ffffffff812bab00 t inet_rtm_getroute
ffffffff812bade0 t ip_error
ffffffff812baef0 T __ip_select_ident
ffffffff812bb060 T ip_rt_redirect
ffffffff812bb3b0 T ip_rt_send_redirect
ffffffff812bb510 T ip_rt_frag_needed
ffffffff812bb680 T ip_rt_get_source
ffffffff812bb7e0 T ipv4_blackhole_route
ffffffff812bba20 T ip_rt_dump
ffffffff812bbbc0 T ip_rt_multicast_event
ffffffff812bbbe0 T inet_putpeer
ffffffff812bbc00 T inet_peer_xrlim_allow
ffffffff812bbc50 T inetpeer_invalidate_tree
ffffffff812bbce0 t inetpeer_inval_rcu
ffffffff812bbd30 t inetpeer_free_rcu
ffffffff812bbd50 t inetpeer_gc_worker
ffffffff812bbf40 t peer_avl_rebalance.isra.2
ffffffff812bc070 T inet_getpeer
ffffffff812bc580 T inet_add_protocol
ffffffff812bc5a0 T inet_del_protocol
ffffffff812bc5d0 T ip_call_ra_chain
ffffffff812bc6f0 T ip_local_deliver
ffffffff812bc910 T ip_rcv
ffffffff812bce40 t ipqhashfn
ffffffff812bcea0 t ip4_hashfn
ffffffff812bcec0 t ip4_frag_match
ffffffff812bcf10 t ip4_frag_free
ffffffff812bcf30 t ipv4_frags_exit_net
ffffffff812bcf80 t ipv4_frags_init_net
ffffffff812bd070 t ip_expire
ffffffff812bd220 t ip4_frag_init
ffffffff812bd2c0 T ip_defrag
ffffffff812bde60 T ip_check_defrag
ffffffff812be050 T ip_frag_nqueues
ffffffff812be060 T ip_frag_mem
ffffffff812be070 T ip_forward
ffffffff812be440 T ip_options_rcv_srr
ffffffff812be660 t ip_options_get_alloc
ffffffff812be680 T ip_options_compile
ffffffff812becc0 t ip_options_get_finish
ffffffff812bed30 T ip_options_build
ffffffff812bef20 T ip_options_echo
ffffffff812bf2f0 T ip_options_fragment
ffffffff812bf3a0 T ip_options_undo
ffffffff812bf4a0 T ip_options_get_from_user
ffffffff812bf560 T ip_options_get
ffffffff812bf610 T ip_forward_options
ffffffff812bf810 T ip_send_check
ffffffff812bf850 t ip_finish_output2
ffffffff812bfaf0 t ip_copy_metadata
ffffffff812bfb90 t ip_reply_glue_bits
ffffffff812bfbe0 t ip_setup_cork
ffffffff812bfd00 T ip_generic_getfrag
ffffffff812bfda0 T ip_fragment
ffffffff812c05a0 t ip_finish_output
ffffffff812c08d0 t ip_dev_loopback_xmit
ffffffff812c0970 t __ip_append_data.isra.33
ffffffff812c12e0 t __ip_flush_pending_frames.isra.34
ffffffff812c1350 T __ip_local_out
ffffffff812c13b0 T ip_local_out
ffffffff812c13e0 T ip_queue_xmit
ffffffff812c1790 T ip_build_and_send_pkt
ffffffff812c19a0 T ip_mc_output
ffffffff812c1ac0 T ip_output
ffffffff812c1b10 T ip_append_data
ffffffff812c1c20 T ip_append_page
ffffffff812c2180 T __ip_make_skb
ffffffff812c2510 T ip_send_skb
ffffffff812c2550 T ip_push_pending_frames
ffffffff812c2580 T ip_flush_pending_frames
ffffffff812c25a0 T ip_make_skb
ffffffff812c26f0 T ip_send_reply
ffffffff812c2990 t do_ip_getsockopt
ffffffff812c3040 T ip_getsockopt
ffffffff812c3050 t ip_ra_destroy_rcu
ffffffff812c3080 T ip_cmsg_recv
ffffffff812c32b0 T compat_ip_getsockopt
ffffffff812c32d0 T ip_cmsg_send
ffffffff812c33c0 T ip_ra_control
ffffffff812c3520 t do_ip_setsockopt.isra.18
ffffffff812c42a0 T compat_ip_setsockopt
ffffffff812c42e0 T ip_setsockopt
ffffffff812c4300 T ip_icmp_error
ffffffff812c4430 T ip_local_error
ffffffff812c45a0 T ip_recv_error
ffffffff812c4850 T ipv4_pktinfo_prepare
ffffffff812c48b0 T inet_hashinfo_init
ffffffff812c48f0 T __inet_hash_nolisten
ffffffff812c4a60 t __inet_check_established
ffffffff812c4d50 T inet_unhash
ffffffff812c4e10 T __inet_lookup_established
ffffffff812c50a0 T __inet_lookup_listener
ffffffff812c5250 T inet_hash
ffffffff812c5360 T inet_bind_bucket_create
ffffffff812c53e0 T inet_bind_bucket_destroy
ffffffff812c5410 T inet_put_port
ffffffff812c54d0 T inet_bind_hash
ffffffff812c5520 T __inet_inherit_port
ffffffff812c5610 T __inet_hash_connect
ffffffff812c5910 T inet_hash_connect
ffffffff812c5960 T inet_twsk_schedule
ffffffff812c5b50 T inet_twsk_alloc
ffffffff812c5c40 T __inet_twsk_hashdance
ffffffff812c5da0 t inet_twsk_free
ffffffff812c5e00 T inet_twsk_put
ffffffff812c5e20 T inet_twsk_unhash
ffffffff812c5e50 T inet_twsk_bind_unhash
ffffffff812c5ea0 t __inet_twsk_kill
ffffffff812c5f60 T inet_twdr_twcal_tick
ffffffff812c6110 T inet_twsk_deschedule
ffffffff812c61b0 T inet_twsk_purge
ffffffff812c62b0 t inet_twdr_do_twkill_work
ffffffff812c6370 T inet_twdr_twkill_work
ffffffff812c6440 T inet_twdr_hangman
ffffffff812c6500 T inet_csk_bind_conflict
ffffffff812c6580 T inet_csk_addr2sockaddr
ffffffff812c65a0 T inet_get_local_port_range
ffffffff812c65d0 T inet_csk_search_req
ffffffff812c66b0 T inet_csk_destroy_sock
ffffffff812c67e0 T inet_csk_clone_lock
ffffffff812c6880 T inet_csk_route_child_sock
ffffffff812c6a00 T inet_csk_route_req
ffffffff812c6b40 T inet_csk_reset_keepalive_timer
ffffffff812c6b60 T inet_csk_reqsk_queue_hash_add
ffffffff812c6c90 T inet_csk_reqsk_queue_prune
ffffffff812c6f90 T inet_csk_delete_keepalive_timer
ffffffff812c6fa0 T inet_csk_listen_stop
ffffffff812c7120 T inet_csk_clear_xmit_timers
ffffffff812c7170 T inet_csk_init_xmit_timers
ffffffff812c7210 T inet_csk_accept
ffffffff812c7470 T inet_csk_get_port
ffffffff812c7850 T inet_csk_compat_getsockopt
ffffffff812c7870 T inet_csk_compat_setsockopt
ffffffff812c7890 T inet_csk_listen_start
ffffffff812c7990 t tcp_cookie_values_release
ffffffff812c79a0 T tcp_poll
ffffffff812c7b00 T tcp_cookie_generator
ffffffff812c7c50 t tcp_prequeue_process
ffffffff812c7ce0 T tcp_gro_receive
ffffffff812c7f50 T tcp_get_info
ffffffff812c81e0 T tcp_ioctl
ffffffff812c8390 t tcp_send_mss
ffffffff812c8480 T tcp_set_state
ffffffff812c8550 T tcp_done
ffffffff812c85c0 T tcp_disconnect
ffffffff812c8950 t tcp_splice_data_recv
ffffffff812c8990 T tcp_enter_memory_pressure
ffffffff812c89c0 T tcp_gro_complete
ffffffff812c8a10 T tcp_tso_segment
ffffffff812c8d20 t do_tcp_getsockopt.isra.20
ffffffff812c9180 T compat_tcp_getsockopt
ffffffff812c91b0 T tcp_getsockopt
ffffffff812c91e0 T tcp_shutdown
ffffffff812c9250 T sk_stream_alloc_skb
ffffffff812c9350 T tcp_sendmsg
ffffffff812ca0f0 T tcp_sendpage
ffffffff812ca7f0 T tcp_cleanup_rbuf
ffffffff812ca8f0 t do_tcp_setsockopt.isra.24
ffffffff812cb020 T compat_tcp_setsockopt
ffffffff812cb050 T tcp_setsockopt
ffffffff812cb080 T tcp_recvmsg
ffffffff812cb9d0 T tcp_read_sock
ffffffff812cbbc0 T tcp_splice_read
ffffffff812cbe00 T tcp_check_oom
ffffffff812cbf00 T tcp_close
ffffffff812cc2e0 T tcp_init_mem
ffffffff812cc320 t tcp_init_buffer_space
ffffffff812cc510 T tcp_initialize_rcv_mss
ffffffff812cc560 t tcp_sacktag_one
ffffffff812cc6e0 t tcp_undo_cwr
ffffffff812cc790 t tcp_try_undo_recovery
ffffffff812cc8a0 t tcp_check_space
ffffffff812cc980 t tcp_reset
ffffffff812cc9f0 t tcp_enter_frto_loss
ffffffff812ccc00 t tcp_shifted_skb
ffffffff812ccee0 t tcp_match_skb_to_sack
ffffffff812ccfb0 t tcp_sacktag_walk
ffffffff812cd480 t tcp_mark_head_lost
ffffffff812cd6f0 T tcp_valid_rtt_meas
ffffffff812cd8c0 t tcp_init_metrics
ffffffff812cdb40 t tcp_collapse
ffffffff812cdf60 t tcp_event_data_recv
ffffffff812ce3f0 t tcp_prune_ofo_queue
ffffffff812ce4c0 t tcp_fin
ffffffff812ce670 t tcp_prune_queue
ffffffff812ce8b0 t __tcp_ack_snd_check
ffffffff812ce950 T tcp_parse_options
ffffffff812cec60 t tcp_skb_mark_lost_uncond_verify
ffffffff812cecd0 t tcp_sacktag_write_queue
ffffffff812cf880 T tcp_simple_retransmit
ffffffff812cfa80 t tcp_check_reno_reordering
ffffffff812cfb40 t tcp_dsack_set
ffffffff812cfb90 t tcp_send_dupack
ffffffff812cfc50 t tcp_dsack_extend
ffffffff812cfcb0 t tcp_parse_aligned_timestamp.part.32
ffffffff812cfce0 t tcp_validate_incoming
ffffffff812d0020 t tcp_add_reno_sack
ffffffff812d0060 t tcp_urg
ffffffff812d0240 T tcp_rcv_space_adjust
ffffffff812d0360 t tcp_data_queue
ffffffff812d1170 T tcp_update_metrics
ffffffff812d1630 T tcp_init_cwnd
ffffffff812d1660 T tcp_enter_cwr
ffffffff812d1740 T tcp_use_frto
ffffffff812d1810 T tcp_enter_frto
ffffffff812d1a70 T tcp_clear_retrans
ffffffff812d1ab0 T tcp_enter_loss
ffffffff812d1d40 t tcp_fastretrans_alert
ffffffff812d2d20 t tcp_ack
ffffffff812d3e10 T tcp_rcv_state_process
ffffffff812d4980 T tcp_rcv_established
ffffffff812d5010 T tcp_cwnd_application_limited
ffffffff812d50c0 T tcp_select_initial_window
ffffffff812d51e0 t tcp_init_nondata_skb
ffffffff812d5240 T tcp_mtup_init
ffffffff812d52a0 T tcp_sync_mss
ffffffff812d5380 t tcp_options_write
ffffffff812d55d0 t tcp_adjust_pcount
ffffffff812d56c0 t __pskb_trim_head
ffffffff812d57f0 t tcp_event_new_data_sent
ffffffff812d58a0 t tcp_set_skb_tso_segs
ffffffff812d5960 t tcp_init_tso_segs
ffffffff812d59b0 T tcp_make_synack
ffffffff812d5ec0 T tcp_fragment
ffffffff812d6180 T tcp_trim_head
ffffffff812d6260 T tcp_mtu_to_mss
ffffffff812d62a0 T tcp_mss_to_mtu
ffffffff812d62c0 T tcp_current_mss
ffffffff812d6350 T tcp_may_send_now
ffffffff812d64d0 T __tcp_select_window
ffffffff812d65f0 t tcp_transmit_skb
ffffffff812d6e30 t tcp_xmit_probe_skb
ffffffff812d6ee0 T tcp_connect
ffffffff812d7340 t tcp_write_xmit
ffffffff812d7d50 T tcp_push_one
ffffffff812d7d80 T __tcp_push_pending_frames
ffffffff812d7e00 T tcp_retransmit_skb
ffffffff812d83a0 T tcp_xmit_retransmit_queue
ffffffff812d8680 T tcp_send_fin
ffffffff812d8800 T tcp_send_active_reset
ffffffff812d88e0 T tcp_send_synack
ffffffff812d8ad0 T tcp_send_ack
ffffffff812d8bd0 T tcp_send_delayed_ack
ffffffff812d8cb0 T tcp_write_wakeup
ffffffff812d8e10 T tcp_send_probe0
ffffffff812d8ee0 T tcp_syn_ack_timeout
ffffffff812d8f00 t tcp_write_err
ffffffff812d8f40 t retransmits_timed_out
ffffffff812d9000 T tcp_init_xmit_timers
ffffffff812d9020 t tcp_keepalive_timer
ffffffff812d9280 t tcp_delack_timer
ffffffff812d94b0 t tcp_out_of_resources
ffffffff812d9570 T tcp_retransmit_timer
ffffffff812d9af0 t tcp_write_timer
ffffffff812d9d20 T tcp_set_keepalive
ffffffff812d9d70 t tcp_cookie_values_release
ffffffff812d9d80 t tcp_v4_init_sock
ffffffff812d9f10 t tcp_v4_reqsk_destructor
ffffffff812d9f20 t tcp_v4_send_reset
ffffffff812da0f0 t __tcp_v4_send_check
ffffffff812da1e0 t tcp_v4_send_synack
ffffffff812da2c0 t tcp_v4_rtx_synack
ffffffff812da2e0 T tcp_v4_send_check
ffffffff812da300 t tcp_sk_exit_batch
ffffffff812da320 t tcp_sk_exit
ffffffff812da330 t tcp_sk_init
ffffffff812da360 t tcp4_seq_show
ffffffff812da8f0 T tcp_proc_unregister
ffffffff812da900 t tcp4_proc_exit_net
ffffffff812da910 T tcp_proc_register
ffffffff812da960 t tcp4_proc_init_net
ffffffff812da970 t tcp_seq_stop
ffffffff812daa10 t established_get_first
ffffffff812dab20 t established_get_next
ffffffff812dac90 t listening_get_next
ffffffff812daef0 t tcp_get_idx
ffffffff812dafa0 t tcp_seq_next
ffffffff812db040 t tcp_seq_start
ffffffff812db1f0 T tcp_seq_open
ffffffff812db250 T tcp_v4_tw_get_peer
ffffffff812db280 T tcp_v4_do_rcv
ffffffff812db560 T tcp_v4_syn_recv_sock
ffffffff812db7e0 T tcp_syn_flood_action
ffffffff812db870 T tcp_v4_connect
ffffffff812dbd60 t tcp_v4_send_ack.isra.30
ffffffff812dbef0 t tcp_v4_reqsk_send_ack
ffffffff812dbf40 T tcp_v4_get_peer
ffffffff812dbfe0 T tcp_twsk_unique
ffffffff812dc080 T tcp_v4_conn_request
ffffffff812dc7a0 T tcp_v4_destroy_sock
ffffffff812dc980 T tcp_v4_err
ffffffff812dcf40 T tcp_v4_gso_send_check
ffffffff812dcfc0 T tcp_v4_rcv
ffffffff812dd810 T tcp4_proc_exit
ffffffff812dd820 T tcp4_gro_receive
ffffffff812dd8e0 T tcp4_gro_complete
ffffffff812dd960 T tcp_twsk_destructor
ffffffff812dd970 T tcp_child_process
ffffffff812dda80 T tcp_check_req
ffffffff812ddf20 T tcp_create_openreq_child
ffffffff812de3a0 T tcp_timewait_state_process
ffffffff812de7b0 T tcp_time_wait
ffffffff812de9e0 T tcp_slow_start
ffffffff812dea80 T tcp_reno_ssthresh
ffffffff812deaa0 T tcp_reno_min_cwnd
ffffffff812deab0 t tcp_ca_find
ffffffff812deb20 T tcp_unregister_congestion_control
ffffffff812deb60 T tcp_register_congestion_control
ffffffff812dec20 T tcp_is_cwnd_limited
ffffffff812dec80 T tcp_cong_avoid_ai
ffffffff812decc0 T tcp_reno_cong_avoid
ffffffff812ded40 T tcp_init_congestion_control
ffffffff812dede0 T tcp_cleanup_congestion_control
ffffffff812dee10 T tcp_set_default_congestion_control
ffffffff812def00 T tcp_get_available_congestion_control
ffffffff812defa0 T tcp_get_default_congestion_control
ffffffff812defc0 T tcp_get_allowed_congestion_control
ffffffff812df060 T tcp_set_allowed_congestion_control
ffffffff812df180 T tcp_set_congestion_control
ffffffff812df270 T ip4_datagram_connect
ffffffff812df590 t __raw_v4_lookup
ffffffff812df610 t compat_raw_ioctl
ffffffff812df620 t raw_get_next
ffffffff812df690 T raw_seq_next
ffffffff812df710 T raw_seq_stop
ffffffff812df720 t raw_rcv_skb
ffffffff812df770 t raw_bind
ffffffff812df820 t raw_recvmsg
ffffffff812df9d0 t raw_destroy
ffffffff812df9f0 t raw_close
ffffffff812dfa10 t raw_exit_net
ffffffff812dfa20 t raw_init_net
ffffffff812dfa50 t raw_seq_show
ffffffff812dfbc0 T raw_seq_open
ffffffff812dfc10 t raw_v4_seq_open
ffffffff812dfc30 T raw_unhash_sk
ffffffff812dfcb0 T raw_hash_sk
ffffffff812dfd50 t do_raw_setsockopt.isra.15
ffffffff812dfda0 t compat_raw_setsockopt
ffffffff812dfdc0 t raw_setsockopt
ffffffff812dfdf0 t do_raw_getsockopt.isra.16
ffffffff812dfe70 t compat_raw_getsockopt
ffffffff812dfea0 t raw_getsockopt
ffffffff812dfed0 t raw_ioctl
ffffffff812dff80 t raw_init
ffffffff812dffa0 t raw_sendmsg
ffffffff812e0870 T raw_seq_start
ffffffff812e0920 T raw_icmp_error
ffffffff812e0b00 T raw_rcv
ffffffff812e0bb0 T raw_local_deliver
ffffffff812e0d90 t udp_lib_hash
ffffffff812e0da0 t udp_lib_close
ffffffff812e0db0 t udplite_getfrag
ffffffff812e0dc0 t udp4_portaddr_hash
ffffffff812e0e20 t __udp_queue_rcv_skb
ffffffff812e0f00 T udp_proc_unregister
ffffffff812e0f10 t udp4_proc_exit_net
ffffffff812e0f20 T udp_proc_register
ffffffff812e0f70 t udp4_proc_init_net
ffffffff812e0f80 t udp_seq_stop
ffffffff812e0fb0 t udp_get_first
ffffffff812e1060 t udp_get_next
ffffffff812e10e0 t udp_get_idx
ffffffff812e1130 t udp_seq_next
ffffffff812e1160 T udp_seq_open
ffffffff812e11c0 t first_packet_length
ffffffff812e1390 T udp_poll
ffffffff812e1400 T udp_lib_getsockopt
ffffffff812e14c0 T compat_udp_getsockopt
ffffffff812e14e0 T udp_getsockopt
ffffffff812e1500 t udp_send_skb
ffffffff812e1850 t udp_push_pending_frames
ffffffff812e18b0 T udp_lib_setsockopt
ffffffff812e1a40 t udp_lib_lport_inuse2
ffffffff812e1b10 T udp_lib_rehash
ffffffff812e1c60 t udp_v4_rehash
ffffffff812e1c90 T udp_lib_unhash
ffffffff812e1e30 T udp_disconnect
ffffffff812e1f10 T udp_recvmsg
ffffffff812e2270 t udp_lib_lport_inuse.isra.24
ffffffff812e2340 T udp_lib_get_port
ffffffff812e2680 T udp_v4_get_port
ffffffff812e26f0 t ipv4_rcv_saddr_equal
ffffffff812e2710 t udp_seq_start
ffffffff812e2740 t udp4_lib_lookup2.isra.28
ffffffff812e2940 T __udp4_lib_lookup
ffffffff812e2c10 T udp4_lib_lookup
ffffffff812e2c30 T udp4_seq_show
ffffffff812e2df0 T udp_ioctl
ffffffff812e2e50 T compat_udp_setsockopt
ffffffff812e2e80 T udp_setsockopt
ffffffff812e2eb0 T udp_flush_pending_frames
ffffffff812e2ee0 T udp_destroy_sock
ffffffff812e2f40 T udp_sendmsg
ffffffff812e3830 T udp_sendpage
ffffffff812e39b0 T __udp4_lib_err
ffffffff812e3ba0 T udp_err
ffffffff812e3bb0 T udp_queue_rcv_skb
ffffffff812e3ec0 t flush_stack
ffffffff812e3fe0 t __udp4_lib_mcast_deliver
ffffffff812e42b0 T __udp4_lib_rcv
ffffffff812e48f0 T udp_rcv
ffffffff812e4910 T udp4_proc_exit
ffffffff812e4920 T udp4_ufo_send_check
ffffffff812e49c0 T udp4_ufo_fragment
ffffffff812e4aa0 t udp_lib_hash
ffffffff812e4ab0 t udp_lib_close
ffffffff812e4ac0 t udplite_sk_init
ffffffff812e4ad0 t udplite_err
ffffffff812e4ae0 t udplite_rcv
ffffffff812e4b00 t udplite4_proc_exit_net
ffffffff812e4b10 t udplite4_proc_init_net
ffffffff812e4b20 t arp_hash
ffffffff812e4b30 T arp_invalidate
ffffffff812e4bb0 t arp_error_report
ffffffff812e4be0 t arp_net_exit
ffffffff812e4bf0 t arp_net_init
ffffffff812e4c20 t arp_seq_open
ffffffff812e4c40 t arp_seq_start
ffffffff812e4c60 t arp_req_delete
ffffffff812e4d80 t arp_req_set
ffffffff812e4ff0 T arp_xmit
ffffffff812e5000 T arp_create
ffffffff812e5200 t arp_netdev_event
ffffffff812e5240 t arp_seq_show
ffffffff812e5450 T arp_send
ffffffff812e5490 t arp_process
ffffffff812e5ae0 t parp_redo
ffffffff812e5af0 t arp_rcv
ffffffff812e5bf0 t arp_solicit
ffffffff812e5e40 T arp_mc_map
ffffffff812e5fc0 t arp_constructor
ffffffff812e6180 T arp_find
ffffffff812e63a0 T arp_ioctl
ffffffff812e6680 T arp_ifdown
ffffffff812e6690 t icmp_address
ffffffff812e66a0 t icmp_discard
ffffffff812e66b0 t icmp_sk_exit
ffffffff812e6720 t icmp_sk_init
ffffffff812e6890 t icmp_address_reply
ffffffff812e6a00 t icmp_push_reply
ffffffff812e6b30 t icmp_glue_bits
ffffffff812e6b90 t icmp_reply
ffffffff812e6db0 t icmp_unreach
ffffffff812e70a0 t icmp_timestamp.part.14
ffffffff812e7180 t icmp_timestamp
ffffffff812e71b0 t icmp_echo.part.15
ffffffff812e7200 t icmp_echo
ffffffff812e7230 t icmp_redirect
ffffffff812e7350 T icmp_send
ffffffff812e79e0 T icmp_out_count
ffffffff812e7a10 T icmp_rcv
ffffffff812e7d10 T inet_select_addr
ffffffff812e7e10 t inet_get_link_af_size
ffffffff812e7e30 t inet_validate_link_af
ffffffff812e7f20 t inet_set_link_af
ffffffff812e7ff0 t inet_fill_link_af
ffffffff812e8040 t inet_alloc_ifa
ffffffff812e8060 t inet_hash_remove
ffffffff812e80a0 t inet_fill_ifaddr
ffffffff812e8230 t rtmsg_ifa
ffffffff812e8360 t __inet_insert_ifa
ffffffff812e8590 t inet_insert_ifa
ffffffff812e85a0 t inet_dump_ifaddr
ffffffff812e8720 t __inet_del_ifa
ffffffff812e89b0 t inet_del_ifa
ffffffff812e89c0 t __devinet_sysctl_register
ffffffff812e8ac0 t devinet_sysctl_register
ffffffff812e8b00 t inetdev_init
ffffffff812e8ce0 t ipv4_doint_and_flush
ffffffff812e8d40 t devinet_conf_proc
ffffffff812e8e50 t devinet_sysctl_forward
ffffffff812e8fc0 t inet_rtm_newaddr
ffffffff812e91d0 t inet_gifconf
ffffffff812e92b0 T unregister_inetaddr_notifier
ffffffff812e92c0 T register_inetaddr_notifier
ffffffff812e92d0 T inetdev_by_index
ffffffff812e9300 t inet_rtm_deladdr
ffffffff812e9450 T __ip_dev_find
ffffffff812e95a0 t confirm_addr_indev.isra.14
ffffffff812e9660 T inet_confirm_addr
ffffffff812e9720 T in_dev_finish_destroy
ffffffff812e97b0 t in_dev_rcu_put
ffffffff812e97d0 t inet_rcu_free_ifa
ffffffff812e9800 t __devinet_sysctl_unregister.isra.19
ffffffff812e9840 t devinet_exit_net
ffffffff812e98b0 t inetdev_event
ffffffff812e9d60 t devinet_init_net
ffffffff812e9f00 T inet_addr_onlink
ffffffff812e9f60 T inet_ifa_byprefix
ffffffff812e9fd0 T devinet_ioctl
ffffffff812ea6c0 T inet_recvmsg
ffffffff812ea740 t inet_compat_ioctl
ffffffff812ea760 t inet_gro_complete
ffffffff812ea840 t inet_gro_receive
ffffffff812eaa10 t inet_gso_send_check
ffffffff812eab10 t inet_gso_segment
ffffffff812ead80 T snmp_fold_field
ffffffff812eadf0 T inet_ctl_sock_create
ffffffff812eae80 T inet_register_protosw
ffffffff812eaf30 T inet_ioctl
ffffffff812eafa0 T inet_shutdown
ffffffff812eb0d0 t inet_autobind
ffffffff812eb130 T inet_sendmsg
ffffffff812eb1e0 T inet_dgram_connect
ffffffff812eb260 T inet_sendpage
ffffffff812eb330 T inet_getname
ffffffff812eb3c0 T inet_accept
ffffffff812eb4d0 T inet_stream_connect
ffffffff812eb7b0 T inet_bind
ffffffff812eb9f0 T inet_release
ffffffff812eba70 T build_ehash_secret
ffffffff812ebaa0 t inet_create
ffffffff812ebdf0 T inet_listen
ffffffff812ebe90 T inet_sock_destruct
ffffffff812ec050 T snmp_mib_free
ffffffff812ec070 t ipv4_mib_exit_net
ffffffff812ec0d0 T snmp_mib_init
ffffffff812ec100 t ipv4_mib_init_net
ffffffff812ec2b0 T inet_sk_rebuild_header
ffffffff812ec600 T inet_unregister_protosw
ffffffff812ec660 T ip_mc_rejoin_groups
ffffffff812ec670 t igmp_mc_seq_stop
ffffffff812ec690 t igmp_net_exit
ffffffff812ec6b0 t igmp_net_init
ffffffff812ec720 t igmp_mcf_seq_open
ffffffff812ec740 t igmp_mc_seq_open
ffffffff812ec760 t igmp_mcf_seq_show
ffffffff812ec7f0 t igmp_mcf_seq_stop
ffffffff812ec830 t igmp_mc_seq_show
ffffffff812ec910 t ip_mc_clear_src
ffffffff812ec990 t ip_mc_find_dev
ffffffff812eca40 t igmp_group_added
ffffffff812eca90 t igmp_group_dropped
ffffffff812ecae0 t ip_ma_put
ffffffff812ecb30 T ip_mc_dec_group
ffffffff812ecbe0 t igmp_mc_get_next.isra.6
ffffffff812ecc50 t igmp_mc_seq_next
ffffffff812ecd10 t igmp_mc_seq_start
ffffffff812ecdf0 t igmp_mcf_get_next.isra.8
ffffffff812eceb0 t igmp_mcf_seq_next
ffffffff812ecfc0 t igmp_mcf_seq_start
ffffffff812ed110 t ip_mc_del1_src.isra.11
ffffffff812ed1d0 t ip_mc_del_src
ffffffff812ed300 t ip_mc_add_src
ffffffff812ed510 T ip_mc_inc_group
ffffffff812ed610 T ip_mc_join_group
ffffffff812ed700 t ip_mc_leave_src
ffffffff812ed7b0 T ip_mc_unmap
ffffffff812ed800 T ip_mc_remap
ffffffff812ed850 T ip_mc_down
ffffffff812ed8c0 T ip_mc_init_dev
ffffffff812ed900 T ip_mc_up
ffffffff812ed960 T ip_mc_destroy_dev
ffffffff812ed9e0 T ip_mc_leave_group
ffffffff812edae0 T ip_mc_source
ffffffff812edea0 T ip_mc_msfilter
ffffffff812ee110 T ip_mc_msfget
ffffffff812ee270 T ip_mc_gsfget
ffffffff812ee3f0 T ip_mc_sf_allow
ffffffff812ee4c0 T ip_mc_drop_socket
ffffffff812ee570 T ip_check_mc_rcu
ffffffff812ee600 t fib_flush
ffffffff812ee670 t fib_disable_ip
ffffffff812ee6c0 T inet_addr_type
ffffffff812ee750 T inet_dev_addr_type
ffffffff812ee7f0 t nl_fib_input
ffffffff812ee950 t inet_dump_fib
ffffffff812eea60 t rtm_to_fib_config
ffffffff812eecb0 t inet_rtm_delroute
ffffffff812eed10 t inet_rtm_newroute
ffffffff812eed70 t ip_fib_net_exit.isra.13
ffffffff812eee30 t fib_net_exit
ffffffff812eee60 t fib_net_init
ffffffff812eefa0 t fib_magic.isra.16
ffffffff812ef050 T fib_validate_source
ffffffff812ef360 T ip_rt_ioctl
ffffffff812ef7e0 T fib_add_ifaddr
ffffffff812ef9d0 t fib_netdev_event
ffffffff812efab0 T fib_del_ifaddr
ffffffff812eff20 t fib_inetaddr_event
ffffffff812effe0 t free_fib_info_rcu
ffffffff812f0020 t fib_info_hash_free
ffffffff812f0060 t fib_info_hash_alloc
ffffffff812f00a0 T free_fib_info
ffffffff812f00d0 T fib_release_info
ffffffff812f01d0 T ip_fib_check_default
ffffffff812f0260 T fib_find_alias
ffffffff812f02b0 T fib_detect_death
ffffffff812f0390 T fib_nh_match
ffffffff812f03e0 T fib_info_update_nh_saddr
ffffffff812f0430 T fib_create_info
ffffffff812f0d40 T fib_dump_info
ffffffff812f0fb0 T rtmsg_fib
ffffffff812f1130 T fib_sync_down_addr
ffffffff812f11b0 T fib_sync_down_dev
ffffffff812f1280 T fib_select_default
ffffffff812f1410 t fib_trie_seq_stop
ffffffff812f1420 t fib_route_seq_stop
ffffffff812f1430 t fib_route_seq_open
ffffffff812f1450 t fib_trie_seq_open
ffffffff812f1470 t fib_route_seq_show
ffffffff812f16d0 t fib_trie_get_next
ffffffff812f1790 t fib_trie_seq_start
ffffffff812f1850 t fib_trie_seq_next
ffffffff812f1920 t tnode_put_child_reorg
ffffffff812f1a00 t fib_find_node
ffffffff812f1b00 t fib_triestat_seq_open
ffffffff812f1b10 t seq_indent
ffffffff812f1b40 t leaf_info_new
ffffffff812f1b90 t __alias_free_mem
ffffffff812f1ba0 t __leaf_free_rcu
ffffffff812f1bb0 t tnode_new
ffffffff812f1c30 t __tnode_vfree
ffffffff812f1c40 t tnode_free_flush
ffffffff812f1cd0 t tnode_clean_free
ffffffff812f1d70 t insert_leaf_info
ffffffff812f1de0 t check_leaf.isra.13
ffffffff812f1f40 T fib_table_lookup
ffffffff812f2230 t leaf_walk_rcu
ffffffff812f22d0 t trie_firstleaf
ffffffff812f2300 t trie_nextleaf
ffffffff812f2320 t fib_route_seq_next
ffffffff812f2370 t fib_route_seq_start
ffffffff812f2430 t tnode_free_safe
ffffffff812f2470 t resize
ffffffff812f2ce0 t trie_rebalance
ffffffff812f2e00 t trie_leaf_remove
ffffffff812f2e90 t fib_table_print.isra.16
ffffffff812f2ed0 t fib_triestat_seq_show
ffffffff812f3270 t fib_trie_seq_show
ffffffff812f34c0 t __tnode_free_rcu
ffffffff812f3510 T fib_table_insert
ffffffff812f3de0 T fib_table_delete
ffffffff812f4080 T fib_table_flush
ffffffff812f4220 T fib_free_table
ffffffff812f4230 T fib_table_dump
ffffffff812f44c0 T fib_trie_table
ffffffff812f4500 T fib_proc_init
ffffffff812f45a0 T fib_proc_exit
ffffffff812f45d0 T inet_frags_init_net
ffffffff812f45f0 T inet_frags_fini
ffffffff812f4600 T inet_frag_destroy
ffffffff812f4710 T inet_frag_find
ffffffff812f4910 T inet_frags_init
ffffffff812f49b0 t inet_frag_secret_rebuild
ffffffff812f4aa0 T inet_frag_kill
ffffffff812f4b70 T inet_frag_evictor
ffffffff812f4c60 T inet_frags_exit_net
ffffffff812f4c90 t ping_get_first
ffffffff812f4cf0 t ping_get_next
ffffffff812f4d30 t ping_get_idx
ffffffff812f4d70 t ping_seq_next
ffffffff812f4da0 t ping_v4_get_port
ffffffff812f4f10 t ping_v4_unhash
ffffffff812f4fa0 t ping_v4_hash
ffffffff812f4fb0 t ping_queue_rcv_skb
ffffffff812f4fd0 t ping_bind
ffffffff812f5150 t ping_recvmsg
ffffffff812f5340 t ping_init_sock
ffffffff812f5400 t ping_sendmsg
ffffffff812f5a40 t ping_close
ffffffff812f5a50 t ping_proc_exit_net
ffffffff812f5a60 t ping_proc_init_net
ffffffff812f5a90 t ping_seq_open
ffffffff812f5ab0 t ping_seq_stop
ffffffff812f5ac0 t ping_getfrag
ffffffff812f5b20 t ping_seq_show
ffffffff812f5ce0 t ping_seq_start
ffffffff812f5d50 t ping_v4_lookup.isra.11
ffffffff812f5df0 T ping_err
ffffffff812f5f90 T ping_rcv
ffffffff812f6060 T ping_proc_exit
ffffffff812f6070 t ipv4_tcp_mem
ffffffff812f6120 t ipv4_sysctl_exit_net
ffffffff812f6140 t ipv4_sysctl_init_net
ffffffff812f6270 t ipv4_ping_group_range
ffffffff812f6360 t proc_allowed_congestion_control
ffffffff812f6430 t proc_tcp_available_congestion_control
ffffffff812f64f0 t proc_tcp_congestion_control
ffffffff812f6580 t ipv4_local_port_range
ffffffff812f6650 t ip_proc_exit_net
ffffffff812f6680 t ip_proc_init_net
ffffffff812f6720 t snmp_seq_open
ffffffff812f6730 t netstat_seq_open
ffffffff812f6740 t sockstat_seq_open
ffffffff812f6750 t netstat_seq_show
ffffffff812f6890 t sockstat_seq_show
ffffffff812f6a00 t icmpmsg_put_line.part.3
ffffffff812f6ae0 t icmpmsg_put
ffffffff812f6b80 t snmp_seq_show
ffffffff812f6f70 T cookie_check_timestamp
ffffffff812f7020 t cookie_hash
ffffffff812f7140 T cookie_init_timestamp
ffffffff812f7190 T cookie_v4_init_sequence
ffffffff812f7310 T cookie_v4_check
ffffffff812f7920 t lro_tcp_ip_check
ffffffff812f79d0 t lro_get_desc.isra.7
ffffffff812f7a60 t lro_flush.isra.8
ffffffff812f7c80 T lro_flush_pkt
ffffffff812f7cc0 T lro_flush_all
ffffffff812f7d20 t lro_tcp_data_csum.isra.9
ffffffff812f7de0 t lro_init_desc
ffffffff812f7ec0 t lro_add_common
ffffffff812f7f80 t lro_gen_skb.isra.11
ffffffff812f80d0 T lro_receive_frags
ffffffff812f8480 T lro_receive_skb
ffffffff812f86d0 t bictcp_recalc_ssthresh
ffffffff812f8740 t bictcp_undo_cwnd
ffffffff812f8760 t bictcp_init
ffffffff812f8880 t bictcp_state
ffffffff812f8970 t bictcp_acked
ffffffff812f8b40 t bictcp_cong_avoid
ffffffff812f8ef0 t xfrm4_get_tos
ffffffff812f8f00 t xfrm4_init_path
ffffffff812f8f10 t xfrm4_fill_dst
ffffffff812f8fe0 t xfrm4_garbage_collect
ffffffff812f9040 t xfrm4_update_pmtu
ffffffff812f9060 t xfrm4_dst_ifdown
ffffffff812f9070 t xfrm4_dst_destroy
ffffffff812f9110 t _decode_session4
ffffffff812f94c0 t xfrm4_get_saddr
ffffffff812f9520 t xfrm4_dst_lookup
ffffffff812f9570 t xfrm4_init_flags
ffffffff812f9590 t xfrm4_init_temprop
ffffffff812f9600 t __xfrm4_init_tempsel
ffffffff812f96f0 T xfrm4_extract_header
ffffffff812f9740 T xfrm4_rcv_encap
ffffffff812f9760 T xfrm4_rcv
ffffffff812f9780 T xfrm4_extract_input
ffffffff812f9790 T xfrm4_transport_finish
ffffffff812f9850 T xfrm4_udp_encap_rcv
ffffffff812f9a20 T xfrm4_prepare_output
ffffffff812f9aa0 T xfrm4_extract_output
ffffffff812f9b70 T xfrm4_output_finish
ffffffff812f9b80 T xfrm4_output
ffffffff812f9ba0 t xfrm_policy_flo_check
ffffffff812f9bb0 T xfrm_policy_walk_init
ffffffff812f9bd0 t __xfrm_policy_unlink
ffffffff812f9c90 T xfrm_dst_ifdown
ffffffff812f9cf0 t xfrm_link_failure
ffffffff812f9d00 t xfrm_default_advmss
ffffffff812f9d30 t xfrm_neigh_lookup
ffffffff812f9d40 t xfrm_audit_common_policyinfo
ffffffff812f9e80 T xfrm_audit_policy_delete
ffffffff812f9f70 T xfrm_audit_policy_add
ffffffff812fa060 t xfrm_policy_flo_get
ffffffff812fa080 t xfrm_bundle_flo_delete
ffffffff812fa0d0 T xfrm_policy_unregister_afinfo
ffffffff812fa170 T xfrm_policy_walk_done
ffffffff812fa1d0 T xfrm_policy_walk
ffffffff812fa320 t policy_hash_bysel
ffffffff812fa3a0 T xfrm_spd_getinfo
ffffffff812fa420 T xfrm_policy_register_afinfo
ffffffff812fa560 t xfrm_negative_advice
ffffffff812fa590 t xfrm_bundle_ok
ffffffff812fa740 t xfrm_bundle_flo_check
ffffffff812fa770 t xfrm_bundle_flo_get
ffffffff812fa7b0 t xfrm_policy_get_afinfo
ffffffff812fa7f0 T __xfrm_decode_session
ffffffff812fa850 t xfrm_tmpl_resolve
ffffffff812faca0 t xfrm_resolve_and_create_bundle
ffffffff812fb4b0 t __xfrm_policy_link
ffffffff812fb5b0 T xfrm_policy_alloc
ffffffff812fb660 t __xfrm_garbage_collect.isra.36
ffffffff812fb6f0 t xfrm_dev_event
ffffffff812fb710 t xfrm_garbage_collect_deferred
ffffffff812fb730 t xfrm_mtu
ffffffff812fb760 t xfrm_hash_resize
ffffffff812fbab0 t xfrm_dst_check
ffffffff812fbae0 T xfrm_policy_destroy
ffffffff812fbb10 t xfrm_policy_flo_delete
ffffffff812fbb30 t clone_policy
ffffffff812fbd20 t xfrm_policy_kill
ffffffff812fbd80 T xfrm_policy_delete
ffffffff812fbdd0 T xfrm_policy_flush
ffffffff812fbfc0 t xfrm_policy_fini
ffffffff812fc0e0 t xfrm_net_exit
ffffffff812fc100 t xfrm_net_init
ffffffff812fc300 T xfrm_policy_byid
ffffffff812fc440 T xfrm_policy_bysel_ctx
ffffffff812fc560 T xfrm_policy_insert
ffffffff812fc8f0 t xfrm_policy_timer
ffffffff812fcb30 T xfrm_selector_match
ffffffff812fcf10 t xfrm_sk_policy_lookup
ffffffff812fcfa0 T xfrm_lookup
ffffffff812fd450 T __xfrm_route_forward
ffffffff812fd4e0 T __xfrm_policy_check
ffffffff812fdaa0 t xfrm_policy_lookup_bytype.constprop.48
ffffffff812fdce0 t xfrm_bundle_lookup
ffffffff812fe130 t xfrm_policy_lookup
ffffffff812fe1a0 T xfrm_sk_policy_insert
ffffffff812fe290 T __xfrm_sk_clone_policy
ffffffff812fe310 T xfrm_get_acqseq
ffffffff812fe330 T xfrm_state_walk_init
ffffffff812fe350 t xfrm_audit_helper_sainfo
ffffffff812fe410 T xfrm_audit_state_delete
ffffffff812fe500 T xfrm_audit_state_add
ffffffff812fe5f0 T xfrm_sad_getinfo
ffffffff812fe650 T xfrm_state_walk
ffffffff812fe7a0 T xfrm_state_walk_done
ffffffff812fe800 t xfrm_state_gc_task
ffffffff812fe970 t xfrm_hash_resize
ffffffff812fec20 t xfrm_state_get_afinfo
ffffffff812fec60 T km_report
ffffffff812fecf0 T km_new_mapping
ffffffff812fed70 T km_query
ffffffff812fedf0 T km_state_notify
ffffffff812fee50 T km_state_expired
ffffffff812feeb0 T km_policy_notify
ffffffff812fef20 T km_policy_expired
ffffffff812fef80 t xfrm_get_mode
ffffffff812ff010 T __xfrm_init_state
ffffffff812ff250 T xfrm_init_state
ffffffff812ff260 T xfrm_state_unregister_afinfo
ffffffff812ff2d0 T xfrm_state_register_afinfo
ffffffff812ff330 T xfrm_unregister_km
ffffffff812ff380 T xfrm_register_km
ffffffff812ff3c0 t xfrm_state_lock_afinfo
ffffffff812ff410 T xfrm_unregister_mode
ffffffff812ff480 T xfrm_register_mode
ffffffff812ff520 T xfrm_unregister_type
ffffffff812ff570 T xfrm_register_type
ffffffff812ff5c0 T xfrm_user_policy
ffffffff812ff710 t __xfrm_state_lookup
ffffffff812ff800 T xfrm_state_lookup
ffffffff812ff880 t __xfrm_state_lookup_byaddr
ffffffff812ff9b0 T xfrm_state_lookup_byaddr
ffffffff812ffa30 t __xfrm_state_bump_genids
ffffffff812ffbc0 T __xfrm_state_destroy
ffffffff812ffc50 T xfrm_alloc_spi
ffffffff812ffe60 T __xfrm_state_delete
ffffffff812fffa0 T xfrm_state_delete
ffffffff812ffff0 T xfrm_state_delete_tunnel
ffffffff81300070 T xfrm_state_flush
ffffffff81300240 T xfrm_stateonly_find
ffffffff81300430 t xfrm_timer_handler
ffffffff813006a0 T xfrm_state_alloc
ffffffff813007d0 t xfrm_replay_timer_handler
ffffffff81300850 t __xfrm_find_acq_byseq.isra.20
ffffffff813008b0 T xfrm_find_acq_byseq
ffffffff81300920 t xfrm_audit_helper_pktinfo
ffffffff813009b0 T xfrm_audit_state_icvfail
ffffffff81300a90 T xfrm_audit_state_notfound
ffffffff81300b70 T xfrm_audit_state_notfound_simple
ffffffff81300c10 T xfrm_audit_state_replay
ffffffff81300cf0 T xfrm_audit_state_replay_overflow
ffffffff81300db0 t xfrm_hash_grow_check
ffffffff81300de0 t __xfrm_state_insert
ffffffff81301010 T xfrm_state_insert
ffffffff81301040 t __find_acq_core
ffffffff813014a0 T xfrm_find_acq
ffffffff81301560 T xfrm_state_add
ffffffff813017f0 T xfrm_state_check_expire
ffffffff813018a0 T xfrm_state_update
ffffffff81301c10 t xfrm_state_look_at.isra.28
ffffffff81301d10 T xfrm_state_find
ffffffff813025a0 T xfrm_state_mtu
ffffffff81302620 T xfrm_state_init
ffffffff81302750 T xfrm_state_fini
ffffffff81302860 T xfrm_hash_alloc
ffffffff813028b0 T xfrm_hash_free
ffffffff813028f0 T xfrm_prepare_input
ffffffff813029c0 T secpath_dup
ffffffff81302a50 T __secpath_destroy
ffffffff81302ab0 T xfrm_parse_spi
ffffffff81302c20 T xfrm_input
ffffffff81303050 T xfrm_input_resume
ffffffff81303060 T xfrm_inner_extract_output
ffffffff813030f0 T xfrm_output_resume
ffffffff81303340 T xfrm_output
ffffffff81303430 t xfrm_alg_id_match
ffffffff81303440 T xfrm_aalg_get_byidx
ffffffff81303460 T xfrm_ealg_get_byidx
ffffffff81303480 T xfrm_count_auth_supported
ffffffff813034b0 T xfrm_count_enc_supported
ffffffff813034e0 T xfrm_probe_algs
ffffffff81303600 t xfrm_find_algo
ffffffff813036c0 T xfrm_aead_get_byname
ffffffff813036f0 T xfrm_calg_get_byname
ffffffff81303710 T xfrm_ealg_get_byname
ffffffff81303730 T xfrm_aalg_get_byname
ffffffff81303750 T xfrm_calg_get_byid
ffffffff81303770 T xfrm_ealg_get_byid
ffffffff81303790 T xfrm_aalg_get_byid
ffffffff813037b0 t xfrm_aead_name_match
ffffffff813037f0 t xfrm_alg_name_match
ffffffff81303850 T xfrm_sysctl_init
ffffffff81303910 T xfrm_sysctl_fini
ffffffff81303930 T xfrm_init_replay
ffffffff81303990 t xfrm_replay_overflow_esn
ffffffff81303a30 t xfrm_replay_notify
ffffffff81303b80 t xfrm_replay_notify_bmp
ffffffff81303d00 t xfrm_replay_advance_bmp
ffffffff81303e40 t xfrm_replay_check
ffffffff81303ec0 t xfrm_replay_check_bmp
ffffffff81303f50 t xfrm_replay_check_esn
ffffffff81304050 t xfrm_replay_overflow
ffffffff813040f0 t xfrm_replay_overflow_bmp
ffffffff81304180 t xfrm_replay_advance
ffffffff81304220 T xfrm_replay_seqhi
ffffffff81304260 t xfrm_replay_advance_esn
ffffffff81304420 T unix_outq_len
ffffffff81304430 t unix_poll
ffffffff813044e0 t unix_seq_stop
ffffffff813044f0 t unix_net_exit
ffffffff81304510 t unix_net_init
ffffffff81304580 t unix_seq_open
ffffffff813045a0 T unix_peer_get
ffffffff813045f0 T unix_inq_len
ffffffff81304680 t unix_ioctl
ffffffff813046f0 t unix_seq_show
ffffffff81304850 t init_peercred
ffffffff813048c0 t unix_socketpair
ffffffff81304950 t unix_set_peek_off
ffffffff813049a0 t __unix_find_socket_byname
ffffffff81304a30 t __unix_insert_socket
ffffffff81304a90 t unix_listen
ffffffff81304ba0 t unix_wait_for_peer
ffffffff81304c80 t unix_dgram_poll
ffffffff81304e30 t unix_getname
ffffffff81304f10 t unix_find_other
ffffffff81305110 t unix_shutdown
ffffffff81305260 t unix_accept
ffffffff81305390 t unix_create1
ffffffff81305500 t unix_create
ffffffff81305590 t unix_sock_destructor
ffffffff81305670 t __unix_remove_socket
ffffffff813056c0 t unix_release_sock
ffffffff81305910 t unix_release
ffffffff81305940 t maybe_add_creds
ffffffff813059b0 t next_unix_socket
ffffffff81305a20 t unix_seq_next
ffffffff81305af0 t unix_seq_start
ffffffff81305bb0 t unix_state_double_lock
ffffffff81305c10 t unix_state_double_unlock
ffffffff81305c50 t unix_write_space
ffffffff81305cc0 t unix_detach_fds.isra.32
ffffffff81305d10 t unix_dgram_recvmsg
ffffffff81306190 t unix_seqpacket_recvmsg
ffffffff813061b0 t unix_destruct_scm
ffffffff81306260 t unix_stream_recvmsg
ffffffff813069d0 t unix_mkname
ffffffff81306a50 t unix_autobind.isra.34
ffffffff81306c20 t unix_bind
ffffffff81306f50 t unix_stream_connect
ffffffff813073a0 t unix_scm_to_skb
ffffffff813074c0 t unix_stream_sendmsg
ffffffff813078d0 t unix_dgram_disconnected
ffffffff81307960 t unix_dgram_connect
ffffffff81307b40 t unix_dgram_sendmsg
ffffffff81308170 t unix_seqpacket_sendmsg
ffffffff813081c0 t dec_inflight
ffffffff813081d0 t inc_inflight
ffffffff813081e0 t inc_inflight_move_tail
ffffffff81308240 T unix_get_socket
ffffffff81308280 t scan_inflight
ffffffff813083a0 t scan_children
ffffffff813084c0 T unix_inflight
ffffffff81308550 T unix_notinflight
ffffffff813085d0 T unix_gc
ffffffff81308960 T wait_for_unix_gc
ffffffff81308a10 T unix_sysctl_register
ffffffff81308a90 T unix_sysctl_unregister
ffffffff81308ab0 T __ipv6_addr_type
ffffffff81308b80 T ipv6_ext_hdr
ffffffff81308bc0 T ipv6_skip_exthdr
ffffffff81308d00 t net_ctl_header_lookup
ffffffff81308d10 t is_seen
ffffffff81308d40 t net_ctl_ro_header_perms
ffffffff81308d60 t sysctl_net_init
ffffffff81308d90 t sysctl_net_exit
ffffffff81308da0 T unregister_net_sysctl_table
ffffffff81308db0 T register_net_sysctl_rotable
ffffffff81308dd0 T register_net_sysctl_table
ffffffff81308de0 t net_ctl_permissions
ffffffff81308e20 T klist_init
ffffffff81308e40 T klist_node_attached
ffffffff81308e50 T klist_iter_init_node
ffffffff81308e90 T klist_iter_init
ffffffff81308ea0 t klist_node_init
ffffffff81308f20 T klist_add_before
ffffffff81308f90 T klist_add_after
ffffffff81309000 T klist_add_tail
ffffffff81309060 T klist_add_head
ffffffff813090c0 t klist_release
ffffffff813091c0 t klist_dec_and_del
ffffffff813091f0 T klist_next
ffffffff813092f0 t klist_put
ffffffff813093b0 T klist_iter_exit
ffffffff813093d0 T klist_del
ffffffff813093e0 T klist_remove
ffffffff81309490 T md5_transform
ffffffff81309c40 T sha_transform
ffffffff8130ad40 T sha_init
ffffffff8130ad70 T csum_ipv6_magic
ffffffff8130adc0 T csum_partial_copy_nocheck
ffffffff8130add0 T csum_partial_copy_to_user
ffffffff8130ae60 T csum_partial_copy_from_user
ffffffff8130af30 T csum_partial_copy_generic
ffffffff8130b09c t rest_init
ffffffff8130b110 T xen_hvm_init_shared_info
ffffffff8130b1e0 T xen_build_mfn_list_list
ffffffff8130b4a0 T arch_register_cpu
ffffffff8130b4d0 T acpi_map_lsapic
ffffffff8130b640 t remove_cpu_from_maps
ffffffff8130b66a T check_enable_amd_mmconf_dmi
ffffffff8130b680 T init_memory_mapping
ffffffff8130bbc0 t map_low_page
ffffffff8130bbf4 t alloc_low_page
ffffffff8130bc70 t unmap_low_page
ffffffff8130bc90 t spp_getpage
ffffffff8130bd00 t take_cpu_down
ffffffff8130bd30 t _cpu_down
ffffffff8130bf90 T cpu_down
ffffffff8130bfe0 T unregister_cpu_notifier
ffffffff8130c000 T register_cpu_notifier
ffffffff8130c030 T enable_nonboot_cpus
ffffffff8130c0f0 T __irq_alloc_descs
ffffffff8130c2b0 t zone_wait_table_init
ffffffff8130c380 t __build_all_zonelists
ffffffff8130c8d0 T build_all_zonelists
ffffffff8130cb7b t sparse_index_alloc
ffffffff8130cc20 t __earlyonly_bootmem_alloc
ffffffff8130cc30 t setup_cpu_cache
ffffffff8130ce80 T pci_add_new_bus
ffffffff8130d1b0 T pci_scan_single_device
ffffffff8130d270 T pci_rescan_bus_bridge_resize
ffffffff8130d2c0 t pci_bus_release_bridge_resources
ffffffff8130d440 t __pci_bus_assign_resources
ffffffff8130d540 T pci_bus_assign_resources
ffffffff8130d550 t __pci_bridge_assign_resources
ffffffff8130d620 T __pci_bus_size_bridges
ffffffff8130de10 T pci_rescan_bus
ffffffff8130ded0 T pci_bus_size_bridges
ffffffff8130dee0 T fb_find_logo
ffffffff8130df12 T acpi_os_map_memory
ffffffff8130e06d T acpi_os_unmap_memory
ffffffff8130e180 t store_online
ffffffff8130e238 t via_no_dac
ffffffff8130e263 t quirk_intel_irqbalance
ffffffff8130e311 t workqueue_cpu_callback
ffffffff8130e52a t cpu_callback
ffffffff8130e5e6 T read_current_timer
ffffffff8130e608 T pci_read_bridge_bases
ffffffff8130ea01 T pci_scan_child_bus
ffffffff8130eaa2 T pci_scan_bridge
ffffffff8130ef54 T pci_scan_bus
ffffffff8130efd5 T pci_scan_bus_parented
ffffffff8130f05c T pci_scan_root_bus
ffffffff8130f085 t quirk_mmio_always_on
ffffffff8130f08a t quirk_mellanox_tavor
ffffffff8130f092 t quirk_citrine
ffffffff8130f09d t quirk_s3_64M
ffffffff8130f0d1 t quirk_dunord
ffffffff8130f0e8 t quirk_transparent_bridge
ffffffff8130f0f0 t quirk_no_ata_d3
ffffffff8130f0f9 t quirk_brcm_570x_limit_vpd
ffffffff8130f136 t quirk_msi_intx_disable_bug
ffffffff8130f13f t quirk_hotplug_bridge
ffffffff8130f147 t fixup_mpss_256
ffffffff8130f154 t quirk_via_acpi
ffffffff8130f190 t quirk_disable_msi
ffffffff8130f1be t quirk_xio2000a
ffffffff8130f234 t fixup_ti816x_class
ffffffff8130f256 t quirk_netmos
ffffffff8130f2d5 t quirk_cs5536_vsa
ffffffff8130f31c t quirk_msi_intx_disable_ati_bug
ffffffff8130f354 t ht_check_msi_mapping
ffffffff8130f3c4 t msi_ht_cap_enabled
ffffffff8130f456 t quirk_nvidia_ck804_msi_ht_cap
ffffffff8130f4bc t quirk_amd_780_apc_msi
ffffffff8130f4fe t quirk_vt82c598_id
ffffffff8130f52b t quirk_svwks_csb5ide
ffffffff8130f577 t quirk_unhide_mch_dev6
ffffffff8130f5d4 t quirk_amd_ide_mode
ffffffff8130f692 t quirk_via_cx700_pci_parking_caching
ffffffff8130f795 t ht_enable_msi_mapping
ffffffff8130f82a t ich7_lpc_generic_decode
ffffffff8130f88b t ich6_lpc_generic_decode
ffffffff8130f8fc t quirk_tigerpoint_bm_sts
ffffffff8130f954 t nvenet_msi_disable
ffffffff8130f9b5 t quirk_disable_aspm_l0s
ffffffff8130f9dc t quirk_pcie_pxh
ffffffff8130fa02 t quirk_pcie_mch
ffffffff8130fa14 t quirk_io_region
ffffffff8130fabf t quirk_vt8235_acpi
ffffffff8130fb4a t ich6_lpc_acpi_gpio
ffffffff8130fc16 t quirk_ich7_lpc
ffffffff8130fc70 t quirk_ich6_lpc
ffffffff8130fca9 t quirk_ich4_lpc_acpi
ffffffff8130fd75 t quirk_ali7101_acpi
ffffffff8130fdeb t quirk_ati_exploding_mce
ffffffff8130fe44 t quirk_amd_ioapic
ffffffff8130fe77 t disable_igfx_irq
ffffffff8130fedc t quirk_intel_mc_errata
ffffffff8130ff87 t quirk_p64h2_1k_io_fix_iobl
ffffffff8131001f t quirk_p64h2_1k_io
ffffffff813100be t fixup_rev1_53c810
ffffffff813100e6 t quirk_natoma
ffffffff8131010e t quirk_vsfx
ffffffff81310136 t quirk_viaetbf
ffffffff8131015f t quirk_triton
ffffffff81310189 t quirk_nopciamd
ffffffff813101d0 t quirk_nopcipci
ffffffff813101f8 t quirk_isa_dma_hangs
ffffffff81310221 t quirk_msi_ht_cap
ffffffff81310256 t __nv_msi_ht_cap_quirk
ffffffff813104af t nv_msi_ht_cap_quirk_all
ffffffff813104b9 t nv_msi_ht_cap_quirk_leaf
ffffffff813104c0 t nvbridge_check_legacy_irq_routing
ffffffff81310529 t quirk_brcm_5719_limit_mrrs
ffffffff81310573 t quirk_e100_interrupt
ffffffff813106d9 t quirk_vt82c586_acpi
ffffffff81310727 t quirk_vt82c686_acpi
ffffffff813107b3 t quirk_piix4_acpi
ffffffff81310909 t pcie_portdrv_probe
ffffffff8131096f t aer_probe
ffffffff81310bd5 t ioapic_probe
ffffffff81310cfd t acpi_pci_root_add
ffffffff81311143 t acpi_hed_add
ffffffff81311160 t ghes_probe
ffffffff8131147e t __gnttab_init
ffffffff8131149b t platform_pci_init
ffffffff8131167a t xencons_probe
ffffffff81311793 t mmio_resource_enabled.constprop.6
ffffffff813117d0 t quirk_usb_early_handoff
ffffffff81311d94 t cmos_wake_setup.part.4
ffffffff81311e47 t cmos_pnp_probe
ffffffff81311edb t acpi_pm_check_graylist
ffffffff81311f0a t acpi_pm_check_blacklist
ffffffff81311f3f t pci_fixup_latency
ffffffff81311f4a t pci_fixup_piix4_acpi
ffffffff81311f55 t pci_fixup_transparent_bridge
ffffffff81311f6d t pci_siemens_interrupt_controller
ffffffff81311f76 t pci_fixup_umc_ide
ffffffff81311fa8 t pci_fixup_i450gx
ffffffff81311ffa t pci_fixup_i450nx
ffffffff813120a7 t pci_post_fixup_toshiba_ohci1394
ffffffff8131211e t pci_pre_fixup_toshiba_ohci1394
ffffffff81312154 t pci_fixup_msi_k8t_onboard_sound
ffffffff813121eb t pci_fixup_video
ffffffff8131227b t pci_fixup_ncr53c810
ffffffff813122a3 T pci_acpi_scan_root
ffffffff813125ed T pcibios_scan_specific_bus
ffffffff81312673 t can_skip_ioresource_align
ffffffff81312694 t read_dmi_type_b1
ffffffff813126d4 t set_bf_sort
ffffffff813126fe t find_sort_method
ffffffff81312728 T pcibios_fixup_bus
ffffffff813127c9 T pcibios_setup
ffffffff81312b49 T pci_scan_bus_on_node
ffffffff81312be7 T pci_scan_bus_with_sysdata
ffffffff81312bf6 T pcibios_scan_root
ffffffff81312cbc T update_res
ffffffff81312d5f t ioapic_remove
ffffffff81312d9c t acpi_hed_remove
ffffffff81312daa t ghes_remove
ffffffff81312f1e t i8042_unregister_ports
ffffffff81312f49 t i8042_remove
ffffffff81312f6d T calibrate_delay
ffffffff8131343b t xen_hvm_cpu_notify
ffffffff81313470 t register_callback
ffffffff81313493 T xen_enable_sysenter
ffffffff813134c8 T xen_enable_syscall
ffffffff81313524 t cpu_bringup
ffffffff813135a3 t xen_play_dead
ffffffff813135c6 t xen_cpu_up
ffffffff8131392b t cpu_bringup_and_idle
ffffffff81313938 t xen_hvm_cpu_up
ffffffff81313967 T xen_init_lock_cpu
ffffffff813139d6 T x86_init_noop
ffffffff813139d7 t cpu_vsyscall_init
ffffffff81313a87 t cpu_vsyscall_notifier
ffffffff81313ab0 T unsynchronized_tsc
ffffffff81313b17 T calibrate_delay_is_known
ffffffff81313bb6 T select_idle_routine
ffffffff81313c5f T fpu_init
ffffffff81313d11 T xsave_init
ffffffff81313d34 t cpuid4_cache_lookup_regs
ffffffff813140d1 t cache_remove_shared_cpu_map
ffffffff8131413f t cpuid4_cache_sysfs_exit
ffffffff813141eb t cache_add_dev
ffffffff81314561 t get_cpu_leaves
ffffffff8131494b t cacheinfo_cpu_callback
ffffffff81314a21 t cache_sysfs_init
ffffffff81314a89 T init_intel_cacheinfo
ffffffff81314f58 T init_scattered_cpuid_features
ffffffff8131504c T detect_extended_topology
ffffffff813151fc t filter_cpuid_features
ffffffff81315266 t get_cpu_vendor
ffffffff813152fd t setup_smep.part.6
ffffffff8131535e T cpu_detect_cache_sizes
ffffffff8131542f t default_init
ffffffff81315434 T detect_ht
ffffffff813155ae T cpu_detect
ffffffff8131567b T get_cpu_cap
ffffffff81315841 t identify_cpu
ffffffff81315b75 T identify_secondary_cpu
ffffffff81315b8e T print_cpu_msr
ffffffff81315c1f T print_cpu_info
ffffffff81315ccf T cpu_init
ffffffff81315faa t vmware_set_cpu_features
ffffffff81315fb5 T init_hypervisor
ffffffff81315fcd T x86_init_rdrand
ffffffff8131600c t early_init_intel
ffffffff813161dc t init_intel
ffffffff8131643e t early_init_amd
ffffffff8131650e t init_amd
ffffffff81316b02 t bsp_init_amd
ffffffff81316baf t early_init_centaur
ffffffff81316bc5 t centaur_size_cache
ffffffff81316bc8 t init_centaur
ffffffff81316d46 t x86_pmu_notifier
ffffffff81316dfc t mce_reenable_cpu
ffffffff81316e56 t mce_disable_cpu
ffffffff81316ebc t mce_device_create
ffffffff81316ffe t mce_cpu_callback
ffffffff81317193 T mcheck_cpu_init
ffffffff813174a4 t allocate_threshold_blocks
ffffffff813176a8 t threshold_create_device
ffffffff813179eb t amd_64_threshold_cpu_callback
ffffffff81317cac t perf_ibs_cpu_notifier
ffffffff81317cfc t acpi_register_lapic
ffffffff81317d37 t link_thread_siblings
ffffffff81317da5 t do_boot_cpu
ffffffff813183db t do_fork_idle
ffffffff813183f5 T smp_store_cpu_info
ffffffff8131842e T set_cpu_sibling_map
ffffffff813186e2 t start_secondary
ffffffff81318887 T wakeup_secondary_cpu_via_nmi
ffffffff81318925 T native_cpu_up
ffffffff81318a1d t check_tsc_warp
ffffffff81318b31 T check_tsc_sync_source
ffffffff81318c47 T check_tsc_sync_target
ffffffff81318cc3 t apic_cluster_num
ffffffff81318d5e t setup_APIC_timer
ffffffff81318e00 t set_multi
ffffffff81318e2a T setup_secondary_APIC_clock
ffffffff81318e2f T setup_local_APIC
ffffffff8131909a T end_local_APIC_setup
ffffffff8131916c T generic_processor_info
ffffffff813192ea T apic_is_clustered_box
ffffffff81319334 t cmp_range
ffffffff8131933b t get_fam10h_pci_mmconf_base
ffffffff81319600 T fam10h_check_enable_mmcfg
ffffffff813196bc T x86_configure_nx
ffffffff813196f8 t calculate_tlb_offset
ffffffff813197fa t tlb_cpuhp_notify
ffffffff81319816 t init_smp_flush
ffffffff81319845 T numa_cpu_node
ffffffff81319883 T numa_set_node
ffffffff813198c0 T numa_clear_node
ffffffff813198c8 T numa_add_cpu
ffffffff813198f9 T numa_remove_cpu
ffffffff8131992a W idle_regs
ffffffff81319960 T fork_idle
ffffffff813199ed t console_cpu_notify
ffffffff81319a19 t _cpu_up
ffffffff81319b14 T cpu_up
ffffffff81319b65 T notify_cpu_starting
ffffffff81319b8b t cpu_callback
ffffffff81319df6 t remote_softirq_cpu_notify
ffffffff81319eaf t timer_cpu_notify
ffffffff8131a171 t trustee_thread
ffffffff8131a66f t wait_trustee_state.part.26
ffffffff8131a70f t hrtimer_cpu_notify
ffffffff8131a8e0 t sched_cpu_active
ffffffff8131a909 t sched_cpu_inactive
ffffffff8131a92a T init_idle_bootup_task
ffffffff8131a938 t migration_call
ffffffff8131ab60 T init_idle
ffffffff8131acdb t cpu_stop_cpu_callback
ffffffff8131ae6e t rcu_init_percpu_data.constprop.46
ffffffff8131af74 t rcu_cpu_notify
ffffffff8131b014 t perf_cpu_notify
ffffffff8131b0ce t ratelimit_handler
ffffffff8131b0da t start_cpu_timer
ffffffff8131b136 t vmstat_cpuup_callback
ffffffff8131b1d4 t start_cpu_timer
ffffffff8131b2cd t cpuup_canceled
ffffffff8131b46e t cpuup_callback
ffffffff8131b787 t blk_cpu_notify
ffffffff8131b7f5 t blk_iopoll_cpu_notify
ffffffff8131b864 t percpu_counter_hotcpu_callback
ffffffff8131b8e9 T acpi_processor_set_pdc
ffffffff8131ba77 T register_cpu
ffffffff8131bb5c t topology_cpu_callback
ffffffff8131bbcc t topology_sysfs_init
ffffffff8131bc30 t cpufreq_cpu_callback
ffffffff8131bc9f t cpufreq_stat_cpu_callback
ffffffff8131bce2 t enable_pci_io_ecs
ffffffff8131bd25 t amd_cpu_notify
ffffffff8131bd4c t flow_cache_cpu_prepare.isra.4
ffffffff8131bdd6 t flow_cache_cpu
ffffffff8131be41 t run_init_process
ffffffff8131be5b t init_post
ffffffff8131bf13 t m2p
ffffffff8131bfa6 t convert_pfn_mfn
ffffffff8131bfcc t pin_pagetable_pfn
ffffffff8131c022 t xen_remap_domain_mfn_range.part.29
ffffffff8131c024 t set_page_prot
ffffffff8131c069 t xen_smp_intr_init
ffffffff8131c23c t xen_spin_unlock_slow
ffffffff8131c298 t xen_spin_lock_slow
ffffffff8131c377 T dump_stack
ffffffff8131c3e6 t write_ok_or_segv.part.3
ffffffff8131c44d t wait_for_panic
ffffffff8131c48e t hpet_msi_capability_lookup
ffffffff8131c61b t bad_address
ffffffff8131c674 t vmalloc_fault
ffffffff8131c8ca t spurious_fault
ffffffff8131ca8c t dump_pagetable
ffffffff8131cc3e t pgtable_bad
ffffffff8131ccc6 t force_sig_info_fault
ffffffff8131cd29 t is_prefetch.isra.15.part.16
ffffffff8131cf1d t no_context
ffffffff8131d189 t __bad_area_nosemaphore
ffffffff8131d386 t bad_area_nosemaphore
ffffffff8131d390 t bad_area
ffffffff8131d3d5 t bad_area_access_error
ffffffff8131d41a t mm_fault_error
ffffffff8131d5f7 T panic
ffffffff8131d7b8 T printk
ffffffff8131d800 T printk_sched
ffffffff8131d88a t wait_noreap_copyout.isra.13
ffffffff8131d950 t migrate_timer_list
ffffffff8131d9aa t retarget_shared_pending
ffffffff8131da2c t ptrace_trap_notify
ffffffff8131da94 t clear_dead_task
ffffffff8131dacc t __schedule_bug
ffffffff8131db17 t select_fallback_rq
ffffffff8131dc80 t print_name_offset.part.3
ffffffff8131dc91 t refill_pi_state_cache.part.10
ffffffff8131dce8 t remove_waiter
ffffffff8131de4a t grow_tree_refs
ffffffff8131dea2 t rcu_cleanup_dead_cpu
ffffffff8131df40 t rcu_cleanup_dying_cpu
ffffffff8131e046 t __iovec_copy_from_user_inatomic
ffffffff8131e0ab t __alloc_pages_direct_compact
ffffffff8131e251 t pcpu_dump_alloc_info
ffffffff8131e4a3 t offset_il_node.isra.15
ffffffff8131e50e t cache_flusharray
ffffffff8131e5ae t cache_alloc_refill
ffffffff8131e775 t alternate_node_alloc
ffffffff8131e80f t release_pte_pages
ffffffff8131e87f t vfs_path_lookup.part.34
ffffffff8131e881 t block_dump___mark_inode_dirty
ffffffff8131e91f t set_brk
ffffffff8131e9be t elf_map
ffffffff8131eb06 t alignfile
ffffffff8131eb5b t writenote
ffffffff8131ec11 t set_brk
ffffffff8131ecb0 t alignfile
ffffffff8131ed04 t writenote
ffffffff8131edba t elf_map.isra.10
ffffffff8131eef9 t cipher_crypt_unaligned
ffffffff8131ef69 t compute_batch_value
ffffffff8131ef92 t pci_fixup_parent_subordinate_busnr.isra.31
ffffffff8131efe4 t piix4_io_quirk
ffffffff8131f064 t piix4_mem_quirk.constprop.44
ffffffff8131f0d8 t ec_install_handlers
ffffffff8131f177 t erst_get_erange.constprop.14
ffffffff8131f1ff t ghes_estatus_pool_expand
ffffffff8131f287 t ghes_estatus_pool_exit
ffffffff8131f2ab t do_free_callbacks
ffffffff8131f2fa t xenbus_reset_wait_for_backend
ffffffff8131f3cb t scrdown
ffffffff8131f49f t restore_cur
ffffffff8131f56f t legacy_suspend
ffffffff8131f5c4 t handshake
ffffffff8131f612 t i8042_free_irqs
ffffffff8131f659 t i8042_pnp_exit
ffffffff8131f697 t input_proc_exit
ffffffff8131f6ce t cmos_do_probe
ffffffff8131fa12 t cpufreq_stats_free_sysfs
ffffffff8131fa55 t cpufreq_stats_free_table
ffffffff8131faa2 t add_sysfs_fw_map_entry
ffffffff8131fb1d t iommu_ignore_device
ffffffff8131fbdd t free_dev_data
ffffffff8131fc30 t dma_ops_unity_map
ffffffff8131fc9a t coalesce_windows
ffffffff8131fd93 t skb_warn_bad_offload
ffffffff8131fe4e t nl_pid_hash_zalloc
ffffffff8131fe7c t nl_pid_hash_free
ffffffff8131fea1 t nl_pid_hash_rehash
ffffffff8131fff4 t arp_ignore
ffffffff81320060 T __sched_text_start
ffffffff81320060 T console_conditional_schedule
ffffffff81320080 T schedule_timeout
ffffffff81320270 T schedule_timeout_uninterruptible
ffffffff81320290 T schedule_timeout_killable
ffffffff813202b0 T schedule_timeout_interruptible
ffffffff813202d0 T __wait_on_bit
ffffffff81320350 T out_of_line_wait_on_bit
ffffffff813203e0 T __wait_on_bit_lock
ffffffff81320480 T out_of_line_wait_on_bit_lock
ffffffff81320510 T mutex_lock
ffffffff81320550 T mutex_unlock
ffffffff81320570 T mutex_trylock
ffffffff81320595 t __mutex_lock_killable_slowpath
ffffffff81320700 T mutex_lock_killable
ffffffff81320731 t __mutex_lock_interruptible_slowpath
ffffffff81320890 T mutex_lock_interruptible
ffffffff813208d0 t __mutex_unlock_slowpath
ffffffff81320930 t __mutex_lock_slowpath
ffffffff81320a70 t do_nanosleep
ffffffff81320b40 T hrtimer_nanosleep_restart
ffffffff81320bb0 T schedule_hrtimeout_range_clock
ffffffff81320cf0 T schedule_hrtimeout_range
ffffffff81320d00 T schedule_hrtimeout
ffffffff81320d10 T down_read
ffffffff81320d20 T down_write
ffffffff81320d3d t __down
ffffffff81320de1 t __down_interruptible
ffffffff81320eb2 t __down_killable
ffffffff81320f88 t __down_timeout
ffffffff8132102c t __up.isra.0
ffffffff81321070 t wait_for_common
ffffffff813211f0 T wait_for_completion
ffffffff81321210 T wait_for_completion_timeout
ffffffff81321220 T wait_for_completion_interruptible
ffffffff81321250 T wait_for_completion_interruptible_timeout
ffffffff81321260 T wait_for_completion_killable
ffffffff81321290 T wait_for_completion_killable_timeout
ffffffff813212a0 t sleep_on_common
ffffffff81321370 T sleep_on_timeout
ffffffff81321390 T sleep_on
ffffffff813213b0 T interruptible_sleep_on_timeout
ffffffff813213d0 T interruptible_sleep_on
ffffffff813213f0 t __schedule
ffffffff81321af0 T __cond_resched_softirq
ffffffff81321b40 T _cond_resched
ffffffff81321b80 T schedule
ffffffff81321be0 T io_schedule
ffffffff81321cb0 T yield_to
ffffffff81321e30 T schedule_preempt_disabled
ffffffff81321e40 T yield
ffffffff81321e70 T io_schedule_timeout
ffffffff81321f60 t alarm_timer_nsleep_restart
ffffffff81322030 T rt_mutex_trylock
ffffffff813220c0 T rt_mutex_unlock
ffffffff813221e0 t __rt_mutex_slowlock
ffffffff813222a0 t rt_mutex_slowlock.part.12
ffffffff813223f0 t rt_mutex_slowlock
ffffffff81322480 T rt_mutex_lock
ffffffff813224b0 T rt_mutex_lock_interruptible
ffffffff813224e0 t rwsem_down_failed_common
ffffffff81322640 T rwsem_down_write_failed
ffffffff81322650 T rwsem_down_read_failed
ffffffff81322661 T __sched_text_end
ffffffff81322668 T __lock_text_start
ffffffff81322670 T _raw_spin_trylock
ffffffff81322680 T _raw_spin_lock
ffffffff81322690 T _raw_spin_lock_irqsave
ffffffff813226d0 T _raw_spin_lock_irq
ffffffff813226e0 T _raw_read_trylock
ffffffff81322700 T _raw_read_lock
ffffffff81322710 T _raw_read_lock_irqsave
ffffffff81322730 T _raw_read_lock_irq
ffffffff81322750 T _raw_write_trylock
ffffffff81322770 T _raw_write_lock
ffffffff81322780 T _raw_write_lock_irqsave
ffffffff813227b0 T _raw_write_lock_irq
ffffffff813227d0 T _raw_spin_unlock_bh
ffffffff813227e0 T _raw_read_unlock_bh
ffffffff813227f0 T _raw_write_unlock_bh
ffffffff81322800 T _raw_spin_trylock_bh
ffffffff81322850 T _raw_spin_lock_bh
ffffffff81322870 T _raw_read_lock_bh
ffffffff81322890 T _raw_write_lock_bh
ffffffff813228b0 T _raw_spin_unlock_irqrestore
ffffffff813228d0 T _raw_read_unlock_irqrestore
ffffffff813228f0 T _raw_write_unlock_irqrestore
ffffffff81322920 T tty_unlock
ffffffff81322930 T tty_lock
ffffffff8132293c T __lock_text_end
ffffffff81322940 T __kprobes_text_start
ffffffff81322940 T save_paranoid
ffffffff813229c0 t common_interrupt
ffffffff81322a2a t ret_from_intr
ffffffff81322a3f t exit_intr
ffffffff81322a59 t retint_with_reschedule
ffffffff81322a5e t retint_check
ffffffff81322a65 t retint_swapgs
ffffffff81322a73 t retint_restore_args
ffffffff81322a79 t restore_args
ffffffff81322aa9 t irq_return
ffffffff81322ab0 T native_iret
ffffffff81322ab2 t retint_careful
ffffffff81322ae4 t retint_signal
ffffffff81322b70 T debug
ffffffff81322bb0 T int3
ffffffff81322bf0 T stack_segment
ffffffff81322c20 T xen_debug
ffffffff81322c40 T xen_int3
ffffffff81322c60 T xen_stack_segment
ffffffff81322c90 T general_protection
ffffffff81322cc0 T page_fault
ffffffff81322cf0 T machine_check
ffffffff81322d20 T paranoid_exit
ffffffff81322d3d t paranoid_swapgs
ffffffff81322d96 t paranoid_restore
ffffffff81322dec t paranoid_userspace
ffffffff81322e3c t paranoid_schedule
ffffffff81322e50 T error_entry
ffffffff81322eab t error_swapgs
ffffffff81322eb1 t error_sti
ffffffff81322eb2 t error_kernelspace
ffffffff81322ee1 t bstep_iret
ffffffff81322ef0 T error_exit
ffffffff81322f50 T nmi
ffffffff81322f81 t nested_nmi
ffffffff81322fb4 t nested_nmi_out
ffffffff81322fbb t first_nmi
ffffffff81322fd5 t repeat_nmi
ffffffff81322ff2 t end_repeat_nmi
ffffffff81323010 t nmi_swapgs
ffffffff81323013 t nmi_restore
ffffffff81323080 T ignore_sysret
ffffffff81323087 T __kprobes_text_end
ffffffff81323088 T __entry_text_start
ffffffff813230c0 T native_usergs_sysret64
ffffffff813230d0 T save_rest
ffffffff81323140 T ret_from_fork
ffffffff813231c0 T system_call
ffffffff813231c3 T system_call_after_swapgs
ffffffff81323223 t system_call_fastpath
ffffffff8132323e t ret_from_sys_call
ffffffff81323243 t sysret_check
ffffffff81323291 t sysret_careful
ffffffff813232a8 t sysret_signal
ffffffff813232f1 t badsys
ffffffff813232ff t auditsys
ffffffff81323349 t sysret_audit
ffffffff8132336a t tracesys
ffffffff8132344c T int_ret_from_sys_call
ffffffff81323459 T int_with_check
ffffffff81323479 t int_careful
ffffffff8132349b t int_very_careful
ffffffff813234a3 t int_check_syscall_exit_work
ffffffff813234e0 t int_signal
ffffffff813234f7 t int_restore_rest
ffffffff81323530 T stub_clone
ffffffff81323550 T stub_fork
ffffffff81323570 T stub_vfork
ffffffff81323590 T stub_sigaltstack
ffffffff813235b0 T stub_iopl
ffffffff813235d0 T ptregscall_common
ffffffff81323610 T stub_execve
ffffffff813236d0 T stub_rt_sigreturn
ffffffff81323780 T irq_entries_start
ffffffff81323b80 T irq_move_cleanup_interrupt
ffffffff81323bf0 T reboot_interrupt
ffffffff81323c60 T apic_timer_interrupt
ffffffff81323cd0 T x86_platform_ipi
ffffffff81323d40 T invalidate_interrupt1
ffffffff81323d50 T invalidate_interrupt2
ffffffff81323d60 T invalidate_interrupt3
ffffffff81323d70 T invalidate_interrupt4
ffffffff81323d80 T invalidate_interrupt5
ffffffff81323d90 T invalidate_interrupt6
ffffffff81323da0 T invalidate_interrupt7
ffffffff81323db0 T invalidate_interrupt0
ffffffff81323e20 T threshold_interrupt
ffffffff81323e90 T thermal_interrupt
ffffffff81323f00 T call_function_single_interrupt
ffffffff81323f70 T call_function_interrupt
ffffffff81323fe0 T reschedule_interrupt
ffffffff81324050 T error_interrupt
ffffffff813240c0 T spurious_interrupt
ffffffff81324130 T irq_work_interrupt
ffffffff813241a0 T divide_error
ffffffff813241c0 T overflow
ffffffff813241e0 T bounds
ffffffff81324200 T invalid_op
ffffffff81324220 T device_not_available
ffffffff81324240 T double_fault
ffffffff81324270 T coprocessor_segment_overrun
ffffffff81324290 T invalid_TSS
ffffffff813242c0 T segment_not_present
ffffffff813242f0 T spurious_interrupt_bug
ffffffff81324310 T coprocessor_error
ffffffff81324330 T alignment_check
ffffffff81324360 T simd_coprocessor_error
ffffffff81324380 T native_load_gs_index
ffffffff8132438d t gs_change
ffffffff813243a0 T kernel_thread_helper
ffffffff813243b0 T kernel_execve
ffffffff81324480 T call_softirq
ffffffff813244b0 T xen_hypervisor_callback
ffffffff813244d0 T xen_do_hypervisor_callback
ffffffff81324500 T xen_failsafe_callback
ffffffff813245b0 T xen_hvm_callback_vector
ffffffff81324620 T native_usergs_sysret32
ffffffff81324630 T native_irq_enable_sysexit
ffffffff81324640 T ia32_sysenter_target
ffffffff813246b3 t sysenter_do_call
ffffffff813246bf t sysenter_dispatch
ffffffff813246e0 t sysexit_from_sys_call
ffffffff81324716 t sysenter_auditsys
ffffffff81324755 t sysexit_audit
ffffffff813247b0 t sysenter_tracesys
ffffffff81324850 T ia32_cstar_target
ffffffff813248d6 t cstar_do_call
ffffffff813248df t cstar_dispatch
ffffffff81324900 t sysretl_from_sys_call
ffffffff81324933 t cstar_auditsys
ffffffff81324979 t sysretl_audit
ffffffff813249d4 t cstar_tracesys
ffffffff81324a7c t ia32_badarg
ffffffff81324a90 T ia32_syscall
ffffffff81324ae6 t ia32_do_call
ffffffff81324af9 t ia32_sysret
ffffffff81324afe t ia32_ret_from_sys_call
ffffffff81324b18 t ia32_tracesys
ffffffff81324ba4 t ia32_badsys
ffffffff81324bc0 T stub32_rt_sigreturn
ffffffff81324bd0 T stub32_sigreturn
ffffffff81324be0 T stub32_sigaltstack
ffffffff81324bf0 T stub32_execve
ffffffff81324c00 T stub32_fork
ffffffff81324c10 T stub32_clone
ffffffff81324c20 T stub32_vfork
ffffffff81324c30 T stub32_iopl
ffffffff81324c40 t ia32_ptregs_common
ffffffff81324c8b T __entry_text_end
ffffffff81324d59 t bad_iret
ffffffff81324d66 t bad_gs
ffffffff81326770 T bad_from_user
ffffffff81326776 t bad_to_user
ffffffff81326e30 T __start_notes
ffffffff81326e30 T _etext
ffffffff81326fac T __stop_notes
ffffffff81326fb0 R __start___ex_table
ffffffff8132aea0 R __stop___ex_table
ffffffff8132b000 r __param_str_initcall_debug
ffffffff8132b000 R __start_rodata
ffffffff8132b020 R linux_proc_banner
ffffffff8132b060 R linux_banner
ffffffff8132b100 r xen_vcpuop_clockevent
ffffffff8132b1c0 r xen_timerop_clockevent
ffffffff8132b280 r __func__.26234
ffffffff8132b290 r __func__.28840
ffffffff8132b298 r str.28884
ffffffff8132b2b0 r __func__.28893
ffffffff8132b2c6 r __func__.22053
ffffffff8132b2d2 r __func__.22257
ffffffff8132b2e0 r print_trace_ops
ffffffff8132b300 R sys_call_table
ffffffff8132bcc0 r __func__.28008
ffffffff8132bd20 r k8_nops
ffffffff8132bd70 r __func__.20946
ffffffff8132bd90 r __func__.20998
ffffffff8132bdb0 r __func__.21008
ffffffff8132bde0 r p6_nops
ffffffff8132be40 r k8nops
ffffffff8132be80 r p6nops
ffffffff8132bec0 r CSWTCH.37
ffffffff8132c540 r regoffset_table
ffffffff8132c6a0 r user_x86_32_view
ffffffff8132c6c0 r user_x86_64_view
ffffffff8132c720 r sysfs_ops
ffffffff8132c740 R cpuinfo_op
ffffffff8132c760 R x86_cap_flags
ffffffff8132d160 R x86_power_flags
ffffffff8132d260 r exception_stack_sizes
ffffffff8132d280 R amd_erratum_383
ffffffff8132d290 R amd_erratum_400
ffffffff8132d2f0 r backtrace_ops
ffffffff8132d3a0 r amd_perfmon_event_map
ffffffff8132d400 r p6_perfmon_event_map
ffffffff8132d440 r p4_general_events
ffffffff8132d4a0 r p4_escr_table
ffffffff8132d5c0 r CSWTCH.25
ffffffff8132d5e0 r nhm_lbr_sel_map
ffffffff8132d7e0 r snb_lbr_sel_map
ffffffff8132db00 r nhm_magic.25569
ffffffff8132dbd0 r __func__.23553
ffffffff8132dbda r __func__.24029
ffffffff8132dbf0 r CSWTCH.184
ffffffff8132dc20 r mce_device_attrs
ffffffff8132dc60 r mce_chrdev_ops
ffffffff8132dd30 r shared_bank
ffffffff8132dd40 r threshold_ops
ffffffff8132dd60 r mtrr_strings
ffffffff8132dda0 r mtrr_fops
ffffffff8132de80 R generic_mtrr_ops
ffffffff8132dec0 r fixed_range_blocks
ffffffff8132e038 r no_idt
ffffffff8132e042 r __func__.27161
ffffffff8132e050 r __func__.26864
ffffffff8132e0c0 r error_interrupt_reason.33340
ffffffff8132e188 r __func__.20785
ffffffff8132e194 r __func__.20610
ffffffff8132e1a2 r __func__.20790
ffffffff8132e1c0 r __func__.20704
ffffffff8132e1e0 r __func__.20954
ffffffff8132e200 R amd_nb_misc_ids
ffffffff8132e300 r CSWTCH.21
ffffffff8132e320 r ud2a
ffffffff8132e500 r __func__.27763
ffffffff8132e520 r errata93_warning
ffffffff8132e620 r nx_warning
ffffffff8132e680 r CSWTCH.32
ffffffff8132e760 r CSWTCH.30
ffffffff8132e800 r CSWTCH.9
ffffffff8132e918 r code.25302
ffffffff8132e920 r code.25348
ffffffff8132e940 R ia32_sys_call_table
ffffffff8132f500 r execdomains_proc_fops
ffffffff8132f5e0 r tnts
ffffffff8132f607 r __param_str_pause_on_oops
ffffffff8132f615 r __param_str_panic
ffffffff8132f680 r recursion_bug_msg
ffffffff8132f6b0 r __param_str_console_suspend
ffffffff8132f6d0 r __param_str_always_kmsg_dump
ffffffff8132f6e8 r __param_str_time
ffffffff8132f700 r __param_str_ignore_loglevel
ffffffff8132f720 R cpu_active_mask
ffffffff8132f728 R cpu_present_mask
ffffffff8132f730 R cpu_online_mask
ffffffff8132f738 R cpu_possible_mask
ffffffff8132f740 R cpu_all_bits
ffffffff8132f760 R cpu_bit_bitmap
ffffffff8132f968 r __func__.23787
ffffffff8132f972 r __func__.23809
ffffffff8132fa30 r param.26175
ffffffff8132fa40 r proc_ioports_operations
ffffffff8132fb20 r proc_iomem_operations
ffffffff8132fc00 r resource_op
ffffffff8132fc20 r proc_wspace_sep
ffffffff8132fc24 r cap_last_cap
ffffffff8132fc30 r __func__.41859
ffffffff8132fc4c R __cap_empty_set
ffffffff8132fc60 r __func__.32617
ffffffff8132fdc0 r __func__.32697
ffffffff8132fe20 r module_uevent_ops
ffffffff8132fe40 r module_sysfs_ops
ffffffff8132feb0 r param.19414
ffffffff8132fec0 r hrtimer_clock_to_base_table
ffffffff8132ff00 R min_cfs_quota_period
ffffffff8132ff08 R max_cfs_quota_period
ffffffff8132ff10 R sysctl_timer_migration
ffffffff8132ff14 R sysctl_sched_time_avg
ffffffff8132ff18 R sysctl_sched_nr_migrate
ffffffff8132ff1c R sysctl_sched_features
ffffffff8132ff20 r __func__.39484
ffffffff8132ff40 r prio_to_weight
ffffffff8132ffe0 r prio_to_wmult
ffffffff81330080 r degrade_zero_ticks
ffffffff813300a0 r degrade_factor
ffffffff813300e0 R idle_sched_class
ffffffff813301a0 R fair_sched_class
ffffffff81330260 R sysctl_sched_migration_cost
ffffffff81330280 R rt_sched_class
ffffffff81330340 R stop_sched_class
ffffffff81330400 r pm_qos_array
ffffffff81330420 r __func__.20570
ffffffff81330440 r pm_qos_power_fops
ffffffff81330560 r timer_list_fops
ffffffff81330640 r __mon_yday
ffffffff81330680 r posix_clock_file_operations
ffffffff81330760 r alarmtimer_pm_ops
ffffffff81330940 r proc_dma_operations
ffffffff81330a20 r arr.29319
ffffffff81330a80 r proc_modules_operations
ffffffff81330b60 r modules_op
ffffffff81330b80 r __func__.30451
ffffffff81330ba0 r modinfo_attrs
ffffffff81330bf0 r CSWTCH.170
ffffffff81330c10 r vermagic
ffffffff81330c40 r masks.30190
ffffffff81330c80 r __param_str_nomodule
ffffffff81330ca0 r kallsyms_operations
ffffffff81330d80 r kallsyms_op
ffffffff81330dc0 R proc_cgroup_operations
ffffffff81330ec0 r cgroup_dir_inode_operations
ffffffff81330f80 r cgroup_file_operations
ffffffff81331060 r cgroup_pidlist_operations
ffffffff81331140 r cgroup_pidlist_seq_operations
ffffffff81331160 r cgroup_seqfile_operations
ffffffff81331240 r cgroup_ops
ffffffff81331300 r proc_cgroupstats_operations
ffffffff81331400 r cgroup_dops.31045
ffffffff81331480 r freezer_state_strs
ffffffff81331560 R utsns_operations
ffffffff813315a0 r kernel_config_data
ffffffff81334900 r ikconfig_file_ops
ffffffff81334a50 r __func__.37037
ffffffff81334a5c r __func__.37065
ffffffff81334a70 r __func__.37377
ffffffff81334a80 r __func__.37123
ffffffff81335220 r audit_ops
ffffffff81335a80 r audit_watch_fsnotify_ops
ffffffff81335ac0 r audit_tree_ops
ffffffff81335b00 r param.18824
ffffffff81335b10 r __param_str_irqfixup
ffffffff81335b30 r __param_str_noirqdebug
ffffffff81335b60 r irq_affinity_proc_fops
ffffffff81335c40 r irq_affinity_hint_proc_fops
ffffffff81335d20 r irq_affinity_list_proc_fops
ffffffff81335e00 r irq_node_proc_fops
ffffffff81335ee0 r irq_spurious_proc_fops
ffffffff81335fc0 r default_affinity_proc_fops
ffffffff81336090 r __param_str_rcu_cpu_stall_timeout
ffffffff813360b0 r __param_str_rcu_cpu_stall_suppress
ffffffff813360d0 r __param_str_qlowmark
ffffffff813360f0 r __param_str_qhimark
ffffffff81336100 r __param_str_blimit
ffffffff81336110 r taskstats_cmd_get_policy
ffffffff81336124 r cgroupstats_cmd_get_policy
ffffffff81336140 r perf_fops
ffffffff81336220 r perf_mmap_vmops
ffffffff81336280 R generic_file_vm_ops
ffffffff813362c0 r __func__.26414
ffffffff81336320 r __func__.32359
ffffffff81336340 r fallbacks
ffffffff813363a0 r zone_names
ffffffff813363c0 r pageflag_names
ffffffff81336550 r __func__.30290
ffffffff81336580 r shmem_export_ops
ffffffff813365c0 r shmem_ops
ffffffff81336680 r shmem_aops
ffffffff81336740 r shmem_special_inode_operations
ffffffff81336800 r shmem_inode_operations
ffffffff813368c0 r shmem_file_operations
ffffffff813369c0 r shmem_dir_inode_operations
ffffffff81336a80 r shmem_vm_ops
ffffffff81336ac0 r shmem_short_symlink_operations
ffffffff81336b80 r shmem_symlink_inode_operations
ffffffff81336d00 R vmstat_text
ffffffff81337020 r fragmentation_file_operations
ffffffff81337100 r pagetypeinfo_file_ops
ffffffff813371e0 r proc_vmstat_file_operations
ffffffff813372c0 r proc_zoneinfo_file_operations
ffffffff813373a0 r vmstat_op
ffffffff813373c0 r pagetypeinfo_op
ffffffff813373e0 r migratetype_names
ffffffff81337420 r fragmentation_op
ffffffff81337440 r zoneinfo_op
ffffffff81337500 r special_mapping_vmops
ffffffff81337540 r __func__.24559
ffffffff81337560 r proc_vmalloc_operations
ffffffff81337640 r vmalloc_op
ffffffff81337660 r swap_aops
ffffffff81337700 r proc_swaps_operations
ffffffff813377e0 r swaps_op
ffffffff81337800 r Unused_offset
ffffffff81337820 r Bad_offset
ffffffff81337840 r Unused_file
ffffffff81337860 r Bad_file
ffffffff81337880 R hugetlb_vm_ops
ffffffff813378c0 R hugetlb_infinity
ffffffff813378c8 R hugetlb_zero
ffffffff813378e0 r mpol_ops
ffffffff81337920 r policy_modes
ffffffff81337960 r __func__.19760
ffffffff81337980 r __func__.22898
ffffffff81337a60 r proc_slabinfo_operations
ffffffff81337b40 r slabinfo_op
ffffffff81337b60 r __func__.27911
ffffffff81337b80 r __func__.28010
ffffffff81337b93 r __func__.27818
ffffffff81337bc0 r action_name
ffffffff81337c00 r empty_fops.31745
ffffffff81337ce0 R generic_ro_fops
ffffffff81337dc0 R def_chr_fops
ffffffff81337e90 r uselib_flags.33193
ffffffff81337ea0 r open_exec_flags.33367
ffffffff81337eb0 r __func__.33769
ffffffff81337ec0 R rdwr_pipefifo_fops
ffffffff81337fa0 R write_pipefifo_fops
ffffffff81338080 R read_pipefifo_fops
ffffffff81338160 r pipefs_ops
ffffffff81338240 r pipefs_dentry_operations
ffffffff813382c0 r anon_pipe_buf_ops
ffffffff81338300 r packet_pipe_buf_ops
ffffffff81338340 R page_symlink_inode_operations
ffffffff81338400 r band_table
ffffffff81338440 R def_fifo_fops
ffffffff81338520 r name.30158
ffffffff81338530 r anonstring.30180
ffffffff81338540 r __func__.30395
ffffffff81338560 R empty_aops
ffffffff81338600 r bad_inode_ops
ffffffff813386c0 r bad_file_ops
ffffffff813387a0 r filesystems_proc_fops
ffffffff81338880 R mounts_op
ffffffff813388a0 r __func__.30429
ffffffff813388c0 R simple_dir_inode_operations
ffffffff81338980 R simple_dir_operations
ffffffff81338a80 r simple_dentry_operations.24815
ffffffff81338b00 r simple_super_operations
ffffffff81338bb0 r __func__.24984
ffffffff81338be0 R page_cache_pipe_buf_ops
ffffffff81338c20 r default_pipe_buf_ops
ffffffff81338c60 r user_page_pipe_buf_ops
ffffffff81338c98 r __func__.31185
ffffffff81338cc0 R def_blk_fops
ffffffff81338da0 r bdev_sops
ffffffff81338e60 r def_blk_aops
ffffffff81338f00 R proc_mountstats_operations
ffffffff81338fe0 R proc_mountinfo_operations
ffffffff813390c0 R proc_mounts_operations
ffffffff813391a0 r mnt_info.21914
ffffffff81339220 r fs_info.21905
ffffffff81339260 R inotify_fsnotify_ops
ffffffff813392a0 r __func__.27708
ffffffff813392c0 r inotify_fops
ffffffff813393a0 R fanotify_fsnotify_ops
ffffffff813393e0 r fanotify_fops
ffffffff813394c0 r eventpoll_fops
ffffffff81339590 r path_limits
ffffffff813395c0 r anon_inodefs_dentry_operations
ffffffff81339640 r anon_aops
ffffffff813396e0 r signalfd_fops
ffffffff813397c0 r timerfd_fops
ffffffff813398a0 r eventfd_fops
ffffffff813399c0 r lease_manager_ops
ffffffff81339a00 r proc_locks_operations
ffffffff81339ae0 r locks_seq_operations
ffffffff81339b00 r CSWTCH.60
ffffffff81339b20 r buf.25421
ffffffff81339b24 r buf.26557
ffffffff8133a060 R generic_acl_default_handler
ffffffff8133a0a0 R generic_acl_access_handler
ffffffff8133a100 R proc_tid_numa_maps_operations
ffffffff8133a1e0 R proc_pid_numa_maps_operations
ffffffff8133a2c0 R proc_pagemap_operations
ffffffff8133a3a0 R proc_clear_refs_operations
ffffffff8133a480 R proc_tid_smaps_operations
ffffffff8133a560 R proc_pid_smaps_operations
ffffffff8133a640 R proc_tid_maps_operations
ffffffff8133a720 R proc_pid_maps_operations
ffffffff8133a800 r proc_tid_maps_op
ffffffff8133a820 r proc_pid_smaps_op
ffffffff8133a840 r proc_tid_smaps_op
ffffffff8133a860 r proc_pid_numa_maps_op
ffffffff8133a880 r proc_tid_numa_maps_op
ffffffff8133a8a0 r proc_pid_maps_op
ffffffff8133a8c0 r proc_reg_file_ops_no_compat
ffffffff8133a9a0 r proc_reg_file_ops
ffffffff8133aa80 r proc_sops
ffffffff8133ab40 r tokens
ffffffff8133ab80 r proc_root_inode_operations
ffffffff8133ac40 r proc_root_operations
ffffffff8133ad40 R pid_dentry_operations
ffffffff8133adc0 r proc_def_inode_operations
ffffffff8133ae80 r proc_base_stuff
ffffffff8133aec0 r proc_tgid_base_inode_operations
ffffffff8133af80 r proc_tgid_base_operations
ffffffff8133b060 r tgid_base_stuff
ffffffff8133b600 r proc_pid_link_inode_operations
ffffffff8133b6c0 r proc_tid_base_inode_operations
ffffffff8133b780 r proc_tid_base_operations
ffffffff8133b860 r tid_base_stuff
ffffffff8133bd80 r tid_fd_dentry_operations
ffffffff8133be00 r proc_fdinfo_file_operations
ffffffff8133bee0 r lnames
ffffffff8133c000 r proc_self_inode_operations
ffffffff8133c0c0 r proc_task_inode_operations
ffffffff8133c180 r proc_task_operations
ffffffff8133c280 r proc_fd_inode_operations
ffffffff8133c340 r proc_fd_operations
ffffffff8133c440 r proc_fdinfo_inode_operations
ffffffff8133c500 r proc_fdinfo_operations
ffffffff8133c5e0 r proc_environ_operations
ffffffff8133c6c0 r proc_info_file_operations
ffffffff8133c7a0 r proc_single_file_operations
ffffffff8133c880 r proc_pid_set_comm_operations
ffffffff8133c960 r proc_mem_operations
ffffffff8133ca40 r proc_oom_adjust_operations
ffffffff8133cb20 r proc_oom_score_adj_operations
ffffffff8133cc00 r proc_loginuid_operations
ffffffff8133cce0 r proc_sessionid_operations
ffffffff8133cdc0 r proc_coredump_filter_operations
ffffffff8133cec0 r proc_dentry_operations
ffffffff8133cf40 r proc_dir_operations
ffffffff8133d040 r proc_dir_inode_operations
ffffffff8133d100 r proc_link_inode_operations
ffffffff8133d1c0 r proc_file_operations
ffffffff8133d2c0 r proc_file_inode_operations
ffffffff8133d380 r __func__.20802
ffffffff8133d3a0 r task_state_array
ffffffff8133d400 r proc_tty_drivers_operations
ffffffff8133d4e0 r tty_drivers_op
ffffffff8133d500 r cmdline_proc_fops
ffffffff8133d5e0 r proc_consoles_operations
ffffffff8133d6c0 r consoles_op
ffffffff8133d6e0 r con_flags.16565
ffffffff8133d700 r proc_cpuinfo_operations
ffffffff8133d7e0 r proc_devinfo_operations
ffffffff8133d8c0 r devinfo_ops
ffffffff8133d8e0 r proc_interrupts_operations
ffffffff8133d9c0 r int_seq_ops
ffffffff8133d9e0 r loadavg_proc_fops
ffffffff8133dac0 r meminfo_proc_fops
ffffffff8133dba0 r proc_stat_operations
ffffffff8133dc80 r uptime_proc_fops
ffffffff8133dd60 r version_proc_fops
ffffffff8133de40 r proc_softirqs_operations
ffffffff8133df40 R proc_ns_dir_inode_operations
ffffffff8133e000 R proc_ns_dir_operations
ffffffff8133e0e0 r ns_file_operations
ffffffff8133e1c0 r null_path.24433
ffffffff8133e200 r proc_sys_dir_operations
ffffffff8133e2c0 r proc_sys_dir_file_operations
ffffffff8133e3c0 r proc_sys_dentry_operations
ffffffff8133e440 r proc_sys_inode_operations
ffffffff8133e500 r proc_sys_file_operations
ffffffff8133e600 R proc_net_operations
ffffffff8133e700 R proc_net_inode_operations
ffffffff8133e7c0 r proc_kcore_operations
ffffffff8133e8a0 r proc_kmsg_operations
ffffffff8133e980 r proc_kpagecount_operations
ffffffff8133ea60 r proc_kpageflags_operations
ffffffff8133eb40 r sysfs_aops
ffffffff8133ec00 r sysfs_inode_operations
ffffffff8133ecc0 R sysfs_file_operations
ffffffff8133edc0 R sysfs_dir_operations
ffffffff8133eec0 R sysfs_dir_inode_operations
ffffffff8133ef80 r sysfs_dentry_ops
ffffffff8133f000 R sysfs_symlink_inode_operations
ffffffff8133f0c0 r sysfs_ops
ffffffff8133f170 r __func__.21523
ffffffff8133f1a0 R bin_fops
ffffffff8133f280 r bin_vm_ops
ffffffff8133f2c0 r configfs_aops
ffffffff8133f380 r configfs_inode_operations
ffffffff8133f440 R configfs_file_operations
ffffffff8133f540 R configfs_dir_operations
ffffffff8133f640 R configfs_root_inode_operations
ffffffff8133f700 R configfs_dir_inode_operations
ffffffff8133f7c0 R configfs_dentry_ops
ffffffff8133f840 R configfs_symlink_inode_operations
ffffffff8133f900 r configfs_ops
ffffffff8133f9c0 r devpts_sops
ffffffff8133fa80 r tokens
ffffffff8133fac0 r ramfs_dir_inode_operations
ffffffff8133fb80 r tokens
ffffffff8133fba0 r ramfs_ops
ffffffff8133fc80 R ramfs_file_inode_operations
ffffffff8133fd40 R ramfs_file_operations
ffffffff8133fe20 R ramfs_aops
ffffffff8133ff00 R hugetlbfs_file_operations
ffffffff8133ffe0 r hugetlbfs_ops
ffffffff813400c0 r hugetlbfs_dir_inode_operations
ffffffff81340180 r tokens
ffffffff81340200 r hugetlbfs_aops
ffffffff813402c0 r hugetlbfs_inode_operations
ffffffff81340380 r utf8_table
ffffffff81340460 r charset2uni
ffffffff81340660 r page_uni2charset
ffffffff81340e60 r charset2lower
ffffffff81340f60 r charset2upper
ffffffff81341060 r page00
ffffffff81341180 r pstore_file_operations
ffffffff81341260 r pstore_ops
ffffffff81341340 r pstore_dir_inode_operations
ffffffff81341400 r tokens
ffffffff81341420 r CSWTCH.23
ffffffff81341450 r __param_str_backend
ffffffff81341500 r sysvipc_proc_fops
ffffffff813415e0 r sysvipc_proc_seqops
ffffffff813416c0 r shm_file_operations_huge
ffffffff813417a0 r shm_vm_ops
ffffffff813417e0 r shm_file_operations
ffffffff813418c0 r mqueue_super_ops
ffffffff81341980 r mqueue_file_operations
ffffffff81341a80 r mqueue_dir_inode_operations
ffffffff81341b40 r oflag2acc.40042
ffffffff81341b60 R ipcns_operations
ffffffff81341c38 r __func__.30831
ffffffff81341ca0 r proc_crypto_ops
ffffffff81341d80 r crypto_seq_ops
ffffffff81341da0 R crypto_nivaead_type
ffffffff81341e00 R crypto_aead_type
ffffffff81341e60 R crypto_givcipher_type
ffffffff81341ec0 R crypto_ablkcipher_type
ffffffff81341f20 R crypto_blkcipher_type
ffffffff81341f80 R crypto_ahash_type
ffffffff81341fe0 r crypto_shash_type
ffffffff81342040 r crypto_pcomp_type
ffffffff813420a0 R crypto_il_tab
ffffffff813430a0 R crypto_it_tab
ffffffff813440a0 R crypto_fl_tab
ffffffff813450a0 R crypto_ft_tab
ffffffff813460a0 r rco_tab
ffffffff813460e0 R crypto_rng_type
ffffffff81346180 r __func__.29257
ffffffff813461a0 r elv_sysfs_ops
ffffffff813461c0 r __func__.30914
ffffffff813461e0 r __func__.31011
ffffffff81346200 r __func__.31055
ffffffff81346213 r __func__.30445
ffffffff81346230 r __func__.27102
ffffffff81346240 r __func__.27150
ffffffff81346260 r __func__.27163
ffffffff81346280 r queue_sysfs_ops
ffffffff813462a0 r __func__.27327
ffffffff813462c0 r __func__.27358
ffffffff813462e0 r __func__.27368
ffffffff81346300 r __func__.27603
ffffffff81346320 r proc_diskstats_operations
ffffffff81346400 r proc_partitions_operations
ffffffff813464e0 r diskstats_op
ffffffff81346500 r partitions_op
ffffffff81346520 r __param_str_events_dfl_poll_msecs
ffffffff81346540 r disk_events_dfl_poll_msecs_param_ops
ffffffff81346560 r dev_attr_events
ffffffff81346580 r dev_attr_events_async
ffffffff813465a0 r dev_attr_events_poll_msecs
ffffffff813465c0 R scsi_command_size_tbl
ffffffff813465d0 r __func__.28680
ffffffff81346600 r check_part
ffffffff81346620 r subtypes
ffffffff813466a0 r bsg_fops
ffffffff81346770 r CSWTCH.36
ffffffff81346800 r fd_ioctl_trans_table
ffffffff81346860 R _ctype
ffffffff81346960 r compressed_formats
ffffffff81346a08 r lzop_magic
ffffffff81346a60 R kobj_sysfs_ops
ffffffff81346a80 r __func__.12132
ffffffff81346aa0 r __func__.12267
ffffffff81346ac0 r kobject_actions
ffffffff813474e8 r io_spec.38765
ffffffff813474f0 r mem_spec.38766
ffffffff813474f8 r dec_spec.38768
ffffffff81347500 r bus_spec.38767
ffffffff81347510 r CSWTCH.83
ffffffff81347530 r CSWTCH.84
ffffffff81347550 r be.38880
ffffffff81347560 r le.38881
ffffffff813475c0 R inat_group_table_22
ffffffff813475e0 R inat_group_table_12
ffffffff81347600 R inat_group_table_18_2
ffffffff81347620 R inat_group_table_18
ffffffff81347640 R inat_group_table_15_1
ffffffff81347660 R inat_group_table_15
ffffffff81347680 R inat_group_table_14_1
ffffffff813476a0 R inat_group_table_14
ffffffff813476c0 R inat_group_table_13_1
ffffffff813476e0 R inat_group_table_13
ffffffff81347700 R inat_group_table_21_2
ffffffff81347720 R inat_group_table_21_1
ffffffff81347740 R inat_group_table_21
ffffffff81347760 R inat_group_table_10
ffffffff81347780 R inat_group_table_9
ffffffff813477a0 R inat_group_table_8
ffffffff813477c0 R inat_group_table_7
ffffffff813477e0 R inat_group_table_6
ffffffff81347800 R inat_group_table_5
ffffffff81347820 R inat_escape_table_3_3
ffffffff81347c20 R inat_escape_table_3_1
ffffffff81348020 R inat_escape_table_3
ffffffff81348420 R inat_escape_table_2_3
ffffffff81348820 R inat_escape_table_2_2
ffffffff81348c20 R inat_escape_table_2_1
ffffffff81349020 R inat_escape_table_2
ffffffff81349420 R inat_escape_table_1_3
ffffffff81349820 R inat_escape_table_1_2
ffffffff81349c20 R inat_escape_table_1_1
ffffffff8134a020 R inat_escape_table_1
ffffffff8134a420 R inat_primary_table
ffffffff8134a8d0 R hex_asc
ffffffff8134a9c0 R byte_rev_table
ffffffff8134aac0 R crc16_table
ffffffff8134adc0 r lenfix.2649
ffffffff8134b5c0 r distfix.2650
ffffffff8134b640 r order.2681
ffffffff8134b680 r lbase.2594
ffffffff8134b6c0 r dbase.2596
ffffffff8134b700 r lext.2595
ffffffff8134b740 r dext.2597
ffffffff8134b850 r mask_to_bit_num.11799
ffffffff8134b858 r mask_to_allowed_status.11798
ffffffff8134b860 r branch_table.11828
ffffffff8134b8c0 r nla_attr_minlen
ffffffff8134b8d8 r __func__.13682
ffffffff8134b900 r pci_vpd_pci22_ops
ffffffff8134b920 r pcie_link_speed
ffffffff8134b930 r agp_speeds
ffffffff8134b940 r pcix_bus_speed
ffffffff8134b950 r CSWTCH.210
ffffffff8134b960 r __func__.21752
ffffffff8134b980 r __func__.21817
ffffffff8134b9a0 r __func__.21831
ffffffff8134b9c0 R pci_dev_pm_ops
ffffffff8134ba80 r __func__.28878
ffffffff8134baa0 r __func__.28852
ffffffff8134bac0 r __func__.28752
ffffffff8134bae0 r __func__.28813
ffffffff8134bb00 r __func__.28840
ffffffff8134bb10 r __func__.28736
ffffffff8134bb23 r __func__.28803
ffffffff8134bb40 r proc_bus_pci_operations
ffffffff8134bc20 r proc_bus_pci_dev_operations
ffffffff8134bd00 r proc_bus_pci_devices_op
ffffffff8134bd20 r pci_bus_speed_strings
ffffffff8134bde0 r pci_slot_sysfs_ops
ffffffff8134bea0 r pci_dev_reset_methods
ffffffff8134bee0 r policy_str
ffffffff8134bf00 r __param_str_policy
ffffffff8134bf20 r port_pci_ids
ffffffff8134bf60 r pcie_portdrv_pm_ops
ffffffff8134c020 r aer_error_severity_string
ffffffff8134c040 r aer_agent_string
ffffffff8134c060 r aer_error_layer
ffffffff8134c078 r CSWTCH.31
ffffffff8134c080 r __param_str_nosourceid
ffffffff8134c0a0 r __param_str_forceload
ffffffff8134c0c0 r msi_irq_sysfs_ops
ffffffff8134c0e0 r CSWTCH.7
ffffffff8134c0f4 r state_conv.28739
ffffffff8134c100 r device_label_dsm_uuid
ffffffff8134c1b0 r CSWTCH.65
ffffffff8134c1b2 r mask.29468
ffffffff8134c1c0 r fb_proc_fops
ffffffff8134c2a0 r fb_fops
ffffffff8134c380 r proc_fb_seq_ops
ffffffff8134c3a0 r brokendb
ffffffff8134c3c4 r edid_v1_header
ffffffff8134c3e0 r default_2_colors
ffffffff8134c420 r default_4_colors
ffffffff8134c460 r default_8_colors
ffffffff8134c4a0 r default_16_colors
ffffffff8134c740 R vesa_modes
ffffffff8134cfc0 R cea_modes
ffffffff8134dfc0 r modedb
ffffffff8134eec0 r fb_cvt_vbi_tab
ffffffff8134eee0 R dummy_con
ffffffff8134f000 R vga_con
ffffffff8134f1c0 r CSWTCH.380
ffffffff8134f200 r fb_con
ffffffff8134f320 r fonts
ffffffff8134f340 R font_vga_8x8
ffffffff8134f380 r fontdata_8x8
ffffffff8134fb80 R font_vga_8x16
ffffffff8134fbc0 r fontdata_8x16
ffffffff81350bc0 r __param_str_nologo
ffffffff81350ce0 r CSWTCH.44
ffffffff81350d28 r cfb_tab32
ffffffff81350d40 r cfb_tab8_le
ffffffff81350d80 r cfb_tab16_le
ffffffff81350d90 r CSWTCH.98
ffffffff81350e00 r mps_inti_flags_trigger
ffffffff81350e20 r mps_inti_flags_polarity
ffffffff81350e40 r __func__.31534
ffffffff81350e58 r __func__.28766
ffffffff81350e68 r __param_str_bfs
ffffffff81350e78 r __param_str_gts
ffffffff81350e90 r opc_map_to_uuid.29883
ffffffff81350ed0 r __func__.30057
ffffffff81350eda r _acpi_module_name
ffffffff81350f80 r _acpi_module_name
ffffffff81350f90 r __func__.24377
ffffffff81350fb0 r ec_device_ids
ffffffff81350fe0 r __param_str_ec_delay
ffffffff81350ff0 r root_device_ids
ffffffff81351020 r _acpi_module_name
ffffffff81351030 r link_device_ids
ffffffff81351060 r _acpi_module_name
ffffffff81351070 r prt_quirks
ffffffff813510f0 r medion_md9580
ffffffff813513a0 r dell_optiplex
ffffffff81351650 r hp_t5710
ffffffff81351900 r power_device_ids
ffffffff81351930 r acpi_system_event_ops
ffffffff81351a05 r _acpi_module_name
ffffffff81351a10 r pm_profile_attr
ffffffff81351a30 r __param_str_acpica_version
ffffffff81351a50 r __param_str_aml_debug_output
ffffffff81351a70 r _acpi_module_name
ffffffff81351a78 r _acpi_module_name
ffffffff81351ad8 r _acpi_module_name
ffffffff81351ae0 r _acpi_module_name
ffffffff81351ae8 r _acpi_module_name
ffffffff81351af8 r _acpi_module_name
ffffffff81351b08 r _acpi_module_name
ffffffff81351b18 r _acpi_module_name
ffffffff81351b28 r _acpi_module_name
ffffffff81351b88 r _acpi_module_name
ffffffff81351b90 r acpi_gbl_op_type_dispatch
ffffffff81351bf0 r _acpi_module_name
ffffffff81351c70 r _acpi_module_name
ffffffff81351c80 r _acpi_module_name
ffffffff81351c90 r _acpi_module_name
ffffffff81351ca0 r _acpi_module_name
ffffffff81351ca8 r _acpi_module_name
ffffffff81351cb0 r _acpi_module_name
ffffffff81351cc0 r _acpi_module_name
ffffffff81351cd0 r _acpi_module_name
ffffffff81351ce0 r _acpi_module_name
ffffffff81351ce8 r _acpi_module_name
ffffffff81351cf0 r acpi_gbl_default_address_spaces
ffffffff81351cf8 r _acpi_module_name
ffffffff81351d08 r _acpi_module_name
ffffffff81351d18 r _acpi_module_name
ffffffff81351d20 r _acpi_module_name
ffffffff81351d30 r _acpi_module_name
ffffffff81351d38 r _acpi_module_name
ffffffff81351d80 r CSWTCH.8
ffffffff81351d88 r _acpi_module_name
ffffffff81351d98 r _acpi_module_name
ffffffff81351da8 r _acpi_module_name
ffffffff81351db0 r _acpi_module_name
ffffffff81351db8 r _acpi_module_name
ffffffff81351dc0 r _acpi_module_name
ffffffff81351e18 r _acpi_module_name
ffffffff81351e28 r _acpi_module_name
ffffffff81351e38 r _acpi_module_name
ffffffff81351e78 r _acpi_module_name
ffffffff81351eb8 r _acpi_module_name
ffffffff81351f30 r _acpi_module_name
ffffffff81351f38 r _acpi_module_name
ffffffff81351ff0 r _acpi_module_name
ffffffff81352100 r _acpi_module_name
ffffffff813521c0 r _acpi_module_name
ffffffff81352200 r _acpi_module_name
ffffffff81352208 r _acpi_module_name
ffffffff81352218 r _acpi_module_name
ffffffff81352228 r _acpi_module_name
ffffffff81352230 r _acpi_module_name
ffffffff81352238 r _acpi_module_name
ffffffff81352241 r _acpi_module_name
ffffffff813522b8 r _acpi_module_name
ffffffff813522c0 r _acpi_module_name
ffffffff813522d0 r acpi_protected_ports
ffffffff813523e0 r _acpi_module_name
ffffffff813523e8 r CSWTCH.7
ffffffff813523f0 r _acpi_module_name
ffffffff81352400 r _acpi_module_name
ffffffff81352410 r _acpi_module_name
ffffffff81352418 r _acpi_module_name
ffffffff81352520 r _acpi_module_name
ffffffff81352528 r _acpi_module_name
ffffffff81352530 r _acpi_module_name
ffffffff813525c8 r _acpi_module_name
ffffffff813525e0 r acpi_rtype_names
ffffffff81352610 r predefined_names
ffffffff81352ce0 r acpi_ns_repairable_names
ffffffff81352d60 r _acpi_module_name
ffffffff81352d70 r _acpi_module_name
ffffffff81352d80 r _acpi_module_name
ffffffff81352d88 r _acpi_module_name
ffffffff81352d98 r _acpi_module_name
ffffffff81352f10 r _acpi_module_name
ffffffff81352f17 r _acpi_module_name
ffffffff81352f20 R acpi_gbl_aml_op_info
ffffffff81353730 r acpi_gbl_short_op_index
ffffffff81353830 r acpi_gbl_long_op_index
ffffffff813538c0 r acpi_gbl_argument_count
ffffffff81353918 r _acpi_module_name
ffffffff81353920 r _acpi_module_name
ffffffff813539c8 r _acpi_module_name
ffffffff813539e0 R acpi_gbl_resource_struct_serial_bus_sizes
ffffffff813539e4 R acpi_gbl_aml_resource_serial_bus_sizes
ffffffff813539f0 R acpi_gbl_resource_struct_sizes
ffffffff81353a10 R acpi_gbl_aml_resource_sizes
ffffffff81353a24 r _acpi_module_name
ffffffff81353c30 r _acpi_module_name
ffffffff81353c78 r _acpi_module_name
ffffffff81353c80 r _acpi_module_name
ffffffff81353c90 r fadt_info_table
ffffffff81353d10 r fadt_pm_info_table
ffffffff81353d50 r _acpi_module_name
ffffffff81353d60 r _acpi_module_name
ffffffff81353d68 r _acpi_module_name
ffffffff81353d70 r _acpi_module_name
ffffffff81353d80 r _acpi_module_name
ffffffff81353e28 r _acpi_module_name
ffffffff81353e30 R acpi_gbl_ns_properties
ffffffff81353e50 r _acpi_module_name
ffffffff81353e60 r acpi_gbl_hex_to_ascii
ffffffff81353e70 r acpi_gbl_event_types
ffffffff81353ea0 r acpi_gbl_ns_type_names
ffffffff81353f98 r acpi_gbl_bad_type
ffffffff81353fb0 r acpi_gbl_desc_type_names
ffffffff81354030 r acpi_gbl_ref_class_names
ffffffff81354178 r _acpi_module_name
ffffffff81354181 r _acpi_module_name
ffffffff81354188 r CSWTCH.7
ffffffff81354190 R acpi_gbl_pre_defined_names
ffffffff81354280 r _acpi_module_name
ffffffff81354287 r _acpi_module_name
ffffffff81354290 r _acpi_module_name
ffffffff81354298 r _acpi_module_name
ffffffff813542a1 r _acpi_module_name
ffffffff813542b0 R acpi_gbl_resource_aml_serial_bus_sizes
ffffffff813542c0 R acpi_gbl_resource_aml_sizes
ffffffff813542e0 r acpi_gbl_resource_types
ffffffff81354300 r _acpi_module_name
ffffffff81354308 r _acpi_module_name
ffffffff81354310 r _acpi_module_name
ffffffff81354320 r hest_esrc_len_tab
ffffffff81354360 r cper_severity_strs
ffffffff81354380 r cper_proc_type_strs
ffffffff81354390 r cper_proc_isa_strs
ffffffff813543c0 r cper_proc_op_strs
ffffffff813543e0 r cper_mem_err_type_strs
ffffffff81354460 r cper_pcie_port_type_strs
ffffffff813544c0 r __func__.25411
ffffffff813544e0 r CSWTCH.35
ffffffff81354500 r __func__.25474
ffffffff81354510 r __func__.30575
ffffffff81354530 r CSWTCH.70
ffffffff81354540 r __func__.30613
ffffffff81354553 r __param_str_disable
ffffffff81354560 r xtab.15093
ffffffff81354580 r xtab.15110
ffffffff81354590 r CSWTCH.14
ffffffff813545c0 r CSWTCH.37
ffffffff81354690 r CSWTCH.39
ffffffff813546a0 r pnp_dev_table
ffffffff81354860 r CSWTCH.38
ffffffff81354880 r name.19735
ffffffff813548d0 r ring_ops_pv
ffffffff813548e0 r ring_ops_hvm
ffffffff81354900 r xsd_errors
ffffffff813549e0 r __func__.21171
ffffffff813549f0 r __func__.27324
ffffffff81354a20 R xen_xenbus_fops
ffffffff81354b00 R xenbus_backend_fops
ffffffff81354be0 r xenbus_pm_ops
ffffffff81354ca0 r balloon_attrs
ffffffff81354cd0 r balloon_info_group
ffffffff81354cf0 r selfballoon_group
ffffffff81354d10 r xen_properties_group
ffffffff81354d30 r xen_compilation_group
ffffffff81354d50 r version_group
ffffffff81354d70 r hyp_sysfs_ops
ffffffff81354da0 r hung_up_tty_fops
ffffffff81354e70 r __func__.27183
ffffffff81354e7d r __func__.27246
ffffffff81354e90 r __func__.27232
ffffffff81354eb0 r ptychar
ffffffff81354ee0 r tty_fops
ffffffff81354fb0 r __func__.27282
ffffffff81354fc0 r console_fops
ffffffff813550c0 r baud_table
ffffffff81355140 r baud_bits
ffffffff813551c0 R tty_ldiscs_proc_fops
ffffffff813552a0 r tty_ldiscs_seq_ops
ffffffff813552c0 r __func__.25998
ffffffff813552e0 r ptm_unix98_ops
ffffffff813553e0 r pty_unix98_ops
ffffffff813554e0 r vcs_fops
ffffffff81355670 r __func__.26935
ffffffff81355680 r k_handler
ffffffff81355700 r ret_diacr.26734
ffffffff81355710 r app_map.26759
ffffffff81355740 r fn_handler
ffffffff813557e0 r x86_keycodes
ffffffff813559e0 r max_vals
ffffffff81355a20 r CSWTCH.125
ffffffff81355a30 r __param_str_brl_nbchords
ffffffff81355a50 r __param_str_brl_timeout
ffffffff81355a80 r kbd_ids
ffffffff813563e0 r con_ops
ffffffff813564e0 r utf8_length_changes.26907
ffffffff81356500 r double_width.26880
ffffffff81356560 r __param_str_underline
ffffffff8135656d r __param_str_italic
ffffffff81356577 r __param_str_default_blu
ffffffff813565a0 r __param_arr_default_blu
ffffffff813565c0 r __param_str_default_grn
ffffffff813565e0 r __param_arr_default_grn
ffffffff81356600 r __param_str_default_red
ffffffff81356620 r __param_arr_default_red
ffffffff81356640 r __param_str_consoleblank
ffffffff8135664d r __param_str_cur_default
ffffffff81356660 r __param_str_global_cursor_default
ffffffff81356680 r __param_str_default_utf8
ffffffff813566a0 r hvc_ops
ffffffff813567a0 r xencons_ids
ffffffff813567e0 r memory_fops
ffffffff813568c0 r devlist
ffffffff81356a40 r mmap_mem_ops
ffffffff81356a80 r mem_fops
ffffffff81356b60 r null_fops
ffffffff81356c40 r port_fops
ffffffff81356d20 r zero_fops
ffffffff81356e00 r full_fops
ffffffff81356ee0 r kmsg_fops
ffffffff81356fc0 R urandom_fops
ffffffff813570a0 R random_fops
ffffffff81357180 r twist_table.24979
ffffffff813571a0 r misc_proc_fops
ffffffff81357280 r misc_fops
ffffffff81357360 r misc_seq_ops
ffffffff81357380 r nvram_proc_fops
ffffffff81357460 r floppy_types
ffffffff813574a0 r gfx_types
ffffffff813574c0 r nvram_fops
ffffffff813575a0 r CSWTCH.67
ffffffff813575c0 r vga_arb_device_fops
ffffffff813576a0 r device_uevent_ops
ffffffff813576c0 r dev_sysfs_ops
ffffffff813576e0 r __func__.15168
ffffffff813576f0 r bus_uevent_ops
ffffffff81357710 r driver_sysfs_ops
ffffffff81357730 r bus_sysfs_ops
ffffffff81357748 r __func__.17932
ffffffff81357755 r __func__.17957
ffffffff81357770 r __func__.18599
ffffffff81357790 r __func__.18622
ffffffff813577b0 r class_sysfs_ops
ffffffff813577e0 r platform_dev_pm_ops
ffffffff813578a0 R power_group_name
ffffffff813578a6 r enabled
ffffffff813578ae r disabled
ffffffff813578b7 r ctrl_auto
ffffffff813578bc r ctrl_on
ffffffff813578c0 r __func__.13023
ffffffff813578e0 r __func__.13038
ffffffff81357900 r __func__.13050
ffffffff81357920 r __func__.22055
ffffffff81357931 r __func__.22212
ffffffff81357940 r __func__.22250
ffffffff81357950 r __func__.17195
ffffffff81357970 r __func__.24828
ffffffff81357990 r __func__.24818
ffffffff813579b0 r __func__.24840
ffffffff813579d0 r __func__.24733
ffffffff81357a40 r scsi_device_types
ffffffff81357ae0 r __param_str_scsi_logging_level
ffffffff81357cc0 r variable_length_arr
ffffffff81357f00 r maint_in_arr
ffffffff81357f80 r maint_out_arr
ffffffff81357fe0 r serv_in16_arr
ffffffff81358020 r serv_out16_arr
ffffffff81358040 r cdb_byte0_names
ffffffff81358640 r CSWTCH.21
ffffffff81358860 r snstext
ffffffff813588e0 r additional
ffffffff8135afc0 r additional2
ffffffff8135b040 r driverbyte_table
ffffffff8135b0a0 r hostbyte_table
ffffffff8135b3e0 r __func__.29583
ffffffff8135b410 r __func__.29768
ffffffff8135b422 r __func__.29676
ffffffff8135b430 r __func__.29695
ffffffff8135b450 r __func__.29684
ffffffff8135b470 r __func__.30036
ffffffff8135b488 r __func__.29817
ffffffff8135b4a0 r __func__.29605
ffffffff8135b4c0 r __func__.30083
ffffffff8135b4e0 r __func__.30216
ffffffff8135b5e0 r __func__.29883
ffffffff8135b5f0 r __func__.30302
ffffffff8135b620 r __func__.29114
ffffffff8135b640 r __func__.29060
ffffffff8135b650 r __func__.29290
ffffffff8135b670 r __func__.29392
ffffffff8135b690 r __func__.29418
ffffffff8135b6b0 r __func__.29408
ffffffff8135b6d0 r __param_str_inq_timeout
ffffffff8135b6f0 r __param_str_max_report_luns
ffffffff8135b709 r __param_str_scan
ffffffff8135b720 r __param_string_scan
ffffffff8135b730 r __param_str_max_luns
ffffffff8135b760 r sdev_states
ffffffff8135b7e0 r shost_states
ffffffff8135b860 r __func__.27904
ffffffff8135b880 r spaces
ffffffff8135b8a0 r __func__.27886
ffffffff8135b8c0 r scsi_devinfo_proc_fops
ffffffff8135b9a0 r scsi_devinfo_seq_ops
ffffffff8135b9c0 r __func__.27958
ffffffff8135b9e0 r __param_str_default_dev_flags
ffffffff8135ba00 r __param_str_dev_flags
ffffffff8135ba20 r __param_string_dev_flags
ffffffff8135ba40 r __func__.28319
ffffffff8135ba60 r __func__.28329
ffffffff8135ba80 r proc_scsi_operations
ffffffff8135bb60 r scsi_seq_ops
ffffffff8135bb80 R scsi_bus_pm_ops
ffffffff8135bd50 r cap.29265
ffffffff8135bd60 r sd_fops
ffffffff8135bdc0 r lbp_mode
ffffffff8135be00 r sd_cache_types
ffffffff8135be60 R ata_dummy_port_info
ffffffff8135bea0 R sata_port_ops
ffffffff8135c080 R ata_base_port_ops
ffffffff8135c250 R sata_deb_timing_long
ffffffff8135c270 R sata_deb_timing_hotplug
ffffffff8135c290 R sata_deb_timing_normal
ffffffff8135c2b0 r ata_rw_cmds
ffffffff8135c2e0 r ata_xfer_tbl
ffffffff8135c320 r xfer_mode_str.37179
ffffffff8135c3c0 r spd_str.37186
ffffffff8135c3e0 r __func__.37348
ffffffff8135c3f0 r __func__.37391
ffffffff8135c404 r CSWTCH.237
ffffffff8135c420 r ata_device_blacklist
ffffffff8135ca40 r ata_timing
ffffffff8135cc10 r CSWTCH.234
ffffffff8135cc28 r __func__.38431
ffffffff8135cc40 r ata_port_pm_ops
ffffffff8135cd00 r __param_str_atapi_an
ffffffff8135cd10 r __param_str_allow_tpm
ffffffff8135cd21 r __param_str_noacpi
ffffffff8135cd30 r __param_str_ata_probe_timeout
ffffffff8135cd49 r __param_str_dma
ffffffff8135cd60 r __param_str_ignore_hpa
ffffffff8135cd72 r __param_str_fua
ffffffff8135cd80 r __param_str_atapi_passthru16
ffffffff8135cda0 r __param_str_atapi_dmadir
ffffffff8135cdc0 r __param_str_atapi_enabled
ffffffff8135cdd5 r __param_str_force
ffffffff8135cdf0 r __param_string_force
ffffffff8135d960 r ata_lpm_policy_names
ffffffff8135d980 r CSWTCH.114
ffffffff8135d9a0 r sense_table.35561
ffffffff8135d9e0 r stat_table.35562
ffffffff8135d9f4 r def_rw_recovery_mpage
ffffffff8135da00 r def_control_mpage
ffffffff8135da10 r def_cache_mpage
ffffffff8135da40 r ata_eh_cmd_timeout_table
ffffffff8135daa0 r dma_dnxfer_sel.35671
ffffffff8135daa8 r pio_dnxfer_sel.35672
ffffffff8135dac0 r cmd_descr.35705
ffffffff8135dfe0 r dma_str.35732
ffffffff8135e000 r prot_str.35733
ffffffff8135e040 r CSWTCH.105
ffffffff8135e060 r ata_eh_reset_timeouts
ffffffff8135e0a0 r ata_eh_identify_timeouts
ffffffff8135e0c0 r ata_eh_other_timeouts
ffffffff8135e0e0 r ata_eh_flush_timeouts
ffffffff8135e100 r dev_attr_nr_pmp_links
ffffffff8135e120 r dev_attr_idle_irq
ffffffff8135e140 r dev_attr_hw_sata_spd_limit
ffffffff8135e160 r dev_attr_sata_spd_limit
ffffffff8135e180 r dev_attr_sata_spd
ffffffff8135e1a0 r dev_attr_class
ffffffff8135e1c0 r dev_attr_pio_mode
ffffffff8135e1e0 r dev_attr_dma_mode
ffffffff8135e200 r dev_attr_xfer_mode
ffffffff8135e220 r dev_attr_spdn_cnt
ffffffff8135e240 r dev_attr_ering
ffffffff8135e260 r dev_attr_id
ffffffff8135e280 r dev_attr_gscr
ffffffff8135e2a0 r ata_class_names
ffffffff8135e340 r ata_xfer_names
ffffffff8135e4c0 r ata_err_names
ffffffff8135e580 R ata_bmdma32_port_ops
ffffffff8135e760 R ata_bmdma_port_ops
ffffffff8135e940 R ata_sff_port_ops
ffffffff8135eb10 r CSWTCH.67
ffffffff8135eb30 r __func__.33174
ffffffff8135eb50 r __func__.35842
ffffffff8135eb60 r __func__.35919
ffffffff8135eb80 r __param_str_acpi_gtf_filter
ffffffff8135eba0 r loopback_ethtool_ops
ffffffff8135ed00 r loopback_ops
ffffffff8135ee60 r serio_pm_ops
ffffffff8135ef20 r __param_str_debug
ffffffff8135ef2c r __param_str_nopnp
ffffffff8135ef38 r __param_str_dritek
ffffffff8135ef50 r __param_str_notimeout
ffffffff8135ef60 r __param_str_noloop
ffffffff8135ef6d r __param_str_dumbkbd
ffffffff8135ef7b r __param_str_direct
ffffffff8135ef88 r __param_str_reset
ffffffff8135ef94 r __param_str_unlock
ffffffff8135efa1 r __param_str_nomux
ffffffff8135efad r __param_str_noaux
ffffffff8135efb9 r __param_str_nokbd
ffffffff8135efe0 r i8042_pm_ops
ffffffff8135f098 r keyboard_ids.22695
ffffffff8135f210 r __func__.23203
ffffffff8135f240 r input_devices_fileops
ffffffff8135f320 r input_handlers_fileops
ffffffff8135f400 r input_fops
ffffffff8135f4e0 r input_devices_seq_ops
ffffffff8135f500 r input_handlers_seq_ops
ffffffff8135f520 r input_dev_pm_ops
ffffffff8135f848 r mousedev_imex_seq
ffffffff8135f84e r mousedev_imps_seq
ffffffff8135f860 r __param_str_tap_time
ffffffff8135f872 r __param_str_yres
ffffffff8135f880 r __param_str_xres
ffffffff8135f8a0 r mousedev_fops
ffffffff8135f980 r mousedev_ids
ffffffff8135fe00 r atkbd_unxlate_table
ffffffff8135ff00 r atkbd_set2_keycode
ffffffff81360300 r atkbd_scroll_keys
ffffffff81360320 r atkbd_set3_keycode
ffffffff81360720 r xl_table
ffffffff81360740 r __func__.20796
ffffffff81360750 r __param_str_terminal
ffffffff8136075f r __param_str_extra
ffffffff8136076b r __param_str_scroll
ffffffff81360778 r __param_str_softraw
ffffffff81360790 r __param_str_softrepeat
ffffffff813607a1 r __param_str_reset
ffffffff813607ad r __param_str_set
ffffffff813607c0 r psmouse_protocols
ffffffff813609c8 r seq.20958
ffffffff813609d1 r rates.20925
ffffffff813609d9 r params.20919
ffffffff813609e0 r __param_str_resync_time
ffffffff81360a00 r __param_str_resetafter
ffffffff81360a20 r __param_str_smartscroll
ffffffff81360a34 r __param_str_rate
ffffffff81360a50 r __param_str_resolution
ffffffff81360a63 r __param_str_proto
ffffffff81360a71 r newabs_rel_mask.20934
ffffffff81360a76 r newabs_rslt.20935
ffffffff81360a7b r newabs_mask.20933
ffffffff81360a80 r oldabs_mask.20936
ffffffff81360a85 r oldabs_rslt.20937
ffffffff81360aa0 r rates.19066
ffffffff81360ac0 r alps_model_data
ffffffff81360b80 r alps_v3_nibble_commands
ffffffff81360c00 r alps_v4_nibble_commands
ffffffff81360c80 r ps2pp_list.18720
ffffffff81360d40 r params.18936
ffffffff81360d60 r rtc_days_in_month
ffffffff81360d80 r rtc_ydays
ffffffff81360dc0 r rtc_dev_fops
ffffffff81360ea0 r rtc_proc_fops
ffffffff81360f80 r driver_name
ffffffff81360fa0 r cmos_rtc_ops
ffffffff81361000 r rtc_ids
ffffffff81361040 r cmos_pm_ops
ffffffff81361100 r watchdog_fops
ffffffff813611d0 r sysfs_ops
ffffffff813611f0 r __param_str_off
ffffffff81361200 r cpuidle_state_sysfs_ops
ffffffff81361220 r cpuidle_sysfs_ops
ffffffff81361240 r fields.21404
ffffffff81361250 r memmap_attr_ops
ffffffff81361320 r dispatch_type.24605
ffffffff81361340 r __func__.24644
ffffffff81361360 r hid_have_special_driver
ffffffff81362dc0 r hid_hiddev_list
ffffffff81362e20 r types.24828
ffffffff81362e70 r CSWTCH.71
ffffffff81362ea0 r hid_mouse_ignore_list
ffffffff81363320 r hid_ignore_list
ffffffff81364250 r __param_str_ignore_special_drivers
ffffffff8136426b r __param_str_debug
ffffffff81365f20 r hid_hat_to_axis
ffffffff81365f80 r hid_keyboard
ffffffff81366080 r __func__.14763
ffffffff8136608a r __func__.14790
ffffffff813660e8 r caps.22049
ffffffff81366100 r __func__.22182
ffffffff81366120 r feat_str.29231
ffffffff81366180 r pci_mmap_ops
ffffffff813661c0 r pci_mmcfg
ffffffff813661d0 R pci_direct_conf1
ffffffff813661e0 r pci_direct_conf2
ffffffff813661f0 r extcfg_base_mask.27720
ffffffff81366200 r extcfg_sizebus.27719
ffffffff81366240 r pirqmap.29098
ffffffff81366260 r pirqmap.29086
ffffffff81366274 r pirqmap.29121
ffffffff81366278 r pirqmap.29109
ffffffff81366280 r irqmap.29051
ffffffff81366290 r irqmap.29039
ffffffff81366920 R bad_sock_fops
ffffffff81366a00 r socket_file_ops
ffffffff81366ae0 r sockfs_ops
ffffffff81366bc0 r sockfs_dentry_operations
ffffffff81366c40 r nargs
ffffffff81366f20 r __func__.44382
ffffffff81366f40 r proto_seq_fops
ffffffff81367020 r proto_seq_ops
ffffffff81367040 r sock_pipe_buf_ops
ffffffff81367080 R netns_operations
ffffffff813675c0 r null_features.46504
ffffffff813675d0 r __func__.48467
ffffffff81367600 r dev_seq_fops
ffffffff813676e0 r softnet_seq_fops
ffffffff813677c0 r ptype_seq_fops
ffffffff813678a0 r dev_seq_ops
ffffffff813678c0 r softnet_seq_ops
ffffffff813678e0 r ptype_seq_ops
ffffffff81367be0 r netdev_features_strings
ffffffff81368020 r dev_mc_seq_fops
ffffffff81368100 r dev_mc_seq_ops
ffffffff813681a0 r nl_neightbl_policy
ffffffff813681e0 r nl_ntbl_parm_policy
ffffffff81368240 r neigh_stat_seq_fops
ffffffff81368320 r neigh_stat_seq_ops
ffffffff81368340 R ifla_policy
ffffffff813683c0 r rta_max
ffffffff81368400 r rtm_min
ffffffff81368440 r ifla_info_policy
ffffffff81368460 r ifla_port_policy
ffffffff81368480 r __func__.36326
ffffffff813688c8 r __func__.37366
ffffffff813688e0 r codes.37407
ffffffff813689a0 r CSWTCH.35
ffffffff81368a00 r fmt_dec
ffffffff81368a04 r fmt_ulong
ffffffff81368a09 r fmt_hex
ffffffff81368a20 r operstates
ffffffff81368a58 r fmt_udec
ffffffff81368a5c r fmt_u64
ffffffff81368a70 r rx_queue_sysfs_ops
ffffffff81368a90 r netdev_queue_sysfs_ops
ffffffff81368b60 r nas
ffffffff81368b80 R eth_header_ops
ffffffff81368bc0 r prio2band
ffffffff81368be0 r bitmap2band
ffffffff81368c00 r mq_class_ops
ffffffff81368cb0 r netlink_family_ops
ffffffff81368ce0 r netlink_ops
ffffffff81368da0 r netlink_seq_fops
ffffffff81368e80 r netlink_seq_ops
ffffffff81368ea0 r ctrl_policy
ffffffff81368ec0 R ip_tos2prio
ffffffff81368ed0 r inaddr_any.45647
ffffffff81368ee0 r mtu_plateau
ffffffff81368f00 r rt_cache_seq_fops
ffffffff81368fe0 r rt_cpu_seq_fops
ffffffff813690c0 r rt_cache_seq_ops
ffffffff813690e0 r rt_cpu_seq_ops
ffffffff81369100 r peer_fake_node
ffffffff813691c0 r __func__.41924
ffffffff813691e0 r ip4_frag_ecn_table
ffffffff813691f0 r __func__.39515
ffffffff81369540 R inet_csk_timer_bug_msg
ffffffff813696a0 r new_state
ffffffff81369710 r __func__.43559
ffffffff81369722 r __func__.43639
ffffffff81369740 R ipv4_specific
ffffffff813697c0 r tcp_afinfo_seq_fops
ffffffff813698a0 r __func__.41337
ffffffff813698c0 r raw_seq_fops
ffffffff813699a0 r raw_seq_ops
ffffffff813699c0 r udp_afinfo_seq_fops
ffffffff81369aa0 r udplite_protocol
ffffffff81369ae0 r __func__.37671
ffffffff81369b00 r udplite_afinfo_seq_fops
ffffffff81369be0 r arp_direct_ops
ffffffff81369c20 r arp_hh_ops
ffffffff81369c60 r arp_generic_ops
ffffffff81369ca0 r arp_seq_fops
ffffffff81369d80 r arp_seq_ops
ffffffff81369da0 R icmp_err_convert
ffffffff81369e20 r icmp_pointers
ffffffff8136a048 r inet_af_policy
ffffffff8136a060 r ifa_ipv4_policy
ffffffff8136a300 R inet_dgram_ops
ffffffff8136a3c0 R inet_stream_ops
ffffffff8136a480 r inet_family_ops
ffffffff8136a4a0 r icmp_protocol
ffffffff8136a4d8 r __func__.45188
ffffffff8136a500 r udp_protocol
ffffffff8136a540 r tcp_protocol
ffffffff8136a580 r __func__.45014
ffffffff8136a5a0 r inet_sockraw_ops
ffffffff8136a660 r igmp_mc_seq_fops
ffffffff8136a740 r igmp_mcf_seq_fops
ffffffff8136a820 r igmp_mc_seq_ops
ffffffff8136a840 r igmp_mcf_seq_ops
ffffffff8136a980 R rtm_ipv4_policy
ffffffff8136a9c4 r __func__.44231
ffffffff8136a9d3 r __func__.44247
ffffffff8136aa00 R fib_props
ffffffff8136aa60 r fib_trie_fops
ffffffff8136ab40 r fib_triestat_fops
ffffffff8136ac20 r fib_route_fops
ffffffff8136ad00 r fib_trie_seq_ops
ffffffff8136ad20 r rtn_type_names
ffffffff8136ad80 r fib_route_seq_ops
ffffffff8136ada0 r ping_seq_fops
ffffffff8136ae80 r ping_seq_ops
ffffffff8136aea0 r sockstat_seq_fops
ffffffff8136af80 r netstat_seq_fops
ffffffff8136b060 r snmp_seq_fops
ffffffff8136b140 r snmp4_net_list
ffffffff8136b680 r snmp4_ipextstats_list
ffffffff8136b760 r snmp4_ipstats_list
ffffffff8136b880 r snmp4_tcp_list
ffffffff8136b980 r snmp4_udp_list
ffffffff8136ba00 r icmpmibmap
ffffffff8136bac0 r msstab
ffffffff8136bae0 r v.41716
ffffffff8136bb20 r __param_str_hystart_ack_delta
ffffffff8136bb40 r __param_str_hystart_low_window
ffffffff8136bb60 r __param_str_hystart_detect
ffffffff8136bb80 r __param_str_hystart
ffffffff8136bba0 r __param_str_tcp_friendliness
ffffffff8136bbc0 r __param_str_bic_scale
ffffffff8136bbe0 r __param_str_initial_ssthresh
ffffffff8136bbfb r __param_str_beta
ffffffff8136bc10 r __param_str_fast_convergence
ffffffff8136bc40 r xfrm_policy_fc_ops
ffffffff8136bc60 r xfrm_bundle_fc_ops
ffffffff8136bc80 r CSWTCH.15
ffffffff8136bcb0 r xfrm_aalg_list
ffffffff8136bcd0 r xfrm_ealg_list
ffffffff8136bcf0 r xfrm_calg_list
ffffffff8136bd10 r xfrm_aead_list
ffffffff8136bd40 r unix_seq_fops
ffffffff8136be20 r unix_seq_ops
ffffffff8136be40 r unix_family_ops
ffffffff8136be60 r unix_stream_ops
ffffffff8136bf20 r unix_dgram_ops
ffffffff8136bfe0 r unix_seqpacket_ops
ffffffff8136c098 r __func__.37631
ffffffff8136c0a8 R kallsyms_addresses
ffffffff8138d2f8 R kallsyms_num_syms
ffffffff8138d300 R kallsyms_names
ffffffff813baaa0 R kallsyms_markers
ffffffff813bacb8 R kallsyms_token_table
ffffffff813bb018 R kallsyms_token_index
ffffffff813fa710 R __start___bug_table
ffffffff813fa710 R __start___tracepoints_ptrs
ffffffff813fa710 R __stop___tracepoints_ptrs
ffffffff813fede4 R __stop___bug_table
ffffffff813fede8 r __pci_fixup_PCI_VENDOR_ID_TI0xb800fixup_ti816x_class
ffffffff813fede8 R __start_pci_fixups_early
ffffffff813fee00 r __pci_fixup_PCI_VENDOR_ID_NVIDIAPCI_DEVICE_ID_NVIDIA_MCP55_BRIDGE_V4nvbridge_check_legacy_irq_routing
ffffffff813fee18 r __pci_fixup_PCI_VENDOR_ID_NVIDIAPCI_DEVICE_ID_NVIDIA_MCP55_BRIDGE_V0nvbridge_check_legacy_irq_routing
ffffffff813fee30 r __pci_fixup_PCI_VENDOR_ID_NVIDIAPCI_DEVICE_ID_NVIDIA_NVENET_15nvenet_msi_disable
ffffffff813fee48 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82875_HBquirk_unhide_mch_dev6
ffffffff813fee60 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82865_HBquirk_unhide_mch_dev6
ffffffff813fee78 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_PXHVquirk_pcie_pxh
ffffffff813fee90 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_PXH_1quirk_pcie_pxh
ffffffff813feea8 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_PXH_0quirk_pcie_pxh
ffffffff813feec0 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_PXHD_1quirk_pcie_pxh
ffffffff813feed8 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_PXHD_0quirk_pcie_pxh
ffffffff813feef0 r __pci_fixup_PCI_VENDOR_ID_JMICRONPCI_DEVICE_ID_JMICRON_JMB369quirk_jmicron_ata
ffffffff813fef08 r __pci_fixup_PCI_VENDOR_ID_JMICRONPCI_DEVICE_ID_JMICRON_JMB368quirk_jmicron_ata
ffffffff813fef20 r __pci_fixup_PCI_VENDOR_ID_JMICRONPCI_DEVICE_ID_JMICRON_JMB366quirk_jmicron_ata
ffffffff813fef38 r __pci_fixup_PCI_VENDOR_ID_JMICRONPCI_DEVICE_ID_JMICRON_JMB365quirk_jmicron_ata
ffffffff813fef50 r __pci_fixup_PCI_VENDOR_ID_JMICRONPCI_DEVICE_ID_JMICRON_JMB364quirk_jmicron_ata
ffffffff813fef68 r __pci_fixup_PCI_VENDOR_ID_JMICRONPCI_DEVICE_ID_JMICRON_JMB363quirk_jmicron_ata
ffffffff813fef80 r __pci_fixup_PCI_VENDOR_ID_JMICRONPCI_DEVICE_ID_JMICRON_JMB362quirk_jmicron_ata
ffffffff813fef98 r __pci_fixup_PCI_VENDOR_ID_JMICRONPCI_DEVICE_ID_JMICRON_JMB361quirk_jmicron_ata
ffffffff813fefb0 r __pci_fixup_PCI_VENDOR_ID_JMICRONPCI_DEVICE_ID_JMICRON_JMB360quirk_jmicron_ata
ffffffff813fefc8 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_ANY_IDquirk_no_ata_d3
ffffffff813fefe0 r __pci_fixup_PCI_VENDOR_ID_ALPCI_ANY_IDquirk_no_ata_d3
ffffffff813feff8 r __pci_fixup_PCI_VENDOR_ID_ATIPCI_ANY_IDquirk_no_ata_d3
ffffffff813ff010 r __pci_fixup_PCI_VENDOR_ID_SERVERWORKSPCI_ANY_IDquirk_no_ata_d3
ffffffff813ff028 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82801CA_10quirk_ide_samemode
ffffffff813ff040 r __pci_fixup_PCI_VENDOR_ID_SERVERWORKSPCI_DEVICE_ID_SERVERWORKS_CSB5IDEquirk_svwks_csb5ide
ffffffff813ff058 r __pci_fixup_PCI_ANY_IDPCI_ANY_IDquirk_mmio_always_on
ffffffff813ff070 r __pci_fixup_PCI_VENDOR_ID_SERVERWORKSPCI_DEVICE_ID_SERVERWORKS_LEacpi_pm_check_graylist
ffffffff813ff088 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82801DB_0acpi_pm_check_graylist
ffffffff813ff0a0 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82371AB_3acpi_pm_check_blacklist
ffffffff813ff0b8 r __pci_fixup_PCI_VENDOR_ID_ATI0x4385sb600_disable_hpet_bar
ffffffff813ff0d0 r __pci_fixup_PCI_VENDOR_ID_CYRIXPCI_DEVICE_ID_CYRIX_5530_LEGACYpci_early_fixup_cyrix_5530
ffffffff813ff0e8 R __end_pci_fixups_early
ffffffff813ff0e8 r __pci_fixup_PCI_VENDOR_ID_ATIPCI_DEVICE_ID_ATI_SBX00_SMBUSforce_disable_hpet_msi
ffffffff813ff0e8 R __start_pci_fixups_header
ffffffff813ff100 r __pci_fixup_PCI_VENDOR_ID_NVIDIA0x0367nvidia_force_enable_hpet
ffffffff813ff118 r __pci_fixup_PCI_VENDOR_ID_NVIDIA0x0366nvidia_force_enable_hpet
ffffffff813ff130 r __pci_fixup_PCI_VENDOR_ID_NVIDIA0x0365nvidia_force_enable_hpet
ffffffff813ff148 r __pci_fixup_PCI_VENDOR_ID_NVIDIA0x0364nvidia_force_enable_hpet
ffffffff813ff160 r __pci_fixup_PCI_VENDOR_ID_NVIDIA0x0363nvidia_force_enable_hpet
ffffffff813ff178 r __pci_fixup_PCI_VENDOR_ID_NVIDIA0x0362nvidia_force_enable_hpet
ffffffff813ff190 r __pci_fixup_PCI_VENDOR_ID_NVIDIA0x0361nvidia_force_enable_hpet
ffffffff813ff1a8 r __pci_fixup_PCI_VENDOR_ID_NVIDIA0x0360nvidia_force_enable_hpet
ffffffff813ff1c0 r __pci_fixup_PCI_VENDOR_ID_NVIDIA0x0260nvidia_force_enable_hpet
ffffffff813ff1d8 r __pci_fixup_PCI_VENDOR_ID_NVIDIA0x0051nvidia_force_enable_hpet
ffffffff813ff1f0 r __pci_fixup_PCI_VENDOR_ID_NVIDIA0x0050nvidia_force_enable_hpet
ffffffff813ff208 r __pci_fixup_PCI_VENDOR_ID_ATIPCI_DEVICE_ID_ATI_IXP400_SMBUSati_force_enable_hpet
ffffffff813ff220 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_CX700vt8237_force_enable_hpet
ffffffff813ff238 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_8237vt8237_force_enable_hpet
ffffffff813ff250 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_8235vt8237_force_enable_hpet
ffffffff813ff268 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82801EB_12old_ich_force_enable_hpet
ffffffff813ff280 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82801EB_0old_ich_force_enable_hpet
ffffffff813ff298 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82801DB_12old_ich_force_enable_hpet_user
ffffffff813ff2b0 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82801DB_0old_ich_force_enable_hpet_user
ffffffff813ff2c8 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82801CA_12old_ich_force_enable_hpet_user
ffffffff813ff2e0 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82801CA_0old_ich_force_enable_hpet_user
ffffffff813ff2f8 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_ESB_1old_ich_force_enable_hpet_user
ffffffff813ff310 r __pci_fixup_PCI_VENDOR_ID_INTEL0x3a16ich_force_enable_hpet
ffffffff813ff328 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_ICH9_7ich_force_enable_hpet
ffffffff813ff340 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_ICH8_4ich_force_enable_hpet
ffffffff813ff358 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_ICH8_1ich_force_enable_hpet
ffffffff813ff370 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_ICH7_31ich_force_enable_hpet
ffffffff813ff388 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_ICH7_1ich_force_enable_hpet
ffffffff813ff3a0 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_ICH7_0ich_force_enable_hpet
ffffffff813ff3b8 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_ICH6_1ich_force_enable_hpet
ffffffff813ff3d0 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_ICH6_0ich_force_enable_hpet
ffffffff813ff3e8 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_ESB2_0ich_force_enable_hpet
ffffffff813ff400 r __pci_fixup_PCI_VENDOR_ID_INTEL0x65faquirk_intel_mc_errata
ffffffff813ff418 r __pci_fixup_PCI_VENDOR_ID_INTEL0x65f9quirk_intel_mc_errata
ffffffff813ff430 r __pci_fixup_PCI_VENDOR_ID_INTEL0x65f8quirk_intel_mc_errata
ffffffff813ff448 r __pci_fixup_PCI_VENDOR_ID_INTEL0x65f7quirk_intel_mc_errata
ffffffff813ff460 r __pci_fixup_PCI_VENDOR_ID_INTEL0x65e7quirk_intel_mc_errata
ffffffff813ff478 r __pci_fixup_PCI_VENDOR_ID_INTEL0x65e6quirk_intel_mc_errata
ffffffff813ff490 r __pci_fixup_PCI_VENDOR_ID_INTEL0x65e5quirk_intel_mc_errata
ffffffff813ff4a8 r __pci_fixup_PCI_VENDOR_ID_INTEL0x65e4quirk_intel_mc_errata
ffffffff813ff4c0 r __pci_fixup_PCI_VENDOR_ID_INTEL0x65e3quirk_intel_mc_errata
ffffffff813ff4d8 r __pci_fixup_PCI_VENDOR_ID_INTEL0x65e2quirk_intel_mc_errata
ffffffff813ff4f0 r __pci_fixup_PCI_VENDOR_ID_INTEL0x65c0quirk_intel_mc_errata
ffffffff813ff508 r __pci_fixup_PCI_VENDOR_ID_INTEL0x25faquirk_intel_mc_errata
ffffffff813ff520 r __pci_fixup_PCI_VENDOR_ID_INTEL0x25f9quirk_intel_mc_errata
ffffffff813ff538 r __pci_fixup_PCI_VENDOR_ID_INTEL0x25f8quirk_intel_mc_errata
ffffffff813ff550 r __pci_fixup_PCI_VENDOR_ID_INTEL0x25f7quirk_intel_mc_errata
ffffffff813ff568 r __pci_fixup_PCI_VENDOR_ID_INTEL0x25e7quirk_intel_mc_errata
ffffffff813ff580 r __pci_fixup_PCI_VENDOR_ID_INTEL0x25e6quirk_intel_mc_errata
ffffffff813ff598 r __pci_fixup_PCI_VENDOR_ID_INTEL0x25e5quirk_intel_mc_errata
ffffffff813ff5b0 r __pci_fixup_PCI_VENDOR_ID_INTEL0x25e4quirk_intel_mc_errata
ffffffff813ff5c8 r __pci_fixup_PCI_VENDOR_ID_INTEL0x25e3quirk_intel_mc_errata
ffffffff813ff5e0 r __pci_fixup_PCI_VENDOR_ID_INTEL0x25e2quirk_intel_mc_errata
ffffffff813ff5f8 r __pci_fixup_PCI_VENDOR_ID_INTEL0x25d8quirk_intel_mc_errata
ffffffff813ff610 r __pci_fixup_PCI_VENDOR_ID_INTEL0x25d4quirk_intel_mc_errata
ffffffff813ff628 r __pci_fixup_PCI_VENDOR_ID_INTEL0x25d0quirk_intel_mc_errata
ffffffff813ff640 r __pci_fixup_PCI_VENDOR_ID_INTEL0x25c0quirk_intel_mc_errata
ffffffff813ff658 r __pci_fixup_PCI_VENDOR_ID_SOLARFLAREPCI_DEVICE_ID_SOLARFLARE_SFC4000Bfixup_mpss_256
ffffffff813ff670 r __pci_fixup_PCI_VENDOR_ID_SOLARFLAREPCI_DEVICE_ID_SOLARFLARE_SFC4000A_1fixup_mpss_256
ffffffff813ff688 r __pci_fixup_PCI_VENDOR_ID_SOLARFLAREPCI_DEVICE_ID_SOLARFLARE_SFC4000A_0fixup_mpss_256
ffffffff813ff6a0 r __pci_fixup_PCI_VENDOR_ID_HINT0x0020quirk_hotplug_bridge
ffffffff813ff6b8 r __pci_fixup_PCI_VENDOR_ID_AMDPCI_DEVICE_ID_AMD_8132_BRIDGEht_enable_msi_mapping
ffffffff813ff6d0 r __pci_fixup_PCI_VENDOR_ID_SERVERWORKSPCI_DEVICE_ID_SERVERWORKS_HT1000_PXBht_enable_msi_mapping
ffffffff813ff6e8 r __pci_fixup_PCI_VENDOR_ID_INTEL0x1460quirk_p64h2_1k_io
ffffffff813ff700 r __pci_fixup_PCI_VENDOR_ID_NCRPCI_DEVICE_ID_NCR_53C810fixup_rev1_53c810
ffffffff813ff718 r __pci_fixup_PCI_VENDOR_ID_NETMOSPCI_ANY_IDquirk_netmos
ffffffff813ff730 r __pci_fixup_PCI_VENDOR_ID_TOSHIBA_2PCI_DEVICE_ID_TOSHIBA_TC86C001_IDEquirk_tc86c001_ide
ffffffff813ff748 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_EESSCquirk_alder_ioapic
ffffffff813ff760 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_8237asus_hides_ac97_lpc
ffffffff813ff778 r __pci_fixup_PCI_VENDOR_ID_SIPCI_DEVICE_ID_SI_503quirk_sis_503
ffffffff813ff790 r __pci_fixup_PCI_VENDOR_ID_SIPCI_DEVICE_ID_SI_LPCquirk_sis_96x_smbus
ffffffff813ff7a8 r __pci_fixup_PCI_VENDOR_ID_SIPCI_DEVICE_ID_SI_963quirk_sis_96x_smbus
ffffffff813ff7c0 r __pci_fixup_PCI_VENDOR_ID_SIPCI_DEVICE_ID_SI_962quirk_sis_96x_smbus
ffffffff813ff7d8 r __pci_fixup_PCI_VENDOR_ID_SIPCI_DEVICE_ID_SI_961quirk_sis_96x_smbus
ffffffff813ff7f0 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_ICH6_1asus_hides_smbus_lpc_ich6
ffffffff813ff808 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82801EB_0asus_hides_smbus_lpc
ffffffff813ff820 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82801DB_12asus_hides_smbus_lpc
ffffffff813ff838 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82801CA_12asus_hides_smbus_lpc
ffffffff813ff850 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82801CA_0asus_hides_smbus_lpc
ffffffff813ff868 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82801BA_0asus_hides_smbus_lpc
ffffffff813ff880 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82801DB_0asus_hides_smbus_lpc
ffffffff813ff898 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82801AA_0asus_hides_smbus_lpc
ffffffff813ff8b0 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82815_CGCasus_hides_smbus_hostbridge
ffffffff813ff8c8 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82801DB_2asus_hides_smbus_hostbridge
ffffffff813ff8e0 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82810_IG3asus_hides_smbus_hostbridge
ffffffff813ff8f8 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82915GM_HBasus_hides_smbus_hostbridge
ffffffff813ff910 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82855GM_HBasus_hides_smbus_hostbridge
ffffffff813ff928 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82855PM_HBasus_hides_smbus_hostbridge
ffffffff813ff940 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_E7501_MCHasus_hides_smbus_hostbridge
ffffffff813ff958 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_7205_0asus_hides_smbus_hostbridge
ffffffff813ff970 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82875_HBasus_hides_smbus_hostbridge
ffffffff813ff988 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82865_HBasus_hides_smbus_hostbridge
ffffffff813ff9a0 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82850_HBasus_hides_smbus_hostbridge
ffffffff813ff9b8 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82845G_HBasus_hides_smbus_hostbridge
ffffffff813ff9d0 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82845_HBasus_hides_smbus_hostbridge
ffffffff813ff9e8 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82375quirk_eisa_bridge
ffffffff813ffa00 r __pci_fixup_PCI_VENDOR_ID_AMDPCI_DEVICE_ID_AMD_HUDSON2_SATA_IDEquirk_amd_ide_mode
ffffffff813ffa18 r __pci_fixup_PCI_VENDOR_ID_ATIPCI_DEVICE_ID_ATI_IXP700_SATAquirk_amd_ide_mode
ffffffff813ffa30 r __pci_fixup_PCI_VENDOR_ID_ATIPCI_DEVICE_ID_ATI_IXP600_SATAquirk_amd_ide_mode
ffffffff813ffa48 r __pci_fixup_PCI_VENDOR_ID_TOSHIBA0x605quirk_transparent_bridge
ffffffff813ffa60 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82380FBquirk_transparent_bridge
ffffffff813ffa78 r __pci_fixup_PCI_VENDOR_ID_DUNORDPCI_DEVICE_ID_DUNORD_I3000quirk_dunord
ffffffff813ffa90 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_82C597_0quirk_vt82c598_id
ffffffff813ffaa8 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_8237Aquirk_via_bridge
ffffffff813ffac0 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_8237quirk_via_bridge
ffffffff813ffad8 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_8235quirk_via_bridge
ffffffff813ffaf0 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_8233C_0quirk_via_bridge
ffffffff813ffb08 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_8233Aquirk_via_bridge
ffffffff813ffb20 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_8233_0quirk_via_bridge
ffffffff813ffb38 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_8231quirk_via_bridge
ffffffff813ffb50 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_82C686quirk_via_bridge
ffffffff813ffb68 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_82C686_4quirk_via_acpi
ffffffff813ffb80 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_82C586_3quirk_via_acpi
ffffffff813ffb98 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_8235quirk_vt8235_acpi
ffffffff813ffbb0 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_82C686_4quirk_vt82c686_acpi
ffffffff813ffbc8 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_82C586_3quirk_vt82c586_acpi
ffffffff813ffbe0 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_ICH10_1quirk_ich7_lpc
ffffffff813ffbf8 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_ICH9_8quirk_ich7_lpc
ffffffff813ffc10 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_ICH9_7quirk_ich7_lpc
ffffffff813ffc28 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_ICH9_4quirk_ich7_lpc
ffffffff813ffc40 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_ICH9_2quirk_ich7_lpc
ffffffff813ffc58 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_ICH8_4quirk_ich7_lpc
ffffffff813ffc70 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_ICH8_1quirk_ich7_lpc
ffffffff813ffc88 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_ICH8_3quirk_ich7_lpc
ffffffff813ffca0 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_ICH8_2quirk_ich7_lpc
ffffffff813ffcb8 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_ICH8_0quirk_ich7_lpc
ffffffff813ffcd0 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_ICH7_31quirk_ich7_lpc
ffffffff813ffce8 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_ICH7_1quirk_ich7_lpc
ffffffff813ffd00 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_ICH7_0quirk_ich7_lpc
ffffffff813ffd18 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_ICH6_1quirk_ich6_lpc
ffffffff813ffd30 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_ICH6_0quirk_ich6_lpc
ffffffff813ffd48 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_ESB_1quirk_ich4_lpc_acpi
ffffffff813ffd60 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82801EB_0quirk_ich4_lpc_acpi
ffffffff813ffd78 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82801DB_12quirk_ich4_lpc_acpi
ffffffff813ffd90 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82801DB_0quirk_ich4_lpc_acpi
ffffffff813ffda8 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82801CA_12quirk_ich4_lpc_acpi
ffffffff813ffdc0 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82801CA_0quirk_ich4_lpc_acpi
ffffffff813ffdd8 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82801BA_10quirk_ich4_lpc_acpi
ffffffff813ffdf0 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82801BA_0quirk_ich4_lpc_acpi
ffffffff813ffe08 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82801AB_0quirk_ich4_lpc_acpi
ffffffff813ffe20 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82801AA_0quirk_ich4_lpc_acpi
ffffffff813ffe38 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82443MX_3quirk_piix4_acpi
ffffffff813ffe50 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82371AB_3quirk_piix4_acpi
ffffffff813ffe68 r __pci_fixup_PCI_VENDOR_ID_ALPCI_DEVICE_ID_AL_M7101quirk_ali7101_acpi
ffffffff813ffe80 r __pci_fixup_PCI_VENDOR_ID_AMDPCI_DEVICE_ID_AMD_CS5536_ISAquirk_cs5536_vsa
ffffffff813ffe98 r __pci_fixup_PCI_VENDOR_ID_S3PCI_DEVICE_ID_S3_968quirk_s3_64M
ffffffff813ffeb0 r __pci_fixup_PCI_VENDOR_ID_S3PCI_DEVICE_ID_S3_868quirk_s3_64M
ffffffff813ffec8 r __pci_fixup_PCI_VENDOR_ID_IBMPCI_DEVICE_ID_IBM_CITRINEquirk_citrine
ffffffff813ffee0 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_TGP_LPCquirk_tigerpoint_bm_sts
ffffffff813ffef8 r __pci_fixup_PCI_VENDOR_ID_SIEMENS0x0015pci_siemens_interrupt_controller
ffffffff813fff10 r __pci_fixup_PCI_VENDOR_ID_TI0x8032pci_pre_fixup_toshiba_ohci1394
ffffffff813fff28 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_8237pci_fixup_msi_k8t_onboard_sound
ffffffff813fff40 r __pci_fixup_PCI_VENDOR_ID_NVIDIAPCI_DEVICE_ID_NVIDIA_NFORCE2pci_fixup_nforce2
ffffffff813fff58 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_ANY_IDpci_fixup_transparent_bridge
ffffffff813fff70 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_8367_0pci_fixup_via_northbridge_bug
ffffffff813fff88 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_8361pci_fixup_via_northbridge_bug
ffffffff813fffa0 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_8622pci_fixup_via_northbridge_bug
ffffffff813fffb8 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_8363_0pci_fixup_via_northbridge_bug
ffffffff813fffd0 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82371AB_3pci_fixup_piix4_acpi
ffffffff813fffe8 r __pci_fixup_PCI_VENDOR_ID_SIPCI_DEVICE_ID_SI_5598pci_fixup_latency
ffffffff81400000 r __pci_fixup_PCI_VENDOR_ID_SIPCI_DEVICE_ID_SI_5597pci_fixup_latency
ffffffff81400018 r __pci_fixup_PCI_VENDOR_ID_NCRPCI_DEVICE_ID_NCR_53C810pci_fixup_ncr53c810
ffffffff81400030 r __pci_fixup_PCI_VENDOR_ID_UMCPCI_DEVICE_ID_UMC_UM8886BFpci_fixup_umc_ide
ffffffff81400048 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82454GXpci_fixup_i450gx
ffffffff81400060 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82451NXpci_fixup_i450nx
ffffffff81400078 R __end_pci_fixups_header
ffffffff81400078 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_ANY_IDvia_no_dac
ffffffff81400078 R __start_pci_fixups_final
ffffffff81400090 r __pci_fixup_PCI_VENDOR_ID_AMDPCI_DEVICE_ID_AMD_15H_NB_F5quirk_amd_nb_node
ffffffff814000a8 r __pci_fixup_PCI_VENDOR_ID_AMDPCI_DEVICE_ID_AMD_15H_NB_F4quirk_amd_nb_node
ffffffff814000c0 r __pci_fixup_PCI_VENDOR_ID_AMDPCI_DEVICE_ID_AMD_15H_NB_F3quirk_amd_nb_node
ffffffff814000d8 r __pci_fixup_PCI_VENDOR_ID_AMDPCI_DEVICE_ID_AMD_15H_NB_F2quirk_amd_nb_node
ffffffff814000f0 r __pci_fixup_PCI_VENDOR_ID_AMDPCI_DEVICE_ID_AMD_15H_NB_F1quirk_amd_nb_node
ffffffff81400108 r __pci_fixup_PCI_VENDOR_ID_AMDPCI_DEVICE_ID_AMD_15H_NB_F0quirk_amd_nb_node
ffffffff81400120 r __pci_fixup_PCI_VENDOR_ID_AMDPCI_DEVICE_ID_AMD_10H_NB_LINKquirk_amd_nb_node
ffffffff81400138 r __pci_fixup_PCI_VENDOR_ID_AMDPCI_DEVICE_ID_AMD_10H_NB_MISCquirk_amd_nb_node
ffffffff81400150 r __pci_fixup_PCI_VENDOR_ID_AMDPCI_DEVICE_ID_AMD_10H_NB_DRAMquirk_amd_nb_node
ffffffff81400168 r __pci_fixup_PCI_VENDOR_ID_AMDPCI_DEVICE_ID_AMD_10H_NB_MAPquirk_amd_nb_node
ffffffff81400180 r __pci_fixup_PCI_VENDOR_ID_AMDPCI_DEVICE_ID_AMD_10H_NB_HTquirk_amd_nb_node
ffffffff81400198 r __pci_fixup_PCI_VENDOR_ID_AMDPCI_DEVICE_ID_AMD_K8_NB_MISCquirk_amd_nb_node
ffffffff814001b0 r __pci_fixup_PCI_VENDOR_ID_AMDPCI_DEVICE_ID_AMD_K8_NB_MEMCTLquirk_amd_nb_node
ffffffff814001c8 r __pci_fixup_PCI_VENDOR_ID_AMDPCI_DEVICE_ID_AMD_K8_NB_ADDRMAPquirk_amd_nb_node
ffffffff814001e0 r __pci_fixup_PCI_VENDOR_ID_AMDPCI_DEVICE_ID_AMD_K8_NBquirk_amd_nb_node
ffffffff814001f8 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_E7520_MCHquirk_intel_irqbalance
ffffffff81400210 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_E7525_MCHquirk_intel_irqbalance
ffffffff81400228 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_E7320_MCHquirk_intel_irqbalance
ffffffff81400240 r __pci_fixup_PCI_VENDOR_ID_INTEL0x010adisable_igfx_irq
ffffffff81400258 r __pci_fixup_PCI_VENDOR_ID_INTEL0x0102disable_igfx_irq
ffffffff81400270 r __pci_fixup_PCI_VENDOR_ID_ATI0x4375quirk_msi_intx_disable_bug
ffffffff81400288 r __pci_fixup_PCI_VENDOR_ID_ATI0x4374quirk_msi_intx_disable_bug
ffffffff814002a0 r __pci_fixup_PCI_VENDOR_ID_ATI0x4373quirk_msi_intx_disable_bug
ffffffff814002b8 r __pci_fixup_PCI_VENDOR_ID_ATI0x4394quirk_msi_intx_disable_ati_bug
ffffffff814002d0 r __pci_fixup_PCI_VENDOR_ID_ATI0x4393quirk_msi_intx_disable_ati_bug
ffffffff814002e8 r __pci_fixup_PCI_VENDOR_ID_ATI0x4392quirk_msi_intx_disable_ati_bug
ffffffff81400300 r __pci_fixup_PCI_VENDOR_ID_ATI0x4391quirk_msi_intx_disable_ati_bug
ffffffff81400318 r __pci_fixup_PCI_VENDOR_ID_ATI0x4390quirk_msi_intx_disable_ati_bug
ffffffff81400330 r __pci_fixup_PCI_VENDOR_ID_BROADCOMPCI_DEVICE_ID_TIGON3_5715Squirk_msi_intx_disable_bug
ffffffff81400348 r __pci_fixup_PCI_VENDOR_ID_BROADCOMPCI_DEVICE_ID_TIGON3_5715quirk_msi_intx_disable_bug
ffffffff81400360 r __pci_fixup_PCI_VENDOR_ID_BROADCOMPCI_DEVICE_ID_TIGON3_5714Squirk_msi_intx_disable_bug
ffffffff81400378 r __pci_fixup_PCI_VENDOR_ID_BROADCOMPCI_DEVICE_ID_TIGON3_5714quirk_msi_intx_disable_bug
ffffffff81400390 r __pci_fixup_PCI_VENDOR_ID_BROADCOMPCI_DEVICE_ID_TIGON3_5780Squirk_msi_intx_disable_bug
ffffffff814003a8 r __pci_fixup_PCI_VENDOR_ID_BROADCOMPCI_DEVICE_ID_TIGON3_5780quirk_msi_intx_disable_bug
ffffffff814003c0 r __pci_fixup_PCI_VENDOR_ID_ALPCI_ANY_IDnv_msi_ht_cap_quirk_all
ffffffff814003d8 r __pci_fixup_PCI_VENDOR_ID_NVIDIAPCI_ANY_IDnv_msi_ht_cap_quirk_leaf
ffffffff814003f0 r __pci_fixup_PCI_VENDOR_ID_NVIDIAPCI_DEVICE_ID_NVIDIA_CK804_PCIEquirk_nvidia_ck804_msi_ht_cap
ffffffff81400408 r __pci_fixup_PCI_VENDOR_ID_SERVERWORKSPCI_DEVICE_ID_SERVERWORKS_HT2000_PCIEquirk_msi_ht_cap
ffffffff81400420 r __pci_fixup_PCI_VENDOR_ID_AMD0x9601quirk_amd_780_apc_msi
ffffffff81400438 r __pci_fixup_PCI_VENDOR_ID_AMD0x9600quirk_amd_780_apc_msi
ffffffff81400450 r __pci_fixup_PCI_VENDOR_ID_ATI0x5a3fquirk_disable_msi
ffffffff81400468 r __pci_fixup_PCI_VENDOR_ID_VIA0xa238quirk_disable_msi
ffffffff81400480 r __pci_fixup_PCI_VENDOR_ID_AMDPCI_DEVICE_ID_AMD_8131_BRIDGEquirk_disable_msi
ffffffff81400498 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_8380_0quirk_disable_all_msi
ffffffff814004b0 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_VT3364quirk_disable_all_msi
ffffffff814004c8 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_VT3351quirk_disable_all_msi
ffffffff814004e0 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_VT3336quirk_disable_all_msi
ffffffff814004f8 r __pci_fixup_PCI_VENDOR_ID_ATIPCI_DEVICE_ID_ATI_RS480quirk_disable_all_msi
ffffffff81400510 r __pci_fixup_PCI_VENDOR_ID_ATIPCI_DEVICE_ID_ATI_RS400_200quirk_disable_all_msi
ffffffff81400528 r __pci_fixup_PCI_VENDOR_ID_SERVERWORKSPCI_DEVICE_ID_SERVERWORKS_GCNB_LEquirk_disable_all_msi
ffffffff81400540 r __pci_fixup_PCI_VENDOR_ID_BROADCOMPCI_DEVICE_ID_NX2_5709Squirk_brcm_570x_limit_vpd
ffffffff81400558 r __pci_fixup_PCI_VENDOR_ID_BROADCOMPCI_DEVICE_ID_NX2_5709quirk_brcm_570x_limit_vpd
ffffffff81400570 r __pci_fixup_PCI_VENDOR_ID_BROADCOMPCI_DEVICE_ID_NX2_5708Squirk_brcm_570x_limit_vpd
ffffffff81400588 r __pci_fixup_PCI_VENDOR_ID_BROADCOMPCI_DEVICE_ID_NX2_5708quirk_brcm_570x_limit_vpd
ffffffff814005a0 r __pci_fixup_PCI_VENDOR_ID_BROADCOMPCI_DEVICE_ID_NX2_5706Squirk_brcm_570x_limit_vpd
ffffffff814005b8 r __pci_fixup_PCI_VENDOR_ID_BROADCOMPCI_DEVICE_ID_NX2_5706quirk_brcm_570x_limit_vpd
ffffffff814005d0 r __pci_fixup_PCI_VENDOR_ID_VIA0x324equirk_via_cx700_pci_parking_caching
ffffffff814005e8 r __pci_fixup_PCI_VENDOR_ID_NVIDIAPCI_DEVICE_ID_NVIDIA_CK804_PCIEquirk_nvidia_ck804_pcie_aer_ext_cap
ffffffff81400600 r __pci_fixup_PCI_VENDOR_ID_INTEL0x1460quirk_p64h2_1k_io_fix_iobl
ffffffff81400618 r __pci_fixup_PCI_VENDOR_ID_INTEL0x1508quirk_disable_aspm_l0s
ffffffff81400630 r __pci_fixup_PCI_VENDOR_ID_INTEL0x10f4quirk_disable_aspm_l0s
ffffffff81400648 r __pci_fixup_PCI_VENDOR_ID_INTEL0x10f1quirk_disable_aspm_l0s
ffffffff81400660 r __pci_fixup_PCI_VENDOR_ID_INTEL0x10ecquirk_disable_aspm_l0s
ffffffff81400678 r __pci_fixup_PCI_VENDOR_ID_INTEL0x10e1quirk_disable_aspm_l0s
ffffffff81400690 r __pci_fixup_PCI_VENDOR_ID_INTEL0x10ddquirk_disable_aspm_l0s
ffffffff814006a8 r __pci_fixup_PCI_VENDOR_ID_INTEL0x10dbquirk_disable_aspm_l0s
ffffffff814006c0 r __pci_fixup_PCI_VENDOR_ID_INTEL0x10d6quirk_disable_aspm_l0s
ffffffff814006d8 r __pci_fixup_PCI_VENDOR_ID_INTEL0x10c8quirk_disable_aspm_l0s
ffffffff814006f0 r __pci_fixup_PCI_VENDOR_ID_INTEL0x10c7quirk_disable_aspm_l0s
ffffffff81400708 r __pci_fixup_PCI_VENDOR_ID_INTEL0x10c6quirk_disable_aspm_l0s
ffffffff81400720 r __pci_fixup_PCI_VENDOR_ID_INTEL0x10b6quirk_disable_aspm_l0s
ffffffff81400738 r __pci_fixup_PCI_VENDOR_ID_INTEL0x10a9quirk_disable_aspm_l0s
ffffffff81400750 r __pci_fixup_PCI_VENDOR_ID_INTEL0x10a7quirk_disable_aspm_l0s
ffffffff81400768 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_ANY_IDquirk_e100_interrupt
ffffffff81400780 r __pci_fixup_PCI_VENDOR_ID_AMDPCI_DEVICE_ID_AMD_8111_SMBUSquirk_disable_amd_8111_boot_interrupt
ffffffff81400798 r __pci_fixup_PCI_VENDOR_ID_AMDPCI_DEVICE_ID_AMD_8132_BRIDGEquirk_disable_amd_813x_boot_interrupt
ffffffff814007b0 r __pci_fixup_PCI_VENDOR_ID_AMDPCI_DEVICE_ID_AMD_8131_BRIDGEquirk_disable_amd_813x_boot_interrupt
ffffffff814007c8 r __pci_fixup_PCI_VENDOR_ID_SERVERWORKSPCI_DEVICE_ID_SERVERWORKS_HT1000SBquirk_disable_broadcom_boot_interrupt
ffffffff814007e0 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_ESB_10quirk_disable_intel_boot_interrupt
ffffffff814007f8 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_80332_1quirk_reroute_to_boot_interrupts_intel
ffffffff81400810 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_80332_0quirk_reroute_to_boot_interrupts_intel
ffffffff81400828 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_PXHVquirk_reroute_to_boot_interrupts_intel
ffffffff81400840 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_PXH_1quirk_reroute_to_boot_interrupts_intel
ffffffff81400858 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_PXH_0quirk_reroute_to_boot_interrupts_intel
ffffffff81400870 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_ESB2_0quirk_reroute_to_boot_interrupts_intel
ffffffff81400888 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_80333_1quirk_reroute_to_boot_interrupts_intel
ffffffff814008a0 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_80333_0quirk_reroute_to_boot_interrupts_intel
ffffffff814008b8 r __pci_fixup_PCI_VENDOR_ID_INTEL0x260bquirk_intel_pcie_pm
ffffffff814008d0 r __pci_fixup_PCI_VENDOR_ID_INTEL0x260aquirk_intel_pcie_pm
ffffffff814008e8 r __pci_fixup_PCI_VENDOR_ID_INTEL0x2609quirk_intel_pcie_pm
ffffffff81400900 r __pci_fixup_PCI_VENDOR_ID_INTEL0x2608quirk_intel_pcie_pm
ffffffff81400918 r __pci_fixup_PCI_VENDOR_ID_INTEL0x2607quirk_intel_pcie_pm
ffffffff81400930 r __pci_fixup_PCI_VENDOR_ID_INTEL0x2606quirk_intel_pcie_pm
ffffffff81400948 r __pci_fixup_PCI_VENDOR_ID_INTEL0x2605quirk_intel_pcie_pm
ffffffff81400960 r __pci_fixup_PCI_VENDOR_ID_INTEL0x2604quirk_intel_pcie_pm
ffffffff81400978 r __pci_fixup_PCI_VENDOR_ID_INTEL0x2603quirk_intel_pcie_pm
ffffffff81400990 r __pci_fixup_PCI_VENDOR_ID_INTEL0x2602quirk_intel_pcie_pm
ffffffff814009a8 r __pci_fixup_PCI_VENDOR_ID_INTEL0x2601quirk_intel_pcie_pm
ffffffff814009c0 r __pci_fixup_PCI_VENDOR_ID_INTEL0x25faquirk_intel_pcie_pm
ffffffff814009d8 r __pci_fixup_PCI_VENDOR_ID_INTEL0x25f9quirk_intel_pcie_pm
ffffffff814009f0 r __pci_fixup_PCI_VENDOR_ID_INTEL0x25f8quirk_intel_pcie_pm
ffffffff81400a08 r __pci_fixup_PCI_VENDOR_ID_INTEL0x25f7quirk_intel_pcie_pm
ffffffff81400a20 r __pci_fixup_PCI_VENDOR_ID_INTEL0x25e7quirk_intel_pcie_pm
ffffffff81400a38 r __pci_fixup_PCI_VENDOR_ID_INTEL0x25e6quirk_intel_pcie_pm
ffffffff81400a50 r __pci_fixup_PCI_VENDOR_ID_INTEL0x25e5quirk_intel_pcie_pm
ffffffff81400a68 r __pci_fixup_PCI_VENDOR_ID_INTEL0x25e4quirk_intel_pcie_pm
ffffffff81400a80 r __pci_fixup_PCI_VENDOR_ID_INTEL0x25e3quirk_intel_pcie_pm
ffffffff81400a98 r __pci_fixup_PCI_VENDOR_ID_INTEL0x25e2quirk_intel_pcie_pm
ffffffff81400ab0 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_E7525_MCHquirk_pcie_mch
ffffffff81400ac8 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_E7320_MCHquirk_pcie_mch
ffffffff81400ae0 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_E7520_MCHquirk_pcie_mch
ffffffff81400af8 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82454NXquirk_disable_pxb
ffffffff81400b10 r __pci_fixup_PCI_VENDOR_ID_CYRIXPCI_DEVICE_ID_CYRIX_PCI_MASTERquirk_mediagx_master
ffffffff81400b28 r __pci_fixup_PCI_VENDOR_ID_AMDPCI_DEVICE_ID_AMD_FE_GATE_700Cquirk_amd_ordering
ffffffff81400b40 r __pci_fixup_PCI_ANY_IDPCI_ANY_IDquirk_cardbus_legacy
ffffffff81400b58 r __pci_fixup_PCI_VENDOR_ID_AMDPCI_DEVICE_ID_AMD_8131_BRIDGEquirk_amd_8131_mmrbc
ffffffff81400b70 r __pci_fixup_PCI_VENDOR_ID_SIPCI_ANY_IDquirk_ioapic_rmw
ffffffff81400b88 r __pci_fixup_PCI_VENDOR_ID_AMDPCI_DEVICE_ID_AMD_VIPER_7410quirk_amd_ioapic
ffffffff81400ba0 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_8237quirk_via_vt8237_bypass_apic_deassert
ffffffff81400bb8 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_82C686quirk_via_ioapic
ffffffff81400bd0 r __pci_fixup_PCI_VENDOR_ID_TIPCI_DEVICE_ID_TI_XIO2000Aquirk_xio2000a
ffffffff81400be8 r __pci_fixup_PCI_VENDOR_ID_ATIPCI_DEVICE_ID_ATI_RS100quirk_ati_exploding_mce
ffffffff81400c00 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82443BX_2quirk_natoma
ffffffff81400c18 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82443BX_1quirk_natoma
ffffffff81400c30 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82443BX_0quirk_natoma
ffffffff81400c48 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82443LX_1quirk_natoma
ffffffff81400c60 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82443LX_0quirk_natoma
ffffffff81400c78 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82441quirk_natoma
ffffffff81400c90 r __pci_fixup_PCI_VENDOR_ID_ALPCI_DEVICE_ID_AL_M1651quirk_alimagik
ffffffff81400ca8 r __pci_fixup_PCI_VENDOR_ID_ALPCI_DEVICE_ID_AL_M1647quirk_alimagik
ffffffff81400cc0 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_82C576quirk_vsfx
ffffffff81400cd8 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_82C597_0quirk_viaetbf
ffffffff81400cf0 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_8361quirk_vialatency
ffffffff81400d08 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_8371_1quirk_vialatency
ffffffff81400d20 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_8363_0quirk_vialatency
ffffffff81400d38 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82439TXquirk_triton
ffffffff81400d50 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82439quirk_triton
ffffffff81400d68 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82437VXquirk_triton
ffffffff81400d80 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82437quirk_triton
ffffffff81400d98 r __pci_fixup_PCI_VENDOR_ID_AMDPCI_DEVICE_ID_AMD_8151_0quirk_nopciamd
ffffffff81400db0 r __pci_fixup_PCI_VENDOR_ID_SIPCI_DEVICE_ID_SI_496quirk_nopcipci
ffffffff81400dc8 r __pci_fixup_PCI_VENDOR_ID_SIPCI_DEVICE_ID_SI_5597quirk_nopcipci
ffffffff81400de0 r __pci_fixup_PCI_VENDOR_ID_NECPCI_DEVICE_ID_NEC_CBUS_3quirk_isa_dma_hangs
ffffffff81400df8 r __pci_fixup_PCI_VENDOR_ID_NECPCI_DEVICE_ID_NEC_CBUS_2quirk_isa_dma_hangs
ffffffff81400e10 r __pci_fixup_PCI_VENDOR_ID_NECPCI_DEVICE_ID_NEC_CBUS_1quirk_isa_dma_hangs
ffffffff81400e28 r __pci_fixup_PCI_VENDOR_ID_ALPCI_DEVICE_ID_AL_M1533quirk_isa_dma_hangs
ffffffff81400e40 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82371SB_0quirk_isa_dma_hangs
ffffffff81400e58 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_82C596quirk_isa_dma_hangs
ffffffff81400e70 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_82C586_0quirk_isa_dma_hangs
ffffffff81400e88 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82441quirk_passive_release
ffffffff81400ea0 r __pci_fixup_PCI_VENDOR_ID_MELLANOXPCI_DEVICE_ID_MELLANOX_TAVOR_BRIDGEquirk_mellanox_tavor
ffffffff81400eb8 r __pci_fixup_PCI_VENDOR_ID_MELLANOXPCI_DEVICE_ID_MELLANOX_TAVORquirk_mellanox_tavor
ffffffff81400ed0 r __pci_fixup_PCI_ANY_IDPCI_ANY_IDquirk_usb_early_handoff
ffffffff81400ee8 r __pci_fixup_PCI_ANY_IDPCI_ANY_IDpci_fixup_video
ffffffff81400f00 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_MCH_PC1pcie_rootport_aspm_quirk
ffffffff81400f18 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_MCH_PCpcie_rootport_aspm_quirk
ffffffff81400f30 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_MCH_PB1pcie_rootport_aspm_quirk
ffffffff81400f48 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_MCH_PBpcie_rootport_aspm_quirk
ffffffff81400f60 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_MCH_PA1pcie_rootport_aspm_quirk
ffffffff81400f78 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_MCH_PApcie_rootport_aspm_quirk
ffffffff81400f90 R __end_pci_fixups_final
ffffffff81400f90 r __pci_fixup_PCI_VENDOR_ID_BROADCOMPCI_DEVICE_ID_TIGON3_5719quirk_brcm_5719_limit_mrrs
ffffffff81400f90 R __start_pci_fixups_enable
ffffffff81400fa8 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_ANY_IDquirk_via_vlink
ffffffff81400fc0 r __pci_fixup_PCI_VENDOR_ID_TI0x8032pci_post_fixup_toshiba_ohci1394
ffffffff81400fd8 R __end_pci_fixups_enable
ffffffff81400fd8 r __pci_fixup_resumePCI_VENDOR_ID_AMDPCI_DEVICE_ID_AMD_8111_SMBUSquirk_disable_amd_8111_boot_interrupt
ffffffff81400fd8 R __start_pci_fixups_resume
ffffffff81400ff0 r __pci_fixup_resumePCI_VENDOR_ID_AMDPCI_DEVICE_ID_AMD_8132_BRIDGEquirk_disable_amd_813x_boot_interrupt
ffffffff81401008 r __pci_fixup_resumePCI_VENDOR_ID_AMDPCI_DEVICE_ID_AMD_8131_BRIDGEquirk_disable_amd_813x_boot_interrupt
ffffffff81401020 r __pci_fixup_resumePCI_VENDOR_ID_SERVERWORKSPCI_DEVICE_ID_SERVERWORKS_HT1000SBquirk_disable_broadcom_boot_interrupt
ffffffff81401038 r __pci_fixup_resumePCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_ESB_10quirk_disable_intel_boot_interrupt
ffffffff81401050 r __pci_fixup_resumePCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_80332_1quirk_reroute_to_boot_interrupts_intel
ffffffff81401068 r __pci_fixup_resumePCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_80332_0quirk_reroute_to_boot_interrupts_intel
ffffffff81401080 r __pci_fixup_resumePCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_PXHVquirk_reroute_to_boot_interrupts_intel
ffffffff81401098 r __pci_fixup_resumePCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_PXH_1quirk_reroute_to_boot_interrupts_intel
ffffffff814010b0 r __pci_fixup_resumePCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_PXH_0quirk_reroute_to_boot_interrupts_intel
ffffffff814010c8 r __pci_fixup_resumePCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_ESB2_0quirk_reroute_to_boot_interrupts_intel
ffffffff814010e0 r __pci_fixup_resumePCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_80333_1quirk_reroute_to_boot_interrupts_intel
ffffffff814010f8 r __pci_fixup_resumePCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_80333_0quirk_reroute_to_boot_interrupts_intel
ffffffff81401110 r __pci_fixup_resumePCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_ICH6_1asus_hides_smbus_lpc_ich6_resume
ffffffff81401128 r __pci_fixup_resumePCI_VENDOR_ID_CYRIXPCI_DEVICE_ID_CYRIX_PCI_MASTERquirk_mediagx_master
ffffffff81401140 r __pci_fixup_resumePCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_8361quirk_vialatency
ffffffff81401158 r __pci_fixup_resumePCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_8371_1quirk_vialatency
ffffffff81401170 r __pci_fixup_resumePCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_8363_0quirk_vialatency
ffffffff81401188 r __pci_fixup_resumePCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82441quirk_passive_release
ffffffff814011a0 r __pci_fixup_resumePCI_VENDOR_ID_CYRIXPCI_DEVICE_ID_CYRIX_5530_LEGACYpci_early_fixup_cyrix_5530
ffffffff814011b8 r __pci_fixup_resumePCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_8237pci_fixup_msi_k8t_onboard_sound
ffffffff814011d0 r __pci_fixup_resumePCI_VENDOR_ID_NVIDIAPCI_DEVICE_ID_NVIDIA_NFORCE2pci_fixup_nforce2
ffffffff814011e8 r __pci_fixup_resumePCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_8367_0pci_fixup_via_northbridge_bug
ffffffff81401200 r __pci_fixup_resumePCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_8361pci_fixup_via_northbridge_bug
ffffffff81401218 r __pci_fixup_resumePCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_8622pci_fixup_via_northbridge_bug
ffffffff81401230 r __pci_fixup_resumePCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_8363_0pci_fixup_via_northbridge_bug
ffffffff81401248 R __end_pci_fixups_resume
ffffffff81401248 r __pci_fixup_resume_earlyPCI_VENDOR_ID_ALPCI_ANY_IDnv_msi_ht_cap_quirk_all
ffffffff81401248 R __start_pci_fixups_resume_early
ffffffff81401260 r __pci_fixup_resume_earlyPCI_VENDOR_ID_NVIDIAPCI_ANY_IDnv_msi_ht_cap_quirk_leaf
ffffffff81401278 r __pci_fixup_resume_earlyPCI_VENDOR_ID_NVIDIAPCI_DEVICE_ID_NVIDIA_CK804_PCIEquirk_nvidia_ck804_pcie_aer_ext_cap
ffffffff81401290 r __pci_fixup_resume_earlyPCI_VENDOR_ID_JMICRONPCI_DEVICE_ID_JMICRON_JMB369quirk_jmicron_ata
ffffffff814012a8 r __pci_fixup_resume_earlyPCI_VENDOR_ID_JMICRONPCI_DEVICE_ID_JMICRON_JMB368quirk_jmicron_ata
ffffffff814012c0 r __pci_fixup_resume_earlyPCI_VENDOR_ID_JMICRONPCI_DEVICE_ID_JMICRON_JMB366quirk_jmicron_ata
ffffffff814012d8 r __pci_fixup_resume_earlyPCI_VENDOR_ID_JMICRONPCI_DEVICE_ID_JMICRON_JMB365quirk_jmicron_ata
ffffffff814012f0 r __pci_fixup_resume_earlyPCI_VENDOR_ID_JMICRONPCI_DEVICE_ID_JMICRON_JMB364quirk_jmicron_ata
ffffffff81401308 r __pci_fixup_resume_earlyPCI_VENDOR_ID_JMICRONPCI_DEVICE_ID_JMICRON_JMB363quirk_jmicron_ata
ffffffff81401320 r __pci_fixup_resume_earlyPCI_VENDOR_ID_JMICRONPCI_DEVICE_ID_JMICRON_JMB362quirk_jmicron_ata
ffffffff81401338 r __pci_fixup_resume_earlyPCI_VENDOR_ID_JMICRONPCI_DEVICE_ID_JMICRON_JMB361quirk_jmicron_ata
ffffffff81401350 r __pci_fixup_resume_earlyPCI_VENDOR_ID_JMICRONPCI_DEVICE_ID_JMICRON_JMB360quirk_jmicron_ata
ffffffff81401368 r __pci_fixup_resume_earlyPCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_8237asus_hides_ac97_lpc
ffffffff81401380 r __pci_fixup_resume_earlyPCI_VENDOR_ID_SIPCI_DEVICE_ID_SI_503quirk_sis_503
ffffffff81401398 r __pci_fixup_resume_earlyPCI_VENDOR_ID_SIPCI_DEVICE_ID_SI_LPCquirk_sis_96x_smbus
ffffffff814013b0 r __pci_fixup_resume_earlyPCI_VENDOR_ID_SIPCI_DEVICE_ID_SI_963quirk_sis_96x_smbus
ffffffff814013c8 r __pci_fixup_resume_earlyPCI_VENDOR_ID_SIPCI_DEVICE_ID_SI_962quirk_sis_96x_smbus
ffffffff814013e0 r __pci_fixup_resume_earlyPCI_VENDOR_ID_SIPCI_DEVICE_ID_SI_961quirk_sis_96x_smbus
ffffffff814013f8 r __pci_fixup_resume_earlyPCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_ICH6_1asus_hides_smbus_lpc_ich6_resume_early
ffffffff81401410 r __pci_fixup_resume_earlyPCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82801EB_0asus_hides_smbus_lpc
ffffffff81401428 r __pci_fixup_resume_earlyPCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82801DB_12asus_hides_smbus_lpc
ffffffff81401440 r __pci_fixup_resume_earlyPCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82801CA_12asus_hides_smbus_lpc
ffffffff81401458 r __pci_fixup_resume_earlyPCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82801CA_0asus_hides_smbus_lpc
ffffffff81401470 r __pci_fixup_resume_earlyPCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82801BA_0asus_hides_smbus_lpc
ffffffff81401488 r __pci_fixup_resume_earlyPCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82801DB_0asus_hides_smbus_lpc
ffffffff814014a0 r __pci_fixup_resume_earlyPCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82801AA_0asus_hides_smbus_lpc
ffffffff814014b8 r __pci_fixup_resume_earlyPCI_VENDOR_ID_AMDPCI_DEVICE_ID_AMD_HUDSON2_SATA_IDEquirk_amd_ide_mode
ffffffff814014d0 r __pci_fixup_resume_earlyPCI_VENDOR_ID_ATIPCI_DEVICE_ID_ATI_IXP700_SATAquirk_amd_ide_mode
ffffffff814014e8 r __pci_fixup_resume_earlyPCI_VENDOR_ID_ATIPCI_DEVICE_ID_ATI_IXP600_SATAquirk_amd_ide_mode
ffffffff81401500 r __pci_fixup_resume_earlyPCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82454NXquirk_disable_pxb
ffffffff81401518 r __pci_fixup_resume_earlyPCI_VENDOR_ID_AMDPCI_DEVICE_ID_AMD_FE_GATE_700Cquirk_amd_ordering
ffffffff81401530 r __pci_fixup_resume_earlyPCI_ANY_IDPCI_ANY_IDquirk_cardbus_legacy
ffffffff81401548 r __pci_fixup_resume_earlyPCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_8237quirk_via_vt8237_bypass_apic_deassert
ffffffff81401560 r __pci_fixup_resume_earlyPCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_82C686quirk_via_ioapic
ffffffff81401578 R __end_pci_fixups_resume_early
ffffffff81401578 r __pci_fixup_suspendPCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_ICH6_1asus_hides_smbus_lpc_ich6_suspend
ffffffff81401578 R __start_pci_fixups_suspend
ffffffff81401590 R __end_builtin_fw
ffffffff81401590 R __end_pci_fixups_suspend
ffffffff81401590 R __end_rio_switch_ops
ffffffff81401590 r __ksymtab_IO_APIC_get_PCI_irq_vector
ffffffff81401590 R __start___ksymtab
ffffffff81401590 R __start_builtin_fw
ffffffff81401590 R __start_rio_switch_ops
ffffffff814015a0 r __ksymtab_I_BDEV
ffffffff814015b0 r __ksymtab____pskb_trim
ffffffff814015c0 r __ksymtab____ratelimit
ffffffff814015d0 r __ksymtab___alloc_pages_nodemask
ffffffff814015e0 r __ksymtab___alloc_skb
ffffffff814015f0 r __ksymtab___alloc_tty_driver
ffffffff81401600 r __ksymtab___bdevname
ffffffff81401610 r __ksymtab___bforget
ffffffff81401620 r __ksymtab___bio_clone
ffffffff81401630 r __ksymtab___bitmap_and
ffffffff81401640 r __ksymtab___bitmap_andnot
ffffffff81401650 r __ksymtab___bitmap_complement
ffffffff81401660 r __ksymtab___bitmap_empty
ffffffff81401670 r __ksymtab___bitmap_equal
ffffffff81401680 r __ksymtab___bitmap_full
ffffffff81401690 r __ksymtab___bitmap_intersects
ffffffff814016a0 r __ksymtab___bitmap_or
ffffffff814016b0 r __ksymtab___bitmap_parse
ffffffff814016c0 r __ksymtab___bitmap_shift_left
ffffffff814016d0 r __ksymtab___bitmap_shift_right
ffffffff814016e0 r __ksymtab___bitmap_subset
ffffffff814016f0 r __ksymtab___bitmap_weight
ffffffff81401700 r __ksymtab___bitmap_xor
ffffffff81401710 r __ksymtab___blk_end_request
ffffffff81401720 r __ksymtab___blk_end_request_all
ffffffff81401730 r __ksymtab___blk_end_request_cur
ffffffff81401740 r __ksymtab___blk_iopoll_complete
ffffffff81401750 r __ksymtab___blk_run_queue
ffffffff81401760 r __ksymtab___block_page_mkwrite
ffffffff81401770 r __ksymtab___block_write_begin
ffffffff81401780 r __ksymtab___blockdev_direct_IO
ffffffff81401790 r __ksymtab___bread
ffffffff814017a0 r __ksymtab___breadahead
ffffffff814017b0 r __ksymtab___break_lease
ffffffff814017c0 r __ksymtab___brelse
ffffffff814017d0 r __ksymtab___cap_empty_set
ffffffff814017e0 r __ksymtab___check_region
ffffffff814017f0 r __ksymtab___cleancache_get_page
ffffffff81401800 r __ksymtab___cleancache_init_fs
ffffffff81401810 r __ksymtab___cleancache_init_shared_fs
ffffffff81401820 r __ksymtab___cleancache_invalidate_fs
ffffffff81401830 r __ksymtab___cleancache_invalidate_inode
ffffffff81401840 r __ksymtab___cleancache_invalidate_page
ffffffff81401850 r __ksymtab___cleancache_put_page
ffffffff81401860 r __ksymtab___clear_user
ffffffff81401870 r __ksymtab___cond_resched_lock
ffffffff81401880 r __ksymtab___cond_resched_softirq
ffffffff81401890 r __ksymtab___const_udelay
ffffffff814018a0 r __ksymtab___copy_user_nocache
ffffffff814018b0 r __ksymtab___crc32c_le
ffffffff814018c0 r __ksymtab___d_drop
ffffffff814018d0 r __ksymtab___dec_zone_page_state
ffffffff814018e0 r __ksymtab___delay
ffffffff814018f0 r __ksymtab___destroy_inode
ffffffff81401900 r __ksymtab___dev_get_by_index
ffffffff81401910 r __ksymtab___dev_get_by_name
ffffffff81401920 r __ksymtab___dev_getfirstbyhwtype
ffffffff81401930 r __ksymtab___dev_printk
ffffffff81401940 r __ksymtab___dev_remove_pack
ffffffff81401950 r __ksymtab___devm_release_region
ffffffff81401960 r __ksymtab___devm_request_region
ffffffff81401970 r __ksymtab___dst_destroy_metrics_generic
ffffffff81401980 r __ksymtab___dst_free
ffffffff81401990 r __ksymtab___elv_add_request
ffffffff814019a0 r __ksymtab___ethtool_get_settings
ffffffff814019b0 r __ksymtab___f_setown
ffffffff814019c0 r __ksymtab___find_get_block
ffffffff814019d0 r __ksymtab___first_cpu
ffffffff814019e0 r __ksymtab___free_pages
ffffffff814019f0 r __ksymtab___generic_block_fiemap
ffffffff81401a00 r __ksymtab___generic_file_aio_write
ffffffff81401a10 r __ksymtab___get_free_pages
ffffffff81401a20 r __ksymtab___get_page_tail
ffffffff81401a30 r __ksymtab___get_user_1
ffffffff81401a40 r __ksymtab___get_user_2
ffffffff81401a50 r __ksymtab___get_user_4
ffffffff81401a60 r __ksymtab___get_user_8
ffffffff81401a70 r __ksymtab___get_user_pages
ffffffff81401a80 r __ksymtab___getblk
ffffffff81401a90 r __ksymtab___ht_create_irq
ffffffff81401aa0 r __ksymtab___hw_addr_add_multiple
ffffffff81401ab0 r __ksymtab___hw_addr_del_multiple
ffffffff81401ac0 r __ksymtab___hw_addr_flush
ffffffff81401ad0 r __ksymtab___hw_addr_init
ffffffff81401ae0 r __ksymtab___hw_addr_sync
ffffffff81401af0 r __ksymtab___hw_addr_unsync
ffffffff81401b00 r __ksymtab___inc_zone_page_state
ffffffff81401b10 r __ksymtab___init_rwsem
ffffffff81401b20 r __ksymtab___init_waitqueue_head
ffffffff81401b30 r __ksymtab___insert_inode_hash
ffffffff81401b40 r __ksymtab___invalidate_device
ffffffff81401b50 r __ksymtab___ip_dev_find
ffffffff81401b60 r __ksymtab___ip_select_ident
ffffffff81401b70 r __ksymtab___ipv6_addr_type
ffffffff81401b80 r __ksymtab___kernel_param_lock
ffffffff81401b90 r __ksymtab___kernel_param_unlock
ffffffff81401ba0 r __ksymtab___kfifo_alloc
ffffffff81401bb0 r __ksymtab___kfifo_dma_in_finish_r
ffffffff81401bc0 r __ksymtab___kfifo_dma_in_prepare
ffffffff81401bd0 r __ksymtab___kfifo_dma_in_prepare_r
ffffffff81401be0 r __ksymtab___kfifo_dma_out_finish_r
ffffffff81401bf0 r __ksymtab___kfifo_dma_out_prepare
ffffffff81401c00 r __ksymtab___kfifo_dma_out_prepare_r
ffffffff81401c10 r __ksymtab___kfifo_free
ffffffff81401c20 r __ksymtab___kfifo_from_user
ffffffff81401c30 r __ksymtab___kfifo_from_user_r
ffffffff81401c40 r __ksymtab___kfifo_in
ffffffff81401c50 r __ksymtab___kfifo_in_r
ffffffff81401c60 r __ksymtab___kfifo_init
ffffffff81401c70 r __ksymtab___kfifo_len_r
ffffffff81401c80 r __ksymtab___kfifo_out
ffffffff81401c90 r __ksymtab___kfifo_out_peek
ffffffff81401ca0 r __ksymtab___kfifo_out_peek_r
ffffffff81401cb0 r __ksymtab___kfifo_out_r
ffffffff81401cc0 r __ksymtab___kfifo_skip_r
ffffffff81401cd0 r __ksymtab___kfifo_to_user
ffffffff81401ce0 r __ksymtab___kfifo_to_user_r
ffffffff81401cf0 r __ksymtab___kfree_skb
ffffffff81401d00 r __ksymtab___kmalloc
ffffffff81401d10 r __ksymtab___kmalloc_node
ffffffff81401d20 r __ksymtab___krealloc
ffffffff81401d30 r __ksymtab___lock_buffer
ffffffff81401d40 r __ksymtab___lock_page
ffffffff81401d50 r __ksymtab___locks_copy_lock
ffffffff81401d60 r __ksymtab___lru_cache_add
ffffffff81401d70 r __ksymtab___mark_inode_dirty
ffffffff81401d80 r __ksymtab___memcpy
ffffffff81401d90 r __ksymtab___mod_zone_page_state
ffffffff81401da0 r __ksymtab___module_get
ffffffff81401db0 r __ksymtab___module_put_and_exit
ffffffff81401dc0 r __ksymtab___mutex_init
ffffffff81401dd0 r __ksymtab___napi_complete
ffffffff81401de0 r __ksymtab___napi_schedule
ffffffff81401df0 r __ksymtab___ndelay
ffffffff81401e00 r __ksymtab___neigh_event_send
ffffffff81401e10 r __ksymtab___neigh_for_each_release
ffffffff81401e20 r __ksymtab___netdev_alloc_skb
ffffffff81401e30 r __ksymtab___netdev_printk
ffffffff81401e40 r __ksymtab___netif_schedule
ffffffff81401e50 r __ksymtab___next_cpu
ffffffff81401e60 r __ksymtab___nla_put
ffffffff81401e70 r __ksymtab___nla_put_nohdr
ffffffff81401e80 r __ksymtab___nla_reserve
ffffffff81401e90 r __ksymtab___nla_reserve_nohdr
ffffffff81401ea0 r __ksymtab___nlmsg_put
ffffffff81401eb0 r __ksymtab___node_distance
ffffffff81401ec0 r __ksymtab___nvram_check_checksum
ffffffff81401ed0 r __ksymtab___nvram_read_byte
ffffffff81401ee0 r __ksymtab___nvram_write_byte
ffffffff81401ef0 r __ksymtab___page_cache_alloc
ffffffff81401f00 r __ksymtab___page_symlink
ffffffff81401f10 r __ksymtab___pagevec_lru_add
ffffffff81401f20 r __ksymtab___pagevec_release
ffffffff81401f30 r __ksymtab___pci_enable_wake
ffffffff81401f40 r __ksymtab___pci_register_driver
ffffffff81401f50 r __ksymtab___pci_remove_bus_device
ffffffff81401f60 r __ksymtab___per_cpu_offset
ffffffff81401f70 r __ksymtab___percpu_counter_add
ffffffff81401f80 r __ksymtab___percpu_counter_init
ffffffff81401f90 r __ksymtab___percpu_counter_sum
ffffffff81401fa0 r __ksymtab___phys_addr
ffffffff81401fb0 r __ksymtab___print_symbol
ffffffff81401fc0 r __ksymtab___printk_ratelimit
ffffffff81401fd0 r __ksymtab___ps2_command
ffffffff81401fe0 r __ksymtab___pskb_copy
ffffffff81401ff0 r __ksymtab___pskb_pull_tail
ffffffff81402000 r __ksymtab___put_cred
ffffffff81402010 r __ksymtab___put_user_1
ffffffff81402020 r __ksymtab___put_user_2
ffffffff81402030 r __ksymtab___put_user_4
ffffffff81402040 r __ksymtab___put_user_8
ffffffff81402050 r __ksymtab___refrigerator
ffffffff81402060 r __ksymtab___register_binfmt
ffffffff81402070 r __ksymtab___register_chrdev
ffffffff81402080 r __ksymtab___release_region
ffffffff81402090 r __ksymtab___remove_inode_hash
ffffffff814020a0 r __ksymtab___request_module
ffffffff814020b0 r __ksymtab___request_region
ffffffff814020c0 r __ksymtab___rta_fill
ffffffff814020d0 r __ksymtab___scm_destroy
ffffffff814020e0 r __ksymtab___scm_send
ffffffff814020f0 r __ksymtab___scsi_add_device
ffffffff81402100 r __ksymtab___scsi_alloc_queue
ffffffff81402110 r __ksymtab___scsi_device_lookup
ffffffff81402120 r __ksymtab___scsi_device_lookup_by_target
ffffffff81402130 r __ksymtab___scsi_iterate_devices
ffffffff81402140 r __ksymtab___scsi_print_command
ffffffff81402150 r __ksymtab___scsi_print_sense
ffffffff81402160 r __ksymtab___scsi_put_command
ffffffff81402170 r __ksymtab___secpath_destroy
ffffffff81402180 r __ksymtab___send_remote_softirq
ffffffff81402190 r __ksymtab___seq_open_private
ffffffff814021a0 r __ksymtab___serio_register_driver
ffffffff814021b0 r __ksymtab___serio_register_port
ffffffff814021c0 r __ksymtab___set_page_dirty_buffers
ffffffff814021d0 r __ksymtab___set_page_dirty_nobuffers
ffffffff814021e0 r __ksymtab___set_personality
ffffffff814021f0 r __ksymtab___sg_alloc_table
ffffffff81402200 r __ksymtab___sg_free_table
ffffffff81402210 r __ksymtab___sk_dst_check
ffffffff81402220 r __ksymtab___sk_mem_reclaim
ffffffff81402230 r __ksymtab___sk_mem_schedule
ffffffff81402240 r __ksymtab___skb_checksum_complete
ffffffff81402250 r __ksymtab___skb_checksum_complete_head
ffffffff81402260 r __ksymtab___skb_get_rxhash
ffffffff81402270 r __ksymtab___skb_recv_datagram
ffffffff81402280 r __ksymtab___skb_tx_hash
ffffffff81402290 r __ksymtab___skb_warn_lro_forwarding
ffffffff814022a0 r __ksymtab___sock_create
ffffffff814022b0 r __ksymtab___splice_from_pipe
ffffffff814022c0 r __ksymtab___starget_for_each_device
ffffffff814022d0 r __ksymtab___strnlen_user
ffffffff814022e0 r __ksymtab___sw_hweight16
ffffffff814022f0 r __ksymtab___sw_hweight32
ffffffff81402300 r __ksymtab___sw_hweight64
ffffffff81402310 r __ksymtab___sw_hweight8
ffffffff81402320 r __ksymtab___symbol_put
ffffffff81402330 r __ksymtab___sync_dirty_buffer
ffffffff81402340 r __ksymtab___task_pid_nr_ns
ffffffff81402350 r __ksymtab___tasklet_hi_schedule
ffffffff81402360 r __ksymtab___tasklet_hi_schedule_first
ffffffff81402370 r __ksymtab___tasklet_schedule
ffffffff81402380 r __ksymtab___udelay
ffffffff81402390 r __ksymtab___unregister_chrdev
ffffffff814023a0 r __ksymtab___virt_addr_valid
ffffffff814023b0 r __ksymtab___vmalloc
ffffffff814023c0 r __ksymtab___wait_on_bit
ffffffff814023d0 r __ksymtab___wait_on_bit_lock
ffffffff814023e0 r __ksymtab___wait_on_buffer
ffffffff814023f0 r __ksymtab___wake_up
ffffffff81402400 r __ksymtab___wake_up_bit
ffffffff81402410 r __ksymtab___xfrm_decode_session
ffffffff81402420 r __ksymtab___xfrm_init_state
ffffffff81402430 r __ksymtab___xfrm_policy_check
ffffffff81402440 r __ksymtab___xfrm_route_forward
ffffffff81402450 r __ksymtab___xfrm_state_delete
ffffffff81402460 r __ksymtab___xfrm_state_destroy
ffffffff81402470 r __ksymtab__atomic_dec_and_lock
ffffffff81402480 r __ksymtab__cond_resched
ffffffff81402490 r __ksymtab__copy_from_user
ffffffff814024a0 r __ksymtab__copy_to_user
ffffffff814024b0 r __ksymtab__ctype
ffffffff814024c0 r __ksymtab__dev_info
ffffffff814024d0 r __ksymtab__kstrtol
ffffffff814024e0 r __ksymtab__kstrtoul
ffffffff814024f0 r __ksymtab__local_bh_enable
ffffffff81402500 r __ksymtab__raw_read_lock
ffffffff81402510 r __ksymtab__raw_read_lock_bh
ffffffff81402520 r __ksymtab__raw_read_lock_irq
ffffffff81402530 r __ksymtab__raw_read_lock_irqsave
ffffffff81402540 r __ksymtab__raw_read_trylock
ffffffff81402550 r __ksymtab__raw_read_unlock_bh
ffffffff81402560 r __ksymtab__raw_read_unlock_irqrestore
ffffffff81402570 r __ksymtab__raw_spin_lock
ffffffff81402580 r __ksymtab__raw_spin_lock_bh
ffffffff81402590 r __ksymtab__raw_spin_lock_irq
ffffffff814025a0 r __ksymtab__raw_spin_lock_irqsave
ffffffff814025b0 r __ksymtab__raw_spin_trylock
ffffffff814025c0 r __ksymtab__raw_spin_trylock_bh
ffffffff814025d0 r __ksymtab__raw_spin_unlock_bh
ffffffff814025e0 r __ksymtab__raw_spin_unlock_irqrestore
ffffffff814025f0 r __ksymtab__raw_write_lock
ffffffff81402600 r __ksymtab__raw_write_lock_bh
ffffffff81402610 r __ksymtab__raw_write_lock_irq
ffffffff81402620 r __ksymtab__raw_write_lock_irqsave
ffffffff81402630 r __ksymtab__raw_write_trylock
ffffffff81402640 r __ksymtab__raw_write_unlock_bh
ffffffff81402650 r __ksymtab__raw_write_unlock_irqrestore
ffffffff81402660 r __ksymtab_abort_creds
ffffffff81402670 r __ksymtab_abort_exclusive_wait
ffffffff81402680 r __ksymtab_account_page_dirtied
ffffffff81402690 r __ksymtab_account_page_redirty
ffffffff814026a0 r __ksymtab_account_page_writeback
ffffffff814026b0 r __ksymtab_acpi_acquire_global_lock
ffffffff814026c0 r __ksymtab_acpi_attach_data
ffffffff814026d0 r __ksymtab_acpi_bus_add
ffffffff814026e0 r __ksymtab_acpi_bus_can_wakeup
ffffffff814026f0 r __ksymtab_acpi_bus_generate_netlink_event
ffffffff81402700 r __ksymtab_acpi_bus_generate_proc_event
ffffffff81402710 r __ksymtab_acpi_bus_get_device
ffffffff81402720 r __ksymtab_acpi_bus_get_private_data
ffffffff81402730 r __ksymtab_acpi_bus_get_status
ffffffff81402740 r __ksymtab_acpi_bus_power_manageable
ffffffff81402750 r __ksymtab_acpi_bus_private_data_handler
ffffffff81402760 r __ksymtab_acpi_bus_register_driver
ffffffff81402770 r __ksymtab_acpi_bus_set_power
ffffffff81402780 r __ksymtab_acpi_bus_start
ffffffff81402790 r __ksymtab_acpi_bus_unregister_driver
ffffffff814027a0 r __ksymtab_acpi_check_address_range
ffffffff814027b0 r __ksymtab_acpi_check_region
ffffffff814027c0 r __ksymtab_acpi_check_resource_conflict
ffffffff814027d0 r __ksymtab_acpi_clear_event
ffffffff814027e0 r __ksymtab_acpi_clear_gpe
ffffffff814027f0 r __ksymtab_acpi_current_gpe_count
ffffffff81402800 r __ksymtab_acpi_dbg_layer
ffffffff81402810 r __ksymtab_acpi_dbg_level
ffffffff81402820 r __ksymtab_acpi_detach_data
ffffffff81402830 r __ksymtab_acpi_device_hid
ffffffff81402840 r __ksymtab_acpi_disable
ffffffff81402850 r __ksymtab_acpi_disable_all_gpes
ffffffff81402860 r __ksymtab_acpi_disable_event
ffffffff81402870 r __ksymtab_acpi_disable_gpe
ffffffff81402880 r __ksymtab_acpi_disabled
ffffffff81402890 r __ksymtab_acpi_enable
ffffffff814028a0 r __ksymtab_acpi_enable_all_runtime_gpes
ffffffff814028b0 r __ksymtab_acpi_enable_event
ffffffff814028c0 r __ksymtab_acpi_enable_gpe
ffffffff814028d0 r __ksymtab_acpi_enable_subsystem
ffffffff814028e0 r __ksymtab_acpi_enter_sleep_state
ffffffff814028f0 r __ksymtab_acpi_enter_sleep_state_prep
ffffffff81402900 r __ksymtab_acpi_enter_sleep_state_s4bios
ffffffff81402910 r __ksymtab_acpi_error
ffffffff81402920 r __ksymtab_acpi_evaluate_integer
ffffffff81402930 r __ksymtab_acpi_evaluate_object
ffffffff81402940 r __ksymtab_acpi_evaluate_object_typed
ffffffff81402950 r __ksymtab_acpi_evaluate_reference
ffffffff81402960 r __ksymtab_acpi_exception
ffffffff81402970 r __ksymtab_acpi_extract_package
ffffffff81402980 r __ksymtab_acpi_format_exception
ffffffff81402990 r __ksymtab_acpi_gbl_FADT
ffffffff814029a0 r __ksymtab_acpi_get_child
ffffffff814029b0 r __ksymtab_acpi_get_current_resources
ffffffff814029c0 r __ksymtab_acpi_get_data
ffffffff814029d0 r __ksymtab_acpi_get_devices
ffffffff814029e0 r __ksymtab_acpi_get_event_resources
ffffffff814029f0 r __ksymtab_acpi_get_event_status
ffffffff81402a00 r __ksymtab_acpi_get_gpe_device
ffffffff81402a10 r __ksymtab_acpi_get_gpe_status
ffffffff81402a20 r __ksymtab_acpi_get_handle
ffffffff81402a30 r __ksymtab_acpi_get_id
ffffffff81402a40 r __ksymtab_acpi_get_irq_routing_table
ffffffff81402a50 r __ksymtab_acpi_get_name
ffffffff81402a60 r __ksymtab_acpi_get_next_object
ffffffff81402a70 r __ksymtab_acpi_get_node
ffffffff81402a80 r __ksymtab_acpi_get_object_info
ffffffff81402a90 r __ksymtab_acpi_get_parent
ffffffff81402aa0 r __ksymtab_acpi_get_physical_device
ffffffff81402ab0 r __ksymtab_acpi_get_sleep_type_data
ffffffff81402ac0 r __ksymtab_acpi_get_table
ffffffff81402ad0 r __ksymtab_acpi_get_table_by_index
ffffffff81402ae0 r __ksymtab_acpi_get_table_header
ffffffff81402af0 r __ksymtab_acpi_get_type
ffffffff81402b00 r __ksymtab_acpi_get_vendor_resource
ffffffff81402b10 r __ksymtab_acpi_info
ffffffff81402b20 r __ksymtab_acpi_initialize_objects
ffffffff81402b30 r __ksymtab_acpi_install_address_space_handler
ffffffff81402b40 r __ksymtab_acpi_install_fixed_event_handler
ffffffff81402b50 r __ksymtab_acpi_install_global_event_handler
ffffffff81402b60 r __ksymtab_acpi_install_gpe_block
ffffffff81402b70 r __ksymtab_acpi_install_gpe_handler
ffffffff81402b80 r __ksymtab_acpi_install_interface
ffffffff81402b90 r __ksymtab_acpi_install_interface_handler
ffffffff81402ba0 r __ksymtab_acpi_install_method
ffffffff81402bb0 r __ksymtab_acpi_install_notify_handler
ffffffff81402bc0 r __ksymtab_acpi_install_table_handler
ffffffff81402bd0 r __ksymtab_acpi_leave_sleep_state
ffffffff81402be0 r __ksymtab_acpi_leave_sleep_state_prep
ffffffff81402bf0 r __ksymtab_acpi_load_table
ffffffff81402c00 r __ksymtab_acpi_load_tables
ffffffff81402c10 r __ksymtab_acpi_lock_ac_dir
ffffffff81402c20 r __ksymtab_acpi_lock_battery_dir
ffffffff81402c30 r __ksymtab_acpi_map_lsapic
ffffffff81402c40 r __ksymtab_acpi_match_device_ids
ffffffff81402c50 r __ksymtab_acpi_notifier_call_chain
ffffffff81402c60 r __ksymtab_acpi_os_execute
ffffffff81402c70 r __ksymtab_acpi_os_map_generic_address
ffffffff81402c80 r __ksymtab_acpi_os_read_port
ffffffff81402c90 r __ksymtab_acpi_os_unmap_generic_address
ffffffff81402ca0 r __ksymtab_acpi_os_wait_events_complete
ffffffff81402cb0 r __ksymtab_acpi_os_write_port
ffffffff81402cc0 r __ksymtab_acpi_pci_disabled
ffffffff81402cd0 r __ksymtab_acpi_pci_osc_control_set
ffffffff81402ce0 r __ksymtab_acpi_pci_register_driver
ffffffff81402cf0 r __ksymtab_acpi_pci_unregister_driver
ffffffff81402d00 r __ksymtab_acpi_processor_power_init_bm_check
ffffffff81402d10 r __ksymtab_acpi_purge_cached_objects
ffffffff81402d20 r __ksymtab_acpi_read
ffffffff81402d30 r __ksymtab_acpi_read_bit_register
ffffffff81402d40 r __ksymtab_acpi_register_ioapic
ffffffff81402d50 r __ksymtab_acpi_release_global_lock
ffffffff81402d60 r __ksymtab_acpi_remove_address_space_handler
ffffffff81402d70 r __ksymtab_acpi_remove_fixed_event_handler
ffffffff81402d80 r __ksymtab_acpi_remove_gpe_block
ffffffff81402d90 r __ksymtab_acpi_remove_gpe_handler
ffffffff81402da0 r __ksymtab_acpi_remove_interface
ffffffff81402db0 r __ksymtab_acpi_remove_notify_handler
ffffffff81402dc0 r __ksymtab_acpi_remove_table_handler
ffffffff81402dd0 r __ksymtab_acpi_reset
ffffffff81402de0 r __ksymtab_acpi_resource_to_address64
ffffffff81402df0 r __ksymtab_acpi_resources_are_enforced
ffffffff81402e00 r __ksymtab_acpi_root_dir
ffffffff81402e10 r __ksymtab_acpi_run_osc
ffffffff81402e20 r __ksymtab_acpi_set_current_resources
ffffffff81402e30 r __ksymtab_acpi_set_firmware_waking_vector
ffffffff81402e40 r __ksymtab_acpi_set_firmware_waking_vector64
ffffffff81402e50 r __ksymtab_acpi_set_gpe_wake_mask
ffffffff81402e60 r __ksymtab_acpi_setup_gpe_for_wake
ffffffff81402e70 r __ksymtab_acpi_terminate
ffffffff81402e80 r __ksymtab_acpi_unload_table_id
ffffffff81402e90 r __ksymtab_acpi_unlock_ac_dir
ffffffff81402ea0 r __ksymtab_acpi_unlock_battery_dir
ffffffff81402eb0 r __ksymtab_acpi_unmap_lsapic
ffffffff81402ec0 r __ksymtab_acpi_unregister_ioapic
ffffffff81402ed0 r __ksymtab_acpi_update_all_gpes
ffffffff81402ee0 r __ksymtab_acpi_walk_namespace
ffffffff81402ef0 r __ksymtab_acpi_walk_resources
ffffffff81402f00 r __ksymtab_acpi_warning
ffffffff81402f10 r __ksymtab_acpi_write
ffffffff81402f20 r __ksymtab_acpi_write_bit_register
ffffffff81402f30 r __ksymtab_add_disk
ffffffff81402f40 r __ksymtab_add_taint
ffffffff81402f50 r __ksymtab_add_timer
ffffffff81402f60 r __ksymtab_add_to_page_cache_locked
ffffffff81402f70 r __ksymtab_add_wait_queue
ffffffff81402f80 r __ksymtab_add_wait_queue_exclusive
ffffffff81402f90 r __ksymtab_address_space_init_once
ffffffff81402fa0 r __ksymtab_adjust_resource
ffffffff81402fb0 r __ksymtab_aio_complete
ffffffff81402fc0 r __ksymtab_aio_put_req
ffffffff81402fd0 r __ksymtab_alloc_buffer_head
ffffffff81402fe0 r __ksymtab_alloc_chrdev_region
ffffffff81402ff0 r __ksymtab_alloc_cpu_rmap
ffffffff81403000 r __ksymtab_alloc_disk
ffffffff81403010 r __ksymtab_alloc_disk_node
ffffffff81403020 r __ksymtab_alloc_etherdev_mqs
ffffffff81403030 r __ksymtab_alloc_file
ffffffff81403040 r __ksymtab_alloc_netdev_mqs
ffffffff81403050 r __ksymtab_alloc_pages_current
ffffffff81403060 r __ksymtab_alloc_pages_exact
ffffffff81403070 r __ksymtab_alloc_pages_exact_nid
ffffffff81403080 r __ksymtab_alloc_pci_dev
ffffffff81403090 r __ksymtab_alloc_xenballooned_pages
ffffffff814030a0 r __ksymtab_allocate_resource
ffffffff814030b0 r __ksymtab_allow_signal
ffffffff814030c0 r __ksymtab_amd_e400_c1e_detected
ffffffff814030d0 r __ksymtab_amd_iommu_complete_ppr
ffffffff814030e0 r __ksymtab_amd_iommu_device_info
ffffffff814030f0 r __ksymtab_amd_iommu_domain_clear_gcr3
ffffffff81403100 r __ksymtab_amd_iommu_domain_direct_map
ffffffff81403110 r __ksymtab_amd_iommu_domain_enable_v2
ffffffff81403120 r __ksymtab_amd_iommu_domain_set_gcr3
ffffffff81403130 r __ksymtab_amd_iommu_enable_device_erratum
ffffffff81403140 r __ksymtab_amd_iommu_flush_page
ffffffff81403150 r __ksymtab_amd_iommu_flush_tlb
ffffffff81403160 r __ksymtab_amd_iommu_get_v2_domain
ffffffff81403170 r __ksymtab_amd_iommu_register_ppr_notifier
ffffffff81403180 r __ksymtab_amd_iommu_unregister_ppr_notifier
ffffffff81403190 r __ksymtab_amd_iommu_v2_supported
ffffffff814031a0 r __ksymtab_amd_nb_misc_ids
ffffffff814031b0 r __ksymtab_amd_northbridges
ffffffff814031c0 r __ksymtab_arch_debugfs_dir
ffffffff814031d0 r __ksymtab_arch_register_cpu
ffffffff814031e0 r __ksymtab_arch_unregister_cpu
ffffffff814031f0 r __ksymtab_argv_free
ffffffff81403200 r __ksymtab_argv_split
ffffffff81403210 r __ksymtab_arp_create
ffffffff81403220 r __ksymtab_arp_find
ffffffff81403230 r __ksymtab_arp_invalidate
ffffffff81403240 r __ksymtab_arp_send
ffffffff81403250 r __ksymtab_arp_tbl
ffffffff81403260 r __ksymtab_arp_xmit
ffffffff81403270 r __ksymtab_ata_dev_printk
ffffffff81403280 r __ksymtab_ata_link_printk
ffffffff81403290 r __ksymtab_ata_port_printk
ffffffff814032a0 r __ksymtab_ata_print_version
ffffffff814032b0 r __ksymtab_ata_scsi_cmd_error_handler
ffffffff814032c0 r __ksymtab_atomic_dec_and_mutex_lock
ffffffff814032d0 r __ksymtab_audit_log
ffffffff814032e0 r __ksymtab_audit_log_end
ffffffff814032f0 r __ksymtab_audit_log_format
ffffffff81403300 r __ksymtab_audit_log_start
ffffffff81403310 r __ksymtab_audit_log_task_context
ffffffff81403320 r __ksymtab_autoremove_wake_function
ffffffff81403330 r __ksymtab_avail_to_resrv_perfctr_nmi_bit
ffffffff81403340 r __ksymtab_avenrun
ffffffff81403350 r __ksymtab_balance_dirty_pages_ratelimited_nr
ffffffff81403360 r __ksymtab_bcd2bin
ffffffff81403370 r __ksymtab_bd_set_size
ffffffff81403380 r __ksymtab_bdev_read_only
ffffffff81403390 r __ksymtab_bdev_stack_limits
ffffffff814033a0 r __ksymtab_bdevname
ffffffff814033b0 r __ksymtab_bdget
ffffffff814033c0 r __ksymtab_bdget_disk
ffffffff814033d0 r __ksymtab_bdi_destroy
ffffffff814033e0 r __ksymtab_bdi_init
ffffffff814033f0 r __ksymtab_bdi_register
ffffffff81403400 r __ksymtab_bdi_register_dev
ffffffff81403410 r __ksymtab_bdi_set_max_ratio
ffffffff81403420 r __ksymtab_bdi_setup_and_register
ffffffff81403430 r __ksymtab_bdi_unregister
ffffffff81403440 r __ksymtab_bdput
ffffffff81403450 r __ksymtab_bh_submit_read
ffffffff81403460 r __ksymtab_bh_uptodate_or_lock
ffffffff81403470 r __ksymtab_bin2bcd
ffffffff81403480 r __ksymtab_bio_add_page
ffffffff81403490 r __ksymtab_bio_add_pc_page
ffffffff814034a0 r __ksymtab_bio_alloc
ffffffff814034b0 r __ksymtab_bio_alloc_bioset
ffffffff814034c0 r __ksymtab_bio_clone
ffffffff814034d0 r __ksymtab_bio_copy_kern
ffffffff814034e0 r __ksymtab_bio_copy_user
ffffffff814034f0 r __ksymtab_bio_endio
ffffffff81403500 r __ksymtab_bio_free
ffffffff81403510 r __ksymtab_bio_get_nr_vecs
ffffffff81403520 r __ksymtab_bio_init
ffffffff81403530 r __ksymtab_bio_kmalloc
ffffffff81403540 r __ksymtab_bio_map_kern
ffffffff81403550 r __ksymtab_bio_map_user
ffffffff81403560 r __ksymtab_bio_pair_release
ffffffff81403570 r __ksymtab_bio_phys_segments
ffffffff81403580 r __ksymtab_bio_put
ffffffff81403590 r __ksymtab_bio_sector_offset
ffffffff814035a0 r __ksymtab_bio_split
ffffffff814035b0 r __ksymtab_bio_uncopy_user
ffffffff814035c0 r __ksymtab_bio_unmap_user
ffffffff814035d0 r __ksymtab_bioset_create
ffffffff814035e0 r __ksymtab_bioset_free
ffffffff814035f0 r __ksymtab_bit_waitqueue
ffffffff81403600 r __ksymtab_bitmap_allocate_region
ffffffff81403610 r __ksymtab_bitmap_bitremap
ffffffff81403620 r __ksymtab_bitmap_clear
ffffffff81403630 r __ksymtab_bitmap_copy_le
ffffffff81403640 r __ksymtab_bitmap_find_free_region
ffffffff81403650 r __ksymtab_bitmap_find_next_zero_area
ffffffff81403660 r __ksymtab_bitmap_fold
ffffffff81403670 r __ksymtab_bitmap_onto
ffffffff81403680 r __ksymtab_bitmap_parse_user
ffffffff81403690 r __ksymtab_bitmap_parselist
ffffffff814036a0 r __ksymtab_bitmap_parselist_user
ffffffff814036b0 r __ksymtab_bitmap_release_region
ffffffff814036c0 r __ksymtab_bitmap_remap
ffffffff814036d0 r __ksymtab_bitmap_scnlistprintf
ffffffff814036e0 r __ksymtab_bitmap_scnprintf
ffffffff814036f0 r __ksymtab_bitmap_set
ffffffff81403700 r __ksymtab_bitrev16
ffffffff81403710 r __ksymtab_bitrev32
ffffffff81403720 r __ksymtab_blk_alloc_queue
ffffffff81403730 r __ksymtab_blk_alloc_queue_node
ffffffff81403740 r __ksymtab_blk_cleanup_queue
ffffffff81403750 r __ksymtab_blk_complete_request
ffffffff81403760 r __ksymtab_blk_delay_queue
ffffffff81403770 r __ksymtab_blk_dump_rq_flags
ffffffff81403780 r __ksymtab_blk_end_request
ffffffff81403790 r __ksymtab_blk_end_request_all
ffffffff814037a0 r __ksymtab_blk_end_request_cur
ffffffff814037b0 r __ksymtab_blk_execute_rq
ffffffff814037c0 r __ksymtab_blk_fetch_request
ffffffff814037d0 r __ksymtab_blk_finish_plug
ffffffff814037e0 r __ksymtab_blk_free_tags
ffffffff814037f0 r __ksymtab_blk_get_backing_dev_info
ffffffff81403800 r __ksymtab_blk_get_queue
ffffffff81403810 r __ksymtab_blk_get_request
ffffffff81403820 r __ksymtab_blk_init_allocated_queue
ffffffff81403830 r __ksymtab_blk_init_queue
ffffffff81403840 r __ksymtab_blk_init_queue_node
ffffffff81403850 r __ksymtab_blk_init_tags
ffffffff81403860 r __ksymtab_blk_iopoll_complete
ffffffff81403870 r __ksymtab_blk_iopoll_disable
ffffffff81403880 r __ksymtab_blk_iopoll_enable
ffffffff81403890 r __ksymtab_blk_iopoll_enabled
ffffffff814038a0 r __ksymtab_blk_iopoll_init
ffffffff814038b0 r __ksymtab_blk_iopoll_sched
ffffffff814038c0 r __ksymtab_blk_limits_io_min
ffffffff814038d0 r __ksymtab_blk_limits_io_opt
ffffffff814038e0 r __ksymtab_blk_limits_max_hw_sectors
ffffffff814038f0 r __ksymtab_blk_lookup_devt
ffffffff81403900 r __ksymtab_blk_make_request
ffffffff81403910 r __ksymtab_blk_max_low_pfn
ffffffff81403920 r __ksymtab_blk_peek_request
ffffffff81403930 r __ksymtab_blk_put_queue
ffffffff81403940 r __ksymtab_blk_put_request
ffffffff81403950 r __ksymtab_blk_queue_alignment_offset
ffffffff81403960 r __ksymtab_blk_queue_bounce
ffffffff81403970 r __ksymtab_blk_queue_bounce_limit
ffffffff81403980 r __ksymtab_blk_queue_dma_alignment
ffffffff81403990 r __ksymtab_blk_queue_dma_pad
ffffffff814039a0 r __ksymtab_blk_queue_end_tag
ffffffff814039b0 r __ksymtab_blk_queue_find_tag
ffffffff814039c0 r __ksymtab_blk_queue_free_tags
ffffffff814039d0 r __ksymtab_blk_queue_init_tags
ffffffff814039e0 r __ksymtab_blk_queue_invalidate_tags
ffffffff814039f0 r __ksymtab_blk_queue_io_min
ffffffff81403a00 r __ksymtab_blk_queue_io_opt
ffffffff81403a10 r __ksymtab_blk_queue_logical_block_size
ffffffff81403a20 r __ksymtab_blk_queue_make_request
ffffffff81403a30 r __ksymtab_blk_queue_max_discard_sectors
ffffffff81403a40 r __ksymtab_blk_queue_max_hw_sectors
ffffffff81403a50 r __ksymtab_blk_queue_max_segment_size
ffffffff81403a60 r __ksymtab_blk_queue_max_segments
ffffffff81403a70 r __ksymtab_blk_queue_merge_bvec
ffffffff81403a80 r __ksymtab_blk_queue_physical_block_size
ffffffff81403a90 r __ksymtab_blk_queue_prep_rq
ffffffff81403aa0 r __ksymtab_blk_queue_resize_tags
ffffffff81403ab0 r __ksymtab_blk_queue_segment_boundary
ffffffff81403ac0 r __ksymtab_blk_queue_softirq_done
ffffffff81403ad0 r __ksymtab_blk_queue_stack_limits
ffffffff81403ae0 r __ksymtab_blk_queue_start_tag
ffffffff81403af0 r __ksymtab_blk_queue_unprep_rq
ffffffff81403b00 r __ksymtab_blk_queue_update_dma_alignment
ffffffff81403b10 r __ksymtab_blk_queue_update_dma_pad
ffffffff81403b20 r __ksymtab_blk_recount_segments
ffffffff81403b30 r __ksymtab_blk_register_region
ffffffff81403b40 r __ksymtab_blk_requeue_request
ffffffff81403b50 r __ksymtab_blk_rq_init
ffffffff81403b60 r __ksymtab_blk_rq_map_kern
ffffffff81403b70 r __ksymtab_blk_rq_map_sg
ffffffff81403b80 r __ksymtab_blk_rq_map_user
ffffffff81403b90 r __ksymtab_blk_rq_map_user_iov
ffffffff81403ba0 r __ksymtab_blk_rq_unmap_user
ffffffff81403bb0 r __ksymtab_blk_run_queue
ffffffff81403bc0 r __ksymtab_blk_run_queue_async
ffffffff81403bd0 r __ksymtab_blk_set_default_limits
ffffffff81403be0 r __ksymtab_blk_set_stacking_limits
ffffffff81403bf0 r __ksymtab_blk_stack_limits
ffffffff81403c00 r __ksymtab_blk_start_plug
ffffffff81403c10 r __ksymtab_blk_start_queue
ffffffff81403c20 r __ksymtab_blk_start_request
ffffffff81403c30 r __ksymtab_blk_stop_queue
ffffffff81403c40 r __ksymtab_blk_sync_queue
ffffffff81403c50 r __ksymtab_blk_unregister_region
ffffffff81403c60 r __ksymtab_blk_verify_command
ffffffff81403c70 r __ksymtab_blkdev_fsync
ffffffff81403c80 r __ksymtab_blkdev_get
ffffffff81403c90 r __ksymtab_blkdev_get_by_dev
ffffffff81403ca0 r __ksymtab_blkdev_get_by_path
ffffffff81403cb0 r __ksymtab_blkdev_issue_discard
ffffffff81403cc0 r __ksymtab_blkdev_issue_flush
ffffffff81403cd0 r __ksymtab_blkdev_issue_zeroout
ffffffff81403ce0 r __ksymtab_blkdev_put
ffffffff81403cf0 r __ksymtab_block_all_signals
ffffffff81403d00 r __ksymtab_block_commit_write
ffffffff81403d10 r __ksymtab_block_invalidatepage
ffffffff81403d20 r __ksymtab_block_is_partially_uptodate
ffffffff81403d30 r __ksymtab_block_page_mkwrite
ffffffff81403d40 r __ksymtab_block_read_full_page
ffffffff81403d50 r __ksymtab_block_truncate_page
ffffffff81403d60 r __ksymtab_block_write_begin
ffffffff81403d70 r __ksymtab_block_write_end
ffffffff81403d80 r __ksymtab_block_write_full_page
ffffffff81403d90 r __ksymtab_block_write_full_page_endio
ffffffff81403da0 r __ksymtab_bmap
ffffffff81403db0 r __ksymtab_boot_cpu_data
ffffffff81403dc0 r __ksymtab_boot_option_idle_override
ffffffff81403dd0 r __ksymtab_boot_tvec_bases
ffffffff81403de0 r __ksymtab_brioctl_set
ffffffff81403df0 r __ksymtab_bsearch
ffffffff81403e00 r __ksymtab_buffer_migrate_page
ffffffff81403e10 r __ksymtab_build_ehash_secret
ffffffff81403e20 r __ksymtab_build_skb
ffffffff81403e30 r __ksymtab_cad_pid
ffffffff81403e40 r __ksymtab_call_netdevice_notifiers
ffffffff81403e50 r __ksymtab_call_usermodehelper_exec
ffffffff81403e60 r __ksymtab_call_usermodehelper_freeinfo
ffffffff81403e70 r __ksymtab_call_usermodehelper_setfns
ffffffff81403e80 r __ksymtab_call_usermodehelper_setup
ffffffff81403e90 r __ksymtab_can_do_mlock
ffffffff81403ea0 r __ksymtab_cancel_delayed_work_sync
ffffffff81403eb0 r __ksymtab_cancel_dirty_page
ffffffff81403ec0 r __ksymtab_capable
ffffffff81403ed0 r __ksymtab_cdev_add
ffffffff81403ee0 r __ksymtab_cdev_alloc
ffffffff81403ef0 r __ksymtab_cdev_del
ffffffff81403f00 r __ksymtab_cdev_init
ffffffff81403f10 r __ksymtab_cfb_copyarea
ffffffff81403f20 r __ksymtab_cfb_fillrect
ffffffff81403f30 r __ksymtab_cfb_imageblit
ffffffff81403f40 r __ksymtab_check_disk_change
ffffffff81403f50 r __ksymtab_check_disk_size_change
ffffffff81403f60 r __ksymtab_cleancache_enabled
ffffffff81403f70 r __ksymtab_cleancache_register_ops
ffffffff81403f80 r __ksymtab_clear_bdi_congested
ffffffff81403f90 r __ksymtab_clear_nlink
ffffffff81403fa0 r __ksymtab_clear_page
ffffffff81403fb0 r __ksymtab_clear_page_dirty_for_io
ffffffff81403fc0 r __ksymtab_clear_user
ffffffff81403fd0 r __ksymtab_clock_t_to_jiffies
ffffffff81403fe0 r __ksymtab_clocksource_change_rating
ffffffff81403ff0 r __ksymtab_clocksource_register
ffffffff81404000 r __ksymtab_clocksource_unregister
ffffffff81404010 r __ksymtab_color_table
ffffffff81404020 r __ksymtab_commit_creds
ffffffff81404030 r __ksymtab_compat_ip_getsockopt
ffffffff81404040 r __ksymtab_compat_ip_setsockopt
ffffffff81404050 r __ksymtab_compat_mc_getsockopt
ffffffff81404060 r __ksymtab_compat_mc_setsockopt
ffffffff81404070 r __ksymtab_compat_sock_common_getsockopt
ffffffff81404080 r __ksymtab_compat_sock_common_setsockopt
ffffffff81404090 r __ksymtab_compat_sock_get_timestamp
ffffffff814040a0 r __ksymtab_compat_sock_get_timestampns
ffffffff814040b0 r __ksymtab_compat_tcp_getsockopt
ffffffff814040c0 r __ksymtab_compat_tcp_setsockopt
ffffffff814040d0 r __ksymtab_complete
ffffffff814040e0 r __ksymtab_complete_all
ffffffff814040f0 r __ksymtab_complete_and_exit
ffffffff81404100 r __ksymtab_completion_done
ffffffff81404110 r __ksymtab_con_copy_unimap
ffffffff81404120 r __ksymtab_con_is_bound
ffffffff81404130 r __ksymtab_con_set_default_unimap
ffffffff81404140 r __ksymtab_config_group_find_item
ffffffff81404150 r __ksymtab_config_group_init
ffffffff81404160 r __ksymtab_config_group_init_type_name
ffffffff81404170 r __ksymtab_config_item_get
ffffffff81404180 r __ksymtab_config_item_init
ffffffff81404190 r __ksymtab_config_item_init_type_name
ffffffff814041a0 r __ksymtab_config_item_put
ffffffff814041b0 r __ksymtab_config_item_set_name
ffffffff814041c0 r __ksymtab_configfs_depend_item
ffffffff814041d0 r __ksymtab_configfs_register_subsystem
ffffffff814041e0 r __ksymtab_configfs_undepend_item
ffffffff814041f0 r __ksymtab_configfs_unregister_subsystem
ffffffff81404200 r __ksymtab_congestion_wait
ffffffff81404210 r __ksymtab_console_blank_hook
ffffffff81404220 r __ksymtab_console_blanked
ffffffff81404230 r __ksymtab_console_conditional_schedule
ffffffff81404240 r __ksymtab_console_lock
ffffffff81404250 r __ksymtab_console_set_on_cmdline
ffffffff81404260 r __ksymtab_console_start
ffffffff81404270 r __ksymtab_console_stop
ffffffff81404280 r __ksymtab_console_suspend_enabled
ffffffff81404290 r __ksymtab_console_trylock
ffffffff814042a0 r __ksymtab_console_unlock
ffffffff814042b0 r __ksymtab_consume_skb
ffffffff814042c0 r __ksymtab_cont_write_begin
ffffffff814042d0 r __ksymtab_cookie_check_timestamp
ffffffff814042e0 r __ksymtab_copy_in_user
ffffffff814042f0 r __ksymtab_copy_page
ffffffff81404300 r __ksymtab_copy_strings_kernel
ffffffff81404310 r __ksymtab_copy_user_generic_string
ffffffff81404320 r __ksymtab_copy_user_generic_unrolled
ffffffff81404330 r __ksymtab_cpu_active_mask
ffffffff81404340 r __ksymtab_cpu_all_bits
ffffffff81404350 r __ksymtab_cpu_core_map
ffffffff81404360 r __ksymtab_cpu_down
ffffffff81404370 r __ksymtab_cpu_dr7
ffffffff81404380 r __ksymtab_cpu_info
ffffffff81404390 r __ksymtab_cpu_khz
ffffffff814043a0 r __ksymtab_cpu_number
ffffffff814043b0 r __ksymtab_cpu_online_mask
ffffffff814043c0 r __ksymtab_cpu_possible_mask
ffffffff814043d0 r __ksymtab_cpu_present_mask
ffffffff814043e0 r __ksymtab_cpu_rmap_add
ffffffff814043f0 r __ksymtab_cpu_rmap_update
ffffffff81404400 r __ksymtab_cpu_sibling_map
ffffffff81404410 r __ksymtab_cpufreq_get
ffffffff81404420 r __ksymtab_cpufreq_get_policy
ffffffff81404430 r __ksymtab_cpufreq_global_kobject
ffffffff81404440 r __ksymtab_cpufreq_quick_get
ffffffff81404450 r __ksymtab_cpufreq_quick_get_max
ffffffff81404460 r __ksymtab_cpufreq_register_notifier
ffffffff81404470 r __ksymtab_cpufreq_unregister_notifier
ffffffff81404480 r __ksymtab_cpufreq_update_policy
ffffffff81404490 r __ksymtab_cpumask_next_and
ffffffff814044a0 r __ksymtab_crc16
ffffffff814044b0 r __ksymtab_crc16_table
ffffffff814044c0 r __ksymtab_crc32_be
ffffffff814044d0 r __ksymtab_crc32_le
ffffffff814044e0 r __ksymtab_create_empty_buffers
ffffffff814044f0 r __ksymtab_create_proc_entry
ffffffff81404500 r __ksymtab_csum_ipv6_magic
ffffffff81404510 r __ksymtab_csum_partial
ffffffff81404520 r __ksymtab_csum_partial_copy_from_user
ffffffff81404530 r __ksymtab_csum_partial_copy_fromiovecend
ffffffff81404540 r __ksymtab_csum_partial_copy_nocheck
ffffffff81404550 r __ksymtab_csum_partial_copy_to_user
ffffffff81404560 r __ksymtab_current_fs_time
ffffffff81404570 r __ksymtab_current_kernel_time
ffffffff81404580 r __ksymtab_current_task
ffffffff81404590 r __ksymtab_current_umask
ffffffff814045a0 r __ksymtab_d_add_ci
ffffffff814045b0 r __ksymtab_d_alloc
ffffffff814045c0 r __ksymtab_d_alloc_name
ffffffff814045d0 r __ksymtab_d_alloc_pseudo
ffffffff814045e0 r __ksymtab_d_clear_need_lookup
ffffffff814045f0 r __ksymtab_d_delete
ffffffff81404600 r __ksymtab_d_drop
ffffffff81404610 r __ksymtab_d_find_alias
ffffffff81404620 r __ksymtab_d_find_any_alias
ffffffff81404630 r __ksymtab_d_genocide
ffffffff81404640 r __ksymtab_d_instantiate
ffffffff81404650 r __ksymtab_d_instantiate_unique
ffffffff81404660 r __ksymtab_d_invalidate
ffffffff81404670 r __ksymtab_d_lookup
ffffffff81404680 r __ksymtab_d_make_root
ffffffff81404690 r __ksymtab_d_move
ffffffff814046a0 r __ksymtab_d_obtain_alias
ffffffff814046b0 r __ksymtab_d_path
ffffffff814046c0 r __ksymtab_d_prune_aliases
ffffffff814046d0 r __ksymtab_d_rehash
ffffffff814046e0 r __ksymtab_d_set_d_op
ffffffff814046f0 r __ksymtab_d_splice_alias
ffffffff81404700 r __ksymtab_d_validate
ffffffff81404710 r __ksymtab_daemonize
ffffffff81404720 r __ksymtab_datagram_poll
ffffffff81404730 r __ksymtab_dcache_dir_close
ffffffff81404740 r __ksymtab_dcache_dir_lseek
ffffffff81404750 r __ksymtab_dcache_dir_open
ffffffff81404760 r __ksymtab_dcache_readdir
ffffffff81404770 r __ksymtab_deactivate_locked_super
ffffffff81404780 r __ksymtab_deactivate_super
ffffffff81404790 r __ksymtab_dec_zone_page_state
ffffffff814047a0 r __ksymtab_default_blu
ffffffff814047b0 r __ksymtab_default_file_splice_read
ffffffff814047c0 r __ksymtab_default_grn
ffffffff814047d0 r __ksymtab_default_llseek
ffffffff814047e0 r __ksymtab_default_red
ffffffff814047f0 r __ksymtab_default_wake_function
ffffffff81404800 r __ksymtab_del_gendisk
ffffffff81404810 r __ksymtab_del_timer
ffffffff81404820 r __ksymtab_del_timer_sync
ffffffff81404830 r __ksymtab_delete_from_page_cache
ffffffff81404840 r __ksymtab_dentry_open
ffffffff81404850 r __ksymtab_dentry_path_raw
ffffffff81404860 r __ksymtab_dentry_unhash
ffffffff81404870 r __ksymtab_dentry_update_name_case
ffffffff81404880 r __ksymtab_dev_activate
ffffffff81404890 r __ksymtab_dev_add_pack
ffffffff814048a0 r __ksymtab_dev_addr_add
ffffffff814048b0 r __ksymtab_dev_addr_add_multiple
ffffffff814048c0 r __ksymtab_dev_addr_del
ffffffff814048d0 r __ksymtab_dev_addr_del_multiple
ffffffff814048e0 r __ksymtab_dev_addr_flush
ffffffff814048f0 r __ksymtab_dev_addr_init
ffffffff81404900 r __ksymtab_dev_alert
ffffffff81404910 r __ksymtab_dev_alloc_name
ffffffff81404920 r __ksymtab_dev_alloc_skb
ffffffff81404930 r __ksymtab_dev_base_lock
ffffffff81404940 r __ksymtab_dev_change_flags
ffffffff81404950 r __ksymtab_dev_close
ffffffff81404960 r __ksymtab_dev_crit
ffffffff81404970 r __ksymtab_dev_deactivate
ffffffff81404980 r __ksymtab_dev_disable_lro
ffffffff81404990 r __ksymtab_dev_driver_string
ffffffff814049a0 r __ksymtab_dev_emerg
ffffffff814049b0 r __ksymtab_dev_err
ffffffff814049c0 r __ksymtab_dev_get_by_flags_rcu
ffffffff814049d0 r __ksymtab_dev_get_by_index
ffffffff814049e0 r __ksymtab_dev_get_by_index_rcu
ffffffff814049f0 r __ksymtab_dev_get_by_name
ffffffff81404a00 r __ksymtab_dev_get_by_name_rcu
ffffffff81404a10 r __ksymtab_dev_get_drvdata
ffffffff81404a20 r __ksymtab_dev_get_flags
ffffffff81404a30 r __ksymtab_dev_get_stats
ffffffff81404a40 r __ksymtab_dev_getbyhwaddr_rcu
ffffffff81404a50 r __ksymtab_dev_getfirstbyhwtype
ffffffff81404a60 r __ksymtab_dev_graft_qdisc
ffffffff81404a70 r __ksymtab_dev_gro_receive
ffffffff81404a80 r __ksymtab_dev_kfree_skb_any
ffffffff81404a90 r __ksymtab_dev_kfree_skb_irq
ffffffff81404aa0 r __ksymtab_dev_load
ffffffff81404ab0 r __ksymtab_dev_mc_add
ffffffff81404ac0 r __ksymtab_dev_mc_add_global
ffffffff81404ad0 r __ksymtab_dev_mc_del
ffffffff81404ae0 r __ksymtab_dev_mc_del_global
ffffffff81404af0 r __ksymtab_dev_mc_flush
ffffffff81404b00 r __ksymtab_dev_mc_init
ffffffff81404b10 r __ksymtab_dev_mc_sync
ffffffff81404b20 r __ksymtab_dev_mc_unsync
ffffffff81404b30 r __ksymtab_dev_notice
ffffffff81404b40 r __ksymtab_dev_open
ffffffff81404b50 r __ksymtab_dev_printk
ffffffff81404b60 r __ksymtab_dev_queue_xmit
ffffffff81404b70 r __ksymtab_dev_remove_pack
ffffffff81404b80 r __ksymtab_dev_set_allmulti
ffffffff81404b90 r __ksymtab_dev_set_drvdata
ffffffff81404ba0 r __ksymtab_dev_set_group
ffffffff81404bb0 r __ksymtab_dev_set_mac_address
ffffffff81404bc0 r __ksymtab_dev_set_mtu
ffffffff81404bd0 r __ksymtab_dev_set_promiscuity
ffffffff81404be0 r __ksymtab_dev_trans_start
ffffffff81404bf0 r __ksymtab_dev_uc_add
ffffffff81404c00 r __ksymtab_dev_uc_del
ffffffff81404c10 r __ksymtab_dev_uc_flush
ffffffff81404c20 r __ksymtab_dev_uc_init
ffffffff81404c30 r __ksymtab_dev_uc_sync
ffffffff81404c40 r __ksymtab_dev_uc_unsync
ffffffff81404c50 r __ksymtab_dev_valid_name
ffffffff81404c60 r __ksymtab_dev_warn
ffffffff81404c70 r __ksymtab_devm_free_irq
ffffffff81404c80 r __ksymtab_devm_ioport_map
ffffffff81404c90 r __ksymtab_devm_ioport_unmap
ffffffff81404ca0 r __ksymtab_devm_ioremap
ffffffff81404cb0 r __ksymtab_devm_ioremap_nocache
ffffffff81404cc0 r __ksymtab_devm_iounmap
ffffffff81404cd0 r __ksymtab_devm_request_and_ioremap
ffffffff81404ce0 r __ksymtab_devm_request_threaded_irq
ffffffff81404cf0 r __ksymtab_dget_parent
ffffffff81404d00 r __ksymtab_directly_mappable_cdev_bdi
ffffffff81404d10 r __ksymtab_disable_irq
ffffffff81404d20 r __ksymtab_disable_irq_nosync
ffffffff81404d30 r __ksymtab_disallow_signal
ffffffff81404d40 r __ksymtab_disk_stack_limits
ffffffff81404d50 r __ksymtab_dlci_ioctl_set
ffffffff81404d60 r __ksymtab_dma_ops
ffffffff81404d70 r __ksymtab_dma_pool_alloc
ffffffff81404d80 r __ksymtab_dma_pool_create
ffffffff81404d90 r __ksymtab_dma_pool_destroy
ffffffff81404da0 r __ksymtab_dma_pool_free
ffffffff81404db0 r __ksymtab_dma_set_mask
ffffffff81404dc0 r __ksymtab_dma_spin_lock
ffffffff81404dd0 r __ksymtab_dma_supported
ffffffff81404de0 r __ksymtab_dmam_alloc_coherent
ffffffff81404df0 r __ksymtab_dmam_alloc_noncoherent
ffffffff81404e00 r __ksymtab_dmam_free_coherent
ffffffff81404e10 r __ksymtab_dmam_free_noncoherent
ffffffff81404e20 r __ksymtab_dmam_pool_create
ffffffff81404e30 r __ksymtab_dmam_pool_destroy
ffffffff81404e40 r __ksymtab_dmi_check_system
ffffffff81404e50 r __ksymtab_dmi_find_device
ffffffff81404e60 r __ksymtab_dmi_first_match
ffffffff81404e70 r __ksymtab_dmi_get_date
ffffffff81404e80 r __ksymtab_dmi_get_system_info
ffffffff81404e90 r __ksymtab_dmi_name_in_vendors
ffffffff81404ea0 r __ksymtab_do_SAK
ffffffff81404eb0 r __ksymtab_do_blank_screen
ffffffff81404ec0 r __ksymtab_do_gettimeofday
ffffffff81404ed0 r __ksymtab_do_mmap
ffffffff81404ee0 r __ksymtab_do_munmap
ffffffff81404ef0 r __ksymtab_do_settimeofday
ffffffff81404f00 r __ksymtab_do_sync_read
ffffffff81404f10 r __ksymtab_do_sync_write
ffffffff81404f20 r __ksymtab_do_unblank_screen
ffffffff81404f30 r __ksymtab_down
ffffffff81404f40 r __ksymtab_down_interruptible
ffffffff81404f50 r __ksymtab_down_killable
ffffffff81404f60 r __ksymtab_down_read
ffffffff81404f70 r __ksymtab_down_read_trylock
ffffffff81404f80 r __ksymtab_down_timeout
ffffffff81404f90 r __ksymtab_down_trylock
ffffffff81404fa0 r __ksymtab_down_write
ffffffff81404fb0 r __ksymtab_down_write_trylock
ffffffff81404fc0 r __ksymtab_downgrade_write
ffffffff81404fd0 r __ksymtab_dput
ffffffff81404fe0 r __ksymtab_dql_completed
ffffffff81404ff0 r __ksymtab_dql_init
ffffffff81405000 r __ksymtab_dql_reset
ffffffff81405010 r __ksymtab_drop_nlink
ffffffff81405020 r __ksymtab_drop_super
ffffffff81405030 r __ksymtab_dst_alloc
ffffffff81405040 r __ksymtab_dst_cow_metrics_generic
ffffffff81405050 r __ksymtab_dst_destroy
ffffffff81405060 r __ksymtab_dst_discard
ffffffff81405070 r __ksymtab_dst_release
ffffffff81405080 r __ksymtab_dump_fpu
ffffffff81405090 r __ksymtab_dump_seek
ffffffff814050a0 r __ksymtab_dump_stack
ffffffff814050b0 r __ksymtab_dump_trace
ffffffff814050c0 r __ksymtab_dump_write
ffffffff814050d0 r __ksymtab_ec_burst_disable
ffffffff814050e0 r __ksymtab_ec_burst_enable
ffffffff814050f0 r __ksymtab_ec_get_handle
ffffffff81405100 r __ksymtab_ec_read
ffffffff81405110 r __ksymtab_ec_transaction
ffffffff81405120 r __ksymtab_ec_write
ffffffff81405130 r __ksymtab_elevator_change
ffffffff81405140 r __ksymtab_elevator_exit
ffffffff81405150 r __ksymtab_elevator_init
ffffffff81405160 r __ksymtab_elv_abort_queue
ffffffff81405170 r __ksymtab_elv_add_request
ffffffff81405180 r __ksymtab_elv_dispatch_add_tail
ffffffff81405190 r __ksymtab_elv_dispatch_sort
ffffffff814051a0 r __ksymtab_elv_rb_add
ffffffff814051b0 r __ksymtab_elv_rb_del
ffffffff814051c0 r __ksymtab_elv_rb_find
ffffffff814051d0 r __ksymtab_elv_rb_former_request
ffffffff814051e0 r __ksymtab_elv_rb_latter_request
ffffffff814051f0 r __ksymtab_elv_register_queue
ffffffff81405200 r __ksymtab_elv_rq_merge_ok
ffffffff81405210 r __ksymtab_elv_unregister_queue
ffffffff81405220 r __ksymtab_empty_aops
ffffffff81405230 r __ksymtab_empty_zero_page
ffffffff81405240 r __ksymtab_enable_irq
ffffffff81405250 r __ksymtab_end_buffer_async_write
ffffffff81405260 r __ksymtab_end_buffer_read_sync
ffffffff81405270 r __ksymtab_end_buffer_write_sync
ffffffff81405280 r __ksymtab_end_page_writeback
ffffffff81405290 r __ksymtab_end_writeback
ffffffff814052a0 r __ksymtab_eth_change_mtu
ffffffff814052b0 r __ksymtab_eth_header
ffffffff814052c0 r __ksymtab_eth_header_cache
ffffffff814052d0 r __ksymtab_eth_header_cache_update
ffffffff814052e0 r __ksymtab_eth_header_parse
ffffffff814052f0 r __ksymtab_eth_mac_addr
ffffffff81405300 r __ksymtab_eth_rebuild_header
ffffffff81405310 r __ksymtab_eth_type_trans
ffffffff81405320 r __ksymtab_eth_validate_addr
ffffffff81405330 r __ksymtab_ether_setup
ffffffff81405340 r __ksymtab_ethtool_op_get_link
ffffffff81405350 r __ksymtab_f_setown
ffffffff81405360 r __ksymtab_fail_migrate_page
ffffffff81405370 r __ksymtab_fasync_helper
ffffffff81405380 r __ksymtab_fb_add_videomode
ffffffff81405390 r __ksymtab_fb_alloc_cmap
ffffffff814053a0 r __ksymtab_fb_blank
ffffffff814053b0 r __ksymtab_fb_class
ffffffff814053c0 r __ksymtab_fb_copy_cmap
ffffffff814053d0 r __ksymtab_fb_dealloc_cmap
ffffffff814053e0 r __ksymtab_fb_default_cmap
ffffffff814053f0 r __ksymtab_fb_destroy_modedb
ffffffff81405400 r __ksymtab_fb_edid_add_monspecs
ffffffff81405410 r __ksymtab_fb_edid_to_monspecs
ffffffff81405420 r __ksymtab_fb_find_best_display
ffffffff81405430 r __ksymtab_fb_find_best_mode
ffffffff81405440 r __ksymtab_fb_find_mode
ffffffff81405450 r __ksymtab_fb_find_mode_cvt
ffffffff81405460 r __ksymtab_fb_find_nearest_mode
ffffffff81405470 r __ksymtab_fb_firmware_edid
ffffffff81405480 r __ksymtab_fb_get_buffer_offset
ffffffff81405490 r __ksymtab_fb_get_color_depth
ffffffff814054a0 r __ksymtab_fb_get_mode
ffffffff814054b0 r __ksymtab_fb_get_options
ffffffff814054c0 r __ksymtab_fb_invert_cmaps
ffffffff814054d0 r __ksymtab_fb_is_primary_device
ffffffff814054e0 r __ksymtab_fb_match_mode
ffffffff814054f0 r __ksymtab_fb_mode_is_equal
ffffffff81405500 r __ksymtab_fb_pad_aligned_buffer
ffffffff81405510 r __ksymtab_fb_pad_unaligned_buffer
ffffffff81405520 r __ksymtab_fb_pan_display
ffffffff81405530 r __ksymtab_fb_parse_edid
ffffffff81405540 r __ksymtab_fb_register_client
ffffffff81405550 r __ksymtab_fb_set_cmap
ffffffff81405560 r __ksymtab_fb_set_suspend
ffffffff81405570 r __ksymtab_fb_set_var
ffffffff81405580 r __ksymtab_fb_show_logo
ffffffff81405590 r __ksymtab_fb_unregister_client
ffffffff814055a0 r __ksymtab_fb_validate_mode
ffffffff814055b0 r __ksymtab_fb_var_to_videomode
ffffffff814055c0 r __ksymtab_fb_videomode_to_modelist
ffffffff814055d0 r __ksymtab_fb_videomode_to_var
ffffffff814055e0 r __ksymtab_fbcon_set_bitops
ffffffff814055f0 r __ksymtab_fd_install
ffffffff81405600 r __ksymtab_fg_console
ffffffff81405610 r __ksymtab_fget
ffffffff81405620 r __ksymtab_fget_raw
ffffffff81405630 r __ksymtab_fiemap_check_flags
ffffffff81405640 r __ksymtab_fiemap_fill_next_extent
ffffffff81405650 r __ksymtab_file_open_root
ffffffff81405660 r __ksymtab_file_remove_suid
ffffffff81405670 r __ksymtab_file_update_time
ffffffff81405680 r __ksymtab_filemap_fault
ffffffff81405690 r __ksymtab_filemap_fdatawait
ffffffff814056a0 r __ksymtab_filemap_fdatawait_range
ffffffff814056b0 r __ksymtab_filemap_fdatawrite
ffffffff814056c0 r __ksymtab_filemap_fdatawrite_range
ffffffff814056d0 r __ksymtab_filemap_flush
ffffffff814056e0 r __ksymtab_filemap_write_and_wait
ffffffff814056f0 r __ksymtab_filemap_write_and_wait_range
ffffffff81405700 r __ksymtab_files_lglock_global_lock
ffffffff81405710 r __ksymtab_files_lglock_global_lock_online
ffffffff81405720 r __ksymtab_files_lglock_global_unlock
ffffffff81405730 r __ksymtab_files_lglock_global_unlock_online
ffffffff81405740 r __ksymtab_files_lglock_local_lock
ffffffff81405750 r __ksymtab_files_lglock_local_lock_cpu
ffffffff81405760 r __ksymtab_files_lglock_local_unlock
ffffffff81405770 r __ksymtab_files_lglock_local_unlock_cpu
ffffffff81405780 r __ksymtab_files_lglock_lock_init
ffffffff81405790 r __ksymtab_filp_close
ffffffff814057a0 r __ksymtab_filp_open
ffffffff814057b0 r __ksymtab_find_first_bit
ffffffff814057c0 r __ksymtab_find_first_zero_bit
ffffffff814057d0 r __ksymtab_find_font
ffffffff814057e0 r __ksymtab_find_get_page
ffffffff814057f0 r __ksymtab_find_get_pages_contig
ffffffff81405800 r __ksymtab_find_get_pages_tag
ffffffff81405810 r __ksymtab_find_inode_number
ffffffff81405820 r __ksymtab_find_last_bit
ffffffff81405830 r __ksymtab_find_lock_page
ffffffff81405840 r __ksymtab_find_next_bit
ffffffff81405850 r __ksymtab_find_next_zero_bit
ffffffff81405860 r __ksymtab_find_or_create_page
ffffffff81405870 r __ksymtab_find_vma
ffffffff81405880 r __ksymtab_finish_wait
ffffffff81405890 r __ksymtab_first_ec
ffffffff814058a0 r __ksymtab_flex_array_alloc
ffffffff814058b0 r __ksymtab_flex_array_clear
ffffffff814058c0 r __ksymtab_flex_array_free
ffffffff814058d0 r __ksymtab_flex_array_free_parts
ffffffff814058e0 r __ksymtab_flex_array_get
ffffffff814058f0 r __ksymtab_flex_array_get_ptr
ffffffff81405900 r __ksymtab_flex_array_prealloc
ffffffff81405910 r __ksymtab_flex_array_put
ffffffff81405920 r __ksymtab_flex_array_shrink
ffffffff81405930 r __ksymtab_flock_lock_file_wait
ffffffff81405940 r __ksymtab_flow_cache_genid
ffffffff81405950 r __ksymtab_flow_cache_lookup
ffffffff81405960 r __ksymtab_flush_delayed_work
ffffffff81405970 r __ksymtab_flush_delayed_work_sync
ffffffff81405980 r __ksymtab_flush_old_exec
ffffffff81405990 r __ksymtab_flush_scheduled_work
ffffffff814059a0 r __ksymtab_flush_signals
ffffffff814059b0 r __ksymtab_follow_down
ffffffff814059c0 r __ksymtab_follow_down_one
ffffffff814059d0 r __ksymtab_follow_pfn
ffffffff814059e0 r __ksymtab_follow_up
ffffffff814059f0 r __ksymtab_font_vga_8x16
ffffffff81405a00 r __ksymtab_force_sig
ffffffff81405a10 r __ksymtab_fput
ffffffff81405a20 r __ksymtab_framebuffer_alloc
ffffffff81405a30 r __ksymtab_framebuffer_release
ffffffff81405a40 r __ksymtab_free_anon_bdev
ffffffff81405a50 r __ksymtab_free_buffer_head
ffffffff81405a60 r __ksymtab_free_dma
ffffffff81405a70 r __ksymtab_free_inode_nonrcu
ffffffff81405a80 r __ksymtab_free_irq
ffffffff81405a90 r __ksymtab_free_irq_cpu_rmap
ffffffff81405aa0 r __ksymtab_free_netdev
ffffffff81405ab0 r __ksymtab_free_pages
ffffffff81405ac0 r __ksymtab_free_pages_exact
ffffffff81405ad0 r __ksymtab_free_task
ffffffff81405ae0 r __ksymtab_free_xenballooned_pages
ffffffff81405af0 r __ksymtab_freeze_bdev
ffffffff81405b00 r __ksymtab_freeze_super
ffffffff81405b10 r __ksymtab_freezing_slow_path
ffffffff81405b20 r __ksymtab_fs_overflowgid
ffffffff81405b30 r __ksymtab_fs_overflowuid
ffffffff81405b40 r __ksymtab_fsync_bdev
ffffffff81405b50 r __ksymtab_full_name_hash
ffffffff81405b60 r __ksymtab_gen_estimator_active
ffffffff81405b70 r __ksymtab_gen_kill_estimator
ffffffff81405b80 r __ksymtab_gen_new_estimator
ffffffff81405b90 r __ksymtab_gen_pool_add_virt
ffffffff81405ba0 r __ksymtab_gen_pool_alloc
ffffffff81405bb0 r __ksymtab_gen_pool_create
ffffffff81405bc0 r __ksymtab_gen_pool_destroy
ffffffff81405bd0 r __ksymtab_gen_pool_for_each_chunk
ffffffff81405be0 r __ksymtab_gen_pool_free
ffffffff81405bf0 r __ksymtab_gen_pool_virt_to_phys
ffffffff81405c00 r __ksymtab_gen_replace_estimator
ffffffff81405c10 r __ksymtab_generate_random_uuid
ffffffff81405c20 r __ksymtab_generic_block_bmap
ffffffff81405c30 r __ksymtab_generic_block_fiemap
ffffffff81405c40 r __ksymtab_generic_check_addressable
ffffffff81405c50 r __ksymtab_generic_cont_expand_simple
ffffffff81405c60 r __ksymtab_generic_delete_inode
ffffffff81405c70 r __ksymtab_generic_error_remove_page
ffffffff81405c80 r __ksymtab_generic_file_aio_read
ffffffff81405c90 r __ksymtab_generic_file_aio_write
ffffffff81405ca0 r __ksymtab_generic_file_buffered_write
ffffffff81405cb0 r __ksymtab_generic_file_direct_write
ffffffff81405cc0 r __ksymtab_generic_file_fsync
ffffffff81405cd0 r __ksymtab_generic_file_llseek
ffffffff81405ce0 r __ksymtab_generic_file_llseek_size
ffffffff81405cf0 r __ksymtab_generic_file_mmap
ffffffff81405d00 r __ksymtab_generic_file_open
ffffffff81405d10 r __ksymtab_generic_file_readonly_mmap
ffffffff81405d20 r __ksymtab_generic_file_splice_read
ffffffff81405d30 r __ksymtab_generic_file_splice_write
ffffffff81405d40 r __ksymtab_generic_fillattr
ffffffff81405d50 r __ksymtab_generic_getxattr
ffffffff81405d60 r __ksymtab_generic_listxattr
ffffffff81405d70 r __ksymtab_generic_make_request
ffffffff81405d80 r __ksymtab_generic_permission
ffffffff81405d90 r __ksymtab_generic_pipe_buf_confirm
ffffffff81405da0 r __ksymtab_generic_pipe_buf_get
ffffffff81405db0 r __ksymtab_generic_pipe_buf_map
ffffffff81405dc0 r __ksymtab_generic_pipe_buf_release
ffffffff81405dd0 r __ksymtab_generic_pipe_buf_steal
ffffffff81405de0 r __ksymtab_generic_pipe_buf_unmap
ffffffff81405df0 r __ksymtab_generic_read_dir
ffffffff81405e00 r __ksymtab_generic_readlink
ffffffff81405e10 r __ksymtab_generic_removexattr
ffffffff81405e20 r __ksymtab_generic_ro_fops
ffffffff81405e30 r __ksymtab_generic_segment_checks
ffffffff81405e40 r __ksymtab_generic_setlease
ffffffff81405e50 r __ksymtab_generic_setxattr
ffffffff81405e60 r __ksymtab_generic_show_options
ffffffff81405e70 r __ksymtab_generic_shutdown_super
ffffffff81405e80 r __ksymtab_generic_splice_sendpage
ffffffff81405e90 r __ksymtab_generic_write_checks
ffffffff81405ea0 r __ksymtab_generic_write_end
ffffffff81405eb0 r __ksymtab_generic_write_sync
ffffffff81405ec0 r __ksymtab_generic_writepages
ffffffff81405ed0 r __ksymtab_genl_lock
ffffffff81405ee0 r __ksymtab_genl_notify
ffffffff81405ef0 r __ksymtab_genl_register_family
ffffffff81405f00 r __ksymtab_genl_register_family_with_ops
ffffffff81405f10 r __ksymtab_genl_register_mc_group
ffffffff81405f20 r __ksymtab_genl_register_ops
ffffffff81405f30 r __ksymtab_genl_unlock
ffffffff81405f40 r __ksymtab_genl_unregister_family
ffffffff81405f50 r __ksymtab_genl_unregister_mc_group
ffffffff81405f60 r __ksymtab_genl_unregister_ops
ffffffff81405f70 r __ksymtab_genlmsg_multicast_allns
ffffffff81405f80 r __ksymtab_genlmsg_put
ffffffff81405f90 r __ksymtab_get_anon_bdev
ffffffff81405fa0 r __ksymtab_get_default_font
ffffffff81405fb0 r __ksymtab_get_disk
ffffffff81405fc0 r __ksymtab_get_fs_type
ffffffff81405fd0 r __ksymtab_get_gendisk
ffffffff81405fe0 r __ksymtab_get_ibs_caps
ffffffff81405ff0 r __ksymtab_get_io_context
ffffffff81406000 r __ksymtab_get_next_ino
ffffffff81406010 r __ksymtab_get_option
ffffffff81406020 r __ksymtab_get_options
ffffffff81406030 r __ksymtab_get_random_bytes
ffffffff81406040 r __ksymtab_get_seconds
ffffffff81406050 r __ksymtab_get_super
ffffffff81406060 r __ksymtab_get_super_thawed
ffffffff81406070 r __ksymtab_get_task_io_context
ffffffff81406080 r __ksymtab_get_unmapped_area
ffffffff81406090 r __ksymtab_get_unused_fd
ffffffff814060a0 r __ksymtab_get_user_pages
ffffffff814060b0 r __ksymtab_get_write_access
ffffffff814060c0 r __ksymtab_get_zeroed_page
ffffffff814060d0 r __ksymtab_getname
ffffffff814060e0 r __ksymtab_getnstimeofday
ffffffff814060f0 r __ksymtab_getrawmonotonic
ffffffff81406100 r __ksymtab_give_up_console
ffffffff81406110 r __ksymtab_global_cursor_default
ffffffff81406120 r __ksymtab_gnet_stats_copy_app
ffffffff81406130 r __ksymtab_gnet_stats_copy_basic
ffffffff81406140 r __ksymtab_gnet_stats_copy_queue
ffffffff81406150 r __ksymtab_gnet_stats_copy_rate_est
ffffffff81406160 r __ksymtab_gnet_stats_finish_copy
ffffffff81406170 r __ksymtab_gnet_stats_start_copy
ffffffff81406180 r __ksymtab_gnet_stats_start_copy_compat
ffffffff81406190 r __ksymtab_grab_cache_page_nowait
ffffffff814061a0 r __ksymtab_grab_cache_page_write_begin
ffffffff814061b0 r __ksymtab_groups_alloc
ffffffff814061c0 r __ksymtab_groups_free
ffffffff814061d0 r __ksymtab_half_md4_transform
ffffffff814061e0 r __ksymtab_handle_edge_irq
ffffffff814061f0 r __ksymtab_have_submounts
ffffffff81406200 r __ksymtab_hex2bin
ffffffff81406210 r __ksymtab_hex_asc
ffffffff81406220 r __ksymtab_hex_dump_to_buffer
ffffffff81406230 r __ksymtab_hex_to_bin
ffffffff81406240 r __ksymtab_high_memory
ffffffff81406250 r __ksymtab_ht_create_irq
ffffffff81406260 r __ksymtab_ht_destroy_irq
ffffffff81406270 r __ksymtab_i8042_check_port_owner
ffffffff81406280 r __ksymtab_i8042_command
ffffffff81406290 r __ksymtab_i8042_install_filter
ffffffff814062a0 r __ksymtab_i8042_lock_chip
ffffffff814062b0 r __ksymtab_i8042_remove_filter
ffffffff814062c0 r __ksymtab_i8042_unlock_chip
ffffffff814062d0 r __ksymtab_i8253_lock
ffffffff814062e0 r __ksymtab_icmp_err_convert
ffffffff814062f0 r __ksymtab_icmp_send
ffffffff81406300 r __ksymtab_icq_get_changed
ffffffff81406310 r __ksymtab_ida_destroy
ffffffff81406320 r __ksymtab_ida_get_new
ffffffff81406330 r __ksymtab_ida_get_new_above
ffffffff81406340 r __ksymtab_ida_init
ffffffff81406350 r __ksymtab_ida_pre_get
ffffffff81406360 r __ksymtab_ida_remove
ffffffff81406370 r __ksymtab_ida_simple_get
ffffffff81406380 r __ksymtab_ida_simple_remove
ffffffff81406390 r __ksymtab_idr_destroy
ffffffff814063a0 r __ksymtab_idr_find
ffffffff814063b0 r __ksymtab_idr_for_each
ffffffff814063c0 r __ksymtab_idr_get_new
ffffffff814063d0 r __ksymtab_idr_get_new_above
ffffffff814063e0 r __ksymtab_idr_get_next
ffffffff814063f0 r __ksymtab_idr_init
ffffffff81406400 r __ksymtab_idr_pre_get
ffffffff81406410 r __ksymtab_idr_remove
ffffffff81406420 r __ksymtab_idr_remove_all
ffffffff81406430 r __ksymtab_idr_replace
ffffffff81406440 r __ksymtab_ifla_policy
ffffffff81406450 r __ksymtab_iget5_locked
ffffffff81406460 r __ksymtab_iget_failed
ffffffff81406470 r __ksymtab_iget_locked
ffffffff81406480 r __ksymtab_igrab
ffffffff81406490 r __ksymtab_ihold
ffffffff814064a0 r __ksymtab_ilookup
ffffffff814064b0 r __ksymtab_ilookup5
ffffffff814064c0 r __ksymtab_ilookup5_nowait
ffffffff814064d0 r __ksymtab_in4_pton
ffffffff814064e0 r __ksymtab_in6_pton
ffffffff814064f0 r __ksymtab_in_aton
ffffffff81406500 r __ksymtab_in_dev_finish_destroy
ffffffff81406510 r __ksymtab_in_egroup_p
ffffffff81406520 r __ksymtab_in_group_p
ffffffff81406530 r __ksymtab_in_lock_functions
ffffffff81406540 r __ksymtab_inc_nlink
ffffffff81406550 r __ksymtab_inc_zone_page_state
ffffffff81406560 r __ksymtab_inet_accept
ffffffff81406570 r __ksymtab_inet_add_protocol
ffffffff81406580 r __ksymtab_inet_addr_type
ffffffff81406590 r __ksymtab_inet_bind
ffffffff814065a0 r __ksymtab_inet_confirm_addr
ffffffff814065b0 r __ksymtab_inet_csk_accept
ffffffff814065c0 r __ksymtab_inet_csk_clear_xmit_timers
ffffffff814065d0 r __ksymtab_inet_csk_delete_keepalive_timer
ffffffff814065e0 r __ksymtab_inet_csk_destroy_sock
ffffffff814065f0 r __ksymtab_inet_csk_init_xmit_timers
ffffffff81406600 r __ksymtab_inet_csk_reset_keepalive_timer
ffffffff81406610 r __ksymtab_inet_csk_timer_bug_msg
ffffffff81406620 r __ksymtab_inet_del_protocol
ffffffff81406630 r __ksymtab_inet_dev_addr_type
ffffffff81406640 r __ksymtab_inet_dgram_connect
ffffffff81406650 r __ksymtab_inet_dgram_ops
ffffffff81406660 r __ksymtab_inet_ehash_secret
ffffffff81406670 r __ksymtab_inet_frag_destroy
ffffffff81406680 r __ksymtab_inet_frag_evictor
ffffffff81406690 r __ksymtab_inet_frag_find
ffffffff814066a0 r __ksymtab_inet_frag_kill
ffffffff814066b0 r __ksymtab_inet_frags_exit_net
ffffffff814066c0 r __ksymtab_inet_frags_fini
ffffffff814066d0 r __ksymtab_inet_frags_init
ffffffff814066e0 r __ksymtab_inet_frags_init_net
ffffffff814066f0 r __ksymtab_inet_get_local_port_range
ffffffff81406700 r __ksymtab_inet_getname
ffffffff81406710 r __ksymtab_inet_ioctl
ffffffff81406720 r __ksymtab_inet_listen
ffffffff81406730 r __ksymtab_inet_peer_xrlim_allow
ffffffff81406740 r __ksymtab_inet_proto_csum_replace4
ffffffff81406750 r __ksymtab_inet_put_port
ffffffff81406760 r __ksymtab_inet_recvmsg
ffffffff81406770 r __ksymtab_inet_register_protosw
ffffffff81406780 r __ksymtab_inet_release
ffffffff81406790 r __ksymtab_inet_select_addr
ffffffff814067a0 r __ksymtab_inet_sendmsg
ffffffff814067b0 r __ksymtab_inet_sendpage
ffffffff814067c0 r __ksymtab_inet_shutdown
ffffffff814067d0 r __ksymtab_inet_sk_rebuild_header
ffffffff814067e0 r __ksymtab_inet_sock_destruct
ffffffff814067f0 r __ksymtab_inet_stream_connect
ffffffff81406800 r __ksymtab_inet_stream_ops
ffffffff81406810 r __ksymtab_inet_twsk_deschedule
ffffffff81406820 r __ksymtab_inet_unregister_protosw
ffffffff81406830 r __ksymtab_inetdev_by_index
ffffffff81406840 r __ksymtab_inetpeer_invalidate_tree
ffffffff81406850 r __ksymtab_init_buffer
ffffffff81406860 r __ksymtab_init_net
ffffffff81406870 r __ksymtab_init_special_inode
ffffffff81406880 r __ksymtab_init_task
ffffffff81406890 r __ksymtab_init_timer_deferrable_key
ffffffff814068a0 r __ksymtab_init_timer_key
ffffffff814068b0 r __ksymtab_inode_add_bytes
ffffffff814068c0 r __ksymtab_inode_change_ok
ffffffff814068d0 r __ksymtab_inode_dio_done
ffffffff814068e0 r __ksymtab_inode_dio_wait
ffffffff814068f0 r __ksymtab_inode_get_bytes
ffffffff81406900 r __ksymtab_inode_init_always
ffffffff81406910 r __ksymtab_inode_init_once
ffffffff81406920 r __ksymtab_inode_init_owner
ffffffff81406930 r __ksymtab_inode_needs_sync
ffffffff81406940 r __ksymtab_inode_newsize_ok
ffffffff81406950 r __ksymtab_inode_owner_or_capable
ffffffff81406960 r __ksymtab_inode_permission
ffffffff81406970 r __ksymtab_inode_set_bytes
ffffffff81406980 r __ksymtab_inode_sub_bytes
ffffffff81406990 r __ksymtab_inode_wait
ffffffff814069a0 r __ksymtab_input_alloc_absinfo
ffffffff814069b0 r __ksymtab_input_allocate_device
ffffffff814069c0 r __ksymtab_input_close_device
ffffffff814069d0 r __ksymtab_input_event
ffffffff814069e0 r __ksymtab_input_flush_device
ffffffff814069f0 r __ksymtab_input_free_device
ffffffff81406a00 r __ksymtab_input_get_keycode
ffffffff81406a10 r __ksymtab_input_grab_device
ffffffff81406a20 r __ksymtab_input_handler_for_each_handle
ffffffff81406a30 r __ksymtab_input_inject_event
ffffffff81406a40 r __ksymtab_input_mt_destroy_slots
ffffffff81406a50 r __ksymtab_input_mt_init_slots
ffffffff81406a60 r __ksymtab_input_mt_report_finger_count
ffffffff81406a70 r __ksymtab_input_mt_report_pointer_emulation
ffffffff81406a80 r __ksymtab_input_mt_report_slot_state
ffffffff81406a90 r __ksymtab_input_open_device
ffffffff81406aa0 r __ksymtab_input_register_device
ffffffff81406ab0 r __ksymtab_input_register_handle
ffffffff81406ac0 r __ksymtab_input_register_handler
ffffffff81406ad0 r __ksymtab_input_release_device
ffffffff81406ae0 r __ksymtab_input_reset_device
ffffffff81406af0 r __ksymtab_input_scancode_to_scalar
ffffffff81406b00 r __ksymtab_input_set_abs_params
ffffffff81406b10 r __ksymtab_input_set_capability
ffffffff81406b20 r __ksymtab_input_set_keycode
ffffffff81406b30 r __ksymtab_input_unregister_device
ffffffff81406b40 r __ksymtab_input_unregister_handle
ffffffff81406b50 r __ksymtab_input_unregister_handler
ffffffff81406b60 r __ksymtab_insert_inode_locked
ffffffff81406b70 r __ksymtab_insert_inode_locked4
ffffffff81406b80 r __ksymtab_install_exec_creds
ffffffff81406b90 r __ksymtab_int_sqrt
ffffffff81406ba0 r __ksymtab_int_to_scsilun
ffffffff81406bb0 r __ksymtab_interruptible_sleep_on
ffffffff81406bc0 r __ksymtab_interruptible_sleep_on_timeout
ffffffff81406bd0 r __ksymtab_invalidate_bdev
ffffffff81406be0 r __ksymtab_invalidate_inode_buffers
ffffffff81406bf0 r __ksymtab_invalidate_mapping_pages
ffffffff81406c00 r __ksymtab_invalidate_partition
ffffffff81406c10 r __ksymtab_io_schedule
ffffffff81406c20 r __ksymtab_ioc_cgroup_changed
ffffffff81406c30 r __ksymtab_ioc_lookup_icq
ffffffff81406c40 r __ksymtab_ioctl_by_bdev
ffffffff81406c50 r __ksymtab_iomem_resource
ffffffff81406c60 r __ksymtab_iommu_area_alloc
ffffffff81406c70 r __ksymtab_ioport_map
ffffffff81406c80 r __ksymtab_ioport_resource
ffffffff81406c90 r __ksymtab_ioport_unmap
ffffffff81406ca0 r __ksymtab_ioread16
ffffffff81406cb0 r __ksymtab_ioread16_rep
ffffffff81406cc0 r __ksymtab_ioread16be
ffffffff81406cd0 r __ksymtab_ioread32
ffffffff81406ce0 r __ksymtab_ioread32_rep
ffffffff81406cf0 r __ksymtab_ioread32be
ffffffff81406d00 r __ksymtab_ioread8
ffffffff81406d10 r __ksymtab_ioread8_rep
ffffffff81406d20 r __ksymtab_ioremap_cache
ffffffff81406d30 r __ksymtab_ioremap_nocache
ffffffff81406d40 r __ksymtab_ioremap_prot
ffffffff81406d50 r __ksymtab_ioremap_wc
ffffffff81406d60 r __ksymtab_iounmap
ffffffff81406d70 r __ksymtab_iov_iter_advance
ffffffff81406d80 r __ksymtab_iov_iter_copy_from_user
ffffffff81406d90 r __ksymtab_iov_iter_copy_from_user_atomic
ffffffff81406da0 r __ksymtab_iov_iter_fault_in_readable
ffffffff81406db0 r __ksymtab_iov_iter_single_seg_count
ffffffff81406dc0 r __ksymtab_iov_shorten
ffffffff81406dd0 r __ksymtab_iowrite16
ffffffff81406de0 r __ksymtab_iowrite16_rep
ffffffff81406df0 r __ksymtab_iowrite16be
ffffffff81406e00 r __ksymtab_iowrite32
ffffffff81406e10 r __ksymtab_iowrite32_rep
ffffffff81406e20 r __ksymtab_iowrite32be
ffffffff81406e30 r __ksymtab_iowrite8
ffffffff81406e40 r __ksymtab_iowrite8_rep
ffffffff81406e50 r __ksymtab_ip4_datagram_connect
ffffffff81406e60 r __ksymtab_ip_check_defrag
ffffffff81406e70 r __ksymtab_ip_cmsg_recv
ffffffff81406e80 r __ksymtab_ip_compute_csum
ffffffff81406e90 r __ksymtab_ip_defrag
ffffffff81406ea0 r __ksymtab_ip_fragment
ffffffff81406eb0 r __ksymtab_ip_generic_getfrag
ffffffff81406ec0 r __ksymtab_ip_getsockopt
ffffffff81406ed0 r __ksymtab_ip_mc_dec_group
ffffffff81406ee0 r __ksymtab_ip_mc_inc_group
ffffffff81406ef0 r __ksymtab_ip_mc_join_group
ffffffff81406f00 r __ksymtab_ip_mc_rejoin_groups
ffffffff81406f10 r __ksymtab_ip_options_compile
ffffffff81406f20 r __ksymtab_ip_options_rcv_srr
ffffffff81406f30 r __ksymtab_ip_queue_xmit
ffffffff81406f40 r __ksymtab_ip_route_input_common
ffffffff81406f50 r __ksymtab_ip_send_check
ffffffff81406f60 r __ksymtab_ip_setsockopt
ffffffff81406f70 r __ksymtab_iput
ffffffff81406f80 r __ksymtab_ipv4_config
ffffffff81406f90 r __ksymtab_ipv4_specific
ffffffff81406fa0 r __ksymtab_ipv6_ext_hdr
ffffffff81406fb0 r __ksymtab_ipv6_skip_exthdr
ffffffff81406fc0 r __ksymtab_irq_cpu_rmap_add
ffffffff81406fd0 r __ksymtab_irq_fpu_usable
ffffffff81406fe0 r __ksymtab_irq_regs
ffffffff81406ff0 r __ksymtab_irq_set_chip
ffffffff81407000 r __ksymtab_irq_set_chip_data
ffffffff81407010 r __ksymtab_irq_set_handler_data
ffffffff81407020 r __ksymtab_irq_set_irq_type
ffffffff81407030 r __ksymtab_irq_set_irq_wake
ffffffff81407040 r __ksymtab_irq_stat
ffffffff81407050 r __ksymtab_irq_to_desc
ffffffff81407060 r __ksymtab_is_bad_inode
ffffffff81407070 r __ksymtab_is_container_init
ffffffff81407080 r __ksymtab_isa_dma_bridge_buggy
ffffffff81407090 r __ksymtab_iter_div_u64_rem
ffffffff814070a0 r __ksymtab_iterate_supers_type
ffffffff814070b0 r __ksymtab_iunique
ffffffff814070c0 r __ksymtab_jiffies
ffffffff814070d0 r __ksymtab_jiffies_64
ffffffff814070e0 r __ksymtab_jiffies_64_to_clock_t
ffffffff814070f0 r __ksymtab_jiffies_to_clock_t
ffffffff81407100 r __ksymtab_jiffies_to_msecs
ffffffff81407110 r __ksymtab_jiffies_to_timespec
ffffffff81407120 r __ksymtab_jiffies_to_timeval
ffffffff81407130 r __ksymtab_jiffies_to_usecs
ffffffff81407140 r __ksymtab_kacpi_hotplug_wq
ffffffff81407150 r __ksymtab_kasprintf
ffffffff81407160 r __ksymtab_kblockd_schedule_delayed_work
ffffffff81407170 r __ksymtab_kblockd_schedule_work
ffffffff81407180 r __ksymtab_kd_mksound
ffffffff81407190 r __ksymtab_kern_path
ffffffff814071a0 r __ksymtab_kern_path_create
ffffffff814071b0 r __ksymtab_kern_unmount
ffffffff814071c0 r __ksymtab_kernel_accept
ffffffff814071d0 r __ksymtab_kernel_bind
ffffffff814071e0 r __ksymtab_kernel_connect
ffffffff814071f0 r __ksymtab_kernel_cpustat
ffffffff81407200 r __ksymtab_kernel_fpu_begin
ffffffff81407210 r __ksymtab_kernel_fpu_end
ffffffff81407220 r __ksymtab_kernel_getpeername
ffffffff81407230 r __ksymtab_kernel_getsockname
ffffffff81407240 r __ksymtab_kernel_getsockopt
ffffffff81407250 r __ksymtab_kernel_listen
ffffffff81407260 r __ksymtab_kernel_read
ffffffff81407270 r __ksymtab_kernel_recvmsg
ffffffff81407280 r __ksymtab_kernel_sendmsg
ffffffff81407290 r __ksymtab_kernel_sendpage
ffffffff814072a0 r __ksymtab_kernel_setsockopt
ffffffff814072b0 r __ksymtab_kernel_sock_ioctl
ffffffff814072c0 r __ksymtab_kernel_sock_shutdown
ffffffff814072d0 r __ksymtab_kernel_stack
ffffffff814072e0 r __ksymtab_kernel_thread
ffffffff814072f0 r __ksymtab_kfree
ffffffff81407300 r __ksymtab_kfree_skb
ffffffff81407310 r __ksymtab_kick_iocb
ffffffff81407320 r __ksymtab_kill_anon_super
ffffffff81407330 r __ksymtab_kill_bdev
ffffffff81407340 r __ksymtab_kill_block_super
ffffffff81407350 r __ksymtab_kill_fasync
ffffffff81407360 r __ksymtab_kill_litter_super
ffffffff81407370 r __ksymtab_kill_pgrp
ffffffff81407380 r __ksymtab_kill_pid
ffffffff81407390 r __ksymtab_km_new_mapping
ffffffff814073a0 r __ksymtab_km_policy_expired
ffffffff814073b0 r __ksymtab_km_policy_notify
ffffffff814073c0 r __ksymtab_km_query
ffffffff814073d0 r __ksymtab_km_report
ffffffff814073e0 r __ksymtab_km_state_expired
ffffffff814073f0 r __ksymtab_km_state_notify
ffffffff81407400 r __ksymtab_kmem_cache_alloc
ffffffff81407410 r __ksymtab_kmem_cache_alloc_node
ffffffff81407420 r __ksymtab_kmem_cache_create
ffffffff81407430 r __ksymtab_kmem_cache_destroy
ffffffff81407440 r __ksymtab_kmem_cache_free
ffffffff81407450 r __ksymtab_kmem_cache_shrink
ffffffff81407460 r __ksymtab_kmem_cache_size
ffffffff81407470 r __ksymtab_kmemdup
ffffffff81407480 r __ksymtab_kobject_add
ffffffff81407490 r __ksymtab_kobject_del
ffffffff814074a0 r __ksymtab_kobject_get
ffffffff814074b0 r __ksymtab_kobject_init
ffffffff814074c0 r __ksymtab_kobject_put
ffffffff814074d0 r __ksymtab_kobject_set_name
ffffffff814074e0 r __ksymtab_krealloc
ffffffff814074f0 r __ksymtab_kset_register
ffffffff81407500 r __ksymtab_kset_unregister
ffffffff81407510 r __ksymtab_ksize
ffffffff81407520 r __ksymtab_kstat
ffffffff81407530 r __ksymtab_kstrdup
ffffffff81407540 r __ksymtab_kstrndup
ffffffff81407550 r __ksymtab_kstrtoint
ffffffff81407560 r __ksymtab_kstrtoint_from_user
ffffffff81407570 r __ksymtab_kstrtol_from_user
ffffffff81407580 r __ksymtab_kstrtoll
ffffffff81407590 r __ksymtab_kstrtoll_from_user
ffffffff814075a0 r __ksymtab_kstrtos16
ffffffff814075b0 r __ksymtab_kstrtos16_from_user
ffffffff814075c0 r __ksymtab_kstrtos8
ffffffff814075d0 r __ksymtab_kstrtos8_from_user
ffffffff814075e0 r __ksymtab_kstrtou16
ffffffff814075f0 r __ksymtab_kstrtou16_from_user
ffffffff81407600 r __ksymtab_kstrtou8
ffffffff81407610 r __ksymtab_kstrtou8_from_user
ffffffff81407620 r __ksymtab_kstrtouint
ffffffff81407630 r __ksymtab_kstrtouint_from_user
ffffffff81407640 r __ksymtab_kstrtoul_from_user
ffffffff81407650 r __ksymtab_kstrtoull
ffffffff81407660 r __ksymtab_kstrtoull_from_user
ffffffff81407670 r __ksymtab_kthread_bind
ffffffff81407680 r __ksymtab_kthread_create_on_node
ffffffff81407690 r __ksymtab_kthread_should_stop
ffffffff814076a0 r __ksymtab_kthread_stop
ffffffff814076b0 r __ksymtab_kvasprintf
ffffffff814076c0 r __ksymtab_kzfree
ffffffff814076d0 r __ksymtab_laptop_mode
ffffffff814076e0 r __ksymtab_lease_get_mtime
ffffffff814076f0 r __ksymtab_lease_modify
ffffffff81407700 r __ksymtab_linkwatch_fire_event
ffffffff81407710 r __ksymtab_list_sort
ffffffff81407720 r __ksymtab_ll_rw_block
ffffffff81407730 r __ksymtab_load_nls
ffffffff81407740 r __ksymtab_load_nls_default
ffffffff81407750 r __ksymtab_local_bh_disable
ffffffff81407760 r __ksymtab_local_bh_enable
ffffffff81407770 r __ksymtab_local_bh_enable_ip
ffffffff81407780 r __ksymtab_lock_fb_info
ffffffff81407790 r __ksymtab_lock_may_read
ffffffff814077a0 r __ksymtab_lock_may_write
ffffffff814077b0 r __ksymtab_lock_rename
ffffffff814077c0 r __ksymtab_lock_sock_fast
ffffffff814077d0 r __ksymtab_lock_sock_nested
ffffffff814077e0 r __ksymtab_lock_super
ffffffff814077f0 r __ksymtab_locks_copy_lock
ffffffff81407800 r __ksymtab_locks_delete_block
ffffffff81407810 r __ksymtab_locks_free_lock
ffffffff81407820 r __ksymtab_locks_init_lock
ffffffff81407830 r __ksymtab_locks_mandatory_area
ffffffff81407840 r __ksymtab_locks_remove_posix
ffffffff81407850 r __ksymtab_lookup_bdev
ffffffff81407860 r __ksymtab_lookup_one_len
ffffffff81407870 r __ksymtab_loops_per_jiffy
ffffffff81407880 r __ksymtab_lro_flush_all
ffffffff81407890 r __ksymtab_lro_flush_pkt
ffffffff814078a0 r __ksymtab_lro_receive_frags
ffffffff814078b0 r __ksymtab_lro_receive_skb
ffffffff814078c0 r __ksymtab_mac_pton
ffffffff814078d0 r __ksymtab_machine_to_phys_mapping
ffffffff814078e0 r __ksymtab_machine_to_phys_nr
ffffffff814078f0 r __ksymtab_make_bad_inode
ffffffff81407900 r __ksymtab_malloc_sizes
ffffffff81407910 r __ksymtab_mangle_path
ffffffff81407920 r __ksymtab_mapping_tagged
ffffffff81407930 r __ksymtab_mark_buffer_async_write
ffffffff81407940 r __ksymtab_mark_buffer_dirty
ffffffff81407950 r __ksymtab_mark_buffer_dirty_inode
ffffffff81407960 r __ksymtab_mark_page_accessed
ffffffff81407970 r __ksymtab_match_hex
ffffffff81407980 r __ksymtab_match_int
ffffffff81407990 r __ksymtab_match_octal
ffffffff814079a0 r __ksymtab_match_strdup
ffffffff814079b0 r __ksymtab_match_strlcpy
ffffffff814079c0 r __ksymtab_match_token
ffffffff814079d0 r __ksymtab_may_umount
ffffffff814079e0 r __ksymtab_may_umount_tree
ffffffff814079f0 r __ksymtab_md5_transform
ffffffff81407a00 r __ksymtab_mem_section
ffffffff81407a10 r __ksymtab_memcg_socket_limit_enabled
ffffffff81407a20 r __ksymtab_memchr
ffffffff81407a30 r __ksymtab_memchr_inv
ffffffff81407a40 r __ksymtab_memcmp
ffffffff81407a50 r __ksymtab_memcpy
ffffffff81407a60 r __ksymtab_memcpy_fromiovec
ffffffff81407a70 r __ksymtab_memcpy_fromiovecend
ffffffff81407a80 r __ksymtab_memcpy_toiovec
ffffffff81407a90 r __ksymtab_memcpy_toiovecend
ffffffff81407aa0 r __ksymtab_memdup_user
ffffffff81407ab0 r __ksymtab_memmove
ffffffff81407ac0 r __ksymtab_memory_read_from_buffer
ffffffff81407ad0 r __ksymtab_memparse
ffffffff81407ae0 r __ksymtab_mempool_alloc
ffffffff81407af0 r __ksymtab_mempool_alloc_pages
ffffffff81407b00 r __ksymtab_mempool_alloc_slab
ffffffff81407b10 r __ksymtab_mempool_create
ffffffff81407b20 r __ksymtab_mempool_create_node
ffffffff81407b30 r __ksymtab_mempool_destroy
ffffffff81407b40 r __ksymtab_mempool_free
ffffffff81407b50 r __ksymtab_mempool_free_pages
ffffffff81407b60 r __ksymtab_mempool_free_slab
ffffffff81407b70 r __ksymtab_mempool_kfree
ffffffff81407b80 r __ksymtab_mempool_kmalloc
ffffffff81407b90 r __ksymtab_mempool_resize
ffffffff81407ba0 r __ksymtab_memscan
ffffffff81407bb0 r __ksymtab_memset
ffffffff81407bc0 r __ksymtab_migrate_page
ffffffff81407bd0 r __ksymtab_misc_deregister
ffffffff81407be0 r __ksymtab_misc_register
ffffffff81407bf0 r __ksymtab_mktime
ffffffff81407c00 r __ksymtab_mnt_drop_write_file
ffffffff81407c10 r __ksymtab_mnt_pin
ffffffff81407c20 r __ksymtab_mnt_set_expiry
ffffffff81407c30 r __ksymtab_mnt_unpin
ffffffff81407c40 r __ksymtab_mntget
ffffffff81407c50 r __ksymtab_mntput
ffffffff81407c60 r __ksymtab_mod_timer
ffffffff81407c70 r __ksymtab_mod_timer_pending
ffffffff81407c80 r __ksymtab_mod_timer_pinned
ffffffff81407c90 r __ksymtab_mod_zone_page_state
ffffffff81407ca0 r __ksymtab_module_put
ffffffff81407cb0 r __ksymtab_module_refcount
ffffffff81407cc0 r __ksymtab_mount_bdev
ffffffff81407cd0 r __ksymtab_mount_nodev
ffffffff81407ce0 r __ksymtab_mount_ns
ffffffff81407cf0 r __ksymtab_mount_pseudo
ffffffff81407d00 r __ksymtab_mount_single
ffffffff81407d10 r __ksymtab_mount_subtree
ffffffff81407d20 r __ksymtab_movable_zone
ffffffff81407d30 r __ksymtab_mpage_readpage
ffffffff81407d40 r __ksymtab_mpage_readpages
ffffffff81407d50 r __ksymtab_mpage_writepage
ffffffff81407d60 r __ksymtab_mpage_writepages
ffffffff81407d70 r __ksymtab_msecs_to_jiffies
ffffffff81407d80 r __ksymtab_msleep
ffffffff81407d90 r __ksymtab_msleep_interruptible
ffffffff81407da0 r __ksymtab_msrs_alloc
ffffffff81407db0 r __ksymtab_msrs_free
ffffffff81407dc0 r __ksymtab_mtrr_add
ffffffff81407dd0 r __ksymtab_mtrr_del
ffffffff81407de0 r __ksymtab_mutex_lock
ffffffff81407df0 r __ksymtab_mutex_lock_interruptible
ffffffff81407e00 r __ksymtab_mutex_lock_killable
ffffffff81407e10 r __ksymtab_mutex_trylock
ffffffff81407e20 r __ksymtab_mutex_unlock
ffffffff81407e30 r __ksymtab_n_tty_compat_ioctl_helper
ffffffff81407e40 r __ksymtab_n_tty_ioctl_helper
ffffffff81407e50 r __ksymtab_names_cachep
ffffffff81407e60 r __ksymtab_napi_complete
ffffffff81407e70 r __ksymtab_napi_frags_finish
ffffffff81407e80 r __ksymtab_napi_frags_skb
ffffffff81407e90 r __ksymtab_napi_get_frags
ffffffff81407ea0 r __ksymtab_napi_gro_flush
ffffffff81407eb0 r __ksymtab_napi_gro_frags
ffffffff81407ec0 r __ksymtab_napi_gro_receive
ffffffff81407ed0 r __ksymtab_napi_skb_finish
ffffffff81407ee0 r __ksymtab_native_io_delay
ffffffff81407ef0 r __ksymtab_native_rdmsr_safe_regs
ffffffff81407f00 r __ksymtab_native_read_tsc
ffffffff81407f10 r __ksymtab_native_wrmsr_safe_regs
ffffffff81407f20 r __ksymtab_neigh_changeaddr
ffffffff81407f30 r __ksymtab_neigh_compat_output
ffffffff81407f40 r __ksymtab_neigh_connected_output
ffffffff81407f50 r __ksymtab_neigh_create
ffffffff81407f60 r __ksymtab_neigh_destroy
ffffffff81407f70 r __ksymtab_neigh_direct_output
ffffffff81407f80 r __ksymtab_neigh_event_ns
ffffffff81407f90 r __ksymtab_neigh_for_each
ffffffff81407fa0 r __ksymtab_neigh_ifdown
ffffffff81407fb0 r __ksymtab_neigh_lookup
ffffffff81407fc0 r __ksymtab_neigh_lookup_nodev
ffffffff81407fd0 r __ksymtab_neigh_parms_alloc
ffffffff81407fe0 r __ksymtab_neigh_parms_release
ffffffff81407ff0 r __ksymtab_neigh_rand_reach_time
ffffffff81408000 r __ksymtab_neigh_resolve_output
ffffffff81408010 r __ksymtab_neigh_seq_next
ffffffff81408020 r __ksymtab_neigh_seq_start
ffffffff81408030 r __ksymtab_neigh_seq_stop
ffffffff81408040 r __ksymtab_neigh_sysctl_register
ffffffff81408050 r __ksymtab_neigh_sysctl_unregister
ffffffff81408060 r __ksymtab_neigh_table_clear
ffffffff81408070 r __ksymtab_neigh_table_init
ffffffff81408080 r __ksymtab_neigh_table_init_no_netlink
ffffffff81408090 r __ksymtab_neigh_update
ffffffff814080a0 r __ksymtab_net_disable_timestamp
ffffffff814080b0 r __ksymtab_net_enable_timestamp
ffffffff814080c0 r __ksymtab_net_msg_warn
ffffffff814080d0 r __ksymtab_net_ratelimit
ffffffff814080e0 r __ksymtab_netdev_alert
ffffffff814080f0 r __ksymtab_netdev_bonding_change
ffffffff81408100 r __ksymtab_netdev_boot_setup_check
ffffffff81408110 r __ksymtab_netdev_change_features
ffffffff81408120 r __ksymtab_netdev_class_create_file
ffffffff81408130 r __ksymtab_netdev_class_remove_file
ffffffff81408140 r __ksymtab_netdev_crit
ffffffff81408150 r __ksymtab_netdev_emerg
ffffffff81408160 r __ksymtab_netdev_err
ffffffff81408170 r __ksymtab_netdev_features_change
ffffffff81408180 r __ksymtab_netdev_increment_features
ffffffff81408190 r __ksymtab_netdev_info
ffffffff814081a0 r __ksymtab_netdev_notice
ffffffff814081b0 r __ksymtab_netdev_printk
ffffffff814081c0 r __ksymtab_netdev_refcnt_read
ffffffff814081d0 r __ksymtab_netdev_rx_csum_fault
ffffffff814081e0 r __ksymtab_netdev_set_bond_master
ffffffff814081f0 r __ksymtab_netdev_set_master
ffffffff81408200 r __ksymtab_netdev_state_change
ffffffff81408210 r __ksymtab_netdev_stats_to_stats64
ffffffff81408220 r __ksymtab_netdev_update_features
ffffffff81408230 r __ksymtab_netdev_warn
ffffffff81408240 r __ksymtab_netif_carrier_off
ffffffff81408250 r __ksymtab_netif_carrier_on
ffffffff81408260 r __ksymtab_netif_device_attach
ffffffff81408270 r __ksymtab_netif_device_detach
ffffffff81408280 r __ksymtab_netif_napi_add
ffffffff81408290 r __ksymtab_netif_napi_del
ffffffff814082a0 r __ksymtab_netif_notify_peers
ffffffff814082b0 r __ksymtab_netif_receive_skb
ffffffff814082c0 r __ksymtab_netif_rx
ffffffff814082d0 r __ksymtab_netif_rx_ni
ffffffff814082e0 r __ksymtab_netif_set_real_num_rx_queues
ffffffff814082f0 r __ksymtab_netif_set_real_num_tx_queues
ffffffff81408300 r __ksymtab_netif_skb_features
ffffffff81408310 r __ksymtab_netif_stacked_transfer_operstate
ffffffff81408320 r __ksymtab_netlink_ack
ffffffff81408330 r __ksymtab_netlink_broadcast
ffffffff81408340 r __ksymtab_netlink_broadcast_filtered
ffffffff81408350 r __ksymtab_netlink_dump_start
ffffffff81408360 r __ksymtab_netlink_kernel_create
ffffffff81408370 r __ksymtab_netlink_kernel_release
ffffffff81408380 r __ksymtab_netlink_rcv_skb
ffffffff81408390 r __ksymtab_netlink_register_notifier
ffffffff814083a0 r __ksymtab_netlink_set_err
ffffffff814083b0 r __ksymtab_netlink_set_nonroot
ffffffff814083c0 r __ksymtab_netlink_unicast
ffffffff814083d0 r __ksymtab_netlink_unregister_notifier
ffffffff814083e0 r __ksymtab_new_inode
ffffffff814083f0 r __ksymtab_nla_append
ffffffff81408400 r __ksymtab_nla_find
ffffffff81408410 r __ksymtab_nla_memcmp
ffffffff81408420 r __ksymtab_nla_memcpy
ffffffff81408430 r __ksymtab_nla_parse
ffffffff81408440 r __ksymtab_nla_policy_len
ffffffff81408450 r __ksymtab_nla_put
ffffffff81408460 r __ksymtab_nla_put_nohdr
ffffffff81408470 r __ksymtab_nla_reserve
ffffffff81408480 r __ksymtab_nla_reserve_nohdr
ffffffff81408490 r __ksymtab_nla_strcmp
ffffffff814084a0 r __ksymtab_nla_strlcpy
ffffffff814084b0 r __ksymtab_nla_validate
ffffffff814084c0 r __ksymtab_nlmsg_notify
ffffffff814084d0 r __ksymtab_no_llseek
ffffffff814084e0 r __ksymtab_no_pci_devices
ffffffff814084f0 r __ksymtab_nobh_truncate_page
ffffffff81408500 r __ksymtab_nobh_write_begin
ffffffff81408510 r __ksymtab_nobh_write_end
ffffffff81408520 r __ksymtab_nobh_writepage
ffffffff81408530 r __ksymtab_node_data
ffffffff81408540 r __ksymtab_node_states
ffffffff81408550 r __ksymtab_node_to_cpumask_map
ffffffff81408560 r __ksymtab_nonseekable_open
ffffffff81408570 r __ksymtab_noop_fsync
ffffffff81408580 r __ksymtab_noop_llseek
ffffffff81408590 r __ksymtab_noop_qdisc
ffffffff814085a0 r __ksymtab_notify_change
ffffffff814085b0 r __ksymtab_nr_cpu_ids
ffffffff814085c0 r __ksymtab_nr_node_ids
ffffffff814085d0 r __ksymtab_nr_online_nodes
ffffffff814085e0 r __ksymtab_ns_capable
ffffffff814085f0 r __ksymtab_ns_to_timespec
ffffffff81408600 r __ksymtab_ns_to_timeval
ffffffff81408610 r __ksymtab_num_physpages
ffffffff81408620 r __ksymtab_num_registered_fb
ffffffff81408630 r __ksymtab_numa_node
ffffffff81408640 r __ksymtab_nvram_check_checksum
ffffffff81408650 r __ksymtab_nvram_read_byte
ffffffff81408660 r __ksymtab_nvram_write_byte
ffffffff81408670 r __ksymtab_on_each_cpu
ffffffff81408680 r __ksymtab_on_each_cpu_cond
ffffffff81408690 r __ksymtab_on_each_cpu_mask
ffffffff814086a0 r __ksymtab_oops_in_progress
ffffffff814086b0 r __ksymtab_open_exec
ffffffff814086c0 r __ksymtab_out_of_line_wait_on_bit
ffffffff814086d0 r __ksymtab_out_of_line_wait_on_bit_lock
ffffffff814086e0 r __ksymtab_overflowgid
ffffffff814086f0 r __ksymtab_overflowuid
ffffffff81408700 r __ksymtab_override_creds
ffffffff81408710 r __ksymtab_page_follow_link_light
ffffffff81408720 r __ksymtab_page_put_link
ffffffff81408730 r __ksymtab_page_readlink
ffffffff81408740 r __ksymtab_page_symlink
ffffffff81408750 r __ksymtab_page_symlink_inode_operations
ffffffff81408760 r __ksymtab_page_zero_new_buffers
ffffffff81408770 r __ksymtab_pagecache_write_begin
ffffffff81408780 r __ksymtab_pagecache_write_end
ffffffff81408790 r __ksymtab_pagevec_lookup
ffffffff814087a0 r __ksymtab_pagevec_lookup_tag
ffffffff814087b0 r __ksymtab_panic
ffffffff814087c0 r __ksymtab_panic_blink
ffffffff814087d0 r __ksymtab_panic_notifier_list
ffffffff814087e0 r __ksymtab_param_array_ops
ffffffff814087f0 r __ksymtab_param_get_bool
ffffffff81408800 r __ksymtab_param_get_byte
ffffffff81408810 r __ksymtab_param_get_charp
ffffffff81408820 r __ksymtab_param_get_int
ffffffff81408830 r __ksymtab_param_get_invbool
ffffffff81408840 r __ksymtab_param_get_long
ffffffff81408850 r __ksymtab_param_get_short
ffffffff81408860 r __ksymtab_param_get_string
ffffffff81408870 r __ksymtab_param_get_uint
ffffffff81408880 r __ksymtab_param_get_ulong
ffffffff81408890 r __ksymtab_param_get_ushort
ffffffff814088a0 r __ksymtab_param_ops_bint
ffffffff814088b0 r __ksymtab_param_ops_bool
ffffffff814088c0 r __ksymtab_param_ops_byte
ffffffff814088d0 r __ksymtab_param_ops_charp
ffffffff814088e0 r __ksymtab_param_ops_int
ffffffff814088f0 r __ksymtab_param_ops_invbool
ffffffff81408900 r __ksymtab_param_ops_long
ffffffff81408910 r __ksymtab_param_ops_short
ffffffff81408920 r __ksymtab_param_ops_string
ffffffff81408930 r __ksymtab_param_ops_uint
ffffffff81408940 r __ksymtab_param_ops_ulong
ffffffff81408950 r __ksymtab_param_ops_ushort
ffffffff81408960 r __ksymtab_param_set_bint
ffffffff81408970 r __ksymtab_param_set_bool
ffffffff81408980 r __ksymtab_param_set_byte
ffffffff81408990 r __ksymtab_param_set_charp
ffffffff814089a0 r __ksymtab_param_set_copystring
ffffffff814089b0 r __ksymtab_param_set_int
ffffffff814089c0 r __ksymtab_param_set_invbool
ffffffff814089d0 r __ksymtab_param_set_long
ffffffff814089e0 r __ksymtab_param_set_short
ffffffff814089f0 r __ksymtab_param_set_uint
ffffffff81408a00 r __ksymtab_param_set_ulong
ffffffff81408a10 r __ksymtab_param_set_ushort
ffffffff81408a20 r __ksymtab_path_get
ffffffff81408a30 r __ksymtab_path_is_under
ffffffff81408a40 r __ksymtab_path_put
ffffffff81408a50 r __ksymtab_pci_add_new_bus
ffffffff81408a60 r __ksymtab_pci_add_resource
ffffffff81408a70 r __ksymtab_pci_add_resource_offset
ffffffff81408a80 r __ksymtab_pci_assign_resource
ffffffff81408a90 r __ksymtab_pci_back_from_sleep
ffffffff81408aa0 r __ksymtab_pci_biosrom_size
ffffffff81408ab0 r __ksymtab_pci_bus_add_devices
ffffffff81408ac0 r __ksymtab_pci_bus_alloc_resource
ffffffff81408ad0 r __ksymtab_pci_bus_assign_resources
ffffffff81408ae0 r __ksymtab_pci_bus_find_capability
ffffffff81408af0 r __ksymtab_pci_bus_read_config_byte
ffffffff81408b00 r __ksymtab_pci_bus_read_config_dword
ffffffff81408b10 r __ksymtab_pci_bus_read_config_word
ffffffff81408b20 r __ksymtab_pci_bus_read_dev_vendor_id
ffffffff81408b30 r __ksymtab_pci_bus_set_ops
ffffffff81408b40 r __ksymtab_pci_bus_size_bridges
ffffffff81408b50 r __ksymtab_pci_bus_type
ffffffff81408b60 r __ksymtab_pci_bus_write_config_byte
ffffffff81408b70 r __ksymtab_pci_bus_write_config_dword
ffffffff81408b80 r __ksymtab_pci_bus_write_config_word
ffffffff81408b90 r __ksymtab_pci_choose_state
ffffffff81408ba0 r __ksymtab_pci_claim_resource
ffffffff81408bb0 r __ksymtab_pci_clear_master
ffffffff81408bc0 r __ksymtab_pci_clear_mwi
ffffffff81408bd0 r __ksymtab_pci_dev_driver
ffffffff81408be0 r __ksymtab_pci_dev_get
ffffffff81408bf0 r __ksymtab_pci_dev_present
ffffffff81408c00 r __ksymtab_pci_dev_put
ffffffff81408c10 r __ksymtab_pci_disable_device
ffffffff81408c20 r __ksymtab_pci_disable_ido
ffffffff81408c30 r __ksymtab_pci_disable_link_state
ffffffff81408c40 r __ksymtab_pci_disable_link_state_locked
ffffffff81408c50 r __ksymtab_pci_disable_ltr
ffffffff81408c60 r __ksymtab_pci_disable_msi
ffffffff81408c70 r __ksymtab_pci_disable_msix
ffffffff81408c80 r __ksymtab_pci_disable_obff
ffffffff81408c90 r __ksymtab_pci_enable_bridges
ffffffff81408ca0 r __ksymtab_pci_enable_device
ffffffff81408cb0 r __ksymtab_pci_enable_device_io
ffffffff81408cc0 r __ksymtab_pci_enable_device_mem
ffffffff81408cd0 r __ksymtab_pci_enable_ido
ffffffff81408ce0 r __ksymtab_pci_enable_ltr
ffffffff81408cf0 r __ksymtab_pci_enable_msi_block
ffffffff81408d00 r __ksymtab_pci_enable_msix
ffffffff81408d10 r __ksymtab_pci_enable_obff
ffffffff81408d20 r __ksymtab_pci_find_bus
ffffffff81408d30 r __ksymtab_pci_find_capability
ffffffff81408d40 r __ksymtab_pci_find_next_bus
ffffffff81408d50 r __ksymtab_pci_find_parent_resource
ffffffff81408d60 r __ksymtab_pci_fixup_cardbus
ffffffff81408d70 r __ksymtab_pci_fixup_device
ffffffff81408d80 r __ksymtab_pci_free_resource_list
ffffffff81408d90 r __ksymtab_pci_get_class
ffffffff81408da0 r __ksymtab_pci_get_device
ffffffff81408db0 r __ksymtab_pci_get_domain_bus_and_slot
ffffffff81408dc0 r __ksymtab_pci_get_slot
ffffffff81408dd0 r __ksymtab_pci_get_subsys
ffffffff81408de0 r __ksymtab_pci_iomap
ffffffff81408df0 r __ksymtab_pci_iounmap
ffffffff81408e00 r __ksymtab_pci_lost_interrupt
ffffffff81408e10 r __ksymtab_pci_ltr_supported
ffffffff81408e20 r __ksymtab_pci_map_biosrom
ffffffff81408e30 r __ksymtab_pci_map_rom
ffffffff81408e40 r __ksymtab_pci_match_id
ffffffff81408e50 r __ksymtab_pci_mem_start
ffffffff81408e60 r __ksymtab_pci_msi_enabled
ffffffff81408e70 r __ksymtab_pci_pci_problems
ffffffff81408e80 r __ksymtab_pci_pme_active
ffffffff81408e90 r __ksymtab_pci_pme_capable
ffffffff81408ea0 r __ksymtab_pci_prepare_to_sleep
ffffffff81408eb0 r __ksymtab_pci_read_vpd
ffffffff81408ec0 r __ksymtab_pci_reenable_device
ffffffff81408ed0 r __ksymtab_pci_release_region
ffffffff81408ee0 r __ksymtab_pci_release_regions
ffffffff81408ef0 r __ksymtab_pci_release_selected_regions
ffffffff81408f00 r __ksymtab_pci_remove_bus
ffffffff81408f10 r __ksymtab_pci_request_region
ffffffff81408f20 r __ksymtab_pci_request_region_exclusive
ffffffff81408f30 r __ksymtab_pci_request_regions
ffffffff81408f40 r __ksymtab_pci_request_regions_exclusive
ffffffff81408f50 r __ksymtab_pci_request_selected_regions
ffffffff81408f60 r __ksymtab_pci_request_selected_regions_exclusive
ffffffff81408f70 r __ksymtab_pci_restore_state
ffffffff81408f80 r __ksymtab_pci_root_buses
ffffffff81408f90 r __ksymtab_pci_save_state
ffffffff81408fa0 r __ksymtab_pci_scan_bridge
ffffffff81408fb0 r __ksymtab_pci_scan_bus
ffffffff81408fc0 r __ksymtab_pci_scan_bus_parented
ffffffff81408fd0 r __ksymtab_pci_scan_root_bus
ffffffff81408fe0 r __ksymtab_pci_scan_single_device
ffffffff81408ff0 r __ksymtab_pci_scan_slot
ffffffff81409000 r __ksymtab_pci_select_bars
ffffffff81409010 r __ksymtab_pci_set_dma_max_seg_size
ffffffff81409020 r __ksymtab_pci_set_dma_seg_boundary
ffffffff81409030 r __ksymtab_pci_set_ltr
ffffffff81409040 r __ksymtab_pci_set_master
ffffffff81409050 r __ksymtab_pci_set_mwi
ffffffff81409060 r __ksymtab_pci_set_power_state
ffffffff81409070 r __ksymtab_pci_setup_cardbus
ffffffff81409080 r __ksymtab_pci_stop_and_remove_behind_bridge
ffffffff81409090 r __ksymtab_pci_stop_and_remove_bus_device
ffffffff814090a0 r __ksymtab_pci_target_state
ffffffff814090b0 r __ksymtab_pci_try_set_mwi
ffffffff814090c0 r __ksymtab_pci_unmap_biosrom
ffffffff814090d0 r __ksymtab_pci_unmap_rom
ffffffff814090e0 r __ksymtab_pci_unregister_driver
ffffffff814090f0 r __ksymtab_pci_vpd_truncate
ffffffff81409100 r __ksymtab_pci_wake_from_d3
ffffffff81409110 r __ksymtab_pci_write_vpd
ffffffff81409120 r __ksymtab_pcibios_align_resource
ffffffff81409130 r __ksymtab_pcibios_bus_to_resource
ffffffff81409140 r __ksymtab_pcibios_resource_to_bus
ffffffff81409150 r __ksymtab_pcie_aspm_enabled
ffffffff81409160 r __ksymtab_pcie_aspm_support_enabled
ffffffff81409170 r __ksymtab_pcie_get_readrq
ffffffff81409180 r __ksymtab_pcie_port_service_register
ffffffff81409190 r __ksymtab_pcie_port_service_unregister
ffffffff814091a0 r __ksymtab_pcie_set_readrq
ffffffff814091b0 r __ksymtab_pcim_enable_device
ffffffff814091c0 r __ksymtab_pcim_iomap
ffffffff814091d0 r __ksymtab_pcim_iomap_regions
ffffffff814091e0 r __ksymtab_pcim_iomap_regions_request_all
ffffffff814091f0 r __ksymtab_pcim_iomap_table
ffffffff81409200 r __ksymtab_pcim_iounmap
ffffffff81409210 r __ksymtab_pcim_iounmap_regions
ffffffff81409220 r __ksymtab_pcim_pin_device
ffffffff81409230 r __ksymtab_pcix_get_max_mmrbc
ffffffff81409240 r __ksymtab_pcix_get_mmrbc
ffffffff81409250 r __ksymtab_pcix_set_mmrbc
ffffffff81409260 r __ksymtab_percpu_counter_batch
ffffffff81409270 r __ksymtab_percpu_counter_compare
ffffffff81409280 r __ksymtab_percpu_counter_destroy
ffffffff81409290 r __ksymtab_percpu_counter_set
ffffffff814092a0 r __ksymtab_pfifo_fast_ops
ffffffff814092b0 r __ksymtab_pid_task
ffffffff814092c0 r __ksymtab_ping_prot
ffffffff814092d0 r __ksymtab_pipe_lock
ffffffff814092e0 r __ksymtab_pipe_to_file
ffffffff814092f0 r __ksymtab_pipe_unlock
ffffffff81409300 r __ksymtab_pm_power_off
ffffffff81409310 r __ksymtab_pm_set_vt_switch
ffffffff81409320 r __ksymtab_pneigh_enqueue
ffffffff81409330 r __ksymtab_pneigh_lookup
ffffffff81409340 r __ksymtab_pnp_activate_dev
ffffffff81409350 r __ksymtab_pnp_device_attach
ffffffff81409360 r __ksymtab_pnp_device_detach
ffffffff81409370 r __ksymtab_pnp_disable_dev
ffffffff81409380 r __ksymtab_pnp_get_resource
ffffffff81409390 r __ksymtab_pnp_is_active
ffffffff814093a0 r __ksymtab_pnp_platform_devices
ffffffff814093b0 r __ksymtab_pnp_possible_config
ffffffff814093c0 r __ksymtab_pnp_range_reserved
ffffffff814093d0 r __ksymtab_pnp_register_card_driver
ffffffff814093e0 r __ksymtab_pnp_register_driver
ffffffff814093f0 r __ksymtab_pnp_release_card_device
ffffffff81409400 r __ksymtab_pnp_request_card_device
ffffffff81409410 r __ksymtab_pnp_start_dev
ffffffff81409420 r __ksymtab_pnp_stop_dev
ffffffff81409430 r __ksymtab_pnp_unregister_card_driver
ffffffff81409440 r __ksymtab_pnp_unregister_driver
ffffffff81409450 r __ksymtab_pnpacpi_protocol
ffffffff81409460 r __ksymtab_poll_freewait
ffffffff81409470 r __ksymtab_poll_initwait
ffffffff81409480 r __ksymtab_poll_schedule_timeout
ffffffff81409490 r __ksymtab_posix_acl_alloc
ffffffff814094a0 r __ksymtab_posix_acl_chmod
ffffffff814094b0 r __ksymtab_posix_acl_create
ffffffff814094c0 r __ksymtab_posix_acl_equiv_mode
ffffffff814094d0 r __ksymtab_posix_acl_from_mode
ffffffff814094e0 r __ksymtab_posix_acl_from_xattr
ffffffff814094f0 r __ksymtab_posix_acl_init
ffffffff81409500 r __ksymtab_posix_acl_to_xattr
ffffffff81409510 r __ksymtab_posix_acl_valid
ffffffff81409520 r __ksymtab_posix_lock_file
ffffffff81409530 r __ksymtab_posix_lock_file_wait
ffffffff81409540 r __ksymtab_posix_test_lock
ffffffff81409550 r __ksymtab_posix_unblock_lock
ffffffff81409560 r __ksymtab_prandom32
ffffffff81409570 r __ksymtab_prepare_binprm
ffffffff81409580 r __ksymtab_prepare_creds
ffffffff81409590 r __ksymtab_prepare_kernel_cred
ffffffff814095a0 r __ksymtab_prepare_to_wait
ffffffff814095b0 r __ksymtab_prepare_to_wait_exclusive
ffffffff814095c0 r __ksymtab_print_hex_dump
ffffffff814095d0 r __ksymtab_print_hex_dump_bytes
ffffffff814095e0 r __ksymtab_printk
ffffffff814095f0 r __ksymtab_printk_timed_ratelimit
ffffffff81409600 r __ksymtab_probe_irq_mask
ffffffff81409610 r __ksymtab_probe_irq_off
ffffffff81409620 r __ksymtab_probe_irq_on
ffffffff81409630 r __ksymtab_proc_create_data
ffffffff81409640 r __ksymtab_proc_dointvec
ffffffff81409650 r __ksymtab_proc_dointvec_jiffies
ffffffff81409660 r __ksymtab_proc_dointvec_minmax
ffffffff81409670 r __ksymtab_proc_dointvec_ms_jiffies
ffffffff81409680 r __ksymtab_proc_dointvec_userhz_jiffies
ffffffff81409690 r __ksymtab_proc_dostring
ffffffff814096a0 r __ksymtab_proc_doulongvec_minmax
ffffffff814096b0 r __ksymtab_proc_doulongvec_ms_jiffies_minmax
ffffffff814096c0 r __ksymtab_proc_mkdir
ffffffff814096d0 r __ksymtab_proc_mkdir_mode
ffffffff814096e0 r __ksymtab_proc_symlink
ffffffff814096f0 r __ksymtab_profile_pc
ffffffff81409700 r __ksymtab_proto_register
ffffffff81409710 r __ksymtab_proto_unregister
ffffffff81409720 r __ksymtab_ps2_begin_command
ffffffff81409730 r __ksymtab_ps2_cmd_aborted
ffffffff81409740 r __ksymtab_ps2_command
ffffffff81409750 r __ksymtab_ps2_drain
ffffffff81409760 r __ksymtab_ps2_end_command
ffffffff81409770 r __ksymtab_ps2_handle_ack
ffffffff81409780 r __ksymtab_ps2_handle_response
ffffffff81409790 r __ksymtab_ps2_init
ffffffff814097a0 r __ksymtab_ps2_is_keyboard_id
ffffffff814097b0 r __ksymtab_ps2_sendbyte
ffffffff814097c0 r __ksymtab_pskb_expand_head
ffffffff814097d0 r __ksymtab_put_cmsg
ffffffff814097e0 r __ksymtab_put_disk
ffffffff814097f0 r __ksymtab_put_io_context
ffffffff81409800 r __ksymtab_put_page
ffffffff81409810 r __ksymtab_put_pages_list
ffffffff81409820 r __ksymtab_put_tty_driver
ffffffff81409830 r __ksymtab_put_unused_fd
ffffffff81409840 r __ksymtab_putname
ffffffff81409850 r __ksymtab_pv_cpu_ops
ffffffff81409860 r __ksymtab_pv_irq_ops
ffffffff81409870 r __ksymtab_pv_lock_ops
ffffffff81409880 r __ksymtab_pv_mmu_ops
ffffffff81409890 r __ksymtab_qdisc_create_dflt
ffffffff814098a0 r __ksymtab_qdisc_destroy
ffffffff814098b0 r __ksymtab_qdisc_reset
ffffffff814098c0 r __ksymtab_radix_tree_delete
ffffffff814098d0 r __ksymtab_radix_tree_gang_lookup
ffffffff814098e0 r __ksymtab_radix_tree_gang_lookup_slot
ffffffff814098f0 r __ksymtab_radix_tree_gang_lookup_tag
ffffffff81409900 r __ksymtab_radix_tree_gang_lookup_tag_slot
ffffffff81409910 r __ksymtab_radix_tree_insert
ffffffff81409920 r __ksymtab_radix_tree_lookup
ffffffff81409930 r __ksymtab_radix_tree_lookup_slot
ffffffff81409940 r __ksymtab_radix_tree_next_chunk
ffffffff81409950 r __ksymtab_radix_tree_next_hole
ffffffff81409960 r __ksymtab_radix_tree_preload
ffffffff81409970 r __ksymtab_radix_tree_prev_hole
ffffffff81409980 r __ksymtab_radix_tree_range_tag_if_tagged
ffffffff81409990 r __ksymtab_radix_tree_tag_clear
ffffffff814099a0 r __ksymtab_radix_tree_tag_get
ffffffff814099b0 r __ksymtab_radix_tree_tag_set
ffffffff814099c0 r __ksymtab_radix_tree_tagged
ffffffff814099d0 r __ksymtab_random32
ffffffff814099e0 r __ksymtab_rb_augment_erase_begin
ffffffff814099f0 r __ksymtab_rb_augment_erase_end
ffffffff81409a00 r __ksymtab_rb_augment_insert
ffffffff81409a10 r __ksymtab_rb_erase
ffffffff81409a20 r __ksymtab_rb_first
ffffffff81409a30 r __ksymtab_rb_insert_color
ffffffff81409a40 r __ksymtab_rb_last
ffffffff81409a50 r __ksymtab_rb_next
ffffffff81409a60 r __ksymtab_rb_prev
ffffffff81409a70 r __ksymtab_rb_replace_node
ffffffff81409a80 r __ksymtab_rdmsr_on_cpu
ffffffff81409a90 r __ksymtab_rdmsr_on_cpus
ffffffff81409aa0 r __ksymtab_rdmsr_safe_on_cpu
ffffffff81409ab0 r __ksymtab_rdmsr_safe_regs_on_cpu
ffffffff81409ac0 r __ksymtab_read_cache_page
ffffffff81409ad0 r __ksymtab_read_cache_page_async
ffffffff81409ae0 r __ksymtab_read_cache_page_gfp
ffffffff81409af0 r __ksymtab_read_cache_pages
ffffffff81409b00 r __ksymtab_read_dev_sector
ffffffff81409b10 r __ksymtab_recalc_sigpending
ffffffff81409b20 r __ksymtab_recalibrate_cpu_khz
ffffffff81409b30 r __ksymtab_reciprocal_value
ffffffff81409b40 r __ksymtab_redirty_page_for_writepage
ffffffff81409b50 r __ksymtab_redraw_screen
ffffffff81409b60 r __ksymtab_register_acpi_notifier
ffffffff81409b70 r __ksymtab_register_blkdev
ffffffff81409b80 r __ksymtab_register_chrdev_region
ffffffff81409b90 r __ksymtab_register_con_driver
ffffffff81409ba0 r __ksymtab_register_console
ffffffff81409bb0 r __ksymtab_register_cpu_notifier
ffffffff81409bc0 r __ksymtab_register_exec_domain
ffffffff81409bd0 r __ksymtab_register_filesystem
ffffffff81409be0 r __ksymtab_register_framebuffer
ffffffff81409bf0 r __ksymtab_register_gifconf
ffffffff81409c00 r __ksymtab_register_inetaddr_notifier
ffffffff81409c10 r __ksymtab_register_module_notifier
ffffffff81409c20 r __ksymtab_register_netdev
ffffffff81409c30 r __ksymtab_register_netdevice
ffffffff81409c40 r __ksymtab_register_netdevice_notifier
ffffffff81409c50 r __ksymtab_register_nls
ffffffff81409c60 r __ksymtab_register_reboot_notifier
ffffffff81409c70 r __ksymtab_register_shrinker
ffffffff81409c80 r __ksymtab_register_sysctl
ffffffff81409c90 r __ksymtab_register_sysctl_paths
ffffffff81409ca0 r __ksymtab_register_sysctl_table
ffffffff81409cb0 r __ksymtab_register_xen_selfballooning
ffffffff81409cc0 r __ksymtab_registered_fb
ffffffff81409cd0 r __ksymtab_release_evntsel_nmi
ffffffff81409ce0 r __ksymtab_release_firmware
ffffffff81409cf0 r __ksymtab_release_pages
ffffffff81409d00 r __ksymtab_release_perfctr_nmi
ffffffff81409d10 r __ksymtab_release_resource
ffffffff81409d20 r __ksymtab_release_sock
ffffffff81409d30 r __ksymtab_remap_pfn_range
ffffffff81409d40 r __ksymtab_remap_vmalloc_range
ffffffff81409d50 r __ksymtab_remove_arg_zero
ffffffff81409d60 r __ksymtab_remove_conflicting_framebuffers
ffffffff81409d70 r __ksymtab_remove_proc_entry
ffffffff81409d80 r __ksymtab_remove_wait_queue
ffffffff81409d90 r __ksymtab_rename_lock
ffffffff81409da0 r __ksymtab_replace_mount_options
ffffffff81409db0 r __ksymtab_request_dma
ffffffff81409dc0 r __ksymtab_request_firmware
ffffffff81409dd0 r __ksymtab_request_firmware_nowait
ffffffff81409de0 r __ksymtab_request_resource
ffffffff81409df0 r __ksymtab_request_threaded_irq
ffffffff81409e00 r __ksymtab_reserve_evntsel_nmi
ffffffff81409e10 r __ksymtab_reserve_perfctr_nmi
ffffffff81409e20 r __ksymtab_reset_devices
ffffffff81409e30 r __ksymtab_revalidate_disk
ffffffff81409e40 r __ksymtab_revert_creds
ffffffff81409e50 r __ksymtab_rps_may_expire_flow
ffffffff81409e60 r __ksymtab_rps_sock_flow_table
ffffffff81409e70 r __ksymtab_rtc_cmos_read
ffffffff81409e80 r __ksymtab_rtc_cmos_write
ffffffff81409e90 r __ksymtab_rtc_lock
ffffffff81409ea0 r __ksymtab_rtc_month_days
ffffffff81409eb0 r __ksymtab_rtc_time_to_tm
ffffffff81409ec0 r __ksymtab_rtc_tm_to_time
ffffffff81409ed0 r __ksymtab_rtc_valid_tm
ffffffff81409ee0 r __ksymtab_rtc_year_days
ffffffff81409ef0 r __ksymtab_rtnetlink_put_metrics
ffffffff81409f00 r __ksymtab_rtnl_configure_link
ffffffff81409f10 r __ksymtab_rtnl_create_link
ffffffff81409f20 r __ksymtab_rtnl_is_locked
ffffffff81409f30 r __ksymtab_rtnl_link_get_net
ffffffff81409f40 r __ksymtab_rtnl_lock
ffffffff81409f50 r __ksymtab_rtnl_notify
ffffffff81409f60 r __ksymtab_rtnl_set_sk_err
ffffffff81409f70 r __ksymtab_rtnl_trylock
ffffffff81409f80 r __ksymtab_rtnl_unicast
ffffffff81409f90 r __ksymtab_rtnl_unlock
ffffffff81409fa0 r __ksymtab_rwsem_down_read_failed
ffffffff81409fb0 r __ksymtab_rwsem_down_write_failed
ffffffff81409fc0 r __ksymtab_rwsem_downgrade_wake
ffffffff81409fd0 r __ksymtab_rwsem_wake
ffffffff81409fe0 r __ksymtab_save_mount_options
ffffffff81409ff0 r __ksymtab_sb_min_blocksize
ffffffff8140a000 r __ksymtab_sb_set_blocksize
ffffffff8140a010 r __ksymtab_schedule
ffffffff8140a020 r __ksymtab_schedule_delayed_work
ffffffff8140a030 r __ksymtab_schedule_delayed_work_on
ffffffff8140a040 r __ksymtab_schedule_timeout
ffffffff8140a050 r __ksymtab_schedule_timeout_interruptible
ffffffff8140a060 r __ksymtab_schedule_timeout_killable
ffffffff8140a070 r __ksymtab_schedule_timeout_uninterruptible
ffffffff8140a080 r __ksymtab_schedule_work
ffffffff8140a090 r __ksymtab_schedule_work_on
ffffffff8140a0a0 r __ksymtab_scm_detach_fds
ffffffff8140a0b0 r __ksymtab_scm_fp_dup
ffffffff8140a0c0 r __ksymtab_scnprintf
ffffffff8140a0d0 r __ksymtab_screen_info
ffffffff8140a0e0 r __ksymtab_scsi_add_device
ffffffff8140a0f0 r __ksymtab_scsi_add_host_with_dma
ffffffff8140a100 r __ksymtab_scsi_adjust_queue_depth
ffffffff8140a110 r __ksymtab_scsi_allocate_command
ffffffff8140a120 r __ksymtab_scsi_bios_ptable
ffffffff8140a130 r __ksymtab_scsi_block_requests
ffffffff8140a140 r __ksymtab_scsi_block_when_processing_errors
ffffffff8140a150 r __ksymtab_scsi_build_sense_buffer
ffffffff8140a160 r __ksymtab_scsi_calculate_bounce_limit
ffffffff8140a170 r __ksymtab_scsi_cmd_blk_ioctl
ffffffff8140a180 r __ksymtab_scsi_cmd_get_serial
ffffffff8140a190 r __ksymtab_scsi_cmd_ioctl
ffffffff8140a1a0 r __ksymtab_scsi_cmd_print_sense_hdr
ffffffff8140a1b0 r __ksymtab_scsi_command_normalize_sense
ffffffff8140a1c0 r __ksymtab_scsi_command_size_tbl
ffffffff8140a1d0 r __ksymtab_scsi_dev_info_add_list
ffffffff8140a1e0 r __ksymtab_scsi_dev_info_list_add_keyed
ffffffff8140a1f0 r __ksymtab_scsi_dev_info_list_del_keyed
ffffffff8140a200 r __ksymtab_scsi_dev_info_remove_list
ffffffff8140a210 r __ksymtab_scsi_device_get
ffffffff8140a220 r __ksymtab_scsi_device_lookup
ffffffff8140a230 r __ksymtab_scsi_device_lookup_by_target
ffffffff8140a240 r __ksymtab_scsi_device_put
ffffffff8140a250 r __ksymtab_scsi_device_quiesce
ffffffff8140a260 r __ksymtab_scsi_device_resume
ffffffff8140a270 r __ksymtab_scsi_device_set_state
ffffffff8140a280 r __ksymtab_scsi_device_type
ffffffff8140a290 r __ksymtab_scsi_dma_map
ffffffff8140a2a0 r __ksymtab_scsi_dma_unmap
ffffffff8140a2b0 r __ksymtab_scsi_eh_finish_cmd
ffffffff8140a2c0 r __ksymtab_scsi_eh_flush_done_q
ffffffff8140a2d0 r __ksymtab_scsi_eh_prep_cmnd
ffffffff8140a2e0 r __ksymtab_scsi_eh_restore_cmnd
ffffffff8140a2f0 r __ksymtab_scsi_execute
ffffffff8140a300 r __ksymtab_scsi_execute_req
ffffffff8140a310 r __ksymtab_scsi_extd_sense_format
ffffffff8140a320 r __ksymtab_scsi_finish_command
ffffffff8140a330 r __ksymtab_scsi_free_command
ffffffff8140a340 r __ksymtab_scsi_free_host_dev
ffffffff8140a350 r __ksymtab_scsi_get_command
ffffffff8140a360 r __ksymtab_scsi_get_device_flags_keyed
ffffffff8140a370 r __ksymtab_scsi_get_host_dev
ffffffff8140a380 r __ksymtab_scsi_get_sense_info_fld
ffffffff8140a390 r __ksymtab_scsi_host_alloc
ffffffff8140a3a0 r __ksymtab_scsi_host_get
ffffffff8140a3b0 r __ksymtab_scsi_host_lookup
ffffffff8140a3c0 r __ksymtab_scsi_host_put
ffffffff8140a3d0 r __ksymtab_scsi_host_set_state
ffffffff8140a3e0 r __ksymtab_scsi_init_io
ffffffff8140a3f0 r __ksymtab_scsi_ioctl
ffffffff8140a400 r __ksymtab_scsi_is_host_device
ffffffff8140a410 r __ksymtab_scsi_is_sdev_device
ffffffff8140a420 r __ksymtab_scsi_is_target_device
ffffffff8140a430 r __ksymtab_scsi_kmap_atomic_sg
ffffffff8140a440 r __ksymtab_scsi_kunmap_atomic_sg
ffffffff8140a450 r __ksymtab_scsi_logging_level
ffffffff8140a460 r __ksymtab_scsi_mode_sense
ffffffff8140a470 r __ksymtab_scsi_nonblockable_ioctl
ffffffff8140a480 r __ksymtab_scsi_normalize_sense
ffffffff8140a490 r __ksymtab_scsi_partsize
ffffffff8140a4a0 r __ksymtab_scsi_prep_fn
ffffffff8140a4b0 r __ksymtab_scsi_prep_return
ffffffff8140a4c0 r __ksymtab_scsi_prep_state_check
ffffffff8140a4d0 r __ksymtab_scsi_print_command
ffffffff8140a4e0 r __ksymtab_scsi_print_result
ffffffff8140a4f0 r __ksymtab_scsi_print_sense
ffffffff8140a500 r __ksymtab_scsi_print_sense_hdr
ffffffff8140a510 r __ksymtab_scsi_print_status
ffffffff8140a520 r __ksymtab_scsi_put_command
ffffffff8140a530 r __ksymtab_scsi_register
ffffffff8140a540 r __ksymtab_scsi_register_driver
ffffffff8140a550 r __ksymtab_scsi_register_interface
ffffffff8140a560 r __ksymtab_scsi_release_buffers
ffffffff8140a570 r __ksymtab_scsi_remove_device
ffffffff8140a580 r __ksymtab_scsi_remove_host
ffffffff8140a590 r __ksymtab_scsi_remove_target
ffffffff8140a5a0 r __ksymtab_scsi_report_bus_reset
ffffffff8140a5b0 r __ksymtab_scsi_report_device_reset
ffffffff8140a5c0 r __ksymtab_scsi_rescan_device
ffffffff8140a5d0 r __ksymtab_scsi_reset_provider
ffffffff8140a5e0 r __ksymtab_scsi_scan_host
ffffffff8140a5f0 r __ksymtab_scsi_scan_target
ffffffff8140a600 r __ksymtab_scsi_sense_desc_find
ffffffff8140a610 r __ksymtab_scsi_sense_key_string
ffffffff8140a620 r __ksymtab_scsi_set_medium_removal
ffffffff8140a630 r __ksymtab_scsi_setup_blk_pc_cmnd
ffffffff8140a640 r __ksymtab_scsi_setup_fs_cmnd
ffffffff8140a650 r __ksymtab_scsi_show_extd_sense
ffffffff8140a660 r __ksymtab_scsi_show_result
ffffffff8140a670 r __ksymtab_scsi_show_sense_hdr
ffffffff8140a680 r __ksymtab_scsi_target_quiesce
ffffffff8140a690 r __ksymtab_scsi_target_resume
ffffffff8140a6a0 r __ksymtab_scsi_test_unit_ready
ffffffff8140a6b0 r __ksymtab_scsi_track_queue_full
ffffffff8140a6c0 r __ksymtab_scsi_unblock_requests
ffffffff8140a6d0 r __ksymtab_scsi_unregister
ffffffff8140a6e0 r __ksymtab_scsi_verify_blk_ioctl
ffffffff8140a6f0 r __ksymtab_scsicam_bios_param
ffffffff8140a700 r __ksymtab_scsilun_to_int
ffffffff8140a710 r __ksymtab_search_binary_handler
ffffffff8140a720 r __ksymtab_secpath_dup
ffffffff8140a730 r __ksymtab_send_remote_softirq
ffffffff8140a740 r __ksymtab_send_sig
ffffffff8140a750 r __ksymtab_send_sig_info
ffffffff8140a760 r __ksymtab_seq_bitmap
ffffffff8140a770 r __ksymtab_seq_bitmap_list
ffffffff8140a780 r __ksymtab_seq_escape
ffffffff8140a790 r __ksymtab_seq_hlist_next
ffffffff8140a7a0 r __ksymtab_seq_hlist_next_rcu
ffffffff8140a7b0 r __ksymtab_seq_hlist_start
ffffffff8140a7c0 r __ksymtab_seq_hlist_start_head
ffffffff8140a7d0 r __ksymtab_seq_hlist_start_head_rcu
ffffffff8140a7e0 r __ksymtab_seq_hlist_start_rcu
ffffffff8140a7f0 r __ksymtab_seq_list_next
ffffffff8140a800 r __ksymtab_seq_list_start
ffffffff8140a810 r __ksymtab_seq_list_start_head
ffffffff8140a820 r __ksymtab_seq_lseek
ffffffff8140a830 r __ksymtab_seq_open
ffffffff8140a840 r __ksymtab_seq_open_private
ffffffff8140a850 r __ksymtab_seq_path
ffffffff8140a860 r __ksymtab_seq_printf
ffffffff8140a870 r __ksymtab_seq_put_decimal_ll
ffffffff8140a880 r __ksymtab_seq_put_decimal_ull
ffffffff8140a890 r __ksymtab_seq_putc
ffffffff8140a8a0 r __ksymtab_seq_puts
ffffffff8140a8b0 r __ksymtab_seq_read
ffffffff8140a8c0 r __ksymtab_seq_release
ffffffff8140a8d0 r __ksymtab_seq_release_private
ffffffff8140a8e0 r __ksymtab_seq_write
ffffffff8140a8f0 r __ksymtab_serio_close
ffffffff8140a900 r __ksymtab_serio_interrupt
ffffffff8140a910 r __ksymtab_serio_open
ffffffff8140a920 r __ksymtab_serio_reconnect
ffffffff8140a930 r __ksymtab_serio_rescan
ffffffff8140a940 r __ksymtab_serio_unregister_child_port
ffffffff8140a950 r __ksymtab_serio_unregister_driver
ffffffff8140a960 r __ksymtab_serio_unregister_port
ffffffff8140a970 r __ksymtab_set_anon_super
ffffffff8140a980 r __ksymtab_set_bdi_congested
ffffffff8140a990 r __ksymtab_set_bh_page
ffffffff8140a9a0 r __ksymtab_set_binfmt
ffffffff8140a9b0 r __ksymtab_set_blocksize
ffffffff8140a9c0 r __ksymtab_set_create_files_as
ffffffff8140a9d0 r __ksymtab_set_current_groups
ffffffff8140a9e0 r __ksymtab_set_device_ro
ffffffff8140a9f0 r __ksymtab_set_disk_ro
ffffffff8140aa00 r __ksymtab_set_freezable
ffffffff8140aa10 r __ksymtab_set_groups
ffffffff8140aa20 r __ksymtab_set_memory_array_uc
ffffffff8140aa30 r __ksymtab_set_memory_array_wb
ffffffff8140aa40 r __ksymtab_set_memory_array_wc
ffffffff8140aa50 r __ksymtab_set_memory_nx
ffffffff8140aa60 r __ksymtab_set_memory_uc
ffffffff8140aa70 r __ksymtab_set_memory_wb
ffffffff8140aa80 r __ksymtab_set_memory_wc
ffffffff8140aa90 r __ksymtab_set_memory_x
ffffffff8140aaa0 r __ksymtab_set_nlink
ffffffff8140aab0 r __ksymtab_set_normalized_timespec
ffffffff8140aac0 r __ksymtab_set_page_dirty
ffffffff8140aad0 r __ksymtab_set_page_dirty_lock
ffffffff8140aae0 r __ksymtab_set_pages_array_uc
ffffffff8140aaf0 r __ksymtab_set_pages_array_wb
ffffffff8140ab00 r __ksymtab_set_pages_array_wc
ffffffff8140ab10 r __ksymtab_set_pages_nx
ffffffff8140ab20 r __ksymtab_set_pages_uc
ffffffff8140ab30 r __ksymtab_set_pages_wb
ffffffff8140ab40 r __ksymtab_set_pages_x
ffffffff8140ab50 r __ksymtab_set_security_override
ffffffff8140ab60 r __ksymtab_set_security_override_from_ctx
ffffffff8140ab70 r __ksymtab_set_user_nice
ffffffff8140ab80 r __ksymtab_setattr_copy
ffffffff8140ab90 r __ksymtab_setup_arg_pages
ffffffff8140aba0 r __ksymtab_setup_max_cpus
ffffffff8140abb0 r __ksymtab_setup_new_exec
ffffffff8140abc0 r __ksymtab_sg_alloc_table
ffffffff8140abd0 r __ksymtab_sg_copy_from_buffer
ffffffff8140abe0 r __ksymtab_sg_copy_to_buffer
ffffffff8140abf0 r __ksymtab_sg_free_table
ffffffff8140ac00 r __ksymtab_sg_init_one
ffffffff8140ac10 r __ksymtab_sg_init_table
ffffffff8140ac20 r __ksymtab_sg_last
ffffffff8140ac30 r __ksymtab_sg_miter_next
ffffffff8140ac40 r __ksymtab_sg_miter_start
ffffffff8140ac50 r __ksymtab_sg_miter_stop
ffffffff8140ac60 r __ksymtab_sg_next
ffffffff8140ac70 r __ksymtab_sget
ffffffff8140ac80 r __ksymtab_sha_transform
ffffffff8140ac90 r __ksymtab_should_remove_suid
ffffffff8140aca0 r __ksymtab_shrink_dcache_parent
ffffffff8140acb0 r __ksymtab_shrink_dcache_sb
ffffffff8140acc0 r __ksymtab_si_meminfo
ffffffff8140acd0 r __ksymtab_sigprocmask
ffffffff8140ace0 r __ksymtab_simple_dir_inode_operations
ffffffff8140acf0 r __ksymtab_simple_dir_operations
ffffffff8140ad00 r __ksymtab_simple_empty
ffffffff8140ad10 r __ksymtab_simple_fill_super
ffffffff8140ad20 r __ksymtab_simple_getattr
ffffffff8140ad30 r __ksymtab_simple_link
ffffffff8140ad40 r __ksymtab_simple_lookup
ffffffff8140ad50 r __ksymtab_simple_open
ffffffff8140ad60 r __ksymtab_simple_pin_fs
ffffffff8140ad70 r __ksymtab_simple_read_from_buffer
ffffffff8140ad80 r __ksymtab_simple_readpage
ffffffff8140ad90 r __ksymtab_simple_release_fs
ffffffff8140ada0 r __ksymtab_simple_rename
ffffffff8140adb0 r __ksymtab_simple_rmdir
ffffffff8140adc0 r __ksymtab_simple_setattr
ffffffff8140add0 r __ksymtab_simple_statfs
ffffffff8140ade0 r __ksymtab_simple_strtol
ffffffff8140adf0 r __ksymtab_simple_strtoll
ffffffff8140ae00 r __ksymtab_simple_strtoul
ffffffff8140ae10 r __ksymtab_simple_strtoull
ffffffff8140ae20 r __ksymtab_simple_transaction_get
ffffffff8140ae30 r __ksymtab_simple_transaction_read
ffffffff8140ae40 r __ksymtab_simple_transaction_release
ffffffff8140ae50 r __ksymtab_simple_transaction_set
ffffffff8140ae60 r __ksymtab_simple_unlink
ffffffff8140ae70 r __ksymtab_simple_write_begin
ffffffff8140ae80 r __ksymtab_simple_write_end
ffffffff8140ae90 r __ksymtab_simple_write_to_buffer
ffffffff8140aea0 r __ksymtab_single_open
ffffffff8140aeb0 r __ksymtab_single_release
ffffffff8140aec0 r __ksymtab_sk_alloc
ffffffff8140aed0 r __ksymtab_sk_chk_filter
ffffffff8140aee0 r __ksymtab_sk_common_release
ffffffff8140aef0 r __ksymtab_sk_dst_check
ffffffff8140af00 r __ksymtab_sk_filter
ffffffff8140af10 r __ksymtab_sk_filter_release_rcu
ffffffff8140af20 r __ksymtab_sk_free
ffffffff8140af30 r __ksymtab_sk_prot_clear_portaddr_nulls
ffffffff8140af40 r __ksymtab_sk_receive_skb
ffffffff8140af50 r __ksymtab_sk_release_kernel
ffffffff8140af60 r __ksymtab_sk_reset_timer
ffffffff8140af70 r __ksymtab_sk_reset_txq
ffffffff8140af80 r __ksymtab_sk_run_filter
ffffffff8140af90 r __ksymtab_sk_send_sigurg
ffffffff8140afa0 r __ksymtab_sk_stop_timer
ffffffff8140afb0 r __ksymtab_sk_stream_error
ffffffff8140afc0 r __ksymtab_sk_stream_kill_queues
ffffffff8140afd0 r __ksymtab_sk_stream_wait_close
ffffffff8140afe0 r __ksymtab_sk_stream_wait_connect
ffffffff8140aff0 r __ksymtab_sk_stream_wait_memory
ffffffff8140b000 r __ksymtab_sk_stream_write_space
ffffffff8140b010 r __ksymtab_sk_wait_data
ffffffff8140b020 r __ksymtab_skb_abort_seq_read
ffffffff8140b030 r __ksymtab_skb_add_rx_frag
ffffffff8140b040 r __ksymtab_skb_append
ffffffff8140b050 r __ksymtab_skb_append_datato_frags
ffffffff8140b060 r __ksymtab_skb_checksum
ffffffff8140b070 r __ksymtab_skb_checksum_help
ffffffff8140b080 r __ksymtab_skb_clone
ffffffff8140b090 r __ksymtab_skb_copy
ffffffff8140b0a0 r __ksymtab_skb_copy_and_csum_bits
ffffffff8140b0b0 r __ksymtab_skb_copy_and_csum_datagram_iovec
ffffffff8140b0c0 r __ksymtab_skb_copy_and_csum_dev
ffffffff8140b0d0 r __ksymtab_skb_copy_bits
ffffffff8140b0e0 r __ksymtab_skb_copy_datagram_const_iovec
ffffffff8140b0f0 r __ksymtab_skb_copy_datagram_from_iovec
ffffffff8140b100 r __ksymtab_skb_copy_datagram_iovec
ffffffff8140b110 r __ksymtab_skb_copy_expand
ffffffff8140b120 r __ksymtab_skb_dequeue
ffffffff8140b130 r __ksymtab_skb_dequeue_tail
ffffffff8140b140 r __ksymtab_skb_dst_set_noref
ffffffff8140b150 r __ksymtab_skb_find_text
ffffffff8140b160 r __ksymtab_skb_flow_dissect
ffffffff8140b170 r __ksymtab_skb_free_datagram
ffffffff8140b180 r __ksymtab_skb_free_datagram_locked
ffffffff8140b190 r __ksymtab_skb_gro_reset_offset
ffffffff8140b1a0 r __ksymtab_skb_gso_segment
ffffffff8140b1b0 r __ksymtab_skb_insert
ffffffff8140b1c0 r __ksymtab_skb_kill_datagram
ffffffff8140b1d0 r __ksymtab_skb_pad
ffffffff8140b1e0 r __ksymtab_skb_prepare_seq_read
ffffffff8140b1f0 r __ksymtab_skb_pull
ffffffff8140b200 r __ksymtab_skb_push
ffffffff8140b210 r __ksymtab_skb_put
ffffffff8140b220 r __ksymtab_skb_queue_head
ffffffff8140b230 r __ksymtab_skb_queue_purge
ffffffff8140b240 r __ksymtab_skb_queue_tail
ffffffff8140b250 r __ksymtab_skb_realloc_headroom
ffffffff8140b260 r __ksymtab_skb_recv_datagram
ffffffff8140b270 r __ksymtab_skb_recycle
ffffffff8140b280 r __ksymtab_skb_recycle_check
ffffffff8140b290 r __ksymtab_skb_seq_read
ffffffff8140b2a0 r __ksymtab_skb_split
ffffffff8140b2b0 r __ksymtab_skb_store_bits
ffffffff8140b2c0 r __ksymtab_skb_trim
ffffffff8140b2d0 r __ksymtab_skb_unlink
ffffffff8140b2e0 r __ksymtab_skip_spaces
ffffffff8140b2f0 r __ksymtab_sleep_on
ffffffff8140b300 r __ksymtab_sleep_on_timeout
ffffffff8140b310 r __ksymtab_smp_call_function
ffffffff8140b320 r __ksymtab_smp_call_function_many
ffffffff8140b330 r __ksymtab_smp_call_function_single
ffffffff8140b340 r __ksymtab_smp_num_siblings
ffffffff8140b350 r __ksymtab_snprintf
ffffffff8140b360 r __ksymtab_sock_alloc_send_pskb
ffffffff8140b370 r __ksymtab_sock_alloc_send_skb
ffffffff8140b380 r __ksymtab_sock_common_getsockopt
ffffffff8140b390 r __ksymtab_sock_common_recvmsg
ffffffff8140b3a0 r __ksymtab_sock_common_setsockopt
ffffffff8140b3b0 r __ksymtab_sock_create
ffffffff8140b3c0 r __ksymtab_sock_create_kern
ffffffff8140b3d0 r __ksymtab_sock_create_lite
ffffffff8140b3e0 r __ksymtab_sock_get_timestamp
ffffffff8140b3f0 r __ksymtab_sock_get_timestampns
ffffffff8140b400 r __ksymtab_sock_i_ino
ffffffff8140b410 r __ksymtab_sock_i_uid
ffffffff8140b420 r __ksymtab_sock_init_data
ffffffff8140b430 r __ksymtab_sock_kfree_s
ffffffff8140b440 r __ksymtab_sock_kmalloc
ffffffff8140b450 r __ksymtab_sock_map_fd
ffffffff8140b460 r __ksymtab_sock_no_accept
ffffffff8140b470 r __ksymtab_sock_no_bind
ffffffff8140b480 r __ksymtab_sock_no_connect
ffffffff8140b490 r __ksymtab_sock_no_getname
ffffffff8140b4a0 r __ksymtab_sock_no_getsockopt
ffffffff8140b4b0 r __ksymtab_sock_no_ioctl
ffffffff8140b4c0 r __ksymtab_sock_no_listen
ffffffff8140b4d0 r __ksymtab_sock_no_mmap
ffffffff8140b4e0 r __ksymtab_sock_no_poll
ffffffff8140b4f0 r __ksymtab_sock_no_recvmsg
ffffffff8140b500 r __ksymtab_sock_no_sendmsg
ffffffff8140b510 r __ksymtab_sock_no_sendpage
ffffffff8140b520 r __ksymtab_sock_no_setsockopt
ffffffff8140b530 r __ksymtab_sock_no_shutdown
ffffffff8140b540 r __ksymtab_sock_no_socketpair
ffffffff8140b550 r __ksymtab_sock_queue_err_skb
ffffffff8140b560 r __ksymtab_sock_queue_rcv_skb
ffffffff8140b570 r __ksymtab_sock_recvmsg
ffffffff8140b580 r __ksymtab_sock_register
ffffffff8140b590 r __ksymtab_sock_release
ffffffff8140b5a0 r __ksymtab_sock_rfree
ffffffff8140b5b0 r __ksymtab_sock_sendmsg
ffffffff8140b5c0 r __ksymtab_sock_setsockopt
ffffffff8140b5d0 r __ksymtab_sock_tx_timestamp
ffffffff8140b5e0 r __ksymtab_sock_unregister
ffffffff8140b5f0 r __ksymtab_sock_update_classid
ffffffff8140b600 r __ksymtab_sock_wake_async
ffffffff8140b610 r __ksymtab_sock_wfree
ffffffff8140b620 r __ksymtab_sock_wmalloc
ffffffff8140b630 r __ksymtab_sockfd_lookup
ffffffff8140b640 r __ksymtab_soft_cursor
ffffffff8140b650 r __ksymtab_softirq_work_list
ffffffff8140b660 r __ksymtab_softnet_data
ffffffff8140b670 r __ksymtab_sort
ffffffff8140b680 r __ksymtab_splice_direct_to_actor
ffffffff8140b690 r __ksymtab_splice_from_pipe_begin
ffffffff8140b6a0 r __ksymtab_splice_from_pipe_end
ffffffff8140b6b0 r __ksymtab_splice_from_pipe_feed
ffffffff8140b6c0 r __ksymtab_splice_from_pipe_next
ffffffff8140b6d0 r __ksymtab_sprintf
ffffffff8140b6e0 r __ksymtab_srandom32
ffffffff8140b6f0 r __ksymtab_sscanf
ffffffff8140b700 r __ksymtab_starget_for_each_device
ffffffff8140b710 r __ksymtab_start_tty
ffffffff8140b720 r __ksymtab_stop_tty
ffffffff8140b730 r __ksymtab_strcasecmp
ffffffff8140b740 r __ksymtab_strcat
ffffffff8140b750 r __ksymtab_strchr
ffffffff8140b760 r __ksymtab_strcmp
ffffffff8140b770 r __ksymtab_strcpy
ffffffff8140b780 r __ksymtab_strcspn
ffffffff8140b790 r __ksymtab_strim
ffffffff8140b7a0 r __ksymtab_string_get_size
ffffffff8140b7b0 r __ksymtab_strlcat
ffffffff8140b7c0 r __ksymtab_strlcpy
ffffffff8140b7d0 r __ksymtab_strlen
ffffffff8140b7e0 r __ksymtab_strlen_user
ffffffff8140b7f0 r __ksymtab_strncasecmp
ffffffff8140b800 r __ksymtab_strncat
ffffffff8140b810 r __ksymtab_strnchr
ffffffff8140b820 r __ksymtab_strncmp
ffffffff8140b830 r __ksymtab_strncpy
ffffffff8140b840 r __ksymtab_strncpy_from_user
ffffffff8140b850 r __ksymtab_strndup_user
ffffffff8140b860 r __ksymtab_strnicmp
ffffffff8140b870 r __ksymtab_strnlen
ffffffff8140b880 r __ksymtab_strnlen_user
ffffffff8140b890 r __ksymtab_strnstr
ffffffff8140b8a0 r __ksymtab_strpbrk
ffffffff8140b8b0 r __ksymtab_strrchr
ffffffff8140b8c0 r __ksymtab_strsep
ffffffff8140b8d0 r __ksymtab_strspn
ffffffff8140b8e0 r __ksymtab_strstr
ffffffff8140b8f0 r __ksymtab_strtobool
ffffffff8140b900 r __ksymtab_submit_bh
ffffffff8140b910 r __ksymtab_submit_bio
ffffffff8140b920 r __ksymtab_swiotlb_alloc_coherent
ffffffff8140b930 r __ksymtab_swiotlb_dma_mapping_error
ffffffff8140b940 r __ksymtab_swiotlb_dma_supported
ffffffff8140b950 r __ksymtab_swiotlb_free_coherent
ffffffff8140b960 r __ksymtab_swiotlb_map_sg
ffffffff8140b970 r __ksymtab_swiotlb_map_sg_attrs
ffffffff8140b980 r __ksymtab_swiotlb_sync_sg_for_cpu
ffffffff8140b990 r __ksymtab_swiotlb_sync_sg_for_device
ffffffff8140b9a0 r __ksymtab_swiotlb_sync_single_for_cpu
ffffffff8140b9b0 r __ksymtab_swiotlb_sync_single_for_device
ffffffff8140b9c0 r __ksymtab_swiotlb_unmap_sg
ffffffff8140b9d0 r __ksymtab_swiotlb_unmap_sg_attrs
ffffffff8140b9e0 r __ksymtab_sync_blockdev
ffffffff8140b9f0 r __ksymtab_sync_dirty_buffer
ffffffff8140ba00 r __ksymtab_sync_inode
ffffffff8140ba10 r __ksymtab_sync_inode_metadata
ffffffff8140ba20 r __ksymtab_sync_inodes_sb
ffffffff8140ba30 r __ksymtab_sync_mapping_buffers
ffffffff8140ba40 r __ksymtab_synchronize_irq
ffffffff8140ba50 r __ksymtab_synchronize_net
ffffffff8140ba60 r __ksymtab_syncookie_secret
ffffffff8140ba70 r __ksymtab_sys_close
ffffffff8140ba80 r __ksymtab_sys_tz
ffffffff8140ba90 r __ksymtab_sysctl_ip_default_ttl
ffffffff8140baa0 r __ksymtab_sysctl_ip_nonlocal_bind
ffffffff8140bab0 r __ksymtab_sysctl_local_reserved_ports
ffffffff8140bac0 r __ksymtab_sysctl_max_syn_backlog
ffffffff8140bad0 r __ksymtab_sysctl_optmem_max
ffffffff8140bae0 r __ksymtab_sysctl_tcp_adv_win_scale
ffffffff8140baf0 r __ksymtab_sysctl_tcp_ecn
ffffffff8140bb00 r __ksymtab_sysctl_tcp_low_latency
ffffffff8140bb10 r __ksymtab_sysctl_tcp_reordering
ffffffff8140bb20 r __ksymtab_sysctl_tcp_rmem
ffffffff8140bb30 r __ksymtab_sysctl_tcp_syncookies
ffffffff8140bb40 r __ksymtab_sysctl_tcp_wmem
ffffffff8140bb50 r __ksymtab_sysctl_udp_mem
ffffffff8140bb60 r __ksymtab_sysctl_udp_rmem_min
ffffffff8140bb70 r __ksymtab_sysctl_udp_wmem_min
ffffffff8140bb80 r __ksymtab_sysfs_format_mac
ffffffff8140bb90 r __ksymtab_sysfs_streq
ffffffff8140bba0 r __ksymtab_system_freezing_cnt
ffffffff8140bbb0 r __ksymtab_system_state
ffffffff8140bbc0 r __ksymtab_tag_pages_for_writeback
ffffffff8140bbd0 r __ksymtab_take_over_console
ffffffff8140bbe0 r __ksymtab_task_nice
ffffffff8140bbf0 r __ksymtab_task_tgid_nr_ns
ffffffff8140bc00 r __ksymtab_tasklet_init
ffffffff8140bc10 r __ksymtab_tasklet_kill
ffffffff8140bc20 r __ksymtab_tcp_check_req
ffffffff8140bc30 r __ksymtab_tcp_child_process
ffffffff8140bc40 r __ksymtab_tcp_close
ffffffff8140bc50 r __ksymtab_tcp_connect
ffffffff8140bc60 r __ksymtab_tcp_cookie_generator
ffffffff8140bc70 r __ksymtab_tcp_create_openreq_child
ffffffff8140bc80 r __ksymtab_tcp_disconnect
ffffffff8140bc90 r __ksymtab_tcp_enter_memory_pressure
ffffffff8140bca0 r __ksymtab_tcp_getsockopt
ffffffff8140bcb0 r __ksymtab_tcp_gro_complete
ffffffff8140bcc0 r __ksymtab_tcp_gro_receive
ffffffff8140bcd0 r __ksymtab_tcp_hashinfo
ffffffff8140bce0 r __ksymtab_tcp_init_xmit_timers
ffffffff8140bcf0 r __ksymtab_tcp_initialize_rcv_mss
ffffffff8140bd00 r __ksymtab_tcp_ioctl
ffffffff8140bd10 r __ksymtab_tcp_make_synack
ffffffff8140bd20 r __ksymtab_tcp_memory_allocated
ffffffff8140bd30 r __ksymtab_tcp_memory_pressure
ffffffff8140bd40 r __ksymtab_tcp_mtup_init
ffffffff8140bd50 r __ksymtab_tcp_parse_options
ffffffff8140bd60 r __ksymtab_tcp_poll
ffffffff8140bd70 r __ksymtab_tcp_proc_register
ffffffff8140bd80 r __ksymtab_tcp_proc_unregister
ffffffff8140bd90 r __ksymtab_tcp_prot
ffffffff8140bda0 r __ksymtab_tcp_rcv_established
ffffffff8140bdb0 r __ksymtab_tcp_rcv_state_process
ffffffff8140bdc0 r __ksymtab_tcp_read_sock
ffffffff8140bdd0 r __ksymtab_tcp_recvmsg
ffffffff8140bde0 r __ksymtab_tcp_select_initial_window
ffffffff8140bdf0 r __ksymtab_tcp_sendmsg
ffffffff8140be00 r __ksymtab_tcp_sendpage
ffffffff8140be10 r __ksymtab_tcp_seq_open
ffffffff8140be20 r __ksymtab_tcp_setsockopt
ffffffff8140be30 r __ksymtab_tcp_shutdown
ffffffff8140be40 r __ksymtab_tcp_simple_retransmit
ffffffff8140be50 r __ksymtab_tcp_sockets_allocated
ffffffff8140be60 r __ksymtab_tcp_splice_read
ffffffff8140be70 r __ksymtab_tcp_syn_ack_timeout
ffffffff8140be80 r __ksymtab_tcp_syn_flood_action
ffffffff8140be90 r __ksymtab_tcp_sync_mss
ffffffff8140bea0 r __ksymtab_tcp_timewait_state_process
ffffffff8140beb0 r __ksymtab_tcp_tso_segment
ffffffff8140bec0 r __ksymtab_tcp_v4_conn_request
ffffffff8140bed0 r __ksymtab_tcp_v4_connect
ffffffff8140bee0 r __ksymtab_tcp_v4_destroy_sock
ffffffff8140bef0 r __ksymtab_tcp_v4_do_rcv
ffffffff8140bf00 r __ksymtab_tcp_v4_get_peer
ffffffff8140bf10 r __ksymtab_tcp_v4_send_check
ffffffff8140bf20 r __ksymtab_tcp_v4_syn_recv_sock
ffffffff8140bf30 r __ksymtab_tcp_v4_tw_get_peer
ffffffff8140bf40 r __ksymtab_tcp_valid_rtt_meas
ffffffff8140bf50 r __ksymtab_test_set_page_writeback
ffffffff8140bf60 r __ksymtab_test_taint
ffffffff8140bf70 r __ksymtab_thaw_bdev
ffffffff8140bf80 r __ksymtab_thaw_super
ffffffff8140bf90 r __ksymtab_this_cpu_off
ffffffff8140bfa0 r __ksymtab_time_to_tm
ffffffff8140bfb0 r __ksymtab_timekeeping_inject_offset
ffffffff8140bfc0 r __ksymtab_timespec_to_jiffies
ffffffff8140bfd0 r __ksymtab_timespec_trunc
ffffffff8140bfe0 r __ksymtab_timeval_to_jiffies
ffffffff8140bff0 r __ksymtab_totalram_pages
ffffffff8140c000 r __ksymtab_touch_atime
ffffffff8140c010 r __ksymtab_truncate_inode_pages
ffffffff8140c020 r __ksymtab_truncate_inode_pages_range
ffffffff8140c030 r __ksymtab_truncate_pagecache
ffffffff8140c040 r __ksymtab_truncate_pagecache_range
ffffffff8140c050 r __ksymtab_truncate_setsize
ffffffff8140c060 r __ksymtab_try_module_get
ffffffff8140c070 r __ksymtab_try_to_del_timer_sync
ffffffff8140c080 r __ksymtab_try_to_free_buffers
ffffffff8140c090 r __ksymtab_try_to_release_page
ffffffff8140c0a0 r __ksymtab_try_wait_for_completion
ffffffff8140c0b0 r __ksymtab_tsc_khz
ffffffff8140c0c0 r __ksymtab_tty_chars_in_buffer
ffffffff8140c0d0 r __ksymtab_tty_check_change
ffffffff8140c0e0 r __ksymtab_tty_devnum
ffffffff8140c0f0 r __ksymtab_tty_driver_flush_buffer
ffffffff8140c100 r __ksymtab_tty_driver_kref_put
ffffffff8140c110 r __ksymtab_tty_flip_buffer_push
ffffffff8140c120 r __ksymtab_tty_free_termios
ffffffff8140c130 r __ksymtab_tty_get_baud_rate
ffffffff8140c140 r __ksymtab_tty_hangup
ffffffff8140c150 r __ksymtab_tty_hung_up_p
ffffffff8140c160 r __ksymtab_tty_insert_flip_string_fixed_flag
ffffffff8140c170 r __ksymtab_tty_insert_flip_string_flags
ffffffff8140c180 r __ksymtab_tty_kref_put
ffffffff8140c190 r __ksymtab_tty_lock
ffffffff8140c1a0 r __ksymtab_tty_mutex
ffffffff8140c1b0 r __ksymtab_tty_name
ffffffff8140c1c0 r __ksymtab_tty_pair_get_pty
ffffffff8140c1d0 r __ksymtab_tty_pair_get_tty
ffffffff8140c1e0 r __ksymtab_tty_port_alloc_xmit_buf
ffffffff8140c1f0 r __ksymtab_tty_port_block_til_ready
ffffffff8140c200 r __ksymtab_tty_port_carrier_raised
ffffffff8140c210 r __ksymtab_tty_port_close
ffffffff8140c220 r __ksymtab_tty_port_close_end
ffffffff8140c230 r __ksymtab_tty_port_close_start
ffffffff8140c240 r __ksymtab_tty_port_free_xmit_buf
ffffffff8140c250 r __ksymtab_tty_port_hangup
ffffffff8140c260 r __ksymtab_tty_port_init
ffffffff8140c270 r __ksymtab_tty_port_lower_dtr_rts
ffffffff8140c280 r __ksymtab_tty_port_open
ffffffff8140c290 r __ksymtab_tty_port_put
ffffffff8140c2a0 r __ksymtab_tty_port_raise_dtr_rts
ffffffff8140c2b0 r __ksymtab_tty_port_tty_get
ffffffff8140c2c0 r __ksymtab_tty_port_tty_set
ffffffff8140c2d0 r __ksymtab_tty_register_device
ffffffff8140c2e0 r __ksymtab_tty_register_driver
ffffffff8140c2f0 r __ksymtab_tty_register_ldisc
ffffffff8140c300 r __ksymtab_tty_schedule_flip
ffffffff8140c310 r __ksymtab_tty_set_operations
ffffffff8140c320 r __ksymtab_tty_shutdown
ffffffff8140c330 r __ksymtab_tty_std_termios
ffffffff8140c340 r __ksymtab_tty_termios_baud_rate
ffffffff8140c350 r __ksymtab_tty_termios_copy_hw
ffffffff8140c360 r __ksymtab_tty_termios_hw_change
ffffffff8140c370 r __ksymtab_tty_termios_input_baud_rate
ffffffff8140c380 r __ksymtab_tty_throttle
ffffffff8140c390 r __ksymtab_tty_unlock
ffffffff8140c3a0 r __ksymtab_tty_unregister_device
ffffffff8140c3b0 r __ksymtab_tty_unregister_driver
ffffffff8140c3c0 r __ksymtab_tty_unregister_ldisc
ffffffff8140c3d0 r __ksymtab_tty_unthrottle
ffffffff8140c3e0 r __ksymtab_tty_vhangup
ffffffff8140c3f0 r __ksymtab_tty_wait_until_sent
ffffffff8140c400 r __ksymtab_tty_write_room
ffffffff8140c410 r __ksymtab_udp_disconnect
ffffffff8140c420 r __ksymtab_udp_flush_pending_frames
ffffffff8140c430 r __ksymtab_udp_ioctl
ffffffff8140c440 r __ksymtab_udp_lib_get_port
ffffffff8140c450 r __ksymtab_udp_lib_getsockopt
ffffffff8140c460 r __ksymtab_udp_lib_rehash
ffffffff8140c470 r __ksymtab_udp_lib_setsockopt
ffffffff8140c480 r __ksymtab_udp_lib_unhash
ffffffff8140c490 r __ksymtab_udp_memory_allocated
ffffffff8140c4a0 r __ksymtab_udp_poll
ffffffff8140c4b0 r __ksymtab_udp_proc_register
ffffffff8140c4c0 r __ksymtab_udp_proc_unregister
ffffffff8140c4d0 r __ksymtab_udp_prot
ffffffff8140c4e0 r __ksymtab_udp_sendmsg
ffffffff8140c4f0 r __ksymtab_udp_seq_open
ffffffff8140c500 r __ksymtab_udp_table
ffffffff8140c510 r __ksymtab_udplite_prot
ffffffff8140c520 r __ksymtab_udplite_table
ffffffff8140c530 r __ksymtab_unbind_con_driver
ffffffff8140c540 r __ksymtab_unblock_all_signals
ffffffff8140c550 r __ksymtab_unlazy_fpu
ffffffff8140c560 r __ksymtab_unlink_framebuffer
ffffffff8140c570 r __ksymtab_unload_nls
ffffffff8140c580 r __ksymtab_unlock_buffer
ffffffff8140c590 r __ksymtab_unlock_new_inode
ffffffff8140c5a0 r __ksymtab_unlock_page
ffffffff8140c5b0 r __ksymtab_unlock_rename
ffffffff8140c5c0 r __ksymtab_unlock_super
ffffffff8140c5d0 r __ksymtab_unmap_mapping_range
ffffffff8140c5e0 r __ksymtab_unmap_underlying_metadata
ffffffff8140c5f0 r __ksymtab_unpoison_memory
ffffffff8140c600 r __ksymtab_unregister_acpi_notifier
ffffffff8140c610 r __ksymtab_unregister_binfmt
ffffffff8140c620 r __ksymtab_unregister_blkdev
ffffffff8140c630 r __ksymtab_unregister_chrdev_region
ffffffff8140c640 r __ksymtab_unregister_con_driver
ffffffff8140c650 r __ksymtab_unregister_console
ffffffff8140c660 r __ksymtab_unregister_cpu_notifier
ffffffff8140c670 r __ksymtab_unregister_exec_domain
ffffffff8140c680 r __ksymtab_unregister_filesystem
ffffffff8140c690 r __ksymtab_unregister_framebuffer
ffffffff8140c6a0 r __ksymtab_unregister_inetaddr_notifier
ffffffff8140c6b0 r __ksymtab_unregister_module_notifier
ffffffff8140c6c0 r __ksymtab_unregister_netdev
ffffffff8140c6d0 r __ksymtab_unregister_netdevice_many
ffffffff8140c6e0 r __ksymtab_unregister_netdevice_notifier
ffffffff8140c6f0 r __ksymtab_unregister_netdevice_queue
ffffffff8140c700 r __ksymtab_unregister_nls
ffffffff8140c710 r __ksymtab_unregister_reboot_notifier
ffffffff8140c720 r __ksymtab_unregister_shrinker
ffffffff8140c730 r __ksymtab_unregister_sysctl_table
ffffffff8140c740 r __ksymtab_up
ffffffff8140c750 r __ksymtab_up_read
ffffffff8140c760 r __ksymtab_up_write
ffffffff8140c770 r __ksymtab_update_region
ffffffff8140c780 r __ksymtab_usecs_to_jiffies
ffffffff8140c790 r __ksymtab_user_path_at
ffffffff8140c7a0 r __ksymtab_user_path_create
ffffffff8140c7b0 r __ksymtab_usleep_range
ffffffff8140c7c0 r __ksymtab_utf16s_to_utf8s
ffffffff8140c7d0 r __ksymtab_utf32_to_utf8
ffffffff8140c7e0 r __ksymtab_utf8_to_utf32
ffffffff8140c7f0 r __ksymtab_utf8s_to_utf16s
ffffffff8140c800 r __ksymtab_vc_cons
ffffffff8140c810 r __ksymtab_vc_resize
ffffffff8140c820 r __ksymtab_vesa_modes
ffffffff8140c830 r __ksymtab_vfree
ffffffff8140c840 r __ksymtab_vfs_create
ffffffff8140c850 r __ksymtab_vfs_follow_link
ffffffff8140c860 r __ksymtab_vfs_fstat
ffffffff8140c870 r __ksymtab_vfs_fstatat
ffffffff8140c880 r __ksymtab_vfs_fsync
ffffffff8140c890 r __ksymtab_vfs_fsync_range
ffffffff8140c8a0 r __ksymtab_vfs_getattr
ffffffff8140c8b0 r __ksymtab_vfs_link
ffffffff8140c8c0 r __ksymtab_vfs_llseek
ffffffff8140c8d0 r __ksymtab_vfs_lstat
ffffffff8140c8e0 r __ksymtab_vfs_mkdir
ffffffff8140c8f0 r __ksymtab_vfs_mknod
ffffffff8140c900 r __ksymtab_vfs_path_lookup
ffffffff8140c910 r __ksymtab_vfs_read
ffffffff8140c920 r __ksymtab_vfs_readdir
ffffffff8140c930 r __ksymtab_vfs_readlink
ffffffff8140c940 r __ksymtab_vfs_readv
ffffffff8140c950 r __ksymtab_vfs_rename
ffffffff8140c960 r __ksymtab_vfs_rmdir
ffffffff8140c970 r __ksymtab_vfs_stat
ffffffff8140c980 r __ksymtab_vfs_statfs
ffffffff8140c990 r __ksymtab_vfs_symlink
ffffffff8140c9a0 r __ksymtab_vfs_unlink
ffffffff8140c9b0 r __ksymtab_vfs_write
ffffffff8140c9c0 r __ksymtab_vfs_writev
ffffffff8140c9d0 r __ksymtab_vfsmount_lock_global_lock
ffffffff8140c9e0 r __ksymtab_vfsmount_lock_global_lock_online
ffffffff8140c9f0 r __ksymtab_vfsmount_lock_global_unlock
ffffffff8140ca00 r __ksymtab_vfsmount_lock_global_unlock_online
ffffffff8140ca10 r __ksymtab_vfsmount_lock_local_lock
ffffffff8140ca20 r __ksymtab_vfsmount_lock_local_lock_cpu
ffffffff8140ca30 r __ksymtab_vfsmount_lock_local_unlock
ffffffff8140ca40 r __ksymtab_vfsmount_lock_local_unlock_cpu
ffffffff8140ca50 r __ksymtab_vfsmount_lock_lock_init
ffffffff8140ca60 r __ksymtab_vga_client_register
ffffffff8140ca70 r __ksymtab_vga_get
ffffffff8140ca80 r __ksymtab_vga_put
ffffffff8140ca90 r __ksymtab_vga_set_legacy_decoding
ffffffff8140caa0 r __ksymtab_vga_tryget
ffffffff8140cab0 r __ksymtab_vgacon_text_force
ffffffff8140cac0 r __ksymtab_vlan_ioctl_set
ffffffff8140cad0 r __ksymtab_vm_brk
ffffffff8140cae0 r __ksymtab_vm_event_states
ffffffff8140caf0 r __ksymtab_vm_get_page_prot
ffffffff8140cb00 r __ksymtab_vm_insert_mixed
ffffffff8140cb10 r __ksymtab_vm_insert_page
ffffffff8140cb20 r __ksymtab_vm_insert_pfn
ffffffff8140cb30 r __ksymtab_vm_map_ram
ffffffff8140cb40 r __ksymtab_vm_mmap
ffffffff8140cb50 r __ksymtab_vm_munmap
ffffffff8140cb60 r __ksymtab_vm_stat
ffffffff8140cb70 r __ksymtab_vm_unmap_ram
ffffffff8140cb80 r __ksymtab_vmalloc
ffffffff8140cb90 r __ksymtab_vmalloc_32
ffffffff8140cba0 r __ksymtab_vmalloc_32_user
ffffffff8140cbb0 r __ksymtab_vmalloc_node
ffffffff8140cbc0 r __ksymtab_vmalloc_to_page
ffffffff8140cbd0 r __ksymtab_vmalloc_to_pfn
ffffffff8140cbe0 r __ksymtab_vmalloc_user
ffffffff8140cbf0 r __ksymtab_vmap
ffffffff8140cc00 r __ksymtab_vmtruncate
ffffffff8140cc10 r __ksymtab_vprintk
ffffffff8140cc20 r __ksymtab_vscnprintf
ffffffff8140cc30 r __ksymtab_vsnprintf
ffffffff8140cc40 r __ksymtab_vsprintf
ffffffff8140cc50 r __ksymtab_vsscanf
ffffffff8140cc60 r __ksymtab_vunmap
ffffffff8140cc70 r __ksymtab_vzalloc
ffffffff8140cc80 r __ksymtab_vzalloc_node
ffffffff8140cc90 r __ksymtab_wait_for_completion
ffffffff8140cca0 r __ksymtab_wait_for_completion_interruptible
ffffffff8140ccb0 r __ksymtab_wait_for_completion_interruptible_timeout
ffffffff8140ccc0 r __ksymtab_wait_for_completion_killable
ffffffff8140ccd0 r __ksymtab_wait_for_completion_killable_timeout
ffffffff8140cce0 r __ksymtab_wait_for_completion_timeout
ffffffff8140ccf0 r __ksymtab_wait_iff_congested
ffffffff8140cd00 r __ksymtab_wait_on_page_bit
ffffffff8140cd10 r __ksymtab_wait_on_sync_kiocb
ffffffff8140cd20 r __ksymtab_wake_bit_function
ffffffff8140cd30 r __ksymtab_wake_up_bit
ffffffff8140cd40 r __ksymtab_wake_up_process
ffffffff8140cd50 r __ksymtab_warn_slowpath_fmt
ffffffff8140cd60 r __ksymtab_warn_slowpath_fmt_taint
ffffffff8140cd70 r __ksymtab_warn_slowpath_null
ffffffff8140cd80 r __ksymtab_wbinvd_on_all_cpus
ffffffff8140cd90 r __ksymtab_wbinvd_on_cpu
ffffffff8140cda0 r __ksymtab_would_dump
ffffffff8140cdb0 r __ksymtab_write_cache_pages
ffffffff8140cdc0 r __ksymtab_write_dirty_buffer
ffffffff8140cdd0 r __ksymtab_write_inode_now
ffffffff8140cde0 r __ksymtab_write_one_page
ffffffff8140cdf0 r __ksymtab_writeback_inodes_sb
ffffffff8140ce00 r __ksymtab_writeback_inodes_sb_if_idle
ffffffff8140ce10 r __ksymtab_writeback_inodes_sb_nr
ffffffff8140ce20 r __ksymtab_writeback_inodes_sb_nr_if_idle
ffffffff8140ce30 r __ksymtab_wrmsr_on_cpu
ffffffff8140ce40 r __ksymtab_wrmsr_on_cpus
ffffffff8140ce50 r __ksymtab_wrmsr_safe_on_cpu
ffffffff8140ce60 r __ksymtab_wrmsr_safe_regs_on_cpu
ffffffff8140ce70 r __ksymtab_x86_bios_cpu_apicid
ffffffff8140ce80 r __ksymtab_x86_cpu_to_apicid
ffffffff8140ce90 r __ksymtab_x86_cpu_to_node_map
ffffffff8140cea0 r __ksymtab_x86_dma_fallback_dev
ffffffff8140ceb0 r __ksymtab_x86_hyper
ffffffff8140cec0 r __ksymtab_x86_hyper_ms_hyperv
ffffffff8140ced0 r __ksymtab_x86_hyper_vmware
ffffffff8140cee0 r __ksymtab_x86_hyper_xen_hvm
ffffffff8140cef0 r __ksymtab_x86_match_cpu
ffffffff8140cf00 r __ksymtab_xen_biovec_phys_mergeable
ffffffff8140cf10 r __ksymtab_xen_clear_irq_pending
ffffffff8140cf20 r __ksymtab_xen_poll_irq_timeout
ffffffff8140cf30 r __ksymtab_xenbus_dev_request_and_reply
ffffffff8140cf40 r __ksymtab_xfrm4_prepare_output
ffffffff8140cf50 r __ksymtab_xfrm4_rcv
ffffffff8140cf60 r __ksymtab_xfrm4_rcv_encap
ffffffff8140cf70 r __ksymtab_xfrm_alloc_spi
ffffffff8140cf80 r __ksymtab_xfrm_cfg_mutex
ffffffff8140cf90 r __ksymtab_xfrm_dst_ifdown
ffffffff8140cfa0 r __ksymtab_xfrm_find_acq
ffffffff8140cfb0 r __ksymtab_xfrm_find_acq_byseq
ffffffff8140cfc0 r __ksymtab_xfrm_get_acqseq
ffffffff8140cfd0 r __ksymtab_xfrm_init_replay
ffffffff8140cfe0 r __ksymtab_xfrm_init_state
ffffffff8140cff0 r __ksymtab_xfrm_input
ffffffff8140d000 r __ksymtab_xfrm_input_resume
ffffffff8140d010 r __ksymtab_xfrm_lookup
ffffffff8140d020 r __ksymtab_xfrm_policy_alloc
ffffffff8140d030 r __ksymtab_xfrm_policy_byid
ffffffff8140d040 r __ksymtab_xfrm_policy_bysel_ctx
ffffffff8140d050 r __ksymtab_xfrm_policy_delete
ffffffff8140d060 r __ksymtab_xfrm_policy_destroy
ffffffff8140d070 r __ksymtab_xfrm_policy_flush
ffffffff8140d080 r __ksymtab_xfrm_policy_insert
ffffffff8140d090 r __ksymtab_xfrm_policy_register_afinfo
ffffffff8140d0a0 r __ksymtab_xfrm_policy_unregister_afinfo
ffffffff8140d0b0 r __ksymtab_xfrm_policy_walk
ffffffff8140d0c0 r __ksymtab_xfrm_policy_walk_done
ffffffff8140d0d0 r __ksymtab_xfrm_policy_walk_init
ffffffff8140d0e0 r __ksymtab_xfrm_prepare_input
ffffffff8140d0f0 r __ksymtab_xfrm_register_km
ffffffff8140d100 r __ksymtab_xfrm_register_mode
ffffffff8140d110 r __ksymtab_xfrm_register_type
ffffffff8140d120 r __ksymtab_xfrm_sad_getinfo
ffffffff8140d130 r __ksymtab_xfrm_spd_getinfo
ffffffff8140d140 r __ksymtab_xfrm_state_add
ffffffff8140d150 r __ksymtab_xfrm_state_alloc
ffffffff8140d160 r __ksymtab_xfrm_state_check_expire
ffffffff8140d170 r __ksymtab_xfrm_state_delete
ffffffff8140d180 r __ksymtab_xfrm_state_delete_tunnel
ffffffff8140d190 r __ksymtab_xfrm_state_flush
ffffffff8140d1a0 r __ksymtab_xfrm_state_insert
ffffffff8140d1b0 r __ksymtab_xfrm_state_lookup
ffffffff8140d1c0 r __ksymtab_xfrm_state_lookup_byaddr
ffffffff8140d1d0 r __ksymtab_xfrm_state_register_afinfo
ffffffff8140d1e0 r __ksymtab_xfrm_state_unregister_afinfo
ffffffff8140d1f0 r __ksymtab_xfrm_state_update
ffffffff8140d200 r __ksymtab_xfrm_state_walk
ffffffff8140d210 r __ksymtab_xfrm_state_walk_done
ffffffff8140d220 r __ksymtab_xfrm_state_walk_init
ffffffff8140d230 r __ksymtab_xfrm_stateonly_find
ffffffff8140d240 r __ksymtab_xfrm_unregister_km
ffffffff8140d250 r __ksymtab_xfrm_unregister_mode
ffffffff8140d260 r __ksymtab_xfrm_unregister_type
ffffffff8140d270 r __ksymtab_xfrm_user_policy
ffffffff8140d280 r __ksymtab_xz_dec_end
ffffffff8140d290 r __ksymtab_xz_dec_init
ffffffff8140d2a0 r __ksymtab_xz_dec_reset
ffffffff8140d2b0 r __ksymtab_xz_dec_run
ffffffff8140d2c0 r __ksymtab_yield
ffffffff8140d2d0 r __ksymtab_zero_fill_bio
ffffffff8140d2e0 r __ksymtab_zlib_inflate
ffffffff8140d2f0 r __ksymtab_zlib_inflateEnd
ffffffff8140d300 r __ksymtab_zlib_inflateIncomp
ffffffff8140d310 r __ksymtab_zlib_inflateInit2
ffffffff8140d320 r __ksymtab_zlib_inflateReset
ffffffff8140d330 r __ksymtab_zlib_inflate_blob
ffffffff8140d340 r __ksymtab_zlib_inflate_workspacesize
ffffffff8140d350 r __ksymtab_PageHuge
ffffffff8140d350 R __start___ksymtab_gpl
ffffffff8140d350 R __stop___ksymtab
ffffffff8140d360 r __ksymtab___ablkcipher_walk_complete
ffffffff8140d370 r __ksymtab___alloc_percpu
ffffffff8140d380 r __ksymtab___alloc_workqueue_key
ffffffff8140d390 r __ksymtab___apei_exec_run
ffffffff8140d3a0 r __ksymtab___ata_change_queue_depth
ffffffff8140d3b0 r __ksymtab___ata_ehi_push_desc
ffffffff8140d3c0 r __ksymtab___atomic_notifier_call_chain
ffffffff8140d3d0 r __ksymtab___audit_inode_child
ffffffff8140d3e0 r __ksymtab___blk_end_request_err
ffffffff8140d3f0 r __ksymtab___blk_put_request
ffffffff8140d400 r __ksymtab___blkdev_driver_ioctl
ffffffff8140d410 r __ksymtab___blocking_notifier_call_chain
ffffffff8140d420 r __ksymtab___bus_register
ffffffff8140d430 r __ksymtab___class_create
ffffffff8140d440 r __ksymtab___class_register
ffffffff8140d450 r __ksymtab___clocksource_register_scale
ffffffff8140d460 r __ksymtab___clocksource_updatefreq_scale
ffffffff8140d470 r __ksymtab___cpufreq_driver_getavg
ffffffff8140d480 r __ksymtab___cpufreq_driver_target
ffffffff8140d490 r __ksymtab___crypto_alloc_tfm
ffffffff8140d4a0 r __ksymtab___crypto_dequeue_request
ffffffff8140d4b0 r __ksymtab___css_put
ffffffff8140d4c0 r __ksymtab___fsnotify_inode_delete
ffffffff8140d4d0 r __ksymtab___fsnotify_parent
ffffffff8140d4e0 r __ksymtab___get_user_pages_fast
ffffffff8140d4f0 r __ksymtab___get_vm_area
ffffffff8140d500 r __ksymtab___hid_register_driver
ffffffff8140d510 r __ksymtab___hvc_resize
ffffffff8140d520 r __ksymtab___inet_hash_nolisten
ffffffff8140d530 r __ksymtab___inet_inherit_port
ffffffff8140d540 r __ksymtab___inet_lookup_established
ffffffff8140d550 r __ksymtab___inet_lookup_listener
ffffffff8140d560 r __ksymtab___inet_twsk_hashdance
ffffffff8140d570 r __ksymtab___init_kthread_worker
ffffffff8140d580 r __ksymtab___iowrite32_copy
ffffffff8140d590 r __ksymtab___iowrite64_copy
ffffffff8140d5a0 r __ksymtab___ip_route_output_key
ffffffff8140d5b0 r __ksymtab___irq_alloc_descs
ffffffff8140d5c0 r __ksymtab___irq_set_handler
ffffffff8140d5d0 r __ksymtab___lock_page_killable
ffffffff8140d5e0 r __ksymtab___mmdrop
ffffffff8140d5f0 r __ksymtab___mmu_notifier_register
ffffffff8140d600 r __ksymtab___mnt_is_readonly
ffffffff8140d610 r __ksymtab___module_address
ffffffff8140d620 r __ksymtab___module_text_address
ffffffff8140d630 r __ksymtab___pci_complete_power_transition
ffffffff8140d640 r __ksymtab___pci_reset_function
ffffffff8140d650 r __ksymtab___pci_reset_function_locked
ffffffff8140d660 r __ksymtab___pm_relax
ffffffff8140d670 r __ksymtab___pm_runtime_disable
ffffffff8140d680 r __ksymtab___pm_runtime_idle
ffffffff8140d690 r __ksymtab___pm_runtime_resume
ffffffff8140d6a0 r __ksymtab___pm_runtime_set_status
ffffffff8140d6b0 r __ksymtab___pm_runtime_suspend
ffffffff8140d6c0 r __ksymtab___pm_runtime_use_autosuspend
ffffffff8140d6d0 r __ksymtab___pm_stay_awake
ffffffff8140d6e0 r __ksymtab___pm_wakeup_event
ffffffff8140d6f0 r __ksymtab___pneigh_lookup
ffffffff8140d700 r __ksymtab___put_net
ffffffff8140d710 r __ksymtab___put_task_struct
ffffffff8140d720 r __ksymtab___raw_notifier_call_chain
ffffffff8140d730 r __ksymtab___root_device_register
ffffffff8140d740 r __ksymtab___round_jiffies
ffffffff8140d750 r __ksymtab___round_jiffies_relative
ffffffff8140d760 r __ksymtab___round_jiffies_up
ffffffff8140d770 r __ksymtab___round_jiffies_up_relative
ffffffff8140d780 r __ksymtab___rt_mutex_init
ffffffff8140d790 r __ksymtab___rtnl_af_register
ffffffff8140d7a0 r __ksymtab___rtnl_af_unregister
ffffffff8140d7b0 r __ksymtab___rtnl_link_register
ffffffff8140d7c0 r __ksymtab___rtnl_link_unregister
ffffffff8140d7d0 r __ksymtab___rtnl_register
ffffffff8140d7e0 r __ksymtab___scsi_get_command
ffffffff8140d7f0 r __ksymtab___sock_recv_timestamp
ffffffff8140d800 r __ksymtab___sock_recv_ts_and_drops
ffffffff8140d810 r __ksymtab___sock_recv_wifi_status
ffffffff8140d820 r __ksymtab___srcu_notifier_call_chain
ffffffff8140d830 r __ksymtab___srcu_read_lock
ffffffff8140d840 r __ksymtab___srcu_read_unlock
ffffffff8140d850 r __ksymtab___supported_pte_mask
ffffffff8140d860 r __ksymtab___suspend_report_result
ffffffff8140d870 r __ksymtab___symbol_get
ffffffff8140d880 r __ksymtab___timecompare_update
ffffffff8140d890 r __ksymtab___udp4_lib_lookup
ffffffff8140d8a0 r __ksymtab___wake_up_locked
ffffffff8140d8b0 r __ksymtab___wake_up_locked_key
ffffffff8140d8c0 r __ksymtab___wake_up_sync
ffffffff8140d8d0 r __ksymtab___wake_up_sync_key
ffffffff8140d8e0 r __ksymtab_ablkcipher_walk_done
ffffffff8140d8f0 r __ksymtab_ablkcipher_walk_phys
ffffffff8140d900 r __ksymtab_acpi_bus_generate_proc_event4
ffffffff8140d910 r __ksymtab_acpi_bus_get_ejd
ffffffff8140d920 r __ksymtab_acpi_bus_trim
ffffffff8140d930 r __ksymtab_acpi_bus_update_power
ffffffff8140d940 r __ksymtab_acpi_ec_add_query_handler
ffffffff8140d950 r __ksymtab_acpi_ec_remove_query_handler
ffffffff8140d960 r __ksymtab_acpi_get_cpuid
ffffffff8140d970 r __ksymtab_acpi_get_pci_dev
ffffffff8140d980 r __ksymtab_acpi_get_pci_rootbridge_handle
ffffffff8140d990 r __ksymtab_acpi_gsi_to_irq
ffffffff8140d9a0 r __ksymtab_acpi_is_root_bridge
ffffffff8140d9b0 r __ksymtab_acpi_kobj
ffffffff8140d9c0 r __ksymtab_acpi_os_get_iomem
ffffffff8140d9d0 r __ksymtab_acpi_os_map_memory
ffffffff8140d9e0 r __ksymtab_acpi_os_unmap_memory
ffffffff8140d9f0 r __ksymtab_acpi_pci_find_root
ffffffff8140da00 r __ksymtab_acpi_processor_ffh_cstate_enter
ffffffff8140da10 r __ksymtab_acpi_processor_ffh_cstate_probe
ffffffff8140da20 r __ksymtab_add_input_randomness
ffffffff8140da30 r __ksymtab_add_page_wait_queue
ffffffff8140da40 r __ksymtab_add_timer_on
ffffffff8140da50 r __ksymtab_add_to_page_cache_lru
ffffffff8140da60 r __ksymtab_add_uevent_var
ffffffff8140da70 r __ksymtab_aead_geniv_alloc
ffffffff8140da80 r __ksymtab_aead_geniv_exit
ffffffff8140da90 r __ksymtab_aead_geniv_free
ffffffff8140daa0 r __ksymtab_aead_geniv_init
ffffffff8140dab0 r __ksymtab_aer_irq
ffffffff8140dac0 r __ksymtab_aer_recover_queue
ffffffff8140dad0 r __ksymtab_ahash_attr_alg
ffffffff8140dae0 r __ksymtab_ahash_free_instance
ffffffff8140daf0 r __ksymtab_ahash_register_instance
ffffffff8140db00 r __ksymtab_alg_test
ffffffff8140db10 r __ksymtab_all_vm_events
ffffffff8140db20 r __ksymtab_alloc_page_buffers
ffffffff8140db30 r __ksymtab_alloc_vm_area
ffffffff8140db40 r __ksymtab_amd_cache_northbridges
ffffffff8140db50 r __ksymtab_amd_erratum_383
ffffffff8140db60 r __ksymtab_amd_erratum_400
ffffffff8140db70 r __ksymtab_amd_flush_garts
ffffffff8140db80 r __ksymtab_amd_get_nb_id
ffffffff8140db90 r __ksymtab_amd_pmu_disable_virt
ffffffff8140dba0 r __ksymtab_amd_pmu_enable_virt
ffffffff8140dbb0 r __ksymtab_anon_inode_getfd
ffffffff8140dbc0 r __ksymtab_anon_inode_getfile
ffffffff8140dbd0 r __ksymtab_anon_transport_class_register
ffffffff8140dbe0 r __ksymtab_anon_transport_class_unregister
ffffffff8140dbf0 r __ksymtab_aout_dump_debugregs
ffffffff8140dc00 r __ksymtab_apei_estatus_check
ffffffff8140dc10 r __ksymtab_apei_estatus_check_header
ffffffff8140dc20 r __ksymtab_apei_estatus_print
ffffffff8140dc30 r __ksymtab_apei_exec_collect_resources
ffffffff8140dc40 r __ksymtab_apei_exec_ctx_init
ffffffff8140dc50 r __ksymtab_apei_exec_noop
ffffffff8140dc60 r __ksymtab_apei_exec_post_unmap_gars
ffffffff8140dc70 r __ksymtab_apei_exec_pre_map_gars
ffffffff8140dc80 r __ksymtab_apei_exec_read_register
ffffffff8140dc90 r __ksymtab_apei_exec_read_register_value
ffffffff8140dca0 r __ksymtab_apei_exec_write_register
ffffffff8140dcb0 r __ksymtab_apei_exec_write_register_value
ffffffff8140dcc0 r __ksymtab_apei_get_debugfs_dir
ffffffff8140dcd0 r __ksymtab_apei_hest_parse
ffffffff8140dce0 r __ksymtab_apei_map_generic_address
ffffffff8140dcf0 r __ksymtab_apei_mce_report_mem_error
ffffffff8140dd00 r __ksymtab_apei_osc_setup
ffffffff8140dd10 r __ksymtab_apei_read
ffffffff8140dd20 r __ksymtab_apei_resources_add
ffffffff8140dd30 r __ksymtab_apei_resources_fini
ffffffff8140dd40 r __ksymtab_apei_resources_release
ffffffff8140dd50 r __ksymtab_apei_resources_request
ffffffff8140dd60 r __ksymtab_apei_resources_sub
ffffffff8140dd70 r __ksymtab_apei_write
ffffffff8140dd80 r __ksymtab_apic
ffffffff8140dd90 r __ksymtab_apply_to_page_range
ffffffff8140dda0 r __ksymtab_arbitrary_virt_to_machine
ffffffff8140ddb0 r __ksymtab_async_schedule
ffffffff8140ddc0 r __ksymtab_async_schedule_domain
ffffffff8140ddd0 r __ksymtab_async_synchronize_cookie
ffffffff8140dde0 r __ksymtab_async_synchronize_cookie_domain
ffffffff8140ddf0 r __ksymtab_async_synchronize_full
ffffffff8140de00 r __ksymtab_async_synchronize_full_domain
ffffffff8140de10 r __ksymtab_ata_acpi_cbl_80wire
ffffffff8140de20 r __ksymtab_ata_acpi_gtm
ffffffff8140de30 r __ksymtab_ata_acpi_gtm_xfermask
ffffffff8140de40 r __ksymtab_ata_acpi_stm
ffffffff8140de50 r __ksymtab_ata_base_port_ops
ffffffff8140de60 r __ksymtab_ata_bmdma32_port_ops
ffffffff8140de70 r __ksymtab_ata_bmdma_dumb_qc_prep
ffffffff8140de80 r __ksymtab_ata_bmdma_error_handler
ffffffff8140de90 r __ksymtab_ata_bmdma_interrupt
ffffffff8140dea0 r __ksymtab_ata_bmdma_irq_clear
ffffffff8140deb0 r __ksymtab_ata_bmdma_port_intr
ffffffff8140dec0 r __ksymtab_ata_bmdma_port_ops
ffffffff8140ded0 r __ksymtab_ata_bmdma_port_start
ffffffff8140dee0 r __ksymtab_ata_bmdma_port_start32
ffffffff8140def0 r __ksymtab_ata_bmdma_post_internal_cmd
ffffffff8140df00 r __ksymtab_ata_bmdma_qc_issue
ffffffff8140df10 r __ksymtab_ata_bmdma_qc_prep
ffffffff8140df20 r __ksymtab_ata_bmdma_setup
ffffffff8140df30 r __ksymtab_ata_bmdma_start
ffffffff8140df40 r __ksymtab_ata_bmdma_status
ffffffff8140df50 r __ksymtab_ata_bmdma_stop
ffffffff8140df60 r __ksymtab_ata_cable_40wire
ffffffff8140df70 r __ksymtab_ata_cable_80wire
ffffffff8140df80 r __ksymtab_ata_cable_ignore
ffffffff8140df90 r __ksymtab_ata_cable_sata
ffffffff8140dfa0 r __ksymtab_ata_cable_unknown
ffffffff8140dfb0 r __ksymtab_ata_common_sdev_attrs
ffffffff8140dfc0 r __ksymtab_ata_dev_classify
ffffffff8140dfd0 r __ksymtab_ata_dev_disable
ffffffff8140dfe0 r __ksymtab_ata_dev_next
ffffffff8140dff0 r __ksymtab_ata_dev_pair
ffffffff8140e000 r __ksymtab_ata_do_dev_read_id
ffffffff8140e010 r __ksymtab_ata_do_eh
ffffffff8140e020 r __ksymtab_ata_do_set_mode
ffffffff8140e030 r __ksymtab_ata_dummy_port_info
ffffffff8140e040 r __ksymtab_ata_dummy_port_ops
ffffffff8140e050 r __ksymtab_ata_eh_analyze_ncq_error
ffffffff8140e060 r __ksymtab_ata_eh_freeze_port
ffffffff8140e070 r __ksymtab_ata_eh_qc_complete
ffffffff8140e080 r __ksymtab_ata_eh_qc_retry
ffffffff8140e090 r __ksymtab_ata_eh_thaw_port
ffffffff8140e0a0 r __ksymtab_ata_ehi_clear_desc
ffffffff8140e0b0 r __ksymtab_ata_ehi_push_desc
ffffffff8140e0c0 r __ksymtab_ata_host_activate
ffffffff8140e0d0 r __ksymtab_ata_host_alloc
ffffffff8140e0e0 r __ksymtab_ata_host_alloc_pinfo
ffffffff8140e0f0 r __ksymtab_ata_host_detach
ffffffff8140e100 r __ksymtab_ata_host_init
ffffffff8140e110 r __ksymtab_ata_host_register
ffffffff8140e120 r __ksymtab_ata_host_resume
ffffffff8140e130 r __ksymtab_ata_host_start
ffffffff8140e140 r __ksymtab_ata_host_suspend
ffffffff8140e150 r __ksymtab_ata_id_c_string
ffffffff8140e160 r __ksymtab_ata_id_string
ffffffff8140e170 r __ksymtab_ata_id_xfermask
ffffffff8140e180 r __ksymtab_ata_link_abort
ffffffff8140e190 r __ksymtab_ata_link_next
ffffffff8140e1a0 r __ksymtab_ata_link_offline
ffffffff8140e1b0 r __ksymtab_ata_link_online
ffffffff8140e1c0 r __ksymtab_ata_mode_string
ffffffff8140e1d0 r __ksymtab_ata_msleep
ffffffff8140e1e0 r __ksymtab_ata_noop_qc_prep
ffffffff8140e1f0 r __ksymtab_ata_pack_xfermask
ffffffff8140e200 r __ksymtab_ata_pci_bmdma_clear_simplex
ffffffff8140e210 r __ksymtab_ata_pci_bmdma_init
ffffffff8140e220 r __ksymtab_ata_pci_bmdma_init_one
ffffffff8140e230 r __ksymtab_ata_pci_bmdma_prepare_host
ffffffff8140e240 r __ksymtab_ata_pci_device_do_resume
ffffffff8140e250 r __ksymtab_ata_pci_device_do_suspend
ffffffff8140e260 r __ksymtab_ata_pci_device_resume
ffffffff8140e270 r __ksymtab_ata_pci_device_suspend
ffffffff8140e280 r __ksymtab_ata_pci_remove_one
ffffffff8140e290 r __ksymtab_ata_pci_sff_activate_host
ffffffff8140e2a0 r __ksymtab_ata_pci_sff_init_host
ffffffff8140e2b0 r __ksymtab_ata_pci_sff_init_one
ffffffff8140e2c0 r __ksymtab_ata_pci_sff_prepare_host
ffffffff8140e2d0 r __ksymtab_ata_pio_need_iordy
ffffffff8140e2e0 r __ksymtab_ata_port_abort
ffffffff8140e2f0 r __ksymtab_ata_port_desc
ffffffff8140e300 r __ksymtab_ata_port_freeze
ffffffff8140e310 r __ksymtab_ata_port_pbar_desc
ffffffff8140e320 r __ksymtab_ata_port_schedule_eh
ffffffff8140e330 r __ksymtab_ata_port_wait_eh
ffffffff8140e340 r __ksymtab_ata_qc_complete
ffffffff8140e350 r __ksymtab_ata_qc_complete_multiple
ffffffff8140e360 r __ksymtab_ata_ratelimit
ffffffff8140e370 r __ksymtab_ata_sas_async_probe
ffffffff8140e380 r __ksymtab_ata_sas_port_alloc
ffffffff8140e390 r __ksymtab_ata_sas_port_destroy
ffffffff8140e3a0 r __ksymtab_ata_sas_port_init
ffffffff8140e3b0 r __ksymtab_ata_sas_port_start
ffffffff8140e3c0 r __ksymtab_ata_sas_port_stop
ffffffff8140e3d0 r __ksymtab_ata_sas_queuecmd
ffffffff8140e3e0 r __ksymtab_ata_sas_scsi_ioctl
ffffffff8140e3f0 r __ksymtab_ata_sas_slave_configure
ffffffff8140e400 r __ksymtab_ata_sas_sync_probe
ffffffff8140e410 r __ksymtab_ata_scsi_change_queue_depth
ffffffff8140e420 r __ksymtab_ata_scsi_ioctl
ffffffff8140e430 r __ksymtab_ata_scsi_port_error_handler
ffffffff8140e440 r __ksymtab_ata_scsi_queuecmd
ffffffff8140e450 r __ksymtab_ata_scsi_simulate
ffffffff8140e460 r __ksymtab_ata_scsi_slave_config
ffffffff8140e470 r __ksymtab_ata_scsi_slave_destroy
ffffffff8140e480 r __ksymtab_ata_scsi_unlock_native_capacity
ffffffff8140e490 r __ksymtab_ata_sff_busy_sleep
ffffffff8140e4a0 r __ksymtab_ata_sff_check_status
ffffffff8140e4b0 r __ksymtab_ata_sff_data_xfer
ffffffff8140e4c0 r __ksymtab_ata_sff_data_xfer32
ffffffff8140e4d0 r __ksymtab_ata_sff_data_xfer_noirq
ffffffff8140e4e0 r __ksymtab_ata_sff_dev_classify
ffffffff8140e4f0 r __ksymtab_ata_sff_dev_select
ffffffff8140e500 r __ksymtab_ata_sff_dma_pause
ffffffff8140e510 r __ksymtab_ata_sff_drain_fifo
ffffffff8140e520 r __ksymtab_ata_sff_error_handler
ffffffff8140e530 r __ksymtab_ata_sff_exec_command
ffffffff8140e540 r __ksymtab_ata_sff_freeze
ffffffff8140e550 r __ksymtab_ata_sff_hsm_move
ffffffff8140e560 r __ksymtab_ata_sff_interrupt
ffffffff8140e570 r __ksymtab_ata_sff_irq_on
ffffffff8140e580 r __ksymtab_ata_sff_lost_interrupt
ffffffff8140e590 r __ksymtab_ata_sff_pause
ffffffff8140e5a0 r __ksymtab_ata_sff_port_intr
ffffffff8140e5b0 r __ksymtab_ata_sff_port_ops
ffffffff8140e5c0 r __ksymtab_ata_sff_postreset
ffffffff8140e5d0 r __ksymtab_ata_sff_prereset
ffffffff8140e5e0 r __ksymtab_ata_sff_qc_fill_rtf
ffffffff8140e5f0 r __ksymtab_ata_sff_qc_issue
ffffffff8140e600 r __ksymtab_ata_sff_queue_delayed_work
ffffffff8140e610 r __ksymtab_ata_sff_queue_pio_task
ffffffff8140e620 r __ksymtab_ata_sff_queue_work
ffffffff8140e630 r __ksymtab_ata_sff_softreset
ffffffff8140e640 r __ksymtab_ata_sff_std_ports
ffffffff8140e650 r __ksymtab_ata_sff_tf_load
ffffffff8140e660 r __ksymtab_ata_sff_tf_read
ffffffff8140e670 r __ksymtab_ata_sff_thaw
ffffffff8140e680 r __ksymtab_ata_sff_wait_after_reset
ffffffff8140e690 r __ksymtab_ata_sff_wait_ready
ffffffff8140e6a0 r __ksymtab_ata_sg_init
ffffffff8140e6b0 r __ksymtab_ata_slave_link_init
ffffffff8140e6c0 r __ksymtab_ata_std_bios_param
ffffffff8140e6d0 r __ksymtab_ata_std_error_handler
ffffffff8140e6e0 r __ksymtab_ata_std_postreset
ffffffff8140e6f0 r __ksymtab_ata_std_prereset
ffffffff8140e700 r __ksymtab_ata_std_qc_defer
ffffffff8140e710 r __ksymtab_ata_tf_from_fis
ffffffff8140e720 r __ksymtab_ata_tf_to_fis
ffffffff8140e730 r __ksymtab_ata_timing_compute
ffffffff8140e740 r __ksymtab_ata_timing_cycle2mode
ffffffff8140e750 r __ksymtab_ata_timing_find_mode
ffffffff8140e760 r __ksymtab_ata_timing_merge
ffffffff8140e770 r __ksymtab_ata_unpack_xfermask
ffffffff8140e780 r __ksymtab_ata_wait_after_reset
ffffffff8140e790 r __ksymtab_ata_wait_register
ffffffff8140e7a0 r __ksymtab_ata_xfer_mask2mode
ffffffff8140e7b0 r __ksymtab_ata_xfer_mode2mask
ffffffff8140e7c0 r __ksymtab_ata_xfer_mode2shift
ffffffff8140e7d0 r __ksymtab_atapi_cmd_type
ffffffff8140e7e0 r __ksymtab_atomic_notifier_call_chain
ffffffff8140e7f0 r __ksymtab_atomic_notifier_chain_register
ffffffff8140e800 r __ksymtab_atomic_notifier_chain_unregister
ffffffff8140e810 r __ksymtab_attribute_container_classdev_to_container
ffffffff8140e820 r __ksymtab_attribute_container_find_class_device
ffffffff8140e830 r __ksymtab_attribute_container_register
ffffffff8140e840 r __ksymtab_attribute_container_unregister
ffffffff8140e850 r __ksymtab_audit_enabled
ffffffff8140e860 r __ksymtab_balloon_set_new_target
ffffffff8140e870 r __ksymtab_balloon_stats
ffffffff8140e880 r __ksymtab_bd_link_disk_holder
ffffffff8140e890 r __ksymtab_bd_unlink_disk_holder
ffffffff8140e8a0 r __ksymtab_bdi_writeout_inc
ffffffff8140e8b0 r __ksymtab_bind_evtchn_to_irq
ffffffff8140e8c0 r __ksymtab_bind_evtchn_to_irqhandler
ffffffff8140e8d0 r __ksymtab_bind_interdomain_evtchn_to_irqhandler
ffffffff8140e8e0 r __ksymtab_bind_virq_to_irqhandler
ffffffff8140e8f0 r __ksymtab_blk_abort_queue
ffffffff8140e900 r __ksymtab_blk_abort_request
ffffffff8140e910 r __ksymtab_blk_add_request_payload
ffffffff8140e920 r __ksymtab_blk_end_request_err
ffffffff8140e930 r __ksymtab_blk_execute_rq_nowait
ffffffff8140e940 r __ksymtab_blk_insert_cloned_request
ffffffff8140e950 r __ksymtab_blk_lld_busy
ffffffff8140e960 r __ksymtab_blk_queue_bio
ffffffff8140e970 r __ksymtab_blk_queue_dma_drain
ffffffff8140e980 r __ksymtab_blk_queue_flush
ffffffff8140e990 r __ksymtab_blk_queue_flush_queueable
ffffffff8140e9a0 r __ksymtab_blk_queue_lld_busy
ffffffff8140e9b0 r __ksymtab_blk_queue_rq_timed_out
ffffffff8140e9c0 r __ksymtab_blk_queue_rq_timeout
ffffffff8140e9d0 r __ksymtab_blk_rq_check_limits
ffffffff8140e9e0 r __ksymtab_blk_rq_err_bytes
ffffffff8140e9f0 r __ksymtab_blk_rq_prep_clone
ffffffff8140ea00 r __ksymtab_blk_rq_unprep_clone
ffffffff8140ea10 r __ksymtab_blk_unprep_request
ffffffff8140ea20 r __ksymtab_blk_update_request
ffffffff8140ea30 r __ksymtab_blkcg_get_weight
ffffffff8140ea40 r __ksymtab_blkcipher_walk_done
ffffffff8140ea50 r __ksymtab_blkcipher_walk_phys
ffffffff8140ea60 r __ksymtab_blkcipher_walk_virt
ffffffff8140ea70 r __ksymtab_blkcipher_walk_virt_block
ffffffff8140ea80 r __ksymtab_blkdev_aio_write
ffffffff8140ea90 r __ksymtab_blkdev_ioctl
ffffffff8140eaa0 r __ksymtab_blkio_alloc_blkg_stats
ffffffff8140eab0 r __ksymtab_blkio_policy_register
ffffffff8140eac0 r __ksymtab_blkio_policy_unregister
ffffffff8140ead0 r __ksymtab_blkio_root_cgroup
ffffffff8140eae0 r __ksymtab_blkio_subsys
ffffffff8140eaf0 r __ksymtab_blkiocg_add_blkio_group
ffffffff8140eb00 r __ksymtab_blkiocg_del_blkio_group
ffffffff8140eb10 r __ksymtab_blkiocg_lookup_group
ffffffff8140eb20 r __ksymtab_blkiocg_update_completion_stats
ffffffff8140eb30 r __ksymtab_blkiocg_update_dispatch_stats
ffffffff8140eb40 r __ksymtab_blkiocg_update_io_add_stats
ffffffff8140eb50 r __ksymtab_blkiocg_update_io_merged_stats
ffffffff8140eb60 r __ksymtab_blkiocg_update_io_remove_stats
ffffffff8140eb70 r __ksymtab_blkiocg_update_timeslice_used
ffffffff8140eb80 r __ksymtab_blocking_notifier_call_chain
ffffffff8140eb90 r __ksymtab_blocking_notifier_chain_cond_register
ffffffff8140eba0 r __ksymtab_blocking_notifier_chain_register
ffffffff8140ebb0 r __ksymtab_blocking_notifier_chain_unregister
ffffffff8140ebc0 r __ksymtab_bsg_goose_queue
ffffffff8140ebd0 r __ksymtab_bsg_job_done
ffffffff8140ebe0 r __ksymtab_bsg_register_queue
ffffffff8140ebf0 r __ksymtab_bsg_remove_queue
ffffffff8140ec00 r __ksymtab_bsg_request_fn
ffffffff8140ec10 r __ksymtab_bsg_setup_queue
ffffffff8140ec20 r __ksymtab_bsg_unregister_queue
ffffffff8140ec30 r __ksymtab_bus_create_file
ffffffff8140ec40 r __ksymtab_bus_find_device
ffffffff8140ec50 r __ksymtab_bus_find_device_by_name
ffffffff8140ec60 r __ksymtab_bus_for_each_dev
ffffffff8140ec70 r __ksymtab_bus_for_each_drv
ffffffff8140ec80 r __ksymtab_bus_get_device_klist
ffffffff8140ec90 r __ksymtab_bus_get_kset
ffffffff8140eca0 r __ksymtab_bus_register_notifier
ffffffff8140ecb0 r __ksymtab_bus_remove_file
ffffffff8140ecc0 r __ksymtab_bus_rescan_devices
ffffffff8140ecd0 r __ksymtab_bus_set_iommu
ffffffff8140ece0 r __ksymtab_bus_sort_breadthfirst
ffffffff8140ecf0 r __ksymtab_bus_unregister
ffffffff8140ed00 r __ksymtab_bus_unregister_notifier
ffffffff8140ed10 r __ksymtab_byte_rev_table
ffffffff8140ed20 r __ksymtab_call_netevent_notifiers
ffffffff8140ed30 r __ksymtab_call_rcu_bh
ffffffff8140ed40 r __ksymtab_call_rcu_sched
ffffffff8140ed50 r __ksymtab_cancel_work_sync
ffffffff8140ed60 r __ksymtab_cgroup_add_file
ffffffff8140ed70 r __ksymtab_cgroup_add_files
ffffffff8140ed80 r __ksymtab_cgroup_attach_task_all
ffffffff8140ed90 r __ksymtab_cgroup_load_subsys
ffffffff8140eda0 r __ksymtab_cgroup_lock
ffffffff8140edb0 r __ksymtab_cgroup_lock_is_held
ffffffff8140edc0 r __ksymtab_cgroup_lock_live_group
ffffffff8140edd0 r __ksymtab_cgroup_path
ffffffff8140ede0 r __ksymtab_cgroup_taskset_cur_cgroup
ffffffff8140edf0 r __ksymtab_cgroup_taskset_first
ffffffff8140ee00 r __ksymtab_cgroup_taskset_next
ffffffff8140ee10 r __ksymtab_cgroup_taskset_size
ffffffff8140ee20 r __ksymtab_cgroup_to_blkio_cgroup
ffffffff8140ee30 r __ksymtab_cgroup_unload_subsys
ffffffff8140ee40 r __ksymtab_cgroup_unlock
ffffffff8140ee50 r __ksymtab_check_tsc_unstable
ffffffff8140ee60 r __ksymtab_class_compat_create_link
ffffffff8140ee70 r __ksymtab_class_compat_register
ffffffff8140ee80 r __ksymtab_class_compat_remove_link
ffffffff8140ee90 r __ksymtab_class_compat_unregister
ffffffff8140eea0 r __ksymtab_class_create_file
ffffffff8140eeb0 r __ksymtab_class_destroy
ffffffff8140eec0 r __ksymtab_class_dev_iter_exit
ffffffff8140eed0 r __ksymtab_class_dev_iter_init
ffffffff8140eee0 r __ksymtab_class_dev_iter_next
ffffffff8140eef0 r __ksymtab_class_find_device
ffffffff8140ef00 r __ksymtab_class_for_each_device
ffffffff8140ef10 r __ksymtab_class_interface_register
ffffffff8140ef20 r __ksymtab_class_interface_unregister
ffffffff8140ef30 r __ksymtab_class_remove_file
ffffffff8140ef40 r __ksymtab_class_unregister
ffffffff8140ef50 r __ksymtab_cleanup_srcu_struct
ffffffff8140ef60 r __ksymtab_clflush_cache_range
ffffffff8140ef70 r __ksymtab_clockevent_delta2ns
ffffffff8140ef80 r __ksymtab_clockevents_notify
ffffffff8140ef90 r __ksymtab_clockevents_register_device
ffffffff8140efa0 r __ksymtab_compat_alloc_user_space
ffffffff8140efb0 r __ksymtab_compat_get_timespec
ffffffff8140efc0 r __ksymtab_compat_get_timeval
ffffffff8140efd0 r __ksymtab_compat_put_timespec
ffffffff8140efe0 r __ksymtab_compat_put_timeval
ffffffff8140eff0 r __ksymtab_con_debug_enter
ffffffff8140f000 r __ksymtab_con_debug_leave
ffffffff8140f010 r __ksymtab_console_drivers
ffffffff8140f020 r __ksymtab_copy_from_user_nmi
ffffffff8140f030 r __ksymtab_cper_next_record_id
ffffffff8140f040 r __ksymtab_cper_severity_to_aer
ffffffff8140f050 r __ksymtab_cpu_bit_bitmap
ffffffff8140f060 r __ksymtab_cpu_clock
ffffffff8140f070 r __ksymtab_cpu_has_amd_erratum
ffffffff8140f080 r __ksymtab_cpu_idle_wait
ffffffff8140f090 r __ksymtab_cpu_is_hotpluggable
ffffffff8140f0a0 r __ksymtab_cpu_subsys
ffffffff8140f0b0 r __ksymtab_cpu_up
ffffffff8140f0c0 r __ksymtab_cpufreq_cpu_get
ffffffff8140f0d0 r __ksymtab_cpufreq_cpu_put
ffffffff8140f0e0 r __ksymtab_cpufreq_driver_target
ffffffff8140f0f0 r __ksymtab_cpufreq_freq_attr_scaling_available_freqs
ffffffff8140f100 r __ksymtab_cpufreq_frequency_get_table
ffffffff8140f110 r __ksymtab_cpufreq_frequency_table_cpuinfo
ffffffff8140f120 r __ksymtab_cpufreq_frequency_table_get_attr
ffffffff8140f130 r __ksymtab_cpufreq_frequency_table_put_attr
ffffffff8140f140 r __ksymtab_cpufreq_frequency_table_target
ffffffff8140f150 r __ksymtab_cpufreq_frequency_table_verify
ffffffff8140f160 r __ksymtab_cpufreq_notify_transition
ffffffff8140f170 r __ksymtab_cpufreq_register_driver
ffffffff8140f180 r __ksymtab_cpufreq_register_governor
ffffffff8140f190 r __ksymtab_cpufreq_unregister_driver
ffffffff8140f1a0 r __ksymtab_cpufreq_unregister_governor
ffffffff8140f1b0 r __ksymtab_cpuidle_disable_device
ffffffff8140f1c0 r __ksymtab_cpuidle_enable_device
ffffffff8140f1d0 r __ksymtab_cpuidle_get_driver
ffffffff8140f1e0 r __ksymtab_cpuidle_pause_and_lock
ffffffff8140f1f0 r __ksymtab_cpuidle_register_device
ffffffff8140f200 r __ksymtab_cpuidle_register_driver
ffffffff8140f210 r __ksymtab_cpuidle_resume_and_unlock
ffffffff8140f220 r __ksymtab_cpuidle_unregister_device
ffffffff8140f230 r __ksymtab_cpuidle_unregister_driver
ffffffff8140f240 r __ksymtab_cpuset_mem_spread_node
ffffffff8140f250 r __ksymtab_cred_to_ucred
ffffffff8140f260 r __ksymtab_crypto_ablkcipher_type
ffffffff8140f270 r __ksymtab_crypto_aead_setauthsize
ffffffff8140f280 r __ksymtab_crypto_aead_type
ffffffff8140f290 r __ksymtab_crypto_aes_expand_key
ffffffff8140f2a0 r __ksymtab_crypto_aes_set_key
ffffffff8140f2b0 r __ksymtab_crypto_ahash_digest
ffffffff8140f2c0 r __ksymtab_crypto_ahash_final
ffffffff8140f2d0 r __ksymtab_crypto_ahash_finup
ffffffff8140f2e0 r __ksymtab_crypto_ahash_setkey
ffffffff8140f2f0 r __ksymtab_crypto_ahash_type
ffffffff8140f300 r __ksymtab_crypto_alg_list
ffffffff8140f310 r __ksymtab_crypto_alg_lookup
ffffffff8140f320 r __ksymtab_crypto_alg_mod_lookup
ffffffff8140f330 r __ksymtab_crypto_alg_sem
ffffffff8140f340 r __ksymtab_crypto_alg_tested
ffffffff8140f350 r __ksymtab_crypto_alloc_ablkcipher
ffffffff8140f360 r __ksymtab_crypto_alloc_aead
ffffffff8140f370 r __ksymtab_crypto_alloc_ahash
ffffffff8140f380 r __ksymtab_crypto_alloc_base
ffffffff8140f390 r __ksymtab_crypto_alloc_instance
ffffffff8140f3a0 r __ksymtab_crypto_alloc_instance2
ffffffff8140f3b0 r __ksymtab_crypto_alloc_pcomp
ffffffff8140f3c0 r __ksymtab_crypto_alloc_shash
ffffffff8140f3d0 r __ksymtab_crypto_alloc_tfm
ffffffff8140f3e0 r __ksymtab_crypto_attr_alg2
ffffffff8140f3f0 r __ksymtab_crypto_attr_alg_name
ffffffff8140f400 r __ksymtab_crypto_attr_u32
ffffffff8140f410 r __ksymtab_crypto_blkcipher_type
ffffffff8140f420 r __ksymtab_crypto_chain
ffffffff8140f430 r __ksymtab_crypto_check_attr_type
ffffffff8140f440 r __ksymtab_crypto_create_tfm
ffffffff8140f450 r __ksymtab_crypto_default_rng
ffffffff8140f460 r __ksymtab_crypto_dequeue_request
ffffffff8140f470 r __ksymtab_crypto_destroy_tfm
ffffffff8140f480 r __ksymtab_crypto_drop_spawn
ffffffff8140f490 r __ksymtab_crypto_enqueue_request
ffffffff8140f4a0 r __ksymtab_crypto_find_alg
ffffffff8140f4b0 r __ksymtab_crypto_fl_tab
ffffffff8140f4c0 r __ksymtab_crypto_ft_tab
ffffffff8140f4d0 r __ksymtab_crypto_get_attr_type
ffffffff8140f4e0 r __ksymtab_crypto_get_default_rng
ffffffff8140f4f0 r __ksymtab_crypto_givcipher_type
ffffffff8140f500 r __ksymtab_crypto_grab_aead
ffffffff8140f510 r __ksymtab_crypto_grab_skcipher
ffffffff8140f520 r __ksymtab_crypto_has_alg
ffffffff8140f530 r __ksymtab_crypto_hash_walk_done
ffffffff8140f540 r __ksymtab_crypto_hash_walk_first
ffffffff8140f550 r __ksymtab_crypto_il_tab
ffffffff8140f560 r __ksymtab_crypto_inc
ffffffff8140f570 r __ksymtab_crypto_init_ahash_spawn
ffffffff8140f580 r __ksymtab_crypto_init_queue
ffffffff8140f590 r __ksymtab_crypto_init_shash_spawn
ffffffff8140f5a0 r __ksymtab_crypto_init_spawn
ffffffff8140f5b0 r __ksymtab_crypto_init_spawn2
ffffffff8140f5c0 r __ksymtab_crypto_it_tab
ffffffff8140f5d0 r __ksymtab_crypto_larval_alloc
ffffffff8140f5e0 r __ksymtab_crypto_larval_error
ffffffff8140f5f0 r __ksymtab_crypto_larval_kill
ffffffff8140f600 r __ksymtab_crypto_larval_lookup
ffffffff8140f610 r __ksymtab_crypto_lookup_aead
ffffffff8140f620 r __ksymtab_crypto_lookup_skcipher
ffffffff8140f630 r __ksymtab_crypto_lookup_template
ffffffff8140f640 r __ksymtab_crypto_mod_get
ffffffff8140f650 r __ksymtab_crypto_mod_put
ffffffff8140f660 r __ksymtab_crypto_nivaead_type
ffffffff8140f670 r __ksymtab_crypto_probing_notify
ffffffff8140f680 r __ksymtab_crypto_put_default_rng
ffffffff8140f690 r __ksymtab_crypto_register_ahash
ffffffff8140f6a0 r __ksymtab_crypto_register_alg
ffffffff8140f6b0 r __ksymtab_crypto_register_algs
ffffffff8140f6c0 r __ksymtab_crypto_register_instance
ffffffff8140f6d0 r __ksymtab_crypto_register_notifier
ffffffff8140f6e0 r __ksymtab_crypto_register_pcomp
ffffffff8140f6f0 r __ksymtab_crypto_register_shash
ffffffff8140f700 r __ksymtab_crypto_register_template
ffffffff8140f710 r __ksymtab_crypto_remove_final
ffffffff8140f720 r __ksymtab_crypto_remove_spawns
ffffffff8140f730 r __ksymtab_crypto_rng_type
ffffffff8140f740 r __ksymtab_crypto_shash_digest
ffffffff8140f750 r __ksymtab_crypto_shash_final
ffffffff8140f760 r __ksymtab_crypto_shash_finup
ffffffff8140f770 r __ksymtab_crypto_shash_setkey
ffffffff8140f780 r __ksymtab_crypto_shash_update
ffffffff8140f790 r __ksymtab_crypto_shoot_alg
ffffffff8140f7a0 r __ksymtab_crypto_spawn_tfm
ffffffff8140f7b0 r __ksymtab_crypto_spawn_tfm2
ffffffff8140f7c0 r __ksymtab_crypto_tfm_in_queue
ffffffff8140f7d0 r __ksymtab_crypto_unregister_ahash
ffffffff8140f7e0 r __ksymtab_crypto_unregister_alg
ffffffff8140f7f0 r __ksymtab_crypto_unregister_algs
ffffffff8140f800 r __ksymtab_crypto_unregister_instance
ffffffff8140f810 r __ksymtab_crypto_unregister_notifier
ffffffff8140f820 r __ksymtab_crypto_unregister_pcomp
ffffffff8140f830 r __ksymtab_crypto_unregister_shash
ffffffff8140f840 r __ksymtab_crypto_unregister_template
ffffffff8140f850 r __ksymtab_crypto_xor
ffffffff8140f860 r __ksymtab_css_depth
ffffffff8140f870 r __ksymtab_css_id
ffffffff8140f880 r __ksymtab_css_lookup
ffffffff8140f890 r __ksymtab_d_materialise_unique
ffffffff8140f8a0 r __ksymtab_debug_locks
ffffffff8140f8b0 r __ksymtab_default_backing_dev_info
ffffffff8140f8c0 r __ksymtab_delayacct_on
ffffffff8140f8d0 r __ksymtab_dequeue_signal
ffffffff8140f8e0 r __ksymtab_destroy_workqueue
ffffffff8140f8f0 r __ksymtab_dev_attr_em_message
ffffffff8140f900 r __ksymtab_dev_attr_em_message_type
ffffffff8140f910 r __ksymtab_dev_attr_link_power_management_policy
ffffffff8140f920 r __ksymtab_dev_attr_sw_activity
ffffffff8140f930 r __ksymtab_dev_attr_unload_heads
ffffffff8140f940 r __ksymtab_dev_change_net_namespace
ffffffff8140f950 r __ksymtab_dev_forward_skb
ffffffff8140f960 r __ksymtab_dev_pm_get_subsys_data
ffffffff8140f970 r __ksymtab_dev_pm_put_subsys_data
ffffffff8140f980 r __ksymtab_dev_pm_qos_add_ancestor_request
ffffffff8140f990 r __ksymtab_dev_pm_qos_add_global_notifier
ffffffff8140f9a0 r __ksymtab_dev_pm_qos_add_notifier
ffffffff8140f9b0 r __ksymtab_dev_pm_qos_add_request
ffffffff8140f9c0 r __ksymtab_dev_pm_qos_expose_latency_limit
ffffffff8140f9d0 r __ksymtab_dev_pm_qos_hide_latency_limit
ffffffff8140f9e0 r __ksymtab_dev_pm_qos_remove_global_notifier
ffffffff8140f9f0 r __ksymtab_dev_pm_qos_remove_notifier
ffffffff8140fa00 r __ksymtab_dev_pm_qos_remove_request
ffffffff8140fa10 r __ksymtab_dev_pm_qos_update_request
ffffffff8140fa20 r __ksymtab_dev_set_name
ffffffff8140fa30 r __ksymtab_device_add
ffffffff8140fa40 r __ksymtab_device_attach
ffffffff8140fa50 r __ksymtab_device_bind_driver
ffffffff8140fa60 r __ksymtab_device_create
ffffffff8140fa70 r __ksymtab_device_create_bin_file
ffffffff8140fa80 r __ksymtab_device_create_file
ffffffff8140fa90 r __ksymtab_device_create_vargs
ffffffff8140faa0 r __ksymtab_device_del
ffffffff8140fab0 r __ksymtab_device_destroy
ffffffff8140fac0 r __ksymtab_device_find_child
ffffffff8140fad0 r __ksymtab_device_for_each_child
ffffffff8140fae0 r __ksymtab_device_init_wakeup
ffffffff8140faf0 r __ksymtab_device_initialize
ffffffff8140fb00 r __ksymtab_device_move
ffffffff8140fb10 r __ksymtab_device_pm_wait_for_dev
ffffffff8140fb20 r __ksymtab_device_register
ffffffff8140fb30 r __ksymtab_device_release_driver
ffffffff8140fb40 r __ksymtab_device_remove_bin_file
ffffffff8140fb50 r __ksymtab_device_remove_file
ffffffff8140fb60 r __ksymtab_device_rename
ffffffff8140fb70 r __ksymtab_device_reprobe
ffffffff8140fb80 r __ksymtab_device_schedule_callback_owner
ffffffff8140fb90 r __ksymtab_device_set_wakeup_capable
ffffffff8140fba0 r __ksymtab_device_set_wakeup_enable
ffffffff8140fbb0 r __ksymtab_device_show_int
ffffffff8140fbc0 r __ksymtab_device_show_ulong
ffffffff8140fbd0 r __ksymtab_device_store_int
ffffffff8140fbe0 r __ksymtab_device_store_ulong
ffffffff8140fbf0 r __ksymtab_device_unregister
ffffffff8140fc00 r __ksymtab_device_wakeup_disable
ffffffff8140fc10 r __ksymtab_device_wakeup_enable
ffffffff8140fc20 r __ksymtab_devm_kfree
ffffffff8140fc30 r __ksymtab_devm_kzalloc
ffffffff8140fc40 r __ksymtab_devres_add
ffffffff8140fc50 r __ksymtab_devres_alloc
ffffffff8140fc60 r __ksymtab_devres_close_group
ffffffff8140fc70 r __ksymtab_devres_destroy
ffffffff8140fc80 r __ksymtab_devres_find
ffffffff8140fc90 r __ksymtab_devres_free
ffffffff8140fca0 r __ksymtab_devres_get
ffffffff8140fcb0 r __ksymtab_devres_open_group
ffffffff8140fcc0 r __ksymtab_devres_release_group
ffffffff8140fcd0 r __ksymtab_devres_remove
ffffffff8140fce0 r __ksymtab_devres_remove_group
ffffffff8140fcf0 r __ksymtab_dio_end_io
ffffffff8140fd00 r __ksymtab_dirty_writeback_interval
ffffffff8140fd10 r __ksymtab_disk_get_part
ffffffff8140fd20 r __ksymtab_disk_map_sector_rcu
ffffffff8140fd30 r __ksymtab_disk_part_iter_exit
ffffffff8140fd40 r __ksymtab_disk_part_iter_init
ffffffff8140fd50 r __ksymtab_disk_part_iter_next
ffffffff8140fd60 r __ksymtab_dma_get_required_mask
ffffffff8140fd70 r __ksymtab_dmi_match
ffffffff8140fd80 r __ksymtab_dmi_walk
ffffffff8140fd90 r __ksymtab_do_exit
ffffffff8140fda0 r __ksymtab_do_machine_check
ffffffff8140fdb0 r __ksymtab_do_trace_rcu_torture_read
ffffffff8140fdc0 r __ksymtab_dpm_resume_end
ffffffff8140fdd0 r __ksymtab_dpm_resume_start
ffffffff8140fde0 r __ksymtab_dpm_suspend_end
ffffffff8140fdf0 r __ksymtab_dpm_suspend_start
ffffffff8140fe00 r __ksymtab_drain_workqueue
ffffffff8140fe10 r __ksymtab_driver_attach
ffffffff8140fe20 r __ksymtab_driver_create_file
ffffffff8140fe30 r __ksymtab_driver_find
ffffffff8140fe40 r __ksymtab_driver_find_device
ffffffff8140fe50 r __ksymtab_driver_for_each_device
ffffffff8140fe60 r __ksymtab_driver_register
ffffffff8140fe70 r __ksymtab_driver_remove_file
ffffffff8140fe80 r __ksymtab_driver_unregister
ffffffff8140fe90 r __ksymtab_e820_any_mapped
ffffffff8140fea0 r __ksymtab_each_symbol_section
ffffffff8140feb0 r __ksymtab_edac_atomic_assert_error
ffffffff8140fec0 r __ksymtab_edac_err_assert
ffffffff8140fed0 r __ksymtab_edac_get_sysfs_subsys
ffffffff8140fee0 r __ksymtab_edac_handler_set
ffffffff8140fef0 r __ksymtab_edac_handlers
ffffffff8140ff00 r __ksymtab_edac_op_state
ffffffff8140ff10 r __ksymtab_edac_put_sysfs_subsys
ffffffff8140ff20 r __ksymtab_edac_subsys
ffffffff8140ff30 r __ksymtab_edid_info
ffffffff8140ff40 r __ksymtab_elv_register
ffffffff8140ff50 r __ksymtab_elv_unregister
ffffffff8140ff60 r __ksymtab_emergency_restart
ffffffff8140ff70 r __ksymtab_erst_clear
ffffffff8140ff80 r __ksymtab_erst_disable
ffffffff8140ff90 r __ksymtab_erst_get_record_count
ffffffff8140ffa0 r __ksymtab_erst_get_record_id_begin
ffffffff8140ffb0 r __ksymtab_erst_get_record_id_end
ffffffff8140ffc0 r __ksymtab_erst_get_record_id_next
ffffffff8140ffd0 r __ksymtab_erst_read
ffffffff8140ffe0 r __ksymtab_erst_write
ffffffff8140fff0 r __ksymtab_eventfd_ctx_fdget
ffffffff81410000 r __ksymtab_eventfd_ctx_fileget
ffffffff81410010 r __ksymtab_eventfd_ctx_get
ffffffff81410020 r __ksymtab_eventfd_ctx_put
ffffffff81410030 r __ksymtab_eventfd_ctx_read
ffffffff81410040 r __ksymtab_eventfd_ctx_remove_wait_queue
ffffffff81410050 r __ksymtab_eventfd_fget
ffffffff81410060 r __ksymtab_eventfd_signal
ffffffff81410070 r __ksymtab_evtchn_get
ffffffff81410080 r __ksymtab_evtchn_make_refcounted
ffffffff81410090 r __ksymtab_evtchn_put
ffffffff814100a0 r __ksymtab_execute_in_process_context
ffffffff814100b0 r __ksymtab_fb_destroy_modelist
ffffffff814100c0 r __ksymtab_fb_find_logo
ffffffff814100d0 r __ksymtab_fb_mode_option
ffffffff814100e0 r __ksymtab_fb_notifier_call_chain
ffffffff814100f0 r __ksymtab_fib_table_lookup
ffffffff81410100 r __ksymtab_file_ra_state_init
ffffffff81410110 r __ksymtab_find_get_pid
ffffffff81410120 r __ksymtab_find_module
ffffffff81410130 r __ksymtab_find_pid_ns
ffffffff81410140 r __ksymtab_find_symbol
ffffffff81410150 r __ksymtab_find_vpid
ffffffff81410160 r __ksymtab_firmware_kobj
ffffffff81410170 r __ksymtab_flush_kthread_work
ffffffff81410180 r __ksymtab_flush_kthread_worker
ffffffff81410190 r __ksymtab_flush_work
ffffffff814101a0 r __ksymtab_flush_work_sync
ffffffff814101b0 r __ksymtab_flush_workqueue
ffffffff814101c0 r __ksymtab_fpu_finit
ffffffff814101d0 r __ksymtab_free_css_id
ffffffff814101e0 r __ksymtab_free_percpu
ffffffff814101f0 r __ksymtab_free_vm_area
ffffffff81410200 r __ksymtab_fs_kobj
ffffffff81410210 r __ksymtab_fsnotify
ffffffff81410220 r __ksymtab_fsnotify_get_cookie
ffffffff81410230 r __ksymtab_fsstack_copy_attr_all
ffffffff81410240 r __ksymtab_fsstack_copy_inode_size
ffffffff81410250 r __ksymtab_gcd
ffffffff81410260 r __ksymtab_gdt_page
ffffffff81410270 r __ksymtab_gen_pool_avail
ffffffff81410280 r __ksymtab_gen_pool_size
ffffffff81410290 r __ksymtab_generic_fh_to_dentry
ffffffff814102a0 r __ksymtab_generic_fh_to_parent
ffffffff814102b0 r __ksymtab_generic_handle_irq
ffffffff814102c0 r __ksymtab_get_compat_timespec
ffffffff814102d0 r __ksymtab_get_compat_timeval
ffffffff814102e0 r __ksymtab_get_cpu_device
ffffffff814102f0 r __ksymtab_get_current_tty
ffffffff81410300 r __ksymtab_get_device
ffffffff81410310 r __ksymtab_get_max_files
ffffffff81410320 r __ksymtab_get_monotonic_boottime
ffffffff81410330 r __ksymtab_get_net_ns_by_pid
ffffffff81410340 r __ksymtab_get_online_cpus
ffffffff81410350 r __ksymtab_get_phys_to_machine
ffffffff81410360 r __ksymtab_get_pid_task
ffffffff81410370 r __ksymtab_get_task_comm
ffffffff81410380 r __ksymtab_get_task_mm
ffffffff81410390 r __ksymtab_get_task_pid
ffffffff814103a0 r __ksymtab_get_user_pages_fast
ffffffff814103b0 r __ksymtab_getboottime
ffffffff814103c0 r __ksymtab_gnttab_alloc_grant_references
ffffffff814103d0 r __ksymtab_gnttab_cancel_free_callback
ffffffff814103e0 r __ksymtab_gnttab_claim_grant_reference
ffffffff814103f0 r __ksymtab_gnttab_empty_grant_references
ffffffff81410400 r __ksymtab_gnttab_end_foreign_access
ffffffff81410410 r __ksymtab_gnttab_end_foreign_access_ref
ffffffff81410420 r __ksymtab_gnttab_end_foreign_transfer
ffffffff81410430 r __ksymtab_gnttab_end_foreign_transfer_ref
ffffffff81410440 r __ksymtab_gnttab_free_grant_reference
ffffffff81410450 r __ksymtab_gnttab_free_grant_references
ffffffff81410460 r __ksymtab_gnttab_grant_foreign_access
ffffffff81410470 r __ksymtab_gnttab_grant_foreign_access_ref
ffffffff81410480 r __ksymtab_gnttab_grant_foreign_access_subpage
ffffffff81410490 r __ksymtab_gnttab_grant_foreign_access_subpage_ref
ffffffff814104a0 r __ksymtab_gnttab_grant_foreign_access_trans
ffffffff814104b0 r __ksymtab_gnttab_grant_foreign_access_trans_ref
ffffffff814104c0 r __ksymtab_gnttab_grant_foreign_transfer
ffffffff814104d0 r __ksymtab_gnttab_grant_foreign_transfer_ref
ffffffff814104e0 r __ksymtab_gnttab_init
ffffffff814104f0 r __ksymtab_gnttab_map_refs
ffffffff81410500 r __ksymtab_gnttab_max_grant_frames
ffffffff81410510 r __ksymtab_gnttab_query_foreign_access
ffffffff81410520 r __ksymtab_gnttab_release_grant_reference
ffffffff81410530 r __ksymtab_gnttab_request_free_callback
ffffffff81410540 r __ksymtab_gnttab_subpage_grants_available
ffffffff81410550 r __ksymtab_gnttab_trans_grants_available
ffffffff81410560 r __ksymtab_gnttab_unmap_refs
ffffffff81410570 r __ksymtab_handle_level_irq
ffffffff81410580 r __ksymtab_handle_nested_irq
ffffffff81410590 r __ksymtab_handle_simple_irq
ffffffff814105a0 r __ksymtab_hest_disable
ffffffff814105b0 r __ksymtab_hid_add_device
ffffffff814105c0 r __ksymtab_hid_allocate_device
ffffffff814105d0 r __ksymtab_hid_check_keys_pressed
ffffffff814105e0 r __ksymtab_hid_connect
ffffffff814105f0 r __ksymtab_hid_debug
ffffffff81410600 r __ksymtab_hid_destroy_device
ffffffff81410610 r __ksymtab_hid_disconnect
ffffffff81410620 r __ksymtab_hid_input_report
ffffffff81410630 r __ksymtab_hid_output_report
ffffffff81410640 r __ksymtab_hid_parse_report
ffffffff81410650 r __ksymtab_hid_register_report
ffffffff81410660 r __ksymtab_hid_report_raw_event
ffffffff81410670 r __ksymtab_hid_set_field
ffffffff81410680 r __ksymtab_hid_unregister_driver
ffffffff81410690 r __ksymtab_hidinput_connect
ffffffff814106a0 r __ksymtab_hidinput_count_leds
ffffffff814106b0 r __ksymtab_hidinput_disconnect
ffffffff814106c0 r __ksymtab_hidinput_find_field
ffffffff814106d0 r __ksymtab_hidinput_get_led_field
ffffffff814106e0 r __ksymtab_hidinput_report_event
ffffffff814106f0 r __ksymtab_hpet_mask_rtc_irq_bit
ffffffff81410700 r __ksymtab_hpet_register_irq_handler
ffffffff81410710 r __ksymtab_hpet_rtc_dropped_irq
ffffffff81410720 r __ksymtab_hpet_rtc_interrupt
ffffffff81410730 r __ksymtab_hpet_rtc_timer_init
ffffffff81410740 r __ksymtab_hpet_set_alarm_time
ffffffff81410750 r __ksymtab_hpet_set_periodic_freq
ffffffff81410760 r __ksymtab_hpet_set_rtc_irq_bit
ffffffff81410770 r __ksymtab_hpet_unregister_irq_handler
ffffffff81410780 r __ksymtab_hrtimer_cancel
ffffffff81410790 r __ksymtab_hrtimer_forward
ffffffff814107a0 r __ksymtab_hrtimer_get_remaining
ffffffff814107b0 r __ksymtab_hrtimer_get_res
ffffffff814107c0 r __ksymtab_hrtimer_init
ffffffff814107d0 r __ksymtab_hrtimer_init_sleeper
ffffffff814107e0 r __ksymtab_hrtimer_start
ffffffff814107f0 r __ksymtab_hrtimer_start_range_ns
ffffffff81410800 r __ksymtab_hrtimer_try_to_cancel
ffffffff81410810 r __ksymtab_hvc_alloc
ffffffff81410820 r __ksymtab_hvc_instantiate
ffffffff81410830 r __ksymtab_hvc_kick
ffffffff81410840 r __ksymtab_hvc_poll
ffffffff81410850 r __ksymtab_hvc_remove
ffffffff81410860 r __ksymtab_hw_breakpoint_restore
ffffffff81410870 r __ksymtab_hwpoison_filter
ffffffff81410880 r __ksymtab_hypercall_page
ffffffff81410890 r __ksymtab_hypervisor_kobj
ffffffff814108a0 r __ksymtab_ibft_addr
ffffffff814108b0 r __ksymtab_idle_notifier_register
ffffffff814108c0 r __ksymtab_idle_notifier_unregister
ffffffff814108d0 r __ksymtab_inet_csk_addr2sockaddr
ffffffff814108e0 r __ksymtab_inet_csk_bind_conflict
ffffffff814108f0 r __ksymtab_inet_csk_clone_lock
ffffffff81410900 r __ksymtab_inet_csk_compat_getsockopt
ffffffff81410910 r __ksymtab_inet_csk_compat_setsockopt
ffffffff81410920 r __ksymtab_inet_csk_get_port
ffffffff81410930 r __ksymtab_inet_csk_listen_start
ffffffff81410940 r __ksymtab_inet_csk_listen_stop
ffffffff81410950 r __ksymtab_inet_csk_reqsk_queue_hash_add
ffffffff81410960 r __ksymtab_inet_csk_reqsk_queue_prune
ffffffff81410970 r __ksymtab_inet_csk_route_child_sock
ffffffff81410980 r __ksymtab_inet_csk_route_req
ffffffff81410990 r __ksymtab_inet_csk_search_req
ffffffff814109a0 r __ksymtab_inet_ctl_sock_create
ffffffff814109b0 r __ksymtab_inet_getpeer
ffffffff814109c0 r __ksymtab_inet_hash
ffffffff814109d0 r __ksymtab_inet_hash_connect
ffffffff814109e0 r __ksymtab_inet_hashinfo_init
ffffffff814109f0 r __ksymtab_inet_putpeer
ffffffff81410a00 r __ksymtab_inet_twdr_hangman
ffffffff81410a10 r __ksymtab_inet_twdr_twcal_tick
ffffffff81410a20 r __ksymtab_inet_twdr_twkill_work
ffffffff81410a30 r __ksymtab_inet_twsk_alloc
ffffffff81410a40 r __ksymtab_inet_twsk_purge
ffffffff81410a50 r __ksymtab_inet_twsk_put
ffffffff81410a60 r __ksymtab_inet_twsk_schedule
ffffffff81410a70 r __ksymtab_inet_unhash
ffffffff81410a80 r __ksymtab_init_dummy_netdev
ffffffff81410a90 r __ksymtab_init_fpu
ffffffff81410aa0 r __ksymtab_init_pid_ns
ffffffff81410ab0 r __ksymtab_init_srcu_struct
ffffffff81410ac0 r __ksymtab_init_user_ns
ffffffff81410ad0 r __ksymtab_init_uts_ns
ffffffff81410ae0 r __ksymtab_injectm
ffffffff81410af0 r __ksymtab_inode_sb_list_add
ffffffff81410b00 r __ksymtab_input_class
ffffffff81410b10 r __ksymtab_input_event_from_user
ffffffff81410b20 r __ksymtab_input_event_to_user
ffffffff81410b30 r __ksymtab_input_ff_create
ffffffff81410b40 r __ksymtab_input_ff_destroy
ffffffff81410b50 r __ksymtab_input_ff_effect_from_user
ffffffff81410b60 r __ksymtab_input_ff_erase
ffffffff81410b70 r __ksymtab_input_ff_event
ffffffff81410b80 r __ksymtab_input_ff_upload
ffffffff81410b90 r __ksymtab_invalidate_bh_lrus
ffffffff81410ba0 r __ksymtab_invalidate_inode_pages2
ffffffff81410bb0 r __ksymtab_invalidate_inode_pages2_range
ffffffff81410bc0 r __ksymtab_inverse_translate
ffffffff81410bd0 r __ksymtab_iommu_attach_device
ffffffff81410be0 r __ksymtab_iommu_detach_device
ffffffff81410bf0 r __ksymtab_iommu_device_group
ffffffff81410c00 r __ksymtab_iommu_domain_alloc
ffffffff81410c10 r __ksymtab_iommu_domain_free
ffffffff81410c20 r __ksymtab_iommu_domain_has_cap
ffffffff81410c30 r __ksymtab_iommu_iova_to_phys
ffffffff81410c40 r __ksymtab_iommu_map
ffffffff81410c50 r __ksymtab_iommu_present
ffffffff81410c60 r __ksymtab_iommu_set_fault_handler
ffffffff81410c70 r __ksymtab_iommu_unmap
ffffffff81410c80 r __ksymtab_ioremap_page_range
ffffffff81410c90 r __ksymtab_ip_build_and_send_pkt
ffffffff81410ca0 r __ksymtab_ip_local_out
ffffffff81410cb0 r __ksymtab_ip_route_output_flow
ffffffff81410cc0 r __ksymtab_irq_free_descs
ffffffff81410cd0 r __ksymtab_irq_from_evtchn
ffffffff81410ce0 r __ksymtab_irq_get_irq_data
ffffffff81410cf0 r __ksymtab_irq_modify_status
ffffffff81410d00 r __ksymtab_irq_set_affinity_hint
ffffffff81410d10 r __ksymtab_irq_set_affinity_notifier
ffffffff81410d20 r __ksymtab_irq_work_queue
ffffffff81410d30 r __ksymtab_irq_work_run
ffffffff81410d40 r __ksymtab_irq_work_sync
ffffffff81410d50 r __ksymtab_is_hpet_enabled
ffffffff81410d60 r __ksymtab_kallsyms_lookup_name
ffffffff81410d70 r __ksymtab_kallsyms_on_each_symbol
ffffffff81410d80 r __ksymtab_kcrypto_wq
ffffffff81410d90 r __ksymtab_kern_mount_data
ffffffff81410da0 r __ksymtab_kernel_halt
ffffffff81410db0 r __ksymtab_kernel_kobj
ffffffff81410dc0 r __ksymtab_kernel_power_off
ffffffff81410dd0 r __ksymtab_kernel_restart
ffffffff81410de0 r __ksymtab_kfree_call_rcu
ffffffff81410df0 r __ksymtab_kick_process
ffffffff81410e00 r __ksymtab_kill_pid_info_as_cred
ffffffff81410e10 r __ksymtab_klist_add_after
ffffffff81410e20 r __ksymtab_klist_add_before
ffffffff81410e30 r __ksymtab_klist_add_head
ffffffff81410e40 r __ksymtab_klist_add_tail
ffffffff81410e50 r __ksymtab_klist_del
ffffffff81410e60 r __ksymtab_klist_init
ffffffff81410e70 r __ksymtab_klist_iter_exit
ffffffff81410e80 r __ksymtab_klist_iter_init
ffffffff81410e90 r __ksymtab_klist_iter_init_node
ffffffff81410ea0 r __ksymtab_klist_next
ffffffff81410eb0 r __ksymtab_klist_node_attached
ffffffff81410ec0 r __ksymtab_klist_remove
ffffffff81410ed0 r __ksymtab_kmsg_dump_register
ffffffff81410ee0 r __ksymtab_kmsg_dump_unregister
ffffffff81410ef0 r __ksymtab_kobject_create_and_add
ffffffff81410f00 r __ksymtab_kobject_get_path
ffffffff81410f10 r __ksymtab_kobject_init_and_add
ffffffff81410f20 r __ksymtab_kobject_rename
ffffffff81410f30 r __ksymtab_kobject_uevent
ffffffff81410f40 r __ksymtab_kobject_uevent_env
ffffffff81410f50 r __ksymtab_kset_create_and_add
ffffffff81410f60 r __ksymtab_kthread_freezable_should_stop
ffffffff81410f70 r __ksymtab_kthread_worker_fn
ffffffff81410f80 r __ksymtab_ktime_add_safe
ffffffff81410f90 r __ksymtab_ktime_get
ffffffff81410fa0 r __ksymtab_ktime_get_boottime
ffffffff81410fb0 r __ksymtab_ktime_get_monotonic_offset
ffffffff81410fc0 r __ksymtab_ktime_get_real
ffffffff81410fd0 r __ksymtab_ktime_get_ts
ffffffff81410fe0 r __ksymtab_lcm
ffffffff81410ff0 r __ksymtab_leave_mm
ffffffff81411000 r __ksymtab_llist_add_batch
ffffffff81411010 r __ksymtab_llist_del_first
ffffffff81411020 r __ksymtab_local_apic_timer_c2_ok
ffffffff81411030 r __ksymtab_local_clock
ffffffff81411040 r __ksymtab_lock_flocks
ffffffff81411050 r __ksymtab_locks_alloc_lock
ffffffff81411060 r __ksymtab_locks_release_private
ffffffff81411070 r __ksymtab_lookup_address
ffffffff81411080 r __ksymtab_lookup_instantiate_filp
ffffffff81411090 r __ksymtab_lzo1x_decompress_safe
ffffffff814110a0 r __ksymtab_m2p_add_override
ffffffff814110b0 r __ksymtab_m2p_find_override_pfn
ffffffff814110c0 r __ksymtab_m2p_remove_override
ffffffff814110d0 r __ksymtab_machine_check_poll
ffffffff814110e0 r __ksymtab_map_vm_area
ffffffff814110f0 r __ksymtab_mark_mounts_for_expiry
ffffffff81411100 r __ksymtab_mark_tsc_unstable
ffffffff81411110 r __ksymtab_math_state_restore
ffffffff81411120 r __ksymtab_mce_notify_irq
ffffffff81411130 r __ksymtab_mce_register_decode_chain
ffffffff81411140 r __ksymtab_mce_unregister_decode_chain
ffffffff81411150 r __ksymtab_memory_failure
ffffffff81411160 r __ksymtab_memory_failure_queue
ffffffff81411170 r __ksymtab_mm_kobj
ffffffff81411180 r __ksymtab_mmput
ffffffff81411190 r __ksymtab_mmu_notifier_register
ffffffff814111a0 r __ksymtab_mmu_notifier_unregister
ffffffff814111b0 r __ksymtab_mnt_clone_write
ffffffff814111c0 r __ksymtab_mnt_drop_write
ffffffff814111d0 r __ksymtab_mnt_want_write
ffffffff814111e0 r __ksymtab_mnt_want_write_file
ffffffff814111f0 r __ksymtab_modify_user_hw_breakpoint
ffffffff81411200 r __ksymtab_module_mutex
ffffffff81411210 r __ksymtab_monotonic_to_bootbased
ffffffff81411220 r __ksymtab_ms_hyperv
ffffffff81411230 r __ksymtab_mtrr_state
ffffffff81411240 r __ksymtab_n_tty_inherit_ops
ffffffff81411250 r __ksymtab_net_cls_subsys_id
ffffffff81411260 r __ksymtab_net_ipv4_ctl_path
ffffffff81411270 r __ksymtab_net_namespace_list
ffffffff81411280 r __ksymtab_net_ns_type_operations
ffffffff81411290 r __ksymtab_net_prio_subsys_id
ffffffff814112a0 r __ksymtab_netdev_rx_handler_register
ffffffff814112b0 r __ksymtab_netdev_rx_handler_unregister
ffffffff814112c0 r __ksymtab_netlink_has_listeners
ffffffff814112d0 r __ksymtab_noop_backing_dev_info
ffffffff814112e0 r __ksymtab_notify_remote_via_irq
ffffffff814112f0 r __ksymtab_nr_free_buffer_pages
ffffffff81411300 r __ksymtab_nr_irqs
ffffffff81411310 r __ksymtab_oops_begin
ffffffff81411320 r __ksymtab_orderly_poweroff
ffffffff81411330 r __ksymtab_page_cache_async_readahead
ffffffff81411340 r __ksymtab_page_cache_sync_readahead
ffffffff81411350 r __ksymtab_page_mkclean
ffffffff81411360 r __ksymtab_panic_timeout
ffffffff81411370 r __ksymtab_part_round_stats
ffffffff81411380 r __ksymtab_pci_add_dynid
ffffffff81411390 r __ksymtab_pci_assign_unassigned_bridge_resources
ffffffff814113a0 r __ksymtab_pci_ats_queue_depth
ffffffff814113b0 r __ksymtab_pci_bus_add_device
ffffffff814113c0 r __ksymtab_pci_bus_max_busnr
ffffffff814113d0 r __ksymtab_pci_bus_resource_n
ffffffff814113e0 r __ksymtab_pci_cfg_access_lock
ffffffff814113f0 r __ksymtab_pci_cfg_access_trylock
ffffffff81411400 r __ksymtab_pci_cfg_access_unlock
ffffffff81411410 r __ksymtab_pci_check_and_mask_intx
ffffffff81411420 r __ksymtab_pci_check_and_unmask_intx
ffffffff81411430 r __ksymtab_pci_cleanup_aer_uncorrect_error_status
ffffffff81411440 r __ksymtab_pci_create_slot
ffffffff81411450 r __ksymtab_pci_destroy_slot
ffffffff81411460 r __ksymtab_pci_dev_run_wake
ffffffff81411470 r __ksymtab_pci_disable_ats
ffffffff81411480 r __ksymtab_pci_disable_pasid
ffffffff81411490 r __ksymtab_pci_disable_pcie_error_reporting
ffffffff814114a0 r __ksymtab_pci_disable_pri
ffffffff814114b0 r __ksymtab_pci_disable_rom
ffffffff814114c0 r __ksymtab_pci_disable_sriov
ffffffff814114d0 r __ksymtab_pci_enable_ats
ffffffff814114e0 r __ksymtab_pci_enable_pasid
ffffffff814114f0 r __ksymtab_pci_enable_pcie_error_reporting
ffffffff81411500 r __ksymtab_pci_enable_pri
ffffffff81411510 r __ksymtab_pci_enable_rom
ffffffff81411520 r __ksymtab_pci_enable_sriov
ffffffff81411530 r __ksymtab_pci_find_ext_capability
ffffffff81411540 r __ksymtab_pci_find_ht_capability
ffffffff81411550 r __ksymtab_pci_find_next_capability
ffffffff81411560 r __ksymtab_pci_find_next_ht_capability
ffffffff81411570 r __ksymtab_pci_intx
ffffffff81411580 r __ksymtab_pci_intx_mask_supported
ffffffff81411590 r __ksymtab_pci_ioremap_bar
ffffffff814115a0 r __ksymtab_pci_load_and_free_saved_state
ffffffff814115b0 r __ksymtab_pci_load_saved_state
ffffffff814115c0 r __ksymtab_pci_max_pasids
ffffffff814115d0 r __ksymtab_pci_msi_off
ffffffff814115e0 r __ksymtab_pci_num_vf
ffffffff814115f0 r __ksymtab_pci_pasid_features
ffffffff81411600 r __ksymtab_pci_power_names
ffffffff81411610 r __ksymtab_pci_pri_enabled
ffffffff81411620 r __ksymtab_pci_pri_status
ffffffff81411630 r __ksymtab_pci_pri_stopped
ffffffff81411640 r __ksymtab_pci_renumber_slot
ffffffff81411650 r __ksymtab_pci_rescan_bus
ffffffff81411660 r __ksymtab_pci_reset_function
ffffffff81411670 r __ksymtab_pci_reset_pri
ffffffff81411680 r __ksymtab_pci_restore_ats_state
ffffffff81411690 r __ksymtab_pci_restore_msi_state
ffffffff814116a0 r __ksymtab_pci_scan_child_bus
ffffffff814116b0 r __ksymtab_pci_set_cacheline_size
ffffffff814116c0 r __ksymtab_pci_set_pcie_reset_state
ffffffff814116d0 r __ksymtab_pci_slots_kset
ffffffff814116e0 r __ksymtab_pci_sriov_migration
ffffffff814116f0 r __ksymtab_pci_stop_bus_device
ffffffff81411700 r __ksymtab_pci_store_saved_state
ffffffff81411710 r __ksymtab_pci_test_config_bits
ffffffff81411720 r __ksymtab_pci_vpd_find_info_keyword
ffffffff81411730 r __ksymtab_pci_vpd_find_tag
ffffffff81411740 r __ksymtab_pci_walk_bus
ffffffff81411750 r __ksymtab_pcibios_scan_specific_bus
ffffffff81411760 r __ksymtab_pcie_bus_configure_settings
ffffffff81411770 r __ksymtab_pcie_port_bus_type
ffffffff81411780 r __ksymtab_pcie_update_link_speed
ffffffff81411790 r __ksymtab_pcpu_base_addr
ffffffff814117a0 r __ksymtab_perf_event_create_kernel_counter
ffffffff814117b0 r __ksymtab_perf_event_disable
ffffffff814117c0 r __ksymtab_perf_event_enable
ffffffff814117d0 r __ksymtab_perf_event_read_value
ffffffff814117e0 r __ksymtab_perf_event_refresh
ffffffff814117f0 r __ksymtab_perf_event_release_kernel
ffffffff81411800 r __ksymtab_perf_get_x86_pmu_capability
ffffffff81411810 r __ksymtab_perf_guest_get_msrs
ffffffff81411820 r __ksymtab_perf_register_guest_info_callbacks
ffffffff81411830 r __ksymtab_perf_swevent_get_recursion_context
ffffffff81411840 r __ksymtab_perf_unregister_guest_info_callbacks
ffffffff81411850 r __ksymtab_pgprot_writecombine
ffffffff81411860 r __ksymtab_pid_vnr
ffffffff81411870 r __ksymtab_platform_add_devices
ffffffff81411880 r __ksymtab_platform_bus
ffffffff81411890 r __ksymtab_platform_bus_type
ffffffff814118a0 r __ksymtab_platform_create_bundle
ffffffff814118b0 r __ksymtab_platform_device_add
ffffffff814118c0 r __ksymtab_platform_device_add_data
ffffffff814118d0 r __ksymtab_platform_device_add_resources
ffffffff814118e0 r __ksymtab_platform_device_alloc
ffffffff814118f0 r __ksymtab_platform_device_del
ffffffff81411900 r __ksymtab_platform_device_put
ffffffff81411910 r __ksymtab_platform_device_register
ffffffff81411920 r __ksymtab_platform_device_register_full
ffffffff81411930 r __ksymtab_platform_device_unregister
ffffffff81411940 r __ksymtab_platform_driver_probe
ffffffff81411950 r __ksymtab_platform_driver_register
ffffffff81411960 r __ksymtab_platform_driver_unregister
ffffffff81411970 r __ksymtab_platform_get_irq
ffffffff81411980 r __ksymtab_platform_get_irq_byname
ffffffff81411990 r __ksymtab_platform_get_resource
ffffffff814119a0 r __ksymtab_platform_get_resource_byname
ffffffff814119b0 r __ksymtab_pm_generic_freeze
ffffffff814119c0 r __ksymtab_pm_generic_freeze_late
ffffffff814119d0 r __ksymtab_pm_generic_freeze_noirq
ffffffff814119e0 r __ksymtab_pm_generic_poweroff
ffffffff814119f0 r __ksymtab_pm_generic_poweroff_late
ffffffff81411a00 r __ksymtab_pm_generic_poweroff_noirq
ffffffff81411a10 r __ksymtab_pm_generic_restore
ffffffff81411a20 r __ksymtab_pm_generic_restore_early
ffffffff81411a30 r __ksymtab_pm_generic_restore_noirq
ffffffff81411a40 r __ksymtab_pm_generic_resume
ffffffff81411a50 r __ksymtab_pm_generic_resume_early
ffffffff81411a60 r __ksymtab_pm_generic_resume_noirq
ffffffff81411a70 r __ksymtab_pm_generic_runtime_idle
ffffffff81411a80 r __ksymtab_pm_generic_runtime_resume
ffffffff81411a90 r __ksymtab_pm_generic_runtime_suspend
ffffffff81411aa0 r __ksymtab_pm_generic_suspend
ffffffff81411ab0 r __ksymtab_pm_generic_suspend_late
ffffffff81411ac0 r __ksymtab_pm_generic_suspend_noirq
ffffffff81411ad0 r __ksymtab_pm_generic_thaw
ffffffff81411ae0 r __ksymtab_pm_generic_thaw_early
ffffffff81411af0 r __ksymtab_pm_generic_thaw_noirq
ffffffff81411b00 r __ksymtab_pm_qos_add_notifier
ffffffff81411b10 r __ksymtab_pm_qos_add_request
ffffffff81411b20 r __ksymtab_pm_qos_remove_notifier
ffffffff81411b30 r __ksymtab_pm_qos_remove_request
ffffffff81411b40 r __ksymtab_pm_qos_request
ffffffff81411b50 r __ksymtab_pm_qos_request_active
ffffffff81411b60 r __ksymtab_pm_qos_update_request
ffffffff81411b70 r __ksymtab_pm_relax
ffffffff81411b80 r __ksymtab_pm_runtime_allow
ffffffff81411b90 r __ksymtab_pm_runtime_autosuspend_expiration
ffffffff81411ba0 r __ksymtab_pm_runtime_barrier
ffffffff81411bb0 r __ksymtab_pm_runtime_enable
ffffffff81411bc0 r __ksymtab_pm_runtime_forbid
ffffffff81411bd0 r __ksymtab_pm_runtime_irq_safe
ffffffff81411be0 r __ksymtab_pm_runtime_no_callbacks
ffffffff81411bf0 r __ksymtab_pm_runtime_set_autosuspend_delay
ffffffff81411c00 r __ksymtab_pm_schedule_suspend
ffffffff81411c10 r __ksymtab_pm_stay_awake
ffffffff81411c20 r __ksymtab_pm_wakeup_event
ffffffff81411c30 r __ksymtab_pm_wq
ffffffff81411c40 r __ksymtab_posix_clock_register
ffffffff81411c50 r __ksymtab_posix_clock_unregister
ffffffff81411c60 r __ksymtab_posix_timer_event
ffffffff81411c70 r __ksymtab_posix_timers_register_clock
ffffffff81411c80 r __ksymtab_power_group_name
ffffffff81411c90 r __ksymtab_print_context_stack
ffffffff81411ca0 r __ksymtab_print_context_stack_bp
ffffffff81411cb0 r __ksymtab_probe_kernel_read
ffffffff81411cc0 r __ksymtab_probe_kernel_write
ffffffff81411cd0 r __ksymtab_proc_net_fops_create
ffffffff81411ce0 r __ksymtab_proc_net_mkdir
ffffffff81411cf0 r __ksymtab_proc_net_remove
ffffffff81411d00 r __ksymtab_pstore_register
ffffffff81411d10 r __ksymtab_put_compat_timespec
ffffffff81411d20 r __ksymtab_put_compat_timeval
ffffffff81411d30 r __ksymtab_put_device
ffffffff81411d40 r __ksymtab_put_online_cpus
ffffffff81411d50 r __ksymtab_put_pid
ffffffff81411d60 r __ksymtab_pv_apic_ops
ffffffff81411d70 r __ksymtab_pv_info
ffffffff81411d80 r __ksymtab_pv_time_ops
ffffffff81411d90 r __ksymtab_queue_delayed_work
ffffffff81411da0 r __ksymtab_queue_delayed_work_on
ffffffff81411db0 r __ksymtab_queue_kthread_work
ffffffff81411dc0 r __ksymtab_queue_work
ffffffff81411dd0 r __ksymtab_queue_work_on
ffffffff81411de0 r __ksymtab_raw_hash_sk
ffffffff81411df0 r __ksymtab_raw_notifier_call_chain
ffffffff81411e00 r __ksymtab_raw_notifier_chain_register
ffffffff81411e10 r __ksymtab_raw_notifier_chain_unregister
ffffffff81411e20 r __ksymtab_raw_seq_next
ffffffff81411e30 r __ksymtab_raw_seq_open
ffffffff81411e40 r __ksymtab_raw_seq_start
ffffffff81411e50 r __ksymtab_raw_seq_stop
ffffffff81411e60 r __ksymtab_raw_unhash_sk
ffffffff81411e70 r __ksymtab_rcu_barrier
ffffffff81411e80 r __ksymtab_rcu_barrier_bh
ffffffff81411e90 r __ksymtab_rcu_barrier_sched
ffffffff81411ea0 r __ksymtab_rcu_batches_completed
ffffffff81411eb0 r __ksymtab_rcu_batches_completed_bh
ffffffff81411ec0 r __ksymtab_rcu_batches_completed_sched
ffffffff81411ed0 r __ksymtab_rcu_bh_force_quiescent_state
ffffffff81411ee0 r __ksymtab_rcu_force_quiescent_state
ffffffff81411ef0 r __ksymtab_rcu_idle_enter
ffffffff81411f00 r __ksymtab_rcu_idle_exit
ffffffff81411f10 r __ksymtab_rcu_note_context_switch
ffffffff81411f20 r __ksymtab_rcu_sched_force_quiescent_state
ffffffff81411f30 r __ksymtab_rcu_scheduler_active
ffffffff81411f40 r __ksymtab_rcutorture_record_progress
ffffffff81411f50 r __ksymtab_rcutorture_record_test_transition
ffffffff81411f60 r __ksymtab_ref_module
ffffffff81411f70 r __ksymtab_register_acpi_bus_notifier
ffffffff81411f80 r __ksymtab_register_acpi_hed_notifier
ffffffff81411f90 r __ksymtab_register_die_notifier
ffffffff81411fa0 r __ksymtab_register_keyboard_notifier
ffffffff81411fb0 r __ksymtab_register_mce_write_callback
ffffffff81411fc0 r __ksymtab_register_net_sysctl_rotable
ffffffff81411fd0 r __ksymtab_register_net_sysctl_table
ffffffff81411fe0 r __ksymtab_register_netevent_notifier
ffffffff81411ff0 r __ksymtab_register_nmi_handler
ffffffff81412000 r __ksymtab_register_oom_notifier
ffffffff81412010 r __ksymtab_register_pernet_device
ffffffff81412020 r __ksymtab_register_pernet_subsys
ffffffff81412030 r __ksymtab_register_pm_notifier
ffffffff81412040 r __ksymtab_register_syscore_ops
ffffffff81412050 r __ksymtab_register_user_hw_breakpoint
ffffffff81412060 r __ksymtab_register_vt_notifier
ffffffff81412070 r __ksymtab_register_wide_hw_breakpoint
ffffffff81412080 r __ksymtab_register_xenbus_watch
ffffffff81412090 r __ksymtab_register_xenstore_notifier
ffffffff814120a0 r __ksymtab_remove_irq
ffffffff814120b0 r __ksymtab_replace_page_cache_page
ffffffff814120c0 r __ksymtab_request_any_context_irq
ffffffff814120d0 r __ksymtab_resume_device_irqs
ffffffff814120e0 r __ksymtab_root_device_unregister
ffffffff814120f0 r __ksymtab_round_jiffies
ffffffff81412100 r __ksymtab_round_jiffies_relative
ffffffff81412110 r __ksymtab_round_jiffies_up
ffffffff81412120 r __ksymtab_round_jiffies_up_relative
ffffffff81412130 r __ksymtab_rt_mutex_destroy
ffffffff81412140 r __ksymtab_rt_mutex_lock
ffffffff81412150 r __ksymtab_rt_mutex_lock_interruptible
ffffffff81412160 r __ksymtab_rt_mutex_timed_lock
ffffffff81412170 r __ksymtab_rt_mutex_trylock
ffffffff81412180 r __ksymtab_rt_mutex_unlock
ffffffff81412190 r __ksymtab_rtc_alarm_irq_enable
ffffffff814121a0 r __ksymtab_rtc_class_close
ffffffff814121b0 r __ksymtab_rtc_class_open
ffffffff814121c0 r __ksymtab_rtc_device_register
ffffffff814121d0 r __ksymtab_rtc_device_unregister
ffffffff814121e0 r __ksymtab_rtc_initialize_alarm
ffffffff814121f0 r __ksymtab_rtc_irq_register
ffffffff81412200 r __ksymtab_rtc_irq_set_freq
ffffffff81412210 r __ksymtab_rtc_irq_set_state
ffffffff81412220 r __ksymtab_rtc_irq_unregister
ffffffff81412230 r __ksymtab_rtc_ktime_to_tm
ffffffff81412240 r __ksymtab_rtc_read_alarm
ffffffff81412250 r __ksymtab_rtc_read_time
ffffffff81412260 r __ksymtab_rtc_set_alarm
ffffffff81412270 r __ksymtab_rtc_set_mmss
ffffffff81412280 r __ksymtab_rtc_set_time
ffffffff81412290 r __ksymtab_rtc_tm_to_ktime
ffffffff814122a0 r __ksymtab_rtc_update_irq
ffffffff814122b0 r __ksymtab_rtc_update_irq_enable
ffffffff814122c0 r __ksymtab_rtnl_af_register
ffffffff814122d0 r __ksymtab_rtnl_af_unregister
ffffffff814122e0 r __ksymtab_rtnl_link_register
ffffffff814122f0 r __ksymtab_rtnl_link_unregister
ffffffff81412300 r __ksymtab_rtnl_put_cacheinfo
ffffffff81412310 r __ksymtab_rtnl_register
ffffffff81412320 r __ksymtab_rtnl_unregister
ffffffff81412330 r __ksymtab_rtnl_unregister_all
ffffffff81412340 r __ksymtab_sata_async_notification
ffffffff81412350 r __ksymtab_sata_deb_timing_hotplug
ffffffff81412360 r __ksymtab_sata_deb_timing_long
ffffffff81412370 r __ksymtab_sata_deb_timing_normal
ffffffff81412380 r __ksymtab_sata_link_debounce
ffffffff81412390 r __ksymtab_sata_link_hardreset
ffffffff814123a0 r __ksymtab_sata_link_resume
ffffffff814123b0 r __ksymtab_sata_link_scr_lpm
ffffffff814123c0 r __ksymtab_sata_port_ops
ffffffff814123d0 r __ksymtab_sata_scr_read
ffffffff814123e0 r __ksymtab_sata_scr_valid
ffffffff814123f0 r __ksymtab_sata_scr_write
ffffffff81412400 r __ksymtab_sata_scr_write_flush
ffffffff81412410 r __ksymtab_sata_set_spd
ffffffff81412420 r __ksymtab_sata_sff_hardreset
ffffffff81412430 r __ksymtab_sata_std_hardreset
ffffffff81412440 r __ksymtab_scatterwalk_copychunks
ffffffff81412450 r __ksymtab_scatterwalk_done
ffffffff81412460 r __ksymtab_scatterwalk_map
ffffffff81412470 r __ksymtab_scatterwalk_map_and_copy
ffffffff81412480 r __ksymtab_scatterwalk_start
ffffffff81412490 r __ksymtab_sched_clock
ffffffff814124a0 r __ksymtab_sched_clock_idle_sleep_event
ffffffff814124b0 r __ksymtab_sched_clock_idle_wakeup_event
ffffffff814124c0 r __ksymtab_sched_setscheduler
ffffffff814124d0 r __ksymtab_schedule_hrtimeout
ffffffff814124e0 r __ksymtab_schedule_hrtimeout_range
ffffffff814124f0 r __ksymtab_screen_glyph
ffffffff81412500 r __ksymtab_scsi_autopm_get_device
ffffffff81412510 r __ksymtab_scsi_autopm_put_device
ffffffff81412520 r __ksymtab_scsi_bus_type
ffffffff81412530 r __ksymtab_scsi_complete_async_scans
ffffffff81412540 r __ksymtab_scsi_eh_get_sense
ffffffff81412550 r __ksymtab_scsi_eh_ready_devs
ffffffff81412560 r __ksymtab_scsi_flush_work
ffffffff81412570 r __ksymtab_scsi_get_vpd_page
ffffffff81412580 r __ksymtab_scsi_internal_device_block
ffffffff81412590 r __ksymtab_scsi_internal_device_unblock
ffffffff814125a0 r __ksymtab_scsi_mode_select
ffffffff814125b0 r __ksymtab_scsi_queue_work
ffffffff814125c0 r __ksymtab_scsi_schedule_eh
ffffffff814125d0 r __ksymtab_scsi_target_block
ffffffff814125e0 r __ksymtab_scsi_target_unblock
ffffffff814125f0 r __ksymtab_sdev_evt_alloc
ffffffff81412600 r __ksymtab_sdev_evt_send
ffffffff81412610 r __ksymtab_sdev_evt_send_simple
ffffffff81412620 r __ksymtab_secure_ipv4_port_ephemeral
ffffffff81412630 r __ksymtab_seq_open_net
ffffffff81412640 r __ksymtab_seq_release_net
ffffffff81412650 r __ksymtab_set_cpus_allowed_ptr
ffffffff81412660 r __ksymtab_set_memory_ro
ffffffff81412670 r __ksymtab_set_memory_rw
ffffffff81412680 r __ksymtab_set_personality_ia32
ffffffff81412690 r __ksymtab_set_task_ioprio
ffffffff814126a0 r __ksymtab_set_timer_slack
ffffffff814126b0 r __ksymtab_setup_APIC_eilvt
ffffffff814126c0 r __ksymtab_setup_deferrable_timer_on_stack_key
ffffffff814126d0 r __ksymtab_setup_irq
ffffffff814126e0 r __ksymtab_sg_scsi_ioctl
ffffffff814126f0 r __ksymtab_shake_page
ffffffff81412700 r __ksymtab_shash_ahash_digest
ffffffff81412710 r __ksymtab_shash_ahash_finup
ffffffff81412720 r __ksymtab_shash_ahash_update
ffffffff81412730 r __ksymtab_shash_attr_alg
ffffffff81412740 r __ksymtab_shash_free_instance
ffffffff81412750 r __ksymtab_shash_register_instance
ffffffff81412760 r __ksymtab_shmem_file_setup
ffffffff81412770 r __ksymtab_shmem_read_mapping_page_gfp
ffffffff81412780 r __ksymtab_shmem_truncate_range
ffffffff81412790 r __ksymtab_show_class_attr_string
ffffffff814127a0 r __ksymtab_sigset_from_compat
ffffffff814127b0 r __ksymtab_simple_attr_open
ffffffff814127c0 r __ksymtab_simple_attr_read
ffffffff814127d0 r __ksymtab_simple_attr_release
ffffffff814127e0 r __ksymtab_simple_attr_write
ffffffff814127f0 r __ksymtab_single_open_net
ffffffff81412800 r __ksymtab_single_release_net
ffffffff81412810 r __ksymtab_sk_attach_filter
ffffffff81412820 r __ksymtab_sk_clone_lock
ffffffff81412830 r __ksymtab_sk_detach_filter
ffffffff81412840 r __ksymtab_sk_setup_caps
ffffffff81412850 r __ksymtab_skb_complete_wifi_ack
ffffffff81412860 r __ksymtab_skb_cow_data
ffffffff81412870 r __ksymtab_skb_gro_receive
ffffffff81412880 r __ksymtab_skb_morph
ffffffff81412890 r __ksymtab_skb_partial_csum_set
ffffffff814128a0 r __ksymtab_skb_pull_rcsum
ffffffff814128b0 r __ksymtab_skb_segment
ffffffff814128c0 r __ksymtab_skb_to_sgvec
ffffffff814128d0 r __ksymtab_skb_tstamp_tx
ffffffff814128e0 r __ksymtab_skcipher_geniv_alloc
ffffffff814128f0 r __ksymtab_skcipher_geniv_exit
ffffffff81412900 r __ksymtab_skcipher_geniv_free
ffffffff81412910 r __ksymtab_skcipher_geniv_init
ffffffff81412920 r __ksymtab_smp_call_function_any
ffffffff81412930 r __ksymtab_smp_ops
ffffffff81412940 r __ksymtab_snmp_fold_field
ffffffff81412950 r __ksymtab_snmp_mib_free
ffffffff81412960 r __ksymtab_snmp_mib_init
ffffffff81412970 r __ksymtab_sock_diag_check_cookie
ffffffff81412980 r __ksymtab_sock_diag_nlsk
ffffffff81412990 r __ksymtab_sock_diag_put_meminfo
ffffffff814129a0 r __ksymtab_sock_diag_register
ffffffff814129b0 r __ksymtab_sock_diag_register_inet_compat
ffffffff814129c0 r __ksymtab_sock_diag_save_cookie
ffffffff814129d0 r __ksymtab_sock_diag_unregister
ffffffff814129e0 r __ksymtab_sock_diag_unregister_inet_compat
ffffffff814129f0 r __ksymtab_sock_prot_inuse_add
ffffffff81412a00 r __ksymtab_sock_prot_inuse_get
ffffffff81412a10 r __ksymtab_sock_update_netprioidx
ffffffff81412a20 r __ksymtab_sprint_symbol
ffffffff81412a30 r __ksymtab_srcu_batches_completed
ffffffff81412a40 r __ksymtab_srcu_init_notifier_head
ffffffff81412a50 r __ksymtab_srcu_notifier_call_chain
ffffffff81412a60 r __ksymtab_srcu_notifier_chain_register
ffffffff81412a70 r __ksymtab_srcu_notifier_chain_unregister
ffffffff81412a80 r __ksymtab_stop_machine
ffffffff81412a90 r __ksymtab_subsys_dev_iter_exit
ffffffff81412aa0 r __ksymtab_subsys_dev_iter_init
ffffffff81412ab0 r __ksymtab_subsys_dev_iter_next
ffffffff81412ac0 r __ksymtab_subsys_find_device_by_id
ffffffff81412ad0 r __ksymtab_subsys_interface_register
ffffffff81412ae0 r __ksymtab_subsys_interface_unregister
ffffffff81412af0 r __ksymtab_subsys_system_register
ffffffff81412b00 r __ksymtab_suspend_device_irqs
ffffffff81412b10 r __ksymtab_swiotlb_bounce
ffffffff81412b20 r __ksymtab_swiotlb_map_page
ffffffff81412b30 r __ksymtab_swiotlb_nr_tbl
ffffffff81412b40 r __ksymtab_swiotlb_tbl_map_single
ffffffff81412b50 r __ksymtab_swiotlb_tbl_sync_single
ffffffff81412b60 r __ksymtab_swiotlb_tbl_unmap_single
ffffffff81412b70 r __ksymtab_swiotlb_unmap_page
ffffffff81412b80 r __ksymtab_symbol_put_addr
ffffffff81412b90 r __ksymtab_sync_filesystem
ffffffff81412ba0 r __ksymtab_synchronize_rcu_bh
ffffffff81412bb0 r __ksymtab_synchronize_rcu_expedited
ffffffff81412bc0 r __ksymtab_synchronize_sched
ffffffff81412bd0 r __ksymtab_synchronize_sched_expedited
ffffffff81412be0 r __ksymtab_synchronize_srcu
ffffffff81412bf0 r __ksymtab_synchronize_srcu_expedited
ffffffff81412c00 r __ksymtab_syscore_resume
ffffffff81412c10 r __ksymtab_syscore_suspend
ffffffff81412c20 r __ksymtab_sysctl_tcp_cookie_size
ffffffff81412c30 r __ksymtab_sysctl_vfs_cache_pressure
ffffffff81412c40 r __ksymtab_sysfs_add_file_to_group
ffffffff81412c50 r __ksymtab_sysfs_chmod_file
ffffffff81412c60 r __ksymtab_sysfs_create_bin_file
ffffffff81412c70 r __ksymtab_sysfs_create_file
ffffffff81412c80 r __ksymtab_sysfs_create_files
ffffffff81412c90 r __ksymtab_sysfs_create_group
ffffffff81412ca0 r __ksymtab_sysfs_create_link
ffffffff81412cb0 r __ksymtab_sysfs_get
ffffffff81412cc0 r __ksymtab_sysfs_get_dirent
ffffffff81412cd0 r __ksymtab_sysfs_merge_group
ffffffff81412ce0 r __ksymtab_sysfs_notify
ffffffff81412cf0 r __ksymtab_sysfs_notify_dirent
ffffffff81412d00 r __ksymtab_sysfs_put
ffffffff81412d10 r __ksymtab_sysfs_remove_bin_file
ffffffff81412d20 r __ksymtab_sysfs_remove_file
ffffffff81412d30 r __ksymtab_sysfs_remove_file_from_group
ffffffff81412d40 r __ksymtab_sysfs_remove_files
ffffffff81412d50 r __ksymtab_sysfs_remove_group
ffffffff81412d60 r __ksymtab_sysfs_remove_link
ffffffff81412d70 r __ksymtab_sysfs_rename_link
ffffffff81412d80 r __ksymtab_sysfs_schedule_callback
ffffffff81412d90 r __ksymtab_sysfs_unmerge_group
ffffffff81412da0 r __ksymtab_sysfs_update_group
ffffffff81412db0 r __ksymtab_system_freezable_wq
ffffffff81412dc0 r __ksymtab_system_long_wq
ffffffff81412dd0 r __ksymtab_system_nrt_freezable_wq
ffffffff81412de0 r __ksymtab_system_nrt_wq
ffffffff81412df0 r __ksymtab_system_unbound_wq
ffffffff81412e00 r __ksymtab_system_wq
ffffffff81412e10 r __ksymtab_task_active_pid_ns
ffffffff81412e20 r __ksymtab_task_blkio_cgroup
ffffffff81412e30 r __ksymtab_task_current_syscall
ffffffff81412e40 r __ksymtab_task_xstate_cachep
ffffffff81412e50 r __ksymtab_tasklet_hrtimer_init
ffffffff81412e60 r __ksymtab_tcp_cong_avoid_ai
ffffffff81412e70 r __ksymtab_tcp_death_row
ffffffff81412e80 r __ksymtab_tcp_done
ffffffff81412e90 r __ksymtab_tcp_get_info
ffffffff81412ea0 r __ksymtab_tcp_init_congestion_ops
ffffffff81412eb0 r __ksymtab_tcp_is_cwnd_limited
ffffffff81412ec0 r __ksymtab_tcp_orphan_count
ffffffff81412ed0 r __ksymtab_tcp_register_congestion_control
ffffffff81412ee0 r __ksymtab_tcp_reno_cong_avoid
ffffffff81412ef0 r __ksymtab_tcp_reno_min_cwnd
ffffffff81412f00 r __ksymtab_tcp_reno_ssthresh
ffffffff81412f10 r __ksymtab_tcp_set_state
ffffffff81412f20 r __ksymtab_tcp_slow_start
ffffffff81412f30 r __ksymtab_tcp_twsk_destructor
ffffffff81412f40 r __ksymtab_tcp_twsk_unique
ffffffff81412f50 r __ksymtab_tcp_unregister_congestion_control
ffffffff81412f60 r __ksymtab_timecompare_offset
ffffffff81412f70 r __ksymtab_timecompare_transform
ffffffff81412f80 r __ksymtab_timecounter_cyc2time
ffffffff81412f90 r __ksymtab_timecounter_init
ffffffff81412fa0 r __ksymtab_timecounter_read
ffffffff81412fb0 r __ksymtab_timerqueue_add
ffffffff81412fc0 r __ksymtab_timerqueue_del
ffffffff81412fd0 r __ksymtab_timerqueue_iterate_next
ffffffff81412fe0 r __ksymtab_transport_add_device
ffffffff81412ff0 r __ksymtab_transport_class_register
ffffffff81413000 r __ksymtab_transport_class_unregister
ffffffff81413010 r __ksymtab_transport_configure_device
ffffffff81413020 r __ksymtab_transport_destroy_device
ffffffff81413030 r __ksymtab_transport_remove_device
ffffffff81413040 r __ksymtab_transport_setup_device
ffffffff81413050 r __ksymtab_tty_buffer_request_room
ffffffff81413060 r __ksymtab_tty_encode_baud_rate
ffffffff81413070 r __ksymtab_tty_get_pgrp
ffffffff81413080 r __ksymtab_tty_init_termios
ffffffff81413090 r __ksymtab_tty_ldisc_deref
ffffffff814130a0 r __ksymtab_tty_ldisc_flush
ffffffff814130b0 r __ksymtab_tty_ldisc_ref
ffffffff814130c0 r __ksymtab_tty_ldisc_ref_wait
ffffffff814130d0 r __ksymtab_tty_mode_ioctl
ffffffff814130e0 r __ksymtab_tty_perform_flush
ffffffff814130f0 r __ksymtab_tty_prepare_flip_string
ffffffff81413100 r __ksymtab_tty_prepare_flip_string_flags
ffffffff81413110 r __ksymtab_tty_put_char
ffffffff81413120 r __ksymtab_tty_set_termios
ffffffff81413130 r __ksymtab_tty_standard_install
ffffffff81413140 r __ksymtab_tty_termios_encode_baud_rate
ffffffff81413150 r __ksymtab_tty_wakeup
ffffffff81413160 r __ksymtab_udp4_lib_lookup
ffffffff81413170 r __ksymtab_uhci_check_and_reset_hc
ffffffff81413180 r __ksymtab_uhci_reset_hc
ffffffff81413190 r __ksymtab_unbind_from_irqhandler
ffffffff814131a0 r __ksymtab_unix_inq_len
ffffffff814131b0 r __ksymtab_unix_outq_len
ffffffff814131c0 r __ksymtab_unix_peer_get
ffffffff814131d0 r __ksymtab_unix_socket_table
ffffffff814131e0 r __ksymtab_unix_table_lock
ffffffff814131f0 r __ksymtab_unlock_flocks
ffffffff81413200 r __ksymtab_unmap_kernel_range_noflush
ffffffff81413210 r __ksymtab_unregister_acpi_bus_notifier
ffffffff81413220 r __ksymtab_unregister_acpi_hed_notifier
ffffffff81413230 r __ksymtab_unregister_die_notifier
ffffffff81413240 r __ksymtab_unregister_hw_breakpoint
ffffffff81413250 r __ksymtab_unregister_keyboard_notifier
ffffffff81413260 r __ksymtab_unregister_net_sysctl_table
ffffffff81413270 r __ksymtab_unregister_netevent_notifier
ffffffff81413280 r __ksymtab_unregister_nmi_handler
ffffffff81413290 r __ksymtab_unregister_oom_notifier
ffffffff814132a0 r __ksymtab_unregister_pernet_device
ffffffff814132b0 r __ksymtab_unregister_pernet_subsys
ffffffff814132c0 r __ksymtab_unregister_pm_notifier
ffffffff814132d0 r __ksymtab_unregister_syscore_ops
ffffffff814132e0 r __ksymtab_unregister_vt_notifier
ffffffff814132f0 r __ksymtab_unregister_wide_hw_breakpoint
ffffffff81413300 r __ksymtab_unregister_xenbus_watch
ffffffff81413310 r __ksymtab_unregister_xenstore_notifier
ffffffff81413320 r __ksymtab_unshare_fs_struct
ffffffff81413330 r __ksymtab_unuse_mm
ffffffff81413340 r __ksymtab_usb_amd_dev_put
ffffffff81413350 r __ksymtab_usb_amd_find_chipset_info
ffffffff81413360 r __ksymtab_usb_amd_quirk_pll_disable
ffffffff81413370 r __ksymtab_usb_amd_quirk_pll_enable
ffffffff81413380 r __ksymtab_usb_enable_xhci_ports
ffffffff81413390 r __ksymtab_usb_is_intel_switchable_xhci
ffffffff814133a0 r __ksymtab_use_mm
ffffffff814133b0 r __ksymtab_used_vectors
ffffffff814133c0 r __ksymtab_usermodehelper_read_lock_wait
ffffffff814133d0 r __ksymtab_usermodehelper_read_trylock
ffffffff814133e0 r __ksymtab_usermodehelper_read_unlock
ffffffff814133f0 r __ksymtab_uuid_be_gen
ffffffff81413400 r __ksymtab_uuid_le_gen
ffffffff81413410 r __ksymtab_vector_used_by_percpu_irq
ffffffff81413420 r __ksymtab_vfs_cancel_lock
ffffffff81413430 r __ksymtab_vfs_getxattr
ffffffff81413440 r __ksymtab_vfs_kern_mount
ffffffff81413450 r __ksymtab_vfs_listxattr
ffffffff81413460 r __ksymtab_vfs_lock_file
ffffffff81413470 r __ksymtab_vfs_removexattr
ffffffff81413480 r __ksymtab_vfs_setlease
ffffffff81413490 r __ksymtab_vfs_setxattr
ffffffff814134a0 r __ksymtab_vfs_test_lock
ffffffff814134b0 r __ksymtab_vm_unmap_aliases
ffffffff814134c0 r __ksymtab_vma_kernel_pagesize
ffffffff814134d0 r __ksymtab_vt_get_leds
ffffffff814134e0 r __ksymtab_wait_for_device_probe
ffffffff814134f0 r __ksymtab_wait_rcu_gp
ffffffff81413500 r __ksymtab_wakeup_source_add
ffffffff81413510 r __ksymtab_wakeup_source_create
ffffffff81413520 r __ksymtab_wakeup_source_destroy
ffffffff81413530 r __ksymtab_wakeup_source_drop
ffffffff81413540 r __ksymtab_wakeup_source_prepare
ffffffff81413550 r __ksymtab_wakeup_source_register
ffffffff81413560 r __ksymtab_wakeup_source_remove
ffffffff81413570 r __ksymtab_wakeup_source_unregister
ffffffff81413580 r __ksymtab_watchdog_register_device
ffffffff81413590 r __ksymtab_watchdog_unregister_device
ffffffff814135a0 r __ksymtab_work_busy
ffffffff814135b0 r __ksymtab_work_cpu
ffffffff814135c0 r __ksymtab_work_on_cpu
ffffffff814135d0 r __ksymtab_workqueue_congested
ffffffff814135e0 r __ksymtab_workqueue_set_max_active
ffffffff814135f0 r __ksymtab_x86_platform
ffffffff81413600 r __ksymtab_xattr_getsecurity
ffffffff81413610 r __ksymtab_xen_create_contiguous_region
ffffffff81413620 r __ksymtab_xen_destroy_contiguous_region
ffffffff81413630 r __ksymtab_xen_domain_type
ffffffff81413640 r __ksymtab_xen_features
ffffffff81413650 r __ksymtab_xen_find_device_domain_owner
ffffffff81413660 r __ksymtab_xen_have_vector_callback
ffffffff81413670 r __ksymtab_xen_hvm_evtchn_do_upcall
ffffffff81413680 r __ksymtab_xen_hvm_need_lapic
ffffffff81413690 r __ksymtab_xen_hvm_resume_frames
ffffffff814136a0 r __ksymtab_xen_irq_from_gsi
ffffffff814136b0 r __ksymtab_xen_pci_frontend
ffffffff814136c0 r __ksymtab_xen_pirq_from_irq
ffffffff814136d0 r __ksymtab_xen_platform_pci_unplug
ffffffff814136e0 r __ksymtab_xen_register_device_domain_owner
ffffffff814136f0 r __ksymtab_xen_remap_domain_mfn_range
ffffffff81413700 r __ksymtab_xen_set_callback_via
ffffffff81413710 r __ksymtab_xen_set_domain_pte
ffffffff81413720 r __ksymtab_xen_setup_shutdown_event
ffffffff81413730 r __ksymtab_xen_start_info
ffffffff81413740 r __ksymtab_xen_store_evtchn
ffffffff81413750 r __ksymtab_xen_store_interface
ffffffff81413760 r __ksymtab_xen_swiotlb_alloc_coherent
ffffffff81413770 r __ksymtab_xen_swiotlb_dma_mapping_error
ffffffff81413780 r __ksymtab_xen_swiotlb_dma_supported
ffffffff81413790 r __ksymtab_xen_swiotlb_free_coherent
ffffffff814137a0 r __ksymtab_xen_swiotlb_map_page
ffffffff814137b0 r __ksymtab_xen_swiotlb_map_sg
ffffffff814137c0 r __ksymtab_xen_swiotlb_map_sg_attrs
ffffffff814137d0 r __ksymtab_xen_swiotlb_sync_sg_for_cpu
ffffffff814137e0 r __ksymtab_xen_swiotlb_sync_sg_for_device
ffffffff814137f0 r __ksymtab_xen_swiotlb_sync_single_for_cpu
ffffffff81413800 r __ksymtab_xen_swiotlb_sync_single_for_device
ffffffff81413810 r __ksymtab_xen_swiotlb_unmap_page
ffffffff81413820 r __ksymtab_xen_swiotlb_unmap_sg
ffffffff81413830 r __ksymtab_xen_swiotlb_unmap_sg_attrs
ffffffff81413840 r __ksymtab_xen_test_irq_shared
ffffffff81413850 r __ksymtab_xen_unregister_device_domain_owner
ffffffff81413860 r __ksymtab_xen_xenbus_fops
ffffffff81413870 r __ksymtab_xenbus_alloc_evtchn
ffffffff81413880 r __ksymtab_xenbus_bind_evtchn
ffffffff81413890 r __ksymtab_xenbus_dev_attrs
ffffffff814138a0 r __ksymtab_xenbus_dev_cancel
ffffffff814138b0 r __ksymtab_xenbus_dev_changed
ffffffff814138c0 r __ksymtab_xenbus_dev_error
ffffffff814138d0 r __ksymtab_xenbus_dev_fatal
ffffffff814138e0 r __ksymtab_xenbus_dev_is_online
ffffffff814138f0 r __ksymtab_xenbus_dev_probe
ffffffff81413900 r __ksymtab_xenbus_dev_remove
ffffffff81413910 r __ksymtab_xenbus_dev_resume
ffffffff81413920 r __ksymtab_xenbus_dev_shutdown
ffffffff81413930 r __ksymtab_xenbus_dev_suspend
ffffffff81413940 r __ksymtab_xenbus_directory
ffffffff81413950 r __ksymtab_xenbus_exists
ffffffff81413960 r __ksymtab_xenbus_free_evtchn
ffffffff81413970 r __ksymtab_xenbus_frontend_closed
ffffffff81413980 r __ksymtab_xenbus_gather
ffffffff81413990 r __ksymtab_xenbus_grant_ring
ffffffff814139a0 r __ksymtab_xenbus_map_ring
ffffffff814139b0 r __ksymtab_xenbus_map_ring_valloc
ffffffff814139c0 r __ksymtab_xenbus_match
ffffffff814139d0 r __ksymtab_xenbus_mkdir
ffffffff814139e0 r __ksymtab_xenbus_otherend_changed
ffffffff814139f0 r __ksymtab_xenbus_printf
ffffffff81413a00 r __ksymtab_xenbus_probe
ffffffff81413a10 r __ksymtab_xenbus_probe_devices
ffffffff81413a20 r __ksymtab_xenbus_probe_node
ffffffff81413a30 r __ksymtab_xenbus_read
ffffffff81413a40 r __ksymtab_xenbus_read_driver_state
ffffffff81413a50 r __ksymtab_xenbus_read_otherend_details
ffffffff81413a60 r __ksymtab_xenbus_register_backend
ffffffff81413a70 r __ksymtab_xenbus_register_driver_common
ffffffff81413a80 r __ksymtab_xenbus_register_frontend
ffffffff81413a90 r __ksymtab_xenbus_rm
ffffffff81413aa0 r __ksymtab_xenbus_scanf
ffffffff81413ab0 r __ksymtab_xenbus_strstate
ffffffff81413ac0 r __ksymtab_xenbus_switch_state
ffffffff81413ad0 r __ksymtab_xenbus_transaction_end
ffffffff81413ae0 r __ksymtab_xenbus_transaction_start
ffffffff81413af0 r __ksymtab_xenbus_unmap_ring
ffffffff81413b00 r __ksymtab_xenbus_unmap_ring_vfree
ffffffff81413b10 r __ksymtab_xenbus_unregister_driver
ffffffff81413b20 r __ksymtab_xenbus_watch_path
ffffffff81413b30 r __ksymtab_xenbus_watch_pathfmt
ffffffff81413b40 r __ksymtab_xenbus_write
ffffffff81413b50 r __ksymtab_xfrm_aalg_get_byid
ffffffff81413b60 r __ksymtab_xfrm_aalg_get_byidx
ffffffff81413b70 r __ksymtab_xfrm_aalg_get_byname
ffffffff81413b80 r __ksymtab_xfrm_aead_get_byname
ffffffff81413b90 r __ksymtab_xfrm_audit_policy_add
ffffffff81413ba0 r __ksymtab_xfrm_audit_policy_delete
ffffffff81413bb0 r __ksymtab_xfrm_audit_state_add
ffffffff81413bc0 r __ksymtab_xfrm_audit_state_delete
ffffffff81413bd0 r __ksymtab_xfrm_audit_state_icvfail
ffffffff81413be0 r __ksymtab_xfrm_audit_state_notfound
ffffffff81413bf0 r __ksymtab_xfrm_audit_state_notfound_simple
ffffffff81413c00 r __ksymtab_xfrm_audit_state_replay
ffffffff81413c10 r __ksymtab_xfrm_audit_state_replay_overflow
ffffffff81413c20 r __ksymtab_xfrm_calg_get_byid
ffffffff81413c30 r __ksymtab_xfrm_calg_get_byname
ffffffff81413c40 r __ksymtab_xfrm_count_auth_supported
ffffffff81413c50 r __ksymtab_xfrm_count_enc_supported
ffffffff81413c60 r __ksymtab_xfrm_ealg_get_byid
ffffffff81413c70 r __ksymtab_xfrm_ealg_get_byidx
ffffffff81413c80 r __ksymtab_xfrm_ealg_get_byname
ffffffff81413c90 r __ksymtab_xfrm_inner_extract_output
ffffffff81413ca0 r __ksymtab_xfrm_output
ffffffff81413cb0 r __ksymtab_xfrm_output_resume
ffffffff81413cc0 r __ksymtab_xfrm_probe_algs
ffffffff81413cd0 r __ksymtab_xstate_size
ffffffff81413ce0 r __ksymtab_yield_to
ffffffff81413cf0 r __ksymtab_zap_vma_ptes
ffffffff81413d00 r __kstrtab_init_task
ffffffff81413d00 R __start___kcrctab
ffffffff81413d00 R __start___kcrctab_gpl
ffffffff81413d00 R __start___kcrctab_gpl_future
ffffffff81413d00 R __start___kcrctab_unused
ffffffff81413d00 R __start___kcrctab_unused_gpl
ffffffff81413d00 R __start___ksymtab_gpl_future
ffffffff81413d00 R __start___ksymtab_unused
ffffffff81413d00 R __start___ksymtab_unused_gpl
ffffffff81413d00 R __stop___kcrctab
ffffffff81413d00 R __stop___kcrctab_gpl
ffffffff81413d00 R __stop___kcrctab_gpl_future
ffffffff81413d00 R __stop___kcrctab_unused
ffffffff81413d00 R __stop___kcrctab_unused_gpl
ffffffff81413d00 R __stop___ksymtab_gpl
ffffffff81413d00 R __stop___ksymtab_gpl_future
ffffffff81413d00 R __stop___ksymtab_unused
ffffffff81413d00 R __stop___ksymtab_unused_gpl
ffffffff81413d0a r __kstrtab_loops_per_jiffy
ffffffff81413d1a r __kstrtab_reset_devices
ffffffff81413d28 r __kstrtab_system_state
ffffffff81413d35 r __kstrtab_init_uts_ns
ffffffff81413d41 r __kstrtab_x86_hyper_xen_hvm
ffffffff81413d53 r __kstrtab_xen_hvm_need_lapic
ffffffff81413d66 r __kstrtab_xen_have_vector_callback
ffffffff81413d7f r __kstrtab_xen_start_info
ffffffff81413d8e r __kstrtab_machine_to_phys_nr
ffffffff81413da1 r __kstrtab_machine_to_phys_mapping
ffffffff81413db9 r __kstrtab_xen_domain_type
ffffffff81413dc9 r __kstrtab_hypercall_page
ffffffff81413dd8 r __kstrtab_xen_remap_domain_mfn_range
ffffffff81413df3 r __kstrtab_xen_destroy_contiguous_region
ffffffff81413e11 r __kstrtab_xen_create_contiguous_region
ffffffff81413e2e r __kstrtab_xen_set_domain_pte
ffffffff81413e41 r __kstrtab_arbitrary_virt_to_machine
ffffffff81413e5b r __kstrtab_xen_platform_pci_unplug
ffffffff81413e73 r __kstrtab_m2p_find_override_pfn
ffffffff81413e89 r __kstrtab_m2p_remove_override
ffffffff81413e9d r __kstrtab_m2p_add_override
ffffffff81413eae r __kstrtab_get_phys_to_machine
ffffffff81413ec2 r __kstrtab_set_personality_ia32
ffffffff81413ed7 r __kstrtab_math_state_restore
ffffffff81413eea r __kstrtab_used_vectors
ffffffff81413ef7 r __kstrtab_vector_used_by_percpu_irq
ffffffff81413f11 r __kstrtab_irq_regs
ffffffff81413f1a r __kstrtab_irq_stat
ffffffff81413f23 r __kstrtab_dump_trace
ffffffff81413f2e r __kstrtab_profile_pc
ffffffff81413f39 r __kstrtab_oops_begin
ffffffff81413f44 r __kstrtab_dump_stack
ffffffff81413f4f r __kstrtab_print_context_stack_bp
ffffffff81413f66 r __kstrtab_print_context_stack
ffffffff81413f7a r __kstrtab_unregister_nmi_handler
ffffffff81413f91 r __kstrtab_register_nmi_handler
ffffffff81413fa6 r __kstrtab_edid_info
ffffffff81413fb0 r __kstrtab_screen_info
ffffffff81413fbc r __kstrtab_boot_cpu_data
ffffffff81413fca r __kstrtab_x86_platform
ffffffff81413fd7 r __kstrtab_pci_biosrom_size
ffffffff81413fe8 r __kstrtab_pci_unmap_biosrom
ffffffff81413ffa r __kstrtab_pci_map_biosrom
ffffffff8141400a r __kstrtab_empty_zero_page
ffffffff8141401a r __kstrtab_memmove
ffffffff81414022 r __kstrtab___memcpy
ffffffff8141402b r __kstrtab_memcpy
ffffffff81414032 r __kstrtab_memset
ffffffff81414039 r __kstrtab_csum_partial
ffffffff81414046 r __kstrtab_clear_page
ffffffff81414051 r __kstrtab_copy_page
ffffffff8141405b r __kstrtab__copy_to_user
ffffffff81414069 r __kstrtab__copy_from_user
ffffffff81414079 r __kstrtab___copy_user_nocache
ffffffff8141408d r __kstrtab_copy_user_generic_unrolled
ffffffff814140a8 r __kstrtab_copy_user_generic_string
ffffffff814140c1 r __kstrtab___put_user_8
ffffffff814140ce r __kstrtab___put_user_4
ffffffff814140db r __kstrtab___put_user_2
ffffffff814140e8 r __kstrtab___put_user_1
ffffffff814140f5 r __kstrtab___get_user_8
ffffffff81414102 r __kstrtab___get_user_4
ffffffff8141410f r __kstrtab___get_user_2
ffffffff8141411c r __kstrtab___get_user_1
ffffffff81414129 r __kstrtab_e820_any_mapped
ffffffff81414139 r __kstrtab_pci_mem_start
ffffffff81414147 r __kstrtab_dma_supported
ffffffff81414155 r __kstrtab_dma_set_mask
ffffffff81414162 r __kstrtab_x86_dma_fallback_dev
ffffffff81414177 r __kstrtab_dma_ops
ffffffff8141417f r __kstrtab_arch_unregister_cpu
ffffffff81414193 r __kstrtab_arch_register_cpu
ffffffff814141a5 r __kstrtab_arch_debugfs_dir
ffffffff814141b6 r __kstrtab_hw_breakpoint_restore
ffffffff814141cc r __kstrtab_aout_dump_debugregs
ffffffff814141e0 r __kstrtab_cpu_dr7
ffffffff814141e8 r __kstrtab_mark_tsc_unstable
ffffffff814141fa r __kstrtab_recalibrate_cpu_khz
ffffffff8141420e r __kstrtab_check_tsc_unstable
ffffffff81414221 r __kstrtab_tsc_khz
ffffffff81414229 r __kstrtab_cpu_khz
ffffffff81414231 r __kstrtab_native_io_delay
ffffffff81414241 r __kstrtab_native_read_tsc
ffffffff81414251 r __kstrtab_rtc_cmos_write
ffffffff81414260 r __kstrtab_rtc_cmos_read
ffffffff8141426e r __kstrtab_rtc_lock
ffffffff81414277 r __kstrtab_amd_e400_c1e_detected
ffffffff8141428d r __kstrtab_cpu_idle_wait
ffffffff8141429b r __kstrtab_boot_option_idle_override
ffffffff814142b5 r __kstrtab_kernel_thread
ffffffff814142c3 r __kstrtab_task_xstate_cachep
ffffffff814142d6 r __kstrtab_idle_notifier_unregister
ffffffff814142ef r __kstrtab_idle_notifier_register
ffffffff81414306 r __kstrtab_dump_fpu
ffffffff8141430f r __kstrtab_init_fpu
ffffffff81414318 r __kstrtab_fpu_finit
ffffffff81414322 r __kstrtab_xstate_size
ffffffff8141432e r __kstrtab_unlazy_fpu
ffffffff81414339 r __kstrtab_kernel_fpu_end
ffffffff81414348 r __kstrtab_kernel_fpu_begin
ffffffff81414359 r __kstrtab_irq_fpu_usable
ffffffff81414368 r __kstrtab_kernel_stack
ffffffff81414375 r __kstrtab_current_task
ffffffff81414382 r __kstrtab_gdt_page
ffffffff8141438b r __kstrtab_x86_hyper_vmware
ffffffff8141439c r __kstrtab_x86_hyper
ffffffff814143a6 r __kstrtab_x86_hyper_ms_hyperv
ffffffff814143ba r __kstrtab_ms_hyperv
ffffffff814143c4 r __kstrtab_x86_match_cpu
ffffffff814143d2 r __kstrtab_cpu_has_amd_erratum
ffffffff814143e6 r __kstrtab_amd_erratum_383
ffffffff814143f6 r __kstrtab_amd_erratum_400
ffffffff81414406 r __kstrtab_amd_get_nb_id
ffffffff81414414 r __kstrtab_perf_get_x86_pmu_capability
ffffffff81414430 r __kstrtab_amd_pmu_disable_virt
ffffffff81414445 r __kstrtab_amd_pmu_enable_virt
ffffffff81414459 r __kstrtab_perf_guest_get_msrs
ffffffff8141446d r __kstrtab_register_mce_write_callback
ffffffff81414489 r __kstrtab_mce_notify_irq
ffffffff81414498 r __kstrtab_do_machine_check
ffffffff814144a9 r __kstrtab_machine_check_poll
ffffffff814144bc r __kstrtab_mce_unregister_decode_chain
ffffffff814144d8 r __kstrtab_mce_register_decode_chain
ffffffff814144f2 r __kstrtab_injectm
ffffffff814144fa r __kstrtab_apei_mce_report_mem_error
ffffffff81414514 r __kstrtab_mtrr_del
ffffffff8141451d r __kstrtab_mtrr_add
ffffffff81414526 r __kstrtab_mtrr_state
ffffffff81414531 r __kstrtab_release_evntsel_nmi
ffffffff81414545 r __kstrtab_reserve_evntsel_nmi
ffffffff81414559 r __kstrtab_release_perfctr_nmi
ffffffff8141456d r __kstrtab_reserve_perfctr_nmi
ffffffff81414581 r __kstrtab_avail_to_resrv_perfctr_nmi_bit
ffffffff814145a0 r __kstrtab_get_ibs_caps
ffffffff814145ad r __kstrtab_acpi_unregister_ioapic
ffffffff814145c4 r __kstrtab_acpi_register_ioapic
ffffffff814145d9 r __kstrtab_acpi_unmap_lsapic
ffffffff814145eb r __kstrtab_acpi_map_lsapic
ffffffff814145fb r __kstrtab_acpi_gsi_to_irq
ffffffff8141460b r __kstrtab_acpi_pci_disabled
ffffffff8141461d r __kstrtab_acpi_disabled
ffffffff8141462b r __kstrtab_acpi_processor_ffh_cstate_enter
ffffffff8141464b r __kstrtab_acpi_processor_ffh_cstate_probe
ffffffff8141466b r __kstrtab_acpi_processor_power_init_bm_check
ffffffff8141468e r __kstrtab_pm_power_off
ffffffff8141469b r __kstrtab_smp_ops
ffffffff814146a3 r __kstrtab_cpu_info
ffffffff814146ac r __kstrtab_cpu_core_map
ffffffff814146b9 r __kstrtab_cpu_sibling_map
ffffffff814146c9 r __kstrtab_smp_num_siblings
ffffffff814146da r __kstrtab___per_cpu_offset
ffffffff814146eb r __kstrtab_this_cpu_off
ffffffff814146f8 r __kstrtab_cpu_number
ffffffff81414703 r __kstrtab_setup_APIC_eilvt
ffffffff81414714 r __kstrtab_local_apic_timer_c2_ok
ffffffff8141472b r __kstrtab_x86_bios_cpu_apicid
ffffffff8141473f r __kstrtab_x86_cpu_to_apicid
ffffffff81414751 r __kstrtab_IO_APIC_get_PCI_irq_vector
ffffffff8141476c r __kstrtab_apic
ffffffff81414771 r __kstrtab_hpet_rtc_interrupt
ffffffff81414784 r __kstrtab_hpet_rtc_dropped_irq
ffffffff81414799 r __kstrtab_hpet_set_periodic_freq
ffffffff814147b0 r __kstrtab_hpet_set_alarm_time
ffffffff814147c4 r __kstrtab_hpet_set_rtc_irq_bit
ffffffff814147d9 r __kstrtab_hpet_mask_rtc_irq_bit
ffffffff814147ef r __kstrtab_hpet_rtc_timer_init
ffffffff81414803 r __kstrtab_hpet_unregister_irq_handler
ffffffff8141481f r __kstrtab_hpet_register_irq_handler
ffffffff81414839 r __kstrtab_is_hpet_enabled
ffffffff81414849 r __kstrtab_amd_flush_garts
ffffffff81414859 r __kstrtab_amd_cache_northbridges
ffffffff81414870 r __kstrtab_amd_northbridges
ffffffff81414881 r __kstrtab_amd_nb_misc_ids
ffffffff81414891 r __kstrtab_pv_irq_ops
ffffffff8141489c r __kstrtab_pv_info
ffffffff814148a4 r __kstrtab_pv_apic_ops
ffffffff814148b0 r __kstrtab_pv_mmu_ops
ffffffff814148bb r __kstrtab_pv_cpu_ops
ffffffff814148c6 r __kstrtab_pv_time_ops
ffffffff814148d2 r __kstrtab_pv_lock_ops
ffffffff814148de r __kstrtab___supported_pte_mask
ffffffff814148f3 r __kstrtab_iounmap
ffffffff814148fb r __kstrtab_ioremap_prot
ffffffff81414908 r __kstrtab_ioremap_cache
ffffffff81414916 r __kstrtab_ioremap_wc
ffffffff81414921 r __kstrtab_ioremap_nocache
ffffffff81414931 r __kstrtab_set_pages_nx
ffffffff8141493e r __kstrtab_set_pages_x
ffffffff8141494a r __kstrtab_set_pages_array_wb
ffffffff8141495d r __kstrtab_set_pages_wb
ffffffff8141496a r __kstrtab_set_pages_array_wc
ffffffff8141497d r __kstrtab_set_pages_array_uc
ffffffff81414990 r __kstrtab_set_pages_uc
ffffffff8141499d r __kstrtab_set_memory_rw
ffffffff814149ab r __kstrtab_set_memory_ro
ffffffff814149b9 r __kstrtab_set_memory_nx
ffffffff814149c7 r __kstrtab_set_memory_x
ffffffff814149d4 r __kstrtab_set_memory_array_wb
ffffffff814149e8 r __kstrtab_set_memory_wb
ffffffff814149f6 r __kstrtab_set_memory_wc
ffffffff81414a04 r __kstrtab_set_memory_array_wc
ffffffff81414a18 r __kstrtab_set_memory_array_uc
ffffffff81414a2c r __kstrtab_set_memory_uc
ffffffff81414a3a r __kstrtab_lookup_address
ffffffff81414a49 r __kstrtab_clflush_cache_range
ffffffff81414a5d r __kstrtab_pgprot_writecombine
ffffffff81414a71 r __kstrtab___virt_addr_valid
ffffffff81414a83 r __kstrtab___phys_addr
ffffffff81414a8f r __kstrtab_leave_mm
ffffffff81414a98 r __kstrtab___node_distance
ffffffff81414aa8 r __kstrtab_x86_cpu_to_node_map
ffffffff81414abc r __kstrtab_node_to_cpumask_map
ffffffff81414ad0 r __kstrtab_node_data
ffffffff81414ada r __kstrtab_get_task_mm
ffffffff81414ae6 r __kstrtab_mmput
ffffffff81414aec r __kstrtab___mmdrop
ffffffff81414af5 r __kstrtab___put_task_struct
ffffffff81414b07 r __kstrtab_free_task
ffffffff81414b11 r __kstrtab___set_personality
ffffffff81414b23 r __kstrtab_unregister_exec_domain
ffffffff81414b3a r __kstrtab_register_exec_domain
ffffffff81414b4f r __kstrtab_warn_slowpath_null
ffffffff81414b62 r __kstrtab_warn_slowpath_fmt_taint
ffffffff81414b7a r __kstrtab_warn_slowpath_fmt
ffffffff81414b8c r __kstrtab_add_taint
ffffffff81414b96 r __kstrtab_test_taint
ffffffff81414ba1 r __kstrtab_panic
ffffffff81414ba7 r __kstrtab_panic_blink
ffffffff81414bb3 r __kstrtab_panic_notifier_list
ffffffff81414bc7 r __kstrtab_panic_timeout
ffffffff81414bd5 r __kstrtab_kmsg_dump_unregister
ffffffff81414bea r __kstrtab_kmsg_dump_register
ffffffff81414bfd r __kstrtab_printk_timed_ratelimit
ffffffff81414c14 r __kstrtab___printk_ratelimit
ffffffff81414c27 r __kstrtab_unregister_console
ffffffff81414c3a r __kstrtab_register_console
ffffffff81414c4b r __kstrtab_console_start
ffffffff81414c59 r __kstrtab_console_stop
ffffffff81414c66 r __kstrtab_console_conditional_schedule
ffffffff81414c83 r __kstrtab_console_unlock
ffffffff81414c92 r __kstrtab_console_trylock
ffffffff81414ca2 r __kstrtab_console_lock
ffffffff81414caf r __kstrtab_console_suspend_enabled
ffffffff81414cc7 r __kstrtab_vprintk
ffffffff81414ccf r __kstrtab_printk
ffffffff81414cd6 r __kstrtab_console_set_on_cmdline
ffffffff81414ced r __kstrtab_console_drivers
ffffffff81414cfd r __kstrtab_oops_in_progress
ffffffff81414d0e r __kstrtab_cpu_active_mask
ffffffff81414d1e r __kstrtab_cpu_present_mask
ffffffff81414d2f r __kstrtab_cpu_online_mask
ffffffff81414d3f r __kstrtab_cpu_possible_mask
ffffffff81414d51 r __kstrtab_cpu_all_bits
ffffffff81414d5e r __kstrtab_cpu_bit_bitmap
ffffffff81414d6d r __kstrtab_cpu_up
ffffffff81414d74 r __kstrtab_cpu_down
ffffffff81414d7d r __kstrtab_unregister_cpu_notifier
ffffffff81414d95 r __kstrtab_register_cpu_notifier
ffffffff81414dab r __kstrtab_put_online_cpus
ffffffff81414dbb r __kstrtab_get_online_cpus
ffffffff81414dcb r __kstrtab_complete_and_exit
ffffffff81414ddd r __kstrtab_do_exit
ffffffff81414de5 r __kstrtab_daemonize
ffffffff81414def r __kstrtab_disallow_signal
ffffffff81414dff r __kstrtab_allow_signal
ffffffff81414e0c r __kstrtab_jiffies_64_to_clock_t
ffffffff81414e22 r __kstrtab_clock_t_to_jiffies
ffffffff81414e35 r __kstrtab_jiffies_to_clock_t
ffffffff81414e48 r __kstrtab_jiffies_to_timeval
ffffffff81414e5b r __kstrtab_timeval_to_jiffies
ffffffff81414e6e r __kstrtab_jiffies_to_timespec
ffffffff81414e82 r __kstrtab_timespec_to_jiffies
ffffffff81414e96 r __kstrtab_usecs_to_jiffies
ffffffff81414ea7 r __kstrtab_msecs_to_jiffies
ffffffff81414eb8 r __kstrtab_ns_to_timeval
ffffffff81414ec6 r __kstrtab_ns_to_timespec
ffffffff81414ed5 r __kstrtab_set_normalized_timespec
ffffffff81414eed r __kstrtab_mktime
ffffffff81414ef4 r __kstrtab_timespec_trunc
ffffffff81414f03 r __kstrtab_jiffies_to_usecs
ffffffff81414f14 r __kstrtab_jiffies_to_msecs
ffffffff81414f25 r __kstrtab_current_fs_time
ffffffff81414f35 r __kstrtab_sys_tz
ffffffff81414f3c r __kstrtab_send_remote_softirq
ffffffff81414f50 r __kstrtab___send_remote_softirq
ffffffff81414f66 r __kstrtab_softirq_work_list
ffffffff81414f78 r __kstrtab_tasklet_hrtimer_init
ffffffff81414f8d r __kstrtab_tasklet_kill
ffffffff81414f9a r __kstrtab_tasklet_init
ffffffff81414fa7 r __kstrtab___tasklet_hi_schedule_first
ffffffff81414fc3 r __kstrtab___tasklet_hi_schedule
ffffffff81414fd9 r __kstrtab___tasklet_schedule
ffffffff81414fec r __kstrtab_local_bh_enable_ip
ffffffff81414fff r __kstrtab_local_bh_enable
ffffffff8141500f r __kstrtab__local_bh_enable
ffffffff81415020 r __kstrtab_local_bh_disable
ffffffff81415031 r __kstrtab___devm_release_region
ffffffff81415047 r __kstrtab___devm_request_region
ffffffff8141505d r __kstrtab___release_region
ffffffff8141506e r __kstrtab___check_region
ffffffff8141507d r __kstrtab___request_region
ffffffff8141508e r __kstrtab_adjust_resource
ffffffff8141509e r __kstrtab_allocate_resource
ffffffff814150b0 r __kstrtab_release_resource
ffffffff814150c1 r __kstrtab_request_resource
ffffffff814150d2 r __kstrtab_iomem_resource
ffffffff814150e1 r __kstrtab_ioport_resource
ffffffff814150f1 r __kstrtab_proc_doulongvec_ms_jiffies_minmax
ffffffff81415113 r __kstrtab_proc_doulongvec_minmax
ffffffff8141512a r __kstrtab_proc_dostring
ffffffff81415138 r __kstrtab_proc_dointvec_ms_jiffies
ffffffff81415151 r __kstrtab_proc_dointvec_userhz_jiffies
ffffffff8141516e r __kstrtab_proc_dointvec_minmax
ffffffff81415183 r __kstrtab_proc_dointvec_jiffies
ffffffff81415199 r __kstrtab_proc_dointvec
ffffffff814151a7 r __kstrtab_capable
ffffffff814151af r __kstrtab_ns_capable
ffffffff814151ba r __kstrtab___cap_empty_set
ffffffff814151ca r __kstrtab_usleep_range
ffffffff814151d7 r __kstrtab_msleep_interruptible
ffffffff814151ec r __kstrtab_msleep
ffffffff814151f3 r __kstrtab_schedule_timeout_uninterruptible
ffffffff81415214 r __kstrtab_schedule_timeout_killable
ffffffff8141522e r __kstrtab_schedule_timeout_interruptible
ffffffff8141524d r __kstrtab_schedule_timeout
ffffffff8141525e r __kstrtab_del_timer_sync
ffffffff8141526d r __kstrtab_try_to_del_timer_sync
ffffffff81415283 r __kstrtab_del_timer
ffffffff8141528d r __kstrtab_add_timer_on
ffffffff8141529a r __kstrtab_add_timer
ffffffff814152a4 r __kstrtab_mod_timer_pinned
ffffffff814152b5 r __kstrtab_mod_timer
ffffffff814152bf r __kstrtab_mod_timer_pending
ffffffff814152d1 r __kstrtab_init_timer_deferrable_key
ffffffff814152eb r __kstrtab_init_timer_key
ffffffff814152fa r __kstrtab_setup_deferrable_timer_on_stack_key
ffffffff8141531e r __kstrtab_set_timer_slack
ffffffff8141532e r __kstrtab_round_jiffies_up_relative
ffffffff81415348 r __kstrtab_round_jiffies_up
ffffffff81415359 r __kstrtab___round_jiffies_up_relative
ffffffff81415375 r __kstrtab___round_jiffies_up
ffffffff81415388 r __kstrtab_round_jiffies_relative
ffffffff8141539f r __kstrtab_round_jiffies
ffffffff814153ad r __kstrtab___round_jiffies_relative
ffffffff814153c6 r __kstrtab___round_jiffies
ffffffff814153d6 r __kstrtab_boot_tvec_bases
ffffffff814153e6 r __kstrtab_jiffies_64
ffffffff814153f1 r __kstrtab_init_user_ns
ffffffff814153fe r __kstrtab_unblock_all_signals
ffffffff81415412 r __kstrtab_block_all_signals
ffffffff81415424 r __kstrtab_sigprocmask
ffffffff81415430 r __kstrtab_send_sig_info
ffffffff8141543e r __kstrtab_send_sig
ffffffff81415447 r __kstrtab_force_sig
ffffffff81415451 r __kstrtab_flush_signals
ffffffff8141545f r __kstrtab_dequeue_signal
ffffffff8141546e r __kstrtab_recalc_sigpending
ffffffff81415480 r __kstrtab_kill_pid
ffffffff81415489 r __kstrtab_kill_pgrp
ffffffff81415493 r __kstrtab_kill_pid_info_as_cred
ffffffff814154a9 r __kstrtab_orderly_poweroff
ffffffff814154ba r __kstrtab_kernel_power_off
ffffffff814154cb r __kstrtab_kernel_halt
ffffffff814154d7 r __kstrtab_kernel_restart
ffffffff814154e6 r __kstrtab_unregister_reboot_notifier
ffffffff81415501 r __kstrtab_register_reboot_notifier
ffffffff8141551a r __kstrtab_emergency_restart
ffffffff8141552c r __kstrtab_cad_pid
ffffffff81415534 r __kstrtab_fs_overflowgid
ffffffff81415543 r __kstrtab_fs_overflowuid
ffffffff81415552 r __kstrtab_overflowgid
ffffffff8141555e r __kstrtab_overflowuid
ffffffff8141556a r __kstrtab_call_usermodehelper_exec
ffffffff81415583 r __kstrtab_call_usermodehelper_setfns
ffffffff8141559e r __kstrtab_call_usermodehelper_setup
ffffffff814155b8 r __kstrtab_usermodehelper_read_unlock
ffffffff814155d3 r __kstrtab_usermodehelper_read_lock_wait
ffffffff814155f1 r __kstrtab_usermodehelper_read_trylock
ffffffff8141560d r __kstrtab_call_usermodehelper_freeinfo
ffffffff8141562a r __kstrtab___request_module
ffffffff8141563b r __kstrtab_work_on_cpu
ffffffff81415647 r __kstrtab_work_busy
ffffffff81415651 r __kstrtab_work_cpu
ffffffff8141565a r __kstrtab_workqueue_congested
ffffffff8141566e r __kstrtab_workqueue_set_max_active
ffffffff81415687 r __kstrtab_destroy_workqueue
ffffffff81415699 r __kstrtab___alloc_workqueue_key
ffffffff814156af r __kstrtab_execute_in_process_context
ffffffff814156ca r __kstrtab_flush_scheduled_work
ffffffff814156df r __kstrtab_schedule_delayed_work_on
ffffffff814156f8 r __kstrtab_schedule_delayed_work
ffffffff8141570e r __kstrtab_schedule_work_on
ffffffff8141571f r __kstrtab_schedule_work
ffffffff8141572d r __kstrtab_cancel_delayed_work_sync
ffffffff81415746 r __kstrtab_flush_delayed_work_sync
ffffffff8141575e r __kstrtab_flush_delayed_work
ffffffff81415771 r __kstrtab_cancel_work_sync
ffffffff81415782 r __kstrtab_flush_work_sync
ffffffff81415792 r __kstrtab_flush_work
ffffffff8141579d r __kstrtab_drain_workqueue
ffffffff814157ad r __kstrtab_flush_workqueue
ffffffff814157bd r __kstrtab_queue_delayed_work_on
ffffffff814157d3 r __kstrtab_queue_delayed_work
ffffffff814157e6 r __kstrtab_queue_work_on
ffffffff814157f4 r __kstrtab_queue_work
ffffffff814157ff r __kstrtab_system_nrt_freezable_wq
ffffffff81415817 r __kstrtab_system_freezable_wq
ffffffff8141582b r __kstrtab_system_unbound_wq
ffffffff8141583d r __kstrtab_system_nrt_wq
ffffffff8141584b r __kstrtab_system_long_wq
ffffffff8141585a r __kstrtab_system_wq
ffffffff81415864 r __kstrtab_task_active_pid_ns
ffffffff81415877 r __kstrtab_task_tgid_nr_ns
ffffffff81415887 r __kstrtab___task_pid_nr_ns
ffffffff81415898 r __kstrtab_pid_vnr
ffffffff814158a0 r __kstrtab_find_get_pid
ffffffff814158ad r __kstrtab_get_pid_task
ffffffff814158ba r __kstrtab_get_task_pid
ffffffff814158c7 r __kstrtab_pid_task
ffffffff814158d0 r __kstrtab_find_vpid
ffffffff814158da r __kstrtab_find_pid_ns
ffffffff814158e6 r __kstrtab_put_pid
ffffffff814158ee r __kstrtab_is_container_init
ffffffff81415900 r __kstrtab_init_pid_ns
ffffffff8141590c r __kstrtab_do_trace_rcu_torture_read
ffffffff81415926 r __kstrtab_wait_rcu_gp
ffffffff81415932 r __kstrtab___kernel_param_unlock
ffffffff81415948 r __kstrtab___kernel_param_lock
ffffffff8141595c r __kstrtab_param_ops_string
ffffffff8141596d r __kstrtab_param_get_string
ffffffff8141597e r __kstrtab_param_set_copystring
ffffffff81415993 r __kstrtab_param_array_ops
ffffffff814159a3 r __kstrtab_param_ops_bint
ffffffff814159b2 r __kstrtab_param_set_bint
ffffffff814159c1 r __kstrtab_param_ops_invbool
ffffffff814159d3 r __kstrtab_param_get_invbool
ffffffff814159e5 r __kstrtab_param_set_invbool
ffffffff814159f7 r __kstrtab_param_ops_bool
ffffffff81415a06 r __kstrtab_param_get_bool
ffffffff81415a15 r __kstrtab_param_set_bool
ffffffff81415a24 r __kstrtab_param_ops_charp
ffffffff81415a34 r __kstrtab_param_get_charp
ffffffff81415a44 r __kstrtab_param_set_charp
ffffffff81415a54 r __kstrtab_param_ops_ulong
ffffffff81415a64 r __kstrtab_param_get_ulong
ffffffff81415a74 r __kstrtab_param_set_ulong
ffffffff81415a84 r __kstrtab_param_ops_long
ffffffff81415a93 r __kstrtab_param_get_long
ffffffff81415aa2 r __kstrtab_param_set_long
ffffffff81415ab1 r __kstrtab_param_ops_uint
ffffffff81415ac0 r __kstrtab_param_get_uint
ffffffff81415acf r __kstrtab_param_set_uint
ffffffff81415ade r __kstrtab_param_ops_int
ffffffff81415aec r __kstrtab_param_get_int
ffffffff81415afa r __kstrtab_param_set_int
ffffffff81415b08 r __kstrtab_param_ops_ushort
ffffffff81415b19 r __kstrtab_param_get_ushort
ffffffff81415b2a r __kstrtab_param_set_ushort
ffffffff81415b3b r __kstrtab_param_ops_short
ffffffff81415b4b r __kstrtab_param_get_short
ffffffff81415b5b r __kstrtab_param_set_short
ffffffff81415b6b r __kstrtab_param_ops_byte
ffffffff81415b7a r __kstrtab_param_get_byte
ffffffff81415b89 r __kstrtab_param_set_byte
ffffffff81415b98 r __kstrtab_posix_timers_register_clock
ffffffff81415bb4 r __kstrtab_posix_timer_event
ffffffff81415bc6 r __kstrtab_flush_kthread_worker
ffffffff81415bdb r __kstrtab_flush_kthread_work
ffffffff81415bee r __kstrtab_queue_kthread_work
ffffffff81415c01 r __kstrtab_kthread_worker_fn
ffffffff81415c13 r __kstrtab___init_kthread_worker
ffffffff81415c29 r __kstrtab_kthread_stop
ffffffff81415c36 r __kstrtab_kthread_bind
ffffffff81415c43 r __kstrtab_kthread_create_on_node
ffffffff81415c5a r __kstrtab_kthread_freezable_should_stop
ffffffff81415c78 r __kstrtab_kthread_should_stop
ffffffff81415c8c r __kstrtab_bit_waitqueue
ffffffff81415c9a r __kstrtab_wake_up_bit
ffffffff81415ca6 r __kstrtab___wake_up_bit
ffffffff81415cb4 r __kstrtab_out_of_line_wait_on_bit_lock
ffffffff81415cd1 r __kstrtab___wait_on_bit_lock
ffffffff81415ce4 r __kstrtab_out_of_line_wait_on_bit
ffffffff81415cfc r __kstrtab___wait_on_bit
ffffffff81415d0a r __kstrtab_wake_bit_function
ffffffff81415d1c r __kstrtab_autoremove_wake_function
ffffffff81415d35 r __kstrtab_abort_exclusive_wait
ffffffff81415d4a r __kstrtab_finish_wait
ffffffff81415d56 r __kstrtab_prepare_to_wait_exclusive
ffffffff81415d70 r __kstrtab_prepare_to_wait
ffffffff81415d80 r __kstrtab_remove_wait_queue
ffffffff81415d92 r __kstrtab_add_wait_queue_exclusive
ffffffff81415dab r __kstrtab_add_wait_queue
ffffffff81415dba r __kstrtab___init_waitqueue_head
ffffffff81415dd0 r __kstrtab___kfifo_dma_out_finish_r
ffffffff81415de9 r __kstrtab___kfifo_dma_out_prepare_r
ffffffff81415e03 r __kstrtab___kfifo_dma_in_finish_r
ffffffff81415e1b r __kstrtab___kfifo_dma_in_prepare_r
ffffffff81415e34 r __kstrtab___kfifo_to_user_r
ffffffff81415e46 r __kstrtab___kfifo_from_user_r
ffffffff81415e5a r __kstrtab___kfifo_skip_r
ffffffff81415e69 r __kstrtab___kfifo_out_r
ffffffff81415e77 r __kstrtab___kfifo_out_peek_r
ffffffff81415e8a r __kstrtab___kfifo_in_r
ffffffff81415e97 r __kstrtab___kfifo_len_r
ffffffff81415ea5 r __kstrtab___kfifo_dma_out_prepare
ffffffff81415ebd r __kstrtab___kfifo_dma_in_prepare
ffffffff81415ed4 r __kstrtab___kfifo_to_user
ffffffff81415ee4 r __kstrtab___kfifo_from_user
ffffffff81415ef6 r __kstrtab___kfifo_out
ffffffff81415f02 r __kstrtab___kfifo_out_peek
ffffffff81415f13 r __kstrtab___kfifo_in
ffffffff81415f1e r __kstrtab___kfifo_init
ffffffff81415f2b r __kstrtab___kfifo_free
ffffffff81415f38 r __kstrtab___kfifo_alloc
ffffffff81415f46 r __kstrtab_atomic_dec_and_mutex_lock
ffffffff81415f60 r __kstrtab_mutex_trylock
ffffffff81415f6e r __kstrtab_mutex_lock_killable
ffffffff81415f82 r __kstrtab_mutex_lock_interruptible
ffffffff81415f9b r __kstrtab_mutex_unlock
ffffffff81415fa8 r __kstrtab_mutex_lock
ffffffff81415fb3 r __kstrtab___mutex_init
ffffffff81415fc0 r __kstrtab_schedule_hrtimeout
ffffffff81415fd3 r __kstrtab_schedule_hrtimeout_range
ffffffff81415fec r __kstrtab_hrtimer_init_sleeper
ffffffff81416001 r __kstrtab_hrtimer_get_res
ffffffff81416011 r __kstrtab_hrtimer_init
ffffffff8141601e r __kstrtab_hrtimer_get_remaining
ffffffff81416034 r __kstrtab_hrtimer_cancel
ffffffff81416043 r __kstrtab_hrtimer_try_to_cancel
ffffffff81416059 r __kstrtab_hrtimer_start
ffffffff81416067 r __kstrtab_hrtimer_start_range_ns
ffffffff8141607e r __kstrtab_hrtimer_forward
ffffffff8141608e r __kstrtab_ktime_add_safe
ffffffff8141609d r __kstrtab_downgrade_write
ffffffff814160ad r __kstrtab_up_write
ffffffff814160b6 r __kstrtab_up_read
ffffffff814160be r __kstrtab_down_write_trylock
ffffffff814160d1 r __kstrtab_down_write
ffffffff814160dc r __kstrtab_down_read_trylock
ffffffff814160ee r __kstrtab_down_read
ffffffff814160f8 r __kstrtab_srcu_batches_completed
ffffffff8141610f r __kstrtab_synchronize_srcu_expedited
ffffffff8141612a r __kstrtab_synchronize_srcu
ffffffff8141613b r __kstrtab___srcu_read_unlock
ffffffff8141614e r __kstrtab___srcu_read_lock
ffffffff8141615f r __kstrtab_cleanup_srcu_struct
ffffffff81416173 r __kstrtab_init_srcu_struct
ffffffff81416184 r __kstrtab_up
ffffffff81416187 r __kstrtab_down_timeout
ffffffff81416194 r __kstrtab_down_trylock
ffffffff814161a1 r __kstrtab_down_killable
ffffffff814161af r __kstrtab_down_interruptible
ffffffff814161c2 r __kstrtab_down
ffffffff814161c7 r __kstrtab_unregister_die_notifier
ffffffff814161df r __kstrtab_register_die_notifier
ffffffff814161f5 r __kstrtab_srcu_init_notifier_head
ffffffff8141620d r __kstrtab_srcu_notifier_call_chain
ffffffff81416226 r __kstrtab___srcu_notifier_call_chain
ffffffff81416241 r __kstrtab_srcu_notifier_chain_unregister
ffffffff81416260 r __kstrtab_srcu_notifier_chain_register
ffffffff8141627d r __kstrtab_raw_notifier_call_chain
ffffffff81416295 r __kstrtab___raw_notifier_call_chain
ffffffff814162af r __kstrtab_raw_notifier_chain_unregister
ffffffff814162cd r __kstrtab_raw_notifier_chain_register
ffffffff814162e9 r __kstrtab_blocking_notifier_call_chain
ffffffff81416306 r __kstrtab___blocking_notifier_call_chain
ffffffff81416325 r __kstrtab_blocking_notifier_chain_unregister
ffffffff81416348 r __kstrtab_blocking_notifier_chain_cond_register
ffffffff8141636e r __kstrtab_blocking_notifier_chain_register
ffffffff8141638f r __kstrtab_atomic_notifier_call_chain
ffffffff814163aa r __kstrtab___atomic_notifier_call_chain
ffffffff814163c7 r __kstrtab_atomic_notifier_chain_unregister
ffffffff814163e8 r __kstrtab_atomic_notifier_chain_register
ffffffff81416407 r __kstrtab_kernel_kobj
ffffffff81416413 r __kstrtab_set_create_files_as
ffffffff81416427 r __kstrtab_set_security_override_from_ctx
ffffffff81416446 r __kstrtab_set_security_override
ffffffff8141645c r __kstrtab_prepare_kernel_cred
ffffffff81416470 r __kstrtab_revert_creds
ffffffff8141647d r __kstrtab_override_creds
ffffffff8141648c r __kstrtab_abort_creds
ffffffff81416498 r __kstrtab_commit_creds
ffffffff814164a5 r __kstrtab_prepare_creds
ffffffff814164b3 r __kstrtab___put_cred
ffffffff814164be r __kstrtab_async_synchronize_cookie
ffffffff814164d7 r __kstrtab_async_synchronize_cookie_domain
ffffffff814164f7 r __kstrtab_async_synchronize_full_domain
ffffffff81416515 r __kstrtab_async_synchronize_full
ffffffff8141652c r __kstrtab_async_schedule_domain
ffffffff81416542 r __kstrtab_async_schedule
ffffffff81416551 r __kstrtab_in_egroup_p
ffffffff8141655d r __kstrtab_in_group_p
ffffffff81416568 r __kstrtab_set_current_groups
ffffffff8141657b r __kstrtab_set_groups
ffffffff81416586 r __kstrtab_groups_free
ffffffff81416592 r __kstrtab_groups_alloc
ffffffff8141659f r __kstrtab_set_cpus_allowed_ptr
ffffffff814165b4 r __kstrtab_io_schedule
ffffffff814165c0 r __kstrtab_yield_to
ffffffff814165c9 r __kstrtab_yield
ffffffff814165cf r __kstrtab___cond_resched_softirq
ffffffff814165e6 r __kstrtab___cond_resched_lock
ffffffff814165fa r __kstrtab__cond_resched
ffffffff81416608 r __kstrtab_sched_setscheduler
ffffffff8141661b r __kstrtab_task_nice
ffffffff81416625 r __kstrtab_set_user_nice
ffffffff81416633 r __kstrtab_sleep_on_timeout
ffffffff81416644 r __kstrtab_sleep_on
ffffffff8141664d r __kstrtab_interruptible_sleep_on_timeout
ffffffff8141666c r __kstrtab_interruptible_sleep_on
ffffffff81416683 r __kstrtab_completion_done
ffffffff81416693 r __kstrtab_try_wait_for_completion
ffffffff814166ab r __kstrtab_wait_for_completion_killable_timeout
ffffffff814166d0 r __kstrtab_wait_for_completion_killable
ffffffff814166ed r __kstrtab_wait_for_completion_interruptible_timeout
ffffffff81416717 r __kstrtab_wait_for_completion_interruptible
ffffffff81416739 r __kstrtab_wait_for_completion_timeout
ffffffff81416755 r __kstrtab_wait_for_completion
ffffffff81416769 r __kstrtab_complete_all
ffffffff81416776 r __kstrtab_complete
ffffffff8141677f r __kstrtab___wake_up_sync
ffffffff8141678e r __kstrtab___wake_up_sync_key
ffffffff814167a1 r __kstrtab___wake_up_locked_key
ffffffff814167b6 r __kstrtab___wake_up_locked
ffffffff814167c7 r __kstrtab___wake_up
ffffffff814167d1 r __kstrtab_default_wake_function
ffffffff814167e7 r __kstrtab_schedule
ffffffff814167f0 r __kstrtab_kernel_cpustat
ffffffff814167ff r __kstrtab_kstat
ffffffff81416805 r __kstrtab_avenrun
ffffffff8141680d r __kstrtab_wake_up_process
ffffffff8141681d r __kstrtab_kick_process
ffffffff8141682a r __kstrtab_local_clock
ffffffff81416836 r __kstrtab_cpu_clock
ffffffff81416840 r __kstrtab_sched_clock_idle_wakeup_event
ffffffff8141685e r __kstrtab_sched_clock_idle_sleep_event
ffffffff8141687b r __kstrtab_sched_clock
ffffffff81416887 r __kstrtab_pm_qos_remove_notifier
ffffffff8141689e r __kstrtab_pm_qos_add_notifier
ffffffff814168b2 r __kstrtab_pm_qos_remove_request
ffffffff814168c8 r __kstrtab_pm_qos_update_request
ffffffff814168de r __kstrtab_pm_qos_add_request
ffffffff814168f1 r __kstrtab_pm_qos_request_active
ffffffff81416907 r __kstrtab_pm_qos_request
ffffffff81416916 r __kstrtab_pm_wq
ffffffff8141691c r __kstrtab_unregister_pm_notifier
ffffffff81416933 r __kstrtab_register_pm_notifier
ffffffff81416948 r __kstrtab_set_freezable
ffffffff81416956 r __kstrtab___refrigerator
ffffffff81416965 r __kstrtab_freezing_slow_path
ffffffff81416978 r __kstrtab_system_freezing_cnt
ffffffff8141698c r __kstrtab_ktime_get_monotonic_offset
ffffffff814169a7 r __kstrtab_current_kernel_time
ffffffff814169bb r __kstrtab_get_seconds
ffffffff814169c7 r __kstrtab_monotonic_to_bootbased
ffffffff814169de r __kstrtab_ktime_get_boottime
ffffffff814169f1 r __kstrtab_get_monotonic_boottime
ffffffff81416a08 r __kstrtab_getboottime
ffffffff81416a14 r __kstrtab_getrawmonotonic
ffffffff81416a24 r __kstrtab_ktime_get_real
ffffffff81416a33 r __kstrtab_timekeeping_inject_offset
ffffffff81416a4d r __kstrtab_do_settimeofday
ffffffff81416a5d r __kstrtab_do_gettimeofday
ffffffff81416a6d r __kstrtab_ktime_get_ts
ffffffff81416a7a r __kstrtab_ktime_get
ffffffff81416a84 r __kstrtab_getnstimeofday
ffffffff81416a93 r __kstrtab_clocksource_unregister
ffffffff81416aaa r __kstrtab_clocksource_change_rating
ffffffff81416ac4 r __kstrtab_clocksource_register
ffffffff81416ad9 r __kstrtab___clocksource_register_scale
ffffffff81416af6 r __kstrtab___clocksource_updatefreq_scale
ffffffff81416b15 r __kstrtab_timecounter_cyc2time
ffffffff81416b2a r __kstrtab_timecounter_read
ffffffff81416b3b r __kstrtab_timecounter_init
ffffffff81416b4c r __kstrtab_jiffies
ffffffff81416b54 r __kstrtab___timecompare_update
ffffffff81416b69 r __kstrtab_timecompare_offset
ffffffff81416b7c r __kstrtab_timecompare_transform
ffffffff81416b92 r __kstrtab_time_to_tm
ffffffff81416b9d r __kstrtab_posix_clock_unregister
ffffffff81416bb4 r __kstrtab_posix_clock_register
ffffffff81416bc9 r __kstrtab_clockevents_notify
ffffffff81416bdc r __kstrtab_clockevents_register_device
ffffffff81416bf8 r __kstrtab_clockevent_delta2ns
ffffffff81416c0c r __kstrtab___rt_mutex_init
ffffffff81416c1c r __kstrtab_rt_mutex_destroy
ffffffff81416c2d r __kstrtab_rt_mutex_unlock
ffffffff81416c3d r __kstrtab_rt_mutex_trylock
ffffffff81416c4e r __kstrtab_rt_mutex_timed_lock
ffffffff81416c62 r __kstrtab_rt_mutex_lock_interruptible
ffffffff81416c7e r __kstrtab_rt_mutex_lock
ffffffff81416c8c r __kstrtab_dma_spin_lock
ffffffff81416c9a r __kstrtab_free_dma
ffffffff81416ca3 r __kstrtab_request_dma
ffffffff81416caf r __kstrtab_on_each_cpu_cond
ffffffff81416cc0 r __kstrtab_on_each_cpu_mask
ffffffff81416cd1 r __kstrtab_on_each_cpu
ffffffff81416cdd r __kstrtab_nr_cpu_ids
ffffffff81416ce8 r __kstrtab_setup_max_cpus
ffffffff81416cf7 r __kstrtab_smp_call_function
ffffffff81416d09 r __kstrtab_smp_call_function_many
ffffffff81416d20 r __kstrtab_smp_call_function_any
ffffffff81416d36 r __kstrtab_smp_call_function_single
ffffffff81416d4f r __kstrtab_in_lock_functions
ffffffff81416d61 r __kstrtab__raw_write_unlock_bh
ffffffff81416d76 r __kstrtab__raw_write_unlock_irqrestore
ffffffff81416d93 r __kstrtab__raw_write_lock_bh
ffffffff81416da6 r __kstrtab__raw_write_lock_irq
ffffffff81416dba r __kstrtab__raw_write_lock_irqsave
ffffffff81416dd2 r __kstrtab__raw_write_lock
ffffffff81416de2 r __kstrtab__raw_write_trylock
ffffffff81416df5 r __kstrtab__raw_read_unlock_bh
ffffffff81416e09 r __kstrtab__raw_read_unlock_irqrestore
ffffffff81416e25 r __kstrtab__raw_read_lock_bh
ffffffff81416e37 r __kstrtab__raw_read_lock_irq
ffffffff81416e4a r __kstrtab__raw_read_lock_irqsave
ffffffff81416e61 r __kstrtab__raw_read_lock
ffffffff81416e70 r __kstrtab__raw_read_trylock
ffffffff81416e82 r __kstrtab__raw_spin_unlock_bh
ffffffff81416e96 r __kstrtab__raw_spin_unlock_irqrestore
ffffffff81416eb2 r __kstrtab__raw_spin_lock_bh
ffffffff81416ec4 r __kstrtab__raw_spin_lock_irq
ffffffff81416ed7 r __kstrtab__raw_spin_lock_irqsave
ffffffff81416eee r __kstrtab__raw_spin_lock
ffffffff81416efd r __kstrtab__raw_spin_trylock_bh
ffffffff81416f12 r __kstrtab__raw_spin_trylock
ffffffff81416f24 r __kstrtab___module_text_address
ffffffff81416f3a r __kstrtab___module_address
ffffffff81416f4b r __kstrtab___symbol_get
ffffffff81416f58 r __kstrtab_module_put
ffffffff81416f63 r __kstrtab_try_module_get
ffffffff81416f72 r __kstrtab___module_get
ffffffff81416f7f r __kstrtab_symbol_put_addr
ffffffff81416f8f r __kstrtab___symbol_put
ffffffff81416f9c r __kstrtab_module_refcount
ffffffff81416fac r __kstrtab_ref_module
ffffffff81416fb7 r __kstrtab_find_module
ffffffff81416fc3 r __kstrtab_find_symbol
ffffffff81416fcf r __kstrtab_each_symbol_section
ffffffff81416fe3 r __kstrtab___module_put_and_exit
ffffffff81416ff9 r __kstrtab_unregister_module_notifier
ffffffff81417014 r __kstrtab_register_module_notifier
ffffffff8141702d r __kstrtab_module_mutex
ffffffff8141703a r __kstrtab___print_symbol
ffffffff81417049 r __kstrtab_sprint_symbol
ffffffff81417057 r __kstrtab_kallsyms_on_each_symbol
ffffffff8141706f r __kstrtab_kallsyms_lookup_name
ffffffff81417084 r __kstrtab_compat_alloc_user_space
ffffffff8141709c r __kstrtab_sigset_from_compat
ffffffff814170af r __kstrtab_compat_put_timespec
ffffffff814170c3 r __kstrtab_compat_get_timespec
ffffffff814170d7 r __kstrtab_compat_put_timeval
ffffffff814170ea r __kstrtab_compat_get_timeval
ffffffff814170fd r __kstrtab_put_compat_timespec
ffffffff81417111 r __kstrtab_get_compat_timespec
ffffffff81417125 r __kstrtab_put_compat_timeval
ffffffff81417138 r __kstrtab_get_compat_timeval
ffffffff8141714b r __kstrtab_css_lookup
ffffffff81417156 r __kstrtab_free_css_id
ffffffff81417162 r __kstrtab_css_depth
ffffffff8141716c r __kstrtab_css_id
ffffffff81417173 r __kstrtab___css_put
ffffffff8141717d r __kstrtab_cgroup_unload_subsys
ffffffff81417192 r __kstrtab_cgroup_load_subsys
ffffffff814171a5 r __kstrtab_cgroup_add_files
ffffffff814171b6 r __kstrtab_cgroup_add_file
ffffffff814171c6 r __kstrtab_cgroup_lock_live_group
ffffffff814171dd r __kstrtab_cgroup_attach_task_all
ffffffff814171f4 r __kstrtab_cgroup_taskset_size
ffffffff81417208 r __kstrtab_cgroup_taskset_cur_cgroup
ffffffff81417222 r __kstrtab_cgroup_taskset_next
ffffffff81417236 r __kstrtab_cgroup_taskset_first
ffffffff8141724b r __kstrtab_cgroup_path
ffffffff81417257 r __kstrtab_cgroup_unlock
ffffffff81417265 r __kstrtab_cgroup_lock
ffffffff81417271 r __kstrtab_cgroup_lock_is_held
ffffffff81417285 r __kstrtab_cpuset_mem_spread_node
ffffffff8141729c r __kstrtab_stop_machine
ffffffff814172a9 r __kstrtab_audit_log
ffffffff814172b3 r __kstrtab_audit_log_format
ffffffff814172c4 r __kstrtab_audit_log_end
ffffffff814172d2 r __kstrtab_audit_log_start
ffffffff814172e2 r __kstrtab_audit_enabled
ffffffff814172f0 r __kstrtab___audit_inode_child
ffffffff81417304 r __kstrtab_audit_log_task_context
ffffffff8141731b r __kstrtab___irq_alloc_descs
ffffffff8141732d r __kstrtab_irq_free_descs
ffffffff8141733c r __kstrtab_generic_handle_irq
ffffffff8141734f r __kstrtab_irq_to_desc
ffffffff8141735b r __kstrtab_nr_irqs
ffffffff81417363 r __kstrtab_request_any_context_irq
ffffffff8141737b r __kstrtab_request_threaded_irq
ffffffff81417390 r __kstrtab_free_irq
ffffffff81417399 r __kstrtab_remove_irq
ffffffff814173a4 r __kstrtab_setup_irq
ffffffff814173ae r __kstrtab_irq_set_irq_wake
ffffffff814173bf r __kstrtab_enable_irq
ffffffff814173ca r __kstrtab_disable_irq
ffffffff814173d6 r __kstrtab_disable_irq_nosync
ffffffff814173e9 r __kstrtab_irq_set_affinity_notifier
ffffffff81417403 r __kstrtab_irq_set_affinity_hint
ffffffff81417419 r __kstrtab_synchronize_irq
ffffffff81417429 r __kstrtab_irq_modify_status
ffffffff8141743b r __kstrtab___irq_set_handler
ffffffff8141744d r __kstrtab_handle_edge_irq
ffffffff8141745d r __kstrtab_handle_level_irq
ffffffff8141746e r __kstrtab_handle_simple_irq
ffffffff81417480 r __kstrtab_handle_nested_irq
ffffffff81417492 r __kstrtab_irq_get_irq_data
ffffffff814174a3 r __kstrtab_irq_set_chip_data
ffffffff814174b5 r __kstrtab_irq_set_handler_data
ffffffff814174ca r __kstrtab_irq_set_irq_type
ffffffff814174db r __kstrtab_irq_set_chip
ffffffff814174e8 r __kstrtab_devm_free_irq
ffffffff814174f6 r __kstrtab_devm_request_threaded_irq
ffffffff81417510 r __kstrtab_probe_irq_off
ffffffff8141751e r __kstrtab_probe_irq_mask
ffffffff8141752d r __kstrtab_probe_irq_on
ffffffff8141753a r __kstrtab_resume_device_irqs
ffffffff8141754d r __kstrtab_suspend_device_irqs
ffffffff81417561 r __kstrtab_rcu_barrier
ffffffff8141756d r __kstrtab_synchronize_rcu_expedited
ffffffff81417587 r __kstrtab_kfree_call_rcu
ffffffff81417596 r __kstrtab_rcu_force_quiescent_state
ffffffff814175b0 r __kstrtab_rcu_batches_completed
ffffffff814175c6 r __kstrtab_rcu_barrier_sched
ffffffff814175d8 r __kstrtab_rcu_barrier_bh
ffffffff814175e7 r __kstrtab_synchronize_sched_expedited
ffffffff81417603 r __kstrtab_synchronize_rcu_bh
ffffffff81417616 r __kstrtab_synchronize_sched
ffffffff81417628 r __kstrtab_call_rcu_bh
ffffffff81417634 r __kstrtab_call_rcu_sched
ffffffff81417643 r __kstrtab_rcu_idle_exit
ffffffff81417651 r __kstrtab_rcu_idle_enter
ffffffff81417660 r __kstrtab_rcu_sched_force_quiescent_state
ffffffff81417680 r __kstrtab_rcutorture_record_progress
ffffffff8141769b r __kstrtab_rcutorture_record_test_transition
ffffffff814176bd r __kstrtab_rcu_bh_force_quiescent_state
ffffffff814176da r __kstrtab_rcu_batches_completed_bh
ffffffff814176f3 r __kstrtab_rcu_batches_completed_sched
ffffffff8141770f r __kstrtab_rcu_note_context_switch
ffffffff81417727 r __kstrtab_rcu_scheduler_active
ffffffff8141773c r __kstrtab_delayacct_on
ffffffff81417749 r __kstrtab_irq_work_sync
ffffffff81417757 r __kstrtab_irq_work_run
ffffffff81417764 r __kstrtab_irq_work_queue
ffffffff81417773 r __kstrtab_perf_event_create_kernel_counter
ffffffff81417794 r __kstrtab_perf_swevent_get_recursion_context
ffffffff814177b7 r __kstrtab_perf_unregister_guest_info_callbacks
ffffffff814177dc r __kstrtab_perf_register_guest_info_callbacks
ffffffff814177ff r __kstrtab_perf_event_read_value
ffffffff81417815 r __kstrtab_perf_event_release_kernel
ffffffff8141782f r __kstrtab_perf_event_refresh
ffffffff81417842 r __kstrtab_perf_event_enable
ffffffff81417854 r __kstrtab_perf_event_disable
ffffffff81417867 r __kstrtab_unregister_wide_hw_breakpoint
ffffffff81417885 r __kstrtab_register_wide_hw_breakpoint
ffffffff814178a1 r __kstrtab_unregister_hw_breakpoint
ffffffff814178ba r __kstrtab_modify_user_hw_breakpoint
ffffffff814178d4 r __kstrtab_register_user_hw_breakpoint
ffffffff814178f0 r __kstrtab_try_to_release_page
ffffffff81417904 r __kstrtab_generic_file_aio_write
ffffffff8141791b r __kstrtab___generic_file_aio_write
ffffffff81417934 r __kstrtab_generic_file_buffered_write
ffffffff81417950 r __kstrtab_grab_cache_page_write_begin
ffffffff8141796c r __kstrtab_generic_file_direct_write
ffffffff81417986 r __kstrtab_pagecache_write_end
ffffffff8141799a r __kstrtab_pagecache_write_begin
ffffffff814179b0 r __kstrtab_generic_write_checks
ffffffff814179c5 r __kstrtab_iov_iter_single_seg_count
ffffffff814179df r __kstrtab_iov_iter_fault_in_readable
ffffffff814179fa r __kstrtab_iov_iter_advance
ffffffff81417a0b r __kstrtab_iov_iter_copy_from_user
ffffffff81417a23 r __kstrtab_iov_iter_copy_from_user_atomic
ffffffff81417a42 r __kstrtab_file_remove_suid
ffffffff81417a53 r __kstrtab_should_remove_suid
ffffffff81417a66 r __kstrtab_read_cache_page
ffffffff81417a76 r __kstrtab_read_cache_page_gfp
ffffffff81417a8a r __kstrtab_read_cache_page_async
ffffffff81417aa0 r __kstrtab_generic_file_readonly_mmap
ffffffff81417abb r __kstrtab_generic_file_mmap
ffffffff81417acd r __kstrtab_filemap_fault
ffffffff81417adb r __kstrtab_generic_file_aio_read
ffffffff81417af1 r __kstrtab_generic_segment_checks
ffffffff81417b08 r __kstrtab_grab_cache_page_nowait
ffffffff81417b1f r __kstrtab_find_get_pages_tag
ffffffff81417b32 r __kstrtab_find_get_pages_contig
ffffffff81417b48 r __kstrtab_find_or_create_page
ffffffff81417b5c r __kstrtab_find_lock_page
ffffffff81417b6b r __kstrtab_find_get_page
ffffffff81417b79 r __kstrtab___lock_page_killable
ffffffff81417b8e r __kstrtab___lock_page
ffffffff81417b9a r __kstrtab_end_page_writeback
ffffffff81417bad r __kstrtab_unlock_page
ffffffff81417bb9 r __kstrtab_add_page_wait_queue
ffffffff81417bcd r __kstrtab_wait_on_page_bit
ffffffff81417bde r __kstrtab___page_cache_alloc
ffffffff81417bf1 r __kstrtab_add_to_page_cache_lru
ffffffff81417c07 r __kstrtab_add_to_page_cache_locked
ffffffff81417c20 r __kstrtab_replace_page_cache_page
ffffffff81417c38 r __kstrtab_filemap_write_and_wait_range
ffffffff81417c55 r __kstrtab_filemap_write_and_wait
ffffffff81417c6c r __kstrtab_filemap_fdatawait
ffffffff81417c7e r __kstrtab_filemap_fdatawait_range
ffffffff81417c96 r __kstrtab_filemap_flush
ffffffff81417ca4 r __kstrtab_filemap_fdatawrite_range
ffffffff81417cbd r __kstrtab_filemap_fdatawrite
ffffffff81417cd0 r __kstrtab_delete_from_page_cache
ffffffff81417ce7 r __kstrtab_mempool_free_pages
ffffffff81417cfa r __kstrtab_mempool_alloc_pages
ffffffff81417d0e r __kstrtab_mempool_kfree
ffffffff81417d1c r __kstrtab_mempool_kmalloc
ffffffff81417d2c r __kstrtab_mempool_free_slab
ffffffff81417d3e r __kstrtab_mempool_alloc_slab
ffffffff81417d51 r __kstrtab_mempool_free
ffffffff81417d5e r __kstrtab_mempool_alloc
ffffffff81417d6c r __kstrtab_mempool_resize
ffffffff81417d7b r __kstrtab_mempool_create_node
ffffffff81417d8f r __kstrtab_mempool_create
ffffffff81417d9e r __kstrtab_mempool_destroy
ffffffff81417dae r __kstrtab_unregister_oom_notifier
ffffffff81417dc6 r __kstrtab_register_oom_notifier
ffffffff81417ddc r __kstrtab_probe_kernel_write
ffffffff81417def r __kstrtab_probe_kernel_read
ffffffff81417e01 r __kstrtab_si_meminfo
ffffffff81417e0c r __kstrtab_nr_free_buffer_pages
ffffffff81417e21 r __kstrtab_free_pages_exact
ffffffff81417e32 r __kstrtab_alloc_pages_exact_nid
ffffffff81417e48 r __kstrtab_alloc_pages_exact
ffffffff81417e5a r __kstrtab_free_pages
ffffffff81417e65 r __kstrtab___free_pages
ffffffff81417e72 r __kstrtab_get_zeroed_page
ffffffff81417e82 r __kstrtab___get_free_pages
ffffffff81417e93 r __kstrtab___alloc_pages_nodemask
ffffffff81417eaa r __kstrtab_nr_online_nodes
ffffffff81417eba r __kstrtab_nr_node_ids
ffffffff81417ec6 r __kstrtab_movable_zone
ffffffff81417ed3 r __kstrtab_totalram_pages
ffffffff81417ee2 r __kstrtab_node_states
ffffffff81417eee r __kstrtab_numa_node
ffffffff81417ef8 r __kstrtab_mapping_tagged
ffffffff81417f07 r __kstrtab_test_set_page_writeback
ffffffff81417f1f r __kstrtab_clear_page_dirty_for_io
ffffffff81417f37 r __kstrtab_set_page_dirty_lock
ffffffff81417f4b r __kstrtab_set_page_dirty
ffffffff81417f5a r __kstrtab_redirty_page_for_writepage
ffffffff81417f75 r __kstrtab_account_page_redirty
ffffffff81417f8a r __kstrtab___set_page_dirty_nobuffers
ffffffff81417fa5 r __kstrtab_account_page_writeback
ffffffff81417fbc r __kstrtab_account_page_dirtied
ffffffff81417fd1 r __kstrtab_write_one_page
ffffffff81417fe0 r __kstrtab_generic_writepages
ffffffff81417ff3 r __kstrtab_write_cache_pages
ffffffff81418005 r __kstrtab_tag_pages_for_writeback
ffffffff8141801d r __kstrtab_balance_dirty_pages_ratelimited_nr
ffffffff81418040 r __kstrtab_bdi_set_max_ratio
ffffffff81418052 r __kstrtab_bdi_writeout_inc
ffffffff81418063 r __kstrtab_laptop_mode
ffffffff8141806f r __kstrtab_dirty_writeback_interval
ffffffff81418088 r __kstrtab_page_cache_async_readahead
ffffffff814180a3 r __kstrtab_page_cache_sync_readahead
ffffffff814180bd r __kstrtab_read_cache_pages
ffffffff814180ce r __kstrtab_file_ra_state_init
ffffffff814180e1 r __kstrtab_pagevec_lookup_tag
ffffffff814180f4 r __kstrtab_pagevec_lookup
ffffffff81418103 r __kstrtab___pagevec_lru_add
ffffffff81418115 r __kstrtab___pagevec_release
ffffffff81418127 r __kstrtab_release_pages
ffffffff81418135 r __kstrtab___lru_cache_add
ffffffff81418145 r __kstrtab_mark_page_accessed
ffffffff81418158 r __kstrtab_put_pages_list
ffffffff81418167 r __kstrtab___get_page_tail
ffffffff81418177 r __kstrtab_put_page
ffffffff81418180 r __kstrtab_truncate_pagecache_range
ffffffff81418199 r __kstrtab_vmtruncate
ffffffff814181a4 r __kstrtab_truncate_setsize
ffffffff814181b5 r __kstrtab_truncate_pagecache
ffffffff814181c8 r __kstrtab_invalidate_inode_pages2
ffffffff814181e0 r __kstrtab_invalidate_inode_pages2_range
ffffffff814181fe r __kstrtab_invalidate_mapping_pages
ffffffff81418217 r __kstrtab_truncate_inode_pages
ffffffff8141822c r __kstrtab_truncate_inode_pages_range
ffffffff81418247 r __kstrtab_generic_error_remove_page
ffffffff81418261 r __kstrtab_cancel_dirty_page
ffffffff81418273 r __kstrtab_unregister_shrinker
ffffffff81418287 r __kstrtab_register_shrinker
ffffffff81418299 r __kstrtab_shmem_read_mapping_page_gfp
ffffffff814182b5 r __kstrtab_shmem_file_setup
ffffffff814182c6 r __kstrtab_shmem_truncate_range
ffffffff814182db r __kstrtab_get_user_pages_fast
ffffffff814182ef r __kstrtab___get_user_pages_fast
ffffffff81418305 r __kstrtab_strndup_user
ffffffff81418312 r __kstrtab_kzfree
ffffffff81418319 r __kstrtab_krealloc
ffffffff81418322 r __kstrtab___krealloc
ffffffff8141832d r __kstrtab_memdup_user
ffffffff81418339 r __kstrtab_kmemdup
ffffffff81418341 r __kstrtab_kstrndup
ffffffff8141834a r __kstrtab_kstrdup
ffffffff81418352 r __kstrtab_dec_zone_page_state
ffffffff81418366 r __kstrtab_inc_zone_page_state
ffffffff8141837a r __kstrtab_mod_zone_page_state
ffffffff8141838e r __kstrtab___dec_zone_page_state
ffffffff814183a4 r __kstrtab___inc_zone_page_state
ffffffff814183ba r __kstrtab___mod_zone_page_state
ffffffff814183d0 r __kstrtab_vm_stat
ffffffff814183d8 r __kstrtab_all_vm_events
ffffffff814183e6 r __kstrtab_vm_event_states
ffffffff814183f6 r __kstrtab_wait_iff_congested
ffffffff81418409 r __kstrtab_congestion_wait
ffffffff81418419 r __kstrtab_set_bdi_congested
ffffffff8141842b r __kstrtab_clear_bdi_congested
ffffffff8141843f r __kstrtab_bdi_setup_and_register
ffffffff81418456 r __kstrtab_bdi_destroy
ffffffff81418462 r __kstrtab_bdi_init
ffffffff8141846b r __kstrtab_bdi_unregister
ffffffff8141847a r __kstrtab_bdi_register_dev
ffffffff8141848b r __kstrtab_bdi_register
ffffffff81418498 r __kstrtab_noop_backing_dev_info
ffffffff814184ae r __kstrtab_default_backing_dev_info
ffffffff814184c7 r __kstrtab_mm_kobj
ffffffff814184cf r __kstrtab_unuse_mm
ffffffff814184d8 r __kstrtab_use_mm
ffffffff814184df r __kstrtab_free_percpu
ffffffff814184eb r __kstrtab___alloc_percpu
ffffffff814184fa r __kstrtab_pcpu_base_addr
ffffffff81418509 r __kstrtab_follow_pfn
ffffffff81418514 r __kstrtab_unmap_mapping_range
ffffffff81418528 r __kstrtab_apply_to_page_range
ffffffff8141853c r __kstrtab_remap_pfn_range
ffffffff8141854c r __kstrtab_vm_insert_mixed
ffffffff8141855c r __kstrtab_vm_insert_pfn
ffffffff8141856a r __kstrtab_vm_insert_page
ffffffff81418579 r __kstrtab_get_user_pages
ffffffff81418588 r __kstrtab___get_user_pages
ffffffff81418599 r __kstrtab_zap_vma_ptes
ffffffff814185a6 r __kstrtab_high_memory
ffffffff814185b2 r __kstrtab_num_physpages
ffffffff814185c0 r __kstrtab_can_do_mlock
ffffffff814185cd r __kstrtab_vm_brk
ffffffff814185d4 r __kstrtab_vm_munmap
ffffffff814185de r __kstrtab_do_munmap
ffffffff814185e8 r __kstrtab_find_vma
ffffffff814185f1 r __kstrtab_get_unmapped_area
ffffffff81418603 r __kstrtab_vm_mmap
ffffffff8141860b r __kstrtab_do_mmap
ffffffff81418613 r __kstrtab_vm_get_page_prot
ffffffff81418624 r __kstrtab_page_mkclean
ffffffff81418631 r __kstrtab_free_vm_area
ffffffff8141863e r __kstrtab_alloc_vm_area
ffffffff8141864c r __kstrtab_remap_vmalloc_range
ffffffff81418660 r __kstrtab_vmalloc_32_user
ffffffff81418670 r __kstrtab_vmalloc_32
ffffffff8141867b r __kstrtab_vzalloc_node
ffffffff81418688 r __kstrtab_vmalloc_node
ffffffff81418695 r __kstrtab_vmalloc_user
ffffffff814186a2 r __kstrtab_vzalloc
ffffffff814186aa r __kstrtab_vmalloc
ffffffff814186b2 r __kstrtab___vmalloc
ffffffff814186bc r __kstrtab_vmap
ffffffff814186c1 r __kstrtab_vunmap
ffffffff814186c8 r __kstrtab_vfree
ffffffff814186ce r __kstrtab___get_vm_area
ffffffff814186dc r __kstrtab_map_vm_area
ffffffff814186e8 r __kstrtab_unmap_kernel_range_noflush
ffffffff81418703 r __kstrtab_vm_map_ram
ffffffff8141870e r __kstrtab_vm_unmap_ram
ffffffff8141871b r __kstrtab_vm_unmap_aliases
ffffffff8141872c r __kstrtab_vmalloc_to_pfn
ffffffff8141873b r __kstrtab_vmalloc_to_page
ffffffff8141874b r __kstrtab_blk_queue_bounce
ffffffff8141875c r __kstrtab_dmam_pool_destroy
ffffffff8141876e r __kstrtab_dmam_pool_create
ffffffff8141877f r __kstrtab_dma_pool_free
ffffffff8141878d r __kstrtab_dma_pool_alloc
ffffffff8141879c r __kstrtab_dma_pool_destroy
ffffffff814187ad r __kstrtab_dma_pool_create
ffffffff814187bd r __kstrtab_PageHuge
ffffffff814187c6 r __kstrtab_vma_kernel_pagesize
ffffffff814187da r __kstrtab_alloc_pages_current
ffffffff814187ee r __kstrtab_mem_section
ffffffff814187fa r __kstrtab_mmu_notifier_unregister
ffffffff81418812 r __kstrtab___mmu_notifier_register
ffffffff8141882a r __kstrtab_mmu_notifier_register
ffffffff81418840 r __kstrtab_ksize
ffffffff81418846 r __kstrtab_kmem_cache_size
ffffffff81418856 r __kstrtab_kfree
ffffffff8141885c r __kstrtab_kmem_cache_free
ffffffff8141886c r __kstrtab___kmalloc
ffffffff81418876 r __kstrtab___kmalloc_node
ffffffff81418885 r __kstrtab_kmem_cache_alloc_node
ffffffff8141889b r __kstrtab_kmem_cache_alloc
ffffffff814188ac r __kstrtab_kmem_cache_destroy
ffffffff814188bf r __kstrtab_kmem_cache_shrink
ffffffff814188d1 r __kstrtab_kmem_cache_create
ffffffff814188e3 r __kstrtab_malloc_sizes
ffffffff814188f0 r __kstrtab_buffer_migrate_page
ffffffff81418904 r __kstrtab_migrate_page
ffffffff81418911 r __kstrtab_fail_migrate_page
ffffffff81418923 r __kstrtab_unpoison_memory
ffffffff81418933 r __kstrtab_memory_failure_queue
ffffffff81418948 r __kstrtab_memory_failure
ffffffff81418957 r __kstrtab_shake_page
ffffffff81418962 r __kstrtab_hwpoison_filter
ffffffff81418972 r __kstrtab___cleancache_invalidate_fs
ffffffff8141898d r __kstrtab___cleancache_invalidate_inode
ffffffff814189ab r __kstrtab___cleancache_invalidate_page
ffffffff814189c8 r __kstrtab___cleancache_put_page
ffffffff814189de r __kstrtab___cleancache_get_page
ffffffff814189f4 r __kstrtab___cleancache_init_shared_fs
ffffffff81418a10 r __kstrtab___cleancache_init_fs
ffffffff81418a25 r __kstrtab_cleancache_register_ops
ffffffff81418a3d r __kstrtab_cleancache_enabled
ffffffff81418a50 r __kstrtab_nonseekable_open
ffffffff81418a61 r __kstrtab_generic_file_open
ffffffff81418a73 r __kstrtab_sys_close
ffffffff81418a7d r __kstrtab_filp_close
ffffffff81418a88 r __kstrtab_file_open_root
ffffffff81418a97 r __kstrtab_filp_open
ffffffff81418aa1 r __kstrtab_fd_install
ffffffff81418aac r __kstrtab_put_unused_fd
ffffffff81418aba r __kstrtab_dentry_open
ffffffff81418ac6 r __kstrtab_lookup_instantiate_filp
ffffffff81418ade r __kstrtab_vfs_writev
ffffffff81418ae9 r __kstrtab_vfs_readv
ffffffff81418af3 r __kstrtab_iov_shorten
ffffffff81418aff r __kstrtab_vfs_write
ffffffff81418b09 r __kstrtab_do_sync_write
ffffffff81418b17 r __kstrtab_vfs_read
ffffffff81418b20 r __kstrtab_do_sync_read
ffffffff81418b2d r __kstrtab_vfs_llseek
ffffffff81418b38 r __kstrtab_default_llseek
ffffffff81418b47 r __kstrtab_no_llseek
ffffffff81418b51 r __kstrtab_noop_llseek
ffffffff81418b5d r __kstrtab_generic_file_llseek
ffffffff81418b71 r __kstrtab_generic_file_llseek_size
ffffffff81418b8a r __kstrtab_generic_ro_fops
ffffffff81418b9a r __kstrtab_fget_raw
ffffffff81418ba3 r __kstrtab_fget
ffffffff81418ba8 r __kstrtab_fput
ffffffff81418bad r __kstrtab_alloc_file
ffffffff81418bb8 r __kstrtab_get_max_files
ffffffff81418bc6 r __kstrtab_files_lglock_global_unlock
ffffffff81418be1 r __kstrtab_files_lglock_global_lock
ffffffff81418bfa r __kstrtab_files_lglock_global_unlock_online
ffffffff81418c1c r __kstrtab_files_lglock_global_lock_online
ffffffff81418c3c r __kstrtab_files_lglock_local_unlock_cpu
ffffffff81418c5a r __kstrtab_files_lglock_local_lock_cpu
ffffffff81418c76 r __kstrtab_files_lglock_local_unlock
ffffffff81418c90 r __kstrtab_files_lglock_local_lock
ffffffff81418ca8 r __kstrtab_files_lglock_lock_init
ffffffff81418cbf r __kstrtab_thaw_super
ffffffff81418cca r __kstrtab_freeze_super
ffffffff81418cd7 r __kstrtab_mount_single
ffffffff81418ce4 r __kstrtab_mount_nodev
ffffffff81418cf0 r __kstrtab_kill_block_super
ffffffff81418d01 r __kstrtab_mount_bdev
ffffffff81418d0c r __kstrtab_mount_ns
ffffffff81418d15 r __kstrtab_kill_litter_super
ffffffff81418d27 r __kstrtab_kill_anon_super
ffffffff81418d37 r __kstrtab_set_anon_super
ffffffff81418d46 r __kstrtab_free_anon_bdev
ffffffff81418d55 r __kstrtab_get_anon_bdev
ffffffff81418d63 r __kstrtab_get_super_thawed
ffffffff81418d74 r __kstrtab_get_super
ffffffff81418d7e r __kstrtab_iterate_supers_type
ffffffff81418d92 r __kstrtab_drop_super
ffffffff81418d9d r __kstrtab_sget
ffffffff81418da2 r __kstrtab_generic_shutdown_super
ffffffff81418db9 r __kstrtab_unlock_super
ffffffff81418dc6 r __kstrtab_lock_super
ffffffff81418dd1 r __kstrtab_deactivate_super
ffffffff81418de2 r __kstrtab_deactivate_locked_super
ffffffff81418dfa r __kstrtab_directly_mappable_cdev_bdi
ffffffff81418e15 r __kstrtab___unregister_chrdev
ffffffff81418e29 r __kstrtab___register_chrdev
ffffffff81418e3b r __kstrtab_cdev_add
ffffffff81418e44 r __kstrtab_cdev_del
ffffffff81418e4d r __kstrtab_cdev_alloc
ffffffff81418e58 r __kstrtab_cdev_init
ffffffff81418e62 r __kstrtab_alloc_chrdev_region
ffffffff81418e76 r __kstrtab_unregister_chrdev_region
ffffffff81418e8f r __kstrtab_register_chrdev_region
ffffffff81418ea6 r __kstrtab_inode_set_bytes
ffffffff81418eb6 r __kstrtab_inode_get_bytes
ffffffff81418ec6 r __kstrtab_inode_sub_bytes
ffffffff81418ed6 r __kstrtab_inode_add_bytes
ffffffff81418ee6 r __kstrtab_vfs_lstat
ffffffff81418ef0 r __kstrtab_vfs_stat
ffffffff81418ef9 r __kstrtab_vfs_fstatat
ffffffff81418f05 r __kstrtab_vfs_fstat
ffffffff81418f0f r __kstrtab_vfs_getattr
ffffffff81418f1b r __kstrtab_generic_fillattr
ffffffff81418f2c r __kstrtab_dump_seek
ffffffff81418f36 r __kstrtab_dump_write
ffffffff81418f41 r __kstrtab_set_binfmt
ffffffff81418f4c r __kstrtab_search_binary_handler
ffffffff81418f62 r __kstrtab_remove_arg_zero
ffffffff81418f72 r __kstrtab_prepare_binprm
ffffffff81418f81 r __kstrtab_install_exec_creds
ffffffff81418f94 r __kstrtab_setup_new_exec
ffffffff81418fa3 r __kstrtab_would_dump
ffffffff81418fae r __kstrtab_flush_old_exec
ffffffff81418fbd r __kstrtab_get_task_comm
ffffffff81418fcb r __kstrtab_kernel_read
ffffffff81418fd7 r __kstrtab_open_exec
ffffffff81418fe1 r __kstrtab_setup_arg_pages
ffffffff81418ff1 r __kstrtab_copy_strings_kernel
ffffffff81419005 r __kstrtab_unregister_binfmt
ffffffff81419017 r __kstrtab___register_binfmt
ffffffff81419029 r __kstrtab_generic_pipe_buf_release
ffffffff81419042 r __kstrtab_generic_pipe_buf_confirm
ffffffff8141905b r __kstrtab_generic_pipe_buf_get
ffffffff81419070 r __kstrtab_generic_pipe_buf_steal
ffffffff81419087 r __kstrtab_generic_pipe_buf_unmap
ffffffff8141909e r __kstrtab_generic_pipe_buf_map
ffffffff814190b3 r __kstrtab_pipe_unlock
ffffffff814190bf r __kstrtab_pipe_lock
ffffffff814190c9 r __kstrtab_generic_readlink
ffffffff814190da r __kstrtab_dentry_unhash
ffffffff814190e8 r __kstrtab_vfs_unlink
ffffffff814190f3 r __kstrtab_vfs_symlink
ffffffff814190ff r __kstrtab_vfs_rmdir
ffffffff81419109 r __kstrtab_vfs_rename
ffffffff81419114 r __kstrtab_vfs_readlink
ffffffff81419121 r __kstrtab_generic_permission
ffffffff81419134 r __kstrtab_vfs_mknod
ffffffff8141913e r __kstrtab_vfs_mkdir
ffffffff81419148 r __kstrtab_vfs_link
ffffffff81419151 r __kstrtab_vfs_follow_link
ffffffff81419161 r __kstrtab_vfs_create
ffffffff8141916c r __kstrtab_unlock_rename
ffffffff8141917a r __kstrtab_inode_permission
ffffffff8141918b r __kstrtab_vfs_path_lookup
ffffffff8141919b r __kstrtab_kern_path
ffffffff814191a5 r __kstrtab_page_symlink_inode_operations
ffffffff814191c3 r __kstrtab_page_symlink
ffffffff814191d0 r __kstrtab___page_symlink
ffffffff814191df r __kstrtab_page_readlink
ffffffff814191ed r __kstrtab_page_put_link
ffffffff814191fb r __kstrtab_page_follow_link_light
ffffffff81419212 r __kstrtab_lookup_one_len
ffffffff81419221 r __kstrtab_lock_rename
ffffffff8141922d r __kstrtab_getname
ffffffff81419235 r __kstrtab_get_write_access
ffffffff81419246 r __kstrtab_follow_up
ffffffff81419250 r __kstrtab_follow_down
ffffffff8141925c r __kstrtab_follow_down_one
ffffffff8141926c r __kstrtab_user_path_at
ffffffff81419279 r __kstrtab_user_path_create
ffffffff8141928a r __kstrtab_kern_path_create
ffffffff8141929b r __kstrtab_full_name_hash
ffffffff814192aa r __kstrtab_path_put
ffffffff814192b3 r __kstrtab_path_get
ffffffff814192bc r __kstrtab_putname
ffffffff814192c4 r __kstrtab_kill_fasync
ffffffff814192d0 r __kstrtab_fasync_helper
ffffffff814192de r __kstrtab_f_setown
ffffffff814192e7 r __kstrtab___f_setown
ffffffff814192f2 r __kstrtab_generic_block_fiemap
ffffffff81419307 r __kstrtab___generic_block_fiemap
ffffffff8141931e r __kstrtab_fiemap_check_flags
ffffffff81419331 r __kstrtab_fiemap_fill_next_extent
ffffffff81419349 r __kstrtab_vfs_readdir
ffffffff81419355 r __kstrtab_poll_schedule_timeout
ffffffff8141936b r __kstrtab_poll_freewait
ffffffff81419379 r __kstrtab_poll_initwait
ffffffff81419387 r __kstrtab_d_genocide
ffffffff81419392 r __kstrtab_names_cachep
ffffffff8141939f r __kstrtab_find_inode_number
ffffffff814193b1 r __kstrtab_dentry_path_raw
ffffffff814193c1 r __kstrtab_d_path
ffffffff814193c8 r __kstrtab_d_materialise_unique
ffffffff814193dd r __kstrtab_d_move
ffffffff814193e4 r __kstrtab_dentry_update_name_case
ffffffff814193fc r __kstrtab_d_rehash
ffffffff81419405 r __kstrtab_d_delete
ffffffff8141940e r __kstrtab_d_validate
ffffffff81419419 r __kstrtab_d_lookup
ffffffff81419422 r __kstrtab_d_add_ci
ffffffff8141942b r __kstrtab_d_splice_alias
ffffffff8141943a r __kstrtab_d_obtain_alias
ffffffff81419449 r __kstrtab_d_find_any_alias
ffffffff8141945a r __kstrtab_d_make_root
ffffffff81419466 r __kstrtab_d_instantiate_unique
ffffffff8141947b r __kstrtab_d_instantiate
ffffffff81419489 r __kstrtab_d_set_d_op
ffffffff81419494 r __kstrtab_d_alloc_name
ffffffff814194a1 r __kstrtab_d_alloc_pseudo
ffffffff814194b0 r __kstrtab_d_alloc
ffffffff814194b8 r __kstrtab_shrink_dcache_parent
ffffffff814194cd r __kstrtab_have_submounts
ffffffff814194dc r __kstrtab_shrink_dcache_sb
ffffffff814194ed r __kstrtab_d_prune_aliases
ffffffff814194fd r __kstrtab_d_find_alias
ffffffff8141950a r __kstrtab_dget_parent
ffffffff81419516 r __kstrtab_d_invalidate
ffffffff81419523 r __kstrtab_dput
ffffffff81419528 r __kstrtab_d_clear_need_lookup
ffffffff8141953c r __kstrtab_d_drop
ffffffff81419543 r __kstrtab___d_drop
ffffffff8141954c r __kstrtab_rename_lock
ffffffff81419558 r __kstrtab_sysctl_vfs_cache_pressure
ffffffff81419572 r __kstrtab_inode_owner_or_capable
ffffffff81419589 r __kstrtab_inode_init_owner
ffffffff8141959a r __kstrtab_init_special_inode
ffffffff814195ad r __kstrtab_inode_wait
ffffffff814195b8 r __kstrtab_inode_needs_sync
ffffffff814195c9 r __kstrtab_file_update_time
ffffffff814195da r __kstrtab_touch_atime
ffffffff814195e6 r __kstrtab_bmap
ffffffff814195eb r __kstrtab_iput
ffffffff814195f0 r __kstrtab_generic_delete_inode
ffffffff81419605 r __kstrtab_insert_inode_locked4
ffffffff8141961a r __kstrtab_insert_inode_locked
ffffffff8141962e r __kstrtab_ilookup
ffffffff81419636 r __kstrtab_ilookup5
ffffffff8141963f r __kstrtab_ilookup5_nowait
ffffffff8141964f r __kstrtab_igrab
ffffffff81419655 r __kstrtab_iunique
ffffffff8141965d r __kstrtab_iget_locked
ffffffff81419669 r __kstrtab_iget5_locked
ffffffff81419676 r __kstrtab_unlock_new_inode
ffffffff81419687 r __kstrtab_new_inode
ffffffff81419691 r __kstrtab_get_next_ino
ffffffff8141969e r __kstrtab_end_writeback
ffffffff814196ac r __kstrtab___remove_inode_hash
ffffffff814196c0 r __kstrtab___insert_inode_hash
ffffffff814196d4 r __kstrtab_inode_sb_list_add
ffffffff814196e6 r __kstrtab_ihold
ffffffff814196ec r __kstrtab_inode_init_once
ffffffff814196fc r __kstrtab_address_space_init_once
ffffffff81419714 r __kstrtab_inc_nlink
ffffffff8141971e r __kstrtab_set_nlink
ffffffff81419728 r __kstrtab_clear_nlink
ffffffff81419734 r __kstrtab_drop_nlink
ffffffff8141973f r __kstrtab___destroy_inode
ffffffff8141974f r __kstrtab_free_inode_nonrcu
ffffffff81419761 r __kstrtab_inode_init_always
ffffffff81419773 r __kstrtab_empty_aops
ffffffff8141977e r __kstrtab_notify_change
ffffffff8141978c r __kstrtab_setattr_copy
ffffffff81419799 r __kstrtab_inode_newsize_ok
ffffffff814197aa r __kstrtab_inode_change_ok
ffffffff814197ba r __kstrtab_iget_failed
ffffffff814197c6 r __kstrtab_is_bad_inode
ffffffff814197d3 r __kstrtab_make_bad_inode
ffffffff814197e2 r __kstrtab_get_unused_fd
ffffffff814197f0 r __kstrtab_get_fs_type
ffffffff814197fc r __kstrtab_unregister_filesystem
ffffffff81419812 r __kstrtab_register_filesystem
ffffffff81419826 r __kstrtab_kern_unmount
ffffffff81419833 r __kstrtab_kern_mount_data
ffffffff81419843 r __kstrtab_path_is_under
ffffffff81419851 r __kstrtab_mount_subtree
ffffffff8141985f r __kstrtab_mark_mounts_for_expiry
ffffffff81419876 r __kstrtab_mnt_set_expiry
ffffffff81419885 r __kstrtab_may_umount
ffffffff81419890 r __kstrtab_may_umount_tree
ffffffff814198a0 r __kstrtab_replace_mount_options
ffffffff814198b6 r __kstrtab_save_mount_options
ffffffff814198c9 r __kstrtab_generic_show_options
ffffffff814198de r __kstrtab_mnt_unpin
ffffffff814198e8 r __kstrtab_mnt_pin
ffffffff814198f0 r __kstrtab_mntget
ffffffff814198f7 r __kstrtab_mntput
ffffffff814198fe r __kstrtab_vfs_kern_mount
ffffffff8141990d r __kstrtab_mnt_drop_write_file
ffffffff81419921 r __kstrtab_mnt_drop_write
ffffffff81419930 r __kstrtab_mnt_want_write_file
ffffffff81419944 r __kstrtab_mnt_clone_write
ffffffff81419954 r __kstrtab_mnt_want_write
ffffffff81419963 r __kstrtab___mnt_is_readonly
ffffffff81419975 r __kstrtab_vfsmount_lock_global_unlock
ffffffff81419991 r __kstrtab_vfsmount_lock_global_lock
ffffffff814199ab r __kstrtab_vfsmount_lock_global_unlock_online
ffffffff814199ce r __kstrtab_vfsmount_lock_global_lock_online
ffffffff814199ef r __kstrtab_vfsmount_lock_local_unlock_cpu
ffffffff81419a0e r __kstrtab_vfsmount_lock_local_lock_cpu
ffffffff81419a2b r __kstrtab_vfsmount_lock_local_unlock
ffffffff81419a46 r __kstrtab_vfsmount_lock_local_lock
ffffffff81419a5f r __kstrtab_vfsmount_lock_lock_init
ffffffff81419a77 r __kstrtab_fs_kobj
ffffffff81419a7f r __kstrtab_seq_hlist_next_rcu
ffffffff81419a92 r __kstrtab_seq_hlist_start_head_rcu
ffffffff81419aab r __kstrtab_seq_hlist_start_rcu
ffffffff81419abf r __kstrtab_seq_hlist_next
ffffffff81419ace r __kstrtab_seq_hlist_start_head
ffffffff81419ae3 r __kstrtab_seq_hlist_start
ffffffff81419af3 r __kstrtab_seq_list_next
ffffffff81419b01 r __kstrtab_seq_list_start_head
ffffffff81419b15 r __kstrtab_seq_list_start
ffffffff81419b24 r __kstrtab_seq_write
ffffffff81419b2e r __kstrtab_seq_put_decimal_ll
ffffffff81419b41 r __kstrtab_seq_put_decimal_ull
ffffffff81419b55 r __kstrtab_seq_puts
ffffffff81419b5e r __kstrtab_seq_putc
ffffffff81419b67 r __kstrtab_seq_open_private
ffffffff81419b78 r __kstrtab___seq_open_private
ffffffff81419b8b r __kstrtab_seq_release_private
ffffffff81419b9f r __kstrtab_single_release
ffffffff81419bae r __kstrtab_single_open
ffffffff81419bba r __kstrtab_seq_bitmap_list
ffffffff81419bca r __kstrtab_seq_bitmap
ffffffff81419bd5 r __kstrtab_seq_path
ffffffff81419bde r __kstrtab_mangle_path
ffffffff81419bea r __kstrtab_seq_printf
ffffffff81419bf5 r __kstrtab_seq_escape
ffffffff81419c00 r __kstrtab_seq_release
ffffffff81419c0c r __kstrtab_seq_lseek
ffffffff81419c16 r __kstrtab_seq_read
ffffffff81419c1f r __kstrtab_seq_open
ffffffff81419c28 r __kstrtab_generic_removexattr
ffffffff81419c3c r __kstrtab_generic_setxattr
ffffffff81419c4d r __kstrtab_generic_listxattr
ffffffff81419c5f r __kstrtab_generic_getxattr
ffffffff81419c70 r __kstrtab_vfs_removexattr
ffffffff81419c80 r __kstrtab_vfs_listxattr
ffffffff81419c8e r __kstrtab_vfs_getxattr
ffffffff81419c9b r __kstrtab_xattr_getsecurity
ffffffff81419cad r __kstrtab_vfs_setxattr
ffffffff81419cba r __kstrtab_simple_attr_write
ffffffff81419ccc r __kstrtab_simple_attr_read
ffffffff81419cdd r __kstrtab_simple_attr_release
ffffffff81419cf1 r __kstrtab_simple_attr_open
ffffffff81419d02 r __kstrtab_simple_transaction_release
ffffffff81419d1d r __kstrtab_simple_transaction_read
ffffffff81419d35 r __kstrtab_simple_transaction_get
ffffffff81419d4c r __kstrtab_simple_transaction_set
ffffffff81419d63 r __kstrtab_memory_read_from_buffer
ffffffff81419d7b r __kstrtab_simple_write_to_buffer
ffffffff81419d92 r __kstrtab_simple_read_from_buffer
ffffffff81419daa r __kstrtab_simple_unlink
ffffffff81419db8 r __kstrtab_noop_fsync
ffffffff81419dc3 r __kstrtab_simple_statfs
ffffffff81419dd1 r __kstrtab_simple_rmdir
ffffffff81419dde r __kstrtab_simple_rename
ffffffff81419dec r __kstrtab_simple_release_fs
ffffffff81419dfe r __kstrtab_simple_readpage
ffffffff81419e0e r __kstrtab_simple_pin_fs
ffffffff81419e1c r __kstrtab_simple_lookup
ffffffff81419e2a r __kstrtab_simple_link
ffffffff81419e36 r __kstrtab_simple_open
ffffffff81419e42 r __kstrtab_simple_getattr
ffffffff81419e51 r __kstrtab_simple_fill_super
ffffffff81419e63 r __kstrtab_simple_empty
ffffffff81419e70 r __kstrtab_simple_dir_operations
ffffffff81419e86 r __kstrtab_simple_dir_inode_operations
ffffffff81419ea2 r __kstrtab_simple_write_end
ffffffff81419eb3 r __kstrtab_simple_write_begin
ffffffff81419ec6 r __kstrtab_mount_pseudo
ffffffff81419ed3 r __kstrtab_generic_read_dir
ffffffff81419ee4 r __kstrtab_dcache_readdir
ffffffff81419ef3 r __kstrtab_dcache_dir_open
ffffffff81419f03 r __kstrtab_dcache_dir_lseek
ffffffff81419f14 r __kstrtab_dcache_dir_close
ffffffff81419f25 r __kstrtab_generic_check_addressable
ffffffff81419f3f r __kstrtab_generic_file_fsync
ffffffff81419f52 r __kstrtab_generic_fh_to_parent
ffffffff81419f67 r __kstrtab_generic_fh_to_dentry
ffffffff81419f7c r __kstrtab_simple_setattr
ffffffff81419f8b r __kstrtab_sync_inode_metadata
ffffffff81419f9f r __kstrtab_sync_inode
ffffffff81419faa r __kstrtab_write_inode_now
ffffffff81419fba r __kstrtab_sync_inodes_sb
ffffffff81419fc9 r __kstrtab_writeback_inodes_sb_nr_if_idle
ffffffff81419fe8 r __kstrtab_writeback_inodes_sb_if_idle
ffffffff8141a004 r __kstrtab_writeback_inodes_sb
ffffffff8141a018 r __kstrtab_writeback_inodes_sb_nr
ffffffff8141a02f r __kstrtab___mark_inode_dirty
ffffffff8141a042 r __kstrtab_splice_direct_to_actor
ffffffff8141a059 r __kstrtab_generic_splice_sendpage
ffffffff8141a071 r __kstrtab_generic_file_splice_write
ffffffff8141a08b r __kstrtab___splice_from_pipe
ffffffff8141a09e r __kstrtab_splice_from_pipe_end
ffffffff8141a0b3 r __kstrtab_splice_from_pipe_begin
ffffffff8141a0ca r __kstrtab_splice_from_pipe_next
ffffffff8141a0e0 r __kstrtab_splice_from_pipe_feed
ffffffff8141a0f6 r __kstrtab_pipe_to_file
ffffffff8141a103 r __kstrtab_default_file_splice_read
ffffffff8141a11c r __kstrtab_generic_file_splice_read
ffffffff8141a135 r __kstrtab_generic_write_sync
ffffffff8141a148 r __kstrtab_vfs_fsync
ffffffff8141a152 r __kstrtab_vfs_fsync_range
ffffffff8141a162 r __kstrtab_sync_filesystem
ffffffff8141a172 r __kstrtab_fsstack_copy_attr_all
ffffffff8141a188 r __kstrtab_fsstack_copy_inode_size
ffffffff8141a1a0 r __kstrtab_current_umask
ffffffff8141a1ae r __kstrtab_unshare_fs_struct
ffffffff8141a1c0 r __kstrtab_vfs_statfs
ffffffff8141a1cb r __kstrtab_bh_submit_read
ffffffff8141a1da r __kstrtab_bh_uptodate_or_lock
ffffffff8141a1ee r __kstrtab_free_buffer_head
ffffffff8141a1ff r __kstrtab_alloc_buffer_head
ffffffff8141a211 r __kstrtab_try_to_free_buffers
ffffffff8141a225 r __kstrtab_sync_dirty_buffer
ffffffff8141a237 r __kstrtab___sync_dirty_buffer
ffffffff8141a24b r __kstrtab_write_dirty_buffer
ffffffff8141a25e r __kstrtab_ll_rw_block
ffffffff8141a26a r __kstrtab_submit_bh
ffffffff8141a274 r __kstrtab_generic_block_bmap
ffffffff8141a287 r __kstrtab_block_write_full_page
ffffffff8141a29d r __kstrtab_block_write_full_page_endio
ffffffff8141a2b9 r __kstrtab_block_truncate_page
ffffffff8141a2cd r __kstrtab_nobh_truncate_page
ffffffff8141a2e0 r __kstrtab_nobh_writepage
ffffffff8141a2ef r __kstrtab_nobh_write_end
ffffffff8141a2fe r __kstrtab_nobh_write_begin
ffffffff8141a30f r __kstrtab_block_page_mkwrite
ffffffff8141a322 r __kstrtab___block_page_mkwrite
ffffffff8141a337 r __kstrtab_block_commit_write
ffffffff8141a34a r __kstrtab_cont_write_begin
ffffffff8141a35b r __kstrtab_generic_cont_expand_simple
ffffffff8141a376 r __kstrtab_block_read_full_page
ffffffff8141a38b r __kstrtab_block_is_partially_uptodate
ffffffff8141a3a7 r __kstrtab_generic_write_end
ffffffff8141a3b9 r __kstrtab_block_write_end
ffffffff8141a3c9 r __kstrtab_block_write_begin
ffffffff8141a3db r __kstrtab___block_write_begin
ffffffff8141a3ef r __kstrtab_page_zero_new_buffers
ffffffff8141a405 r __kstrtab_unmap_underlying_metadata
ffffffff8141a41f r __kstrtab_create_empty_buffers
ffffffff8141a434 r __kstrtab_block_invalidatepage
ffffffff8141a449 r __kstrtab_set_bh_page
ffffffff8141a455 r __kstrtab_invalidate_bh_lrus
ffffffff8141a468 r __kstrtab___bread
ffffffff8141a470 r __kstrtab___breadahead
ffffffff8141a47d r __kstrtab___getblk
ffffffff8141a486 r __kstrtab___find_get_block
ffffffff8141a497 r __kstrtab___bforget
ffffffff8141a4a1 r __kstrtab___brelse
ffffffff8141a4aa r __kstrtab_mark_buffer_dirty
ffffffff8141a4bc r __kstrtab_alloc_page_buffers
ffffffff8141a4cf r __kstrtab_invalidate_inode_buffers
ffffffff8141a4e8 r __kstrtab___set_page_dirty_buffers
ffffffff8141a501 r __kstrtab_mark_buffer_dirty_inode
ffffffff8141a519 r __kstrtab_sync_mapping_buffers
ffffffff8141a52e r __kstrtab_mark_buffer_async_write
ffffffff8141a546 r __kstrtab_end_buffer_async_write
ffffffff8141a55d r __kstrtab_end_buffer_write_sync
ffffffff8141a573 r __kstrtab_end_buffer_read_sync
ffffffff8141a588 r __kstrtab___wait_on_buffer
ffffffff8141a599 r __kstrtab_unlock_buffer
ffffffff8141a5a7 r __kstrtab___lock_buffer
ffffffff8141a5b5 r __kstrtab_init_buffer
ffffffff8141a5c1 r __kstrtab_bioset_create
ffffffff8141a5cf r __kstrtab_bioset_free
ffffffff8141a5db r __kstrtab_bio_sector_offset
ffffffff8141a5ed r __kstrtab_bio_split
ffffffff8141a5f7 r __kstrtab_bio_pair_release
ffffffff8141a608 r __kstrtab_bio_endio
ffffffff8141a612 r __kstrtab_bio_copy_kern
ffffffff8141a620 r __kstrtab_bio_map_kern
ffffffff8141a62d r __kstrtab_bio_unmap_user
ffffffff8141a63c r __kstrtab_bio_map_user
ffffffff8141a649 r __kstrtab_bio_copy_user
ffffffff8141a657 r __kstrtab_bio_uncopy_user
ffffffff8141a667 r __kstrtab_bio_add_page
ffffffff8141a674 r __kstrtab_bio_add_pc_page
ffffffff8141a684 r __kstrtab_bio_get_nr_vecs
ffffffff8141a694 r __kstrtab_bio_clone
ffffffff8141a69e r __kstrtab___bio_clone
ffffffff8141a6aa r __kstrtab_bio_phys_segments
ffffffff8141a6bc r __kstrtab_bio_put
ffffffff8141a6c4 r __kstrtab_zero_fill_bio
ffffffff8141a6d2 r __kstrtab_bio_kmalloc
ffffffff8141a6de r __kstrtab_bio_alloc
ffffffff8141a6e8 r __kstrtab_bio_alloc_bioset
ffffffff8141a6f9 r __kstrtab_bio_init
ffffffff8141a702 r __kstrtab_bio_free
ffffffff8141a70b r __kstrtab___invalidate_device
ffffffff8141a71f r __kstrtab_lookup_bdev
ffffffff8141a72b r __kstrtab_ioctl_by_bdev
ffffffff8141a739 r __kstrtab_blkdev_aio_write
ffffffff8141a74a r __kstrtab_blkdev_put
ffffffff8141a755 r __kstrtab_blkdev_get_by_dev
ffffffff8141a767 r __kstrtab_blkdev_get_by_path
ffffffff8141a77a r __kstrtab_blkdev_get
ffffffff8141a785 r __kstrtab_bd_set_size
ffffffff8141a791 r __kstrtab_check_disk_change
ffffffff8141a7a3 r __kstrtab_revalidate_disk
ffffffff8141a7b3 r __kstrtab_check_disk_size_change
ffffffff8141a7ca r __kstrtab_bd_unlink_disk_holder
ffffffff8141a7e0 r __kstrtab_bd_link_disk_holder
ffffffff8141a7f4 r __kstrtab_bdput
ffffffff8141a7fa r __kstrtab_bdget
ffffffff8141a800 r __kstrtab_blkdev_fsync
ffffffff8141a80d r __kstrtab_thaw_bdev
ffffffff8141a817 r __kstrtab_freeze_bdev
ffffffff8141a823 r __kstrtab_fsync_bdev
ffffffff8141a82e r __kstrtab_sync_blockdev
ffffffff8141a83c r __kstrtab_sb_min_blocksize
ffffffff8141a84d r __kstrtab_sb_set_blocksize
ffffffff8141a85e r __kstrtab_set_blocksize
ffffffff8141a86c r __kstrtab_invalidate_bdev
ffffffff8141a87c r __kstrtab_kill_bdev
ffffffff8141a886 r __kstrtab_I_BDEV
ffffffff8141a88d r __kstrtab___blockdev_direct_IO
ffffffff8141a8a2 r __kstrtab_dio_end_io
ffffffff8141a8ad r __kstrtab_inode_dio_done
ffffffff8141a8bc r __kstrtab_inode_dio_wait
ffffffff8141a8cb r __kstrtab_mpage_writepage
ffffffff8141a8db r __kstrtab_mpage_writepages
ffffffff8141a8ec r __kstrtab_mpage_readpage
ffffffff8141a8fb r __kstrtab_mpage_readpages
ffffffff8141a90b r __kstrtab_set_task_ioprio
ffffffff8141a91b r __kstrtab_fsnotify
ffffffff8141a924 r __kstrtab___fsnotify_parent
ffffffff8141a936 r __kstrtab___fsnotify_inode_delete
ffffffff8141a94e r __kstrtab_fsnotify_get_cookie
ffffffff8141a962 r __kstrtab_anon_inode_getfd
ffffffff8141a973 r __kstrtab_anon_inode_getfile
ffffffff8141a986 r __kstrtab_eventfd_ctx_fileget
ffffffff8141a99a r __kstrtab_eventfd_ctx_fdget
ffffffff8141a9ac r __kstrtab_eventfd_fget
ffffffff8141a9b9 r __kstrtab_eventfd_ctx_read
ffffffff8141a9ca r __kstrtab_eventfd_ctx_remove_wait_queue
ffffffff8141a9e8 r __kstrtab_eventfd_ctx_put
ffffffff8141a9f8 r __kstrtab_eventfd_ctx_get
ffffffff8141aa08 r __kstrtab_eventfd_signal
ffffffff8141aa17 r __kstrtab_aio_complete
ffffffff8141aa24 r __kstrtab_kick_iocb
ffffffff8141aa2e r __kstrtab_aio_put_req
ffffffff8141aa3a r __kstrtab_wait_on_sync_kiocb
ffffffff8141aa4d r __kstrtab_lock_may_write
ffffffff8141aa5c r __kstrtab_lock_may_read
ffffffff8141aa6a r __kstrtab_vfs_cancel_lock
ffffffff8141aa7a r __kstrtab_posix_unblock_lock
ffffffff8141aa8d r __kstrtab_locks_remove_posix
ffffffff8141aaa0 r __kstrtab_vfs_lock_file
ffffffff8141aaae r __kstrtab_vfs_test_lock
ffffffff8141aabc r __kstrtab_flock_lock_file_wait
ffffffff8141aad1 r __kstrtab_vfs_setlease
ffffffff8141aade r __kstrtab_generic_setlease
ffffffff8141aaef r __kstrtab_lease_get_mtime
ffffffff8141aaff r __kstrtab___break_lease
ffffffff8141ab0d r __kstrtab_lease_modify
ffffffff8141ab1a r __kstrtab_locks_mandatory_area
ffffffff8141ab2f r __kstrtab_posix_lock_file_wait
ffffffff8141ab44 r __kstrtab_posix_lock_file
ffffffff8141ab54 r __kstrtab_posix_test_lock
ffffffff8141ab64 r __kstrtab_locks_delete_block
ffffffff8141ab77 r __kstrtab_locks_copy_lock
ffffffff8141ab87 r __kstrtab___locks_copy_lock
ffffffff8141ab99 r __kstrtab_locks_init_lock
ffffffff8141aba9 r __kstrtab_locks_free_lock
ffffffff8141abb9 r __kstrtab_locks_release_private
ffffffff8141abcf r __kstrtab_locks_alloc_lock
ffffffff8141abe0 r __kstrtab_unlock_flocks
ffffffff8141abee r __kstrtab_lock_flocks
ffffffff8141abfa r __kstrtab_posix_acl_chmod
ffffffff8141ac0a r __kstrtab_posix_acl_create
ffffffff8141ac1b r __kstrtab_posix_acl_from_mode
ffffffff8141ac2f r __kstrtab_posix_acl_equiv_mode
ffffffff8141ac44 r __kstrtab_posix_acl_valid
ffffffff8141ac54 r __kstrtab_posix_acl_alloc
ffffffff8141ac64 r __kstrtab_posix_acl_init
ffffffff8141ac73 r __kstrtab_posix_acl_to_xattr
ffffffff8141ac86 r __kstrtab_posix_acl_from_xattr
ffffffff8141ac9b r __kstrtab_remove_proc_entry
ffffffff8141acad r __kstrtab_proc_create_data
ffffffff8141acbe r __kstrtab_create_proc_entry
ffffffff8141acd0 r __kstrtab_proc_mkdir
ffffffff8141acdb r __kstrtab_proc_net_mkdir
ffffffff8141acea r __kstrtab_proc_mkdir_mode
ffffffff8141acfa r __kstrtab_proc_symlink
ffffffff8141ad07 r __kstrtab_unregister_sysctl_table
ffffffff8141ad1f r __kstrtab_register_sysctl_table
ffffffff8141ad35 r __kstrtab_register_sysctl_paths
ffffffff8141ad4b r __kstrtab_register_sysctl
ffffffff8141ad5b r __kstrtab_proc_net_remove
ffffffff8141ad6b r __kstrtab_proc_net_fops_create
ffffffff8141ad80 r __kstrtab_single_release_net
ffffffff8141ad93 r __kstrtab_seq_release_net
ffffffff8141ada3 r __kstrtab_single_open_net
ffffffff8141adb3 r __kstrtab_seq_open_net
ffffffff8141adc0 r __kstrtab_sysfs_create_files
ffffffff8141add3 r __kstrtab_sysfs_remove_files
ffffffff8141ade6 r __kstrtab_sysfs_remove_file
ffffffff8141adf8 r __kstrtab_sysfs_create_file
ffffffff8141ae0a r __kstrtab_sysfs_schedule_callback
ffffffff8141ae22 r __kstrtab_sysfs_remove_file_from_group
ffffffff8141ae3f r __kstrtab_sysfs_chmod_file
ffffffff8141ae50 r __kstrtab_sysfs_add_file_to_group
ffffffff8141ae68 r __kstrtab_sysfs_notify
ffffffff8141ae75 r __kstrtab_sysfs_notify_dirent
ffffffff8141ae89 r __kstrtab_sysfs_get_dirent
ffffffff8141ae9a r __kstrtab_sysfs_rename_link
ffffffff8141aeac r __kstrtab_sysfs_remove_link
ffffffff8141aebe r __kstrtab_sysfs_create_link
ffffffff8141aed0 r __kstrtab_sysfs_put
ffffffff8141aeda r __kstrtab_sysfs_get
ffffffff8141aee4 r __kstrtab_sysfs_remove_bin_file
ffffffff8141aefa r __kstrtab_sysfs_create_bin_file
ffffffff8141af10 r __kstrtab_sysfs_remove_group
ffffffff8141af23 r __kstrtab_sysfs_update_group
ffffffff8141af36 r __kstrtab_sysfs_create_group
ffffffff8141af49 r __kstrtab_sysfs_unmerge_group
ffffffff8141af5d r __kstrtab_sysfs_merge_group
ffffffff8141af6f r __kstrtab_configfs_unregister_subsystem
ffffffff8141af8d r __kstrtab_configfs_register_subsystem
ffffffff8141afa9 r __kstrtab_configfs_undepend_item
ffffffff8141afc0 r __kstrtab_configfs_depend_item
ffffffff8141afd5 r __kstrtab_config_group_find_item
ffffffff8141afec r __kstrtab_config_item_put
ffffffff8141affc r __kstrtab_config_item_get
ffffffff8141b00c r __kstrtab_config_group_init
ffffffff8141b01e r __kstrtab_config_item_init
ffffffff8141b02f r __kstrtab_config_group_init_type_name
ffffffff8141b04b r __kstrtab_config_item_init_type_name
ffffffff8141b066 r __kstrtab_config_item_set_name
ffffffff8141b07b r __kstrtab_load_nls_default
ffffffff8141b08c r __kstrtab_load_nls
ffffffff8141b095 r __kstrtab_unload_nls
ffffffff8141b0a0 r __kstrtab_unregister_nls
ffffffff8141b0af r __kstrtab_register_nls
ffffffff8141b0bc r __kstrtab_utf16s_to_utf8s
ffffffff8141b0cc r __kstrtab_utf8s_to_utf16s
ffffffff8141b0dc r __kstrtab_utf32_to_utf8
ffffffff8141b0ea r __kstrtab_utf8_to_utf32
ffffffff8141b0f8 r __kstrtab_pstore_register
ffffffff8141b108 r __kstrtab_crypto_has_alg
ffffffff8141b117 r __kstrtab_crypto_destroy_tfm
ffffffff8141b12a r __kstrtab_crypto_alloc_tfm
ffffffff8141b13b r __kstrtab_crypto_find_alg
ffffffff8141b14b r __kstrtab_crypto_create_tfm
ffffffff8141b15d r __kstrtab_crypto_alloc_base
ffffffff8141b16f r __kstrtab___crypto_alloc_tfm
ffffffff8141b182 r __kstrtab_crypto_shoot_alg
ffffffff8141b193 r __kstrtab_crypto_alg_mod_lookup
ffffffff8141b1a9 r __kstrtab_crypto_probing_notify
ffffffff8141b1bf r __kstrtab_crypto_larval_lookup
ffffffff8141b1d4 r __kstrtab_crypto_alg_lookup
ffffffff8141b1e6 r __kstrtab_crypto_larval_kill
ffffffff8141b1f9 r __kstrtab_crypto_larval_alloc
ffffffff8141b20d r __kstrtab_crypto_mod_put
ffffffff8141b21c r __kstrtab_crypto_mod_get
ffffffff8141b22b r __kstrtab_crypto_chain
ffffffff8141b238 r __kstrtab_crypto_alg_sem
ffffffff8141b247 r __kstrtab_crypto_alg_list
ffffffff8141b257 r __kstrtab_kcrypto_wq
ffffffff8141b262 r __kstrtab_crypto_xor
ffffffff8141b26d r __kstrtab_crypto_inc
ffffffff8141b278 r __kstrtab_crypto_tfm_in_queue
ffffffff8141b28c r __kstrtab_crypto_dequeue_request
ffffffff8141b2a3 r __kstrtab___crypto_dequeue_request
ffffffff8141b2bc r __kstrtab_crypto_enqueue_request
ffffffff8141b2d3 r __kstrtab_crypto_init_queue
ffffffff8141b2e5 r __kstrtab_crypto_alloc_instance
ffffffff8141b2fb r __kstrtab_crypto_alloc_instance2
ffffffff8141b312 r __kstrtab_crypto_attr_u32
ffffffff8141b322 r __kstrtab_crypto_attr_alg2
ffffffff8141b333 r __kstrtab_crypto_attr_alg_name
ffffffff8141b348 r __kstrtab_crypto_check_attr_type
ffffffff8141b35f r __kstrtab_crypto_get_attr_type
ffffffff8141b374 r __kstrtab_crypto_unregister_notifier
ffffffff8141b38f r __kstrtab_crypto_register_notifier
ffffffff8141b3a8 r __kstrtab_crypto_spawn_tfm2
ffffffff8141b3ba r __kstrtab_crypto_spawn_tfm
ffffffff8141b3cb r __kstrtab_crypto_drop_spawn
ffffffff8141b3dd r __kstrtab_crypto_init_spawn2
ffffffff8141b3f0 r __kstrtab_crypto_init_spawn
ffffffff8141b402 r __kstrtab_crypto_unregister_instance
ffffffff8141b41d r __kstrtab_crypto_register_instance
ffffffff8141b436 r __kstrtab_crypto_lookup_template
ffffffff8141b44d r __kstrtab_crypto_unregister_template
ffffffff8141b468 r __kstrtab_crypto_register_template
ffffffff8141b481 r __kstrtab_crypto_unregister_algs
ffffffff8141b498 r __kstrtab_crypto_register_algs
ffffffff8141b4ad r __kstrtab_crypto_unregister_alg
ffffffff8141b4c3 r __kstrtab_crypto_register_alg
ffffffff8141b4d7 r __kstrtab_crypto_remove_final
ffffffff8141b4eb r __kstrtab_crypto_alg_tested
ffffffff8141b4fd r __kstrtab_crypto_remove_spawns
ffffffff8141b512 r __kstrtab_crypto_larval_error
ffffffff8141b526 r __kstrtab_scatterwalk_map_and_copy
ffffffff8141b53f r __kstrtab_scatterwalk_copychunks
ffffffff8141b556 r __kstrtab_scatterwalk_done
ffffffff8141b567 r __kstrtab_scatterwalk_map
ffffffff8141b577 r __kstrtab_scatterwalk_start
ffffffff8141b589 r __kstrtab_crypto_alloc_aead
ffffffff8141b59b r __kstrtab_crypto_grab_aead
ffffffff8141b5ac r __kstrtab_crypto_lookup_aead
ffffffff8141b5bf r __kstrtab_aead_geniv_exit
ffffffff8141b5cf r __kstrtab_aead_geniv_init
ffffffff8141b5df r __kstrtab_aead_geniv_free
ffffffff8141b5ef r __kstrtab_aead_geniv_alloc
ffffffff8141b600 r __kstrtab_crypto_nivaead_type
ffffffff8141b614 r __kstrtab_crypto_aead_type
ffffffff8141b625 r __kstrtab_crypto_aead_setauthsize
ffffffff8141b63d r __kstrtab_crypto_alloc_ablkcipher
ffffffff8141b655 r __kstrtab_crypto_grab_skcipher
ffffffff8141b66a r __kstrtab_crypto_lookup_skcipher
ffffffff8141b681 r __kstrtab_crypto_givcipher_type
ffffffff8141b697 r __kstrtab_crypto_ablkcipher_type
ffffffff8141b6ae r __kstrtab_ablkcipher_walk_phys
ffffffff8141b6c3 r __kstrtab_ablkcipher_walk_done
ffffffff8141b6d8 r __kstrtab___ablkcipher_walk_complete
ffffffff8141b6f3 r __kstrtab_skcipher_geniv_exit
ffffffff8141b707 r __kstrtab_skcipher_geniv_init
ffffffff8141b71b r __kstrtab_skcipher_geniv_free
ffffffff8141b72f r __kstrtab_skcipher_geniv_alloc
ffffffff8141b744 r __kstrtab_crypto_blkcipher_type
ffffffff8141b75a r __kstrtab_blkcipher_walk_virt_block
ffffffff8141b774 r __kstrtab_blkcipher_walk_phys
ffffffff8141b788 r __kstrtab_blkcipher_walk_virt
ffffffff8141b79c r __kstrtab_blkcipher_walk_done
ffffffff8141b7b0 r __kstrtab_ahash_attr_alg
ffffffff8141b7bf r __kstrtab_crypto_init_ahash_spawn
ffffffff8141b7d7 r __kstrtab_ahash_free_instance
ffffffff8141b7eb r __kstrtab_ahash_register_instance
ffffffff8141b803 r __kstrtab_crypto_unregister_ahash
ffffffff8141b81b r __kstrtab_crypto_register_ahash
ffffffff8141b831 r __kstrtab_crypto_alloc_ahash
ffffffff8141b844 r __kstrtab_crypto_ahash_type
ffffffff8141b856 r __kstrtab_crypto_ahash_digest
ffffffff8141b86a r __kstrtab_crypto_ahash_finup
ffffffff8141b87d r __kstrtab_crypto_ahash_final
ffffffff8141b890 r __kstrtab_crypto_ahash_setkey
ffffffff8141b8a4 r __kstrtab_crypto_hash_walk_first
ffffffff8141b8bb r __kstrtab_crypto_hash_walk_done
ffffffff8141b8d1 r __kstrtab_shash_attr_alg
ffffffff8141b8e0 r __kstrtab_crypto_init_shash_spawn
ffffffff8141b8f8 r __kstrtab_shash_free_instance
ffffffff8141b90c r __kstrtab_shash_register_instance
ffffffff8141b924 r __kstrtab_crypto_unregister_shash
ffffffff8141b93c r __kstrtab_crypto_register_shash
ffffffff8141b952 r __kstrtab_crypto_alloc_shash
ffffffff8141b965 r __kstrtab_shash_ahash_digest
ffffffff8141b978 r __kstrtab_shash_ahash_finup
ffffffff8141b98a r __kstrtab_shash_ahash_update
ffffffff8141b99d r __kstrtab_crypto_shash_digest
ffffffff8141b9b1 r __kstrtab_crypto_shash_finup
ffffffff8141b9c4 r __kstrtab_crypto_shash_final
ffffffff8141b9d7 r __kstrtab_crypto_shash_update
ffffffff8141b9eb r __kstrtab_crypto_shash_setkey
ffffffff8141b9ff r __kstrtab_crypto_unregister_pcomp
ffffffff8141ba17 r __kstrtab_crypto_register_pcomp
ffffffff8141ba2d r __kstrtab_crypto_alloc_pcomp
ffffffff8141ba40 r __kstrtab_alg_test
ffffffff8141ba49 r __kstrtab_crypto_aes_set_key
ffffffff8141ba5c r __kstrtab_crypto_aes_expand_key
ffffffff8141ba72 r __kstrtab_crypto_il_tab
ffffffff8141ba80 r __kstrtab_crypto_it_tab
ffffffff8141ba8e r __kstrtab_crypto_fl_tab
ffffffff8141ba9c r __kstrtab_crypto_ft_tab
ffffffff8141baaa r __kstrtab_crypto_put_default_rng
ffffffff8141bac1 r __kstrtab_crypto_get_default_rng
ffffffff8141bad8 r __kstrtab_crypto_rng_type
ffffffff8141bae8 r __kstrtab_crypto_default_rng
ffffffff8141bafb r __kstrtab_elv_rb_latter_request
ffffffff8141bb11 r __kstrtab_elv_rb_former_request
ffffffff8141bb27 r __kstrtab_elevator_change
ffffffff8141bb37 r __kstrtab_elv_unregister
ffffffff8141bb46 r __kstrtab_elv_register
ffffffff8141bb53 r __kstrtab_elv_unregister_queue
ffffffff8141bb68 r __kstrtab_elv_register_queue
ffffffff8141bb7b r __kstrtab_elv_abort_queue
ffffffff8141bb8b r __kstrtab_elv_add_request
ffffffff8141bb9b r __kstrtab___elv_add_request
ffffffff8141bbad r __kstrtab_elv_dispatch_add_tail
ffffffff8141bbc3 r __kstrtab_elv_dispatch_sort
ffffffff8141bbd5 r __kstrtab_elv_rb_find
ffffffff8141bbe1 r __kstrtab_elv_rb_del
ffffffff8141bbec r __kstrtab_elv_rb_add
ffffffff8141bbf7 r __kstrtab_elevator_exit
ffffffff8141bc05 r __kstrtab_elevator_init
ffffffff8141bc13 r __kstrtab_elv_rq_merge_ok
ffffffff8141bc23 r __kstrtab_blk_finish_plug
ffffffff8141bc33 r __kstrtab_blk_start_plug
ffffffff8141bc42 r __kstrtab_kblockd_schedule_delayed_work
ffffffff8141bc60 r __kstrtab_kblockd_schedule_work
ffffffff8141bc76 r __kstrtab_blk_rq_prep_clone
ffffffff8141bc88 r __kstrtab_blk_rq_unprep_clone
ffffffff8141bc9c r __kstrtab_blk_lld_busy
ffffffff8141bca9 r __kstrtab___blk_end_request_err
ffffffff8141bcbf r __kstrtab___blk_end_request_cur
ffffffff8141bcd5 r __kstrtab___blk_end_request_all
ffffffff8141bceb r __kstrtab___blk_end_request
ffffffff8141bcfd r __kstrtab_blk_end_request_err
ffffffff8141bd11 r __kstrtab_blk_end_request_cur
ffffffff8141bd25 r __kstrtab_blk_end_request_all
ffffffff8141bd39 r __kstrtab_blk_end_request
ffffffff8141bd49 r __kstrtab_blk_unprep_request
ffffffff8141bd5c r __kstrtab_blk_update_request
ffffffff8141bd6f r __kstrtab_blk_fetch_request
ffffffff8141bd81 r __kstrtab_blk_start_request
ffffffff8141bd93 r __kstrtab_blk_peek_request
ffffffff8141bda4 r __kstrtab_blk_rq_err_bytes
ffffffff8141bdb5 r __kstrtab_blk_insert_cloned_request
ffffffff8141bdcf r __kstrtab_blk_rq_check_limits
ffffffff8141bde3 r __kstrtab_submit_bio
ffffffff8141bdee r __kstrtab_generic_make_request
ffffffff8141be03 r __kstrtab_blk_queue_bio
ffffffff8141be11 r __kstrtab_blk_add_request_payload
ffffffff8141be29 r __kstrtab_blk_put_request
ffffffff8141be39 r __kstrtab___blk_put_request
ffffffff8141be4b r __kstrtab_part_round_stats
ffffffff8141be5c r __kstrtab_blk_requeue_request
ffffffff8141be70 r __kstrtab_blk_make_request
ffffffff8141be81 r __kstrtab_blk_get_request
ffffffff8141be91 r __kstrtab_blk_get_queue
ffffffff8141be9f r __kstrtab_blk_init_allocated_queue
ffffffff8141beb8 r __kstrtab_blk_init_queue_node
ffffffff8141becc r __kstrtab_blk_init_queue
ffffffff8141bedb r __kstrtab_blk_alloc_queue_node
ffffffff8141bef0 r __kstrtab_blk_alloc_queue
ffffffff8141bf00 r __kstrtab_blk_cleanup_queue
ffffffff8141bf12 r __kstrtab_blk_put_queue
ffffffff8141bf20 r __kstrtab_blk_run_queue
ffffffff8141bf2e r __kstrtab_blk_run_queue_async
ffffffff8141bf42 r __kstrtab___blk_run_queue
ffffffff8141bf52 r __kstrtab_blk_sync_queue
ffffffff8141bf61 r __kstrtab_blk_stop_queue
ffffffff8141bf70 r __kstrtab_blk_start_queue
ffffffff8141bf80 r __kstrtab_blk_delay_queue
ffffffff8141bf90 r __kstrtab_blk_dump_rq_flags
ffffffff8141bfa2 r __kstrtab_blk_rq_init
ffffffff8141bfae r __kstrtab_blk_get_backing_dev_info
ffffffff8141bfc7 r __kstrtab_blk_queue_invalidate_tags
ffffffff8141bfe1 r __kstrtab_blk_queue_start_tag
ffffffff8141bff5 r __kstrtab_blk_queue_end_tag
ffffffff8141c007 r __kstrtab_blk_queue_resize_tags
ffffffff8141c01d r __kstrtab_blk_queue_init_tags
ffffffff8141c031 r __kstrtab_blk_init_tags
ffffffff8141c03f r __kstrtab_blk_queue_free_tags
ffffffff8141c053 r __kstrtab_blk_free_tags
ffffffff8141c061 r __kstrtab_blk_queue_find_tag
ffffffff8141c074 r __kstrtab_blkdev_issue_flush
ffffffff8141c087 r __kstrtab_blk_queue_flush_queueable
ffffffff8141c0a1 r __kstrtab_blk_queue_flush
ffffffff8141c0b1 r __kstrtab_blk_queue_update_dma_alignment
ffffffff8141c0d0 r __kstrtab_blk_queue_dma_alignment
ffffffff8141c0e8 r __kstrtab_blk_queue_segment_boundary
ffffffff8141c103 r __kstrtab_blk_queue_dma_drain
ffffffff8141c117 r __kstrtab_blk_queue_update_dma_pad
ffffffff8141c130 r __kstrtab_blk_queue_dma_pad
ffffffff8141c142 r __kstrtab_disk_stack_limits
ffffffff8141c154 r __kstrtab_bdev_stack_limits
ffffffff8141c166 r __kstrtab_blk_stack_limits
ffffffff8141c177 r __kstrtab_blk_queue_stack_limits
ffffffff8141c18e r __kstrtab_blk_queue_io_opt
ffffffff8141c19f r __kstrtab_blk_limits_io_opt
ffffffff8141c1b1 r __kstrtab_blk_queue_io_min
ffffffff8141c1c2 r __kstrtab_blk_limits_io_min
ffffffff8141c1d4 r __kstrtab_blk_queue_alignment_offset
ffffffff8141c1ef r __kstrtab_blk_queue_physical_block_size
ffffffff8141c20d r __kstrtab_blk_queue_logical_block_size
ffffffff8141c22a r __kstrtab_blk_queue_max_segment_size
ffffffff8141c245 r __kstrtab_blk_queue_max_segments
ffffffff8141c25c r __kstrtab_blk_queue_max_discard_sectors
ffffffff8141c27a r __kstrtab_blk_queue_max_hw_sectors
ffffffff8141c293 r __kstrtab_blk_limits_max_hw_sectors
ffffffff8141c2ad r __kstrtab_blk_queue_bounce_limit
ffffffff8141c2c4 r __kstrtab_blk_queue_make_request
ffffffff8141c2db r __kstrtab_blk_set_stacking_limits
ffffffff8141c2f3 r __kstrtab_blk_set_default_limits
ffffffff8141c30a r __kstrtab_blk_queue_lld_busy
ffffffff8141c31d r __kstrtab_blk_queue_rq_timed_out
ffffffff8141c334 r __kstrtab_blk_queue_rq_timeout
ffffffff8141c349 r __kstrtab_blk_queue_softirq_done
ffffffff8141c360 r __kstrtab_blk_queue_merge_bvec
ffffffff8141c375 r __kstrtab_blk_queue_unprep_rq
ffffffff8141c389 r __kstrtab_blk_queue_prep_rq
ffffffff8141c39b r __kstrtab_blk_max_low_pfn
ffffffff8141c3ab r __kstrtab_icq_get_changed
ffffffff8141c3bb r __kstrtab_ioc_cgroup_changed
ffffffff8141c3ce r __kstrtab_ioc_lookup_icq
ffffffff8141c3dd r __kstrtab_get_task_io_context
ffffffff8141c3f1 r __kstrtab_put_io_context
ffffffff8141c400 r __kstrtab_get_io_context
ffffffff8141c40f r __kstrtab_blk_rq_map_kern
ffffffff8141c41f r __kstrtab_blk_rq_unmap_user
ffffffff8141c431 r __kstrtab_blk_rq_map_user_iov
ffffffff8141c445 r __kstrtab_blk_rq_map_user
ffffffff8141c455 r __kstrtab_blk_execute_rq
ffffffff8141c464 r __kstrtab_blk_execute_rq_nowait
ffffffff8141c47a r __kstrtab_blk_rq_map_sg
ffffffff8141c488 r __kstrtab_blk_recount_segments
ffffffff8141c49d r __kstrtab_blk_complete_request
ffffffff8141c4b2 r __kstrtab_blk_abort_queue
ffffffff8141c4c2 r __kstrtab_blk_abort_request
ffffffff8141c4d4 r __kstrtab_blk_iopoll_init
ffffffff8141c4e4 r __kstrtab_blk_iopoll_enable
ffffffff8141c4f6 r __kstrtab_blk_iopoll_disable
ffffffff8141c509 r __kstrtab_blk_iopoll_complete
ffffffff8141c51d r __kstrtab___blk_iopoll_complete
ffffffff8141c533 r __kstrtab_blk_iopoll_sched
ffffffff8141c544 r __kstrtab_blk_iopoll_enabled
ffffffff8141c557 r __kstrtab_blkdev_issue_zeroout
ffffffff8141c56c r __kstrtab_blkdev_issue_discard
ffffffff8141c581 r __kstrtab_blkdev_ioctl
ffffffff8141c58e r __kstrtab___blkdev_driver_ioctl
ffffffff8141c5a4 r __kstrtab_invalidate_partition
ffffffff8141c5b9 r __kstrtab_bdev_read_only
ffffffff8141c5c8 r __kstrtab_set_disk_ro
ffffffff8141c5d4 r __kstrtab_set_device_ro
ffffffff8141c5e2 r __kstrtab_put_disk
ffffffff8141c5eb r __kstrtab_get_disk
ffffffff8141c5f4 r __kstrtab_alloc_disk_node
ffffffff8141c604 r __kstrtab_alloc_disk
ffffffff8141c60f r __kstrtab_blk_lookup_devt
ffffffff8141c61f r __kstrtab_bdget_disk
ffffffff8141c62a r __kstrtab_get_gendisk
ffffffff8141c636 r __kstrtab_del_gendisk
ffffffff8141c642 r __kstrtab_add_disk
ffffffff8141c64b r __kstrtab_blk_unregister_region
ffffffff8141c661 r __kstrtab_blk_register_region
ffffffff8141c675 r __kstrtab_unregister_blkdev
ffffffff8141c687 r __kstrtab_register_blkdev
ffffffff8141c697 r __kstrtab_disk_map_sector_rcu
ffffffff8141c6ab r __kstrtab_disk_part_iter_exit
ffffffff8141c6bf r __kstrtab_disk_part_iter_next
ffffffff8141c6d3 r __kstrtab_disk_part_iter_init
ffffffff8141c6e7 r __kstrtab_disk_get_part
ffffffff8141c6f5 r __kstrtab_scsi_cmd_blk_ioctl
ffffffff8141c708 r __kstrtab_scsi_verify_blk_ioctl
ffffffff8141c71e r __kstrtab_scsi_cmd_ioctl
ffffffff8141c72d r __kstrtab_sg_scsi_ioctl
ffffffff8141c73b r __kstrtab_blk_verify_command
ffffffff8141c74e r __kstrtab_scsi_command_size_tbl
ffffffff8141c764 r __kstrtab_read_dev_sector
ffffffff8141c774 r __kstrtab___bdevname
ffffffff8141c77f r __kstrtab_bdevname
ffffffff8141c788 r __kstrtab_bsg_register_queue
ffffffff8141c79b r __kstrtab_bsg_unregister_queue
ffffffff8141c7b0 r __kstrtab_bsg_remove_queue
ffffffff8141c7c1 r __kstrtab_bsg_setup_queue
ffffffff8141c7d1 r __kstrtab_bsg_request_fn
ffffffff8141c7e0 r __kstrtab_bsg_goose_queue
ffffffff8141c7f0 r __kstrtab_bsg_job_done
ffffffff8141c7fd r __kstrtab_blkio_policy_unregister
ffffffff8141c815 r __kstrtab_blkio_policy_register
ffffffff8141c82b r __kstrtab_blkcg_get_weight
ffffffff8141c83c r __kstrtab_blkiocg_lookup_group
ffffffff8141c851 r __kstrtab_blkiocg_del_blkio_group
ffffffff8141c869 r __kstrtab_blkiocg_add_blkio_group
ffffffff8141c881 r __kstrtab_blkio_alloc_blkg_stats
ffffffff8141c898 r __kstrtab_blkiocg_update_io_merged_stats
ffffffff8141c8b7 r __kstrtab_blkiocg_update_completion_stats
ffffffff8141c8d7 r __kstrtab_blkiocg_update_dispatch_stats
ffffffff8141c8f5 r __kstrtab_blkiocg_update_timeslice_used
ffffffff8141c913 r __kstrtab_blkiocg_update_io_remove_stats
ffffffff8141c932 r __kstrtab_blkiocg_update_io_add_stats
ffffffff8141c94e r __kstrtab_task_blkio_cgroup
ffffffff8141c960 r __kstrtab_cgroup_to_blkio_cgroup
ffffffff8141c977 r __kstrtab_blkio_subsys
ffffffff8141c984 r __kstrtab_blkio_root_cgroup
ffffffff8141c996 r __kstrtab_argv_split
ffffffff8141c9a1 r __kstrtab_argv_free
ffffffff8141c9ab r __kstrtab_get_options
ffffffff8141c9b7 r __kstrtab_get_option
ffffffff8141c9c2 r __kstrtab_memparse
ffffffff8141c9cb r __kstrtab_cpumask_next_and
ffffffff8141c9dc r __kstrtab___next_cpu
ffffffff8141c9e7 r __kstrtab___first_cpu
ffffffff8141c9f3 r __kstrtab__ctype
ffffffff8141c9fa r __kstrtab__atomic_dec_and_lock
ffffffff8141ca0f r __kstrtab_ida_init
ffffffff8141ca18 r __kstrtab_ida_simple_remove
ffffffff8141ca2a r __kstrtab_ida_simple_get
ffffffff8141ca39 r __kstrtab_ida_destroy
ffffffff8141ca45 r __kstrtab_ida_remove
ffffffff8141ca50 r __kstrtab_ida_get_new
ffffffff8141ca5c r __kstrtab_ida_get_new_above
ffffffff8141ca6e r __kstrtab_ida_pre_get
ffffffff8141ca7a r __kstrtab_idr_init
ffffffff8141ca83 r __kstrtab_idr_replace
ffffffff8141ca8f r __kstrtab_idr_get_next
ffffffff8141ca9c r __kstrtab_idr_for_each
ffffffff8141caa9 r __kstrtab_idr_find
ffffffff8141cab2 r __kstrtab_idr_destroy
ffffffff8141cabe r __kstrtab_idr_remove_all
ffffffff8141cacd r __kstrtab_idr_remove
ffffffff8141cad8 r __kstrtab_idr_get_new
ffffffff8141cae4 r __kstrtab_idr_get_new_above
ffffffff8141caf6 r __kstrtab_idr_pre_get
ffffffff8141cb02 r __kstrtab_int_sqrt
ffffffff8141cb0b r __kstrtab_ioremap_page_range
ffffffff8141cb1e r __kstrtab_kset_unregister
ffffffff8141cb2e r __kstrtab_kset_register
ffffffff8141cb3c r __kstrtab_kobject_del
ffffffff8141cb48 r __kstrtab_kobject_put
ffffffff8141cb54 r __kstrtab_kobject_get
ffffffff8141cb60 r __kstrtab_kset_create_and_add
ffffffff8141cb74 r __kstrtab_kobject_create_and_add
ffffffff8141cb8b r __kstrtab_kobject_rename
ffffffff8141cb9a r __kstrtab_kobject_init_and_add
ffffffff8141cbaf r __kstrtab_kobject_add
ffffffff8141cbbb r __kstrtab_kobject_init
ffffffff8141cbc8 r __kstrtab_kobject_set_name
ffffffff8141cbd9 r __kstrtab_kobject_get_path
ffffffff8141cbea r __kstrtab_add_uevent_var
ffffffff8141cbf9 r __kstrtab_kobject_uevent
ffffffff8141cc08 r __kstrtab_kobject_uevent_env
ffffffff8141cc1b r __kstrtab_radix_tree_tagged
ffffffff8141cc2d r __kstrtab_radix_tree_delete
ffffffff8141cc3f r __kstrtab_radix_tree_gang_lookup_tag_slot
ffffffff8141cc5f r __kstrtab_radix_tree_gang_lookup_tag
ffffffff8141cc7a r __kstrtab_radix_tree_gang_lookup_slot
ffffffff8141cc96 r __kstrtab_radix_tree_gang_lookup
ffffffff8141ccad r __kstrtab_radix_tree_prev_hole
ffffffff8141ccc2 r __kstrtab_radix_tree_next_hole
ffffffff8141ccd7 r __kstrtab_radix_tree_range_tag_if_tagged
ffffffff8141ccf6 r __kstrtab_radix_tree_next_chunk
ffffffff8141cd0c r __kstrtab_radix_tree_tag_get
ffffffff8141cd1f r __kstrtab_radix_tree_tag_clear
ffffffff8141cd34 r __kstrtab_radix_tree_tag_set
ffffffff8141cd47 r __kstrtab_radix_tree_lookup
ffffffff8141cd59 r __kstrtab_radix_tree_lookup_slot
ffffffff8141cd70 r __kstrtab_radix_tree_insert
ffffffff8141cd82 r __kstrtab_radix_tree_preload
ffffffff8141cd95 r __kstrtab____ratelimit
ffffffff8141cda2 r __kstrtab_rb_replace_node
ffffffff8141cdb2 r __kstrtab_rb_prev
ffffffff8141cdba r __kstrtab_rb_next
ffffffff8141cdc2 r __kstrtab_rb_last
ffffffff8141cdca r __kstrtab_rb_first
ffffffff8141cdd3 r __kstrtab_rb_augment_erase_end
ffffffff8141cde8 r __kstrtab_rb_augment_erase_begin
ffffffff8141cdff r __kstrtab_rb_augment_insert
ffffffff8141ce11 r __kstrtab_rb_erase
ffffffff8141ce1a r __kstrtab_rb_insert_color
ffffffff8141ce2a r __kstrtab_reciprocal_value
ffffffff8141ce3b r __kstrtab_rwsem_downgrade_wake
ffffffff8141ce50 r __kstrtab_rwsem_wake
ffffffff8141ce5b r __kstrtab_rwsem_down_write_failed
ffffffff8141ce73 r __kstrtab_rwsem_down_read_failed
ffffffff8141ce8a r __kstrtab___init_rwsem
ffffffff8141ce97 r __kstrtab_memchr_inv
ffffffff8141cea2 r __kstrtab_memchr
ffffffff8141cea9 r __kstrtab_strnstr
ffffffff8141ceb1 r __kstrtab_strstr
ffffffff8141ceb8 r __kstrtab_memscan
ffffffff8141cec0 r __kstrtab_memcmp
ffffffff8141cec7 r __kstrtab_strtobool
ffffffff8141ced1 r __kstrtab_sysfs_streq
ffffffff8141cedd r __kstrtab_strsep
ffffffff8141cee4 r __kstrtab_strpbrk
ffffffff8141ceec r __kstrtab_strcspn
ffffffff8141cef4 r __kstrtab_strspn
ffffffff8141cefb r __kstrtab_strnlen
ffffffff8141cf03 r __kstrtab_strlen
ffffffff8141cf0a r __kstrtab_strim
ffffffff8141cf10 r __kstrtab_skip_spaces
ffffffff8141cf1c r __kstrtab_strnchr
ffffffff8141cf24 r __kstrtab_strrchr
ffffffff8141cf2c r __kstrtab_strchr
ffffffff8141cf33 r __kstrtab_strncmp
ffffffff8141cf3b r __kstrtab_strcmp
ffffffff8141cf42 r __kstrtab_strlcat
ffffffff8141cf4a r __kstrtab_strncat
ffffffff8141cf52 r __kstrtab_strcat
ffffffff8141cf59 r __kstrtab_strlcpy
ffffffff8141cf61 r __kstrtab_strncpy
ffffffff8141cf69 r __kstrtab_strcpy
ffffffff8141cf70 r __kstrtab_strncasecmp
ffffffff8141cf7c r __kstrtab_strcasecmp
ffffffff8141cf87 r __kstrtab_strnicmp
ffffffff8141cf90 r __kstrtab_timerqueue_iterate_next
ffffffff8141cfa8 r __kstrtab_timerqueue_del
ffffffff8141cfb7 r __kstrtab_timerqueue_add
ffffffff8141cfc6 r __kstrtab_sscanf
ffffffff8141cfcd r __kstrtab_vsscanf
ffffffff8141cfd5 r __kstrtab_sprintf
ffffffff8141cfdd r __kstrtab_vsprintf
ffffffff8141cfe6 r __kstrtab_scnprintf
ffffffff8141cff0 r __kstrtab_snprintf
ffffffff8141cff9 r __kstrtab_vscnprintf
ffffffff8141d004 r __kstrtab_vsnprintf
ffffffff8141d00e r __kstrtab_simple_strtoll
ffffffff8141d01d r __kstrtab_simple_strtol
ffffffff8141d02b r __kstrtab_simple_strtoul
ffffffff8141d03a r __kstrtab_simple_strtoull
ffffffff8141d04a r __kstrtab_ip_compute_csum
ffffffff8141d05a r __kstrtab___ndelay
ffffffff8141d063 r __kstrtab___udelay
ffffffff8141d06c r __kstrtab___const_udelay
ffffffff8141d07b r __kstrtab___delay
ffffffff8141d083 r __kstrtab_strncpy_from_user
ffffffff8141d095 r __kstrtab_copy_from_user_nmi
ffffffff8141d0a8 r __kstrtab_copy_in_user
ffffffff8141d0b5 r __kstrtab_strlen_user
ffffffff8141d0c1 r __kstrtab_strnlen_user
ffffffff8141d0ce r __kstrtab___strnlen_user
ffffffff8141d0dd r __kstrtab_clear_user
ffffffff8141d0e8 r __kstrtab___clear_user
ffffffff8141d0f5 r __kstrtab_bin2bcd
ffffffff8141d0fd r __kstrtab_bcd2bin
ffffffff8141d105 r __kstrtab_iter_div_u64_rem
ffffffff8141d116 r __kstrtab_sort
ffffffff8141d11b r __kstrtab_match_strdup
ffffffff8141d128 r __kstrtab_match_strlcpy
ffffffff8141d136 r __kstrtab_match_hex
ffffffff8141d140 r __kstrtab_match_octal
ffffffff8141d14c r __kstrtab_match_int
ffffffff8141d156 r __kstrtab_match_token
ffffffff8141d162 r __kstrtab_half_md4_transform
ffffffff8141d175 r __kstrtab_debug_locks
ffffffff8141d181 r __kstrtab_srandom32
ffffffff8141d18b r __kstrtab_random32
ffffffff8141d194 r __kstrtab_prandom32
ffffffff8141d19e r __kstrtab_print_hex_dump_bytes
ffffffff8141d1b3 r __kstrtab_print_hex_dump
ffffffff8141d1c2 r __kstrtab_hex_dump_to_buffer
ffffffff8141d1d5 r __kstrtab_hex2bin
ffffffff8141d1dd r __kstrtab_hex_to_bin
ffffffff8141d1e8 r __kstrtab_hex_asc
ffffffff8141d1f0 r __kstrtab_kasprintf
ffffffff8141d1fa r __kstrtab_kvasprintf
ffffffff8141d205 r __kstrtab_bitmap_copy_le
ffffffff8141d214 r __kstrtab_bitmap_allocate_region
ffffffff8141d22b r __kstrtab_bitmap_release_region
ffffffff8141d241 r __kstrtab_bitmap_find_free_region
ffffffff8141d259 r __kstrtab_bitmap_fold
ffffffff8141d265 r __kstrtab_bitmap_onto
ffffffff8141d271 r __kstrtab_bitmap_bitremap
ffffffff8141d281 r __kstrtab_bitmap_remap
ffffffff8141d28e r __kstrtab_bitmap_parselist_user
ffffffff8141d2a4 r __kstrtab_bitmap_parselist
ffffffff8141d2b5 r __kstrtab_bitmap_scnlistprintf
ffffffff8141d2ca r __kstrtab_bitmap_parse_user
ffffffff8141d2dc r __kstrtab___bitmap_parse
ffffffff8141d2eb r __kstrtab_bitmap_scnprintf
ffffffff8141d2fc r __kstrtab_bitmap_find_next_zero_area
ffffffff8141d317 r __kstrtab_bitmap_clear
ffffffff8141d324 r __kstrtab_bitmap_set
ffffffff8141d32f r __kstrtab___bitmap_weight
ffffffff8141d33f r __kstrtab___bitmap_subset
ffffffff8141d34f r __kstrtab___bitmap_intersects
ffffffff8141d363 r __kstrtab___bitmap_andnot
ffffffff8141d373 r __kstrtab___bitmap_xor
ffffffff8141d380 r __kstrtab___bitmap_or
ffffffff8141d38c r __kstrtab___bitmap_and
ffffffff8141d399 r __kstrtab___bitmap_shift_left
ffffffff8141d3ad r __kstrtab___bitmap_shift_right
ffffffff8141d3c2 r __kstrtab___bitmap_complement
ffffffff8141d3d6 r __kstrtab___bitmap_equal
ffffffff8141d3e5 r __kstrtab___bitmap_full
ffffffff8141d3f3 r __kstrtab___bitmap_empty
ffffffff8141d402 r __kstrtab_sg_copy_to_buffer
ffffffff8141d414 r __kstrtab_sg_copy_from_buffer
ffffffff8141d428 r __kstrtab_sg_miter_stop
ffffffff8141d436 r __kstrtab_sg_miter_next
ffffffff8141d444 r __kstrtab_sg_miter_start
ffffffff8141d453 r __kstrtab_sg_alloc_table
ffffffff8141d462 r __kstrtab___sg_alloc_table
ffffffff8141d473 r __kstrtab_sg_free_table
ffffffff8141d481 r __kstrtab___sg_free_table
ffffffff8141d491 r __kstrtab_sg_init_one
ffffffff8141d49d r __kstrtab_sg_init_table
ffffffff8141d4ab r __kstrtab_sg_last
ffffffff8141d4b3 r __kstrtab_sg_next
ffffffff8141d4bb r __kstrtab_string_get_size
ffffffff8141d4cb r __kstrtab_gcd
ffffffff8141d4cf r __kstrtab_lcm
ffffffff8141d4d3 r __kstrtab_list_sort
ffffffff8141d4dd r __kstrtab_uuid_be_gen
ffffffff8141d4e9 r __kstrtab_uuid_le_gen
ffffffff8141d4f5 r __kstrtab_flex_array_shrink
ffffffff8141d507 r __kstrtab_flex_array_get_ptr
ffffffff8141d51a r __kstrtab_flex_array_get
ffffffff8141d529 r __kstrtab_flex_array_prealloc
ffffffff8141d53d r __kstrtab_flex_array_clear
ffffffff8141d54e r __kstrtab_flex_array_put
ffffffff8141d55d r __kstrtab_flex_array_free
ffffffff8141d56d r __kstrtab_flex_array_free_parts
ffffffff8141d583 r __kstrtab_flex_array_alloc
ffffffff8141d594 r __kstrtab_bsearch
ffffffff8141d59c r __kstrtab_find_last_bit
ffffffff8141d5aa r __kstrtab_find_first_zero_bit
ffffffff8141d5be r __kstrtab_find_first_bit
ffffffff8141d5cd r __kstrtab_find_next_zero_bit
ffffffff8141d5e0 r __kstrtab_find_next_bit
ffffffff8141d5ee r __kstrtab_llist_del_first
ffffffff8141d5fe r __kstrtab_llist_add_batch
ffffffff8141d60e r __kstrtab_kstrtos8_from_user
ffffffff8141d621 r __kstrtab_kstrtou8_from_user
ffffffff8141d634 r __kstrtab_kstrtos16_from_user
ffffffff8141d648 r __kstrtab_kstrtou16_from_user
ffffffff8141d65c r __kstrtab_kstrtoint_from_user
ffffffff8141d670 r __kstrtab_kstrtouint_from_user
ffffffff8141d685 r __kstrtab_kstrtol_from_user
ffffffff8141d697 r __kstrtab_kstrtoul_from_user
ffffffff8141d6aa r __kstrtab_kstrtoll_from_user
ffffffff8141d6bd r __kstrtab_kstrtoull_from_user
ffffffff8141d6d1 r __kstrtab_kstrtos8
ffffffff8141d6da r __kstrtab_kstrtou8
ffffffff8141d6e3 r __kstrtab_kstrtos16
ffffffff8141d6ed r __kstrtab_kstrtou16
ffffffff8141d6f7 r __kstrtab_kstrtoint
ffffffff8141d701 r __kstrtab_kstrtouint
ffffffff8141d70c r __kstrtab__kstrtol
ffffffff8141d715 r __kstrtab__kstrtoul
ffffffff8141d71f r __kstrtab_kstrtoll
ffffffff8141d728 r __kstrtab_kstrtoull
ffffffff8141d732 r __kstrtab_pci_iounmap
ffffffff8141d73e r __kstrtab_ioport_unmap
ffffffff8141d74b r __kstrtab_ioport_map
ffffffff8141d756 r __kstrtab_iowrite32_rep
ffffffff8141d764 r __kstrtab_iowrite16_rep
ffffffff8141d772 r __kstrtab_iowrite8_rep
ffffffff8141d77f r __kstrtab_ioread32_rep
ffffffff8141d78c r __kstrtab_ioread16_rep
ffffffff8141d799 r __kstrtab_ioread8_rep
ffffffff8141d7a5 r __kstrtab_iowrite32be
ffffffff8141d7b1 r __kstrtab_iowrite32
ffffffff8141d7bb r __kstrtab_iowrite16be
ffffffff8141d7c7 r __kstrtab_iowrite16
ffffffff8141d7d1 r __kstrtab_iowrite8
ffffffff8141d7da r __kstrtab_ioread32be
ffffffff8141d7e5 r __kstrtab_ioread32
ffffffff8141d7ee r __kstrtab_ioread16be
ffffffff8141d7f9 r __kstrtab_ioread16
ffffffff8141d802 r __kstrtab_ioread8
ffffffff8141d80a r __kstrtab_pci_iomap
ffffffff8141d814 r __kstrtab___iowrite64_copy
ffffffff8141d825 r __kstrtab___iowrite32_copy
ffffffff8141d836 r __kstrtab_pcim_iounmap_regions
ffffffff8141d84b r __kstrtab_pcim_iomap_regions_request_all
ffffffff8141d86a r __kstrtab_pcim_iomap_regions
ffffffff8141d87d r __kstrtab_pcim_iounmap
ffffffff8141d88a r __kstrtab_pcim_iomap
ffffffff8141d895 r __kstrtab_pcim_iomap_table
ffffffff8141d8a6 r __kstrtab_devm_ioport_unmap
ffffffff8141d8b8 r __kstrtab_devm_ioport_map
ffffffff8141d8c8 r __kstrtab_devm_request_and_ioremap
ffffffff8141d8e1 r __kstrtab_devm_iounmap
ffffffff8141d8ee r __kstrtab_devm_ioremap_nocache
ffffffff8141d903 r __kstrtab_devm_ioremap
ffffffff8141d910 r __kstrtab___sw_hweight64
ffffffff8141d91f r __kstrtab___sw_hweight8
ffffffff8141d92d r __kstrtab___sw_hweight16
ffffffff8141d93c r __kstrtab___sw_hweight32
ffffffff8141d94b r __kstrtab_bitrev32
ffffffff8141d954 r __kstrtab_bitrev16
ffffffff8141d95d r __kstrtab_byte_rev_table
ffffffff8141d96c r __kstrtab_crc16
ffffffff8141d972 r __kstrtab_crc16_table
ffffffff8141d97e r __kstrtab_crc32_be
ffffffff8141d987 r __kstrtab___crc32c_le
ffffffff8141d993 r __kstrtab_crc32_le
ffffffff8141d99c r __kstrtab_gen_pool_size
ffffffff8141d9aa r __kstrtab_gen_pool_avail
ffffffff8141d9b9 r __kstrtab_gen_pool_for_each_chunk
ffffffff8141d9d1 r __kstrtab_gen_pool_free
ffffffff8141d9df r __kstrtab_gen_pool_alloc
ffffffff8141d9ee r __kstrtab_gen_pool_destroy
ffffffff8141d9ff r __kstrtab_gen_pool_virt_to_phys
ffffffff8141da15 r __kstrtab_gen_pool_add_virt
ffffffff8141da27 r __kstrtab_gen_pool_create
ffffffff8141da37 r __kstrtab_zlib_inflate_blob
ffffffff8141da49 r __kstrtab_zlib_inflateIncomp
ffffffff8141da5c r __kstrtab_zlib_inflateReset
ffffffff8141da6e r __kstrtab_zlib_inflateEnd
ffffffff8141da7e r __kstrtab_zlib_inflateInit2
ffffffff8141da90 r __kstrtab_zlib_inflate
ffffffff8141da9d r __kstrtab_zlib_inflate_workspacesize
ffffffff8141dab8 r __kstrtab_lzo1x_decompress_safe
ffffffff8141dace r __kstrtab_xz_dec_end
ffffffff8141dad9 r __kstrtab_xz_dec_run
ffffffff8141dae4 r __kstrtab_xz_dec_reset
ffffffff8141daf1 r __kstrtab_xz_dec_init
ffffffff8141dafd r __kstrtab_percpu_counter_compare
ffffffff8141db14 r __kstrtab_percpu_counter_batch
ffffffff8141db29 r __kstrtab_percpu_counter_destroy
ffffffff8141db40 r __kstrtab___percpu_counter_init
ffffffff8141db56 r __kstrtab___percpu_counter_sum
ffffffff8141db6b r __kstrtab___percpu_counter_add
ffffffff8141db80 r __kstrtab_percpu_counter_set
ffffffff8141db93 r __kstrtab_swiotlb_dma_supported
ffffffff8141dba9 r __kstrtab_swiotlb_dma_mapping_error
ffffffff8141dbc3 r __kstrtab_swiotlb_sync_sg_for_device
ffffffff8141dbde r __kstrtab_swiotlb_sync_sg_for_cpu
ffffffff8141dbf6 r __kstrtab_swiotlb_unmap_sg
ffffffff8141dc07 r __kstrtab_swiotlb_unmap_sg_attrs
ffffffff8141dc1e r __kstrtab_swiotlb_map_sg
ffffffff8141dc2d r __kstrtab_swiotlb_map_sg_attrs
ffffffff8141dc42 r __kstrtab_swiotlb_sync_single_for_device
ffffffff8141dc61 r __kstrtab_swiotlb_sync_single_for_cpu
ffffffff8141dc7d r __kstrtab_swiotlb_unmap_page
ffffffff8141dc90 r __kstrtab_swiotlb_map_page
ffffffff8141dca1 r __kstrtab_swiotlb_free_coherent
ffffffff8141dcb7 r __kstrtab_swiotlb_alloc_coherent
ffffffff8141dcce r __kstrtab_swiotlb_tbl_sync_single
ffffffff8141dce6 r __kstrtab_swiotlb_tbl_unmap_single
ffffffff8141dcff r __kstrtab_swiotlb_tbl_map_single
ffffffff8141dd16 r __kstrtab_swiotlb_bounce
ffffffff8141dd25 r __kstrtab_swiotlb_nr_tbl
ffffffff8141dd34 r __kstrtab_iommu_area_alloc
ffffffff8141dd45 r __kstrtab_task_current_syscall
ffffffff8141dd5a r __kstrtab_nla_strcmp
ffffffff8141dd65 r __kstrtab_nla_memcmp
ffffffff8141dd70 r __kstrtab_nla_memcpy
ffffffff8141dd7b r __kstrtab_nla_strlcpy
ffffffff8141dd87 r __kstrtab_nla_find
ffffffff8141dd90 r __kstrtab_nla_parse
ffffffff8141dd9a r __kstrtab_nla_policy_len
ffffffff8141dda9 r __kstrtab_nla_validate
ffffffff8141ddb6 r __kstrtab_nla_append
ffffffff8141ddc1 r __kstrtab_nla_put_nohdr
ffffffff8141ddcf r __kstrtab_nla_put
ffffffff8141ddd7 r __kstrtab___nla_put_nohdr
ffffffff8141dde7 r __kstrtab___nla_put
ffffffff8141ddf1 r __kstrtab_nla_reserve_nohdr
ffffffff8141de03 r __kstrtab_nla_reserve
ffffffff8141de0f r __kstrtab___nla_reserve_nohdr
ffffffff8141de23 r __kstrtab___nla_reserve
ffffffff8141de31 r __kstrtab_irq_cpu_rmap_add
ffffffff8141de42 r __kstrtab_free_irq_cpu_rmap
ffffffff8141de54 r __kstrtab_cpu_rmap_update
ffffffff8141de64 r __kstrtab_cpu_rmap_add
ffffffff8141de71 r __kstrtab_alloc_cpu_rmap
ffffffff8141de80 r __kstrtab_dql_init
ffffffff8141de89 r __kstrtab_dql_reset
ffffffff8141de93 r __kstrtab_dql_completed
ffffffff8141dea1 r __kstrtab_wrmsr_safe_regs_on_cpu
ffffffff8141deb8 r __kstrtab_rdmsr_safe_regs_on_cpu
ffffffff8141decf r __kstrtab_wrmsr_safe_on_cpu
ffffffff8141dee1 r __kstrtab_rdmsr_safe_on_cpu
ffffffff8141def3 r __kstrtab_wrmsr_on_cpus
ffffffff8141df01 r __kstrtab_rdmsr_on_cpus
ffffffff8141df0f r __kstrtab_wrmsr_on_cpu
ffffffff8141df1c r __kstrtab_rdmsr_on_cpu
ffffffff8141df29 r __kstrtab_wbinvd_on_all_cpus
ffffffff8141df3c r __kstrtab_wbinvd_on_cpu
ffffffff8141df4a r __kstrtab_msrs_free
ffffffff8141df54 r __kstrtab_msrs_alloc
ffffffff8141df5f r __kstrtab_native_wrmsr_safe_regs
ffffffff8141df76 r __kstrtab_native_rdmsr_safe_regs
ffffffff8141df8d r __kstrtab_pci_cfg_access_unlock
ffffffff8141dfa3 r __kstrtab_pci_cfg_access_trylock
ffffffff8141dfba r __kstrtab_pci_cfg_access_lock
ffffffff8141dfce r __kstrtab_pci_vpd_truncate
ffffffff8141dfdf r __kstrtab_pci_write_vpd
ffffffff8141dfed r __kstrtab_pci_read_vpd
ffffffff8141dffa r __kstrtab_pci_bus_set_ops
ffffffff8141e00a r __kstrtab_pci_bus_write_config_dword
ffffffff8141e025 r __kstrtab_pci_bus_write_config_word
ffffffff8141e03f r __kstrtab_pci_bus_write_config_byte
ffffffff8141e059 r __kstrtab_pci_bus_read_config_dword
ffffffff8141e073 r __kstrtab_pci_bus_read_config_word
ffffffff8141e08c r __kstrtab_pci_bus_read_config_byte
ffffffff8141e0a5 r __kstrtab_pci_enable_bridges
ffffffff8141e0b8 r __kstrtab_pci_bus_add_devices
ffffffff8141e0cc r __kstrtab_pci_bus_add_device
ffffffff8141e0df r __kstrtab_pci_bus_alloc_resource
ffffffff8141e0f6 r __kstrtab_pci_walk_bus
ffffffff8141e103 r __kstrtab_pci_bus_resource_n
ffffffff8141e116 r __kstrtab_pci_free_resource_list
ffffffff8141e12d r __kstrtab_pci_add_resource
ffffffff8141e13e r __kstrtab_pci_add_resource_offset
ffffffff8141e156 r __kstrtab_pci_scan_child_bus
ffffffff8141e169 r __kstrtab_pci_scan_bridge
ffffffff8141e179 r __kstrtab_pci_scan_slot
ffffffff8141e187 r __kstrtab_pci_add_new_bus
ffffffff8141e197 r __kstrtab_pci_scan_bus
ffffffff8141e1a4 r __kstrtab_pci_scan_bus_parented
ffffffff8141e1ba r __kstrtab_pci_scan_root_bus
ffffffff8141e1cc r __kstrtab_pcie_bus_configure_settings
ffffffff8141e1e8 r __kstrtab_pci_scan_single_device
ffffffff8141e1ff r __kstrtab_pci_bus_read_dev_vendor_id
ffffffff8141e21a r __kstrtab_alloc_pci_dev
ffffffff8141e228 r __kstrtab_pcie_update_link_speed
ffffffff8141e23f r __kstrtab_pcibios_bus_to_resource
ffffffff8141e257 r __kstrtab_pcibios_resource_to_bus
ffffffff8141e26f r __kstrtab_no_pci_devices
ffffffff8141e27e r __kstrtab_pci_root_buses
ffffffff8141e28d r __kstrtab_pci_stop_bus_device
ffffffff8141e2a1 r __kstrtab_pci_stop_and_remove_behind_bridge
ffffffff8141e2c3 r __kstrtab_pci_stop_and_remove_bus_device
ffffffff8141e2e2 r __kstrtab___pci_remove_bus_device
ffffffff8141e2fa r __kstrtab_pci_remove_bus
ffffffff8141e309 r __kstrtab_pci_set_pcie_reset_state
ffffffff8141e322 r __kstrtab_pci_back_from_sleep
ffffffff8141e336 r __kstrtab_pci_prepare_to_sleep
ffffffff8141e34b r __kstrtab_pci_target_state
ffffffff8141e35c r __kstrtab_pci_wake_from_d3
ffffffff8141e36d r __kstrtab_pci_pme_active
ffffffff8141e37c r __kstrtab_pci_pme_capable
ffffffff8141e38c r __kstrtab_pci_restore_state
ffffffff8141e39e r __kstrtab_pci_save_state
ffffffff8141e3ad r __kstrtab_pci_set_power_state
ffffffff8141e3c1 r __kstrtab_pci_select_bars
ffffffff8141e3d1 r __kstrtab_pci_find_parent_resource
ffffffff8141e3ea r __kstrtab_pci_assign_resource
ffffffff8141e3fe r __kstrtab_pci_intx
ffffffff8141e407 r __kstrtab_pci_clear_mwi
ffffffff8141e415 r __kstrtab_pci_try_set_mwi
ffffffff8141e425 r __kstrtab_pci_set_mwi
ffffffff8141e431 r __kstrtab_pci_clear_master
ffffffff8141e442 r __kstrtab_pci_set_master
ffffffff8141e451 r __kstrtab_pci_request_selected_regions_exclusive
ffffffff8141e478 r __kstrtab_pci_request_selected_regions
ffffffff8141e495 r __kstrtab_pci_release_selected_regions
ffffffff8141e4b2 r __kstrtab_pci_request_region_exclusive
ffffffff8141e4cf r __kstrtab_pci_request_region
ffffffff8141e4e2 r __kstrtab_pci_release_region
ffffffff8141e4f5 r __kstrtab_pci_request_regions_exclusive
ffffffff8141e513 r __kstrtab_pci_request_regions
ffffffff8141e527 r __kstrtab_pci_release_regions
ffffffff8141e53b r __kstrtab_pci_bus_find_capability
ffffffff8141e553 r __kstrtab_pci_find_capability
ffffffff8141e567 r __kstrtab_pci_disable_device
ffffffff8141e57a r __kstrtab_pcim_pin_device
ffffffff8141e58a r __kstrtab_pcim_enable_device
ffffffff8141e59d r __kstrtab_pci_enable_device
ffffffff8141e5af r __kstrtab_pci_enable_device_mem
ffffffff8141e5c5 r __kstrtab_pci_enable_device_io
ffffffff8141e5da r __kstrtab_pci_reenable_device
ffffffff8141e5ee r __kstrtab_pci_fixup_cardbus
ffffffff8141e600 r __kstrtab_pcie_set_readrq
ffffffff8141e610 r __kstrtab_pcie_get_readrq
ffffffff8141e620 r __kstrtab_pcix_set_mmrbc
ffffffff8141e62f r __kstrtab_pcix_get_mmrbc
ffffffff8141e63e r __kstrtab_pcix_get_max_mmrbc
ffffffff8141e651 r __kstrtab_pci_reset_function
ffffffff8141e664 r __kstrtab___pci_reset_function_locked
ffffffff8141e680 r __kstrtab___pci_reset_function
ffffffff8141e695 r __kstrtab_pci_set_dma_seg_boundary
ffffffff8141e6ae r __kstrtab_pci_set_dma_max_seg_size
ffffffff8141e6c7 r __kstrtab_pci_msi_off
ffffffff8141e6d3 r __kstrtab_pci_check_and_unmask_intx
ffffffff8141e6ed r __kstrtab_pci_check_and_mask_intx
ffffffff8141e705 r __kstrtab_pci_intx_mask_supported
ffffffff8141e71d r __kstrtab_pci_set_cacheline_size
ffffffff8141e734 r __kstrtab_pci_set_ltr
ffffffff8141e740 r __kstrtab_pci_disable_ltr
ffffffff8141e750 r __kstrtab_pci_enable_ltr
ffffffff8141e75f r __kstrtab_pci_ltr_supported
ffffffff8141e771 r __kstrtab_pci_disable_obff
ffffffff8141e782 r __kstrtab_pci_enable_obff
ffffffff8141e792 r __kstrtab_pci_disable_ido
ffffffff8141e7a2 r __kstrtab_pci_enable_ido
ffffffff8141e7b1 r __kstrtab_pci_dev_run_wake
ffffffff8141e7c2 r __kstrtab___pci_enable_wake
ffffffff8141e7d4 r __kstrtab_pci_load_and_free_saved_state
ffffffff8141e7f2 r __kstrtab_pci_load_saved_state
ffffffff8141e807 r __kstrtab_pci_store_saved_state
ffffffff8141e81d r __kstrtab_pci_choose_state
ffffffff8141e82e r __kstrtab___pci_complete_power_transition
ffffffff8141e84e r __kstrtab_pci_find_ht_capability
ffffffff8141e865 r __kstrtab_pci_find_next_ht_capability
ffffffff8141e881 r __kstrtab_pci_find_ext_capability
ffffffff8141e899 r __kstrtab_pci_find_next_capability
ffffffff8141e8b2 r __kstrtab_pci_ioremap_bar
ffffffff8141e8c2 r __kstrtab_pci_bus_max_busnr
ffffffff8141e8d4 r __kstrtab_pci_pci_problems
ffffffff8141e8e5 r __kstrtab_isa_dma_bridge_buggy
ffffffff8141e8fa r __kstrtab_pci_power_names
ffffffff8141e90a r __kstrtab_pci_dev_put
ffffffff8141e916 r __kstrtab_pci_dev_get
ffffffff8141e922 r __kstrtab_pci_bus_type
ffffffff8141e92f r __kstrtab_pci_dev_driver
ffffffff8141e93e r __kstrtab_pci_unregister_driver
ffffffff8141e954 r __kstrtab___pci_register_driver
ffffffff8141e96a r __kstrtab_pci_match_id
ffffffff8141e977 r __kstrtab_pci_add_dynid
ffffffff8141e985 r __kstrtab_pci_get_class
ffffffff8141e993 r __kstrtab_pci_get_slot
ffffffff8141e9a0 r __kstrtab_pci_get_subsys
ffffffff8141e9af r __kstrtab_pci_get_device
ffffffff8141e9be r __kstrtab_pci_find_next_bus
ffffffff8141e9d0 r __kstrtab_pci_find_bus
ffffffff8141e9dd r __kstrtab_pci_dev_present
ffffffff8141e9ed r __kstrtab_pci_get_domain_bus_and_slot
ffffffff8141ea09 r __kstrtab_pci_disable_rom
ffffffff8141ea19 r __kstrtab_pci_enable_rom
ffffffff8141ea28 r __kstrtab_pci_unmap_rom
ffffffff8141ea36 r __kstrtab_pci_map_rom
ffffffff8141ea42 r __kstrtab_pci_claim_resource
ffffffff8141ea55 r __kstrtab_pci_lost_interrupt
ffffffff8141ea68 r __kstrtab_pci_vpd_find_info_keyword
ffffffff8141ea82 r __kstrtab_pci_vpd_find_tag
ffffffff8141ea93 r __kstrtab_pci_destroy_slot
ffffffff8141eaa4 r __kstrtab_pci_renumber_slot
ffffffff8141eab6 r __kstrtab_pci_create_slot
ffffffff8141eac6 r __kstrtab_pci_slots_kset
ffffffff8141ead5 r __kstrtab_pci_fixup_device
ffffffff8141eae6 r __kstrtab_pcie_aspm_support_enabled
ffffffff8141eb00 r __kstrtab_pcie_aspm_enabled
ffffffff8141eb12 r __kstrtab_pci_disable_link_state
ffffffff8141eb29 r __kstrtab_pci_disable_link_state_locked
ffffffff8141eb47 r __kstrtab_pcie_port_service_unregister
ffffffff8141eb64 r __kstrtab_pcie_port_service_register
ffffffff8141eb7f r __kstrtab_pcie_port_bus_type
ffffffff8141eb92 r __kstrtab_cper_severity_to_aer
ffffffff8141eba7 r __kstrtab_aer_recover_queue
ffffffff8141ebb9 r __kstrtab_pci_cleanup_aer_uncorrect_error_status
ffffffff8141ebe0 r __kstrtab_pci_disable_pcie_error_reporting
ffffffff8141ec01 r __kstrtab_pci_enable_pcie_error_reporting
ffffffff8141ec21 r __kstrtab_aer_irq
ffffffff8141ec29 r __kstrtab_pci_msi_enabled
ffffffff8141ec39 r __kstrtab_pci_disable_msix
ffffffff8141ec4a r __kstrtab_pci_enable_msix
ffffffff8141ec5a r __kstrtab_pci_disable_msi
ffffffff8141ec6a r __kstrtab_pci_enable_msi_block
ffffffff8141ec7f r __kstrtab_pci_restore_msi_state
ffffffff8141ec95 r __kstrtab_ht_destroy_irq
ffffffff8141eca4 r __kstrtab_ht_create_irq
ffffffff8141ecb2 r __kstrtab___ht_create_irq
ffffffff8141ecc2 r __kstrtab_pci_max_pasids
ffffffff8141ecd1 r __kstrtab_pci_pasid_features
ffffffff8141ece4 r __kstrtab_pci_disable_pasid
ffffffff8141ecf6 r __kstrtab_pci_enable_pasid
ffffffff8141ed07 r __kstrtab_pci_pri_status
ffffffff8141ed16 r __kstrtab_pci_pri_stopped
ffffffff8141ed26 r __kstrtab_pci_reset_pri
ffffffff8141ed34 r __kstrtab_pci_pri_enabled
ffffffff8141ed44 r __kstrtab_pci_disable_pri
ffffffff8141ed54 r __kstrtab_pci_enable_pri
ffffffff8141ed63 r __kstrtab_pci_ats_queue_depth
ffffffff8141ed77 r __kstrtab_pci_restore_ats_state
ffffffff8141ed8d r __kstrtab_pci_disable_ats
ffffffff8141ed9d r __kstrtab_pci_enable_ats
ffffffff8141edac r __kstrtab_pci_num_vf
ffffffff8141edb7 r __kstrtab_pci_sriov_migration
ffffffff8141edcb r __kstrtab_pci_disable_sriov
ffffffff8141eddd r __kstrtab_pci_enable_sriov
ffffffff8141edee r __kstrtab_pci_rescan_bus
ffffffff8141edfd r __kstrtab_pci_assign_unassigned_bridge_resources
ffffffff8141ee24 r __kstrtab_pci_bus_assign_resources
ffffffff8141ee3d r __kstrtab_pci_bus_size_bridges
ffffffff8141ee52 r __kstrtab_pci_setup_cardbus
ffffffff8141ee64 r __kstrtab_fb_notifier_call_chain
ffffffff8141ee7b r __kstrtab_fb_unregister_client
ffffffff8141ee90 r __kstrtab_fb_register_client
ffffffff8141eea3 r __kstrtab_fb_get_options
ffffffff8141eeb2 r __kstrtab_fb_set_suspend
ffffffff8141eec1 r __kstrtab_fb_get_buffer_offset
ffffffff8141eed6 r __kstrtab_fb_pan_display
ffffffff8141eee5 r __kstrtab_fb_blank
ffffffff8141eeee r __kstrtab_fb_set_var
ffffffff8141eef9 r __kstrtab_fb_show_logo
ffffffff8141ef06 r __kstrtab_registered_fb
ffffffff8141ef14 r __kstrtab_num_registered_fb
ffffffff8141ef26 r __kstrtab_unregister_framebuffer
ffffffff8141ef3d r __kstrtab_register_framebuffer
ffffffff8141ef52 r __kstrtab_remove_conflicting_framebuffers
ffffffff8141ef72 r __kstrtab_unlink_framebuffer
ffffffff8141ef85 r __kstrtab_fb_class
ffffffff8141ef8e r __kstrtab_fb_pad_unaligned_buffer
ffffffff8141efa6 r __kstrtab_fb_pad_aligned_buffer
ffffffff8141efbc r __kstrtab_fb_get_color_depth
ffffffff8141efcf r __kstrtab_lock_fb_info
ffffffff8141efdc r __kstrtab_fb_destroy_modedb
ffffffff8141efee r __kstrtab_fb_validate_mode
ffffffff8141efff r __kstrtab_fb_get_mode
ffffffff8141f00b r __kstrtab_fb_edid_add_monspecs
ffffffff8141f020 r __kstrtab_fb_edid_to_monspecs
ffffffff8141f034 r __kstrtab_fb_parse_edid
ffffffff8141f042 r __kstrtab_fb_firmware_edid
ffffffff8141f053 r __kstrtab_fb_invert_cmaps
ffffffff8141f063 r __kstrtab_fb_default_cmap
ffffffff8141f073 r __kstrtab_fb_set_cmap
ffffffff8141f07f r __kstrtab_fb_copy_cmap
ffffffff8141f08c r __kstrtab_fb_dealloc_cmap
ffffffff8141f09c r __kstrtab_fb_alloc_cmap
ffffffff8141f0aa r __kstrtab_framebuffer_release
ffffffff8141f0be r __kstrtab_framebuffer_alloc
ffffffff8141f0d0 r __kstrtab_fb_find_mode_cvt
ffffffff8141f0e1 r __kstrtab_fb_find_mode
ffffffff8141f0ee r __kstrtab_fb_videomode_to_modelist
ffffffff8141f107 r __kstrtab_fb_find_nearest_mode
ffffffff8141f11c r __kstrtab_fb_find_best_mode
ffffffff8141f12e r __kstrtab_fb_match_mode
ffffffff8141f13c r __kstrtab_fb_add_videomode
ffffffff8141f14d r __kstrtab_fb_mode_is_equal
ffffffff8141f15e r __kstrtab_fb_var_to_videomode
ffffffff8141f172 r __kstrtab_fb_videomode_to_var
ffffffff8141f186 r __kstrtab_fb_find_best_display
ffffffff8141f19b r __kstrtab_fb_destroy_modelist
ffffffff8141f1af r __kstrtab_vesa_modes
ffffffff8141f1ba r __kstrtab_fb_mode_option
ffffffff8141f1c9 r __kstrtab_vgacon_text_force
ffffffff8141f1db r __kstrtab_fbcon_set_bitops
ffffffff8141f1ec r __kstrtab_get_default_font
ffffffff8141f1fd r __kstrtab_find_font
ffffffff8141f207 r __kstrtab_font_vga_8x16
ffffffff8141f215 r __kstrtab_soft_cursor
ffffffff8141f221 r __kstrtab_fb_find_logo
ffffffff8141f22e r __kstrtab_cfb_fillrect
ffffffff8141f23b r __kstrtab_cfb_copyarea
ffffffff8141f248 r __kstrtab_cfb_imageblit
ffffffff8141f256 r __kstrtab_acpi_resources_are_enforced
ffffffff8141f272 r __kstrtab_acpi_check_region
ffffffff8141f284 r __kstrtab_acpi_check_resource_conflict
ffffffff8141f2a1 r __kstrtab_acpi_os_wait_events_complete
ffffffff8141f2be r __kstrtab_acpi_os_execute
ffffffff8141f2ce r __kstrtab_acpi_os_write_port
ffffffff8141f2e1 r __kstrtab_acpi_os_read_port
ffffffff8141f2f3 r __kstrtab_acpi_os_unmap_generic_address
ffffffff8141f311 r __kstrtab_acpi_os_map_generic_address
ffffffff8141f32d r __kstrtab_acpi_os_unmap_memory
ffffffff8141f342 r __kstrtab_acpi_os_map_memory
ffffffff8141f355 r __kstrtab_acpi_os_get_iomem
ffffffff8141f367 r __kstrtab_kacpi_hotplug_wq
ffffffff8141f378 r __kstrtab_acpi_evaluate_reference
ffffffff8141f390 r __kstrtab_acpi_evaluate_integer
ffffffff8141f3a6 r __kstrtab_acpi_extract_package
ffffffff8141f3bb r __kstrtab_acpi_kobj
ffffffff8141f3c5 r __kstrtab_unregister_acpi_bus_notifier
ffffffff8141f3e2 r __kstrtab_register_acpi_bus_notifier
ffffffff8141f3fd r __kstrtab_acpi_bus_generate_proc_event
ffffffff8141f41a r __kstrtab_acpi_bus_generate_proc_event4
ffffffff8141f438 r __kstrtab_acpi_run_osc
ffffffff8141f445 r __kstrtab_acpi_bus_can_wakeup
ffffffff8141f459 r __kstrtab_acpi_bus_power_manageable
ffffffff8141f473 r __kstrtab_acpi_bus_update_power
ffffffff8141f489 r __kstrtab_acpi_bus_set_power
ffffffff8141f49c r __kstrtab_acpi_bus_get_private_data
ffffffff8141f4b6 r __kstrtab_acpi_bus_private_data_handler
ffffffff8141f4d4 r __kstrtab_acpi_bus_get_status
ffffffff8141f4e8 r __kstrtab_acpi_bus_get_device
ffffffff8141f4fc r __kstrtab_acpi_root_dir
ffffffff8141f50a r __kstrtab_acpi_get_physical_device
ffffffff8141f523 r __kstrtab_acpi_get_child
ffffffff8141f532 r __kstrtab_acpi_bus_trim
ffffffff8141f540 r __kstrtab_acpi_bus_start
ffffffff8141f54f r __kstrtab_acpi_bus_add
ffffffff8141f55c r __kstrtab_acpi_device_hid
ffffffff8141f56c r __kstrtab_acpi_bus_get_ejd
ffffffff8141f57d r __kstrtab_acpi_bus_unregister_driver
ffffffff8141f598 r __kstrtab_acpi_bus_register_driver
ffffffff8141f5b1 r __kstrtab_acpi_match_device_ids
ffffffff8141f5c7 r __kstrtab_acpi_get_cpuid
ffffffff8141f5d6 r __kstrtab_acpi_ec_remove_query_handler
ffffffff8141f5f3 r __kstrtab_acpi_ec_add_query_handler
ffffffff8141f60d r __kstrtab_ec_get_handle
ffffffff8141f61b r __kstrtab_ec_transaction
ffffffff8141f62a r __kstrtab_ec_write
ffffffff8141f633 r __kstrtab_ec_read
ffffffff8141f63b r __kstrtab_ec_burst_disable
ffffffff8141f64c r __kstrtab_ec_burst_enable
ffffffff8141f65c r __kstrtab_first_ec
ffffffff8141f665 r __kstrtab_acpi_pci_osc_control_set
ffffffff8141f67e r __kstrtab_acpi_get_pci_dev
ffffffff8141f68f r __kstrtab_acpi_pci_find_root
ffffffff8141f6a2 r __kstrtab_acpi_is_root_bridge
ffffffff8141f6b6 r __kstrtab_acpi_get_pci_rootbridge_handle
ffffffff8141f6d5 r __kstrtab_acpi_pci_unregister_driver
ffffffff8141f6f0 r __kstrtab_acpi_pci_register_driver
ffffffff8141f709 r __kstrtab_acpi_bus_generate_netlink_event
ffffffff8141f729 r __kstrtab_unregister_acpi_notifier
ffffffff8141f742 r __kstrtab_register_acpi_notifier
ffffffff8141f759 r __kstrtab_acpi_notifier_call_chain
ffffffff8141f772 r __kstrtab_acpi_get_node
ffffffff8141f780 r __kstrtab_acpi_unlock_battery_dir
ffffffff8141f798 r __kstrtab_acpi_lock_battery_dir
ffffffff8141f7ae r __kstrtab_acpi_unlock_ac_dir
ffffffff8141f7c1 r __kstrtab_acpi_lock_ac_dir
ffffffff8141f7d2 r __kstrtab_acpi_release_global_lock
ffffffff8141f7eb r __kstrtab_acpi_acquire_global_lock
ffffffff8141f804 r __kstrtab_acpi_remove_gpe_handler
ffffffff8141f81c r __kstrtab_acpi_install_gpe_handler
ffffffff8141f835 r __kstrtab_acpi_remove_fixed_event_handler
ffffffff8141f855 r __kstrtab_acpi_install_fixed_event_handler
ffffffff8141f876 r __kstrtab_acpi_install_global_event_handler
ffffffff8141f898 r __kstrtab_acpi_remove_notify_handler
ffffffff8141f8b3 r __kstrtab_acpi_install_notify_handler
ffffffff8141f8cf r __kstrtab_acpi_get_event_status
ffffffff8141f8e5 r __kstrtab_acpi_clear_event
ffffffff8141f8f6 r __kstrtab_acpi_disable_event
ffffffff8141f909 r __kstrtab_acpi_enable_event
ffffffff8141f91b r __kstrtab_acpi_disable
ffffffff8141f928 r __kstrtab_acpi_enable
ffffffff8141f934 r __kstrtab_acpi_get_gpe_device
ffffffff8141f948 r __kstrtab_acpi_remove_gpe_block
ffffffff8141f95e r __kstrtab_acpi_install_gpe_block
ffffffff8141f975 r __kstrtab_acpi_enable_all_runtime_gpes
ffffffff8141f992 r __kstrtab_acpi_disable_all_gpes
ffffffff8141f9a8 r __kstrtab_acpi_get_gpe_status
ffffffff8141f9bc r __kstrtab_acpi_clear_gpe
ffffffff8141f9cb r __kstrtab_acpi_set_gpe_wake_mask
ffffffff8141f9e2 r __kstrtab_acpi_setup_gpe_for_wake
ffffffff8141f9fa r __kstrtab_acpi_disable_gpe
ffffffff8141fa0b r __kstrtab_acpi_enable_gpe
ffffffff8141fa1b r __kstrtab_acpi_update_all_gpes
ffffffff8141fa30 r __kstrtab_acpi_remove_address_space_handler
ffffffff8141fa52 r __kstrtab_acpi_install_address_space_handler
ffffffff8141fa75 r __kstrtab_acpi_get_sleep_type_data
ffffffff8141fa8e r __kstrtab_acpi_write_bit_register
ffffffff8141faa6 r __kstrtab_acpi_read_bit_register
ffffffff8141fabd r __kstrtab_acpi_write
ffffffff8141fac8 r __kstrtab_acpi_read
ffffffff8141fad2 r __kstrtab_acpi_reset
ffffffff8141fadd r __kstrtab_acpi_leave_sleep_state
ffffffff8141faf4 r __kstrtab_acpi_leave_sleep_state_prep
ffffffff8141fb10 r __kstrtab_acpi_enter_sleep_state
ffffffff8141fb27 r __kstrtab_acpi_enter_sleep_state_prep
ffffffff8141fb43 r __kstrtab_acpi_enter_sleep_state_s4bios
ffffffff8141fb61 r __kstrtab_acpi_set_firmware_waking_vector64
ffffffff8141fb83 r __kstrtab_acpi_set_firmware_waking_vector
ffffffff8141fba3 r __kstrtab_acpi_get_data
ffffffff8141fbb1 r __kstrtab_acpi_detach_data
ffffffff8141fbc2 r __kstrtab_acpi_attach_data
ffffffff8141fbd3 r __kstrtab_acpi_get_devices
ffffffff8141fbe4 r __kstrtab_acpi_walk_namespace
ffffffff8141fbf8 r __kstrtab_acpi_evaluate_object
ffffffff8141fc0d r __kstrtab_acpi_evaluate_object_typed
ffffffff8141fc28 r __kstrtab_acpi_install_method
ffffffff8141fc3c r __kstrtab_acpi_get_object_info
ffffffff8141fc51 r __kstrtab_acpi_get_name
ffffffff8141fc5f r __kstrtab_acpi_get_handle
ffffffff8141fc6f r __kstrtab_acpi_get_next_object
ffffffff8141fc84 r __kstrtab_acpi_get_parent
ffffffff8141fc94 r __kstrtab_acpi_get_type
ffffffff8141fca2 r __kstrtab_acpi_get_id
ffffffff8141fcae r __kstrtab_acpi_walk_resources
ffffffff8141fcc2 r __kstrtab_acpi_get_vendor_resource
ffffffff8141fcdb r __kstrtab_acpi_resource_to_address64
ffffffff8141fcf6 r __kstrtab_acpi_get_event_resources
ffffffff8141fd0f r __kstrtab_acpi_set_current_resources
ffffffff8141fd2a r __kstrtab_acpi_get_current_resources
ffffffff8141fd45 r __kstrtab_acpi_get_irq_routing_table
ffffffff8141fd60 r __kstrtab_acpi_remove_table_handler
ffffffff8141fd7a r __kstrtab_acpi_install_table_handler
ffffffff8141fd95 r __kstrtab_acpi_load_tables
ffffffff8141fda6 r __kstrtab_acpi_get_table_by_index
ffffffff8141fdbe r __kstrtab_acpi_get_table
ffffffff8141fdcd r __kstrtab_acpi_unload_table_id
ffffffff8141fde2 r __kstrtab_acpi_get_table_header
ffffffff8141fdf8 r __kstrtab_acpi_load_table
ffffffff8141fe08 r __kstrtab_acpi_format_exception
ffffffff8141fe1e r __kstrtab_acpi_current_gpe_count
ffffffff8141fe35 r __kstrtab_acpi_dbg_layer
ffffffff8141fe44 r __kstrtab_acpi_dbg_level
ffffffff8141fe53 r __kstrtab_acpi_gbl_FADT
ffffffff8141fe61 r __kstrtab_acpi_check_address_range
ffffffff8141fe7a r __kstrtab_acpi_install_interface_handler
ffffffff8141fe99 r __kstrtab_acpi_remove_interface
ffffffff8141feaf r __kstrtab_acpi_install_interface
ffffffff8141fec6 r __kstrtab_acpi_purge_cached_objects
ffffffff8141fee0 r __kstrtab_acpi_terminate
ffffffff8141feef r __kstrtab_acpi_initialize_objects
ffffffff8141ff07 r __kstrtab_acpi_enable_subsystem
ffffffff8141ff1d r __kstrtab_acpi_info
ffffffff8141ff27 r __kstrtab_acpi_warning
ffffffff8141ff34 r __kstrtab_acpi_exception
ffffffff8141ff43 r __kstrtab_acpi_error
ffffffff8141ff4e r __kstrtab_unregister_acpi_hed_notifier
ffffffff8141ff6b r __kstrtab_register_acpi_hed_notifier
ffffffff8141ff86 r __kstrtab_apei_osc_setup
ffffffff8141ff95 r __kstrtab_apei_get_debugfs_dir
ffffffff8141ffaa r __kstrtab_apei_exec_collect_resources
ffffffff8141ffc6 r __kstrtab_apei_write
ffffffff8141ffd1 r __kstrtab_apei_read
ffffffff8141ffdb r __kstrtab_apei_map_generic_address
ffffffff8141fff4 r __kstrtab_apei_resources_release
ffffffff8142000b r __kstrtab_apei_resources_request
ffffffff81420022 r __kstrtab_apei_resources_sub
ffffffff81420035 r __kstrtab_apei_resources_add
ffffffff81420048 r __kstrtab_apei_resources_fini
ffffffff8142005c r __kstrtab_apei_exec_post_unmap_gars
ffffffff81420076 r __kstrtab_apei_exec_pre_map_gars
ffffffff8142008d r __kstrtab___apei_exec_run
ffffffff8142009d r __kstrtab_apei_exec_noop
ffffffff814200ac r __kstrtab_apei_exec_write_register_value
ffffffff814200cb r __kstrtab_apei_exec_write_register
ffffffff814200e4 r __kstrtab_apei_exec_read_register_value
ffffffff81420102 r __kstrtab_apei_exec_read_register
ffffffff8142011a r __kstrtab_apei_exec_ctx_init
ffffffff8142012d r __kstrtab_apei_hest_parse
ffffffff8142013d r __kstrtab_hest_disable
ffffffff8142014a r __kstrtab_apei_estatus_check
ffffffff8142015d r __kstrtab_apei_estatus_check_header
ffffffff81420177 r __kstrtab_apei_estatus_print
ffffffff8142018a r __kstrtab_cper_next_record_id
ffffffff8142019e r __kstrtab_erst_clear
ffffffff814201a9 r __kstrtab_erst_read
ffffffff814201b3 r __kstrtab_erst_write
ffffffff814201be r __kstrtab_erst_get_record_id_end
ffffffff814201d5 r __kstrtab_erst_get_record_id_next
ffffffff814201ed r __kstrtab_erst_get_record_id_begin
ffffffff81420206 r __kstrtab_erst_get_record_count
ffffffff8142021c r __kstrtab_erst_disable
ffffffff81420229 r __kstrtab_pnp_platform_devices
ffffffff8142023e r __kstrtab_pnp_unregister_card_driver
ffffffff81420259 r __kstrtab_pnp_register_card_driver
ffffffff81420272 r __kstrtab_pnp_release_card_device
ffffffff8142028a r __kstrtab_pnp_request_card_device
ffffffff814202a2 r __kstrtab_pnp_device_detach
ffffffff814202b4 r __kstrtab_pnp_device_attach
ffffffff814202c6 r __kstrtab_pnp_unregister_driver
ffffffff814202dc r __kstrtab_pnp_register_driver
ffffffff814202f0 r __kstrtab_pnp_range_reserved
ffffffff81420303 r __kstrtab_pnp_possible_config
ffffffff81420317 r __kstrtab_pnp_get_resource
ffffffff81420328 r __kstrtab_pnp_disable_dev
ffffffff81420338 r __kstrtab_pnp_activate_dev
ffffffff81420349 r __kstrtab_pnp_stop_dev
ffffffff81420356 r __kstrtab_pnp_start_dev
ffffffff81420364 r __kstrtab_pnp_is_active
ffffffff81420372 r __kstrtab_pnpacpi_protocol
ffffffff81420383 r __kstrtab_gnttab_init
ffffffff8142038f r __kstrtab_gnttab_unmap_refs
ffffffff814203a1 r __kstrtab_gnttab_map_refs
ffffffff814203b1 r __kstrtab_gnttab_max_grant_frames
ffffffff814203c9 r __kstrtab_gnttab_cancel_free_callback
ffffffff814203e5 r __kstrtab_gnttab_request_free_callback
ffffffff81420402 r __kstrtab_gnttab_release_grant_reference
ffffffff81420421 r __kstrtab_gnttab_claim_grant_reference
ffffffff8142043e r __kstrtab_gnttab_empty_grant_references
ffffffff8142045c r __kstrtab_gnttab_alloc_grant_references
ffffffff8142047a r __kstrtab_gnttab_free_grant_references
ffffffff81420497 r __kstrtab_gnttab_free_grant_reference
ffffffff814204b3 r __kstrtab_gnttab_end_foreign_transfer
ffffffff814204cf r __kstrtab_gnttab_end_foreign_transfer_ref
ffffffff814204ef r __kstrtab_gnttab_grant_foreign_transfer_ref
ffffffff81420511 r __kstrtab_gnttab_grant_foreign_transfer
ffffffff8142052f r __kstrtab_gnttab_end_foreign_access
ffffffff81420549 r __kstrtab_gnttab_end_foreign_access_ref
ffffffff81420567 r __kstrtab_gnttab_query_foreign_access
ffffffff81420583 r __kstrtab_gnttab_trans_grants_available
ffffffff814205a1 r __kstrtab_gnttab_grant_foreign_access_trans
ffffffff814205c3 r __kstrtab_gnttab_grant_foreign_access_trans_ref
ffffffff814205e9 r __kstrtab_gnttab_subpage_grants_available
ffffffff81420609 r __kstrtab_gnttab_grant_foreign_access_subpage
ffffffff8142062d r __kstrtab_gnttab_grant_foreign_access_subpage_ref
ffffffff81420655 r __kstrtab_gnttab_grant_foreign_access
ffffffff81420671 r __kstrtab_gnttab_grant_foreign_access_ref
ffffffff81420691 r __kstrtab_xen_hvm_resume_frames
ffffffff814206a7 r __kstrtab_xen_features
ffffffff814206b4 r __kstrtab_xen_set_callback_via
ffffffff814206c9 r __kstrtab_xen_test_irq_shared
ffffffff814206dd r __kstrtab_xen_poll_irq_timeout
ffffffff814206f2 r __kstrtab_xen_clear_irq_pending
ffffffff81420708 r __kstrtab_xen_hvm_evtchn_do_upcall
ffffffff81420721 r __kstrtab_evtchn_put
ffffffff8142072c r __kstrtab_evtchn_get
ffffffff81420737 r __kstrtab_evtchn_make_refcounted
ffffffff8142074e r __kstrtab_unbind_from_irqhandler
ffffffff81420765 r __kstrtab_bind_virq_to_irqhandler
ffffffff8142077d r __kstrtab_bind_interdomain_evtchn_to_irqhandler
ffffffff814207a3 r __kstrtab_bind_evtchn_to_irqhandler
ffffffff814207bd r __kstrtab_bind_evtchn_to_irq
ffffffff814207d0 r __kstrtab_xen_pirq_from_irq
ffffffff814207e2 r __kstrtab_xen_irq_from_gsi
ffffffff814207f3 r __kstrtab_notify_remote_via_irq
ffffffff81420809 r __kstrtab_irq_from_evtchn
ffffffff81420819 r __kstrtab_xen_setup_shutdown_event
ffffffff81420832 r __kstrtab_free_xenballooned_pages
ffffffff8142084a r __kstrtab_alloc_xenballooned_pages
ffffffff81420863 r __kstrtab_balloon_set_new_target
ffffffff8142087a r __kstrtab_balloon_stats
ffffffff81420888 r __kstrtab_xenbus_read_driver_state
ffffffff814208a1 r __kstrtab_xenbus_unmap_ring
ffffffff814208b3 r __kstrtab_xenbus_unmap_ring_vfree
ffffffff814208cb r __kstrtab_xenbus_map_ring
ffffffff814208db r __kstrtab_xenbus_map_ring_valloc
ffffffff814208f2 r __kstrtab_xenbus_free_evtchn
ffffffff81420905 r __kstrtab_xenbus_bind_evtchn
ffffffff81420918 r __kstrtab_xenbus_alloc_evtchn
ffffffff8142092c r __kstrtab_xenbus_grant_ring
ffffffff8142093e r __kstrtab_xenbus_dev_fatal
ffffffff8142094f r __kstrtab_xenbus_dev_error
ffffffff81420960 r __kstrtab_xenbus_frontend_closed
ffffffff81420977 r __kstrtab_xenbus_switch_state
ffffffff8142098b r __kstrtab_xenbus_watch_pathfmt
ffffffff814209a0 r __kstrtab_xenbus_watch_path
ffffffff814209b2 r __kstrtab_xenbus_strstate
ffffffff814209c2 r __kstrtab_unregister_xenbus_watch
ffffffff814209da r __kstrtab_register_xenbus_watch
ffffffff814209f0 r __kstrtab_xenbus_gather
ffffffff814209fe r __kstrtab_xenbus_printf
ffffffff81420a0c r __kstrtab_xenbus_scanf
ffffffff81420a19 r __kstrtab_xenbus_transaction_end
ffffffff81420a30 r __kstrtab_xenbus_transaction_start
ffffffff81420a49 r __kstrtab_xenbus_rm
ffffffff81420a53 r __kstrtab_xenbus_mkdir
ffffffff81420a60 r __kstrtab_xenbus_write
ffffffff81420a6d r __kstrtab_xenbus_read
ffffffff81420a79 r __kstrtab_xenbus_exists
ffffffff81420a87 r __kstrtab_xenbus_directory
ffffffff81420a98 r __kstrtab_xenbus_dev_request_and_reply
ffffffff81420ab5 r __kstrtab_xenbus_probe
ffffffff81420ac2 r __kstrtab_unregister_xenstore_notifier
ffffffff81420adf r __kstrtab_register_xenstore_notifier
ffffffff81420afa r __kstrtab_xenbus_dev_cancel
ffffffff81420b0c r __kstrtab_xenbus_dev_resume
ffffffff81420b1e r __kstrtab_xenbus_dev_suspend
ffffffff81420b31 r __kstrtab_xenbus_dev_changed
ffffffff81420b44 r __kstrtab_xenbus_probe_devices
ffffffff81420b59 r __kstrtab_xenbus_probe_node
ffffffff81420b6b r __kstrtab_xenbus_dev_attrs
ffffffff81420b7c r __kstrtab_xenbus_unregister_driver
ffffffff81420b95 r __kstrtab_xenbus_register_driver_common
ffffffff81420bb3 r __kstrtab_xenbus_dev_shutdown
ffffffff81420bc7 r __kstrtab_xenbus_dev_remove
ffffffff81420bd9 r __kstrtab_xenbus_dev_probe
ffffffff81420bea r __kstrtab_xenbus_otherend_changed
ffffffff81420c02 r __kstrtab_xenbus_read_otherend_details
ffffffff81420c1f r __kstrtab_xenbus_match
ffffffff81420c2c r __kstrtab_xen_store_interface
ffffffff81420c40 r __kstrtab_xen_store_evtchn
ffffffff81420c51 r __kstrtab_xenbus_register_backend
ffffffff81420c69 r __kstrtab_xenbus_dev_is_online
ffffffff81420c7e r __kstrtab_xen_xenbus_fops
ffffffff81420c8e r __kstrtab_xenbus_register_frontend
ffffffff81420ca7 r __kstrtab_xen_biovec_phys_mergeable
ffffffff81420cc1 r __kstrtab_register_xen_selfballooning
ffffffff81420cdd r __kstrtab_xen_swiotlb_dma_supported
ffffffff81420cf7 r __kstrtab_xen_swiotlb_dma_mapping_error
ffffffff81420d15 r __kstrtab_xen_swiotlb_sync_sg_for_device
ffffffff81420d34 r __kstrtab_xen_swiotlb_sync_sg_for_cpu
ffffffff81420d50 r __kstrtab_xen_swiotlb_unmap_sg
ffffffff81420d65 r __kstrtab_xen_swiotlb_unmap_sg_attrs
ffffffff81420d80 r __kstrtab_xen_swiotlb_map_sg
ffffffff81420d93 r __kstrtab_xen_swiotlb_map_sg_attrs
ffffffff81420dac r __kstrtab_xen_swiotlb_sync_single_for_device
ffffffff81420dcf r __kstrtab_xen_swiotlb_sync_single_for_cpu
ffffffff81420def r __kstrtab_xen_swiotlb_unmap_page
ffffffff81420e06 r __kstrtab_xen_swiotlb_map_page
ffffffff81420e1b r __kstrtab_xen_swiotlb_free_coherent
ffffffff81420e35 r __kstrtab_xen_swiotlb_alloc_coherent
ffffffff81420e50 r __kstrtab_get_current_tty
ffffffff81420e60 r __kstrtab_tty_devnum
ffffffff81420e6b r __kstrtab_tty_unregister_driver
ffffffff81420e81 r __kstrtab_tty_register_driver
ffffffff81420e95 r __kstrtab_put_tty_driver
ffffffff81420ea4 r __kstrtab_tty_set_operations
ffffffff81420eb7 r __kstrtab_tty_driver_kref_put
ffffffff81420ecb r __kstrtab___alloc_tty_driver
ffffffff81420ede r __kstrtab_tty_unregister_device
ffffffff81420ef4 r __kstrtab_tty_register_device
ffffffff81420f08 r __kstrtab_tty_put_char
ffffffff81420f15 r __kstrtab_do_SAK
ffffffff81420f1c r __kstrtab_tty_pair_get_pty
ffffffff81420f2d r __kstrtab_tty_pair_get_tty
ffffffff81420f3e r __kstrtab_tty_get_pgrp
ffffffff81420f4b r __kstrtab_tty_kref_put
ffffffff81420f58 r __kstrtab_tty_shutdown
ffffffff81420f65 r __kstrtab_tty_free_termios
ffffffff81420f76 r __kstrtab_tty_standard_install
ffffffff81420f8b r __kstrtab_tty_init_termios
ffffffff81420f9c r __kstrtab_start_tty
ffffffff81420fa6 r __kstrtab_stop_tty
ffffffff81420faf r __kstrtab_tty_hung_up_p
ffffffff81420fbd r __kstrtab_tty_vhangup
ffffffff81420fc9 r __kstrtab_tty_hangup
ffffffff81420fd4 r __kstrtab_tty_wakeup
ffffffff81420fdf r __kstrtab_tty_check_change
ffffffff81420ff0 r __kstrtab_tty_name
ffffffff81420ff9 r __kstrtab_tty_mutex
ffffffff81421003 r __kstrtab_tty_std_termios
ffffffff81421013 r __kstrtab_n_tty_inherit_ops
ffffffff81421025 r __kstrtab_n_tty_compat_ioctl_helper
ffffffff8142103f r __kstrtab_n_tty_ioctl_helper
ffffffff81421052 r __kstrtab_tty_perform_flush
ffffffff81421064 r __kstrtab_tty_mode_ioctl
ffffffff81421073 r __kstrtab_tty_set_termios
ffffffff81421083 r __kstrtab_tty_termios_hw_change
ffffffff81421099 r __kstrtab_tty_termios_copy_hw
ffffffff814210ad r __kstrtab_tty_get_baud_rate
ffffffff814210bf r __kstrtab_tty_encode_baud_rate
ffffffff814210d4 r __kstrtab_tty_termios_encode_baud_rate
ffffffff814210f1 r __kstrtab_tty_termios_input_baud_rate
ffffffff8142110d r __kstrtab_tty_termios_baud_rate
ffffffff81421123 r __kstrtab_tty_wait_until_sent
ffffffff81421137 r __kstrtab_tty_unthrottle
ffffffff81421146 r __kstrtab_tty_throttle
ffffffff81421153 r __kstrtab_tty_driver_flush_buffer
ffffffff8142116b r __kstrtab_tty_write_room
ffffffff8142117a r __kstrtab_tty_chars_in_buffer
ffffffff8142118e r __kstrtab_tty_ldisc_flush
ffffffff8142119e r __kstrtab_tty_ldisc_deref
ffffffff814211ae r __kstrtab_tty_ldisc_ref
ffffffff814211bc r __kstrtab_tty_ldisc_ref_wait
ffffffff814211cf r __kstrtab_tty_unregister_ldisc
ffffffff814211e4 r __kstrtab_tty_register_ldisc
ffffffff814211f7 r __kstrtab_tty_flip_buffer_push
ffffffff8142120c r __kstrtab_tty_prepare_flip_string_flags
ffffffff8142122a r __kstrtab_tty_prepare_flip_string
ffffffff81421242 r __kstrtab_tty_schedule_flip
ffffffff81421254 r __kstrtab_tty_insert_flip_string_flags
ffffffff81421271 r __kstrtab_tty_insert_flip_string_fixed_flag
ffffffff81421293 r __kstrtab_tty_buffer_request_room
ffffffff814212ab r __kstrtab_tty_port_open
ffffffff814212b9 r __kstrtab_tty_port_close
ffffffff814212c8 r __kstrtab_tty_port_close_end
ffffffff814212db r __kstrtab_tty_port_close_start
ffffffff814212f0 r __kstrtab_tty_port_block_til_ready
ffffffff81421309 r __kstrtab_tty_port_lower_dtr_rts
ffffffff81421320 r __kstrtab_tty_port_raise_dtr_rts
ffffffff81421337 r __kstrtab_tty_port_carrier_raised
ffffffff8142134f r __kstrtab_tty_port_hangup
ffffffff8142135f r __kstrtab_tty_port_tty_set
ffffffff81421370 r __kstrtab_tty_port_tty_get
ffffffff81421381 r __kstrtab_tty_port_put
ffffffff8142138e r __kstrtab_tty_port_free_xmit_buf
ffffffff814213a5 r __kstrtab_tty_port_alloc_xmit_buf
ffffffff814213bd r __kstrtab_tty_port_init
ffffffff814213cb r __kstrtab_tty_unlock
ffffffff814213d6 r __kstrtab_tty_lock
ffffffff814213df r __kstrtab_pm_set_vt_switch
ffffffff814213f0 r __kstrtab_vt_get_leds
ffffffff814213fc r __kstrtab_kd_mksound
ffffffff81421407 r __kstrtab_unregister_keyboard_notifier
ffffffff81421424 r __kstrtab_register_keyboard_notifier
ffffffff8142143f r __kstrtab_con_copy_unimap
ffffffff8142144f r __kstrtab_con_set_default_unimap
ffffffff81421466 r __kstrtab_inverse_translate
ffffffff81421478 r __kstrtab_give_up_console
ffffffff81421488 r __kstrtab_take_over_console
ffffffff8142149a r __kstrtab_global_cursor_default
ffffffff814214b0 r __kstrtab_vc_cons
ffffffff814214b8 r __kstrtab_console_blanked
ffffffff814214c8 r __kstrtab_console_blank_hook
ffffffff814214db r __kstrtab_fg_console
ffffffff814214e6 r __kstrtab_vc_resize
ffffffff814214f0 r __kstrtab_redraw_screen
ffffffff814214fe r __kstrtab_update_region
ffffffff8142150c r __kstrtab_default_blu
ffffffff81421518 r __kstrtab_default_grn
ffffffff81421524 r __kstrtab_default_red
ffffffff81421530 r __kstrtab_color_table
ffffffff8142153c r __kstrtab_screen_glyph
ffffffff81421549 r __kstrtab_do_unblank_screen
ffffffff8142155b r __kstrtab_do_blank_screen
ffffffff8142156b r __kstrtab_unregister_con_driver
ffffffff81421581 r __kstrtab_register_con_driver
ffffffff81421595 r __kstrtab_con_debug_leave
ffffffff814215a5 r __kstrtab_con_debug_enter
ffffffff814215b5 r __kstrtab_con_is_bound
ffffffff814215c2 r __kstrtab_unbind_con_driver
ffffffff814215d4 r __kstrtab_unregister_vt_notifier
ffffffff814215eb r __kstrtab_register_vt_notifier
ffffffff81421600 r __kstrtab_hvc_remove
ffffffff8142160b r __kstrtab_hvc_alloc
ffffffff81421615 r __kstrtab___hvc_resize
ffffffff81421622 r __kstrtab_hvc_poll
ffffffff8142162b r __kstrtab_hvc_kick
ffffffff81421634 r __kstrtab_hvc_instantiate
ffffffff81421644 r __kstrtab_generate_random_uuid
ffffffff81421659 r __kstrtab_get_random_bytes
ffffffff8142166a r __kstrtab_add_input_randomness
ffffffff8142167f r __kstrtab_misc_deregister
ffffffff8142168f r __kstrtab_misc_register
ffffffff8142169d r __kstrtab_nvram_check_checksum
ffffffff814216b2 r __kstrtab___nvram_check_checksum
ffffffff814216c9 r __kstrtab_nvram_write_byte
ffffffff814216da r __kstrtab___nvram_write_byte
ffffffff814216ed r __kstrtab_nvram_read_byte
ffffffff814216fd r __kstrtab___nvram_read_byte
ffffffff8142170f r __kstrtab_vga_client_register
ffffffff81421723 r __kstrtab_vga_set_legacy_decoding
ffffffff8142173b r __kstrtab_vga_put
ffffffff81421743 r __kstrtab_vga_tryget
ffffffff8142174e r __kstrtab_vga_get
ffffffff81421756 r __kstrtab__dev_info
ffffffff81421760 r __kstrtab_dev_notice
ffffffff8142176b r __kstrtab_dev_warn
ffffffff81421774 r __kstrtab_dev_err
ffffffff8142177c r __kstrtab_dev_crit
ffffffff81421785 r __kstrtab_dev_alert
ffffffff8142178f r __kstrtab_dev_emerg
ffffffff81421799 r __kstrtab_dev_printk
ffffffff814217a4 r __kstrtab___dev_printk
ffffffff814217b1 r __kstrtab_device_move
ffffffff814217bd r __kstrtab_device_rename
ffffffff814217cb r __kstrtab_device_destroy
ffffffff814217da r __kstrtab_device_create
ffffffff814217e8 r __kstrtab_device_create_vargs
ffffffff814217fc r __kstrtab_root_device_unregister
ffffffff81421813 r __kstrtab___root_device_register
ffffffff8142182a r __kstrtab_device_remove_file
ffffffff8142183d r __kstrtab_device_create_file
ffffffff81421850 r __kstrtab_put_device
ffffffff8142185b r __kstrtab_get_device
ffffffff81421866 r __kstrtab_device_unregister
ffffffff81421878 r __kstrtab_device_del
ffffffff81421883 r __kstrtab_device_register
ffffffff81421893 r __kstrtab_device_add
ffffffff8142189e r __kstrtab_device_initialize
ffffffff814218b0 r __kstrtab_device_find_child
ffffffff814218c2 r __kstrtab_device_for_each_child
ffffffff814218d8 r __kstrtab_dev_set_name
ffffffff814218e5 r __kstrtab_device_schedule_callback_owner
ffffffff81421904 r __kstrtab_device_remove_bin_file
ffffffff8142191b r __kstrtab_device_create_bin_file
ffffffff81421932 r __kstrtab_device_show_int
ffffffff81421942 r __kstrtab_device_store_int
ffffffff81421953 r __kstrtab_device_show_ulong
ffffffff81421965 r __kstrtab_device_store_ulong
ffffffff81421978 r __kstrtab_dev_driver_string
ffffffff8142198a r __kstrtab_subsys_system_register
ffffffff814219a1 r __kstrtab_subsys_interface_unregister
ffffffff814219bd r __kstrtab_subsys_interface_register
ffffffff814219d7 r __kstrtab_subsys_dev_iter_exit
ffffffff814219ec r __kstrtab_subsys_dev_iter_next
ffffffff81421a01 r __kstrtab_subsys_dev_iter_init
ffffffff81421a16 r __kstrtab_bus_sort_breadthfirst
ffffffff81421a2c r __kstrtab_bus_get_device_klist
ffffffff81421a41 r __kstrtab_bus_get_kset
ffffffff81421a4e r __kstrtab_bus_unregister_notifier
ffffffff81421a66 r __kstrtab_bus_register_notifier
ffffffff81421a7c r __kstrtab_bus_unregister
ffffffff81421a8b r __kstrtab___bus_register
ffffffff81421a9a r __kstrtab_device_reprobe
ffffffff81421aa9 r __kstrtab_bus_rescan_devices
ffffffff81421abc r __kstrtab_bus_for_each_drv
ffffffff81421acd r __kstrtab_subsys_find_device_by_id
ffffffff81421ae6 r __kstrtab_bus_find_device_by_name
ffffffff81421afe r __kstrtab_bus_find_device
ffffffff81421b0e r __kstrtab_bus_for_each_dev
ffffffff81421b1f r __kstrtab_bus_remove_file
ffffffff81421b2f r __kstrtab_bus_create_file
ffffffff81421b3f r __kstrtab_dev_set_drvdata
ffffffff81421b4f r __kstrtab_dev_get_drvdata
ffffffff81421b5f r __kstrtab_device_release_driver
ffffffff81421b75 r __kstrtab_driver_attach
ffffffff81421b83 r __kstrtab_device_attach
ffffffff81421b91 r __kstrtab_wait_for_device_probe
ffffffff81421ba7 r __kstrtab_device_bind_driver
ffffffff81421bba r __kstrtab_syscore_resume
ffffffff81421bc9 r __kstrtab_syscore_suspend
ffffffff81421bd9 r __kstrtab_unregister_syscore_ops
ffffffff81421bf0 r __kstrtab_register_syscore_ops
ffffffff81421c05 r __kstrtab_driver_find
ffffffff81421c11 r __kstrtab_driver_unregister
ffffffff81421c23 r __kstrtab_driver_register
ffffffff81421c33 r __kstrtab_driver_remove_file
ffffffff81421c46 r __kstrtab_driver_create_file
ffffffff81421c59 r __kstrtab_driver_find_device
ffffffff81421c6c r __kstrtab_driver_for_each_device
ffffffff81421c83 r __kstrtab_class_interface_unregister
ffffffff81421c9e r __kstrtab_class_interface_register
ffffffff81421cb7 r __kstrtab_class_destroy
ffffffff81421cc5 r __kstrtab_class_unregister
ffffffff81421cd6 r __kstrtab_class_remove_file
ffffffff81421ce8 r __kstrtab_class_create_file
ffffffff81421cfa r __kstrtab_class_compat_remove_link
ffffffff81421d13 r __kstrtab_class_compat_create_link
ffffffff81421d2c r __kstrtab_class_compat_unregister
ffffffff81421d44 r __kstrtab_class_compat_register
ffffffff81421d5a r __kstrtab_show_class_attr_string
ffffffff81421d71 r __kstrtab_class_find_device
ffffffff81421d83 r __kstrtab_class_for_each_device
ffffffff81421d99 r __kstrtab_class_dev_iter_exit
ffffffff81421dad r __kstrtab_class_dev_iter_next
ffffffff81421dc1 r __kstrtab_class_dev_iter_init
ffffffff81421dd5 r __kstrtab___class_create
ffffffff81421de4 r __kstrtab___class_register
ffffffff81421df5 r __kstrtab_dma_get_required_mask
ffffffff81421e0b r __kstrtab_platform_bus_type
ffffffff81421e1d r __kstrtab_platform_create_bundle
ffffffff81421e34 r __kstrtab_platform_driver_probe
ffffffff81421e4a r __kstrtab_platform_driver_unregister
ffffffff81421e65 r __kstrtab_platform_driver_register
ffffffff81421e7e r __kstrtab_platform_device_register_full
ffffffff81421e9c r __kstrtab_platform_device_unregister
ffffffff81421eb7 r __kstrtab_platform_device_register
ffffffff81421ed0 r __kstrtab_platform_device_del
ffffffff81421ee4 r __kstrtab_platform_device_add
ffffffff81421ef8 r __kstrtab_platform_device_add_data
ffffffff81421f11 r __kstrtab_platform_device_add_resources
ffffffff81421f2f r __kstrtab_platform_device_alloc
ffffffff81421f45 r __kstrtab_platform_device_put
ffffffff81421f59 r __kstrtab_platform_add_devices
ffffffff81421f6e r __kstrtab_platform_get_irq_byname
ffffffff81421f86 r __kstrtab_platform_get_resource_byname
ffffffff81421fa3 r __kstrtab_platform_get_irq
ffffffff81421fb4 r __kstrtab_platform_get_resource
ffffffff81421fca r __kstrtab_platform_bus
ffffffff81421fd7 r __kstrtab_cpu_is_hotpluggable
ffffffff81421feb r __kstrtab_get_cpu_device
ffffffff81421ffa r __kstrtab_cpu_subsys
ffffffff81422005 r __kstrtab_firmware_kobj
ffffffff81422013 r __kstrtab_devm_kfree
ffffffff8142201e r __kstrtab_devm_kzalloc
ffffffff8142202b r __kstrtab_devres_release_group
ffffffff81422040 r __kstrtab_devres_remove_group
ffffffff81422054 r __kstrtab_devres_close_group
ffffffff81422067 r __kstrtab_devres_open_group
ffffffff81422079 r __kstrtab_devres_destroy
ffffffff81422088 r __kstrtab_devres_remove
ffffffff81422096 r __kstrtab_devres_get
ffffffff814220a1 r __kstrtab_devres_find
ffffffff814220ad r __kstrtab_devres_add
ffffffff814220b8 r __kstrtab_devres_free
ffffffff814220c4 r __kstrtab_devres_alloc
ffffffff814220d1 r __kstrtab_attribute_container_find_class_device
ffffffff814220f7 r __kstrtab_attribute_container_unregister
ffffffff81422116 r __kstrtab_attribute_container_register
ffffffff81422133 r __kstrtab_attribute_container_classdev_to_container
ffffffff8142215d r __kstrtab_transport_destroy_device
ffffffff81422176 r __kstrtab_transport_remove_device
ffffffff8142218e r __kstrtab_transport_configure_device
ffffffff814221a9 r __kstrtab_transport_add_device
ffffffff814221be r __kstrtab_transport_setup_device
ffffffff814221d5 r __kstrtab_anon_transport_class_unregister
ffffffff814221f5 r __kstrtab_anon_transport_class_register
ffffffff81422213 r __kstrtab_transport_class_unregister
ffffffff8142222e r __kstrtab_transport_class_register
ffffffff81422247 r __kstrtab_power_group_name
ffffffff81422258 r __kstrtab_pm_generic_restore
ffffffff8142226b r __kstrtab_pm_generic_restore_early
ffffffff81422284 r __kstrtab_pm_generic_restore_noirq
ffffffff8142229d r __kstrtab_pm_generic_resume
ffffffff814222af r __kstrtab_pm_generic_resume_early
ffffffff814222c7 r __kstrtab_pm_generic_resume_noirq
ffffffff814222df r __kstrtab_pm_generic_thaw
ffffffff814222ef r __kstrtab_pm_generic_thaw_early
ffffffff81422305 r __kstrtab_pm_generic_thaw_noirq
ffffffff8142231b r __kstrtab_pm_generic_poweroff
ffffffff8142232f r __kstrtab_pm_generic_poweroff_late
ffffffff81422348 r __kstrtab_pm_generic_poweroff_noirq
ffffffff81422362 r __kstrtab_pm_generic_freeze
ffffffff81422374 r __kstrtab_pm_generic_freeze_late
ffffffff8142238b r __kstrtab_pm_generic_freeze_noirq
ffffffff814223a3 r __kstrtab_pm_generic_suspend
ffffffff814223b6 r __kstrtab_pm_generic_suspend_late
ffffffff814223ce r __kstrtab_pm_generic_suspend_noirq
ffffffff814223e7 r __kstrtab_pm_generic_runtime_resume
ffffffff81422401 r __kstrtab_pm_generic_runtime_suspend
ffffffff8142241c r __kstrtab_pm_generic_runtime_idle
ffffffff81422434 r __kstrtab_dev_pm_put_subsys_data
ffffffff8142244b r __kstrtab_dev_pm_get_subsys_data
ffffffff81422462 r __kstrtab_dev_pm_qos_hide_latency_limit
ffffffff81422480 r __kstrtab_dev_pm_qos_expose_latency_limit
ffffffff814224a0 r __kstrtab_dev_pm_qos_add_ancestor_request
ffffffff814224c0 r __kstrtab_dev_pm_qos_remove_global_notifier
ffffffff814224e2 r __kstrtab_dev_pm_qos_add_global_notifier
ffffffff81422501 r __kstrtab_dev_pm_qos_remove_notifier
ffffffff8142251c r __kstrtab_dev_pm_qos_add_notifier
ffffffff81422534 r __kstrtab_dev_pm_qos_remove_request
ffffffff8142254e r __kstrtab_dev_pm_qos_update_request
ffffffff81422568 r __kstrtab_dev_pm_qos_add_request
ffffffff8142257f r __kstrtab_device_pm_wait_for_dev
ffffffff81422596 r __kstrtab___suspend_report_result
ffffffff814225ae r __kstrtab_dpm_suspend_start
ffffffff814225c0 r __kstrtab_dpm_suspend_end
ffffffff814225d0 r __kstrtab_dpm_resume_end
ffffffff814225df r __kstrtab_dpm_resume_start
ffffffff814225f0 r __kstrtab_pm_wakeup_event
ffffffff81422600 r __kstrtab___pm_wakeup_event
ffffffff81422612 r __kstrtab_pm_relax
ffffffff8142261b r __kstrtab___pm_relax
ffffffff81422626 r __kstrtab_pm_stay_awake
ffffffff81422634 r __kstrtab___pm_stay_awake
ffffffff81422644 r __kstrtab_device_set_wakeup_enable
ffffffff8142265d r __kstrtab_device_init_wakeup
ffffffff81422670 r __kstrtab_device_set_wakeup_capable
ffffffff8142268a r __kstrtab_device_wakeup_disable
ffffffff814226a0 r __kstrtab_device_wakeup_enable
ffffffff814226b5 r __kstrtab_wakeup_source_unregister
ffffffff814226ce r __kstrtab_wakeup_source_register
ffffffff814226e5 r __kstrtab_wakeup_source_remove
ffffffff814226fa r __kstrtab_wakeup_source_add
ffffffff8142270c r __kstrtab_wakeup_source_destroy
ffffffff81422722 r __kstrtab_wakeup_source_drop
ffffffff81422735 r __kstrtab_wakeup_source_create
ffffffff8142274a r __kstrtab_wakeup_source_prepare
ffffffff81422760 r __kstrtab___pm_runtime_use_autosuspend
ffffffff8142277d r __kstrtab_pm_runtime_set_autosuspend_delay
ffffffff8142279e r __kstrtab_pm_runtime_irq_safe
ffffffff814227b2 r __kstrtab_pm_runtime_no_callbacks
ffffffff814227ca r __kstrtab_pm_runtime_allow
ffffffff814227db r __kstrtab_pm_runtime_forbid
ffffffff814227ed r __kstrtab_pm_runtime_enable
ffffffff814227ff r __kstrtab___pm_runtime_disable
ffffffff81422814 r __kstrtab_pm_runtime_barrier
ffffffff81422827 r __kstrtab___pm_runtime_set_status
ffffffff8142283f r __kstrtab___pm_runtime_resume
ffffffff81422853 r __kstrtab___pm_runtime_suspend
ffffffff81422868 r __kstrtab___pm_runtime_idle
ffffffff8142287a r __kstrtab_pm_schedule_suspend
ffffffff8142288e r __kstrtab_pm_runtime_autosuspend_expiration
ffffffff814228b0 r __kstrtab_dmam_free_noncoherent
ffffffff814228c6 r __kstrtab_dmam_alloc_noncoherent
ffffffff814228dd r __kstrtab_dmam_free_coherent
ffffffff814228f0 r __kstrtab_dmam_alloc_coherent
ffffffff81422904 r __kstrtab_request_firmware_nowait
ffffffff8142291c r __kstrtab_request_firmware
ffffffff8142292d r __kstrtab_release_firmware
ffffffff8142293e r __kstrtab_hypervisor_kobj
ffffffff8142294e r __kstrtab_scsi_device_lookup
ffffffff81422961 r __kstrtab___scsi_device_lookup
ffffffff81422976 r __kstrtab_scsi_device_lookup_by_target
ffffffff81422993 r __kstrtab___scsi_device_lookup_by_target
ffffffff814229b2 r __kstrtab___starget_for_each_device
ffffffff814229cc r __kstrtab_starget_for_each_device
ffffffff814229e4 r __kstrtab___scsi_iterate_devices
ffffffff814229fb r __kstrtab_scsi_device_put
ffffffff81422a0b r __kstrtab_scsi_device_get
ffffffff81422a1b r __kstrtab_scsi_get_vpd_page
ffffffff81422a2d r __kstrtab_scsi_track_queue_full
ffffffff81422a43 r __kstrtab_scsi_adjust_queue_depth
ffffffff81422a5b r __kstrtab_scsi_finish_command
ffffffff81422a6f r __kstrtab_scsi_cmd_get_serial
ffffffff81422a83 r __kstrtab_scsi_free_command
ffffffff81422a95 r __kstrtab_scsi_allocate_command
ffffffff81422aab r __kstrtab_scsi_put_command
ffffffff81422abc r __kstrtab___scsi_put_command
ffffffff81422acf r __kstrtab_scsi_get_command
ffffffff81422ae0 r __kstrtab___scsi_get_command
ffffffff81422af3 r __kstrtab_scsi_device_type
ffffffff81422b04 r __kstrtab_scsi_logging_level
ffffffff81422b17 r __kstrtab_scsi_flush_work
ffffffff81422b27 r __kstrtab_scsi_queue_work
ffffffff81422b37 r __kstrtab_scsi_is_host_device
ffffffff81422b4b r __kstrtab_scsi_host_put
ffffffff81422b59 r __kstrtab_scsi_host_get
ffffffff81422b67 r __kstrtab_scsi_host_lookup
ffffffff81422b78 r __kstrtab_scsi_unregister
ffffffff81422b88 r __kstrtab_scsi_register
ffffffff81422b96 r __kstrtab_scsi_host_alloc
ffffffff81422ba6 r __kstrtab_scsi_add_host_with_dma
ffffffff81422bbd r __kstrtab_scsi_remove_host
ffffffff81422bce r __kstrtab_scsi_host_set_state
ffffffff81422be2 r __kstrtab_scsi_nonblockable_ioctl
ffffffff81422bfa r __kstrtab_scsi_ioctl
ffffffff81422c05 r __kstrtab_scsi_set_medium_removal
ffffffff81422c1d r __kstrtab_scsi_print_result
ffffffff81422c2f r __kstrtab_scsi_show_result
ffffffff81422c40 r __kstrtab_scsi_print_sense
ffffffff81422c51 r __kstrtab___scsi_print_sense
ffffffff81422c64 r __kstrtab_scsi_cmd_print_sense_hdr
ffffffff81422c7d r __kstrtab_scsi_print_sense_hdr
ffffffff81422c92 r __kstrtab_scsi_show_sense_hdr
ffffffff81422ca6 r __kstrtab_scsi_show_extd_sense
ffffffff81422cbb r __kstrtab_scsi_extd_sense_format
ffffffff81422cd2 r __kstrtab_scsi_sense_key_string
ffffffff81422ce8 r __kstrtab_scsi_print_status
ffffffff81422cfa r __kstrtab_scsi_print_command
ffffffff81422d0d r __kstrtab___scsi_print_command
ffffffff81422d22 r __kstrtab_scsi_partsize
ffffffff81422d30 r __kstrtab_scsicam_bios_param
ffffffff81422d43 r __kstrtab_scsi_bios_ptable
ffffffff81422d54 r __kstrtab_scsi_build_sense_buffer
ffffffff81422d6c r __kstrtab_scsi_get_sense_info_fld
ffffffff81422d84 r __kstrtab_scsi_sense_desc_find
ffffffff81422d99 r __kstrtab_scsi_command_normalize_sense
ffffffff81422db6 r __kstrtab_scsi_normalize_sense
ffffffff81422dcb r __kstrtab_scsi_reset_provider
ffffffff81422ddf r __kstrtab_scsi_report_device_reset
ffffffff81422df8 r __kstrtab_scsi_report_bus_reset
ffffffff81422e0e r __kstrtab_scsi_eh_flush_done_q
ffffffff81422e23 r __kstrtab_scsi_eh_ready_devs
ffffffff81422e36 r __kstrtab_scsi_eh_get_sense
ffffffff81422e48 r __kstrtab_scsi_eh_finish_cmd
ffffffff81422e5b r __kstrtab_scsi_eh_restore_cmnd
ffffffff81422e70 r __kstrtab_scsi_eh_prep_cmnd
ffffffff81422e82 r __kstrtab_scsi_block_when_processing_errors
ffffffff81422ea4 r __kstrtab_scsi_schedule_eh
ffffffff81422eb5 r __kstrtab_scsi_kunmap_atomic_sg
ffffffff81422ecb r __kstrtab_scsi_kmap_atomic_sg
ffffffff81422edf r __kstrtab_scsi_target_unblock
ffffffff81422ef3 r __kstrtab_scsi_target_block
ffffffff81422f05 r __kstrtab_scsi_internal_device_unblock
ffffffff81422f22 r __kstrtab_scsi_internal_device_block
ffffffff81422f3d r __kstrtab_scsi_target_resume
ffffffff81422f50 r __kstrtab_scsi_target_quiesce
ffffffff81422f64 r __kstrtab_scsi_device_resume
ffffffff81422f77 r __kstrtab_scsi_device_quiesce
ffffffff81422f8b r __kstrtab_sdev_evt_send_simple
ffffffff81422fa0 r __kstrtab_sdev_evt_alloc
ffffffff81422faf r __kstrtab_sdev_evt_send
ffffffff81422fbd r __kstrtab_scsi_device_set_state
ffffffff81422fd3 r __kstrtab_scsi_test_unit_ready
ffffffff81422fe8 r __kstrtab_scsi_mode_sense
ffffffff81422ff8 r __kstrtab_scsi_mode_select
ffffffff81423009 r __kstrtab_scsi_unblock_requests
ffffffff8142301f r __kstrtab_scsi_block_requests
ffffffff81423033 r __kstrtab___scsi_alloc_queue
ffffffff81423046 r __kstrtab_scsi_calculate_bounce_limit
ffffffff81423062 r __kstrtab_scsi_prep_fn
ffffffff8142306f r __kstrtab_scsi_prep_return
ffffffff81423080 r __kstrtab_scsi_prep_state_check
ffffffff81423096 r __kstrtab_scsi_setup_fs_cmnd
ffffffff814230a9 r __kstrtab_scsi_setup_blk_pc_cmnd
ffffffff814230c0 r __kstrtab_scsi_init_io
ffffffff814230cd r __kstrtab_scsi_release_buffers
ffffffff814230e2 r __kstrtab_scsi_execute_req
ffffffff814230f3 r __kstrtab_scsi_execute
ffffffff81423100 r __kstrtab_scsi_dma_unmap
ffffffff8142310f r __kstrtab_scsi_dma_map
ffffffff8142311c r __kstrtab_scsi_free_host_dev
ffffffff8142312f r __kstrtab_scsi_get_host_dev
ffffffff81423141 r __kstrtab_scsi_scan_host
ffffffff81423150 r __kstrtab_scsi_scan_target
ffffffff81423161 r __kstrtab_scsi_rescan_device
ffffffff81423174 r __kstrtab_scsi_add_device
ffffffff81423184 r __kstrtab___scsi_add_device
ffffffff81423196 r __kstrtab_int_to_scsilun
ffffffff814231a5 r __kstrtab_scsilun_to_int
ffffffff814231b4 r __kstrtab_scsi_is_target_device
ffffffff814231ca r __kstrtab_scsi_complete_async_scans
ffffffff814231e4 r __kstrtab_scsi_is_sdev_device
ffffffff814231f8 r __kstrtab_scsi_register_interface
ffffffff81423210 r __kstrtab_scsi_register_driver
ffffffff81423225 r __kstrtab_scsi_remove_target
ffffffff81423238 r __kstrtab_scsi_remove_device
ffffffff8142324b r __kstrtab_scsi_bus_type
ffffffff81423259 r __kstrtab_scsi_dev_info_remove_list
ffffffff81423273 r __kstrtab_scsi_dev_info_add_list
ffffffff8142328a r __kstrtab_scsi_get_device_flags_keyed
ffffffff814232a6 r __kstrtab_scsi_dev_info_list_del_keyed
ffffffff814232c3 r __kstrtab_scsi_dev_info_list_add_keyed
ffffffff814232e0 r __kstrtab_scsi_autopm_put_device
ffffffff814232f7 r __kstrtab_scsi_autopm_get_device
ffffffff8142330e r __kstrtab_ata_cable_sata
ffffffff8142331d r __kstrtab_ata_cable_ignore
ffffffff8142332e r __kstrtab_ata_cable_unknown
ffffffff81423340 r __kstrtab_ata_cable_80wire
ffffffff81423351 r __kstrtab_ata_cable_40wire
ffffffff81423362 r __kstrtab_ata_std_error_handler
ffffffff81423378 r __kstrtab_ata_do_eh
ffffffff81423382 r __kstrtab_ata_eh_analyze_ncq_error
ffffffff8142339b r __kstrtab_ata_eh_qc_retry
ffffffff814233ab r __kstrtab_ata_eh_qc_complete
ffffffff814233be r __kstrtab_ata_eh_thaw_port
ffffffff814233cf r __kstrtab_ata_eh_freeze_port
ffffffff814233e2 r __kstrtab_sata_async_notification
ffffffff814233fa r __kstrtab_ata_port_freeze
ffffffff8142340a r __kstrtab_ata_port_abort
ffffffff81423419 r __kstrtab_ata_link_abort
ffffffff81423428 r __kstrtab_ata_port_schedule_eh
ffffffff8142343d r __kstrtab_ata_port_pbar_desc
ffffffff81423450 r __kstrtab_ata_port_desc
ffffffff8142345e r __kstrtab_ata_ehi_clear_desc
ffffffff81423471 r __kstrtab_ata_ehi_push_desc
ffffffff81423483 r __kstrtab___ata_ehi_push_desc
ffffffff81423497 r __kstrtab_ata_pci_device_resume
ffffffff814234ad r __kstrtab_ata_pci_device_suspend
ffffffff814234c4 r __kstrtab_ata_pci_device_do_resume
ffffffff814234dd r __kstrtab_ata_pci_device_do_suspend
ffffffff814234f7 r __kstrtab_ata_pci_remove_one
ffffffff8142350a r __kstrtab_pci_test_config_bits
ffffffff8142351f r __kstrtab_ata_timing_cycle2mode
ffffffff81423535 r __kstrtab_ata_timing_merge
ffffffff81423546 r __kstrtab_ata_timing_compute
ffffffff81423559 r __kstrtab_ata_timing_find_mode
ffffffff8142356e r __kstrtab_ata_pio_need_iordy
ffffffff81423581 r __kstrtab_ata_scsi_simulate
ffffffff81423593 r __kstrtab_ata_do_dev_read_id
ffffffff814235a6 r __kstrtab_ata_id_c_string
ffffffff814235b6 r __kstrtab_ata_id_string
ffffffff814235c4 r __kstrtab_ata_host_resume
ffffffff814235d4 r __kstrtab_ata_host_suspend
ffffffff814235e5 r __kstrtab_ata_link_offline
ffffffff814235f6 r __kstrtab_ata_link_online
ffffffff81423606 r __kstrtab_sata_scr_write_flush
ffffffff8142361b r __kstrtab_sata_scr_write
ffffffff8142362a r __kstrtab_sata_scr_read
ffffffff81423638 r __kstrtab_sata_scr_valid
ffffffff81423647 r __kstrtab___ata_change_queue_depth
ffffffff81423660 r __kstrtab_ata_scsi_change_queue_depth
ffffffff8142367c r __kstrtab_ata_scsi_slave_destroy
ffffffff81423693 r __kstrtab_ata_scsi_slave_config
ffffffff814236a9 r __kstrtab_ata_scsi_queuecmd
ffffffff814236bb r __kstrtab_ata_wait_register
ffffffff814236cd r __kstrtab_ata_msleep
ffffffff814236d8 r __kstrtab_ata_ratelimit
ffffffff814236e6 r __kstrtab_ata_dev_pair
ffffffff814236f3 r __kstrtab_ata_dev_classify
ffffffff81423704 r __kstrtab_ata_std_postreset
ffffffff81423716 r __kstrtab_sata_std_hardreset
ffffffff81423729 r __kstrtab_sata_link_hardreset
ffffffff8142373d r __kstrtab_ata_std_prereset
ffffffff8142374e r __kstrtab_sata_link_scr_lpm
ffffffff81423760 r __kstrtab_sata_link_resume
ffffffff81423771 r __kstrtab_sata_link_debounce
ffffffff81423784 r __kstrtab_ata_wait_after_reset
ffffffff81423799 r __kstrtab_sata_set_spd
ffffffff814237a6 r __kstrtab_ata_dev_disable
ffffffff814237b6 r __kstrtab_ata_noop_qc_prep
ffffffff814237c7 r __kstrtab_ata_std_qc_defer
ffffffff814237d8 r __kstrtab_ata_do_set_mode
ffffffff814237e8 r __kstrtab_ata_id_xfermask
ffffffff814237f8 r __kstrtab_ata_mode_string
ffffffff81423808 r __kstrtab_ata_xfer_mode2shift
ffffffff8142381c r __kstrtab_ata_xfer_mode2mask
ffffffff8142382f r __kstrtab_ata_xfer_mask2mode
ffffffff81423842 r __kstrtab_ata_unpack_xfermask
ffffffff81423856 r __kstrtab_ata_pack_xfermask
ffffffff81423868 r __kstrtab_ata_tf_from_fis
ffffffff81423878 r __kstrtab_ata_tf_to_fis
ffffffff81423886 r __kstrtab_atapi_cmd_type
ffffffff81423895 r __kstrtab_ata_qc_complete_multiple
ffffffff814238ae r __kstrtab_ata_qc_complete
ffffffff814238be r __kstrtab_ata_sg_init
ffffffff814238ca r __kstrtab_ata_host_detach
ffffffff814238da r __kstrtab_ata_host_activate
ffffffff814238ec r __kstrtab_ata_host_register
ffffffff814238fe r __kstrtab_ata_host_start
ffffffff8142390d r __kstrtab_ata_slave_link_init
ffffffff81423921 r __kstrtab_ata_host_alloc_pinfo
ffffffff81423936 r __kstrtab_ata_host_alloc
ffffffff81423945 r __kstrtab_ata_host_init
ffffffff81423953 r __kstrtab_ata_scsi_unlock_native_capacity
ffffffff81423973 r __kstrtab_ata_std_bios_param
ffffffff81423986 r __kstrtab_ata_dev_next
ffffffff81423993 r __kstrtab_ata_link_next
ffffffff814239a1 r __kstrtab_ata_dummy_port_info
ffffffff814239b5 r __kstrtab_ata_dummy_port_ops
ffffffff814239c8 r __kstrtab_sata_port_ops
ffffffff814239d6 r __kstrtab_ata_base_port_ops
ffffffff814239e8 r __kstrtab_sata_deb_timing_long
ffffffff814239fd r __kstrtab_sata_deb_timing_hotplug
ffffffff81423a15 r __kstrtab_sata_deb_timing_normal
ffffffff81423a2c r __kstrtab_ata_print_version
ffffffff81423a3e r __kstrtab_ata_dev_printk
ffffffff81423a4d r __kstrtab_ata_link_printk
ffffffff81423a5d r __kstrtab_ata_port_printk
ffffffff81423a6d r __kstrtab_ata_sas_queuecmd
ffffffff81423a7e r __kstrtab_ata_sas_slave_configure
ffffffff81423a96 r __kstrtab_ata_sas_port_destroy
ffffffff81423aab r __kstrtab_ata_sas_port_init
ffffffff81423abd r __kstrtab_ata_sas_sync_probe
ffffffff81423ad0 r __kstrtab_ata_sas_async_probe
ffffffff81423ae4 r __kstrtab_ata_sas_port_stop
ffffffff81423af6 r __kstrtab_ata_sas_port_start
ffffffff81423b09 r __kstrtab_ata_sas_port_alloc
ffffffff81423b1c r __kstrtab_ata_scsi_ioctl
ffffffff81423b2b r __kstrtab_ata_sas_scsi_ioctl
ffffffff81423b3e r __kstrtab_ata_common_sdev_attrs
ffffffff81423b54 r __kstrtab_dev_attr_sw_activity
ffffffff81423b69 r __kstrtab_dev_attr_em_message_type
ffffffff81423b82 r __kstrtab_dev_attr_em_message
ffffffff81423b96 r __kstrtab_dev_attr_unload_heads
ffffffff81423bac r __kstrtab_dev_attr_link_power_management_policy
ffffffff81423bd2 r __kstrtab_ata_port_wait_eh
ffffffff81423be3 r __kstrtab_ata_scsi_port_error_handler
ffffffff81423bff r __kstrtab_ata_scsi_cmd_error_handler
ffffffff81423c1a r __kstrtab_ata_pci_bmdma_init_one
ffffffff81423c31 r __kstrtab_ata_pci_bmdma_prepare_host
ffffffff81423c4c r __kstrtab_ata_pci_bmdma_init
ffffffff81423c5f r __kstrtab_ata_pci_bmdma_clear_simplex
ffffffff81423c7b r __kstrtab_ata_bmdma_port_start32
ffffffff81423c92 r __kstrtab_ata_bmdma_port_start
ffffffff81423ca7 r __kstrtab_ata_bmdma_status
ffffffff81423cb8 r __kstrtab_ata_bmdma_stop
ffffffff81423cc7 r __kstrtab_ata_bmdma_start
ffffffff81423cd7 r __kstrtab_ata_bmdma_setup
ffffffff81423ce7 r __kstrtab_ata_bmdma_irq_clear
ffffffff81423cfb r __kstrtab_ata_bmdma_post_internal_cmd
ffffffff81423d17 r __kstrtab_ata_bmdma_error_handler
ffffffff81423d2f r __kstrtab_ata_bmdma_interrupt
ffffffff81423d43 r __kstrtab_ata_bmdma_port_intr
ffffffff81423d57 r __kstrtab_ata_bmdma_qc_issue
ffffffff81423d6a r __kstrtab_ata_bmdma_dumb_qc_prep
ffffffff81423d81 r __kstrtab_ata_bmdma_qc_prep
ffffffff81423d93 r __kstrtab_ata_bmdma32_port_ops
ffffffff81423da8 r __kstrtab_ata_bmdma_port_ops
ffffffff81423dbb r __kstrtab_ata_pci_sff_init_one
ffffffff81423dd0 r __kstrtab_ata_pci_sff_activate_host
ffffffff81423dea r __kstrtab_ata_pci_sff_prepare_host
ffffffff81423e03 r __kstrtab_ata_pci_sff_init_host
ffffffff81423e19 r __kstrtab_ata_sff_std_ports
ffffffff81423e2b r __kstrtab_ata_sff_error_handler
ffffffff81423e41 r __kstrtab_ata_sff_drain_fifo
ffffffff81423e54 r __kstrtab_ata_sff_postreset
ffffffff81423e66 r __kstrtab_sata_sff_hardreset
ffffffff81423e79 r __kstrtab_ata_sff_softreset
ffffffff81423e8b r __kstrtab_ata_sff_wait_after_reset
ffffffff81423ea4 r __kstrtab_ata_sff_dev_classify
ffffffff81423eb9 r __kstrtab_ata_sff_prereset
ffffffff81423eca r __kstrtab_ata_sff_thaw
ffffffff81423ed7 r __kstrtab_ata_sff_freeze
ffffffff81423ee6 r __kstrtab_ata_sff_lost_interrupt
ffffffff81423efd r __kstrtab_ata_sff_interrupt
ffffffff81423f0f r __kstrtab_ata_sff_port_intr
ffffffff81423f21 r __kstrtab_ata_sff_qc_fill_rtf
ffffffff81423f35 r __kstrtab_ata_sff_qc_issue
ffffffff81423f46 r __kstrtab_ata_sff_queue_pio_task
ffffffff81423f5d r __kstrtab_ata_sff_queue_delayed_work
ffffffff81423f78 r __kstrtab_ata_sff_queue_work
ffffffff81423f8b r __kstrtab_ata_sff_hsm_move
ffffffff81423f9c r __kstrtab_ata_sff_data_xfer_noirq
ffffffff81423fb4 r __kstrtab_ata_sff_data_xfer32
ffffffff81423fc8 r __kstrtab_ata_sff_data_xfer
ffffffff81423fda r __kstrtab_ata_sff_exec_command
ffffffff81423fef r __kstrtab_ata_sff_tf_read
ffffffff81423fff r __kstrtab_ata_sff_tf_load
ffffffff8142400f r __kstrtab_ata_sff_irq_on
ffffffff8142401e r __kstrtab_ata_sff_dev_select
ffffffff81424031 r __kstrtab_ata_sff_wait_ready
ffffffff81424044 r __kstrtab_ata_sff_busy_sleep
ffffffff81424057 r __kstrtab_ata_sff_dma_pause
ffffffff81424069 r __kstrtab_ata_sff_pause
ffffffff81424077 r __kstrtab_ata_sff_check_status
ffffffff8142408c r __kstrtab_ata_sff_port_ops
ffffffff8142409d r __kstrtab_ata_acpi_cbl_80wire
ffffffff814240b1 r __kstrtab_ata_acpi_gtm_xfermask
ffffffff814240c7 r __kstrtab_ata_acpi_stm
ffffffff814240d4 r __kstrtab_ata_acpi_gtm
ffffffff814240e1 r __kstrtab_usb_enable_xhci_ports
ffffffff814240f7 r __kstrtab_usb_is_intel_switchable_xhci
ffffffff81424114 r __kstrtab_uhci_check_and_reset_hc
ffffffff8142412c r __kstrtab_uhci_reset_hc
ffffffff8142413a r __kstrtab_usb_amd_dev_put
ffffffff8142414a r __kstrtab_usb_amd_quirk_pll_enable
ffffffff81424163 r __kstrtab_usb_amd_quirk_pll_disable
ffffffff8142417d r __kstrtab_usb_amd_find_chipset_info
ffffffff81424197 r __kstrtab_serio_interrupt
ffffffff814241a7 r __kstrtab_serio_close
ffffffff814241b3 r __kstrtab_serio_open
ffffffff814241be r __kstrtab_serio_unregister_driver
ffffffff814241d6 r __kstrtab___serio_register_driver
ffffffff814241ee r __kstrtab_serio_unregister_child_port
ffffffff8142420a r __kstrtab_serio_unregister_port
ffffffff81424220 r __kstrtab___serio_register_port
ffffffff81424236 r __kstrtab_serio_reconnect
ffffffff81424246 r __kstrtab_serio_rescan
ffffffff81424253 r __kstrtab_i8042_check_port_owner
ffffffff8142426a r __kstrtab_i8042_command
ffffffff81424278 r __kstrtab_i8042_remove_filter
ffffffff8142428c r __kstrtab_i8042_install_filter
ffffffff814242a1 r __kstrtab_i8042_unlock_chip
ffffffff814242b3 r __kstrtab_i8042_lock_chip
ffffffff814242c3 r __kstrtab_ps2_cmd_aborted
ffffffff814242d3 r __kstrtab_ps2_handle_response
ffffffff814242e7 r __kstrtab_ps2_handle_ack
ffffffff814242f6 r __kstrtab_ps2_init
ffffffff814242ff r __kstrtab_ps2_command
ffffffff8142430b r __kstrtab___ps2_command
ffffffff81424319 r __kstrtab_ps2_is_keyboard_id
ffffffff8142432c r __kstrtab_ps2_drain
ffffffff81424336 r __kstrtab_ps2_end_command
ffffffff81424346 r __kstrtab_ps2_begin_command
ffffffff81424358 r __kstrtab_ps2_sendbyte
ffffffff81424365 r __kstrtab_input_unregister_handle
ffffffff8142437d r __kstrtab_input_register_handle
ffffffff81424393 r __kstrtab_input_handler_for_each_handle
ffffffff814243b1 r __kstrtab_input_unregister_handler
ffffffff814243ca r __kstrtab_input_register_handler
ffffffff814243e1 r __kstrtab_input_unregister_device
ffffffff814243f9 r __kstrtab_input_register_device
ffffffff8142440f r __kstrtab_input_set_capability
ffffffff81424424 r __kstrtab_input_free_device
ffffffff81424436 r __kstrtab_input_allocate_device
ffffffff8142444c r __kstrtab_input_class
ffffffff81424458 r __kstrtab_input_reset_device
ffffffff8142446b r __kstrtab_input_set_keycode
ffffffff8142447d r __kstrtab_input_get_keycode
ffffffff8142448f r __kstrtab_input_scancode_to_scalar
ffffffff814244a8 r __kstrtab_input_close_device
ffffffff814244bb r __kstrtab_input_flush_device
ffffffff814244ce r __kstrtab_input_open_device
ffffffff814244e0 r __kstrtab_input_release_device
ffffffff814244f5 r __kstrtab_input_grab_device
ffffffff81424507 r __kstrtab_input_set_abs_params
ffffffff8142451c r __kstrtab_input_alloc_absinfo
ffffffff81424530 r __kstrtab_input_inject_event
ffffffff81424543 r __kstrtab_input_event
ffffffff8142454f r __kstrtab_input_ff_effect_from_user
ffffffff81424569 r __kstrtab_input_event_to_user
ffffffff8142457d r __kstrtab_input_event_from_user
ffffffff81424593 r __kstrtab_input_mt_report_pointer_emulation
ffffffff814245b5 r __kstrtab_input_mt_report_finger_count
ffffffff814245d2 r __kstrtab_input_mt_report_slot_state
ffffffff814245ed r __kstrtab_input_mt_destroy_slots
ffffffff81424604 r __kstrtab_input_mt_init_slots
ffffffff81424618 r __kstrtab_input_ff_destroy
ffffffff81424629 r __kstrtab_input_ff_create
ffffffff81424639 r __kstrtab_input_ff_event
ffffffff81424648 r __kstrtab_input_ff_erase
ffffffff81424657 r __kstrtab_input_ff_upload
ffffffff81424667 r __kstrtab_rtc_ktime_to_tm
ffffffff81424677 r __kstrtab_rtc_tm_to_ktime
ffffffff81424687 r __kstrtab_rtc_tm_to_time
ffffffff81424696 r __kstrtab_rtc_valid_tm
ffffffff814246a3 r __kstrtab_rtc_time_to_tm
ffffffff814246b2 r __kstrtab_rtc_year_days
ffffffff814246c0 r __kstrtab_rtc_month_days
ffffffff814246cf r __kstrtab_rtc_device_unregister
ffffffff814246e5 r __kstrtab_rtc_device_register
ffffffff814246f9 r __kstrtab_rtc_irq_set_freq
ffffffff8142470a r __kstrtab_rtc_irq_set_state
ffffffff8142471c r __kstrtab_rtc_irq_unregister
ffffffff8142472f r __kstrtab_rtc_irq_register
ffffffff81424740 r __kstrtab_rtc_class_close
ffffffff81424750 r __kstrtab_rtc_class_open
ffffffff8142475f r __kstrtab_rtc_update_irq
ffffffff8142476e r __kstrtab_rtc_update_irq_enable
ffffffff81424784 r __kstrtab_rtc_alarm_irq_enable
ffffffff81424799 r __kstrtab_rtc_initialize_alarm
ffffffff814247ae r __kstrtab_rtc_set_alarm
ffffffff814247bc r __kstrtab_rtc_read_alarm
ffffffff814247cb r __kstrtab_rtc_set_mmss
ffffffff814247d8 r __kstrtab_rtc_set_time
ffffffff814247e5 r __kstrtab_rtc_read_time
ffffffff814247f3 r __kstrtab_watchdog_unregister_device
ffffffff8142480e r __kstrtab_watchdog_register_device
ffffffff81424827 r __kstrtab_edac_put_sysfs_subsys
ffffffff8142483d r __kstrtab_edac_get_sysfs_subsys
ffffffff81424853 r __kstrtab_edac_subsys
ffffffff8142485f r __kstrtab_edac_atomic_assert_error
ffffffff81424878 r __kstrtab_edac_handler_set
ffffffff81424889 r __kstrtab_edac_err_assert
ffffffff81424899 r __kstrtab_edac_handlers
ffffffff814248a7 r __kstrtab_edac_op_state
ffffffff814248b5 r __kstrtab_cpufreq_unregister_driver
ffffffff814248cf r __kstrtab_cpufreq_register_driver
ffffffff814248e7 r __kstrtab_cpufreq_update_policy
ffffffff814248fd r __kstrtab_cpufreq_get_policy
ffffffff81424910 r __kstrtab_cpufreq_unregister_governor
ffffffff8142492c r __kstrtab_cpufreq_register_governor
ffffffff81424946 r __kstrtab___cpufreq_driver_getavg
ffffffff8142495e r __kstrtab_cpufreq_driver_target
ffffffff81424974 r __kstrtab___cpufreq_driver_target
ffffffff8142498c r __kstrtab_cpufreq_unregister_notifier
ffffffff814249a8 r __kstrtab_cpufreq_register_notifier
ffffffff814249c2 r __kstrtab_cpufreq_get
ffffffff814249ce r __kstrtab_cpufreq_quick_get_max
ffffffff814249e4 r __kstrtab_cpufreq_quick_get
ffffffff814249f6 r __kstrtab_cpufreq_global_kobject
ffffffff81424a0d r __kstrtab_cpufreq_notify_transition
ffffffff81424a27 r __kstrtab_cpufreq_cpu_put
ffffffff81424a37 r __kstrtab_cpufreq_cpu_get
ffffffff81424a47 r __kstrtab_cpufreq_frequency_get_table
ffffffff81424a63 r __kstrtab_cpufreq_frequency_table_put_attr
ffffffff81424a84 r __kstrtab_cpufreq_frequency_table_get_attr
ffffffff81424aa5 r __kstrtab_cpufreq_freq_attr_scaling_available_freqs
ffffffff81424acf r __kstrtab_cpufreq_frequency_table_target
ffffffff81424aee r __kstrtab_cpufreq_frequency_table_verify
ffffffff81424b0d r __kstrtab_cpufreq_frequency_table_cpuinfo
ffffffff81424b2d r __kstrtab_cpuidle_unregister_device
ffffffff81424b47 r __kstrtab_cpuidle_register_device
ffffffff81424b5f r __kstrtab_cpuidle_disable_device
ffffffff81424b76 r __kstrtab_cpuidle_enable_device
ffffffff81424b8c r __kstrtab_cpuidle_resume_and_unlock
ffffffff81424ba6 r __kstrtab_cpuidle_pause_and_lock
ffffffff81424bbd r __kstrtab_cpuidle_unregister_driver
ffffffff81424bd7 r __kstrtab_cpuidle_get_driver
ffffffff81424bea r __kstrtab_cpuidle_register_driver
ffffffff81424c02 r __kstrtab_dmi_match
ffffffff81424c0c r __kstrtab_dmi_walk
ffffffff81424c15 r __kstrtab_dmi_get_date
ffffffff81424c22 r __kstrtab_dmi_find_device
ffffffff81424c32 r __kstrtab_dmi_name_in_vendors
ffffffff81424c46 r __kstrtab_dmi_get_system_info
ffffffff81424c5a r __kstrtab_dmi_first_match
ffffffff81424c6a r __kstrtab_dmi_check_system
ffffffff81424c7b r __kstrtab_ibft_addr
ffffffff81424c85 r __kstrtab_i8253_lock
ffffffff81424c90 r __kstrtab_hid_check_keys_pressed
ffffffff81424ca7 r __kstrtab_hid_unregister_driver
ffffffff81424cbd r __kstrtab___hid_register_driver
ffffffff81424cd3 r __kstrtab_hid_destroy_device
ffffffff81424ce6 r __kstrtab_hid_allocate_device
ffffffff81424cfa r __kstrtab_hid_add_device
ffffffff81424d09 r __kstrtab_hid_disconnect
ffffffff81424d18 r __kstrtab_hid_connect
ffffffff81424d24 r __kstrtab_hid_input_report
ffffffff81424d35 r __kstrtab_hid_report_raw_event
ffffffff81424d4a r __kstrtab_hid_set_field
ffffffff81424d58 r __kstrtab_hid_output_report
ffffffff81424d6a r __kstrtab_hid_parse_report
ffffffff81424d7b r __kstrtab_hid_register_report
ffffffff81424d8f r __kstrtab_hid_debug
ffffffff81424d99 r __kstrtab_hidinput_disconnect
ffffffff81424dad r __kstrtab_hidinput_connect
ffffffff81424dbe r __kstrtab_hidinput_count_leds
ffffffff81424dd2 r __kstrtab_hidinput_get_led_field
ffffffff81424de9 r __kstrtab_hidinput_find_field
ffffffff81424dfd r __kstrtab_hidinput_report_event
ffffffff81424e13 r __kstrtab_iommu_device_group
ffffffff81424e26 r __kstrtab_iommu_unmap
ffffffff81424e32 r __kstrtab_iommu_map
ffffffff81424e3c r __kstrtab_iommu_domain_has_cap
ffffffff81424e51 r __kstrtab_iommu_iova_to_phys
ffffffff81424e64 r __kstrtab_iommu_detach_device
ffffffff81424e78 r __kstrtab_iommu_attach_device
ffffffff81424e8c r __kstrtab_iommu_domain_free
ffffffff81424e9e r __kstrtab_iommu_domain_alloc
ffffffff81424eb1 r __kstrtab_iommu_set_fault_handler
ffffffff81424ec9 r __kstrtab_iommu_present
ffffffff81424ed7 r __kstrtab_bus_set_iommu
ffffffff81424ee5 r __kstrtab_amd_iommu_device_info
ffffffff81424efb r __kstrtab_amd_iommu_enable_device_erratum
ffffffff81424f1b r __kstrtab_amd_iommu_get_v2_domain
ffffffff81424f33 r __kstrtab_amd_iommu_complete_ppr
ffffffff81424f4a r __kstrtab_amd_iommu_domain_clear_gcr3
ffffffff81424f66 r __kstrtab_amd_iommu_domain_set_gcr3
ffffffff81424f80 r __kstrtab_amd_iommu_flush_tlb
ffffffff81424f94 r __kstrtab_amd_iommu_flush_page
ffffffff81424fa9 r __kstrtab_amd_iommu_domain_enable_v2
ffffffff81424fc4 r __kstrtab_amd_iommu_domain_direct_map
ffffffff81424fe0 r __kstrtab_amd_iommu_unregister_ppr_notifier
ffffffff81425002 r __kstrtab_amd_iommu_register_ppr_notifier
ffffffff81425022 r __kstrtab_amd_iommu_v2_supported
ffffffff81425039 r __kstrtab_pcibios_align_resource
ffffffff81425050 r __kstrtab_xen_unregister_device_domain_owner
ffffffff81425073 r __kstrtab_xen_register_device_domain_owner
ffffffff81425094 r __kstrtab_xen_find_device_domain_owner
ffffffff814250b1 r __kstrtab_xen_pci_frontend
ffffffff814250c2 r __kstrtab_pcibios_scan_specific_bus
ffffffff814250dc r __kstrtab_fb_is_primary_device
ffffffff814250f1 r __kstrtab_kernel_sock_shutdown
ffffffff81425106 r __kstrtab_kernel_sock_ioctl
ffffffff81425118 r __kstrtab_kernel_sendpage
ffffffff81425128 r __kstrtab_kernel_setsockopt
ffffffff8142513a r __kstrtab_kernel_getsockopt
ffffffff8142514c r __kstrtab_kernel_getpeername
ffffffff8142515f r __kstrtab_kernel_getsockname
ffffffff81425172 r __kstrtab_kernel_connect
ffffffff81425181 r __kstrtab_kernel_accept
ffffffff8142518f r __kstrtab_kernel_listen
ffffffff8142519d r __kstrtab_kernel_bind
ffffffff814251a9 r __kstrtab_sock_unregister
ffffffff814251b9 r __kstrtab_sock_register
ffffffff814251c7 r __kstrtab_sock_create_kern
ffffffff814251d8 r __kstrtab_sock_create
ffffffff814251e4 r __kstrtab___sock_create
ffffffff814251f2 r __kstrtab_sock_wake_async
ffffffff81425202 r __kstrtab_sock_create_lite
ffffffff81425213 r __kstrtab_dlci_ioctl_set
ffffffff81425222 r __kstrtab_vlan_ioctl_set
ffffffff81425231 r __kstrtab_brioctl_set
ffffffff8142523d r __kstrtab_kernel_recvmsg
ffffffff8142524c r __kstrtab_sock_recvmsg
ffffffff81425259 r __kstrtab___sock_recv_ts_and_drops
ffffffff81425272 r __kstrtab___sock_recv_wifi_status
ffffffff8142528a r __kstrtab___sock_recv_timestamp
ffffffff814252a0 r __kstrtab_kernel_sendmsg
ffffffff814252af r __kstrtab_sock_sendmsg
ffffffff814252bc r __kstrtab_sock_tx_timestamp
ffffffff814252ce r __kstrtab_sock_release
ffffffff814252db r __kstrtab_sockfd_lookup
ffffffff814252e9 r __kstrtab_sock_map_fd
ffffffff814252f5 r __kstrtab_proto_unregister
ffffffff81425306 r __kstrtab_proto_register
ffffffff81425315 r __kstrtab_sock_prot_inuse_get
ffffffff81425329 r __kstrtab_sock_prot_inuse_add
ffffffff8142533d r __kstrtab_sk_common_release
ffffffff8142534f r __kstrtab_compat_sock_common_setsockopt
ffffffff8142536d r __kstrtab_sock_common_setsockopt
ffffffff81425384 r __kstrtab_sock_common_recvmsg
ffffffff81425398 r __kstrtab_compat_sock_common_getsockopt
ffffffff814253b6 r __kstrtab_sock_common_getsockopt
ffffffff814253cd r __kstrtab_sock_get_timestampns
ffffffff814253e2 r __kstrtab_sock_get_timestamp
ffffffff814253f5 r __kstrtab_lock_sock_fast
ffffffff81425404 r __kstrtab_release_sock
ffffffff81425411 r __kstrtab_lock_sock_nested
ffffffff81425422 r __kstrtab_sock_init_data
ffffffff81425431 r __kstrtab_sk_stop_timer
ffffffff8142543f r __kstrtab_sk_reset_timer
ffffffff8142544e r __kstrtab_sk_send_sigurg
ffffffff8142545d r __kstrtab_sock_no_sendpage
ffffffff8142546e r __kstrtab_sock_no_mmap
ffffffff8142547b r __kstrtab_sock_no_recvmsg
ffffffff8142548b r __kstrtab_sock_no_sendmsg
ffffffff8142549b r __kstrtab_sock_no_getsockopt
ffffffff814254ae r __kstrtab_sock_no_setsockopt
ffffffff814254c1 r __kstrtab_sock_no_shutdown
ffffffff814254d2 r __kstrtab_sock_no_listen
ffffffff814254e1 r __kstrtab_sock_no_ioctl
ffffffff814254ef r __kstrtab_sock_no_poll
ffffffff814254fc r __kstrtab_sock_no_getname
ffffffff8142550c r __kstrtab_sock_no_accept
ffffffff8142551b r __kstrtab_sock_no_socketpair
ffffffff8142552e r __kstrtab_sock_no_connect
ffffffff8142553e r __kstrtab_sock_no_bind
ffffffff8142554b r __kstrtab___sk_mem_reclaim
ffffffff8142555c r __kstrtab___sk_mem_schedule
ffffffff8142556e r __kstrtab_sk_wait_data
ffffffff8142557b r __kstrtab_sock_alloc_send_skb
ffffffff8142558f r __kstrtab_sock_alloc_send_pskb
ffffffff814255a4 r __kstrtab_sock_kfree_s
ffffffff814255b1 r __kstrtab_sock_kmalloc
ffffffff814255be r __kstrtab_sock_wmalloc
ffffffff814255cb r __kstrtab_sock_i_ino
ffffffff814255d6 r __kstrtab_sock_i_uid
ffffffff814255e1 r __kstrtab_sock_rfree
ffffffff814255ec r __kstrtab_sock_wfree
ffffffff814255f7 r __kstrtab_sk_setup_caps
ffffffff81425605 r __kstrtab_sk_clone_lock
ffffffff81425613 r __kstrtab_sk_release_kernel
ffffffff81425625 r __kstrtab_sk_free
ffffffff8142562d r __kstrtab_sk_alloc
ffffffff81425636 r __kstrtab_sock_update_netprioidx
ffffffff8142564d r __kstrtab_sock_update_classid
ffffffff81425661 r __kstrtab_sk_prot_clear_portaddr_nulls
ffffffff8142567e r __kstrtab_cred_to_ucred
ffffffff8142568c r __kstrtab_sock_setsockopt
ffffffff8142569c r __kstrtab_sk_dst_check
ffffffff814256a9 r __kstrtab___sk_dst_check
ffffffff814256b8 r __kstrtab_sk_reset_txq
ffffffff814256c5 r __kstrtab_sk_receive_skb
ffffffff814256d4 r __kstrtab_sock_queue_rcv_skb
ffffffff814256e7 r __kstrtab_net_prio_subsys_id
ffffffff814256fa r __kstrtab_net_cls_subsys_id
ffffffff8142570c r __kstrtab_sysctl_optmem_max
ffffffff8142571e r __kstrtab_memcg_socket_limit_enabled
ffffffff81425739 r __kstrtab_sysctl_max_syn_backlog
ffffffff81425750 r __kstrtab___skb_warn_lro_forwarding
ffffffff8142576a r __kstrtab_skb_partial_csum_set
ffffffff8142577f r __kstrtab_skb_complete_wifi_ack
ffffffff81425795 r __kstrtab_skb_tstamp_tx
ffffffff814257a3 r __kstrtab_sock_queue_err_skb
ffffffff814257b6 r __kstrtab_skb_cow_data
ffffffff814257c3 r __kstrtab_skb_to_sgvec
ffffffff814257d0 r __kstrtab_skb_gro_receive
ffffffff814257e0 r __kstrtab_skb_segment
ffffffff814257ec r __kstrtab_skb_pull_rcsum
ffffffff814257fb r __kstrtab_skb_append_datato_frags
ffffffff81425813 r __kstrtab_skb_find_text
ffffffff81425821 r __kstrtab_skb_abort_seq_read
ffffffff81425834 r __kstrtab_skb_seq_read
ffffffff81425841 r __kstrtab_skb_prepare_seq_read
ffffffff81425856 r __kstrtab_skb_split
ffffffff81425860 r __kstrtab_skb_insert
ffffffff8142586b r __kstrtab_skb_append
ffffffff81425876 r __kstrtab_skb_unlink
ffffffff81425881 r __kstrtab_skb_queue_tail
ffffffff81425890 r __kstrtab_skb_queue_head
ffffffff8142589f r __kstrtab_skb_queue_purge
ffffffff814258af r __kstrtab_skb_dequeue_tail
ffffffff814258c0 r __kstrtab_skb_dequeue
ffffffff814258cc r __kstrtab_skb_copy_and_csum_dev
ffffffff814258e2 r __kstrtab_skb_copy_and_csum_bits
ffffffff814258f9 r __kstrtab_skb_checksum
ffffffff81425906 r __kstrtab_skb_store_bits
ffffffff81425915 r __kstrtab_skb_copy_bits
ffffffff81425923 r __kstrtab___pskb_pull_tail
ffffffff81425934 r __kstrtab____pskb_trim
ffffffff81425941 r __kstrtab_skb_trim
ffffffff8142594a r __kstrtab_skb_pull
ffffffff81425953 r __kstrtab_skb_push
ffffffff8142595c r __kstrtab_skb_put
ffffffff81425964 r __kstrtab_skb_pad
ffffffff8142596c r __kstrtab_skb_copy_expand
ffffffff8142597c r __kstrtab_skb_realloc_headroom
ffffffff81425991 r __kstrtab_pskb_expand_head
ffffffff814259a2 r __kstrtab___pskb_copy
ffffffff814259ae r __kstrtab_skb_copy
ffffffff814259b7 r __kstrtab_skb_clone
ffffffff814259c1 r __kstrtab_skb_morph
ffffffff814259cb r __kstrtab_skb_recycle_check
ffffffff814259dd r __kstrtab_skb_recycle
ffffffff814259e9 r __kstrtab_consume_skb
ffffffff814259f5 r __kstrtab_kfree_skb
ffffffff814259ff r __kstrtab___kfree_skb
ffffffff81425a0b r __kstrtab_dev_alloc_skb
ffffffff81425a19 r __kstrtab_skb_add_rx_frag
ffffffff81425a29 r __kstrtab___netdev_alloc_skb
ffffffff81425a3c r __kstrtab_build_skb
ffffffff81425a46 r __kstrtab___alloc_skb
ffffffff81425a52 r __kstrtab_csum_partial_copy_fromiovecend
ffffffff81425a71 r __kstrtab_memcpy_fromiovecend
ffffffff81425a85 r __kstrtab_memcpy_fromiovec
ffffffff81425a96 r __kstrtab_memcpy_toiovecend
ffffffff81425aa8 r __kstrtab_memcpy_toiovec
ffffffff81425ab7 r __kstrtab_datagram_poll
ffffffff81425ac5 r __kstrtab_skb_copy_and_csum_datagram_iovec
ffffffff81425ae6 r __kstrtab___skb_checksum_complete
ffffffff81425afe r __kstrtab___skb_checksum_complete_head
ffffffff81425b1b r __kstrtab_skb_copy_datagram_from_iovec
ffffffff81425b38 r __kstrtab_skb_copy_datagram_const_iovec
ffffffff81425b56 r __kstrtab_skb_copy_datagram_iovec
ffffffff81425b6e r __kstrtab_skb_kill_datagram
ffffffff81425b80 r __kstrtab_skb_free_datagram_locked
ffffffff81425b99 r __kstrtab_skb_free_datagram
ffffffff81425bab r __kstrtab_skb_recv_datagram
ffffffff81425bbd r __kstrtab___skb_recv_datagram
ffffffff81425bd1 r __kstrtab_sk_stream_kill_queues
ffffffff81425be7 r __kstrtab_sk_stream_error
ffffffff81425bf7 r __kstrtab_sk_stream_wait_memory
ffffffff81425c0d r __kstrtab_sk_stream_wait_close
ffffffff81425c22 r __kstrtab_sk_stream_wait_connect
ffffffff81425c39 r __kstrtab_sk_stream_write_space
ffffffff81425c4f r __kstrtab_scm_fp_dup
ffffffff81425c5a r __kstrtab_scm_detach_fds
ffffffff81425c69 r __kstrtab_put_cmsg
ffffffff81425c72 r __kstrtab___scm_send
ffffffff81425c7d r __kstrtab___scm_destroy
ffffffff81425c8b r __kstrtab_gnet_stats_finish_copy
ffffffff81425ca2 r __kstrtab_gnet_stats_copy_app
ffffffff81425cb6 r __kstrtab_gnet_stats_copy_queue
ffffffff81425ccc r __kstrtab_gnet_stats_copy_rate_est
ffffffff81425ce5 r __kstrtab_gnet_stats_copy_basic
ffffffff81425cfb r __kstrtab_gnet_stats_start_copy
ffffffff81425d11 r __kstrtab_gnet_stats_start_copy_compat
ffffffff81425d2e r __kstrtab_gen_estimator_active
ffffffff81425d43 r __kstrtab_gen_replace_estimator
ffffffff81425d59 r __kstrtab_gen_kill_estimator
ffffffff81425d6c r __kstrtab_gen_new_estimator
ffffffff81425d7e r __kstrtab_unregister_pernet_device
ffffffff81425d97 r __kstrtab_register_pernet_device
ffffffff81425dae r __kstrtab_unregister_pernet_subsys
ffffffff81425dc7 r __kstrtab_register_pernet_subsys
ffffffff81425dde r __kstrtab_get_net_ns_by_pid
ffffffff81425df0 r __kstrtab___put_net
ffffffff81425dfa r __kstrtab_init_net
ffffffff81425e03 r __kstrtab_net_namespace_list
ffffffff81425e16 r __kstrtab_secure_ipv4_port_ephemeral
ffffffff81425e31 r __kstrtab_skb_flow_dissect
ffffffff81425e42 r __kstrtab_netdev_info
ffffffff81425e4e r __kstrtab_netdev_notice
ffffffff81425e5c r __kstrtab_netdev_warn
ffffffff81425e68 r __kstrtab_netdev_err
ffffffff81425e73 r __kstrtab_netdev_crit
ffffffff81425e7f r __kstrtab_netdev_alert
ffffffff81425e8c r __kstrtab_netdev_emerg
ffffffff81425e99 r __kstrtab_netdev_printk
ffffffff81425ea7 r __kstrtab___netdev_printk
ffffffff81425eb7 r __kstrtab_netdev_increment_features
ffffffff81425ed1 r __kstrtab_dev_change_net_namespace
ffffffff81425eea r __kstrtab_unregister_netdev
ffffffff81425efc r __kstrtab_unregister_netdevice_many
ffffffff81425f16 r __kstrtab_unregister_netdevice_queue
ffffffff81425f31 r __kstrtab_synchronize_net
ffffffff81425f41 r __kstrtab_free_netdev
ffffffff81425f4d r __kstrtab_alloc_netdev_mqs
ffffffff81425f5e r __kstrtab_dev_get_stats
ffffffff81425f6c r __kstrtab_netdev_stats_to_stats64
ffffffff81425f84 r __kstrtab_netdev_refcnt_read
ffffffff81425f97 r __kstrtab_register_netdev
ffffffff81425fa7 r __kstrtab_init_dummy_netdev
ffffffff81425fb9 r __kstrtab_register_netdevice
ffffffff81425fcc r __kstrtab_netif_stacked_transfer_operstate
ffffffff81425fed r __kstrtab_netdev_change_features
ffffffff81426004 r __kstrtab_netdev_update_features
ffffffff8142601b r __kstrtab_dev_set_mac_address
ffffffff8142602f r __kstrtab_dev_set_group
ffffffff8142603d r __kstrtab_dev_set_mtu
ffffffff81426049 r __kstrtab_dev_change_flags
ffffffff8142605a r __kstrtab_dev_get_flags
ffffffff81426068 r __kstrtab_dev_set_allmulti
ffffffff81426079 r __kstrtab_dev_set_promiscuity
ffffffff8142608d r __kstrtab_netdev_set_bond_master
ffffffff814260a4 r __kstrtab_netdev_set_master
ffffffff814260b6 r __kstrtab_register_gifconf
ffffffff814260c7 r __kstrtab_netif_napi_del
ffffffff814260d6 r __kstrtab_netif_napi_add
ffffffff814260e5 r __kstrtab_napi_complete
ffffffff814260f3 r __kstrtab___napi_complete
ffffffff81426103 r __kstrtab___napi_schedule
ffffffff81426113 r __kstrtab_napi_gro_frags
ffffffff81426122 r __kstrtab_napi_frags_skb
ffffffff81426131 r __kstrtab_napi_frags_finish
ffffffff81426143 r __kstrtab_napi_get_frags
ffffffff81426152 r __kstrtab_napi_gro_receive
ffffffff81426163 r __kstrtab_skb_gro_reset_offset
ffffffff81426178 r __kstrtab_napi_skb_finish
ffffffff81426188 r __kstrtab_dev_gro_receive
ffffffff81426198 r __kstrtab_napi_gro_flush
ffffffff814261a7 r __kstrtab_netif_receive_skb
ffffffff814261b9 r __kstrtab_netdev_rx_handler_unregister
ffffffff814261d6 r __kstrtab_netdev_rx_handler_register
ffffffff814261f1 r __kstrtab_netif_rx_ni
ffffffff814261fd r __kstrtab_netif_rx
ffffffff81426206 r __kstrtab_rps_may_expire_flow
ffffffff8142621a r __kstrtab_rps_sock_flow_table
ffffffff8142622e r __kstrtab___skb_get_rxhash
ffffffff8142623f r __kstrtab_dev_queue_xmit
ffffffff8142624e r __kstrtab___skb_tx_hash
ffffffff8142625c r __kstrtab_netif_skb_features
ffffffff8142626f r __kstrtab_netdev_rx_csum_fault
ffffffff81426284 r __kstrtab_skb_gso_segment
ffffffff81426294 r __kstrtab_skb_checksum_help
ffffffff814262a6 r __kstrtab_netif_device_attach
ffffffff814262ba r __kstrtab_netif_device_detach
ffffffff814262ce r __kstrtab_dev_kfree_skb_any
ffffffff814262e0 r __kstrtab_dev_kfree_skb_irq
ffffffff814262f2 r __kstrtab___netif_schedule
ffffffff81426303 r __kstrtab_netif_set_real_num_rx_queues
ffffffff81426320 r __kstrtab_netif_set_real_num_tx_queues
ffffffff8142633d r __kstrtab_dev_forward_skb
ffffffff8142634d r __kstrtab_net_disable_timestamp
ffffffff81426363 r __kstrtab_net_enable_timestamp
ffffffff81426378 r __kstrtab_call_netdevice_notifiers
ffffffff81426391 r __kstrtab_unregister_netdevice_notifier
ffffffff814263af r __kstrtab_register_netdevice_notifier
ffffffff814263cb r __kstrtab_dev_disable_lro
ffffffff814263db r __kstrtab_dev_close
ffffffff814263e5 r __kstrtab_dev_open
ffffffff814263ee r __kstrtab_dev_load
ffffffff814263f7 r __kstrtab_netdev_bonding_change
ffffffff8142640d r __kstrtab_netdev_state_change
ffffffff81426421 r __kstrtab_netdev_features_change
ffffffff81426438 r __kstrtab_dev_alloc_name
ffffffff81426447 r __kstrtab_dev_valid_name
ffffffff81426456 r __kstrtab_dev_get_by_flags_rcu
ffffffff8142646b r __kstrtab_dev_getfirstbyhwtype
ffffffff81426480 r __kstrtab___dev_getfirstbyhwtype
ffffffff81426497 r __kstrtab_dev_getbyhwaddr_rcu
ffffffff814264ab r __kstrtab_dev_get_by_index
ffffffff814264bc r __kstrtab_dev_get_by_index_rcu
ffffffff814264d1 r __kstrtab___dev_get_by_index
ffffffff814264e4 r __kstrtab_dev_get_by_name
ffffffff814264f4 r __kstrtab_dev_get_by_name_rcu
ffffffff81426508 r __kstrtab___dev_get_by_name
ffffffff8142651a r __kstrtab_netdev_boot_setup_check
ffffffff81426532 r __kstrtab_dev_remove_pack
ffffffff81426542 r __kstrtab___dev_remove_pack
ffffffff81426554 r __kstrtab_dev_add_pack
ffffffff81426561 r __kstrtab_softnet_data
ffffffff8142656e r __kstrtab_dev_base_lock
ffffffff8142657c r __kstrtab___ethtool_get_settings
ffffffff81426593 r __kstrtab_ethtool_op_get_link
ffffffff814265a7 r __kstrtab_dev_mc_init
ffffffff814265b3 r __kstrtab_dev_mc_flush
ffffffff814265c0 r __kstrtab_dev_mc_unsync
ffffffff814265ce r __kstrtab_dev_mc_sync
ffffffff814265da r __kstrtab_dev_mc_del_global
ffffffff814265ec r __kstrtab_dev_mc_del
ffffffff814265f7 r __kstrtab_dev_mc_add_global
ffffffff81426609 r __kstrtab_dev_mc_add
ffffffff81426614 r __kstrtab_dev_uc_init
ffffffff81426620 r __kstrtab_dev_uc_flush
ffffffff8142662d r __kstrtab_dev_uc_unsync
ffffffff8142663b r __kstrtab_dev_uc_sync
ffffffff81426647 r __kstrtab_dev_uc_del
ffffffff81426652 r __kstrtab_dev_uc_add
ffffffff8142665d r __kstrtab_dev_addr_del_multiple
ffffffff81426673 r __kstrtab_dev_addr_add_multiple
ffffffff81426689 r __kstrtab_dev_addr_del
ffffffff81426696 r __kstrtab_dev_addr_add
ffffffff814266a3 r __kstrtab_dev_addr_init
ffffffff814266b1 r __kstrtab_dev_addr_flush
ffffffff814266c0 r __kstrtab___hw_addr_init
ffffffff814266cf r __kstrtab___hw_addr_flush
ffffffff814266df r __kstrtab___hw_addr_unsync
ffffffff814266f0 r __kstrtab___hw_addr_sync
ffffffff814266ff r __kstrtab___hw_addr_del_multiple
ffffffff81426716 r __kstrtab___hw_addr_add_multiple
ffffffff8142672d r __kstrtab_skb_dst_set_noref
ffffffff8142673f r __kstrtab___dst_destroy_metrics_generic
ffffffff8142675d r __kstrtab_dst_cow_metrics_generic
ffffffff81426775 r __kstrtab_dst_release
ffffffff81426781 r __kstrtab_dst_destroy
ffffffff8142678d r __kstrtab___dst_free
ffffffff81426798 r __kstrtab_dst_alloc
ffffffff814267a2 r __kstrtab_dst_discard
ffffffff814267ae r __kstrtab_call_netevent_notifiers
ffffffff814267c6 r __kstrtab_unregister_netevent_notifier
ffffffff814267e3 r __kstrtab_register_netevent_notifier
ffffffff814267fe r __kstrtab_neigh_sysctl_unregister
ffffffff81426816 r __kstrtab_neigh_sysctl_register
ffffffff8142682c r __kstrtab_neigh_seq_stop
ffffffff8142683b r __kstrtab_neigh_seq_next
ffffffff8142684a r __kstrtab_neigh_seq_start
ffffffff8142685a r __kstrtab___neigh_for_each_release
ffffffff81426873 r __kstrtab_neigh_for_each
ffffffff81426882 r __kstrtab_neigh_table_clear
ffffffff81426894 r __kstrtab_neigh_table_init
ffffffff814268a5 r __kstrtab_neigh_table_init_no_netlink
ffffffff814268c1 r __kstrtab_neigh_parms_release
ffffffff814268d5 r __kstrtab_neigh_parms_alloc
ffffffff814268e7 r __kstrtab_pneigh_enqueue
ffffffff814268f6 r __kstrtab_neigh_direct_output
ffffffff8142690a r __kstrtab_neigh_connected_output
ffffffff81426921 r __kstrtab_neigh_resolve_output
ffffffff81426936 r __kstrtab_neigh_compat_output
ffffffff8142694a r __kstrtab_neigh_event_ns
ffffffff81426959 r __kstrtab_neigh_update
ffffffff81426966 r __kstrtab___neigh_event_send
ffffffff81426979 r __kstrtab_neigh_destroy
ffffffff81426987 r __kstrtab_pneigh_lookup
ffffffff81426995 r __kstrtab___pneigh_lookup
ffffffff814269a5 r __kstrtab_neigh_create
ffffffff814269b2 r __kstrtab_neigh_lookup_nodev
ffffffff814269c5 r __kstrtab_neigh_lookup
ffffffff814269d2 r __kstrtab_neigh_ifdown
ffffffff814269df r __kstrtab_neigh_changeaddr
ffffffff814269f0 r __kstrtab_neigh_rand_reach_time
ffffffff81426a06 r __kstrtab_rtnl_create_link
ffffffff81426a17 r __kstrtab_rtnl_configure_link
ffffffff81426a2b r __kstrtab_rtnl_link_get_net
ffffffff81426a3d r __kstrtab_ifla_policy
ffffffff81426a49 r __kstrtab_rtnl_put_cacheinfo
ffffffff81426a5c r __kstrtab_rtnetlink_put_metrics
ffffffff81426a72 r __kstrtab_rtnl_set_sk_err
ffffffff81426a82 r __kstrtab_rtnl_notify
ffffffff81426a8e r __kstrtab_rtnl_unicast
ffffffff81426a9b r __kstrtab___rta_fill
ffffffff81426aa6 r __kstrtab_rtnl_af_unregister
ffffffff81426ab9 r __kstrtab___rtnl_af_unregister
ffffffff81426ace r __kstrtab_rtnl_af_register
ffffffff81426adf r __kstrtab___rtnl_af_register
ffffffff81426af2 r __kstrtab_rtnl_link_unregister
ffffffff81426b07 r __kstrtab___rtnl_link_unregister
ffffffff81426b1e r __kstrtab_rtnl_link_register
ffffffff81426b31 r __kstrtab___rtnl_link_register
ffffffff81426b46 r __kstrtab_rtnl_unregister_all
ffffffff81426b5a r __kstrtab_rtnl_unregister
ffffffff81426b6a r __kstrtab_rtnl_register
ffffffff81426b78 r __kstrtab___rtnl_register
ffffffff81426b88 r __kstrtab_rtnl_is_locked
ffffffff81426b97 r __kstrtab_rtnl_trylock
ffffffff81426ba4 r __kstrtab_rtnl_unlock
ffffffff81426bb0 r __kstrtab_rtnl_lock
ffffffff81426bba r __kstrtab_mac_pton
ffffffff81426bc3 r __kstrtab_inet_proto_csum_replace4
ffffffff81426bdc r __kstrtab_in6_pton
ffffffff81426be5 r __kstrtab_in4_pton
ffffffff81426bee r __kstrtab_in_aton
ffffffff81426bf6 r __kstrtab_net_ratelimit
ffffffff81426c04 r __kstrtab_net_msg_warn
ffffffff81426c11 r __kstrtab_linkwatch_fire_event
ffffffff81426c26 r __kstrtab_sk_detach_filter
ffffffff81426c37 r __kstrtab_sk_attach_filter
ffffffff81426c48 r __kstrtab_sk_filter_release_rcu
ffffffff81426c5e r __kstrtab_sk_chk_filter
ffffffff81426c6c r __kstrtab_sk_run_filter
ffffffff81426c7a r __kstrtab_sk_filter
ffffffff81426c84 r __kstrtab_sock_diag_nlsk
ffffffff81426c93 r __kstrtab_sock_diag_unregister
ffffffff81426ca8 r __kstrtab_sock_diag_register
ffffffff81426cbb r __kstrtab_sock_diag_unregister_inet_compat
ffffffff81426cdc r __kstrtab_sock_diag_register_inet_compat
ffffffff81426cfb r __kstrtab_sock_diag_put_meminfo
ffffffff81426d11 r __kstrtab_sock_diag_save_cookie
ffffffff81426d27 r __kstrtab_sock_diag_check_cookie
ffffffff81426d3e r __kstrtab_flow_cache_lookup
ffffffff81426d50 r __kstrtab_flow_cache_genid
ffffffff81426d61 r __kstrtab_netdev_class_remove_file
ffffffff81426d7a r __kstrtab_netdev_class_create_file
ffffffff81426d93 r __kstrtab_net_ns_type_operations
ffffffff81426daa r __kstrtab_compat_mc_getsockopt
ffffffff81426dbf r __kstrtab_compat_mc_setsockopt
ffffffff81426dd4 r __kstrtab_compat_sock_get_timestampns
ffffffff81426df0 r __kstrtab_compat_sock_get_timestamp
ffffffff81426e0a r __kstrtab_sysfs_format_mac
ffffffff81426e1b r __kstrtab_alloc_etherdev_mqs
ffffffff81426e2e r __kstrtab_ether_setup
ffffffff81426e3a r __kstrtab_eth_validate_addr
ffffffff81426e4c r __kstrtab_eth_change_mtu
ffffffff81426e5b r __kstrtab_eth_mac_addr
ffffffff81426e68 r __kstrtab_eth_header_cache_update
ffffffff81426e80 r __kstrtab_eth_header_cache
ffffffff81426e91 r __kstrtab_eth_header_parse
ffffffff81426ea2 r __kstrtab_eth_type_trans
ffffffff81426eb1 r __kstrtab_eth_rebuild_header
ffffffff81426ec4 r __kstrtab_eth_header
ffffffff81426ecf r __kstrtab_dev_deactivate
ffffffff81426ede r __kstrtab_dev_activate
ffffffff81426eeb r __kstrtab_dev_graft_qdisc
ffffffff81426efb r __kstrtab_qdisc_destroy
ffffffff81426f09 r __kstrtab_qdisc_reset
ffffffff81426f15 r __kstrtab_qdisc_create_dflt
ffffffff81426f27 r __kstrtab_pfifo_fast_ops
ffffffff81426f36 r __kstrtab_noop_qdisc
ffffffff81426f41 r __kstrtab_netif_notify_peers
ffffffff81426f54 r __kstrtab_netif_carrier_off
ffffffff81426f66 r __kstrtab_netif_carrier_on
ffffffff81426f77 r __kstrtab_dev_trans_start
ffffffff81426f87 r __kstrtab_netlink_unregister_notifier
ffffffff81426fa3 r __kstrtab_netlink_register_notifier
ffffffff81426fbd r __kstrtab_nlmsg_notify
ffffffff81426fca r __kstrtab_netlink_rcv_skb
ffffffff81426fda r __kstrtab_netlink_ack
ffffffff81426fe6 r __kstrtab_netlink_dump_start
ffffffff81426ff9 r __kstrtab___nlmsg_put
ffffffff81427005 r __kstrtab_netlink_set_nonroot
ffffffff81427019 r __kstrtab_netlink_kernel_release
ffffffff81427030 r __kstrtab_netlink_kernel_create
ffffffff81427046 r __kstrtab_netlink_set_err
ffffffff81427056 r __kstrtab_netlink_broadcast
ffffffff81427068 r __kstrtab_netlink_broadcast_filtered
ffffffff81427083 r __kstrtab_netlink_has_listeners
ffffffff81427099 r __kstrtab_netlink_unicast
ffffffff814270a9 r __kstrtab_genl_notify
ffffffff814270b5 r __kstrtab_genlmsg_multicast_allns
ffffffff814270cd r __kstrtab_genlmsg_put
ffffffff814270d9 r __kstrtab_genl_unregister_family
ffffffff814270f0 r __kstrtab_genl_register_family_with_ops
ffffffff8142710e r __kstrtab_genl_register_family
ffffffff81427123 r __kstrtab_genl_unregister_ops
ffffffff81427137 r __kstrtab_genl_register_ops
ffffffff81427149 r __kstrtab_genl_unregister_mc_group
ffffffff81427162 r __kstrtab_genl_register_mc_group
ffffffff81427179 r __kstrtab_genl_unlock
ffffffff81427185 r __kstrtab_genl_lock
ffffffff8142718f r __kstrtab_ip_route_output_flow
ffffffff814271a4 r __kstrtab___ip_route_output_key
ffffffff814271ba r __kstrtab_ip_route_input_common
ffffffff814271d0 r __kstrtab___ip_select_ident
ffffffff814271e2 r __kstrtab_inetpeer_invalidate_tree
ffffffff814271fb r __kstrtab_inet_peer_xrlim_allow
ffffffff81427211 r __kstrtab_inet_putpeer
ffffffff8142721e r __kstrtab_inet_getpeer
ffffffff8142722b r __kstrtab_inet_del_protocol
ffffffff8142723d r __kstrtab_inet_add_protocol
ffffffff8142724f r __kstrtab_ip_check_defrag
ffffffff8142725f r __kstrtab_ip_defrag
ffffffff81427269 r __kstrtab_ip_options_rcv_srr
ffffffff8142727c r __kstrtab_ip_options_compile
ffffffff8142728f r __kstrtab_ip_generic_getfrag
ffffffff814272a2 r __kstrtab_ip_fragment
ffffffff814272ae r __kstrtab_ip_queue_xmit
ffffffff814272bc r __kstrtab_ip_build_and_send_pkt
ffffffff814272d2 r __kstrtab_ip_local_out
ffffffff814272df r __kstrtab_ip_send_check
ffffffff814272ed r __kstrtab_sysctl_ip_default_ttl
ffffffff81427303 r __kstrtab_compat_ip_getsockopt
ffffffff81427318 r __kstrtab_ip_getsockopt
ffffffff81427326 r __kstrtab_compat_ip_setsockopt
ffffffff8142733b r __kstrtab_ip_setsockopt
ffffffff81427349 r __kstrtab_ip_cmsg_recv
ffffffff81427356 r __kstrtab_inet_hashinfo_init
ffffffff81427369 r __kstrtab_inet_hash_connect
ffffffff8142737b r __kstrtab_inet_unhash
ffffffff81427387 r __kstrtab_inet_hash
ffffffff81427391 r __kstrtab___inet_hash_nolisten
ffffffff814273a6 r __kstrtab___inet_lookup_established
ffffffff814273c0 r __kstrtab___inet_lookup_listener
ffffffff814273d7 r __kstrtab___inet_inherit_port
ffffffff814273eb r __kstrtab_inet_put_port
ffffffff814273f9 r __kstrtab_inet_twsk_purge
ffffffff81427409 r __kstrtab_inet_twdr_twcal_tick
ffffffff8142741e r __kstrtab_inet_twsk_schedule
ffffffff81427431 r __kstrtab_inet_twsk_deschedule
ffffffff81427446 r __kstrtab_inet_twdr_twkill_work
ffffffff8142745c r __kstrtab_inet_twdr_hangman
ffffffff8142746e r __kstrtab_inet_twsk_alloc
ffffffff8142747e r __kstrtab___inet_twsk_hashdance
ffffffff81427494 r __kstrtab_inet_twsk_put
ffffffff814274a2 r __kstrtab_inet_csk_compat_setsockopt
ffffffff814274bd r __kstrtab_inet_csk_compat_getsockopt
ffffffff814274d8 r __kstrtab_inet_csk_addr2sockaddr
ffffffff814274ef r __kstrtab_inet_csk_listen_stop
ffffffff81427504 r __kstrtab_inet_csk_listen_start
ffffffff8142751a r __kstrtab_inet_csk_destroy_sock
ffffffff81427530 r __kstrtab_inet_csk_clone_lock
ffffffff81427544 r __kstrtab_inet_csk_reqsk_queue_prune
ffffffff8142755f r __kstrtab_inet_csk_reqsk_queue_hash_add
ffffffff8142757d r __kstrtab_inet_csk_search_req
ffffffff81427591 r __kstrtab_inet_csk_route_child_sock
ffffffff814275ab r __kstrtab_inet_csk_route_req
ffffffff814275be r __kstrtab_inet_csk_reset_keepalive_timer
ffffffff814275dd r __kstrtab_inet_csk_delete_keepalive_timer
ffffffff814275fd r __kstrtab_inet_csk_clear_xmit_timers
ffffffff81427618 r __kstrtab_inet_csk_init_xmit_timers
ffffffff81427632 r __kstrtab_inet_csk_accept
ffffffff81427642 r __kstrtab_inet_csk_get_port
ffffffff81427654 r __kstrtab_inet_csk_bind_conflict
ffffffff8142766b r __kstrtab_inet_get_local_port_range
ffffffff81427685 r __kstrtab_sysctl_local_reserved_ports
ffffffff814276a1 r __kstrtab_inet_csk_timer_bug_msg
ffffffff814276b8 r __kstrtab_tcp_done
ffffffff814276c1 r __kstrtab_tcp_cookie_generator
ffffffff814276d6 r __kstrtab_tcp_gro_complete
ffffffff814276e7 r __kstrtab_tcp_gro_receive
ffffffff814276f7 r __kstrtab_tcp_tso_segment
ffffffff81427707 r __kstrtab_compat_tcp_getsockopt
ffffffff8142771d r __kstrtab_tcp_getsockopt
ffffffff8142772c r __kstrtab_tcp_get_info
ffffffff81427739 r __kstrtab_compat_tcp_setsockopt
ffffffff8142774f r __kstrtab_tcp_setsockopt
ffffffff8142775e r __kstrtab_tcp_disconnect
ffffffff8142776d r __kstrtab_tcp_close
ffffffff81427777 r __kstrtab_tcp_shutdown
ffffffff81427784 r __kstrtab_tcp_set_state
ffffffff81427792 r __kstrtab_tcp_recvmsg
ffffffff8142779e r __kstrtab_tcp_read_sock
ffffffff814277ac r __kstrtab_tcp_sendmsg
ffffffff814277b8 r __kstrtab_tcp_sendpage
ffffffff814277c5 r __kstrtab_tcp_splice_read
ffffffff814277d5 r __kstrtab_tcp_ioctl
ffffffff814277df r __kstrtab_tcp_poll
ffffffff814277e8 r __kstrtab_tcp_enter_memory_pressure
ffffffff81427802 r __kstrtab_tcp_memory_pressure
ffffffff81427816 r __kstrtab_tcp_sockets_allocated
ffffffff8142782c r __kstrtab_tcp_memory_allocated
ffffffff81427841 r __kstrtab_sysctl_tcp_wmem
ffffffff81427851 r __kstrtab_sysctl_tcp_rmem
ffffffff81427861 r __kstrtab_tcp_orphan_count
ffffffff81427872 r __kstrtab_tcp_rcv_state_process
ffffffff81427888 r __kstrtab_tcp_rcv_established
ffffffff8142789c r __kstrtab_tcp_parse_options
ffffffff814278ae r __kstrtab_tcp_valid_rtt_meas
ffffffff814278c1 r __kstrtab_tcp_simple_retransmit
ffffffff814278d7 r __kstrtab_tcp_initialize_rcv_mss
ffffffff814278ee r __kstrtab_sysctl_tcp_adv_win_scale
ffffffff81427907 r __kstrtab_sysctl_tcp_ecn
ffffffff81427916 r __kstrtab_sysctl_tcp_reordering
ffffffff8142792c r __kstrtab_tcp_connect
ffffffff81427938 r __kstrtab_tcp_make_synack
ffffffff81427948 r __kstrtab_tcp_sync_mss
ffffffff81427955 r __kstrtab_tcp_mtup_init
ffffffff81427963 r __kstrtab_tcp_select_initial_window
ffffffff8142797d r __kstrtab_sysctl_tcp_cookie_size
ffffffff81427994 r __kstrtab_tcp_syn_ack_timeout
ffffffff814279a8 r __kstrtab_tcp_init_xmit_timers
ffffffff814279bd r __kstrtab_tcp_prot
ffffffff814279c6 r __kstrtab_tcp_proc_unregister
ffffffff814279da r __kstrtab_tcp_proc_register
ffffffff814279ec r __kstrtab_tcp_seq_open
ffffffff814279f9 r __kstrtab_tcp_v4_destroy_sock
ffffffff81427a0d r __kstrtab_ipv4_specific
ffffffff81427a1b r __kstrtab_tcp_v4_tw_get_peer
ffffffff81427a2e r __kstrtab_tcp_v4_get_peer
ffffffff81427a3e r __kstrtab_tcp_v4_do_rcv
ffffffff81427a4c r __kstrtab_tcp_v4_syn_recv_sock
ffffffff81427a61 r __kstrtab_tcp_v4_conn_request
ffffffff81427a75 r __kstrtab_tcp_syn_flood_action
ffffffff81427a8a r __kstrtab_tcp_v4_send_check
ffffffff81427a9c r __kstrtab_tcp_v4_connect
ffffffff81427aab r __kstrtab_tcp_twsk_unique
ffffffff81427abb r __kstrtab_tcp_hashinfo
ffffffff81427ac8 r __kstrtab_sysctl_tcp_low_latency
ffffffff81427adf r __kstrtab_tcp_child_process
ffffffff81427af1 r __kstrtab_tcp_check_req
ffffffff81427aff r __kstrtab_tcp_create_openreq_child
ffffffff81427b18 r __kstrtab_tcp_twsk_destructor
ffffffff81427b2c r __kstrtab_tcp_timewait_state_process
ffffffff81427b47 r __kstrtab_tcp_death_row
ffffffff81427b55 r __kstrtab_sysctl_tcp_syncookies
ffffffff81427b6b r __kstrtab_tcp_init_congestion_ops
ffffffff81427b83 r __kstrtab_tcp_reno_min_cwnd
ffffffff81427b95 r __kstrtab_tcp_reno_ssthresh
ffffffff81427ba7 r __kstrtab_tcp_reno_cong_avoid
ffffffff81427bbb r __kstrtab_tcp_cong_avoid_ai
ffffffff81427bcd r __kstrtab_tcp_slow_start
ffffffff81427bdc r __kstrtab_tcp_is_cwnd_limited
ffffffff81427bf0 r __kstrtab_tcp_unregister_congestion_control
ffffffff81427c12 r __kstrtab_tcp_register_congestion_control
ffffffff81427c32 r __kstrtab_ip4_datagram_connect
ffffffff81427c47 r __kstrtab_raw_seq_open
ffffffff81427c54 r __kstrtab_raw_seq_stop
ffffffff81427c61 r __kstrtab_raw_seq_next
ffffffff81427c6e r __kstrtab_raw_seq_start
ffffffff81427c7c r __kstrtab_raw_unhash_sk
ffffffff81427c8a r __kstrtab_raw_hash_sk
ffffffff81427c96 r __kstrtab_udp_proc_unregister
ffffffff81427caa r __kstrtab_udp_proc_register
ffffffff81427cbc r __kstrtab_udp_seq_open
ffffffff81427cc9 r __kstrtab_udp_prot
ffffffff81427cd2 r __kstrtab_udp_poll
ffffffff81427cdb r __kstrtab_udp_lib_getsockopt
ffffffff81427cee r __kstrtab_udp_lib_setsockopt
ffffffff81427d01 r __kstrtab_udp_lib_rehash
ffffffff81427d10 r __kstrtab_udp_lib_unhash
ffffffff81427d1f r __kstrtab_udp_disconnect
ffffffff81427d2e r __kstrtab_udp_ioctl
ffffffff81427d38 r __kstrtab_udp_sendmsg
ffffffff81427d44 r __kstrtab_udp_flush_pending_frames
ffffffff81427d5d r __kstrtab_udp4_lib_lookup
ffffffff81427d6d r __kstrtab___udp4_lib_lookup
ffffffff81427d7f r __kstrtab_udp_lib_get_port
ffffffff81427d90 r __kstrtab_udp_memory_allocated
ffffffff81427da5 r __kstrtab_sysctl_udp_wmem_min
ffffffff81427db9 r __kstrtab_sysctl_udp_rmem_min
ffffffff81427dcd r __kstrtab_sysctl_udp_mem
ffffffff81427ddc r __kstrtab_udp_table
ffffffff81427de6 r __kstrtab_udplite_prot
ffffffff81427df3 r __kstrtab_udplite_table
ffffffff81427e01 r __kstrtab_arp_invalidate
ffffffff81427e10 r __kstrtab_arp_send
ffffffff81427e19 r __kstrtab_arp_xmit
ffffffff81427e22 r __kstrtab_arp_create
ffffffff81427e2d r __kstrtab_arp_find
ffffffff81427e36 r __kstrtab_arp_tbl
ffffffff81427e3e r __kstrtab_icmp_send
ffffffff81427e48 r __kstrtab_icmp_err_convert
ffffffff81427e59 r __kstrtab_unregister_inetaddr_notifier
ffffffff81427e76 r __kstrtab_register_inetaddr_notifier
ffffffff81427e91 r __kstrtab_inet_confirm_addr
ffffffff81427ea3 r __kstrtab_inet_select_addr
ffffffff81427eb4 r __kstrtab_inetdev_by_index
ffffffff81427ec5 r __kstrtab_in_dev_finish_destroy
ffffffff81427edb r __kstrtab___ip_dev_find
ffffffff81427ee9 r __kstrtab_snmp_mib_free
ffffffff81427ef7 r __kstrtab_snmp_mib_init
ffffffff81427f05 r __kstrtab_snmp_fold_field
ffffffff81427f15 r __kstrtab_inet_ctl_sock_create
ffffffff81427f2a r __kstrtab_inet_sk_rebuild_header
ffffffff81427f41 r __kstrtab_inet_unregister_protosw
ffffffff81427f59 r __kstrtab_inet_register_protosw
ffffffff81427f6f r __kstrtab_inet_dgram_ops
ffffffff81427f7e r __kstrtab_inet_stream_ops
ffffffff81427f8e r __kstrtab_inet_ioctl
ffffffff81427f99 r __kstrtab_inet_shutdown
ffffffff81427fa7 r __kstrtab_inet_recvmsg
ffffffff81427fb4 r __kstrtab_inet_sendpage
ffffffff81427fc2 r __kstrtab_inet_sendmsg
ffffffff81427fcf r __kstrtab_inet_getname
ffffffff81427fdc r __kstrtab_inet_accept
ffffffff81427fe8 r __kstrtab_inet_stream_connect
ffffffff81427ffc r __kstrtab_inet_dgram_connect
ffffffff8142800f r __kstrtab_inet_bind
ffffffff81428019 r __kstrtab_sysctl_ip_nonlocal_bind
ffffffff81428031 r __kstrtab_inet_release
ffffffff8142803e r __kstrtab_build_ehash_secret
ffffffff81428051 r __kstrtab_inet_ehash_secret
ffffffff81428063 r __kstrtab_inet_listen
ffffffff8142806f r __kstrtab_inet_sock_destruct
ffffffff81428082 r __kstrtab_ipv4_config
ffffffff8142808e r __kstrtab_ip_mc_join_group
ffffffff8142809f r __kstrtab_ip_mc_dec_group
ffffffff814280af r __kstrtab_ip_mc_rejoin_groups
ffffffff814280c3 r __kstrtab_ip_mc_inc_group
ffffffff814280d3 r __kstrtab_inet_dev_addr_type
ffffffff814280e6 r __kstrtab_inet_addr_type
ffffffff814280f5 r __kstrtab_fib_table_lookup
ffffffff81428106 r __kstrtab_inet_frag_find
ffffffff81428115 r __kstrtab_inet_frag_evictor
ffffffff81428127 r __kstrtab_inet_frag_destroy
ffffffff81428139 r __kstrtab_inet_frag_kill
ffffffff81428148 r __kstrtab_inet_frags_exit_net
ffffffff8142815c r __kstrtab_inet_frags_fini
ffffffff8142816c r __kstrtab_inet_frags_init_net
ffffffff81428180 r __kstrtab_inet_frags_init
ffffffff81428190 r __kstrtab_ping_prot
ffffffff8142819a r __kstrtab_net_ipv4_ctl_path
ffffffff814281ac r __kstrtab_cookie_check_timestamp
ffffffff814281c3 r __kstrtab_syncookie_secret
ffffffff814281d4 r __kstrtab_lro_flush_pkt
ffffffff814281e2 r __kstrtab_lro_flush_all
ffffffff814281f0 r __kstrtab_lro_receive_frags
ffffffff81428202 r __kstrtab_lro_receive_skb
ffffffff81428212 r __kstrtab_xfrm4_rcv
ffffffff8142821c r __kstrtab_xfrm4_rcv_encap
ffffffff8142822c r __kstrtab_xfrm4_prepare_output
ffffffff81428241 r __kstrtab_xfrm_audit_policy_delete
ffffffff8142825a r __kstrtab_xfrm_audit_policy_add
ffffffff81428270 r __kstrtab_xfrm_policy_unregister_afinfo
ffffffff8142828e r __kstrtab_xfrm_policy_register_afinfo
ffffffff814282aa r __kstrtab_xfrm_dst_ifdown
ffffffff814282ba r __kstrtab___xfrm_route_forward
ffffffff814282cf r __kstrtab___xfrm_policy_check
ffffffff814282e3 r __kstrtab___xfrm_decode_session
ffffffff814282f9 r __kstrtab_xfrm_lookup
ffffffff81428305 r __kstrtab_xfrm_policy_delete
ffffffff81428318 r __kstrtab_xfrm_policy_walk_done
ffffffff8142832e r __kstrtab_xfrm_policy_walk_init
ffffffff81428344 r __kstrtab_xfrm_policy_walk
ffffffff81428355 r __kstrtab_xfrm_policy_flush
ffffffff81428367 r __kstrtab_xfrm_policy_byid
ffffffff81428378 r __kstrtab_xfrm_policy_bysel_ctx
ffffffff8142838e r __kstrtab_xfrm_policy_insert
ffffffff814283a1 r __kstrtab_xfrm_spd_getinfo
ffffffff814283b2 r __kstrtab_xfrm_policy_destroy
ffffffff814283c6 r __kstrtab_xfrm_policy_alloc
ffffffff814283d8 r __kstrtab_xfrm_cfg_mutex
ffffffff814283e7 r __kstrtab_xfrm_audit_state_icvfail
ffffffff81428400 r __kstrtab_xfrm_audit_state_notfound
ffffffff8142841a r __kstrtab_xfrm_audit_state_notfound_simple
ffffffff8142843b r __kstrtab_xfrm_audit_state_replay
ffffffff81428453 r __kstrtab_xfrm_audit_state_replay_overflow
ffffffff81428474 r __kstrtab_xfrm_audit_state_delete
ffffffff8142848c r __kstrtab_xfrm_audit_state_add
ffffffff814284a1 r __kstrtab_xfrm_init_state
ffffffff814284b1 r __kstrtab___xfrm_init_state
ffffffff814284c3 r __kstrtab_xfrm_state_delete_tunnel
ffffffff814284dc r __kstrtab_xfrm_state_unregister_afinfo
ffffffff814284f9 r __kstrtab_xfrm_state_register_afinfo
ffffffff81428514 r __kstrtab_xfrm_unregister_km
ffffffff81428527 r __kstrtab_xfrm_register_km
ffffffff81428538 r __kstrtab_xfrm_user_policy
ffffffff81428549 r __kstrtab_km_report
ffffffff81428553 r __kstrtab_km_policy_expired
ffffffff81428565 r __kstrtab_km_new_mapping
ffffffff81428574 r __kstrtab_km_query
ffffffff8142857d r __kstrtab_km_state_expired
ffffffff8142858e r __kstrtab_km_state_notify
ffffffff8142859e r __kstrtab_km_policy_notify
ffffffff814285af r __kstrtab_xfrm_state_walk_done
ffffffff814285c4 r __kstrtab_xfrm_state_walk_init
ffffffff814285d9 r __kstrtab_xfrm_state_walk
ffffffff814285e9 r __kstrtab_xfrm_alloc_spi
ffffffff814285f8 r __kstrtab_xfrm_get_acqseq
ffffffff81428608 r __kstrtab_xfrm_find_acq_byseq
ffffffff8142861c r __kstrtab_xfrm_find_acq
ffffffff8142862a r __kstrtab_xfrm_state_lookup_byaddr
ffffffff81428643 r __kstrtab_xfrm_state_lookup
ffffffff81428655 r __kstrtab_xfrm_state_check_expire
ffffffff8142866d r __kstrtab_xfrm_state_update
ffffffff8142867f r __kstrtab_xfrm_state_add
ffffffff8142868e r __kstrtab_xfrm_state_insert
ffffffff814286a0 r __kstrtab_xfrm_stateonly_find
ffffffff814286b4 r __kstrtab_xfrm_sad_getinfo
ffffffff814286c5 r __kstrtab_xfrm_state_flush
ffffffff814286d6 r __kstrtab_xfrm_state_delete
ffffffff814286e8 r __kstrtab___xfrm_state_delete
ffffffff814286fc r __kstrtab___xfrm_state_destroy
ffffffff81428711 r __kstrtab_xfrm_state_alloc
ffffffff81428722 r __kstrtab_xfrm_unregister_mode
ffffffff81428737 r __kstrtab_xfrm_register_mode
ffffffff8142874a r __kstrtab_xfrm_unregister_type
ffffffff8142875f r __kstrtab_xfrm_register_type
ffffffff81428772 r __kstrtab_xfrm_input_resume
ffffffff81428784 r __kstrtab_xfrm_input
ffffffff8142878f r __kstrtab_xfrm_prepare_input
ffffffff814287a2 r __kstrtab_secpath_dup
ffffffff814287ae r __kstrtab___secpath_destroy
ffffffff814287c0 r __kstrtab_xfrm_inner_extract_output
ffffffff814287da r __kstrtab_xfrm_output
ffffffff814287e6 r __kstrtab_xfrm_output_resume
ffffffff814287f9 r __kstrtab_xfrm_count_enc_supported
ffffffff81428812 r __kstrtab_xfrm_count_auth_supported
ffffffff8142882c r __kstrtab_xfrm_probe_algs
ffffffff8142883c r __kstrtab_xfrm_ealg_get_byidx
ffffffff81428850 r __kstrtab_xfrm_aalg_get_byidx
ffffffff81428864 r __kstrtab_xfrm_aead_get_byname
ffffffff81428879 r __kstrtab_xfrm_calg_get_byname
ffffffff8142888e r __kstrtab_xfrm_ealg_get_byname
ffffffff814288a3 r __kstrtab_xfrm_aalg_get_byname
ffffffff814288b8 r __kstrtab_xfrm_calg_get_byid
ffffffff814288cb r __kstrtab_xfrm_ealg_get_byid
ffffffff814288de r __kstrtab_xfrm_aalg_get_byid
ffffffff814288f1 r __kstrtab_xfrm_init_replay
ffffffff81428902 r __kstrtab_unix_outq_len
ffffffff81428910 r __kstrtab_unix_inq_len
ffffffff8142891d r __kstrtab_unix_peer_get
ffffffff8142892b r __kstrtab_unix_table_lock
ffffffff8142893b r __kstrtab_unix_socket_table
ffffffff8142894d r __kstrtab___ipv6_addr_type
ffffffff8142895e r __kstrtab_ipv6_skip_exthdr
ffffffff8142896f r __kstrtab_ipv6_ext_hdr
ffffffff8142897c r __kstrtab_unregister_net_sysctl_table
ffffffff81428998 r __kstrtab_register_net_sysctl_rotable
ffffffff814289b4 r __kstrtab_register_net_sysctl_table
ffffffff814289ce r __kstrtab_klist_next
ffffffff814289d9 r __kstrtab_klist_iter_exit
ffffffff814289e9 r __kstrtab_klist_iter_init
ffffffff814289f9 r __kstrtab_klist_iter_init_node
ffffffff81428a0e r __kstrtab_klist_node_attached
ffffffff81428a22 r __kstrtab_klist_remove
ffffffff81428a2f r __kstrtab_klist_del
ffffffff81428a39 r __kstrtab_klist_add_before
ffffffff81428a4a r __kstrtab_klist_add_after
ffffffff81428a5a r __kstrtab_klist_add_tail
ffffffff81428a69 r __kstrtab_klist_add_head
ffffffff81428a78 r __kstrtab_klist_init
ffffffff81428a83 r __kstrtab_md5_transform
ffffffff81428a91 r __kstrtab_sha_transform
ffffffff81428a9f r __kstrtab_csum_ipv6_magic
ffffffff81428aaf r __kstrtab_csum_partial_copy_nocheck
ffffffff81428ac9 r __kstrtab_csum_partial_copy_to_user
ffffffff81428ae3 r __kstrtab_csum_partial_copy_from_user
ffffffff81428b00 R x86_hyper_xen_hvm
ffffffff81428b20 R x86_hyper_vmware
ffffffff81428b40 R x86_hyper_ms_hyperv
ffffffff81428b60 r ioapic_devices
ffffffff81428bc0 r ehci_dmi_nohandoff_table
ffffffff81428fe0 r msi_k8t_dmi_table
ffffffff814292a0 r toshiba_ohci1394_dmi_table
ffffffff81429800 r can_skip_pciprobe_dmi_table
ffffffff81429d60 r pciprobe_dmi_table
ffffffff8142bb00 r assocs
ffffffff8142bb20 r types
ffffffff8142bb24 r levels
ffffffff8142bb40 r cache_table
ffffffff8142bc80 r cpuid_bits.11607
ffffffff8142bd80 r default_cpu
ffffffff8142c000 r cpuid_dependent_features
ffffffff8142c020 r msr_range_array
ffffffff8142c040 r intel_cpu_dev
ffffffff8142c2c0 r amd_cpu_dev
ffffffff8142c540 r centaur_cpu_dev
ffffffff8142c7c0 r multi_dmi_table
ffffffff8142ca70 r __param_initcall_debug
ffffffff8142ca70 R __start___param
ffffffff8142ca90 r __param_pause_on_oops
ffffffff8142cab0 r __param_panic
ffffffff8142cad0 r __param_console_suspend
ffffffff8142caf0 r __param_always_kmsg_dump
ffffffff8142cb10 r __param_time
ffffffff8142cb30 r __param_ignore_loglevel
ffffffff8142cb50 r __param_nomodule
ffffffff8142cb70 r __param_irqfixup
ffffffff8142cb90 r __param_noirqdebug
ffffffff8142cbb0 r __param_rcu_cpu_stall_timeout
ffffffff8142cbd0 r __param_rcu_cpu_stall_suppress
ffffffff8142cbf0 r __param_qlowmark
ffffffff8142cc10 r __param_qhimark
ffffffff8142cc30 r __param_blimit
ffffffff8142cc50 r __param_backend
ffffffff8142cc70 r __param_events_dfl_poll_msecs
ffffffff8142cc90 r __param_policy
ffffffff8142ccb0 r __param_nosourceid
ffffffff8142ccd0 r __param_forceload
ffffffff8142ccf0 r __param_nologo
ffffffff8142cd10 r __param_bfs
ffffffff8142cd30 r __param_gts
ffffffff8142cd50 r __param_ec_delay
ffffffff8142cd70 r __param_acpica_version
ffffffff8142cd90 r __param_aml_debug_output
ffffffff8142cdb0 r __param_disable
ffffffff8142cdd0 r __param_brl_nbchords
ffffffff8142cdf0 r __param_brl_timeout
ffffffff8142ce10 r __param_underline
ffffffff8142ce30 r __param_italic
ffffffff8142ce50 r __param_default_blu
ffffffff8142ce70 r __param_default_grn
ffffffff8142ce90 r __param_default_red
ffffffff8142ceb0 r __param_consoleblank
ffffffff8142ced0 r __param_cur_default
ffffffff8142cef0 r __param_global_cursor_default
ffffffff8142cf10 r __param_default_utf8
ffffffff8142cf30 r __param_scsi_logging_level
ffffffff8142cf50 r __param_inq_timeout
ffffffff8142cf70 r __param_max_report_luns
ffffffff8142cf90 r __param_scan
ffffffff8142cfb0 r __param_max_luns
ffffffff8142cfd0 r __param_default_dev_flags
ffffffff8142cff0 r __param_dev_flags
ffffffff8142d010 r __param_atapi_an
ffffffff8142d030 r __param_allow_tpm
ffffffff8142d050 r __param_noacpi
ffffffff8142d070 r __param_ata_probe_timeout
ffffffff8142d090 r __param_dma
ffffffff8142d0b0 r __param_ignore_hpa
ffffffff8142d0d0 r __param_fua
ffffffff8142d0f0 r __param_atapi_passthru16
ffffffff8142d110 r __param_atapi_dmadir
ffffffff8142d130 r __param_atapi_enabled
ffffffff8142d150 r __param_force
ffffffff8142d170 r __param_acpi_gtf_filter
ffffffff8142d190 r __param_debug
ffffffff8142d1b0 r __param_nopnp
ffffffff8142d1d0 r __param_dritek
ffffffff8142d1f0 r __param_notimeout
ffffffff8142d210 r __param_noloop
ffffffff8142d230 r __param_dumbkbd
ffffffff8142d250 r __param_direct
ffffffff8142d270 r __param_reset
ffffffff8142d290 r __param_unlock
ffffffff8142d2b0 r __param_nomux
ffffffff8142d2d0 r __param_noaux
ffffffff8142d2f0 r __param_nokbd
ffffffff8142d310 r __param_tap_time
ffffffff8142d330 r __param_yres
ffffffff8142d350 r __param_xres
ffffffff8142d370 r __param_terminal
ffffffff8142d390 r __param_extra
ffffffff8142d3b0 r __param_scroll
ffffffff8142d3d0 r __param_softraw
ffffffff8142d3f0 r __param_softrepeat
ffffffff8142d410 r __param_reset
ffffffff8142d430 r __param_set
ffffffff8142d450 r __param_resync_time
ffffffff8142d470 r __param_resetafter
ffffffff8142d490 r __param_smartscroll
ffffffff8142d4b0 r __param_rate
ffffffff8142d4d0 r __param_resolution
ffffffff8142d4f0 r __param_proto
ffffffff8142d510 r __param_off
ffffffff8142d530 r __param_ignore_special_drivers
ffffffff8142d550 r __param_debug
ffffffff8142d570 r __param_hystart_ack_delta
ffffffff8142d590 r __param_hystart_low_window
ffffffff8142d5b0 r __param_hystart_detect
ffffffff8142d5d0 r __param_hystart
ffffffff8142d5f0 r __param_tcp_friendliness
ffffffff8142d610 r __param_bic_scale
ffffffff8142d630 r __param_initial_ssthresh
ffffffff8142d650 r __param_beta
ffffffff8142d670 r __param_fast_convergence
ffffffff8142d690 r __modver_attr
ffffffff8142d690 R __start___modver
ffffffff8142d690 R __stop___param
ffffffff8142d698 r __modver_attr
ffffffff8142d6a0 r __modver_attr
ffffffff8142d6a8 r __modver_attr
ffffffff8142d6b0 R __stop___modver
ffffffff8142e000 R __end_rodata
ffffffff8142e000 D _sdata
ffffffff8142e000 D init_thread_union
ffffffff81430000 D __vsyscall_page
ffffffff81431000 D vdso_start
ffffffff81431f98 D vdso_end
ffffffff81432000 D mmlist_lock
ffffffff81432040 D tasklist_lock
ffffffff81432080 d softirq_vec
ffffffff81432140 d pidmap_lock
ffffffff81432180 D xtime_lock
ffffffff814321c0 d call_function
ffffffff81432200 d hash_lock
ffffffff81432240 D vm_stat
ffffffff81432380 d nr_files
ffffffff814323c0 D rename_lock
ffffffff81432400 d dcache_lru_lock
ffffffff81432440 D inode_sb_list_lock
ffffffff81432480 d inode_hash_lock
ffffffff814324c0 d bdev_lock
ffffffff81432500 d crc32table_le
ffffffff81434500 d crc32ctable_le
ffffffff81436500 d crc32table_be
ffffffff81439000 D init_level4_pgt
ffffffff8143a000 D level3_ident_pgt
ffffffff8143b000 D level3_kernel_pgt
ffffffff8143c000 D level2_fixmap_pgt
ffffffff8143d000 D level1_fixmap_pgt
ffffffff8143e000 D level2_ident_pgt
ffffffff8143f000 D level2_kernel_pgt
ffffffff81440000 D level2_spare_pgt
ffffffff81441000 D early_gdt_descr
ffffffff81441002 d early_gdt_descr_base
ffffffff81441010 D phys_base
ffffffff81441020 D init_task
ffffffff814416c0 d init_signals
ffffffff81441ae0 d init_sighand
ffffffff81442320 D loops_per_jiffy
ffffffff81442340 D envp_init
ffffffff81442460 d argv_init
ffffffff81442580 D init_uts_ns
ffffffff81442718 D root_mountflags
ffffffff81442740 D HYPERVISOR_shared_info
ffffffff81442748 D machine_to_phys_mapping
ffffffff81442750 d have_vcpu_info_placement
ffffffff81442760 d xen_panic_block
ffffffff81442778 d __xen_make_pud__
ffffffff81442780 d __xen_pud_val__
ffffffff81442788 d __xen_make_pmd__
ffffffff81442790 d __xen_pmd_val__
ffffffff81442798 d __xen_make_pgd__
ffffffff814427a0 d __xen_make_pte__
ffffffff814427a8 d __xen_pgd_val__
ffffffff814427b0 d __xen_pte_val__
ffffffff814427b8 d __xen_irq_enable__
ffffffff814427c0 d __xen_irq_disable__
ffffffff814427c8 d __xen_restore_fl__
ffffffff814427d0 d __xen_save_fl__
ffffffff814427d8 d xen_clockevent
ffffffff814427e0 d xen_swiotlb_dma_ops
ffffffff81442880 d x86_stack_ids
ffffffff814428c0 d irq0
ffffffff81442940 D kstack_depth_to_print
ffffffff81442944 D code_bytes
ffffffff81442948 d die_owner
ffffffff81442960 d nmi_desc
ffffffff814429a0 D _brk_end
ffffffff814429c0 d standard_io_resources
ffffffff81442c00 d code_resource
ffffffff81442c40 d data_resource
ffffffff81442c80 d bss_resource
ffffffff81442cb8 d reserve_low
ffffffff81442cc0 D x86_msi
ffffffff81442ce0 D x86_platform
ffffffff81442d40 D legacy_pic
ffffffff81442d60 D default_legacy_pic
ffffffff81442dc0 D null_legacy_pic
ffffffff81442e20 D i8259A_chip
ffffffff81442ed8 D cached_irq_mask
ffffffff81442ee0 d i8259_syscore_ops
ffffffff81442f40 d irq2
ffffffff81442fc0 d adapter_rom_resources
ffffffff81443120 d video_rom_resource
ffffffff81443160 d system_rom_resource
ffffffff814431a0 d extension_rom_resource
ffffffff814431e0 d rs.28005
ffffffff81443200 D pci_mem_start
ffffffff81443220 D x86_dma_fallback_dev
ffffffff81443498 D dma_ops
ffffffff814434a0 D ideal_nops
ffffffff814434c0 d smp_alt
ffffffff814434e0 d smp_alt_modules
ffffffff814434f0 d smp_mode
ffffffff81443500 D nommu_dma_ops
ffffffff81443580 d clocksource_tsc
ffffffff81443640 d time_cpufreq_notifier_block
ffffffff81443660 d tsc_irqwork
ffffffff814436b8 d tsc_start.23387
ffffffff814436c0 d rtc_device
ffffffff81443980 d rtc_resources
ffffffff814439f0 D sig_xstate_ia32_size
ffffffff814439f4 D sig_xstate_size
ffffffff81443a00 d i8237_syscore_ops
ffffffff81443a40 d ktype_percpu_entry
ffffffff81443a80 d ktype_cache
ffffffff81443ac0 d default_attrs
ffffffff81443b20 d cache_disable_0
ffffffff81443b40 d cache_disable_1
ffffffff81443b60 d subcaches
ffffffff81443b80 d type
ffffffff81443ba0 d level
ffffffff81443bc0 d coherency_line_size
ffffffff81443be0 d physical_line_partition
ffffffff81443c00 d ways_of_associativity
ffffffff81443c20 d number_of_sets
ffffffff81443c40 d size
ffffffff81443c60 d shared_cpu_map
ffffffff81443c80 d shared_cpu_list
ffffffff81443ca0 D nmi_idt_descr
ffffffff81443caa D idt_descr
ffffffff81443cc0 d hyperv_cs
ffffffff81443d80 d x86_pmu_format_group
ffffffff81443da0 d pmu
ffffffff81443e60 d pmc_reserve_mutex
ffffffff81443e80 d x86_pmu_attr_groups
ffffffff81443ea0 d x86_pmu_attr_group
ffffffff81443ec0 d x86_pmu_attrs
ffffffff81443ee0 d dev_attr_rdpmc
ffffffff81443f00 d amd_format_attr
ffffffff81443f40 d amd_f15_PMC3
ffffffff81443f60 d amd_f15_PMC53
ffffffff81443f80 d amd_f15_PMC20
ffffffff81443fa0 d amd_f15_PMC30
ffffffff81443fc0 d amd_f15_PMC50
ffffffff81443fe0 d amd_f15_PMC0
ffffffff81444000 d format_attr_event
ffffffff81444020 d format_attr_umask
ffffffff81444040 d format_attr_edge
ffffffff81444060 d format_attr_inv
ffffffff81444080 d format_attr_cmask
ffffffff814440a0 d p6_event_constraints
ffffffff81444180 d intel_p6_formats_attr
ffffffff814441c0 d format_attr_event
ffffffff814441e0 d format_attr_umask
ffffffff81444200 d format_attr_edge
ffffffff81444220 d format_attr_pc
ffffffff81444240 d format_attr_inv
ffffffff81444260 d format_attr_cmask
ffffffff81444280 D p4_event_aliases
ffffffff814442a0 d intel_p4_formats_attr
ffffffff814442c0 d p4_event_bind_map
ffffffff814447e0 d p4_pebs_bind_map
ffffffff81444840 d format_attr_cccr
ffffffff81444860 d format_attr_escr
ffffffff81444880 d format_attr_ht
ffffffff814448a0 D intel_snb_pebs_event_constraints
ffffffff81444ae0 D intel_westmere_pebs_event_constraints
ffffffff81444c60 D intel_nehalem_pebs_event_constraints
ffffffff81444de0 D intel_atom_pebs_event_constraints
ffffffff81444e60 D intel_core2_pebs_event_constraints
ffffffff81444f20 D bts_constraint
ffffffff81444f40 d intel_arch_formats_attr
ffffffff81444f80 d intel_arch3_formats_attr
ffffffff81444fe0 d format_attr_event
ffffffff81445000 d format_attr_umask
ffffffff81445020 d format_attr_edge
ffffffff81445040 d format_attr_pc
ffffffff81445060 d format_attr_inv
ffffffff81445080 d format_attr_cmask
ffffffff814450a0 d format_attr_any
ffffffff814450c0 d format_attr_offcore_rsp
ffffffff814450e0 D machine_check_vector
ffffffff81445100 d mcelog
ffffffff81445c20 d _rs.23551
ffffffff81445c40 d mce_chrdev_wait
ffffffff81445c60 d mce_trigger_work
ffffffff81445c80 d ratelimit.24027
ffffffff81445ca0 d mce_helper_argv
ffffffff81445cb0 d check_interval
ffffffff81445cc0 d mce_subsys
ffffffff81445d40 d mce_syscore_ops
ffffffff81445d80 d mce_chrdev_device
ffffffff81445de0 d mce_chrdev_read_mutex
ffffffff81445e00 d dev_attr_tolerant
ffffffff81445e40 d dev_attr_check_interval
ffffffff81445e80 d dev_attr_trigger
ffffffff81445ea0 d dev_attr_monarch_timeout
ffffffff81445ee0 d dev_attr_dont_log_ce
ffffffff81445f20 d dev_attr_ignore_ce
ffffffff81445f60 d dev_attr_cmci_disabled
ffffffff81445fa0 d severities
ffffffff81446240 d threshold_ktype
ffffffff81446280 d default_attrs
ffffffff814462a0 d interrupt_enable
ffffffff814462c0 d threshold_limit
ffffffff814462e0 d error_count
ffffffff81446300 D mce_threshold_vector
ffffffff81446320 d mtrr_mutex
ffffffff81446340 d mtrr_syscore_ops
ffffffff81446380 d perf_ibs
ffffffff81446430 D __acpi_register_gsi
ffffffff81446440 D machine_ops
ffffffff81446470 D reboot_type
ffffffff81446474 d reboot_default
ffffffff81446480 D smp_ops
ffffffff814464d8 d stopping_cpu
ffffffff814464e0 D smp_num_siblings
ffffffff81446500 d x86_cpu_hotplug_driver_mutex
ffffffff81446520 d current_node.27104
ffffffff81446540 D first_system_vector
ffffffff81446544 D boot_cpu_physical_apicid
ffffffff81446580 d lapic_clockevent
ffffffff81446640 d lapic_syscore_ops
ffffffff81446680 d lapic_resource
ffffffff814466c0 D apic_noop
ffffffff81446820 D sis_apic_bug
ffffffff81446840 d io_apic_ops
ffffffff81446860 d nr_irqs_gsi
ffffffff81446880 d ioapic_syscore_ops
ffffffff814468a8 d current_vector.31620
ffffffff814468ac d current_offset.31621
ffffffff814468b0 d ioapic_i8259
ffffffff814468c0 d msi_chip
ffffffff81446980 d hpet_msi_type
ffffffff81446a40 d ht_irq_chip
ffffffff81446b00 d apic_physflat
ffffffff81446c60 d apic_flat
ffffffff81446dc0 d early_console
ffffffff81446de0 d early_vga_console
ffffffff81446e38 d current_ypos
ffffffff81446e3c d max_ypos
ffffffff81446e40 d max_xpos
ffffffff81446e60 d early_serial_console
ffffffff81446eb8 d early_serial_base
ffffffff81446ec0 d hpet_clockevent
ffffffff81446f80 d clocksource_hpet
ffffffff81447040 d amd_nb_link_ids
ffffffff81447080 D pv_mmu_ops
ffffffff814471d0 D pv_apic_ops
ffffffff814471e0 D pv_cpu_ops
ffffffff81447340 D pv_irq_ops
ffffffff81447380 D pv_time_ops
ffffffff81447398 D pv_init_ops
ffffffff814473a0 D pv_info
ffffffff814473c0 d reserve_ioports
ffffffff81447400 D pv_lock_ops
ffffffff81447440 d swiotlb_dma_ops
ffffffff814474c0 d write_class
ffffffff81447520 d read_class
ffffffff81447560 d dir_class
ffffffff814475a0 d chattr_class
ffffffff814475e0 d signal_class
ffffffff81447600 d gart_dma_ops
ffffffff81447678 d iommu_fullflush
ffffffff81447680 d gart_syscore_ops
ffffffff814476c0 d gart_resource
ffffffff814476f8 d __vsmp_irq_enable__
ffffffff81447700 d __vsmp_irq_disable__
ffffffff81447708 d __vsmp_restore_fl__
ffffffff81447710 d __vsmp_save_fl__
ffffffff81447718 d is_vsmp
ffffffff81447740 D direct_gbpages
ffffffff81447760 d gate_vma
ffffffff81447810 D show_unhandled_signals
ffffffff81447820 D pgd_list
ffffffff81447840 D va_align
ffffffff81447880 D __userpte_alloc_gfp
ffffffff814478a0 d abi_root_table2
ffffffff81447920 d abi_table2
ffffffff814479a0 D ia32_signal_class
ffffffff814479c0 D ia32_read_class
ffffffff81447a00 D ia32_write_class
ffffffff81447a60 D ia32_chattr_class
ffffffff81447ac0 D ia32_dir_class
ffffffff81447b00 d default_dump_filter
ffffffff81447b20 D default_exec_domain
ffffffff81447b78 d exec_domains_lock
ffffffff81447b80 d exec_domains
ffffffff81447ba0 d ident_map
ffffffff81447ca0 D printk_ratelimit_state
ffffffff81447cc0 D console_suspend_enabled
ffffffff81447cc4 d printk_cpu
ffffffff81447cd0 D console_printk
ffffffff81447ce0 D log_wait
ffffffff81447cf8 d log_buf_len
ffffffff81447d00 d log_buf
ffffffff81447d08 d saved_console_loglevel
ffffffff81447d10 d console_sem
ffffffff81447d28 d new_text_line
ffffffff81447d2c d selected_console
ffffffff81447d30 d msg_level.30848
ffffffff81447d34 d preferred_console
ffffffff81447d40 d dump_list
ffffffff81447d60 d cpu_add_remove_lock
ffffffff81447d80 d cpu_hotplug
ffffffff81447db0 d cpu_hotplug_pm_callback_nb.23875
ffffffff81447dc8 d firsttime.28209
ffffffff81447de0 D softirq_to_name
ffffffff81447e40 D iomem_resource
ffffffff81447e80 D ioport_resource
ffffffff81447eb8 d resource_lock
ffffffff81447ec0 d muxed_resource_wait
ffffffff81447ee0 d sysctl_base_table
ffffffff81448060 d kern_table
ffffffff81448c60 d vm_table
ffffffff81449660 d fs_table
ffffffff81449ae0 d debug_table
ffffffff81449b60 d one
ffffffff81449b64 d maxolduid
ffffffff81449b68 d ten_thousand
ffffffff81449b6c d two
ffffffff81449b70 d ngroups_max
ffffffff81449b74 d one_hundred
ffffffff81449b78 d one_ul
ffffffff81449b80 d dirty_bytes_min
ffffffff81449b88 d three
ffffffff81449b8c d max_extfrag_threshold
ffffffff81449b90 d min_percpu_pagelist_fract
ffffffff81449b94 D file_caps_enabled
ffffffff81449ba0 D root_user
ffffffff81449c00 D init_user_ns
ffffffff8144a040 d ratelimit_state.32615
ffffffff8144a060 D poweroff_cmd
ffffffff8144a160 D uts_sem
ffffffff8144a180 D C_A_D
ffffffff8144a184 D fs_overflowgid
ffffffff8144a188 D fs_overflowuid
ffffffff8144a18c D overflowgid
ffffffff8144a190 D overflowuid
ffffffff8144a1a0 d reboot_mutex
ffffffff8144a1c0 d cad_work.32196
ffffffff8144a1e0 d envp.32694
ffffffff8144a200 D usermodehelper_table
ffffffff8144a2c0 D modprobe_path
ffffffff8144a3c0 d envp.29705
ffffffff8144a3e0 d umhelper_sem
ffffffff8144a400 d usermodehelper_disabled_waitq
ffffffff8144a418 d usermodehelper_disabled
ffffffff8144a41c d usermodehelper_bset
ffffffff8144a424 d usermodehelper_inheritable
ffffffff8144a430 d running_helpers_waitq
ffffffff8144a450 d workqueues
ffffffff8144a460 D init_pid_ns
ffffffff8144acb0 D pid_max_max
ffffffff8144acb4 D pid_max_min
ffffffff8144acb8 D pid_max
ffffffff8144acc0 D init_struct_pid
ffffffff8144ad10 d pidhash_shift
ffffffff8144ad20 D text_mutex
ffffffff8144ad40 D module_ktype
ffffffff8144ad70 D param_ops_string
ffffffff8144ad90 D param_array_ops
ffffffff8144adb0 D param_ops_bint
ffffffff8144add0 D param_ops_invbool
ffffffff8144adf0 D param_ops_bool
ffffffff8144ae10 D param_ops_charp
ffffffff8144ae30 D param_ops_ulong
ffffffff8144ae50 D param_ops_long
ffffffff8144ae70 D param_ops_uint
ffffffff8144ae90 D param_ops_int
ffffffff8144aeb0 D param_ops_ushort
ffffffff8144aed0 D param_ops_short
ffffffff8144aef0 D param_ops_byte
ffffffff8144af20 d param_lock
ffffffff8144af40 d kmalloced_params
ffffffff8144af50 d kthread_create_list
ffffffff8144af60 D clock_posix_cpu
ffffffff8144afc0 D init_nsproxy
ffffffff8144b000 D reboot_notifier_list
ffffffff8144b040 d kernel_attr_group
ffffffff8144b060 d notes_attr
ffffffff8144b0a0 d kernel_attrs
ffffffff8144b0c0 d fscaps_attr
ffffffff8144b0e0 d uevent_seqnum_attr
ffffffff8144b100 d uevent_helper_attr
ffffffff8144b120 D init_cred
ffffffff8144b190 d async_running
ffffffff8144b1a0 d next_cookie
ffffffff8144b1b0 d async_pending
ffffffff8144b1c0 d async_done
ffffffff8144b1e0 D init_groups
ffffffff8144b280 D cpu_cgroup_subsys
ffffffff8144b360 D cpuacct_subsys
ffffffff8144b440 D sysctl_sched_rt_runtime
ffffffff8144b444 D sysctl_sched_rt_period
ffffffff8144b460 D sched_domains_mutex
ffffffff8144b480 d files
ffffffff8144b6c0 d default_relax_domain_level
ffffffff8144b6e0 d cpu_files
ffffffff8144bb60 d default_topology
ffffffff8144bc60 d dev_attr_sched_mc_power_savings
ffffffff8144bc80 d task_groups
ffffffff8144bca0 d rt_constraints_mutex
ffffffff8144bcc0 d mutex.41753
ffffffff8144bce0 d cfs_constraints_mutex
ffffffff8144bd00 D sysctl_sched_cfs_bandwidth_slice
ffffffff8144bd04 D normalized_sysctl_sched_wakeup_granularity
ffffffff8144bd08 D sysctl_sched_wakeup_granularity
ffffffff8144bd0c D normalized_sysctl_sched_min_granularity
ffffffff8144bd10 D sysctl_sched_min_granularity
ffffffff8144bd14 D sysctl_sched_tunable_scaling
ffffffff8144bd18 D normalized_sysctl_sched_latency
ffffffff8144bd1c D sysctl_sched_latency
ffffffff8144bd20 d shares_mutex
ffffffff8144bd40 d task_groups
ffffffff8144bd60 d cpu_dma_pm_qos
ffffffff8144bdc0 d network_lat_pm_qos
ffffffff8144be20 d network_throughput_pm_qos
ffffffff8144be80 d cpu_dma_constraints
ffffffff8144bec0 d network_lat_constraints
ffffffff8144bf00 d network_tput_constraints
ffffffff8144bf40 d cpu_dma_lat_notifier
ffffffff8144bf80 d network_lat_notifier
ffffffff8144bfc0 d network_throughput_notifier
ffffffff8144c000 D pm_async_enabled
ffffffff8144c020 D pm_mutex
ffffffff8144c040 d pm_chain_head
ffffffff8144c070 d attr_group
ffffffff8144c0a0 d g
ffffffff8144c0c0 d state_attr
ffffffff8144c0e0 d pm_async_attr
ffffffff8144c100 d wakeup_count_attr
ffffffff8144c140 d timekeeping_syscore_ops
ffffffff8144c180 D tick_usec
ffffffff8144c188 d time_status
ffffffff8144c190 d time_maxerror
ffffffff8144c198 d time_esterror
ffffffff8144c1a0 d time_constant
ffffffff8144c1c0 d sync_cmos_work
ffffffff8144c220 d watchdog_list
ffffffff8144c240 d watchdog_work
ffffffff8144c260 d clocksource_mutex
ffffffff8144c280 d clocksource_list
ffffffff8144c2a0 d clocksource_subsys
ffffffff8144c320 d device_clocksource
ffffffff8144c5a0 d dev_attr_current_clocksource
ffffffff8144c5c0 d dev_attr_available_clocksource
ffffffff8144c600 D clocksource_jiffies
ffffffff8144c6c0 D clock_posix_dynamic
ffffffff8144c720 d alarmtimer_rtc_interface
ffffffff8144c760 d alarmtimer_driver
ffffffff8144c800 d clockevent_devices
ffffffff8144c810 d clockevents_released
ffffffff8144c820 d tick_notifier
ffffffff8144c838 D max_lock_depth
ffffffff8144c840 d dma_chan_busy
ffffffff8144c8c0 D setup_max_cpus
ffffffff8144c8e0 D module_uevent
ffffffff8144c920 D module_mutex
ffffffff8144c940 d module_notify_list
ffffffff8144c970 d modules
ffffffff8144c980 d module_wq
ffffffff8144c998 d module_addr_min
ffffffff8144c9a0 d modinfo_version
ffffffff8144c9e0 d modinfo_srcversion
ffffffff8144ca20 d modinfo_initstate
ffffffff8144ca60 d modinfo_coresize
ffffffff8144caa0 d modinfo_initsize
ffffffff8144cae0 d modinfo_taint
ffffffff8144cb20 d modinfo_refcnt
ffffffff8144cb60 D acct_parm
ffffffff8144cb70 d acct_list
ffffffff8144cb80 d cgroup_mutex
ffffffff8144cba0 d cgroup_rmdir_waitq
ffffffff8144cbb8 d css_set_lock
ffffffff8144cbc0 d roots
ffffffff8144cbe0 d subsys
ffffffff8144cde0 d release_list
ffffffff8144ce00 d release_agent_work
ffffffff8144ce20 d files
ffffffff8144d1e0 d cft_release_agent
ffffffff8144d2a0 d cgroup_root_mutex
ffffffff8144d2c0 d cgroup_backing_dev_info
ffffffff8144d520 d cgroup_fs_type
ffffffff8144d560 D freezer_subsys
ffffffff8144d640 d files
ffffffff8144d700 D cpuset_subsys
ffffffff8144d7e0 d callback_mutex
ffffffff8144d800 d files
ffffffff8144e040 d cft_memory_pressure_enabled
ffffffff8144e100 d rebuild_sched_domains_work
ffffffff8144e120 d top_cpuset
ffffffff8144e1a8 d warnings.26996
ffffffff8144e1c0 d cpuset_fs_type
ffffffff8144e200 d kern_path
ffffffff8144e220 d pid_ns_ctl_table
ffffffff8144e2a0 d pid_caches_mutex
ffffffff8144e2c0 d pid_caches_lh
ffffffff8144e2e0 d stop_cpus_mutex
ffffffff8144e300 D audit_cmd_mutex
ffffffff8144e320 D audit_sig_pid
ffffffff8144e324 D audit_sig_uid
ffffffff8144e328 d audit_failure
ffffffff8144e32c d audit_backlog_limit
ffffffff8144e330 d audit_backlog_wait_time
ffffffff8144e340 d audit_backlog_wait
ffffffff8144e360 d audit_freelist
ffffffff8144e370 d kauditd_wait
ffffffff8144e3a0 D audit_filter_mutex
ffffffff8144e3c0 D audit_filter_list
ffffffff8144e420 d prio_high
ffffffff8144e428 d prio_low
ffffffff8144e440 d audit_rules_list
ffffffff8144e4a0 d prune_list
ffffffff8144e4b0 d tree_list
ffffffff8144e4c0 D nr_irqs
ffffffff8144e4d0 d irq_desc_tree
ffffffff8144e4e0 d sparse_irq_lock
ffffffff8144e500 d count.15636
ffffffff8144e520 d poll_spurious_irq_timer
ffffffff8144e560 D dummy_irq_chip
ffffffff8144e620 D no_irq_chip
ffffffff8144e6e0 d probing_active
ffffffff8144e700 d irq_pm_syscore_ops
ffffffff8144e740 D rcu_bh_state
ffffffff8144e8c0 D rcu_sched_state
ffffffff8144ea40 d blimit
ffffffff8144ea44 d qhimark
ffffffff8144ea48 d qlowmark
ffffffff8144ea60 d rcu_barrier_mutex
ffffffff8144ea80 d rcu_panic_block
ffffffff8144eaa0 d uts_kern_table
ffffffff8144ec20 d uts_root_table
ffffffff8144eca0 d hostname_poll
ffffffff8144ecc0 d domainname_poll
ffffffff8144ece0 d family
ffffffff8144ed60 d taskstats_ops
ffffffff8144eda0 d cgroupstats_ops
ffffffff8144ede0 d pmus_lock
ffffffff8144ee00 d pmu_bus
ffffffff8144ee80 d pmus
ffffffff8144eea0 d perf_swevent
ffffffff8144ef60 d perf_cpu_clock
ffffffff8144f020 d perf_task_clock
ffffffff8144f0d0 d perf_reboot_notifier
ffffffff8144f100 d pmu_dev_attrs
ffffffff8144f140 d callchain_mutex
ffffffff8144f160 d nr_bp_mutex
ffffffff8144f180 d bp_task_head
ffffffff8144f1a0 d perf_breakpoint
ffffffff8144f250 d hw_breakpoint_exceptions_nb
ffffffff8144f280 D sysctl_oom_dump_tasks
ffffffff8144f2a0 d oom_notify_list
ffffffff8144f2e0 d oom_rs.26412
ffffffff8144f300 D hashdist
ffffffff8144f320 D zonelists_mutex
ffffffff8144f340 D numa_zonelist_order
ffffffff8144f350 D min_free_kbytes
ffffffff8144f354 D sysctl_lowmem_reserve_ratio
ffffffff8144f360 d nopage_rs
ffffffff8144f380 d zl_order_mutex.32698
ffffffff8144f3a0 d zonelist_order_name
ffffffff8144f3b8 D dirty_expire_interval
ffffffff8144f3bc D dirty_writeback_interval
ffffffff8144f3c0 D vm_dirty_ratio
ffffffff8144f3c4 D dirty_background_ratio
ffffffff8144f3c8 d ratelimit_pages
ffffffff8144f3e0 D sysctl_min_slab_ratio
ffffffff8144f3e4 D sysctl_min_unmapped_ratio
ffffffff8144f3e8 D vm_swappiness
ffffffff8144f400 d shrinker_rwsem
ffffffff8144f420 d shrinker_list
ffffffff8144f440 d dev_attr_scan_unevictable_pages
ffffffff8144f460 d shmem_swaplist_mutex
ffffffff8144f480 d shmem_swaplist
ffffffff8144f490 d shmem_xattr_handlers
ffffffff8144f4c0 d shmem_fs_type
ffffffff8144f500 D bdi_pending_list
ffffffff8144f510 D bdi_list
ffffffff8144f520 D noop_backing_dev_info
ffffffff8144f780 D default_backing_dev_info
ffffffff8144f9e0 d bdi_dev_attrs
ffffffff8144fa60 d congestion_wqh
ffffffff8144faa0 d pcpu_alloc_mutex
ffffffff8144fac0 d warn_limit.20211
ffffffff8144fae0 d pcpu_reclaim_work
ffffffff8144fb00 D protection_map
ffffffff8144fb80 d mm_all_locks_mutex
ffffffff8144fba0 D vmlist_lock
ffffffff8144fbb0 d vmap_area_list
ffffffff8144fbc0 d vmap_block_tree
ffffffff8144fbe0 D init_mm
ffffffff8144ff40 D swapper_space
ffffffff81450000 d swap_backing_dev_info
ffffffff81450260 d swap_list
ffffffff81450280 d swapon_mutex
ffffffff814502a0 d proc_poll_wait
ffffffff814502c0 d pools_lock
ffffffff814502e0 d dev_attr_pools
ffffffff81450300 d hstate_attr_group
ffffffff81450318 d htlb_alloc_mask
ffffffff81450320 d per_node_hstate_attr_group
ffffffff81450340 d hugetlb_instantiation_mutex.27899
ffffffff81450360 d hstate_attrs
ffffffff814503a0 d per_node_hstate_attrs
ffffffff814503c0 d nr_hugepages_attr
ffffffff814503e0 d nr_overcommit_hugepages_attr
ffffffff81450400 d free_hugepages_attr
ffffffff81450420 d resv_hugepages_attr
ffffffff81450440 d surplus_hugepages_attr
ffffffff81450460 d nr_hugepages_mempolicy_attr
ffffffff81450480 d default_policy
ffffffff814504e0 D sysctl_extfrag_threshold
ffffffff81450500 d dev_attr_compact
ffffffff81450520 D malloc_sizes
ffffffff81450700 d cache_cache
ffffffff814507b0 d slab_early_init
ffffffff814507c0 d initarray_generic
ffffffff814507e0 d cache_chain_mutex
ffffffff81450800 d khugepaged_attr_group
ffffffff81450820 d hugepage_attr_group
ffffffff81450840 d khugepaged_wait
ffffffff81450860 d khugepaged_mutex
ffffffff81450880 d khugepaged_scan
ffffffff814508a0 d khugepaged_attr
ffffffff814508e0 d hugepage_attr
ffffffff81450900 d khugepaged_defrag_attr
ffffffff81450920 d khugepaged_max_ptes_none_attr
ffffffff81450940 d pages_to_scan_attr
ffffffff81450960 d pages_collapsed_attr
ffffffff81450980 d full_scans_attr
ffffffff814509a0 d scan_sleep_millisecs_attr
ffffffff814509c0 d alloc_sleep_millisecs_attr
ffffffff814509e0 d enabled_attr
ffffffff81450a00 d defrag_attr
ffffffff81450a20 d error_states
ffffffff81450bc0 D files_stat
ffffffff81450be0 d files_lglock_lg_cpu_notifier
ffffffff81450c00 D super_blocks
ffffffff81450c20 D directly_mappable_cdev_bdi
ffffffff81450e80 d chrdevs_lock
ffffffff81450ea0 d ktype_cdev_dynamic
ffffffff81450ee0 d ktype_cdev_default
ffffffff81450f08 d warncount.28692
ffffffff81450f20 D core_pattern
ffffffff81450fa0 d binfmt_lock
ffffffff81450fb0 d formats
ffffffff81450fc0 d call_count
ffffffff81450fe0 D pipe_min_size
ffffffff81450fe4 D pipe_max_size
ffffffff81451000 d pipe_fs_type
ffffffff81451040 D dentry_stat
ffffffff81451060 d _rs.30393
ffffffff81451080 D init_files
ffffffff81451340 D sysctl_nr_open_max
ffffffff81451344 D sysctl_nr_open_min
ffffffff81451348 d file_systems_lock
ffffffff81451350 d vfsmount_lock_lg_cpu_notifier
ffffffff81451368 d mnt_group_start
ffffffff81451370 d cursor_name.24820
ffffffff81451380 D init_fs
ffffffff814513c0 d bio_dirty_work
ffffffff814513e0 d bio_slab_lock
ffffffff81451400 d bd_type
ffffffff81451440 d all_bdevs
ffffffff81451460 d destroy_list
ffffffff81451470 d destroy_waitq
ffffffff814514a0 d dnotify_mark_mutex
ffffffff814514c0 d dnotify_fsnotify_ops
ffffffff81451500 D inotify_table
ffffffff81451600 D epoll_table
ffffffff81451680 d epmutex
ffffffff814516a0 d visited_list
ffffffff814516b0 d tfile_check_list
ffffffff814516c0 d long_max
ffffffff814516e0 d anon_inode_fs_type
ffffffff81451720 d cancel_list
ffffffff81451740 D aio_max_nr
ffffffff81451750 d fput_head
ffffffff81451760 d fput_work
ffffffff81451780 D lease_break_time
ffffffff81451784 D leases_enable
ffffffff81451790 d blocked_list
ffffffff814517a0 d file_lock_list
ffffffff814517b0 D compat_log
ffffffff814517c0 d ioctl_pointer
ffffffff81451f80 d script_format
ffffffff81451fc0 d elf_format
ffffffff81452000 d compat_elf_format
ffffffff81452040 D proc_root
ffffffff814520e0 d proc_fs_type
ffffffff81452120 d ns_entries
ffffffff81452140 d sysctl_table_root
ffffffff814521c0 d root_table
ffffffff81452240 d proc_net_ns_ops
ffffffff81452280 d kclist_lock
ffffffff81452290 d kclist_head
ffffffff814522a0 d kcore_need_update
ffffffff814522c0 d sysfs_backing_dev_info
ffffffff81452520 d sysfs_workq_mutex
ffffffff81452540 d sysfs_workq
ffffffff81452560 D sysfs_mutex
ffffffff81452580 D sysfs_root
ffffffff81452600 d sysfs_fs_type
ffffffff81452640 d sysfs_bin_lock
ffffffff81452660 d configfs_backing_dev_info
ffffffff814528c0 D configfs_rename_sem
ffffffff814528e0 D configfs_symlink_mutex
ffffffff81452900 d configfs_root_group
ffffffff81452980 d configfs_fs_type
ffffffff814529c0 d configfs_root
ffffffff81452a20 d ___modver_attr
ffffffff81452a80 d allocated_ptys_lock
ffffffff81452aa0 d pty_limit
ffffffff81452aa4 d pty_reserve
ffffffff81452ac0 d devpts_fs_type
ffffffff81452b00 d pty_root_table
ffffffff81452b80 d pty_kern_table
ffffffff81452c00 d pty_table
ffffffff81452d00 d pty_limit_max
ffffffff81452d20 d ramfs_backing_dev_info
ffffffff81452f80 d ramfs_fs_type
ffffffff81452fc0 d rootfs_fs_type
ffffffff81453000 d hugetlbfs_backing_dev_info
ffffffff81453260 d hugetlbfs_fs_type
ffffffff814532a0 d tables
ffffffff814532c0 d default_table
ffffffff81453300 d allpstore
ffffffff81453320 d pstore_fs_type
ffffffff81453360 d kmsg_bytes
ffffffff81453380 d pstore_dumper
ffffffff814533a0 d pstore_timer
ffffffff814533e0 d pstore_work
ffffffff81453400 D nr_ipc_ns
ffffffff81453420 D init_ipc_ns
ffffffff814535a0 d ipcns_chain
ffffffff814535e0 d ipc_root_table
ffffffff81453660 d ipc_kern_table
ffffffff814538e0 d one
ffffffff81453900 d mqueue_fs_type
ffffffff81453940 d mq_sysctl_root
ffffffff814539c0 d mq_sysctl_dir
ffffffff81453a40 d mq_sysctls
ffffffff81453b40 d msg_max_limit_min
ffffffff81453b44 d msg_max_limit_max
ffffffff81453b48 d msg_maxsize_limit_min
ffffffff81453b4c d msg_maxsize_limit_max
ffffffff81453b60 D dac_mmap_min_addr
ffffffff81453b80 D devices_subsys
ffffffff81453c60 d dev_cgroup_files
ffffffff81453ea0 d devcgroup_mutex
ffffffff81453ec0 D crypto_chain
ffffffff81453f00 D crypto_alg_sem
ffffffff81453f20 D crypto_alg_list
ffffffff81453f30 d crypto_template_list
ffffffff81453f40 d chainiv_tmpl
ffffffff81453fc0 d eseqiv_tmpl
ffffffff81454040 d cryptomgr_notifier
ffffffff81454060 d aes_alg
ffffffff81454180 d alg
ffffffff81454300 d crypto_default_rng_lock
ffffffff81454320 d krng_alg
ffffffff81454440 d elv_list
ffffffff81454460 d elv_ktype
ffffffff814544a0 D blk_queue_ktype
ffffffff814544e0 d default_attrs
ffffffff814545a0 d queue_requests_entry
ffffffff814545c0 d queue_ra_entry
ffffffff814545e0 d queue_max_hw_sectors_entry
ffffffff81454600 d queue_max_sectors_entry
ffffffff81454620 d queue_max_segments_entry
ffffffff81454640 d queue_max_integrity_segments_entry
ffffffff81454660 d queue_max_segment_size_entry
ffffffff81454680 d queue_iosched_entry
ffffffff814546a0 d queue_hw_sector_size_entry
ffffffff814546c0 d queue_logical_block_size_entry
ffffffff814546e0 d queue_physical_block_size_entry
ffffffff81454700 d queue_io_min_entry
ffffffff81454720 d queue_io_opt_entry
ffffffff81454740 d queue_discard_granularity_entry
ffffffff81454760 d queue_discard_max_entry
ffffffff81454780 d queue_discard_zeroes_data_entry
ffffffff814547a0 d queue_nonrot_entry
ffffffff814547c0 d queue_nomerges_entry
ffffffff814547e0 d queue_rq_affinity_entry
ffffffff81454800 d queue_iostats_entry
ffffffff81454820 d queue_random_entry
ffffffff81454840 D blk_iopoll_enabled
ffffffff81454860 D block_class
ffffffff814548e0 d block_class_lock
ffffffff81454900 d ext_devt_mutex
ffffffff81454920 d disk_events_attrs
ffffffff81454940 d disk_events_mutex
ffffffff81454960 d disk_events
ffffffff81454980 d disk_type
ffffffff814549b0 d disk_attr_groups
ffffffff814549c0 d disk_attr_group
ffffffff814549e0 d disk_attrs
ffffffff81454a40 d dev_attr_range
ffffffff81454a60 d dev_attr_ext_range
ffffffff81454a80 d dev_attr_removable
ffffffff81454aa0 d dev_attr_ro
ffffffff81454ac0 d dev_attr_size
ffffffff81454ae0 d dev_attr_alignment_offset
ffffffff81454b00 d dev_attr_discard_alignment
ffffffff81454b20 d dev_attr_capability
ffffffff81454b40 d dev_attr_stat
ffffffff81454b60 d dev_attr_inflight
ffffffff81454b80 d _rs.28678
ffffffff81454ba0 D part_type
ffffffff81454be0 d dev_attr_whole_disk
ffffffff81454c00 d part_attr_groups
ffffffff81454c10 d part_attr_group
ffffffff81454c40 d part_attrs
ffffffff81454ca0 d dev_attr_partition
ffffffff81454cc0 d dev_attr_start
ffffffff81454ce0 d dev_attr_size
ffffffff81454d00 d dev_attr_ro
ffffffff81454d20 d dev_attr_alignment_offset
ffffffff81454d40 d dev_attr_discard_alignment
ffffffff81454d60 d dev_attr_stat
ffffffff81454d80 d dev_attr_inflight
ffffffff81454da0 D warn_no_part
ffffffff81454dc0 d bsg_mutex
ffffffff81454de0 D blkio_files
ffffffff81455620 D blkio_subsys
ffffffff81455700 D blkio_root_cgroup
ffffffff81455740 d blkio_list
ffffffff81455760 d elevator_noop
ffffffff81455860 d blkio_policy_cfq
ffffffff814558c0 d iosched_cfq
ffffffff814559b8 d cfq_slice_async
ffffffff814559bc d cfq_slice_idle
ffffffff814559c0 d cfq_group_idle
ffffffff814559e0 d cfq_attrs
ffffffff81455b80 d module_bug_list
ffffffff81455ba0 d dynamic_kobj_ktype
ffffffff81455be0 d kset_ktype
ffffffff81455c20 d uevent_sock_mutex
ffffffff81455c40 d uevent_sock_list
ffffffff81455c60 d uevent_net_ops
ffffffff81455c98 d delay_fn
ffffffff81455ca0 D inat_avx_tables
ffffffff814560a0 D inat_group_tables
ffffffff814564a0 D inat_escape_tables
ffffffff81456520 D debug_locks
ffffffff81456524 d count.17762
ffffffff81456540 d ___modver_attr
ffffffff814565a0 d percpu_counters_lock
ffffffff814565c0 d percpu_counters
ffffffff81456600 d pci_cfg_wait
ffffffff81456620 D pci_root_buses
ffffffff81456630 d pci_host_bridges
ffffffff81456640 d pcibus_class
ffffffff814566c0 D bus_attr_resource_alignment
ffffffff814566e0 D pcibios_max_latency
ffffffff814566e8 D pci_hotplug_mem_size
ffffffff814566f0 D pci_hotplug_io_size
ffffffff814566f8 D pci_cardbus_mem_size
ffffffff81456700 D pci_cardbus_io_size
ffffffff81456708 D pci_domains_supported
ffffffff81456720 D pci_power_names
ffffffff81456760 d pci_pme_list_mutex
ffffffff81456780 d pci_pme_list
ffffffff814567a0 d pci_pme_work
ffffffff81456800 D pci_bus_type
ffffffff81456880 d driver_attr_new_id
ffffffff814568a0 d driver_attr_remove_id
ffffffff814568c0 d pci_compat_driver
ffffffff814569c0 D pci_bus_sem
ffffffff814569e0 D vga_attr
ffffffff81456a00 D pcibus_dev_attrs
ffffffff81456a80 D pci_dev_attrs
ffffffff81456ce0 D pci_bus_attrs
ffffffff81456d20 d pci_remove_rescan_mutex
ffffffff81456d40 d pci_config_attr
ffffffff81456d80 d pcie_config_attr
ffffffff81456dc0 d reset_attr
ffffffff81456de0 d pci_slot_ktype
ffffffff81456e20 d pci_slot_default_attrs
ffffffff81456e40 d pci_slot_attr_address
ffffffff81456e60 d pci_slot_attr_max_speed
ffffffff81456e80 d pci_slot_attr_cur_speed
ffffffff81456ea0 d via_vlink_dev_lo
ffffffff81456ea4 d via_vlink_dev_hi
ffffffff81456ec0 d aspm_lock
ffffffff81456ee0 d link_list
ffffffff81456ef0 d aspm_support_enabled
ffffffff81456f00 d __param_ops_policy
ffffffff81456f20 D pcie_ports_auto
ffffffff81456f40 d pcie_portdriver
ffffffff81457040 d pcie_portdrv_err_handler
ffffffff81457080 D pcie_port_bus_type
ffffffff81457100 d aer_correctable_error_string
ffffffff81457180 d aer_uncorrectable_error_string
ffffffff81457240 d aer_recover_ring
ffffffff814572e0 d aer_recover_work
ffffffff81457300 d aerdriver
ffffffff814573c0 d aer_error_handlers
ffffffff81457400 d pcie_pme_driver
ffffffff814574c0 d ioapic_driver
ffffffff814575c0 D msi_irq_default_attrs
ffffffff814575d0 d pci_msi_enable
ffffffff814575e0 d msi_irq_ktype
ffffffff81457620 d mode_attribute
ffffffff81457640 d acpi_pci_bus
ffffffff81457680 d acpi_pci_platform_pm
ffffffff814576c0 d pci_acpi_pm_notify_mtx
ffffffff814576e0 d acpi_attr_group
ffffffff81457700 d smbios_attr_group
ffffffff81457720 d acpi_attributes
ffffffff81457740 d smbios_attributes
ffffffff81457760 d acpi_attr_label
ffffffff81457780 d acpi_attr_index
ffffffff814577a0 d smbios_attr_label
ffffffff814577c0 d smbios_attr_instance
ffffffff814577e0 d fb_notifier_list
ffffffff81457820 d registration_lock
ffffffff81457840 d device_attrs
ffffffff814579c0 d vga_font_is_default
ffffffff814579e0 d ega_console_resource.22542
ffffffff81457a20 d mda1_console_resource.22543
ffffffff81457a60 d mda2_console_resource.22544
ffffffff81457aa0 d ega_console_resource.22546
ffffffff81457ae0 d vga_console_resource.22547
ffffffff81457b20 d cga_console_resource.22554
ffffffff81457b60 d fbcon_softback_size
ffffffff81457b64 d last_fb_vc
ffffffff81457b68 d fbcon_is_default
ffffffff81457b70 d fbcon_event_notifier
ffffffff81457ba0 d device_attrs
ffffffff81457c00 d info_idx
ffffffff81457c04 d logo_shown
ffffffff81457c20 d palette_cmap
ffffffff81457c48 d primary_device
ffffffff81457c60 d vesafb_driver
ffffffff81457d00 d vesafb_ops
ffffffff81457dc0 d acpi_ioremap_lock
ffffffff81457de0 d acpi_ioremaps
ffffffff81457df0 d acpi_enforce_resources
ffffffff81457e00 d nvs_region_list
ffffffff81457e10 d tts_notifier
ffffffff81457e30 d __param_ops_bfs
ffffffff81457e50 d __param_ops_gts
ffffffff81457e70 D acpi_bus_event_queue
ffffffff81457e90 D acpi_bus_event_list
ffffffff81457ea0 d acpi_bus_notify_list
ffffffff81457ed0 d sb_uuid_str
ffffffff81457f00 d bus_type_sem
ffffffff81457f20 d bus_type_list
ffffffff81457f30 D acpi_bus_type
ffffffff81457fb0 D acpi_wakeup_device_list
ffffffff81457fc0 D acpi_device_lock
ffffffff81457fe0 d acpi_bus_id_list
ffffffff81457ff0 d dev_attr_path
ffffffff81458010 d dev_attr_hid
ffffffff81458030 d dev_attr_modalias
ffffffff81458050 d dev_attr_eject
ffffffff81458070 d acpi_ec_driver
ffffffff814581e0 d acpi_pci_roots
ffffffff814581f0 d osc_lock
ffffffff81458210 d acpi_pci_root_driver
ffffffff81458380 d pci_osc_uuid_str
ffffffff814583b0 d acpi_link_list
ffffffff814583c0 d acpi_irq_penalty
ffffffff814587c0 d acpi_link_lock
ffffffff814587e0 d acpi_irq_balance
ffffffff814587f0 d irqrouter_syscore_ops
ffffffff81458820 d acpi_pci_link_driver
ffffffff81458990 d acpi_prt_list
ffffffff814589a0 d acpi_power_driver
ffffffff81458b10 d acpi_chain_head
ffffffff81458b40 d acpi_event_genl_family
ffffffff81458bb0 d acpi_event_mcgrp
ffffffff81458be0 d interrupt_stats_attr_group
ffffffff81458c00 d acpi_table_attr_list
ffffffff81458c10 d __param_ops_acpica_version
ffffffff81458c30 d pxm_to_node_map
ffffffff81459030 d node_to_pxm_map
ffffffff81459430 d cm_sbs_mutex
ffffffff81459450 d acpi_sleep_dispatch
ffffffff81459480 D acpi_rs_convert_ext_address64
ffffffff814594a0 D acpi_rs_convert_address64
ffffffff814594c0 D acpi_rs_convert_address32
ffffffff814594e0 D acpi_rs_convert_address16
ffffffff81459500 d acpi_rs_convert_general_flags
ffffffff81459520 d acpi_rs_convert_mem_flags
ffffffff81459540 d acpi_rs_convert_io_flags
ffffffff81459550 D acpi_gbl_convert_resource_serial_bus_dispatch
ffffffff81459570 D acpi_gbl_get_resource_dispatch
ffffffff81459670 D acpi_gbl_set_resource_dispatch
ffffffff81459710 D acpi_rs_set_start_dpf
ffffffff81459740 D acpi_rs_get_start_dpf
ffffffff81459758 D acpi_rs_convert_end_tag
ffffffff81459760 D acpi_rs_convert_end_dpf
ffffffff81459770 D acpi_rs_convert_generic_reg
ffffffff81459780 D acpi_rs_convert_fixed_io
ffffffff81459790 D acpi_rs_convert_io
ffffffff814597b0 D acpi_rs_convert_fixed_dma
ffffffff814597c0 D acpi_rs_convert_dma
ffffffff814597e0 D acpi_rs_convert_ext_irq
ffffffff81459810 D acpi_rs_set_irq
ffffffff81459850 D acpi_rs_get_irq
ffffffff81459870 D acpi_rs_set_vendor
ffffffff81459890 D acpi_rs_get_vendor_large
ffffffff814598a0 D acpi_rs_get_vendor_small
ffffffff814598b0 D acpi_rs_convert_fixed_memory32
ffffffff814598c0 D acpi_rs_convert_memory32
ffffffff814598d0 D acpi_rs_convert_memory24
ffffffff814598e0 D acpi_rs_convert_uart_serial_bus
ffffffff81459940 D acpi_rs_convert_spi_serial_bus
ffffffff81459990 D acpi_rs_convert_i2c_serial_bus
ffffffff814599d0 D acpi_rs_convert_gpio
ffffffff81459a20 D acpi_gbl_region_types
ffffffff81459a70 D acpi_gbl_fixed_event_info
ffffffff81459a90 D acpi_gbl_bit_register_info
ffffffff81459ae0 D acpi_gbl_highest_dstate_names
ffffffff81459b00 D acpi_gbl_lowest_dstate_names
ffffffff81459b30 D acpi_gbl_sleep_state_names
ffffffff81459b60 D acpi_gbl_shutdown
ffffffff81459b64 D acpi_dbg_level
ffffffff81459b68 D acpi_gbl_use_default_register_widths
ffffffff81459b69 D acpi_gbl_create_osi_method
ffffffff81459b70 D acpi_gbl_exception_names_ctrl
ffffffff81459be0 D acpi_gbl_exception_names_aml
ffffffff81459cf0 D acpi_gbl_exception_names_tbl
ffffffff81459d20 D acpi_gbl_exception_names_pgm
ffffffff81459d70 D acpi_gbl_exception_names_env
ffffffff81459e60 d acpi_default_supported_interfaces
ffffffff81459f80 d acpi_hed_notify_list
ffffffff81459fb0 d acpi_hed_driver
ffffffff8145a120 d acpi_hed_ids
ffffffff8145a160 D apei_resources_all
ffffffff8145a180 d whea_uuid_str.26117
ffffffff8145a1c0 d apei_estatus_section_flag_strs
ffffffff8145a200 d cper_proc_error_type_strs
ffffffff8145a220 d cper_proc_flag_strs
ffffffff8145a240 d erst_ins_type
ffffffff8145a380 d erst_info
ffffffff8145a400 d erst_record_id_cache
ffffffff8145a440 d ghes_list_mutex
ffffffff8145a460 d ghes_sci
ffffffff8145a470 d ghes_notifier_sci
ffffffff8145a490 d ghes_nmi
ffffffff8145a4a0 d ghes_platform_driver
ffffffff8145a540 d ratelimit_corrected.30608
ffffffff8145a560 d ratelimit_uncorrected.30610
ffffffff8145a580 D pnp_global
ffffffff8145a590 d pnp_protocols
ffffffff8145a5a0 D pnp_cards
ffffffff8145a5c0 d dev_attr_name
ffffffff8145a5e0 d dev_attr_card_id
ffffffff8145a600 d pnp_card_drivers
ffffffff8145a620 D pnp_bus_type
ffffffff8145a6a0 d pnp_reserve_irq
ffffffff8145a6e0 d pnp_reserve_dma
ffffffff8145a700 d pnp_reserve_io
ffffffff8145a740 d pnp_reserve_mem
ffffffff8145a780 D pnp_res_mutex
ffffffff8145a7a0 D pnp_interface_attrs
ffffffff8145a820 d pnp_fixups
ffffffff8145a940 d system_pnp_driver
ffffffff8145aa00 D pnpacpi_protocol
ffffffff8145acf0 d hp_ccsr_uuid
ffffffff8145ad20 d gnttab_v2_ops
ffffffff8145ad60 d gnttab_v1_ops
ffffffff8145ada0 d xen_irq_list_head
ffffffff8145adc0 d irq_mapping_update_lock
ffffffff8145ade0 d xenstore_notifier.23406
ffffffff8145ae00 d shutdown_watch
ffffffff8145ae20 d shutting_down
ffffffff8145ae40 d handlers.23386
ffffffff8145aea0 d balloon_worker
ffffffff8145af00 d balloon_mutex
ffffffff8145af20 d ballooned_pages
ffffffff8145af40 d xenbus_valloc_pages
ffffffff8145af60 d xb_waitq
ffffffff8145af80 d probe_work
ffffffff8145afa0 d watches
ffffffff8145afc0 d xenwatch_mutex
ffffffff8145afe0 d watch_events
ffffffff8145aff0 d watch_events_waitq
ffffffff8145b020 D xenbus_dev_attrs
ffffffff8145b0a0 d xenstore_chain
ffffffff8145b0e0 d xenbus_backend
ffffffff8145b190 d xenstore_notifier.19507
ffffffff8145b1c0 d be_watch
ffffffff8145b1e0 d xenbus_dev
ffffffff8145b240 d xenbus_backend_dev
ffffffff8145b2a0 d xenbus_frontend
ffffffff8145b350 d xenstore_notifier.27378
ffffffff8145b380 d fe_watch
ffffffff8145b3a0 d backend_state_wq
ffffffff8145b3c0 d xsn_cpu.13283
ffffffff8145b3e0 d cpu_watch.13276
ffffffff8145b400 d xenstore_notifier
ffffffff8145b420 d target_watch
ffffffff8145b440 d balloon_subsys
ffffffff8145b4c0 d dev_attr_target_kb
ffffffff8145b4e0 d dev_attr_target
ffffffff8145b500 d dev_attr_schedule_delay
ffffffff8145b540 d dev_attr_max_schedule_delay
ffffffff8145b580 d dev_attr_retry_count
ffffffff8145b5c0 d dev_attr_max_retry_count
ffffffff8145b600 d balloon_info_attrs
ffffffff8145b620 d dev_attr_current_kb
ffffffff8145b640 d dev_attr_low_kb
ffffffff8145b660 d dev_attr_high_kb
ffffffff8145b680 d selfballoon_worker
ffffffff8145b6e0 d selfballoon_attrs
ffffffff8145b720 d dev_attr_selfballooning
ffffffff8145b740 d dev_attr_selfballoon_interval
ffffffff8145b760 d dev_attr_selfballoon_downhysteresis
ffffffff8145b780 d dev_attr_selfballoon_uphysteresis
ffffffff8145b7a0 d dev_attr_selfballoon_min_usable_mb
ffffffff8145b7c0 d uuid_attr
ffffffff8145b800 d type_attr
ffffffff8145b840 d hyp_sysfs_kobj_type
ffffffff8145b880 d xen_properties_attrs
ffffffff8145b8c0 d xen_compile_attrs
ffffffff8145b8e0 d version_attrs
ffffffff8145b900 d capabilities_attr
ffffffff8145b940 d changeset_attr
ffffffff8145b980 d virtual_start_attr
ffffffff8145b9c0 d pagesize_attr
ffffffff8145ba00 d features_attr
ffffffff8145ba40 d compiler_attr
ffffffff8145ba80 d compiled_by_attr
ffffffff8145bac0 d compile_date_attr
ffffffff8145bb00 d major_attr
ffffffff8145bb40 d minor_attr
ffffffff8145bb80 d extra_attr
ffffffff8145bbc0 d platform_driver
ffffffff8145bcb0 d device_nb
ffffffff8145bce0 D tty_mutex
ffffffff8145bd00 D tty_drivers
ffffffff8145bd20 D tty_std_termios
ffffffff8145bd60 d _rs.27181
ffffffff8145bd80 d dev_attr_active
ffffffff8145bda0 D tty_ldisc_N_TTY
ffffffff8145be40 d tty_ldisc_wait
ffffffff8145be60 d tty_ldisc_idle
ffffffff8145be80 d _rs.25996
ffffffff8145bea0 d big_tty_mutex
ffffffff8145bec0 d vt_events
ffffffff8145bed0 d vt_event_waitqueue
ffffffff8145bf00 d sel_start
ffffffff8145bf20 d inwordLut
ffffffff8145bf40 D keyboard_tasklet
ffffffff8145bf80 d kd_mksound_timer
ffffffff8145bfc0 d kbd_handler
ffffffff8145c038 d ledstate
ffffffff8145c03c d brl_timeout
ffffffff8145c040 d brl_nbchords
ffffffff8145c048 d kbd
ffffffff8145c050 d npadch
ffffffff8145c054 d buf.26576
ffffffff8145c060 d translations
ffffffff8145c860 D dfont_unitable
ffffffff8145cac0 D dfont_unicount
ffffffff8145cbc0 D default_blu
ffffffff8145cc00 D default_grn
ffffffff8145cc40 D default_red
ffffffff8145cc80 D color_table
ffffffff8145cc90 D want_console
ffffffff8145cc94 D global_cursor_default
ffffffff8145cc98 D default_utf8
ffffffff8145cca0 d console_work
ffffffff8145ccc0 d cur_default
ffffffff8145ccc4 d blankinterval
ffffffff8145ccc8 d old_offset.26372
ffffffff8145cccc d default_underline_color
ffffffff8145ccd0 d default_italic_color
ffffffff8145cce0 d console_timer
ffffffff8145cd20 d vt_console_driver
ffffffff8145cd80 d device_attrs
ffffffff8145cdc0 d dev_attr_active
ffffffff8145cde0 D accent_table_size
ffffffff8145ce00 D accent_table
ffffffff8145da00 D func_table
ffffffff8145e200 D funcbufsize
ffffffff8145e208 D funcbufptr
ffffffff8145e220 D func_buf
ffffffff8145e2bc D keymap_count
ffffffff8145e2c0 D key_maps
ffffffff8145eac0 D ctrl_alt_map
ffffffff8145ecc0 D alt_map
ffffffff8145eec0 D shift_ctrl_map
ffffffff8145f0c0 D ctrl_map
ffffffff8145f2c0 D altgr_map
ffffffff8145f4c0 D shift_map
ffffffff8145f6c0 D plain_map
ffffffff8145f8c0 d vtermnos
ffffffff8145f900 d hvc_structs
ffffffff8145f920 d hvc_console
ffffffff8145f978 d last_hvc
ffffffff8145f97c d timeout
ffffffff8145f980 D xenboot_console
ffffffff8145f9e0 d xenconsoles
ffffffff8145fa00 d dom0_hvc_ops
ffffffff8145fa40 d domU_hvc_ops
ffffffff8145fa80 d xencons_driver
ffffffff8145fb40 d zero_bdi
ffffffff8145fda0 D random_table
ffffffff8145ff60 d input_pool
ffffffff8145ffa0 d random_read_wakeup_thresh
ffffffff8145ffb0 d random_read_wait
ffffffff8145ffe0 d nonblocking_pool
ffffffff81460020 d random_write_wakeup_thresh
ffffffff81460030 d random_write_wait
ffffffff81460060 d blocking_pool
ffffffff814600a0 d sysctl_poolsize
ffffffff814600a4 d min_read_thresh
ffffffff814600a8 d max_read_thresh
ffffffff814600ac d max_write_thresh
ffffffff814600c0 d poolinfo_table
ffffffff81460100 d misc_mtx
ffffffff81460120 d misc_list
ffffffff81460140 d nvram_dev
ffffffff814601a0 d nvram_mutex
ffffffff814601c0 d vga_list
ffffffff814601d0 d vga_wait_queue
ffffffff81460200 d vga_arb_device
ffffffff81460250 d pci_notifier
ffffffff81460270 d vga_user_list
ffffffff81460280 d device_ktype
ffffffff814602c0 d uevent_attr
ffffffff814602e0 d devt_attr
ffffffff81460300 d gdp_mutex.19636
ffffffff81460320 d class_dir_ktype
ffffffff81460360 d driver_ktype
ffffffff814603a0 d driver_attr_uevent
ffffffff814603c0 d driver_attr_unbind
ffffffff814603e0 d driver_attr_bind
ffffffff81460400 d bus_ktype
ffffffff81460440 d bus_attr_uevent
ffffffff81460460 d bus_attr_drivers_probe
ffffffff81460480 d bus_attr_drivers_autoprobe
ffffffff814604a0 d deferred_probe_mutex
ffffffff814604c0 d deferred_probe_pending_list
ffffffff814604d0 d deferred_probe_active_list
ffffffff814604e0 d deferred_probe_work
ffffffff81460500 d probe_waitqueue
ffffffff81460520 d syscore_ops_lock
ffffffff81460540 d syscore_ops_list
ffffffff81460560 d class_ktype
ffffffff814605a0 D platform_bus_type
ffffffff81460620 D platform_bus
ffffffff814608a0 d platform_dev_attrs
ffffffff814608e0 D cpu_subsys
ffffffff81460960 d dev_attr_online
ffffffff81460980 d cpu_root_attr_groups
ffffffff81460990 d cpu_root_attr_group
ffffffff814609c0 d cpu_root_attrs
ffffffff81460a20 d dev_attr_probe
ffffffff81460a40 d dev_attr_release
ffffffff81460a60 d cpu_attrs
ffffffff81460ae0 d dev_attr_kernel_max
ffffffff81460b00 d dev_attr_offline
ffffffff81460b20 d dev_attr_modalias
ffffffff81460b40 d attribute_container_mutex
ffffffff81460b60 d attribute_container_list
ffffffff81460b80 d topology_attr_group
ffffffff81460ba0 d default_attrs
ffffffff81460be0 d dev_attr_physical_package_id
ffffffff81460c00 d dev_attr_core_id
ffffffff81460c20 d dev_attr_thread_siblings
ffffffff81460c40 d dev_attr_thread_siblings_list
ffffffff81460c60 d dev_attr_core_siblings
ffffffff81460c80 d dev_attr_core_siblings_list
ffffffff81460ca0 d mount_dev
ffffffff81460cc0 d dev_fs_type
ffffffff81460d00 d setup_done
ffffffff81460d20 d pm_attr_group
ffffffff81460d40 d pm_runtime_attr_group
ffffffff81460d60 d pm_wakeup_attr_group
ffffffff81460d80 d pm_qos_attr_group
ffffffff81460da0 d runtime_attrs
ffffffff81460de0 d wakeup_attrs
ffffffff81460e30 d pm_qos_attrs
ffffffff81460e40 d dev_attr_runtime_status
ffffffff81460e60 d dev_attr_control
ffffffff81460e80 d dev_attr_runtime_suspended_time
ffffffff81460ea0 d dev_attr_runtime_active_time
ffffffff81460ec0 d dev_attr_autosuspend_delay_ms
ffffffff81460ee0 d dev_attr_wakeup
ffffffff81460f00 d dev_attr_wakeup_count
ffffffff81460f20 d dev_attr_wakeup_active_count
ffffffff81460f40 d dev_attr_wakeup_hit_count
ffffffff81460f60 d dev_attr_wakeup_active
ffffffff81460f80 d dev_attr_wakeup_total_time_ms
ffffffff81460fa0 d dev_attr_wakeup_max_time_ms
ffffffff81460fc0 d dev_attr_wakeup_last_time_ms
ffffffff81460fe0 d dev_attr_pm_qos_resume_latency_us
ffffffff81461000 d dev_pm_qos_mtx
ffffffff81461020 d dev_pm_notifiers
ffffffff81461060 D dpm_noirq_list
ffffffff81461070 D dpm_late_early_list
ffffffff81461080 D dpm_suspended_list
ffffffff81461090 D dpm_prepared_list
ffffffff814610a0 D dpm_list
ffffffff814610c0 d dpm_list_mtx
ffffffff814610e0 d wakeup_sources
ffffffff81461100 d loading_timeout
ffffffff81461120 d firmware_class
ffffffff814611a0 d firmware_attr_data
ffffffff814611e0 d dev_attr_loading
ffffffff81461200 d fw_lock
ffffffff81461220 d firmware_class_attrs
ffffffff81461280 d node_subsys
ffffffff81461300 d dev_attr_cpumap
ffffffff81461320 d dev_attr_cpulist
ffffffff81461340 d dev_attr_meminfo
ffffffff81461360 d dev_attr_numastat
ffffffff81461380 d dev_attr_distance
ffffffff814613a0 d dev_attr_vmstat
ffffffff814613c0 d cpu_root_attr_groups
ffffffff814613d0 d memory_root_attr_group
ffffffff81461400 d node_state_attrs
ffffffff81461440 d node_state_attr
ffffffff814614e0 d host_cmd_pool_mutex
ffffffff81461500 d scsi_cmd_dma_pool
ffffffff81461540 d scsi_cmd_pool
ffffffff81461580 d scsi_host_type
ffffffff814615c0 d shost_class
ffffffff81461640 d stu_command.29866
ffffffff81461660 d scsi_sg_pools
ffffffff81461700 d scanning_hosts
ffffffff81461710 d max_scsi_luns
ffffffff81461720 d scsi_target_type
ffffffff81461750 d scsi_scan_type
ffffffff81461758 d max_scsi_report_luns
ffffffff8146175c d scsi_inq_timeout
ffffffff81461760 D scsi_bus_type
ffffffff814617e0 D scsi_sysfs_shost_attr_groups
ffffffff814617f0 D scsi_shost_attr_group
ffffffff81461820 D dev_attr_hstate
ffffffff81461840 d sdev_class
ffffffff814618c0 d scsi_dev_type
ffffffff81461900 d sdev_attr_queue_depth_rw
ffffffff81461920 d sdev_attr_queue_ramp_up_period
ffffffff81461940 d dev_attr_queue_depth
ffffffff81461960 d sdev_attr_queue_type_rw
ffffffff81461980 d dev_attr_queue_type
ffffffff814619a0 d scsi_sysfs_shost_attrs
ffffffff81461a20 d scsi_sdev_attr_groups
ffffffff81461a40 d dev_attr_unique_id
ffffffff81461a60 d dev_attr_host_busy
ffffffff81461a80 d dev_attr_cmd_per_lun
ffffffff81461aa0 d dev_attr_can_queue
ffffffff81461ac0 d dev_attr_sg_tablesize
ffffffff81461ae0 d dev_attr_sg_prot_tablesize
ffffffff81461b00 d dev_attr_unchecked_isa_dma
ffffffff81461b20 d dev_attr_proc_name
ffffffff81461b40 d dev_attr_scan
ffffffff81461b60 d dev_attr_supported_mode
ffffffff81461b80 d dev_attr_active_mode
ffffffff81461ba0 d dev_attr_prot_capabilities
ffffffff81461bc0 d dev_attr_prot_guard_type
ffffffff81461be0 d dev_attr_host_reset
ffffffff81461c00 d scsi_sdev_attr_group
ffffffff81461c20 d scsi_sdev_attrs
ffffffff81461cc0 d dev_attr_device_blocked
ffffffff81461ce0 d dev_attr_type
ffffffff81461d00 d dev_attr_scsi_level
ffffffff81461d20 d dev_attr_vendor
ffffffff81461d40 d dev_attr_model
ffffffff81461d60 d dev_attr_rev
ffffffff81461d80 d dev_attr_rescan
ffffffff81461da0 d dev_attr_delete
ffffffff81461dc0 d dev_attr_state
ffffffff81461de0 d dev_attr_timeout
ffffffff81461e00 d dev_attr_iocounterbits
ffffffff81461e20 d dev_attr_iorequest_cnt
ffffffff81461e40 d dev_attr_iodone_cnt
ffffffff81461e60 d dev_attr_ioerr_cnt
ffffffff81461e80 d dev_attr_modalias
ffffffff81461ea0 d dev_attr_evt_media_change
ffffffff81461ec0 d scsi_dev_info_list
ffffffff81461ee0 d scsi_root_table
ffffffff81461f60 d scsi_dir_table
ffffffff81461fe0 d scsi_table
ffffffff81462060 d global_host_template_mutex
ffffffff81462080 d sd_template
ffffffff81462120 d sd_disk_class
ffffffff814621a0 d sd_ref_mutex
ffffffff814621c0 d sd_disk_attrs
ffffffff81462320 D ata_dummy_port_ops
ffffffff81462500 D ata_port_type
ffffffff81462530 D atapi_passthru16
ffffffff81462534 d atapi_enabled
ffffffff81462538 d libata_dma_mask
ffffffff81462540 d ratelimit
ffffffff81462560 d ___modver_attr
ffffffff814625c0 D ata_common_sdev_attrs
ffffffff814625e0 D dev_attr_sw_activity
ffffffff81462600 D dev_attr_em_message_type
ffffffff81462620 D dev_attr_em_message
ffffffff81462640 D dev_attr_unload_heads
ffffffff81462660 D dev_attr_link_power_management_policy
ffffffff81462680 d __compound_literal.0
ffffffff81462683 d __compound_literal.1
ffffffff81462686 d __compound_literal.2
ffffffff81462689 d __compound_literal.3
ffffffff8146268b d __compound_literal.4
ffffffff8146268d d __compound_literal.5
ffffffff814626a0 d ata_port_class
ffffffff81462740 d ata_link_class
ffffffff814627e0 d ata_dev_class
ffffffff81462878 D ata_acpi_gtf_filter
ffffffff81462880 D loopback_net_ops
ffffffff814628c0 d serio_event_list
ffffffff814628e0 d serio_event_work
ffffffff81462900 d serio_mutex
ffffffff81462920 d serio_list
ffffffff81462940 d serio_bus
ffffffff814629c0 d serio_device_attr_groups
ffffffff814629e0 d serio_device_attrs
ffffffff81462a80 d serio_driver_attrs
ffffffff81462ae0 d serio_id_attr_group
ffffffff81462b00 d serio_device_id_attrs
ffffffff81462b40 d dev_attr_type
ffffffff81462b60 d dev_attr_proto
ffffffff81462b80 d dev_attr_id
ffffffff81462ba0 d dev_attr_extra
ffffffff81462bc0 d i8042_mutex
ffffffff81462be0 d i8042_command_reg
ffffffff81462be4 d i8042_data_reg
ffffffff81462c00 d i8042_driver
ffffffff81462ca0 d i8042_pnp_kbd_driver
ffffffff81462d60 d i8042_pnp_aux_driver
ffffffff81462e20 d pnp_kbd_devids
ffffffff81462f20 d pnp_aux_devids
ffffffff81462fe0 D input_class
ffffffff81463060 d input_dev_type
ffffffff814630a0 d input_mutex
ffffffff814630c0 d input_dev_list
ffffffff814630d0 d input_devices_poll_wait
ffffffff814630f0 d input_handler_list
ffffffff81463100 d input_dev_attr_groups
ffffffff81463120 d input_dev_attr_group
ffffffff81463140 d input_dev_id_attr_group
ffffffff81463160 d input_dev_caps_attr_group
ffffffff81463180 d input_dev_attrs
ffffffff814631c0 d input_dev_id_attrs
ffffffff81463200 d input_dev_caps_attrs
ffffffff81463260 d dev_attr_name
ffffffff81463280 d dev_attr_phys
ffffffff814632a0 d dev_attr_uniq
ffffffff814632c0 d dev_attr_modalias
ffffffff814632e0 d dev_attr_properties
ffffffff81463300 d dev_attr_bustype
ffffffff81463320 d dev_attr_vendor
ffffffff81463340 d dev_attr_product
ffffffff81463360 d dev_attr_version
ffffffff81463380 d dev_attr_ev
ffffffff814633a0 d dev_attr_key
ffffffff814633c0 d dev_attr_rel
ffffffff814633e0 d dev_attr_abs
ffffffff81463400 d dev_attr_msc
ffffffff81463420 d dev_attr_led
ffffffff81463440 d dev_attr_snd
ffffffff81463460 d dev_attr_ff
ffffffff81463480 d dev_attr_sw
ffffffff814634a0 d mousedev_handler
ffffffff81463518 d xres
ffffffff8146351c d yres
ffffffff81463520 d mousedev_table_mutex
ffffffff81463540 d tap_time
ffffffff81463550 d mousedev_mix_list
ffffffff81463560 d atkbd_drv
ffffffff81463618 d atkbd_set
ffffffff81463620 d atkbd_attribute_group
ffffffff81463638 d atkbd_softraw
ffffffff81463640 d atkbd_serio_ids
ffffffff81463660 d atkbd_attributes
ffffffff814636a0 d atkbd_dell_laptop_forced_release_keys
ffffffff814636c8 d atkbd_hp_forced_release_keys
ffffffff814636d0 d atkbd_volume_forced_release_keys
ffffffff814636e0 d atkbd_samsung_forced_release_keys
ffffffff81463710 d atkbd_amilo_pi3525_forced_release_keys
ffffffff81463740 d atkbd_amilo_xi3650_forced_release_keys
ffffffff81463770 d atkdb_soltech_ta12_forced_release_keys
ffffffff81463780 d atkbd_attr_extra
ffffffff814637a0 d atkbd_attr_force_release
ffffffff814637c0 d atkbd_attr_scroll
ffffffff814637e0 d atkbd_attr_set
ffffffff81463800 d atkbd_attr_softrepeat
ffffffff81463820 d atkbd_attr_softraw
ffffffff81463840 d atkbd_attr_err_count
ffffffff81463860 d psmouse_max_proto
ffffffff81463864 d psmouse_resolution
ffffffff81463868 d psmouse_rate
ffffffff8146386c d psmouse_smartscroll
ffffffff81463870 d psmouse_resetafter
ffffffff81463880 d psmouse_drv
ffffffff81463940 d psmouse_mutex
ffffffff81463960 d psmouse_attribute_group
ffffffff81463980 d param_ops_proto_abbrev
ffffffff81463998 d psmouse_serio_ids
ffffffff814639c0 d psmouse_attributes
ffffffff81463a00 d psmouse_attr_protocol
ffffffff81463a40 d psmouse_attr_rate
ffffffff81463a80 d psmouse_attr_resolution
ffffffff81463ac0 d psmouse_attr_resetafter
ffffffff81463b00 d psmouse_attr_resync_time
ffffffff81463b40 d psmouse_attr_disable_gesture
ffffffff81463b80 d param.20733
ffffffff81463ba0 d psmouse_attr_smartscroll
ffffffff81463be0 d trackpoint_attr_group
ffffffff81463c00 d trackpoint_attrs
ffffffff81463c80 d psmouse_attr_sensitivity
ffffffff81463cc0 d psmouse_attr_speed
ffffffff81463d00 d psmouse_attr_inertia
ffffffff81463d40 d psmouse_attr_reach
ffffffff81463d80 d psmouse_attr_draghys
ffffffff81463dc0 d psmouse_attr_mindrag
ffffffff81463e00 d psmouse_attr_thresh
ffffffff81463e40 d psmouse_attr_upthresh
ffffffff81463e80 d psmouse_attr_ztime
ffffffff81463ec0 d psmouse_attr_jenks
ffffffff81463f00 d psmouse_attr_press_to_select
ffffffff81463f40 d psmouse_attr_skipback
ffffffff81463f80 d psmouse_attr_ext_dev
ffffffff81463fc0 d trackpoint_attr_sensitivity
ffffffff81463fd0 d trackpoint_attr_speed
ffffffff81463fe0 d trackpoint_attr_inertia
ffffffff81463ff0 d trackpoint_attr_reach
ffffffff81464000 d trackpoint_attr_draghys
ffffffff81464010 d trackpoint_attr_mindrag
ffffffff81464020 d trackpoint_attr_thresh
ffffffff81464030 d trackpoint_attr_upthresh
ffffffff81464040 d trackpoint_attr_ztime
ffffffff81464050 d trackpoint_attr_jenks
ffffffff81464060 d trackpoint_attr_press_to_select
ffffffff81464070 d trackpoint_attr_skipback
ffffffff81464080 d trackpoint_attr_ext_dev
ffffffff814640a0 D rtc_hctosys_ret
ffffffff814640c0 d dev_attr_wakealarm
ffffffff814640e0 d rtc_attrs
ffffffff814641c0 d nvram
ffffffff81464200 d cmos_pnp_driver
ffffffff814642c0 d cmos_platform_driver
ffffffff81464360 d watchdog_miscdev
ffffffff814643c0 D edac_subsys
ffffffff81464440 D edac_op_state
ffffffff81464460 d cpufreq_policy_notifier_list
ffffffff814644a0 d cpufreq_governor_mutex
ffffffff814644c0 d cpufreq_governor_list
ffffffff814644e0 d cpufreq_syscore_ops
ffffffff81464520 d cpufreq_interface
ffffffff81464560 d ktype_cpufreq
ffffffff814645a0 d cpuinfo_cur_freq
ffffffff814645c0 d scaling_cur_freq
ffffffff814645e0 d bios_limit
ffffffff81464600 d default_attrs
ffffffff81464660 d cpuinfo_min_freq
ffffffff81464680 d cpuinfo_max_freq
ffffffff814646a0 d cpuinfo_transition_latency
ffffffff814646c0 d scaling_min_freq
ffffffff814646e0 d scaling_max_freq
ffffffff81464700 d affected_cpus
ffffffff81464720 d related_cpus
ffffffff81464740 d scaling_governor
ffffffff81464760 d scaling_driver
ffffffff81464780 d scaling_available_governors
ffffffff814647a0 d scaling_setspeed
ffffffff814647c0 d notifier_policy_block
ffffffff814647e0 d notifier_trans_block
ffffffff81464800 d stats_attr_group
ffffffff81464820 d default_attrs
ffffffff81464840 d _attr_total_trans
ffffffff81464860 d _attr_time_in_state
ffffffff81464880 D cpufreq_gov_performance
ffffffff814648e0 D cpufreq_freq_attr_scaling_available_freqs
ffffffff81464900 D cpuidle_detected_devices
ffffffff81464920 D cpuidle_lock
ffffffff81464940 d cpuidle_latency_notifier
ffffffff81464960 D cpuidle_governors
ffffffff81464980 d cpuidle_attr_group
ffffffff814649a0 d cpuidle_switch_attrs
ffffffff814649c0 d ktype_state_cpuidle
ffffffff81464a00 d ktype_cpuidle
ffffffff81464a30 d cpuidle_default_attrs
ffffffff81464a60 d dev_attr_available_governors
ffffffff81464a80 d dev_attr_current_driver
ffffffff81464aa0 d dev_attr_current_governor
ffffffff81464ac0 d cpuidle_state_default_attrs
ffffffff81464b00 d dev_attr_current_governor_ro
ffffffff81464b20 d attr_name
ffffffff81464b40 d attr_desc
ffffffff81464b60 d attr_latency
ffffffff81464b80 d attr_power
ffffffff81464ba0 d attr_usage
ffffffff81464bc0 d attr_time
ffffffff81464be0 d attr_disable
ffffffff81464c00 d ladder_governor
ffffffff81464c60 d dmi_empty_string
ffffffff81464c70 d dmi_devices
ffffffff81464c80 d memmap_ktype
ffffffff81464cb0 d map_entries
ffffffff81464cc0 d def_attrs
ffffffff81464ce0 d memmap_start_attr
ffffffff81464d00 d memmap_end_attr
ffffffff81464d20 d memmap_type_attr
ffffffff81464d40 d clocksource_acpi_pm
ffffffff81464e00 D i8253_clockevent
ffffffff81464ec0 d hid_bus_type
ffffffff81464f40 d dev_bin_attr_report_desc
ffffffff81464f80 d driver_attr_new_id
ffffffff81464fa0 d iommu_device_nb
ffffffff81464fc0 d dev_attr_iommu_group
ffffffff81464fe0 D amd_iommu_max_glx_val
ffffffff81464ff0 d dev_data_list
ffffffff81465000 d _rs.22180
ffffffff81465020 d device_nb
ffffffff81465040 d iommu_pd_list
ffffffff81465060 d amd_iommu_dma_ops
ffffffff814650d8 d amd_iommu_devtable_lock
ffffffff814650e0 d amd_iommu_ops
ffffffff81465140 D amd_iommu_pd_list
ffffffff81465150 D amd_iommu_list
ffffffff81465160 D amd_iommu_unity_map
ffffffff81465180 d amd_iommu_syscore_ops
ffffffff814651c0 d pcibios_fwaddrmappings
ffffffff814651d0 D pci_mmcfg_list
ffffffff814651e0 d dev_domain_list
ffffffff814651f0 d quirk_pcie_aspm_ops
ffffffff81465200 d pci_use_crs
ffffffff81465220 D pcibios_enable_irq
ffffffff81465228 D pcibios_irq_mask
ffffffff81465240 d pirq_penalty
ffffffff81465280 D pci_root_ops
ffffffff81465290 D pcibios_last_bus
ffffffff81465294 D pci_probe
ffffffff814652a0 d mp_bus_to_node
ffffffff814656c0 d br_ioctl_mutex
ffffffff814656e0 d vlan_ioctl_mutex
ffffffff81465700 d dlci_ioctl_mutex
ffffffff81465720 d sock_fs_type
ffffffff81465760 D net_prio_subsys_id
ffffffff81465764 D net_cls_subsys_id
ffffffff81465780 d net_inuse_ops
ffffffff814657c0 d proto_net_ops
ffffffff81465800 d proto_list
ffffffff81465820 d proto_list_mutex
ffffffff81465840 D sysctl_max_syn_backlog
ffffffff81465844 d est_lock
ffffffff81465860 D net_namespace_list
ffffffff81465880 d net_mutex
ffffffff814658a0 d max_gen_ptrs
ffffffff814658b0 d pernet_list
ffffffff814658c0 d cleanup_list
ffffffff814658e0 d net_cleanup_work
ffffffff81465900 d first_device
ffffffff81465920 D net_core_path
ffffffff81465940 d net_core_table
ffffffff81465cc0 d sysctl_core_ops
ffffffff81465d00 d netns_core_table
ffffffff81465d80 d sock_flow_mutex.37170
ffffffff81465da0 D dev_base_lock
ffffffff81465da4 d dev_boot_phase
ffffffff81465db0 d net_todo_list
ffffffff81465dc0 d dev_proc_ops
ffffffff81465e00 d netdev_net_ops
ffffffff81465e40 d default_device_ops
ffffffff81465e80 d dev_mc_net_ops
ffffffff81465ec0 d dst_garbage
ffffffff81465ee0 d dst_gc_work
ffffffff81465f40 d dst_gc_mutex
ffffffff81465f60 d dst_dev_notifier
ffffffff81465f78 d neigh_tbl_lock
ffffffff81465f80 d rtnl_mutex
ffffffff81465fa0 d link_ops
ffffffff81465fb0 d rtnl_af_ops
ffffffff81465fc0 d rtnetlink_net_ops
ffffffff81466000 d rtnetlink_dev_notifier
ffffffff81466020 D net_ratelimit_state
ffffffff81466040 d lweventlist
ffffffff81466060 d linkwatch_work
ffffffff814660c0 d _rs.37363
ffffffff814660e0 d sock_diag_table_mutex
ffffffff81466100 d sock_diag_mutex
ffffffff81466120 d flow_cache_gc_list
ffffffff81466140 d flow_cache_gc_work
ffffffff81466160 d flow_flush_sem.20568
ffffffff81466180 d flow_cache_flush_work
ffffffff814661a0 D net_ns_type_operations
ffffffff814661e0 d rx_queue_ktype
ffffffff81466220 d netdev_queue_ktype
ffffffff81466250 d dql_group
ffffffff81466280 d xps_map_mutex
ffffffff814662a0 d net_class
ffffffff81466320 d netstat_group
ffffffff81466340 d rx_queue_default_attrs
ffffffff81466360 d netdev_queue_default_attrs
ffffffff81466380 d dql_attrs
ffffffff814663c0 d net_class_attributes
ffffffff81466640 d netstat_attrs
ffffffff81466700 d rps_cpus_attribute
ffffffff81466720 d rps_dev_flow_table_cnt_attribute
ffffffff81466740 d queue_trans_timeout
ffffffff81466760 d xps_cpus_attribute
ffffffff81466780 d bql_limit_attribute
ffffffff814667a0 d bql_limit_max_attribute
ffffffff814667c0 d bql_limit_min_attribute
ffffffff814667e0 d bql_hold_time_attribute
ffffffff81466800 d bql_inflight_attribute
ffffffff81466820 d dev_attr_rx_packets
ffffffff81466840 d dev_attr_tx_packets
ffffffff81466860 d dev_attr_rx_bytes
ffffffff81466880 d dev_attr_tx_bytes
ffffffff814668a0 d dev_attr_rx_errors
ffffffff814668c0 d dev_attr_tx_errors
ffffffff814668e0 d dev_attr_rx_dropped
ffffffff81466900 d dev_attr_tx_dropped
ffffffff81466920 d dev_attr_multicast
ffffffff81466940 d dev_attr_collisions
ffffffff81466960 d dev_attr_rx_length_errors
ffffffff81466980 d dev_attr_rx_over_errors
ffffffff814669a0 d dev_attr_rx_crc_errors
ffffffff814669c0 d dev_attr_rx_frame_errors
ffffffff814669e0 d dev_attr_rx_fifo_errors
ffffffff81466a00 d dev_attr_rx_missed_errors
ffffffff81466a20 d dev_attr_tx_aborted_errors
ffffffff81466a40 d dev_attr_tx_carrier_errors
ffffffff81466a60 d dev_attr_tx_fifo_errors
ffffffff81466a80 d dev_attr_tx_heartbeat_errors
ffffffff81466aa0 d dev_attr_tx_window_errors
ffffffff81466ac0 d dev_attr_rx_compressed
ffffffff81466ae0 d dev_attr_tx_compressed
ffffffff81466b00 D noop_qdisc
ffffffff81466be0 d noqueue_qdisc
ffffffff81466cc0 d noop_netdev_queue
ffffffff81466e00 d noqueue_netdev_queue
ffffffff81466f40 d nl_table_lock
ffffffff81466f50 d nl_table_wait
ffffffff81466f80 d netlink_proto
ffffffff81467100 d netlink_net_ops
ffffffff81467138 d rover.37442
ffffffff81467140 d genl_mutex
ffffffff81467160 d notify_grp
ffffffff81467190 d mc_groups_longs
ffffffff81467198 d mc_groups
ffffffff814671a0 d mc_group_start
ffffffff814671c0 d genl_ctrl
ffffffff81467240 d genl_ctrl_ops
ffffffff81467280 d genl_pernet_ops
ffffffff814672b8 d id_gen_idx.36549
ffffffff814672c0 d ipv4_dst_ops
ffffffff81467380 d expire.45427
ffffffff814673c0 d ipv4_dst_blackhole_ops
ffffffff81467480 d ip_rt_proc_ops
ffffffff814674c0 d sysctl_route_ops
ffffffff81467500 d rt_genid_ops
ffffffff81467540 d ipv4_route_flush_table
ffffffff814675c0 d ipv4_route_path
ffffffff814675e0 d ipv4_path
ffffffff81467600 d ipv4_skeleton
ffffffff814676c0 d ipv4_route_table
ffffffff81467ac0 d gc_list
ffffffff81467ad0 d v4_peers
ffffffff81467af0 d v6_peers
ffffffff81467b20 d ip4_frags_ctl_table
ffffffff81467be0 d ip4_frags_ops
ffffffff81467c20 d ip4_frags_ns_ctl_table
ffffffff81467d20 D tcp_prot
ffffffff81467ea0 d tcp4_net_ops
ffffffff81467ee0 d tcp4_seq_afinfo
ffffffff81467f20 d tcp_sk_ops
ffffffff81467f60 d tcp_timewait_sock_ops
ffffffff81467fa0 D tcp_death_row
ffffffff814681c0 D tcp_init_congestion_ops
ffffffff81468240 D tcp_reno
ffffffff814682c0 d tcp_cong_list
ffffffff814682e0 D raw_prot
ffffffff81468460 d raw_v4_hashinfo
ffffffff81468c80 d raw_net_ops
ffffffff81468cc0 D udp_prot
ffffffff81468e40 d udp4_net_ops
ffffffff81468e80 d udp4_seq_afinfo
ffffffff81468ec0 D udplite_prot
ffffffff81469040 d udplite4_protosw
ffffffff81469080 d udplite4_net_ops
ffffffff814690c0 d udplite4_seq_afinfo
ffffffff81469100 D arp_tbl
ffffffff814692c0 d arp_net_ops
ffffffff81469300 d arp_netdev_notifier
ffffffff81469320 d icmp_sk_ops
ffffffff81469360 d inetaddr_chain
ffffffff814693a0 d devinet_ops
ffffffff814693e0 d ip_netdev_notifier
ffffffff81469400 d inet_af_ops
ffffffff81469440 d ipv4_devconf
ffffffff814694c0 d ipv4_devconf_dflt
ffffffff81469540 d ctl_forward_entry
ffffffff814695c0 d net_ipv4_path
ffffffff814695e0 d devinet_sysctl
ffffffff81469c80 d inetsw_array
ffffffff81469d40 d ipv4_mib_ops
ffffffff81469d80 d igmp_net_ops
ffffffff81469dc0 d fib_net_ops
ffffffff81469e00 d fib_netdev_notifier
ffffffff81469e20 d fib_inetaddr_notifier
ffffffff81469e40 D ping_prot
ffffffff81469fc0 d ping_net_ops
ffffffff8146a000 D net_ipv4_ctl_path
ffffffff8146a020 d ipv4_table
ffffffff8146afe0 d ipv4_sysctl_ops
ffffffff8146b018 d ip_local_port_range_min
ffffffff8146b020 d ip_local_port_range_max
ffffffff8146b040 d ipv4_net_table
ffffffff8146b2c0 d ip_ping_group_range_max
ffffffff8146b2c8 d ip_ttl_min
ffffffff8146b2cc d ip_ttl_max
ffffffff8146b2d0 d tcp_retr1_max
ffffffff8146b2d4 d tcp_adv_win_scale_min
ffffffff8146b2d8 d tcp_adv_win_scale_max
ffffffff8146b2e0 d ip_proc_ops
ffffffff8146b320 d ___modver_attr
ffffffff8146b380 d xfrm4_policy_afinfo
ffffffff8146b400 d xfrm4_dst_ops
ffffffff8146b4c0 d xfrm4_policy_table
ffffffff8146b540 d xfrm4_state_afinfo
ffffffff8146bde0 D xfrm_cfg_mutex
ffffffff8146be00 d xfrm_policy_lock
ffffffff8146be04 d xfrm_policy_afinfo_lock
ffffffff8146be20 d xfrm_net_ops
ffffffff8146be60 d xfrm_dev_notifier
ffffffff8146be80 d hash_resize_mutex
ffffffff8146bea0 d xfrm_state_afinfo_lock
ffffffff8146bea4 d xfrm_km_lock
ffffffff8146beb0 d xfrm_km_list
ffffffff8146bec0 d hash_resize_mutex
ffffffff8146bee0 d aalg_list
ffffffff8146bfe0 d ealg_list
ffffffff8146c120 d calg_list
ffffffff8146c180 d aead_list
ffffffff8146c260 d xfrm_table
ffffffff8146c3a0 d xfrm_replay_esn
ffffffff8146c3c0 d xfrm_replay_bmp
ffffffff8146c3e0 d xfrm_replay_legacy
ffffffff8146c400 d unix_proto
ffffffff8146c580 d unix_net_ops
ffffffff8146c5b8 d ordernum.37185
ffffffff8146c5c0 d gc_inflight_list
ffffffff8146c5d0 d unix_gc_wait
ffffffff8146c5f0 d gc_candidates
ffffffff8146c600 d unix_table
ffffffff8146c680 d unix_path
ffffffff8146c6a0 d sysctl_pernet_ops
ffffffff8146c6e0 d net_sysctl_root
ffffffff8146c760 d net_sysctl_ro_root
ffffffff8146c7d0 d klist_remove_waiters
ffffffff8146c7e0 D initial_code
ffffffff8146c7f0 D initial_gs
ffffffff8146c800 D stack_start
ffffffff8146c810 d next_func.22060
ffffffff8146c818 D x86_bios_cpu_apicid_early_ptr
ffffffff8146c820 D x86_cpu_to_apicid_early_ptr
ffffffff8146c828 D x86_cpu_to_node_map_early_ptr
ffffffff8146c830 d cpufreq_cpu_notifier
ffffffff8146c850 d cpufreq_stat_cpu_notifier
ffffffff8146c880 D pci_dfl_cache_line_size
ffffffff8146c8a0 d platform_pci_tbl
ffffffff8146c8e0 d acpi_pm_good
ffffffff8146c900 d xen_hvm_cpu_notifier
ffffffff8146c920 D x86_cpuinit
ffffffff8146c940 d cpu_vsyscall_notifier_nb.28100
ffffffff8146c960 d fx_scratch
ffffffff8146cb60 d cacheinfo_cpu_notifier
ffffffff8146cb80 D cpu_caps_set
ffffffff8146cbc0 D cpu_caps_cleared
ffffffff8146cc00 d cpu_devs
ffffffff8146cc48 d this_cpu
ffffffff8146cc50 d disable_smep
ffffffff8146cc54 d show_msr
ffffffff8146cc60 d x86_pmu_notifier_nb.27386
ffffffff8146cc80 D threshold_cpu_callback
ffffffff8146cc90 d mce_cpu_notifier
ffffffff8146ccb0 d perf_ibs_cpu_notifier_nb.27426
ffffffff8146ccc8 d stop_count
ffffffff8146cccc d start_count
ffffffff8146ccd0 d nr_warps
ffffffff8146ccd8 d max_warp
ffffffff8146cce0 d last_tsc
ffffffff8146cce8 d sync_lock
ffffffff8146ccec D disabled_cpus
ffffffff8146ccf0 d multi_checked
ffffffff8146ccf4 d multi
ffffffff8146cd00 d hpet_cpuhp_notify_nb.20794
ffffffff8146cd20 d fam10h_pci_mmconf_base
ffffffff8146cd40 d pci_probes
ffffffff8146cd60 d disable_nx
ffffffff8146cd70 d tlb_cpuhp_notify_nb.25463
ffffffff8146cda0 D __apicid_to_node
ffffffff8147cda0 d console_cpu_notify_nb.31395
ffffffff8147cdc0 d cpu_nfb
ffffffff8147cde0 d remote_softirq_cpu_notifier
ffffffff8147ce00 d timers_nb
ffffffff8147ce18 d tvec_base_done.29688
ffffffff8147ce20 d workqueue_cpu_callback_nb.25605
ffffffff8147ce40 d hrtimers_nb
ffffffff8147ce60 d migration_notifier
ffffffff8147ce80 d sched_cpu_active_nb.40969
ffffffff8147cea0 d sched_cpu_inactive_nb.40970
ffffffff8147cec0 d cpuset_cpu_active_nb.41558
ffffffff8147cee0 d cpuset_cpu_inactive_nb.41559
ffffffff8147cf00 d update_runtime_nb.41560
ffffffff8147cf20 d hotplug_hrtick_nb.39235
ffffffff8147cf40 d hotplug_cfd_notifier
ffffffff8147cf60 d cpu_stop_cpu_notifier
ffffffff8147cf80 d rcu_cpu_notify_nb.19876
ffffffff8147cfa0 d perf_cpu_notify_nb.30752
ffffffff8147cfc0 d page_alloc_cpu_notify_nb.33359
ffffffff8147cfe0 d ratelimit_nb
ffffffff8147d000 d cpu_callback_nb.30854
ffffffff8147d020 d vmstat_notifier
ffffffff8147d040 d cpucache_notifier
ffffffff8147d060 d buffer_cpu_notify_nb.33834
ffffffff8147d080 d blk_cpu_notifier
ffffffff8147d0a0 d blk_iopoll_cpu_notifier
ffffffff8147d0c0 d radix_tree_callback_nb.17465
ffffffff8147d0e0 d percpu_counter_hotcpu_callback_nb.14839
ffffffff8147d100 d topology_cpu_callback_nb.18296
ffffffff8147d120 d amd_cpu_notifier
ffffffff8147d140 d dev_cpu_callback_nb.48509
ffffffff8147d158 d __warned.28545
ffffffff8147d159 d __warned.27602
ffffffff8147d15a d __warned.21674
ffffffff8147d15b d __warned.22496
ffffffff8147d15c d __warned.28031
ffffffff8147d15d d __warned.28050
ffffffff8147d15e d __warned.25252
ffffffff8147d15f d __warned.25312
ffffffff8147d160 d __warned.25577
ffffffff8147d161 d __warned.28421
ffffffff8147d162 d __warned.18784
ffffffff8147d163 d __warned.5709
ffffffff8147d164 d __warned.27207
ffffffff8147d165 d __warned.27212
ffffffff8147d166 d __warned.27217
ffffffff8147d167 d __warned.26982
ffffffff8147d168 d __warned.27289
ffffffff8147d169 d __warned.24665
ffffffff8147d16a d __warned.24681
ffffffff8147d16b d __warned.24690
ffffffff8147d16c d __warned.25752
ffffffff8147d16d d __warned.25689
ffffffff8147d16e d __warned.25694
ffffffff8147d16f d __warned.25727
ffffffff8147d170 d __warned.24684
ffffffff8147d171 d __warned.25147
ffffffff8147d172 d __warned.25153
ffffffff8147d173 d __warned.25123
ffffffff8147d174 d __warned.25128
ffffffff8147d175 d __warned.25557
ffffffff8147d176 d __warned.25754
ffffffff8147d177 d __warned.23675
ffffffff8147d178 d __warned.15425
ffffffff8147d179 d __warned.15416
ffffffff8147d17a d __warned.32191
ffffffff8147d17b d __warned.32196
ffffffff8147d17c d __warned.10482
ffffffff8147d17d d __warned.27687
ffffffff8147d17e d __warned.27826
ffffffff8147d17f d __warned.24244
ffffffff8147d180 d __warned.24250
ffffffff8147d181 d __warned.24270
ffffffff8147d182 d __warned.28055
ffffffff8147d183 d __warned.28063
ffffffff8147d184 d __warned.24159
ffffffff8147d185 d __warned.24212
ffffffff8147d186 d __warned.24277
ffffffff8147d187 d __warned.24496
ffffffff8147d188 d __warned.24501
ffffffff8147d189 d __warned.24524
ffffffff8147d18a d __warned.29085
ffffffff8147d18b d __warned.29504
ffffffff8147d18c d __warned.32834
ffffffff8147d18d d __warned.32634
ffffffff8147d18e d __warned.32639
ffffffff8147d18f d __warned.33240
ffffffff8147d190 d __warned.33245
ffffffff8147d191 d __warned.33226
ffffffff8147d192 d __warned.33633
ffffffff8147d193 d __warned.33644
ffffffff8147d194 d __warned.24697
ffffffff8147d195 d __warned.24602
ffffffff8147d196 d __warned.24614
ffffffff8147d197 d __warned.24809
ffffffff8147d198 d __warned.28155
ffffffff8147d199 d __warned.28192
ffffffff8147d19a d __warned.28204
ffffffff8147d19b d __warned.28280
ffffffff8147d19c d __warned.28191
ffffffff8147d19d d __warned.28107
ffffffff8147d19e d __warned.28381
ffffffff8147d19f d __warned.41494
ffffffff8147d1a0 d __warned.41285
ffffffff8147d1a1 d __warned.39189
ffffffff8147d1a2 d __warned.16556
ffffffff8147d1a3 d __warned.16584
ffffffff8147d1a4 d __warned.18109
ffffffff8147d1a5 d __warned.14353
ffffffff8147d1a6 d __warned.28525
ffffffff8147d1a7 d __warned.27978
ffffffff8147d1a8 d __warned.19960
ffffffff8147d1a9 d __warned.12918
ffffffff8147d1aa d __warned.12956
ffffffff8147d1ab d __warned.13011
ffffffff8147d1ac d __warned.13094
ffffffff8147d1ad d __warned.13165
ffffffff8147d1ae d __warned.31178
ffffffff8147d1af d __warned.32113
ffffffff8147d1b0 d __warned.27151
ffffffff8147d1b1 d __warned.18249
ffffffff8147d1b2 d __warned.18373
ffffffff8147d1b3 d __warned.17568
ffffffff8147d1b4 d __warned.18910
ffffffff8147d1b5 d __warned.15538
ffffffff8147d1b6 d __warned.19233
ffffffff8147d1b7 d __warned.19924
ffffffff8147d1b8 d __warned.19282
ffffffff8147d1b9 d __warned.19254
ffffffff8147d1ba d __warned.18880
ffffffff8147d1bb d __warned.18850
ffffffff8147d1bc d __warned.18863
ffffffff8147d1bd d __warned.18910
ffffffff8147d1be d __warned.18968
ffffffff8147d1bf d __warned.18929
ffffffff8147d1c0 d __warned.18951
ffffffff8147d1c1 d __warned.18998
ffffffff8147d1c2 d __warned.19020
ffffffff8147d1c3 d __warned.19032
ffffffff8147d1c4 d __warned.19543
ffffffff8147d1c5 d __warned.19627
ffffffff8147d1c6 d __warned.19510
ffffffff8147d1c7 d __warned.19995
ffffffff8147d1c8 d __warned.19377
ffffffff8147d1c9 d __warned.19761
ffffffff8147d1ca d __warned.19766
ffffffff8147d1cb d __warned.12712
ffffffff8147d1cc d __warned.28380
ffffffff8147d1cd d __warned.28906
ffffffff8147d1ce d __warned.29010
ffffffff8147d1cf d __warned.29376
ffffffff8147d1d0 d __warned.28496
ffffffff8147d1d1 d __warned.30367
ffffffff8147d1d2 d __warned.29239
ffffffff8147d1d3 d __warned.29029
ffffffff8147d1d4 d __warned.28980
ffffffff8147d1d5 d __warned.28152
ffffffff8147d1d6 d __warned.30391
ffffffff8147d1d7 d __warned.30410
ffffffff8147d1d8 d __warned.30478
ffffffff8147d1d9 d __warned.30517
ffffffff8147d1da d __warned.30543
ffffffff8147d1db d __warned.29919
ffffffff8147d1dc d __warned.25205
ffffffff8147d1dd d __warned.32464
ffffffff8147d1de d __warned.32228
ffffffff8147d1df d __warned.32128
ffffffff8147d1e0 d __warned.32283
ffffffff8147d1e1 d __warned.29300
ffffffff8147d1e2 d __warned.19898
ffffffff8147d1e3 d __warned.25781
ffffffff8147d1e4 d __warned.14901
ffffffff8147d1e5 d __warned.14933
ffffffff8147d1e6 d __warned.14959
ffffffff8147d1e7 d __warned.14976
ffffffff8147d1e8 d __warned.19688
ffffffff8147d1e9 d __warned.19693
ffffffff8147d1ea d __warned.28552
ffffffff8147d1eb d __warned.30110
ffffffff8147d1ec d __warned.30099
ffffffff8147d1ed d __warned.30104
ffffffff8147d1ee d __warned.24936
ffffffff8147d1ef d __warned.31398
ffffffff8147d1f0 d __warned.31541
ffffffff8147d1f1 d __warned.29120
ffffffff8147d1f2 d __warned.29140
ffffffff8147d1f3 d __warned.29226
ffffffff8147d1f4 d __warned.29277
ffffffff8147d1f5 d __warned.29288
ffffffff8147d1f6 d __warned.29293
ffffffff8147d1f7 d __warned.27705
ffffffff8147d1f8 d __warned.27712
ffffffff8147d1f9 d __warned.27717
ffffffff8147d1fa d __warned.35210
ffffffff8147d1fb d __warned.29735
ffffffff8147d1fc d __warned.30357
ffffffff8147d1fd d __warned.27631
ffffffff8147d1fe d __warned.27636
ffffffff8147d1ff d __warned.27179
ffffffff8147d200 d __warned.27259
ffffffff8147d201 d __warned.27661
ffffffff8147d202 d __warned.28298
ffffffff8147d203 d __warned.28164
ffffffff8147d204 d __warned.28185
ffffffff8147d205 d __warned.7007
ffffffff8147d206 d __warned.7026
ffffffff8147d207 d __warned.38975
ffffffff8147d208 d __warned.28879
ffffffff8147d209 d __warned.28753
ffffffff8147d20a d __warned.28737
ffffffff8147d20b d __warned.21311
ffffffff8147d20c d __warned.24136
ffffffff8147d20d d __warned.25775
ffffffff8147d20e d __warned.15305
ffffffff8147d20f d __warned.15324
ffffffff8147d210 d __warned.15355
ffffffff8147d211 d __warned.15373
ffffffff8147d212 d __warned.29054
ffffffff8147d213 d __warned.37879
ffffffff8147d214 d __warned.37927
ffffffff8147d215 d __warned.37937
ffffffff8147d216 d __warned.37942
ffffffff8147d217 d __warned.37961
ffffffff8147d218 d __warned.37988
ffffffff8147d219 d __warned.37993
ffffffff8147d21a d __warned.37998
ffffffff8147d21b d __warned.38003
ffffffff8147d21c d __warned.35270
ffffffff8147d21d d __warned.35278
ffffffff8147d21e d __warned.32881
ffffffff8147d21f d __warned.32896
ffffffff8147d220 d __warned.33111
ffffffff8147d221 d __warned.33116
ffffffff8147d222 d __warned.33129
ffffffff8147d223 d __warned.33023
ffffffff8147d224 d __warned.33008
ffffffff8147d225 d __warned.33202
ffffffff8147d226 d __warned.33606
ffffffff8147d227 d __warned.33613
ffffffff8147d228 d __warned.33186
ffffffff8147d229 d __warned.29180
ffffffff8147d22a d __warned.29169
ffffffff8147d22b d __warned.29099
ffffffff8147d22c d __warned.29087
ffffffff8147d22d d __warned.29122
ffffffff8147d22e d __warned.29110
ffffffff8147d22f d __warned.29053
ffffffff8147d230 d __warned.29040
ffffffff8147d231 d __warned.44566
ffffffff8147d232 d __warned.46927
ffffffff8147d233 d __warned.47518
ffffffff8147d234 d __warned.34960
ffffffff8147d235 d __warned.35008
ffffffff8147d236 d __warned.39918
ffffffff8147d237 d __warned.38792
ffffffff8147d240 D __start___jump_table
ffffffff8147d240 D __start___verbose
ffffffff8147d240 D __stop___jump_table
ffffffff8147d240 D __stop___verbose
ffffffff8147d240 D system_state
ffffffff8147d244 D early_boot_irqs_disabled
ffffffff8147d280 D xen_have_vector_callback
ffffffff8147d284 d cpuid_leaf1_edx_mask
ffffffff8147d288 d cpuid_leaf1_ecx_mask
ffffffff8147d28c d cpuid_leaf1_ecx_set_mask
ffffffff8147d290 d cpuid_leaf5_ecx_val
ffffffff8147d294 d cpuid_leaf5_edx_val
ffffffff8147d2c0 d xen_clocksource
ffffffff8147d380 D xen_max_p2m_pfn
ffffffff8147d388 D xen_swiotlb
ffffffff8147d3c0 D boot_cpu_data
ffffffff8147d480 D iommu_group_mf
ffffffff8147d484 D iommu_pass_through
ffffffff8147d488 D iommu_detected
ffffffff8147d48c D no_iommu
ffffffff8147d490 D iommu_merge
ffffffff8147d494 D force_iommu
ffffffff8147d498 D panic_on_overflow
ffffffff8147d49c d forbid_dac
ffffffff8147d4a0 d iommu_sac_force
ffffffff8147d4a4 D tsc_khz
ffffffff8147d4a8 D cpu_khz
ffffffff8147d4ac d tsc_disabled
ffffffff8147d4b0 d tsc_unstable
ffffffff8147d4b4 D io_delay_type
ffffffff8147d4b8 d mxcsr_feature_mask
ffffffff8147d4c0 d x86_64_regsets
ffffffff8147d5a0 d x86_32_regsets
ffffffff8147d700 D hw_cache_extra_regs
ffffffff8147d860 D hw_cache_event_ids
ffffffff8147d9c0 D x86_pmu
ffffffff8147db20 d intel_core_event_constraints
ffffffff8147dc00 d intel_core2_event_constraints
ffffffff8147ddc0 d intel_nehalem_event_constraints
ffffffff8147df40 d intel_nehalem_extra_regs
ffffffff8147df80 d intel_perfmon_event_map
ffffffff8147dfe0 d intel_gen_event_constraints
ffffffff8147e060 d intel_westmere_event_constraints
ffffffff8147e160 d intel_westmere_extra_regs
ffffffff8147e1c0 d intel_snb_event_constraints
ffffffff8147e2a0 d intel_snb_extra_regs
ffffffff8147e300 d intel_v1_event_constraints
ffffffff8147e320 D mce_banks
ffffffff8147e328 D mce_ser
ffffffff8147e32c D mce_ignore_ce
ffffffff8147e330 D mce_cmci_disabled
ffffffff8147e334 D mce_disabled
ffffffff8147e338 d mce_dont_log_ce
ffffffff8147e33c d banks
ffffffff8147e340 d rip_msr
ffffffff8147e344 d tolerant
ffffffff8147e348 d monarch_timeout
ffffffff8147e34c d mce_panic_timeout
ffffffff8147e350 d mce_bootlog
ffffffff8147e360 d isa_irq_to_gsi
ffffffff8147e3a0 D __per_cpu_offset
ffffffff8147e3e0 d ioapic_chip
ffffffff8147e4a0 d lapic_chip
ffffffff8147e558 D apic
ffffffff8147e560 d valid_flags
ffffffff8147e564 D swiotlb
ffffffff8147e580 D __supported_pte_mask
ffffffff8147e588 D pat_enabled
ffffffff8147e590 d boot_pat_state
ffffffff8147e5a0 D node_data
ffffffff8147eda0 D vdso_enabled
ffffffff8147eda4 D sysctl_vsyscall32
ffffffff8147eda8 D printk_delay_msec
ffffffff8147edac d ignore_loglevel
ffffffff8147edb0 d keep_bootcon
ffffffff8147edb8 d cpu_online_bits
ffffffff8147edc0 d cpu_possible_bits
ffffffff8147edc8 d cpu_present_bits
ffffffff8147edd0 d cpu_active_bits
ffffffff8147edd8 D print_fatal_signals
ffffffff8147ede0 D system_nrt_freezable_wq
ffffffff8147ede8 D system_freezable_wq
ffffffff8147edf0 D system_unbound_wq
ffffffff8147edf8 D system_nrt_wq
ffffffff8147ee00 D system_long_wq
ffffffff8147ee08 D system_wq
ffffffff8147ee10 d hrtimer_hres_enabled
ffffffff8147ee18 D scheduler_running
ffffffff8147ee1c D sched_clock_stable
ffffffff8147ee20 D sched_clock_running
ffffffff8147ee28 D sysctl_sched_shares_window
ffffffff8147ee2c D sysctl_sched_child_runs_first
ffffffff8147ee30 d max_load_balance_interval
ffffffff8147ee38 D timekeeping_suspended
ffffffff8147ee3c D tick_do_timer_cpu
ffffffff8147ee40 D futex_cmpxchg_enabled
ffffffff8147ee44 D nr_cpu_ids
ffffffff8147ee48 d use_task_css_set_links
ffffffff8147ee4c d need_forkexit_callback
ffffffff8147ee50 D cpuset_memory_pressure_enabled
ffffffff8147ee54 D number_of_cpusets
ffffffff8147ee58 D force_irqthreads
ffffffff8147ee5c D noirqdebug
ffffffff8147ee60 d irqfixup
ffffffff8147ee64 D rcu_cpu_stall_timeout
ffffffff8147ee68 D rcu_cpu_stall_suppress
ffffffff8147ee6c D rcu_scheduler_active
ffffffff8147ee70 d rcu_scheduler_fully_active
ffffffff8147ee74 D delayacct_on
ffffffff8147ee78 D sysctl_perf_event_sample_rate
ffffffff8147ee7c D sysctl_perf_event_mlock
ffffffff8147ee80 D sysctl_perf_event_paranoid
ffffffff8147ee84 D perf_sched_events
ffffffff8147ee88 d max_samples_per_tick
ffffffff8147ee8c d nr_mmap_events
ffffffff8147ee90 d nr_comm_events
ffffffff8147ee94 d nr_task_events
ffffffff8147eea0 D oom_killer_disabled
ffffffff8147eea4 D page_group_by_mobility_disabled
ffffffff8147eea8 D nr_online_nodes
ffffffff8147eeac D nr_node_ids
ffffffff8147eeb0 D gfp_allowed_mask
ffffffff8147eeb8 D dirty_balance_reserve
ffffffff8147eec0 D totalreserve_pages
ffffffff8147eec8 D totalram_pages
ffffffff8147eee0 D node_states
ffffffff8147ef60 D zone_reclaim_mode
ffffffff8147ef80 d shmem_backing_dev_info
ffffffff8147f1d8 D sysctl_stat_interval
ffffffff8147f1e0 D pcpu_unit_offsets
ffffffff8147f1e8 D pcpu_base_addr
ffffffff8147f1f0 d pcpu_unit_size
ffffffff8147f1f4 d pcpu_nr_slots
ffffffff8147f1f8 d pcpu_slot
ffffffff8147f200 d pcpu_chunk_struct_size
ffffffff8147f208 d pcpu_atom_size
ffffffff8147f20c d pcpu_nr_groups
ffffffff8147f210 d pcpu_group_sizes
ffffffff8147f218 d pcpu_group_offsets
ffffffff8147f220 d pcpu_unit_pages
ffffffff8147f224 d pcpu_nr_units
ffffffff8147f228 d pcpu_unit_map
ffffffff8147f230 d pcpu_low_unit_cpu
ffffffff8147f234 d pcpu_high_unit_cpu
ffffffff8147f238 D highest_memmap_pfn
ffffffff8147f240 D zero_pfn
ffffffff8147f248 D randomize_va_space
ffffffff8147f24c D sysctl_max_map_count
ffffffff8147f250 D sysctl_overcommit_ratio
ffffffff8147f254 D sysctl_overcommit_memory
ffffffff8147f258 d vmap_initialized
ffffffff8147f25c d use_alien_caches
ffffffff8147f260 D transparent_hugepage_flags
ffffffff8147f268 d mm_slot_cache
ffffffff8147f270 d mm_slots_hash
ffffffff8147f278 d khugepaged_alloc_sleep_millisecs
ffffffff8147f27c d khugepaged_scan_sleep_millisecs
ffffffff8147f280 d khugepaged_pages_to_scan
ffffffff8147f284 d khugepaged_max_ptes_none
ffffffff8147f288 d khugepaged_thread
ffffffff8147f290 D mce_bad_pages
ffffffff8147f298 D sysctl_memory_failure_recovery
ffffffff8147f29c D sysctl_memory_failure_early_kill
ffffffff8147f2a0 D cleancache_enabled
ffffffff8147f2c0 d cleancache_ops
ffffffff8147f300 D files_lglock_cpus
ffffffff8147f308 d filp_cachep
ffffffff8147f310 d pipe_mnt
ffffffff8147f318 d fasync_cache
ffffffff8147f320 D names_cachep
ffffffff8147f328 D sysctl_vfs_cache_pressure
ffffffff8147f32c d d_hash_shift
ffffffff8147f330 d dentry_hashtable
ffffffff8147f338 d d_hash_mask
ffffffff8147f340 d dentry_cache
ffffffff8147f348 d inode_cachep
ffffffff8147f350 d inode_hashtable
ffffffff8147f358 d i_hash_shift
ffffffff8147f35c d i_hash_mask
ffffffff8147f360 D sysctl_nr_open
ffffffff8147f368 D vfsmount_lock_cpus
ffffffff8147f370 d mount_hashtable
ffffffff8147f378 d mnt_cache
ffffffff8147f380 d bvec_slabs
ffffffff8147f410 d bio_split_pool
ffffffff8147f418 d bdev_cachep
ffffffff8147f420 d blockdev_superblock
ffffffff8147f428 d dio_cache
ffffffff8147f430 D dir_notify_enable
ffffffff8147f438 d dnotify_group
ffffffff8147f440 d dnotify_struct_cache
ffffffff8147f448 d dnotify_mark_cache
ffffffff8147f450 D event_priv_cachep
ffffffff8147f458 d inotify_max_user_instances
ffffffff8147f45c d inotify_max_user_watches
ffffffff8147f460 d inotify_max_queued_events
ffffffff8147f468 d inotify_inode_mark_cachep
ffffffff8147f470 d fanotify_mark_cache
ffffffff8147f478 d fanotify_response_event_cache
ffffffff8147f480 d max_user_watches
ffffffff8147f488 d epi_cache
ffffffff8147f490 d pwq_cache
ffffffff8147f498 d anon_inode_mnt
ffffffff8147f4a0 d filelock_cache
ffffffff8147f4a8 d skcipher_default_geniv
ffffffff8147f4b0 d blk_iopoll_budget
ffffffff8147f4c0 d height_to_maxindex
ffffffff8147f520 D kptr_restrict
ffffffff8147f524 D percpu_counter_batch
ffffffff8147f540 D num_registered_fb
ffffffff8147f560 D registered_fb
ffffffff8147f660 d fb_logo
ffffffff8147f678 d ofonly
ffffffff8147f680 d video_options
ffffffff8147f780 d red2
ffffffff8147f784 d green2
ffffffff8147f788 d blue2
ffffffff8147f7a0 d red16
ffffffff8147f7c0 d green16
ffffffff8147f7e0 d blue16
ffffffff8147f800 d red4
ffffffff8147f808 d green4
ffffffff8147f810 d blue4
ffffffff8147f820 d red8
ffffffff8147f830 d green8
ffffffff8147f840 d blue8
ffffffff8147f850 d vga_hardscroll_enabled
ffffffff8147f851 d vga_hardscroll_user_enable
ffffffff8147f854 d vga_can_do_color
ffffffff8147f858 d vga_vram_size
ffffffff8147f860 d vga_vram_base
ffffffff8147f868 d vga_video_port_reg
ffffffff8147f86a d vga_video_type
ffffffff8147f86c d vga_default_font_height
ffffffff8147f870 d vga_video_port_val
ffffffff8147f878 d vga_vram_end
ffffffff8147f880 d vga_scan_lines
ffffffff8147f884 d vga_init_done
ffffffff8147f888 d vga_compat
ffffffff8147f88c d pmi_setpal
ffffffff8147f890 d ypan
ffffffff8147f898 d pmi_start
ffffffff8147f8a0 d pmi_pal
ffffffff8147f8a8 d depth
ffffffff8147f8ac d mtrr
ffffffff8147f8b0 d inverse
ffffffff8147f8b8 d ec_delay
ffffffff8147f8c0 d hest_tab
ffffffff8147f8c8 d ghes_panic_timeout
ffffffff8147f8e0 D xen_features
ffffffff8147f900 d xen_pirq_chip
ffffffff8147f9c0 d xen_dynamic_chip
ffffffff8147fa80 d xen_percpu_chip
ffffffff8147fb38 d ring_ops
ffffffff8147fb40 d selfballoon_uphysteresis
ffffffff8147fb44 d selfballoon_downhysteresis
ffffffff8147fb48 d selfballoon_interval
ffffffff8147fb4c d xen_selfballooning_enabled
ffffffff8147fb50 D tmem_enabled
ffffffff8147fb51 d pci_seg_supported
ffffffff8147fb54 d trickle_thresh
ffffffff8147fb58 d off
ffffffff8147fb5c d off
ffffffff8147fb60 d initialized
ffffffff8147fb64 D pmtmr_ioport
ffffffff8147fb68 D amd_iommu_force_isolation
ffffffff8147fb69 D amd_iommu_v2_present
ffffffff8147fb6c D amd_iommu_max_pasids
ffffffff8147fb70 D amd_iommu_iotlb_sup
ffffffff8147fb71 D amd_iommu_np_cache
ffffffff8147fb78 d pci_seg_supported
ffffffff8147fb80 D raw_pci_ext_ops
ffffffff8147fb88 D raw_pci_ops
ffffffff8147fba0 d sock_mnt
ffffffff8147fba8 d sock_inode_cachep
ffffffff8147fbc0 d net_families
ffffffff8147fd00 D sysctl_optmem_max
ffffffff8147fd04 D sysctl_rmem_default
ffffffff8147fd08 D sysctl_wmem_default
ffffffff8147fd0c D sysctl_rmem_max
ffffffff8147fd10 D sysctl_wmem_max
ffffffff8147fd14 d warned.44065
ffffffff8147fd18 d skbuff_fclone_cache
ffffffff8147fd20 d skbuff_head_cache
ffffffff8147fd40 D rps_needed
ffffffff8147fd48 D rps_sock_flow_table
ffffffff8147fd50 D weight_p
ffffffff8147fd54 D netdev_budget
ffffffff8147fd58 D netdev_tstamp_prequeue
ffffffff8147fd5c D netdev_max_backlog
ffffffff8147fd60 d ptype_base
ffffffff8147fe60 d ptype_all
ffffffff8147fe70 d netstamp_needed
ffffffff8147fe74 d hashrnd
ffffffff8147fe80 d neigh_sysctl_template
ffffffff81480390 D net_msg_warn
ffffffff81480398 d flow_cachep
ffffffff814803a0 D pfifo_fast_ops
ffffffff81480440 D noop_qdisc_ops
ffffffff814804e0 d noqueue_qdisc_ops
ffffffff81480580 D mq_qdisc_ops
ffffffff81480620 d rt_hash_table
ffffffff81480628 d rt_hash_mask
ffffffff8148062c d ip_rt_redirect_silence
ffffffff81480630 d ip_rt_redirect_number
ffffffff81480634 d ip_rt_redirect_load
ffffffff81480638 d ip_rt_min_pmtu
ffffffff8148063c d ip_rt_mtu_expires
ffffffff81480640 d ip_rt_min_advmss
ffffffff81480644 d ip_rt_gc_min_interval
ffffffff81480648 d ip_rt_gc_elasticity
ffffffff8148064c d rt_hash_log
ffffffff81480650 d ip_rt_gc_timeout
ffffffff81480654 d rt_chain_length_max
ffffffff81480658 d ip_rt_error_burst
ffffffff8148065c d ip_rt_error_cost
ffffffff81480660 d ip_rt_gc_interval
ffffffff81480668 D inet_peer_maxttl
ffffffff8148066c D inet_peer_minttl
ffffffff81480670 D inet_peer_threshold
ffffffff81480678 d peer_cachep
ffffffff81480680 D inet_protos
ffffffff81480e80 d sysctl_ipfrag_max_dist
ffffffff81480e84 D sysctl_ip_default_ttl
ffffffff81480e90 D sysctl_local_ports
ffffffff81480ea0 D tcp_memory_pressure
ffffffff81480ea4 D sysctl_tcp_rmem
ffffffff81480eb0 D sysctl_tcp_wmem
ffffffff81480ebc D sysctl_tcp_fin_timeout
ffffffff81480ec0 D sysctl_tcp_abc
ffffffff81480ec4 D sysctl_tcp_moderate_rcvbuf
ffffffff81480ec8 D sysctl_tcp_thin_dupack
ffffffff81480ecc D sysctl_tcp_nometrics_save
ffffffff81480ed0 D sysctl_tcp_frto_response
ffffffff81480ed4 D sysctl_tcp_frto
ffffffff81480ed8 D sysctl_tcp_max_orphans
ffffffff81480edc D sysctl_tcp_rfc1337
ffffffff81480ee0 D sysctl_tcp_stdurg
ffffffff81480ee4 D sysctl_tcp_adv_win_scale
ffffffff81480ee8 D sysctl_tcp_app_win
ffffffff81480eec D sysctl_tcp_dsack
ffffffff81480ef0 D sysctl_tcp_ecn
ffffffff81480ef4 D sysctl_tcp_reordering
ffffffff81480ef8 D sysctl_tcp_fack
ffffffff81480efc D sysctl_tcp_sack
ffffffff81480f00 D sysctl_tcp_window_scaling
ffffffff81480f04 D sysctl_tcp_timestamps
ffffffff81480f08 D sysctl_tcp_cookie_size
ffffffff81480f0c D sysctl_tcp_slow_start_after_idle
ffffffff81480f10 D sysctl_tcp_base_mss
ffffffff81480f14 D sysctl_tcp_mtu_probing
ffffffff81480f18 D sysctl_tcp_tso_win_divisor
ffffffff81480f1c D sysctl_tcp_workaround_signed_windows
ffffffff81480f20 D sysctl_tcp_retrans_collapse
ffffffff81480f24 D sysctl_tcp_thin_linear_timeouts
ffffffff81480f28 D sysctl_tcp_orphan_retries
ffffffff81480f2c D sysctl_tcp_retries2
ffffffff81480f30 D sysctl_tcp_retries1
ffffffff81480f34 D sysctl_tcp_keepalive_intvl
ffffffff81480f38 D sysctl_tcp_keepalive_probes
ffffffff81480f3c D sysctl_tcp_keepalive_time
ffffffff81480f40 D sysctl_tcp_synack_retries
ffffffff81480f44 D sysctl_tcp_syn_retries
ffffffff81480f60 D tcp_request_sock_ops
ffffffff81480fa0 D sysctl_tcp_low_latency
ffffffff81480fa4 D sysctl_tcp_tw_reuse
ffffffff81480fa8 D sysctl_tcp_abort_on_overflow
ffffffff81480fac D sysctl_tcp_syncookies
ffffffff81480fb0 D sysctl_udp_wmem_min
ffffffff81480fb4 D sysctl_udp_rmem_min
ffffffff81480fc0 D sysctl_udp_mem
ffffffff81480fe0 D udp_table
ffffffff81481000 D udplite_table
ffffffff81481020 d arp_packet_type
ffffffff81481080 D sysctl_ip_dynaddr
ffffffff81481084 D sysctl_ip_nonlocal_bind
ffffffff81481088 D inet_ehash_secret
ffffffff814810a0 d ip_packet_type
ffffffff814810f0 D sysctl_igmp_max_msf
ffffffff814810f4 D sysctl_igmp_max_memberships
ffffffff814810f8 d fn_alias_kmem
ffffffff81481100 d trie_leaf_kmem
ffffffff81481120 d cubictcp
ffffffff814811a0 d hystart
ffffffff814811a4 d hystart_low_window
ffffffff814811a8 d hystart_detect
ffffffff814811ac d hystart_ack_delta
ffffffff814811b0 d fast_convergence
ffffffff814811b4 d beta
ffffffff814811b8 d cube_factor
ffffffff814811c0 d cube_rtt_scale
ffffffff814811c4 d tcp_friendliness
ffffffff814811c8 d beta_scale
ffffffff814811cc d initial_ssthresh
ffffffff814811d0 d bic_scale
ffffffff814811d8 d xfrm_policy_hashmax
ffffffff814811e0 d xfrm_dst_cache
ffffffff814811e8 d xfrm_state_hashmax
ffffffff814811f0 d secpath_cachep
ffffffff81481200 D _edata
ffffffff81482000 D __vvar_beginning_hack
ffffffff81482000 A __vvar_page
ffffffff81482000 D jiffies
ffffffff81482000 D jiffies_64
ffffffff81482010 D vgetcpu_mode
ffffffff81482080 D vsyscall_gtod_data
ffffffff81483000 D __init_begin
ffffffff81483000 A __per_cpu_load
ffffffff81483000 A init_per_cpu__irq_stack_union
ffffffff81487000 A init_per_cpu__gdt_page
ffffffff81496000 T _sinittext
ffffffff81496000 T early_idt_handlers
ffffffff81496140 T early_idt_handler
ffffffff814961b1 t early_recursion_flag
ffffffff814961b5 t early_idt_msg
ffffffff814961f1 t early_idt_ripmsg
ffffffff81496200 T startup_xen
ffffffff81496215 T x86_64_start_reservations
ffffffff81496342 T x86_64_start_kernel
ffffffff81496457 T reserve_ebda_region
ffffffff814964b7 t set_reset_devices
ffffffff814964c7 t debug_kernel
ffffffff814964d4 t quiet_kernel
ffffffff814964e1 t init_setup
ffffffff81496504 t rdinit_setup
ffffffff81496527 t loglevel
ffffffff8149655b t do_early_param
ffffffff814965d8 t repair_env_string
ffffffff81496630 t unknown_bootoption
ffffffff814967b0 T parse_early_options
ffffffff814967da T parse_early_param
ffffffff81496815 W smp_setup_processor_id
ffffffff81496816 W thread_info_cache_init
ffffffff81496817 T start_kernel
ffffffff81496b68 t kernel_init
ffffffff81496d38 t rootwait_setup
ffffffff81496d4c t root_data_setup
ffffffff81496d59 t fs_names_setup
ffffffff81496d66 t root_delay_setup
ffffffff81496d7d t root_dev_setup
ffffffff81496d99 t load_ramdisk
ffffffff81496db5 t readonly
ffffffff81496dc6 t readwrite
ffffffff81496dd7 T mount_block_root
ffffffff8149704a T mount_root
ffffffff81497099 T prepare_namespace
ffffffff81497202 t no_initrd
ffffffff81497212 T initrd_load
ffffffff8149724e t error
ffffffff81497260 t flush_buffer
ffffffff81497311 t retain_initrd_param
ffffffff81497325 t clean_path
ffffffff81497373 t do_utime
ffffffff814973ac t do_symlink
ffffffff81497439 t unpack_to_rootfs
ffffffff814976b7 t do_collect
ffffffff81497729 t parse_header
ffffffff8149780c t read_into
ffffffff8149786b t do_start
ffffffff81497884 t do_skip
ffffffff814978ed t do_reset
ffffffff81497960 t do_copy
ffffffff81497a0e t maybe_link.part.5
ffffffff81497b03 t do_name
ffffffff81497d79 t do_header
ffffffff81497eb4 t populate_rootfs
ffffffff81497f51 t lpj_setup
ffffffff81497f69 t xen_banner
ffffffff81497fd3 t xen_hvm_platform
ffffffff81498088 t xen_load_gdt_boot
ffffffff8149819f t xen_write_gdt_entry_boot
ffffffff81498213 T xen_start_kernel
ffffffff814987f0 t xen_hvm_guest_init
ffffffff8149897d T xen_memory_setup
ffffffff81498fa2 T xen_arch_setup
ffffffff814990ba t xen_mark_pinned
ffffffff814990c2 t xen_pagetable_setup_start
ffffffff814990c3 t xen_set_pgd_hyper
ffffffff81499122 t xen_set_pte_init
ffffffff8149918d t xen_pagetable_setup_done
ffffffff8149924b t xen_alloc_pmd_init
ffffffff81499261 t xen_alloc_pte_init
ffffffff81499293 t xen_release_pmd_init
ffffffff814992a9 t xen_release_pte_init
ffffffff814992d2 t xen_mapping_pagetable_reserve
ffffffff81499329 T xen_reserve_top
ffffffff8149932a T xen_setup_machphys_mapping
ffffffff81499374 T xen_setup_kernel_pagetable
ffffffff814996cf T xen_ident_map_ISA
ffffffff8149974d T xen_init_mmu_ops
ffffffff81499950 T xen_hvm_init_mmu_ops
ffffffff81499994 T xen_init_irq_ops
ffffffff814999ed t xen_time_init
ffffffff81499a73 T xen_init_time_ops
ffffffff81499ad0 T xen_hvm_init_time_ops
ffffffff81499b43 t parse_xen_emul_unplug
ffffffff81499cb6 t __early_alloc_p2m
ffffffff81499e32 T xen_build_dynamic_phys_to_machine
ffffffff8149a016 T set_phys_range_identity
ffffffff8149a21a t xen_hvm_smp_prepare_cpus
ffffffff8149a244 t xen_smp_prepare_cpus
ffffffff8149a3cd t xen_smp_prepare_boot_cpu
ffffffff8149a47a T xen_smp_init
ffffffff8149a4df T xen_hvm_smp_init
ffffffff8149a52b T xen_init_spinlocks
ffffffff8149a56e T xen_init_vga
ffffffff8149a6a3 T pci_xen_swiotlb_detect
ffffffff8149a6f1 T pci_xen_swiotlb_init
ffffffff8149a718 T early_trap_init
ffffffff8149a827 T trap_init
ffffffff8149ae82 t x86_late_time_init
ffffffff8149ae8f T setup_default_timer_irq
ffffffff8149ae9d T hpet_time_init
ffffffff8149aeb2 T time_init
ffffffff8149aebe t code_bytes_setup
ffffffff8149aee2 t kstack_setup
ffffffff8149af00 t setup_unknown_nmi_panic
ffffffff8149af10 t parse_reservelow
ffffffff8149af58 T extend_brk
ffffffff8149afa9 T reserve_standard_io_resources
ffffffff8149afce T setup_arch
ffffffff8149b8e2 T x86_init_uint_noop
ffffffff8149b8e3 T x86_init_pgd_noop
ffffffff8149b8e4 T iommu_init_noop
ffffffff8149b8e7 t i8259A_init_ops
ffffffff8149b905 t smp_intr_init
ffffffff8149beef t apic_intr_init
ffffffff8149c1ca T init_ISA_irqs
ffffffff8149c218 T init_IRQ
ffffffff8149c24c T native_init_IRQ
ffffffff8149c326 t romsignature
ffffffff8149c399 t romchecksum
ffffffff8149c428 T probe_roms
ffffffff8149c6da t control_va_addr_alignment
ffffffff8149c78c t vsyscall_init
ffffffff8149c7b0 t vsyscall_setup
ffffffff8149c829 T map_vsyscall
ffffffff8149c889 t sbf_init
ffffffff8149c97a t cpcompare
ffffffff8149c9b3 t e820_mark_nvs_memory
ffffffff8149c9e9 t e820_print_type
ffffffff8149ca61 t __e820_add_region.part.2
ffffffff8149ca6f t __e820_update_range
ffffffff8149ccad t e820_end_pfn.constprop.4
ffffffff8149cd2f T e820_all_mapped
ffffffff8149cd86 T e820_add_region
ffffffff8149cdba t __append_e820_map
ffffffff8149cdf0 T e820_print_map
ffffffff8149ce4b T sanitize_e820_map
ffffffff8149d044 T e820_update_range
ffffffff8149d05b T e820_remove_range
ffffffff8149d1ba t parse_memmap_opt
ffffffff8149d2cd t parse_memopt
ffffffff8149d342 T update_e820
ffffffff8149d393 T e820_search_gap
ffffffff8149d3fd T e820_setup_gap
ffffffff8149d478 T parse_e820_ext
ffffffff8149d4c5 T e820_mark_nosave_regions
ffffffff8149d4c6 T early_reserve_e820
ffffffff8149d541 T e820_end_of_ram_pfn
ffffffff8149d550 T e820_end_of_low_ram_pfn
ffffffff8149d55a T finish_e820_parsing
ffffffff8149d5d0 T e820_reserve_resources
ffffffff8149d72f T e820_reserve_resources_late
ffffffff8149d801 T default_machine_specific_memory_setup
ffffffff8149d8bc T setup_memory_map
ffffffff8149d8f2 T memblock_x86_fill
ffffffff8149d944 T memblock_find_dma_reserve
ffffffff8149da82 t pci_iommu_init
ffffffff8149dab9 t iommu_setup
ffffffff8149dd2f T pci_iommu_alloc
ffffffff8149dd96 t quirk_amd_nb_node
ffffffff8149ddf8 t topology_init
ffffffff8149de86 t arch_kdebugfs_init
ffffffff8149de94 t bootonly
ffffffff8149dea4 t debug_alt
ffffffff8149deb4 t setup_noreplace_smp
ffffffff8149dec4 t setup_noreplace_paravirt
ffffffff8149ded4 T arch_init_ideal_nops
ffffffff8149dee0 T alternative_instructions
ffffffff8149e00e T setup_pit_timer
ffffffff8149e026 T notsc_setup
ffffffff8149e046 t tsc_setup
ffffffff8149e08d t init_tsc_clocksource
ffffffff8149e10e t cpufreq_tsc
ffffffff8149e13d T tsc_init
ffffffff8149e22a t io_delay_param
ffffffff8149e2c7 t dmi_io_delay_0xed_port
ffffffff8149e2f2 T io_delay_init
ffffffff8149e308 t add_rtc_cmos
ffffffff8149e39a T sort_iommu_table
ffffffff8149e475 T check_iommu_entries
ffffffff8149e540 t configure_trampolines
ffffffff8149e563 T setup_trampolines
ffffffff8149e5f9 t idle_setup
ffffffff8149e6e6 T init_amd_e400_c1e_mask
ffffffff8149e6ff t xstate_enable_boot_cpu
ffffffff8149e9a9 t i8237A_init_ops
ffffffff8149e9ba t x86_xsave_setup
ffffffff8149e9e0 t x86_xsaveopt_setup
ffffffff8149e9f6 t setup_disable_smep
ffffffff8149ea06 t setup_noclflush
ffffffff8149ea1c t setup_show_msr
ffffffff8149ea4c t setup_disablecpuid
ffffffff8149ea8e T setup_cpu_local_masks
ffffffff8149ea8f T early_cpu_init
ffffffff8149eb84 T identify_boot_cpu
ffffffff8149ebb1 t vmware_platform
ffffffff8149ec56 t vmware_platform_setup
ffffffff8149ec8a T init_hypervisor_platform
ffffffff8149ecf7 t ms_hyperv_init_platform
ffffffff8149edac t ms_hyperv_platform
ffffffff8149ee23 t x86_rdrand_setup
ffffffff8149ee39 T check_bugs
ffffffff8149ee64 t init_hw_perf_events
ffffffff8149f2a6 T amd_pmu_init
ffffffff8149f505 T p6_pmu_init
ffffffff8149f625 T p4_pmu_init
ffffffff8149f776 t intel_clovertown_quirk
ffffffff8149f79c t intel_nehalem_quirk
ffffffff8149f7ca t intel_sandybridge_quirk
ffffffff8149f7f0 t intel_arch_events_quirk
ffffffff8149f855 T intel_pmu_init
ffffffff8149fe08 t mcheck_disable
ffffffff8149fe18 t mcheck_enable
ffffffff8149ff7f t mcheck_init_device
ffffffff814a0071 T mcheck_init
ffffffff814a0074 t threshold_init_device
ffffffff814a00c7 t mtrr_init_finialize
ffffffff814a00fa T mtrr_bp_init
ffffffff814a02ce t mtrr_if_init
ffffffff814a032f t print_fixed_last.part.1
ffffffff814a0367 t print_fixed
ffffffff814a03f4 T mtrr_state_warn
ffffffff814a0459 T get_mtrr_state
ffffffff814a0741 t disable_mtrr_cleanup_setup
ffffffff814a074e t enable_mtrr_cleanup_setup
ffffffff814a075b t mtrr_cleanup_debug_setup
ffffffff814a0768 t disable_mtrr_trim_setup
ffffffff814a0775 t parse_mtrr_spare_reg
ffffffff814a078f t parse_mtrr_gran_size_opt
ffffffff814a07ba t parse_mtrr_chunk_size_opt
ffffffff814a07e5 t print_out_mtrr_range_state
ffffffff814a08ca t mtrr_print_out_one_result
ffffffff814a09d0 t x86_get_mtrr_mem_range
ffffffff814a0bc3 t real_trim_memory
ffffffff814a0bdd t set_var_mtrr_all
ffffffff814a0c64 t range_to_mtrr
ffffffff814a0dbe t range_to_mtrr_with_hole
ffffffff814a0fd7 t x86_setup_var_mtrrs.constprop.2
ffffffff814a110f t mtrr_calc_range_state.constprop.1
ffffffff814a1267 T mtrr_cleanup
ffffffff814a1645 T amd_special_default_mtrr
ffffffff814a168d T mtrr_trim_uncached_memory
ffffffff814a195b t amd_ibs_init
ffffffff814a1d09 t acpi_parse_lapic_addr_ovr
ffffffff814a1d30 t parse_acpi_skip_timer_override
ffffffff814a1d3d t parse_acpi_use_timer_override
ffffffff814a1d4a t parse_pci
ffffffff814a1d78 t hpet_insert_resource
ffffffff814a1d96 t acpi_parse_nmi_src
ffffffff814a1dba t acpi_parse_hpet
ffffffff814a1ef4 t setup_acpi_sci
ffffffff814a1f99 t parse_acpi
ffffffff814a2093 t dmi_ignore_irq0_timer_override
ffffffff814a20bd t acpi_parse_fadt
ffffffff814a2109 t acpi_parse_sbf
ffffffff814a2134 t disable_acpi_pci
ffffffff814a216a t disable_acpi_irq
ffffffff814a2194 t dmi_disable_acpi
ffffffff814a21e3 t acpi_parse_lapic_nmi
ffffffff814a2225 t acpi_parse_x2apic_nmi
ffffffff814a2266 t acpi_parse_x2apic
ffffffff814a2297 t acpi_parse_lapic
ffffffff814a22cd t acpi_parse_sapic
ffffffff814a230c t acpi_parse_ioapic
ffffffff814a2341 t acpi_parse_madt
ffffffff814a239f T __acpi_map_table
ffffffff814a23b1 T __acpi_unmap_table
ffffffff814a23c1 T acpi_pic_sci_set_trigger
ffffffff814a2431 T acpi_set_irq_model_pic
ffffffff814a2451 T acpi_set_irq_model_ioapic
ffffffff814a2471 T mp_override_legacy_irq
ffffffff814a2510 t acpi_sci_ioapic_setup
ffffffff814a2561 t acpi_parse_int_src_ovr
ffffffff814a262a T mp_config_acpi_legacy_irqs
ffffffff814a270f T acpi_boot_table_init
ffffffff814a2793 T early_acpi_boot_init
ffffffff814a2860 T acpi_boot_init
ffffffff814a2baa T acpi_mps_check
ffffffff814a2bad t ffh_cstate_init
ffffffff814a2bd4 t reboot_setup
ffffffff814a2c3c t set_pci_reboot
ffffffff814a2c68 t pci_reboot_init
ffffffff814a2c83 t nvidia_hpet_check
ffffffff814a2c86 t ati_bugs_contd
ffffffff814a2d32 t fix_hypertransport_config
ffffffff814a2dbf t via_bugs
ffffffff814a2dfa t ati_bugs
ffffffff814a2ee4 t nvidia_bugs
ffffffff814a2f2e T early_quirks
ffffffff814a3037 t nonmi_ipi_setup
ffffffff814a3048 t _setup_possible_cpus
ffffffff814a3069 t disable_smp
ffffffff814a30e7 T native_smp_prepare_cpus
ffffffff814a340c T native_smp_prepare_boot_cpu
ffffffff814a343f T native_smp_cpus_done
ffffffff814a34ef T prefill_possible_map
ffffffff814a35dd t pcpu_cpu_distance
ffffffff814a3630 t pcpup_populate_pte
ffffffff814a3635 t pcpu_fc_free
ffffffff814a3654 t pcpu_fc_alloc
ffffffff814a3708 T setup_per_cpu_areas
ffffffff814a3912 t mpf_checksum
ffffffff814a392a t ELCR_trigger
ffffffff814a3944 t update_mptable_setup
ffffffff814a395b t smp_check_mpc
ffffffff814a3a5c t parse_alloc_mptable_opt
ffffffff814a3aa0 t get_mpc_size
ffffffff814a3ae6 t smp_scan_config
ffffffff814a3bc7 t MP_bus_info
ffffffff814a3c52 t construct_default_ioirq_mptable
ffffffff814a3d61 t print_mp_irq_info
ffffffff814a3dae t MP_lintsrc_info.part.1
ffffffff814a3df2 t smp_dump_mptable.isra.2
ffffffff814a3e48 t update_mp_table
ffffffff814a424b t MP_processor_info.part.5
ffffffff814a4297 T default_mpc_apic_id
ffffffff814a429c T default_mpc_oem_bus_info
ffffffff814a42ce T default_smp_read_mpc_oem
ffffffff814a42cf T default_get_smp_config
ffffffff814a470c T default_find_smp_config
ffffffff814a4767 T early_reserve_e820_mpc_new
ffffffff814a4794 t setup_disableapic
ffffffff814a47b1 t setup_nolapic
ffffffff814a47b3 t parse_lapic_timer_c2_ok
ffffffff814a47c0 t parse_disable_apic_timer
ffffffff814a47cd t parse_nolapic_timer
ffffffff814a47da t lapic_insert_resource
ffffffff814a4816 t init_lapic_sysfs
ffffffff814a4833 t setup_apicpmtimer
ffffffff814a484b t lapic_cal_handler
ffffffff814a4908 t apic_set_verbosity
ffffffff814a4975 T setup_boot_APIC_clock
ffffffff814a4e4b T verify_local_APIC
ffffffff814a5015 T sync_Arb_IDs
ffffffff814a508e T init_bsp_APIC
ffffffff814a510a T bsp_end_local_APIC_setup
ffffffff814a510f T enable_IR
ffffffff814a5113 T enable_IR_x2apic
ffffffff814a5114 T register_lapic_address
ffffffff814a51b4 T init_apic_mappings
ffffffff814a52b2 T connect_bsp_APIC
ffffffff814a52c8 T APIC_init_uniprocessor
ffffffff814a53c2 t parse_noapic
ffffffff814a53e3 t notimercheck
ffffffff814a53f3 t disable_timer_pin_setup
ffffffff814a5400 t io_apic_bug_finalize
ffffffff814a5416 t ioapic_init_ops
ffffffff814a5427 t print_APIC_field
ffffffff814a547a t __ioapic_init_mappings
ffffffff814a55d9 t print_local_APIC
ffffffff814a59ea t setup_show_lapic
ffffffff814a5a3e t find_isa_irq_pin
ffffffff814a5a8c t find_isa_irq_apic
ffffffff814a5afc t setup_timer_IRQ0_pin
ffffffff814a5b92 t timer_irq_works
ffffffff814a5bf4 t print_ICs
ffffffff814a60ec T set_io_apic_ops
ffffffff814a60fe T arch_early_irq_init
ffffffff814a61c3 T enable_IO_APIC
ffffffff814a62ac T setup_IO_APIC
ffffffff814a69b8 T arch_probe_nr_irqs
ffffffff814a69f3 T setup_ioapic_dest
ffffffff814a6a89 T ioapic_and_gsi_init
ffffffff814a6a92 T ioapic_insert_resources
ffffffff814a6ae1 T mp_register_ioapic
ffffffff814a6d2d T pre_init_apic_IRQ0
ffffffff814a6d90 T default_setup_apic_routing
ffffffff814a6e01 T default_acpi_madt_oem_check
ffffffff814a6e65 t early_serial_init
ffffffff814a6fba t setup_early_printk
ffffffff814a724d t disable_hpet
ffffffff814a725d t hpet_setup
ffffffff814a72d3 T hpet_enable
ffffffff814a74d5 t hpet_late_init
ffffffff814a75c7 t init_amd_nbs
ffffffff814a7676 T early_is_amd_nb
ffffffff814a769e T default_banner
ffffffff814a76b3 t add_pcspkr
ffffffff814a7716 T pci_swiotlb_detect_override
ffffffff814a7733 T pci_swiotlb_detect_4gb
ffffffff814a775a T pci_swiotlb_late_init
ffffffff814a777e T pci_swiotlb_init
ffffffff814a779c t audit_classes_init
ffffffff814a7848 T gart_iommu_init
ffffffff814a7c76 T gart_parse_options
ffffffff814a7dd9 t parse_gart_mem
ffffffff814a7e32 t search_agp_bridge
ffffffff814a8113 T early_gart_iommu_check
ffffffff814a8433 T gart_iommu_hole_init
ffffffff814a89d2 t set_check_enable_amd_mmconf
ffffffff814a89df T vsmp_init
ffffffff814a8acc T native_pagetable_reserve
ffffffff814a8ad4 T zone_sizes_init
ffffffff814a8b0e t parse_direct_gbpages_off
ffffffff814a8b1b t parse_direct_gbpages_on
ffffffff814a8b28 t __init_extra_mapping
ffffffff814a8c6d t nonx32_setup
ffffffff814a8cb1 T populate_extra_pmd
ffffffff814a8db4 T populate_extra_pte
ffffffff814a8dc9 T init_extra_mapping_wb
ffffffff814a8ddd T init_extra_mapping_uc
ffffffff814a8df1 T cleanup_highmap
ffffffff814a8e86 T paging_init
ffffffff814a8ea4 T mem_init
ffffffff814a8f89 t early_ioremap_debug_setup
ffffffff814a8f96 t check_early_ioremap_leak
ffffffff814a8fe3 t __early_set_fixmap
ffffffff814a9078 t __early_ioremap
ffffffff814a9213 T is_early_ioremap_ptep
ffffffff814a922b T early_ioremap_init
ffffffff814a9410 T early_ioremap_reset
ffffffff814a941b T fixup_early_ioremap
ffffffff814a944b T early_ioremap
ffffffff814a945d T early_memremap
ffffffff814a946f T early_iounmap
ffffffff814a9585 t pat_debug_setup
ffffffff814a9592 t nopat
ffffffff814a95b6 t setup_userpte
ffffffff814a95de T reserve_top_address
ffffffff814a95df t noexec_setup
ffffffff814a9639 T x86_report_nx
ffffffff814a967b t setup_hugepagesz
ffffffff814a96ec t numa_setup
ffffffff814a9741 t numa_nodemask_from_meminfo
ffffffff814a9769 T setup_node_to_cpumask_map
ffffffff814a97d8 T numa_remove_memblk_from
ffffffff814a9808 T numa_add_memblk
ffffffff814a9888 t dummy_numa_init
ffffffff814a98ed T numa_cleanup_meminfo
ffffffff814a9ab2 T numa_reset_distance
ffffffff814a9aee t numa_init
ffffffff814a9fd5 T numa_set_distance
ffffffff814aa1be T x86_numa_init
ffffffff814aa1f7 T init_cpu_to_node
ffffffff814aa2fd T initmem_init
ffffffff814aa302 T numa_free_all_bootmem
ffffffff814aa382 T amd_numa_init
ffffffff814aa704 t bad_srat
ffffffff814aa71f T acpi_numa_slit_init
ffffffff814aa78a T acpi_numa_x2apic_affinity_init
ffffffff814aa849 T acpi_numa_processor_affinity_init
ffffffff814aa8e6 T acpi_numa_memory_affinity_init
ffffffff814aa9a2 T acpi_numa_arch_fixup
ffffffff814aa9a3 T x86_acpi_numa_init
ffffffff814aa9bb t vdso_setup
ffffffff814aa9cf t init_vdso
ffffffff814aaaec t ia32_binfmt_init
ffffffff814aaafd t vdso_setup
ffffffff814aab14 T sysenter_setup
ffffffff814aadcc t coredump_filter_setup
ffffffff814aaded T fork_init
ffffffff814aae7b T proc_caches_init
ffffffff814aaf5b t proc_execdomains_init
ffffffff814aaf7a t oops_setup
ffffffff814aafa5 t console_suspend_disable
ffffffff814aafb2 t log_buf_len_setup
ffffffff814aaff8 t keep_bootcon_setup
ffffffff814ab015 t ignore_loglevel_setup
ffffffff814ab02f t printk_late_init
ffffffff814ab07e t console_setup
ffffffff814ab136 T setup_log_buf
ffffffff814ab274 t alloc_frozen_cpus
ffffffff814ab277 t cpu_hotplug_pm_sync_init
ffffffff814ab288 t spawn_ksoftirqd
ffffffff814ab2d2 T softirq_init
ffffffff814ab387 t strict_iomem
ffffffff814ab3cb t ioresources_init
ffffffff814ab404 t __reserve_region_with_split
ffffffff814ab49f t reserve_setup
ffffffff814ab56a T reserve_region_with_split
ffffffff814ab5b5 T sysctl_init
ffffffff814ab5c6 t file_caps_disable
ffffffff814ab5d6 T init_timers
ffffffff814ab619 t uid_cache_init
ffffffff814ab6ab t setup_print_fatal_signals
ffffffff814ab6cf T signals_init
ffffffff814ab6f7 T usermodehelper_init
ffffffff814ab725 t init_workqueues
ffffffff814aba89 T pidhash_init
ffffffff814abaef T pidmap_init
ffffffff814abba5 T sort_main_extable
ffffffff814abbb8 t locate_module_kobject
ffffffff814abc85 t param_sysfs_init
ffffffff814abe05 t init_posix_timers
ffffffff814abfe5 t init_posix_cpu_timers
ffffffff814ac0aa t setup_hrtimer_hres
ffffffff814ac0f4 T hrtimers_init
ffffffff814ac12f T nsproxy_cache_init
ffffffff814ac159 t ksysfs_init
ffffffff814ac1f4 T cred_init
ffffffff814ac219 t isolated_cpu_setup
ffffffff814ac235 t setup_relax_domain_level
ffffffff814ac260 t migration_init
ffffffff814ac2ce T sched_create_sysfs_power_savings_entries
ffffffff814ac318 T sched_init_smp
ffffffff814ac416 T sched_init
ffffffff814ac88c T init_sched_fair_class
ffffffff814ac89d t pm_qos_power_init
ffffffff814ac8f7 t pm_init
ffffffff814ac94f t timekeeping_init_ops
ffffffff814ac960 T timekeeping_init
ffffffff814aca74 t ntp_tick_adj_setup
ffffffff814aca90 T ntp_init
ffffffff814aca95 t boot_override_clocksource
ffffffff814acad1 t boot_override_clock
ffffffff814acb0f t init_clocksource_sysfs
ffffffff814acb61 t clocksource_done_booting
ffffffff814acbb8 t init_jiffies_clocksource
ffffffff814acbc4 W clocksource_default_clock
ffffffff814acbcc t init_timer_list_procfs
ffffffff814acbf5 t alarmtimer_init
ffffffff814acdcc T tick_init
ffffffff814acdd8 t futex_init
ffffffff814ace30 t proc_dma_init
ffffffff814ace4f t nrcpus
ffffffff814ace84 T call_function_init
ffffffff814acf04 t maxcpus
ffffffff814acf33 t nosmp
ffffffff814acf49 T setup_nr_cpu_ids
ffffffff814acf66 T smp_init
ffffffff814acffe t proc_modules_init
ffffffff814ad01d t kallsyms_init
ffffffff814ad03f t cgroup_disable
ffffffff814ad0be t cgroup_init_subsys
ffffffff814ad19b T cgroup_init_early
ffffffff814ad3fb T cgroup_init
ffffffff814ad508 T cpuset_init
ffffffff814ad583 T cpuset_init_smp
ffffffff814ad5d9 t pid_namespaces_init
ffffffff814ad616 t ikconfig_init
ffffffff814ad64c t cpu_stop_init
ffffffff814ad6ef t audit_enable
ffffffff814ad78e t audit_init
ffffffff814ad8c9 T audit_register_class
ffffffff814ad946 t audit_watch_init
ffffffff814ad97d t audit_tree_init
ffffffff814ad9d4 T early_irq_init
ffffffff814adabe t setup_forced_irqthreads
ffffffff814adac8 t irqpoll_setup
ffffffff814adaf6 t irqfixup_setup
ffffffff814adb24 t irq_pm_init_ops
ffffffff814adb35 t rcu_scheduler_really_started
ffffffff814adb42 t rcu_init_one
ffffffff814add74 T rcu_init
ffffffff814ade15 t utsname_sysctl_init
ffffffff814ade26 t delayacct_setup_disable
ffffffff814ade36 t taskstats_init
ffffffff814adec5 T taskstats_init_early
ffffffff814adf5d t perf_event_sysfs_init
ffffffff814adff1 T perf_event_init
ffffffff814ae159 T init_hw_breakpoint
ffffffff814ae248 t set_hashdist
ffffffff814ae274 t cmdline_parse_movablecore
ffffffff814ae2a3 t cmdline_parse_kernelcore
ffffffff814ae2d2 t setup_numa_zonelist_order
ffffffff814ae305 t find_min_pfn_for_node
ffffffff814ae380 T setup_per_cpu_pageset
ffffffff814ae45e T free_bootmem_with_active_regions
ffffffff814ae4fc T sparse_memory_present_with_active_regions
ffffffff814ae562 T absent_pages_in_range
ffffffff814ae572 T node_map_pfn_alignment
ffffffff814ae642 T find_min_pfn_with_active_regions
ffffffff814ae64c T set_dma_reserve
ffffffff814ae654 T page_alloc_init
ffffffff814ae660 T free_area_init
ffffffff814ae687 T free_area_init_nodes
ffffffff814aec5d T alloc_large_system_hash
ffffffff814aee91 T page_writeback_init
ffffffff814aeeb8 T swap_setup
ffffffff814aeee1 t kswapd_init
ffffffff814aef51 T shmem_init
ffffffff814af01b t setup_vmstat
ffffffff814af0df t bdi_class_init
ffffffff814af110 t default_bdi_init
ffffffff814af1ad t mm_sysfs_init
ffffffff814af1d3 t set_mminit_loglevel
ffffffff814af1f4 T mminit_verify_pageflags_layout
ffffffff814af30f t percpu_alloc_setup
ffffffff814af363 T pcpu_alloc_alloc_info
ffffffff814af3d7 t pcpu_build_alloc_info
ffffffff814af792 T pcpu_free_alloc_info
ffffffff814af7a8 T pcpu_setup_first_chunk
ffffffff814aff7b T pcpu_embed_first_chunk
ffffffff814b01ea T pcpu_page_first_chunk
ffffffff814b0553 T percpu_init_late
ffffffff814b0613 t disable_randmaps
ffffffff814b0623 t init_zero_pfn
ffffffff814b0655 T mmap_init
ffffffff814b066a T anon_vma_init
ffffffff814b06b8 t proc_vmalloc_init
ffffffff814b06da T vm_area_add_early
ffffffff814b072a T vm_area_register_early
ffffffff814b0777 T vmalloc_init
ffffffff814b0828 t __alloc_memory_core_early
ffffffff814b0887 t __free_pages_memory
ffffffff814b0963 t ___alloc_bootmem_nopanic
ffffffff814b09e1 T free_bootmem_late
ffffffff814b0a2d T free_low_memory_core_early
ffffffff814b0b2e T free_all_bootmem_node
ffffffff814b0b31 T free_all_bootmem
ffffffff814b0b3b T free_bootmem_node
ffffffff814b0b46 T free_bootmem
ffffffff814b0b4b T __alloc_bootmem_nopanic
ffffffff814b0b54 T __alloc_bootmem
ffffffff814b0b87 T __alloc_bootmem_node
ffffffff814b0c2e T __alloc_bootmem_node_high
ffffffff814b0c33 T alloc_bootmem_section
ffffffff814b0c6b T __alloc_bootmem_node_nopanic
ffffffff814b0d02 T __alloc_bootmem_low
ffffffff814b0d36 T __alloc_bootmem_low_node
ffffffff814b0ddb t early_memblock
ffffffff814b0e00 t memblock_alloc_base_nid
ffffffff814b0e4c T memblock_alloc_nid
ffffffff814b0e52 T __memblock_alloc_base
ffffffff814b0e59 T memblock_alloc_base
ffffffff814b0e84 T memblock_alloc
ffffffff814b0e8b T memblock_alloc_try_nid
ffffffff814b0eb3 T memblock_phys_mem_size
ffffffff814b0ebb T memblock_enforce_memory_limit
ffffffff814b0f23 T memblock_is_reserved
ffffffff814b0f42 T memblock_allow_resize
ffffffff814b0f4d t max_swapfiles_check
ffffffff814b0f50 t procswaps_init
ffffffff814b0f6f t hugetlb_default_setup
ffffffff814b0f93 W alloc_bootmem_huge_page
ffffffff814b104a t hugetlb_hstate_alloc_pages
ffffffff814b108e t hugetlb_nrpages_setup
ffffffff814b1108 T hugetlb_add_hstate
ffffffff814b125a t hugetlb_init
ffffffff814b16e0 T numa_policy_init
ffffffff814b17f8 t sparse_early_usemaps_alloc_node
ffffffff814b1878 T memory_present
ffffffff814b196a T node_memmap_size_bytes
ffffffff814b19fa T sparse_init
ffffffff814b1c99 T sparse_mem_maps_populate_node
ffffffff814b1dc0 t noaliencache_setup
ffffffff814b1dd0 t set_up_list3s
ffffffff814b1e7d t slab_proc_init
ffffffff814b1ea1 t cpucache_init
ffffffff814b1ee5 t slab_max_order_setup
ffffffff814b1f30 t init_list.isra.44
ffffffff814b2033 T kmem_cache_init
ffffffff814b24d7 T kmem_cache_init_late
ffffffff814b2542 t setup_transparent_hugepage
ffffffff814b25cf t hugepage_init
ffffffff814b273d t memory_failure_init
ffffffff814b27d8 t init_cleancache
ffffffff814b27db T files_init
ffffffff814b2849 T chrdev_init
ffffffff814b2871 t init_pipe_fs
ffffffff814b28b3 t fcntl_init
ffffffff814b28da t set_dhash_entries
ffffffff814b2907 T vfs_caches_init_early
ffffffff814b2982 T vfs_caches_init
ffffffff814b2a88 t set_ihash_entries
ffffffff814b2ab5 T inode_init_early
ffffffff814b2b2c T inode_init
ffffffff814b2bc6 T files_defer_init
ffffffff814b2c3b t proc_filesystems_init
ffffffff814b2c5a T get_filesystem_list
ffffffff814b2cd1 T mnt_init
ffffffff814b2e4c T buffer_init
ffffffff814b2e98 t init_bio
ffffffff814b2f75 T bdev_cache_init
ffffffff814b2ff2 t dio_init
ffffffff814b301c t fsnotify_init
ffffffff814b303f t fsnotify_notification_init
ffffffff814b30c9 t fsnotify_mark_init
ffffffff814b3106 t dnotify_init
ffffffff814b317e t inotify_user_setup
ffffffff814b31eb t fanotify_user_setup
ffffffff814b323a t eventpoll_init
ffffffff814b3311 t anon_inode_init
ffffffff814b336f t aio_setup
ffffffff814b33e8 t filelock_init
ffffffff814b340f t proc_locks_init
ffffffff814b342e t init_sys32_ioctl_cmp
ffffffff814b343c t init_sys32_ioctl
ffffffff814b3461 t init_script_binfmt
ffffffff814b3474 t init_elf_binfmt
ffffffff814b3488 t init_compat_elf_binfmt
ffffffff814b349b T proc_init_inodecache
ffffffff814b34c4 T proc_root_init
ffffffff814b3568 T proc_tty_init
ffffffff814b35e4 t proc_cmdline_init
ffffffff814b3603 t proc_consoles_init
ffffffff814b3622 t proc_cpuinfo_init
ffffffff814b3641 t proc_devices_init
ffffffff814b3660 t proc_interrupts_init
ffffffff814b367f t proc_loadavg_init
ffffffff814b369f t proc_meminfo_init
ffffffff814b36be t proc_stat_init
ffffffff814b36dd t proc_uptime_init
ffffffff814b36fc t proc_version_init
ffffffff814b371b t proc_softirqs_init
ffffffff814b373a T proc_sys_init
ffffffff814b3767 T proc_net_init
ffffffff814b378a t proc_kcore_init
ffffffff814b3832 t proc_kmsg_init
ffffffff814b3854 t proc_page_init
ffffffff814b3893 T sysfs_inode_init
ffffffff814b389f T sysfs_init
ffffffff814b394d T configfs_inode_init
ffffffff814b3959 t configfs_init
ffffffff814b39f9 t init_devpts_fs
ffffffff814b3a56 t init_ramfs_fs
ffffffff814b3a62 T init_rootfs
ffffffff814b3a9f t init_hugetlbfs_fs
ffffffff814b3b30 t init_pstore_fs
ffffffff814b3b3c t ipc_init
ffffffff814b3b5c T ipc_init_proc_interface
ffffffff814b3bcf T msg_init
ffffffff814b3c11 T sem_init
ffffffff814b3c3b t ipc_ns_init
ffffffff814b3c4e T shm_init
ffffffff814b3c6d t ipc_sysctl_init
ffffffff814b3c7e t init_mqueue_fs
ffffffff814b3d15 t init_mmap_min_addr
ffffffff814b3d26 t crypto_wq_init
ffffffff814b3d56 t crypto_algapi_init
ffffffff814b3d60 T crypto_init_proc
ffffffff814b3d7a t skcipher_module_init
ffffffff814b3dae t chainiv_module_init
ffffffff814b3dba t eseqiv_module_init
ffffffff814b3dc6 t cryptomgr_init
ffffffff814b3dd2 t aes_init
ffffffff814b3dde t crc32c_mod_init
ffffffff814b3dea t krng_mod_init
ffffffff814b3df6 t elevator_setup
ffffffff814b3e12 T blk_dev_init
ffffffff814b3e8d t blk_settings_init
ffffffff814b3eb2 t blk_ioc_init
ffffffff814b3ed9 t blk_softirq_init
ffffffff814b3f41 t blk_iopoll_setup
ffffffff814b3fab t proc_genhd_init
ffffffff814b3fe4 t genhd_device_init
ffffffff814b4055 T printk_all_partitions
ffffffff814b428b t blk_scsi_ioctl_init
ffffffff814b450f t bsg_init
ffffffff814b463a t init_cgroup_blkio
ffffffff814b4646 t noop_init
ffffffff814b4652 t cfq_init
ffffffff814b46e8 t get_bits
ffffffff814b47bd t get_next_block
ffffffff814b4e31 t nofill
ffffffff814b4e35 T bunzip2
ffffffff814b52a6 t nofill
ffffffff814b52aa T gunzip
ffffffff814b559f t nofill
ffffffff814b55a3 t rc_read
ffffffff814b55d8 t rc_do_normalize
ffffffff814b560a t rc_get_bit
ffffffff814b5694 T unlzma
ffffffff814b65b9 T parse_header
ffffffff814b666c T unlzo
ffffffff814b6ac8 T unxz
ffffffff814b6d3e T idr_init_cache
ffffffff814b6d65 t kobject_uevent_init
ffffffff814b6d82 T prio_tree_init
ffffffff814b6db5 T radix_tree_init
ffffffff814b6e8b t random32_init
ffffffff814b6f58 t random32_reseed
ffffffff814b6ffa t percpu_counter_startup
ffffffff814b7010 t setup_io_tlb_npages
ffffffff814b707b T swiotlb_init_with_tbl
ffffffff814b7147 T swiotlb_init_with_default_size
ffffffff814b71b6 T swiotlb_init
ffffffff814b71c2 T swiotlb_free
ffffffff814b7311 t pcibus_class_init
ffffffff814b7324 t pci_sort_bf_cmp
ffffffff814b737a T pci_sort_breadthfirst
ffffffff814b738d t pci_resource_alignment_sysfs_init
ffffffff814b73a0 T pci_register_set_vga_state
ffffffff814b73a8 t pci_setup
ffffffff814b76b2 t pci_driver_init
ffffffff814b76c5 t pci_sysfs_init
ffffffff814b7713 t pci_proc_init
ffffffff814b7773 t quirk_eisa_bridge
ffffffff814b777b t quirk_tc86c001_ide
ffffffff814b779b t quirk_ide_samemode
ffffffff814b7808 t quirk_disable_all_msi
ffffffff814b7827 t asus_hides_smbus_hostbridge
ffffffff814b7a59 t pci_apply_final_quirks
ffffffff814b7b58 t quirk_ioapic_rmw
ffffffff814b7b76 t quirk_amd_8131_mmrbc
ffffffff814b7baf t quirk_alimagik
ffffffff814b7bd7 t quirk_alder_ioapic
ffffffff814b7c61 t pcie_aspm_disable
ffffffff814b7cce t pciehp_setup
ffffffff814b7cf2 t dmi_pcie_pme_disable_msi
ffffffff814b7d10 t pcie_portdrv_init
ffffffff814b7d7d t pcie_port_setup
ffffffff814b7df4 t aer_service_init
ffffffff814b7e1b t pcie_pme_service_init
ffffffff814b7e27 t pcie_pme_setup
ffffffff814b7e4b t ioapic_init
ffffffff814b7e60 t pci_bus_get_depth
ffffffff814b7e98 T pci_realloc_get_opt
ffffffff814b7ee1 T pci_assign_unassigned_resources
ffffffff814b8106 t acpi_pci_init
ffffffff814b815f t video_setup
ffffffff814b81d5 t fbmem_init
ffffffff814b826b t text_mode
ffffffff814b827b t no_scroll
ffffffff814b828f t fb_console_init
ffffffff814b83a6 t fb_console_setup
ffffffff814b85df t vesafb_init
ffffffff814b8830 t vesafb_probe
ffffffff814b8f81 t acpi_parse_apic_instance
ffffffff814b8faf T acpi_table_parse_entries
ffffffff814b90ec T acpi_table_parse_madt
ffffffff814b9105 T acpi_table_parse
ffffffff814b9182 T acpi_table_init
ffffffff814b921b t dmi_enable_osi_linux
ffffffff814b922d t dmi_disable_osi_win7
ffffffff814b9250 t dmi_disable_osi_vista
ffffffff814b928d T acpi_blacklisted
ffffffff814b93eb t acpi_serialize_setup
ffffffff814b940a t acpi_request_region
ffffffff814b9458 t acpi_reserve_resources
ffffffff814b9541 t acpi_os_name_setup
ffffffff814b9599 t acpi_enforce_resources_setup
ffffffff814b9612 T acpi_os_get_root_pointer
ffffffff814b9633 T early_acpi_os_unmap_memory
ffffffff814b9642 T acpi_osi_setup
ffffffff814b96e4 t set_osi_linux
ffffffff814b971d t osi_setup
ffffffff814b978f T acpi_dmi_osi_linux
ffffffff814b97ba T acpi_os_initialize
ffffffff814b97f1 T acpi_os_initialize1
ffffffff814b98e0 T acpi_wakeup_device_init
ffffffff814b994a T acpi_sleep_init
ffffffff814b9a34 t acpi_init
ffffffff814b9ce5 T acpi_early_init
ffffffff814b9dda T init_acpi_device_notify
ffffffff814b9e1b T acpi_scan_init
ffffffff814b9eee t set_no_mwait
ffffffff814b9f10 t early_init_pdc
ffffffff814b9fba T acpi_early_processor_set_pdc
ffffffff814ba00b T acpi_boot_ec_enable
ffffffff814ba044 T acpi_ec_ecdt_probe
ffffffff814ba214 T acpi_ec_init
ffffffff814ba22d t acpi_pci_root_init
ffffffff814ba257 t acpi_irq_nobalance_set
ffffffff814ba267 t acpi_irq_balance_set
ffffffff814ba277 t acpi_irq_penalty_update
ffffffff814ba2e0 t acpi_irq_pci
ffffffff814ba2e4 t acpi_irq_isa
ffffffff814ba2eb t acpi_pci_link_init
ffffffff814ba327 t irqrouter_init_ops
ffffffff814ba34b T acpi_irq_penalty_init
ffffffff814ba3c5 T acpi_power_init
ffffffff814ba3e6 t acpi_event_init
ffffffff814ba462 T acpi_sysfs_init
ffffffff814ba5a7 t acpi_parse_srat
ffffffff814ba5bd t acpi_table_print_srat_entry
ffffffff814ba5de t acpi_parse_slit
ffffffff814ba647 t acpi_parse_memory_affinity
ffffffff814ba666 t acpi_parse_processor_affinity
ffffffff814ba696 t acpi_parse_x2apic_affinity
ffffffff814ba6b5 T acpi_numa_init
ffffffff814ba75e T acpi_tb_parse_root_table
ffffffff814baa12 t acpi_no_auto_ssdt_setup
ffffffff814baa32 T acpi_initialize_tables
ffffffff814baa8a T acpi_initialize_subsystem
ffffffff814bab2c t acpi_hed_init
ffffffff814bab4f t hest_parse_ghes_count
ffffffff814bab5a t setup_hest_disable
ffffffff814bab64 t hest_parse_ghes
ffffffff814bac32 T acpi_hest_init
ffffffff814bad4c t setup_erst_disable
ffffffff814bad59 t erst_init
ffffffff814baffd t ghes_init
ffffffff814bb165 t pnp_init
ffffffff814bb178 t pnp_setup_reserve_mem
ffffffff814bb1af t pnp_setup_reserve_io
ffffffff814bb1e6 t pnp_setup_reserve_dma
ffffffff814bb21d t pnp_setup_reserve_irq
ffffffff814bb254 t pnp_system_init
ffffffff814bb260 t pnpacpi_setup
ffffffff814bb28c t acpi_pnp_find_device
ffffffff814bb2c8 t acpi_pnp_match
ffffffff814bb316 t ispnpidacpi
ffffffff814bb374 t pnpacpi_add_device_handler
ffffffff814bb587 t pnpacpi_init
ffffffff814bb610 t pnpacpi_option_resource
ffffffff814bb971 T pnpacpi_parse_resource_option_data
ffffffff814bb9d1 T xen_init_IRQ
ffffffff814bbb29 t balloon_init
ffffffff814bbc84 T xenbus_ring_ops_init
ffffffff814bbca5 t xenbus_init
ffffffff814bbf66 t xenbus_probe_initcall
ffffffff814bbf9c t xenbus_probe_backend_init
ffffffff814bbfc6 t xenbus_init
ffffffff814bbff9 t xenbus_backend_init
ffffffff814bc039 t boot_wait_for_devices
ffffffff814bc066 t xenbus_probe_frontend_init
ffffffff814bc090 t setup_vcpu_hotplug_event
ffffffff814bc0af t balloon_init
ffffffff814bc1a6 t xen_selfballooning_setup
ffffffff814bc1b3 t xen_selfballoon_init
ffffffff814bc22f t hypervisor_subsys_init
ffffffff814bc24f t hyper_sysfs_init
ffffffff814bc351 t platform_pci_module_init
ffffffff814bc366 t enable_tmem
ffffffff814bc373 t no_cleancache
ffffffff814bc380 t xen_tmem_init
ffffffff814bc3dd T xen_swiotlb_init
ffffffff814bc5b2 t register_xen_pci_notifier
ffffffff814bc5de t tty_class_init
ffffffff814bc60f T console_init
ffffffff814bc62f T tty_init
ffffffff814bc766 t pty_init
ffffffff814bc9c4 T vcs_init
ffffffff814bca5a T kbd_init
ffffffff814bcb0f T console_map_init
ffffffff814bcb49 t vtconsole_class_init
ffffffff814bcc20 t con_init
ffffffff814bce6a T vty_init
ffffffff814bcfe4 t hvc_console_setup
ffffffff814bd008 t hvc_console_init
ffffffff814bd019 t xen_hvc_init
ffffffff814bd23f t chr_dev_init
ffffffff814bd2fc t random_int_secret_init
ffffffff814bd312 t misc_init
ffffffff814bd3c9 t nvram_init
ffffffff814bd441 t vga_arb_device_init
ffffffff814bd52b T devices_init
ffffffff814bd5d5 T buses_init
ffffffff814bd629 T classes_init
ffffffff814bd64c T early_platform_driver_register
ffffffff814bd7aa T early_platform_add_devices
ffffffff814bd7ee T early_platform_driver_register_all
ffffffff814bd7f3 T early_platform_driver_probe
ffffffff814bda82 T early_platform_cleanup
ffffffff814bdb28 T platform_bus_init
ffffffff814bdb71 T cpu_dev_init
ffffffff814bdba4 T firmware_init
ffffffff814bdbc5 T driver_init
ffffffff814bdbef t mount_param
ffffffff814bdc06 T devtmpfs_init
ffffffff814bdcd3 t wakeup_sources_debugfs_init
ffffffff814bdce1 t firmware_class_init
ffffffff814bdcf4 t register_node_type
ffffffff814bdd07 T hypervisor_init
ffffffff814bdd28 t init_scsi
ffffffff814bddaa T scsi_init_queue
ffffffff814bdecf T scsi_init_devinfo
ffffffff814bdf5d T scsi_init_sysctl
ffffffff814bdf7c T scsi_init_procfs
ffffffff814bdfd4 t init_sd
ffffffff814be0fa t ata_init
ffffffff814be4a6 T libata_transport_init
ffffffff814be507 T ata_sff_init
ffffffff814be537 t probe_list2
ffffffff814be57f t net_olddevs_init
ffffffff814be612 t serio_init
ffffffff814be640 t i8042_aux_test_irq
ffffffff814be6ef t i8042_create_aux_port
ffffffff814be7f2 t i8042_free_aux_ports
ffffffff814be818 t i8042_toggle_aux
ffffffff814be88c t i8042_init
ffffffff814bec4f t i8042_probe
ffffffff814bf27e t input_init
ffffffff814bf380 t mousedev_init
ffffffff814bf3d7 t atkbd_setup_forced_release
ffffffff814bf3f6 t atkbd_setup_scancode_fixup
ffffffff814bf40a t atkbd_init
ffffffff814bf42d t psmouse_init
ffffffff814bf4a9 T synaptics_module_init
ffffffff814bf4d7 T lifebook_module_init
ffffffff814bf4ef t rtc_hctosys
ffffffff814bf608 t rtc_init
ffffffff814bf66f T rtc_dev_init
ffffffff814bf6a6 T rtc_sysfs_init
ffffffff814bf6af t cmos_init
ffffffff814bf71a t cmos_platform_probe
ffffffff814bf764 t init_cpufreq_transition_notifier_list
ffffffff814bf77c t cpufreq_core_init
ffffffff814bf833 t cpufreq_stats_init
ffffffff814bf8c3 t cpufreq_gov_performance_init
ffffffff814bf8cf t cpuidle_init
ffffffff814bf905 t cpuidle_sysfs_setup
ffffffff814bf915 t init_ladder
ffffffff814bf921 t print_filtered
ffffffff814bf970 t dmi_string_nosave
ffffffff814bf9f4 t dmi_string
ffffffff814bfa61 t dmi_save_ident
ffffffff814bfa8b t dmi_save_one_device
ffffffff814bfb17 t dmi_decode
ffffffff814bff9a T dmi_scan_machine
ffffffff814c01d6 T find_ibft_region
ffffffff814c02d5 t memmap_init
ffffffff814c0303 T firmware_map_add_early
ffffffff814c039b t acpi_pm_good_setup
ffffffff814c03ab t parse_pmtmr
ffffffff814c0403 t init_acpi_pm_clocksource
ffffffff814c04d6 T clockevent_i8253_init
ffffffff814c0524 t hid_init
ffffffff814c0567 t alloc_passthrough_domain.part.23
ffffffff814c058c T amd_iommu_uninit_devices
ffffffff814c05d3 T amd_iommu_init_devices
ffffffff814c063a T amd_iommu_init_api
ffffffff814c064d T amd_iommu_init_dma_ops
ffffffff814c08f0 T amd_iommu_init_passthrough
ffffffff814c0986 t set_device_exclusion_range
ffffffff814c09cd t early_amd_iommu_detect
ffffffff814c09d0 t parse_amd_iommu_dump
ffffffff814c09dd t parse_amd_iommu_options
ffffffff814c0a4f T amd_iommu_detect
ffffffff814c0ac2 t init_memory_definitions
ffffffff814c0cdf t free_on_init_error
ffffffff814c0e70 t find_last_devid_acpi
ffffffff814c0f8f t set_dev_entry_from_acpi.isra.23
ffffffff814c1096 t init_iommu_all
ffffffff814c19cd T amd_iommu_init_hardware
ffffffff814c1c23 t amd_iommu_init
ffffffff814c1ca7 t pcibios_allocate_bus_resources
ffffffff814c1d2b t pcibios_assign_resources
ffffffff814c1e1e t pcibios_allocate_resources
ffffffff814c2063 T pcibios_resource_survey
ffffffff814c208e t pci_arch_init
ffffffff814c20ed T pci_mmcfg_arch_free
ffffffff814c2128 T pci_mmcfg_arch_init
ffffffff814c21a5 t pci_sanity_check.isra.0
ffffffff814c225f T pci_direct_init
ffffffff814c22c0 T pci_direct_probe
ffffffff814c2493 t is_mmconf_reserved
ffffffff814c25a5 t pci_mmconfig_add
ffffffff814c26c7 t pci_mmcfg_intel_945
ffffffff814c2766 t pci_mmcfg_e7520
ffffffff814c27d0 t free_all_mmcfg
ffffffff814c2839 t pci_mmcfg_amd_fam10h
ffffffff814c28ee t is_acpi_reserved
ffffffff814c294d t find_mboard_resource
ffffffff814c2977 t pci_mmcfg_late_insert_resources
ffffffff814c29c9 t pci_mmcfg_nvidia_mcp55
ffffffff814c2ac1 t __pci_mmcfg_init
ffffffff814c2cc2 t check_mcfg_resource
ffffffff814c2d3f t pci_parse_mcfg
ffffffff814c2e75 T pci_mmcfg_early_init
ffffffff814c2e7f T pci_mmcfg_late_init
ffffffff814c2e86 T pci_xen_init
ffffffff814c2efc T pci_xen_hvm_init
ffffffff814c2f32 T pci_xen_initial_domain
ffffffff814c307b t set_use_crs
ffffffff814c3085 t set_nouse_crs
ffffffff814c308f T pci_acpi_crs_quirks
ffffffff814c3129 T pci_acpi_init
ffffffff814c31b5 T pci_legacy_init
ffffffff814c31f0 T pci_subsys_init
ffffffff814c3233 t via_router_probe
ffffffff814c32c6 t vlsi_router_probe
ffffffff814c32ed t serverworks_router_probe
ffffffff814c3314 t sis_router_probe
ffffffff814c3336 t cyrix_router_probe
ffffffff814c335c t opti_router_probe
ffffffff814c3383 t ite_router_probe
ffffffff814c33aa t ali_router_probe
ffffffff814c33d8 t amd_router_probe
ffffffff814c341f t pico_router_probe
ffffffff814c346b t pirq_peer_trick
ffffffff814c3515 t fix_acer_tm360_irqrouting
ffffffff814c353f t fix_broken_hp_bios_irq9
ffffffff814c356a t intel_router_probe
ffffffff814c3787 T pcibios_fixup_irqs
ffffffff814c384d T pcibios_irq_init
ffffffff814c3ab4 T dmi_check_skip_isa_align
ffffffff814c3ac0 T dmi_check_pciprobe
ffffffff814c3acc T pcibios_set_cache_line_size
ffffffff814c3b0d T pcibios_init
ffffffff814c3b45 t early_fill_mp_bus_info
ffffffff814c42db t amd_postcore_init
ffffffff814c440f t sock_init
ffffffff814c4481 t proto_init
ffffffff814c448d t net_inuse_init
ffffffff814c44b2 T sk_init
ffffffff814c4507 T skb_init
ffffffff814c454e t net_ns_init
ffffffff814c4635 t net_secret_init
ffffffff814c464b t sysctl_core_init
ffffffff814c467f t initialize_hashrnd
ffffffff814c4695 t net_dev_init
ffffffff814c48c1 T netdev_boot_setup
ffffffff814c49fb T dev_mcast_init
ffffffff814c4a07 T dst_init
ffffffff814c4a13 t neigh_init
ffffffff814c4a90 T rtnetlink_init
ffffffff814c4bab t sock_diag_init
ffffffff814c4bde t flow_cache_init_global
ffffffff814c4d6e t netlink_proto_init
ffffffff814c4f11 t genl_init
ffffffff814c4f9f t set_rhash_entries
ffffffff814c4fcc T ip_rt_init
ffffffff814c51e1 T ip_static_sysctl_init
ffffffff814c51f4 T inet_initpeers
ffffffff814c52a4 T ipfrag_init
ffffffff814c5329 T ip_init
ffffffff814c5337 t set_thash_entries
ffffffff814c5364 T tcp_init
ffffffff814c56b4 T tcp4_proc_init
ffffffff814c56c0 T tcp_v4_init
ffffffff814c56ed t tcp_congestion_default
ffffffff814c56f9 T raw_proc_init
ffffffff814c5705 T raw_proc_exit
ffffffff814c5711 t set_uhash_entries
ffffffff814c5756 T udp4_proc_init
ffffffff814c5762 T udp_table_init
ffffffff814c5862 T udp_init
ffffffff814c58c6 T udplite4_register
ffffffff814c595a T arp_init
ffffffff814c59a5 T icmp_init
ffffffff814c59b1 T devinet_init
ffffffff814c5a52 t inet_init
ffffffff814c5ca8 T igmp_mc_proc_init
ffffffff814c5cb4 T ip_fib_init
ffffffff814c5d30 T fib_trie_init
ffffffff814c5d79 T ping_proc_init
ffffffff814c5d85 T ping_init
ffffffff814c5dab t sysctl_ipv4_init
ffffffff814c5e28 T ip_misc_proc_init
ffffffff814c5e34 t init_syncookies
ffffffff814c5e4c t cubictcp_register
ffffffff814c5eb0 T xfrm4_init
ffffffff814c5f0a T xfrm4_state_init
ffffffff814c5f16 T xfrm_init
ffffffff814c5f29 T xfrm_input_init
ffffffff814c5f50 t af_unix_init
ffffffff814c5f9b t net_sysctl_init
ffffffff814c5fe5 t save_mr
ffffffff814c6022 t phys_pte_init
ffffffff814c6156 t phys_pmd_init
ffffffff814c642b t phys_pud_init
ffffffff814c670e T kernel_physical_mapping_init
ffffffff814c68f9 T vmemmap_populate
ffffffff814c6b3b T vmemmap_populate_print_last
ffffffff814c6b9b t adjust_zone_range_for_zone_movable.isra.44
ffffffff814c6be3 T init_currently_empty_zone
ffffffff814c6cdf T __early_pfn_to_nid
ffffffff814c6d4d T early_pfn_to_nid
ffffffff814c6d5e T early_pfn_in_nid
ffffffff814c6d79 T get_pfn_range_for_nid
ffffffff814c6e15 t zone_spanned_pages_in_node.isra.60
ffffffff814c6e9a T __absent_pages_in_range
ffffffff814c6f4d t zone_absent_pages_in_node.isra.61
ffffffff814c6fd8 T init_per_zone_wmark_min
ffffffff814c7059 T memmap_init_zone
ffffffff814c723d T free_area_init_node
ffffffff814c7558 T __free_pages_bootmem
ffffffff814c75a3 T mminit_verify_page_links
ffffffff814c75e8 t memblock_search.isra.4
ffffffff814c7619 t memblock_dump.isra.5
ffffffff814c770b t memblock_insert_region
ffffffff814c7767 t memblock_merge_regions.isra.7
ffffffff814c77e5 T get_allocated_memblock_reserved_regions_info
ffffffff814c7819 T __next_free_mem_range
ffffffff814c7972 T __next_free_mem_range_rev
ffffffff814c7aa1 T memblock_find_in_range_node
ffffffff814c7b92 T memblock_find_in_range
ffffffff814c7b9d t memblock_double_array
ffffffff814c7df7 t memblock_isolate_range
ffffffff814c7f3b t __memblock_remove
ffffffff814c8006 T memblock_free
ffffffff814c8047 T memblock_remove
ffffffff814c8059 t memblock_add_region
ffffffff814c81c7 T memblock_reserve
ffffffff814c820d T memblock_add
ffffffff814c8224 T memblock_add_node
ffffffff814c8238 T __next_mem_pfn_range
ffffffff814c82ba T memblock_set_node
ffffffff814c8323 T memblock_start_of_DRAM
ffffffff814c832e T memblock_end_of_DRAM
ffffffff814c834d T memblock_is_memory
ffffffff814c836c T memblock_is_region_memory
ffffffff814c83cc T memblock_is_region_reserved
ffffffff814c842b T memblock_set_current_limit
ffffffff814c8433 T __memblock_dump_all
ffffffff814c8493 T mminit_validate_memmodel_limits
ffffffff814c8582 T vmemmap_alloc_block
ffffffff814c865e T vmemmap_alloc_block_buf
ffffffff814c8692 T vmemmap_verify
ffffffff814c86ee T vmemmap_pte_populate
ffffffff814c8798 T vmemmap_pmd_populate
ffffffff814c8836 T vmemmap_pud_populate
ffffffff814c88d2 T vmemmap_pgd_populate
ffffffff814c894f T vmemmap_populate_basepages
ffffffff814c89e0 T sparse_mem_map_populate
ffffffff814c8a14 T firmware_map_add_hotplug
ffffffff814c8aa2 T _einittext
ffffffff814c8ac0 T boot_command_line
ffffffff814c92c0 T late_time_init
ffffffff814c92c8 t done.37417
ffffffff814c92e0 t tmp_cmdline.37418
ffffffff814c9ae0 t kthreadd_done
ffffffff814c9b00 t initcall_level_names
ffffffff814c9b40 t initcall_levels
ffffffff814c9ba0 T rd_doload
ffffffff814c9bc0 t saved_root_name
ffffffff814c9c00 t root_mount_data
ffffffff814c9c08 t root_fs_names
ffffffff814c9c10 t root_delay
ffffffff814c9c18 t root_device_name
ffffffff814c9c20 t mount_initrd
ffffffff814c9c40 t do_retain_initrd
ffffffff814c9c48 t header_buf
ffffffff814c9c50 t symlink_buf
ffffffff814c9c58 t name_buf
ffffffff814c9c60 t state
ffffffff814c9c68 t this_header
ffffffff814c9c70 t message
ffffffff814c9c78 t count
ffffffff814c9c80 t victim
ffffffff814c9ca0 t actions
ffffffff814c9ce0 t msg_buf.27339
ffffffff814c9d20 t dir_list
ffffffff814c9d30 t collected
ffffffff814c9d38 t name_len
ffffffff814c9d40 t body_len
ffffffff814c9d48 t gid
ffffffff814c9d4c t uid
ffffffff814c9d50 t mtime
ffffffff814c9d58 t next_state
ffffffff814c9d5c t wfd
ffffffff814c9d60 t vcollected
ffffffff814c9d80 t head
ffffffff814c9e80 t mode
ffffffff814c9e88 t nlink
ffffffff814c9e90 t rdev
ffffffff814c9e98 t ino
ffffffff814c9ea0 t minor
ffffffff814c9ea8 t major
ffffffff814c9eb0 t next_header
ffffffff814c9eb8 t collect
ffffffff814c9ec0 t remains
ffffffff814c9ee0 T xen_extra_mem
ffffffff814ca6e0 t map.22252
ffffffff814cb0e0 T boot_params
ffffffff814cc0e0 t _brk_start
ffffffff814cc100 t command_line
ffffffff814cc900 T x86_init
ffffffff814cc9e0 T sbf_port
ffffffff814cca00 t userdef
ffffffff814cca20 t change_point_list.26921
ffffffff814cda20 t change_point.26922
ffffffff814ce220 t overlap_list.26923
ffffffff814ce620 t new_bios.26924
ffffffff814cf020 t e820_res
ffffffff814cf040 t io_delay_override
ffffffff814cf060 t io_delay_0xed_port_dmi_table
ffffffff814cf880 t __quirk.26039
ffffffff814cf890 t __quirk.26043
ffffffff814cf8a0 t __quirk.26052
ffffffff814cf8b0 t __quirk.26060
ffffffff814cf8c0 T changed_by_mtrr_cleanup
ffffffff814cf8c4 t last_fixed_end
ffffffff814cf8c8 t last_fixed_start
ffffffff814cf8cc t last_fixed_type
ffffffff814cf8e0 t enable_mtrr_cleanup
ffffffff814cf900 t range_state
ffffffff814d1100 t range
ffffffff814d2100 t nr_range
ffffffff814d2108 t range_sums
ffffffff814d2110 t mtrr_chunk_size
ffffffff814d2118 t mtrr_gran_size
ffffffff814d2120 t result
ffffffff814d3220 t min_loss_pfn
ffffffff814d3a20 t debug_print
ffffffff814d3a28 t nr_mtrr_spare_reg
ffffffff814d3a40 T acpi_fix_pin2_polarity
ffffffff814d3a44 T acpi_use_timer_override
ffffffff814d3a48 T acpi_skip_timer_override
ffffffff814d3a4c T acpi_sci_override_gsi
ffffffff814d3a50 T acpi_sci_flags
ffffffff814d3a60 t acpi_dmi_table
ffffffff814d43c8 t acpi_force
ffffffff814d43d0 t acpi_lapic_addr
ffffffff814d43e0 t acpi_dmi_table_late
ffffffff814d4c00 t pci_reboot_dmi_table
ffffffff814d5980 t early_qrk
ffffffff814d5a40 t setup_possible_cpus
ffffffff814d5a60 t alloc_mptable
ffffffff814d5a68 t mpc_new_length
ffffffff814d5a70 t mpc_new_phys
ffffffff814d5a80 t irq_used
ffffffff814d5e80 t m_spare
ffffffff814d5f20 T x86_bios_cpu_apicid_early_map
ffffffff814d5f30 T x86_cpu_to_apicid_early_map
ffffffff814d5f40 t disable_apic_timer
ffffffff814d5f44 t lapic_cal_loops
ffffffff814d5f48 t lapic_cal_t1
ffffffff814d5f50 t lapic_cal_t2
ffffffff814d5f58 t lapic_cal_tsc2
ffffffff814d5f60 t lapic_cal_tsc1
ffffffff814d5f68 t lapic_cal_pm2
ffffffff814d5f70 t lapic_cal_pm1
ffffffff814d5f78 t lapic_cal_j2
ffffffff814d5f80 t lapic_cal_j1
ffffffff814d5f88 t apic_calibrate_pmtmr
ffffffff814d5f8c T timer_through_8259
ffffffff814d5f90 T no_timer_check
ffffffff814d5f94 t show_lapic
ffffffff814d5f98 t disable_timer_pin_1
ffffffff814d5f9c t early_console_initialized
ffffffff814d5fa0 T fix_aperture
ffffffff814d5fa4 T fallback_aper_force
ffffffff814d5fa8 T fallback_aper_order
ffffffff814d5fac T gart_iommu_aperture_allowed
ffffffff814d5fb0 T gart_iommu_aperture_disabled
ffffffff814d5fb4 t gart_fix_e820
ffffffff814d5fb8 t printed_gart_size_msg
ffffffff814d5fc0 T pgt_buf_start
ffffffff814d5fe0 t early_ioremap_debug
ffffffff814d6000 t prev_map
ffffffff814d6020 t slot_virt
ffffffff814d6040 t after_paging_init
ffffffff814d6060 t prev_size
ffffffff814d6080 T x86_cpu_to_node_map_early_map
ffffffff814d60a0 T numa_nodes_parsed
ffffffff814d60c0 T numa_off
ffffffff814d60e0 t numa_meminfo
ffffffff814d90e8 t nodeids
ffffffff814d90f0 T acpi_numa
ffffffff814d90f4 T vdso32_int80_start
ffffffff814d9790 T vdso32_int80_end
ffffffff814d9790 T vdso32_syscall_start
ffffffff814d9e38 T vdso32_syscall_end
ffffffff814d9e38 T vdso32_sysenter_start
ffffffff814da4b8 t new_log_buf_len
ffffffff814da4b8 T vdso32_sysenter_end
ffffffff814da4c0 t required_kernelcore
ffffffff814da4c8 t required_movablecore
ffffffff814da4e0 T pcpu_chosen_fc
ffffffff814da4f0 T pcpu_fc_names
ffffffff814da520 t cpus_buf.20616
ffffffff814db520 t smap.20617
ffffffff814db720 t dmap.20618
ffffffff814db920 t group_map.20665
ffffffff814db940 t group_cnt.20666
ffffffff814db960 t vm_init_off.24867
ffffffff814db970 T huge_boot_pages
ffffffff814db980 t default_hstate_size
ffffffff814db988 t default_hstate_max_huge_pages
ffffffff814db990 t parsed_hstate
ffffffff814db9a0 t initkmem_list3
ffffffff814ef1a0 t slab_max_order_set
ffffffff814ef1c0 t initarray_cache
ffffffff814ef1e0 t cache_names
ffffffff814ef320 t dhash_entries
ffffffff814ef328 t ihash_entries
ffffffff814ef340 t pcie_portdrv_dmi_table
ffffffff814ef5f0 t pci_realloc_enable
ffffffff814ef600 t logo_linux_mono_data
ffffffff814ef920 t logo_linux_vga16_data
ffffffff814f05a0 t logo_linux_clut224_clut
ffffffff814f07e0 t logo_linux_clut224_data
ffffffff814f20e0 t vesafb_fix
ffffffff814f2140 t vesafb_defined
ffffffff814f21e0 t vram_total
ffffffff814f21e4 t vram_remap
ffffffff814f21f0 t acpi_apic_instance
ffffffff814f2200 t initial_tables
ffffffff814f3200 t acpi_blacklist
ffffffff814f3320 t acpi_osi_dmi_table
ffffffff814f45f0 t osi_setup_entries
ffffffff814f4a00 t dsdt_dmi_table
ffffffff814f4cb0 t processor_idle_dmi_table
ffffffff814f4f60 t ec_dmi_table
ffffffff814f5cd0 T acpi_srat_revision
ffffffff814f5ce0 T pnpacpi_disabled
ffffffff814f5d00 t acpi_pnp_bus
ffffffff814f5d40 t excluded_id_list
ffffffff814f5dc0 t use_selfballooning
ffffffff814f5de0 t use_cleancache
ffffffff814f5e00 t tmem_cleancache_ops
ffffffff814f5e40 t early_platform_driver_list
ffffffff814f5e50 t early_platform_device_list
ffffffff814f5e60 t scsi_static_device_list
ffffffff814f7420 t ata_force_param_buf
ffffffff814f8420 t force_tbl.38377
ffffffff814f8ba0 t m68k_probes
ffffffff814f8bb0 t eisa_probes
ffffffff814f8bc0 t mca_probes
ffffffff814f8bd0 t isa_probes
ffffffff814f8be0 t parport_probes
ffffffff814f8c00 t i8042_aux_irq_delivered
ffffffff814f8c20 t i8042_irq_being_tested
ffffffff814f8c24 t amd_iommu_detected
ffffffff814f8c28 t amd_iommu_init_err
ffffffff814f8c2c t amd_iommu_disabled
ffffffff814f8c40 t pci_mmcfg_resources_inserted
ffffffff814f8c44 t known_bridge
ffffffff814f8c60 t pci_mmcfg_probes
ffffffff814f8cd8 t mcp55_checked
ffffffff814f8ce0 t pciirq_dmi_table
ffffffff814f9100 t pirq_routers
ffffffff814f91c0 t pirq_440gx.29225
ffffffff814f9220 t pci_probes
ffffffff814f9260 t rhash_entries
ffffffff814f9268 t thash_entries
ffffffff814f9270 t uhash_entries
ffffffff814f9278 T pgt_buf_top
ffffffff814f9280 T pgt_buf_end
ffffffff814f9288 t addr_end
ffffffff814f9290 t p_end
ffffffff814f9298 t node_start
ffffffff814f92a0 t p_start
ffffffff814f92a8 t addr_start
ffffffff814f92c0 t arch_zone_lowest_possible_pfn
ffffffff814f92e0 t arch_zone_highest_possible_pfn
ffffffff814f9300 t zone_movable_pfn
ffffffff814f9b00 t dma_reserve
ffffffff814f9b08 t nr_kernel_pages
ffffffff814f9b10 t nr_all_pages
ffffffff814f9b20 T memblock_debug
ffffffff814f9b40 T memblock
ffffffff814f9ba0 t memblock_memory_init_regions
ffffffff814fa7a0 t memblock_reserved_init_regions
ffffffff814fb3a0 t memblock_can_resize
ffffffff814fb3a4 t memblock_memory_in_slab
ffffffff814fb3a8 t memblock_reserved_in_slab
ffffffff814fb3ac t __setup_str_rdinit_setup
ffffffff814fb3b4 t __setup_str_init_setup
ffffffff814fb3ba t __setup_str_loglevel
ffffffff814fb3c3 t __setup_str_quiet_kernel
ffffffff814fb3c9 t __setup_str_debug_kernel
ffffffff814fb3cf t __setup_str_set_reset_devices
ffffffff814fb3dd t __setup_str_root_delay_setup
ffffffff814fb3e8 t __setup_str_fs_names_setup
ffffffff814fb3f4 t __setup_str_root_data_setup
ffffffff814fb3ff t __setup_str_rootwait_setup
ffffffff814fb408 t __setup_str_root_dev_setup
ffffffff814fb40e t __setup_str_readwrite
ffffffff814fb411 t __setup_str_readonly
ffffffff814fb414 t __setup_str_load_ramdisk
ffffffff814fb422 t __setup_str_no_initrd
ffffffff814fb42b t __setup_str_retain_initrd_param
ffffffff814fb439 t __setup_str_lpj_setup
ffffffff814fb440 t __setup_str_parse_xen_emul_unplug
ffffffff814fb460 t xen_smp_ops
ffffffff814fb4c0 T interrupt
ffffffff814fbbc0 t __setup_str_code_bytes_setup
ffffffff814fbbcc t __setup_str_kstack_setup
ffffffff814fbbd3 t __setup_str_setup_unknown_nmi_panic
ffffffff814fbbe5 t __setup_str_parse_reservelow
ffffffff814fbbf0 t __setup_str_control_va_addr_alignment
ffffffff814fbbfe t __setup_str_vsyscall_setup
ffffffff814fbc07 t __setup_str_parse_memmap_opt
ffffffff814fbc0e t __setup_str_parse_memopt
ffffffff814fbc12 t __setup_str_iommu_setup
ffffffff814fbc18 t __setup_str_setup_noreplace_paravirt
ffffffff814fbc2b t __setup_str_setup_noreplace_smp
ffffffff814fbc39 t __setup_str_debug_alt
ffffffff814fbc4b t __setup_str_bootonly
ffffffff814fbc58 t __setup_str_tsc_setup
ffffffff814fbc5d t __setup_str_notsc_setup
ffffffff814fbc63 t __setup_str_io_delay_param
ffffffff814fbc70 t ids.25619
ffffffff814fbc88 t __setup_str_idle_setup
ffffffff814fbca0 t __setup_str_setup_disablecpuid
ffffffff814fbcac t __setup_str_setup_noclflush
ffffffff814fbcb6 t __setup_str_setup_show_msr
ffffffff814fbcc0 t __setup_str_setup_disable_smep
ffffffff814fbcc7 t __setup_str_x86_xsaveopt_setup
ffffffff814fbcd2 t __setup_str_x86_xsave_setup
ffffffff814fbce0 t hypervisors
ffffffff814fbcf8 t __setup_str_x86_rdrand_setup
ffffffff814fbd20 t amd_hw_cache_event_ids
ffffffff814fbe80 t p4_hw_cache_event_ids
ffffffff814fbfe0 t core2_hw_cache_event_ids
ffffffff814fc140 t nehalem_hw_cache_event_ids
ffffffff814fc2a0 t nehalem_hw_cache_extra_regs
ffffffff814fc400 t atom_hw_cache_event_ids
ffffffff814fc560 t westmere_hw_cache_event_ids
ffffffff814fc6c0 t snb_hw_cache_event_ids
ffffffff814fc820 t intel_arch_events_map
ffffffff814fc890 t __setup_str_mcheck_disable
ffffffff814fc896 t __setup_str_mcheck_enable
ffffffff814fc89a t __setup_str_disable_mtrr_trim_setup
ffffffff814fc8ac t __setup_str_parse_mtrr_spare_reg
ffffffff814fc8be t __setup_str_parse_mtrr_gran_size_opt
ffffffff814fc8cd t __setup_str_parse_mtrr_chunk_size_opt
ffffffff814fc8dd t __setup_str_mtrr_cleanup_debug_setup
ffffffff814fc8f0 t __setup_str_enable_mtrr_cleanup_setup
ffffffff814fc904 t __setup_str_disable_mtrr_cleanup_setup
ffffffff814fc919 t __setup_str_setup_acpi_sci
ffffffff814fc922 t __setup_str_parse_acpi_use_timer_override
ffffffff814fc93a t __setup_str_parse_acpi_skip_timer_override
ffffffff814fc953 t __setup_str_parse_pci
ffffffff814fc957 t __setup_str_parse_acpi
ffffffff814fc95c t __setup_str_reboot_setup
ffffffff814fc964 t __setup_str_nonmi_ipi_setup
ffffffff814fc96e t __setup_str__setup_possible_cpus
ffffffff814fc97c t __setup_str_parse_alloc_mptable_opt
ffffffff814fc98a t __setup_str_update_mptable_setup
ffffffff814fc999 t __setup_str_apic_set_verbosity
ffffffff814fc99e t __setup_str_parse_nolapic_timer
ffffffff814fc9ac t __setup_str_parse_disable_apic_timer
ffffffff814fc9b8 t __setup_str_parse_lapic_timer_c2_ok
ffffffff814fc9ca t __setup_str_setup_nolapic
ffffffff814fc9d2 t __setup_str_setup_disableapic
ffffffff814fc9de t __setup_str_setup_apicpmtimer
ffffffff814fc9ea t __setup_str_disable_timer_pin_setup
ffffffff814fc9fe t __setup_str_notimercheck
ffffffff814fca0d t __setup_str_setup_show_lapic
ffffffff814fca19 t __setup_str_parse_noapic
ffffffff814fca20 t bases.10458
ffffffff814fca28 t __setup_str_setup_early_printk
ffffffff814fca34 t __setup_str_disable_hpet
ffffffff814fca3b t __setup_str_hpet_setup
ffffffff814fca41 T amd_nb_bus_dev_ranges
ffffffff814fca4d t __setup_str_parse_gart_mem
ffffffff814fca60 t mmconf_dmi_table
ffffffff814fcd10 t __setup_str_nonx32_setup
ffffffff814fcd1a t __setup_str_parse_direct_gbpages_on
ffffffff814fcd22 t __setup_str_parse_direct_gbpages_off
ffffffff814fcd2c t __setup_str_early_ioremap_debug_setup
ffffffff814fcd40 t __setup_str_pat_debug_setup
ffffffff814fcd49 t __setup_str_nopat
ffffffff814fcd4f t __setup_str_setup_userpte
ffffffff814fcd57 t __setup_str_noexec_setup
ffffffff814fcd5e t __setup_str_setup_hugepagesz
ffffffff814fcd6a t __setup_str_numa_setup
ffffffff814fcd6f t __setup_str_vdso_setup
ffffffff814fcd75 t __setup_str_vdso_setup
ffffffff814fcd7d t __setup_str_coredump_filter_setup
ffffffff814fcd8e t __setup_str_oops_setup
ffffffff814fcd93 t __setup_str_keep_bootcon_setup
ffffffff814fcda0 t __setup_str_console_suspend_disable
ffffffff814fcdb3 t __setup_str_console_setup
ffffffff814fcdbc t __setup_str_ignore_loglevel_setup
ffffffff814fcdcc t __setup_str_log_buf_len_setup
ffffffff814fcdd8 t __setup_str_strict_iomem
ffffffff814fcddf t __setup_str_reserve_setup
ffffffff814fcde8 t __setup_str_file_caps_disable
ffffffff814fcdf5 t __setup_str_setup_print_fatal_signals
ffffffff814fce0a t __setup_str_setup_hrtimer_hres
ffffffff814fce13 t __setup_str_setup_relax_domain_level
ffffffff814fce27 t __setup_str_isolated_cpu_setup
ffffffff814fce31 t __setup_str_ntp_tick_adj_setup
ffffffff814fce3f t __setup_str_boot_override_clock
ffffffff814fce46 t __setup_str_boot_override_clocksource
ffffffff814fce53 t __setup_str_maxcpus
ffffffff814fce5b t __setup_str_nrcpus
ffffffff814fce63 t __setup_str_nosmp
ffffffff814fce69 t __setup_str_cgroup_disable
ffffffff814fce79 t __setup_str_audit_enable
ffffffff814fce80 t __setup_str_setup_forced_irqthreads
ffffffff814fce8b t __setup_str_irqpoll_setup
ffffffff814fce93 t __setup_str_irqfixup_setup
ffffffff814fce9c t __setup_str_noirqdebug_setup
ffffffff814fcea7 t __setup_str_delayacct_setup_disable
ffffffff814fceb3 t __setup_str_set_hashdist
ffffffff814fcebd t __setup_str_cmdline_parse_movablecore
ffffffff814fcec9 t __setup_str_cmdline_parse_kernelcore
ffffffff814fced4 t __setup_str_setup_numa_zonelist_order
ffffffff814fcee8 t __setup_str_set_mminit_loglevel
ffffffff814fcef8 t __setup_str_percpu_alloc_setup
ffffffff814fcf05 t __setup_str_disable_randmaps
ffffffff814fcf10 t __setup_str_early_memblock
ffffffff814fcf19 t __setup_str_hugetlb_default_setup
ffffffff814fcf2d t __setup_str_hugetlb_nrpages_setup
ffffffff814fcf38 t __setup_str_slab_max_order_setup
ffffffff814fcf48 t __setup_str_noaliencache_setup
ffffffff814fcf55 t __setup_str_setup_transparent_hugepage
ffffffff814fcf6b t __setup_str_set_dhash_entries
ffffffff814fcf7a t __setup_str_set_ihash_entries
ffffffff814fcf89 t __setup_str_elevator_setup
ffffffff814fcf93 t __setup_str_setup_io_tlb_npages
ffffffff814fcfa0 t __setup_str_pci_setup
ffffffff814fcfa4 t __setup_str_pcie_aspm_disable
ffffffff814fcfaf t __setup_str_pciehp_setup
ffffffff814fcfb8 t __setup_str_pcie_port_setup
ffffffff814fcfc4 t __setup_str_pcie_pme_setup
ffffffff814fcfe0 t __setup_str_video_setup
ffffffff814fcfe7 t __setup_str_no_scroll
ffffffff814fcff1 t __setup_str_text_mode
ffffffff814fcffb t __setup_str_fb_console_setup
ffffffff814fd020 T logo_linux_mono
ffffffff814fd040 T logo_linux_vga16
ffffffff814fd060 T logo_linux_clut224
ffffffff814fd080 t __setup_str_acpi_parse_apic_instance
ffffffff814fd093 t __setup_str_acpi_enforce_resources_setup
ffffffff814fd0ab t __setup_str_acpi_serialize_setup
ffffffff814fd0ba t __setup_str_osi_setup
ffffffff814fd0c4 t __setup_str_acpi_os_name_setup
ffffffff814fd0d2 t __setup_str_acpi_irq_balance_set
ffffffff814fd0e3 t __setup_str_acpi_irq_nobalance_set
ffffffff814fd0f6 t __setup_str_acpi_irq_pci
ffffffff814fd104 t __setup_str_acpi_irq_isa
ffffffff814fd112 t __setup_str_acpi_no_auto_ssdt_setup
ffffffff814fd124 t __setup_str_setup_hest_disable
ffffffff814fd131 t __setup_str_setup_erst_disable
ffffffff814fd13e t __setup_str_pnp_setup_reserve_mem
ffffffff814fd14f t __setup_str_pnp_setup_reserve_io
ffffffff814fd15f t __setup_str_pnp_setup_reserve_dma
ffffffff814fd170 t __setup_str_pnp_setup_reserve_irq
ffffffff814fd181 t __setup_str_pnpacpi_setup
ffffffff814fd18a t __setup_str_xen_selfballooning_setup
ffffffff814fd199 t __setup_str_no_cleancache
ffffffff814fd1a6 t __setup_str_enable_tmem
ffffffff814fd1ab t __setup_str_mount_param
ffffffff814fd1c0 t i8042_dmi_reset_table
ffffffff814fe1e0 t i8042_dmi_noloop_table
ffffffff814ff620 t i8042_dmi_nomux_table
ffffffff81502280 t i8042_dmi_notimeout_table
ffffffff815026a0 t i8042_dmi_dritek_table
ffffffff81503580 t i8042_dmi_nopnp_table
ffffffff815039a0 t i8042_dmi_laptop_table
ffffffff81504060 t atkbd_dmi_quirk_table
ffffffff81505740 t toshiba_dmi_table
ffffffff81505e00 t olpc_dmi_table
ffffffff81505f60 t lifebook_dmi_table
ffffffff815070d8 t __setup_str_cpuidle_sysfs_setup
ffffffff815070ed t __setup_str_parse_pmtmr
ffffffff815070f4 t __setup_str_acpi_pm_good_setup
ffffffff81507101 t __setup_str_parse_amd_iommu_options
ffffffff8150710c t __setup_str_parse_amd_iommu_dump
ffffffff81507120 t pci_use_crs_table
ffffffff81507a88 t __setup_str_netdev_boot_setup
ffffffff81507a90 t __setup_str_netdev_boot_setup
ffffffff81507a97 t __setup_str_set_rhash_entries
ffffffff81507aa6 t __setup_str_set_thash_entries
ffffffff81507ab5 t __setup_str_set_uhash_entries
ffffffff81507ae0 T __dtb_end
ffffffff81507ae0 T __dtb_start
ffffffff81507ae0 t __setup_rdinit_setup
ffffffff81507ae0 T __setup_start
ffffffff81507af8 t __setup_init_setup
ffffffff81507b10 t __setup_loglevel
ffffffff81507b28 t __setup_quiet_kernel
ffffffff81507b40 t __setup_debug_kernel
ffffffff81507b58 t __setup_set_reset_devices
ffffffff81507b70 t __setup_root_delay_setup
ffffffff81507b88 t __setup_fs_names_setup
ffffffff81507ba0 t __setup_root_data_setup
ffffffff81507bb8 t __setup_rootwait_setup
ffffffff81507bd0 t __setup_root_dev_setup
ffffffff81507be8 t __setup_readwrite
ffffffff81507c00 t __setup_readonly
ffffffff81507c18 t __setup_load_ramdisk
ffffffff81507c30 t __setup_no_initrd
ffffffff81507c48 t __setup_retain_initrd_param
ffffffff81507c60 t __setup_lpj_setup
ffffffff81507c78 t __setup_parse_xen_emul_unplug
ffffffff81507c90 t __setup_code_bytes_setup
ffffffff81507ca8 t __setup_kstack_setup
ffffffff81507cc0 t __setup_setup_unknown_nmi_panic
ffffffff81507cd8 t __setup_parse_reservelow
ffffffff81507cf0 t __setup_control_va_addr_alignment
ffffffff81507d08 t __setup_vsyscall_setup
ffffffff81507d20 t __setup_parse_memmap_opt
ffffffff81507d38 t __setup_parse_memopt
ffffffff81507d50 t __setup_iommu_setup
ffffffff81507d68 t __setup_setup_noreplace_paravirt
ffffffff81507d80 t __setup_setup_noreplace_smp
ffffffff81507d98 t __setup_debug_alt
ffffffff81507db0 t __setup_bootonly
ffffffff81507dc8 t __setup_tsc_setup
ffffffff81507de0 t __setup_notsc_setup
ffffffff81507df8 t __setup_io_delay_param
ffffffff81507e10 t __setup_idle_setup
ffffffff81507e28 t __setup_setup_disablecpuid
ffffffff81507e40 t __setup_setup_noclflush
ffffffff81507e58 t __setup_setup_show_msr
ffffffff81507e70 t __setup_setup_disable_smep
ffffffff81507e88 t __setup_x86_xsaveopt_setup
ffffffff81507ea0 t __setup_x86_xsave_setup
ffffffff81507eb8 t __setup_x86_rdrand_setup
ffffffff81507ed0 t __setup_mcheck_disable
ffffffff81507ee8 t __setup_mcheck_enable
ffffffff81507f00 t __setup_disable_mtrr_trim_setup
ffffffff81507f18 t __setup_parse_mtrr_spare_reg
ffffffff81507f30 t __setup_parse_mtrr_gran_size_opt
ffffffff81507f48 t __setup_parse_mtrr_chunk_size_opt
ffffffff81507f60 t __setup_mtrr_cleanup_debug_setup
ffffffff81507f78 t __setup_enable_mtrr_cleanup_setup
ffffffff81507f90 t __setup_disable_mtrr_cleanup_setup
ffffffff81507fa8 t __setup_setup_acpi_sci
ffffffff81507fc0 t __setup_parse_acpi_use_timer_override
ffffffff81507fd8 t __setup_parse_acpi_skip_timer_override
ffffffff81507ff0 t __setup_parse_pci
ffffffff81508008 t __setup_parse_acpi
ffffffff81508020 t __setup_reboot_setup
ffffffff81508038 t __setup_nonmi_ipi_setup
ffffffff81508050 t __setup__setup_possible_cpus
ffffffff81508068 t __setup_parse_alloc_mptable_opt
ffffffff81508080 t __setup_update_mptable_setup
ffffffff81508098 t __setup_apic_set_verbosity
ffffffff815080b0 t __setup_parse_nolapic_timer
ffffffff815080c8 t __setup_parse_disable_apic_timer
ffffffff815080e0 t __setup_parse_lapic_timer_c2_ok
ffffffff815080f8 t __setup_setup_nolapic
ffffffff81508110 t __setup_setup_disableapic
ffffffff81508128 t __setup_setup_apicpmtimer
ffffffff81508140 t __setup_disable_timer_pin_setup
ffffffff81508158 t __setup_notimercheck
ffffffff81508170 t __setup_setup_show_lapic
ffffffff81508188 t __setup_parse_noapic
ffffffff815081a0 t __setup_setup_early_printk
ffffffff815081b8 t __setup_disable_hpet
ffffffff815081d0 t __setup_hpet_setup
ffffffff815081e8 t __setup_parse_gart_mem
ffffffff81508200 t __setup_nonx32_setup
ffffffff81508218 t __setup_parse_direct_gbpages_on
ffffffff81508230 t __setup_parse_direct_gbpages_off
ffffffff81508248 t __setup_early_ioremap_debug_setup
ffffffff81508260 t __setup_pat_debug_setup
ffffffff81508278 t __setup_nopat
ffffffff81508290 t __setup_setup_userpte
ffffffff815082a8 t __setup_noexec_setup
ffffffff815082c0 t __setup_setup_hugepagesz
ffffffff815082d8 t __setup_numa_setup
ffffffff815082f0 t __setup_vdso_setup
ffffffff81508308 t __setup_vdso_setup
ffffffff81508320 t __setup_coredump_filter_setup
ffffffff81508338 t __setup_oops_setup
ffffffff81508350 t __setup_keep_bootcon_setup
ffffffff81508368 t __setup_console_suspend_disable
ffffffff81508380 t __setup_console_setup
ffffffff81508398 t __setup_ignore_loglevel_setup
ffffffff815083b0 t __setup_log_buf_len_setup
ffffffff815083c8 t __setup_strict_iomem
ffffffff815083e0 t __setup_reserve_setup
ffffffff815083f8 t __setup_file_caps_disable
ffffffff81508410 t __setup_setup_print_fatal_signals
ffffffff81508428 t __setup_setup_hrtimer_hres
ffffffff81508440 t __setup_setup_relax_domain_level
ffffffff81508458 t __setup_isolated_cpu_setup
ffffffff81508470 t __setup_ntp_tick_adj_setup
ffffffff81508488 t __setup_boot_override_clock
ffffffff815084a0 t __setup_boot_override_clocksource
ffffffff815084b8 t __setup_maxcpus
ffffffff815084d0 t __setup_nrcpus
ffffffff815084e8 t __setup_nosmp
ffffffff81508500 t __setup_cgroup_disable
ffffffff81508518 t __setup_audit_enable
ffffffff81508530 t __setup_setup_forced_irqthreads
ffffffff81508548 t __setup_irqpoll_setup
ffffffff81508560 t __setup_irqfixup_setup
ffffffff81508578 t __setup_noirqdebug_setup
ffffffff81508590 t __setup_delayacct_setup_disable
ffffffff815085a8 t __setup_set_hashdist
ffffffff815085c0 t __setup_cmdline_parse_movablecore
ffffffff815085d8 t __setup_cmdline_parse_kernelcore
ffffffff815085f0 t __setup_setup_numa_zonelist_order
ffffffff81508608 t __setup_set_mminit_loglevel
ffffffff81508620 t __setup_percpu_alloc_setup
ffffffff81508638 t __setup_disable_randmaps
ffffffff81508650 t __setup_early_memblock
ffffffff81508668 t __setup_hugetlb_default_setup
ffffffff81508680 t __setup_hugetlb_nrpages_setup
ffffffff81508698 t __setup_slab_max_order_setup
ffffffff815086b0 t __setup_noaliencache_setup
ffffffff815086c8 t __setup_setup_transparent_hugepage
ffffffff815086e0 t __setup_set_dhash_entries
ffffffff815086f8 t __setup_set_ihash_entries
ffffffff81508710 t __setup_elevator_setup
ffffffff81508728 t __setup_setup_io_tlb_npages
ffffffff81508740 t __setup_pci_setup
ffffffff81508758 t __setup_pcie_aspm_disable
ffffffff81508770 t __setup_pciehp_setup
ffffffff81508788 t __setup_pcie_port_setup
ffffffff815087a0 t __setup_pcie_pme_setup
ffffffff815087b8 t __setup_video_setup
ffffffff815087d0 t __setup_no_scroll
ffffffff815087e8 t __setup_text_mode
ffffffff81508800 t __setup_fb_console_setup
ffffffff81508818 t __setup_acpi_parse_apic_instance
ffffffff81508830 t __setup_acpi_enforce_resources_setup
ffffffff81508848 t __setup_acpi_serialize_setup
ffffffff81508860 t __setup_osi_setup
ffffffff81508878 t __setup_acpi_os_name_setup
ffffffff81508890 t __setup_acpi_irq_balance_set
ffffffff815088a8 t __setup_acpi_irq_nobalance_set
ffffffff815088c0 t __setup_acpi_irq_pci
ffffffff815088d8 t __setup_acpi_irq_isa
ffffffff815088f0 t __setup_acpi_no_auto_ssdt_setup
ffffffff81508908 t __setup_setup_hest_disable
ffffffff81508920 t __setup_setup_erst_disable
ffffffff81508938 t __setup_pnp_setup_reserve_mem
ffffffff81508950 t __setup_pnp_setup_reserve_io
ffffffff81508968 t __setup_pnp_setup_reserve_dma
ffffffff81508980 t __setup_pnp_setup_reserve_irq
ffffffff81508998 t __setup_pnpacpi_setup
ffffffff815089b0 t __setup_xen_selfballooning_setup
ffffffff815089c8 t __setup_no_cleancache
ffffffff815089e0 t __setup_enable_tmem
ffffffff815089f8 t __setup_mount_param
ffffffff81508a10 t __setup_cpuidle_sysfs_setup
ffffffff81508a28 t __setup_parse_pmtmr
ffffffff81508a40 t __setup_acpi_pm_good_setup
ffffffff81508a58 t __setup_parse_amd_iommu_options
ffffffff81508a70 t __setup_parse_amd_iommu_dump
ffffffff81508a88 t __setup_netdev_boot_setup
ffffffff81508aa0 t __setup_netdev_boot_setup
ffffffff81508ab8 t __setup_set_rhash_entries
ffffffff81508ad0 t __setup_set_thash_entries
ffffffff81508ae8 t __setup_set_uhash_entries
ffffffff81508b00 t __initcall_init_hw_perf_eventsearly
ffffffff81508b00 T __initcall_start
ffffffff81508b00 T __setup_end
ffffffff81508b08 t __initcall_spawn_ksoftirqdearly
ffffffff81508b10 t __initcall_init_workqueuesearly
ffffffff81508b18 t __initcall_migration_initearly
ffffffff81508b20 t __initcall_cpu_stop_initearly
ffffffff81508b28 t __initcall_rcu_scheduler_really_startedearly
ffffffff81508b30 T __initcall0_start
ffffffff81508b30 t __initcall_ipc_ns_init0
ffffffff81508b38 t __initcall_init_mmap_min_addr0
ffffffff81508b40 t __initcall_init_cpufreq_transition_notifier_list0
ffffffff81508b48 t __initcall_net_ns_init0
ffffffff81508b50 T __initcall1_start
ffffffff81508b50 t __initcall_e820_mark_nvs_memory1
ffffffff81508b58 t __initcall_cpufreq_tsc1
ffffffff81508b60 t __initcall_pci_reboot_init1
ffffffff81508b68 t __initcall_init_lapic_sysfs1
ffffffff81508b70 t __initcall_init_smp_flush1
ffffffff81508b78 t __initcall_cpu_hotplug_pm_sync_init1
ffffffff81508b80 t __initcall_alloc_frozen_cpus1
ffffffff81508b88 t __initcall_ksysfs_init1
ffffffff81508b90 t __initcall_pm_init1
ffffffff81508b98 t __initcall_init_jiffies_clocksource1
ffffffff81508ba0 t __initcall_init_zero_pfn1
ffffffff81508ba8 t __initcall_memory_failure_init1
ffffffff81508bb0 t __initcall_fsnotify_init1
ffffffff81508bb8 t __initcall_filelock_init1
ffffffff81508bc0 t __initcall_init_script_binfmt1
ffffffff81508bc8 t __initcall_init_elf_binfmt1
ffffffff81508bd0 t __initcall_init_compat_elf_binfmt1
ffffffff81508bd8 t __initcall_random32_init1
ffffffff81508be0 t __initcall___gnttab_init1
ffffffff81508be8 t __initcall_cpufreq_core_init1
ffffffff81508bf0 t __initcall_cpuidle_init1
ffffffff81508bf8 t __initcall_sock_init1
ffffffff81508c00 t __initcall_net_inuse_init1
ffffffff81508c08 t __initcall_netlink_proto_init1
ffffffff81508c10 T __initcall2_start
ffffffff81508c10 t __initcall_bdi_class_init2
ffffffff81508c18 t __initcall_kobject_uevent_init2
ffffffff81508c20 t __initcall_pcibus_class_init2
ffffffff81508c28 t __initcall_pci_driver_init2
ffffffff81508c30 t __initcall_xenbus_init2
ffffffff81508c38 t __initcall_tty_class_init2
ffffffff81508c40 t __initcall_vtconsole_class_init2
ffffffff81508c48 t __initcall_wakeup_sources_debugfs_init2
ffffffff81508c50 t __initcall_register_node_type2
ffffffff81508c58 t __initcall_amd_postcore_init2
ffffffff81508c60 T __initcall3_start
ffffffff81508c60 t __initcall_arch_kdebugfs_init3
ffffffff81508c68 t __initcall_configure_trampolines3
ffffffff81508c70 t __initcall_mtrr_if_init3
ffffffff81508c78 t __initcall_ffh_cstate_init3
ffffffff81508c80 t __initcall_acpi_pci_init3
ffffffff81508c88 t __initcall_setup_vcpu_hotplug_event3
ffffffff81508c90 t __initcall_register_xen_pci_notifier3
ffffffff81508c98 t __initcall_pci_arch_init3
ffffffff81508ca0 T __initcall4_start
ffffffff81508ca0 t __initcall_topology_init4
ffffffff81508ca8 t __initcall_mtrr_init_finialize4
ffffffff81508cb0 t __initcall_init_vdso4
ffffffff81508cb8 t __initcall_sysenter_setup4
ffffffff81508cc0 t __initcall_param_sysfs_init4
ffffffff81508cc8 t __initcall_default_bdi_init4
ffffffff81508cd0 t __initcall_init_bio4
ffffffff81508cd8 t __initcall_fsnotify_notification_init4
ffffffff81508ce0 t __initcall_cryptomgr_init4
ffffffff81508ce8 t __initcall_blk_settings_init4
ffffffff81508cf0 t __initcall_blk_ioc_init4
ffffffff81508cf8 t __initcall_blk_softirq_init4
ffffffff81508d00 t __initcall_blk_iopoll_setup4
ffffffff81508d08 t __initcall_genhd_device_init4
ffffffff81508d10 t __initcall_pci_slot_init4
ffffffff81508d18 t __initcall_fbmem_init4
ffffffff81508d20 t __initcall_acpi_init4
ffffffff81508d28 t __initcall_acpi_pci_root_init4
ffffffff81508d30 t __initcall_acpi_pci_link_init4
ffffffff81508d38 t __initcall_pnp_init4
ffffffff81508d40 t __initcall_xen_setup_shutdown_event4
ffffffff81508d48 t __initcall_balloon_init4
ffffffff81508d50 t __initcall_xenbus_probe_backend_init4
ffffffff81508d58 t __initcall_xenbus_probe_frontend_init4
ffffffff81508d60 t __initcall_balloon_init4
ffffffff81508d68 t __initcall_xen_selfballoon_init4
ffffffff81508d70 t __initcall_misc_init4
ffffffff81508d78 t __initcall_vga_arb_device_init4
ffffffff81508d80 t __initcall_init_scsi4
ffffffff81508d88 t __initcall_ata_init4
ffffffff81508d90 t __initcall_serio_init4
ffffffff81508d98 t __initcall_input_init4
ffffffff81508da0 t __initcall_rtc_init4
ffffffff81508da8 t __initcall_pci_subsys_init4
ffffffff81508db0 t __initcall_proto_init4
ffffffff81508db8 t __initcall_net_dev_init4
ffffffff81508dc0 t __initcall_neigh_init4
ffffffff81508dc8 t __initcall_genl_init4
ffffffff81508dd0 t __initcall_net_sysctl_init4
ffffffff81508dd8 T __initcall5_start
ffffffff81508dd8 t __initcall_hpet_late_init5
ffffffff81508de0 t __initcall_init_amd_nbs5
ffffffff81508de8 t __initcall_clocksource_done_booting5
ffffffff81508df0 t __initcall_init_pipe_fs5
ffffffff81508df8 t __initcall_eventpoll_init5
ffffffff81508e00 t __initcall_anon_inode_init5
ffffffff81508e08 t __initcall_blk_scsi_ioctl_init5
ffffffff81508e10 t __initcall_acpi_event_init5
ffffffff81508e18 t __initcall_pnp_system_init5
ffffffff81508e20 t __initcall_pnpacpi_init5
ffffffff81508e28 t __initcall_chr_dev_init5
ffffffff81508e30 t __initcall_firmware_class_init5
ffffffff81508e38 t __initcall_cpufreq_gov_performance_init5
ffffffff81508e40 t __initcall_init_acpi_pm_clocksource5
ffffffff81508e48 t __initcall_pcibios_assign_resources5
ffffffff81508e50 t __initcall_sysctl_core_init5
ffffffff81508e58 t __initcall_inet_init5
ffffffff81508e60 t __initcall_af_unix_init5
ffffffff81508e68 t __initcall_pci_apply_final_quirks5s
ffffffff81508e70 t __initcall_populate_rootfsrootfs
ffffffff81508e70 T __initcallrootfs_start
ffffffff81508e78 t __initcall_pci_iommu_initrootfs
ffffffff81508e80 T __initcall6_start
ffffffff81508e80 t __initcall_i8259A_init_ops6
ffffffff81508e88 t __initcall_vsyscall_init6
ffffffff81508e90 t __initcall_sbf_init6
ffffffff81508e98 t __initcall_init_tsc_clocksource6
ffffffff81508ea0 t __initcall_add_rtc_cmos6
ffffffff81508ea8 t __initcall_i8237A_init_ops6
ffffffff81508eb0 t __initcall_cache_sysfs_init6
ffffffff81508eb8 t __initcall_mcheck_init_device6
ffffffff81508ec0 t __initcall_threshold_init_device6
ffffffff81508ec8 t __initcall_amd_ibs_init6
ffffffff81508ed0 t __initcall_ioapic_init_ops6
ffffffff81508ed8 t __initcall_add_pcspkr6
ffffffff81508ee0 t __initcall_audit_classes_init6
ffffffff81508ee8 t __initcall_ia32_binfmt_init6
ffffffff81508ef0 t __initcall_proc_execdomains_init6
ffffffff81508ef8 t __initcall_ioresources_init6
ffffffff81508f00 t __initcall_uid_cache_init6
ffffffff81508f08 t __initcall_init_posix_timers6
ffffffff81508f10 t __initcall_init_posix_cpu_timers6
ffffffff81508f18 t __initcall_timekeeping_init_ops6
ffffffff81508f20 t __initcall_init_clocksource_sysfs6
ffffffff81508f28 t __initcall_init_timer_list_procfs6
ffffffff81508f30 t __initcall_alarmtimer_init6
ffffffff81508f38 t __initcall_futex_init6
ffffffff81508f40 t __initcall_proc_dma_init6
ffffffff81508f48 t __initcall_proc_modules_init6
ffffffff81508f50 t __initcall_kallsyms_init6
ffffffff81508f58 t __initcall_pid_namespaces_init6
ffffffff81508f60 t __initcall_ikconfig_init6
ffffffff81508f68 t __initcall_audit_init6
ffffffff81508f70 t __initcall_audit_watch_init6
ffffffff81508f78 t __initcall_audit_tree_init6
ffffffff81508f80 t __initcall_irq_pm_init_ops6
ffffffff81508f88 t __initcall_utsname_sysctl_init6
ffffffff81508f90 t __initcall_perf_event_sysfs_init6
ffffffff81508f98 t __initcall_init_per_zone_wmark_min6
ffffffff81508fa0 t __initcall_kswapd_init6
ffffffff81508fa8 t __initcall_setup_vmstat6
ffffffff81508fb0 t __initcall_mm_sysfs_init6
ffffffff81508fb8 t __initcall_proc_vmalloc_init6
ffffffff81508fc0 t __initcall_procswaps_init6
ffffffff81508fc8 t __initcall_hugetlb_init6
ffffffff81508fd0 t __initcall_slab_proc_init6
ffffffff81508fd8 t __initcall_cpucache_init6
ffffffff81508fe0 t __initcall_hugepage_init6
ffffffff81508fe8 t __initcall_init_cleancache6
ffffffff81508ff0 t __initcall_fcntl_init6
ffffffff81508ff8 t __initcall_proc_filesystems_init6
ffffffff81509000 t __initcall_dio_init6
ffffffff81509008 t __initcall_fsnotify_mark_init6
ffffffff81509010 t __initcall_dnotify_init6
ffffffff81509018 t __initcall_inotify_user_setup6
ffffffff81509020 t __initcall_fanotify_user_setup6
ffffffff81509028 t __initcall_aio_setup6
ffffffff81509030 t __initcall_proc_locks_init6
ffffffff81509038 t __initcall_init_sys32_ioctl6
ffffffff81509040 t __initcall_proc_cmdline_init6
ffffffff81509048 t __initcall_proc_consoles_init6
ffffffff81509050 t __initcall_proc_cpuinfo_init6
ffffffff81509058 t __initcall_proc_devices_init6
ffffffff81509060 t __initcall_proc_interrupts_init6
ffffffff81509068 t __initcall_proc_loadavg_init6
ffffffff81509070 t __initcall_proc_meminfo_init6
ffffffff81509078 t __initcall_proc_stat_init6
ffffffff81509080 t __initcall_proc_uptime_init6
ffffffff81509088 t __initcall_proc_version_init6
ffffffff81509090 t __initcall_proc_softirqs_init6
ffffffff81509098 t __initcall_proc_kcore_init6
ffffffff815090a0 t __initcall_proc_kmsg_init6
ffffffff815090a8 t __initcall_proc_page_init6
ffffffff815090b0 t __initcall_configfs_init6
ffffffff815090b8 t __initcall_init_devpts_fs6
ffffffff815090c0 t __initcall_init_ramfs_fs6
ffffffff815090c8 t __initcall_init_hugetlbfs_fs6
ffffffff815090d0 t __initcall_init_pstore_fs6
ffffffff815090d8 t __initcall_ipc_init6
ffffffff815090e0 t __initcall_ipc_sysctl_init6
ffffffff815090e8 t __initcall_init_mqueue_fs6
ffffffff815090f0 t __initcall_crypto_wq_init6
ffffffff815090f8 t __initcall_crypto_algapi_init6
ffffffff81509100 t __initcall_skcipher_module_init6
ffffffff81509108 t __initcall_chainiv_module_init6
ffffffff81509110 t __initcall_eseqiv_module_init6
ffffffff81509118 t __initcall_aes_init6
ffffffff81509120 t __initcall_crc32c_mod_init6
ffffffff81509128 t __initcall_krng_mod_init6
ffffffff81509130 t __initcall_proc_genhd_init6
ffffffff81509138 t __initcall_bsg_init6
ffffffff81509140 t __initcall_init_cgroup_blkio6
ffffffff81509148 t __initcall_noop_init6
ffffffff81509150 t __initcall_cfq_init6
ffffffff81509158 t __initcall_percpu_counter_startup6
ffffffff81509160 t __initcall_pci_proc_init6
ffffffff81509168 t __initcall_pcie_portdrv_init6
ffffffff81509170 t __initcall_aer_service_init6
ffffffff81509178 t __initcall_pcie_pme_service_init6
ffffffff81509180 t __initcall_ioapic_init6
ffffffff81509188 t __initcall_fb_console_init6
ffffffff81509190 t __initcall_vesafb_init6
ffffffff81509198 t __initcall_acpi_reserve_resources6
ffffffff815091a0 t __initcall_irqrouter_init_ops6
ffffffff815091a8 t __initcall_acpi_hed_init6
ffffffff815091b0 t __initcall_erst_init6
ffffffff815091b8 t __initcall_ghes_init6
ffffffff815091c0 t __initcall_xenbus_probe_initcall6
ffffffff815091c8 t __initcall_xenbus_init6
ffffffff815091d0 t __initcall_xenbus_backend_init6
ffffffff815091d8 t __initcall_hypervisor_subsys_init6
ffffffff815091e0 t __initcall_hyper_sysfs_init6
ffffffff815091e8 t __initcall_platform_pci_module_init6
ffffffff815091f0 t __initcall_xen_tmem_init6
ffffffff815091f8 t __initcall_pty_init6
ffffffff81509200 t __initcall_xen_hvc_init6
ffffffff81509208 t __initcall_rand_initialize6
ffffffff81509210 t __initcall_nvram_init6
ffffffff81509218 t __initcall_topology_sysfs_init6
ffffffff81509220 t __initcall_init_sd6
ffffffff81509228 t __initcall_net_olddevs_init6
ffffffff81509230 t __initcall_i8042_init6
ffffffff81509238 t __initcall_mousedev_init6
ffffffff81509240 t __initcall_atkbd_init6
ffffffff81509248 t __initcall_psmouse_init6
ffffffff81509250 t __initcall_cmos_init6
ffffffff81509258 t __initcall_cpufreq_stats_init6
ffffffff81509260 t __initcall_init_ladder6
ffffffff81509268 t __initcall_hid_init6
ffffffff81509270 t __initcall_sock_diag_init6
ffffffff81509278 t __initcall_flow_cache_init_global6
ffffffff81509280 t __initcall_sysctl_ipv4_init6
ffffffff81509288 t __initcall_init_syncookies6
ffffffff81509290 t __initcall_cubictcp_register6
ffffffff81509298 T __initcall7_start
ffffffff81509298 t __initcall_hpet_insert_resource7
ffffffff815092a0 t __initcall_update_mp_table7
ffffffff815092a8 t __initcall_lapic_insert_resource7
ffffffff815092b0 t __initcall_io_apic_bug_finalize7
ffffffff815092b8 t __initcall_print_ICs7
ffffffff815092c0 t __initcall_check_early_ioremap_leak7
ffffffff815092c8 t __initcall_init_oops_id7
ffffffff815092d0 t __initcall_printk_late_init7
ffffffff815092d8 t __initcall_pm_qos_power_init7
ffffffff815092e0 t __initcall_taskstats_init7
ffffffff815092e8 t __initcall_max_swapfiles_check7
ffffffff815092f0 t __initcall_set_recommended_min_free_kbytes7
ffffffff815092f8 t __initcall_random32_reseed7
ffffffff81509300 t __initcall_pci_resource_alignment_sysfs_init7
ffffffff81509308 t __initcall_pci_sysfs_init7
ffffffff81509310 t __initcall_boot_wait_for_devices7
ffffffff81509318 t __initcall_random_int_secret_init7
ffffffff81509320 t __initcall_deferred_probe_initcall7
ffffffff81509328 t __initcall_scsi_complete_async_scans7
ffffffff81509330 t __initcall_rtc_hctosys7
ffffffff81509338 t __initcall_memmap_init7
ffffffff81509340 t __initcall_pci_mmcfg_late_insert_resources7
ffffffff81509348 t __initcall_net_secret_init7
ffffffff81509350 t __initcall_tcp_congestion_default7
ffffffff81509358 t __initcall_initialize_hashrnd7s
ffffffff81509360 T __con_initcall_start
ffffffff81509360 t __initcall_con_init
ffffffff81509360 T __initcall_end
ffffffff81509368 t __initcall_hvc_console_init
ffffffff81509370 t __initcall_xen_cons_init
ffffffff81509378 T __con_initcall_end
ffffffff81509378 T __initramfs_start
ffffffff81509378 t __irf_start
ffffffff81509378 T __security_initcall_end
ffffffff81509378 T __security_initcall_start
ffffffff81509578 T __initramfs_size
ffffffff81509578 t __irf_end
ffffffff8150a000 r r_base
ffffffff8150a000 R trampoline_data
ffffffff8150a000 R x86_trampoline_start
ffffffff8150a068 r startup_32
ffffffff8150a09c r startup_64
ffffffff8150a0a5 r no_longmode
ffffffff8150a0a8 r verify_cpu
ffffffff8150a0fd r verify_cpu_noamd
ffffffff8150a146 r verify_cpu_clear_xd
ffffffff8150a157 r verify_cpu_check
ffffffff8150a197 r verify_cpu_sse_test
ffffffff8150a1c6 r verify_cpu_no_longmode
ffffffff8150a1cf r verify_cpu_sse_ok
ffffffff8150a1d8 r tidt
ffffffff8150a1e0 r tgdt
ffffffff8150a200 r startup_32_vector
ffffffff8150a200 r tgdt_end
ffffffff8150a208 r startup_64_vector
ffffffff8150a210 R trampoline_status
ffffffff8150a214 r trampoline_stack
ffffffff8150b000 R trampoline_level4_pgt
ffffffff8150b000 r trampoline_stack_end
ffffffff8150c000 r __cpu_dev_intel_cpu_dev
ffffffff8150c000 R __x86_cpu_dev_start
ffffffff8150c000 R trampoline_end
ffffffff8150c000 R x86_trampoline_end
ffffffff8150c008 r __cpu_dev_amd_cpu_dev
ffffffff8150c010 r __cpu_dev_centaur_cpu_dev
ffffffff8150c018 R __parainstructions
ffffffff8150c018 R __x86_cpu_dev_end
ffffffff8151e554 R __parainstructions_end
ffffffff8151e558 R __alt_instructions
ffffffff8151f11c R __alt_instructions_end
ffffffff8151f5b8 r __iommu_entry_pci_xen_swiotlb_detect
ffffffff8151f5b8 R __iommu_table
ffffffff8151f5e0 r __iommu_entry_pci_swiotlb_detect_4gb
ffffffff8151f608 r __iommu_entry_pci_swiotlb_detect_override
ffffffff8151f630 r __iommu_entry_gart_iommu_hole_init
ffffffff8151f658 r __iommu_entry_amd_iommu_detect
ffffffff8151f680 D __apicdrivers
ffffffff8151f680 d __apicdrivers_apic_physflatapic_flat
ffffffff8151f680 R __iommu_table_end
ffffffff8151f690 D __apicdrivers_end
ffffffff8151f690 t ffh_cstate_exit
ffffffff8151f6aa t ikconfig_cleanup
ffffffff8151f6b8 t hugetlb_exit
ffffffff8151f747 t exit_script_binfmt
ffffffff8151f753 t exit_elf_binfmt
ffffffff8151f75f t exit_compat_elf_binfmt
ffffffff8151f76b t configfs_exit
ffffffff8151f7a1 t exit_hugetlbfs_fs
ffffffff8151f7d5 t crypto_wq_exit
ffffffff8151f7e1 t crypto_algapi_exit
ffffffff8151f7e6 T crypto_exit_proc
ffffffff8151f7f4 t eseqiv_module_exit
ffffffff8151f800 t cryptomgr_exit
ffffffff8151f816 t aes_fini
ffffffff8151f822 t crc32c_mod_fini
ffffffff8151f82e t krng_mod_fini
ffffffff8151f83a t exit_cgroup_blkio
ffffffff8151f846 t noop_exit
ffffffff8151f852 t cfq_exit
ffffffff8151f87a t aer_service_exit
ffffffff8151f886 t ioapic_exit
ffffffff8151f892 t interrupt_stats_exit
ffffffff8151f8ad t acpi_hed_exit
ffffffff8151f8b9 t ghes_exit
ffffffff8151f8d8 t xenbus_exit
ffffffff8151f8e4 t xenbus_backend_exit
ffffffff8151f8f0 t hyper_sysfs_exit
ffffffff8151f952 t hvc_exit
ffffffff8151f990 t xen_hvc_fini
ffffffff8151f9bc t nvram_cleanup_module
ffffffff8151f9d8 t firmware_class_exit
ffffffff8151f9e4 t exit_scsi
ffffffff8151fa04 t exit_sd
ffffffff8151fa74 t ata_exit
ffffffff8151fa98 T libata_transport_exit
ffffffff8151fabe t serio_exit
ffffffff8151fad8 t i8042_exit
ffffffff8151fb03 t input_exit
ffffffff8151fb30 t mousedev_exit
ffffffff8151fb4a t atkbd_exit
ffffffff8151fb56 t psmouse_exit
ffffffff8151fb70 t rtc_exit
ffffffff8151fb8f T rtc_dev_exit
ffffffff8151fba4 t cmos_exit
ffffffff8151fbd2 t cmos_do_remove
ffffffff8151fc5c t cmos_pnp_remove
ffffffff8151fc61 t cmos_platform_remove
ffffffff8151fc71 t cpufreq_stats_exit
ffffffff8151fce1 t cpufreq_gov_performance_exit
ffffffff8151fced t exit_ladder
ffffffff8151fcf9 t hid_exit
ffffffff8151fd05 t sock_diag_exit
ffffffff8151fd11 t cubictcp_unregister
ffffffff8151fd1d t xfrm4_policy_fini
ffffffff8151fd3c t af_unix_exit
ffffffff81520000 R __init_end
ffffffff81520000 R __smp_locks
ffffffff81524000 B __bss_start
ffffffff81524000 R __nosave_begin
ffffffff81524000 R __nosave_end
ffffffff81524000 R __smp_locks_end
ffffffff81524000 B empty_zero_page
ffffffff81525000 b level3_user_vsyscall
ffffffff81526000 b dummy_mapping
ffffffff81527000 b fake_ioapic_mapping
ffffffff81528000 b bm_pte
ffffffff81529000 B idt_table
ffffffff8152a000 B nmi_idt_table
ffffffff8152b000 B initcall_debug
ffffffff8152b004 B reset_devices
ffffffff8152b008 B saved_command_line
ffffffff8152b010 b static_command_line
ffffffff8152b018 b panic_later
ffffffff8152b020 b panic_param
ffffffff8152b028 b execute_command
ffffffff8152b030 b ramdisk_execute_command
ffffffff8152b040 b msgbuf
ffffffff8152b080 B ROOT_DEV
ffffffff8152b084 b root_wait
ffffffff8152b088 B real_root_dev
ffffffff8152b08c B initrd_below_start_ok
ffffffff8152b090 B initrd_end
ffffffff8152b098 B initrd_start
ffffffff8152b0a0 b my_inptr
ffffffff8152b0a8 B preset_lpj
ffffffff8152b0b0 B lpj_fine
ffffffff8152b0b8 b printed.10632
ffffffff8152b0c0 B xen_initial_gdt
ffffffff8152b0e0 B xen_dummy_shared_info
ffffffff8152bd08 B xen_start_info
ffffffff8152bd10 B machine_to_phys_nr
ffffffff8152bd18 B xen_domain_type
ffffffff8152bd1c b lock.35222
ffffffff8152bd20 b traps.35224
ffffffff8152cd30 b __force_order
ffffffff8152cd38 b shared_info_page.35578
ffffffff8152cd40 B xen_released_pages
ffffffff8152cd60 B xen_reservation_lock
ffffffff8152cd68 b level1_ident_pgt
ffffffff8152cd80 b discontig_frames
ffffffff8152dd80 B xen_platform_pci_unplug
ffffffff8152dd84 b xen_emul_unplug
ffffffff8152dd88 b p2m_top_mfn
ffffffff8152dd90 b p2m_mid_missing_mfn
ffffffff8152dd98 b p2m_top_mfn_p
ffffffff8152dda0 b p2m_top
ffffffff8152dda8 b p2m_mid_missing
ffffffff8152ddb0 b p2m_missing
ffffffff8152ddb8 b p2m_identity
ffffffff8152ddc0 b m2p_overrides
ffffffff8152ddc8 b m2p_override_lock
ffffffff8152ddd0 B xen_cpu_initialized_map
ffffffff8152dde0 B used_vectors
ffffffff8152de00 B x86_platform_ipi_callback
ffffffff8152de08 B irq_err_count
ffffffff8152de0c b warned.22284
ffffffff8152de10 B sysctl_panic_on_stackoverflow
ffffffff8152de14 b __key.22954
ffffffff8152de14 B panic_on_io_nmi
ffffffff8152de18 B panic_on_unrecovered_nmi
ffffffff8152de1c b die_lock
ffffffff8152de20 b die_nest_count
ffffffff8152de24 b die_counter
ffffffff8152de28 B unknown_nmi_panic
ffffffff8152de2c b ignore_nmis
ffffffff8152de30 b nmi_reason_lock
ffffffff8152de40 B saved_video_mode
ffffffff8152de60 B edid_info
ffffffff8152dee0 B screen_info
ffffffff8152df20 B bootloader_version
ffffffff8152df24 B bootloader_type
ffffffff8152df28 B mmu_cr4_features
ffffffff8152df30 B max_pfn_mapped
ffffffff8152df38 B max_low_pfn_mapped
ffffffff8152df40 B io_apic_irqs
ffffffff8152df48 B i8259A_lock
ffffffff8152df4c b i8259A_auto_eoi
ffffffff8152df50 b irq_trigger
ffffffff8152df54 b spurious_irq_mask.25340
ffffffff8152df58 b vsyscall_mode
ffffffff8152df60 B e820_saved
ffffffff8152e980 B e820
ffffffff8152f388 B force_hpet_address
ffffffff8152f390 b force_hpet_resume_type
ffffffff8152f398 b cached_dev
ffffffff8152f3a0 b rcba_base
ffffffff8152f3a8 B arch_debugfs_dir
ffffffff8152f3b0 B skip_smp_alternatives
ffffffff8152f3b4 b debug_alternative
ffffffff8152f3b8 b smp_alt_once
ffffffff8152f3bc b noreplace_smp
ffffffff8152f3c0 b noreplace_paravirt
ffffffff8152f3c4 b stop_machine_first
ffffffff8152f3c8 b wrote_text
ffffffff8152f3d0 B global_clock_event
ffffffff8152f3d8 B tsc_clocksource_reliable
ffffffff8152f3e0 b cyc2ns_suspend
ffffffff8152f3e8 b no_sched_irq_time
ffffffff8152f3ec b ref_freq
ffffffff8152f3f0 b loops_per_jiffy_ref
ffffffff8152f3f8 b tsc_khz_ref
ffffffff8152f400 b hpet.23389
ffffffff8152f408 b ref_start.23388
ffffffff8152f410 B rtc_lock
ffffffff8152f412 b __print_once.25558
ffffffff8152f418 B x86_trampoline_base
ffffffff8152f420 B amd_e400_c1e_detected
ffffffff8152f428 B pm_idle
ffffffff8152f430 B boot_option_idle_override
ffffffff8152f438 B task_xstate_cachep
ffffffff8152f440 b idle_notifier
ffffffff8152f450 b amd_e400_c1e_mask
ffffffff8152f458 b __print_once.28414
ffffffff8152f45c B xstate_size
ffffffff8152f460 B fx_sw_reserved_ia32
ffffffff8152f4a0 B fx_sw_reserved
ffffffff8152f4d0 B pcntxt_mask
ffffffff8152f4d8 b xstate_offsets
ffffffff8152f4e0 b xstate_sizes
ffffffff8152f4e8 b init_xstate_buf
ffffffff8152f4f0 b xstate_features
ffffffff8152f500 B xstate_fx_sw_bytes
ffffffff8152f540 B num_cache_leaves
ffffffff8152f548 b cache_dev_map
ffffffff8152f550 b attrs.22127
ffffffff8152f558 b is_initialized.21853
ffffffff8152f55c b printed.11602
ffffffff8152f560 B kernel_eflags
ffffffff8152f568 B cpu_sibling_setup_mask
ffffffff8152f570 B cpu_callin_mask
ffffffff8152f578 B cpu_callout_mask
ffffffff8152f580 B cpu_initialized_mask
ffffffff8152f588 b __print_once.29704
ffffffff8152f589 b printed.29702
ffffffff8152f58a b __print_once.29713
ffffffff8152f590 B x86_hyper
ffffffff8152f598 B ms_hyperv
ffffffff8152f5a0 b __print_once.18294
ffffffff8152f5c0 B unconstrained
ffffffff8152f5e0 B emptyconstraint
ffffffff8152f600 b active_events
ffffffff8152f620 B mce_info
ffffffff8152f820 B x86_mce_decoder_chain
ffffffff8152f830 B mce_entry
ffffffff8152f838 b mce_need_notify
ffffffff8152f840 b global_nwo
ffffffff8152f844 b mce_callin
ffffffff8152f848 b mce_executing
ffffffff8152f84c b mce_paniced
ffffffff8152f850 b cpu_missing
ffffffff8152f860 b mce_helper
ffffffff8152f8e0 b mce_write
ffffffff8152f8e8 b mce_device_initialized
ffffffff8152f8f0 b mce_chrdev_state_lock
ffffffff8152f8f4 b mce_chrdev_open_count
ffffffff8152f8f8 b mce_chrdev_open_exclu
ffffffff8152f8fc b mce_apei_read_done
ffffffff8152f900 B mtrr_if
ffffffff8152f908 B size_and_mask
ffffffff8152f910 B size_or_mask
ffffffff8152f920 B mtrr_usage_table
ffffffff8152fd20 B num_var_ranges
ffffffff8152fd40 b mtrr_ops
ffffffff8152fd88 b mtrr_aps_delayed_init
ffffffff8152fda0 b mtrr_value
ffffffff815315a0 B mtrr_state
ffffffff81532600 B mtrr_tom2
ffffffff81532608 b mtrr_state_set
ffffffff8153260c b set_atomicity_lock
ffffffff81532610 b cr4
ffffffff81532618 b deftype_lo
ffffffff8153261c b deftype_hi
ffffffff81532620 b smp_changes_mask
ffffffff81532640 b range_new.27148
ffffffff81533640 b nr_range_new.27150
ffffffff81533644 b disable_mtrr_trim
ffffffff81533650 b perfctr_nmi_owner
ffffffff81533660 b evntsel_nmi_owner
ffffffff81533670 b ibs_caps
ffffffff81533680 B acpi_irq_model
ffffffff81533684 B acpi_strict
ffffffff81533688 B acpi_ioapic
ffffffff8153368c B acpi_lapic
ffffffff81533690 B acpi_pci_disabled
ffffffff81533694 B acpi_noirq
ffffffff81533698 B acpi_disabled
ffffffff8153369c B acpi_rsdt_forced
ffffffff815336a0 b hpet_res
ffffffff815336b0 b cpu_cstate_entry
ffffffff815336c0 b mwait_supported
ffffffff815336d0 B port_cf9_safe
ffffffff815336d4 B reboot_force
ffffffff815336d8 B pm_power_off
ffffffff815336e0 b reboot_emergency
ffffffff815336e4 b crashing_cpu
ffffffff815336e8 b shootdown_callback
ffffffff815336f0 b waiting_for_crash_ipi
ffffffff815336f4 b reboot_mode
ffffffff815336f8 B init_deasserted
ffffffff815336fc b __key.8440
ffffffff81533700 B enable_update_mptable
ffffffff81533708 b mpf_found
ffffffff81533720 B apic_version
ffffffff81553720 B lapic_timer_frequency
ffffffff81553724 B smp_found_config
ffffffff81553728 B pic_mode
ffffffff8155372c B apic_verbosity
ffffffff81553730 B local_apic_timer_c2_ok
ffffffff81553734 B disable_apic
ffffffff81553738 B mp_lapic_addr
ffffffff81553740 B x2apic_mode
ffffffff81553760 B phys_cpu_present_map
ffffffff81554760 B max_physical_apicid
ffffffff81554764 B num_processors
ffffffff81554770 b eilvt_offsets
ffffffff81554780 b apic_phys
ffffffff815547a0 b apic_pm_state
ffffffff815547e0 B irq_mis_count
ffffffff815547e4 B skip_ioapic_setup
ffffffff81554800 B mp_bus_not_pci
ffffffff81554820 B mp_irq_entries
ffffffff81554840 B mp_irqs
ffffffff81556840 B gsi_top
ffffffff81556844 B nr_ioapics
ffffffff81556860 b ioapics
ffffffff81558060 b ioapic_resources
ffffffff81558080 b irq_cfgx
ffffffff81558280 b ioapic_lock
ffffffff81558282 b vector_lock
ffffffff81558284 b current_xpos
ffffffff815582a0 B hpet_force_user
ffffffff815582a4 B hpet_msi_disable
ffffffff815582a5 B hpet_blockid
ffffffff815582a8 B hpet_address
ffffffff815582b0 b hpet_virt_address
ffffffff815582b8 b boot_hpet_disable
ffffffff815582bc b hpet_legacy_int_enabled
ffffffff815582c0 b hpet_freq
ffffffff815582c8 b hpet_verbose
ffffffff815582d0 b hpet_devs
ffffffff815582d8 b hpet_num_timers
ffffffff815582e0 b __key.8255
ffffffff815582e0 b irq_handler
ffffffff815582e8 b hpet_rtc_flags
ffffffff815582f0 b hpet_default_delta
ffffffff815582f8 b hpet_pie_limit
ffffffff81558300 b hpet_pie_delta
ffffffff81558304 b hpet_t1_cmp
ffffffff81558308 b hpet_prev_update_sec
ffffffff81558320 b hpet_alarm_time
ffffffff81558348 b hpet_pie_count
ffffffff81558350 B amd_northbridges
ffffffff81558368 b reset.19590
ffffffff8155836c b ban.19591
ffffffff81558370 b gart_lock.19612
ffffffff81558378 b flush_words
ffffffff81558380 B paravirt_steal_rq_enabled
ffffffff81558384 B paravirt_steal_enabled
ffffffff81558388 b __force_order
ffffffff81558390 b last_value
ffffffff81558398 B agp_gatt_table
ffffffff815583a0 B agp_memory_reserved
ffffffff815583a4 b fix_up_north_bridges
ffffffff815583a8 b aperture_order
ffffffff815583ac b aperture_alloc
ffffffff815583b0 b no_agp
ffffffff815583b8 b iommu_size
ffffffff815583c0 b iommu_pages
ffffffff815583c8 b iommu_gart_bitmap
ffffffff815583d0 b iommu_bus_base
ffffffff815583d8 b bad_dma_addr
ffffffff815583e0 b iommu_gatt_base
ffffffff815583e8 b gart_unmapped_entry
ffffffff815583ec b iommu_bitmap_lock
ffffffff815583f0 b next_bit
ffffffff815583f8 b need_flush
ffffffff815583fc B gart_iommu_aperture
ffffffff81558400 B after_bootmem
ffffffff81558420 B force_personality32
ffffffff81558440 b kcore_vsyscall
ffffffff81558468 B pgd_lock
ffffffff8155846a b __print_once.27720
ffffffff81558480 b direct_pages_count
ffffffff815584a0 b cpa_lock
ffffffff815584a4 B pat_debug_enable
ffffffff815584a8 b memtype_lock
ffffffff815584ac B fixmaps_set
ffffffff815584b0 b memtype_rbroot
ffffffff815584c0 b flush_state
ffffffff815586c0 B node_to_cpumask_map
ffffffff81558ec0 b numa_distance_cnt
ffffffff81558ec8 b numa_distance
ffffffff81558ed0 b __print_once.22825
ffffffff81558ed1 b __print_once.22826
ffffffff81558ed8 b vdso_size
ffffffff81558ee0 B vdso_pages
ffffffff81558ee8 b vdso32_pages
ffffffff81558ef0 b lastcomm.29388
ffffffff81558f00 B vm_area_cachep
ffffffff81558f08 B fs_cachep
ffffffff81558f10 B files_cachep
ffffffff81558f18 B sighand_cachep
ffffffff81558f20 B max_threads
ffffffff81558f24 B nr_threads
ffffffff81558f28 B total_forks
ffffffff81558f30 b task_struct_cachep
ffffffff81558f38 b signal_cachep
ffffffff81558f40 b mm_cachep
ffffffff81558f48 b __key.41693
ffffffff81558f48 b __key.41831
ffffffff81558f48 b __key.41832
ffffffff81558f48 b __key.41833
ffffffff81558f48 b __key.41962
ffffffff81558f48 b __key.7292
ffffffff81558f60 B panic_blink
ffffffff81558f70 B panic_notifier_list
ffffffff81558f80 B panic_timeout
ffffffff81558f84 B panic_on_oops
ffffffff81558f88 b panic_lock.18615
ffffffff81558fa0 b buf.18617
ffffffff815593a0 b tainted_mask
ffffffff815593b0 b buf.18648
ffffffff815593c8 b pause_on_oops_flag
ffffffff815593cc b pause_on_oops
ffffffff815593d0 b pause_on_oops_lock
ffffffff815593d4 b spin_counter.18695
ffffffff815593d8 b oops_id
ffffffff815593e0 B dmesg_restrict
ffffffff815593e4 B console_set_on_cmdline
ffffffff815593e8 B console_drivers
ffffffff815593f0 B oops_in_progress
ffffffff815593f4 b logbuf_lock
ffffffff815593f8 b log_end
ffffffff815593fc b con_start
ffffffff81559400 b log_start
ffffffff81559420 b __log_buf
ffffffff81599420 b __print_once.30681
ffffffff81599424 b logged_chars
ffffffff81599428 b recursion_bug
ffffffff81599430 b oops_timestamp.30862
ffffffff81599440 b printk_buf
ffffffff81599840 b printk_time
ffffffff81599844 b console_locked
ffffffff81599860 b console_cmdline
ffffffff81599920 b console_suspended
ffffffff81599924 b console_may_schedule
ffffffff81599928 b always_kmsg_dump
ffffffff81599930 b exclusive_console
ffffffff81599938 b dump_list_lock
ffffffff81599940 b cpu_chain
ffffffff81599948 b cpu_hotplug_disabled
ffffffff81599950 b frozen_cpus
ffffffff81599958 b __print_once.28158
ffffffff8159995c B sys_tz
ffffffff81599980 b reserved.20581
ffffffff815999a0 b reserve.20582
ffffffff81599a80 b strict_iomem_checks
ffffffff81599aa0 B sysctl_legacy_va_layout
ffffffff81599ac0 b dev_table
ffffffff81599b00 b minolduid
ffffffff81599b04 b zero
ffffffff81599b08 b min_extfrag_threshold
ffffffff81599b20 b warn_once_bitmap
ffffffff81599b40 b warned.28374
ffffffff81599b44 b warned.28369
ffffffff81599b80 B boot_tvec_bases
ffffffff8159bbc0 b boot_done.29689
ffffffff8159bbc8 b uidhash_lock
ffffffff8159bbd0 b uid_cachep
ffffffff8159bbd8 b sigqueue_cachep
ffffffff8159bbe0 B pm_power_off_prepare
ffffffff8159bbe8 B cad_pid
ffffffff8159bbf0 b kmod_concurrent.29717
ffffffff8159bbf4 b kmod_loop_msg.29718
ffffffff8159bbf8 b umh_sysctl_lock
ffffffff8159bbfc b running_helpers
ffffffff8159bc00 b khelper_wq
ffffffff8159bc40 b unbound_gcwq_nr_running
ffffffff8159bc80 b unbound_global_cwq
ffffffff8159bf80 b __key.7579
ffffffff8159bf80 b workqueue_lock
ffffffff8159bf82 b __key.25301
ffffffff8159bf82 b workqueue_freezing
ffffffff8159bf83 b __key.25611
ffffffff8159bf88 b pid_hash
ffffffff8159bf90 b __key.8440
ffffffff8159bf90 B module_sysfs_initialized
ffffffff8159bf98 B module_kset
ffffffff8159bfa0 b posix_timers_cache
ffffffff8159bfc0 b posix_timers_id
ffffffff8159bfe0 b posix_clocks
ffffffff8159c4e0 b idr_lock
ffffffff8159c4e8 B kthreadd_task
ffffffff8159c4f0 b __key.7576
ffffffff8159c4f0 b kthread_create_lock
ffffffff8159c500 b onecputick
ffffffff8159c520 b zero_it.22584
ffffffff8159c540 b __print_once.28419
ffffffff8159c548 b nsproxy_cachep
ffffffff8159c550 b __key.11426
ffffffff8159c550 b __key.15371
ffffffff8159c550 b die_chain
ffffffff8159c560 B kernel_kobj
ffffffff8159c568 b cred_jar
ffffffff8159c570 b entry_count
ffffffff8159c574 b async_lock
ffffffff8159c580 B root_task_group
ffffffff8159c748 B sched_domain_level_max
ffffffff8159c74c B sched_mc_power_savings
ffffffff8159c750 B sched_smt_power_savings
ffffffff8159c760 B def_root_domain
ffffffff8159ce20 B root_cpuacct
ffffffff8159ce50 B avenrun
ffffffff8159ce68 b calc_load_update
ffffffff8159ce70 b calc_load_tasks
ffffffff8159ce78 b cpu_isolated_map
ffffffff8159ce80 b doms_cur
ffffffff8159ce88 b dattr_cur
ffffffff8159ce90 b ndoms_cur
ffffffff8159ce98 b fallback_doms
ffffffff8159cea0 b sched_domains_tmpmask
ffffffff8159cea8 b task_group_lock
ffffffff8159ceac b balancing
ffffffff8159cec0 B def_rt_bandwidth
ffffffff8159cf18 b once.18238
ffffffff8159cf20 b pm_qos_lock
ffffffff8159cf40 b null_pm_qos
ffffffff8159cf98 B pm_wq
ffffffff8159cfa0 B power_kobj
ffffffff8159cfa8 b orig_fgconsole
ffffffff8159cfac b orig_kmsg
ffffffff8159cfb0 B pm_nosig_freezing
ffffffff8159cfb1 B pm_freezing
ffffffff8159cfb4 B system_freezing_cnt
ffffffff8159cfb8 b freezer_lock
ffffffff8159cfc0 b timekeeper
ffffffff8159d060 b timekeeping_suspend_time
ffffffff8159d070 b old_delta.21960
ffffffff8159d080 b __print_once.22010
ffffffff8159d088 B tick_nsec
ffffffff8159d090 B ntp_lock
ffffffff8159d098 b time_adjust
ffffffff8159d0a0 b tick_length_base
ffffffff8159d0a8 b tick_length
ffffffff8159d0b0 b time_offset
ffffffff8159d0b8 b ntp_tick_adj
ffffffff8159d0c0 b time_freq
ffffffff8159d0c8 b time_state
ffffffff8159d0d0 b time_tai
ffffffff8159d0d8 b time_reftime
ffffffff8159d0e0 b watchdog_lock
ffffffff8159d0e4 b finished_booting
ffffffff8159d0e8 b watchdog_running
ffffffff8159d0f0 b watchdog
ffffffff8159d100 b watchdog_timer
ffffffff8159d140 b override_name
ffffffff8159d160 b curr_clocksource
ffffffff8159d168 b watchdog_reset_pending
ffffffff8159d16c b __key.28066
ffffffff8159d180 b rtctimer
ffffffff8159d1c0 b alarm_bases
ffffffff8159d290 b freezer_delta_lock
ffffffff8159d298 b freezer_delta
ffffffff8159d2a0 b rtcdev_lock
ffffffff8159d2a8 b rtcdev
ffffffff8159d2b0 b clockevents_lock
ffffffff8159d2b8 b clockevents_chain
ffffffff8159d2c0 B tick_period
ffffffff8159d2c8 B tick_next_period
ffffffff8159d2d0 b tick_device_lock
ffffffff8159d2e0 b tick_broadcast_device
ffffffff8159d2f0 b tick_broadcast_mask
ffffffff8159d2f8 b tick_broadcast_lock
ffffffff8159d300 b tick_broadcast_oneshot_mask
ffffffff8159d308 b tick_broadcast_force
ffffffff8159d310 b tmpmask
ffffffff8159d318 b last_jiffies_update
ffffffff8159d320 b futex_queues
ffffffff8159eb20 b prev_max.15450
ffffffff8159eb24 B dma_spin_lock
ffffffff8159eb40 B modules_disabled
ffffffff8159eb60 b last_unloaded_module
ffffffff8159eba0 b module_addr_max
ffffffff8159eba8 b acct_lock
ffffffff8159ebc0 b rootnode
ffffffff8159ff20 b init_css_set
ffffffff815a0168 b root_count
ffffffff815a016c b css_set_count
ffffffff815a0180 b css_set_table
ffffffff815a0580 b release_list_lock
ffffffff815a0582 b __key.31011
ffffffff815a0582 b __key.31644
ffffffff815a0582 b __key.31972
ffffffff815a05a0 b init_css_set_link
ffffffff815a05d0 b __key.31946
ffffffff815a05d0 b cgroup_kobj
ffffffff815a05d8 b hierarchy_id_lock
ffffffff815a05e0 b hierarchy_ida
ffffffff815a0608 b next_hierarchy_id
ffffffff815a0620 b cpuset_wq
ffffffff815a0628 b cpuset_being_rebound
ffffffff815a0640 b newmems.27058
ffffffff815a0660 b cpus_attach
ffffffff815a0680 b cpuset_attach_nodemask_to
ffffffff815a06a0 b cpuset_attach_nodemask_from
ffffffff815a06c0 b oldmems.27329
ffffffff815a06e0 b cpuset_buffer_lock
ffffffff815a0700 b cpuset_name
ffffffff815a0780 b cpuset_nodelist
ffffffff815a0880 b pid_ns_cachep
ffffffff815a0888 b __key.6723
ffffffff815a0888 b stop_machine_initialized
ffffffff815a08a0 B audit_inode_hash
ffffffff815a0aa0 B audit_sig_sid
ffffffff815a0aa4 B audit_pid
ffffffff815a0aa8 B audit_ever_enabled
ffffffff815a0aac B audit_enabled
ffffffff815a0ab0 b audit_lost
ffffffff815a0ab4 b audit_rate_limit
ffffffff815a0ab8 b lock.37057
ffffffff815a0ac0 b last_msg.37056
ffffffff815a0ac8 b audit_sock
ffffffff815a0ad0 b serial_lock.37336
ffffffff815a0ad4 b serial.37338
ffffffff815a0ad8 b audit_initialized
ffffffff815a0ae0 b audit_skb_queue
ffffffff815a0af8 b lock.37044
ffffffff815a0afc b messages.37043
ffffffff815a0b00 b last_check.37042
ffffffff815a0b08 b audit_freelist_lock
ffffffff815a0b0c b audit_freelist_count
ffffffff815a0b10 b audit_default
ffffffff815a0b20 b audit_skb_hold_queue
ffffffff815a0b38 b kauditd_task
ffffffff815a0b40 b audit_nlk_pid
ffffffff815a0b60 b classes
ffffffff815a0be0 B audit_signals
ffffffff815a0be4 B audit_n_rules
ffffffff815a0be8 b session_id
ffffffff815a0bf0 b audit_watch_group
ffffffff815a0c00 b audit_tree_group
ffffffff815a0c20 b chunk_hash_heads
ffffffff815a1420 b allocated_irqs
ffffffff815a1a48 B irq_default_affinity
ffffffff815a1a50 b __key.18867
ffffffff815a1a50 b irq_poll_cpu
ffffffff815a1a54 b irq_poll_active
ffffffff815a1a58 B no_irq_affinity
ffffffff815a1a60 b root_irq_dir
ffffffff815a1a68 b prec.20938
ffffffff815a1a80 B rcutorture_vernum
ffffffff815a1a88 B rcutorture_testseq
ffffffff815a1a90 b sync_sched_expedited_started
ffffffff815a1a94 b sync_sched_expedited_done
ffffffff815a1aa0 b rcu_barrier_completion
ffffffff815a1ac0 b __key.8440
ffffffff815a1ac0 b rcu_barrier_cpu_count
ffffffff815a1ac8 B delayacct_cache
ffffffff815a1ad0 B taskstats_cache
ffffffff815a1ad8 b family_registered
ffffffff815a1adc b __key.28132
ffffffff815a1ae0 B perf_swevent_enabled
ffffffff815a1b08 B perf_guest_cbs
ffffffff815a1b10 b __key.30784
ffffffff815a1b10 b pmu_bus_running
ffffffff815a1b20 b pmu_idr
ffffffff815a1b40 b __key.28849
ffffffff815a1b40 b pmus_srcu
ffffffff815a1b70 b __key.30265
ffffffff815a1b70 b __key.30266
ffffffff815a1b70 b __key.30267
ffffffff815a1b70 b perf_event_id
ffffffff815a1b78 b __key.30618
ffffffff815a1b78 b __key.30631
ffffffff815a1b78 b nr_callchain_events
ffffffff815a1b80 b callchain_cpus_entries
ffffffff815a1b88 b constraints_initialized
ffffffff815a1b8c b nr_slots
ffffffff815a1bc0 b __key.26012
ffffffff815a1bc0 B sysctl_oom_kill_allocating_task
ffffffff815a1bc4 B sysctl_panic_on_oom
ffffffff815a1bc8 b zone_scan_lock
ffffffff815a1be0 B movable_zone
ffffffff815a1be4 B percpu_pagelist_fraction
ffffffff815a1be8 b saved_gfp_mask
ffffffff815a1bf0 b nr_shown.31811
ffffffff815a1bf8 b resume.31810
ffffffff815a1c00 b nr_unshown.31812
ffffffff815a1c08 b cpus_with_pcps.32133
ffffffff815a1c10 b user_zonelist_order
ffffffff815a1c14 b current_zonelist_order
ffffffff815a1c20 b node_load
ffffffff815a2020 b node_order
ffffffff815a2420 b __key.32950
ffffffff815a2420 b __key.33168
ffffffff815a2420 B global_dirty_limit
ffffffff815a2428 B laptop_mode
ffffffff815a242c B block_dump
ffffffff815a2430 B vm_dirty_bytes
ffffffff815a2438 B vm_highmem_is_dirtyable
ffffffff815a2440 B dirty_background_bytes
ffffffff815a2460 b vm_completions
ffffffff815a24e8 b bdi_min_ratio
ffffffff815a24f0 b update_time.31857
ffffffff815a24f8 b dirty_lock.31855
ffffffff815a24fc B page_cluster
ffffffff815a2500 B scan_unevictable_pages
ffffffff815a2508 B vm_total_pages
ffffffff815a2510 b __print_once.30921
ffffffff815a2518 b __key.29772
ffffffff815a2518 b lock.29717
ffffffff815a2520 b shmem_inode_cachep
ffffffff815a2528 b shm_mnt
ffffffff815a2540 B bdi_lock
ffffffff815a2560 b sync_supers_timer
ffffffff815a2598 b bdi_class
ffffffff815a25a0 b __key.25684
ffffffff815a25a0 b sync_supers_tsk
ffffffff815a25a8 b __key.25839
ffffffff815a25a8 b bdi_seq
ffffffff815a25b0 b nr_bdi_congested
ffffffff815a25b8 B mm_kobj
ffffffff815a25c0 B mminit_loglevel
ffffffff815a25e0 b pcpu_lock
ffffffff815a25e8 b pcpu_reserved_chunk
ffffffff815a25f0 b pages.20023
ffffffff815a25f8 b bitmap.20024
ffffffff815a2600 b pcpu_first_chunk
ffffffff815a2608 b pcpu_reserved_chunk_limit
ffffffff815a2620 b vm.20788
ffffffff815a2660 B high_memory
ffffffff815a2668 B num_physpages
ffffffff815a2670 b nr_shown.27006
ffffffff815a2678 b resume.27005
ffffffff815a2680 b nr_unshown.27007
ffffffff815a2688 b shmlock_user_lock
ffffffff815a26c0 B vm_committed_as
ffffffff815a26e8 b __key.31699
ffffffff815a26e8 b anon_vma_chain_cachep
ffffffff815a26f0 b anon_vma_cachep
ffffffff815a26f8 b __key.25857
ffffffff815a26f8 B vmlist
ffffffff815a2700 b vmap_lazy_nr
ffffffff815a2704 b purge_lock.24595
ffffffff815a2706 b vmap_area_lock
ffffffff815a2708 b vmap_block_tree_lock
ffffffff815a2710 b free_vmap_cache
ffffffff815a2718 b cached_vstart
ffffffff815a2720 b vmap_area_root
ffffffff815a2728 b vmap_area_pcpu_hole
ffffffff815a2730 b cached_hole_size
ffffffff815a2738 b cached_align
ffffffff815a2740 B max_pfn
ffffffff815a2748 B min_low_pfn
ffffffff815a2750 B max_low_pfn
ffffffff815a2758 b isa_page_pool
ffffffff815a2760 b page_pool
ffffffff815a2780 b swap_cache_info
ffffffff815a27a0 B total_swap_pages
ffffffff815a27a8 B nr_swap_pages
ffffffff815a27b0 b swap_lock
ffffffff815a27c0 b swap_info
ffffffff815a28a8 b proc_poll_event
ffffffff815a28ac b nr_swapfiles
ffffffff815a28b0 b least_priority
ffffffff815a28b8 B swap_token_mm
ffffffff815a28c0 b global_faults.20701
ffffffff815a28c4 b swap_token_lock
ffffffff815a28c8 b last_aging.20702
ffffffff815a28d0 b swap_token_memcg
ffffffff815a28d8 b __key.19966
ffffffff815a28e0 B node_hstates
ffffffff815a40e0 B hstates
ffffffff815a79b0 B default_hstate_idx
ffffffff815a79b8 B hugepages_treat_as_movable
ffffffff815a79c0 b max_hstate
ffffffff815a79c8 b hugepages_kobj
ffffffff815a79d0 b hstate_kobjs
ffffffff815a79e0 b hugetlb_lock
ffffffff815a79e8 b last_mhp.27642
ffffffff815a79f0 B policy_zone
ffffffff815a79f8 b policy_cache
ffffffff815a7a00 b sn_cache
ffffffff815a7a40 B mem_section
ffffffff815aba40 b index_init_lock.19662
ffffffff815aba48 b vmemmap_buf
ffffffff815aba50 b vmemmap_buf_end
ffffffff815aba58 B sysctl_compact_memory
ffffffff815aba60 b g_cpucache_up
ffffffff815aba64 b slab_max_order
ffffffff815aba70 b cache_chain
ffffffff815aba80 b cache_cache_nodelists
ffffffff815ac280 b khugepaged_full_scans
ffffffff815ac284 b khugepaged_pages_collapsed
ffffffff815ac288 b khugepaged_mm_lock
ffffffff815ac290 b cleancache_succ_gets
ffffffff815ac298 b cleancache_failed_gets
ffffffff815ac2a0 b cleancache_puts
ffffffff815ac2a8 b cleancache_invalidates
ffffffff815ac2c0 B files_lglock_cpu_lock
ffffffff815ac2c8 b old_max.22967
ffffffff815ac2d0 b __key.23098
ffffffff815ac2e0 B sb_lock
ffffffff815ac2e2 b __key.28206
ffffffff815ac2e2 b __key.28207
ffffffff815ac2e2 b __key.28208
ffffffff815ac2e2 b __key.28209
ffffffff815ac2e2 b __key.28210
ffffffff815ac2e2 b __key.28211
ffffffff815ac2e2 b __key.28212
ffffffff815ac300 b default_op.28195
ffffffff815ac3c0 b unnamed_dev_ida
ffffffff815ac3e8 b unnamed_dev_lock
ffffffff815ac3ec b unnamed_dev_start
ffffffff815ac400 b chrdevs
ffffffff815acbf8 b cdev_lock
ffffffff815acc00 b cdev_map
ffffffff815acc08 B suid_dumpable
ffffffff815acc0c B core_pipe_limit
ffffffff815acc10 B core_uses_pid
ffffffff815acc14 b core_dump_count.33760
ffffffff815acc18 b __key.29214
ffffffff815acc18 b __key.7292
ffffffff815acc18 b fasync_lock
ffffffff815acc40 B inodes_stat
ffffffff815acc80 b empty_iops.27685
ffffffff815acd40 b empty_fops.27686
ffffffff815ace10 b __key.27690
ffffffff815ace10 b __key.27850
ffffffff815ace10 b shared_last_ino.28209
ffffffff815ace14 b iunique_lock.28287
ffffffff815ace18 b counter.28289
ffffffff815ace20 b file_systems
ffffffff815ace40 B vfsmount_lock_cpu_lock
ffffffff815ace48 B fs_kobj
ffffffff815ace60 b mnt_group_ida
ffffffff815acea0 b mnt_id_ida
ffffffff815acec8 b mnt_id_lock
ffffffff815acecc b mnt_id_start
ffffffff815acee0 b namespace_sem
ffffffff815acf00 b event
ffffffff815acf04 b __key.15525
ffffffff815acf04 b __key.30317
ffffffff815acf04 b __key.30425
ffffffff815acf04 b pin_fs_lock
ffffffff815acf06 b simple_transaction_lock.25039
ffffffff815acf08 b __key.25076
ffffffff815acf08 B nr_pdflush_threads
ffffffff815acf0c B sysctl_drop_caches
ffffffff815acf10 B buffer_heads_over_limit
ffffffff815acf14 b msg_count.33407
ffffffff815acf18 b bh_cachep
ffffffff815acf20 b max_buffer_heads
ffffffff815acf28 B fs_bio_set
ffffffff815acf30 b bio_slab_max
ffffffff815acf34 b bio_slab_nr
ffffffff815acf38 b bio_slabs
ffffffff815acf40 b bio_dirty_lock
ffffffff815acf48 b bio_dirty_list
ffffffff815acf50 b bd_mnt.29014
ffffffff815acf58 b __key.28986
ffffffff815acf58 b __key.28987
ffffffff815acf60 b fsnotify_sync_cookie
ffffffff815acf68 b fsnotify_event_cachep
ffffffff815acf70 b fsnotify_event_holder_cachep
ffffffff815acf78 b q_overflow_event
ffffffff815acf80 b __key.19553
ffffffff815acf80 b __key.19554
ffffffff815acf80 B fsnotify_mark_srcu
ffffffff815acfb0 b destroy_lock
ffffffff815acfb4 b warned.19061
ffffffff815acfb8 b zero
ffffffff815acfc0 b poll_loop_ncalls
ffffffff815acfe0 b poll_safewake_ncalls
ffffffff815ad000 b poll_readywalk_ncalls
ffffffff815ad018 b __key.27570
ffffffff815ad018 b __key.27571
ffffffff815ad018 b __key.27572
ffffffff815ad020 b path_count
ffffffff815ad038 b zero
ffffffff815ad040 b anon_inode_inode
ffffffff815ad060 b anon_inode_fops
ffffffff815ad130 b __key.27365
ffffffff815ad130 b cancel_lock
ffffffff815ad134 b __key.27430
ffffffff815ad138 B aio_nr
ffffffff815ad140 b kiocb_cachep
ffffffff815ad148 b kioctx_cachep
ffffffff815ad150 b aio_wq
ffffffff815ad158 b aio_nr_lock
ffffffff815ad15a b fput_lock
ffffffff815ad15c b __key.31994
ffffffff815ad15c b file_lock_lock
ffffffff815ad15e b __key.28900
ffffffff815ad160 b proc_inode_cachep
ffffffff815ad168 b __print_once.26962
ffffffff815ad180 B proc_subdir_lock
ffffffff815ad1a0 b proc_inum_ida
ffffffff815ad1c8 b proc_inum_lock
ffffffff815ad1d0 b proc_tty_driver
ffffffff815ad1d8 b proc_tty_ldisc
ffffffff815ad1e0 b sysctl_lock
ffffffff815ad1e2 b __key.7469
ffffffff815ad200 B kcore_modules
ffffffff815ad228 b proc_root_kcore
ffffffff815ad240 b kcore_text
ffffffff815ad280 b kcore_vmalloc
ffffffff815ad2c0 b sysfs_open_dirent_lock
ffffffff815ad2c2 b __key.21320
ffffffff815ad2c2 b __key.21342
ffffffff815ad2c8 b sysfs_workqueue
ffffffff815ad2e0 B sysfs_assoc_lock
ffffffff815ad2e2 b sysfs_ino_lock
ffffffff815ad300 b sysfs_ino_ida
ffffffff815ad328 B sysfs_dir_cachep
ffffffff815ad330 b sysfs_mnt
ffffffff815ad338 b __key.17715
ffffffff815ad338 b __key.20269
ffffffff815ad338 B configfs_dirent_lock
ffffffff815ad340 B configfs_dir_cachep
ffffffff815ad348 b configfs_mount
ffffffff815ad350 b configfs_mnt_count
ffffffff815ad358 b config_kobj
ffffffff815ad360 b devpts_mnt
ffffffff815ad368 b pty_count
ffffffff815ad36c b pty_limit_min
ffffffff815ad370 B sysctl_hugetlb_shm_group
ffffffff815ad378 b hugetlbfs_vfsmount
ffffffff815ad380 b __print_once.26277
ffffffff815ad388 b hugetlbfs_inode_cachep
ffffffff815ad390 b nls_lock
ffffffff815ad398 b pstore_sb
ffffffff815ad3a0 b allpstore_lock
ffffffff815ad3a8 b pstore_lock
ffffffff815ad3b0 b psinfo
ffffffff815ad3b8 b backend
ffffffff815ad3c0 b __key.14983
ffffffff815ad3c0 b pstore_new_entry
ffffffff815ad3c4 b oopscount
ffffffff815ad3c8 b __key.24392
ffffffff815ad3c8 B mq_lock
ffffffff815ad3cc b zero
ffffffff815ad3d0 b mqueue_inode_cachep
ffffffff815ad3d8 b mq_sysctl_table
ffffffff815ad3e0 b __key.39861
ffffffff815ad3e0 b warned.30716
ffffffff815ad3e8 B mmap_min_addr
ffffffff815ad3f0 b __key.7292
ffffffff815ad3f0 B kcrypto_wq
ffffffff815ad3f8 B crypto_default_rng
ffffffff815ad400 b crypto_default_rng_refcnt
ffffffff815ad420 b chosen_elevator
ffffffff815ad430 b elv_list_lock
ffffffff815ad432 b __key.29063
ffffffff815ad434 b printed.29234
ffffffff815ad440 B blk_requestq_cachep
ffffffff815ad460 B blk_queue_ida
ffffffff815ad488 b kblockd_workqueue
ffffffff815ad490 b __key.30561
ffffffff815ad490 b __key.30562
ffffffff815ad490 b __key.30579
ffffffff815ad490 b request_cachep
ffffffff815ad498 B blk_max_pfn
ffffffff815ad4a0 B blk_max_low_pfn
ffffffff815ad4a8 b iocontext_cachep
ffffffff815ad4c0 B block_depr
ffffffff815ad4e0 b major_names
ffffffff815adce0 b ext_devt_idr
ffffffff815add00 b bdev_map
ffffffff815add08 b __key.28287
ffffffff815add08 b disk_events_dfl_poll_msecs
ffffffff815add10 b __key.27786
ffffffff815add10 b p.27759
ffffffff815add20 b blk_default_cmd_filter
ffffffff815add60 b bsg_minor_idr
ffffffff815add80 b bsg_cmd_cachep
ffffffff815adda0 b bsg_device_list
ffffffff815adde0 b __key.28860
ffffffff815adde0 b bsg_class
ffffffff815adde8 b bsg_major
ffffffff815ade00 b bsg_cdev
ffffffff815ade68 b __key.28709
ffffffff815ade68 b __key.28710
ffffffff815ade68 b blkio_list_lock
ffffffff815ade70 b cfq_pool
ffffffff815ade78 b idr_layer_cache
ffffffff815ade80 b simple_ida_lock
ffffffff815ade90 b kobj_ns_type_lock
ffffffff815adea0 b kobj_ns_ops_tbl
ffffffff815adec0 B uevent_helper
ffffffff815adfc0 B uevent_seqnum
ffffffff815adfe0 b index_bits_to_maxindex
ffffffff815ae1e0 b __key.10819
ffffffff815ae1e0 b __key.10820
ffffffff815ae1e0 b __key.10823
ffffffff815ae1e0 b __key.10865
ffffffff815ae1e0 b radix_tree_node_cachep
ffffffff815ae1e8 B debug_locks_silent
ffffffff815ae1ec b __print_once.13863
ffffffff815ae1f0 B swiotlb_force
ffffffff815ae1f8 b io_tlb_nslabs
ffffffff815ae200 b io_tlb_start
ffffffff815ae208 b io_tlb_end
ffffffff815ae210 b io_tlb_list
ffffffff815ae218 b io_tlb_index
ffffffff815ae220 b io_tlb_orig_addr
ffffffff815ae228 b io_tlb_overflow_buffer
ffffffff815ae230 b late_alloc
ffffffff815ae234 b io_tlb_lock
ffffffff815ae240 B pci_lock
ffffffff815ae242 b __key.22838
ffffffff815ae244 b __key.19869
ffffffff815ae260 B pci_cache_line_size
ffffffff815ae264 B pcie_bus_config
ffffffff815ae268 B pci_pm_d3_delay
ffffffff815ae26c B pci_pci_problems
ffffffff815ae270 B isa_dma_bridge_buggy
ffffffff815ae278 b pci_platform_pm
ffffffff815ae280 b pcie_ari_disabled
ffffffff815ae284 b pci_acs_enable
ffffffff815ae288 b arch_set_vga_state
ffffffff815ae290 b resource_alignment_lock
ffffffff815ae2a0 b resource_alignment_param
ffffffff815aeaa0 b __key.28938
ffffffff815aeaa0 b sysfs_initialized
ffffffff815aeaa8 b proc_initialized
ffffffff815aeab0 b proc_bus_pci_dir
ffffffff815aeab8 B pci_slots_kset
ffffffff815aeac0 b asus_hides_smbus
ffffffff815aeac8 b asus_rcba_base
ffffffff815aead0 b __print_once.28775
ffffffff815aead4 b aspm_disabled
ffffffff815aead8 b aspm_force
ffffffff815aeadc b aspm_policy
ffffffff815aeae0 B pciehp_msi_disabled
ffffffff815aeae4 B pcie_ports_disabled
ffffffff815aeae8 b __key.19573
ffffffff815aeae8 b forceload
ffffffff815aeae9 b nosourceid
ffffffff815aeaea b aer_recover_ring_lock
ffffffff815aeaec b pcie_aer_disable
ffffffff815aeaf0 b __key.30056
ffffffff815aeaf0 b __key.30057
ffffffff815aeaf0 b parsed.31133
ffffffff815aeaf1 b aer_firmware_first
ffffffff815aeaf4 B pcie_pme_msi_disabled
ffffffff815aeaf8 b ht_irq_lock
ffffffff815aeafc b __key.18200
ffffffff815aeafc B pci_flags
ffffffff815aeb00 B fb_class
ffffffff815aeb08 b __key.30092
ffffffff815aeb08 b __key.30093
ffffffff815aeb08 b __key.30135
ffffffff815aeb08 B fb_mode_option
ffffffff815aeb20 b vgacon_text_mode_force
ffffffff815aeb24 b vga_bootup_console.22750
ffffffff815aeb28 b vga_is_gfx
ffffffff815aeb2c b vga_palette_blanked
ffffffff815aeb30 b vga_lock
ffffffff815aeb34 b vga_rolled_over
ffffffff815aeb40 b state
ffffffff815aeb78 b vgacon_xres
ffffffff815aeb7c b vgacon_yres
ffffffff815aeb80 b vga_512_chars
ffffffff815aeb84 b vga_video_font_height
ffffffff815aeb88 b cursor_size_lastfrom
ffffffff815aeb8c b cursor_size_lastto
ffffffff815aeb90 b vga_vesa_blanked
ffffffff815aeb94 b vga_state
ffffffff815aeba0 b vga_video_num_columns
ffffffff815aeba4 b vga_video_num_lines
ffffffff815aebb0 b vgacon_uni_pagedir
ffffffff815aebc0 b fontname
ffffffff815aec00 b con2fb_map_boot
ffffffff815aec40 b map_override
ffffffff815aec44 b first_fb_vc
ffffffff815aec48 b initial_rotation
ffffffff815aec50 b fbcon_device
ffffffff815aec58 b fbcon_has_sysfs
ffffffff815aec60 b con2fb_map
ffffffff815aeca0 b fbcon_has_exited
ffffffff815aeca4 b fbcon_cursor_noblink
ffffffff815aeca8 b softback_lines
ffffffff815aecc0 b fb_display
ffffffff815b0c40 b softback_curr
ffffffff815b0c48 b softback_end
ffffffff815b0c50 b softback_buf
ffffffff815b0c58 b softback_in
ffffffff815b0c60 b softback_top
ffffffff815b0c68 b logo_lines
ffffffff815b0c6c b scrollback_phys_max
ffffffff815b0c70 b scrollback_current
ffffffff815b0c74 b scrollback_max
ffffffff815b0c80 b palette_red
ffffffff815b0ca0 b palette_green
ffffffff815b0cc0 b palette_blue
ffffffff815b0ce0 b vbl_cursor_cnt
ffffffff815b0ce4 b fbcon_has_console_bind
ffffffff815b0ce8 b nologo
ffffffff815b0cf0 b vesafb_device
ffffffff815b0d00 B kacpi_hotplug_wq
ffffffff815b0d10 b buffer.31437
ffffffff815b0f10 b acpi_os_name
ffffffff815b0f74 b osi_linux
ffffffff815b0f78 b acpi_irq_handler
ffffffff815b0f80 b acpi_irq_context
ffffffff815b0f88 b t.31605
ffffffff815b0f90 b kacpi_notify_wq
ffffffff815b0f98 b kacpid_wq
ffffffff815b0fa0 b __print_once.31418
ffffffff815b0fa8 b __acpi_os_prepare_sleep
ffffffff815b0fb0 B wake_sleep_flags
ffffffff815b0fb4 b gts
ffffffff815b0fb8 b bfs
ffffffff815b0fbc b sleep_states
ffffffff815b0fc8 B acpi_kobj
ffffffff815b0fd0 B acpi_gbl_permanent_mmap
ffffffff815b0fd1 B osc_sb_apei_support_acked
ffffffff815b0fd8 B acpi_root_dir
ffffffff815b0fe0 B acpi_root
ffffffff815b0fe8 b acpi_bus_event_lock
ffffffff815b0fec b __key.24499
ffffffff815b0ff0 b read_madt.23771
ffffffff815b0ff8 b madt.23770
ffffffff815b1000 B first_ec
ffffffff815b1008 B boot_ec
ffffffff815b1010 b EC_FLAGS_MSI
ffffffff815b1014 b EC_FLAGS_VALIDATE_ECDT
ffffffff815b1018 b EC_FLAGS_SKIP_DSDT_SCAN
ffffffff815b101c b __key.25681
ffffffff815b101c b __key.25682
ffffffff815b1020 b sub_driver
ffffffff815b1028 b acpi_prt_lock
ffffffff815b1030 b acpi_power_resource_list
ffffffff815b1040 b __key.24186
ffffffff815b1040 B event_is_open
ffffffff815b1044 b acpi_event_seqnum
ffffffff815b1048 b acpi_system_event_lock
ffffffff815b104c b chars_remaining.30556
ffffffff815b1050 b str.30555
ffffffff815b10a0 b ptr.30557
ffffffff815b10a8 B acpi_irq_not_handled
ffffffff815b10ac B acpi_irq_handled
ffffffff815b10b0 b all_counters
ffffffff815b10b8 b num_gpes
ffffffff815b10bc b num_counters
ffffffff815b10c0 b all_attrs
ffffffff815b10c8 b counter_attrs
ffffffff815b10d0 b acpi_gpe_count
ffffffff815b10d8 b tables_kobj
ffffffff815b10e0 b dynamic_tables_kobj
ffffffff815b10f0 b nodes_found_map
ffffffff815b1110 b acpi_ac_dir
ffffffff815b1118 b lock_ac_dir_cnt
ffffffff815b1120 b acpi_battery_dir
ffffffff815b1128 b lock_battery_dir_cnt
ffffffff815b1130 b acpi_gbl_depth
ffffffff815b1134 b no_auto_ssdt
ffffffff815b1140 B acpi_gbl_startup_flags
ffffffff815b1144 B acpi_gbl_method_executing
ffffffff815b1145 B acpi_gbl_abort_method
ffffffff815b1146 B acpi_gbl_db_terminate_threads
ffffffff815b1148 B acpi_gbl_nesting_level
ffffffff815b114c B acpi_dbg_layer
ffffffff815b1150 B acpi_gbl_db_output_flags
ffffffff815b1158 B acpi_gbl_global_event_handler_context
ffffffff815b1160 B acpi_gbl_global_event_handler
ffffffff815b1168 B acpi_gbl_all_gpes_initialized
ffffffff815b1170 B acpi_gbl_gpe_fadt_blocks
ffffffff815b1180 B acpi_gbl_gpe_xrupt_list_head
ffffffff815b1190 B acpi_gbl_fixed_event_handlers
ffffffff815b11e0 B acpi_gbl_sleep_type_b
ffffffff815b11e1 B acpi_gbl_sleep_type_a
ffffffff815b11e2 B acpi_gbl_cm_single_step
ffffffff815b11e8 B acpi_gbl_current_walk_list
ffffffff815b11f0 B acpi_gbl_module_code_list
ffffffff815b11f8 B acpi_gbl_fadt_gpe_device
ffffffff815b1200 B acpi_gbl_root_node
ffffffff815b1210 B acpi_gbl_root_node_struct
ffffffff815b1240 B acpi_gbl_address_range_list
ffffffff815b1250 B acpi_gbl_supported_interfaces
ffffffff815b1258 B acpi_gbl_osi_data
ffffffff815b1259 B acpi_gbl_events_initialized
ffffffff815b125a B acpi_gbl_acpi_hardware_present
ffffffff815b125b B acpi_gbl_step_to_next_call
ffffffff815b125c B acpi_gbl_debugger_configuration
ffffffff815b125e B acpi_gbl_pm1_enable_register_save
ffffffff815b1260 B acpi_gbl_ps_find_count
ffffffff815b1264 B acpi_gbl_ns_lookup_count
ffffffff815b1268 B acpi_gbl_rsdp_original_location
ffffffff815b126c B acpi_gbl_original_mode
ffffffff815b1270 B acpi_gbl_reg_methods_executed
ffffffff815b1271 B acpi_gbl_next_owner_id_offset
ffffffff815b1272 B acpi_gbl_last_owner_id_index
ffffffff815b1280 B acpi_gbl_owner_id_mask
ffffffff815b12a0 B acpi_gbl_interface_handler
ffffffff815b12a8 B acpi_gbl_breakpoint_walk
ffffffff815b12b0 B acpi_gbl_table_handler_context
ffffffff815b12b8 B acpi_gbl_table_handler
ffffffff815b12c0 B acpi_gbl_init_handler
ffffffff815b12c8 B acpi_gbl_exception_handler
ffffffff815b12d0 B acpi_gbl_system_notify
ffffffff815b1310 B acpi_gbl_device_notify
ffffffff815b1348 B acpi_gbl_operand_cache
ffffffff815b1350 B acpi_gbl_ps_node_ext_cache
ffffffff815b1358 B acpi_gbl_ps_node_cache
ffffffff815b1360 B acpi_gbl_state_cache
ffffffff815b1368 B acpi_gbl_namespace_cache
ffffffff815b1370 B acpi_gbl_hardware_lock
ffffffff815b1378 B acpi_gbl_gpe_lock
ffffffff815b1380 B acpi_gbl_global_lock_pending
ffffffff815b1381 B acpi_gbl_global_lock_present
ffffffff815b1382 B acpi_gbl_global_lock_acquired
ffffffff815b1384 B acpi_gbl_global_lock_handle
ffffffff815b1388 B acpi_gbl_global_lock_pending_lock
ffffffff815b1390 B acpi_gbl_global_lock_semaphore
ffffffff815b1398 B acpi_gbl_global_lock_mutex
ffffffff815b13a0 B acpi_gbl_mutex_info
ffffffff815b1460 B acpi_gbl_namespace_rw_lock
ffffffff815b1478 B acpi_gbl_osi_mutex
ffffffff815b1480 B acpi_gbl_integer_nybble_width
ffffffff815b1481 B acpi_gbl_integer_byte_width
ffffffff815b1482 B acpi_gbl_integer_bit_width
ffffffff815b1490 B acpi_gbl_original_dsdt_header
ffffffff815b14b8 B acpi_gbl_DSDT
ffffffff815b14c0 B acpi_gbl_xpm1b_enable
ffffffff815b14d0 B acpi_gbl_xpm1b_status
ffffffff815b14e0 B acpi_gbl_xpm1a_enable
ffffffff815b14f0 B acpi_gbl_xpm1a_status
ffffffff815b1500 B acpi_gbl_FACS
ffffffff815b1510 B acpi_gbl_root_table_list
ffffffff815b1528 B acpi_gbl_trace_dbg_layer
ffffffff815b152c B acpi_gbl_trace_dbg_level
ffffffff815b1530 B acpi_gbl_original_dbg_layer
ffffffff815b1534 B acpi_gbl_original_dbg_level
ffffffff815b1540 B acpi_fixed_event_count
ffffffff815b1554 B acpi_gpe_count
ffffffff815b1558 B acpi_gbl_no_resource_disassembly
ffffffff815b1559 B acpi_gbl_reduced_hardware
ffffffff815b155a B acpi_gbl_system_awake_and_running
ffffffff815b155c B acpi_gbl_trace_method_name
ffffffff815b1560 B acpi_gbl_trace_flags
ffffffff815b1564 B acpi_current_gpe_count
ffffffff815b1570 B acpi_gbl_FADT
ffffffff815b167c B acpi_gbl_disable_auto_repair
ffffffff815b167d B acpi_gbl_truncate_io_addresses
ffffffff815b167e B acpi_gbl_copy_dsdt_locally
ffffffff815b167f B acpi_gbl_enable_aml_debug_object
ffffffff815b1680 B acpi_gbl_all_methods_serialized
ffffffff815b1681 B acpi_gbl_enable_interpreter_slack
ffffffff815b1688 b hed_handle
ffffffff815b16a0 b dapei.26110
ffffffff815b16a8 B hest_disable
ffffffff815b16b0 b seq.23950
ffffffff815b16c0 B erst_disable
ffffffff815b16c4 b erst_lock
ffffffff815b16c8 b erst_tab
ffffffff815b16e0 b erst_erange
ffffffff815b1700 b reader_pos
ffffffff815b1720 B ghes_estatus_caches
ffffffff815b1740 B ghes_disable
ffffffff815b1748 b ghes_estatus_pool_size_request
ffffffff815b1750 b ghes_ioremap_lock_nmi
ffffffff815b1758 b ghes_ioremap_area
ffffffff815b1760 b ghes_ioremap_lock_irq
ffffffff815b1770 b ghes_proc_irq_work
ffffffff815b1788 b ghes_estatus_pool
ffffffff815b1790 b ghes_estatus_llist
ffffffff815b1798 b ghes_nmi_lock
ffffffff815b179c b seqno.30600
ffffffff815b17a0 b ghes_estatus_cache_alloced
ffffffff815b17a4 B pnp_debug
ffffffff815b17a8 B pnp_platform_devices
ffffffff815b17ac B pnp_lock
ffffffff815b17ae b __key.18648
ffffffff815b17b0 b num
ffffffff815b17c0 B xen_hvm_resume_frames
ffffffff815b17c8 b gnttab_interface
ffffffff815b17d0 b gnttab_list_lock
ffffffff815b17d4 b gnttab_free_count
ffffffff815b17d8 b nr_grant_frames
ffffffff815b17dc b grant_table_version
ffffffff815b17e0 b gnttab_free_head
ffffffff815b17e8 b gnttab_list
ffffffff815b17f0 b gnttab_free_callback_list
ffffffff815b17f8 b gnttab_shared
ffffffff815b1800 b boot_max_nr_grant_frames
ffffffff815b1808 b grstatus
ffffffff815b1810 b evtchn_to_irq
ffffffff815b1818 b pirq_needs_eoi
ffffffff815b1820 b debug_lock.24388
ffffffff815b1828 b pirq_eoi_map
ffffffff815b1830 b handler.23387
ffffffff815b1840 B balloon_stats
ffffffff815b1880 b frame_list
ffffffff815b2880 b xenbus_valloc_lock
ffffffff815b2884 b xenbus_irq
ffffffff815b28a0 b xs_state
ffffffff815b2970 b watches_lock
ffffffff815b2974 b xenwatch_pid
ffffffff815b2978 b watch_events_lock
ffffffff815b297a b __key.21476
ffffffff815b297a b __key.21477
ffffffff815b297a b __key.21478
ffffffff815b297a b __key.21479
ffffffff815b297a b __key.21480
ffffffff815b297a b __key.21481
ffffffff815b2980 B xenstored_ready
ffffffff815b2988 B xen_store_interface
ffffffff815b2990 B xen_store_evtchn
ffffffff815b2994 b __key.7311
ffffffff815b2998 b xen_store_mfn
ffffffff815b29a0 b __key.19510
ffffffff815b29a0 b __key.25302
ffffffff815b29a0 b __key.25303
ffffffff815b29a0 b __key.25304
ffffffff815b29a0 b __key.27381
ffffffff815b29a0 b ready_to_wait_for_devices
ffffffff815b29a4 b backend_state
ffffffff815b29c0 b balloon_dev
ffffffff815b2c38 b selfballoon_min_usable_mb
ffffffff815b2c40 b platform_mmio
ffffffff815b2c48 b platform_mmio_alloc
ffffffff815b2c50 b platform_mmiolen
ffffffff815b2c58 b callback_via
ffffffff815b2c60 B start_dma_addr
ffffffff815b2c68 b xen_io_tlb_nslabs
ffffffff815b2c70 b xen_io_tlb_start
ffffffff815b2c78 b xen_io_tlb_end
ffffffff815b2c80 B tty_class
ffffffff815b2c88 B tty_files_lock
ffffffff815b2c8a b redirect_lock
ffffffff815b2c90 b redirect
ffffffff815b2c98 b __key.27651
ffffffff815b2c98 b __key.27652
ffffffff815b2c98 b __key.27653
ffffffff815b2c98 b __key.27654
ffffffff815b2c98 b __key.27656
ffffffff815b2c98 b __key.27657
ffffffff815b2c98 b __key.27658
ffffffff815b2c98 b __key.27659
ffffffff815b2c98 b __key.27824
ffffffff815b2c98 b consdev
ffffffff815b2ca0 b tty_cdev
ffffffff815b2d20 b console_cdev
ffffffff815b2da0 b tty_ldisc_lock
ffffffff815b2dc0 b tty_ldiscs
ffffffff815b2eb0 b __key.21834
ffffffff815b2eb0 b __key.21835
ffffffff815b2eb0 b __key.21836
ffffffff815b2eb0 b __key.21837
ffffffff815b2eb0 b __key.21838
ffffffff815b2ec0 b ptm_driver
ffffffff815b2ec8 b pts_driver
ffffffff815b2ee0 b ptmx_fops
ffffffff815b2fc0 b ptmx_cdev
ffffffff815b3028 b __key.21283
ffffffff815b3040 B vt_dont_switch
ffffffff815b3042 b vt_event_lock
ffffffff815b3044 b disable_vt_switch
ffffffff815b3048 b vc_class
ffffffff815b3050 b __key.25317
ffffffff815b3050 b __key.25458
ffffffff815b3050 B sel_cons
ffffffff815b3058 b sel_end
ffffffff815b305c b use_unicode
ffffffff815b3060 b sel_buffer
ffffffff815b3068 b sel_buffer_lth
ffffffff815b3080 B vt_spawn_con
ffffffff815b30a0 b keyboard_notifier_list
ffffffff815b30b0 b zero.26537
ffffffff815b30b4 b kbd_event_lock
ffffffff815b30c0 b kbd_table
ffffffff815b31fb b rep
ffffffff815b3200 b key_down
ffffffff815b3260 b shift_state
ffffffff815b3264 b pressed.26837
ffffffff815b3268 b committing.26838
ffffffff815b326c b diacr
ffffffff815b3270 b dead_key_next
ffffffff815b3278 b releasestart.26839
ffffffff815b3280 b committed.26831
ffffffff815b3288 b chords.26830
ffffffff815b3290 b shift_down
ffffffff815b3299 b ledioctl
ffffffff815b32a0 b inv_translate
ffffffff815b33a0 b dflt
ffffffff815b33c0 B console_driver
ffffffff815b33c8 B console_blank_hook
ffffffff815b33d0 B last_console
ffffffff815b33d4 B fg_console
ffffffff815b33d8 B console_blanked
ffffffff815b33dc B do_poke_blanked_console
ffffffff815b33e0 B vc_cons
ffffffff815b3db8 B conswitchp
ffffffff815b3dc0 b vt_notifier_list
ffffffff815b3dd0 b scrollback_delta
ffffffff815b3dd4 b blank_timer_expired
ffffffff815b3dd8 b softcursor_original
ffffffff815b3ddc b old.26373
ffffffff815b3dde b oldx.26374
ffffffff815b3de0 b oldy.26375
ffffffff815b3de8 b tty0dev
ffffffff815b3e00 b con_driver_map
ffffffff815b3ff8 b master_display_fg
ffffffff815b4000 b __key.27116
ffffffff815b4000 b registered_con_driver
ffffffff815b4280 b blank_state
ffffffff815b4284 b printable
ffffffff815b4288 b printing_lock.26941
ffffffff815b428c b kmsg_con.26925
ffffffff815b4290 b __key.27387
ffffffff815b4290 b vtconsole_class
ffffffff815b4298 b vesa_blank_mode
ffffffff815b429c b ignore_poke
ffffffff815b42a0 b vc0_cdev
ffffffff815b4308 b __print_once.26904
ffffffff815b430c b vesa_off_interval
ffffffff815b4310 b saved_fg_console
ffffffff815b4314 b saved_last_console
ffffffff815b4318 b saved_want_console
ffffffff815b431c b saved_vc_mode
ffffffff815b4320 b saved_console_blanked
ffffffff815b4324 B funcbufleft
ffffffff815b4340 b hvc_structs_lock
ffffffff815b4348 b hvc_driver
ffffffff815b4360 b cons_ops
ffffffff815b43e0 b hvc_kicked
ffffffff815b43e8 b hvc_task
ffffffff815b4400 b xencons_lock
ffffffff815b4420 b buf.24502
ffffffff815b4640 b __key.25448
ffffffff815b4640 b mem_class
ffffffff815b4680 b last_value.25091
ffffffff815b46a0 b input_timer_state
ffffffff815b46c0 b fasync
ffffffff815b46c8 b bootid_spinlock.25353
ffffffff815b4700 b random_int_secret
ffffffff815b4740 b min_write_thresh
ffffffff815b4750 b sysctl_bootid
ffffffff815b4760 b input_pool_data
ffffffff815b4960 b nonblocking_pool_data
ffffffff815b49e0 b blocking_pool_data
ffffffff815b4a60 b misc_minors
ffffffff815b4a68 b misc_class
ffffffff815b4a70 b __key.19518
ffffffff815b4a70 b nvram_state_lock
ffffffff815b4a74 b nvram_open_cnt
ffffffff815b4a78 b nvram_open_mode
ffffffff815b4a80 b vga_default
ffffffff815b4a88 b vga_arbiter_used
ffffffff815b4a8a b vga_lock
ffffffff815b4a8c b vga_count
ffffffff815b4a90 b vga_decode_count
ffffffff815b4a94 b vga_user_lock
ffffffff815b4aa0 B devices_kset
ffffffff815b4aa8 B sysfs_dev_block_kobj
ffffffff815b4ab0 B sysfs_dev_char_kobj
ffffffff815b4ab8 B platform_notify_remove
ffffffff815b4ac0 B platform_notify
ffffffff815b4ac8 b __key.19604
ffffffff815b4ac8 b virtual_dir.19609
ffffffff815b4ad0 b dev_kobj
ffffffff815b4ad8 B system_kset
ffffffff815b4ae0 b __key.15238
ffffffff815b4ae0 b bus_kset
ffffffff815b4ae8 b __key.15400
ffffffff815b4ae8 b deferred_wq
ffffffff815b4af0 b driver_deferred_probe_enable
ffffffff815b4af4 b probe_count
ffffffff815b4af8 b class_kset
ffffffff815b4b00 b __key.19027
ffffffff815b4b00 B total_cpus
ffffffff815b4b08 B firmware_kobj
ffffffff815b4b10 b __key.9039
ffffffff815b4b10 b thread
ffffffff815b4b18 b __key.7055
ffffffff815b4b18 b req_lock
ffffffff815b4b20 b requests
ffffffff815b4b40 b power_attrs
ffffffff815b4b48 b __key.12994
ffffffff815b4b60 B suspend_stats
ffffffff815b4bf4 b __key.7985
ffffffff815b4bf4 b pm_transition
ffffffff815b4bf8 b async_error
ffffffff815b4c00 B events_check_enabled
ffffffff815b4c02 b events_lock
ffffffff815b4c04 b combined_event_count
ffffffff815b4c08 b saved_count
ffffffff815b4c10 b wakeup_sources_stats_dentry
ffffffff815b4c18 b __key.17262
ffffffff815b4c18 b __key.24895
ffffffff815b4c18 b __key.8112
ffffffff815b4c20 B node_devices
ffffffff815dc420 b __hugetlb_register_node
ffffffff815dc428 b __hugetlb_unregister_node
ffffffff815dc430 B hypervisor_kobj
ffffffff815dc440 B scsi_logging_level
ffffffff815dc444 b __key.28590
ffffffff815dc444 b __key.28591
ffffffff815dc444 b scsi_host_next_hn
ffffffff815dc448 b __key.28653
ffffffff815dc448 b tur_command.29813
ffffffff815dc450 B scsi_sdb_cache
ffffffff815dc458 b __key.7489
ffffffff815dc458 b async_scan_lock
ffffffff815dc460 B blank_transport_template
ffffffff815dc5d8 b __key.29243
ffffffff815dc5d8 b __key.29245
ffffffff815dc5e0 b scsi_dev_flags
ffffffff815dc6e0 b scsi_default_dev_flags
ffffffff815dc6e8 b scsi_table_header
ffffffff815dc6f0 b proc_scsi
ffffffff815dc700 b sd_cdb_pool
ffffffff815dc708 b sd_cdb_cache
ffffffff815dc710 b sd_index_lock
ffffffff815dc720 b sd_index_ida
ffffffff815dc748 b __key.30132
ffffffff815dc760 B libata_allow_tpm
ffffffff815dc764 B libata_noacpi
ffffffff815dc768 B libata_fua
ffffffff815dc76c B ata_print_id
ffffffff815dc770 b ata_force_tbl_size
ffffffff815dc778 b ata_force_tbl
ffffffff815dc780 b atapi_dmadir
ffffffff815dc784 b ata_probe_timeout
ffffffff815dc788 b ata_ignore_hpa
ffffffff815dc78c b atapi_an
ffffffff815dc790 b __key.38144
ffffffff815dc790 b __key.38147
ffffffff815dc790 b __key.38168
ffffffff815dc790 b __key.7489
ffffffff815dc790 b lock.38209
ffffffff815dc792 b __key.38252
ffffffff815dc7a0 b ata_scsi_rbuf_lock
ffffffff815dc7c0 b ata_scsi_rbuf
ffffffff815dd7c0 B ata_scsi_transport_template
ffffffff815dd7c8 b ata_sff_wq
ffffffff815dd7e0 b amd_lock
ffffffff815dd800 b amd_chipset
ffffffff815dd840 b serio_event_lock
ffffffff815dd842 b __key.19460
ffffffff815dd842 b __key.19711
ffffffff815dd844 b serio_no.19458
ffffffff815dd860 b i8042_nokbd
ffffffff815dd862 b i8042_lock
ffffffff815dd868 b i8042_platform_filter
ffffffff815dd870 b i8042_noaux
ffffffff815dd871 b i8042_noloop
ffffffff815dd872 b i8042_debug
ffffffff815dd878 b i8042_start_time
ffffffff815dd880 b i8042_ports
ffffffff815dd8e0 b i8042_platform_device
ffffffff815dd8e8 b i8042_aux_irq_registered
ffffffff815dd8ec b i8042_aux_irq
ffffffff815dd8f0 b i8042_kbd_irq_registered
ffffffff815dd8f4 b i8042_kbd_irq
ffffffff815dd8f8 b i8042_ctr
ffffffff815dd8f9 b i8042_mux_present
ffffffff815dd8fa b i8042_reset
ffffffff815dd8fb b i8042_initial_ctr
ffffffff815dd8fc b i8042_nomux
ffffffff815dd8fd b i8042_direct
ffffffff815dd8fe b i8042_dritek
ffffffff815dd900 b last_transmit.17396
ffffffff815dd908 b last_str.17397
ffffffff815dd909 b i8042_notimeout
ffffffff815dd90a b i8042_suppress_kbd_ack
ffffffff815dd90b b i8042_unlock
ffffffff815dd90c b i8042_dumbkbd
ffffffff815dd90d b i8042_pnp_kbd_registered
ffffffff815dd90e b i8042_pnp_aux_registered
ffffffff815dd910 b i8042_pnp_data_reg
ffffffff815dd914 b i8042_pnp_command_reg
ffffffff815dd918 b i8042_pnp_kbd_irq
ffffffff815dd920 b i8042_pnp_kbd_name
ffffffff815dd940 b i8042_pnp_kbd_devices
ffffffff815dd944 b i8042_pnp_aux_irq
ffffffff815dd960 b i8042_pnp_aux_name
ffffffff815dd980 b i8042_pnp_aux_devices
ffffffff815dd984 b i8042_nopnp
ffffffff815dd985 b i8042_bypass_aux_irq_test
ffffffff815dd986 b __key.7517
ffffffff815dd988 b __key.22765
ffffffff815dd988 b __key.22766
ffffffff815dd9a0 b __key.23900
ffffffff815dd9a0 b __key.24090
ffffffff815dd9a0 b proc_bus_input_dir
ffffffff815dd9c0 b input_table
ffffffff815dda00 b input_devices_state
ffffffff815dda04 b input_no.23956
ffffffff815dda08 b __key.21339
ffffffff815dda20 b mousedev_mix
ffffffff815dda40 b mousedev_table
ffffffff815ddb40 b __key.21952
ffffffff815ddb40 b __key.21953
ffffffff815ddb40 b atkbd_reset
ffffffff815ddb41 b atkbd_softrepeat
ffffffff815ddb42 b atkbd_terminal
ffffffff815ddb48 b atkbd_platform_fixup
ffffffff815ddb50 b atkbd_platform_fixup_data
ffffffff815ddb58 b atkbd_scroll
ffffffff815ddb59 b atkbd_extra
ffffffff815ddb60 b atkbd_platform_scancode_fixup
ffffffff815ddb68 b __key.20960
ffffffff815ddb68 b psmouse_resync_time
ffffffff815ddb70 b kpsmoused_wq
ffffffff815ddb78 b impaired_toshiba_kbc
ffffffff815ddb79 b broken_olpc_ec
ffffffff815ddb80 b lifebook_present
ffffffff815ddb81 b lifebook_use_6byte_proto
ffffffff815ddb88 b desired_serio_phys
ffffffff815ddba0 B rtc_class
ffffffff815ddbc0 b rtc_ida
ffffffff815ddbe8 b __key.20252
ffffffff815ddbe8 b __key.20255
ffffffff815ddbe8 b __key.20275
ffffffff815ddbf0 b old_rtc
ffffffff815ddc00 b old_system
ffffffff815ddc10 b old_delta
ffffffff815ddc20 b rtc_devt
ffffffff815ddc40 b pnp_driver_registered
ffffffff815ddc60 b cmos_rtc
ffffffff815ddc98 b platform_driver_registered
ffffffff815ddca0 b acpi_rtc_info
ffffffff815ddcb8 b watchdog_dev_busy
ffffffff815ddcc0 b wdd
ffffffff815ddcc8 B edac_err_assert
ffffffff815ddccc B edac_handlers
ffffffff815ddcd0 b edac_subsys_valid
ffffffff815ddce0 B cpufreq_global_kobject
ffffffff815ddd00 b cpufreq_transition_notifier_list
ffffffff815ddd58 b init_cpufreq_transition_notifier_list_called
ffffffff815ddd5a b cpufreq_driver_lock
ffffffff815ddd60 b cpufreq_driver
ffffffff815ddd68 b __key.17462
ffffffff815ddd68 b __key.7489
ffffffff815ddd68 b cpufreq_stats_lock
ffffffff815ddd70 b cpuidle_enter_ops
ffffffff815ddd78 b enabled_devices
ffffffff815ddd7c b __key.7607
ffffffff815ddd80 B cpuidle_driver_lock
ffffffff815ddd88 b cpuidle_curr_driver
ffffffff815ddd90 B cpuidle_curr_governor
ffffffff815ddd98 b sysfs_switch
ffffffff815ddd9c b __key.8440
ffffffff815ddda0 B dmi_available
ffffffff815ddda4 b dmi_initialized
ffffffff815ddda8 b dmi_num
ffffffff815dddaa b dmi_len
ffffffff815dddac b dmi_base
ffffffff815dddc0 b dmi_ident
ffffffff815dde58 B ibft_addr
ffffffff815dde60 b mmap_kset.14003
ffffffff815dde68 b map_entries_nr.14002
ffffffff815dde6c B i8253_lock
ffffffff815dde70 B hid_debug
ffffffff815dde74 b hid_ignore_special_drivers
ffffffff815dde78 b __key.25028
ffffffff815dde78 b id.24953
ffffffff815dde7c b __key.24971
ffffffff815dde80 b dev_data_list_lock
ffffffff815dde90 b ppr_notifier
ffffffff815ddea0 b iommu_pd_list_lock
ffffffff815ddea8 b pt_domain
ffffffff815ddeb0 b __key.23093
ffffffff815ddec0 B amd_iommu_pd_alloc_bitmap
ffffffff815ddec8 B amd_iommu_rlookup_table
ffffffff815dded0 B amd_iommu_alias_table
ffffffff815dded8 B amd_iommu_dev_table
ffffffff815ddee0 B amd_iommu_pd_lock
ffffffff815ddee4 B amd_iommus_present
ffffffff815ddf00 B amd_iommus
ffffffff815de000 B amd_iommu_unmap_flush
ffffffff815de002 B amd_iommu_last_bdf
ffffffff815de004 B amd_iommu_dump
ffffffff815de008 b dev_table_size
ffffffff815de00c b alias_table_size
ffffffff815de010 b rlookup_table_size
ffffffff815de020 b pcibios_fwaddrmap_lock
ffffffff815de028 B xen_pci_frontend
ffffffff815de030 b dev_domain_list_spinlock
ffffffff815de040 b quirk_aspm_offset
ffffffff815de100 b toshiba_line_size
ffffffff815de120 B pcibios_disable_irq
ffffffff815de128 b eisa_irq_mask.29018
ffffffff815de130 b pirq_table
ffffffff815de138 b broken_hp_bios_irq9
ffffffff815de140 b pirq_router
ffffffff815de160 b pirq_router_dev
ffffffff815de168 b acer_tm360_irqrouting
ffffffff815de170 B pci_config_lock
ffffffff815de178 B pci_root_bus
ffffffff815de180 B pirq_table_addr
ffffffff815de188 B noioapicreroute
ffffffff815de18c B noioapicquirk
ffffffff815de190 B pci_routeirq
ffffffff815de194 B pci_early_dump_regs
ffffffff815de198 b smbios_type_b1_flag
ffffffff815de19c b pci_bf_sort
ffffffff815de1a0 B pci_root_info
ffffffff815df020 B pci_root_num
ffffffff815df040 B saved_context
ffffffff815df180 b br_ioctl_hook
ffffffff815df188 b vlan_ioctl_hook
ffffffff815df190 b dlci_ioctl_hook
ffffffff815df198 b __key.41667
ffffffff815df198 b warned.42188
ffffffff815df19c b net_family_lock
ffffffff815df1c0 B memcg_socket_limit_enabled
ffffffff815df1d0 b warncomm.44070
ffffffff815df1e0 b warned.44069
ffffffff815df1e4 b __key.44310
ffffffff815df1e8 b proto_inuse_idx
ffffffff815df200 b est_tree_lock
ffffffff815df220 b elist
ffffffff815df3d0 b est_root
ffffffff815df400 B init_net
ffffffff815df900 b net_cachep
ffffffff815df908 b netns_wq
ffffffff815df910 b cleanup_list_lock
ffffffff815df920 b net_generic_ids
ffffffff815df980 b net_secret
ffffffff815df9c0 b empty.37194
ffffffff815dfa00 b ptype_lock
ffffffff815dfa20 b dev_boot_setup
ffffffff815dfb60 b netdev_chain
ffffffff815dfb80 b gifconf_list
ffffffff815dfcc0 b ifindex.47961
ffffffff815dfcc4 b busy.33051
ffffffff815dfce0 B dst_default_metrics
ffffffff815dfd18 b dst_busy_list
ffffffff815dfd20 b netevent_notif_chain
ffffffff815dfd30 b neigh_tables
ffffffff815dfd40 b rtnl_msg_handlers
ffffffff815e0150 b rtattr_max
ffffffff815e0158 b rta_buf
ffffffff815e0160 b lweventlist_lock
ffffffff815e0168 b linkwatch_nextevent
ffffffff815e0170 b linkwatch_flags
ffffffff815e0180 B sock_diag_nlsk
ffffffff815e0188 b inet_rcv_compat
ffffffff815e01a0 b sock_diag_handlers
ffffffff815e02e0 B flow_cache_genid
ffffffff815e0300 b flow_cache_global
ffffffff815e0368 b flow_cache_gc_lock
ffffffff815e036a b __key.7489
ffffffff815e036c b rps_dev_flow_lock.36355
ffffffff815e036e b rps_map_lock.36313
ffffffff815e0370 b __key.36720
ffffffff815e0380 b nl_table_users
ffffffff815e0388 b nl_table
ffffffff815e0390 b __key.37411
ffffffff815e0390 b __key.37412
ffffffff815e0390 b netlink_chain
ffffffff815e03a0 b family_ht
ffffffff815e04c0 b rt_hash_locks
ffffffff815e04c8 b __rt_peer_genid
ffffffff815e04cc b ip_fb_id_lock.45713
ffffffff815e04d0 b ip_fallback_id.45715
ffffffff815e04d8 b last_gc.45428
ffffffff815e04e0 b ip_rt_max_size
ffffffff815e04e4 b equilibrium.45430
ffffffff815e04e8 b rover.45429
ffffffff815e04ec b __key.28822
ffffffff815e0500 b expires_work
ffffffff815e0558 b expires_ljiffies
ffffffff815e0560 b rover.45369
ffffffff815e0580 b empty
ffffffff815e05c0 b gc_work
ffffffff815e0618 b gc_lock
ffffffff815e0620 b ip4_frags
ffffffff815e0898 b zero
ffffffff815e08a0 B ip_ra_chain
ffffffff815e08a8 b ip_ra_lock
ffffffff815e08ac b hint.39836
ffffffff815e08b0 B sysctl_local_reserved_ports
ffffffff815e08c0 B tcp_sockets_allocated
ffffffff815e08e8 B tcp_memory_allocated
ffffffff815e0900 B tcp_orphan_count
ffffffff815e0928 b tcp_secret_generating
ffffffff815e0930 b tcp_secret_locker
ffffffff815e0938 b tcp_secret_primary
ffffffff815e0940 b tcp_secret_secondary
ffffffff815e0948 b tcp_secret_retiring
ffffffff815e0950 b __key.45701
ffffffff815e0950 b __key.45703
ffffffff815e0960 b tcp_secret_one
ffffffff815e09c0 b tcp_secret_two
ffffffff815e0a40 B tcp_hashinfo
ffffffff815e0cc0 B sysctl_tcp_max_ssthresh
ffffffff815e0cc4 b tcp_cong_list_lock
ffffffff815e0cc8 b __print_once.41336
ffffffff815e0cd0 B udp_memory_allocated
ffffffff815e0ce0 b inet_addr_lst
ffffffff815e14e0 b inet_addr_hash_lock
ffffffff815e1500 B ipv4_config
ffffffff815e1508 b inetsw_lock
ffffffff815e1520 b inetsw
ffffffff815e15e0 b fib_info_cnt
ffffffff815e15e4 b fib_info_lock
ffffffff815e1600 b fib_info_devhash
ffffffff815e1e00 b fib_info_hash_size
ffffffff815e1e08 b fib_info_hash
ffffffff815e1e10 b fib_info_laddrhash
ffffffff815e1e18 b tnode_free_head
ffffffff815e1e20 b tnode_free_size
ffffffff815e1e40 b ping_table
ffffffff815e2048 b ping_port_rover
ffffffff815e204c b ip_ping_group_range_min
ffffffff815e2054 b zero
ffffffff815e2060 B syncookie_secret
ffffffff815e20e8 b sysctl_hdr
ffffffff815e20f0 b __key.27703
ffffffff815e2100 b idx_generator.41565
ffffffff815e2104 b xfrm_policy_sk_bundle_lock
ffffffff815e2108 b xfrm_policy_sk_bundles
ffffffff815e2120 b xfrm_policy_afinfo
ffffffff815e2260 b dummy.42214
ffffffff815e22a0 b xfrm_state_afinfo
ffffffff815e23e0 b xfrm_state_gc_lock
ffffffff815e23e2 b xfrm_state_lock
ffffffff815e23e4 b acqseq.41924
ffffffff815e23e8 b __key.42243
ffffffff815e2400 B unix_table_lock
ffffffff815e2420 B unix_socket_table
ffffffff815e2c28 b unix_nr_socks
ffffffff815e2c30 b __key.37161
ffffffff815e2c30 b __key.37162
ffffffff815e2c30 B unix_tot_inflight
ffffffff815e2c34 b unix_gc_lock
ffffffff815e2c36 b gc_in_progress
ffffffff815e2c38 b klist_remove_lock
ffffffff815e3000 B __brk_base
ffffffff815e3000 B __bss_stop
ffffffff815f3000 b .brk.shared_info_page_brk
ffffffff815f4000 b .brk.level1_ident_pgt
ffffffff815f8000 b .brk.p2m_missing
ffffffff815f9000 b .brk.p2m_mid_missing
ffffffff815fa000 b .brk.p2m_mid_missing_mfn
ffffffff815fb000 b .brk.p2m_top
ffffffff815fc000 b .brk.p2m_top_mfn
ffffffff815fd000 b .brk.p2m_top_mfn_p
ffffffff815fe000 b .brk.p2m_identity
ffffffff815ff000 b .brk.p2m_mid
ffffffff817f3000 b .brk.p2m_mid_mfn
ffffffff819e7000 b .brk.p2m_mid_identity
ffffffff819ed000 b .brk.m2p_overrides
ffffffff819f1000 b .brk.dmi_alloc
ffffffff81a01000 B __brk_limit
ffffffff81a01000 A _end
ffffffffff700000 A VDSO64_PRELINK



------=_20120801183353_31106
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

------=_20120801183353_31106--



From xen-devel-bounces@lists.xen.org Wed Aug 01 16:44:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 16:44:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swc1H-0001zF-E8; Wed, 01 Aug 2012 16:43:35 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <t.wagner@inode.at>) id 1Swbsn-0001kd-0H
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 16:34:51 +0000
X-Env-Sender: t.wagner@inode.at
X-Msg-Ref: server-2.tower-27.messagelabs.com!1343838850!11782792!1
X-Originating-IP: [213.47.214.141]
X-SpamReason: No, hits=0.0 required=7.0 tests=spamassassin: 
	dGltZW91dCB3b3JraW5nIG9uOiAobm8gZmlsZSksIHJ1bGUgX19NTF80MTlfTk9SSVNLLAo=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4019 invoked from network); 1 Aug 2012 16:34:10 -0000
Received: from webmail.inode.at (HELO webmail.inode.at) (213.47.214.141)
	by server-2.tower-27.messagelabs.com with SMTP;
	1 Aug 2012 16:34:10 -0000
Received: from [127.0.0.1] (helo=inode.at) by webmail with smtp (Exim 4.67)
	(envelope-from <t.wagner@inode.at>) id 1Swbrt-00015E-UL
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 18:33:55 +0200
Received: from 85.126.176.235
	(SquirrelMail authenticated user t.wagner@inode.at)
	by webmail.inode.at with HTTP; Wed, 1 Aug 2012 18:33:53 +0200 (CEST)
Message-ID: <85.126.176.235.1343838833.wm@webmail.inode.at>
Date: Wed, 1 Aug 2012 18:33:53 +0200 (CEST)
From: "Dipl.-Ing. Thomas Wagner" <t.wagner@inode.at>
To: <xen-devel@lists.xen.org>
X-Priority: 3
Importance: Normal
X-Mailer: SquirrelMail (version 1.2.8)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="----=_20120801183353_31106"
X-Mailman-Approved-At: Wed, 01 Aug 2012 16:43:32 +0000
Subject: [Xen-devel] [Fwd: Re:  XEN 4.1.2 and kernel 3.4.7]
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

------=_20120801183353_31106
Content-Type: text/plain; charset=iso-8859-1
Content-Transfer-Encoding: 8bit

I build the kernel myself. The config and System.map of the running kernel
are attached.
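
As an aside, the hex addresses in a trace like the one quoted below can be decoded by hand against a System.map such as the one attached: find the nearest symbol at or below the faulting address. A minimal sketch of that lookup follows; `load_map` and `resolve` are hypothetical helper names, and the two sample entries are copied from the attached map.

```python
import bisect

def load_map(lines):
    """Parse 'address type name' lines from a System.map into (addr, name) pairs sorted by address."""
    return sorted((int(a, 16), n) for a, _t, n in
                  (l.split() for l in lines if l.strip()))

def resolve(syms, addr):
    """Return 'name+offset' for the nearest symbol at or below addr, or None."""
    i = bisect.bisect_right([a for a, _ in syms], addr) - 1
    if i < 0:
        return None
    base, name = syms[i]
    return "%s+0x%x" % (name, addr - base)

# Two entries taken verbatim from the attached System.map:
sample = [
    "ffffffff815b1570 B acpi_gbl_FADT",
    "ffffffff815b167c B acpi_gbl_disable_auto_repair",
]
syms = load_map(sample)
print(resolve(syms, 0xffffffff815b1580))  # -> acpi_gbl_FADT+0x10
```

This is the same arithmetic `addr2line`/`ksymoops` perform; with a full System.map loaded, each `[<ffffffff...>]` frame in the stall trace resolves to the function names already shown inline.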


regards
Thomas

>>>> On 01.08.12 at 09:49, "Dipl.-Ing. Thomas Wagner"
>>>> <t.wagner@inode.at> wrote:
>> Aug  1 05:52:08 zeus kernel: INFO: rcu_bh self-detected stall on CPU
>> { 3}  (t=0 jiffies)
>> Aug  1 05:52:08 zeus kernel: Pid: 0, comm: swapper/3 Not tainted
>> 3.4.7-4-xen #1
>> Aug  1 05:52:08 zeus kernel: Call Trace:
>> Aug  1 05:52:08 zeus kernel: <IRQ>  [<ffffffff81096841>] ?
>> __rcu_pending+0x1a1/0x4b0
>> Aug  1 05:52:08 zeus kernel: [<ffffffff8106f150>] ?
>> tick_init_highres+0x10/0x10
>> Aug  1 05:52:08 zeus kernel: [<ffffffff8109771b>] ?
>> rcu_check_callbacks+0xbb/0xd0
>> Aug  1 05:52:08 zeus kernel: [<ffffffff81042c9f>] ?
>> update_process_times+0x3f/0x80
>> Aug  1 05:52:08 zeus kernel: [<ffffffff8106f1a4>] ?
>> tick_sched_timer+0x54/0x130
>> Aug  1 05:52:08 zeus kernel: [<ffffffff81055469>] ?
>> __run_hrtimer.isra.34+0x59/0xf0
>> Aug  1 05:52:08 zeus kernel: [<ffffffff81055aac>] ?
>> hrtimer_interrupt+0xec/0x230
>> Aug  1 05:52:08 zeus kernel: [<ffffffff81006caa>] ?
>> xen_timer_interrupt+0x2a/0x1a0
>> Aug  1 05:52:08 zeus kernel: [<ffffffff810422b3>] ? cascade+0x73/0x90
>> Aug  1 05:52:08 zeus kernel: [<ffffffff81090fea>] ?
>> handle_irq_event_percpu+0x3a/0x150
>> Aug  1 05:52:08 zeus kernel: [<ffffffff8109403f>] ?
>> handle_percpu_irq+0x3f/0x70
>> Aug  1 05:52:08 zeus kernel: [<ffffffff8103d23d>] ?
>> __do_softirq+0xbd/0x130
>> Aug  1 05:52:08 zeus kernel: [<ffffffff811f35f4>] ?
>> __xen_evtchn_do_upcall+0x194/0x240
>> Aug  1 05:52:08 zeus kernel: [<ffffffff811f55f2>] ?
>> xen_evtchn_do_upcall+0x22/0x40
>> Aug  1 05:52:08 zeus kernel: [<ffffffff813244ee>] ?
>> xen_do_hypervisor_callback+0x1e/0x30
>> Aug  1 05:52:08 zeus kernel: <EOI>  [<ffffffff810013aa>] ?
>> hypercall_page+0x3aa/0x1000
>> Aug  1 05:52:08 zeus kernel: [<ffffffff810013aa>] ?
>> hypercall_page+0x3aa/0x1000
>> Aug  1 05:52:08 zeus kernel: [<ffffffff81006afc>] ?
>> xen_safe_halt+0xc/0x20
>> Aug  1 05:52:08 zeus kernel: [<ffffffff81012453>] ?
>> default_idle+0x23/0x40
>> Aug  1 05:52:08 zeus kernel: [<ffffffff81012e86>] ?
>> cpu_idle+0xa6/0xc0
>>
>>
>> I am using an HP DL585 G2 running openSUSE 12.1.
>
> Yet the above trace appears to be from a pv-ops kernel, which
> 12.1 doesn't include (nor does it ship a 3.4-based kernel in the first
> place). Please provide complete, consistent information.
>
> Jan



------=_20120801183353_31106
Content-Type: text/plain; name="config-3.4.7"
Content-Disposition: attachment; filename="config-3.4.7"

#
# Automatically generated file; DO NOT EDIT.
# Linux/x86_64 3.4.7 Kernel Configuration
#
CONFIG_64BIT=y
# CONFIG_X86_32 is not set
CONFIG_X86_64=y
CONFIG_X86=y
CONFIG_INSTRUCTION_DECODER=y
CONFIG_OUTPUT_FORMAT="elf64-x86-64"
CONFIG_ARCH_DEFCONFIG="arch/x86/configs/x86_64_defconfig"
CONFIG_GENERIC_CMOS_UPDATE=y
CONFIG_CLOCKSOURCE_WATCHDOG=y
CONFIG_GENERIC_CLOCKEVENTS=y
CONFIG_ARCH_CLOCKSOURCE_DATA=y
CONFIG_GENERIC_CLOCKEVENTS_BROADCAST=y
CONFIG_LOCKDEP_SUPPORT=y
CONFIG_STACKTRACE_SUPPORT=y
CONFIG_HAVE_LATENCYTOP_SUPPORT=y
CONFIG_MMU=y
CONFIG_NEED_DMA_MAP_STATE=y
CONFIG_NEED_SG_DMA_LENGTH=y
CONFIG_GENERIC_ISA_DMA=y
CONFIG_GENERIC_BUG=y
CONFIG_GENERIC_BUG_RELATIVE_POINTERS=y
CONFIG_GENERIC_HWEIGHT=y
CONFIG_ARCH_MAY_HAVE_PC_FDC=y
# CONFIG_RWSEM_GENERIC_SPINLOCK is not set
CONFIG_RWSEM_XCHGADD_ALGORITHM=y
CONFIG_ARCH_HAS_CPU_IDLE_WAIT=y
CONFIG_GENERIC_CALIBRATE_DELAY=y
CONFIG_GENERIC_TIME_VSYSCALL=y
CONFIG_ARCH_HAS_CPU_RELAX=y
CONFIG_ARCH_HAS_DEFAULT_IDLE=y
CONFIG_ARCH_HAS_CACHE_LINE_SIZE=y
CONFIG_ARCH_HAS_CPU_AUTOPROBE=y
CONFIG_HAVE_SETUP_PER_CPU_AREA=y
CONFIG_NEED_PER_CPU_EMBED_FIRST_CHUNK=y
CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK=y
CONFIG_ARCH_HIBERNATION_POSSIBLE=y
CONFIG_ARCH_SUSPEND_POSSIBLE=y
CONFIG_ZONE_DMA32=y
CONFIG_AUDIT_ARCH=y
CONFIG_ARCH_SUPPORTS_OPTIMIZED_INLINING=y
CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC=y
CONFIG_X86_64_SMP=y
CONFIG_X86_HT=y
CONFIG_ARCH_HWEIGHT_CFLAGS="-fcall-saved-rdi -fcall-saved-rsi -fcall-saved-rdx -fcall-saved-rcx -fcall-saved-r8 -fcall-saved-r9 -fcall-saved-r10 -fcall-saved-r11"
# CONFIG_KTIME_SCALAR is not set
CONFIG_ARCH_CPU_PROBE_RELEASE=y
CONFIG_DEFCONFIG_LIST="/lib/modules/$UNAME_RELEASE/.config"
CONFIG_HAVE_IRQ_WORK=y
CONFIG_IRQ_WORK=y

#
# General setup
#
CONFIG_EXPERIMENTAL=y
CONFIG_INIT_ENV_ARG_LIMIT=32
CONFIG_CROSS_COMPILE=""
CONFIG_LOCALVERSION="-4-xen"
# CONFIG_LOCALVERSION_AUTO is not set
CONFIG_HAVE_KERNEL_GZIP=y
CONFIG_HAVE_KERNEL_BZIP2=y
CONFIG_HAVE_KERNEL_LZMA=y
CONFIG_HAVE_KERNEL_XZ=y
CONFIG_HAVE_KERNEL_LZO=y
CONFIG_KERNEL_GZIP=y
# CONFIG_KERNEL_BZIP2 is not set
# CONFIG_KERNEL_LZMA is not set
# CONFIG_KERNEL_XZ is not set
# CONFIG_KERNEL_LZO is not set
CONFIG_DEFAULT_HOSTNAME="(none)"
CONFIG_SWAP=y
CONFIG_SYSVIPC=y
CONFIG_SYSVIPC_SYSCTL=y
CONFIG_POSIX_MQUEUE=y
CONFIG_POSIX_MQUEUE_SYSCTL=y
CONFIG_BSD_PROCESS_ACCT=y
CONFIG_BSD_PROCESS_ACCT_V3=y
# CONFIG_FHANDLE is not set
CONFIG_TASKSTATS=y
CONFIG_TASK_DELAY_ACCT=y
CONFIG_TASK_XACCT=y
CONFIG_TASK_IO_ACCOUNTING=y
CONFIG_AUDIT=y
CONFIG_AUDITSYSCALL=y
CONFIG_AUDIT_WATCH=y
CONFIG_AUDIT_TREE=y
# CONFIG_AUDIT_LOGINUID_IMMUTABLE is not set
CONFIG_HAVE_GENERIC_HARDIRQS=y

#
# IRQ subsystem
#
CONFIG_GENERIC_HARDIRQS=y
CONFIG_GENERIC_IRQ_PROBE=y
CONFIG_GENERIC_IRQ_SHOW=y
CONFIG_GENERIC_PENDING_IRQ=y
CONFIG_IRQ_FORCED_THREADING=y
CONFIG_SPARSE_IRQ=y

#
# RCU Subsystem
#
CONFIG_TREE_RCU=y
# CONFIG_PREEMPT_RCU is not set
CONFIG_RCU_FANOUT=64
# CONFIG_RCU_FANOUT_EXACT is not set
# CONFIG_TREE_RCU_TRACE is not set
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_LOG_BUF_SHIFT=18
CONFIG_HAVE_UNSTABLE_SCHED_CLOCK=y
CONFIG_CGROUPS=y
# CONFIG_CGROUP_DEBUG is not set
CONFIG_CGROUP_FREEZER=y
CONFIG_CGROUP_DEVICE=y
CONFIG_CPUSETS=y
# CONFIG_PROC_PID_CPUSET is not set
CONFIG_CGROUP_CPUACCT=y
CONFIG_RESOURCE_COUNTERS=y
# CONFIG_CGROUP_MEM_RES_CTLR is not set
# CONFIG_CGROUP_PERF is not set
CONFIG_CGROUP_SCHED=y
CONFIG_FAIR_GROUP_SCHED=y
CONFIG_CFS_BANDWIDTH=y
CONFIG_RT_GROUP_SCHED=y
CONFIG_BLK_CGROUP=y
# CONFIG_DEBUG_BLK_CGROUP is not set
# CONFIG_CHECKPOINT_RESTORE is not set
CONFIG_NAMESPACES=y
CONFIG_UTS_NS=y
CONFIG_IPC_NS=y
# CONFIG_USER_NS is not set
CONFIG_PID_NS=y
CONFIG_NET_NS=y
# CONFIG_SCHED_AUTOGROUP is not set
# CONFIG_SYSFS_DEPRECATED is not set
# CONFIG_RELAY is not set
CONFIG_BLK_DEV_INITRD=y
CONFIG_INITRAMFS_SOURCE=""
CONFIG_RD_GZIP=y
CONFIG_RD_BZIP2=y
CONFIG_RD_LZMA=y
CONFIG_RD_XZ=y
CONFIG_RD_LZO=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
CONFIG_SYSCTL=y
CONFIG_ANON_INODES=y
# CONFIG_EXPERT is not set
CONFIG_UID16=y
# CONFIG_SYSCTL_SYSCALL is not set
CONFIG_KALLSYMS=y
CONFIG_HOTPLUG=y
CONFIG_PRINTK=y
CONFIG_BUG=y
CONFIG_ELF_CORE=y
CONFIG_PCSPKR_PLATFORM=y
CONFIG_HAVE_PCSPKR_PLATFORM=y
CONFIG_BASE_FULL=y
CONFIG_FUTEX=y
CONFIG_EPOLL=y
CONFIG_SIGNALFD=y
CONFIG_TIMERFD=y
CONFIG_EVENTFD=y
CONFIG_SHMEM=y
CONFIG_AIO=y
# CONFIG_EMBEDDED is not set
CONFIG_HAVE_PERF_EVENTS=y

#
# Kernel Performance Events And Counters
#
CONFIG_PERF_EVENTS=y
# CONFIG_PERF_COUNTERS is not set
CONFIG_VM_EVENT_COUNTERS=y
CONFIG_PCI_QUIRKS=y
CONFIG_COMPAT_BRK=y
CONFIG_SLAB=y
# CONFIG_SLUB is not set
# CONFIG_PROFILING is not set
CONFIG_HAVE_OPROFILE=y
CONFIG_OPROFILE_NMI_TIMER=y
# CONFIG_KPROBES is not set
# CONFIG_JUMP_LABEL is not set
CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS=y
CONFIG_HAVE_IOREMAP_PROT=y
CONFIG_HAVE_KPROBES=y
CONFIG_HAVE_KRETPROBES=y
CONFIG_HAVE_OPTPROBES=y
CONFIG_HAVE_ARCH_TRACEHOOK=y
CONFIG_HAVE_DMA_ATTRS=y
CONFIG_USE_GENERIC_SMP_HELPERS=y
CONFIG_HAVE_REGS_AND_STACK_ACCESS_API=y
CONFIG_HAVE_DMA_API_DEBUG=y
CONFIG_HAVE_HW_BREAKPOINT=y
CONFIG_HAVE_MIXED_BREAKPOINTS_REGS=y
CONFIG_HAVE_USER_RETURN_NOTIFIER=y
CONFIG_HAVE_PERF_EVENTS_NMI=y
CONFIG_HAVE_ARCH_JUMP_LABEL=y
CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG=y
CONFIG_HAVE_CMPXCHG_LOCAL=y
CONFIG_HAVE_CMPXCHG_DOUBLE=y
CONFIG_ARCH_WANT_OLD_COMPAT_IPC=y

#
# GCOV-based kernel profiling
#
# CONFIG_HAVE_GENERIC_DMA_COHERENT is not set
CONFIG_SLABINFO=y
CONFIG_RT_MUTEXES=y
CONFIG_BASE_SMALL=0
CONFIG_MODULES=y
CONFIG_MODULE_FORCE_LOAD=y
CONFIG_MODULE_UNLOAD=y
# CONFIG_MODULE_FORCE_UNLOAD is not set
# CONFIG_MODVERSIONS is not set
# CONFIG_MODULE_SRCVERSION_ALL is not set
CONFIG_STOP_MACHINE=y
CONFIG_BLOCK=y
CONFIG_BLK_DEV_BSG=y
CONFIG_BLK_DEV_BSGLIB=y
# CONFIG_BLK_DEV_INTEGRITY is not set
# CONFIG_BLK_DEV_THROTTLING is not set

#
# Partition Types
#
# CONFIG_PARTITION_ADVANCED is not set
CONFIG_MSDOS_PARTITION=y
CONFIG_BLOCK_COMPAT=y

#
# IO Schedulers
#
CONFIG_IOSCHED_NOOP=y
# CONFIG_IOSCHED_DEADLINE is not set
CONFIG_IOSCHED_CFQ=y
CONFIG_CFQ_GROUP_IOSCHED=y
CONFIG_DEFAULT_CFQ=y
# CONFIG_DEFAULT_NOOP is not set
CONFIG_DEFAULT_IOSCHED="cfq"
# CONFIG_INLINE_SPIN_TRYLOCK is not set
# CONFIG_INLINE_SPIN_TRYLOCK_BH is not set
# CONFIG_INLINE_SPIN_LOCK is not set
# CONFIG_INLINE_SPIN_LOCK_BH is not set
# CONFIG_INLINE_SPIN_LOCK_IRQ is not set
# CONFIG_INLINE_SPIN_LOCK_IRQSAVE is not set
# CONFIG_INLINE_SPIN_UNLOCK_BH is not set
CONFIG_INLINE_SPIN_UNLOCK_IRQ=y
# CONFIG_INLINE_SPIN_UNLOCK_IRQRESTORE is not set
# CONFIG_INLINE_READ_TRYLOCK is not set
# CONFIG_INLINE_READ_LOCK is not set
# CONFIG_INLINE_READ_LOCK_BH is not set
# CONFIG_INLINE_READ_LOCK_IRQ is not set
# CONFIG_INLINE_READ_LOCK_IRQSAVE is not set
CONFIG_INLINE_READ_UNLOCK=y
# CONFIG_INLINE_READ_UNLOCK_BH is not set
CONFIG_INLINE_READ_UNLOCK_IRQ=y
# CONFIG_INLINE_READ_UNLOCK_IRQRESTORE is not set
# CONFIG_INLINE_WRITE_TRYLOCK is not set
# CONFIG_INLINE_WRITE_LOCK is not set
# CONFIG_INLINE_WRITE_LOCK_BH is not set
# CONFIG_INLINE_WRITE_LOCK_IRQ is not set
# CONFIG_INLINE_WRITE_LOCK_IRQSAVE is not set
CONFIG_INLINE_WRITE_UNLOCK=y
# CONFIG_INLINE_WRITE_UNLOCK_BH is not set
CONFIG_INLINE_WRITE_UNLOCK_IRQ=y
# CONFIG_INLINE_WRITE_UNLOCK_IRQRESTORE is not set
CONFIG_MUTEX_SPIN_ON_OWNER=y
CONFIG_FREEZER=y

#
# Processor type and features
#
CONFIG_ZONE_DMA=y
CONFIG_TICK_ONESHOT=y
# CONFIG_NO_HZ is not set
CONFIG_HIGH_RES_TIMERS=y
CONFIG_GENERIC_CLOCKEVENTS_BUILD=y
CONFIG_GENERIC_CLOCKEVENTS_MIN_ADJUST=y
CONFIG_SMP=y
CONFIG_X86_MPPARSE=y
# CONFIG_X86_EXTENDED_PLATFORM is not set
CONFIG_X86_SUPPORTS_MEMORY_FAILURE=y
# CONFIG_SCHED_OMIT_FRAME_POINTER is not set
CONFIG_PARAVIRT_GUEST=y
# CONFIG_PARAVIRT_TIME_ACCOUNTING is not set
CONFIG_XEN=y
CONFIG_XEN_DOM0=y
CONFIG_XEN_PRIVILEGED_GUEST=y
CONFIG_XEN_PVHVM=y
CONFIG_XEN_MAX_DOMAIN_MEMORY=500
CONFIG_XEN_SAVE_RESTORE=y
# CONFIG_KVM_CLOCK is not set
# CONFIG_KVM_GUEST is not set
CONFIG_PARAVIRT=y
CONFIG_PARAVIRT_SPINLOCKS=y
CONFIG_PARAVIRT_CLOCK=y
CONFIG_NO_BOOTMEM=y
# CONFIG_MEMTEST is not set
CONFIG_MK8=y
# CONFIG_MPSC is not set
# CONFIG_MCORE2 is not set
# CONFIG_MATOM is not set
# CONFIG_GENERIC_CPU is not set
CONFIG_X86_INTERNODE_CACHE_SHIFT=6
CONFIG_X86_CMPXCHG=y
CONFIG_X86_L1_CACHE_SHIFT=6
CONFIG_X86_XADD=y
CONFIG_X86_WP_WORKS_OK=y
CONFIG_X86_INTEL_USERCOPY=y
CONFIG_X86_USE_PPRO_CHECKSUM=y
CONFIG_X86_TSC=y
CONFIG_X86_CMPXCHG64=y
CONFIG_X86_CMOV=y
CONFIG_X86_MINIMUM_CPU_FAMILY=64
CONFIG_X86_DEBUGCTLMSR=y
CONFIG_CPU_SUP_INTEL=y
CONFIG_CPU_SUP_AMD=y
CONFIG_CPU_SUP_CENTAUR=y
CONFIG_HPET_TIMER=y
CONFIG_HPET_EMULATE_RTC=y
CONFIG_DMI=y
CONFIG_GART_IOMMU=y
# CONFIG_CALGARY_IOMMU is not set
CONFIG_SWIOTLB=y
CONFIG_IOMMU_HELPER=y
CONFIG_NR_CPUS=8
# CONFIG_SCHED_SMT is not set
CONFIG_SCHED_MC=y
# CONFIG_IRQ_TIME_ACCOUNTING is not set
CONFIG_PREEMPT_NONE=y
# CONFIG_PREEMPT_VOLUNTARY is not set
# CONFIG_PREEMPT is not set
CONFIG_X86_LOCAL_APIC=y
CONFIG_X86_IO_APIC=y
CONFIG_X86_REROUTE_FOR_BROKEN_BOOT_IRQS=y
CONFIG_X86_MCE=y
# CONFIG_X86_MCE_INTEL is not set
CONFIG_X86_MCE_AMD=y
CONFIG_X86_MCE_THRESHOLD=y
# CONFIG_X86_MCE_INJECT is not set
# CONFIG_I8K is not set
CONFIG_MICROCODE=m
# CONFIG_MICROCODE_INTEL is not set
CONFIG_MICROCODE_AMD=y
CONFIG_MICROCODE_OLD_INTERFACE=y
CONFIG_X86_MSR=m
CONFIG_X86_CPUID=m
CONFIG_ARCH_PHYS_ADDR_T_64BIT=y
CONFIG_ARCH_DMA_ADDR_T_64BIT=y
CONFIG_DIRECT_GBPAGES=y
CONFIG_NUMA=y
CONFIG_AMD_NUMA=y
CONFIG_X86_64_ACPI_NUMA=y
CONFIG_NODES_SPAN_OTHER_NODES=y
# CONFIG_NUMA_EMU is not set
CONFIG_NODES_SHIFT=8
CONFIG_ARCH_SPARSEMEM_ENABLE=y
CONFIG_ARCH_SPARSEMEM_DEFAULT=y
CONFIG_ARCH_SELECT_MEMORY_MODEL=y
CONFIG_ARCH_PROC_KCORE_TEXT=y
CONFIG_ILLEGAL_POINTER_VALUE=0xdead000000000000
CONFIG_SELECT_MEMORY_MODEL=y
CONFIG_SPARSEMEM_MANUAL=y
CONFIG_SPARSEMEM=y
CONFIG_NEED_MULTIPLE_NODES=y
CONFIG_HAVE_MEMORY_PRESENT=y
CONFIG_SPARSEMEM_EXTREME=y
CONFIG_SPARSEMEM_VMEMMAP_ENABLE=y
CONFIG_SPARSEMEM_ALLOC_MEM_MAP_TOGETHER=y
CONFIG_SPARSEMEM_VMEMMAP=y
CONFIG_HAVE_MEMBLOCK=y
CONFIG_HAVE_MEMBLOCK_NODE_MAP=y
CONFIG_ARCH_DISCARD_MEMBLOCK=y
# CONFIG_MEMORY_HOTPLUG is not set
CONFIG_PAGEFLAGS_EXTENDED=y
CONFIG_SPLIT_PTLOCK_CPUS=4
CONFIG_COMPACTION=y
CONFIG_MIGRATION=y
CONFIG_PHYS_ADDR_T_64BIT=y
CONFIG_ZONE_DMA_FLAG=1
CONFIG_BOUNCE=y
CONFIG_VIRT_TO_BUS=y
CONFIG_MMU_NOTIFIER=y
# CONFIG_KSM is not set
CONFIG_DEFAULT_MMAP_MIN_ADDR=65536
CONFIG_ARCH_SUPPORTS_MEMORY_FAILURE=y
CONFIG_MEMORY_FAILURE=y
CONFIG_TRANSPARENT_HUGEPAGE=y
# CONFIG_TRANSPARENT_HUGEPAGE_ALWAYS is not set
CONFIG_TRANSPARENT_HUGEPAGE_MADVISE=y
CONFIG_CLEANCACHE=y
# CONFIG_X86_CHECK_BIOS_CORRUPTION is not set
CONFIG_X86_RESERVE_LOW=64
CONFIG_MTRR=y
CONFIG_MTRR_SANITIZER=y
CONFIG_MTRR_SANITIZER_ENABLE_DEFAULT=0
CONFIG_MTRR_SANITIZER_SPARE_REG_NR_DEFAULT=1
CONFIG_X86_PAT=y
CONFIG_ARCH_USES_PG_UNCACHED=y
CONFIG_ARCH_RANDOM=y
# CONFIG_EFI is not set
# CONFIG_SECCOMP is not set
# CONFIG_CC_STACKPROTECTOR is not set
# CONFIG_HZ_100 is not set
CONFIG_HZ_250=y
# CONFIG_HZ_300 is not set
# CONFIG_HZ_1000 is not set
CONFIG_HZ=250
CONFIG_SCHED_HRTICK=y
# CONFIG_KEXEC is not set
# CONFIG_CRASH_DUMP is not set
CONFIG_PHYSICAL_START=0x1000000
# CONFIG_RELOCATABLE is not set
CONFIG_PHYSICAL_ALIGN=0x1000000
CONFIG_HOTPLUG_CPU=y
# CONFIG_COMPAT_VDSO is not set
# CONFIG_CMDLINE_BOOL is not set
CONFIG_ARCH_ENABLE_MEMORY_HOTPLUG=y
CONFIG_USE_PERCPU_NUMA_NODE_ID=y

#
# Power management and ACPI options
#
# CONFIG_SUSPEND is not set
CONFIG_HIBERNATE_CALLBACKS=y
# CONFIG_HIBERNATION is not set
CONFIG_PM_SLEEP=y
CONFIG_PM_SLEEP_SMP=y
CONFIG_PM_RUNTIME=y
CONFIG_PM=y
# CONFIG_PM_DEBUG is not set
CONFIG_ACPI=y
CONFIG_ACPI_PROCFS=y
CONFIG_ACPI_PROCFS_POWER=y
# CONFIG_ACPI_EC_DEBUGFS is not set
CONFIG_ACPI_PROC_EVENT=y
# CONFIG_ACPI_AC is not set
# CONFIG_ACPI_BATTERY is not set
CONFIG_ACPI_BUTTON=m
CONFIG_ACPI_FAN=m
# CONFIG_ACPI_DOCK is not set
CONFIG_ACPI_PROCESSOR=m
CONFIG_ACPI_IPMI=m
CONFIG_ACPI_HOTPLUG_CPU=y
CONFIG_ACPI_PROCESSOR_AGGREGATOR=m
CONFIG_ACPI_THERMAL=m
CONFIG_ACPI_NUMA=y
CONFIG_ACPI_CUSTOM_DSDT_FILE=""
# CONFIG_ACPI_CUSTOM_DSDT is not set
CONFIG_ACPI_BLACKLIST_YEAR=0
# CONFIG_ACPI_DEBUG is not set
# CONFIG_ACPI_PCI_SLOT is not set
CONFIG_X86_PM_TIMER=y
CONFIG_ACPI_CONTAINER=m
# CONFIG_ACPI_SBS is not set
CONFIG_ACPI_HED=y
# CONFIG_ACPI_BGRT is not set
CONFIG_ACPI_APEI=y
CONFIG_ACPI_APEI_GHES=y
CONFIG_ACPI_APEI_PCIEAER=y
# CONFIG_ACPI_APEI_MEMORY_FAILURE is not set
# CONFIG_ACPI_APEI_ERST_DEBUG is not set
# CONFIG_SFI is not set

#
# CPU Frequency scaling
#
CONFIG_CPU_FREQ=y
CONFIG_CPU_FREQ_TABLE=y
CONFIG_CPU_FREQ_STAT=y
# CONFIG_CPU_FREQ_STAT_DETAILS is not set
CONFIG_CPU_FREQ_DEFAULT_GOV_PERFORMANCE=y
# CONFIG_CPU_FREQ_DEFAULT_GOV_USERSPACE is not set
# CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND is not set
# CONFIG_CPU_FREQ_DEFAULT_GOV_CONSERVATIVE is not set
CONFIG_CPU_FREQ_GOV_PERFORMANCE=y
# CONFIG_CPU_FREQ_GOV_POWERSAVE is not set
# CONFIG_CPU_FREQ_GOV_USERSPACE is not set
# CONFIG_CPU_FREQ_GOV_ONDEMAND is not set
# CONFIG_CPU_FREQ_GOV_CONSERVATIVE is not set

#
# x86 CPU frequency scaling drivers
#
# CONFIG_X86_PCC_CPUFREQ is not set
# CONFIG_X86_ACPI_CPUFREQ is not set
CONFIG_X86_POWERNOW_K8=m
# CONFIG_X86_SPEEDSTEP_CENTRINO is not set
# CONFIG_X86_P4_CLOCKMOD is not set

#
# shared options
#
# CONFIG_X86_SPEEDSTEP_LIB is not set
CONFIG_CPU_IDLE=y
CONFIG_CPU_IDLE_GOV_LADDER=y
# CONFIG_INTEL_IDLE is not set

#
# Memory power savings
#
# CONFIG_I7300_IDLE is not set

#
# Bus options (PCI etc.)
#
CONFIG_PCI=y
CONFIG_PCI_DIRECT=y
CONFIG_PCI_MMCONFIG=y
CONFIG_PCI_XEN=y
CONFIG_PCI_DOMAINS=y
# CONFIG_PCI_CNB20LE_QUIRK is not set
CONFIG_PCIEPORTBUS=y
CONFIG_PCIEAER=y
# CONFIG_PCIE_ECRC is not set
# CONFIG_PCIEAER_INJECT is not set
CONFIG_PCIEASPM=y
# CONFIG_PCIEASPM_DEBUG is not set
CONFIG_PCIEASPM_DEFAULT=y
# CONFIG_PCIEASPM_POWERSAVE is not set
# CONFIG_PCIEASPM_PERFORMANCE is not set
CONFIG_PCIE_PME=y
CONFIG_ARCH_SUPPORTS_MSI=y
CONFIG_PCI_MSI=y
# CONFIG_PCI_REALLOC_ENABLE_AUTO is not set
CONFIG_PCI_STUB=m
# CONFIG_XEN_PCIDEV_FRONTEND is not set
CONFIG_HT_IRQ=y
CONFIG_PCI_ATS=y
CONFIG_PCI_IOV=y
CONFIG_PCI_PRI=y
CONFIG_PCI_PASID=y
CONFIG_PCI_IOAPIC=y
CONFIG_PCI_LABEL=y
CONFIG_ISA_DMA_API=y
CONFIG_AMD_NB=y
# CONFIG_PCCARD is not set
# CONFIG_HOTPLUG_PCI is not set
# CONFIG_RAPIDIO is not set

#
# Executable file formats / Emulations
#
CONFIG_BINFMT_ELF=y
CONFIG_COMPAT_BINFMT_ELF=y
CONFIG_ARCH_BINFMT_ELF_RANDOMIZE_PIE=y
CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS=y
# CONFIG_HAVE_AOUT is not set
# CONFIG_BINFMT_MISC is not set
CONFIG_IA32_EMULATION=y
# CONFIG_IA32_AOUT is not set
# CONFIG_X86_X32 is not set
CONFIG_COMPAT=y
CONFIG_COMPAT_FOR_U64_ALIGNMENT=y
CONFIG_SYSVIPC_COMPAT=y
CONFIG_HAVE_TEXT_POKE_SMP=y
CONFIG_NET=y

#
# Networking options
#
CONFIG_PACKET=m
CONFIG_UNIX=y
CONFIG_UNIX_DIAG=m
CONFIG_XFRM=y
CONFIG_XFRM_USER=m
# CONFIG_XFRM_SUB_POLICY is not set
# CONFIG_XFRM_MIGRATE is not set
# CONFIG_XFRM_STATISTICS is not set
CONFIG_NET_KEY=m
# CONFIG_NET_KEY_MIGRATE is not set
CONFIG_INET=y
# CONFIG_IP_MULTICAST is not set
CONFIG_IP_ADVANCED_ROUTER=y
# CONFIG_IP_FIB_TRIE_STATS is not set
# CONFIG_IP_MULTIPLE_TABLES is not set
# CONFIG_IP_ROUTE_MULTIPATH is not set
# CONFIG_IP_ROUTE_VERBOSE is not set
# CONFIG_IP_PNP is not set
# CONFIG_NET_IPIP is not set
# CONFIG_NET_IPGRE_DEMUX is not set
# CONFIG_ARPD is not set
CONFIG_SYN_COOKIES=y
# CONFIG_INET_AH is not set
# CONFIG_INET_ESP is not set
# CONFIG_INET_IPCOMP is not set
# CONFIG_INET_XFRM_TUNNEL is not set
# CONFIG_INET_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set
CONFIG_INET_LRO=y
CONFIG_INET_DIAG=m
CONFIG_INET_TCP_DIAG=m
# CONFIG_INET_UDP_DIAG is not set
# CONFIG_TCP_CONG_ADVANCED is not set
CONFIG_TCP_CONG_CUBIC=y
CONFIG_DEFAULT_TCP_CONG="cubic"
# CONFIG_TCP_MD5SIG is not set
# CONFIG_IPV6 is not set
# CONFIG_NETWORK_SECMARK is not set
# CONFIG_NETWORK_PHY_TIMESTAMPING is not set
# CONFIG_NETFILTER is not set
# CONFIG_IP_DCCP is not set
# CONFIG_IP_SCTP is not set
# CONFIG_RDS is not set
# CONFIG_TIPC is not set
# CONFIG_ATM is not set
# CONFIG_L2TP is not set
CONFIG_STP=m
CONFIG_BRIDGE=m
CONFIG_BRIDGE_IGMP_SNOOPING=y
# CONFIG_NET_DSA is not set
# CONFIG_VLAN_8021Q is not set
# CONFIG_DECNET is not set
CONFIG_LLC=m
# CONFIG_LLC2 is not set
# CONFIG_IPX is not set
# CONFIG_ATALK is not set
# CONFIG_X25 is not set
# CONFIG_LAPB is not set
# CONFIG_ECONET is not set
# CONFIG_WAN_ROUTER is not set
# CONFIG_PHONET is not set
# CONFIG_IEEE802154 is not set
# CONFIG_NET_SCHED is not set
# CONFIG_DCB is not set
# CONFIG_BATMAN_ADV is not set
# CONFIG_OPENVSWITCH is not set
CONFIG_RPS=y
CONFIG_RFS_ACCEL=y
CONFIG_XPS=y
CONFIG_NETPRIO_CGROUP=m
CONFIG_BQL=y
CONFIG_HAVE_BPF_JIT=y
# CONFIG_BPF_JIT is not set

#
# Network testing
#
# CONFIG_NET_PKTGEN is not set
# CONFIG_HAMRADIO is not set
# CONFIG_CAN is not set
# CONFIG_IRDA is not set
# CONFIG_BT is not set
# CONFIG_AF_RXRPC is not set
# CONFIG_WIRELESS is not set
# CONFIG_WIMAX is not set
# CONFIG_RFKILL is not set
# CONFIG_NET_9P is not set
# CONFIG_CAIF is not set
# CONFIG_CEPH_LIB is not set
# CONFIG_NFC is not set

#
# Device Drivers
#

#
# Generic Driver Options
#
CONFIG_UEVENT_HELPER_PATH=""
CONFIG_DEVTMPFS=y
CONFIG_DEVTMPFS_MOUNT=y
# CONFIG_STANDALONE is not set
CONFIG_PREVENT_FIRMWARE_BUILD=y
CONFIG_FW_LOADER=y
# CONFIG_FIRMWARE_IN_KERNEL is not set
CONFIG_EXTRA_FIRMWARE=""
CONFIG_SYS_HYPERVISOR=y
# CONFIG_GENERIC_CPU_DEVICES is not set
# CONFIG_DMA_SHARED_BUFFER is not set
# CONFIG_CONNECTOR is not set
# CONFIG_MTD is not set
# CONFIG_PARPORT is not set
CONFIG_PNP=y
# CONFIG_PNP_DEBUG_MESSAGES is not set

#
# Protocols
#
CONFIG_PNPACPI=y
CONFIG_BLK_DEV=y
# CONFIG_BLK_DEV_FD is not set
# CONFIG_BLK_DEV_PCIESSD_MTIP32XX is not set
CONFIG_BLK_CPQ_DA=m
CONFIG_BLK_CPQ_CISS_DA=m
# CONFIG_CISS_SCSI_TAPE is not set
# CONFIG_BLK_DEV_DAC960 is not set
# CONFIG_BLK_DEV_UMEM is not set
# CONFIG_BLK_DEV_COW_COMMON is not set
CONFIG_BLK_DEV_LOOP=m
CONFIG_BLK_DEV_LOOP_MIN_COUNT=8
# CONFIG_BLK_DEV_CRYPTOLOOP is not set

#
# DRBD disabled because PROC_FS, INET or CONNECTOR not selected
#
# CONFIG_BLK_DEV_NBD is not set
# CONFIG_BLK_DEV_NVME is not set
# CONFIG_BLK_DEV_SX8 is not set
# CONFIG_BLK_DEV_UB is not set
# CONFIG_BLK_DEV_RAM is not set
# CONFIG_CDROM_PKTCDVD is not set
# CONFIG_ATA_OVER_ETH is not set
# CONFIG_XEN_BLKDEV_FRONTEND is not set
CONFIG_XEN_BLKDEV_BACKEND=m
# CONFIG_BLK_DEV_HD is not set
# CONFIG_BLK_DEV_RBD is not set

#
# Misc devices
#
# CONFIG_SENSORS_LIS3LV02D is not set
# CONFIG_IBM_ASM is not set
# CONFIG_PHANTOM is not set
# CONFIG_INTEL_MID_PTI is not set
# CONFIG_SGI_IOC4 is not set
# CONFIG_TIFM_CORE is not set
# CONFIG_ENCLOSURE_SERVICES is not set
CONFIG_HP_ILO=m
# CONFIG_VMWARE_BALLOON is not set
# CONFIG_PCH_PHUB is not set
# CONFIG_C2PORT is not set

#
# EEPROM support
#
# CONFIG_EEPROM_93CX6 is not set
# CONFIG_CB710_CORE is not set

#
# Texas Instruments shared transport line discipline
#

#
# Altera FPGA firmware download module
#
CONFIG_HAVE_IDE=y
# CONFIG_IDE is not set

#
# SCSI device support
#
CONFIG_SCSI_MOD=y
# CONFIG_RAID_ATTRS is not set
CONFIG_SCSI=y
CONFIG_SCSI_DMA=y
CONFIG_SCSI_TGT=m
# CONFIG_SCSI_NETLINK is not set
CONFIG_SCSI_PROC_FS=y

#
# SCSI support type (disk, tape, CD-ROM)
#
CONFIG_BLK_DEV_SD=y
CONFIG_CHR_DEV_ST=m
# CONFIG_CHR_DEV_OSST is not set
CONFIG_BLK_DEV_SR=m
# CONFIG_BLK_DEV_SR_VENDOR is not set
CONFIG_CHR_DEV_SG=m
# CONFIG_CHR_DEV_SCH is not set
CONFIG_SCSI_MULTI_LUN=y
CONFIG_SCSI_CONSTANTS=y
CONFIG_SCSI_LOGGING=y
# CONFIG_SCSI_SCAN_ASYNC is not set
CONFIG_SCSI_WAIT_SCAN=m

#
# SCSI Transports
#
CONFIG_SCSI_SPI_ATTRS=m
# CONFIG_SCSI_FC_ATTRS is not set
CONFIG_SCSI_ISCSI_ATTRS=m
CONFIG_SCSI_SAS_ATTRS=m
# CONFIG_SCSI_SAS_LIBSAS is not set
# CONFIG_SCSI_SRP_ATTRS is not set
CONFIG_SCSI_LOWLEVEL=y
CONFIG_ISCSI_TCP=m
CONFIG_ISCSI_BOOT_SYSFS=m
# CONFIG_SCSI_CXGB3_ISCSI is not set
# CONFIG_SCSI_CXGB4_ISCSI is not set
# CONFIG_SCSI_BNX2_ISCSI is not set
# CONFIG_SCSI_BNX2X_FCOE is not set
# CONFIG_BE2ISCSI is not set
# CONFIG_BLK_DEV_3W_XXXX_RAID is not set
# CONFIG_SCSI_HPSA is not set
# CONFIG_SCSI_3W_9XXX is not set
# CONFIG_SCSI_3W_SAS is not set
# CONFIG_SCSI_ACARD is not set
# CONFIG_SCSI_AACRAID is not set
# CONFIG_SCSI_AIC7XXX is not set
# CONFIG_SCSI_AIC7XXX_OLD is not set
# CONFIG_SCSI_AIC79XX is not set
# CONFIG_SCSI_AIC94XX is not set
# CONFIG_SCSI_MVSAS is not set
# CONFIG_SCSI_MVUMI is not set
# CONFIG_SCSI_DPT_I2O is not set
# CONFIG_SCSI_ADVANSYS is not set
# CONFIG_SCSI_ARCMSR is not set
# CONFIG_MEGARAID_NEWGEN is not set
# CONFIG_MEGARAID_LEGACY is not set
# CONFIG_MEGARAID_SAS is not set
# CONFIG_SCSI_MPT2SAS is not set
# CONFIG_SCSI_UFSHCD is not set
# CONFIG_SCSI_HPTIOP is not set
# CONFIG_SCSI_BUSLOGIC is not set
# CONFIG_VMWARE_PVSCSI is not set
# CONFIG_LIBFC is not set
# CONFIG_LIBFCOE is not set
# CONFIG_FCOE is not set
# CONFIG_FCOE_FNIC is not set
# CONFIG_SCSI_DMX3191D is not set
# CONFIG_SCSI_EATA is not set
# CONFIG_SCSI_FUTURE_DOMAIN is not set
# CONFIG_SCSI_GDTH is not set
# CONFIG_SCSI_ISCI is not set
# CONFIG_SCSI_IPS is not set
# CONFIG_SCSI_INITIO is not set
# CONFIG_SCSI_INIA100 is not set
# CONFIG_SCSI_STEX is not set
# CONFIG_SCSI_SYM53C8XX_2 is not set
# CONFIG_SCSI_IPR is not set
# CONFIG_SCSI_QLOGIC_1280 is not set
# CONFIG_SCSI_QLA_FC is not set
# CONFIG_SCSI_QLA_ISCSI is not set
# CONFIG_SCSI_LPFC is not set
# CONFIG_SCSI_DC395x is not set
# CONFIG_SCSI_DC390T is not set
# CONFIG_SCSI_DEBUG is not set
# CONFIG_SCSI_PMCRAID is not set
# CONFIG_SCSI_PM8001 is not set
# CONFIG_SCSI_SRP is not set
# CONFIG_SCSI_BFA_FC is not set
# CONFIG_XEN_SCSI_FRONTEND is not set
CONFIG_XEN_SCSI_BACKEND=m
# CONFIG_SCSI_DH is not set
# CONFIG_SCSI_OSD_INITIATOR is not set
CONFIG_ATA=y
# CONFIG_ATA_NONSTANDARD is not set
CONFIG_ATA_VERBOSE_ERROR=y
CONFIG_ATA_ACPI=y
# CONFIG_SATA_PMP is not set

#
# Controllers with non-SFF native interface
#
# CONFIG_SATA_AHCI is not set
# CONFIG_SATA_AHCI_PLATFORM is not set
# CONFIG_SATA_INIC162X is not set
# CONFIG_SATA_ACARD_AHCI is not set
# CONFIG_SATA_SIL24 is not set
CONFIG_ATA_SFF=y

#
# SFF controllers with custom DMA interface
#
# CONFIG_PDC_ADMA is not set
# CONFIG_SATA_QSTOR is not set
# CONFIG_SATA_SX4 is not set
CONFIG_ATA_BMDMA=y

#
# SATA SFF controllers with BMDMA
#
# CONFIG_ATA_PIIX is not set
# CONFIG_SATA_MV is not set
# CONFIG_SATA_NV is not set
# CONFIG_SATA_PROMISE is not set
# CONFIG_SATA_SIL is not set
# CONFIG_SATA_SIS is not set
# CONFIG_SATA_SVW is not set
# CONFIG_SATA_ULI is not set
# CONFIG_SATA_VIA is not set
# CONFIG_SATA_VITESSE is not set

#
# PATA SFF controllers with BMDMA
#
# CONFIG_PATA_ALI is not set
CONFIG_PATA_AMD=m
# CONFIG_PATA_ARTOP is not set
# CONFIG_PATA_ATIIXP is not set
# CONFIG_PATA_ATP867X is not set
# CONFIG_PATA_CMD64X is not set
# CONFIG_PATA_CS5520 is not set
# CONFIG_PATA_CS5530 is not set
# CONFIG_PATA_CS5536 is not set
# CONFIG_PATA_CYPRESS is not set
# CONFIG_PATA_EFAR is not set
# CONFIG_PATA_HPT366 is not set
# CONFIG_PATA_HPT37X is not set
# CONFIG_PATA_HPT3X2N is not set
# CONFIG_PATA_HPT3X3 is not set
# CONFIG_PATA_IT8213 is not set
# CONFIG_PATA_IT821X is not set
# CONFIG_PATA_JMICRON is not set
# CONFIG_PATA_MARVELL is not set
# CONFIG_PATA_NETCELL is not set
# CONFIG_PATA_NINJA32 is not set
# CONFIG_PATA_NS87415 is not set
# CONFIG_PATA_OLDPIIX is not set
# CONFIG_PATA_OPTIDMA is not set
# CONFIG_PATA_PDC2027X is not set
# CONFIG_PATA_PDC_OLD is not set
# CONFIG_PATA_RADISYS is not set
# CONFIG_PATA_RDC is not set
# CONFIG_PATA_SC1200 is not set
# CONFIG_PATA_SCH is not set
# CONFIG_PATA_SERVERWORKS is not set
# CONFIG_PATA_SIL680 is not set
# CONFIG_PATA_SIS is not set
# CONFIG_PATA_TOSHIBA is not set
# CONFIG_PATA_TRIFLEX is not set
# CONFIG_PATA_VIA is not set
# CONFIG_PATA_WINBOND is not set

#
# PIO-only SFF controllers
#
# CONFIG_PATA_CMD640_PCI is not set
# CONFIG_PATA_MPIIX is not set
# CONFIG_PATA_NS87410 is not set
# CONFIG_PATA_OPTI is not set
# CONFIG_PATA_RZ1000 is not set

#
# Generic fallback / legacy drivers
#
CONFIG_PATA_ACPI=m
CONFIG_ATA_GENERIC=m
# CONFIG_PATA_LEGACY is not set
# CONFIG_MD is not set
# CONFIG_TARGET_CORE is not set
CONFIG_FUSION=y
CONFIG_FUSION_SPI=m
# CONFIG_FUSION_FC is not set
# CONFIG_FUSION_SAS is not set
CONFIG_FUSION_MAX_SGE=128
# CONFIG_FUSION_CTL is not set
# CONFIG_FUSION_LOGGING is not set

#
# IEEE 1394 (FireWire) support
#
# CONFIG_FIREWIRE is not set
# CONFIG_FIREWIRE_NOSY is not set
# CONFIG_I2O is not set
# CONFIG_MACINTOSH_DRIVERS is not set
CONFIG_NETDEVICES=y
CONFIG_NET_CORE=y
# CONFIG_BONDING is not set
# CONFIG_DUMMY is not set
# CONFIG_EQUALIZER is not set
# CONFIG_NET_FC is not set
# CONFIG_MII is not set
# CONFIG_NET_TEAM is not set
# CONFIG_MACVLAN is not set
# CONFIG_NETCONSOLE is not set
# CONFIG_NETPOLL is not set
# CONFIG_NET_POLL_CONTROLLER is not set
CONFIG_TUN=m
# CONFIG_VETH is not set
# CONFIG_ARCNET is not set

#
# CAIF transport drivers
#
CONFIG_ETHERNET=y
# CONFIG_NET_VENDOR_3COM is not set
# CONFIG_NET_VENDOR_ADAPTEC is not set
# CONFIG_NET_VENDOR_ALTEON is not set
# CONFIG_NET_VENDOR_AMD is not set
# CONFIG_NET_VENDOR_ATHEROS is not set
CONFIG_NET_VENDOR_BROADCOM=y
# CONFIG_B44 is not set
CONFIG_BNX2=m
# CONFIG_CNIC is not set
# CONFIG_TIGON3 is not set
# CONFIG_BNX2X is not set
# CONFIG_NET_VENDOR_BROCADE is not set
# CONFIG_NET_CALXEDA_XGMAC is not set
# CONFIG_NET_VENDOR_CHELSIO is not set
# CONFIG_NET_VENDOR_CISCO is not set
# CONFIG_DNET is not set
# CONFIG_NET_VENDOR_DEC is not set
# CONFIG_NET_VENDOR_DLINK is not set
# CONFIG_NET_VENDOR_EMULEX is not set
# CONFIG_NET_VENDOR_EXAR is not set
# CONFIG_NET_VENDOR_HP is not set
CONFIG_NET_VENDOR_INTEL=y
# CONFIG_E100 is not set
CONFIG_E1000=m
CONFIG_E1000E=m
# CONFIG_IGB is not set
# CONFIG_IGBVF is not set
# CONFIG_IXGB is not set
# CONFIG_IXGBE is not set
# CONFIG_IXGBEVF is not set
# CONFIG_NET_VENDOR_I825XX is not set
# CONFIG_IP1000 is not set
# CONFIG_JME is not set
# CONFIG_NET_VENDOR_MARVELL is not set
# CONFIG_NET_VENDOR_MELLANOX is not set
# CONFIG_NET_VENDOR_MICREL is not set
# CONFIG_NET_VENDOR_MYRI is not set
# CONFIG_FEALNX is not set
# CONFIG_NET_VENDOR_NATSEMI is not set
# CONFIG_NET_VENDOR_NVIDIA is not set
# CONFIG_NET_VENDOR_OKI is not set
# CONFIG_ETHOC is not set
# CONFIG_NET_PACKET_ENGINE is not set
# CONFIG_NET_VENDOR_QLOGIC is not set
# CONFIG_NET_VENDOR_REALTEK is not set
# CONFIG_NET_VENDOR_RDC is not set
# CONFIG_NET_VENDOR_SEEQ is not set
# CONFIG_NET_VENDOR_SILAN is not set
# CONFIG_NET_VENDOR_SIS is not set
# CONFIG_SFC is not set
# CONFIG_NET_VENDOR_SMSC is not set
# CONFIG_NET_VENDOR_STMICRO is not set
# CONFIG_NET_VENDOR_SUN is not set
# CONFIG_NET_VENDOR_TEHUTI is not set
# CONFIG_NET_VENDOR_TI is not set
# CONFIG_NET_VENDOR_VIA is not set
# CONFIG_FDDI is not set
# CONFIG_HIPPI is not set
# CONFIG_NET_SB1000 is not set
# CONFIG_PHYLIB is not set
# CONFIG_PPP is not set
# CONFIG_SLIP is not set
# CONFIG_TR is not set

#
# USB Network Adapters
#
# CONFIG_USB_CATC is not set
# CONFIG_USB_KAWETH is not set
# CONFIG_USB_PEGASUS is not set
# CONFIG_USB_RTL8150 is not set
# CONFIG_USB_USBNET is not set
# CONFIG_USB_IPHETH is not set
# CONFIG_WLAN is not set

#
# Enable WiMAX (Networking options) to see the WiMAX drivers
#
# CONFIG_WAN is not set
# CONFIG_XEN_NETDEV_FRONTEND is not set
CONFIG_XEN_NETDEV_BACKEND=m
# CONFIG_VMXNET3 is not set
# CONFIG_ISDN is not set

#
# Input device support
#
CONFIG_INPUT=y
# CONFIG_INPUT_FF_MEMLESS is not set
# CONFIG_INPUT_POLLDEV is not set
# CONFIG_INPUT_SPARSEKMAP is not set

#
# Userland interfaces
#
CONFIG_INPUT_MOUSEDEV=y
# CONFIG_INPUT_MOUSEDEV_PSAUX is not set
CONFIG_INPUT_MOUSEDEV_SCREEN_X=1024
CONFIG_INPUT_MOUSEDEV_SCREEN_Y=768
# CONFIG_INPUT_JOYDEV is not set
# CONFIG_INPUT_EVDEV is not set
# CONFIG_INPUT_EVBUG is not set

#
# Input Device Drivers
#
CONFIG_INPUT_KEYBOARD=y
CONFIG_KEYBOARD_ATKBD=y
# CONFIG_KEYBOARD_LKKBD is not set
# CONFIG_KEYBOARD_NEWTON is not set
# CONFIG_KEYBOARD_OPENCORES is not set
# CONFIG_KEYBOARD_STOWAWAY is not set
# CONFIG_KEYBOARD_SUNKBD is not set
# CONFIG_KEYBOARD_OMAP4 is not set
# CONFIG_KEYBOARD_XTKBD is not set
CONFIG_INPUT_MOUSE=y
CONFIG_MOUSE_PS2=y
CONFIG_MOUSE_PS2_ALPS=y
CONFIG_MOUSE_PS2_LOGIPS2PP=y
CONFIG_MOUSE_PS2_SYNAPTICS=y
CONFIG_MOUSE_PS2_LIFEBOOK=y
CONFIG_MOUSE_PS2_TRACKPOINT=y
# CONFIG_MOUSE_PS2_ELANTECH is not set
# CONFIG_MOUSE_PS2_SENTELIC is not set
# CONFIG_MOUSE_PS2_TOUCHKIT is not set
# CONFIG_MOUSE_SERIAL is not set
# CONFIG_MOUSE_APPLETOUCH is not set
# CONFIG_MOUSE_BCM5974 is not set
# CONFIG_MOUSE_VSXXXAA is not set
# CONFIG_MOUSE_SYNAPTICS_USB is not set
# CONFIG_INPUT_JOYSTICK is not set
# CONFIG_INPUT_TABLET is not set
# CONFIG_INPUT_TOUCHSCREEN is not set
CONFIG_INPUT_MISC=y
# CONFIG_INPUT_AD714X is not set
CONFIG_INPUT_PCSPKR=m
# CONFIG_INPUT_ATLAS_BTNS is not set
# CONFIG_INPUT_ATI_REMOTE2 is not set
# CONFIG_INPUT_KEYSPAN_REMOTE is not set
# CONFIG_INPUT_POWERMATE is not set
# CONFIG_INPUT_YEALINK is not set
# CONFIG_INPUT_CM109 is not set
# CONFIG_INPUT_UINPUT is not set
# CONFIG_INPUT_ADXL34X is not set
# CONFIG_INPUT_CMA3000 is not set
CONFIG_INPUT_XEN_KBDDEV_FRONTEND=m

#
# Hardware I/O ports
#
CONFIG_SERIO=y
CONFIG_SERIO_I8042=y
# CONFIG_SERIO_SERPORT is not set
# CONFIG_SERIO_CT82C710 is not set
# CONFIG_SERIO_PCIPS2 is not set
CONFIG_SERIO_LIBPS2=y
# CONFIG_SERIO_RAW is not set
# CONFIG_SERIO_ALTERA_PS2 is not set
# CONFIG_SERIO_PS2MULT is not set
# CONFIG_GAMEPORT is not set

#
# Character devices
#
CONFIG_VT=y
CONFIG_CONSOLE_TRANSLATIONS=y
CONFIG_VT_CONSOLE=y
CONFIG_VT_CONSOLE_SLEEP=y
CONFIG_HW_CONSOLE=y
CONFIG_VT_HW_CONSOLE_BINDING=y
CONFIG_UNIX98_PTYS=y
# CONFIG_DEVPTS_MULTIPLE_INSTANCES is not set
# CONFIG_LEGACY_PTYS is not set
# CONFIG_SERIAL_NONSTANDARD is not set
# CONFIG_NOZOMI is not set
# CONFIG_N_GSM is not set
# CONFIG_TRACE_SINK is not set
# CONFIG_DEVKMEM is not set

#
# Serial drivers
#
CONFIG_SERIAL_8250=m
CONFIG_FIX_EARLYCON_MEM=y
CONFIG_SERIAL_8250_PCI=m
CONFIG_SERIAL_8250_PNP=m
CONFIG_SERIAL_8250_NR_UARTS=4
CONFIG_SERIAL_8250_RUNTIME_UARTS=4
# CONFIG_SERIAL_8250_EXTENDED is not set

#
# Non-8250 serial port support
#
# CONFIG_SERIAL_MFD_HSU is not set
CONFIG_SERIAL_CORE=m
# CONFIG_SERIAL_JSM is not set
# CONFIG_SERIAL_TIMBERDALE is not set
# CONFIG_SERIAL_ALTERA_JTAGUART is not set
# CONFIG_SERIAL_ALTERA_UART is not set
# CONFIG_SERIAL_PCH_UART is not set
# CONFIG_SERIAL_XILINX_PS_UART is not set
CONFIG_HVC_DRIVER=y
CONFIG_HVC_IRQ=y
CONFIG_HVC_XEN=y
CONFIG_HVC_XEN_FRONTEND=y
CONFIG_IPMI_HANDLER=m
CONFIG_IPMI_PANIC_EVENT=y
CONFIG_IPMI_PANIC_STRING=y
CONFIG_IPMI_DEVICE_INTERFACE=m
CONFIG_IPMI_SI=m
CONFIG_IPMI_WATCHDOG=m
CONFIG_IPMI_POWEROFF=m
# CONFIG_HW_RANDOM is not set
CONFIG_NVRAM=y
# CONFIG_R3964 is not set
# CONFIG_APPLICOM is not set
# CONFIG_MWAVE is not set
# CONFIG_RAW_DRIVER is not set
# CONFIG_HPET is not set
# CONFIG_HANGCHECK_TIMER is not set
# CONFIG_TCG_TPM is not set
# CONFIG_TELCLOCK is not set
CONFIG_DEVPORT=y
# CONFIG_RAMOOPS is not set
# CONFIG_I2C is not set
# CONFIG_SPI is not set
# CONFIG_HSI is not set

#
# PPS support
#
# CONFIG_PPS is not set

#
# PPS generators support
#

#
# PTP clock support
#

#
# Enable Device Drivers -> PPS to see the PTP clock options.
#
CONFIG_ARCH_WANT_OPTIONAL_GPIOLIB=y
# CONFIG_GPIOLIB is not set
# CONFIG_W1 is not set
# CONFIG_POWER_SUPPLY is not set
CONFIG_HWMON=m
# CONFIG_HWMON_VID is not set
# CONFIG_HWMON_DEBUG_CHIP is not set

#
# Native drivers
#
# CONFIG_SENSORS_ABITUGURU is not set
# CONFIG_SENSORS_ABITUGURU3 is not set
CONFIG_SENSORS_K8TEMP=m
# CONFIG_SENSORS_K10TEMP is not set
# CONFIG_SENSORS_FAM15H_POWER is not set
# CONFIG_SENSORS_I5K_AMB is not set
# CONFIG_SENSORS_F71805F is not set
# CONFIG_SENSORS_F71882FG is not set
# CONFIG_SENSORS_CORETEMP is not set
# CONFIG_SENSORS_IBMAEM is not set
# CONFIG_SENSORS_IBMPEX is not set
# CONFIG_SENSORS_IT87 is not set
# CONFIG_SENSORS_NTC_THERMISTOR is not set
# CONFIG_SENSORS_PC87360 is not set
# CONFIG_SENSORS_PC87427 is not set
# CONFIG_SENSORS_SIS5595 is not set
# CONFIG_SENSORS_SMSC47M1 is not set
# CONFIG_SENSORS_SMSC47B397 is not set
# CONFIG_SENSORS_SCH56XX_COMMON is not set
# CONFIG_SENSORS_SCH5627 is not set
# CONFIG_SENSORS_SCH5636 is not set
# CONFIG_SENSORS_VIA_CPUTEMP is not set
# CONFIG_SENSORS_VIA686A is not set
# CONFIG_SENSORS_VT1211 is not set
# CONFIG_SENSORS_VT8231 is not set
# CONFIG_SENSORS_W83627HF is not set
# CONFIG_SENSORS_W83627EHF is not set
# CONFIG_SENSORS_APPLESMC is not set

#
# ACPI drivers
#
# CONFIG_SENSORS_ACPI_POWER is not set
# CONFIG_SENSORS_ATK0110 is not set
CONFIG_THERMAL=m
CONFIG_THERMAL_HWMON=y
CONFIG_WATCHDOG=y
CONFIG_WATCHDOG_CORE=y
# CONFIG_WATCHDOG_NOWAYOUT is not set

#
# Watchdog Device Drivers
#
# CONFIG_SOFT_WATCHDOG is not set
# CONFIG_ACQUIRE_WDT is not set
# CONFIG_ADVANTECH_WDT is not set
# CONFIG_ALIM1535_WDT is not set
# CONFIG_ALIM7101_WDT is not set
# CONFIG_F71808E_WDT is not set
# CONFIG_SP5100_TCO is not set
# CONFIG_SC520_WDT is not set
# CONFIG_SBC_FITPC2_WATCHDOG is not set
# CONFIG_EUROTECH_WDT is not set
# CONFIG_IB700_WDT is not set
# CONFIG_IBMASR is not set
# CONFIG_WAFER_WDT is not set
# CONFIG_I6300ESB_WDT is not set
# CONFIG_ITCO_WDT is not set
# CONFIG_IT8712F_WDT is not set
# CONFIG_IT87_WDT is not set
CONFIG_HP_WATCHDOG=m
CONFIG_HPWDT_NMI_DECODING=y
# CONFIG_SC1200_WDT is not set
# CONFIG_PC87413_WDT is not set
# CONFIG_NV_TCO is not set
# CONFIG_60XX_WDT is not set
# CONFIG_SBC8360_WDT is not set
# CONFIG_CPU5_WDT is not set
# CONFIG_SMSC_SCH311X_WDT is not set
# CONFIG_SMSC37B787_WDT is not set
# CONFIG_VIA_WDT is not set
# CONFIG_W83627HF_WDT is not set
# CONFIG_W83697HF_WDT is not set
# CONFIG_W83697UG_WDT is not set
# CONFIG_W83877F_WDT is not set
# CONFIG_W83977F_WDT is not set
# CONFIG_MACHZ_WDT is not set
# CONFIG_SBC_EPX_C3_WATCHDOG is not set
CONFIG_XEN_WDT=m

#
# PCI-based Watchdog Cards
#
# CONFIG_PCIPCWATCHDOG is not set
# CONFIG_WDTPCI is not set

#
# USB-based Watchdog Cards
#
# CONFIG_USBPCWATCHDOG is not set
CONFIG_SSB_POSSIBLE=y

#
# Sonics Silicon Backplane
#
# CONFIG_SSB is not set
CONFIG_BCMA_POSSIBLE=y

#
# Broadcom specific AMBA
#
# CONFIG_BCMA is not set

#
# Multifunction device drivers
#
# CONFIG_MFD_CORE is not set
# CONFIG_MFD_SM501 is not set
# CONFIG_HTC_PASIC3 is not set
# CONFIG_MFD_TMIO is not set
# CONFIG_ABX500_CORE is not set
# CONFIG_MFD_CS5535 is not set
# CONFIG_LPC_SCH is not set
# CONFIG_MFD_RDC321X is not set
# CONFIG_MFD_JANZ_CMODIO is not set
# CONFIG_MFD_VX855 is not set
# CONFIG_REGULATOR is not set
# CONFIG_MEDIA_SUPPORT is not set

#
# Graphics support
#
CONFIG_AGP=m
CONFIG_AGP_AMD64=m
# CONFIG_AGP_INTEL is not set
# CONFIG_AGP_SIS is not set
# CONFIG_AGP_VIA is not set
CONFIG_VGA_ARB=y
CONFIG_VGA_ARB_MAX_GPUS=16
# CONFIG_VGA_SWITCHEROO is not set
# CONFIG_DRM is not set
# CONFIG_STUB_POULSBO is not set
CONFIG_VGASTATE=m
# CONFIG_VIDEO_OUTPUT_CONTROL is not set
CONFIG_FB=y
# CONFIG_FIRMWARE_EDID is not set
# CONFIG_FB_DDC is not set
CONFIG_FB_BOOT_VESA_SUPPORT=y
CONFIG_FB_CFB_FILLRECT=y
CONFIG_FB_CFB_COPYAREA=y
CONFIG_FB_CFB_IMAGEBLIT=y
# CONFIG_FB_CFB_REV_PIXELS_IN_BYTE is not set
# CONFIG_FB_SYS_FILLRECT is not set
# CONFIG_FB_SYS_COPYAREA is not set
# CONFIG_FB_SYS_IMAGEBLIT is not set
# CONFIG_FB_FOREIGN_ENDIAN is not set
# CONFIG_FB_SYS_FOPS is not set
# CONFIG_FB_WMT_GE_ROPS is not set
# CONFIG_FB_SVGALIB is not set
# CONFIG_FB_MACMODES is not set
# CONFIG_FB_BACKLIGHT is not set
CONFIG_FB_MODE_HELPERS=y
# CONFIG_FB_TILEBLITTING is not set

#
# Frame buffer hardware drivers
#
# CONFIG_FB_CIRRUS is not set
# CONFIG_FB_PM2 is not set
# CONFIG_FB_CYBER2000 is not set
# CONFIG_FB_ARC is not set
# CONFIG_FB_ASILIANT is not set
# CONFIG_FB_IMSTT is not set
CONFIG_FB_VGA16=m
CONFIG_FB_VESA=y
# CONFIG_FB_N411 is not set
# CONFIG_FB_HGA is not set
# CONFIG_FB_S1D13XXX is not set
# CONFIG_FB_NVIDIA is not set
# CONFIG_FB_RIVA is not set
# CONFIG_FB_I740 is not set
# CONFIG_FB_LE80578 is not set
# CONFIG_FB_MATROX is not set
# CONFIG_FB_RADEON is not set
# CONFIG_FB_ATY128 is not set
# CONFIG_FB_ATY is not set
# CONFIG_FB_S3 is not set
# CONFIG_FB_SAVAGE is not set
# CONFIG_FB_SIS is not set
# CONFIG_FB_VIA is not set
# CONFIG_FB_NEOMAGIC is not set
# CONFIG_FB_KYRO is not set
# CONFIG_FB_3DFX is not set
# CONFIG_FB_VOODOO1 is not set
# CONFIG_FB_VT8623 is not set
# CONFIG_FB_TRIDENT is not set
# CONFIG_FB_ARK is not set
# CONFIG_FB_PM3 is not set
# CONFIG_FB_CARMINE is not set
# CONFIG_FB_GEODE is not set
# CONFIG_FB_SMSCUFX is not set
# CONFIG_FB_UDL is not set
# CONFIG_FB_VIRTUAL is not set
# CONFIG_XEN_FBDEV_FRONTEND is not set
# CONFIG_FB_METRONOME is not set
# CONFIG_FB_MB862XX is not set
# CONFIG_FB_BROADSHEET is not set
# CONFIG_EXYNOS_VIDEO is not set
# CONFIG_BACKLIGHT_LCD_SUPPORT is not set

#
# Console display driver support
#
CONFIG_VGA_CONSOLE=y
# CONFIG_VGACON_SOFT_SCROLLBACK is not set
CONFIG_DUMMY_CONSOLE=y
CONFIG_FRAMEBUFFER_CONSOLE=y
CONFIG_FRAMEBUFFER_CONSOLE_DETECT_PRIMARY=y
# CONFIG_FRAMEBUFFER_CONSOLE_ROTATION is not set
# CONFIG_FONTS is not set
CONFIG_FONT_8x8=y
CONFIG_FONT_8x16=y
CONFIG_LOGO=y
CONFIG_LOGO_LINUX_MONO=y
CONFIG_LOGO_LINUX_VGA16=y
CONFIG_LOGO_LINUX_CLUT224=y
# CONFIG_SOUND is not set
CONFIG_HID_SUPPORT=y
CONFIG_HID=y
# CONFIG_HIDRAW is not set

#
# USB Input Devices
#
CONFIG_USB_HID=m
# CONFIG_HID_PID is not set
# CONFIG_USB_HIDDEV is not set

#
# Special HID drivers
#
CONFIG_HID_A4TECH=m
# CONFIG_HID_ACRUX is not set
CONFIG_HID_APPLE=m
CONFIG_HID_BELKIN=m
CONFIG_HID_CHERRY=m
CONFIG_HID_CHICONY=m
CONFIG_HID_CYPRESS=m
# CONFIG_HID_DRAGONRISE is not set
# CONFIG_HID_EMS_FF is not set
CONFIG_HID_EZKEY=m
# CONFIG_HID_HOLTEK is not set
# CONFIG_HID_KEYTOUCH is not set
# CONFIG_HID_KYE is not set
# CONFIG_HID_UCLOGIC is not set
# CONFIG_HID_WALTOP is not set
# CONFIG_HID_GYRATION is not set
# CONFIG_HID_TWINHAN is not set
CONFIG_HID_KENSINGTON=m
# CONFIG_HID_LCPOWER is not set
CONFIG_HID_LOGITECH=m
# CONFIG_HID_LOGITECH_DJ is not set
# CONFIG_LOGITECH_FF is not set
# CONFIG_LOGIRUMBLEPAD2_FF is not set
# CONFIG_LOGIG940_FF is not set
# CONFIG_LOGIWHEELS_FF is not set
CONFIG_HID_MICROSOFT=m
CONFIG_HID_MONTEREY=m
# CONFIG_HID_MULTITOUCH is not set
# CONFIG_HID_NTRIG is not set
# CONFIG_HID_ORTEK is not set
# CONFIG_HID_PANTHERLORD is not set
# CONFIG_HID_PETALYNX is not set
# CONFIG_HID_PICOLCD is not set
# CONFIG_HID_PRIMAX is not set
# CONFIG_HID_ROCCAT is not set
# CONFIG_HID_SAITEK is not set
# CONFIG_HID_SAMSUNG is not set
# CONFIG_HID_SONY is not set
# CONFIG_HID_SPEEDLINK is not set
# CONFIG_HID_SUNPLUS is not set
# CONFIG_HID_GREENASIA is not set
# CONFIG_HID_SMARTJOYPLUS is not set
# CONFIG_HID_TIVO is not set
# CONFIG_HID_TOPSEED is not set
# CONFIG_HID_THRUSTMASTER is not set
# CONFIG_HID_ZEROPLUS is not set
# CONFIG_HID_ZYDACRON is not set
CONFIG_USB_ARCH_HAS_OHCI=y
CONFIG_USB_ARCH_HAS_EHCI=y
CONFIG_USB_ARCH_HAS_XHCI=y
CONFIG_USB_SUPPORT=y
CONFIG_USB_COMMON=m
CONFIG_USB_ARCH_HAS_HCD=y
CONFIG_USB=m
# CONFIG_USB_DEBUG is not set
CONFIG_USB_ANNOUNCE_NEW_DEVICES=y

#
# Miscellaneous USB options
#
CONFIG_USB_DEVICEFS=y
# CONFIG_USB_DEVICE_CLASS is not set
# CONFIG_USB_DYNAMIC_MINORS is not set
CONFIG_USB_SUSPEND=y
# CONFIG_USB_OTG is not set
CONFIG_USB_MON=m
# CONFIG_USB_WUSB_CBAF is not set

#
# USB Host Controller Drivers
#
# CONFIG_USB_C67X00_HCD is not set
# CONFIG_USB_XHCI_HCD is not set
CONFIG_USB_EHCI_HCD=m
# CONFIG_USB_EHCI_ROOT_HUB_TT is not set
# CONFIG_USB_EHCI_TT_NEWSCHED is not set
# CONFIG_USB_OXU210HP_HCD is not set
# CONFIG_USB_ISP116X_HCD is not set
# CONFIG_USB_ISP1760_HCD is not set
# CONFIG_USB_ISP1362_HCD is not set
CONFIG_USB_OHCI_HCD=m
# CONFIG_USB_OHCI_HCD_PLATFORM is not set
# CONFIG_USB_EHCI_HCD_PLATFORM is not set
# CONFIG_USB_OHCI_BIG_ENDIAN_DESC is not set
# CONFIG_USB_OHCI_BIG_ENDIAN_MMIO is not set
CONFIG_USB_OHCI_LITTLE_ENDIAN=y
CONFIG_USB_UHCI_HCD=m
# CONFIG_USB_SL811_HCD is not set
# CONFIG_USB_R8A66597_HCD is not set
# CONFIG_XEN_USBDEV_FRONTEND is not set
CONFIG_XEN_USBDEV_BACKEND=m

#
# USB Device Class drivers
#
# CONFIG_USB_ACM is not set
# CONFIG_USB_PRINTER is not set
# CONFIG_USB_WDM is not set
# CONFIG_USB_TMC is not set

#
# NOTE: USB_STORAGE depends on SCSI but BLK_DEV_SD may
#

#
# also be needed; see USB_STORAGE Help for more info
#
CONFIG_USB_STORAGE=m
# CONFIG_USB_STORAGE_DEBUG is not set
# CONFIG_USB_STORAGE_REALTEK is not set
# CONFIG_USB_STORAGE_DATAFAB is not set
# CONFIG_USB_STORAGE_FREECOM is not set
# CONFIG_USB_STORAGE_ISD200 is not set
# CONFIG_USB_STORAGE_USBAT is not set
# CONFIG_USB_STORAGE_SDDR09 is not set
# CONFIG_USB_STORAGE_SDDR55 is not set
# CONFIG_USB_STORAGE_JUMPSHOT is not set
# CONFIG_USB_STORAGE_ALAUDA is not set
# CONFIG_USB_STORAGE_ONETOUCH is not set
# CONFIG_USB_STORAGE_KARMA is not set
# CONFIG_USB_STORAGE_CYPRESS_ATACB is not set
# CONFIG_USB_STORAGE_ENE_UB6250 is not set
# CONFIG_USB_UAS is not set
# CONFIG_USB_LIBUSUAL is not set

#
# USB Imaging devices
#
# CONFIG_USB_MDC800 is not set
# CONFIG_USB_MICROTEK is not set

#
# USB port drivers
#
# CONFIG_USB_SERIAL is not set

#
# USB Miscellaneous drivers
#
# CONFIG_USB_EMI62 is not set
# CONFIG_USB_EMI26 is not set
# CONFIG_USB_ADUTUX is not set
# CONFIG_USB_SEVSEG is not set
# CONFIG_USB_RIO500 is not set
# CONFIG_USB_LEGOTOWER is not set
# CONFIG_USB_LCD is not set
# CONFIG_USB_LED is not set
# CONFIG_USB_CYPRESS_CY7C63 is not set
# CONFIG_USB_CYTHERM is not set
# CONFIG_USB_IDMOUSE is not set
# CONFIG_USB_FTDI_ELAN is not set
# CONFIG_USB_APPLEDISPLAY is not set
# CONFIG_USB_SISUSBVGA is not set
# CONFIG_USB_LD is not set
# CONFIG_USB_TRANCEVIBRATOR is not set
# CONFIG_USB_IOWARRIOR is not set
# CONFIG_USB_TEST is not set
# CONFIG_USB_ISIGHTFW is not set
# CONFIG_USB_YUREX is not set
# CONFIG_USB_GADGET is not set

#
# OTG and related infrastructure
#
# CONFIG_NOP_USB_XCEIV is not set
# CONFIG_UWB is not set
# CONFIG_MMC is not set
# CONFIG_MEMSTICK is not set
# CONFIG_NEW_LEDS is not set
# CONFIG_ACCESSIBILITY is not set
# CONFIG_INFINIBAND is not set
CONFIG_EDAC=y

#
# Reporting subsystems
#
# CONFIG_EDAC_DEBUG is not set
CONFIG_EDAC_DECODE_MCE=m
CONFIG_EDAC_MCE_INJ=m
CONFIG_EDAC_MM_EDAC=m
CONFIG_EDAC_AMD64=m
# CONFIG_EDAC_AMD64_ERROR_INJECTION is not set
# CONFIG_EDAC_E752X is not set
# CONFIG_EDAC_I82975X is not set
# CONFIG_EDAC_I3000 is not set
# CONFIG_EDAC_I3200 is not set
# CONFIG_EDAC_X38 is not set
# CONFIG_EDAC_I5400 is not set
# CONFIG_EDAC_I5000 is not set
# CONFIG_EDAC_I5100 is not set
# CONFIG_EDAC_I7300 is not set
CONFIG_RTC_LIB=y
CONFIG_RTC_CLASS=y
CONFIG_RTC_HCTOSYS=y
CONFIG_RTC_HCTOSYS_DEVICE="rtc0"
# CONFIG_RTC_DEBUG is not set

#
# RTC interfaces
#
CONFIG_RTC_INTF_SYSFS=y
CONFIG_RTC_INTF_PROC=y
CONFIG_RTC_INTF_DEV=y
# CONFIG_RTC_INTF_DEV_UIE_EMUL is not set
# CONFIG_RTC_DRV_TEST is not set

#
# SPI RTC drivers
#

#
# Platform RTC drivers
#
CONFIG_RTC_DRV_CMOS=y
# CONFIG_RTC_DRV_DS1286 is not set
# CONFIG_RTC_DRV_DS1511 is not set
# CONFIG_RTC_DRV_DS1553 is not set
# CONFIG_RTC_DRV_DS1742 is not set
# CONFIG_RTC_DRV_STK17TA8 is not set
# CONFIG_RTC_DRV_M48T86 is not set
# CONFIG_RTC_DRV_M48T35 is not set
# CONFIG_RTC_DRV_M48T59 is not set
# CONFIG_RTC_DRV_MSM6242 is not set
# CONFIG_RTC_DRV_BQ4802 is not set
# CONFIG_RTC_DRV_RP5C01 is not set
# CONFIG_RTC_DRV_V3020 is not set

#
# on-CPU RTC drivers
#
# CONFIG_DMADEVICES is not set
# CONFIG_AUXDISPLAY is not set
# CONFIG_UIO is not set

#
# Virtio drivers
#
# CONFIG_VIRTIO_PCI is not set
# CONFIG_VIRTIO_BALLOON is not set
# CONFIG_VIRTIO_MMIO is not set

#
# Microsoft Hyper-V guest support
#
# CONFIG_HYPERV is not set

#
# Xen driver support
#
CONFIG_XEN_BALLOON=y
CONFIG_XEN_SELFBALLOONING=y
CONFIG_XEN_SCRUB_PAGES=y
CONFIG_XEN_DEV_EVTCHN=m
CONFIG_XEN_BACKEND=y
CONFIG_XENFS=m
CONFIG_XEN_COMPAT_XENFS=y
CONFIG_XEN_SYS_HYPERVISOR=y
CONFIG_XEN_XENBUS_FRONTEND=y
CONFIG_XEN_GNTDEV=m
CONFIG_XEN_GRANT_DEV_ALLOC=m
CONFIG_SWIOTLB_XEN=y
CONFIG_XEN_TMEM=y
CONFIG_XEN_PCIDEV_BACKEND=m
CONFIG_XEN_PRIVCMD=m
CONFIG_XEN_ACPI_PROCESSOR=m
# CONFIG_STAGING is not set
# CONFIG_X86_PLATFORM_DEVICES is not set

#
# Hardware Spinlock drivers
#
CONFIG_CLKEVT_I8253=y
CONFIG_I8253_LOCK=y
CONFIG_CLKBLD_I8253=y
CONFIG_IOMMU_API=y
CONFIG_IOMMU_SUPPORT=y
CONFIG_AMD_IOMMU=y
# CONFIG_AMD_IOMMU_STATS is not set
# CONFIG_INTEL_IOMMU is not set
# CONFIG_IRQ_REMAP is not set

#
# Remoteproc drivers (EXPERIMENTAL)
#

#
# Rpmsg drivers (EXPERIMENTAL)
#
# CONFIG_VIRT_DRIVERS is not set
# CONFIG_PM_DEVFREQ is not set

#
# Firmware Drivers
#
# CONFIG_EDD is not set
CONFIG_FIRMWARE_MEMMAP=y
# CONFIG_DELL_RBU is not set
# CONFIG_DCDBAS is not set
# CONFIG_DMIID is not set
CONFIG_DMI_SYSFS=m
CONFIG_ISCSI_IBFT_FIND=y
# CONFIG_ISCSI_IBFT is not set
# CONFIG_GOOGLE_FIRMWARE is not set

#
# File systems
#
CONFIG_DCACHE_WORD_ACCESS=y
# CONFIG_EXT2_FS is not set
# CONFIG_EXT3_FS is not set
CONFIG_EXT4_FS=m
CONFIG_EXT4_USE_FOR_EXT23=y
CONFIG_EXT4_FS_XATTR=y
CONFIG_EXT4_FS_POSIX_ACL=y
CONFIG_EXT4_FS_SECURITY=y
# CONFIG_EXT4_DEBUG is not set
CONFIG_JBD2=m
CONFIG_FS_MBCACHE=m
# CONFIG_REISERFS_FS is not set
# CONFIG_JFS_FS is not set
# CONFIG_XFS_FS is not set
# CONFIG_GFS2_FS is not set
# CONFIG_OCFS2_FS is not set
# CONFIG_BTRFS_FS is not set
# CONFIG_NILFS2_FS is not set
CONFIG_FS_POSIX_ACL=y
CONFIG_FILE_LOCKING=y
CONFIG_FSNOTIFY=y
CONFIG_DNOTIFY=y
CONFIG_INOTIFY_USER=y
CONFIG_FANOTIFY=y
# CONFIG_QUOTA is not set
# CONFIG_QUOTACTL is not set
CONFIG_AUTOFS4_FS=m
# CONFIG_FUSE_FS is not set
CONFIG_GENERIC_ACL=y

#
# Caches
#
# CONFIG_FSCACHE is not set

#
# CD-ROM/DVD Filesystems
#
CONFIG_ISO9660_FS=m
CONFIG_JOLIET=y
CONFIG_ZISOFS=y
CONFIG_UDF_FS=m
CONFIG_UDF_NLS=y

#
# DOS/FAT/NT Filesystems
#
# CONFIG_MSDOS_FS is not set
# CONFIG_VFAT_FS is not set
# CONFIG_NTFS_FS is not set

#
# Pseudo filesystems
#
CONFIG_PROC_FS=y
CONFIG_PROC_KCORE=y
CONFIG_PROC_SYSCTL=y
CONFIG_PROC_PAGE_MONITOR=y
CONFIG_SYSFS=y
CONFIG_TMPFS=y
CONFIG_TMPFS_POSIX_ACL=y
CONFIG_TMPFS_XATTR=y
CONFIG_HUGETLBFS=y
CONFIG_HUGETLB_PAGE=y
CONFIG_CONFIGFS_FS=y
CONFIG_MISC_FILESYSTEMS=y
# CONFIG_ADFS_FS is not set
# CONFIG_AFFS_FS is not set
# CONFIG_HFS_FS is not set
# CONFIG_HFSPLUS_FS is not set
# CONFIG_BEFS_FS is not set
# CONFIG_BFS_FS is not set
# CONFIG_EFS_FS is not set
# CONFIG_LOGFS is not set
# CONFIG_CRAMFS is not set
# CONFIG_SQUASHFS is not set
# CONFIG_VXFS_FS is not set
# CONFIG_MINIX_FS is not set
# CONFIG_OMFS_FS is not set
# CONFIG_HPFS_FS is not set
# CONFIG_QNX4FS_FS is not set
# CONFIG_QNX6FS_FS is not set
# CONFIG_ROMFS_FS is not set
CONFIG_PSTORE=y
# CONFIG_SYSV_FS is not set
# CONFIG_UFS_FS is not set
# CONFIG_NETWORK_FILESYSTEMS is not set
CONFIG_NLS=y
CONFIG_NLS_DEFAULT="utf8"
CONFIG_NLS_CODEPAGE_437=m
# CONFIG_NLS_CODEPAGE_737 is not set
# CONFIG_NLS_CODEPAGE_775 is not set
CONFIG_NLS_CODEPAGE_850=m
CONFIG_NLS_CODEPAGE_852=m
# CONFIG_NLS_CODEPAGE_855 is not set
# CONFIG_NLS_CODEPAGE_857 is not set
# CONFIG_NLS_CODEPAGE_860 is not set
# CONFIG_NLS_CODEPAGE_861 is not set
# CONFIG_NLS_CODEPAGE_862 is not set
# CONFIG_NLS_CODEPAGE_863 is not set
# CONFIG_NLS_CODEPAGE_864 is not set
# CONFIG_NLS_CODEPAGE_865 is not set
# CONFIG_NLS_CODEPAGE_866 is not set
# CONFIG_NLS_CODEPAGE_869 is not set
# CONFIG_NLS_CODEPAGE_936 is not set
# CONFIG_NLS_CODEPAGE_950 is not set
# CONFIG_NLS_CODEPAGE_932 is not set
# CONFIG_NLS_CODEPAGE_949 is not set
# CONFIG_NLS_CODEPAGE_874 is not set
# CONFIG_NLS_ISO8859_8 is not set
CONFIG_NLS_CODEPAGE_1250=m
# CONFIG_NLS_CODEPAGE_1251 is not set
CONFIG_NLS_ASCII=m
CONFIG_NLS_ISO8859_1=m
CONFIG_NLS_ISO8859_2=m
# CONFIG_NLS_ISO8859_3 is not set
# CONFIG_NLS_ISO8859_4 is not set
# CONFIG_NLS_ISO8859_5 is not set
# CONFIG_NLS_ISO8859_6 is not set
# CONFIG_NLS_ISO8859_7 is not set
# CONFIG_NLS_ISO8859_9 is not set
# CONFIG_NLS_ISO8859_13 is not set
# CONFIG_NLS_ISO8859_14 is not set
# CONFIG_NLS_ISO8859_15 is not set
# CONFIG_NLS_KOI8_R is not set
# CONFIG_NLS_KOI8_U is not set
CONFIG_NLS_UTF8=m
# CONFIG_DLM is not set

#
# Kernel hacking
#
CONFIG_TRACE_IRQFLAGS_SUPPORT=y
# CONFIG_PRINTK_TIME is not set
CONFIG_DEFAULT_MESSAGE_LOGLEVEL=4
# CONFIG_ENABLE_WARN_DEPRECATED is not set
# CONFIG_ENABLE_MUST_CHECK is not set
CONFIG_FRAME_WARN=2048
# CONFIG_MAGIC_SYSRQ is not set
# CONFIG_STRIP_ASM_SYMS is not set
# CONFIG_UNUSED_SYMBOLS is not set
# CONFIG_DEBUG_FS is not set
# CONFIG_HEADERS_CHECK is not set
# CONFIG_DEBUG_SECTION_MISMATCH is not set
# CONFIG_DEBUG_KERNEL is not set
# CONFIG_HARDLOCKUP_DETECTOR is not set
# CONFIG_SPARSE_RCU_POINTER is not set
CONFIG_DEBUG_BUGVERBOSE=y
CONFIG_DEBUG_MEMORY_INIT=y
CONFIG_ARCH_WANT_FRAME_POINTERS=y
# CONFIG_FRAME_POINTER is not set
CONFIG_RCU_CPU_STALL_TIMEOUT=60
CONFIG_USER_STACKTRACE_SUPPORT=y
CONFIG_HAVE_FUNCTION_TRACER=y
CONFIG_HAVE_FUNCTION_GRAPH_TRACER=y
CONFIG_HAVE_FUNCTION_GRAPH_FP_TEST=y
CONFIG_HAVE_FUNCTION_TRACE_MCOUNT_TEST=y
CONFIG_HAVE_DYNAMIC_FTRACE=y
CONFIG_HAVE_FTRACE_MCOUNT_RECORD=y
CONFIG_HAVE_SYSCALL_TRACEPOINTS=y
CONFIG_HAVE_C_RECORDMCOUNT=y
CONFIG_TRACING_SUPPORT=y
# CONFIG_FTRACE is not set
# CONFIG_PROVIDE_OHCI1394_DMA_INIT is not set
# CONFIG_DMA_API_DEBUG is not set
# CONFIG_ATOMIC64_SELFTEST is not set
# CONFIG_SAMPLES is not set
CONFIG_HAVE_ARCH_KGDB=y
CONFIG_HAVE_ARCH_KMEMCHECK=y
# CONFIG_TEST_KSTRTOX is not set
# CONFIG_STRICT_DEVMEM is not set
CONFIG_X86_VERBOSE_BOOTUP=y
CONFIG_EARLY_PRINTK=y
# CONFIG_EARLY_PRINTK_DBGP is not set
# CONFIG_DEBUG_SET_MODULE_RONX is not set
# CONFIG_IOMMU_STRESS is not set
CONFIG_HAVE_MMIOTRACE_SUPPORT=y
CONFIG_IO_DELAY_TYPE_0X80=0
CONFIG_IO_DELAY_TYPE_0XED=1
CONFIG_IO_DELAY_TYPE_UDELAY=2
CONFIG_IO_DELAY_TYPE_NONE=3
CONFIG_IO_DELAY_0X80=y
# CONFIG_IO_DELAY_0XED is not set
# CONFIG_IO_DELAY_UDELAY is not set
# CONFIG_IO_DELAY_NONE is not set
CONFIG_DEFAULT_IO_DELAY_TYPE=0
# CONFIG_OPTIMIZE_INLINING is not set

#
# Security options
#
# CONFIG_KEYS is not set
# CONFIG_SECURITY_DMESG_RESTRICT is not set
# CONFIG_SECURITY is not set
# CONFIG_SECURITYFS is not set
CONFIG_DEFAULT_SECURITY_DAC=y
CONFIG_DEFAULT_SECURITY=""
CONFIG_CRYPTO=y

#
# Crypto core or helper
#
CONFIG_CRYPTO_ALGAPI=y
CONFIG_CRYPTO_ALGAPI2=y
CONFIG_CRYPTO_AEAD2=y
CONFIG_CRYPTO_BLKCIPHER=m
CONFIG_CRYPTO_BLKCIPHER2=y
CONFIG_CRYPTO_HASH=y
CONFIG_CRYPTO_HASH2=y
CONFIG_CRYPTO_RNG=m
CONFIG_CRYPTO_RNG2=y
CONFIG_CRYPTO_PCOMP=m
CONFIG_CRYPTO_PCOMP2=y
CONFIG_CRYPTO_MANAGER=y
CONFIG_CRYPTO_MANAGER2=y
# CONFIG_CRYPTO_USER is not set
CONFIG_CRYPTO_MANAGER_DISABLE_TESTS=y
# CONFIG_CRYPTO_GF128MUL is not set
CONFIG_CRYPTO_NULL=m
# CONFIG_CRYPTO_PCRYPT is not set
CONFIG_CRYPTO_WORKQUEUE=y
# CONFIG_CRYPTO_CRYPTD is not set
# CONFIG_CRYPTO_AUTHENC is not set
# CONFIG_CRYPTO_TEST is not set

#
# Authenticated Encryption with Associated Data
#
# CONFIG_CRYPTO_CCM is not set
# CONFIG_CRYPTO_GCM is not set
# CONFIG_CRYPTO_SEQIV is not set

#
# Block modes
#
# CONFIG_CRYPTO_CBC is not set
# CONFIG_CRYPTO_CTR is not set
# CONFIG_CRYPTO_CTS is not set
# CONFIG_CRYPTO_ECB is not set
# CONFIG_CRYPTO_LRW is not set
# CONFIG_CRYPTO_PCBC is not set
# CONFIG_CRYPTO_XTS is not set

#
# Hash modes
#
# CONFIG_CRYPTO_HMAC is not set
# CONFIG_CRYPTO_XCBC is not set
# CONFIG_CRYPTO_VMAC is not set

#
# Digest
#
CONFIG_CRYPTO_CRC32C=y
# CONFIG_CRYPTO_CRC32C_INTEL is not set
# CONFIG_CRYPTO_GHASH is not set
# CONFIG_CRYPTO_MD4 is not set
CONFIG_CRYPTO_MD5=m
# CONFIG_CRYPTO_MICHAEL_MIC is not set
# CONFIG_CRYPTO_RMD128 is not set
# CONFIG_CRYPTO_RMD160 is not set
# CONFIG_CRYPTO_RMD256 is not set
# CONFIG_CRYPTO_RMD320 is not set
# CONFIG_CRYPTO_SHA1 is not set
# CONFIG_CRYPTO_SHA1_SSSE3 is not set
# CONFIG_CRYPTO_SHA256 is not set
# CONFIG_CRYPTO_SHA512 is not set
# CONFIG_CRYPTO_TGR192 is not set
# CONFIG_CRYPTO_WP512 is not set
# CONFIG_CRYPTO_GHASH_CLMUL_NI_INTEL is not set

#
# Ciphers
#
CONFIG_CRYPTO_AES=y
CONFIG_CRYPTO_AES_X86_64=m
# CONFIG_CRYPTO_AES_NI_INTEL is not set
# CONFIG_CRYPTO_ANUBIS is not set
# CONFIG_CRYPTO_ARC4 is not set
CONFIG_CRYPTO_BLOWFISH=m
CONFIG_CRYPTO_BLOWFISH_COMMON=m
# CONFIG_CRYPTO_BLOWFISH_X86_64 is not set
# CONFIG_CRYPTO_CAMELLIA is not set
# CONFIG_CRYPTO_CAMELLIA_X86_64 is not set
# CONFIG_CRYPTO_CAST5 is not set
# CONFIG_CRYPTO_CAST6 is not set
CONFIG_CRYPTO_DES=m
# CONFIG_CRYPTO_FCRYPT is not set
# CONFIG_CRYPTO_KHAZAD is not set
# CONFIG_CRYPTO_SALSA20 is not set
# CONFIG_CRYPTO_SALSA20_X86_64 is not set
CONFIG_CRYPTO_SEED=m
# CONFIG_CRYPTO_SERPENT is not set
# CONFIG_CRYPTO_SERPENT_SSE2_X86_64 is not set
# CONFIG_CRYPTO_TEA is not set
# CONFIG_CRYPTO_TWOFISH is not set
CONFIG_CRYPTO_TWOFISH_COMMON=m
CONFIG_CRYPTO_TWOFISH_X86_64=m
# CONFIG_CRYPTO_TWOFISH_X86_64_3WAY is not set

#
# Compression
#
# CONFIG_CRYPTO_DEFLATE is not set
CONFIG_CRYPTO_ZLIB=m
CONFIG_CRYPTO_LZO=m

#
# Random Number Generation
#
CONFIG_CRYPTO_ANSI_CPRNG=m
CONFIG_CRYPTO_USER_API=m
CONFIG_CRYPTO_USER_API_HASH=m
CONFIG_CRYPTO_USER_API_SKCIPHER=m
# CONFIG_CRYPTO_HW is not set
CONFIG_HAVE_KVM=y
# CONFIG_VIRTUALIZATION is not set
# CONFIG_BINARY_PRINTF is not set

#
# Library routines
#
CONFIG_BITREVERSE=y
CONFIG_GENERIC_FIND_FIRST_BIT=y
CONFIG_GENERIC_PCI_IOMAP=y
CONFIG_GENERIC_IOMAP=y
CONFIG_GENERIC_IO=y
# CONFIG_CRC_CCITT is not set
CONFIG_CRC16=y
# CONFIG_CRC_T10DIF is not set
CONFIG_CRC_ITU_T=m
CONFIG_CRC32=y
# CONFIG_CRC32_SELFTEST is not set
CONFIG_CRC32_SLICEBY8=y
# CONFIG_CRC32_SLICEBY4 is not set
# CONFIG_CRC32_SARWATE is not set
# CONFIG_CRC32_BIT is not set
# CONFIG_CRC7 is not set
# CONFIG_LIBCRC32C is not set
# CONFIG_CRC8 is not set
CONFIG_ZLIB_INFLATE=y
CONFIG_ZLIB_DEFLATE=m
CONFIG_LZO_COMPRESS=m
CONFIG_LZO_DECOMPRESS=y
CONFIG_XZ_DEC=y
CONFIG_XZ_DEC_X86=y
CONFIG_XZ_DEC_POWERPC=y
CONFIG_XZ_DEC_IA64=y
CONFIG_XZ_DEC_ARM=y
CONFIG_XZ_DEC_ARMTHUMB=y
CONFIG_XZ_DEC_SPARC=y
CONFIG_XZ_DEC_BCJ=y
# CONFIG_XZ_DEC_TEST is not set
CONFIG_DECOMPRESS_GZIP=y
CONFIG_DECOMPRESS_BZIP2=y
CONFIG_DECOMPRESS_LZMA=y
CONFIG_DECOMPRESS_XZ=y
CONFIG_DECOMPRESS_LZO=y
CONFIG_GENERIC_ALLOCATOR=y
CONFIG_HAS_IOMEM=y
CONFIG_HAS_IOPORT=y
CONFIG_HAS_DMA=y
CONFIG_CPU_RMAP=y
CONFIG_DQL=y
CONFIG_NLATTR=y
# CONFIG_AVERAGE is not set
# CONFIG_CORDIC is not set

------=_20120801183353_31106
Content-Type: text/plain; name="System.map-3.4.7"
Content-Disposition: attachment; filename="System.map-3.4.7"

0000000000000000 A VDSO32_PRELINK
0000000000000000 D __per_cpu_start
0000000000000000 D irq_stack_union
0000000000000000 A xen_irq_disable_direct_reloc
0000000000000000 A xen_save_fl_direct_reloc
0000000000000040 A VDSO32_vsyscall_eh_frame_size
00000000000001f0 A VDSO32_NOTE_MASK
0000000000000400 A VDSO32_sigreturn
0000000000000410 A VDSO32_rt_sigreturn
0000000000000420 A VDSO32_vsyscall
0000000000000430 A VDSO32_SYSENTER_RETURN
0000000000004000 D gdt_page
0000000000005000 d exception_stacks
000000000000b000 d tlb_vector_offset
000000000000b040 d cpu_loops_per_jiffy
000000000000b080 D xen_vcpu_info
000000000000b0c0 D xen_vcpu
000000000000b0c8 d idt_desc
000000000000b0d8 d xen_cr0_value
000000000000b0e0 D xen_mc_irq_flags
000000000000b100 d mc_buffer
000000000000bd10 D xen_current_cr3
000000000000bd18 D xen_cr3
000000000000bd40 d xen_runstate
000000000000bd80 d xen_clock_events
000000000000be40 d xen_runstate_snapshot
000000000000be70 d xen_residual_stolen
000000000000be78 d xen_residual_blocked
000000000000be80 d xen_resched_irq
000000000000be84 d xen_callfunc_irq
000000000000be88 d xen_debug_irq
000000000000be8c d xen_callfuncsingle_irq
000000000000be90 d lock_kicker_irq
000000000000be98 d lock_spinners
000000000000bec0 D old_rsp
000000000000bec8 D irq_regs
000000000000bed0 d update_debug_stack
000000000000bed8 d last_nmi_rip
000000000000bee0 d swallow_nmi
000000000000bef0 d nmi_stats
000000000000bf00 D vector_irq
000000000000c300 d cpu_devices
000000000000c580 D cpu_dr7
000000000000c5a0 d bp_per_reg
000000000000c5c0 d cpu_debugreg
000000000000c5e0 D cyc2ns_offset
000000000000c5e8 D cyc2ns
000000000000c5f0 d is_idle
000000000000c600 d ici_cpuid4_info
000000000000c608 d ici_index_kobject
000000000000c610 d ici_cache_kobject
000000000000c640 D debug_stack_usage
000000000000c660 D orig_ist
000000000000c698 D fpu_owner_task
000000000000c6a0 D irq_count
000000000000c6a8 D irq_stack_ptr
000000000000c6b0 D kernel_stack
000000000000c6c0 D current_task
000000000000c6c8 d debug_stack_addr
000000000000c6d0 d old_perf_sched
000000000000c6e0 D cpu_hw_events
000000000000d600 d pmc_prev_left
000000000000d800 D mce_device
000000000000d808 D mce_poll_count
000000000000d810 D mce_irq_work
000000000000d840 D injectm
000000000000d898 D mce_poll_banks
000000000000d8a0 D mce_exception_count
000000000000d8c0 d mces_seen
000000000000d920 d mce_ring
000000000000d9c0 d mce_work
000000000000d9e0 d mce_timer
000000000000da18 d mce_next_interval
000000000000da20 d bank_map
000000000000da40 d threshold_banks
000000000000da70 D cpu_llc_shared_map
000000000000da78 D cpu_core_map
000000000000da80 D cpu_sibling_map
000000000000da88 D cpu_llc_id
000000000000da8c D cpu_state
000000000000da90 d idle_thread_array
000000000000da98 D this_cpu_off
000000000000daa0 D cpu_number
000000000000dac0 D x86_bios_cpu_apicid
000000000000dac2 D x86_cpu_to_apicid
000000000000db00 d lapic_events
000000000000dbc0 d cpu_hpet_dev
000000000000dbc8 d paravirt_lazy_mode
000000000000dbcc D x86_cpu_to_node_map
000000000000dc00 D process_counts
000000000000dc20 d printk_pending
000000000000dc40 d printk_sched_buf
000000000000de40 D softirq_work_list
000000000000dee0 D ksoftirqd
000000000000def0 d tasklet_vec
000000000000df00 d tasklet_hi_vec
000000000000df10 d tvec_bases
000000000000df40 d global_cwq
000000000000e240 D hrtimer_bases
000000000000e340 D sd_llc_id
000000000000e348 D sd_llc
000000000000e360 D kernel_cpustat
000000000000e3c0 D kstat
000000000000e3f0 D load_balance_tmpmask
000000000000e3f8 d local_cpu_mask
000000000000e400 D tick_cpu_device
000000000000e420 d tick_cpu_sched
000000000000e500 d cpu_stopper
000000000000e520 d stop_cpus_work
000000000000e560 D rcu_dynticks
000000000000e580 D rcu_bh_data
000000000000e6a0 D rcu_sched_data
000000000000e7b0 d rcu_barrier_head
000000000000e7c0 d listener_array
000000000000e7f0 d taskstats_seqnum
000000000000e7f8 d irq_work_list
000000000000e800 d perf_cgroup_events
000000000000e804 d perf_branch_stack_events
000000000000e810 d rotation_list
000000000000e820 d perf_throttled_seq
000000000000e828 d perf_throttled_count
000000000000e840 d swevent_htable
000000000000e880 d callchain_recursion
000000000000e890 d nr_cpu_bp_pinned
000000000000e898 d nr_task_bp_pinned
000000000000e8a0 d nr_bp_flexible
000000000000e8c0 D numa_node
000000000000e8e0 d boot_pageset
000000000000e948 D dirty_throttle_leaks
000000000000e94c d bdp_ratelimits
000000000000e960 d lru_rotate_pvecs
000000000000e9e0 d activate_page_pvecs
000000000000ea60 d lru_add_pvecs
000000000000ece0 d lru_deactivate_pvecs
000000000000ed60 D vm_event_states
000000000000ef60 d vmstat_work
000000000000efc0 d vmap_block_queue
000000000000efe0 d slab_reap_work
000000000000f038 d slab_reap_node
000000000000f040 d memory_failure_cpu
000000000000f180 D files_lglock_lock
000000000000f184 d nr_dentry
000000000000f188 d nr_unused
000000000000f18c d nr_inodes
000000000000f190 d last_ino
000000000000f1a0 d fdtable_defer_list
000000000000f1d0 D vfsmount_lock_lock
000000000000f1e0 d bh_lrus
000000000000f220 d bh_accounting
000000000000f230 d blk_cpu_done
000000000000f240 d blk_cpu_iopoll
000000000000f260 d radix_tree_preloads
000000000000f2c0 d net_rand_state
000000000000f2e0 d cpu_evtchn_mask
000000000000f4e0 d virq_to_irq
000000000000f540 d ipi_to_irq
000000000000f550 d xed_nesting_count
000000000000f554 d current_word_idx
000000000000f558 d current_bit_idx
000000000000f560 D get_random_int_hash
000000000000f570 d trickle_count
000000000000f578 d cpu_sys_devices
000000000000f580 d cpufreq_cpu_data
000000000000f588 d cpufreq_policy_cpu
000000000000f5a0 d cpu_policy_rwsem
000000000000f5c0 d cpufreq_cpu_governor
000000000000f5d0 d cpufreq_stats_table
000000000000f5d8 d cpufreq_show_table
000000000000f5e0 D cpuidle_devices
000000000000f600 d ladder_devices
000000000000f6e0 d sockets_in_use
000000000000f6e4 d xmit_recursion
000000000000f700 d rt_cache_stat
000000000000f740 d ipv4_cookie_scratch
000000000000f800 D init_tss
0000000000011ac0 D irq_stat
0000000000011b00 D cpu_info
0000000000011bc0 D cpu_tlbstate
0000000000011c00 d gcwq_nr_running
0000000000011c40 D runqueues
0000000000012600 d sched_clock_data
0000000000012640 d call_single_queue
0000000000012680 d cfd_data
00000000000126c0 d csd_data
0000000000012700 D softnet_data
0000000000012840 D __per_cpu_end
0000000001000000 A phys_startup_64
ffffffff81000000 T _text
ffffffff81000000 T startup_64
ffffffff810000b7 t ident_complete
ffffffff81000100 T secondary_startup_64
ffffffff8100018a t bad_address
ffffffff81000190 T _stext
ffffffff81001000 T hypercall_page
ffffffff81002000 T do_one_initcall
ffffffff81002170 t match_dev_by_uuid
ffffffff810021a0 T name_to_dev_t
ffffffff810025b0 t native_read_msr_safe
ffffffff810025d0 t native_read_pmc
ffffffff810025f0 t native_read_cr4
ffffffff81002600 t native_read_cr4_safe
ffffffff81002610 t native_wbinvd
ffffffff81002620 t native_store_gdt
ffffffff81002630 t native_store_idt
ffffffff81002640 t xen_cpuid
ffffffff81002720 t xen_set_debugreg
ffffffff81002730 t xen_get_debugreg
ffffffff81002740 t xen_store_tr
ffffffff81002750 t xen_io_delay
ffffffff81002760 t xen_get_apic_id
ffffffff81002770 t xen_apic_read
ffffffff810027f0 t xen_apic_icr_read
ffffffff81002800 t xen_apic_wait_icr_idle
ffffffff81002810 t xen_safe_apic_wait_icr_idle
ffffffff81002820 t xen_write_cr4
ffffffff81002830 t xen_set_iopl_mask
ffffffff81002890 t xen_set_apic_id
ffffffff810028b0 t xen_apic_icr_write
ffffffff810028d0 t xen_apic_write
ffffffff810028f0 t xen_end_context_switch
ffffffff81002910 t xen_set_ldt
ffffffff810029d0 t xen_load_sp0
ffffffff81002a60 t xen_clts
ffffffff81002ae0 t xen_write_cr0
ffffffff81002b70 t set_aliased_prot
ffffffff81002c10 t xen_free_ldt
ffffffff81002c50 t xen_alloc_ldt
ffffffff81002c90 t load_TLS_descriptor
ffffffff81002d00 t xen_load_tls
ffffffff81002db0 t xen_load_gdt
ffffffff81002f00 t xen_patch
ffffffff810030b0 T xen_hvm_need_lapic
ffffffff810030e0 t xen_read_cr0
ffffffff81003100 t xen_reboot
ffffffff81003130 t xen_emergency_restart
ffffffff81003140 t xen_crash_shutdown
ffffffff81003150 t xen_machine_power_off
ffffffff81003170 t xen_machine_halt
ffffffff81003180 t xen_restart
ffffffff81003190 t xen_panic_event
ffffffff810031a0 t xen_load_gs_index
ffffffff810031c0 t xen_vcpu_setup
ffffffff810032b0 t cvt_gate_to_trap.part.17
ffffffff810033a0 t xen_convert_trap_info
ffffffff81003460 t xen_write_idt_entry
ffffffff81003530 t xen_load_idt
ffffffff810035a0 t xen_write_msr_safe
ffffffff81003670 t xen_write_gdt_entry
ffffffff810036c0 t xen_write_ldt_entry
ffffffff81003710 T xen_vcpu_restore
ffffffff810037b0 T xen_copy_trap_info
ffffffff810037d0 T xen_setup_shared_info
ffffffff81003840 T xen_setup_vcpu_info_placement
ffffffff810038d0 T xen_panic_handler_init
ffffffff810038f0 T xen_mc_flush
ffffffff81003a40 T __xen_mc_entry
ffffffff81003ae0 T xen_mc_extend_args
ffffffff81003b50 T xen_mc_callback
ffffffff81003bb0 t __raw_callee_save_xen_pte_val
ffffffff81003bce t __raw_callee_save_xen_pgd_val
ffffffff81003bec t __raw_callee_save_xen_make_pte
ffffffff81003c0a t __raw_callee_save_xen_make_pgd
ffffffff81003c28 t __raw_callee_save_xen_pmd_val
ffffffff81003c46 t __raw_callee_save_xen_make_pmd
ffffffff81003c64 t __raw_callee_save_xen_pud_val
ffffffff81003c82 t __raw_callee_save_xen_make_pud
ffffffff81003ca0 t __ptep_modify_prot_start
ffffffff81003cc0 t __ptep_modify_prot_commit
ffffffff81003cd0 t xen_pte_unlock
ffffffff81003ce0 t xen_write_cr2
ffffffff81003cf0 t xen_read_cr2
ffffffff81003d00 t xen_read_cr3
ffffffff81003d10 t set_current_cr3
ffffffff81003d20 t xen_exchange_memory
ffffffff81003dc0 t remap_area_mfn_pte_fn
ffffffff81003e90 t xen_get_user_pgd
ffffffff81003ee0 t xen_page_pinned
ffffffff81003f20 t __xen_pgd_walk
ffffffff810041d0 t xen_flush_tlb
ffffffff81004270 t xen_flush_tlb_single
ffffffff81004320 t xen_release_pte
ffffffff810044c0 t xen_release_pmd
ffffffff810045c0 t xen_release_pud
ffffffff810046c0 T xen_set_domain_pte
ffffffff810047e0 t xen_zap_pfn_range
ffffffff81004930 t xen_remap_exchanged_ptes
ffffffff81004a60 t xen_set_fixmap
ffffffff81004ba0 t xen_leave_lazy_mmu
ffffffff81004bc0 t xen_extend_mmu_update
ffffffff81004c40 t __xen_set_pgd_hyper
ffffffff81004cb0 t xen_set_pgd
ffffffff81004da0 t xen_batched_set_pte
ffffffff81004e90 t xen_set_pte
ffffffff81004ec0 t xen_set_pte_at
ffffffff81004f00 t xen_extend_mmuext_op
ffffffff81004f80 t xen_do_pin
ffffffff81004fd0 t xen_pgd_free
ffffffff81005000 t xen_pgd_alloc
ffffffff81005100 t xen_flush_tlb_others
ffffffff81005230 t xen_alloc_pte
ffffffff810053d0 t xen_alloc_pmd
ffffffff810054f0 t xen_alloc_pud
ffffffff81005610 t pte_pfn_to_mfn
ffffffff810056a0 t xen_make_pud
ffffffff810056b0 t xen_make_pmd
ffffffff810056c0 t xen_make_pgd
ffffffff810056d0 t xen_make_pte
ffffffff81005740 t pte_mfn_to_pfn
ffffffff81005830 t xen_pud_val
ffffffff81005840 t xen_pmd_val
ffffffff81005850 t xen_pgd_val
ffffffff81005860 t xen_pte_val
ffffffff81005890 T xen_remap_domain_mfn_range
ffffffff81005980 t xen_hvm_exit_mmap
ffffffff810059e0 T xen_destroy_contiguous_region
ffffffff81005b10 T xen_create_contiguous_region
ffffffff81005c40 t drop_other_mm_ref
ffffffff81005cd0 t xen_unpin_page
ffffffff81005e10 t __xen_pgd_unpin
ffffffff81005ef0 t xen_exit_mmap
ffffffff81006020 t xen_pin_page
ffffffff81006140 t __xen_pgd_pin
ffffffff81006270 t xen_dup_mmap
ffffffff810062c0 t xen_activate_mm
ffffffff81006310 t __xen_write_cr3
ffffffff81006400 t xen_write_cr3
ffffffff810064b0 T arbitrary_virt_to_machine
ffffffff81006540 t xen_set_pud_hyper
ffffffff810065c0 t xen_set_pud
ffffffff81006610 t xen_set_pmd_hyper
ffffffff81006690 t xen_set_pmd
ffffffff810066e0 T arbitrary_virt_to_mfn
ffffffff81006700 T make_lowmem_page_readonly
ffffffff81006740 T make_lowmem_page_readwrite
ffffffff81006780 T set_pte_mfn
ffffffff810067b0 T xen_ptep_modify_prot_start
ffffffff810067c0 T xen_ptep_modify_prot_commit
ffffffff81006890 T xen_set_pat
ffffffff810068c0 T xen_mm_pin_all
ffffffff81006980 T xen_mm_unpin_all
ffffffff81006a30 T xen_read_cr2_direct
ffffffff81006a40 t __raw_callee_save_xen_save_fl
ffffffff81006a5e t __raw_callee_save_xen_restore_fl
ffffffff81006a7c t __raw_callee_save_xen_irq_disable
ffffffff81006a9a t __raw_callee_save_xen_irq_enable
ffffffff81006ac0 t xen_save_fl
ffffffff81006ae0 t xen_irq_disable
ffffffff81006af0 t xen_safe_halt
ffffffff81006b10 t xen_halt
ffffffff81006b40 t xen_irq_enable
ffffffff81006b60 t xen_restore_fl
ffffffff81006b90 T xen_force_evtchn_callback
ffffffff81006ba0 t xen_read_wallclock
ffffffff81006bd0 t xen_get_wallclock
ffffffff81006bf0 t xen_tsc_khz
ffffffff81006c00 t xen_vcpuop_set_mode
ffffffff81006c80 t xen_timer_interrupt
ffffffff81006e20 T xen_clocksource_read
ffffffff81006e40 t xen_set_wallclock
ffffffff81006ef0 t xen_clocksource_get_cycles
ffffffff81006f00 t xen_timerop_set_mode
ffffffff81006f30 t xen_timerop_set_next_event
ffffffff81006f70 t xen_vcpuop_set_next_event
ffffffff81006fd0 T xen_vcpu_stolen
ffffffff81006ff0 T xen_setup_runstate_info
ffffffff81007030 T xen_setup_timer
ffffffff81007170 T xen_teardown_timer
ffffffff810071a0 T xen_setup_cpu_clockevents
ffffffff810071c0 t xen_hvm_setup_cpu_clockevents
ffffffff810071e0 T xen_timer_resume
ffffffff81007240 T xen_irq_enable_direct
ffffffff81007255 T xen_irq_enable_direct_reloc
ffffffff81007259 T xen_irq_enable_direct_end
ffffffff81007260 T xen_irq_disable_direct
ffffffff81007269 T xen_irq_disable_direct_end
ffffffff81007270 T xen_save_fl_direct
ffffffff8100727e T xen_save_fl_direct_end
ffffffff81007280 T xen_restore_fl_direct
ffffffff8100729b T xen_restore_fl_direct_reloc
ffffffff8100729f T xen_restore_fl_direct_end
ffffffff810072a0 t check_events
ffffffff810072c0 T xen_adjust_exception_frame
ffffffff810072d0 T xen_iret
ffffffff810072d3 T xen_iret_reloc
ffffffff810072d7 T xen_iret_end
ffffffff810072e0 T xen_sysexit
ffffffff810072ee T xen_sysexit_reloc
ffffffff810072f2 T xen_sysexit_end
ffffffff81007300 T xen_sysret64
ffffffff81007327 T xen_sysret64_reloc
ffffffff8100732b T xen_sysret64_end
ffffffff81007330 T xen_sysret32
ffffffff81007354 T xen_sysret32_reloc
ffffffff81007358 T xen_sysret32_end
ffffffff81007360 T xen_syscall_target
ffffffff81007380 T xen_syscall32_target
ffffffff810073a0 T xen_sysenter_target
ffffffff810073c0 t map_pte_fn
ffffffff81007420 t map_pte_fn_status
ffffffff81007480 t unmap_pte_fn
ffffffff810074b0 T arch_gnttab_map_shared
ffffffff81007520 T arch_gnttab_map_status
ffffffff81007590 T arch_gnttab_unmap
ffffffff810075b0 t xen_vcpu_notify_restore
ffffffff810075d0 T xen_arch_pre_suspend
ffffffff810077a0 T xen_arch_hvm_post_suspend
ffffffff81007800 T xen_arch_post_suspend
ffffffff810078d0 T xen_arch_resume
ffffffff810078f0 T xen_unplug_emulated_devices
ffffffff81007990 T get_phys_to_machine
ffffffff810079f0 t p2m_mid_mfn_init
ffffffff81007a60 T xen_setup_mfn_list_list
ffffffff81007ad0 T __set_phys_to_machine
ffffffff81007bc0 T set_phys_to_machine
ffffffff81007ef0 T m2p_remove_override
ffffffff810081b0 T m2p_add_override
ffffffff81008400 T m2p_find_override
ffffffff810084b0 T m2p_find_override_pfn
ffffffff810084f0 t xen_smp_cpus_done
ffffffff81008500 t xen_smp_send_reschedule
ffffffff81008510 t xen_send_IPI_mask
ffffffff81008560 t xen_smp_send_call_function_single_ipi
ffffffff81008590 t xen_hvm_cpu_die
ffffffff81008610 t xen_call_function_single_interrupt
ffffffff81008640 t xen_call_function_interrupt
ffffffff81008670 t xen_reschedule_interrupt
ffffffff81008690 t xen_cpu_die
ffffffff81008780 t xen_cpu_disable
ffffffff810087c0 t stop_self
ffffffff81008800 t xen_stop_other_cpus
ffffffff81008810 t xen_smp_send_call_function_ipi
ffffffff81008870 t xen_spin_is_locked
ffffffff81008880 t xen_spin_is_contended
ffffffff81008890 t xen_spin_trylock
ffffffff810088a0 t xen_spin_unlock
ffffffff810088c0 t xen_spin_lock
ffffffff81008920 t xen_spin_lock_flags
ffffffff81008990 t dummy_handler
ffffffff810089a0 T xen_uninit_lock_cpu
ffffffff810089c0 T set_personality_ia32
ffffffff81008a60 t start_thread_common.constprop.5
ffffffff81008b00 T __show_regs
ffffffff81008d80 T release_thread
ffffffff81008dc0 T prepare_to_copy
ffffffff81008dd0 T start_thread
ffffffff81008de0 T start_thread_ia32
ffffffff81008e20 T __switch_to
ffffffff81009230 T set_personality_64bit
ffffffff81009290 T get_wchan
ffffffff81009350 T do_arch_prctl
ffffffff810096b0 T copy_thread
ffffffff810098a0 T sys_arch_prctl
ffffffff810098c0 T KSTK_ESP
ffffffff810098f0 T restore_sigcontext
ffffffff81009a40 T setup_sigcontext
ffffffff81009b90 t do_signal
ffffffff8100a180 T sys_sigaltstack
ffffffff8100a1a0 T do_notify_resume
ffffffff8100a1f0 T signal_fault
ffffffff8100a2e0 T sys_rt_sigreturn
ffffffff8100a3b0 T math_state_restore
ffffffff8100a480 t do_trap
ffffffff8100a660 T do_divide_error
ffffffff8100a700 T do_overflow
ffffffff8100a790 T do_bounds
ffffffff8100a820 T do_invalid_op
ffffffff8100a8c0 T do_coprocessor_segment_overrun
ffffffff8100a950 T do_invalid_TSS
ffffffff8100a9e0 T do_segment_not_present
ffffffff8100aa70 T do_alignment_check
ffffffff8100ab10 T do_stack_segment
ffffffff8100abc0 T do_double_fault
ffffffff8100ac20 T do_general_protection
ffffffff8100ada0 T do_int3
ffffffff8100ae80 T sync_regs
ffffffff8100aee0 T do_debug
ffffffff8100b0a0 T math_error
ffffffff8100b340 T do_coprocessor_error
ffffffff8100b350 T do_simd_coprocessor_error
ffffffff8100b360 T do_spurious_interrupt_bug
ffffffff8100b380 W smp_thermal_interrupt
ffffffff8100b3a0 T do_device_not_available
ffffffff8100b3b0 T ack_bad_irq
ffffffff8100b3f0 T arch_show_interrupts
ffffffff8100bad0 T arch_irq_stat_cpu
ffffffff8100bb60 T arch_irq_stat
ffffffff8100bb80 T do_IRQ
ffffffff8100bc50 T smp_x86_platform_ipi
ffffffff8100bcb0 T fixup_irqs
ffffffff8100bf20 T handle_irq
ffffffff8100bf40 T do_softirq
ffffffff8100bfe0 T dump_trace
ffffffff8100c250 T show_stack_log_lvl
ffffffff8100c3e0 T show_registers
ffffffff8100c630 T is_valid_bugaddr
ffffffff8100c650 t timer_interrupt
ffffffff8100c670 T profile_pc
ffffffff8100c6e0 T sys_ioperm
ffffffff8100c8c0 T sys_iopl
ffffffff8100c950 t flush_ldt
ffffffff8100c990 t alloc_ldt.part.3
ffffffff8100cb80 t write_ldt
ffffffff8100cd70 T init_new_context
ffffffff8100ce60 T destroy_context
ffffffff8100cee0 T sys_modify_ldt
ffffffff8100d060 t print_trace_stack
ffffffff8100d080 T oops_begin
ffffffff8100d130 T print_context_stack_bp
ffffffff8100d1d0 T print_context_stack
ffffffff8100d2a0 T printk_address
ffffffff8100d2d0 t print_trace_address
ffffffff8100d310 T show_trace_log_lvl
ffffffff8100d380 T show_trace
ffffffff8100d390 T show_stack
ffffffff8100d3b0 T oops_end
ffffffff8100d460 T __die
ffffffff8100d550 T die
ffffffff8100d5e0 T unregister_nmi_handler
ffffffff8100d6f0 T register_nmi_handler
ffffffff8100d880 T do_nmi
ffffffff8100dc30 T stop_nmi
ffffffff8100dc40 T restart_nmi
ffffffff8100dc50 T local_touch_nmi
ffffffff8100dc60 T default_cpu_present_to_apicid
ffffffff8100dca0 T default_check_phys_apicid_present
ffffffff8100dcb0 t is_ISA_range
ffffffff8100dcd0 t default_get_nmi_reason
ffffffff8100dce0 T iommu_shutdown_noop
ffffffff8100dcf0 T wallclock_init_noop
ffffffff8100dd00 t default_nmi_init
ffffffff8100dd10 t default_i8042_detect
ffffffff8100dd20 t i8259A_suspend
ffffffff8100dd40 t i8259A_shutdown
ffffffff8100dd50 t legacy_pic_noop
ffffffff8100dd60 t legacy_pic_uint_noop
ffffffff8100dd70 t legacy_pic_int_noop
ffffffff8100dd80 t legacy_pic_irq_pending_noop
ffffffff8100dd90 t make_8259A_irq
ffffffff8100ddd0 t i8259A_irq_pending
ffffffff8100de40 t unmask_8259A
ffffffff8100de80 t mask_8259A
ffffffff8100deb0 t unmask_8259A_irq
ffffffff8100df10 t enable_8259A_irq
ffffffff8100df20 t mask_8259A_irq
ffffffff8100df80 t disable_8259A_irq
ffffffff8100df90 t init_8259A
ffffffff8100e0b0 t i8259A_resume
ffffffff8100e0e0 t mask_and_ack_8259A
ffffffff8100e1e0 T vector_used_by_percpu_irq
ffffffff8100e250 T setup_vector_irq
ffffffff8100e260 T smp_irq_work_interrupt
ffffffff8100e2a0 T arch_irq_work_raise
ffffffff8100e2e0 t find_oprom
ffffffff8100e670 T pci_biosrom_size
ffffffff8100e690 T pci_unmap_biosrom
ffffffff8100e6a0 T pci_map_biosrom
ffffffff8100e6d0 T align_addr
ffffffff8100e740 T sys_mmap
ffffffff8100e760 T arch_get_unmapped_area
ffffffff8100ea50 T arch_get_unmapped_area_topdown
ffffffff8100ed30 t write_ok_or_segv
ffffffff8100ed60 t warn_bad_vsyscall
ffffffff8100ee20 T update_vsyscall_tz
ffffffff8100ee30 T update_vsyscall
ffffffff8100ef00 T emulate_vsyscall
ffffffff8100f1c0 T e820_any_mapped
ffffffff8100f220 T dma_supported
ffffffff8100f2d0 T dma_set_mask
ffffffff8100f330 T dma_generic_alloc_coherent
ffffffff8100f480 t force_disable_hpet_msi
ffffffff8100f490 t old_ich_force_enable_hpet
ffffffff8100f5c0 t old_ich_force_enable_hpet_user
ffffffff8100f5e0 t nvidia_force_enable_hpet
ffffffff8100f690 t vt8237_force_enable_hpet
ffffffff8100f7d0 t ich_force_enable_hpet
ffffffff8100f970 t ati_force_enable_hpet
ffffffff8100fb60 T force_hpet_resume
ffffffff8100fd30 T arch_unregister_cpu
ffffffff8100fd50 t add_nops
ffffffff8100fda0 T alternatives_smp_module_del
ffffffff8100fe90 T alternatives_text_reserved
ffffffff8100ff20 T text_poke_early
ffffffff8100ff80 T apply_paravirt
ffffffff81010040 T apply_alternatives
ffffffff810101c0 T text_poke
ffffffff810103d0 t stop_machine_text_poke
ffffffff81010460 t alternatives_smp_unlock
ffffffff81010510 T alternatives_smp_module_add
ffffffff810106b0 T alternatives_smp_switch
ffffffff81010890 T text_poke_smp
ffffffff810108f0 T text_poke_smp_batch
ffffffff81010930 t nommu_sync_single_for_device
ffffffff81010940 t nommu_sync_sg_for_device
ffffffff81010950 t nommu_free_coherent
ffffffff81010970 t check_addr
ffffffff810109c0 t nommu_map_sg
ffffffff81010aa0 t nommu_map_page
ffffffff81010b20 T aout_dump_debugregs
ffffffff81010bf0 T hw_breakpoint_restore
ffffffff81010c80 T encode_dr7
ffffffff81010cb0 T decode_dr7
ffffffff81010cf0 T arch_install_hw_breakpoint
ffffffff81010de0 T arch_uninstall_hw_breakpoint
ffffffff81010ea0 T arch_check_bp_in_kernelspace
ffffffff81010f70 T arch_bp_generic_fields
ffffffff81011020 T arch_validate_hwbkpt_settings
ffffffff81011120 T flush_ptrace_hw_breakpoint
ffffffff81011160 T hw_breakpoint_exceptions_notify
ffffffff810112c0 T hw_breakpoint_pmu_read
ffffffff810112d0 T check_tsc_unstable
ffffffff810112e0 T recalibrate_cpu_khz
ffffffff810112f0 t read_tsc
ffffffff81011310 t resume_tsc
ffffffff81011320 t tsc_read_refs
ffffffff810113b0 t tsc_refine_calibration_work
ffffffff81011560 t set_cyc2ns_scale
ffffffff81011650 T mark_tsc_unstable
ffffffff810116c0 t time_cpufreq_notifier
ffffffff810117c0 T native_sched_clock
ffffffff81011830 T sched_clock
ffffffff81011840 T native_calibrate_tsc
ffffffff81011de0 T tsc_save_sched_clock_state
ffffffff81011e00 T tsc_restore_sched_clock_state
ffffffff81011ea0 T native_io_delay
ffffffff81011ef0 T rtc_cmos_read
ffffffff81011f00 T rtc_cmos_write
ffffffff81011f10 T native_read_tsc
ffffffff81011f20 T mach_set_rtc_mmss
ffffffff810120d0 T mach_get_cmos_time
ffffffff81012270 T update_persistent_clock
ffffffff81012290 T read_persistent_clock
ffffffff810122b0 T arch_remove_reservations
ffffffff810123e0 t hard_disable_TSC
ffffffff81012400 t hard_enable_TSC
ffffffff81012420 t do_nothing
ffffffff81012430 T default_idle
ffffffff81012470 t mwait_idle
ffffffff810124f0 t poll_idle
ffffffff81012520 t amd_e400_idle
ffffffff81012630 T cpu_idle_wait
ffffffff81012650 T kernel_thread
ffffffff810126e0 T idle_notifier_unregister
ffffffff810126f0 T idle_notifier_register
ffffffff81012700 T arch_dup_task_struct
ffffffff810127a0 T free_thread_xstate
ffffffff810127d0 T free_thread_info
ffffffff810127f0 T arch_task_cache_init
ffffffff81012820 T exit_thread
ffffffff810128c0 T show_regs
ffffffff810128e0 T show_regs_common
ffffffff810129f0 T flush_thread
ffffffff81012a80 T disable_TSC
ffffffff81012ab0 T get_tsc_mode
ffffffff81012ae0 T set_tsc_mode
ffffffff81012b30 T __switch_to_xtra
ffffffff81012c90 T sys_fork
ffffffff81012cc0 T sys_vfork
ffffffff81012cf0 T sys_clone
ffffffff81012d10 T sys_execve
ffffffff81012d80 T enter_idle
ffffffff81012da0 T exit_idle
ffffffff81012de0 T cpu_idle
ffffffff81012ea0 T set_pm_idle_to_default
ffffffff81012ec0 T stop_this_cpu
ffffffff81012ef0 T mwait_usable
ffffffff81012f50 T amd_e400_remove_cpu
ffffffff81012f60 T arch_align_stack
ffffffff81012fc0 T arch_randomize_brk
ffffffff81012ff0 T irq_fpu_usable
ffffffff81013050 T kernel_fpu_end
ffffffff81013070 t convert_to_fxsr
ffffffff81013110 t restore_i387_fxsave
ffffffff810131b0 t convert_from_fxsr
ffffffff81013340 t save_i387_fxsave
ffffffff810133f0 T kernel_fpu_begin
ffffffff810134b0 T fpu_finit
ffffffff810134e0 T unlazy_fpu
ffffffff81013570 T init_fpu
ffffffff81013610 T fpregs_active
ffffffff81013620 T xfpregs_active
ffffffff81013630 T xfpregs_get
ffffffff810136e0 T xfpregs_set
ffffffff810137c0 T xstateregs_get
ffffffff810138d0 T xstateregs_set
ffffffff810139d0 T fpregs_get
ffffffff81013ad0 T dump_fpu
ffffffff81013b10 T fpregs_set
ffffffff81013c40 T save_i387_xstate_ia32
ffffffff81013d50 T restore_i387_xstate_ia32
ffffffff81013f50 t xstate_enable
ffffffff81013f90 T __sanitize_i387_state
ffffffff810141e0 T check_for_xstate
ffffffff81014260 T save_i387_xstate
ffffffff81014460 T restore_i387_xstate
ffffffff81014620 t set_flags
ffffffff81014670 t ptrace_triggered
ffffffff810146c0 t ioperm_active
ffffffff810146d0 t set_segment_reg
ffffffff81014910 t putreg
ffffffff81014a40 t ioperm_get
ffffffff81014ab0 t ptrace_get_debugreg
ffffffff81014b40 t get_segment_reg
ffffffff81014c10 t getreg
ffffffff81014d20 t getreg32
ffffffff81014f60 t genregs_get
ffffffff81015010 t genregs_set
ffffffff810150b0 t genregs32_get
ffffffff81015170 t ptrace_modify_breakpoint.isra.13
ffffffff810151f0 t ptrace_set_debugreg
ffffffff810154d0 t putreg32
ffffffff810156e0 t genregs32_set
ffffffff81015780 T regs_query_register_offset
ffffffff810157e0 T regs_query_register_name
ffffffff81015820 T ptrace_disable
ffffffff81015840 T arch_ptrace
ffffffff81015b60 T compat_arch_ptrace
ffffffff81015d60 T update_regset_xstate_info
ffffffff81015d80 T task_user_regset_view
ffffffff81015db0 T user_single_step_siginfo
ffffffff81015e80 T send_sigtrap
ffffffff81015ef0 T syscall_trace_enter
ffffffff81016090 T syscall_trace_leave
ffffffff810161b0 t set_tls_desc
ffffffff81016330 t fill_user_desc
ffffffff81016400 T do_set_thread_area
ffffffff810164e0 T sys_set_thread_area
ffffffff81016500 T do_get_thread_area
ffffffff81016570 T sys_get_thread_area
ffffffff81016590 T regset_tls_active
ffffffff810165c0 T regset_tls_get
ffffffff810166a0 T regset_tls_set
ffffffff81016730 T convert_ip_to_linear
ffffffff810167f0 t enable_step
ffffffff81016a00 T user_enable_single_step
ffffffff81016a10 T user_enable_block_step
ffffffff81016a20 T user_disable_single_step
ffffffff81016ac0 t i8237A_resume
ffffffff81016b80 t show_shared_cpu_map
ffffffff81016b90 t show_shared_cpu_list
ffffffff81016ba0 t show
ffffffff81016bf0 t store
ffffffff81016c40 t show_shared_cpu_map_func
ffffffff81016c90 t show_size
ffffffff81016cc0 t show_number_of_sets
ffffffff81016cf0 t show_ways_of_associativity
ffffffff81016d20 t show_physical_line_partition
ffffffff81016d50 t show_coherency_line_size
ffffffff81016d80 t show_level
ffffffff81016db0 t store_subcaches
ffffffff81016e60 t show_subcaches
ffffffff81016ea0 t show_type
ffffffff81016f20 T amd_get_l3_disable_slot
ffffffff81016f70 t show_cache_disable.isra.7
ffffffff81016fd0 t show_cache_disable_1
ffffffff81016fe0 t show_cache_disable_0
ffffffff81016ff0 T amd_set_l3_disable_slot
ffffffff810170f0 t store_cache_disable
ffffffff810171e0 t store_cache_disable_1
ffffffff810171f0 t store_cache_disable_0
ffffffff81017200 t c_stop
ffffffff81017210 t show_cpuinfo
ffffffff810175d0 t c_start
ffffffff81017640 t c_next
ffffffff81017650 T load_percpu_segment
ffffffff81017680 T switch_to_new_gdt
ffffffff810176c0 T syscall_init
ffffffff81017740 T is_debug_stack
ffffffff810177a0 T debug_stack_set_zero
ffffffff810177b0 T debug_stack_reset
ffffffff810177c0 t vmware_get_tsc_khz
ffffffff81017880 T arch_scale_freq_power
ffffffff81017980 T arch_scale_smt_power
ffffffff810179a0 t read_hv_clock
ffffffff810179c0 T x86_match_cpu
ffffffff81017a80 T arch_print_cpu_modalias
ffffffff81017b50 T arch_cpu_uevent
ffffffff81017bc0 T amd_get_nb_id
ffffffff81017be0 T cpu_has_amd_erratum
ffffffff81017d00 t x86_pmu_extra_regs
ffffffff81017da0 t x86_pmu_disable
ffffffff81017df0 t collect_events
ffffffff81017e90 t x86_pmu_read
ffffffff81017ea0 t x86_pmu_event_idx
ffffffff81017ee0 t x86_pmu_flush_branch_stack
ffffffff81017f00 t backtrace_stack
ffffffff81017f10 t backtrace_address
ffffffff81017f30 T perf_get_x86_pmu_capability
ffffffff81017f70 t set_attr_rdpmc
ffffffff81017fc0 t get_attr_rdpmc
ffffffff81017ff0 t x86_pmu_cancel_txn
ffffffff81018020 t x86_pmu_commit_txn
ffffffff810180e0 t x86_pmu_start_txn
ffffffff81018110 t x86_pmu_add
ffffffff81018250 t allocate_fake_cpuc
ffffffff810182e0 t x86_pmu_event_init
ffffffff81018610 t perf_event_nmi_handler
ffffffff81018630 t change_rdpmc
ffffffff81018690 t hw_perf_event_destroy
ffffffff81018700 T x86_perf_event_update
ffffffff810187c0 T x86_pmu_stop
ffffffff81018890 t x86_pmu_del
ffffffff81018980 T x86_setup_perfctr
ffffffff81018b10 T x86_pmu_hw_config
ffffffff81018c40 T x86_pmu_disable_all
ffffffff81018ce0 T x86_pmu_enable_all
ffffffff81018da0 T x86_schedule_events
ffffffff81019290 T x86_perf_event_set_period
ffffffff810193e0 t x86_pmu_start
ffffffff81019510 T x86_pmu_enable_event
ffffffff810195a0 T perf_event_print_debug
ffffffff81019810 T x86_pmu_handle_irq
ffffffff81019930 T perf_events_lapic_init
ffffffff81019970 t x86_pmu_enable
ffffffff81019be0 T arch_perf_update_userpage
ffffffff81019c50 T perf_callchain_kernel
ffffffff81019cc0 T perf_callchain_user
ffffffff81019e40 T perf_instruction_pointer
ffffffff81019e70 T perf_misc_flags
ffffffff81019ed0 t x86_pmu_disable_event
ffffffff81019ef0 t amd_pmu_event_map
ffffffff81019f00 t amd_put_event_constraints
ffffffff81019f80 t amd_get_event_constraints
ffffffff8101a070 T amd_pmu_disable_virt
ffffffff8101a0b0 T amd_pmu_enable_virt
ffffffff8101a0e0 t cmask_show
ffffffff8101a100 t inv_show
ffffffff8101a120 t edge_show
ffffffff8101a140 t umask_show
ffffffff8101a160 t event_show
ffffffff8101a190 t amd_pmu_cpu_dead
ffffffff8101a1e0 t amd_get_event_constraints_f15h
ffffffff8101a400 t amd_pmu_cpu_prepare
ffffffff8101a500 t amd_pmu_cpu_starting
ffffffff8101a630 t amd_pmu_hw_config
ffffffff8101a6c0 t p6_pmu_event_map
ffffffff8101a6d0 t p6_pmu_disable_all
ffffffff8101a720 t p6_pmu_enable_all
ffffffff8101a770 t p6_pmu_disable_event
ffffffff8101a7b0 t p6_pmu_enable_event
ffffffff8101a800 t cmask_show
ffffffff8101a820 t inv_show
ffffffff8101a840 t pc_show
ffffffff8101a860 t edge_show
ffffffff8101a880 t umask_show
ffffffff8101a8a0 t event_show
ffffffff8101a8c0 t p4_pmu_event_map
ffffffff8101a910 t p4_pmu_disable_event
ffffffff8101a940 t ht_show
ffffffff8101a960 t escr_show
ffffffff8101a980 t cccr_show
ffffffff8101a9a0 t p4_pmu_disable_all
ffffffff8101aa20 t p4_pmu_enable_event
ffffffff8101aba0 t p4_pmu_enable_all
ffffffff8101ac10 t p4_pmu_handle_irq
ffffffff8101ae00 t p4_pmu_schedule_events
ffffffff8101b240 t p4_hw_config
ffffffff8101b540 t __intel_pmu_lbr_disable
ffffffff8101b590 t branch_type
ffffffff8101b7c0 T intel_pmu_lbr_reset
ffffffff8101b870 T intel_pmu_lbr_enable
ffffffff8101b8e0 T intel_pmu_lbr_disable
ffffffff8101b960 T intel_pmu_lbr_enable_all
ffffffff8101ba10 T intel_pmu_lbr_disable_all
ffffffff8101ba40 T intel_pmu_lbr_read
ffffffff8101bdf0 T intel_pmu_setup_lbr_filter
ffffffff8101beb0 T intel_pmu_lbr_init_core
ffffffff8101bef0 T intel_pmu_lbr_init_nhm
ffffffff8101bf40 T intel_pmu_lbr_init_snb
ffffffff8101bf90 T intel_pmu_lbr_init_atom
ffffffff8101bff0 t release_bts_buffer
ffffffff8101c040 t release_pebs_buffer
ffffffff8101c090 t release_ds_buffer
ffffffff8101c0d0 t intel_pmu_pebs_fixup_ip.isra.2
ffffffff8101c230 t __intel_pmu_pebs_event
ffffffff8101c3a0 t intel_pmu_drain_pebs_nhm
ffffffff8101c530 t intel_pmu_drain_pebs_core
ffffffff8101c660 T init_debug_store_on_cpu
ffffffff8101c6a0 T fini_debug_store_on_cpu
ffffffff8101c6e0 T release_ds_buffers
ffffffff8101c790 T reserve_ds_buffers
ffffffff8101caf0 T intel_pmu_enable_bts
ffffffff8101cb70 T intel_pmu_disable_bts
ffffffff8101cbe0 T intel_pmu_drain_bts_buffer
ffffffff8101cd70 T intel_pebs_constraints
ffffffff8101cdf0 T intel_pmu_pebs_enable
ffffffff8101ce30 T intel_pmu_pebs_disable
ffffffff8101ce90 T intel_pmu_pebs_enable_all
ffffffff8101cec0 T intel_pmu_pebs_disable_all
ffffffff8101cef0 T intel_ds_init
ffffffff8101cfd0 t x86_pmu_disable_event
ffffffff8101cff0 t intel_pmu_event_map
ffffffff8101d000 T perf_guest_get_msrs
ffffffff8101d020 t intel_guest_get_msrs
ffffffff8101d080 t offcore_rsp_show
ffffffff8101d0a0 t any_show
ffffffff8101d0c0 t cmask_show
ffffffff8101d0e0 t inv_show
ffffffff8101d100 t pc_show
ffffffff8101d120 t edge_show
ffffffff8101d140 t umask_show
ffffffff8101d160 t event_show
ffffffff8101d180 t intel_pmu_flush_branch_stack
ffffffff8101d1a0 t intel_pmu_cpu_dying
ffffffff8101d210 t intel_pmu_cpu_starting
ffffffff8101d320 t intel_pmu_disable_event
ffffffff8101d450 t intel_pmu_enable_event
ffffffff8101d660 t intel_pmu_enable_all
ffffffff8101d700 t intel_pmu_nhm_enable_all
ffffffff8101d8a0 t intel_pmu_disable_all
ffffffff8101d900 t core_guest_get_msrs
ffffffff8101da00 t core_pmu_enable_all
ffffffff8101dad0 t core_pmu_enable_event
ffffffff8101daf0 t intel_put_event_constraints
ffffffff8101db60 t intel_pmu_hw_config
ffffffff8101dc70 t __intel_shared_reg_get_constraints.isra.7
ffffffff8101dd90 t intel_get_event_constraints
ffffffff8101df10 T intel_pmu_save_and_restart
ffffffff8101df30 t intel_pmu_handle_irq
ffffffff8101e210 T x86_get_event_constraints
ffffffff8101e280 T allocate_shared_regs
ffffffff8101e2d0 t intel_pmu_cpu_prepare
ffffffff8101e340 t collect_tscs
ffffffff8101e380 T register_mce_write_callback
ffffffff8101e390 T mce_chrdev_write
ffffffff8101e3b0 t mce_disable_error_reporting
ffffffff8101e410 t mce_syscore_suspend
ffffffff8101e420 t mce_syscore_shutdown
ffffffff8101e430 t mce_chrdev_release
ffffffff8101e470 t mce_chrdev_ioctl
ffffffff8101e530 t mce_device_release
ffffffff8101e540 t __mce_read_apei
ffffffff8101e5d0 t mce_chrdev_read
ffffffff8101e880 t set_cmci_disabled
ffffffff8101e910 t __mcheck_cpu_init_timer
ffffffff8101e9d0 t set_trigger
ffffffff8101ea60 t show_trigger
ffffffff8101ead0 t show_bank
ffffffff8101eaf0 t mce_timer_delete_all
ffffffff8101eb50 t mce_restart
ffffffff8101eb70 t set_bank
ffffffff8101ebd0 t store_int_with_restart
ffffffff8101ebf0 t set_ignore_ce
ffffffff8101ec90 t unexpected_machine_check
ffffffff8101ecb0 t mce_schedule_work
ffffffff8101ecf0 t mce_process_work
ffffffff8101ed40 t mce_do_trigger
ffffffff8101ed90 t print_mce
ffffffff8101ef00 T mce_unregister_decode_chain
ffffffff8101ef10 T mce_register_decode_chain
ffffffff8101f020 t mce_chrdev_open
ffffffff8101f0c0 t mce_chrdev_poll
ffffffff8101f120 T mce_notify_irq
ffffffff8101f1b0 t mce_irq_work_cb
ffffffff8101f1d0 t mce_panic
ffffffff8101f380 t mce_timed_out
ffffffff8101f400 t mce_rdmsrl
ffffffff8101f500 t mce_read_aux
ffffffff8101f590 t mce_wrmsrl.constprop.25
ffffffff8101f630 T mce_setup
ffffffff8101f780 T mce_log
ffffffff8101f830 T do_machine_check
ffffffff810200d0 T machine_check_poll
ffffffff81020230 t __mcheck_cpu_init_generic
ffffffff81020310 t mce_syscore_resume
ffffffff81020350 T mce_available
ffffffff81020380 t mce_enable_ce
ffffffff810203b0 t mce_start_timer
ffffffff810204d0 t mce_disable_cmci
ffffffff810204f0 t mce_cpu_restart
ffffffff81020530 T mce_notify_process
ffffffff81020600 T mce_severity
ffffffff810206d0 t local_error_count_handler
ffffffff81020720 t show
ffffffff81020740 t store
ffffffff81020760 t show_threshold_limit
ffffffff81020790 t show_interrupt_enable
ffffffff810207c0 t store_error_count
ffffffff81020810 t show_error_count
ffffffff81020860 t threshold_restart_bank
ffffffff810209e0 t store_threshold_limit
ffffffff81020a80 t store_interrupt_enable
ffffffff81020b10 t amd_threshold_interrupt
ffffffff81020c90 T mce_amd_feature_init
ffffffff81020ee0 t default_threshold_interrupt
ffffffff81020f00 T smp_threshold_interrupt
ffffffff81020f40 T apei_mce_report_mem_error
ffffffff81020f90 T apei_write_mce
ffffffff81021170 T apei_read_mce
ffffffff81021370 T apei_check_mce
ffffffff81021380 T apei_clear_mce
ffffffff81021390 t mtrr_save
ffffffff810213e0 t set_mtrr
ffffffff81021420 t mtrr_restore
ffffffff81021480 t mtrr_rendezvous_handler
ffffffff810214e0 T set_mtrr_ops
ffffffff81021500 T mtrr_add_page
ffffffff81021940 T mtrr_add
ffffffff810219b0 T mtrr_del_page
ffffffff81021b30 T mtrr_del
ffffffff81021ba0 T mtrr_ap_init
ffffffff81021c10 T mtrr_save_state
ffffffff81021c30 T set_mtrr_aps_delayed_init
ffffffff81021c50 T mtrr_aps_init
ffffffff81021ca0 T mtrr_bp_restore
ffffffff81021cc0 t mtrr_close
ffffffff81021d60 t mtrr_open
ffffffff81021da0 t mtrr_seq_show
ffffffff81021ea0 t mtrr_write
ffffffff810220f0 t mtrr_file_add.isra.1
ffffffff810221c0 t mtrr_file_del.isra.2
ffffffff81022220 t mtrr_ioctl
ffffffff81022870 T mtrr_attrib_to_str
ffffffff81022890 T generic_get_free_region
ffffffff81022910 t generic_have_wrcomb
ffffffff81022930 T generic_validate_add_page
ffffffff81022a10 t generic_get_mtrr
ffffffff81022b40 t __mtrr_type_lookup.part.2
ffffffff81022dd0 T mtrr_type_lookup
ffffffff81022ec0 T fill_mtrr_var_range
ffffffff81022ee0 T mtrr_wrmsr
ffffffff81022f50 t prepare_set
ffffffff81022ff0 t post_set
ffffffff81023060 t generic_set_mtrr
ffffffff81023180 t generic_set_all
ffffffff81023460 t get_fixed_ranges.constprop.5
ffffffff81023590 T mtrr_save_fixed_ranges
ffffffff810235b0 T positive_have_wrcomb
ffffffff810235c0 T reserve_perfctr_nmi
ffffffff81023640 T reserve_evntsel_nmi
ffffffff810236c0 T release_perfctr_nmi
ffffffff81023730 T release_evntsel_nmi
ffffffff810237a0 T avail_to_resrv_perfctr_nmi_bit
ffffffff810237c0 t perf_ibs_init
ffffffff810237e0 t perf_ibs_add
ffffffff810237f0 t perf_ibs_del
ffffffff81023800 T get_ibs_caps
ffffffff81023810 t setup_APIC_ibs
ffffffff81023870 T acpi_register_ioapic
ffffffff81023880 T acpi_unregister_ioapic
ffffffff81023890 t acpi_register_gsi_pic
ffffffff810238b0 T acpi_unmap_lsapic
ffffffff810238e0 t gsi_to_irq
ffffffff81023910 T acpi_gsi_to_irq
ffffffff81023960 T acpi_isa_irq_to_gsi
ffffffff81023980 T acpi_register_gsi
ffffffff810239a0 T mp_register_gsi
ffffffff81023b80 t acpi_register_gsi_ioapic
ffffffff81023b90 T __acpi_acquire_global_lock
ffffffff81023bc0 T __acpi_release_global_lock
ffffffff81023be0 T acpi_processor_power_init_bm_check
ffffffff81023c40 T acpi_processor_ffh_cstate_probe
ffffffff81023d10 t acpi_processor_ffh_cstate_probe_cpu
ffffffff81023dd0 T mwait_idle_with_hints
ffffffff81023e40 T acpi_processor_ffh_cstate_enter
ffffffff81023e70 t native_machine_power_off
ffffffff81023eb0 t crash_nmi_callback
ffffffff81023f00 t native_machine_halt
ffffffff81023f20 t native_machine_restart
ffffffff81023f60 T native_machine_shutdown
ffffffff81023fe0 t vmxoff_nmi
ffffffff81024050 W mach_reboot_fixups
ffffffff81024060 T machine_power_off
ffffffff81024070 T machine_shutdown
ffffffff81024080 T machine_emergency_restart
ffffffff810240a0 T machine_restart
ffffffff810240b0 T machine_halt
ffffffff810240c0 T nmi_shootdown_cpus
ffffffff81024160 t native_machine_emergency_restart
ffffffff81024370 T native_send_call_func_single_ipi
ffffffff810243a0 T native_send_call_func_ipi
ffffffff81024410 t smp_stop_nmi_callback
ffffffff81024440 t native_smp_send_reschedule
ffffffff810244a0 t native_nmi_stop_other_cpus
ffffffff810245a0 t native_irq_stop_other_cpus
ffffffff81024630 T smp_reboot_interrupt
ffffffff81024660 T smp_reschedule_interrupt
ffffffff81024690 T smp_call_function_interrupt
ffffffff810246d0 T smp_call_function_single_interrupt
ffffffff81024710 T cpu_hotplug_driver_lock
ffffffff81024720 T cpu_hotplug_driver_unlock
ffffffff81024730 T arch_cpu_probe
ffffffff81024740 T arch_cpu_release
ffffffff81024750 T cpu_coregroup_mask
ffffffff810247b0 T __inquire_remote_apic
ffffffff81024900 T arch_disable_smp_support
ffffffff81024910 T arch_disable_nonboot_cpus_begin
ffffffff81024920 T arch_disable_nonboot_cpus_end
ffffffff81024930 T arch_enable_nonboot_cpus_begin
ffffffff81024940 T arch_enable_nonboot_cpus_end
ffffffff81024950 T cpu_disable_common
ffffffff81024ab0 T native_cpu_disable
ffffffff81024ae0 T native_cpu_die
ffffffff81024bd0 T play_dead_common
ffffffff81024c20 T native_play_dead
ffffffff81024d20 t lapic_next_event
ffffffff81024d40 t lapic_timer_broadcast
ffffffff81024d60 T setup_APIC_eilvt
ffffffff81024ef0 t __setup_APIC_LVTT
ffffffff81024f80 t lapic_timer_setup
ffffffff81025010 t lapic_resume
ffffffff81025260 T native_apic_wait_icr_idle
ffffffff81025290 T native_safe_apic_wait_icr_idle
ffffffff810252e0 T native_apic_icr_write
ffffffff81025310 T native_apic_icr_read
ffffffff81025350 T lapic_get_maxlvt
ffffffff81025380 T smp_apic_timer_interrupt
ffffffff81025420 T setup_profiling_timer
ffffffff81025430 T clear_local_APIC
ffffffff81025650 T disable_local_APIC
ffffffff810256b0 t lapic_suspend
ffffffff81025840 T lapic_shutdown
ffffffff81025890 T smp_spurious_interrupt
ffffffff81025900 T smp_error_interrupt
ffffffff81025a00 T disconnect_bsp_APIC
ffffffff81025ab0 T hard_smp_processor_id
ffffffff81025ae0 T default_init_apic_ldr
ffffffff81025b40 t physid_set_mask_of_physid
ffffffff81025be0 t default_apic_id_valid
ffffffff81025bf0 t default_cpu_mask_to_apicid
ffffffff81025c00 t default_cpu_mask_to_apicid_and
ffffffff81025c10 t default_ioapic_phys_id_map
ffffffff81025c30 t noop_init_apic_ldr
ffffffff81025c40 t noop_send_IPI_mask
ffffffff81025c50 t noop_send_IPI_mask_allbutself
ffffffff81025c60 t noop_send_IPI_allbutself
ffffffff81025c70 t noop_send_IPI_all
ffffffff81025c80 t noop_send_IPI_self
ffffffff81025c90 t noop_apic_wait_icr_idle
ffffffff81025ca0 t noop_apic_icr_write
ffffffff81025cb0 t noop_wakeup_secondary_cpu
ffffffff81025cc0 t noop_safe_apic_wait_icr_idle
ffffffff81025cd0 t noop_apic_icr_read
ffffffff81025ce0 t noop_phys_pkg_id
ffffffff81025cf0 t noop_get_apic_id
ffffffff81025d00 t noop_probe
ffffffff81025d10 t noop_apic_id_registered
ffffffff81025d20 t noop_target_cpus
ffffffff81025d30 t noop_vector_allocation_domain
ffffffff81025d70 t noop_apic_write
ffffffff81025dc0 t noop_apic_read
ffffffff81025e10 t noop_check_apicid_present
ffffffff81025e20 t noop_check_apicid_used
ffffffff81025e30 T default_send_IPI_mask_sequence_phys
ffffffff81025f00 T default_send_IPI_mask_allbutself_phys
ffffffff81025fe0 t __io_apic_read
ffffffff81026020 t __io_apic_write
ffffffff81026060 t __io_apic_modify
ffffffff810260b0 t __ioapic_write_entry
ffffffff81026100 t io_apic_sync
ffffffff81026140 t mask_lapic_irq
ffffffff81026180 t unmask_lapic_irq
ffffffff810261c0 t ack_lapic_irq
ffffffff810261e0 t ioapic_read_entry
ffffffff81026260 T save_ioapic_entries
ffffffff81026300 t ioapic_write_entry
ffffffff81026360 t ioapic_retrigger_irq
ffffffff81026400 t irq_cfg
ffffffff81026420 t pin_2_irq
ffffffff810264c0 t free_irq_at
ffffffff81026510 t __assign_irq_vector
ffffffff81026710 t io_apic_modify_irq.isra.17
ffffffff81026770 t mask_ioapic
ffffffff810267d0 t mask_ioapic_irq
ffffffff810267e0 t unmask_ioapic
ffffffff81026830 t unmask_ioapic_irq
ffffffff81026840 t startup_ioapic_irq
ffffffff810268e0 t __eoi_ioapic_pin.isra.18.part.19
ffffffff81026950 t clear_IO_APIC_pin
ffffffff81026b10 t clear_IO_APIC
ffffffff81026b70 t __add_pin_to_irq_node
ffffffff81026c10 t alloc_irq_cfg.isra.26
ffffffff81026c50 t irq_trigger
ffffffff81026cc0 t irq_polarity
ffffffff81026d20 T IO_APIC_get_PCI_irq_vector
ffffffff81026f60 t __irq_complete_move
ffffffff81026fc0 t ack_apic_level
ffffffff81027170 t ack_apic_edge
ffffffff810271b0 t find_irq_entry.constprop.36
ffffffff81027230 T mpc_ioapic_id
ffffffff81027250 T mpc_ioapic_addr
ffffffff81027270 T mp_ioapic_gsi_routing
ffffffff81027290 T disable_ioapic_support
ffffffff810272b0 T mp_save_irq
ffffffff81027380 T mask_ioapic_entries
ffffffff81027410 T restore_ioapic_entries
ffffffff81027480 t ioapic_resume
ffffffff81027510 T lock_vector_lock
ffffffff81027520 T unlock_vector_lock
ffffffff81027530 T assign_irq_vector
ffffffff81027590 t io_apic_setup_irq_pin
ffffffff81027830 t msi_compose_msg.isra.33
ffffffff81027920 T __setup_vector_irq
ffffffff81027a40 T disable_IO_APIC
ffffffff81027af0 T send_cleanup_vector
ffffffff81027b30 T __ioapic_set_affinity
ffffffff81027bc0 t ioapic_set_affinity
ffffffff81027c90 t ht_set_affinity
ffffffff81027d30 t hpet_msi_set_affinity
ffffffff81027da0 t msi_set_affinity
ffffffff81027e10 T smp_irq_move_cleanup_interrupt
ffffffff81027f30 T irq_force_complete_move
ffffffff81027f60 T create_irq_nr
ffffffff81028070 T create_irq
ffffffff810280a0 T destroy_irq
ffffffff810281f0 T native_setup_msi_irqs
ffffffff81028340 T native_teardown_msi_irq
ffffffff81028350 T arch_setup_hpet_msi
ffffffff810283c0 T arch_setup_ht_irq
ffffffff810284f0 T io_apic_setup_irq_pin_once
ffffffff81028560 T get_nr_irqs_gsi
ffffffff81028570 T io_apic_set_pci_routing
ffffffff810285e0 T mp_find_ioapic
ffffffff81028650 T mp_find_ioapic_pin
ffffffff810286b0 T acpi_get_override_irq
ffffffff81028750 T setup_IO_APIC_irq_extra
ffffffff81028820 t default_inquire_remote_apic
ffffffff81028840 t native_apic_mem_write
ffffffff81028850 t native_apic_mem_read
ffffffff81028860 t default_apic_id_valid
ffffffff81028870 t default_cpu_mask_to_apicid
ffffffff81028880 t default_cpu_mask_to_apicid_and
ffffffff81028890 t flat_acpi_madt_oem_check
ffffffff810288a0 t flat_target_cpus
ffffffff810288b0 t flat_vector_allocation_domain
ffffffff810288c0 T flat_init_apic_ldr
ffffffff81028920 t flat_get_apic_id
ffffffff81028930 t set_apic_id
ffffffff81028940 t flat_apic_id_registered
ffffffff81028970 t flat_phys_pkg_id
ffffffff81028980 t flat_probe
ffffffff81028990 t physflat_target_cpus
ffffffff810289a0 t physflat_send_IPI_allbutself
ffffffff810289b0 t physflat_send_IPI_mask_allbutself
ffffffff810289c0 t physflat_send_IPI_mask
ffffffff810289d0 t physflat_send_IPI_all
ffffffff810289e0 t physflat_cpu_mask_to_apicid_and
ffffffff81028a40 t physflat_cpu_mask_to_apicid
ffffffff81028a90 t physflat_vector_allocation_domain
ffffffff81028aa0 t physflat_probe
ffffffff81028ad0 t flat_send_IPI_mask
ffffffff81028b60 t flat_send_IPI_mask_allbutself
ffffffff81028c10 t flat_send_IPI_allbutself
ffffffff81028ce0 t flat_send_IPI_all
ffffffff81028d40 t physflat_acpi_madt_oem_check
ffffffff81028dc0 t apicid_phys_pkg_id
ffffffff81028dd0 T apic_send_IPI_self
ffffffff81028e10 T module_alloc
ffffffff81028e70 T apply_relocate_add
ffffffff81028fc0 T module_finalize
ffffffff81029110 T module_arch_cleanup
ffffffff81029120 t early_vga_write
ffffffff81029290 t early_serial_write
ffffffff81029350 T early_printk
ffffffff810293e0 T is_hpet_enabled
ffffffff81029410 t hpet_restart_counter
ffffffff81029460 t hpet_legacy_next_event
ffffffff810294a0 t hpet_msi_next_event
ffffffff810294f0 t read_hpet
ffffffff81029500 T hpet_register_irq_handler
ffffffff81029540 T hpet_unregister_irq_handler
ffffffff81029580 T hpet_set_alarm_time
ffffffff810295d0 T hpet_rtc_dropped_irq
ffffffff81029600 t _hpet_print_config
ffffffff81029740 t hpet_setup_msi_irq
ffffffff81029770 t hpet_cpuhp_notify
ffffffff81029890 t hpet_work
ffffffff81029a70 t hpet_set_mode
ffffffff81029c40 t hpet_msi_set_mode
ffffffff81029c50 t hpet_legacy_set_mode
ffffffff81029c60 T hpet_rtc_interrupt
ffffffff81029fb0 t hpet_resume_counter
ffffffff81029fd0 T hpet_set_periodic_freq
ffffffff8102a050 t hpet_interrupt_handler
ffffffff8102a080 T hpet_mask_rtc_irq_bit
ffffffff8102a0e0 T hpet_rtc_timer_init
ffffffff8102a1e0 T hpet_set_rtc_irq_bit
ffffffff8102a250 T EVT_TO_HPET_DEV
ffffffff8102a260 T hpet_readl
ffffffff8102a270 T hpet_msi_unmask
ffffffff8102a2b0 T hpet_msi_mask
ffffffff8102a2f0 T hpet_msi_write
ffffffff8102a330 T hpet_msi_read
ffffffff8102a370 T hpet_disable
ffffffff8102a3c0 t next_northbridge
ffffffff8102a410 T amd_flush_garts
ffffffff8102a560 T amd_cache_northbridges
ffffffff8102a730 T amd_get_mmconfig_range
ffffffff8102a7b0 T amd_get_subcaches
ffffffff8102a840 T amd_set_subcaches
ffffffff8102aa00 t native_read_tscp
ffffffff8102aa20 t native_read_msr_safe
ffffffff8102aa40 t native_write_msr_safe
ffffffff8102aa50 t native_read_pmc
ffffffff8102aa70 t native_clts
ffffffff8102aa80 t native_read_cr0
ffffffff8102aa90 t native_write_cr0
ffffffff8102aaa0 t native_read_cr2
ffffffff8102aab0 t native_write_cr2
ffffffff8102aac0 t native_read_cr3
ffffffff8102aad0 t native_write_cr3
ffffffff8102aae0 t native_read_cr4
ffffffff8102aaf0 t native_read_cr4_safe
ffffffff8102ab00 t native_write_cr4
ffffffff8102ab10 t native_read_cr8
ffffffff8102ab20 t native_write_cr8
ffffffff8102ab30 t native_wbinvd
ffffffff8102ab40 t native_save_fl
ffffffff8102ab50 t native_restore_fl
ffffffff8102ab60 t native_irq_disable
ffffffff8102ab70 t native_irq_enable
ffffffff8102ab80 t native_safe_halt
ffffffff8102ab90 t native_halt
ffffffff8102aba0 t native_cpuid
ffffffff8102abc0 t native_set_iopl_mask
ffffffff8102abd0 t native_load_sp0
ffffffff8102abe0 t native_swapgs
ffffffff8102abf0 t native_set_pte
ffffffff8102ac00 t native_set_pmd
ffffffff8102ac10 t native_set_pud
ffffffff8102ac20 t native_set_pgd
ffffffff8102ac30 t native_set_pte_at
ffffffff8102ac40 t native_set_pmd_at
ffffffff8102ac50 t __ptep_modify_prot_start
ffffffff8102ac70 t __ptep_modify_prot_commit
ffffffff8102ac80 t native_get_debugreg
ffffffff8102acf0 t native_set_debugreg
ffffffff8102ad60 t native_write_idt_entry
ffffffff8102ad80 t native_write_ldt_entry
ffffffff8102ad90 t native_write_gdt_entry
ffffffff8102adc0 t native_set_ldt
ffffffff8102ae60 t native_load_tr_desc
ffffffff8102ae70 t native_load_gdt
ffffffff8102ae80 t native_load_idt
ffffffff8102ae90 t native_store_gdt
ffffffff8102aea0 t native_store_idt
ffffffff8102aeb0 t native_store_tr
ffffffff8102aec0 t native_load_tls
ffffffff8102aef0 t __paravirt_pgd_alloc
ffffffff8102af00 T _paravirt_nop
ffffffff8102af10 T _paravirt_ident_32
ffffffff8102af20 T _paravirt_ident_64
ffffffff8102af30 t get_call_destination
ffffffff8102b080 t native_flush_tlb
ffffffff8102b090 t native_flush_tlb_global
ffffffff8102b0c0 t native_flush_tlb_single
ffffffff8102b0d0 t native_steal_clock
ffffffff8102b0e0 T paravirt_patch_nop
ffffffff8102b0f0 T paravirt_patch_ignore
ffffffff8102b100 T paravirt_patch_call
ffffffff8102b130 T paravirt_patch_jmp
ffffffff8102b150 T paravirt_patch_insns
ffffffff8102b180 T paravirt_patch_default
ffffffff8102b2f0 T paravirt_disable_iospace
ffffffff8102b310 T paravirt_enter_lazy_mmu
ffffffff8102b330 T paravirt_leave_lazy_mmu
ffffffff8102b350 T paravirt_start_context_switch
ffffffff8102b3a0 T paravirt_end_context_switch
ffffffff8102b3e0 T paravirt_get_lazy_mode
ffffffff8102b410 T arch_flush_lazy_mmu_mode
ffffffff8102b450 t start_pv_irq_ops_irq_disable
ffffffff8102b451 t end_pv_irq_ops_irq_disable
ffffffff8102b451 t start_pv_irq_ops_irq_enable
ffffffff8102b452 t end_pv_irq_ops_irq_enable
ffffffff8102b452 t start_pv_irq_ops_restore_fl
ffffffff8102b454 t end_pv_irq_ops_restore_fl
ffffffff8102b454 t start_pv_irq_ops_save_fl
ffffffff8102b456 t end_pv_irq_ops_save_fl
ffffffff8102b456 t start_pv_cpu_ops_iret
ffffffff8102b458 t end_pv_cpu_ops_iret
ffffffff8102b458 t start_pv_mmu_ops_read_cr2
ffffffff8102b45b t end_pv_mmu_ops_read_cr2
ffffffff8102b45b t start_pv_mmu_ops_read_cr3
ffffffff8102b45e t end_pv_mmu_ops_read_cr3
ffffffff8102b45e t start_pv_mmu_ops_write_cr3
ffffffff8102b461 t end_pv_mmu_ops_write_cr3
ffffffff8102b461 t start_pv_mmu_ops_flush_tlb_single
ffffffff8102b464 t end_pv_mmu_ops_flush_tlb_single
ffffffff8102b464 t start_pv_cpu_ops_clts
ffffffff8102b466 t end_pv_cpu_ops_clts
ffffffff8102b466 t start_pv_cpu_ops_wbinvd
ffffffff8102b468 t end_pv_cpu_ops_wbinvd
ffffffff8102b468 t start_pv_cpu_ops_irq_enable_sysexit
ffffffff8102b46e t end_pv_cpu_ops_irq_enable_sysexit
ffffffff8102b46e t start_pv_cpu_ops_usergs_sysret64
ffffffff8102b474 t end_pv_cpu_ops_usergs_sysret64
ffffffff8102b474 t start_pv_cpu_ops_usergs_sysret32
ffffffff8102b479 t end_pv_cpu_ops_usergs_sysret32
ffffffff8102b479 t start_pv_cpu_ops_swapgs
ffffffff8102b47c t end_pv_cpu_ops_swapgs
ffffffff8102b47c t start__mov32
ffffffff8102b47e t end__mov32
ffffffff8102b47e t start__mov64
ffffffff8102b481 t end__mov64
ffffffff8102b490 T paravirt_patch_ident_32
ffffffff8102b4b0 T paravirt_patch_ident_64
ffffffff8102b4d0 T native_patch
ffffffff8102b640 t __ticket_spin_lock
ffffffff8102b670 t __ticket_spin_trylock
ffffffff8102b6a0 t __ticket_spin_unlock
ffffffff8102b6b0 t __ticket_spin_is_locked
ffffffff8102b6c0 t __ticket_spin_is_contended
ffffffff8102b6e0 t default_spin_lock_flags
ffffffff8102b6f0 T pvclock_set_flags
ffffffff8102b700 T pvclock_tsc_khz
ffffffff8102b740 T pvclock_resume
ffffffff8102b750 T pvclock_clocksource_read
ffffffff8102b840 T pvclock_read_wallclock
ffffffff8102b8c0 t x86_swiotlb_free_coherent
ffffffff8102b8d0 t x86_swiotlb_alloc_coherent
ffffffff8102b950 T audit_classify_arch
ffffffff8102b960 T audit_classify_syscall
ffffffff8102b990 t gart_mapping_error
ffffffff8102b9a0 t alloc_iommu
ffffffff8102bb10 t flush_gart
ffffffff8102bb60 t gart_iommu_shutdown
ffffffff8102bc10 t __dma_map_cont.isra.7
ffffffff8102bd40 t iommu_full
ffffffff8102bdb0 t dma_map_area
ffffffff8102bed0 t gart_map_page
ffffffff8102bf50 t gart_unmap_page
ffffffff8102c040 t gart_unmap_sg
ffffffff8102c0b0 t gart_free_coherent
ffffffff8102c100 t enable_gart_translations.part.10
ffffffff8102c1d0 t gart_resume
ffffffff8102c2b0 t gart_map_sg
ffffffff8102c700 t gart_alloc_coherent
ffffffff8102c850 T set_up_gart_resume
ffffffff8102c870 t __raw_callee_save_vsmp_save_fl
ffffffff8102c88e t __raw_callee_save_vsmp_restore_fl
ffffffff8102c8ac t __raw_callee_save_vsmp_irq_disable
ffffffff8102c8ca t __raw_callee_save_vsmp_irq_enable
ffffffff8102c8f0 t vsmp_save_fl
ffffffff8102c910 t vsmp_restore_fl
ffffffff8102c930 t vsmp_irq_disable
ffffffff8102c950 t vsmp_irq_enable
ffffffff8102c960 t vsmp_patch
ffffffff8102c980 T is_vsmp_box
ffffffff8102c9d0 T devmem_is_allowed
ffffffff8102ca10 T free_init_pages
ffffffff8102cbe0 T free_initmem
ffffffff8102cc00 T free_initrd_mem
ffffffff8102cc20 t fill_pte
ffffffff8102cd20 t fill_pmd
ffffffff8102ce40 T sync_global_pgds
ffffffff8102cfb0 T set_pte_vaddr_pud
ffffffff8102d010 T set_pte_vaddr
ffffffff8102d070 T kern_addr_valid
ffffffff8102d190 T get_gate_vma
ffffffff8102d1c0 T in_gate_area
ffffffff8102d1f0 T in_gate_area_no_mm
ffffffff8102d210 T arch_vma_name
ffffffff8102d250 T vmalloc_sync_all
ffffffff8102d270 T do_page_fault
ffffffff8102d6a0 t __ioremap_caller
ffffffff8102da30 T ioremap_prot
ffffffff8102da40 T ioremap_cache
ffffffff8102da50 T ioremap_nocache
ffffffff8102da60 T iounmap
ffffffff8102db30 T ioremap_wc
ffffffff8102db60 T ioremap_change_attr
ffffffff8102db90 T xlate_dev_mem_ptr
ffffffff8102dc10 T unxlate_dev_mem_ptr
ffffffff8102dc40 T fixup_exception
ffffffff8102dca0 T clflush_cache_range
ffffffff8102dcd0 T lookup_address
ffffffff8102ddf0 t __cpa_flush_all
ffffffff8102de30 t __cpa_flush_range
ffffffff8102de50 t __cpa_process_fault
ffffffff8102df00 t __change_page_attr_set_clr
ffffffff8102e9e0 t change_page_attr_set_clr
ffffffff8102ee20 T set_memory_x
ffffffff8102ee70 T set_pages_x
ffffffff8102eeb0 T set_memory_nx
ffffffff8102ef00 T set_pages_nx
ffffffff8102ef40 T set_memory_ro
ffffffff8102ef70 T set_memory_rw
ffffffff8102efa0 T set_pages_array_wb
ffffffff8102f060 T set_memory_array_wb
ffffffff8102f0e0 t _set_pages_array
ffffffff8102f230 T set_pages_array_wc
ffffffff8102f240 T set_pages_array_uc
ffffffff8102f250 t _set_memory_array
ffffffff8102f370 T set_memory_array_wc
ffffffff8102f380 T set_memory_array_uc
ffffffff8102f390 T update_page_count
ffffffff8102f3e0 T arch_report_meminfo
ffffffff8102f450 T _set_memory_uc
ffffffff8102f480 T set_memory_uc
ffffffff8102f530 T set_pages_uc
ffffffff8102f570 T _set_memory_wc
ffffffff8102f5d0 T set_memory_wc
ffffffff8102f6a0 T _set_memory_wb
ffffffff8102f6d0 T set_memory_wb
ffffffff8102f730 T set_pages_wb
ffffffff8102f770 T set_memory_np
ffffffff8102f7a0 T set_memory_4k
ffffffff8102f7d0 T set_pages_ro
ffffffff8102f810 T set_pages_rw
ffffffff8102f850 t mmap_rnd
ffffffff8102f8b0 t stack_maxrandom_size
ffffffff8102f900 T arch_pick_mmap_layout
ffffffff8102fb70 T pgprot_writecombine
ffffffff8102fba0 t pat_pagerange_is_ram
ffffffff8102fc20 t lookup_memtype
ffffffff8102fd00 T pat_init
ffffffff8102fde0 T reserve_memtype
ffffffff810301b0 T free_memtype
ffffffff81030310 T io_free_memtype
ffffffff81030320 T phys_mem_access_prot
ffffffff81030330 T phys_mem_access_prot_allowed
ffffffff810303e0 T kernel_map_sync_memtype
ffffffff810304c0 t reserve_pfn_range
ffffffff810306b0 T io_reserve_memtype
ffffffff810307d0 T track_pfn_vma_copy
ffffffff81030890 T track_pfn_vma_new
ffffffff81030920 T untrack_pfn_vma
ffffffff81030990 T pte_alloc_one_kernel
ffffffff810309a0 T pte_alloc_one
ffffffff810309e0 T ___pte_free_tlb
ffffffff81030a70 T ___pmd_free_tlb
ffffffff81030ae0 T ___pud_free_tlb
ffffffff81030b50 T pgd_page_get_mm
ffffffff81030b60 T pgd_alloc
ffffffff81030d20 T pgd_free
ffffffff81030dd0 T ptep_set_access_flags
ffffffff81030e40 T pmdp_set_access_flags
ffffffff81030eb0 T ptep_test_and_clear_young
ffffffff81030ee0 T pmdp_test_and_clear_young
ffffffff81030f10 T ptep_clear_flush_young
ffffffff81030f50 T pmdp_clear_flush_young
ffffffff81030f80 T pmdp_splitting_flush
ffffffff81030fb0 T __native_set_fixmap
ffffffff81030fe0 T native_set_fixmap
ffffffff81031020 T __phys_addr
ffffffff81031050 T __virt_addr_valid
ffffffff810310f0 t gup_huge_pmd
ffffffff810311d0 t gup_huge_pud
ffffffff810312b0 t gup_pte_range
ffffffff810313e0 t gup_pud_range
ffffffff81031590 T __get_user_pages_fast
ffffffff810316a0 T get_user_pages_fast
ffffffff81031850 t memtype_rb_augment_cb
ffffffff81031890 t memtype_rb_lowest_match.constprop.2
ffffffff810318f0 T rbt_memtype_check_insert
ffffffff81031ab0 T rbt_memtype_erase
ffffffff81031b40 T rbt_memtype_lookup
ffffffff81031b50 T leave_mm
ffffffff81031b90 t do_flush_tlb_all
ffffffff81031be0 T smp_invalidate_interrupt
ffffffff81031ca0 T native_flush_tlb_others
ffffffff81031dd0 T flush_tlb_current_task
ffffffff81031e40 T flush_tlb_mm
ffffffff81031ee0 T flush_tlb_page
ffffffff81031f80 T flush_tlb_all
ffffffff81031fa0 T huge_pmd_unshare
ffffffff81032120 T huge_pte_offset
ffffffff810321c0 T huge_pte_alloc
ffffffff810325a0 T follow_huge_addr
ffffffff810325b0 T pmd_huge
ffffffff810325c0 T pud_huge
ffffffff810325d0 T follow_huge_pmd
ffffffff81032620 T follow_huge_pud
ffffffff81032670 T __node_distance
ffffffff810326b0 T arch_setup_additional_pages
ffffffff81032800 T syscall32_cpu_init
ffffffff81032880 T syscall32_setup_pages
ffffffff810329a0 t cp_stat64
ffffffff81032b00 T sys32_truncate64
ffffffff81032b10 T sys32_ftruncate64
ffffffff81032b20 T sys32_stat64
ffffffff81032b50 T sys32_lstat64
ffffffff81032b80 T sys32_fstat64
ffffffff81032bb0 T sys32_fstatat
ffffffff81032be0 T sys32_mmap
ffffffff81032c40 T sys32_mprotect
ffffffff81032c50 T sys32_rt_sigaction
ffffffff81032db0 T sys32_sigaction
ffffffff81032ed0 T sys32_alarm
ffffffff81032ee0 T sys32_waitpid
ffffffff81032ef0 T sys32_sysfs
ffffffff81032f00 T sys32_sched_rr_get_interval
ffffffff81032f80 T sys32_rt_sigpending
ffffffff81033030 T sys32_rt_sigqueueinfo
ffffffff810330e0 T sys32_pread
ffffffff81033100 T sys32_pwrite
ffffffff81033120 T sys32_personality
ffffffff81033160 T sys32_sendfile
ffffffff81033220 T sys32_execve
ffffffff81033290 T sys32_clone
ffffffff810332b0 T sys32_lseek
ffffffff810332c0 T sys32_kill
ffffffff810332d0 T sys32_fadvise64_64
ffffffff810332f0 T sys32_vm86_warning
ffffffff81033350 T sys32_lookup_dcookie
ffffffff81033370 T sys32_readahead
ffffffff81033390 T sys32_sync_file_range
ffffffff810333b0 T sys32_fadvise64
ffffffff810333d0 T sys32_fallocate
ffffffff810333f0 T sys32_fanotify_mark
ffffffff81033410 t ia32_setup_sigcontext
ffffffff81033510 t ia32_restore_sigcontext
ffffffff81033690 t get_sigframe.isra.0
ffffffff81033770 T copy_siginfo_to_user32
ffffffff810338b0 T copy_siginfo_from_user32
ffffffff81033940 T sys32_sigsuspend
ffffffff810339b0 T sys32_sigaltstack
ffffffff81033b90 T sys32_sigreturn
ffffffff81033c50 T sys32_rt_sigreturn
ffffffff81033d60 T ia32_setup_frame
ffffffff81033f70 T ia32_setup_rt_frame
ffffffff81034200 T compat_ni_syscall
ffffffff81034210 T sys32_ipc
ffffffff81034300 T ia32_classify_syscall
ffffffff81034340 t unshare_fd
ffffffff810343b0 T get_task_mm
ffffffff81034420 t sighand_ctor
ffffffff81034450 t account_kernel_stack
ffffffff810344b0 T free_task
ffffffff810344e0 T __put_task_struct
ffffffff810345d0 t mm_init.isra.27
ffffffff81034730 T __mmdrop
ffffffff810347b0 T nr_processes
ffffffff81034810 T mm_alloc
ffffffff810348f0 T added_exe_file_vma
ffffffff81034900 T removed_exe_file_vma
ffffffff81034940 T set_mm_exe_file
ffffffff81034990 T mmput
ffffffff81034a90 T get_mm_exe_file
ffffffff81034ae0 T mm_access
ffffffff81034b90 T mm_release
ffffffff81034cc0 T dup_mm
ffffffff81035250 T __cleanup_sighand
ffffffff81035280 t copy_process
ffffffff81036640 T sys_set_tid_address
ffffffff81036670 T do_fork
ffffffff81036940 T sys_unshare
ffffffff81036ba0 T unshare_files
ffffffff81036c40 t execdomains_proc_open
ffffffff81036c60 t execdomains_proc_show
ffffffff81036cd0 T __set_personality
ffffffff81036df0 t default_handler
ffffffff81036e70 T unregister_exec_domain
ffffffff81036ef0 T register_exec_domain
ffffffff81036f70 T sys_personality
ffffffff81036fa0 t no_blink
ffffffff81036fb0 t init_oops_id
ffffffff81036ff0 T add_taint
ffffffff81037030 t do_oops_enter_exit.part.3
ffffffff81037110 T test_taint
ffffffff81037120 W panic_smp_self_stop
ffffffff81037130 T print_tainted
ffffffff810371e0 T get_taint
ffffffff810371f0 T oops_may_print
ffffffff81037200 T oops_enter
ffffffff81037230 T print_oops_end_marker
ffffffff81037260 t warn_slowpath_common
ffffffff81037320 T warn_slowpath_null
ffffffff81037340 T warn_slowpath_fmt_taint
ffffffff81037390 T warn_slowpath_fmt
ffffffff810373e0 T oops_exit
ffffffff81037410 t __call_console_drivers
ffffffff810374b0 T kmsg_dump_register
ffffffff81037540 T kmsg_dump_unregister
ffffffff810375c0 T printk_timed_ratelimit
ffffffff81037620 T __printk_ratelimit
ffffffff81037630 t _call_console_drivers
ffffffff810376a0 t emit_log_char
ffffffff81037710 t log_prefix.part.5
ffffffff810377e0 T console_trylock
ffffffff81037830 T console_unlock
ffffffff81037a80 T vprintk
ffffffff81037f00 T console_lock
ffffffff81037f50 T unregister_console
ffffffff81037ff0 T console_start
ffffffff81038010 T console_stop
ffffffff81038030 T register_console
ffffffff810383c0 t __add_preferred_console.constprop.8
ffffffff810384a0 T do_syslog
ffffffff81038a40 T sys_syslog
ffffffff81038a60 T add_preferred_console
ffffffff81038a70 T update_console_cmdline
ffffffff81038b20 T suspend_console
ffffffff81038b60 T resume_console
ffffffff81038ba0 T is_console_locked
ffffffff81038bb0 T printk_tick
ffffffff81038c20 T printk_needs_cpu
ffffffff81038c50 T wake_up_klogd
ffffffff81038c70 T console_unblank
ffffffff81038cf0 T console_device
ffffffff81038d50 T kmsg_dump
ffffffff81038e30 t __cpu_notify
ffffffff81038e60 T get_online_cpus
ffffffff81038ea0 t cpu_hotplug_begin
ffffffff81038ef0 T put_online_cpus
ffffffff81038f50 t cpu_notify_nofail
ffffffff81038f70 T cpu_maps_update_begin
ffffffff81038f80 T cpu_maps_update_done
ffffffff81038fb0 T disable_nonboot_cpus
ffffffff810390c0 T cpu_hotplug_disable_before_freeze
ffffffff810390e0 T cpu_hotplug_enable_after_thaw
ffffffff81039100 t cpu_hotplug_pm_callback
ffffffff81039150 T set_cpu_possible
ffffffff81039170 T set_cpu_present
ffffffff81039190 T set_cpu_online
ffffffff810391b0 T set_cpu_active
ffffffff810391d0 T init_cpu_present
ffffffff810391e0 T init_cpu_possible
ffffffff810391f0 T init_cpu_online
ffffffff81039200 t will_become_orphaned_pgrp
ffffffff81039290 t kill_orphaned_pgrp
ffffffff81039340 t exit_mm
ffffffff81039460 T disallow_signal
ffffffff810394e0 T allow_signal
ffffffff81039570 t delayed_put_task_struct
ffffffff810395d0 t task_stopped_code
ffffffff81039610 t child_wait_callback
ffffffff81039670 T release_task
ffffffff81039a70 t wait_consider_task
ffffffff8103a4a0 t do_wait
ffffffff8103a6b0 T session_of_pgrp
ffffffff8103a6f0 T is_current_pgrp_orphaned
ffffffff8103a730 T __set_special_pids
ffffffff8103a7b0 T get_files_struct
ffffffff8103a800 T put_files_struct
ffffffff8103a920 T reset_files_struct
ffffffff8103a990 T exit_files
ffffffff8103aa10 T do_exit
ffffffff8103b300 T complete_and_exit
ffffffff8103b320 T daemonize
ffffffff8103b5e0 T sys_exit
ffffffff8103b600 T do_group_exit
ffffffff8103b6b0 T sys_exit_group
ffffffff8103b6d0 T __wake_up_parent
ffffffff8103b6f0 T sys_waitid
ffffffff8103b850 T sys_wait4
ffffffff8103b950 T sys_waitpid
ffffffff8103b960 t set_cpu_itimer
ffffffff8103bb60 t get_cpu_itimer
ffffffff8103bc60 T do_getitimer
ffffffff8103bd90 T sys_getitimer
ffffffff8103bde0 T it_real_fn
ffffffff8103be00 T do_setitimer
ffffffff8103c050 T alarm_setitimer
ffffffff8103c0b0 T sys_setitimer
ffffffff8103c1b0 T jiffies_to_msecs
ffffffff8103c1c0 T jiffies_to_usecs
ffffffff8103c1d0 T timespec_trunc
ffffffff8103c210 T mktime
ffffffff8103c2b0 T set_normalized_timespec
ffffffff8103c2f0 T msecs_to_jiffies
ffffffff8103c310 T usecs_to_jiffies
ffffffff8103c350 T timespec_to_jiffies
ffffffff8103c3a0 T jiffies_to_timespec
ffffffff8103c3e0 T timeval_to_jiffies
ffffffff8103c430 T jiffies_to_timeval
ffffffff8103c470 T jiffies_to_clock_t
ffffffff8103c490 T clock_t_to_jiffies
ffffffff8103c4e0 T jiffies_64_to_clock_t
ffffffff8103c500 T current_fs_time
ffffffff8103c550 T ns_to_timespec
ffffffff8103c5d0 T ns_to_timeval
ffffffff8103c610 T sys_time
ffffffff8103c660 T sys_stime
ffffffff8103c6b0 T sys_gettimeofday
ffffffff8103c730 T do_sys_settimeofday
ffffffff8103c820 T sys_settimeofday
ffffffff8103c8c0 T sys_adjtimex
ffffffff8103c950 T nsec_to_clock_t
ffffffff8103c970 T nsecs_to_jiffies64
ffffffff8103c990 T nsecs_to_jiffies
ffffffff8103c9b0 T timespec_add_safe
ffffffff8103ca30 T tasklet_init
ffffffff8103ca50 t tasklet_hi_action
ffffffff8103cb30 t tasklet_action
ffffffff8103cc10 t wakeup_softirqd
ffffffff8103cc30 T tasklet_hrtimer_init
ffffffff8103cc90 t __tasklet_hrtimer_trampoline
ffffffff8103ccc0 T tasklet_kill
ffffffff8103cd40 T local_bh_disable
ffffffff8103cd60 T local_bh_enable_ip
ffffffff8103ce00 T local_bh_enable
ffffffff8103cea0 t __local_bh_enable
ffffffff8103cf20 T _local_bh_enable
ffffffff8103cf30 T __tasklet_hi_schedule_first
ffffffff8103cf60 t __local_trigger
ffffffff8103cfc0 T __send_remote_softirq
ffffffff8103d010 T send_remote_softirq
ffffffff8103d040 t remote_softirq_receive
ffffffff8103d070 T __tasklet_hi_schedule
ffffffff8103d0e0 t __hrtimer_tasklet_trampoline
ffffffff8103d110 T __tasklet_schedule
ffffffff8103d180 T __do_softirq
ffffffff8103d2b0 t run_ksoftirqd
ffffffff8103d420 T irq_enter
ffffffff8103d490 T irq_exit
ffffffff8103d500 T raise_softirq_irqoff
ffffffff8103d540 T raise_softirq
ffffffff8103d5a0 T __raise_softirq_irqoff
ffffffff8103d5c0 T open_softirq
ffffffff8103d5d0 T tasklet_kill_immediate
ffffffff8103d650 t r_stop
ffffffff8103d660 t __request_resource
ffffffff8103d6b0 t __is_ram
ffffffff8103d6c0 t simple_align_resource
ffffffff8103d6d0 t devm_region_match
ffffffff8103d700 t iomem_open
ffffffff8103d730 t ioports_open
ffffffff8103d760 t r_show
ffffffff8103d7e0 t __insert_resource
ffffffff8103d8f0 T adjust_resource
ffffffff8103d9c0 T release_resource
ffffffff8103da30 T __release_region
ffffffff8103db10 t devm_region_release
ffffffff8103db30 T __request_region
ffffffff8103dd00 T __devm_request_region
ffffffff8103ddd0 T __check_region
ffffffff8103de10 t r_next
ffffffff8103de40 t r_start
ffffffff8103deb0 t __release_child_resources.isra.5
ffffffff8103df30 T __devm_release_region
ffffffff8103df90 T release_child_resources
ffffffff8103dfc0 T request_resource_conflict
ffffffff8103e000 T request_resource
ffffffff8103e020 T walk_system_ram_range
ffffffff8103e130 W page_is_ram
ffffffff8103e170 t __find_resource
ffffffff8103e340 T reallocate_resource
ffffffff8103e500 T allocate_resource
ffffffff8103e5d0 T lookup_resource
ffffffff8103e620 T insert_resource_conflict
ffffffff8103e660 T insert_resource
ffffffff8103e680 T insert_resource_expand_to_fit
ffffffff8103e720 T resource_alignment
ffffffff8103e760 T iomem_map_sanity_check
ffffffff8103e830 T iomem_is_exclusive
ffffffff8103e8e0 t proc_put_long
ffffffff8103e980 T proc_dostring
ffffffff8103eb60 t proc_put_char
ffffffff8103eb90 t do_proc_dointvec_conv
ffffffff8103ebe0 t do_proc_dointvec_minmax_conv
ffffffff8103ec50 t do_proc_dointvec_jiffies_conv
ffffffff8103ecd0 t do_proc_dointvec_ms_jiffies_conv
ffffffff8103ed40 t do_proc_dointvec_userhz_jiffies_conv
ffffffff8103edc0 t proc_get_long.constprop.8
ffffffff8103ef30 t __do_proc_doulongvec_minmax
ffffffff8103f2f0 T proc_doulongvec_ms_jiffies_minmax
ffffffff8103f330 T proc_doulongvec_minmax
ffffffff8103f370 t proc_taint
ffffffff8103f460 t __do_proc_dointvec.isra.5
ffffffff8103f800 T proc_dointvec_ms_jiffies
ffffffff8103f840 T proc_dointvec_userhz_jiffies
ffffffff8103f880 T proc_dointvec_jiffies
ffffffff8103f8c0 T proc_dointvec_minmax
ffffffff8103f910 t proc_dointvec_minmax_sysadmin
ffffffff8103f970 T proc_dointvec
ffffffff8103f9b0 t proc_do_cad_pid
ffffffff8103fa70 T proc_do_large_bitmap
ffffffff8103fee0 t do_sysctl.isra.1.part.2
ffffffff81040030 T sys_sysctl
ffffffff810400f0 T compat_sys_sysctl
ffffffff810401a0 T ns_capable
ffffffff810401f0 T capable
ffffffff81040200 t cap_validate_magic
ffffffff810402f0 T sys_capget
ffffffff81040440 T sys_capset
ffffffff810405e0 T has_ns_capability
ffffffff81040600 T has_capability
ffffffff81040610 T has_ns_capability_noaudit
ffffffff81040630 T has_capability_noaudit
ffffffff81040640 T nsown_capable
ffffffff81040650 t ptrace_get_task_struct
ffffffff81040680 t ptrace_trapping_sleep_fn
ffffffff81040690 t ptrace_getsiginfo
ffffffff81040700 t ptrace_setsiginfo
ffffffff81040770 t ptrace_regset
ffffffff810408b0 t ptrace_resume
ffffffff81040960 t ptrace_has_cap
ffffffff810409a0 t __ptrace_detach.part.7
ffffffff81040ab0 T __ptrace_link
ffffffff81040b00 t ptrace_traceme
ffffffff81040b90 T __ptrace_unlink
ffffffff81040c90 T ptrace_check_attach
ffffffff81040da0 T __ptrace_may_access
ffffffff81040e70 t ptrace_attach
ffffffff810410f0 T ptrace_may_access
ffffffff81041140 T exit_ptrace
ffffffff810412b0 T ptrace_readdata
ffffffff81041360 T ptrace_writedata
ffffffff81041410 T sys_ptrace
ffffffff810414f0 T generic_ptrace_peekdata
ffffffff81041540 T generic_ptrace_pokedata
ffffffff81041570 T ptrace_request
ffffffff810419e0 T compat_ptrace_request
ffffffff81041bf0 T compat_sys_ptrace
ffffffff81041cd0 T ptrace_get_breakpoints
ffffffff81041d20 T ptrace_put_breakpoints
ffffffff81041d40 T __round_jiffies
ffffffff81041da0 T __round_jiffies_relative
ffffffff81041e00 T round_jiffies
ffffffff81041e70 T round_jiffies_relative
ffffffff81041e80 T __round_jiffies_up
ffffffff81041ed0 T __round_jiffies_up_relative
ffffffff81041f30 T round_jiffies_up
ffffffff81041f80 T round_jiffies_up_relative
ffffffff81041fe0 T set_timer_slack
ffffffff81041ff0 t internal_add_timer
ffffffff810420f0 T init_timer_key
ffffffff81042120 T init_timer_deferrable_key
ffffffff81042150 T usleep_range
ffffffff81042190 T add_timer_on
ffffffff81042230 t process_timeout
ffffffff81042240 t cascade
ffffffff810422d0 t run_timer_softirq
ffffffff81042510 t lock_timer_base.isra.28
ffffffff81042580 T mod_timer_pinned
ffffffff810426e0 T mod_timer
ffffffff810428d0 T add_timer
ffffffff810428f0 T mod_timer_pending
ffffffff81042a30 T try_to_del_timer_sync
ffffffff81042ac0 T del_timer_sync
ffffffff81042b10 T msleep
ffffffff81042b40 T msleep_interruptible
ffffffff81042b80 T del_timer
ffffffff81042c00 T setup_deferrable_timer_on_stack_key
ffffffff81042c40 T run_local_timers
ffffffff81042c60 T update_process_times
ffffffff81042ce0 T sys_alarm
ffffffff81042cf0 T sys_getpid
ffffffff81042d20 T sys_getppid
ffffffff81042d50 T sys_getuid
ffffffff81042d70 T sys_geteuid
ffffffff81042d90 T sys_getgid
ffffffff81042db0 T sys_getegid
ffffffff81042dd0 T sys_gettid
ffffffff81042df0 T do_sysinfo
ffffffff81042f70 T sys_sysinfo
ffffffff81042fb0 t uid_hash_find
ffffffff81042ff0 T find_user
ffffffff81043050 T free_uid
ffffffff81043100 T alloc_uid
ffffffff81043230 t recalc_sigpending_tsk
ffffffff81043280 T block_all_signals
ffffffff81043310 t __sigqueue_alloc
ffffffff810433e0 T recalc_sigpending
ffffffff81043430 T unblock_all_signals
ffffffff810434a0 t __sigqueue_free
ffffffff810434e0 t __dequeue_signal
ffffffff810436f0 T dequeue_signal
ffffffff81043870 t __flush_itimer_signals
ffffffff810438f0 t rm_from_queue_full.part.15
ffffffff81043970 t rm_from_queue
ffffffff81043a00 t check_kill_permission.part.17
ffffffff81043b40 T next_signal
ffffffff81043b70 T task_set_jobctl_pending
ffffffff81043bd0 T task_clear_jobctl_trapping
ffffffff81043c00 T task_clear_jobctl_pending
ffffffff81043c40 t task_participate_group_stop
ffffffff81043d10 T flush_sigqueue
ffffffff81043d50 T __flush_signals
ffffffff81043d80 T flush_signals
ffffffff81043de0 T flush_itimer_signals
ffffffff81043e50 T ignore_signals
ffffffff81043e80 T flush_signal_handlers
ffffffff81043ee0 T unhandled_signal
ffffffff81043f20 T signal_wake_up
ffffffff81043f60 t __set_task_blocked
ffffffff81043fe0 t prepare_signal
ffffffff810441f0 t complete_signal
ffffffff81044440 t __send_signal
ffffffff81044780 t send_signal
ffffffff81044810 T recalc_sigpending_and_wake
ffffffff81044830 T __group_send_sig_info
ffffffff81044840 t do_notify_parent_cldstop
ffffffff810449e0 t ptrace_stop
ffffffff81044c50 t ptrace_do_notify
ffffffff81044d00 t do_signal_stop
ffffffff81044ee0 T force_sig_info
ffffffff81044fe0 T force_sig
ffffffff81044ff0 T zap_other_threads
ffffffff81045080 T __lock_task_sighand
ffffffff81045120 T kill_pid_info_as_cred
ffffffff81045250 T do_send_sig_info
ffffffff810452e0 t do_send_specific
ffffffff810453b0 t do_tkill
ffffffff81045430 T send_sig_info
ffffffff81045450 T send_sig
ffffffff81045470 T group_send_sig_info
ffffffff810454f0 T __kill_pgrp_info
ffffffff81045550 T kill_pgrp
ffffffff810455b0 T kill_pid_info
ffffffff81045610 T kill_pid
ffffffff81045630 T kill_proc_info
ffffffff81045660 T force_sigsegv
ffffffff810456c0 T sigqueue_alloc
ffffffff810456f0 T sigqueue_free
ffffffff81045780 T send_sigqueue
ffffffff810458d0 T do_notify_parent
ffffffff81045b40 T ptrace_notify
ffffffff81045bc0 T get_signal_to_deliver
ffffffff810460d0 T exit_signals
ffffffff81046200 T sys_restart_syscall
ffffffff81046220 T do_no_restart_syscall
ffffffff81046230 T set_current_blocked
ffffffff810462a0 T sigprocmask
ffffffff81046330 T block_sigmask
ffffffff81046380 T sys_rt_sigprocmask
ffffffff81046420 T do_sigpending
ffffffff810464e0 T sys_rt_sigpending
ffffffff810464f0 T copy_siginfo_to_user
ffffffff810466d0 T do_sigtimedwait
ffffffff81046890 T sys_rt_sigtimedwait
ffffffff81046980 T sys_kill
ffffffff81046b40 T sys_tgkill
ffffffff81046b70 T sys_tkill
ffffffff81046ba0 T sys_rt_sigqueueinfo
ffffffff81046c60 T do_rt_tgsigqueueinfo
ffffffff81046ce0 T sys_rt_tgsigqueueinfo
ffffffff81046d60 T do_sigaction
ffffffff81046f70 T do_sigaltstack
ffffffff810470c0 T sys_sigpending
ffffffff810470d0 T sys_sigprocmask
ffffffff810471c0 T sys_rt_sigaction
ffffffff81047280 T sys_sgetmask
ffffffff810472a0 T sys_ssetmask
ffffffff810472e0 T sys_signal
ffffffff81047320 T sys_pause
ffffffff81047360 T sys_rt_sigsuspend
ffffffff81047400 t argv_cleanup
ffffffff81047410 t kernel_shutdown_prepare
ffffffff81047450 T kernel_power_off
ffffffff810474a0 T kernel_halt
ffffffff810474e0 T unregister_reboot_notifier
ffffffff810474f0 T register_reboot_notifier
ffffffff81047500 T emergency_restart
ffffffff81047520 t set_one_prio
ffffffff81047600 t override_release
ffffffff810476a0 t set_user.isra.12
ffffffff81047730 T orderly_poweroff
ffffffff81047810 T sys_setpriority
ffffffff81047a50 T sys_getpriority
ffffffff81047ca0 T kernel_restart_prepare
ffffffff81047ce0 T kernel_restart
ffffffff81047d30 t deferred_cad
ffffffff81047d40 T sys_reboot
ffffffff81047f60 T ctrl_alt_del
ffffffff81047f90 T sys_setregid
ffffffff81048090 T sys_setgid
ffffffff81048140 T sys_setreuid
ffffffff81048280 T sys_setuid
ffffffff81048370 T sys_setresuid
ffffffff810484e0 T sys_getresuid
ffffffff81048530 T sys_setresgid
ffffffff81048660 T sys_getresgid
ffffffff810486b0 T sys_setfsuid
ffffffff81048780 T sys_setfsgid
ffffffff81048820 T do_sys_times
ffffffff810488f0 T sys_times
ffffffff81048940 T sys_setpgid
ffffffff81048b10 T sys_getpgid
ffffffff81048b70 T sys_getpgrp
ffffffff81048b80 T sys_getsid
ffffffff81048be0 T sys_setsid
ffffffff81048cb0 T sys_newuname
ffffffff81048d70 T sys_uname
ffffffff81048e40 T sys_olduname
ffffffff81048fc0 T sys_sethostname
ffffffff810490b0 T sys_gethostname
ffffffff81049150 T sys_setdomainname
ffffffff81049250 T sys_old_getrlimit
ffffffff81049330 T do_prlimit
ffffffff81049530 T sys_getrlimit
ffffffff81049590 T sys_prlimit64
ffffffff81049780 T sys_setrlimit
ffffffff810497d0 T getrusage
ffffffff81049ba0 T sys_getrusage
ffffffff81049be0 T sys_umask
ffffffff81049c00 T sys_prctl
ffffffff81049f00 T sys_getcpu
ffffffff81049f60 T call_usermodehelper_setfns
ffffffff81049f70 t proc_cap_handler
ffffffff8104a150 T call_usermodehelper_setup
ffffffff8104a1d0 t ____call_usermodehelper
ffffffff8104a2f0 T usermodehelper_read_unlock
ffffffff8104a300 T usermodehelper_read_lock_wait
ffffffff8104a3b0 T usermodehelper_read_trylock
ffffffff8104a480 T call_usermodehelper_freeinfo
ffffffff8104a4a0 T call_usermodehelper_exec
ffffffff8104a5a0 t umh_complete
ffffffff8104a5c0 t wait_for_helper
ffffffff8104a660 t __call_usermodehelper
ffffffff8104a710 t free_modprobe_argv
ffffffff8104a730 T __request_module
ffffffff8104a8c0 T __usermodehelper_set_disable_depth
ffffffff8104a900 T __usermodehelper_disable
ffffffff8104a9f0 t need_to_create_worker
ffffffff8104aa40 t move_linked_works
ffffffff8104aad0 t cwq_activate_first_delayed
ffffffff8104ab60 t send_mayday
ffffffff8104abc0 t wake_up_worker
ffffffff8104abe0 t alloc_worker
ffffffff8104ac40 t wq_clamp_max_active
ffffffff8104acb0 t do_work_for_cpu
ffffffff8104acd0 t wq_barrier_func
ffffffff8104ace0 t worker_enter_idle
ffffffff8104ae10 t start_worker
ffffffff8104ae30 t destroy_worker
ffffffff8104af00 t create_worker
ffffffff8104b0d0 t gcwq_mayday_timeout
ffffffff8104b140 t idle_worker_timeout
ffffffff8104b1c0 T work_on_cpu
ffffffff8104b280 t get_cwq
ffffffff8104b2c0 T workqueue_congested
ffffffff8104b2e0 T workqueue_set_max_active
ffffffff8104b460 t flush_workqueue_prep_cwqs
ffffffff8104b630 T flush_workqueue
ffffffff8104b9a0 T flush_scheduled_work
ffffffff8104b9b0 T drain_workqueue
ffffffff8104bb80 t get_work_gcwq
ffffffff8104bbd0 T work_cpu
ffffffff8104bbf0 T work_busy
ffffffff8104bc60 T queue_delayed_work_on
ffffffff8104bd60 T schedule_delayed_work_on
ffffffff8104bd80 t insert_work
ffffffff8104bdf0 t insert_wq_barrier
ffffffff8104bea0 t start_flush_work
ffffffff8104bf90 T flush_work
ffffffff8104bfc0 t wait_on_work
ffffffff8104c130 T flush_work_sync
ffffffff8104c190 t __queue_work
ffffffff8104c500 t delayed_work_timer_fn
ffffffff8104c530 T queue_work_on
ffffffff8104c550 T schedule_work_on
ffffffff8104c560 T queue_work
ffffffff8104c580 T schedule_work
ffffffff8104c590 T flush_delayed_work_sync
ffffffff8104c5d0 T flush_delayed_work
ffffffff8104c610 T execute_in_process_context
ffffffff8104c670 t worker_maybe_bind_and_lock.isra.28
ffffffff8104c750 t worker_rebind_fn
ffffffff8104c830 t cwq_dec_nr_in_flight
ffffffff8104c8d0 t process_one_work
ffffffff8104cc40 t process_scheduled_works
ffffffff8104cc80 t rescuer_thread
ffffffff8104ceb0 t manage_workers.isra.30
ffffffff8104d0c0 t worker_thread
ffffffff8104d400 t free_cwqs
ffffffff8104d430 T __alloc_workqueue_key
ffffffff8104d8a0 T destroy_workqueue
ffffffff8104da50 t __cancel_work_timer
ffffffff8104dba0 T cancel_delayed_work_sync
ffffffff8104dbb0 T cancel_work_sync
ffffffff8104dbc0 T queue_delayed_work
ffffffff8104dbe0 T schedule_delayed_work
ffffffff8104dc00 T wq_worker_waking_up
ffffffff8104dc30 T wq_worker_sleeping
ffffffff8104dcc0 T schedule_on_each_cpu
ffffffff8104ddb0 T keventd_up
ffffffff8104ddc0 T freeze_workqueues_begin
ffffffff8104df20 T freeze_workqueues_busy
ffffffff8104e030 T thaw_workqueues
ffffffff8104e1e0 T is_container_init
ffffffff8104e210 T find_pid_ns
ffffffff8104e2b0 T find_vpid
ffffffff8104e2d0 T pid_task
ffffffff8104e310 T get_task_pid
ffffffff8104e340 T get_pid_task
ffffffff8104e390 T find_get_pid
ffffffff8104e3a0 T task_active_pid_ns
ffffffff8104e3d0 T put_pid
ffffffff8104e430 t delayed_put_pid
ffffffff8104e440 t free_pidmap.isra.1
ffffffff8104e470 T next_pidmap
ffffffff8104e500 T free_pid
ffffffff8104e5a0 t __change_pid
ffffffff8104e610 T alloc_pid
ffffffff8104ea50 T attach_pid
ffffffff8104ea90 T detach_pid
ffffffff8104eaa0 T change_pid
ffffffff8104eb10 T transfer_pid
ffffffff8104eb70 T find_task_by_pid_ns
ffffffff8104eba0 T find_task_by_vpid
ffffffff8104ebc0 T pid_nr_ns
ffffffff8104ebf0 T task_tgid_nr_ns
ffffffff8104ec10 T __task_pid_nr_ns
ffffffff8104ec70 T pid_vnr
ffffffff8104ec90 T find_ge_pid
ffffffff8104ecd0 T do_trace_rcu_torture_read
ffffffff8104ece0 T wait_rcu_gp
ffffffff8104ed30 t wakeme_after_rcu
ffffffff8104ed40 T search_exception_tables
ffffffff8104ed80 T core_kernel_text
ffffffff8104edc0 T core_kernel_data
ffffffff8104ede0 T __kernel_text_address
ffffffff8104ee60 T kernel_text_address
ffffffff8104eec0 T func_ptr_is_kernel_text
ffffffff8104ef20 t param_array_free
ffffffff8104ef70 t module_attr_show
ffffffff8104efa0 t module_attr_store
ffffffff8104efd0 t uevent_filter
ffffffff8104efe0 t param_array_get
ffffffff8104f090 T param_get_string
ffffffff8104f0b0 t maybe_kfree_parameter
ffffffff8104f120 t param_free_charp
ffffffff8104f130 T __kernel_param_lock
ffffffff8104f140 t param_attr_store
ffffffff8104f1c0 T __kernel_param_unlock
ffffffff8104f1d0 t param_attr_show
ffffffff8104f250 T param_get_invbool
ffffffff8104f270 T param_get_bool
ffffffff8104f290 T param_get_charp
ffffffff8104f2b0 T param_get_ulong
ffffffff8104f2d0 T param_get_long
ffffffff8104f2f0 T param_get_uint
ffffffff8104f310 T param_get_int
ffffffff8104f330 T param_get_ushort
ffffffff8104f350 T param_get_short
ffffffff8104f370 T param_get_byte
ffffffff8104f390 T param_set_copystring
ffffffff8104f400 T param_set_bool
ffffffff8104f420 T param_set_bint
ffffffff8104f470 T param_set_invbool
ffffffff8104f4b0 T param_set_byte
ffffffff8104f4f0 T param_set_ushort
ffffffff8104f530 T param_set_uint
ffffffff8104f570 T param_set_ulong
ffffffff8104f5a0 T param_set_short
ffffffff8104f5f0 T param_set_int
ffffffff8104f630 T param_set_long
ffffffff8104f660 t add_sysfs_param.isra.3
ffffffff8104f8a0 T param_set_charp
ffffffff8104f990 t param_array_set
ffffffff8104faa0 T parameqn
ffffffff8104faf0 T parameq
ffffffff8104fb60 T parse_args
ffffffff8104fe80 T module_param_sysfs_setup
ffffffff8104ff50 T module_param_sysfs_remove
ffffffff8104ffa0 T destroy_params
ffffffff8104ffe0 T __modver_version_show
ffffffff81050010 t posix_clock_realtime_adj
ffffffff81050020 t posix_clock_realtime_get
ffffffff81050040 t posix_clock_realtime_set
ffffffff81050050 t posix_get_boottime
ffffffff81050070 t posix_get_monotonic_coarse
ffffffff81050090 t posix_get_realtime_coarse
ffffffff810500b0 t posix_get_coarse_res
ffffffff810500e0 t posix_get_monotonic_raw
ffffffff81050100 t common_timer_get
ffffffff81050210 t common_timer_del
ffffffff81050230 t common_timer_create
ffffffff81050250 t common_timer_set
ffffffff810503f0 t common_nsleep
ffffffff81050420 t posix_ktime_get_ts
ffffffff81050440 t release_posix_timer
ffffffff810504c0 t k_itimer_rcu_free
ffffffff810504d0 t __lock_timer
ffffffff81050550 T posix_timer_event
ffffffff81050590 t posix_timer_fn
ffffffff81050660 t clockid_to_kclock
ffffffff810506b0 T posix_timers_register_clock
ffffffff81050720 T do_schedule_next_timer
ffffffff81050800 T sys_timer_create
ffffffff81050be0 T sys_timer_gettime
ffffffff81050ca0 T sys_timer_getoverrun
ffffffff81050ce0 T sys_timer_settime
ffffffff81050e80 T sys_timer_delete
ffffffff81050fa0 T exit_itimers
ffffffff81051090 T sys_clock_settime
ffffffff81051110 T sys_clock_gettime
ffffffff81051180 T sys_clock_adjtime
ffffffff81051250 T sys_clock_getres
ffffffff810512c0 T sys_clock_nanosleep
ffffffff81051380 T clock_nanosleep_restart
ffffffff810513d0 T kthread_should_stop
ffffffff810513f0 T __init_kthread_worker
ffffffff81051410 t kthread_flush_work_fn
ffffffff81051420 T flush_kthread_work
ffffffff810514b0 T queue_kthread_work
ffffffff81051550 T flush_kthread_worker
ffffffff810515f0 T kthread_worker_fn
ffffffff81051790 T kthread_freezable_should_stop
ffffffff810517f0 t kthread
ffffffff81051880 T kthread_stop
ffffffff810518f0 T kthread_create_on_node
ffffffff81051a10 T kthread_bind
ffffffff81051a90 T kthread_data
ffffffff81051aa0 T tsk_fork_get_node
ffffffff81051ac0 T kthreadd
ffffffff81051c30 T __init_waitqueue_head
ffffffff81051c50 T bit_waitqueue
ffffffff81051d20 T __wake_up_bit
ffffffff81051d50 T wake_up_bit
ffffffff81051d90 T prepare_to_wait_exclusive
ffffffff81051e20 T prepare_to_wait
ffffffff81051eb0 T remove_wait_queue
ffffffff81051f10 T add_wait_queue_exclusive
ffffffff81051f60 T add_wait_queue
ffffffff81051fc0 T abort_exclusive_wait
ffffffff81052070 T autoremove_wake_function
ffffffff810520a0 T wake_bit_function
ffffffff810520d0 T finish_wait
ffffffff81052160 T __kfifo_len_r
ffffffff81052190 T __kfifo_skip_r
ffffffff810521d0 T __kfifo_dma_in_finish_r
ffffffff81052220 T __kfifo_dma_out_finish_r
ffffffff81052260 T __kfifo_init
ffffffff810522c0 T __kfifo_free
ffffffff81052300 T __kfifo_alloc
ffffffff810523a0 t setup_sgl_buf.part.4
ffffffff81052540 t setup_sgl.isra.5
ffffffff81052610 T __kfifo_dma_out_prepare
ffffffff81052650 T __kfifo_dma_in_prepare
ffffffff81052690 t kfifo_copy_to_user.isra.6
ffffffff81052760 T __kfifo_to_user_r
ffffffff81052820 T __kfifo_to_user
ffffffff810528a0 t kfifo_copy_from_user.isra.7
ffffffff81052980 T __kfifo_from_user
ffffffff81052a00 T __kfifo_from_user_r
ffffffff81052ae0 t kfifo_copy_out.isra.8
ffffffff81052b70 t kfifo_out_copy_r
ffffffff81052bd0 T __kfifo_out_peek
ffffffff81052c00 T __kfifo_out
ffffffff81052c10 T __kfifo_out_r
ffffffff81052c60 T __kfifo_out_peek_r
ffffffff81052c80 t kfifo_copy_in.isra.11
ffffffff81052d10 T __kfifo_in
ffffffff81052d50 T __kfifo_in_r
ffffffff81052df0 T __kfifo_dma_out_prepare_r
ffffffff81052e60 T __kfifo_dma_in_prepare_r
ffffffff81052ed0 T __kfifo_max_r
ffffffff81052ef0 W compat_sys_ipc
ffffffff81052ef0 W compat_sys_kexec_load
ffffffff81052ef0 W compat_sys_keyctl
ffffffff81052ef0 W compat_sys_open_by_handle_at
ffffffff81052ef0 W ppc_rtas
ffffffff81052ef0 W sys32_quotactl
ffffffff81052ef0 W sys_add_key
ffffffff81052ef0 W sys_ipc
ffffffff81052ef0 W sys_kexec_load
ffffffff81052ef0 W sys_keyctl
ffffffff81052ef0 W sys_lookup_dcookie
ffffffff81052ef0 W sys_name_to_handle_at
ffffffff81052ef0 T sys_ni_syscall
ffffffff81052ef0 W sys_open_by_handle_at
ffffffff81052ef0 W sys_pciconfig_iobase
ffffffff81052ef0 W sys_pciconfig_read
ffffffff81052ef0 W sys_pciconfig_write
ffffffff81052ef0 W sys_quotactl
ffffffff81052ef0 W sys_request_key
ffffffff81052ef0 W sys_spu_create
ffffffff81052ef0 W sys_spu_run
ffffffff81052ef0 W sys_subpage_prot
ffffffff81052ef0 W sys_vm86
ffffffff81052ef0 W sys_vm86old
ffffffff81052f00 t bump_cpu_timer
ffffffff81052fe0 t cleanup_timers
ffffffff81053100 t arm_timer
ffffffff81053220 t process_cpu_nsleep_restart
ffffffff81053230 t cpu_clock_sample
ffffffff810532a0 t sample_to_timespec
ffffffff810532e0 t posix_cpu_timer_del
ffffffff810533f0 t posix_cpu_timer_create
ffffffff810534b0 t thread_cpu_timer_create
ffffffff810534c0 t process_cpu_timer_create
ffffffff810534d0 t check_clock
ffffffff81053540 t posix_cpu_clock_set
ffffffff81053560 t posix_cpu_clock_getres
ffffffff810535c0 t thread_cpu_clock_getres
ffffffff810535d0 t process_cpu_clock_getres
ffffffff810535e0 t check_cpu_itimer.part.3
ffffffff81053680 T thread_group_cputime
ffffffff81053720 t cpu_clock_sample_group
ffffffff810537b0 t posix_cpu_clock_get
ffffffff810538e0 t thread_cpu_clock_get
ffffffff810538f0 t process_cpu_clock_get
ffffffff81053900 T thread_group_cputimer
ffffffff810539f0 t cpu_timer_sample_group
ffffffff81053a80 t posix_cpu_timer_get
ffffffff81053c10 T posix_cpu_timers_exit
ffffffff81053c40 T posix_cpu_timers_exit_group
ffffffff81053c80 T posix_cpu_timer_schedule
ffffffff81053df0 t cpu_timer_fire
ffffffff81053e60 t posix_cpu_timer_set
ffffffff81054190 t do_cpu_nanosleep
ffffffff81054380 t posix_cpu_nsleep_restart
ffffffff81054420 t posix_cpu_nsleep
ffffffff81054550 t process_cpu_nsleep
ffffffff81054560 T run_posix_cpu_timers
ffffffff81054d30 T set_process_cpu_timer
ffffffff81054e40 T update_rlimit_cpu
ffffffff81054ea0 T __mutex_init
ffffffff81054ed0 T atomic_dec_and_mutex_lock
ffffffff81054f70 T ktime_add_safe
ffffffff81054fb0 T hrtimer_init_sleeper
ffffffff81054fc0 T hrtimer_forward
ffffffff81055080 t enqueue_hrtimer
ffffffff810550d0 T hrtimer_get_res
ffffffff81055110 t update_rmtp
ffffffff81055170 t hrtimer_wakeup
ffffffff810551a0 T hrtimer_init
ffffffff81055290 t hrtimer_force_reprogram
ffffffff81055300 t __remove_hrtimer
ffffffff810553b0 t retrigger_next_event
ffffffff81055410 t __run_hrtimer.isra.34
ffffffff81055500 t lock_hrtimer_base.isra.36
ffffffff81055560 T hrtimer_get_remaining
ffffffff810555b0 T hrtimer_try_to_cancel
ffffffff81055630 T hrtimer_cancel
ffffffff81055650 T clock_was_set_delayed
ffffffff81055680 T clock_was_set
ffffffff810556a0 T hrtimers_resume
ffffffff810556f0 T __hrtimer_start_range_ns
ffffffff810559a0 T hrtimer_start
ffffffff810559b0 T hrtimer_start_range_ns
ffffffff810559c0 T hrtimer_interrupt
ffffffff81055bf0 t __hrtimer_peek_ahead_timers.part.37
ffffffff81055c20 T hrtimer_peek_ahead_timers
ffffffff81055c70 t run_hrtimer_softirq
ffffffff81055cb0 T hrtimer_run_pending
ffffffff81055db0 T hrtimer_run_queues
ffffffff81055f00 T hrtimer_nanosleep
ffffffff81056050 T sys_nanosleep
ffffffff810560c0 T down_read_trylock
ffffffff810560e0 T down_write_trylock
ffffffff81056100 T up_read
ffffffff81056120 T up_write
ffffffff81056140 T downgrade_write
ffffffff81056160 t create_new_namespaces
ffffffff81056300 T free_nsproxy
ffffffff81056390 T copy_namespaces
ffffffff81056440 T unshare_nsproxy_namespaces
ffffffff810564d0 T switch_task_namespaces
ffffffff81056500 T exit_task_namespaces
ffffffff81056510 T sys_setns
ffffffff81056620 T __srcu_read_lock
ffffffff81056650 T __srcu_read_unlock
ffffffff81056670 T srcu_batches_completed
ffffffff81056680 T init_srcu_struct
ffffffff810566c0 t srcu_readers_active_idx.isra.0
ffffffff81056730 t __synchronize_srcu
ffffffff810567e0 T synchronize_srcu_expedited
ffffffff810567f0 T synchronize_srcu
ffffffff81056800 T cleanup_srcu_struct
ffffffff81056890 T down_trylock
ffffffff810568d0 T down
ffffffff81056910 T down_interruptible
ffffffff81056970 T down_killable
ffffffff810569d0 T down_timeout
ffffffff81056a30 T up
ffffffff81056a70 t notifier_call_chain
ffffffff81056ae0 T __atomic_notifier_call_chain
ffffffff81056af0 T atomic_notifier_call_chain
ffffffff81056b10 T raw_notifier_chain_register
ffffffff81056b50 T raw_notifier_chain_unregister
ffffffff81056b90 T __raw_notifier_call_chain
ffffffff81056ba0 T raw_notifier_call_chain
ffffffff81056bb0 T __srcu_notifier_call_chain
ffffffff81056c30 T srcu_notifier_call_chain
ffffffff81056c40 T __blocking_notifier_call_chain
ffffffff81056ca0 T blocking_notifier_call_chain
ffffffff81056cb0 T blocking_notifier_chain_cond_register
ffffffff81056d10 T atomic_notifier_chain_register
ffffffff81056d70 T register_die_notifier
ffffffff81056d90 T atomic_notifier_chain_unregister
ffffffff81056e00 T unregister_die_notifier
ffffffff81056e10 T srcu_init_notifier_head
ffffffff81056e40 T srcu_notifier_chain_register
ffffffff81056ee0 T srcu_notifier_chain_unregister
ffffffff81056fa0 T blocking_notifier_chain_unregister
ffffffff81057050 T blocking_notifier_chain_register
ffffffff810570f0 T notify_die
ffffffff81057130 t uevent_helper_store
ffffffff810571a0 t notes_read
ffffffff810571c0 t uevent_helper_show
ffffffff810571f0 t uevent_seqnum_show
ffffffff81057220 t fscaps_show
ffffffff81057250 T override_creds
ffffffff81057270 T set_security_override
ffffffff81057280 T set_security_override_from_ctx
ffffffff81057290 T set_create_files_as
ffffffff810572a0 T prepare_creds
ffffffff810573a0 t put_cred_rcu
ffffffff81057420 T __put_cred
ffffffff81057460 T revert_creds
ffffffff81057490 T abort_creds
ffffffff810574c0 T commit_creds
ffffffff810575e0 T exit_creds
ffffffff81057670 T get_task_cred
ffffffff810576b0 T prepare_kernel_cred
ffffffff81057730 T cred_alloc_blank
ffffffff81057760 T prepare_exec_creds
ffffffff81057790 T copy_creds
ffffffff81057860 t lowest_in_progress
ffffffff810578c0 T async_synchronize_cookie_domain
ffffffff810579e0 T async_synchronize_cookie
ffffffff810579f0 T async_synchronize_full
ffffffff81057a30 T async_synchronize_full_domain
ffffffff81057a40 t __async_schedule
ffffffff81057b80 T async_schedule_domain
ffffffff81057b90 T async_schedule
ffffffff81057ba0 t async_run_entry_fn
ffffffff81057d00 t cmp_range
ffffffff81057d10 T add_range
ffffffff81057d30 T add_range_with_merge
ffffffff81057db0 T subtract_range
ffffffff81057ee0 T clean_sort_range
ffffffff81057fe0 T sort_range
ffffffff81058000 T groups_free
ffffffff81058050 T set_groups
ffffffff810581d0 T set_current_groups
ffffffff81058240 T groups_alloc
ffffffff81058340 T groups_search
ffffffff810583a0 T in_egroup_p
ffffffff810583d0 T in_group_p
ffffffff81058400 T sys_getgroups
ffffffff81058500 T sys_setgroups
ffffffff81058620 T tg_nop
ffffffff81058630 T kick_process
ffffffff81058680 t __wake_up_common
ffffffff81058700 T __wake_up_locked
ffffffff81058710 T __wake_up_locked_key
ffffffff81058730 T task_nice
ffffffff81058740 t cpu_allnodes_mask
ffffffff81058750 t cpu_cpu_mask
ffffffff81058780 t sd_init_CPU
ffffffff81058820 t sd_init_NODE
ffffffff810588a0 t sd_init_MC
ffffffff81058940 t cpu_shares_read_u64
ffffffff81058950 t cpu_cfs_quota_read_s64
ffffffff81058990 t cpu_cfs_period_read_u64
ffffffff810589c0 t tg_cfs_schedulable_down
ffffffff81058aa0 t cpu_stats_show
ffffffff81058b00 t cpu_rt_runtime_read
ffffffff81058b30 t cpu_rt_period_read_uint
ffffffff81058b60 t cpuacct_populate
ffffffff81058b80 t cpu_cgroup_populate
ffffffff81058ba0 t cpuusage_read
ffffffff81058c10 t cpuusage_write
ffffffff81058c70 t cpuacct_stats_show
ffffffff81058d50 t cpuacct_percpu_seq_read
ffffffff81058dd0 t cpuacct_destroy
ffffffff81058e00 t free_sched_groups
ffffffff81058e70 t free_sched_domain
ffffffff81058ee0 t sd_degenerate
ffffffff81058f30 t sd_init_ALLNODES
ffffffff81058fd0 t cpuacct_create
ffffffff81059070 t cpu_shares_write_u64
ffffffff81059090 t tg_rt_schedulable
ffffffff81059270 t cpu_cgroup_can_attach
ffffffff810592f0 T completion_done
ffffffff81059330 T try_wait_for_completion
ffffffff81059390 T complete_all
ffffffff810593f0 T complete
ffffffff81059450 T __wake_up_sync_key
ffffffff810594d0 T __wake_up_sync
ffffffff810594e0 T __wake_up
ffffffff81059540 t task_rq_lock
ffffffff810595e0 t get_group
ffffffff81059660 t free_sched_group_rcu
ffffffff81059690 t free_rootdomain
ffffffff810596c0 t __hrtick_start
ffffffff81059710 t cpuset_cpu_active
ffffffff81059740 t cpuset_cpu_inactive
ffffffff81059770 t hotplug_hrtick
ffffffff810597e0 t sched_mc_power_savings_show
ffffffff81059800 t find_process_by_pid
ffffffff81059820 t finish_task_switch
ffffffff810598f0 t set_rq_online.part.49
ffffffff81059940 t set_rq_offline.part.50
ffffffff810599a0 t rq_attach_root
ffffffff81059ac0 t cpu_attach_domain
ffffffff81059cc0 t cpu_node_mask
ffffffff81059df0 T start_bandwidth_timer
ffffffff81059e50 T update_rq_clock
ffffffff81059ea0 t dequeue_task
ffffffff81059f30 t enqueue_task
ffffffff81059fa0 t hrtick
ffffffff8105a030 T hrtick_start
ffffffff8105a0d0 T resched_task
ffffffff8105a140 T set_user_nice
ffffffff8105a2c0 T resched_cpu
ffffffff8105a360 T sched_avg_update
ffffffff8105a3b0 T walk_tg_tree_from
ffffffff8105a470 t tg_set_cfs_bandwidth
ffffffff8105a680 t tg_set_rt_bandwidth
ffffffff8105a7a0 T activate_task
ffffffff8105a7c0 T deactivate_task
ffffffff8105a7e0 T task_curr
ffffffff8105a810 T check_preempt_curr
ffffffff8105a8b0 t ttwu_do_wakeup
ffffffff8105a940 t __cond_resched
ffffffff8105a980 T __cond_resched_lock
ffffffff8105a9d0 t ttwu_do_activate.constprop.69
ffffffff8105aa40 t sched_ttwu_pending
ffffffff8105aaa0 T set_task_cpu
ffffffff8105ab80 t __migrate_task
ffffffff8105ace0 t migration_cpu_stop
ffffffff8105ad10 T wait_task_inactive
ffffffff8105ae10 T scheduler_ipi
ffffffff8105ae50 T cpus_share_cache
ffffffff8105ae80 T sched_fork
ffffffff8105b0a0 T schedule_tail
ffffffff8105b140 T nr_running
ffffffff8105b1b0 T nr_uninterruptible
ffffffff8105b230 T nr_context_switches
ffffffff8105b290 T nr_iowait
ffffffff8105b300 T nr_iowait_cpu
ffffffff8105b320 T this_cpu_load
ffffffff8105b340 T get_avenrun
ffffffff8105b380 T calc_global_load
ffffffff8105b470 T update_cpu_load
ffffffff8105b5c0 T sched_exec
ffffffff8105b680 T task_delta_exec
ffffffff8105b710 T task_sched_runtime
ffffffff8105b7b0 T account_user_time
ffffffff8105b890 T account_system_time
ffffffff8105ba60 T account_steal_time
ffffffff8105ba80 T account_idle_time
ffffffff8105bac0 T account_process_tick
ffffffff8105bc70 T account_steal_ticks
ffffffff8105bc90 T account_idle_ticks
ffffffff8105bcd0 T task_times
ffffffff8105bd80 T thread_group_times
ffffffff8105be30 T scheduler_tick
ffffffff8105bf60 T get_parent_ip
ffffffff8105bf90 T mutex_spin_on_owner
ffffffff8105bff0 T rt_mutex_setprio
ffffffff8105c1f0 T can_nice
ffffffff8105c230 t __sched_setscheduler
ffffffff8105c730 T sched_setscheduler
ffffffff8105c740 t do_sched_setscheduler
ffffffff8105c7b0 T sys_nice
ffffffff8105c890 T task_prio
ffffffff8105c8a0 T idle_cpu
ffffffff8105c8f0 T idle_task
ffffffff8105c910 T sched_setscheduler_nocheck
ffffffff8105c920 T sched_set_stop_task
ffffffff8105c9b0 T sys_sched_setscheduler
ffffffff8105c9d0 T sys_sched_setparam
ffffffff8105c9f0 T sys_sched_getscheduler
ffffffff8105ca30 T sys_sched_getparam
ffffffff8105caa0 T sched_getaffinity
ffffffff8105cb20 T sys_sched_getaffinity
ffffffff8105cbc0 T sys_sched_yield
ffffffff8105cc20 T sys_sched_get_priority_max
ffffffff8105cc60 T sys_sched_get_priority_min
ffffffff8105cca0 T sys_sched_rr_get_interval
ffffffff8105cd60 T sched_show_task
ffffffff8105ce50 T show_state_filter
ffffffff8105cef0 T do_set_cpus_allowed
ffffffff8105cf40 t try_to_wake_up
ffffffff8105d1e0 T default_wake_function
ffffffff8105d1f0 T wake_up_process
ffffffff8105d210 T wake_up_state
ffffffff8105d220 T wake_up_new_task
ffffffff8105d350 T set_cpus_allowed_ptr
ffffffff8105d480 T sched_setaffinity
ffffffff8105d5b0 T sys_sched_setaffinity
ffffffff8105d610 T idle_task_exit
ffffffff8105d6d0 W arch_sd_sibling_asym_packing
ffffffff8105d6e0 T build_sched_domain
ffffffff8105d7c0 t build_sched_domains
ffffffff8105e120 W arch_update_cpu_topology
ffffffff8105e130 T alloc_sched_domains
ffffffff8105e150 T free_sched_domains
ffffffff8105e160 T partition_sched_domains
ffffffff8105e4d0 t sched_mc_power_savings_store
ffffffff8105e540 T in_sched_functions
ffffffff8105e580 T sched_create_group
ffffffff8105e6e0 t cpu_cgroup_create
ffffffff8105e720 T sched_destroy_group
ffffffff8105e7f0 t cpu_cgroup_destroy
ffffffff8105e800 T sched_move_task
ffffffff8105e940 t cpu_cgroup_exit
ffffffff8105e960 t cpu_cgroup_attach
ffffffff8105e9c0 T sched_group_set_rt_runtime
ffffffff8105e9f0 t cpu_rt_runtime_write
ffffffff8105ea10 T sched_group_rt_runtime
ffffffff8105ea40 T sched_group_set_rt_period
ffffffff8105ea70 t cpu_rt_period_write_uint
ffffffff8105ea90 T sched_group_rt_period
ffffffff8105eac0 T sched_rt_can_attach
ffffffff8105eae0 T sched_rt_handler
ffffffff8105ec70 T tg_set_cfs_quota
ffffffff8105eca0 t cpu_cfs_quota_write_s64
ffffffff8105ecc0 T tg_get_cfs_quota
ffffffff8105ecf0 T tg_set_cfs_period
ffffffff8105ed10 t cpu_cfs_period_write_u64
ffffffff8105ed30 T tg_get_cfs_period
ffffffff8105ed60 T cpuacct_charge
ffffffff8105ede0 t sched_clock_local
ffffffff8105ee60 T sched_clock_init
ffffffff8105eed0 T sched_clock_cpu
ffffffff8105efc0 T local_clock
ffffffff8105f000 T cpu_clock
ffffffff8105f030 T sched_clock_idle_sleep_event
ffffffff8105f040 T sched_clock_tick
ffffffff8105f0e0 T sched_clock_idle_wakeup_event
ffffffff8105f100 t select_task_rq_idle
ffffffff8105f110 t pick_next_task_idle
ffffffff8105f120 t put_prev_task_idle
ffffffff8105f130 t task_tick_idle
ffffffff8105f140 t set_curr_task_idle
ffffffff8105f150 t get_rr_interval_idle
ffffffff8105f160 t prio_changed_idle
ffffffff8105f170 t switched_to_idle
ffffffff8105f180 t check_preempt_curr_idle
ffffffff8105f190 t dequeue_task_idle
ffffffff8105f1e0 t update_cfs_load
ffffffff8105f390 t tg_throttle_down
ffffffff8105f3d0 t task_waking_fair
ffffffff8105f3f0 t tg_load_down
ffffffff8105f460 t task_move_group_fair
ffffffff8105f510 t calc_delta_mine
ffffffff8105f5b0 t set_next_entity
ffffffff8105f630 t update_sysctl
ffffffff8105f6a0 t rq_offline_fair
ffffffff8105f6b0 t rq_online_fair
ffffffff8105f6c0 t __enqueue_entity
ffffffff8105f730 t move_task
ffffffff8105f780 t clear_buddies
ffffffff8105f860 t effective_load.isra.33
ffffffff8105f8f0 t select_task_rq_fair
ffffffff81060190 t wakeup_preempt_entity.isra.37
ffffffff810601e0 t sched_slice.isra.45
ffffffff81060260 t get_rr_interval_fair
ffffffff810602a0 t place_entity
ffffffff81060340 t switched_from_fair
ffffffff810603a0 t can_migrate_task.part.47
ffffffff81060430 t active_load_balance_cpu_stop
ffffffff81060680 t prio_changed_fair
ffffffff810606c0 t switched_to_fair
ffffffff810606f0 t hrtick_start_fair
ffffffff810607e0 t hrtick_update
ffffffff81060840 t pick_next_task_fair
ffffffff81060980 T sched_init_granularity
ffffffff81060990 T __pick_first_entity
ffffffff810609b0 T account_cfs_bandwidth_used
ffffffff810609c0 T __refill_cfs_bandwidth_runtime
ffffffff810609f0 T init_cfs_bandwidth
ffffffff81060a70 T __start_cfs_bandwidth
ffffffff81060ae0 t __account_cfs_rq_runtime
ffffffff81060c60 t update_curr
ffffffff81060dc0 t task_fork_fair
ffffffff81060f50 t update_cfs_shares
ffffffff810610f0 t update_shares
ffffffff810611e0 t tg_unthrottle_up
ffffffff81061240 t yield_task_fair
ffffffff810612c0 t yield_to_task_fair
ffffffff81061330 t dequeue_entity
ffffffff810615a0 t throttle_cfs_rq
ffffffff810616d0 t dequeue_task_fair
ffffffff810617c0 t put_prev_task_fair
ffffffff81061850 t check_preempt_wakeup
ffffffff81061ab0 t task_tick_fair
ffffffff81061bd0 t set_curr_task_fair
ffffffff81061c30 t enqueue_entity
ffffffff81061e10 t enqueue_task_fair
ffffffff81061eb0 T unthrottle_cfs_rq
ffffffff81062010 t distribute_cfs_runtime
ffffffff810620e0 t sched_cfs_slack_timer
ffffffff810621d0 t sched_cfs_period_timer
ffffffff81062310 T unthrottle_offline_cfs_rqs
ffffffff81062390 T default_scale_freq_power
ffffffff810623b0 T default_scale_smt_power
ffffffff810623d0 T scale_rt_power
ffffffff81062440 T update_group_power
ffffffff81062590 t find_busiest_group
ffffffff81063170 t load_balance
ffffffff81063860 t run_rebalance_domains
ffffffff810639e0 T idle_balance
ffffffff81063b20 T update_max_interval
ffffffff81063b50 T trigger_load_balance
ffffffff81063b90 T init_cfs_rq
ffffffff81063bb0 T free_fair_sched_group
ffffffff81063c50 T unregister_fair_sched_group
ffffffff81063d00 T init_tg_cfs_entry
ffffffff81063da0 T alloc_fair_sched_group
ffffffff81063f00 T sched_group_set_shares
ffffffff81064010 t get_rr_interval_rt
ffffffff81064030 t pick_next_pushable_task
ffffffff81064090 t __enable_runtime
ffffffff81064170 t __disable_runtime
ffffffff81064360 t pull_rt_task
ffffffff81064660 t pre_schedule_rt
ffffffff81064680 t find_lowest_rq
ffffffff810647c0 t select_task_rq_rt
ffffffff81064830 t update_rt_migration
ffffffff810648f0 t rq_offline_rt
ffffffff81064950 t rq_online_rt
ffffffff810649c0 t dequeue_pushable_task
ffffffff81064a30 t set_curr_task_rt
ffffffff81064a50 t pick_next_task_rt
ffffffff81064b30 t enqueue_pushable_task
ffffffff81064bd0 t set_cpus_allowed_rt
ffffffff81064cd0 t requeue_task_rt.isra.25
ffffffff81064d60 t yield_task_rt
ffffffff81064d70 t dequeue_rt_stack
ffffffff81065050 t balance_runtime.part.30
ffffffff810651f0 t prio_changed_rt
ffffffff81065270 t switched_from_rt
ffffffff81065290 t check_preempt_curr_rt
ffffffff81065350 t push_rt_task.part.35
ffffffff810655a0 t post_schedule_rt
ffffffff810655c0 t switched_to_rt
ffffffff81065640 t task_woken_rt
ffffffff810656b0 t __enqueue_rt_entity
ffffffff81065920 t dequeue_rt_entity
ffffffff81065970 t update_curr_rt
ffffffff81065c20 t task_tick_rt
ffffffff81065ce0 t put_prev_task_rt
ffffffff81065d40 t dequeue_task_rt
ffffffff81065d90 t sched_rt_period_timer
ffffffff81066130 t enqueue_task_rt
ffffffff810661c0 T init_rt_bandwidth
ffffffff810661f0 T init_rt_rq
ffffffff81066290 T free_rt_sched_group
ffffffff81066320 T init_tg_rt_entry
ffffffff810663a0 T alloc_rt_sched_group
ffffffff81066530 T update_runtime
ffffffff81066600 T init_sched_rt_class
ffffffff81066660 t select_task_rq_stop
ffffffff81066670 t check_preempt_curr_stop
ffffffff81066680 t pick_next_task_stop
ffffffff810666a0 t enqueue_task_stop
ffffffff810666b0 t dequeue_task_stop
ffffffff810666c0 t put_prev_task_stop
ffffffff810666d0 t task_tick_stop
ffffffff810666e0 t set_curr_task_stop
ffffffff810666f0 t get_rr_interval_stop
ffffffff81066700 t prio_changed_stop
ffffffff81066710 t switched_to_stop
ffffffff81066720 t yield_task_stop
ffffffff81066730 T cpupri_find
ffffffff81066820 T cpupri_set
ffffffff810668b0 T cpupri_init
ffffffff810669b0 T cpupri_cleanup
ffffffff810669c0 T pm_qos_request
ffffffff810669e0 T pm_qos_request_active
ffffffff810669f0 t pm_qos_power_read
ffffffff81066ad0 T pm_qos_remove_notifier
ffffffff81066af0 T pm_qos_add_notifier
ffffffff81066b10 T pm_qos_read_value
ffffffff81066b20 T pm_qos_update_target
ffffffff81066cc0 T pm_qos_remove_request
ffffffff81066e00 t pm_qos_power_release
ffffffff81066e20 T pm_qos_update_request
ffffffff81066eb0 t pm_qos_power_write
ffffffff81066f80 t pm_qos_work_fn
ffffffff81066f90 T pm_qos_add_request
ffffffff81067060 t pm_qos_power_open
ffffffff810670e0 T pm_qos_update_request_timeout
ffffffff810671c0 t state_show
ffffffff810671d0 t wakeup_count_store
ffffffff81067230 t wakeup_count_show
ffffffff81067270 t pm_async_show
ffffffff810672a0 t pm_async_store
ffffffff810672f0 t state_store
ffffffff81067360 T unregister_pm_notifier
ffffffff81067370 T register_pm_notifier
ffffffff81067380 T pm_notifier_call_chain
ffffffff810673b0 T pm_prepare_console
ffffffff810673f0 T pm_restore_console
ffffffff81067420 t try_to_freeze_tasks
ffffffff81067710 T thaw_processes
ffffffff810677e0 T freeze_processes
ffffffff81067880 T thaw_kernel_threads
ffffffff81067930 T freeze_kernel_threads
ffffffff810679a0 T freezing_slow_path
ffffffff81067a00 T __refrigerator
ffffffff81067af0 T set_freezable
ffffffff81067b50 T freeze_task
ffffffff81067c10 T __thaw_task
ffffffff81067c50 t timekeeper_setup_internals
ffffffff81067cf0 t timekeeping_forward_now
ffffffff81067db0 T get_seconds
ffffffff81067dc0 T monotonic_to_bootbased
ffffffff81067e00 T getboottime
ffffffff81067e30 T getrawmonotonic
ffffffff81067ed0 T current_kernel_time
ffffffff81067f00 T ktime_get_monotonic_offset
ffffffff81067f50 t update_rt_offset
ffffffff81067fb0 t __timekeeping_inject_sleeptime
ffffffff810680b0 t timekeeping_update
ffffffff81068100 t change_clocksource
ffffffff810681b0 T get_monotonic_boottime
ffffffff810682c0 T ktime_get_boottime
ffffffff81068300 T ktime_get_ts
ffffffff810683e0 T ktime_get
ffffffff810684d0 T getnstimeofday
ffffffff81068590 T ktime_get_real
ffffffff810685d0 T do_gettimeofday
ffffffff81068620 T timekeeping_inject_offset
ffffffff81068710 T do_settimeofday
ffffffff810687f0 T timekeeping_notify
ffffffff81068820 T timekeeping_valid_for_hres
ffffffff81068850 T timekeeping_max_deferment
ffffffff81068890 t timekeeping_resume
ffffffff810689a0 t timekeeping_suspend
ffffffff81068ad0 W read_boot_clock
ffffffff81068ae0 T timekeeping_inject_sleeptime
ffffffff81068b90 T __current_kernel_time
ffffffff81068bb0 T get_monotonic_coarse
ffffffff81068c20 T do_timer
ffffffff81069150 T get_xtime_and_monotonic_and_sleep_offset
ffffffff810691b0 T ktime_get_update_offsets
ffffffff81069290 T xtime_update
ffffffff810692d0 t ntp_update_frequency
ffffffff81069340 t sync_cmos_clock
ffffffff81069410 T ntp_clear
ffffffff81069480 T ntp_tick_length
ffffffff810694b0 T second_overflow
ffffffff81069780 T do_adjtimex
ffffffff81069d30 T timecounter_init
ffffffff81069d70 T timecounter_read
ffffffff81069db0 t clocksource_enqueue
ffffffff81069e10 t sysfs_show_available_clocksources
ffffffff81069ee0 t sysfs_show_current_clocksources
ffffffff81069f30 t clocksource_max_deferment
ffffffff81069f80 t clocksource_enqueue_watchdog
ffffffff8106a0d0 t clocksource_watchdog_work
ffffffff8106a110 t clocksource_watchdog
ffffffff8106a360 T timecounter_cyc2time
ffffffff8106a3c0 t clocksource_select
ffffffff8106a490 t sysfs_override_clocksource
ffffffff8106a520 T clocksource_change_rating
ffffffff8106a5a0 t clocksource_watchdog_kthread
ffffffff8106a740 T clocksource_unregister
ffffffff8106a8e0 T clocksource_register
ffffffff8106a990 T clocks_calc_mult_shift
ffffffff8106a9f0 T __clocksource_updatefreq_scale
ffffffff8106aaf0 T __clocksource_register_scale
ffffffff8106ab30 T clocksource_mark_unstable
ffffffff8106abd0 T clocksource_suspend
ffffffff8106ac10 T clocksource_resume
ffffffff8106ac50 T clocksource_touch_watchdog
ffffffff8106ac60 t jiffies_read
ffffffff8106ac70 t timer_list_open
ffffffff8106ac90 t print_name_offset
ffffffff8106ad30 t print_tickdevice
ffffffff8106b0c0 t timer_list_show
ffffffff8106bbb0 T sysrq_timer_list_show
ffffffff8106bbc0 T timecompare_transform
ffffffff8106bbf0 T timecompare_offset
ffffffff8106bdb0 T __timecompare_update
ffffffff8106be70 T time_to_tm
ffffffff8106c210 t delete_clock
ffffffff8106c230 t put_clock_desc
ffffffff8106c250 t posix_clock_release
ffffffff8106c2c0 t posix_clock_open
ffffffff8106c380 T posix_clock_register
ffffffff8106c400 t get_posix_clock.isra.0
ffffffff8106c450 t get_clock_desc
ffffffff8106c4f0 t pc_timer_gettime
ffffffff8106c540 t pc_timer_delete
ffffffff8106c590 t pc_timer_settime
ffffffff8106c610 t pc_timer_create
ffffffff8106c660 t pc_clock_adjtime
ffffffff8106c6c0 t pc_clock_gettime
ffffffff8106c710 t pc_clock_settime
ffffffff8106c770 t pc_clock_getres
ffffffff8106c7c0 t posix_clock_fasync
ffffffff8106c850 t posix_clock_mmap
ffffffff8106c8c0 t posix_clock_compat_ioctl
ffffffff8106c940 t posix_clock_ioctl
ffffffff8106c9c0 t posix_clock_poll
ffffffff8106ca30 t posix_clock_read
ffffffff8106cac0 T posix_clock_unregister
ffffffff8106cb40 t alarmtimer_get_rtcdev
ffffffff8106cb70 t alarmtimer_freezerset
ffffffff8106cbc0 t alarmtimer_suspend
ffffffff8106cd50 t alarmtimer_rtc_add_device
ffffffff8106cdf0 t alarmtimer_fired
ffffffff8106cee0 t alarm_timer_get
ffffffff8106cf50 t alarm_clock_get
ffffffff8106cfb0 t alarm_timer_create
ffffffff8106d020 t update_rmtp
ffffffff8106d090 t alarmtimer_nsleep_wakeup
ffffffff8106d0c0 t alarm_clock_getres
ffffffff8106d130 t alarmtimer_remove
ffffffff8106d1b0 T alarm_init
ffffffff8106d1e0 T alarm_start
ffffffff8106d2b0 T alarm_try_to_cancel
ffffffff8106d340 t alarm_timer_del
ffffffff8106d370 t alarm_timer_set
ffffffff8106d440 T alarm_cancel
ffffffff8106d460 t alarmtimer_do_nsleep
ffffffff8106d500 t alarm_timer_nsleep
ffffffff8106d6b0 T alarm_forward
ffffffff8106d700 t alarm_handle_timer
ffffffff8106d780 T clockevents_notify
ffffffff8106d900 T clockevents_register_device
ffffffff8106da30 T clockevent_delta2ns
ffffffff8106daa0 t clockevents_program_min_delta
ffffffff8106dba0 t clockevents_config.part.0
ffffffff8106dc20 T clockevents_set_mode
ffffffff8106dc90 T clockevents_shutdown
ffffffff8106dcb0 T clockevents_program_event
ffffffff8106ddc0 T clockevents_register_notifier
ffffffff8106de20 T clockevents_config_and_register
ffffffff8106de40 T clockevents_update_freq
ffffffff8106de80 T clockevents_handle_noop
ffffffff8106de90 T clockevents_exchange_device
ffffffff8106df40 t tick_periodic
ffffffff8106dfb0 T tick_get_device
ffffffff8106dfd0 T tick_is_oneshot_available
ffffffff8106e010 T tick_handle_periodic
ffffffff8106e070 T tick_setup_periodic
ffffffff8106e100 t tick_notify
ffffffff8106e510 t tick_broadcast_clear_oneshot
ffffffff8106e520 t tick_broadcast_set_event
ffffffff8106e580 t tick_broadcast_start_periodic
ffffffff8106e5a0 t tick_do_broadcast.constprop.1
ffffffff8106e620 t tick_handle_oneshot_broadcast
ffffffff8106e700 t tick_do_periodic_broadcast
ffffffff8106e740 t tick_handle_periodic_broadcast
ffffffff8106e790 T tick_get_broadcast_device
ffffffff8106e7a0 T tick_get_broadcast_mask
ffffffff8106e7b0 T tick_check_broadcast_device
ffffffff8106e820 T tick_is_broadcast_device
ffffffff8106e840 T tick_device_uses_broadcast
ffffffff8106e8e0 T tick_set_periodic_handler
ffffffff8106e900 T tick_shutdown_broadcast
ffffffff8106e970 T tick_suspend_broadcast
ffffffff8106e9b0 T tick_resume_broadcast
ffffffff8106ea70 T tick_get_broadcast_oneshot_mask
ffffffff8106ea80 T tick_resume_broadcast_oneshot
ffffffff8106eaa0 T tick_check_oneshot_broadcast
ffffffff8106ead0 T tick_broadcast_oneshot_control
ffffffff8106ec10 T tick_broadcast_setup_oneshot
ffffffff8106ed10 T tick_broadcast_on_off
ffffffff8106ef30 T tick_broadcast_switch_to_oneshot
ffffffff8106ef70 T tick_shutdown_broadcast_oneshot
ffffffff8106efa0 T tick_broadcast_oneshot_active
ffffffff8106efb0 T tick_broadcast_oneshot_available
ffffffff8106efd0 T tick_program_event
ffffffff8106eff0 T tick_resume_oneshot
ffffffff8106f020 T tick_setup_oneshot
ffffffff8106f060 T tick_switch_to_oneshot
ffffffff8106f110 T tick_oneshot_mode_active
ffffffff8106f140 T tick_init_highres
ffffffff8106f150 t tick_sched_timer
ffffffff8106f280 T tick_get_tick_sched
ffffffff8106f2a0 T tick_check_idle
ffffffff8106f2b0 T tick_setup_sched_timer
ffffffff8106f380 T tick_cancel_sched_timer
ffffffff8106f3b0 T tick_clock_notify
ffffffff8106f410 T tick_oneshot_notify
ffffffff8106f430 T tick_check_oneshot_change
ffffffff8106f490 t hash_futex
ffffffff8106f550 t get_futex_value_locked
ffffffff8106f580 t cmpxchg_futex_value_locked
ffffffff8106f5d0 t futex_wait_queue_me
ffffffff8106f6e0 t __unqueue_futex
ffffffff8106f740 t wake_futex
ffffffff8106f7b0 t fault_in_user_writeable
ffffffff8106f830 t lookup_pi_state
ffffffff8106fa80 t futex_lock_pi_atomic
ffffffff8106fbe0 t get_futex_key_refs.isra.11
ffffffff8106fc20 t get_futex_key
ffffffff8106fea0 t drop_futex_key_refs.isra.13
ffffffff8106ff10 t futex_wait_setup
ffffffff81070010 t futex_wait
ffffffff81070250 t futex_wait_restart
ffffffff81070290 t futex_wake
ffffffff810703b0 t fixup_pi_state_owner.isra.15
ffffffff81070570 t fixup_owner
ffffffff81070680 t free_pi_state
ffffffff81070720 t futex_requeue
ffffffff81070f60 t unqueue_me_pi
ffffffff81070f90 t futex_wait_requeue_pi
ffffffff81071370 t futex_lock_pi.isra.19
ffffffff81071670 T exit_pi_state_list
ffffffff810717f0 T sys_set_robust_list
ffffffff81071830 T sys_get_robust_list
ffffffff81071910 T handle_futex_death
ffffffff810719d0 T exit_robust_list
ffffffff81071b40 T do_futex
ffffffff810725f0 T sys_futex
ffffffff810727b0 T compat_exit_robust_list
ffffffff81072910 T compat_sys_set_robust_list
ffffffff81072950 T compat_sys_get_robust_list
ffffffff81072a40 T compat_sys_futex
ffffffff81072bf0 T __rt_mutex_init
ffffffff81072c10 t __rt_mutex_adjust_prio
ffffffff81072c40 t rt_mutex_adjust_prio_chain
ffffffff81073020 t task_blocks_on_rt_mutex
ffffffff81073210 t try_to_take_rt_mutex
ffffffff81073360 T rt_mutex_destroy
ffffffff81073380 T rt_mutex_timed_lock
ffffffff810733b0 T rt_mutex_getprio
ffffffff810733e0 T rt_mutex_adjust_pi
ffffffff81073460 T rt_mutex_init_proxy_locked
ffffffff81073480 T rt_mutex_proxy_unlock
ffffffff810734a0 T rt_mutex_start_proxy_lock
ffffffff81073550 T rt_mutex_next_owner
ffffffff81073570 T rt_mutex_finish_proxy_lock
ffffffff81073640 T request_dma
ffffffff81073670 t proc_dma_open
ffffffff81073690 t proc_dma_show
ffffffff810736f0 T free_dma
ffffffff81073730 t hotplug_cfd
ffffffff81073770 t csd_unlock.isra.4
ffffffff810737a0 t generic_exec_single
ffffffff81073850 T smp_call_function_single
ffffffff810739a0 T smp_call_function_many
ffffffff81073c20 T smp_call_function
ffffffff81073c50 T on_each_cpu
ffffffff81073cc0 T smp_call_function_any
ffffffff81073dc0 T on_each_cpu_mask
ffffffff81073e30 T on_each_cpu_cond
ffffffff81073ec0 T generic_smp_call_function_interrupt
ffffffff81074040 T generic_smp_call_function_single_interrupt
ffffffff81074150 T __smp_call_function_single
ffffffff81074220 T ipi_call_lock
ffffffff81074230 T ipi_call_unlock
ffffffff81074240 T ipi_call_lock_irq
ffffffff81074250 T ipi_call_unlock_irq
ffffffff81074280 T in_lock_functions
ffffffff810742a0 T sys_chown16
ffffffff810742c0 T sys_lchown16
ffffffff810742e0 T sys_fchown16
ffffffff81074300 T sys_setregid16
ffffffff81074320 T sys_setgid16
ffffffff81074340 T sys_setreuid16
ffffffff81074360 T sys_setuid16
ffffffff81074380 T sys_setresuid16
ffffffff810743b0 T sys_getresuid16
ffffffff81074430 T sys_setresgid16
ffffffff81074460 T sys_getresgid16
ffffffff810744e0 T sys_setfsuid16
ffffffff81074500 T sys_setfsgid16
ffffffff81074520 T sys_getgroups16
ffffffff810745b0 T sys_setgroups16
ffffffff810746c0 T sys_getuid16
ffffffff810746f0 T sys_geteuid16
ffffffff81074720 T sys_getgid16
ffffffff81074750 T sys_getegid16
ffffffff81074780 t modinfo_version_exists
ffffffff81074790 t modinfo_srcversion_exists
ffffffff810747a0 t show_taint
ffffffff81074810 t module_flags
ffffffff810748f0 T __module_address
ffffffff81074990 T __module_text_address
ffffffff810749f0 t modules_open
ffffffff81074a00 t m_next
ffffffff81074a10 t m_stop
ffffffff81074a20 t m_start
ffffffff81074a40 t store_uevent
ffffffff81074a90 t get_ksymbol
ffffffff81074cf0 t mod_find_symname
ffffffff81074d60 t cmp_name
ffffffff81074d70 t find_sec
ffffffff81074df0 t section_addr
ffffffff81074e10 t section_objs
ffffffff81074e70 T find_module
ffffffff81074ee0 t find_symbol_in_section
ffffffff81074fc0 t __unlink_module
ffffffff81075000 t free_modinfo_srcversion
ffffffff81075020 t free_modinfo_version
ffffffff81075040 t module_notes_read
ffffffff81075060 t show_initsize
ffffffff81075090 t show_coresize
ffffffff810750c0 t show_initstate
ffffffff81075100 t show_modinfo_srcversion
ffffffff81075130 t show_modinfo_version
ffffffff81075160 t module_sect_show
ffffffff81075190 t setup_modinfo_srcversion
ffffffff810751b0 t setup_modinfo_version
ffffffff810751d0 T module_refcount
ffffffff81075280 t __try_stop_module
ffffffff810752c0 t m_show
ffffffff81075440 t show_refcnt
ffffffff81075470 t free_notes_attrs
ffffffff810754d0 T try_module_get
ffffffff81075500 T __module_get
ffffffff81075520 T unregister_module_notifier
ffffffff81075530 T register_module_notifier
ffffffff81075540 t each_symbol_section.part.12
ffffffff81075690 T each_symbol_section
ffffffff81075710 T find_symbol
ffffffff81075770 T __symbol_get
ffffffff810757c0 t verify_export_symbols
ffffffff810758b0 t get_modinfo.isra.34
ffffffff81075970 T module_put
ffffffff810759a0 t module_unload_free
ffffffff81075a80 T __module_put_and_exit
ffffffff81075aa0 T ref_module
ffffffff81075ba0 t resolve_symbol.isra.41
ffffffff81075c50 t simplify_symbols
ffffffff81075f80 T symbol_put_addr
ffffffff81075fb0 T __symbol_put
ffffffff81075fe0 T is_module_percpu_address
ffffffff810760a0 W module_free
ffffffff810760c0 t free_module
ffffffff810762b0 T sys_delete_module
ffffffff810764e0 W apply_relocate
ffffffff81076520 W arch_mod_section_prepend
ffffffff81076530 t get_offset.isra.46
ffffffff810765c0 t module_alloc_update_bounds
ffffffff81076640 W module_frob_arch_sections
ffffffff81076660 T sys_init_module
ffffffff81078180 T module_address_lookup
ffffffff81078250 T lookup_module_symbol_name
ffffffff81078310 T lookup_module_symbol_attrs
ffffffff81078410 T module_get_kallsym
ffffffff81078570 T module_kallsyms_lookup_name
ffffffff81078620 T module_kallsyms_on_each_symbol
ffffffff810786c0 T search_module_extables
ffffffff81078750 T is_module_address
ffffffff81078760 T is_module_text_address
ffffffff81078770 T print_modules
ffffffff81078800 t kallsyms_expand_symbol
ffffffff81078880 t s_stop
ffffffff81078890 t s_show
ffffffff81078930 t update_iter
ffffffff81078a30 t s_next
ffffffff81078a60 t s_start
ffffffff81078a80 t kallsyms_open
ffffffff81078b00 T kallsyms_on_each_symbol
ffffffff81078b80 T kallsyms_lookup_name
ffffffff81078c00 t is_ksym_addr
ffffffff81078c50 t get_symbol_pos
ffffffff81078d40 T kallsyms_lookup_size_offset
ffffffff81078db0 T kallsyms_lookup
ffffffff81078ea0 t __sprint_symbol
ffffffff81078f70 T sprint_symbol
ffffffff81078f80 T __print_symbol
ffffffff81078fb0 T lookup_symbol_name
ffffffff81079040 T lookup_symbol_attrs
ffffffff81079130 T sprint_backtrace
ffffffff81079140 t encode_comp_t
ffffffff81079190 t check_free_space
ffffffff81079350 t do_acct_process
ffffffff810796a0 t acct_file_reopen
ffffffff810797e0 T sys_acct
ffffffff81079a00 T acct_auto_close_mnt
ffffffff81079a70 T acct_auto_close
ffffffff81079ae0 T acct_exit_ns
ffffffff81079b30 T acct_collect
ffffffff81079ce0 T acct_process
ffffffff81079d80 T sigset_from_compat
ffffffff81079d90 T compat_alloc_user_space
ffffffff81079e10 t compat_put_timex
ffffffff81079f80 T put_compat_timespec
ffffffff81079fe0 T compat_put_timespec
ffffffff81079ff0 T get_compat_timespec
ffffffff8107a050 T compat_get_timespec
ffffffff8107a060 T put_compat_timeval
ffffffff8107a0c0 T compat_put_timeval
ffffffff8107a0d0 T get_compat_timeval
ffffffff8107a130 T compat_get_timeval
ffffffff8107a140 t compat_get_timex
ffffffff8107a380 t compat_clock_nanosleep_restart
ffffffff8107a440 t compat_nanosleep_restart
ffffffff8107a4d0 T compat_sys_gettimeofday
ffffffff8107a570 T compat_sys_settimeofday
ffffffff8107a600 T compat_sys_nanosleep
ffffffff8107a700 T compat_sys_getitimer
ffffffff8107a790 T compat_sys_setitimer
ffffffff8107a8d0 T compat_sys_times
ffffffff8107a9e0 T compat_sys_sigpending
ffffffff8107aa50 T compat_sys_sigprocmask
ffffffff8107aaf0 T compat_sys_setrlimit
ffffffff8107aba0 T compat_sys_old_getrlimit
ffffffff8107ac80 T compat_sys_getrlimit
ffffffff8107ad20 T put_compat_rusage
ffffffff8107ae60 T compat_sys_getrusage
ffffffff8107af00 T compat_sys_wait4
ffffffff8107b000 T compat_sys_waitid
ffffffff8107b120 T get_compat_itimerspec
ffffffff8107b160 T put_compat_itimerspec
ffffffff8107b1a0 T compat_sys_timer_settime
ffffffff8107b280 T compat_sys_timer_gettime
ffffffff8107b310 T compat_sys_clock_settime
ffffffff8107b380 T compat_sys_clock_gettime
ffffffff8107b410 T compat_sys_clock_adjtime
ffffffff8107b4d0 T compat_sys_clock_getres
ffffffff8107b550 T compat_sys_clock_nanosleep
ffffffff8107b640 T get_compat_sigevent
ffffffff8107b750 T compat_sys_timer_create
ffffffff8107b7f0 T compat_get_bitmap
ffffffff8107b8a0 T compat_sys_sched_setaffinity
ffffffff8107b8f0 T compat_put_bitmap
ffffffff8107b990 T compat_sys_sched_getaffinity
ffffffff8107ba30 T compat_sys_rt_sigtimedwait
ffffffff8107bb20 T compat_sys_rt_tgsigqueueinfo
ffffffff8107bb90 T compat_sys_time
ffffffff8107bbe0 T compat_sys_stime
ffffffff8107bc30 T compat_sys_adjtimex
ffffffff8107bca0 T compat_sys_move_pages
ffffffff8107bd60 T compat_sys_migrate_pages
ffffffff8107bed0 T compat_sys_sysinfo
ffffffff8107c060 T cgroup_lock_is_held
ffffffff8107c070 t css_set_hash
ffffffff8107c0e0 t cgroup_delete
ffffffff8107c0f0 T cgroup_taskset_cur_cgroup
ffffffff8107c100 T cgroup_taskset_size
ffffffff8107c120 t cgroup_seqfile_show
ffffffff8107c170 t cgroup_file_release
ffffffff8107c190 t started_after
ffffffff8107c1d0 t cmppid
ffffffff8107c1e0 t cgroup_pidlist_next
ffffffff8107c210 t cgroup_read_notify_on_release
ffffffff8107c220 t cgroup_clone_children_read
ffffffff8107c230 T css_id
ffffffff8107c250 T css_depth
ffffffff8107c270 t cgroup_test_super
ffffffff8107c2e0 t cgroup_open
ffffffff8107c300 t cgroupstats_open
ffffffff8107c320 t cgroup_lock_hierarchy
ffffffff8107c380 T cgroup_lock
ffffffff8107c390 t cgroup_map_add
ffffffff8107c3b0 t cgroup_pidlist_show
ffffffff8107c3c0 t cgroup_unlock_hierarchy
ffffffff8107c420 t proc_cgroupstats_show
ffffffff8107c4b0 t cgroup_show_options
ffffffff8107c5c0 T cgroup_unlock
ffffffff8107c5d0 T cgroup_lock_live_group
ffffffff8107c600 t cgroup_release_agent_show
ffffffff8107c660 t cgroup_seqfile_release
ffffffff8107c690 t free_cg_links
ffffffff8107c6f0 t allocate_cg_links
ffffffff8107c760 t task_cgroup_from_root
ffffffff8107c7f0 t cgroup_rename
ffffffff8107c830 t cgroup_event_wake
ffffffff8107c8e0 t cgroup_clear_directory
ffffffff8107c9f0 t get_new_cssid
ffffffff8107cb10 t cgroup_file_read
ffffffff8107cbe0 t cgroup_new_inode
ffffffff8107cc60 t cgroup_release_agent_write
ffffffff8107ccd0 t cgroup_write_event_control
ffffffff8107d010 t cgroup_event_remove
ffffffff8107d070 t cgroup_event_ptable_queue_proc
ffffffff8107d080 T cgroup_taskset_next
ffffffff8107d0c0 T cgroup_unload_subsys
ffffffff8107d220 t cgroup_pidlist_stop
ffffffff8107d230 t cgroup_pidlist_start
ffffffff8107d2f0 t pidlist_free
ffffffff8107d320 t drop_parsed_module_refcounts
ffffffff8107d370 t rebind_subsystems
ffffffff8107d6d0 t parse_cgroupfs_options
ffffffff8107db00 t cgroup_diput
ffffffff8107dbe0 t cgroup_init_idr
ffffffff8107dc40 t init_root_id
ffffffff8107dcf0 T cgroup_path
ffffffff8107ddd0 t proc_cgroup_show
ffffffff8107dfa0 t cgroup_release_agent
ffffffff8107e140 t cgroup_lookup
ffffffff8107e170 t cgroup_write_notify_on_release
ffffffff8107e190 t cgroup_clone_children_write
ffffffff8107e1b0 t init_cgroup_css.isra.11
ffffffff8107e200 t link_css_set
ffffffff8107e260 t find_css_set
ffffffff8107e4b0 T free_css_id
ffffffff8107e550 t cgroup_file_open
ffffffff8107e640 t cgroup_file_write
ffffffff8107e8f0 t cgroup_create_file
ffffffff8107e9d0 T cgroup_add_file
ffffffff8107ebc0 T cgroup_add_files
ffffffff8107ec20 t cgroup_populate_dir
ffffffff8107ed30 t cgroup_mkdir
ffffffff8107f140 t cgroup_remount
ffffffff8107f270 T cgroup_taskset_first
ffffffff8107f2a0 t cgroup_wakeup_rmdir_waiter
ffffffff8107f2d0 t cgroup_release_pid_array
ffffffff8107f3a0 t cgroup_pidlist_release
ffffffff8107f3e0 T css_lookup
ffffffff8107f410 t cgroup_drop_root
ffffffff8107f460 t cgroup_kill_sb
ffffffff8107f5f0 t cgroup_mount
ffffffff8107fb50 t cgroup_set_super
ffffffff8107fbd0 T cgroup_load_subsys
ffffffff8107fec0 t check_for_release
ffffffff8107ffc0 t cgroup_rmdir
ffffffff810804c0 t __put_css_set
ffffffff81080690 t cgroup_task_migrate.isra.21
ffffffff810807a0 T __css_put
ffffffff81080810 T cgroup_is_removed
ffffffff81080820 T cgroup_exclude_rmdir
ffffffff81080830 T cgroup_release_and_wakeup_rmdir
ffffffff81080860 T cgroup_attach_task
ffffffff81080a30 t attach_task_by_pid
ffffffff81080f10 t cgroup_procs_write
ffffffff81080f20 t cgroup_tasks_write
ffffffff81080f30 T cgroup_attach_task_all
ffffffff81080fb0 T cgroup_task_count
ffffffff81081000 T cgroup_iter_start
ffffffff81081170 T cgroup_iter_next
ffffffff810811e0 t cgroup_pidlist_open
ffffffff810815f0 t cgroup_procs_open
ffffffff81081600 t cgroup_tasks_open
ffffffff81081610 T cgroup_iter_end
ffffffff81081620 T cgroup_scan_tasks
ffffffff810817f0 T cgroupstats_build
ffffffff810818c0 T cgroup_fork
ffffffff810818f0 T cgroup_fork_callbacks
ffffffff81081930 T cgroup_post_fork
ffffffff810819a0 T cgroup_exit
ffffffff81081ae0 T cgroup_is_descendant
ffffffff81081b40 T css_is_ancestor
ffffffff81081b80 T css_get_next
ffffffff81081c30 T cgroup_css_from_dir
ffffffff81081c80 t freezer_populate
ffffffff81081cb0 t is_task_frozen_enough
ffffffff81081ce0 t freezer_fork
ffffffff81081d60 t freezer_destroy
ffffffff81081d80 t freezer_create
ffffffff81081dc0 t update_if_frozen.isra.2
ffffffff81081e70 t freezer_read
ffffffff81081f30 t freezer_write
ffffffff81082100 t freezer_can_attach
ffffffff81082180 T cgroup_freezing
ffffffff810821a0 t update_domain_attr_tree
ffffffff81082270 t cpuset_test_cpumask
ffffffff81082290 t cpuset_change_flag
ffffffff810822e0 t cpuset_post_clone
ffffffff81082380 t cpuset_populate
ffffffff810823f0 t async_rebuild_sched_domains
ffffffff81082410 t cpuset_write_s64
ffffffff810824b0 t generate_sched_domains
ffffffff81082880 t do_rebuild_sched_domains
ffffffff810828c0 t cpuset_create
ffffffff81082980 t is_cpuset_subset
ffffffff81082a10 t fmeter_update
ffffffff81082a90 t cpuset_read_u64
ffffffff81082bf0 t cpuset_change_cpumask
ffffffff81082c00 t guarantee_online_mems
ffffffff81082cb0 t cpuset_migrate_mm
ffffffff81082d10 t cpuset_common_file_read
ffffffff81082e30 t cpuset_spread_node
ffffffff81082eb0 T cpuset_mem_spread_node
ffffffff81082f00 t cpuset_do_move_task
ffffffff81082f10 t cpuset_mount
ffffffff81082ff0 t cpuset_read_s64
ffffffff81083010 t guarantee_online_cpus
ffffffff81083070 t cpuset_can_attach
ffffffff81083150 t validate_change
ffffffff810832b0 t update_flag
ffffffff81083460 t cpuset_write_u64
ffffffff810835a0 t cpuset_destroy
ffffffff810835d0 t cpuset_write_resmask
ffffffff81083930 t cpuset_change_task_nodemask
ffffffff81083ac0 t cpuset_change_nodemask
ffffffff81083b80 t cpuset_attach
ffffffff81083d50 t scan_for_empty_cpusets.constprop.12
ffffffff81084000 T rebuild_sched_domains
ffffffff81084010 T current_cpuset_is_being_rebound
ffffffff81084040 T cpuset_update_active_cpus
ffffffff810840b0 T cpuset_cpus_allowed
ffffffff81084120 T cpuset_cpus_allowed_fallback
ffffffff81084150 T cpuset_init_current_mems_allowed
ffffffff81084180 T cpuset_mems_allowed
ffffffff81084220 T cpuset_nodemask_valid_mems_allowed
ffffffff81084240 T __cpuset_node_allowed_softwall
ffffffff81084340 T __cpuset_node_allowed_hardwall
ffffffff81084390 T cpuset_unlock
ffffffff810843a0 T cpuset_slab_spread_node
ffffffff810843f0 T cpuset_mems_allowed_intersects
ffffffff81084410 T cpuset_print_task_mems_allowed
ffffffff810844b0 T __cpuset_memory_pressure_bump
ffffffff81084550 T cpuset_task_status_allowed
ffffffff810845d0 t utsns_get
ffffffff81084620 T free_uts_ns
ffffffff81084630 t utsns_install
ffffffff81084690 t utsns_put
ffffffff810846b0 T copy_utsname
ffffffff81084870 t pid_ns_ctl_handler
ffffffff81084930 T free_pid_ns
ffffffff81084990 T copy_pid_ns
ffffffff81084ca0 T zap_pid_ns_processes
ffffffff81084d70 T reboot_pid_ns
ffffffff81084e00 t ikconfig_read_current
ffffffff81084e20 T res_counter_init
ffffffff81084e40 T res_counter_charge_locked
ffffffff81084e70 T res_counter_charge_nofail
ffffffff81084f60 T res_counter_uncharge_locked
ffffffff81084f90 T res_counter_charge
ffffffff81085080 T res_counter_uncharge
ffffffff810850f0 T res_counter_read
ffffffff810851b0 T res_counter_read_u64
ffffffff810851f0 T res_counter_memparse_write_strategy
ffffffff81085280 T res_counter_write
ffffffff81085340 t cpu_stop_signal_done
ffffffff81085370 t cpu_stopper_thread
ffffffff810854e0 t cpu_stop_init_done
ffffffff81085600 t cpu_stop_queue_work
ffffffff81085680 t queue_stop_cpus_work
ffffffff81085740 t stop_machine_cpu_stop
ffffffff81085820 t __stop_cpus
ffffffff81085880 T stop_one_cpu
ffffffff81085900 T stop_one_cpu_nowait
ffffffff81085940 T stop_cpus
ffffffff81085990 T try_stop_cpus
ffffffff81085a00 T __stop_machine
ffffffff81085af0 T stop_machine
ffffffff81085b30 T stop_machine_from_inactive_cpu
ffffffff81085c20 t audit_send_reply_thread
ffffffff81085c60 t audit_hold_skb
ffffffff81085c90 t audit_buffer_free
ffffffff81085d20 T audit_panic
ffffffff81085d80 T audit_log_lost
ffffffff81085e40 t audit_log_vformat
ffffffff81086010 T audit_log_format
ffffffff81086060 t kauditd_send_skb
ffffffff810860d0 t audit_printk_skb
ffffffff81086140 t kauditd_thread
ffffffff810862c0 T audit_log_end
ffffffff810863d0 T audit_send_list
ffffffff81086450 T audit_make_reply
ffffffff81086540 t audit_send_reply.constprop.21
ffffffff81086670 T audit_serial
ffffffff810866b0 T audit_log_start
ffffffff81086b30 T audit_log
ffffffff81086b90 t audit_log_common_recv_msg.part.16.constprop.18
ffffffff81086bf0 t audit_log_config_change.constprop.19
ffffffff81086ca0 t audit_do_config_change.constprop.20
ffffffff81086d10 T audit_log_n_hex
ffffffff81086e80 T audit_log_n_string
ffffffff81087010 T audit_string_contains_control
ffffffff81087070 T audit_log_n_untrustedstring
ffffffff810870c0 T audit_log_untrustedstring
ffffffff810870f0 t audit_receive
ffffffff81087bb0 T audit_log_d_path
ffffffff81087c90 T audit_log_key
ffffffff81087cf0 T audit_free_rule_rcu
ffffffff81087d70 t audit_match_signal
ffffffff81087e90 t audit_rule_to_entry
ffffffff810882a0 t audit_compare_rule
ffffffff81088470 t audit_find_rule
ffffffff81088570 t audit_log_rule_change.isra.11.part.12
ffffffff810886a0 T audit_unpack_string
ffffffff81088740 t audit_data_to_entry
ffffffff81088c50 T audit_match_class
ffffffff81088c90 T audit_dupe_rule
ffffffff81088eb0 T audit_receive_filter
ffffffff81089980 T audit_comparator
ffffffff810899e0 T audit_compare_dname_path
ffffffff81089ab0 T audit_filter_user
ffffffff81089bd0 T audit_filter_type
ffffffff81089c80 T audit_update_lsm_rules
ffffffff81089d00 T audit_log_task_context
ffffffff81089d10 t audit_log_cap
ffffffff81089d70 t audit_log_abend
ffffffff81089e40 t audit_copy_inode
ffffffff81089ec0 t audit_alloc_name
ffffffff8108a000 t unroll_tree_refs
ffffffff8108a100 T __audit_inode_child
ffffffff8108a390 t audit_log_pid_context.isra.10
ffffffff8108a450 t audit_compare_id.isra.12
ffffffff8108a4c0 t audit_filter_rules.isra.14
ffffffff8108b130 t audit_filter_syscall
ffffffff8108b210 t audit_log_exit
ffffffff8108c310 T audit_filter_inodes
ffffffff8108c410 T audit_alloc
ffffffff8108c5d0 T __audit_free
ffffffff8108c870 T __audit_syscall_entry
ffffffff8108cbb0 T __audit_syscall_exit
ffffffff8108cfd0 T __audit_getname
ffffffff8108d080 T audit_putname
ffffffff8108d0c0 T __audit_inode
ffffffff8108d2c0 T auditsc_get_stamp
ffffffff8108d340 T audit_set_loginuid
ffffffff8108d430 T __audit_mq_open
ffffffff8108d570 T __audit_mq_sendrecv
ffffffff8108d5e0 T __audit_mq_notify
ffffffff8108d620 T __audit_mq_getsetattr
ffffffff8108d6a0 T __audit_ipc_obj
ffffffff8108d6f0 T __audit_ipc_set_perm
ffffffff8108d730 T __audit_bprm
ffffffff8108d7b0 T __audit_socketcall
ffffffff8108d7f0 T __audit_fd_pair
ffffffff8108d810 T __audit_sockaddr
ffffffff8108d8a0 T __audit_ptrace
ffffffff8108d910 T __audit_signal_info
ffffffff8108db20 T __audit_log_bprm_fcaps
ffffffff8108dc50 T __audit_log_capset
ffffffff8108dca0 T __audit_mmap_fd
ffffffff8108dcd0 T audit_core_dumps
ffffffff8108dd40 T __audit_seccomp
ffffffff8108ddb0 T audit_killed_trees
ffffffff8108dde0 t audit_watch_should_send_event
ffffffff8108ddf0 t audit_init_watch
ffffffff8108de40 t audit_get_nd
ffffffff8108df80 t audit_free_parent
ffffffff8108dfb0 t audit_watch_free_mark
ffffffff8108dfc0 t audit_watch_log_rule_change.isra.2.part.3
ffffffff8108e0a0 T audit_get_watch
ffffffff8108e0b0 T audit_put_watch
ffffffff8108e110 t audit_remove_watch
ffffffff8108e170 t audit_update_watch
ffffffff8108e4c0 t audit_watch_handle_event
ffffffff8108e710 T audit_watch_path
ffffffff8108e720 T audit_watch_compare
ffffffff8108e750 T audit_to_watch
ffffffff8108e7d0 T audit_add_watch
ffffffff8108ea20 T audit_remove_watch_rule
ffffffff8108eab0 t compare_root
ffffffff8108eac0 t audit_tree_send_event
ffffffff8108ead0 t kill_rules
ffffffff8108ec60 t audit_tree_destroy_watch
ffffffff8108ec80 t alloc_chunk
ffffffff8108ed20 t free_chunk
ffffffff8108ed90 t untag_chunk
ffffffff8108f1e0 t audit_tree_handle_event
ffffffff8108f1f0 t audit_schedule_prune
ffffffff8108f230 t audit_tree_freeing_mark
ffffffff8108f420 t tag_mount
ffffffff8108f950 t prune_one
ffffffff8108f9c0 t prune_tree_thread
ffffffff8108fa40 t trim_marked
ffffffff8108fb50 T audit_tree_path
ffffffff8108fb60 T audit_put_chunk
ffffffff8108fb80 t __put_chunk
ffffffff8108fb90 T audit_tree_lookup
ffffffff8108fc00 T audit_tree_match
ffffffff8108fc50 T audit_remove_tree_rule
ffffffff8108fd60 T audit_trim_trees
ffffffff8108ff60 T audit_make_tree
ffffffff81090080 T audit_put_tree
ffffffff810900a0 T audit_add_tree_rule
ffffffff81090300 T audit_tag_tree
ffffffff810906d0 T audit_kill_trees
ffffffff81090750 t desc_set_defaults
ffffffff81090820 t alloc_desc
ffffffff810908d0 T irq_to_desc
ffffffff810908e0 t free_desc
ffffffff81090950 T generic_handle_irq
ffffffff81090980 T irq_free_descs
ffffffff81090a20 T irq_reserve_irqs
ffffffff81090ac0 T irq_get_next_irq
ffffffff81090ae0 T __irq_get_desc_lock
ffffffff81090b80 T __irq_put_desc_unlock
ffffffff81090be0 T irq_set_percpu_devid
ffffffff81090c50 T dynamic_irq_cleanup
ffffffff81090cc0 T kstat_irqs_cpu
ffffffff81090cf0 T kstat_irqs
ffffffff81090d60 T handle_bad_irq
ffffffff81090fa0 T no_action
ffffffff81090fb0 T handle_irq_event_percpu
ffffffff81091100 T handle_irq_event
ffffffff81091180 t irq_default_primary_handler
ffffffff81091190 t set_irq_wake_real
ffffffff810911d0 t irq_nested_primary_handler
ffffffff81091200 t __free_percpu_irq
ffffffff81091360 T irq_set_affinity_hint
ffffffff810913b0 T irq_set_irq_wake
ffffffff810914d0 T irq_set_affinity_notifier
ffffffff810915c0 t irq_affinity_notify
ffffffff81091680 T synchronize_irq
ffffffff81091750 t __free_irq
ffffffff81091900 T free_irq
ffffffff810919e0 T remove_irq
ffffffff81091a50 t wake_threads_waitq
ffffffff81091aa0 t irq_thread
ffffffff81091c20 t irq_finalize_oneshot.part.30
ffffffff81091d30 t irq_thread_fn
ffffffff81091d80 t irq_forced_thread_fn
ffffffff81091de0 T irq_can_set_affinity
ffffffff81091e20 T irq_set_thread_affinity
ffffffff81091e50 t setup_affinity
ffffffff81091f30 T __irq_set_affinity_locked
ffffffff81092020 T irq_set_affinity
ffffffff810920a0 T irq_select_affinity_usr
ffffffff81092120 T __disable_irq
ffffffff81092160 t __disable_irq_nosync
ffffffff810921c0 T disable_irq
ffffffff810921e0 T disable_irq_nosync
ffffffff810921f0 T __enable_irq
ffffffff810922a0 T enable_irq
ffffffff81092320 T can_request_irq
ffffffff81092380 T __irq_set_trigger
ffffffff810924b0 t __setup_irq
ffffffff810928f0 T request_threaded_irq
ffffffff81092a80 T request_any_context_irq
ffffffff81092b30 T setup_irq
ffffffff81092bd0 T exit_irq_thread
ffffffff81092c80 T enable_percpu_irq
ffffffff81092d30 T disable_percpu_irq
ffffffff81092d90 T remove_percpu_irq
ffffffff81092df0 T free_percpu_irq
ffffffff81092e80 T setup_percpu_irq
ffffffff81092f10 T request_percpu_irq
ffffffff81093020 T noirqdebug_setup
ffffffff81093050 t try_one_irq
ffffffff81093160 t poll_spurious_irqs
ffffffff81093200 t __report_bad_irq
ffffffff810932d0 T irq_wait_for_poll
ffffffff81093370 T note_interrupt
ffffffff810935a0 T check_irq_resend
ffffffff810935e0 T irq_get_irq_data
ffffffff810935f0 T irq_set_handler_data
ffffffff81093630 T irq_set_chip_data
ffffffff81093670 T irq_modify_status
ffffffff81093710 t irq_check_poll
ffffffff81093730 T handle_simple_irq
ffffffff810937b0 T handle_edge_irq
ffffffff810938c0 T handle_nested_irq
ffffffff810939a0 T irq_set_irq_type
ffffffff81093a20 T irq_set_chip
ffffffff81093a80 t cond_unmask_irq
ffffffff81093ac0 T handle_level_irq
ffffffff81093b70 T irq_set_msi_desc
ffffffff81093bc0 T irq_shutdown
ffffffff81093c10 T irq_enable
ffffffff81093c50 T irq_startup
ffffffff81093cc0 T __irq_set_handler
ffffffff81093e00 T irq_disable
ffffffff81093e30 T irq_percpu_enable
ffffffff81093e80 T irq_percpu_disable
ffffffff81093ed0 T mask_irq
ffffffff81093ef0 T unmask_irq
ffffffff81093f10 T handle_fasteoi_irq
ffffffff81094000 T handle_percpu_irq
ffffffff81094070 T handle_percpu_devid_irq
ffffffff81094120 T irq_set_chip_and_handler_name
ffffffff81094160 T irq_cpu_online
ffffffff81094200 T irq_cpu_offline
ffffffff810942a0 t noop
ffffffff810942b0 t noop_ret
ffffffff810942c0 t ack_bad
ffffffff810944e0 t devm_irq_match
ffffffff81094500 T devm_free_irq
ffffffff81094570 t devm_irq_release
ffffffff81094580 T devm_request_threaded_irq
ffffffff81094650 T probe_irq_off
ffffffff81094710 T probe_irq_mask
ffffffff810947d0 T probe_irq_on
ffffffff810949c0 t irq_node_proc_show
ffffffff810949f0 t default_affinity_open
ffffffff81094a10 t irq_spurious_proc_open
ffffffff81094a30 t irq_node_proc_open
ffffffff81094a50 t irq_affinity_list_proc_open
ffffffff81094a70 t irq_affinity_hint_proc_open
ffffffff81094a90 t irq_affinity_proc_open
ffffffff81094ab0 t default_affinity_show
ffffffff81094ae0 t irq_affinity_hint_proc_show
ffffffff81094b70 t default_affinity_write
ffffffff81094bd0 t irq_spurious_proc_show
ffffffff81094c30 t show_irq_affinity.isra.4
ffffffff81094c90 t irq_affinity_list_proc_show
ffffffff81094ca0 t irq_affinity_proc_show
ffffffff81094cb0 t write_irq_affinity.isra.5
ffffffff81094d80 t irq_affinity_list_proc_write
ffffffff81094da0 t irq_affinity_proc_write
ffffffff81094dc0 T register_handler_proc
ffffffff81094ef0 T register_irq_proc
ffffffff81095020 T unregister_irq_proc
ffffffff810950e0 T unregister_handler_proc
ffffffff81095110 T init_irq_proc
ffffffff810951a0 T show_interrupts
ffffffff81095490 T irq_move_masked_irq
ffffffff81095580 T irq_move_irq
ffffffff810955c0 t resume_irqs
ffffffff81095660 t irq_pm_syscore_resume
ffffffff81095670 T resume_device_irqs
ffffffff81095680 T suspend_device_irqs
ffffffff81095740 T check_wakeup_irqs
ffffffff810957c0 T rcu_note_context_switch
ffffffff810957e0 T rcu_batches_completed_sched
ffffffff810957f0 T rcu_batches_completed_bh
ffffffff81095800 T rcutorture_record_test_transition
ffffffff81095820 T rcutorture_record_progress
ffffffff81095830 t dyntick_save_progress_counter
ffffffff81095850 t rcu_panic
ffffffff81095860 t synchronize_sched_expedited_cpu_stop
ffffffff81095870 t rcu_barrier_func
ffffffff810958a0 T rcu_batches_completed
ffffffff810958b0 t rcu_barrier_callback
ffffffff810958d0 T synchronize_rcu_bh
ffffffff81095900 T synchronize_sched
ffffffff81095930 T synchronize_sched_expedited
ffffffff81095a50 T synchronize_rcu_expedited
ffffffff81095a60 t rcu_start_gp
ffffffff81095d50 t rcu_report_qs_rnp
ffffffff81095f00 t force_qs_rnp
ffffffff81096060 t rcu_implicit_dynticks_qs
ffffffff810960e0 t rcu_process_gp_end.isra.32
ffffffff810961b0 t check_for_new_grace_period.isra.34
ffffffff81096290 t force_quiescent_state
ffffffff81096470 t __call_rcu
ffffffff81096630 T kfree_call_rcu
ffffffff81096650 T call_rcu_bh
ffffffff81096660 T call_rcu_sched
ffffffff81096670 T rcu_sched_force_quiescent_state
ffffffff81096680 T rcu_force_quiescent_state
ffffffff81096690 T rcu_bh_force_quiescent_state
ffffffff810966a0 t __rcu_pending
ffffffff81096b50 t rcu_report_qs_rdp.isra.36
ffffffff81096c00 t __rcu_process_callbacks
ffffffff81096fc0 t rcu_process_callbacks
ffffffff81097000 t _rcu_barrier.isra.40
ffffffff810970b0 T rcu_barrier_sched
ffffffff810970c0 T rcu_barrier
ffffffff810970d0 T rcu_barrier_bh
ffffffff810970e0 t rcu_idle_exit_common.isra.43
ffffffff81097190 T rcu_idle_exit
ffffffff81097250 t rcu_idle_enter_common.isra.45
ffffffff81097310 T rcu_idle_enter
ffffffff810973d0 T rcu_sched_qs
ffffffff810973f0 T rcu_bh_qs
ffffffff81097410 T rcu_irq_exit
ffffffff810974b0 T rcu_irq_enter
ffffffff81097550 T rcu_nmi_enter
ffffffff810975b0 T rcu_nmi_exit
ffffffff81097610 T rcu_is_cpu_rrupt_from_idle
ffffffff81097630 T rcu_cpu_stall_reset
ffffffff81097660 T rcu_check_callbacks
ffffffff81097730 T rcu_scheduler_starting
ffffffff81097790 T rcu_needs_cpu
ffffffff810977d0 t proc_do_uts_string
ffffffff810978f0 T uts_proc_notify
ffffffff81097910 t delayacct_end
ffffffff810979b0 T __delayacct_tsk_init
ffffffff810979e0 T delayacct_init
ffffffff81097a40 T __delayacct_blkio_start
ffffffff81097a60 T __delayacct_blkio_end
ffffffff81097ab0 T __delayacct_add_tsk
ffffffff81097c30 T __delayacct_blkio_ticks
ffffffff81097c90 T __delayacct_freepages_start
ffffffff81097cb0 T __delayacct_freepages_end
ffffffff81097ce0 t prepare_reply
ffffffff81097db0 t send_reply
ffffffff81097e10 t cgroupstats_user_cmd
ffffffff81097f10 t parse
ffffffff81097fd0 t add_del_listener
ffffffff810981e0 t mk_reply
ffffffff810982a0 t fill_stats
ffffffff810983b0 t taskstats_user_cmd
ffffffff810987a0 T taskstats_exit
ffffffff81098bf0 T bacct_add_tsk
ffffffff81098da0 T xacct_add_tsk
ffffffff81098ef0 T acct_update_integrals
ffffffff81098fd0 T acct_clear_integrals
ffffffff81099000 W elf_core_extra_phdrs
ffffffff81099010 W elf_core_write_extra_phdrs
ffffffff81099020 W elf_core_write_extra_data
ffffffff81099030 W elf_core_extra_data_size
ffffffff81099040 T irq_work_sync
ffffffff81099090 T irq_work_run
ffffffff81099130 T irq_work_queue
ffffffff810991c0 t perf_ctx_unlock
ffffffff810991f0 t update_event_times
ffffffff81099270 t update_group_times
ffffffff810992a0 t perf_event__header_size
ffffffff81099330 t __perf_event_mark_enabled
ffffffff81099390 t perf_mmap_open
ffffffff810993b0 T perf_register_guest_info_callbacks
ffffffff810993c0 T perf_unregister_guest_info_callbacks
ffffffff810993d0 T perf_swevent_get_recursion_context
ffffffff81099440 t perf_swevent_read
ffffffff81099450 t perf_swevent_del
ffffffff81099480 t perf_swevent_start
ffffffff81099490 t perf_swevent_stop
ffffffff810994a0 t perf_swevent_event_idx
ffffffff810994b0 t perf_pmu_nop_void
ffffffff810994c0 t perf_pmu_nop_int
ffffffff810994d0 t perf_pmu_start_txn
ffffffff81099500 t perf_pmu_commit_txn
ffffffff81099530 t perf_pmu_cancel_txn
ffffffff81099560 t perf_event_idx_default
ffffffff81099570 t type_show
ffffffff810995a0 t pmu_dev_alloc
ffffffff81099660 t pmu_dev_release
ffffffff81099670 t cpu_function_call
ffffffff810996b0 t perf_event_exit_cpu
ffffffff810997b0 t perf_swevent_add
ffffffff810998e0 t perf_event_for_each_child
ffffffff81099980 t perf_ctx_lock
ffffffff810999b0 t perf_reboot
ffffffff81099a00 t task_clock_event_read
ffffffff81099a40 t cpu_clock_event_read
ffffffff81099a60 t task_clock_event_stop
ffffffff81099ad0 t task_clock_event_del
ffffffff81099ae0 t cpu_clock_event_stop
ffffffff81099b40 t cpu_clock_event_del
ffffffff81099b50 t alloc_perf_context
ffffffff81099be0 t perf_poll
ffffffff81099cf0 t perf_lock_task_context
ffffffff81099da0 t perf_unpin_context
ffffffff81099de0 t free_event_rcu
ffffffff81099e20 t ring_buffer_detach
ffffffff81099ee0 t rb_free_rcu
ffffffff81099ef0 t perf_event_pid
ffffffff81099f10 t perf_event_tid
ffffffff81099f40 t __perf_event_header__init_id
ffffffff8109a030 t task_function_call
ffffffff8109a080 T perf_event_enable
ffffffff8109a1b0 T perf_event_refresh
ffffffff8109a1e0 T perf_event_disable
ffffffff8109a2a0 t perf_output_read
ffffffff8109a620 t perf_fasync
ffffffff8109a6a0 t perf_mmap_fault
ffffffff8109a780 t perf_fget_light
ffffffff8109a7e0 t put_ctx
ffffffff8109a850 t update_context_time.isra.27
ffffffff8109a890 t __perf_event_read
ffffffff8109a940 t perf_event_read
ffffffff8109aa00 T perf_event_read_value
ffffffff8109aad0 t list_del_event.part.30
ffffffff8109ab50 t perf_remove_from_context
ffffffff8109ac10 t perf_group_detach.part.31
ffffffff8109acf0 t event_sched_out.isra.33
ffffffff8109ae20 t __perf_remove_from_context
ffffffff8109aef0 t __perf_event_exit_context
ffffffff8109afb0 t group_sched_out
ffffffff8109b080 t __perf_event_disable
ffffffff8109b160 t ctx_sched_out
ffffffff8109b2a0 t task_ctx_sched_out
ffffffff8109b300 t perf_adjust_period
ffffffff8109b4c0 t swevent_hlist_get_cpu.isra.51
ffffffff8109b560 t swevent_hlist_put_cpu.isra.52
ffffffff8109b5d0 t sw_perf_event_destroy
ffffffff8109b650 t perf_pmu_rotate_start.isra.56
ffffffff8109b6f0 t add_event_to_ctx
ffffffff8109b890 t perf_install_in_context
ffffffff8109b940 t get_ctx
ffffffff8109b990 t find_get_context
ffffffff8109bb70 t perf_swevent_start_hrtimer.part.60
ffffffff8109bbd0 t task_clock_event_start
ffffffff8109bc00 t task_clock_event_add
ffffffff8109bc20 t cpu_clock_event_start
ffffffff8109bc50 t cpu_clock_event_add
ffffffff8109bc70 t perf_swevent_init_hrtimer
ffffffff8109bce0 t task_clock_event_init
ffffffff8109bd20 t cpu_clock_event_init
ffffffff8109bd60 t perf_swevent_init
ffffffff8109bea0 t perf_event_read_group.isra.63
ffffffff8109c020 t perf_read
ffffffff8109c170 t ring_buffer_put
ffffffff8109c240 t free_event
ffffffff8109c350 t perf_free_event
ffffffff8109c450 T perf_event_release_kernel
ffffffff8109c510 t perf_release
ffffffff8109c5d0 t perf_event_set_output
ffffffff8109c730 t perf_ioctl
ffffffff8109c9e0 t perf_mmap_close
ffffffff8109cad0 t remote_function
ffffffff8109cb10 T perf_proc_update_handler
ffffffff8109cb60 W perf_pmu_name
ffffffff8109cb70 T perf_cgroup_switch
ffffffff8109cb80 T perf_pmu_disable
ffffffff8109cbb0 T perf_pmu_enable
ffffffff8109cbe0 T perf_event_task_enable
ffffffff8109cc60 T perf_event_task_disable
ffffffff8109ccf0 T perf_event_update_userpage
ffffffff8109ce00 t perf_mmap
ffffffff8109d0e0 t perf_event_reset
ffffffff8109d100 T __perf_event_task_sched_out
ffffffff8109d3a0 T perf_event_wakeup
ffffffff8109d440 t perf_pending_event
ffffffff8109d4b0 T perf_event_header__init_id
ffffffff8109d4d0 T perf_event__output_id_sample
ffffffff8109d5b0 t perf_event_task_output
ffffffff8109d6d0 t perf_event_task_ctx
ffffffff8109d760 t perf_event_task
ffffffff8109d8b0 t perf_event_read_event
ffffffff8109d980 t __perf_event_exit_task
ffffffff8109db00 t perf_log_throttle
ffffffff8109dbe0 t event_sched_in.isra.67
ffffffff8109dd20 t group_sched_in
ffffffff8109ded0 t ctx_sched_in.isra.68
ffffffff8109e060 t perf_event_sched_in.isra.71
ffffffff8109e0f0 t __perf_install_in_context
ffffffff8109e240 t perf_event_context_sched_in.isra.72
ffffffff8109e320 T __perf_event_task_sched_in
ffffffff8109e4b0 t __perf_event_enable
ffffffff8109e650 t perf_adjust_freq_unthr_context.part.73
ffffffff8109e800 T perf_event_task_tick
ffffffff8109ead0 t perf_event_mmap_output
ffffffff8109ec20 t perf_event_mmap_ctx
ffffffff8109ecc0 t perf_event_comm_output
ffffffff8109ee00 t perf_event_comm_ctx
ffffffff8109ee90 T perf_output_sample
ffffffff8109f250 T perf_prepare_sample
ffffffff8109f3b0 t __perf_event_overflow
ffffffff8109f5d0 t perf_swevent_overflow
ffffffff8109f680 t perf_swevent_event
ffffffff8109f700 T perf_event_fork
ffffffff8109f710 T perf_event_comm
ffffffff8109f970 T perf_event_mmap
ffffffff8109fc80 T perf_event_overflow
ffffffff8109fc90 t perf_swevent_hrtimer
ffffffff8109fdb0 T perf_swevent_put_recursion_context
ffffffff8109fdd0 T __perf_sw_event
ffffffff8109ff60 T perf_bp_event
ffffffff8109ffe0 T perf_pmu_register
ffffffff810a0350 T perf_pmu_unregister
ffffffff810a0480 T perf_init_event
ffffffff810a0550 t perf_event_alloc
ffffffff810a0980 t inherit_event.isra.75
ffffffff810a0b60 t inherit_task_group.isra.77.part.78
ffffffff810a0c30 T perf_event_create_kernel_counter
ffffffff810a0d20 T sys_perf_event_open
ffffffff810a15b0 T perf_event_exit_task
ffffffff810a1790 T perf_event_free_task
ffffffff810a1870 T perf_event_delayed_put
ffffffff810a18d0 T perf_event_init_context
ffffffff810a1b00 T perf_event_init_task
ffffffff810a1b70 t perf_mmap_free_page
ffffffff810a1bb0 t perf_mmap_alloc_page
ffffffff810a1c50 t perf_output_put_handle
ffffffff810a1cd0 T perf_output_copy
ffffffff810a1d60 T perf_output_begin
ffffffff810a1f20 T perf_output_end
ffffffff810a1f30 T perf_mmap_to_page
ffffffff810a1f80 T rb_alloc
ffffffff810a20a0 T rb_free
ffffffff810a20f0 t release_callchain_buffers_rcu
ffffffff810a2160 T get_callchain_buffers
ffffffff810a2300 T put_callchain_buffers
ffffffff810a2350 T perf_callchain
ffffffff810a2480 t hw_breakpoint_start
ffffffff810a2490 t hw_breakpoint_stop
ffffffff810a24a0 t hw_breakpoint_event_idx
ffffffff810a24b0 t hw_breakpoint_del
ffffffff810a24c0 t hw_breakpoint_add
ffffffff810a24e0 T register_user_hw_breakpoint
ffffffff810a2500 T unregister_hw_breakpoint
ffffffff810a2520 T unregister_wide_hw_breakpoint
ffffffff810a2590 T register_wide_hw_breakpoint
ffffffff810a26d0 t validate_hw_breakpoint.part.4
ffffffff810a2700 T modify_user_hw_breakpoint
ffffffff810a2800 W hw_breakpoint_weight
ffffffff810a2810 t task_bp_pinned.isra.5.constprop.6
ffffffff810a2870 t toggle_bp_task_slot.constprop.8
ffffffff810a28f0 t toggle_bp_slot.constprop.9
ffffffff810a2a50 t __reserve_bp_slot
ffffffff810a2c80 t __release_bp_slot
ffffffff810a2ca0 W arch_unregister_hw_breakpoint
ffffffff810a2cb0 T reserve_bp_slot
ffffffff810a2cf0 T release_bp_slot
ffffffff810a2d20 t bp_perf_event_destroy
ffffffff810a2d30 T dbg_reserve_bp_slot
ffffffff810a2d50 T dbg_release_bp_slot
ffffffff810a2d70 T register_perf_hw_breakpoint
ffffffff810a2dd0 t hw_breakpoint_event_init
ffffffff810a2e10 t page_waitqueue
ffffffff810a2ea0 T iov_iter_single_seg_count
ffffffff810a2ed0 T generic_write_checks
ffffffff810a3090 T pagecache_write_begin
ffffffff810a30a0 T iov_iter_fault_in_readable
ffffffff810a3100 T generic_segment_checks
ffffffff810a31b0 T iov_iter_copy_from_user_atomic
ffffffff810a3250 T pagecache_write_end
ffffffff810a32d0 T file_read_actor
ffffffff810a3430 T iov_iter_copy_from_user
ffffffff810a34a0 T should_remove_suid
ffffffff810a3510 T file_remove_suid
ffffffff810a35b0 T generic_file_mmap
ffffffff810a3610 T generic_file_readonly_mmap
ffffffff810a3630 T find_get_pages_tag
ffffffff810a37a0 T find_get_pages_contig
ffffffff810a3900 T find_get_page
ffffffff810a3990 T __lock_page_killable
ffffffff810a3a00 T __lock_page
ffffffff810a3a70 t sleep_on_page
ffffffff810a3a80 t sleep_on_page_killable
ffffffff810a3ac0 T unlock_page
ffffffff810a3ae0 T find_lock_page
ffffffff810a3b60 T add_page_wait_queue
ffffffff810a3bc0 T wait_on_page_bit
ffffffff810a3c40 t wait_on_page_read
ffffffff810a3ca0 T __page_cache_alloc
ffffffff810a3da0 T filemap_fdatawait_range
ffffffff810a3f10 T filemap_fdatawait
ffffffff810a3f40 T iov_iter_advance
ffffffff810a3fd0 T generic_file_buffered_write
ffffffff810a4270 T try_to_release_page
ffffffff810a42b0 T end_page_writeback
ffffffff810a4300 T add_to_page_cache_locked
ffffffff810a43f0 T add_to_page_cache_lru
ffffffff810a4430 T grab_cache_page_write_begin
ffffffff810a4520 t do_read_cache_page
ffffffff810a4690 T read_cache_page_gfp
ffffffff810a46c0 T read_cache_page_async
ffffffff810a46d0 T read_cache_page
ffffffff810a46f0 T grab_cache_page_nowait
ffffffff810a4780 T find_or_create_page
ffffffff810a4830 T __delete_from_page_cache
ffffffff810a4990 T replace_page_cache_page
ffffffff810a4ac0 T delete_from_page_cache
ffffffff810a4b40 T __filemap_fdatawrite_range
ffffffff810a4bd0 T filemap_fdatawrite
ffffffff810a4bf0 T filemap_write_and_wait
ffffffff810a4c40 T filemap_flush
ffffffff810a4c60 T filemap_write_and_wait_range
ffffffff810a4ce0 T generic_file_direct_write
ffffffff810a4e80 T __generic_file_aio_write
ffffffff810a5290 T generic_file_aio_write
ffffffff810a5370 T generic_file_aio_read
ffffffff810a5a60 T filemap_fdatawrite_range
ffffffff810a5a70 T wait_on_page_bit_killable
ffffffff810a5af0 T __lock_page_or_retry
ffffffff810a5bb0 T filemap_fault
ffffffff810a6020 T find_get_pages
ffffffff810a6170 T sys_readahead
ffffffff810a6220 T mempool_free_pages
ffffffff810a6230 T mempool_alloc_pages
ffffffff810a6240 T mempool_kfree
ffffffff810a6250 T mempool_kmalloc
ffffffff810a6260 T mempool_alloc_slab
ffffffff810a6270 T mempool_free_slab
ffffffff810a6280 t add_element
ffffffff810a62a0 T mempool_free
ffffffff810a6360 t remove_element.isra.3
ffffffff810a6380 T mempool_destroy
ffffffff810a63d0 T mempool_create_node
ffffffff810a64c0 T mempool_create
ffffffff810a64d0 T mempool_alloc
ffffffff810a6620 T mempool_resize
ffffffff810a67b0 T unregister_oom_notifier
ffffffff810a67c0 T register_oom_notifier
ffffffff810a67d0 t oom_unkillable_task.isra.5
ffffffff810a6880 T compare_swap_oom_score_adj
ffffffff810a6900 T test_set_oom_score_adj
ffffffff810a6970 T find_lock_task_mm
ffffffff810a69f0 t dump_header.isra.8
ffffffff810a6b80 T oom_badness
ffffffff810a6cc0 t oom_kill_process.part.10.constprop.11
ffffffff810a6f40 T try_set_zonelist_oom
ffffffff810a7040 T clear_zonelist_oom
ffffffff810a70b0 T out_of_memory
ffffffff810a7610 T pagefault_out_of_memory
ffffffff810a7730 T sys_fadvise64_64
ffffffff810a7980 T sys_fadvise64
ffffffff810a7990 T __probe_kernel_read
ffffffff810a7990 W probe_kernel_read
ffffffff810a7a00 T __probe_kernel_write
ffffffff810a7a00 W probe_kernel_write
ffffffff810a7a70 t move_freepages_block
ffffffff810a7bd0 t zlc_zone_worth_trying
ffffffff810a7c10 t calculate_totalreserve_pages
ffffffff810a7ca0 t setup_per_zone_lowmem_reserve
ffffffff810a7d60 t setup_pageset
ffffffff810a7e50 T si_meminfo
ffffffff810a7ea0 t nr_free_zone_pages
ffffffff810a7f20 T nr_free_buffer_pages
ffffffff810a7f30 T __get_free_pages
ffffffff810a7f80 T get_zeroed_page
ffffffff810a7f90 t __parse_numa_zonelist_order
ffffffff810a8010 t zone_batchsize.isra.54
ffffffff810a8070 t build_zonelists_node.constprop.65
ffffffff810a80e0 T pm_restore_gfp_mask
ffffffff810a8120 T pm_restrict_gfp_mask
ffffffff810a8180 T pm_suspended_storage
ffffffff810a81a0 T prep_compound_page
ffffffff810a8210 T drain_all_pages
ffffffff810a82d0 T split_page
ffffffff810a8330 T zone_watermark_ok
ffffffff810a83f0 T zone_watermark_ok_safe
ffffffff810a8550 T warn_alloc_failed
ffffffff810a86b0 T nr_free_pagecache_pages
ffffffff810a86c0 T si_meminfo_node
ffffffff810a8740 T skip_free_areas_node
ffffffff810a8780 T show_free_areas
ffffffff810a8f40 T numa_zonelist_order_handler
ffffffff810a9040 T zone_pcp_update
ffffffff810a9060 T sysctl_min_unmapped_ratio_sysctl_handler
ffffffff810a90d0 T sysctl_min_slab_ratio_sysctl_handler
ffffffff810a9140 T lowmem_reserve_ratio_sysctl_handler
ffffffff810a9160 T percpu_pagelist_fraction_sysctl_handler
ffffffff810a9240 T get_pageblock_flags_group
ffffffff810a92d0 t __count_immobile_pages
ffffffff810a93f0 T set_pageblock_flags_group
ffffffff810a9480 t set_pageblock_migratetype
ffffffff810a94a0 T setup_per_zone_wmarks
ffffffff810a9750 T min_free_kbytes_sysctl_handler
ffffffff810a9770 t __rmqueue
ffffffff810a9b60 T split_free_page
ffffffff810a9cd0 T is_pageblock_removable_nolock
ffffffff810a9d60 T set_migratetype_isolate
ffffffff810a9e20 T unset_migratetype_isolate
ffffffff810a9ed0 T is_free_buddy_page
ffffffff810a9f80 T dump_page
ffffffff810aa050 t bad_page
ffffffff810aa150 t free_pages_prepare
ffffffff810aa200 t get_page_from_freelist
ffffffff810aaab0 t destroy_compound_page
ffffffff810aab60 t free_pcppages_bulk
ffffffff810aaf00 t drain_pages
ffffffff810aaf90 t page_alloc_cpu_notify
ffffffff810aafd0 T drain_local_pages
ffffffff810aafe0 T __alloc_pages_nodemask
ffffffff810ab980 t __zone_pcp_update
ffffffff810aba20 T drain_zone_pages
ffffffff810aba80 t free_one_page
ffffffff810abde0 t __free_pages_ok
ffffffff810abec0 t free_compound_page
ffffffff810abee0 T free_hot_cold_page
ffffffff810ac080 T __free_pages
ffffffff810ac0b0 T free_pages
ffffffff810ac110 T free_pages_exact
ffffffff810ac150 t make_alloc_exact
ffffffff810ac200 T alloc_pages_exact_nid
ffffffff810ac2c0 T alloc_pages_exact
ffffffff810ac310 T free_hot_cold_page_list
ffffffff810ac360 T mapping_tagged
ffffffff810ac370 T account_page_redirty
ffffffff810ac400 T account_page_writeback
ffffffff810ac410 T test_set_page_writeback
ffffffff810ac5a0 T bdi_writeout_inc
ffffffff810ac610 T tag_pages_for_writeback
ffffffff810ac6c0 T bdi_set_max_ratio
ffffffff810ac750 t __writepage
ffffffff810ac780 T set_page_dirty
ffffffff810ac7e0 T clear_page_dirty_for_io
ffffffff810ac8c0 T write_one_page
ffffffff810aca10 T write_cache_pages
ffffffff810acdd0 T generic_writepages
ffffffff810ace30 T set_page_dirty_lock
ffffffff810ace70 t bdi_position_ratio.isra.17
ffffffff810acf80 T account_page_dirtied
ffffffff810ad040 T __set_page_dirty_nobuffers
ffffffff810ad1a0 T redirty_page_for_writepage
ffffffff810ad1c0 T global_dirtyable_memory
ffffffff810ad1f0 t calc_period_shift
ffffffff810ad240 T global_dirty_limits
ffffffff810ad360 T zone_dirty_ok
ffffffff810ad450 T dirty_background_ratio_handler
ffffffff810ad470 T dirty_background_bytes_handler
ffffffff810ad490 T bdi_set_min_ratio
ffffffff810ad510 T bdi_dirty_limit
ffffffff810ad5c0 T __bdi_update_bandwidth
ffffffff810ad910 T balance_dirty_pages_ratelimited_nr
ffffffff810adfe0 T set_page_dirty_balance
ffffffff810ae050 T throttle_vm_writeout
ffffffff810ae0f0 T dirty_writeback_centisecs_handler
ffffffff810ae110 T laptop_mode_timer_fn
ffffffff810ae190 T laptop_io_completion
ffffffff810ae1b0 T laptop_sync_completion
ffffffff810ae200 T writeback_set_ratelimit
ffffffff810ae250 t update_completion_period
ffffffff810ae270 T dirty_bytes_handler
ffffffff810ae2d0 T dirty_ratio_handler
ffffffff810ae330 T do_writepages
ffffffff810ae360 T __set_page_dirty_no_writeback
ffffffff810ae380 T test_clear_page_writeback
ffffffff810ae4f0 T file_ra_state_init
ffffffff810ae510 t __do_page_cache_readahead
ffffffff810ae720 t get_init_ra_size
ffffffff810ae770 t read_cache_pages_invalidate_page
ffffffff810ae7c0 T read_cache_pages
ffffffff810ae8e0 T max_sane_readahead
ffffffff810ae990 T force_page_cache_readahead
ffffffff810aea20 T ra_submit
ffffffff810aea50 t ondemand_readahead
ffffffff810aec80 T page_cache_async_readahead
ffffffff810aed30 T page_cache_sync_readahead
ffffffff810aed70 T pagevec_lookup_tag
ffffffff810aed90 T pagevec_lookup
ffffffff810aedb0 t __activate_page
ffffffff810aef30 t lru_deactivate_fn
ffffffff810af130 t __pagevec_lru_add_fn
ffffffff810af200 t pagevec_move_tail_fn
ffffffff810af290 t __page_cache_release.part.12
ffffffff810af3a0 t __put_compound_page
ffffffff810af3c0 t __put_single_page
ffffffff810af3e0 t put_compound_page
ffffffff810af520 T release_pages
ffffffff810af700 t pagevec_lru_move_fn
ffffffff810af7f0 T __pagevec_lru_add
ffffffff810af800 t pagevec_move_tail
ffffffff810af830 T put_page
ffffffff810af860 T put_pages_list
ffffffff810af8b0 T __get_page_tail
ffffffff810af980 T __lru_cache_add
ffffffff810afa10 T rotate_reclaimable_page
ffffffff810afac0 T activate_page
ffffffff810afb60 T mark_page_accessed
ffffffff810afbb0 T lru_cache_add_lru
ffffffff810afbf0 T add_page_to_unevictable_list
ffffffff810afcc0 T lru_add_drain_cpu
ffffffff810afdb0 T deactivate_page
ffffffff810afe30 T lru_add_drain
ffffffff810afe40 T __pagevec_release
ffffffff810afe70 t lru_add_drain_per_cpu
ffffffff810afe80 T lru_add_drain_all
ffffffff810afe90 T lru_add_page_tail
ffffffff810affe0 T invalidate_inode_pages2_range
ffffffff810b0330 T invalidate_inode_pages2
ffffffff810b0340 T cancel_dirty_page
ffffffff810b03f0 T do_invalidatepage
ffffffff810b0410 T truncate_inode_page
ffffffff810b04b0 T truncate_inode_pages_range
ffffffff810b0900 T truncate_pagecache_range
ffffffff810b0970 T truncate_inode_pages
ffffffff810b0980 T truncate_pagecache
ffffffff810b09f0 T truncate_setsize
ffffffff810b0a10 T vmtruncate
ffffffff810b0a80 T generic_error_remove_page
ffffffff810b0ab0 T invalidate_inode_page
ffffffff810b0b70 T invalidate_mapping_pages
ffffffff810b0cd0 T vmtruncate_range
ffffffff810b0db0 t zone_pagecache_reclaimable
ffffffff810b0e30 t move_active_pages_to_lru
ffffffff810b0fd0 t __remove_mapping
ffffffff810b1110 T unregister_shrinker
ffffffff810b1160 T register_shrinker
ffffffff810b11b0 t warn_scan_unevictable_pages
ffffffff810b11e0 t write_scan_unevictable_node
ffffffff810b1200 t read_scan_unevictable_node
ffffffff810b1220 t update_isolated_counts.isra.53
ffffffff810b1370 t sleeping_prematurely.part.55
ffffffff810b1450 T shrink_slab
ffffffff810b1610 T remove_mapping
ffffffff810b1650 T __isolate_lru_page
ffffffff810b1760 t isolate_lru_pages.isra.57
ffffffff810b1a90 T isolate_lru_page
ffffffff810b1c00 T wakeup_kswapd
ffffffff810b1ce0 T global_reclaimable_pages
ffffffff810b1d30 T zone_reclaimable_pages
ffffffff810b1d80 T kswapd_run
ffffffff810b1e40 T kswapd_stop
ffffffff810b1e80 T page_evictable
ffffffff810b1f10 T putback_lru_page
ffffffff810b1fe0 t shrink_page_list
ffffffff810b28d0 t putback_inactive_pages
ffffffff810b2ba0 t shrink_inactive_list
ffffffff810b2fa0 t shrink_active_list.isra.59
ffffffff810b3300 t shrink_mem_cgroup_zone
ffffffff810b3850 t __zone_reclaim
ffffffff810b3a00 T zone_reclaim
ffffffff810b3ad0 t do_try_to_free_pages
ffffffff810b3f70 T try_to_free_pages
ffffffff810b3ff0 t balance_pgdat
ffffffff810b46a0 t kswapd
ffffffff810b4970 T check_move_unevictable_pages
ffffffff810b4b00 T scan_unevictable_handler
ffffffff810b4b50 T scan_unevictable_register_node
ffffffff810b4b60 T scan_unevictable_unregister_node
ffffffff810b4b70 t shmem_follow_short_symlink
ffffffff810b4b90 t shmem_get_parent
ffffffff810b4ba0 t shmem_match
ffffffff810b4bd0 t shmem_write_end
ffffffff810b4c30 t shmem_reserve_inode
ffffffff810b4cb0 t shmem_free_inode
ffffffff810b4d00 t shmem_recalc_inode
ffffffff810b4db0 t shmem_get_policy
ffffffff810b4de0 t shmem_swapin
ffffffff810b4e80 t shmem_alloc_page
ffffffff810b4ee0 t shmem_set_policy
ffffffff810b4f10 t shmem_mmap
ffffffff810b4f50 t shmem_put_link
ffffffff810b4f80 t shmem_get_inode
ffffffff810b51a0 t shmem_mknod
ffffffff810b5270 t shmem_mkdir
ffffffff810b52a0 t shmem_create
ffffffff810b52b0 t shmem_alloc_inode
ffffffff810b52e0 t shmem_unlink
ffffffff810b5370 t shmem_rename
ffffffff810b5480 t shmem_rmdir
ffffffff810b54f0 t shmem_link
ffffffff810b55b0 t shmem_xattr_validate
ffffffff810b5610 t shmem_xattr_set
ffffffff810b57d0 t shmem_listxattr
ffffffff810b58b0 t shmem_mount
ffffffff810b58c0 t shmem_init_inode
ffffffff810b58d0 t shmem_show_options
ffffffff810b59e0 t shmem_statfs
ffffffff810b5a70 t shmem_destroy_inode
ffffffff810b5ab0 t shmem_destroy_callback
ffffffff810b5ad0 t shmem_fh_to_dentry
ffffffff810b5b30 t shmem_encode_fh
ffffffff810b5bf0 t shmem_parse_options
ffffffff810b5ef0 t shmem_remount_fs
ffffffff810b6020 t shmem_put_super
ffffffff810b6070 T shmem_fill_super
ffffffff810b6200 t shmem_find_get_pages_and_swap
ffffffff810b6310 t shmem_radix_tree_replace
ffffffff810b63a0 t shmem_writepage
ffffffff810b65e0 t shmem_free_swap
ffffffff810b6660 t shmem_add_to_page_cache
ffffffff810b67c0 t shmem_getpage_gfp
ffffffff810b6d20 t shmem_fault
ffffffff810b6d90 t shmem_write_begin
ffffffff810b6dc0 t shmem_follow_link
ffffffff810b6e60 t shmem_file_splice_read
ffffffff810b72c0 t shmem_symlink
ffffffff810b74d0 T shmem_truncate_range
ffffffff810b7a70 t shmem_setattr
ffffffff810b7ba0 t shmem_evict_inode
ffffffff810b7cd0 T shmem_read_mapping_page_gfp
ffffffff810b7d20 T shmem_file_setup
ffffffff810b7f00 t shmem_removexattr
ffffffff810b7f90 t shmem_getxattr
ffffffff810b80d0 t shmem_setxattr
ffffffff810b8190 t shmem_file_aio_read
ffffffff810b8500 T shmem_unlock_mapping
ffffffff810b85f0 T shmem_unuse
ffffffff810b8790 T shmem_lock
ffffffff810b8870 T shmem_zero_setup
ffffffff810b88d0 T vma_prio_tree_add
ffffffff810b8990 T vma_prio_tree_insert
ffffffff810b89f0 T vma_prio_tree_remove
ffffffff810b8b20 T vma_prio_tree_next
ffffffff810b8c50 T kzfree
ffffffff810b8c80 T __krealloc
ffffffff810b8d10 T krealloc
ffffffff810b8d60 T kmemdup
ffffffff810b8db0 T memdup_user
ffffffff810b8e30 T strndup_user
ffffffff810b8ea0 T kstrndup
ffffffff810b8f10 T kstrdup
ffffffff810b8f80 T __vma_link_list
ffffffff810b8fc0 T vm_is_stack
ffffffff810b90a0 T first_online_pgdat
ffffffff810b90e0 T next_online_pgdat
ffffffff810b9130 T next_zone
ffffffff810b9170 T next_zones_zonelist
ffffffff810b91d0 T mod_zone_page_state
ffffffff810b9240 T inc_zone_page_state
ffffffff810b92d0 T dec_zone_page_state
ffffffff810b9360 t frag_stop
ffffffff810b9370 t vmstat_next
ffffffff810b9390 t zoneinfo_open
ffffffff810b93a0 t vmstat_open
ffffffff810b93b0 t pagetypeinfo_open
ffffffff810b93c0 t fragmentation_open
ffffffff810b93d0 t walk_zones_in_node
ffffffff810b9470 t zoneinfo_show
ffffffff810b9490 t frag_show
ffffffff810b94b0 t vmstat_show
ffffffff810b94e0 t pagetypeinfo_showfree_print
ffffffff810b95a0 t frag_show_print
ffffffff810b9600 t frag_next
ffffffff810b9610 t frag_start
ffffffff810b9650 t vmstat_stop
ffffffff810b9670 t pagetypeinfo_showblockcount_print
ffffffff810b97e0 t zoneinfo_show_print
ffffffff810b99a0 T __mod_zone_page_state
ffffffff810b99f0 T all_vm_events
ffffffff810b9b00 t vmstat_start
ffffffff810b9ba0 t pagetypeinfo_show
ffffffff810b9cc0 T vm_events_fold_cpu
ffffffff810b9d00 T calculate_pressure_threshold
ffffffff810b9d40 T calculate_normal_threshold
ffffffff810b9d90 T refresh_zone_stat_thresholds
ffffffff810b9e40 T set_pgdat_percpu_threshold
ffffffff810b9ef0 T __inc_zone_state
ffffffff810b9f40 T __inc_zone_page_state
ffffffff810b9f70 T __dec_zone_state
ffffffff810b9fc0 T __dec_zone_page_state
ffffffff810b9ff0 T inc_zone_state
ffffffff810ba060 T refresh_cpu_vm_stats
ffffffff810ba1f0 t vmstat_update
ffffffff810ba230 T zone_statistics
ffffffff810ba2d0 T fragmentation_index
ffffffff810ba360 t bdi_sched_wait
ffffffff810ba370 t bdi_sync_supers
ffffffff810ba3d0 t read_ahead_kb_store
ffffffff810ba450 t max_ratio_store
ffffffff810ba4e0 t max_ratio_show
ffffffff810ba510 t min_ratio_show
ffffffff810ba540 t read_ahead_kb_show
ffffffff810ba570 t min_ratio_store
ffffffff810ba600 T wait_iff_congested
ffffffff810ba710 T congestion_wait
ffffffff810ba7e0 T clear_bdi_congested
ffffffff810ba850 T bdi_init
ffffffff810baac0 t wakeup_timer_fn
ffffffff810bab20 T bdi_unregister
ffffffff810baca0 T bdi_register
ffffffff810bade0 T bdi_register_dev
ffffffff810bae00 t bdi_clear_pending
ffffffff810bae20 t bdi_forker_thread
ffffffff810bb260 T set_bdi_congested
ffffffff810bb280 T bdi_lock_two
ffffffff810bb2e0 T bdi_destroy
ffffffff810bb440 T bdi_setup_and_register
ffffffff810bb4e0 T bdi_has_dirty_io
ffffffff810bb520 T bdi_arm_supers_timer
ffffffff810bb570 t sync_supers_timer_fn
ffffffff810bb590 T bdi_wakeup_thread_delayed
ffffffff810bb5d0 T start_isolate_page_range
ffffffff810bb680 T undo_isolate_page_range
ffffffff810bb710 T test_pages_isolated
ffffffff810bb890 T mminit_verify_zonelist
ffffffff810bba00 T unuse_mm
ffffffff810bba80 T use_mm
ffffffff810bbc00 t pcpu_mem_zalloc
ffffffff810bbc60 t pcpu_get_pages_and_bitmap
ffffffff810bbd70 t pcpu_next_unpop
ffffffff810bbde0 t pcpu_next_pop
ffffffff810bbe50 t pcpu_unmap_pages
ffffffff810bbff0 t pcpu_free_chunk
ffffffff810bc040 t pcpu_extend_area_map
ffffffff810bc170 t pcpu_chunk_slot
ffffffff810bc1c0 t pcpu_chunk_relocate
ffffffff810bc260 t pcpu_free_area
ffffffff810bc3c0 T free_percpu
ffffffff810bc520 t pcpu_alloc_area
ffffffff810bc7d0 t pcpu_free_pages.isra.14
ffffffff810bc870 t pcpu_alloc
ffffffff810bd270 T __alloc_percpu
ffffffff810bd280 t pcpu_reclaim
ffffffff810bd4e0 T __alloc_reserved_percpu
ffffffff810bd4f0 T is_kernel_percpu_address
ffffffff810bd580 T per_cpu_ptr_to_phys
ffffffff810bd690 T sys_remap_file_pages
ffffffff810bdbb0 T sys_madvise
ffffffff810be330 t print_bad_pte
ffffffff810be5c0 t add_mm_counter_fast
ffffffff810be5f0 t __do_fault
ffffffff810beb20 t __follow_pte.isra.47
ffffffff810becd0 T follow_pfn
ffffffff810bed40 T sync_mm_rss
ffffffff810bed90 T tlb_gather_mmu
ffffffff810bedf0 T tlb_flush_mmu
ffffffff810bee70 T tlb_finish_mmu
ffffffff810beec0 T __tlb_remove_page
ffffffff810bef50 T pgd_clear_bad
ffffffff810befa0 T pud_clear_bad
ffffffff810beff0 T pmd_clear_bad
ffffffff810bf040 T free_pgd_range
ffffffff810bf4f0 T free_pgtables
ffffffff810bf5f0 T __pte_alloc
ffffffff810bf760 T __pte_alloc_kernel
ffffffff810bf830 T vm_normal_page
ffffffff810bf8b0 t do_wp_page
ffffffff810c00e0 t unmap_single_vma
ffffffff810c0900 t zap_page_range_single
ffffffff810c0a10 T zap_vma_ptes
ffffffff810c0a40 T unmap_mapping_range
ffffffff810c0bb0 T copy_pte_range
ffffffff810c1090 T unmap_vmas
ffffffff810c1140 T zap_page_range
ffffffff810c1220 T follow_page
ffffffff810c1720 T handle_pte_fault
ffffffff810c2050 T __pud_alloc
ffffffff810c2130 T __pmd_alloc
ffffffff810c2210 T remap_pfn_range
ffffffff810c2690 T apply_to_page_range
ffffffff810c2ab0 T copy_page_range
ffffffff810c2f20 T __get_locked_pte
ffffffff810c3100 t insert_pfn.isra.40
ffffffff810c31d0 T vm_insert_mixed
ffffffff810c3200 T vm_insert_pfn
ffffffff810c3330 T vm_insert_page
ffffffff810c34b0 T handle_mm_fault
ffffffff810c3830 T fixup_user_fault
ffffffff810c38f0 T __get_user_pages
ffffffff810c3e40 T get_dump_page
ffffffff810c3e90 T get_user_pages
ffffffff810c3ee0 t __access_remote_vm
ffffffff810c40c0 T make_pages_present
ffffffff810c4190 T follow_phys
ffffffff810c4240 T generic_access_phys
ffffffff810c4300 T access_remote_vm
ffffffff810c4320 T access_process_vm
ffffffff810c43b0 T print_vma_addr
ffffffff810c44f0 T clear_huge_page
ffffffff810c46a0 T copy_user_huge_page
ffffffff810c4920 t mincore_hugetlb_page_range
ffffffff810c49b0 t mincore_page
ffffffff810c4a00 t mincore_unmapped_range
ffffffff810c4ad0 T sys_mincore
ffffffff810c5200 T can_do_mlock
ffffffff810c5240 t __mlock_vma_pages_range.isra.4
ffffffff810c52a0 t do_mlock_pages
ffffffff810c53d0 T __clear_page_mlock
ffffffff810c5430 T mlock_vma_page
ffffffff810c5480 T munlock_vma_page
ffffffff810c5500 T mlock_vma_pages_range
ffffffff810c55b0 T munlock_vma_pages_range
ffffffff810c5650 t mlock_fixup
ffffffff810c57e0 t do_mlockall
ffffffff810c5870 t do_mlock
ffffffff810c5940 T sys_mlock
ffffffff810c5a70 T sys_munlock
ffffffff810c5b00 T sys_mlockall
ffffffff810c5c90 T sys_munlockall
ffffffff810c5cf0 T user_shm_lock
ffffffff810c5da0 T user_shm_unlock
ffffffff810c5e00 T vm_get_page_prot
ffffffff810c5e10 t find_vma_prepare
ffffffff810c5e70 t can_vma_merge_before
ffffffff810c5ee0 T arch_unmap_area
ffffffff810c5f40 T find_vma
ffffffff810c5fb0 t special_mapping_close
ffffffff810c5fc0 t special_mapping_fault
ffffffff810c6050 t __vma_link_file
ffffffff810c60d0 t unmap_region
ffffffff810c6210 t remove_vma
ffffffff810c6290 t reusable_anon_vma
ffffffff810c6370 T get_unmapped_area
ffffffff810c6490 t __remove_shared_vm_struct.isra.24
ffffffff810c64f0 T __vm_enough_memory
ffffffff810c6670 T unlink_file_vma
ffffffff810c6700 T __vma_link_rb
ffffffff810c6730 t vma_link
ffffffff810c6810 T vma_adjust
ffffffff810c6d50 t __split_vma
ffffffff810c6fd0 T vma_merge
ffffffff810c7310 T find_mergeable_anon_vma
ffffffff810c7350 T vm_stat_account
ffffffff810c73b0 T do_munmap
ffffffff810c7730 T vm_munmap
ffffffff810c77b0 t do_brk
ffffffff810c7ae0 T vm_brk
ffffffff810c7b40 T sys_brk
ffffffff810c7ca0 T vma_wants_writenotify
ffffffff810c7d20 T mmap_region
ffffffff810c8250 t do_mmap_pgoff
ffffffff810c8610 T sys_mmap_pgoff
ffffffff810c8820 T do_mmap
ffffffff810c8850 T vm_mmap
ffffffff810c88d0 T arch_unmap_area_topdown
ffffffff810c88f0 T find_vma_prev
ffffffff810c8940 T expand_downwards
ffffffff810c8af0 T expand_stack
ffffffff810c8b00 T find_extend_vma
ffffffff810c8b80 T split_vma
ffffffff810c8ba0 T sys_munmap
ffffffff810c8bb0 T exit_mmap
ffffffff810c8cf0 T insert_vm_struct
ffffffff810c8db0 T copy_vma
ffffffff810c9020 T may_expand_vm
ffffffff810c9050 T install_special_mapping
ffffffff810c9180 T mm_drop_all_locks
ffffffff810c9270 T mm_take_all_locks
ffffffff810c93e0 T mprotect_fixup
ffffffff810c9bf0 T sys_mprotect
ffffffff810c9e10 t vma_to_resize
ffffffff810c9fb0 T move_page_tables
ffffffff810ca580 t move_vma
ffffffff810ca7d0 T do_mremap
ffffffff810cad10 T sys_mremap
ffffffff810cad90 T sys_msync
ffffffff810caf70 t anon_vma_chain_free
ffffffff810caf80 t anon_vma_ctor
ffffffff810cafb0 t __hugepage_set_anon_rmap
ffffffff810cb010 t __page_set_anon_rmap
ffffffff810cb070 T anon_vma_moveto_tail
ffffffff810cb160 T page_unlock_anon_vma
ffffffff810cb170 T vma_address
ffffffff810cb1e0 T page_address_in_vma
ffffffff810cb2e0 T __page_check_address
ffffffff810cb4c0 T page_mkclean
ffffffff810cb6b0 T page_mapped_in_vma
ffffffff810cb750 T page_referenced_one
ffffffff810cb930 T page_move_anon_rmap
ffffffff810cb980 T do_page_add_anon_rmap
ffffffff810cba00 T page_add_anon_rmap
ffffffff810cba10 T page_add_new_anon_rmap
ffffffff810cbab0 T page_add_file_rmap
ffffffff810cbad0 T page_remove_rmap
ffffffff810cbb50 T try_to_unmap_one
ffffffff810cbfe0 t try_to_unmap_file
ffffffff810cc660 T is_vma_temporary_stack
ffffffff810cc680 T __put_anon_vma
ffffffff810cc710 T anon_vma_prepare
ffffffff810cc890 T unlink_anon_vmas
ffffffff810cca40 T anon_vma_clone
ffffffff810ccbd0 T anon_vma_fork
ffffffff810ccd10 T page_get_anon_vma
ffffffff810ccda0 T page_lock_anon_vma
ffffffff810ccec0 t try_to_unmap_anon
ffffffff810ccfe0 T try_to_munlock
ffffffff810cd010 T try_to_unmap
ffffffff810cd070 T page_referenced
ffffffff810cd2e0 T rmap_walk
ffffffff810cd500 T hugepage_add_anon_rmap
ffffffff810cd540 T hugepage_add_new_anon_rmap
ffffffff810cd560 T vmalloc_to_page
ffffffff810cd640 T vmalloc_to_pfn
ffffffff810cd670 t f
ffffffff810cd690 t s_next
ffffffff810cd6a0 t s_stop
ffffffff810cd6b0 t s_start
ffffffff810cd6f0 t lazy_max_pages
ffffffff810cd720 t vmalloc_open
ffffffff810cd790 t pvm_determine_end
ffffffff810cd800 t find_vmap_area
ffffffff810cd860 t find_vm_area
ffffffff810cd890 t __free_vmap_area
ffffffff810cd990 t pvm_find_next_prev
ffffffff810cda40 t __insert_vmap_area
ffffffff810cdb10 t insert_vmalloc_vmlist
ffffffff810cdb70 T remap_vmalloc_range
ffffffff810cdc50 t s_show
ffffffff810cde80 t vunmap_page_range
ffffffff810ce120 T unmap_kernel_range_noflush
ffffffff810ce130 t free_vmap_block
ffffffff810ce1b0 t purge_fragmented_blocks
ffffffff810ce510 t __purge_vmap_area_lazy
ffffffff810ce6e0 t purge_vmap_area_lazy
ffffffff810ce710 T pcpu_get_vm_areas
ffffffff810cec10 t alloc_vmap_area
ffffffff810cef80 t __get_vm_area_node
ffffffff810cf100 T __get_vm_area
ffffffff810cf140 t free_vmap_area_noflush
ffffffff810cf1a0 T vm_unmap_aliases
ffffffff810cf320 T vm_unmap_ram
ffffffff810cf4d0 t vmap_page_range_noflush
ffffffff810cf840 T map_vm_area
ffffffff810cf880 T vm_map_ram
ffffffff810cfda0 T is_vmalloc_or_module_addr
ffffffff810cfde0 T set_iounmap_nonlazy
ffffffff810cfdf0 T map_kernel_range_noflush
ffffffff810cfe00 T unmap_kernel_range
ffffffff810cfe20 T __get_vm_area_caller
ffffffff810cfe50 T get_vm_area
ffffffff810cfea0 T get_vm_area_caller
ffffffff810cfee0 T remove_vm_area
ffffffff810cff70 T free_vm_area
ffffffff810cff90 T alloc_vm_area
ffffffff810d0000 t __vunmap
ffffffff810d00e0 T vunmap
ffffffff810d0100 T vmap
ffffffff810d0190 T vfree
ffffffff810d01c0 T __vmalloc_node_range
ffffffff810d03f0 t __vmalloc_node
ffffffff810d0430 T vmalloc
ffffffff810d0460 T vzalloc
ffffffff810d0490 T vzalloc_node
ffffffff810d04c0 T vmalloc_32_user
ffffffff810d0500 T vmalloc_32
ffffffff810d0530 T vmalloc_node
ffffffff810d0560 T vmalloc_user
ffffffff810d05a0 T __vmalloc
ffffffff810d05c0 T vmalloc_exec
ffffffff810d05f0 T vread
ffffffff810d0840 T vwrite
ffffffff810d0a40 T pcpu_free_vm_areas
ffffffff810d0a70 T walk_page_range
ffffffff810d0f60 T ptep_clear_flush
ffffffff810d0fc0 T pmdp_clear_flush
ffffffff810d1000 t process_vm_rw_core.isra.2
ffffffff810d1660 t process_vm_rw
ffffffff810d17e0 T sys_process_vm_readv
ffffffff810d1800 T sys_process_vm_writev
ffffffff810d1820 T compat_process_vm_rw
ffffffff810d1a00 T compat_sys_process_vm_readv
ffffffff810d1a20 T compat_sys_process_vm_writev
ffffffff810d1a40 t bounce_end_io
ffffffff810d1ae0 t __bounce_end_io_read
ffffffff810d1bd0 t bounce_end_io_read_isa
ffffffff810d1be0 t bounce_end_io_read
ffffffff810d1bf0 t bounce_end_io_write_isa
ffffffff810d1c00 t bounce_end_io_write
ffffffff810d1c10 t mempool_alloc_pages_isa
ffffffff810d1c20 T blk_queue_bounce
ffffffff810d1f30 T init_emergency_isa_pool
ffffffff810d1f90 t get_swap_bio
ffffffff810d2020 t end_swap_bio_write
ffffffff810d20a0 T end_swap_bio_read
ffffffff810d2120 T swap_writepage
ffffffff810d21e0 T swap_readpage
ffffffff810d2230 t __add_to_swap_cache
ffffffff810d22e0 T show_swap_cache_info
ffffffff810d2360 T add_to_swap_cache
ffffffff810d23b0 T __delete_from_swap_cache
ffffffff810d2400 T add_to_swap
ffffffff810d24a0 T delete_from_swap_cache
ffffffff810d2500 T free_page_and_swap_cache
ffffffff810d2550 T free_pages_and_swap_cache
ffffffff810d2600 T lookup_swap_cache
ffffffff810d2630 T read_swap_cache_async
ffffffff810d27a0 T swapin_readahead
ffffffff810d2860 t swaps_poll
ffffffff810d28b0 t swap_next
ffffffff810d2900 t swaps_open
ffffffff810d2930 t swap_stop
ffffffff810d2940 t swap_start
ffffffff810d29a0 t enable_swap_info
ffffffff810d2a80 t swap_info_get
ffffffff810d2b50 t destroy_swap_extents
ffffffff810d2bb0 t wait_for_discard
ffffffff810d2bc0 t swap_show
ffffffff810d2c90 t swap_count_continued.isra.20
ffffffff810d2fb0 t swap_entry_free
ffffffff810d30f0 t __swap_duplicate
ffffffff810d3270 t add_swap_extent
ffffffff810d3340 T sys_swapon
ffffffff810d3ed0 T swap_free
ffffffff810d3f00 t unuse_mm
ffffffff810d44c0 T swapcache_free
ffffffff810d44f0 T reuse_swap_page
ffffffff810d45d0 T try_to_free_swap
ffffffff810d4680 t scan_swap_map
ffffffff810d4c20 T get_swap_page_of_type
ffffffff810d4cc0 T get_swap_page
ffffffff810d4dc0 T free_swap_and_cache
ffffffff810d4ee0 T map_swap_page
ffffffff810d4f40 T sys_swapoff
ffffffff810d5920 T si_swapinfo
ffffffff810d59a0 T swap_shmem_alloc
ffffffff810d59b0 T swapcache_prepare
ffffffff810d59c0 T add_swap_count_continuation
ffffffff810d5b50 T swap_duplicate
ffffffff810d5b90 T grab_swap_token
ffffffff810d5ca0 T __put_swap_token
ffffffff810d5ce0 T disable_swap_token
ffffffff810d5d60 t dmam_pool_match
ffffffff810d5d70 T dma_pool_free
ffffffff810d5e60 t show_pools
ffffffff810d5fa0 T dma_pool_create
ffffffff810d6180 T dmam_pool_create
ffffffff810d6240 T dma_pool_destroy
ffffffff810d63e0 T dmam_pool_destroy
ffffffff810d6420 t dmam_pool_release
ffffffff810d6430 T dma_pool_alloc
ffffffff810d66c0 t make_huge_pte
ffffffff810d6750 t hugetlb_vm_op_fault
ffffffff810d6760 t is_hugetlb_entry_hwpoisoned
ffffffff810d67a0 t hugetlb_vm_op_open
ffffffff810d67e0 t region_truncate
ffffffff810d68b0 t resv_map_release
ffffffff810d68d0 t resv_map_put
ffffffff810d6900 t region_add
ffffffff810d69e0 t hugepage_subpool_get_pages
ffffffff810d6a40 t hugepage_subpool_put_pages
ffffffff810d6ac0 t prep_new_huge_page
ffffffff810d6b30 t update_and_free_page
ffffffff810d6bb0 t alloc_buddy_huge_page
ffffffff810d6d00 t hugetlb_sysfs_add_hstate
ffffffff810d6d70 T hugetlb_unregister_node
ffffffff810d6e50 T hugetlb_register_node
ffffffff810d6f80 t region_chg
ffffffff810d7070 T vma_kernel_pagesize
ffffffff810d70b0 T PageHuge
ffffffff810d70e0 t next_node_allowed
ffffffff810d7150 t get_valid_node_allowed
ffffffff810d7170 t vma_needs_reservation
ffffffff810d7220 t alloc_huge_page
ffffffff810d75e0 t kobj_to_hstate
ffffffff810d76a0 t nr_overcommit_hugepages_store
ffffffff810d7730 t surplus_hugepages_show
ffffffff810d7780 t resv_hugepages_show
ffffffff810d77b0 t free_hugepages_show
ffffffff810d7800 t nr_overcommit_hugepages_show
ffffffff810d7830 t hstate_next_node_to_free.isra.35
ffffffff810d7880 t free_pool_huge_page
ffffffff810d7950 t return_unused_surplus_pages.part.36
ffffffff810d7990 t hugetlb_acct_memory
ffffffff810d7d10 t hugetlb_vm_op_close
ffffffff810d7df0 t hstate_next_node_to_alloc.isra.37
ffffffff810d7e40 t alloc_fresh_huge_page
ffffffff810d7ef0 t adjust_pool_surplus
ffffffff810d7fc0 t set_max_huge_pages.part.38
ffffffff810d8110 t hugetlb_sysctl_handler_common
ffffffff810d8210 t nr_hugepages_show_common.isra.39
ffffffff810d8260 t nr_hugepages_mempolicy_show
ffffffff810d8270 t nr_hugepages_show
ffffffff810d8280 t nr_hugepages_store_common.isra.40
ffffffff810d8390 t nr_hugepages_mempolicy_store
ffffffff810d83a0 t nr_hugepages_store
ffffffff810d83b0 T hugepage_new_subpool
ffffffff810d83f0 T hugepage_put_subpool
ffffffff810d8480 T linear_hugepage_index
ffffffff810d84c0 T vma_mmu_pagesize
ffffffff810d84d0 T reset_vma_resv_huge_pages
ffffffff810d84f0 T size_to_hstate
ffffffff810d8560 t free_huge_page
ffffffff810d86a0 T copy_huge_page
ffffffff810d8980 T alloc_huge_page_node
ffffffff810d8a50 T hugetlb_sysctl_handler
ffffffff810d8a70 T hugetlb_mempolicy_sysctl_handler
ffffffff810d8a90 T hugetlb_treat_movable_handler
ffffffff810d8ac0 T hugetlb_overcommit_handler
ffffffff810d8b70 T hugetlb_report_meminfo
ffffffff810d8bd0 T hugetlb_report_node_meminfo
ffffffff810d8c20 T hugetlb_total_pages
ffffffff810d8c50 T copy_hugetlb_page_range
ffffffff810d8e10 T __unmap_hugepage_range
ffffffff810d90d0 t hugetlb_cow
ffffffff810d94c0 T unmap_hugepage_range
ffffffff810d9530 T hugetlb_fault
ffffffff810d9bd0 T follow_hugetlb_page
ffffffff810d9ee0 T hugetlb_change_protection
ffffffff810da070 T hugetlb_reserve_pages
ffffffff810da210 T hugetlb_unreserve_pages
ffffffff810da2e0 T dequeue_hwpoisoned_huge_page
ffffffff810da4c0 t mpol_rebind_default
ffffffff810da4d0 t mpol_new_interleave
ffffffff810da530 t sp_alloc
ffffffff810da5a0 t mpol_new_preferred
ffffffff810da620 t mpol_new_bind
ffffffff810da6e0 t interleave_nodes
ffffffff810da780 t policy_zonelist
ffffffff810da810 t mpol_relative_nodemask
ffffffff810da880 t mpol_rebind_preferred
ffffffff810da990 t mpol_set_nodemask
ffffffff810dab10 t mpol_rebind_nodemask
ffffffff810dada0 t mpol_rebind_policy
ffffffff810dae60 t new_node_page
ffffffff810daea0 t alloc_page_interleave
ffffffff810daf30 t get_nodes
ffffffff810db030 t check_range
ffffffff810db5d0 t mpol_new
ffffffff810db690 t sp_lookup.isra.17
ffffffff810db700 t sp_insert
ffffffff810db760 t policy_nodemask
ffffffff810db7b0 T alloc_pages_current
ffffffff810db8e0 T __mpol_put
ffffffff810db900 t do_set_mempolicy
ffffffff810dbaa0 T mpol_rebind_task
ffffffff810dbab0 T mpol_rebind_mm
ffffffff810dbb00 T mpol_fix_fork_child_flag
ffffffff810dbb20 T do_migrate_pages
ffffffff810dbd90 T sys_set_mempolicy
ffffffff810dbdf0 T sys_migrate_pages
ffffffff810dc030 T sys_get_mempolicy
ffffffff810dc520 T compat_sys_get_mempolicy
ffffffff810dc630 T compat_sys_set_mempolicy
ffffffff810dc6e0 T get_vma_policy
ffffffff810dc740 T slab_node
ffffffff810dc7e0 T node_random
ffffffff810dc840 T huge_zonelist
ffffffff810dc960 T init_nodemask_of_mempolicy
ffffffff810dca50 T mempolicy_nodemask_intersects
ffffffff810dcaf0 T alloc_pages_vma
ffffffff810dccd0 t new_vma_page
ffffffff810dcd30 T __mpol_dup
ffffffff810dce40 T __mpol_cond_copy
ffffffff810dce70 T __mpol_equal
ffffffff810dcf40 t do_mbind
ffffffff810dd360 T sys_mbind
ffffffff810dd410 T compat_sys_mbind
ffffffff810dd4f0 T mpol_shared_policy_lookup
ffffffff810dd570 T mpol_set_shared_policy
ffffffff810dd750 T mpol_shared_policy_init
ffffffff810dd8b0 T mpol_free_shared_policy
ffffffff810dd940 T numa_default_policy
ffffffff810dd950 T mpol_parse_str
ffffffff810ddce0 T mpol_to_str
ffffffff810ddf20 T __section_nr
ffffffff810ddf80 T sparse_decode_mem_map
ffffffff810ddfa0 T usemap_size
ffffffff810ddfb0 t suitable_migration_target
ffffffff810ddff0 t compaction_alloc
ffffffff810de360 T compaction_suitable
ffffffff810de410 t compact_zone
ffffffff810debf0 t __compact_pgdat
ffffffff810ded40 t compact_node
ffffffff810ded80 T sysfs_compact_node
ffffffff810dedd0 T try_to_compact_pages
ffffffff810def60 T compact_pgdat
ffffffff810def90 T sysctl_compaction_handler
ffffffff810df020 T sysctl_extfrag_handler
ffffffff810df030 T compaction_register_node
ffffffff810df040 T compaction_unregister_node
ffffffff810df050 T mmu_notifier_unregister
ffffffff810df110 t do_mmu_notifier_register
ffffffff810df240 T __mmu_notifier_register
ffffffff810df250 T mmu_notifier_register
ffffffff810df260 T __mmu_notifier_release
ffffffff810df300 T __mmu_notifier_clear_flush_young
ffffffff810df360 T __mmu_notifier_test_young
ffffffff810df3b0 T __mmu_notifier_change_pte
ffffffff810df430 T __mmu_notifier_invalidate_page
ffffffff810df480 T __mmu_notifier_invalidate_range_start
ffffffff810df4e0 T __mmu_notifier_invalidate_range_end
ffffffff810df540 T __mmu_notifier_mm_destroy
ffffffff810df570 t cache_estimate
ffffffff810df610 T kmem_cache_size
ffffffff810df620 t do_ccupdate_local
ffffffff810df650 t slabinfo_open
ffffffff810df660 t s_show
ffffffff810df9b0 t s_next
ffffffff810df9c0 t s_stop
ffffffff810df9d0 t s_start
ffffffff810dfa70 T ksize
ffffffff810dfae0 t transfer_objects
ffffffff810dfb50 t slab_out_of_memory
ffffffff810dfcc0 t kmem_getpages
ffffffff810dfe40 t kmem_flagcheck.isra.41
ffffffff810dfe60 t kmem_freepages.isra.43
ffffffff810dff90 t slab_destroy
ffffffff810e0020 t free_block
ffffffff810e01a0 t __drain_alien_cache
ffffffff810e0240 t drain_alien_cache
ffffffff810e0300 T kfree
ffffffff810e0520 t free_alien_cache
ffffffff810e05b0 T kmem_cache_free
ffffffff810e0760 t __kmem_cache_destroy
ffffffff810e0850 t do_drain
ffffffff810e08e0 t drain_freelist
ffffffff810e09b0 t kmem_rcu_free
ffffffff810e0a10 t cache_grow
ffffffff810e0ca0 t ____cache_alloc_node
ffffffff810e0de0 t fallback_alloc
ffffffff810e1040 T kmem_cache_alloc_node
ffffffff810e1140 T __kmalloc_node
ffffffff810e1190 t alloc_arraycache
ffffffff810e11f0 t alloc_alien_cache
ffffffff810e1330 T __kmalloc
ffffffff810e1490 T kmem_cache_alloc
ffffffff810e1590 t do_tune_cpucache
ffffffff810e1a80 t slabinfo_write
ffffffff810e1c10 t enable_cpucache
ffffffff810e1cd0 T kmem_cache_create
ffffffff810e2200 t drain_array
ffffffff810e2300 t cache_reap
ffffffff810e2500 t __cache_shrink
ffffffff810e26e0 T kmem_cache_destroy
ffffffff810e27d0 T kmem_cache_shrink
ffffffff810e2830 T slab_is_available
ffffffff810e2840 T fail_migrate_page
ffffffff810e2850 t new_page_node
ffffffff810e28c0 t do_pages_stat
ffffffff810e2a20 t remove_migration_pte
ffffffff810e2d40 t buffer_migrate_lock_buffers
ffffffff810e2dd0 t migrate_page_move_mapping.part.15
ffffffff810e3010 T migrate_prep
ffffffff810e3020 T migrate_prep_local
ffffffff810e3030 T putback_lru_pages
ffffffff810e30c0 T migration_entry_wait
ffffffff810e3250 T migrate_huge_page_move_mapping
ffffffff810e33e0 T migrate_page_copy
ffffffff810e3590 T migrate_page
ffffffff810e3600 t move_to_new_page
ffffffff810e3890 T buffer_migrate_page
ffffffff810e39e0 T migrate_pages
ffffffff810e3e30 T migrate_huge_pages
ffffffff810e40b0 T sys_move_pages
ffffffff810e4660 T migrate_vmas
ffffffff810e46c0 t pages_to_scan_store
ffffffff810e4710 t khugepaged_max_ptes_none_store
ffffffff810e4760 t alloc_sleep_millisecs_store
ffffffff810e47c0 t scan_sleep_millisecs_store
ffffffff810e4820 t alloc_sleep_millisecs_show
ffffffff810e4850 t scan_sleep_millisecs_show
ffffffff810e4880 t full_scans_show
ffffffff810e48b0 t pages_collapsed_show
ffffffff810e48e0 t pages_to_scan_show
ffffffff810e4910 t khugepaged_max_ptes_none_show
ffffffff810e4940 t khugepaged_defrag_show
ffffffff810e4970 t khugepaged_alloc_sleep
ffffffff810e4ab0 t collect_mm_slot
ffffffff810e4b80 t set_recommended_min_free_kbytes
ffffffff810e4c10 t start_khugepaged
ffffffff810e4d40 t khugepaged_wait_event
ffffffff810e4d70 t double_flag_store.isra.27
ffffffff810e4e60 t defrag_store
ffffffff810e4e80 t enabled_store
ffffffff810e4ee0 t double_flag_show.isra.28
ffffffff810e4f90 t defrag_show
ffffffff810e4fb0 t enabled_show
ffffffff810e4fc0 t prepare_pmd_huge_pte
ffffffff810e5030 t khugepaged
ffffffff810e64b0 t khugepaged_defrag_store
ffffffff810e6510 T copy_huge_pmd
ffffffff810e6780 T get_pmd_huge_pte
ffffffff810e67f0 T do_huge_pmd_wp_page
ffffffff810e6fc0 T follow_trans_huge_pmd
ffffffff810e70f0 T __pmd_trans_huge_lock
ffffffff810e71c0 T change_huge_pmd
ffffffff810e72b0 T move_huge_pmd
ffffffff810e73a0 T mincore_huge_pmd
ffffffff810e7420 T zap_huge_pmd
ffffffff810e7540 T page_check_address_pmd
ffffffff810e7650 T split_huge_page
ffffffff810e7da0 T __khugepaged_enter
ffffffff810e7ee0 T do_huge_pmd_anonymous_page
ffffffff810e8240 T khugepaged_enter_vma_merge
ffffffff810e82e0 T hugepage_madvise
ffffffff810e8360 T __khugepaged_exit
ffffffff810e84a0 T __split_huge_page_pmd
ffffffff810e8590 t split_huge_page_address
ffffffff810e8620 T __vma_adjust_trans_huge
ffffffff810e8710 T hwpoison_filter
ffffffff810e8720 t me_kernel
ffffffff810e8730 t me_unknown
ffffffff810e8750 t action_result
ffffffff810e87b0 t delete_from_lru_cache
ffffffff810e87e0 t me_swapcache_dirty
ffffffff810e8800 t set_page_hwpoison_huge_page
ffffffff810e88a0 t me_huge_page
ffffffff810e8900 T unpoison_memory
ffffffff810e8ba0 t new_page
ffffffff810e8c40 T memory_failure_queue
ffffffff810e8d20 t me_swapcache_clean
ffffffff810e8d40 t add_to_kill
ffffffff810e8e50 t get_any_page.part.9
ffffffff810e8f40 t me_pagecache_clean
ffffffff810e9050 t me_pagecache_dirty
ffffffff810e9090 t kill_proc.isra.11
ffffffff810e9230 T shake_page
ffffffff810e92d0 T memory_failure
ffffffff810e9de0 t memory_failure_work_func
ffffffff810e9e70 T soft_offline_page
ffffffff810ea2b0 T cleancache_register_ops
ffffffff810ea360 T __cleancache_init_fs
ffffffff810ea380 T __cleancache_init_shared_fs
ffffffff810ea3a0 t cleancache_get_key
ffffffff810ea420 T __cleancache_get_page
ffffffff810ea4e0 T __cleancache_put_page
ffffffff810ea570 T __cleancache_invalidate_page
ffffffff810ea600 T __cleancache_invalidate_inode
ffffffff810ea670 T __cleancache_invalidate_fs
ffffffff810ea6a0 T generic_file_open
ffffffff810ea6c0 T nonseekable_open
ffffffff810ea6d0 T put_unused_fd
ffffffff810ea740 T filp_open
ffffffff810ea810 T file_open_root
ffffffff810ea930 t chmod_common
ffffffff810ea9e0 T filp_close
ffffffff810eaa70 T sys_close
ffffffff810eab80 T fd_install
ffffffff810eac00 t __dentry_open.isra.17
ffffffff810eaf00 T lookup_instantiate_filp
ffffffff810eafc0 T dentry_open
ffffffff810eb040 t chown_common.isra.19
ffffffff810eb0d0 T do_truncate
ffffffff810eb160 T sys_truncate
ffffffff810eb320 T sys_ftruncate
ffffffff810eb430 T do_fallocate
ffffffff810eb510 T sys_fallocate
ffffffff810eb580 T sys_faccessat
ffffffff810eb740 T sys_access
ffffffff810eb750 T sys_chdir
ffffffff810eb7d0 T sys_fchdir
ffffffff810eb870 T sys_chroot
ffffffff810eb910 T sys_fchmod
ffffffff810eb980 T sys_fchmodat
ffffffff810eb9d0 T sys_chmod
ffffffff810eb9e0 T sys_chown
ffffffff810eba70 T sys_fchownat
ffffffff810ebb20 T sys_lchown
ffffffff810ebbb0 T sys_fchown
ffffffff810ebc70 T nameidata_to_filp
ffffffff810ebcc0 T do_sys_open
ffffffff810ebe90 T sys_open
ffffffff810ebeb0 T sys_openat
ffffffff810ebec0 T sys_creat
ffffffff810ebed0 T sys_vhangup
ffffffff810ebf00 T noop_llseek
ffffffff810ebf10 T no_llseek
ffffffff810ebf20 T vfs_llseek
ffffffff810ebf50 T iov_shorten
ffffffff810ebfa0 t wait_on_retry_sync_kiocb
ffffffff810ec000 T do_sync_write
ffffffff810ec100 T do_sync_read
ffffffff810ec200 T default_llseek
ffffffff810ec320 T generic_file_llseek_size
ffffffff810ec460 T generic_file_llseek
ffffffff810ec480 T sys_lseek
ffffffff810ec510 T sys_llseek
ffffffff810ec5e0 T rw_verify_area
ffffffff810ec690 t do_sendfile
ffffffff810ec8a0 T vfs_write
ffffffff810eca20 T vfs_read
ffffffff810ecb90 T sys_read
ffffffff810ecc20 T sys_write
ffffffff810eccb0 T sys_pread64
ffffffff810ecd50 T sys_pwrite64
ffffffff810ecdf0 T do_sync_readv_writev
ffffffff810ecee0 T do_loop_readv_writev
ffffffff810ecf90 T rw_copy_check_uvector
ffffffff810ed0c0 t do_readv_writev
ffffffff810ed2c0 T vfs_writev
ffffffff810ed310 T vfs_readv
ffffffff810ed360 T sys_readv
ffffffff810ed420 T sys_writev
ffffffff810ed4e0 T sys_preadv
ffffffff810ed5a0 T sys_pwritev
ffffffff810ed660 T sys_sendfile
ffffffff810ed6d0 T sys_sendfile64
ffffffff810ed770 T files_lglock_local_lock
ffffffff810ed790 T files_lglock_local_unlock
ffffffff810ed7b0 T files_lglock_local_lock_cpu
ffffffff810ed7d0 T files_lglock_local_unlock_cpu
ffffffff810ed7f0 t file_free_rcu
ffffffff810ed830 T get_max_files
ffffffff810ed840 T files_lglock_global_lock_online
ffffffff810ed8a0 T files_lglock_global_unlock_online
ffffffff810ed910 T files_lglock_global_lock
ffffffff810ed970 T files_lglock_global_unlock
ffffffff810ed9d0 T fget
ffffffff810eda50 T fget_raw
ffffffff810edad0 t files_lglock_lg_cpu_callback
ffffffff810edb60 T files_lglock_lock_init
ffffffff810edc00 T proc_nr_files
ffffffff810edc20 T get_empty_filp
ffffffff810edd60 T alloc_file
ffffffff810ede30 T fget_light
ffffffff810edef0 T fget_raw_light
ffffffff810edf90 T file_sb_list_add
ffffffff810edff0 T file_sb_list_del
ffffffff810ee030 T put_filp
ffffffff810ee080 T fput
ffffffff810ee2e0 T mark_files_ro
ffffffff810ee3c0 t ns_test_super
ffffffff810ee3d0 t set_bdev_super
ffffffff810ee400 t test_bdev_super
ffffffff810ee410 t compare_single
ffffffff810ee420 T lock_super
ffffffff810ee430 T unlock_super
ffffffff810ee440 T free_anon_bdev
ffffffff810ee490 T get_anon_bdev
ffffffff810ee590 T set_anon_super
ffffffff810ee5b0 t ns_set_super
ffffffff810ee5c0 T generic_shutdown_super
ffffffff810ee6b0 T kill_block_super
ffffffff810ee730 T kill_anon_super
ffffffff810ee750 T kill_litter_super
ffffffff810ee770 t __put_super
ffffffff810ee7f0 t put_super
ffffffff810ee820 T drop_super
ffffffff810ee840 T deactivate_locked_super
ffffffff810ee8d0 T thaw_super
ffffffff810ee990 T get_super
ffffffff810eea60 T get_super_thawed
ffffffff810eeb50 T iterate_supers_type
ffffffff810eec30 t grab_super
ffffffff810eecd0 T sget
ffffffff810ef1a0 T mount_nodev
ffffffff810ef260 T mount_bdev
ffffffff810ef460 T mount_ns
ffffffff810ef550 T freeze_super
ffffffff810ef660 T deactivate_super
ffffffff810ef6c0 T grab_super_passive
ffffffff810ef750 t prune_super
ffffffff810ef8f0 T sync_supers
ffffffff810ef9f0 T iterate_supers
ffffffff810efae0 T get_active_super
ffffffff810efb70 T user_get_super
ffffffff810efc30 T do_remount_sb
ffffffff810efdb0 T mount_single
ffffffff810efe90 t do_emergency_remount
ffffffff810effa0 T emergency_remount
ffffffff810efff0 T mount_fs
ffffffff810f00c0 t exact_match
ffffffff810f00d0 t cdev_purge
ffffffff810f0130 t cdev_default_release
ffffffff810f0140 t cdev_get
ffffffff810f01a0 t exact_lock
ffffffff810f01c0 T cdev_init
ffffffff810f0280 t cdev_dynamic_release
ffffffff810f02a0 T cdev_alloc
ffffffff810f02f0 T cdev_del
ffffffff810f0310 T cdev_add
ffffffff810f0350 t __unregister_chrdev_region
ffffffff810f03f0 T __unregister_chrdev
ffffffff810f0420 T unregister_chrdev_region
ffffffff810f0470 t __register_chrdev_region
ffffffff810f05e0 T __register_chrdev
ffffffff810f06f0 T alloc_chrdev_region
ffffffff810f0720 T register_chrdev_region
ffffffff810f07d0 t base_probe
ffffffff810f0810 T chrdev_show
ffffffff810f0890 T cdev_put
ffffffff810f08b0 t chrdev_open
ffffffff810f0a50 T cd_forget
ffffffff810f0ab0 T generic_fillattr
ffffffff810f0b40 T vfs_getattr
ffffffff810f0b80 T inode_set_bytes
ffffffff810f0ba0 T inode_sub_bytes
ffffffff810f0c30 T inode_get_bytes
ffffffff810f0c90 T inode_add_bytes
ffffffff810f0d20 t cp_new_stat
ffffffff810f0e20 t cp_old_stat
ffffffff810f0f60 T vfs_fstatat
ffffffff810f0fc0 T vfs_lstat
ffffffff810f0fe0 T vfs_stat
ffffffff810f1000 T vfs_fstat
ffffffff810f1060 T sys_stat
ffffffff810f1090 T sys_lstat
ffffffff810f10c0 T sys_fstat
ffffffff810f10f0 T sys_newstat
ffffffff810f1120 T sys_newlstat
ffffffff810f1150 T sys_newfstatat
ffffffff810f1180 T sys_newfstat
ffffffff810f11b0 T sys_readlinkat
ffffffff810f1260 T sys_readlink
ffffffff810f1280 T __inode_add_bytes
ffffffff810f12c0 T dump_write
ffffffff810f1310 T dump_seek
ffffffff810f13f0 t zap_process
ffffffff810f14a0 t cn_printf
ffffffff810f15b0 t umh_pipe_setup
ffffffff810f16a0 T set_binfmt
ffffffff810f1710 T search_binary_handler
ffffffff810f1990 T install_exec_creds
ffffffff810f19d0 T would_dump
ffffffff810f1a00 T get_task_comm
ffffffff810f1a60 T kernel_read
ffffffff810f1ac0 T prepare_binprm
ffffffff810f1c70 T open_exec
ffffffff810f1d60 t shift_arg_pages
ffffffff810f1f00 T setup_arg_pages
ffffffff810f2120 T unregister_binfmt
ffffffff810f2170 t acct_arg_size.isra.22
ffffffff810f21a0 t get_arg_page
ffffffff810f2280 t copy_strings.isra.32
ffffffff810f24f0 T copy_strings_kernel
ffffffff810f2540 T remove_arg_zero
ffffffff810f2640 T flush_old_exec
ffffffff810f2d40 T __register_binfmt
ffffffff810f2dd0 t count.isra.31.constprop.37
ffffffff810f2e80 T sys_uselib
ffffffff810f2fe0 T bprm_mm_init
ffffffff810f31d0 T set_task_comm
ffffffff810f3250 T prepare_bprm_creds
ffffffff810f32d0 T free_bprm
ffffffff810f3310 t do_execve_common.isra.36
ffffffff810f3750 T do_execve
ffffffff810f3770 T compat_do_execve
ffffffff810f3790 T set_dumpable
ffffffff810f3800 T setup_new_exec
ffffffff810f3a40 T get_dumpable
ffffffff810f3a60 T do_coredump
ffffffff810f46b0 T generic_pipe_buf_confirm
ffffffff810f46c0 t bad_pipe_r
ffffffff810f46d0 t bad_pipe_w
ffffffff810f46e0 t pipe_poll
ffffffff810f4780 t pipefs_mount
ffffffff810f47a0 t pipefs_dname
ffffffff810f47c0 t iov_fault_in_pages_read
ffffffff810f4850 t pipe_rdwr_fasync
ffffffff810f4920 t pipe_rdwr_open
ffffffff810f49b0 t pipe_write_fasync
ffffffff810f4a30 t pipe_write_open
ffffffff810f4a90 t pipe_read_fasync
ffffffff810f4b10 t pipe_read_open
ffffffff810f4b70 t pipe_ioctl
ffffffff810f4c00 T pipe_unlock
ffffffff810f4c20 t anon_pipe_buf_release
ffffffff810f4c60 T generic_pipe_buf_release
ffffffff810f4c70 t pipe_iov_copy_from_user
ffffffff810f4d20 T generic_pipe_buf_get
ffffffff810f4d50 T generic_pipe_buf_steal
ffffffff810f4da0 T generic_pipe_buf_map
ffffffff810f4e20 T generic_pipe_buf_unmap
ffffffff810f4e40 t pipe_lock_nested.isra.7
ffffffff810f4e60 T pipe_lock
ffffffff810f4e70 T pipe_double_lock
ffffffff810f4ec0 T pipe_wait
ffffffff810f4f30 t pipe_write
ffffffff810f54a0 t pipe_read
ffffffff810f59d0 T alloc_pipe_info
ffffffff810f5a60 T __free_pipe_info
ffffffff810f5ad0 T free_pipe_info
ffffffff810f5af0 t pipe_release
ffffffff810f5bc0 t pipe_rdwr_release
ffffffff810f5be0 t pipe_write_release
ffffffff810f5bf0 t pipe_read_release
ffffffff810f5c00 T create_write_pipe
ffffffff810f5db0 T free_write_pipe
ffffffff810f5de0 T create_read_pipe
ffffffff810f5e50 T do_pipe_flags
ffffffff810f5f70 T sys_pipe2
ffffffff810f5fd0 T sys_pipe
ffffffff810f5fe0 T pipe_proc_fn
ffffffff810f6030 T get_pipe_info
ffffffff810f6060 T pipe_fcntl
ffffffff810f6260 t get_write_access
ffffffff810f6290 T full_name_hash
ffffffff810f62f0 T __page_symlink
ffffffff810f6410 T page_symlink
ffffffff810f6430 T page_put_link
ffffffff810f6450 T path_get
ffffffff810f64a0 t unlazy_walk
ffffffff810f66b0 t follow_dotdot_rcu
ffffffff810f6850 T follow_down
ffffffff810f6950 T follow_down_one
ffffffff810f69d0 T path_put
ffffffff810f69f0 t terminate_walk
ffffffff810f6a20 t complete_walk
ffffffff810f6b40 T unlock_rename
ffffffff810f6bb0 t follow_managed
ffffffff810f6e60 t __lookup_hash
ffffffff810f6f70 t lookup_hash
ffffffff810f6f80 T vfs_readlink
ffffffff810f7000 T generic_readlink
ffffffff810f70b0 T follow_up
ffffffff810f7160 t follow_dotdot
ffffffff810f72a0 T dentry_unhash
ffffffff810f7300 t getname_flags
ffffffff810f7560 T getname
ffffffff810f7570 T generic_permission
ffffffff810f77c0 T inode_permission
ffffffff810f78c0 T vfs_mkdir
ffffffff810f79c0 T vfs_link
ffffffff810f7b70 t path_init
ffffffff810f7f30 T lookup_one_len
ffffffff810f8030 T putname
ffffffff810f8070 T vfs_create
ffffffff810f8160 T vfs_symlink
ffffffff810f8240 t page_getlink.isra.30
ffffffff810f82d0 T page_follow_link_light
ffffffff810f8310 T page_readlink
ffffffff810f8370 t may_delete
ffffffff810f84e0 T vfs_rename
ffffffff810f8990 t path_put_conditional.isra.33
ffffffff810f89e0 t do_lookup
ffffffff810f8d20 t link_path_walk
ffffffff810f95a0 T vfs_follow_link
ffffffff810f96b0 t path_lookupat
ffffffff810f9dc0 t do_path_lookup
ffffffff810f9e80 T kern_path_create
ffffffff810f9fa0 T user_path_create
ffffffff810fa000 T kern_path
ffffffff810fa040 t user_path_parent
ffffffff810fa0c0 t do_last
ffffffff810fa960 T vfs_path_lookup
ffffffff810fa9c0 T vfs_unlink
ffffffff810faac0 t do_unlinkat
ffffffff810fac80 T vfs_rmdir
ffffffff810fada0 t do_rmdir
ffffffff810faeb0 T vfs_mknod
ffffffff810fb010 T lock_rename
ffffffff810fb0e0 T release_open_intent
ffffffff810fb120 t path_openat
ffffffff810fb510 T kern_path_parent
ffffffff810fb530 T user_path_at_empty
ffffffff810fb5e0 T user_path_at
ffffffff810fb5f0 T do_filp_open
ffffffff810fb6a0 T do_file_open_root
ffffffff810fb770 T sys_mknodat
ffffffff810fb990 T sys_mknod
ffffffff810fb9b0 T sys_mkdirat
ffffffff810fba80 T sys_mkdir
ffffffff810fba90 T sys_rmdir
ffffffff810fbaa0 T sys_unlinkat
ffffffff810fbad0 T sys_unlink
ffffffff810fbae0 T sys_symlinkat
ffffffff810fbba0 T sys_symlink
ffffffff810fbbb0 T sys_linkat
ffffffff810fbd20 T sys_link
ffffffff810fbd40 T sys_renameat
ffffffff810fbf80 T sys_rename
ffffffff810fbfa0 t fasync_free_rcu
ffffffff810fbfb0 t send_sigio_to_task
ffffffff810fc080 t f_modown
ffffffff810fc140 T __f_setown
ffffffff810fc150 T f_setown
ffffffff810fc1a0 T set_close_on_exec
ffffffff810fc220 T sys_dup3
ffffffff810fc390 T sys_dup2
ffffffff810fc3d0 T sys_dup
ffffffff810fc440 T f_delown
ffffffff810fc450 T f_getown
ffffffff810fc480 T sys_fcntl
ffffffff810fca00 T send_sigio
ffffffff810fcae0 T kill_fasync
ffffffff810fcb80 T send_sigurg
ffffffff810fcc70 T fasync_remove_entry
ffffffff810fcd30 T fasync_alloc
ffffffff810fcd50 T fasync_free
ffffffff810fcd60 T fasync_insert_entry
ffffffff810fce50 T fasync_helper
ffffffff810fcee0 T fiemap_check_flags
ffffffff810fcf00 T fiemap_fill_next_extent
ffffffff810fcff0 T __generic_block_fiemap
ffffffff810fd270 T generic_block_fiemap
ffffffff810fd2e0 T ioctl_preallocate
ffffffff810fd370 T do_vfs_ioctl
ffffffff810fd860 T sys_ioctl
ffffffff810fd8f0 t filldir
ffffffff810fd9c0 t filldir64
ffffffff810fda90 t fillonedir
ffffffff810fdb30 T vfs_readdir
ffffffff810fdc00 T sys_old_readdir
ffffffff810fdc60 T sys_getdents
ffffffff810fdd60 T sys_getdents64
ffffffff810fde50 T poll_initwait
ffffffff810fde90 t poll_select_copy_remaining
ffffffff810fdfd0 T poll_schedule_timeout
ffffffff810fe030 T poll_freewait
ffffffff810fe0e0 t __pollwait
ffffffff810fe1f0 t pollwake
ffffffff810fe260 T select_estimate_accuracy
ffffffff810fe360 T poll_select_set_timeout
ffffffff810fe3e0 T do_select
ffffffff810fea00 T core_sys_select
ffffffff810fed20 T sys_select
ffffffff810fee20 T sys_pselect6
ffffffff810ff020 T do_sys_poll
ffffffff810ff450 t do_restart_poll
ffffffff810ff4b0 T sys_poll
ffffffff810ff5a0 T sys_ppoll
ffffffff810ff720 t wake_up_partner.isra.0
ffffffff810ff740 t wait_for_partner.isra.1
ffffffff810ff7a0 t fifo_open
ffffffff810ffa40 t dentry_lru_del
ffffffff810ffad0 t dentry_lru_prune
ffffffff810ffb70 t __d_find_alias
ffffffff810ffc90 T d_find_alias
ffffffff810ffcf0 t __d_find_any_alias
ffffffff810ffd60 T d_find_any_alias
ffffffff810ffdb0 t dentry_lock_for_move
ffffffff810ffe70 t try_to_ascend
ffffffff810fff00 T d_genocide
ffffffff811000a0 T have_submounts
ffffffff81100270 t __d_shrink
ffffffff81100300 t prepend
ffffffff81100340 t switch_names
ffffffff81100400 t __dentry_path
ffffffff811004e0 T dentry_path_raw
ffffffff81100540 t prepend_path
ffffffff81100700 t path_with_deleted
ffffffff811007b0 T d_path
ffffffff811008c0 t __d_instantiate
ffffffff811009d0 t __d_instantiate_unique
ffffffff81100ad0 t __d_rehash
ffffffff81100b20 t _d_rehash
ffffffff81100b50 T d_rehash
ffffffff81100b90 T dentry_update_name_case
ffffffff81100c10 T dget_parent
ffffffff81100c80 T d_instantiate_unique
ffffffff81100d10 T d_set_d_op
ffffffff81100dc0 t __d_free
ffffffff81100e30 t dentry_unlock_parents_for_move.isra.8
ffffffff81100e60 T d_validate
ffffffff81100f00 T __d_drop
ffffffff81100f30 T d_clear_need_lookup
ffffffff81100f80 T d_drop
ffffffff81100fc0 T d_delete
ffffffff81101160 t __d_move
ffffffff811013e0 T d_move
ffffffff81101430 T d_instantiate
ffffffff811014b0 T d_splice_alias
ffffffff81101590 t d_free
ffffffff811015e0 t d_kill
ffffffff81101720 t shrink_dentry_list
ffffffff811018f0 T shrink_dcache_sb
ffffffff811019a0 T shrink_dcache_parent
ffffffff81101c40 T d_invalidate
ffffffff81101d00 T dput
ffffffff81101eb0 T d_prune_aliases
ffffffff81101f70 T d_materialise_unique
ffffffff81102420 t shrink_dcache_for_umount_subtree
ffffffff81102600 T proc_nr_dentry
ffffffff811026a0 T prune_dcache_sb
ffffffff81102820 T shrink_dcache_for_umount
ffffffff81102880 T __d_alloc
ffffffff81102a00 T d_obtain_alias
ffffffff81102bb0 T d_make_root
ffffffff81102c10 T d_alloc_pseudo
ffffffff81102c30 T d_alloc
ffffffff81102cc0 T d_alloc_name
ffffffff81102d10 T __d_lookup_rcu
ffffffff81102e90 T __d_lookup
ffffffff81103000 T d_lookup
ffffffff81103060 T d_hash_and_lookup
ffffffff811030d0 T find_inode_number
ffffffff81103100 T d_add_ci
ffffffff81103230 T d_ancestor
ffffffff81103250 T __d_path
ffffffff811032e0 T d_absolute_path
ffffffff81103390 T d_path_with_unreachable
ffffffff811034b0 T dynamic_dname
ffffffff81103550 T dentry_path
ffffffff81103630 T sys_getcwd
ffffffff81103820 T is_subdir
ffffffff81103870 T generic_delete_inode
ffffffff81103880 T bmap
ffffffff811038a0 T inode_init_owner
ffffffff81103900 T inode_wait
ffffffff81103910 t inode_lru_list_del
ffffffff81103980 T inode_sb_list_add
ffffffff811039e0 T __insert_inode_hash
ffffffff81103aa0 T __remove_inode_hash
ffffffff81103b30 T iunique
ffffffff81103c40 T file_update_time
ffffffff81103da0 T touch_atime
ffffffff81103f10 T get_next_ino
ffffffff81103f40 T inc_nlink
ffffffff81103f80 T unlock_new_inode
ffffffff81104000 t i_callback
ffffffff81104020 T free_inode_nonrcu
ffffffff81104030 t __wait_on_freeing_inode
ffffffff81104110 t find_inode_fast
ffffffff811041b0 T ilookup
ffffffff81104280 t find_inode
ffffffff81104320 T ilookup5_nowait
ffffffff811043c0 T ilookup5
ffffffff81104400 T end_writeback
ffffffff811044a0 T address_space_init_once
ffffffff811045b0 T inode_init_once
ffffffff811046c0 t init_once
ffffffff811046d0 T inode_init_always
ffffffff81104870 t alloc_inode
ffffffff81104900 T __destroy_inode
ffffffff811049d0 t get_nr_inodes
ffffffff81104a30 T clear_nlink
ffffffff81104a50 T set_nlink
ffffffff81104a80 T inode_needs_sync
ffffffff81104ad0 T inode_owner_or_capable
ffffffff81104b10 T init_special_inode
ffffffff81104ba0 T ihold
ffffffff81104bd0 T drop_nlink
ffffffff81104c10 t destroy_inode
ffffffff81104c60 t evict
ffffffff81104e00 t dispose_list
ffffffff81104e40 T iget_locked
ffffffff81104ff0 T iget5_locked
ffffffff811051e0 T iput
ffffffff81105400 T insert_inode_locked4
ffffffff81105590 T insert_inode_locked
ffffffff81105730 T igrab
ffffffff81105790 T get_nr_dirty_inodes
ffffffff81105810 T proc_nr_inodes
ffffffff811058c0 T __iget
ffffffff811058d0 T evict_inodes
ffffffff811059e0 T invalidate_inodes
ffffffff81105b20 T prune_icache_sb
ffffffff81105e40 T new_inode_pseudo
ffffffff81105eb0 T new_inode
ffffffff81105ee0 T notify_change
ffffffff81106220 T setattr_copy
ffffffff81106330 T inode_newsize_ok
ffffffff811063a0 T inode_change_ok
ffffffff81106520 t bad_file_llseek
ffffffff81106530 t bad_file_read
ffffffff81106540 t bad_file_write
ffffffff81106550 t bad_file_aio_read
ffffffff81106560 t bad_file_aio_write
ffffffff81106570 t bad_file_readdir
ffffffff81106580 t bad_file_poll
ffffffff81106590 t bad_file_unlocked_ioctl
ffffffff811065a0 t bad_file_compat_ioctl
ffffffff811065b0 t bad_file_mmap
ffffffff811065c0 t bad_file_open
ffffffff811065d0 t bad_file_flush
ffffffff811065e0 t bad_file_release
ffffffff811065f0 t bad_file_fsync
ffffffff81106600 t bad_file_aio_fsync
ffffffff81106610 t bad_file_fasync
ffffffff81106620 t bad_file_lock
ffffffff81106630 t bad_file_sendpage
ffffffff81106640 t bad_file_get_unmapped_area
ffffffff81106650 t bad_file_check_flags
ffffffff81106660 t bad_file_flock
ffffffff81106670 t bad_file_splice_write
ffffffff81106680 t bad_file_splice_read
ffffffff81106690 t bad_inode_create
ffffffff811066a0 t bad_inode_lookup
ffffffff811066b0 t bad_inode_link
ffffffff811066c0 t bad_inode_unlink
ffffffff811066d0 t bad_inode_symlink
ffffffff811066e0 t bad_inode_mkdir
ffffffff811066f0 t bad_inode_rmdir
ffffffff81106700 t bad_inode_mknod
ffffffff81106710 t bad_inode_rename
ffffffff81106720 t bad_inode_readlink
ffffffff81106730 t bad_inode_permission
ffffffff81106740 t bad_inode_getattr
ffffffff81106750 t bad_inode_setattr
ffffffff81106760 t bad_inode_setxattr
ffffffff81106770 t bad_inode_getxattr
ffffffff81106780 t bad_inode_listxattr
ffffffff81106790 t bad_inode_removexattr
ffffffff811067a0 T is_bad_inode
ffffffff811067b0 T make_bad_inode
ffffffff81106800 T iget_failed
ffffffff81106820 t free_fdmem
ffffffff81106850 t __free_fdtable
ffffffff81106870 t free_fdtable_work
ffffffff811068c0 t alloc_fdmem
ffffffff811068f0 t alloc_fdtable
ffffffff811069c0 T free_fdtable_rcu
ffffffff81106ac0 T expand_files
ffffffff81106c80 T dup_fd
ffffffff81106f50 T alloc_fd
ffffffff81107060 T get_unused_fd
ffffffff81107070 t filesystems_proc_open
ffffffff81107090 t filesystems_proc_show
ffffffff81107110 t find_filesystem
ffffffff81107180 t __get_fs_type
ffffffff811071d0 T get_fs_type
ffffffff811072a0 T unregister_filesystem
ffffffff81107320 T register_filesystem
ffffffff811073b0 T get_filesystem
ffffffff811073c0 T put_filesystem
ffffffff811073d0 T sys_sysfs
ffffffff81107580 T vfsmount_lock_local_lock
ffffffff811075a0 T vfsmount_lock_local_unlock
ffffffff811075c0 T vfsmount_lock_local_lock_cpu
ffffffff811075e0 T vfsmount_lock_local_unlock_cpu
ffffffff81107600 T mnt_drop_write
ffffffff81107610 T mnt_drop_write_file
ffffffff81107620 T mntget
ffffffff81107640 t m_show
ffffffff81107650 t m_next
ffffffff81107670 t m_stop
ffffffff81107680 t m_start
ffffffff811076c0 t alloc_mnt_ns
ffffffff81107740 t vfsmount_lock_lg_cpu_callback
ffffffff811077d0 t touch_mnt_namespace
ffffffff81107810 t commit_tree
ffffffff81107910 T vfsmount_lock_global_lock_online
ffffffff81107970 T vfsmount_lock_global_unlock_online
ffffffff811079e0 T mnt_pin
ffffffff81107a00 T mnt_unpin
ffffffff81107a30 T mnt_set_expiry
ffffffff81107a90 T vfsmount_lock_global_lock
ffffffff81107af0 T vfsmount_lock_global_unlock
ffffffff81107b50 t mnt_alloc_group_id
ffffffff81107ba0 T may_umount
ffffffff81107be0 T replace_mount_options
ffffffff81107c10 T generic_show_options
ffffffff81107c70 T vfsmount_lock_lock_init
ffffffff81107d10 T __mnt_is_readonly
ffffffff81107d30 T mnt_want_write
ffffffff81107d80 T mnt_clone_write
ffffffff81107da0 t mnt_get_writers.isra.12
ffffffff81107df0 T mnt_want_write_file
ffffffff81107e40 t dentry_reset_mounted
ffffffff81107eb0 t detach_mnt
ffffffff81107f10 t unlock_mount.isra.18
ffffffff81107f40 T save_mount_options
ffffffff81107f70 t mnt_free_id.isra.20
ffffffff81107fb0 t alloc_vfsmnt
ffffffff81108160 t free_vfsmnt
ffffffff811081a0 t clone_mnt
ffffffff81108410 T vfs_kern_mount
ffffffff811084f0 T kern_mount_data
ffffffff81108520 T mnt_release_group_id
ffffffff81108570 t cleanup_group_ids
ffffffff81108600 t invent_group_ids
ffffffff811086b0 T mnt_get_count
ffffffff81108700 t mntput_no_expire
ffffffff81108840 T mntput
ffffffff81108870 t do_kern_mount
ffffffff81108990 t create_mnt_ns
ffffffff811089f0 T may_umount_tree
ffffffff81108aa0 T sb_prepare_remount_readonly
ffffffff81108b70 T __lookup_mnt
ffffffff81108be0 T lookup_mnt
ffffffff81108c30 t lock_mount
ffffffff81108d00 T mnt_set_mountpoint
ffffffff81108d90 t attach_mnt
ffffffff81108e20 t attach_recursive_mnt
ffffffff81108ff0 t graft_tree
ffffffff81109060 t do_add_mount
ffffffff81109130 T release_mounts
ffffffff811091c0 T umount_tree
ffffffff811093b0 T mark_mounts_for_expiry
ffffffff81109500 T sys_umount
ffffffff81109890 T sys_oldumount
ffffffff811098a0 T copy_tree
ffffffff81109b00 T collect_mounts
ffffffff81109b50 T drop_collected_mounts
ffffffff81109bb0 T iterate_mounts
ffffffff81109c10 T finish_automount
ffffffff81109cf0 T copy_mount_options
ffffffff81109e50 T copy_mount_string
ffffffff81109e90 T do_mount
ffffffff8110a6f0 T mnt_make_longterm
ffffffff8110a700 T mnt_make_shortterm
ffffffff8110a760 T kern_unmount
ffffffff8110a790 T sys_mount
ffffffff8110a880 T is_path_reachable
ffffffff8110a8e0 T path_is_under
ffffffff8110a930 T sys_pivot_root
ffffffff8110abc0 T put_mnt_ns
ffffffff8110ac30 T mount_subtree
ffffffff8110acc0 T copy_mnt_ns
ffffffff8110afa0 T our_mnt
ffffffff8110afc0 t single_start
ffffffff8110afd0 t single_next
ffffffff8110afe0 t single_stop
ffffffff8110aff0 T seq_putc
ffffffff8110b020 T seq_list_start
ffffffff8110b050 T seq_list_next
ffffffff8110b070 T seq_hlist_start
ffffffff8110b090 T seq_hlist_next
ffffffff8110b0b0 T seq_hlist_start_rcu
ffffffff8110b0e0 T seq_hlist_next_rcu
ffffffff8110b100 T seq_write
ffffffff8110b160 T seq_puts
ffffffff8110b1d0 T seq_release
ffffffff8110b1f0 T seq_release_private
ffffffff8110b240 T single_release
ffffffff8110b270 T seq_bitmap_list
ffffffff8110b2c0 T seq_bitmap
ffffffff8110b310 t traverse
ffffffff8110b500 T mangle_path
ffffffff8110b5b0 T seq_path
ffffffff8110b660 T seq_escape
ffffffff8110b770 T seq_printf
ffffffff8110b800 T seq_lseek
ffffffff8110b910 T seq_read
ffffffff8110bcc0 T seq_open
ffffffff8110be10 T __seq_open_private
ffffffff8110be80 T seq_open_private
ffffffff8110bea0 T single_open
ffffffff8110bf50 T seq_list_start_head
ffffffff8110bfa0 T seq_hlist_start_head
ffffffff8110bfe0 T seq_hlist_start_head_rcu
ffffffff8110c020 T seq_put_decimal_ull
ffffffff8110c0a0 T seq_put_decimal_ll
ffffffff8110c100 T seq_path_root
ffffffff8110c1d0 T seq_dentry
ffffffff8110c280 T xattr_getsecurity
ffffffff8110c290 T vfs_listxattr
ffffffff8110c2b0 t xattr_resolve_name
ffffffff8110c320 T generic_getxattr
ffffffff8110c3a0 T generic_listxattr
ffffffff8110c470 T generic_setxattr
ffffffff8110c500 T generic_removexattr
ffffffff8110c550 t listxattr
ffffffff8110c690 t xattr_permission
ffffffff8110c790 T vfs_removexattr
ffffffff8110c8b0 t removexattr
ffffffff8110c900 T vfs_getxattr
ffffffff8110c990 t getxattr
ffffffff8110caf0 T __vfs_setxattr_noperm
ffffffff8110cbd0 T vfs_setxattr
ffffffff8110cca0 t setxattr
ffffffff8110ce50 T vfs_getxattr_alloc
ffffffff8110cf70 T vfs_xattr_cmp
ffffffff8110d000 T sys_setxattr
ffffffff8110d0b0 T sys_lsetxattr
ffffffff8110d160 T sys_fsetxattr
ffffffff8110d250 T sys_getxattr
ffffffff8110d2d0 T sys_lgetxattr
ffffffff8110d350 T sys_fgetxattr
ffffffff8110d400 T sys_listxattr
ffffffff8110d470 T sys_llistxattr
ffffffff8110d4e0 T sys_flistxattr
ffffffff8110d580 T sys_removexattr
ffffffff8110d610 T sys_lremovexattr
ffffffff8110d6a0 T sys_fremovexattr
ffffffff8110d760 T simple_statfs
ffffffff8110d780 t simple_delete_dentry
ffffffff8110d790 T generic_read_dir
ffffffff8110d7a0 T simple_open
ffffffff8110d7c0 T noop_fsync
ffffffff8110d7d0 T generic_check_addressable
ffffffff8110d810 T generic_file_fsync
ffffffff8110d8b0 T generic_fh_to_parent
ffffffff8110d900 T generic_fh_to_dentry
ffffffff8110d940 T simple_write_to_buffer
ffffffff8110d9d0 T simple_attr_write
ffffffff8110dac0 T simple_attr_release
ffffffff8110dae0 T simple_attr_open
ffffffff8110dbc0 T simple_transaction_release
ffffffff8110dbe0 T simple_empty
ffffffff8110dc80 T dcache_readdir
ffffffff8110dec0 T dcache_dir_lseek
ffffffff8110e060 T memory_read_from_buffer
ffffffff8110e0d0 T simple_read_from_buffer
ffffffff8110e160 T simple_transaction_read
ffffffff8110e190 T simple_attr_read
ffffffff8110e280 T simple_release_fs
ffffffff8110e2f0 T simple_pin_fs
ffffffff8110e3b0 T dcache_dir_close
ffffffff8110e3d0 T simple_fill_super
ffffffff8110e590 T simple_write_end
ffffffff8110e6a0 T simple_write_begin
ffffffff8110e7a0 T simple_readpage
ffffffff8110e810 T simple_setattr
ffffffff8110e8c0 T simple_unlink
ffffffff8110e930 T simple_rmdir
ffffffff8110e990 T simple_rename
ffffffff8110ea90 T simple_link
ffffffff8110eb20 T mount_pseudo
ffffffff8110ecb0 T dcache_dir_open
ffffffff8110ece0 T simple_getattr
ffffffff8110ed30 T simple_transaction_get
ffffffff8110ee10 T simple_transaction_set
ffffffff8110ee30 T simple_lookup
ffffffff8110ee70 t redirty_tail
ffffffff8110eef0 t inode_wait_for_writeback
ffffffff8110efd0 t bdi_queue_work
ffffffff8110f060 T writeback_inodes_sb_nr
ffffffff8110f110 T sync_inodes_sb
ffffffff8110f2b0 t get_nr_dirty_pages
ffffffff8110f2e0 T writeback_inodes_sb
ffffffff8110f310 T __mark_inode_dirty
ffffffff8110f560 t over_bground_thresh
ffffffff8110f5e0 t queue_io
ffffffff8110f790 t requeue_io
ffffffff8110f810 t writeback_single_inode
ffffffff8110fae0 T sync_inode
ffffffff8110fba0 T sync_inode_metadata
ffffffff8110fc00 T write_inode_now
ffffffff8110fd50 t writeback_sb_inodes
ffffffff8110ffd0 t __writeback_inodes_wb
ffffffff81110080 t wb_writeback
ffffffff81110260 t wb_check_old_data_flush
ffffffff81110310 t __bdi_start_writeback
ffffffff811103e0 T writeback_inodes_sb_nr_if_idle
ffffffff81110450 T writeback_inodes_sb_if_idle
ffffffff811104b0 T writeback_in_progress
ffffffff811104c0 T bdi_start_writeback
ffffffff811104d0 T bdi_start_background_writeback
ffffffff81110530 T inode_wb_list_del
ffffffff811105c0 T writeback_inodes_wb
ffffffff81110660 T wb_do_writeback
ffffffff81110790 T bdi_writeback_thread
ffffffff811108d0 T wakeup_flusher_threads
ffffffff81110970 t propagation_next
ffffffff81110a00 T get_dominating_id
ffffffff81110a90 T change_mnt_propagation
ffffffff81110d00 T propagate_mnt
ffffffff81110f10 T propagate_mount_busy
ffffffff81111020 T propagate_umount
ffffffff811110e0 t drop_pagecache_sb
ffffffff811111d0 T drop_caches_sysctl_handler
ffffffff81111250 T splice_from_pipe_begin
ffffffff81111260 t pipe_to_sendpage
ffffffff811112d0 T splice_from_pipe_feed
ffffffff81111400 t page_cache_pipe_buf_confirm
ffffffff81111470 t page_cache_pipe_buf_steal
ffffffff81111560 t page_cache_pipe_buf_release
ffffffff81111580 T spd_release_page
ffffffff81111590 t wakeup_pipe_readers
ffffffff811115d0 t wakeup_pipe_writers
ffffffff81111610 T splice_from_pipe_end
ffffffff81111620 T splice_from_pipe_next
ffffffff811116d0 T __splice_from_pipe
ffffffff81111740 t do_splice_from
ffffffff81111810 t direct_splice_actor
ffffffff81111830 t do_splice_to
ffffffff811118f0 t write_pipe_buf
ffffffff811119b0 t pipe_to_user
ffffffff81111ae0 T splice_direct_to_actor
ffffffff81111c90 T generic_file_splice_write
ffffffff81111e10 T pipe_to_file
ffffffff81111f90 t ipipe_prep.part.5
ffffffff81112040 t opipe_prep.part.6
ffffffff81112100 t user_page_pipe_buf_steal
ffffffff81112120 T splice_to_pipe
ffffffff81112350 T splice_grow_spd
ffffffff811123e0 T splice_shrink_spd
ffffffff81112410 t vmsplice_to_pipe
ffffffff811126f0 T default_file_splice_read
ffffffff81112ae0 t __generic_file_splice_read
ffffffff81113030 T generic_file_splice_read
ffffffff811130a0 T splice_from_pipe
ffffffff81113140 t default_file_splice_write
ffffffff81113160 T generic_splice_sendpage
ffffffff81113170 T do_splice_direct
ffffffff81113200 T sys_vmsplice
ffffffff81113450 T sys_splice
ffffffff81113a10 T sys_tee
ffffffff81113d00 T vfs_fsync_range
ffffffff81113d20 T vfs_fsync
ffffffff81113d50 T generic_write_sync
ffffffff81113dc0 t do_fsync
ffffffff81113e40 t do_sync_work
ffffffff81113ea0 t __sync_filesystem
ffffffff81113f30 t sync_one_sb
ffffffff81113f40 T sync_filesystem
ffffffff81113f90 T sys_sync
ffffffff81113ff0 T emergency_sync
ffffffff81114040 T sys_syncfs
ffffffff811140d0 T sys_fsync
ffffffff811140f0 T sys_fdatasync
ffffffff81114110 T sys_sync_file_range
ffffffff81114290 T sys_sync_file_range2
ffffffff811142a0 t utimes_common
ffffffff811143f0 T do_utimes
ffffffff811144e0 T sys_utime
ffffffff81114550 T sys_utimensat
ffffffff811145d0 T sys_futimesat
ffffffff81114680 T sys_utimes
ffffffff81114690 T fsstack_copy_inode_size
ffffffff811146b0 T fsstack_copy_attr_all
ffffffff81114720 T current_umask
ffffffff81114740 T set_fs_root
ffffffff811147e0 T set_fs_pwd
ffffffff81114880 T chroot_fs_refs
ffffffff81114a30 T free_fs_struct
ffffffff81114a70 T exit_fs
ffffffff81114b20 T copy_fs_struct
ffffffff81114bf0 T unshare_fs_struct
ffffffff81114cb0 T daemonize_fs_struct
ffffffff81114d80 t statfs_by_dentry
ffffffff81114e80 t do_statfs64
ffffffff81114ed0 t do_statfs_native
ffffffff81114f20 T vfs_statfs
ffffffff81114fb0 T user_statfs
ffffffff81115000 T fd_statfs
ffffffff81115060 T sys_statfs
ffffffff81115090 T sys_statfs64
ffffffff811150d0 T sys_fstatfs
ffffffff81115100 T sys_fstatfs64
ffffffff81115140 T vfs_ustat
ffffffff811151a0 T sys_ustat
ffffffff81115250 T init_buffer
ffffffff81115260 T mark_buffer_async_write
ffffffff81115270 t has_bh_in_lru
ffffffff811152b0 T generic_block_bmap
ffffffff81115300 T block_is_partially_uptodate
ffffffff811153a0 t __remove_assoc_queue
ffffffff81115400 T invalidate_inode_buffers
ffffffff81115470 t drop_buffers
ffffffff81115520 T submit_bh
ffffffff81115630 t end_bio_bh_io_sync
ffffffff81115670 t attach_nobh_buffers
ffffffff81115700 t quiet_error
ffffffff81115740 t __find_get_block_slow
ffffffff811158c0 T invalidate_bh_lrus
ffffffff811158e0 t free_more_memory
ffffffff811159b0 t __set_page_dirty
ffffffff81115a90 T mark_buffer_dirty
ffffffff81115b20 T mark_buffer_dirty_inode
ffffffff81115be0 T __set_page_dirty_buffers
ffffffff81115cd0 t do_thaw_all
ffffffff81115d00 t do_thaw_one
ffffffff81115d60 T __wait_on_buffer
ffffffff81115d90 t sleep_on_buffer
ffffffff81115da0 T unlock_buffer
ffffffff81115db0 T ll_rw_block
ffffffff81115e50 t __end_buffer_read_notouch
ffffffff81115e70 t end_buffer_read_nobh
ffffffff81115e80 T end_buffer_read_sync
ffffffff81115e90 T __lock_buffer
ffffffff81115ec0 t recalc_bh_state
ffffffff81115f40 T alloc_buffer_head
ffffffff81115f90 T set_bh_page
ffffffff81115fe0 T free_buffer_head
ffffffff81116010 T try_to_free_buffers
ffffffff811160c0 T alloc_page_buffers
ffffffff81116190 T create_empty_buffers
ffffffff81116250 T block_truncate_page
ffffffff811164d0 T nobh_truncate_page
ffffffff811167b0 T generic_cont_expand_simple
ffffffff81116830 t buffer_io_error.isra.14
ffffffff81116860 t end_buffer_async_read
ffffffff811169a0 T block_read_full_page
ffffffff81116cc0 T end_buffer_async_write
ffffffff81116e00 T end_buffer_write_sync
ffffffff81116e70 t init_page_buffers.isra.15
ffffffff81116f00 T __brelse
ffffffff81116f30 t invalidate_bh_lru
ffffffff81116f80 t buffer_cpu_notify
ffffffff81117000 T unmap_underlying_metadata
ffffffff81117050 t __block_write_full_page
ffffffff811173d0 T block_write_full_page_endio
ffffffff81117500 T block_write_full_page
ffffffff81117510 T nobh_writepage
ffffffff81117640 T __find_get_block
ffffffff81117830 T __getblk
ffffffff81117ac0 T __bforget
ffffffff81117b40 T __breadahead
ffffffff81117b80 T __bread
ffffffff81117c10 t __block_commit_write.isra.17
ffffffff81117ce0 T block_commit_write
ffffffff81117d10 T page_zero_new_buffers
ffffffff81117e60 T block_write_end
ffffffff81117ee0 T generic_write_end
ffffffff81117f80 T nobh_write_end
ffffffff81118100 T __block_write_begin
ffffffff81118580 T __block_page_mkwrite
ffffffff811186c0 T block_page_mkwrite
ffffffff811187c0 T block_write_begin
ffffffff81118860 T cont_write_begin
ffffffff81118b60 T nobh_write_begin
ffffffff81118f50 T bh_submit_read
ffffffff81118fc0 T bh_uptodate_or_lock
ffffffff81119000 T __sync_dirty_buffer
ffffffff811190c0 T sync_dirty_buffer
ffffffff811190d0 T write_dirty_buffer
ffffffff81119140 T sync_mapping_buffers
ffffffff811193c0 T block_invalidatepage
ffffffff811194e0 T inode_has_buffers
ffffffff81119500 T emergency_thaw_all
ffffffff81119550 T write_boundary_block
ffffffff811195a0 T remove_inode_buffers
ffffffff81119640 T sys_bdflush
ffffffff811196b0 T bio_phys_segments
ffffffff811196d0 T bio_get_nr_vecs
ffffffff81119710 T bio_endio
ffffffff81119750 T bio_sector_offset
ffffffff81119800 t bio_free_map_data
ffffffff81119820 t bio_kmalloc_destructor
ffffffff81119830 T bio_split
ffffffff81119950 T __bio_clone
ffffffff811199b0 T zero_fill_bio
ffffffff81119a50 T bio_init
ffffffff81119af0 T bio_kmalloc
ffffffff81119b60 T bioset_free
ffffffff81119c40 T bioset_create
ffffffff81119ea0 T bio_put
ffffffff81119ed0 t bio_map_kern_endio
ffffffff81119ee0 T bio_unmap_user
ffffffff81119f40 t bio_copy_kern_endio
ffffffff8111a010 T bio_pair_release
ffffffff8111a050 t bio_pair_end_2
ffffffff8111a070 t bio_pair_end_1
ffffffff8111a080 t __bio_copy_iov.isra.21
ffffffff8111a230 T bio_uncopy_user
ffffffff8111a2b0 t __bio_add_page.part.22
ffffffff8111a510 T bio_add_page
ffffffff8111a560 T bio_add_pc_page
ffffffff8111a590 T bio_map_kern
ffffffff8111a6d0 T bvec_nr_vecs
ffffffff8111a6f0 T bvec_free_bs
ffffffff8111a730 T bio_free
ffffffff8111a780 t bio_fs_destructor
ffffffff8111a790 T bvec_alloc_bs
ffffffff8111a880 T bio_alloc_bioset
ffffffff8111a980 T bio_clone
ffffffff8111a9d0 T bio_alloc
ffffffff8111aa00 T bio_copy_user_iov
ffffffff8111ae30 T bio_copy_user
ffffffff8111ae60 T bio_copy_kern
ffffffff8111af30 T bio_map_user_iov
ffffffff8111b250 T bio_map_user
ffffffff8111b280 T bio_set_pages_dirty
ffffffff8111b2d0 t bio_dirty_fn
ffffffff8111b370 T bio_check_pages_dirty
ffffffff8111b430 T I_BDEV
ffffffff8111b440 t bdev_test
ffffffff8111b450 t bdev_set
ffffffff8111b460 T bd_set_size
ffffffff8111b500 t block_ioctl
ffffffff8111b550 T ioctl_by_bdev
ffffffff8111b5a0 t block_llseek
ffffffff8111b660 t bdev_inode_switch_bdi
ffffffff8111b750 T bd_unlink_disk_holder
ffffffff8111b850 t bdev_alloc_inode
ffffffff8111b880 T bd_link_disk_holder
ffffffff8111ba50 T bdput
ffffffff8111ba60 T bdget
ffffffff8111bb90 t blkdev_direct_IO
ffffffff8111bbf0 t blkdev_releasepage
ffffffff8111bc30 t blkdev_write_end
ffffffff8111bc70 t blkdev_write_begin
ffffffff8111bc90 t blkdev_readpage
ffffffff8111bca0 t blkdev_writepage
ffffffff8111bcb0 t bd_mount
ffffffff8111bcd0 t bdev_evict_inode
ffffffff8111bd80 t bdev_destroy_inode
ffffffff8111bda0 t bdev_i_callback
ffffffff8111bdc0 t init_once
ffffffff8111bed0 T blkdev_fsync
ffffffff8111bf20 T thaw_bdev
ffffffff8111bfb0 T kill_bdev
ffffffff8111bfe0 T invalidate_bdev
ffffffff8111c040 T __invalidate_device
ffffffff8111c0c0 t flush_disk
ffffffff8111c150 T check_disk_change
ffffffff8111c1d0 T check_disk_size_change
ffffffff8111c250 T revalidate_disk
ffffffff8111c2e0 t bd_may_claim
ffffffff8111c320 T blkdev_aio_write
ffffffff8111c3b0 t bd_acquire
ffffffff8111c4a0 T lookup_bdev
ffffffff8111c540 t blkdev_get_blocks
ffffffff8111c600 t blkdev_get_block
ffffffff8111c670 T blkdev_max_block
ffffffff8111c6b0 T __sync_blockdev
ffffffff8111c6e0 T sync_blockdev
ffffffff8111c6f0 t __blkdev_put
ffffffff8111c8b0 T blkdev_put
ffffffff8111ca10 t blkdev_close
ffffffff8111ca30 t __blkdev_get
ffffffff8111ceb0 T blkdev_get
ffffffff8111d1a0 t blkdev_open
ffffffff8111d210 T blkdev_get_by_dev
ffffffff8111d270 T blkdev_get_by_path
ffffffff8111d2e0 T freeze_bdev
ffffffff8111d3b0 T fsync_bdev
ffffffff8111d410 T set_blocksize
ffffffff8111d4b0 T sb_set_blocksize
ffffffff8111d510 T sb_min_blocksize
ffffffff8111d550 T bdgrab
ffffffff8111d570 T nr_blockdev_pages
ffffffff8111d5d0 T bd_forget
ffffffff8111d670 t dio_bio_complete
ffffffff8111d730 t dio_bio_end_io
ffffffff8111d7c0 T inode_dio_wait
ffffffff8111d890 T inode_dio_done
ffffffff8111d8c0 t dio_complete
ffffffff8111d980 T __blockdev_direct_IO
ffffffff81120f20 t dio_bio_end_aio
ffffffff81121000 T dio_end_io
ffffffff81121020 t mpage_alloc
ffffffff811210b0 t __mpage_writepage
ffffffff81121650 T mpage_writepage
ffffffff811216b0 t mpage_end_io
ffffffff81121750 T mpage_writepages
ffffffff811217f0 t do_mpage_readpage
ffffffff81121da0 T mpage_readpage
ffffffff81121e20 T mpage_readpages
ffffffff81121f50 T set_task_ioprio
ffffffff81122000 T sys_ioprio_set
ffffffff81122270 T ioprio_best
ffffffff811222b0 T sys_ioprio_get
ffffffff81122560 t mounts_open_common
ffffffff81122750 t mountstats_open
ffffffff81122760 t mountinfo_open
ffffffff81122770 t mounts_open
ffffffff81122780 t mounts_release
ffffffff811227d0 t mounts_poll
ffffffff81122840 t show_mnt_opts.isra.2
ffffffff81122890 t show_sb_opts.isra.3
ffffffff811228e0 t show_type.isra.4
ffffffff81122950 t show_vfsstat
ffffffff81122a90 t show_mountinfo
ffffffff81122d50 t show_vfsmnt
ffffffff81122e90 T __fsnotify_inode_delete
ffffffff81122ea0 t send_to_group.isra.1
ffffffff81123080 T fsnotify
ffffffff81123350 T __fsnotify_vfsmount_delete
ffffffff81123360 T __fsnotify_update_child_dentry_flags
ffffffff811234a0 T __fsnotify_parent
ffffffff81123590 T fsnotify_get_cookie
ffffffff811235a0 T fsnotify_notify_queue_is_empty
ffffffff811235c0 T fsnotify_get_event
ffffffff811235d0 T fsnotify_put_event
ffffffff81123630 T fsnotify_alloc_event_holder
ffffffff81123650 T fsnotify_destroy_event_holder
ffffffff81123670 T fsnotify_remove_priv_from_event
ffffffff811236f0 T fsnotify_add_notify_event
ffffffff81123930 T fsnotify_remove_notify_event
ffffffff811239d0 T fsnotify_peek_notify_event
ffffffff811239f0 T fsnotify_flush_notify
ffffffff81123aa0 T fsnotify_replace_event
ffffffff81123b70 T fsnotify_clone_event
ffffffff81123cf0 T fsnotify_create_event
ffffffff81123e60 T fsnotify_final_destroy_group
ffffffff81123e90 T fsnotify_put_group
ffffffff81123ed0 T fsnotify_alloc_group
ffffffff81123f80 t fsnotify_recalc_inode_mask_locked
ffffffff81123fc0 T fsnotify_recalc_inode_mask
ffffffff81124010 T fsnotify_destroy_inode_mark
ffffffff811240b0 T fsnotify_clear_marks_by_inode
ffffffff81124190 T fsnotify_clear_inode_marks_by_group
ffffffff811241a0 T fsnotify_find_inode_mark_locked
ffffffff81124220 T fsnotify_find_inode_mark
ffffffff81124270 T fsnotify_set_inode_mark_mask_locked
ffffffff811242d0 T fsnotify_add_inode_mark
ffffffff81124460 T fsnotify_unmount_inodes
ffffffff811245f0 T fsnotify_get_mark
ffffffff81124600 T fsnotify_put_mark
ffffffff81124620 t fsnotify_mark_destroy
ffffffff81124780 T fsnotify_destroy_mark
ffffffff81124910 T fsnotify_set_mark_mask_locked
ffffffff81124970 T fsnotify_set_mark_ignored_mask_locked
ffffffff811249b0 T fsnotify_add_mark
ffffffff81124bb0 T fsnotify_clear_marks_by_group_flags
ffffffff81124ca0 T fsnotify_clear_marks_by_group
ffffffff81124cb0 T fsnotify_duplicate_mark
ffffffff81124d10 T fsnotify_init_mark
ffffffff81124db0 t fsnotify_recalc_vfsmount_mask_locked
ffffffff81124df0 T fsnotify_clear_marks_by_mount
ffffffff81124ed0 T fsnotify_clear_vfsmount_marks_by_group
ffffffff81124ee0 T fsnotify_recalc_vfsmount_mask
ffffffff81124f10 T fsnotify_destroy_vfsmount_mark
ffffffff81124fa0 T fsnotify_find_vfsmount_mark
ffffffff81125020 T fsnotify_add_vfsmount_mark
ffffffff811251a0 t dnotify_should_send_event
ffffffff811251c0 t dnotify_free_mark
ffffffff811251e0 t dnotify_recalc_inode_mask
ffffffff81125250 t dnotify_handle_event
ffffffff81125300 T dnotify_flush
ffffffff81125420 T fcntl_dirnotify
ffffffff81125790 t inotify_should_send_event
ffffffff811257d0 t inotify_freeing_mark
ffffffff811257e0 t inotify_free_group_priv
ffffffff81125840 t inotify_merge
ffffffff81125920 T inotify_free_event_priv
ffffffff81125930 t idr_callback
ffffffff811259a0 t inotify_handle_event
ffffffff81125a80 t inotify_fasync
ffffffff81125ab0 t inotify_release
ffffffff81125ad0 t inotify_ioctl
ffffffff81125b90 t inotify_poll
ffffffff81125bf0 t inotify_read
ffffffff81125ef0 t inotify_free_mark
ffffffff81125f00 t inotify_idr_find_locked
ffffffff81125f70 t inotify_remove_from_idr
ffffffff81126190 T inotify_ignored_and_remove_idr
ffffffff81126260 T sys_inotify_init1
ffffffff81126370 T sys_inotify_init
ffffffff81126380 T sys_inotify_add_watch
ffffffff811266d0 T sys_inotify_rm_watch
ffffffff811267a0 t fanotify_free_group_priv
ffffffff811267b0 t fanotify_handle_event
ffffffff811267f0 t fanotify_merge
ffffffff811268f0 t fanotify_should_send_event
ffffffff811269a0 t fanotify_write
ffffffff811269b0 t fanotify_release
ffffffff811269d0 t fanotify_ioctl
ffffffff81126a70 t fanotify_poll
ffffffff81126ad0 t fanotify_mark_add_to_mask
ffffffff81126b90 t fanotify_free_mark
ffffffff81126ba0 t fanotify_mark_remove_from_mask
ffffffff81126c40 t fanotify_read
ffffffff81126fb0 T sys_fanotify_init
ffffffff811271a0 T sys_fanotify_mark
ffffffff81127750 t ep_read_events_proc
ffffffff81127800 t ep_send_events_proc
ffffffff81127930 t ep_poll_wakeup_proc
ffffffff81127960 t ep_ptable_queue_proc
ffffffff81127a10 t ep_unregister_pollwait.isra.6
ffffffff81127a80 t ep_remove
ffffffff81127b30 t ep_call_nested.constprop.8
ffffffff81127c20 t reverse_path_check_proc
ffffffff81127d00 t ep_loop_check_proc
ffffffff81127e40 t ep_eventpoll_poll
ffffffff81127ea0 t ep_poll_safewake
ffffffff81127ed0 t ep_free
ffffffff81127f80 t ep_eventpoll_release
ffffffff81127fa0 t ep_scan_ready_list.isra.7
ffffffff81128120 t ep_poll_readyevents_proc
ffffffff81128130 t ep_poll
ffffffff81128530 t ep_poll_callback
ffffffff81128650 T eventpoll_release_file
ffffffff811286e0 T sys_epoll_create1
ffffffff81128830 T sys_epoll_create
ffffffff81128850 T sys_epoll_ctl
ffffffff81129050 T sys_epoll_wait
ffffffff81129130 T sys_epoll_pwait
ffffffff81129230 t anon_set_page_dirty
ffffffff81129240 t anon_inodefs_dname
ffffffff81129260 t anon_inodefs_mount
ffffffff81129390 T anon_inode_getfile
ffffffff81129510 T anon_inode_getfd
ffffffff811295a0 t signalfd_release
ffffffff811295c0 t signalfd_poll
ffffffff811296b0 t signalfd_read
ffffffff81129ae0 T signalfd_cleanup
ffffffff81129b10 T sys_signalfd4
ffffffff81129cb0 T sys_signalfd
ffffffff81129cc0 t timerfd_fget
ffffffff81129d00 t timerfd_poll
ffffffff81129d60 t timerfd_read
ffffffff81129fc0 t timerfd_tmrproc
ffffffff8112a030 t timerfd_remove_cancel.part.6
ffffffff8112a080 t timerfd_release
ffffffff8112a0c0 T timerfd_clock_was_set
ffffffff8112a160 T sys_timerfd_create
ffffffff8112a250 T sys_timerfd_settime
ffffffff8112a570 T sys_timerfd_gettime
ffffffff8112a6b0 t eventfd_poll
ffffffff8112a730 T eventfd_ctx_remove_wait_queue
ffffffff8112a800 T eventfd_signal
ffffffff8112a8b0 t eventfd_write
ffffffff8112aac0 T eventfd_ctx_read
ffffffff8112acd0 t eventfd_read
ffffffff8112ad30 t eventfd_free
ffffffff8112ad40 T eventfd_fget
ffffffff8112ad80 T eventfd_ctx_get
ffffffff8112adc0 T eventfd_ctx_fdget
ffffffff8112ae10 T eventfd_ctx_fileget
ffffffff8112ae40 T eventfd_ctx_put
ffffffff8112ae60 t eventfd_release
ffffffff8112ae90 T eventfd_file_create
ffffffff8112af60 T sys_eventfd2
ffffffff8112afd0 T sys_eventfd
ffffffff8112afe0 t aio_fdsync
ffffffff8112b020 t aio_fsync
ffffffff8112b050 t lookup_ioctx
ffffffff8112b0d0 t aio_rw_vect_retry
ffffffff8112b290 T wait_on_sync_kiocb
ffffffff8112b310 t aio_queue_work
ffffffff8112b340 t aio_read_evt
ffffffff8112b4c0 t ctx_rcu_free
ffffffff8112b4e0 t __aio_put_req
ffffffff8112b6c0 T aio_put_req
ffffffff8112b720 t aio_fput_routine
ffffffff8112b890 t timeout_func
ffffffff8112b8a0 t aio_free_ring
ffffffff8112b940 t __put_ioctx
ffffffff8112b9e0 t aio_setup_vectored_rw
ffffffff8112ba70 t kill_ctx
ffffffff8112bc10 t io_destroy
ffffffff8112bcd0 T aio_complete
ffffffff8112bed0 t aio_run_iocb
ffffffff8112c050 t __aio_run_iocbs
ffffffff8112c110 t read_events
ffffffff8112c4f0 t aio_kick_handler
ffffffff8112c5f0 T kick_iocb
ffffffff8112c700 T exit_aio
ffffffff8112c7a0 T sys_io_setup
ffffffff8112cc20 T sys_io_destroy
ffffffff8112cc70 T do_io_submit
ffffffff8112d6d0 T sys_io_submit
ffffffff8112d6e0 T sys_io_cancel
ffffffff8112d860 T sys_io_getevents
ffffffff8112d8f0 T unlock_flocks
ffffffff8112d900 T locks_release_private
ffffffff8112d950 T __locks_copy_lock
ffffffff8112d9a0 T locks_copy_lock
ffffffff8112da70 t flock_to_posix_lock
ffffffff8112db80 t posix_same_owner
ffffffff8112dbd0 t locks_insert_lock
ffffffff8112dc20 T vfs_cancel_lock
ffffffff8112dc50 t locks_stop
ffffffff8112dc60 t locks_open
ffffffff8112dc80 t lock_get_status
ffffffff8112dfa0 t locks_show
ffffffff8112e010 t locks_next
ffffffff8112e030 t locks_wake_up_blocks
ffffffff8112e0d0 T lock_flocks
ffffffff8112e0e0 T locks_delete_block
ffffffff8112e130 T posix_unblock_lock
ffffffff8112e190 T lock_may_read
ffffffff8112e220 T lock_may_write
ffffffff8112e2a0 t locks_start
ffffffff8112e2f0 t lease_break_callback
ffffffff8112e310 t lease_release_private_callback
ffffffff8112e330 T locks_init_lock
ffffffff8112e3f0 T locks_alloc_lock
ffffffff8112e450 t posix_locks_conflict
ffffffff8112e4c0 T posix_test_lock
ffffffff8112e590 T vfs_test_lock
ffffffff8112e5d0 t locks_insert_block
ffffffff8112e620 T lease_get_mtime
ffffffff8112e680 T locks_free_lock
ffffffff8112e6c0 t locks_delete_lock
ffffffff8112e750 T lease_modify
ffffffff8112e7e0 t time_out_leases
ffffffff8112e870 T flock_lock_file_wait
ffffffff8112eb60 t lease_alloc
ffffffff8112ec30 T __break_lease
ffffffff8112ef90 t __posix_lock_file
ffffffff8112f440 T locks_mandatory_area
ffffffff8112f5c0 T posix_lock_file
ffffffff8112f5d0 T posix_lock_file_wait
ffffffff8112f6a0 T vfs_lock_file
ffffffff8112f6d0 T locks_remove_posix
ffffffff8112f780 T locks_mandatory_locked
ffffffff8112f7f0 T fcntl_getlease
ffffffff8112f890 T generic_add_lease
ffffffff8112f9c0 T generic_delete_lease
ffffffff8112fa20 T generic_setlease
ffffffff8112fb40 t __vfs_setlease
ffffffff8112fb60 T vfs_setlease
ffffffff8112fba0 t do_fcntl_delete_lease
ffffffff8112fc30 T fcntl_setlease
ffffffff8112fd70 T sys_flock
ffffffff8112ff20 T fcntl_getlk
ffffffff81130040 T fcntl_setlk
ffffffff81130310 T locks_remove_flock
ffffffff81130420 t compat_set_fd_set
ffffffff811304b0 t compat_filldir
ffffffff811305a0 t compat_filldir64
ffffffff81130670 t compat_fillonedir
ffffffff81130730 t poll_select_copy_remaining
ffffffff81130850 t compat_get_fd_set
ffffffff81130930 t cp_compat_stat
ffffffff81130aa0 t put_compat_statfs64
ffffffff81130b90 t put_compat_statfs
ffffffff81130cd0 T compat_printk
ffffffff81130d30 T compat_sys_utime
ffffffff81130da0 T compat_sys_utimensat
ffffffff81130e50 T compat_sys_futimesat
ffffffff81130f00 T compat_sys_utimes
ffffffff81130f10 T compat_sys_newstat
ffffffff81130f40 T compat_sys_newlstat
ffffffff81130f70 T compat_sys_newfstatat
ffffffff81130fa0 T compat_sys_newfstat
ffffffff81130fd0 T compat_sys_statfs
ffffffff81131000 T compat_sys_fstatfs
ffffffff81131030 T compat_sys_statfs64
ffffffff81131070 T compat_sys_fstatfs64
ffffffff811310b0 T compat_sys_ustat
ffffffff81131150 T compat_sys_fcntl64
ffffffff81131470 T compat_sys_fcntl
ffffffff81131490 T compat_sys_io_setup
ffffffff81131520 T compat_sys_io_getevents
ffffffff81131610 T compat_rw_copy_check_uvector
ffffffff81131780 t compat_do_readv_writev
ffffffff811319b0 t compat_writev
ffffffff81131a30 t compat_readv
ffffffff81131aa0 T compat_sys_io_submit
ffffffff81131b90 T compat_sys_mount
ffffffff81131e00 T compat_sys_old_readdir
ffffffff81131e60 T compat_sys_getdents
ffffffff81131f60 T compat_sys_getdents64
ffffffff81132050 T compat_sys_readv
ffffffff811320d0 T compat_sys_preadv64
ffffffff81132170 T compat_sys_preadv
ffffffff81132190 T compat_sys_writev
ffffffff81132210 T compat_sys_pwritev64
ffffffff811322b0 T compat_sys_pwritev
ffffffff811322d0 T compat_sys_vmsplice
ffffffff81132400 T compat_sys_open
ffffffff81132420 T compat_sys_openat
ffffffff81132430 T compat_core_sys_select
ffffffff811326f0 T compat_sys_select
ffffffff81132800 T compat_sys_old_select
ffffffff81132850 T compat_sys_pselect6
ffffffff81132a60 T compat_sys_ppoll
ffffffff81132bf0 T compat_sys_epoll_pwait
ffffffff81132d00 T compat_sys_signalfd4
ffffffff81132da0 T compat_sys_signalfd
ffffffff81132db0 T compat_sys_timerfd_settime
ffffffff81132e90 T compat_sys_timerfd_gettime
ffffffff81132f20 T compat_sys_ioctl
ffffffff81133f60 t load_script
ffffffff811341d0 t vma_dump_size
ffffffff81134300 t padzero
ffffffff81134340 t load_elf_library
ffffffff81134550 t load_elf_binary
ffffffff81135da0 t notesize.isra.7
ffffffff81135dc0 t elf_core_dump
ffffffff81136f80 t cputime_to_compat_timeval
ffffffff81136fb0 t vma_dump_size
ffffffff811370e0 t padzero
ffffffff81137120 t load_elf_library
ffffffff81137360 t notesize.isra.7
ffffffff81137380 t elf_core_dump
ffffffff811384d0 t load_elf_binary
ffffffff81139e20 T posix_acl_init
ffffffff81139e30 T posix_acl_valid
ffffffff81139f30 T posix_acl_equiv_mode
ffffffff81139fe0 T posix_acl_chmod
ffffffff8113a170 T posix_acl_create
ffffffff8113a320 T posix_acl_alloc
ffffffff8113a350 T posix_acl_from_mode
ffffffff8113a3c0 T posix_acl_permission
ffffffff8113a530 T posix_acl_to_xattr
ffffffff8113a5a0 T posix_acl_from_xattr
ffffffff8113a6e0 t generic_acl_set
ffffffff8113a8e0 t generic_acl_get
ffffffff8113a9c0 t generic_acl_list
ffffffff8113aad0 T generic_acl_init
ffffffff8113ac90 T generic_acl_chmod
ffffffff8113add0 T get_vmalloc_info
ffffffff8113aeb0 t pagemap_pte_hole
ffffffff8113af30 t gather_stats
ffffffff8113afb0 t pad_len_spaces
ffffffff8113afe0 t show_numa_map
ffffffff8113b440 t show_tid_numa_map
ffffffff8113b450 t show_pid_numa_map
ffffffff8113b460 t can_gather_numa_stats
ffffffff8113b4c0 t gather_pte_stats
ffffffff8113b680 t m_next
ffffffff8113b700 t m_stop
ffffffff8113b780 t m_start
ffffffff8113b8f0 t numa_maps_open
ffffffff8113b990 t tid_numa_maps_open
ffffffff8113b9a0 t pid_numa_maps_open
ffffffff8113b9b0 t do_maps_open
ffffffff8113ba50 t tid_smaps_open
ffffffff8113ba60 t pid_smaps_open
ffffffff8113ba70 t tid_maps_open
ffffffff8113ba80 t pid_maps_open
ffffffff8113ba90 t pagemap_read
ffffffff8113bda0 t pagemap_hugetlb_range
ffffffff8113be50 t pagemap_pte_range
ffffffff8113c130 t clear_refs_write
ffffffff8113c2f0 t clear_refs_pte_range
ffffffff8113c490 t show_map_vma
ffffffff8113c730 t show_smap
ffffffff8113c900 t show_tid_smap
ffffffff8113c910 t show_pid_smap
ffffffff8113c920 t show_tid_map
ffffffff8113c990 t show_pid_map
ffffffff8113ca00 t gather_hugetbl_stats
ffffffff8113ca60 t smaps_pte_entry.isra.9
ffffffff8113cb50 t smaps_pte_range
ffffffff8113ccf0 T task_mem
ffffffff8113ce30 T task_vsize
ffffffff8113ce40 T task_statm
ffffffff8113ceb0 t proc_show_options
ffffffff8113cf20 t proc_evict_inode
ffffffff8113cf90 t proc_destroy_inode
ffffffff8113cfb0 t proc_i_callback
ffffffff8113cfd0 t proc_alloc_inode
ffffffff8113d070 t proc_reg_open
ffffffff8113d1f0 t init_once
ffffffff8113d200 T pde_users_dec
ffffffff8113d260 t proc_reg_compat_ioctl
ffffffff8113d320 t proc_reg_release
ffffffff8113d490 t proc_reg_mmap
ffffffff8113d540 t proc_reg_unlocked_ioctl
ffffffff8113d600 t proc_reg_poll
ffffffff8113d6c0 t proc_reg_write
ffffffff8113d790 t proc_reg_read
ffffffff8113d860 t proc_reg_llseek
ffffffff8113d930 T proc_get_inode
ffffffff8113da40 T proc_fill_super
ffffffff8113dac0 t proc_test_super
ffffffff8113dad0 t proc_root_readdir
ffffffff8113db40 t proc_root_getattr
ffffffff8113db80 t proc_root_lookup
ffffffff8113dbe0 t proc_kill_sb
ffffffff8113dc20 t proc_parse_options.isra.0
ffffffff8113dd20 t proc_mount
ffffffff8113de80 t proc_set_super
ffffffff8113def0 T proc_remount
ffffffff8113df30 T pid_ns_prepare_proc
ffffffff8113df60 T pid_ns_release_proc
ffffffff8113df70 T mem_lseek
ffffffff8113dfa0 t pid_delete_dentry
ffffffff8113dfc0 t fake_filldir
ffffffff8113dfd0 t proc_self_put_link
ffffffff8113dff0 t proc_self_readlink
ffffffff8113e070 t proc_self_follow_link
ffffffff8113e110 t proc_base_instantiate
ffffffff8113e260 t proc_loginuid_write
ffffffff8113e370 t mem_release
ffffffff8113e3a0 t comm_open
ffffffff8113e3c0 t proc_single_open
ffffffff8113e3e0 t task_dumpable
ffffffff8113e440 t proc_fd_permission
ffffffff8113e470 t do_io_accounting
ffffffff8113e680 t proc_tid_io_accounting
ffffffff8113e690 t proc_tgid_io_accounting
ffffffff8113e6a0 T pid_getattr
ffffffff8113e790 t proc_oom_score
ffffffff8113e7f0 t proc_pid_wchan
ffffffff8113e890 t proc_pid_cmdline
ffffffff8113e9b0 t proc_pid_limits
ffffffff8113eb70 t proc_cwd_link
ffffffff8113ec40 t proc_root_link
ffffffff8113ed20 t proc_single_show
ffffffff8113edc0 t proc_fd_access_allowed
ffffffff8113ee20 t proc_pid_follow_link
ffffffff8113ee80 t proc_pid_readlink
ffffffff8113ef70 t proc_coredump_filter_write
ffffffff8113f0c0 t proc_coredump_filter_read
ffffffff8113f1c0 t oom_score_adj_read
ffffffff8113f2c0 t oom_adjust_read
ffffffff8113f3c0 t proc_sessionid_read
ffffffff8113f480 t proc_loginuid_read
ffffffff8113f540 t proc_info_read
ffffffff8113f640 t oom_score_adj_write
ffffffff8113f860 t oom_adjust_write
ffffffff8113fab0 t mem_open
ffffffff8113fb60 t comm_show
ffffffff8113fc00 t comm_write
ffffffff8113fcd0 t proc_fd_info
ffffffff8113fe50 t proc_fdinfo_read
ffffffff8113ff00 t proc_fd_link
ffffffff8113ff10 t tid_fd_revalidate
ffffffff81140090 T pid_revalidate
ffffffff81140170 t proc_pid_permission
ffffffff81140270 t proc_task_getattr
ffffffff811402e0 t proc_exe_link
ffffffff81140390 t next_tgid
ffffffff81140430 T proc_setattr
ffffffff811404c0 t name_to_int.isra.6
ffffffff81140540 t proc_lookupfd_common
ffffffff811405e0 t proc_lookupfd
ffffffff811405f0 t proc_lookupfdinfo
ffffffff81140600 t mem_rw.isra.9
ffffffff81140770 t mem_write
ffffffff81140790 t mem_read
ffffffff811407a0 t lock_trace
ffffffff81140810 t proc_pid_personality
ffffffff81140880 t proc_pid_syscall
ffffffff811409b0 T mm_for_maps
ffffffff811409c0 t environ_read
ffffffff81140b90 t proc_pid_auxv
ffffffff81140c10 T proc_pid_make_inode
ffffffff81140cb0 t proc_pid_instantiate
ffffffff81140d80 t proc_fdinfo_instantiate
ffffffff81140e20 t proc_fd_instantiate
ffffffff81140ed0 t proc_task_instantiate
ffffffff81140fa0 t proc_task_lookup
ffffffff81141090 t proc_pident_instantiate
ffffffff81141140 t proc_pident_lookup
ffffffff81141220 t proc_tid_base_lookup
ffffffff81141240 t proc_tgid_base_lookup
ffffffff81141260 T proc_fill_cache
ffffffff811413d0 t proc_readfd_common
ffffffff81141590 t proc_readfdinfo
ffffffff811415a0 t proc_readfd
ffffffff811415b0 t proc_task_readdir
ffffffff81141900 t proc_pident_readdir
ffffffff81141ab0 t proc_tid_base_readdir
ffffffff81141ad0 t proc_tgid_base_readdir
ffffffff81141af0 T proc_flush_task
ffffffff81141cb0 T proc_pid_lookup
ffffffff81141df0 T proc_pid_readdir
ffffffff81141ff0 t proc_file_lseek
ffffffff81142030 t proc_follow_link
ffffffff81142050 t proc_delete_dentry
ffffffff81142060 t proc_file_write
ffffffff81142120 t proc_getattr
ffffffff81142160 t proc_notify_change
ffffffff81142210 t proc_register
ffffffff81142420 t __xlate_proc_name
ffffffff81142510 t __proc_create
ffffffff81142700 T proc_create_data
ffffffff811427d0 T create_proc_entry
ffffffff81142870 T proc_net_mkdir
ffffffff811428e0 T proc_mkdir_mode
ffffffff81142940 T proc_mkdir
ffffffff81142950 T proc_symlink
ffffffff811429f0 t proc_file_read
ffffffff81142d40 T pde_put
ffffffff81142de0 T remove_proc_entry
ffffffff81143050 T proc_readdir_de
ffffffff81143200 T proc_readdir
ffffffff81143220 T proc_lookup_de
ffffffff81143330 T proc_lookup
ffffffff81143350 t do_task_stat
ffffffff81143d60 t render_sigset_t
ffffffff81143df0 t render_cap_t
ffffffff81143e50 T proc_pid_status
ffffffff811444a0 T proc_tid_stat
ffffffff811444b0 T proc_tgid_stat
ffffffff811444c0 T proc_pid_statm
ffffffff811445b0 t tty_drivers_open
ffffffff811445c0 t show_tty_range
ffffffff811447e0 t show_tty_driver
ffffffff811449a0 t t_next
ffffffff811449b0 t t_stop
ffffffff811449c0 t t_start
ffffffff811449e0 T proc_tty_register_driver
ffffffff81144a30 T proc_tty_unregister_driver
ffffffff81144a60 t cmdline_proc_open
ffffffff81144a80 t cmdline_proc_show
ffffffff81144aa0 t c_next
ffffffff81144ab0 t consoles_open
ffffffff81144ac0 t show_console_dev
ffffffff81144c10 t c_stop
ffffffff81144c20 t c_start
ffffffff81144c60 t cpuinfo_open
ffffffff81144c70 t devinfo_start
ffffffff81144c90 t devinfo_next
ffffffff81144cb0 t devinfo_stop
ffffffff81144cc0 t devinfo_open
ffffffff81144cd0 t devinfo_show
ffffffff81144d60 t int_seq_start
ffffffff81144d80 t int_seq_next
ffffffff81144da0 t int_seq_stop
ffffffff81144db0 t interrupts_open
ffffffff81144dc0 t loadavg_proc_open
ffffffff81144de0 t loadavg_proc_show
ffffffff81144eb0 t meminfo_proc_open
ffffffff81144ee0 t meminfo_proc_show
ffffffff811453a0 t stat_open
ffffffff81145450 t show_stat
ffffffff81145a30 t uptime_proc_open
ffffffff81145a50 t uptime_proc_show
ffffffff81145b40 t version_proc_open
ffffffff81145b60 t version_proc_show
ffffffff81145ba0 t softirqs_open
ffffffff81145bc0 t show_softirqs
ffffffff81145cd0 t proc_ns_instantiate
ffffffff81145da0 t proc_ns_dir_lookup
ffffffff81145ec0 t proc_ns_dir_readdir
ffffffff81146090 T proc_ns_fget
ffffffff811460d0 t init_header
ffffffff81146150 t proc_sys_revalidate
ffffffff81146170 t proc_sys_delete
ffffffff81146190 t count_subheaders
ffffffff81146200 t first_usable_entry
ffffffff81146240 t sysctl_head_finish
ffffffff81146290 t proc_sys_make_inode
ffffffff81146380 t sysctl_perm
ffffffff81146400 t proc_sys_setattr
ffffffff81146490 t erase_header
ffffffff81146500 t append_path
ffffffff81146580 t sysctl_err
ffffffff811465f0 t sysctl_head_grab
ffffffff81146630 t grab_header
ffffffff81146650 t proc_sys_open
ffffffff811466b0 t proc_sys_poll
ffffffff811467a0 t proc_sys_permission
ffffffff81146840 t proc_sys_getattr
ffffffff811468c0 t find_entry.isra.7
ffffffff81146980 t find_subdir
ffffffff811469d0 t get_links
ffffffff81146ae0 t xlate_dir.isra.8
ffffffff81146b60 t sysctl_follow_link
ffffffff81146c80 t proc_sys_lookup
ffffffff81146e00 t proc_sys_compare
ffffffff81146eb0 t proc_sys_fill_cache.isra.12
ffffffff81147000 t proc_sys_readdir
ffffffff81147300 t proc_sys_call_handler.isra.13
ffffffff811473e0 t proc_sys_write
ffffffff811473f0 t proc_sys_read
ffffffff81147400 t sysctl_print_dir.isra.14
ffffffff81147430 t put_links
ffffffff81147550 t drop_sysctl_table
ffffffff81147630 T unregister_sysctl_table
ffffffff811476e0 t insert_header
ffffffff81147a60 T proc_sys_poll_notify
ffffffff81147a90 T sysctl_head_put
ffffffff81147ad0 T register_sysctl_root
ffffffff81147ae0 T __register_sysctl_table
ffffffff81147f80 t register_leaf_sysctl_tables
ffffffff81148190 T register_sysctl
ffffffff811481b0 T __register_sysctl_paths
ffffffff81148380 T register_sysctl_paths
ffffffff811483a0 T register_sysctl_table
ffffffff811483b0 T setup_sysctl_set
ffffffff81148460 T retire_sysctl_set
ffffffff81148480 t get_proc_task_net
ffffffff811484c0 t proc_tgid_net_readdir
ffffffff81148550 t proc_tgid_net_getattr
ffffffff811485c0 t proc_tgid_net_lookup
ffffffff81148640 T proc_net_remove
ffffffff81148650 t proc_net_ns_exit
ffffffff81148670 t proc_net_ns_init
ffffffff81148700 T proc_net_fops_create
ffffffff81148720 T single_release_net
ffffffff81148760 T seq_release_net
ffffffff811487a0 T single_open_net
ffffffff81148840 T seq_open_net
ffffffff811488f0 t free_kclist_ents
ffffffff81148950 t storenote
ffffffff811489d0 t kcore_update_ram
ffffffff81148c20 t notesize.isra.3
ffffffff81148c40 t elf_kcore_store_hdr
ffffffff81148ef0 t read_kcore
ffffffff81149220 t open_kcore
ffffffff811492b0 t kclist_add_private
ffffffff811494e0 T kclist_add
ffffffff81149530 t kmsg_release
ffffffff81149550 t kmsg_open
ffffffff81149570 t kmsg_poll
ffffffff811495c0 t kmsg_read
ffffffff81149620 t kpagecount_read
ffffffff81149730 T stable_page_flags
ffffffff811498e0 t kpageflags_read
ffffffff81149a00 T sysfs_setxattr
ffffffff81149a40 t sysfs_refresh_inode
ffffffff81149ab0 T sysfs_permission
ffffffff81149b30 T sysfs_getattr
ffffffff81149b90 T sysfs_sd_setattr
ffffffff81149cb0 T sysfs_setattr
ffffffff81149d50 T sysfs_get_inode
ffffffff81149e90 T sysfs_evict_inode
ffffffff81149f00 T sysfs_hash_and_remove
ffffffff81149fa0 t sysfs_release
ffffffff8114a060 t sysfs_poll
ffffffff8114a110 t sysfs_write_file
ffffffff8114a270 t sysfs_schedule_callback_work
ffffffff8114a2e0 T sysfs_remove_file_from_group
ffffffff8114a370 T sysfs_notify_dirent
ffffffff8114a3d0 T sysfs_notify
ffffffff8114a470 t sysfs_open_file
ffffffff8114a6b0 t sysfs_read_file
ffffffff8114a840 T sysfs_schedule_callback
ffffffff8114a9e0 T sysfs_attr_ns
ffffffff8114aa70 T sysfs_remove_file
ffffffff8114aac0 T sysfs_remove_files
ffffffff8114ab00 T sysfs_chmod_file
ffffffff8114abc0 T sysfs_add_file_mode
ffffffff8114aca0 T sysfs_add_file
ffffffff8114acb0 T sysfs_add_file_to_group
ffffffff8114ad40 T sysfs_create_file
ffffffff8114ad60 T sysfs_create_files
ffffffff8114ade0 t sysfs_dentry_delete
ffffffff8114adf0 t sysfs_name_hash
ffffffff8114aec0 t sysfs_dentry_revalidate
ffffffff8114af90 t sysfs_unlink_sibling
ffffffff8114afb0 t sysfs_link_sibling
ffffffff8114b080 t sysfs_pathname.isra.7
ffffffff8114b0e0 T sysfs_get_active
ffffffff8114b130 T sysfs_put_active
ffffffff8114b160 T release_sysfs_dirent
ffffffff8114b220 t sysfs_dir_release
ffffffff8114b250 t sysfs_dir_pos
ffffffff8114b350 t sysfs_readdir
ffffffff8114b590 t sysfs_dentry_iput
ffffffff8114b5d0 T sysfs_new_dirent
ffffffff8114b6f0 T sysfs_addrm_start
ffffffff8114b710 T __sysfs_add_one
ffffffff8114b820 T sysfs_add_one
ffffffff8114b8f0 T sysfs_remove_one
ffffffff8114b970 T sysfs_addrm_finish
ffffffff8114ba20 t create_dir
ffffffff8114bb00 T sysfs_find_dirent
ffffffff8114bbf0 t sysfs_lookup
ffffffff8114bd10 T sysfs_get_dirent
ffffffff8114bd80 T sysfs_create_subdir
ffffffff8114bda0 T sysfs_create_dir
ffffffff8114be70 T sysfs_remove_subdir
ffffffff8114bea0 T sysfs_remove_dir
ffffffff8114bf60 T sysfs_rename
ffffffff8114c0f0 T sysfs_rename_dir
ffffffff8114c150 T sysfs_move_dir
ffffffff8114c1c0 t sysfs_put_link
ffffffff8114c1e0 t sysfs_follow_link
ffffffff8114c390 T sysfs_rename_link
ffffffff8114c460 T sysfs_remove_link
ffffffff8114c480 t sysfs_do_create_link
ffffffff8114c6b0 T sysfs_create_link
ffffffff8114c6c0 T sysfs_create_link_nowarn
ffffffff8114c6d0 T sysfs_delete_link
ffffffff8114c750 t sysfs_test_super
ffffffff8114c770 T sysfs_put
ffffffff8114c7a0 T sysfs_get
ffffffff8114c7e0 t sysfs_kill_sb
ffffffff8114c810 t sysfs_mount
ffffffff8114ca00 t sysfs_set_super
ffffffff8114ca40 t release
ffffffff8114cab0 t mmap
ffffffff8114cbe0 t bin_migrate
ffffffff8114cc90 t bin_get_policy
ffffffff8114cd40 t bin_set_policy
ffffffff8114cde0 t bin_access
ffffffff8114cea0 t bin_page_mkwrite
ffffffff8114cf40 t bin_fault
ffffffff8114cfd0 t bin_vma_open
ffffffff8114d060 t open
ffffffff8114d1b0 t write
ffffffff8114d360 t read
ffffffff8114d540 T sysfs_remove_bin_file
ffffffff8114d550 T sysfs_create_bin_file
ffffffff8114d570 T unmap_bin_file
ffffffff8114d5d0 T sysfs_unmerge_group
ffffffff8114d640 T sysfs_merge_group
ffffffff8114d700 t remove_files.isra.1
ffffffff8114d740 T sysfs_remove_group
ffffffff8114d860 t internal_create_group
ffffffff8114da60 T sysfs_update_group
ffffffff8114da70 T sysfs_create_group
ffffffff8114da80 T configfs_setattr
ffffffff8114dc70 T configfs_new_inode
ffffffff8114dd70 T configfs_create
ffffffff8114de90 T configfs_get_name
ffffffff8114ded0 T configfs_drop_dentry
ffffffff8114df60 T configfs_hash_and_remove
ffffffff8114e080 T configfs_inode_exit
ffffffff8114e090 t configfs_release
ffffffff8114e100 t configfs_write_file
ffffffff8114e220 t configfs_read_file
ffffffff8114e340 t configfs_open_file
ffffffff8114e510 T configfs_add_file
ffffffff8114e5b0 T configfs_create_file
ffffffff8114e5d0 t configfs_d_delete
ffffffff8114e5e0 t configfs_init_file
ffffffff8114e600 t init_symlink
ffffffff8114e610 t configfs_dir_set_ready
ffffffff8114e660 t configfs_dir_lseek
ffffffff8114e7c0 t configfs_dir_close
ffffffff8114e850 t configfs_new_dirent
ffffffff8114e940 t configfs_readdir
ffffffff8114ebb0 t configfs_depend_prep
ffffffff8114ec30 t unlink_obj
ffffffff8114ec80 t unlink_group
ffffffff8114ecd0 t link_obj
ffffffff8114ed10 t init_dir
ffffffff8114ed40 t configfs_d_iput
ffffffff8114edf0 T configfs_depend_item
ffffffff8114eed0 t configfs_detach_prep.isra.6
ffffffff8114ef90 t configfs_detach_rollback.isra.7
ffffffff8114eff0 t client_disconnect_notify
ffffffff8114f020 T configfs_undepend_item
ffffffff8114f060 t client_drop_item
ffffffff8114f090 t detach_attrs.isra.12
ffffffff8114f1c0 t configfs_remove_dir.isra.13
ffffffff8114f2f0 t detach_groups.isra.14
ffffffff8114f3f0 t configfs_detach_group
ffffffff8114f410 t configfs_rmdir
ffffffff8114f6a0 T configfs_unregister_subsystem
ffffffff8114f7f0 t link_group
ffffffff8114f860 t configfs_attach_item.isra.16.part.17
ffffffff8114f940 T configfs_make_dirent
ffffffff8114f9c0 t create_dir
ffffffff8114fb50 t configfs_attach_group.isra.18
ffffffff8114fd20 T configfs_register_subsystem
ffffffff8114fe90 T configfs_dirent_is_ready
ffffffff8114fec0 t configfs_dir_open
ffffffff8114ff60 t configfs_mkdir
ffffffff81150370 t configfs_lookup
ffffffff811504e0 T configfs_create_link
ffffffff811505c0 t configfs_put_link
ffffffff811505e0 t configfs_follow_link
ffffffff81150870 T configfs_symlink
ffffffff81150ba0 T configfs_unlink
ffffffff81150d70 t configfs_do_mount
ffffffff81150d80 t configfs_fill_super
ffffffff81150e40 T configfs_is_root
ffffffff81150e50 T configfs_pin_fs
ffffffff81150e90 T configfs_release_fs
ffffffff81150eb0 T config_item_init
ffffffff81150ed0 T config_group_init
ffffffff81150ef0 T config_item_get
ffffffff81150f30 T config_group_find_item
ffffffff81150fa0 t config_item_release
ffffffff81151060 T config_item_put
ffffffff81151080 T config_item_set_name
ffffffff811511b0 T config_group_init_type_name
ffffffff81151200 T config_item_init_type_name
ffffffff81151250 t devpts_kill_sb
ffffffff81151270 t devpts_mount
ffffffff81151280 t devpts_show_options
ffffffff811512f0 t devpts_fill_super
ffffffff81151410 t devpts_remount
ffffffff81151530 T devpts_new_index
ffffffff81151610 T devpts_kill_index
ffffffff81151660 T devpts_pty_new
ffffffff811518b0 T devpts_get_tty
ffffffff81151920 T devpts_pty_kill
ffffffff811519b0 t ramfs_kill_sb
ffffffff811519d0 t rootfs_mount
ffffffff811519f0 T ramfs_mount
ffffffff81151a00 T ramfs_get_inode
ffffffff81151b40 T ramfs_fill_super
ffffffff81151c50 t ramfs_mknod
ffffffff81151ce0 t ramfs_mkdir
ffffffff81151d10 t ramfs_create
ffffffff81151d20 t ramfs_symlink
ffffffff81151df0 t hugetlbfs_write_begin
ffffffff81151e00 t hugetlbfs_mount
ffffffff81151e10 t hugetlbfs_statfs
ffffffff81151ee0 t hugetlbfs_put_super
ffffffff81151f20 t hugetlbfs_set_page_dirty
ffffffff81151f40 t hugetlbfs_write_end
ffffffff81151f50 t truncate_hugepages
ffffffff811520e0 t hugetlbfs_evict_inode
ffffffff81152100 t hugetlbfs_i_callback
ffffffff81152120 t hugetlbfs_fill_super
ffffffff81152520 t hugetlbfs_get_inode
ffffffff81152660 t hugetlbfs_mknod
ffffffff811526f0 t hugetlbfs_mkdir
ffffffff81152720 t hugetlbfs_create
ffffffff81152730 t hugetlbfs_migrate_page
ffffffff81152770 t hugetlbfs_symlink
ffffffff81152840 t init_once
ffffffff81152850 t hugetlb_get_unmapped_area
ffffffff81152c00 t hugetlbfs_file_mmap
ffffffff81152d40 t hugetlbfs_read
ffffffff81153090 t hugetlbfs_inc_free_inodes.part.17
ffffffff811530d0 t hugetlbfs_destroy_inode
ffffffff81153110 t hugetlbfs_alloc_inode
ffffffff811531b0 t hugetlbfs_setattr
ffffffff81153320 T hugetlb_file_setup
ffffffff811535b0 T utf8_to_utf32
ffffffff81153690 T utf32_to_utf8
ffffffff81153740 t uni2char
ffffffff81153790 t char2uni
ffffffff811537b0 T unload_nls
ffffffff811537d0 t find_nls
ffffffff81153860 T utf8s_to_utf16s
ffffffff81153a00 T utf16s_to_utf8s
ffffffff81153b40 T register_nls
ffffffff81153bc0 T unregister_nls
ffffffff81153c40 T load_nls
ffffffff81153c70 T load_nls_default
ffffffff81153ca0 t pstore_kill_sb
ffffffff81153cc0 t pstore_mount
ffffffff81153cd0 t pstore_unlink
ffffffff81153d20 t pstore_evict_inode
ffffffff81153d90 t parse_options
ffffffff81153e00 t pstore_remount
ffffffff81153e20 t pstore_get_inode
ffffffff81153e60 T pstore_fill_super
ffffffff81153f10 t pstore_file_read
ffffffff81153f30 T pstore_is_mounted
ffffffff81153f40 T pstore_mkfile
ffffffff81154220 t pstore_timefunc
ffffffff81154270 t pstore_dump
ffffffff811544d0 T pstore_set_kmsg_bytes
ffffffff811544e0 T pstore_get_records
ffffffff81154600 T pstore_register
ffffffff81154730 t pstore_dowork
ffffffff81154740 t do_compat_semctl
ffffffff81154b20 T compat_sys_semctl
ffffffff81154b50 T compat_sys_msgsnd
ffffffff81154b90 T compat_sys_msgrcv
ffffffff81154c70 T compat_sys_msgctl
ffffffff811550d0 T compat_sys_shmat
ffffffff81155110 T compat_sys_shmctl
ffffffff811556e0 T compat_sys_semtimedop
ffffffff81155780 t sysvipc_find_ipc
ffffffff81155850 t sysvipc_proc_next
ffffffff811558c0 t ipc_schedule_free
ffffffff811558f0 t ipc_do_vfree
ffffffff81155900 t sysvipc_proc_release
ffffffff81155940 t sysvipc_proc_open
ffffffff811559f0 t sysvipc_proc_show
ffffffff81155a20 t sysvipc_proc_stop
ffffffff81155a80 t sysvipc_proc_start
ffffffff81155af0 T ipc_init_ids
ffffffff81155b30 T ipc_get_maxid
ffffffff81155ba0 T ipc_addid
ffffffff81155c90 T ipc_rmid
ffffffff81155ce0 T ipc_alloc
ffffffff81155d00 T ipc_free
ffffffff81155d20 T ipc_rcu_alloc
ffffffff81155d90 T ipc_rcu_getref
ffffffff81155da0 T ipc_rcu_putref
ffffffff81155de0 T ipcperms
ffffffff81155ee0 T kernel_to_ipc64_perm
ffffffff81155f10 T ipc64_perm_to_ipc_perm
ffffffff81155f40 T ipc_lock
ffffffff81155fa0 T ipc_lock_check
ffffffff81155fe0 T ipcget
ffffffff811561f0 T ipc_update_perm
ffffffff81156220 T ipcctl_pre_down
ffffffff81156370 T store_msg
ffffffff81156410 T free_msg
ffffffff81156450 T load_msg
ffffffff811565a0 t msg_security
ffffffff811565b0 t ss_wakeup
ffffffff811565f0 t expunge_all
ffffffff81156650 t freeque
ffffffff811566e0 t newque
ffffffff81156810 t sysvipc_msg_proc_show
ffffffff81156890 t msgctl_down.constprop.7
ffffffff811569f0 T recompute_msgmni
ffffffff81156a70 T msg_init_ns
ffffffff81156ab0 T msg_exit_ns
ffffffff81156ad0 T sys_msgget
ffffffff81156b30 T sys_msgctl
ffffffff81156e30 T do_msgsnd
ffffffff811571d0 T sys_msgsnd
ffffffff81157200 T do_msgrcv
ffffffff811575c0 T sys_msgrcv
ffffffff81157610 t sem_security
ffffffff81157620 t sem_more_checks
ffffffff81157640 t lookup_undo
ffffffff811576e0 t wake_up_sem_queue_do
ffffffff81157720 t newary
ffffffff81157880 t sysvipc_sem_proc_show
ffffffff811578e0 t try_atomic_semop.isra.6
ffffffff81157a80 t unlink_queue.isra.7
ffffffff81157af0 t freeary
ffffffff81157c90 t update_queue
ffffffff81157ef0 t do_smart_update
ffffffff81157fe0 t semctl_main.isra.11
ffffffff811585d0 t semctl_down.constprop.12
ffffffff811586e0 t copy_semid_to_user.constprop.13
ffffffff81158700 t semctl_nolock.constprop.14
ffffffff81158930 T sem_init_ns
ffffffff81158970 T sem_exit_ns
ffffffff81158990 T sys_semget
ffffffff81158a00 T sys_semctl
ffffffff81158aa0 T sys_semtimedop
ffffffff811592f0 T sys_semop
ffffffff81159300 T copy_semundo
ffffffff811593c0 T exit_sem
ffffffff81159640 t shm_fault
ffffffff81159660 t shm_set_policy
ffffffff81159690 t shm_fsync
ffffffff811596c0 t shm_get_unmapped_area
ffffffff811596e0 t shm_security
ffffffff811596f0 t shm_more_checks
ffffffff81159700 t shm_open
ffffffff81159770 t shm_release
ffffffff811597c0 t shm_get_policy
ffffffff811597f0 t shm_mmap
ffffffff81159860 t shm_add_rss_swap.isra.13
ffffffff81159910 t sysvipc_shm_proc_show
ffffffff811599e0 t shm_destroy
ffffffff81159a80 t shm_close
ffffffff81159b50 t do_shm_rmid
ffffffff81159b80 t shmctl_down.constprop.18
ffffffff81159c90 t shm_try_destroy_current
ffffffff81159cf0 t shm_try_destroy_orphaned
ffffffff81159d50 t newseg
ffffffff81159fa0 T shm_init_ns
ffffffff81159fe0 T shm_exit_ns
ffffffff8115a010 T shm_destroy_orphaned
ffffffff8115a070 T exit_shm
ffffffff8115a100 T is_file_shm_hugepages
ffffffff8115a110 T sys_shmget
ffffffff8115a170 T sys_shmctl
ffffffff8115a6a0 T do_shmat
ffffffff8115aa80 T sys_shmat
ffffffff8115aaa0 T sys_shmdt
ffffffff8115ac10 t ipcns_callback
ffffffff8115ac40 T register_ipcns_notifier
ffffffff8115ac90 T cond_register_ipcns_notifier
ffffffff8115ace0 T unregister_ipcns_notifier
ffffffff8115ad10 T ipcns_notify
ffffffff8115ad30 t proc_ipcauto_dointvec_minmax
ffffffff8115ae50 t proc_ipc_dointvec
ffffffff8115aed0 t proc_ipc_dointvec_minmax_orphans
ffffffff8115af80 t proc_ipc_doulongvec_minmax
ffffffff8115b000 t proc_ipc_callback_dointvec.part.1
ffffffff8115b030 t proc_ipc_callback_dointvec
ffffffff8115b100 t msg_insert
ffffffff8115b190 t mqueue_mount
ffffffff8115b1c0 t mqueue_poll_file
ffffffff8115b250 t mqueue_destroy_inode
ffffffff8115b270 t mqueue_i_callback
ffffffff8115b290 t mqueue_alloc_inode
ffffffff8115b2c0 t mqueue_unlink
ffffffff8115b330 t remove_notification
ffffffff8115b390 t mqueue_flush_file
ffffffff8115b400 t mqueue_read_file
ffffffff8115b560 t init_once
ffffffff8115b570 t prepare_timeout
ffffffff8115b5f0 t wq_sleep
ffffffff8115b770 t __do_notify
ffffffff8115b8a0 t mqueue_evict_inode
ffffffff8115ba00 t mqueue_get_inode.isra.6
ffffffff8115bd60 t mqueue_fill_super
ffffffff8115bdc0 t mqueue_create
ffffffff8115bf50 t do_open.isra.9
ffffffff8115c020 T sys_mq_open
ffffffff8115c450 T sys_mq_unlink
ffffffff8115c5c0 T sys_mq_timedsend
ffffffff8115c8c0 T sys_mq_timedreceive
ffffffff8115cc00 T sys_mq_notify
ffffffff8115cfb0 T sys_mq_getsetattr
ffffffff8115d1d0 T mq_init_ns
ffffffff8115d230 T mq_clear_sbinfo
ffffffff8115d250 T mq_put_mnt
ffffffff8115d260 t compat_prepare_timeout
ffffffff8115d2e0 T compat_sys_mq_open
ffffffff8115d410 T compat_sys_mq_timedsend
ffffffff8115d480 T compat_sys_mq_timedreceive
ffffffff8115d4f0 T compat_sys_mq_notify
ffffffff8115d580 T compat_sys_mq_getsetattr
ffffffff8115d720 t ipcns_get
ffffffff8115d750 T copy_ipcs
ffffffff8115d830 T free_ipcs
ffffffff8115d8c0 T put_ipc_ns
ffffffff8115d940 t ipcns_install
ffffffff8115d990 t ipcns_put
ffffffff8115d9a0 t proc_mq_dointvec_minmax
ffffffff8115da20 t proc_mq_dointvec
ffffffff8115daa0 T mq_register_sysctl_table
ffffffff8115dab0 t cap_safe_nice
ffffffff8115db10 T cap_netlink_send
ffffffff8115db20 T cap_capable
ffffffff8115db80 T cap_settime
ffffffff8115dba0 T cap_ptrace_access_check
ffffffff8115dc10 T cap_ptrace_traceme
ffffffff8115dc80 T cap_capget
ffffffff8115dca0 T cap_capset
ffffffff8115ddd0 T cap_inode_need_killpriv
ffffffff8115de10 T cap_inode_killpriv
ffffffff8115de40 T get_vfs_caps_from_disk
ffffffff8115df30 T cap_bprm_set_creds
ffffffff8115e3a0 T cap_bprm_secureexec
ffffffff8115e3f0 T cap_inode_setxattr
ffffffff8115e460 T cap_inode_removexattr
ffffffff8115e4d0 T cap_task_fix_setuid
ffffffff8115e630 T cap_task_setscheduler
ffffffff8115e640 T cap_task_setioprio
ffffffff8115e650 T cap_task_setnice
ffffffff8115e660 T cap_task_prctl
ffffffff8115e830 T cap_vm_enough_memory
ffffffff8115e890 T cap_file_mmap
ffffffff8115e8e0 T mmap_min_addr_handler
ffffffff8115e950 T ipv4_skb_to_auditdata
ffffffff8115ea00 T common_lsm_audit
ffffffff8115f120 t devcgroup_populate
ffffffff8115f140 t devcgroup_seq_read
ffffffff8115f280 t devcgroup_destroy
ffffffff8115f300 t devcgroup_access_write
ffffffff8115f7b0 t devcgroup_can_attach
ffffffff8115f7f0 t devcgroup_create
ffffffff8115f9b0 T __devcgroup_inode_permission
ffffffff8115fab0 T devcgroup_inode_mknod
ffffffff8115fbb0 T crypto_find_alg
ffffffff8115fbf0 T crypto_shoot_alg
ffffffff8115fc20 T crypto_larval_alloc
ffffffff8115fce0 T crypto_mod_put
ffffffff8115fd10 T crypto_mod_get
ffffffff8115fd40 t crypto_larval_wait
ffffffff8115fe00 t __crypto_alg_lookup
ffffffff8115ff00 T crypto_alg_lookup
ffffffff8115ff50 T crypto_larval_lookup
ffffffff811600f0 t crypto_exit_ops
ffffffff81160140 T crypto_create_tfm
ffffffff81160210 T crypto_alloc_tfm
ffffffff811602f0 T __crypto_alloc_tfm
ffffffff81160450 T crypto_destroy_tfm
ffffffff811604d0 T crypto_probing_notify
ffffffff81160550 T crypto_larval_kill
ffffffff811605d0 T crypto_alg_mod_lookup
ffffffff81160650 T crypto_has_alg
ffffffff81160680 T crypto_alloc_base
ffffffff81160730 t crypto_larval_destroy
ffffffff81160760 t cipher_decrypt_unaligned
ffffffff81160790 t cipher_encrypt_unaligned
ffffffff811607c0 t setkey
ffffffff81160900 T crypto_init_cipher_ops
ffffffff81160950 T crypto_exit_cipher_ops
ffffffff81160960 t crypto_compress
ffffffff81160970 t crypto_decompress
ffffffff81160980 T crypto_init_compress_ops
ffffffff811609a0 T crypto_exit_compress_ops
ffffffff811609b0 T crypto_remove_final
ffffffff81160a20 T crypto_get_attr_type
ffffffff81160a50 T crypto_attr_u32
ffffffff81160a90 T crypto_init_queue
ffffffff81160ab0 T crypto_tfm_in_queue
ffffffff81160af0 T crypto_xor
ffffffff81160b50 T crypto_inc
ffffffff81160bc0 T crypto_check_attr_type
ffffffff81160c10 T __crypto_dequeue_request
ffffffff81160c70 T crypto_dequeue_request
ffffffff81160c80 T crypto_enqueue_request
ffffffff81160cc0 T crypto_alloc_instance2
ffffffff81160d90 T crypto_unregister_notifier
ffffffff81160da0 T crypto_register_notifier
ffffffff81160db0 T crypto_drop_spawn
ffffffff81160e10 T crypto_init_spawn
ffffffff81160e90 T crypto_alloc_instance
ffffffff81160ef0 T crypto_init_spawn2
ffffffff81160f10 T crypto_register_template
ffffffff81160fb0 t crypto_check_alg
ffffffff81161020 t __crypto_register_alg
ffffffff811611e0 t __crypto_lookup_template
ffffffff81161260 t crypto_destroy_instance
ffffffff81161280 T crypto_larval_error
ffffffff811612d0 T crypto_attr_alg_name
ffffffff81161310 T crypto_attr_alg2
ffffffff81161350 t crypto_spawn_alg.isra.8
ffffffff811613d0 T crypto_spawn_tfm2
ffffffff81161440 T crypto_spawn_tfm
ffffffff811614b0 T crypto_remove_spawns
ffffffff811617d0 t crypto_remove_alg
ffffffff81161830 T crypto_unregister_instance
ffffffff81161910 T crypto_unregister_template
ffffffff81161a00 T crypto_alg_tested
ffffffff81161bf0 t crypto_wait_for_test
ffffffff81161c60 T crypto_unregister_alg
ffffffff81161cd0 T crypto_unregister_algs
ffffffff81161d40 T crypto_lookup_template
ffffffff81161d70 T crypto_register_instance
ffffffff81161e50 T crypto_register_alg
ffffffff81161ec0 T crypto_register_algs
ffffffff81161f30 T scatterwalk_map
ffffffff81161fa0 T scatterwalk_start
ffffffff81161fc0 t scatterwalk_pagedone
ffffffff81162040 T scatterwalk_copychunks
ffffffff81162100 T scatterwalk_done
ffffffff81162140 T scatterwalk_map_and_copy
ffffffff81162210 t crypto_info_open
ffffffff81162220 t c_show
ffffffff811623c0 t c_next
ffffffff811623d0 t c_stop
ffffffff811623e0 t c_start
ffffffff81162400 T crypto_aead_setauthsize
ffffffff81162460 t crypto_aead_ctxsize
ffffffff81162470 t no_givcrypt
ffffffff81162480 t crypto_init_aead_ops
ffffffff81162520 t aead_null_givencrypt
ffffffff81162530 t aead_null_givdecrypt
ffffffff81162540 t crypto_init_nivaead_ops
ffffffff811625c0 t crypto_nivaead_report
ffffffff81162660 t crypto_aead_report
ffffffff81162710 t crypto_nivaead_show
ffffffff811627c0 t crypto_aead_show
ffffffff81162870 t setkey
ffffffff81162970 t crypto_nivaead_default
ffffffff81162b40 T crypto_lookup_aead
ffffffff81162c00 T crypto_alloc_aead
ffffffff81162cc0 T crypto_grab_aead
ffffffff81162d20 T aead_geniv_exit
ffffffff81162d30 T aead_geniv_init
ffffffff81162d70 T aead_geniv_free
ffffffff81162d90 T aead_geniv_alloc
ffffffff81163100 t crypto_ablkcipher_ctxsize
ffffffff81163110 T skcipher_null_givencrypt
ffffffff81163120 T skcipher_null_givdecrypt
ffffffff81163130 t crypto_init_ablkcipher_ops
ffffffff81163190 t no_givdecrypt
ffffffff811631a0 t crypto_init_givcipher_ops
ffffffff81163220 t skcipher_module_exit
ffffffff81163230 t crypto_givcipher_report
ffffffff811632e0 t crypto_ablkcipher_report
ffffffff81163390 t crypto_givcipher_show
ffffffff81163460 t crypto_ablkcipher_show
ffffffff81163530 t setkey
ffffffff81163660 t crypto_givcipher_default
ffffffff81163890 T crypto_lookup_skcipher
ffffffff811639a0 T crypto_alloc_ablkcipher
ffffffff81163a60 T crypto_grab_skcipher
ffffffff81163ac0 T __ablkcipher_walk_complete
ffffffff81163b40 T ablkcipher_walk_done
ffffffff81163d30 t ablkcipher_walk_next
ffffffff81163f80 T ablkcipher_walk_phys
ffffffff81164160 T crypto_default_geniv
ffffffff811641b0 t async_encrypt
ffffffff811641f0 t async_decrypt
ffffffff81164230 t crypto_blkcipher_ctxsize
ffffffff81164260 t crypto_blkcipher_report
ffffffff81164310 t crypto_blkcipher_show
ffffffff811643b0 t setkey
ffffffff811644e0 t async_setkey
ffffffff811644f0 T skcipher_geniv_exit
ffffffff81164500 T skcipher_geniv_init
ffffffff81164540 T skcipher_geniv_free
ffffffff81164560 T skcipher_geniv_alloc
ffffffff811649a0 T blkcipher_walk_done
ffffffff81164bd0 t blkcipher_walk_next
ffffffff81164f70 t blkcipher_walk_first
ffffffff81165150 T blkcipher_walk_virt_block
ffffffff81165160 T blkcipher_walk_phys
ffffffff81165180 T blkcipher_walk_virt
ffffffff811651a0 t crypto_init_blkcipher_ops
ffffffff81165250 t chainiv_module_exit
ffffffff81165260 t chainiv_free
ffffffff81165280 t chainiv_alloc
ffffffff81165360 t async_chainiv_init
ffffffff811653c0 t chainiv_init
ffffffff811653e0 t chainiv_givencrypt
ffffffff811654e0 t chainiv_givencrypt_first
ffffffff81165580 t async_chainiv_exit
ffffffff811655a0 t async_chainiv_schedule_work
ffffffff811655e0 t async_chainiv_givencrypt_tail
ffffffff81165680 t async_chainiv_do_postponed
ffffffff81165720 t async_chainiv_givencrypt
ffffffff81165860 t async_chainiv_givencrypt_first
ffffffff811658f0 t eseqiv_free
ffffffff81165910 t eseqiv_init
ffffffff81165940 t eseqiv_complete2
ffffffff81165970 t eseqiv_complete
ffffffff811659a0 t eseqiv_givencrypt
ffffffff81165d90 t eseqiv_givencrypt_first
ffffffff81165e30 t eseqiv_alloc
ffffffff81165ed0 t hash_walk_next
ffffffff81165f50 t hash_walk_new_entry
ffffffff81165f90 t ahash_nosetkey
ffffffff81165fa0 t ahash_no_export
ffffffff81165fb0 t ahash_no_import
ffffffff81165fc0 t crypto_ahash_extsize
ffffffff81165fe0 t crypto_ahash_report
ffffffff81166040 t crypto_ahash_show
ffffffff811660b0 t crypto_ahash_init_tfm
ffffffff81166140 t ahash_def_finup_finish2
ffffffff81166190 t ahash_def_finup
ffffffff81166280 t ahash_def_finup_done1
ffffffff81166300 t ahash_def_finup_done2
ffffffff81166350 t ahash_op_unaligned_finish
ffffffff811663a0 t crypto_ahash_op
ffffffff81166460 T crypto_ahash_digest
ffffffff81166470 T crypto_ahash_finup
ffffffff81166480 T crypto_ahash_final
ffffffff81166490 t ahash_op_unaligned_done
ffffffff811664e0 T ahash_attr_alg
ffffffff81166510 T crypto_init_ahash_spawn
ffffffff81166520 T ahash_free_instance
ffffffff81166540 T crypto_unregister_ahash
ffffffff81166550 T crypto_register_ahash
ffffffff81166590 T crypto_alloc_ahash
ffffffff811665a0 T crypto_hash_walk_done
ffffffff811666a0 T crypto_hash_walk_first
ffffffff811666e0 T crypto_ahash_setkey
ffffffff811667d0 T ahash_register_instance
ffffffff81166810 T crypto_hash_walk_first_compat
ffffffff81166840 t shash_no_setkey
ffffffff81166850 t shash_async_init
ffffffff81166870 t shash_async_export
ffffffff81166890 t shash_async_import
ffffffff811668b0 t shash_compat_init
ffffffff811668d0 t crypto_shash_ctxsize
ffffffff811668e0 t crypto_shash_init_tfm
ffffffff811668f0 t crypto_shash_extsize
ffffffff81166900 T shash_attr_alg
ffffffff81166930 t crypto_shash_report
ffffffff81166990 t crypto_shash_show
ffffffff811669f0 t crypto_exit_shash_ops_async
ffffffff81166a00 t crypto_init_shash_ops
ffffffff81166b00 t crypto_exit_shash_ops_compat
ffffffff81166b20 T crypto_init_shash_spawn
ffffffff81166b30 T shash_free_instance
ffffffff81166b50 T shash_register_instance
ffffffff81166c00 t shash_default_import
ffffffff81166c20 t shash_default_export
ffffffff81166c40 T crypto_shash_setkey
ffffffff81166d40 t shash_compat_setkey
ffffffff81166d50 t shash_async_setkey
ffffffff81166d60 T crypto_unregister_shash
ffffffff81166d70 T crypto_register_shash
ffffffff81166e20 T crypto_alloc_shash
ffffffff81166e30 t shash_final_unaligned
ffffffff81166ed0 T crypto_shash_final
ffffffff81166ef0 t shash_compat_final
ffffffff81166f00 t shash_async_final
ffffffff81166f10 t shash_update_unaligned
ffffffff81167000 T crypto_shash_update
ffffffff81167020 t shash_compat_update
ffffffff81167070 t shash_finup_unaligned
ffffffff811670c0 T crypto_shash_finup
ffffffff811670f0 t shash_digest_unaligned
ffffffff81167170 T crypto_shash_digest
ffffffff811671a0 t shash_compat_digest
ffffffff811672e0 T shash_ahash_update
ffffffff81167320 t shash_async_update
ffffffff81167330 T shash_ahash_finup
ffffffff811673b0 t shash_async_finup
ffffffff811673d0 T shash_ahash_digest
ffffffff811674e0 t shash_async_digest
ffffffff81167500 T crypto_init_shash_ops_async
ffffffff811675d0 t crypto_pcomp_init
ffffffff811675e0 t crypto_pcomp_extsize
ffffffff811675f0 t crypto_pcomp_init_tfm
ffffffff81167600 T crypto_unregister_pcomp
ffffffff81167610 T crypto_register_pcomp
ffffffff81167630 t crypto_pcomp_report
ffffffff81167680 t crypto_pcomp_show
ffffffff81167690 T crypto_alloc_pcomp
ffffffff811676a0 t cryptomgr_notify
ffffffff81167b40 t cryptomgr_probe
ffffffff81167c30 t cryptomgr_test
ffffffff81167c50 T alg_test
ffffffff81167c60 T crypto_aes_expand_key
ffffffff81168040 T crypto_aes_set_key
ffffffff81168070 t aes_encrypt
ffffffff81168e30 t aes_decrypt
ffffffff81169c50 t chksum_init
ffffffff81169c60 t chksum_setkey
ffffffff81169c80 t chksum_final
ffffffff81169c90 t crc32c_cra_init
ffffffff81169ca0 t chksum_digest
ffffffff81169cc0 t chksum_finup
ffffffff81169ce0 t chksum_update
ffffffff81169d00 t crypto_init_rng_ops
ffffffff81169d20 t crypto_rng_ctxsize
ffffffff81169d30 t crypto_rng_report
ffffffff81169d90 t crypto_rng_show
ffffffff81169de0 t rngapi_reset
ffffffff81169e70 T crypto_put_default_rng
ffffffff81169ec0 T crypto_get_default_rng
ffffffff81169f70 t krng_reset
ffffffff81169f80 t krng_get_random
ffffffff81169fa0 T elv_rb_find
ffffffff81169fe0 T elv_dispatch_sort
ffffffff8116a0c0 T elv_dispatch_add_tail
ffffffff8116a130 T elv_rb_latter_request
ffffffff8116a160 T elv_rb_former_request
ffffffff8116a190 t elevator_find
ffffffff8116a200 t elevator_get
ffffffff8116a2c0 t elevator_release
ffffffff8116a2f0 t elv_attr_store
ffffffff8116a3a0 t elv_attr_show
ffffffff8116a440 T elevator_exit
ffffffff8116a490 T elv_unregister
ffffffff8116a500 T elv_register
ffffffff8116a6b0 T elv_unregister_queue
ffffffff8116a700 T elv_abort_queue
ffffffff8116a750 T elv_rb_add
ffffffff8116a7b0 t elevator_alloc.isra.12
ffffffff8116a860 T elevator_init
ffffffff8116a9a0 t elv_rqhash_find.isra.13
ffffffff8116aa80 t elv_rqhash_add.isra.14
ffffffff8116ab00 T elv_rb_del
ffffffff8116ab60 T elv_rq_merge_ok
ffffffff8116abc0 T elv_merge
ffffffff8116acd0 T elv_merged_request
ffffffff8116ad60 T elv_merge_requests
ffffffff8116ae30 T elv_bio_merged
ffffffff8116ae50 T elv_drain_elevator
ffffffff8116aeb0 T __elv_add_request
ffffffff8116b090 T elv_add_request
ffffffff8116b0f0 T elv_requeue_request
ffffffff8116b180 T elv_quiesce_start
ffffffff8116b1d0 T elv_quiesce_end
ffffffff8116b200 T elv_latter_request
ffffffff8116b220 T elv_former_request
ffffffff8116b240 T elv_set_request
ffffffff8116b260 T elv_put_request
ffffffff8116b280 T elv_may_queue
ffffffff8116b2a0 T elv_completed_request
ffffffff8116b2f0 T __elv_register_queue
ffffffff8116b380 T elv_register_queue
ffffffff8116b390 T elevator_change
ffffffff8116b520 T elv_iosched_store
ffffffff8116b580 T elv_iosched_show
ffffffff8116b6b0 T blk_get_backing_dev_info
ffffffff8116b6e0 T blk_add_request_payload
ffffffff8116b780 T blk_unprep_request
ffffffff8116b7b0 T blk_lld_busy
ffffffff8116b7d0 T blk_start_plug
ffffffff8116b820 t plug_rq_cmp
ffffffff8116b830 T part_round_stats
ffffffff8116b960 T __blk_run_queue
ffffffff8116b980 T kblockd_schedule_delayed_work
ffffffff8116b990 T kblockd_schedule_work
ffffffff8116b9a0 T blk_rq_unprep_clone
ffffffff8116b9d0 T blk_start_queue
ffffffff8116ba20 T blk_run_queue
ffffffff8116ba70 T blk_dump_rq_flags
ffffffff8116bb70 t drive_stat_acct
ffffffff8116bd00 t handle_bad_sector
ffffffff8116bd90 t generic_make_request_checks
ffffffff8116bf60 t blk_delay_work
ffffffff8116bfa0 T blk_get_queue
ffffffff8116bfd0 T blk_alloc_queue_node
ffffffff8116c1d0 T blk_alloc_queue
ffffffff8116c1e0 T blk_put_queue
ffffffff8116c1f0 T blk_stop_queue
ffffffff8116c220 T blk_run_queue_async
ffffffff8116c260 T blk_sync_queue
ffffffff8116c280 T blk_delay_queue
ffffffff8116c2b0 T blk_rq_init
ffffffff8116c410 T blk_rq_prep_clone
ffffffff8116c570 T blk_rq_err_bytes
ffffffff8116c5d0 t queue_unplugged.isra.41
ffffffff8116c620 T blk_rq_check_limits
ffffffff8116c6d0 T blk_insert_cloned_request
ffffffff8116c7b0 t req_bio_endio.isra.43
ffffffff8116c870 T blk_update_request
ffffffff8116cc70 t blk_update_bidi_request
ffffffff8116cd00 T generic_make_request
ffffffff8116cde0 T submit_bio
ffffffff8116ced0 t bio_attempt_back_merge
ffffffff8116cf60 t bio_attempt_front_merge
ffffffff8116d050 t __freed_request
ffffffff8116d100 t freed_request
ffffffff8116d170 t get_request
ffffffff8116d600 t get_request_wait
ffffffff8116d7c0 T __blk_put_request
ffffffff8116d8b0 t blk_finish_request
ffffffff8116db30 t blk_end_bidi_request
ffffffff8116dbb0 T blk_end_request
ffffffff8116dbc0 T blk_end_request_err
ffffffff8116dc10 T blk_end_request_cur
ffffffff8116dc40 T blk_put_request
ffffffff8116dca0 T blk_end_request_all
ffffffff8116dcd0 T blk_requeue_request
ffffffff8116dd30 T blk_get_request
ffffffff8116ddb0 T blk_make_request
ffffffff8116de40 T blk_queue_congestion_threshold
ffffffff8116de80 T blk_init_allocated_queue
ffffffff8116dfa0 T blk_drain_queue
ffffffff8116e080 T blk_cleanup_queue
ffffffff8116e150 T blk_init_queue_node
ffffffff8116e1c0 T blk_init_queue
ffffffff8116e1d0 T blk_dequeue_request
ffffffff8116e250 T blk_start_request
ffffffff8116e290 T __blk_end_bidi_request
ffffffff8116e2d0 T __blk_end_request_all
ffffffff8116e300 T blk_peek_request
ffffffff8116e4c0 T blk_fetch_request
ffffffff8116e4f0 T __blk_end_request
ffffffff8116e500 T __blk_end_request_err
ffffffff8116e550 T __blk_end_request_cur
ffffffff8116e580 T blk_rq_bio_prep
ffffffff8116e630 T init_request_from_bio
ffffffff8116e690 T blk_flush_plug_list
ffffffff8116e8b0 T blk_finish_plug
ffffffff8116e8e0 T blk_queue_bio
ffffffff8116ec30 T blk_queue_free_tags
ffffffff8116ec40 T blk_queue_invalidate_tags
ffffffff8116ec80 T blk_queue_find_tag
ffffffff8116ecb0 T blk_queue_start_tag
ffffffff8116edc0 T blk_queue_end_tag
ffffffff8116eea0 t init_tag_map
ffffffff8116ef70 T blk_queue_resize_tags
ffffffff8116f040 t __blk_queue_init_tags
ffffffff8116f0b0 T blk_queue_init_tags
ffffffff8116f160 T blk_init_tags
ffffffff8116f170 t __blk_free_tags
ffffffff8116f1f0 T blk_free_tags
ffffffff8116f210 T __blk_queue_free_tags
ffffffff8116f240 t queue_var_store
ffffffff8116f290 t queue_ra_store
ffffffff8116f2c0 t queue_store_random
ffffffff8116f330 t queue_store_iostats
ffffffff8116f3a0 t queue_rq_affinity_store
ffffffff8116f440 t queue_nomerges_store
ffffffff8116f4c0 t queue_store_nonrot
ffffffff8116f540 t queue_max_sectors_store
ffffffff8116f5e0 t queue_var_show
ffffffff8116f600 t queue_show_random
ffffffff8116f610 t queue_show_iostats
ffffffff8116f620 t queue_rq_affinity_show
ffffffff8116f650 t queue_nomerges_show
ffffffff8116f680 t queue_show_nonrot
ffffffff8116f6a0 t queue_discard_zeroes_data_show
ffffffff8116f6c0 t queue_discard_granularity_show
ffffffff8116f6d0 t queue_io_opt_show
ffffffff8116f6e0 t queue_io_min_show
ffffffff8116f6f0 t queue_physical_block_size_show
ffffffff8116f700 t queue_logical_block_size_show
ffffffff8116f730 t queue_max_integrity_segments_show
ffffffff8116f740 t queue_max_segments_show
ffffffff8116f750 t queue_max_sectors_show
ffffffff8116f760 t queue_max_hw_sectors_show
ffffffff8116f770 t queue_ra_show
ffffffff8116f780 t queue_requests_show
ffffffff8116f790 t queue_discard_max_show
ffffffff8116f7c0 t queue_requests_store
ffffffff8116f970 t queue_attr_store
ffffffff8116fa30 t queue_attr_show
ffffffff8116fae0 t blk_release_queue
ffffffff8116fb90 t queue_max_segment_size_show
ffffffff8116fbc0 T blk_register_queue
ffffffff8116fca0 T blk_unregister_queue
ffffffff8116fd20 T blkdev_issue_flush
ffffffff8116fdf0 t bio_end_flush
ffffffff8116fe20 t blk_flush_complete_seq
ffffffff811700c0 t flush_end_io
ffffffff81170200 t flush_data_end_io
ffffffff81170230 T blk_insert_flush
ffffffff81170340 T blk_abort_flushes
ffffffff81170470 T blk_queue_prep_rq
ffffffff81170480 T blk_queue_unprep_rq
ffffffff81170490 T blk_queue_merge_bvec
ffffffff811704a0 T blk_queue_softirq_done
ffffffff811704b0 T blk_queue_rq_timeout
ffffffff811704c0 T blk_queue_rq_timed_out
ffffffff811704d0 T blk_queue_lld_busy
ffffffff811704e0 T blk_set_default_limits
ffffffff81170560 T blk_set_stacking_limits
ffffffff811705e0 T blk_queue_max_discard_sectors
ffffffff811705f0 T blk_queue_logical_block_size
ffffffff81170620 T blk_queue_physical_block_size
ffffffff81170650 T blk_queue_alignment_offset
ffffffff81170670 T blk_limits_io_min
ffffffff81170690 T blk_queue_io_min
ffffffff811706c0 T blk_limits_io_opt
ffffffff811706d0 T blk_queue_io_opt
ffffffff811706e0 T blk_queue_dma_pad
ffffffff811706f0 T blk_queue_update_dma_pad
ffffffff81170700 T blk_queue_dma_alignment
ffffffff81170710 T blk_queue_flush_queueable
ffffffff81170730 T blk_queue_flush
ffffffff811707c0 T blk_queue_segment_boundary
ffffffff81170800 T blk_queue_max_segment_size
ffffffff81170840 T blk_queue_max_segments
ffffffff81170880 T blk_queue_dma_drain
ffffffff811708f0 T blk_limits_max_hw_sectors
ffffffff81170940 T blk_queue_max_hw_sectors
ffffffff81170950 T blk_stack_limits
ffffffff81170ce0 T bdev_stack_limits
ffffffff81170d10 T blk_queue_stack_limits
ffffffff81170d30 T blk_queue_bounce_limit
ffffffff81170db0 T blk_queue_make_request
ffffffff81170eb0 T blk_queue_update_dma_alignment
ffffffff81170ed0 T disk_stack_limits
ffffffff81170f40 T ioc_cgroup_changed
ffffffff81170f80 t icq_free_icq_rcu
ffffffff81170f90 T ioc_lookup_icq
ffffffff81170fe0 t ioc_destroy_icq
ffffffff811710b0 t ioc_release_fn
ffffffff81171160 T put_io_context
ffffffff81171220 T icq_get_changed
ffffffff81171270 T get_io_context
ffffffff81171280 T exit_io_context
ffffffff81171390 T ioc_clear_queue
ffffffff811713f0 T create_io_context_slowpath
ffffffff811714e0 T get_task_io_context
ffffffff811715c0 T ioc_create_icq
ffffffff811717d0 T ioc_set_icq_flags
ffffffff811717f0 T ioc_ioprio_changed
ffffffff81171840 t __blk_rq_unmap_user
ffffffff81171870 T blk_rq_unmap_user
ffffffff811718c0 T blk_rq_map_user_iov
ffffffff81171a10 T blk_rq_append_bio
ffffffff81171a70 T blk_rq_map_kern
ffffffff81171bd0 T blk_rq_map_user
ffffffff81171e30 t blk_end_sync_rq
ffffffff81171e60 T blk_execute_rq_nowait
ffffffff81171f90 T blk_execute_rq
ffffffff81172040 t __blk_recalc_rq_segments
ffffffff81172290 T blk_recount_segments
ffffffff811722d0 T blk_rq_map_sg
ffffffff81172640 T blk_recalc_rq_segments
ffffffff81172660 T ll_back_merge_fn
ffffffff81172760 T ll_front_merge_fn
ffffffff81172860 T blk_rq_set_mixed_merge
ffffffff811728e0 t attempt_merge
ffffffff81172cd0 T attempt_back_merge
ffffffff81172d30 T attempt_front_merge
ffffffff81172d90 T blk_attempt_req_merge
ffffffff81172da0 T blk_rq_merge_ok
ffffffff81172e10 T blk_try_merge
ffffffff81172e40 t blk_done_softirq
ffffffff81172ed0 t trigger_softirq
ffffffff81172f30 T __blk_complete_request
ffffffff81173050 T blk_complete_request
ffffffff81173070 T blk_delete_timer
ffffffff811730a0 T blk_add_timer
ffffffff81173160 t blk_rq_timed_out
ffffffff811731c0 T blk_abort_request
ffffffff81173200 T blk_abort_queue
ffffffff81173310 T blk_rq_timed_out_timer
ffffffff81173430 T __blk_iopoll_complete
ffffffff81173460 T blk_iopoll_complete
ffffffff811734c0 t blk_iopoll_softirq
ffffffff811735c0 T blk_iopoll_sched
ffffffff81173610 T blk_iopoll_init
ffffffff81173720 T blk_iopoll_disable
ffffffff81173770 T blk_iopoll_enable
ffffffff81173780 T blkdev_issue_zeroout
ffffffff81173900 t bio_batch_end_io
ffffffff81173940 T blkdev_issue_discard
ffffffff81173b10 T __blkdev_driver_ioctl
ffffffff81173b40 t blkpg_ioctl
ffffffff81173dc0 T blkdev_ioctl
ffffffff81174500 T disk_part_iter_init
ffffffff81174540 T disk_map_sector_rcu
ffffffff811745b0 t exact_match
ffffffff811745c0 t block_devnode
ffffffff811745e0 T set_device_ro
ffffffff811745f0 T bdev_read_only
ffffffff81174610 t partitions_open
ffffffff81174620 t diskstats_open
ffffffff81174630 t disk_seqf_next
ffffffff81174660 t disk_seqf_stop
ffffffff81174690 t disk_capability_show
ffffffff811746c0 t disk_discard_alignment_show
ffffffff81174700 t disk_alignment_offset_show
ffffffff81174740 t disk_ro_show
ffffffff81174770 t disk_removable_show
ffffffff811747a0 t disk_ext_range_show
ffffffff811747d0 t disk_range_show
ffffffff81174800 t disk_events_poll_msecs_show
ffffffff81174830 t disk_release
ffffffff811748f0 t disk_seqf_start
ffffffff81174960 t show_partition_start
ffffffff811749d0 t __disk_unblock_events
ffffffff81174af0 t disk_events_workfn
ffffffff81174c40 T put_disk
ffffffff81174c60 T get_disk
ffffffff81174cd0 t exact_lock
ffffffff81174cf0 T unregister_blkdev
ffffffff81174db0 T disk_part_iter_exit
ffffffff81174dd0 t __disk_events_show
ffffffff81174e70 t disk_events_async_show
ffffffff81174e80 t disk_events_show
ffffffff81174e90 T blk_unregister_region
ffffffff81174eb0 T blk_register_region
ffffffff81174ee0 T register_blkdev
ffffffff81175010 T disk_part_iter_next
ffffffff811750f0 t diskstats_show
ffffffff81175510 T set_disk_ro
ffffffff811755c0 T disk_get_part
ffffffff81175600 T blk_lookup_devt
ffffffff811756d0 T bdget_disk
ffffffff81175710 T invalidate_partition
ffffffff81175750 t base_probe
ffffffff81175790 T get_gendisk
ffffffff81175870 t show_partition
ffffffff81175960 T blkdev_show
ffffffff811759e0 T blk_alloc_devt
ffffffff81175ab0 T add_disk
ffffffff81175ef0 T blk_free_devt
ffffffff81175f40 T disk_expand_part_tbl
ffffffff81176020 T alloc_disk_node
ffffffff81176110 T alloc_disk
ffffffff81176120 T disk_block_events
ffffffff811761e0 T del_gendisk
ffffffff81176420 t disk_events_poll_msecs_store
ffffffff811764b0 T disk_unblock_events
ffffffff811764d0 T disk_flush_events
ffffffff81176560 t disk_events_set_dfl_poll_msecs
ffffffff811765b0 T disk_clear_events
ffffffff811766e0 T scsi_verify_blk_ioctl
ffffffff811767b0 T blk_verify_command
ffffffff81176810 T sg_scsi_ioctl
ffffffff81176b70 t sg_io
ffffffff81176fb0 t __blk_send_generic.constprop.12
ffffffff81177050 T scsi_cmd_ioctl
ffffffff811774c0 T scsi_cmd_blk_ioctl
ffffffff81177540 t whole_disk_show
ffffffff81177550 t part_discard_alignment_show
ffffffff81177580 t part_alignment_offset_show
ffffffff811775b0 t part_ro_show
ffffffff811775e0 t part_start_show
ffffffff81177610 t part_partition_show
ffffffff81177640 T part_inflight_show
ffffffff81177670 T part_size_show
ffffffff811776a0 t part_release
ffffffff811776d0 T read_dev_sector
ffffffff81177770 t disk_unlock_native_capacity
ffffffff811777f0 t delete_partition_rcu_cb
ffffffff811778e0 T part_stat_show
ffffffff81177c50 T __bdevname
ffffffff81177c80 T disk_name
ffffffff81177d30 T bdevname
ffffffff81177d50 T __delete_partition
ffffffff81177d70 T delete_partition
ffffffff81177e10 t drop_partitions.isra.13.part.14
ffffffff81177e60 T add_partition
ffffffff81178300 T rescan_partitions
ffffffff81178580 T invalidate_partitions
ffffffff81178620 T check_partition
ffffffff81178820 t parse_solaris_x86
ffffffff81178830 t parse_freebsd
ffffffff81178840 t parse_netbsd
ffffffff81178850 t parse_openbsd
ffffffff81178860 t parse_unixware
ffffffff81178870 t parse_minix
ffffffff81178880 t parse_extended
ffffffff81178b20 T msdos_partition
ffffffff81179040 t bsg_poll
ffffffff81179120 t bsg_get_done_cmd
ffffffff81179260 t blk_complete_sgv4_hdr_rq
ffffffff81179410 t bsg_free_command
ffffffff81179470 t bsg_kref_release_function
ffffffff81179490 t bsg_release
ffffffff811796c0 t bsg_open
ffffffff81179980 t bsg_rq_end_io
ffffffff81179a30 t bsg_devnode
ffffffff81179a60 T bsg_register_queue
ffffffff81179d80 T bsg_unregister_queue
ffffffff81179e30 t bsg_read
ffffffff81179fd0 t bsg_map_hdr.isra.7
ffffffff8117a310 t bsg_ioctl
ffffffff8117a550 t bsg_write
ffffffff8117a850 T bsg_remove_queue
ffffffff8117a8f0 T bsg_setup_queue
ffffffff8117a970 t bsg_softirq_done
ffffffff8117a9b0 t bsg_map_buffer
ffffffff8117aa20 T bsg_request_fn
ffffffff8117abf0 T bsg_goose_queue
ffffffff8117ac10 T bsg_job_done
ffffffff8117aca0 T cgroup_to_blkio_cgroup
ffffffff8117acb0 T task_blkio_cgroup
ffffffff8117acc0 T blkiocg_update_dispatch_stats
ffffffff8117ad50 T blkiocg_update_io_merged_stats
ffffffff8117adc0 T blkiocg_lookup_group
ffffffff8117ae00 T blkio_policy_register
ffffffff8117ae40 T blkio_policy_unregister
ffffffff8117ae80 t blkiocg_reset_stats
ffffffff8117b090 t blkio_get_key_name
ffffffff8117b1c0 t blkiocg_create
ffffffff8117b230 t blkio_read_policy_node_files
ffffffff8117b350 t blkiocg_file_read
ffffffff8117b3a0 t blkiocg_populate
ffffffff8117b3c0 t blkiocg_attach
ffffffff8117b440 t blkiocg_can_attach
ffffffff8117b4e0 T blkcg_get_weight
ffffffff8117b550 T blkiocg_update_timeslice_used
ffffffff8117b5a0 T blkiocg_update_io_add_stats
ffffffff8117b630 t blkiocg_destroy
ffffffff8117b7b0 T blkiocg_del_blkio_group
ffffffff8117b840 T blkiocg_add_blkio_group
ffffffff8117b910 T blkio_alloc_blkg_stats
ffffffff8117b940 T blkiocg_update_completion_stats
ffffffff8117ba40 T blkiocg_update_io_remove_stats
ffffffff8117bb10 t blkio_read_stat_cpu.isra.11
ffffffff8117bb90 t blkiocg_file_write_u64
ffffffff8117bce0 t blkiocg_file_read_u64
ffffffff8117bd00 t blkio_delete_rule_command
ffffffff8117bd50 t blkiocg_file_write
ffffffff8117c370 t blkio_read_blkg_stats.isra.16
ffffffff8117c620 t blkiocg_file_read_map
ffffffff8117c740 T blkcg_get_read_bps
ffffffff8117c7d0 T blkcg_get_write_bps
ffffffff8117c860 T blkcg_get_read_iops
ffffffff8117c8e0 T blkcg_get_write_iops
ffffffff8117c960 t noop_merged_requests
ffffffff8117c980 t noop_add_request
ffffffff8117c9a0 t noop_former_request
ffffffff8117c9c0 t noop_latter_request
ffffffff8117c9e0 t noop_init_queue
ffffffff8117ca10 t noop_dispatch
ffffffff8117ca50 t noop_exit_queue
ffffffff8117ca60 t cfq_update_blkio_group_weight
ffffffff8117ca80 t cfq_activate_request
ffffffff8117caa0 t cfq_init_icq
ffffffff8117cab0 t __cfq_update_io_thinktime
ffffffff8117cb10 t cfq_var_store
ffffffff8117cb50 t cfq_target_latency_store
ffffffff8117cb90 t cfq_low_latency_store
ffffffff8117cbd0 t cfq_group_idle_store
ffffffff8117cc10 t cfq_slice_idle_store
ffffffff8117cc50 t cfq_slice_async_rq_store
ffffffff8117cc80 t cfq_slice_async_store
ffffffff8117ccc0 t cfq_slice_sync_store
ffffffff8117cd00 t cfq_back_seek_penalty_store
ffffffff8117cd30 t cfq_back_seek_max_store
ffffffff8117cd60 t cfq_fifo_expire_async_store
ffffffff8117cda0 t cfq_fifo_expire_sync_store
ffffffff8117cde0 t cfq_quantum_store
ffffffff8117ce10 t cfq_var_show
ffffffff8117ce30 t cfq_target_latency_show
ffffffff8117ce60 t cfq_low_latency_show
ffffffff8117ce70 t cfq_group_idle_show
ffffffff8117cea0 t cfq_slice_idle_show
ffffffff8117ced0 t cfq_slice_async_rq_show
ffffffff8117cee0 t cfq_slice_async_show
ffffffff8117cf10 t cfq_slice_sync_show
ffffffff8117cf40 t cfq_back_seek_penalty_show
ffffffff8117cf50 t cfq_back_seek_max_show
ffffffff8117cf60 t cfq_fifo_expire_async_show
ffffffff8117cf90 t cfq_fifo_expire_sync_show
ffffffff8117cfc0 t cfq_quantum_show
ffffffff8117cfd0 t cfq_should_idle
ffffffff8117d070 t __cfq_set_active_queue
ffffffff8117d100 t cfq_deactivate_request
ffffffff8117d140 t cfq_rb_erase
ffffffff8117d1a0 t cfq_del_cfqq_rr
ffffffff8117d280 t cfq_group_service_tree_add
ffffffff8117d330 t cfq_prio_tree_add
ffffffff8117d3f0 t cfq_rb_first
ffffffff8117d440 t cfq_get_next_cfqg
ffffffff8117d490 t cfq_init_queue
ffffffff8117d840 t cfq_kick_queue
ffffffff8117d8a0 t cfq_allow_merge
ffffffff8117d940 t cfq_init_prio_data
ffffffff8117da70 t cfq_may_queue
ffffffff8117daf0 t cfq_find_cfqg
ffffffff8117db90 t cfq_bio_merged
ffffffff8117dbc0 t cfq_merge
ffffffff8117dc80 t cfqq_type
ffffffff8117dca0 t cfq_service_tree_add
ffffffff8117df80 t __cfq_slice_expired
ffffffff8117e3a0 t cfq_idle_slice_timer
ffffffff8117e460 t cfq_choose_req.isra.73
ffffffff8117e570 t cfq_find_next_rq
ffffffff8117e640 t cfq_remove_request
ffffffff8117e760 t cfq_merged_requests
ffffffff8117e870 t cfq_dispatch_insert
ffffffff8117e940 t cfq_add_rq_rb
ffffffff8117ea30 t cfq_put_cfqg
ffffffff8117eaf0 t cfq_put_queue
ffffffff8117ebc0 t cfq_exit_cfqq
ffffffff8117ec70 t cfq_exit_icq
ffffffff8117ecd0 t cfq_destroy_cfqg.isra.87
ffffffff8117ed20 t cfq_unlink_blkio_group
ffffffff8117ed90 t cfq_exit_queue
ffffffff8117eee0 t cfq_insert_request
ffffffff8117f3b0 t cfq_put_request
ffffffff8117f440 t cfq_get_queue
ffffffff8117f9f0 t cfq_set_request
ffffffff8117fc90 t cfq_close_cooperator
ffffffff8117fe20 t cfq_dispatch_requests
ffffffff811808a0 t cfq_completed_request
ffffffff81180f20 t cfq_merged_request
ffffffff81180fe0 T compat_blkdev_ioctl
ffffffff81182230 T argv_free
ffffffff81182270 T argv_split
ffffffff81182390 T module_bug_finalize
ffffffff81182460 T module_bug_cleanup
ffffffff811824a0 T find_bug
ffffffff81182570 T report_bug
ffffffff81182690 T memparse
ffffffff81182720 T get_option
ffffffff811827c0 T get_options
ffffffff81182860 T cpumask_next_and
ffffffff811828a0 T __next_cpu
ffffffff811828d0 T __first_cpu
ffffffff811828f0 T cpumask_any_but
ffffffff81182930 T _atomic_dec_and_lock
ffffffff811829a0 T decompress_method
ffffffff81182a10 t cmp_ex
ffffffff81182a30 T sort_extable
ffffffff81182a50 T trim_init_extable
ffffffff81182b20 T search_extable
ffffffff81182b60 T idr_for_each
ffffffff81182c40 T idr_get_next
ffffffff81182d10 T idr_init
ffffffff81182d30 T ida_init
ffffffff81182d60 t get_from_free_list
ffffffff81182dc0 t free_bitmap
ffffffff81182e30 T idr_replace
ffffffff81182ee0 T idr_destroy
ffffffff81182f20 T ida_destroy
ffffffff81182f40 t idr_layer_rcu_free
ffffffff81182f60 t idr_get_empty_slot
ffffffff81183270 T ida_get_new_above
ffffffff81183550 T ida_get_new
ffffffff81183560 t idr_get_new_above_int
ffffffff811835e0 T idr_get_new
ffffffff81183610 T idr_get_new_above
ffffffff81183640 T idr_pre_get
ffffffff811836d0 T idr_remove_all
ffffffff811837c0 T idr_remove
ffffffff81183980 T ida_remove
ffffffff81183a80 T idr_find
ffffffff81183b20 T ida_pre_get
ffffffff81183ba0 T ida_simple_get
ffffffff81183ca0 T ida_simple_remove
ffffffff81183d00 T int_sqrt
ffffffff81183d50 T ioremap_page_range
ffffffff81184060 t kobj_attr_show
ffffffff81184080 t kobj_attr_store
ffffffff811840a0 t kset_release
ffffffff811840b0 t dynamic_kobj_release
ffffffff811840c0 T kobject_get
ffffffff81184100 T kobject_put
ffffffff81184160 t kobj_kset_leave
ffffffff811841c0 T kset_unregister
ffffffff811841e0 T kobject_del
ffffffff81184210 t kobject_release
ffffffff811842d0 t kobject_add_internal
ffffffff811844f0 T kobject_init
ffffffff811845a0 T kobject_get_path
ffffffff81184650 T kobject_rename
ffffffff81184790 T kset_register
ffffffff81184800 T kobject_set_name_vargs
ffffffff81184870 T kobject_init_and_add
ffffffff81184910 T kobject_add
ffffffff811849e0 T kobject_set_name
ffffffff81184a30 T kset_create_and_add
ffffffff81184ad0 T kobject_move
ffffffff81184c20 T kobject_create
ffffffff81184c60 T kobject_create_and_add
ffffffff81184ce0 T kset_init
ffffffff81184d20 T kset_find_obj
ffffffff81184da0 T kobj_ns_type_register
ffffffff81184e10 T kobj_ns_type_registered
ffffffff81184e60 T kobj_child_ns_ops
ffffffff81184e80 T kobj_ns_ops
ffffffff81184ea0 T kobj_ns_grab_current
ffffffff81184f00 T kobj_ns_netlink
ffffffff81184f70 T kobj_ns_initial
ffffffff81184fd0 T kobj_ns_drop
ffffffff81185030 t uevent_net_exit
ffffffff811850d0 t uevent_net_init
ffffffff81185180 T add_uevent_var
ffffffff81185290 t kobj_bcast_filter
ffffffff811852f0 T kobject_uevent_env
ffffffff81185770 T kobject_uevent
ffffffff81185780 T kobject_action_type
ffffffff81185840 T plist_add
ffffffff81185940 T plist_del
ffffffff811859b0 T heap_init
ffffffff81185a10 T heap_free
ffffffff81185a20 T heap_insert
ffffffff81185b60 t get_index.isra.1
ffffffff81185ba0 t iter_walk_down
ffffffff81185c00 t prio_tree_left
ffffffff81185c70 t prio_tree_right
ffffffff81185d10 T prio_tree_replace
ffffffff81185d70 T prio_tree_remove
ffffffff81185e60 T prio_tree_insert
ffffffff81186070 T prio_tree_next
ffffffff811862d0 t prop_norm_single
ffffffff811863d0 t prop_norm_percpu
ffffffff81186550 T prop_descriptor_init
ffffffff811865e0 T prop_change_shift
ffffffff81186700 T prop_local_init_percpu
ffffffff81186730 T prop_local_destroy_percpu
ffffffff81186740 T __prop_inc_percpu
ffffffff811867b0 T __prop_inc_percpu_max
ffffffff81186890 T prop_fraction_percpu
ffffffff81186920 T prop_local_init_single
ffffffff81186940 T prop_local_destroy_single
ffffffff81186950 T __prop_inc_single
ffffffff811869b0 T prop_fraction_single
ffffffff81186a40 t radix_tree_lookup_element
ffffffff81186ad0 T radix_tree_lookup_slot
ffffffff81186ae0 T radix_tree_lookup
ffffffff81186af0 T radix_tree_next_hole
ffffffff81186b30 T radix_tree_prev_hole
ffffffff81186b70 T radix_tree_tagged
ffffffff81186b80 t radix_tree_callback
ffffffff81186bf0 t radix_tree_node_rcu_free
ffffffff81186c30 t radix_tree_node_ctor
ffffffff81186cb0 T radix_tree_tag_clear
ffffffff81186d90 T radix_tree_range_tag_if_tagged
ffffffff81186f60 T radix_tree_next_chunk
ffffffff81187160 T radix_tree_gang_lookup_tag_slot
ffffffff81187220 T radix_tree_gang_lookup_tag
ffffffff811872f0 T radix_tree_gang_lookup_slot
ffffffff811873b0 T radix_tree_gang_lookup
ffffffff81187470 T radix_tree_tag_set
ffffffff81187510 T radix_tree_delete
ffffffff81187750 T radix_tree_preload
ffffffff811877e0 T radix_tree_tag_get
ffffffff811878a0 t radix_tree_node_alloc
ffffffff81187900 T radix_tree_insert
ffffffff81187b50 T radix_tree_locate_item
ffffffff81187c70 T ___ratelimit
ffffffff81187d90 t __rb_rotate_left
ffffffff81187e10 t __rb_rotate_right
ffffffff81187e90 T rb_insert_color
ffffffff81187fd0 T rb_erase
ffffffff811882c0 t rb_augment_path
ffffffff81188330 T rb_augment_insert
ffffffff81188360 T rb_augment_erase_end
ffffffff81188380 T rb_first
ffffffff811883a0 T rb_last
ffffffff811883c0 T rb_replace_node
ffffffff81188430 T rb_next
ffffffff81188480 T rb_augment_erase_begin
ffffffff811884e0 T rb_prev
ffffffff81188530 T reciprocal_value
ffffffff81188550 T __init_rwsem
ffffffff81188570 t __rwsem_do_wake
ffffffff81188720 T rwsem_downgrade_wake
ffffffff81188790 T rwsem_wake
ffffffff811887f0 T show_mem
ffffffff811889b0 T strcasecmp
ffffffff81188a00 T strncasecmp
ffffffff81188a60 T strcpy
ffffffff81188a80 T strncpy
ffffffff81188ab0 T strcat
ffffffff81188af0 T strcmp
ffffffff81188b20 T strncmp
ffffffff81188b80 T strchr
ffffffff81188bc0 T strrchr
ffffffff81188c00 T strnchr
ffffffff81188c40 T skip_spaces
ffffffff81188c70 T strlen
ffffffff81188c90 T strnlen
ffffffff81188cd0 T strspn
ffffffff81188d30 T strcspn
ffffffff81188d80 T strpbrk
ffffffff81188de0 T strsep
ffffffff81188e50 T sysfs_streq
ffffffff81188ec0 T strtobool
ffffffff81188f00 T memcmp
ffffffff81188f50 T memscan
ffffffff81188f80 T strstr
ffffffff81188ff0 T strnstr
ffffffff81189050 T memchr
ffffffff81189080 T memchr_inv
ffffffff81189190 T strlcpy
ffffffff811891e0 T strnicmp
ffffffff81189250 T strncat
ffffffff811892a0 T strim
ffffffff81189310 T strlcat
ffffffff81189390 T timerqueue_iterate_next
ffffffff811893b0 T timerqueue_del
ffffffff81189440 T timerqueue_add
ffffffff811894f0 t skip_atoi
ffffffff81189530 t put_dec_trunc
ffffffff81189630 t put_dec_full
ffffffff811896f0 t put_dec
ffffffff81189750 t ip4_string
ffffffff81189830 t ip6_string
ffffffff811898c0 t format_decode
ffffffff81189c70 t ip6_compressed_string
ffffffff81189e80 T simple_strtoull
ffffffff81189ed0 T simple_strtoul
ffffffff81189ee0 t number.isra.2
ffffffff8118a1d0 t string.isra.4
ffffffff8118a280 t mac_address_string.isra.5
ffffffff8118a320 t ip4_addr_string.isra.6
ffffffff8118a390 t uuid_string.isra.7
ffffffff8118a4a0 t symbol_string.isra.8
ffffffff8118a590 t resource_string.isra.9
ffffffff8118a8b0 t ip6_addr_string.isra.10
ffffffff8118a950 t pointer.isra.11
ffffffff8118ac10 T vsnprintf
ffffffff8118b200 T sprintf
ffffffff8118b250 T vsprintf
ffffffff8118b260 T snprintf
ffffffff8118b2a0 T vscnprintf
ffffffff8118b2c0 T scnprintf
ffffffff8118b300 T simple_strtoll
ffffffff8118b330 T simple_strtol
ffffffff8118b360 T vsscanf
ffffffff8118bbb0 T sscanf
ffffffff8118bc00 T num_to_str
ffffffff8118bc60 T clear_page_c
ffffffff8118bc70 T clear_page_c_e
ffffffff8118bc80 T clear_page
ffffffff8118bcc0 t copy_page_c
ffffffff8118bcd0 T copy_page
ffffffff8118bdb0 T _copy_to_user
ffffffff8118bde0 T _copy_from_user
ffffffff8118be10 T copy_user_generic_unrolled
ffffffff8118bec0 T copy_user_generic_string
ffffffff8118bf00 T copy_user_enhanced_fast_string
ffffffff8118bf10 T __copy_user_nocache
ffffffff8118bfd0 T csum_partial
ffffffff8118c160 T ip_compute_csum
ffffffff8118c180 t delay_loop
ffffffff8118c1b0 T __delay
ffffffff8118c1c0 T __const_udelay
ffffffff8118c1f0 T __udelay
ffffffff8118c220 T __ndelay
ffffffff8118c250 t delay_tsc
ffffffff8118c2c0 T use_tsc_delay
ffffffff8118c2d0 T __get_user_1
ffffffff8118c2f0 T __get_user_2
ffffffff8118c320 T __get_user_4
ffffffff8118c350 T __get_user_8
ffffffff8118c373 t bad_get_user
ffffffff8118c380 T insn_init
ffffffff8118c440 T insn_get_prefixes
ffffffff8118c690 T insn_get_opcode
ffffffff8118c810 T insn_get_modrm
ffffffff8118c950 T insn_rip_relative
ffffffff8118c990 T insn_get_sib
ffffffff8118ca10 T insn_get_displacement
ffffffff8118cb10 T insn_get_immediate
ffffffff8118ced0 T insn_get_length
ffffffff8118cf10 T __memcpy
ffffffff8118cf10 T memcpy
ffffffff8118d020 T memmove
ffffffff8118d1c0 T __memset
ffffffff8118d1c0 T memset
ffffffff8118d270 T __put_user_1
ffffffff8118d290 T __put_user_2
ffffffff8118d2c0 T __put_user_4
ffffffff8118d2f0 T __put_user_8
ffffffff8118d313 t bad_put_user
ffffffff8118d320 T __write_lock_failed
ffffffff8118d340 T __read_lock_failed
ffffffff8118d350 T call_rwsem_down_read_failed
ffffffff8118d380 T call_rwsem_down_write_failed
ffffffff8118d3a0 T call_rwsem_wake
ffffffff8118d3d0 T call_rwsem_downgrade_wake
ffffffff8118d400 T strncpy_from_user
ffffffff8118d520 T copy_from_user_nmi
ffffffff8118d610 T __clear_user
ffffffff8118d650 T __strnlen_user
ffffffff8118d6a0 T strlen_user
ffffffff8118d6f0 T copy_in_user
ffffffff8118d740 T strnlen_user
ffffffff8118d770 T clear_user
ffffffff8118d7a0 T copy_user_handle_tail
ffffffff8118d820 T inat_get_opcode_attribute
ffffffff8118d830 T inat_get_last_prefix_id
ffffffff8118d850 T inat_get_escape_attribute
ffffffff8118d8a0 T inat_get_group_attribute
ffffffff8118d910 T inat_get_avx_attribute
ffffffff8118d970 T bcd2bin
ffffffff8118d990 T bin2bcd
ffffffff8118d9b0 T iter_div_u64_rem
ffffffff8118d9d0 t u32_swap
ffffffff8118d9e0 t generic_swap
ffffffff8118da10 T sort
ffffffff8118dc40 T match_strlcpy
ffffffff8118dca0 T match_strdup
ffffffff8118dd00 t match_number
ffffffff8118ddc0 T match_hex
ffffffff8118ddd0 T match_octal
ffffffff8118dde0 T match_int
ffffffff8118ddf0 T match_token
ffffffff8118dff0 T half_md4_transform
ffffffff8118e2c0 T debug_locks_off
ffffffff8118e2f0 T prandom32
ffffffff8118e340 T random32
ffffffff8118e360 T srandom32
ffffffff8118e3d0 W bust_spinlocks
ffffffff8118e410 T hex_dump_to_buffer
ffffffff8118e750 T print_hex_dump
ffffffff8118e8a0 T print_hex_dump_bytes
ffffffff8118e8e0 T hex_to_bin
ffffffff8118e920 T hex2bin
ffffffff8118e990 T kvasprintf
ffffffff8118ea20 T kasprintf
ffffffff8118ea70 T __bitmap_empty
ffffffff8118eae0 T __bitmap_full
ffffffff8118eb50 T __bitmap_equal
ffffffff8118ebd0 T __bitmap_complement
ffffffff8118ec40 T __bitmap_and
ffffffff8118ec80 T __bitmap_or
ffffffff8118ecb0 T __bitmap_xor
ffffffff8118ece0 T __bitmap_andnot
ffffffff8118ed20 T __bitmap_intersects
ffffffff8118edb0 T __bitmap_subset
ffffffff8118ee50 T bitmap_set
ffffffff8118ef10 T bitmap_clear
ffffffff8118efd0 t __reg_op
ffffffff8118f0c0 T bitmap_find_free_region
ffffffff8118f160 T bitmap_release_region
ffffffff8118f170 T bitmap_allocate_region
ffffffff8118f1d0 T bitmap_copy_le
ffffffff8118f200 T __bitmap_weight
ffffffff8118f270 t __bitmap_parselist
ffffffff8118f410 T bitmap_fold
ffffffff8118f480 T bitmap_onto
ffffffff8118f500 T __bitmap_shift_left
ffffffff8118f620 T __bitmap_shift_right
ffffffff8118f790 T bitmap_parselist_user
ffffffff8118f7d0 T bitmap_parselist
ffffffff8118f830 T bitmap_scnprintf
ffffffff8118f8f0 T __bitmap_parse
ffffffff8118fae0 T bitmap_parse_user
ffffffff8118fb20 T bitmap_find_next_zero_area
ffffffff8118fba0 T bitmap_scnlistprintf
ffffffff8118fca0 t bitmap_pos_to_ord
ffffffff8118fd10 T bitmap_ord_to_pos
ffffffff8118fd90 T bitmap_bitremap
ffffffff8118fe30 T bitmap_remap
ffffffff8118ff10 T __sg_free_table
ffffffff8118ff90 T sg_free_table
ffffffff8118ffb0 T sg_next
ffffffff8118ffd0 T sg_last
ffffffff81190010 T sg_miter_stop
ffffffff811900a0 T sg_miter_start
ffffffff81190170 T sg_init_table
ffffffff811901c0 T __sg_alloc_table
ffffffff81190300 t sg_kfree
ffffffff81190320 t sg_kmalloc
ffffffff81190350 T sg_init_one
ffffffff811903d0 T sg_miter_next
ffffffff81190510 t sg_copy_buffer
ffffffff811905e0 T sg_copy_to_buffer
ffffffff811905f0 T sg_copy_from_buffer
ffffffff81190600 T sg_alloc_table
ffffffff81190650 T string_get_size
ffffffff811907e0 T gcd
ffffffff81190810 T lcm
ffffffff81190850 T list_sort
ffffffff81190b50 t __uuid_gen_common
ffffffff81190b90 T uuid_be_gen
ffffffff81190bb0 T uuid_le_gen
ffffffff81190bd0 T flex_array_get
ffffffff81190c30 T flex_array_get_ptr
ffffffff81190c50 t __fa_get_part
ffffffff81190d40 T flex_array_alloc
ffffffff81190ee0 T flex_array_shrink
ffffffff81190f70 T flex_array_free_parts
ffffffff81190fb0 T flex_array_free
ffffffff81190fd0 T flex_array_prealloc
ffffffff811910a0 T flex_array_clear
ffffffff81191120 T flex_array_put
ffffffff811911d0 T bsearch
ffffffff81191260 T find_last_bit
ffffffff811912c0 T find_next_bit
ffffffff81191380 T find_next_zero_bit
ffffffff81191460 T find_first_bit
ffffffff811914e0 T find_first_zero_bit
ffffffff81191560 T llist_add_batch
ffffffff81191590 T llist_del_first
ffffffff811915e0 T _parse_integer_fixup_radix
ffffffff81191650 T _parse_integer
ffffffff81191710 t _kstrtoull
ffffffff81191790 T kstrtoull
ffffffff811917a0 T kstrtoul_from_user
ffffffff81191810 T kstrtoull_from_user
ffffffff81191880 T kstrtou8
ffffffff811918c0 T kstrtou8_from_user
ffffffff81191930 T kstrtou16
ffffffff81191970 T kstrtou16_from_user
ffffffff811919e0 T kstrtouint
ffffffff81191a20 T kstrtouint_from_user
ffffffff81191a90 T _kstrtoul
ffffffff81191ac0 T kstrtoll
ffffffff81191b30 T kstrtol_from_user
ffffffff81191ba0 T kstrtoll_from_user
ffffffff81191c10 T kstrtos8
ffffffff81191c50 T kstrtos8_from_user
ffffffff81191cc0 T kstrtos16
ffffffff81191d00 T kstrtos16_from_user
ffffffff81191d70 T kstrtoint
ffffffff81191db0 T kstrtoint_from_user
ffffffff81191e20 T _kstrtol
ffffffff81191e50 T ioport_map
ffffffff81191e70 T ioport_unmap
ffffffff81191e80 t bad_io_access
ffffffff81191ec0 T pci_iounmap
ffffffff81191ef0 T iowrite32
ffffffff81191f30 T iowrite16
ffffffff81191f70 T iowrite8
ffffffff81191fb0 T ioread32be
ffffffff81192000 T ioread32
ffffffff81192040 T ioread16
ffffffff81192090 T ioread8
ffffffff811920e0 T iowrite32_rep
ffffffff81192140 T iowrite16_rep
ffffffff811921a0 T iowrite8_rep
ffffffff81192200 T ioread32_rep
ffffffff81192260 T ioread16_rep
ffffffff811922c0 T ioread8_rep
ffffffff81192320 T ioread16be
ffffffff81192370 T iowrite32be
ffffffff811923b0 T iowrite16be
ffffffff811923f0 T pci_iomap
ffffffff811924c0 W __iowrite64_copy
ffffffff811924f0 t devm_ioremap_match
ffffffff81192500 t devm_ioport_map_match
ffffffff81192510 t pcim_iomap_release
ffffffff81192550 t devm_ioport_map_release
ffffffff81192560 T devm_ioport_map
ffffffff81192600 T devm_iounmap
ffffffff81192640 T devm_ioremap_release
ffffffff81192650 T devm_ioremap
ffffffff811926f0 T devm_ioremap_nocache
ffffffff81192790 T devm_request_and_ioremap
ffffffff811928d0 T pcim_iomap_table
ffffffff81192930 T pcim_iounmap
ffffffff81192990 T pcim_iounmap_regions
ffffffff81192a00 T pcim_iomap
ffffffff81192a80 T pcim_iomap_regions
ffffffff81192b90 T pcim_iomap_regions_request_all
ffffffff81192c10 T devm_ioport_unmap
ffffffff81192c80 T __sw_hweight32
ffffffff81192cd0 T __sw_hweight16
ffffffff81192d10 T __sw_hweight8
ffffffff81192d50 T __sw_hweight64
ffffffff81192dd0 T bitrev16
ffffffff81192df0 T bitrev32
ffffffff81192e40 T crc16
ffffffff81192e70 T crc32_le
ffffffff81192f80 T __crc32c_le
ffffffff81193090 T crc32_be
ffffffff811931b0 t bitmap_clear_ll
ffffffff811932d0 T gen_pool_virt_to_phys
ffffffff81193340 T gen_pool_for_each_chunk
ffffffff811933a0 T gen_pool_avail
ffffffff811933e0 T gen_pool_size
ffffffff81193420 T gen_pool_free
ffffffff811934c0 T gen_pool_alloc
ffffffff811936c0 T gen_pool_create
ffffffff81193700 T gen_pool_add_virt
ffffffff811937c0 T gen_pool_destroy
ffffffff81193870 T inflate_fast
ffffffff81193e90 t zlib_updatewindow
ffffffff81193f80 T zlib_inflate_workspacesize
ffffffff81193f90 T zlib_inflateReset
ffffffff81194040 T zlib_inflateInit2
ffffffff811940a0 T zlib_inflate
ffffffff81195730 T zlib_inflateEnd
ffffffff81195750 T zlib_inflateIncomp
ffffffff811959c0 T zlib_inflate_blob
ffffffff81195ac0 T zlib_inflate_table
ffffffff81195ff0 T lzo1x_decompress_safe
ffffffff811965c0 t fill_temp
ffffffff81196650 t dec_vli.isra.2
ffffffff811966d0 t index_update.isra.4
ffffffff81196700 T xz_dec_reset
ffffffff81196910 T xz_dec_init
ffffffff811969b0 T xz_dec_run
ffffffff81197270 T xz_dec_end
ffffffff811972b0 t lzma_len
ffffffff811974f0 t dict_repeat.part.2
ffffffff81197570 t lzma_main
ffffffff81198010 T xz_dec_lzma2_run
ffffffff81198890 T xz_dec_lzma2_create
ffffffff81198920 T xz_dec_lzma2_reset
ffffffff811989c0 T xz_dec_lzma2_end
ffffffff811989e0 t bcj_flush
ffffffff81198a60 t bcj_apply
ffffffff81198fd0 T xz_dec_bcj_run
ffffffff811991c0 T xz_dec_bcj_create
ffffffff811991e0 T xz_dec_bcj_reset
ffffffff81199220 T percpu_counter_destroy
ffffffff81199290 T __percpu_counter_init
ffffffff81199310 T __percpu_counter_add
ffffffff81199390 T __percpu_counter_sum
ffffffff81199400 T percpu_counter_compare
ffffffff81199480 T percpu_counter_set
ffffffff811994f0 T swiotlb_nr_tbl
ffffffff81199500 t is_swiotlb_buffer
ffffffff81199540 T swiotlb_dma_mapping_error
ffffffff81199560 T swiotlb_dma_supported
ffffffff81199580 t swiotlb_full
ffffffff81199620 T swiotlb_bounce
ffffffff81199660 T swiotlb_tbl_sync_single
ffffffff811996d0 T swiotlb_tbl_unmap_single
ffffffff811997c0 T swiotlb_free_coherent
ffffffff81199870 T swiotlb_tbl_map_single
ffffffff81199ab0 t map_single
ffffffff81199b10 T swiotlb_map_page
ffffffff81199c40 T swiotlb_alloc_coherent
ffffffff81199d70 t swiotlb_sync_single
ffffffff81199e10 T swiotlb_sync_sg_for_device
ffffffff81199e70 T swiotlb_sync_sg_for_cpu
ffffffff81199ec0 T swiotlb_sync_single_for_device
ffffffff81199ed0 T swiotlb_sync_single_for_cpu
ffffffff81199ee0 t unmap_single
ffffffff81199f70 T swiotlb_unmap_page
ffffffff81199f80 T swiotlb_unmap_sg_attrs
ffffffff81199fd0 T swiotlb_unmap_sg
ffffffff81199fe0 T swiotlb_map_sg_attrs
ffffffff8119a110 T swiotlb_map_sg
ffffffff8119a120 T swiotlb_print_info
ffffffff8119a1b0 T swiotlb_late_init_with_default_size
ffffffff8119a4c0 T iommu_is_span_boundary
ffffffff8119a4f0 T iommu_area_alloc
ffffffff8119a570 t collect_syscall
ffffffff8119a670 T task_current_syscall
ffffffff8119a760 T nla_policy_len
ffffffff8119a7b0 T nla_find
ffffffff8119a830 T nla_append
ffffffff8119a880 T nla_memcpy
ffffffff8119a8a0 T __nla_reserve_nohdr
ffffffff8119a8e0 T __nla_put_nohdr
ffffffff8119a920 T nla_put_nohdr
ffffffff8119a960 T nla_reserve_nohdr
ffffffff8119a990 T __nla_reserve
ffffffff8119aa00 T __nla_put
ffffffff8119aa40 T nla_put
ffffffff8119aa80 T nla_reserve
ffffffff8119aab0 T nla_strlcpy
ffffffff8119ab30 T nla_strcmp
ffffffff8119ab90 t validate_nla
ffffffff8119ad80 T nla_parse
ffffffff8119ae50 T nla_validate
ffffffff8119aee0 T nla_memcmp
ffffffff8119af00 t irq_cpu_rmap_release
ffffffff8119af10 T free_irq_cpu_rmap
ffffffff8119af60 T alloc_cpu_rmap
ffffffff8119b000 t cpu_rmap_copy_neigh
ffffffff8119b090 T cpu_rmap_update
ffffffff8119b220 t irq_cpu_rmap_notify
ffffffff8119b250 T cpu_rmap_add
ffffffff8119b280 T irq_cpu_rmap_add
ffffffff8119b310 T dql_reset
ffffffff8119b360 T dql_init
ffffffff8119b3c0 T dql_completed
ffffffff8119b520 t __rdmsr_on_cpu
ffffffff8119b570 t __wrmsr_on_cpu
ffffffff8119b5b0 t __rdmsr_safe_on_cpu
ffffffff8119b5e0 t __wrmsr_safe_on_cpu
ffffffff8119b600 t __rdmsr_safe_regs_on_cpu
ffffffff8119b620 t __wrmsr_safe_regs_on_cpu
ffffffff8119b640 T wrmsr_safe_regs_on_cpu
ffffffff8119b670 T rdmsr_safe_regs_on_cpu
ffffffff8119b6a0 T wrmsr_safe_on_cpu
ffffffff8119b6f0 T rdmsr_safe_on_cpu
ffffffff8119b760 T wrmsr_on_cpu
ffffffff8119b7b0 T rdmsr_on_cpu
ffffffff8119b810 t __rwmsr_on_cpus
ffffffff8119b880 T wrmsr_on_cpus
ffffffff8119b890 T rdmsr_on_cpus
ffffffff8119b8a0 t __wbinvd
ffffffff8119b8b0 T wbinvd_on_all_cpus
ffffffff8119b8d0 T wbinvd_on_cpu
ffffffff8119b8f0 T msrs_free
ffffffff8119b900 T msrs_alloc
ffffffff8119b940 T native_rdmsr_safe_regs
ffffffff8119b990 T native_wrmsr_safe_regs
ffffffff8119b9e0 T __iowrite32_copy
ffffffff8119b9f0 T pci_read_vpd
ffffffff8119ba20 T pci_write_vpd
ffffffff8119ba50 T pci_vpd_truncate
ffffffff8119ba90 T pci_cfg_access_unlock
ffffffff8119bb10 T pci_cfg_access_trylock
ffffffff8119bb70 T pci_bus_set_ops
ffffffff8119bbd0 T pci_bus_write_config_byte
ffffffff8119bc50 T pci_bus_read_config_dword
ffffffff8119bcf0 T pci_bus_read_config_word
ffffffff8119bd90 T pci_bus_read_config_byte
ffffffff8119be20 t pci_wait_cfg
ffffffff8119bee0 T pci_cfg_access_lock
ffffffff8119bf30 t pci_vpd_pci22_release
ffffffff8119bf40 T pci_bus_write_config_dword
ffffffff8119bfe0 T pci_bus_write_config_word
ffffffff8119c070 T pci_user_read_config_byte
ffffffff8119c120 T pci_user_read_config_word
ffffffff8119c1e0 t pci_vpd_pci22_wait
ffffffff8119c2d0 T pci_user_read_config_dword
ffffffff8119c390 T pci_user_write_config_byte
ffffffff8119c420 T pci_user_write_config_word
ffffffff8119c4d0 t pci_vpd_pci22_read
ffffffff8119c630 T pci_user_write_config_dword
ffffffff8119c6e0 t pci_vpd_pci22_write
ffffffff8119c840 T pci_vpd_pci22_init
ffffffff8119c8e0 T pci_walk_bus
ffffffff8119c9a0 T pci_enable_bridges
ffffffff8119ca20 T pci_free_resource_list
ffffffff8119ca80 T pci_bus_resource_n
ffffffff8119cae0 T pci_bus_alloc_resource
ffffffff8119cbe0 T pci_bus_add_device
ffffffff8119cc30 T pci_add_resource_offset
ffffffff8119ccb0 T pci_add_resource
ffffffff8119ccc0 T pci_bus_add_resource
ffffffff8119cd50 T pci_bus_remove_resources
ffffffff8119cd80 T pci_bus_add_child
ffffffff8119cdb0 T pci_bus_add_devices
ffffffff8119cec0 t find_anything
ffffffff8119ced0 T pcibios_resource_to_bus
ffffffff8119cf80 T pcibios_bus_to_resource
ffffffff8119d050 T pcie_update_link_speed
ffffffff8119d070 t next_trad_fn
ffffffff8119d080 t no_next_fn
ffffffff8119d090 t release_pcibus_dev
ffffffff8119d0c0 t pci_release_bus_bridge_dev
ffffffff8119d0d0 t pci_alloc_bus
ffffffff8119d140 T alloc_pci_dev
ffffffff8119d170 t next_ari_fn
ffffffff8119d1e0 t pci_release_dev
ffffffff8119d220 t pci_read_irq
ffffffff8119d280 T no_pci_devices
ffffffff8119d2b0 t pcie_find_smpss
ffffffff8119d320 t pcie_bus_configure_set
ffffffff8119d4c0 T pcie_bus_configure_settings
ffffffff8119d560 T pci_bus_read_dev_vendor_id
ffffffff8119d640 T __pci_read_base
ffffffff8119da60 t pci_read_bases
ffffffff8119db00 T set_pcie_port_type
ffffffff8119dba0 T set_pcie_hotplug_bridge
ffffffff8119dc10 T pci_cfg_space_size_ext
ffffffff8119dc50 T pci_cfg_space_size
ffffffff8119dcd0 T pci_setup_device
ffffffff8119e110 T pci_device_add
ffffffff8119e210 T pci_scan_slot
ffffffff8119e320 T pci_create_root_bus
ffffffff8119e700 T pci_stop_bus_device
ffffffff8119e7a0 T pci_remove_bus
ffffffff8119e810 t __pci_remove_behind_bridge.isra.2
ffffffff8119e860 T __pci_remove_bus_device
ffffffff8119e920 T pci_stop_and_remove_bus_device
ffffffff8119e940 T pci_stop_and_remove_behind_bridge
ffffffff8119e9a0 T pci_bus_max_busnr
ffffffff8119e9e0 T pci_pme_capable
ffffffff8119ea00 T pci_target_state
ffffffff8119eae0 T pci_dev_run_wake
ffffffff8119eb40 T pci_set_dma_max_seg_size
ffffffff8119eb60 T pci_set_dma_seg_boundary
ffffffff8119eb80 T pci_select_bars
ffffffff8119ebb0 W pci_fixup_cardbus
ffffffff8119ebc0 T pcie_get_readrq
ffffffff8119ec00 t __pci_bus_find_cap_start
ffffffff8119ec40 T pci_enable_ido
ffffffff8119ecd0 t __pci_set_master
ffffffff8119ed40 T pci_clear_master
ffffffff8119ed50 T pci_clear_mwi
ffffffff8119eda0 T pci_intx_mask_supported
ffffffff8119ee70 t __pci_find_next_cap_ttl
ffffffff8119ef20 T pci_find_capability
ffffffff8119ef80 T pcix_set_mmrbc
ffffffff8119f0b0 T pcix_get_mmrbc
ffffffff8119f100 T pcix_get_max_mmrbc
ffffffff8119f160 T pci_msi_off
ffffffff8119f210 T pci_find_next_capability
ffffffff8119f240 t __pci_find_next_ht_cap
ffffffff8119f2f0 T pci_find_ht_capability
ffffffff8119f350 T pci_find_next_ht_capability
ffffffff8119f360 T pci_bus_find_capability
ffffffff8119f3e0 t find_pci_dr
ffffffff8119f410 T pci_intx
ffffffff8119f490 T pcim_pin_device
ffffffff8119f4f0 T pci_set_cacheline_size
ffffffff8119f5b0 t __pci_request_region
ffffffff8119f700 T pci_request_region_exclusive
ffffffff8119f710 T pci_request_region
ffffffff8119f720 T pci_release_region
ffffffff8119f7f0 T pci_release_selected_regions
ffffffff8119f840 T pci_release_regions
ffffffff8119f850 T pci_enable_obff
ffffffff8119f990 t pci_add_cap_save_buffer
ffffffff8119fa20 T pci_load_saved_state
ffffffff8119fb10 T pci_load_and_free_saved_state
ffffffff8119fb40 T pci_store_saved_state
ffffffff8119fc30 t pci_restore_config_space_range
ffffffff8119fcf0 T pci_choose_state
ffffffff8119fd90 T pci_find_parent_resource
ffffffff8119fe20 T pci_find_ext_capability
ffffffff8119fee0 T pci_disable_ido
ffffffff8119ff70 T pci_disable_obff
ffffffff8119ffe0 T pci_ltr_supported
ffffffff811a0020 T pci_enable_ltr
ffffffff811a00b0 T pci_set_ltr
ffffffff811a01c0 T pci_disable_ltr
ffffffff811a0240 t pci_dev_d3_sleep.isra.22
ffffffff811a0250 t pci_dev_reset
ffffffff811a0630 T __pci_reset_function_locked
ffffffff811a0640 T __pci_reset_function
ffffffff811a0650 T pci_save_state
ffffffff811a0930 t pci_check_and_set_intx_mask.isra.24
ffffffff811a0a10 T pci_check_and_unmask_intx
ffffffff811a0a20 T pci_check_and_mask_intx
ffffffff811a0a40 T pci_set_mwi
ffffffff811a0ab0 T pci_try_set_mwi
ffffffff811a0ac0 T pci_pme_active
ffffffff811a0c40 T __pci_enable_wake
ffffffff811a0d80 T pci_wake_from_d3
ffffffff811a0db0 T pci_restore_state
ffffffff811a1100 T pci_reset_function
ffffffff811a1160 T pci_ioremap_bar
ffffffff811a11d0 T pci_bus_find_ext_capability
ffffffff811a1290 T pci_set_platform_pm
ffffffff811a12d0 T pci_update_current_state
ffffffff811a1310 t pci_platform_power_transition
ffffffff811a13c0 T __pci_complete_power_transition
ffffffff811a13e0 T pci_set_power_state
ffffffff811a16b0 T pci_back_from_sleep
ffffffff811a16d0 T pci_prepare_to_sleep
ffffffff811a1760 t do_pci_enable_device
ffffffff811a17d0 t __pci_enable_device_flags
ffffffff811a18a0 T pci_enable_device
ffffffff811a18b0 T pcim_enable_device
ffffffff811a1980 T pci_enable_device_mem
ffffffff811a1990 T pci_enable_device_io
ffffffff811a19a0 T pci_reenable_device
ffffffff811a19d0 t do_pci_disable_device
ffffffff811a1a30 T pci_disable_device
ffffffff811a1a70 t pcim_release
ffffffff811a1b30 T pci_disable_enabled_device
ffffffff811a1b50 W pcibios_set_pcie_reset_state
ffffffff811a1b60 T pci_set_pcie_reset_state
ffffffff811a1b70 T pci_check_pme_status
ffffffff811a1c00 t pci_pme_wakeup
ffffffff811a1c50 t pci_pme_list_scan
ffffffff811a1d10 T pci_pme_wakeup_bus
ffffffff811a1d30 T pci_finish_runtime_suspend
ffffffff811a1db0 T pci_pm_init
ffffffff811a1ff0 T platform_pci_wakeup_init
ffffffff811a2040 T pci_allocate_cap_save_buffers
ffffffff811a20a0 T pci_free_cap_save_buffers
ffffffff811a20d0 T pci_enable_ari
ffffffff811a21b0 T pci_request_acs
ffffffff811a21c0 T pci_enable_acs
ffffffff811a2260 T pci_swizzle_interrupt_pin
ffffffff811a22a0 T pci_get_interrupt_pin
ffffffff811a22e0 T pci_common_swizzle
ffffffff811a2320 T __pci_request_selected_regions
ffffffff811a23b0 T pci_request_selected_regions_exclusive
ffffffff811a23c0 T pci_request_regions_exclusive
ffffffff811a23d0 T pci_request_selected_regions
ffffffff811a23e0 T pci_request_regions
ffffffff811a23f0 W pcibios_set_master
ffffffff811a24a0 T pci_set_master
ffffffff811a24c0 T pci_probe_reset_function
ffffffff811a24d0 T pcie_get_mps
ffffffff811a2510 T pcie_set_readrq
ffffffff811a25f0 T pcie_set_mps
ffffffff811a26c0 T pci_resource_bar
ffffffff811a2760 T pci_set_vga_state
ffffffff811a2880 T pci_specified_resource_alignment
ffffffff811a2a40 T pci_is_reassigndev
ffffffff811a2a60 T pci_reassigndev_resource_alignment
ffffffff811a2bf0 T pci_set_resource_alignment_param
ffffffff811a2c60 t pci_resource_alignment_store
ffffffff811a2c70 T pci_get_resource_alignment_param
ffffffff811a2cc0 t pci_resource_alignment_show
ffffffff811a2ce0 T pci_match_id
ffffffff811a2d70 t pci_device_shutdown
ffffffff811a2e00 t pci_match_device
ffffffff811a2ed0 t pci_bus_match
ffffffff811a2f00 t local_pci_probe
ffffffff811a2fd0 t pci_restore_standard_config
ffffffff811a3010 t pci_pm_runtime_resume
ffffffff811a30c0 t pci_legacy_suspend
ffffffff811a31a0 t pci_legacy_suspend_late
ffffffff811a3290 t pci_pm_runtime_suspend
ffffffff811a33a0 t pci_pm_reenable_device
ffffffff811a33d0 t pci_legacy_resume
ffffffff811a3430 t pci_pm_complete
ffffffff811a3470 t pci_pm_prepare
ffffffff811a3510 T pci_dev_put
ffffffff811a3530 t pci_device_remove
ffffffff811a3640 T pci_dev_get
ffffffff811a3660 t pci_device_probe
ffffffff811a3790 t store_remove_id
ffffffff811a38e0 T pci_unregister_driver
ffffffff811a3990 T pci_add_dynid
ffffffff811a3aa0 t store_new_id
ffffffff811a3ba0 t pci_pm_runtime_idle
ffffffff811a3be0 T pci_dev_driver
ffffffff811a3c20 t pci_has_legacy_pm_support.isra.6
ffffffff811a3c80 t pci_pm_thaw_noirq
ffffffff811a3d30 t pci_pm_freeze_noirq
ffffffff811a3df0 t pci_pm_poweroff_noirq
ffffffff811a3ec0 t pci_pm_restore
ffffffff811a3fc0 t pci_pm_thaw
ffffffff811a4040 t pci_pm_poweroff
ffffffff811a4100 t pci_pm_freeze
ffffffff811a41a0 t pci_pm_restore_noirq
ffffffff811a4260 T __pci_register_driver
ffffffff811a4320 t pci_do_find_bus
ffffffff811a4370 t match_pci_dev_by_id
ffffffff811a43e0 t pci_get_dev_by_id
ffffffff811a4480 T pci_dev_present
ffffffff811a44f0 T pci_get_class
ffffffff811a4580 T pci_get_subsys
ffffffff811a4630 T pci_get_device
ffffffff811a4640 T pci_get_domain_bus_and_slot
ffffffff811a4690 T pci_get_slot
ffffffff811a4720 T pci_find_next_bus
ffffffff811a4790 T pci_find_bus
ffffffff811a47e0 T pci_find_upstream_pcie_bridge
ffffffff811a4850 t pci_bus_show_cpumaskaffinity
ffffffff811a4860 t pci_bus_show_cpulistaffinity
ffffffff811a4870 t pci_write_rom
ffffffff811a48a0 t boot_vga_show
ffffffff811a48d0 t msi_bus_show
ffffffff811a4910 t broken_parity_status_show
ffffffff811a4940 t is_enabled_show
ffffffff811a4970 t consistent_dma_mask_bits_show
ffffffff811a49a0 t dma_mask_bits_show
ffffffff811a49d0 t numa_node_show
ffffffff811a4a00 t modalias_show
ffffffff811a4a50 t irq_show
ffffffff811a4a80 t class_show
ffffffff811a4ab0 t subsystem_device_show
ffffffff811a4ae0 t subsystem_vendor_show
ffffffff811a4b10 t device_show
ffffffff811a4b40 t vendor_show
ffffffff811a4b70 t resource_show
ffffffff811a4bf0 t local_cpulist_show
ffffffff811a4c40 t local_cpus_show
ffffffff811a4c90 t broken_parity_status_store
ffffffff811a4cf0 t dev_bus_rescan_store
ffffffff811a4d90 t dev_rescan_store
ffffffff811a4e10 t remove_store
ffffffff811a4e90 t remove_callback
ffffffff811a4ec0 t msi_bus_store
ffffffff811a4f70 t is_enabled_store
ffffffff811a4ff0 t bus_rescan_store
ffffffff811a5060 t pci_remove_resource_files
ffffffff811a50d0 t pci_write_config
ffffffff811a52b0 t pci_read_config
ffffffff811a5500 t reset_store
ffffffff811a5550 t pci_create_attr
ffffffff811a56e0 t write_vpd_attr
ffffffff811a5710 t read_vpd_attr
ffffffff811a5740 t pci_read_rom
ffffffff811a5810 t pci_bus_show_cpuaffinity.isra.8
ffffffff811a5860 t pci_resource_io.isra.9
ffffffff811a5960 t pci_read_resource_io
ffffffff811a5980 t pci_write_resource_io
ffffffff811a59a0 T pci_mmap_fits
ffffffff811a5a50 t pci_mmap_resource.isra.10
ffffffff811a5c00 t pci_mmap_resource_uc
ffffffff811a5c20 t pci_mmap_resource_wc
ffffffff811a5c40 W pcibios_add_platform_entries
ffffffff811a5c50 T pci_create_sysfs_dev_files
ffffffff811a6050 T pci_remove_sysfs_dev_files
ffffffff811a61b0 T pci_disable_rom
ffffffff811a61f0 T pci_enable_rom
ffffffff811a6260 T pci_unmap_rom
ffffffff811a6290 T pci_get_rom_size
ffffffff811a6330 T pci_map_rom
ffffffff811a64a0 T pci_cleanup_rom
ffffffff811a64e0 t _pci_assign_resource
ffffffff811a6650 T pci_claim_resource
ffffffff811a66e0 T pci_update_resource
ffffffff811a6890 T pci_disable_bridge_window
ffffffff811a6910 T pci_reassign_resource
ffffffff811a69e0 T pci_assign_resource
ffffffff811a6c10 T pci_enable_resources
ffffffff811a6d30 t pci_note_irq_problem
ffffffff811a6dd0 T pci_lost_interrupt
ffffffff811a6e60 T pci_vpd_find_tag
ffffffff811a6ed0 T pci_vpd_find_info_keyword
ffffffff811a6f30 t proc_bus_pci_ioctl
ffffffff811a6fc0 t pci_seq_next
ffffffff811a6fe0 t pci_seq_start
ffffffff811a7010 t proc_bus_pci_dev_open
ffffffff811a7020 t show_device
ffffffff811a7170 t pci_seq_stop
ffffffff811a7190 t proc_bus_pci_release
ffffffff811a71b0 t proc_bus_pci_open
ffffffff811a71f0 t proc_bus_pci_mmap
ffffffff811a7280 t proc_bus_pci_write
ffffffff811a74b0 t proc_bus_pci_read
ffffffff811a7700 t proc_bus_pci_lseek
ffffffff811a77c0 T pci_proc_attach_device
ffffffff811a78b0 T pci_proc_detach_device
ffffffff811a78e0 T pci_proc_detach_bus
ffffffff811a7910 t pci_slot_attr_show
ffffffff811a7930 t pci_slot_attr_store
ffffffff811a7950 T pci_destroy_slot
ffffffff811a7980 T pci_renumber_slot
ffffffff811a7a00 t pci_slot_release
ffffffff811a7a90 t cur_speed_read_file
ffffffff811a7ad0 t max_speed_read_file
ffffffff811a7b10 t make_slot_name
ffffffff811a7be0 T pci_create_slot
ffffffff811a7e00 t pci_slot_init
ffffffff811a7e50 t address_read_file
ffffffff811a7ec0 t quirk_intel_pcie_pm
ffffffff811a7ed0 t quirk_nvidia_ck804_pcie_aer_ext_cap
ffffffff811a7f40 t quirk_sis_96x_smbus
ffffffff811a7fb0 t quirk_sis_503
ffffffff811a8050 t quirk_mediagx_master
ffffffff811a80c0 t quirk_via_vt8237_bypass_apic_deassert
ffffffff811a8120 t quirk_via_ioapic
ffffffff811a8180 t quirk_passive_release
ffffffff811a8200 t quirk_vialatency
ffffffff811a82c0 t quirk_cardbus_legacy
ffffffff811a82e0 t quirk_amd_ordering
ffffffff811a8380 t quirk_via_vlink
ffffffff811a8440 t reset_intel_generic_dev
ffffffff811a84b0 t reset_intel_82599_sfp_virtfn
ffffffff811a8520 T pci_fixup_device
ffffffff811a86a0 t quirk_via_bridge
ffffffff811a8730 t quirk_jmicron_ata
ffffffff811a8870 t quirk_disable_amd_8111_boot_interrupt
ffffffff811a8910 t quirk_disable_amd_813x_boot_interrupt
ffffffff811a8990 t quirk_disable_broadcom_boot_interrupt
ffffffff811a8a40 t quirk_disable_intel_boot_interrupt
ffffffff811a8ab0 t quirk_reroute_to_boot_interrupts_intel
ffffffff811a8b00 t quirk_disable_pxb
ffffffff811a8b70 t asus_hides_ac97_lpc
ffffffff811a8c20 t asus_hides_smbus_lpc_ich6_resume_early
ffffffff811a8c50 t asus_hides_smbus_lpc_ich6_resume
ffffffff811a8ca0 t asus_hides_smbus_lpc
ffffffff811a8d40 t asus_hides_smbus_lpc_ich6_suspend
ffffffff811a8db0 t asus_hides_smbus_lpc_ich6
ffffffff811a8dd0 T pci_dev_specific_reset
ffffffff811a8e30 T pcie_aspm_enabled
ffffffff811a8e40 T pcie_aspm_support_enabled
ffffffff811a8e50 t pcie_aspm_get_policy
ffffffff811a8eb0 t pcie_set_clkpm
ffffffff811a8f80 t pcie_config_aspm_dev
ffffffff811a8ff0 t pcie_config_aspm_link
ffffffff811a90f0 t pcie_config_aspm_path
ffffffff811a9140 t pcie_aspm_set_policy
ffffffff811a9280 t __pci_disable_link_state
ffffffff811a93d0 T pci_disable_link_state
ffffffff811a93e0 T pci_disable_link_state_locked
ffffffff811a93f0 t pcie_get_aspm_reg
ffffffff811a9480 t pcie_aspm_check_latency.isra.6.part.7
ffffffff811a9550 t pcie_update_aspm_capable
ffffffff811a9630 T pcie_aspm_init_link_state
ffffffff811a9e30 T pcie_aspm_exit_link_state
ffffffff811a9f80 T pcie_aspm_pm_state_change
ffffffff811aa000 T pcie_aspm_powersave_config_link
ffffffff811aa0a0 T pcie_clear_aspm
ffffffff811aa0e0 T pcie_no_aspm
ffffffff811aa100 t pcie_port_shutdown_service
ffffffff811aa110 T pcie_port_service_unregister
ffffffff811aa120 T pcie_port_service_register
ffffffff811aa170 t pcie_port_remove_service
ffffffff811aa1e0 t pcie_port_probe_service
ffffffff811aa260 t cleanup_service_irqs
ffffffff811aa2a0 t release_pcie_device
ffffffff811aa2b0 t suspend_iter
ffffffff811aa2f0 t resume_iter
ffffffff811aa330 t remove_iter
ffffffff811aa360 T pcie_port_device_register
ffffffff811aa8c0 T pcie_port_device_suspend
ffffffff811aa8d0 T pcie_port_device_resume
ffffffff811aa8e0 T pcie_port_device_remove
ffffffff811aa910 t pcie_portdrv_err_resume
ffffffff811aa930 t pcie_portdrv_mmio_enabled
ffffffff811aa960 t pcie_portdrv_error_detected
ffffffff811aa9a0 t pcie_portdrv_slot_reset
ffffffff811aaa20 t pcie_portdrv_remove
ffffffff811aaa40 t error_detected_iter
ffffffff811aaab0 t mmio_enabled_iter
ffffffff811aab30 t slot_reset_iter
ffffffff811aabb0 t resume_iter
ffffffff811aac00 T pcie_clear_root_pme_status
ffffffff811aac50 t pcie_port_resume_noirq
ffffffff811aac80 t pcie_port_bus_match
ffffffff811aacd0 T pcie_port_bus_register
ffffffff811aacf0 T pcie_port_bus_unregister
ffffffff811aad00 T pcie_port_acpi_setup
ffffffff811aad90 T cper_severity_to_aer
ffffffff811aadb0 T aer_print_error
ffffffff811ab110 T aer_print_port_info
ffffffff811ab150 T cper_print_aer
ffffffff811ab3f0 t find_device_iter
ffffffff811ab560 t get_device_error_info
ffffffff811ab6f0 T pci_cleanup_aer_uncorrect_error_status
ffffffff811ab760 t broadcast_error_message
ffffffff811ab850 T aer_recover_queue
ffffffff811ab930 T pci_disable_pcie_error_reporting
ffffffff811ab9a0 T pci_enable_pcie_error_reporting
ffffffff811aba40 t report_mmio_enabled
ffffffff811aba90 t report_slot_reset
ffffffff811abae0 t report_resume
ffffffff811abb20 t find_aer_service_iter
ffffffff811abb60 t report_error_detected
ffffffff811abc00 t find_source_device
ffffffff811abc90 t handle_error_source.isra.13.part.14
ffffffff811abcf0 T aer_do_secondary_bus_reset
ffffffff811abd70 t do_recovery
ffffffff811abfc0 t aer_recover_work_func
ffffffff811ac060 T aer_isr
ffffffff811ac400 T aer_init
ffffffff811ac450 t aer_error_detected
ffffffff811ac460 t aer_error_resume
ffffffff811ac520 t aer_root_reset
ffffffff811ac600 t set_device_error_reporting
ffffffff811ac640 t set_downstream_devices_error_reporting
ffffffff811ac680 T aer_irq
ffffffff811ac7d0 t aer_remove
ffffffff811ac930 T pci_no_aer
ffffffff811ac940 T pci_aer_available
ffffffff811ac970 t aer_hest_parse
ffffffff811aca50 t aer_hest_parse_aff
ffffffff811aca80 T pcie_aer_get_firmware_first
ffffffff811acb10 T aer_acpi_firmware_first
ffffffff811acb40 t pcie_pme_set_native
ffffffff811acb70 t pcie_pme_walk_bus
ffffffff811acc00 t pcie_pme_from_pci_bridge.part.9
ffffffff811acc80 T pcie_pme_interrupt_enable
ffffffff811acd00 t pcie_pme_resume
ffffffff811acd60 t pcie_pme_suspend
ffffffff811acdd0 t pcie_pme_remove
ffffffff811acdf0 t pcie_pme_probe
ffffffff811acf70 t pcie_pme_irq
ffffffff811ad010 t pcie_pme_work_fn
ffffffff811ad290 T pci_uevent
ffffffff811ad390 t msi_irq_attr_show
ffffffff811ad3b0 T pci_msi_enabled
ffffffff811ad3c0 t show_msi_mode
ffffffff811ad400 t free_msi_irqs
ffffffff811ad530 t pci_intx_for_msi
ffffffff811ad550 t alloc_msi_entry
ffffffff811ad580 t populate_msi_sysfs
ffffffff811ad690 t __msi_mask_irq
ffffffff811ad6c0 t msi_set_mask_bit
ffffffff811ad720 T msi_kobj_release
ffffffff811ad730 t msi_set_enable
ffffffff811ad7b0 T pci_restore_msi_state
ffffffff811ad990 t pci_msi_check_device
ffffffff811ada10 T pci_enable_msi_block
ffffffff811adcd0 t msix_set_enable.constprop.14
ffffffff811add40 T arch_msi_check_device
ffffffff811add50 T default_teardown_msi_irqs
ffffffff811addd0 T mask_msi_irq
ffffffff811adde0 T unmask_msi_irq
ffffffff811addf0 T __read_msi_msg
ffffffff811adef0 T read_msi_msg
ffffffff811adf20 T __get_cached_msi_msg
ffffffff811adf40 T get_cached_msi_msg
ffffffff811adf70 T __write_msi_msg
ffffffff811ae0b0 T write_msi_msg
ffffffff811ae0e0 T default_restore_msi_irqs
ffffffff811ae160 T pci_msi_shutdown
ffffffff811ae260 T pci_disable_msi
ffffffff811ae2b0 T pci_msix_table_size
ffffffff811ae300 T pci_enable_msix
ffffffff811ae710 T pci_msix_shutdown
ffffffff811ae790 T pci_disable_msix
ffffffff811ae7e0 T msi_remove_pci_irq_vectors
ffffffff811ae810 T pci_no_msi
ffffffff811ae820 T pci_msi_init_pci_dev
ffffffff811ae860 T ht_destroy_irq
ffffffff811ae8b0 T __ht_create_irq
ffffffff811aea00 T ht_create_irq
ffffffff811aea10 T write_ht_irq_msg
ffffffff811aeb10 T fetch_ht_irq_msg
ffffffff811aeb30 T mask_ht_irq
ffffffff811aeb60 T unmask_ht_irq
ffffffff811aeb90 T pci_max_pasids
ffffffff811aebe0 T pci_pasid_features
ffffffff811aec20 T pci_pri_status
ffffffff811aec90 T pci_pri_stopped
ffffffff811aed10 T pci_pri_enabled
ffffffff811aed50 T pci_disable_pasid
ffffffff811aed80 T pci_enable_pasid
ffffffff811aee50 T pci_reset_pri
ffffffff811aeec0 T pci_disable_pri
ffffffff811aef30 T pci_enable_pri
ffffffff811af010 T pci_disable_ats
ffffffff811af110 t ats_alloc_one
ffffffff811af1c0 T pci_enable_ats
ffffffff811af310 T pci_ats_queue_depth
ffffffff811af380 T pci_restore_ats_state
ffffffff811af3f0 T pci_num_vf
ffffffff811af410 T pci_sriov_migration
ffffffff811af480 t virtfn_remove_bus
ffffffff811af4d0 t virtfn_remove
ffffffff811af660 T pci_disable_sriov
ffffffff811af780 t virtfn_add
ffffffff811afb60 T pci_enable_sriov
ffffffff811b0060 t sriov_migration_task
ffffffff811b0170 T pci_iov_init
ffffffff811b0500 T pci_iov_release
ffffffff811b0550 T pci_iov_resource_bar
ffffffff811b0580 T pci_sriov_resource_alignment
ffffffff811b05c0 T pci_restore_iov_state
ffffffff811b06a0 T pci_iov_bus_range
ffffffff811b0720 t get_res_add_size
ffffffff811b07a0 t free_list
ffffffff811b0800 T pci_setup_cardbus
ffffffff811b09d0 t pci_bus_dump_resources
ffffffff811b0a70 t find_free_bus_resource
ffffffff811b0ae0 t add_to_list
ffffffff811b0b90 t assign_requested_resources_sorted
ffffffff811b0c60 t __assign_resources_sorted
ffffffff811b0fd0 t __pci_setup_bridge
ffffffff811b1300 T pci_setup_bridge
ffffffff811b1310 T pci_cardbus_resource_alignment
ffffffff811b1340 t __dev_sort_resources
ffffffff811b1580 t pbus_size_mem
ffffffff811b1a30 T pci_assign_unassigned_bridge_resources
ffffffff811b1b70 t acpi_pci_can_wakeup
ffffffff811b1b90 t acpi_pci_choose_state
ffffffff811b1bc0 t acpi_pci_set_power_state
ffffffff811b1c70 t acpi_pci_power_manageable
ffffffff811b1c90 t acpi_pci_find_root_bridge
ffffffff811b1cf0 t acpi_pci_find_device
ffffffff811b1d30 t remove_pm_notifier
ffffffff811b1db0 t pci_acpi_wake_bus
ffffffff811b1dd0 t add_pm_notifier
ffffffff811b1e60 t acpi_pci_run_wake
ffffffff811b1ed0 t acpi_pci_sleep_wake
ffffffff811b1f50 t pci_acpi_wake_dev
ffffffff811b1ff0 T pci_acpi_add_bus_pm_notifier
ffffffff811b2000 T pci_acpi_remove_bus_pm_notifier
ffffffff811b2010 T pci_acpi_add_pm_notifier
ffffffff811b2020 T pci_acpi_remove_pm_notifier
ffffffff811b2030 t find_smbios_instance_string.isra.2
ffffffff811b20f0 t smbiosinstance_show
ffffffff811b2110 t smbioslabel_show
ffffffff811b2130 t smbios_instance_string_exist
ffffffff811b2160 t dsm_get_label.constprop.3
ffffffff811b22d0 t acpilabel_show
ffffffff811b2320 t acpiindex_show
ffffffff811b2370 t device_has_dsm.isra.1
ffffffff811b23b0 t acpi_index_string_exist
ffffffff811b23d0 T pci_create_firmware_label_files
ffffffff811b2410 T pci_remove_firmware_label_files
ffffffff811b2450 T fb_notifier_call_chain
ffffffff811b2470 T fb_unregister_client
ffffffff811b2480 T fb_register_client
ffffffff811b2490 T fb_get_color_depth
ffffffff811b24d0 T fb_pad_aligned_buffer
ffffffff811b2510 T fb_pad_unaligned_buffer
ffffffff811b25f0 T fb_get_buffer_offset
ffffffff811b26a0 t fb_seq_next
ffffffff811b26c0 T fb_pan_display
ffffffff811b2830 t fb_seq_start
ffffffff811b2860 t fb_seq_stop
ffffffff811b2870 T lock_fb_info
ffffffff811b28d0 t fb_mmap
ffffffff811b2ac0 T fb_set_suspend
ffffffff811b2b10 T fb_blank
ffffffff811b2b80 t fb_write
ffffffff811b2da0 t fb_read
ffffffff811b2fd0 t proc_fb_open
ffffffff811b2fe0 t fb_seq_show
ffffffff811b3020 T fb_get_options
ffffffff811b3110 T unlink_framebuffer
ffffffff811b3180 T fb_set_var
ffffffff811b3610 t do_fb_ioctl
ffffffff811b3b90 t fb_compat_ioctl
ffffffff811b3f00 t fb_ioctl
ffffffff811b3f40 t put_fb_info
ffffffff811b3f70 t fb_release
ffffffff811b3fd0 t do_unregister_framebuffer
ffffffff811b40c0 T unregister_framebuffer
ffffffff811b4100 t do_remove_conflicting_framebuffers
ffffffff811b4260 T register_framebuffer
ffffffff811b4520 T remove_conflicting_framebuffers
ffffffff811b4570 t get_fb_info.part.14
ffffffff811b45a0 t fb_open
ffffffff811b46e0 t fb_set_logocmap.isra.15
ffffffff811b4800 T fb_show_logo
ffffffff811b5010 T fb_prepare_logo
ffffffff811b51b0 T fb_new_modelist
ffffffff811b5300 t copy_string
ffffffff811b5380 t get_detailed_timing
ffffffff811b5580 t fb_timings_vfreq
ffffffff811b5620 T fb_validate_mode
ffffffff811b5760 T fb_firmware_edid
ffffffff811b5770 T fb_destroy_modedb
ffffffff811b5780 t fb_timings_dclk
ffffffff811b5850 T fb_get_mode
ffffffff811b5ce0 t calc_mode_timings
ffffffff811b5dd0 t get_std_timing
ffffffff811b5f40 t check_edid
ffffffff811b60c0 t fix_edid
ffffffff811b61e0 t edid_checksum
ffffffff811b6230 t edid_check_header
ffffffff811b6270 t fb_create_modedb
ffffffff811b6b50 T fb_edid_add_monspecs
ffffffff811b6e90 T fb_parse_edid
ffffffff811b70b0 T fb_edid_to_monspecs
ffffffff811b7810 T fb_invert_cmaps
ffffffff811b78b0 T fb_copy_cmap
ffffffff811b79c0 T fb_set_cmap
ffffffff811b7af0 T fb_dealloc_cmap
ffffffff811b7b50 T fb_default_cmap
ffffffff811b7b80 T fb_alloc_cmap_gfp
ffffffff811b7ca0 T fb_alloc_cmap
ffffffff811b7cb0 T fb_cmap_to_user
ffffffff811b7de0 T fb_set_user_cmap
ffffffff811b7f40 t show_blank
ffffffff811b7f50 t store_console
ffffffff811b7f60 t show_console
ffffffff811b7f70 t store_cursor
ffffffff811b7f80 t show_cursor
ffffffff811b7f90 t store_fbstate
ffffffff811b8020 t show_fbstate
ffffffff811b8050 t show_rotate
ffffffff811b8080 t show_stride
ffffffff811b80b0 t show_name
ffffffff811b80e0 t show_virtual
ffffffff811b8110 t show_pan
ffffffff811b8140 t mode_string
ffffffff811b81d0 t show_modes
ffffffff811b8230 t show_mode
ffffffff811b8260 t show_bpp
ffffffff811b8290 t activate
ffffffff811b82e0 t store_rotate
ffffffff811b8370 t store_virtual
ffffffff811b8460 t store_bpp
ffffffff811b84f0 t store_pan
ffffffff811b85e0 t store_modes
ffffffff811b86f0 t store_mode
ffffffff811b8850 t store_blank
ffffffff811b88d0 T framebuffer_release
ffffffff811b88f0 T framebuffer_alloc
ffffffff811b8970 T fb_init_device
ffffffff811b8a00 T fb_cleanup_device
ffffffff811b8a50 t fb_try_mode
ffffffff811b8ae0 T fb_videomode_to_var
ffffffff811b8b50 T fb_mode_is_equal
ffffffff811b8ba0 T fb_find_best_mode
ffffffff811b8c20 T fb_find_nearest_mode
ffffffff811b8cd0 T fb_find_best_display
ffffffff811b8dd0 T fb_destroy_modelist
ffffffff811b8e30 T fb_find_mode
ffffffff811b9500 T fb_var_to_videomode
ffffffff811b95c0 T fb_match_mode
ffffffff811b9620 T fb_add_videomode
ffffffff811b96d0 T fb_videomode_to_modelist
ffffffff811b9730 T fb_delete_videomode
ffffffff811b97c0 T fb_find_mode_cvt
ffffffff811b9ef0 t dummycon_startup
ffffffff811b9f00 t dummycon_dummy
ffffffff811b9f10 t dummycon_init
ffffffff811b9f40 T vgacon_text_force
ffffffff811b9f50 t vgacon_build_attr
ffffffff811b9ff0 t vgacon_invert_region
ffffffff811ba070 t vga_set_palette
ffffffff811ba1c0 t vgacon_set_palette
ffffffff811ba200 t vgacon_dummy
ffffffff811ba210 t vgacon_scrolldelta
ffffffff811ba340 t vgacon_deinit
ffffffff811ba3d0 t vgacon_init
ffffffff811ba4f0 t vgacon_startup
ffffffff811ba8e0 t vgacon_set_origin
ffffffff811ba960 t vgacon_save_screen
ffffffff811ba9d0 t vgacon_doresize.isra.5
ffffffff811bac30 t vgacon_switch
ffffffff811bad50 t vgacon_resize
ffffffff811badc0 t vgacon_set_cursor_size.isra.7
ffffffff811baf20 t vgacon_cursor
ffffffff811bb130 t vgacon_scroll
ffffffff811bb3b0 t vgacon_do_font_op.isra.9.constprop.17
ffffffff811bb8c0 t vgacon_font_set
ffffffff811bbb20 t vgacon_font_get
ffffffff811bbb70 t vgacon_blank
ffffffff811bc3b0 t fbcon_clear
ffffffff811bc5b0 t fbcon_clear_margins
ffffffff811bc640 t fbcon_bmove_rec
ffffffff811bc820 t fbcon_bmove
ffffffff811bc8f0 t fbcon_debug_leave
ffffffff811bc940 t fbcon_getxy
ffffffff811bca30 t fbcon_invert_region
ffffffff811bcae0 t fbcon_add_cursor_timer
ffffffff811bcbc0 t cursor_timer_handler
ffffffff811bcbf0 t var_to_display
ffffffff811bcc90 t fbcon_set_palette
ffffffff811bcdf0 t fbcon_debug_enter
ffffffff811bce50 t display_to_var
ffffffff811bcee0 t fbcon_get_font
ffffffff811bd0c0 t fbcon_set_disp
ffffffff811bd3c0 t fbcon_prepare_logo
ffffffff811bd7d0 t fbcon_takeover
ffffffff811bd880 t fbcon_new_modelist
ffffffff811bd950 t updatescrollmode.isra.14
ffffffff811bdb50 t fbcon_resize
ffffffff811bdd90 t fbcon_screen_pos
ffffffff811bde00 t fbcon_del_cursor_timer.isra.17
ffffffff811bde40 t fbcon_deinit
ffffffff811be050 t get_color.isra.18
ffffffff811be1b0 t fb_flashcursor
ffffffff811be300 t fbcon_putcs
ffffffff811be4a0 t fbcon_putc
ffffffff811be4d0 t fbcon_set_origin
ffffffff811be4f0 t fbcon_cursor
ffffffff811be6c0 t fbcon_scrolldelta
ffffffff811bed50 t fbcon_blank
ffffffff811bf040 t fbcon_do_set_font
ffffffff811bf410 t fbcon_copy_font
ffffffff811bf450 t fbcon_set_def_font
ffffffff811bf4e0 t fbcon_set_font
ffffffff811bf710 t set_blitting_type.isra.23
ffffffff811bf750 t fbcon_modechanged
ffffffff811bf960 t fbcon_switch
ffffffff811bfe90 t con2fb_acquire_newinfo
ffffffff811bff60 t fbcon_startup
ffffffff811c0230 t fbcon_init
ffffffff811c07f0 t fbcon_redraw_blit.isra.25
ffffffff811c09c0 t fbcon_redraw_move.isra.26
ffffffff811c0b00 t fbcon_redraw.isra.27
ffffffff811c0cd0 t fbcon_scroll
ffffffff811c1a20 t set_con2fb_map
ffffffff811c1ed0 t fbcon_event_notify
ffffffff811c2730 t store_cursor_blink
ffffffff811c27f0 t store_rotate_all
ffffffff811c2860 t store_rotate
ffffffff811c28d0 t show_cursor_blink
ffffffff811c2980 t show_rotate
ffffffff811c2a20 t bit_bmove
ffffffff811c2a80 t bit_clear
ffffffff811c2b70 t bit_clear_margins
ffffffff811c2cf0 T fbcon_set_bitops
ffffffff811c2d30 t bit_update_start
ffffffff811c2d80 t update_attr.isra.3
ffffffff811c2e10 t bit_cursor
ffffffff811c3390 t bit_putcs
ffffffff811c3850 T get_default_font
ffffffff811c38e0 T find_font
ffffffff811c3930 T soft_cursor
ffffffff811c3b70 T cfb_fillrect
ffffffff811c3eb0 t bitfill_aligned
ffffffff811c3fe0 t bitfill_unaligned
ffffffff811c4190 t bitfill_aligned_rev
ffffffff811c42f0 t bitfill_unaligned_rev
ffffffff811c44b0 T cfb_copyarea
ffffffff811c4f00 T cfb_imageblit
ffffffff811c53a0 t vesafb_pan_display
ffffffff811c53b0 t vesafb_destroy
ffffffff811c5400 t vesafb_setcolreg
ffffffff811c5580 T acpi_table_print_madt_entry
ffffffff811c57ec t acpi_map_lookup
ffffffff811c5834 T acpi_resources_are_enforced
ffffffff811c5841 T acpi_os_wait_events_complete
ffffffff811c585b t acpi_os_execute_deferred
ffffffff811c5884 t __acpi_os_execute
ffffffff811c593d T acpi_os_execute
ffffffff811c5944 T acpi_os_get_iomem
ffffffff811c598b t acpi_irq
ffffffff811c59b4 t acpi_osi_handler
ffffffff811c5a25 T acpi_os_write_port
ffffffff811c5a4b T acpi_os_read_port
ffffffff811c5a90 T acpi_check_resource_conflict
ffffffff811c5b08 T acpi_check_region
ffffffff811c5b49 t acpi_os_map_cleanup.part.11
ffffffff811c5b7d T acpi_os_unmap_generic_address
ffffffff811c5c39 T acpi_os_map_generic_address
ffffffff811c5c7e T acpi_os_vprintf
ffffffff811c5ca7 T acpi_os_printf
ffffffff811c5cef T acpi_os_predefined_override
ffffffff811c5d51 T acpi_os_table_override
ffffffff811c5d6b T acpi_os_physical_table_override
ffffffff811c5d71 T acpi_os_install_interrupt_handler
ffffffff811c5e2e T acpi_os_remove_interrupt_handler
ffffffff811c5e5a T acpi_os_sleep
ffffffff811c5e6a T acpi_os_stall
ffffffff811c5e96 T acpi_os_get_timer
ffffffff811c5ec4 T acpi_os_read_memory
ffffffff811c5f79 T acpi_os_write_memory
ffffffff811c6013 T acpi_os_read_pci_configuration
ffffffff811c6093 T acpi_os_write_pci_configuration
ffffffff811c60f5 T acpi_os_hotplug_execute
ffffffff811c6107 T acpi_os_create_semaphore
ffffffff811c6168 T acpi_os_delete_semaphore
ffffffff811c618a T acpi_os_wait_semaphore
ffffffff811c61d7 T acpi_os_signal_semaphore
ffffffff811c61fc T acpi_os_signal
ffffffff811c6215 T acpi_os_delete_lock
ffffffff811c621a T acpi_os_acquire_lock
ffffffff811c621f T acpi_os_release_lock
ffffffff811c6224 T acpi_os_create_cache
ffffffff811c6245 T acpi_os_purge_cache
ffffffff811c6250 T acpi_os_delete_cache
ffffffff811c625a T acpi_os_release_object
ffffffff811c6266 T acpi_os_terminate
ffffffff811c62d7 T acpi_os_prepare_sleep
ffffffff811c630e T acpi_os_set_prepare_sleep
ffffffff811c6318 T acpi_evaluate_integer
ffffffff811c6367 T acpi_evaluate_reference
ffffffff811c6476 T acpi_extract_package
ffffffff811c668c T acpi_reboot
ffffffff811c6740 T acpi_nvs_register
ffffffff811c6795 T acpi_nvs_for_each_region
ffffffff811c67d4 T acpi_enable_wakeup_devices
ffffffff811c6880 T acpi_disable_wakeup_devices
ffffffff811c6924 t acpi_power_off
ffffffff811c6953 t acpi_power_off_prepare
ffffffff811c6981 t set_param_wake_flag
ffffffff811c69d8 t tts_notify_reboot
ffffffff811c6a3e T acpi_suspend
ffffffff811c6a8f T acpi_pm_device_sleep_state
ffffffff811c6b27 T acpi_pm_device_run_wake
ffffffff811c6bad T acpi_pm_device_sleep_wake
ffffffff811c6c34 T acpi_bus_private_data_handler
ffffffff811c6c35 t set_copy_dsdt
ffffffff811c6c53 T unregister_acpi_bus_notifier
ffffffff811c6c62 T register_acpi_bus_notifier
ffffffff811c6c71 t acpi_print_osc_error
ffffffff811c6d20 T acpi_run_osc
ffffffff811c6fcb t __acpi_bus_get_power
ffffffff811c7056 t __acpi_bus_set_power
ffffffff811c71c4 T acpi_bus_get_private_data
ffffffff811c71f8 T acpi_bus_get_device
ffffffff811c722b T acpi_bus_can_wakeup
ffffffff811c7255 T acpi_bus_power_manageable
ffffffff811c727c T acpi_bus_update_power
ffffffff811c72d1 T acpi_bus_set_power
ffffffff811c7303 T acpi_bus_generate_proc_event4
ffffffff811c73c8 T acpi_bus_generate_proc_event
ffffffff811c73eb T acpi_bus_get_status_handle
ffffffff811c7414 T acpi_bus_get_status
ffffffff811c7445 t acpi_bus_check_device
ffffffff811c7482 t acpi_bus_notify
ffffffff811c74fb T acpi_bus_init_power
ffffffff811c7558 T acpi_bus_receive_event
ffffffff811c76a8 t acpi_glue_data_handler
ffffffff811c76a9 t acpi_platform_notify
ffffffff811c782c T acpi_get_physical_device
ffffffff811c7858 t acpi_platform_notify_remove
ffffffff811c7900 T acpi_get_child
ffffffff811c794c t do_acpi_find_child
ffffffff811c798d T register_acpi_bus_type
ffffffff811c79ff T unregister_acpi_bus_type
ffffffff811c7a60 t acpi_device_suspend
ffffffff811c7a81 t acpi_device_resume
ffffffff811c7aa2 t acpi_device_notify
ffffffff811c7ab5 t acpi_device_notify_fixed
ffffffff811c7acc t acpi_start_single_object
ffffffff811c7b11 T acpi_bus_data_handler
ffffffff811c7b12 T acpi_device_hid
ffffffff811c7b2b t acpi_device_remove
ffffffff811c7bc4 t acpi_device_probe
ffffffff811c7cd1 t create_modalias
ffffffff811c7d5a t acpi_device_uevent
ffffffff811c7dda t acpi_device_modalias_show
ffffffff811c7e08 t acpi_bus_extract_wakeup_device_power_package
ffffffff811c7f5b t acpi_device_release
ffffffff811c7fa8 T acpi_bus_get_ejd
ffffffff811c8023 T acpi_match_device_ids
ffffffff811c807b t acpi_bus_match
ffffffff811c8096 t acpi_add_id
ffffffff811c80f6 t acpi_bus_set_run_wake_flags
ffffffff811c81b7 t acpi_eject_store
ffffffff811c822a t acpi_device_hid_show
ffffffff811c8252 t acpi_device_path_show
ffffffff811c82b1 T acpi_bus_unregister_driver
ffffffff811c82bd T acpi_bus_register_driver
ffffffff811c82f8 t acpi_add_single_object
ffffffff811c8df8 t acpi_bus_check_add
ffffffff811c8f86 t acpi_bus_scan
ffffffff811c8ffc T acpi_bus_start
ffffffff811c9037 T acpi_bus_add
ffffffff811c9060 t acpi_device_unregister.isra.7
ffffffff811c9164 t acpi_bus_remove.part.8
ffffffff811c9191 T acpi_bus_trim
ffffffff811c9299 t acpi_bus_hot_remove_device
ffffffff811c9398 T acpi_get_cpuid
ffffffff811c9598 t start_transaction
ffffffff811c95bf T ec_get_handle
ffffffff811c95d1 t ec_skip_dsdt_scan
ffffffff811c95de t ec_validate_ecdt
ffffffff811c95eb T acpi_ec_remove_query_handler
ffffffff811c9661 t ec_flag_msi
ffffffff811c968a t acpi_ec_remove
ffffffff811c97bb t advance_transaction
ffffffff811c9879 t ec_transaction_done
ffffffff811c98b1 t acpi_ec_transaction_unlocked
ffffffff811c9a8d T acpi_ec_add_query_handler
ffffffff811c9b16 t acpi_ec_sync_query
ffffffff811c9bca t acpi_ec_gpe_query
ffffffff811c9bfa t acpi_ec_run
ffffffff811c9c33 t acpi_ec_transaction
ffffffff811c9e65 t acpi_ec_burst_enable
ffffffff811c9e9c T ec_burst_enable
ffffffff811c9eb0 t acpi_ec_read
ffffffff811c9f0a T ec_read
ffffffff811c9f44 t acpi_ec_write
ffffffff811c9f83 T ec_write
ffffffff811c9fa1 t acpi_ec_burst_disable
ffffffff811c9fdb t acpi_ec_space_handler
ffffffff811ca0ce T ec_burst_disable
ffffffff811ca0e5 T ec_transaction
ffffffff811ca135 t make_acpi_ec
ffffffff811ca19f t ec_parse_device
ffffffff811ca238 t acpi_ec_register_query_methods
ffffffff811ca2a9 t ec_parse_io_ports
ffffffff811ca2d7 t acpi_ec_gpe_handler
ffffffff811ca345 t acpi_ec_add
ffffffff811ca4c4 T acpi_ec_block_transactions
ffffffff811ca4f4 T acpi_ec_unblock_transactions
ffffffff811ca523 T acpi_ec_unblock_transactions_early
ffffffff811ca538 T acpi_pci_register_driver
ffffffff811ca592 T acpi_get_pci_rootbridge_handle
ffffffff811ca5c1 t acpi_pci_bridge_scan
ffffffff811ca613 T acpi_pci_find_root
ffffffff811ca634 t acpi_pci_root_start
ffffffff811ca649 t acpi_pci_root_remove
ffffffff811ca674 t acpi_pci_run_osc
ffffffff811ca6da t acpi_pci_query_osc
ffffffff811ca74c T acpi_pci_osc_control_set
ffffffff811ca865 T acpi_is_root_bridge
ffffffff811ca898 T acpi_get_pci_dev
ffffffff811ca9d2 t get_root_bridge_busnr_callback
ffffffff811caa18 T acpi_pci_unregister_driver
ffffffff811caa78 t acpi_pci_link_remove
ffffffff811caacd t acpi_pci_link_check_possible
ffffffff811cabe6 t acpi_pci_link_get_current
ffffffff811caca9 t acpi_pci_link_add
ffffffff811cae68 t acpi_pci_link_set
ffffffff811cb037 t irqrouter_resume
ffffffff811cb070 t acpi_pci_link_check_current
ffffffff811cb0de T acpi_pci_link_allocate_irq
ffffffff811cb2e7 T acpi_pci_link_free_irq
ffffffff811cb380 T acpi_penalize_isa_irq
ffffffff811cb3a8 t acpi_pci_irq_find_prt_entry
ffffffff811cb442 t acpi_pci_irq_lookup
ffffffff811cb582 T acpi_pci_irq_add_prt
ffffffff811cb82d T acpi_pci_irq_del_prt
ffffffff811cb8d5 T acpi_pci_irq_enable
ffffffff811cba3e W acpi_unregister_gsi
ffffffff811cba3f T acpi_pci_irq_disable
ffffffff811cba78 t acpi_pci_unbind
ffffffff811cbad0 t acpi_pci_bind
ffffffff811cbb62 T acpi_pci_bind_root
ffffffff811cbb7c t acpi_power_get_state
ffffffff811cbbf6 t acpi_power_remove
ffffffff811cbc16 t acpi_power_get_context
ffffffff811cbc80 t acpi_power_off
ffffffff811cbd0b t acpi_power_add
ffffffff811cbe68 T acpi_power_resource_unregister_device
ffffffff811cbf30 T acpi_power_resource_register_device
ffffffff811cc078 T acpi_device_sleep_wake
ffffffff811cc140 T acpi_disable_wakeup_device_power
ffffffff811cc1f4 T acpi_power_get_inferred_state
ffffffff811cc2cd t __acpi_power_on
ffffffff811cc371 t acpi_power_resume
ffffffff811cc3df t acpi_power_on
ffffffff811cc44e t acpi_power_on_list
ffffffff811cc495 T acpi_enable_wakeup_device_power
ffffffff811cc55b T acpi_power_on_resources
ffffffff811cc586 T acpi_power_transition
ffffffff811cc634 t acpi_system_poll_event
ffffffff811cc668 t acpi_system_close_event
ffffffff811cc698 t acpi_system_read_event
ffffffff811cc779 T acpi_bus_generate_netlink_event
ffffffff811cc8a2 T unregister_acpi_notifier
ffffffff811cc8b1 T register_acpi_notifier
ffffffff811cc8c0 T acpi_notifier_call_chain
ffffffff811cc928 t acpi_system_open_event
ffffffff811cc978 t param_get_acpica_version
ffffffff811cc98b t acpi_show_profile
ffffffff811cc9a8 t delete_gpe_attr_array
ffffffff811cca01 t acpi_table_attr_init
ffffffff811ccad4 t acpi_sysfs_table_handler
ffffffff811ccb62 t acpi_table_show
ffffffff811ccbea t acpi_gbl_event_handler
ffffffff811ccc3f t get_status
ffffffff811ccccd t counter_set
ffffffff811cced0 t counter_show
ffffffff811ccfde T acpi_irq_stats_init
ffffffff811cd20c T pxm_to_node
ffffffff811cd21e T node_to_pxm
ffffffff811cd230 T __acpi_map_pxm_to_node
ffffffff811cd265 T acpi_map_pxm_to_node
ffffffff811cd2cb T acpi_get_pxm
ffffffff811cd311 T acpi_get_node
ffffffff811cd32c T acpi_unlock_battery_dir
ffffffff811cd38a T acpi_unlock_ac_dir
ffffffff811cd3e8 T acpi_lock_battery_dir
ffffffff811cd455 T acpi_lock_ac_dir
ffffffff811cd4c4 t acpi_ds_execute_arguments
ffffffff811cd5f2 T acpi_ds_get_buffer_field_arguments
ffffffff811cd61a T acpi_ds_get_bank_field_arguments
ffffffff811cd664 T acpi_ds_get_buffer_arguments
ffffffff811cd6b1 T acpi_ds_get_package_arguments
ffffffff811cd6fe T acpi_ds_get_region_arguments
ffffffff811cd754 T acpi_ds_exec_begin_control_op
ffffffff811cd7e9 T acpi_ds_exec_end_control_op
ffffffff811cda04 t acpi_ds_get_field_names
ffffffff811cdc22 T acpi_ds_create_buffer_field
ffffffff811cdd68 T acpi_ds_create_field
ffffffff811cde12 T acpi_ds_init_field_objects
ffffffff811cdf3c T acpi_ds_create_bank_field
ffffffff811ce025 T acpi_ds_create_index_field
ffffffff811ce0f4 t acpi_ds_init_one_object
ffffffff811ce16b T acpi_ds_initialize_objects
ffffffff811ce21c T acpi_ds_method_error
ffffffff811ce27a T acpi_ds_begin_method_execution
ffffffff811ce428 T acpi_ds_restart_control_method
ffffffff811ce49c T acpi_ds_terminate_control_method
ffffffff811ce5b0 T acpi_ds_call_control_method
ffffffff811ce72c T acpi_ds_method_data_init
ffffffff811ce7b8 T acpi_ds_method_data_delete_all
ffffffff811ce825 T acpi_ds_method_data_get_node
ffffffff811ce8c7 T acpi_ds_method_data_init_args
ffffffff811ce935 T acpi_ds_method_data_get_value
ffffffff811cea35 T acpi_ds_store_object_to_local
ffffffff811ceb9c T acpi_ds_build_internal_buffer_obj
ffffffff811cecc3 T acpi_ds_init_object_from_op
ffffffff811cef05 t acpi_ds_build_internal_object
ffffffff811cf081 T acpi_ds_create_node
ffffffff811cf0f0 T acpi_ds_build_internal_package_obj
ffffffff811cf2ec t acpi_ds_init_buffer_field
ffffffff811cf554 T acpi_ds_initialize_region
ffffffff811cf565 T acpi_ds_eval_buffer_field_operands
ffffffff811cf649 T acpi_ds_eval_region_operands
ffffffff811cf6f0 T acpi_ds_eval_table_region_operands
ffffffff811cf7db T acpi_ds_eval_data_object_operands
ffffffff811cf8d3 T acpi_ds_eval_bank_field_operands
ffffffff811cf968 T acpi_ds_clear_implicit_return
ffffffff811cf993 T acpi_ds_do_implicit_return
ffffffff811cf9f4 T acpi_ds_is_result_used
ffffffff811cfafc T acpi_ds_delete_result_if_not_used
ffffffff811cfb5a T acpi_ds_resolve_operands
ffffffff811cfb8b T acpi_ds_clear_operands
ffffffff811cfbc5 T acpi_ds_create_operand
ffffffff811cfdd4 T acpi_ds_create_operands
ffffffff811cfe80 T acpi_ds_evaluate_name_path
ffffffff811cff74 T acpi_ds_get_predicate_value
ffffffff811d00eb T acpi_ds_exec_begin_op
ffffffff811d0205 T acpi_ds_exec_end_op
ffffffff811d05d8 T acpi_ds_load1_end_op
ffffffff811d075a T acpi_ds_load1_begin_op
ffffffff811d09a2 T acpi_ds_init_callbacks
ffffffff811d0a14 T acpi_ds_load2_begin_op
ffffffff811d0d22 T acpi_ds_load2_end_op
ffffffff811d106c T acpi_ds_scope_stack_clear
ffffffff811d108f T acpi_ds_scope_stack_push
ffffffff811d111a T acpi_ds_scope_stack_pop
ffffffff811d1148 T acpi_ds_result_pop
ffffffff811d123c T acpi_ds_result_push
ffffffff811d136a T acpi_ds_obj_stack_push
ffffffff811d13bf T acpi_ds_obj_stack_pop
ffffffff811d1416 T acpi_ds_obj_stack_pop_and_delete
ffffffff811d145c T acpi_ds_get_current_walk_state
ffffffff811d1468 T acpi_ds_push_walk_state
ffffffff811d1474 T acpi_ds_pop_walk_state
ffffffff811d1485 T acpi_ds_create_walk_state
ffffffff811d151b T acpi_ds_init_aml_walk
ffffffff811d161e T acpi_ds_delete_walk_state
ffffffff811d16e0 T acpi_ev_initialize_events
ffffffff811d1779 T acpi_ev_install_xrupt_handlers
ffffffff811d17d4 T acpi_ev_fixed_event_detect
ffffffff811d18c8 t acpi_ev_asynch_execute_gpe_method
ffffffff811d1a70 T acpi_ev_update_gpe_enable_mask
ffffffff811d1aa9 T acpi_ev_enable_gpe
ffffffff811d1ace T acpi_ev_add_gpe_reference
ffffffff811d1b04 T acpi_ev_remove_gpe_reference
ffffffff811d1b3e T acpi_ev_low_get_gpe_info
ffffffff811d1b62 T acpi_ev_get_gpe_event_info
ffffffff811d1ba5 T acpi_ev_finish_gpe
ffffffff811d1bc9 t acpi_ev_asynch_enable_gpe
ffffffff811d1bdb T acpi_ev_gpe_dispatch
ffffffff811d1d05 T acpi_ev_gpe_detect
ffffffff811d1df4 T acpi_ev_delete_gpe_block
ffffffff811d1ea7 T acpi_ev_create_gpe_block
ffffffff811d21b8 T acpi_ev_initialize_gpe_block
ffffffff811d2250 T acpi_ev_match_gpe_method
ffffffff811d233b T acpi_ev_gpe_initialize
ffffffff811d24bc T acpi_ev_update_gpes
ffffffff811d259c T acpi_ev_walk_gpe_list
ffffffff811d2623 T acpi_ev_valid_gpe_event
ffffffff811d2663 T acpi_ev_get_gpe_device
ffffffff811d268d T acpi_ev_get_gpe_xrupt_block
ffffffff811d2780 T acpi_ev_delete_gpe_xrupt
ffffffff811d2805 T acpi_ev_delete_gpe_handlers
ffffffff811d2868 t acpi_ev_global_lock_handler
ffffffff811d28cd T acpi_ev_init_global_lock_handler
ffffffff811d296a T acpi_ev_remove_global_lock_handler
ffffffff811d2982 T acpi_ev_acquire_global_lock
ffffffff811d2a5c T acpi_ev_release_global_lock
ffffffff811d2ad8 t acpi_ev_notify_dispatch
ffffffff811d2b47 T acpi_ev_is_notify_object
ffffffff811d2b67 T acpi_ev_queue_notify_request
ffffffff811d2c3d T acpi_ev_terminate
ffffffff811d2d10 T acpi_ev_execute_reg_method
ffffffff811d2e03 t acpi_ev_reg_run
ffffffff811d2e4c T acpi_ev_address_space_dispatch
ffffffff811d3048 T acpi_ev_detach_region
ffffffff811d317c T acpi_ev_attach_region
ffffffff811d31a2 t acpi_ev_install_handler
ffffffff811d323a T acpi_ev_install_space_handler
ffffffff811d347c T acpi_ev_install_region_handlers
ffffffff811d34e5 T acpi_ev_execute_reg_methods
ffffffff811d3618 T acpi_ev_initialize_op_regions
ffffffff811d3698 T acpi_ev_system_memory_region_setup
ffffffff811d3725 T acpi_ev_io_space_region_setup
ffffffff811d3734 T acpi_ev_pci_config_region_setup
ffffffff811d399f T acpi_ev_pci_bar_region_setup
ffffffff811d39a2 T acpi_ev_cmos_region_setup
ffffffff811d39a5 T acpi_ev_default_region_setup
ffffffff811d39b4 T acpi_ev_initialize_region
ffffffff811d3af4 t acpi_ev_sci_xrupt_handler
ffffffff811d3b17 T acpi_ev_gpe_xrupt_handler
ffffffff811d3b1c T acpi_ev_install_sci_handler
ffffffff811d3b36 T acpi_ev_remove_sci_handler
ffffffff811d3b4c T acpi_remove_notify_handler
ffffffff811d3d37 T acpi_release_global_lock
ffffffff811d3d58 T acpi_acquire_global_lock
ffffffff811d3dab T acpi_remove_gpe_handler
ffffffff811d3e7a T acpi_install_global_event_handler
ffffffff811d3ece T acpi_install_gpe_handler
ffffffff811d4028 T acpi_remove_fixed_event_handler
ffffffff811d409e T acpi_install_fixed_event_handler
ffffffff811d4152 T acpi_install_notify_handler
ffffffff811d43b8 T acpi_disable
ffffffff811d43f3 T acpi_get_event_status
ffffffff811d4471 T acpi_clear_event
ffffffff811d4493 T acpi_disable_event
ffffffff811d450d T acpi_enable_event
ffffffff811d458a T acpi_enable
ffffffff811d4644 T acpi_update_all_gpes
ffffffff811d4687 T acpi_get_gpe_status
ffffffff811d46f6 T acpi_clear_gpe
ffffffff811d474a T acpi_set_gpe_wake_mask
ffffffff811d4803 T acpi_disable_gpe
ffffffff811d4857 T acpi_enable_gpe
ffffffff811d48ab T acpi_get_gpe_device
ffffffff811d490d T acpi_remove_gpe_block
ffffffff811d4981 T acpi_install_gpe_block
ffffffff811d4a87 T acpi_enable_all_runtime_gpes
ffffffff811d4aad T acpi_disable_all_gpes
ffffffff811d4ad3 T acpi_setup_gpe_for_wake
ffffffff811d4bbc T acpi_remove_address_space_handler
ffffffff811d4c8f T acpi_install_address_space_handler
ffffffff811d4d40 t acpi_ex_region_read
ffffffff811d4d92 t acpi_ex_add_table
ffffffff811d4e3a T acpi_ex_unload_table
ffffffff811d4ed0 T acpi_ex_load_op
ffffffff811d5168 T acpi_ex_load_table_op
ffffffff811d5344 t acpi_ex_convert_to_ascii
ffffffff811d5432 T acpi_ex_convert_to_integer
ffffffff811d54e2 T acpi_ex_convert_to_buffer
ffffffff811d5574 T acpi_ex_convert_to_string
ffffffff811d56f5 T acpi_ex_convert_to_target_type
ffffffff811d57d0 T acpi_ex_create_alias
ffffffff811d583b T acpi_ex_create_event
ffffffff811d58a9 T acpi_ex_create_mutex
ffffffff811d592e T acpi_ex_create_region
ffffffff811d5a29 T acpi_ex_create_processor
ffffffff811d5ab0 T acpi_ex_create_power_resource
ffffffff811d5b28 T acpi_ex_create_method
ffffffff811d5bc4 T acpi_ex_do_debug_object
ffffffff811d5e9c T acpi_ex_read_data_from_field
ffffffff811d5ff1 T acpi_ex_write_data_to_field
ffffffff811d61b4 T acpi_ex_access_region
ffffffff811d643a T acpi_ex_insert_into_field
ffffffff811d663e t acpi_ex_field_datum_io
ffffffff811d67dc T acpi_ex_extract_from_field
ffffffff811d69db T acpi_ex_write_with_update_rule
ffffffff811d6a94 T acpi_ex_unlink_mutex
ffffffff811d6acf T acpi_ex_acquire_mutex_object
ffffffff811d6b2d T acpi_ex_acquire_mutex
ffffffff811d6c22 T acpi_ex_release_mutex_object
ffffffff811d6c7f T acpi_ex_release_mutex
ffffffff811d6dd8 T acpi_ex_release_all_mutexes
ffffffff811d6e40 t acpi_ex_allocate_name_string
ffffffff811d6ef8 t acpi_ex_name_segment
ffffffff811d6fda T acpi_ex_get_name_string
ffffffff811d71c8 T acpi_ex_opcode_0A_0T_1R
ffffffff811d7248 T acpi_ex_opcode_1A_0T_0R
ffffffff811d72e4 T acpi_ex_opcode_1A_1T_0R
ffffffff811d732b T acpi_ex_opcode_1A_1T_1R
ffffffff811d77e5 T acpi_ex_opcode_1A_0T_1R
ffffffff811d7c6c T acpi_ex_opcode_2A_0T_0R
ffffffff811d7cf8 T acpi_ex_opcode_2A_2T_1R
ffffffff811d7e17 T acpi_ex_opcode_2A_1T_1R
ffffffff811d8158 T acpi_ex_opcode_2A_0T_1R
ffffffff811d8294 T acpi_ex_opcode_3A_0T_0R
ffffffff811d8346 T acpi_ex_opcode_3A_1T_1R
ffffffff811d84e8 t acpi_ex_do_match
ffffffff811d857b T acpi_ex_opcode_6A_0T_1R
ffffffff811d872c T acpi_ex_prep_common_field_object
ffffffff811d87c5 T acpi_ex_prep_field_value
ffffffff811d8a2c T acpi_ex_get_object_reference
ffffffff811d8af4 T acpi_ex_concat_template
ffffffff811d8ba3 T acpi_ex_do_concatenate
ffffffff811d8d61 T acpi_ex_do_math_op
ffffffff811d8dd7 T acpi_ex_do_logical_numeric_op
ffffffff811d8e12 T acpi_ex_do_logical_op
ffffffff811d8f78 T acpi_ex_system_memory_space_handler
ffffffff811d9145 T acpi_ex_system_io_space_handler
ffffffff811d9184 T acpi_ex_pci_config_space_handler
ffffffff811d91b8 T acpi_ex_cmos_space_handler
ffffffff811d91bb T acpi_ex_pci_bar_space_handler
ffffffff811d91be T acpi_ex_data_table_space_handler
ffffffff811d91ec T acpi_ex_resolve_node_to_value
ffffffff811d9414 T acpi_ex_resolve_to_value
ffffffff811d9626 T acpi_ex_resolve_multiple
ffffffff811d9834 t acpi_ex_check_object_type
ffffffff811d98a4 T acpi_ex_resolve_operands
ffffffff811d9d74 T acpi_ex_store_object_to_node
ffffffff811d9e4d T acpi_ex_store
ffffffff811da090 T acpi_ex_resolve_object
ffffffff811da191 T acpi_ex_store_object_to_object
ffffffff811da2a8 T acpi_ex_store_buffer_to_buffer
ffffffff811da342 T acpi_ex_store_string_to_string
ffffffff811da3f4 T acpi_ex_system_wait_semaphore
ffffffff811da43d T acpi_ex_system_wait_mutex
ffffffff811da486 T acpi_ex_system_do_stall
ffffffff811da4bb T acpi_ex_system_do_sleep
ffffffff811da4e5 T acpi_ex_system_signal_event
ffffffff811da4fb T acpi_ex_system_wait_event
ffffffff811da513 T acpi_ex_system_reset_event
ffffffff811da54c T acpi_ex_enter_interpreter
ffffffff811da575 T acpi_ex_reacquire_interpreter
ffffffff811da584 T acpi_ex_exit_interpreter
ffffffff811da5af T acpi_ex_relinquish_interpreter
ffffffff811da5be T acpi_ex_truncate_for32bit_table
ffffffff811da5de T acpi_ex_acquire_global_lock
ffffffff811da624 T acpi_ex_release_global_lock
ffffffff811da65c T acpi_ex_eisa_id_to_string
ffffffff811da70c T acpi_ex_integer_to_string
ffffffff811da786 T acpi_is_valid_space_id
ffffffff811da7a0 T acpi_hw_set_mode
ffffffff811da84d T acpi_hw_get_mode
ffffffff811da88c T acpi_hw_execute_sleep_method
ffffffff811da8f9 T acpi_hw_extended_sleep
ffffffff811da99a T acpi_hw_extended_wake_prep
ffffffff811da9f4 T acpi_hw_extended_wake
ffffffff811daa4c t acpi_hw_enable_wakeup_gpe_block
ffffffff811daa87 T acpi_hw_enable_runtime_gpe_block
ffffffff811daac2 T acpi_hw_clear_gpe_block
ffffffff811daaf3 T acpi_hw_disable_gpe_block
ffffffff811dab25 T acpi_hw_get_gpe_register_bit
ffffffff811dab37 T acpi_hw_low_set_gpe
ffffffff811dabdc T acpi_hw_clear_gpe
ffffffff811dac01 T acpi_hw_get_gpe_status
ffffffff811dac71 T acpi_hw_disable_all_gpes
ffffffff811dac8f T acpi_hw_enable_all_runtime_gpes
ffffffff811dac9d T acpi_hw_enable_all_wakeup_gpes
ffffffff811dacac T acpi_hw_derive_pci_id
ffffffff811dae70 T acpi_hw_validate_register
ffffffff811daf20 T acpi_hw_read
ffffffff811daf77 t acpi_hw_read_multiple
ffffffff811dafd7 T acpi_hw_write
ffffffff811db01d t acpi_hw_write_multiple
ffffffff811db049 T acpi_hw_get_bit_register_info
ffffffff811db079 T acpi_hw_write_pm1_control
ffffffff811db0a7 T acpi_hw_register_read
ffffffff811db17c T acpi_hw_register_write
ffffffff811db280 T acpi_hw_clear_acpi_status
ffffffff811db2d8 T acpi_hw_legacy_sleep
ffffffff811db467 T acpi_hw_legacy_wake_prep
ffffffff811db52a T acpi_hw_legacy_wake
ffffffff811db5c0 t acpi_hw_validate_io_request
ffffffff811db689 T acpi_hw_read_port
ffffffff811db727 T acpi_hw_write_port
ffffffff811db7bc T acpi_read_bit_register
ffffffff811db800 T acpi_write_bit_register
ffffffff811db8ab T acpi_write
ffffffff811db922 T acpi_get_sleep_type_data
ffffffff811dbafa T acpi_read
ffffffff811dbba2 T acpi_reset
ffffffff811dbbec t acpi_hw_sleep_dispatch
ffffffff811dbc1f T acpi_leave_sleep_state_prep
ffffffff811dbc2e T acpi_leave_sleep_state
ffffffff811dbc3b T acpi_set_firmware_waking_vector
ffffffff811dbc5c T acpi_set_firmware_waking_vector64
ffffffff811dbc82 T acpi_enter_sleep_state
ffffffff811dbcd3 T acpi_enter_sleep_state_prep
ffffffff811dbd63 T acpi_enter_sleep_state_s4bios
ffffffff811dbde4 T acpi_ns_lookup
ffffffff811dc143 T acpi_ns_root_initialize
ffffffff811dc3f0 T acpi_ns_create_node
ffffffff811dc42f T acpi_ns_delete_node
ffffffff811dc475 T acpi_ns_remove_node
ffffffff811dc4a5 T acpi_ns_install_node
ffffffff811dc4fd T acpi_ns_delete_children
ffffffff811dc557 T acpi_ns_delete_namespace_subtree
ffffffff811dc5ce T acpi_ns_delete_namespace_by_owner
ffffffff811dc694 T acpi_ns_evaluate
ffffffff811dc82e T acpi_ns_exec_module_code_list
ffffffff811dc9a0 t acpi_ns_init_one_device
ffffffff811dca5e t acpi_ns_init_one_object
ffffffff811dcb5b t acpi_ns_find_ini_methods
ffffffff811dcbab T acpi_ns_initialize_objects
ffffffff811dcc07 T acpi_ns_initialize_devices
ffffffff811dcd44 T acpi_ns_load_table
ffffffff811dcdc0 T acpi_ns_build_external_path
ffffffff811dce30 T acpi_ns_get_pathname_length
ffffffff811dce87 T acpi_ns_get_external_pathname
ffffffff811dcf11 T acpi_ns_handle_to_pathname
ffffffff811dcf68 T acpi_ns_detach_object
ffffffff811dcfca T acpi_ns_attach_object
ffffffff811dd0b4 T acpi_ns_get_attached_object
ffffffff811dd0fd T acpi_ns_get_secondary_object
ffffffff811dd120 T acpi_ns_attach_data
ffffffff811dd1a1 T acpi_ns_detach_data
ffffffff811dd1e4 T acpi_ns_get_attached_data
ffffffff811dd20c T acpi_ns_one_complete_parse
ffffffff811dd31d T acpi_ns_parse_table
ffffffff811dd350 t acpi_ns_check_object_type
ffffffff811dd516 t acpi_ns_check_package_elements
ffffffff811dd5aa t acpi_ns_check_package_list
ffffffff811dd7bb T acpi_ns_check_parameter_count
ffffffff811dd8a8 T acpi_ns_check_for_predefined_name
ffffffff811dd8d8 T acpi_ns_check_predefined_names
ffffffff811ddcd0 T acpi_ns_repair_null_element
ffffffff811ddd38 T acpi_ns_remove_null_elements
ffffffff811ddd8a T acpi_ns_wrap_with_package
ffffffff811dddc4 T acpi_ns_repair_object
ffffffff811de01c t acpi_ns_repair_HID
ffffffff811de0cb t acpi_ns_repair_CID
ffffffff811de14f t acpi_ns_repair_FDE
ffffffff811de1fc t acpi_ns_check_sorted_list.isra.0.part.1
ffffffff811de314 t acpi_ns_repair_ALR
ffffffff811de339 t acpi_ns_repair_TSS
ffffffff811de38e t acpi_ns_repair_PSS
ffffffff811de437 T acpi_ns_complex_repairs
ffffffff811de468 T acpi_ns_search_one_scope
ffffffff811de4a4 T acpi_ns_search_and_enter
ffffffff811de61c T acpi_ns_print_node_pathname
ffffffff811de68c T acpi_ns_valid_root_prefix
ffffffff811de694 T acpi_ns_get_type
ffffffff811de6be T acpi_ns_local
ffffffff811de6f6 T acpi_ns_get_internal_name_length
ffffffff811de761 T acpi_ns_build_internal_name
ffffffff811de847 T acpi_ns_internalize_name
ffffffff811de8e5 T acpi_ns_externalize_name
ffffffff811deab1 T acpi_ns_validate_handle
ffffffff811dead4 T acpi_ns_terminate
ffffffff811deb03 T acpi_ns_opens_scope
ffffffff811deb3b T acpi_ns_get_node
ffffffff811debdc T acpi_ns_get_next_node
ffffffff811debeb T acpi_ns_get_next_node_typed
ffffffff811dec11 T acpi_ns_walk_namespace
ffffffff811ded88 T acpi_evaluate_object
ffffffff811defca T acpi_evaluate_object_typed
ffffffff811df098 T acpi_get_data
ffffffff811df106 T acpi_detach_data
ffffffff811df160 T acpi_attach_data
ffffffff811df1cf T acpi_get_devices
ffffffff811df244 t acpi_ns_get_device_callback
ffffffff811df3a5 T acpi_walk_namespace
ffffffff811df468 T acpi_get_object_info
ffffffff811df7bb T acpi_get_handle
ffffffff811df85f T acpi_install_method
ffffffff811dfa56 T acpi_get_name
ffffffff811dfb08 T acpi_get_next_object
ffffffff811dfb98 T acpi_get_parent
ffffffff811dfbfe T acpi_get_type
ffffffff811dfc61 T acpi_get_id
ffffffff811dfcb8 t acpi_ps_get_next_package_length.isra.0
ffffffff811dfcf4 T acpi_ps_get_next_package_end
ffffffff811dfd09 T acpi_ps_get_next_namestring
ffffffff811dfd68 T acpi_ps_get_next_namepath
ffffffff811dff2f T acpi_ps_get_next_simple_arg
ffffffff811e0010 T acpi_ps_get_next_arg
ffffffff811e03ec t acpi_ps_complete_op
ffffffff811e064b T acpi_ps_parse_loop
ffffffff811e0f84 T acpi_ps_get_opcode_info
ffffffff811e0fcf T acpi_ps_get_opcode_name
ffffffff811e0fd7 T acpi_ps_get_argument_count
ffffffff811e0fe8 T acpi_ps_get_opcode_size
ffffffff811e0ff4 T acpi_ps_peek_opcode
ffffffff811e1009 T acpi_ps_complete_this_op
ffffffff811e1187 T acpi_ps_next_parse_state
ffffffff811e126b T acpi_ps_parse_aml
ffffffff811e14cc T acpi_ps_get_parent_scope
ffffffff811e14d5 T acpi_ps_has_completed_scope
ffffffff811e14f2 T acpi_ps_init_scope
ffffffff811e153a T acpi_ps_push_scope
ffffffff811e15a5 T acpi_ps_pop_scope
ffffffff811e1615 T acpi_ps_cleanup_scope
ffffffff811e1644 T acpi_ps_get_arg
ffffffff811e167f T acpi_ps_append_arg
ffffffff811e16fc T acpi_ps_init_op
ffffffff811e1705 T acpi_ps_alloc_op
ffffffff811e179e T acpi_ps_create_scope_op
ffffffff811e17b7 T acpi_ps_free_op
ffffffff811e17d3 T acpi_ps_is_leading_char
ffffffff811e17ea T acpi_ps_is_prefix_char
ffffffff811e17fd T acpi_ps_set_name
ffffffff811e1808 T acpi_ps_delete_parse_tree
ffffffff811e1860 t acpi_ps_update_parameter_list.isra.2
ffffffff811e1893 T acpi_debug_trace
ffffffff811e18f2 T acpi_ps_execute_method
ffffffff811e1b58 T acpi_rs_get_address_common
ffffffff811e1bac T acpi_rs_set_address_common
ffffffff811e1bfc T acpi_rs_get_aml_length
ffffffff811e1ce9 T acpi_rs_get_list_length
ffffffff811e1edf T acpi_rs_get_pci_routing_table_length
ffffffff811e1fcc T acpi_buffer_to_resource
ffffffff811e207b T acpi_rs_create_resource_list
ffffffff811e20df T acpi_rs_create_pci_routing_table
ffffffff811e23a8 T acpi_rs_create_aml_resources
ffffffff811e23f4 T acpi_rs_convert_aml_to_resources
ffffffff811e24c7 T acpi_rs_convert_resources_to_aml
ffffffff811e25c4 T acpi_rs_convert_aml_to_resource
ffffffff811e2937 T acpi_rs_convert_resource_to_aml
ffffffff811e2bc4 T acpi_rs_decode_bitmask
ffffffff811e2be3 T acpi_rs_encode_bitmask
ffffffff811e2c08 T acpi_rs_move_data
ffffffff811e2c57 T acpi_rs_set_resource_length
ffffffff811e2c86 T acpi_rs_set_resource_header
ffffffff811e2c95 T acpi_rs_get_resource_source
ffffffff811e2d20 T acpi_rs_set_resource_source
ffffffff811e2d55 T acpi_rs_get_prt_method_data
ffffffff811e2d9c T acpi_rs_get_crs_method_data
ffffffff811e2de3 T acpi_rs_get_aei_method_data
ffffffff811e2e2a T acpi_rs_get_method_data
ffffffff811e2e6a T acpi_rs_set_srs_method_data
ffffffff811e2f6c T acpi_get_event_resources
ffffffff811e2fb4 T acpi_set_current_resources
ffffffff811e3012 T acpi_get_current_resources
ffffffff811e305d T acpi_get_irq_routing_table
ffffffff811e30a6 T acpi_walk_resources
ffffffff811e3170 T acpi_get_vendor_resource
ffffffff811e31b5 t acpi_rs_match_vendor_resource
ffffffff811e3233 T acpi_resource_to_address64
ffffffff811e3330 T acpi_tb_create_local_fadt
ffffffff811e3787 T acpi_tb_parse_fadt
ffffffff811e3810 T acpi_tb_find_table
ffffffff811e3970 T acpi_tb_verify_table
ffffffff811e39bf T acpi_tb_resize_root_table_list
ffffffff811e3a94 T acpi_tb_store_table
ffffffff811e3b11 T acpi_tb_delete_table
ffffffff811e3b4a T acpi_tb_table_override
ffffffff811e3c73 T acpi_tb_add_table
ffffffff811e3dfa T acpi_tb_terminate
ffffffff811e3e61 T acpi_tb_delete_namespace_by_owner
ffffffff811e3eec T acpi_tb_allocate_owner_id
ffffffff811e3f2f T acpi_tb_release_owner_id
ffffffff811e3f72 T acpi_tb_get_owner_id
ffffffff811e3fb8 T acpi_tb_is_table_loaded
ffffffff811e3ff2 T acpi_tb_set_table_loaded_flag
ffffffff811e4034 t acpi_tb_fix_string
ffffffff811e405b T acpi_tb_initialize_facs
ffffffff811e4083 T acpi_tb_tables_loaded
ffffffff811e408e T acpi_tb_print_table_header
ffffffff811e41e1 T acpi_tb_checksum
ffffffff811e41f5 T acpi_tb_verify_checksum
ffffffff811e4234 T acpi_tb_check_dsdt_header
ffffffff811e42bd T acpi_tb_copy_dsdt
ffffffff811e4358 T acpi_tb_install_table
ffffffff811e4484 T acpi_remove_table_handler
ffffffff811e44ca T acpi_load_tables
ffffffff811e4623 T acpi_unload_table_id
ffffffff811e467b T acpi_install_table_handler
ffffffff811e46cf T acpi_get_table_by_index
ffffffff811e4743 T acpi_get_table_header
ffffffff811e47f0 T acpi_load_table
ffffffff811e4845 T acpi_allocate_root_table
ffffffff811e4857 T acpi_reallocate_root_table
ffffffff811e48db T acpi_get_table_with_size
ffffffff811e497d T acpi_get_table
ffffffff811e4990 t acpi_tb_scan_memory_for_rsdp
ffffffff811e49ef T acpi_find_root_pointer
ffffffff811e4b44 T acpi_ut_add_address_range
ffffffff811e4bf7 T acpi_ut_remove_address_range
ffffffff811e4c40 T acpi_ut_check_address_range
ffffffff811e4cf4 T acpi_ut_delete_address_lists
ffffffff811e4d30 T acpi_ut_create_caches
ffffffff811e4ddb T acpi_ut_delete_caches
ffffffff811e4e55 T acpi_ut_validate_buffer
ffffffff811e4e7b T acpi_ut_initialize_buffer
ffffffff811e4f00 t acpi_ut_copy_simple_object
ffffffff811e5047 t acpi_ut_copy_ielement_to_ielement
ffffffff811e5105 t acpi_ut_copy_isimple_to_esimple
ffffffff811e524c t acpi_ut_copy_ielement_to_eelement
ffffffff811e52c7 T acpi_ut_copy_iobject_to_eobject
ffffffff811e5359 T acpi_ut_copy_eobject_to_iobject
ffffffff811e5561 T acpi_ut_copy_iobject_to_iobject
ffffffff811e5660 T acpi_ut_dump_buffer2
ffffffff811e5819 T acpi_ut_dump_buffer
ffffffff811e5830 T acpi_format_exception
ffffffff811e5860 T acpi_ut_hex_to_ascii_char
ffffffff811e5870 T acpi_ut_get_region_name
ffffffff811e58b0 T acpi_ut_get_event_name
ffffffff811e58c7 T acpi_ut_get_type_name
ffffffff811e58de T acpi_ut_get_object_type_name
ffffffff811e5903 T acpi_ut_get_node_name
ffffffff811e5944 T acpi_ut_get_descriptor_name
ffffffff811e596b T acpi_ut_get_reference_name
ffffffff811e59ac T acpi_ut_valid_object_type
ffffffff811e59b4 T acpi_ut_remove_reference
ffffffff811e59dc t acpi_ut_delete_internal_obj
ffffffff811e5b7c t acpi_ut_update_ref_count
ffffffff811e5c18 T acpi_ut_update_object_reference
ffffffff811e5d6e T acpi_ut_add_reference
ffffffff811e5d88 T acpi_ut_delete_internal_object_list
ffffffff811e5db0 T acpi_ut_evaluate_object
ffffffff811e5f39 T acpi_ut_evaluate_numeric_object
ffffffff811e5f7c T acpi_ut_execute_STA
ffffffff811e5fcb T acpi_ut_execute_power_methods
ffffffff811e6040 T acpi_ut_init_globals
ffffffff811e6288 T acpi_ut_execute_HID
ffffffff811e634e T acpi_ut_execute_UID
ffffffff811e6414 T acpi_ut_execute_CID
ffffffff811e6594 T acpi_ut_subsystem_shutdown
ffffffff811e6604 T acpi_ut_create_rw_lock
ffffffff811e663b T acpi_ut_delete_rw_lock
ffffffff811e6668 T acpi_ut_acquire_read_lock
ffffffff811e66bb T acpi_ut_release_read_lock
ffffffff811e6708 T acpi_ut_acquire_write_lock
ffffffff811e671a T acpi_ut_release_write_lock
ffffffff811e6728 T acpi_ut_short_divide
ffffffff811e677e T acpi_ut_divide
ffffffff811e67d4 T acpi_ut_validate_exception
ffffffff811e6866 T acpi_ut_is_pci_root_bridge
ffffffff811e6894 T acpi_ut_is_aml_table
ffffffff811e68b2 T acpi_ut_allocate_owner_id
ffffffff811e69a4 T acpi_ut_release_owner_id
ffffffff811e6a29 T acpi_ut_strupr
ffffffff811e6a4c T acpi_ut_print_string
ffffffff811e6ba4 T acpi_ut_dword_byte_swap
ffffffff811e6bc8 T acpi_ut_set_integer_width
ffffffff811e6bfa T acpi_ut_valid_acpi_char
ffffffff811e6c23 T acpi_ut_valid_acpi_name
ffffffff811e6c4e T acpi_ut_repair_name
ffffffff811e6c87 T acpi_ut_strtoul64
ffffffff811e6e57 T acpi_ut_create_update_state_and_push
ffffffff811e6e85 T acpi_ut_walk_package_tree
ffffffff811e6fa0 T acpi_ut_mutex_initialize
ffffffff811e709d T acpi_ut_mutex_terminate
ffffffff811e7100 T acpi_ut_acquire_mutex
ffffffff811e7189 T acpi_ut_release_mutex
ffffffff811e71e8 t acpi_ut_get_simple_object_size
ffffffff811e7309 t acpi_ut_get_element_length
ffffffff811e7356 T acpi_ut_valid_internal_object
ffffffff811e7365 T acpi_ut_allocate_object_desc_dbg
ffffffff811e73d3 T acpi_ut_delete_object_desc
ffffffff811e7413 T acpi_ut_create_internal_object_dbg
ffffffff811e748e T acpi_ut_create_string_object
ffffffff811e751d T acpi_ut_create_buffer_object
ffffffff811e75b5 T acpi_ut_create_integer_object
ffffffff811e75df T acpi_ut_create_package_object
ffffffff811e7657 T acpi_ut_get_object_size
ffffffff811e76b4 T acpi_ut_initialize_interfaces
ffffffff811e7711 T acpi_ut_interface_terminate
ffffffff811e7769 T acpi_ut_install_interface
ffffffff811e7821 T acpi_ut_remove_interface
ffffffff811e78a2 T acpi_ut_get_interface
ffffffff811e78d2 T acpi_ut_osi_implementation
ffffffff811e79a8 T acpi_ut_get_resource_type
ffffffff811e79b5 T acpi_ut_get_resource_length
ffffffff811e79c5 T acpi_ut_validate_resource
ffffffff811e7ad8 T acpi_ut_get_resource_header_length
ffffffff811e7ae3 T acpi_ut_get_descriptor_length
ffffffff811e7afd T acpi_ut_walk_aml_resources
ffffffff811e7c03 T acpi_ut_get_resource_end_tag
ffffffff811e7c20 T acpi_ut_push_generic_state
ffffffff811e7c2a T acpi_ut_pop_generic_state
ffffffff811e7c39 T acpi_ut_create_generic_state
ffffffff811e7c85 T acpi_ut_create_thread_state
ffffffff811e7cce T acpi_ut_create_update_state
ffffffff811e7cf1 T acpi_ut_create_pkg_state
ffffffff811e7d25 T acpi_ut_create_pkg_state_and_push
ffffffff811e7d4b T acpi_ut_create_control_state
ffffffff811e7d64 T acpi_ut_delete_generic_state
ffffffff811e7d7c T acpi_install_interface_handler
ffffffff811e7dcc T acpi_purge_cached_objects
ffffffff811e7e01 T acpi_enable_subsystem
ffffffff811e7e8c T acpi_check_address_range
ffffffff811e7ee0 T acpi_remove_interface
ffffffff811e7f2c T acpi_install_interface
ffffffff811e7f9d T acpi_terminate
ffffffff811e7fe9 T acpi_initialize_objects
ffffffff811e802c T acpi_info
ffffffff811e808e T acpi_warning
ffffffff811e8105 T acpi_exception
ffffffff811e8181 T acpi_error
ffffffff811e81f8 T acpi_ut_predefined_warning
ffffffff811e8273 T acpi_ut_predefined_info
ffffffff811e82ee T acpi_ut_namespace_error
ffffffff811e83b8 T acpi_ut_method_error
ffffffff811e8468 t acpi_ut_get_mutex_object.part.0
ffffffff811e84ba T acpi_acquire_mutex
ffffffff811e84fa T acpi_release_mutex
ffffffff811e853c t acpi_hed_notify
ffffffff811e854c T unregister_acpi_hed_notifier
ffffffff811e855b T register_acpi_hed_notifier
ffffffff811e8570 T apei_exec_ctx_init
ffffffff811e8580 T apei_exec_noop
ffffffff811e8590 T apei_get_debugfs_dir
ffffffff811e85c0 T apei_osc_setup
ffffffff811e8690 t apei_res_clean
ffffffff811e86f0 T apei_resources_fini
ffffffff811e8700 t apei_exec_for_each_entry
ffffffff811e87a0 T apei_exec_collect_resources
ffffffff811e87c0 T apei_exec_post_unmap_gars
ffffffff811e87d0 T __apei_exec_run
ffffffff811e88b0 t apei_check_gar
ffffffff811e8980 T apei_exec_pre_map_gars
ffffffff811e89e0 t apei_res_add
ffffffff811e8b10 t apei_get_nvs_callback
ffffffff811e8b30 t collect_res_callback
ffffffff811e8bc0 t apei_res_sub
ffffffff811e8d20 T apei_write
ffffffff811e8dc0 T apei_read
ffffffff811e8e60 T apei_map_generic_address
ffffffff811e8e90 t post_unmap_gar_callback
ffffffff811e8ec0 T apei_resources_add
ffffffff811e8ed0 T apei_resources_sub
ffffffff811e8f20 t pre_map_gar_callback
ffffffff811e8f50 T apei_resources_release
ffffffff811e8fe0 T apei_resources_request
ffffffff811e9290 T __apei_exec_read_register
ffffffff811e92e0 T apei_exec_read_register
ffffffff811e9320 T apei_exec_read_register_value
ffffffff811e9360 T __apei_exec_write_register
ffffffff811e93f0 T apei_exec_write_register
ffffffff811e9400 T apei_exec_write_register_value
ffffffff811e9420 T apei_hest_parse
ffffffff811e9550 T apei_estatus_check_header
ffffffff811e9580 T cper_next_record_id
ffffffff811e95c0 T apei_estatus_check
ffffffff811e9650 T cper_print_bits
ffffffff811e9750 t apei_estatus_print_section
ffffffff811ea070 T apei_estatus_print
ffffffff811ea110 t erst_exec_add
ffffffff811ea120 t erst_exec_subtract
ffffffff811ea130 t erst_exec_goto
ffffffff811ea140 t erst_exec_set_dst_address_base
ffffffff811ea150 t erst_exec_set_src_address_base
ffffffff811ea160 t erst_exec_skip_next_instruction_if_true
ffffffff811ea1a0 t erst_exec_load_var2
ffffffff811ea1b0 t erst_exec_load_var1
ffffffff811ea1c0 t erst_exec_move_data
ffffffff811ea2a0 t erst_exec_stall
ffffffff811ea300 t erst_exec_subtract_value
ffffffff811ea350 t erst_exec_add_value
ffffffff811ea3a0 t erst_exec_store_var1
ffffffff811ea3b0 T erst_get_record_count
ffffffff811ea430 T erst_get_record_id_next
ffffffff811ea750 t __erst_record_id_cache_compact.part.5
ffffffff811ea7a0 t erst_timedout
ffffffff811ea7e0 t erst_exec_stall_while_true
ffffffff811ea880 T erst_get_record_id_begin
ffffffff811ea8d0 t erst_open_pstore
ffffffff811ea8f0 t pr_unimpl_nvram
ffffffff811ea920 T erst_clear
ffffffff811eab80 t erst_clearer
ffffffff811eab90 T erst_get_record_id_end
ffffffff811eabe0 t erst_close_pstore
ffffffff811eabf0 T erst_read
ffffffff811eae00 t erst_reader
ffffffff811eb120 T erst_write
ffffffff811eb390 t erst_writer
ffffffff811eb6e0 t ghes_copy_tofrom_phys
ffffffff811eb8e0 t ghes_read_estatus
ffffffff811eba60 t ghes_estatus_cached
ffffffff811ebb10 t ghes_estatus_cache_free
ffffffff811ebb50 t ghes_estatus_cache_add
ffffffff811ebcd0 t ghes_estatus_cache_rcu_free
ffffffff811ebce0 t ghes_do_proc
ffffffff811ebef0 t ghes_add_timer
ffffffff811ebf40 t ghes_estatus_pool_free_chunk_page
ffffffff811ebf50 t ghes_clear_estatus
ffffffff811ebf90 t __ghes_print_estatus.isra.6
ffffffff811ec020 t ghes_print_estatus.constprop.7
ffffffff811ec090 t ghes_notify_nmi
ffffffff811ec2b0 t ghes_proc_in_irq
ffffffff811ec370 t ghes_proc
ffffffff811ec3e0 t ghes_notify_sci
ffffffff811ec450 t ghes_irq_func
ffffffff811ec470 t ghes_poll_func
ffffffff811ec4a0 T pnp_alloc
ffffffff811ec4d0 T pnp_register_protocol
ffffffff811ec5a0 T pnp_unregister_protocol
ffffffff811ec600 T pnp_free_resource
ffffffff811ec630 T pnp_free_resources
ffffffff811ec680 t pnp_release_device
ffffffff811ec6d0 T pnp_alloc_dev
ffffffff811ec7e0 T __pnp_add_device
ffffffff811ec8a0 T pnp_add_device
ffffffff811ec9b0 T __pnp_remove_device
ffffffff811eca40 t card_remove
ffffffff811eca50 t card_suspend
ffffffff811eca80 t card_resume
ffffffff811ecab0 T pnp_unregister_card_driver
ffffffff811ecb10 t card_remove_first
ffffffff811ecb80 t pnp_release_card
ffffffff811ecbc0 T pnp_release_card_device
ffffffff811ecbf0 t card_probe
ffffffff811ecd60 T pnp_request_card_device
ffffffff811ece90 t pnp_show_card_ids
ffffffff811ecee0 t pnp_show_card_name
ffffffff811ecf10 T pnp_register_card_driver
ffffffff811ed000 T pnp_alloc_card
ffffffff811ed170 T pnp_add_card
ffffffff811ed2f0 T pnp_add_card_device
ffffffff811ed390 T pnp_remove_card_device
ffffffff811ed400 T pnp_remove_card
ffffffff811ed4e0 t pnp_device_shutdown
ffffffff811ed500 t pnp_bus_resume
ffffffff811ed590 t pnp_bus_suspend
ffffffff811ed620 T pnp_unregister_driver
ffffffff811ed630 T pnp_register_driver
ffffffff811ed650 T pnp_device_detach
ffffffff811ed690 t pnp_device_remove
ffffffff811ed6d0 T pnp_device_attach
ffffffff811ed720 T compare_pnp_id
ffffffff811ed800 t match_device.isra.2
ffffffff811ed860 t pnp_device_probe
ffffffff811ed930 t pnp_bus_match
ffffffff811ed960 T pnp_add_id
ffffffff811eda50 t pnp_test_handler
ffffffff811eda60 T pnp_get_resource
ffffffff811edab0 T pnp_range_reserved
ffffffff811edb20 T pnp_possible_config
ffffffff811edbd0 t pnp_new_resource
ffffffff811edc10 T pnp_build_option
ffffffff811edc80 T pnp_register_irq_resource
ffffffff811edd20 T pnp_register_dma_resource
ffffffff811edd80 T pnp_register_port_resource
ffffffff811ede20 T pnp_register_mem_resource
ffffffff811edec0 T pnp_free_options
ffffffff811edf30 T pnp_check_port
ffffffff811ee130 T pnp_check_mem
ffffffff811ee330 T pnp_check_irq
ffffffff811ee5a0 T pnp_check_dma
ffffffff811ee770 T pnp_resource_type
ffffffff811ee780 T pnp_add_irq_resource
ffffffff811ee820 T pnp_add_dma_resource
ffffffff811ee8c0 T pnp_add_io_resource
ffffffff811ee970 T pnp_add_mem_resource
ffffffff811eea20 T pnp_add_bus_resource
ffffffff811eeac0 t pnp_clean_resource_table
ffffffff811eeb10 t pnp_assign_resources
ffffffff811ef0c0 T pnp_stop_dev
ffffffff811ef120 T pnp_disable_dev
ffffffff811ef190 T pnp_start_dev
ffffffff811ef200 T pnp_init_resources
ffffffff811ef210 T pnp_auto_config_dev
ffffffff811ef2a0 T pnp_activate_dev
ffffffff811ef2e0 T pnp_is_active
ffffffff811ef3f0 T pnp_eisa_id_to_string
ffffffff811ef470 T pnp_resource_type_name
ffffffff811ef4e0 T dbg_pnp_show_resources
ffffffff811ef510 T pnp_option_priority_name
ffffffff811ef530 T dbg_pnp_show_option
ffffffff811ef830 t pnp_show_current_ids
ffffffff811ef880 t pnp_printf
ffffffff811ef920 t pnp_show_options
ffffffff811efec0 t pnp_set_current_resources
ffffffff811f02a0 t pnp_show_current_resources
ffffffff811f0440 t quirk_ad1815_mpu_resources
ffffffff811f04b0 t quirk_sb16audio_resources
ffffffff811f0540 t quirk_amd_mmconfig_area
ffffffff811f0620 t quirk_system_pci_resources
ffffffff811f07a0 t quirk_add_irq_optional_dependent_sets
ffffffff811f0970 t quirk_awe32_add_ports
ffffffff811f0a40 t quirk_awe32_resources
ffffffff811f0ad0 t quirk_cmi8330_resources
ffffffff811f0bb0 T pnp_fixup_device
ffffffff811f0c10 t reserve_range
ffffffff811f0d60 t system_pnp_probe
ffffffff811f0e10 t pnpacpi_get_resources
ffffffff811f0e20 t pnpacpi_disable_resources
ffffffff811f0ea0 t pnpacpi_set_resources
ffffffff811f0f70 t pnpacpi_count_resources
ffffffff811f0f90 t decode_irq_flags
ffffffff811f1050 t dma_flags
ffffffff811f1110 t pnpacpi_parse_allocated_ioresource
ffffffff811f1150 t pnpacpi_parse_allocated_memresource
ffffffff811f1190 t pnpacpi_type_resources
ffffffff811f11d0 t pnpacpi_parse_allocated_irqresource
ffffffff811f1350 t pnpacpi_allocated_resource
ffffffff811f1720 T pnpacpi_parse_allocated_resource
ffffffff811f1790 T pnpacpi_build_resource_template
ffffffff811f1880 T pnpacpi_encode_resources
ffffffff811f1d90 t gnttab_update_entry_v1
ffffffff811f1dd0 t gnttab_update_entry_v2
ffffffff811f1e00 T gnttab_grant_foreign_access_ref
ffffffff811f1e20 T gnttab_update_subpage_entry_v2
ffffffff811f1e70 T gnttab_grant_foreign_access_subpage_ref
ffffffff811f1eb0 T gnttab_subpage_grants_available
ffffffff811f1ec0 T gnttab_update_trans_entry_v2
ffffffff811f1f00 T gnttab_grant_foreign_access_trans_ref
ffffffff811f1f40 T gnttab_trans_grants_available
ffffffff811f1f50 t gnttab_query_foreign_access_v1
ffffffff811f1f70 t gnttab_query_foreign_access_v2
ffffffff811f1f90 T gnttab_query_foreign_access
ffffffff811f1fa0 t gnttab_end_foreign_access_ref_v2
ffffffff811f1fd0 T gnttab_end_foreign_access_ref
ffffffff811f1fe0 T gnttab_grant_foreign_transfer_ref
ffffffff811f2000 T gnttab_end_foreign_transfer_ref
ffffffff811f2010 T gnttab_empty_grant_references
ffffffff811f2020 T gnttab_release_grant_reference
ffffffff811f2050 T gnttab_max_grant_frames
ffffffff811f2090 T gnttab_claim_grant_reference
ffffffff811f20c0 t gnttab_map
ffffffff811f21d0 t gnttab_unmap_frames_v1
ffffffff811f21f0 t gnttab_unmap_frames_v2
ffffffff811f2240 t gnttab_map_frames_v2
ffffffff811f23b0 T gnttab_unmap_refs
ffffffff811f2420 T gnttab_map_refs
ffffffff811f2610 T gnttab_cancel_free_callback
ffffffff811f2680 T gnttab_request_free_callback
ffffffff811f2720 t get_free_entries
ffffffff811f2a50 T gnttab_alloc_grant_references
ffffffff811f2a70 T gnttab_grant_foreign_transfer
ffffffff811f2ad0 T gnttab_grant_foreign_access
ffffffff811f2b40 t put_free_entry
ffffffff811f2bb0 T gnttab_free_grant_reference
ffffffff811f2bc0 T gnttab_end_foreign_transfer
ffffffff811f2bf0 T gnttab_grant_foreign_access_trans
ffffffff811f2ca0 T gnttab_grant_foreign_access_subpage
ffffffff811f2d60 t gnttab_end_foreign_access_ref_v1
ffffffff811f2db0 t gnttab_end_foreign_transfer_ref_v1
ffffffff811f2e40 t gnttab_end_foreign_transfer_ref_v2
ffffffff811f2ed0 t gnttab_map_frames_v1
ffffffff811f2f10 T gnttab_free_grant_references
ffffffff811f2fd0 T gnttab_end_foreign_access
ffffffff811f3050 T gnttab_resume
ffffffff811f3170 T gnttab_init
ffffffff811f3360 T gnttab_suspend
ffffffff811f3380 T xen_setup_features
ffffffff811f33d0 T irq_from_evtchn
ffffffff811f33e0 T xen_irq_from_gsi
ffffffff811f3430 T xen_set_callback_via
ffffffff811f3460 t __xen_evtchn_do_upcall
ffffffff811f36a0 T xen_hvm_evtchn_do_upcall
ffffffff811f36b0 t init_evtchn_cpu_bindings
ffffffff811f3750 t info_for_irq
ffffffff811f3770 t cpu_from_evtchn
ffffffff811f37a0 t unmask_evtchn
ffffffff811f3870 T xen_test_irq_shared
ffffffff811f38f0 t bind_evtchn_to_cpu
ffffffff811f39b0 T evtchn_make_refcounted
ffffffff811f3a10 t xen_free_irq
ffffffff811f3aa0 t evtchn_from_irq
ffffffff811f3ae0 T xen_clear_irq_pending
ffffffff811f3b10 t enable_dynirq
ffffffff811f3b40 t disable_dynirq
ffffffff811f3b70 t disable_pirq
ffffffff811f3b80 t retrigger_dynirq
ffffffff811f3bd0 t shutdown_pirq
ffffffff811f3c80 T notify_remote_via_irq
ffffffff811f3cd0 t ack_dynirq
ffffffff811f3d20 t mask_ack_dynirq
ffffffff811f3d30 T evtchn_get
ffffffff811f3da0 t xen_irq_init
ffffffff811f3e20 t xen_allocate_irq_dynamic
ffffffff811f3e80 t set_affinity_irq
ffffffff811f3f40 t pirq_from_irq
ffffffff811f3f70 t pirq_check_eoi_map
ffffffff811f3f90 T xen_pirq_from_irq
ffffffff811f3fa0 t pirq_query_unmask
ffffffff811f4050 t eoi_pirq
ffffffff811f4100 t mask_ack_pirq
ffffffff811f4120 t __startup_pirq
ffffffff811f4250 t startup_pirq
ffffffff811f4260 t enable_pirq
ffffffff811f4270 t pirq_needs_eoi_flag
ffffffff811f4290 t virq_from_irq
ffffffff811f42c0 t ipi_from_irq
ffffffff811f42e0 t unbind_from_irq
ffffffff811f4440 T evtchn_put
ffffffff811f4470 T unbind_from_irqhandler
ffffffff811f4480 T xen_poll_irq_timeout
ffffffff811f44d0 t xen_irq_info_common_init.constprop.10
ffffffff811f4500 T bind_evtchn_to_irq
ffffffff811f45a0 T bind_interdomain_evtchn_to_irqhandler
ffffffff811f4680 T bind_evtchn_to_irqhandler
ffffffff811f4710 T xen_bind_pirq_gsi_to_irq
ffffffff811f4960 T xen_allocate_pirq_msi
ffffffff811f4a00 T xen_bind_pirq_msi_to_irq
ffffffff811f4b00 T xen_destroy_irq
ffffffff811f4c30 T xen_irq_from_pirq
ffffffff811f4ca0 T bind_virq_to_irq
ffffffff811f4f10 T bind_virq_to_irqhandler
ffffffff811f4f90 T bind_ipi_to_irqhandler
ffffffff811f5190 T xen_send_IPI_one
ffffffff811f51c0 T xen_debug_interrupt
ffffffff811f55d0 T xen_evtchn_do_upcall
ffffffff811f5610 T rebind_evtchn_irq
ffffffff811f56b0 T resend_irq_on_evtchn
ffffffff811f5700 T xen_set_irq_pending
ffffffff811f5730 T xen_test_irq_pending
ffffffff811f5760 T xen_poll_irq
ffffffff811f5770 T xen_irq_resume
ffffffff811f5be0 T xen_callback_vector
ffffffff811f5ce0 T xen_setup_shutdown_event
ffffffff811f5d10 t do_suspend
ffffffff811f5eb0 t xen_suspend
ffffffff811f5f50 t xen_post_suspend
ffffffff811f5f70 t xen_pre_suspend
ffffffff811f5f90 t xen_hvm_post_suspend
ffffffff811f5fb0 t do_reboot
ffffffff811f5fc0 t do_poweroff
ffffffff811f5fe0 t shutdown_event
ffffffff811f6010 t shutdown_handler
ffffffff811f6160 t balloon_retrieve
ffffffff811f61d0 T balloon_set_new_target
ffffffff811f61f0 T free_xenballooned_pages
ffffffff811f62a0 t decrease_reservation
ffffffff811f6580 t balloon_process
ffffffff811f68e0 T alloc_xenballooned_pages
ffffffff811f69f0 T xenbus_strstate
ffffffff811f6a10 T xenbus_map_ring_valloc
ffffffff811f6a20 T xenbus_unmap_ring_vfree
ffffffff811f6a30 T xenbus_read_driver_state
ffffffff811f6a70 t xenbus_va_dev_error
ffffffff811f6ba0 T xenbus_dev_error
ffffffff811f6be0 t xenbus_unmap_ring_vfree_pv
ffffffff811f6d50 T xenbus_free_evtchn
ffffffff811f6de0 t __xenbus_switch_state
ffffffff811f6f00 T xenbus_switch_state
ffffffff811f6f10 T xenbus_dev_fatal
ffffffff811f6f60 t xenbus_map_ring_valloc_pv
ffffffff811f70f0 T xenbus_bind_evtchn
ffffffff811f71b0 T xenbus_alloc_evtchn
ffffffff811f7260 T xenbus_grant_ring
ffffffff811f72b0 T xenbus_frontend_closed
ffffffff811f72d0 t xenbus_switch_fatal
ffffffff811f7340 T xenbus_watch_path
ffffffff811f73c0 T xenbus_watch_pathfmt
ffffffff811f7490 T xenbus_unmap_ring
ffffffff811f7520 t xenbus_unmap_ring_vfree_hvm
ffffffff811f76a0 T xenbus_map_ring
ffffffff811f7760 t xenbus_map_ring_valloc_hvm
ffffffff811f78c0 t wake_waiting
ffffffff811f7910 T xb_write
ffffffff811f7b20 T xb_data_to_read
ffffffff811f7b40 T xb_wait_for_data_to_read
ffffffff811f7c10 T xb_read
ffffffff811f7d50 T xb_init_comms
ffffffff811f7e20 t transaction_start
ffffffff811f7e50 t xs_error
ffffffff811f7e70 t split
ffffffff811f7f60 t find_watch
ffffffff811f7fb0 t xenbus_thread
ffffffff811f8240 t xenwatch_thread
ffffffff811f83a0 t read_reply
ffffffff811f84f0 t xs_talkv
ffffffff811f86c0 t xs_watch
ffffffff811f8720 t xs_single
ffffffff811f8780 T unregister_xenbus_watch
ffffffff811f8970 T register_xenbus_watch
ffffffff811f8a70 t join
ffffffff811f8ac0 T xenbus_rm
ffffffff811f8b10 T xenbus_mkdir
ffffffff811f8b60 T xenbus_write
ffffffff811f8c00 T xenbus_printf
ffffffff811f8cc0 T xenbus_read
ffffffff811f8d10 T xenbus_gather
ffffffff811f8e80 T xenbus_scanf
ffffffff811f8f00 T xenbus_directory
ffffffff811f8f90 T xenbus_exists
ffffffff811f8fc0 t transaction_end
ffffffff811f8ff0 T xenbus_dev_request_and_reply
ffffffff811f90a0 T xenbus_transaction_end
ffffffff811f90e0 T xenbus_transaction_start
ffffffff811f9140 T xs_suspend
ffffffff811f91f0 T xs_resume
ffffffff811f9280 T xs_suspend_cancel
ffffffff811f92c0 T xs_init
ffffffff811f9420 T xenbus_dev_cancel
ffffffff811f9430 T xenbus_dev_suspend
ffffffff811f9480 t modalias_show
ffffffff811f94b0 t devtype_show
ffffffff811f94e0 t nodename_show
ffffffff811f9510 T xenbus_probe
ffffffff811f9530 T unregister_xenstore_notifier
ffffffff811f9540 t talk_to_otherend
ffffffff811f95b0 t xenbus_dev_release
ffffffff811f95d0 T xenbus_dev_resume
ffffffff811f96f0 t cleanup_dev
ffffffff811f9790 T xenbus_probe_devices
ffffffff811f9880 t cmp_dev
ffffffff811f98d0 T xenbus_match
ffffffff811f9930 T xenbus_unregister_driver
ffffffff811f9940 T xenbus_register_driver_common
ffffffff811f9960 T xenbus_dev_remove
ffffffff811f99f0 T xenbus_dev_shutdown
ffffffff811f9a90 T xenbus_dev_probe
ffffffff811f9bc0 T xenbus_otherend_changed
ffffffff811f9cb0 T xenbus_read_otherend_details
ffffffff811f9d80 T register_xenstore_notifier
ffffffff811f9dc0 T xenbus_probe_node
ffffffff811f9f20 T xenbus_device_find
ffffffff811f9f60 T xenbus_dev_changed
ffffffff811fa100 t backend_probe_and_watch
ffffffff811fa130 t backend_changed
ffffffff811fa150 t xenbus_uevent_backend
ffffffff811fa210 t frontend_changed
ffffffff811fa220 t xenbus_probe_backend
ffffffff811fa330 t backend_bus_id
ffffffff811fa4b0 T xenbus_register_backend
ffffffff811fa4d0 t read_frontend_details
ffffffff811fa4f0 T xenbus_dev_is_online
ffffffff811fa530 t xenbus_file_poll
ffffffff811fa580 t free_watch_adapter
ffffffff811fa5a0 t queue_cleanup
ffffffff811fa5f0 t xenbus_file_release
ffffffff811fa730 t xenbus_file_open
ffffffff811fa800 t xenbus_file_read
ffffffff811faa20 t queue_reply
ffffffff811faab0 t watch_fired
ffffffff811fac50 t xenbus_file_write
ffffffff811fb170 t xenbus_backend_ioctl
ffffffff811fb1d0 t xenbus_backend_open
ffffffff811fb230 t xenbus_backend_mmap
ffffffff811fb2c0 t xenbus_uevent_frontend
ffffffff811fb2f0 t backend_changed
ffffffff811fb300 t is_device_connecting
ffffffff811fb3a0 t non_essential_device_connecting
ffffffff811fb3b0 t essential_device_connecting
ffffffff811fb3c0 t xenbus_probe_frontend
ffffffff811fb450 t wait_loop
ffffffff811fb4d0 t wait_for_devices
ffffffff811fb5c0 t frontend_changed
ffffffff811fb5e0 t frontend_probe_and_watch
ffffffff811fb920 t xenbus_reset_backend_state_changed
ffffffff811fb980 T xenbus_register_frontend
ffffffff811fb9c0 t read_backend_details
ffffffff811fb9e0 t frontend_bus_id
ffffffff811fba90 t print_device_status
ffffffff811fbb00 T xen_biovec_phys_mergeable
ffffffff811fbc70 t vcpu_online
ffffffff811fbd50 t handle_vcpu_hotplug_event
ffffffff811fbe20 t setup_cpu_watcher
ffffffff811fbe90 t balloon_exit
ffffffff811fbea0 t watch_target
ffffffff811fbef0 t show_high_kb
ffffffff811fbf20 t show_low_kb
ffffffff811fbf50 t show_current_kb
ffffffff811fbf80 t show_target
ffffffff811fbfb0 t show_target_kb
ffffffff811fbfe0 t store_target
ffffffff811fc040 t store_target_kb
ffffffff811fc0a0 t balloon_init_watcher
ffffffff811fc0d0 t selfballoon_process
ffffffff811fc210 T register_xen_selfballooning
ffffffff811fc220 t store_selfballoon_min_usable_mb
ffffffff811fc280 t store_selfballoon_uphys
ffffffff811fc2e0 t store_selfballoon_downhys
ffffffff811fc340 t store_selfballoon_interval
ffffffff811fc3a0 t store_selfballooning
ffffffff811fc450 t show_selfballoon_min_usable_mb
ffffffff811fc480 t show_selfballoon_uphys
ffffffff811fc4b0 t show_selfballoon_downhys
ffffffff811fc4e0 t show_selfballoon_interval
ffffffff811fc510 t show_selfballooning
ffffffff811fc540 t hyp_sysfs_show
ffffffff811fc560 t hyp_sysfs_store
ffffffff811fc580 t type_show
ffffffff811fc590 t minor_show
ffffffff811fc5d0 t major_show
ffffffff811fc610 t features_show
ffffffff811fc670 t pagesize_show
ffffffff811fc6b0 t extra_show
ffffffff811fc740 t compile_date_show
ffffffff811fc7d0 t compiled_by_show
ffffffff811fc860 t compiler_show
ffffffff811fc8f0 t virtual_start_show
ffffffff811fc980 t changeset_show
ffffffff811fca10 t capabilities_show
ffffffff811fcaa0 t uuid_show
ffffffff811fcb50 t do_hvm_evtchn_intr
ffffffff811fcb70 t platform_pci_resume
ffffffff811fcbd0 T alloc_xen_mmio
ffffffff811fcc00 t tmem_cleancache_flush_page
ffffffff811fcc70 t tmem_cleancache_flush_inode
ffffffff811fcce0 t tmem_cleancache_flush_fs
ffffffff811fcd50 t tmem_cleancache_init_fs
ffffffff811fcdb0 t tmem_cleancache_init_shared_fs
ffffffff811fce10 t tmem_cleancache_put_page
ffffffff811fcf10 t tmem_cleancache_get_page
ffffffff811fd030 T xen_swiotlb_dma_mapping_error
ffffffff811fd040 t xen_phys_to_bus
ffffffff811fd090 t xen_virt_to_bus
ffffffff811fd0b0 T xen_swiotlb_dma_supported
ffffffff811fd0d0 t range_straddles_page_boundary
ffffffff811fd1c0 t xen_bus_to_phys
ffffffff811fd290 t is_xen_swiotlb_buffer
ffffffff811fd3d0 T xen_swiotlb_map_page
ffffffff811fd4e0 T xen_swiotlb_free_coherent
ffffffff811fd5a0 T xen_swiotlb_alloc_coherent
ffffffff811fd730 t xen_swiotlb_sync_single
ffffffff811fd7f0 T xen_swiotlb_sync_sg_for_device
ffffffff811fd850 T xen_swiotlb_sync_sg_for_cpu
ffffffff811fd8a0 T xen_swiotlb_sync_single_for_device
ffffffff811fd8b0 T xen_swiotlb_sync_single_for_cpu
ffffffff811fd8c0 t xen_unmap_single
ffffffff811fd970 T xen_swiotlb_unmap_page
ffffffff811fd980 T xen_swiotlb_unmap_sg_attrs
ffffffff811fd9d0 T xen_swiotlb_unmap_sg
ffffffff811fd9e0 T xen_swiotlb_map_sg_attrs
ffffffff811fdb60 T xen_swiotlb_map_sg
ffffffff811fdb70 t xen_add_device
ffffffff811fdee0 t xen_pci_notifier
ffffffff811fe040 t hung_up_tty_read
ffffffff811fe050 t hung_up_tty_write
ffffffff811fe060 t hung_up_tty_poll
ffffffff811fe070 t hung_up_tty_ioctl
ffffffff811fe090 t hung_up_tty_compat_ioctl
ffffffff811fe0b0 T tty_hung_up_p
ffffffff811fe0c0 T tty_pair_get_tty
ffffffff811fe0e0 T tty_pair_get_pty
ffffffff811fe100 t dev_match_devt
ffffffff811fe110 T tty_put_char
ffffffff811fe150 T tty_set_operations
ffffffff811fe160 T tty_devnum
ffffffff811fe180 t tty_devnode
ffffffff811fe1b0 t show_cons_active
ffffffff811fe280 T get_current_tty
ffffffff811fe310 T tty_get_pgrp
ffffffff811fe360 T tty_register_device
ffffffff811fe470 t check_tty_count
ffffffff811fe520 T tty_free_termios
ffffffff811fe560 T tty_shutdown
ffffffff811fe5b0 T __alloc_tty_driver
ffffffff811fe600 T tty_unregister_driver
ffffffff811fe680 T tty_unregister_device
ffffffff811fe6a0 T tty_register_driver
ffffffff811fe990 t destruct_tty_driver
ffffffff811fea50 T tty_driver_kref_put
ffffffff811fea70 T put_tty_driver
ffffffff811fea80 T do_SAK
ffffffff811feaa0 T tty_hangup
ffffffff811feab0 t queue_release_one_tty
ffffffff811feb10 T tty_kref_put
ffffffff811feb30 t release_tty
ffffffff811feb70 t __proc_set_tty
ffffffff811fecd0 T stop_tty
ffffffff811feda0 T tty_init_termios
ffffffff811fee90 T tty_wakeup
ffffffff811fef10 T start_tty
ffffffff811fefe0 T tty_standard_install
ffffffff811ff050 T tty_check_change
ffffffff811ff160 T tty_name
ffffffff811ff1a0 T alloc_tty_struct
ffffffff811ff1c0 T free_tty_struct
ffffffff811ff1f0 t release_one_tty
ffffffff811ff2b0 T tty_alloc_file
ffffffff811ff2e0 T tty_add_file
ffffffff811ff350 T tty_free_file
ffffffff811ff370 T tty_del_file
ffffffff811ff3f0 T tty_paranoia_check
ffffffff811ff460 t __tty_fasync
ffffffff811ff5d0 t tty_fasync
ffffffff811ff610 t tty_compat_ioctl
ffffffff811ff6f0 t tty_poll
ffffffff811ff790 t tty_read
ffffffff811ff890 T __tty_hangup
ffffffff811ffc30 t do_tty_hangup
ffffffff811ffc40 T tty_vhangup
ffffffff811ffc50 T tty_vhangup_self
ffffffff811ffc80 T tty_write_unlock
ffffffff811ffcb0 T tty_write_lock
ffffffff811ffd10 t tty_write
ffffffff811fff90 T redirected_tty_write
ffffffff81200060 t send_break
ffffffff81200150 T tty_write_message
ffffffff812001e0 T tty_driver_remove_tty
ffffffff81200210 T tty_do_resize
ffffffff812002e0 T __do_SAK
ffffffff812004f0 t do_SAK_work
ffffffff81200500 T initialize_tty_struct
ffffffff812007b0 T tty_init_dev
ffffffff812008e0 T deinitialize_tty_struct
ffffffff812008f0 T proc_clear_tty
ffffffff81200960 t session_clear_tty
ffffffff81200990 T tty_release
ffffffff81200ef0 t tty_open
ffffffff812014d0 T disassociate_ctty
ffffffff81201730 T no_tty
ffffffff81201760 T tty_ioctl
ffffffff812022b0 T tty_default_fops
ffffffff81202310 T console_sysfs_notify
ffffffff81202340 t add_echo_byte
ffffffff812023e0 T n_tty_inherit_ops
ffffffff81202410 t put_tty_queue
ffffffff812024a0 t n_tty_chars_in_buffer
ffffffff81202540 t echo_char_raw
ffffffff812025c0 t echo_set_canon_col
ffffffff81202610 t echo_char
ffffffff812026d0 t do_output_char
ffffffff812028c0 t process_output
ffffffff81202920 t n_tty_poll
ffffffff81202af0 t copy_from_read_buf
ffffffff81202c40 t n_tty_write_wakeup
ffffffff81202c80 t process_echoes
ffffffff81202f60 t n_tty_write
ffffffff812033c0 t n_tty_set_room
ffffffff81203430 t n_tty_set_termios
ffffffff81203810 t reset_buffer_flags
ffffffff81203950 t n_tty_flush_buffer
ffffffff812039f0 t n_tty_receive_buf
ffffffff81204b30 t n_tty_close
ffffffff81204b80 t n_tty_open
ffffffff81204c40 t n_tty_ioctl
ffffffff81204cf0 t n_tty_read
ffffffff81205530 T is_ignored
ffffffff81205570 T tty_chars_in_buffer
ffffffff81205590 T tty_driver_flush_buffer
ffffffff812055b0 T tty_termios_baud_rate
ffffffff81205600 T tty_termios_input_baud_rate
ffffffff81205660 T tty_termios_encode_baud_rate
ffffffff81205790 T tty_encode_baud_rate
ffffffff812057a0 T tty_termios_copy_hw
ffffffff812057d0 t send_prio_char
ffffffff81205890 t copy_termios
ffffffff81205900 t tty_change_softcar
ffffffff812059b0 T tty_unthrottle
ffffffff81205a10 T tty_throttle
ffffffff81205a70 t get_termio
ffffffff81205b10 T tty_set_termios
ffffffff81205e00 T tty_wait_until_sent
ffffffff81205f20 t set_termiox
ffffffff81206000 T tty_write_room
ffffffff81206020 T tty_termios_hw_change
ffffffff81206050 T tty_perform_flush
ffffffff81206110 T tty_get_baud_rate
ffffffff81206160 t set_termios
ffffffff81206380 T tty_mode_ioctl
ffffffff81206850 T n_tty_compat_ioctl_helper
ffffffff81206880 T n_tty_ioctl_helper
ffffffff81206a80 t tty_ldiscs_seq_start
ffffffff81206aa0 t tty_ldiscs_seq_next
ffffffff81206ac0 t tty_ldiscs_seq_stop
ffffffff81206ad0 t proc_tty_ldiscs_open
ffffffff81206ae0 t tty_ldisc_try
ffffffff81206b40 T tty_ldisc_ref
ffffffff81206b50 T tty_unregister_ldisc
ffffffff81206bc0 T tty_register_ldisc
ffffffff81206c30 t get_ldops
ffffffff81206cb0 t tty_ldiscs_seq_show
ffffffff81206d50 t put_ldisc
ffffffff81206e10 T tty_ldisc_deref
ffffffff81206e20 t tty_set_termios_ldisc
ffffffff81206e70 t tty_ldisc_halt
ffffffff81206e90 t tty_ldisc_flush_works
ffffffff81206ec0 T tty_ldisc_flush
ffffffff81206f10 t tty_ldisc_close.isra.3
ffffffff81206f80 t tty_ldisc_open.isra.4
ffffffff81206ff0 t tty_ldisc_get
ffffffff812070d0 t tty_ldisc_reinit
ffffffff81207140 t tty_ldisc_restore.isra.6
ffffffff81207210 t tty_ldisc_wait_idle.isra.7
ffffffff812072b0 T tty_ldisc_ref_wait
ffffffff81207350 T tty_ldisc_enable
ffffffff81207380 T tty_set_ldisc
ffffffff81207670 T tty_ldisc_hangup
ffffffff81207940 T tty_ldisc_setup
ffffffff812079d0 T tty_ldisc_release
ffffffff81207a50 T tty_ldisc_init
ffffffff81207a80 T tty_ldisc_deinit
ffffffff81207aa0 T tty_ldisc_begin
ffffffff81207ab0 t __tty_buffer_flush
ffffffff81207b40 t flush_to_ldisc
ffffffff81207d00 T tty_schedule_flip
ffffffff81207d60 T tty_buffer_request_room
ffffffff81207f10 T tty_prepare_flip_string_flags
ffffffff81207f70 T tty_prepare_flip_string
ffffffff81207fe0 T tty_insert_flip_string_flags
ffffffff812080c0 T tty_insert_flip_string_fixed_flag
ffffffff81208190 T tty_flip_buffer_push
ffffffff81208210 T tty_buffer_free_all
ffffffff81208280 T tty_buffer_flush
ffffffff81208370 T tty_flush_to_ldisc
ffffffff81208380 T tty_buffer_init
ffffffff812083e0 T tty_port_carrier_raised
ffffffff81208400 T tty_port_raise_dtr_rts
ffffffff81208420 T tty_port_lower_dtr_rts
ffffffff81208440 t tty_port_shutdown
ffffffff812084a0 T tty_port_close_end
ffffffff81208570 T tty_port_close_start
ffffffff81208780 T tty_port_block_til_ready
ffffffff81208a50 T tty_port_hangup
ffffffff81208b00 T tty_port_tty_get
ffffffff81208b70 T tty_port_tty_set
ffffffff81208c00 T tty_port_open
ffffffff81208d00 T tty_port_free_xmit_buf
ffffffff81208d60 t tty_port_destructor
ffffffff81208dd0 T tty_port_put
ffffffff81208e00 T tty_port_alloc_xmit_buf
ffffffff81208e70 T tty_port_init
ffffffff81208fb0 T tty_port_close
ffffffff81209020 t pty_chars_in_buffer
ffffffff81209030 t pty_open
ffffffff81209090 t pty_set_termios
ffffffff812090b0 t ptm_unix98_lookup
ffffffff812090c0 t ptm_unix98_remove
ffffffff812090d0 t pts_unix98_remove
ffffffff812090e0 t pts_unix98_lookup
ffffffff81209110 t pty_flush_buffer
ffffffff812091c0 t pty_unthrottle
ffffffff812091e0 t pty_write
ffffffff81209260 t pty_unix98_shutdown
ffffffff81209280 t pty_close
ffffffff812093c0 t pty_unix98_install
ffffffff812095c0 T pty_resize
ffffffff81209700 t pty_write_room
ffffffff81209730 t pty_unix98_ioctl
ffffffff81209820 t ptmx_open
ffffffff81209960 t tty_audit_log
ffffffff81209a60 t tty_audit_buf_push_current
ffffffff81209ad0 t tty_audit_buf_free
ffffffff81209b00 t tty_audit_buf_put
ffffffff81209b20 T tty_audit_exit
ffffffff81209bb0 T tty_audit_fork
ffffffff81209c20 T tty_audit_tiocsti
ffffffff81209d70 T tty_audit_push_task
ffffffff81209ee0 T tty_audit_add_data
ffffffff8120a240 T tty_audit_push
ffffffff8120a370 T pm_set_vt_switch
ffffffff8120a390 t vt_event_wait
ffffffff8120a4b0 t vt_event_wait_ioctl
ffffffff8120a520 T vt_event_post
ffffffff8120a5e0 T vt_waitactive
ffffffff8120a630 T reset_vc
ffffffff8120a6c0 t complete_change_console
ffffffff8120a7b0 T vt_ioctl
ffffffff8120b9e0 T vc_SAK
ffffffff8120ba10 T vt_compat_ioctl
ffffffff8120be30 T change_console
ffffffff8120bef0 T vt_move_to_console
ffffffff8120bf90 t vcs_release
ffffffff8120bfc0 t vcs_open
ffffffff8120c010 t vcs_vc
ffffffff8120c090 t vcs_size
ffffffff8120c100 t vcs_lseek
ffffffff8120c1b0 t vcs_write
ffffffff8120c780 t vcs_read
ffffffff8120cbe0 t vcs_poll_data_get.part.3
ffffffff8120ccc0 t vcs_fasync
ffffffff8120cd20 t vcs_poll
ffffffff8120cda0 t vcs_notifier
ffffffff8120ce10 T vcs_make_sysfs
ffffffff8120ce70 T vcs_remove_sysfs
ffffffff8120ceb0 t sel_pos
ffffffff8120cee0 T clear_selection
ffffffff8120cf30 T sel_loadlut
ffffffff8120cf90 T set_selection
ffffffff8120d610 T paste_selection
ffffffff8120d770 t fn_compose
ffffffff8120d780 t k_ignore
ffffffff8120d790 t kbd_bh
ffffffff8120d7f0 t kd_nosound
ffffffff8120d810 t kbd_disconnect
ffffffff8120d830 t kbd_connect
ffffffff8120d8d0 t fn_SAK
ffffffff8120d8f0 t fn_send_intr
ffffffff8120d9a0 T vt_get_leds
ffffffff8120da00 t k_lowercase
ffffffff8120da10 t k_cons
ffffffff8120da30 t fn_lastcons
ffffffff8120da40 t fn_spawn_con
ffffffff8120dab0 t fn_inc_console
ffffffff8120db10 t fn_dec_console
ffffffff8120db70 t fn_boot_it
ffffffff8120db80 t fn_scroll_back
ffffffff8120db90 t fn_scroll_forw
ffffffff8120dba0 t fn_hold
ffffffff8120dbe0 t fn_show_state
ffffffff8120dbf0 t fn_show_mem
ffffffff8120dc00 t fn_show_ptregs
ffffffff8120dc20 t do_compute_shiftstate
ffffffff8120dce0 t fn_null
ffffffff8120dcf0 t kbd_update_leds_helper
ffffffff8120dd80 t kbd_start
ffffffff8120ddc0 t kbd_rate_helper
ffffffff8120de40 t getkeycode_helper
ffffffff8120de60 t setkeycode_helper
ffffffff8120de80 T unregister_keyboard_notifier
ffffffff8120de90 T register_keyboard_notifier
ffffffff8120dea0 t put_queue.isra.1
ffffffff8120df50 t puts_queue.isra.2
ffffffff8120e010 t applkey
ffffffff8120e040 t to_utf8
ffffffff8120e110 t k_shift
ffffffff8120e200 t handle_diacr
ffffffff8120e2f0 t k_dead2
ffffffff8120e330 t k_dead
ffffffff8120e370 t kbd_event
ffffffff8120ea40 t fn_caps_toggle
ffffffff8120ea70 t fn_caps_on
ffffffff8120ea90 t fn_bare_num
ffffffff8120eac0 t fn_num
ffffffff8120eae0 t k_spec
ffffffff8120eb30 t k_fn
ffffffff8120eb60 t k_cur
ffffffff8120eb90 t k_pad
ffffffff8120ed30 t k_meta
ffffffff8120eda0 t k_ascii
ffffffff8120edf0 t k_lock
ffffffff8120ee20 t k_slock
ffffffff8120ee90 t kbd_match
ffffffff8120ef00 t k_unicode.part.17
ffffffff8120ef70 t k_self
ffffffff8120efc0 t fn_enter
ffffffff8120f050 t kd_sound_helper
ffffffff8120f0f0 T kd_mksound
ffffffff8120f160 t k_brlcommit.constprop.24
ffffffff8120f1f0 t k_brl
ffffffff8120f390 T kbd_rate
ffffffff8120f3d0 T compute_shiftstate
ffffffff8120f400 T getledstate
ffffffff8120f410 T setledstate
ffffffff8120f4a0 T vt_set_led_state
ffffffff8120f4b0 T vt_kbd_con_start
ffffffff8120f4e0 T vt_kbd_con_stop
ffffffff8120f510 T vt_do_diacrit
ffffffff8120f920 T vt_do_kdskbmode
ffffffff8120fa20 T vt_do_kdskbmeta
ffffffff8120faa0 T vt_do_kbkeycode_ioctl
ffffffff8120fcd0 T vt_do_kdsk_ioctl
ffffffff81210030 T vt_do_kdgkb_ioctl
ffffffff81210490 T vt_do_kdskled
ffffffff81210610 T vt_do_kdgkbmode
ffffffff81210640 T vt_do_kdgkbmeta
ffffffff81210660 T vt_reset_unicode
ffffffff812106b0 T vt_get_shift_state
ffffffff812106c0 T vt_reset_keyboard
ffffffff81210730 T vt_get_kbd_mode_bit
ffffffff81210750 T vt_set_kbd_mode_bit
ffffffff812107b0 T vt_clr_kbd_mode_bit
ffffffff81210820 t con_release_unimap
ffffffff812108f0 t con_insert_unipair
ffffffff81210a80 T inverse_translate
ffffffff81210af0 t set_inverse_trans_unicode.isra.2
ffffffff81210c20 t con_unify_unimap.isra.3
ffffffff81210d80 T set_translate
ffffffff81210da0 T con_get_trans_new
ffffffff81210e00 T con_free_unimap
ffffffff81210e50 T con_copy_unimap
ffffffff81210ec0 T con_clear_unimap
ffffffff81210f90 T con_get_unimap
ffffffff81211080 T con_protect_unimap
ffffffff812110a0 T conv_8bit_to_uni
ffffffff812110c0 T conv_uni_to_8bit
ffffffff81211110 T conv_uni_to_pc
ffffffff812111c0 t set_inverse_transl
ffffffff81211300 T con_set_default_unimap
ffffffff81211480 T con_set_unimap
ffffffff812116a0 t update_user_maps
ffffffff81211720 T con_set_trans_new
ffffffff81211780 T con_set_trans_old
ffffffff812117f0 T con_get_trans_old
ffffffff81211890 t do_update_region
ffffffff81211a00 t add_softcursor
ffffffff81211ab0 t gotoxy
ffffffff81211b60 t gotoxay
ffffffff81211b80 t vt_console_device
ffffffff81211ba0 t con_write_room
ffffffff81211bc0 t con_chars_in_buffer
ffffffff81211bd0 t con_throttle
ffffffff81211be0 t con_close
ffffffff81211bf0 T con_is_bound
ffffffff81211c20 T con_debug_enter
ffffffff81211c90 T con_debug_leave
ffffffff81211d00 T screen_glyph
ffffffff81211d40 t vtconsole_init_device
ffffffff81211db0 t show_name
ffffffff81211df0 t show_bind
ffffffff81211e40 t visual_init
ffffffff81211f50 T register_con_driver
ffffffff81212080 t set_origin
ffffffff81212140 t hide_cursor
ffffffff812121d0 t scrup
ffffffff812122f0 t notify_write
ffffffff81212320 t lf
ffffffff81212380 t notify_update
ffffffff812123b0 T do_blank_screen
ffffffff81212630 T unregister_con_driver
ffffffff81212720 T give_up_console
ffffffff81212730 t con_start
ffffffff81212760 t con_stop
ffffffff81212790 t con_unthrottle
ffffffff812127b0 t respond_string
ffffffff81212880 t show_tty_active
ffffffff812128b0 T unregister_vt_notifier
ffffffff812128c0 T register_vt_notifier
ffffffff812128d0 t build_attr
ffffffff812129b0 t update_attr
ffffffff81212a60 t insert_char
ffffffff81212bc0 t set_palette
ffffffff81212c20 t set_get_cmap
ffffffff81212d80 t set_cursor
ffffffff81212e00 T redraw_screen
ffffffff81213090 t bind_con_driver
ffffffff81213430 T take_over_console
ffffffff812134a0 T unbind_con_driver
ffffffff812136b0 t store_bind
ffffffff812138e0 t csi_J
ffffffff81213b30 t reset_terminal
ffffffff81213d80 t vc_init
ffffffff81213e50 t vc_do_resize
ffffffff812142b0 t vt_resize
ffffffff81214310 T vc_resize
ffffffff81214320 t vt_console_print
ffffffff812146f0 t con_flush_chars
ffffffff81214740 T update_region
ffffffff812147f0 t blank_screen_t
ffffffff81214840 t con_shutdown
ffffffff81214880 T do_unblank_screen
ffffffff81214a40 T unblank_screen
ffffffff81214a50 T schedule_console_callback
ffffffff81214a60 T invert_screen
ffffffff81214c80 t set_mode
ffffffff81214f00 T complement_pos
ffffffff81215080 T vc_cons_allocated
ffffffff812150b0 T vc_allocate
ffffffff81215270 t con_open
ffffffff81215350 T vc_deallocate
ffffffff81215450 T scrollback
ffffffff81215470 T scrollfront
ffffffff81215490 T mouse_report
ffffffff812154e0 T mouse_reporting
ffffffff81215510 T set_console
ffffffff81215580 T vt_kmsg_redirect
ffffffff812155a0 T tioclinux
ffffffff81215830 T poke_blanked_console
ffffffff81215900 t console_callback
ffffffff81215a40 T con_set_cmap
ffffffff81215a70 T con_get_cmap
ffffffff81215aa0 T reset_palette
ffffffff81215ae0 t do_con_write.part.17
ffffffff81217960 t con_put_char
ffffffff812179a0 t con_write
ffffffff812179f0 T con_font_op
ffffffff81217e50 T screen_pos
ffffffff81217e90 T getconsxy
ffffffff81217eb0 T putconsxy
ffffffff81217ef0 T vcs_scr_readw
ffffffff81217f10 T vcs_scr_writew
ffffffff81217f30 T vcs_scr_updated
ffffffff81217f40 t hvc_console_device
ffffffff81217f60 t hvc_write_room
ffffffff81217f80 t hvc_chars_in_buffer
ffffffff81217fa0 t hvc_tiocmget
ffffffff81217fd0 t hvc_tiocmset
ffffffff81218000 t hvc_console_print
ffffffff81218110 t hvc_push
ffffffff81218190 t hvc_get_by_index
ffffffff81218250 t destroy_hvc_struct
ffffffff812182e0 T hvc_remove
ffffffff81218380 T hvc_kick
ffffffff812183a0 t hvc_unthrottle
ffffffff812183b0 t hvc_open
ffffffff81218530 t hvc_hangup
ffffffff81218600 t hvc_write
ffffffff812186e0 T hvc_alloc
ffffffff81218980 t hvc_set_winsz
ffffffff81218a30 T __hvc_resize
ffffffff81218a40 T hvc_poll
ffffffff81218c70 t khvcd
ffffffff81218db0 T hvc_instantiate
ffffffff81218e70 t hvc_close
ffffffff81218f80 t hvc_handle_interrupt
ffffffff81218fa0 T notifier_add_irq
ffffffff81218fe0 T notifier_del_irq
ffffffff81219010 T notifier_hangup_irq
ffffffff81219020 t dom0_read_console
ffffffff81219040 t dom0_write_console
ffffffff81219060 t domU_write_console
ffffffff812191f0 t domU_read_console
ffffffff81219300 t xen_hvm_console_init
ffffffff81219490 t xen_pv_console_init
ffffffff81219630 t xencons_disconnect_backend
ffffffff812196a0 t xencons_connect_backend
ffffffff81219920 t xencons_resume
ffffffff812199d0 t xenboot_write_console
ffffffff81219ab0 t xen_cons_init
ffffffff81219b20 t xen_console_remove
ffffffff81219bd0 t xencons_remove
ffffffff81219bf0 t xencons_backend_changed
ffffffff81219c20 T xen_console_resume
ffffffff81219c90 T xen_raw_console_write
ffffffff81219cb0 T xen_raw_printk
ffffffff81219d20 t read_null
ffffffff81219d30 t write_null
ffffffff81219d40 t pipe_to_null
ffffffff81219d50 t write_full
ffffffff81219d60 t null_lseek
ffffffff81219d70 t memory_open
ffffffff81219df0 t mem_devnode
ffffffff81219e20 t write_port
ffffffff81219ec0 t read_port
ffffffff81219f60 t kmsg_writev
ffffffff8121a040 t mmap_zero
ffffffff8121a060 t splice_write_null
ffffffff8121a070 t open_port
ffffffff8121a090 t memory_lseek
ffffffff8121a120 t read_zero
ffffffff8121a1f0 t write_mem
ffffffff8121a2d0 t read_mem
ffffffff8121a3b0 t mmap_mem
ffffffff8121a450 t random_poll
ffffffff8121a4d0 t mix_pool_bytes_extract
ffffffff8121a640 t account
ffffffff8121a770 t extract_buf
ffffffff8121a8a0 t random_fasync
ffffffff8121a8b0 t write_pool
ffffffff8121a940 t init_std_data
ffffffff8121aa00 t rand_initialize
ffffffff8121aa30 t credit_entropy_bits.part.3
ffffffff8121aae0 t add_timer_randomness
ffffffff8121ac00 t random_ioctl
ffffffff8121ad90 T add_input_randomness
ffffffff8121add0 t xfer_secondary_pool.part.5
ffffffff8121ae60 t xfer_secondary_pool
ffffffff8121aea0 t extract_entropy
ffffffff8121af30 T get_random_bytes
ffffffff8121afc0 T generate_random_uuid
ffffffff8121aff0 t proc_do_uuid
ffffffff8121b110 t extract_entropy_user
ffffffff8121b200 t urandom_read
ffffffff8121b210 t random_write
ffffffff8121b270 t random_read
ffffffff8121b3a0 T add_interrupt_randomness
ffffffff8121b3d0 T add_disk_randomness
ffffffff8121b400 T rand_initialize_irq
ffffffff8121b460 T rand_initialize_disk
ffffffff8121b490 T get_random_int
ffffffff8121b510 T randomize_range
ffffffff8121b570 t misc_seq_stop
ffffffff8121b580 t misc_open
ffffffff8121b710 t misc_seq_open
ffffffff8121b720 t misc_seq_show
ffffffff8121b750 t misc_seq_next
ffffffff8121b760 t misc_seq_start
ffffffff8121b780 t misc_devnode
ffffffff8121b7c0 T misc_deregister
ffffffff8121b870 T misc_register
ffffffff8121b9a0 t nvram_llseek
ffffffff8121b9f0 t nvram_open
ffffffff8121ba80 t nvram_release
ffffffff8121bad0 t nvram_proc_open
ffffffff8121baf0 T __nvram_write_byte
ffffffff8121bb00 T nvram_write_byte
ffffffff8121bb50 T __nvram_read_byte
ffffffff8121bb60 t __nvram_set_checksum
ffffffff8121bbb0 t nvram_ioctl
ffffffff8121bc70 T __nvram_check_checksum
ffffffff8121bcc0 t nvram_write
ffffffff8121be10 T nvram_check_checksum
ffffffff8121be60 t nvram_read
ffffffff8121bf60 T nvram_read_byte
ffffffff8121bfb0 t nvram_proc_read
ffffffff8121c320 T vga_client_register
ffffffff8121c3d0 t vga_arb_open
ffffffff8121c470 t __vga_tryget
ffffffff8121c670 t __vga_put
ffffffff8121c730 t __vga_set_legacy_decoding
ffffffff8121c930 T vga_set_legacy_decoding
ffffffff8121c940 T vga_put
ffffffff8121c9c0 t vga_arb_release
ffffffff8121cac0 t vga_arb_read
ffffffff8121cc80 t vga_arb_fpoll
ffffffff8121ccd0 t vga_arbiter_notify_clients.part.8
ffffffff8121ce90 T vga_tryget
ffffffff8121cf70 T vga_get
ffffffff8121d120 t vga_str_to_iostate.isra.10
ffffffff8121d1b0 t vga_arb_write
ffffffff8121d710 t vga_arbiter_add_pci_device.part.13
ffffffff8121da30 t pci_notify
ffffffff8121dba0 T vga_default_device
ffffffff8121dbb0 t dev_attr_store
ffffffff8121dbd0 t device_namespace
ffffffff8121dc00 t dev_uevent_filter
ffffffff8121dc30 t class_dir_child_ns_type
ffffffff8121dc40 t __match_devt
ffffffff8121dc50 T get_device
ffffffff8121dc70 t klist_children_get
ffffffff8121dc80 T put_device
ffffffff8121dca0 t klist_children_put
ffffffff8121dcb0 t class_dir_release
ffffffff8121dcc0 t device_create_release
ffffffff8121dcd0 t root_device_release
ffffffff8121dce0 T device_rename
ffffffff8121ddd0 T dev_set_name
ffffffff8121de20 t device_release
ffffffff8121dea0 T device_find_child
ffffffff8121df20 T device_for_each_child
ffffffff8121df80 t show_uevent
ffffffff8121e080 t show_dev
ffffffff8121e0b0 t device_remove_sys_dev_entry
ffffffff8121e110 t device_remove_class_symlinks
ffffffff8121e190 t device_remove_groups
ffffffff8121e1d0 t device_add_groups
ffffffff8121e250 T device_initialize
ffffffff8121e2d0 T device_schedule_callback_owner
ffffffff8121e2f0 T device_remove_bin_file
ffffffff8121e310 t device_remove_bin_attributes
ffffffff8121e360 T device_create_bin_file
ffffffff8121e380 T device_remove_file
ffffffff8121e3a0 t device_remove_attributes
ffffffff8121e3e0 t device_remove_attrs
ffffffff8121e470 T device_create_file
ffffffff8121e490 T device_show_int
ffffffff8121e4c0 T device_show_ulong
ffffffff8121e4f0 T device_store_int
ffffffff8121e560 T device_store_ulong
ffffffff8121e5c0 T dev_driver_string
ffffffff8121e600 t dev_attr_show
ffffffff8121e650 t dev_uevent_name
ffffffff8121e670 T __dev_printk
ffffffff8121e700 T _dev_info
ffffffff8121e760 T dev_notice
ffffffff8121e7c0 T dev_warn
ffffffff8121e820 T dev_err
ffffffff8121e880 t store_uevent
ffffffff8121e8e0 T dev_crit
ffffffff8121e940 T dev_alert
ffffffff8121e9a0 T dev_emerg
ffffffff8121ea00 T dev_printk
ffffffff8121ea50 t get_device_parent.isra.13
ffffffff8121ec00 t cleanup_glue_dir.isra.14
ffffffff8121ec30 T device_move
ffffffff8121ee80 T device_del
ffffffff8121f010 T device_unregister
ffffffff8121f030 T device_destroy
ffffffff8121f070 T root_device_unregister
ffffffff8121f0a0 T device_private_init
ffffffff8121f110 T device_add
ffffffff8121f7b0 T device_register
ffffffff8121f7d0 T device_create_vargs
ffffffff8121f8e0 T device_create
ffffffff8121f920 T __root_device_register
ffffffff8121fa10 T device_get_devnode
ffffffff8121fb30 t dev_uevent
ffffffff8121fc90 T to_root_device
ffffffff8121fca0 T device_shutdown
ffffffff8121fd70 t drv_attr_show
ffffffff8121fd90 t drv_attr_store
ffffffff8121fdc0 t bus_attr_show
ffffffff8121fde0 t bus_attr_store
ffffffff8121fe10 t bus_uevent_filter
ffffffff8121fe20 t store_drivers_autoprobe
ffffffff8121fe50 T bus_get_kset
ffffffff8121fe60 T bus_get_device_klist
ffffffff8121fe70 t klist_devices_put
ffffffff8121fe80 t system_root_device_release
ffffffff8121fe90 t driver_release
ffffffff8121fea0 t bus_put
ffffffff8121fec0 t bus_get
ffffffff8121fee0 T subsys_dev_iter_exit
ffffffff8121fef0 T subsys_dev_iter_next
ffffffff8121ff30 t next_device
ffffffff8121ff50 T subsys_dev_iter_init
ffffffff8121ffa0 T subsys_interface_unregister
ffffffff81220040 T subsys_interface_register
ffffffff81220100 T bus_for_each_drv
ffffffff81220180 T bus_for_each_dev
ffffffff81220200 T bus_rescan_devices
ffffffff81220210 T bus_sort_breadthfirst
ffffffff81220390 T bus_unregister_notifier
ffffffff812203a0 T bus_register_notifier
ffffffff812203b0 t bus_uevent_store
ffffffff81220400 t driver_uevent_store
ffffffff81220450 t bus_rescan_devices_helper
ffffffff812204d0 t show_drivers_autoprobe
ffffffff81220500 t klist_devices_get
ffffffff81220510 T subsys_find_device_by_id
ffffffff812205e0 T bus_find_device
ffffffff81220660 T bus_find_device_by_name
ffffffff81220670 t store_drivers_probe
ffffffff812206b0 T device_reprobe
ffffffff81220700 t driver_unbind
ffffffff812207c0 t driver_bind
ffffffff812208d0 t match_name
ffffffff81220910 T bus_remove_file
ffffffff81220970 T bus_unregister
ffffffff81220a20 T bus_create_file
ffffffff81220a80 T __bus_register
ffffffff81220d70 t device_remove_attrs.isra.5
ffffffff81220db0 T subsys_system_register
ffffffff81220ea0 T bus_add_device
ffffffff81221010 T bus_probe_device
ffffffff812210c0 T bus_remove_device
ffffffff812211d0 T bus_add_driver
ffffffff81221410 T bus_remove_driver
ffffffff812214d0 T dev_get_drvdata
ffffffff81221500 t deferred_probe_work_func
ffffffff81221590 T dev_set_drvdata
ffffffff812215d0 t driver_sysfs_remove
ffffffff81221610 t __device_release_driver
ffffffff812216f0 T device_release_driver
ffffffff81221730 T driver_attach
ffffffff81221750 T wait_for_device_probe
ffffffff812217d0 t driver_deferred_probe_trigger.part.2
ffffffff81221850 t deferred_probe_initcall
ffffffff812218b0 t driver_sysfs_add
ffffffff81221950 T driver_deferred_probe_del
ffffffff812219a0 t driver_bound
ffffffff81221a40 T device_bind_driver
ffffffff81221a70 T device_attach
ffffffff81221b30 T driver_probe_done
ffffffff81221b50 T driver_probe_device
ffffffff81221d50 t __driver_attach
ffffffff81221df0 t __device_attach
ffffffff81221e50 T driver_detach
ffffffff81221f10 T unregister_syscore_ops
ffffffff81221f60 T register_syscore_ops
ffffffff81221fa0 T syscore_resume
ffffffff81222070 T syscore_suspend
ffffffff812221a0 T syscore_shutdown
ffffffff81222210 T driver_find
ffffffff81222250 T driver_register
ffffffff81222370 T driver_remove_file
ffffffff81222390 T driver_create_file
ffffffff812223b0 T driver_find_device
ffffffff81222440 T driver_for_each_device
ffffffff812224c0 T driver_unregister
ffffffff81222540 t class_attr_show
ffffffff81222560 t class_attr_store
ffffffff81222580 t class_attr_namespace
ffffffff812225a0 t class_child_ns_type
ffffffff812225b0 T class_compat_remove_link
ffffffff81222610 T class_compat_create_link
ffffffff812226b0 T class_compat_unregister
ffffffff812226d0 t class_create_release
ffffffff812226e0 t class_release
ffffffff81222700 T class_compat_register
ffffffff81222770 T show_class_attr_string
ffffffff812227a0 t klist_class_dev_get
ffffffff812227b0 T class_dev_iter_exit
ffffffff812227c0 T class_dev_iter_next
ffffffff81222800 T class_dev_iter_init
ffffffff81222850 T class_interface_unregister
ffffffff812228f0 T class_interface_register
ffffffff812229b0 t klist_class_dev_put
ffffffff812229c0 T class_remove_file
ffffffff812229e0 T class_unregister
ffffffff81222a40 T class_destroy
ffffffff81222a60 T class_create_file
ffffffff81222a80 T __class_register
ffffffff81222c10 T __class_create
ffffffff81222cb0 T class_find_device
ffffffff81222d50 T class_for_each_device
ffffffff81222df0 T platform_get_resource
ffffffff81222e50 T platform_get_irq
ffffffff81222ed0 t platform_drv_probe
ffffffff81222ef0 t platform_drv_probe_fail
ffffffff81222f00 t platform_drv_remove
ffffffff81222f20 t platform_drv_shutdown
ffffffff81222f40 T platform_pm_freeze
ffffffff81222f90 T platform_pm_thaw
ffffffff81222fd0 T platform_pm_poweroff
ffffffff81223020 T platform_pm_restore
ffffffff81223060 T dma_get_required_mask
ffffffff812230b0 t modalias_show
ffffffff812230f0 t platform_uevent
ffffffff81223120 T platform_get_resource_byname
ffffffff812231b0 T platform_get_irq_byname
ffffffff812231e0 T platform_driver_unregister
ffffffff812231f0 T platform_driver_register
ffffffff81223230 T platform_driver_probe
ffffffff812232d0 t platform_device_release
ffffffff81223310 T platform_device_del
ffffffff81223390 T platform_device_add
ffffffff81223560 T platform_device_add_data
ffffffff812235d0 T platform_device_add_resources
ffffffff81223650 T platform_device_put
ffffffff81223670 T platform_device_unregister
ffffffff81223690 t platform_match
ffffffff81223710 W arch_setup_pdev_archdata
ffffffff81223720 T platform_device_register
ffffffff81223740 T platform_add_devices
ffffffff812237b0 T platform_device_alloc
ffffffff81223850 T platform_create_bundle
ffffffff81223940 T platform_device_register_full
ffffffff81223a30 t cpu_device_release
ffffffff81223a40 T get_cpu_device
ffffffff81223a80 T cpu_is_hotpluggable
ffffffff81223ac0 t print_cpus_kernel_max
ffffffff81223af0 t show_cpus_attr
ffffffff81223b20 t print_cpus_offline
ffffffff81223c10 t cpu_release_store
ffffffff81223c20 t cpu_probe_store
ffffffff81223c30 t show_online
ffffffff81223c70 T unregister_cpu
ffffffff81223cf0 T kobj_map
ffffffff81223e90 T kobj_unmap
ffffffff81223f70 T kobj_lookup
ffffffff812240d0 T kobj_map_init
ffffffff81224170 t group_open_release
ffffffff81224180 t group_close_release
ffffffff81224190 t find_dr
ffffffff81224230 t find_group
ffffffff81224290 t devm_kzalloc_release
ffffffff812242a0 t devm_kzalloc_match
ffffffff812242b0 T devres_alloc
ffffffff81224310 T devres_remove
ffffffff812243b0 T devres_find
ffffffff81224440 T devres_remove_group
ffffffff81224500 t release_nodes
ffffffff812246d0 T devres_release_group
ffffffff81224780 t add_dr
ffffffff812247b0 T devres_close_group
ffffffff81224840 T devres_open_group
ffffffff81224910 T devres_add
ffffffff81224970 T devm_kzalloc
ffffffff812249e0 T devres_free
ffffffff81224a00 T devres_destroy
ffffffff81224a30 T devres_get
ffffffff81224ae0 T devm_kfree
ffffffff81224b20 T devres_release_all
ffffffff81224b70 T attribute_container_classdev_to_container
ffffffff81224b80 T attribute_container_find_class_device
ffffffff81224be0 t internal_container_klist_get
ffffffff81224bf0 t attribute_container_release
ffffffff81224c10 t internal_container_klist_put
ffffffff81224c20 T attribute_container_unregister
ffffffff81224cc0 T attribute_container_register
ffffffff81224d20 T attribute_container_device_trigger
ffffffff81224df0 T attribute_container_trigger
ffffffff81224e60 T attribute_container_add_attrs
ffffffff81224ee0 T attribute_container_add_class_device
ffffffff81224f00 T attribute_container_add_device
ffffffff81225050 T attribute_container_add_class_device_adapter
ffffffff81225060 T attribute_container_remove_attrs
ffffffff812250e0 T attribute_container_remove_device
ffffffff812251d0 T attribute_container_class_device_del
ffffffff812251f0 t anon_transport_dummy_function
ffffffff81225200 t transport_setup_classdev
ffffffff81225220 t transport_configure
ffffffff81225240 T transport_destroy_device
ffffffff81225250 t transport_destroy_classdev
ffffffff81225280 T transport_remove_device
ffffffff81225290 T transport_configure_device
ffffffff812252a0 T transport_add_device
ffffffff812252b0 t transport_remove_classdev
ffffffff81225320 T transport_setup_device
ffffffff81225330 T anon_transport_class_register
ffffffff81225370 T transport_class_unregister
ffffffff81225380 T transport_class_register
ffffffff81225390 t transport_add_class_device
ffffffff812253e0 T anon_transport_class_unregister
ffffffff81225400 t show_cpumap
ffffffff81225450 t show_thread_cpumask
ffffffff81225470 t show_thread_cpumask_list
ffffffff81225490 t show_core_cpumask
ffffffff812254b0 t show_core_cpumask_list
ffffffff812254d0 t show_core_id
ffffffff81225510 t show_physical_package_id
ffffffff81225550 t dev_mount
ffffffff81225560 t dev_rmdir
ffffffff81225630 t handle_remove
ffffffff81225880 t handle_create.isra.2
ffffffff81225ac0 t devtmpfsd
ffffffff81225c00 T devtmpfs_create_node
ffffffff81225d10 T devtmpfs_delete_node
ffffffff81225de0 T devtmpfs_mount
ffffffff81225e50 t wake_show
ffffffff81225ea0 t autosuspend_delay_ms_show
ffffffff81225ee0 t control_show
ffffffff81225f20 t rtpm_status_show
ffffffff81225fc0 t pm_qos_latency_show
ffffffff81225ff0 t wakeup_active_show
ffffffff81226090 t wakeup_hit_count_show
ffffffff81226130 t wakeup_active_count_show
ffffffff812261d0 t wakeup_count_show
ffffffff81226270 t wakeup_last_time_show
ffffffff81226350 t wakeup_max_time_show
ffffffff81226430 t wakeup_total_time_show
ffffffff81226510 t wake_store
ffffffff812265d0 t autosuspend_delay_ms_store
ffffffff81226660 t rtpm_active_time_show
ffffffff812266e0 t rtpm_suspended_time_show
ffffffff81226760 t control_store
ffffffff81226850 t pm_qos_latency_store
ffffffff812268a0 T dpm_sysfs_add
ffffffff81226950 T wakeup_sysfs_add
ffffffff81226960 T wakeup_sysfs_remove
ffffffff81226970 T pm_qos_sysfs_add
ffffffff81226980 T pm_qos_sysfs_remove
ffffffff81226990 T rpm_sysfs_remove
ffffffff812269a0 T dpm_sysfs_remove
ffffffff812269d0 T pm_generic_runtime_suspend
ffffffff81226a00 T pm_generic_runtime_resume
ffffffff81226a30 T pm_generic_suspend_noirq
ffffffff81226a60 T pm_generic_suspend_late
ffffffff81226a90 T pm_generic_suspend
ffffffff81226ac0 T pm_generic_freeze_noirq
ffffffff81226af0 T pm_generic_freeze_late
ffffffff81226b20 T pm_generic_freeze
ffffffff81226b50 T pm_generic_poweroff_noirq
ffffffff81226b80 T pm_generic_poweroff_late
ffffffff81226bb0 T pm_generic_poweroff
ffffffff81226be0 T pm_generic_thaw_noirq
ffffffff81226c10 T pm_generic_thaw_early
ffffffff81226c40 T pm_generic_thaw
ffffffff81226c70 T pm_generic_resume_noirq
ffffffff81226ca0 T pm_generic_resume_early
ffffffff81226cd0 T pm_generic_resume
ffffffff81226d00 T pm_generic_restore_noirq
ffffffff81226d30 T pm_generic_restore_early
ffffffff81226d60 T pm_generic_restore
ffffffff81226d90 T pm_generic_runtime_idle
ffffffff81226dd0 T pm_generic_prepare
ffffffff81226e00 T pm_generic_complete
ffffffff81226e30 T dev_pm_put_subsys_data
ffffffff81226ec0 T dev_pm_get_subsys_data
ffffffff81226f70 T dev_pm_qos_remove_global_notifier
ffffffff81226f80 T dev_pm_qos_add_global_notifier
ffffffff81226f90 T dev_pm_qos_remove_notifier
ffffffff81226ff0 T dev_pm_qos_add_notifier
ffffffff81227050 t apply_constraint
ffffffff812270c0 T dev_pm_qos_remove_request
ffffffff81227270 T dev_pm_qos_hide_latency_limit
ffffffff812272a0 T dev_pm_qos_update_request
ffffffff81227350 T dev_pm_qos_add_request
ffffffff81227500 T dev_pm_qos_add_ancestor_request
ffffffff81227540 T dev_pm_qos_expose_latency_limit
ffffffff81227610 T __dev_pm_qos_read_value
ffffffff81227630 T dev_pm_qos_read_value
ffffffff81227690 T dev_pm_qos_constraints_init
ffffffff812276d0 T dev_pm_qos_constraints_destroy
ffffffff81227890 t pm_op
ffffffff812278e0 t pm_late_early_op
ffffffff81227930 t pm_noirq_op
ffffffff81227980 t dpm_wait
ffffffff812279c0 T device_pm_wait_for_dev
ffffffff812279f0 t dpm_wait_fn
ffffffff81227a10 t pm_dev_err
ffffffff81227ac0 t initcall_debug_start
ffffffff81227b30 t initcall_debug_report
ffffffff81227b90 t dpm_show_time
ffffffff81227cd0 T __suspend_report_result
ffffffff81227cf0 t dpm_run_callback.isra.5
ffffffff81227d70 t dpm_resume_early
ffffffff81227f70 t dpm_resume_noirq
ffffffff81228180 T dpm_suspend_end
ffffffff81228660 T dpm_resume_start
ffffffff81228670 t __device_suspend
ffffffff81228890 t async_suspend
ffffffff81228920 t device_resume
ffffffff81228a80 t async_resume
ffffffff81228ac0 T device_pm_init
ffffffff81228b60 T device_pm_lock
ffffffff81228b70 T device_pm_unlock
ffffffff81228b80 T device_pm_add
ffffffff81228c10 T device_pm_remove
ffffffff81228c80 T device_pm_move_before
ffffffff81228cd0 T device_pm_move_after
ffffffff81228d20 T device_pm_move_last
ffffffff81228d60 T dpm_resume
ffffffff81228f70 T dpm_complete
ffffffff81229110 T dpm_resume_end
ffffffff81229120 T dpm_suspend
ffffffff81229340 T dpm_prepare
ffffffff81229520 T dpm_suspend_start
ffffffff81229570 t wakeup_source_activate
ffffffff812295a0 t pm_wakeup_update_hit_counts
ffffffff81229600 T __pm_stay_awake
ffffffff81229690 T pm_stay_awake
ffffffff81229700 T device_set_wakeup_capable
ffffffff81229780 T wakeup_source_remove
ffffffff812297e0 T wakeup_source_add
ffffffff81229870 T wakeup_source_prepare
ffffffff81229930 T wakeup_source_create
ffffffff81229990 T wakeup_source_register
ffffffff812299c0 t wakeup_source_deactivate
ffffffff81229a30 T __pm_wakeup_event
ffffffff81229b20 T pm_wakeup_event
ffffffff81229bb0 T __pm_relax
ffffffff81229c20 T pm_relax
ffffffff81229c90 T wakeup_source_drop
ffffffff81229cc0 T wakeup_source_destroy
ffffffff81229cf0 T wakeup_source_unregister
ffffffff81229d10 T device_wakeup_disable
ffffffff81229d90 T device_wakeup_enable
ffffffff81229e50 T device_set_wakeup_enable
ffffffff81229e80 t pm_wakeup_timer_fn
ffffffff81229ef0 T device_init_wakeup
ffffffff81229f20 T pm_wakeup_pending
ffffffff81229fb0 T pm_get_wakeup_count
ffffffff8122a030 T pm_save_wakeup_count
ffffffff8122a0a0 t rpm_check_suspend_allowed
ffffffff8122a140 t rpm_update_qos_constraint
ffffffff8122a1e0 t pm_runtime_cancel_pending
ffffffff8122a220 t __rpm_callback
ffffffff8122a2b0 T pm_runtime_no_callbacks
ffffffff8122a320 T pm_runtime_enable
ffffffff8122a3b0 t __pm_runtime_barrier
ffffffff8122a530 T pm_runtime_autosuspend_expiration
ffffffff8122a5c0 t rpm_suspend
ffffffff8122ac20 t pm_suspend_timer_fn
ffffffff8122acb0 t rpm_idle
ffffffff8122ae70 T pm_runtime_allow
ffffffff8122aef0 T __pm_runtime_idle
ffffffff8122af70 t rpm_resume
ffffffff8122b520 T pm_runtime_forbid
ffffffff8122b590 T __pm_runtime_disable
ffffffff8122b6c0 T pm_runtime_barrier
ffffffff8122b790 T __pm_runtime_resume
ffffffff8122b800 T pm_runtime_irq_safe
ffffffff8122b860 T __pm_runtime_set_status
ffffffff8122baa0 T __pm_runtime_suspend
ffffffff8122bb20 T pm_schedule_suspend
ffffffff8122bc00 t pm_runtime_work
ffffffff8122bcc0 t update_autosuspend
ffffffff8122bd30 T __pm_runtime_use_autosuspend
ffffffff8122bdb0 T pm_runtime_set_autosuspend_delay
ffffffff8122be20 T update_pm_runtime_accounting
ffffffff8122be60 T pm_runtime_init
ffffffff8122bf50 T pm_runtime_remove
ffffffff8122bfa0 T pm_runtime_update_max_time_suspended
ffffffff8122c020 t dmam_coherent_release
ffffffff8122c0d0 t dmam_noncoherent_release
ffffffff8122c180 T dmam_free_noncoherent
ffffffff8122c260 T dmam_free_coherent
ffffffff8122c340 T dmam_alloc_noncoherent
ffffffff8122c4a0 T dmam_alloc_coherent
ffffffff8122c600 t dmam_match
ffffffff8122c650 t firmware_timeout_store
ffffffff8122c680 t firmware_timeout_show
ffffffff8122c6b0 t firmware_loading_show
ffffffff8122c6e0 t fw_dev_release
ffffffff8122c730 t firmware_uevent
ffffffff8122c7c0 T request_firmware_nowait
ffffffff8122c8c0 t fw_load_abort
ffffffff8122c8d0 t firmware_class_timeout
ffffffff8122c8e0 t _request_firmware_load
ffffffff8122cac0 t firmware_free_data
ffffffff8122cb30 T release_firmware
ffffffff8122cb90 t firmware_loading_store
ffffffff8122cd20 t firmware_data_read
ffffffff8122ce40 t firmware_data_write
ffffffff8122d080 t _request_firmware_prepare.isra.7
ffffffff8122d250 t request_firmware_work_func
ffffffff8122d320 T request_firmware
ffffffff8122d410 t node_read_cpumask
ffffffff8122d420 t node_read_cpulist
ffffffff8122d430 t show_node_state
ffffffff8122d480 t node_read_vmstat
ffffffff8122d540 t node_read_numastat
ffffffff8122d7b0 t node_read_distance
ffffffff8122d880 t node_read_meminfo
ffffffff8122e8f0 t node_read_cpumap
ffffffff8122e940 T register_hugetlbfs_with_node
ffffffff8122e950 T register_node
ffffffff8122ea20 T unregister_node
ffffffff8122eaa0 T register_cpu_under_node
ffffffff8122eb60 T unregister_cpu_under_node
ffffffff8122ebd0 T register_one_node
ffffffff8122ec70 T unregister_one_node
ffffffff8122ec90 T module_add_driver
ffffffff8122ed90 T module_remove_driver
ffffffff8122ee20 T scsi_device_type
ffffffff8122ee50 T scsi_cmd_get_serial
ffffffff8122ee80 T __scsi_device_lookup_by_target
ffffffff8122eec0 T __scsi_device_lookup
ffffffff8122ef10 T __starget_for_each_device
ffffffff8122efc0 T scsi_device_put
ffffffff8122f020 T scsi_device_get
ffffffff8122f080 T scsi_device_lookup
ffffffff8122f130 T scsi_device_lookup_by_target
ffffffff8122f1f0 T __scsi_iterate_devices
ffffffff8122f280 T starget_for_each_device
ffffffff8122f340 t scsi_vpd_inquiry
ffffffff8122f3c0 T scsi_get_vpd_page
ffffffff8122f460 T scsi_finish_command
ffffffff8122f580 t scsi_done
ffffffff8122f590 t scsi_get_host_cmd_pool
ffffffff8122f630 t scsi_pool_alloc_command
ffffffff8122f6b0 T scsi_allocate_command
ffffffff8122f6d0 T scsi_adjust_queue_depth
ffffffff8122f810 T scsi_track_queue_full
ffffffff8122f8d0 t scsi_pool_free_command.isra.9
ffffffff8122f930 T __scsi_put_command
ffffffff8122f9d0 t scsi_host_alloc_command
ffffffff8122fa50 T __scsi_get_command
ffffffff8122fb40 T scsi_get_command
ffffffff8122fc00 T scsi_put_command
ffffffff8122fc70 t scsi_put_host_cmd_pool
ffffffff8122fce0 T scsi_free_command
ffffffff8122fd30 T scsi_setup_command_freelist
ffffffff8122fde0 T scsi_destroy_command_freelist
ffffffff8122fe60 T scsi_log_send
ffffffff8122ff80 T scsi_log_completion
ffffffff812301b0 T scsi_dispatch_cmd
ffffffff812303d0 t __scsi_host_match
ffffffff812303e0 T scsi_is_host_device
ffffffff812303f0 t scsi_host_dev_release
ffffffff812304d0 t scsi_host_cls_release
ffffffff812304e0 T scsi_host_put
ffffffff812304f0 T scsi_unregister
ffffffff81230540 T scsi_host_get
ffffffff81230580 T scsi_host_lookup
ffffffff812305d0 T scsi_host_alloc
ffffffff812308e0 T scsi_register
ffffffff81230960 T scsi_host_set_state
ffffffff81230a60 T scsi_add_host_with_dma
ffffffff81230d10 T scsi_remove_host
ffffffff81230e10 T scsi_flush_work
ffffffff81230e50 T scsi_queue_work
ffffffff81230ea0 T scsi_init_hosts
ffffffff81230ec0 T scsi_exit_hosts
ffffffff81230ed0 T scsi_nonblockable_ioctl
ffffffff81230fc0 t ioctl_internal_command.constprop.4
ffffffff81231170 T scsi_set_medium_removal
ffffffff81231200 T scsi_ioctl
ffffffff812315e0 T scsi_sense_key_string
ffffffff81231600 T scsi_show_result
ffffffff81231650 T scsi_print_result
ffffffff812316d0 T scsi_show_sense_hdr
ffffffff81231760 T scsi_print_status
ffffffff81231790 t print_opcode_name
ffffffff812319e0 T __scsi_print_command
ffffffff81231a70 T scsi_extd_sense_format
ffffffff81231af0 T scsi_show_extd_sense
ffffffff81231be0 T scsi_cmd_print_sense_hdr
ffffffff81231cf0 T scsi_print_sense_hdr
ffffffff81231d50 T scsi_print_command
ffffffff81231e30 t scsi_decode_sense_buffer.part.3
ffffffff81231ed0 t scsi_decode_sense_extras
ffffffff81232120 T scsi_print_sense
ffffffff81232250 T __scsi_print_sense
ffffffff812322f0 T scsi_partsize
ffffffff812323f0 T scsi_bios_ptable
ffffffff81232520 T scsicam_bios_param
ffffffff812326a0 t __scsi_report_device_reset
ffffffff812326b0 t scsi_try_bus_device_reset
ffffffff81232700 T scsi_eh_restore_cmnd
ffffffff81232760 T scsi_eh_finish_cmd
ffffffff812327a0 T scsi_report_bus_reset
ffffffff812327f0 T scsi_report_device_reset
ffffffff81232840 t scsi_reset_provider_done_command
ffffffff81232850 T scsi_sense_desc_find
ffffffff812328e0 T scsi_build_sense_buffer
ffffffff81232910 t scsi_try_bus_reset
ffffffff81232a20 t scsi_try_host_reset
ffffffff81232b30 t scsi_try_target_reset
ffffffff81232bd0 T scsi_reset_provider
ffffffff81232dc0 t scsi_handle_queue_ramp_up
ffffffff81232eb0 t scsi_handle_queue_full
ffffffff81232f30 t scsi_eh_done
ffffffff81232f90 t eh_lock_door_done
ffffffff81232fa0 T scsi_eh_prep_cmnd
ffffffff812331d0 T scsi_block_when_processing_errors
ffffffff812332c0 T scsi_get_sense_info_fld
ffffffff81233390 T scsi_normalize_sense
ffffffff81233450 T scsi_command_normalize_sense
ffffffff81233470 t scsi_check_sense
ffffffff812337e0 t scsi_send_eh_cmnd
ffffffff81233b90 t scsi_eh_try_stu
ffffffff81233c00 t scsi_eh_tur
ffffffff81233ca0 t scsi_eh_test_devices
ffffffff81233e00 T scsi_eh_ready_devs
ffffffff812346e0 T scsi_eh_wakeup
ffffffff81234730 T scsi_schedule_eh
ffffffff81234790 T scsi_eh_scmd_add
ffffffff81234870 T scsi_times_out
ffffffff81234900 T scsi_noretry_cmd
ffffffff812349a0 T scsi_eh_flush_done_q
ffffffff81234ac0 T scsi_decide_disposition
ffffffff81234c60 T scsi_eh_get_sense
ffffffff81234e50 T scsi_error_handler
ffffffff812354e0 t scsi_lld_busy
ffffffff81235530 T scsi_block_requests
ffffffff81235540 T scsi_kunmap_atomic_sg
ffffffff81235550 T scsi_kmap_atomic_sg
ffffffff812356f0 T scsi_target_resume
ffffffff81235700 T scsi_target_quiesce
ffffffff81235710 T scsi_internal_device_unblock
ffffffff81235790 t device_unblock
ffffffff812357a0 t scsi_run_queue
ffffffff81235a00 T sdev_evt_alloc
ffffffff81235a40 T sdev_evt_send
ffffffff81235ad0 T scsi_device_set_state
ffffffff81235c00 T scsi_internal_device_block
ffffffff81235c60 t device_block
ffffffff81235c70 T scsi_device_resume
ffffffff81235ca0 t device_resume_fn
ffffffff81235cb0 t scsi_get_cmd_from_req
ffffffff81235d00 t __scsi_release_buffers
ffffffff81235e10 T scsi_release_buffers
ffffffff81235e20 t scsi_requeue_command
ffffffff81235eb0 T scsi_execute
ffffffff81236020 T scsi_execute_req
ffffffff81236140 T scsi_test_unit_ready
ffffffff81236260 T scsi_mode_sense
ffffffff81236590 T scsi_mode_select
ffffffff81236780 T scsi_calculate_bounce_limit
ffffffff812367d0 T __scsi_alloc_queue
ffffffff812368e0 t target_unblock
ffffffff81236910 t target_block
ffffffff81236940 T scsi_target_unblock
ffffffff81236980 T scsi_target_block
ffffffff812369c0 T scsi_prep_state_check
ffffffff81236a50 T sdev_evt_send_simple
ffffffff81236ac0 T scsi_device_quiesce
ffffffff81236b10 t device_quiesce_fn
ffffffff81236b20 t scsi_init_cmd_errh
ffffffff81236c10 t scsi_kill_request.isra.29
ffffffff81236d40 t scsi_request_fn
ffffffff81237190 t scsi_init_sgtable
ffffffff81237220 T scsi_init_io
ffffffff812372d0 t scsi_sg_free
ffffffff81237310 t scsi_sg_alloc
ffffffff81237350 T scsi_prep_return
ffffffff812373f0 T scsi_setup_fs_cmnd
ffffffff81237480 T scsi_setup_blk_pc_cmnd
ffffffff81237590 T scsi_prep_fn
ffffffff812375e0 T scsi_device_unbusy
ffffffff812376b0 t __scsi_queue_insert
ffffffff81237800 T scsi_queue_insert
ffffffff81237810 t scsi_softirq_done
ffffffff81237970 T scsi_requeue_run_queue
ffffffff81237980 T scsi_next_command
ffffffff812379e0 T scsi_run_host_queues
ffffffff81237a20 T scsi_unblock_requests
ffffffff81237a30 T scsi_io_completion
ffffffff81238090 T scsi_alloc_queue
ffffffff812380f0 T scsi_free_queue
ffffffff81238160 T scsi_exit_queue
ffffffff812381b0 T scsi_evt_thread
ffffffff812382c0 T scsi_dma_map
ffffffff81238390 T scsi_dma_unmap
ffffffff812383f0 T scsi_is_target_device
ffffffff81238400 t sanitize_inquiry_string
ffffffff81238450 T scsilun_to_int
ffffffff81238470 t scsi_target_dev_release
ffffffff81238490 t scsi_target_destroy
ffffffff81238540 t scsi_alloc_target
ffffffff81238820 t scsi_alloc_sdev
ffffffff81238aa0 T scsi_complete_async_scans
ffffffff81238be0 T int_to_scsilun
ffffffff81238c10 T scsi_rescan_device
ffffffff81238c70 t scsi_target_reap_usercontext
ffffffff81238cb0 T scsi_free_host_dev
ffffffff81238cd0 t scsi_probe_and_add_lun
ffffffff81239930 T scsi_target_reap
ffffffff812399f0 T scsi_get_host_dev
ffffffff81239aa0 t __scsi_scan_target
ffffffff8123a170 t scsi_scan_channel.part.8
ffffffff8123a1e0 T scsi_scan_target
ffffffff8123a2e0 T __scsi_add_device
ffffffff8123a420 T scsi_add_device
ffffffff8123a450 T scsi_scan_host_selected
ffffffff8123a620 t do_scsi_scan_host
ffffffff8123a6c0 t do_scan_async
ffffffff8123a830 T scsi_scan_host
ffffffff8123aa50 T scsi_forget_host
ffffffff8123aac0 T scsi_is_sdev_device
ffffffff8123aad0 t store_host_reset
ffffffff8123aba0 t sdev_store_queue_type_rw
ffffffff8123ac90 t show_prot_guard_type
ffffffff8123acc0 t show_prot_capabilities
ffffffff8123acf0 t show_proc_name
ffffffff8123ad30 t show_unchecked_isa_dma
ffffffff8123ad70 t show_sg_prot_tablesize
ffffffff8123ada0 t show_sg_tablesize
ffffffff8123add0 t show_can_queue
ffffffff8123ae00 t show_cmd_per_lun
ffffffff8123ae30 t show_host_busy
ffffffff8123ae60 t show_unique_id
ffffffff8123ae90 t sdev_show_evt_media_change
ffffffff8123aec0 t sdev_show_modalias
ffffffff8123aef0 t show_iostat_ioerr_cnt
ffffffff8123af20 t show_iostat_iodone_cnt
ffffffff8123af50 t show_iostat_iorequest_cnt
ffffffff8123af80 t show_iostat_counterbits
ffffffff8123afb0 t sdev_show_timeout
ffffffff8123aff0 t sdev_show_rev
ffffffff8123b020 t sdev_show_model
ffffffff8123b050 t sdev_show_vendor
ffffffff8123b080 t sdev_show_scsi_level
ffffffff8123b0b0 t sdev_show_type
ffffffff8123b0e0 t sdev_show_device_blocked
ffffffff8123b110 t show_queue_type_field
ffffffff8123b160 t sdev_show_queue_depth
ffffffff8123b190 t show_state_field
ffffffff8123b200 t show_shost_state
ffffffff8123b280 t show_shost_mode
ffffffff8123b310 t show_shost_supported_mode
ffffffff8123b340 t check_set
ffffffff8123b3a0 t store_scan
ffffffff8123b480 t sdev_store_evt_media_change
ffffffff8123b4e0 t sdev_store_queue_depth_rw
ffffffff8123b580 t sdev_store_timeout
ffffffff8123b5e0 t store_state_field
ffffffff8123b690 t sdev_store_delete
ffffffff8123b6b0 t store_rescan_field
ffffffff8123b6c0 t scsi_device_dev_release
ffffffff8123b6e0 t scsi_device_cls_release
ffffffff8123b6f0 t scsi_device_dev_release_usercontext
ffffffff8123b8a0 t store_shost_state
ffffffff8123b950 T scsi_register_interface
ffffffff8123b960 T scsi_register_driver
ffffffff8123b970 t sdev_store_queue_ramp_up_period
ffffffff8123b9c0 t sdev_show_queue_ramp_up_period
ffffffff8123b9f0 t scsi_bus_match
ffffffff8123ba20 t show_shost_active_mode
ffffffff8123ba70 t scsi_bus_uevent
ffffffff8123bab0 T scsi_device_state_name
ffffffff8123baf0 T scsi_host_state_name
ffffffff8123bb30 T scsi_sysfs_register
ffffffff8123bb80 T scsi_sysfs_unregister
ffffffff8123bba0 T scsi_sysfs_add_sdev
ffffffff8123beb0 T __scsi_remove_device
ffffffff8123bf80 T scsi_remove_device
ffffffff8123bfc0 t sdev_store_delete_callback
ffffffff8123bfd0 t __scsi_remove_target
ffffffff8123c0c0 T scsi_remove_target
ffffffff8123c100 t __remove_child
ffffffff8123c120 T scsi_sysfs_add_host
ffffffff8123c1b0 T scsi_sysfs_device_initialize
ffffffff8123c300 t proc_scsi_devinfo_open
ffffffff8123c310 t devinfo_seq_show
ffffffff8123c380 t devinfo_seq_next
ffffffff8123c3f0 t devinfo_seq_stop
ffffffff8123c400 t devinfo_seq_start
ffffffff8123c490 T scsi_dev_info_remove_list
ffffffff8123c540 T scsi_get_device_flags_keyed
ffffffff8123c710 T scsi_dev_info_list_del_keyed
ffffffff8123c8e0 T scsi_dev_info_add_list
ffffffff8123c980 t scsi_strcpy_devinfo
ffffffff8123ca70 T scsi_dev_info_list_add_keyed
ffffffff8123cbf0 t scsi_dev_info_list_add_str
ffffffff8123cd00 t proc_scsi_devinfo_write
ffffffff8123cdc0 T scsi_get_device_flags
ffffffff8123cdd0 T scsi_exit_devinfo
ffffffff8123cdf0 T scsi_exit_sysctl
ffffffff8123ce00 t proc_scsi_read
ffffffff8123ce50 t always_match
ffffffff8123ce60 t proc_scsi_open
ffffffff8123ce70 t scsi_seq_show
ffffffff8123d080 t scsi_seq_start
ffffffff8123d0e0 t scsi_seq_next
ffffffff8123d120 t scsi_seq_stop
ffffffff8123d130 t proc_scsi_write_proc
ffffffff8123d1f0 t proc_scsi_write
ffffffff8123d4a0 T scsi_proc_hostdir_add
ffffffff8123d530 T scsi_proc_hostdir_rm
ffffffff8123d5a0 T scsi_proc_host_add
ffffffff8123d660 T scsi_proc_host_rm
ffffffff8123d6b0 T scsi_exit_procfs
ffffffff8123d6e0 T scsi_trace_parse_cdb
ffffffff8123d780 t scsi_dev_type_resume
ffffffff8123d7d0 t scsi_runtime_resume
ffffffff8123d7f0 t scsi_dev_type_suspend
ffffffff8123d850 t scsi_bus_suspend_common
ffffffff8123d8d0 t scsi_bus_poweroff
ffffffff8123d8f0 t scsi_bus_freeze
ffffffff8123d910 t scsi_bus_suspend
ffffffff8123d930 T scsi_autopm_get_device
ffffffff8123d990 T scsi_autopm_put_device
ffffffff8123d9b0 t scsi_runtime_idle
ffffffff8123d9e0 t scsi_runtime_suspend
ffffffff8123da50 t scsi_bus_resume_common
ffffffff8123dad0 t scsi_bus_prepare
ffffffff8123db10 T scsi_autopm_get_target
ffffffff8123db20 T scsi_autopm_put_target
ffffffff8123db30 T scsi_autopm_get_host
ffffffff8123db90 T scsi_autopm_put_host
ffffffff8123dbb0 t sd_config_discard
ffffffff8123dcf0 t sd_unlock_native_capacity
ffffffff8123dd20 t sd_store_max_medium_access_timeouts
ffffffff8123dd90 t sd_show_max_medium_access_timeouts
ffffffff8123ddc0 t sd_show_provisioning_mode
ffffffff8123de00 t sd_show_thin_provisioning
ffffffff8123de40 t sd_show_app_tag_own
ffffffff8123de70 t sd_show_protection_type
ffffffff8123dea0 t sd_show_manage_start_stop
ffffffff8123dee0 t sd_show_allow_restart
ffffffff8123df20 t sd_show_fua
ffffffff8123df60 t sd_show_cache_type
ffffffff8123dfb0 t sd_store_provisioning_mode
ffffffff8123e0d0 t sd_store_manage_start_stop
ffffffff8123e140 t sd_store_allow_restart
ffffffff8123e1c0 t sd_eh_action
ffffffff8123e3a0 t sd_completed_bytes
ffffffff8123e4a0 t sd_done
ffffffff8123e880 t __scsi_disk_get
ffffffff8123e8b0 t scsi_disk_get_from_dev
ffffffff8123e900 t scsi_disk_put
ffffffff8123e950 t sd_rescan
ffffffff8123e980 t scsi_disk_release
ffffffff8123ea00 t sd_probe
ffffffff8123ed40 t sd_getgeo
ffffffff8123edd0 t sd_compat_ioctl
ffffffff8123ee70 t sd_ioctl
ffffffff8123eff0 t sd_release
ffffffff8123f0d0 t sd_open
ffffffff8123f2b0 t sd_prep_fn
ffffffff81240060 t media_not_present
ffffffff812400d0 t sd_check_events
ffffffff812402a0 t sd_show_protection_mode
ffffffff81240320 t sd_print_sense_hdr.isra.24
ffffffff81240410 t sd_store_cache_type
ffffffff812405d0 t sd_print_result.isra.25
ffffffff81240630 t sd_start_stop_device
ffffffff81240760 t sd_resume
ffffffff812407f0 t sd_sync_cache
ffffffff812408e0 t sd_suspend
ffffffff81240a10 t sd_shutdown
ffffffff81240b60 t sd_remove
ffffffff81240c10 t read_capacity_error
ffffffff81240d40 t read_capacity_10
ffffffff81240f30 t read_capacity_16
ffffffff81241410 t sd_revalidate_disk
ffffffff81243020 t sd_unprep_fn
ffffffff81243050 t sd_major
ffffffff81243080 t sd_probe_async
ffffffff81243250 T atapi_cmd_type
ffffffff812432d0 T ata_tf_to_fis
ffffffff81243360 T ata_tf_from_fis
ffffffff812433b0 t ata_rwcmd_protocol
ffffffff81243430 T ata_pack_xfermask
ffffffff81243450 T ata_unpack_xfermask
ffffffff81243490 T ata_xfer_mask2mode
ffffffff812434e0 T ata_xfer_mode2mask
ffffffff81243540 T ata_xfer_mode2shift
ffffffff81243580 T ata_mode_string
ffffffff812435b0 T ata_id_xfermask
ffffffff81243690 T ata_cable_40wire
ffffffff812436a0 T ata_cable_80wire
ffffffff812436b0 T ata_cable_unknown
ffffffff812436c0 T ata_cable_ignore
ffffffff812436d0 T ata_cable_sata
ffffffff812436e0 T ata_dev_pair
ffffffff81243730 T ata_timing_merge
ffffffff81243800 T ata_timing_find_mode
ffffffff81243840 T ata_timing_cycle2mode
ffffffff81243940 t glob_match
ffffffff81243a60 T ata_std_qc_defer
ffffffff81243aa0 T ata_noop_qc_prep
ffffffff81243ab0 T ata_sg_init
ffffffff81243ad0 T sata_scr_valid
ffffffff81243af0 T sata_scr_read
ffffffff81243b20 T sata_scr_write
ffffffff81243b50 T sata_set_spd
ffffffff81243bd0 T sata_scr_write_flush
ffffffff81243c50 T ata_host_suspend
ffffffff81243c60 T ata_host_resume
ffffffff81243c70 t ata_dummy_qc_issue
ffffffff81243c80 t ata_dummy_error_handler
ffffffff81243c90 t ata_port_runtime_idle
ffffffff81243ca0 T ata_print_version
ffffffff81243cc0 T ata_dev_printk
ffffffff81243d30 T ata_link_printk
ffffffff81243dc0 T ata_port_printk
ffffffff81243e20 T ata_msleep
ffffffff81243e80 T ata_wait_register
ffffffff81243f30 T sata_link_debounce
ffffffff81244030 T sata_link_resume
ffffffff81244170 T ata_ratelimit
ffffffff81244190 t ata_host_stop
ffffffff81244210 T ata_pci_device_do_resume
ffffffff81244280 T ata_pci_device_resume
ffffffff812442d0 T pci_test_config_bits
ffffffff81244380 T ata_host_detach
ffffffff81244470 T ata_pci_remove_one
ffffffff81244490 T ata_host_init
ffffffff81244500 t ata_finalize_port_ops
ffffffff812445d0 t ata_host_release
ffffffff81244650 T ata_dev_next
ffffffff81244710 T ata_link_next
ffffffff812447b0 T ata_timing_compute
ffffffff81244af0 T sata_link_scr_lpm
ffffffff81244c20 t ata_qc_complete_internal
ffffffff81244c30 T ata_dev_classify
ffffffff81244c70 t ata_id_n_sectors
ffffffff81244d40 T ata_pio_need_iordy
ffffffff81244dc0 T ata_pci_device_do_suspend
ffffffff81244e20 T ata_pci_device_suspend
ffffffff81244e60 T ata_host_start
ffffffff81245000 T ata_id_string
ffffffff81245040 T ata_id_c_string
ffffffff81245080 t ata_dev_same_device
ffffffff812451d0 t ata_dev_blacklisted
ffffffff81245270 t ata_port_request_pm.constprop.41
ffffffff81245370 t ata_port_suspend_common
ffffffff81245390 t ata_port_poweroff
ffffffff812453c0 t ata_port_do_freeze
ffffffff81245400 t ata_port_suspend
ffffffff81245430 t ata_port_resume_common
ffffffff81245460 t ata_port_resume
ffffffff812454b0 T ata_dev_phys_link
ffffffff812454e0 T ata_force_cbl
ffffffff81245550 T ata_tf_read_block
ffffffff81245640 T ata_build_rw_tf
ffffffff81245910 T sata_spd_string
ffffffff81245930 T ata_tf_to_lba48
ffffffff81245970 T ata_tf_to_lba
ffffffff812459a0 T sata_down_spd_limit
ffffffff81245ac0 T ata_down_xfermask_limit
ffffffff81245d20 T ata_sg_clean
ffffffff81245df0 T atapi_check_dma
ffffffff81245e30 T swap_buf_le16
ffffffff81245e40 T ata_qc_new_init
ffffffff81245f60 T ata_qc_free
ffffffff81245fb0 T __ata_qc_complete
ffffffff812460f0 T ata_qc_complete
ffffffff81246330 T ata_qc_complete_multiple
ffffffff812463f0 T ata_qc_issue
ffffffff812467a0 T ata_exec_internal_sg
ffffffff81246cf0 T ata_exec_internal
ffffffff81246da0 T ata_dev_set_feature
ffffffff81246e20 T ata_do_dev_read_id
ffffffff81246e50 T ata_dev_read_id
ffffffff81247340 T ata_dev_reread_id
ffffffff81247490 T ata_dev_configure
ffffffff81248820 T ata_dev_revalidate
ffffffff81248a10 T ata_do_set_mode
ffffffff81249360 T ata_bus_probe
ffffffff812496b0 T ata_do_simple_cmd
ffffffff81249730 T ata_phys_link_online
ffffffff81249760 T ata_link_online
ffffffff812497f0 T ata_std_postreset
ffffffff812498d0 T ata_phys_link_offline
ffffffff81249900 T ata_link_offline
ffffffff81249980 T ata_wait_ready
ffffffff81249b20 T ata_wait_after_reset
ffffffff81249b70 T sata_link_hardreset
ffffffff81249d50 T sata_std_hardreset
ffffffff81249d90 T ata_std_prereset
ffffffff81249e10 T ata_dev_init
ffffffff81249f70 T ata_link_init
ffffffff8124a0b0 T ata_slave_link_init
ffffffff8124a150 T sata_link_init_spd
ffffffff8124a2b0 T ata_host_register
ffffffff8124a530 T ata_host_activate
ffffffff8124a660 T ata_port_alloc
ffffffff8124a7e0 T ata_host_alloc
ffffffff8124a8c0 T ata_host_alloc_pinfo
ffffffff8124a970 T __ata_port_probe
ffffffff8124a9d0 T ata_port_probe
ffffffff8124aa00 t async_port_probe
ffffffff8124aa70 t ata_scsi_em_message_store
ffffffff8124aaa0 t ata_scsi_em_message_show
ffffffff8124aad0 t ata_scsi_flush_xlat
ffffffff8124ab00 t scsi_16_lba_len
ffffffff8124ab80 t ata_scsiop_inq_00
ffffffff8124abb0 t ata_scsiop_inq_b1
ffffffff8124ac70 t ata_scsiop_inq_b2
ffffffff8124ac80 t ata_scsiop_noop
ffffffff8124ac90 t ata_msense_caching
ffffffff8124ad10 t ata_scsiop_read_cap
ffffffff8124af90 t ata_scsiop_report_luns
ffffffff8124afa0 T ata_sas_port_start
ffffffff8124afc0 T ata_sas_port_stop
ffffffff8124afd0 t ata_scsi_find_dev
ffffffff8124b050 t ata_scsi_activity_store
ffffffff8124b100 t ata_scsi_activity_show
ffffffff8124b180 t ata_scsi_em_message_type_show
ffffffff8124b1c0 t ata_scsi_lpm_show
ffffffff8124b210 t ata_scsi_park_store
ffffffff8124b3a0 t ata_scsi_park_show
ffffffff8124b490 t ata_scsi_lpm_store
ffffffff8124b550 t ata_scsi_translate
ffffffff8124b6c0 t ata_to_sense_error
ffffffff8124b880 t ata_gen_passthru_sense
ffffffff8124ba40 t atapi_sense_complete
ffffffff8124ba80 t atapi_xlat
ffffffff8124bc10 t atapi_qc_complete
ffffffff8124bf80 t ata_scsi_rbuf_fill
ffffffff8124c040 t ata_scsiop_inq_b0
ffffffff8124c0d0 t ata_scsi_dev_config
ffffffff8124c2d0 T ata_sas_slave_configure
ffffffff8124c300 T ata_sas_port_destroy
ffffffff8124c320 T ata_sas_sync_probe
ffffffff8124c330 T ata_sas_async_probe
ffffffff8124c340 T ata_sas_port_alloc
ffffffff8124c3a0 t ata_scsi_handle_link_detach
ffffffff8124c500 t ata_scsiop_inq_83
ffffffff8124c5e0 t ata_scsiop_inq_80
ffffffff8124c620 t ata_scsiop_inq_std
ffffffff8124c6c0 t ata_scsiop_inq_89
ffffffff8124c810 T ata_sas_port_init
ffffffff8124c840 t ata_scsi_qc_complete
ffffffff8124cc70 t atapi_drain_needed
ffffffff8124ccb0 t ata_scsi_set_sense.constprop.21
ffffffff8124ccd0 t ata_scsiop_mode_sense
ffffffff8124cfb0 t ata_scsi_rw_xlat
ffffffff8124d1d0 t ata_scsi_pass_thru
ffffffff8124d490 t ata_scsi_invalid_field
ffffffff8124d4b0 t ata_scsi_write_same_xlat
ffffffff8124d630 t ata_scsi_verify_xlat
ffffffff8124d8c0 t ata_scsi_start_stop_xlat
ffffffff8124d9a0 T ata_std_bios_param
ffffffff8124d9e0 T ata_scsi_unlock_native_capacity
ffffffff8124da70 T ata_cmd_ioctl
ffffffff8124dd00 T ata_task_ioctl
ffffffff8124df00 T ata_sas_scsi_ioctl
ffffffff8124e1a0 T ata_scsi_ioctl
ffffffff8124e1c0 T ata_scsi_slave_config
ffffffff8124e250 T ata_scsi_slave_destroy
ffffffff8124e350 T __ata_change_queue_depth
ffffffff8124e490 T ata_scsi_change_queue_depth
ffffffff8124e4b0 T ata_scsi_simulate
ffffffff8124e6e0 T ata_sas_queuecmd
ffffffff8124e920 T ata_scsi_queuecmd
ffffffff8124eb90 T ata_scsi_add_hosts
ffffffff8124ecb0 T ata_scsi_scan_host
ffffffff8124ee60 T ata_scsi_offline_dev
ffffffff8124ee90 T ata_scsi_media_change_notify
ffffffff8124eeb0 T ata_scsi_hotplug
ffffffff8124ef80 T ata_scsi_user_scan
ffffffff8124f0a0 T ata_scsi_dev_rescan
ffffffff8124f1c0 t ata_eh_scsidone
ffffffff8124f1d0 t ata_eh_categorize_error
ffffffff8124f230 t ata_do_reset
ffffffff8124f2c0 t ata_ering_record
ffffffff8124f330 t ata_eh_clear_action
ffffffff8124f430 t __ata_port_freeze
ffffffff8124f470 t atapi_eh_request_sense
ffffffff8124f5e0 t ata_eh_park_issue_cmd
ffffffff8124f710 t __ata_eh_qc_complete
ffffffff8124f7a0 T ata_port_wait_eh
ffffffff8124f870 T ata_scsi_cmd_error_handler
ffffffff8124f9d0 t __ata_ehi_pushv_desc
ffffffff8124fa20 t ata_eh_set_pending
ffffffff8124fad0 T __ata_ehi_push_desc
ffffffff8124fb20 T ata_ehi_push_desc
ffffffff8124fba0 T ata_ehi_clear_desc
ffffffff8124fbb0 T ata_port_desc
ffffffff8124fc60 T ata_port_pbar_desc
ffffffff8124fd20 T ata_internal_cmd_timeout
ffffffff8124fd90 T ata_internal_cmd_timed_out
ffffffff8124fe10 T ata_ering_map
ffffffff8124fe70 T ata_ering_clear_cb
ffffffff8124fe80 T ata_eh_acquire
ffffffff8124fee0 T ata_eh_release
ffffffff8124ff40 T ata_scsi_timed_out
ffffffff81250030 T ata_qc_schedule_eh
ffffffff812500d0 T ata_port_schedule_eh
ffffffff81250120 t ata_do_link_abort
ffffffff812501f0 T ata_link_abort
ffffffff81250200 T ata_port_abort
ffffffff81250210 T ata_port_freeze
ffffffff81250250 T ata_eh_fastdrain_timerfn
ffffffff81250370 T sata_async_notification
ffffffff812503f0 T ata_eh_freeze_port
ffffffff81250450 T ata_eh_thaw_port
ffffffff812504c0 T ata_eh_qc_complete
ffffffff812504d0 T ata_eh_qc_retry
ffffffff812504f0 T ata_dev_disable
ffffffff812505b0 T ata_eh_detach_dev
ffffffff81250680 t ata_eh_schedule_probe
ffffffff81250820 T ata_eh_about_to_do
ffffffff812508b0 T ata_eh_done
ffffffff812508d0 T ata_eh_analyze_ncq_error
ffffffff81250b70 t ata_eh_link_autopsy
ffffffff812513b0 T ata_eh_autopsy
ffffffff81251470 T ata_get_cmd_descript
ffffffff812514b0 T ata_eh_report
ffffffff81251ea0 T ata_eh_reset
ffffffff81252bd0 T ata_set_mode
ffffffff81252ce0 T ata_link_nr_enabled
ffffffff81252d20 T ata_eh_recover
ffffffff81254020 T ata_eh_finish
ffffffff812540d0 T ata_scsi_port_error_handler
ffffffff812548b0 T ata_scsi_error
ffffffff81254980 T ata_do_eh
ffffffff81254a30 T ata_std_error_handler
ffffffff81254ab0 t ata_tport_match
ffffffff81254ae0 t ata_tlink_match
ffffffff81254b10 t ata_tdev_match
ffffffff81254b40 t show_ata_dev_gscr
ffffffff81254bd0 t show_ata_dev_id
ffffffff81254c50 t show_ata_dev_spdn_cnt
ffffffff81254c80 t show_ata_port_idle_irq
ffffffff81254cb0 t show_ata_port_nr_pmp_links
ffffffff81254ce0 t show_ata_dev_ering
ffffffff81254d20 t ata_show_ering
ffffffff81254df0 t get_ata_xfer_names
ffffffff81254e80 t show_ata_dev_xfer_mode
ffffffff81254ea0 t show_ata_dev_dma_mode
ffffffff81254ec0 t show_ata_dev_pio_mode
ffffffff81254ee0 t show_ata_dev_class
ffffffff81254f50 t show_ata_link_sata_spd
ffffffff81254f90 t show_ata_link_sata_spd_limit
ffffffff81254fd0 t show_ata_link_hw_sata_spd_limit
ffffffff81255010 t ata_tdev_release
ffffffff81255020 t ata_tlink_release
ffffffff81255030 t ata_tport_release
ffffffff81255040 T ata_is_port
ffffffff81255060 T ata_is_link
ffffffff81255080 T ata_tlink_delete
ffffffff81255110 T ata_tport_delete
ffffffff81255150 T ata_tlink_add
ffffffff81255310 T ata_tport_add
ffffffff81255420 T ata_is_ata_dev
ffffffff81255440 T ata_attach_transport
ffffffff81255930 T ata_release_transport
ffffffff81255970 t ata_sff_check_ready
ffffffff812559a0 T ata_sff_qc_fill_rtf
ffffffff812559d0 T ata_sff_std_ports
ffffffff81255a20 T ata_pci_bmdma_clear_simplex
ffffffff81255a50 t ata_bmdma_nodma
ffffffff81255ab0 T ata_bmdma_status
ffffffff81255ad0 T ata_sff_check_status
ffffffff81255af0 T ata_pci_bmdma_init
ffffffff81255c80 T ata_bmdma_port_start
ffffffff81255cd0 T ata_bmdma_port_start32
ffffffff81255ce0 T ata_bmdma_start
ffffffff81255d10 T ata_bmdma_irq_clear
ffffffff81255d40 t ata_sff_set_devctl
ffffffff81255d70 T ata_sff_freeze
ffffffff81255df0 t ata_devchk
ffffffff81255e90 T ata_sff_tf_read
ffffffff81255fa0 T ata_sff_tf_load
ffffffff81256140 T ata_bmdma_setup
ffffffff812561d0 T ata_bmdma_post_internal_cmd
ffffffff81256250 T ata_bmdma_dumb_qc_prep
ffffffff81256350 t ata_pio_sector
ffffffff81256460 T ata_pci_sff_activate_host
ffffffff81256690 T ata_pci_sff_init_host
ffffffff81256870 T ata_pci_sff_prepare_host
ffffffff81256930 T ata_pci_bmdma_prepare_host
ffffffff81256960 t ata_pci_init_one
ffffffff81256b00 T ata_pci_bmdma_init_one
ffffffff81256b10 T ata_pci_sff_init_one
ffffffff81256b20 T ata_sff_error_handler
ffffffff81256c40 T ata_bmdma_error_handler
ffffffff81256d60 T ata_sff_drain_fifo
ffffffff81256dd0 T ata_sff_postreset
ffffffff81256e70 T ata_sff_dev_classify
ffffffff81256f80 T ata_sff_busy_sleep
ffffffff81257120 T ata_sff_queue_delayed_work
ffffffff81257140 T ata_sff_queue_pio_task
ffffffff812571b0 T ata_sff_queue_work
ffffffff812571c0 T ata_sff_data_xfer
ffffffff81257290 T ata_sff_data_xfer32
ffffffff812573a0 T ata_sff_data_xfer_noirq
ffffffff812573d0 T ata_sff_wait_ready
ffffffff812573e0 T ata_sff_wait_after_reset
ffffffff81257530 T ata_sff_softreset
ffffffff812576d0 t ata_sff_altstatus
ffffffff81257710 T ata_sff_dma_pause
ffffffff81257730 T ata_bmdma_stop
ffffffff81257780 t ata_sff_sync
ffffffff812577c0 T ata_sff_pause
ffffffff812577e0 T ata_sff_exec_command
ffffffff81257800 T ata_sff_dev_select
ffffffff81257830 t ata_pio_sectors
ffffffff812578e0 T ata_sff_irq_on
ffffffff81257990 T ata_sff_thaw
ffffffff812579c0 t ata_hsm_qc_complete
ffffffff81257af0 T ata_sff_hsm_move
ffffffff81258230 t ata_sff_pio_task
ffffffff812583a0 T ata_bmdma_qc_prep
ffffffff81258470 T sata_sff_hardreset
ffffffff812584f0 t __ata_sff_port_intr
ffffffff81258600 T ata_bmdma_port_intr
ffffffff81258720 T ata_bmdma_interrupt
ffffffff81258900 T ata_sff_port_intr
ffffffff81258910 T ata_sff_interrupt
ffffffff81258af0 T ata_sff_lost_interrupt
ffffffff81258b80 T ata_sff_prereset
ffffffff81258c10 t ata_dev_select.constprop.20
ffffffff81258cc0 T ata_sff_qc_issue
ffffffff81258e90 T ata_bmdma_qc_issue
ffffffff81259020 T ata_sff_flush_pio_task
ffffffff81259070 T ata_sff_port_init
ffffffff812590c0 T ata_sff_exit
ffffffff812590d0 t ata_dev_get_GTF
ffffffff812592b0 T ata_acpi_gtm_xfermask
ffffffff81259340 T ata_acpi_cbl_80wire
ffffffff812593d0 T ata_acpi_stm
ffffffff812594b0 T ata_acpi_gtm
ffffffff812595a0 t ata_acpi_run_tf
ffffffff81259a10 T ata_acpi_associate_sata_port
ffffffff81259a60 T ata_acpi_associate
ffffffff81259ba0 T ata_acpi_dissociate
ffffffff81259bf0 T ata_acpi_on_suspend
ffffffff81259c00 T ata_acpi_on_resume
ffffffff81259d40 T ata_acpi_set_state
ffffffff81259e00 T ata_acpi_on_devcfg
ffffffff8125a0f0 T ata_acpi_on_disable
ffffffff8125a110 t always_on
ffffffff8125a120 t loopback_setup
ffffffff8125a1b0 t loopback_net_init
ffffffff8125a250 t loopback_get_stats64
ffffffff8125a2c0 t loopback_xmit
ffffffff8125a350 t loopback_dev_init
ffffffff8125a380 t loopback_dev_free
ffffffff8125a3a0 T usb_is_intel_switchable_xhci
ffffffff8125a3d0 T usb_enable_xhci_ports
ffffffff8125a400 T uhci_reset_hc
ffffffff8125a490 T uhci_check_and_reset_hc
ffffffff8125a500 t usb_amd_quirk_pll
ffffffff8125a910 T usb_amd_quirk_pll_enable
ffffffff8125a920 T usb_amd_quirk_pll_disable
ffffffff8125a930 T usb_amd_dev_put
ffffffff8125aa20 T usb_amd_find_chipset_info
ffffffff8125aca0 T usb_is_intel_ppt_switchable_xhci
ffffffff8125acc0 T usb_is_intel_lpt_switchable_xhci
ffffffff8125ace0 t serio_match_port
ffffffff8125ad40 t serio_bus_match
ffffffff8125ad70 t serio_cleanup
ffffffff8125adc0 t serio_suspend
ffffffff8125ade0 t serio_shutdown
ffffffff8125adf0 t serio_driver_probe
ffffffff8125ae50 t serio_disconnect_driver
ffffffff8125aea0 t serio_driver_remove
ffffffff8125aec0 t serio_set_drv
ffffffff8125af10 T serio_open
ffffffff8125af70 T serio_close
ffffffff8125af90 t serio_find_driver
ffffffff8125b000 t serio_release_port
ffffffff8125b020 t serio_queue_event
ffffffff8125b140 t serio_resume
ffffffff8125b160 T serio_reconnect
ffffffff8125b170 T serio_rescan
ffffffff8125b180 t serio_remove_pending_events
ffffffff8125b230 t serio_destroy_port
ffffffff8125b390 t serio_disconnect_port
ffffffff8125b420 T serio_unregister_child_port
ffffffff8125b4a0 T serio_unregister_port
ffffffff8125b4d0 t serio_remove_duplicate_events
ffffffff8125b580 t serio_driver_set_bind_mode
ffffffff8125b5f0 t serio_set_bind_mode
ffffffff8125b670 t serio_driver_show_bind_mode
ffffffff8125b6b0 t serio_driver_show_description
ffffffff8125b6f0 t serio_show_bind_mode
ffffffff8125b730 t serio_show_modalias
ffffffff8125b770 t serio_show_description
ffffffff8125b7a0 t serio_show_id_extra
ffffffff8125b7d0 t serio_show_id_id
ffffffff8125b800 t serio_show_id_proto
ffffffff8125b830 t serio_show_id_type
ffffffff8125b860 T serio_interrupt
ffffffff8125b900 T serio_unregister_driver
ffffffff8125b9a0 T __serio_register_port
ffffffff8125bab0 t serio_reconnect_port
ffffffff8125bb40 t serio_reconnect_subtree
ffffffff8125bbd0 t serio_rebind_driver
ffffffff8125be10 t serio_handle_event
ffffffff8125c030 t serio_uevent
ffffffff8125c100 T __serio_register_driver
ffffffff8125c1b0 t i8042_start
ffffffff8125c1c0 T i8042_check_port_owner
ffffffff8125c1e0 t i8042_panic_blink
ffffffff8125c310 t i8042_wait_write
ffffffff8125c370 T i8042_remove_filter
ffffffff8125c3d0 T i8042_install_filter
ffffffff8125c430 t i8042_flush
ffffffff8125c4e0 t i8042_kbd_write
ffffffff8125c570 t i8042_interrupt
ffffffff8125c910 t i8042_pm_thaw
ffffffff8125c930 t i8042_pnp_aux_probe
ffffffff8125cac0 t i8042_pnp_kbd_probe
ffffffff8125cc60 t i8042_stop
ffffffff8125cc90 T i8042_unlock_chip
ffffffff8125cca0 T i8042_lock_chip
ffffffff8125ccb0 t __i8042_command
ffffffff8125ceb0 T i8042_command
ffffffff8125cf10 t i8042_dritek_enable
ffffffff8125cf50 t i8042_set_mux_mode
ffffffff8125d020 t i8042_aux_write
ffffffff8125d060 t i8042_port_close
ffffffff8125d140 t i8042_controller_selftest
ffffffff8125d1f0 t i8042_controller_reset
ffffffff8125d290 t i8042_pm_reset
ffffffff8125d2b0 t i8042_pm_suspend
ffffffff8125d2d0 t i8042_shutdown
ffffffff8125d2e0 t i8042_enable_aux_port
ffffffff8125d340 t i8042_enable_mux_ports
ffffffff8125d380 t i8042_enable_kbd_port
ffffffff8125d3e0 t i8042_controller_resume
ffffffff8125d530 t i8042_pm_restore
ffffffff8125d540 t i8042_pm_resume
ffffffff8125d550 T ps2_cmd_aborted
ffffffff8125d590 T ps2_init
ffffffff8125d5f0 T ps2_sendbyte
ffffffff8125d700 T ps2_is_keyboard_id
ffffffff8125d730 T __ps2_command
ffffffff8125db60 T ps2_end_command
ffffffff8125db80 T ps2_begin_command
ffffffff8125dbb0 T ps2_command
ffffffff8125dbf0 T ps2_drain
ffffffff8125dcf0 T ps2_handle_response
ffffffff8125dd80 T ps2_handle_ack
ffffffff8125de90 T input_scancode_to_scalar
ffffffff8125ded0 t input_default_getkeycode
ffffffff8125df60 t input_default_setkeycode
ffffffff8125e0d0 t input_proc_devices_poll
ffffffff8125e110 T input_handler_for_each_handle
ffffffff8125e170 t input_attach_handler
ffffffff8125e360 t input_seq_stop
ffffffff8125e380 T input_register_handle
ffffffff8125e470 T input_flush_device
ffffffff8125e4e0 T input_grab_device
ffffffff8125e550 t input_proc_handlers_open
ffffffff8125e560 t input_proc_devices_open
ffffffff8125e570 t input_handlers_seq_show
ffffffff8125e5e0 t input_handlers_seq_next
ffffffff8125e600 t input_devices_seq_next
ffffffff8125e610 t input_print_modalias_bits
ffffffff8125e6d0 t input_print_modalias
ffffffff8125e8c0 t input_dev_show_modalias
ffffffff8125e900 t input_devnode
ffffffff8125e930 t __input_release_device
ffffffff8125e9c0 T input_release_device
ffffffff8125ea10 T input_open_device
ffffffff8125eac0 T input_unregister_handle
ffffffff8125eb40 T input_close_device
ffffffff8125ebc0 T input_register_handler
ffffffff8125ecc0 T input_unregister_handler
ffffffff8125ed90 t input_dev_toggle
ffffffff8125eec0 t input_dev_suspend
ffffffff8125ef20 T input_register_device
ffffffff8125f350 T input_get_keycode
ffffffff8125f3c0 T input_set_capability
ffffffff8125f480 T input_free_device
ffffffff8125f4a0 t input_dev_show_id_version
ffffffff8125f4d0 t input_dev_show_id_product
ffffffff8125f500 t input_dev_show_id_vendor
ffffffff8125f530 t input_dev_show_id_bustype
ffffffff8125f560 t input_dev_show_uniq
ffffffff8125f5a0 t input_dev_show_phys
ffffffff8125f5e0 t input_dev_show_name
ffffffff8125f620 t input_dev_release
ffffffff8125f680 T input_allocate_device
ffffffff8125f720 t input_pass_event
ffffffff8125f800 T input_set_keycode
ffffffff8125f8f0 t input_repeat_key
ffffffff8125f9b0 t input_dev_release_keys.part.5
ffffffff8125fa10 T input_unregister_device
ffffffff8125fb80 T input_reset_device
ffffffff8125fc20 t input_dev_resume
ffffffff8125fc40 t input_open_file
ffffffff8125fd80 t input_handlers_seq_start
ffffffff8125fdf0 t input_devices_seq_start
ffffffff8125fe60 t input_bits_to_string
ffffffff8125ff70 t input_seq_print_bitmap
ffffffff81260050 t input_devices_seq_show
ffffffff81260380 t input_print_bitmap
ffffffff81260490 t input_dev_show_cap_sw
ffffffff812604d0 t input_dev_show_cap_ff
ffffffff81260510 t input_dev_show_cap_snd
ffffffff81260550 t input_dev_show_cap_led
ffffffff81260590 t input_dev_show_cap_msc
ffffffff812605d0 t input_dev_show_cap_abs
ffffffff81260610 t input_dev_show_cap_rel
ffffffff81260650 t input_dev_show_cap_key
ffffffff81260690 t input_dev_show_cap_ev
ffffffff812606d0 t input_dev_show_properties
ffffffff81260710 t input_add_uevent_bm_var
ffffffff812607b0 t input_dev_uevent
ffffffff81260b00 T input_alloc_absinfo
ffffffff81260b50 t input_handle_event
ffffffff81261030 T input_inject_event
ffffffff812610f0 T input_set_abs_params
ffffffff812611a0 T input_event
ffffffff81261260 T input_event_from_user
ffffffff812612f0 T input_ff_effect_from_user
ffffffff81261370 T input_event_to_user
ffffffff812613d0 T input_mt_report_finger_count
ffffffff81261460 T input_mt_report_pointer_emulation
ffffffff81261550 T input_mt_destroy_slots
ffffffff81261590 T input_mt_report_slot_state
ffffffff81261620 T input_mt_init_slots
ffffffff81261700 T input_ff_event
ffffffff81261790 T input_ff_destroy
ffffffff81261800 T input_ff_create
ffffffff81261920 t erase_effect
ffffffff81261a40 t flush_effects
ffffffff81261aa0 T input_ff_erase
ffffffff81261b10 T input_ff_upload
ffffffff81261df0 t mousedev_packet
ffffffff81261f70 t mousedev_poll
ffffffff81261fe0 t mousedev_fasync
ffffffff81261ff0 t mousedev_free
ffffffff81262020 t mousedev_detach_client
ffffffff81262080 t mousedev_close_device
ffffffff81262150 t mousedev_release
ffffffff812621b0 t mousedev_open_device
ffffffff812622a0 t mousedev_open
ffffffff81262430 t mousedev_cleanup
ffffffff81262510 t mousedev_write
ffffffff81262790 t mousedev_read
ffffffff81262990 t mousedev_notify_readers
ffffffff81262b90 t mousedev_event
ffffffff81263080 t mousedev_create
ffffffff81263270 t mousedev_destroy
ffffffff812632c0 t mousedev_disconnect
ffffffff81263350 t mousedev_connect
ffffffff81263480 t atkbd_reset_state
ffffffff812634d0 t atkbd_select_set
ffffffff81263660 t atkbd_set_leds
ffffffff81263760 t atkbd_set_repeat_rate
ffffffff81263910 t atkbd_attr_show_helper
ffffffff81263940 t atkbd_do_show_err_count
ffffffff81263950 t atkbd_do_show_softraw
ffffffff81263960 t atkbd_do_show_softrepeat
ffffffff81263970 t atkbd_do_show_set
ffffffff81263980 t atkbd_do_show_scroll
ffffffff81263990 t atkbd_do_show_force_release
ffffffff812639a0 t atkbd_do_show_extra
ffffffff812639b0 t atkbd_cleanup
ffffffff81263a00 t atkbd_show_err_count
ffffffff81263a30 t atkbd_show_softraw
ffffffff81263a60 t atkbd_show_softrepeat
ffffffff81263a90 t atkbd_show_set
ffffffff81263ac0 t atkbd_show_scroll
ffffffff81263af0 t atkbd_show_extra
ffffffff81263b20 t atkbd_attr_set_helper
ffffffff81263bf0 t atkbd_do_set_softraw
ffffffff81263c10 t atkbd_do_set_softrepeat
ffffffff81263c30 t atkbd_do_set_set
ffffffff81263c50 t atkbd_do_set_scroll
ffffffff81263c70 t atkbd_do_set_force_release
ffffffff81263c90 t atkbd_do_set_extra
ffffffff81263cb0 t atkbd_disconnect
ffffffff81263d60 t atkbd_set_device_attrs
ffffffff81263f50 t atkbd_set_softraw
ffffffff81264040 t atkbd_set_softrepeat
ffffffff81264170 t atkbd_schedule_event_work
ffffffff81264200 t atkbd_event
ffffffff81264280 t atkbd_set_keycode_table
ffffffff812646a0 t atkbd_set_scroll
ffffffff812647b0 t atkbd_set_force_release
ffffffff81264850 t atkbd_show_force_release
ffffffff81264880 t atkbd_event_work
ffffffff81264920 t atkbd_probe
ffffffff81264a20 t atkbd_interrupt
ffffffff81265070 t atkbd_apply_forced_release_keylist
ffffffff812650b0 t atkbd_oqo_01plus_scancode_fixup
ffffffff812650e0 t atkbd_activate
ffffffff81265120 t atkbd_set_set
ffffffff81265280 t atkbd_set_extra
ffffffff812653e0 t atkbd_reconnect
ffffffff81265520 t atkbd_connect
ffffffff812657a0 t hgpk_detect
ffffffff812657b0 t touchkit_ps2_detect
ffffffff812657c0 t elantech_detect
ffffffff812657d0 T fsp_detect
ffffffff812657e0 t psmouse_show_int_attr
ffffffff81265810 t genius_detect
ffffffff812658f0 t psmouse_poll
ffffffff81265910 t psmouse_set_rate
ffffffff81265970 T psmouse_set_resolution
ffffffff812659d0 t psmouse_protocol_by_name
ffffffff81265a80 t psmouse_set_maxproto
ffffffff81265ad0 T psmouse_attr_show_helper
ffffffff81265b00 t psmouse_set_int_attr
ffffffff81265b60 t psmouse_attr_set_resolution
ffffffff81265bc0 t psmouse_attr_set_rate
ffffffff81265c20 t psmouse_apply_defaults
ffffffff81265d90 t psmouse_do_detect
ffffffff81265de0 T psmouse_process_byte
ffffffff81266100 t ps2bare_detect
ffffffff81266150 t cortron_detect
ffffffff81266190 t psmouse_protocol_by_type
ffffffff81266200 t psmouse_get_maxproto
ffffffff81266230 t psmouse_attr_show_protocol
ffffffff81266260 t intellimouse_detect
ffffffff81266360 t im_explorer_detect
ffffffff812664b0 t thinking_detect
ffffffff81266590 t psmouse_initialize
ffffffff812665e0 t psmouse_probe
ffffffff81266680 t psmouse_handle_byte
ffffffff812667c0 T fsp_init
ffffffff812667d0 T psmouse_queue_work
ffffffff812667e0 t psmouse_interrupt
ffffffff81266b10 T psmouse_set_state
ffffffff81266b90 T psmouse_sliced_command
ffffffff81266c10 T psmouse_reset
ffffffff81266c50 t psmouse_extensions
ffffffff81266fd0 t psmouse_switch_protocol
ffffffff81267160 t psmouse_attr_set_protocol
ffffffff81267420 T psmouse_activate
ffffffff81267480 T psmouse_deactivate
ffffffff812674e0 t psmouse_cleanup
ffffffff812675f0 t psmouse_disconnect
ffffffff81267760 t psmouse_reconnect
ffffffff812678a0 t psmouse_connect
ffffffff81267b40 t psmouse_resync
ffffffff81267d50 T psmouse_attr_set_helper
ffffffff81267e80 t synaptics_mode_cmd
ffffffff81267ec0 t synaptics_set_disable_gesture
ffffffff81267f70 t synaptics_set_rate
ffffffff81267fb0 T synaptics_reset
ffffffff81267fc0 t synaptics_send_cmd
ffffffff81268000 t synaptics_set_mode
ffffffff81268110 t synaptics_show_disable_gesture
ffffffff81268140 t synaptics_query_hardware
ffffffff812684f0 t set_abs_position_params
ffffffff81268610 t __synaptics_init
ffffffff81268b90 t synaptics_pt_write
ffffffff81268bf0 t synaptics_pt_start
ffffffff81268c60 t synaptics_pt_stop
ffffffff81268cc0 t synaptics_disconnect
ffffffff81268d30 t synaptics_report_slot
ffffffff81268dc0 t synaptics_report_buttons.isra.2
ffffffff81268ef0 t synaptics_validate_byte
ffffffff81268f90 t synaptics_pt_activate
ffffffff81269010 t synaptics_report_semi_mt_slot
ffffffff812690c0 t synaptics_process_byte
ffffffff81269f00 T synaptics_detect
ffffffff81269fc0 t synaptics_reconnect
ffffffff8126a130 T synaptics_init
ffffffff8126a140 T synaptics_init_relative
ffffffff8126a150 T synaptics_supported
ffffffff8126a160 t alps_enter_command_mode
ffffffff8126a230 t alps_get_model
ffffffff8126a4c0 t alps_passthrough_mode_v2
ffffffff8126a560 t alps_poll
ffffffff8126a660 t alps_disconnect
ffffffff8126a690 t alps_report_buttons.isra.2
ffffffff8126a760 t alps_report_bare_ps2_packet.isra.3
ffffffff8126a810 t alps_command_mode_send_nibble
ffffffff8126a850 t __alps_command_mode_write_reg
ffffffff8126a890 t alps_command_mode_set_addr
ffffffff8126a900 t alps_command_mode_read_reg
ffffffff8126a980 t alps_passthrough_mode_v3
ffffffff8126a9d0 t alps_command_mode_write_reg
ffffffff8126aa20 t alps_process_packet
ffffffff8126b930 t alps_process_byte
ffffffff8126bb00 t alps_flush_packet
ffffffff8126bb60 t alps_hw_init
ffffffff8126c300 t alps_reconnect
ffffffff8126c330 T alps_init
ffffffff8126c680 T alps_detect
ffffffff8126c6f0 t ps2pp_attr_show_smartscroll
ffffffff8126c720 t ps2pp_cmd
ffffffff8126c760 t ps2pp_set_smartscroll
ffffffff8126c7e0 t ps2pp_attr_set_smartscroll
ffffffff8126c860 t ps2pp_disconnect
ffffffff8126c880 t ps2pp_process_byte
ffffffff8126cb40 t ps2pp_set_resolution
ffffffff8126cbe0 T ps2pp_init
ffffffff8126d020 t lifebook_limit_serio3
ffffffff8126d040 t lifebook_set_6byte_proto
ffffffff8126d050 t lifebook_set_resolution
ffffffff8126d0b0 t lifebook_absolute_mode
ffffffff8126d100 t lifebook_disconnect
ffffffff8126d150 t lifebook_process_byte
ffffffff8126d480 T lifebook_detect
ffffffff8126d500 T lifebook_init
ffffffff8126d710 t trackpoint_write
ffffffff8126d7a0 t trackpoint_start_protocol
ffffffff8126d7f0 t trackpoint_read
ffffffff8126d850 t trackpoint_set_int_attr
ffffffff8126d8e0 t trackpoint_show_int_attr
ffffffff8126d920 t trackpoint_disconnect
ffffffff8126d950 t trackpoint_toggle_bit
ffffffff8126d9f0 t trackpoint_set_bit_attr
ffffffff8126daa0 t trackpoint_sync
ffffffff8126dca0 t trackpoint_reconnect
ffffffff8126dcd0 T trackpoint_detect
ffffffff8126de90 T rtc_month_days
ffffffff8126def0 T rtc_year_days
ffffffff8126df60 T rtc_time_to_tm
ffffffff8126e100 T rtc_valid_tm
ffffffff8126e170 T rtc_ktime_to_tm
ffffffff8126e1e0 T rtc_tm_to_time
ffffffff8126e210 T rtc_tm_to_ktime
ffffffff8126e250 T rtc_device_unregister
ffffffff8126e2e0 t rtc_device_release
ffffffff8126e300 T rtc_device_register
ffffffff8126e5c0 t rtc_suspend
ffffffff8126e6f0 t rtc_resume.part.5
ffffffff8126e7f0 t rtc_resume
ffffffff8126e830 T rtc_update_irq
ffffffff8126e840 T rtc_irq_set_freq
ffffffff8126e940 T rtc_irq_set_state
ffffffff8126ea20 T rtc_irq_register
ffffffff8126ead0 T rtc_irq_unregister
ffffffff8126eb40 T rtc_class_close
ffffffff8126eb60 T rtc_class_open
ffffffff8126ebb0 t __rtc_match
ffffffff8126ebe0 T rtc_read_alarm
ffffffff8126ece0 t __rtc_read_time.isra.4
ffffffff8126ed40 t __rtc_set_alarm.part.5
ffffffff8126ed70 t __rtc_set_alarm
ffffffff8126ee00 t rtc_timer_remove
ffffffff8126ef20 t rtc_timer_enqueue
ffffffff8126f030 T rtc_update_irq_enable
ffffffff8126f160 T rtc_alarm_irq_enable
ffffffff8126f210 T rtc_set_alarm
ffffffff8126f300 T rtc_set_time
ffffffff8126f3f0 T rtc_read_time
ffffffff8126f460 T rtc_initialize_alarm
ffffffff8126f5a0 T rtc_set_mmss
ffffffff8126f6a0 T __rtc_read_alarm
ffffffff8126f980 T rtc_handle_legacy_irq
ffffffff8126fa50 T rtc_aie_update_irq
ffffffff8126fa60 T rtc_uie_update_irq
ffffffff8126fa70 T rtc_pie_update_irq
ffffffff8126fae0 T rtc_timer_do_work
ffffffff8126fca0 T rtc_timer_init
ffffffff8126fcd0 T rtc_timer_start
ffffffff8126fd60 T rtc_timer_cancel
ffffffff8126fdc0 t rtc_dev_poll
ffffffff8126fe00 t rtc_dev_fasync
ffffffff8126fe20 t rtc_dev_open
ffffffff8126feb0 t rtc_dev_ioctl
ffffffff81270390 t rtc_dev_release
ffffffff812703e0 t rtc_dev_read
ffffffff81270590 T rtc_dev_prepare
ffffffff812705e0 T rtc_dev_add_device
ffffffff81270630 T rtc_dev_del_device
ffffffff81270650 t rtc_proc_release
ffffffff81270670 t rtc_proc_open
ffffffff812706e0 t rtc_proc_show
ffffffff81270a10 T rtc_proc_add_device
ffffffff81270a40 T rtc_proc_del_device
ffffffff81270a60 t rtc_sysfs_set_max_user_freq
ffffffff81270ab0 t rtc_sysfs_show_max_user_freq
ffffffff81270ae0 t rtc_sysfs_show_name
ffffffff81270b10 t rtc_sysfs_show_time
ffffffff81270b50 t rtc_sysfs_show_date
ffffffff81270ba0 t rtc_sysfs_show_since_epoch
ffffffff81270bf0 t rtc_sysfs_show_wakealarm
ffffffff81270c50 t rtc_sysfs_set_wakealarm
ffffffff81270d40 t rtc_sysfs_show_hctosys
ffffffff81270d90 T rtc_sysfs_add_device
ffffffff81270de0 T rtc_sysfs_del_device
ffffffff81270e10 t cmos_resume
ffffffff81270f00 t cmos_pnp_resume
ffffffff81270f10 t rtc_handler
ffffffff81270f40 t rtc_wake_off
ffffffff81270f50 t rtc_wake_on
ffffffff81270f70 t cmos_nvram_write
ffffffff81271060 t cmos_nvram_read
ffffffff81271130 t cmos_procfs
ffffffff81271260 t cmos_set_time
ffffffff81271440 t cmos_read_alarm
ffffffff81271620 t cmos_read_time
ffffffff81271760 t cmos_interrupt
ffffffff81271840 t cmos_checkintr.isra.3
ffffffff812718c0 t cmos_suspend
ffffffff812719a0 t cmos_pnp_suspend
ffffffff812719b0 t cmos_irq_disable
ffffffff81271a10 t cmos_do_shutdown
ffffffff81271a50 t cmos_platform_shutdown
ffffffff81271a80 t cmos_pnp_shutdown
ffffffff81271ab0 t cmos_irq_enable.constprop.5
ffffffff81271b10 t cmos_set_alarm
ffffffff81271cf0 t cmos_alarm_irq_enable
ffffffff81271d70 T watchdog_unregister_device
ffffffff81271da0 T watchdog_register_device
ffffffff81271e20 t watchdog_ping
ffffffff81271e50 t watchdog_start
ffffffff81271e80 t watchdog_open
ffffffff81271f40 t watchdog_write
ffffffff81271fb0 t watchdog_stop
ffffffff81272000 t watchdog_ioctl
ffffffff81272260 t watchdog_release
ffffffff812722f0 T watchdog_dev_register
ffffffff81272390 T watchdog_dev_unregister
ffffffff812723f0 T edac_handler_set
ffffffff81272410 T edac_atomic_assert_error
ffffffff81272420 T edac_put_sysfs_subsys
ffffffff81272440 T edac_get_sysfs_subsys
ffffffff81272490 T __cpufreq_driver_target
ffffffff812724e0 T cpufreq_cpu_put
ffffffff81272500 t __find_governor
ffffffff81272570 t store_scaling_setspeed
ffffffff812725e0 t show_scaling_available_governors
ffffffff812726a0 t show_scaling_driver
ffffffff812726d0 t show_cpus
ffffffff81272780 t show_affected_cpus
ffffffff81272790 T cpufreq_register_governor
ffffffff81272820 t show_scaling_max_freq
ffffffff81272850 t show_scaling_min_freq
ffffffff81272880 t show_cpuinfo_transition_latency
ffffffff812728b0 t show_cpuinfo_max_freq
ffffffff812728e0 t show_cpuinfo_min_freq
ffffffff81272910 t show_bios_limit
ffffffff81272980 t show_scaling_cur_freq
ffffffff812729b0 t cpufreq_sysfs_release
ffffffff812729c0 T cpufreq_register_driver
ffffffff81272b30 T cpufreq_notify_transition
ffffffff81272bf0 t __cpufreq_get
ffffffff81272ca0 T cpufreq_cpu_get
ffffffff81272d60 t cpufreq_bp_resume
ffffffff81272dc0 t cpufreq_bp_suspend
ffffffff81272e30 T cpufreq_get_policy
ffffffff81272f30 T __cpufreq_driver_getavg
ffffffff81272fb0 T cpufreq_quick_get_max
ffffffff81272fd0 T cpufreq_quick_get
ffffffff81272ff0 T cpufreq_unregister_driver
ffffffff81273050 t lock_policy_rwsem_write
ffffffff812730e0 t unlock_policy_rwsem_write
ffffffff81273120 t store
ffffffff812731d0 T cpufreq_driver_target
ffffffff81273250 t __cpufreq_governor
ffffffff81273300 t __cpufreq_set_policy
ffffffff81273490 t cpufreq_add_dev_interface
ffffffff81273720 t cpufreq_add_dev
ffffffff81273b50 t __cpufreq_remove_dev
ffffffff81273e10 T cpufreq_update_policy
ffffffff81273f10 t handle_update
ffffffff81273f20 t store_scaling_governor
ffffffff81274100 t store_scaling_max_freq
ffffffff812741b0 t store_scaling_min_freq
ffffffff81274260 t show_scaling_setspeed
ffffffff812742a0 t show_scaling_governor
ffffffff81274340 t show_related_cpus
ffffffff81274350 t lock_policy_rwsem_read
ffffffff812743e0 t unlock_policy_rwsem_read
ffffffff81274420 t show
ffffffff812744c0 T cpufreq_get
ffffffff81274520 T cpufreq_unregister_notifier
ffffffff81274560 T cpufreq_register_notifier
ffffffff812745f0 t show_cpuinfo_cur_freq
ffffffff81274640 t cpufreq_remove_dev
ffffffff81274690 T cpufreq_unregister_governor
ffffffff81274790 T cpufreq_disabled
ffffffff812747a0 T disable_cpufreq
ffffffff812747b0 t show_total_trans
ffffffff81274800 t cpufreq_stats_update
ffffffff81274870 t show_time_in_state
ffffffff81274900 t cpufreq_stat_notifier_trans
ffffffff812749b0 t cpufreq_stat_notifier_policy
ffffffff81274c50 t cpufreq_governor_performance
ffffffff81274c80 T cpufreq_frequency_table_cpuinfo
ffffffff81274d00 T cpufreq_frequency_table_verify
ffffffff81274e30 T cpufreq_frequency_table_target
ffffffff81274f30 T cpufreq_frequency_table_get_attr
ffffffff81274f50 T cpufreq_frequency_table_put_attr
ffffffff81274f70 T cpufreq_frequency_get_table
ffffffff81274f90 t show_available_freqs
ffffffff81275020 t cpuidle_enter
ffffffff81275040 t cpuidle_enter_tk
ffffffff81275050 t smp_callback
ffffffff81275060 t cpuidle_latency_notify
ffffffff81275090 T cpuidle_resume_and_unlock
ffffffff812750b0 t __cpuidle_register_device
ffffffff812751b0 T cpuidle_enable_device
ffffffff81275330 t poll_idle
ffffffff812753b0 T cpuidle_disable_device
ffffffff81275430 T cpuidle_register_device
ffffffff812754a0 T cpuidle_disabled
ffffffff812754b0 T disable_cpuidle
ffffffff812754c0 T cpuidle_play_dead
ffffffff81275550 T cpuidle_idle_call
ffffffff81275640 T cpuidle_install_idle_handler
ffffffff81275660 T cpuidle_uninstall_idle_handler
ffffffff81275680 T cpuidle_pause_and_lock
ffffffff812756a0 T cpuidle_unregister_device
ffffffff81275770 T cpuidle_wrap_enter
ffffffff81275800 T cpuidle_get_driver
ffffffff81275810 T cpuidle_unregister_driver
ffffffff81275870 T cpuidle_register_driver
ffffffff81275910 T cpuidle_switch_governor
ffffffff81275a00 T cpuidle_register_governor
ffffffff81275af0 T cpuidle_unregister_governor
ffffffff81275ba0 t cpuidle_state_show
ffffffff81275bd0 t cpuidle_state_store
ffffffff81275c00 t cpuidle_store
ffffffff81275c80 t cpuidle_show
ffffffff81275cf0 t cpuidle_sysfs_release
ffffffff81275d00 t cpuidle_state_sysfs_release
ffffffff81275d10 t store_state_disable
ffffffff81275d90 t show_state_disable
ffffffff81275dc0 t show_state_time
ffffffff81275de0 t show_state_usage
ffffffff81275e00 t show_state_power_usage
ffffffff81275e30 t show_state_exit_latency
ffffffff81275e60 t show_current_governor
ffffffff81275ec0 t show_current_driver
ffffffff81275f40 t store_current_governor
ffffffff81276030 t show_available_governors
ffffffff812760d0 t show_state_desc
ffffffff81276120 t show_state_name
ffffffff81276170 T cpuidle_add_interface
ffffffff812761a0 T cpuidle_remove_interface
ffffffff812761b0 T cpuidle_add_state_sysfs
ffffffff81276330 T cpuidle_remove_state_sysfs
ffffffff812763a0 T cpuidle_add_sysfs
ffffffff81276400 T cpuidle_remove_sysfs
ffffffff81276430 t ladder_enable_device
ffffffff812764d0 t ladder_reflect
ffffffff812764f0 t ladder_select_state
ffffffff81276680 t dmi_table
ffffffff81276730 T dmi_get_system_info
ffffffff81276740 T dmi_match
ffffffff81276780 T dmi_find_device
ffffffff812767f0 T dmi_walk
ffffffff81276880 T dmi_get_date
ffffffff81276a20 T dmi_name_in_vendors
ffffffff81276a80 t dmi_matches
ffffffff81276b40 T dmi_first_match
ffffffff81276b80 T dmi_check_system
ffffffff81276bd0 T dmi_name_in_serial
ffffffff81276c00 t memmap_attr_show
ffffffff81276c10 t type_show
ffffffff81276c40 t end_show
ffffffff81276c70 t start_show
ffffffff81276ca0 t acpi_pm_read
ffffffff81276cb0 T acpi_pm_read_verified
ffffffff81276d10 t acpi_pm_read_slow
ffffffff81276d20 t init_pit_timer
ffffffff81276de0 t pit_next_event
ffffffff81276e20 t hid_uevent
ffffffff81276ee0 T hid_register_report
ffffffff81276fa0 t store_new_id
ffffffff812770a0 T hid_unregister_driver
ffffffff81277140 t hid_device_release
ffffffff81277210 T hid_destroy_device
ffffffff81277260 T hid_allocate_device
ffffffff812773a0 T hid_disconnect
ffffffff812773f0 t hid_device_remove
ffffffff81277490 t hid_parser_local
ffffffff81277790 t hid_parser_global
ffffffff81277d10 t hid_add_field
ffffffff81278020 t extract
ffffffff812780a0 t implement
ffffffff812781a0 T hid_output_report
ffffffff81278300 t hid_parser_main
ffffffff812785e0 t hid_process_event
ffffffff81278720 T hid_parse_report
ffffffff81278a10 t snto32
ffffffff81278a50 T hid_set_field
ffffffff81278b30 T hid_report_raw_event
ffffffff81278ef0 T hid_input_report
ffffffff81279160 T hid_check_keys_pressed
ffffffff812791c0 t hid_parser_reserved
ffffffff812791f0 T __hid_register_driver
ffffffff81279280 t read_report_descriptor
ffffffff812792d0 T hid_match_id
ffffffff81279330 t hid_match_device
ffffffff812793e0 t hid_bus_match
ffffffff81279490 T hid_add_device
ffffffff81279680 T hid_connect
ffffffff812799b0 t hid_device_probe
ffffffff81279b20 t match_scancode
ffffffff81279b30 t match_keycode
ffffffff81279b50 t match_index
ffffffff81279b60 t hidinput_find_key
ffffffff81279ca0 T hidinput_find_field
ffffffff81279d30 T hidinput_get_led_field
ffffffff81279db0 T hidinput_count_leds
ffffffff81279e40 T hidinput_disconnect
ffffffff81279eb0 t hidinput_open
ffffffff81279ee0 t hidinput_close
ffffffff81279f10 T hidinput_connect
ffffffff8127ca60 T hidinput_report_event
ffffffff8127cab0 t hidinput_locate_usage
ffffffff8127cb30 t hidinput_getkeycode
ffffffff8127cb90 t hidinput_setkeycode
ffffffff8127cc90 T hidinput_hid_event
ffffffff8127d080 T iommu_present
ffffffff8127d090 T iommu_device_group
ffffffff8127d0c0 T iommu_domain_has_cap
ffffffff8127d0e0 T iommu_iova_to_phys
ffffffff8127d100 T iommu_detach_device
ffffffff8127d110 T iommu_attach_device
ffffffff8127d130 T iommu_unmap
ffffffff8127d1e0 T iommu_map
ffffffff8127d320 T iommu_domain_free
ffffffff8127d340 T iommu_domain_alloc
ffffffff8127d3b0 t show_iommu_group
ffffffff8127d410 T iommu_set_fault_handler
ffffffff8127d420 T bus_set_iommu
ffffffff8127d460 t add_iommu_group
ffffffff8127d4c0 t iommu_device_notifier
ffffffff8127d540 t amd_iommu_domain_has_cap
ffffffff8127d550 t amd_iommu_device_group
ffffffff8127d5b0 t build_inv_iommu_pages
ffffffff8127d630 T amd_iommu_device_info
ffffffff8127d730 t set_dte_entry
ffffffff8127d860 t build_completion_wait
ffffffff8127d8e0 t find_protection_domain
ffffffff8127d970 t add_domain_to_list
ffffffff8127d9b0 t del_domain_from_list
ffffffff8127da00 t __get_gcr3_pte
ffffffff8127daa0 t alloc_pte
ffffffff8127dca0 t free_gcr3_tbl_level1
ffffffff8127dd00 T amd_iommu_unregister_ppr_notifier
ffffffff8127dd10 T amd_iommu_register_ppr_notifier
ffffffff8127dd20 t protection_domain_free
ffffffff8127ddb0 t dma_ops_area_alloc
ffffffff8127df00 t check_device
ffffffff8127df50 t amd_iommu_dma_supported
ffffffff8127df60 t fetch_pte.isra.10
ffffffff8127e090 t amd_iommu_iova_to_phys
ffffffff8127e120 T amd_iommu_enable_device_erratum
ffffffff8127e160 t dma_ops_domain_unmap.part.14
ffffffff8127e1c0 t find_dev_data
ffffffff8127e2b0 t iommu_init_device
ffffffff8127e3d0 t wait_on_sem
ffffffff8127e430 t iommu_queue_command_sync
ffffffff8127e570 t iommu_flush_dte
ffffffff8127e5a0 T amd_iommu_complete_ppr
ffffffff8127e620 t iommu_completion_wait
ffffffff8127e680 t domain_flush_complete
ffffffff8127e6d0 t __flush_pasid
ffffffff8127e890 T amd_iommu_domain_clear_gcr3
ffffffff8127e940 T amd_iommu_domain_set_gcr3
ffffffff8127ea10 T amd_iommu_flush_tlb
ffffffff8127ea80 T amd_iommu_flush_page
ffffffff8127eaf0 t device_flush_iotlb.isra.18
ffffffff8127eb90 t device_flush_dte
ffffffff8127ebd0 t do_attach
ffffffff8127ec50 t __attach_device
ffffffff8127ed20 t domain_for_device
ffffffff8127edb0 t do_detach
ffffffff8127ee40 t __detach_device
ffffffff8127ef20 t detach_device
ffffffff8127efd0 t amd_iommu_detach_device
ffffffff8127f060 t __domain_flush_pages
ffffffff8127f140 t domain_flush_tlb_pde
ffffffff8127f160 t amd_iommu_unmap
ffffffff8127f2f0 t update_domain
ffffffff8127f360 T amd_iommu_domain_enable_v2
ffffffff8127f430 t iommu_map_page
ffffffff8127f5a0 t amd_iommu_map
ffffffff8127f630 t alloc_new_range
ffffffff8127f9a0 t free_pagetable.isra.20
ffffffff8127fa70 T amd_iommu_domain_direct_map
ffffffff8127fad0 t dma_ops_domain_free
ffffffff8127fb40 t amd_iommu_domain_destroy
ffffffff8127fc90 t domain_id_alloc
ffffffff8127fcf0 t protection_domain_alloc
ffffffff8127fd60 t amd_iommu_domain_init
ffffffff8127fdd0 t dma_ops_domain_alloc
ffffffff8127fea0 t dma_ops_free_addresses
ffffffff8127fee0 t __map_single
ffffffff81280330 t __unmap_single.isra.26
ffffffff81280450 t device_change_notifier
ffffffff812805b0 T amd_iommu_int_thread
ffffffff81280a60 T amd_iommu_int_handler
ffffffff81280a70 T iommu_flush_all_caches
ffffffff81280b30 T pci_pri_tlp_required
ffffffff81280b70 t attach_device
ffffffff81280d40 t get_domain
ffffffff81280e40 T amd_iommu_get_v2_domain
ffffffff81280e70 t unmap_sg
ffffffff81280f20 t map_sg
ffffffff81281110 t unmap_page
ffffffff812811b0 t map_page
ffffffff812812b0 t free_coherent
ffffffff81281360 t alloc_coherent
ffffffff81281510 t amd_iommu_attach_device
ffffffff812815a0 T amd_iommu_init_notifier
ffffffff812815c0 t iommu_disable
ffffffff81281610 t disable_iommus
ffffffff81281640 t amd_iommu_suspend
ffffffff81281650 T amd_iommu_v2_supported
ffffffff81281660 t amd_iommu_enable_interrupts
ffffffff81281750 T amd_iommu_reset_cmd_buffer
ffffffff81281790 t enable_iommus
ffffffff81281b00 t amd_iommu_resume
ffffffff81281d10 T amd_iommu_apply_erratum_63
ffffffff81281d50 T pcibios_align_resource
ffffffff81281da0 t pcibios_fwaddrmap_lookup
ffffffff81281e00 T pcibios_retrieve_fw_addr
ffffffff81281e70 T pci_mmap_page_range
ffffffff81281f10 t pci_dev_base
ffffffff81281f60 t pci_mmcfg_write
ffffffff81282000 t pci_mmcfg_read
ffffffff812820b0 t pci_conf1_write
ffffffff812821c0 t pci_conf1_read
ffffffff812822e0 t pci_conf2_write
ffffffff81282420 t pci_conf2_read
ffffffff81282580 T pci_mmconfig_lookup
ffffffff812825d0 T xen_find_device_domain_owner
ffffffff81282640 T xen_register_device_domain_owner
ffffffff81282710 t xen_initdom_restore_msi_irqs
ffffffff81282890 t xen_teardown_msi_irq
ffffffff812828a0 t xen_initdom_setup_msi_irqs
ffffffff81282b70 t xen_hvm_setup_msi_irqs
ffffffff81282cb0 t xen_setup_msi_irqs
ffffffff81282e00 t xen_teardown_msi_irqs
ffffffff81282e50 t xen_pcifront_enable_irq
ffffffff81282f30 T xen_unregister_device_domain_owner
ffffffff81282fe0 t xen_register_pirq
ffffffff81283120 t acpi_register_gsi_xen_hvm
ffffffff81283140 t xen_register_gsi.part.7
ffffffff81283230 t acpi_register_gsi_xen
ffffffff81283250 t sb600_disable_hpet_bar
ffffffff812832a0 t pci_early_fixup_cyrix_5530
ffffffff812832f0 t pcie_rootport_aspm_quirk
ffffffff812833b0 t quirk_pcie_aspm_write
ffffffff81283410 t quirk_pcie_aspm_read
ffffffff81283440 t pci_fixup_via_northbridge_bug
ffffffff81283560 t pci_fixup_nforce2
ffffffff812835d0 t resource_to_addr
ffffffff81283720 t setup_resource
ffffffff812838d0 t count_resource
ffffffff81283900 t pirq_serverworks_get
ffffffff81283910 t pirq_serverworks_set
ffffffff81283930 t pirq_pico_get
ffffffff81283950 t pirq_pico_set
ffffffff81283980 t read_config_nybble
ffffffff812839d0 t pirq_via_get
ffffffff812839f0 t pirq_opti_get
ffffffff81283a00 t pirq_cyrix_get
ffffffff81283a10 t pirq_amd756_get
ffffffff81283a70 t pirq_piix_get
ffffffff81283aa0 t pirq_sis_get
ffffffff81283ae0 t write_config_nybble
ffffffff81283b80 t pirq_via_set
ffffffff81283bb0 t pirq_opti_set
ffffffff81283bd0 t pirq_cyrix_set
ffffffff81283bf0 t pirq_piix_set
ffffffff81283c10 t pirq_sis_set
ffffffff81283c90 t pirq_via586_set
ffffffff81283d00 t pirq_via586_get
ffffffff81283d60 t pirq_ite_set
ffffffff81283dd0 t pirq_ite_get
ffffffff81283e30 t pirq_ali_set
ffffffff81283ea0 t pirq_ali_get
ffffffff81283f00 t pirq_amd756_set
ffffffff81283f70 t pirq_vlsi_set
ffffffff81283fe0 t pirq_vlsi_get
ffffffff81284050 T eisa_set_level_irq
ffffffff812840d0 t pcibios_lookup_irq
ffffffff812845e0 t pirq_enable_irq
ffffffff81284830 T pcibios_penalize_isa_irq
ffffffff81284870 T raw_pci_read
ffffffff812848b0 t pci_read
ffffffff812848e0 T raw_pci_write
ffffffff81284920 t pci_write
ffffffff81284950 T pcibios_assign_all_busses
ffffffff81284960 T pcibios_enable_device
ffffffff81284990 T pcibios_disable_device
ffffffff812849b0 T pci_ext_cfg_avail
ffffffff812849c0 T set_mp_bus_to_node
ffffffff812849e0 T get_mp_bus_to_node
ffffffff81284a20 T read_pci_config
ffffffff81284a50 T read_pci_config_byte
ffffffff81284a90 T read_pci_config_16
ffffffff81284ad0 T write_pci_config
ffffffff81284b00 T write_pci_config_byte
ffffffff81284b40 T write_pci_config_16
ffffffff81284b80 T early_pci_allowed
ffffffff81284ba0 T early_dump_pci_device
ffffffff81284c60 T early_dump_pci_devices
ffffffff81284d50 T x86_pci_root_bus_resources
ffffffff81284e70 T save_processor_state
ffffffff81284fa0 T restore_processor_state
ffffffff812851c0 T fb_is_primary_device
ffffffff812851e0 t sock_no_open
ffffffff812851f0 T sock_tx_timestamp
ffffffff81285230 t sock_poll
ffffffff81285250 t sock_mmap
ffffffff81285270 T kernel_bind
ffffffff81285280 T kernel_listen
ffffffff81285290 T kernel_connect
ffffffff812852a0 T kernel_getsockname
ffffffff812852b0 T kernel_getpeername
ffffffff812852c0 T kernel_sock_ioctl
ffffffff81285310 T kernel_sock_shutdown
ffffffff81285320 t sockfs_mount
ffffffff81285340 t sockfs_dname
ffffffff81285360 t sock_destroy_inode
ffffffff81285390 t sock_alloc_inode
ffffffff81285460 t init_once
ffffffff81285470 t sock_splice_read
ffffffff812854e0 T kernel_sendpage
ffffffff81285540 t sock_sendpage
ffffffff81285570 T kernel_setsockopt
ffffffff812855c0 T kernel_getsockopt
ffffffff81285610 t sock_fasync
ffffffff812856b0 t sock_do_ioctl
ffffffff81285710 t routing_ioctl
ffffffff81285930 t sock_ioctl
ffffffff81285bd0 t compat_sock_ioctl
ffffffff81286650 T dlci_ioctl_set
ffffffff81286680 T vlan_ioctl_set
ffffffff812866b0 T brioctl_set
ffffffff812866e0 t sock_aio_dtor
ffffffff812866f0 t sock_recvmsg_nosec
ffffffff81286820 t sock_sendmsg_nosec
ffffffff81286940 T sock_recvmsg
ffffffff81286a70 T kernel_recvmsg
ffffffff81286ad0 T sock_sendmsg
ffffffff81286bf0 T kernel_sendmsg
ffffffff81286c40 t sockfd_lookup_light
ffffffff81286cc0 t __sys_sendmsg
ffffffff81287050 t sock_alloc
ffffffff812870b0 T sock_create_lite
ffffffff812870f0 t sock_alloc_file
ffffffff81287210 T sock_map_fd
ffffffff81287240 T sock_wake_async
ffffffff812872f0 T __sock_recv_timestamp
ffffffff81287540 T sock_release
ffffffff812875d0 T __sock_create
ffffffff812877d0 T sock_create_kern
ffffffff812877f0 T sock_create
ffffffff81287820 T sockfd_lookup
ffffffff81287870 t move_addr_to_user
ffffffff81287930 t __sys_recvmsg
ffffffff81287bc0 t sock_aio_read.part.23
ffffffff81287cf0 t sock_aio_read
ffffffff81287d20 t sock_aio_write
ffffffff81287e70 T sock_unregister
ffffffff81287ec0 T sock_register
ffffffff81287f60 T __sock_recv_wifi_status
ffffffff81287fb0 T __sock_recv_ts_and_drops
ffffffff81288100 t sock_close
ffffffff81288130 T kernel_accept
ffffffff81288200 T move_addr_to_kernel
ffffffff81288290 T sys_socket
ffffffff812882f0 T sys_socketpair
ffffffff812884c0 T sys_bind
ffffffff81288580 T sys_listen
ffffffff81288600 T sys_accept4
ffffffff812887f0 T sys_accept
ffffffff81288800 T sys_connect
ffffffff812888d0 T sys_getsockname
ffffffff812889a0 T sys_getpeername
ffffffff81288a80 T sys_sendto
ffffffff81288c20 T sys_send
ffffffff81288c30 T sys_recvfrom
ffffffff81288dc0 T sys_recv
ffffffff81288dd0 T sys_setsockopt
ffffffff81288ea0 T sys_getsockopt
ffffffff81288f60 T sys_shutdown
ffffffff81288fd0 T sys_sendmsg
ffffffff81289050 T __sys_sendmmsg
ffffffff81289190 T sys_sendmmsg
ffffffff812891a0 T sys_recvmsg
ffffffff81289220 T __sys_recvmmsg
ffffffff81289430 T sys_recvmmsg
ffffffff81289500 T sys_socketcall
ffffffff81289770 T socket_seq_show
ffffffff812897f0 T sk_reset_txq
ffffffff81289800 T sock_update_classid
ffffffff81289860 T sock_rfree
ffffffff81289890 T __sk_mem_schedule
ffffffff81289ae0 T sock_no_bind
ffffffff81289af0 T sock_no_connect
ffffffff81289b00 T sock_no_socketpair
ffffffff81289b10 T sock_no_accept
ffffffff81289b20 T sock_no_getname
ffffffff81289b30 T sock_no_poll
ffffffff81289b40 T sock_no_ioctl
ffffffff81289b50 T sock_no_listen
ffffffff81289b60 T sock_no_shutdown
ffffffff81289b70 T sock_no_setsockopt
ffffffff81289b80 T sock_no_getsockopt
ffffffff81289b90 T sock_no_sendmsg
ffffffff81289ba0 T sock_no_recvmsg
ffffffff81289bb0 T sock_no_mmap
ffffffff81289bc0 T sock_common_getsockopt
ffffffff81289bd0 T compat_sock_common_getsockopt
ffffffff81289bf0 T sock_common_recvmsg
ffffffff81289c40 T sock_common_setsockopt
ffffffff81289c50 T compat_sock_common_setsockopt
ffffffff81289c70 t proto_exit_net
ffffffff81289c80 t proto_init_net
ffffffff81289cb0 t proto_seq_open
ffffffff81289cd0 t proto_seq_next
ffffffff81289ce0 t proto_seq_stop
ffffffff81289cf0 t proto_seq_start
ffffffff81289d10 t sock_inuse_exit_net
ffffffff81289d20 t sock_inuse_init_net
ffffffff81289d50 t sock_def_destruct
ffffffff81289d60 T sock_kfree_s
ffffffff81289da0 T proto_unregister
ffffffff81289ea0 T sock_prot_inuse_add
ffffffff81289ec0 T proto_register
ffffffff8128a0f0 T sock_prot_inuse_get
ffffffff8128a160 t __lock_sock
ffffffff8128a1f0 T lock_sock_nested
ffffffff8128a240 t sock_def_wakeup
ffffffff8128a270 T release_sock
ffffffff8128a380 T sock_init_data
ffffffff8128a560 T sock_no_sendpage
ffffffff8128a5e0 T sk_wait_data
ffffffff8128a6c0 T sock_wmalloc
ffffffff8128a730 T sock_alloc_send_pskb
ffffffff8128aa40 T sock_alloc_send_skb
ffffffff8128aa50 T sock_kmalloc
ffffffff8128aab0 T sock_i_ino
ffffffff8128ab00 T sock_i_uid
ffffffff8128ab50 t __sk_free
ffffffff8128aca0 T sock_wfree
ffffffff8128ad00 T sk_free
ffffffff8128ad20 T sk_common_release
ffffffff8128adf0 T sk_dst_check
ffffffff8128ae90 T __sk_dst_check
ffffffff8128af00 T sk_prot_clear_portaddr_nulls
ffffffff8128af50 T sk_release_kernel
ffffffff8128afa0 T cred_to_ucred
ffffffff8128aff0 T sk_receive_skb
ffffffff8128b120 T sock_queue_rcv_skb
ffffffff8128b2b0 T sock_update_netprioidx
ffffffff8128b300 T sk_setup_caps
ffffffff8128b3c0 T __sk_mem_reclaim
ffffffff8128b420 t proto_seq_show
ffffffff8128b880 T lock_sock_fast
ffffffff8128b8e0 t sock_def_error_report
ffffffff8128b940 t sock_def_write_space
ffffffff8128b9c0 t sock_def_readable
ffffffff8128ba20 T sk_stop_timer
ffffffff8128ba40 T sk_reset_timer
ffffffff8128ba60 T sk_send_sigurg
ffffffff8128bab0 t sk_prot_alloc.isra.43
ffffffff8128bc20 T sk_alloc
ffffffff8128bcd0 T sk_clone_lock
ffffffff8128bf80 t sock_warn_obsolete_bsdism
ffffffff8128c000 t sock_set_timeout
ffffffff8128c100 T sock_getsockopt
ffffffff8128c790 T sock_rmalloc
ffffffff8128c820 T sock_enable_timestamp
ffffffff8128c850 T sock_get_timestampns
ffffffff8128c900 T sock_get_timestamp
ffffffff8128c9b0 T sock_setsockopt
ffffffff8128d1f0 T reqsk_queue_alloc
ffffffff8128d2d0 T __reqsk_queue_destroy
ffffffff8128d300 T reqsk_queue_destroy
ffffffff8128d3f0 t sock_pipe_buf_steal
ffffffff8128d400 T skb_add_rx_frag
ffffffff8128d450 T skb_prepare_seq_read
ffffffff8128d480 T skb_abort_seq_read
ffffffff8128d4a0 t skb_ts_finish
ffffffff8128d4c0 T skb_find_text
ffffffff8128d570 t sock_rmem_free
ffffffff8128d590 T skb_seq_read
ffffffff8128d770 t skb_ts_get_next_block
ffffffff8128d780 t __skb_to_sgvec
ffffffff8128d9b0 T skb_to_sgvec
ffffffff8128d9e0 t __copy_skb_header
ffffffff8128db40 t copy_skb_header
ffffffff8128dbc0 t __skb_clone
ffffffff8128dca0 T skb_store_bits
ffffffff8128df40 T skb_copy_bits
ffffffff8128e200 t skb_release_head_state
ffffffff8128e2a0 T skb_recycle
ffffffff8128e390 T skb_recycle_check
ffffffff8128e410 t sock_pipe_buf_get
ffffffff8128e440 T skb_pull_rcsum
ffffffff8128e4b0 T skb_checksum
ffffffff8128e770 T skb_append_datato_frags
ffffffff8128e910 t __skb_splice_bits
ffffffff8128ee50 t sock_pipe_buf_release
ffffffff8128ee60 t sock_spd_release
ffffffff8128ee70 T skb_insert
ffffffff8128eee0 T skb_append
ffffffff8128ef50 T skb_unlink
ffffffff8128efc0 T skb_queue_tail
ffffffff8128f020 T sock_queue_err_skb
ffffffff8128f0d0 T skb_queue_head
ffffffff8128f130 T skb_dequeue_tail
ffffffff8128f1b0 T skb_dequeue
ffffffff8128f230 T skb_copy_and_csum_bits
ffffffff8128f510 T skb_copy_and_csum_dev
ffffffff8128f5f0 T build_skb
ffffffff8128f770 T __alloc_skb
ffffffff8128f9a0 T dev_alloc_skb
ffffffff8128f9e0 T skb_gro_receive
ffffffff8128fe00 T __netdev_alloc_skb
ffffffff8128fe40 T skb_pull
ffffffff8128fe70 T skb_trim
ffffffff8128feb0 T __skb_warn_lro_forwarding
ffffffff8128fee0 T skb_partial_csum_set
ffffffff8128ff80 T skb_push
ffffffff81290000 T skb_put
ffffffff81290090 T skb_split
ffffffff81290340 T skb_copy_expand
ffffffff81290470 T skb_copy
ffffffff81290520 t skb_release_data
ffffffff81290610 T skb_morph
ffffffff81290640 T __kfree_skb
ffffffff812906d0 T consume_skb
ffffffff81290700 T kfree_skb
ffffffff81290730 T skb_complete_wifi_ack
ffffffff812907b0 T skb_queue_purge
ffffffff812907d0 t skb_drop_list
ffffffff81290800 T skb_copy_ubufs
ffffffff81290a40 T pskb_expand_head
ffffffff81290d30 t skb_prepare_for_shift
ffffffff81290d70 T __pskb_copy
ffffffff81290f50 T skb_clone
ffffffff81291000 T skb_tstamp_tx
ffffffff812910e0 T skb_segment
ffffffff81291700 T __pskb_pull_tail
ffffffff81291a50 T skb_pad
ffffffff81291b50 T skb_cow_data
ffffffff81291e60 T ___pskb_trim
ffffffff812920e0 T skb_realloc_headroom
ffffffff81292160 T skb_splice_bits
ffffffff81292310 T skb_shift
ffffffff812926f0 T memcpy_fromiovec
ffffffff81292760 T memcpy_fromiovecend
ffffffff81292800 T csum_partial_copy_fromiovecend
ffffffff81292a20 T memcpy_toiovec
ffffffff81292aa0 T memcpy_toiovecend
ffffffff81292b30 T verify_iovec
ffffffff81292bf0 T datagram_poll
ffffffff81292cd0 t skb_copy_and_csum_datagram
ffffffff81293010 T __skb_checksum_complete_head
ffffffff81293080 T __skb_checksum_complete
ffffffff81293090 T skb_copy_datagram_from_iovec
ffffffff812932d0 T skb_copy_datagram_const_iovec
ffffffff81293510 T skb_copy_datagram_iovec
ffffffff81293720 T skb_copy_and_csum_datagram_iovec
ffffffff81293850 T skb_kill_datagram
ffffffff81293940 T skb_free_datagram_locked
ffffffff81293a20 T skb_free_datagram
ffffffff81293a60 T __skb_recv_datagram
ffffffff81293d80 T skb_recv_datagram
ffffffff81293dc0 t receiver_wake_function
ffffffff81293de0 T sk_stream_kill_queues
ffffffff81293f00 T sk_stream_wait_memory
ffffffff81294140 T sk_stream_wait_connect
ffffffff812942e0 T sk_stream_write_space
ffffffff812943a0 T sk_stream_error
ffffffff81294400 T sk_stream_wait_close
ffffffff812944e0 T scm_fp_dup
ffffffff81294540 T put_cmsg
ffffffff81294650 T __scm_destroy
ffffffff81294740 T __scm_send
ffffffff81294b30 T scm_detach_fds
ffffffff81294cb0 T gnet_stats_copy_queue
ffffffff81294d10 T gnet_stats_copy_app
ffffffff81294d70 T gnet_stats_finish_copy
ffffffff81294df0 T gnet_stats_copy_basic
ffffffff81294e60 T gnet_stats_start_copy_compat
ffffffff81294fc0 T gnet_stats_start_copy
ffffffff81294fd0 T gnet_stats_copy_rate_est
ffffffff81295060 T gen_estimator_active
ffffffff81295100 T gen_kill_estimator
ffffffff812951c0 T gen_new_estimator
ffffffff812953a0 T gen_replace_estimator
ffffffff812953f0 t est_timer
ffffffff81295550 t netns_get
ffffffff81295570 t net_alloc_generic
ffffffff812955a0 T get_net_ns_by_pid
ffffffff812955e0 T __put_net
ffffffff81295640 t netns_put
ffffffff81295660 t netns_install
ffffffff812956a0 t ops_exit_list.isra.2
ffffffff81295710 t ops_free_list
ffffffff81295790 t unregister_pernet_operations
ffffffff81295860 T unregister_pernet_device
ffffffff812958a0 T unregister_pernet_subsys
ffffffff812958d0 t ops_init
ffffffff81295a10 t setup_net
ffffffff81295b00 t register_pernet_operations
ffffffff81295c90 T register_pernet_device
ffffffff81295cf0 T register_pernet_subsys
ffffffff81295d30 T net_drop_ns
ffffffff81295d70 t cleanup_net
ffffffff81295f10 T copy_net_ns
ffffffff81296020 T get_net_ns_by_fd
ffffffff81296070 T secure_ipv4_port_ephemeral
ffffffff812960b0 T secure_ip_id
ffffffff812960f0 T secure_ipv6_id
ffffffff81296120 T secure_tcp_sequence_number
ffffffff81296170 T skb_flow_dissect
ffffffff812964e0 t sysctl_core_net_init
ffffffff81296580 t rps_sock_flow_sysctl
ffffffff81296740 t sysctl_core_net_exit
ffffffff81296770 T __dev_get_by_index
ffffffff812967c0 T dev_get_by_index_rcu
ffffffff81296820 T dev_get_by_index
ffffffff81296880 T dev_getfirstbyhwtype
ffffffff81296900 T dev_get_by_flags_rcu
ffffffff81296980 T net_disable_timestamp
ffffffff81296990 T rps_may_expire_flow
ffffffff81296a10 T napi_gro_flush
ffffffff81296a60 T netif_napi_add
ffffffff81296ac0 T register_gifconf
ffffffff81296ae0 T dev_seq_next
ffffffff81296b60 T dev_seq_stop
ffffffff81296b70 t softnet_seq_start
ffffffff81296bd0 t softnet_seq_next
ffffffff81296c40 t softnet_seq_stop
ffffffff81296c50 t ptype_get_idx
ffffffff81296d00 t ptype_seq_stop
ffffffff81296d10 T dev_get_flags
ffffffff81296d90 T dev_set_group
ffffffff81296da0 t dev_new_index
ffffffff81296e00 T netdev_increment_features
ffffffff81296e40 T __skb_tx_hash
ffffffff81296f20 T init_dummy_netdev
ffffffff81296fe0 t netdev_exit
ffffffff81297000 t netdev_create_hash
ffffffff81297040 t netdev_init
ffffffff812970a0 t dev_proc_net_exit
ffffffff812970d0 t dev_proc_net_init
ffffffff81297170 t ptype_seq_open
ffffffff81297190 t dev_seq_open
ffffffff812971b0 t softnet_seq_show
ffffffff81297210 t softnet_seq_open
ffffffff81297220 T netdev_refcnt_read
ffffffff81297270 T net_enable_timestamp
ffffffff812972b0 t rps_trigger_softirq
ffffffff812972f0 T __napi_schedule
ffffffff81297340 t net_tx_action
ffffffff812974c0 T netif_napi_del
ffffffff81297530 t dev_gso_skb_destructor
ffffffff81297570 t __netif_receive_skb
ffffffff812978a0 T dev_add_pack
ffffffff81297920 T __dev_remove_pack
ffffffff812979d0 t flush_backlog
ffffffff81297ad0 t enqueue_to_backlog
ffffffff81297c70 T netdev_set_master
ffffffff81297d20 T netdev_rx_handler_unregister
ffffffff81297d70 T netdev_rx_handler_register
ffffffff81297df0 T __dev_getfirstbyhwtype
ffffffff81297e80 t unlist_netdevice
ffffffff81297f70 t list_netdevice
ffffffff812980a0 T synchronize_net
ffffffff812980d0 T dev_remove_pack
ffffffff812980f0 T free_netdev
ffffffff812981d0 T alloc_netdev_mqs
ffffffff812984c0 T netdev_stats_to_stats64
ffffffff81298560 T dev_get_stats
ffffffff81298690 t dev_seq_printf_stats
ffffffff81298780 T netif_stacked_transfer_operstate
ffffffff81298810 t __dev_set_promiscuity
ffffffff812989a0 T dev_getbyhwaddr_rcu
ffffffff81298a40 T __skb_get_rxhash
ffffffff81298b00 t get_rps_cpu
ffffffff81298e30 T netif_receive_skb
ffffffff81298eb0 t napi_gro_complete
ffffffff81298f80 T dev_gro_receive
ffffffff812991f0 T skb_gso_segment
ffffffff81299470 T skb_checksum_help
ffffffff812995d0 T netif_set_real_num_rx_queues
ffffffff81299660 T netif_set_real_num_tx_queues
ffffffff81299820 T call_netdevice_notifiers
ffffffff81299880 t __dev_close_many
ffffffff81299950 t __dev_close
ffffffff812999a0 t dev_close_many
ffffffff81299ab0 t rollback_registered_many
ffffffff81299d00 T unregister_netdevice_many
ffffffff81299d60 t rollback_registered
ffffffff81299db0 T dev_set_mac_address
ffffffff81299e20 T dev_set_mtu
ffffffff81299ea0 T netdev_bonding_change
ffffffff81299eb0 T netdev_features_change
ffffffff81299ec0 T unregister_netdevice_notifier
ffffffff81299fb0 T register_netdevice_notifier
ffffffff8129a160 T dev_get_by_name_rcu
ffffffff8129a1e0 T dev_get_by_name
ffffffff8129a200 T __dev_get_by_name
ffffffff8129a280 T netdev_boot_setup_check
ffffffff8129a310 T __netif_schedule
ffffffff8129a380 T netif_device_detach
ffffffff8129a410 t harmonize_features.isra.73
ffffffff8129a450 T netif_skb_features
ffffffff8129a4d0 T skb_gro_reset_offset
ffffffff8129a550 T napi_frags_skb
ffffffff8129a670 T napi_get_frags
ffffffff8129a6a0 T dev_seq_start
ffffffff8129a730 t ptype_seq_start
ffffffff8129a750 T __napi_complete
ffffffff8129a7a0 T napi_complete
ffffffff8129a810 T dev_kfree_skb_irq
ffffffff8129a870 T dev_kfree_skb_any
ffffffff8129a8a0 t ptype_seq_next
ffffffff8129a930 t ptype_seq_show
ffffffff8129a9f0 t net_rps_action_and_irq_enable.isra.86
ffffffff8129aa60 t net_rx_action
ffffffff8129abb0 t process_backlog
ffffffff8129ad30 T __netdev_printk
ffffffff8129adb0 T netdev_info
ffffffff8129ae10 T netdev_notice
ffffffff8129ae70 T netdev_warn
ffffffff8129aed0 T netdev_err
ffffffff8129af30 T netdev_crit
ffffffff8129af90 T netdev_alert
ffffffff8129aff0 T netdev_emerg
ffffffff8129b050 T netdev_printk
ffffffff8129b0a0 T netdev_set_bond_master
ffffffff8129b140 t dev_seq_show
ffffffff8129b170 T napi_frags_finish
ffffffff8129b250 T napi_gro_frags
ffffffff8129b380 T napi_skb_finish
ffffffff8129b3c0 T napi_gro_receive
ffffffff8129b4e0 T netif_rx
ffffffff8129b570 t dev_cpu_callback
ffffffff8129b700 T netif_rx_ni
ffffffff8129b730 T dev_forward_skb
ffffffff8129b890 T netdev_rx_csum_fault
ffffffff8129b8d0 T netif_device_attach
ffffffff8129b970 T unregister_netdevice_queue
ffffffff8129ba50 T unregister_netdev
ffffffff8129ba80 t default_device_exit_batch
ffffffff8129bb60 T dev_close
ffffffff8129bbb0 T netdev_state_change
ffffffff8129bbe0 T dev_load
ffffffff8129bc60 T dev_alloc_name
ffffffff8129be00 T dev_valid_name
ffffffff8129beb0 t dev_get_valid_name
ffffffff8129bf70 T dev_change_net_namespace
ffffffff8129c130 t default_device_exit
ffffffff8129c200 T netdev_boot_base
ffffffff8129c290 T dev_change_name
ffffffff8129c480 T dev_set_alias
ffffffff8129c560 T dev_hard_start_xmit
ffffffff8129cad0 T dev_queue_xmit
ffffffff8129d0e0 T __dev_set_rx_mode
ffffffff8129d190 T dev_set_rx_mode
ffffffff8129d1d0 t __dev_open
ffffffff8129d2b0 T dev_open
ffffffff8129d310 T dev_set_allmulti
ffffffff8129d3f0 T dev_set_promiscuity
ffffffff8129d440 T __dev_change_flags
ffffffff8129d5c0 T __dev_notify_flags
ffffffff8129d650 T dev_change_flags
ffffffff8129d6c0 t dev_ifsioc
ffffffff8129dab0 T dev_ioctl
ffffffff8129e040 T __netdev_update_features
ffffffff8129e220 T register_netdevice
ffffffff8129e450 T register_netdev
ffffffff8129e480 T netdev_change_features
ffffffff8129e4a0 T netdev_update_features
ffffffff8129e4c0 T dev_disable_lro
ffffffff8129e520 T netdev_run_todo
ffffffff8129e790 T dev_ingress_queue_create
ffffffff8129e7a0 T netdev_drivername
ffffffff8129e7e0 T ethtool_op_get_link
ffffffff8129e7f0 t __ethtool_get_flags
ffffffff8129e840 t ethtool_set_coalesce
ffffffff8129e8a0 t ethtool_set_channels
ffffffff8129e900 t ethtool_set_value
ffffffff8129e950 t ethtool_flash_device
ffffffff8129e9b0 t ethtool_set_rxnfc
ffffffff8129eaa0 t ethtool_get_coalesce
ffffffff8129eb30 t ethtool_get_channels
ffffffff8129ebb0 t ethtool_get_value
ffffffff8129ec00 t ethtool_get_drvinfo
ffffffff8129ed90 t ethtool_get_rxfh_indir
ffffffff8129eec0 t ethtool_set_rxfh_indir
ffffffff8129f0d0 t ethtool_get_rxnfc
ffffffff8129f250 t ethtool_get_sset_info
ffffffff8129f3b0 t __ethtool_set_flags
ffffffff8129f430 T __ethtool_get_settings
ffffffff8129f5d0 t ethtool_get_feature_mask
ffffffff8129f620 T dev_ethtool
ffffffff812a0e80 T __hw_addr_init
ffffffff812a0e90 T dev_uc_init
ffffffff812a0eb0 T dev_mc_init
ffffffff812a0ed0 t dev_mc_net_exit
ffffffff812a0ee0 t dev_mc_net_init
ffffffff812a0f10 t dev_mc_seq_open
ffffffff812a0f30 t dev_mc_seq_show
ffffffff812a1000 T __hw_addr_flush
ffffffff812a1060 T dev_addr_flush
ffffffff812a1080 T dev_uc_flush
ffffffff812a10d0 T dev_mc_flush
ffffffff812a1120 t __hw_addr_del_ex
ffffffff812a11f0 t __dev_mc_del
ffffffff812a1270 T dev_mc_del_global
ffffffff812a1280 T dev_mc_del
ffffffff812a1290 T dev_uc_del
ffffffff812a1310 T __hw_addr_unsync
ffffffff812a13a0 T __hw_addr_del_multiple
ffffffff812a1400 T dev_addr_del_multiple
ffffffff812a14a0 T dev_addr_del
ffffffff812a1550 T dev_mc_unsync
ffffffff812a1610 T dev_uc_unsync
ffffffff812a16d0 t __hw_addr_add_ex
ffffffff812a17d0 t __dev_mc_add
ffffffff812a1850 T dev_mc_add_global
ffffffff812a1860 T dev_mc_add
ffffffff812a1870 T dev_uc_add
ffffffff812a18f0 T __hw_addr_sync
ffffffff812a19b0 T dev_mc_sync
ffffffff812a1a50 T dev_uc_sync
ffffffff812a1af0 T __hw_addr_add_multiple
ffffffff812a1ba0 T dev_addr_add_multiple
ffffffff812a1c60 T dev_addr_init
ffffffff812a1ce0 T dev_addr_add
ffffffff812a1d70 T __dst_destroy_metrics_generic
ffffffff812a1da0 T dst_cow_metrics_generic
ffffffff812a1e40 T dst_alloc
ffffffff812a1fb0 T dst_destroy
ffffffff812a20a0 t dst_gc_task
ffffffff812a22a0 T __dst_free
ffffffff812a2360 T dst_discard
ffffffff812a2370 t dst_ifdown
ffffffff812a2440 t dst_dev_event
ffffffff812a2570 T skb_dst_set_noref
ffffffff812a2590 T dst_release
ffffffff812a2610 T call_netevent_notifiers
ffffffff812a2630 T unregister_netevent_notifier
ffffffff812a2640 T register_netevent_notifier
ffffffff812a2650 t neigh_get_first
ffffffff812a2730 t neigh_get_next
ffffffff812a2800 t pneigh_get_first
ffffffff812a2860 t pneigh_get_next
ffffffff812a28e0 t neigh_stat_seq_start
ffffffff812a2960 t neigh_stat_seq_next
ffffffff812a29c0 t neigh_stat_seq_stop
ffffffff812a29d0 t neightbl_set
ffffffff812a2da0 T neigh_seq_stop
ffffffff812a2db0 T neigh_for_each
ffffffff812a2e60 t neightbl_fill_parms
ffffffff812a31a0 t neigh_fill_info
ffffffff812a33e0 T neigh_sysctl_unregister
ffffffff812a3420 t neigh_rcu_free_parms
ffffffff812a3440 T neigh_sysctl_register
ffffffff812a3730 t proc_unres_qlen
ffffffff812a37e0 t neigh_blackhole
ffffffff812a3800 t pneigh_queue_purge
ffffffff812a3830 t neigh_hash_free_rcu
ffffffff812a3890 t neigh_stat_seq_open
ffffffff812a38e0 t neigh_hash_alloc
ffffffff812a3990 t neigh_proxy_process
ffffffff812a3ac0 T pneigh_enqueue
ffffffff812a3c10 T neigh_direct_output
ffffffff812a3c20 T neigh_connected_output
ffffffff812a3d30 T neigh_compat_output
ffffffff812a3dd0 t __pneigh_lookup_1
ffffffff812a3e30 T pneigh_lookup
ffffffff812a4010 T __pneigh_lookup
ffffffff812a4060 T neigh_lookup_nodev
ffffffff812a4150 T neigh_lookup
ffffffff812a4240 t neigh_invalidate
ffffffff812a42e0 t neigh_probe
ffffffff812a4350 T neigh_seq_start
ffffffff812a4470 T neigh_seq_next
ffffffff812a44f0 T neigh_parms_release
ffffffff812a45a0 t neigh_stat_seq_show
ffffffff812a4640 t neigh_del_timer
ffffffff812a46a0 T neigh_destroy
ffffffff812a4780 t neigh_add_timer
ffffffff812a47c0 T __neigh_event_send
ffffffff812a49f0 T neigh_resolve_output
ffffffff812a4bf0 T neigh_rand_reach_time
ffffffff812a4c20 T neigh_table_init_no_netlink
ffffffff812a4dd0 T neigh_parms_alloc
ffffffff812a4ee0 T neigh_table_init
ffffffff812a4f50 t neigh_dump_info
ffffffff812a53d0 t neightbl_fill_info.constprop.39
ffffffff812a5720 t neightbl_dump_info
ffffffff812a5980 t __neigh_notify.constprop.41
ffffffff812a5a80 t neigh_update_notify
ffffffff812a5aa0 T neigh_update
ffffffff812a5f70 t neigh_timer_handler
ffffffff812a61f0 t neigh_cleanup_and_release
ffffffff812a6230 T __neigh_for_each_release
ffffffff812a62e0 T neigh_create
ffffffff812a68c0 t neigh_add
ffffffff812a6be0 T neigh_event_ns
ffffffff812a6cb0 t neigh_periodic_work
ffffffff812a6e70 t neigh_flush_dev.isra.37
ffffffff812a6f70 T neigh_ifdown
ffffffff812a7050 T neigh_table_clear
ffffffff812a7170 T neigh_changeaddr
ffffffff812a71c0 T pneigh_delete
ffffffff812a72d0 t neigh_delete
ffffffff812a7490 T rtnl_is_locked
ffffffff812a74a0 T __rtnl_af_register
ffffffff812a74c0 T __rtnl_af_unregister
ffffffff812a74f0 t validate_linkmsg
ffffffff812a7620 t rtnetlink_net_exit
ffffffff812a7640 t rtnetlink_net_init
ffffffff812a7680 t rtnl_dump_all
ffffffff812a7860 T __rtnl_link_unregister
ffffffff812a7930 t rtnl_dellink
ffffffff812a7a50 t set_operstate
ffffffff812a7af0 t rtnl_link_ops_get
ffffffff812a7b50 T __rtnl_link_register
ffffffff812a7ba0 T __rtnl_register
ffffffff812a7c80 T rtnl_register
ffffffff812a7cc0 t if_nlmsg_size
ffffffff812a7e80 t rtnl_calcit
ffffffff812a7f70 T rtnetlink_put_metrics
ffffffff812a8070 t rtnl_fill_ifinfo
ffffffff812a8dc0 t rtnl_dump_ifinfo
ffffffff812a8f60 T rtnl_create_link
ffffffff812a90f0 T rtnl_put_cacheinfo
ffffffff812a91e0 T rtnl_set_sk_err
ffffffff812a9200 T rtnl_notify
ffffffff812a9230 T rtnl_unicast
ffffffff812a9260 t rtnl_getlink
ffffffff812a9450 T __rta_fill
ffffffff812a94e0 T rtnl_trylock
ffffffff812a94f0 T rtnl_unlock
ffffffff812a9500 T rtnl_lock
ffffffff812a9510 t rtnetlink_rcv
ffffffff812a9540 T rtnl_af_unregister
ffffffff812a9580 T rtnl_af_register
ffffffff812a95b0 T rtnl_link_unregister
ffffffff812a95e0 T rtnl_link_register
ffffffff812a9610 T rtnl_unregister_all
ffffffff812a9640 T rtnl_unregister
ffffffff812a96a0 T rtnl_link_get_net
ffffffff812a96e0 t do_setlink
ffffffff812aa080 t rtnl_setlink
ffffffff812aa180 T __rtnl_unlock
ffffffff812aa190 t rtnetlink_rcv_msg
ffffffff812aa4d0 T rtnetlink_send
ffffffff812aa560 T rtmsg_ifinfo
ffffffff812aa680 t rtnetlink_event
ffffffff812aa6d0 T rtnl_configure_link
ffffffff812aa750 t rtnl_newlink
ffffffff812aacd0 T in4_pton
ffffffff812aae40 T inet_proto_csum_replace4
ffffffff812aaf10 T in6_pton
ffffffff812ab2e0 T in_aton
ffffffff812ab350 T net_ratelimit
ffffffff812ab370 T mac_pton
ffffffff812ab440 t linkwatch_do_dev
ffffffff812ab510 t linkwatch_schedule_work
ffffffff812ab5b0 T linkwatch_fire_event
ffffffff812ab710 t __linkwatch_run_queue
ffffffff812ab8f0 t linkwatch_event
ffffffff812ab920 T linkwatch_forget_dev
ffffffff812ab990 T linkwatch_run_queue
ffffffff812ab9a0 T sk_detach_filter
ffffffff812aba00 T sk_filter_release_rcu
ffffffff812aba10 T sk_chk_filter
ffffffff812abd10 T sk_attach_filter
ffffffff812abe80 T bpf_internal_load_pointer_neg_helper
ffffffff812abef0 T sk_run_filter
ffffffff812ac3e0 T sk_filter
ffffffff812ac450 T sock_diag_check_cookie
ffffffff812ac490 T sock_diag_save_cookie
ffffffff812ac4a0 t sock_diag_rcv
ffffffff812ac4d0 T sock_diag_register
ffffffff812ac540 T sock_diag_unregister_inet_compat
ffffffff812ac570 T sock_diag_register_inet_compat
ffffffff812ac5a0 T sock_diag_put_meminfo
ffffffff812ac630 t sock_diag_rcv_msg
ffffffff812ac760 T sock_diag_unregister
ffffffff812ac7d0 t flow_cache_gc_task
ffffffff812ac890 t flow_cache_new_hashrnd
ffffffff812ac900 t flow_cache_flush_per_cpu
ffffffff812ac940 t flow_cache_queue_garbage.isra.5.part.6
ffffffff812ac9a0 t flow_cache_flush_tasklet
ffffffff812acaf0 t __flow_cache_shrink.isra.7
ffffffff812acc30 T flow_cache_lookup
ffffffff812ad050 T flow_cache_flush
ffffffff812ad0f0 t flow_cache_flush_task
ffffffff812ad100 T flow_cache_flush_deferred
ffffffff812ad110 t change_tx_queue_len
ffffffff812ad120 t rx_queue_attr_show
ffffffff812ad140 t rx_queue_attr_store
ffffffff812ad160 t netdev_queue_attr_show
ffffffff812ad180 t netdev_queue_attr_store
ffffffff812ad1a0 t net_grab_current_ns
ffffffff812ad1c0 t net_initial_ns
ffffffff812ad1d0 t net_netlink_ns
ffffffff812ad1e0 t net_namespace
ffffffff812ad1f0 t change_group
ffffffff812ad200 t format_group
ffffffff812ad230 t format_tx_queue_len
ffffffff812ad260 t format_flags
ffffffff812ad290 t format_mtu
ffffffff812ad2c0 t show_operstate
ffffffff812ad330 t show_dormant
ffffffff812ad370 t show_carrier
ffffffff812ad3c0 t format_link_mode
ffffffff812ad3f0 t format_type
ffffffff812ad420 t format_ifindex
ffffffff812ad450 t format_iflink
ffffffff812ad480 t format_dev_id
ffffffff812ad4b0 t format_addr_len
ffffffff812ad4e0 t format_addr_assign_type
ffffffff812ad510 t bql_show_inflight
ffffffff812ad540 t bql_show_limit_min
ffffffff812ad570 t bql_show_limit_max
ffffffff812ad5a0 t bql_show_limit
ffffffff812ad5d0 t show_rps_dev_flow_table_cnt
ffffffff812ad600 t change_flags
ffffffff812ad610 t change_mtu
ffffffff812ad620 t show_broadcast
ffffffff812ad650 t show_address
ffffffff812ad6b0 T netdev_class_remove_file
ffffffff812ad6c0 T netdev_class_create_file
ffffffff812ad6d0 t bql_set_hold_time
ffffffff812ad740 t bql_show_hold_time
ffffffff812ad770 t bql_set
ffffffff812ad800 t bql_set_limit_min
ffffffff812ad820 t bql_set_limit_max
ffffffff812ad840 t bql_set_limit
ffffffff812ad860 t store_xps_map
ffffffff812adcd0 t netdev_queue_release
ffffffff812aded0 t show_xps_map
ffffffff812adff0 t show_rps_map
ffffffff812ae070 t show_trans_timeout
ffffffff812ae0e0 t rx_queue_release
ffffffff812ae1c0 t store_rps_dev_flow_table_cnt
ffffffff812ae2e0 t rps_dev_flow_table_release
ffffffff812ae310 t rps_dev_flow_table_release_work
ffffffff812ae320 t store_rps_map
ffffffff812ae460 t netdev_store.isra.11
ffffffff812ae540 t store_group
ffffffff812ae560 t store_tx_queue_len
ffffffff812ae580 t store_flags
ffffffff812ae5a0 t store_mtu
ffffffff812ae5c0 t netdev_show.isra.13
ffffffff812ae620 t show_group
ffffffff812ae630 t show_tx_queue_len
ffffffff812ae640 t show_flags
ffffffff812ae650 t show_mtu
ffffffff812ae660 t show_link_mode
ffffffff812ae670 t show_type
ffffffff812ae680 t show_ifindex
ffffffff812ae690 t show_iflink
ffffffff812ae6a0 t show_dev_id
ffffffff812ae6b0 t show_addr_len
ffffffff812ae6c0 t show_addr_assign_type
ffffffff812ae6d0 t show_ifalias
ffffffff812ae750 t show_duplex
ffffffff812ae810 t show_speed
ffffffff812ae8d0 t store_ifalias
ffffffff812ae990 t netdev_release
ffffffff812ae9c0 t netdev_uevent
ffffffff812aea30 t netstat_show.isra.20
ffffffff812aeaf0 t show_tx_compressed
ffffffff812aeb00 t show_rx_compressed
ffffffff812aeb10 t show_tx_window_errors
ffffffff812aeb20 t show_tx_heartbeat_errors
ffffffff812aeb30 t show_tx_fifo_errors
ffffffff812aeb40 t show_tx_carrier_errors
ffffffff812aeb50 t show_tx_aborted_errors
ffffffff812aeb60 t show_rx_missed_errors
ffffffff812aeb70 t show_rx_fifo_errors
ffffffff812aeb80 t show_rx_frame_errors
ffffffff812aeb90 t show_rx_crc_errors
ffffffff812aeba0 t show_rx_over_errors
ffffffff812aebb0 t show_rx_length_errors
ffffffff812aebc0 t show_collisions
ffffffff812aebd0 t show_multicast
ffffffff812aebe0 t show_tx_dropped
ffffffff812aebf0 t show_rx_dropped
ffffffff812aec00 t show_tx_errors
ffffffff812aec10 t show_rx_errors
ffffffff812aec20 t show_tx_bytes
ffffffff812aec30 t show_rx_bytes
ffffffff812aec40 t show_tx_packets
ffffffff812aec50 t show_rx_packets
ffffffff812aec60 T net_rx_queue_update_kobjects
ffffffff812aed40 T netdev_queue_update_kobjects
ffffffff812aee60 T netdev_unregister_kobject
ffffffff812aeee0 T netdev_register_kobject
ffffffff812af030 T netdev_kobject_init
ffffffff812af060 T compat_mc_setsockopt
ffffffff812af3d0 T compat_sock_get_timestampns
ffffffff812af490 T compat_sock_get_timestamp
ffffffff812af550 T compat_mc_getsockopt
ffffffff812af890 T get_compat_msghdr
ffffffff812af930 T verify_compat_iovec
ffffffff812afa40 T cmsghdr_from_user_compat_to_kern
ffffffff812afce0 T put_cmsg_compat
ffffffff812afe80 T scm_detach_fds_compat
ffffffff812affb0 T compat_sys_setsockopt
ffffffff812b01c0 T compat_sys_getsockopt
ffffffff812b0370 T compat_sys_sendmsg
ffffffff812b0380 T compat_sys_sendmmsg
ffffffff812b03a0 T compat_sys_recvmsg
ffffffff812b03b0 T compat_sys_recv
ffffffff812b03c0 T compat_sys_recvfrom
ffffffff812b03d0 T compat_sys_recvmmsg
ffffffff812b04a0 T compat_sys_socketcall
ffffffff812b06c0 T eth_change_mtu
ffffffff812b06e0 T eth_validate_addr
ffffffff812b0710 T sysfs_format_mac
ffffffff812b07d0 T alloc_etherdev_mqs
ffffffff812b07f0 T ether_setup
ffffffff812b0860 T eth_mac_addr
ffffffff812b08b0 T eth_header_cache_update
ffffffff812b08c0 T eth_header_cache
ffffffff812b0910 T eth_header_parse
ffffffff812b0930 T eth_type_trans
ffffffff812b0a00 T eth_rebuild_header
ffffffff812b0a80 T eth_header
ffffffff812b0b60 T dev_trans_start
ffffffff812b0bc0 t noop_dequeue
ffffffff812b0bd0 t pfifo_fast_peek
ffffffff812b0c20 t pfifo_fast_init
ffffffff812b0c70 t transition_one_qdisc
ffffffff812b0cb0 t pfifo_fast_dump
ffffffff812b0d10 t pfifo_fast_dequeue
ffffffff812b0de0 t pfifo_fast_reset
ffffffff812b0e80 t noop_enqueue
ffffffff812b0ea0 T qdisc_reset
ffffffff812b0ee0 t dev_watchdog
ffffffff812b1150 T dev_graft_qdisc
ffffffff812b11e0 T qdisc_destroy
ffffffff812b1290 t qdisc_rcu_free
ffffffff812b12b0 T netif_notify_peers
ffffffff812b12d0 t pfifo_fast_enqueue
ffffffff812b1370 T netif_carrier_off
ffffffff812b13a0 t dev_deactivate_queue.constprop.31
ffffffff812b1420 T sch_direct_xmit
ffffffff812b1600 T __qdisc_run
ffffffff812b1730 T __netdev_watchdog_up
ffffffff812b17a0 T netif_carrier_on
ffffffff812b17e0 T qdisc_alloc
ffffffff812b18d0 T qdisc_create_dflt
ffffffff812b1940 T dev_activate
ffffffff812b1af0 T dev_deactivate_many
ffffffff812b1d60 T dev_deactivate
ffffffff812b1db0 T dev_init_scheduler
ffffffff812b1e40 T dev_shutdown
ffffffff812b1f00 t mq_select_queue
ffffffff812b1f50 t mq_leaf
ffffffff812b1f90 t mq_get
ffffffff812b1fe0 t mq_put
ffffffff812b1ff0 t mq_dump_class
ffffffff812b2040 t mq_walk
ffffffff812b20a0 t mq_dump_class_stats
ffffffff812b2120 t mq_graft
ffffffff812b21c0 t mq_dump
ffffffff812b22e0 t mq_attach
ffffffff812b2350 t mq_destroy
ffffffff812b23c0 t mq_init
ffffffff812b24b0 t netlink_update_listeners
ffffffff812b2530 t netlink_update_subscriptions
ffffffff812b25b0 t netlink_getname
ffffffff812b2610 t netlink_overrun
ffffffff812b2650 t netlink_getsockopt
ffffffff812b2750 T netlink_set_nonroot
ffffffff812b2770 t netlink_seq_socket_idx
ffffffff812b2800 t netlink_seq_next
ffffffff812b28c0 t netlink_seq_stop
ffffffff812b28d0 t netlink_net_exit
ffffffff812b28e0 t netlink_net_init
ffffffff812b2910 t netlink_seq_open
ffffffff812b2930 t netlink_lookup
ffffffff812b2a00 t __netlink_create
ffffffff812b2ae0 t netlink_create
ffffffff812b2ce0 t netlink_update_socket_mc
ffffffff812b2d30 T netlink_set_err
ffffffff812b2e20 T __nlmsg_put
ffffffff812b2ec0 t netlink_dump
ffffffff812b30d0 t netlink_recvmsg
ffffffff812b3460 t netlink_data_ready
ffffffff812b3470 t netlink_sock_destruct
ffffffff812b3540 T netlink_dump_start
ffffffff812b36d0 T netlink_unregister_notifier
ffffffff812b36e0 T netlink_register_notifier
ffffffff812b36f0 T netlink_kernel_release
ffffffff812b3700 t netlink_trim
ffffffff812b37c0 T netlink_broadcast_filtered
ffffffff812b3b60 T netlink_broadcast
ffffffff812b3b80 t netlink_seq_show
ffffffff812b3c40 t netlink_seq_start
ffffffff812b3ca0 T netlink_has_listeners
ffffffff812b3ce0 T netlink_table_grab
ffffffff812b3dc0 T netlink_table_ungrab
ffffffff812b3df0 t netlink_insert
ffffffff812b4010 t netlink_autobind.isra.23
ffffffff812b4140 t netlink_connect
ffffffff812b4250 t netlink_realloc_groups
ffffffff812b4340 t netlink_setsockopt
ffffffff812b4500 t netlink_bind
ffffffff812b4660 t netlink_release
ffffffff812b48d0 T netlink_kernel_create
ffffffff812b4ae0 T netlink_getsockbyfilp
ffffffff812b4b30 T netlink_attachskb
ffffffff812b4d50 T netlink_sendskb
ffffffff812b4da0 T netlink_unicast
ffffffff812b4fa0 T nlmsg_notify
ffffffff812b5060 t netlink_sendmsg
ffffffff812b5380 T netlink_ack
ffffffff812b54d0 T netlink_rcv_skb
ffffffff812b5580 T netlink_detachskb
ffffffff812b55b0 T __netlink_change_ngroups
ffffffff812b5680 T netlink_change_ngroups
ffffffff812b56b0 T __netlink_clear_multicast_users
ffffffff812b56f0 T netlink_clear_multicast_users
ffffffff812b5720 t genl_family_find_byid
ffffffff812b5770 t genl_pernet_exit
ffffffff812b5790 t genl_pernet_init
ffffffff812b57f0 t genl_family_find_byname
ffffffff812b5870 T genl_notify
ffffffff812b58a0 t genlmsg_mcast
ffffffff812b59a0 T genlmsg_multicast_allns
ffffffff812b59b0 T genlmsg_put
ffffffff812b5a20 t ctrl_fill_info
ffffffff812b5df0 t ctrl_dumpfamily
ffffffff812b5f30 t ctrl_build_family_msg
ffffffff812b5fe0 T genl_unlock
ffffffff812b5ff0 T genl_lock
ffffffff812b6000 t genl_rcv
ffffffff812b6030 t genl_rcv_msg
ffffffff812b62a0 t ctrl_getfamily
ffffffff812b63a0 t genl_ctrl_event
ffffffff812b66a0 t __genl_unregister_mc_group
ffffffff812b6750 T genl_unregister_mc_group
ffffffff812b6780 T genl_unregister_family
ffffffff812b68a0 T genl_register_family
ffffffff812b6a40 T genl_unregister_ops
ffffffff812b6ae0 T genl_register_ops
ffffffff812b6bb0 T genl_register_family_with_ops
ffffffff812b6c10 T genl_register_mc_group
ffffffff812b6dd0 t dst_rcu_free
ffffffff812b6e00 t ipv4_dst_ifdown
ffffffff812b6e10 t rt_cpu_seq_next
ffffffff812b6e70 t rt_cpu_seq_stop
ffffffff812b6e80 t ipv4_blackhole_dst_check
ffffffff812b6e90 t ipv4_blackhole_mtu
ffffffff812b6eb0 t ipv4_rt_blackhole_update_pmtu
ffffffff812b6ec0 t ipv4_rt_blackhole_cow_metrics
ffffffff812b6ed0 t ipv4_mtu
ffffffff812b6f30 t rt_cache_get_first
ffffffff812b6fe0 t rt_cache_get_next
ffffffff812b7090 t rt_cache_seq_next
ffffffff812b70c0 t rt_cache_seq_stop
ffffffff812b70d0 t ipv4_neigh_lookup
ffffffff812b71e0 t check_peer_pmtu
ffffffff812b72a0 t ipv4_link_failure
ffffffff812b7330 t ipv4_dst_destroy
ffffffff812b73b0 t rt_genid_init
ffffffff812b73e0 t sysctl_route_net_init
ffffffff812b7470 t ip_rt_do_proc_exit
ffffffff812b74a0 t rt_cpu_seq_open
ffffffff812b74b0 t rt_cpu_seq_show
ffffffff812b7690 t rt_cache_seq_show
ffffffff812b7850 t rt_cache_seq_open
ffffffff812b7870 t ip_rt_bug
ffffffff812b78d0 t rt_do_flush
ffffffff812b7a10 t rt_dst_alloc
ffffffff812b7a50 t rt_cache_invalidate
ffffffff812b7a90 t rt_cache_seq_start
ffffffff812b7b00 t rt_cpu_seq_start
ffffffff812b7b80 t rt_may_expire.part.32
ffffffff812b7bf0 t rt_garbage_collect
ffffffff812b7fd0 t rt_intern_hash
ffffffff812b8620 t rt_worker_func
ffffffff812b8920 t ipv4_negative_advice
ffffffff812b8b20 t ipv4_default_advmss
ffffffff812b8b60 t rt_set_nexthop.isra.40
ffffffff812b8d40 t check_peer_redir.isra.41
ffffffff812b8e30 t sysctl_route_net_exit
ffffffff812b8e60 t ip_rt_do_proc_init
ffffffff812b8ed0 t rt_fill_info.isra.44.constprop.48
ffffffff812b92d0 T rt_cache_flush
ffffffff812b9330 t ipv4_sysctl_rtcache_flush
ffffffff812b93b0 T rt_cache_flush_batch
ffffffff812b93d0 T rt_bind_peer
ffffffff812b9430 t ip_rt_update_pmtu
ffffffff812b9500 t ipv4_cow_metrics
ffffffff812b9610 t ipv4_validate_peer
ffffffff812b96b0 t ipv4_dst_check
ffffffff812b96e0 T __ip_route_output_key
ffffffff812b9fa0 T ip_route_output_flow
ffffffff812ba010 T ip_route_input_common
ffffffff812bab00 t inet_rtm_getroute
ffffffff812bade0 t ip_error
ffffffff812baef0 T __ip_select_ident
ffffffff812bb060 T ip_rt_redirect
ffffffff812bb3b0 T ip_rt_send_redirect
ffffffff812bb510 T ip_rt_frag_needed
ffffffff812bb680 T ip_rt_get_source
ffffffff812bb7e0 T ipv4_blackhole_route
ffffffff812bba20 T ip_rt_dump
ffffffff812bbbc0 T ip_rt_multicast_event
ffffffff812bbbe0 T inet_putpeer
ffffffff812bbc00 T inet_peer_xrlim_allow
ffffffff812bbc50 T inetpeer_invalidate_tree
ffffffff812bbce0 t inetpeer_inval_rcu
ffffffff812bbd30 t inetpeer_free_rcu
ffffffff812bbd50 t inetpeer_gc_worker
ffffffff812bbf40 t peer_avl_rebalance.isra.2
ffffffff812bc070 T inet_getpeer
ffffffff812bc580 T inet_add_protocol
ffffffff812bc5a0 T inet_del_protocol
ffffffff812bc5d0 T ip_call_ra_chain
ffffffff812bc6f0 T ip_local_deliver
ffffffff812bc910 T ip_rcv
ffffffff812bce40 t ipqhashfn
ffffffff812bcea0 t ip4_hashfn
ffffffff812bcec0 t ip4_frag_match
ffffffff812bcf10 t ip4_frag_free
ffffffff812bcf30 t ipv4_frags_exit_net
ffffffff812bcf80 t ipv4_frags_init_net
ffffffff812bd070 t ip_expire
ffffffff812bd220 t ip4_frag_init
ffffffff812bd2c0 T ip_defrag
ffffffff812bde60 T ip_check_defrag
ffffffff812be050 T ip_frag_nqueues
ffffffff812be060 T ip_frag_mem
ffffffff812be070 T ip_forward
ffffffff812be440 T ip_options_rcv_srr
ffffffff812be660 t ip_options_get_alloc
ffffffff812be680 T ip_options_compile
ffffffff812becc0 t ip_options_get_finish
ffffffff812bed30 T ip_options_build
ffffffff812bef20 T ip_options_echo
ffffffff812bf2f0 T ip_options_fragment
ffffffff812bf3a0 T ip_options_undo
ffffffff812bf4a0 T ip_options_get_from_user
ffffffff812bf560 T ip_options_get
ffffffff812bf610 T ip_forward_options
ffffffff812bf810 T ip_send_check
ffffffff812bf850 t ip_finish_output2
ffffffff812bfaf0 t ip_copy_metadata
ffffffff812bfb90 t ip_reply_glue_bits
ffffffff812bfbe0 t ip_setup_cork
ffffffff812bfd00 T ip_generic_getfrag
ffffffff812bfda0 T ip_fragment
ffffffff812c05a0 t ip_finish_output
ffffffff812c08d0 t ip_dev_loopback_xmit
ffffffff812c0970 t __ip_append_data.isra.33
ffffffff812c12e0 t __ip_flush_pending_frames.isra.34
ffffffff812c1350 T __ip_local_out
ffffffff812c13b0 T ip_local_out
ffffffff812c13e0 T ip_queue_xmit
ffffffff812c1790 T ip_build_and_send_pkt
ffffffff812c19a0 T ip_mc_output
ffffffff812c1ac0 T ip_output
ffffffff812c1b10 T ip_append_data
ffffffff812c1c20 T ip_append_page
ffffffff812c2180 T __ip_make_skb
ffffffff812c2510 T ip_send_skb
ffffffff812c2550 T ip_push_pending_frames
ffffffff812c2580 T ip_flush_pending_frames
ffffffff812c25a0 T ip_make_skb
ffffffff812c26f0 T ip_send_reply
ffffffff812c2990 t do_ip_getsockopt
ffffffff812c3040 T ip_getsockopt
ffffffff812c3050 t ip_ra_destroy_rcu
ffffffff812c3080 T ip_cmsg_recv
ffffffff812c32b0 T compat_ip_getsockopt
ffffffff812c32d0 T ip_cmsg_send
ffffffff812c33c0 T ip_ra_control
ffffffff812c3520 t do_ip_setsockopt.isra.18
ffffffff812c42a0 T compat_ip_setsockopt
ffffffff812c42e0 T ip_setsockopt
ffffffff812c4300 T ip_icmp_error
ffffffff812c4430 T ip_local_error
ffffffff812c45a0 T ip_recv_error
ffffffff812c4850 T ipv4_pktinfo_prepare
ffffffff812c48b0 T inet_hashinfo_init
ffffffff812c48f0 T __inet_hash_nolisten
ffffffff812c4a60 t __inet_check_established
ffffffff812c4d50 T inet_unhash
ffffffff812c4e10 T __inet_lookup_established
ffffffff812c50a0 T __inet_lookup_listener
ffffffff812c5250 T inet_hash
ffffffff812c5360 T inet_bind_bucket_create
ffffffff812c53e0 T inet_bind_bucket_destroy
ffffffff812c5410 T inet_put_port
ffffffff812c54d0 T inet_bind_hash
ffffffff812c5520 T __inet_inherit_port
ffffffff812c5610 T __inet_hash_connect
ffffffff812c5910 T inet_hash_connect
ffffffff812c5960 T inet_twsk_schedule
ffffffff812c5b50 T inet_twsk_alloc
ffffffff812c5c40 T __inet_twsk_hashdance
ffffffff812c5da0 t inet_twsk_free
ffffffff812c5e00 T inet_twsk_put
ffffffff812c5e20 T inet_twsk_unhash
ffffffff812c5e50 T inet_twsk_bind_unhash
ffffffff812c5ea0 t __inet_twsk_kill
ffffffff812c5f60 T inet_twdr_twcal_tick
ffffffff812c6110 T inet_twsk_deschedule
ffffffff812c61b0 T inet_twsk_purge
ffffffff812c62b0 t inet_twdr_do_twkill_work
ffffffff812c6370 T inet_twdr_twkill_work
ffffffff812c6440 T inet_twdr_hangman
ffffffff812c6500 T inet_csk_bind_conflict
ffffffff812c6580 T inet_csk_addr2sockaddr
ffffffff812c65a0 T inet_get_local_port_range
ffffffff812c65d0 T inet_csk_search_req
ffffffff812c66b0 T inet_csk_destroy_sock
ffffffff812c67e0 T inet_csk_clone_lock
ffffffff812c6880 T inet_csk_route_child_sock
ffffffff812c6a00 T inet_csk_route_req
ffffffff812c6b40 T inet_csk_reset_keepalive_timer
ffffffff812c6b60 T inet_csk_reqsk_queue_hash_add
ffffffff812c6c90 T inet_csk_reqsk_queue_prune
ffffffff812c6f90 T inet_csk_delete_keepalive_timer
ffffffff812c6fa0 T inet_csk_listen_stop
ffffffff812c7120 T inet_csk_clear_xmit_timers
ffffffff812c7170 T inet_csk_init_xmit_timers
ffffffff812c7210 T inet_csk_accept
ffffffff812c7470 T inet_csk_get_port
ffffffff812c7850 T inet_csk_compat_getsockopt
ffffffff812c7870 T inet_csk_compat_setsockopt
ffffffff812c7890 T inet_csk_listen_start
ffffffff812c7990 t tcp_cookie_values_release
ffffffff812c79a0 T tcp_poll
ffffffff812c7b00 T tcp_cookie_generator
ffffffff812c7c50 t tcp_prequeue_process
ffffffff812c7ce0 T tcp_gro_receive
ffffffff812c7f50 T tcp_get_info
ffffffff812c81e0 T tcp_ioctl
ffffffff812c8390 t tcp_send_mss
ffffffff812c8480 T tcp_set_state
ffffffff812c8550 T tcp_done
ffffffff812c85c0 T tcp_disconnect
ffffffff812c8950 t tcp_splice_data_recv
ffffffff812c8990 T tcp_enter_memory_pressure
ffffffff812c89c0 T tcp_gro_complete
ffffffff812c8a10 T tcp_tso_segment
ffffffff812c8d20 t do_tcp_getsockopt.isra.20
ffffffff812c9180 T compat_tcp_getsockopt
ffffffff812c91b0 T tcp_getsockopt
ffffffff812c91e0 T tcp_shutdown
ffffffff812c9250 T sk_stream_alloc_skb
ffffffff812c9350 T tcp_sendmsg
ffffffff812ca0f0 T tcp_sendpage
ffffffff812ca7f0 T tcp_cleanup_rbuf
ffffffff812ca8f0 t do_tcp_setsockopt.isra.24
ffffffff812cb020 T compat_tcp_setsockopt
ffffffff812cb050 T tcp_setsockopt
ffffffff812cb080 T tcp_recvmsg
ffffffff812cb9d0 T tcp_read_sock
ffffffff812cbbc0 T tcp_splice_read
ffffffff812cbe00 T tcp_check_oom
ffffffff812cbf00 T tcp_close
ffffffff812cc2e0 T tcp_init_mem
ffffffff812cc320 t tcp_init_buffer_space
ffffffff812cc510 T tcp_initialize_rcv_mss
ffffffff812cc560 t tcp_sacktag_one
ffffffff812cc6e0 t tcp_undo_cwr
ffffffff812cc790 t tcp_try_undo_recovery
ffffffff812cc8a0 t tcp_check_space
ffffffff812cc980 t tcp_reset
ffffffff812cc9f0 t tcp_enter_frto_loss
ffffffff812ccc00 t tcp_shifted_skb
ffffffff812ccee0 t tcp_match_skb_to_sack
ffffffff812ccfb0 t tcp_sacktag_walk
ffffffff812cd480 t tcp_mark_head_lost
ffffffff812cd6f0 T tcp_valid_rtt_meas
ffffffff812cd8c0 t tcp_init_metrics
ffffffff812cdb40 t tcp_collapse
ffffffff812cdf60 t tcp_event_data_recv
ffffffff812ce3f0 t tcp_prune_ofo_queue
ffffffff812ce4c0 t tcp_fin
ffffffff812ce670 t tcp_prune_queue
ffffffff812ce8b0 t __tcp_ack_snd_check
ffffffff812ce950 T tcp_parse_options
ffffffff812cec60 t tcp_skb_mark_lost_uncond_verify
ffffffff812cecd0 t tcp_sacktag_write_queue
ffffffff812cf880 T tcp_simple_retransmit
ffffffff812cfa80 t tcp_check_reno_reordering
ffffffff812cfb40 t tcp_dsack_set
ffffffff812cfb90 t tcp_send_dupack
ffffffff812cfc50 t tcp_dsack_extend
ffffffff812cfcb0 t tcp_parse_aligned_timestamp.part.32
ffffffff812cfce0 t tcp_validate_incoming
ffffffff812d0020 t tcp_add_reno_sack
ffffffff812d0060 t tcp_urg
ffffffff812d0240 T tcp_rcv_space_adjust
ffffffff812d0360 t tcp_data_queue
ffffffff812d1170 T tcp_update_metrics
ffffffff812d1630 T tcp_init_cwnd
ffffffff812d1660 T tcp_enter_cwr
ffffffff812d1740 T tcp_use_frto
ffffffff812d1810 T tcp_enter_frto
ffffffff812d1a70 T tcp_clear_retrans
ffffffff812d1ab0 T tcp_enter_loss
ffffffff812d1d40 t tcp_fastretrans_alert
ffffffff812d2d20 t tcp_ack
ffffffff812d3e10 T tcp_rcv_state_process
ffffffff812d4980 T tcp_rcv_established
ffffffff812d5010 T tcp_cwnd_application_limited
ffffffff812d50c0 T tcp_select_initial_window
ffffffff812d51e0 t tcp_init_nondata_skb
ffffffff812d5240 T tcp_mtup_init
ffffffff812d52a0 T tcp_sync_mss
ffffffff812d5380 t tcp_options_write
ffffffff812d55d0 t tcp_adjust_pcount
ffffffff812d56c0 t __pskb_trim_head
ffffffff812d57f0 t tcp_event_new_data_sent
ffffffff812d58a0 t tcp_set_skb_tso_segs
ffffffff812d5960 t tcp_init_tso_segs
ffffffff812d59b0 T tcp_make_synack
ffffffff812d5ec0 T tcp_fragment
ffffffff812d6180 T tcp_trim_head
ffffffff812d6260 T tcp_mtu_to_mss
ffffffff812d62a0 T tcp_mss_to_mtu
ffffffff812d62c0 T tcp_current_mss
ffffffff812d6350 T tcp_may_send_now
ffffffff812d64d0 T __tcp_select_window
ffffffff812d65f0 t tcp_transmit_skb
ffffffff812d6e30 t tcp_xmit_probe_skb
ffffffff812d6ee0 T tcp_connect
ffffffff812d7340 t tcp_write_xmit
ffffffff812d7d50 T tcp_push_one
ffffffff812d7d80 T __tcp_push_pending_frames
ffffffff812d7e00 T tcp_retransmit_skb
ffffffff812d83a0 T tcp_xmit_retransmit_queue
ffffffff812d8680 T tcp_send_fin
ffffffff812d8800 T tcp_send_active_reset
ffffffff812d88e0 T tcp_send_synack
ffffffff812d8ad0 T tcp_send_ack
ffffffff812d8bd0 T tcp_send_delayed_ack
ffffffff812d8cb0 T tcp_write_wakeup
ffffffff812d8e10 T tcp_send_probe0
ffffffff812d8ee0 T tcp_syn_ack_timeout
ffffffff812d8f00 t tcp_write_err
ffffffff812d8f40 t retransmits_timed_out
ffffffff812d9000 T tcp_init_xmit_timers
ffffffff812d9020 t tcp_keepalive_timer
ffffffff812d9280 t tcp_delack_timer
ffffffff812d94b0 t tcp_out_of_resources
ffffffff812d9570 T tcp_retransmit_timer
ffffffff812d9af0 t tcp_write_timer
ffffffff812d9d20 T tcp_set_keepalive
ffffffff812d9d70 t tcp_cookie_values_release
ffffffff812d9d80 t tcp_v4_init_sock
ffffffff812d9f10 t tcp_v4_reqsk_destructor
ffffffff812d9f20 t tcp_v4_send_reset
ffffffff812da0f0 t __tcp_v4_send_check
ffffffff812da1e0 t tcp_v4_send_synack
ffffffff812da2c0 t tcp_v4_rtx_synack
ffffffff812da2e0 T tcp_v4_send_check
ffffffff812da300 t tcp_sk_exit_batch
ffffffff812da320 t tcp_sk_exit
ffffffff812da330 t tcp_sk_init
ffffffff812da360 t tcp4_seq_show
ffffffff812da8f0 T tcp_proc_unregister
ffffffff812da900 t tcp4_proc_exit_net
ffffffff812da910 T tcp_proc_register
ffffffff812da960 t tcp4_proc_init_net
ffffffff812da970 t tcp_seq_stop
ffffffff812daa10 t established_get_first
ffffffff812dab20 t established_get_next
ffffffff812dac90 t listening_get_next
ffffffff812daef0 t tcp_get_idx
ffffffff812dafa0 t tcp_seq_next
ffffffff812db040 t tcp_seq_start
ffffffff812db1f0 T tcp_seq_open
ffffffff812db250 T tcp_v4_tw_get_peer
ffffffff812db280 T tcp_v4_do_rcv
ffffffff812db560 T tcp_v4_syn_recv_sock
ffffffff812db7e0 T tcp_syn_flood_action
ffffffff812db870 T tcp_v4_connect
ffffffff812dbd60 t tcp_v4_send_ack.isra.30
ffffffff812dbef0 t tcp_v4_reqsk_send_ack
ffffffff812dbf40 T tcp_v4_get_peer
ffffffff812dbfe0 T tcp_twsk_unique
ffffffff812dc080 T tcp_v4_conn_request
ffffffff812dc7a0 T tcp_v4_destroy_sock
ffffffff812dc980 T tcp_v4_err
ffffffff812dcf40 T tcp_v4_gso_send_check
ffffffff812dcfc0 T tcp_v4_rcv
ffffffff812dd810 T tcp4_proc_exit
ffffffff812dd820 T tcp4_gro_receive
ffffffff812dd8e0 T tcp4_gro_complete
ffffffff812dd960 T tcp_twsk_destructor
ffffffff812dd970 T tcp_child_process
ffffffff812dda80 T tcp_check_req
ffffffff812ddf20 T tcp_create_openreq_child
ffffffff812de3a0 T tcp_timewait_state_process
ffffffff812de7b0 T tcp_time_wait
ffffffff812de9e0 T tcp_slow_start
ffffffff812dea80 T tcp_reno_ssthresh
ffffffff812deaa0 T tcp_reno_min_cwnd
ffffffff812deab0 t tcp_ca_find
ffffffff812deb20 T tcp_unregister_congestion_control
ffffffff812deb60 T tcp_register_congestion_control
ffffffff812dec20 T tcp_is_cwnd_limited
ffffffff812dec80 T tcp_cong_avoid_ai
ffffffff812decc0 T tcp_reno_cong_avoid
ffffffff812ded40 T tcp_init_congestion_control
ffffffff812dede0 T tcp_cleanup_congestion_control
ffffffff812dee10 T tcp_set_default_congestion_control
ffffffff812def00 T tcp_get_available_congestion_control
ffffffff812defa0 T tcp_get_default_congestion_control
ffffffff812defc0 T tcp_get_allowed_congestion_control
ffffffff812df060 T tcp_set_allowed_congestion_control
ffffffff812df180 T tcp_set_congestion_control
ffffffff812df270 T ip4_datagram_connect
ffffffff812df590 t __raw_v4_lookup
ffffffff812df610 t compat_raw_ioctl
ffffffff812df620 t raw_get_next
ffffffff812df690 T raw_seq_next
ffffffff812df710 T raw_seq_stop
ffffffff812df720 t raw_rcv_skb
ffffffff812df770 t raw_bind
ffffffff812df820 t raw_recvmsg
ffffffff812df9d0 t raw_destroy
ffffffff812df9f0 t raw_close
ffffffff812dfa10 t raw_exit_net
ffffffff812dfa20 t raw_init_net
ffffffff812dfa50 t raw_seq_show
ffffffff812dfbc0 T raw_seq_open
ffffffff812dfc10 t raw_v4_seq_open
ffffffff812dfc30 T raw_unhash_sk
ffffffff812dfcb0 T raw_hash_sk
ffffffff812dfd50 t do_raw_setsockopt.isra.15
ffffffff812dfda0 t compat_raw_setsockopt
ffffffff812dfdc0 t raw_setsockopt
ffffffff812dfdf0 t do_raw_getsockopt.isra.16
ffffffff812dfe70 t compat_raw_getsockopt
ffffffff812dfea0 t raw_getsockopt
ffffffff812dfed0 t raw_ioctl
ffffffff812dff80 t raw_init
ffffffff812dffa0 t raw_sendmsg
ffffffff812e0870 T raw_seq_start
ffffffff812e0920 T raw_icmp_error
ffffffff812e0b00 T raw_rcv
ffffffff812e0bb0 T raw_local_deliver
ffffffff812e0d90 t udp_lib_hash
ffffffff812e0da0 t udp_lib_close
ffffffff812e0db0 t udplite_getfrag
ffffffff812e0dc0 t udp4_portaddr_hash
ffffffff812e0e20 t __udp_queue_rcv_skb
ffffffff812e0f00 T udp_proc_unregister
ffffffff812e0f10 t udp4_proc_exit_net
ffffffff812e0f20 T udp_proc_register
ffffffff812e0f70 t udp4_proc_init_net
ffffffff812e0f80 t udp_seq_stop
ffffffff812e0fb0 t udp_get_first
ffffffff812e1060 t udp_get_next
ffffffff812e10e0 t udp_get_idx
ffffffff812e1130 t udp_seq_next
ffffffff812e1160 T udp_seq_open
ffffffff812e11c0 t first_packet_length
ffffffff812e1390 T udp_poll
ffffffff812e1400 T udp_lib_getsockopt
ffffffff812e14c0 T compat_udp_getsockopt
ffffffff812e14e0 T udp_getsockopt
ffffffff812e1500 t udp_send_skb
ffffffff812e1850 t udp_push_pending_frames
ffffffff812e18b0 T udp_lib_setsockopt
ffffffff812e1a40 t udp_lib_lport_inuse2
ffffffff812e1b10 T udp_lib_rehash
ffffffff812e1c60 t udp_v4_rehash
ffffffff812e1c90 T udp_lib_unhash
ffffffff812e1e30 T udp_disconnect
ffffffff812e1f10 T udp_recvmsg
ffffffff812e2270 t udp_lib_lport_inuse.isra.24
ffffffff812e2340 T udp_lib_get_port
ffffffff812e2680 T udp_v4_get_port
ffffffff812e26f0 t ipv4_rcv_saddr_equal
ffffffff812e2710 t udp_seq_start
ffffffff812e2740 t udp4_lib_lookup2.isra.28
ffffffff812e2940 T __udp4_lib_lookup
ffffffff812e2c10 T udp4_lib_lookup
ffffffff812e2c30 T udp4_seq_show
ffffffff812e2df0 T udp_ioctl
ffffffff812e2e50 T compat_udp_setsockopt
ffffffff812e2e80 T udp_setsockopt
ffffffff812e2eb0 T udp_flush_pending_frames
ffffffff812e2ee0 T udp_destroy_sock
ffffffff812e2f40 T udp_sendmsg
ffffffff812e3830 T udp_sendpage
ffffffff812e39b0 T __udp4_lib_err
ffffffff812e3ba0 T udp_err
ffffffff812e3bb0 T udp_queue_rcv_skb
ffffffff812e3ec0 t flush_stack
ffffffff812e3fe0 t __udp4_lib_mcast_deliver
ffffffff812e42b0 T __udp4_lib_rcv
ffffffff812e48f0 T udp_rcv
ffffffff812e4910 T udp4_proc_exit
ffffffff812e4920 T udp4_ufo_send_check
ffffffff812e49c0 T udp4_ufo_fragment
ffffffff812e4aa0 t udp_lib_hash
ffffffff812e4ab0 t udp_lib_close
ffffffff812e4ac0 t udplite_sk_init
ffffffff812e4ad0 t udplite_err
ffffffff812e4ae0 t udplite_rcv
ffffffff812e4b00 t udplite4_proc_exit_net
ffffffff812e4b10 t udplite4_proc_init_net
ffffffff812e4b20 t arp_hash
ffffffff812e4b30 T arp_invalidate
ffffffff812e4bb0 t arp_error_report
ffffffff812e4be0 t arp_net_exit
ffffffff812e4bf0 t arp_net_init
ffffffff812e4c20 t arp_seq_open
ffffffff812e4c40 t arp_seq_start
ffffffff812e4c60 t arp_req_delete
ffffffff812e4d80 t arp_req_set
ffffffff812e4ff0 T arp_xmit
ffffffff812e5000 T arp_create
ffffffff812e5200 t arp_netdev_event
ffffffff812e5240 t arp_seq_show
ffffffff812e5450 T arp_send
ffffffff812e5490 t arp_process
ffffffff812e5ae0 t parp_redo
ffffffff812e5af0 t arp_rcv
ffffffff812e5bf0 t arp_solicit
ffffffff812e5e40 T arp_mc_map
ffffffff812e5fc0 t arp_constructor
ffffffff812e6180 T arp_find
ffffffff812e63a0 T arp_ioctl
ffffffff812e6680 T arp_ifdown
ffffffff812e6690 t icmp_address
ffffffff812e66a0 t icmp_discard
ffffffff812e66b0 t icmp_sk_exit
ffffffff812e6720 t icmp_sk_init
ffffffff812e6890 t icmp_address_reply
ffffffff812e6a00 t icmp_push_reply
ffffffff812e6b30 t icmp_glue_bits
ffffffff812e6b90 t icmp_reply
ffffffff812e6db0 t icmp_unreach
ffffffff812e70a0 t icmp_timestamp.part.14
ffffffff812e7180 t icmp_timestamp
ffffffff812e71b0 t icmp_echo.part.15
ffffffff812e7200 t icmp_echo
ffffffff812e7230 t icmp_redirect
ffffffff812e7350 T icmp_send
ffffffff812e79e0 T icmp_out_count
ffffffff812e7a10 T icmp_rcv
ffffffff812e7d10 T inet_select_addr
ffffffff812e7e10 t inet_get_link_af_size
ffffffff812e7e30 t inet_validate_link_af
ffffffff812e7f20 t inet_set_link_af
ffffffff812e7ff0 t inet_fill_link_af
ffffffff812e8040 t inet_alloc_ifa
ffffffff812e8060 t inet_hash_remove
ffffffff812e80a0 t inet_fill_ifaddr
ffffffff812e8230 t rtmsg_ifa
ffffffff812e8360 t __inet_insert_ifa
ffffffff812e8590 t inet_insert_ifa
ffffffff812e85a0 t inet_dump_ifaddr
ffffffff812e8720 t __inet_del_ifa
ffffffff812e89b0 t inet_del_ifa
ffffffff812e89c0 t __devinet_sysctl_register
ffffffff812e8ac0 t devinet_sysctl_register
ffffffff812e8b00 t inetdev_init
ffffffff812e8ce0 t ipv4_doint_and_flush
ffffffff812e8d40 t devinet_conf_proc
ffffffff812e8e50 t devinet_sysctl_forward
ffffffff812e8fc0 t inet_rtm_newaddr
ffffffff812e91d0 t inet_gifconf
ffffffff812e92b0 T unregister_inetaddr_notifier
ffffffff812e92c0 T register_inetaddr_notifier
ffffffff812e92d0 T inetdev_by_index
ffffffff812e9300 t inet_rtm_deladdr
ffffffff812e9450 T __ip_dev_find
ffffffff812e95a0 t confirm_addr_indev.isra.14
ffffffff812e9660 T inet_confirm_addr
ffffffff812e9720 T in_dev_finish_destroy
ffffffff812e97b0 t in_dev_rcu_put
ffffffff812e97d0 t inet_rcu_free_ifa
ffffffff812e9800 t __devinet_sysctl_unregister.isra.19
ffffffff812e9840 t devinet_exit_net
ffffffff812e98b0 t inetdev_event
ffffffff812e9d60 t devinet_init_net
ffffffff812e9f00 T inet_addr_onlink
ffffffff812e9f60 T inet_ifa_byprefix
ffffffff812e9fd0 T devinet_ioctl
ffffffff812ea6c0 T inet_recvmsg
ffffffff812ea740 t inet_compat_ioctl
ffffffff812ea760 t inet_gro_complete
ffffffff812ea840 t inet_gro_receive
ffffffff812eaa10 t inet_gso_send_check
ffffffff812eab10 t inet_gso_segment
ffffffff812ead80 T snmp_fold_field
ffffffff812eadf0 T inet_ctl_sock_create
ffffffff812eae80 T inet_register_protosw
ffffffff812eaf30 T inet_ioctl
ffffffff812eafa0 T inet_shutdown
ffffffff812eb0d0 t inet_autobind
ffffffff812eb130 T inet_sendmsg
ffffffff812eb1e0 T inet_dgram_connect
ffffffff812eb260 T inet_sendpage
ffffffff812eb330 T inet_getname
ffffffff812eb3c0 T inet_accept
ffffffff812eb4d0 T inet_stream_connect
ffffffff812eb7b0 T inet_bind
ffffffff812eb9f0 T inet_release
ffffffff812eba70 T build_ehash_secret
ffffffff812ebaa0 t inet_create
ffffffff812ebdf0 T inet_listen
ffffffff812ebe90 T inet_sock_destruct
ffffffff812ec050 T snmp_mib_free
ffffffff812ec070 t ipv4_mib_exit_net
ffffffff812ec0d0 T snmp_mib_init
ffffffff812ec100 t ipv4_mib_init_net
ffffffff812ec2b0 T inet_sk_rebuild_header
ffffffff812ec600 T inet_unregister_protosw
ffffffff812ec660 T ip_mc_rejoin_groups
ffffffff812ec670 t igmp_mc_seq_stop
ffffffff812ec690 t igmp_net_exit
ffffffff812ec6b0 t igmp_net_init
ffffffff812ec720 t igmp_mcf_seq_open
ffffffff812ec740 t igmp_mc_seq_open
ffffffff812ec760 t igmp_mcf_seq_show
ffffffff812ec7f0 t igmp_mcf_seq_stop
ffffffff812ec830 t igmp_mc_seq_show
ffffffff812ec910 t ip_mc_clear_src
ffffffff812ec990 t ip_mc_find_dev
ffffffff812eca40 t igmp_group_added
ffffffff812eca90 t igmp_group_dropped
ffffffff812ecae0 t ip_ma_put
ffffffff812ecb30 T ip_mc_dec_group
ffffffff812ecbe0 t igmp_mc_get_next.isra.6
ffffffff812ecc50 t igmp_mc_seq_next
ffffffff812ecd10 t igmp_mc_seq_start
ffffffff812ecdf0 t igmp_mcf_get_next.isra.8
ffffffff812eceb0 t igmp_mcf_seq_next
ffffffff812ecfc0 t igmp_mcf_seq_start
ffffffff812ed110 t ip_mc_del1_src.isra.11
ffffffff812ed1d0 t ip_mc_del_src
ffffffff812ed300 t ip_mc_add_src
ffffffff812ed510 T ip_mc_inc_group
ffffffff812ed610 T ip_mc_join_group
ffffffff812ed700 t ip_mc_leave_src
ffffffff812ed7b0 T ip_mc_unmap
ffffffff812ed800 T ip_mc_remap
ffffffff812ed850 T ip_mc_down
ffffffff812ed8c0 T ip_mc_init_dev
ffffffff812ed900 T ip_mc_up
ffffffff812ed960 T ip_mc_destroy_dev
ffffffff812ed9e0 T ip_mc_leave_group
ffffffff812edae0 T ip_mc_source
ffffffff812edea0 T ip_mc_msfilter
ffffffff812ee110 T ip_mc_msfget
ffffffff812ee270 T ip_mc_gsfget
ffffffff812ee3f0 T ip_mc_sf_allow
ffffffff812ee4c0 T ip_mc_drop_socket
ffffffff812ee570 T ip_check_mc_rcu
ffffffff812ee600 t fib_flush
ffffffff812ee670 t fib_disable_ip
ffffffff812ee6c0 T inet_addr_type
ffffffff812ee750 T inet_dev_addr_type
ffffffff812ee7f0 t nl_fib_input
ffffffff812ee950 t inet_dump_fib
ffffffff812eea60 t rtm_to_fib_config
ffffffff812eecb0 t inet_rtm_delroute
ffffffff812eed10 t inet_rtm_newroute
ffffffff812eed70 t ip_fib_net_exit.isra.13
ffffffff812eee30 t fib_net_exit
ffffffff812eee60 t fib_net_init
ffffffff812eefa0 t fib_magic.isra.16
ffffffff812ef050 T fib_validate_source
ffffffff812ef360 T ip_rt_ioctl
ffffffff812ef7e0 T fib_add_ifaddr
ffffffff812ef9d0 t fib_netdev_event
ffffffff812efab0 T fib_del_ifaddr
ffffffff812eff20 t fib_inetaddr_event
ffffffff812effe0 t free_fib_info_rcu
ffffffff812f0020 t fib_info_hash_free
ffffffff812f0060 t fib_info_hash_alloc
ffffffff812f00a0 T free_fib_info
ffffffff812f00d0 T fib_release_info
ffffffff812f01d0 T ip_fib_check_default
ffffffff812f0260 T fib_find_alias
ffffffff812f02b0 T fib_detect_death
ffffffff812f0390 T fib_nh_match
ffffffff812f03e0 T fib_info_update_nh_saddr
ffffffff812f0430 T fib_create_info
ffffffff812f0d40 T fib_dump_info
ffffffff812f0fb0 T rtmsg_fib
ffffffff812f1130 T fib_sync_down_addr
ffffffff812f11b0 T fib_sync_down_dev
ffffffff812f1280 T fib_select_default
ffffffff812f1410 t fib_trie_seq_stop
ffffffff812f1420 t fib_route_seq_stop
ffffffff812f1430 t fib_route_seq_open
ffffffff812f1450 t fib_trie_seq_open
ffffffff812f1470 t fib_route_seq_show
ffffffff812f16d0 t fib_trie_get_next
ffffffff812f1790 t fib_trie_seq_start
ffffffff812f1850 t fib_trie_seq_next
ffffffff812f1920 t tnode_put_child_reorg
ffffffff812f1a00 t fib_find_node
ffffffff812f1b00 t fib_triestat_seq_open
ffffffff812f1b10 t seq_indent
ffffffff812f1b40 t leaf_info_new
ffffffff812f1b90 t __alias_free_mem
ffffffff812f1ba0 t __leaf_free_rcu
ffffffff812f1bb0 t tnode_new
ffffffff812f1c30 t __tnode_vfree
ffffffff812f1c40 t tnode_free_flush
ffffffff812f1cd0 t tnode_clean_free
ffffffff812f1d70 t insert_leaf_info
ffffffff812f1de0 t check_leaf.isra.13
ffffffff812f1f40 T fib_table_lookup
ffffffff812f2230 t leaf_walk_rcu
ffffffff812f22d0 t trie_firstleaf
ffffffff812f2300 t trie_nextleaf
ffffffff812f2320 t fib_route_seq_next
ffffffff812f2370 t fib_route_seq_start
ffffffff812f2430 t tnode_free_safe
ffffffff812f2470 t resize
ffffffff812f2ce0 t trie_rebalance
ffffffff812f2e00 t trie_leaf_remove
ffffffff812f2e90 t fib_table_print.isra.16
ffffffff812f2ed0 t fib_triestat_seq_show
ffffffff812f3270 t fib_trie_seq_show
ffffffff812f34c0 t __tnode_free_rcu
ffffffff812f3510 T fib_table_insert
ffffffff812f3de0 T fib_table_delete
ffffffff812f4080 T fib_table_flush
ffffffff812f4220 T fib_free_table
ffffffff812f4230 T fib_table_dump
ffffffff812f44c0 T fib_trie_table
ffffffff812f4500 T fib_proc_init
ffffffff812f45a0 T fib_proc_exit
ffffffff812f45d0 T inet_frags_init_net
ffffffff812f45f0 T inet_frags_fini
ffffffff812f4600 T inet_frag_destroy
ffffffff812f4710 T inet_frag_find
ffffffff812f4910 T inet_frags_init
ffffffff812f49b0 t inet_frag_secret_rebuild
ffffffff812f4aa0 T inet_frag_kill
ffffffff812f4b70 T inet_frag_evictor
ffffffff812f4c60 T inet_frags_exit_net
ffffffff812f4c90 t ping_get_first
ffffffff812f4cf0 t ping_get_next
ffffffff812f4d30 t ping_get_idx
ffffffff812f4d70 t ping_seq_next
ffffffff812f4da0 t ping_v4_get_port
ffffffff812f4f10 t ping_v4_unhash
ffffffff812f4fa0 t ping_v4_hash
ffffffff812f4fb0 t ping_queue_rcv_skb
ffffffff812f4fd0 t ping_bind
ffffffff812f5150 t ping_recvmsg
ffffffff812f5340 t ping_init_sock
ffffffff812f5400 t ping_sendmsg
ffffffff812f5a40 t ping_close
ffffffff812f5a50 t ping_proc_exit_net
ffffffff812f5a60 t ping_proc_init_net
ffffffff812f5a90 t ping_seq_open
ffffffff812f5ab0 t ping_seq_stop
ffffffff812f5ac0 t ping_getfrag
ffffffff812f5b20 t ping_seq_show
ffffffff812f5ce0 t ping_seq_start
ffffffff812f5d50 t ping_v4_lookup.isra.11
ffffffff812f5df0 T ping_err
ffffffff812f5f90 T ping_rcv
ffffffff812f6060 T ping_proc_exit
ffffffff812f6070 t ipv4_tcp_mem
ffffffff812f6120 t ipv4_sysctl_exit_net
ffffffff812f6140 t ipv4_sysctl_init_net
ffffffff812f6270 t ipv4_ping_group_range
ffffffff812f6360 t proc_allowed_congestion_control
ffffffff812f6430 t proc_tcp_available_congestion_control
ffffffff812f64f0 t proc_tcp_congestion_control
ffffffff812f6580 t ipv4_local_port_range
ffffffff812f6650 t ip_proc_exit_net
ffffffff812f6680 t ip_proc_init_net
ffffffff812f6720 t snmp_seq_open
ffffffff812f6730 t netstat_seq_open
ffffffff812f6740 t sockstat_seq_open
ffffffff812f6750 t netstat_seq_show
ffffffff812f6890 t sockstat_seq_show
ffffffff812f6a00 t icmpmsg_put_line.part.3
ffffffff812f6ae0 t icmpmsg_put
ffffffff812f6b80 t snmp_seq_show
ffffffff812f6f70 T cookie_check_timestamp
ffffffff812f7020 t cookie_hash
ffffffff812f7140 T cookie_init_timestamp
ffffffff812f7190 T cookie_v4_init_sequence
ffffffff812f7310 T cookie_v4_check
ffffffff812f7920 t lro_tcp_ip_check
ffffffff812f79d0 t lro_get_desc.isra.7
ffffffff812f7a60 t lro_flush.isra.8
ffffffff812f7c80 T lro_flush_pkt
ffffffff812f7cc0 T lro_flush_all
ffffffff812f7d20 t lro_tcp_data_csum.isra.9
ffffffff812f7de0 t lro_init_desc
ffffffff812f7ec0 t lro_add_common
ffffffff812f7f80 t lro_gen_skb.isra.11
ffffffff812f80d0 T lro_receive_frags
ffffffff812f8480 T lro_receive_skb
ffffffff812f86d0 t bictcp_recalc_ssthresh
ffffffff812f8740 t bictcp_undo_cwnd
ffffffff812f8760 t bictcp_init
ffffffff812f8880 t bictcp_state
ffffffff812f8970 t bictcp_acked
ffffffff812f8b40 t bictcp_cong_avoid
ffffffff812f8ef0 t xfrm4_get_tos
ffffffff812f8f00 t xfrm4_init_path
ffffffff812f8f10 t xfrm4_fill_dst
ffffffff812f8fe0 t xfrm4_garbage_collect
ffffffff812f9040 t xfrm4_update_pmtu
ffffffff812f9060 t xfrm4_dst_ifdown
ffffffff812f9070 t xfrm4_dst_destroy
ffffffff812f9110 t _decode_session4
ffffffff812f94c0 t xfrm4_get_saddr
ffffffff812f9520 t xfrm4_dst_lookup
ffffffff812f9570 t xfrm4_init_flags
ffffffff812f9590 t xfrm4_init_temprop
ffffffff812f9600 t __xfrm4_init_tempsel
ffffffff812f96f0 T xfrm4_extract_header
ffffffff812f9740 T xfrm4_rcv_encap
ffffffff812f9760 T xfrm4_rcv
ffffffff812f9780 T xfrm4_extract_input
ffffffff812f9790 T xfrm4_transport_finish
ffffffff812f9850 T xfrm4_udp_encap_rcv
ffffffff812f9a20 T xfrm4_prepare_output
ffffffff812f9aa0 T xfrm4_extract_output
ffffffff812f9b70 T xfrm4_output_finish
ffffffff812f9b80 T xfrm4_output
ffffffff812f9ba0 t xfrm_policy_flo_check
ffffffff812f9bb0 T xfrm_policy_walk_init
ffffffff812f9bd0 t __xfrm_policy_unlink
ffffffff812f9c90 T xfrm_dst_ifdown
ffffffff812f9cf0 t xfrm_link_failure
ffffffff812f9d00 t xfrm_default_advmss
ffffffff812f9d30 t xfrm_neigh_lookup
ffffffff812f9d40 t xfrm_audit_common_policyinfo
ffffffff812f9e80 T xfrm_audit_policy_delete
ffffffff812f9f70 T xfrm_audit_policy_add
ffffffff812fa060 t xfrm_policy_flo_get
ffffffff812fa080 t xfrm_bundle_flo_delete
ffffffff812fa0d0 T xfrm_policy_unregister_afinfo
ffffffff812fa170 T xfrm_policy_walk_done
ffffffff812fa1d0 T xfrm_policy_walk
ffffffff812fa320 t policy_hash_bysel
ffffffff812fa3a0 T xfrm_spd_getinfo
ffffffff812fa420 T xfrm_policy_register_afinfo
ffffffff812fa560 t xfrm_negative_advice
ffffffff812fa590 t xfrm_bundle_ok
ffffffff812fa740 t xfrm_bundle_flo_check
ffffffff812fa770 t xfrm_bundle_flo_get
ffffffff812fa7b0 t xfrm_policy_get_afinfo
ffffffff812fa7f0 T __xfrm_decode_session
ffffffff812fa850 t xfrm_tmpl_resolve
ffffffff812faca0 t xfrm_resolve_and_create_bundle
ffffffff812fb4b0 t __xfrm_policy_link
ffffffff812fb5b0 T xfrm_policy_alloc
ffffffff812fb660 t __xfrm_garbage_collect.isra.36
ffffffff812fb6f0 t xfrm_dev_event
ffffffff812fb710 t xfrm_garbage_collect_deferred
ffffffff812fb730 t xfrm_mtu
ffffffff812fb760 t xfrm_hash_resize
ffffffff812fbab0 t xfrm_dst_check
ffffffff812fbae0 T xfrm_policy_destroy
ffffffff812fbb10 t xfrm_policy_flo_delete
ffffffff812fbb30 t clone_policy
ffffffff812fbd20 t xfrm_policy_kill
ffffffff812fbd80 T xfrm_policy_delete
ffffffff812fbdd0 T xfrm_policy_flush
ffffffff812fbfc0 t xfrm_policy_fini
ffffffff812fc0e0 t xfrm_net_exit
ffffffff812fc100 t xfrm_net_init
ffffffff812fc300 T xfrm_policy_byid
ffffffff812fc440 T xfrm_policy_bysel_ctx
ffffffff812fc560 T xfrm_policy_insert
ffffffff812fc8f0 t xfrm_policy_timer
ffffffff812fcb30 T xfrm_selector_match
ffffffff812fcf10 t xfrm_sk_policy_lookup
ffffffff812fcfa0 T xfrm_lookup
ffffffff812fd450 T __xfrm_route_forward
ffffffff812fd4e0 T __xfrm_policy_check
ffffffff812fdaa0 t xfrm_policy_lookup_bytype.constprop.48
ffffffff812fdce0 t xfrm_bundle_lookup
ffffffff812fe130 t xfrm_policy_lookup
ffffffff812fe1a0 T xfrm_sk_policy_insert
ffffffff812fe290 T __xfrm_sk_clone_policy
ffffffff812fe310 T xfrm_get_acqseq
ffffffff812fe330 T xfrm_state_walk_init
ffffffff812fe350 t xfrm_audit_helper_sainfo
ffffffff812fe410 T xfrm_audit_state_delete
ffffffff812fe500 T xfrm_audit_state_add
ffffffff812fe5f0 T xfrm_sad_getinfo
ffffffff812fe650 T xfrm_state_walk
ffffffff812fe7a0 T xfrm_state_walk_done
ffffffff812fe800 t xfrm_state_gc_task
ffffffff812fe970 t xfrm_hash_resize
ffffffff812fec20 t xfrm_state_get_afinfo
ffffffff812fec60 T km_report
ffffffff812fecf0 T km_new_mapping
ffffffff812fed70 T km_query
ffffffff812fedf0 T km_state_notify
ffffffff812fee50 T km_state_expired
ffffffff812feeb0 T km_policy_notify
ffffffff812fef20 T km_policy_expired
ffffffff812fef80 t xfrm_get_mode
ffffffff812ff010 T __xfrm_init_state
ffffffff812ff250 T xfrm_init_state
ffffffff812ff260 T xfrm_state_unregister_afinfo
ffffffff812ff2d0 T xfrm_state_register_afinfo
ffffffff812ff330 T xfrm_unregister_km
ffffffff812ff380 T xfrm_register_km
ffffffff812ff3c0 t xfrm_state_lock_afinfo
ffffffff812ff410 T xfrm_unregister_mode
ffffffff812ff480 T xfrm_register_mode
ffffffff812ff520 T xfrm_unregister_type
ffffffff812ff570 T xfrm_register_type
ffffffff812ff5c0 T xfrm_user_policy
ffffffff812ff710 t __xfrm_state_lookup
ffffffff812ff800 T xfrm_state_lookup
ffffffff812ff880 t __xfrm_state_lookup_byaddr
ffffffff812ff9b0 T xfrm_state_lookup_byaddr
ffffffff812ffa30 t __xfrm_state_bump_genids
ffffffff812ffbc0 T __xfrm_state_destroy
ffffffff812ffc50 T xfrm_alloc_spi
ffffffff812ffe60 T __xfrm_state_delete
ffffffff812fffa0 T xfrm_state_delete
ffffffff812ffff0 T xfrm_state_delete_tunnel
ffffffff81300070 T xfrm_state_flush
ffffffff81300240 T xfrm_stateonly_find
ffffffff81300430 t xfrm_timer_handler
ffffffff813006a0 T xfrm_state_alloc
ffffffff813007d0 t xfrm_replay_timer_handler
ffffffff81300850 t __xfrm_find_acq_byseq.isra.20
ffffffff813008b0 T xfrm_find_acq_byseq
ffffffff81300920 t xfrm_audit_helper_pktinfo
ffffffff813009b0 T xfrm_audit_state_icvfail
ffffffff81300a90 T xfrm_audit_state_notfound
ffffffff81300b70 T xfrm_audit_state_notfound_simple
ffffffff81300c10 T xfrm_audit_state_replay
ffffffff81300cf0 T xfrm_audit_state_replay_overflow
ffffffff81300db0 t xfrm_hash_grow_check
ffffffff81300de0 t __xfrm_state_insert
ffffffff81301010 T xfrm_state_insert
ffffffff81301040 t __find_acq_core
ffffffff813014a0 T xfrm_find_acq
ffffffff81301560 T xfrm_state_add
ffffffff813017f0 T xfrm_state_check_expire
ffffffff813018a0 T xfrm_state_update
ffffffff81301c10 t xfrm_state_look_at.isra.28
ffffffff81301d10 T xfrm_state_find
ffffffff813025a0 T xfrm_state_mtu
ffffffff81302620 T xfrm_state_init
ffffffff81302750 T xfrm_state_fini
ffffffff81302860 T xfrm_hash_alloc
ffffffff813028b0 T xfrm_hash_free
ffffffff813028f0 T xfrm_prepare_input
ffffffff813029c0 T secpath_dup
ffffffff81302a50 T __secpath_destroy
ffffffff81302ab0 T xfrm_parse_spi
ffffffff81302c20 T xfrm_input
ffffffff81303050 T xfrm_input_resume
ffffffff81303060 T xfrm_inner_extract_output
ffffffff813030f0 T xfrm_output_resume
ffffffff81303340 T xfrm_output
ffffffff81303430 t xfrm_alg_id_match
ffffffff81303440 T xfrm_aalg_get_byidx
ffffffff81303460 T xfrm_ealg_get_byidx
ffffffff81303480 T xfrm_count_auth_supported
ffffffff813034b0 T xfrm_count_enc_supported
ffffffff813034e0 T xfrm_probe_algs
ffffffff81303600 t xfrm_find_algo
ffffffff813036c0 T xfrm_aead_get_byname
ffffffff813036f0 T xfrm_calg_get_byname
ffffffff81303710 T xfrm_ealg_get_byname
ffffffff81303730 T xfrm_aalg_get_byname
ffffffff81303750 T xfrm_calg_get_byid
ffffffff81303770 T xfrm_ealg_get_byid
ffffffff81303790 T xfrm_aalg_get_byid
ffffffff813037b0 t xfrm_aead_name_match
ffffffff813037f0 t xfrm_alg_name_match
ffffffff81303850 T xfrm_sysctl_init
ffffffff81303910 T xfrm_sysctl_fini
ffffffff81303930 T xfrm_init_replay
ffffffff81303990 t xfrm_replay_overflow_esn
ffffffff81303a30 t xfrm_replay_notify
ffffffff81303b80 t xfrm_replay_notify_bmp
ffffffff81303d00 t xfrm_replay_advance_bmp
ffffffff81303e40 t xfrm_replay_check
ffffffff81303ec0 t xfrm_replay_check_bmp
ffffffff81303f50 t xfrm_replay_check_esn
ffffffff81304050 t xfrm_replay_overflow
ffffffff813040f0 t xfrm_replay_overflow_bmp
ffffffff81304180 t xfrm_replay_advance
ffffffff81304220 T xfrm_replay_seqhi
ffffffff81304260 t xfrm_replay_advance_esn
ffffffff81304420 T unix_outq_len
ffffffff81304430 t unix_poll
ffffffff813044e0 t unix_seq_stop
ffffffff813044f0 t unix_net_exit
ffffffff81304510 t unix_net_init
ffffffff81304580 t unix_seq_open
ffffffff813045a0 T unix_peer_get
ffffffff813045f0 T unix_inq_len
ffffffff81304680 t unix_ioctl
ffffffff813046f0 t unix_seq_show
ffffffff81304850 t init_peercred
ffffffff813048c0 t unix_socketpair
ffffffff81304950 t unix_set_peek_off
ffffffff813049a0 t __unix_find_socket_byname
ffffffff81304a30 t __unix_insert_socket
ffffffff81304a90 t unix_listen
ffffffff81304ba0 t unix_wait_for_peer
ffffffff81304c80 t unix_dgram_poll
ffffffff81304e30 t unix_getname
ffffffff81304f10 t unix_find_other
ffffffff81305110 t unix_shutdown
ffffffff81305260 t unix_accept
ffffffff81305390 t unix_create1
ffffffff81305500 t unix_create
ffffffff81305590 t unix_sock_destructor
ffffffff81305670 t __unix_remove_socket
ffffffff813056c0 t unix_release_sock
ffffffff81305910 t unix_release
ffffffff81305940 t maybe_add_creds
ffffffff813059b0 t next_unix_socket
ffffffff81305a20 t unix_seq_next
ffffffff81305af0 t unix_seq_start
ffffffff81305bb0 t unix_state_double_lock
ffffffff81305c10 t unix_state_double_unlock
ffffffff81305c50 t unix_write_space
ffffffff81305cc0 t unix_detach_fds.isra.32
ffffffff81305d10 t unix_dgram_recvmsg
ffffffff81306190 t unix_seqpacket_recvmsg
ffffffff813061b0 t unix_destruct_scm
ffffffff81306260 t unix_stream_recvmsg
ffffffff813069d0 t unix_mkname
ffffffff81306a50 t unix_autobind.isra.34
ffffffff81306c20 t unix_bind
ffffffff81306f50 t unix_stream_connect
ffffffff813073a0 t unix_scm_to_skb
ffffffff813074c0 t unix_stream_sendmsg
ffffffff813078d0 t unix_dgram_disconnected
ffffffff81307960 t unix_dgram_connect
ffffffff81307b40 t unix_dgram_sendmsg
ffffffff81308170 t unix_seqpacket_sendmsg
ffffffff813081c0 t dec_inflight
ffffffff813081d0 t inc_inflight
ffffffff813081e0 t inc_inflight_move_tail
ffffffff81308240 T unix_get_socket
ffffffff81308280 t scan_inflight
ffffffff813083a0 t scan_children
ffffffff813084c0 T unix_inflight
ffffffff81308550 T unix_notinflight
ffffffff813085d0 T unix_gc
ffffffff81308960 T wait_for_unix_gc
ffffffff81308a10 T unix_sysctl_register
ffffffff81308a90 T unix_sysctl_unregister
ffffffff81308ab0 T __ipv6_addr_type
ffffffff81308b80 T ipv6_ext_hdr
ffffffff81308bc0 T ipv6_skip_exthdr
ffffffff81308d00 t net_ctl_header_lookup
ffffffff81308d10 t is_seen
ffffffff81308d40 t net_ctl_ro_header_perms
ffffffff81308d60 t sysctl_net_init
ffffffff81308d90 t sysctl_net_exit
ffffffff81308da0 T unregister_net_sysctl_table
ffffffff81308db0 T register_net_sysctl_rotable
ffffffff81308dd0 T register_net_sysctl_table
ffffffff81308de0 t net_ctl_permissions
ffffffff81308e20 T klist_init
ffffffff81308e40 T klist_node_attached
ffffffff81308e50 T klist_iter_init_node
ffffffff81308e90 T klist_iter_init
ffffffff81308ea0 t klist_node_init
ffffffff81308f20 T klist_add_before
ffffffff81308f90 T klist_add_after
ffffffff81309000 T klist_add_tail
ffffffff81309060 T klist_add_head
ffffffff813090c0 t klist_release
ffffffff813091c0 t klist_dec_and_del
ffffffff813091f0 T klist_next
ffffffff813092f0 t klist_put
ffffffff813093b0 T klist_iter_exit
ffffffff813093d0 T klist_del
ffffffff813093e0 T klist_remove
ffffffff81309490 T md5_transform
ffffffff81309c40 T sha_transform
ffffffff8130ad40 T sha_init
ffffffff8130ad70 T csum_ipv6_magic
ffffffff8130adc0 T csum_partial_copy_nocheck
ffffffff8130add0 T csum_partial_copy_to_user
ffffffff8130ae60 T csum_partial_copy_from_user
ffffffff8130af30 T csum_partial_copy_generic
ffffffff8130b09c t rest_init
ffffffff8130b110 T xen_hvm_init_shared_info
ffffffff8130b1e0 T xen_build_mfn_list_list
ffffffff8130b4a0 T arch_register_cpu
ffffffff8130b4d0 T acpi_map_lsapic
ffffffff8130b640 t remove_cpu_from_maps
ffffffff8130b66a T check_enable_amd_mmconf_dmi
ffffffff8130b680 T init_memory_mapping
ffffffff8130bbc0 t map_low_page
ffffffff8130bbf4 t alloc_low_page
ffffffff8130bc70 t unmap_low_page
ffffffff8130bc90 t spp_getpage
ffffffff8130bd00 t take_cpu_down
ffffffff8130bd30 t _cpu_down
ffffffff8130bf90 T cpu_down
ffffffff8130bfe0 T unregister_cpu_notifier
ffffffff8130c000 T register_cpu_notifier
ffffffff8130c030 T enable_nonboot_cpus
ffffffff8130c0f0 T __irq_alloc_descs
ffffffff8130c2b0 t zone_wait_table_init
ffffffff8130c380 t __build_all_zonelists
ffffffff8130c8d0 T build_all_zonelists
ffffffff8130cb7b t sparse_index_alloc
ffffffff8130cc20 t __earlyonly_bootmem_alloc
ffffffff8130cc30 t setup_cpu_cache
ffffffff8130ce80 T pci_add_new_bus
ffffffff8130d1b0 T pci_scan_single_device
ffffffff8130d270 T pci_rescan_bus_bridge_resize
ffffffff8130d2c0 t pci_bus_release_bridge_resources
ffffffff8130d440 t __pci_bus_assign_resources
ffffffff8130d540 T pci_bus_assign_resources
ffffffff8130d550 t __pci_bridge_assign_resources
ffffffff8130d620 T __pci_bus_size_bridges
ffffffff8130de10 T pci_rescan_bus
ffffffff8130ded0 T pci_bus_size_bridges
ffffffff8130dee0 T fb_find_logo
ffffffff8130df12 T acpi_os_map_memory
ffffffff8130e06d T acpi_os_unmap_memory
ffffffff8130e180 t store_online
ffffffff8130e238 t via_no_dac
ffffffff8130e263 t quirk_intel_irqbalance
ffffffff8130e311 t workqueue_cpu_callback
ffffffff8130e52a t cpu_callback
ffffffff8130e5e6 T read_current_timer
ffffffff8130e608 T pci_read_bridge_bases
ffffffff8130ea01 T pci_scan_child_bus
ffffffff8130eaa2 T pci_scan_bridge
ffffffff8130ef54 T pci_scan_bus
ffffffff8130efd5 T pci_scan_bus_parented
ffffffff8130f05c T pci_scan_root_bus
ffffffff8130f085 t quirk_mmio_always_on
ffffffff8130f08a t quirk_mellanox_tavor
ffffffff8130f092 t quirk_citrine
ffffffff8130f09d t quirk_s3_64M
ffffffff8130f0d1 t quirk_dunord
ffffffff8130f0e8 t quirk_transparent_bridge
ffffffff8130f0f0 t quirk_no_ata_d3
ffffffff8130f0f9 t quirk_brcm_570x_limit_vpd
ffffffff8130f136 t quirk_msi_intx_disable_bug
ffffffff8130f13f t quirk_hotplug_bridge
ffffffff8130f147 t fixup_mpss_256
ffffffff8130f154 t quirk_via_acpi
ffffffff8130f190 t quirk_disable_msi
ffffffff8130f1be t quirk_xio2000a
ffffffff8130f234 t fixup_ti816x_class
ffffffff8130f256 t quirk_netmos
ffffffff8130f2d5 t quirk_cs5536_vsa
ffffffff8130f31c t quirk_msi_intx_disable_ati_bug
ffffffff8130f354 t ht_check_msi_mapping
ffffffff8130f3c4 t msi_ht_cap_enabled
ffffffff8130f456 t quirk_nvidia_ck804_msi_ht_cap
ffffffff8130f4bc t quirk_amd_780_apc_msi
ffffffff8130f4fe t quirk_vt82c598_id
ffffffff8130f52b t quirk_svwks_csb5ide
ffffffff8130f577 t quirk_unhide_mch_dev6
ffffffff8130f5d4 t quirk_amd_ide_mode
ffffffff8130f692 t quirk_via_cx700_pci_parking_caching
ffffffff8130f795 t ht_enable_msi_mapping
ffffffff8130f82a t ich7_lpc_generic_decode
ffffffff8130f88b t ich6_lpc_generic_decode
ffffffff8130f8fc t quirk_tigerpoint_bm_sts
ffffffff8130f954 t nvenet_msi_disable
ffffffff8130f9b5 t quirk_disable_aspm_l0s
ffffffff8130f9dc t quirk_pcie_pxh
ffffffff8130fa02 t quirk_pcie_mch
ffffffff8130fa14 t quirk_io_region
ffffffff8130fabf t quirk_vt8235_acpi
ffffffff8130fb4a t ich6_lpc_acpi_gpio
ffffffff8130fc16 t quirk_ich7_lpc
ffffffff8130fc70 t quirk_ich6_lpc
ffffffff8130fca9 t quirk_ich4_lpc_acpi
ffffffff8130fd75 t quirk_ali7101_acpi
ffffffff8130fdeb t quirk_ati_exploding_mce
ffffffff8130fe44 t quirk_amd_ioapic
ffffffff8130fe77 t disable_igfx_irq
ffffffff8130fedc t quirk_intel_mc_errata
ffffffff8130ff87 t quirk_p64h2_1k_io_fix_iobl
ffffffff8131001f t quirk_p64h2_1k_io
ffffffff813100be t fixup_rev1_53c810
ffffffff813100e6 t quirk_natoma
ffffffff8131010e t quirk_vsfx
ffffffff81310136 t quirk_viaetbf
ffffffff8131015f t quirk_triton
ffffffff81310189 t quirk_nopciamd
ffffffff813101d0 t quirk_nopcipci
ffffffff813101f8 t quirk_isa_dma_hangs
ffffffff81310221 t quirk_msi_ht_cap
ffffffff81310256 t __nv_msi_ht_cap_quirk
ffffffff813104af t nv_msi_ht_cap_quirk_all
ffffffff813104b9 t nv_msi_ht_cap_quirk_leaf
ffffffff813104c0 t nvbridge_check_legacy_irq_routing
ffffffff81310529 t quirk_brcm_5719_limit_mrrs
ffffffff81310573 t quirk_e100_interrupt
ffffffff813106d9 t quirk_vt82c586_acpi
ffffffff81310727 t quirk_vt82c686_acpi
ffffffff813107b3 t quirk_piix4_acpi
ffffffff81310909 t pcie_portdrv_probe
ffffffff8131096f t aer_probe
ffffffff81310bd5 t ioapic_probe
ffffffff81310cfd t acpi_pci_root_add
ffffffff81311143 t acpi_hed_add
ffffffff81311160 t ghes_probe
ffffffff8131147e t __gnttab_init
ffffffff8131149b t platform_pci_init
ffffffff8131167a t xencons_probe
ffffffff81311793 t mmio_resource_enabled.constprop.6
ffffffff813117d0 t quirk_usb_early_handoff
ffffffff81311d94 t cmos_wake_setup.part.4
ffffffff81311e47 t cmos_pnp_probe
ffffffff81311edb t acpi_pm_check_graylist
ffffffff81311f0a t acpi_pm_check_blacklist
ffffffff81311f3f t pci_fixup_latency
ffffffff81311f4a t pci_fixup_piix4_acpi
ffffffff81311f55 t pci_fixup_transparent_bridge
ffffffff81311f6d t pci_siemens_interrupt_controller
ffffffff81311f76 t pci_fixup_umc_ide
ffffffff81311fa8 t pci_fixup_i450gx
ffffffff81311ffa t pci_fixup_i450nx
ffffffff813120a7 t pci_post_fixup_toshiba_ohci1394
ffffffff8131211e t pci_pre_fixup_toshiba_ohci1394
ffffffff81312154 t pci_fixup_msi_k8t_onboard_sound
ffffffff813121eb t pci_fixup_video
ffffffff8131227b t pci_fixup_ncr53c810
ffffffff813122a3 T pci_acpi_scan_root
ffffffff813125ed T pcibios_scan_specific_bus
ffffffff81312673 t can_skip_ioresource_align
ffffffff81312694 t read_dmi_type_b1
ffffffff813126d4 t set_bf_sort
ffffffff813126fe t find_sort_method
ffffffff81312728 T pcibios_fixup_bus
ffffffff813127c9 T pcibios_setup
ffffffff81312b49 T pci_scan_bus_on_node
ffffffff81312be7 T pci_scan_bus_with_sysdata
ffffffff81312bf6 T pcibios_scan_root
ffffffff81312cbc T update_res
ffffffff81312d5f t ioapic_remove
ffffffff81312d9c t acpi_hed_remove
ffffffff81312daa t ghes_remove
ffffffff81312f1e t i8042_unregister_ports
ffffffff81312f49 t i8042_remove
ffffffff81312f6d T calibrate_delay
ffffffff8131343b t xen_hvm_cpu_notify
ffffffff81313470 t register_callback
ffffffff81313493 T xen_enable_sysenter
ffffffff813134c8 T xen_enable_syscall
ffffffff81313524 t cpu_bringup
ffffffff813135a3 t xen_play_dead
ffffffff813135c6 t xen_cpu_up
ffffffff8131392b t cpu_bringup_and_idle
ffffffff81313938 t xen_hvm_cpu_up
ffffffff81313967 T xen_init_lock_cpu
ffffffff813139d6 T x86_init_noop
ffffffff813139d7 t cpu_vsyscall_init
ffffffff81313a87 t cpu_vsyscall_notifier
ffffffff81313ab0 T unsynchronized_tsc
ffffffff81313b17 T calibrate_delay_is_known
ffffffff81313bb6 T select_idle_routine
ffffffff81313c5f T fpu_init
ffffffff81313d11 T xsave_init
ffffffff81313d34 t cpuid4_cache_lookup_regs
ffffffff813140d1 t cache_remove_shared_cpu_map
ffffffff8131413f t cpuid4_cache_sysfs_exit
ffffffff813141eb t cache_add_dev
ffffffff81314561 t get_cpu_leaves
ffffffff8131494b t cacheinfo_cpu_callback
ffffffff81314a21 t cache_sysfs_init
ffffffff81314a89 T init_intel_cacheinfo
ffffffff81314f58 T init_scattered_cpuid_features
ffffffff8131504c T detect_extended_topology
ffffffff813151fc t filter_cpuid_features
ffffffff81315266 t get_cpu_vendor
ffffffff813152fd t setup_smep.part.6
ffffffff8131535e T cpu_detect_cache_sizes
ffffffff8131542f t default_init
ffffffff81315434 T detect_ht
ffffffff813155ae T cpu_detect
ffffffff8131567b T get_cpu_cap
ffffffff81315841 t identify_cpu
ffffffff81315b75 T identify_secondary_cpu
ffffffff81315b8e T print_cpu_msr
ffffffff81315c1f T print_cpu_info
ffffffff81315ccf T cpu_init
ffffffff81315faa t vmware_set_cpu_features
ffffffff81315fb5 T init_hypervisor
ffffffff81315fcd T x86_init_rdrand
ffffffff8131600c t early_init_intel
ffffffff813161dc t init_intel
ffffffff8131643e t early_init_amd
ffffffff8131650e t init_amd
ffffffff81316b02 t bsp_init_amd
ffffffff81316baf t early_init_centaur
ffffffff81316bc5 t centaur_size_cache
ffffffff81316bc8 t init_centaur
ffffffff81316d46 t x86_pmu_notifier
ffffffff81316dfc t mce_reenable_cpu
ffffffff81316e56 t mce_disable_cpu
ffffffff81316ebc t mce_device_create
ffffffff81316ffe t mce_cpu_callback
ffffffff81317193 T mcheck_cpu_init
ffffffff813174a4 t allocate_threshold_blocks
ffffffff813176a8 t threshold_create_device
ffffffff813179eb t amd_64_threshold_cpu_callback
ffffffff81317cac t perf_ibs_cpu_notifier
ffffffff81317cfc t acpi_register_lapic
ffffffff81317d37 t link_thread_siblings
ffffffff81317da5 t do_boot_cpu
ffffffff813183db t do_fork_idle
ffffffff813183f5 T smp_store_cpu_info
ffffffff8131842e T set_cpu_sibling_map
ffffffff813186e2 t start_secondary
ffffffff81318887 T wakeup_secondary_cpu_via_nmi
ffffffff81318925 T native_cpu_up
ffffffff81318a1d t check_tsc_warp
ffffffff81318b31 T check_tsc_sync_source
ffffffff81318c47 T check_tsc_sync_target
ffffffff81318cc3 t apic_cluster_num
ffffffff81318d5e t setup_APIC_timer
ffffffff81318e00 t set_multi
ffffffff81318e2a T setup_secondary_APIC_clock
ffffffff81318e2f T setup_local_APIC
ffffffff8131909a T end_local_APIC_setup
ffffffff8131916c T generic_processor_info
ffffffff813192ea T apic_is_clustered_box
ffffffff81319334 t cmp_range
ffffffff8131933b t get_fam10h_pci_mmconf_base
ffffffff81319600 T fam10h_check_enable_mmcfg
ffffffff813196bc T x86_configure_nx
ffffffff813196f8 t calculate_tlb_offset
ffffffff813197fa t tlb_cpuhp_notify
ffffffff81319816 t init_smp_flush
ffffffff81319845 T numa_cpu_node
ffffffff81319883 T numa_set_node
ffffffff813198c0 T numa_clear_node
ffffffff813198c8 T numa_add_cpu
ffffffff813198f9 T numa_remove_cpu
ffffffff8131992a W idle_regs
ffffffff81319960 T fork_idle
ffffffff813199ed t console_cpu_notify
ffffffff81319a19 t _cpu_up
ffffffff81319b14 T cpu_up
ffffffff81319b65 T notify_cpu_starting
ffffffff81319b8b t cpu_callback
ffffffff81319df6 t remote_softirq_cpu_notify
ffffffff81319eaf t timer_cpu_notify
ffffffff8131a171 t trustee_thread
ffffffff8131a66f t wait_trustee_state.part.26
ffffffff8131a70f t hrtimer_cpu_notify
ffffffff8131a8e0 t sched_cpu_active
ffffffff8131a909 t sched_cpu_inactive
ffffffff8131a92a T init_idle_bootup_task
ffffffff8131a938 t migration_call
ffffffff8131ab60 T init_idle
ffffffff8131acdb t cpu_stop_cpu_callback
ffffffff8131ae6e t rcu_init_percpu_data.constprop.46
ffffffff8131af74 t rcu_cpu_notify
ffffffff8131b014 t perf_cpu_notify
ffffffff8131b0ce t ratelimit_handler
ffffffff8131b0da t start_cpu_timer
ffffffff8131b136 t vmstat_cpuup_callback
ffffffff8131b1d4 t start_cpu_timer
ffffffff8131b2cd t cpuup_canceled
ffffffff8131b46e t cpuup_callback
ffffffff8131b787 t blk_cpu_notify
ffffffff8131b7f5 t blk_iopoll_cpu_notify
ffffffff8131b864 t percpu_counter_hotcpu_callback
ffffffff8131b8e9 T acpi_processor_set_pdc
ffffffff8131ba77 T register_cpu
ffffffff8131bb5c t topology_cpu_callback
ffffffff8131bbcc t topology_sysfs_init
ffffffff8131bc30 t cpufreq_cpu_callback
ffffffff8131bc9f t cpufreq_stat_cpu_callback
ffffffff8131bce2 t enable_pci_io_ecs
ffffffff8131bd25 t amd_cpu_notify
ffffffff8131bd4c t flow_cache_cpu_prepare.isra.4
ffffffff8131bdd6 t flow_cache_cpu
ffffffff8131be41 t run_init_process
ffffffff8131be5b t init_post
ffffffff8131bf13 t m2p
ffffffff8131bfa6 t convert_pfn_mfn
ffffffff8131bfcc t pin_pagetable_pfn
ffffffff8131c022 t xen_remap_domain_mfn_range.part.29
ffffffff8131c024 t set_page_prot
ffffffff8131c069 t xen_smp_intr_init
ffffffff8131c23c t xen_spin_unlock_slow
ffffffff8131c298 t xen_spin_lock_slow
ffffffff8131c377 T dump_stack
ffffffff8131c3e6 t write_ok_or_segv.part.3
ffffffff8131c44d t wait_for_panic
ffffffff8131c48e t hpet_msi_capability_lookup
ffffffff8131c61b t bad_address
ffffffff8131c674 t vmalloc_fault
ffffffff8131c8ca t spurious_fault
ffffffff8131ca8c t dump_pagetable
ffffffff8131cc3e t pgtable_bad
ffffffff8131ccc6 t force_sig_info_fault
ffffffff8131cd29 t is_prefetch.isra.15.part.16
ffffffff8131cf1d t no_context
ffffffff8131d189 t __bad_area_nosemaphore
ffffffff8131d386 t bad_area_nosemaphore
ffffffff8131d390 t bad_area
ffffffff8131d3d5 t bad_area_access_error
ffffffff8131d41a t mm_fault_error
ffffffff8131d5f7 T panic
ffffffff8131d7b8 T printk
ffffffff8131d800 T printk_sched
ffffffff8131d88a t wait_noreap_copyout.isra.13
ffffffff8131d950 t migrate_timer_list
ffffffff8131d9aa t retarget_shared_pending
ffffffff8131da2c t ptrace_trap_notify
ffffffff8131da94 t clear_dead_task
ffffffff8131dacc t __schedule_bug
ffffffff8131db17 t select_fallback_rq
ffffffff8131dc80 t print_name_offset.part.3
ffffffff8131dc91 t refill_pi_state_cache.part.10
ffffffff8131dce8 t remove_waiter
ffffffff8131de4a t grow_tree_refs
ffffffff8131dea2 t rcu_cleanup_dead_cpu
ffffffff8131df40 t rcu_cleanup_dying_cpu
ffffffff8131e046 t __iovec_copy_from_user_inatomic
ffffffff8131e0ab t __alloc_pages_direct_compact
ffffffff8131e251 t pcpu_dump_alloc_info
ffffffff8131e4a3 t offset_il_node.isra.15
ffffffff8131e50e t cache_flusharray
ffffffff8131e5ae t cache_alloc_refill
ffffffff8131e775 t alternate_node_alloc
ffffffff8131e80f t release_pte_pages
ffffffff8131e87f t vfs_path_lookup.part.34
ffffffff8131e881 t block_dump___mark_inode_dirty
ffffffff8131e91f t set_brk
ffffffff8131e9be t elf_map
ffffffff8131eb06 t alignfile
ffffffff8131eb5b t writenote
ffffffff8131ec11 t set_brk
ffffffff8131ecb0 t alignfile
ffffffff8131ed04 t writenote
ffffffff8131edba t elf_map.isra.10
ffffffff8131eef9 t cipher_crypt_unaligned
ffffffff8131ef69 t compute_batch_value
ffffffff8131ef92 t pci_fixup_parent_subordinate_busnr.isra.31
ffffffff8131efe4 t piix4_io_quirk
ffffffff8131f064 t piix4_mem_quirk.constprop.44
ffffffff8131f0d8 t ec_install_handlers
ffffffff8131f177 t erst_get_erange.constprop.14
ffffffff8131f1ff t ghes_estatus_pool_expand
ffffffff8131f287 t ghes_estatus_pool_exit
ffffffff8131f2ab t do_free_callbacks
ffffffff8131f2fa t xenbus_reset_wait_for_backend
ffffffff8131f3cb t scrdown
ffffffff8131f49f t restore_cur
ffffffff8131f56f t legacy_suspend
ffffffff8131f5c4 t handshake
ffffffff8131f612 t i8042_free_irqs
ffffffff8131f659 t i8042_pnp_exit
ffffffff8131f697 t input_proc_exit
ffffffff8131f6ce t cmos_do_probe
ffffffff8131fa12 t cpufreq_stats_free_sysfs
ffffffff8131fa55 t cpufreq_stats_free_table
ffffffff8131faa2 t add_sysfs_fw_map_entry
ffffffff8131fb1d t iommu_ignore_device
ffffffff8131fbdd t free_dev_data
ffffffff8131fc30 t dma_ops_unity_map
ffffffff8131fc9a t coalesce_windows
ffffffff8131fd93 t skb_warn_bad_offload
ffffffff8131fe4e t nl_pid_hash_zalloc
ffffffff8131fe7c t nl_pid_hash_free
ffffffff8131fea1 t nl_pid_hash_rehash
ffffffff8131fff4 t arp_ignore
ffffffff81320060 T __sched_text_start
ffffffff81320060 T console_conditional_schedule
ffffffff81320080 T schedule_timeout
ffffffff81320270 T schedule_timeout_uninterruptible
ffffffff81320290 T schedule_timeout_killable
ffffffff813202b0 T schedule_timeout_interruptible
ffffffff813202d0 T __wait_on_bit
ffffffff81320350 T out_of_line_wait_on_bit
ffffffff813203e0 T __wait_on_bit_lock
ffffffff81320480 T out_of_line_wait_on_bit_lock
ffffffff81320510 T mutex_lock
ffffffff81320550 T mutex_unlock
ffffffff81320570 T mutex_trylock
ffffffff81320595 t __mutex_lock_killable_slowpath
ffffffff81320700 T mutex_lock_killable
ffffffff81320731 t __mutex_lock_interruptible_slowpath
ffffffff81320890 T mutex_lock_interruptible
ffffffff813208d0 t __mutex_unlock_slowpath
ffffffff81320930 t __mutex_lock_slowpath
ffffffff81320a70 t do_nanosleep
ffffffff81320b40 T hrtimer_nanosleep_restart
ffffffff81320bb0 T schedule_hrtimeout_range_clock
ffffffff81320cf0 T schedule_hrtimeout_range
ffffffff81320d00 T schedule_hrtimeout
ffffffff81320d10 T down_read
ffffffff81320d20 T down_write
ffffffff81320d3d t __down
ffffffff81320de1 t __down_interruptible
ffffffff81320eb2 t __down_killable
ffffffff81320f88 t __down_timeout
ffffffff8132102c t __up.isra.0
ffffffff81321070 t wait_for_common
ffffffff813211f0 T wait_for_completion
ffffffff81321210 T wait_for_completion_timeout
ffffffff81321220 T wait_for_completion_interruptible
ffffffff81321250 T wait_for_completion_interruptible_timeout
ffffffff81321260 T wait_for_completion_killable
ffffffff81321290 T wait_for_completion_killable_timeout
ffffffff813212a0 t sleep_on_common
ffffffff81321370 T sleep_on_timeout
ffffffff81321390 T sleep_on
ffffffff813213b0 T interruptible_sleep_on_timeout
ffffffff813213d0 T interruptible_sleep_on
ffffffff813213f0 t __schedule
ffffffff81321af0 T __cond_resched_softirq
ffffffff81321b40 T _cond_resched
ffffffff81321b80 T schedule
ffffffff81321be0 T io_schedule
ffffffff81321cb0 T yield_to
ffffffff81321e30 T schedule_preempt_disabled
ffffffff81321e40 T yield
ffffffff81321e70 T io_schedule_timeout
ffffffff81321f60 t alarm_timer_nsleep_restart
ffffffff81322030 T rt_mutex_trylock
ffffffff813220c0 T rt_mutex_unlock
ffffffff813221e0 t __rt_mutex_slowlock
ffffffff813222a0 t rt_mutex_slowlock.part.12
ffffffff813223f0 t rt_mutex_slowlock
ffffffff81322480 T rt_mutex_lock
ffffffff813224b0 T rt_mutex_lock_interruptible
ffffffff813224e0 t rwsem_down_failed_common
ffffffff81322640 T rwsem_down_write_failed
ffffffff81322650 T rwsem_down_read_failed
ffffffff81322661 T __sched_text_end
ffffffff81322668 T __lock_text_start
ffffffff81322670 T _raw_spin_trylock
ffffffff81322680 T _raw_spin_lock
ffffffff81322690 T _raw_spin_lock_irqsave
ffffffff813226d0 T _raw_spin_lock_irq
ffffffff813226e0 T _raw_read_trylock
ffffffff81322700 T _raw_read_lock
ffffffff81322710 T _raw_read_lock_irqsave
ffffffff81322730 T _raw_read_lock_irq
ffffffff81322750 T _raw_write_trylock
ffffffff81322770 T _raw_write_lock
ffffffff81322780 T _raw_write_lock_irqsave
ffffffff813227b0 T _raw_write_lock_irq
ffffffff813227d0 T _raw_spin_unlock_bh
ffffffff813227e0 T _raw_read_unlock_bh
ffffffff813227f0 T _raw_write_unlock_bh
ffffffff81322800 T _raw_spin_trylock_bh
ffffffff81322850 T _raw_spin_lock_bh
ffffffff81322870 T _raw_read_lock_bh
ffffffff81322890 T _raw_write_lock_bh
ffffffff813228b0 T _raw_spin_unlock_irqrestore
ffffffff813228d0 T _raw_read_unlock_irqrestore
ffffffff813228f0 T _raw_write_unlock_irqrestore
ffffffff81322920 T tty_unlock
ffffffff81322930 T tty_lock
ffffffff8132293c T __lock_text_end
ffffffff81322940 T __kprobes_text_start
ffffffff81322940 T save_paranoid
ffffffff813229c0 t common_interrupt
ffffffff81322a2a t ret_from_intr
ffffffff81322a3f t exit_intr
ffffffff81322a59 t retint_with_reschedule
ffffffff81322a5e t retint_check
ffffffff81322a65 t retint_swapgs
ffffffff81322a73 t retint_restore_args
ffffffff81322a79 t restore_args
ffffffff81322aa9 t irq_return
ffffffff81322ab0 T native_iret
ffffffff81322ab2 t retint_careful
ffffffff81322ae4 t retint_signal
ffffffff81322b70 T debug
ffffffff81322bb0 T int3
ffffffff81322bf0 T stack_segment
ffffffff81322c20 T xen_debug
ffffffff81322c40 T xen_int3
ffffffff81322c60 T xen_stack_segment
ffffffff81322c90 T general_protection
ffffffff81322cc0 T page_fault
ffffffff81322cf0 T machine_check
ffffffff81322d20 T paranoid_exit
ffffffff81322d3d t paranoid_swapgs
ffffffff81322d96 t paranoid_restore
ffffffff81322dec t paranoid_userspace
ffffffff81322e3c t paranoid_schedule
ffffffff81322e50 T error_entry
ffffffff81322eab t error_swapgs
ffffffff81322eb1 t error_sti
ffffffff81322eb2 t error_kernelspace
ffffffff81322ee1 t bstep_iret
ffffffff81322ef0 T error_exit
ffffffff81322f50 T nmi
ffffffff81322f81 t nested_nmi
ffffffff81322fb4 t nested_nmi_out
ffffffff81322fbb t first_nmi
ffffffff81322fd5 t repeat_nmi
ffffffff81322ff2 t end_repeat_nmi
ffffffff81323010 t nmi_swapgs
ffffffff81323013 t nmi_restore
ffffffff81323080 T ignore_sysret
ffffffff81323087 T __kprobes_text_end
ffffffff81323088 T __entry_text_start
ffffffff813230c0 T native_usergs_sysret64
ffffffff813230d0 T save_rest
ffffffff81323140 T ret_from_fork
ffffffff813231c0 T system_call
ffffffff813231c3 T system_call_after_swapgs
ffffffff81323223 t system_call_fastpath
ffffffff8132323e t ret_from_sys_call
ffffffff81323243 t sysret_check
ffffffff81323291 t sysret_careful
ffffffff813232a8 t sysret_signal
ffffffff813232f1 t badsys
ffffffff813232ff t auditsys
ffffffff81323349 t sysret_audit
ffffffff8132336a t tracesys
ffffffff8132344c T int_ret_from_sys_call
ffffffff81323459 T int_with_check
ffffffff81323479 t int_careful
ffffffff8132349b t int_very_careful
ffffffff813234a3 t int_check_syscall_exit_work
ffffffff813234e0 t int_signal
ffffffff813234f7 t int_restore_rest
ffffffff81323530 T stub_clone
ffffffff81323550 T stub_fork
ffffffff81323570 T stub_vfork
ffffffff81323590 T stub_sigaltstack
ffffffff813235b0 T stub_iopl
ffffffff813235d0 T ptregscall_common
ffffffff81323610 T stub_execve
ffffffff813236d0 T stub_rt_sigreturn
ffffffff81323780 T irq_entries_start
ffffffff81323b80 T irq_move_cleanup_interrupt
ffffffff81323bf0 T reboot_interrupt
ffffffff81323c60 T apic_timer_interrupt
ffffffff81323cd0 T x86_platform_ipi
ffffffff81323d40 T invalidate_interrupt1
ffffffff81323d50 T invalidate_interrupt2
ffffffff81323d60 T invalidate_interrupt3
ffffffff81323d70 T invalidate_interrupt4
ffffffff81323d80 T invalidate_interrupt5
ffffffff81323d90 T invalidate_interrupt6
ffffffff81323da0 T invalidate_interrupt7
ffffffff81323db0 T invalidate_interrupt0
ffffffff81323e20 T threshold_interrupt
ffffffff81323e90 T thermal_interrupt
ffffffff81323f00 T call_function_single_interrupt
ffffffff81323f70 T call_function_interrupt
ffffffff81323fe0 T reschedule_interrupt
ffffffff81324050 T error_interrupt
ffffffff813240c0 T spurious_interrupt
ffffffff81324130 T irq_work_interrupt
ffffffff813241a0 T divide_error
ffffffff813241c0 T overflow
ffffffff813241e0 T bounds
ffffffff81324200 T invalid_op
ffffffff81324220 T device_not_available
ffffffff81324240 T double_fault
ffffffff81324270 T coprocessor_segment_overrun
ffffffff81324290 T invalid_TSS
ffffffff813242c0 T segment_not_present
ffffffff813242f0 T spurious_interrupt_bug
ffffffff81324310 T coprocessor_error
ffffffff81324330 T alignment_check
ffffffff81324360 T simd_coprocessor_error
ffffffff81324380 T native_load_gs_index
ffffffff8132438d t gs_change
ffffffff813243a0 T kernel_thread_helper
ffffffff813243b0 T kernel_execve
ffffffff81324480 T call_softirq
ffffffff813244b0 T xen_hypervisor_callback
ffffffff813244d0 T xen_do_hypervisor_callback
ffffffff81324500 T xen_failsafe_callback
ffffffff813245b0 T xen_hvm_callback_vector
ffffffff81324620 T native_usergs_sysret32
ffffffff81324630 T native_irq_enable_sysexit
ffffffff81324640 T ia32_sysenter_target
ffffffff813246b3 t sysenter_do_call
ffffffff813246bf t sysenter_dispatch
ffffffff813246e0 t sysexit_from_sys_call
ffffffff81324716 t sysenter_auditsys
ffffffff81324755 t sysexit_audit
ffffffff813247b0 t sysenter_tracesys
ffffffff81324850 T ia32_cstar_target
ffffffff813248d6 t cstar_do_call
ffffffff813248df t cstar_dispatch
ffffffff81324900 t sysretl_from_sys_call
ffffffff81324933 t cstar_auditsys
ffffffff81324979 t sysretl_audit
ffffffff813249d4 t cstar_tracesys
ffffffff81324a7c t ia32_badarg
ffffffff81324a90 T ia32_syscall
ffffffff81324ae6 t ia32_do_call
ffffffff81324af9 t ia32_sysret
ffffffff81324afe t ia32_ret_from_sys_call
ffffffff81324b18 t ia32_tracesys
ffffffff81324ba4 t ia32_badsys
ffffffff81324bc0 T stub32_rt_sigreturn
ffffffff81324bd0 T stub32_sigreturn
ffffffff81324be0 T stub32_sigaltstack
ffffffff81324bf0 T stub32_execve
ffffffff81324c00 T stub32_fork
ffffffff81324c10 T stub32_clone
ffffffff81324c20 T stub32_vfork
ffffffff81324c30 T stub32_iopl
ffffffff81324c40 t ia32_ptregs_common
ffffffff81324c8b T __entry_text_end
ffffffff81324d59 t bad_iret
ffffffff81324d66 t bad_gs
ffffffff81326770 T bad_from_user
ffffffff81326776 t bad_to_user
ffffffff81326e30 T __start_notes
ffffffff81326e30 T _etext
ffffffff81326fac T __stop_notes
ffffffff81326fb0 R __start___ex_table
ffffffff8132aea0 R __stop___ex_table
ffffffff8132b000 r __param_str_initcall_debug
ffffffff8132b000 R __start_rodata
ffffffff8132b020 R linux_proc_banner
ffffffff8132b060 R linux_banner
ffffffff8132b100 r xen_vcpuop_clockevent
ffffffff8132b1c0 r xen_timerop_clockevent
ffffffff8132b280 r __func__.26234
ffffffff8132b290 r __func__.28840
ffffffff8132b298 r str.28884
ffffffff8132b2b0 r __func__.28893
ffffffff8132b2c6 r __func__.22053
ffffffff8132b2d2 r __func__.22257
ffffffff8132b2e0 r print_trace_ops
ffffffff8132b300 R sys_call_table
ffffffff8132bcc0 r __func__.28008
ffffffff8132bd20 r k8_nops
ffffffff8132bd70 r __func__.20946
ffffffff8132bd90 r __func__.20998
ffffffff8132bdb0 r __func__.21008
ffffffff8132bde0 r p6_nops
ffffffff8132be40 r k8nops
ffffffff8132be80 r p6nops
ffffffff8132bec0 r CSWTCH.37
ffffffff8132c540 r regoffset_table
ffffffff8132c6a0 r user_x86_32_view
ffffffff8132c6c0 r user_x86_64_view
ffffffff8132c720 r sysfs_ops
ffffffff8132c740 R cpuinfo_op
ffffffff8132c760 R x86_cap_flags
ffffffff8132d160 R x86_power_flags
ffffffff8132d260 r exception_stack_sizes
ffffffff8132d280 R amd_erratum_383
ffffffff8132d290 R amd_erratum_400
ffffffff8132d2f0 r backtrace_ops
ffffffff8132d3a0 r amd_perfmon_event_map
ffffffff8132d400 r p6_perfmon_event_map
ffffffff8132d440 r p4_general_events
ffffffff8132d4a0 r p4_escr_table
ffffffff8132d5c0 r CSWTCH.25
ffffffff8132d5e0 r nhm_lbr_sel_map
ffffffff8132d7e0 r snb_lbr_sel_map
ffffffff8132db00 r nhm_magic.25569
ffffffff8132dbd0 r __func__.23553
ffffffff8132dbda r __func__.24029
ffffffff8132dbf0 r CSWTCH.184
ffffffff8132dc20 r mce_device_attrs
ffffffff8132dc60 r mce_chrdev_ops
ffffffff8132dd30 r shared_bank
ffffffff8132dd40 r threshold_ops
ffffffff8132dd60 r mtrr_strings
ffffffff8132dda0 r mtrr_fops
ffffffff8132de80 R generic_mtrr_ops
ffffffff8132dec0 r fixed_range_blocks
ffffffff8132e038 r no_idt
ffffffff8132e042 r __func__.27161
ffffffff8132e050 r __func__.26864
ffffffff8132e0c0 r error_interrupt_reason.33340
ffffffff8132e188 r __func__.20785
ffffffff8132e194 r __func__.20610
ffffffff8132e1a2 r __func__.20790
ffffffff8132e1c0 r __func__.20704
ffffffff8132e1e0 r __func__.20954
ffffffff8132e200 R amd_nb_misc_ids
ffffffff8132e300 r CSWTCH.21
ffffffff8132e320 r ud2a
ffffffff8132e500 r __func__.27763
ffffffff8132e520 r errata93_warning
ffffffff8132e620 r nx_warning
ffffffff8132e680 r CSWTCH.32
ffffffff8132e760 r CSWTCH.30
ffffffff8132e800 r CSWTCH.9
ffffffff8132e918 r code.25302
ffffffff8132e920 r code.25348
ffffffff8132e940 R ia32_sys_call_table
ffffffff8132f500 r execdomains_proc_fops
ffffffff8132f5e0 r tnts
ffffffff8132f607 r __param_str_pause_on_oops
ffffffff8132f615 r __param_str_panic
ffffffff8132f680 r recursion_bug_msg
ffffffff8132f6b0 r __param_str_console_suspend
ffffffff8132f6d0 r __param_str_always_kmsg_dump
ffffffff8132f6e8 r __param_str_time
ffffffff8132f700 r __param_str_ignore_loglevel
ffffffff8132f720 R cpu_active_mask
ffffffff8132f728 R cpu_present_mask
ffffffff8132f730 R cpu_online_mask
ffffffff8132f738 R cpu_possible_mask
ffffffff8132f740 R cpu_all_bits
ffffffff8132f760 R cpu_bit_bitmap
ffffffff8132f968 r __func__.23787
ffffffff8132f972 r __func__.23809
ffffffff8132fa30 r param.26175
ffffffff8132fa40 r proc_ioports_operations
ffffffff8132fb20 r proc_iomem_operations
ffffffff8132fc00 r resource_op
ffffffff8132fc20 r proc_wspace_sep
ffffffff8132fc24 r cap_last_cap
ffffffff8132fc30 r __func__.41859
ffffffff8132fc4c R __cap_empty_set
ffffffff8132fc60 r __func__.32617
ffffffff8132fdc0 r __func__.32697
ffffffff8132fe20 r module_uevent_ops
ffffffff8132fe40 r module_sysfs_ops
ffffffff8132feb0 r param.19414
ffffffff8132fec0 r hrtimer_clock_to_base_table
ffffffff8132ff00 R min_cfs_quota_period
ffffffff8132ff08 R max_cfs_quota_period
ffffffff8132ff10 R sysctl_timer_migration
ffffffff8132ff14 R sysctl_sched_time_avg
ffffffff8132ff18 R sysctl_sched_nr_migrate
ffffffff8132ff1c R sysctl_sched_features
ffffffff8132ff20 r __func__.39484
ffffffff8132ff40 r prio_to_weight
ffffffff8132ffe0 r prio_to_wmult
ffffffff81330080 r degrade_zero_ticks
ffffffff813300a0 r degrade_factor
ffffffff813300e0 R idle_sched_class
ffffffff813301a0 R fair_sched_class
ffffffff81330260 R sysctl_sched_migration_cost
ffffffff81330280 R rt_sched_class
ffffffff81330340 R stop_sched_class
ffffffff81330400 r pm_qos_array
ffffffff81330420 r __func__.20570
ffffffff81330440 r pm_qos_power_fops
ffffffff81330560 r timer_list_fops
ffffffff81330640 r __mon_yday
ffffffff81330680 r posix_clock_file_operations
ffffffff81330760 r alarmtimer_pm_ops
ffffffff81330940 r proc_dma_operations
ffffffff81330a20 r arr.29319
ffffffff81330a80 r proc_modules_operations
ffffffff81330b60 r modules_op
ffffffff81330b80 r __func__.30451
ffffffff81330ba0 r modinfo_attrs
ffffffff81330bf0 r CSWTCH.170
ffffffff81330c10 r vermagic
ffffffff81330c40 r masks.30190
ffffffff81330c80 r __param_str_nomodule
ffffffff81330ca0 r kallsyms_operations
ffffffff81330d80 r kallsyms_op
ffffffff81330dc0 R proc_cgroup_operations
ffffffff81330ec0 r cgroup_dir_inode_operations
ffffffff81330f80 r cgroup_file_operations
ffffffff81331060 r cgroup_pidlist_operations
ffffffff81331140 r cgroup_pidlist_seq_operations
ffffffff81331160 r cgroup_seqfile_operations
ffffffff81331240 r cgroup_ops
ffffffff81331300 r proc_cgroupstats_operations
ffffffff81331400 r cgroup_dops.31045
ffffffff81331480 r freezer_state_strs
ffffffff81331560 R utsns_operations
ffffffff813315a0 r kernel_config_data
ffffffff81334900 r ikconfig_file_ops
ffffffff81334a50 r __func__.37037
ffffffff81334a5c r __func__.37065
ffffffff81334a70 r __func__.37377
ffffffff81334a80 r __func__.37123
ffffffff81335220 r audit_ops
ffffffff81335a80 r audit_watch_fsnotify_ops
ffffffff81335ac0 r audit_tree_ops
ffffffff81335b00 r param.18824
ffffffff81335b10 r __param_str_irqfixup
ffffffff81335b30 r __param_str_noirqdebug
ffffffff81335b60 r irq_affinity_proc_fops
ffffffff81335c40 r irq_affinity_hint_proc_fops
ffffffff81335d20 r irq_affinity_list_proc_fops
ffffffff81335e00 r irq_node_proc_fops
ffffffff81335ee0 r irq_spurious_proc_fops
ffffffff81335fc0 r default_affinity_proc_fops
ffffffff81336090 r __param_str_rcu_cpu_stall_timeout
ffffffff813360b0 r __param_str_rcu_cpu_stall_suppress
ffffffff813360d0 r __param_str_qlowmark
ffffffff813360f0 r __param_str_qhimark
ffffffff81336100 r __param_str_blimit
ffffffff81336110 r taskstats_cmd_get_policy
ffffffff81336124 r cgroupstats_cmd_get_policy
ffffffff81336140 r perf_fops
ffffffff81336220 r perf_mmap_vmops
ffffffff81336280 R generic_file_vm_ops
ffffffff813362c0 r __func__.26414
ffffffff81336320 r __func__.32359
ffffffff81336340 r fallbacks
ffffffff813363a0 r zone_names
ffffffff813363c0 r pageflag_names
ffffffff81336550 r __func__.30290
ffffffff81336580 r shmem_export_ops
ffffffff813365c0 r shmem_ops
ffffffff81336680 r shmem_aops
ffffffff81336740 r shmem_special_inode_operations
ffffffff81336800 r shmem_inode_operations
ffffffff813368c0 r shmem_file_operations
ffffffff813369c0 r shmem_dir_inode_operations
ffffffff81336a80 r shmem_vm_ops
ffffffff81336ac0 r shmem_short_symlink_operations
ffffffff81336b80 r shmem_symlink_inode_operations
ffffffff81336d00 R vmstat_text
ffffffff81337020 r fragmentation_file_operations
ffffffff81337100 r pagetypeinfo_file_ops
ffffffff813371e0 r proc_vmstat_file_operations
ffffffff813372c0 r proc_zoneinfo_file_operations
ffffffff813373a0 r vmstat_op
ffffffff813373c0 r pagetypeinfo_op
ffffffff813373e0 r migratetype_names
ffffffff81337420 r fragmentation_op
ffffffff81337440 r zoneinfo_op
ffffffff81337500 r special_mapping_vmops
ffffffff81337540 r __func__.24559
ffffffff81337560 r proc_vmalloc_operations
ffffffff81337640 r vmalloc_op
ffffffff81337660 r swap_aops
ffffffff81337700 r proc_swaps_operations
ffffffff813377e0 r swaps_op
ffffffff81337800 r Unused_offset
ffffffff81337820 r Bad_offset
ffffffff81337840 r Unused_file
ffffffff81337860 r Bad_file
ffffffff81337880 R hugetlb_vm_ops
ffffffff813378c0 R hugetlb_infinity
ffffffff813378c8 R hugetlb_zero
ffffffff813378e0 r mpol_ops
ffffffff81337920 r policy_modes
ffffffff81337960 r __func__.19760
ffffffff81337980 r __func__.22898
ffffffff81337a60 r proc_slabinfo_operations
ffffffff81337b40 r slabinfo_op
ffffffff81337b60 r __func__.27911
ffffffff81337b80 r __func__.28010
ffffffff81337b93 r __func__.27818
ffffffff81337bc0 r action_name
ffffffff81337c00 r empty_fops.31745
ffffffff81337ce0 R generic_ro_fops
ffffffff81337dc0 R def_chr_fops
ffffffff81337e90 r uselib_flags.33193
ffffffff81337ea0 r open_exec_flags.33367
ffffffff81337eb0 r __func__.33769
ffffffff81337ec0 R rdwr_pipefifo_fops
ffffffff81337fa0 R write_pipefifo_fops
ffffffff81338080 R read_pipefifo_fops
ffffffff81338160 r pipefs_ops
ffffffff81338240 r pipefs_dentry_operations
ffffffff813382c0 r anon_pipe_buf_ops
ffffffff81338300 r packet_pipe_buf_ops
ffffffff81338340 R page_symlink_inode_operations
ffffffff81338400 r band_table
ffffffff81338440 R def_fifo_fops
ffffffff81338520 r name.30158
ffffffff81338530 r anonstring.30180
ffffffff81338540 r __func__.30395
ffffffff81338560 R empty_aops
ffffffff81338600 r bad_inode_ops
ffffffff813386c0 r bad_file_ops
ffffffff813387a0 r filesystems_proc_fops
ffffffff81338880 R mounts_op
ffffffff813388a0 r __func__.30429
ffffffff813388c0 R simple_dir_inode_operations
ffffffff81338980 R simple_dir_operations
ffffffff81338a80 r simple_dentry_operations.24815
ffffffff81338b00 r simple_super_operations
ffffffff81338bb0 r __func__.24984
ffffffff81338be0 R page_cache_pipe_buf_ops
ffffffff81338c20 r default_pipe_buf_ops
ffffffff81338c60 r user_page_pipe_buf_ops
ffffffff81338c98 r __func__.31185
ffffffff81338cc0 R def_blk_fops
ffffffff81338da0 r bdev_sops
ffffffff81338e60 r def_blk_aops
ffffffff81338f00 R proc_mountstats_operations
ffffffff81338fe0 R proc_mountinfo_operations
ffffffff813390c0 R proc_mounts_operations
ffffffff813391a0 r mnt_info.21914
ffffffff81339220 r fs_info.21905
ffffffff81339260 R inotify_fsnotify_ops
ffffffff813392a0 r __func__.27708
ffffffff813392c0 r inotify_fops
ffffffff813393a0 R fanotify_fsnotify_ops
ffffffff813393e0 r fanotify_fops
ffffffff813394c0 r eventpoll_fops
ffffffff81339590 r path_limits
ffffffff813395c0 r anon_inodefs_dentry_operations
ffffffff81339640 r anon_aops
ffffffff813396e0 r signalfd_fops
ffffffff813397c0 r timerfd_fops
ffffffff813398a0 r eventfd_fops
ffffffff813399c0 r lease_manager_ops
ffffffff81339a00 r proc_locks_operations
ffffffff81339ae0 r locks_seq_operations
ffffffff81339b00 r CSWTCH.60
ffffffff81339b20 r buf.25421
ffffffff81339b24 r buf.26557
ffffffff8133a060 R generic_acl_default_handler
ffffffff8133a0a0 R generic_acl_access_handler
ffffffff8133a100 R proc_tid_numa_maps_operations
ffffffff8133a1e0 R proc_pid_numa_maps_operations
ffffffff8133a2c0 R proc_pagemap_operations
ffffffff8133a3a0 R proc_clear_refs_operations
ffffffff8133a480 R proc_tid_smaps_operations
ffffffff8133a560 R proc_pid_smaps_operations
ffffffff8133a640 R proc_tid_maps_operations
ffffffff8133a720 R proc_pid_maps_operations
ffffffff8133a800 r proc_tid_maps_op
ffffffff8133a820 r proc_pid_smaps_op
ffffffff8133a840 r proc_tid_smaps_op
ffffffff8133a860 r proc_pid_numa_maps_op
ffffffff8133a880 r proc_tid_numa_maps_op
ffffffff8133a8a0 r proc_pid_maps_op
ffffffff8133a8c0 r proc_reg_file_ops_no_compat
ffffffff8133a9a0 r proc_reg_file_ops
ffffffff8133aa80 r proc_sops
ffffffff8133ab40 r tokens
ffffffff8133ab80 r proc_root_inode_operations
ffffffff8133ac40 r proc_root_operations
ffffffff8133ad40 R pid_dentry_operations
ffffffff8133adc0 r proc_def_inode_operations
ffffffff8133ae80 r proc_base_stuff
ffffffff8133aec0 r proc_tgid_base_inode_operations
ffffffff8133af80 r proc_tgid_base_operations
ffffffff8133b060 r tgid_base_stuff
ffffffff8133b600 r proc_pid_link_inode_operations
ffffffff8133b6c0 r proc_tid_base_inode_operations
ffffffff8133b780 r proc_tid_base_operations
ffffffff8133b860 r tid_base_stuff
ffffffff8133bd80 r tid_fd_dentry_operations
ffffffff8133be00 r proc_fdinfo_file_operations
ffffffff8133bee0 r lnames
ffffffff8133c000 r proc_self_inode_operations
ffffffff8133c0c0 r proc_task_inode_operations
ffffffff8133c180 r proc_task_operations
ffffffff8133c280 r proc_fd_inode_operations
ffffffff8133c340 r proc_fd_operations
ffffffff8133c440 r proc_fdinfo_inode_operations
ffffffff8133c500 r proc_fdinfo_operations
ffffffff8133c5e0 r proc_environ_operations
ffffffff8133c6c0 r proc_info_file_operations
ffffffff8133c7a0 r proc_single_file_operations
ffffffff8133c880 r proc_pid_set_comm_operations
ffffffff8133c960 r proc_mem_operations
ffffffff8133ca40 r proc_oom_adjust_operations
ffffffff8133cb20 r proc_oom_score_adj_operations
ffffffff8133cc00 r proc_loginuid_operations
ffffffff8133cce0 r proc_sessionid_operations
ffffffff8133cdc0 r proc_coredump_filter_operations
ffffffff8133cec0 r proc_dentry_operations
ffffffff8133cf40 r proc_dir_operations
ffffffff8133d040 r proc_dir_inode_operations
ffffffff8133d100 r proc_link_inode_operations
ffffffff8133d1c0 r proc_file_operations
ffffffff8133d2c0 r proc_file_inode_operations
ffffffff8133d380 r __func__.20802
ffffffff8133d3a0 r task_state_array
ffffffff8133d400 r proc_tty_drivers_operations
ffffffff8133d4e0 r tty_drivers_op
ffffffff8133d500 r cmdline_proc_fops
ffffffff8133d5e0 r proc_consoles_operations
ffffffff8133d6c0 r consoles_op
ffffffff8133d6e0 r con_flags.16565
ffffffff8133d700 r proc_cpuinfo_operations
ffffffff8133d7e0 r proc_devinfo_operations
ffffffff8133d8c0 r devinfo_ops
ffffffff8133d8e0 r proc_interrupts_operations
ffffffff8133d9c0 r int_seq_ops
ffffffff8133d9e0 r loadavg_proc_fops
ffffffff8133dac0 r meminfo_proc_fops
ffffffff8133dba0 r proc_stat_operations
ffffffff8133dc80 r uptime_proc_fops
ffffffff8133dd60 r version_proc_fops
ffffffff8133de40 r proc_softirqs_operations
ffffffff8133df40 R proc_ns_dir_inode_operations
ffffffff8133e000 R proc_ns_dir_operations
ffffffff8133e0e0 r ns_file_operations
ffffffff8133e1c0 r null_path.24433
ffffffff8133e200 r proc_sys_dir_operations
ffffffff8133e2c0 r proc_sys_dir_file_operations
ffffffff8133e3c0 r proc_sys_dentry_operations
ffffffff8133e440 r proc_sys_inode_operations
ffffffff8133e500 r proc_sys_file_operations
ffffffff8133e600 R proc_net_operations
ffffffff8133e700 R proc_net_inode_operations
ffffffff8133e7c0 r proc_kcore_operations
ffffffff8133e8a0 r proc_kmsg_operations
ffffffff8133e980 r proc_kpagecount_operations
ffffffff8133ea60 r proc_kpageflags_operations
ffffffff8133eb40 r sysfs_aops
ffffffff8133ec00 r sysfs_inode_operations
ffffffff8133ecc0 R sysfs_file_operations
ffffffff8133edc0 R sysfs_dir_operations
ffffffff8133eec0 R sysfs_dir_inode_operations
ffffffff8133ef80 r sysfs_dentry_ops
ffffffff8133f000 R sysfs_symlink_inode_operations
ffffffff8133f0c0 r sysfs_ops
ffffffff8133f170 r __func__.21523
ffffffff8133f1a0 R bin_fops
ffffffff8133f280 r bin_vm_ops
ffffffff8133f2c0 r configfs_aops
ffffffff8133f380 r configfs_inode_operations
ffffffff8133f440 R configfs_file_operations
ffffffff8133f540 R configfs_dir_operations
ffffffff8133f640 R configfs_root_inode_operations
ffffffff8133f700 R configfs_dir_inode_operations
ffffffff8133f7c0 R configfs_dentry_ops
ffffffff8133f840 R configfs_symlink_inode_operations
ffffffff8133f900 r configfs_ops
ffffffff8133f9c0 r devpts_sops
ffffffff8133fa80 r tokens
ffffffff8133fac0 r ramfs_dir_inode_operations
ffffffff8133fb80 r tokens
ffffffff8133fba0 r ramfs_ops
ffffffff8133fc80 R ramfs_file_inode_operations
ffffffff8133fd40 R ramfs_file_operations
ffffffff8133fe20 R ramfs_aops
ffffffff8133ff00 R hugetlbfs_file_operations
ffffffff8133ffe0 r hugetlbfs_ops
ffffffff813400c0 r hugetlbfs_dir_inode_operations
ffffffff81340180 r tokens
ffffffff81340200 r hugetlbfs_aops
ffffffff813402c0 r hugetlbfs_inode_operations
ffffffff81340380 r utf8_table
ffffffff81340460 r charset2uni
ffffffff81340660 r page_uni2charset
ffffffff81340e60 r charset2lower
ffffffff81340f60 r charset2upper
ffffffff81341060 r page00
ffffffff81341180 r pstore_file_operations
ffffffff81341260 r pstore_ops
ffffffff81341340 r pstore_dir_inode_operations
ffffffff81341400 r tokens
ffffffff81341420 r CSWTCH.23
ffffffff81341450 r __param_str_backend
ffffffff81341500 r sysvipc_proc_fops
ffffffff813415e0 r sysvipc_proc_seqops
ffffffff813416c0 r shm_file_operations_huge
ffffffff813417a0 r shm_vm_ops
ffffffff813417e0 r shm_file_operations
ffffffff813418c0 r mqueue_super_ops
ffffffff81341980 r mqueue_file_operations
ffffffff81341a80 r mqueue_dir_inode_operations
ffffffff81341b40 r oflag2acc.40042
ffffffff81341b60 R ipcns_operations
ffffffff81341c38 r __func__.30831
ffffffff81341ca0 r proc_crypto_ops
ffffffff81341d80 r crypto_seq_ops
ffffffff81341da0 R crypto_nivaead_type
ffffffff81341e00 R crypto_aead_type
ffffffff81341e60 R crypto_givcipher_type
ffffffff81341ec0 R crypto_ablkcipher_type
ffffffff81341f20 R crypto_blkcipher_type
ffffffff81341f80 R crypto_ahash_type
ffffffff81341fe0 r crypto_shash_type
ffffffff81342040 r crypto_pcomp_type
ffffffff813420a0 R crypto_il_tab
ffffffff813430a0 R crypto_it_tab
ffffffff813440a0 R crypto_fl_tab
ffffffff813450a0 R crypto_ft_tab
ffffffff813460a0 r rco_tab
ffffffff813460e0 R crypto_rng_type
ffffffff81346180 r __func__.29257
ffffffff813461a0 r elv_sysfs_ops
ffffffff813461c0 r __func__.30914
ffffffff813461e0 r __func__.31011
ffffffff81346200 r __func__.31055
ffffffff81346213 r __func__.30445
ffffffff81346230 r __func__.27102
ffffffff81346240 r __func__.27150
ffffffff81346260 r __func__.27163
ffffffff81346280 r queue_sysfs_ops
ffffffff813462a0 r __func__.27327
ffffffff813462c0 r __func__.27358
ffffffff813462e0 r __func__.27368
ffffffff81346300 r __func__.27603
ffffffff81346320 r proc_diskstats_operations
ffffffff81346400 r proc_partitions_operations
ffffffff813464e0 r diskstats_op
ffffffff81346500 r partitions_op
ffffffff81346520 r __param_str_events_dfl_poll_msecs
ffffffff81346540 r disk_events_dfl_poll_msecs_param_ops
ffffffff81346560 r dev_attr_events
ffffffff81346580 r dev_attr_events_async
ffffffff813465a0 r dev_attr_events_poll_msecs
ffffffff813465c0 R scsi_command_size_tbl
ffffffff813465d0 r __func__.28680
ffffffff81346600 r check_part
ffffffff81346620 r subtypes
ffffffff813466a0 r bsg_fops
ffffffff81346770 r CSWTCH.36
ffffffff81346800 r fd_ioctl_trans_table
ffffffff81346860 R _ctype
ffffffff81346960 r compressed_formats
ffffffff81346a08 r lzop_magic
ffffffff81346a60 R kobj_sysfs_ops
ffffffff81346a80 r __func__.12132
ffffffff81346aa0 r __func__.12267
ffffffff81346ac0 r kobject_actions
ffffffff813474e8 r io_spec.38765
ffffffff813474f0 r mem_spec.38766
ffffffff813474f8 r dec_spec.38768
ffffffff81347500 r bus_spec.38767
ffffffff81347510 r CSWTCH.83
ffffffff81347530 r CSWTCH.84
ffffffff81347550 r be.38880
ffffffff81347560 r le.38881
ffffffff813475c0 R inat_group_table_22
ffffffff813475e0 R inat_group_table_12
ffffffff81347600 R inat_group_table_18_2
ffffffff81347620 R inat_group_table_18
ffffffff81347640 R inat_group_table_15_1
ffffffff81347660 R inat_group_table_15
ffffffff81347680 R inat_group_table_14_1
ffffffff813476a0 R inat_group_table_14
ffffffff813476c0 R inat_group_table_13_1
ffffffff813476e0 R inat_group_table_13
ffffffff81347700 R inat_group_table_21_2
ffffffff81347720 R inat_group_table_21_1
ffffffff81347740 R inat_group_table_21
ffffffff81347760 R inat_group_table_10
ffffffff81347780 R inat_group_table_9
ffffffff813477a0 R inat_group_table_8
ffffffff813477c0 R inat_group_table_7
ffffffff813477e0 R inat_group_table_6
ffffffff81347800 R inat_group_table_5
ffffffff81347820 R inat_escape_table_3_3
ffffffff81347c20 R inat_escape_table_3_1
ffffffff81348020 R inat_escape_table_3
ffffffff81348420 R inat_escape_table_2_3
ffffffff81348820 R inat_escape_table_2_2
ffffffff81348c20 R inat_escape_table_2_1
ffffffff81349020 R inat_escape_table_2
ffffffff81349420 R inat_escape_table_1_3
ffffffff81349820 R inat_escape_table_1_2
ffffffff81349c20 R inat_escape_table_1_1
ffffffff8134a020 R inat_escape_table_1
ffffffff8134a420 R inat_primary_table
ffffffff8134a8d0 R hex_asc
ffffffff8134a9c0 R byte_rev_table
ffffffff8134aac0 R crc16_table
ffffffff8134adc0 r lenfix.2649
ffffffff8134b5c0 r distfix.2650
ffffffff8134b640 r order.2681
ffffffff8134b680 r lbase.2594
ffffffff8134b6c0 r dbase.2596
ffffffff8134b700 r lext.2595
ffffffff8134b740 r dext.2597
ffffffff8134b850 r mask_to_bit_num.11799
ffffffff8134b858 r mask_to_allowed_status.11798
ffffffff8134b860 r branch_table.11828
ffffffff8134b8c0 r nla_attr_minlen
ffffffff8134b8d8 r __func__.13682
ffffffff8134b900 r pci_vpd_pci22_ops
ffffffff8134b920 r pcie_link_speed
ffffffff8134b930 r agp_speeds
ffffffff8134b940 r pcix_bus_speed
ffffffff8134b950 r CSWTCH.210
ffffffff8134b960 r __func__.21752
ffffffff8134b980 r __func__.21817
ffffffff8134b9a0 r __func__.21831
ffffffff8134b9c0 R pci_dev_pm_ops
ffffffff8134ba80 r __func__.28878
ffffffff8134baa0 r __func__.28852
ffffffff8134bac0 r __func__.28752
ffffffff8134bae0 r __func__.28813
ffffffff8134bb00 r __func__.28840
ffffffff8134bb10 r __func__.28736
ffffffff8134bb23 r __func__.28803
ffffffff8134bb40 r proc_bus_pci_operations
ffffffff8134bc20 r proc_bus_pci_dev_operations
ffffffff8134bd00 r proc_bus_pci_devices_op
ffffffff8134bd20 r pci_bus_speed_strings
ffffffff8134bde0 r pci_slot_sysfs_ops
ffffffff8134bea0 r pci_dev_reset_methods
ffffffff8134bee0 r policy_str
ffffffff8134bf00 r __param_str_policy
ffffffff8134bf20 r port_pci_ids
ffffffff8134bf60 r pcie_portdrv_pm_ops
ffffffff8134c020 r aer_error_severity_string
ffffffff8134c040 r aer_agent_string
ffffffff8134c060 r aer_error_layer
ffffffff8134c078 r CSWTCH.31
ffffffff8134c080 r __param_str_nosourceid
ffffffff8134c0a0 r __param_str_forceload
ffffffff8134c0c0 r msi_irq_sysfs_ops
ffffffff8134c0e0 r CSWTCH.7
ffffffff8134c0f4 r state_conv.28739
ffffffff8134c100 r device_label_dsm_uuid
ffffffff8134c1b0 r CSWTCH.65
ffffffff8134c1b2 r mask.29468
ffffffff8134c1c0 r fb_proc_fops
ffffffff8134c2a0 r fb_fops
ffffffff8134c380 r proc_fb_seq_ops
ffffffff8134c3a0 r brokendb
ffffffff8134c3c4 r edid_v1_header
ffffffff8134c3e0 r default_2_colors
ffffffff8134c420 r default_4_colors
ffffffff8134c460 r default_8_colors
ffffffff8134c4a0 r default_16_colors
ffffffff8134c740 R vesa_modes
ffffffff8134cfc0 R cea_modes
ffffffff8134dfc0 r modedb
ffffffff8134eec0 r fb_cvt_vbi_tab
ffffffff8134eee0 R dummy_con
ffffffff8134f000 R vga_con
ffffffff8134f1c0 r CSWTCH.380
ffffffff8134f200 r fb_con
ffffffff8134f320 r fonts
ffffffff8134f340 R font_vga_8x8
ffffffff8134f380 r fontdata_8x8
ffffffff8134fb80 R font_vga_8x16
ffffffff8134fbc0 r fontdata_8x16
ffffffff81350bc0 r __param_str_nologo
ffffffff81350ce0 r CSWTCH.44
ffffffff81350d28 r cfb_tab32
ffffffff81350d40 r cfb_tab8_le
ffffffff81350d80 r cfb_tab16_le
ffffffff81350d90 r CSWTCH.98
ffffffff81350e00 r mps_inti_flags_trigger
ffffffff81350e20 r mps_inti_flags_polarity
ffffffff81350e40 r __func__.31534
ffffffff81350e58 r __func__.28766
ffffffff81350e68 r __param_str_bfs
ffffffff81350e78 r __param_str_gts
ffffffff81350e90 r opc_map_to_uuid.29883
ffffffff81350ed0 r __func__.30057
ffffffff81350eda r _acpi_module_name
ffffffff81350f80 r _acpi_module_name
ffffffff81350f90 r __func__.24377
ffffffff81350fb0 r ec_device_ids
ffffffff81350fe0 r __param_str_ec_delay
ffffffff81350ff0 r root_device_ids
ffffffff81351020 r _acpi_module_name
ffffffff81351030 r link_device_ids
ffffffff81351060 r _acpi_module_name
ffffffff81351070 r prt_quirks
ffffffff813510f0 r medion_md9580
ffffffff813513a0 r dell_optiplex
ffffffff81351650 r hp_t5710
ffffffff81351900 r power_device_ids
ffffffff81351930 r acpi_system_event_ops
ffffffff81351a05 r _acpi_module_name
ffffffff81351a10 r pm_profile_attr
ffffffff81351a30 r __param_str_acpica_version
ffffffff81351a50 r __param_str_aml_debug_output
ffffffff81351a70 r _acpi_module_name
ffffffff81351a78 r _acpi_module_name
ffffffff81351ad8 r _acpi_module_name
ffffffff81351ae0 r _acpi_module_name
ffffffff81351ae8 r _acpi_module_name
ffffffff81351af8 r _acpi_module_name
ffffffff81351b08 r _acpi_module_name
ffffffff81351b18 r _acpi_module_name
ffffffff81351b28 r _acpi_module_name
ffffffff81351b88 r _acpi_module_name
ffffffff81351b90 r acpi_gbl_op_type_dispatch
ffffffff81351bf0 r _acpi_module_name
ffffffff81351c70 r _acpi_module_name
ffffffff81351c80 r _acpi_module_name
ffffffff81351c90 r _acpi_module_name
ffffffff81351ca0 r _acpi_module_name
ffffffff81351ca8 r _acpi_module_name
ffffffff81351cb0 r _acpi_module_name
ffffffff81351cc0 r _acpi_module_name
ffffffff81351cd0 r _acpi_module_name
ffffffff81351ce0 r _acpi_module_name
ffffffff81351ce8 r _acpi_module_name
ffffffff81351cf0 r acpi_gbl_default_address_spaces
ffffffff81351cf8 r _acpi_module_name
ffffffff81351d08 r _acpi_module_name
ffffffff81351d18 r _acpi_module_name
ffffffff81351d20 r _acpi_module_name
ffffffff81351d30 r _acpi_module_name
ffffffff81351d38 r _acpi_module_name
ffffffff81351d80 r CSWTCH.8
ffffffff81351d88 r _acpi_module_name
ffffffff81351d98 r _acpi_module_name
ffffffff81351da8 r _acpi_module_name
ffffffff81351db0 r _acpi_module_name
ffffffff81351db8 r _acpi_module_name
ffffffff81351dc0 r _acpi_module_name
ffffffff81351e18 r _acpi_module_name
ffffffff81351e28 r _acpi_module_name
ffffffff81351e38 r _acpi_module_name
ffffffff81351e78 r _acpi_module_name
ffffffff81351eb8 r _acpi_module_name
ffffffff81351f30 r _acpi_module_name
ffffffff81351f38 r _acpi_module_name
ffffffff81351ff0 r _acpi_module_name
ffffffff81352100 r _acpi_module_name
ffffffff813521c0 r _acpi_module_name
ffffffff81352200 r _acpi_module_name
ffffffff81352208 r _acpi_module_name
ffffffff81352218 r _acpi_module_name
ffffffff81352228 r _acpi_module_name
ffffffff81352230 r _acpi_module_name
ffffffff81352238 r _acpi_module_name
ffffffff81352241 r _acpi_module_name
ffffffff813522b8 r _acpi_module_name
ffffffff813522c0 r _acpi_module_name
ffffffff813522d0 r acpi_protected_ports
ffffffff813523e0 r _acpi_module_name
ffffffff813523e8 r CSWTCH.7
ffffffff813523f0 r _acpi_module_name
ffffffff81352400 r _acpi_module_name
ffffffff81352410 r _acpi_module_name
ffffffff81352418 r _acpi_module_name
ffffffff81352520 r _acpi_module_name
ffffffff81352528 r _acpi_module_name
ffffffff81352530 r _acpi_module_name
ffffffff813525c8 r _acpi_module_name
ffffffff813525e0 r acpi_rtype_names
ffffffff81352610 r predefined_names
ffffffff81352ce0 r acpi_ns_repairable_names
ffffffff81352d60 r _acpi_module_name
ffffffff81352d70 r _acpi_module_name
ffffffff81352d80 r _acpi_module_name
ffffffff81352d88 r _acpi_module_name
ffffffff81352d98 r _acpi_module_name
ffffffff81352f10 r _acpi_module_name
ffffffff81352f17 r _acpi_module_name
ffffffff81352f20 R acpi_gbl_aml_op_info
ffffffff81353730 r acpi_gbl_short_op_index
ffffffff81353830 r acpi_gbl_long_op_index
ffffffff813538c0 r acpi_gbl_argument_count
ffffffff81353918 r _acpi_module_name
ffffffff81353920 r _acpi_module_name
ffffffff813539c8 r _acpi_module_name
ffffffff813539e0 R acpi_gbl_resource_struct_serial_bus_sizes
ffffffff813539e4 R acpi_gbl_aml_resource_serial_bus_sizes
ffffffff813539f0 R acpi_gbl_resource_struct_sizes
ffffffff81353a10 R acpi_gbl_aml_resource_sizes
ffffffff81353a24 r _acpi_module_name
ffffffff81353c30 r _acpi_module_name
ffffffff81353c78 r _acpi_module_name
ffffffff81353c80 r _acpi_module_name
ffffffff81353c90 r fadt_info_table
ffffffff81353d10 r fadt_pm_info_table
ffffffff81353d50 r _acpi_module_name
ffffffff81353d60 r _acpi_module_name
ffffffff81353d68 r _acpi_module_name
ffffffff81353d70 r _acpi_module_name
ffffffff81353d80 r _acpi_module_name
ffffffff81353e28 r _acpi_module_name
ffffffff81353e30 R acpi_gbl_ns_properties
ffffffff81353e50 r _acpi_module_name
ffffffff81353e60 r acpi_gbl_hex_to_ascii
ffffffff81353e70 r acpi_gbl_event_types
ffffffff81353ea0 r acpi_gbl_ns_type_names
ffffffff81353f98 r acpi_gbl_bad_type
ffffffff81353fb0 r acpi_gbl_desc_type_names
ffffffff81354030 r acpi_gbl_ref_class_names
ffffffff81354178 r _acpi_module_name
ffffffff81354181 r _acpi_module_name
ffffffff81354188 r CSWTCH.7
ffffffff81354190 R acpi_gbl_pre_defined_names
ffffffff81354280 r _acpi_module_name
ffffffff81354287 r _acpi_module_name
ffffffff81354290 r _acpi_module_name
ffffffff81354298 r _acpi_module_name
ffffffff813542a1 r _acpi_module_name
ffffffff813542b0 R acpi_gbl_resource_aml_serial_bus_sizes
ffffffff813542c0 R acpi_gbl_resource_aml_sizes
ffffffff813542e0 r acpi_gbl_resource_types
ffffffff81354300 r _acpi_module_name
ffffffff81354308 r _acpi_module_name
ffffffff81354310 r _acpi_module_name
ffffffff81354320 r hest_esrc_len_tab
ffffffff81354360 r cper_severity_strs
ffffffff81354380 r cper_proc_type_strs
ffffffff81354390 r cper_proc_isa_strs
ffffffff813543c0 r cper_proc_op_strs
ffffffff813543e0 r cper_mem_err_type_strs
ffffffff81354460 r cper_pcie_port_type_strs
ffffffff813544c0 r __func__.25411
ffffffff813544e0 r CSWTCH.35
ffffffff81354500 r __func__.25474
ffffffff81354510 r __func__.30575
ffffffff81354530 r CSWTCH.70
ffffffff81354540 r __func__.30613
ffffffff81354553 r __param_str_disable
ffffffff81354560 r xtab.15093
ffffffff81354580 r xtab.15110
ffffffff81354590 r CSWTCH.14
ffffffff813545c0 r CSWTCH.37
ffffffff81354690 r CSWTCH.39
ffffffff813546a0 r pnp_dev_table
ffffffff81354860 r CSWTCH.38
ffffffff81354880 r name.19735
ffffffff813548d0 r ring_ops_pv
ffffffff813548e0 r ring_ops_hvm
ffffffff81354900 r xsd_errors
ffffffff813549e0 r __func__.21171
ffffffff813549f0 r __func__.27324
ffffffff81354a20 R xen_xenbus_fops
ffffffff81354b00 R xenbus_backend_fops
ffffffff81354be0 r xenbus_pm_ops
ffffffff81354ca0 r balloon_attrs
ffffffff81354cd0 r balloon_info_group
ffffffff81354cf0 r selfballoon_group
ffffffff81354d10 r xen_properties_group
ffffffff81354d30 r xen_compilation_group
ffffffff81354d50 r version_group
ffffffff81354d70 r hyp_sysfs_ops
ffffffff81354da0 r hung_up_tty_fops
ffffffff81354e70 r __func__.27183
ffffffff81354e7d r __func__.27246
ffffffff81354e90 r __func__.27232
ffffffff81354eb0 r ptychar
ffffffff81354ee0 r tty_fops
ffffffff81354fb0 r __func__.27282
ffffffff81354fc0 r console_fops
ffffffff813550c0 r baud_table
ffffffff81355140 r baud_bits
ffffffff813551c0 R tty_ldiscs_proc_fops
ffffffff813552a0 r tty_ldiscs_seq_ops
ffffffff813552c0 r __func__.25998
ffffffff813552e0 r ptm_unix98_ops
ffffffff813553e0 r pty_unix98_ops
ffffffff813554e0 r vcs_fops
ffffffff81355670 r __func__.26935
ffffffff81355680 r k_handler
ffffffff81355700 r ret_diacr.26734
ffffffff81355710 r app_map.26759
ffffffff81355740 r fn_handler
ffffffff813557e0 r x86_keycodes
ffffffff813559e0 r max_vals
ffffffff81355a20 r CSWTCH.125
ffffffff81355a30 r __param_str_brl_nbchords
ffffffff81355a50 r __param_str_brl_timeout
ffffffff81355a80 r kbd_ids
ffffffff813563e0 r con_ops
ffffffff813564e0 r utf8_length_changes.26907
ffffffff81356500 r double_width.26880
ffffffff81356560 r __param_str_underline
ffffffff8135656d r __param_str_italic
ffffffff81356577 r __param_str_default_blu
ffffffff813565a0 r __param_arr_default_blu
ffffffff813565c0 r __param_str_default_grn
ffffffff813565e0 r __param_arr_default_grn
ffffffff81356600 r __param_str_default_red
ffffffff81356620 r __param_arr_default_red
ffffffff81356640 r __param_str_consoleblank
ffffffff8135664d r __param_str_cur_default
ffffffff81356660 r __param_str_global_cursor_default
ffffffff81356680 r __param_str_default_utf8
ffffffff813566a0 r hvc_ops
ffffffff813567a0 r xencons_ids
ffffffff813567e0 r memory_fops
ffffffff813568c0 r devlist
ffffffff81356a40 r mmap_mem_ops
ffffffff81356a80 r mem_fops
ffffffff81356b60 r null_fops
ffffffff81356c40 r port_fops
ffffffff81356d20 r zero_fops
ffffffff81356e00 r full_fops
ffffffff81356ee0 r kmsg_fops
ffffffff81356fc0 R urandom_fops
ffffffff813570a0 R random_fops
ffffffff81357180 r twist_table.24979
ffffffff813571a0 r misc_proc_fops
ffffffff81357280 r misc_fops
ffffffff81357360 r misc_seq_ops
ffffffff81357380 r nvram_proc_fops
ffffffff81357460 r floppy_types
ffffffff813574a0 r gfx_types
ffffffff813574c0 r nvram_fops
ffffffff813575a0 r CSWTCH.67
ffffffff813575c0 r vga_arb_device_fops
ffffffff813576a0 r device_uevent_ops
ffffffff813576c0 r dev_sysfs_ops
ffffffff813576e0 r __func__.15168
ffffffff813576f0 r bus_uevent_ops
ffffffff81357710 r driver_sysfs_ops
ffffffff81357730 r bus_sysfs_ops
ffffffff81357748 r __func__.17932
ffffffff81357755 r __func__.17957
ffffffff81357770 r __func__.18599
ffffffff81357790 r __func__.18622
ffffffff813577b0 r class_sysfs_ops
ffffffff813577e0 r platform_dev_pm_ops
ffffffff813578a0 R power_group_name
ffffffff813578a6 r enabled
ffffffff813578ae r disabled
ffffffff813578b7 r ctrl_auto
ffffffff813578bc r ctrl_on
ffffffff813578c0 r __func__.13023
ffffffff813578e0 r __func__.13038
ffffffff81357900 r __func__.13050
ffffffff81357920 r __func__.22055
ffffffff81357931 r __func__.22212
ffffffff81357940 r __func__.22250
ffffffff81357950 r __func__.17195
ffffffff81357970 r __func__.24828
ffffffff81357990 r __func__.24818
ffffffff813579b0 r __func__.24840
ffffffff813579d0 r __func__.24733
ffffffff81357a40 r scsi_device_types
ffffffff81357ae0 r __param_str_scsi_logging_level
ffffffff81357cc0 r variable_length_arr
ffffffff81357f00 r maint_in_arr
ffffffff81357f80 r maint_out_arr
ffffffff81357fe0 r serv_in16_arr
ffffffff81358020 r serv_out16_arr
ffffffff81358040 r cdb_byte0_names
ffffffff81358640 r CSWTCH.21
ffffffff81358860 r snstext
ffffffff813588e0 r additional
ffffffff8135afc0 r additional2
ffffffff8135b040 r driverbyte_table
ffffffff8135b0a0 r hostbyte_table
ffffffff8135b3e0 r __func__.29583
ffffffff8135b410 r __func__.29768
ffffffff8135b422 r __func__.29676
ffffffff8135b430 r __func__.29695
ffffffff8135b450 r __func__.29684
ffffffff8135b470 r __func__.30036
ffffffff8135b488 r __func__.29817
ffffffff8135b4a0 r __func__.29605
ffffffff8135b4c0 r __func__.30083
ffffffff8135b4e0 r __func__.30216
ffffffff8135b5e0 r __func__.29883
ffffffff8135b5f0 r __func__.30302
ffffffff8135b620 r __func__.29114
ffffffff8135b640 r __func__.29060
ffffffff8135b650 r __func__.29290
ffffffff8135b670 r __func__.29392
ffffffff8135b690 r __func__.29418
ffffffff8135b6b0 r __func__.29408
ffffffff8135b6d0 r __param_str_inq_timeout
ffffffff8135b6f0 r __param_str_max_report_luns
ffffffff8135b709 r __param_str_scan
ffffffff8135b720 r __param_string_scan
ffffffff8135b730 r __param_str_max_luns
ffffffff8135b760 r sdev_states
ffffffff8135b7e0 r shost_states
ffffffff8135b860 r __func__.27904
ffffffff8135b880 r spaces
ffffffff8135b8a0 r __func__.27886
ffffffff8135b8c0 r scsi_devinfo_proc_fops
ffffffff8135b9a0 r scsi_devinfo_seq_ops
ffffffff8135b9c0 r __func__.27958
ffffffff8135b9e0 r __param_str_default_dev_flags
ffffffff8135ba00 r __param_str_dev_flags
ffffffff8135ba20 r __param_string_dev_flags
ffffffff8135ba40 r __func__.28319
ffffffff8135ba60 r __func__.28329
ffffffff8135ba80 r proc_scsi_operations
ffffffff8135bb60 r scsi_seq_ops
ffffffff8135bb80 R scsi_bus_pm_ops
ffffffff8135bd50 r cap.29265
ffffffff8135bd60 r sd_fops
ffffffff8135bdc0 r lbp_mode
ffffffff8135be00 r sd_cache_types
ffffffff8135be60 R ata_dummy_port_info
ffffffff8135bea0 R sata_port_ops
ffffffff8135c080 R ata_base_port_ops
ffffffff8135c250 R sata_deb_timing_long
ffffffff8135c270 R sata_deb_timing_hotplug
ffffffff8135c290 R sata_deb_timing_normal
ffffffff8135c2b0 r ata_rw_cmds
ffffffff8135c2e0 r ata_xfer_tbl
ffffffff8135c320 r xfer_mode_str.37179
ffffffff8135c3c0 r spd_str.37186
ffffffff8135c3e0 r __func__.37348
ffffffff8135c3f0 r __func__.37391
ffffffff8135c404 r CSWTCH.237
ffffffff8135c420 r ata_device_blacklist
ffffffff8135ca40 r ata_timing
ffffffff8135cc10 r CSWTCH.234
ffffffff8135cc28 r __func__.38431
ffffffff8135cc40 r ata_port_pm_ops
ffffffff8135cd00 r __param_str_atapi_an
ffffffff8135cd10 r __param_str_allow_tpm
ffffffff8135cd21 r __param_str_noacpi
ffffffff8135cd30 r __param_str_ata_probe_timeout
ffffffff8135cd49 r __param_str_dma
ffffffff8135cd60 r __param_str_ignore_hpa
ffffffff8135cd72 r __param_str_fua
ffffffff8135cd80 r __param_str_atapi_passthru16
ffffffff8135cda0 r __param_str_atapi_dmadir
ffffffff8135cdc0 r __param_str_atapi_enabled
ffffffff8135cdd5 r __param_str_force
ffffffff8135cdf0 r __param_string_force
ffffffff8135d960 r ata_lpm_policy_names
ffffffff8135d980 r CSWTCH.114
ffffffff8135d9a0 r sense_table.35561
ffffffff8135d9e0 r stat_table.35562
ffffffff8135d9f4 r def_rw_recovery_mpage
ffffffff8135da00 r def_control_mpage
ffffffff8135da10 r def_cache_mpage
ffffffff8135da40 r ata_eh_cmd_timeout_table
ffffffff8135daa0 r dma_dnxfer_sel.35671
ffffffff8135daa8 r pio_dnxfer_sel.35672
ffffffff8135dac0 r cmd_descr.35705
ffffffff8135dfe0 r dma_str.35732
ffffffff8135e000 r prot_str.35733
ffffffff8135e040 r CSWTCH.105
ffffffff8135e060 r ata_eh_reset_timeouts
ffffffff8135e0a0 r ata_eh_identify_timeouts
ffffffff8135e0c0 r ata_eh_other_timeouts
ffffffff8135e0e0 r ata_eh_flush_timeouts
ffffffff8135e100 r dev_attr_nr_pmp_links
ffffffff8135e120 r dev_attr_idle_irq
ffffffff8135e140 r dev_attr_hw_sata_spd_limit
ffffffff8135e160 r dev_attr_sata_spd_limit
ffffffff8135e180 r dev_attr_sata_spd
ffffffff8135e1a0 r dev_attr_class
ffffffff8135e1c0 r dev_attr_pio_mode
ffffffff8135e1e0 r dev_attr_dma_mode
ffffffff8135e200 r dev_attr_xfer_mode
ffffffff8135e220 r dev_attr_spdn_cnt
ffffffff8135e240 r dev_attr_ering
ffffffff8135e260 r dev_attr_id
ffffffff8135e280 r dev_attr_gscr
ffffffff8135e2a0 r ata_class_names
ffffffff8135e340 r ata_xfer_names
ffffffff8135e4c0 r ata_err_names
ffffffff8135e580 R ata_bmdma32_port_ops
ffffffff8135e760 R ata_bmdma_port_ops
ffffffff8135e940 R ata_sff_port_ops
ffffffff8135eb10 r CSWTCH.67
ffffffff8135eb30 r __func__.33174
ffffffff8135eb50 r __func__.35842
ffffffff8135eb60 r __func__.35919
ffffffff8135eb80 r __param_str_acpi_gtf_filter
ffffffff8135eba0 r loopback_ethtool_ops
ffffffff8135ed00 r loopback_ops
ffffffff8135ee60 r serio_pm_ops
ffffffff8135ef20 r __param_str_debug
ffffffff8135ef2c r __param_str_nopnp
ffffffff8135ef38 r __param_str_dritek
ffffffff8135ef50 r __param_str_notimeout
ffffffff8135ef60 r __param_str_noloop
ffffffff8135ef6d r __param_str_dumbkbd
ffffffff8135ef7b r __param_str_direct
ffffffff8135ef88 r __param_str_reset
ffffffff8135ef94 r __param_str_unlock
ffffffff8135efa1 r __param_str_nomux
ffffffff8135efad r __param_str_noaux
ffffffff8135efb9 r __param_str_nokbd
ffffffff8135efe0 r i8042_pm_ops
ffffffff8135f098 r keyboard_ids.22695
ffffffff8135f210 r __func__.23203
ffffffff8135f240 r input_devices_fileops
ffffffff8135f320 r input_handlers_fileops
ffffffff8135f400 r input_fops
ffffffff8135f4e0 r input_devices_seq_ops
ffffffff8135f500 r input_handlers_seq_ops
ffffffff8135f520 r input_dev_pm_ops
ffffffff8135f848 r mousedev_imex_seq
ffffffff8135f84e r mousedev_imps_seq
ffffffff8135f860 r __param_str_tap_time
ffffffff8135f872 r __param_str_yres
ffffffff8135f880 r __param_str_xres
ffffffff8135f8a0 r mousedev_fops
ffffffff8135f980 r mousedev_ids
ffffffff8135fe00 r atkbd_unxlate_table
ffffffff8135ff00 r atkbd_set2_keycode
ffffffff81360300 r atkbd_scroll_keys
ffffffff81360320 r atkbd_set3_keycode
ffffffff81360720 r xl_table
ffffffff81360740 r __func__.20796
ffffffff81360750 r __param_str_terminal
ffffffff8136075f r __param_str_extra
ffffffff8136076b r __param_str_scroll
ffffffff81360778 r __param_str_softraw
ffffffff81360790 r __param_str_softrepeat
ffffffff813607a1 r __param_str_reset
ffffffff813607ad r __param_str_set
ffffffff813607c0 r psmouse_protocols
ffffffff813609c8 r seq.20958
ffffffff813609d1 r rates.20925
ffffffff813609d9 r params.20919
ffffffff813609e0 r __param_str_resync_time
ffffffff81360a00 r __param_str_resetafter
ffffffff81360a20 r __param_str_smartscroll
ffffffff81360a34 r __param_str_rate
ffffffff81360a50 r __param_str_resolution
ffffffff81360a63 r __param_str_proto
ffffffff81360a71 r newabs_rel_mask.20934
ffffffff81360a76 r newabs_rslt.20935
ffffffff81360a7b r newabs_mask.20933
ffffffff81360a80 r oldabs_mask.20936
ffffffff81360a85 r oldabs_rslt.20937
ffffffff81360aa0 r rates.19066
ffffffff81360ac0 r alps_model_data
ffffffff81360b80 r alps_v3_nibble_commands
ffffffff81360c00 r alps_v4_nibble_commands
ffffffff81360c80 r ps2pp_list.18720
ffffffff81360d40 r params.18936
ffffffff81360d60 r rtc_days_in_month
ffffffff81360d80 r rtc_ydays
ffffffff81360dc0 r rtc_dev_fops
ffffffff81360ea0 r rtc_proc_fops
ffffffff81360f80 r driver_name
ffffffff81360fa0 r cmos_rtc_ops
ffffffff81361000 r rtc_ids
ffffffff81361040 r cmos_pm_ops
ffffffff81361100 r watchdog_fops
ffffffff813611d0 r sysfs_ops
ffffffff813611f0 r __param_str_off
ffffffff81361200 r cpuidle_state_sysfs_ops
ffffffff81361220 r cpuidle_sysfs_ops
ffffffff81361240 r fields.21404
ffffffff81361250 r memmap_attr_ops
ffffffff81361320 r dispatch_type.24605
ffffffff81361340 r __func__.24644
ffffffff81361360 r hid_have_special_driver
ffffffff81362dc0 r hid_hiddev_list
ffffffff81362e20 r types.24828
ffffffff81362e70 r CSWTCH.71
ffffffff81362ea0 r hid_mouse_ignore_list
ffffffff81363320 r hid_ignore_list
ffffffff81364250 r __param_str_ignore_special_drivers
ffffffff8136426b r __param_str_debug
ffffffff81365f20 r hid_hat_to_axis
ffffffff81365f80 r hid_keyboard
ffffffff81366080 r __func__.14763
ffffffff8136608a r __func__.14790
ffffffff813660e8 r caps.22049
ffffffff81366100 r __func__.22182
ffffffff81366120 r feat_str.29231
ffffffff81366180 r pci_mmap_ops
ffffffff813661c0 r pci_mmcfg
ffffffff813661d0 R pci_direct_conf1
ffffffff813661e0 r pci_direct_conf2
ffffffff813661f0 r extcfg_base_mask.27720
ffffffff81366200 r extcfg_sizebus.27719
ffffffff81366240 r pirqmap.29098
ffffffff81366260 r pirqmap.29086
ffffffff81366274 r pirqmap.29121
ffffffff81366278 r pirqmap.29109
ffffffff81366280 r irqmap.29051
ffffffff81366290 r irqmap.29039
ffffffff81366920 R bad_sock_fops
ffffffff81366a00 r socket_file_ops
ffffffff81366ae0 r sockfs_ops
ffffffff81366bc0 r sockfs_dentry_operations
ffffffff81366c40 r nargs
ffffffff81366f20 r __func__.44382
ffffffff81366f40 r proto_seq_fops
ffffffff81367020 r proto_seq_ops
ffffffff81367040 r sock_pipe_buf_ops
ffffffff81367080 R netns_operations
ffffffff813675c0 r null_features.46504
ffffffff813675d0 r __func__.48467
ffffffff81367600 r dev_seq_fops
ffffffff813676e0 r softnet_seq_fops
ffffffff813677c0 r ptype_seq_fops
ffffffff813678a0 r dev_seq_ops
ffffffff813678c0 r softnet_seq_ops
ffffffff813678e0 r ptype_seq_ops
ffffffff81367be0 r netdev_features_strings
ffffffff81368020 r dev_mc_seq_fops
ffffffff81368100 r dev_mc_seq_ops
ffffffff813681a0 r nl_neightbl_policy
ffffffff813681e0 r nl_ntbl_parm_policy
ffffffff81368240 r neigh_stat_seq_fops
ffffffff81368320 r neigh_stat_seq_ops
ffffffff81368340 R ifla_policy
ffffffff813683c0 r rta_max
ffffffff81368400 r rtm_min
ffffffff81368440 r ifla_info_policy
ffffffff81368460 r ifla_port_policy
ffffffff81368480 r __func__.36326
ffffffff813688c8 r __func__.37366
ffffffff813688e0 r codes.37407
ffffffff813689a0 r CSWTCH.35
ffffffff81368a00 r fmt_dec
ffffffff81368a04 r fmt_ulong
ffffffff81368a09 r fmt_hex
ffffffff81368a20 r operstates
ffffffff81368a58 r fmt_udec
ffffffff81368a5c r fmt_u64
ffffffff81368a70 r rx_queue_sysfs_ops
ffffffff81368a90 r netdev_queue_sysfs_ops
ffffffff81368b60 r nas
ffffffff81368b80 R eth_header_ops
ffffffff81368bc0 r prio2band
ffffffff81368be0 r bitmap2band
ffffffff81368c00 r mq_class_ops
ffffffff81368cb0 r netlink_family_ops
ffffffff81368ce0 r netlink_ops
ffffffff81368da0 r netlink_seq_fops
ffffffff81368e80 r netlink_seq_ops
ffffffff81368ea0 r ctrl_policy
ffffffff81368ec0 R ip_tos2prio
ffffffff81368ed0 r inaddr_any.45647
ffffffff81368ee0 r mtu_plateau
ffffffff81368f00 r rt_cache_seq_fops
ffffffff81368fe0 r rt_cpu_seq_fops
ffffffff813690c0 r rt_cache_seq_ops
ffffffff813690e0 r rt_cpu_seq_ops
ffffffff81369100 r peer_fake_node
ffffffff813691c0 r __func__.41924
ffffffff813691e0 r ip4_frag_ecn_table
ffffffff813691f0 r __func__.39515
ffffffff81369540 R inet_csk_timer_bug_msg
ffffffff813696a0 r new_state
ffffffff81369710 r __func__.43559
ffffffff81369722 r __func__.43639
ffffffff81369740 R ipv4_specific
ffffffff813697c0 r tcp_afinfo_seq_fops
ffffffff813698a0 r __func__.41337
ffffffff813698c0 r raw_seq_fops
ffffffff813699a0 r raw_seq_ops
ffffffff813699c0 r udp_afinfo_seq_fops
ffffffff81369aa0 r udplite_protocol
ffffffff81369ae0 r __func__.37671
ffffffff81369b00 r udplite_afinfo_seq_fops
ffffffff81369be0 r arp_direct_ops
ffffffff81369c20 r arp_hh_ops
ffffffff81369c60 r arp_generic_ops
ffffffff81369ca0 r arp_seq_fops
ffffffff81369d80 r arp_seq_ops
ffffffff81369da0 R icmp_err_convert
ffffffff81369e20 r icmp_pointers
ffffffff8136a048 r inet_af_policy
ffffffff8136a060 r ifa_ipv4_policy
ffffffff8136a300 R inet_dgram_ops
ffffffff8136a3c0 R inet_stream_ops
ffffffff8136a480 r inet_family_ops
ffffffff8136a4a0 r icmp_protocol
ffffffff8136a4d8 r __func__.45188
ffffffff8136a500 r udp_protocol
ffffffff8136a540 r tcp_protocol
ffffffff8136a580 r __func__.45014
ffffffff8136a5a0 r inet_sockraw_ops
ffffffff8136a660 r igmp_mc_seq_fops
ffffffff8136a740 r igmp_mcf_seq_fops
ffffffff8136a820 r igmp_mc_seq_ops
ffffffff8136a840 r igmp_mcf_seq_ops
ffffffff8136a980 R rtm_ipv4_policy
ffffffff8136a9c4 r __func__.44231
ffffffff8136a9d3 r __func__.44247
ffffffff8136aa00 R fib_props
ffffffff8136aa60 r fib_trie_fops
ffffffff8136ab40 r fib_triestat_fops
ffffffff8136ac20 r fib_route_fops
ffffffff8136ad00 r fib_trie_seq_ops
ffffffff8136ad20 r rtn_type_names
ffffffff8136ad80 r fib_route_seq_ops
ffffffff8136ada0 r ping_seq_fops
ffffffff8136ae80 r ping_seq_ops
ffffffff8136aea0 r sockstat_seq_fops
ffffffff8136af80 r netstat_seq_fops
ffffffff8136b060 r snmp_seq_fops
ffffffff8136b140 r snmp4_net_list
ffffffff8136b680 r snmp4_ipextstats_list
ffffffff8136b760 r snmp4_ipstats_list
ffffffff8136b880 r snmp4_tcp_list
ffffffff8136b980 r snmp4_udp_list
ffffffff8136ba00 r icmpmibmap
ffffffff8136bac0 r msstab
ffffffff8136bae0 r v.41716
ffffffff8136bb20 r __param_str_hystart_ack_delta
ffffffff8136bb40 r __param_str_hystart_low_window
ffffffff8136bb60 r __param_str_hystart_detect
ffffffff8136bb80 r __param_str_hystart
ffffffff8136bba0 r __param_str_tcp_friendliness
ffffffff8136bbc0 r __param_str_bic_scale
ffffffff8136bbe0 r __param_str_initial_ssthresh
ffffffff8136bbfb r __param_str_beta
ffffffff8136bc10 r __param_str_fast_convergence
ffffffff8136bc40 r xfrm_policy_fc_ops
ffffffff8136bc60 r xfrm_bundle_fc_ops
ffffffff8136bc80 r CSWTCH.15
ffffffff8136bcb0 r xfrm_aalg_list
ffffffff8136bcd0 r xfrm_ealg_list
ffffffff8136bcf0 r xfrm_calg_list
ffffffff8136bd10 r xfrm_aead_list
ffffffff8136bd40 r unix_seq_fops
ffffffff8136be20 r unix_seq_ops
ffffffff8136be40 r unix_family_ops
ffffffff8136be60 r unix_stream_ops
ffffffff8136bf20 r unix_dgram_ops
ffffffff8136bfe0 r unix_seqpacket_ops
ffffffff8136c098 r __func__.37631
ffffffff8136c0a8 R kallsyms_addresses
ffffffff8138d2f8 R kallsyms_num_syms
ffffffff8138d300 R kallsyms_names
ffffffff813baaa0 R kallsyms_markers
ffffffff813bacb8 R kallsyms_token_table
ffffffff813bb018 R kallsyms_token_index
ffffffff813fa710 R __start___bug_table
ffffffff813fa710 R __start___tracepoints_ptrs
ffffffff813fa710 R __stop___tracepoints_ptrs
ffffffff813fede4 R __stop___bug_table
ffffffff813fede8 r __pci_fixup_PCI_VENDOR_ID_TI0xb800fixup_ti816x_class
ffffffff813fede8 R __start_pci_fixups_early
ffffffff813fee00 r __pci_fixup_PCI_VENDOR_ID_NVIDIAPCI_DEVICE_ID_NVIDIA_MCP55_BRIDGE_V4nvbridge_check_legacy_irq_routing
ffffffff813fee18 r __pci_fixup_PCI_VENDOR_ID_NVIDIAPCI_DEVICE_ID_NVIDIA_MCP55_BRIDGE_V0nvbridge_check_legacy_irq_routing
ffffffff813fee30 r __pci_fixup_PCI_VENDOR_ID_NVIDIAPCI_DEVICE_ID_NVIDIA_NVENET_15nvenet_msi_disable
ffffffff813fee48 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82875_HBquirk_unhide_mch_dev6
ffffffff813fee60 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82865_HBquirk_unhide_mch_dev6
ffffffff813fee78 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_PXHVquirk_pcie_pxh
ffffffff813fee90 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_PXH_1quirk_pcie_pxh
ffffffff813feea8 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_PXH_0quirk_pcie_pxh
ffffffff813feec0 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_PXHD_1quirk_pcie_pxh
ffffffff813feed8 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_PXHD_0quirk_pcie_pxh
ffffffff813feef0 r __pci_fixup_PCI_VENDOR_ID_JMICRONPCI_DEVICE_ID_JMICRON_JMB369quirk_jmicron_ata
ffffffff813fef08 r __pci_fixup_PCI_VENDOR_ID_JMICRONPCI_DEVICE_ID_JMICRON_JMB368quirk_jmicron_ata
ffffffff813fef20 r __pci_fixup_PCI_VENDOR_ID_JMICRONPCI_DEVICE_ID_JMICRON_JMB366quirk_jmicron_ata
ffffffff813fef38 r __pci_fixup_PCI_VENDOR_ID_JMICRONPCI_DEVICE_ID_JMICRON_JMB365quirk_jmicron_ata
ffffffff813fef50 r __pci_fixup_PCI_VENDOR_ID_JMICRONPCI_DEVICE_ID_JMICRON_JMB364quirk_jmicron_ata
ffffffff813fef68 r __pci_fixup_PCI_VENDOR_ID_JMICRONPCI_DEVICE_ID_JMICRON_JMB363quirk_jmicron_ata
ffffffff813fef80 r __pci_fixup_PCI_VENDOR_ID_JMICRONPCI_DEVICE_ID_JMICRON_JMB362quirk_jmicron_ata
ffffffff813fef98 r __pci_fixup_PCI_VENDOR_ID_JMICRONPCI_DEVICE_ID_JMICRON_JMB361quirk_jmicron_ata
ffffffff813fefb0 r __pci_fixup_PCI_VENDOR_ID_JMICRONPCI_DEVICE_ID_JMICRON_JMB360quirk_jmicron_ata
ffffffff813fefc8 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_ANY_IDquirk_no_ata_d3
ffffffff813fefe0 r __pci_fixup_PCI_VENDOR_ID_ALPCI_ANY_IDquirk_no_ata_d3
ffffffff813feff8 r __pci_fixup_PCI_VENDOR_ID_ATIPCI_ANY_IDquirk_no_ata_d3
ffffffff813ff010 r __pci_fixup_PCI_VENDOR_ID_SERVERWORKSPCI_ANY_IDquirk_no_ata_d3
ffffffff813ff028 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82801CA_10quirk_ide_samemode
ffffffff813ff040 r __pci_fixup_PCI_VENDOR_ID_SERVERWORKSPCI_DEVICE_ID_SERVERWORKS_CSB5IDEquirk_svwks_csb5ide
ffffffff813ff058 r __pci_fixup_PCI_ANY_IDPCI_ANY_IDquirk_mmio_always_on
ffffffff813ff070 r __pci_fixup_PCI_VENDOR_ID_SERVERWORKSPCI_DEVICE_ID_SERVERWORKS_LEacpi_pm_check_graylist
ffffffff813ff088 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82801DB_0acpi_pm_check_graylist
ffffffff813ff0a0 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82371AB_3acpi_pm_check_blacklist
ffffffff813ff0b8 r __pci_fixup_PCI_VENDOR_ID_ATI0x4385sb600_disable_hpet_bar
ffffffff813ff0d0 r __pci_fixup_PCI_VENDOR_ID_CYRIXPCI_DEVICE_ID_CYRIX_5530_LEGACYpci_early_fixup_cyrix_5530
ffffffff813ff0e8 R __end_pci_fixups_early
ffffffff813ff0e8 r __pci_fixup_PCI_VENDOR_ID_ATIPCI_DEVICE_ID_ATI_SBX00_SMBUSforce_disable_hpet_msi
ffffffff813ff0e8 R __start_pci_fixups_header
ffffffff813ff100 r __pci_fixup_PCI_VENDOR_ID_NVIDIA0x0367nvidia_force_enable_hpet
ffffffff813ff118 r __pci_fixup_PCI_VENDOR_ID_NVIDIA0x0366nvidia_force_enable_hpet
ffffffff813ff130 r __pci_fixup_PCI_VENDOR_ID_NVIDIA0x0365nvidia_force_enable_hpet
ffffffff813ff148 r __pci_fixup_PCI_VENDOR_ID_NVIDIA0x0364nvidia_force_enable_hpet
ffffffff813ff160 r __pci_fixup_PCI_VENDOR_ID_NVIDIA0x0363nvidia_force_enable_hpet
ffffffff813ff178 r __pci_fixup_PCI_VENDOR_ID_NVIDIA0x0362nvidia_force_enable_hpet
ffffffff813ff190 r __pci_fixup_PCI_VENDOR_ID_NVIDIA0x0361nvidia_force_enable_hpet
ffffffff813ff1a8 r __pci_fixup_PCI_VENDOR_ID_NVIDIA0x0360nvidia_force_enable_hpet
ffffffff813ff1c0 r __pci_fixup_PCI_VENDOR_ID_NVIDIA0x0260nvidia_force_enable_hpet
ffffffff813ff1d8 r __pci_fixup_PCI_VENDOR_ID_NVIDIA0x0051nvidia_force_enable_hpet
ffffffff813ff1f0 r __pci_fixup_PCI_VENDOR_ID_NVIDIA0x0050nvidia_force_enable_hpet
ffffffff813ff208 r __pci_fixup_PCI_VENDOR_ID_ATIPCI_DEVICE_ID_ATI_IXP400_SMBUSati_force_enable_hpet
ffffffff813ff220 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_CX700vt8237_force_enable_hpet
ffffffff813ff238 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_8237vt8237_force_enable_hpet
ffffffff813ff250 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_8235vt8237_force_enable_hpet
ffffffff813ff268 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82801EB_12old_ich_force_enable_hpet
ffffffff813ff280 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82801EB_0old_ich_force_enable_hpet
ffffffff813ff298 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82801DB_12old_ich_force_enable_hpet_user
ffffffff813ff2b0 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82801DB_0old_ich_force_enable_hpet_user
ffffffff813ff2c8 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82801CA_12old_ich_force_enable_hpet_user
ffffffff813ff2e0 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82801CA_0old_ich_force_enable_hpet_user
ffffffff813ff2f8 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_ESB_1old_ich_force_enable_hpet_user
ffffffff813ff310 r __pci_fixup_PCI_VENDOR_ID_INTEL0x3a16ich_force_enable_hpet
ffffffff813ff328 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_ICH9_7ich_force_enable_hpet
ffffffff813ff340 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_ICH8_4ich_force_enable_hpet
ffffffff813ff358 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_ICH8_1ich_force_enable_hpet
ffffffff813ff370 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_ICH7_31ich_force_enable_hpet
ffffffff813ff388 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_ICH7_1ich_force_enable_hpet
ffffffff813ff3a0 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_ICH7_0ich_force_enable_hpet
ffffffff813ff3b8 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_ICH6_1ich_force_enable_hpet
ffffffff813ff3d0 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_ICH6_0ich_force_enable_hpet
ffffffff813ff3e8 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_ESB2_0ich_force_enable_hpet
ffffffff813ff400 r __pci_fixup_PCI_VENDOR_ID_INTEL0x65faquirk_intel_mc_errata
ffffffff813ff418 r __pci_fixup_PCI_VENDOR_ID_INTEL0x65f9quirk_intel_mc_errata
ffffffff813ff430 r __pci_fixup_PCI_VENDOR_ID_INTEL0x65f8quirk_intel_mc_errata
ffffffff813ff448 r __pci_fixup_PCI_VENDOR_ID_INTEL0x65f7quirk_intel_mc_errata
ffffffff813ff460 r __pci_fixup_PCI_VENDOR_ID_INTEL0x65e7quirk_intel_mc_errata
ffffffff813ff478 r __pci_fixup_PCI_VENDOR_ID_INTEL0x65e6quirk_intel_mc_errata
ffffffff813ff490 r __pci_fixup_PCI_VENDOR_ID_INTEL0x65e5quirk_intel_mc_errata
ffffffff813ff4a8 r __pci_fixup_PCI_VENDOR_ID_INTEL0x65e4quirk_intel_mc_errata
ffffffff813ff4c0 r __pci_fixup_PCI_VENDOR_ID_INTEL0x65e3quirk_intel_mc_errata
ffffffff813ff4d8 r __pci_fixup_PCI_VENDOR_ID_INTEL0x65e2quirk_intel_mc_errata
ffffffff813ff4f0 r __pci_fixup_PCI_VENDOR_ID_INTEL0x65c0quirk_intel_mc_errata
ffffffff813ff508 r __pci_fixup_PCI_VENDOR_ID_INTEL0x25faquirk_intel_mc_errata
ffffffff813ff520 r __pci_fixup_PCI_VENDOR_ID_INTEL0x25f9quirk_intel_mc_errata
ffffffff813ff538 r __pci_fixup_PCI_VENDOR_ID_INTEL0x25f8quirk_intel_mc_errata
ffffffff813ff550 r __pci_fixup_PCI_VENDOR_ID_INTEL0x25f7quirk_intel_mc_errata
ffffffff813ff568 r __pci_fixup_PCI_VENDOR_ID_INTEL0x25e7quirk_intel_mc_errata
ffffffff813ff580 r __pci_fixup_PCI_VENDOR_ID_INTEL0x25e6quirk_intel_mc_errata
ffffffff813ff598 r __pci_fixup_PCI_VENDOR_ID_INTEL0x25e5quirk_intel_mc_errata
ffffffff813ff5b0 r __pci_fixup_PCI_VENDOR_ID_INTEL0x25e4quirk_intel_mc_errata
ffffffff813ff5c8 r __pci_fixup_PCI_VENDOR_ID_INTEL0x25e3quirk_intel_mc_errata
ffffffff813ff5e0 r __pci_fixup_PCI_VENDOR_ID_INTEL0x25e2quirk_intel_mc_errata
ffffffff813ff5f8 r __pci_fixup_PCI_VENDOR_ID_INTEL0x25d8quirk_intel_mc_errata
ffffffff813ff610 r __pci_fixup_PCI_VENDOR_ID_INTEL0x25d4quirk_intel_mc_errata
ffffffff813ff628 r __pci_fixup_PCI_VENDOR_ID_INTEL0x25d0quirk_intel_mc_errata
ffffffff813ff640 r __pci_fixup_PCI_VENDOR_ID_INTEL0x25c0quirk_intel_mc_errata
ffffffff813ff658 r __pci_fixup_PCI_VENDOR_ID_SOLARFLAREPCI_DEVICE_ID_SOLARFLARE_SFC4000Bfixup_mpss_256
ffffffff813ff670 r __pci_fixup_PCI_VENDOR_ID_SOLARFLAREPCI_DEVICE_ID_SOLARFLARE_SFC4000A_1fixup_mpss_256
ffffffff813ff688 r __pci_fixup_PCI_VENDOR_ID_SOLARFLAREPCI_DEVICE_ID_SOLARFLARE_SFC4000A_0fixup_mpss_256
ffffffff813ff6a0 r __pci_fixup_PCI_VENDOR_ID_HINT0x0020quirk_hotplug_bridge
ffffffff813ff6b8 r __pci_fixup_PCI_VENDOR_ID_AMDPCI_DEVICE_ID_AMD_8132_BRIDGEht_enable_msi_mapping
ffffffff813ff6d0 r __pci_fixup_PCI_VENDOR_ID_SERVERWORKSPCI_DEVICE_ID_SERVERWORKS_HT1000_PXBht_enable_msi_mapping
ffffffff813ff6e8 r __pci_fixup_PCI_VENDOR_ID_INTEL0x1460quirk_p64h2_1k_io
ffffffff813ff700 r __pci_fixup_PCI_VENDOR_ID_NCRPCI_DEVICE_ID_NCR_53C810fixup_rev1_53c810
ffffffff813ff718 r __pci_fixup_PCI_VENDOR_ID_NETMOSPCI_ANY_IDquirk_netmos
ffffffff813ff730 r __pci_fixup_PCI_VENDOR_ID_TOSHIBA_2PCI_DEVICE_ID_TOSHIBA_TC86C001_IDEquirk_tc86c001_ide
ffffffff813ff748 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_EESSCquirk_alder_ioapic
ffffffff813ff760 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_8237asus_hides_ac97_lpc
ffffffff813ff778 r __pci_fixup_PCI_VENDOR_ID_SIPCI_DEVICE_ID_SI_503quirk_sis_503
ffffffff813ff790 r __pci_fixup_PCI_VENDOR_ID_SIPCI_DEVICE_ID_SI_LPCquirk_sis_96x_smbus
ffffffff813ff7a8 r __pci_fixup_PCI_VENDOR_ID_SIPCI_DEVICE_ID_SI_963quirk_sis_96x_smbus
ffffffff813ff7c0 r __pci_fixup_PCI_VENDOR_ID_SIPCI_DEVICE_ID_SI_962quirk_sis_96x_smbus
ffffffff813ff7d8 r __pci_fixup_PCI_VENDOR_ID_SIPCI_DEVICE_ID_SI_961quirk_sis_96x_smbus
ffffffff813ff7f0 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_ICH6_1asus_hides_smbus_lpc_ich6
ffffffff813ff808 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82801EB_0asus_hides_smbus_lpc
ffffffff813ff820 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82801DB_12asus_hides_smbus_lpc
ffffffff813ff838 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82801CA_12asus_hides_smbus_lpc
ffffffff813ff850 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82801CA_0asus_hides_smbus_lpc
ffffffff813ff868 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82801BA_0asus_hides_smbus_lpc
ffffffff813ff880 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82801DB_0asus_hides_smbus_lpc
ffffffff813ff898 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82801AA_0asus_hides_smbus_lpc
ffffffff813ff8b0 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82815_CGCasus_hides_smbus_hostbridge
ffffffff813ff8c8 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82801DB_2asus_hides_smbus_hostbridge
ffffffff813ff8e0 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82810_IG3asus_hides_smbus_hostbridge
ffffffff813ff8f8 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82915GM_HBasus_hides_smbus_hostbridge
ffffffff813ff910 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82855GM_HBasus_hides_smbus_hostbridge
ffffffff813ff928 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82855PM_HBasus_hides_smbus_hostbridge
ffffffff813ff940 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_E7501_MCHasus_hides_smbus_hostbridge
ffffffff813ff958 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_7205_0asus_hides_smbus_hostbridge
ffffffff813ff970 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82875_HBasus_hides_smbus_hostbridge
ffffffff813ff988 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82865_HBasus_hides_smbus_hostbridge
ffffffff813ff9a0 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82850_HBasus_hides_smbus_hostbridge
ffffffff813ff9b8 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82845G_HBasus_hides_smbus_hostbridge
ffffffff813ff9d0 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82845_HBasus_hides_smbus_hostbridge
ffffffff813ff9e8 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82375quirk_eisa_bridge
ffffffff813ffa00 r __pci_fixup_PCI_VENDOR_ID_AMDPCI_DEVICE_ID_AMD_HUDSON2_SATA_IDEquirk_amd_ide_mode
ffffffff813ffa18 r __pci_fixup_PCI_VENDOR_ID_ATIPCI_DEVICE_ID_ATI_IXP700_SATAquirk_amd_ide_mode
ffffffff813ffa30 r __pci_fixup_PCI_VENDOR_ID_ATIPCI_DEVICE_ID_ATI_IXP600_SATAquirk_amd_ide_mode
ffffffff813ffa48 r __pci_fixup_PCI_VENDOR_ID_TOSHIBA0x605quirk_transparent_bridge
ffffffff813ffa60 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82380FBquirk_transparent_bridge
ffffffff813ffa78 r __pci_fixup_PCI_VENDOR_ID_DUNORDPCI_DEVICE_ID_DUNORD_I3000quirk_dunord
ffffffff813ffa90 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_82C597_0quirk_vt82c598_id
ffffffff813ffaa8 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_8237Aquirk_via_bridge
ffffffff813ffac0 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_8237quirk_via_bridge
ffffffff813ffad8 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_8235quirk_via_bridge
ffffffff813ffaf0 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_8233C_0quirk_via_bridge
ffffffff813ffb08 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_8233Aquirk_via_bridge
ffffffff813ffb20 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_8233_0quirk_via_bridge
ffffffff813ffb38 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_8231quirk_via_bridge
ffffffff813ffb50 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_82C686quirk_via_bridge
ffffffff813ffb68 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_82C686_4quirk_via_acpi
ffffffff813ffb80 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_82C586_3quirk_via_acpi
ffffffff813ffb98 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_8235quirk_vt8235_acpi
ffffffff813ffbb0 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_82C686_4quirk_vt82c686_acpi
ffffffff813ffbc8 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_82C586_3quirk_vt82c586_acpi
ffffffff813ffbe0 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_ICH10_1quirk_ich7_lpc
ffffffff813ffbf8 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_ICH9_8quirk_ich7_lpc
ffffffff813ffc10 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_ICH9_7quirk_ich7_lpc
ffffffff813ffc28 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_ICH9_4quirk_ich7_lpc
ffffffff813ffc40 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_ICH9_2quirk_ich7_lpc
ffffffff813ffc58 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_ICH8_4quirk_ich7_lpc
ffffffff813ffc70 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_ICH8_1quirk_ich7_lpc
ffffffff813ffc88 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_ICH8_3quirk_ich7_lpc
ffffffff813ffca0 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_ICH8_2quirk_ich7_lpc
ffffffff813ffcb8 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_ICH8_0quirk_ich7_lpc
ffffffff813ffcd0 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_ICH7_31quirk_ich7_lpc
ffffffff813ffce8 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_ICH7_1quirk_ich7_lpc
ffffffff813ffd00 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_ICH7_0quirk_ich7_lpc
ffffffff813ffd18 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_ICH6_1quirk_ich6_lpc
ffffffff813ffd30 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_ICH6_0quirk_ich6_lpc
ffffffff813ffd48 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_ESB_1quirk_ich4_lpc_acpi
ffffffff813ffd60 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82801EB_0quirk_ich4_lpc_acpi
ffffffff813ffd78 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82801DB_12quirk_ich4_lpc_acpi
ffffffff813ffd90 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82801DB_0quirk_ich4_lpc_acpi
ffffffff813ffda8 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82801CA_12quirk_ich4_lpc_acpi
ffffffff813ffdc0 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82801CA_0quirk_ich4_lpc_acpi
ffffffff813ffdd8 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82801BA_10quirk_ich4_lpc_acpi
ffffffff813ffdf0 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82801BA_0quirk_ich4_lpc_acpi
ffffffff813ffe08 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82801AB_0quirk_ich4_lpc_acpi
ffffffff813ffe20 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82801AA_0quirk_ich4_lpc_acpi
ffffffff813ffe38 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82443MX_3quirk_piix4_acpi
ffffffff813ffe50 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82371AB_3quirk_piix4_acpi
ffffffff813ffe68 r __pci_fixup_PCI_VENDOR_ID_ALPCI_DEVICE_ID_AL_M7101quirk_ali7101_acpi
ffffffff813ffe80 r __pci_fixup_PCI_VENDOR_ID_AMDPCI_DEVICE_ID_AMD_CS5536_ISAquirk_cs5536_vsa
ffffffff813ffe98 r __pci_fixup_PCI_VENDOR_ID_S3PCI_DEVICE_ID_S3_968quirk_s3_64M
ffffffff813ffeb0 r __pci_fixup_PCI_VENDOR_ID_S3PCI_DEVICE_ID_S3_868quirk_s3_64M
ffffffff813ffec8 r __pci_fixup_PCI_VENDOR_ID_IBMPCI_DEVICE_ID_IBM_CITRINEquirk_citrine
ffffffff813ffee0 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_TGP_LPCquirk_tigerpoint_bm_sts
ffffffff813ffef8 r __pci_fixup_PCI_VENDOR_ID_SIEMENS0x0015pci_siemens_interrupt_controller
ffffffff813fff10 r __pci_fixup_PCI_VENDOR_ID_TI0x8032pci_pre_fixup_toshiba_ohci1394
ffffffff813fff28 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_8237pci_fixup_msi_k8t_onboard_sound
ffffffff813fff40 r __pci_fixup_PCI_VENDOR_ID_NVIDIAPCI_DEVICE_ID_NVIDIA_NFORCE2pci_fixup_nforce2
ffffffff813fff58 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_ANY_IDpci_fixup_transparent_bridge
ffffffff813fff70 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_8367_0pci_fixup_via_northbridge_bug
ffffffff813fff88 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_8361pci_fixup_via_northbridge_bug
ffffffff813fffa0 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_8622pci_fixup_via_northbridge_bug
ffffffff813fffb8 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_8363_0pci_fixup_via_northbridge_bug
ffffffff813fffd0 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82371AB_3pci_fixup_piix4_acpi
ffffffff813fffe8 r __pci_fixup_PCI_VENDOR_ID_SIPCI_DEVICE_ID_SI_5598pci_fixup_latency
ffffffff81400000 r __pci_fixup_PCI_VENDOR_ID_SIPCI_DEVICE_ID_SI_5597pci_fixup_latency
ffffffff81400018 r __pci_fixup_PCI_VENDOR_ID_NCRPCI_DEVICE_ID_NCR_53C810pci_fixup_ncr53c810
ffffffff81400030 r __pci_fixup_PCI_VENDOR_ID_UMCPCI_DEVICE_ID_UMC_UM8886BFpci_fixup_umc_ide
ffffffff81400048 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82454GXpci_fixup_i450gx
ffffffff81400060 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82451NXpci_fixup_i450nx
ffffffff81400078 R __end_pci_fixups_header
ffffffff81400078 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_ANY_IDvia_no_dac
ffffffff81400078 R __start_pci_fixups_final
ffffffff81400090 r __pci_fixup_PCI_VENDOR_ID_AMDPCI_DEVICE_ID_AMD_15H_NB_F5quirk_amd_nb_node
ffffffff814000a8 r __pci_fixup_PCI_VENDOR_ID_AMDPCI_DEVICE_ID_AMD_15H_NB_F4quirk_amd_nb_node
ffffffff814000c0 r __pci_fixup_PCI_VENDOR_ID_AMDPCI_DEVICE_ID_AMD_15H_NB_F3quirk_amd_nb_node
ffffffff814000d8 r __pci_fixup_PCI_VENDOR_ID_AMDPCI_DEVICE_ID_AMD_15H_NB_F2quirk_amd_nb_node
ffffffff814000f0 r __pci_fixup_PCI_VENDOR_ID_AMDPCI_DEVICE_ID_AMD_15H_NB_F1quirk_amd_nb_node
ffffffff81400108 r __pci_fixup_PCI_VENDOR_ID_AMDPCI_DEVICE_ID_AMD_15H_NB_F0quirk_amd_nb_node
ffffffff81400120 r __pci_fixup_PCI_VENDOR_ID_AMDPCI_DEVICE_ID_AMD_10H_NB_LINKquirk_amd_nb_node
ffffffff81400138 r __pci_fixup_PCI_VENDOR_ID_AMDPCI_DEVICE_ID_AMD_10H_NB_MISCquirk_amd_nb_node
ffffffff81400150 r __pci_fixup_PCI_VENDOR_ID_AMDPCI_DEVICE_ID_AMD_10H_NB_DRAMquirk_amd_nb_node
ffffffff81400168 r __pci_fixup_PCI_VENDOR_ID_AMDPCI_DEVICE_ID_AMD_10H_NB_MAPquirk_amd_nb_node
ffffffff81400180 r __pci_fixup_PCI_VENDOR_ID_AMDPCI_DEVICE_ID_AMD_10H_NB_HTquirk_amd_nb_node
ffffffff81400198 r __pci_fixup_PCI_VENDOR_ID_AMDPCI_DEVICE_ID_AMD_K8_NB_MISCquirk_amd_nb_node
ffffffff814001b0 r __pci_fixup_PCI_VENDOR_ID_AMDPCI_DEVICE_ID_AMD_K8_NB_MEMCTLquirk_amd_nb_node
ffffffff814001c8 r __pci_fixup_PCI_VENDOR_ID_AMDPCI_DEVICE_ID_AMD_K8_NB_ADDRMAPquirk_amd_nb_node
ffffffff814001e0 r __pci_fixup_PCI_VENDOR_ID_AMDPCI_DEVICE_ID_AMD_K8_NBquirk_amd_nb_node
ffffffff814001f8 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_E7520_MCHquirk_intel_irqbalance
ffffffff81400210 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_E7525_MCHquirk_intel_irqbalance
ffffffff81400228 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_E7320_MCHquirk_intel_irqbalance
ffffffff81400240 r __pci_fixup_PCI_VENDOR_ID_INTEL0x010adisable_igfx_irq
ffffffff81400258 r __pci_fixup_PCI_VENDOR_ID_INTEL0x0102disable_igfx_irq
ffffffff81400270 r __pci_fixup_PCI_VENDOR_ID_ATI0x4375quirk_msi_intx_disable_bug
ffffffff81400288 r __pci_fixup_PCI_VENDOR_ID_ATI0x4374quirk_msi_intx_disable_bug
ffffffff814002a0 r __pci_fixup_PCI_VENDOR_ID_ATI0x4373quirk_msi_intx_disable_bug
ffffffff814002b8 r __pci_fixup_PCI_VENDOR_ID_ATI0x4394quirk_msi_intx_disable_ati_bug
ffffffff814002d0 r __pci_fixup_PCI_VENDOR_ID_ATI0x4393quirk_msi_intx_disable_ati_bug
ffffffff814002e8 r __pci_fixup_PCI_VENDOR_ID_ATI0x4392quirk_msi_intx_disable_ati_bug
ffffffff81400300 r __pci_fixup_PCI_VENDOR_ID_ATI0x4391quirk_msi_intx_disable_ati_bug
ffffffff81400318 r __pci_fixup_PCI_VENDOR_ID_ATI0x4390quirk_msi_intx_disable_ati_bug
ffffffff81400330 r __pci_fixup_PCI_VENDOR_ID_BROADCOMPCI_DEVICE_ID_TIGON3_5715Squirk_msi_intx_disable_bug
ffffffff81400348 r __pci_fixup_PCI_VENDOR_ID_BROADCOMPCI_DEVICE_ID_TIGON3_5715quirk_msi_intx_disable_bug
ffffffff81400360 r __pci_fixup_PCI_VENDOR_ID_BROADCOMPCI_DEVICE_ID_TIGON3_5714Squirk_msi_intx_disable_bug
ffffffff81400378 r __pci_fixup_PCI_VENDOR_ID_BROADCOMPCI_DEVICE_ID_TIGON3_5714quirk_msi_intx_disable_bug
ffffffff81400390 r __pci_fixup_PCI_VENDOR_ID_BROADCOMPCI_DEVICE_ID_TIGON3_5780Squirk_msi_intx_disable_bug
ffffffff814003a8 r __pci_fixup_PCI_VENDOR_ID_BROADCOMPCI_DEVICE_ID_TIGON3_5780quirk_msi_intx_disable_bug
ffffffff814003c0 r __pci_fixup_PCI_VENDOR_ID_ALPCI_ANY_IDnv_msi_ht_cap_quirk_all
ffffffff814003d8 r __pci_fixup_PCI_VENDOR_ID_NVIDIAPCI_ANY_IDnv_msi_ht_cap_quirk_leaf
ffffffff814003f0 r __pci_fixup_PCI_VENDOR_ID_NVIDIAPCI_DEVICE_ID_NVIDIA_CK804_PCIEquirk_nvidia_ck804_msi_ht_cap
ffffffff81400408 r __pci_fixup_PCI_VENDOR_ID_SERVERWORKSPCI_DEVICE_ID_SERVERWORKS_HT2000_PCIEquirk_msi_ht_cap
ffffffff81400420 r __pci_fixup_PCI_VENDOR_ID_AMD0x9601quirk_amd_780_apc_msi
ffffffff81400438 r __pci_fixup_PCI_VENDOR_ID_AMD0x9600quirk_amd_780_apc_msi
ffffffff81400450 r __pci_fixup_PCI_VENDOR_ID_ATI0x5a3fquirk_disable_msi
ffffffff81400468 r __pci_fixup_PCI_VENDOR_ID_VIA0xa238quirk_disable_msi
ffffffff81400480 r __pci_fixup_PCI_VENDOR_ID_AMDPCI_DEVICE_ID_AMD_8131_BRIDGEquirk_disable_msi
ffffffff81400498 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_8380_0quirk_disable_all_msi
ffffffff814004b0 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_VT3364quirk_disable_all_msi
ffffffff814004c8 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_VT3351quirk_disable_all_msi
ffffffff814004e0 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_VT3336quirk_disable_all_msi
ffffffff814004f8 r __pci_fixup_PCI_VENDOR_ID_ATIPCI_DEVICE_ID_ATI_RS480quirk_disable_all_msi
ffffffff81400510 r __pci_fixup_PCI_VENDOR_ID_ATIPCI_DEVICE_ID_ATI_RS400_200quirk_disable_all_msi
ffffffff81400528 r __pci_fixup_PCI_VENDOR_ID_SERVERWORKSPCI_DEVICE_ID_SERVERWORKS_GCNB_LEquirk_disable_all_msi
ffffffff81400540 r __pci_fixup_PCI_VENDOR_ID_BROADCOMPCI_DEVICE_ID_NX2_5709Squirk_brcm_570x_limit_vpd
ffffffff81400558 r __pci_fixup_PCI_VENDOR_ID_BROADCOMPCI_DEVICE_ID_NX2_5709quirk_brcm_570x_limit_vpd
ffffffff81400570 r __pci_fixup_PCI_VENDOR_ID_BROADCOMPCI_DEVICE_ID_NX2_5708Squirk_brcm_570x_limit_vpd
ffffffff81400588 r __pci_fixup_PCI_VENDOR_ID_BROADCOMPCI_DEVICE_ID_NX2_5708quirk_brcm_570x_limit_vpd
ffffffff814005a0 r __pci_fixup_PCI_VENDOR_ID_BROADCOMPCI_DEVICE_ID_NX2_5706Squirk_brcm_570x_limit_vpd
ffffffff814005b8 r __pci_fixup_PCI_VENDOR_ID_BROADCOMPCI_DEVICE_ID_NX2_5706quirk_brcm_570x_limit_vpd
ffffffff814005d0 r __pci_fixup_PCI_VENDOR_ID_VIA0x324equirk_via_cx700_pci_parking_caching
ffffffff814005e8 r __pci_fixup_PCI_VENDOR_ID_NVIDIAPCI_DEVICE_ID_NVIDIA_CK804_PCIEquirk_nvidia_ck804_pcie_aer_ext_cap
ffffffff81400600 r __pci_fixup_PCI_VENDOR_ID_INTEL0x1460quirk_p64h2_1k_io_fix_iobl
ffffffff81400618 r __pci_fixup_PCI_VENDOR_ID_INTEL0x1508quirk_disable_aspm_l0s
ffffffff81400630 r __pci_fixup_PCI_VENDOR_ID_INTEL0x10f4quirk_disable_aspm_l0s
ffffffff81400648 r __pci_fixup_PCI_VENDOR_ID_INTEL0x10f1quirk_disable_aspm_l0s
ffffffff81400660 r __pci_fixup_PCI_VENDOR_ID_INTEL0x10ecquirk_disable_aspm_l0s
ffffffff81400678 r __pci_fixup_PCI_VENDOR_ID_INTEL0x10e1quirk_disable_aspm_l0s
ffffffff81400690 r __pci_fixup_PCI_VENDOR_ID_INTEL0x10ddquirk_disable_aspm_l0s
ffffffff814006a8 r __pci_fixup_PCI_VENDOR_ID_INTEL0x10dbquirk_disable_aspm_l0s
ffffffff814006c0 r __pci_fixup_PCI_VENDOR_ID_INTEL0x10d6quirk_disable_aspm_l0s
ffffffff814006d8 r __pci_fixup_PCI_VENDOR_ID_INTEL0x10c8quirk_disable_aspm_l0s
ffffffff814006f0 r __pci_fixup_PCI_VENDOR_ID_INTEL0x10c7quirk_disable_aspm_l0s
ffffffff81400708 r __pci_fixup_PCI_VENDOR_ID_INTEL0x10c6quirk_disable_aspm_l0s
ffffffff81400720 r __pci_fixup_PCI_VENDOR_ID_INTEL0x10b6quirk_disable_aspm_l0s
ffffffff81400738 r __pci_fixup_PCI_VENDOR_ID_INTEL0x10a9quirk_disable_aspm_l0s
ffffffff81400750 r __pci_fixup_PCI_VENDOR_ID_INTEL0x10a7quirk_disable_aspm_l0s
ffffffff81400768 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_ANY_IDquirk_e100_interrupt
ffffffff81400780 r __pci_fixup_PCI_VENDOR_ID_AMDPCI_DEVICE_ID_AMD_8111_SMBUSquirk_disable_amd_8111_boot_interrupt
ffffffff81400798 r __pci_fixup_PCI_VENDOR_ID_AMDPCI_DEVICE_ID_AMD_8132_BRIDGEquirk_disable_amd_813x_boot_interrupt
ffffffff814007b0 r __pci_fixup_PCI_VENDOR_ID_AMDPCI_DEVICE_ID_AMD_8131_BRIDGEquirk_disable_amd_813x_boot_interrupt
ffffffff814007c8 r __pci_fixup_PCI_VENDOR_ID_SERVERWORKSPCI_DEVICE_ID_SERVERWORKS_HT1000SBquirk_disable_broadcom_boot_interrupt
ffffffff814007e0 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_ESB_10quirk_disable_intel_boot_interrupt
ffffffff814007f8 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_80332_1quirk_reroute_to_boot_interrupts_intel
ffffffff81400810 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_80332_0quirk_reroute_to_boot_interrupts_intel
ffffffff81400828 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_PXHVquirk_reroute_to_boot_interrupts_intel
ffffffff81400840 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_PXH_1quirk_reroute_to_boot_interrupts_intel
ffffffff81400858 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_PXH_0quirk_reroute_to_boot_interrupts_intel
ffffffff81400870 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_ESB2_0quirk_reroute_to_boot_interrupts_intel
ffffffff81400888 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_80333_1quirk_reroute_to_boot_interrupts_intel
ffffffff814008a0 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_80333_0quirk_reroute_to_boot_interrupts_intel
ffffffff814008b8 r __pci_fixup_PCI_VENDOR_ID_INTEL0x260bquirk_intel_pcie_pm
ffffffff814008d0 r __pci_fixup_PCI_VENDOR_ID_INTEL0x260aquirk_intel_pcie_pm
ffffffff814008e8 r __pci_fixup_PCI_VENDOR_ID_INTEL0x2609quirk_intel_pcie_pm
ffffffff81400900 r __pci_fixup_PCI_VENDOR_ID_INTEL0x2608quirk_intel_pcie_pm
ffffffff81400918 r __pci_fixup_PCI_VENDOR_ID_INTEL0x2607quirk_intel_pcie_pm
ffffffff81400930 r __pci_fixup_PCI_VENDOR_ID_INTEL0x2606quirk_intel_pcie_pm
ffffffff81400948 r __pci_fixup_PCI_VENDOR_ID_INTEL0x2605quirk_intel_pcie_pm
ffffffff81400960 r __pci_fixup_PCI_VENDOR_ID_INTEL0x2604quirk_intel_pcie_pm
ffffffff81400978 r __pci_fixup_PCI_VENDOR_ID_INTEL0x2603quirk_intel_pcie_pm
ffffffff81400990 r __pci_fixup_PCI_VENDOR_ID_INTEL0x2602quirk_intel_pcie_pm
ffffffff814009a8 r __pci_fixup_PCI_VENDOR_ID_INTEL0x2601quirk_intel_pcie_pm
ffffffff814009c0 r __pci_fixup_PCI_VENDOR_ID_INTEL0x25faquirk_intel_pcie_pm
ffffffff814009d8 r __pci_fixup_PCI_VENDOR_ID_INTEL0x25f9quirk_intel_pcie_pm
ffffffff814009f0 r __pci_fixup_PCI_VENDOR_ID_INTEL0x25f8quirk_intel_pcie_pm
ffffffff81400a08 r __pci_fixup_PCI_VENDOR_ID_INTEL0x25f7quirk_intel_pcie_pm
ffffffff81400a20 r __pci_fixup_PCI_VENDOR_ID_INTEL0x25e7quirk_intel_pcie_pm
ffffffff81400a38 r __pci_fixup_PCI_VENDOR_ID_INTEL0x25e6quirk_intel_pcie_pm
ffffffff81400a50 r __pci_fixup_PCI_VENDOR_ID_INTEL0x25e5quirk_intel_pcie_pm
ffffffff81400a68 r __pci_fixup_PCI_VENDOR_ID_INTEL0x25e4quirk_intel_pcie_pm
ffffffff81400a80 r __pci_fixup_PCI_VENDOR_ID_INTEL0x25e3quirk_intel_pcie_pm
ffffffff81400a98 r __pci_fixup_PCI_VENDOR_ID_INTEL0x25e2quirk_intel_pcie_pm
ffffffff81400ab0 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_E7525_MCHquirk_pcie_mch
ffffffff81400ac8 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_E7320_MCHquirk_pcie_mch
ffffffff81400ae0 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_E7520_MCHquirk_pcie_mch
ffffffff81400af8 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82454NXquirk_disable_pxb
ffffffff81400b10 r __pci_fixup_PCI_VENDOR_ID_CYRIXPCI_DEVICE_ID_CYRIX_PCI_MASTERquirk_mediagx_master
ffffffff81400b28 r __pci_fixup_PCI_VENDOR_ID_AMDPCI_DEVICE_ID_AMD_FE_GATE_700Cquirk_amd_ordering
ffffffff81400b40 r __pci_fixup_PCI_ANY_IDPCI_ANY_IDquirk_cardbus_legacy
ffffffff81400b58 r __pci_fixup_PCI_VENDOR_ID_AMDPCI_DEVICE_ID_AMD_8131_BRIDGEquirk_amd_8131_mmrbc
ffffffff81400b70 r __pci_fixup_PCI_VENDOR_ID_SIPCI_ANY_IDquirk_ioapic_rmw
ffffffff81400b88 r __pci_fixup_PCI_VENDOR_ID_AMDPCI_DEVICE_ID_AMD_VIPER_7410quirk_amd_ioapic
ffffffff81400ba0 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_8237quirk_via_vt8237_bypass_apic_deassert
ffffffff81400bb8 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_82C686quirk_via_ioapic
ffffffff81400bd0 r __pci_fixup_PCI_VENDOR_ID_TIPCI_DEVICE_ID_TI_XIO2000Aquirk_xio2000a
ffffffff81400be8 r __pci_fixup_PCI_VENDOR_ID_ATIPCI_DEVICE_ID_ATI_RS100quirk_ati_exploding_mce
ffffffff81400c00 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82443BX_2quirk_natoma
ffffffff81400c18 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82443BX_1quirk_natoma
ffffffff81400c30 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82443BX_0quirk_natoma
ffffffff81400c48 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82443LX_1quirk_natoma
ffffffff81400c60 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82443LX_0quirk_natoma
ffffffff81400c78 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82441quirk_natoma
ffffffff81400c90 r __pci_fixup_PCI_VENDOR_ID_ALPCI_DEVICE_ID_AL_M1651quirk_alimagik
ffffffff81400ca8 r __pci_fixup_PCI_VENDOR_ID_ALPCI_DEVICE_ID_AL_M1647quirk_alimagik
ffffffff81400cc0 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_82C576quirk_vsfx
ffffffff81400cd8 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_82C597_0quirk_viaetbf
ffffffff81400cf0 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_8361quirk_vialatency
ffffffff81400d08 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_8371_1quirk_vialatency
ffffffff81400d20 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_8363_0quirk_vialatency
ffffffff81400d38 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82439TXquirk_triton
ffffffff81400d50 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82439quirk_triton
ffffffff81400d68 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82437VXquirk_triton
ffffffff81400d80 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82437quirk_triton
ffffffff81400d98 r __pci_fixup_PCI_VENDOR_ID_AMDPCI_DEVICE_ID_AMD_8151_0quirk_nopciamd
ffffffff81400db0 r __pci_fixup_PCI_VENDOR_ID_SIPCI_DEVICE_ID_SI_496quirk_nopcipci
ffffffff81400dc8 r __pci_fixup_PCI_VENDOR_ID_SIPCI_DEVICE_ID_SI_5597quirk_nopcipci
ffffffff81400de0 r __pci_fixup_PCI_VENDOR_ID_NECPCI_DEVICE_ID_NEC_CBUS_3quirk_isa_dma_hangs
ffffffff81400df8 r __pci_fixup_PCI_VENDOR_ID_NECPCI_DEVICE_ID_NEC_CBUS_2quirk_isa_dma_hangs
ffffffff81400e10 r __pci_fixup_PCI_VENDOR_ID_NECPCI_DEVICE_ID_NEC_CBUS_1quirk_isa_dma_hangs
ffffffff81400e28 r __pci_fixup_PCI_VENDOR_ID_ALPCI_DEVICE_ID_AL_M1533quirk_isa_dma_hangs
ffffffff81400e40 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82371SB_0quirk_isa_dma_hangs
ffffffff81400e58 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_82C596quirk_isa_dma_hangs
ffffffff81400e70 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_82C586_0quirk_isa_dma_hangs
ffffffff81400e88 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82441quirk_passive_release
ffffffff81400ea0 r __pci_fixup_PCI_VENDOR_ID_MELLANOXPCI_DEVICE_ID_MELLANOX_TAVOR_BRIDGEquirk_mellanox_tavor
ffffffff81400eb8 r __pci_fixup_PCI_VENDOR_ID_MELLANOXPCI_DEVICE_ID_MELLANOX_TAVORquirk_mellanox_tavor
ffffffff81400ed0 r __pci_fixup_PCI_ANY_IDPCI_ANY_IDquirk_usb_early_handoff
ffffffff81400ee8 r __pci_fixup_PCI_ANY_IDPCI_ANY_IDpci_fixup_video
ffffffff81400f00 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_MCH_PC1pcie_rootport_aspm_quirk
ffffffff81400f18 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_MCH_PCpcie_rootport_aspm_quirk
ffffffff81400f30 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_MCH_PB1pcie_rootport_aspm_quirk
ffffffff81400f48 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_MCH_PBpcie_rootport_aspm_quirk
ffffffff81400f60 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_MCH_PA1pcie_rootport_aspm_quirk
ffffffff81400f78 r __pci_fixup_PCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_MCH_PApcie_rootport_aspm_quirk
ffffffff81400f90 R __end_pci_fixups_final
ffffffff81400f90 r __pci_fixup_PCI_VENDOR_ID_BROADCOMPCI_DEVICE_ID_TIGON3_5719quirk_brcm_5719_limit_mrrs
ffffffff81400f90 R __start_pci_fixups_enable
ffffffff81400fa8 r __pci_fixup_PCI_VENDOR_ID_VIAPCI_ANY_IDquirk_via_vlink
ffffffff81400fc0 r __pci_fixup_PCI_VENDOR_ID_TI0x8032pci_post_fixup_toshiba_ohci1394
ffffffff81400fd8 R __end_pci_fixups_enable
ffffffff81400fd8 r __pci_fixup_resumePCI_VENDOR_ID_AMDPCI_DEVICE_ID_AMD_8111_SMBUSquirk_disable_amd_8111_boot_interrupt
ffffffff81400fd8 R __start_pci_fixups_resume
ffffffff81400ff0 r __pci_fixup_resumePCI_VENDOR_ID_AMDPCI_DEVICE_ID_AMD_8132_BRIDGEquirk_disable_amd_813x_boot_interrupt
ffffffff81401008 r __pci_fixup_resumePCI_VENDOR_ID_AMDPCI_DEVICE_ID_AMD_8131_BRIDGEquirk_disable_amd_813x_boot_interrupt
ffffffff81401020 r __pci_fixup_resumePCI_VENDOR_ID_SERVERWORKSPCI_DEVICE_ID_SERVERWORKS_HT1000SBquirk_disable_broadcom_boot_interrupt
ffffffff81401038 r __pci_fixup_resumePCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_ESB_10quirk_disable_intel_boot_interrupt
ffffffff81401050 r __pci_fixup_resumePCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_80332_1quirk_reroute_to_boot_interrupts_intel
ffffffff81401068 r __pci_fixup_resumePCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_80332_0quirk_reroute_to_boot_interrupts_intel
ffffffff81401080 r __pci_fixup_resumePCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_PXHVquirk_reroute_to_boot_interrupts_intel
ffffffff81401098 r __pci_fixup_resumePCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_PXH_1quirk_reroute_to_boot_interrupts_intel
ffffffff814010b0 r __pci_fixup_resumePCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_PXH_0quirk_reroute_to_boot_interrupts_intel
ffffffff814010c8 r __pci_fixup_resumePCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_ESB2_0quirk_reroute_to_boot_interrupts_intel
ffffffff814010e0 r __pci_fixup_resumePCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_80333_1quirk_reroute_to_boot_interrupts_intel
ffffffff814010f8 r __pci_fixup_resumePCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_80333_0quirk_reroute_to_boot_interrupts_intel
ffffffff81401110 r __pci_fixup_resumePCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_ICH6_1asus_hides_smbus_lpc_ich6_resume
ffffffff81401128 r __pci_fixup_resumePCI_VENDOR_ID_CYRIXPCI_DEVICE_ID_CYRIX_PCI_MASTERquirk_mediagx_master
ffffffff81401140 r __pci_fixup_resumePCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_8361quirk_vialatency
ffffffff81401158 r __pci_fixup_resumePCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_8371_1quirk_vialatency
ffffffff81401170 r __pci_fixup_resumePCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_8363_0quirk_vialatency
ffffffff81401188 r __pci_fixup_resumePCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82441quirk_passive_release
ffffffff814011a0 r __pci_fixup_resumePCI_VENDOR_ID_CYRIXPCI_DEVICE_ID_CYRIX_5530_LEGACYpci_early_fixup_cyrix_5530
ffffffff814011b8 r __pci_fixup_resumePCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_8237pci_fixup_msi_k8t_onboard_sound
ffffffff814011d0 r __pci_fixup_resumePCI_VENDOR_ID_NVIDIAPCI_DEVICE_ID_NVIDIA_NFORCE2pci_fixup_nforce2
ffffffff814011e8 r __pci_fixup_resumePCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_8367_0pci_fixup_via_northbridge_bug
ffffffff81401200 r __pci_fixup_resumePCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_8361pci_fixup_via_northbridge_bug
ffffffff81401218 r __pci_fixup_resumePCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_8622pci_fixup_via_northbridge_bug
ffffffff81401230 r __pci_fixup_resumePCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_8363_0pci_fixup_via_northbridge_bug
ffffffff81401248 R __end_pci_fixups_resume
ffffffff81401248 r __pci_fixup_resume_earlyPCI_VENDOR_ID_ALPCI_ANY_IDnv_msi_ht_cap_quirk_all
ffffffff81401248 R __start_pci_fixups_resume_early
ffffffff81401260 r __pci_fixup_resume_earlyPCI_VENDOR_ID_NVIDIAPCI_ANY_IDnv_msi_ht_cap_quirk_leaf
ffffffff81401278 r __pci_fixup_resume_earlyPCI_VENDOR_ID_NVIDIAPCI_DEVICE_ID_NVIDIA_CK804_PCIEquirk_nvidia_ck804_pcie_aer_ext_cap
ffffffff81401290 r __pci_fixup_resume_earlyPCI_VENDOR_ID_JMICRONPCI_DEVICE_ID_JMICRON_JMB369quirk_jmicron_ata
ffffffff814012a8 r __pci_fixup_resume_earlyPCI_VENDOR_ID_JMICRONPCI_DEVICE_ID_JMICRON_JMB368quirk_jmicron_ata
ffffffff814012c0 r __pci_fixup_resume_earlyPCI_VENDOR_ID_JMICRONPCI_DEVICE_ID_JMICRON_JMB366quirk_jmicron_ata
ffffffff814012d8 r __pci_fixup_resume_earlyPCI_VENDOR_ID_JMICRONPCI_DEVICE_ID_JMICRON_JMB365quirk_jmicron_ata
ffffffff814012f0 r __pci_fixup_resume_earlyPCI_VENDOR_ID_JMICRONPCI_DEVICE_ID_JMICRON_JMB364quirk_jmicron_ata
ffffffff81401308 r __pci_fixup_resume_earlyPCI_VENDOR_ID_JMICRONPCI_DEVICE_ID_JMICRON_JMB363quirk_jmicron_ata
ffffffff81401320 r __pci_fixup_resume_earlyPCI_VENDOR_ID_JMICRONPCI_DEVICE_ID_JMICRON_JMB362quirk_jmicron_ata
ffffffff81401338 r __pci_fixup_resume_earlyPCI_VENDOR_ID_JMICRONPCI_DEVICE_ID_JMICRON_JMB361quirk_jmicron_ata
ffffffff81401350 r __pci_fixup_resume_earlyPCI_VENDOR_ID_JMICRONPCI_DEVICE_ID_JMICRON_JMB360quirk_jmicron_ata
ffffffff81401368 r __pci_fixup_resume_earlyPCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_8237asus_hides_ac97_lpc
ffffffff81401380 r __pci_fixup_resume_earlyPCI_VENDOR_ID_SIPCI_DEVICE_ID_SI_503quirk_sis_503
ffffffff81401398 r __pci_fixup_resume_earlyPCI_VENDOR_ID_SIPCI_DEVICE_ID_SI_LPCquirk_sis_96x_smbus
ffffffff814013b0 r __pci_fixup_resume_earlyPCI_VENDOR_ID_SIPCI_DEVICE_ID_SI_963quirk_sis_96x_smbus
ffffffff814013c8 r __pci_fixup_resume_earlyPCI_VENDOR_ID_SIPCI_DEVICE_ID_SI_962quirk_sis_96x_smbus
ffffffff814013e0 r __pci_fixup_resume_earlyPCI_VENDOR_ID_SIPCI_DEVICE_ID_SI_961quirk_sis_96x_smbus
ffffffff814013f8 r __pci_fixup_resume_earlyPCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_ICH6_1asus_hides_smbus_lpc_ich6_resume_early
ffffffff81401410 r __pci_fixup_resume_earlyPCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82801EB_0asus_hides_smbus_lpc
ffffffff81401428 r __pci_fixup_resume_earlyPCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82801DB_12asus_hides_smbus_lpc
ffffffff81401440 r __pci_fixup_resume_earlyPCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82801CA_12asus_hides_smbus_lpc
ffffffff81401458 r __pci_fixup_resume_earlyPCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82801CA_0asus_hides_smbus_lpc
ffffffff81401470 r __pci_fixup_resume_earlyPCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82801BA_0asus_hides_smbus_lpc
ffffffff81401488 r __pci_fixup_resume_earlyPCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82801DB_0asus_hides_smbus_lpc
ffffffff814014a0 r __pci_fixup_resume_earlyPCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82801AA_0asus_hides_smbus_lpc
ffffffff814014b8 r __pci_fixup_resume_earlyPCI_VENDOR_ID_AMDPCI_DEVICE_ID_AMD_HUDSON2_SATA_IDEquirk_amd_ide_mode
ffffffff814014d0 r __pci_fixup_resume_earlyPCI_VENDOR_ID_ATIPCI_DEVICE_ID_ATI_IXP700_SATAquirk_amd_ide_mode
ffffffff814014e8 r __pci_fixup_resume_earlyPCI_VENDOR_ID_ATIPCI_DEVICE_ID_ATI_IXP600_SATAquirk_amd_ide_mode
ffffffff81401500 r __pci_fixup_resume_earlyPCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_82454NXquirk_disable_pxb
ffffffff81401518 r __pci_fixup_resume_earlyPCI_VENDOR_ID_AMDPCI_DEVICE_ID_AMD_FE_GATE_700Cquirk_amd_ordering
ffffffff81401530 r __pci_fixup_resume_earlyPCI_ANY_IDPCI_ANY_IDquirk_cardbus_legacy
ffffffff81401548 r __pci_fixup_resume_earlyPCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_8237quirk_via_vt8237_bypass_apic_deassert
ffffffff81401560 r __pci_fixup_resume_earlyPCI_VENDOR_ID_VIAPCI_DEVICE_ID_VIA_82C686quirk_via_ioapic
ffffffff81401578 R __end_pci_fixups_resume_early
ffffffff81401578 r __pci_fixup_suspendPCI_VENDOR_ID_INTELPCI_DEVICE_ID_INTEL_ICH6_1asus_hides_smbus_lpc_ich6_suspend
ffffffff81401578 R __start_pci_fixups_suspend
ffffffff81401590 R __end_builtin_fw
ffffffff81401590 R __end_pci_fixups_suspend
ffffffff81401590 R __end_rio_switch_ops
ffffffff81401590 r __ksymtab_IO_APIC_get_PCI_irq_vector
ffffffff81401590 R __start___ksymtab
ffffffff81401590 R __start_builtin_fw
ffffffff81401590 R __start_rio_switch_ops
ffffffff814015a0 r __ksymtab_I_BDEV
ffffffff814015b0 r __ksymtab____pskb_trim
ffffffff814015c0 r __ksymtab____ratelimit
ffffffff814015d0 r __ksymtab___alloc_pages_nodemask
ffffffff814015e0 r __ksymtab___alloc_skb
ffffffff814015f0 r __ksymtab___alloc_tty_driver
ffffffff81401600 r __ksymtab___bdevname
ffffffff81401610 r __ksymtab___bforget
ffffffff81401620 r __ksymtab___bio_clone
ffffffff81401630 r __ksymtab___bitmap_and
ffffffff81401640 r __ksymtab___bitmap_andnot
ffffffff81401650 r __ksymtab___bitmap_complement
ffffffff81401660 r __ksymtab___bitmap_empty
ffffffff81401670 r __ksymtab___bitmap_equal
ffffffff81401680 r __ksymtab___bitmap_full
ffffffff81401690 r __ksymtab___bitmap_intersects
ffffffff814016a0 r __ksymtab___bitmap_or
ffffffff814016b0 r __ksymtab___bitmap_parse
ffffffff814016c0 r __ksymtab___bitmap_shift_left
ffffffff814016d0 r __ksymtab___bitmap_shift_right
ffffffff814016e0 r __ksymtab___bitmap_subset
ffffffff814016f0 r __ksymtab___bitmap_weight
ffffffff81401700 r __ksymtab___bitmap_xor
ffffffff81401710 r __ksymtab___blk_end_request
ffffffff81401720 r __ksymtab___blk_end_request_all
ffffffff81401730 r __ksymtab___blk_end_request_cur
ffffffff81401740 r __ksymtab___blk_iopoll_complete
ffffffff81401750 r __ksymtab___blk_run_queue
ffffffff81401760 r __ksymtab___block_page_mkwrite
ffffffff81401770 r __ksymtab___block_write_begin
ffffffff81401780 r __ksymtab___blockdev_direct_IO
ffffffff81401790 r __ksymtab___bread
ffffffff814017a0 r __ksymtab___breadahead
ffffffff814017b0 r __ksymtab___break_lease
ffffffff814017c0 r __ksymtab___brelse
ffffffff814017d0 r __ksymtab___cap_empty_set
ffffffff814017e0 r __ksymtab___check_region
ffffffff814017f0 r __ksymtab___cleancache_get_page
ffffffff81401800 r __ksymtab___cleancache_init_fs
ffffffff81401810 r __ksymtab___cleancache_init_shared_fs
ffffffff81401820 r __ksymtab___cleancache_invalidate_fs
ffffffff81401830 r __ksymtab___cleancache_invalidate_inode
ffffffff81401840 r __ksymtab___cleancache_invalidate_page
ffffffff81401850 r __ksymtab___cleancache_put_page
ffffffff81401860 r __ksymtab___clear_user
ffffffff81401870 r __ksymtab___cond_resched_lock
ffffffff81401880 r __ksymtab___cond_resched_softirq
ffffffff81401890 r __ksymtab___const_udelay
ffffffff814018a0 r __ksymtab___copy_user_nocache
ffffffff814018b0 r __ksymtab___crc32c_le
ffffffff814018c0 r __ksymtab___d_drop
ffffffff814018d0 r __ksymtab___dec_zone_page_state
ffffffff814018e0 r __ksymtab___delay
ffffffff814018f0 r __ksymtab___destroy_inode
ffffffff81401900 r __ksymtab___dev_get_by_index
ffffffff81401910 r __ksymtab___dev_get_by_name
ffffffff81401920 r __ksymtab___dev_getfirstbyhwtype
ffffffff81401930 r __ksymtab___dev_printk
ffffffff81401940 r __ksymtab___dev_remove_pack
ffffffff81401950 r __ksymtab___devm_release_region
ffffffff81401960 r __ksymtab___devm_request_region
ffffffff81401970 r __ksymtab___dst_destroy_metrics_generic
ffffffff81401980 r __ksymtab___dst_free
ffffffff81401990 r __ksymtab___elv_add_request
ffffffff814019a0 r __ksymtab___ethtool_get_settings
ffffffff814019b0 r __ksymtab___f_setown
ffffffff814019c0 r __ksymtab___find_get_block
ffffffff814019d0 r __ksymtab___first_cpu
ffffffff814019e0 r __ksymtab___free_pages
ffffffff814019f0 r __ksymtab___generic_block_fiemap
ffffffff81401a00 r __ksymtab___generic_file_aio_write
ffffffff81401a10 r __ksymtab___get_free_pages
ffffffff81401a20 r __ksymtab___get_page_tail
ffffffff81401a30 r __ksymtab___get_user_1
ffffffff81401a40 r __ksymtab___get_user_2
ffffffff81401a50 r __ksymtab___get_user_4
ffffffff81401a60 r __ksymtab___get_user_8
ffffffff81401a70 r __ksymtab___get_user_pages
ffffffff81401a80 r __ksymtab___getblk
ffffffff81401a90 r __ksymtab___ht_create_irq
ffffffff81401aa0 r __ksymtab___hw_addr_add_multiple
ffffffff81401ab0 r __ksymtab___hw_addr_del_multiple
ffffffff81401ac0 r __ksymtab___hw_addr_flush
ffffffff81401ad0 r __ksymtab___hw_addr_init
ffffffff81401ae0 r __ksymtab___hw_addr_sync
ffffffff81401af0 r __ksymtab___hw_addr_unsync
ffffffff81401b00 r __ksymtab___inc_zone_page_state
ffffffff81401b10 r __ksymtab___init_rwsem
ffffffff81401b20 r __ksymtab___init_waitqueue_head
ffffffff81401b30 r __ksymtab___insert_inode_hash
ffffffff81401b40 r __ksymtab___invalidate_device
ffffffff81401b50 r __ksymtab___ip_dev_find
ffffffff81401b60 r __ksymtab___ip_select_ident
ffffffff81401b70 r __ksymtab___ipv6_addr_type
ffffffff81401b80 r __ksymtab___kernel_param_lock
ffffffff81401b90 r __ksymtab___kernel_param_unlock
ffffffff81401ba0 r __ksymtab___kfifo_alloc
ffffffff81401bb0 r __ksymtab___kfifo_dma_in_finish_r
ffffffff81401bc0 r __ksymtab___kfifo_dma_in_prepare
ffffffff81401bd0 r __ksymtab___kfifo_dma_in_prepare_r
ffffffff81401be0 r __ksymtab___kfifo_dma_out_finish_r
ffffffff81401bf0 r __ksymtab___kfifo_dma_out_prepare
ffffffff81401c00 r __ksymtab___kfifo_dma_out_prepare_r
ffffffff81401c10 r __ksymtab___kfifo_free
ffffffff81401c20 r __ksymtab___kfifo_from_user
ffffffff81401c30 r __ksymtab___kfifo_from_user_r
ffffffff81401c40 r __ksymtab___kfifo_in
ffffffff81401c50 r __ksymtab___kfifo_in_r
ffffffff81401c60 r __ksymtab___kfifo_init
ffffffff81401c70 r __ksymtab___kfifo_len_r
ffffffff81401c80 r __ksymtab___kfifo_out
ffffffff81401c90 r __ksymtab___kfifo_out_peek
ffffffff81401ca0 r __ksymtab___kfifo_out_peek_r
ffffffff81401cb0 r __ksymtab___kfifo_out_r
ffffffff81401cc0 r __ksymtab___kfifo_skip_r
ffffffff81401cd0 r __ksymtab___kfifo_to_user
ffffffff81401ce0 r __ksymtab___kfifo_to_user_r
ffffffff81401cf0 r __ksymtab___kfree_skb
ffffffff81401d00 r __ksymtab___kmalloc
ffffffff81401d10 r __ksymtab___kmalloc_node
ffffffff81401d20 r __ksymtab___krealloc
ffffffff81401d30 r __ksymtab___lock_buffer
ffffffff81401d40 r __ksymtab___lock_page
ffffffff81401d50 r __ksymtab___locks_copy_lock
ffffffff81401d60 r __ksymtab___lru_cache_add
ffffffff81401d70 r __ksymtab___mark_inode_dirty
ffffffff81401d80 r __ksymtab___memcpy
ffffffff81401d90 r __ksymtab___mod_zone_page_state
ffffffff81401da0 r __ksymtab___module_get
ffffffff81401db0 r __ksymtab___module_put_and_exit
ffffffff81401dc0 r __ksymtab___mutex_init
ffffffff81401dd0 r __ksymtab___napi_complete
ffffffff81401de0 r __ksymtab___napi_schedule
ffffffff81401df0 r __ksymtab___ndelay
ffffffff81401e00 r __ksymtab___neigh_event_send
ffffffff81401e10 r __ksymtab___neigh_for_each_release
ffffffff81401e20 r __ksymtab___netdev_alloc_skb
ffffffff81401e30 r __ksymtab___netdev_printk
ffffffff81401e40 r __ksymtab___netif_schedule
ffffffff81401e50 r __ksymtab___next_cpu
ffffffff81401e60 r __ksymtab___nla_put
ffffffff81401e70 r __ksymtab___nla_put_nohdr
ffffffff81401e80 r __ksymtab___nla_reserve
ffffffff81401e90 r __ksymtab___nla_reserve_nohdr
ffffffff81401ea0 r __ksymtab___nlmsg_put
ffffffff81401eb0 r __ksymtab___node_distance
ffffffff81401ec0 r __ksymtab___nvram_check_checksum
ffffffff81401ed0 r __ksymtab___nvram_read_byte
ffffffff81401ee0 r __ksymtab___nvram_write_byte
ffffffff81401ef0 r __ksymtab___page_cache_alloc
ffffffff81401f00 r __ksymtab___page_symlink
ffffffff81401f10 r __ksymtab___pagevec_lru_add
ffffffff81401f20 r __ksymtab___pagevec_release
ffffffff81401f30 r __ksymtab___pci_enable_wake
ffffffff81401f40 r __ksymtab___pci_register_driver
ffffffff81401f50 r __ksymtab___pci_remove_bus_device
ffffffff81401f60 r __ksymtab___per_cpu_offset
ffffffff81401f70 r __ksymtab___percpu_counter_add
ffffffff81401f80 r __ksymtab___percpu_counter_init
ffffffff81401f90 r __ksymtab___percpu_counter_sum
ffffffff81401fa0 r __ksymtab___phys_addr
ffffffff81401fb0 r __ksymtab___print_symbol
ffffffff81401fc0 r __ksymtab___printk_ratelimit
ffffffff81401fd0 r __ksymtab___ps2_command
ffffffff81401fe0 r __ksymtab___pskb_copy
ffffffff81401ff0 r __ksymtab___pskb_pull_tail
ffffffff81402000 r __ksymtab___put_cred
ffffffff81402010 r __ksymtab___put_user_1
ffffffff81402020 r __ksymtab___put_user_2
ffffffff81402030 r __ksymtab___put_user_4
ffffffff81402040 r __ksymtab___put_user_8
ffffffff81402050 r __ksymtab___refrigerator
ffffffff81402060 r __ksymtab___register_binfmt
ffffffff81402070 r __ksymtab___register_chrdev
ffffffff81402080 r __ksymtab___release_region
ffffffff81402090 r __ksymtab___remove_inode_hash
ffffffff814020a0 r __ksymtab___request_module
ffffffff814020b0 r __ksymtab___request_region
ffffffff814020c0 r __ksymtab___rta_fill
ffffffff814020d0 r __ksymtab___scm_destroy
ffffffff814020e0 r __ksymtab___scm_send
ffffffff814020f0 r __ksymtab___scsi_add_device
ffffffff81402100 r __ksymtab___scsi_alloc_queue
ffffffff81402110 r __ksymtab___scsi_device_lookup
ffffffff81402120 r __ksymtab___scsi_device_lookup_by_target
ffffffff81402130 r __ksymtab___scsi_iterate_devices
ffffffff81402140 r __ksymtab___scsi_print_command
ffffffff81402150 r __ksymtab___scsi_print_sense
ffffffff81402160 r __ksymtab___scsi_put_command
ffffffff81402170 r __ksymtab___secpath_destroy
ffffffff81402180 r __ksymtab___send_remote_softirq
ffffffff81402190 r __ksymtab___seq_open_private
ffffffff814021a0 r __ksymtab___serio_register_driver
ffffffff814021b0 r __ksymtab___serio_register_port
ffffffff814021c0 r __ksymtab___set_page_dirty_buffers
ffffffff814021d0 r __ksymtab___set_page_dirty_nobuffers
ffffffff814021e0 r __ksymtab___set_personality
ffffffff814021f0 r __ksymtab___sg_alloc_table
ffffffff81402200 r __ksymtab___sg_free_table
ffffffff81402210 r __ksymtab___sk_dst_check
ffffffff81402220 r __ksymtab___sk_mem_reclaim
ffffffff81402230 r __ksymtab___sk_mem_schedule
ffffffff81402240 r __ksymtab___skb_checksum_complete
ffffffff81402250 r __ksymtab___skb_checksum_complete_head
ffffffff81402260 r __ksymtab___skb_get_rxhash
ffffffff81402270 r __ksymtab___skb_recv_datagram
ffffffff81402280 r __ksymtab___skb_tx_hash
ffffffff81402290 r __ksymtab___skb_warn_lro_forwarding
ffffffff814022a0 r __ksymtab___sock_create
ffffffff814022b0 r __ksymtab___splice_from_pipe
ffffffff814022c0 r __ksymtab___starget_for_each_device
ffffffff814022d0 r __ksymtab___strnlen_user
ffffffff814022e0 r __ksymtab___sw_hweight16
ffffffff814022f0 r __ksymtab___sw_hweight32
ffffffff81402300 r __ksymtab___sw_hweight64
ffffffff81402310 r __ksymtab___sw_hweight8
ffffffff81402320 r __ksymtab___symbol_put
ffffffff81402330 r __ksymtab___sync_dirty_buffer
ffffffff81402340 r __ksymtab___task_pid_nr_ns
ffffffff81402350 r __ksymtab___tasklet_hi_schedule
ffffffff81402360 r __ksymtab___tasklet_hi_schedule_first
ffffffff81402370 r __ksymtab___tasklet_schedule
ffffffff81402380 r __ksymtab___udelay
ffffffff81402390 r __ksymtab___unregister_chrdev
ffffffff814023a0 r __ksymtab___virt_addr_valid
ffffffff814023b0 r __ksymtab___vmalloc
ffffffff814023c0 r __ksymtab___wait_on_bit
ffffffff814023d0 r __ksymtab___wait_on_bit_lock
ffffffff814023e0 r __ksymtab___wait_on_buffer
ffffffff814023f0 r __ksymtab___wake_up
ffffffff81402400 r __ksymtab___wake_up_bit
ffffffff81402410 r __ksymtab___xfrm_decode_session
ffffffff81402420 r __ksymtab___xfrm_init_state
ffffffff81402430 r __ksymtab___xfrm_policy_check
ffffffff81402440 r __ksymtab___xfrm_route_forward
ffffffff81402450 r __ksymtab___xfrm_state_delete
ffffffff81402460 r __ksymtab___xfrm_state_destroy
ffffffff81402470 r __ksymtab__atomic_dec_and_lock
ffffffff81402480 r __ksymtab__cond_resched
ffffffff81402490 r __ksymtab__copy_from_user
ffffffff814024a0 r __ksymtab__copy_to_user
ffffffff814024b0 r __ksymtab__ctype
ffffffff814024c0 r __ksymtab__dev_info
ffffffff814024d0 r __ksymtab__kstrtol
ffffffff814024e0 r __ksymtab__kstrtoul
ffffffff814024f0 r __ksymtab__local_bh_enable
ffffffff81402500 r __ksymtab__raw_read_lock
ffffffff81402510 r __ksymtab__raw_read_lock_bh
ffffffff81402520 r __ksymtab__raw_read_lock_irq
ffffffff81402530 r __ksymtab__raw_read_lock_irqsave
ffffffff81402540 r __ksymtab__raw_read_trylock
ffffffff81402550 r __ksymtab__raw_read_unlock_bh
ffffffff81402560 r __ksymtab__raw_read_unlock_irqrestore
ffffffff81402570 r __ksymtab__raw_spin_lock
ffffffff81402580 r __ksymtab__raw_spin_lock_bh
ffffffff81402590 r __ksymtab__raw_spin_lock_irq
ffffffff814025a0 r __ksymtab__raw_spin_lock_irqsave
ffffffff814025b0 r __ksymtab__raw_spin_trylock
ffffffff814025c0 r __ksymtab__raw_spin_trylock_bh
ffffffff814025d0 r __ksymtab__raw_spin_unlock_bh
ffffffff814025e0 r __ksymtab__raw_spin_unlock_irqrestore
ffffffff814025f0 r __ksymtab__raw_write_lock
ffffffff81402600 r __ksymtab__raw_write_lock_bh
ffffffff81402610 r __ksymtab__raw_write_lock_irq
ffffffff81402620 r __ksymtab__raw_write_lock_irqsave
ffffffff81402630 r __ksymtab__raw_write_trylock
ffffffff81402640 r __ksymtab__raw_write_unlock_bh
ffffffff81402650 r __ksymtab__raw_write_unlock_irqrestore
ffffffff81402660 r __ksymtab_abort_creds
ffffffff81402670 r __ksymtab_abort_exclusive_wait
ffffffff81402680 r __ksymtab_account_page_dirtied
ffffffff81402690 r __ksymtab_account_page_redirty
ffffffff814026a0 r __ksymtab_account_page_writeback
ffffffff814026b0 r __ksymtab_acpi_acquire_global_lock
ffffffff814026c0 r __ksymtab_acpi_attach_data
ffffffff814026d0 r __ksymtab_acpi_bus_add
ffffffff814026e0 r __ksymtab_acpi_bus_can_wakeup
ffffffff814026f0 r __ksymtab_acpi_bus_generate_netlink_event
ffffffff81402700 r __ksymtab_acpi_bus_generate_proc_event
ffffffff81402710 r __ksymtab_acpi_bus_get_device
ffffffff81402720 r __ksymtab_acpi_bus_get_private_data
ffffffff81402730 r __ksymtab_acpi_bus_get_status
ffffffff81402740 r __ksymtab_acpi_bus_power_manageable
ffffffff81402750 r __ksymtab_acpi_bus_private_data_handler
ffffffff81402760 r __ksymtab_acpi_bus_register_driver
ffffffff81402770 r __ksymtab_acpi_bus_set_power
ffffffff81402780 r __ksymtab_acpi_bus_start
ffffffff81402790 r __ksymtab_acpi_bus_unregister_driver
ffffffff814027a0 r __ksymtab_acpi_check_address_range
ffffffff814027b0 r __ksymtab_acpi_check_region
ffffffff814027c0 r __ksymtab_acpi_check_resource_conflict
ffffffff814027d0 r __ksymtab_acpi_clear_event
ffffffff814027e0 r __ksymtab_acpi_clear_gpe
ffffffff814027f0 r __ksymtab_acpi_current_gpe_count
ffffffff81402800 r __ksymtab_acpi_dbg_layer
ffffffff81402810 r __ksymtab_acpi_dbg_level
ffffffff81402820 r __ksymtab_acpi_detach_data
ffffffff81402830 r __ksymtab_acpi_device_hid
ffffffff81402840 r __ksymtab_acpi_disable
ffffffff81402850 r __ksymtab_acpi_disable_all_gpes
ffffffff81402860 r __ksymtab_acpi_disable_event
ffffffff81402870 r __ksymtab_acpi_disable_gpe
ffffffff81402880 r __ksymtab_acpi_disabled
ffffffff81402890 r __ksymtab_acpi_enable
ffffffff814028a0 r __ksymtab_acpi_enable_all_runtime_gpes
ffffffff814028b0 r __ksymtab_acpi_enable_event
ffffffff814028c0 r __ksymtab_acpi_enable_gpe
ffffffff814028d0 r __ksymtab_acpi_enable_subsystem
ffffffff814028e0 r __ksymtab_acpi_enter_sleep_state
ffffffff814028f0 r __ksymtab_acpi_enter_sleep_state_prep
ffffffff81402900 r __ksymtab_acpi_enter_sleep_state_s4bios
ffffffff81402910 r __ksymtab_acpi_error
ffffffff81402920 r __ksymtab_acpi_evaluate_integer
ffffffff81402930 r __ksymtab_acpi_evaluate_object
ffffffff81402940 r __ksymtab_acpi_evaluate_object_typed
ffffffff81402950 r __ksymtab_acpi_evaluate_reference
ffffffff81402960 r __ksymtab_acpi_exception
ffffffff81402970 r __ksymtab_acpi_extract_package
ffffffff81402980 r __ksymtab_acpi_format_exception
ffffffff81402990 r __ksymtab_acpi_gbl_FADT
ffffffff814029a0 r __ksymtab_acpi_get_child
ffffffff814029b0 r __ksymtab_acpi_get_current_resources
ffffffff814029c0 r __ksymtab_acpi_get_data
ffffffff814029d0 r __ksymtab_acpi_get_devices
ffffffff814029e0 r __ksymtab_acpi_get_event_resources
ffffffff814029f0 r __ksymtab_acpi_get_event_status
ffffffff81402a00 r __ksymtab_acpi_get_gpe_device
ffffffff81402a10 r __ksymtab_acpi_get_gpe_status
ffffffff81402a20 r __ksymtab_acpi_get_handle
ffffffff81402a30 r __ksymtab_acpi_get_id
ffffffff81402a40 r __ksymtab_acpi_get_irq_routing_table
ffffffff81402a50 r __ksymtab_acpi_get_name
ffffffff81402a60 r __ksymtab_acpi_get_next_object
ffffffff81402a70 r __ksymtab_acpi_get_node
ffffffff81402a80 r __ksymtab_acpi_get_object_info
ffffffff81402a90 r __ksymtab_acpi_get_parent
ffffffff81402aa0 r __ksymtab_acpi_get_physical_device
ffffffff81402ab0 r __ksymtab_acpi_get_sleep_type_data
ffffffff81402ac0 r __ksymtab_acpi_get_table
ffffffff81402ad0 r __ksymtab_acpi_get_table_by_index
ffffffff81402ae0 r __ksymtab_acpi_get_table_header
ffffffff81402af0 r __ksymtab_acpi_get_type
ffffffff81402b00 r __ksymtab_acpi_get_vendor_resource
ffffffff81402b10 r __ksymtab_acpi_info
ffffffff81402b20 r __ksymtab_acpi_initialize_objects
ffffffff81402b30 r __ksymtab_acpi_install_address_space_handler
ffffffff81402b40 r __ksymtab_acpi_install_fixed_event_handler
ffffffff81402b50 r __ksymtab_acpi_install_global_event_handler
ffffffff81402b60 r __ksymtab_acpi_install_gpe_block
ffffffff81402b70 r __ksymtab_acpi_install_gpe_handler
ffffffff81402b80 r __ksymtab_acpi_install_interface
ffffffff81402b90 r __ksymtab_acpi_install_interface_handler
ffffffff81402ba0 r __ksymtab_acpi_install_method
ffffffff81402bb0 r __ksymtab_acpi_install_notify_handler
ffffffff81402bc0 r __ksymtab_acpi_install_table_handler
ffffffff81402bd0 r __ksymtab_acpi_leave_sleep_state
ffffffff81402be0 r __ksymtab_acpi_leave_sleep_state_prep
ffffffff81402bf0 r __ksymtab_acpi_load_table
ffffffff81402c00 r __ksymtab_acpi_load_tables
ffffffff81402c10 r __ksymtab_acpi_lock_ac_dir
ffffffff81402c20 r __ksymtab_acpi_lock_battery_dir
ffffffff81402c30 r __ksymtab_acpi_map_lsapic
ffffffff81402c40 r __ksymtab_acpi_match_device_ids
ffffffff81402c50 r __ksymtab_acpi_notifier_call_chain
ffffffff81402c60 r __ksymtab_acpi_os_execute
ffffffff81402c70 r __ksymtab_acpi_os_map_generic_address
ffffffff81402c80 r __ksymtab_acpi_os_read_port
ffffffff81402c90 r __ksymtab_acpi_os_unmap_generic_address
ffffffff81402ca0 r __ksymtab_acpi_os_wait_events_complete
ffffffff81402cb0 r __ksymtab_acpi_os_write_port
ffffffff81402cc0 r __ksymtab_acpi_pci_disabled
ffffffff81402cd0 r __ksymtab_acpi_pci_osc_control_set
ffffffff81402ce0 r __ksymtab_acpi_pci_register_driver
ffffffff81402cf0 r __ksymtab_acpi_pci_unregister_driver
ffffffff81402d00 r __ksymtab_acpi_processor_power_init_bm_check
ffffffff81402d10 r __ksymtab_acpi_purge_cached_objects
ffffffff81402d20 r __ksymtab_acpi_read
ffffffff81402d30 r __ksymtab_acpi_read_bit_register
ffffffff81402d40 r __ksymtab_acpi_register_ioapic
ffffffff81402d50 r __ksymtab_acpi_release_global_lock
ffffffff81402d60 r __ksymtab_acpi_remove_address_space_handler
ffffffff81402d70 r __ksymtab_acpi_remove_fixed_event_handler
ffffffff81402d80 r __ksymtab_acpi_remove_gpe_block
ffffffff81402d90 r __ksymtab_acpi_remove_gpe_handler
ffffffff81402da0 r __ksymtab_acpi_remove_interface
ffffffff81402db0 r __ksymtab_acpi_remove_notify_handler
ffffffff81402dc0 r __ksymtab_acpi_remove_table_handler
ffffffff81402dd0 r __ksymtab_acpi_reset
ffffffff81402de0 r __ksymtab_acpi_resource_to_address64
ffffffff81402df0 r __ksymtab_acpi_resources_are_enforced
ffffffff81402e00 r __ksymtab_acpi_root_dir
ffffffff81402e10 r __ksymtab_acpi_run_osc
ffffffff81402e20 r __ksymtab_acpi_set_current_resources
ffffffff81402e30 r __ksymtab_acpi_set_firmware_waking_vector
ffffffff81402e40 r __ksymtab_acpi_set_firmware_waking_vector64
ffffffff81402e50 r __ksymtab_acpi_set_gpe_wake_mask
ffffffff81402e60 r __ksymtab_acpi_setup_gpe_for_wake
ffffffff81402e70 r __ksymtab_acpi_terminate
ffffffff81402e80 r __ksymtab_acpi_unload_table_id
ffffffff81402e90 r __ksymtab_acpi_unlock_ac_dir
ffffffff81402ea0 r __ksymtab_acpi_unlock_battery_dir
ffffffff81402eb0 r __ksymtab_acpi_unmap_lsapic
ffffffff81402ec0 r __ksymtab_acpi_unregister_ioapic
ffffffff81402ed0 r __ksymtab_acpi_update_all_gpes
ffffffff81402ee0 r __ksymtab_acpi_walk_namespace
ffffffff81402ef0 r __ksymtab_acpi_walk_resources
ffffffff81402f00 r __ksymtab_acpi_warning
ffffffff81402f10 r __ksymtab_acpi_write
ffffffff81402f20 r __ksymtab_acpi_write_bit_register
ffffffff81402f30 r __ksymtab_add_disk
ffffffff81402f40 r __ksymtab_add_taint
ffffffff81402f50 r __ksymtab_add_timer
ffffffff81402f60 r __ksymtab_add_to_page_cache_locked
ffffffff81402f70 r __ksymtab_add_wait_queue
ffffffff81402f80 r __ksymtab_add_wait_queue_exclusive
ffffffff81402f90 r __ksymtab_address_space_init_once
ffffffff81402fa0 r __ksymtab_adjust_resource
ffffffff81402fb0 r __ksymtab_aio_complete
ffffffff81402fc0 r __ksymtab_aio_put_req
ffffffff81402fd0 r __ksymtab_alloc_buffer_head
ffffffff81402fe0 r __ksymtab_alloc_chrdev_region
ffffffff81402ff0 r __ksymtab_alloc_cpu_rmap
ffffffff81403000 r __ksymtab_alloc_disk
ffffffff81403010 r __ksymtab_alloc_disk_node
ffffffff81403020 r __ksymtab_alloc_etherdev_mqs
ffffffff81403030 r __ksymtab_alloc_file
ffffffff81403040 r __ksymtab_alloc_netdev_mqs
ffffffff81403050 r __ksymtab_alloc_pages_current
ffffffff81403060 r __ksymtab_alloc_pages_exact
ffffffff81403070 r __ksymtab_alloc_pages_exact_nid
ffffffff81403080 r __ksymtab_alloc_pci_dev
ffffffff81403090 r __ksymtab_alloc_xenballooned_pages
ffffffff814030a0 r __ksymtab_allocate_resource
ffffffff814030b0 r __ksymtab_allow_signal
ffffffff814030c0 r __ksymtab_amd_e400_c1e_detected
ffffffff814030d0 r __ksymtab_amd_iommu_complete_ppr
ffffffff814030e0 r __ksymtab_amd_iommu_device_info
ffffffff814030f0 r __ksymtab_amd_iommu_domain_clear_gcr3
ffffffff81403100 r __ksymtab_amd_iommu_domain_direct_map
ffffffff81403110 r __ksymtab_amd_iommu_domain_enable_v2
ffffffff81403120 r __ksymtab_amd_iommu_domain_set_gcr3
ffffffff81403130 r __ksymtab_amd_iommu_enable_device_erratum
ffffffff81403140 r __ksymtab_amd_iommu_flush_page
ffffffff81403150 r __ksymtab_amd_iommu_flush_tlb
ffffffff81403160 r __ksymtab_amd_iommu_get_v2_domain
ffffffff81403170 r __ksymtab_amd_iommu_register_ppr_notifier
ffffffff81403180 r __ksymtab_amd_iommu_unregister_ppr_notifier
ffffffff81403190 r __ksymtab_amd_iommu_v2_supported
ffffffff814031a0 r __ksymtab_amd_nb_misc_ids
ffffffff814031b0 r __ksymtab_amd_northbridges
ffffffff814031c0 r __ksymtab_arch_debugfs_dir
ffffffff814031d0 r __ksymtab_arch_register_cpu
ffffffff814031e0 r __ksymtab_arch_unregister_cpu
ffffffff814031f0 r __ksymtab_argv_free
ffffffff81403200 r __ksymtab_argv_split
ffffffff81403210 r __ksymtab_arp_create
ffffffff81403220 r __ksymtab_arp_find
ffffffff81403230 r __ksymtab_arp_invalidate
ffffffff81403240 r __ksymtab_arp_send
ffffffff81403250 r __ksymtab_arp_tbl
ffffffff81403260 r __ksymtab_arp_xmit
ffffffff81403270 r __ksymtab_ata_dev_printk
ffffffff81403280 r __ksymtab_ata_link_printk
ffffffff81403290 r __ksymtab_ata_port_printk
ffffffff814032a0 r __ksymtab_ata_print_version
ffffffff814032b0 r __ksymtab_ata_scsi_cmd_error_handler
ffffffff814032c0 r __ksymtab_atomic_dec_and_mutex_lock
ffffffff814032d0 r __ksymtab_audit_log
ffffffff814032e0 r __ksymtab_audit_log_end
ffffffff814032f0 r __ksymtab_audit_log_format
ffffffff81403300 r __ksymtab_audit_log_start
ffffffff81403310 r __ksymtab_audit_log_task_context
ffffffff81403320 r __ksymtab_autoremove_wake_function
ffffffff81403330 r __ksymtab_avail_to_resrv_perfctr_nmi_bit
ffffffff81403340 r __ksymtab_avenrun
ffffffff81403350 r __ksymtab_balance_dirty_pages_ratelimited_nr
ffffffff81403360 r __ksymtab_bcd2bin
ffffffff81403370 r __ksymtab_bd_set_size
ffffffff81403380 r __ksymtab_bdev_read_only
ffffffff81403390 r __ksymtab_bdev_stack_limits
ffffffff814033a0 r __ksymtab_bdevname
ffffffff814033b0 r __ksymtab_bdget
ffffffff814033c0 r __ksymtab_bdget_disk
ffffffff814033d0 r __ksymtab_bdi_destroy
ffffffff814033e0 r __ksymtab_bdi_init
ffffffff814033f0 r __ksymtab_bdi_register
ffffffff81403400 r __ksymtab_bdi_register_dev
ffffffff81403410 r __ksymtab_bdi_set_max_ratio
ffffffff81403420 r __ksymtab_bdi_setup_and_register
ffffffff81403430 r __ksymtab_bdi_unregister
ffffffff81403440 r __ksymtab_bdput
ffffffff81403450 r __ksymtab_bh_submit_read
ffffffff81403460 r __ksymtab_bh_uptodate_or_lock
ffffffff81403470 r __ksymtab_bin2bcd
ffffffff81403480 r __ksymtab_bio_add_page
ffffffff81403490 r __ksymtab_bio_add_pc_page
ffffffff814034a0 r __ksymtab_bio_alloc
ffffffff814034b0 r __ksymtab_bio_alloc_bioset
ffffffff814034c0 r __ksymtab_bio_clone
ffffffff814034d0 r __ksymtab_bio_copy_kern
ffffffff814034e0 r __ksymtab_bio_copy_user
ffffffff814034f0 r __ksymtab_bio_endio
ffffffff81403500 r __ksymtab_bio_free
ffffffff81403510 r __ksymtab_bio_get_nr_vecs
ffffffff81403520 r __ksymtab_bio_init
ffffffff81403530 r __ksymtab_bio_kmalloc
ffffffff81403540 r __ksymtab_bio_map_kern
ffffffff81403550 r __ksymtab_bio_map_user
ffffffff81403560 r __ksymtab_bio_pair_release
ffffffff81403570 r __ksymtab_bio_phys_segments
ffffffff81403580 r __ksymtab_bio_put
ffffffff81403590 r __ksymtab_bio_sector_offset
ffffffff814035a0 r __ksymtab_bio_split
ffffffff814035b0 r __ksymtab_bio_uncopy_user
ffffffff814035c0 r __ksymtab_bio_unmap_user
ffffffff814035d0 r __ksymtab_bioset_create
ffffffff814035e0 r __ksymtab_bioset_free
ffffffff814035f0 r __ksymtab_bit_waitqueue
ffffffff81403600 r __ksymtab_bitmap_allocate_region
ffffffff81403610 r __ksymtab_bitmap_bitremap
ffffffff81403620 r __ksymtab_bitmap_clear
ffffffff81403630 r __ksymtab_bitmap_copy_le
ffffffff81403640 r __ksymtab_bitmap_find_free_region
ffffffff81403650 r __ksymtab_bitmap_find_next_zero_area
ffffffff81403660 r __ksymtab_bitmap_fold
ffffffff81403670 r __ksymtab_bitmap_onto
ffffffff81403680 r __ksymtab_bitmap_parse_user
ffffffff81403690 r __ksymtab_bitmap_parselist
ffffffff814036a0 r __ksymtab_bitmap_parselist_user
ffffffff814036b0 r __ksymtab_bitmap_release_region
ffffffff814036c0 r __ksymtab_bitmap_remap
ffffffff814036d0 r __ksymtab_bitmap_scnlistprintf
ffffffff814036e0 r __ksymtab_bitmap_scnprintf
ffffffff814036f0 r __ksymtab_bitmap_set
ffffffff81403700 r __ksymtab_bitrev16
ffffffff81403710 r __ksymtab_bitrev32
ffffffff81403720 r __ksymtab_blk_alloc_queue
ffffffff81403730 r __ksymtab_blk_alloc_queue_node
ffffffff81403740 r __ksymtab_blk_cleanup_queue
ffffffff81403750 r __ksymtab_blk_complete_request
ffffffff81403760 r __ksymtab_blk_delay_queue
ffffffff81403770 r __ksymtab_blk_dump_rq_flags
ffffffff81403780 r __ksymtab_blk_end_request
ffffffff81403790 r __ksymtab_blk_end_request_all
ffffffff814037a0 r __ksymtab_blk_end_request_cur
ffffffff814037b0 r __ksymtab_blk_execute_rq
ffffffff814037c0 r __ksymtab_blk_fetch_request
ffffffff814037d0 r __ksymtab_blk_finish_plug
ffffffff814037e0 r __ksymtab_blk_free_tags
ffffffff814037f0 r __ksymtab_blk_get_backing_dev_info
ffffffff81403800 r __ksymtab_blk_get_queue
ffffffff81403810 r __ksymtab_blk_get_request
ffffffff81403820 r __ksymtab_blk_init_allocated_queue
ffffffff81403830 r __ksymtab_blk_init_queue
ffffffff81403840 r __ksymtab_blk_init_queue_node
ffffffff81403850 r __ksymtab_blk_init_tags
ffffffff81403860 r __ksymtab_blk_iopoll_complete
ffffffff81403870 r __ksymtab_blk_iopoll_disable
ffffffff81403880 r __ksymtab_blk_iopoll_enable
ffffffff81403890 r __ksymtab_blk_iopoll_enabled
ffffffff814038a0 r __ksymtab_blk_iopoll_init
ffffffff814038b0 r __ksymtab_blk_iopoll_sched
ffffffff814038c0 r __ksymtab_blk_limits_io_min
ffffffff814038d0 r __ksymtab_blk_limits_io_opt
ffffffff814038e0 r __ksymtab_blk_limits_max_hw_sectors
ffffffff814038f0 r __ksymtab_blk_lookup_devt
ffffffff81403900 r __ksymtab_blk_make_request
ffffffff81403910 r __ksymtab_blk_max_low_pfn
ffffffff81403920 r __ksymtab_blk_peek_request
ffffffff81403930 r __ksymtab_blk_put_queue
ffffffff81403940 r __ksymtab_blk_put_request
ffffffff81403950 r __ksymtab_blk_queue_alignment_offset
ffffffff81403960 r __ksymtab_blk_queue_bounce
ffffffff81403970 r __ksymtab_blk_queue_bounce_limit
ffffffff81403980 r __ksymtab_blk_queue_dma_alignment
ffffffff81403990 r __ksymtab_blk_queue_dma_pad
ffffffff814039a0 r __ksymtab_blk_queue_end_tag
ffffffff814039b0 r __ksymtab_blk_queue_find_tag
ffffffff814039c0 r __ksymtab_blk_queue_free_tags
ffffffff814039d0 r __ksymtab_blk_queue_init_tags
ffffffff814039e0 r __ksymtab_blk_queue_invalidate_tags
ffffffff814039f0 r __ksymtab_blk_queue_io_min
ffffffff81403a00 r __ksymtab_blk_queue_io_opt
ffffffff81403a10 r __ksymtab_blk_queue_logical_block_size
ffffffff81403a20 r __ksymtab_blk_queue_make_request
ffffffff81403a30 r __ksymtab_blk_queue_max_discard_sectors
ffffffff81403a40 r __ksymtab_blk_queue_max_hw_sectors
ffffffff81403a50 r __ksymtab_blk_queue_max_segment_size
ffffffff81403a60 r __ksymtab_blk_queue_max_segments
ffffffff81403a70 r __ksymtab_blk_queue_merge_bvec
ffffffff81403a80 r __ksymtab_blk_queue_physical_block_size
ffffffff81403a90 r __ksymtab_blk_queue_prep_rq
ffffffff81403aa0 r __ksymtab_blk_queue_resize_tags
ffffffff81403ab0 r __ksymtab_blk_queue_segment_boundary
ffffffff81403ac0 r __ksymtab_blk_queue_softirq_done
ffffffff81403ad0 r __ksymtab_blk_queue_stack_limits
ffffffff81403ae0 r __ksymtab_blk_queue_start_tag
ffffffff81403af0 r __ksymtab_blk_queue_unprep_rq
ffffffff81403b00 r __ksymtab_blk_queue_update_dma_alignment
ffffffff81403b10 r __ksymtab_blk_queue_update_dma_pad
ffffffff81403b20 r __ksymtab_blk_recount_segments
ffffffff81403b30 r __ksymtab_blk_register_region
ffffffff81403b40 r __ksymtab_blk_requeue_request
ffffffff81403b50 r __ksymtab_blk_rq_init
ffffffff81403b60 r __ksymtab_blk_rq_map_kern
ffffffff81403b70 r __ksymtab_blk_rq_map_sg
ffffffff81403b80 r __ksymtab_blk_rq_map_user
ffffffff81403b90 r __ksymtab_blk_rq_map_user_iov
ffffffff81403ba0 r __ksymtab_blk_rq_unmap_user
ffffffff81403bb0 r __ksymtab_blk_run_queue
ffffffff81403bc0 r __ksymtab_blk_run_queue_async
ffffffff81403bd0 r __ksymtab_blk_set_default_limits
ffffffff81403be0 r __ksymtab_blk_set_stacking_limits
ffffffff81403bf0 r __ksymtab_blk_stack_limits
ffffffff81403c00 r __ksymtab_blk_start_plug
ffffffff81403c10 r __ksymtab_blk_start_queue
ffffffff81403c20 r __ksymtab_blk_start_request
ffffffff81403c30 r __ksymtab_blk_stop_queue
ffffffff81403c40 r __ksymtab_blk_sync_queue
ffffffff81403c50 r __ksymtab_blk_unregister_region
ffffffff81403c60 r __ksymtab_blk_verify_command
ffffffff81403c70 r __ksymtab_blkdev_fsync
ffffffff81403c80 r __ksymtab_blkdev_get
ffffffff81403c90 r __ksymtab_blkdev_get_by_dev
ffffffff81403ca0 r __ksymtab_blkdev_get_by_path
ffffffff81403cb0 r __ksymtab_blkdev_issue_discard
ffffffff81403cc0 r __ksymtab_blkdev_issue_flush
ffffffff81403cd0 r __ksymtab_blkdev_issue_zeroout
ffffffff81403ce0 r __ksymtab_blkdev_put
ffffffff81403cf0 r __ksymtab_block_all_signals
ffffffff81403d00 r __ksymtab_block_commit_write
ffffffff81403d10 r __ksymtab_block_invalidatepage
ffffffff81403d20 r __ksymtab_block_is_partially_uptodate
ffffffff81403d30 r __ksymtab_block_page_mkwrite
ffffffff81403d40 r __ksymtab_block_read_full_page
ffffffff81403d50 r __ksymtab_block_truncate_page
ffffffff81403d60 r __ksymtab_block_write_begin
ffffffff81403d70 r __ksymtab_block_write_end
ffffffff81403d80 r __ksymtab_block_write_full_page
ffffffff81403d90 r __ksymtab_block_write_full_page_endio
ffffffff81403da0 r __ksymtab_bmap
ffffffff81403db0 r __ksymtab_boot_cpu_data
ffffffff81403dc0 r __ksymtab_boot_option_idle_override
ffffffff81403dd0 r __ksymtab_boot_tvec_bases
ffffffff81403de0 r __ksymtab_brioctl_set
ffffffff81403df0 r __ksymtab_bsearch
ffffffff81403e00 r __ksymtab_buffer_migrate_page
ffffffff81403e10 r __ksymtab_build_ehash_secret
ffffffff81403e20 r __ksymtab_build_skb
ffffffff81403e30 r __ksymtab_cad_pid
ffffffff81403e40 r __ksymtab_call_netdevice_notifiers
ffffffff81403e50 r __ksymtab_call_usermodehelper_exec
ffffffff81403e60 r __ksymtab_call_usermodehelper_freeinfo
ffffffff81403e70 r __ksymtab_call_usermodehelper_setfns
ffffffff81403e80 r __ksymtab_call_usermodehelper_setup
ffffffff81403e90 r __ksymtab_can_do_mlock
ffffffff81403ea0 r __ksymtab_cancel_delayed_work_sync
ffffffff81403eb0 r __ksymtab_cancel_dirty_page
ffffffff81403ec0 r __ksymtab_capable
ffffffff81403ed0 r __ksymtab_cdev_add
ffffffff81403ee0 r __ksymtab_cdev_alloc
ffffffff81403ef0 r __ksymtab_cdev_del
ffffffff81403f00 r __ksymtab_cdev_init
ffffffff81403f10 r __ksymtab_cfb_copyarea
ffffffff81403f20 r __ksymtab_cfb_fillrect
ffffffff81403f30 r __ksymtab_cfb_imageblit
ffffffff81403f40 r __ksymtab_check_disk_change
ffffffff81403f50 r __ksymtab_check_disk_size_change
ffffffff81403f60 r __ksymtab_cleancache_enabled
ffffffff81403f70 r __ksymtab_cleancache_register_ops
ffffffff81403f80 r __ksymtab_clear_bdi_congested
ffffffff81403f90 r __ksymtab_clear_nlink
ffffffff81403fa0 r __ksymtab_clear_page
ffffffff81403fb0 r __ksymtab_clear_page_dirty_for_io
ffffffff81403fc0 r __ksymtab_clear_user
ffffffff81403fd0 r __ksymtab_clock_t_to_jiffies
ffffffff81403fe0 r __ksymtab_clocksource_change_rating
ffffffff81403ff0 r __ksymtab_clocksource_register
ffffffff81404000 r __ksymtab_clocksource_unregister
ffffffff81404010 r __ksymtab_color_table
ffffffff81404020 r __ksymtab_commit_creds
ffffffff81404030 r __ksymtab_compat_ip_getsockopt
ffffffff81404040 r __ksymtab_compat_ip_setsockopt
ffffffff81404050 r __ksymtab_compat_mc_getsockopt
ffffffff81404060 r __ksymtab_compat_mc_setsockopt
ffffffff81404070 r __ksymtab_compat_sock_common_getsockopt
ffffffff81404080 r __ksymtab_compat_sock_common_setsockopt
ffffffff81404090 r __ksymtab_compat_sock_get_timestamp
ffffffff814040a0 r __ksymtab_compat_sock_get_timestampns
ffffffff814040b0 r __ksymtab_compat_tcp_getsockopt
ffffffff814040c0 r __ksymtab_compat_tcp_setsockopt
ffffffff814040d0 r __ksymtab_complete
ffffffff814040e0 r __ksymtab_complete_all
ffffffff814040f0 r __ksymtab_complete_and_exit
ffffffff81404100 r __ksymtab_completion_done
ffffffff81404110 r __ksymtab_con_copy_unimap
ffffffff81404120 r __ksymtab_con_is_bound
ffffffff81404130 r __ksymtab_con_set_default_unimap
ffffffff81404140 r __ksymtab_config_group_find_item
ffffffff81404150 r __ksymtab_config_group_init
ffffffff81404160 r __ksymtab_config_group_init_type_name
ffffffff81404170 r __ksymtab_config_item_get
ffffffff81404180 r __ksymtab_config_item_init
ffffffff81404190 r __ksymtab_config_item_init_type_name
ffffffff814041a0 r __ksymtab_config_item_put
ffffffff814041b0 r __ksymtab_config_item_set_name
ffffffff814041c0 r __ksymtab_configfs_depend_item
ffffffff814041d0 r __ksymtab_configfs_register_subsystem
ffffffff814041e0 r __ksymtab_configfs_undepend_item
ffffffff814041f0 r __ksymtab_configfs_unregister_subsystem
ffffffff81404200 r __ksymtab_congestion_wait
ffffffff81404210 r __ksymtab_console_blank_hook
ffffffff81404220 r __ksymtab_console_blanked
ffffffff81404230 r __ksymtab_console_conditional_schedule
ffffffff81404240 r __ksymtab_console_lock
ffffffff81404250 r __ksymtab_console_set_on_cmdline
ffffffff81404260 r __ksymtab_console_start
ffffffff81404270 r __ksymtab_console_stop
ffffffff81404280 r __ksymtab_console_suspend_enabled
ffffffff81404290 r __ksymtab_console_trylock
ffffffff814042a0 r __ksymtab_console_unlock
ffffffff814042b0 r __ksymtab_consume_skb
ffffffff814042c0 r __ksymtab_cont_write_begin
ffffffff814042d0 r __ksymtab_cookie_check_timestamp
ffffffff814042e0 r __ksymtab_copy_in_user
ffffffff814042f0 r __ksymtab_copy_page
ffffffff81404300 r __ksymtab_copy_strings_kernel
ffffffff81404310 r __ksymtab_copy_user_generic_string
ffffffff81404320 r __ksymtab_copy_user_generic_unrolled
ffffffff81404330 r __ksymtab_cpu_active_mask
ffffffff81404340 r __ksymtab_cpu_all_bits
ffffffff81404350 r __ksymtab_cpu_core_map
ffffffff81404360 r __ksymtab_cpu_down
ffffffff81404370 r __ksymtab_cpu_dr7
ffffffff81404380 r __ksymtab_cpu_info
ffffffff81404390 r __ksymtab_cpu_khz
ffffffff814043a0 r __ksymtab_cpu_number
ffffffff814043b0 r __ksymtab_cpu_online_mask
ffffffff814043c0 r __ksymtab_cpu_possible_mask
ffffffff814043d0 r __ksymtab_cpu_present_mask
ffffffff814043e0 r __ksymtab_cpu_rmap_add
ffffffff814043f0 r __ksymtab_cpu_rmap_update
ffffffff81404400 r __ksymtab_cpu_sibling_map
ffffffff81404410 r __ksymtab_cpufreq_get
ffffffff81404420 r __ksymtab_cpufreq_get_policy
ffffffff81404430 r __ksymtab_cpufreq_global_kobject
ffffffff81404440 r __ksymtab_cpufreq_quick_get
ffffffff81404450 r __ksymtab_cpufreq_quick_get_max
ffffffff81404460 r __ksymtab_cpufreq_register_notifier
ffffffff81404470 r __ksymtab_cpufreq_unregister_notifier
ffffffff81404480 r __ksymtab_cpufreq_update_policy
ffffffff81404490 r __ksymtab_cpumask_next_and
ffffffff814044a0 r __ksymtab_crc16
ffffffff814044b0 r __ksymtab_crc16_table
ffffffff814044c0 r __ksymtab_crc32_be
ffffffff814044d0 r __ksymtab_crc32_le
ffffffff814044e0 r __ksymtab_create_empty_buffers
ffffffff814044f0 r __ksymtab_create_proc_entry
ffffffff81404500 r __ksymtab_csum_ipv6_magic
ffffffff81404510 r __ksymtab_csum_partial
ffffffff81404520 r __ksymtab_csum_partial_copy_from_user
ffffffff81404530 r __ksymtab_csum_partial_copy_fromiovecend
ffffffff81404540 r __ksymtab_csum_partial_copy_nocheck
ffffffff81404550 r __ksymtab_csum_partial_copy_to_user
ffffffff81404560 r __ksymtab_current_fs_time
ffffffff81404570 r __ksymtab_current_kernel_time
ffffffff81404580 r __ksymtab_current_task
ffffffff81404590 r __ksymtab_current_umask
ffffffff814045a0 r __ksymtab_d_add_ci
ffffffff814045b0 r __ksymtab_d_alloc
ffffffff814045c0 r __ksymtab_d_alloc_name
ffffffff814045d0 r __ksymtab_d_alloc_pseudo
ffffffff814045e0 r __ksymtab_d_clear_need_lookup
ffffffff814045f0 r __ksymtab_d_delete
ffffffff81404600 r __ksymtab_d_drop
ffffffff81404610 r __ksymtab_d_find_alias
ffffffff81404620 r __ksymtab_d_find_any_alias
ffffffff81404630 r __ksymtab_d_genocide
ffffffff81404640 r __ksymtab_d_instantiate
ffffffff81404650 r __ksymtab_d_instantiate_unique
ffffffff81404660 r __ksymtab_d_invalidate
ffffffff81404670 r __ksymtab_d_lookup
ffffffff81404680 r __ksymtab_d_make_root
ffffffff81404690 r __ksymtab_d_move
ffffffff814046a0 r __ksymtab_d_obtain_alias
ffffffff814046b0 r __ksymtab_d_path
ffffffff814046c0 r __ksymtab_d_prune_aliases
ffffffff814046d0 r __ksymtab_d_rehash
ffffffff814046e0 r __ksymtab_d_set_d_op
ffffffff814046f0 r __ksymtab_d_splice_alias
ffffffff81404700 r __ksymtab_d_validate
ffffffff81404710 r __ksymtab_daemonize
ffffffff81404720 r __ksymtab_datagram_poll
ffffffff81404730 r __ksymtab_dcache_dir_close
ffffffff81404740 r __ksymtab_dcache_dir_lseek
ffffffff81404750 r __ksymtab_dcache_dir_open
ffffffff81404760 r __ksymtab_dcache_readdir
ffffffff81404770 r __ksymtab_deactivate_locked_super
ffffffff81404780 r __ksymtab_deactivate_super
ffffffff81404790 r __ksymtab_dec_zone_page_state
ffffffff814047a0 r __ksymtab_default_blu
ffffffff814047b0 r __ksymtab_default_file_splice_read
ffffffff814047c0 r __ksymtab_default_grn
ffffffff814047d0 r __ksymtab_default_llseek
ffffffff814047e0 r __ksymtab_default_red
ffffffff814047f0 r __ksymtab_default_wake_function
ffffffff81404800 r __ksymtab_del_gendisk
ffffffff81404810 r __ksymtab_del_timer
ffffffff81404820 r __ksymtab_del_timer_sync
ffffffff81404830 r __ksymtab_delete_from_page_cache
ffffffff81404840 r __ksymtab_dentry_open
ffffffff81404850 r __ksymtab_dentry_path_raw
ffffffff81404860 r __ksymtab_dentry_unhash
ffffffff81404870 r __ksymtab_dentry_update_name_case
ffffffff81404880 r __ksymtab_dev_activate
ffffffff81404890 r __ksymtab_dev_add_pack
ffffffff814048a0 r __ksymtab_dev_addr_add
ffffffff814048b0 r __ksymtab_dev_addr_add_multiple
ffffffff814048c0 r __ksymtab_dev_addr_del
ffffffff814048d0 r __ksymtab_dev_addr_del_multiple
ffffffff814048e0 r __ksymtab_dev_addr_flush
ffffffff814048f0 r __ksymtab_dev_addr_init
ffffffff81404900 r __ksymtab_dev_alert
ffffffff81404910 r __ksymtab_dev_alloc_name
ffffffff81404920 r __ksymtab_dev_alloc_skb
ffffffff81404930 r __ksymtab_dev_base_lock
ffffffff81404940 r __ksymtab_dev_change_flags
ffffffff81404950 r __ksymtab_dev_close
ffffffff81404960 r __ksymtab_dev_crit
ffffffff81404970 r __ksymtab_dev_deactivate
ffffffff81404980 r __ksymtab_dev_disable_lro
ffffffff81404990 r __ksymtab_dev_driver_string
ffffffff814049a0 r __ksymtab_dev_emerg
ffffffff814049b0 r __ksymtab_dev_err
ffffffff814049c0 r __ksymtab_dev_get_by_flags_rcu
ffffffff814049d0 r __ksymtab_dev_get_by_index
ffffffff814049e0 r __ksymtab_dev_get_by_index_rcu
ffffffff814049f0 r __ksymtab_dev_get_by_name
ffffffff81404a00 r __ksymtab_dev_get_by_name_rcu
ffffffff81404a10 r __ksymtab_dev_get_drvdata
ffffffff81404a20 r __ksymtab_dev_get_flags
ffffffff81404a30 r __ksymtab_dev_get_stats
ffffffff81404a40 r __ksymtab_dev_getbyhwaddr_rcu
ffffffff81404a50 r __ksymtab_dev_getfirstbyhwtype
ffffffff81404a60 r __ksymtab_dev_graft_qdisc
ffffffff81404a70 r __ksymtab_dev_gro_receive
ffffffff81404a80 r __ksymtab_dev_kfree_skb_any
ffffffff81404a90 r __ksymtab_dev_kfree_skb_irq
ffffffff81404aa0 r __ksymtab_dev_load
ffffffff81404ab0 r __ksymtab_dev_mc_add
ffffffff81404ac0 r __ksymtab_dev_mc_add_global
ffffffff81404ad0 r __ksymtab_dev_mc_del
ffffffff81404ae0 r __ksymtab_dev_mc_del_global
ffffffff81404af0 r __ksymtab_dev_mc_flush
ffffffff81404b00 r __ksymtab_dev_mc_init
ffffffff81404b10 r __ksymtab_dev_mc_sync
ffffffff81404b20 r __ksymtab_dev_mc_unsync
ffffffff81404b30 r __ksymtab_dev_notice
ffffffff81404b40 r __ksymtab_dev_open
ffffffff81404b50 r __ksymtab_dev_printk
ffffffff81404b60 r __ksymtab_dev_queue_xmit
ffffffff81404b70 r __ksymtab_dev_remove_pack
ffffffff81404b80 r __ksymtab_dev_set_allmulti
ffffffff81404b90 r __ksymtab_dev_set_drvdata
ffffffff81404ba0 r __ksymtab_dev_set_group
ffffffff81404bb0 r __ksymtab_dev_set_mac_address
ffffffff81404bc0 r __ksymtab_dev_set_mtu
ffffffff81404bd0 r __ksymtab_dev_set_promiscuity
ffffffff81404be0 r __ksymtab_dev_trans_start
ffffffff81404bf0 r __ksymtab_dev_uc_add
ffffffff81404c00 r __ksymtab_dev_uc_del
ffffffff81404c10 r __ksymtab_dev_uc_flush
ffffffff81404c20 r __ksymtab_dev_uc_init
ffffffff81404c30 r __ksymtab_dev_uc_sync
ffffffff81404c40 r __ksymtab_dev_uc_unsync
ffffffff81404c50 r __ksymtab_dev_valid_name
ffffffff81404c60 r __ksymtab_dev_warn
ffffffff81404c70 r __ksymtab_devm_free_irq
ffffffff81404c80 r __ksymtab_devm_ioport_map
ffffffff81404c90 r __ksymtab_devm_ioport_unmap
ffffffff81404ca0 r __ksymtab_devm_ioremap
ffffffff81404cb0 r __ksymtab_devm_ioremap_nocache
ffffffff81404cc0 r __ksymtab_devm_iounmap
ffffffff81404cd0 r __ksymtab_devm_request_and_ioremap
ffffffff81404ce0 r __ksymtab_devm_request_threaded_irq
ffffffff81404cf0 r __ksymtab_dget_parent
ffffffff81404d00 r __ksymtab_directly_mappable_cdev_bdi
ffffffff81404d10 r __ksymtab_disable_irq
ffffffff81404d20 r __ksymtab_disable_irq_nosync
ffffffff81404d30 r __ksymtab_disallow_signal
ffffffff81404d40 r __ksymtab_disk_stack_limits
ffffffff81404d50 r __ksymtab_dlci_ioctl_set
ffffffff81404d60 r __ksymtab_dma_ops
ffffffff81404d70 r __ksymtab_dma_pool_alloc
ffffffff81404d80 r __ksymtab_dma_pool_create
ffffffff81404d90 r __ksymtab_dma_pool_destroy
ffffffff81404da0 r __ksymtab_dma_pool_free
ffffffff81404db0 r __ksymtab_dma_set_mask
ffffffff81404dc0 r __ksymtab_dma_spin_lock
ffffffff81404dd0 r __ksymtab_dma_supported
ffffffff81404de0 r __ksymtab_dmam_alloc_coherent
ffffffff81404df0 r __ksymtab_dmam_alloc_noncoherent
ffffffff81404e00 r __ksymtab_dmam_free_coherent
ffffffff81404e10 r __ksymtab_dmam_free_noncoherent
ffffffff81404e20 r __ksymtab_dmam_pool_create
ffffffff81404e30 r __ksymtab_dmam_pool_destroy
ffffffff81404e40 r __ksymtab_dmi_check_system
ffffffff81404e50 r __ksymtab_dmi_find_device
ffffffff81404e60 r __ksymtab_dmi_first_match
ffffffff81404e70 r __ksymtab_dmi_get_date
ffffffff81404e80 r __ksymtab_dmi_get_system_info
ffffffff81404e90 r __ksymtab_dmi_name_in_vendors
ffffffff81404ea0 r __ksymtab_do_SAK
ffffffff81404eb0 r __ksymtab_do_blank_screen
ffffffff81404ec0 r __ksymtab_do_gettimeofday
ffffffff81404ed0 r __ksymtab_do_mmap
ffffffff81404ee0 r __ksymtab_do_munmap
ffffffff81404ef0 r __ksymtab_do_settimeofday
ffffffff81404f00 r __ksymtab_do_sync_read
ffffffff81404f10 r __ksymtab_do_sync_write
ffffffff81404f20 r __ksymtab_do_unblank_screen
ffffffff81404f30 r __ksymtab_down
ffffffff81404f40 r __ksymtab_down_interruptible
ffffffff81404f50 r __ksymtab_down_killable
ffffffff81404f60 r __ksymtab_down_read
ffffffff81404f70 r __ksymtab_down_read_trylock
ffffffff81404f80 r __ksymtab_down_timeout
ffffffff81404f90 r __ksymtab_down_trylock
ffffffff81404fa0 r __ksymtab_down_write
ffffffff81404fb0 r __ksymtab_down_write_trylock
ffffffff81404fc0 r __ksymtab_downgrade_write
ffffffff81404fd0 r __ksymtab_dput
ffffffff81404fe0 r __ksymtab_dql_completed
ffffffff81404ff0 r __ksymtab_dql_init
ffffffff81405000 r __ksymtab_dql_reset
ffffffff81405010 r __ksymtab_drop_nlink
ffffffff81405020 r __ksymtab_drop_super
ffffffff81405030 r __ksymtab_dst_alloc
ffffffff81405040 r __ksymtab_dst_cow_metrics_generic
ffffffff81405050 r __ksymtab_dst_destroy
ffffffff81405060 r __ksymtab_dst_discard
ffffffff81405070 r __ksymtab_dst_release
ffffffff81405080 r __ksymtab_dump_fpu
ffffffff81405090 r __ksymtab_dump_seek
ffffffff814050a0 r __ksymtab_dump_stack
ffffffff814050b0 r __ksymtab_dump_trace
ffffffff814050c0 r __ksymtab_dump_write
ffffffff814050d0 r __ksymtab_ec_burst_disable
ffffffff814050e0 r __ksymtab_ec_burst_enable
ffffffff814050f0 r __ksymtab_ec_get_handle
ffffffff81405100 r __ksymtab_ec_read
ffffffff81405110 r __ksymtab_ec_transaction
ffffffff81405120 r __ksymtab_ec_write
ffffffff81405130 r __ksymtab_elevator_change
ffffffff81405140 r __ksymtab_elevator_exit
ffffffff81405150 r __ksymtab_elevator_init
ffffffff81405160 r __ksymtab_elv_abort_queue
ffffffff81405170 r __ksymtab_elv_add_request
ffffffff81405180 r __ksymtab_elv_dispatch_add_tail
ffffffff81405190 r __ksymtab_elv_dispatch_sort
ffffffff814051a0 r __ksymtab_elv_rb_add
ffffffff814051b0 r __ksymtab_elv_rb_del
ffffffff814051c0 r __ksymtab_elv_rb_find
ffffffff814051d0 r __ksymtab_elv_rb_former_request
ffffffff814051e0 r __ksymtab_elv_rb_latter_request
ffffffff814051f0 r __ksymtab_elv_register_queue
ffffffff81405200 r __ksymtab_elv_rq_merge_ok
ffffffff81405210 r __ksymtab_elv_unregister_queue
ffffffff81405220 r __ksymtab_empty_aops
ffffffff81405230 r __ksymtab_empty_zero_page
ffffffff81405240 r __ksymtab_enable_irq
ffffffff81405250 r __ksymtab_end_buffer_async_write
ffffffff81405260 r __ksymtab_end_buffer_read_sync
ffffffff81405270 r __ksymtab_end_buffer_write_sync
ffffffff81405280 r __ksymtab_end_page_writeback
ffffffff81405290 r __ksymtab_end_writeback
ffffffff814052a0 r __ksymtab_eth_change_mtu
ffffffff814052b0 r __ksymtab_eth_header
ffffffff814052c0 r __ksymtab_eth_header_cache
ffffffff814052d0 r __ksymtab_eth_header_cache_update
ffffffff814052e0 r __ksymtab_eth_header_parse
ffffffff814052f0 r __ksymtab_eth_mac_addr
ffffffff81405300 r __ksymtab_eth_rebuild_header
ffffffff81405310 r __ksymtab_eth_type_trans
ffffffff81405320 r __ksymtab_eth_validate_addr
ffffffff81405330 r __ksymtab_ether_setup
ffffffff81405340 r __ksymtab_ethtool_op_get_link
ffffffff81405350 r __ksymtab_f_setown
ffffffff81405360 r __ksymtab_fail_migrate_page
ffffffff81405370 r __ksymtab_fasync_helper
ffffffff81405380 r __ksymtab_fb_add_videomode
ffffffff81405390 r __ksymtab_fb_alloc_cmap
ffffffff814053a0 r __ksymtab_fb_blank
ffffffff814053b0 r __ksymtab_fb_class
ffffffff814053c0 r __ksymtab_fb_copy_cmap
ffffffff814053d0 r __ksymtab_fb_dealloc_cmap
ffffffff814053e0 r __ksymtab_fb_default_cmap
ffffffff814053f0 r __ksymtab_fb_destroy_modedb
ffffffff81405400 r __ksymtab_fb_edid_add_monspecs
ffffffff81405410 r __ksymtab_fb_edid_to_monspecs
ffffffff81405420 r __ksymtab_fb_find_best_display
ffffffff81405430 r __ksymtab_fb_find_best_mode
ffffffff81405440 r __ksymtab_fb_find_mode
ffffffff81405450 r __ksymtab_fb_find_mode_cvt
ffffffff81405460 r __ksymtab_fb_find_nearest_mode
ffffffff81405470 r __ksymtab_fb_firmware_edid
ffffffff81405480 r __ksymtab_fb_get_buffer_offset
ffffffff81405490 r __ksymtab_fb_get_color_depth
ffffffff814054a0 r __ksymtab_fb_get_mode
ffffffff814054b0 r __ksymtab_fb_get_options
ffffffff814054c0 r __ksymtab_fb_invert_cmaps
ffffffff814054d0 r __ksymtab_fb_is_primary_device
ffffffff814054e0 r __ksymtab_fb_match_mode
ffffffff814054f0 r __ksymtab_fb_mode_is_equal
ffffffff81405500 r __ksymtab_fb_pad_aligned_buffer
ffffffff81405510 r __ksymtab_fb_pad_unaligned_buffer
ffffffff81405520 r __ksymtab_fb_pan_display
ffffffff81405530 r __ksymtab_fb_parse_edid
ffffffff81405540 r __ksymtab_fb_register_client
ffffffff81405550 r __ksymtab_fb_set_cmap
ffffffff81405560 r __ksymtab_fb_set_suspend
ffffffff81405570 r __ksymtab_fb_set_var
ffffffff81405580 r __ksymtab_fb_show_logo
ffffffff81405590 r __ksymtab_fb_unregister_client
ffffffff814055a0 r __ksymtab_fb_validate_mode
ffffffff814055b0 r __ksymtab_fb_var_to_videomode
ffffffff814055c0 r __ksymtab_fb_videomode_to_modelist
ffffffff814055d0 r __ksymtab_fb_videomode_to_var
ffffffff814055e0 r __ksymtab_fbcon_set_bitops
ffffffff814055f0 r __ksymtab_fd_install
ffffffff81405600 r __ksymtab_fg_console
ffffffff81405610 r __ksymtab_fget
ffffffff81405620 r __ksymtab_fget_raw
ffffffff81405630 r __ksymtab_fiemap_check_flags
ffffffff81405640 r __ksymtab_fiemap_fill_next_extent
ffffffff81405650 r __ksymtab_file_open_root
ffffffff81405660 r __ksymtab_file_remove_suid
ffffffff81405670 r __ksymtab_file_update_time
ffffffff81405680 r __ksymtab_filemap_fault
ffffffff81405690 r __ksymtab_filemap_fdatawait
ffffffff814056a0 r __ksymtab_filemap_fdatawait_range
ffffffff814056b0 r __ksymtab_filemap_fdatawrite
ffffffff814056c0 r __ksymtab_filemap_fdatawrite_range
ffffffff814056d0 r __ksymtab_filemap_flush
ffffffff814056e0 r __ksymtab_filemap_write_and_wait
ffffffff814056f0 r __ksymtab_filemap_write_and_wait_range
ffffffff81405700 r __ksymtab_files_lglock_global_lock
ffffffff81405710 r __ksymtab_files_lglock_global_lock_online
ffffffff81405720 r __ksymtab_files_lglock_global_unlock
ffffffff81405730 r __ksymtab_files_lglock_global_unlock_online
ffffffff81405740 r __ksymtab_files_lglock_local_lock
ffffffff81405750 r __ksymtab_files_lglock_local_lock_cpu
ffffffff81405760 r __ksymtab_files_lglock_local_unlock
ffffffff81405770 r __ksymtab_files_lglock_local_unlock_cpu
ffffffff81405780 r __ksymtab_files_lglock_lock_init
ffffffff81405790 r __ksymtab_filp_close
ffffffff814057a0 r __ksymtab_filp_open
ffffffff814057b0 r __ksymtab_find_first_bit
ffffffff814057c0 r __ksymtab_find_first_zero_bit
ffffffff814057d0 r __ksymtab_find_font
ffffffff814057e0 r __ksymtab_find_get_page
ffffffff814057f0 r __ksymtab_find_get_pages_contig
ffffffff81405800 r __ksymtab_find_get_pages_tag
ffffffff81405810 r __ksymtab_find_inode_number
ffffffff81405820 r __ksymtab_find_last_bit
ffffffff81405830 r __ksymtab_find_lock_page
ffffffff81405840 r __ksymtab_find_next_bit
ffffffff81405850 r __ksymtab_find_next_zero_bit
ffffffff81405860 r __ksymtab_find_or_create_page
ffffffff81405870 r __ksymtab_find_vma
ffffffff81405880 r __ksymtab_finish_wait
ffffffff81405890 r __ksymtab_first_ec
ffffffff814058a0 r __ksymtab_flex_array_alloc
ffffffff814058b0 r __ksymtab_flex_array_clear
ffffffff814058c0 r __ksymtab_flex_array_free
ffffffff814058d0 r __ksymtab_flex_array_free_parts
ffffffff814058e0 r __ksymtab_flex_array_get
ffffffff814058f0 r __ksymtab_flex_array_get_ptr
ffffffff81405900 r __ksymtab_flex_array_prealloc
ffffffff81405910 r __ksymtab_flex_array_put
ffffffff81405920 r __ksymtab_flex_array_shrink
ffffffff81405930 r __ksymtab_flock_lock_file_wait
ffffffff81405940 r __ksymtab_flow_cache_genid
ffffffff81405950 r __ksymtab_flow_cache_lookup
ffffffff81405960 r __ksymtab_flush_delayed_work
ffffffff81405970 r __ksymtab_flush_delayed_work_sync
ffffffff81405980 r __ksymtab_flush_old_exec
ffffffff81405990 r __ksymtab_flush_scheduled_work
ffffffff814059a0 r __ksymtab_flush_signals
ffffffff814059b0 r __ksymtab_follow_down
ffffffff814059c0 r __ksymtab_follow_down_one
ffffffff814059d0 r __ksymtab_follow_pfn
ffffffff814059e0 r __ksymtab_follow_up
ffffffff814059f0 r __ksymtab_font_vga_8x16
ffffffff81405a00 r __ksymtab_force_sig
ffffffff81405a10 r __ksymtab_fput
ffffffff81405a20 r __ksymtab_framebuffer_alloc
ffffffff81405a30 r __ksymtab_framebuffer_release
ffffffff81405a40 r __ksymtab_free_anon_bdev
ffffffff81405a50 r __ksymtab_free_buffer_head
ffffffff81405a60 r __ksymtab_free_dma
ffffffff81405a70 r __ksymtab_free_inode_nonrcu
ffffffff81405a80 r __ksymtab_free_irq
ffffffff81405a90 r __ksymtab_free_irq_cpu_rmap
ffffffff81405aa0 r __ksymtab_free_netdev
ffffffff81405ab0 r __ksymtab_free_pages
ffffffff81405ac0 r __ksymtab_free_pages_exact
ffffffff81405ad0 r __ksymtab_free_task
ffffffff81405ae0 r __ksymtab_free_xenballooned_pages
ffffffff81405af0 r __ksymtab_freeze_bdev
ffffffff81405b00 r __ksymtab_freeze_super
ffffffff81405b10 r __ksymtab_freezing_slow_path
ffffffff81405b20 r __ksymtab_fs_overflowgid
ffffffff81405b30 r __ksymtab_fs_overflowuid
ffffffff81405b40 r __ksymtab_fsync_bdev
ffffffff81405b50 r __ksymtab_full_name_hash
ffffffff81405b60 r __ksymtab_gen_estimator_active
ffffffff81405b70 r __ksymtab_gen_kill_estimator
ffffffff81405b80 r __ksymtab_gen_new_estimator
ffffffff81405b90 r __ksymtab_gen_pool_add_virt
ffffffff81405ba0 r __ksymtab_gen_pool_alloc
ffffffff81405bb0 r __ksymtab_gen_pool_create
ffffffff81405bc0 r __ksymtab_gen_pool_destroy
ffffffff81405bd0 r __ksymtab_gen_pool_for_each_chunk
ffffffff81405be0 r __ksymtab_gen_pool_free
ffffffff81405bf0 r __ksymtab_gen_pool_virt_to_phys
ffffffff81405c00 r __ksymtab_gen_replace_estimator
ffffffff81405c10 r __ksymtab_generate_random_uuid
ffffffff81405c20 r __ksymtab_generic_block_bmap
ffffffff81405c30 r __ksymtab_generic_block_fiemap
ffffffff81405c40 r __ksymtab_generic_check_addressable
ffffffff81405c50 r __ksymtab_generic_cont_expand_simple
ffffffff81405c60 r __ksymtab_generic_delete_inode
ffffffff81405c70 r __ksymtab_generic_error_remove_page
ffffffff81405c80 r __ksymtab_generic_file_aio_read
ffffffff81405c90 r __ksymtab_generic_file_aio_write
ffffffff81405ca0 r __ksymtab_generic_file_buffered_write
ffffffff81405cb0 r __ksymtab_generic_file_direct_write
ffffffff81405cc0 r __ksymtab_generic_file_fsync
ffffffff81405cd0 r __ksymtab_generic_file_llseek
ffffffff81405ce0 r __ksymtab_generic_file_llseek_size
ffffffff81405cf0 r __ksymtab_generic_file_mmap
ffffffff81405d00 r __ksymtab_generic_file_open
ffffffff81405d10 r __ksymtab_generic_file_readonly_mmap
ffffffff81405d20 r __ksymtab_generic_file_splice_read
ffffffff81405d30 r __ksymtab_generic_file_splice_write
ffffffff81405d40 r __ksymtab_generic_fillattr
ffffffff81405d50 r __ksymtab_generic_getxattr
ffffffff81405d60 r __ksymtab_generic_listxattr
ffffffff81405d70 r __ksymtab_generic_make_request
ffffffff81405d80 r __ksymtab_generic_permission
ffffffff81405d90 r __ksymtab_generic_pipe_buf_confirm
ffffffff81405da0 r __ksymtab_generic_pipe_buf_get
ffffffff81405db0 r __ksymtab_generic_pipe_buf_map
ffffffff81405dc0 r __ksymtab_generic_pipe_buf_release
ffffffff81405dd0 r __ksymtab_generic_pipe_buf_steal
ffffffff81405de0 r __ksymtab_generic_pipe_buf_unmap
ffffffff81405df0 r __ksymtab_generic_read_dir
ffffffff81405e00 r __ksymtab_generic_readlink
ffffffff81405e10 r __ksymtab_generic_removexattr
ffffffff81405e20 r __ksymtab_generic_ro_fops
ffffffff81405e30 r __ksymtab_generic_segment_checks
ffffffff81405e40 r __ksymtab_generic_setlease
ffffffff81405e50 r __ksymtab_generic_setxattr
ffffffff81405e60 r __ksymtab_generic_show_options
ffffffff81405e70 r __ksymtab_generic_shutdown_super
ffffffff81405e80 r __ksymtab_generic_splice_sendpage
ffffffff81405e90 r __ksymtab_generic_write_checks
ffffffff81405ea0 r __ksymtab_generic_write_end
ffffffff81405eb0 r __ksymtab_generic_write_sync
ffffffff81405ec0 r __ksymtab_generic_writepages
ffffffff81405ed0 r __ksymtab_genl_lock
ffffffff81405ee0 r __ksymtab_genl_notify
ffffffff81405ef0 r __ksymtab_genl_register_family
ffffffff81405f00 r __ksymtab_genl_register_family_with_ops
ffffffff81405f10 r __ksymtab_genl_register_mc_group
ffffffff81405f20 r __ksymtab_genl_register_ops
ffffffff81405f30 r __ksymtab_genl_unlock
ffffffff81405f40 r __ksymtab_genl_unregister_family
ffffffff81405f50 r __ksymtab_genl_unregister_mc_group
ffffffff81405f60 r __ksymtab_genl_unregister_ops
ffffffff81405f70 r __ksymtab_genlmsg_multicast_allns
ffffffff81405f80 r __ksymtab_genlmsg_put
ffffffff81405f90 r __ksymtab_get_anon_bdev
ffffffff81405fa0 r __ksymtab_get_default_font
ffffffff81405fb0 r __ksymtab_get_disk
ffffffff81405fc0 r __ksymtab_get_fs_type
ffffffff81405fd0 r __ksymtab_get_gendisk
ffffffff81405fe0 r __ksymtab_get_ibs_caps
ffffffff81405ff0 r __ksymtab_get_io_context
ffffffff81406000 r __ksymtab_get_next_ino
ffffffff81406010 r __ksymtab_get_option
ffffffff81406020 r __ksymtab_get_options
ffffffff81406030 r __ksymtab_get_random_bytes
ffffffff81406040 r __ksymtab_get_seconds
ffffffff81406050 r __ksymtab_get_super
ffffffff81406060 r __ksymtab_get_super_thawed
ffffffff81406070 r __ksymtab_get_task_io_context
ffffffff81406080 r __ksymtab_get_unmapped_area
ffffffff81406090 r __ksymtab_get_unused_fd
ffffffff814060a0 r __ksymtab_get_user_pages
ffffffff814060b0 r __ksymtab_get_write_access
ffffffff814060c0 r __ksymtab_get_zeroed_page
ffffffff814060d0 r __ksymtab_getname
ffffffff814060e0 r __ksymtab_getnstimeofday
ffffffff814060f0 r __ksymtab_getrawmonotonic
ffffffff81406100 r __ksymtab_give_up_console
ffffffff81406110 r __ksymtab_global_cursor_default
ffffffff81406120 r __ksymtab_gnet_stats_copy_app
ffffffff81406130 r __ksymtab_gnet_stats_copy_basic
ffffffff81406140 r __ksymtab_gnet_stats_copy_queue
ffffffff81406150 r __ksymtab_gnet_stats_copy_rate_est
ffffffff81406160 r __ksymtab_gnet_stats_finish_copy
ffffffff81406170 r __ksymtab_gnet_stats_start_copy
ffffffff81406180 r __ksymtab_gnet_stats_start_copy_compat
ffffffff81406190 r __ksymtab_grab_cache_page_nowait
ffffffff814061a0 r __ksymtab_grab_cache_page_write_begin
ffffffff814061b0 r __ksymtab_groups_alloc
ffffffff814061c0 r __ksymtab_groups_free
ffffffff814061d0 r __ksymtab_half_md4_transform
ffffffff814061e0 r __ksymtab_handle_edge_irq
ffffffff814061f0 r __ksymtab_have_submounts
ffffffff81406200 r __ksymtab_hex2bin
ffffffff81406210 r __ksymtab_hex_asc
ffffffff81406220 r __ksymtab_hex_dump_to_buffer
ffffffff81406230 r __ksymtab_hex_to_bin
ffffffff81406240 r __ksymtab_high_memory
ffffffff81406250 r __ksymtab_ht_create_irq
ffffffff81406260 r __ksymtab_ht_destroy_irq
ffffffff81406270 r __ksymtab_i8042_check_port_owner
ffffffff81406280 r __ksymtab_i8042_command
ffffffff81406290 r __ksymtab_i8042_install_filter
ffffffff814062a0 r __ksymtab_i8042_lock_chip
ffffffff814062b0 r __ksymtab_i8042_remove_filter
ffffffff814062c0 r __ksymtab_i8042_unlock_chip
ffffffff814062d0 r __ksymtab_i8253_lock
ffffffff814062e0 r __ksymtab_icmp_err_convert
ffffffff814062f0 r __ksymtab_icmp_send
ffffffff81406300 r __ksymtab_icq_get_changed
ffffffff81406310 r __ksymtab_ida_destroy
ffffffff81406320 r __ksymtab_ida_get_new
ffffffff81406330 r __ksymtab_ida_get_new_above
ffffffff81406340 r __ksymtab_ida_init
ffffffff81406350 r __ksymtab_ida_pre_get
ffffffff81406360 r __ksymtab_ida_remove
ffffffff81406370 r __ksymtab_ida_simple_get
ffffffff81406380 r __ksymtab_ida_simple_remove
ffffffff81406390 r __ksymtab_idr_destroy
ffffffff814063a0 r __ksymtab_idr_find
ffffffff814063b0 r __ksymtab_idr_for_each
ffffffff814063c0 r __ksymtab_idr_get_new
ffffffff814063d0 r __ksymtab_idr_get_new_above
ffffffff814063e0 r __ksymtab_idr_get_next
ffffffff814063f0 r __ksymtab_idr_init
ffffffff81406400 r __ksymtab_idr_pre_get
ffffffff81406410 r __ksymtab_idr_remove
ffffffff81406420 r __ksymtab_idr_remove_all
ffffffff81406430 r __ksymtab_idr_replace
ffffffff81406440 r __ksymtab_ifla_policy
ffffffff81406450 r __ksymtab_iget5_locked
ffffffff81406460 r __ksymtab_iget_failed
ffffffff81406470 r __ksymtab_iget_locked
ffffffff81406480 r __ksymtab_igrab
ffffffff81406490 r __ksymtab_ihold
ffffffff814064a0 r __ksymtab_ilookup
ffffffff814064b0 r __ksymtab_ilookup5
ffffffff814064c0 r __ksymtab_ilookup5_nowait
ffffffff814064d0 r __ksymtab_in4_pton
ffffffff814064e0 r __ksymtab_in6_pton
ffffffff814064f0 r __ksymtab_in_aton
ffffffff81406500 r __ksymtab_in_dev_finish_destroy
ffffffff81406510 r __ksymtab_in_egroup_p
ffffffff81406520 r __ksymtab_in_group_p
ffffffff81406530 r __ksymtab_in_lock_functions
ffffffff81406540 r __ksymtab_inc_nlink
ffffffff81406550 r __ksymtab_inc_zone_page_state
ffffffff81406560 r __ksymtab_inet_accept
ffffffff81406570 r __ksymtab_inet_add_protocol
ffffffff81406580 r __ksymtab_inet_addr_type
ffffffff81406590 r __ksymtab_inet_bind
ffffffff814065a0 r __ksymtab_inet_confirm_addr
ffffffff814065b0 r __ksymtab_inet_csk_accept
ffffffff814065c0 r __ksymtab_inet_csk_clear_xmit_timers
ffffffff814065d0 r __ksymtab_inet_csk_delete_keepalive_timer
ffffffff814065e0 r __ksymtab_inet_csk_destroy_sock
ffffffff814065f0 r __ksymtab_inet_csk_init_xmit_timers
ffffffff81406600 r __ksymtab_inet_csk_reset_keepalive_timer
ffffffff81406610 r __ksymtab_inet_csk_timer_bug_msg
ffffffff81406620 r __ksymtab_inet_del_protocol
ffffffff81406630 r __ksymtab_inet_dev_addr_type
ffffffff81406640 r __ksymtab_inet_dgram_connect
ffffffff81406650 r __ksymtab_inet_dgram_ops
ffffffff81406660 r __ksymtab_inet_ehash_secret
ffffffff81406670 r __ksymtab_inet_frag_destroy
ffffffff81406680 r __ksymtab_inet_frag_evictor
ffffffff81406690 r __ksymtab_inet_frag_find
ffffffff814066a0 r __ksymtab_inet_frag_kill
ffffffff814066b0 r __ksymtab_inet_frags_exit_net
ffffffff814066c0 r __ksymtab_inet_frags_fini
ffffffff814066d0 r __ksymtab_inet_frags_init
ffffffff814066e0 r __ksymtab_inet_frags_init_net
ffffffff814066f0 r __ksymtab_inet_get_local_port_range
ffffffff81406700 r __ksymtab_inet_getname
ffffffff81406710 r __ksymtab_inet_ioctl
ffffffff81406720 r __ksymtab_inet_listen
ffffffff81406730 r __ksymtab_inet_peer_xrlim_allow
ffffffff81406740 r __ksymtab_inet_proto_csum_replace4
ffffffff81406750 r __ksymtab_inet_put_port
ffffffff81406760 r __ksymtab_inet_recvmsg
ffffffff81406770 r __ksymtab_inet_register_protosw
ffffffff81406780 r __ksymtab_inet_release
ffffffff81406790 r __ksymtab_inet_select_addr
ffffffff814067a0 r __ksymtab_inet_sendmsg
ffffffff814067b0 r __ksymtab_inet_sendpage
ffffffff814067c0 r __ksymtab_inet_shutdown
ffffffff814067d0 r __ksymtab_inet_sk_rebuild_header
ffffffff814067e0 r __ksymtab_inet_sock_destruct
ffffffff814067f0 r __ksymtab_inet_stream_connect
ffffffff81406800 r __ksymtab_inet_stream_ops
ffffffff81406810 r __ksymtab_inet_twsk_deschedule
ffffffff81406820 r __ksymtab_inet_unregister_protosw
ffffffff81406830 r __ksymtab_inetdev_by_index
ffffffff81406840 r __ksymtab_inetpeer_invalidate_tree
ffffffff81406850 r __ksymtab_init_buffer
ffffffff81406860 r __ksymtab_init_net
ffffffff81406870 r __ksymtab_init_special_inode
ffffffff81406880 r __ksymtab_init_task
ffffffff81406890 r __ksymtab_init_timer_deferrable_key
ffffffff814068a0 r __ksymtab_init_timer_key
ffffffff814068b0 r __ksymtab_inode_add_bytes
ffffffff814068c0 r __ksymtab_inode_change_ok
ffffffff814068d0 r __ksymtab_inode_dio_done
ffffffff814068e0 r __ksymtab_inode_dio_wait
ffffffff814068f0 r __ksymtab_inode_get_bytes
ffffffff81406900 r __ksymtab_inode_init_always
ffffffff81406910 r __ksymtab_inode_init_once
ffffffff81406920 r __ksymtab_inode_init_owner
ffffffff81406930 r __ksymtab_inode_needs_sync
ffffffff81406940 r __ksymtab_inode_newsize_ok
ffffffff81406950 r __ksymtab_inode_owner_or_capable
ffffffff81406960 r __ksymtab_inode_permission
ffffffff81406970 r __ksymtab_inode_set_bytes
ffffffff81406980 r __ksymtab_inode_sub_bytes
ffffffff81406990 r __ksymtab_inode_wait
ffffffff814069a0 r __ksymtab_input_alloc_absinfo
ffffffff814069b0 r __ksymtab_input_allocate_device
ffffffff814069c0 r __ksymtab_input_close_device
ffffffff814069d0 r __ksymtab_input_event
ffffffff814069e0 r __ksymtab_input_flush_device
ffffffff814069f0 r __ksymtab_input_free_device
ffffffff81406a00 r __ksymtab_input_get_keycode
ffffffff81406a10 r __ksymtab_input_grab_device
ffffffff81406a20 r __ksymtab_input_handler_for_each_handle
ffffffff81406a30 r __ksymtab_input_inject_event
ffffffff81406a40 r __ksymtab_input_mt_destroy_slots
ffffffff81406a50 r __ksymtab_input_mt_init_slots
ffffffff81406a60 r __ksymtab_input_mt_report_finger_count
ffffffff81406a70 r __ksymtab_input_mt_report_pointer_emulation
ffffffff81406a80 r __ksymtab_input_mt_report_slot_state
ffffffff81406a90 r __ksymtab_input_open_device
ffffffff81406aa0 r __ksymtab_input_register_device
ffffffff81406ab0 r __ksymtab_input_register_handle
ffffffff81406ac0 r __ksymtab_input_register_handler
ffffffff81406ad0 r __ksymtab_input_release_device
ffffffff81406ae0 r __ksymtab_input_reset_device
ffffffff81406af0 r __ksymtab_input_scancode_to_scalar
ffffffff81406b00 r __ksymtab_input_set_abs_params
ffffffff81406b10 r __ksymtab_input_set_capability
ffffffff81406b20 r __ksymtab_input_set_keycode
ffffffff81406b30 r __ksymtab_input_unregister_device
ffffffff81406b40 r __ksymtab_input_unregister_handle
ffffffff81406b50 r __ksymtab_input_unregister_handler
ffffffff81406b60 r __ksymtab_insert_inode_locked
ffffffff81406b70 r __ksymtab_insert_inode_locked4
ffffffff81406b80 r __ksymtab_install_exec_creds
ffffffff81406b90 r __ksymtab_int_sqrt
ffffffff81406ba0 r __ksymtab_int_to_scsilun
ffffffff81406bb0 r __ksymtab_interruptible_sleep_on
ffffffff81406bc0 r __ksymtab_interruptible_sleep_on_timeout
ffffffff81406bd0 r __ksymtab_invalidate_bdev
ffffffff81406be0 r __ksymtab_invalidate_inode_buffers
ffffffff81406bf0 r __ksymtab_invalidate_mapping_pages
ffffffff81406c00 r __ksymtab_invalidate_partition
ffffffff81406c10 r __ksymtab_io_schedule
ffffffff81406c20 r __ksymtab_ioc_cgroup_changed
ffffffff81406c30 r __ksymtab_ioc_lookup_icq
ffffffff81406c40 r __ksymtab_ioctl_by_bdev
ffffffff81406c50 r __ksymtab_iomem_resource
ffffffff81406c60 r __ksymtab_iommu_area_alloc
ffffffff81406c70 r __ksymtab_ioport_map
ffffffff81406c80 r __ksymtab_ioport_resource
ffffffff81406c90 r __ksymtab_ioport_unmap
ffffffff81406ca0 r __ksymtab_ioread16
ffffffff81406cb0 r __ksymtab_ioread16_rep
ffffffff81406cc0 r __ksymtab_ioread16be
ffffffff81406cd0 r __ksymtab_ioread32
ffffffff81406ce0 r __ksymtab_ioread32_rep
ffffffff81406cf0 r __ksymtab_ioread32be
ffffffff81406d00 r __ksymtab_ioread8
ffffffff81406d10 r __ksymtab_ioread8_rep
ffffffff81406d20 r __ksymtab_ioremap_cache
ffffffff81406d30 r __ksymtab_ioremap_nocache
ffffffff81406d40 r __ksymtab_ioremap_prot
ffffffff81406d50 r __ksymtab_ioremap_wc
ffffffff81406d60 r __ksymtab_iounmap
ffffffff81406d70 r __ksymtab_iov_iter_advance
ffffffff81406d80 r __ksymtab_iov_iter_copy_from_user
ffffffff81406d90 r __ksymtab_iov_iter_copy_from_user_atomic
ffffffff81406da0 r __ksymtab_iov_iter_fault_in_readable
ffffffff81406db0 r __ksymtab_iov_iter_single_seg_count
ffffffff81406dc0 r __ksymtab_iov_shorten
ffffffff81406dd0 r __ksymtab_iowrite16
ffffffff81406de0 r __ksymtab_iowrite16_rep
ffffffff81406df0 r __ksymtab_iowrite16be
ffffffff81406e00 r __ksymtab_iowrite32
ffffffff81406e10 r __ksymtab_iowrite32_rep
ffffffff81406e20 r __ksymtab_iowrite32be
ffffffff81406e30 r __ksymtab_iowrite8
ffffffff81406e40 r __ksymtab_iowrite8_rep
ffffffff81406e50 r __ksymtab_ip4_datagram_connect
ffffffff81406e60 r __ksymtab_ip_check_defrag
ffffffff81406e70 r __ksymtab_ip_cmsg_recv
ffffffff81406e80 r __ksymtab_ip_compute_csum
ffffffff81406e90 r __ksymtab_ip_defrag
ffffffff81406ea0 r __ksymtab_ip_fragment
ffffffff81406eb0 r __ksymtab_ip_generic_getfrag
ffffffff81406ec0 r __ksymtab_ip_getsockopt
ffffffff81406ed0 r __ksymtab_ip_mc_dec_group
ffffffff81406ee0 r __ksymtab_ip_mc_inc_group
ffffffff81406ef0 r __ksymtab_ip_mc_join_group
ffffffff81406f00 r __ksymtab_ip_mc_rejoin_groups
ffffffff81406f10 r __ksymtab_ip_options_compile
ffffffff81406f20 r __ksymtab_ip_options_rcv_srr
ffffffff81406f30 r __ksymtab_ip_queue_xmit
ffffffff81406f40 r __ksymtab_ip_route_input_common
ffffffff81406f50 r __ksymtab_ip_send_check
ffffffff81406f60 r __ksymtab_ip_setsockopt
ffffffff81406f70 r __ksymtab_iput
ffffffff81406f80 r __ksymtab_ipv4_config
ffffffff81406f90 r __ksymtab_ipv4_specific
ffffffff81406fa0 r __ksymtab_ipv6_ext_hdr
ffffffff81406fb0 r __ksymtab_ipv6_skip_exthdr
ffffffff81406fc0 r __ksymtab_irq_cpu_rmap_add
ffffffff81406fd0 r __ksymtab_irq_fpu_usable
ffffffff81406fe0 r __ksymtab_irq_regs
ffffffff81406ff0 r __ksymtab_irq_set_chip
ffffffff81407000 r __ksymtab_irq_set_chip_data
ffffffff81407010 r __ksymtab_irq_set_handler_data
ffffffff81407020 r __ksymtab_irq_set_irq_type
ffffffff81407030 r __ksymtab_irq_set_irq_wake
ffffffff81407040 r __ksymtab_irq_stat
ffffffff81407050 r __ksymtab_irq_to_desc
ffffffff81407060 r __ksymtab_is_bad_inode
ffffffff81407070 r __ksymtab_is_container_init
ffffffff81407080 r __ksymtab_isa_dma_bridge_buggy
ffffffff81407090 r __ksymtab_iter_div_u64_rem
ffffffff814070a0 r __ksymtab_iterate_supers_type
ffffffff814070b0 r __ksymtab_iunique
ffffffff814070c0 r __ksymtab_jiffies
ffffffff814070d0 r __ksymtab_jiffies_64
ffffffff814070e0 r __ksymtab_jiffies_64_to_clock_t
ffffffff814070f0 r __ksymtab_jiffies_to_clock_t
ffffffff81407100 r __ksymtab_jiffies_to_msecs
ffffffff81407110 r __ksymtab_jiffies_to_timespec
ffffffff81407120 r __ksymtab_jiffies_to_timeval
ffffffff81407130 r __ksymtab_jiffies_to_usecs
ffffffff81407140 r __ksymtab_kacpi_hotplug_wq
ffffffff81407150 r __ksymtab_kasprintf
ffffffff81407160 r __ksymtab_kblockd_schedule_delayed_work
ffffffff81407170 r __ksymtab_kblockd_schedule_work
ffffffff81407180 r __ksymtab_kd_mksound
ffffffff81407190 r __ksymtab_kern_path
ffffffff814071a0 r __ksymtab_kern_path_create
ffffffff814071b0 r __ksymtab_kern_unmount
ffffffff814071c0 r __ksymtab_kernel_accept
ffffffff814071d0 r __ksymtab_kernel_bind
ffffffff814071e0 r __ksymtab_kernel_connect
ffffffff814071f0 r __ksymtab_kernel_cpustat
ffffffff81407200 r __ksymtab_kernel_fpu_begin
ffffffff81407210 r __ksymtab_kernel_fpu_end
ffffffff81407220 r __ksymtab_kernel_getpeername
ffffffff81407230 r __ksymtab_kernel_getsockname
ffffffff81407240 r __ksymtab_kernel_getsockopt
ffffffff81407250 r __ksymtab_kernel_listen
ffffffff81407260 r __ksymtab_kernel_read
ffffffff81407270 r __ksymtab_kernel_recvmsg
ffffffff81407280 r __ksymtab_kernel_sendmsg
ffffffff81407290 r __ksymtab_kernel_sendpage
ffffffff814072a0 r __ksymtab_kernel_setsockopt
ffffffff814072b0 r __ksymtab_kernel_sock_ioctl
ffffffff814072c0 r __ksymtab_kernel_sock_shutdown
ffffffff814072d0 r __ksymtab_kernel_stack
ffffffff814072e0 r __ksymtab_kernel_thread
ffffffff814072f0 r __ksymtab_kfree
ffffffff81407300 r __ksymtab_kfree_skb
ffffffff81407310 r __ksymtab_kick_iocb
ffffffff81407320 r __ksymtab_kill_anon_super
ffffffff81407330 r __ksymtab_kill_bdev
ffffffff81407340 r __ksymtab_kill_block_super
ffffffff81407350 r __ksymtab_kill_fasync
ffffffff81407360 r __ksymtab_kill_litter_super
ffffffff81407370 r __ksymtab_kill_pgrp
ffffffff81407380 r __ksymtab_kill_pid
ffffffff81407390 r __ksymtab_km_new_mapping
ffffffff814073a0 r __ksymtab_km_policy_expired
ffffffff814073b0 r __ksymtab_km_policy_notify
ffffffff814073c0 r __ksymtab_km_query
ffffffff814073d0 r __ksymtab_km_report
ffffffff814073e0 r __ksymtab_km_state_expired
ffffffff814073f0 r __ksymtab_km_state_notify
ffffffff81407400 r __ksymtab_kmem_cache_alloc
ffffffff81407410 r __ksymtab_kmem_cache_alloc_node
ffffffff81407420 r __ksymtab_kmem_cache_create
ffffffff81407430 r __ksymtab_kmem_cache_destroy
ffffffff81407440 r __ksymtab_kmem_cache_free
ffffffff81407450 r __ksymtab_kmem_cache_shrink
ffffffff81407460 r __ksymtab_kmem_cache_size
ffffffff81407470 r __ksymtab_kmemdup
ffffffff81407480 r __ksymtab_kobject_add
ffffffff81407490 r __ksymtab_kobject_del
ffffffff814074a0 r __ksymtab_kobject_get
ffffffff814074b0 r __ksymtab_kobject_init
ffffffff814074c0 r __ksymtab_kobject_put
ffffffff814074d0 r __ksymtab_kobject_set_name
ffffffff814074e0 r __ksymtab_krealloc
ffffffff814074f0 r __ksymtab_kset_register
ffffffff81407500 r __ksymtab_kset_unregister
ffffffff81407510 r __ksymtab_ksize
ffffffff81407520 r __ksymtab_kstat
ffffffff81407530 r __ksymtab_kstrdup
ffffffff81407540 r __ksymtab_kstrndup
ffffffff81407550 r __ksymtab_kstrtoint
ffffffff81407560 r __ksymtab_kstrtoint_from_user
ffffffff81407570 r __ksymtab_kstrtol_from_user
ffffffff81407580 r __ksymtab_kstrtoll
ffffffff81407590 r __ksymtab_kstrtoll_from_user
ffffffff814075a0 r __ksymtab_kstrtos16
ffffffff814075b0 r __ksymtab_kstrtos16_from_user
ffffffff814075c0 r __ksymtab_kstrtos8
ffffffff814075d0 r __ksymtab_kstrtos8_from_user
ffffffff814075e0 r __ksymtab_kstrtou16
ffffffff814075f0 r __ksymtab_kstrtou16_from_user
ffffffff81407600 r __ksymtab_kstrtou8
ffffffff81407610 r __ksymtab_kstrtou8_from_user
ffffffff81407620 r __ksymtab_kstrtouint
ffffffff81407630 r __ksymtab_kstrtouint_from_user
ffffffff81407640 r __ksymtab_kstrtoul_from_user
ffffffff81407650 r __ksymtab_kstrtoull
ffffffff81407660 r __ksymtab_kstrtoull_from_user
ffffffff81407670 r __ksymtab_kthread_bind
ffffffff81407680 r __ksymtab_kthread_create_on_node
ffffffff81407690 r __ksymtab_kthread_should_stop
ffffffff814076a0 r __ksymtab_kthread_stop
ffffffff814076b0 r __ksymtab_kvasprintf
ffffffff814076c0 r __ksymtab_kzfree
ffffffff814076d0 r __ksymtab_laptop_mode
ffffffff814076e0 r __ksymtab_lease_get_mtime
ffffffff814076f0 r __ksymtab_lease_modify
ffffffff81407700 r __ksymtab_linkwatch_fire_event
ffffffff81407710 r __ksymtab_list_sort
ffffffff81407720 r __ksymtab_ll_rw_block
ffffffff81407730 r __ksymtab_load_nls
ffffffff81407740 r __ksymtab_load_nls_default
ffffffff81407750 r __ksymtab_local_bh_disable
ffffffff81407760 r __ksymtab_local_bh_enable
ffffffff81407770 r __ksymtab_local_bh_enable_ip
ffffffff81407780 r __ksymtab_lock_fb_info
ffffffff81407790 r __ksymtab_lock_may_read
ffffffff814077a0 r __ksymtab_lock_may_write
ffffffff814077b0 r __ksymtab_lock_rename
ffffffff814077c0 r __ksymtab_lock_sock_fast
ffffffff814077d0 r __ksymtab_lock_sock_nested
ffffffff814077e0 r __ksymtab_lock_super
ffffffff814077f0 r __ksymtab_locks_copy_lock
ffffffff81407800 r __ksymtab_locks_delete_block
ffffffff81407810 r __ksymtab_locks_free_lock
ffffffff81407820 r __ksymtab_locks_init_lock
ffffffff81407830 r __ksymtab_locks_mandatory_area
ffffffff81407840 r __ksymtab_locks_remove_posix
ffffffff81407850 r __ksymtab_lookup_bdev
ffffffff81407860 r __ksymtab_lookup_one_len
ffffffff81407870 r __ksymtab_loops_per_jiffy
ffffffff81407880 r __ksymtab_lro_flush_all
ffffffff81407890 r __ksymtab_lro_flush_pkt
ffffffff814078a0 r __ksymtab_lro_receive_frags
ffffffff814078b0 r __ksymtab_lro_receive_skb
ffffffff814078c0 r __ksymtab_mac_pton
ffffffff814078d0 r __ksymtab_machine_to_phys_mapping
ffffffff814078e0 r __ksymtab_machine_to_phys_nr
ffffffff814078f0 r __ksymtab_make_bad_inode
ffffffff81407900 r __ksymtab_malloc_sizes
ffffffff81407910 r __ksymtab_mangle_path
ffffffff81407920 r __ksymtab_mapping_tagged
ffffffff81407930 r __ksymtab_mark_buffer_async_write
ffffffff81407940 r __ksymtab_mark_buffer_dirty
ffffffff81407950 r __ksymtab_mark_buffer_dirty_inode
ffffffff81407960 r __ksymtab_mark_page_accessed
ffffffff81407970 r __ksymtab_match_hex
ffffffff81407980 r __ksymtab_match_int
ffffffff81407990 r __ksymtab_match_octal
ffffffff814079a0 r __ksymtab_match_strdup
ffffffff814079b0 r __ksymtab_match_strlcpy
ffffffff814079c0 r __ksymtab_match_token
ffffffff814079d0 r __ksymtab_may_umount
ffffffff814079e0 r __ksymtab_may_umount_tree
ffffffff814079f0 r __ksymtab_md5_transform
ffffffff81407a00 r __ksymtab_mem_section
ffffffff81407a10 r __ksymtab_memcg_socket_limit_enabled
ffffffff81407a20 r __ksymtab_memchr
ffffffff81407a30 r __ksymtab_memchr_inv
ffffffff81407a40 r __ksymtab_memcmp
ffffffff81407a50 r __ksymtab_memcpy
ffffffff81407a60 r __ksymtab_memcpy_fromiovec
ffffffff81407a70 r __ksymtab_memcpy_fromiovecend
ffffffff81407a80 r __ksymtab_memcpy_toiovec
ffffffff81407a90 r __ksymtab_memcpy_toiovecend
ffffffff81407aa0 r __ksymtab_memdup_user
ffffffff81407ab0 r __ksymtab_memmove
ffffffff81407ac0 r __ksymtab_memory_read_from_buffer
ffffffff81407ad0 r __ksymtab_memparse
ffffffff81407ae0 r __ksymtab_mempool_alloc
ffffffff81407af0 r __ksymtab_mempool_alloc_pages
ffffffff81407b00 r __ksymtab_mempool_alloc_slab
ffffffff81407b10 r __ksymtab_mempool_create
ffffffff81407b20 r __ksymtab_mempool_create_node
ffffffff81407b30 r __ksymtab_mempool_destroy
ffffffff81407b40 r __ksymtab_mempool_free
ffffffff81407b50 r __ksymtab_mempool_free_pages
ffffffff81407b60 r __ksymtab_mempool_free_slab
ffffffff81407b70 r __ksymtab_mempool_kfree
ffffffff81407b80 r __ksymtab_mempool_kmalloc
ffffffff81407b90 r __ksymtab_mempool_resize
ffffffff81407ba0 r __ksymtab_memscan
ffffffff81407bb0 r __ksymtab_memset
ffffffff81407bc0 r __ksymtab_migrate_page
ffffffff81407bd0 r __ksymtab_misc_deregister
ffffffff81407be0 r __ksymtab_misc_register
ffffffff81407bf0 r __ksymtab_mktime
ffffffff81407c00 r __ksymtab_mnt_drop_write_file
ffffffff81407c10 r __ksymtab_mnt_pin
ffffffff81407c20 r __ksymtab_mnt_set_expiry
ffffffff81407c30 r __ksymtab_mnt_unpin
ffffffff81407c40 r __ksymtab_mntget
ffffffff81407c50 r __ksymtab_mntput
ffffffff81407c60 r __ksymtab_mod_timer
ffffffff81407c70 r __ksymtab_mod_timer_pending
ffffffff81407c80 r __ksymtab_mod_timer_pinned
ffffffff81407c90 r __ksymtab_mod_zone_page_state
ffffffff81407ca0 r __ksymtab_module_put
ffffffff81407cb0 r __ksymtab_module_refcount
ffffffff81407cc0 r __ksymtab_mount_bdev
ffffffff81407cd0 r __ksymtab_mount_nodev
ffffffff81407ce0 r __ksymtab_mount_ns
ffffffff81407cf0 r __ksymtab_mount_pseudo
ffffffff81407d00 r __ksymtab_mount_single
ffffffff81407d10 r __ksymtab_mount_subtree
ffffffff81407d20 r __ksymtab_movable_zone
ffffffff81407d30 r __ksymtab_mpage_readpage
ffffffff81407d40 r __ksymtab_mpage_readpages
ffffffff81407d50 r __ksymtab_mpage_writepage
ffffffff81407d60 r __ksymtab_mpage_writepages
ffffffff81407d70 r __ksymtab_msecs_to_jiffies
ffffffff81407d80 r __ksymtab_msleep
ffffffff81407d90 r __ksymtab_msleep_interruptible
ffffffff81407da0 r __ksymtab_msrs_alloc
ffffffff81407db0 r __ksymtab_msrs_free
ffffffff81407dc0 r __ksymtab_mtrr_add
ffffffff81407dd0 r __ksymtab_mtrr_del
ffffffff81407de0 r __ksymtab_mutex_lock
ffffffff81407df0 r __ksymtab_mutex_lock_interruptible
ffffffff81407e00 r __ksymtab_mutex_lock_killable
ffffffff81407e10 r __ksymtab_mutex_trylock
ffffffff81407e20 r __ksymtab_mutex_unlock
ffffffff81407e30 r __ksymtab_n_tty_compat_ioctl_helper
ffffffff81407e40 r __ksymtab_n_tty_ioctl_helper
ffffffff81407e50 r __ksymtab_names_cachep
ffffffff81407e60 r __ksymtab_napi_complete
ffffffff81407e70 r __ksymtab_napi_frags_finish
ffffffff81407e80 r __ksymtab_napi_frags_skb
ffffffff81407e90 r __ksymtab_napi_get_frags
ffffffff81407ea0 r __ksymtab_napi_gro_flush
ffffffff81407eb0 r __ksymtab_napi_gro_frags
ffffffff81407ec0 r __ksymtab_napi_gro_receive
ffffffff81407ed0 r __ksymtab_napi_skb_finish
ffffffff81407ee0 r __ksymtab_native_io_delay
ffffffff81407ef0 r __ksymtab_native_rdmsr_safe_regs
ffffffff81407f00 r __ksymtab_native_read_tsc
ffffffff81407f10 r __ksymtab_native_wrmsr_safe_regs
ffffffff81407f20 r __ksymtab_neigh_changeaddr
ffffffff81407f30 r __ksymtab_neigh_compat_output
ffffffff81407f40 r __ksymtab_neigh_connected_output
ffffffff81407f50 r __ksymtab_neigh_create
ffffffff81407f60 r __ksymtab_neigh_destroy
ffffffff81407f70 r __ksymtab_neigh_direct_output
ffffffff81407f80 r __ksymtab_neigh_event_ns
ffffffff81407f90 r __ksymtab_neigh_for_each
ffffffff81407fa0 r __ksymtab_neigh_ifdown
ffffffff81407fb0 r __ksymtab_neigh_lookup
ffffffff81407fc0 r __ksymtab_neigh_lookup_nodev
ffffffff81407fd0 r __ksymtab_neigh_parms_alloc
ffffffff81407fe0 r __ksymtab_neigh_parms_release
ffffffff81407ff0 r __ksymtab_neigh_rand_reach_time
ffffffff81408000 r __ksymtab_neigh_resolve_output
ffffffff81408010 r __ksymtab_neigh_seq_next
ffffffff81408020 r __ksymtab_neigh_seq_start
ffffffff81408030 r __ksymtab_neigh_seq_stop
ffffffff81408040 r __ksymtab_neigh_sysctl_register
ffffffff81408050 r __ksymtab_neigh_sysctl_unregister
ffffffff81408060 r __ksymtab_neigh_table_clear
ffffffff81408070 r __ksymtab_neigh_table_init
ffffffff81408080 r __ksymtab_neigh_table_init_no_netlink
ffffffff81408090 r __ksymtab_neigh_update
ffffffff814080a0 r __ksymtab_net_disable_timestamp
ffffffff814080b0 r __ksymtab_net_enable_timestamp
ffffffff814080c0 r __ksymtab_net_msg_warn
ffffffff814080d0 r __ksymtab_net_ratelimit
ffffffff814080e0 r __ksymtab_netdev_alert
ffffffff814080f0 r __ksymtab_netdev_bonding_change
ffffffff81408100 r __ksymtab_netdev_boot_setup_check
ffffffff81408110 r __ksymtab_netdev_change_features
ffffffff81408120 r __ksymtab_netdev_class_create_file
ffffffff81408130 r __ksymtab_netdev_class_remove_file
ffffffff81408140 r __ksymtab_netdev_crit
ffffffff81408150 r __ksymtab_netdev_emerg
ffffffff81408160 r __ksymtab_netdev_err
ffffffff81408170 r __ksymtab_netdev_features_change
ffffffff81408180 r __ksymtab_netdev_increment_features
ffffffff81408190 r __ksymtab_netdev_info
ffffffff814081a0 r __ksymtab_netdev_notice
ffffffff814081b0 r __ksymtab_netdev_printk
ffffffff814081c0 r __ksymtab_netdev_refcnt_read
ffffffff814081d0 r __ksymtab_netdev_rx_csum_fault
ffffffff814081e0 r __ksymtab_netdev_set_bond_master
ffffffff814081f0 r __ksymtab_netdev_set_master
ffffffff81408200 r __ksymtab_netdev_state_change
ffffffff81408210 r __ksymtab_netdev_stats_to_stats64
ffffffff81408220 r __ksymtab_netdev_update_features
ffffffff81408230 r __ksymtab_netdev_warn
ffffffff81408240 r __ksymtab_netif_carrier_off
ffffffff81408250 r __ksymtab_netif_carrier_on
ffffffff81408260 r __ksymtab_netif_device_attach
ffffffff81408270 r __ksymtab_netif_device_detach
ffffffff81408280 r __ksymtab_netif_napi_add
ffffffff81408290 r __ksymtab_netif_napi_del
ffffffff814082a0 r __ksymtab_netif_notify_peers
ffffffff814082b0 r __ksymtab_netif_receive_skb
ffffffff814082c0 r __ksymtab_netif_rx
ffffffff814082d0 r __ksymtab_netif_rx_ni
ffffffff814082e0 r __ksymtab_netif_set_real_num_rx_queues
ffffffff814082f0 r __ksymtab_netif_set_real_num_tx_queues
ffffffff81408300 r __ksymtab_netif_skb_features
ffffffff81408310 r __ksymtab_netif_stacked_transfer_operstate
ffffffff81408320 r __ksymtab_netlink_ack
ffffffff81408330 r __ksymtab_netlink_broadcast
ffffffff81408340 r __ksymtab_netlink_broadcast_filtered
ffffffff81408350 r __ksymtab_netlink_dump_start
ffffffff81408360 r __ksymtab_netlink_kernel_create
ffffffff81408370 r __ksymtab_netlink_kernel_release
ffffffff81408380 r __ksymtab_netlink_rcv_skb
ffffffff81408390 r __ksymtab_netlink_register_notifier
ffffffff814083a0 r __ksymtab_netlink_set_err
ffffffff814083b0 r __ksymtab_netlink_set_nonroot
ffffffff814083c0 r __ksymtab_netlink_unicast
ffffffff814083d0 r __ksymtab_netlink_unregister_notifier
ffffffff814083e0 r __ksymtab_new_inode
ffffffff814083f0 r __ksymtab_nla_append
ffffffff81408400 r __ksymtab_nla_find
ffffffff81408410 r __ksymtab_nla_memcmp
ffffffff81408420 r __ksymtab_nla_memcpy
ffffffff81408430 r __ksymtab_nla_parse
ffffffff81408440 r __ksymtab_nla_policy_len
ffffffff81408450 r __ksymtab_nla_put
ffffffff81408460 r __ksymtab_nla_put_nohdr
ffffffff81408470 r __ksymtab_nla_reserve
ffffffff81408480 r __ksymtab_nla_reserve_nohdr
ffffffff81408490 r __ksymtab_nla_strcmp
ffffffff814084a0 r __ksymtab_nla_strlcpy
ffffffff814084b0 r __ksymtab_nla_validate
ffffffff814084c0 r __ksymtab_nlmsg_notify
ffffffff814084d0 r __ksymtab_no_llseek
ffffffff814084e0 r __ksymtab_no_pci_devices
ffffffff814084f0 r __ksymtab_nobh_truncate_page
ffffffff81408500 r __ksymtab_nobh_write_begin
ffffffff81408510 r __ksymtab_nobh_write_end
ffffffff81408520 r __ksymtab_nobh_writepage
ffffffff81408530 r __ksymtab_node_data
ffffffff81408540 r __ksymtab_node_states
ffffffff81408550 r __ksymtab_node_to_cpumask_map
ffffffff81408560 r __ksymtab_nonseekable_open
ffffffff81408570 r __ksymtab_noop_fsync
ffffffff81408580 r __ksymtab_noop_llseek
ffffffff81408590 r __ksymtab_noop_qdisc
ffffffff814085a0 r __ksymtab_notify_change
ffffffff814085b0 r __ksymtab_nr_cpu_ids
ffffffff814085c0 r __ksymtab_nr_node_ids
ffffffff814085d0 r __ksymtab_nr_online_nodes
ffffffff814085e0 r __ksymtab_ns_capable
ffffffff814085f0 r __ksymtab_ns_to_timespec
ffffffff81408600 r __ksymtab_ns_to_timeval
ffffffff81408610 r __ksymtab_num_physpages
ffffffff81408620 r __ksymtab_num_registered_fb
ffffffff81408630 r __ksymtab_numa_node
ffffffff81408640 r __ksymtab_nvram_check_checksum
ffffffff81408650 r __ksymtab_nvram_read_byte
ffffffff81408660 r __ksymtab_nvram_write_byte
ffffffff81408670 r __ksymtab_on_each_cpu
ffffffff81408680 r __ksymtab_on_each_cpu_cond
ffffffff81408690 r __ksymtab_on_each_cpu_mask
ffffffff814086a0 r __ksymtab_oops_in_progress
ffffffff814086b0 r __ksymtab_open_exec
ffffffff814086c0 r __ksymtab_out_of_line_wait_on_bit
ffffffff814086d0 r __ksymtab_out_of_line_wait_on_bit_lock
ffffffff814086e0 r __ksymtab_overflowgid
ffffffff814086f0 r __ksymtab_overflowuid
ffffffff81408700 r __ksymtab_override_creds
ffffffff81408710 r __ksymtab_page_follow_link_light
ffffffff81408720 r __ksymtab_page_put_link
ffffffff81408730 r __ksymtab_page_readlink
ffffffff81408740 r __ksymtab_page_symlink
ffffffff81408750 r __ksymtab_page_symlink_inode_operations
ffffffff81408760 r __ksymtab_page_zero_new_buffers
ffffffff81408770 r __ksymtab_pagecache_write_begin
ffffffff81408780 r __ksymtab_pagecache_write_end
ffffffff81408790 r __ksymtab_pagevec_lookup
ffffffff814087a0 r __ksymtab_pagevec_lookup_tag
ffffffff814087b0 r __ksymtab_panic
ffffffff814087c0 r __ksymtab_panic_blink
ffffffff814087d0 r __ksymtab_panic_notifier_list
ffffffff814087e0 r __ksymtab_param_array_ops
ffffffff814087f0 r __ksymtab_param_get_bool
ffffffff81408800 r __ksymtab_param_get_byte
ffffffff81408810 r __ksymtab_param_get_charp
ffffffff81408820 r __ksymtab_param_get_int
ffffffff81408830 r __ksymtab_param_get_invbool
ffffffff81408840 r __ksymtab_param_get_long
ffffffff81408850 r __ksymtab_param_get_short
ffffffff81408860 r __ksymtab_param_get_string
ffffffff81408870 r __ksymtab_param_get_uint
ffffffff81408880 r __ksymtab_param_get_ulong
ffffffff81408890 r __ksymtab_param_get_ushort
ffffffff814088a0 r __ksymtab_param_ops_bint
ffffffff814088b0 r __ksymtab_param_ops_bool
ffffffff814088c0 r __ksymtab_param_ops_byte
ffffffff814088d0 r __ksymtab_param_ops_charp
ffffffff814088e0 r __ksymtab_param_ops_int
ffffffff814088f0 r __ksymtab_param_ops_invbool
ffffffff81408900 r __ksymtab_param_ops_long
ffffffff81408910 r __ksymtab_param_ops_short
ffffffff81408920 r __ksymtab_param_ops_string
ffffffff81408930 r __ksymtab_param_ops_uint
ffffffff81408940 r __ksymtab_param_ops_ulong
ffffffff81408950 r __ksymtab_param_ops_ushort
ffffffff81408960 r __ksymtab_param_set_bint
ffffffff81408970 r __ksymtab_param_set_bool
ffffffff81408980 r __ksymtab_param_set_byte
ffffffff81408990 r __ksymtab_param_set_charp
ffffffff814089a0 r __ksymtab_param_set_copystring
ffffffff814089b0 r __ksymtab_param_set_int
ffffffff814089c0 r __ksymtab_param_set_invbool
ffffffff814089d0 r __ksymtab_param_set_long
ffffffff814089e0 r __ksymtab_param_set_short
ffffffff814089f0 r __ksymtab_param_set_uint
ffffffff81408a00 r __ksymtab_param_set_ulong
ffffffff81408a10 r __ksymtab_param_set_ushort
ffffffff81408a20 r __ksymtab_path_get
ffffffff81408a30 r __ksymtab_path_is_under
ffffffff81408a40 r __ksymtab_path_put
ffffffff81408a50 r __ksymtab_pci_add_new_bus
ffffffff81408a60 r __ksymtab_pci_add_resource
ffffffff81408a70 r __ksymtab_pci_add_resource_offset
ffffffff81408a80 r __ksymtab_pci_assign_resource
ffffffff81408a90 r __ksymtab_pci_back_from_sleep
ffffffff81408aa0 r __ksymtab_pci_biosrom_size
ffffffff81408ab0 r __ksymtab_pci_bus_add_devices
ffffffff81408ac0 r __ksymtab_pci_bus_alloc_resource
ffffffff81408ad0 r __ksymtab_pci_bus_assign_resources
ffffffff81408ae0 r __ksymtab_pci_bus_find_capability
ffffffff81408af0 r __ksymtab_pci_bus_read_config_byte
ffffffff81408b00 r __ksymtab_pci_bus_read_config_dword
ffffffff81408b10 r __ksymtab_pci_bus_read_config_word
ffffffff81408b20 r __ksymtab_pci_bus_read_dev_vendor_id
ffffffff81408b30 r __ksymtab_pci_bus_set_ops
ffffffff81408b40 r __ksymtab_pci_bus_size_bridges
ffffffff81408b50 r __ksymtab_pci_bus_type
ffffffff81408b60 r __ksymtab_pci_bus_write_config_byte
ffffffff81408b70 r __ksymtab_pci_bus_write_config_dword
ffffffff81408b80 r __ksymtab_pci_bus_write_config_word
ffffffff81408b90 r __ksymtab_pci_choose_state
ffffffff81408ba0 r __ksymtab_pci_claim_resource
ffffffff81408bb0 r __ksymtab_pci_clear_master
ffffffff81408bc0 r __ksymtab_pci_clear_mwi
ffffffff81408bd0 r __ksymtab_pci_dev_driver
ffffffff81408be0 r __ksymtab_pci_dev_get
ffffffff81408bf0 r __ksymtab_pci_dev_present
ffffffff81408c00 r __ksymtab_pci_dev_put
ffffffff81408c10 r __ksymtab_pci_disable_device
ffffffff81408c20 r __ksymtab_pci_disable_ido
ffffffff81408c30 r __ksymtab_pci_disable_link_state
ffffffff81408c40 r __ksymtab_pci_disable_link_state_locked
ffffffff81408c50 r __ksymtab_pci_disable_ltr
ffffffff81408c60 r __ksymtab_pci_disable_msi
ffffffff81408c70 r __ksymtab_pci_disable_msix
ffffffff81408c80 r __ksymtab_pci_disable_obff
ffffffff81408c90 r __ksymtab_pci_enable_bridges
ffffffff81408ca0 r __ksymtab_pci_enable_device
ffffffff81408cb0 r __ksymtab_pci_enable_device_io
ffffffff81408cc0 r __ksymtab_pci_enable_device_mem
ffffffff81408cd0 r __ksymtab_pci_enable_ido
ffffffff81408ce0 r __ksymtab_pci_enable_ltr
ffffffff81408cf0 r __ksymtab_pci_enable_msi_block
ffffffff81408d00 r __ksymtab_pci_enable_msix
ffffffff81408d10 r __ksymtab_pci_enable_obff
ffffffff81408d20 r __ksymtab_pci_find_bus
ffffffff81408d30 r __ksymtab_pci_find_capability
ffffffff81408d40 r __ksymtab_pci_find_next_bus
ffffffff81408d50 r __ksymtab_pci_find_parent_resource
ffffffff81408d60 r __ksymtab_pci_fixup_cardbus
ffffffff81408d70 r __ksymtab_pci_fixup_device
ffffffff81408d80 r __ksymtab_pci_free_resource_list
ffffffff81408d90 r __ksymtab_pci_get_class
ffffffff81408da0 r __ksymtab_pci_get_device
ffffffff81408db0 r __ksymtab_pci_get_domain_bus_and_slot
ffffffff81408dc0 r __ksymtab_pci_get_slot
ffffffff81408dd0 r __ksymtab_pci_get_subsys
ffffffff81408de0 r __ksymtab_pci_iomap
ffffffff81408df0 r __ksymtab_pci_iounmap
ffffffff81408e00 r __ksymtab_pci_lost_interrupt
ffffffff81408e10 r __ksymtab_pci_ltr_supported
ffffffff81408e20 r __ksymtab_pci_map_biosrom
ffffffff81408e30 r __ksymtab_pci_map_rom
ffffffff81408e40 r __ksymtab_pci_match_id
ffffffff81408e50 r __ksymtab_pci_mem_start
ffffffff81408e60 r __ksymtab_pci_msi_enabled
ffffffff81408e70 r __ksymtab_pci_pci_problems
ffffffff81408e80 r __ksymtab_pci_pme_active
ffffffff81408e90 r __ksymtab_pci_pme_capable
ffffffff81408ea0 r __ksymtab_pci_prepare_to_sleep
ffffffff81408eb0 r __ksymtab_pci_read_vpd
ffffffff81408ec0 r __ksymtab_pci_reenable_device
ffffffff81408ed0 r __ksymtab_pci_release_region
ffffffff81408ee0 r __ksymtab_pci_release_regions
ffffffff81408ef0 r __ksymtab_pci_release_selected_regions
ffffffff81408f00 r __ksymtab_pci_remove_bus
ffffffff81408f10 r __ksymtab_pci_request_region
ffffffff81408f20 r __ksymtab_pci_request_region_exclusive
ffffffff81408f30 r __ksymtab_pci_request_regions
ffffffff81408f40 r __ksymtab_pci_request_regions_exclusive
ffffffff81408f50 r __ksymtab_pci_request_selected_regions
ffffffff81408f60 r __ksymtab_pci_request_selected_regions_exclusive
ffffffff81408f70 r __ksymtab_pci_restore_state
ffffffff81408f80 r __ksymtab_pci_root_buses
ffffffff81408f90 r __ksymtab_pci_save_state
ffffffff81408fa0 r __ksymtab_pci_scan_bridge
ffffffff81408fb0 r __ksymtab_pci_scan_bus
ffffffff81408fc0 r __ksymtab_pci_scan_bus_parented
ffffffff81408fd0 r __ksymtab_pci_scan_root_bus
ffffffff81408fe0 r __ksymtab_pci_scan_single_device
ffffffff81408ff0 r __ksymtab_pci_scan_slot
ffffffff81409000 r __ksymtab_pci_select_bars
ffffffff81409010 r __ksymtab_pci_set_dma_max_seg_size
ffffffff81409020 r __ksymtab_pci_set_dma_seg_boundary
ffffffff81409030 r __ksymtab_pci_set_ltr
ffffffff81409040 r __ksymtab_pci_set_master
ffffffff81409050 r __ksymtab_pci_set_mwi
ffffffff81409060 r __ksymtab_pci_set_power_state
ffffffff81409070 r __ksymtab_pci_setup_cardbus
ffffffff81409080 r __ksymtab_pci_stop_and_remove_behind_bridge
ffffffff81409090 r __ksymtab_pci_stop_and_remove_bus_device
ffffffff814090a0 r __ksymtab_pci_target_state
ffffffff814090b0 r __ksymtab_pci_try_set_mwi
ffffffff814090c0 r __ksymtab_pci_unmap_biosrom
ffffffff814090d0 r __ksymtab_pci_unmap_rom
ffffffff814090e0 r __ksymtab_pci_unregister_driver
ffffffff814090f0 r __ksymtab_pci_vpd_truncate
ffffffff81409100 r __ksymtab_pci_wake_from_d3
ffffffff81409110 r __ksymtab_pci_write_vpd
ffffffff81409120 r __ksymtab_pcibios_align_resource
ffffffff81409130 r __ksymtab_pcibios_bus_to_resource
ffffffff81409140 r __ksymtab_pcibios_resource_to_bus
ffffffff81409150 r __ksymtab_pcie_aspm_enabled
ffffffff81409160 r __ksymtab_pcie_aspm_support_enabled
ffffffff81409170 r __ksymtab_pcie_get_readrq
ffffffff81409180 r __ksymtab_pcie_port_service_register
ffffffff81409190 r __ksymtab_pcie_port_service_unregister
ffffffff814091a0 r __ksymtab_pcie_set_readrq
ffffffff814091b0 r __ksymtab_pcim_enable_device
ffffffff814091c0 r __ksymtab_pcim_iomap
ffffffff814091d0 r __ksymtab_pcim_iomap_regions
ffffffff814091e0 r __ksymtab_pcim_iomap_regions_request_all
ffffffff814091f0 r __ksymtab_pcim_iomap_table
ffffffff81409200 r __ksymtab_pcim_iounmap
ffffffff81409210 r __ksymtab_pcim_iounmap_regions
ffffffff81409220 r __ksymtab_pcim_pin_device
ffffffff81409230 r __ksymtab_pcix_get_max_mmrbc
ffffffff81409240 r __ksymtab_pcix_get_mmrbc
ffffffff81409250 r __ksymtab_pcix_set_mmrbc
ffffffff81409260 r __ksymtab_percpu_counter_batch
ffffffff81409270 r __ksymtab_percpu_counter_compare
ffffffff81409280 r __ksymtab_percpu_counter_destroy
ffffffff81409290 r __ksymtab_percpu_counter_set
ffffffff814092a0 r __ksymtab_pfifo_fast_ops
ffffffff814092b0 r __ksymtab_pid_task
ffffffff814092c0 r __ksymtab_ping_prot
ffffffff814092d0 r __ksymtab_pipe_lock
ffffffff814092e0 r __ksymtab_pipe_to_file
ffffffff814092f0 r __ksymtab_pipe_unlock
ffffffff81409300 r __ksymtab_pm_power_off
ffffffff81409310 r __ksymtab_pm_set_vt_switch
ffffffff81409320 r __ksymtab_pneigh_enqueue
ffffffff81409330 r __ksymtab_pneigh_lookup
ffffffff81409340 r __ksymtab_pnp_activate_dev
ffffffff81409350 r __ksymtab_pnp_device_attach
ffffffff81409360 r __ksymtab_pnp_device_detach
ffffffff81409370 r __ksymtab_pnp_disable_dev
ffffffff81409380 r __ksymtab_pnp_get_resource
ffffffff81409390 r __ksymtab_pnp_is_active
ffffffff814093a0 r __ksymtab_pnp_platform_devices
ffffffff814093b0 r __ksymtab_pnp_possible_config
ffffffff814093c0 r __ksymtab_pnp_range_reserved
ffffffff814093d0 r __ksymtab_pnp_register_card_driver
ffffffff814093e0 r __ksymtab_pnp_register_driver
ffffffff814093f0 r __ksymtab_pnp_release_card_device
ffffffff81409400 r __ksymtab_pnp_request_card_device
ffffffff81409410 r __ksymtab_pnp_start_dev
ffffffff81409420 r __ksymtab_pnp_stop_dev
ffffffff81409430 r __ksymtab_pnp_unregister_card_driver
ffffffff81409440 r __ksymtab_pnp_unregister_driver
ffffffff81409450 r __ksymtab_pnpacpi_protocol
ffffffff81409460 r __ksymtab_poll_freewait
ffffffff81409470 r __ksymtab_poll_initwait
ffffffff81409480 r __ksymtab_poll_schedule_timeout
ffffffff81409490 r __ksymtab_posix_acl_alloc
ffffffff814094a0 r __ksymtab_posix_acl_chmod
ffffffff814094b0 r __ksymtab_posix_acl_create
ffffffff814094c0 r __ksymtab_posix_acl_equiv_mode
ffffffff814094d0 r __ksymtab_posix_acl_from_mode
ffffffff814094e0 r __ksymtab_posix_acl_from_xattr
ffffffff814094f0 r __ksymtab_posix_acl_init
ffffffff81409500 r __ksymtab_posix_acl_to_xattr
ffffffff81409510 r __ksymtab_posix_acl_valid
ffffffff81409520 r __ksymtab_posix_lock_file
ffffffff81409530 r __ksymtab_posix_lock_file_wait
ffffffff81409540 r __ksymtab_posix_test_lock
ffffffff81409550 r __ksymtab_posix_unblock_lock
ffffffff81409560 r __ksymtab_prandom32
ffffffff81409570 r __ksymtab_prepare_binprm
ffffffff81409580 r __ksymtab_prepare_creds
ffffffff81409590 r __ksymtab_prepare_kernel_cred
ffffffff814095a0 r __ksymtab_prepare_to_wait
ffffffff814095b0 r __ksymtab_prepare_to_wait_exclusive
ffffffff814095c0 r __ksymtab_print_hex_dump
ffffffff814095d0 r __ksymtab_print_hex_dump_bytes
ffffffff814095e0 r __ksymtab_printk
ffffffff814095f0 r __ksymtab_printk_timed_ratelimit
ffffffff81409600 r __ksymtab_probe_irq_mask
ffffffff81409610 r __ksymtab_probe_irq_off
ffffffff81409620 r __ksymtab_probe_irq_on
ffffffff81409630 r __ksymtab_proc_create_data
ffffffff81409640 r __ksymtab_proc_dointvec
ffffffff81409650 r __ksymtab_proc_dointvec_jiffies
ffffffff81409660 r __ksymtab_proc_dointvec_minmax
ffffffff81409670 r __ksymtab_proc_dointvec_ms_jiffies
ffffffff81409680 r __ksymtab_proc_dointvec_userhz_jiffies
ffffffff81409690 r __ksymtab_proc_dostring
ffffffff814096a0 r __ksymtab_proc_doulongvec_minmax
ffffffff814096b0 r __ksymtab_proc_doulongvec_ms_jiffies_minmax
ffffffff814096c0 r __ksymtab_proc_mkdir
ffffffff814096d0 r __ksymtab_proc_mkdir_mode
ffffffff814096e0 r __ksymtab_proc_symlink
ffffffff814096f0 r __ksymtab_profile_pc
ffffffff81409700 r __ksymtab_proto_register
ffffffff81409710 r __ksymtab_proto_unregister
ffffffff81409720 r __ksymtab_ps2_begin_command
ffffffff81409730 r __ksymtab_ps2_cmd_aborted
ffffffff81409740 r __ksymtab_ps2_command
ffffffff81409750 r __ksymtab_ps2_drain
ffffffff81409760 r __ksymtab_ps2_end_command
ffffffff81409770 r __ksymtab_ps2_handle_ack
ffffffff81409780 r __ksymtab_ps2_handle_response
ffffffff81409790 r __ksymtab_ps2_init
ffffffff814097a0 r __ksymtab_ps2_is_keyboard_id
ffffffff814097b0 r __ksymtab_ps2_sendbyte
ffffffff814097c0 r __ksymtab_pskb_expand_head
ffffffff814097d0 r __ksymtab_put_cmsg
ffffffff814097e0 r __ksymtab_put_disk
ffffffff814097f0 r __ksymtab_put_io_context
ffffffff81409800 r __ksymtab_put_page
ffffffff81409810 r __ksymtab_put_pages_list
ffffffff81409820 r __ksymtab_put_tty_driver
ffffffff81409830 r __ksymtab_put_unused_fd
ffffffff81409840 r __ksymtab_putname
ffffffff81409850 r __ksymtab_pv_cpu_ops
ffffffff81409860 r __ksymtab_pv_irq_ops
ffffffff81409870 r __ksymtab_pv_lock_ops
ffffffff81409880 r __ksymtab_pv_mmu_ops
ffffffff81409890 r __ksymtab_qdisc_create_dflt
ffffffff814098a0 r __ksymtab_qdisc_destroy
ffffffff814098b0 r __ksymtab_qdisc_reset
ffffffff814098c0 r __ksymtab_radix_tree_delete
ffffffff814098d0 r __ksymtab_radix_tree_gang_lookup
ffffffff814098e0 r __ksymtab_radix_tree_gang_lookup_slot
ffffffff814098f0 r __ksymtab_radix_tree_gang_lookup_tag
ffffffff81409900 r __ksymtab_radix_tree_gang_lookup_tag_slot
ffffffff81409910 r __ksymtab_radix_tree_insert
ffffffff81409920 r __ksymtab_radix_tree_lookup
ffffffff81409930 r __ksymtab_radix_tree_lookup_slot
ffffffff81409940 r __ksymtab_radix_tree_next_chunk
ffffffff81409950 r __ksymtab_radix_tree_next_hole
ffffffff81409960 r __ksymtab_radix_tree_preload
ffffffff81409970 r __ksymtab_radix_tree_prev_hole
ffffffff81409980 r __ksymtab_radix_tree_range_tag_if_tagged
ffffffff81409990 r __ksymtab_radix_tree_tag_clear
ffffffff814099a0 r __ksymtab_radix_tree_tag_get
ffffffff814099b0 r __ksymtab_radix_tree_tag_set
ffffffff814099c0 r __ksymtab_radix_tree_tagged
ffffffff814099d0 r __ksymtab_random32
ffffffff814099e0 r __ksymtab_rb_augment_erase_begin
ffffffff814099f0 r __ksymtab_rb_augment_erase_end
ffffffff81409a00 r __ksymtab_rb_augment_insert
ffffffff81409a10 r __ksymtab_rb_erase
ffffffff81409a20 r __ksymtab_rb_first
ffffffff81409a30 r __ksymtab_rb_insert_color
ffffffff81409a40 r __ksymtab_rb_last
ffffffff81409a50 r __ksymtab_rb_next
ffffffff81409a60 r __ksymtab_rb_prev
ffffffff81409a70 r __ksymtab_rb_replace_node
ffffffff81409a80 r __ksymtab_rdmsr_on_cpu
ffffffff81409a90 r __ksymtab_rdmsr_on_cpus
ffffffff81409aa0 r __ksymtab_rdmsr_safe_on_cpu
ffffffff81409ab0 r __ksymtab_rdmsr_safe_regs_on_cpu
ffffffff81409ac0 r __ksymtab_read_cache_page
ffffffff81409ad0 r __ksymtab_read_cache_page_async
ffffffff81409ae0 r __ksymtab_read_cache_page_gfp
ffffffff81409af0 r __ksymtab_read_cache_pages
ffffffff81409b00 r __ksymtab_read_dev_sector
ffffffff81409b10 r __ksymtab_recalc_sigpending
ffffffff81409b20 r __ksymtab_recalibrate_cpu_khz
ffffffff81409b30 r __ksymtab_reciprocal_value
ffffffff81409b40 r __ksymtab_redirty_page_for_writepage
ffffffff81409b50 r __ksymtab_redraw_screen
ffffffff81409b60 r __ksymtab_register_acpi_notifier
ffffffff81409b70 r __ksymtab_register_blkdev
ffffffff81409b80 r __ksymtab_register_chrdev_region
ffffffff81409b90 r __ksymtab_register_con_driver
ffffffff81409ba0 r __ksymtab_register_console
ffffffff81409bb0 r __ksymtab_register_cpu_notifier
ffffffff81409bc0 r __ksymtab_register_exec_domain
ffffffff81409bd0 r __ksymtab_register_filesystem
ffffffff81409be0 r __ksymtab_register_framebuffer
ffffffff81409bf0 r __ksymtab_register_gifconf
ffffffff81409c00 r __ksymtab_register_inetaddr_notifier
ffffffff81409c10 r __ksymtab_register_module_notifier
ffffffff81409c20 r __ksymtab_register_netdev
ffffffff81409c30 r __ksymtab_register_netdevice
ffffffff81409c40 r __ksymtab_register_netdevice_notifier
ffffffff81409c50 r __ksymtab_register_nls
ffffffff81409c60 r __ksymtab_register_reboot_notifier
ffffffff81409c70 r __ksymtab_register_shrinker
ffffffff81409c80 r __ksymtab_register_sysctl
ffffffff81409c90 r __ksymtab_register_sysctl_paths
ffffffff81409ca0 r __ksymtab_register_sysctl_table
ffffffff81409cb0 r __ksymtab_register_xen_selfballooning
ffffffff81409cc0 r __ksymtab_registered_fb
ffffffff81409cd0 r __ksymtab_release_evntsel_nmi
ffffffff81409ce0 r __ksymtab_release_firmware
ffffffff81409cf0 r __ksymtab_release_pages
ffffffff81409d00 r __ksymtab_release_perfctr_nmi
ffffffff81409d10 r __ksymtab_release_resource
ffffffff81409d20 r __ksymtab_release_sock
ffffffff81409d30 r __ksymtab_remap_pfn_range
ffffffff81409d40 r __ksymtab_remap_vmalloc_range
ffffffff81409d50 r __ksymtab_remove_arg_zero
ffffffff81409d60 r __ksymtab_remove_conflicting_framebuffers
ffffffff81409d70 r __ksymtab_remove_proc_entry
ffffffff81409d80 r __ksymtab_remove_wait_queue
ffffffff81409d90 r __ksymtab_rename_lock
ffffffff81409da0 r __ksymtab_replace_mount_options
ffffffff81409db0 r __ksymtab_request_dma
ffffffff81409dc0 r __ksymtab_request_firmware
ffffffff81409dd0 r __ksymtab_request_firmware_nowait
ffffffff81409de0 r __ksymtab_request_resource
ffffffff81409df0 r __ksymtab_request_threaded_irq
ffffffff81409e00 r __ksymtab_reserve_evntsel_nmi
ffffffff81409e10 r __ksymtab_reserve_perfctr_nmi
ffffffff81409e20 r __ksymtab_reset_devices
ffffffff81409e30 r __ksymtab_revalidate_disk
ffffffff81409e40 r __ksymtab_revert_creds
ffffffff81409e50 r __ksymtab_rps_may_expire_flow
ffffffff81409e60 r __ksymtab_rps_sock_flow_table
ffffffff81409e70 r __ksymtab_rtc_cmos_read
ffffffff81409e80 r __ksymtab_rtc_cmos_write
ffffffff81409e90 r __ksymtab_rtc_lock
ffffffff81409ea0 r __ksymtab_rtc_month_days
ffffffff81409eb0 r __ksymtab_rtc_time_to_tm
ffffffff81409ec0 r __ksymtab_rtc_tm_to_time
ffffffff81409ed0 r __ksymtab_rtc_valid_tm
ffffffff81409ee0 r __ksymtab_rtc_year_days
ffffffff81409ef0 r __ksymtab_rtnetlink_put_metrics
ffffffff81409f00 r __ksymtab_rtnl_configure_link
ffffffff81409f10 r __ksymtab_rtnl_create_link
ffffffff81409f20 r __ksymtab_rtnl_is_locked
ffffffff81409f30 r __ksymtab_rtnl_link_get_net
ffffffff81409f40 r __ksymtab_rtnl_lock
ffffffff81409f50 r __ksymtab_rtnl_notify
ffffffff81409f60 r __ksymtab_rtnl_set_sk_err
ffffffff81409f70 r __ksymtab_rtnl_trylock
ffffffff81409f80 r __ksymtab_rtnl_unicast
ffffffff81409f90 r __ksymtab_rtnl_unlock
ffffffff81409fa0 r __ksymtab_rwsem_down_read_failed
ffffffff81409fb0 r __ksymtab_rwsem_down_write_failed
ffffffff81409fc0 r __ksymtab_rwsem_downgrade_wake
ffffffff81409fd0 r __ksymtab_rwsem_wake
ffffffff81409fe0 r __ksymtab_save_mount_options
ffffffff81409ff0 r __ksymtab_sb_min_blocksize
ffffffff8140a000 r __ksymtab_sb_set_blocksize
ffffffff8140a010 r __ksymtab_schedule
ffffffff8140a020 r __ksymtab_schedule_delayed_work
ffffffff8140a030 r __ksymtab_schedule_delayed_work_on
ffffffff8140a040 r __ksymtab_schedule_timeout
ffffffff8140a050 r __ksymtab_schedule_timeout_interruptible
ffffffff8140a060 r __ksymtab_schedule_timeout_killable
ffffffff8140a070 r __ksymtab_schedule_timeout_uninterruptible
ffffffff8140a080 r __ksymtab_schedule_work
ffffffff8140a090 r __ksymtab_schedule_work_on
ffffffff8140a0a0 r __ksymtab_scm_detach_fds
ffffffff8140a0b0 r __ksymtab_scm_fp_dup
ffffffff8140a0c0 r __ksymtab_scnprintf
ffffffff8140a0d0 r __ksymtab_screen_info
ffffffff8140a0e0 r __ksymtab_scsi_add_device
ffffffff8140a0f0 r __ksymtab_scsi_add_host_with_dma
ffffffff8140a100 r __ksymtab_scsi_adjust_queue_depth
ffffffff8140a110 r __ksymtab_scsi_allocate_command
ffffffff8140a120 r __ksymtab_scsi_bios_ptable
ffffffff8140a130 r __ksymtab_scsi_block_requests
ffffffff8140a140 r __ksymtab_scsi_block_when_processing_errors
ffffffff8140a150 r __ksymtab_scsi_build_sense_buffer
ffffffff8140a160 r __ksymtab_scsi_calculate_bounce_limit
ffffffff8140a170 r __ksymtab_scsi_cmd_blk_ioctl
ffffffff8140a180 r __ksymtab_scsi_cmd_get_serial
ffffffff8140a190 r __ksymtab_scsi_cmd_ioctl
ffffffff8140a1a0 r __ksymtab_scsi_cmd_print_sense_hdr
ffffffff8140a1b0 r __ksymtab_scsi_command_normalize_sense
ffffffff8140a1c0 r __ksymtab_scsi_command_size_tbl
ffffffff8140a1d0 r __ksymtab_scsi_dev_info_add_list
ffffffff8140a1e0 r __ksymtab_scsi_dev_info_list_add_keyed
ffffffff8140a1f0 r __ksymtab_scsi_dev_info_list_del_keyed
ffffffff8140a200 r __ksymtab_scsi_dev_info_remove_list
ffffffff8140a210 r __ksymtab_scsi_device_get
ffffffff8140a220 r __ksymtab_scsi_device_lookup
ffffffff8140a230 r __ksymtab_scsi_device_lookup_by_target
ffffffff8140a240 r __ksymtab_scsi_device_put
ffffffff8140a250 r __ksymtab_scsi_device_quiesce
ffffffff8140a260 r __ksymtab_scsi_device_resume
ffffffff8140a270 r __ksymtab_scsi_device_set_state
ffffffff8140a280 r __ksymtab_scsi_device_type
ffffffff8140a290 r __ksymtab_scsi_dma_map
ffffffff8140a2a0 r __ksymtab_scsi_dma_unmap
ffffffff8140a2b0 r __ksymtab_scsi_eh_finish_cmd
ffffffff8140a2c0 r __ksymtab_scsi_eh_flush_done_q
ffffffff8140a2d0 r __ksymtab_scsi_eh_prep_cmnd
ffffffff8140a2e0 r __ksymtab_scsi_eh_restore_cmnd
ffffffff8140a2f0 r __ksymtab_scsi_execute
ffffffff8140a300 r __ksymtab_scsi_execute_req
ffffffff8140a310 r __ksymtab_scsi_extd_sense_format
ffffffff8140a320 r __ksymtab_scsi_finish_command
ffffffff8140a330 r __ksymtab_scsi_free_command
ffffffff8140a340 r __ksymtab_scsi_free_host_dev
ffffffff8140a350 r __ksymtab_scsi_get_command
ffffffff8140a360 r __ksymtab_scsi_get_device_flags_keyed
ffffffff8140a370 r __ksymtab_scsi_get_host_dev
ffffffff8140a380 r __ksymtab_scsi_get_sense_info_fld
ffffffff8140a390 r __ksymtab_scsi_host_alloc
ffffffff8140a3a0 r __ksymtab_scsi_host_get
ffffffff8140a3b0 r __ksymtab_scsi_host_lookup
ffffffff8140a3c0 r __ksymtab_scsi_host_put
ffffffff8140a3d0 r __ksymtab_scsi_host_set_state
ffffffff8140a3e0 r __ksymtab_scsi_init_io
ffffffff8140a3f0 r __ksymtab_scsi_ioctl
ffffffff8140a400 r __ksymtab_scsi_is_host_device
ffffffff8140a410 r __ksymtab_scsi_is_sdev_device
ffffffff8140a420 r __ksymtab_scsi_is_target_device
ffffffff8140a430 r __ksymtab_scsi_kmap_atomic_sg
ffffffff8140a440 r __ksymtab_scsi_kunmap_atomic_sg
ffffffff8140a450 r __ksymtab_scsi_logging_level
ffffffff8140a460 r __ksymtab_scsi_mode_sense
ffffffff8140a470 r __ksymtab_scsi_nonblockable_ioctl
ffffffff8140a480 r __ksymtab_scsi_normalize_sense
ffffffff8140a490 r __ksymtab_scsi_partsize
ffffffff8140a4a0 r __ksymtab_scsi_prep_fn
ffffffff8140a4b0 r __ksymtab_scsi_prep_return
ffffffff8140a4c0 r __ksymtab_scsi_prep_state_check
ffffffff8140a4d0 r __ksymtab_scsi_print_command
ffffffff8140a4e0 r __ksymtab_scsi_print_result
ffffffff8140a4f0 r __ksymtab_scsi_print_sense
ffffffff8140a500 r __ksymtab_scsi_print_sense_hdr
ffffffff8140a510 r __ksymtab_scsi_print_status
ffffffff8140a520 r __ksymtab_scsi_put_command
ffffffff8140a530 r __ksymtab_scsi_register
ffffffff8140a540 r __ksymtab_scsi_register_driver
ffffffff8140a550 r __ksymtab_scsi_register_interface
ffffffff8140a560 r __ksymtab_scsi_release_buffers
ffffffff8140a570 r __ksymtab_scsi_remove_device
ffffffff8140a580 r __ksymtab_scsi_remove_host
ffffffff8140a590 r __ksymtab_scsi_remove_target
ffffffff8140a5a0 r __ksymtab_scsi_report_bus_reset
ffffffff8140a5b0 r __ksymtab_scsi_report_device_reset
ffffffff8140a5c0 r __ksymtab_scsi_rescan_device
ffffffff8140a5d0 r __ksymtab_scsi_reset_provider
ffffffff8140a5e0 r __ksymtab_scsi_scan_host
ffffffff8140a5f0 r __ksymtab_scsi_scan_target
ffffffff8140a600 r __ksymtab_scsi_sense_desc_find
ffffffff8140a610 r __ksymtab_scsi_sense_key_string
ffffffff8140a620 r __ksymtab_scsi_set_medium_removal
ffffffff8140a630 r __ksymtab_scsi_setup_blk_pc_cmnd
ffffffff8140a640 r __ksymtab_scsi_setup_fs_cmnd
ffffffff8140a650 r __ksymtab_scsi_show_extd_sense
ffffffff8140a660 r __ksymtab_scsi_show_result
ffffffff8140a670 r __ksymtab_scsi_show_sense_hdr
ffffffff8140a680 r __ksymtab_scsi_target_quiesce
ffffffff8140a690 r __ksymtab_scsi_target_resume
ffffffff8140a6a0 r __ksymtab_scsi_test_unit_ready
ffffffff8140a6b0 r __ksymtab_scsi_track_queue_full
ffffffff8140a6c0 r __ksymtab_scsi_unblock_requests
ffffffff8140a6d0 r __ksymtab_scsi_unregister
ffffffff8140a6e0 r __ksymtab_scsi_verify_blk_ioctl
ffffffff8140a6f0 r __ksymtab_scsicam_bios_param
ffffffff8140a700 r __ksymtab_scsilun_to_int
ffffffff8140a710 r __ksymtab_search_binary_handler
ffffffff8140a720 r __ksymtab_secpath_dup
ffffffff8140a730 r __ksymtab_send_remote_softirq
ffffffff8140a740 r __ksymtab_send_sig
ffffffff8140a750 r __ksymtab_send_sig_info
ffffffff8140a760 r __ksymtab_seq_bitmap
ffffffff8140a770 r __ksymtab_seq_bitmap_list
ffffffff8140a780 r __ksymtab_seq_escape
ffffffff8140a790 r __ksymtab_seq_hlist_next
ffffffff8140a7a0 r __ksymtab_seq_hlist_next_rcu
ffffffff8140a7b0 r __ksymtab_seq_hlist_start
ffffffff8140a7c0 r __ksymtab_seq_hlist_start_head
ffffffff8140a7d0 r __ksymtab_seq_hlist_start_head_rcu
ffffffff8140a7e0 r __ksymtab_seq_hlist_start_rcu
ffffffff8140a7f0 r __ksymtab_seq_list_next
ffffffff8140a800 r __ksymtab_seq_list_start
ffffffff8140a810 r __ksymtab_seq_list_start_head
ffffffff8140a820 r __ksymtab_seq_lseek
ffffffff8140a830 r __ksymtab_seq_open
ffffffff8140a840 r __ksymtab_seq_open_private
ffffffff8140a850 r __ksymtab_seq_path
ffffffff8140a860 r __ksymtab_seq_printf
ffffffff8140a870 r __ksymtab_seq_put_decimal_ll
ffffffff8140a880 r __ksymtab_seq_put_decimal_ull
ffffffff8140a890 r __ksymtab_seq_putc
ffffffff8140a8a0 r __ksymtab_seq_puts
ffffffff8140a8b0 r __ksymtab_seq_read
ffffffff8140a8c0 r __ksymtab_seq_release
ffffffff8140a8d0 r __ksymtab_seq_release_private
ffffffff8140a8e0 r __ksymtab_seq_write
ffffffff8140a8f0 r __ksymtab_serio_close
ffffffff8140a900 r __ksymtab_serio_interrupt
ffffffff8140a910 r __ksymtab_serio_open
ffffffff8140a920 r __ksymtab_serio_reconnect
ffffffff8140a930 r __ksymtab_serio_rescan
ffffffff8140a940 r __ksymtab_serio_unregister_child_port
ffffffff8140a950 r __ksymtab_serio_unregister_driver
ffffffff8140a960 r __ksymtab_serio_unregister_port
ffffffff8140a970 r __ksymtab_set_anon_super
ffffffff8140a980 r __ksymtab_set_bdi_congested
ffffffff8140a990 r __ksymtab_set_bh_page
ffffffff8140a9a0 r __ksymtab_set_binfmt
ffffffff8140a9b0 r __ksymtab_set_blocksize
ffffffff8140a9c0 r __ksymtab_set_create_files_as
ffffffff8140a9d0 r __ksymtab_set_current_groups
ffffffff8140a9e0 r __ksymtab_set_device_ro
ffffffff8140a9f0 r __ksymtab_set_disk_ro
ffffffff8140aa00 r __ksymtab_set_freezable
ffffffff8140aa10 r __ksymtab_set_groups
ffffffff8140aa20 r __ksymtab_set_memory_array_uc
ffffffff8140aa30 r __ksymtab_set_memory_array_wb
ffffffff8140aa40 r __ksymtab_set_memory_array_wc
ffffffff8140aa50 r __ksymtab_set_memory_nx
ffffffff8140aa60 r __ksymtab_set_memory_uc
ffffffff8140aa70 r __ksymtab_set_memory_wb
ffffffff8140aa80 r __ksymtab_set_memory_wc
ffffffff8140aa90 r __ksymtab_set_memory_x
ffffffff8140aaa0 r __ksymtab_set_nlink
ffffffff8140aab0 r __ksymtab_set_normalized_timespec
ffffffff8140aac0 r __ksymtab_set_page_dirty
ffffffff8140aad0 r __ksymtab_set_page_dirty_lock
ffffffff8140aae0 r __ksymtab_set_pages_array_uc
ffffffff8140aaf0 r __ksymtab_set_pages_array_wb
ffffffff8140ab00 r __ksymtab_set_pages_array_wc
ffffffff8140ab10 r __ksymtab_set_pages_nx
ffffffff8140ab20 r __ksymtab_set_pages_uc
ffffffff8140ab30 r __ksymtab_set_pages_wb
ffffffff8140ab40 r __ksymtab_set_pages_x
ffffffff8140ab50 r __ksymtab_set_security_override
ffffffff8140ab60 r __ksymtab_set_security_override_from_ctx
ffffffff8140ab70 r __ksymtab_set_user_nice
ffffffff8140ab80 r __ksymtab_setattr_copy
ffffffff8140ab90 r __ksymtab_setup_arg_pages
ffffffff8140aba0 r __ksymtab_setup_max_cpus
ffffffff8140abb0 r __ksymtab_setup_new_exec
ffffffff8140abc0 r __ksymtab_sg_alloc_table
ffffffff8140abd0 r __ksymtab_sg_copy_from_buffer
ffffffff8140abe0 r __ksymtab_sg_copy_to_buffer
ffffffff8140abf0 r __ksymtab_sg_free_table
ffffffff8140ac00 r __ksymtab_sg_init_one
ffffffff8140ac10 r __ksymtab_sg_init_table
ffffffff8140ac20 r __ksymtab_sg_last
ffffffff8140ac30 r __ksymtab_sg_miter_next
ffffffff8140ac40 r __ksymtab_sg_miter_start
ffffffff8140ac50 r __ksymtab_sg_miter_stop
ffffffff8140ac60 r __ksymtab_sg_next
ffffffff8140ac70 r __ksymtab_sget
ffffffff8140ac80 r __ksymtab_sha_transform
ffffffff8140ac90 r __ksymtab_should_remove_suid
ffffffff8140aca0 r __ksymtab_shrink_dcache_parent
ffffffff8140acb0 r __ksymtab_shrink_dcache_sb
ffffffff8140acc0 r __ksymtab_si_meminfo
ffffffff8140acd0 r __ksymtab_sigprocmask
ffffffff8140ace0 r __ksymtab_simple_dir_inode_operations
ffffffff8140acf0 r __ksymtab_simple_dir_operations
ffffffff8140ad00 r __ksymtab_simple_empty
ffffffff8140ad10 r __ksymtab_simple_fill_super
ffffffff8140ad20 r __ksymtab_simple_getattr
ffffffff8140ad30 r __ksymtab_simple_link
ffffffff8140ad40 r __ksymtab_simple_lookup
ffffffff8140ad50 r __ksymtab_simple_open
ffffffff8140ad60 r __ksymtab_simple_pin_fs
ffffffff8140ad70 r __ksymtab_simple_read_from_buffer
ffffffff8140ad80 r __ksymtab_simple_readpage
ffffffff8140ad90 r __ksymtab_simple_release_fs
ffffffff8140ada0 r __ksymtab_simple_rename
ffffffff8140adb0 r __ksymtab_simple_rmdir
ffffffff8140adc0 r __ksymtab_simple_setattr
ffffffff8140add0 r __ksymtab_simple_statfs
ffffffff8140ade0 r __ksymtab_simple_strtol
ffffffff8140adf0 r __ksymtab_simple_strtoll
ffffffff8140ae00 r __ksymtab_simple_strtoul
ffffffff8140ae10 r __ksymtab_simple_strtoull
ffffffff8140ae20 r __ksymtab_simple_transaction_get
ffffffff8140ae30 r __ksymtab_simple_transaction_read
ffffffff8140ae40 r __ksymtab_simple_transaction_release
ffffffff8140ae50 r __ksymtab_simple_transaction_set
ffffffff8140ae60 r __ksymtab_simple_unlink
ffffffff8140ae70 r __ksymtab_simple_write_begin
ffffffff8140ae80 r __ksymtab_simple_write_end
ffffffff8140ae90 r __ksymtab_simple_write_to_buffer
ffffffff8140aea0 r __ksymtab_single_open
ffffffff8140aeb0 r __ksymtab_single_release
ffffffff8140aec0 r __ksymtab_sk_alloc
ffffffff8140aed0 r __ksymtab_sk_chk_filter
ffffffff8140aee0 r __ksymtab_sk_common_release
ffffffff8140aef0 r __ksymtab_sk_dst_check
ffffffff8140af00 r __ksymtab_sk_filter
ffffffff8140af10 r __ksymtab_sk_filter_release_rcu
ffffffff8140af20 r __ksymtab_sk_free
ffffffff8140af30 r __ksymtab_sk_prot_clear_portaddr_nulls
ffffffff8140af40 r __ksymtab_sk_receive_skb
ffffffff8140af50 r __ksymtab_sk_release_kernel
ffffffff8140af60 r __ksymtab_sk_reset_timer
ffffffff8140af70 r __ksymtab_sk_reset_txq
ffffffff8140af80 r __ksymtab_sk_run_filter
ffffffff8140af90 r __ksymtab_sk_send_sigurg
ffffffff8140afa0 r __ksymtab_sk_stop_timer
ffffffff8140afb0 r __ksymtab_sk_stream_error
ffffffff8140afc0 r __ksymtab_sk_stream_kill_queues
ffffffff8140afd0 r __ksymtab_sk_stream_wait_close
ffffffff8140afe0 r __ksymtab_sk_stream_wait_connect
ffffffff8140aff0 r __ksymtab_sk_stream_wait_memory
ffffffff8140b000 r __ksymtab_sk_stream_write_space
ffffffff8140b010 r __ksymtab_sk_wait_data
ffffffff8140b020 r __ksymtab_skb_abort_seq_read
ffffffff8140b030 r __ksymtab_skb_add_rx_frag
ffffffff8140b040 r __ksymtab_skb_append
ffffffff8140b050 r __ksymtab_skb_append_datato_frags
ffffffff8140b060 r __ksymtab_skb_checksum
ffffffff8140b070 r __ksymtab_skb_checksum_help
ffffffff8140b080 r __ksymtab_skb_clone
ffffffff8140b090 r __ksymtab_skb_copy
ffffffff8140b0a0 r __ksymtab_skb_copy_and_csum_bits
ffffffff8140b0b0 r __ksymtab_skb_copy_and_csum_datagram_iovec
ffffffff8140b0c0 r __ksymtab_skb_copy_and_csum_dev
ffffffff8140b0d0 r __ksymtab_skb_copy_bits
ffffffff8140b0e0 r __ksymtab_skb_copy_datagram_const_iovec
ffffffff8140b0f0 r __ksymtab_skb_copy_datagram_from_iovec
ffffffff8140b100 r __ksymtab_skb_copy_datagram_iovec
ffffffff8140b110 r __ksymtab_skb_copy_expand
ffffffff8140b120 r __ksymtab_skb_dequeue
ffffffff8140b130 r __ksymtab_skb_dequeue_tail
ffffffff8140b140 r __ksymtab_skb_dst_set_noref
ffffffff8140b150 r __ksymtab_skb_find_text
ffffffff8140b160 r __ksymtab_skb_flow_dissect
ffffffff8140b170 r __ksymtab_skb_free_datagram
ffffffff8140b180 r __ksymtab_skb_free_datagram_locked
ffffffff8140b190 r __ksymtab_skb_gro_reset_offset
ffffffff8140b1a0 r __ksymtab_skb_gso_segment
ffffffff8140b1b0 r __ksymtab_skb_insert
ffffffff8140b1c0 r __ksymtab_skb_kill_datagram
ffffffff8140b1d0 r __ksymtab_skb_pad
ffffffff8140b1e0 r __ksymtab_skb_prepare_seq_read
ffffffff8140b1f0 r __ksymtab_skb_pull
ffffffff8140b200 r __ksymtab_skb_push
ffffffff8140b210 r __ksymtab_skb_put
ffffffff8140b220 r __ksymtab_skb_queue_head
ffffffff8140b230 r __ksymtab_skb_queue_purge
ffffffff8140b240 r __ksymtab_skb_queue_tail
ffffffff8140b250 r __ksymtab_skb_realloc_headroom
ffffffff8140b260 r __ksymtab_skb_recv_datagram
ffffffff8140b270 r __ksymtab_skb_recycle
ffffffff8140b280 r __ksymtab_skb_recycle_check
ffffffff8140b290 r __ksymtab_skb_seq_read
ffffffff8140b2a0 r __ksymtab_skb_split
ffffffff8140b2b0 r __ksymtab_skb_store_bits
ffffffff8140b2c0 r __ksymtab_skb_trim
ffffffff8140b2d0 r __ksymtab_skb_unlink
ffffffff8140b2e0 r __ksymtab_skip_spaces
ffffffff8140b2f0 r __ksymtab_sleep_on
ffffffff8140b300 r __ksymtab_sleep_on_timeout
ffffffff8140b310 r __ksymtab_smp_call_function
ffffffff8140b320 r __ksymtab_smp_call_function_many
ffffffff8140b330 r __ksymtab_smp_call_function_single
ffffffff8140b340 r __ksymtab_smp_num_siblings
ffffffff8140b350 r __ksymtab_snprintf
ffffffff8140b360 r __ksymtab_sock_alloc_send_pskb
ffffffff8140b370 r __ksymtab_sock_alloc_send_skb
ffffffff8140b380 r __ksymtab_sock_common_getsockopt
ffffffff8140b390 r __ksymtab_sock_common_recvmsg
ffffffff8140b3a0 r __ksymtab_sock_common_setsockopt
ffffffff8140b3b0 r __ksymtab_sock_create
ffffffff8140b3c0 r __ksymtab_sock_create_kern
ffffffff8140b3d0 r __ksymtab_sock_create_lite
ffffffff8140b3e0 r __ksymtab_sock_get_timestamp
ffffffff8140b3f0 r __ksymtab_sock_get_timestampns
ffffffff8140b400 r __ksymtab_sock_i_ino
ffffffff8140b410 r __ksymtab_sock_i_uid
ffffffff8140b420 r __ksymtab_sock_init_data
ffffffff8140b430 r __ksymtab_sock_kfree_s
ffffffff8140b440 r __ksymtab_sock_kmalloc
ffffffff8140b450 r __ksymtab_sock_map_fd
ffffffff8140b460 r __ksymtab_sock_no_accept
ffffffff8140b470 r __ksymtab_sock_no_bind
ffffffff8140b480 r __ksymtab_sock_no_connect
ffffffff8140b490 r __ksymtab_sock_no_getname
ffffffff8140b4a0 r __ksymtab_sock_no_getsockopt
ffffffff8140b4b0 r __ksymtab_sock_no_ioctl
ffffffff8140b4c0 r __ksymtab_sock_no_listen
ffffffff8140b4d0 r __ksymtab_sock_no_mmap
ffffffff8140b4e0 r __ksymtab_sock_no_poll
ffffffff8140b4f0 r __ksymtab_sock_no_recvmsg
ffffffff8140b500 r __ksymtab_sock_no_sendmsg
ffffffff8140b510 r __ksymtab_sock_no_sendpage
ffffffff8140b520 r __ksymtab_sock_no_setsockopt
ffffffff8140b530 r __ksymtab_sock_no_shutdown
ffffffff8140b540 r __ksymtab_sock_no_socketpair
ffffffff8140b550 r __ksymtab_sock_queue_err_skb
ffffffff8140b560 r __ksymtab_sock_queue_rcv_skb
ffffffff8140b570 r __ksymtab_sock_recvmsg
ffffffff8140b580 r __ksymtab_sock_register
ffffffff8140b590 r __ksymtab_sock_release
ffffffff8140b5a0 r __ksymtab_sock_rfree
ffffffff8140b5b0 r __ksymtab_sock_sendmsg
ffffffff8140b5c0 r __ksymtab_sock_setsockopt
ffffffff8140b5d0 r __ksymtab_sock_tx_timestamp
ffffffff8140b5e0 r __ksymtab_sock_unregister
ffffffff8140b5f0 r __ksymtab_sock_update_classid
ffffffff8140b600 r __ksymtab_sock_wake_async
ffffffff8140b610 r __ksymtab_sock_wfree
ffffffff8140b620 r __ksymtab_sock_wmalloc
ffffffff8140b630 r __ksymtab_sockfd_lookup
ffffffff8140b640 r __ksymtab_soft_cursor
ffffffff8140b650 r __ksymtab_softirq_work_list
ffffffff8140b660 r __ksymtab_softnet_data
ffffffff8140b670 r __ksymtab_sort
ffffffff8140b680 r __ksymtab_splice_direct_to_actor
ffffffff8140b690 r __ksymtab_splice_from_pipe_begin
ffffffff8140b6a0 r __ksymtab_splice_from_pipe_end
ffffffff8140b6b0 r __ksymtab_splice_from_pipe_feed
ffffffff8140b6c0 r __ksymtab_splice_from_pipe_next
ffffffff8140b6d0 r __ksymtab_sprintf
ffffffff8140b6e0 r __ksymtab_srandom32
ffffffff8140b6f0 r __ksymtab_sscanf
ffffffff8140b700 r __ksymtab_starget_for_each_device
ffffffff8140b710 r __ksymtab_start_tty
ffffffff8140b720 r __ksymtab_stop_tty
ffffffff8140b730 r __ksymtab_strcasecmp
ffffffff8140b740 r __ksymtab_strcat
ffffffff8140b750 r __ksymtab_strchr
ffffffff8140b760 r __ksymtab_strcmp
ffffffff8140b770 r __ksymtab_strcpy
ffffffff8140b780 r __ksymtab_strcspn
ffffffff8140b790 r __ksymtab_strim
ffffffff8140b7a0 r __ksymtab_string_get_size
ffffffff8140b7b0 r __ksymtab_strlcat
ffffffff8140b7c0 r __ksymtab_strlcpy
ffffffff8140b7d0 r __ksymtab_strlen
ffffffff8140b7e0 r __ksymtab_strlen_user
ffffffff8140b7f0 r __ksymtab_strncasecmp
ffffffff8140b800 r __ksymtab_strncat
ffffffff8140b810 r __ksymtab_strnchr
ffffffff8140b820 r __ksymtab_strncmp
ffffffff8140b830 r __ksymtab_strncpy
ffffffff8140b840 r __ksymtab_strncpy_from_user
ffffffff8140b850 r __ksymtab_strndup_user
ffffffff8140b860 r __ksymtab_strnicmp
ffffffff8140b870 r __ksymtab_strnlen
ffffffff8140b880 r __ksymtab_strnlen_user
ffffffff8140b890 r __ksymtab_strnstr
ffffffff8140b8a0 r __ksymtab_strpbrk
ffffffff8140b8b0 r __ksymtab_strrchr
ffffffff8140b8c0 r __ksymtab_strsep
ffffffff8140b8d0 r __ksymtab_strspn
ffffffff8140b8e0 r __ksymtab_strstr
ffffffff8140b8f0 r __ksymtab_strtobool
ffffffff8140b900 r __ksymtab_submit_bh
ffffffff8140b910 r __ksymtab_submit_bio
ffffffff8140b920 r __ksymtab_swiotlb_alloc_coherent
ffffffff8140b930 r __ksymtab_swiotlb_dma_mapping_error
ffffffff8140b940 r __ksymtab_swiotlb_dma_supported
ffffffff8140b950 r __ksymtab_swiotlb_free_coherent
ffffffff8140b960 r __ksymtab_swiotlb_map_sg
ffffffff8140b970 r __ksymtab_swiotlb_map_sg_attrs
ffffffff8140b980 r __ksymtab_swiotlb_sync_sg_for_cpu
ffffffff8140b990 r __ksymtab_swiotlb_sync_sg_for_device
ffffffff8140b9a0 r __ksymtab_swiotlb_sync_single_for_cpu
ffffffff8140b9b0 r __ksymtab_swiotlb_sync_single_for_device
ffffffff8140b9c0 r __ksymtab_swiotlb_unmap_sg
ffffffff8140b9d0 r __ksymtab_swiotlb_unmap_sg_attrs
ffffffff8140b9e0 r __ksymtab_sync_blockdev
ffffffff8140b9f0 r __ksymtab_sync_dirty_buffer
ffffffff8140ba00 r __ksymtab_sync_inode
ffffffff8140ba10 r __ksymtab_sync_inode_metadata
ffffffff8140ba20 r __ksymtab_sync_inodes_sb
ffffffff8140ba30 r __ksymtab_sync_mapping_buffers
ffffffff8140ba40 r __ksymtab_synchronize_irq
ffffffff8140ba50 r __ksymtab_synchronize_net
ffffffff8140ba60 r __ksymtab_syncookie_secret
ffffffff8140ba70 r __ksymtab_sys_close
ffffffff8140ba80 r __ksymtab_sys_tz
ffffffff8140ba90 r __ksymtab_sysctl_ip_default_ttl
ffffffff8140baa0 r __ksymtab_sysctl_ip_nonlocal_bind
ffffffff8140bab0 r __ksymtab_sysctl_local_reserved_ports
ffffffff8140bac0 r __ksymtab_sysctl_max_syn_backlog
ffffffff8140bad0 r __ksymtab_sysctl_optmem_max
ffffffff8140bae0 r __ksymtab_sysctl_tcp_adv_win_scale
ffffffff8140baf0 r __ksymtab_sysctl_tcp_ecn
ffffffff8140bb00 r __ksymtab_sysctl_tcp_low_latency
ffffffff8140bb10 r __ksymtab_sysctl_tcp_reordering
ffffffff8140bb20 r __ksymtab_sysctl_tcp_rmem
ffffffff8140bb30 r __ksymtab_sysctl_tcp_syncookies
ffffffff8140bb40 r __ksymtab_sysctl_tcp_wmem
ffffffff8140bb50 r __ksymtab_sysctl_udp_mem
ffffffff8140bb60 r __ksymtab_sysctl_udp_rmem_min
ffffffff8140bb70 r __ksymtab_sysctl_udp_wmem_min
ffffffff8140bb80 r __ksymtab_sysfs_format_mac
ffffffff8140bb90 r __ksymtab_sysfs_streq
ffffffff8140bba0 r __ksymtab_system_freezing_cnt
ffffffff8140bbb0 r __ksymtab_system_state
ffffffff8140bbc0 r __ksymtab_tag_pages_for_writeback
ffffffff8140bbd0 r __ksymtab_take_over_console
ffffffff8140bbe0 r __ksymtab_task_nice
ffffffff8140bbf0 r __ksymtab_task_tgid_nr_ns
ffffffff8140bc00 r __ksymtab_tasklet_init
ffffffff8140bc10 r __ksymtab_tasklet_kill
ffffffff8140bc20 r __ksymtab_tcp_check_req
ffffffff8140bc30 r __ksymtab_tcp_child_process
ffffffff8140bc40 r __ksymtab_tcp_close
ffffffff8140bc50 r __ksymtab_tcp_connect
ffffffff8140bc60 r __ksymtab_tcp_cookie_generator
ffffffff8140bc70 r __ksymtab_tcp_create_openreq_child
ffffffff8140bc80 r __ksymtab_tcp_disconnect
ffffffff8140bc90 r __ksymtab_tcp_enter_memory_pressure
ffffffff8140bca0 r __ksymtab_tcp_getsockopt
ffffffff8140bcb0 r __ksymtab_tcp_gro_complete
ffffffff8140bcc0 r __ksymtab_tcp_gro_receive
ffffffff8140bcd0 r __ksymtab_tcp_hashinfo
ffffffff8140bce0 r __ksymtab_tcp_init_xmit_timers
ffffffff8140bcf0 r __ksymtab_tcp_initialize_rcv_mss
ffffffff8140bd00 r __ksymtab_tcp_ioctl
ffffffff8140bd10 r __ksymtab_tcp_make_synack
ffffffff8140bd20 r __ksymtab_tcp_memory_allocated
ffffffff8140bd30 r __ksymtab_tcp_memory_pressure
ffffffff8140bd40 r __ksymtab_tcp_mtup_init
ffffffff8140bd50 r __ksymtab_tcp_parse_options
ffffffff8140bd60 r __ksymtab_tcp_poll
ffffffff8140bd70 r __ksymtab_tcp_proc_register
ffffffff8140bd80 r __ksymtab_tcp_proc_unregister
ffffffff8140bd90 r __ksymtab_tcp_prot
ffffffff8140bda0 r __ksymtab_tcp_rcv_established
ffffffff8140bdb0 r __ksymtab_tcp_rcv_state_process
ffffffff8140bdc0 r __ksymtab_tcp_read_sock
ffffffff8140bdd0 r __ksymtab_tcp_recvmsg
ffffffff8140bde0 r __ksymtab_tcp_select_initial_window
ffffffff8140bdf0 r __ksymtab_tcp_sendmsg
ffffffff8140be00 r __ksymtab_tcp_sendpage
ffffffff8140be10 r __ksymtab_tcp_seq_open
ffffffff8140be20 r __ksymtab_tcp_setsockopt
ffffffff8140be30 r __ksymtab_tcp_shutdown
ffffffff8140be40 r __ksymtab_tcp_simple_retransmit
ffffffff8140be50 r __ksymtab_tcp_sockets_allocated
ffffffff8140be60 r __ksymtab_tcp_splice_read
ffffffff8140be70 r __ksymtab_tcp_syn_ack_timeout
ffffffff8140be80 r __ksymtab_tcp_syn_flood_action
ffffffff8140be90 r __ksymtab_tcp_sync_mss
ffffffff8140bea0 r __ksymtab_tcp_timewait_state_process
ffffffff8140beb0 r __ksymtab_tcp_tso_segment
ffffffff8140bec0 r __ksymtab_tcp_v4_conn_request
ffffffff8140bed0 r __ksymtab_tcp_v4_connect
ffffffff8140bee0 r __ksymtab_tcp_v4_destroy_sock
ffffffff8140bef0 r __ksymtab_tcp_v4_do_rcv
ffffffff8140bf00 r __ksymtab_tcp_v4_get_peer
ffffffff8140bf10 r __ksymtab_tcp_v4_send_check
ffffffff8140bf20 r __ksymtab_tcp_v4_syn_recv_sock
ffffffff8140bf30 r __ksymtab_tcp_v4_tw_get_peer
ffffffff8140bf40 r __ksymtab_tcp_valid_rtt_meas
ffffffff8140bf50 r __ksymtab_test_set_page_writeback
ffffffff8140bf60 r __ksymtab_test_taint
ffffffff8140bf70 r __ksymtab_thaw_bdev
ffffffff8140bf80 r __ksymtab_thaw_super
ffffffff8140bf90 r __ksymtab_this_cpu_off
ffffffff8140bfa0 r __ksymtab_time_to_tm
ffffffff8140bfb0 r __ksymtab_timekeeping_inject_offset
ffffffff8140bfc0 r __ksymtab_timespec_to_jiffies
ffffffff8140bfd0 r __ksymtab_timespec_trunc
ffffffff8140bfe0 r __ksymtab_timeval_to_jiffies
ffffffff8140bff0 r __ksymtab_totalram_pages
ffffffff8140c000 r __ksymtab_touch_atime
ffffffff8140c010 r __ksymtab_truncate_inode_pages
ffffffff8140c020 r __ksymtab_truncate_inode_pages_range
ffffffff8140c030 r __ksymtab_truncate_pagecache
ffffffff8140c040 r __ksymtab_truncate_pagecache_range
ffffffff8140c050 r __ksymtab_truncate_setsize
ffffffff8140c060 r __ksymtab_try_module_get
ffffffff8140c070 r __ksymtab_try_to_del_timer_sync
ffffffff8140c080 r __ksymtab_try_to_free_buffers
ffffffff8140c090 r __ksymtab_try_to_release_page
ffffffff8140c0a0 r __ksymtab_try_wait_for_completion
ffffffff8140c0b0 r __ksymtab_tsc_khz
ffffffff8140c0c0 r __ksymtab_tty_chars_in_buffer
ffffffff8140c0d0 r __ksymtab_tty_check_change
ffffffff8140c0e0 r __ksymtab_tty_devnum
ffffffff8140c0f0 r __ksymtab_tty_driver_flush_buffer
ffffffff8140c100 r __ksymtab_tty_driver_kref_put
ffffffff8140c110 r __ksymtab_tty_flip_buffer_push
ffffffff8140c120 r __ksymtab_tty_free_termios
ffffffff8140c130 r __ksymtab_tty_get_baud_rate
ffffffff8140c140 r __ksymtab_tty_hangup
ffffffff8140c150 r __ksymtab_tty_hung_up_p
ffffffff8140c160 r __ksymtab_tty_insert_flip_string_fixed_flag
ffffffff8140c170 r __ksymtab_tty_insert_flip_string_flags
ffffffff8140c180 r __ksymtab_tty_kref_put
ffffffff8140c190 r __ksymtab_tty_lock
ffffffff8140c1a0 r __ksymtab_tty_mutex
ffffffff8140c1b0 r __ksymtab_tty_name
ffffffff8140c1c0 r __ksymtab_tty_pair_get_pty
ffffffff8140c1d0 r __ksymtab_tty_pair_get_tty
ffffffff8140c1e0 r __ksymtab_tty_port_alloc_xmit_buf
ffffffff8140c1f0 r __ksymtab_tty_port_block_til_ready
ffffffff8140c200 r __ksymtab_tty_port_carrier_raised
ffffffff8140c210 r __ksymtab_tty_port_close
ffffffff8140c220 r __ksymtab_tty_port_close_end
ffffffff8140c230 r __ksymtab_tty_port_close_start
ffffffff8140c240 r __ksymtab_tty_port_free_xmit_buf
ffffffff8140c250 r __ksymtab_tty_port_hangup
ffffffff8140c260 r __ksymtab_tty_port_init
ffffffff8140c270 r __ksymtab_tty_port_lower_dtr_rts
ffffffff8140c280 r __ksymtab_tty_port_open
ffffffff8140c290 r __ksymtab_tty_port_put
ffffffff8140c2a0 r __ksymtab_tty_port_raise_dtr_rts
ffffffff8140c2b0 r __ksymtab_tty_port_tty_get
ffffffff8140c2c0 r __ksymtab_tty_port_tty_set
ffffffff8140c2d0 r __ksymtab_tty_register_device
ffffffff8140c2e0 r __ksymtab_tty_register_driver
ffffffff8140c2f0 r __ksymtab_tty_register_ldisc
ffffffff8140c300 r __ksymtab_tty_schedule_flip
ffffffff8140c310 r __ksymtab_tty_set_operations
ffffffff8140c320 r __ksymtab_tty_shutdown
ffffffff8140c330 r __ksymtab_tty_std_termios
ffffffff8140c340 r __ksymtab_tty_termios_baud_rate
ffffffff8140c350 r __ksymtab_tty_termios_copy_hw
ffffffff8140c360 r __ksymtab_tty_termios_hw_change
ffffffff8140c370 r __ksymtab_tty_termios_input_baud_rate
ffffffff8140c380 r __ksymtab_tty_throttle
ffffffff8140c390 r __ksymtab_tty_unlock
ffffffff8140c3a0 r __ksymtab_tty_unregister_device
ffffffff8140c3b0 r __ksymtab_tty_unregister_driver
ffffffff8140c3c0 r __ksymtab_tty_unregister_ldisc
ffffffff8140c3d0 r __ksymtab_tty_unthrottle
ffffffff8140c3e0 r __ksymtab_tty_vhangup
ffffffff8140c3f0 r __ksymtab_tty_wait_until_sent
ffffffff8140c400 r __ksymtab_tty_write_room
ffffffff8140c410 r __ksymtab_udp_disconnect
ffffffff8140c420 r __ksymtab_udp_flush_pending_frames
ffffffff8140c430 r __ksymtab_udp_ioctl
ffffffff8140c440 r __ksymtab_udp_lib_get_port
ffffffff8140c450 r __ksymtab_udp_lib_getsockopt
ffffffff8140c460 r __ksymtab_udp_lib_rehash
ffffffff8140c470 r __ksymtab_udp_lib_setsockopt
ffffffff8140c480 r __ksymtab_udp_lib_unhash
ffffffff8140c490 r __ksymtab_udp_memory_allocated
ffffffff8140c4a0 r __ksymtab_udp_poll
ffffffff8140c4b0 r __ksymtab_udp_proc_register
ffffffff8140c4c0 r __ksymtab_udp_proc_unregister
ffffffff8140c4d0 r __ksymtab_udp_prot
ffffffff8140c4e0 r __ksymtab_udp_sendmsg
ffffffff8140c4f0 r __ksymtab_udp_seq_open
ffffffff8140c500 r __ksymtab_udp_table
ffffffff8140c510 r __ksymtab_udplite_prot
ffffffff8140c520 r __ksymtab_udplite_table
ffffffff8140c530 r __ksymtab_unbind_con_driver
ffffffff8140c540 r __ksymtab_unblock_all_signals
ffffffff8140c550 r __ksymtab_unlazy_fpu
ffffffff8140c560 r __ksymtab_unlink_framebuffer
ffffffff8140c570 r __ksymtab_unload_nls
ffffffff8140c580 r __ksymtab_unlock_buffer
ffffffff8140c590 r __ksymtab_unlock_new_inode
ffffffff8140c5a0 r __ksymtab_unlock_page
ffffffff8140c5b0 r __ksymtab_unlock_rename
ffffffff8140c5c0 r __ksymtab_unlock_super
ffffffff8140c5d0 r __ksymtab_unmap_mapping_range
ffffffff8140c5e0 r __ksymtab_unmap_underlying_metadata
ffffffff8140c5f0 r __ksymtab_unpoison_memory
ffffffff8140c600 r __ksymtab_unregister_acpi_notifier
ffffffff8140c610 r __ksymtab_unregister_binfmt
ffffffff8140c620 r __ksymtab_unregister_blkdev
ffffffff8140c630 r __ksymtab_unregister_chrdev_region
ffffffff8140c640 r __ksymtab_unregister_con_driver
ffffffff8140c650 r __ksymtab_unregister_console
ffffffff8140c660 r __ksymtab_unregister_cpu_notifier
ffffffff8140c670 r __ksymtab_unregister_exec_domain
ffffffff8140c680 r __ksymtab_unregister_filesystem
ffffffff8140c690 r __ksymtab_unregister_framebuffer
ffffffff8140c6a0 r __ksymtab_unregister_inetaddr_notifier
ffffffff8140c6b0 r __ksymtab_unregister_module_notifier
ffffffff8140c6c0 r __ksymtab_unregister_netdev
ffffffff8140c6d0 r __ksymtab_unregister_netdevice_many
ffffffff8140c6e0 r __ksymtab_unregister_netdevice_notifier
ffffffff8140c6f0 r __ksymtab_unregister_netdevice_queue
ffffffff8140c700 r __ksymtab_unregister_nls
ffffffff8140c710 r __ksymtab_unregister_reboot_notifier
ffffffff8140c720 r __ksymtab_unregister_shrinker
ffffffff8140c730 r __ksymtab_unregister_sysctl_table
ffffffff8140c740 r __ksymtab_up
ffffffff8140c750 r __ksymtab_up_read
ffffffff8140c760 r __ksymtab_up_write
ffffffff8140c770 r __ksymtab_update_region
ffffffff8140c780 r __ksymtab_usecs_to_jiffies
ffffffff8140c790 r __ksymtab_user_path_at
ffffffff8140c7a0 r __ksymtab_user_path_create
ffffffff8140c7b0 r __ksymtab_usleep_range
ffffffff8140c7c0 r __ksymtab_utf16s_to_utf8s
ffffffff8140c7d0 r __ksymtab_utf32_to_utf8
ffffffff8140c7e0 r __ksymtab_utf8_to_utf32
ffffffff8140c7f0 r __ksymtab_utf8s_to_utf16s
ffffffff8140c800 r __ksymtab_vc_cons
ffffffff8140c810 r __ksymtab_vc_resize
ffffffff8140c820 r __ksymtab_vesa_modes
ffffffff8140c830 r __ksymtab_vfree
ffffffff8140c840 r __ksymtab_vfs_create
ffffffff8140c850 r __ksymtab_vfs_follow_link
ffffffff8140c860 r __ksymtab_vfs_fstat
ffffffff8140c870 r __ksymtab_vfs_fstatat
ffffffff8140c880 r __ksymtab_vfs_fsync
ffffffff8140c890 r __ksymtab_vfs_fsync_range
ffffffff8140c8a0 r __ksymtab_vfs_getattr
ffffffff8140c8b0 r __ksymtab_vfs_link
ffffffff8140c8c0 r __ksymtab_vfs_llseek
ffffffff8140c8d0 r __ksymtab_vfs_lstat
ffffffff8140c8e0 r __ksymtab_vfs_mkdir
ffffffff8140c8f0 r __ksymtab_vfs_mknod
ffffffff8140c900 r __ksymtab_vfs_path_lookup
ffffffff8140c910 r __ksymtab_vfs_read
ffffffff8140c920 r __ksymtab_vfs_readdir
ffffffff8140c930 r __ksymtab_vfs_readlink
ffffffff8140c940 r __ksymtab_vfs_readv
ffffffff8140c950 r __ksymtab_vfs_rename
ffffffff8140c960 r __ksymtab_vfs_rmdir
ffffffff8140c970 r __ksymtab_vfs_stat
ffffffff8140c980 r __ksymtab_vfs_statfs
ffffffff8140c990 r __ksymtab_vfs_symlink
ffffffff8140c9a0 r __ksymtab_vfs_unlink
ffffffff8140c9b0 r __ksymtab_vfs_write
ffffffff8140c9c0 r __ksymtab_vfs_writev
ffffffff8140c9d0 r __ksymtab_vfsmount_lock_global_lock
ffffffff8140c9e0 r __ksymtab_vfsmount_lock_global_lock_online
ffffffff8140c9f0 r __ksymtab_vfsmount_lock_global_unlock
ffffffff8140ca00 r __ksymtab_vfsmount_lock_global_unlock_online
ffffffff8140ca10 r __ksymtab_vfsmount_lock_local_lock
ffffffff8140ca20 r __ksymtab_vfsmount_lock_local_lock_cpu
ffffffff8140ca30 r __ksymtab_vfsmount_lock_local_unlock
ffffffff8140ca40 r __ksymtab_vfsmount_lock_local_unlock_cpu
ffffffff8140ca50 r __ksymtab_vfsmount_lock_lock_init
ffffffff8140ca60 r __ksymtab_vga_client_register
ffffffff8140ca70 r __ksymtab_vga_get
ffffffff8140ca80 r __ksymtab_vga_put
ffffffff8140ca90 r __ksymtab_vga_set_legacy_decoding
ffffffff8140caa0 r __ksymtab_vga_tryget
ffffffff8140cab0 r __ksymtab_vgacon_text_force
ffffffff8140cac0 r __ksymtab_vlan_ioctl_set
ffffffff8140cad0 r __ksymtab_vm_brk
ffffffff8140cae0 r __ksymtab_vm_event_states
ffffffff8140caf0 r __ksymtab_vm_get_page_prot
ffffffff8140cb00 r __ksymtab_vm_insert_mixed
ffffffff8140cb10 r __ksymtab_vm_insert_page
ffffffff8140cb20 r __ksymtab_vm_insert_pfn
ffffffff8140cb30 r __ksymtab_vm_map_ram
ffffffff8140cb40 r __ksymtab_vm_mmap
ffffffff8140cb50 r __ksymtab_vm_munmap
ffffffff8140cb60 r __ksymtab_vm_stat
ffffffff8140cb70 r __ksymtab_vm_unmap_ram
ffffffff8140cb80 r __ksymtab_vmalloc
ffffffff8140cb90 r __ksymtab_vmalloc_32
ffffffff8140cba0 r __ksymtab_vmalloc_32_user
ffffffff8140cbb0 r __ksymtab_vmalloc_node
ffffffff8140cbc0 r __ksymtab_vmalloc_to_page
ffffffff8140cbd0 r __ksymtab_vmalloc_to_pfn
ffffffff8140cbe0 r __ksymtab_vmalloc_user
ffffffff8140cbf0 r __ksymtab_vmap
ffffffff8140cc00 r __ksymtab_vmtruncate
ffffffff8140cc10 r __ksymtab_vprintk
ffffffff8140cc20 r __ksymtab_vscnprintf
ffffffff8140cc30 r __ksymtab_vsnprintf
ffffffff8140cc40 r __ksymtab_vsprintf
ffffffff8140cc50 r __ksymtab_vsscanf
ffffffff8140cc60 r __ksymtab_vunmap
ffffffff8140cc70 r __ksymtab_vzalloc
ffffffff8140cc80 r __ksymtab_vzalloc_node
ffffffff8140cc90 r __ksymtab_wait_for_completion
ffffffff8140cca0 r __ksymtab_wait_for_completion_interruptible
ffffffff8140ccb0 r __ksymtab_wait_for_completion_interruptible_timeout
ffffffff8140ccc0 r __ksymtab_wait_for_completion_killable
ffffffff8140ccd0 r __ksymtab_wait_for_completion_killable_timeout
ffffffff8140cce0 r __ksymtab_wait_for_completion_timeout
ffffffff8140ccf0 r __ksymtab_wait_iff_congested
ffffffff8140cd00 r __ksymtab_wait_on_page_bit
ffffffff8140cd10 r __ksymtab_wait_on_sync_kiocb
ffffffff8140cd20 r __ksymtab_wake_bit_function
ffffffff8140cd30 r __ksymtab_wake_up_bit
ffffffff8140cd40 r __ksymtab_wake_up_process
ffffffff8140cd50 r __ksymtab_warn_slowpath_fmt
ffffffff8140cd60 r __ksymtab_warn_slowpath_fmt_taint
ffffffff8140cd70 r __ksymtab_warn_slowpath_null
ffffffff8140cd80 r __ksymtab_wbinvd_on_all_cpus
ffffffff8140cd90 r __ksymtab_wbinvd_on_cpu
ffffffff8140cda0 r __ksymtab_would_dump
ffffffff8140cdb0 r __ksymtab_write_cache_pages
ffffffff8140cdc0 r __ksymtab_write_dirty_buffer
ffffffff8140cdd0 r __ksymtab_write_inode_now
ffffffff8140cde0 r __ksymtab_write_one_page
ffffffff8140cdf0 r __ksymtab_writeback_inodes_sb
ffffffff8140ce00 r __ksymtab_writeback_inodes_sb_if_idle
ffffffff8140ce10 r __ksymtab_writeback_inodes_sb_nr
ffffffff8140ce20 r __ksymtab_writeback_inodes_sb_nr_if_idle
ffffffff8140ce30 r __ksymtab_wrmsr_on_cpu
ffffffff8140ce40 r __ksymtab_wrmsr_on_cpus
ffffffff8140ce50 r __ksymtab_wrmsr_safe_on_cpu
ffffffff8140ce60 r __ksymtab_wrmsr_safe_regs_on_cpu
ffffffff8140ce70 r __ksymtab_x86_bios_cpu_apicid
ffffffff8140ce80 r __ksymtab_x86_cpu_to_apicid
ffffffff8140ce90 r __ksymtab_x86_cpu_to_node_map
ffffffff8140cea0 r __ksymtab_x86_dma_fallback_dev
ffffffff8140ceb0 r __ksymtab_x86_hyper
ffffffff8140cec0 r __ksymtab_x86_hyper_ms_hyperv
ffffffff8140ced0 r __ksymtab_x86_hyper_vmware
ffffffff8140cee0 r __ksymtab_x86_hyper_xen_hvm
ffffffff8140cef0 r __ksymtab_x86_match_cpu
ffffffff8140cf00 r __ksymtab_xen_biovec_phys_mergeable
ffffffff8140cf10 r __ksymtab_xen_clear_irq_pending
ffffffff8140cf20 r __ksymtab_xen_poll_irq_timeout
ffffffff8140cf30 r __ksymtab_xenbus_dev_request_and_reply
ffffffff8140cf40 r __ksymtab_xfrm4_prepare_output
ffffffff8140cf50 r __ksymtab_xfrm4_rcv
ffffffff8140cf60 r __ksymtab_xfrm4_rcv_encap
ffffffff8140cf70 r __ksymtab_xfrm_alloc_spi
ffffffff8140cf80 r __ksymtab_xfrm_cfg_mutex
ffffffff8140cf90 r __ksymtab_xfrm_dst_ifdown
ffffffff8140cfa0 r __ksymtab_xfrm_find_acq
ffffffff8140cfb0 r __ksymtab_xfrm_find_acq_byseq
ffffffff8140cfc0 r __ksymtab_xfrm_get_acqseq
ffffffff8140cfd0 r __ksymtab_xfrm_init_replay
ffffffff8140cfe0 r __ksymtab_xfrm_init_state
ffffffff8140cff0 r __ksymtab_xfrm_input
ffffffff8140d000 r __ksymtab_xfrm_input_resume
ffffffff8140d010 r __ksymtab_xfrm_lookup
ffffffff8140d020 r __ksymtab_xfrm_policy_alloc
ffffffff8140d030 r __ksymtab_xfrm_policy_byid
ffffffff8140d040 r __ksymtab_xfrm_policy_bysel_ctx
ffffffff8140d050 r __ksymtab_xfrm_policy_delete
ffffffff8140d060 r __ksymtab_xfrm_policy_destroy
ffffffff8140d070 r __ksymtab_xfrm_policy_flush
ffffffff8140d080 r __ksymtab_xfrm_policy_insert
ffffffff8140d090 r __ksymtab_xfrm_policy_register_afinfo
ffffffff8140d0a0 r __ksymtab_xfrm_policy_unregister_afinfo
ffffffff8140d0b0 r __ksymtab_xfrm_policy_walk
ffffffff8140d0c0 r __ksymtab_xfrm_policy_walk_done
ffffffff8140d0d0 r __ksymtab_xfrm_policy_walk_init
ffffffff8140d0e0 r __ksymtab_xfrm_prepare_input
ffffffff8140d0f0 r __ksymtab_xfrm_register_km
ffffffff8140d100 r __ksymtab_xfrm_register_mode
ffffffff8140d110 r __ksymtab_xfrm_register_type
ffffffff8140d120 r __ksymtab_xfrm_sad_getinfo
ffffffff8140d130 r __ksymtab_xfrm_spd_getinfo
ffffffff8140d140 r __ksymtab_xfrm_state_add
ffffffff8140d150 r __ksymtab_xfrm_state_alloc
ffffffff8140d160 r __ksymtab_xfrm_state_check_expire
ffffffff8140d170 r __ksymtab_xfrm_state_delete
ffffffff8140d180 r __ksymtab_xfrm_state_delete_tunnel
ffffffff8140d190 r __ksymtab_xfrm_state_flush
ffffffff8140d1a0 r __ksymtab_xfrm_state_insert
ffffffff8140d1b0 r __ksymtab_xfrm_state_lookup
ffffffff8140d1c0 r __ksymtab_xfrm_state_lookup_byaddr
ffffffff8140d1d0 r __ksymtab_xfrm_state_register_afinfo
ffffffff8140d1e0 r __ksymtab_xfrm_state_unregister_afinfo
ffffffff8140d1f0 r __ksymtab_xfrm_state_update
ffffffff8140d200 r __ksymtab_xfrm_state_walk
ffffffff8140d210 r __ksymtab_xfrm_state_walk_done
ffffffff8140d220 r __ksymtab_xfrm_state_walk_init
ffffffff8140d230 r __ksymtab_xfrm_stateonly_find
ffffffff8140d240 r __ksymtab_xfrm_unregister_km
ffffffff8140d250 r __ksymtab_xfrm_unregister_mode
ffffffff8140d260 r __ksymtab_xfrm_unregister_type
ffffffff8140d270 r __ksymtab_xfrm_user_policy
ffffffff8140d280 r __ksymtab_xz_dec_end
ffffffff8140d290 r __ksymtab_xz_dec_init
ffffffff8140d2a0 r __ksymtab_xz_dec_reset
ffffffff8140d2b0 r __ksymtab_xz_dec_run
ffffffff8140d2c0 r __ksymtab_yield
ffffffff8140d2d0 r __ksymtab_zero_fill_bio
ffffffff8140d2e0 r __ksymtab_zlib_inflate
ffffffff8140d2f0 r __ksymtab_zlib_inflateEnd
ffffffff8140d300 r __ksymtab_zlib_inflateIncomp
ffffffff8140d310 r __ksymtab_zlib_inflateInit2
ffffffff8140d320 r __ksymtab_zlib_inflateReset
ffffffff8140d330 r __ksymtab_zlib_inflate_blob
ffffffff8140d340 r __ksymtab_zlib_inflate_workspacesize
ffffffff8140d350 r __ksymtab_PageHuge
ffffffff8140d350 R __start___ksymtab_gpl
ffffffff8140d350 R __stop___ksymtab
ffffffff8140d360 r __ksymtab___ablkcipher_walk_complete
ffffffff8140d370 r __ksymtab___alloc_percpu
ffffffff8140d380 r __ksymtab___alloc_workqueue_key
ffffffff8140d390 r __ksymtab___apei_exec_run
ffffffff8140d3a0 r __ksymtab___ata_change_queue_depth
ffffffff8140d3b0 r __ksymtab___ata_ehi_push_desc
ffffffff8140d3c0 r __ksymtab___atomic_notifier_call_chain
ffffffff8140d3d0 r __ksymtab___audit_inode_child
ffffffff8140d3e0 r __ksymtab___blk_end_request_err
ffffffff8140d3f0 r __ksymtab___blk_put_request
ffffffff8140d400 r __ksymtab___blkdev_driver_ioctl
ffffffff8140d410 r __ksymtab___blocking_notifier_call_chain
ffffffff8140d420 r __ksymtab___bus_register
ffffffff8140d430 r __ksymtab___class_create
ffffffff8140d440 r __ksymtab___class_register
ffffffff8140d450 r __ksymtab___clocksource_register_scale
ffffffff8140d460 r __ksymtab___clocksource_updatefreq_scale
ffffffff8140d470 r __ksymtab___cpufreq_driver_getavg
ffffffff8140d480 r __ksymtab___cpufreq_driver_target
ffffffff8140d490 r __ksymtab___crypto_alloc_tfm
ffffffff8140d4a0 r __ksymtab___crypto_dequeue_request
ffffffff8140d4b0 r __ksymtab___css_put
ffffffff8140d4c0 r __ksymtab___fsnotify_inode_delete
ffffffff8140d4d0 r __ksymtab___fsnotify_parent
ffffffff8140d4e0 r __ksymtab___get_user_pages_fast
ffffffff8140d4f0 r __ksymtab___get_vm_area
ffffffff8140d500 r __ksymtab___hid_register_driver
ffffffff8140d510 r __ksymtab___hvc_resize
ffffffff8140d520 r __ksymtab___inet_hash_nolisten
ffffffff8140d530 r __ksymtab___inet_inherit_port
ffffffff8140d540 r __ksymtab___inet_lookup_established
ffffffff8140d550 r __ksymtab___inet_lookup_listener
ffffffff8140d560 r __ksymtab___inet_twsk_hashdance
ffffffff8140d570 r __ksymtab___init_kthread_worker
ffffffff8140d580 r __ksymtab___iowrite32_copy
ffffffff8140d590 r __ksymtab___iowrite64_copy
ffffffff8140d5a0 r __ksymtab___ip_route_output_key
ffffffff8140d5b0 r __ksymtab___irq_alloc_descs
ffffffff8140d5c0 r __ksymtab___irq_set_handler
ffffffff8140d5d0 r __ksymtab___lock_page_killable
ffffffff8140d5e0 r __ksymtab___mmdrop
ffffffff8140d5f0 r __ksymtab___mmu_notifier_register
ffffffff8140d600 r __ksymtab___mnt_is_readonly
ffffffff8140d610 r __ksymtab___module_address
ffffffff8140d620 r __ksymtab___module_text_address
ffffffff8140d630 r __ksymtab___pci_complete_power_transition
ffffffff8140d640 r __ksymtab___pci_reset_function
ffffffff8140d650 r __ksymtab___pci_reset_function_locked
ffffffff8140d660 r __ksymtab___pm_relax
ffffffff8140d670 r __ksymtab___pm_runtime_disable
ffffffff8140d680 r __ksymtab___pm_runtime_idle
ffffffff8140d690 r __ksymtab___pm_runtime_resume
ffffffff8140d6a0 r __ksymtab___pm_runtime_set_status
ffffffff8140d6b0 r __ksymtab___pm_runtime_suspend
ffffffff8140d6c0 r __ksymtab___pm_runtime_use_autosuspend
ffffffff8140d6d0 r __ksymtab___pm_stay_awake
ffffffff8140d6e0 r __ksymtab___pm_wakeup_event
ffffffff8140d6f0 r __ksymtab___pneigh_lookup
ffffffff8140d700 r __ksymtab___put_net
ffffffff8140d710 r __ksymtab___put_task_struct
ffffffff8140d720 r __ksymtab___raw_notifier_call_chain
ffffffff8140d730 r __ksymtab___root_device_register
ffffffff8140d740 r __ksymtab___round_jiffies
ffffffff8140d750 r __ksymtab___round_jiffies_relative
ffffffff8140d760 r __ksymtab___round_jiffies_up
ffffffff8140d770 r __ksymtab___round_jiffies_up_relative
ffffffff8140d780 r __ksymtab___rt_mutex_init
ffffffff8140d790 r __ksymtab___rtnl_af_register
ffffffff8140d7a0 r __ksymtab___rtnl_af_unregister
ffffffff8140d7b0 r __ksymtab___rtnl_link_register
ffffffff8140d7c0 r __ksymtab___rtnl_link_unregister
ffffffff8140d7d0 r __ksymtab___rtnl_register
ffffffff8140d7e0 r __ksymtab___scsi_get_command
ffffffff8140d7f0 r __ksymtab___sock_recv_timestamp
ffffffff8140d800 r __ksymtab___sock_recv_ts_and_drops
ffffffff8140d810 r __ksymtab___sock_recv_wifi_status
ffffffff8140d820 r __ksymtab___srcu_notifier_call_chain
ffffffff8140d830 r __ksymtab___srcu_read_lock
ffffffff8140d840 r __ksymtab___srcu_read_unlock
ffffffff8140d850 r __ksymtab___supported_pte_mask
ffffffff8140d860 r __ksymtab___suspend_report_result
ffffffff8140d870 r __ksymtab___symbol_get
ffffffff8140d880 r __ksymtab___timecompare_update
ffffffff8140d890 r __ksymtab___udp4_lib_lookup
ffffffff8140d8a0 r __ksymtab___wake_up_locked
ffffffff8140d8b0 r __ksymtab___wake_up_locked_key
ffffffff8140d8c0 r __ksymtab___wake_up_sync
ffffffff8140d8d0 r __ksymtab___wake_up_sync_key
ffffffff8140d8e0 r __ksymtab_ablkcipher_walk_done
ffffffff8140d8f0 r __ksymtab_ablkcipher_walk_phys
ffffffff8140d900 r __ksymtab_acpi_bus_generate_proc_event4
ffffffff8140d910 r __ksymtab_acpi_bus_get_ejd
ffffffff8140d920 r __ksymtab_acpi_bus_trim
ffffffff8140d930 r __ksymtab_acpi_bus_update_power
ffffffff8140d940 r __ksymtab_acpi_ec_add_query_handler
ffffffff8140d950 r __ksymtab_acpi_ec_remove_query_handler
ffffffff8140d960 r __ksymtab_acpi_get_cpuid
ffffffff8140d970 r __ksymtab_acpi_get_pci_dev
ffffffff8140d980 r __ksymtab_acpi_get_pci_rootbridge_handle
ffffffff8140d990 r __ksymtab_acpi_gsi_to_irq
ffffffff8140d9a0 r __ksymtab_acpi_is_root_bridge
ffffffff8140d9b0 r __ksymtab_acpi_kobj
ffffffff8140d9c0 r __ksymtab_acpi_os_get_iomem
ffffffff8140d9d0 r __ksymtab_acpi_os_map_memory
ffffffff8140d9e0 r __ksymtab_acpi_os_unmap_memory
ffffffff8140d9f0 r __ksymtab_acpi_pci_find_root
ffffffff8140da00 r __ksymtab_acpi_processor_ffh_cstate_enter
ffffffff8140da10 r __ksymtab_acpi_processor_ffh_cstate_probe
ffffffff8140da20 r __ksymtab_add_input_randomness
ffffffff8140da30 r __ksymtab_add_page_wait_queue
ffffffff8140da40 r __ksymtab_add_timer_on
ffffffff8140da50 r __ksymtab_add_to_page_cache_lru
ffffffff8140da60 r __ksymtab_add_uevent_var
ffffffff8140da70 r __ksymtab_aead_geniv_alloc
ffffffff8140da80 r __ksymtab_aead_geniv_exit
ffffffff8140da90 r __ksymtab_aead_geniv_free
ffffffff8140daa0 r __ksymtab_aead_geniv_init
ffffffff8140dab0 r __ksymtab_aer_irq
ffffffff8140dac0 r __ksymtab_aer_recover_queue
ffffffff8140dad0 r __ksymtab_ahash_attr_alg
ffffffff8140dae0 r __ksymtab_ahash_free_instance
ffffffff8140daf0 r __ksymtab_ahash_register_instance
ffffffff8140db00 r __ksymtab_alg_test
ffffffff8140db10 r __ksymtab_all_vm_events
ffffffff8140db20 r __ksymtab_alloc_page_buffers
ffffffff8140db30 r __ksymtab_alloc_vm_area
ffffffff8140db40 r __ksymtab_amd_cache_northbridges
ffffffff8140db50 r __ksymtab_amd_erratum_383
ffffffff8140db60 r __ksymtab_amd_erratum_400
ffffffff8140db70 r __ksymtab_amd_flush_garts
ffffffff8140db80 r __ksymtab_amd_get_nb_id
ffffffff8140db90 r __ksymtab_amd_pmu_disable_virt
ffffffff8140dba0 r __ksymtab_amd_pmu_enable_virt
ffffffff8140dbb0 r __ksymtab_anon_inode_getfd
ffffffff8140dbc0 r __ksymtab_anon_inode_getfile
ffffffff8140dbd0 r __ksymtab_anon_transport_class_register
ffffffff8140dbe0 r __ksymtab_anon_transport_class_unregister
ffffffff8140dbf0 r __ksymtab_aout_dump_debugregs
ffffffff8140dc00 r __ksymtab_apei_estatus_check
ffffffff8140dc10 r __ksymtab_apei_estatus_check_header
ffffffff8140dc20 r __ksymtab_apei_estatus_print
ffffffff8140dc30 r __ksymtab_apei_exec_collect_resources
ffffffff8140dc40 r __ksymtab_apei_exec_ctx_init
ffffffff8140dc50 r __ksymtab_apei_exec_noop
ffffffff8140dc60 r __ksymtab_apei_exec_post_unmap_gars
ffffffff8140dc70 r __ksymtab_apei_exec_pre_map_gars
ffffffff8140dc80 r __ksymtab_apei_exec_read_register
ffffffff8140dc90 r __ksymtab_apei_exec_read_register_value
ffffffff8140dca0 r __ksymtab_apei_exec_write_register
ffffffff8140dcb0 r __ksymtab_apei_exec_write_register_value
ffffffff8140dcc0 r __ksymtab_apei_get_debugfs_dir
ffffffff8140dcd0 r __ksymtab_apei_hest_parse
ffffffff8140dce0 r __ksymtab_apei_map_generic_address
ffffffff8140dcf0 r __ksymtab_apei_mce_report_mem_error
ffffffff8140dd00 r __ksymtab_apei_osc_setup
ffffffff8140dd10 r __ksymtab_apei_read
ffffffff8140dd20 r __ksymtab_apei_resources_add
ffffffff8140dd30 r __ksymtab_apei_resources_fini
ffffffff8140dd40 r __ksymtab_apei_resources_release
ffffffff8140dd50 r __ksymtab_apei_resources_request
ffffffff8140dd60 r __ksymtab_apei_resources_sub
ffffffff8140dd70 r __ksymtab_apei_write
ffffffff8140dd80 r __ksymtab_apic
ffffffff8140dd90 r __ksymtab_apply_to_page_range
ffffffff8140dda0 r __ksymtab_arbitrary_virt_to_machine
ffffffff8140ddb0 r __ksymtab_async_schedule
ffffffff8140ddc0 r __ksymtab_async_schedule_domain
ffffffff8140ddd0 r __ksymtab_async_synchronize_cookie
ffffffff8140dde0 r __ksymtab_async_synchronize_cookie_domain
ffffffff8140ddf0 r __ksymtab_async_synchronize_full
ffffffff8140de00 r __ksymtab_async_synchronize_full_domain
ffffffff8140de10 r __ksymtab_ata_acpi_cbl_80wire
ffffffff8140de20 r __ksymtab_ata_acpi_gtm
ffffffff8140de30 r __ksymtab_ata_acpi_gtm_xfermask
ffffffff8140de40 r __ksymtab_ata_acpi_stm
ffffffff8140de50 r __ksymtab_ata_base_port_ops
ffffffff8140de60 r __ksymtab_ata_bmdma32_port_ops
ffffffff8140de70 r __ksymtab_ata_bmdma_dumb_qc_prep
ffffffff8140de80 r __ksymtab_ata_bmdma_error_handler
ffffffff8140de90 r __ksymtab_ata_bmdma_interrupt
ffffffff8140dea0 r __ksymtab_ata_bmdma_irq_clear
ffffffff8140deb0 r __ksymtab_ata_bmdma_port_intr
ffffffff8140dec0 r __ksymtab_ata_bmdma_port_ops
ffffffff8140ded0 r __ksymtab_ata_bmdma_port_start
ffffffff8140dee0 r __ksymtab_ata_bmdma_port_start32
ffffffff8140def0 r __ksymtab_ata_bmdma_post_internal_cmd
ffffffff8140df00 r __ksymtab_ata_bmdma_qc_issue
ffffffff8140df10 r __ksymtab_ata_bmdma_qc_prep
ffffffff8140df20 r __ksymtab_ata_bmdma_setup
ffffffff8140df30 r __ksymtab_ata_bmdma_start
ffffffff8140df40 r __ksymtab_ata_bmdma_status
ffffffff8140df50 r __ksymtab_ata_bmdma_stop
ffffffff8140df60 r __ksymtab_ata_cable_40wire
ffffffff8140df70 r __ksymtab_ata_cable_80wire
ffffffff8140df80 r __ksymtab_ata_cable_ignore
ffffffff8140df90 r __ksymtab_ata_cable_sata
ffffffff8140dfa0 r __ksymtab_ata_cable_unknown
ffffffff8140dfb0 r __ksymtab_ata_common_sdev_attrs
ffffffff8140dfc0 r __ksymtab_ata_dev_classify
ffffffff8140dfd0 r __ksymtab_ata_dev_disable
ffffffff8140dfe0 r __ksymtab_ata_dev_next
ffffffff8140dff0 r __ksymtab_ata_dev_pair
ffffffff8140e000 r __ksymtab_ata_do_dev_read_id
ffffffff8140e010 r __ksymtab_ata_do_eh
ffffffff8140e020 r __ksymtab_ata_do_set_mode
ffffffff8140e030 r __ksymtab_ata_dummy_port_info
ffffffff8140e040 r __ksymtab_ata_dummy_port_ops
ffffffff8140e050 r __ksymtab_ata_eh_analyze_ncq_error
ffffffff8140e060 r __ksymtab_ata_eh_freeze_port
ffffffff8140e070 r __ksymtab_ata_eh_qc_complete
ffffffff8140e080 r __ksymtab_ata_eh_qc_retry
ffffffff8140e090 r __ksymtab_ata_eh_thaw_port
ffffffff8140e0a0 r __ksymtab_ata_ehi_clear_desc
ffffffff8140e0b0 r __ksymtab_ata_ehi_push_desc
ffffffff8140e0c0 r __ksymtab_ata_host_activate
ffffffff8140e0d0 r __ksymtab_ata_host_alloc
ffffffff8140e0e0 r __ksymtab_ata_host_alloc_pinfo
ffffffff8140e0f0 r __ksymtab_ata_host_detach
ffffffff8140e100 r __ksymtab_ata_host_init
ffffffff8140e110 r __ksymtab_ata_host_register
ffffffff8140e120 r __ksymtab_ata_host_resume
ffffffff8140e130 r __ksymtab_ata_host_start
ffffffff8140e140 r __ksymtab_ata_host_suspend
ffffffff8140e150 r __ksymtab_ata_id_c_string
ffffffff8140e160 r __ksymtab_ata_id_string
ffffffff8140e170 r __ksymtab_ata_id_xfermask
ffffffff8140e180 r __ksymtab_ata_link_abort
ffffffff8140e190 r __ksymtab_ata_link_next
ffffffff8140e1a0 r __ksymtab_ata_link_offline
ffffffff8140e1b0 r __ksymtab_ata_link_online
ffffffff8140e1c0 r __ksymtab_ata_mode_string
ffffffff8140e1d0 r __ksymtab_ata_msleep
ffffffff8140e1e0 r __ksymtab_ata_noop_qc_prep
ffffffff8140e1f0 r __ksymtab_ata_pack_xfermask
ffffffff8140e200 r __ksymtab_ata_pci_bmdma_clear_simplex
ffffffff8140e210 r __ksymtab_ata_pci_bmdma_init
ffffffff8140e220 r __ksymtab_ata_pci_bmdma_init_one
ffffffff8140e230 r __ksymtab_ata_pci_bmdma_prepare_host
ffffffff8140e240 r __ksymtab_ata_pci_device_do_resume
ffffffff8140e250 r __ksymtab_ata_pci_device_do_suspend
ffffffff8140e260 r __ksymtab_ata_pci_device_resume
ffffffff8140e270 r __ksymtab_ata_pci_device_suspend
ffffffff8140e280 r __ksymtab_ata_pci_remove_one
ffffffff8140e290 r __ksymtab_ata_pci_sff_activate_host
ffffffff8140e2a0 r __ksymtab_ata_pci_sff_init_host
ffffffff8140e2b0 r __ksymtab_ata_pci_sff_init_one
ffffffff8140e2c0 r __ksymtab_ata_pci_sff_prepare_host
ffffffff8140e2d0 r __ksymtab_ata_pio_need_iordy
ffffffff8140e2e0 r __ksymtab_ata_port_abort
ffffffff8140e2f0 r __ksymtab_ata_port_desc
ffffffff8140e300 r __ksymtab_ata_port_freeze
ffffffff8140e310 r __ksymtab_ata_port_pbar_desc
ffffffff8140e320 r __ksymtab_ata_port_schedule_eh
ffffffff8140e330 r __ksymtab_ata_port_wait_eh
ffffffff8140e340 r __ksymtab_ata_qc_complete
ffffffff8140e350 r __ksymtab_ata_qc_complete_multiple
ffffffff8140e360 r __ksymtab_ata_ratelimit
ffffffff8140e370 r __ksymtab_ata_sas_async_probe
ffffffff8140e380 r __ksymtab_ata_sas_port_alloc
ffffffff8140e390 r __ksymtab_ata_sas_port_destroy
ffffffff8140e3a0 r __ksymtab_ata_sas_port_init
ffffffff8140e3b0 r __ksymtab_ata_sas_port_start
ffffffff8140e3c0 r __ksymtab_ata_sas_port_stop
ffffffff8140e3d0 r __ksymtab_ata_sas_queuecmd
ffffffff8140e3e0 r __ksymtab_ata_sas_scsi_ioctl
ffffffff8140e3f0 r __ksymtab_ata_sas_slave_configure
ffffffff8140e400 r __ksymtab_ata_sas_sync_probe
ffffffff8140e410 r __ksymtab_ata_scsi_change_queue_depth
ffffffff8140e420 r __ksymtab_ata_scsi_ioctl
ffffffff8140e430 r __ksymtab_ata_scsi_port_error_handler
ffffffff8140e440 r __ksymtab_ata_scsi_queuecmd
ffffffff8140e450 r __ksymtab_ata_scsi_simulate
ffffffff8140e460 r __ksymtab_ata_scsi_slave_config
ffffffff8140e470 r __ksymtab_ata_scsi_slave_destroy
ffffffff8140e480 r __ksymtab_ata_scsi_unlock_native_capacity
ffffffff8140e490 r __ksymtab_ata_sff_busy_sleep
ffffffff8140e4a0 r __ksymtab_ata_sff_check_status
ffffffff8140e4b0 r __ksymtab_ata_sff_data_xfer
ffffffff8140e4c0 r __ksymtab_ata_sff_data_xfer32
ffffffff8140e4d0 r __ksymtab_ata_sff_data_xfer_noirq
ffffffff8140e4e0 r __ksymtab_ata_sff_dev_classify
ffffffff8140e4f0 r __ksymtab_ata_sff_dev_select
ffffffff8140e500 r __ksymtab_ata_sff_dma_pause
ffffffff8140e510 r __ksymtab_ata_sff_drain_fifo
ffffffff8140e520 r __ksymtab_ata_sff_error_handler
ffffffff8140e530 r __ksymtab_ata_sff_exec_command
ffffffff8140e540 r __ksymtab_ata_sff_freeze
ffffffff8140e550 r __ksymtab_ata_sff_hsm_move
ffffffff8140e560 r __ksymtab_ata_sff_interrupt
ffffffff8140e570 r __ksymtab_ata_sff_irq_on
ffffffff8140e580 r __ksymtab_ata_sff_lost_interrupt
ffffffff8140e590 r __ksymtab_ata_sff_pause
ffffffff8140e5a0 r __ksymtab_ata_sff_port_intr
ffffffff8140e5b0 r __ksymtab_ata_sff_port_ops
ffffffff8140e5c0 r __ksymtab_ata_sff_postreset
ffffffff8140e5d0 r __ksymtab_ata_sff_prereset
ffffffff8140e5e0 r __ksymtab_ata_sff_qc_fill_rtf
ffffffff8140e5f0 r __ksymtab_ata_sff_qc_issue
ffffffff8140e600 r __ksymtab_ata_sff_queue_delayed_work
ffffffff8140e610 r __ksymtab_ata_sff_queue_pio_task
ffffffff8140e620 r __ksymtab_ata_sff_queue_work
ffffffff8140e630 r __ksymtab_ata_sff_softreset
ffffffff8140e640 r __ksymtab_ata_sff_std_ports
ffffffff8140e650 r __ksymtab_ata_sff_tf_load
ffffffff8140e660 r __ksymtab_ata_sff_tf_read
ffffffff8140e670 r __ksymtab_ata_sff_thaw
ffffffff8140e680 r __ksymtab_ata_sff_wait_after_reset
ffffffff8140e690 r __ksymtab_ata_sff_wait_ready
ffffffff8140e6a0 r __ksymtab_ata_sg_init
ffffffff8140e6b0 r __ksymtab_ata_slave_link_init
ffffffff8140e6c0 r __ksymtab_ata_std_bios_param
ffffffff8140e6d0 r __ksymtab_ata_std_error_handler
ffffffff8140e6e0 r __ksymtab_ata_std_postreset
ffffffff8140e6f0 r __ksymtab_ata_std_prereset
ffffffff8140e700 r __ksymtab_ata_std_qc_defer
ffffffff8140e710 r __ksymtab_ata_tf_from_fis
ffffffff8140e720 r __ksymtab_ata_tf_to_fis
ffffffff8140e730 r __ksymtab_ata_timing_compute
ffffffff8140e740 r __ksymtab_ata_timing_cycle2mode
ffffffff8140e750 r __ksymtab_ata_timing_find_mode
ffffffff8140e760 r __ksymtab_ata_timing_merge
ffffffff8140e770 r __ksymtab_ata_unpack_xfermask
ffffffff8140e780 r __ksymtab_ata_wait_after_reset
ffffffff8140e790 r __ksymtab_ata_wait_register
ffffffff8140e7a0 r __ksymtab_ata_xfer_mask2mode
ffffffff8140e7b0 r __ksymtab_ata_xfer_mode2mask
ffffffff8140e7c0 r __ksymtab_ata_xfer_mode2shift
ffffffff8140e7d0 r __ksymtab_atapi_cmd_type
ffffffff8140e7e0 r __ksymtab_atomic_notifier_call_chain
ffffffff8140e7f0 r __ksymtab_atomic_notifier_chain_register
ffffffff8140e800 r __ksymtab_atomic_notifier_chain_unregister
ffffffff8140e810 r __ksymtab_attribute_container_classdev_to_container
ffffffff8140e820 r __ksymtab_attribute_container_find_class_device
ffffffff8140e830 r __ksymtab_attribute_container_register
ffffffff8140e840 r __ksymtab_attribute_container_unregister
ffffffff8140e850 r __ksymtab_audit_enabled
ffffffff8140e860 r __ksymtab_balloon_set_new_target
ffffffff8140e870 r __ksymtab_balloon_stats
ffffffff8140e880 r __ksymtab_bd_link_disk_holder
ffffffff8140e890 r __ksymtab_bd_unlink_disk_holder
ffffffff8140e8a0 r __ksymtab_bdi_writeout_inc
ffffffff8140e8b0 r __ksymtab_bind_evtchn_to_irq
ffffffff8140e8c0 r __ksymtab_bind_evtchn_to_irqhandler
ffffffff8140e8d0 r __ksymtab_bind_interdomain_evtchn_to_irqhandler
ffffffff8140e8e0 r __ksymtab_bind_virq_to_irqhandler
ffffffff8140e8f0 r __ksymtab_blk_abort_queue
ffffffff8140e900 r __ksymtab_blk_abort_request
ffffffff8140e910 r __ksymtab_blk_add_request_payload
ffffffff8140e920 r __ksymtab_blk_end_request_err
ffffffff8140e930 r __ksymtab_blk_execute_rq_nowait
ffffffff8140e940 r __ksymtab_blk_insert_cloned_request
ffffffff8140e950 r __ksymtab_blk_lld_busy
ffffffff8140e960 r __ksymtab_blk_queue_bio
ffffffff8140e970 r __ksymtab_blk_queue_dma_drain
ffffffff8140e980 r __ksymtab_blk_queue_flush
ffffffff8140e990 r __ksymtab_blk_queue_flush_queueable
ffffffff8140e9a0 r __ksymtab_blk_queue_lld_busy
ffffffff8140e9b0 r __ksymtab_blk_queue_rq_timed_out
ffffffff8140e9c0 r __ksymtab_blk_queue_rq_timeout
ffffffff8140e9d0 r __ksymtab_blk_rq_check_limits
ffffffff8140e9e0 r __ksymtab_blk_rq_err_bytes
ffffffff8140e9f0 r __ksymtab_blk_rq_prep_clone
ffffffff8140ea00 r __ksymtab_blk_rq_unprep_clone
ffffffff8140ea10 r __ksymtab_blk_unprep_request
ffffffff8140ea20 r __ksymtab_blk_update_request
ffffffff8140ea30 r __ksymtab_blkcg_get_weight
ffffffff8140ea40 r __ksymtab_blkcipher_walk_done
ffffffff8140ea50 r __ksymtab_blkcipher_walk_phys
ffffffff8140ea60 r __ksymtab_blkcipher_walk_virt
ffffffff8140ea70 r __ksymtab_blkcipher_walk_virt_block
ffffffff8140ea80 r __ksymtab_blkdev_aio_write
ffffffff8140ea90 r __ksymtab_blkdev_ioctl
ffffffff8140eaa0 r __ksymtab_blkio_alloc_blkg_stats
ffffffff8140eab0 r __ksymtab_blkio_policy_register
ffffffff8140eac0 r __ksymtab_blkio_policy_unregister
ffffffff8140ead0 r __ksymtab_blkio_root_cgroup
ffffffff8140eae0 r __ksymtab_blkio_subsys
ffffffff8140eaf0 r __ksymtab_blkiocg_add_blkio_group
ffffffff8140eb00 r __ksymtab_blkiocg_del_blkio_group
ffffffff8140eb10 r __ksymtab_blkiocg_lookup_group
ffffffff8140eb20 r __ksymtab_blkiocg_update_completion_stats
ffffffff8140eb30 r __ksymtab_blkiocg_update_dispatch_stats
ffffffff8140eb40 r __ksymtab_blkiocg_update_io_add_stats
ffffffff8140eb50 r __ksymtab_blkiocg_update_io_merged_stats
ffffffff8140eb60 r __ksymtab_blkiocg_update_io_remove_stats
ffffffff8140eb70 r __ksymtab_blkiocg_update_timeslice_used
ffffffff8140eb80 r __ksymtab_blocking_notifier_call_chain
ffffffff8140eb90 r __ksymtab_blocking_notifier_chain_cond_register
ffffffff8140eba0 r __ksymtab_blocking_notifier_chain_register
ffffffff8140ebb0 r __ksymtab_blocking_notifier_chain_unregister
ffffffff8140ebc0 r __ksymtab_bsg_goose_queue
ffffffff8140ebd0 r __ksymtab_bsg_job_done
ffffffff8140ebe0 r __ksymtab_bsg_register_queue
ffffffff8140ebf0 r __ksymtab_bsg_remove_queue
ffffffff8140ec00 r __ksymtab_bsg_request_fn
ffffffff8140ec10 r __ksymtab_bsg_setup_queue
ffffffff8140ec20 r __ksymtab_bsg_unregister_queue
ffffffff8140ec30 r __ksymtab_bus_create_file
ffffffff8140ec40 r __ksymtab_bus_find_device
ffffffff8140ec50 r __ksymtab_bus_find_device_by_name
ffffffff8140ec60 r __ksymtab_bus_for_each_dev
ffffffff8140ec70 r __ksymtab_bus_for_each_drv
ffffffff8140ec80 r __ksymtab_bus_get_device_klist
ffffffff8140ec90 r __ksymtab_bus_get_kset
ffffffff8140eca0 r __ksymtab_bus_register_notifier
ffffffff8140ecb0 r __ksymtab_bus_remove_file
ffffffff8140ecc0 r __ksymtab_bus_rescan_devices
ffffffff8140ecd0 r __ksymtab_bus_set_iommu
ffffffff8140ece0 r __ksymtab_bus_sort_breadthfirst
ffffffff8140ecf0 r __ksymtab_bus_unregister
ffffffff8140ed00 r __ksymtab_bus_unregister_notifier
ffffffff8140ed10 r __ksymtab_byte_rev_table
ffffffff8140ed20 r __ksymtab_call_netevent_notifiers
ffffffff8140ed30 r __ksymtab_call_rcu_bh
ffffffff8140ed40 r __ksymtab_call_rcu_sched
ffffffff8140ed50 r __ksymtab_cancel_work_sync
ffffffff8140ed60 r __ksymtab_cgroup_add_file
ffffffff8140ed70 r __ksymtab_cgroup_add_files
ffffffff8140ed80 r __ksymtab_cgroup_attach_task_all
ffffffff8140ed90 r __ksymtab_cgroup_load_subsys
ffffffff8140eda0 r __ksymtab_cgroup_lock
ffffffff8140edb0 r __ksymtab_cgroup_lock_is_held
ffffffff8140edc0 r __ksymtab_cgroup_lock_live_group
ffffffff8140edd0 r __ksymtab_cgroup_path
ffffffff8140ede0 r __ksymtab_cgroup_taskset_cur_cgroup
ffffffff8140edf0 r __ksymtab_cgroup_taskset_first
ffffffff8140ee00 r __ksymtab_cgroup_taskset_next
ffffffff8140ee10 r __ksymtab_cgroup_taskset_size
ffffffff8140ee20 r __ksymtab_cgroup_to_blkio_cgroup
ffffffff8140ee30 r __ksymtab_cgroup_unload_subsys
ffffffff8140ee40 r __ksymtab_cgroup_unlock
ffffffff8140ee50 r __ksymtab_check_tsc_unstable
ffffffff8140ee60 r __ksymtab_class_compat_create_link
ffffffff8140ee70 r __ksymtab_class_compat_register
ffffffff8140ee80 r __ksymtab_class_compat_remove_link
ffffffff8140ee90 r __ksymtab_class_compat_unregister
ffffffff8140eea0 r __ksymtab_class_create_file
ffffffff8140eeb0 r __ksymtab_class_destroy
ffffffff8140eec0 r __ksymtab_class_dev_iter_exit
ffffffff8140eed0 r __ksymtab_class_dev_iter_init
ffffffff8140eee0 r __ksymtab_class_dev_iter_next
ffffffff8140eef0 r __ksymtab_class_find_device
ffffffff8140ef00 r __ksymtab_class_for_each_device
ffffffff8140ef10 r __ksymtab_class_interface_register
ffffffff8140ef20 r __ksymtab_class_interface_unregister
ffffffff8140ef30 r __ksymtab_class_remove_file
ffffffff8140ef40 r __ksymtab_class_unregister
ffffffff8140ef50 r __ksymtab_cleanup_srcu_struct
ffffffff8140ef60 r __ksymtab_clflush_cache_range
ffffffff8140ef70 r __ksymtab_clockevent_delta2ns
ffffffff8140ef80 r __ksymtab_clockevents_notify
ffffffff8140ef90 r __ksymtab_clockevents_register_device
ffffffff8140efa0 r __ksymtab_compat_alloc_user_space
ffffffff8140efb0 r __ksymtab_compat_get_timespec
ffffffff8140efc0 r __ksymtab_compat_get_timeval
ffffffff8140efd0 r __ksymtab_compat_put_timespec
ffffffff8140efe0 r __ksymtab_compat_put_timeval
ffffffff8140eff0 r __ksymtab_con_debug_enter
ffffffff8140f000 r __ksymtab_con_debug_leave
ffffffff8140f010 r __ksymtab_console_drivers
ffffffff8140f020 r __ksymtab_copy_from_user_nmi
ffffffff8140f030 r __ksymtab_cper_next_record_id
ffffffff8140f040 r __ksymtab_cper_severity_to_aer
ffffffff8140f050 r __ksymtab_cpu_bit_bitmap
ffffffff8140f060 r __ksymtab_cpu_clock
ffffffff8140f070 r __ksymtab_cpu_has_amd_erratum
ffffffff8140f080 r __ksymtab_cpu_idle_wait
ffffffff8140f090 r __ksymtab_cpu_is_hotpluggable
ffffffff8140f0a0 r __ksymtab_cpu_subsys
ffffffff8140f0b0 r __ksymtab_cpu_up
ffffffff8140f0c0 r __ksymtab_cpufreq_cpu_get
ffffffff8140f0d0 r __ksymtab_cpufreq_cpu_put
ffffffff8140f0e0 r __ksymtab_cpufreq_driver_target
ffffffff8140f0f0 r __ksymtab_cpufreq_freq_attr_scaling_available_freqs
ffffffff8140f100 r __ksymtab_cpufreq_frequency_get_table
ffffffff8140f110 r __ksymtab_cpufreq_frequency_table_cpuinfo
ffffffff8140f120 r __ksymtab_cpufreq_frequency_table_get_attr
ffffffff8140f130 r __ksymtab_cpufreq_frequency_table_put_attr
ffffffff8140f140 r __ksymtab_cpufreq_frequency_table_target
ffffffff8140f150 r __ksymtab_cpufreq_frequency_table_verify
ffffffff8140f160 r __ksymtab_cpufreq_notify_transition
ffffffff8140f170 r __ksymtab_cpufreq_register_driver
ffffffff8140f180 r __ksymtab_cpufreq_register_governor
ffffffff8140f190 r __ksymtab_cpufreq_unregister_driver
ffffffff8140f1a0 r __ksymtab_cpufreq_unregister_governor
ffffffff8140f1b0 r __ksymtab_cpuidle_disable_device
ffffffff8140f1c0 r __ksymtab_cpuidle_enable_device
ffffffff8140f1d0 r __ksymtab_cpuidle_get_driver
ffffffff8140f1e0 r __ksymtab_cpuidle_pause_and_lock
ffffffff8140f1f0 r __ksymtab_cpuidle_register_device
ffffffff8140f200 r __ksymtab_cpuidle_register_driver
ffffffff8140f210 r __ksymtab_cpuidle_resume_and_unlock
ffffffff8140f220 r __ksymtab_cpuidle_unregister_device
ffffffff8140f230 r __ksymtab_cpuidle_unregister_driver
ffffffff8140f240 r __ksymtab_cpuset_mem_spread_node
ffffffff8140f250 r __ksymtab_cred_to_ucred
ffffffff8140f260 r __ksymtab_crypto_ablkcipher_type
ffffffff8140f270 r __ksymtab_crypto_aead_setauthsize
ffffffff8140f280 r __ksymtab_crypto_aead_type
ffffffff8140f290 r __ksymtab_crypto_aes_expand_key
ffffffff8140f2a0 r __ksymtab_crypto_aes_set_key
ffffffff8140f2b0 r __ksymtab_crypto_ahash_digest
ffffffff8140f2c0 r __ksymtab_crypto_ahash_final
ffffffff8140f2d0 r __ksymtab_crypto_ahash_finup
ffffffff8140f2e0 r __ksymtab_crypto_ahash_setkey
ffffffff8140f2f0 r __ksymtab_crypto_ahash_type
ffffffff8140f300 r __ksymtab_crypto_alg_list
ffffffff8140f310 r __ksymtab_crypto_alg_lookup
ffffffff8140f320 r __ksymtab_crypto_alg_mod_lookup
ffffffff8140f330 r __ksymtab_crypto_alg_sem
ffffffff8140f340 r __ksymtab_crypto_alg_tested
ffffffff8140f350 r __ksymtab_crypto_alloc_ablkcipher
ffffffff8140f360 r __ksymtab_crypto_alloc_aead
ffffffff8140f370 r __ksymtab_crypto_alloc_ahash
ffffffff8140f380 r __ksymtab_crypto_alloc_base
ffffffff8140f390 r __ksymtab_crypto_alloc_instance
ffffffff8140f3a0 r __ksymtab_crypto_alloc_instance2
ffffffff8140f3b0 r __ksymtab_crypto_alloc_pcomp
ffffffff8140f3c0 r __ksymtab_crypto_alloc_shash
ffffffff8140f3d0 r __ksymtab_crypto_alloc_tfm
ffffffff8140f3e0 r __ksymtab_crypto_attr_alg2
ffffffff8140f3f0 r __ksymtab_crypto_attr_alg_name
ffffffff8140f400 r __ksymtab_crypto_attr_u32
ffffffff8140f410 r __ksymtab_crypto_blkcipher_type
ffffffff8140f420 r __ksymtab_crypto_chain
ffffffff8140f430 r __ksymtab_crypto_check_attr_type
ffffffff8140f440 r __ksymtab_crypto_create_tfm
ffffffff8140f450 r __ksymtab_crypto_default_rng
ffffffff8140f460 r __ksymtab_crypto_dequeue_request
ffffffff8140f470 r __ksymtab_crypto_destroy_tfm
ffffffff8140f480 r __ksymtab_crypto_drop_spawn
ffffffff8140f490 r __ksymtab_crypto_enqueue_request
ffffffff8140f4a0 r __ksymtab_crypto_find_alg
ffffffff8140f4b0 r __ksymtab_crypto_fl_tab
ffffffff8140f4c0 r __ksymtab_crypto_ft_tab
ffffffff8140f4d0 r __ksymtab_crypto_get_attr_type
ffffffff8140f4e0 r __ksymtab_crypto_get_default_rng
ffffffff8140f4f0 r __ksymtab_crypto_givcipher_type
ffffffff8140f500 r __ksymtab_crypto_grab_aead
ffffffff8140f510 r __ksymtab_crypto_grab_skcipher
ffffffff8140f520 r __ksymtab_crypto_has_alg
ffffffff8140f530 r __ksymtab_crypto_hash_walk_done
ffffffff8140f540 r __ksymtab_crypto_hash_walk_first
ffffffff8140f550 r __ksymtab_crypto_il_tab
ffffffff8140f560 r __ksymtab_crypto_inc
ffffffff8140f570 r __ksymtab_crypto_init_ahash_spawn
ffffffff8140f580 r __ksymtab_crypto_init_queue
ffffffff8140f590 r __ksymtab_crypto_init_shash_spawn
ffffffff8140f5a0 r __ksymtab_crypto_init_spawn
ffffffff8140f5b0 r __ksymtab_crypto_init_spawn2
ffffffff8140f5c0 r __ksymtab_crypto_it_tab
ffffffff8140f5d0 r __ksymtab_crypto_larval_alloc
ffffffff8140f5e0 r __ksymtab_crypto_larval_error
ffffffff8140f5f0 r __ksymtab_crypto_larval_kill
ffffffff8140f600 r __ksymtab_crypto_larval_lookup
ffffffff8140f610 r __ksymtab_crypto_lookup_aead
ffffffff8140f620 r __ksymtab_crypto_lookup_skcipher
ffffffff8140f630 r __ksymtab_crypto_lookup_template
ffffffff8140f640 r __ksymtab_crypto_mod_get
ffffffff8140f650 r __ksymtab_crypto_mod_put
ffffffff8140f660 r __ksymtab_crypto_nivaead_type
ffffffff8140f670 r __ksymtab_crypto_probing_notify
ffffffff8140f680 r __ksymtab_crypto_put_default_rng
ffffffff8140f690 r __ksymtab_crypto_register_ahash
ffffffff8140f6a0 r __ksymtab_crypto_register_alg
ffffffff8140f6b0 r __ksymtab_crypto_register_algs
ffffffff8140f6c0 r __ksymtab_crypto_register_instance
ffffffff8140f6d0 r __ksymtab_crypto_register_notifier
ffffffff8140f6e0 r __ksymtab_crypto_register_pcomp
ffffffff8140f6f0 r __ksymtab_crypto_register_shash
ffffffff8140f700 r __ksymtab_crypto_register_template
ffffffff8140f710 r __ksymtab_crypto_remove_final
ffffffff8140f720 r __ksymtab_crypto_remove_spawns
ffffffff8140f730 r __ksymtab_crypto_rng_type
ffffffff8140f740 r __ksymtab_crypto_shash_digest
ffffffff8140f750 r __ksymtab_crypto_shash_final
ffffffff8140f760 r __ksymtab_crypto_shash_finup
ffffffff8140f770 r __ksymtab_crypto_shash_setkey
ffffffff8140f780 r __ksymtab_crypto_shash_update
ffffffff8140f790 r __ksymtab_crypto_shoot_alg
ffffffff8140f7a0 r __ksymtab_crypto_spawn_tfm
ffffffff8140f7b0 r __ksymtab_crypto_spawn_tfm2
ffffffff8140f7c0 r __ksymtab_crypto_tfm_in_queue
ffffffff8140f7d0 r __ksymtab_crypto_unregister_ahash
ffffffff8140f7e0 r __ksymtab_crypto_unregister_alg
ffffffff8140f7f0 r __ksymtab_crypto_unregister_algs
ffffffff8140f800 r __ksymtab_crypto_unregister_instance
ffffffff8140f810 r __ksymtab_crypto_unregister_notifier
ffffffff8140f820 r __ksymtab_crypto_unregister_pcomp
ffffffff8140f830 r __ksymtab_crypto_unregister_shash
ffffffff8140f840 r __ksymtab_crypto_unregister_template
ffffffff8140f850 r __ksymtab_crypto_xor
ffffffff8140f860 r __ksymtab_css_depth
ffffffff8140f870 r __ksymtab_css_id
ffffffff8140f880 r __ksymtab_css_lookup
ffffffff8140f890 r __ksymtab_d_materialise_unique
ffffffff8140f8a0 r __ksymtab_debug_locks
ffffffff8140f8b0 r __ksymtab_default_backing_dev_info
ffffffff8140f8c0 r __ksymtab_delayacct_on
ffffffff8140f8d0 r __ksymtab_dequeue_signal
ffffffff8140f8e0 r __ksymtab_destroy_workqueue
ffffffff8140f8f0 r __ksymtab_dev_attr_em_message
ffffffff8140f900 r __ksymtab_dev_attr_em_message_type
ffffffff8140f910 r __ksymtab_dev_attr_link_power_management_policy
ffffffff8140f920 r __ksymtab_dev_attr_sw_activity
ffffffff8140f930 r __ksymtab_dev_attr_unload_heads
ffffffff8140f940 r __ksymtab_dev_change_net_namespace
ffffffff8140f950 r __ksymtab_dev_forward_skb
ffffffff8140f960 r __ksymtab_dev_pm_get_subsys_data
ffffffff8140f970 r __ksymtab_dev_pm_put_subsys_data
ffffffff8140f980 r __ksymtab_dev_pm_qos_add_ancestor_request
ffffffff8140f990 r __ksymtab_dev_pm_qos_add_global_notifier
ffffffff8140f9a0 r __ksymtab_dev_pm_qos_add_notifier
ffffffff8140f9b0 r __ksymtab_dev_pm_qos_add_request
ffffffff8140f9c0 r __ksymtab_dev_pm_qos_expose_latency_limit
ffffffff8140f9d0 r __ksymtab_dev_pm_qos_hide_latency_limit
ffffffff8140f9e0 r __ksymtab_dev_pm_qos_remove_global_notifier
ffffffff8140f9f0 r __ksymtab_dev_pm_qos_remove_notifier
ffffffff8140fa00 r __ksymtab_dev_pm_qos_remove_request
ffffffff8140fa10 r __ksymtab_dev_pm_qos_update_request
ffffffff8140fa20 r __ksymtab_dev_set_name
ffffffff8140fa30 r __ksymtab_device_add
ffffffff8140fa40 r __ksymtab_device_attach
ffffffff8140fa50 r __ksymtab_device_bind_driver
ffffffff8140fa60 r __ksymtab_device_create
ffffffff8140fa70 r __ksymtab_device_create_bin_file
ffffffff8140fa80 r __ksymtab_device_create_file
ffffffff8140fa90 r __ksymtab_device_create_vargs
ffffffff8140faa0 r __ksymtab_device_del
ffffffff8140fab0 r __ksymtab_device_destroy
ffffffff8140fac0 r __ksymtab_device_find_child
ffffffff8140fad0 r __ksymtab_device_for_each_child
ffffffff8140fae0 r __ksymtab_device_init_wakeup
ffffffff8140faf0 r __ksymtab_device_initialize
ffffffff8140fb00 r __ksymtab_device_move
ffffffff8140fb10 r __ksymtab_device_pm_wait_for_dev
ffffffff8140fb20 r __ksymtab_device_register
ffffffff8140fb30 r __ksymtab_device_release_driver
ffffffff8140fb40 r __ksymtab_device_remove_bin_file
ffffffff8140fb50 r __ksymtab_device_remove_file
ffffffff8140fb60 r __ksymtab_device_rename
ffffffff8140fb70 r __ksymtab_device_reprobe
ffffffff8140fb80 r __ksymtab_device_schedule_callback_owner
ffffffff8140fb90 r __ksymtab_device_set_wakeup_capable
ffffffff8140fba0 r __ksymtab_device_set_wakeup_enable
ffffffff8140fbb0 r __ksymtab_device_show_int
ffffffff8140fbc0 r __ksymtab_device_show_ulong
ffffffff8140fbd0 r __ksymtab_device_store_int
ffffffff8140fbe0 r __ksymtab_device_store_ulong
ffffffff8140fbf0 r __ksymtab_device_unregister
ffffffff8140fc00 r __ksymtab_device_wakeup_disable
ffffffff8140fc10 r __ksymtab_device_wakeup_enable
ffffffff8140fc20 r __ksymtab_devm_kfree
ffffffff8140fc30 r __ksymtab_devm_kzalloc
ffffffff8140fc40 r __ksymtab_devres_add
ffffffff8140fc50 r __ksymtab_devres_alloc
ffffffff8140fc60 r __ksymtab_devres_close_group
ffffffff8140fc70 r __ksymtab_devres_destroy
ffffffff8140fc80 r __ksymtab_devres_find
ffffffff8140fc90 r __ksymtab_devres_free
ffffffff8140fca0 r __ksymtab_devres_get
ffffffff8140fcb0 r __ksymtab_devres_open_group
ffffffff8140fcc0 r __ksymtab_devres_release_group
ffffffff8140fcd0 r __ksymtab_devres_remove
ffffffff8140fce0 r __ksymtab_devres_remove_group
ffffffff8140fcf0 r __ksymtab_dio_end_io
ffffffff8140fd00 r __ksymtab_dirty_writeback_interval
ffffffff8140fd10 r __ksymtab_disk_get_part
ffffffff8140fd20 r __ksymtab_disk_map_sector_rcu
ffffffff8140fd30 r __ksymtab_disk_part_iter_exit
ffffffff8140fd40 r __ksymtab_disk_part_iter_init
ffffffff8140fd50 r __ksymtab_disk_part_iter_next
ffffffff8140fd60 r __ksymtab_dma_get_required_mask
ffffffff8140fd70 r __ksymtab_dmi_match
ffffffff8140fd80 r __ksymtab_dmi_walk
ffffffff8140fd90 r __ksymtab_do_exit
ffffffff8140fda0 r __ksymtab_do_machine_check
ffffffff8140fdb0 r __ksymtab_do_trace_rcu_torture_read
ffffffff8140fdc0 r __ksymtab_dpm_resume_end
ffffffff8140fdd0 r __ksymtab_dpm_resume_start
ffffffff8140fde0 r __ksymtab_dpm_suspend_end
ffffffff8140fdf0 r __ksymtab_dpm_suspend_start
ffffffff8140fe00 r __ksymtab_drain_workqueue
ffffffff8140fe10 r __ksymtab_driver_attach
ffffffff8140fe20 r __ksymtab_driver_create_file
ffffffff8140fe30 r __ksymtab_driver_find
ffffffff8140fe40 r __ksymtab_driver_find_device
ffffffff8140fe50 r __ksymtab_driver_for_each_device
ffffffff8140fe60 r __ksymtab_driver_register
ffffffff8140fe70 r __ksymtab_driver_remove_file
ffffffff8140fe80 r __ksymtab_driver_unregister
ffffffff8140fe90 r __ksymtab_e820_any_mapped
ffffffff8140fea0 r __ksymtab_each_symbol_section
ffffffff8140feb0 r __ksymtab_edac_atomic_assert_error
ffffffff8140fec0 r __ksymtab_edac_err_assert
ffffffff8140fed0 r __ksymtab_edac_get_sysfs_subsys
ffffffff8140fee0 r __ksymtab_edac_handler_set
ffffffff8140fef0 r __ksymtab_edac_handlers
ffffffff8140ff00 r __ksymtab_edac_op_state
ffffffff8140ff10 r __ksymtab_edac_put_sysfs_subsys
ffffffff8140ff20 r __ksymtab_edac_subsys
ffffffff8140ff30 r __ksymtab_edid_info
ffffffff8140ff40 r __ksymtab_elv_register
ffffffff8140ff50 r __ksymtab_elv_unregister
ffffffff8140ff60 r __ksymtab_emergency_restart
ffffffff8140ff70 r __ksymtab_erst_clear
ffffffff8140ff80 r __ksymtab_erst_disable
ffffffff8140ff90 r __ksymtab_erst_get_record_count
ffffffff8140ffa0 r __ksymtab_erst_get_record_id_begin
ffffffff8140ffb0 r __ksymtab_erst_get_record_id_end
ffffffff8140ffc0 r __ksymtab_erst_get_record_id_next
ffffffff8140ffd0 r __ksymtab_erst_read
ffffffff8140ffe0 r __ksymtab_erst_write
ffffffff8140fff0 r __ksymtab_eventfd_ctx_fdget
ffffffff81410000 r __ksymtab_eventfd_ctx_fileget
ffffffff81410010 r __ksymtab_eventfd_ctx_get
ffffffff81410020 r __ksymtab_eventfd_ctx_put
ffffffff81410030 r __ksymtab_eventfd_ctx_read
ffffffff81410040 r __ksymtab_eventfd_ctx_remove_wait_queue
ffffffff81410050 r __ksymtab_eventfd_fget
ffffffff81410060 r __ksymtab_eventfd_signal
ffffffff81410070 r __ksymtab_evtchn_get
ffffffff81410080 r __ksymtab_evtchn_make_refcounted
ffffffff81410090 r __ksymtab_evtchn_put
ffffffff814100a0 r __ksymtab_execute_in_process_context
ffffffff814100b0 r __ksymtab_fb_destroy_modelist
ffffffff814100c0 r __ksymtab_fb_find_logo
ffffffff814100d0 r __ksymtab_fb_mode_option
ffffffff814100e0 r __ksymtab_fb_notifier_call_chain
ffffffff814100f0 r __ksymtab_fib_table_lookup
ffffffff81410100 r __ksymtab_file_ra_state_init
ffffffff81410110 r __ksymtab_find_get_pid
ffffffff81410120 r __ksymtab_find_module
ffffffff81410130 r __ksymtab_find_pid_ns
ffffffff81410140 r __ksymtab_find_symbol
ffffffff81410150 r __ksymtab_find_vpid
ffffffff81410160 r __ksymtab_firmware_kobj
ffffffff81410170 r __ksymtab_flush_kthread_work
ffffffff81410180 r __ksymtab_flush_kthread_worker
ffffffff81410190 r __ksymtab_flush_work
ffffffff814101a0 r __ksymtab_flush_work_sync
ffffffff814101b0 r __ksymtab_flush_workqueue
ffffffff814101c0 r __ksymtab_fpu_finit
ffffffff814101d0 r __ksymtab_free_css_id
ffffffff814101e0 r __ksymtab_free_percpu
ffffffff814101f0 r __ksymtab_free_vm_area
ffffffff81410200 r __ksymtab_fs_kobj
ffffffff81410210 r __ksymtab_fsnotify
ffffffff81410220 r __ksymtab_fsnotify_get_cookie
ffffffff81410230 r __ksymtab_fsstack_copy_attr_all
ffffffff81410240 r __ksymtab_fsstack_copy_inode_size
ffffffff81410250 r __ksymtab_gcd
ffffffff81410260 r __ksymtab_gdt_page
ffffffff81410270 r __ksymtab_gen_pool_avail
ffffffff81410280 r __ksymtab_gen_pool_size
ffffffff81410290 r __ksymtab_generic_fh_to_dentry
ffffffff814102a0 r __ksymtab_generic_fh_to_parent
ffffffff814102b0 r __ksymtab_generic_handle_irq
ffffffff814102c0 r __ksymtab_get_compat_timespec
ffffffff814102d0 r __ksymtab_get_compat_timeval
ffffffff814102e0 r __ksymtab_get_cpu_device
ffffffff814102f0 r __ksymtab_get_current_tty
ffffffff81410300 r __ksymtab_get_device
ffffffff81410310 r __ksymtab_get_max_files
ffffffff81410320 r __ksymtab_get_monotonic_boottime
ffffffff81410330 r __ksymtab_get_net_ns_by_pid
ffffffff81410340 r __ksymtab_get_online_cpus
ffffffff81410350 r __ksymtab_get_phys_to_machine
ffffffff81410360 r __ksymtab_get_pid_task
ffffffff81410370 r __ksymtab_get_task_comm
ffffffff81410380 r __ksymtab_get_task_mm
ffffffff81410390 r __ksymtab_get_task_pid
ffffffff814103a0 r __ksymtab_get_user_pages_fast
ffffffff814103b0 r __ksymtab_getboottime
ffffffff814103c0 r __ksymtab_gnttab_alloc_grant_references
ffffffff814103d0 r __ksymtab_gnttab_cancel_free_callback
ffffffff814103e0 r __ksymtab_gnttab_claim_grant_reference
ffffffff814103f0 r __ksymtab_gnttab_empty_grant_references
ffffffff81410400 r __ksymtab_gnttab_end_foreign_access
ffffffff81410410 r __ksymtab_gnttab_end_foreign_access_ref
ffffffff81410420 r __ksymtab_gnttab_end_foreign_transfer
ffffffff81410430 r __ksymtab_gnttab_end_foreign_transfer_ref
ffffffff81410440 r __ksymtab_gnttab_free_grant_reference
ffffffff81410450 r __ksymtab_gnttab_free_grant_references
ffffffff81410460 r __ksymtab_gnttab_grant_foreign_access
ffffffff81410470 r __ksymtab_gnttab_grant_foreign_access_ref
ffffffff81410480 r __ksymtab_gnttab_grant_foreign_access_subpage
ffffffff81410490 r __ksymtab_gnttab_grant_foreign_access_subpage_ref
ffffffff814104a0 r __ksymtab_gnttab_grant_foreign_access_trans
ffffffff814104b0 r __ksymtab_gnttab_grant_foreign_access_trans_ref
ffffffff814104c0 r __ksymtab_gnttab_grant_foreign_transfer
ffffffff814104d0 r __ksymtab_gnttab_grant_foreign_transfer_ref
ffffffff814104e0 r __ksymtab_gnttab_init
ffffffff814104f0 r __ksymtab_gnttab_map_refs
ffffffff81410500 r __ksymtab_gnttab_max_grant_frames
ffffffff81410510 r __ksymtab_gnttab_query_foreign_access
ffffffff81410520 r __ksymtab_gnttab_release_grant_reference
ffffffff81410530 r __ksymtab_gnttab_request_free_callback
ffffffff81410540 r __ksymtab_gnttab_subpage_grants_available
ffffffff81410550 r __ksymtab_gnttab_trans_grants_available
ffffffff81410560 r __ksymtab_gnttab_unmap_refs
ffffffff81410570 r __ksymtab_handle_level_irq
ffffffff81410580 r __ksymtab_handle_nested_irq
ffffffff81410590 r __ksymtab_handle_simple_irq
ffffffff814105a0 r __ksymtab_hest_disable
ffffffff814105b0 r __ksymtab_hid_add_device
ffffffff814105c0 r __ksymtab_hid_allocate_device
ffffffff814105d0 r __ksymtab_hid_check_keys_pressed
ffffffff814105e0 r __ksymtab_hid_connect
ffffffff814105f0 r __ksymtab_hid_debug
ffffffff81410600 r __ksymtab_hid_destroy_device
ffffffff81410610 r __ksymtab_hid_disconnect
ffffffff81410620 r __ksymtab_hid_input_report
ffffffff81410630 r __ksymtab_hid_output_report
ffffffff81410640 r __ksymtab_hid_parse_report
ffffffff81410650 r __ksymtab_hid_register_report
ffffffff81410660 r __ksymtab_hid_report_raw_event
ffffffff81410670 r __ksymtab_hid_set_field
ffffffff81410680 r __ksymtab_hid_unregister_driver
ffffffff81410690 r __ksymtab_hidinput_connect
ffffffff814106a0 r __ksymtab_hidinput_count_leds
ffffffff814106b0 r __ksymtab_hidinput_disconnect
ffffffff814106c0 r __ksymtab_hidinput_find_field
ffffffff814106d0 r __ksymtab_hidinput_get_led_field
ffffffff814106e0 r __ksymtab_hidinput_report_event
ffffffff814106f0 r __ksymtab_hpet_mask_rtc_irq_bit
ffffffff81410700 r __ksymtab_hpet_register_irq_handler
ffffffff81410710 r __ksymtab_hpet_rtc_dropped_irq
ffffffff81410720 r __ksymtab_hpet_rtc_interrupt
ffffffff81410730 r __ksymtab_hpet_rtc_timer_init
ffffffff81410740 r __ksymtab_hpet_set_alarm_time
ffffffff81410750 r __ksymtab_hpet_set_periodic_freq
ffffffff81410760 r __ksymtab_hpet_set_rtc_irq_bit
ffffffff81410770 r __ksymtab_hpet_unregister_irq_handler
ffffffff81410780 r __ksymtab_hrtimer_cancel
ffffffff81410790 r __ksymtab_hrtimer_forward
ffffffff814107a0 r __ksymtab_hrtimer_get_remaining
ffffffff814107b0 r __ksymtab_hrtimer_get_res
ffffffff814107c0 r __ksymtab_hrtimer_init
ffffffff814107d0 r __ksymtab_hrtimer_init_sleeper
ffffffff814107e0 r __ksymtab_hrtimer_start
ffffffff814107f0 r __ksymtab_hrtimer_start_range_ns
ffffffff81410800 r __ksymtab_hrtimer_try_to_cancel
ffffffff81410810 r __ksymtab_hvc_alloc
ffffffff81410820 r __ksymtab_hvc_instantiate
ffffffff81410830 r __ksymtab_hvc_kick
ffffffff81410840 r __ksymtab_hvc_poll
ffffffff81410850 r __ksymtab_hvc_remove
ffffffff81410860 r __ksymtab_hw_breakpoint_restore
ffffffff81410870 r __ksymtab_hwpoison_filter
ffffffff81410880 r __ksymtab_hypercall_page
ffffffff81410890 r __ksymtab_hypervisor_kobj
ffffffff814108a0 r __ksymtab_ibft_addr
ffffffff814108b0 r __ksymtab_idle_notifier_register
ffffffff814108c0 r __ksymtab_idle_notifier_unregister
ffffffff814108d0 r __ksymtab_inet_csk_addr2sockaddr
ffffffff814108e0 r __ksymtab_inet_csk_bind_conflict
ffffffff814108f0 r __ksymtab_inet_csk_clone_lock
ffffffff81410900 r __ksymtab_inet_csk_compat_getsockopt
ffffffff81410910 r __ksymtab_inet_csk_compat_setsockopt
ffffffff81410920 r __ksymtab_inet_csk_get_port
ffffffff81410930 r __ksymtab_inet_csk_listen_start
ffffffff81410940 r __ksymtab_inet_csk_listen_stop
ffffffff81410950 r __ksymtab_inet_csk_reqsk_queue_hash_add
ffffffff81410960 r __ksymtab_inet_csk_reqsk_queue_prune
ffffffff81410970 r __ksymtab_inet_csk_route_child_sock
ffffffff81410980 r __ksymtab_inet_csk_route_req
ffffffff81410990 r __ksymtab_inet_csk_search_req
ffffffff814109a0 r __ksymtab_inet_ctl_sock_create
ffffffff814109b0 r __ksymtab_inet_getpeer
ffffffff814109c0 r __ksymtab_inet_hash
ffffffff814109d0 r __ksymtab_inet_hash_connect
ffffffff814109e0 r __ksymtab_inet_hashinfo_init
ffffffff814109f0 r __ksymtab_inet_putpeer
ffffffff81410a00 r __ksymtab_inet_twdr_hangman
ffffffff81410a10 r __ksymtab_inet_twdr_twcal_tick
ffffffff81410a20 r __ksymtab_inet_twdr_twkill_work
ffffffff81410a30 r __ksymtab_inet_twsk_alloc
ffffffff81410a40 r __ksymtab_inet_twsk_purge
ffffffff81410a50 r __ksymtab_inet_twsk_put
ffffffff81410a60 r __ksymtab_inet_twsk_schedule
ffffffff81410a70 r __ksymtab_inet_unhash
ffffffff81410a80 r __ksymtab_init_dummy_netdev
ffffffff81410a90 r __ksymtab_init_fpu
ffffffff81410aa0 r __ksymtab_init_pid_ns
ffffffff81410ab0 r __ksymtab_init_srcu_struct
ffffffff81410ac0 r __ksymtab_init_user_ns
ffffffff81410ad0 r __ksymtab_init_uts_ns
ffffffff81410ae0 r __ksymtab_injectm
ffffffff81410af0 r __ksymtab_inode_sb_list_add
ffffffff81410b00 r __ksymtab_input_class
ffffffff81410b10 r __ksymtab_input_event_from_user
ffffffff81410b20 r __ksymtab_input_event_to_user
ffffffff81410b30 r __ksymtab_input_ff_create
ffffffff81410b40 r __ksymtab_input_ff_destroy
ffffffff81410b50 r __ksymtab_input_ff_effect_from_user
ffffffff81410b60 r __ksymtab_input_ff_erase
ffffffff81410b70 r __ksymtab_input_ff_event
ffffffff81410b80 r __ksymtab_input_ff_upload
ffffffff81410b90 r __ksymtab_invalidate_bh_lrus
ffffffff81410ba0 r __ksymtab_invalidate_inode_pages2
ffffffff81410bb0 r __ksymtab_invalidate_inode_pages2_range
ffffffff81410bc0 r __ksymtab_inverse_translate
ffffffff81410bd0 r __ksymtab_iommu_attach_device
ffffffff81410be0 r __ksymtab_iommu_detach_device
ffffffff81410bf0 r __ksymtab_iommu_device_group
ffffffff81410c00 r __ksymtab_iommu_domain_alloc
ffffffff81410c10 r __ksymtab_iommu_domain_free
ffffffff81410c20 r __ksymtab_iommu_domain_has_cap
ffffffff81410c30 r __ksymtab_iommu_iova_to_phys
ffffffff81410c40 r __ksymtab_iommu_map
ffffffff81410c50 r __ksymtab_iommu_present
ffffffff81410c60 r __ksymtab_iommu_set_fault_handler
ffffffff81410c70 r __ksymtab_iommu_unmap
ffffffff81410c80 r __ksymtab_ioremap_page_range
ffffffff81410c90 r __ksymtab_ip_build_and_send_pkt
ffffffff81410ca0 r __ksymtab_ip_local_out
ffffffff81410cb0 r __ksymtab_ip_route_output_flow
ffffffff81410cc0 r __ksymtab_irq_free_descs
ffffffff81410cd0 r __ksymtab_irq_from_evtchn
ffffffff81410ce0 r __ksymtab_irq_get_irq_data
ffffffff81410cf0 r __ksymtab_irq_modify_status
ffffffff81410d00 r __ksymtab_irq_set_affinity_hint
ffffffff81410d10 r __ksymtab_irq_set_affinity_notifier
ffffffff81410d20 r __ksymtab_irq_work_queue
ffffffff81410d30 r __ksymtab_irq_work_run
ffffffff81410d40 r __ksymtab_irq_work_sync
ffffffff81410d50 r __ksymtab_is_hpet_enabled
ffffffff81410d60 r __ksymtab_kallsyms_lookup_name
ffffffff81410d70 r __ksymtab_kallsyms_on_each_symbol
ffffffff81410d80 r __ksymtab_kcrypto_wq
ffffffff81410d90 r __ksymtab_kern_mount_data
ffffffff81410da0 r __ksymtab_kernel_halt
ffffffff81410db0 r __ksymtab_kernel_kobj
ffffffff81410dc0 r __ksymtab_kernel_power_off
ffffffff81410dd0 r __ksymtab_kernel_restart
ffffffff81410de0 r __ksymtab_kfree_call_rcu
ffffffff81410df0 r __ksymtab_kick_process
ffffffff81410e00 r __ksymtab_kill_pid_info_as_cred
ffffffff81410e10 r __ksymtab_klist_add_after
ffffffff81410e20 r __ksymtab_klist_add_before
ffffffff81410e30 r __ksymtab_klist_add_head
ffffffff81410e40 r __ksymtab_klist_add_tail
ffffffff81410e50 r __ksymtab_klist_del
ffffffff81410e60 r __ksymtab_klist_init
ffffffff81410e70 r __ksymtab_klist_iter_exit
ffffffff81410e80 r __ksymtab_klist_iter_init
ffffffff81410e90 r __ksymtab_klist_iter_init_node
ffffffff81410ea0 r __ksymtab_klist_next
ffffffff81410eb0 r __ksymtab_klist_node_attached
ffffffff81410ec0 r __ksymtab_klist_remove
ffffffff81410ed0 r __ksymtab_kmsg_dump_register
ffffffff81410ee0 r __ksymtab_kmsg_dump_unregister
ffffffff81410ef0 r __ksymtab_kobject_create_and_add
ffffffff81410f00 r __ksymtab_kobject_get_path
ffffffff81410f10 r __ksymtab_kobject_init_and_add
ffffffff81410f20 r __ksymtab_kobject_rename
ffffffff81410f30 r __ksymtab_kobject_uevent
ffffffff81410f40 r __ksymtab_kobject_uevent_env
ffffffff81410f50 r __ksymtab_kset_create_and_add
ffffffff81410f60 r __ksymtab_kthread_freezable_should_stop
ffffffff81410f70 r __ksymtab_kthread_worker_fn
ffffffff81410f80 r __ksymtab_ktime_add_safe
ffffffff81410f90 r __ksymtab_ktime_get
ffffffff81410fa0 r __ksymtab_ktime_get_boottime
ffffffff81410fb0 r __ksymtab_ktime_get_monotonic_offset
ffffffff81410fc0 r __ksymtab_ktime_get_real
ffffffff81410fd0 r __ksymtab_ktime_get_ts
ffffffff81410fe0 r __ksymtab_lcm
ffffffff81410ff0 r __ksymtab_leave_mm
ffffffff81411000 r __ksymtab_llist_add_batch
ffffffff81411010 r __ksymtab_llist_del_first
ffffffff81411020 r __ksymtab_local_apic_timer_c2_ok
ffffffff81411030 r __ksymtab_local_clock
ffffffff81411040 r __ksymtab_lock_flocks
ffffffff81411050 r __ksymtab_locks_alloc_lock
ffffffff81411060 r __ksymtab_locks_release_private
ffffffff81411070 r __ksymtab_lookup_address
ffffffff81411080 r __ksymtab_lookup_instantiate_filp
ffffffff81411090 r __ksymtab_lzo1x_decompress_safe
ffffffff814110a0 r __ksymtab_m2p_add_override
ffffffff814110b0 r __ksymtab_m2p_find_override_pfn
ffffffff814110c0 r __ksymtab_m2p_remove_override
ffffffff814110d0 r __ksymtab_machine_check_poll
ffffffff814110e0 r __ksymtab_map_vm_area
ffffffff814110f0 r __ksymtab_mark_mounts_for_expiry
ffffffff81411100 r __ksymtab_mark_tsc_unstable
ffffffff81411110 r __ksymtab_math_state_restore
ffffffff81411120 r __ksymtab_mce_notify_irq
ffffffff81411130 r __ksymtab_mce_register_decode_chain
ffffffff81411140 r __ksymtab_mce_unregister_decode_chain
ffffffff81411150 r __ksymtab_memory_failure
ffffffff81411160 r __ksymtab_memory_failure_queue
ffffffff81411170 r __ksymtab_mm_kobj
ffffffff81411180 r __ksymtab_mmput
ffffffff81411190 r __ksymtab_mmu_notifier_register
ffffffff814111a0 r __ksymtab_mmu_notifier_unregister
ffffffff814111b0 r __ksymtab_mnt_clone_write
ffffffff814111c0 r __ksymtab_mnt_drop_write
ffffffff814111d0 r __ksymtab_mnt_want_write
ffffffff814111e0 r __ksymtab_mnt_want_write_file
ffffffff814111f0 r __ksymtab_modify_user_hw_breakpoint
ffffffff81411200 r __ksymtab_module_mutex
ffffffff81411210 r __ksymtab_monotonic_to_bootbased
ffffffff81411220 r __ksymtab_ms_hyperv
ffffffff81411230 r __ksymtab_mtrr_state
ffffffff81411240 r __ksymtab_n_tty_inherit_ops
ffffffff81411250 r __ksymtab_net_cls_subsys_id
ffffffff81411260 r __ksymtab_net_ipv4_ctl_path
ffffffff81411270 r __ksymtab_net_namespace_list
ffffffff81411280 r __ksymtab_net_ns_type_operations
ffffffff81411290 r __ksymtab_net_prio_subsys_id
ffffffff814112a0 r __ksymtab_netdev_rx_handler_register
ffffffff814112b0 r __ksymtab_netdev_rx_handler_unregister
ffffffff814112c0 r __ksymtab_netlink_has_listeners
ffffffff814112d0 r __ksymtab_noop_backing_dev_info
ffffffff814112e0 r __ksymtab_notify_remote_via_irq
ffffffff814112f0 r __ksymtab_nr_free_buffer_pages
ffffffff81411300 r __ksymtab_nr_irqs
ffffffff81411310 r __ksymtab_oops_begin
ffffffff81411320 r __ksymtab_orderly_poweroff
ffffffff81411330 r __ksymtab_page_cache_async_readahead
ffffffff81411340 r __ksymtab_page_cache_sync_readahead
ffffffff81411350 r __ksymtab_page_mkclean
ffffffff81411360 r __ksymtab_panic_timeout
ffffffff81411370 r __ksymtab_part_round_stats
ffffffff81411380 r __ksymtab_pci_add_dynid
ffffffff81411390 r __ksymtab_pci_assign_unassigned_bridge_resources
ffffffff814113a0 r __ksymtab_pci_ats_queue_depth
ffffffff814113b0 r __ksymtab_pci_bus_add_device
ffffffff814113c0 r __ksymtab_pci_bus_max_busnr
ffffffff814113d0 r __ksymtab_pci_bus_resource_n
ffffffff814113e0 r __ksymtab_pci_cfg_access_lock
ffffffff814113f0 r __ksymtab_pci_cfg_access_trylock
ffffffff81411400 r __ksymtab_pci_cfg_access_unlock
ffffffff81411410 r __ksymtab_pci_check_and_mask_intx
ffffffff81411420 r __ksymtab_pci_check_and_unmask_intx
ffffffff81411430 r __ksymtab_pci_cleanup_aer_uncorrect_error_status
ffffffff81411440 r __ksymtab_pci_create_slot
ffffffff81411450 r __ksymtab_pci_destroy_slot
ffffffff81411460 r __ksymtab_pci_dev_run_wake
ffffffff81411470 r __ksymtab_pci_disable_ats
ffffffff81411480 r __ksymtab_pci_disable_pasid
ffffffff81411490 r __ksymtab_pci_disable_pcie_error_reporting
ffffffff814114a0 r __ksymtab_pci_disable_pri
ffffffff814114b0 r __ksymtab_pci_disable_rom
ffffffff814114c0 r __ksymtab_pci_disable_sriov
ffffffff814114d0 r __ksymtab_pci_enable_ats
ffffffff814114e0 r __ksymtab_pci_enable_pasid
ffffffff814114f0 r __ksymtab_pci_enable_pcie_error_reporting
ffffffff81411500 r __ksymtab_pci_enable_pri
ffffffff81411510 r __ksymtab_pci_enable_rom
ffffffff81411520 r __ksymtab_pci_enable_sriov
ffffffff81411530 r __ksymtab_pci_find_ext_capability
ffffffff81411540 r __ksymtab_pci_find_ht_capability
ffffffff81411550 r __ksymtab_pci_find_next_capability
ffffffff81411560 r __ksymtab_pci_find_next_ht_capability
ffffffff81411570 r __ksymtab_pci_intx
ffffffff81411580 r __ksymtab_pci_intx_mask_supported
ffffffff81411590 r __ksymtab_pci_ioremap_bar
ffffffff814115a0 r __ksymtab_pci_load_and_free_saved_state
ffffffff814115b0 r __ksymtab_pci_load_saved_state
ffffffff814115c0 r __ksymtab_pci_max_pasids
ffffffff814115d0 r __ksymtab_pci_msi_off
ffffffff814115e0 r __ksymtab_pci_num_vf
ffffffff814115f0 r __ksymtab_pci_pasid_features
ffffffff81411600 r __ksymtab_pci_power_names
ffffffff81411610 r __ksymtab_pci_pri_enabled
ffffffff81411620 r __ksymtab_pci_pri_status
ffffffff81411630 r __ksymtab_pci_pri_stopped
ffffffff81411640 r __ksymtab_pci_renumber_slot
ffffffff81411650 r __ksymtab_pci_rescan_bus
ffffffff81411660 r __ksymtab_pci_reset_function
ffffffff81411670 r __ksymtab_pci_reset_pri
ffffffff81411680 r __ksymtab_pci_restore_ats_state
ffffffff81411690 r __ksymtab_pci_restore_msi_state
ffffffff814116a0 r __ksymtab_pci_scan_child_bus
ffffffff814116b0 r __ksymtab_pci_set_cacheline_size
ffffffff814116c0 r __ksymtab_pci_set_pcie_reset_state
ffffffff814116d0 r __ksymtab_pci_slots_kset
ffffffff814116e0 r __ksymtab_pci_sriov_migration
ffffffff814116f0 r __ksymtab_pci_stop_bus_device
ffffffff81411700 r __ksymtab_pci_store_saved_state
ffffffff81411710 r __ksymtab_pci_test_config_bits
ffffffff81411720 r __ksymtab_pci_vpd_find_info_keyword
ffffffff81411730 r __ksymtab_pci_vpd_find_tag
ffffffff81411740 r __ksymtab_pci_walk_bus
ffffffff81411750 r __ksymtab_pcibios_scan_specific_bus
ffffffff81411760 r __ksymtab_pcie_bus_configure_settings
ffffffff81411770 r __ksymtab_pcie_port_bus_type
ffffffff81411780 r __ksymtab_pcie_update_link_speed
ffffffff81411790 r __ksymtab_pcpu_base_addr
ffffffff814117a0 r __ksymtab_perf_event_create_kernel_counter
ffffffff814117b0 r __ksymtab_perf_event_disable
ffffffff814117c0 r __ksymtab_perf_event_enable
ffffffff814117d0 r __ksymtab_perf_event_read_value
ffffffff814117e0 r __ksymtab_perf_event_refresh
ffffffff814117f0 r __ksymtab_perf_event_release_kernel
ffffffff81411800 r __ksymtab_perf_get_x86_pmu_capability
ffffffff81411810 r __ksymtab_perf_guest_get_msrs
ffffffff81411820 r __ksymtab_perf_register_guest_info_callbacks
ffffffff81411830 r __ksymtab_perf_swevent_get_recursion_context
ffffffff81411840 r __ksymtab_perf_unregister_guest_info_callbacks
ffffffff81411850 r __ksymtab_pgprot_writecombine
ffffffff81411860 r __ksymtab_pid_vnr
ffffffff81411870 r __ksymtab_platform_add_devices
ffffffff81411880 r __ksymtab_platform_bus
ffffffff81411890 r __ksymtab_platform_bus_type
ffffffff814118a0 r __ksymtab_platform_create_bundle
ffffffff814118b0 r __ksymtab_platform_device_add
ffffffff814118c0 r __ksymtab_platform_device_add_data
ffffffff814118d0 r __ksymtab_platform_device_add_resources
ffffffff814118e0 r __ksymtab_platform_device_alloc
ffffffff814118f0 r __ksymtab_platform_device_del
ffffffff81411900 r __ksymtab_platform_device_put
ffffffff81411910 r __ksymtab_platform_device_register
ffffffff81411920 r __ksymtab_platform_device_register_full
ffffffff81411930 r __ksymtab_platform_device_unregister
ffffffff81411940 r __ksymtab_platform_driver_probe
ffffffff81411950 r __ksymtab_platform_driver_register
ffffffff81411960 r __ksymtab_platform_driver_unregister
ffffffff81411970 r __ksymtab_platform_get_irq
ffffffff81411980 r __ksymtab_platform_get_irq_byname
ffffffff81411990 r __ksymtab_platform_get_resource
ffffffff814119a0 r __ksymtab_platform_get_resource_byname
ffffffff814119b0 r __ksymtab_pm_generic_freeze
ffffffff814119c0 r __ksymtab_pm_generic_freeze_late
ffffffff814119d0 r __ksymtab_pm_generic_freeze_noirq
ffffffff814119e0 r __ksymtab_pm_generic_poweroff
ffffffff814119f0 r __ksymtab_pm_generic_poweroff_late
ffffffff81411a00 r __ksymtab_pm_generic_poweroff_noirq
ffffffff81411a10 r __ksymtab_pm_generic_restore
ffffffff81411a20 r __ksymtab_pm_generic_restore_early
ffffffff81411a30 r __ksymtab_pm_generic_restore_noirq
ffffffff81411a40 r __ksymtab_pm_generic_resume
ffffffff81411a50 r __ksymtab_pm_generic_resume_early
ffffffff81411a60 r __ksymtab_pm_generic_resume_noirq
ffffffff81411a70 r __ksymtab_pm_generic_runtime_idle
ffffffff81411a80 r __ksymtab_pm_generic_runtime_resume
ffffffff81411a90 r __ksymtab_pm_generic_runtime_suspend
ffffffff81411aa0 r __ksymtab_pm_generic_suspend
ffffffff81411ab0 r __ksymtab_pm_generic_suspend_late
ffffffff81411ac0 r __ksymtab_pm_generic_suspend_noirq
ffffffff81411ad0 r __ksymtab_pm_generic_thaw
ffffffff81411ae0 r __ksymtab_pm_generic_thaw_early
ffffffff81411af0 r __ksymtab_pm_generic_thaw_noirq
ffffffff81411b00 r __ksymtab_pm_qos_add_notifier
ffffffff81411b10 r __ksymtab_pm_qos_add_request
ffffffff81411b20 r __ksymtab_pm_qos_remove_notifier
ffffffff81411b30 r __ksymtab_pm_qos_remove_request
ffffffff81411b40 r __ksymtab_pm_qos_request
ffffffff81411b50 r __ksymtab_pm_qos_request_active
ffffffff81411b60 r __ksymtab_pm_qos_update_request
ffffffff81411b70 r __ksymtab_pm_relax
ffffffff81411b80 r __ksymtab_pm_runtime_allow
ffffffff81411b90 r __ksymtab_pm_runtime_autosuspend_expiration
ffffffff81411ba0 r __ksymtab_pm_runtime_barrier
ffffffff81411bb0 r __ksymtab_pm_runtime_enable
ffffffff81411bc0 r __ksymtab_pm_runtime_forbid
ffffffff81411bd0 r __ksymtab_pm_runtime_irq_safe
ffffffff81411be0 r __ksymtab_pm_runtime_no_callbacks
ffffffff81411bf0 r __ksymtab_pm_runtime_set_autosuspend_delay
ffffffff81411c00 r __ksymtab_pm_schedule_suspend
ffffffff81411c10 r __ksymtab_pm_stay_awake
ffffffff81411c20 r __ksymtab_pm_wakeup_event
ffffffff81411c30 r __ksymtab_pm_wq
ffffffff81411c40 r __ksymtab_posix_clock_register
ffffffff81411c50 r __ksymtab_posix_clock_unregister
ffffffff81411c60 r __ksymtab_posix_timer_event
ffffffff81411c70 r __ksymtab_posix_timers_register_clock
ffffffff81411c80 r __ksymtab_power_group_name
ffffffff81411c90 r __ksymtab_print_context_stack
ffffffff81411ca0 r __ksymtab_print_context_stack_bp
ffffffff81411cb0 r __ksymtab_probe_kernel_read
ffffffff81411cc0 r __ksymtab_probe_kernel_write
ffffffff81411cd0 r __ksymtab_proc_net_fops_create
ffffffff81411ce0 r __ksymtab_proc_net_mkdir
ffffffff81411cf0 r __ksymtab_proc_net_remove
ffffffff81411d00 r __ksymtab_pstore_register
ffffffff81411d10 r __ksymtab_put_compat_timespec
ffffffff81411d20 r __ksymtab_put_compat_timeval
ffffffff81411d30 r __ksymtab_put_device
ffffffff81411d40 r __ksymtab_put_online_cpus
ffffffff81411d50 r __ksymtab_put_pid
ffffffff81411d60 r __ksymtab_pv_apic_ops
ffffffff81411d70 r __ksymtab_pv_info
ffffffff81411d80 r __ksymtab_pv_time_ops
ffffffff81411d90 r __ksymtab_queue_delayed_work
ffffffff81411da0 r __ksymtab_queue_delayed_work_on
ffffffff81411db0 r __ksymtab_queue_kthread_work
ffffffff81411dc0 r __ksymtab_queue_work
ffffffff81411dd0 r __ksymtab_queue_work_on
ffffffff81411de0 r __ksymtab_raw_hash_sk
ffffffff81411df0 r __ksymtab_raw_notifier_call_chain
ffffffff81411e00 r __ksymtab_raw_notifier_chain_register
ffffffff81411e10 r __ksymtab_raw_notifier_chain_unregister
ffffffff81411e20 r __ksymtab_raw_seq_next
ffffffff81411e30 r __ksymtab_raw_seq_open
ffffffff81411e40 r __ksymtab_raw_seq_start
ffffffff81411e50 r __ksymtab_raw_seq_stop
ffffffff81411e60 r __ksymtab_raw_unhash_sk
ffffffff81411e70 r __ksymtab_rcu_barrier
ffffffff81411e80 r __ksymtab_rcu_barrier_bh
ffffffff81411e90 r __ksymtab_rcu_barrier_sched
ffffffff81411ea0 r __ksymtab_rcu_batches_completed
ffffffff81411eb0 r __ksymtab_rcu_batches_completed_bh
ffffffff81411ec0 r __ksymtab_rcu_batches_completed_sched
ffffffff81411ed0 r __ksymtab_rcu_bh_force_quiescent_state
ffffffff81411ee0 r __ksymtab_rcu_force_quiescent_state
ffffffff81411ef0 r __ksymtab_rcu_idle_enter
ffffffff81411f00 r __ksymtab_rcu_idle_exit
ffffffff81411f10 r __ksymtab_rcu_note_context_switch
ffffffff81411f20 r __ksymtab_rcu_sched_force_quiescent_state
ffffffff81411f30 r __ksymtab_rcu_scheduler_active
ffffffff81411f40 r __ksymtab_rcutorture_record_progress
ffffffff81411f50 r __ksymtab_rcutorture_record_test_transition
ffffffff81411f60 r __ksymtab_ref_module
ffffffff81411f70 r __ksymtab_register_acpi_bus_notifier
ffffffff81411f80 r __ksymtab_register_acpi_hed_notifier
ffffffff81411f90 r __ksymtab_register_die_notifier
ffffffff81411fa0 r __ksymtab_register_keyboard_notifier
ffffffff81411fb0 r __ksymtab_register_mce_write_callback
ffffffff81411fc0 r __ksymtab_register_net_sysctl_rotable
ffffffff81411fd0 r __ksymtab_register_net_sysctl_table
ffffffff81411fe0 r __ksymtab_register_netevent_notifier
ffffffff81411ff0 r __ksymtab_register_nmi_handler
ffffffff81412000 r __ksymtab_register_oom_notifier
ffffffff81412010 r __ksymtab_register_pernet_device
ffffffff81412020 r __ksymtab_register_pernet_subsys
ffffffff81412030 r __ksymtab_register_pm_notifier
ffffffff81412040 r __ksymtab_register_syscore_ops
ffffffff81412050 r __ksymtab_register_user_hw_breakpoint
ffffffff81412060 r __ksymtab_register_vt_notifier
ffffffff81412070 r __ksymtab_register_wide_hw_breakpoint
ffffffff81412080 r __ksymtab_register_xenbus_watch
ffffffff81412090 r __ksymtab_register_xenstore_notifier
ffffffff814120a0 r __ksymtab_remove_irq
ffffffff814120b0 r __ksymtab_replace_page_cache_page
ffffffff814120c0 r __ksymtab_request_any_context_irq
ffffffff814120d0 r __ksymtab_resume_device_irqs
ffffffff814120e0 r __ksymtab_root_device_unregister
ffffffff814120f0 r __ksymtab_round_jiffies
ffffffff81412100 r __ksymtab_round_jiffies_relative
ffffffff81412110 r __ksymtab_round_jiffies_up
ffffffff81412120 r __ksymtab_round_jiffies_up_relative
ffffffff81412130 r __ksymtab_rt_mutex_destroy
ffffffff81412140 r __ksymtab_rt_mutex_lock
ffffffff81412150 r __ksymtab_rt_mutex_lock_interruptible
ffffffff81412160 r __ksymtab_rt_mutex_timed_lock
ffffffff81412170 r __ksymtab_rt_mutex_trylock
ffffffff81412180 r __ksymtab_rt_mutex_unlock
ffffffff81412190 r __ksymtab_rtc_alarm_irq_enable
ffffffff814121a0 r __ksymtab_rtc_class_close
ffffffff814121b0 r __ksymtab_rtc_class_open
ffffffff814121c0 r __ksymtab_rtc_device_register
ffffffff814121d0 r __ksymtab_rtc_device_unregister
ffffffff814121e0 r __ksymtab_rtc_initialize_alarm
ffffffff814121f0 r __ksymtab_rtc_irq_register
ffffffff81412200 r __ksymtab_rtc_irq_set_freq
ffffffff81412210 r __ksymtab_rtc_irq_set_state
ffffffff81412220 r __ksymtab_rtc_irq_unregister
ffffffff81412230 r __ksymtab_rtc_ktime_to_tm
ffffffff81412240 r __ksymtab_rtc_read_alarm
ffffffff81412250 r __ksymtab_rtc_read_time
ffffffff81412260 r __ksymtab_rtc_set_alarm
ffffffff81412270 r __ksymtab_rtc_set_mmss
ffffffff81412280 r __ksymtab_rtc_set_time
ffffffff81412290 r __ksymtab_rtc_tm_to_ktime
ffffffff814122a0 r __ksymtab_rtc_update_irq
ffffffff814122b0 r __ksymtab_rtc_update_irq_enable
ffffffff814122c0 r __ksymtab_rtnl_af_register
ffffffff814122d0 r __ksymtab_rtnl_af_unregister
ffffffff814122e0 r __ksymtab_rtnl_link_register
ffffffff814122f0 r __ksymtab_rtnl_link_unregister
ffffffff81412300 r __ksymtab_rtnl_put_cacheinfo
ffffffff81412310 r __ksymtab_rtnl_register
ffffffff81412320 r __ksymtab_rtnl_unregister
ffffffff81412330 r __ksymtab_rtnl_unregister_all
ffffffff81412340 r __ksymtab_sata_async_notification
ffffffff81412350 r __ksymtab_sata_deb_timing_hotplug
ffffffff81412360 r __ksymtab_sata_deb_timing_long
ffffffff81412370 r __ksymtab_sata_deb_timing_normal
ffffffff81412380 r __ksymtab_sata_link_debounce
ffffffff81412390 r __ksymtab_sata_link_hardreset
ffffffff814123a0 r __ksymtab_sata_link_resume
ffffffff814123b0 r __ksymtab_sata_link_scr_lpm
ffffffff814123c0 r __ksymtab_sata_port_ops
ffffffff814123d0 r __ksymtab_sata_scr_read
ffffffff814123e0 r __ksymtab_sata_scr_valid
ffffffff814123f0 r __ksymtab_sata_scr_write
ffffffff81412400 r __ksymtab_sata_scr_write_flush
ffffffff81412410 r __ksymtab_sata_set_spd
ffffffff81412420 r __ksymtab_sata_sff_hardreset
ffffffff81412430 r __ksymtab_sata_std_hardreset
ffffffff81412440 r __ksymtab_scatterwalk_copychunks
ffffffff81412450 r __ksymtab_scatterwalk_done
ffffffff81412460 r __ksymtab_scatterwalk_map
ffffffff81412470 r __ksymtab_scatterwalk_map_and_copy
ffffffff81412480 r __ksymtab_scatterwalk_start
ffffffff81412490 r __ksymtab_sched_clock
ffffffff814124a0 r __ksymtab_sched_clock_idle_sleep_event
ffffffff814124b0 r __ksymtab_sched_clock_idle_wakeup_event
ffffffff814124c0 r __ksymtab_sched_setscheduler
ffffffff814124d0 r __ksymtab_schedule_hrtimeout
ffffffff814124e0 r __ksymtab_schedule_hrtimeout_range
ffffffff814124f0 r __ksymtab_screen_glyph
ffffffff81412500 r __ksymtab_scsi_autopm_get_device
ffffffff81412510 r __ksymtab_scsi_autopm_put_device
ffffffff81412520 r __ksymtab_scsi_bus_type
ffffffff81412530 r __ksymtab_scsi_complete_async_scans
ffffffff81412540 r __ksymtab_scsi_eh_get_sense
ffffffff81412550 r __ksymtab_scsi_eh_ready_devs
ffffffff81412560 r __ksymtab_scsi_flush_work
ffffffff81412570 r __ksymtab_scsi_get_vpd_page
ffffffff81412580 r __ksymtab_scsi_internal_device_block
ffffffff81412590 r __ksymtab_scsi_internal_device_unblock
ffffffff814125a0 r __ksymtab_scsi_mode_select
ffffffff814125b0 r __ksymtab_scsi_queue_work
ffffffff814125c0 r __ksymtab_scsi_schedule_eh
ffffffff814125d0 r __ksymtab_scsi_target_block
ffffffff814125e0 r __ksymtab_scsi_target_unblock
ffffffff814125f0 r __ksymtab_sdev_evt_alloc
ffffffff81412600 r __ksymtab_sdev_evt_send
ffffffff81412610 r __ksymtab_sdev_evt_send_simple
ffffffff81412620 r __ksymtab_secure_ipv4_port_ephemeral
ffffffff81412630 r __ksymtab_seq_open_net
ffffffff81412640 r __ksymtab_seq_release_net
ffffffff81412650 r __ksymtab_set_cpus_allowed_ptr
ffffffff81412660 r __ksymtab_set_memory_ro
ffffffff81412670 r __ksymtab_set_memory_rw
ffffffff81412680 r __ksymtab_set_personality_ia32
ffffffff81412690 r __ksymtab_set_task_ioprio
ffffffff814126a0 r __ksymtab_set_timer_slack
ffffffff814126b0 r __ksymtab_setup_APIC_eilvt
ffffffff814126c0 r __ksymtab_setup_deferrable_timer_on_stack_key
ffffffff814126d0 r __ksymtab_setup_irq
ffffffff814126e0 r __ksymtab_sg_scsi_ioctl
ffffffff814126f0 r __ksymtab_shake_page
ffffffff81412700 r __ksymtab_shash_ahash_digest
ffffffff81412710 r __ksymtab_shash_ahash_finup
ffffffff81412720 r __ksymtab_shash_ahash_update
ffffffff81412730 r __ksymtab_shash_attr_alg
ffffffff81412740 r __ksymtab_shash_free_instance
ffffffff81412750 r __ksymtab_shash_register_instance
ffffffff81412760 r __ksymtab_shmem_file_setup
ffffffff81412770 r __ksymtab_shmem_read_mapping_page_gfp
ffffffff81412780 r __ksymtab_shmem_truncate_range
ffffffff81412790 r __ksymtab_show_class_attr_string
ffffffff814127a0 r __ksymtab_sigset_from_compat
ffffffff814127b0 r __ksymtab_simple_attr_open
ffffffff814127c0 r __ksymtab_simple_attr_read
ffffffff814127d0 r __ksymtab_simple_attr_release
ffffffff814127e0 r __ksymtab_simple_attr_write
ffffffff814127f0 r __ksymtab_single_open_net
ffffffff81412800 r __ksymtab_single_release_net
ffffffff81412810 r __ksymtab_sk_attach_filter
ffffffff81412820 r __ksymtab_sk_clone_lock
ffffffff81412830 r __ksymtab_sk_detach_filter
ffffffff81412840 r __ksymtab_sk_setup_caps
ffffffff81412850 r __ksymtab_skb_complete_wifi_ack
ffffffff81412860 r __ksymtab_skb_cow_data
ffffffff81412870 r __ksymtab_skb_gro_receive
ffffffff81412880 r __ksymtab_skb_morph
ffffffff81412890 r __ksymtab_skb_partial_csum_set
ffffffff814128a0 r __ksymtab_skb_pull_rcsum
ffffffff814128b0 r __ksymtab_skb_segment
ffffffff814128c0 r __ksymtab_skb_to_sgvec
ffffffff814128d0 r __ksymtab_skb_tstamp_tx
ffffffff814128e0 r __ksymtab_skcipher_geniv_alloc
ffffffff814128f0 r __ksymtab_skcipher_geniv_exit
ffffffff81412900 r __ksymtab_skcipher_geniv_free
ffffffff81412910 r __ksymtab_skcipher_geniv_init
ffffffff81412920 r __ksymtab_smp_call_function_any
ffffffff81412930 r __ksymtab_smp_ops
ffffffff81412940 r __ksymtab_snmp_fold_field
ffffffff81412950 r __ksymtab_snmp_mib_free
ffffffff81412960 r __ksymtab_snmp_mib_init
ffffffff81412970 r __ksymtab_sock_diag_check_cookie
ffffffff81412980 r __ksymtab_sock_diag_nlsk
ffffffff81412990 r __ksymtab_sock_diag_put_meminfo
ffffffff814129a0 r __ksymtab_sock_diag_register
ffffffff814129b0 r __ksymtab_sock_diag_register_inet_compat
ffffffff814129c0 r __ksymtab_sock_diag_save_cookie
ffffffff814129d0 r __ksymtab_sock_diag_unregister
ffffffff814129e0 r __ksymtab_sock_diag_unregister_inet_compat
ffffffff814129f0 r __ksymtab_sock_prot_inuse_add
ffffffff81412a00 r __ksymtab_sock_prot_inuse_get
ffffffff81412a10 r __ksymtab_sock_update_netprioidx
ffffffff81412a20 r __ksymtab_sprint_symbol
ffffffff81412a30 r __ksymtab_srcu_batches_completed
ffffffff81412a40 r __ksymtab_srcu_init_notifier_head
ffffffff81412a50 r __ksymtab_srcu_notifier_call_chain
ffffffff81412a60 r __ksymtab_srcu_notifier_chain_register
ffffffff81412a70 r __ksymtab_srcu_notifier_chain_unregister
ffffffff81412a80 r __ksymtab_stop_machine
ffffffff81412a90 r __ksymtab_subsys_dev_iter_exit
ffffffff81412aa0 r __ksymtab_subsys_dev_iter_init
ffffffff81412ab0 r __ksymtab_subsys_dev_iter_next
ffffffff81412ac0 r __ksymtab_subsys_find_device_by_id
ffffffff81412ad0 r __ksymtab_subsys_interface_register
ffffffff81412ae0 r __ksymtab_subsys_interface_unregister
ffffffff81412af0 r __ksymtab_subsys_system_register
ffffffff81412b00 r __ksymtab_suspend_device_irqs
ffffffff81412b10 r __ksymtab_swiotlb_bounce
ffffffff81412b20 r __ksymtab_swiotlb_map_page
ffffffff81412b30 r __ksymtab_swiotlb_nr_tbl
ffffffff81412b40 r __ksymtab_swiotlb_tbl_map_single
ffffffff81412b50 r __ksymtab_swiotlb_tbl_sync_single
ffffffff81412b60 r __ksymtab_swiotlb_tbl_unmap_single
ffffffff81412b70 r __ksymtab_swiotlb_unmap_page
ffffffff81412b80 r __ksymtab_symbol_put_addr
ffffffff81412b90 r __ksymtab_sync_filesystem
ffffffff81412ba0 r __ksymtab_synchronize_rcu_bh
ffffffff81412bb0 r __ksymtab_synchronize_rcu_expedited
ffffffff81412bc0 r __ksymtab_synchronize_sched
ffffffff81412bd0 r __ksymtab_synchronize_sched_expedited
ffffffff81412be0 r __ksymtab_synchronize_srcu
ffffffff81412bf0 r __ksymtab_synchronize_srcu_expedited
ffffffff81412c00 r __ksymtab_syscore_resume
ffffffff81412c10 r __ksymtab_syscore_suspend
ffffffff81412c20 r __ksymtab_sysctl_tcp_cookie_size
ffffffff81412c30 r __ksymtab_sysctl_vfs_cache_pressure
ffffffff81412c40 r __ksymtab_sysfs_add_file_to_group
ffffffff81412c50 r __ksymtab_sysfs_chmod_file
ffffffff81412c60 r __ksymtab_sysfs_create_bin_file
ffffffff81412c70 r __ksymtab_sysfs_create_file
ffffffff81412c80 r __ksymtab_sysfs_create_files
ffffffff81412c90 r __ksymtab_sysfs_create_group
ffffffff81412ca0 r __ksymtab_sysfs_create_link
ffffffff81412cb0 r __ksymtab_sysfs_get
ffffffff81412cc0 r __ksymtab_sysfs_get_dirent
ffffffff81412cd0 r __ksymtab_sysfs_merge_group
ffffffff81412ce0 r __ksymtab_sysfs_notify
ffffffff81412cf0 r __ksymtab_sysfs_notify_dirent
ffffffff81412d00 r __ksymtab_sysfs_put
ffffffff81412d10 r __ksymtab_sysfs_remove_bin_file
ffffffff81412d20 r __ksymtab_sysfs_remove_file
ffffffff81412d30 r __ksymtab_sysfs_remove_file_from_group
ffffffff81412d40 r __ksymtab_sysfs_remove_files
ffffffff81412d50 r __ksymtab_sysfs_remove_group
ffffffff81412d60 r __ksymtab_sysfs_remove_link
ffffffff81412d70 r __ksymtab_sysfs_rename_link
ffffffff81412d80 r __ksymtab_sysfs_schedule_callback
ffffffff81412d90 r __ksymtab_sysfs_unmerge_group
ffffffff81412da0 r __ksymtab_sysfs_update_group
ffffffff81412db0 r __ksymtab_system_freezable_wq
ffffffff81412dc0 r __ksymtab_system_long_wq
ffffffff81412dd0 r __ksymtab_system_nrt_freezable_wq
ffffffff81412de0 r __ksymtab_system_nrt_wq
ffffffff81412df0 r __ksymtab_system_unbound_wq
ffffffff81412e00 r __ksymtab_system_wq
ffffffff81412e10 r __ksymtab_task_active_pid_ns
ffffffff81412e20 r __ksymtab_task_blkio_cgroup
ffffffff81412e30 r __ksymtab_task_current_syscall
ffffffff81412e40 r __ksymtab_task_xstate_cachep
ffffffff81412e50 r __ksymtab_tasklet_hrtimer_init
ffffffff81412e60 r __ksymtab_tcp_cong_avoid_ai
ffffffff81412e70 r __ksymtab_tcp_death_row
ffffffff81412e80 r __ksymtab_tcp_done
ffffffff81412e90 r __ksymtab_tcp_get_info
ffffffff81412ea0 r __ksymtab_tcp_init_congestion_ops
ffffffff81412eb0 r __ksymtab_tcp_is_cwnd_limited
ffffffff81412ec0 r __ksymtab_tcp_orphan_count
ffffffff81412ed0 r __ksymtab_tcp_register_congestion_control
ffffffff81412ee0 r __ksymtab_tcp_reno_cong_avoid
ffffffff81412ef0 r __ksymtab_tcp_reno_min_cwnd
ffffffff81412f00 r __ksymtab_tcp_reno_ssthresh
ffffffff81412f10 r __ksymtab_tcp_set_state
ffffffff81412f20 r __ksymtab_tcp_slow_start
ffffffff81412f30 r __ksymtab_tcp_twsk_destructor
ffffffff81412f40 r __ksymtab_tcp_twsk_unique
ffffffff81412f50 r __ksymtab_tcp_unregister_congestion_control
ffffffff81412f60 r __ksymtab_timecompare_offset
ffffffff81412f70 r __ksymtab_timecompare_transform
ffffffff81412f80 r __ksymtab_timecounter_cyc2time
ffffffff81412f90 r __ksymtab_timecounter_init
ffffffff81412fa0 r __ksymtab_timecounter_read
ffffffff81412fb0 r __ksymtab_timerqueue_add
ffffffff81412fc0 r __ksymtab_timerqueue_del
ffffffff81412fd0 r __ksymtab_timerqueue_iterate_next
ffffffff81412fe0 r __ksymtab_transport_add_device
ffffffff81412ff0 r __ksymtab_transport_class_register
ffffffff81413000 r __ksymtab_transport_class_unregister
ffffffff81413010 r __ksymtab_transport_configure_device
ffffffff81413020 r __ksymtab_transport_destroy_device
ffffffff81413030 r __ksymtab_transport_remove_device
ffffffff81413040 r __ksymtab_transport_setup_device
ffffffff81413050 r __ksymtab_tty_buffer_request_room
ffffffff81413060 r __ksymtab_tty_encode_baud_rate
ffffffff81413070 r __ksymtab_tty_get_pgrp
ffffffff81413080 r __ksymtab_tty_init_termios
ffffffff81413090 r __ksymtab_tty_ldisc_deref
ffffffff814130a0 r __ksymtab_tty_ldisc_flush
ffffffff814130b0 r __ksymtab_tty_ldisc_ref
ffffffff814130c0 r __ksymtab_tty_ldisc_ref_wait
ffffffff814130d0 r __ksymtab_tty_mode_ioctl
ffffffff814130e0 r __ksymtab_tty_perform_flush
ffffffff814130f0 r __ksymtab_tty_prepare_flip_string
ffffffff81413100 r __ksymtab_tty_prepare_flip_string_flags
ffffffff81413110 r __ksymtab_tty_put_char
ffffffff81413120 r __ksymtab_tty_set_termios
ffffffff81413130 r __ksymtab_tty_standard_install
ffffffff81413140 r __ksymtab_tty_termios_encode_baud_rate
ffffffff81413150 r __ksymtab_tty_wakeup
ffffffff81413160 r __ksymtab_udp4_lib_lookup
ffffffff81413170 r __ksymtab_uhci_check_and_reset_hc
ffffffff81413180 r __ksymtab_uhci_reset_hc
ffffffff81413190 r __ksymtab_unbind_from_irqhandler
ffffffff814131a0 r __ksymtab_unix_inq_len
ffffffff814131b0 r __ksymtab_unix_outq_len
ffffffff814131c0 r __ksymtab_unix_peer_get
ffffffff814131d0 r __ksymtab_unix_socket_table
ffffffff814131e0 r __ksymtab_unix_table_lock
ffffffff814131f0 r __ksymtab_unlock_flocks
ffffffff81413200 r __ksymtab_unmap_kernel_range_noflush
ffffffff81413210 r __ksymtab_unregister_acpi_bus_notifier
ffffffff81413220 r __ksymtab_unregister_acpi_hed_notifier
ffffffff81413230 r __ksymtab_unregister_die_notifier
ffffffff81413240 r __ksymtab_unregister_hw_breakpoint
ffffffff81413250 r __ksymtab_unregister_keyboard_notifier
ffffffff81413260 r __ksymtab_unregister_net_sysctl_table
ffffffff81413270 r __ksymtab_unregister_netevent_notifier
ffffffff81413280 r __ksymtab_unregister_nmi_handler
ffffffff81413290 r __ksymtab_unregister_oom_notifier
ffffffff814132a0 r __ksymtab_unregister_pernet_device
ffffffff814132b0 r __ksymtab_unregister_pernet_subsys
ffffffff814132c0 r __ksymtab_unregister_pm_notifier
ffffffff814132d0 r __ksymtab_unregister_syscore_ops
ffffffff814132e0 r __ksymtab_unregister_vt_notifier
ffffffff814132f0 r __ksymtab_unregister_wide_hw_breakpoint
ffffffff81413300 r __ksymtab_unregister_xenbus_watch
ffffffff81413310 r __ksymtab_unregister_xenstore_notifier
ffffffff81413320 r __ksymtab_unshare_fs_struct
ffffffff81413330 r __ksymtab_unuse_mm
ffffffff81413340 r __ksymtab_usb_amd_dev_put
ffffffff81413350 r __ksymtab_usb_amd_find_chipset_info
ffffffff81413360 r __ksymtab_usb_amd_quirk_pll_disable
ffffffff81413370 r __ksymtab_usb_amd_quirk_pll_enable
ffffffff81413380 r __ksymtab_usb_enable_xhci_ports
ffffffff81413390 r __ksymtab_usb_is_intel_switchable_xhci
ffffffff814133a0 r __ksymtab_use_mm
ffffffff814133b0 r __ksymtab_used_vectors
ffffffff814133c0 r __ksymtab_usermodehelper_read_lock_wait
ffffffff814133d0 r __ksymtab_usermodehelper_read_trylock
ffffffff814133e0 r __ksymtab_usermodehelper_read_unlock
ffffffff814133f0 r __ksymtab_uuid_be_gen
ffffffff81413400 r __ksymtab_uuid_le_gen
ffffffff81413410 r __ksymtab_vector_used_by_percpu_irq
ffffffff81413420 r __ksymtab_vfs_cancel_lock
ffffffff81413430 r __ksymtab_vfs_getxattr
ffffffff81413440 r __ksymtab_vfs_kern_mount
ffffffff81413450 r __ksymtab_vfs_listxattr
ffffffff81413460 r __ksymtab_vfs_lock_file
ffffffff81413470 r __ksymtab_vfs_removexattr
ffffffff81413480 r __ksymtab_vfs_setlease
ffffffff81413490 r __ksymtab_vfs_setxattr
ffffffff814134a0 r __ksymtab_vfs_test_lock
ffffffff814134b0 r __ksymtab_vm_unmap_aliases
ffffffff814134c0 r __ksymtab_vma_kernel_pagesize
ffffffff814134d0 r __ksymtab_vt_get_leds
ffffffff814134e0 r __ksymtab_wait_for_device_probe
ffffffff814134f0 r __ksymtab_wait_rcu_gp
ffffffff81413500 r __ksymtab_wakeup_source_add
ffffffff81413510 r __ksymtab_wakeup_source_create
ffffffff81413520 r __ksymtab_wakeup_source_destroy
ffffffff81413530 r __ksymtab_wakeup_source_drop
ffffffff81413540 r __ksymtab_wakeup_source_prepare
ffffffff81413550 r __ksymtab_wakeup_source_register
ffffffff81413560 r __ksymtab_wakeup_source_remove
ffffffff81413570 r __ksymtab_wakeup_source_unregister
ffffffff81413580 r __ksymtab_watchdog_register_device
ffffffff81413590 r __ksymtab_watchdog_unregister_device
ffffffff814135a0 r __ksymtab_work_busy
ffffffff814135b0 r __ksymtab_work_cpu
ffffffff814135c0 r __ksymtab_work_on_cpu
ffffffff814135d0 r __ksymtab_workqueue_congested
ffffffff814135e0 r __ksymtab_workqueue_set_max_active
ffffffff814135f0 r __ksymtab_x86_platform
ffffffff81413600 r __ksymtab_xattr_getsecurity
ffffffff81413610 r __ksymtab_xen_create_contiguous_region
ffffffff81413620 r __ksymtab_xen_destroy_contiguous_region
ffffffff81413630 r __ksymtab_xen_domain_type
ffffffff81413640 r __ksymtab_xen_features
ffffffff81413650 r __ksymtab_xen_find_device_domain_owner
ffffffff81413660 r __ksymtab_xen_have_vector_callback
ffffffff81413670 r __ksymtab_xen_hvm_evtchn_do_upcall
ffffffff81413680 r __ksymtab_xen_hvm_need_lapic
ffffffff81413690 r __ksymtab_xen_hvm_resume_frames
ffffffff814136a0 r __ksymtab_xen_irq_from_gsi
ffffffff814136b0 r __ksymtab_xen_pci_frontend
ffffffff814136c0 r __ksymtab_xen_pirq_from_irq
ffffffff814136d0 r __ksymtab_xen_platform_pci_unplug
ffffffff814136e0 r __ksymtab_xen_register_device_domain_owner
ffffffff814136f0 r __ksymtab_xen_remap_domain_mfn_range
ffffffff81413700 r __ksymtab_xen_set_callback_via
ffffffff81413710 r __ksymtab_xen_set_domain_pte
ffffffff81413720 r __ksymtab_xen_setup_shutdown_event
ffffffff81413730 r __ksymtab_xen_start_info
ffffffff81413740 r __ksymtab_xen_store_evtchn
ffffffff81413750 r __ksymtab_xen_store_interface
ffffffff81413760 r __ksymtab_xen_swiotlb_alloc_coherent
ffffffff81413770 r __ksymtab_xen_swiotlb_dma_mapping_error
ffffffff81413780 r __ksymtab_xen_swiotlb_dma_supported
ffffffff81413790 r __ksymtab_xen_swiotlb_free_coherent
ffffffff814137a0 r __ksymtab_xen_swiotlb_map_page
ffffffff814137b0 r __ksymtab_xen_swiotlb_map_sg
ffffffff814137c0 r __ksymtab_xen_swiotlb_map_sg_attrs
ffffffff814137d0 r __ksymtab_xen_swiotlb_sync_sg_for_cpu
ffffffff814137e0 r __ksymtab_xen_swiotlb_sync_sg_for_device
ffffffff814137f0 r __ksymtab_xen_swiotlb_sync_single_for_cpu
ffffffff81413800 r __ksymtab_xen_swiotlb_sync_single_for_device
ffffffff81413810 r __ksymtab_xen_swiotlb_unmap_page
ffffffff81413820 r __ksymtab_xen_swiotlb_unmap_sg
ffffffff81413830 r __ksymtab_xen_swiotlb_unmap_sg_attrs
ffffffff81413840 r __ksymtab_xen_test_irq_shared
ffffffff81413850 r __ksymtab_xen_unregister_device_domain_owner
ffffffff81413860 r __ksymtab_xen_xenbus_fops
ffffffff81413870 r __ksymtab_xenbus_alloc_evtchn
ffffffff81413880 r __ksymtab_xenbus_bind_evtchn
ffffffff81413890 r __ksymtab_xenbus_dev_attrs
ffffffff814138a0 r __ksymtab_xenbus_dev_cancel
ffffffff814138b0 r __ksymtab_xenbus_dev_changed
ffffffff814138c0 r __ksymtab_xenbus_dev_error
ffffffff814138d0 r __ksymtab_xenbus_dev_fatal
ffffffff814138e0 r __ksymtab_xenbus_dev_is_online
ffffffff814138f0 r __ksymtab_xenbus_dev_probe
ffffffff81413900 r __ksymtab_xenbus_dev_remove
ffffffff81413910 r __ksymtab_xenbus_dev_resume
ffffffff81413920 r __ksymtab_xenbus_dev_shutdown
ffffffff81413930 r __ksymtab_xenbus_dev_suspend
ffffffff81413940 r __ksymtab_xenbus_directory
ffffffff81413950 r __ksymtab_xenbus_exists
ffffffff81413960 r __ksymtab_xenbus_free_evtchn
ffffffff81413970 r __ksymtab_xenbus_frontend_closed
ffffffff81413980 r __ksymtab_xenbus_gather
ffffffff81413990 r __ksymtab_xenbus_grant_ring
ffffffff814139a0 r __ksymtab_xenbus_map_ring
ffffffff814139b0 r __ksymtab_xenbus_map_ring_valloc
ffffffff814139c0 r __ksymtab_xenbus_match
ffffffff814139d0 r __ksymtab_xenbus_mkdir
ffffffff814139e0 r __ksymtab_xenbus_otherend_changed
ffffffff814139f0 r __ksymtab_xenbus_printf
ffffffff81413a00 r __ksymtab_xenbus_probe
ffffffff81413a10 r __ksymtab_xenbus_probe_devices
ffffffff81413a20 r __ksymtab_xenbus_probe_node
ffffffff81413a30 r __ksymtab_xenbus_read
ffffffff81413a40 r __ksymtab_xenbus_read_driver_state
ffffffff81413a50 r __ksymtab_xenbus_read_otherend_details
ffffffff81413a60 r __ksymtab_xenbus_register_backend
ffffffff81413a70 r __ksymtab_xenbus_register_driver_common
ffffffff81413a80 r __ksymtab_xenbus_register_frontend
ffffffff81413a90 r __ksymtab_xenbus_rm
ffffffff81413aa0 r __ksymtab_xenbus_scanf
ffffffff81413ab0 r __ksymtab_xenbus_strstate
ffffffff81413ac0 r __ksymtab_xenbus_switch_state
ffffffff81413ad0 r __ksymtab_xenbus_transaction_end
ffffffff81413ae0 r __ksymtab_xenbus_transaction_start
ffffffff81413af0 r __ksymtab_xenbus_unmap_ring
ffffffff81413b00 r __ksymtab_xenbus_unmap_ring_vfree
ffffffff81413b10 r __ksymtab_xenbus_unregister_driver
ffffffff81413b20 r __ksymtab_xenbus_watch_path
ffffffff81413b30 r __ksymtab_xenbus_watch_pathfmt
ffffffff81413b40 r __ksymtab_xenbus_write
ffffffff81413b50 r __ksymtab_xfrm_aalg_get_byid
ffffffff81413b60 r __ksymtab_xfrm_aalg_get_byidx
ffffffff81413b70 r __ksymtab_xfrm_aalg_get_byname
ffffffff81413b80 r __ksymtab_xfrm_aead_get_byname
ffffffff81413b90 r __ksymtab_xfrm_audit_policy_add
ffffffff81413ba0 r __ksymtab_xfrm_audit_policy_delete
ffffffff81413bb0 r __ksymtab_xfrm_audit_state_add
ffffffff81413bc0 r __ksymtab_xfrm_audit_state_delete
ffffffff81413bd0 r __ksymtab_xfrm_audit_state_icvfail
ffffffff81413be0 r __ksymtab_xfrm_audit_state_notfound
ffffffff81413bf0 r __ksymtab_xfrm_audit_state_notfound_simple
ffffffff81413c00 r __ksymtab_xfrm_audit_state_replay
ffffffff81413c10 r __ksymtab_xfrm_audit_state_replay_overflow
ffffffff81413c20 r __ksymtab_xfrm_calg_get_byid
ffffffff81413c30 r __ksymtab_xfrm_calg_get_byname
ffffffff81413c40 r __ksymtab_xfrm_count_auth_supported
ffffffff81413c50 r __ksymtab_xfrm_count_enc_supported
ffffffff81413c60 r __ksymtab_xfrm_ealg_get_byid
ffffffff81413c70 r __ksymtab_xfrm_ealg_get_byidx
ffffffff81413c80 r __ksymtab_xfrm_ealg_get_byname
ffffffff81413c90 r __ksymtab_xfrm_inner_extract_output
ffffffff81413ca0 r __ksymtab_xfrm_output
ffffffff81413cb0 r __ksymtab_xfrm_output_resume
ffffffff81413cc0 r __ksymtab_xfrm_probe_algs
ffffffff81413cd0 r __ksymtab_xstate_size
ffffffff81413ce0 r __ksymtab_yield_to
ffffffff81413cf0 r __ksymtab_zap_vma_ptes
ffffffff81413d00 r __kstrtab_init_task
ffffffff81413d00 R __start___kcrctab
ffffffff81413d00 R __start___kcrctab_gpl
ffffffff81413d00 R __start___kcrctab_gpl_future
ffffffff81413d00 R __start___kcrctab_unused
ffffffff81413d00 R __start___kcrctab_unused_gpl
ffffffff81413d00 R __start___ksymtab_gpl_future
ffffffff81413d00 R __start___ksymtab_unused
ffffffff81413d00 R __start___ksymtab_unused_gpl
ffffffff81413d00 R __stop___kcrctab
ffffffff81413d00 R __stop___kcrctab_gpl
ffffffff81413d00 R __stop___kcrctab_gpl_future
ffffffff81413d00 R __stop___kcrctab_unused
ffffffff81413d00 R __stop___kcrctab_unused_gpl
ffffffff81413d00 R __stop___ksymtab_gpl
ffffffff81413d00 R __stop___ksymtab_gpl_future
ffffffff81413d00 R __stop___ksymtab_unused
ffffffff81413d00 R __stop___ksymtab_unused_gpl
ffffffff81413d0a r __kstrtab_loops_per_jiffy
ffffffff81413d1a r __kstrtab_reset_devices
ffffffff81413d28 r __kstrtab_system_state
ffffffff81413d35 r __kstrtab_init_uts_ns
ffffffff81413d41 r __kstrtab_x86_hyper_xen_hvm
ffffffff81413d53 r __kstrtab_xen_hvm_need_lapic
ffffffff81413d66 r __kstrtab_xen_have_vector_callback
ffffffff81413d7f r __kstrtab_xen_start_info
ffffffff81413d8e r __kstrtab_machine_to_phys_nr
ffffffff81413da1 r __kstrtab_machine_to_phys_mapping
ffffffff81413db9 r __kstrtab_xen_domain_type
ffffffff81413dc9 r __kstrtab_hypercall_page
ffffffff81413dd8 r __kstrtab_xen_remap_domain_mfn_range
ffffffff81413df3 r __kstrtab_xen_destroy_contiguous_region
ffffffff81413e11 r __kstrtab_xen_create_contiguous_region
ffffffff81413e2e r __kstrtab_xen_set_domain_pte
ffffffff81413e41 r __kstrtab_arbitrary_virt_to_machine
ffffffff81413e5b r __kstrtab_xen_platform_pci_unplug
ffffffff81413e73 r __kstrtab_m2p_find_override_pfn
ffffffff81413e89 r __kstrtab_m2p_remove_override
ffffffff81413e9d r __kstrtab_m2p_add_override
ffffffff81413eae r __kstrtab_get_phys_to_machine
ffffffff81413ec2 r __kstrtab_set_personality_ia32
ffffffff81413ed7 r __kstrtab_math_state_restore
ffffffff81413eea r __kstrtab_used_vectors
ffffffff81413ef7 r __kstrtab_vector_used_by_percpu_irq
ffffffff81413f11 r __kstrtab_irq_regs
ffffffff81413f1a r __kstrtab_irq_stat
ffffffff81413f23 r __kstrtab_dump_trace
ffffffff81413f2e r __kstrtab_profile_pc
ffffffff81413f39 r __kstrtab_oops_begin
ffffffff81413f44 r __kstrtab_dump_stack
ffffffff81413f4f r __kstrtab_print_context_stack_bp
ffffffff81413f66 r __kstrtab_print_context_stack
ffffffff81413f7a r __kstrtab_unregister_nmi_handler
ffffffff81413f91 r __kstrtab_register_nmi_handler
ffffffff81413fa6 r __kstrtab_edid_info
ffffffff81413fb0 r __kstrtab_screen_info
ffffffff81413fbc r __kstrtab_boot_cpu_data
ffffffff81413fca r __kstrtab_x86_platform
ffffffff81413fd7 r __kstrtab_pci_biosrom_size
ffffffff81413fe8 r __kstrtab_pci_unmap_biosrom
ffffffff81413ffa r __kstrtab_pci_map_biosrom
ffffffff8141400a r __kstrtab_empty_zero_page
ffffffff8141401a r __kstrtab_memmove
ffffffff81414022 r __kstrtab___memcpy
ffffffff8141402b r __kstrtab_memcpy
ffffffff81414032 r __kstrtab_memset
ffffffff81414039 r __kstrtab_csum_partial
ffffffff81414046 r __kstrtab_clear_page
ffffffff81414051 r __kstrtab_copy_page
ffffffff8141405b r __kstrtab__copy_to_user
ffffffff81414069 r __kstrtab__copy_from_user
ffffffff81414079 r __kstrtab___copy_user_nocache
ffffffff8141408d r __kstrtab_copy_user_generic_unrolled
ffffffff814140a8 r __kstrtab_copy_user_generic_string
ffffffff814140c1 r __kstrtab___put_user_8
ffffffff814140ce r __kstrtab___put_user_4
ffffffff814140db r __kstrtab___put_user_2
ffffffff814140e8 r __kstrtab___put_user_1
ffffffff814140f5 r __kstrtab___get_user_8
ffffffff81414102 r __kstrtab___get_user_4
ffffffff8141410f r __kstrtab___get_user_2
ffffffff8141411c r __kstrtab___get_user_1
ffffffff81414129 r __kstrtab_e820_any_mapped
ffffffff81414139 r __kstrtab_pci_mem_start
ffffffff81414147 r __kstrtab_dma_supported
ffffffff81414155 r __kstrtab_dma_set_mask
ffffffff81414162 r __kstrtab_x86_dma_fallback_dev
ffffffff81414177 r __kstrtab_dma_ops
ffffffff8141417f r __kstrtab_arch_unregister_cpu
ffffffff81414193 r __kstrtab_arch_register_cpu
ffffffff814141a5 r __kstrtab_arch_debugfs_dir
ffffffff814141b6 r __kstrtab_hw_breakpoint_restore
ffffffff814141cc r __kstrtab_aout_dump_debugregs
ffffffff814141e0 r __kstrtab_cpu_dr7
ffffffff814141e8 r __kstrtab_mark_tsc_unstable
ffffffff814141fa r __kstrtab_recalibrate_cpu_khz
ffffffff8141420e r __kstrtab_check_tsc_unstable
ffffffff81414221 r __kstrtab_tsc_khz
ffffffff81414229 r __kstrtab_cpu_khz
ffffffff81414231 r __kstrtab_native_io_delay
ffffffff81414241 r __kstrtab_native_read_tsc
ffffffff81414251 r __kstrtab_rtc_cmos_write
ffffffff81414260 r __kstrtab_rtc_cmos_read
ffffffff8141426e r __kstrtab_rtc_lock
ffffffff81414277 r __kstrtab_amd_e400_c1e_detected
ffffffff8141428d r __kstrtab_cpu_idle_wait
ffffffff8141429b r __kstrtab_boot_option_idle_override
ffffffff814142b5 r __kstrtab_kernel_thread
ffffffff814142c3 r __kstrtab_task_xstate_cachep
ffffffff814142d6 r __kstrtab_idle_notifier_unregister
ffffffff814142ef r __kstrtab_idle_notifier_register
ffffffff81414306 r __kstrtab_dump_fpu
ffffffff8141430f r __kstrtab_init_fpu
ffffffff81414318 r __kstrtab_fpu_finit
ffffffff81414322 r __kstrtab_xstate_size
ffffffff8141432e r __kstrtab_unlazy_fpu
ffffffff81414339 r __kstrtab_kernel_fpu_end
ffffffff81414348 r __kstrtab_kernel_fpu_begin
ffffffff81414359 r __kstrtab_irq_fpu_usable
ffffffff81414368 r __kstrtab_kernel_stack
ffffffff81414375 r __kstrtab_current_task
ffffffff81414382 r __kstrtab_gdt_page
ffffffff8141438b r __kstrtab_x86_hyper_vmware
ffffffff8141439c r __kstrtab_x86_hyper
ffffffff814143a6 r __kstrtab_x86_hyper_ms_hyperv
ffffffff814143ba r __kstrtab_ms_hyperv
ffffffff814143c4 r __kstrtab_x86_match_cpu
ffffffff814143d2 r __kstrtab_cpu_has_amd_erratum
ffffffff814143e6 r __kstrtab_amd_erratum_383
ffffffff814143f6 r __kstrtab_amd_erratum_400
ffffffff81414406 r __kstrtab_amd_get_nb_id
ffffffff81414414 r __kstrtab_perf_get_x86_pmu_capability
ffffffff81414430 r __kstrtab_amd_pmu_disable_virt
ffffffff81414445 r __kstrtab_amd_pmu_enable_virt
ffffffff81414459 r __kstrtab_perf_guest_get_msrs
ffffffff8141446d r __kstrtab_register_mce_write_callback
ffffffff81414489 r __kstrtab_mce_notify_irq
ffffffff81414498 r __kstrtab_do_machine_check
ffffffff814144a9 r __kstrtab_machine_check_poll
ffffffff814144bc r __kstrtab_mce_unregister_decode_chain
ffffffff814144d8 r __kstrtab_mce_register_decode_chain
ffffffff814144f2 r __kstrtab_injectm
ffffffff814144fa r __kstrtab_apei_mce_report_mem_error
ffffffff81414514 r __kstrtab_mtrr_del
ffffffff8141451d r __kstrtab_mtrr_add
ffffffff81414526 r __kstrtab_mtrr_state
ffffffff81414531 r __kstrtab_release_evntsel_nmi
ffffffff81414545 r __kstrtab_reserve_evntsel_nmi
ffffffff81414559 r __kstrtab_release_perfctr_nmi
ffffffff8141456d r __kstrtab_reserve_perfctr_nmi
ffffffff81414581 r __kstrtab_avail_to_resrv_perfctr_nmi_bit
ffffffff814145a0 r __kstrtab_get_ibs_caps
ffffffff814145ad r __kstrtab_acpi_unregister_ioapic
ffffffff814145c4 r __kstrtab_acpi_register_ioapic
ffffffff814145d9 r __kstrtab_acpi_unmap_lsapic
ffffffff814145eb r __kstrtab_acpi_map_lsapic
ffffffff814145fb r __kstrtab_acpi_gsi_to_irq
ffffffff8141460b r __kstrtab_acpi_pci_disabled
ffffffff8141461d r __kstrtab_acpi_disabled
ffffffff8141462b r __kstrtab_acpi_processor_ffh_cstate_enter
ffffffff8141464b r __kstrtab_acpi_processor_ffh_cstate_probe
ffffffff8141466b r __kstrtab_acpi_processor_power_init_bm_check
ffffffff8141468e r __kstrtab_pm_power_off
ffffffff8141469b r __kstrtab_smp_ops
ffffffff814146a3 r __kstrtab_cpu_info
ffffffff814146ac r __kstrtab_cpu_core_map
ffffffff814146b9 r __kstrtab_cpu_sibling_map
ffffffff814146c9 r __kstrtab_smp_num_siblings
ffffffff814146da r __kstrtab___per_cpu_offset
ffffffff814146eb r __kstrtab_this_cpu_off
ffffffff814146f8 r __kstrtab_cpu_number
ffffffff81414703 r __kstrtab_setup_APIC_eilvt
ffffffff81414714 r __kstrtab_local_apic_timer_c2_ok
ffffffff8141472b r __kstrtab_x86_bios_cpu_apicid
ffffffff8141473f r __kstrtab_x86_cpu_to_apicid
ffffffff81414751 r __kstrtab_IO_APIC_get_PCI_irq_vector
ffffffff8141476c r __kstrtab_apic
ffffffff81414771 r __kstrtab_hpet_rtc_interrupt
ffffffff81414784 r __kstrtab_hpet_rtc_dropped_irq
ffffffff81414799 r __kstrtab_hpet_set_periodic_freq
ffffffff814147b0 r __kstrtab_hpet_set_alarm_time
ffffffff814147c4 r __kstrtab_hpet_set_rtc_irq_bit
ffffffff814147d9 r __kstrtab_hpet_mask_rtc_irq_bit
ffffffff814147ef r __kstrtab_hpet_rtc_timer_init
ffffffff81414803 r __kstrtab_hpet_unregister_irq_handler
ffffffff8141481f r __kstrtab_hpet_register_irq_handler
ffffffff81414839 r __kstrtab_is_hpet_enabled
ffffffff81414849 r __kstrtab_amd_flush_garts
ffffffff81414859 r __kstrtab_amd_cache_northbridges
ffffffff81414870 r __kstrtab_amd_northbridges
ffffffff81414881 r __kstrtab_amd_nb_misc_ids
ffffffff81414891 r __kstrtab_pv_irq_ops
ffffffff8141489c r __kstrtab_pv_info
ffffffff814148a4 r __kstrtab_pv_apic_ops
ffffffff814148b0 r __kstrtab_pv_mmu_ops
ffffffff814148bb r __kstrtab_pv_cpu_ops
ffffffff814148c6 r __kstrtab_pv_time_ops
ffffffff814148d2 r __kstrtab_pv_lock_ops
ffffffff814148de r __kstrtab___supported_pte_mask
ffffffff814148f3 r __kstrtab_iounmap
ffffffff814148fb r __kstrtab_ioremap_prot
ffffffff81414908 r __kstrtab_ioremap_cache
ffffffff81414916 r __kstrtab_ioremap_wc
ffffffff81414921 r __kstrtab_ioremap_nocache
ffffffff81414931 r __kstrtab_set_pages_nx
ffffffff8141493e r __kstrtab_set_pages_x
ffffffff8141494a r __kstrtab_set_pages_array_wb
ffffffff8141495d r __kstrtab_set_pages_wb
ffffffff8141496a r __kstrtab_set_pages_array_wc
ffffffff8141497d r __kstrtab_set_pages_array_uc
ffffffff81414990 r __kstrtab_set_pages_uc
ffffffff8141499d r __kstrtab_set_memory_rw
ffffffff814149ab r __kstrtab_set_memory_ro
ffffffff814149b9 r __kstrtab_set_memory_nx
ffffffff814149c7 r __kstrtab_set_memory_x
ffffffff814149d4 r __kstrtab_set_memory_array_wb
ffffffff814149e8 r __kstrtab_set_memory_wb
ffffffff814149f6 r __kstrtab_set_memory_wc
ffffffff81414a04 r __kstrtab_set_memory_array_wc
ffffffff81414a18 r __kstrtab_set_memory_array_uc
ffffffff81414a2c r __kstrtab_set_memory_uc
ffffffff81414a3a r __kstrtab_lookup_address
ffffffff81414a49 r __kstrtab_clflush_cache_range
ffffffff81414a5d r __kstrtab_pgprot_writecombine
ffffffff81414a71 r __kstrtab___virt_addr_valid
ffffffff81414a83 r __kstrtab___phys_addr
ffffffff81414a8f r __kstrtab_leave_mm
ffffffff81414a98 r __kstrtab___node_distance
ffffffff81414aa8 r __kstrtab_x86_cpu_to_node_map
ffffffff81414abc r __kstrtab_node_to_cpumask_map
ffffffff81414ad0 r __kstrtab_node_data
ffffffff81414ada r __kstrtab_get_task_mm
ffffffff81414ae6 r __kstrtab_mmput
ffffffff81414aec r __kstrtab___mmdrop
ffffffff81414af5 r __kstrtab___put_task_struct
ffffffff81414b07 r __kstrtab_free_task
ffffffff81414b11 r __kstrtab___set_personality
ffffffff81414b23 r __kstrtab_unregister_exec_domain
ffffffff81414b3a r __kstrtab_register_exec_domain
ffffffff81414b4f r __kstrtab_warn_slowpath_null
ffffffff81414b62 r __kstrtab_warn_slowpath_fmt_taint
ffffffff81414b7a r __kstrtab_warn_slowpath_fmt
ffffffff81414b8c r __kstrtab_add_taint
ffffffff81414b96 r __kstrtab_test_taint
ffffffff81414ba1 r __kstrtab_panic
ffffffff81414ba7 r __kstrtab_panic_blink
ffffffff81414bb3 r __kstrtab_panic_notifier_list
ffffffff81414bc7 r __kstrtab_panic_timeout
ffffffff81414bd5 r __kstrtab_kmsg_dump_unregister
ffffffff81414bea r __kstrtab_kmsg_dump_register
ffffffff81414bfd r __kstrtab_printk_timed_ratelimit
ffffffff81414c14 r __kstrtab___printk_ratelimit
ffffffff81414c27 r __kstrtab_unregister_console
ffffffff81414c3a r __kstrtab_register_console
ffffffff81414c4b r __kstrtab_console_start
ffffffff81414c59 r __kstrtab_console_stop
ffffffff81414c66 r __kstrtab_console_conditional_schedule
ffffffff81414c83 r __kstrtab_console_unlock
ffffffff81414c92 r __kstrtab_console_trylock
ffffffff81414ca2 r __kstrtab_console_lock
ffffffff81414caf r __kstrtab_console_suspend_enabled
ffffffff81414cc7 r __kstrtab_vprintk
ffffffff81414ccf r __kstrtab_printk
ffffffff81414cd6 r __kstrtab_console_set_on_cmdline
ffffffff81414ced r __kstrtab_console_drivers
ffffffff81414cfd r __kstrtab_oops_in_progress
ffffffff81414d0e r __kstrtab_cpu_active_mask
ffffffff81414d1e r __kstrtab_cpu_present_mask
ffffffff81414d2f r __kstrtab_cpu_online_mask
ffffffff81414d3f r __kstrtab_cpu_possible_mask
ffffffff81414d51 r __kstrtab_cpu_all_bits
ffffffff81414d5e r __kstrtab_cpu_bit_bitmap
ffffffff81414d6d r __kstrtab_cpu_up
ffffffff81414d74 r __kstrtab_cpu_down
ffffffff81414d7d r __kstrtab_unregister_cpu_notifier
ffffffff81414d95 r __kstrtab_register_cpu_notifier
ffffffff81414dab r __kstrtab_put_online_cpus
ffffffff81414dbb r __kstrtab_get_online_cpus
ffffffff81414dcb r __kstrtab_complete_and_exit
ffffffff81414ddd r __kstrtab_do_exit
ffffffff81414de5 r __kstrtab_daemonize
ffffffff81414def r __kstrtab_disallow_signal
ffffffff81414dff r __kstrtab_allow_signal
ffffffff81414e0c r __kstrtab_jiffies_64_to_clock_t
ffffffff81414e22 r __kstrtab_clock_t_to_jiffies
ffffffff81414e35 r __kstrtab_jiffies_to_clock_t
ffffffff81414e48 r __kstrtab_jiffies_to_timeval
ffffffff81414e5b r __kstrtab_timeval_to_jiffies
ffffffff81414e6e r __kstrtab_jiffies_to_timespec
ffffffff81414e82 r __kstrtab_timespec_to_jiffies
ffffffff81414e96 r __kstrtab_usecs_to_jiffies
ffffffff81414ea7 r __kstrtab_msecs_to_jiffies
ffffffff81414eb8 r __kstrtab_ns_to_timeval
ffffffff81414ec6 r __kstrtab_ns_to_timespec
ffffffff81414ed5 r __kstrtab_set_normalized_timespec
ffffffff81414eed r __kstrtab_mktime
ffffffff81414ef4 r __kstrtab_timespec_trunc
ffffffff81414f03 r __kstrtab_jiffies_to_usecs
ffffffff81414f14 r __kstrtab_jiffies_to_msecs
ffffffff81414f25 r __kstrtab_current_fs_time
ffffffff81414f35 r __kstrtab_sys_tz
ffffffff81414f3c r __kstrtab_send_remote_softirq
ffffffff81414f50 r __kstrtab___send_remote_softirq
ffffffff81414f66 r __kstrtab_softirq_work_list
ffffffff81414f78 r __kstrtab_tasklet_hrtimer_init
ffffffff81414f8d r __kstrtab_tasklet_kill
ffffffff81414f9a r __kstrtab_tasklet_init
ffffffff81414fa7 r __kstrtab___tasklet_hi_schedule_first
ffffffff81414fc3 r __kstrtab___tasklet_hi_schedule
ffffffff81414fd9 r __kstrtab___tasklet_schedule
ffffffff81414fec r __kstrtab_local_bh_enable_ip
ffffffff81414fff r __kstrtab_local_bh_enable
ffffffff8141500f r __kstrtab__local_bh_enable
ffffffff81415020 r __kstrtab_local_bh_disable
ffffffff81415031 r __kstrtab___devm_release_region
ffffffff81415047 r __kstrtab___devm_request_region
ffffffff8141505d r __kstrtab___release_region
ffffffff8141506e r __kstrtab___check_region
ffffffff8141507d r __kstrtab___request_region
ffffffff8141508e r __kstrtab_adjust_resource
ffffffff8141509e r __kstrtab_allocate_resource
ffffffff814150b0 r __kstrtab_release_resource
ffffffff814150c1 r __kstrtab_request_resource
ffffffff814150d2 r __kstrtab_iomem_resource
ffffffff814150e1 r __kstrtab_ioport_resource
ffffffff814150f1 r __kstrtab_proc_doulongvec_ms_jiffies_minmax
ffffffff81415113 r __kstrtab_proc_doulongvec_minmax
ffffffff8141512a r __kstrtab_proc_dostring
ffffffff81415138 r __kstrtab_proc_dointvec_ms_jiffies
ffffffff81415151 r __kstrtab_proc_dointvec_userhz_jiffies
ffffffff8141516e r __kstrtab_proc_dointvec_minmax
ffffffff81415183 r __kstrtab_proc_dointvec_jiffies
ffffffff81415199 r __kstrtab_proc_dointvec
ffffffff814151a7 r __kstrtab_capable
ffffffff814151af r __kstrtab_ns_capable
ffffffff814151ba r __kstrtab___cap_empty_set
ffffffff814151ca r __kstrtab_usleep_range
ffffffff814151d7 r __kstrtab_msleep_interruptible
ffffffff814151ec r __kstrtab_msleep
ffffffff814151f3 r __kstrtab_schedule_timeout_uninterruptible
ffffffff81415214 r __kstrtab_schedule_timeout_killable
ffffffff8141522e r __kstrtab_schedule_timeout_interruptible
ffffffff8141524d r __kstrtab_schedule_timeout
ffffffff8141525e r __kstrtab_del_timer_sync
ffffffff8141526d r __kstrtab_try_to_del_timer_sync
ffffffff81415283 r __kstrtab_del_timer
ffffffff8141528d r __kstrtab_add_timer_on
ffffffff8141529a r __kstrtab_add_timer
ffffffff814152a4 r __kstrtab_mod_timer_pinned
ffffffff814152b5 r __kstrtab_mod_timer
ffffffff814152bf r __kstrtab_mod_timer_pending
ffffffff814152d1 r __kstrtab_init_timer_deferrable_key
ffffffff814152eb r __kstrtab_init_timer_key
ffffffff814152fa r __kstrtab_setup_deferrable_timer_on_stack_key
ffffffff8141531e r __kstrtab_set_timer_slack
ffffffff8141532e r __kstrtab_round_jiffies_up_relative
ffffffff81415348 r __kstrtab_round_jiffies_up
ffffffff81415359 r __kstrtab___round_jiffies_up_relative
ffffffff81415375 r __kstrtab___round_jiffies_up
ffffffff81415388 r __kstrtab_round_jiffies_relative
ffffffff8141539f r __kstrtab_round_jiffies
ffffffff814153ad r __kstrtab___round_jiffies_relative
ffffffff814153c6 r __kstrtab___round_jiffies
ffffffff814153d6 r __kstrtab_boot_tvec_bases
ffffffff814153e6 r __kstrtab_jiffies_64
ffffffff814153f1 r __kstrtab_init_user_ns
ffffffff814153fe r __kstrtab_unblock_all_signals
ffffffff81415412 r __kstrtab_block_all_signals
ffffffff81415424 r __kstrtab_sigprocmask
ffffffff81415430 r __kstrtab_send_sig_info
ffffffff8141543e r __kstrtab_send_sig
ffffffff81415447 r __kstrtab_force_sig
ffffffff81415451 r __kstrtab_flush_signals
ffffffff8141545f r __kstrtab_dequeue_signal
ffffffff8141546e r __kstrtab_recalc_sigpending
ffffffff81415480 r __kstrtab_kill_pid
ffffffff81415489 r __kstrtab_kill_pgrp
ffffffff81415493 r __kstrtab_kill_pid_info_as_cred
ffffffff814154a9 r __kstrtab_orderly_poweroff
ffffffff814154ba r __kstrtab_kernel_power_off
ffffffff814154cb r __kstrtab_kernel_halt
ffffffff814154d7 r __kstrtab_kernel_restart
ffffffff814154e6 r __kstrtab_unregister_reboot_notifier
ffffffff81415501 r __kstrtab_register_reboot_notifier
ffffffff8141551a r __kstrtab_emergency_restart
ffffffff8141552c r __kstrtab_cad_pid
ffffffff81415534 r __kstrtab_fs_overflowgid
ffffffff81415543 r __kstrtab_fs_overflowuid
ffffffff81415552 r __kstrtab_overflowgid
ffffffff8141555e r __kstrtab_overflowuid
ffffffff8141556a r __kstrtab_call_usermodehelper_exec
ffffffff81415583 r __kstrtab_call_usermodehelper_setfns
ffffffff8141559e r __kstrtab_call_usermodehelper_setup
ffffffff814155b8 r __kstrtab_usermodehelper_read_unlock
ffffffff814155d3 r __kstrtab_usermodehelper_read_lock_wait
ffffffff814155f1 r __kstrtab_usermodehelper_read_trylock
ffffffff8141560d r __kstrtab_call_usermodehelper_freeinfo
ffffffff8141562a r __kstrtab___request_module
ffffffff8141563b r __kstrtab_work_on_cpu
ffffffff81415647 r __kstrtab_work_busy
ffffffff81415651 r __kstrtab_work_cpu
ffffffff8141565a r __kstrtab_workqueue_congested
ffffffff8141566e r __kstrtab_workqueue_set_max_active
ffffffff81415687 r __kstrtab_destroy_workqueue
ffffffff81415699 r __kstrtab___alloc_workqueue_key
ffffffff814156af r __kstrtab_execute_in_process_context
ffffffff814156ca r __kstrtab_flush_scheduled_work
ffffffff814156df r __kstrtab_schedule_delayed_work_on
ffffffff814156f8 r __kstrtab_schedule_delayed_work
ffffffff8141570e r __kstrtab_schedule_work_on
ffffffff8141571f r __kstrtab_schedule_work
ffffffff8141572d r __kstrtab_cancel_delayed_work_sync
ffffffff81415746 r __kstrtab_flush_delayed_work_sync
ffffffff8141575e r __kstrtab_flush_delayed_work
ffffffff81415771 r __kstrtab_cancel_work_sync
ffffffff81415782 r __kstrtab_flush_work_sync
ffffffff81415792 r __kstrtab_flush_work
ffffffff8141579d r __kstrtab_drain_workqueue
ffffffff814157ad r __kstrtab_flush_workqueue
ffffffff814157bd r __kstrtab_queue_delayed_work_on
ffffffff814157d3 r __kstrtab_queue_delayed_work
ffffffff814157e6 r __kstrtab_queue_work_on
ffffffff814157f4 r __kstrtab_queue_work
ffffffff814157ff r __kstrtab_system_nrt_freezable_wq
ffffffff81415817 r __kstrtab_system_freezable_wq
ffffffff8141582b r __kstrtab_system_unbound_wq
ffffffff8141583d r __kstrtab_system_nrt_wq
ffffffff8141584b r __kstrtab_system_long_wq
ffffffff8141585a r __kstrtab_system_wq
ffffffff81415864 r __kstrtab_task_active_pid_ns
ffffffff81415877 r __kstrtab_task_tgid_nr_ns
ffffffff81415887 r __kstrtab___task_pid_nr_ns
ffffffff81415898 r __kstrtab_pid_vnr
ffffffff814158a0 r __kstrtab_find_get_pid
ffffffff814158ad r __kstrtab_get_pid_task
ffffffff814158ba r __kstrtab_get_task_pid
ffffffff814158c7 r __kstrtab_pid_task
ffffffff814158d0 r __kstrtab_find_vpid
ffffffff814158da r __kstrtab_find_pid_ns
ffffffff814158e6 r __kstrtab_put_pid
ffffffff814158ee r __kstrtab_is_container_init
ffffffff81415900 r __kstrtab_init_pid_ns
ffffffff8141590c r __kstrtab_do_trace_rcu_torture_read
ffffffff81415926 r __kstrtab_wait_rcu_gp
ffffffff81415932 r __kstrtab___kernel_param_unlock
ffffffff81415948 r __kstrtab___kernel_param_lock
ffffffff8141595c r __kstrtab_param_ops_string
ffffffff8141596d r __kstrtab_param_get_string
ffffffff8141597e r __kstrtab_param_set_copystring
ffffffff81415993 r __kstrtab_param_array_ops
ffffffff814159a3 r __kstrtab_param_ops_bint
ffffffff814159b2 r __kstrtab_param_set_bint
ffffffff814159c1 r __kstrtab_param_ops_invbool
ffffffff814159d3 r __kstrtab_param_get_invbool
ffffffff814159e5 r __kstrtab_param_set_invbool
ffffffff814159f7 r __kstrtab_param_ops_bool
ffffffff81415a06 r __kstrtab_param_get_bool
ffffffff81415a15 r __kstrtab_param_set_bool
ffffffff81415a24 r __kstrtab_param_ops_charp
ffffffff81415a34 r __kstrtab_param_get_charp
ffffffff81415a44 r __kstrtab_param_set_charp
ffffffff81415a54 r __kstrtab_param_ops_ulong
ffffffff81415a64 r __kstrtab_param_get_ulong
ffffffff81415a74 r __kstrtab_param_set_ulong
ffffffff81415a84 r __kstrtab_param_ops_long
ffffffff81415a93 r __kstrtab_param_get_long
ffffffff81415aa2 r __kstrtab_param_set_long
ffffffff81415ab1 r __kstrtab_param_ops_uint
ffffffff81415ac0 r __kstrtab_param_get_uint
ffffffff81415acf r __kstrtab_param_set_uint
ffffffff81415ade r __kstrtab_param_ops_int
ffffffff81415aec r __kstrtab_param_get_int
ffffffff81415afa r __kstrtab_param_set_int
ffffffff81415b08 r __kstrtab_param_ops_ushort
ffffffff81415b19 r __kstrtab_param_get_ushort
ffffffff81415b2a r __kstrtab_param_set_ushort
ffffffff81415b3b r __kstrtab_param_ops_short
ffffffff81415b4b r __kstrtab_param_get_short
ffffffff81415b5b r __kstrtab_param_set_short
ffffffff81415b6b r __kstrtab_param_ops_byte
ffffffff81415b7a r __kstrtab_param_get_byte
ffffffff81415b89 r __kstrtab_param_set_byte
ffffffff81415b98 r __kstrtab_posix_timers_register_clock
ffffffff81415bb4 r __kstrtab_posix_timer_event
ffffffff81415bc6 r __kstrtab_flush_kthread_worker
ffffffff81415bdb r __kstrtab_flush_kthread_work
ffffffff81415bee r __kstrtab_queue_kthread_work
ffffffff81415c01 r __kstrtab_kthread_worker_fn
ffffffff81415c13 r __kstrtab___init_kthread_worker
ffffffff81415c29 r __kstrtab_kthread_stop
ffffffff81415c36 r __kstrtab_kthread_bind
ffffffff81415c43 r __kstrtab_kthread_create_on_node
ffffffff81415c5a r __kstrtab_kthread_freezable_should_stop
ffffffff81415c78 r __kstrtab_kthread_should_stop
ffffffff81415c8c r __kstrtab_bit_waitqueue
ffffffff81415c9a r __kstrtab_wake_up_bit
ffffffff81415ca6 r __kstrtab___wake_up_bit
ffffffff81415cb4 r __kstrtab_out_of_line_wait_on_bit_lock
ffffffff81415cd1 r __kstrtab___wait_on_bit_lock
ffffffff81415ce4 r __kstrtab_out_of_line_wait_on_bit
ffffffff81415cfc r __kstrtab___wait_on_bit
ffffffff81415d0a r __kstrtab_wake_bit_function
ffffffff81415d1c r __kstrtab_autoremove_wake_function
ffffffff81415d35 r __kstrtab_abort_exclusive_wait
ffffffff81415d4a r __kstrtab_finish_wait
ffffffff81415d56 r __kstrtab_prepare_to_wait_exclusive
ffffffff81415d70 r __kstrtab_prepare_to_wait
ffffffff81415d80 r __kstrtab_remove_wait_queue
ffffffff81415d92 r __kstrtab_add_wait_queue_exclusive
ffffffff81415dab r __kstrtab_add_wait_queue
ffffffff81415dba r __kstrtab___init_waitqueue_head
ffffffff81415dd0 r __kstrtab___kfifo_dma_out_finish_r
ffffffff81415de9 r __kstrtab___kfifo_dma_out_prepare_r
ffffffff81415e03 r __kstrtab___kfifo_dma_in_finish_r
ffffffff81415e1b r __kstrtab___kfifo_dma_in_prepare_r
ffffffff81415e34 r __kstrtab___kfifo_to_user_r
ffffffff81415e46 r __kstrtab___kfifo_from_user_r
ffffffff81415e5a r __kstrtab___kfifo_skip_r
ffffffff81415e69 r __kstrtab___kfifo_out_r
ffffffff81415e77 r __kstrtab___kfifo_out_peek_r
ffffffff81415e8a r __kstrtab___kfifo_in_r
ffffffff81415e97 r __kstrtab___kfifo_len_r
ffffffff81415ea5 r __kstrtab___kfifo_dma_out_prepare
ffffffff81415ebd r __kstrtab___kfifo_dma_in_prepare
ffffffff81415ed4 r __kstrtab___kfifo_to_user
ffffffff81415ee4 r __kstrtab___kfifo_from_user
ffffffff81415ef6 r __kstrtab___kfifo_out
ffffffff81415f02 r __kstrtab___kfifo_out_peek
ffffffff81415f13 r __kstrtab___kfifo_in
ffffffff81415f1e r __kstrtab___kfifo_init
ffffffff81415f2b r __kstrtab___kfifo_free
ffffffff81415f38 r __kstrtab___kfifo_alloc
ffffffff81415f46 r __kstrtab_atomic_dec_and_mutex_lock
ffffffff81415f60 r __kstrtab_mutex_trylock
ffffffff81415f6e r __kstrtab_mutex_lock_killable
ffffffff81415f82 r __kstrtab_mutex_lock_interruptible
ffffffff81415f9b r __kstrtab_mutex_unlock
ffffffff81415fa8 r __kstrtab_mutex_lock
ffffffff81415fb3 r __kstrtab___mutex_init
ffffffff81415fc0 r __kstrtab_schedule_hrtimeout
ffffffff81415fd3 r __kstrtab_schedule_hrtimeout_range
ffffffff81415fec r __kstrtab_hrtimer_init_sleeper
ffffffff81416001 r __kstrtab_hrtimer_get_res
ffffffff81416011 r __kstrtab_hrtimer_init
ffffffff8141601e r __kstrtab_hrtimer_get_remaining
ffffffff81416034 r __kstrtab_hrtimer_cancel
ffffffff81416043 r __kstrtab_hrtimer_try_to_cancel
ffffffff81416059 r __kstrtab_hrtimer_start
ffffffff81416067 r __kstrtab_hrtimer_start_range_ns
ffffffff8141607e r __kstrtab_hrtimer_forward
ffffffff8141608e r __kstrtab_ktime_add_safe
ffffffff8141609d r __kstrtab_downgrade_write
ffffffff814160ad r __kstrtab_up_write
ffffffff814160b6 r __kstrtab_up_read
ffffffff814160be r __kstrtab_down_write_trylock
ffffffff814160d1 r __kstrtab_down_write
ffffffff814160dc r __kstrtab_down_read_trylock
ffffffff814160ee r __kstrtab_down_read
ffffffff814160f8 r __kstrtab_srcu_batches_completed
ffffffff8141610f r __kstrtab_synchronize_srcu_expedited
ffffffff8141612a r __kstrtab_synchronize_srcu
ffffffff8141613b r __kstrtab___srcu_read_unlock
ffffffff8141614e r __kstrtab___srcu_read_lock
ffffffff8141615f r __kstrtab_cleanup_srcu_struct
ffffffff81416173 r __kstrtab_init_srcu_struct
ffffffff81416184 r __kstrtab_up
ffffffff81416187 r __kstrtab_down_timeout
ffffffff81416194 r __kstrtab_down_trylock
ffffffff814161a1 r __kstrtab_down_killable
ffffffff814161af r __kstrtab_down_interruptible
ffffffff814161c2 r __kstrtab_down
ffffffff814161c7 r __kstrtab_unregister_die_notifier
ffffffff814161df r __kstrtab_register_die_notifier
ffffffff814161f5 r __kstrtab_srcu_init_notifier_head
ffffffff8141620d r __kstrtab_srcu_notifier_call_chain
ffffffff81416226 r __kstrtab___srcu_notifier_call_chain
ffffffff81416241 r __kstrtab_srcu_notifier_chain_unregister
ffffffff81416260 r __kstrtab_srcu_notifier_chain_register
ffffffff8141627d r __kstrtab_raw_notifier_call_chain
ffffffff81416295 r __kstrtab___raw_notifier_call_chain
ffffffff814162af r __kstrtab_raw_notifier_chain_unregister
ffffffff814162cd r __kstrtab_raw_notifier_chain_register
ffffffff814162e9 r __kstrtab_blocking_notifier_call_chain
ffffffff81416306 r __kstrtab___blocking_notifier_call_chain
ffffffff81416325 r __kstrtab_blocking_notifier_chain_unregister
ffffffff81416348 r __kstrtab_blocking_notifier_chain_cond_register
ffffffff8141636e r __kstrtab_blocking_notifier_chain_register
ffffffff8141638f r __kstrtab_atomic_notifier_call_chain
ffffffff814163aa r __kstrtab___atomic_notifier_call_chain
ffffffff814163c7 r __kstrtab_atomic_notifier_chain_unregister
ffffffff814163e8 r __kstrtab_atomic_notifier_chain_register
ffffffff81416407 r __kstrtab_kernel_kobj
ffffffff81416413 r __kstrtab_set_create_files_as
ffffffff81416427 r __kstrtab_set_security_override_from_ctx
ffffffff81416446 r __kstrtab_set_security_override
ffffffff8141645c r __kstrtab_prepare_kernel_cred
ffffffff81416470 r __kstrtab_revert_creds
ffffffff8141647d r __kstrtab_override_creds
ffffffff8141648c r __kstrtab_abort_creds
ffffffff81416498 r __kstrtab_commit_creds
ffffffff814164a5 r __kstrtab_prepare_creds
ffffffff814164b3 r __kstrtab___put_cred
ffffffff814164be r __kstrtab_async_synchronize_cookie
ffffffff814164d7 r __kstrtab_async_synchronize_cookie_domain
ffffffff814164f7 r __kstrtab_async_synchronize_full_domain
ffffffff81416515 r __kstrtab_async_synchronize_full
ffffffff8141652c r __kstrtab_async_schedule_domain
ffffffff81416542 r __kstrtab_async_schedule
ffffffff81416551 r __kstrtab_in_egroup_p
ffffffff8141655d r __kstrtab_in_group_p
ffffffff81416568 r __kstrtab_set_current_groups
ffffffff8141657b r __kstrtab_set_groups
ffffffff81416586 r __kstrtab_groups_free
ffffffff81416592 r __kstrtab_groups_alloc
ffffffff8141659f r __kstrtab_set_cpus_allowed_ptr
ffffffff814165b4 r __kstrtab_io_schedule
ffffffff814165c0 r __kstrtab_yield_to
ffffffff814165c9 r __kstrtab_yield
ffffffff814165cf r __kstrtab___cond_resched_softirq
ffffffff814165e6 r __kstrtab___cond_resched_lock
ffffffff814165fa r __kstrtab__cond_resched
ffffffff81416608 r __kstrtab_sched_setscheduler
ffffffff8141661b r __kstrtab_task_nice
ffffffff81416625 r __kstrtab_set_user_nice
ffffffff81416633 r __kstrtab_sleep_on_timeout
ffffffff81416644 r __kstrtab_sleep_on
ffffffff8141664d r __kstrtab_interruptible_sleep_on_timeout
ffffffff8141666c r __kstrtab_interruptible_sleep_on
ffffffff81416683 r __kstrtab_completion_done
ffffffff81416693 r __kstrtab_try_wait_for_completion
ffffffff814166ab r __kstrtab_wait_for_completion_killable_timeout
ffffffff814166d0 r __kstrtab_wait_for_completion_killable
ffffffff814166ed r __kstrtab_wait_for_completion_interruptible_timeout
ffffffff81416717 r __kstrtab_wait_for_completion_interruptible
ffffffff81416739 r __kstrtab_wait_for_completion_timeout
ffffffff81416755 r __kstrtab_wait_for_completion
ffffffff81416769 r __kstrtab_complete_all
ffffffff81416776 r __kstrtab_complete
ffffffff8141677f r __kstrtab___wake_up_sync
ffffffff8141678e r __kstrtab___wake_up_sync_key
ffffffff814167a1 r __kstrtab___wake_up_locked_key
ffffffff814167b6 r __kstrtab___wake_up_locked
ffffffff814167c7 r __kstrtab___wake_up
ffffffff814167d1 r __kstrtab_default_wake_function
ffffffff814167e7 r __kstrtab_schedule
ffffffff814167f0 r __kstrtab_kernel_cpustat
ffffffff814167ff r __kstrtab_kstat
ffffffff81416805 r __kstrtab_avenrun
ffffffff8141680d r __kstrtab_wake_up_process
ffffffff8141681d r __kstrtab_kick_process
ffffffff8141682a r __kstrtab_local_clock
ffffffff81416836 r __kstrtab_cpu_clock
ffffffff81416840 r __kstrtab_sched_clock_idle_wakeup_event
ffffffff8141685e r __kstrtab_sched_clock_idle_sleep_event
ffffffff8141687b r __kstrtab_sched_clock
ffffffff81416887 r __kstrtab_pm_qos_remove_notifier
ffffffff8141689e r __kstrtab_pm_qos_add_notifier
ffffffff814168b2 r __kstrtab_pm_qos_remove_request
ffffffff814168c8 r __kstrtab_pm_qos_update_request
ffffffff814168de r __kstrtab_pm_qos_add_request
ffffffff814168f1 r __kstrtab_pm_qos_request_active
ffffffff81416907 r __kstrtab_pm_qos_request
ffffffff81416916 r __kstrtab_pm_wq
ffffffff8141691c r __kstrtab_unregister_pm_notifier
ffffffff81416933 r __kstrtab_register_pm_notifier
ffffffff81416948 r __kstrtab_set_freezable
ffffffff81416956 r __kstrtab___refrigerator
ffffffff81416965 r __kstrtab_freezing_slow_path
ffffffff81416978 r __kstrtab_system_freezing_cnt
ffffffff8141698c r __kstrtab_ktime_get_monotonic_offset
ffffffff814169a7 r __kstrtab_current_kernel_time
ffffffff814169bb r __kstrtab_get_seconds
ffffffff814169c7 r __kstrtab_monotonic_to_bootbased
ffffffff814169de r __kstrtab_ktime_get_boottime
ffffffff814169f1 r __kstrtab_get_monotonic_boottime
ffffffff81416a08 r __kstrtab_getboottime
ffffffff81416a14 r __kstrtab_getrawmonotonic
ffffffff81416a24 r __kstrtab_ktime_get_real
ffffffff81416a33 r __kstrtab_timekeeping_inject_offset
ffffffff81416a4d r __kstrtab_do_settimeofday
ffffffff81416a5d r __kstrtab_do_gettimeofday
ffffffff81416a6d r __kstrtab_ktime_get_ts
ffffffff81416a7a r __kstrtab_ktime_get
ffffffff81416a84 r __kstrtab_getnstimeofday
ffffffff81416a93 r __kstrtab_clocksource_unregister
ffffffff81416aaa r __kstrtab_clocksource_change_rating
ffffffff81416ac4 r __kstrtab_clocksource_register
ffffffff81416ad9 r __kstrtab___clocksource_register_scale
ffffffff81416af6 r __kstrtab___clocksource_updatefreq_scale
ffffffff81416b15 r __kstrtab_timecounter_cyc2time
ffffffff81416b2a r __kstrtab_timecounter_read
ffffffff81416b3b r __kstrtab_timecounter_init
ffffffff81416b4c r __kstrtab_jiffies
ffffffff81416b54 r __kstrtab___timecompare_update
ffffffff81416b69 r __kstrtab_timecompare_offset
ffffffff81416b7c r __kstrtab_timecompare_transform
ffffffff81416b92 r __kstrtab_time_to_tm
ffffffff81416b9d r __kstrtab_posix_clock_unregister
ffffffff81416bb4 r __kstrtab_posix_clock_register
ffffffff81416bc9 r __kstrtab_clockevents_notify
ffffffff81416bdc r __kstrtab_clockevents_register_device
ffffffff81416bf8 r __kstrtab_clockevent_delta2ns
ffffffff81416c0c r __kstrtab___rt_mutex_init
ffffffff81416c1c r __kstrtab_rt_mutex_destroy
ffffffff81416c2d r __kstrtab_rt_mutex_unlock
ffffffff81416c3d r __kstrtab_rt_mutex_trylock
ffffffff81416c4e r __kstrtab_rt_mutex_timed_lock
ffffffff81416c62 r __kstrtab_rt_mutex_lock_interruptible
ffffffff81416c7e r __kstrtab_rt_mutex_lock
ffffffff81416c8c r __kstrtab_dma_spin_lock
ffffffff81416c9a r __kstrtab_free_dma
ffffffff81416ca3 r __kstrtab_request_dma
ffffffff81416caf r __kstrtab_on_each_cpu_cond
ffffffff81416cc0 r __kstrtab_on_each_cpu_mask
ffffffff81416cd1 r __kstrtab_on_each_cpu
ffffffff81416cdd r __kstrtab_nr_cpu_ids
ffffffff81416ce8 r __kstrtab_setup_max_cpus
ffffffff81416cf7 r __kstrtab_smp_call_function
ffffffff81416d09 r __kstrtab_smp_call_function_many
ffffffff81416d20 r __kstrtab_smp_call_function_any
ffffffff81416d36 r __kstrtab_smp_call_function_single
ffffffff81416d4f r __kstrtab_in_lock_functions
ffffffff81416d61 r __kstrtab__raw_write_unlock_bh
ffffffff81416d76 r __kstrtab__raw_write_unlock_irqrestore
ffffffff81416d93 r __kstrtab__raw_write_lock_bh
ffffffff81416da6 r __kstrtab__raw_write_lock_irq
ffffffff81416dba r __kstrtab__raw_write_lock_irqsave
ffffffff81416dd2 r __kstrtab__raw_write_lock
ffffffff81416de2 r __kstrtab__raw_write_trylock
ffffffff81416df5 r __kstrtab__raw_read_unlock_bh
ffffffff81416e09 r __kstrtab__raw_read_unlock_irqrestore
ffffffff81416e25 r __kstrtab__raw_read_lock_bh
ffffffff81416e37 r __kstrtab__raw_read_lock_irq
ffffffff81416e4a r __kstrtab__raw_read_lock_irqsave
ffffffff81416e61 r __kstrtab__raw_read_lock
ffffffff81416e70 r __kstrtab__raw_read_trylock
ffffffff81416e82 r __kstrtab__raw_spin_unlock_bh
ffffffff81416e96 r __kstrtab__raw_spin_unlock_irqrestore
ffffffff81416eb2 r __kstrtab__raw_spin_lock_bh
ffffffff81416ec4 r __kstrtab__raw_spin_lock_irq
ffffffff81416ed7 r __kstrtab__raw_spin_lock_irqsave
ffffffff81416eee r __kstrtab__raw_spin_lock
ffffffff81416efd r __kstrtab__raw_spin_trylock_bh
ffffffff81416f12 r __kstrtab__raw_spin_trylock
ffffffff81416f24 r __kstrtab___module_text_address
ffffffff81416f3a r __kstrtab___module_address
ffffffff81416f4b r __kstrtab___symbol_get
ffffffff81416f58 r __kstrtab_module_put
ffffffff81416f63 r __kstrtab_try_module_get
ffffffff81416f72 r __kstrtab___module_get
ffffffff81416f7f r __kstrtab_symbol_put_addr
ffffffff81416f8f r __kstrtab___symbol_put
ffffffff81416f9c r __kstrtab_module_refcount
ffffffff81416fac r __kstrtab_ref_module
ffffffff81416fb7 r __kstrtab_find_module
ffffffff81416fc3 r __kstrtab_find_symbol
ffffffff81416fcf r __kstrtab_each_symbol_section
ffffffff81416fe3 r __kstrtab___module_put_and_exit
ffffffff81416ff9 r __kstrtab_unregister_module_notifier
ffffffff81417014 r __kstrtab_register_module_notifier
ffffffff8141702d r __kstrtab_module_mutex
ffffffff8141703a r __kstrtab___print_symbol
ffffffff81417049 r __kstrtab_sprint_symbol
ffffffff81417057 r __kstrtab_kallsyms_on_each_symbol
ffffffff8141706f r __kstrtab_kallsyms_lookup_name
ffffffff81417084 r __kstrtab_compat_alloc_user_space
ffffffff8141709c r __kstrtab_sigset_from_compat
ffffffff814170af r __kstrtab_compat_put_timespec
ffffffff814170c3 r __kstrtab_compat_get_timespec
ffffffff814170d7 r __kstrtab_compat_put_timeval
ffffffff814170ea r __kstrtab_compat_get_timeval
ffffffff814170fd r __kstrtab_put_compat_timespec
ffffffff81417111 r __kstrtab_get_compat_timespec
ffffffff81417125 r __kstrtab_put_compat_timeval
ffffffff81417138 r __kstrtab_get_compat_timeval
ffffffff8141714b r __kstrtab_css_lookup
ffffffff81417156 r __kstrtab_free_css_id
ffffffff81417162 r __kstrtab_css_depth
ffffffff8141716c r __kstrtab_css_id
ffffffff81417173 r __kstrtab___css_put
ffffffff8141717d r __kstrtab_cgroup_unload_subsys
ffffffff81417192 r __kstrtab_cgroup_load_subsys
ffffffff814171a5 r __kstrtab_cgroup_add_files
ffffffff814171b6 r __kstrtab_cgroup_add_file
ffffffff814171c6 r __kstrtab_cgroup_lock_live_group
ffffffff814171dd r __kstrtab_cgroup_attach_task_all
ffffffff814171f4 r __kstrtab_cgroup_taskset_size
ffffffff81417208 r __kstrtab_cgroup_taskset_cur_cgroup
ffffffff81417222 r __kstrtab_cgroup_taskset_next
ffffffff81417236 r __kstrtab_cgroup_taskset_first
ffffffff8141724b r __kstrtab_cgroup_path
ffffffff81417257 r __kstrtab_cgroup_unlock
ffffffff81417265 r __kstrtab_cgroup_lock
ffffffff81417271 r __kstrtab_cgroup_lock_is_held
ffffffff81417285 r __kstrtab_cpuset_mem_spread_node
ffffffff8141729c r __kstrtab_stop_machine
ffffffff814172a9 r __kstrtab_audit_log
ffffffff814172b3 r __kstrtab_audit_log_format
ffffffff814172c4 r __kstrtab_audit_log_end
ffffffff814172d2 r __kstrtab_audit_log_start
ffffffff814172e2 r __kstrtab_audit_enabled
ffffffff814172f0 r __kstrtab___audit_inode_child
ffffffff81417304 r __kstrtab_audit_log_task_context
ffffffff8141731b r __kstrtab___irq_alloc_descs
ffffffff8141732d r __kstrtab_irq_free_descs
ffffffff8141733c r __kstrtab_generic_handle_irq
ffffffff8141734f r __kstrtab_irq_to_desc
ffffffff8141735b r __kstrtab_nr_irqs
ffffffff81417363 r __kstrtab_request_any_context_irq
ffffffff8141737b r __kstrtab_request_threaded_irq
ffffffff81417390 r __kstrtab_free_irq
ffffffff81417399 r __kstrtab_remove_irq
ffffffff814173a4 r __kstrtab_setup_irq
ffffffff814173ae r __kstrtab_irq_set_irq_wake
ffffffff814173bf r __kstrtab_enable_irq
ffffffff814173ca r __kstrtab_disable_irq
ffffffff814173d6 r __kstrtab_disable_irq_nosync
ffffffff814173e9 r __kstrtab_irq_set_affinity_notifier
ffffffff81417403 r __kstrtab_irq_set_affinity_hint
ffffffff81417419 r __kstrtab_synchronize_irq
ffffffff81417429 r __kstrtab_irq_modify_status
ffffffff8141743b r __kstrtab___irq_set_handler
ffffffff8141744d r __kstrtab_handle_edge_irq
ffffffff8141745d r __kstrtab_handle_level_irq
ffffffff8141746e r __kstrtab_handle_simple_irq
ffffffff81417480 r __kstrtab_handle_nested_irq
ffffffff81417492 r __kstrtab_irq_get_irq_data
ffffffff814174a3 r __kstrtab_irq_set_chip_data
ffffffff814174b5 r __kstrtab_irq_set_handler_data
ffffffff814174ca r __kstrtab_irq_set_irq_type
ffffffff814174db r __kstrtab_irq_set_chip
ffffffff814174e8 r __kstrtab_devm_free_irq
ffffffff814174f6 r __kstrtab_devm_request_threaded_irq
ffffffff81417510 r __kstrtab_probe_irq_off
ffffffff8141751e r __kstrtab_probe_irq_mask
ffffffff8141752d r __kstrtab_probe_irq_on
ffffffff8141753a r __kstrtab_resume_device_irqs
ffffffff8141754d r __kstrtab_suspend_device_irqs
ffffffff81417561 r __kstrtab_rcu_barrier
ffffffff8141756d r __kstrtab_synchronize_rcu_expedited
ffffffff81417587 r __kstrtab_kfree_call_rcu
ffffffff81417596 r __kstrtab_rcu_force_quiescent_state
ffffffff814175b0 r __kstrtab_rcu_batches_completed
ffffffff814175c6 r __kstrtab_rcu_barrier_sched
ffffffff814175d8 r __kstrtab_rcu_barrier_bh
ffffffff814175e7 r __kstrtab_synchronize_sched_expedited
ffffffff81417603 r __kstrtab_synchronize_rcu_bh
ffffffff81417616 r __kstrtab_synchronize_sched
ffffffff81417628 r __kstrtab_call_rcu_bh
ffffffff81417634 r __kstrtab_call_rcu_sched
ffffffff81417643 r __kstrtab_rcu_idle_exit
ffffffff81417651 r __kstrtab_rcu_idle_enter
ffffffff81417660 r __kstrtab_rcu_sched_force_quiescent_state
ffffffff81417680 r __kstrtab_rcutorture_record_progress
ffffffff8141769b r __kstrtab_rcutorture_record_test_transition
ffffffff814176bd r __kstrtab_rcu_bh_force_quiescent_state
ffffffff814176da r __kstrtab_rcu_batches_completed_bh
ffffffff814176f3 r __kstrtab_rcu_batches_completed_sched
ffffffff8141770f r __kstrtab_rcu_note_context_switch
ffffffff81417727 r __kstrtab_rcu_scheduler_active
ffffffff8141773c r __kstrtab_delayacct_on
ffffffff81417749 r __kstrtab_irq_work_sync
ffffffff81417757 r __kstrtab_irq_work_run
ffffffff81417764 r __kstrtab_irq_work_queue
ffffffff81417773 r __kstrtab_perf_event_create_kernel_counter
ffffffff81417794 r __kstrtab_perf_swevent_get_recursion_context
ffffffff814177b7 r __kstrtab_perf_unregister_guest_info_callbacks
ffffffff814177dc r __kstrtab_perf_register_guest_info_callbacks
ffffffff814177ff r __kstrtab_perf_event_read_value
ffffffff81417815 r __kstrtab_perf_event_release_kernel
ffffffff8141782f r __kstrtab_perf_event_refresh
ffffffff81417842 r __kstrtab_perf_event_enable
ffffffff81417854 r __kstrtab_perf_event_disable
ffffffff81417867 r __kstrtab_unregister_wide_hw_breakpoint
ffffffff81417885 r __kstrtab_register_wide_hw_breakpoint
ffffffff814178a1 r __kstrtab_unregister_hw_breakpoint
ffffffff814178ba r __kstrtab_modify_user_hw_breakpoint
ffffffff814178d4 r __kstrtab_register_user_hw_breakpoint
ffffffff814178f0 r __kstrtab_try_to_release_page
ffffffff81417904 r __kstrtab_generic_file_aio_write
ffffffff8141791b r __kstrtab___generic_file_aio_write
ffffffff81417934 r __kstrtab_generic_file_buffered_write
ffffffff81417950 r __kstrtab_grab_cache_page_write_begin
ffffffff8141796c r __kstrtab_generic_file_direct_write
ffffffff81417986 r __kstrtab_pagecache_write_end
ffffffff8141799a r __kstrtab_pagecache_write_begin
ffffffff814179b0 r __kstrtab_generic_write_checks
ffffffff814179c5 r __kstrtab_iov_iter_single_seg_count
ffffffff814179df r __kstrtab_iov_iter_fault_in_readable
ffffffff814179fa r __kstrtab_iov_iter_advance
ffffffff81417a0b r __kstrtab_iov_iter_copy_from_user
ffffffff81417a23 r __kstrtab_iov_iter_copy_from_user_atomic
ffffffff81417a42 r __kstrtab_file_remove_suid
ffffffff81417a53 r __kstrtab_should_remove_suid
ffffffff81417a66 r __kstrtab_read_cache_page
ffffffff81417a76 r __kstrtab_read_cache_page_gfp
ffffffff81417a8a r __kstrtab_read_cache_page_async
ffffffff81417aa0 r __kstrtab_generic_file_readonly_mmap
ffffffff81417abb r __kstrtab_generic_file_mmap
ffffffff81417acd r __kstrtab_filemap_fault
ffffffff81417adb r __kstrtab_generic_file_aio_read
ffffffff81417af1 r __kstrtab_generic_segment_checks
ffffffff81417b08 r __kstrtab_grab_cache_page_nowait
ffffffff81417b1f r __kstrtab_find_get_pages_tag
ffffffff81417b32 r __kstrtab_find_get_pages_contig
ffffffff81417b48 r __kstrtab_find_or_create_page
ffffffff81417b5c r __kstrtab_find_lock_page
ffffffff81417b6b r __kstrtab_find_get_page
ffffffff81417b79 r __kstrtab___lock_page_killable
ffffffff81417b8e r __kstrtab___lock_page
ffffffff81417b9a r __kstrtab_end_page_writeback
ffffffff81417bad r __kstrtab_unlock_page
ffffffff81417bb9 r __kstrtab_add_page_wait_queue
ffffffff81417bcd r __kstrtab_wait_on_page_bit
ffffffff81417bde r __kstrtab___page_cache_alloc
ffffffff81417bf1 r __kstrtab_add_to_page_cache_lru
ffffffff81417c07 r __kstrtab_add_to_page_cache_locked
ffffffff81417c20 r __kstrtab_replace_page_cache_page
ffffffff81417c38 r __kstrtab_filemap_write_and_wait_range
ffffffff81417c55 r __kstrtab_filemap_write_and_wait
ffffffff81417c6c r __kstrtab_filemap_fdatawait
ffffffff81417c7e r __kstrtab_filemap_fdatawait_range
ffffffff81417c96 r __kstrtab_filemap_flush
ffffffff81417ca4 r __kstrtab_filemap_fdatawrite_range
ffffffff81417cbd r __kstrtab_filemap_fdatawrite
ffffffff81417cd0 r __kstrtab_delete_from_page_cache
ffffffff81417ce7 r __kstrtab_mempool_free_pages
ffffffff81417cfa r __kstrtab_mempool_alloc_pages
ffffffff81417d0e r __kstrtab_mempool_kfree
ffffffff81417d1c r __kstrtab_mempool_kmalloc
ffffffff81417d2c r __kstrtab_mempool_free_slab
ffffffff81417d3e r __kstrtab_mempool_alloc_slab
ffffffff81417d51 r __kstrtab_mempool_free
ffffffff81417d5e r __kstrtab_mempool_alloc
ffffffff81417d6c r __kstrtab_mempool_resize
ffffffff81417d7b r __kstrtab_mempool_create_node
ffffffff81417d8f r __kstrtab_mempool_create
ffffffff81417d9e r __kstrtab_mempool_destroy
ffffffff81417dae r __kstrtab_unregister_oom_notifier
ffffffff81417dc6 r __kstrtab_register_oom_notifier
ffffffff81417ddc r __kstrtab_probe_kernel_write
ffffffff81417def r __kstrtab_probe_kernel_read
ffffffff81417e01 r __kstrtab_si_meminfo
ffffffff81417e0c r __kstrtab_nr_free_buffer_pages
ffffffff81417e21 r __kstrtab_free_pages_exact
ffffffff81417e32 r __kstrtab_alloc_pages_exact_nid
ffffffff81417e48 r __kstrtab_alloc_pages_exact
ffffffff81417e5a r __kstrtab_free_pages
ffffffff81417e65 r __kstrtab___free_pages
ffffffff81417e72 r __kstrtab_get_zeroed_page
ffffffff81417e82 r __kstrtab___get_free_pages
ffffffff81417e93 r __kstrtab___alloc_pages_nodemask
ffffffff81417eaa r __kstrtab_nr_online_nodes
ffffffff81417eba r __kstrtab_nr_node_ids
ffffffff81417ec6 r __kstrtab_movable_zone
ffffffff81417ed3 r __kstrtab_totalram_pages
ffffffff81417ee2 r __kstrtab_node_states
ffffffff81417eee r __kstrtab_numa_node
ffffffff81417ef8 r __kstrtab_mapping_tagged
ffffffff81417f07 r __kstrtab_test_set_page_writeback
ffffffff81417f1f r __kstrtab_clear_page_dirty_for_io
ffffffff81417f37 r __kstrtab_set_page_dirty_lock
ffffffff81417f4b r __kstrtab_set_page_dirty
ffffffff81417f5a r __kstrtab_redirty_page_for_writepage
ffffffff81417f75 r __kstrtab_account_page_redirty
ffffffff81417f8a r __kstrtab___set_page_dirty_nobuffers
ffffffff81417fa5 r __kstrtab_account_page_writeback
ffffffff81417fbc r __kstrtab_account_page_dirtied
ffffffff81417fd1 r __kstrtab_write_one_page
ffffffff81417fe0 r __kstrtab_generic_writepages
ffffffff81417ff3 r __kstrtab_write_cache_pages
ffffffff81418005 r __kstrtab_tag_pages_for_writeback
ffffffff8141801d r __kstrtab_balance_dirty_pages_ratelimited_nr
ffffffff81418040 r __kstrtab_bdi_set_max_ratio
ffffffff81418052 r __kstrtab_bdi_writeout_inc
ffffffff81418063 r __kstrtab_laptop_mode
ffffffff8141806f r __kstrtab_dirty_writeback_interval
ffffffff81418088 r __kstrtab_page_cache_async_readahead
ffffffff814180a3 r __kstrtab_page_cache_sync_readahead
ffffffff814180bd r __kstrtab_read_cache_pages
ffffffff814180ce r __kstrtab_file_ra_state_init
ffffffff814180e1 r __kstrtab_pagevec_lookup_tag
ffffffff814180f4 r __kstrtab_pagevec_lookup
ffffffff81418103 r __kstrtab___pagevec_lru_add
ffffffff81418115 r __kstrtab___pagevec_release
ffffffff81418127 r __kstrtab_release_pages
ffffffff81418135 r __kstrtab___lru_cache_add
ffffffff81418145 r __kstrtab_mark_page_accessed
ffffffff81418158 r __kstrtab_put_pages_list
ffffffff81418167 r __kstrtab___get_page_tail
ffffffff81418177 r __kstrtab_put_page
ffffffff81418180 r __kstrtab_truncate_pagecache_range
ffffffff81418199 r __kstrtab_vmtruncate
ffffffff814181a4 r __kstrtab_truncate_setsize
ffffffff814181b5 r __kstrtab_truncate_pagecache
ffffffff814181c8 r __kstrtab_invalidate_inode_pages2
ffffffff814181e0 r __kstrtab_invalidate_inode_pages2_range
ffffffff814181fe r __kstrtab_invalidate_mapping_pages
ffffffff81418217 r __kstrtab_truncate_inode_pages
ffffffff8141822c r __kstrtab_truncate_inode_pages_range
ffffffff81418247 r __kstrtab_generic_error_remove_page
ffffffff81418261 r __kstrtab_cancel_dirty_page
ffffffff81418273 r __kstrtab_unregister_shrinker
ffffffff81418287 r __kstrtab_register_shrinker
ffffffff81418299 r __kstrtab_shmem_read_mapping_page_gfp
ffffffff814182b5 r __kstrtab_shmem_file_setup
ffffffff814182c6 r __kstrtab_shmem_truncate_range
ffffffff814182db r __kstrtab_get_user_pages_fast
ffffffff814182ef r __kstrtab___get_user_pages_fast
ffffffff81418305 r __kstrtab_strndup_user
ffffffff81418312 r __kstrtab_kzfree
ffffffff81418319 r __kstrtab_krealloc
ffffffff81418322 r __kstrtab___krealloc
ffffffff8141832d r __kstrtab_memdup_user
ffffffff81418339 r __kstrtab_kmemdup
ffffffff81418341 r __kstrtab_kstrndup
ffffffff8141834a r __kstrtab_kstrdup
ffffffff81418352 r __kstrtab_dec_zone_page_state
ffffffff81418366 r __kstrtab_inc_zone_page_state
ffffffff8141837a r __kstrtab_mod_zone_page_state
ffffffff8141838e r __kstrtab___dec_zone_page_state
ffffffff814183a4 r __kstrtab___inc_zone_page_state
ffffffff814183ba r __kstrtab___mod_zone_page_state
ffffffff814183d0 r __kstrtab_vm_stat
ffffffff814183d8 r __kstrtab_all_vm_events
ffffffff814183e6 r __kstrtab_vm_event_states
ffffffff814183f6 r __kstrtab_wait_iff_congested
ffffffff81418409 r __kstrtab_congestion_wait
ffffffff81418419 r __kstrtab_set_bdi_congested
ffffffff8141842b r __kstrtab_clear_bdi_congested
ffffffff8141843f r __kstrtab_bdi_setup_and_register
ffffffff81418456 r __kstrtab_bdi_destroy
ffffffff81418462 r __kstrtab_bdi_init
ffffffff8141846b r __kstrtab_bdi_unregister
ffffffff8141847a r __kstrtab_bdi_register_dev
ffffffff8141848b r __kstrtab_bdi_register
ffffffff81418498 r __kstrtab_noop_backing_dev_info
ffffffff814184ae r __kstrtab_default_backing_dev_info
ffffffff814184c7 r __kstrtab_mm_kobj
ffffffff814184cf r __kstrtab_unuse_mm
ffffffff814184d8 r __kstrtab_use_mm
ffffffff814184df r __kstrtab_free_percpu
ffffffff814184eb r __kstrtab___alloc_percpu
ffffffff814184fa r __kstrtab_pcpu_base_addr
ffffffff81418509 r __kstrtab_follow_pfn
ffffffff81418514 r __kstrtab_unmap_mapping_range
ffffffff81418528 r __kstrtab_apply_to_page_range
ffffffff8141853c r __kstrtab_remap_pfn_range
ffffffff8141854c r __kstrtab_vm_insert_mixed
ffffffff8141855c r __kstrtab_vm_insert_pfn
ffffffff8141856a r __kstrtab_vm_insert_page
ffffffff81418579 r __kstrtab_get_user_pages
ffffffff81418588 r __kstrtab___get_user_pages
ffffffff81418599 r __kstrtab_zap_vma_ptes
ffffffff814185a6 r __kstrtab_high_memory
ffffffff814185b2 r __kstrtab_num_physpages
ffffffff814185c0 r __kstrtab_can_do_mlock
ffffffff814185cd r __kstrtab_vm_brk
ffffffff814185d4 r __kstrtab_vm_munmap
ffffffff814185de r __kstrtab_do_munmap
ffffffff814185e8 r __kstrtab_find_vma
ffffffff814185f1 r __kstrtab_get_unmapped_area
ffffffff81418603 r __kstrtab_vm_mmap
ffffffff8141860b r __kstrtab_do_mmap
ffffffff81418613 r __kstrtab_vm_get_page_prot
ffffffff81418624 r __kstrtab_page_mkclean
ffffffff81418631 r __kstrtab_free_vm_area
ffffffff8141863e r __kstrtab_alloc_vm_area
ffffffff8141864c r __kstrtab_remap_vmalloc_range
ffffffff81418660 r __kstrtab_vmalloc_32_user
ffffffff81418670 r __kstrtab_vmalloc_32
ffffffff8141867b r __kstrtab_vzalloc_node
ffffffff81418688 r __kstrtab_vmalloc_node
ffffffff81418695 r __kstrtab_vmalloc_user
ffffffff814186a2 r __kstrtab_vzalloc
ffffffff814186aa r __kstrtab_vmalloc
ffffffff814186b2 r __kstrtab___vmalloc
ffffffff814186bc r __kstrtab_vmap
ffffffff814186c1 r __kstrtab_vunmap
ffffffff814186c8 r __kstrtab_vfree
ffffffff814186ce r __kstrtab___get_vm_area
ffffffff814186dc r __kstrtab_map_vm_area
ffffffff814186e8 r __kstrtab_unmap_kernel_range_noflush
ffffffff81418703 r __kstrtab_vm_map_ram
ffffffff8141870e r __kstrtab_vm_unmap_ram
ffffffff8141871b r __kstrtab_vm_unmap_aliases
ffffffff8141872c r __kstrtab_vmalloc_to_pfn
ffffffff8141873b r __kstrtab_vmalloc_to_page
ffffffff8141874b r __kstrtab_blk_queue_bounce
ffffffff8141875c r __kstrtab_dmam_pool_destroy
ffffffff8141876e r __kstrtab_dmam_pool_create
ffffffff8141877f r __kstrtab_dma_pool_free
ffffffff8141878d r __kstrtab_dma_pool_alloc
ffffffff8141879c r __kstrtab_dma_pool_destroy
ffffffff814187ad r __kstrtab_dma_pool_create
ffffffff814187bd r __kstrtab_PageHuge
ffffffff814187c6 r __kstrtab_vma_kernel_pagesize
ffffffff814187da r __kstrtab_alloc_pages_current
ffffffff814187ee r __kstrtab_mem_section
ffffffff814187fa r __kstrtab_mmu_notifier_unregister
ffffffff81418812 r __kstrtab___mmu_notifier_register
ffffffff8141882a r __kstrtab_mmu_notifier_register
ffffffff81418840 r __kstrtab_ksize
ffffffff81418846 r __kstrtab_kmem_cache_size
ffffffff81418856 r __kstrtab_kfree
ffffffff8141885c r __kstrtab_kmem_cache_free
ffffffff8141886c r __kstrtab___kmalloc
ffffffff81418876 r __kstrtab___kmalloc_node
ffffffff81418885 r __kstrtab_kmem_cache_alloc_node
ffffffff8141889b r __kstrtab_kmem_cache_alloc
ffffffff814188ac r __kstrtab_kmem_cache_destroy
ffffffff814188bf r __kstrtab_kmem_cache_shrink
ffffffff814188d1 r __kstrtab_kmem_cache_create
ffffffff814188e3 r __kstrtab_malloc_sizes
ffffffff814188f0 r __kstrtab_buffer_migrate_page
ffffffff81418904 r __kstrtab_migrate_page
ffffffff81418911 r __kstrtab_fail_migrate_page
ffffffff81418923 r __kstrtab_unpoison_memory
ffffffff81418933 r __kstrtab_memory_failure_queue
ffffffff81418948 r __kstrtab_memory_failure
ffffffff81418957 r __kstrtab_shake_page
ffffffff81418962 r __kstrtab_hwpoison_filter
ffffffff81418972 r __kstrtab___cleancache_invalidate_fs
ffffffff8141898d r __kstrtab___cleancache_invalidate_inode
ffffffff814189ab r __kstrtab___cleancache_invalidate_page
ffffffff814189c8 r __kstrtab___cleancache_put_page
ffffffff814189de r __kstrtab___cleancache_get_page
ffffffff814189f4 r __kstrtab___cleancache_init_shared_fs
ffffffff81418a10 r __kstrtab___cleancache_init_fs
ffffffff81418a25 r __kstrtab_cleancache_register_ops
ffffffff81418a3d r __kstrtab_cleancache_enabled
ffffffff81418a50 r __kstrtab_nonseekable_open
ffffffff81418a61 r __kstrtab_generic_file_open
ffffffff81418a73 r __kstrtab_sys_close
ffffffff81418a7d r __kstrtab_filp_close
ffffffff81418a88 r __kstrtab_file_open_root
ffffffff81418a97 r __kstrtab_filp_open
ffffffff81418aa1 r __kstrtab_fd_install
ffffffff81418aac r __kstrtab_put_unused_fd
ffffffff81418aba r __kstrtab_dentry_open
ffffffff81418ac6 r __kstrtab_lookup_instantiate_filp
ffffffff81418ade r __kstrtab_vfs_writev
ffffffff81418ae9 r __kstrtab_vfs_readv
ffffffff81418af3 r __kstrtab_iov_shorten
ffffffff81418aff r __kstrtab_vfs_write
ffffffff81418b09 r __kstrtab_do_sync_write
ffffffff81418b17 r __kstrtab_vfs_read
ffffffff81418b20 r __kstrtab_do_sync_read
ffffffff81418b2d r __kstrtab_vfs_llseek
ffffffff81418b38 r __kstrtab_default_llseek
ffffffff81418b47 r __kstrtab_no_llseek
ffffffff81418b51 r __kstrtab_noop_llseek
ffffffff81418b5d r __kstrtab_generic_file_llseek
ffffffff81418b71 r __kstrtab_generic_file_llseek_size
ffffffff81418b8a r __kstrtab_generic_ro_fops
ffffffff81418b9a r __kstrtab_fget_raw
ffffffff81418ba3 r __kstrtab_fget
ffffffff81418ba8 r __kstrtab_fput
ffffffff81418bad r __kstrtab_alloc_file
ffffffff81418bb8 r __kstrtab_get_max_files
ffffffff81418bc6 r __kstrtab_files_lglock_global_unlock
ffffffff81418be1 r __kstrtab_files_lglock_global_lock
ffffffff81418bfa r __kstrtab_files_lglock_global_unlock_online
ffffffff81418c1c r __kstrtab_files_lglock_global_lock_online
ffffffff81418c3c r __kstrtab_files_lglock_local_unlock_cpu
ffffffff81418c5a r __kstrtab_files_lglock_local_lock_cpu
ffffffff81418c76 r __kstrtab_files_lglock_local_unlock
ffffffff81418c90 r __kstrtab_files_lglock_local_lock
ffffffff81418ca8 r __kstrtab_files_lglock_lock_init
ffffffff81418cbf r __kstrtab_thaw_super
ffffffff81418cca r __kstrtab_freeze_super
ffffffff81418cd7 r __kstrtab_mount_single
ffffffff81418ce4 r __kstrtab_mount_nodev
ffffffff81418cf0 r __kstrtab_kill_block_super
ffffffff81418d01 r __kstrtab_mount_bdev
ffffffff81418d0c r __kstrtab_mount_ns
ffffffff81418d15 r __kstrtab_kill_litter_super
ffffffff81418d27 r __kstrtab_kill_anon_super
ffffffff81418d37 r __kstrtab_set_anon_super
ffffffff81418d46 r __kstrtab_free_anon_bdev
ffffffff81418d55 r __kstrtab_get_anon_bdev
ffffffff81418d63 r __kstrtab_get_super_thawed
ffffffff81418d74 r __kstrtab_get_super
ffffffff81418d7e r __kstrtab_iterate_supers_type
ffffffff81418d92 r __kstrtab_drop_super
ffffffff81418d9d r __kstrtab_sget
ffffffff81418da2 r __kstrtab_generic_shutdown_super
ffffffff81418db9 r __kstrtab_unlock_super
ffffffff81418dc6 r __kstrtab_lock_super
ffffffff81418dd1 r __kstrtab_deactivate_super
ffffffff81418de2 r __kstrtab_deactivate_locked_super
ffffffff81418dfa r __kstrtab_directly_mappable_cdev_bdi
ffffffff81418e15 r __kstrtab___unregister_chrdev
ffffffff81418e29 r __kstrtab___register_chrdev
ffffffff81418e3b r __kstrtab_cdev_add
ffffffff81418e44 r __kstrtab_cdev_del
ffffffff81418e4d r __kstrtab_cdev_alloc
ffffffff81418e58 r __kstrtab_cdev_init
ffffffff81418e62 r __kstrtab_alloc_chrdev_region
ffffffff81418e76 r __kstrtab_unregister_chrdev_region
ffffffff81418e8f r __kstrtab_register_chrdev_region
ffffffff81418ea6 r __kstrtab_inode_set_bytes
ffffffff81418eb6 r __kstrtab_inode_get_bytes
ffffffff81418ec6 r __kstrtab_inode_sub_bytes
ffffffff81418ed6 r __kstrtab_inode_add_bytes
ffffffff81418ee6 r __kstrtab_vfs_lstat
ffffffff81418ef0 r __kstrtab_vfs_stat
ffffffff81418ef9 r __kstrtab_vfs_fstatat
ffffffff81418f05 r __kstrtab_vfs_fstat
ffffffff81418f0f r __kstrtab_vfs_getattr
ffffffff81418f1b r __kstrtab_generic_fillattr
ffffffff81418f2c r __kstrtab_dump_seek
ffffffff81418f36 r __kstrtab_dump_write
ffffffff81418f41 r __kstrtab_set_binfmt
ffffffff81418f4c r __kstrtab_search_binary_handler
ffffffff81418f62 r __kstrtab_remove_arg_zero
ffffffff81418f72 r __kstrtab_prepare_binprm
ffffffff81418f81 r __kstrtab_install_exec_creds
ffffffff81418f94 r __kstrtab_setup_new_exec
ffffffff81418fa3 r __kstrtab_would_dump
ffffffff81418fae r __kstrtab_flush_old_exec
ffffffff81418fbd r __kstrtab_get_task_comm
ffffffff81418fcb r __kstrtab_kernel_read
ffffffff81418fd7 r __kstrtab_open_exec
ffffffff81418fe1 r __kstrtab_setup_arg_pages
ffffffff81418ff1 r __kstrtab_copy_strings_kernel
ffffffff81419005 r __kstrtab_unregister_binfmt
ffffffff81419017 r __kstrtab___register_binfmt
ffffffff81419029 r __kstrtab_generic_pipe_buf_release
ffffffff81419042 r __kstrtab_generic_pipe_buf_confirm
ffffffff8141905b r __kstrtab_generic_pipe_buf_get
ffffffff81419070 r __kstrtab_generic_pipe_buf_steal
ffffffff81419087 r __kstrtab_generic_pipe_buf_unmap
ffffffff8141909e r __kstrtab_generic_pipe_buf_map
ffffffff814190b3 r __kstrtab_pipe_unlock
ffffffff814190bf r __kstrtab_pipe_lock
ffffffff814190c9 r __kstrtab_generic_readlink
ffffffff814190da r __kstrtab_dentry_unhash
ffffffff814190e8 r __kstrtab_vfs_unlink
ffffffff814190f3 r __kstrtab_vfs_symlink
ffffffff814190ff r __kstrtab_vfs_rmdir
ffffffff81419109 r __kstrtab_vfs_rename
ffffffff81419114 r __kstrtab_vfs_readlink
ffffffff81419121 r __kstrtab_generic_permission
ffffffff81419134 r __kstrtab_vfs_mknod
ffffffff8141913e r __kstrtab_vfs_mkdir
ffffffff81419148 r __kstrtab_vfs_link
ffffffff81419151 r __kstrtab_vfs_follow_link
ffffffff81419161 r __kstrtab_vfs_create
ffffffff8141916c r __kstrtab_unlock_rename
ffffffff8141917a r __kstrtab_inode_permission
ffffffff8141918b r __kstrtab_vfs_path_lookup
ffffffff8141919b r __kstrtab_kern_path
ffffffff814191a5 r __kstrtab_page_symlink_inode_operations
ffffffff814191c3 r __kstrtab_page_symlink
ffffffff814191d0 r __kstrtab___page_symlink
ffffffff814191df r __kstrtab_page_readlink
ffffffff814191ed r __kstrtab_page_put_link
ffffffff814191fb r __kstrtab_page_follow_link_light
ffffffff81419212 r __kstrtab_lookup_one_len
ffffffff81419221 r __kstrtab_lock_rename
ffffffff8141922d r __kstrtab_getname
ffffffff81419235 r __kstrtab_get_write_access
ffffffff81419246 r __kstrtab_follow_up
ffffffff81419250 r __kstrtab_follow_down
ffffffff8141925c r __kstrtab_follow_down_one
ffffffff8141926c r __kstrtab_user_path_at
ffffffff81419279 r __kstrtab_user_path_create
ffffffff8141928a r __kstrtab_kern_path_create
ffffffff8141929b r __kstrtab_full_name_hash
ffffffff814192aa r __kstrtab_path_put
ffffffff814192b3 r __kstrtab_path_get
ffffffff814192bc r __kstrtab_putname
ffffffff814192c4 r __kstrtab_kill_fasync
ffffffff814192d0 r __kstrtab_fasync_helper
ffffffff814192de r __kstrtab_f_setown
ffffffff814192e7 r __kstrtab___f_setown
ffffffff814192f2 r __kstrtab_generic_block_fiemap
ffffffff81419307 r __kstrtab___generic_block_fiemap
ffffffff8141931e r __kstrtab_fiemap_check_flags
ffffffff81419331 r __kstrtab_fiemap_fill_next_extent
ffffffff81419349 r __kstrtab_vfs_readdir
ffffffff81419355 r __kstrtab_poll_schedule_timeout
ffffffff8141936b r __kstrtab_poll_freewait
ffffffff81419379 r __kstrtab_poll_initwait
ffffffff81419387 r __kstrtab_d_genocide
ffffffff81419392 r __kstrtab_names_cachep
ffffffff8141939f r __kstrtab_find_inode_number
ffffffff814193b1 r __kstrtab_dentry_path_raw
ffffffff814193c1 r __kstrtab_d_path
ffffffff814193c8 r __kstrtab_d_materialise_unique
ffffffff814193dd r __kstrtab_d_move
ffffffff814193e4 r __kstrtab_dentry_update_name_case
ffffffff814193fc r __kstrtab_d_rehash
ffffffff81419405 r __kstrtab_d_delete
ffffffff8141940e r __kstrtab_d_validate
ffffffff81419419 r __kstrtab_d_lookup
ffffffff81419422 r __kstrtab_d_add_ci
ffffffff8141942b r __kstrtab_d_splice_alias
ffffffff8141943a r __kstrtab_d_obtain_alias
ffffffff81419449 r __kstrtab_d_find_any_alias
ffffffff8141945a r __kstrtab_d_make_root
ffffffff81419466 r __kstrtab_d_instantiate_unique
ffffffff8141947b r __kstrtab_d_instantiate
ffffffff81419489 r __kstrtab_d_set_d_op
ffffffff81419494 r __kstrtab_d_alloc_name
ffffffff814194a1 r __kstrtab_d_alloc_pseudo
ffffffff814194b0 r __kstrtab_d_alloc
ffffffff814194b8 r __kstrtab_shrink_dcache_parent
ffffffff814194cd r __kstrtab_have_submounts
ffffffff814194dc r __kstrtab_shrink_dcache_sb
ffffffff814194ed r __kstrtab_d_prune_aliases
ffffffff814194fd r __kstrtab_d_find_alias
ffffffff8141950a r __kstrtab_dget_parent
ffffffff81419516 r __kstrtab_d_invalidate
ffffffff81419523 r __kstrtab_dput
ffffffff81419528 r __kstrtab_d_clear_need_lookup
ffffffff8141953c r __kstrtab_d_drop
ffffffff81419543 r __kstrtab___d_drop
ffffffff8141954c r __kstrtab_rename_lock
ffffffff81419558 r __kstrtab_sysctl_vfs_cache_pressure
ffffffff81419572 r __kstrtab_inode_owner_or_capable
ffffffff81419589 r __kstrtab_inode_init_owner
ffffffff8141959a r __kstrtab_init_special_inode
ffffffff814195ad r __kstrtab_inode_wait
ffffffff814195b8 r __kstrtab_inode_needs_sync
ffffffff814195c9 r __kstrtab_file_update_time
ffffffff814195da r __kstrtab_touch_atime
ffffffff814195e6 r __kstrtab_bmap
ffffffff814195eb r __kstrtab_iput
ffffffff814195f0 r __kstrtab_generic_delete_inode
ffffffff81419605 r __kstrtab_insert_inode_locked4
ffffffff8141961a r __kstrtab_insert_inode_locked
ffffffff8141962e r __kstrtab_ilookup
ffffffff81419636 r __kstrtab_ilookup5
ffffffff8141963f r __kstrtab_ilookup5_nowait
ffffffff8141964f r __kstrtab_igrab
ffffffff81419655 r __kstrtab_iunique
ffffffff8141965d r __kstrtab_iget_locked
ffffffff81419669 r __kstrtab_iget5_locked
ffffffff81419676 r __kstrtab_unlock_new_inode
ffffffff81419687 r __kstrtab_new_inode
ffffffff81419691 r __kstrtab_get_next_ino
ffffffff8141969e r __kstrtab_end_writeback
ffffffff814196ac r __kstrtab___remove_inode_hash
ffffffff814196c0 r __kstrtab___insert_inode_hash
ffffffff814196d4 r __kstrtab_inode_sb_list_add
ffffffff814196e6 r __kstrtab_ihold
ffffffff814196ec r __kstrtab_inode_init_once
ffffffff814196fc r __kstrtab_address_space_init_once
ffffffff81419714 r __kstrtab_inc_nlink
ffffffff8141971e r __kstrtab_set_nlink
ffffffff81419728 r __kstrtab_clear_nlink
ffffffff81419734 r __kstrtab_drop_nlink
ffffffff8141973f r __kstrtab___destroy_inode
ffffffff8141974f r __kstrtab_free_inode_nonrcu
ffffffff81419761 r __kstrtab_inode_init_always
ffffffff81419773 r __kstrtab_empty_aops
ffffffff8141977e r __kstrtab_notify_change
ffffffff8141978c r __kstrtab_setattr_copy
ffffffff81419799 r __kstrtab_inode_newsize_ok
ffffffff814197aa r __kstrtab_inode_change_ok
ffffffff814197ba r __kstrtab_iget_failed
ffffffff814197c6 r __kstrtab_is_bad_inode
ffffffff814197d3 r __kstrtab_make_bad_inode
ffffffff814197e2 r __kstrtab_get_unused_fd
ffffffff814197f0 r __kstrtab_get_fs_type
ffffffff814197fc r __kstrtab_unregister_filesystem
ffffffff81419812 r __kstrtab_register_filesystem
ffffffff81419826 r __kstrtab_kern_unmount
ffffffff81419833 r __kstrtab_kern_mount_data
ffffffff81419843 r __kstrtab_path_is_under
ffffffff81419851 r __kstrtab_mount_subtree
ffffffff8141985f r __kstrtab_mark_mounts_for_expiry
ffffffff81419876 r __kstrtab_mnt_set_expiry
ffffffff81419885 r __kstrtab_may_umount
ffffffff81419890 r __kstrtab_may_umount_tree
ffffffff814198a0 r __kstrtab_replace_mount_options
ffffffff814198b6 r __kstrtab_save_mount_options
ffffffff814198c9 r __kstrtab_generic_show_options
ffffffff814198de r __kstrtab_mnt_unpin
ffffffff814198e8 r __kstrtab_mnt_pin
ffffffff814198f0 r __kstrtab_mntget
ffffffff814198f7 r __kstrtab_mntput
ffffffff814198fe r __kstrtab_vfs_kern_mount
ffffffff8141990d r __kstrtab_mnt_drop_write_file
ffffffff81419921 r __kstrtab_mnt_drop_write
ffffffff81419930 r __kstrtab_mnt_want_write_file
ffffffff81419944 r __kstrtab_mnt_clone_write
ffffffff81419954 r __kstrtab_mnt_want_write
ffffffff81419963 r __kstrtab___mnt_is_readonly
ffffffff81419975 r __kstrtab_vfsmount_lock_global_unlock
ffffffff81419991 r __kstrtab_vfsmount_lock_global_lock
ffffffff814199ab r __kstrtab_vfsmount_lock_global_unlock_online
ffffffff814199ce r __kstrtab_vfsmount_lock_global_lock_online
ffffffff814199ef r __kstrtab_vfsmount_lock_local_unlock_cpu
ffffffff81419a0e r __kstrtab_vfsmount_lock_local_lock_cpu
ffffffff81419a2b r __kstrtab_vfsmount_lock_local_unlock
ffffffff81419a46 r __kstrtab_vfsmount_lock_local_lock
ffffffff81419a5f r __kstrtab_vfsmount_lock_lock_init
ffffffff81419a77 r __kstrtab_fs_kobj
ffffffff81419a7f r __kstrtab_seq_hlist_next_rcu
ffffffff81419a92 r __kstrtab_seq_hlist_start_head_rcu
ffffffff81419aab r __kstrtab_seq_hlist_start_rcu
ffffffff81419abf r __kstrtab_seq_hlist_next
ffffffff81419ace r __kstrtab_seq_hlist_start_head
ffffffff81419ae3 r __kstrtab_seq_hlist_start
ffffffff81419af3 r __kstrtab_seq_list_next
ffffffff81419b01 r __kstrtab_seq_list_start_head
ffffffff81419b15 r __kstrtab_seq_list_start
ffffffff81419b24 r __kstrtab_seq_write
ffffffff81419b2e r __kstrtab_seq_put_decimal_ll
ffffffff81419b41 r __kstrtab_seq_put_decimal_ull
ffffffff81419b55 r __kstrtab_seq_puts
ffffffff81419b5e r __kstrtab_seq_putc
ffffffff81419b67 r __kstrtab_seq_open_private
ffffffff81419b78 r __kstrtab___seq_open_private
ffffffff81419b8b r __kstrtab_seq_release_private
ffffffff81419b9f r __kstrtab_single_release
ffffffff81419bae r __kstrtab_single_open
ffffffff81419bba r __kstrtab_seq_bitmap_list
ffffffff81419bca r __kstrtab_seq_bitmap
ffffffff81419bd5 r __kstrtab_seq_path
ffffffff81419bde r __kstrtab_mangle_path
ffffffff81419bea r __kstrtab_seq_printf
ffffffff81419bf5 r __kstrtab_seq_escape
ffffffff81419c00 r __kstrtab_seq_release
ffffffff81419c0c r __kstrtab_seq_lseek
ffffffff81419c16 r __kstrtab_seq_read
ffffffff81419c1f r __kstrtab_seq_open
ffffffff81419c28 r __kstrtab_generic_removexattr
ffffffff81419c3c r __kstrtab_generic_setxattr
ffffffff81419c4d r __kstrtab_generic_listxattr
ffffffff81419c5f r __kstrtab_generic_getxattr
ffffffff81419c70 r __kstrtab_vfs_removexattr
ffffffff81419c80 r __kstrtab_vfs_listxattr
ffffffff81419c8e r __kstrtab_vfs_getxattr
ffffffff81419c9b r __kstrtab_xattr_getsecurity
ffffffff81419cad r __kstrtab_vfs_setxattr
ffffffff81419cba r __kstrtab_simple_attr_write
ffffffff81419ccc r __kstrtab_simple_attr_read
ffffffff81419cdd r __kstrtab_simple_attr_release
ffffffff81419cf1 r __kstrtab_simple_attr_open
ffffffff81419d02 r __kstrtab_simple_transaction_release
ffffffff81419d1d r __kstrtab_simple_transaction_read
ffffffff81419d35 r __kstrtab_simple_transaction_get
ffffffff81419d4c r __kstrtab_simple_transaction_set
ffffffff81419d63 r __kstrtab_memory_read_from_buffer
ffffffff81419d7b r __kstrtab_simple_write_to_buffer
ffffffff81419d92 r __kstrtab_simple_read_from_buffer
ffffffff81419daa r __kstrtab_simple_unlink
ffffffff81419db8 r __kstrtab_noop_fsync
ffffffff81419dc3 r __kstrtab_simple_statfs
ffffffff81419dd1 r __kstrtab_simple_rmdir
ffffffff81419dde r __kstrtab_simple_rename
ffffffff81419dec r __kstrtab_simple_release_fs
ffffffff81419dfe r __kstrtab_simple_readpage
ffffffff81419e0e r __kstrtab_simple_pin_fs
ffffffff81419e1c r __kstrtab_simple_lookup
ffffffff81419e2a r __kstrtab_simple_link
ffffffff81419e36 r __kstrtab_simple_open
ffffffff81419e42 r __kstrtab_simple_getattr
ffffffff81419e51 r __kstrtab_simple_fill_super
ffffffff81419e63 r __kstrtab_simple_empty
ffffffff81419e70 r __kstrtab_simple_dir_operations
ffffffff81419e86 r __kstrtab_simple_dir_inode_operations
ffffffff81419ea2 r __kstrtab_simple_write_end
ffffffff81419eb3 r __kstrtab_simple_write_begin
ffffffff81419ec6 r __kstrtab_mount_pseudo
ffffffff81419ed3 r __kstrtab_generic_read_dir
ffffffff81419ee4 r __kstrtab_dcache_readdir
ffffffff81419ef3 r __kstrtab_dcache_dir_open
ffffffff81419f03 r __kstrtab_dcache_dir_lseek
ffffffff81419f14 r __kstrtab_dcache_dir_close
ffffffff81419f25 r __kstrtab_generic_check_addressable
ffffffff81419f3f r __kstrtab_generic_file_fsync
ffffffff81419f52 r __kstrtab_generic_fh_to_parent
ffffffff81419f67 r __kstrtab_generic_fh_to_dentry
ffffffff81419f7c r __kstrtab_simple_setattr
ffffffff81419f8b r __kstrtab_sync_inode_metadata
ffffffff81419f9f r __kstrtab_sync_inode
ffffffff81419faa r __kstrtab_write_inode_now
ffffffff81419fba r __kstrtab_sync_inodes_sb
ffffffff81419fc9 r __kstrtab_writeback_inodes_sb_nr_if_idle
ffffffff81419fe8 r __kstrtab_writeback_inodes_sb_if_idle
ffffffff8141a004 r __kstrtab_writeback_inodes_sb
ffffffff8141a018 r __kstrtab_writeback_inodes_sb_nr
ffffffff8141a02f r __kstrtab___mark_inode_dirty
ffffffff8141a042 r __kstrtab_splice_direct_to_actor
ffffffff8141a059 r __kstrtab_generic_splice_sendpage
ffffffff8141a071 r __kstrtab_generic_file_splice_write
ffffffff8141a08b r __kstrtab___splice_from_pipe
ffffffff8141a09e r __kstrtab_splice_from_pipe_end
ffffffff8141a0b3 r __kstrtab_splice_from_pipe_begin
ffffffff8141a0ca r __kstrtab_splice_from_pipe_next
ffffffff8141a0e0 r __kstrtab_splice_from_pipe_feed
ffffffff8141a0f6 r __kstrtab_pipe_to_file
ffffffff8141a103 r __kstrtab_default_file_splice_read
ffffffff8141a11c r __kstrtab_generic_file_splice_read
ffffffff8141a135 r __kstrtab_generic_write_sync
ffffffff8141a148 r __kstrtab_vfs_fsync
ffffffff8141a152 r __kstrtab_vfs_fsync_range
ffffffff8141a162 r __kstrtab_sync_filesystem
ffffffff8141a172 r __kstrtab_fsstack_copy_attr_all
ffffffff8141a188 r __kstrtab_fsstack_copy_inode_size
ffffffff8141a1a0 r __kstrtab_current_umask
ffffffff8141a1ae r __kstrtab_unshare_fs_struct
ffffffff8141a1c0 r __kstrtab_vfs_statfs
ffffffff8141a1cb r __kstrtab_bh_submit_read
ffffffff8141a1da r __kstrtab_bh_uptodate_or_lock
ffffffff8141a1ee r __kstrtab_free_buffer_head
ffffffff8141a1ff r __kstrtab_alloc_buffer_head
ffffffff8141a211 r __kstrtab_try_to_free_buffers
ffffffff8141a225 r __kstrtab_sync_dirty_buffer
ffffffff8141a237 r __kstrtab___sync_dirty_buffer
ffffffff8141a24b r __kstrtab_write_dirty_buffer
ffffffff8141a25e r __kstrtab_ll_rw_block
ffffffff8141a26a r __kstrtab_submit_bh
ffffffff8141a274 r __kstrtab_generic_block_bmap
ffffffff8141a287 r __kstrtab_block_write_full_page
ffffffff8141a29d r __kstrtab_block_write_full_page_endio
ffffffff8141a2b9 r __kstrtab_block_truncate_page
ffffffff8141a2cd r __kstrtab_nobh_truncate_page
ffffffff8141a2e0 r __kstrtab_nobh_writepage
ffffffff8141a2ef r __kstrtab_nobh_write_end
ffffffff8141a2fe r __kstrtab_nobh_write_begin
ffffffff8141a30f r __kstrtab_block_page_mkwrite
ffffffff8141a322 r __kstrtab___block_page_mkwrite
ffffffff8141a337 r __kstrtab_block_commit_write
ffffffff8141a34a r __kstrtab_cont_write_begin
ffffffff8141a35b r __kstrtab_generic_cont_expand_simple
ffffffff8141a376 r __kstrtab_block_read_full_page
ffffffff8141a38b r __kstrtab_block_is_partially_uptodate
ffffffff8141a3a7 r __kstrtab_generic_write_end
ffffffff8141a3b9 r __kstrtab_block_write_end
ffffffff8141a3c9 r __kstrtab_block_write_begin
ffffffff8141a3db r __kstrtab___block_write_begin
ffffffff8141a3ef r __kstrtab_page_zero_new_buffers
ffffffff8141a405 r __kstrtab_unmap_underlying_metadata
ffffffff8141a41f r __kstrtab_create_empty_buffers
ffffffff8141a434 r __kstrtab_block_invalidatepage
ffffffff8141a449 r __kstrtab_set_bh_page
ffffffff8141a455 r __kstrtab_invalidate_bh_lrus
ffffffff8141a468 r __kstrtab___bread
ffffffff8141a470 r __kstrtab___breadahead
ffffffff8141a47d r __kstrtab___getblk
ffffffff8141a486 r __kstrtab___find_get_block
ffffffff8141a497 r __kstrtab___bforget
ffffffff8141a4a1 r __kstrtab___brelse
ffffffff8141a4aa r __kstrtab_mark_buffer_dirty
ffffffff8141a4bc r __kstrtab_alloc_page_buffers
ffffffff8141a4cf r __kstrtab_invalidate_inode_buffers
ffffffff8141a4e8 r __kstrtab___set_page_dirty_buffers
ffffffff8141a501 r __kstrtab_mark_buffer_dirty_inode
ffffffff8141a519 r __kstrtab_sync_mapping_buffers
ffffffff8141a52e r __kstrtab_mark_buffer_async_write
ffffffff8141a546 r __kstrtab_end_buffer_async_write
ffffffff8141a55d r __kstrtab_end_buffer_write_sync
ffffffff8141a573 r __kstrtab_end_buffer_read_sync
ffffffff8141a588 r __kstrtab___wait_on_buffer
ffffffff8141a599 r __kstrtab_unlock_buffer
ffffffff8141a5a7 r __kstrtab___lock_buffer
ffffffff8141a5b5 r __kstrtab_init_buffer
ffffffff8141a5c1 r __kstrtab_bioset_create
ffffffff8141a5cf r __kstrtab_bioset_free
ffffffff8141a5db r __kstrtab_bio_sector_offset
ffffffff8141a5ed r __kstrtab_bio_split
ffffffff8141a5f7 r __kstrtab_bio_pair_release
ffffffff8141a608 r __kstrtab_bio_endio
ffffffff8141a612 r __kstrtab_bio_copy_kern
ffffffff8141a620 r __kstrtab_bio_map_kern
ffffffff8141a62d r __kstrtab_bio_unmap_user
ffffffff8141a63c r __kstrtab_bio_map_user
ffffffff8141a649 r __kstrtab_bio_copy_user
ffffffff8141a657 r __kstrtab_bio_uncopy_user
ffffffff8141a667 r __kstrtab_bio_add_page
ffffffff8141a674 r __kstrtab_bio_add_pc_page
ffffffff8141a684 r __kstrtab_bio_get_nr_vecs
ffffffff8141a694 r __kstrtab_bio_clone
ffffffff8141a69e r __kstrtab___bio_clone
ffffffff8141a6aa r __kstrtab_bio_phys_segments
ffffffff8141a6bc r __kstrtab_bio_put
ffffffff8141a6c4 r __kstrtab_zero_fill_bio
ffffffff8141a6d2 r __kstrtab_bio_kmalloc
ffffffff8141a6de r __kstrtab_bio_alloc
ffffffff8141a6e8 r __kstrtab_bio_alloc_bioset
ffffffff8141a6f9 r __kstrtab_bio_init
ffffffff8141a702 r __kstrtab_bio_free
ffffffff8141a70b r __kstrtab___invalidate_device
ffffffff8141a71f r __kstrtab_lookup_bdev
ffffffff8141a72b r __kstrtab_ioctl_by_bdev
ffffffff8141a739 r __kstrtab_blkdev_aio_write
ffffffff8141a74a r __kstrtab_blkdev_put
ffffffff8141a755 r __kstrtab_blkdev_get_by_dev
ffffffff8141a767 r __kstrtab_blkdev_get_by_path
ffffffff8141a77a r __kstrtab_blkdev_get
ffffffff8141a785 r __kstrtab_bd_set_size
ffffffff8141a791 r __kstrtab_check_disk_change
ffffffff8141a7a3 r __kstrtab_revalidate_disk
ffffffff8141a7b3 r __kstrtab_check_disk_size_change
ffffffff8141a7ca r __kstrtab_bd_unlink_disk_holder
ffffffff8141a7e0 r __kstrtab_bd_link_disk_holder
ffffffff8141a7f4 r __kstrtab_bdput
ffffffff8141a7fa r __kstrtab_bdget
ffffffff8141a800 r __kstrtab_blkdev_fsync
ffffffff8141a80d r __kstrtab_thaw_bdev
ffffffff8141a817 r __kstrtab_freeze_bdev
ffffffff8141a823 r __kstrtab_fsync_bdev
ffffffff8141a82e r __kstrtab_sync_blockdev
ffffffff8141a83c r __kstrtab_sb_min_blocksize
ffffffff8141a84d r __kstrtab_sb_set_blocksize
ffffffff8141a85e r __kstrtab_set_blocksize
ffffffff8141a86c r __kstrtab_invalidate_bdev
ffffffff8141a87c r __kstrtab_kill_bdev
ffffffff8141a886 r __kstrtab_I_BDEV
ffffffff8141a88d r __kstrtab___blockdev_direct_IO
ffffffff8141a8a2 r __kstrtab_dio_end_io
ffffffff8141a8ad r __kstrtab_inode_dio_done
ffffffff8141a8bc r __kstrtab_inode_dio_wait
ffffffff8141a8cb r __kstrtab_mpage_writepage
ffffffff8141a8db r __kstrtab_mpage_writepages
ffffffff8141a8ec r __kstrtab_mpage_readpage
ffffffff8141a8fb r __kstrtab_mpage_readpages
ffffffff8141a90b r __kstrtab_set_task_ioprio
ffffffff8141a91b r __kstrtab_fsnotify
ffffffff8141a924 r __kstrtab___fsnotify_parent
ffffffff8141a936 r __kstrtab___fsnotify_inode_delete
ffffffff8141a94e r __kstrtab_fsnotify_get_cookie
ffffffff8141a962 r __kstrtab_anon_inode_getfd
ffffffff8141a973 r __kstrtab_anon_inode_getfile
ffffffff8141a986 r __kstrtab_eventfd_ctx_fileget
ffffffff8141a99a r __kstrtab_eventfd_ctx_fdget
ffffffff8141a9ac r __kstrtab_eventfd_fget
ffffffff8141a9b9 r __kstrtab_eventfd_ctx_read
ffffffff8141a9ca r __kstrtab_eventfd_ctx_remove_wait_queue
ffffffff8141a9e8 r __kstrtab_eventfd_ctx_put
ffffffff8141a9f8 r __kstrtab_eventfd_ctx_get
ffffffff8141aa08 r __kstrtab_eventfd_signal
ffffffff8141aa17 r __kstrtab_aio_complete
ffffffff8141aa24 r __kstrtab_kick_iocb
ffffffff8141aa2e r __kstrtab_aio_put_req
ffffffff8141aa3a r __kstrtab_wait_on_sync_kiocb
ffffffff8141aa4d r __kstrtab_lock_may_write
ffffffff8141aa5c r __kstrtab_lock_may_read
ffffffff8141aa6a r __kstrtab_vfs_cancel_lock
ffffffff8141aa7a r __kstrtab_posix_unblock_lock
ffffffff8141aa8d r __kstrtab_locks_remove_posix
ffffffff8141aaa0 r __kstrtab_vfs_lock_file
ffffffff8141aaae r __kstrtab_vfs_test_lock
ffffffff8141aabc r __kstrtab_flock_lock_file_wait
ffffffff8141aad1 r __kstrtab_vfs_setlease
ffffffff8141aade r __kstrtab_generic_setlease
ffffffff8141aaef r __kstrtab_lease_get_mtime
ffffffff8141aaff r __kstrtab___break_lease
ffffffff8141ab0d r __kstrtab_lease_modify
ffffffff8141ab1a r __kstrtab_locks_mandatory_area
ffffffff8141ab2f r __kstrtab_posix_lock_file_wait
ffffffff8141ab44 r __kstrtab_posix_lock_file
ffffffff8141ab54 r __kstrtab_posix_test_lock
ffffffff8141ab64 r __kstrtab_locks_delete_block
ffffffff8141ab77 r __kstrtab_locks_copy_lock
ffffffff8141ab87 r __kstrtab___locks_copy_lock
ffffffff8141ab99 r __kstrtab_locks_init_lock
ffffffff8141aba9 r __kstrtab_locks_free_lock
ffffffff8141abb9 r __kstrtab_locks_release_private
ffffffff8141abcf r __kstrtab_locks_alloc_lock
ffffffff8141abe0 r __kstrtab_unlock_flocks
ffffffff8141abee r __kstrtab_lock_flocks
ffffffff8141abfa r __kstrtab_posix_acl_chmod
ffffffff8141ac0a r __kstrtab_posix_acl_create
ffffffff8141ac1b r __kstrtab_posix_acl_from_mode
ffffffff8141ac2f r __kstrtab_posix_acl_equiv_mode
ffffffff8141ac44 r __kstrtab_posix_acl_valid
ffffffff8141ac54 r __kstrtab_posix_acl_alloc
ffffffff8141ac64 r __kstrtab_posix_acl_init
ffffffff8141ac73 r __kstrtab_posix_acl_to_xattr
ffffffff8141ac86 r __kstrtab_posix_acl_from_xattr
ffffffff8141ac9b r __kstrtab_remove_proc_entry
ffffffff8141acad r __kstrtab_proc_create_data
ffffffff8141acbe r __kstrtab_create_proc_entry
ffffffff8141acd0 r __kstrtab_proc_mkdir
ffffffff8141acdb r __kstrtab_proc_net_mkdir
ffffffff8141acea r __kstrtab_proc_mkdir_mode
ffffffff8141acfa r __kstrtab_proc_symlink
ffffffff8141ad07 r __kstrtab_unregister_sysctl_table
ffffffff8141ad1f r __kstrtab_register_sysctl_table
ffffffff8141ad35 r __kstrtab_register_sysctl_paths
ffffffff8141ad4b r __kstrtab_register_sysctl
ffffffff8141ad5b r __kstrtab_proc_net_remove
ffffffff8141ad6b r __kstrtab_proc_net_fops_create
ffffffff8141ad80 r __kstrtab_single_release_net
ffffffff8141ad93 r __kstrtab_seq_release_net
ffffffff8141ada3 r __kstrtab_single_open_net
ffffffff8141adb3 r __kstrtab_seq_open_net
ffffffff8141adc0 r __kstrtab_sysfs_create_files
ffffffff8141add3 r __kstrtab_sysfs_remove_files
ffffffff8141ade6 r __kstrtab_sysfs_remove_file
ffffffff8141adf8 r __kstrtab_sysfs_create_file
ffffffff8141ae0a r __kstrtab_sysfs_schedule_callback
ffffffff8141ae22 r __kstrtab_sysfs_remove_file_from_group
ffffffff8141ae3f r __kstrtab_sysfs_chmod_file
ffffffff8141ae50 r __kstrtab_sysfs_add_file_to_group
ffffffff8141ae68 r __kstrtab_sysfs_notify
ffffffff8141ae75 r __kstrtab_sysfs_notify_dirent
ffffffff8141ae89 r __kstrtab_sysfs_get_dirent
ffffffff8141ae9a r __kstrtab_sysfs_rename_link
ffffffff8141aeac r __kstrtab_sysfs_remove_link
ffffffff8141aebe r __kstrtab_sysfs_create_link
ffffffff8141aed0 r __kstrtab_sysfs_put
ffffffff8141aeda r __kstrtab_sysfs_get
ffffffff8141aee4 r __kstrtab_sysfs_remove_bin_file
ffffffff8141aefa r __kstrtab_sysfs_create_bin_file
ffffffff8141af10 r __kstrtab_sysfs_remove_group
ffffffff8141af23 r __kstrtab_sysfs_update_group
ffffffff8141af36 r __kstrtab_sysfs_create_group
ffffffff8141af49 r __kstrtab_sysfs_unmerge_group
ffffffff8141af5d r __kstrtab_sysfs_merge_group
ffffffff8141af6f r __kstrtab_configfs_unregister_subsystem
ffffffff8141af8d r __kstrtab_configfs_register_subsystem
ffffffff8141afa9 r __kstrtab_configfs_undepend_item
ffffffff8141afc0 r __kstrtab_configfs_depend_item
ffffffff8141afd5 r __kstrtab_config_group_find_item
ffffffff8141afec r __kstrtab_config_item_put
ffffffff8141affc r __kstrtab_config_item_get
ffffffff8141b00c r __kstrtab_config_group_init
ffffffff8141b01e r __kstrtab_config_item_init
ffffffff8141b02f r __kstrtab_config_group_init_type_name
ffffffff8141b04b r __kstrtab_config_item_init_type_name
ffffffff8141b066 r __kstrtab_config_item_set_name
ffffffff8141b07b r __kstrtab_load_nls_default
ffffffff8141b08c r __kstrtab_load_nls
ffffffff8141b095 r __kstrtab_unload_nls
ffffffff8141b0a0 r __kstrtab_unregister_nls
ffffffff8141b0af r __kstrtab_register_nls
ffffffff8141b0bc r __kstrtab_utf16s_to_utf8s
ffffffff8141b0cc r __kstrtab_utf8s_to_utf16s
ffffffff8141b0dc r __kstrtab_utf32_to_utf8
ffffffff8141b0ea r __kstrtab_utf8_to_utf32
ffffffff8141b0f8 r __kstrtab_pstore_register
ffffffff8141b108 r __kstrtab_crypto_has_alg
ffffffff8141b117 r __kstrtab_crypto_destroy_tfm
ffffffff8141b12a r __kstrtab_crypto_alloc_tfm
ffffffff8141b13b r __kstrtab_crypto_find_alg
ffffffff8141b14b r __kstrtab_crypto_create_tfm
ffffffff8141b15d r __kstrtab_crypto_alloc_base
ffffffff8141b16f r __kstrtab___crypto_alloc_tfm
ffffffff8141b182 r __kstrtab_crypto_shoot_alg
ffffffff8141b193 r __kstrtab_crypto_alg_mod_lookup
ffffffff8141b1a9 r __kstrtab_crypto_probing_notify
ffffffff8141b1bf r __kstrtab_crypto_larval_lookup
ffffffff8141b1d4 r __kstrtab_crypto_alg_lookup
ffffffff8141b1e6 r __kstrtab_crypto_larval_kill
ffffffff8141b1f9 r __kstrtab_crypto_larval_alloc
ffffffff8141b20d r __kstrtab_crypto_mod_put
ffffffff8141b21c r __kstrtab_crypto_mod_get
ffffffff8141b22b r __kstrtab_crypto_chain
ffffffff8141b238 r __kstrtab_crypto_alg_sem
ffffffff8141b247 r __kstrtab_crypto_alg_list
ffffffff8141b257 r __kstrtab_kcrypto_wq
ffffffff8141b262 r __kstrtab_crypto_xor
ffffffff8141b26d r __kstrtab_crypto_inc
ffffffff8141b278 r __kstrtab_crypto_tfm_in_queue
ffffffff8141b28c r __kstrtab_crypto_dequeue_request
ffffffff8141b2a3 r __kstrtab___crypto_dequeue_request
ffffffff8141b2bc r __kstrtab_crypto_enqueue_request
ffffffff8141b2d3 r __kstrtab_crypto_init_queue
ffffffff8141b2e5 r __kstrtab_crypto_alloc_instance
ffffffff8141b2fb r __kstrtab_crypto_alloc_instance2
ffffffff8141b312 r __kstrtab_crypto_attr_u32
ffffffff8141b322 r __kstrtab_crypto_attr_alg2
ffffffff8141b333 r __kstrtab_crypto_attr_alg_name
ffffffff8141b348 r __kstrtab_crypto_check_attr_type
ffffffff8141b35f r __kstrtab_crypto_get_attr_type
ffffffff8141b374 r __kstrtab_crypto_unregister_notifier
ffffffff8141b38f r __kstrtab_crypto_register_notifier
ffffffff8141b3a8 r __kstrtab_crypto_spawn_tfm2
ffffffff8141b3ba r __kstrtab_crypto_spawn_tfm
ffffffff8141b3cb r __kstrtab_crypto_drop_spawn
ffffffff8141b3dd r __kstrtab_crypto_init_spawn2
ffffffff8141b3f0 r __kstrtab_crypto_init_spawn
ffffffff8141b402 r __kstrtab_crypto_unregister_instance
ffffffff8141b41d r __kstrtab_crypto_register_instance
ffffffff8141b436 r __kstrtab_crypto_lookup_template
ffffffff8141b44d r __kstrtab_crypto_unregister_template
ffffffff8141b468 r __kstrtab_crypto_register_template
ffffffff8141b481 r __kstrtab_crypto_unregister_algs
ffffffff8141b498 r __kstrtab_crypto_register_algs
ffffffff8141b4ad r __kstrtab_crypto_unregister_alg
ffffffff8141b4c3 r __kstrtab_crypto_register_alg
ffffffff8141b4d7 r __kstrtab_crypto_remove_final
ffffffff8141b4eb r __kstrtab_crypto_alg_tested
ffffffff8141b4fd r __kstrtab_crypto_remove_spawns
ffffffff8141b512 r __kstrtab_crypto_larval_error
ffffffff8141b526 r __kstrtab_scatterwalk_map_and_copy
ffffffff8141b53f r __kstrtab_scatterwalk_copychunks
ffffffff8141b556 r __kstrtab_scatterwalk_done
ffffffff8141b567 r __kstrtab_scatterwalk_map
ffffffff8141b577 r __kstrtab_scatterwalk_start
ffffffff8141b589 r __kstrtab_crypto_alloc_aead
ffffffff8141b59b r __kstrtab_crypto_grab_aead
ffffffff8141b5ac r __kstrtab_crypto_lookup_aead
ffffffff8141b5bf r __kstrtab_aead_geniv_exit
ffffffff8141b5cf r __kstrtab_aead_geniv_init
ffffffff8141b5df r __kstrtab_aead_geniv_free
ffffffff8141b5ef r __kstrtab_aead_geniv_alloc
ffffffff8141b600 r __kstrtab_crypto_nivaead_type
ffffffff8141b614 r __kstrtab_crypto_aead_type
ffffffff8141b625 r __kstrtab_crypto_aead_setauthsize
ffffffff8141b63d r __kstrtab_crypto_alloc_ablkcipher
ffffffff8141b655 r __kstrtab_crypto_grab_skcipher
ffffffff8141b66a r __kstrtab_crypto_lookup_skcipher
ffffffff8141b681 r __kstrtab_crypto_givcipher_type
ffffffff8141b697 r __kstrtab_crypto_ablkcipher_type
ffffffff8141b6ae r __kstrtab_ablkcipher_walk_phys
ffffffff8141b6c3 r __kstrtab_ablkcipher_walk_done
ffffffff8141b6d8 r __kstrtab___ablkcipher_walk_complete
ffffffff8141b6f3 r __kstrtab_skcipher_geniv_exit
ffffffff8141b707 r __kstrtab_skcipher_geniv_init
ffffffff8141b71b r __kstrtab_skcipher_geniv_free
ffffffff8141b72f r __kstrtab_skcipher_geniv_alloc
ffffffff8141b744 r __kstrtab_crypto_blkcipher_type
ffffffff8141b75a r __kstrtab_blkcipher_walk_virt_block
ffffffff8141b774 r __kstrtab_blkcipher_walk_phys
ffffffff8141b788 r __kstrtab_blkcipher_walk_virt
ffffffff8141b79c r __kstrtab_blkcipher_walk_done
ffffffff8141b7b0 r __kstrtab_ahash_attr_alg
ffffffff8141b7bf r __kstrtab_crypto_init_ahash_spawn
ffffffff8141b7d7 r __kstrtab_ahash_free_instance
ffffffff8141b7eb r __kstrtab_ahash_register_instance
ffffffff8141b803 r __kstrtab_crypto_unregister_ahash
ffffffff8141b81b r __kstrtab_crypto_register_ahash
ffffffff8141b831 r __kstrtab_crypto_alloc_ahash
ffffffff8141b844 r __kstrtab_crypto_ahash_type
ffffffff8141b856 r __kstrtab_crypto_ahash_digest
ffffffff8141b86a r __kstrtab_crypto_ahash_finup
ffffffff8141b87d r __kstrtab_crypto_ahash_final
ffffffff8141b890 r __kstrtab_crypto_ahash_setkey
ffffffff8141b8a4 r __kstrtab_crypto_hash_walk_first
ffffffff8141b8bb r __kstrtab_crypto_hash_walk_done
ffffffff8141b8d1 r __kstrtab_shash_attr_alg
ffffffff8141b8e0 r __kstrtab_crypto_init_shash_spawn
ffffffff8141b8f8 r __kstrtab_shash_free_instance
ffffffff8141b90c r __kstrtab_shash_register_instance
ffffffff8141b924 r __kstrtab_crypto_unregister_shash
ffffffff8141b93c r __kstrtab_crypto_register_shash
ffffffff8141b952 r __kstrtab_crypto_alloc_shash
ffffffff8141b965 r __kstrtab_shash_ahash_digest
ffffffff8141b978 r __kstrtab_shash_ahash_finup
ffffffff8141b98a r __kstrtab_shash_ahash_update
ffffffff8141b99d r __kstrtab_crypto_shash_digest
ffffffff8141b9b1 r __kstrtab_crypto_shash_finup
ffffffff8141b9c4 r __kstrtab_crypto_shash_final
ffffffff8141b9d7 r __kstrtab_crypto_shash_update
ffffffff8141b9eb r __kstrtab_crypto_shash_setkey
ffffffff8141b9ff r __kstrtab_crypto_unregister_pcomp
ffffffff8141ba17 r __kstrtab_crypto_register_pcomp
ffffffff8141ba2d r __kstrtab_crypto_alloc_pcomp
ffffffff8141ba40 r __kstrtab_alg_test
ffffffff8141ba49 r __kstrtab_crypto_aes_set_key
ffffffff8141ba5c r __kstrtab_crypto_aes_expand_key
ffffffff8141ba72 r __kstrtab_crypto_il_tab
ffffffff8141ba80 r __kstrtab_crypto_it_tab
ffffffff8141ba8e r __kstrtab_crypto_fl_tab
ffffffff8141ba9c r __kstrtab_crypto_ft_tab
ffffffff8141baaa r __kstrtab_crypto_put_default_rng
ffffffff8141bac1 r __kstrtab_crypto_get_default_rng
ffffffff8141bad8 r __kstrtab_crypto_rng_type
ffffffff8141bae8 r __kstrtab_crypto_default_rng
ffffffff8141bafb r __kstrtab_elv_rb_latter_request
ffffffff8141bb11 r __kstrtab_elv_rb_former_request
ffffffff8141bb27 r __kstrtab_elevator_change
ffffffff8141bb37 r __kstrtab_elv_unregister
ffffffff8141bb46 r __kstrtab_elv_register
ffffffff8141bb53 r __kstrtab_elv_unregister_queue
ffffffff8141bb68 r __kstrtab_elv_register_queue
ffffffff8141bb7b r __kstrtab_elv_abort_queue
ffffffff8141bb8b r __kstrtab_elv_add_request
ffffffff8141bb9b r __kstrtab___elv_add_request
ffffffff8141bbad r __kstrtab_elv_dispatch_add_tail
ffffffff8141bbc3 r __kstrtab_elv_dispatch_sort
ffffffff8141bbd5 r __kstrtab_elv_rb_find
ffffffff8141bbe1 r __kstrtab_elv_rb_del
ffffffff8141bbec r __kstrtab_elv_rb_add
ffffffff8141bbf7 r __kstrtab_elevator_exit
ffffffff8141bc05 r __kstrtab_elevator_init
ffffffff8141bc13 r __kstrtab_elv_rq_merge_ok
ffffffff8141bc23 r __kstrtab_blk_finish_plug
ffffffff8141bc33 r __kstrtab_blk_start_plug
ffffffff8141bc42 r __kstrtab_kblockd_schedule_delayed_work
ffffffff8141bc60 r __kstrtab_kblockd_schedule_work
ffffffff8141bc76 r __kstrtab_blk_rq_prep_clone
ffffffff8141bc88 r __kstrtab_blk_rq_unprep_clone
ffffffff8141bc9c r __kstrtab_blk_lld_busy
ffffffff8141bca9 r __kstrtab___blk_end_request_err
ffffffff8141bcbf r __kstrtab___blk_end_request_cur
ffffffff8141bcd5 r __kstrtab___blk_end_request_all
ffffffff8141bceb r __kstrtab___blk_end_request
ffffffff8141bcfd r __kstrtab_blk_end_request_err
ffffffff8141bd11 r __kstrtab_blk_end_request_cur
ffffffff8141bd25 r __kstrtab_blk_end_request_all
ffffffff8141bd39 r __kstrtab_blk_end_request
ffffffff8141bd49 r __kstrtab_blk_unprep_request
ffffffff8141bd5c r __kstrtab_blk_update_request
ffffffff8141bd6f r __kstrtab_blk_fetch_request
ffffffff8141bd81 r __kstrtab_blk_start_request
ffffffff8141bd93 r __kstrtab_blk_peek_request
ffffffff8141bda4 r __kstrtab_blk_rq_err_bytes
ffffffff8141bdb5 r __kstrtab_blk_insert_cloned_request
ffffffff8141bdcf r __kstrtab_blk_rq_check_limits
ffffffff8141bde3 r __kstrtab_submit_bio
ffffffff8141bdee r __kstrtab_generic_make_request
ffffffff8141be03 r __kstrtab_blk_queue_bio
ffffffff8141be11 r __kstrtab_blk_add_request_payload
ffffffff8141be29 r __kstrtab_blk_put_request
ffffffff8141be39 r __kstrtab___blk_put_request
ffffffff8141be4b r __kstrtab_part_round_stats
ffffffff8141be5c r __kstrtab_blk_requeue_request
ffffffff8141be70 r __kstrtab_blk_make_request
ffffffff8141be81 r __kstrtab_blk_get_request
ffffffff8141be91 r __kstrtab_blk_get_queue
ffffffff8141be9f r __kstrtab_blk_init_allocated_queue
ffffffff8141beb8 r __kstrtab_blk_init_queue_node
ffffffff8141becc r __kstrtab_blk_init_queue
ffffffff8141bedb r __kstrtab_blk_alloc_queue_node
ffffffff8141bef0 r __kstrtab_blk_alloc_queue
ffffffff8141bf00 r __kstrtab_blk_cleanup_queue
ffffffff8141bf12 r __kstrtab_blk_put_queue
ffffffff8141bf20 r __kstrtab_blk_run_queue
ffffffff8141bf2e r __kstrtab_blk_run_queue_async
ffffffff8141bf42 r __kstrtab___blk_run_queue
ffffffff8141bf52 r __kstrtab_blk_sync_queue
ffffffff8141bf61 r __kstrtab_blk_stop_queue
ffffffff8141bf70 r __kstrtab_blk_start_queue
ffffffff8141bf80 r __kstrtab_blk_delay_queue
ffffffff8141bf90 r __kstrtab_blk_dump_rq_flags
ffffffff8141bfa2 r __kstrtab_blk_rq_init
ffffffff8141bfae r __kstrtab_blk_get_backing_dev_info
ffffffff8141bfc7 r __kstrtab_blk_queue_invalidate_tags
ffffffff8141bfe1 r __kstrtab_blk_queue_start_tag
ffffffff8141bff5 r __kstrtab_blk_queue_end_tag
ffffffff8141c007 r __kstrtab_blk_queue_resize_tags
ffffffff8141c01d r __kstrtab_blk_queue_init_tags
ffffffff8141c031 r __kstrtab_blk_init_tags
ffffffff8141c03f r __kstrtab_blk_queue_free_tags
ffffffff8141c053 r __kstrtab_blk_free_tags
ffffffff8141c061 r __kstrtab_blk_queue_find_tag
ffffffff8141c074 r __kstrtab_blkdev_issue_flush
ffffffff8141c087 r __kstrtab_blk_queue_flush_queueable
ffffffff8141c0a1 r __kstrtab_blk_queue_flush
ffffffff8141c0b1 r __kstrtab_blk_queue_update_dma_alignment
ffffffff8141c0d0 r __kstrtab_blk_queue_dma_alignment
ffffffff8141c0e8 r __kstrtab_blk_queue_segment_boundary
ffffffff8141c103 r __kstrtab_blk_queue_dma_drain
ffffffff8141c117 r __kstrtab_blk_queue_update_dma_pad
ffffffff8141c130 r __kstrtab_blk_queue_dma_pad
ffffffff8141c142 r __kstrtab_disk_stack_limits
ffffffff8141c154 r __kstrtab_bdev_stack_limits
ffffffff8141c166 r __kstrtab_blk_stack_limits
ffffffff8141c177 r __kstrtab_blk_queue_stack_limits
ffffffff8141c18e r __kstrtab_blk_queue_io_opt
ffffffff8141c19f r __kstrtab_blk_limits_io_opt
ffffffff8141c1b1 r __kstrtab_blk_queue_io_min
ffffffff8141c1c2 r __kstrtab_blk_limits_io_min
ffffffff8141c1d4 r __kstrtab_blk_queue_alignment_offset
ffffffff8141c1ef r __kstrtab_blk_queue_physical_block_size
ffffffff8141c20d r __kstrtab_blk_queue_logical_block_size
ffffffff8141c22a r __kstrtab_blk_queue_max_segment_size
ffffffff8141c245 r __kstrtab_blk_queue_max_segments
ffffffff8141c25c r __kstrtab_blk_queue_max_discard_sectors
ffffffff8141c27a r __kstrtab_blk_queue_max_hw_sectors
ffffffff8141c293 r __kstrtab_blk_limits_max_hw_sectors
ffffffff8141c2ad r __kstrtab_blk_queue_bounce_limit
ffffffff8141c2c4 r __kstrtab_blk_queue_make_request
ffffffff8141c2db r __kstrtab_blk_set_stacking_limits
ffffffff8141c2f3 r __kstrtab_blk_set_default_limits
ffffffff8141c30a r __kstrtab_blk_queue_lld_busy
ffffffff8141c31d r __kstrtab_blk_queue_rq_timed_out
ffffffff8141c334 r __kstrtab_blk_queue_rq_timeout
ffffffff8141c349 r __kstrtab_blk_queue_softirq_done
ffffffff8141c360 r __kstrtab_blk_queue_merge_bvec
ffffffff8141c375 r __kstrtab_blk_queue_unprep_rq
ffffffff8141c389 r __kstrtab_blk_queue_prep_rq
ffffffff8141c39b r __kstrtab_blk_max_low_pfn
ffffffff8141c3ab r __kstrtab_icq_get_changed
ffffffff8141c3bb r __kstrtab_ioc_cgroup_changed
ffffffff8141c3ce r __kstrtab_ioc_lookup_icq
ffffffff8141c3dd r __kstrtab_get_task_io_context
ffffffff8141c3f1 r __kstrtab_put_io_context
ffffffff8141c400 r __kstrtab_get_io_context
ffffffff8141c40f r __kstrtab_blk_rq_map_kern
ffffffff8141c41f r __kstrtab_blk_rq_unmap_user
ffffffff8141c431 r __kstrtab_blk_rq_map_user_iov
ffffffff8141c445 r __kstrtab_blk_rq_map_user
ffffffff8141c455 r __kstrtab_blk_execute_rq
ffffffff8141c464 r __kstrtab_blk_execute_rq_nowait
ffffffff8141c47a r __kstrtab_blk_rq_map_sg
ffffffff8141c488 r __kstrtab_blk_recount_segments
ffffffff8141c49d r __kstrtab_blk_complete_request
ffffffff8141c4b2 r __kstrtab_blk_abort_queue
ffffffff8141c4c2 r __kstrtab_blk_abort_request
ffffffff8141c4d4 r __kstrtab_blk_iopoll_init
ffffffff8141c4e4 r __kstrtab_blk_iopoll_enable
ffffffff8141c4f6 r __kstrtab_blk_iopoll_disable
ffffffff8141c509 r __kstrtab_blk_iopoll_complete
ffffffff8141c51d r __kstrtab___blk_iopoll_complete
ffffffff8141c533 r __kstrtab_blk_iopoll_sched
ffffffff8141c544 r __kstrtab_blk_iopoll_enabled
ffffffff8141c557 r __kstrtab_blkdev_issue_zeroout
ffffffff8141c56c r __kstrtab_blkdev_issue_discard
ffffffff8141c581 r __kstrtab_blkdev_ioctl
ffffffff8141c58e r __kstrtab___blkdev_driver_ioctl
ffffffff8141c5a4 r __kstrtab_invalidate_partition
ffffffff8141c5b9 r __kstrtab_bdev_read_only
ffffffff8141c5c8 r __kstrtab_set_disk_ro
ffffffff8141c5d4 r __kstrtab_set_device_ro
ffffffff8141c5e2 r __kstrtab_put_disk
ffffffff8141c5eb r __kstrtab_get_disk
ffffffff8141c5f4 r __kstrtab_alloc_disk_node
ffffffff8141c604 r __kstrtab_alloc_disk
ffffffff8141c60f r __kstrtab_blk_lookup_devt
ffffffff8141c61f r __kstrtab_bdget_disk
ffffffff8141c62a r __kstrtab_get_gendisk
ffffffff8141c636 r __kstrtab_del_gendisk
ffffffff8141c642 r __kstrtab_add_disk
ffffffff8141c64b r __kstrtab_blk_unregister_region
ffffffff8141c661 r __kstrtab_blk_register_region
ffffffff8141c675 r __kstrtab_unregister_blkdev
ffffffff8141c687 r __kstrtab_register_blkdev
ffffffff8141c697 r __kstrtab_disk_map_sector_rcu
ffffffff8141c6ab r __kstrtab_disk_part_iter_exit
ffffffff8141c6bf r __kstrtab_disk_part_iter_next
ffffffff8141c6d3 r __kstrtab_disk_part_iter_init
ffffffff8141c6e7 r __kstrtab_disk_get_part
ffffffff8141c6f5 r __kstrtab_scsi_cmd_blk_ioctl
ffffffff8141c708 r __kstrtab_scsi_verify_blk_ioctl
ffffffff8141c71e r __kstrtab_scsi_cmd_ioctl
ffffffff8141c72d r __kstrtab_sg_scsi_ioctl
ffffffff8141c73b r __kstrtab_blk_verify_command
ffffffff8141c74e r __kstrtab_scsi_command_size_tbl
ffffffff8141c764 r __kstrtab_read_dev_sector
ffffffff8141c774 r __kstrtab___bdevname
ffffffff8141c77f r __kstrtab_bdevname
ffffffff8141c788 r __kstrtab_bsg_register_queue
ffffffff8141c79b r __kstrtab_bsg_unregister_queue
ffffffff8141c7b0 r __kstrtab_bsg_remove_queue
ffffffff8141c7c1 r __kstrtab_bsg_setup_queue
ffffffff8141c7d1 r __kstrtab_bsg_request_fn
ffffffff8141c7e0 r __kstrtab_bsg_goose_queue
ffffffff8141c7f0 r __kstrtab_bsg_job_done
ffffffff8141c7fd r __kstrtab_blkio_policy_unregister
ffffffff8141c815 r __kstrtab_blkio_policy_register
ffffffff8141c82b r __kstrtab_blkcg_get_weight
ffffffff8141c83c r __kstrtab_blkiocg_lookup_group
ffffffff8141c851 r __kstrtab_blkiocg_del_blkio_group
ffffffff8141c869 r __kstrtab_blkiocg_add_blkio_group
ffffffff8141c881 r __kstrtab_blkio_alloc_blkg_stats
ffffffff8141c898 r __kstrtab_blkiocg_update_io_merged_stats
ffffffff8141c8b7 r __kstrtab_blkiocg_update_completion_stats
ffffffff8141c8d7 r __kstrtab_blkiocg_update_dispatch_stats
ffffffff8141c8f5 r __kstrtab_blkiocg_update_timeslice_used
ffffffff8141c913 r __kstrtab_blkiocg_update_io_remove_stats
ffffffff8141c932 r __kstrtab_blkiocg_update_io_add_stats
ffffffff8141c94e r __kstrtab_task_blkio_cgroup
ffffffff8141c960 r __kstrtab_cgroup_to_blkio_cgroup
ffffffff8141c977 r __kstrtab_blkio_subsys
ffffffff8141c984 r __kstrtab_blkio_root_cgroup
ffffffff8141c996 r __kstrtab_argv_split
ffffffff8141c9a1 r __kstrtab_argv_free
ffffffff8141c9ab r __kstrtab_get_options
ffffffff8141c9b7 r __kstrtab_get_option
ffffffff8141c9c2 r __kstrtab_memparse
ffffffff8141c9cb r __kstrtab_cpumask_next_and
ffffffff8141c9dc r __kstrtab___next_cpu
ffffffff8141c9e7 r __kstrtab___first_cpu
ffffffff8141c9f3 r __kstrtab__ctype
ffffffff8141c9fa r __kstrtab__atomic_dec_and_lock
ffffffff8141ca0f r __kstrtab_ida_init
ffffffff8141ca18 r __kstrtab_ida_simple_remove
ffffffff8141ca2a r __kstrtab_ida_simple_get
ffffffff8141ca39 r __kstrtab_ida_destroy
ffffffff8141ca45 r __kstrtab_ida_remove
ffffffff8141ca50 r __kstrtab_ida_get_new
ffffffff8141ca5c r __kstrtab_ida_get_new_above
ffffffff8141ca6e r __kstrtab_ida_pre_get
ffffffff8141ca7a r __kstrtab_idr_init
ffffffff8141ca83 r __kstrtab_idr_replace
ffffffff8141ca8f r __kstrtab_idr_get_next
ffffffff8141ca9c r __kstrtab_idr_for_each
ffffffff8141caa9 r __kstrtab_idr_find
ffffffff8141cab2 r __kstrtab_idr_destroy
ffffffff8141cabe r __kstrtab_idr_remove_all
ffffffff8141cacd r __kstrtab_idr_remove
ffffffff8141cad8 r __kstrtab_idr_get_new
ffffffff8141cae4 r __kstrtab_idr_get_new_above
ffffffff8141caf6 r __kstrtab_idr_pre_get
ffffffff8141cb02 r __kstrtab_int_sqrt
ffffffff8141cb0b r __kstrtab_ioremap_page_range
ffffffff8141cb1e r __kstrtab_kset_unregister
ffffffff8141cb2e r __kstrtab_kset_register
ffffffff8141cb3c r __kstrtab_kobject_del
ffffffff8141cb48 r __kstrtab_kobject_put
ffffffff8141cb54 r __kstrtab_kobject_get
ffffffff8141cb60 r __kstrtab_kset_create_and_add
ffffffff8141cb74 r __kstrtab_kobject_create_and_add
ffffffff8141cb8b r __kstrtab_kobject_rename
ffffffff8141cb9a r __kstrtab_kobject_init_and_add
ffffffff8141cbaf r __kstrtab_kobject_add
ffffffff8141cbbb r __kstrtab_kobject_init
ffffffff8141cbc8 r __kstrtab_kobject_set_name
ffffffff8141cbd9 r __kstrtab_kobject_get_path
ffffffff8141cbea r __kstrtab_add_uevent_var
ffffffff8141cbf9 r __kstrtab_kobject_uevent
ffffffff8141cc08 r __kstrtab_kobject_uevent_env
ffffffff8141cc1b r __kstrtab_radix_tree_tagged
ffffffff8141cc2d r __kstrtab_radix_tree_delete
ffffffff8141cc3f r __kstrtab_radix_tree_gang_lookup_tag_slot
ffffffff8141cc5f r __kstrtab_radix_tree_gang_lookup_tag
ffffffff8141cc7a r __kstrtab_radix_tree_gang_lookup_slot
ffffffff8141cc96 r __kstrtab_radix_tree_gang_lookup
ffffffff8141ccad r __kstrtab_radix_tree_prev_hole
ffffffff8141ccc2 r __kstrtab_radix_tree_next_hole
ffffffff8141ccd7 r __kstrtab_radix_tree_range_tag_if_tagged
ffffffff8141ccf6 r __kstrtab_radix_tree_next_chunk
ffffffff8141cd0c r __kstrtab_radix_tree_tag_get
ffffffff8141cd1f r __kstrtab_radix_tree_tag_clear
ffffffff8141cd34 r __kstrtab_radix_tree_tag_set
ffffffff8141cd47 r __kstrtab_radix_tree_lookup
ffffffff8141cd59 r __kstrtab_radix_tree_lookup_slot
ffffffff8141cd70 r __kstrtab_radix_tree_insert
ffffffff8141cd82 r __kstrtab_radix_tree_preload
ffffffff8141cd95 r __kstrtab____ratelimit
ffffffff8141cda2 r __kstrtab_rb_replace_node
ffffffff8141cdb2 r __kstrtab_rb_prev
ffffffff8141cdba r __kstrtab_rb_next
ffffffff8141cdc2 r __kstrtab_rb_last
ffffffff8141cdca r __kstrtab_rb_first
ffffffff8141cdd3 r __kstrtab_rb_augment_erase_end
ffffffff8141cde8 r __kstrtab_rb_augment_erase_begin
ffffffff8141cdff r __kstrtab_rb_augment_insert
ffffffff8141ce11 r __kstrtab_rb_erase
ffffffff8141ce1a r __kstrtab_rb_insert_color
ffffffff8141ce2a r __kstrtab_reciprocal_value
ffffffff8141ce3b r __kstrtab_rwsem_downgrade_wake
ffffffff8141ce50 r __kstrtab_rwsem_wake
ffffffff8141ce5b r __kstrtab_rwsem_down_write_failed
ffffffff8141ce73 r __kstrtab_rwsem_down_read_failed
ffffffff8141ce8a r __kstrtab___init_rwsem
ffffffff8141ce97 r __kstrtab_memchr_inv
ffffffff8141cea2 r __kstrtab_memchr
ffffffff8141cea9 r __kstrtab_strnstr
ffffffff8141ceb1 r __kstrtab_strstr
ffffffff8141ceb8 r __kstrtab_memscan
ffffffff8141cec0 r __kstrtab_memcmp
ffffffff8141cec7 r __kstrtab_strtobool
ffffffff8141ced1 r __kstrtab_sysfs_streq
ffffffff8141cedd r __kstrtab_strsep
ffffffff8141cee4 r __kstrtab_strpbrk
ffffffff8141ceec r __kstrtab_strcspn
ffffffff8141cef4 r __kstrtab_strspn
ffffffff8141cefb r __kstrtab_strnlen
ffffffff8141cf03 r __kstrtab_strlen
ffffffff8141cf0a r __kstrtab_strim
ffffffff8141cf10 r __kstrtab_skip_spaces
ffffffff8141cf1c r __kstrtab_strnchr
ffffffff8141cf24 r __kstrtab_strrchr
ffffffff8141cf2c r __kstrtab_strchr
ffffffff8141cf33 r __kstrtab_strncmp
ffffffff8141cf3b r __kstrtab_strcmp
ffffffff8141cf42 r __kstrtab_strlcat
ffffffff8141cf4a r __kstrtab_strncat
ffffffff8141cf52 r __kstrtab_strcat
ffffffff8141cf59 r __kstrtab_strlcpy
ffffffff8141cf61 r __kstrtab_strncpy
ffffffff8141cf69 r __kstrtab_strcpy
ffffffff8141cf70 r __kstrtab_strncasecmp
ffffffff8141cf7c r __kstrtab_strcasecmp
ffffffff8141cf87 r __kstrtab_strnicmp
ffffffff8141cf90 r __kstrtab_timerqueue_iterate_next
ffffffff8141cfa8 r __kstrtab_timerqueue_del
ffffffff8141cfb7 r __kstrtab_timerqueue_add
ffffffff8141cfc6 r __kstrtab_sscanf
ffffffff8141cfcd r __kstrtab_vsscanf
ffffffff8141cfd5 r __kstrtab_sprintf
ffffffff8141cfdd r __kstrtab_vsprintf
ffffffff8141cfe6 r __kstrtab_scnprintf
ffffffff8141cff0 r __kstrtab_snprintf
ffffffff8141cff9 r __kstrtab_vscnprintf
ffffffff8141d004 r __kstrtab_vsnprintf
ffffffff8141d00e r __kstrtab_simple_strtoll
ffffffff8141d01d r __kstrtab_simple_strtol
ffffffff8141d02b r __kstrtab_simple_strtoul
ffffffff8141d03a r __kstrtab_simple_strtoull
ffffffff8141d04a r __kstrtab_ip_compute_csum
ffffffff8141d05a r __kstrtab___ndelay
ffffffff8141d063 r __kstrtab___udelay
ffffffff8141d06c r __kstrtab___const_udelay
ffffffff8141d07b r __kstrtab___delay
ffffffff8141d083 r __kstrtab_strncpy_from_user
ffffffff8141d095 r __kstrtab_copy_from_user_nmi
ffffffff8141d0a8 r __kstrtab_copy_in_user
ffffffff8141d0b5 r __kstrtab_strlen_user
ffffffff8141d0c1 r __kstrtab_strnlen_user
ffffffff8141d0ce r __kstrtab___strnlen_user
ffffffff8141d0dd r __kstrtab_clear_user
ffffffff8141d0e8 r __kstrtab___clear_user
ffffffff8141d0f5 r __kstrtab_bin2bcd
ffffffff8141d0fd r __kstrtab_bcd2bin
ffffffff8141d105 r __kstrtab_iter_div_u64_rem
ffffffff8141d116 r __kstrtab_sort
ffffffff8141d11b r __kstrtab_match_strdup
ffffffff8141d128 r __kstrtab_match_strlcpy
ffffffff8141d136 r __kstrtab_match_hex
ffffffff8141d140 r __kstrtab_match_octal
ffffffff8141d14c r __kstrtab_match_int
ffffffff8141d156 r __kstrtab_match_token
ffffffff8141d162 r __kstrtab_half_md4_transform
ffffffff8141d175 r __kstrtab_debug_locks
ffffffff8141d181 r __kstrtab_srandom32
ffffffff8141d18b r __kstrtab_random32
ffffffff8141d194 r __kstrtab_prandom32
ffffffff8141d19e r __kstrtab_print_hex_dump_bytes
ffffffff8141d1b3 r __kstrtab_print_hex_dump
ffffffff8141d1c2 r __kstrtab_hex_dump_to_buffer
ffffffff8141d1d5 r __kstrtab_hex2bin
ffffffff8141d1dd r __kstrtab_hex_to_bin
ffffffff8141d1e8 r __kstrtab_hex_asc
ffffffff8141d1f0 r __kstrtab_kasprintf
ffffffff8141d1fa r __kstrtab_kvasprintf
ffffffff8141d205 r __kstrtab_bitmap_copy_le
ffffffff8141d214 r __kstrtab_bitmap_allocate_region
ffffffff8141d22b r __kstrtab_bitmap_release_region
ffffffff8141d241 r __kstrtab_bitmap_find_free_region
ffffffff8141d259 r __kstrtab_bitmap_fold
ffffffff8141d265 r __kstrtab_bitmap_onto
ffffffff8141d271 r __kstrtab_bitmap_bitremap
ffffffff8141d281 r __kstrtab_bitmap_remap
ffffffff8141d28e r __kstrtab_bitmap_parselist_user
ffffffff8141d2a4 r __kstrtab_bitmap_parselist
ffffffff8141d2b5 r __kstrtab_bitmap_scnlistprintf
ffffffff8141d2ca r __kstrtab_bitmap_parse_user
ffffffff8141d2dc r __kstrtab___bitmap_parse
ffffffff8141d2eb r __kstrtab_bitmap_scnprintf
ffffffff8141d2fc r __kstrtab_bitmap_find_next_zero_area
ffffffff8141d317 r __kstrtab_bitmap_clear
ffffffff8141d324 r __kstrtab_bitmap_set
ffffffff8141d32f r __kstrtab___bitmap_weight
ffffffff8141d33f r __kstrtab___bitmap_subset
ffffffff8141d34f r __kstrtab___bitmap_intersects
ffffffff8141d363 r __kstrtab___bitmap_andnot
ffffffff8141d373 r __kstrtab___bitmap_xor
ffffffff8141d380 r __kstrtab___bitmap_or
ffffffff8141d38c r __kstrtab___bitmap_and
ffffffff8141d399 r __kstrtab___bitmap_shift_left
ffffffff8141d3ad r __kstrtab___bitmap_shift_right
ffffffff8141d3c2 r __kstrtab___bitmap_complement
ffffffff8141d3d6 r __kstrtab___bitmap_equal
ffffffff8141d3e5 r __kstrtab___bitmap_full
ffffffff8141d3f3 r __kstrtab___bitmap_empty
ffffffff8141d402 r __kstrtab_sg_copy_to_buffer
ffffffff8141d414 r __kstrtab_sg_copy_from_buffer
ffffffff8141d428 r __kstrtab_sg_miter_stop
ffffffff8141d436 r __kstrtab_sg_miter_next
ffffffff8141d444 r __kstrtab_sg_miter_start
ffffffff8141d453 r __kstrtab_sg_alloc_table
ffffffff8141d462 r __kstrtab___sg_alloc_table
ffffffff8141d473 r __kstrtab_sg_free_table
ffffffff8141d481 r __kstrtab___sg_free_table
ffffffff8141d491 r __kstrtab_sg_init_one
ffffffff8141d49d r __kstrtab_sg_init_table
ffffffff8141d4ab r __kstrtab_sg_last
ffffffff8141d4b3 r __kstrtab_sg_next
ffffffff8141d4bb r __kstrtab_string_get_size
ffffffff8141d4cb r __kstrtab_gcd
ffffffff8141d4cf r __kstrtab_lcm
ffffffff8141d4d3 r __kstrtab_list_sort
ffffffff8141d4dd r __kstrtab_uuid_be_gen
ffffffff8141d4e9 r __kstrtab_uuid_le_gen
ffffffff8141d4f5 r __kstrtab_flex_array_shrink
ffffffff8141d507 r __kstrtab_flex_array_get_ptr
ffffffff8141d51a r __kstrtab_flex_array_get
ffffffff8141d529 r __kstrtab_flex_array_prealloc
ffffffff8141d53d r __kstrtab_flex_array_clear
ffffffff8141d54e r __kstrtab_flex_array_put
ffffffff8141d55d r __kstrtab_flex_array_free
ffffffff8141d56d r __kstrtab_flex_array_free_parts
ffffffff8141d583 r __kstrtab_flex_array_alloc
ffffffff8141d594 r __kstrtab_bsearch
ffffffff8141d59c r __kstrtab_find_last_bit
ffffffff8141d5aa r __kstrtab_find_first_zero_bit
ffffffff8141d5be r __kstrtab_find_first_bit
ffffffff8141d5cd r __kstrtab_find_next_zero_bit
ffffffff8141d5e0 r __kstrtab_find_next_bit
ffffffff8141d5ee r __kstrtab_llist_del_first
ffffffff8141d5fe r __kstrtab_llist_add_batch
ffffffff8141d60e r __kstrtab_kstrtos8_from_user
ffffffff8141d621 r __kstrtab_kstrtou8_from_user
ffffffff8141d634 r __kstrtab_kstrtos16_from_user
ffffffff8141d648 r __kstrtab_kstrtou16_from_user
ffffffff8141d65c r __kstrtab_kstrtoint_from_user
ffffffff8141d670 r __kstrtab_kstrtouint_from_user
ffffffff8141d685 r __kstrtab_kstrtol_from_user
ffffffff8141d697 r __kstrtab_kstrtoul_from_user
ffffffff8141d6aa r __kstrtab_kstrtoll_from_user
ffffffff8141d6bd r __kstrtab_kstrtoull_from_user
ffffffff8141d6d1 r __kstrtab_kstrtos8
ffffffff8141d6da r __kstrtab_kstrtou8
ffffffff8141d6e3 r __kstrtab_kstrtos16
ffffffff8141d6ed r __kstrtab_kstrtou16
ffffffff8141d6f7 r __kstrtab_kstrtoint
ffffffff8141d701 r __kstrtab_kstrtouint
ffffffff8141d70c r __kstrtab__kstrtol
ffffffff8141d715 r __kstrtab__kstrtoul
ffffffff8141d71f r __kstrtab_kstrtoll
ffffffff8141d728 r __kstrtab_kstrtoull
ffffffff8141d732 r __kstrtab_pci_iounmap
ffffffff8141d73e r __kstrtab_ioport_unmap
ffffffff8141d74b r __kstrtab_ioport_map
ffffffff8141d756 r __kstrtab_iowrite32_rep
ffffffff8141d764 r __kstrtab_iowrite16_rep
ffffffff8141d772 r __kstrtab_iowrite8_rep
ffffffff8141d77f r __kstrtab_ioread32_rep
ffffffff8141d78c r __kstrtab_ioread16_rep
ffffffff8141d799 r __kstrtab_ioread8_rep
ffffffff8141d7a5 r __kstrtab_iowrite32be
ffffffff8141d7b1 r __kstrtab_iowrite32
ffffffff8141d7bb r __kstrtab_iowrite16be
ffffffff8141d7c7 r __kstrtab_iowrite16
ffffffff8141d7d1 r __kstrtab_iowrite8
ffffffff8141d7da r __kstrtab_ioread32be
ffffffff8141d7e5 r __kstrtab_ioread32
ffffffff8141d7ee r __kstrtab_ioread16be
ffffffff8141d7f9 r __kstrtab_ioread16
ffffffff8141d802 r __kstrtab_ioread8
ffffffff8141d80a r __kstrtab_pci_iomap
ffffffff8141d814 r __kstrtab___iowrite64_copy
ffffffff8141d825 r __kstrtab___iowrite32_copy
ffffffff8141d836 r __kstrtab_pcim_iounmap_regions
ffffffff8141d84b r __kstrtab_pcim_iomap_regions_request_all
ffffffff8141d86a r __kstrtab_pcim_iomap_regions
ffffffff8141d87d r __kstrtab_pcim_iounmap
ffffffff8141d88a r __kstrtab_pcim_iomap
ffffffff8141d895 r __kstrtab_pcim_iomap_table
ffffffff8141d8a6 r __kstrtab_devm_ioport_unmap
ffffffff8141d8b8 r __kstrtab_devm_ioport_map
ffffffff8141d8c8 r __kstrtab_devm_request_and_ioremap
ffffffff8141d8e1 r __kstrtab_devm_iounmap
ffffffff8141d8ee r __kstrtab_devm_ioremap_nocache
ffffffff8141d903 r __kstrtab_devm_ioremap
ffffffff8141d910 r __kstrtab___sw_hweight64
ffffffff8141d91f r __kstrtab___sw_hweight8
ffffffff8141d92d r __kstrtab___sw_hweight16
ffffffff8141d93c r __kstrtab___sw_hweight32
ffffffff8141d94b r __kstrtab_bitrev32
ffffffff8141d954 r __kstrtab_bitrev16
ffffffff8141d95d r __kstrtab_byte_rev_table
ffffffff8141d96c r __kstrtab_crc16
ffffffff8141d972 r __kstrtab_crc16_table
ffffffff8141d97e r __kstrtab_crc32_be
ffffffff8141d987 r __kstrtab___crc32c_le
ffffffff8141d993 r __kstrtab_crc32_le
ffffffff8141d99c r __kstrtab_gen_pool_size
ffffffff8141d9aa r __kstrtab_gen_pool_avail
ffffffff8141d9b9 r __kstrtab_gen_pool_for_each_chunk
ffffffff8141d9d1 r __kstrtab_gen_pool_free
ffffffff8141d9df r __kstrtab_gen_pool_alloc
ffffffff8141d9ee r __kstrtab_gen_pool_destroy
ffffffff8141d9ff r __kstrtab_gen_pool_virt_to_phys
ffffffff8141da15 r __kstrtab_gen_pool_add_virt
ffffffff8141da27 r __kstrtab_gen_pool_create
ffffffff8141da37 r __kstrtab_zlib_inflate_blob
ffffffff8141da49 r __kstrtab_zlib_inflateIncomp
ffffffff8141da5c r __kstrtab_zlib_inflateReset
ffffffff8141da6e r __kstrtab_zlib_inflateEnd
ffffffff8141da7e r __kstrtab_zlib_inflateInit2
ffffffff8141da90 r __kstrtab_zlib_inflate
ffffffff8141da9d r __kstrtab_zlib_inflate_workspacesize
ffffffff8141dab8 r __kstrtab_lzo1x_decompress_safe
ffffffff8141dace r __kstrtab_xz_dec_end
ffffffff8141dad9 r __kstrtab_xz_dec_run
ffffffff8141dae4 r __kstrtab_xz_dec_reset
ffffffff8141daf1 r __kstrtab_xz_dec_init
ffffffff8141dafd r __kstrtab_percpu_counter_compare
ffffffff8141db14 r __kstrtab_percpu_counter_batch
ffffffff8141db29 r __kstrtab_percpu_counter_destroy
ffffffff8141db40 r __kstrtab___percpu_counter_init
ffffffff8141db56 r __kstrtab___percpu_counter_sum
ffffffff8141db6b r __kstrtab___percpu_counter_add
ffffffff8141db80 r __kstrtab_percpu_counter_set
ffffffff8141db93 r __kstrtab_swiotlb_dma_supported
ffffffff8141dba9 r __kstrtab_swiotlb_dma_mapping_error
ffffffff8141dbc3 r __kstrtab_swiotlb_sync_sg_for_device
ffffffff8141dbde r __kstrtab_swiotlb_sync_sg_for_cpu
ffffffff8141dbf6 r __kstrtab_swiotlb_unmap_sg
ffffffff8141dc07 r __kstrtab_swiotlb_unmap_sg_attrs
ffffffff8141dc1e r __kstrtab_swiotlb_map_sg
ffffffff8141dc2d r __kstrtab_swiotlb_map_sg_attrs
ffffffff8141dc42 r __kstrtab_swiotlb_sync_single_for_device
ffffffff8141dc61 r __kstrtab_swiotlb_sync_single_for_cpu
ffffffff8141dc7d r __kstrtab_swiotlb_unmap_page
ffffffff8141dc90 r __kstrtab_swiotlb_map_page
ffffffff8141dca1 r __kstrtab_swiotlb_free_coherent
ffffffff8141dcb7 r __kstrtab_swiotlb_alloc_coherent
ffffffff8141dcce r __kstrtab_swiotlb_tbl_sync_single
ffffffff8141dce6 r __kstrtab_swiotlb_tbl_unmap_single
ffffffff8141dcff r __kstrtab_swiotlb_tbl_map_single
ffffffff8141dd16 r __kstrtab_swiotlb_bounce
ffffffff8141dd25 r __kstrtab_swiotlb_nr_tbl
ffffffff8141dd34 r __kstrtab_iommu_area_alloc
ffffffff8141dd45 r __kstrtab_task_current_syscall
ffffffff8141dd5a r __kstrtab_nla_strcmp
ffffffff8141dd65 r __kstrtab_nla_memcmp
ffffffff8141dd70 r __kstrtab_nla_memcpy
ffffffff8141dd7b r __kstrtab_nla_strlcpy
ffffffff8141dd87 r __kstrtab_nla_find
ffffffff8141dd90 r __kstrtab_nla_parse
ffffffff8141dd9a r __kstrtab_nla_policy_len
ffffffff8141dda9 r __kstrtab_nla_validate
ffffffff8141ddb6 r __kstrtab_nla_append
ffffffff8141ddc1 r __kstrtab_nla_put_nohdr
ffffffff8141ddcf r __kstrtab_nla_put
ffffffff8141ddd7 r __kstrtab___nla_put_nohdr
ffffffff8141dde7 r __kstrtab___nla_put
ffffffff8141ddf1 r __kstrtab_nla_reserve_nohdr
ffffffff8141de03 r __kstrtab_nla_reserve
ffffffff8141de0f r __kstrtab___nla_reserve_nohdr
ffffffff8141de23 r __kstrtab___nla_reserve
ffffffff8141de31 r __kstrtab_irq_cpu_rmap_add
ffffffff8141de42 r __kstrtab_free_irq_cpu_rmap
ffffffff8141de54 r __kstrtab_cpu_rmap_update
ffffffff8141de64 r __kstrtab_cpu_rmap_add
ffffffff8141de71 r __kstrtab_alloc_cpu_rmap
ffffffff8141de80 r __kstrtab_dql_init
ffffffff8141de89 r __kstrtab_dql_reset
ffffffff8141de93 r __kstrtab_dql_completed
ffffffff8141dea1 r __kstrtab_wrmsr_safe_regs_on_cpu
ffffffff8141deb8 r __kstrtab_rdmsr_safe_regs_on_cpu
ffffffff8141decf r __kstrtab_wrmsr_safe_on_cpu
ffffffff8141dee1 r __kstrtab_rdmsr_safe_on_cpu
ffffffff8141def3 r __kstrtab_wrmsr_on_cpus
ffffffff8141df01 r __kstrtab_rdmsr_on_cpus
ffffffff8141df0f r __kstrtab_wrmsr_on_cpu
ffffffff8141df1c r __kstrtab_rdmsr_on_cpu
ffffffff8141df29 r __kstrtab_wbinvd_on_all_cpus
ffffffff8141df3c r __kstrtab_wbinvd_on_cpu
ffffffff8141df4a r __kstrtab_msrs_free
ffffffff8141df54 r __kstrtab_msrs_alloc
ffffffff8141df5f r __kstrtab_native_wrmsr_safe_regs
ffffffff8141df76 r __kstrtab_native_rdmsr_safe_regs
ffffffff8141df8d r __kstrtab_pci_cfg_access_unlock
ffffffff8141dfa3 r __kstrtab_pci_cfg_access_trylock
ffffffff8141dfba r __kstrtab_pci_cfg_access_lock
ffffffff8141dfce r __kstrtab_pci_vpd_truncate
ffffffff8141dfdf r __kstrtab_pci_write_vpd
ffffffff8141dfed r __kstrtab_pci_read_vpd
ffffffff8141dffa r __kstrtab_pci_bus_set_ops
ffffffff8141e00a r __kstrtab_pci_bus_write_config_dword
ffffffff8141e025 r __kstrtab_pci_bus_write_config_word
ffffffff8141e03f r __kstrtab_pci_bus_write_config_byte
ffffffff8141e059 r __kstrtab_pci_bus_read_config_dword
ffffffff8141e073 r __kstrtab_pci_bus_read_config_word
ffffffff8141e08c r __kstrtab_pci_bus_read_config_byte
ffffffff8141e0a5 r __kstrtab_pci_enable_bridges
ffffffff8141e0b8 r __kstrtab_pci_bus_add_devices
ffffffff8141e0cc r __kstrtab_pci_bus_add_device
ffffffff8141e0df r __kstrtab_pci_bus_alloc_resource
ffffffff8141e0f6 r __kstrtab_pci_walk_bus
ffffffff8141e103 r __kstrtab_pci_bus_resource_n
ffffffff8141e116 r __kstrtab_pci_free_resource_list
ffffffff8141e12d r __kstrtab_pci_add_resource
ffffffff8141e13e r __kstrtab_pci_add_resource_offset
ffffffff8141e156 r __kstrtab_pci_scan_child_bus
ffffffff8141e169 r __kstrtab_pci_scan_bridge
ffffffff8141e179 r __kstrtab_pci_scan_slot
ffffffff8141e187 r __kstrtab_pci_add_new_bus
ffffffff8141e197 r __kstrtab_pci_scan_bus
ffffffff8141e1a4 r __kstrtab_pci_scan_bus_parented
ffffffff8141e1ba r __kstrtab_pci_scan_root_bus
ffffffff8141e1cc r __kstrtab_pcie_bus_configure_settings
ffffffff8141e1e8 r __kstrtab_pci_scan_single_device
ffffffff8141e1ff r __kstrtab_pci_bus_read_dev_vendor_id
ffffffff8141e21a r __kstrtab_alloc_pci_dev
ffffffff8141e228 r __kstrtab_pcie_update_link_speed
ffffffff8141e23f r __kstrtab_pcibios_bus_to_resource
ffffffff8141e257 r __kstrtab_pcibios_resource_to_bus
ffffffff8141e26f r __kstrtab_no_pci_devices
ffffffff8141e27e r __kstrtab_pci_root_buses
ffffffff8141e28d r __kstrtab_pci_stop_bus_device
ffffffff8141e2a1 r __kstrtab_pci_stop_and_remove_behind_bridge
ffffffff8141e2c3 r __kstrtab_pci_stop_and_remove_bus_device
ffffffff8141e2e2 r __kstrtab___pci_remove_bus_device
ffffffff8141e2fa r __kstrtab_pci_remove_bus
ffffffff8141e309 r __kstrtab_pci_set_pcie_reset_state
ffffffff8141e322 r __kstrtab_pci_back_from_sleep
ffffffff8141e336 r __kstrtab_pci_prepare_to_sleep
ffffffff8141e34b r __kstrtab_pci_target_state
ffffffff8141e35c r __kstrtab_pci_wake_from_d3
ffffffff8141e36d r __kstrtab_pci_pme_active
ffffffff8141e37c r __kstrtab_pci_pme_capable
ffffffff8141e38c r __kstrtab_pci_restore_state
ffffffff8141e39e r __kstrtab_pci_save_state
ffffffff8141e3ad r __kstrtab_pci_set_power_state
ffffffff8141e3c1 r __kstrtab_pci_select_bars
ffffffff8141e3d1 r __kstrtab_pci_find_parent_resource
ffffffff8141e3ea r __kstrtab_pci_assign_resource
ffffffff8141e3fe r __kstrtab_pci_intx
ffffffff8141e407 r __kstrtab_pci_clear_mwi
ffffffff8141e415 r __kstrtab_pci_try_set_mwi
ffffffff8141e425 r __kstrtab_pci_set_mwi
ffffffff8141e431 r __kstrtab_pci_clear_master
ffffffff8141e442 r __kstrtab_pci_set_master
ffffffff8141e451 r __kstrtab_pci_request_selected_regions_exclusive
ffffffff8141e478 r __kstrtab_pci_request_selected_regions
ffffffff8141e495 r __kstrtab_pci_release_selected_regions
ffffffff8141e4b2 r __kstrtab_pci_request_region_exclusive
ffffffff8141e4cf r __kstrtab_pci_request_region
ffffffff8141e4e2 r __kstrtab_pci_release_region
ffffffff8141e4f5 r __kstrtab_pci_request_regions_exclusive
ffffffff8141e513 r __kstrtab_pci_request_regions
ffffffff8141e527 r __kstrtab_pci_release_regions
ffffffff8141e53b r __kstrtab_pci_bus_find_capability
ffffffff8141e553 r __kstrtab_pci_find_capability
ffffffff8141e567 r __kstrtab_pci_disable_device
ffffffff8141e57a r __kstrtab_pcim_pin_device
ffffffff8141e58a r __kstrtab_pcim_enable_device
ffffffff8141e59d r __kstrtab_pci_enable_device
ffffffff8141e5af r __kstrtab_pci_enable_device_mem
ffffffff8141e5c5 r __kstrtab_pci_enable_device_io
ffffffff8141e5da r __kstrtab_pci_reenable_device
ffffffff8141e5ee r __kstrtab_pci_fixup_cardbus
ffffffff8141e600 r __kstrtab_pcie_set_readrq
ffffffff8141e610 r __kstrtab_pcie_get_readrq
ffffffff8141e620 r __kstrtab_pcix_set_mmrbc
ffffffff8141e62f r __kstrtab_pcix_get_mmrbc
ffffffff8141e63e r __kstrtab_pcix_get_max_mmrbc
ffffffff8141e651 r __kstrtab_pci_reset_function
ffffffff8141e664 r __kstrtab___pci_reset_function_locked
ffffffff8141e680 r __kstrtab___pci_reset_function
ffffffff8141e695 r __kstrtab_pci_set_dma_seg_boundary
ffffffff8141e6ae r __kstrtab_pci_set_dma_max_seg_size
ffffffff8141e6c7 r __kstrtab_pci_msi_off
ffffffff8141e6d3 r __kstrtab_pci_check_and_unmask_intx
ffffffff8141e6ed r __kstrtab_pci_check_and_mask_intx
ffffffff8141e705 r __kstrtab_pci_intx_mask_supported
ffffffff8141e71d r __kstrtab_pci_set_cacheline_size
ffffffff8141e734 r __kstrtab_pci_set_ltr
ffffffff8141e740 r __kstrtab_pci_disable_ltr
ffffffff8141e750 r __kstrtab_pci_enable_ltr
ffffffff8141e75f r __kstrtab_pci_ltr_supported
ffffffff8141e771 r __kstrtab_pci_disable_obff
ffffffff8141e782 r __kstrtab_pci_enable_obff
ffffffff8141e792 r __kstrtab_pci_disable_ido
ffffffff8141e7a2 r __kstrtab_pci_enable_ido
ffffffff8141e7b1 r __kstrtab_pci_dev_run_wake
ffffffff8141e7c2 r __kstrtab___pci_enable_wake
ffffffff8141e7d4 r __kstrtab_pci_load_and_free_saved_state
ffffffff8141e7f2 r __kstrtab_pci_load_saved_state
ffffffff8141e807 r __kstrtab_pci_store_saved_state
ffffffff8141e81d r __kstrtab_pci_choose_state
ffffffff8141e82e r __kstrtab___pci_complete_power_transition
ffffffff8141e84e r __kstrtab_pci_find_ht_capability
ffffffff8141e865 r __kstrtab_pci_find_next_ht_capability
ffffffff8141e881 r __kstrtab_pci_find_ext_capability
ffffffff8141e899 r __kstrtab_pci_find_next_capability
ffffffff8141e8b2 r __kstrtab_pci_ioremap_bar
ffffffff8141e8c2 r __kstrtab_pci_bus_max_busnr
ffffffff8141e8d4 r __kstrtab_pci_pci_problems
ffffffff8141e8e5 r __kstrtab_isa_dma_bridge_buggy
ffffffff8141e8fa r __kstrtab_pci_power_names
ffffffff8141e90a r __kstrtab_pci_dev_put
ffffffff8141e916 r __kstrtab_pci_dev_get
ffffffff8141e922 r __kstrtab_pci_bus_type
ffffffff8141e92f r __kstrtab_pci_dev_driver
ffffffff8141e93e r __kstrtab_pci_unregister_driver
ffffffff8141e954 r __kstrtab___pci_register_driver
ffffffff8141e96a r __kstrtab_pci_match_id
ffffffff8141e977 r __kstrtab_pci_add_dynid
ffffffff8141e985 r __kstrtab_pci_get_class
ffffffff8141e993 r __kstrtab_pci_get_slot
ffffffff8141e9a0 r __kstrtab_pci_get_subsys
ffffffff8141e9af r __kstrtab_pci_get_device
ffffffff8141e9be r __kstrtab_pci_find_next_bus
ffffffff8141e9d0 r __kstrtab_pci_find_bus
ffffffff8141e9dd r __kstrtab_pci_dev_present
ffffffff8141e9ed r __kstrtab_pci_get_domain_bus_and_slot
ffffffff8141ea09 r __kstrtab_pci_disable_rom
ffffffff8141ea19 r __kstrtab_pci_enable_rom
ffffffff8141ea28 r __kstrtab_pci_unmap_rom
ffffffff8141ea36 r __kstrtab_pci_map_rom
ffffffff8141ea42 r __kstrtab_pci_claim_resource
ffffffff8141ea55 r __kstrtab_pci_lost_interrupt
ffffffff8141ea68 r __kstrtab_pci_vpd_find_info_keyword
ffffffff8141ea82 r __kstrtab_pci_vpd_find_tag
ffffffff8141ea93 r __kstrtab_pci_destroy_slot
ffffffff8141eaa4 r __kstrtab_pci_renumber_slot
ffffffff8141eab6 r __kstrtab_pci_create_slot
ffffffff8141eac6 r __kstrtab_pci_slots_kset
ffffffff8141ead5 r __kstrtab_pci_fixup_device
ffffffff8141eae6 r __kstrtab_pcie_aspm_support_enabled
ffffffff8141eb00 r __kstrtab_pcie_aspm_enabled
ffffffff8141eb12 r __kstrtab_pci_disable_link_state
ffffffff8141eb29 r __kstrtab_pci_disable_link_state_locked
ffffffff8141eb47 r __kstrtab_pcie_port_service_unregister
ffffffff8141eb64 r __kstrtab_pcie_port_service_register
ffffffff8141eb7f r __kstrtab_pcie_port_bus_type
ffffffff8141eb92 r __kstrtab_cper_severity_to_aer
ffffffff8141eba7 r __kstrtab_aer_recover_queue
ffffffff8141ebb9 r __kstrtab_pci_cleanup_aer_uncorrect_error_status
ffffffff8141ebe0 r __kstrtab_pci_disable_pcie_error_reporting
ffffffff8141ec01 r __kstrtab_pci_enable_pcie_error_reporting
ffffffff8141ec21 r __kstrtab_aer_irq
ffffffff8141ec29 r __kstrtab_pci_msi_enabled
ffffffff8141ec39 r __kstrtab_pci_disable_msix
ffffffff8141ec4a r __kstrtab_pci_enable_msix
ffffffff8141ec5a r __kstrtab_pci_disable_msi
ffffffff8141ec6a r __kstrtab_pci_enable_msi_block
ffffffff8141ec7f r __kstrtab_pci_restore_msi_state
ffffffff8141ec95 r __kstrtab_ht_destroy_irq
ffffffff8141eca4 r __kstrtab_ht_create_irq
ffffffff8141ecb2 r __kstrtab___ht_create_irq
ffffffff8141ecc2 r __kstrtab_pci_max_pasids
ffffffff8141ecd1 r __kstrtab_pci_pasid_features
ffffffff8141ece4 r __kstrtab_pci_disable_pasid
ffffffff8141ecf6 r __kstrtab_pci_enable_pasid
ffffffff8141ed07 r __kstrtab_pci_pri_status
ffffffff8141ed16 r __kstrtab_pci_pri_stopped
ffffffff8141ed26 r __kstrtab_pci_reset_pri
ffffffff8141ed34 r __kstrtab_pci_pri_enabled
ffffffff8141ed44 r __kstrtab_pci_disable_pri
ffffffff8141ed54 r __kstrtab_pci_enable_pri
ffffffff8141ed63 r __kstrtab_pci_ats_queue_depth
ffffffff8141ed77 r __kstrtab_pci_restore_ats_state
ffffffff8141ed8d r __kstrtab_pci_disable_ats
ffffffff8141ed9d r __kstrtab_pci_enable_ats
ffffffff8141edac r __kstrtab_pci_num_vf
ffffffff8141edb7 r __kstrtab_pci_sriov_migration
ffffffff8141edcb r __kstrtab_pci_disable_sriov
ffffffff8141eddd r __kstrtab_pci_enable_sriov
ffffffff8141edee r __kstrtab_pci_rescan_bus
ffffffff8141edfd r __kstrtab_pci_assign_unassigned_bridge_resources
ffffffff8141ee24 r __kstrtab_pci_bus_assign_resources
ffffffff8141ee3d r __kstrtab_pci_bus_size_bridges
ffffffff8141ee52 r __kstrtab_pci_setup_cardbus
ffffffff8141ee64 r __kstrtab_fb_notifier_call_chain
ffffffff8141ee7b r __kstrtab_fb_unregister_client
ffffffff8141ee90 r __kstrtab_fb_register_client
ffffffff8141eea3 r __kstrtab_fb_get_options
ffffffff8141eeb2 r __kstrtab_fb_set_suspend
ffffffff8141eec1 r __kstrtab_fb_get_buffer_offset
ffffffff8141eed6 r __kstrtab_fb_pan_display
ffffffff8141eee5 r __kstrtab_fb_blank
ffffffff8141eeee r __kstrtab_fb_set_var
ffffffff8141eef9 r __kstrtab_fb_show_logo
ffffffff8141ef06 r __kstrtab_registered_fb
ffffffff8141ef14 r __kstrtab_num_registered_fb
ffffffff8141ef26 r __kstrtab_unregister_framebuffer
ffffffff8141ef3d r __kstrtab_register_framebuffer
ffffffff8141ef52 r __kstrtab_remove_conflicting_framebuffers
ffffffff8141ef72 r __kstrtab_unlink_framebuffer
ffffffff8141ef85 r __kstrtab_fb_class
ffffffff8141ef8e r __kstrtab_fb_pad_unaligned_buffer
ffffffff8141efa6 r __kstrtab_fb_pad_aligned_buffer
ffffffff8141efbc r __kstrtab_fb_get_color_depth
ffffffff8141efcf r __kstrtab_lock_fb_info
ffffffff8141efdc r __kstrtab_fb_destroy_modedb
ffffffff8141efee r __kstrtab_fb_validate_mode
ffffffff8141efff r __kstrtab_fb_get_mode
ffffffff8141f00b r __kstrtab_fb_edid_add_monspecs
ffffffff8141f020 r __kstrtab_fb_edid_to_monspecs
ffffffff8141f034 r __kstrtab_fb_parse_edid
ffffffff8141f042 r __kstrtab_fb_firmware_edid
ffffffff8141f053 r __kstrtab_fb_invert_cmaps
ffffffff8141f063 r __kstrtab_fb_default_cmap
ffffffff8141f073 r __kstrtab_fb_set_cmap
ffffffff8141f07f r __kstrtab_fb_copy_cmap
ffffffff8141f08c r __kstrtab_fb_dealloc_cmap
ffffffff8141f09c r __kstrtab_fb_alloc_cmap
ffffffff8141f0aa r __kstrtab_framebuffer_release
ffffffff8141f0be r __kstrtab_framebuffer_alloc
ffffffff8141f0d0 r __kstrtab_fb_find_mode_cvt
ffffffff8141f0e1 r __kstrtab_fb_find_mode
ffffffff8141f0ee r __kstrtab_fb_videomode_to_modelist
ffffffff8141f107 r __kstrtab_fb_find_nearest_mode
ffffffff8141f11c r __kstrtab_fb_find_best_mode
ffffffff8141f12e r __kstrtab_fb_match_mode
ffffffff8141f13c r __kstrtab_fb_add_videomode
ffffffff8141f14d r __kstrtab_fb_mode_is_equal
ffffffff8141f15e r __kstrtab_fb_var_to_videomode
ffffffff8141f172 r __kstrtab_fb_videomode_to_var
ffffffff8141f186 r __kstrtab_fb_find_best_display
ffffffff8141f19b r __kstrtab_fb_destroy_modelist
ffffffff8141f1af r __kstrtab_vesa_modes
ffffffff8141f1ba r __kstrtab_fb_mode_option
ffffffff8141f1c9 r __kstrtab_vgacon_text_force
ffffffff8141f1db r __kstrtab_fbcon_set_bitops
ffffffff8141f1ec r __kstrtab_get_default_font
ffffffff8141f1fd r __kstrtab_find_font
ffffffff8141f207 r __kstrtab_font_vga_8x16
ffffffff8141f215 r __kstrtab_soft_cursor
ffffffff8141f221 r __kstrtab_fb_find_logo
ffffffff8141f22e r __kstrtab_cfb_fillrect
ffffffff8141f23b r __kstrtab_cfb_copyarea
ffffffff8141f248 r __kstrtab_cfb_imageblit
ffffffff8141f256 r __kstrtab_acpi_resources_are_enforced
ffffffff8141f272 r __kstrtab_acpi_check_region
ffffffff8141f284 r __kstrtab_acpi_check_resource_conflict
ffffffff8141f2a1 r __kstrtab_acpi_os_wait_events_complete
ffffffff8141f2be r __kstrtab_acpi_os_execute
ffffffff8141f2ce r __kstrtab_acpi_os_write_port
ffffffff8141f2e1 r __kstrtab_acpi_os_read_port
ffffffff8141f2f3 r __kstrtab_acpi_os_unmap_generic_address
ffffffff8141f311 r __kstrtab_acpi_os_map_generic_address
ffffffff8141f32d r __kstrtab_acpi_os_unmap_memory
ffffffff8141f342 r __kstrtab_acpi_os_map_memory
ffffffff8141f355 r __kstrtab_acpi_os_get_iomem
ffffffff8141f367 r __kstrtab_kacpi_hotplug_wq
ffffffff8141f378 r __kstrtab_acpi_evaluate_reference
ffffffff8141f390 r __kstrtab_acpi_evaluate_integer
ffffffff8141f3a6 r __kstrtab_acpi_extract_package
ffffffff8141f3bb r __kstrtab_acpi_kobj
ffffffff8141f3c5 r __kstrtab_unregister_acpi_bus_notifier
ffffffff8141f3e2 r __kstrtab_register_acpi_bus_notifier
ffffffff8141f3fd r __kstrtab_acpi_bus_generate_proc_event
ffffffff8141f41a r __kstrtab_acpi_bus_generate_proc_event4
ffffffff8141f438 r __kstrtab_acpi_run_osc
ffffffff8141f445 r __kstrtab_acpi_bus_can_wakeup
ffffffff8141f459 r __kstrtab_acpi_bus_power_manageable
ffffffff8141f473 r __kstrtab_acpi_bus_update_power
ffffffff8141f489 r __kstrtab_acpi_bus_set_power
ffffffff8141f49c r __kstrtab_acpi_bus_get_private_data
ffffffff8141f4b6 r __kstrtab_acpi_bus_private_data_handler
ffffffff8141f4d4 r __kstrtab_acpi_bus_get_status
ffffffff8141f4e8 r __kstrtab_acpi_bus_get_device
ffffffff8141f4fc r __kstrtab_acpi_root_dir
ffffffff8141f50a r __kstrtab_acpi_get_physical_device
ffffffff8141f523 r __kstrtab_acpi_get_child
ffffffff8141f532 r __kstrtab_acpi_bus_trim
ffffffff8141f540 r __kstrtab_acpi_bus_start
ffffffff8141f54f r __kstrtab_acpi_bus_add
ffffffff8141f55c r __kstrtab_acpi_device_hid
ffffffff8141f56c r __kstrtab_acpi_bus_get_ejd
ffffffff8141f57d r __kstrtab_acpi_bus_unregister_driver
ffffffff8141f598 r __kstrtab_acpi_bus_register_driver
ffffffff8141f5b1 r __kstrtab_acpi_match_device_ids
ffffffff8141f5c7 r __kstrtab_acpi_get_cpuid
ffffffff8141f5d6 r __kstrtab_acpi_ec_remove_query_handler
ffffffff8141f5f3 r __kstrtab_acpi_ec_add_query_handler
ffffffff8141f60d r __kstrtab_ec_get_handle
ffffffff8141f61b r __kstrtab_ec_transaction
ffffffff8141f62a r __kstrtab_ec_write
ffffffff8141f633 r __kstrtab_ec_read
ffffffff8141f63b r __kstrtab_ec_burst_disable
ffffffff8141f64c r __kstrtab_ec_burst_enable
ffffffff8141f65c r __kstrtab_first_ec
ffffffff8141f665 r __kstrtab_acpi_pci_osc_control_set
ffffffff8141f67e r __kstrtab_acpi_get_pci_dev
ffffffff8141f68f r __kstrtab_acpi_pci_find_root
ffffffff8141f6a2 r __kstrtab_acpi_is_root_bridge
ffffffff8141f6b6 r __kstrtab_acpi_get_pci_rootbridge_handle
ffffffff8141f6d5 r __kstrtab_acpi_pci_unregister_driver
ffffffff8141f6f0 r __kstrtab_acpi_pci_register_driver
ffffffff8141f709 r __kstrtab_acpi_bus_generate_netlink_event
ffffffff8141f729 r __kstrtab_unregister_acpi_notifier
ffffffff8141f742 r __kstrtab_register_acpi_notifier
ffffffff8141f759 r __kstrtab_acpi_notifier_call_chain
ffffffff8141f772 r __kstrtab_acpi_get_node
ffffffff8141f780 r __kstrtab_acpi_unlock_battery_dir
ffffffff8141f798 r __kstrtab_acpi_lock_battery_dir
ffffffff8141f7ae r __kstrtab_acpi_unlock_ac_dir
ffffffff8141f7c1 r __kstrtab_acpi_lock_ac_dir
ffffffff8141f7d2 r __kstrtab_acpi_release_global_lock
ffffffff8141f7eb r __kstrtab_acpi_acquire_global_lock
ffffffff8141f804 r __kstrtab_acpi_remove_gpe_handler
ffffffff8141f81c r __kstrtab_acpi_install_gpe_handler
ffffffff8141f835 r __kstrtab_acpi_remove_fixed_event_handler
ffffffff8141f855 r __kstrtab_acpi_install_fixed_event_handler
ffffffff8141f876 r __kstrtab_acpi_install_global_event_handler
ffffffff8141f898 r __kstrtab_acpi_remove_notify_handler
ffffffff8141f8b3 r __kstrtab_acpi_install_notify_handler
ffffffff8141f8cf r __kstrtab_acpi_get_event_status
ffffffff8141f8e5 r __kstrtab_acpi_clear_event
ffffffff8141f8f6 r __kstrtab_acpi_disable_event
ffffffff8141f909 r __kstrtab_acpi_enable_event
ffffffff8141f91b r __kstrtab_acpi_disable
ffffffff8141f928 r __kstrtab_acpi_enable
ffffffff8141f934 r __kstrtab_acpi_get_gpe_device
ffffffff8141f948 r __kstrtab_acpi_remove_gpe_block
ffffffff8141f95e r __kstrtab_acpi_install_gpe_block
ffffffff8141f975 r __kstrtab_acpi_enable_all_runtime_gpes
ffffffff8141f992 r __kstrtab_acpi_disable_all_gpes
ffffffff8141f9a8 r __kstrtab_acpi_get_gpe_status
ffffffff8141f9bc r __kstrtab_acpi_clear_gpe
ffffffff8141f9cb r __kstrtab_acpi_set_gpe_wake_mask
ffffffff8141f9e2 r __kstrtab_acpi_setup_gpe_for_wake
ffffffff8141f9fa r __kstrtab_acpi_disable_gpe
ffffffff8141fa0b r __kstrtab_acpi_enable_gpe
ffffffff8141fa1b r __kstrtab_acpi_update_all_gpes
ffffffff8141fa30 r __kstrtab_acpi_remove_address_space_handler
ffffffff8141fa52 r __kstrtab_acpi_install_address_space_handler
ffffffff8141fa75 r __kstrtab_acpi_get_sleep_type_data
ffffffff8141fa8e r __kstrtab_acpi_write_bit_register
ffffffff8141faa6 r __kstrtab_acpi_read_bit_register
ffffffff8141fabd r __kstrtab_acpi_write
ffffffff8141fac8 r __kstrtab_acpi_read
ffffffff8141fad2 r __kstrtab_acpi_reset
ffffffff8141fadd r __kstrtab_acpi_leave_sleep_state
ffffffff8141faf4 r __kstrtab_acpi_leave_sleep_state_prep
ffffffff8141fb10 r __kstrtab_acpi_enter_sleep_state
ffffffff8141fb27 r __kstrtab_acpi_enter_sleep_state_prep
ffffffff8141fb43 r __kstrtab_acpi_enter_sleep_state_s4bios
ffffffff8141fb61 r __kstrtab_acpi_set_firmware_waking_vector64
ffffffff8141fb83 r __kstrtab_acpi_set_firmware_waking_vector
ffffffff8141fba3 r __kstrtab_acpi_get_data
ffffffff8141fbb1 r __kstrtab_acpi_detach_data
ffffffff8141fbc2 r __kstrtab_acpi_attach_data
ffffffff8141fbd3 r __kstrtab_acpi_get_devices
ffffffff8141fbe4 r __kstrtab_acpi_walk_namespace
ffffffff8141fbf8 r __kstrtab_acpi_evaluate_object
ffffffff8141fc0d r __kstrtab_acpi_evaluate_object_typed
ffffffff8141fc28 r __kstrtab_acpi_install_method
ffffffff8141fc3c r __kstrtab_acpi_get_object_info
ffffffff8141fc51 r __kstrtab_acpi_get_name
ffffffff8141fc5f r __kstrtab_acpi_get_handle
ffffffff8141fc6f r __kstrtab_acpi_get_next_object
ffffffff8141fc84 r __kstrtab_acpi_get_parent
ffffffff8141fc94 r __kstrtab_acpi_get_type
ffffffff8141fca2 r __kstrtab_acpi_get_id
ffffffff8141fcae r __kstrtab_acpi_walk_resources
ffffffff8141fcc2 r __kstrtab_acpi_get_vendor_resource
ffffffff8141fcdb r __kstrtab_acpi_resource_to_address64
ffffffff8141fcf6 r __kstrtab_acpi_get_event_resources
ffffffff8141fd0f r __kstrtab_acpi_set_current_resources
ffffffff8141fd2a r __kstrtab_acpi_get_current_resources
ffffffff8141fd45 r __kstrtab_acpi_get_irq_routing_table
ffffffff8141fd60 r __kstrtab_acpi_remove_table_handler
ffffffff8141fd7a r __kstrtab_acpi_install_table_handler
ffffffff8141fd95 r __kstrtab_acpi_load_tables
ffffffff8141fda6 r __kstrtab_acpi_get_table_by_index
ffffffff8141fdbe r __kstrtab_acpi_get_table
ffffffff8141fdcd r __kstrtab_acpi_unload_table_id
ffffffff8141fde2 r __kstrtab_acpi_get_table_header
ffffffff8141fdf8 r __kstrtab_acpi_load_table
ffffffff8141fe08 r __kstrtab_acpi_format_exception
ffffffff8141fe1e r __kstrtab_acpi_current_gpe_count
ffffffff8141fe35 r __kstrtab_acpi_dbg_layer
ffffffff8141fe44 r __kstrtab_acpi_dbg_level
ffffffff8141fe53 r __kstrtab_acpi_gbl_FADT
ffffffff8141fe61 r __kstrtab_acpi_check_address_range
ffffffff8141fe7a r __kstrtab_acpi_install_interface_handler
ffffffff8141fe99 r __kstrtab_acpi_remove_interface
ffffffff8141feaf r __kstrtab_acpi_install_interface
ffffffff8141fec6 r __kstrtab_acpi_purge_cached_objects
ffffffff8141fee0 r __kstrtab_acpi_terminate
ffffffff8141feef r __kstrtab_acpi_initialize_objects
ffffffff8141ff07 r __kstrtab_acpi_enable_subsystem
ffffffff8141ff1d r __kstrtab_acpi_info
ffffffff8141ff27 r __kstrtab_acpi_warning
ffffffff8141ff34 r __kstrtab_acpi_exception
ffffffff8141ff43 r __kstrtab_acpi_error
ffffffff8141ff4e r __kstrtab_unregister_acpi_hed_notifier
ffffffff8141ff6b r __kstrtab_register_acpi_hed_notifier
ffffffff8141ff86 r __kstrtab_apei_osc_setup
ffffffff8141ff95 r __kstrtab_apei_get_debugfs_dir
ffffffff8141ffaa r __kstrtab_apei_exec_collect_resources
ffffffff8141ffc6 r __kstrtab_apei_write
ffffffff8141ffd1 r __kstrtab_apei_read
ffffffff8141ffdb r __kstrtab_apei_map_generic_address
ffffffff8141fff4 r __kstrtab_apei_resources_release
ffffffff8142000b r __kstrtab_apei_resources_request
ffffffff81420022 r __kstrtab_apei_resources_sub
ffffffff81420035 r __kstrtab_apei_resources_add
ffffffff81420048 r __kstrtab_apei_resources_fini
ffffffff8142005c r __kstrtab_apei_exec_post_unmap_gars
ffffffff81420076 r __kstrtab_apei_exec_pre_map_gars
ffffffff8142008d r __kstrtab___apei_exec_run
ffffffff8142009d r __kstrtab_apei_exec_noop
ffffffff814200ac r __kstrtab_apei_exec_write_register_value
ffffffff814200cb r __kstrtab_apei_exec_write_register
ffffffff814200e4 r __kstrtab_apei_exec_read_register_value
ffffffff81420102 r __kstrtab_apei_exec_read_register
ffffffff8142011a r __kstrtab_apei_exec_ctx_init
ffffffff8142012d r __kstrtab_apei_hest_parse
ffffffff8142013d r __kstrtab_hest_disable
ffffffff8142014a r __kstrtab_apei_estatus_check
ffffffff8142015d r __kstrtab_apei_estatus_check_header
ffffffff81420177 r __kstrtab_apei_estatus_print
ffffffff8142018a r __kstrtab_cper_next_record_id
ffffffff8142019e r __kstrtab_erst_clear
ffffffff814201a9 r __kstrtab_erst_read
ffffffff814201b3 r __kstrtab_erst_write
ffffffff814201be r __kstrtab_erst_get_record_id_end
ffffffff814201d5 r __kstrtab_erst_get_record_id_next
ffffffff814201ed r __kstrtab_erst_get_record_id_begin
ffffffff81420206 r __kstrtab_erst_get_record_count
ffffffff8142021c r __kstrtab_erst_disable
ffffffff81420229 r __kstrtab_pnp_platform_devices
ffffffff8142023e r __kstrtab_pnp_unregister_card_driver
ffffffff81420259 r __kstrtab_pnp_register_card_driver
ffffffff81420272 r __kstrtab_pnp_release_card_device
ffffffff8142028a r __kstrtab_pnp_request_card_device
ffffffff814202a2 r __kstrtab_pnp_device_detach
ffffffff814202b4 r __kstrtab_pnp_device_attach
ffffffff814202c6 r __kstrtab_pnp_unregister_driver
ffffffff814202dc r __kstrtab_pnp_register_driver
ffffffff814202f0 r __kstrtab_pnp_range_reserved
ffffffff81420303 r __kstrtab_pnp_possible_config
ffffffff81420317 r __kstrtab_pnp_get_resource
ffffffff81420328 r __kstrtab_pnp_disable_dev
ffffffff81420338 r __kstrtab_pnp_activate_dev
ffffffff81420349 r __kstrtab_pnp_stop_dev
ffffffff81420356 r __kstrtab_pnp_start_dev
ffffffff81420364 r __kstrtab_pnp_is_active
ffffffff81420372 r __kstrtab_pnpacpi_protocol
ffffffff81420383 r __kstrtab_gnttab_init
ffffffff8142038f r __kstrtab_gnttab_unmap_refs
ffffffff814203a1 r __kstrtab_gnttab_map_refs
ffffffff814203b1 r __kstrtab_gnttab_max_grant_frames
ffffffff814203c9 r __kstrtab_gnttab_cancel_free_callback
ffffffff814203e5 r __kstrtab_gnttab_request_free_callback
ffffffff81420402 r __kstrtab_gnttab_release_grant_reference
ffffffff81420421 r __kstrtab_gnttab_claim_grant_reference
ffffffff8142043e r __kstrtab_gnttab_empty_grant_references
ffffffff8142045c r __kstrtab_gnttab_alloc_grant_references
ffffffff8142047a r __kstrtab_gnttab_free_grant_references
ffffffff81420497 r __kstrtab_gnttab_free_grant_reference
ffffffff814204b3 r __kstrtab_gnttab_end_foreign_transfer
ffffffff814204cf r __kstrtab_gnttab_end_foreign_transfer_ref
ffffffff814204ef r __kstrtab_gnttab_grant_foreign_transfer_ref
ffffffff81420511 r __kstrtab_gnttab_grant_foreign_transfer
ffffffff8142052f r __kstrtab_gnttab_end_foreign_access
ffffffff81420549 r __kstrtab_gnttab_end_foreign_access_ref
ffffffff81420567 r __kstrtab_gnttab_query_foreign_access
ffffffff81420583 r __kstrtab_gnttab_trans_grants_available
ffffffff814205a1 r __kstrtab_gnttab_grant_foreign_access_trans
ffffffff814205c3 r __kstrtab_gnttab_grant_foreign_access_trans_ref
ffffffff814205e9 r __kstrtab_gnttab_subpage_grants_available
ffffffff81420609 r __kstrtab_gnttab_grant_foreign_access_subpage
ffffffff8142062d r __kstrtab_gnttab_grant_foreign_access_subpage_ref
ffffffff81420655 r __kstrtab_gnttab_grant_foreign_access
ffffffff81420671 r __kstrtab_gnttab_grant_foreign_access_ref
ffffffff81420691 r __kstrtab_xen_hvm_resume_frames
ffffffff814206a7 r __kstrtab_xen_features
ffffffff814206b4 r __kstrtab_xen_set_callback_via
ffffffff814206c9 r __kstrtab_xen_test_irq_shared
ffffffff814206dd r __kstrtab_xen_poll_irq_timeout
ffffffff814206f2 r __kstrtab_xen_clear_irq_pending
ffffffff81420708 r __kstrtab_xen_hvm_evtchn_do_upcall
ffffffff81420721 r __kstrtab_evtchn_put
ffffffff8142072c r __kstrtab_evtchn_get
ffffffff81420737 r __kstrtab_evtchn_make_refcounted
ffffffff8142074e r __kstrtab_unbind_from_irqhandler
ffffffff81420765 r __kstrtab_bind_virq_to_irqhandler
ffffffff8142077d r __kstrtab_bind_interdomain_evtchn_to_irqhandler
ffffffff814207a3 r __kstrtab_bind_evtchn_to_irqhandler
ffffffff814207bd r __kstrtab_bind_evtchn_to_irq
ffffffff814207d0 r __kstrtab_xen_pirq_from_irq
ffffffff814207e2 r __kstrtab_xen_irq_from_gsi
ffffffff814207f3 r __kstrtab_notify_remote_via_irq
ffffffff81420809 r __kstrtab_irq_from_evtchn
ffffffff81420819 r __kstrtab_xen_setup_shutdown_event
ffffffff81420832 r __kstrtab_free_xenballooned_pages
ffffffff8142084a r __kstrtab_alloc_xenballooned_pages
ffffffff81420863 r __kstrtab_balloon_set_new_target
ffffffff8142087a r __kstrtab_balloon_stats
ffffffff81420888 r __kstrtab_xenbus_read_driver_state
ffffffff814208a1 r __kstrtab_xenbus_unmap_ring
ffffffff814208b3 r __kstrtab_xenbus_unmap_ring_vfree
ffffffff814208cb r __kstrtab_xenbus_map_ring
ffffffff814208db r __kstrtab_xenbus_map_ring_valloc
ffffffff814208f2 r __kstrtab_xenbus_free_evtchn
ffffffff81420905 r __kstrtab_xenbus_bind_evtchn
ffffffff81420918 r __kstrtab_xenbus_alloc_evtchn
ffffffff8142092c r __kstrtab_xenbus_grant_ring
ffffffff8142093e r __kstrtab_xenbus_dev_fatal
ffffffff8142094f r __kstrtab_xenbus_dev_error
ffffffff81420960 r __kstrtab_xenbus_frontend_closed
ffffffff81420977 r __kstrtab_xenbus_switch_state
ffffffff8142098b r __kstrtab_xenbus_watch_pathfmt
ffffffff814209a0 r __kstrtab_xenbus_watch_path
ffffffff814209b2 r __kstrtab_xenbus_strstate
ffffffff814209c2 r __kstrtab_unregister_xenbus_watch
ffffffff814209da r __kstrtab_register_xenbus_watch
ffffffff814209f0 r __kstrtab_xenbus_gather
ffffffff814209fe r __kstrtab_xenbus_printf
ffffffff81420a0c r __kstrtab_xenbus_scanf
ffffffff81420a19 r __kstrtab_xenbus_transaction_end
ffffffff81420a30 r __kstrtab_xenbus_transaction_start
ffffffff81420a49 r __kstrtab_xenbus_rm
ffffffff81420a53 r __kstrtab_xenbus_mkdir
ffffffff81420a60 r __kstrtab_xenbus_write
ffffffff81420a6d r __kstrtab_xenbus_read
ffffffff81420a79 r __kstrtab_xenbus_exists
ffffffff81420a87 r __kstrtab_xenbus_directory
ffffffff81420a98 r __kstrtab_xenbus_dev_request_and_reply
ffffffff81420ab5 r __kstrtab_xenbus_probe
ffffffff81420ac2 r __kstrtab_unregister_xenstore_notifier
ffffffff81420adf r __kstrtab_register_xenstore_notifier
ffffffff81420afa r __kstrtab_xenbus_dev_cancel
ffffffff81420b0c r __kstrtab_xenbus_dev_resume
ffffffff81420b1e r __kstrtab_xenbus_dev_suspend
ffffffff81420b31 r __kstrtab_xenbus_dev_changed
ffffffff81420b44 r __kstrtab_xenbus_probe_devices
ffffffff81420b59 r __kstrtab_xenbus_probe_node
ffffffff81420b6b r __kstrtab_xenbus_dev_attrs
ffffffff81420b7c r __kstrtab_xenbus_unregister_driver
ffffffff81420b95 r __kstrtab_xenbus_register_driver_common
ffffffff81420bb3 r __kstrtab_xenbus_dev_shutdown
ffffffff81420bc7 r __kstrtab_xenbus_dev_remove
ffffffff81420bd9 r __kstrtab_xenbus_dev_probe
ffffffff81420bea r __kstrtab_xenbus_otherend_changed
ffffffff81420c02 r __kstrtab_xenbus_read_otherend_details
ffffffff81420c1f r __kstrtab_xenbus_match
ffffffff81420c2c r __kstrtab_xen_store_interface
ffffffff81420c40 r __kstrtab_xen_store_evtchn
ffffffff81420c51 r __kstrtab_xenbus_register_backend
ffffffff81420c69 r __kstrtab_xenbus_dev_is_online
ffffffff81420c7e r __kstrtab_xen_xenbus_fops
ffffffff81420c8e r __kstrtab_xenbus_register_frontend
ffffffff81420ca7 r __kstrtab_xen_biovec_phys_mergeable
ffffffff81420cc1 r __kstrtab_register_xen_selfballooning
ffffffff81420cdd r __kstrtab_xen_swiotlb_dma_supported
ffffffff81420cf7 r __kstrtab_xen_swiotlb_dma_mapping_error
ffffffff81420d15 r __kstrtab_xen_swiotlb_sync_sg_for_device
ffffffff81420d34 r __kstrtab_xen_swiotlb_sync_sg_for_cpu
ffffffff81420d50 r __kstrtab_xen_swiotlb_unmap_sg
ffffffff81420d65 r __kstrtab_xen_swiotlb_unmap_sg_attrs
ffffffff81420d80 r __kstrtab_xen_swiotlb_map_sg
ffffffff81420d93 r __kstrtab_xen_swiotlb_map_sg_attrs
ffffffff81420dac r __kstrtab_xen_swiotlb_sync_single_for_device
ffffffff81420dcf r __kstrtab_xen_swiotlb_sync_single_for_cpu
ffffffff81420def r __kstrtab_xen_swiotlb_unmap_page
ffffffff81420e06 r __kstrtab_xen_swiotlb_map_page
ffffffff81420e1b r __kstrtab_xen_swiotlb_free_coherent
ffffffff81420e35 r __kstrtab_xen_swiotlb_alloc_coherent
ffffffff81420e50 r __kstrtab_get_current_tty
ffffffff81420e60 r __kstrtab_tty_devnum
ffffffff81420e6b r __kstrtab_tty_unregister_driver
ffffffff81420e81 r __kstrtab_tty_register_driver
ffffffff81420e95 r __kstrtab_put_tty_driver
ffffffff81420ea4 r __kstrtab_tty_set_operations
ffffffff81420eb7 r __kstrtab_tty_driver_kref_put
ffffffff81420ecb r __kstrtab___alloc_tty_driver
ffffffff81420ede r __kstrtab_tty_unregister_device
ffffffff81420ef4 r __kstrtab_tty_register_device
ffffffff81420f08 r __kstrtab_tty_put_char
ffffffff81420f15 r __kstrtab_do_SAK
ffffffff81420f1c r __kstrtab_tty_pair_get_pty
ffffffff81420f2d r __kstrtab_tty_pair_get_tty
ffffffff81420f3e r __kstrtab_tty_get_pgrp
ffffffff81420f4b r __kstrtab_tty_kref_put
ffffffff81420f58 r __kstrtab_tty_shutdown
ffffffff81420f65 r __kstrtab_tty_free_termios
ffffffff81420f76 r __kstrtab_tty_standard_install
ffffffff81420f8b r __kstrtab_tty_init_termios
ffffffff81420f9c r __kstrtab_start_tty
ffffffff81420fa6 r __kstrtab_stop_tty
ffffffff81420faf r __kstrtab_tty_hung_up_p
ffffffff81420fbd r __kstrtab_tty_vhangup
ffffffff81420fc9 r __kstrtab_tty_hangup
ffffffff81420fd4 r __kstrtab_tty_wakeup
ffffffff81420fdf r __kstrtab_tty_check_change
ffffffff81420ff0 r __kstrtab_tty_name
ffffffff81420ff9 r __kstrtab_tty_mutex
ffffffff81421003 r __kstrtab_tty_std_termios
ffffffff81421013 r __kstrtab_n_tty_inherit_ops
ffffffff81421025 r __kstrtab_n_tty_compat_ioctl_helper
ffffffff8142103f r __kstrtab_n_tty_ioctl_helper
ffffffff81421052 r __kstrtab_tty_perform_flush
ffffffff81421064 r __kstrtab_tty_mode_ioctl
ffffffff81421073 r __kstrtab_tty_set_termios
ffffffff81421083 r __kstrtab_tty_termios_hw_change
ffffffff81421099 r __kstrtab_tty_termios_copy_hw
ffffffff814210ad r __kstrtab_tty_get_baud_rate
ffffffff814210bf r __kstrtab_tty_encode_baud_rate
ffffffff814210d4 r __kstrtab_tty_termios_encode_baud_rate
ffffffff814210f1 r __kstrtab_tty_termios_input_baud_rate
ffffffff8142110d r __kstrtab_tty_termios_baud_rate
ffffffff81421123 r __kstrtab_tty_wait_until_sent
ffffffff81421137 r __kstrtab_tty_unthrottle
ffffffff81421146 r __kstrtab_tty_throttle
ffffffff81421153 r __kstrtab_tty_driver_flush_buffer
ffffffff8142116b r __kstrtab_tty_write_room
ffffffff8142117a r __kstrtab_tty_chars_in_buffer
ffffffff8142118e r __kstrtab_tty_ldisc_flush
ffffffff8142119e r __kstrtab_tty_ldisc_deref
ffffffff814211ae r __kstrtab_tty_ldisc_ref
ffffffff814211bc r __kstrtab_tty_ldisc_ref_wait
ffffffff814211cf r __kstrtab_tty_unregister_ldisc
ffffffff814211e4 r __kstrtab_tty_register_ldisc
ffffffff814211f7 r __kstrtab_tty_flip_buffer_push
ffffffff8142120c r __kstrtab_tty_prepare_flip_string_flags
ffffffff8142122a r __kstrtab_tty_prepare_flip_string
ffffffff81421242 r __kstrtab_tty_schedule_flip
ffffffff81421254 r __kstrtab_tty_insert_flip_string_flags
ffffffff81421271 r __kstrtab_tty_insert_flip_string_fixed_flag
ffffffff81421293 r __kstrtab_tty_buffer_request_room
ffffffff814212ab r __kstrtab_tty_port_open
ffffffff814212b9 r __kstrtab_tty_port_close
ffffffff814212c8 r __kstrtab_tty_port_close_end
ffffffff814212db r __kstrtab_tty_port_close_start
ffffffff814212f0 r __kstrtab_tty_port_block_til_ready
ffffffff81421309 r __kstrtab_tty_port_lower_dtr_rts
ffffffff81421320 r __kstrtab_tty_port_raise_dtr_rts
ffffffff81421337 r __kstrtab_tty_port_carrier_raised
ffffffff8142134f r __kstrtab_tty_port_hangup
ffffffff8142135f r __kstrtab_tty_port_tty_set
ffffffff81421370 r __kstrtab_tty_port_tty_get
ffffffff81421381 r __kstrtab_tty_port_put
ffffffff8142138e r __kstrtab_tty_port_free_xmit_buf
ffffffff814213a5 r __kstrtab_tty_port_alloc_xmit_buf
ffffffff814213bd r __kstrtab_tty_port_init
ffffffff814213cb r __kstrtab_tty_unlock
ffffffff814213d6 r __kstrtab_tty_lock
ffffffff814213df r __kstrtab_pm_set_vt_switch
ffffffff814213f0 r __kstrtab_vt_get_leds
ffffffff814213fc r __kstrtab_kd_mksound
ffffffff81421407 r __kstrtab_unregister_keyboard_notifier
ffffffff81421424 r __kstrtab_register_keyboard_notifier
ffffffff8142143f r __kstrtab_con_copy_unimap
ffffffff8142144f r __kstrtab_con_set_default_unimap
ffffffff81421466 r __kstrtab_inverse_translate
ffffffff81421478 r __kstrtab_give_up_console
ffffffff81421488 r __kstrtab_take_over_console
ffffffff8142149a r __kstrtab_global_cursor_default
ffffffff814214b0 r __kstrtab_vc_cons
ffffffff814214b8 r __kstrtab_console_blanked
ffffffff814214c8 r __kstrtab_console_blank_hook
ffffffff814214db r __kstrtab_fg_console
ffffffff814214e6 r __kstrtab_vc_resize
ffffffff814214f0 r __kstrtab_redraw_screen
ffffffff814214fe r __kstrtab_update_region
ffffffff8142150c r __kstrtab_default_blu
ffffffff81421518 r __kstrtab_default_grn
ffffffff81421524 r __kstrtab_default_red
ffffffff81421530 r __kstrtab_color_table
ffffffff8142153c r __kstrtab_screen_glyph
ffffffff81421549 r __kstrtab_do_unblank_screen
ffffffff8142155b r __kstrtab_do_blank_screen
ffffffff8142156b r __kstrtab_unregister_con_driver
ffffffff81421581 r __kstrtab_register_con_driver
ffffffff81421595 r __kstrtab_con_debug_leave
ffffffff814215a5 r __kstrtab_con_debug_enter
ffffffff814215b5 r __kstrtab_con_is_bound
ffffffff814215c2 r __kstrtab_unbind_con_driver
ffffffff814215d4 r __kstrtab_unregister_vt_notifier
ffffffff814215eb r __kstrtab_register_vt_notifier
ffffffff81421600 r __kstrtab_hvc_remove
ffffffff8142160b r __kstrtab_hvc_alloc
ffffffff81421615 r __kstrtab___hvc_resize
ffffffff81421622 r __kstrtab_hvc_poll
ffffffff8142162b r __kstrtab_hvc_kick
ffffffff81421634 r __kstrtab_hvc_instantiate
ffffffff81421644 r __kstrtab_generate_random_uuid
ffffffff81421659 r __kstrtab_get_random_bytes
ffffffff8142166a r __kstrtab_add_input_randomness
ffffffff8142167f r __kstrtab_misc_deregister
ffffffff8142168f r __kstrtab_misc_register
ffffffff8142169d r __kstrtab_nvram_check_checksum
ffffffff814216b2 r __kstrtab___nvram_check_checksum
ffffffff814216c9 r __kstrtab_nvram_write_byte
ffffffff814216da r __kstrtab___nvram_write_byte
ffffffff814216ed r __kstrtab_nvram_read_byte
ffffffff814216fd r __kstrtab___nvram_read_byte
ffffffff8142170f r __kstrtab_vga_client_register
ffffffff81421723 r __kstrtab_vga_set_legacy_decoding
ffffffff8142173b r __kstrtab_vga_put
ffffffff81421743 r __kstrtab_vga_tryget
ffffffff8142174e r __kstrtab_vga_get
ffffffff81421756 r __kstrtab__dev_info
ffffffff81421760 r __kstrtab_dev_notice
ffffffff8142176b r __kstrtab_dev_warn
ffffffff81421774 r __kstrtab_dev_err
ffffffff8142177c r __kstrtab_dev_crit
ffffffff81421785 r __kstrtab_dev_alert
ffffffff8142178f r __kstrtab_dev_emerg
ffffffff81421799 r __kstrtab_dev_printk
ffffffff814217a4 r __kstrtab___dev_printk
ffffffff814217b1 r __kstrtab_device_move
ffffffff814217bd r __kstrtab_device_rename
ffffffff814217cb r __kstrtab_device_destroy
ffffffff814217da r __kstrtab_device_create
ffffffff814217e8 r __kstrtab_device_create_vargs
ffffffff814217fc r __kstrtab_root_device_unregister
ffffffff81421813 r __kstrtab___root_device_register
ffffffff8142182a r __kstrtab_device_remove_file
ffffffff8142183d r __kstrtab_device_create_file
ffffffff81421850 r __kstrtab_put_device
ffffffff8142185b r __kstrtab_get_device
ffffffff81421866 r __kstrtab_device_unregister
ffffffff81421878 r __kstrtab_device_del
ffffffff81421883 r __kstrtab_device_register
ffffffff81421893 r __kstrtab_device_add
ffffffff8142189e r __kstrtab_device_initialize
ffffffff814218b0 r __kstrtab_device_find_child
ffffffff814218c2 r __kstrtab_device_for_each_child
ffffffff814218d8 r __kstrtab_dev_set_name
ffffffff814218e5 r __kstrtab_device_schedule_callback_owner
ffffffff81421904 r __kstrtab_device_remove_bin_file
ffffffff8142191b r __kstrtab_device_create_bin_file
ffffffff81421932 r __kstrtab_device_show_int
ffffffff81421942 r __kstrtab_device_store_int
ffffffff81421953 r __kstrtab_device_show_ulong
ffffffff81421965 r __kstrtab_device_store_ulong
ffffffff81421978 r __kstrtab_dev_driver_string
ffffffff8142198a r __kstrtab_subsys_system_register
ffffffff814219a1 r __kstrtab_subsys_interface_unregister
ffffffff814219bd r __kstrtab_subsys_interface_register
ffffffff814219d7 r __kstrtab_subsys_dev_iter_exit
ffffffff814219ec r __kstrtab_subsys_dev_iter_next
ffffffff81421a01 r __kstrtab_subsys_dev_iter_init
ffffffff81421a16 r __kstrtab_bus_sort_breadthfirst
ffffffff81421a2c r __kstrtab_bus_get_device_klist
ffffffff81421a41 r __kstrtab_bus_get_kset
ffffffff81421a4e r __kstrtab_bus_unregister_notifier
ffffffff81421a66 r __kstrtab_bus_register_notifier
ffffffff81421a7c r __kstrtab_bus_unregister
ffffffff81421a8b r __kstrtab___bus_register
ffffffff81421a9a r __kstrtab_device_reprobe
ffffffff81421aa9 r __kstrtab_bus_rescan_devices
ffffffff81421abc r __kstrtab_bus_for_each_drv
ffffffff81421acd r __kstrtab_subsys_find_device_by_id
ffffffff81421ae6 r __kstrtab_bus_find_device_by_name
ffffffff81421afe r __kstrtab_bus_find_device
ffffffff81421b0e r __kstrtab_bus_for_each_dev
ffffffff81421b1f r __kstrtab_bus_remove_file
ffffffff81421b2f r __kstrtab_bus_create_file
ffffffff81421b3f r __kstrtab_dev_set_drvdata
ffffffff81421b4f r __kstrtab_dev_get_drvdata
ffffffff81421b5f r __kstrtab_device_release_driver
ffffffff81421b75 r __kstrtab_driver_attach
ffffffff81421b83 r __kstrtab_device_attach
ffffffff81421b91 r __kstrtab_wait_for_device_probe
ffffffff81421ba7 r __kstrtab_device_bind_driver
ffffffff81421bba r __kstrtab_syscore_resume
ffffffff81421bc9 r __kstrtab_syscore_suspend
ffffffff81421bd9 r __kstrtab_unregister_syscore_ops
ffffffff81421bf0 r __kstrtab_register_syscore_ops
ffffffff81421c05 r __kstrtab_driver_find
ffffffff81421c11 r __kstrtab_driver_unregister
ffffffff81421c23 r __kstrtab_driver_register
ffffffff81421c33 r __kstrtab_driver_remove_file
ffffffff81421c46 r __kstrtab_driver_create_file
ffffffff81421c59 r __kstrtab_driver_find_device
ffffffff81421c6c r __kstrtab_driver_for_each_device
ffffffff81421c83 r __kstrtab_class_interface_unregister
ffffffff81421c9e r __kstrtab_class_interface_register
ffffffff81421cb7 r __kstrtab_class_destroy
ffffffff81421cc5 r __kstrtab_class_unregister
ffffffff81421cd6 r __kstrtab_class_remove_file
ffffffff81421ce8 r __kstrtab_class_create_file
ffffffff81421cfa r __kstrtab_class_compat_remove_link
ffffffff81421d13 r __kstrtab_class_compat_create_link
ffffffff81421d2c r __kstrtab_class_compat_unregister
ffffffff81421d44 r __kstrtab_class_compat_register
ffffffff81421d5a r __kstrtab_show_class_attr_string
ffffffff81421d71 r __kstrtab_class_find_device
ffffffff81421d83 r __kstrtab_class_for_each_device
ffffffff81421d99 r __kstrtab_class_dev_iter_exit
ffffffff81421dad r __kstrtab_class_dev_iter_next
ffffffff81421dc1 r __kstrtab_class_dev_iter_init
ffffffff81421dd5 r __kstrtab___class_create
ffffffff81421de4 r __kstrtab___class_register
ffffffff81421df5 r __kstrtab_dma_get_required_mask
ffffffff81421e0b r __kstrtab_platform_bus_type
ffffffff81421e1d r __kstrtab_platform_create_bundle
ffffffff81421e34 r __kstrtab_platform_driver_probe
ffffffff81421e4a r __kstrtab_platform_driver_unregister
ffffffff81421e65 r __kstrtab_platform_driver_register
ffffffff81421e7e r __kstrtab_platform_device_register_full
ffffffff81421e9c r __kstrtab_platform_device_unregister
ffffffff81421eb7 r __kstrtab_platform_device_register
ffffffff81421ed0 r __kstrtab_platform_device_del
ffffffff81421ee4 r __kstrtab_platform_device_add
ffffffff81421ef8 r __kstrtab_platform_device_add_data
ffffffff81421f11 r __kstrtab_platform_device_add_resources
ffffffff81421f2f r __kstrtab_platform_device_alloc
ffffffff81421f45 r __kstrtab_platform_device_put
ffffffff81421f59 r __kstrtab_platform_add_devices
ffffffff81421f6e r __kstrtab_platform_get_irq_byname
ffffffff81421f86 r __kstrtab_platform_get_resource_byname
ffffffff81421fa3 r __kstrtab_platform_get_irq
ffffffff81421fb4 r __kstrtab_platform_get_resource
ffffffff81421fca r __kstrtab_platform_bus
ffffffff81421fd7 r __kstrtab_cpu_is_hotpluggable
ffffffff81421feb r __kstrtab_get_cpu_device
ffffffff81421ffa r __kstrtab_cpu_subsys
ffffffff81422005 r __kstrtab_firmware_kobj
ffffffff81422013 r __kstrtab_devm_kfree
ffffffff8142201e r __kstrtab_devm_kzalloc
ffffffff8142202b r __kstrtab_devres_release_group
ffffffff81422040 r __kstrtab_devres_remove_group
ffffffff81422054 r __kstrtab_devres_close_group
ffffffff81422067 r __kstrtab_devres_open_group
ffffffff81422079 r __kstrtab_devres_destroy
ffffffff81422088 r __kstrtab_devres_remove
ffffffff81422096 r __kstrtab_devres_get
ffffffff814220a1 r __kstrtab_devres_find
ffffffff814220ad r __kstrtab_devres_add
ffffffff814220b8 r __kstrtab_devres_free
ffffffff814220c4 r __kstrtab_devres_alloc
ffffffff814220d1 r __kstrtab_attribute_container_find_class_device
ffffffff814220f7 r __kstrtab_attribute_container_unregister
ffffffff81422116 r __kstrtab_attribute_container_register
ffffffff81422133 r __kstrtab_attribute_container_classdev_to_container
ffffffff8142215d r __kstrtab_transport_destroy_device
ffffffff81422176 r __kstrtab_transport_remove_device
ffffffff8142218e r __kstrtab_transport_configure_device
ffffffff814221a9 r __kstrtab_transport_add_device
ffffffff814221be r __kstrtab_transport_setup_device
ffffffff814221d5 r __kstrtab_anon_transport_class_unregister
ffffffff814221f5 r __kstrtab_anon_transport_class_register
ffffffff81422213 r __kstrtab_transport_class_unregister
ffffffff8142222e r __kstrtab_transport_class_register
ffffffff81422247 r __kstrtab_power_group_name
ffffffff81422258 r __kstrtab_pm_generic_restore
ffffffff8142226b r __kstrtab_pm_generic_restore_early
ffffffff81422284 r __kstrtab_pm_generic_restore_noirq
ffffffff8142229d r __kstrtab_pm_generic_resume
ffffffff814222af r __kstrtab_pm_generic_resume_early
ffffffff814222c7 r __kstrtab_pm_generic_resume_noirq
ffffffff814222df r __kstrtab_pm_generic_thaw
ffffffff814222ef r __kstrtab_pm_generic_thaw_early
ffffffff81422305 r __kstrtab_pm_generic_thaw_noirq
ffffffff8142231b r __kstrtab_pm_generic_poweroff
ffffffff8142232f r __kstrtab_pm_generic_poweroff_late
ffffffff81422348 r __kstrtab_pm_generic_poweroff_noirq
ffffffff81422362 r __kstrtab_pm_generic_freeze
ffffffff81422374 r __kstrtab_pm_generic_freeze_late
ffffffff8142238b r __kstrtab_pm_generic_freeze_noirq
ffffffff814223a3 r __kstrtab_pm_generic_suspend
ffffffff814223b6 r __kstrtab_pm_generic_suspend_late
ffffffff814223ce r __kstrtab_pm_generic_suspend_noirq
ffffffff814223e7 r __kstrtab_pm_generic_runtime_resume
ffffffff81422401 r __kstrtab_pm_generic_runtime_suspend
ffffffff8142241c r __kstrtab_pm_generic_runtime_idle
ffffffff81422434 r __kstrtab_dev_pm_put_subsys_data
ffffffff8142244b r __kstrtab_dev_pm_get_subsys_data
ffffffff81422462 r __kstrtab_dev_pm_qos_hide_latency_limit
ffffffff81422480 r __kstrtab_dev_pm_qos_expose_latency_limit
ffffffff814224a0 r __kstrtab_dev_pm_qos_add_ancestor_request
ffffffff814224c0 r __kstrtab_dev_pm_qos_remove_global_notifier
ffffffff814224e2 r __kstrtab_dev_pm_qos_add_global_notifier
ffffffff81422501 r __kstrtab_dev_pm_qos_remove_notifier
ffffffff8142251c r __kstrtab_dev_pm_qos_add_notifier
ffffffff81422534 r __kstrtab_dev_pm_qos_remove_request
ffffffff8142254e r __kstrtab_dev_pm_qos_update_request
ffffffff81422568 r __kstrtab_dev_pm_qos_add_request
ffffffff8142257f r __kstrtab_device_pm_wait_for_dev
ffffffff81422596 r __kstrtab___suspend_report_result
ffffffff814225ae r __kstrtab_dpm_suspend_start
ffffffff814225c0 r __kstrtab_dpm_suspend_end
ffffffff814225d0 r __kstrtab_dpm_resume_end
ffffffff814225df r __kstrtab_dpm_resume_start
ffffffff814225f0 r __kstrtab_pm_wakeup_event
ffffffff81422600 r __kstrtab___pm_wakeup_event
ffffffff81422612 r __kstrtab_pm_relax
ffffffff8142261b r __kstrtab___pm_relax
ffffffff81422626 r __kstrtab_pm_stay_awake
ffffffff81422634 r __kstrtab___pm_stay_awake
ffffffff81422644 r __kstrtab_device_set_wakeup_enable
ffffffff8142265d r __kstrtab_device_init_wakeup
ffffffff81422670 r __kstrtab_device_set_wakeup_capable
ffffffff8142268a r __kstrtab_device_wakeup_disable
ffffffff814226a0 r __kstrtab_device_wakeup_enable
ffffffff814226b5 r __kstrtab_wakeup_source_unregister
ffffffff814226ce r __kstrtab_wakeup_source_register
ffffffff814226e5 r __kstrtab_wakeup_source_remove
ffffffff814226fa r __kstrtab_wakeup_source_add
ffffffff8142270c r __kstrtab_wakeup_source_destroy
ffffffff81422722 r __kstrtab_wakeup_source_drop
ffffffff81422735 r __kstrtab_wakeup_source_create
ffffffff8142274a r __kstrtab_wakeup_source_prepare
ffffffff81422760 r __kstrtab___pm_runtime_use_autosuspend
ffffffff8142277d r __kstrtab_pm_runtime_set_autosuspend_delay
ffffffff8142279e r __kstrtab_pm_runtime_irq_safe
ffffffff814227b2 r __kstrtab_pm_runtime_no_callbacks
ffffffff814227ca r __kstrtab_pm_runtime_allow
ffffffff814227db r __kstrtab_pm_runtime_forbid
ffffffff814227ed r __kstrtab_pm_runtime_enable
ffffffff814227ff r __kstrtab___pm_runtime_disable
ffffffff81422814 r __kstrtab_pm_runtime_barrier
ffffffff81422827 r __kstrtab___pm_runtime_set_status
ffffffff8142283f r __kstrtab___pm_runtime_resume
ffffffff81422853 r __kstrtab___pm_runtime_suspend
ffffffff81422868 r __kstrtab___pm_runtime_idle
ffffffff8142287a r __kstrtab_pm_schedule_suspend
ffffffff8142288e r __kstrtab_pm_runtime_autosuspend_expiration
ffffffff814228b0 r __kstrtab_dmam_free_noncoherent
ffffffff814228c6 r __kstrtab_dmam_alloc_noncoherent
ffffffff814228dd r __kstrtab_dmam_free_coherent
ffffffff814228f0 r __kstrtab_dmam_alloc_coherent
ffffffff81422904 r __kstrtab_request_firmware_nowait
ffffffff8142291c r __kstrtab_request_firmware
ffffffff8142292d r __kstrtab_release_firmware
ffffffff8142293e r __kstrtab_hypervisor_kobj
ffffffff8142294e r __kstrtab_scsi_device_lookup
ffffffff81422961 r __kstrtab___scsi_device_lookup
ffffffff81422976 r __kstrtab_scsi_device_lookup_by_target
ffffffff81422993 r __kstrtab___scsi_device_lookup_by_target
ffffffff814229b2 r __kstrtab___starget_for_each_device
ffffffff814229cc r __kstrtab_starget_for_each_device
ffffffff814229e4 r __kstrtab___scsi_iterate_devices
ffffffff814229fb r __kstrtab_scsi_device_put
ffffffff81422a0b r __kstrtab_scsi_device_get
ffffffff81422a1b r __kstrtab_scsi_get_vpd_page
ffffffff81422a2d r __kstrtab_scsi_track_queue_full
ffffffff81422a43 r __kstrtab_scsi_adjust_queue_depth
ffffffff81422a5b r __kstrtab_scsi_finish_command
ffffffff81422a6f r __kstrtab_scsi_cmd_get_serial
ffffffff81422a83 r __kstrtab_scsi_free_command
ffffffff81422a95 r __kstrtab_scsi_allocate_command
ffffffff81422aab r __kstrtab_scsi_put_command
ffffffff81422abc r __kstrtab___scsi_put_command
ffffffff81422acf r __kstrtab_scsi_get_command
ffffffff81422ae0 r __kstrtab___scsi_get_command
ffffffff81422af3 r __kstrtab_scsi_device_type
ffffffff81422b04 r __kstrtab_scsi_logging_level
ffffffff81422b17 r __kstrtab_scsi_flush_work
ffffffff81422b27 r __kstrtab_scsi_queue_work
ffffffff81422b37 r __kstrtab_scsi_is_host_device
ffffffff81422b4b r __kstrtab_scsi_host_put
ffffffff81422b59 r __kstrtab_scsi_host_get
ffffffff81422b67 r __kstrtab_scsi_host_lookup
ffffffff81422b78 r __kstrtab_scsi_unregister
ffffffff81422b88 r __kstrtab_scsi_register
ffffffff81422b96 r __kstrtab_scsi_host_alloc
ffffffff81422ba6 r __kstrtab_scsi_add_host_with_dma
ffffffff81422bbd r __kstrtab_scsi_remove_host
ffffffff81422bce r __kstrtab_scsi_host_set_state
ffffffff81422be2 r __kstrtab_scsi_nonblockable_ioctl
ffffffff81422bfa r __kstrtab_scsi_ioctl
ffffffff81422c05 r __kstrtab_scsi_set_medium_removal
ffffffff81422c1d r __kstrtab_scsi_print_result
ffffffff81422c2f r __kstrtab_scsi_show_result
ffffffff81422c40 r __kstrtab_scsi_print_sense
ffffffff81422c51 r __kstrtab___scsi_print_sense
ffffffff81422c64 r __kstrtab_scsi_cmd_print_sense_hdr
ffffffff81422c7d r __kstrtab_scsi_print_sense_hdr
ffffffff81422c92 r __kstrtab_scsi_show_sense_hdr
ffffffff81422ca6 r __kstrtab_scsi_show_extd_sense
ffffffff81422cbb r __kstrtab_scsi_extd_sense_format
ffffffff81422cd2 r __kstrtab_scsi_sense_key_string
ffffffff81422ce8 r __kstrtab_scsi_print_status
ffffffff81422cfa r __kstrtab_scsi_print_command
ffffffff81422d0d r __kstrtab___scsi_print_command
ffffffff81422d22 r __kstrtab_scsi_partsize
ffffffff81422d30 r __kstrtab_scsicam_bios_param
ffffffff81422d43 r __kstrtab_scsi_bios_ptable
ffffffff81422d54 r __kstrtab_scsi_build_sense_buffer
ffffffff81422d6c r __kstrtab_scsi_get_sense_info_fld
ffffffff81422d84 r __kstrtab_scsi_sense_desc_find
ffffffff81422d99 r __kstrtab_scsi_command_normalize_sense
ffffffff81422db6 r __kstrtab_scsi_normalize_sense
ffffffff81422dcb r __kstrtab_scsi_reset_provider
ffffffff81422ddf r __kstrtab_scsi_report_device_reset
ffffffff81422df8 r __kstrtab_scsi_report_bus_reset
ffffffff81422e0e r __kstrtab_scsi_eh_flush_done_q
ffffffff81422e23 r __kstrtab_scsi_eh_ready_devs
ffffffff81422e36 r __kstrtab_scsi_eh_get_sense
ffffffff81422e48 r __kstrtab_scsi_eh_finish_cmd
ffffffff81422e5b r __kstrtab_scsi_eh_restore_cmnd
ffffffff81422e70 r __kstrtab_scsi_eh_prep_cmnd
ffffffff81422e82 r __kstrtab_scsi_block_when_processing_errors
ffffffff81422ea4 r __kstrtab_scsi_schedule_eh
ffffffff81422eb5 r __kstrtab_scsi_kunmap_atomic_sg
ffffffff81422ecb r __kstrtab_scsi_kmap_atomic_sg
ffffffff81422edf r __kstrtab_scsi_target_unblock
ffffffff81422ef3 r __kstrtab_scsi_target_block
ffffffff81422f05 r __kstrtab_scsi_internal_device_unblock
ffffffff81422f22 r __kstrtab_scsi_internal_device_block
ffffffff81422f3d r __kstrtab_scsi_target_resume
ffffffff81422f50 r __kstrtab_scsi_target_quiesce
ffffffff81422f64 r __kstrtab_scsi_device_resume
ffffffff81422f77 r __kstrtab_scsi_device_quiesce
ffffffff81422f8b r __kstrtab_sdev_evt_send_simple
ffffffff81422fa0 r __kstrtab_sdev_evt_alloc
ffffffff81422faf r __kstrtab_sdev_evt_send
ffffffff81422fbd r __kstrtab_scsi_device_set_state
ffffffff81422fd3 r __kstrtab_scsi_test_unit_ready
ffffffff81422fe8 r __kstrtab_scsi_mode_sense
ffffffff81422ff8 r __kstrtab_scsi_mode_select
ffffffff81423009 r __kstrtab_scsi_unblock_requests
ffffffff8142301f r __kstrtab_scsi_block_requests
ffffffff81423033 r __kstrtab___scsi_alloc_queue
ffffffff81423046 r __kstrtab_scsi_calculate_bounce_limit
ffffffff81423062 r __kstrtab_scsi_prep_fn
ffffffff8142306f r __kstrtab_scsi_prep_return
ffffffff81423080 r __kstrtab_scsi_prep_state_check
ffffffff81423096 r __kstrtab_scsi_setup_fs_cmnd
ffffffff814230a9 r __kstrtab_scsi_setup_blk_pc_cmnd
ffffffff814230c0 r __kstrtab_scsi_init_io
ffffffff814230cd r __kstrtab_scsi_release_buffers
ffffffff814230e2 r __kstrtab_scsi_execute_req
ffffffff814230f3 r __kstrtab_scsi_execute
ffffffff81423100 r __kstrtab_scsi_dma_unmap
ffffffff8142310f r __kstrtab_scsi_dma_map
ffffffff8142311c r __kstrtab_scsi_free_host_dev
ffffffff8142312f r __kstrtab_scsi_get_host_dev
ffffffff81423141 r __kstrtab_scsi_scan_host
ffffffff81423150 r __kstrtab_scsi_scan_target
ffffffff81423161 r __kstrtab_scsi_rescan_device
ffffffff81423174 r __kstrtab_scsi_add_device
ffffffff81423184 r __kstrtab___scsi_add_device
ffffffff81423196 r __kstrtab_int_to_scsilun
ffffffff814231a5 r __kstrtab_scsilun_to_int
ffffffff814231b4 r __kstrtab_scsi_is_target_device
ffffffff814231ca r __kstrtab_scsi_complete_async_scans
ffffffff814231e4 r __kstrtab_scsi_is_sdev_device
ffffffff814231f8 r __kstrtab_scsi_register_interface
ffffffff81423210 r __kstrtab_scsi_register_driver
ffffffff81423225 r __kstrtab_scsi_remove_target
ffffffff81423238 r __kstrtab_scsi_remove_device
ffffffff8142324b r __kstrtab_scsi_bus_type
ffffffff81423259 r __kstrtab_scsi_dev_info_remove_list
ffffffff81423273 r __kstrtab_scsi_dev_info_add_list
ffffffff8142328a r __kstrtab_scsi_get_device_flags_keyed
ffffffff814232a6 r __kstrtab_scsi_dev_info_list_del_keyed
ffffffff814232c3 r __kstrtab_scsi_dev_info_list_add_keyed
ffffffff814232e0 r __kstrtab_scsi_autopm_put_device
ffffffff814232f7 r __kstrtab_scsi_autopm_get_device
ffffffff8142330e r __kstrtab_ata_cable_sata
ffffffff8142331d r __kstrtab_ata_cable_ignore
ffffffff8142332e r __kstrtab_ata_cable_unknown
ffffffff81423340 r __kstrtab_ata_cable_80wire
ffffffff81423351 r __kstrtab_ata_cable_40wire
ffffffff81423362 r __kstrtab_ata_std_error_handler
ffffffff81423378 r __kstrtab_ata_do_eh
ffffffff81423382 r __kstrtab_ata_eh_analyze_ncq_error
ffffffff8142339b r __kstrtab_ata_eh_qc_retry
ffffffff814233ab r __kstrtab_ata_eh_qc_complete
ffffffff814233be r __kstrtab_ata_eh_thaw_port
ffffffff814233cf r __kstrtab_ata_eh_freeze_port
ffffffff814233e2 r __kstrtab_sata_async_notification
ffffffff814233fa r __kstrtab_ata_port_freeze
ffffffff8142340a r __kstrtab_ata_port_abort
ffffffff81423419 r __kstrtab_ata_link_abort
ffffffff81423428 r __kstrtab_ata_port_schedule_eh
ffffffff8142343d r __kstrtab_ata_port_pbar_desc
ffffffff81423450 r __kstrtab_ata_port_desc
ffffffff8142345e r __kstrtab_ata_ehi_clear_desc
ffffffff81423471 r __kstrtab_ata_ehi_push_desc
ffffffff81423483 r __kstrtab___ata_ehi_push_desc
ffffffff81423497 r __kstrtab_ata_pci_device_resume
ffffffff814234ad r __kstrtab_ata_pci_device_suspend
ffffffff814234c4 r __kstrtab_ata_pci_device_do_resume
ffffffff814234dd r __kstrtab_ata_pci_device_do_suspend
ffffffff814234f7 r __kstrtab_ata_pci_remove_one
ffffffff8142350a r __kstrtab_pci_test_config_bits
ffffffff8142351f r __kstrtab_ata_timing_cycle2mode
ffffffff81423535 r __kstrtab_ata_timing_merge
ffffffff81423546 r __kstrtab_ata_timing_compute
ffffffff81423559 r __kstrtab_ata_timing_find_mode
ffffffff8142356e r __kstrtab_ata_pio_need_iordy
ffffffff81423581 r __kstrtab_ata_scsi_simulate
ffffffff81423593 r __kstrtab_ata_do_dev_read_id
ffffffff814235a6 r __kstrtab_ata_id_c_string
ffffffff814235b6 r __kstrtab_ata_id_string
ffffffff814235c4 r __kstrtab_ata_host_resume
ffffffff814235d4 r __kstrtab_ata_host_suspend
ffffffff814235e5 r __kstrtab_ata_link_offline
ffffffff814235f6 r __kstrtab_ata_link_online
ffffffff81423606 r __kstrtab_sata_scr_write_flush
ffffffff8142361b r __kstrtab_sata_scr_write
ffffffff8142362a r __kstrtab_sata_scr_read
ffffffff81423638 r __kstrtab_sata_scr_valid
ffffffff81423647 r __kstrtab___ata_change_queue_depth
ffffffff81423660 r __kstrtab_ata_scsi_change_queue_depth
ffffffff8142367c r __kstrtab_ata_scsi_slave_destroy
ffffffff81423693 r __kstrtab_ata_scsi_slave_config
ffffffff814236a9 r __kstrtab_ata_scsi_queuecmd
ffffffff814236bb r __kstrtab_ata_wait_register
ffffffff814236cd r __kstrtab_ata_msleep
ffffffff814236d8 r __kstrtab_ata_ratelimit
ffffffff814236e6 r __kstrtab_ata_dev_pair
ffffffff814236f3 r __kstrtab_ata_dev_classify
ffffffff81423704 r __kstrtab_ata_std_postreset
ffffffff81423716 r __kstrtab_sata_std_hardreset
ffffffff81423729 r __kstrtab_sata_link_hardreset
ffffffff8142373d r __kstrtab_ata_std_prereset
ffffffff8142374e r __kstrtab_sata_link_scr_lpm
ffffffff81423760 r __kstrtab_sata_link_resume
ffffffff81423771 r __kstrtab_sata_link_debounce
ffffffff81423784 r __kstrtab_ata_wait_after_reset
ffffffff81423799 r __kstrtab_sata_set_spd
ffffffff814237a6 r __kstrtab_ata_dev_disable
ffffffff814237b6 r __kstrtab_ata_noop_qc_prep
ffffffff814237c7 r __kstrtab_ata_std_qc_defer
ffffffff814237d8 r __kstrtab_ata_do_set_mode
ffffffff814237e8 r __kstrtab_ata_id_xfermask
ffffffff814237f8 r __kstrtab_ata_mode_string
ffffffff81423808 r __kstrtab_ata_xfer_mode2shift
ffffffff8142381c r __kstrtab_ata_xfer_mode2mask
ffffffff8142382f r __kstrtab_ata_xfer_mask2mode
ffffffff81423842 r __kstrtab_ata_unpack_xfermask
ffffffff81423856 r __kstrtab_ata_pack_xfermask
ffffffff81423868 r __kstrtab_ata_tf_from_fis
ffffffff81423878 r __kstrtab_ata_tf_to_fis
ffffffff81423886 r __kstrtab_atapi_cmd_type
ffffffff81423895 r __kstrtab_ata_qc_complete_multiple
ffffffff814238ae r __kstrtab_ata_qc_complete
ffffffff814238be r __kstrtab_ata_sg_init
ffffffff814238ca r __kstrtab_ata_host_detach
ffffffff814238da r __kstrtab_ata_host_activate
ffffffff814238ec r __kstrtab_ata_host_register
ffffffff814238fe r __kstrtab_ata_host_start
ffffffff8142390d r __kstrtab_ata_slave_link_init
ffffffff81423921 r __kstrtab_ata_host_alloc_pinfo
ffffffff81423936 r __kstrtab_ata_host_alloc
ffffffff81423945 r __kstrtab_ata_host_init
ffffffff81423953 r __kstrtab_ata_scsi_unlock_native_capacity
ffffffff81423973 r __kstrtab_ata_std_bios_param
ffffffff81423986 r __kstrtab_ata_dev_next
ffffffff81423993 r __kstrtab_ata_link_next
ffffffff814239a1 r __kstrtab_ata_dummy_port_info
ffffffff814239b5 r __kstrtab_ata_dummy_port_ops
ffffffff814239c8 r __kstrtab_sata_port_ops
ffffffff814239d6 r __kstrtab_ata_base_port_ops
ffffffff814239e8 r __kstrtab_sata_deb_timing_long
ffffffff814239fd r __kstrtab_sata_deb_timing_hotplug
ffffffff81423a15 r __kstrtab_sata_deb_timing_normal
ffffffff81423a2c r __kstrtab_ata_print_version
ffffffff81423a3e r __kstrtab_ata_dev_printk
ffffffff81423a4d r __kstrtab_ata_link_printk
ffffffff81423a5d r __kstrtab_ata_port_printk
ffffffff81423a6d r __kstrtab_ata_sas_queuecmd
ffffffff81423a7e r __kstrtab_ata_sas_slave_configure
ffffffff81423a96 r __kstrtab_ata_sas_port_destroy
ffffffff81423aab r __kstrtab_ata_sas_port_init
ffffffff81423abd r __kstrtab_ata_sas_sync_probe
ffffffff81423ad0 r __kstrtab_ata_sas_async_probe
ffffffff81423ae4 r __kstrtab_ata_sas_port_stop
ffffffff81423af6 r __kstrtab_ata_sas_port_start
ffffffff81423b09 r __kstrtab_ata_sas_port_alloc
ffffffff81423b1c r __kstrtab_ata_scsi_ioctl
ffffffff81423b2b r __kstrtab_ata_sas_scsi_ioctl
ffffffff81423b3e r __kstrtab_ata_common_sdev_attrs
ffffffff81423b54 r __kstrtab_dev_attr_sw_activity
ffffffff81423b69 r __kstrtab_dev_attr_em_message_type
ffffffff81423b82 r __kstrtab_dev_attr_em_message
ffffffff81423b96 r __kstrtab_dev_attr_unload_heads
ffffffff81423bac r __kstrtab_dev_attr_link_power_management_policy
ffffffff81423bd2 r __kstrtab_ata_port_wait_eh
ffffffff81423be3 r __kstrtab_ata_scsi_port_error_handler
ffffffff81423bff r __kstrtab_ata_scsi_cmd_error_handler
ffffffff81423c1a r __kstrtab_ata_pci_bmdma_init_one
ffffffff81423c31 r __kstrtab_ata_pci_bmdma_prepare_host
ffffffff81423c4c r __kstrtab_ata_pci_bmdma_init
ffffffff81423c5f r __kstrtab_ata_pci_bmdma_clear_simplex
ffffffff81423c7b r __kstrtab_ata_bmdma_port_start32
ffffffff81423c92 r __kstrtab_ata_bmdma_port_start
ffffffff81423ca7 r __kstrtab_ata_bmdma_status
ffffffff81423cb8 r __kstrtab_ata_bmdma_stop
ffffffff81423cc7 r __kstrtab_ata_bmdma_start
ffffffff81423cd7 r __kstrtab_ata_bmdma_setup
ffffffff81423ce7 r __kstrtab_ata_bmdma_irq_clear
ffffffff81423cfb r __kstrtab_ata_bmdma_post_internal_cmd
ffffffff81423d17 r __kstrtab_ata_bmdma_error_handler
ffffffff81423d2f r __kstrtab_ata_bmdma_interrupt
ffffffff81423d43 r __kstrtab_ata_bmdma_port_intr
ffffffff81423d57 r __kstrtab_ata_bmdma_qc_issue
ffffffff81423d6a r __kstrtab_ata_bmdma_dumb_qc_prep
ffffffff81423d81 r __kstrtab_ata_bmdma_qc_prep
ffffffff81423d93 r __kstrtab_ata_bmdma32_port_ops
ffffffff81423da8 r __kstrtab_ata_bmdma_port_ops
ffffffff81423dbb r __kstrtab_ata_pci_sff_init_one
ffffffff81423dd0 r __kstrtab_ata_pci_sff_activate_host
ffffffff81423dea r __kstrtab_ata_pci_sff_prepare_host
ffffffff81423e03 r __kstrtab_ata_pci_sff_init_host
ffffffff81423e19 r __kstrtab_ata_sff_std_ports
ffffffff81423e2b r __kstrtab_ata_sff_error_handler
ffffffff81423e41 r __kstrtab_ata_sff_drain_fifo
ffffffff81423e54 r __kstrtab_ata_sff_postreset
ffffffff81423e66 r __kstrtab_sata_sff_hardreset
ffffffff81423e79 r __kstrtab_ata_sff_softreset
ffffffff81423e8b r __kstrtab_ata_sff_wait_after_reset
ffffffff81423ea4 r __kstrtab_ata_sff_dev_classify
ffffffff81423eb9 r __kstrtab_ata_sff_prereset
ffffffff81423eca r __kstrtab_ata_sff_thaw
ffffffff81423ed7 r __kstrtab_ata_sff_freeze
ffffffff81423ee6 r __kstrtab_ata_sff_lost_interrupt
ffffffff81423efd r __kstrtab_ata_sff_interrupt
ffffffff81423f0f r __kstrtab_ata_sff_port_intr
ffffffff81423f21 r __kstrtab_ata_sff_qc_fill_rtf
ffffffff81423f35 r __kstrtab_ata_sff_qc_issue
ffffffff81423f46 r __kstrtab_ata_sff_queue_pio_task
ffffffff81423f5d r __kstrtab_ata_sff_queue_delayed_work
ffffffff81423f78 r __kstrtab_ata_sff_queue_work
ffffffff81423f8b r __kstrtab_ata_sff_hsm_move
ffffffff81423f9c r __kstrtab_ata_sff_data_xfer_noirq
ffffffff81423fb4 r __kstrtab_ata_sff_data_xfer32
ffffffff81423fc8 r __kstrtab_ata_sff_data_xfer
ffffffff81423fda r __kstrtab_ata_sff_exec_command
ffffffff81423fef r __kstrtab_ata_sff_tf_read
ffffffff81423fff r __kstrtab_ata_sff_tf_load
ffffffff8142400f r __kstrtab_ata_sff_irq_on
ffffffff8142401e r __kstrtab_ata_sff_dev_select
ffffffff81424031 r __kstrtab_ata_sff_wait_ready
ffffffff81424044 r __kstrtab_ata_sff_busy_sleep
ffffffff81424057 r __kstrtab_ata_sff_dma_pause
ffffffff81424069 r __kstrtab_ata_sff_pause
ffffffff81424077 r __kstrtab_ata_sff_check_status
ffffffff8142408c r __kstrtab_ata_sff_port_ops
ffffffff8142409d r __kstrtab_ata_acpi_cbl_80wire
ffffffff814240b1 r __kstrtab_ata_acpi_gtm_xfermask
ffffffff814240c7 r __kstrtab_ata_acpi_stm
ffffffff814240d4 r __kstrtab_ata_acpi_gtm
ffffffff814240e1 r __kstrtab_usb_enable_xhci_ports
ffffffff814240f7 r __kstrtab_usb_is_intel_switchable_xhci
ffffffff81424114 r __kstrtab_uhci_check_and_reset_hc
ffffffff8142412c r __kstrtab_uhci_reset_hc
ffffffff8142413a r __kstrtab_usb_amd_dev_put
ffffffff8142414a r __kstrtab_usb_amd_quirk_pll_enable
ffffffff81424163 r __kstrtab_usb_amd_quirk_pll_disable
ffffffff8142417d r __kstrtab_usb_amd_find_chipset_info
ffffffff81424197 r __kstrtab_serio_interrupt
ffffffff814241a7 r __kstrtab_serio_close
ffffffff814241b3 r __kstrtab_serio_open
ffffffff814241be r __kstrtab_serio_unregister_driver
ffffffff814241d6 r __kstrtab___serio_register_driver
ffffffff814241ee r __kstrtab_serio_unregister_child_port
ffffffff8142420a r __kstrtab_serio_unregister_port
ffffffff81424220 r __kstrtab___serio_register_port
ffffffff81424236 r __kstrtab_serio_reconnect
ffffffff81424246 r __kstrtab_serio_rescan
ffffffff81424253 r __kstrtab_i8042_check_port_owner
ffffffff8142426a r __kstrtab_i8042_command
ffffffff81424278 r __kstrtab_i8042_remove_filter
ffffffff8142428c r __kstrtab_i8042_install_filter
ffffffff814242a1 r __kstrtab_i8042_unlock_chip
ffffffff814242b3 r __kstrtab_i8042_lock_chip
ffffffff814242c3 r __kstrtab_ps2_cmd_aborted
ffffffff814242d3 r __kstrtab_ps2_handle_response
ffffffff814242e7 r __kstrtab_ps2_handle_ack
ffffffff814242f6 r __kstrtab_ps2_init
ffffffff814242ff r __kstrtab_ps2_command
ffffffff8142430b r __kstrtab___ps2_command
ffffffff81424319 r __kstrtab_ps2_is_keyboard_id
ffffffff8142432c r __kstrtab_ps2_drain
ffffffff81424336 r __kstrtab_ps2_end_command
ffffffff81424346 r __kstrtab_ps2_begin_command
ffffffff81424358 r __kstrtab_ps2_sendbyte
ffffffff81424365 r __kstrtab_input_unregister_handle
ffffffff8142437d r __kstrtab_input_register_handle
ffffffff81424393 r __kstrtab_input_handler_for_each_handle
ffffffff814243b1 r __kstrtab_input_unregister_handler
ffffffff814243ca r __kstrtab_input_register_handler
ffffffff814243e1 r __kstrtab_input_unregister_device
ffffffff814243f9 r __kstrtab_input_register_device
ffffffff8142440f r __kstrtab_input_set_capability
ffffffff81424424 r __kstrtab_input_free_device
ffffffff81424436 r __kstrtab_input_allocate_device
ffffffff8142444c r __kstrtab_input_class
ffffffff81424458 r __kstrtab_input_reset_device
ffffffff8142446b r __kstrtab_input_set_keycode
ffffffff8142447d r __kstrtab_input_get_keycode
ffffffff8142448f r __kstrtab_input_scancode_to_scalar
ffffffff814244a8 r __kstrtab_input_close_device
ffffffff814244bb r __kstrtab_input_flush_device
ffffffff814244ce r __kstrtab_input_open_device
ffffffff814244e0 r __kstrtab_input_release_device
ffffffff814244f5 r __kstrtab_input_grab_device
ffffffff81424507 r __kstrtab_input_set_abs_params
ffffffff8142451c r __kstrtab_input_alloc_absinfo
ffffffff81424530 r __kstrtab_input_inject_event
ffffffff81424543 r __kstrtab_input_event
ffffffff8142454f r __kstrtab_input_ff_effect_from_user
ffffffff81424569 r __kstrtab_input_event_to_user
ffffffff8142457d r __kstrtab_input_event_from_user
ffffffff81424593 r __kstrtab_input_mt_report_pointer_emulation
ffffffff814245b5 r __kstrtab_input_mt_report_finger_count
ffffffff814245d2 r __kstrtab_input_mt_report_slot_state
ffffffff814245ed r __kstrtab_input_mt_destroy_slots
ffffffff81424604 r __kstrtab_input_mt_init_slots
ffffffff81424618 r __kstrtab_input_ff_destroy
ffffffff81424629 r __kstrtab_input_ff_create
ffffffff81424639 r __kstrtab_input_ff_event
ffffffff81424648 r __kstrtab_input_ff_erase
ffffffff81424657 r __kstrtab_input_ff_upload
ffffffff81424667 r __kstrtab_rtc_ktime_to_tm
ffffffff81424677 r __kstrtab_rtc_tm_to_ktime
ffffffff81424687 r __kstrtab_rtc_tm_to_time
ffffffff81424696 r __kstrtab_rtc_valid_tm
ffffffff814246a3 r __kstrtab_rtc_time_to_tm
ffffffff814246b2 r __kstrtab_rtc_year_days
ffffffff814246c0 r __kstrtab_rtc_month_days
ffffffff814246cf r __kstrtab_rtc_device_unregister
ffffffff814246e5 r __kstrtab_rtc_device_register
ffffffff814246f9 r __kstrtab_rtc_irq_set_freq
ffffffff8142470a r __kstrtab_rtc_irq_set_state
ffffffff8142471c r __kstrtab_rtc_irq_unregister
ffffffff8142472f r __kstrtab_rtc_irq_register
ffffffff81424740 r __kstrtab_rtc_class_close
ffffffff81424750 r __kstrtab_rtc_class_open
ffffffff8142475f r __kstrtab_rtc_update_irq
ffffffff8142476e r __kstrtab_rtc_update_irq_enable
ffffffff81424784 r __kstrtab_rtc_alarm_irq_enable
ffffffff81424799 r __kstrtab_rtc_initialize_alarm
ffffffff814247ae r __kstrtab_rtc_set_alarm
ffffffff814247bc r __kstrtab_rtc_read_alarm
ffffffff814247cb r __kstrtab_rtc_set_mmss
ffffffff814247d8 r __kstrtab_rtc_set_time
ffffffff814247e5 r __kstrtab_rtc_read_time
ffffffff814247f3 r __kstrtab_watchdog_unregister_device
ffffffff8142480e r __kstrtab_watchdog_register_device
ffffffff81424827 r __kstrtab_edac_put_sysfs_subsys
ffffffff8142483d r __kstrtab_edac_get_sysfs_subsys
ffffffff81424853 r __kstrtab_edac_subsys
ffffffff8142485f r __kstrtab_edac_atomic_assert_error
ffffffff81424878 r __kstrtab_edac_handler_set
ffffffff81424889 r __kstrtab_edac_err_assert
ffffffff81424899 r __kstrtab_edac_handlers
ffffffff814248a7 r __kstrtab_edac_op_state
ffffffff814248b5 r __kstrtab_cpufreq_unregister_driver
ffffffff814248cf r __kstrtab_cpufreq_register_driver
ffffffff814248e7 r __kstrtab_cpufreq_update_policy
ffffffff814248fd r __kstrtab_cpufreq_get_policy
ffffffff81424910 r __kstrtab_cpufreq_unregister_governor
ffffffff8142492c r __kstrtab_cpufreq_register_governor
ffffffff81424946 r __kstrtab___cpufreq_driver_getavg
ffffffff8142495e r __kstrtab_cpufreq_driver_target
ffffffff81424974 r __kstrtab___cpufreq_driver_target
ffffffff8142498c r __kstrtab_cpufreq_unregister_notifier
ffffffff814249a8 r __kstrtab_cpufreq_register_notifier
ffffffff814249c2 r __kstrtab_cpufreq_get
ffffffff814249ce r __kstrtab_cpufreq_quick_get_max
ffffffff814249e4 r __kstrtab_cpufreq_quick_get
ffffffff814249f6 r __kstrtab_cpufreq_global_kobject
ffffffff81424a0d r __kstrtab_cpufreq_notify_transition
ffffffff81424a27 r __kstrtab_cpufreq_cpu_put
ffffffff81424a37 r __kstrtab_cpufreq_cpu_get
ffffffff81424a47 r __kstrtab_cpufreq_frequency_get_table
ffffffff81424a63 r __kstrtab_cpufreq_frequency_table_put_attr
ffffffff81424a84 r __kstrtab_cpufreq_frequency_table_get_attr
ffffffff81424aa5 r __kstrtab_cpufreq_freq_attr_scaling_available_freqs
ffffffff81424acf r __kstrtab_cpufreq_frequency_table_target
ffffffff81424aee r __kstrtab_cpufreq_frequency_table_verify
ffffffff81424b0d r __kstrtab_cpufreq_frequency_table_cpuinfo
ffffffff81424b2d r __kstrtab_cpuidle_unregister_device
ffffffff81424b47 r __kstrtab_cpuidle_register_device
ffffffff81424b5f r __kstrtab_cpuidle_disable_device
ffffffff81424b76 r __kstrtab_cpuidle_enable_device
ffffffff81424b8c r __kstrtab_cpuidle_resume_and_unlock
ffffffff81424ba6 r __kstrtab_cpuidle_pause_and_lock
ffffffff81424bbd r __kstrtab_cpuidle_unregister_driver
ffffffff81424bd7 r __kstrtab_cpuidle_get_driver
ffffffff81424bea r __kstrtab_cpuidle_register_driver
ffffffff81424c02 r __kstrtab_dmi_match
ffffffff81424c0c r __kstrtab_dmi_walk
ffffffff81424c15 r __kstrtab_dmi_get_date
ffffffff81424c22 r __kstrtab_dmi_find_device
ffffffff81424c32 r __kstrtab_dmi_name_in_vendors
ffffffff81424c46 r __kstrtab_dmi_get_system_info
ffffffff81424c5a r __kstrtab_dmi_first_match
ffffffff81424c6a r __kstrtab_dmi_check_system
ffffffff81424c7b r __kstrtab_ibft_addr
ffffffff81424c85 r __kstrtab_i8253_lock
ffffffff81424c90 r __kstrtab_hid_check_keys_pressed
ffffffff81424ca7 r __kstrtab_hid_unregister_driver
ffffffff81424cbd r __kstrtab___hid_register_driver
ffffffff81424cd3 r __kstrtab_hid_destroy_device
ffffffff81424ce6 r __kstrtab_hid_allocate_device
ffffffff81424cfa r __kstrtab_hid_add_device
ffffffff81424d09 r __kstrtab_hid_disconnect
ffffffff81424d18 r __kstrtab_hid_connect
ffffffff81424d24 r __kstrtab_hid_input_report
ffffffff81424d35 r __kstrtab_hid_report_raw_event
ffffffff81424d4a r __kstrtab_hid_set_field
ffffffff81424d58 r __kstrtab_hid_output_report
ffffffff81424d6a r __kstrtab_hid_parse_report
ffffffff81424d7b r __kstrtab_hid_register_report
ffffffff81424d8f r __kstrtab_hid_debug
ffffffff81424d99 r __kstrtab_hidinput_disconnect
ffffffff81424dad r __kstrtab_hidinput_connect
ffffffff81424dbe r __kstrtab_hidinput_count_leds
ffffffff81424dd2 r __kstrtab_hidinput_get_led_field
ffffffff81424de9 r __kstrtab_hidinput_find_field
ffffffff81424dfd r __kstrtab_hidinput_report_event
ffffffff81424e13 r __kstrtab_iommu_device_group
ffffffff81424e26 r __kstrtab_iommu_unmap
ffffffff81424e32 r __kstrtab_iommu_map
ffffffff81424e3c r __kstrtab_iommu_domain_has_cap
ffffffff81424e51 r __kstrtab_iommu_iova_to_phys
ffffffff81424e64 r __kstrtab_iommu_detach_device
ffffffff81424e78 r __kstrtab_iommu_attach_device
ffffffff81424e8c r __kstrtab_iommu_domain_free
ffffffff81424e9e r __kstrtab_iommu_domain_alloc
ffffffff81424eb1 r __kstrtab_iommu_set_fault_handler
ffffffff81424ec9 r __kstrtab_iommu_present
ffffffff81424ed7 r __kstrtab_bus_set_iommu
ffffffff81424ee5 r __kstrtab_amd_iommu_device_info
ffffffff81424efb r __kstrtab_amd_iommu_enable_device_erratum
ffffffff81424f1b r __kstrtab_amd_iommu_get_v2_domain
ffffffff81424f33 r __kstrtab_amd_iommu_complete_ppr
ffffffff81424f4a r __kstrtab_amd_iommu_domain_clear_gcr3
ffffffff81424f66 r __kstrtab_amd_iommu_domain_set_gcr3
ffffffff81424f80 r __kstrtab_amd_iommu_flush_tlb
ffffffff81424f94 r __kstrtab_amd_iommu_flush_page
ffffffff81424fa9 r __kstrtab_amd_iommu_domain_enable_v2
ffffffff81424fc4 r __kstrtab_amd_iommu_domain_direct_map
ffffffff81424fe0 r __kstrtab_amd_iommu_unregister_ppr_notifier
ffffffff81425002 r __kstrtab_amd_iommu_register_ppr_notifier
ffffffff81425022 r __kstrtab_amd_iommu_v2_supported
ffffffff81425039 r __kstrtab_pcibios_align_resource
ffffffff81425050 r __kstrtab_xen_unregister_device_domain_owner
ffffffff81425073 r __kstrtab_xen_register_device_domain_owner
ffffffff81425094 r __kstrtab_xen_find_device_domain_owner
ffffffff814250b1 r __kstrtab_xen_pci_frontend
ffffffff814250c2 r __kstrtab_pcibios_scan_specific_bus
ffffffff814250dc r __kstrtab_fb_is_primary_device
ffffffff814250f1 r __kstrtab_kernel_sock_shutdown
ffffffff81425106 r __kstrtab_kernel_sock_ioctl
ffffffff81425118 r __kstrtab_kernel_sendpage
ffffffff81425128 r __kstrtab_kernel_setsockopt
ffffffff8142513a r __kstrtab_kernel_getsockopt
ffffffff8142514c r __kstrtab_kernel_getpeername
ffffffff8142515f r __kstrtab_kernel_getsockname
ffffffff81425172 r __kstrtab_kernel_connect
ffffffff81425181 r __kstrtab_kernel_accept
ffffffff8142518f r __kstrtab_kernel_listen
ffffffff8142519d r __kstrtab_kernel_bind
ffffffff814251a9 r __kstrtab_sock_unregister
ffffffff814251b9 r __kstrtab_sock_register
ffffffff814251c7 r __kstrtab_sock_create_kern
ffffffff814251d8 r __kstrtab_sock_create
ffffffff814251e4 r __kstrtab___sock_create
ffffffff814251f2 r __kstrtab_sock_wake_async
ffffffff81425202 r __kstrtab_sock_create_lite
ffffffff81425213 r __kstrtab_dlci_ioctl_set
ffffffff81425222 r __kstrtab_vlan_ioctl_set
ffffffff81425231 r __kstrtab_brioctl_set
ffffffff8142523d r __kstrtab_kernel_recvmsg
ffffffff8142524c r __kstrtab_sock_recvmsg
ffffffff81425259 r __kstrtab___sock_recv_ts_and_drops
ffffffff81425272 r __kstrtab___sock_recv_wifi_status
ffffffff8142528a r __kstrtab___sock_recv_timestamp
ffffffff814252a0 r __kstrtab_kernel_sendmsg
ffffffff814252af r __kstrtab_sock_sendmsg
ffffffff814252bc r __kstrtab_sock_tx_timestamp
ffffffff814252ce r __kstrtab_sock_release
ffffffff814252db r __kstrtab_sockfd_lookup
ffffffff814252e9 r __kstrtab_sock_map_fd
ffffffff814252f5 r __kstrtab_proto_unregister
ffffffff81425306 r __kstrtab_proto_register
ffffffff81425315 r __kstrtab_sock_prot_inuse_get
ffffffff81425329 r __kstrtab_sock_prot_inuse_add
ffffffff8142533d r __kstrtab_sk_common_release
ffffffff8142534f r __kstrtab_compat_sock_common_setsockopt
ffffffff8142536d r __kstrtab_sock_common_setsockopt
ffffffff81425384 r __kstrtab_sock_common_recvmsg
ffffffff81425398 r __kstrtab_compat_sock_common_getsockopt
ffffffff814253b6 r __kstrtab_sock_common_getsockopt
ffffffff814253cd r __kstrtab_sock_get_timestampns
ffffffff814253e2 r __kstrtab_sock_get_timestamp
ffffffff814253f5 r __kstrtab_lock_sock_fast
ffffffff81425404 r __kstrtab_release_sock
ffffffff81425411 r __kstrtab_lock_sock_nested
ffffffff81425422 r __kstrtab_sock_init_data
ffffffff81425431 r __kstrtab_sk_stop_timer
ffffffff8142543f r __kstrtab_sk_reset_timer
ffffffff8142544e r __kstrtab_sk_send_sigurg
ffffffff8142545d r __kstrtab_sock_no_sendpage
ffffffff8142546e r __kstrtab_sock_no_mmap
ffffffff8142547b r __kstrtab_sock_no_recvmsg
ffffffff8142548b r __kstrtab_sock_no_sendmsg
ffffffff8142549b r __kstrtab_sock_no_getsockopt
ffffffff814254ae r __kstrtab_sock_no_setsockopt
ffffffff814254c1 r __kstrtab_sock_no_shutdown
ffffffff814254d2 r __kstrtab_sock_no_listen
ffffffff814254e1 r __kstrtab_sock_no_ioctl
ffffffff814254ef r __kstrtab_sock_no_poll
ffffffff814254fc r __kstrtab_sock_no_getname
ffffffff8142550c r __kstrtab_sock_no_accept
ffffffff8142551b r __kstrtab_sock_no_socketpair
ffffffff8142552e r __kstrtab_sock_no_connect
ffffffff8142553e r __kstrtab_sock_no_bind
ffffffff8142554b r __kstrtab___sk_mem_reclaim
ffffffff8142555c r __kstrtab___sk_mem_schedule
ffffffff8142556e r __kstrtab_sk_wait_data
ffffffff8142557b r __kstrtab_sock_alloc_send_skb
ffffffff8142558f r __kstrtab_sock_alloc_send_pskb
ffffffff814255a4 r __kstrtab_sock_kfree_s
ffffffff814255b1 r __kstrtab_sock_kmalloc
ffffffff814255be r __kstrtab_sock_wmalloc
ffffffff814255cb r __kstrtab_sock_i_ino
ffffffff814255d6 r __kstrtab_sock_i_uid
ffffffff814255e1 r __kstrtab_sock_rfree
ffffffff814255ec r __kstrtab_sock_wfree
ffffffff814255f7 r __kstrtab_sk_setup_caps
ffffffff81425605 r __kstrtab_sk_clone_lock
ffffffff81425613 r __kstrtab_sk_release_kernel
ffffffff81425625 r __kstrtab_sk_free
ffffffff8142562d r __kstrtab_sk_alloc
ffffffff81425636 r __kstrtab_sock_update_netprioidx
ffffffff8142564d r __kstrtab_sock_update_classid
ffffffff81425661 r __kstrtab_sk_prot_clear_portaddr_nulls
ffffffff8142567e r __kstrtab_cred_to_ucred
ffffffff8142568c r __kstrtab_sock_setsockopt
ffffffff8142569c r __kstrtab_sk_dst_check
ffffffff814256a9 r __kstrtab___sk_dst_check
ffffffff814256b8 r __kstrtab_sk_reset_txq
ffffffff814256c5 r __kstrtab_sk_receive_skb
ffffffff814256d4 r __kstrtab_sock_queue_rcv_skb
ffffffff814256e7 r __kstrtab_net_prio_subsys_id
ffffffff814256fa r __kstrtab_net_cls_subsys_id
ffffffff8142570c r __kstrtab_sysctl_optmem_max
ffffffff8142571e r __kstrtab_memcg_socket_limit_enabled
ffffffff81425739 r __kstrtab_sysctl_max_syn_backlog
ffffffff81425750 r __kstrtab___skb_warn_lro_forwarding
ffffffff8142576a r __kstrtab_skb_partial_csum_set
ffffffff8142577f r __kstrtab_skb_complete_wifi_ack
ffffffff81425795 r __kstrtab_skb_tstamp_tx
ffffffff814257a3 r __kstrtab_sock_queue_err_skb
ffffffff814257b6 r __kstrtab_skb_cow_data
ffffffff814257c3 r __kstrtab_skb_to_sgvec
ffffffff814257d0 r __kstrtab_skb_gro_receive
ffffffff814257e0 r __kstrtab_skb_segment
ffffffff814257ec r __kstrtab_skb_pull_rcsum
ffffffff814257fb r __kstrtab_skb_append_datato_frags
ffffffff81425813 r __kstrtab_skb_find_text
ffffffff81425821 r __kstrtab_skb_abort_seq_read
ffffffff81425834 r __kstrtab_skb_seq_read
ffffffff81425841 r __kstrtab_skb_prepare_seq_read
ffffffff81425856 r __kstrtab_skb_split
ffffffff81425860 r __kstrtab_skb_insert
ffffffff8142586b r __kstrtab_skb_append
ffffffff81425876 r __kstrtab_skb_unlink
ffffffff81425881 r __kstrtab_skb_queue_tail
ffffffff81425890 r __kstrtab_skb_queue_head
ffffffff8142589f r __kstrtab_skb_queue_purge
ffffffff814258af r __kstrtab_skb_dequeue_tail
ffffffff814258c0 r __kstrtab_skb_dequeue
ffffffff814258cc r __kstrtab_skb_copy_and_csum_dev
ffffffff814258e2 r __kstrtab_skb_copy_and_csum_bits
ffffffff814258f9 r __kstrtab_skb_checksum
ffffffff81425906 r __kstrtab_skb_store_bits
ffffffff81425915 r __kstrtab_skb_copy_bits
ffffffff81425923 r __kstrtab___pskb_pull_tail
ffffffff81425934 r __kstrtab____pskb_trim
ffffffff81425941 r __kstrtab_skb_trim
ffffffff8142594a r __kstrtab_skb_pull
ffffffff81425953 r __kstrtab_skb_push
ffffffff8142595c r __kstrtab_skb_put
ffffffff81425964 r __kstrtab_skb_pad
ffffffff8142596c r __kstrtab_skb_copy_expand
ffffffff8142597c r __kstrtab_skb_realloc_headroom
ffffffff81425991 r __kstrtab_pskb_expand_head
ffffffff814259a2 r __kstrtab___pskb_copy
ffffffff814259ae r __kstrtab_skb_copy
ffffffff814259b7 r __kstrtab_skb_clone
ffffffff814259c1 r __kstrtab_skb_morph
ffffffff814259cb r __kstrtab_skb_recycle_check
ffffffff814259dd r __kstrtab_skb_recycle
ffffffff814259e9 r __kstrtab_consume_skb
ffffffff814259f5 r __kstrtab_kfree_skb
ffffffff814259ff r __kstrtab___kfree_skb
ffffffff81425a0b r __kstrtab_dev_alloc_skb
ffffffff81425a19 r __kstrtab_skb_add_rx_frag
ffffffff81425a29 r __kstrtab___netdev_alloc_skb
ffffffff81425a3c r __kstrtab_build_skb
ffffffff81425a46 r __kstrtab___alloc_skb
ffffffff81425a52 r __kstrtab_csum_partial_copy_fromiovecend
ffffffff81425a71 r __kstrtab_memcpy_fromiovecend
ffffffff81425a85 r __kstrtab_memcpy_fromiovec
ffffffff81425a96 r __kstrtab_memcpy_toiovecend
ffffffff81425aa8 r __kstrtab_memcpy_toiovec
ffffffff81425ab7 r __kstrtab_datagram_poll
ffffffff81425ac5 r __kstrtab_skb_copy_and_csum_datagram_iovec
ffffffff81425ae6 r __kstrtab___skb_checksum_complete
ffffffff81425afe r __kstrtab___skb_checksum_complete_head
ffffffff81425b1b r __kstrtab_skb_copy_datagram_from_iovec
ffffffff81425b38 r __kstrtab_skb_copy_datagram_const_iovec
ffffffff81425b56 r __kstrtab_skb_copy_datagram_iovec
ffffffff81425b6e r __kstrtab_skb_kill_datagram
ffffffff81425b80 r __kstrtab_skb_free_datagram_locked
ffffffff81425b99 r __kstrtab_skb_free_datagram
ffffffff81425bab r __kstrtab_skb_recv_datagram
ffffffff81425bbd r __kstrtab___skb_recv_datagram
ffffffff81425bd1 r __kstrtab_sk_stream_kill_queues
ffffffff81425be7 r __kstrtab_sk_stream_error
ffffffff81425bf7 r __kstrtab_sk_stream_wait_memory
ffffffff81425c0d r __kstrtab_sk_stream_wait_close
ffffffff81425c22 r __kstrtab_sk_stream_wait_connect
ffffffff81425c39 r __kstrtab_sk_stream_write_space
ffffffff81425c4f r __kstrtab_scm_fp_dup
ffffffff81425c5a r __kstrtab_scm_detach_fds
ffffffff81425c69 r __kstrtab_put_cmsg
ffffffff81425c72 r __kstrtab___scm_send
ffffffff81425c7d r __kstrtab___scm_destroy
ffffffff81425c8b r __kstrtab_gnet_stats_finish_copy
ffffffff81425ca2 r __kstrtab_gnet_stats_copy_app
ffffffff81425cb6 r __kstrtab_gnet_stats_copy_queue
ffffffff81425ccc r __kstrtab_gnet_stats_copy_rate_est
ffffffff81425ce5 r __kstrtab_gnet_stats_copy_basic
ffffffff81425cfb r __kstrtab_gnet_stats_start_copy
ffffffff81425d11 r __kstrtab_gnet_stats_start_copy_compat
ffffffff81425d2e r __kstrtab_gen_estimator_active
ffffffff81425d43 r __kstrtab_gen_replace_estimator
ffffffff81425d59 r __kstrtab_gen_kill_estimator
ffffffff81425d6c r __kstrtab_gen_new_estimator
ffffffff81425d7e r __kstrtab_unregister_pernet_device
ffffffff81425d97 r __kstrtab_register_pernet_device
ffffffff81425dae r __kstrtab_unregister_pernet_subsys
ffffffff81425dc7 r __kstrtab_register_pernet_subsys
ffffffff81425dde r __kstrtab_get_net_ns_by_pid
ffffffff81425df0 r __kstrtab___put_net
ffffffff81425dfa r __kstrtab_init_net
ffffffff81425e03 r __kstrtab_net_namespace_list
ffffffff81425e16 r __kstrtab_secure_ipv4_port_ephemeral
ffffffff81425e31 r __kstrtab_skb_flow_dissect
ffffffff81425e42 r __kstrtab_netdev_info
ffffffff81425e4e r __kstrtab_netdev_notice
ffffffff81425e5c r __kstrtab_netdev_warn
ffffffff81425e68 r __kstrtab_netdev_err
ffffffff81425e73 r __kstrtab_netdev_crit
ffffffff81425e7f r __kstrtab_netdev_alert
ffffffff81425e8c r __kstrtab_netdev_emerg
ffffffff81425e99 r __kstrtab_netdev_printk
ffffffff81425ea7 r __kstrtab___netdev_printk
ffffffff81425eb7 r __kstrtab_netdev_increment_features
ffffffff81425ed1 r __kstrtab_dev_change_net_namespace
ffffffff81425eea r __kstrtab_unregister_netdev
ffffffff81425efc r __kstrtab_unregister_netdevice_many
ffffffff81425f16 r __kstrtab_unregister_netdevice_queue
ffffffff81425f31 r __kstrtab_synchronize_net
ffffffff81425f41 r __kstrtab_free_netdev
ffffffff81425f4d r __kstrtab_alloc_netdev_mqs
ffffffff81425f5e r __kstrtab_dev_get_stats
ffffffff81425f6c r __kstrtab_netdev_stats_to_stats64
ffffffff81425f84 r __kstrtab_netdev_refcnt_read
ffffffff81425f97 r __kstrtab_register_netdev
ffffffff81425fa7 r __kstrtab_init_dummy_netdev
ffffffff81425fb9 r __kstrtab_register_netdevice
ffffffff81425fcc r __kstrtab_netif_stacked_transfer_operstate
ffffffff81425fed r __kstrtab_netdev_change_features
ffffffff81426004 r __kstrtab_netdev_update_features
ffffffff8142601b r __kstrtab_dev_set_mac_address
ffffffff8142602f r __kstrtab_dev_set_group
ffffffff8142603d r __kstrtab_dev_set_mtu
ffffffff81426049 r __kstrtab_dev_change_flags
ffffffff8142605a r __kstrtab_dev_get_flags
ffffffff81426068 r __kstrtab_dev_set_allmulti
ffffffff81426079 r __kstrtab_dev_set_promiscuity
ffffffff8142608d r __kstrtab_netdev_set_bond_master
ffffffff814260a4 r __kstrtab_netdev_set_master
ffffffff814260b6 r __kstrtab_register_gifconf
ffffffff814260c7 r __kstrtab_netif_napi_del
ffffffff814260d6 r __kstrtab_netif_napi_add
ffffffff814260e5 r __kstrtab_napi_complete
ffffffff814260f3 r __kstrtab___napi_complete
ffffffff81426103 r __kstrtab___napi_schedule
ffffffff81426113 r __kstrtab_napi_gro_frags
ffffffff81426122 r __kstrtab_napi_frags_skb
ffffffff81426131 r __kstrtab_napi_frags_finish
ffffffff81426143 r __kstrtab_napi_get_frags
ffffffff81426152 r __kstrtab_napi_gro_receive
ffffffff81426163 r __kstrtab_skb_gro_reset_offset
ffffffff81426178 r __kstrtab_napi_skb_finish
ffffffff81426188 r __kstrtab_dev_gro_receive
ffffffff81426198 r __kstrtab_napi_gro_flush
ffffffff814261a7 r __kstrtab_netif_receive_skb
ffffffff814261b9 r __kstrtab_netdev_rx_handler_unregister
ffffffff814261d6 r __kstrtab_netdev_rx_handler_register
ffffffff814261f1 r __kstrtab_netif_rx_ni
ffffffff814261fd r __kstrtab_netif_rx
ffffffff81426206 r __kstrtab_rps_may_expire_flow
ffffffff8142621a r __kstrtab_rps_sock_flow_table
ffffffff8142622e r __kstrtab___skb_get_rxhash
ffffffff8142623f r __kstrtab_dev_queue_xmit
ffffffff8142624e r __kstrtab___skb_tx_hash
ffffffff8142625c r __kstrtab_netif_skb_features
ffffffff8142626f r __kstrtab_netdev_rx_csum_fault
ffffffff81426284 r __kstrtab_skb_gso_segment
ffffffff81426294 r __kstrtab_skb_checksum_help
ffffffff814262a6 r __kstrtab_netif_device_attach
ffffffff814262ba r __kstrtab_netif_device_detach
ffffffff814262ce r __kstrtab_dev_kfree_skb_any
ffffffff814262e0 r __kstrtab_dev_kfree_skb_irq
ffffffff814262f2 r __kstrtab___netif_schedule
ffffffff81426303 r __kstrtab_netif_set_real_num_rx_queues
ffffffff81426320 r __kstrtab_netif_set_real_num_tx_queues
ffffffff8142633d r __kstrtab_dev_forward_skb
ffffffff8142634d r __kstrtab_net_disable_timestamp
ffffffff81426363 r __kstrtab_net_enable_timestamp
ffffffff81426378 r __kstrtab_call_netdevice_notifiers
ffffffff81426391 r __kstrtab_unregister_netdevice_notifier
ffffffff814263af r __kstrtab_register_netdevice_notifier
ffffffff814263cb r __kstrtab_dev_disable_lro
ffffffff814263db r __kstrtab_dev_close
ffffffff814263e5 r __kstrtab_dev_open
ffffffff814263ee r __kstrtab_dev_load
ffffffff814263f7 r __kstrtab_netdev_bonding_change
ffffffff8142640d r __kstrtab_netdev_state_change
ffffffff81426421 r __kstrtab_netdev_features_change
ffffffff81426438 r __kstrtab_dev_alloc_name
ffffffff81426447 r __kstrtab_dev_valid_name
ffffffff81426456 r __kstrtab_dev_get_by_flags_rcu
ffffffff8142646b r __kstrtab_dev_getfirstbyhwtype
ffffffff81426480 r __kstrtab___dev_getfirstbyhwtype
ffffffff81426497 r __kstrtab_dev_getbyhwaddr_rcu
ffffffff814264ab r __kstrtab_dev_get_by_index
ffffffff814264bc r __kstrtab_dev_get_by_index_rcu
ffffffff814264d1 r __kstrtab___dev_get_by_index
ffffffff814264e4 r __kstrtab_dev_get_by_name
ffffffff814264f4 r __kstrtab_dev_get_by_name_rcu
ffffffff81426508 r __kstrtab___dev_get_by_name
ffffffff8142651a r __kstrtab_netdev_boot_setup_check
ffffffff81426532 r __kstrtab_dev_remove_pack
ffffffff81426542 r __kstrtab___dev_remove_pack
ffffffff81426554 r __kstrtab_dev_add_pack
ffffffff81426561 r __kstrtab_softnet_data
ffffffff8142656e r __kstrtab_dev_base_lock
ffffffff8142657c r __kstrtab___ethtool_get_settings
ffffffff81426593 r __kstrtab_ethtool_op_get_link
ffffffff814265a7 r __kstrtab_dev_mc_init
ffffffff814265b3 r __kstrtab_dev_mc_flush
ffffffff814265c0 r __kstrtab_dev_mc_unsync
ffffffff814265ce r __kstrtab_dev_mc_sync
ffffffff814265da r __kstrtab_dev_mc_del_global
ffffffff814265ec r __kstrtab_dev_mc_del
ffffffff814265f7 r __kstrtab_dev_mc_add_global
ffffffff81426609 r __kstrtab_dev_mc_add
ffffffff81426614 r __kstrtab_dev_uc_init
ffffffff81426620 r __kstrtab_dev_uc_flush
ffffffff8142662d r __kstrtab_dev_uc_unsync
ffffffff8142663b r __kstrtab_dev_uc_sync
ffffffff81426647 r __kstrtab_dev_uc_del
ffffffff81426652 r __kstrtab_dev_uc_add
ffffffff8142665d r __kstrtab_dev_addr_del_multiple
ffffffff81426673 r __kstrtab_dev_addr_add_multiple
ffffffff81426689 r __kstrtab_dev_addr_del
ffffffff81426696 r __kstrtab_dev_addr_add
ffffffff814266a3 r __kstrtab_dev_addr_init
ffffffff814266b1 r __kstrtab_dev_addr_flush
ffffffff814266c0 r __kstrtab___hw_addr_init
ffffffff814266cf r __kstrtab___hw_addr_flush
ffffffff814266df r __kstrtab___hw_addr_unsync
ffffffff814266f0 r __kstrtab___hw_addr_sync
ffffffff814266ff r __kstrtab___hw_addr_del_multiple
ffffffff81426716 r __kstrtab___hw_addr_add_multiple
ffffffff8142672d r __kstrtab_skb_dst_set_noref
ffffffff8142673f r __kstrtab___dst_destroy_metrics_generic
ffffffff8142675d r __kstrtab_dst_cow_metrics_generic
ffffffff81426775 r __kstrtab_dst_release
ffffffff81426781 r __kstrtab_dst_destroy
ffffffff8142678d r __kstrtab___dst_free
ffffffff81426798 r __kstrtab_dst_alloc
ffffffff814267a2 r __kstrtab_dst_discard
ffffffff814267ae r __kstrtab_call_netevent_notifiers
ffffffff814267c6 r __kstrtab_unregister_netevent_notifier
ffffffff814267e3 r __kstrtab_register_netevent_notifier
ffffffff814267fe r __kstrtab_neigh_sysctl_unregister
ffffffff81426816 r __kstrtab_neigh_sysctl_register
ffffffff8142682c r __kstrtab_neigh_seq_stop
ffffffff8142683b r __kstrtab_neigh_seq_next
ffffffff8142684a r __kstrtab_neigh_seq_start
ffffffff8142685a r __kstrtab___neigh_for_each_release
ffffffff81426873 r __kstrtab_neigh_for_each
ffffffff81426882 r __kstrtab_neigh_table_clear
ffffffff81426894 r __kstrtab_neigh_table_init
ffffffff814268a5 r __kstrtab_neigh_table_init_no_netlink
ffffffff814268c1 r __kstrtab_neigh_parms_release
ffffffff814268d5 r __kstrtab_neigh_parms_alloc
ffffffff814268e7 r __kstrtab_pneigh_enqueue
ffffffff814268f6 r __kstrtab_neigh_direct_output
ffffffff8142690a r __kstrtab_neigh_connected_output
ffffffff81426921 r __kstrtab_neigh_resolve_output
ffffffff81426936 r __kstrtab_neigh_compat_output
ffffffff8142694a r __kstrtab_neigh_event_ns
ffffffff81426959 r __kstrtab_neigh_update
ffffffff81426966 r __kstrtab___neigh_event_send
ffffffff81426979 r __kstrtab_neigh_destroy
ffffffff81426987 r __kstrtab_pneigh_lookup
ffffffff81426995 r __kstrtab___pneigh_lookup
ffffffff814269a5 r __kstrtab_neigh_create
ffffffff814269b2 r __kstrtab_neigh_lookup_nodev
ffffffff814269c5 r __kstrtab_neigh_lookup
ffffffff814269d2 r __kstrtab_neigh_ifdown
ffffffff814269df r __kstrtab_neigh_changeaddr
ffffffff814269f0 r __kstrtab_neigh_rand_reach_time
ffffffff81426a06 r __kstrtab_rtnl_create_link
ffffffff81426a17 r __kstrtab_rtnl_configure_link
ffffffff81426a2b r __kstrtab_rtnl_link_get_net
ffffffff81426a3d r __kstrtab_ifla_policy
ffffffff81426a49 r __kstrtab_rtnl_put_cacheinfo
ffffffff81426a5c r __kstrtab_rtnetlink_put_metrics
ffffffff81426a72 r __kstrtab_rtnl_set_sk_err
ffffffff81426a82 r __kstrtab_rtnl_notify
ffffffff81426a8e r __kstrtab_rtnl_unicast
ffffffff81426a9b r __kstrtab___rta_fill
ffffffff81426aa6 r __kstrtab_rtnl_af_unregister
ffffffff81426ab9 r __kstrtab___rtnl_af_unregister
ffffffff81426ace r __kstrtab_rtnl_af_register
ffffffff81426adf r __kstrtab___rtnl_af_register
ffffffff81426af2 r __kstrtab_rtnl_link_unregister
ffffffff81426b07 r __kstrtab___rtnl_link_unregister
ffffffff81426b1e r __kstrtab_rtnl_link_register
ffffffff81426b31 r __kstrtab___rtnl_link_register
ffffffff81426b46 r __kstrtab_rtnl_unregister_all
ffffffff81426b5a r __kstrtab_rtnl_unregister
ffffffff81426b6a r __kstrtab_rtnl_register
ffffffff81426b78 r __kstrtab___rtnl_register
ffffffff81426b88 r __kstrtab_rtnl_is_locked
ffffffff81426b97 r __kstrtab_rtnl_trylock
ffffffff81426ba4 r __kstrtab_rtnl_unlock
ffffffff81426bb0 r __kstrtab_rtnl_lock
ffffffff81426bba r __kstrtab_mac_pton
ffffffff81426bc3 r __kstrtab_inet_proto_csum_replace4
ffffffff81426bdc r __kstrtab_in6_pton
ffffffff81426be5 r __kstrtab_in4_pton
ffffffff81426bee r __kstrtab_in_aton
ffffffff81426bf6 r __kstrtab_net_ratelimit
ffffffff81426c04 r __kstrtab_net_msg_warn
ffffffff81426c11 r __kstrtab_linkwatch_fire_event
ffffffff81426c26 r __kstrtab_sk_detach_filter
ffffffff81426c37 r __kstrtab_sk_attach_filter
ffffffff81426c48 r __kstrtab_sk_filter_release_rcu
ffffffff81426c5e r __kstrtab_sk_chk_filter
ffffffff81426c6c r __kstrtab_sk_run_filter
ffffffff81426c7a r __kstrtab_sk_filter
ffffffff81426c84 r __kstrtab_sock_diag_nlsk
ffffffff81426c93 r __kstrtab_sock_diag_unregister
ffffffff81426ca8 r __kstrtab_sock_diag_register
ffffffff81426cbb r __kstrtab_sock_diag_unregister_inet_compat
ffffffff81426cdc r __kstrtab_sock_diag_register_inet_compat
ffffffff81426cfb r __kstrtab_sock_diag_put_meminfo
ffffffff81426d11 r __kstrtab_sock_diag_save_cookie
ffffffff81426d27 r __kstrtab_sock_diag_check_cookie
ffffffff81426d3e r __kstrtab_flow_cache_lookup
ffffffff81426d50 r __kstrtab_flow_cache_genid
ffffffff81426d61 r __kstrtab_netdev_class_remove_file
ffffffff81426d7a r __kstrtab_netdev_class_create_file
ffffffff81426d93 r __kstrtab_net_ns_type_operations
ffffffff81426daa r __kstrtab_compat_mc_getsockopt
ffffffff81426dbf r __kstrtab_compat_mc_setsockopt
ffffffff81426dd4 r __kstrtab_compat_sock_get_timestampns
ffffffff81426df0 r __kstrtab_compat_sock_get_timestamp
ffffffff81426e0a r __kstrtab_sysfs_format_mac
ffffffff81426e1b r __kstrtab_alloc_etherdev_mqs
ffffffff81426e2e r __kstrtab_ether_setup
ffffffff81426e3a r __kstrtab_eth_validate_addr
ffffffff81426e4c r __kstrtab_eth_change_mtu
ffffffff81426e5b r __kstrtab_eth_mac_addr
ffffffff81426e68 r __kstrtab_eth_header_cache_update
ffffffff81426e80 r __kstrtab_eth_header_cache
ffffffff81426e91 r __kstrtab_eth_header_parse
ffffffff81426ea2 r __kstrtab_eth_type_trans
ffffffff81426eb1 r __kstrtab_eth_rebuild_header
ffffffff81426ec4 r __kstrtab_eth_header
ffffffff81426ecf r __kstrtab_dev_deactivate
ffffffff81426ede r __kstrtab_dev_activate
ffffffff81426eeb r __kstrtab_dev_graft_qdisc
ffffffff81426efb r __kstrtab_qdisc_destroy
ffffffff81426f09 r __kstrtab_qdisc_reset
ffffffff81426f15 r __kstrtab_qdisc_create_dflt
ffffffff81426f27 r __kstrtab_pfifo_fast_ops
ffffffff81426f36 r __kstrtab_noop_qdisc
ffffffff81426f41 r __kstrtab_netif_notify_peers
ffffffff81426f54 r __kstrtab_netif_carrier_off
ffffffff81426f66 r __kstrtab_netif_carrier_on
ffffffff81426f77 r __kstrtab_dev_trans_start
ffffffff81426f87 r __kstrtab_netlink_unregister_notifier
ffffffff81426fa3 r __kstrtab_netlink_register_notifier
ffffffff81426fbd r __kstrtab_nlmsg_notify
ffffffff81426fca r __kstrtab_netlink_rcv_skb
ffffffff81426fda r __kstrtab_netlink_ack
ffffffff81426fe6 r __kstrtab_netlink_dump_start
ffffffff81426ff9 r __kstrtab___nlmsg_put
ffffffff81427005 r __kstrtab_netlink_set_nonroot
ffffffff81427019 r __kstrtab_netlink_kernel_release
ffffffff81427030 r __kstrtab_netlink_kernel_create
ffffffff81427046 r __kstrtab_netlink_set_err
ffffffff81427056 r __kstrtab_netlink_broadcast
ffffffff81427068 r __kstrtab_netlink_broadcast_filtered
ffffffff81427083 r __kstrtab_netlink_has_listeners
ffffffff81427099 r __kstrtab_netlink_unicast
ffffffff814270a9 r __kstrtab_genl_notify
ffffffff814270b5 r __kstrtab_genlmsg_multicast_allns
ffffffff814270cd r __kstrtab_genlmsg_put
ffffffff814270d9 r __kstrtab_genl_unregister_family
ffffffff814270f0 r __kstrtab_genl_register_family_with_ops
ffffffff8142710e r __kstrtab_genl_register_family
ffffffff81427123 r __kstrtab_genl_unregister_ops
ffffffff81427137 r __kstrtab_genl_register_ops
ffffffff81427149 r __kstrtab_genl_unregister_mc_group
ffffffff81427162 r __kstrtab_genl_register_mc_group
ffffffff81427179 r __kstrtab_genl_unlock
ffffffff81427185 r __kstrtab_genl_lock
ffffffff8142718f r __kstrtab_ip_route_output_flow
ffffffff814271a4 r __kstrtab___ip_route_output_key
ffffffff814271ba r __kstrtab_ip_route_input_common
ffffffff814271d0 r __kstrtab___ip_select_ident
ffffffff814271e2 r __kstrtab_inetpeer_invalidate_tree
ffffffff814271fb r __kstrtab_inet_peer_xrlim_allow
ffffffff81427211 r __kstrtab_inet_putpeer
ffffffff8142721e r __kstrtab_inet_getpeer
ffffffff8142722b r __kstrtab_inet_del_protocol
ffffffff8142723d r __kstrtab_inet_add_protocol
ffffffff8142724f r __kstrtab_ip_check_defrag
ffffffff8142725f r __kstrtab_ip_defrag
ffffffff81427269 r __kstrtab_ip_options_rcv_srr
ffffffff8142727c r __kstrtab_ip_options_compile
ffffffff8142728f r __kstrtab_ip_generic_getfrag
ffffffff814272a2 r __kstrtab_ip_fragment
ffffffff814272ae r __kstrtab_ip_queue_xmit
ffffffff814272bc r __kstrtab_ip_build_and_send_pkt
ffffffff814272d2 r __kstrtab_ip_local_out
ffffffff814272df r __kstrtab_ip_send_check
ffffffff814272ed r __kstrtab_sysctl_ip_default_ttl
ffffffff81427303 r __kstrtab_compat_ip_getsockopt
ffffffff81427318 r __kstrtab_ip_getsockopt
ffffffff81427326 r __kstrtab_compat_ip_setsockopt
ffffffff8142733b r __kstrtab_ip_setsockopt
ffffffff81427349 r __kstrtab_ip_cmsg_recv
ffffffff81427356 r __kstrtab_inet_hashinfo_init
ffffffff81427369 r __kstrtab_inet_hash_connect
ffffffff8142737b r __kstrtab_inet_unhash
ffffffff81427387 r __kstrtab_inet_hash
ffffffff81427391 r __kstrtab___inet_hash_nolisten
ffffffff814273a6 r __kstrtab___inet_lookup_established
ffffffff814273c0 r __kstrtab___inet_lookup_listener
ffffffff814273d7 r __kstrtab___inet_inherit_port
ffffffff814273eb r __kstrtab_inet_put_port
ffffffff814273f9 r __kstrtab_inet_twsk_purge
ffffffff81427409 r __kstrtab_inet_twdr_twcal_tick
ffffffff8142741e r __kstrtab_inet_twsk_schedule
ffffffff81427431 r __kstrtab_inet_twsk_deschedule
ffffffff81427446 r __kstrtab_inet_twdr_twkill_work
ffffffff8142745c r __kstrtab_inet_twdr_hangman
ffffffff8142746e r __kstrtab_inet_twsk_alloc
ffffffff8142747e r __kstrtab___inet_twsk_hashdance
ffffffff81427494 r __kstrtab_inet_twsk_put
ffffffff814274a2 r __kstrtab_inet_csk_compat_setsockopt
ffffffff814274bd r __kstrtab_inet_csk_compat_getsockopt
ffffffff814274d8 r __kstrtab_inet_csk_addr2sockaddr
ffffffff814274ef r __kstrtab_inet_csk_listen_stop
ffffffff81427504 r __kstrtab_inet_csk_listen_start
ffffffff8142751a r __kstrtab_inet_csk_destroy_sock
ffffffff81427530 r __kstrtab_inet_csk_clone_lock
ffffffff81427544 r __kstrtab_inet_csk_reqsk_queue_prune
ffffffff8142755f r __kstrtab_inet_csk_reqsk_queue_hash_add
ffffffff8142757d r __kstrtab_inet_csk_search_req
ffffffff81427591 r __kstrtab_inet_csk_route_child_sock
ffffffff814275ab r __kstrtab_inet_csk_route_req
ffffffff814275be r __kstrtab_inet_csk_reset_keepalive_timer
ffffffff814275dd r __kstrtab_inet_csk_delete_keepalive_timer
ffffffff814275fd r __kstrtab_inet_csk_clear_xmit_timers
ffffffff81427618 r __kstrtab_inet_csk_init_xmit_timers
ffffffff81427632 r __kstrtab_inet_csk_accept
ffffffff81427642 r __kstrtab_inet_csk_get_port
ffffffff81427654 r __kstrtab_inet_csk_bind_conflict
ffffffff8142766b r __kstrtab_inet_get_local_port_range
ffffffff81427685 r __kstrtab_sysctl_local_reserved_ports
ffffffff814276a1 r __kstrtab_inet_csk_timer_bug_msg
ffffffff814276b8 r __kstrtab_tcp_done
ffffffff814276c1 r __kstrtab_tcp_cookie_generator
ffffffff814276d6 r __kstrtab_tcp_gro_complete
ffffffff814276e7 r __kstrtab_tcp_gro_receive
ffffffff814276f7 r __kstrtab_tcp_tso_segment
ffffffff81427707 r __kstrtab_compat_tcp_getsockopt
ffffffff8142771d r __kstrtab_tcp_getsockopt
ffffffff8142772c r __kstrtab_tcp_get_info
ffffffff81427739 r __kstrtab_compat_tcp_setsockopt
ffffffff8142774f r __kstrtab_tcp_setsockopt
ffffffff8142775e r __kstrtab_tcp_disconnect
ffffffff8142776d r __kstrtab_tcp_close
ffffffff81427777 r __kstrtab_tcp_shutdown
ffffffff81427784 r __kstrtab_tcp_set_state
ffffffff81427792 r __kstrtab_tcp_recvmsg
ffffffff8142779e r __kstrtab_tcp_read_sock
ffffffff814277ac r __kstrtab_tcp_sendmsg
ffffffff814277b8 r __kstrtab_tcp_sendpage
ffffffff814277c5 r __kstrtab_tcp_splice_read
ffffffff814277d5 r __kstrtab_tcp_ioctl
ffffffff814277df r __kstrtab_tcp_poll
ffffffff814277e8 r __kstrtab_tcp_enter_memory_pressure
ffffffff81427802 r __kstrtab_tcp_memory_pressure
ffffffff81427816 r __kstrtab_tcp_sockets_allocated
ffffffff8142782c r __kstrtab_tcp_memory_allocated
ffffffff81427841 r __kstrtab_sysctl_tcp_wmem
ffffffff81427851 r __kstrtab_sysctl_tcp_rmem
ffffffff81427861 r __kstrtab_tcp_orphan_count
ffffffff81427872 r __kstrtab_tcp_rcv_state_process
ffffffff81427888 r __kstrtab_tcp_rcv_established
ffffffff8142789c r __kstrtab_tcp_parse_options
ffffffff814278ae r __kstrtab_tcp_valid_rtt_meas
ffffffff814278c1 r __kstrtab_tcp_simple_retransmit
ffffffff814278d7 r __kstrtab_tcp_initialize_rcv_mss
ffffffff814278ee r __kstrtab_sysctl_tcp_adv_win_scale
ffffffff81427907 r __kstrtab_sysctl_tcp_ecn
ffffffff81427916 r __kstrtab_sysctl_tcp_reordering
ffffffff8142792c r __kstrtab_tcp_connect
ffffffff81427938 r __kstrtab_tcp_make_synack
ffffffff81427948 r __kstrtab_tcp_sync_mss
ffffffff81427955 r __kstrtab_tcp_mtup_init
ffffffff81427963 r __kstrtab_tcp_select_initial_window
ffffffff8142797d r __kstrtab_sysctl_tcp_cookie_size
ffffffff81427994 r __kstrtab_tcp_syn_ack_timeout
ffffffff814279a8 r __kstrtab_tcp_init_xmit_timers
ffffffff814279bd r __kstrtab_tcp_prot
ffffffff814279c6 r __kstrtab_tcp_proc_unregister
ffffffff814279da r __kstrtab_tcp_proc_register
ffffffff814279ec r __kstrtab_tcp_seq_open
ffffffff814279f9 r __kstrtab_tcp_v4_destroy_sock
ffffffff81427a0d r __kstrtab_ipv4_specific
ffffffff81427a1b r __kstrtab_tcp_v4_tw_get_peer
ffffffff81427a2e r __kstrtab_tcp_v4_get_peer
ffffffff81427a3e r __kstrtab_tcp_v4_do_rcv
ffffffff81427a4c r __kstrtab_tcp_v4_syn_recv_sock
ffffffff81427a61 r __kstrtab_tcp_v4_conn_request
ffffffff81427a75 r __kstrtab_tcp_syn_flood_action
ffffffff81427a8a r __kstrtab_tcp_v4_send_check
ffffffff81427a9c r __kstrtab_tcp_v4_connect
ffffffff81427aab r __kstrtab_tcp_twsk_unique
ffffffff81427abb r __kstrtab_tcp_hashinfo
ffffffff81427ac8 r __kstrtab_sysctl_tcp_low_latency
ffffffff81427adf r __kstrtab_tcp_child_process
ffffffff81427af1 r __kstrtab_tcp_check_req
ffffffff81427aff r __kstrtab_tcp_create_openreq_child
ffffffff81427b18 r __kstrtab_tcp_twsk_destructor
ffffffff81427b2c r __kstrtab_tcp_timewait_state_process
ffffffff81427b47 r __kstrtab_tcp_death_row
ffffffff81427b55 r __kstrtab_sysctl_tcp_syncookies
ffffffff81427b6b r __kstrtab_tcp_init_congestion_ops
ffffffff81427b83 r __kstrtab_tcp_reno_min_cwnd
ffffffff81427b95 r __kstrtab_tcp_reno_ssthresh
ffffffff81427ba7 r __kstrtab_tcp_reno_cong_avoid
ffffffff81427bbb r __kstrtab_tcp_cong_avoid_ai
ffffffff81427bcd r __kstrtab_tcp_slow_start
ffffffff81427bdc r __kstrtab_tcp_is_cwnd_limited
ffffffff81427bf0 r __kstrtab_tcp_unregister_congestion_control
ffffffff81427c12 r __kstrtab_tcp_register_congestion_control
ffffffff81427c32 r __kstrtab_ip4_datagram_connect
ffffffff81427c47 r __kstrtab_raw_seq_open
ffffffff81427c54 r __kstrtab_raw_seq_stop
ffffffff81427c61 r __kstrtab_raw_seq_next
ffffffff81427c6e r __kstrtab_raw_seq_start
ffffffff81427c7c r __kstrtab_raw_unhash_sk
ffffffff81427c8a r __kstrtab_raw_hash_sk
ffffffff81427c96 r __kstrtab_udp_proc_unregister
ffffffff81427caa r __kstrtab_udp_proc_register
ffffffff81427cbc r __kstrtab_udp_seq_open
ffffffff81427cc9 r __kstrtab_udp_prot
ffffffff81427cd2 r __kstrtab_udp_poll
ffffffff81427cdb r __kstrtab_udp_lib_getsockopt
ffffffff81427cee r __kstrtab_udp_lib_setsockopt
ffffffff81427d01 r __kstrtab_udp_lib_rehash
ffffffff81427d10 r __kstrtab_udp_lib_unhash
ffffffff81427d1f r __kstrtab_udp_disconnect
ffffffff81427d2e r __kstrtab_udp_ioctl
ffffffff81427d38 r __kstrtab_udp_sendmsg
ffffffff81427d44 r __kstrtab_udp_flush_pending_frames
ffffffff81427d5d r __kstrtab_udp4_lib_lookup
ffffffff81427d6d r __kstrtab___udp4_lib_lookup
ffffffff81427d7f r __kstrtab_udp_lib_get_port
ffffffff81427d90 r __kstrtab_udp_memory_allocated
ffffffff81427da5 r __kstrtab_sysctl_udp_wmem_min
ffffffff81427db9 r __kstrtab_sysctl_udp_rmem_min
ffffffff81427dcd r __kstrtab_sysctl_udp_mem
ffffffff81427ddc r __kstrtab_udp_table
ffffffff81427de6 r __kstrtab_udplite_prot
ffffffff81427df3 r __kstrtab_udplite_table
ffffffff81427e01 r __kstrtab_arp_invalidate
ffffffff81427e10 r __kstrtab_arp_send
ffffffff81427e19 r __kstrtab_arp_xmit
ffffffff81427e22 r __kstrtab_arp_create
ffffffff81427e2d r __kstrtab_arp_find
ffffffff81427e36 r __kstrtab_arp_tbl
ffffffff81427e3e r __kstrtab_icmp_send
ffffffff81427e48 r __kstrtab_icmp_err_convert
ffffffff81427e59 r __kstrtab_unregister_inetaddr_notifier
ffffffff81427e76 r __kstrtab_register_inetaddr_notifier
ffffffff81427e91 r __kstrtab_inet_confirm_addr
ffffffff81427ea3 r __kstrtab_inet_select_addr
ffffffff81427eb4 r __kstrtab_inetdev_by_index
ffffffff81427ec5 r __kstrtab_in_dev_finish_destroy
ffffffff81427edb r __kstrtab___ip_dev_find
ffffffff81427ee9 r __kstrtab_snmp_mib_free
ffffffff81427ef7 r __kstrtab_snmp_mib_init
ffffffff81427f05 r __kstrtab_snmp_fold_field
ffffffff81427f15 r __kstrtab_inet_ctl_sock_create
ffffffff81427f2a r __kstrtab_inet_sk_rebuild_header
ffffffff81427f41 r __kstrtab_inet_unregister_protosw
ffffffff81427f59 r __kstrtab_inet_register_protosw
ffffffff81427f6f r __kstrtab_inet_dgram_ops
ffffffff81427f7e r __kstrtab_inet_stream_ops
ffffffff81427f8e r __kstrtab_inet_ioctl
ffffffff81427f99 r __kstrtab_inet_shutdown
ffffffff81427fa7 r __kstrtab_inet_recvmsg
ffffffff81427fb4 r __kstrtab_inet_sendpage
ffffffff81427fc2 r __kstrtab_inet_sendmsg
ffffffff81427fcf r __kstrtab_inet_getname
ffffffff81427fdc r __kstrtab_inet_accept
ffffffff81427fe8 r __kstrtab_inet_stream_connect
ffffffff81427ffc r __kstrtab_inet_dgram_connect
ffffffff8142800f r __kstrtab_inet_bind
ffffffff81428019 r __kstrtab_sysctl_ip_nonlocal_bind
ffffffff81428031 r __kstrtab_inet_release
ffffffff8142803e r __kstrtab_build_ehash_secret
ffffffff81428051 r __kstrtab_inet_ehash_secret
ffffffff81428063 r __kstrtab_inet_listen
ffffffff8142806f r __kstrtab_inet_sock_destruct
ffffffff81428082 r __kstrtab_ipv4_config
ffffffff8142808e r __kstrtab_ip_mc_join_group
ffffffff8142809f r __kstrtab_ip_mc_dec_group
ffffffff814280af r __kstrtab_ip_mc_rejoin_groups
ffffffff814280c3 r __kstrtab_ip_mc_inc_group
ffffffff814280d3 r __kstrtab_inet_dev_addr_type
ffffffff814280e6 r __kstrtab_inet_addr_type
ffffffff814280f5 r __kstrtab_fib_table_lookup
ffffffff81428106 r __kstrtab_inet_frag_find
ffffffff81428115 r __kstrtab_inet_frag_evictor
ffffffff81428127 r __kstrtab_inet_frag_destroy
ffffffff81428139 r __kstrtab_inet_frag_kill
ffffffff81428148 r __kstrtab_inet_frags_exit_net
ffffffff8142815c r __kstrtab_inet_frags_fini
ffffffff8142816c r __kstrtab_inet_frags_init_net
ffffffff81428180 r __kstrtab_inet_frags_init
ffffffff81428190 r __kstrtab_ping_prot
ffffffff8142819a r __kstrtab_net_ipv4_ctl_path
ffffffff814281ac r __kstrtab_cookie_check_timestamp
ffffffff814281c3 r __kstrtab_syncookie_secret
ffffffff814281d4 r __kstrtab_lro_flush_pkt
ffffffff814281e2 r __kstrtab_lro_flush_all
ffffffff814281f0 r __kstrtab_lro_receive_frags
ffffffff81428202 r __kstrtab_lro_receive_skb
ffffffff81428212 r __kstrtab_xfrm4_rcv
ffffffff8142821c r __kstrtab_xfrm4_rcv_encap
ffffffff8142822c r __kstrtab_xfrm4_prepare_output
ffffffff81428241 r __kstrtab_xfrm_audit_policy_delete
ffffffff8142825a r __kstrtab_xfrm_audit_policy_add
ffffffff81428270 r __kstrtab_xfrm_policy_unregister_afinfo
ffffffff8142828e r __kstrtab_xfrm_policy_register_afinfo
ffffffff814282aa r __kstrtab_xfrm_dst_ifdown
ffffffff814282ba r __kstrtab___xfrm_route_forward
ffffffff814282cf r __kstrtab___xfrm_policy_check
ffffffff814282e3 r __kstrtab___xfrm_decode_session
ffffffff814282f9 r __kstrtab_xfrm_lookup
ffffffff81428305 r __kstrtab_xfrm_policy_delete
ffffffff81428318 r __kstrtab_xfrm_policy_walk_done
ffffffff8142832e r __kstrtab_xfrm_policy_walk_init
ffffffff81428344 r __kstrtab_xfrm_policy_walk
ffffffff81428355 r __kstrtab_xfrm_policy_flush
ffffffff81428367 r __kstrtab_xfrm_policy_byid
ffffffff81428378 r __kstrtab_xfrm_policy_bysel_ctx
ffffffff8142838e r __kstrtab_xfrm_policy_insert
ffffffff814283a1 r __kstrtab_xfrm_spd_getinfo
ffffffff814283b2 r __kstrtab_xfrm_policy_destroy
ffffffff814283c6 r __kstrtab_xfrm_policy_alloc
ffffffff814283d8 r __kstrtab_xfrm_cfg_mutex
ffffffff814283e7 r __kstrtab_xfrm_audit_state_icvfail
ffffffff81428400 r __kstrtab_xfrm_audit_state_notfound
ffffffff8142841a r __kstrtab_xfrm_audit_state_notfound_simple
ffffffff8142843b r __kstrtab_xfrm_audit_state_replay
ffffffff81428453 r __kstrtab_xfrm_audit_state_replay_overflow
ffffffff81428474 r __kstrtab_xfrm_audit_state_delete
ffffffff8142848c r __kstrtab_xfrm_audit_state_add
ffffffff814284a1 r __kstrtab_xfrm_init_state
ffffffff814284b1 r __kstrtab___xfrm_init_state
ffffffff814284c3 r __kstrtab_xfrm_state_delete_tunnel
ffffffff814284dc r __kstrtab_xfrm_state_unregister_afinfo
ffffffff814284f9 r __kstrtab_xfrm_state_register_afinfo
ffffffff81428514 r __kstrtab_xfrm_unregister_km
ffffffff81428527 r __kstrtab_xfrm_register_km
ffffffff81428538 r __kstrtab_xfrm_user_policy
ffffffff81428549 r __kstrtab_km_report
ffffffff81428553 r __kstrtab_km_policy_expired
ffffffff81428565 r __kstrtab_km_new_mapping
ffffffff81428574 r __kstrtab_km_query
ffffffff8142857d r __kstrtab_km_state_expired
ffffffff8142858e r __kstrtab_km_state_notify
ffffffff8142859e r __kstrtab_km_policy_notify
ffffffff814285af r __kstrtab_xfrm_state_walk_done
ffffffff814285c4 r __kstrtab_xfrm_state_walk_init
ffffffff814285d9 r __kstrtab_xfrm_state_walk
ffffffff814285e9 r __kstrtab_xfrm_alloc_spi
ffffffff814285f8 r __kstrtab_xfrm_get_acqseq
ffffffff81428608 r __kstrtab_xfrm_find_acq_byseq
ffffffff8142861c r __kstrtab_xfrm_find_acq
ffffffff8142862a r __kstrtab_xfrm_state_lookup_byaddr
ffffffff81428643 r __kstrtab_xfrm_state_lookup
ffffffff81428655 r __kstrtab_xfrm_state_check_expire
ffffffff8142866d r __kstrtab_xfrm_state_update
ffffffff8142867f r __kstrtab_xfrm_state_add
ffffffff8142868e r __kstrtab_xfrm_state_insert
ffffffff814286a0 r __kstrtab_xfrm_stateonly_find
ffffffff814286b4 r __kstrtab_xfrm_sad_getinfo
ffffffff814286c5 r __kstrtab_xfrm_state_flush
ffffffff814286d6 r __kstrtab_xfrm_state_delete
ffffffff814286e8 r __kstrtab___xfrm_state_delete
ffffffff814286fc r __kstrtab___xfrm_state_destroy
ffffffff81428711 r __kstrtab_xfrm_state_alloc
ffffffff81428722 r __kstrtab_xfrm_unregister_mode
ffffffff81428737 r __kstrtab_xfrm_register_mode
ffffffff8142874a r __kstrtab_xfrm_unregister_type
ffffffff8142875f r __kstrtab_xfrm_register_type
ffffffff81428772 r __kstrtab_xfrm_input_resume
ffffffff81428784 r __kstrtab_xfrm_input
ffffffff8142878f r __kstrtab_xfrm_prepare_input
ffffffff814287a2 r __kstrtab_secpath_dup
ffffffff814287ae r __kstrtab___secpath_destroy
ffffffff814287c0 r __kstrtab_xfrm_inner_extract_output
ffffffff814287da r __kstrtab_xfrm_output
ffffffff814287e6 r __kstrtab_xfrm_output_resume
ffffffff814287f9 r __kstrtab_xfrm_count_enc_supported
ffffffff81428812 r __kstrtab_xfrm_count_auth_supported
ffffffff8142882c r __kstrtab_xfrm_probe_algs
ffffffff8142883c r __kstrtab_xfrm_ealg_get_byidx
ffffffff81428850 r __kstrtab_xfrm_aalg_get_byidx
ffffffff81428864 r __kstrtab_xfrm_aead_get_byname
ffffffff81428879 r __kstrtab_xfrm_calg_get_byname
ffffffff8142888e r __kstrtab_xfrm_ealg_get_byname
ffffffff814288a3 r __kstrtab_xfrm_aalg_get_byname
ffffffff814288b8 r __kstrtab_xfrm_calg_get_byid
ffffffff814288cb r __kstrtab_xfrm_ealg_get_byid
ffffffff814288de r __kstrtab_xfrm_aalg_get_byid
ffffffff814288f1 r __kstrtab_xfrm_init_replay
ffffffff81428902 r __kstrtab_unix_outq_len
ffffffff81428910 r __kstrtab_unix_inq_len
ffffffff8142891d r __kstrtab_unix_peer_get
ffffffff8142892b r __kstrtab_unix_table_lock
ffffffff8142893b r __kstrtab_unix_socket_table
ffffffff8142894d r __kstrtab___ipv6_addr_type
ffffffff8142895e r __kstrtab_ipv6_skip_exthdr
ffffffff8142896f r __kstrtab_ipv6_ext_hdr
ffffffff8142897c r __kstrtab_unregister_net_sysctl_table
ffffffff81428998 r __kstrtab_register_net_sysctl_rotable
ffffffff814289b4 r __kstrtab_register_net_sysctl_table
ffffffff814289ce r __kstrtab_klist_next
ffffffff814289d9 r __kstrtab_klist_iter_exit
ffffffff814289e9 r __kstrtab_klist_iter_init
ffffffff814289f9 r __kstrtab_klist_iter_init_node
ffffffff81428a0e r __kstrtab_klist_node_attached
ffffffff81428a22 r __kstrtab_klist_remove
ffffffff81428a2f r __kstrtab_klist_del
ffffffff81428a39 r __kstrtab_klist_add_before
ffffffff81428a4a r __kstrtab_klist_add_after
ffffffff81428a5a r __kstrtab_klist_add_tail
ffffffff81428a69 r __kstrtab_klist_add_head
ffffffff81428a78 r __kstrtab_klist_init
ffffffff81428a83 r __kstrtab_md5_transform
ffffffff81428a91 r __kstrtab_sha_transform
ffffffff81428a9f r __kstrtab_csum_ipv6_magic
ffffffff81428aaf r __kstrtab_csum_partial_copy_nocheck
ffffffff81428ac9 r __kstrtab_csum_partial_copy_to_user
ffffffff81428ae3 r __kstrtab_csum_partial_copy_from_user
ffffffff81428b00 R x86_hyper_xen_hvm
ffffffff81428b20 R x86_hyper_vmware
ffffffff81428b40 R x86_hyper_ms_hyperv
ffffffff81428b60 r ioapic_devices
ffffffff81428bc0 r ehci_dmi_nohandoff_table
ffffffff81428fe0 r msi_k8t_dmi_table
ffffffff814292a0 r toshiba_ohci1394_dmi_table
ffffffff81429800 r can_skip_pciprobe_dmi_table
ffffffff81429d60 r pciprobe_dmi_table
ffffffff8142bb00 r assocs
ffffffff8142bb20 r types
ffffffff8142bb24 r levels
ffffffff8142bb40 r cache_table
ffffffff8142bc80 r cpuid_bits.11607
ffffffff8142bd80 r default_cpu
ffffffff8142c000 r cpuid_dependent_features
ffffffff8142c020 r msr_range_array
ffffffff8142c040 r intel_cpu_dev
ffffffff8142c2c0 r amd_cpu_dev
ffffffff8142c540 r centaur_cpu_dev
ffffffff8142c7c0 r multi_dmi_table
ffffffff8142ca70 r __param_initcall_debug
ffffffff8142ca70 R __start___param
ffffffff8142ca90 r __param_pause_on_oops
ffffffff8142cab0 r __param_panic
ffffffff8142cad0 r __param_console_suspend
ffffffff8142caf0 r __param_always_kmsg_dump
ffffffff8142cb10 r __param_time
ffffffff8142cb30 r __param_ignore_loglevel
ffffffff8142cb50 r __param_nomodule
ffffffff8142cb70 r __param_irqfixup
ffffffff8142cb90 r __param_noirqdebug
ffffffff8142cbb0 r __param_rcu_cpu_stall_timeout
ffffffff8142cbd0 r __param_rcu_cpu_stall_suppress
ffffffff8142cbf0 r __param_qlowmark
ffffffff8142cc10 r __param_qhimark
ffffffff8142cc30 r __param_blimit
ffffffff8142cc50 r __param_backend
ffffffff8142cc70 r __param_events_dfl_poll_msecs
ffffffff8142cc90 r __param_policy
ffffffff8142ccb0 r __param_nosourceid
ffffffff8142ccd0 r __param_forceload
ffffffff8142ccf0 r __param_nologo
ffffffff8142cd10 r __param_bfs
ffffffff8142cd30 r __param_gts
ffffffff8142cd50 r __param_ec_delay
ffffffff8142cd70 r __param_acpica_version
ffffffff8142cd90 r __param_aml_debug_output
ffffffff8142cdb0 r __param_disable
ffffffff8142cdd0 r __param_brl_nbchords
ffffffff8142cdf0 r __param_brl_timeout
ffffffff8142ce10 r __param_underline
ffffffff8142ce30 r __param_italic
ffffffff8142ce50 r __param_default_blu
ffffffff8142ce70 r __param_default_grn
ffffffff8142ce90 r __param_default_red
ffffffff8142ceb0 r __param_consoleblank
ffffffff8142ced0 r __param_cur_default
ffffffff8142cef0 r __param_global_cursor_default
ffffffff8142cf10 r __param_default_utf8
ffffffff8142cf30 r __param_scsi_logging_level
ffffffff8142cf50 r __param_inq_timeout
ffffffff8142cf70 r __param_max_report_luns
ffffffff8142cf90 r __param_scan
ffffffff8142cfb0 r __param_max_luns
ffffffff8142cfd0 r __param_default_dev_flags
ffffffff8142cff0 r __param_dev_flags
ffffffff8142d010 r __param_atapi_an
ffffffff8142d030 r __param_allow_tpm
ffffffff8142d050 r __param_noacpi
ffffffff8142d070 r __param_ata_probe_timeout
ffffffff8142d090 r __param_dma
ffffffff8142d0b0 r __param_ignore_hpa
ffffffff8142d0d0 r __param_fua
ffffffff8142d0f0 r __param_atapi_passthru16
ffffffff8142d110 r __param_atapi_dmadir
ffffffff8142d130 r __param_atapi_enabled
ffffffff8142d150 r __param_force
ffffffff8142d170 r __param_acpi_gtf_filter
ffffffff8142d190 r __param_debug
ffffffff8142d1b0 r __param_nopnp
ffffffff8142d1d0 r __param_dritek
ffffffff8142d1f0 r __param_notimeout
ffffffff8142d210 r __param_noloop
ffffffff8142d230 r __param_dumbkbd
ffffffff8142d250 r __param_direct
ffffffff8142d270 r __param_reset
ffffffff8142d290 r __param_unlock
ffffffff8142d2b0 r __param_nomux
ffffffff8142d2d0 r __param_noaux
ffffffff8142d2f0 r __param_nokbd
ffffffff8142d310 r __param_tap_time
ffffffff8142d330 r __param_yres
ffffffff8142d350 r __param_xres
ffffffff8142d370 r __param_terminal
ffffffff8142d390 r __param_extra
ffffffff8142d3b0 r __param_scroll
ffffffff8142d3d0 r __param_softraw
ffffffff8142d3f0 r __param_softrepeat
ffffffff8142d410 r __param_reset
ffffffff8142d430 r __param_set
ffffffff8142d450 r __param_resync_time
ffffffff8142d470 r __param_resetafter
ffffffff8142d490 r __param_smartscroll
ffffffff8142d4b0 r __param_rate
ffffffff8142d4d0 r __param_resolution
ffffffff8142d4f0 r __param_proto
ffffffff8142d510 r __param_off
ffffffff8142d530 r __param_ignore_special_drivers
ffffffff8142d550 r __param_debug
ffffffff8142d570 r __param_hystart_ack_delta
ffffffff8142d590 r __param_hystart_low_window
ffffffff8142d5b0 r __param_hystart_detect
ffffffff8142d5d0 r __param_hystart
ffffffff8142d5f0 r __param_tcp_friendliness
ffffffff8142d610 r __param_bic_scale
ffffffff8142d630 r __param_initial_ssthresh
ffffffff8142d650 r __param_beta
ffffffff8142d670 r __param_fast_convergence
ffffffff8142d690 r __modver_attr
ffffffff8142d690 R __start___modver
ffffffff8142d690 R __stop___param
ffffffff8142d698 r __modver_attr
ffffffff8142d6a0 r __modver_attr
ffffffff8142d6a8 r __modver_attr
ffffffff8142d6b0 R __stop___modver
ffffffff8142e000 R __end_rodata
ffffffff8142e000 D _sdata
ffffffff8142e000 D init_thread_union
ffffffff81430000 D __vsyscall_page
ffffffff81431000 D vdso_start
ffffffff81431f98 D vdso_end
ffffffff81432000 D mmlist_lock
ffffffff81432040 D tasklist_lock
ffffffff81432080 d softirq_vec
ffffffff81432140 d pidmap_lock
ffffffff81432180 D xtime_lock
ffffffff814321c0 d call_function
ffffffff81432200 d hash_lock
ffffffff81432240 D vm_stat
ffffffff81432380 d nr_files
ffffffff814323c0 D rename_lock
ffffffff81432400 d dcache_lru_lock
ffffffff81432440 D inode_sb_list_lock
ffffffff81432480 d inode_hash_lock
ffffffff814324c0 d bdev_lock
ffffffff81432500 d crc32table_le
ffffffff81434500 d crc32ctable_le
ffffffff81436500 d crc32table_be
ffffffff81439000 D init_level4_pgt
ffffffff8143a000 D level3_ident_pgt
ffffffff8143b000 D level3_kernel_pgt
ffffffff8143c000 D level2_fixmap_pgt
ffffffff8143d000 D level1_fixmap_pgt
ffffffff8143e000 D level2_ident_pgt
ffffffff8143f000 D level2_kernel_pgt
ffffffff81440000 D level2_spare_pgt
ffffffff81441000 D early_gdt_descr
ffffffff81441002 d early_gdt_descr_base
ffffffff81441010 D phys_base
ffffffff81441020 D init_task
ffffffff814416c0 d init_signals
ffffffff81441ae0 d init_sighand
ffffffff81442320 D loops_per_jiffy
ffffffff81442340 D envp_init
ffffffff81442460 d argv_init
ffffffff81442580 D init_uts_ns
ffffffff81442718 D root_mountflags
ffffffff81442740 D HYPERVISOR_shared_info
ffffffff81442748 D machine_to_phys_mapping
ffffffff81442750 d have_vcpu_info_placement
ffffffff81442760 d xen_panic_block
ffffffff81442778 d __xen_make_pud__
ffffffff81442780 d __xen_pud_val__
ffffffff81442788 d __xen_make_pmd__
ffffffff81442790 d __xen_pmd_val__
ffffffff81442798 d __xen_make_pgd__
ffffffff814427a0 d __xen_make_pte__
ffffffff814427a8 d __xen_pgd_val__
ffffffff814427b0 d __xen_pte_val__
ffffffff814427b8 d __xen_irq_enable__
ffffffff814427c0 d __xen_irq_disable__
ffffffff814427c8 d __xen_restore_fl__
ffffffff814427d0 d __xen_save_fl__
ffffffff814427d8 d xen_clockevent
ffffffff814427e0 d xen_swiotlb_dma_ops
ffffffff81442880 d x86_stack_ids
ffffffff814428c0 d irq0
ffffffff81442940 D kstack_depth_to_print
ffffffff81442944 D code_bytes
ffffffff81442948 d die_owner
ffffffff81442960 d nmi_desc
ffffffff814429a0 D _brk_end
ffffffff814429c0 d standard_io_resources
ffffffff81442c00 d code_resource
ffffffff81442c40 d data_resource
ffffffff81442c80 d bss_resource
ffffffff81442cb8 d reserve_low
ffffffff81442cc0 D x86_msi
ffffffff81442ce0 D x86_platform
ffffffff81442d40 D legacy_pic
ffffffff81442d60 D default_legacy_pic
ffffffff81442dc0 D null_legacy_pic
ffffffff81442e20 D i8259A_chip
ffffffff81442ed8 D cached_irq_mask
ffffffff81442ee0 d i8259_syscore_ops
ffffffff81442f40 d irq2
ffffffff81442fc0 d adapter_rom_resources
ffffffff81443120 d video_rom_resource
ffffffff81443160 d system_rom_resource
ffffffff814431a0 d extension_rom_resource
ffffffff814431e0 d rs.28005
ffffffff81443200 D pci_mem_start
ffffffff81443220 D x86_dma_fallback_dev
ffffffff81443498 D dma_ops
ffffffff814434a0 D ideal_nops
ffffffff814434c0 d smp_alt
ffffffff814434e0 d smp_alt_modules
ffffffff814434f0 d smp_mode
ffffffff81443500 D nommu_dma_ops
ffffffff81443580 d clocksource_tsc
ffffffff81443640 d time_cpufreq_notifier_block
ffffffff81443660 d tsc_irqwork
ffffffff814436b8 d tsc_start.23387
ffffffff814436c0 d rtc_device
ffffffff81443980 d rtc_resources
ffffffff814439f0 D sig_xstate_ia32_size
ffffffff814439f4 D sig_xstate_size
ffffffff81443a00 d i8237_syscore_ops
ffffffff81443a40 d ktype_percpu_entry
ffffffff81443a80 d ktype_cache
ffffffff81443ac0 d default_attrs
ffffffff81443b20 d cache_disable_0
ffffffff81443b40 d cache_disable_1
ffffffff81443b60 d subcaches
ffffffff81443b80 d type
ffffffff81443ba0 d level
ffffffff81443bc0 d coherency_line_size
ffffffff81443be0 d physical_line_partition
ffffffff81443c00 d ways_of_associativity
ffffffff81443c20 d number_of_sets
ffffffff81443c40 d size
ffffffff81443c60 d shared_cpu_map
ffffffff81443c80 d shared_cpu_list
ffffffff81443ca0 D nmi_idt_descr
ffffffff81443caa D idt_descr
ffffffff81443cc0 d hyperv_cs
ffffffff81443d80 d x86_pmu_format_group
ffffffff81443da0 d pmu
ffffffff81443e60 d pmc_reserve_mutex
ffffffff81443e80 d x86_pmu_attr_groups
ffffffff81443ea0 d x86_pmu_attr_group
ffffffff81443ec0 d x86_pmu_attrs
ffffffff81443ee0 d dev_attr_rdpmc
ffffffff81443f00 d amd_format_attr
ffffffff81443f40 d amd_f15_PMC3
ffffffff81443f60 d amd_f15_PMC53
ffffffff81443f80 d amd_f15_PMC20
ffffffff81443fa0 d amd_f15_PMC30
ffffffff81443fc0 d amd_f15_PMC50
ffffffff81443fe0 d amd_f15_PMC0
ffffffff81444000 d format_attr_event
ffffffff81444020 d format_attr_umask
ffffffff81444040 d format_attr_edge
ffffffff81444060 d format_attr_inv
ffffffff81444080 d format_attr_cmask
ffffffff814440a0 d p6_event_constraints
ffffffff81444180 d intel_p6_formats_attr
ffffffff814441c0 d format_attr_event
ffffffff814441e0 d format_attr_umask
ffffffff81444200 d format_attr_edge
ffffffff81444220 d format_attr_pc
ffffffff81444240 d format_attr_inv
ffffffff81444260 d format_attr_cmask
ffffffff81444280 D p4_event_aliases
ffffffff814442a0 d intel_p4_formats_attr
ffffffff814442c0 d p4_event_bind_map
ffffffff814447e0 d p4_pebs_bind_map
ffffffff81444840 d format_attr_cccr
ffffffff81444860 d format_attr_escr
ffffffff81444880 d format_attr_ht
ffffffff814448a0 D intel_snb_pebs_event_constraints
ffffffff81444ae0 D intel_westmere_pebs_event_constraints
ffffffff81444c60 D intel_nehalem_pebs_event_constraints
ffffffff81444de0 D intel_atom_pebs_event_constraints
ffffffff81444e60 D intel_core2_pebs_event_constraints
ffffffff81444f20 D bts_constraint
ffffffff81444f40 d intel_arch_formats_attr
ffffffff81444f80 d intel_arch3_formats_attr
ffffffff81444fe0 d format_attr_event
ffffffff81445000 d format_attr_umask
ffffffff81445020 d format_attr_edge
ffffffff81445040 d format_attr_pc
ffffffff81445060 d format_attr_inv
ffffffff81445080 d format_attr_cmask
ffffffff814450a0 d format_attr_any
ffffffff814450c0 d format_attr_offcore_rsp
ffffffff814450e0 D machine_check_vector
ffffffff81445100 d mcelog
ffffffff81445c20 d _rs.23551
ffffffff81445c40 d mce_chrdev_wait
ffffffff81445c60 d mce_trigger_work
ffffffff81445c80 d ratelimit.24027
ffffffff81445ca0 d mce_helper_argv
ffffffff81445cb0 d check_interval
ffffffff81445cc0 d mce_subsys
ffffffff81445d40 d mce_syscore_ops
ffffffff81445d80 d mce_chrdev_device
ffffffff81445de0 d mce_chrdev_read_mutex
ffffffff81445e00 d dev_attr_tolerant
ffffffff81445e40 d dev_attr_check_interval
ffffffff81445e80 d dev_attr_trigger
ffffffff81445ea0 d dev_attr_monarch_timeout
ffffffff81445ee0 d dev_attr_dont_log_ce
ffffffff81445f20 d dev_attr_ignore_ce
ffffffff81445f60 d dev_attr_cmci_disabled
ffffffff81445fa0 d severities
ffffffff81446240 d threshold_ktype
ffffffff81446280 d default_attrs
ffffffff814462a0 d interrupt_enable
ffffffff814462c0 d threshold_limit
ffffffff814462e0 d error_count
ffffffff81446300 D mce_threshold_vector
ffffffff81446320 d mtrr_mutex
ffffffff81446340 d mtrr_syscore_ops
ffffffff81446380 d perf_ibs
ffffffff81446430 D __acpi_register_gsi
ffffffff81446440 D machine_ops
ffffffff81446470 D reboot_type
ffffffff81446474 d reboot_default
ffffffff81446480 D smp_ops
ffffffff814464d8 d stopping_cpu
ffffffff814464e0 D smp_num_siblings
ffffffff81446500 d x86_cpu_hotplug_driver_mutex
ffffffff81446520 d current_node.27104
ffffffff81446540 D first_system_vector
ffffffff81446544 D boot_cpu_physical_apicid
ffffffff81446580 d lapic_clockevent
ffffffff81446640 d lapic_syscore_ops
ffffffff81446680 d lapic_resource
ffffffff814466c0 D apic_noop
ffffffff81446820 D sis_apic_bug
ffffffff81446840 d io_apic_ops
ffffffff81446860 d nr_irqs_gsi
ffffffff81446880 d ioapic_syscore_ops
ffffffff814468a8 d current_vector.31620
ffffffff814468ac d current_offset.31621
ffffffff814468b0 d ioapic_i8259
ffffffff814468c0 d msi_chip
ffffffff81446980 d hpet_msi_type
ffffffff81446a40 d ht_irq_chip
ffffffff81446b00 d apic_physflat
ffffffff81446c60 d apic_flat
ffffffff81446dc0 d early_console
ffffffff81446de0 d early_vga_console
ffffffff81446e38 d current_ypos
ffffffff81446e3c d max_ypos
ffffffff81446e40 d max_xpos
ffffffff81446e60 d early_serial_console
ffffffff81446eb8 d early_serial_base
ffffffff81446ec0 d hpet_clockevent
ffffffff81446f80 d clocksource_hpet
ffffffff81447040 d amd_nb_link_ids
ffffffff81447080 D pv_mmu_ops
ffffffff814471d0 D pv_apic_ops
ffffffff814471e0 D pv_cpu_ops
ffffffff81447340 D pv_irq_ops
ffffffff81447380 D pv_time_ops
ffffffff81447398 D pv_init_ops
ffffffff814473a0 D pv_info
ffffffff814473c0 d reserve_ioports
ffffffff81447400 D pv_lock_ops
ffffffff81447440 d swiotlb_dma_ops
ffffffff814474c0 d write_class
ffffffff81447520 d read_class
ffffffff81447560 d dir_class
ffffffff814475a0 d chattr_class
ffffffff814475e0 d signal_class
ffffffff81447600 d gart_dma_ops
ffffffff81447678 d iommu_fullflush
ffffffff81447680 d gart_syscore_ops
ffffffff814476c0 d gart_resource
ffffffff814476f8 d __vsmp_irq_enable__
ffffffff81447700 d __vsmp_irq_disable__
ffffffff81447708 d __vsmp_restore_fl__
ffffffff81447710 d __vsmp_save_fl__
ffffffff81447718 d is_vsmp
ffffffff81447740 D direct_gbpages
ffffffff81447760 d gate_vma
ffffffff81447810 D show_unhandled_signals
ffffffff81447820 D pgd_list
ffffffff81447840 D va_align
ffffffff81447880 D __userpte_alloc_gfp
ffffffff814478a0 d abi_root_table2
ffffffff81447920 d abi_table2
ffffffff814479a0 D ia32_signal_class
ffffffff814479c0 D ia32_read_class
ffffffff81447a00 D ia32_write_class
ffffffff81447a60 D ia32_chattr_class
ffffffff81447ac0 D ia32_dir_class
ffffffff81447b00 d default_dump_filter
ffffffff81447b20 D default_exec_domain
ffffffff81447b78 d exec_domains_lock
ffffffff81447b80 d exec_domains
ffffffff81447ba0 d ident_map
ffffffff81447ca0 D printk_ratelimit_state
ffffffff81447cc0 D console_suspend_enabled
ffffffff81447cc4 d printk_cpu
ffffffff81447cd0 D console_printk
ffffffff81447ce0 D log_wait
ffffffff81447cf8 d log_buf_len
ffffffff81447d00 d log_buf
ffffffff81447d08 d saved_console_loglevel
ffffffff81447d10 d console_sem
ffffffff81447d28 d new_text_line
ffffffff81447d2c d selected_console
ffffffff81447d30 d msg_level.30848
ffffffff81447d34 d preferred_console
ffffffff81447d40 d dump_list
ffffffff81447d60 d cpu_add_remove_lock
ffffffff81447d80 d cpu_hotplug
ffffffff81447db0 d cpu_hotplug_pm_callback_nb.23875
ffffffff81447dc8 d firsttime.28209
ffffffff81447de0 D softirq_to_name
ffffffff81447e40 D iomem_resource
ffffffff81447e80 D ioport_resource
ffffffff81447eb8 d resource_lock
ffffffff81447ec0 d muxed_resource_wait
ffffffff81447ee0 d sysctl_base_table
ffffffff81448060 d kern_table
ffffffff81448c60 d vm_table
ffffffff81449660 d fs_table
ffffffff81449ae0 d debug_table
ffffffff81449b60 d one
ffffffff81449b64 d maxolduid
ffffffff81449b68 d ten_thousand
ffffffff81449b6c d two
ffffffff81449b70 d ngroups_max
ffffffff81449b74 d one_hundred
ffffffff81449b78 d one_ul
ffffffff81449b80 d dirty_bytes_min
ffffffff81449b88 d three
ffffffff81449b8c d max_extfrag_threshold
ffffffff81449b90 d min_percpu_pagelist_fract
ffffffff81449b94 D file_caps_enabled
ffffffff81449ba0 D root_user
ffffffff81449c00 D init_user_ns
ffffffff8144a040 d ratelimit_state.32615
ffffffff8144a060 D poweroff_cmd
ffffffff8144a160 D uts_sem
ffffffff8144a180 D C_A_D
ffffffff8144a184 D fs_overflowgid
ffffffff8144a188 D fs_overflowuid
ffffffff8144a18c D overflowgid
ffffffff8144a190 D overflowuid
ffffffff8144a1a0 d reboot_mutex
ffffffff8144a1c0 d cad_work.32196
ffffffff8144a1e0 d envp.32694
ffffffff8144a200 D usermodehelper_table
ffffffff8144a2c0 D modprobe_path
ffffffff8144a3c0 d envp.29705
ffffffff8144a3e0 d umhelper_sem
ffffffff8144a400 d usermodehelper_disabled_waitq
ffffffff8144a418 d usermodehelper_disabled
ffffffff8144a41c d usermodehelper_bset
ffffffff8144a424 d usermodehelper_inheritable
ffffffff8144a430 d running_helpers_waitq
ffffffff8144a450 d workqueues
ffffffff8144a460 D init_pid_ns
ffffffff8144acb0 D pid_max_max
ffffffff8144acb4 D pid_max_min
ffffffff8144acb8 D pid_max
ffffffff8144acc0 D init_struct_pid
ffffffff8144ad10 d pidhash_shift
ffffffff8144ad20 D text_mutex
ffffffff8144ad40 D module_ktype
ffffffff8144ad70 D param_ops_string
ffffffff8144ad90 D param_array_ops
ffffffff8144adb0 D param_ops_bint
ffffffff8144add0 D param_ops_invbool
ffffffff8144adf0 D param_ops_bool
ffffffff8144ae10 D param_ops_charp
ffffffff8144ae30 D param_ops_ulong
ffffffff8144ae50 D param_ops_long
ffffffff8144ae70 D param_ops_uint
ffffffff8144ae90 D param_ops_int
ffffffff8144aeb0 D param_ops_ushort
ffffffff8144aed0 D param_ops_short
ffffffff8144aef0 D param_ops_byte
ffffffff8144af20 d param_lock
ffffffff8144af40 d kmalloced_params
ffffffff8144af50 d kthread_create_list
ffffffff8144af60 D clock_posix_cpu
ffffffff8144afc0 D init_nsproxy
ffffffff8144b000 D reboot_notifier_list
ffffffff8144b040 d kernel_attr_group
ffffffff8144b060 d notes_attr
ffffffff8144b0a0 d kernel_attrs
ffffffff8144b0c0 d fscaps_attr
ffffffff8144b0e0 d uevent_seqnum_attr
ffffffff8144b100 d uevent_helper_attr
ffffffff8144b120 D init_cred
ffffffff8144b190 d async_running
ffffffff8144b1a0 d next_cookie
ffffffff8144b1b0 d async_pending
ffffffff8144b1c0 d async_done
ffffffff8144b1e0 D init_groups
ffffffff8144b280 D cpu_cgroup_subsys
ffffffff8144b360 D cpuacct_subsys
ffffffff8144b440 D sysctl_sched_rt_runtime
ffffffff8144b444 D sysctl_sched_rt_period
ffffffff8144b460 D sched_domains_mutex
ffffffff8144b480 d files
ffffffff8144b6c0 d default_relax_domain_level
ffffffff8144b6e0 d cpu_files
ffffffff8144bb60 d default_topology
ffffffff8144bc60 d dev_attr_sched_mc_power_savings
ffffffff8144bc80 d task_groups
ffffffff8144bca0 d rt_constraints_mutex
ffffffff8144bcc0 d mutex.41753
ffffffff8144bce0 d cfs_constraints_mutex
ffffffff8144bd00 D sysctl_sched_cfs_bandwidth_slice
ffffffff8144bd04 D normalized_sysctl_sched_wakeup_granularity
ffffffff8144bd08 D sysctl_sched_wakeup_granularity
ffffffff8144bd0c D normalized_sysctl_sched_min_granularity
ffffffff8144bd10 D sysctl_sched_min_granularity
ffffffff8144bd14 D sysctl_sched_tunable_scaling
ffffffff8144bd18 D normalized_sysctl_sched_latency
ffffffff8144bd1c D sysctl_sched_latency
ffffffff8144bd20 d shares_mutex
ffffffff8144bd40 d task_groups
ffffffff8144bd60 d cpu_dma_pm_qos
ffffffff8144bdc0 d network_lat_pm_qos
ffffffff8144be20 d network_throughput_pm_qos
ffffffff8144be80 d cpu_dma_constraints
ffffffff8144bec0 d network_lat_constraints
ffffffff8144bf00 d network_tput_constraints
ffffffff8144bf40 d cpu_dma_lat_notifier
ffffffff8144bf80 d network_lat_notifier
ffffffff8144bfc0 d network_throughput_notifier
ffffffff8144c000 D pm_async_enabled
ffffffff8144c020 D pm_mutex
ffffffff8144c040 d pm_chain_head
ffffffff8144c070 d attr_group
ffffffff8144c0a0 d g
ffffffff8144c0c0 d state_attr
ffffffff8144c0e0 d pm_async_attr
ffffffff8144c100 d wakeup_count_attr
ffffffff8144c140 d timekeeping_syscore_ops
ffffffff8144c180 D tick_usec
ffffffff8144c188 d time_status
ffffffff8144c190 d time_maxerror
ffffffff8144c198 d time_esterror
ffffffff8144c1a0 d time_constant
ffffffff8144c1c0 d sync_cmos_work
ffffffff8144c220 d watchdog_list
ffffffff8144c240 d watchdog_work
ffffffff8144c260 d clocksource_mutex
ffffffff8144c280 d clocksource_list
ffffffff8144c2a0 d clocksource_subsys
ffffffff8144c320 d device_clocksource
ffffffff8144c5a0 d dev_attr_current_clocksource
ffffffff8144c5c0 d dev_attr_available_clocksource
ffffffff8144c600 D clocksource_jiffies
ffffffff8144c6c0 D clock_posix_dynamic
ffffffff8144c720 d alarmtimer_rtc_interface
ffffffff8144c760 d alarmtimer_driver
ffffffff8144c800 d clockevent_devices
ffffffff8144c810 d clockevents_released
ffffffff8144c820 d tick_notifier
ffffffff8144c838 D max_lock_depth
ffffffff8144c840 d dma_chan_busy
ffffffff8144c8c0 D setup_max_cpus
ffffffff8144c8e0 D module_uevent
ffffffff8144c920 D module_mutex
ffffffff8144c940 d module_notify_list
ffffffff8144c970 d modules
ffffffff8144c980 d module_wq
ffffffff8144c998 d module_addr_min
ffffffff8144c9a0 d modinfo_version
ffffffff8144c9e0 d modinfo_srcversion
ffffffff8144ca20 d modinfo_initstate
ffffffff8144ca60 d modinfo_coresize
ffffffff8144caa0 d modinfo_initsize
ffffffff8144cae0 d modinfo_taint
ffffffff8144cb20 d modinfo_refcnt
ffffffff8144cb60 D acct_parm
ffffffff8144cb70 d acct_list
ffffffff8144cb80 d cgroup_mutex
ffffffff8144cba0 d cgroup_rmdir_waitq
ffffffff8144cbb8 d css_set_lock
ffffffff8144cbc0 d roots
ffffffff8144cbe0 d subsys
ffffffff8144cde0 d release_list
ffffffff8144ce00 d release_agent_work
ffffffff8144ce20 d files
ffffffff8144d1e0 d cft_release_agent
ffffffff8144d2a0 d cgroup_root_mutex
ffffffff8144d2c0 d cgroup_backing_dev_info
ffffffff8144d520 d cgroup_fs_type
ffffffff8144d560 D freezer_subsys
ffffffff8144d640 d files
ffffffff8144d700 D cpuset_subsys
ffffffff8144d7e0 d callback_mutex
ffffffff8144d800 d files
ffffffff8144e040 d cft_memory_pressure_enabled
ffffffff8144e100 d rebuild_sched_domains_work
ffffffff8144e120 d top_cpuset
ffffffff8144e1a8 d warnings.26996
ffffffff8144e1c0 d cpuset_fs_type
ffffffff8144e200 d kern_path
ffffffff8144e220 d pid_ns_ctl_table
ffffffff8144e2a0 d pid_caches_mutex
ffffffff8144e2c0 d pid_caches_lh
ffffffff8144e2e0 d stop_cpus_mutex
ffffffff8144e300 D audit_cmd_mutex
ffffffff8144e320 D audit_sig_pid
ffffffff8144e324 D audit_sig_uid
ffffffff8144e328 d audit_failure
ffffffff8144e32c d audit_backlog_limit
ffffffff8144e330 d audit_backlog_wait_time
ffffffff8144e340 d audit_backlog_wait
ffffffff8144e360 d audit_freelist
ffffffff8144e370 d kauditd_wait
ffffffff8144e3a0 D audit_filter_mutex
ffffffff8144e3c0 D audit_filter_list
ffffffff8144e420 d prio_high
ffffffff8144e428 d prio_low
ffffffff8144e440 d audit_rules_list
ffffffff8144e4a0 d prune_list
ffffffff8144e4b0 d tree_list
ffffffff8144e4c0 D nr_irqs
ffffffff8144e4d0 d irq_desc_tree
ffffffff8144e4e0 d sparse_irq_lock
ffffffff8144e500 d count.15636
ffffffff8144e520 d poll_spurious_irq_timer
ffffffff8144e560 D dummy_irq_chip
ffffffff8144e620 D no_irq_chip
ffffffff8144e6e0 d probing_active
ffffffff8144e700 d irq_pm_syscore_ops
ffffffff8144e740 D rcu_bh_state
ffffffff8144e8c0 D rcu_sched_state
ffffffff8144ea40 d blimit
ffffffff8144ea44 d qhimark
ffffffff8144ea48 d qlowmark
ffffffff8144ea60 d rcu_barrier_mutex
ffffffff8144ea80 d rcu_panic_block
ffffffff8144eaa0 d uts_kern_table
ffffffff8144ec20 d uts_root_table
ffffffff8144eca0 d hostname_poll
ffffffff8144ecc0 d domainname_poll
ffffffff8144ece0 d family
ffffffff8144ed60 d taskstats_ops
ffffffff8144eda0 d cgroupstats_ops
ffffffff8144ede0 d pmus_lock
ffffffff8144ee00 d pmu_bus
ffffffff8144ee80 d pmus
ffffffff8144eea0 d perf_swevent
ffffffff8144ef60 d perf_cpu_clock
ffffffff8144f020 d perf_task_clock
ffffffff8144f0d0 d perf_reboot_notifier
ffffffff8144f100 d pmu_dev_attrs
ffffffff8144f140 d callchain_mutex
ffffffff8144f160 d nr_bp_mutex
ffffffff8144f180 d bp_task_head
ffffffff8144f1a0 d perf_breakpoint
ffffffff8144f250 d hw_breakpoint_exceptions_nb
ffffffff8144f280 D sysctl_oom_dump_tasks
ffffffff8144f2a0 d oom_notify_list
ffffffff8144f2e0 d oom_rs.26412
ffffffff8144f300 D hashdist
ffffffff8144f320 D zonelists_mutex
ffffffff8144f340 D numa_zonelist_order
ffffffff8144f350 D min_free_kbytes
ffffffff8144f354 D sysctl_lowmem_reserve_ratio
ffffffff8144f360 d nopage_rs
ffffffff8144f380 d zl_order_mutex.32698
ffffffff8144f3a0 d zonelist_order_name
ffffffff8144f3b8 D dirty_expire_interval
ffffffff8144f3bc D dirty_writeback_interval
ffffffff8144f3c0 D vm_dirty_ratio
ffffffff8144f3c4 D dirty_background_ratio
ffffffff8144f3c8 d ratelimit_pages
ffffffff8144f3e0 D sysctl_min_slab_ratio
ffffffff8144f3e4 D sysctl_min_unmapped_ratio
ffffffff8144f3e8 D vm_swappiness
ffffffff8144f400 d shrinker_rwsem
ffffffff8144f420 d shrinker_list
ffffffff8144f440 d dev_attr_scan_unevictable_pages
ffffffff8144f460 d shmem_swaplist_mutex
ffffffff8144f480 d shmem_swaplist
ffffffff8144f490 d shmem_xattr_handlers
ffffffff8144f4c0 d shmem_fs_type
ffffffff8144f500 D bdi_pending_list
ffffffff8144f510 D bdi_list
ffffffff8144f520 D noop_backing_dev_info
ffffffff8144f780 D default_backing_dev_info
ffffffff8144f9e0 d bdi_dev_attrs
ffffffff8144fa60 d congestion_wqh
ffffffff8144faa0 d pcpu_alloc_mutex
ffffffff8144fac0 d warn_limit.20211
ffffffff8144fae0 d pcpu_reclaim_work
ffffffff8144fb00 D protection_map
ffffffff8144fb80 d mm_all_locks_mutex
ffffffff8144fba0 D vmlist_lock
ffffffff8144fbb0 d vmap_area_list
ffffffff8144fbc0 d vmap_block_tree
ffffffff8144fbe0 D init_mm
ffffffff8144ff40 D swapper_space
ffffffff81450000 d swap_backing_dev_info
ffffffff81450260 d swap_list
ffffffff81450280 d swapon_mutex
ffffffff814502a0 d proc_poll_wait
ffffffff814502c0 d pools_lock
ffffffff814502e0 d dev_attr_pools
ffffffff81450300 d hstate_attr_group
ffffffff81450318 d htlb_alloc_mask
ffffffff81450320 d per_node_hstate_attr_group
ffffffff81450340 d hugetlb_instantiation_mutex.27899
ffffffff81450360 d hstate_attrs
ffffffff814503a0 d per_node_hstate_attrs
ffffffff814503c0 d nr_hugepages_attr
ffffffff814503e0 d nr_overcommit_hugepages_attr
ffffffff81450400 d free_hugepages_attr
ffffffff81450420 d resv_hugepages_attr
ffffffff81450440 d surplus_hugepages_attr
ffffffff81450460 d nr_hugepages_mempolicy_attr
ffffffff81450480 d default_policy
ffffffff814504e0 D sysctl_extfrag_threshold
ffffffff81450500 d dev_attr_compact
ffffffff81450520 D malloc_sizes
ffffffff81450700 d cache_cache
ffffffff814507b0 d slab_early_init
ffffffff814507c0 d initarray_generic
ffffffff814507e0 d cache_chain_mutex
ffffffff81450800 d khugepaged_attr_group
ffffffff81450820 d hugepage_attr_group
ffffffff81450840 d khugepaged_wait
ffffffff81450860 d khugepaged_mutex
ffffffff81450880 d khugepaged_scan
ffffffff814508a0 d khugepaged_attr
ffffffff814508e0 d hugepage_attr
ffffffff81450900 d khugepaged_defrag_attr
ffffffff81450920 d khugepaged_max_ptes_none_attr
ffffffff81450940 d pages_to_scan_attr
ffffffff81450960 d pages_collapsed_attr
ffffffff81450980 d full_scans_attr
ffffffff814509a0 d scan_sleep_millisecs_attr
ffffffff814509c0 d alloc_sleep_millisecs_attr
ffffffff814509e0 d enabled_attr
ffffffff81450a00 d defrag_attr
ffffffff81450a20 d error_states
ffffffff81450bc0 D files_stat
ffffffff81450be0 d files_lglock_lg_cpu_notifier
ffffffff81450c00 D super_blocks
ffffffff81450c20 D directly_mappable_cdev_bdi
ffffffff81450e80 d chrdevs_lock
ffffffff81450ea0 d ktype_cdev_dynamic
ffffffff81450ee0 d ktype_cdev_default
ffffffff81450f08 d warncount.28692
ffffffff81450f20 D core_pattern
ffffffff81450fa0 d binfmt_lock
ffffffff81450fb0 d formats
ffffffff81450fc0 d call_count
ffffffff81450fe0 D pipe_min_size
ffffffff81450fe4 D pipe_max_size
ffffffff81451000 d pipe_fs_type
ffffffff81451040 D dentry_stat
ffffffff81451060 d _rs.30393
ffffffff81451080 D init_files
ffffffff81451340 D sysctl_nr_open_max
ffffffff81451344 D sysctl_nr_open_min
ffffffff81451348 d file_systems_lock
ffffffff81451350 d vfsmount_lock_lg_cpu_notifier
ffffffff81451368 d mnt_group_start
ffffffff81451370 d cursor_name.24820
ffffffff81451380 D init_fs
ffffffff814513c0 d bio_dirty_work
ffffffff814513e0 d bio_slab_lock
ffffffff81451400 d bd_type
ffffffff81451440 d all_bdevs
ffffffff81451460 d destroy_list
ffffffff81451470 d destroy_waitq
ffffffff814514a0 d dnotify_mark_mutex
ffffffff814514c0 d dnotify_fsnotify_ops
ffffffff81451500 D inotify_table
ffffffff81451600 D epoll_table
ffffffff81451680 d epmutex
ffffffff814516a0 d visited_list
ffffffff814516b0 d tfile_check_list
ffffffff814516c0 d long_max
ffffffff814516e0 d anon_inode_fs_type
ffffffff81451720 d cancel_list
ffffffff81451740 D aio_max_nr
ffffffff81451750 d fput_head
ffffffff81451760 d fput_work
ffffffff81451780 D lease_break_time
ffffffff81451784 D leases_enable
ffffffff81451790 d blocked_list
ffffffff814517a0 d file_lock_list
ffffffff814517b0 D compat_log
ffffffff814517c0 d ioctl_pointer
ffffffff81451f80 d script_format
ffffffff81451fc0 d elf_format
ffffffff81452000 d compat_elf_format
ffffffff81452040 D proc_root
ffffffff814520e0 d proc_fs_type
ffffffff81452120 d ns_entries
ffffffff81452140 d sysctl_table_root
ffffffff814521c0 d root_table
ffffffff81452240 d proc_net_ns_ops
ffffffff81452280 d kclist_lock
ffffffff81452290 d kclist_head
ffffffff814522a0 d kcore_need_update
ffffffff814522c0 d sysfs_backing_dev_info
ffffffff81452520 d sysfs_workq_mutex
ffffffff81452540 d sysfs_workq
ffffffff81452560 D sysfs_mutex
ffffffff81452580 D sysfs_root
ffffffff81452600 d sysfs_fs_type
ffffffff81452640 d sysfs_bin_lock
ffffffff81452660 d configfs_backing_dev_info
ffffffff814528c0 D configfs_rename_sem
ffffffff814528e0 D configfs_symlink_mutex
ffffffff81452900 d configfs_root_group
ffffffff81452980 d configfs_fs_type
ffffffff814529c0 d configfs_root
ffffffff81452a20 d ___modver_attr
ffffffff81452a80 d allocated_ptys_lock
ffffffff81452aa0 d pty_limit
ffffffff81452aa4 d pty_reserve
ffffffff81452ac0 d devpts_fs_type
ffffffff81452b00 d pty_root_table
ffffffff81452b80 d pty_kern_table
ffffffff81452c00 d pty_table
ffffffff81452d00 d pty_limit_max
ffffffff81452d20 d ramfs_backing_dev_info
ffffffff81452f80 d ramfs_fs_type
ffffffff81452fc0 d rootfs_fs_type
ffffffff81453000 d hugetlbfs_backing_dev_info
ffffffff81453260 d hugetlbfs_fs_type
ffffffff814532a0 d tables
ffffffff814532c0 d default_table
ffffffff81453300 d allpstore
ffffffff81453320 d pstore_fs_type
ffffffff81453360 d kmsg_bytes
ffffffff81453380 d pstore_dumper
ffffffff814533a0 d pstore_timer
ffffffff814533e0 d pstore_work
ffffffff81453400 D nr_ipc_ns
ffffffff81453420 D init_ipc_ns
ffffffff814535a0 d ipcns_chain
ffffffff814535e0 d ipc_root_table
ffffffff81453660 d ipc_kern_table
ffffffff814538e0 d one
ffffffff81453900 d mqueue_fs_type
ffffffff81453940 d mq_sysctl_root
ffffffff814539c0 d mq_sysctl_dir
ffffffff81453a40 d mq_sysctls
ffffffff81453b40 d msg_max_limit_min
ffffffff81453b44 d msg_max_limit_max
ffffffff81453b48 d msg_maxsize_limit_min
ffffffff81453b4c d msg_maxsize_limit_max
ffffffff81453b60 D dac_mmap_min_addr
ffffffff81453b80 D devices_subsys
ffffffff81453c60 d dev_cgroup_files
ffffffff81453ea0 d devcgroup_mutex
ffffffff81453ec0 D crypto_chain
ffffffff81453f00 D crypto_alg_sem
ffffffff81453f20 D crypto_alg_list
ffffffff81453f30 d crypto_template_list
ffffffff81453f40 d chainiv_tmpl
ffffffff81453fc0 d eseqiv_tmpl
ffffffff81454040 d cryptomgr_notifier
ffffffff81454060 d aes_alg
ffffffff81454180 d alg
ffffffff81454300 d crypto_default_rng_lock
ffffffff81454320 d krng_alg
ffffffff81454440 d elv_list
ffffffff81454460 d elv_ktype
ffffffff814544a0 D blk_queue_ktype
ffffffff814544e0 d default_attrs
ffffffff814545a0 d queue_requests_entry
ffffffff814545c0 d queue_ra_entry
ffffffff814545e0 d queue_max_hw_sectors_entry
ffffffff81454600 d queue_max_sectors_entry
ffffffff81454620 d queue_max_segments_entry
ffffffff81454640 d queue_max_integrity_segments_entry
ffffffff81454660 d queue_max_segment_size_entry
ffffffff81454680 d queue_iosched_entry
ffffffff814546a0 d queue_hw_sector_size_entry
ffffffff814546c0 d queue_logical_block_size_entry
ffffffff814546e0 d queue_physical_block_size_entry
ffffffff81454700 d queue_io_min_entry
ffffffff81454720 d queue_io_opt_entry
ffffffff81454740 d queue_discard_granularity_entry
ffffffff81454760 d queue_discard_max_entry
ffffffff81454780 d queue_discard_zeroes_data_entry
ffffffff814547a0 d queue_nonrot_entry
ffffffff814547c0 d queue_nomerges_entry
ffffffff814547e0 d queue_rq_affinity_entry
ffffffff81454800 d queue_iostats_entry
ffffffff81454820 d queue_random_entry
ffffffff81454840 D blk_iopoll_enabled
ffffffff81454860 D block_class
ffffffff814548e0 d block_class_lock
ffffffff81454900 d ext_devt_mutex
ffffffff81454920 d disk_events_attrs
ffffffff81454940 d disk_events_mutex
ffffffff81454960 d disk_events
ffffffff81454980 d disk_type
ffffffff814549b0 d disk_attr_groups
ffffffff814549c0 d disk_attr_group
ffffffff814549e0 d disk_attrs
ffffffff81454a40 d dev_attr_range
ffffffff81454a60 d dev_attr_ext_range
ffffffff81454a80 d dev_attr_removable
ffffffff81454aa0 d dev_attr_ro
ffffffff81454ac0 d dev_attr_size
ffffffff81454ae0 d dev_attr_alignment_offset
ffffffff81454b00 d dev_attr_discard_alignment
ffffffff81454b20 d dev_attr_capability
ffffffff81454b40 d dev_attr_stat
ffffffff81454b60 d dev_attr_inflight
ffffffff81454b80 d _rs.28678
ffffffff81454ba0 D part_type
ffffffff81454be0 d dev_attr_whole_disk
ffffffff81454c00 d part_attr_groups
ffffffff81454c10 d part_attr_group
ffffffff81454c40 d part_attrs
ffffffff81454ca0 d dev_attr_partition
ffffffff81454cc0 d dev_attr_start
ffffffff81454ce0 d dev_attr_size
ffffffff81454d00 d dev_attr_ro
ffffffff81454d20 d dev_attr_alignment_offset
ffffffff81454d40 d dev_attr_discard_alignment
ffffffff81454d60 d dev_attr_stat
ffffffff81454d80 d dev_attr_inflight
ffffffff81454da0 D warn_no_part
ffffffff81454dc0 d bsg_mutex
ffffffff81454de0 D blkio_files
ffffffff81455620 D blkio_subsys
ffffffff81455700 D blkio_root_cgroup
ffffffff81455740 d blkio_list
ffffffff81455760 d elevator_noop
ffffffff81455860 d blkio_policy_cfq
ffffffff814558c0 d iosched_cfq
ffffffff814559b8 d cfq_slice_async
ffffffff814559bc d cfq_slice_idle
ffffffff814559c0 d cfq_group_idle
ffffffff814559e0 d cfq_attrs
ffffffff81455b80 d module_bug_list
ffffffff81455ba0 d dynamic_kobj_ktype
ffffffff81455be0 d kset_ktype
ffffffff81455c20 d uevent_sock_mutex
ffffffff81455c40 d uevent_sock_list
ffffffff81455c60 d uevent_net_ops
ffffffff81455c98 d delay_fn
ffffffff81455ca0 D inat_avx_tables
ffffffff814560a0 D inat_group_tables
ffffffff814564a0 D inat_escape_tables
ffffffff81456520 D debug_locks
ffffffff81456524 d count.17762
ffffffff81456540 d ___modver_attr
ffffffff814565a0 d percpu_counters_lock
ffffffff814565c0 d percpu_counters
ffffffff81456600 d pci_cfg_wait
ffffffff81456620 D pci_root_buses
ffffffff81456630 d pci_host_bridges
ffffffff81456640 d pcibus_class
ffffffff814566c0 D bus_attr_resource_alignment
ffffffff814566e0 D pcibios_max_latency
ffffffff814566e8 D pci_hotplug_mem_size
ffffffff814566f0 D pci_hotplug_io_size
ffffffff814566f8 D pci_cardbus_mem_size
ffffffff81456700 D pci_cardbus_io_size
ffffffff81456708 D pci_domains_supported
ffffffff81456720 D pci_power_names
ffffffff81456760 d pci_pme_list_mutex
ffffffff81456780 d pci_pme_list
ffffffff814567a0 d pci_pme_work
ffffffff81456800 D pci_bus_type
ffffffff81456880 d driver_attr_new_id
ffffffff814568a0 d driver_attr_remove_id
ffffffff814568c0 d pci_compat_driver
ffffffff814569c0 D pci_bus_sem
ffffffff814569e0 D vga_attr
ffffffff81456a00 D pcibus_dev_attrs
ffffffff81456a80 D pci_dev_attrs
ffffffff81456ce0 D pci_bus_attrs
ffffffff81456d20 d pci_remove_rescan_mutex
ffffffff81456d40 d pci_config_attr
ffffffff81456d80 d pcie_config_attr
ffffffff81456dc0 d reset_attr
ffffffff81456de0 d pci_slot_ktype
ffffffff81456e20 d pci_slot_default_attrs
ffffffff81456e40 d pci_slot_attr_address
ffffffff81456e60 d pci_slot_attr_max_speed
ffffffff81456e80 d pci_slot_attr_cur_speed
ffffffff81456ea0 d via_vlink_dev_lo
ffffffff81456ea4 d via_vlink_dev_hi
ffffffff81456ec0 d aspm_lock
ffffffff81456ee0 d link_list
ffffffff81456ef0 d aspm_support_enabled
ffffffff81456f00 d __param_ops_policy
ffffffff81456f20 D pcie_ports_auto
ffffffff81456f40 d pcie_portdriver
ffffffff81457040 d pcie_portdrv_err_handler
ffffffff81457080 D pcie_port_bus_type
ffffffff81457100 d aer_correctable_error_string
ffffffff81457180 d aer_uncorrectable_error_string
ffffffff81457240 d aer_recover_ring
ffffffff814572e0 d aer_recover_work
ffffffff81457300 d aerdriver
ffffffff814573c0 d aer_error_handlers
ffffffff81457400 d pcie_pme_driver
ffffffff814574c0 d ioapic_driver
ffffffff814575c0 D msi_irq_default_attrs
ffffffff814575d0 d pci_msi_enable
ffffffff814575e0 d msi_irq_ktype
ffffffff81457620 d mode_attribute
ffffffff81457640 d acpi_pci_bus
ffffffff81457680 d acpi_pci_platform_pm
ffffffff814576c0 d pci_acpi_pm_notify_mtx
ffffffff814576e0 d acpi_attr_group
ffffffff81457700 d smbios_attr_group
ffffffff81457720 d acpi_attributes
ffffffff81457740 d smbios_attributes
ffffffff81457760 d acpi_attr_label
ffffffff81457780 d acpi_attr_index
ffffffff814577a0 d smbios_attr_label
ffffffff814577c0 d smbios_attr_instance
ffffffff814577e0 d fb_notifier_list
ffffffff81457820 d registration_lock
ffffffff81457840 d device_attrs
ffffffff814579c0 d vga_font_is_default
ffffffff814579e0 d ega_console_resource.22542
ffffffff81457a20 d mda1_console_resource.22543
ffffffff81457a60 d mda2_console_resource.22544
ffffffff81457aa0 d ega_console_resource.22546
ffffffff81457ae0 d vga_console_resource.22547
ffffffff81457b20 d cga_console_resource.22554
ffffffff81457b60 d fbcon_softback_size
ffffffff81457b64 d last_fb_vc
ffffffff81457b68 d fbcon_is_default
ffffffff81457b70 d fbcon_event_notifier
ffffffff81457ba0 d device_attrs
ffffffff81457c00 d info_idx
ffffffff81457c04 d logo_shown
ffffffff81457c20 d palette_cmap
ffffffff81457c48 d primary_device
ffffffff81457c60 d vesafb_driver
ffffffff81457d00 d vesafb_ops
ffffffff81457dc0 d acpi_ioremap_lock
ffffffff81457de0 d acpi_ioremaps
ffffffff81457df0 d acpi_enforce_resources
ffffffff81457e00 d nvs_region_list
ffffffff81457e10 d tts_notifier
ffffffff81457e30 d __param_ops_bfs
ffffffff81457e50 d __param_ops_gts
ffffffff81457e70 D acpi_bus_event_queue
ffffffff81457e90 D acpi_bus_event_list
ffffffff81457ea0 d acpi_bus_notify_list
ffffffff81457ed0 d sb_uuid_str
ffffffff81457f00 d bus_type_sem
ffffffff81457f20 d bus_type_list
ffffffff81457f30 D acpi_bus_type
ffffffff81457fb0 D acpi_wakeup_device_list
ffffffff81457fc0 D acpi_device_lock
ffffffff81457fe0 d acpi_bus_id_list
ffffffff81457ff0 d dev_attr_path
ffffffff81458010 d dev_attr_hid
ffffffff81458030 d dev_attr_modalias
ffffffff81458050 d dev_attr_eject
ffffffff81458070 d acpi_ec_driver
ffffffff814581e0 d acpi_pci_roots
ffffffff814581f0 d osc_lock
ffffffff81458210 d acpi_pci_root_driver
ffffffff81458380 d pci_osc_uuid_str
ffffffff814583b0 d acpi_link_list
ffffffff814583c0 d acpi_irq_penalty
ffffffff814587c0 d acpi_link_lock
ffffffff814587e0 d acpi_irq_balance
ffffffff814587f0 d irqrouter_syscore_ops
ffffffff81458820 d acpi_pci_link_driver
ffffffff81458990 d acpi_prt_list
ffffffff814589a0 d acpi_power_driver
ffffffff81458b10 d acpi_chain_head
ffffffff81458b40 d acpi_event_genl_family
ffffffff81458bb0 d acpi_event_mcgrp
ffffffff81458be0 d interrupt_stats_attr_group
ffffffff81458c00 d acpi_table_attr_list
ffffffff81458c10 d __param_ops_acpica_version
ffffffff81458c30 d pxm_to_node_map
ffffffff81459030 d node_to_pxm_map
ffffffff81459430 d cm_sbs_mutex
ffffffff81459450 d acpi_sleep_dispatch
ffffffff81459480 D acpi_rs_convert_ext_address64
ffffffff814594a0 D acpi_rs_convert_address64
ffffffff814594c0 D acpi_rs_convert_address32
ffffffff814594e0 D acpi_rs_convert_address16
ffffffff81459500 d acpi_rs_convert_general_flags
ffffffff81459520 d acpi_rs_convert_mem_flags
ffffffff81459540 d acpi_rs_convert_io_flags
ffffffff81459550 D acpi_gbl_convert_resource_serial_bus_dispatch
ffffffff81459570 D acpi_gbl_get_resource_dispatch
ffffffff81459670 D acpi_gbl_set_resource_dispatch
ffffffff81459710 D acpi_rs_set_start_dpf
ffffffff81459740 D acpi_rs_get_start_dpf
ffffffff81459758 D acpi_rs_convert_end_tag
ffffffff81459760 D acpi_rs_convert_end_dpf
ffffffff81459770 D acpi_rs_convert_generic_reg
ffffffff81459780 D acpi_rs_convert_fixed_io
ffffffff81459790 D acpi_rs_convert_io
ffffffff814597b0 D acpi_rs_convert_fixed_dma
ffffffff814597c0 D acpi_rs_convert_dma
ffffffff814597e0 D acpi_rs_convert_ext_irq
ffffffff81459810 D acpi_rs_set_irq
ffffffff81459850 D acpi_rs_get_irq
ffffffff81459870 D acpi_rs_set_vendor
ffffffff81459890 D acpi_rs_get_vendor_large
ffffffff814598a0 D acpi_rs_get_vendor_small
ffffffff814598b0 D acpi_rs_convert_fixed_memory32
ffffffff814598c0 D acpi_rs_convert_memory32
ffffffff814598d0 D acpi_rs_convert_memory24
ffffffff814598e0 D acpi_rs_convert_uart_serial_bus
ffffffff81459940 D acpi_rs_convert_spi_serial_bus
ffffffff81459990 D acpi_rs_convert_i2c_serial_bus
ffffffff814599d0 D acpi_rs_convert_gpio
ffffffff81459a20 D acpi_gbl_region_types
ffffffff81459a70 D acpi_gbl_fixed_event_info
ffffffff81459a90 D acpi_gbl_bit_register_info
ffffffff81459ae0 D acpi_gbl_highest_dstate_names
ffffffff81459b00 D acpi_gbl_lowest_dstate_names
ffffffff81459b30 D acpi_gbl_sleep_state_names
ffffffff81459b60 D acpi_gbl_shutdown
ffffffff81459b64 D acpi_dbg_level
ffffffff81459b68 D acpi_gbl_use_default_register_widths
ffffffff81459b69 D acpi_gbl_create_osi_method
ffffffff81459b70 D acpi_gbl_exception_names_ctrl
ffffffff81459be0 D acpi_gbl_exception_names_aml
ffffffff81459cf0 D acpi_gbl_exception_names_tbl
ffffffff81459d20 D acpi_gbl_exception_names_pgm
ffffffff81459d70 D acpi_gbl_exception_names_env
ffffffff81459e60 d acpi_default_supported_interfaces
ffffffff81459f80 d acpi_hed_notify_list
ffffffff81459fb0 d acpi_hed_driver
ffffffff8145a120 d acpi_hed_ids
ffffffff8145a160 D apei_resources_all
ffffffff8145a180 d whea_uuid_str.26117
ffffffff8145a1c0 d apei_estatus_section_flag_strs
ffffffff8145a200 d cper_proc_error_type_strs
ffffffff8145a220 d cper_proc_flag_strs
ffffffff8145a240 d erst_ins_type
ffffffff8145a380 d erst_info
ffffffff8145a400 d erst_record_id_cache
ffffffff8145a440 d ghes_list_mutex
ffffffff8145a460 d ghes_sci
ffffffff8145a470 d ghes_notifier_sci
ffffffff8145a490 d ghes_nmi
ffffffff8145a4a0 d ghes_platform_driver
ffffffff8145a540 d ratelimit_corrected.30608
ffffffff8145a560 d ratelimit_uncorrected.30610
ffffffff8145a580 D pnp_global
ffffffff8145a590 d pnp_protocols
ffffffff8145a5a0 D pnp_cards
ffffffff8145a5c0 d dev_attr_name
ffffffff8145a5e0 d dev_attr_card_id
ffffffff8145a600 d pnp_card_drivers
ffffffff8145a620 D pnp_bus_type
ffffffff8145a6a0 d pnp_reserve_irq
ffffffff8145a6e0 d pnp_reserve_dma
ffffffff8145a700 d pnp_reserve_io
ffffffff8145a740 d pnp_reserve_mem
ffffffff8145a780 D pnp_res_mutex
ffffffff8145a7a0 D pnp_interface_attrs
ffffffff8145a820 d pnp_fixups
ffffffff8145a940 d system_pnp_driver
ffffffff8145aa00 D pnpacpi_protocol
ffffffff8145acf0 d hp_ccsr_uuid
ffffffff8145ad20 d gnttab_v2_ops
ffffffff8145ad60 d gnttab_v1_ops
ffffffff8145ada0 d xen_irq_list_head
ffffffff8145adc0 d irq_mapping_update_lock
ffffffff8145ade0 d xenstore_notifier.23406
ffffffff8145ae00 d shutdown_watch
ffffffff8145ae20 d shutting_down
ffffffff8145ae40 d handlers.23386
ffffffff8145aea0 d balloon_worker
ffffffff8145af00 d balloon_mutex
ffffffff8145af20 d ballooned_pages
ffffffff8145af40 d xenbus_valloc_pages
ffffffff8145af60 d xb_waitq
ffffffff8145af80 d probe_work
ffffffff8145afa0 d watches
ffffffff8145afc0 d xenwatch_mutex
ffffffff8145afe0 d watch_events
ffffffff8145aff0 d watch_events_waitq
ffffffff8145b020 D xenbus_dev_attrs
ffffffff8145b0a0 d xenstore_chain
ffffffff8145b0e0 d xenbus_backend
ffffffff8145b190 d xenstore_notifier.19507
ffffffff8145b1c0 d be_watch
ffffffff8145b1e0 d xenbus_dev
ffffffff8145b240 d xenbus_backend_dev
ffffffff8145b2a0 d xenbus_frontend
ffffffff8145b350 d xenstore_notifier.27378
ffffffff8145b380 d fe_watch
ffffffff8145b3a0 d backend_state_wq
ffffffff8145b3c0 d xsn_cpu.13283
ffffffff8145b3e0 d cpu_watch.13276
ffffffff8145b400 d xenstore_notifier
ffffffff8145b420 d target_watch
ffffffff8145b440 d balloon_subsys
ffffffff8145b4c0 d dev_attr_target_kb
ffffffff8145b4e0 d dev_attr_target
ffffffff8145b500 d dev_attr_schedule_delay
ffffffff8145b540 d dev_attr_max_schedule_delay
ffffffff8145b580 d dev_attr_retry_count
ffffffff8145b5c0 d dev_attr_max_retry_count
ffffffff8145b600 d balloon_info_attrs
ffffffff8145b620 d dev_attr_current_kb
ffffffff8145b640 d dev_attr_low_kb
ffffffff8145b660 d dev_attr_high_kb
ffffffff8145b680 d selfballoon_worker
ffffffff8145b6e0 d selfballoon_attrs
ffffffff8145b720 d dev_attr_selfballooning
ffffffff8145b740 d dev_attr_selfballoon_interval
ffffffff8145b760 d dev_attr_selfballoon_downhysteresis
ffffffff8145b780 d dev_attr_selfballoon_uphysteresis
ffffffff8145b7a0 d dev_attr_selfballoon_min_usable_mb
ffffffff8145b7c0 d uuid_attr
ffffffff8145b800 d type_attr
ffffffff8145b840 d hyp_sysfs_kobj_type
ffffffff8145b880 d xen_properties_attrs
ffffffff8145b8c0 d xen_compile_attrs
ffffffff8145b8e0 d version_attrs
ffffffff8145b900 d capabilities_attr
ffffffff8145b940 d changeset_attr
ffffffff8145b980 d virtual_start_attr
ffffffff8145b9c0 d pagesize_attr
ffffffff8145ba00 d features_attr
ffffffff8145ba40 d compiler_attr
ffffffff8145ba80 d compiled_by_attr
ffffffff8145bac0 d compile_date_attr
ffffffff8145bb00 d major_attr
ffffffff8145bb40 d minor_attr
ffffffff8145bb80 d extra_attr
ffffffff8145bbc0 d platform_driver
ffffffff8145bcb0 d device_nb
ffffffff8145bce0 D tty_mutex
ffffffff8145bd00 D tty_drivers
ffffffff8145bd20 D tty_std_termios
ffffffff8145bd60 d _rs.27181
ffffffff8145bd80 d dev_attr_active
ffffffff8145bda0 D tty_ldisc_N_TTY
ffffffff8145be40 d tty_ldisc_wait
ffffffff8145be60 d tty_ldisc_idle
ffffffff8145be80 d _rs.25996
ffffffff8145bea0 d big_tty_mutex
ffffffff8145bec0 d vt_events
ffffffff8145bed0 d vt_event_waitqueue
ffffffff8145bf00 d sel_start
ffffffff8145bf20 d inwordLut
ffffffff8145bf40 D keyboard_tasklet
ffffffff8145bf80 d kd_mksound_timer
ffffffff8145bfc0 d kbd_handler
ffffffff8145c038 d ledstate
ffffffff8145c03c d brl_timeout
ffffffff8145c040 d brl_nbchords
ffffffff8145c048 d kbd
ffffffff8145c050 d npadch
ffffffff8145c054 d buf.26576
ffffffff8145c060 d translations
ffffffff8145c860 D dfont_unitable
ffffffff8145cac0 D dfont_unicount
ffffffff8145cbc0 D default_blu
ffffffff8145cc00 D default_grn
ffffffff8145cc40 D default_red
ffffffff8145cc80 D color_table
ffffffff8145cc90 D want_console
ffffffff8145cc94 D global_cursor_default
ffffffff8145cc98 D default_utf8
ffffffff8145cca0 d console_work
ffffffff8145ccc0 d cur_default
ffffffff8145ccc4 d blankinterval
ffffffff8145ccc8 d old_offset.26372
ffffffff8145cccc d default_underline_color
ffffffff8145ccd0 d default_italic_color
ffffffff8145cce0 d console_timer
ffffffff8145cd20 d vt_console_driver
ffffffff8145cd80 d device_attrs
ffffffff8145cdc0 d dev_attr_active
ffffffff8145cde0 D accent_table_size
ffffffff8145ce00 D accent_table
ffffffff8145da00 D func_table
ffffffff8145e200 D funcbufsize
ffffffff8145e208 D funcbufptr
ffffffff8145e220 D func_buf
ffffffff8145e2bc D keymap_count
ffffffff8145e2c0 D key_maps
ffffffff8145eac0 D ctrl_alt_map
ffffffff8145ecc0 D alt_map
ffffffff8145eec0 D shift_ctrl_map
ffffffff8145f0c0 D ctrl_map
ffffffff8145f2c0 D altgr_map
ffffffff8145f4c0 D shift_map
ffffffff8145f6c0 D plain_map
ffffffff8145f8c0 d vtermnos
ffffffff8145f900 d hvc_structs
ffffffff8145f920 d hvc_console
ffffffff8145f978 d last_hvc
ffffffff8145f97c d timeout
ffffffff8145f980 D xenboot_console
ffffffff8145f9e0 d xenconsoles
ffffffff8145fa00 d dom0_hvc_ops
ffffffff8145fa40 d domU_hvc_ops
ffffffff8145fa80 d xencons_driver
ffffffff8145fb40 d zero_bdi
ffffffff8145fda0 D random_table
ffffffff8145ff60 d input_pool
ffffffff8145ffa0 d random_read_wakeup_thresh
ffffffff8145ffb0 d random_read_wait
ffffffff8145ffe0 d nonblocking_pool
ffffffff81460020 d random_write_wakeup_thresh
ffffffff81460030 d random_write_wait
ffffffff81460060 d blocking_pool
ffffffff814600a0 d sysctl_poolsize
ffffffff814600a4 d min_read_thresh
ffffffff814600a8 d max_read_thresh
ffffffff814600ac d max_write_thresh
ffffffff814600c0 d poolinfo_table
ffffffff81460100 d misc_mtx
ffffffff81460120 d misc_list
ffffffff81460140 d nvram_dev
ffffffff814601a0 d nvram_mutex
ffffffff814601c0 d vga_list
ffffffff814601d0 d vga_wait_queue
ffffffff81460200 d vga_arb_device
ffffffff81460250 d pci_notifier
ffffffff81460270 d vga_user_list
ffffffff81460280 d device_ktype
ffffffff814602c0 d uevent_attr
ffffffff814602e0 d devt_attr
ffffffff81460300 d gdp_mutex.19636
ffffffff81460320 d class_dir_ktype
ffffffff81460360 d driver_ktype
ffffffff814603a0 d driver_attr_uevent
ffffffff814603c0 d driver_attr_unbind
ffffffff814603e0 d driver_attr_bind
ffffffff81460400 d bus_ktype
ffffffff81460440 d bus_attr_uevent
ffffffff81460460 d bus_attr_drivers_probe
ffffffff81460480 d bus_attr_drivers_autoprobe
ffffffff814604a0 d deferred_probe_mutex
ffffffff814604c0 d deferred_probe_pending_list
ffffffff814604d0 d deferred_probe_active_list
ffffffff814604e0 d deferred_probe_work
ffffffff81460500 d probe_waitqueue
ffffffff81460520 d syscore_ops_lock
ffffffff81460540 d syscore_ops_list
ffffffff81460560 d class_ktype
ffffffff814605a0 D platform_bus_type
ffffffff81460620 D platform_bus
ffffffff814608a0 d platform_dev_attrs
ffffffff814608e0 D cpu_subsys
ffffffff81460960 d dev_attr_online
ffffffff81460980 d cpu_root_attr_groups
ffffffff81460990 d cpu_root_attr_group
ffffffff814609c0 d cpu_root_attrs
ffffffff81460a20 d dev_attr_probe
ffffffff81460a40 d dev_attr_release
ffffffff81460a60 d cpu_attrs
ffffffff81460ae0 d dev_attr_kernel_max
ffffffff81460b00 d dev_attr_offline
ffffffff81460b20 d dev_attr_modalias
ffffffff81460b40 d attribute_container_mutex
ffffffff81460b60 d attribute_container_list
ffffffff81460b80 d topology_attr_group
ffffffff81460ba0 d default_attrs
ffffffff81460be0 d dev_attr_physical_package_id
ffffffff81460c00 d dev_attr_core_id
ffffffff81460c20 d dev_attr_thread_siblings
ffffffff81460c40 d dev_attr_thread_siblings_list
ffffffff81460c60 d dev_attr_core_siblings
ffffffff81460c80 d dev_attr_core_siblings_list
ffffffff81460ca0 d mount_dev
ffffffff81460cc0 d dev_fs_type
ffffffff81460d00 d setup_done
ffffffff81460d20 d pm_attr_group
ffffffff81460d40 d pm_runtime_attr_group
ffffffff81460d60 d pm_wakeup_attr_group
ffffffff81460d80 d pm_qos_attr_group
ffffffff81460da0 d runtime_attrs
ffffffff81460de0 d wakeup_attrs
ffffffff81460e30 d pm_qos_attrs
ffffffff81460e40 d dev_attr_runtime_status
ffffffff81460e60 d dev_attr_control
ffffffff81460e80 d dev_attr_runtime_suspended_time
ffffffff81460ea0 d dev_attr_runtime_active_time
ffffffff81460ec0 d dev_attr_autosuspend_delay_ms
ffffffff81460ee0 d dev_attr_wakeup
ffffffff81460f00 d dev_attr_wakeup_count
ffffffff81460f20 d dev_attr_wakeup_active_count
ffffffff81460f40 d dev_attr_wakeup_hit_count
ffffffff81460f60 d dev_attr_wakeup_active
ffffffff81460f80 d dev_attr_wakeup_total_time_ms
ffffffff81460fa0 d dev_attr_wakeup_max_time_ms
ffffffff81460fc0 d dev_attr_wakeup_last_time_ms
ffffffff81460fe0 d dev_attr_pm_qos_resume_latency_us
ffffffff81461000 d dev_pm_qos_mtx
ffffffff81461020 d dev_pm_notifiers
ffffffff81461060 D dpm_noirq_list
ffffffff81461070 D dpm_late_early_list
ffffffff81461080 D dpm_suspended_list
ffffffff81461090 D dpm_prepared_list
ffffffff814610a0 D dpm_list
ffffffff814610c0 d dpm_list_mtx
ffffffff814610e0 d wakeup_sources
ffffffff81461100 d loading_timeout
ffffffff81461120 d firmware_class
ffffffff814611a0 d firmware_attr_data
ffffffff814611e0 d dev_attr_loading
ffffffff81461200 d fw_lock
ffffffff81461220 d firmware_class_attrs
ffffffff81461280 d node_subsys
ffffffff81461300 d dev_attr_cpumap
ffffffff81461320 d dev_attr_cpulist
ffffffff81461340 d dev_attr_meminfo
ffffffff81461360 d dev_attr_numastat
ffffffff81461380 d dev_attr_distance
ffffffff814613a0 d dev_attr_vmstat
ffffffff814613c0 d cpu_root_attr_groups
ffffffff814613d0 d memory_root_attr_group
ffffffff81461400 d node_state_attrs
ffffffff81461440 d node_state_attr
ffffffff814614e0 d host_cmd_pool_mutex
ffffffff81461500 d scsi_cmd_dma_pool
ffffffff81461540 d scsi_cmd_pool
ffffffff81461580 d scsi_host_type
ffffffff814615c0 d shost_class
ffffffff81461640 d stu_command.29866
ffffffff81461660 d scsi_sg_pools
ffffffff81461700 d scanning_hosts
ffffffff81461710 d max_scsi_luns
ffffffff81461720 d scsi_target_type
ffffffff81461750 d scsi_scan_type
ffffffff81461758 d max_scsi_report_luns
ffffffff8146175c d scsi_inq_timeout
ffffffff81461760 D scsi_bus_type
ffffffff814617e0 D scsi_sysfs_shost_attr_groups
ffffffff814617f0 D scsi_shost_attr_group
ffffffff81461820 D dev_attr_hstate
ffffffff81461840 d sdev_class
ffffffff814618c0 d scsi_dev_type
ffffffff81461900 d sdev_attr_queue_depth_rw
ffffffff81461920 d sdev_attr_queue_ramp_up_period
ffffffff81461940 d dev_attr_queue_depth
ffffffff81461960 d sdev_attr_queue_type_rw
ffffffff81461980 d dev_attr_queue_type
ffffffff814619a0 d scsi_sysfs_shost_attrs
ffffffff81461a20 d scsi_sdev_attr_groups
ffffffff81461a40 d dev_attr_unique_id
ffffffff81461a60 d dev_attr_host_busy
ffffffff81461a80 d dev_attr_cmd_per_lun
ffffffff81461aa0 d dev_attr_can_queue
ffffffff81461ac0 d dev_attr_sg_tablesize
ffffffff81461ae0 d dev_attr_sg_prot_tablesize
ffffffff81461b00 d dev_attr_unchecked_isa_dma
ffffffff81461b20 d dev_attr_proc_name
ffffffff81461b40 d dev_attr_scan
ffffffff81461b60 d dev_attr_supported_mode
ffffffff81461b80 d dev_attr_active_mode
ffffffff81461ba0 d dev_attr_prot_capabilities
ffffffff81461bc0 d dev_attr_prot_guard_type
ffffffff81461be0 d dev_attr_host_reset
ffffffff81461c00 d scsi_sdev_attr_group
ffffffff81461c20 d scsi_sdev_attrs
ffffffff81461cc0 d dev_attr_device_blocked
ffffffff81461ce0 d dev_attr_type
ffffffff81461d00 d dev_attr_scsi_level
ffffffff81461d20 d dev_attr_vendor
ffffffff81461d40 d dev_attr_model
ffffffff81461d60 d dev_attr_rev
ffffffff81461d80 d dev_attr_rescan
ffffffff81461da0 d dev_attr_delete
ffffffff81461dc0 d dev_attr_state
ffffffff81461de0 d dev_attr_timeout
ffffffff81461e00 d dev_attr_iocounterbits
ffffffff81461e20 d dev_attr_iorequest_cnt
ffffffff81461e40 d dev_attr_iodone_cnt
ffffffff81461e60 d dev_attr_ioerr_cnt
ffffffff81461e80 d dev_attr_modalias
ffffffff81461ea0 d dev_attr_evt_media_change
ffffffff81461ec0 d scsi_dev_info_list
ffffffff81461ee0 d scsi_root_table
ffffffff81461f60 d scsi_dir_table
ffffffff81461fe0 d scsi_table
ffffffff81462060 d global_host_template_mutex
ffffffff81462080 d sd_template
ffffffff81462120 d sd_disk_class
ffffffff814621a0 d sd_ref_mutex
ffffffff814621c0 d sd_disk_attrs
ffffffff81462320 D ata_dummy_port_ops
ffffffff81462500 D ata_port_type
ffffffff81462530 D atapi_passthru16
ffffffff81462534 d atapi_enabled
ffffffff81462538 d libata_dma_mask
ffffffff81462540 d ratelimit
ffffffff81462560 d ___modver_attr
ffffffff814625c0 D ata_common_sdev_attrs
ffffffff814625e0 D dev_attr_sw_activity
ffffffff81462600 D dev_attr_em_message_type
ffffffff81462620 D dev_attr_em_message
ffffffff81462640 D dev_attr_unload_heads
ffffffff81462660 D dev_attr_link_power_management_policy
ffffffff81462680 d __compound_literal.0
ffffffff81462683 d __compound_literal.1
ffffffff81462686 d __compound_literal.2
ffffffff81462689 d __compound_literal.3
ffffffff8146268b d __compound_literal.4
ffffffff8146268d d __compound_literal.5
ffffffff814626a0 d ata_port_class
ffffffff81462740 d ata_link_class
ffffffff814627e0 d ata_dev_class
ffffffff81462878 D ata_acpi_gtf_filter
ffffffff81462880 D loopback_net_ops
ffffffff814628c0 d serio_event_list
ffffffff814628e0 d serio_event_work
ffffffff81462900 d serio_mutex
ffffffff81462920 d serio_list
ffffffff81462940 d serio_bus
ffffffff814629c0 d serio_device_attr_groups
ffffffff814629e0 d serio_device_attrs
ffffffff81462a80 d serio_driver_attrs
ffffffff81462ae0 d serio_id_attr_group
ffffffff81462b00 d serio_device_id_attrs
ffffffff81462b40 d dev_attr_type
ffffffff81462b60 d dev_attr_proto
ffffffff81462b80 d dev_attr_id
ffffffff81462ba0 d dev_attr_extra
ffffffff81462bc0 d i8042_mutex
ffffffff81462be0 d i8042_command_reg
ffffffff81462be4 d i8042_data_reg
ffffffff81462c00 d i8042_driver
ffffffff81462ca0 d i8042_pnp_kbd_driver
ffffffff81462d60 d i8042_pnp_aux_driver
ffffffff81462e20 d pnp_kbd_devids
ffffffff81462f20 d pnp_aux_devids
ffffffff81462fe0 D input_class
ffffffff81463060 d input_dev_type
ffffffff814630a0 d input_mutex
ffffffff814630c0 d input_dev_list
ffffffff814630d0 d input_devices_poll_wait
ffffffff814630f0 d input_handler_list
ffffffff81463100 d input_dev_attr_groups
ffffffff81463120 d input_dev_attr_group
ffffffff81463140 d input_dev_id_attr_group
ffffffff81463160 d input_dev_caps_attr_group
ffffffff81463180 d input_dev_attrs
ffffffff814631c0 d input_dev_id_attrs
ffffffff81463200 d input_dev_caps_attrs
ffffffff81463260 d dev_attr_name
ffffffff81463280 d dev_attr_phys
ffffffff814632a0 d dev_attr_uniq
ffffffff814632c0 d dev_attr_modalias
ffffffff814632e0 d dev_attr_properties
ffffffff81463300 d dev_attr_bustype
ffffffff81463320 d dev_attr_vendor
ffffffff81463340 d dev_attr_product
ffffffff81463360 d dev_attr_version
ffffffff81463380 d dev_attr_ev
ffffffff814633a0 d dev_attr_key
ffffffff814633c0 d dev_attr_rel
ffffffff814633e0 d dev_attr_abs
ffffffff81463400 d dev_attr_msc
ffffffff81463420 d dev_attr_led
ffffffff81463440 d dev_attr_snd
ffffffff81463460 d dev_attr_ff
ffffffff81463480 d dev_attr_sw
ffffffff814634a0 d mousedev_handler
ffffffff81463518 d xres
ffffffff8146351c d yres
ffffffff81463520 d mousedev_table_mutex
ffffffff81463540 d tap_time
ffffffff81463550 d mousedev_mix_list
ffffffff81463560 d atkbd_drv
ffffffff81463618 d atkbd_set
ffffffff81463620 d atkbd_attribute_group
ffffffff81463638 d atkbd_softraw
ffffffff81463640 d atkbd_serio_ids
ffffffff81463660 d atkbd_attributes
ffffffff814636a0 d atkbd_dell_laptop_forced_release_keys
ffffffff814636c8 d atkbd_hp_forced_release_keys
ffffffff814636d0 d atkbd_volume_forced_release_keys
ffffffff814636e0 d atkbd_samsung_forced_release_keys
ffffffff81463710 d atkbd_amilo_pi3525_forced_release_keys
ffffffff81463740 d atkbd_amilo_xi3650_forced_release_keys
ffffffff81463770 d atkdb_soltech_ta12_forced_release_keys
ffffffff81463780 d atkbd_attr_extra
ffffffff814637a0 d atkbd_attr_force_release
ffffffff814637c0 d atkbd_attr_scroll
ffffffff814637e0 d atkbd_attr_set
ffffffff81463800 d atkbd_attr_softrepeat
ffffffff81463820 d atkbd_attr_softraw
ffffffff81463840 d atkbd_attr_err_count
ffffffff81463860 d psmouse_max_proto
ffffffff81463864 d psmouse_resolution
ffffffff81463868 d psmouse_rate
ffffffff8146386c d psmouse_smartscroll
ffffffff81463870 d psmouse_resetafter
ffffffff81463880 d psmouse_drv
ffffffff81463940 d psmouse_mutex
ffffffff81463960 d psmouse_attribute_group
ffffffff81463980 d param_ops_proto_abbrev
ffffffff81463998 d psmouse_serio_ids
ffffffff814639c0 d psmouse_attributes
ffffffff81463a00 d psmouse_attr_protocol
ffffffff81463a40 d psmouse_attr_rate
ffffffff81463a80 d psmouse_attr_resolution
ffffffff81463ac0 d psmouse_attr_resetafter
ffffffff81463b00 d psmouse_attr_resync_time
ffffffff81463b40 d psmouse_attr_disable_gesture
ffffffff81463b80 d param.20733
ffffffff81463ba0 d psmouse_attr_smartscroll
ffffffff81463be0 d trackpoint_attr_group
ffffffff81463c00 d trackpoint_attrs
ffffffff81463c80 d psmouse_attr_sensitivity
ffffffff81463cc0 d psmouse_attr_speed
ffffffff81463d00 d psmouse_attr_inertia
ffffffff81463d40 d psmouse_attr_reach
ffffffff81463d80 d psmouse_attr_draghys
ffffffff81463dc0 d psmouse_attr_mindrag
ffffffff81463e00 d psmouse_attr_thresh
ffffffff81463e40 d psmouse_attr_upthresh
ffffffff81463e80 d psmouse_attr_ztime
ffffffff81463ec0 d psmouse_attr_jenks
ffffffff81463f00 d psmouse_attr_press_to_select
ffffffff81463f40 d psmouse_attr_skipback
ffffffff81463f80 d psmouse_attr_ext_dev
ffffffff81463fc0 d trackpoint_attr_sensitivity
ffffffff81463fd0 d trackpoint_attr_speed
ffffffff81463fe0 d trackpoint_attr_inertia
ffffffff81463ff0 d trackpoint_attr_reach
ffffffff81464000 d trackpoint_attr_draghys
ffffffff81464010 d trackpoint_attr_mindrag
ffffffff81464020 d trackpoint_attr_thresh
ffffffff81464030 d trackpoint_attr_upthresh
ffffffff81464040 d trackpoint_attr_ztime
ffffffff81464050 d trackpoint_attr_jenks
ffffffff81464060 d trackpoint_attr_press_to_select
ffffffff81464070 d trackpoint_attr_skipback
ffffffff81464080 d trackpoint_attr_ext_dev
ffffffff814640a0 D rtc_hctosys_ret
ffffffff814640c0 d dev_attr_wakealarm
ffffffff814640e0 d rtc_attrs
ffffffff814641c0 d nvram
ffffffff81464200 d cmos_pnp_driver
ffffffff814642c0 d cmos_platform_driver
ffffffff81464360 d watchdog_miscdev
ffffffff814643c0 D edac_subsys
ffffffff81464440 D edac_op_state
ffffffff81464460 d cpufreq_policy_notifier_list
ffffffff814644a0 d cpufreq_governor_mutex
ffffffff814644c0 d cpufreq_governor_list
ffffffff814644e0 d cpufreq_syscore_ops
ffffffff81464520 d cpufreq_interface
ffffffff81464560 d ktype_cpufreq
ffffffff814645a0 d cpuinfo_cur_freq
ffffffff814645c0 d scaling_cur_freq
ffffffff814645e0 d bios_limit
ffffffff81464600 d default_attrs
ffffffff81464660 d cpuinfo_min_freq
ffffffff81464680 d cpuinfo_max_freq
ffffffff814646a0 d cpuinfo_transition_latency
ffffffff814646c0 d scaling_min_freq
ffffffff814646e0 d scaling_max_freq
ffffffff81464700 d affected_cpus
ffffffff81464720 d related_cpus
ffffffff81464740 d scaling_governor
ffffffff81464760 d scaling_driver
ffffffff81464780 d scaling_available_governors
ffffffff814647a0 d scaling_setspeed
ffffffff814647c0 d notifier_policy_block
ffffffff814647e0 d notifier_trans_block
ffffffff81464800 d stats_attr_group
ffffffff81464820 d default_attrs
ffffffff81464840 d _attr_total_trans
ffffffff81464860 d _attr_time_in_state
ffffffff81464880 D cpufreq_gov_performance
ffffffff814648e0 D cpufreq_freq_attr_scaling_available_freqs
ffffffff81464900 D cpuidle_detected_devices
ffffffff81464920 D cpuidle_lock
ffffffff81464940 d cpuidle_latency_notifier
ffffffff81464960 D cpuidle_governors
ffffffff81464980 d cpuidle_attr_group
ffffffff814649a0 d cpuidle_switch_attrs
ffffffff814649c0 d ktype_state_cpuidle
ffffffff81464a00 d ktype_cpuidle
ffffffff81464a30 d cpuidle_default_attrs
ffffffff81464a60 d dev_attr_available_governors
ffffffff81464a80 d dev_attr_current_driver
ffffffff81464aa0 d dev_attr_current_governor
ffffffff81464ac0 d cpuidle_state_default_attrs
ffffffff81464b00 d dev_attr_current_governor_ro
ffffffff81464b20 d attr_name
ffffffff81464b40 d attr_desc
ffffffff81464b60 d attr_latency
ffffffff81464b80 d attr_power
ffffffff81464ba0 d attr_usage
ffffffff81464bc0 d attr_time
ffffffff81464be0 d attr_disable
ffffffff81464c00 d ladder_governor
ffffffff81464c60 d dmi_empty_string
ffffffff81464c70 d dmi_devices
ffffffff81464c80 d memmap_ktype
ffffffff81464cb0 d map_entries
ffffffff81464cc0 d def_attrs
ffffffff81464ce0 d memmap_start_attr
ffffffff81464d00 d memmap_end_attr
ffffffff81464d20 d memmap_type_attr
ffffffff81464d40 d clocksource_acpi_pm
ffffffff81464e00 D i8253_clockevent
ffffffff81464ec0 d hid_bus_type
ffffffff81464f40 d dev_bin_attr_report_desc
ffffffff81464f80 d driver_attr_new_id
ffffffff81464fa0 d iommu_device_nb
ffffffff81464fc0 d dev_attr_iommu_group
ffffffff81464fe0 D amd_iommu_max_glx_val
ffffffff81464ff0 d dev_data_list
ffffffff81465000 d _rs.22180
ffffffff81465020 d device_nb
ffffffff81465040 d iommu_pd_list
ffffffff81465060 d amd_iommu_dma_ops
ffffffff814650d8 d amd_iommu_devtable_lock
ffffffff814650e0 d amd_iommu_ops
ffffffff81465140 D amd_iommu_pd_list
ffffffff81465150 D amd_iommu_list
ffffffff81465160 D amd_iommu_unity_map
ffffffff81465180 d amd_iommu_syscore_ops
ffffffff814651c0 d pcibios_fwaddrmappings
ffffffff814651d0 D pci_mmcfg_list
ffffffff814651e0 d dev_domain_list
ffffffff814651f0 d quirk_pcie_aspm_ops
ffffffff81465200 d pci_use_crs
ffffffff81465220 D pcibios_enable_irq
ffffffff81465228 D pcibios_irq_mask
ffffffff81465240 d pirq_penalty
ffffffff81465280 D pci_root_ops
ffffffff81465290 D pcibios_last_bus
ffffffff81465294 D pci_probe
ffffffff814652a0 d mp_bus_to_node
ffffffff814656c0 d br_ioctl_mutex
ffffffff814656e0 d vlan_ioctl_mutex
ffffffff81465700 d dlci_ioctl_mutex
ffffffff81465720 d sock_fs_type
ffffffff81465760 D net_prio_subsys_id
ffffffff81465764 D net_cls_subsys_id
ffffffff81465780 d net_inuse_ops
ffffffff814657c0 d proto_net_ops
ffffffff81465800 d proto_list
ffffffff81465820 d proto_list_mutex
ffffffff81465840 D sysctl_max_syn_backlog
ffffffff81465844 d est_lock
ffffffff81465860 D net_namespace_list
ffffffff81465880 d net_mutex
ffffffff814658a0 d max_gen_ptrs
ffffffff814658b0 d pernet_list
ffffffff814658c0 d cleanup_list
ffffffff814658e0 d net_cleanup_work
ffffffff81465900 d first_device
ffffffff81465920 D net_core_path
ffffffff81465940 d net_core_table
ffffffff81465cc0 d sysctl_core_ops
ffffffff81465d00 d netns_core_table
ffffffff81465d80 d sock_flow_mutex.37170
ffffffff81465da0 D dev_base_lock
ffffffff81465da4 d dev_boot_phase
ffffffff81465db0 d net_todo_list
ffffffff81465dc0 d dev_proc_ops
ffffffff81465e00 d netdev_net_ops
ffffffff81465e40 d default_device_ops
ffffffff81465e80 d dev_mc_net_ops
ffffffff81465ec0 d dst_garbage
ffffffff81465ee0 d dst_gc_work
ffffffff81465f40 d dst_gc_mutex
ffffffff81465f60 d dst_dev_notifier
ffffffff81465f78 d neigh_tbl_lock
ffffffff81465f80 d rtnl_mutex
ffffffff81465fa0 d link_ops
ffffffff81465fb0 d rtnl_af_ops
ffffffff81465fc0 d rtnetlink_net_ops
ffffffff81466000 d rtnetlink_dev_notifier
ffffffff81466020 D net_ratelimit_state
ffffffff81466040 d lweventlist
ffffffff81466060 d linkwatch_work
ffffffff814660c0 d _rs.37363
ffffffff814660e0 d sock_diag_table_mutex
ffffffff81466100 d sock_diag_mutex
ffffffff81466120 d flow_cache_gc_list
ffffffff81466140 d flow_cache_gc_work
ffffffff81466160 d flow_flush_sem.20568
ffffffff81466180 d flow_cache_flush_work
ffffffff814661a0 D net_ns_type_operations
ffffffff814661e0 d rx_queue_ktype
ffffffff81466220 d netdev_queue_ktype
ffffffff81466250 d dql_group
ffffffff81466280 d xps_map_mutex
ffffffff814662a0 d net_class
ffffffff81466320 d netstat_group
ffffffff81466340 d rx_queue_default_attrs
ffffffff81466360 d netdev_queue_default_attrs
ffffffff81466380 d dql_attrs
ffffffff814663c0 d net_class_attributes
ffffffff81466640 d netstat_attrs
ffffffff81466700 d rps_cpus_attribute
ffffffff81466720 d rps_dev_flow_table_cnt_attribute
ffffffff81466740 d queue_trans_timeout
ffffffff81466760 d xps_cpus_attribute
ffffffff81466780 d bql_limit_attribute
ffffffff814667a0 d bql_limit_max_attribute
ffffffff814667c0 d bql_limit_min_attribute
ffffffff814667e0 d bql_hold_time_attribute
ffffffff81466800 d bql_inflight_attribute
ffffffff81466820 d dev_attr_rx_packets
ffffffff81466840 d dev_attr_tx_packets
ffffffff81466860 d dev_attr_rx_bytes
ffffffff81466880 d dev_attr_tx_bytes
ffffffff814668a0 d dev_attr_rx_errors
ffffffff814668c0 d dev_attr_tx_errors
ffffffff814668e0 d dev_attr_rx_dropped
ffffffff81466900 d dev_attr_tx_dropped
ffffffff81466920 d dev_attr_multicast
ffffffff81466940 d dev_attr_collisions
ffffffff81466960 d dev_attr_rx_length_errors
ffffffff81466980 d dev_attr_rx_over_errors
ffffffff814669a0 d dev_attr_rx_crc_errors
ffffffff814669c0 d dev_attr_rx_frame_errors
ffffffff814669e0 d dev_attr_rx_fifo_errors
ffffffff81466a00 d dev_attr_rx_missed_errors
ffffffff81466a20 d dev_attr_tx_aborted_errors
ffffffff81466a40 d dev_attr_tx_carrier_errors
ffffffff81466a60 d dev_attr_tx_fifo_errors
ffffffff81466a80 d dev_attr_tx_heartbeat_errors
ffffffff81466aa0 d dev_attr_tx_window_errors
ffffffff81466ac0 d dev_attr_rx_compressed
ffffffff81466ae0 d dev_attr_tx_compressed
ffffffff81466b00 D noop_qdisc
ffffffff81466be0 d noqueue_qdisc
ffffffff81466cc0 d noop_netdev_queue
ffffffff81466e00 d noqueue_netdev_queue
ffffffff81466f40 d nl_table_lock
ffffffff81466f50 d nl_table_wait
ffffffff81466f80 d netlink_proto
ffffffff81467100 d netlink_net_ops
ffffffff81467138 d rover.37442
ffffffff81467140 d genl_mutex
ffffffff81467160 d notify_grp
ffffffff81467190 d mc_groups_longs
ffffffff81467198 d mc_groups
ffffffff814671a0 d mc_group_start
ffffffff814671c0 d genl_ctrl
ffffffff81467240 d genl_ctrl_ops
ffffffff81467280 d genl_pernet_ops
ffffffff814672b8 d id_gen_idx.36549
ffffffff814672c0 d ipv4_dst_ops
ffffffff81467380 d expire.45427
ffffffff814673c0 d ipv4_dst_blackhole_ops
ffffffff81467480 d ip_rt_proc_ops
ffffffff814674c0 d sysctl_route_ops
ffffffff81467500 d rt_genid_ops
ffffffff81467540 d ipv4_route_flush_table
ffffffff814675c0 d ipv4_route_path
ffffffff814675e0 d ipv4_path
ffffffff81467600 d ipv4_skeleton
ffffffff814676c0 d ipv4_route_table
ffffffff81467ac0 d gc_list
ffffffff81467ad0 d v4_peers
ffffffff81467af0 d v6_peers
ffffffff81467b20 d ip4_frags_ctl_table
ffffffff81467be0 d ip4_frags_ops
ffffffff81467c20 d ip4_frags_ns_ctl_table
ffffffff81467d20 D tcp_prot
ffffffff81467ea0 d tcp4_net_ops
ffffffff81467ee0 d tcp4_seq_afinfo
ffffffff81467f20 d tcp_sk_ops
ffffffff81467f60 d tcp_timewait_sock_ops
ffffffff81467fa0 D tcp_death_row
ffffffff814681c0 D tcp_init_congestion_ops
ffffffff81468240 D tcp_reno
ffffffff814682c0 d tcp_cong_list
ffffffff814682e0 D raw_prot
ffffffff81468460 d raw_v4_hashinfo
ffffffff81468c80 d raw_net_ops
ffffffff81468cc0 D udp_prot
ffffffff81468e40 d udp4_net_ops
ffffffff81468e80 d udp4_seq_afinfo
ffffffff81468ec0 D udplite_prot
ffffffff81469040 d udplite4_protosw
ffffffff81469080 d udplite4_net_ops
ffffffff814690c0 d udplite4_seq_afinfo
ffffffff81469100 D arp_tbl
ffffffff814692c0 d arp_net_ops
ffffffff81469300 d arp_netdev_notifier
ffffffff81469320 d icmp_sk_ops
ffffffff81469360 d inetaddr_chain
ffffffff814693a0 d devinet_ops
ffffffff814693e0 d ip_netdev_notifier
ffffffff81469400 d inet_af_ops
ffffffff81469440 d ipv4_devconf
ffffffff814694c0 d ipv4_devconf_dflt
ffffffff81469540 d ctl_forward_entry
ffffffff814695c0 d net_ipv4_path
ffffffff814695e0 d devinet_sysctl
ffffffff81469c80 d inetsw_array
ffffffff81469d40 d ipv4_mib_ops
ffffffff81469d80 d igmp_net_ops
ffffffff81469dc0 d fib_net_ops
ffffffff81469e00 d fib_netdev_notifier
ffffffff81469e20 d fib_inetaddr_notifier
ffffffff81469e40 D ping_prot
ffffffff81469fc0 d ping_net_ops
ffffffff8146a000 D net_ipv4_ctl_path
ffffffff8146a020 d ipv4_table
ffffffff8146afe0 d ipv4_sysctl_ops
ffffffff8146b018 d ip_local_port_range_min
ffffffff8146b020 d ip_local_port_range_max
ffffffff8146b040 d ipv4_net_table
ffffffff8146b2c0 d ip_ping_group_range_max
ffffffff8146b2c8 d ip_ttl_min
ffffffff8146b2cc d ip_ttl_max
ffffffff8146b2d0 d tcp_retr1_max
ffffffff8146b2d4 d tcp_adv_win_scale_min
ffffffff8146b2d8 d tcp_adv_win_scale_max
ffffffff8146b2e0 d ip_proc_ops
ffffffff8146b320 d ___modver_attr
ffffffff8146b380 d xfrm4_policy_afinfo
ffffffff8146b400 d xfrm4_dst_ops
ffffffff8146b4c0 d xfrm4_policy_table
ffffffff8146b540 d xfrm4_state_afinfo
ffffffff8146bde0 D xfrm_cfg_mutex
ffffffff8146be00 d xfrm_policy_lock
ffffffff8146be04 d xfrm_policy_afinfo_lock
ffffffff8146be20 d xfrm_net_ops
ffffffff8146be60 d xfrm_dev_notifier
ffffffff8146be80 d hash_resize_mutex
ffffffff8146bea0 d xfrm_state_afinfo_lock
ffffffff8146bea4 d xfrm_km_lock
ffffffff8146beb0 d xfrm_km_list
ffffffff8146bec0 d hash_resize_mutex
ffffffff8146bee0 d aalg_list
ffffffff8146bfe0 d ealg_list
ffffffff8146c120 d calg_list
ffffffff8146c180 d aead_list
ffffffff8146c260 d xfrm_table
ffffffff8146c3a0 d xfrm_replay_esn
ffffffff8146c3c0 d xfrm_replay_bmp
ffffffff8146c3e0 d xfrm_replay_legacy
ffffffff8146c400 d unix_proto
ffffffff8146c580 d unix_net_ops
ffffffff8146c5b8 d ordernum.37185
ffffffff8146c5c0 d gc_inflight_list
ffffffff8146c5d0 d unix_gc_wait
ffffffff8146c5f0 d gc_candidates
ffffffff8146c600 d unix_table
ffffffff8146c680 d unix_path
ffffffff8146c6a0 d sysctl_pernet_ops
ffffffff8146c6e0 d net_sysctl_root
ffffffff8146c760 d net_sysctl_ro_root
ffffffff8146c7d0 d klist_remove_waiters
ffffffff8146c7e0 D initial_code
ffffffff8146c7f0 D initial_gs
ffffffff8146c800 D stack_start
ffffffff8146c810 d next_func.22060
ffffffff8146c818 D x86_bios_cpu_apicid_early_ptr
ffffffff8146c820 D x86_cpu_to_apicid_early_ptr
ffffffff8146c828 D x86_cpu_to_node_map_early_ptr
ffffffff8146c830 d cpufreq_cpu_notifier
ffffffff8146c850 d cpufreq_stat_cpu_notifier
ffffffff8146c880 D pci_dfl_cache_line_size
ffffffff8146c8a0 d platform_pci_tbl
ffffffff8146c8e0 d acpi_pm_good
ffffffff8146c900 d xen_hvm_cpu_notifier
ffffffff8146c920 D x86_cpuinit
ffffffff8146c940 d cpu_vsyscall_notifier_nb.28100
ffffffff8146c960 d fx_scratch
ffffffff8146cb60 d cacheinfo_cpu_notifier
ffffffff8146cb80 D cpu_caps_set
ffffffff8146cbc0 D cpu_caps_cleared
ffffffff8146cc00 d cpu_devs
ffffffff8146cc48 d this_cpu
ffffffff8146cc50 d disable_smep
ffffffff8146cc54 d show_msr
ffffffff8146cc60 d x86_pmu_notifier_nb.27386
ffffffff8146cc80 D threshold_cpu_callback
ffffffff8146cc90 d mce_cpu_notifier
ffffffff8146ccb0 d perf_ibs_cpu_notifier_nb.27426
ffffffff8146ccc8 d stop_count
ffffffff8146cccc d start_count
ffffffff8146ccd0 d nr_warps
ffffffff8146ccd8 d max_warp
ffffffff8146cce0 d last_tsc
ffffffff8146cce8 d sync_lock
ffffffff8146ccec D disabled_cpus
ffffffff8146ccf0 d multi_checked
ffffffff8146ccf4 d multi
ffffffff8146cd00 d hpet_cpuhp_notify_nb.20794
ffffffff8146cd20 d fam10h_pci_mmconf_base
ffffffff8146cd40 d pci_probes
ffffffff8146cd60 d disable_nx
ffffffff8146cd70 d tlb_cpuhp_notify_nb.25463
ffffffff8146cda0 D __apicid_to_node
ffffffff8147cda0 d console_cpu_notify_nb.31395
ffffffff8147cdc0 d cpu_nfb
ffffffff8147cde0 d remote_softirq_cpu_notifier
ffffffff8147ce00 d timers_nb
ffffffff8147ce18 d tvec_base_done.29688
ffffffff8147ce20 d workqueue_cpu_callback_nb.25605
ffffffff8147ce40 d hrtimers_nb
ffffffff8147ce60 d migration_notifier
ffffffff8147ce80 d sched_cpu_active_nb.40969
ffffffff8147cea0 d sched_cpu_inactive_nb.40970
ffffffff8147cec0 d cpuset_cpu_active_nb.41558
ffffffff8147cee0 d cpuset_cpu_inactive_nb.41559
ffffffff8147cf00 d update_runtime_nb.41560
ffffffff8147cf20 d hotplug_hrtick_nb.39235
ffffffff8147cf40 d hotplug_cfd_notifier
ffffffff8147cf60 d cpu_stop_cpu_notifier
ffffffff8147cf80 d rcu_cpu_notify_nb.19876
ffffffff8147cfa0 d perf_cpu_notify_nb.30752
ffffffff8147cfc0 d page_alloc_cpu_notify_nb.33359
ffffffff8147cfe0 d ratelimit_nb
ffffffff8147d000 d cpu_callback_nb.30854
ffffffff8147d020 d vmstat_notifier
ffffffff8147d040 d cpucache_notifier
ffffffff8147d060 d buffer_cpu_notify_nb.33834
ffffffff8147d080 d blk_cpu_notifier
ffffffff8147d0a0 d blk_iopoll_cpu_notifier
ffffffff8147d0c0 d radix_tree_callback_nb.17465
ffffffff8147d0e0 d percpu_counter_hotcpu_callback_nb.14839
ffffffff8147d100 d topology_cpu_callback_nb.18296
ffffffff8147d120 d amd_cpu_notifier
ffffffff8147d140 d dev_cpu_callback_nb.48509
ffffffff8147d158 d __warned.28545
ffffffff8147d159 d __warned.27602
ffffffff8147d15a d __warned.21674
ffffffff8147d15b d __warned.22496
ffffffff8147d15c d __warned.28031
ffffffff8147d15d d __warned.28050
ffffffff8147d15e d __warned.25252
ffffffff8147d15f d __warned.25312
ffffffff8147d160 d __warned.25577
ffffffff8147d161 d __warned.28421
ffffffff8147d162 d __warned.18784
ffffffff8147d163 d __warned.5709
ffffffff8147d164 d __warned.27207
ffffffff8147d165 d __warned.27212
ffffffff8147d166 d __warned.27217
ffffffff8147d167 d __warned.26982
ffffffff8147d168 d __warned.27289
ffffffff8147d169 d __warned.24665
ffffffff8147d16a d __warned.24681
ffffffff8147d16b d __warned.24690
ffffffff8147d16c d __warned.25752
ffffffff8147d16d d __warned.25689
ffffffff8147d16e d __warned.25694
ffffffff8147d16f d __warned.25727
ffffffff8147d170 d __warned.24684
ffffffff8147d171 d __warned.25147
ffffffff8147d172 d __warned.25153
ffffffff8147d173 d __warned.25123
ffffffff8147d174 d __warned.25128
ffffffff8147d175 d __warned.25557
ffffffff8147d176 d __warned.25754
ffffffff8147d177 d __warned.23675
ffffffff8147d178 d __warned.15425
ffffffff8147d179 d __warned.15416
ffffffff8147d17a d __warned.32191
ffffffff8147d17b d __warned.32196
ffffffff8147d17c d __warned.10482
ffffffff8147d17d d __warned.27687
ffffffff8147d17e d __warned.27826
ffffffff8147d17f d __warned.24244
ffffffff8147d180 d __warned.24250
ffffffff8147d181 d __warned.24270
ffffffff8147d182 d __warned.28055
ffffffff8147d183 d __warned.28063
ffffffff8147d184 d __warned.24159
ffffffff8147d185 d __warned.24212
ffffffff8147d186 d __warned.24277
ffffffff8147d187 d __warned.24496
ffffffff8147d188 d __warned.24501
ffffffff8147d189 d __warned.24524
ffffffff8147d18a d __warned.29085
ffffffff8147d18b d __warned.29504
ffffffff8147d18c d __warned.32834
ffffffff8147d18d d __warned.32634
ffffffff8147d18e d __warned.32639
ffffffff8147d18f d __warned.33240
ffffffff8147d190 d __warned.33245
ffffffff8147d191 d __warned.33226
ffffffff8147d192 d __warned.33633
ffffffff8147d193 d __warned.33644
ffffffff8147d194 d __warned.24697
ffffffff8147d195 d __warned.24602
ffffffff8147d196 d __warned.24614
ffffffff8147d197 d __warned.24809
ffffffff8147d198 d __warned.28155
ffffffff8147d199 d __warned.28192
ffffffff8147d19a d __warned.28204
ffffffff8147d19b d __warned.28280
ffffffff8147d19c d __warned.28191
ffffffff8147d19d d __warned.28107
ffffffff8147d19e d __warned.28381
ffffffff8147d19f d __warned.41494
ffffffff8147d1a0 d __warned.41285
ffffffff8147d1a1 d __warned.39189
ffffffff8147d1a2 d __warned.16556
ffffffff8147d1a3 d __warned.16584
ffffffff8147d1a4 d __warned.18109
ffffffff8147d1a5 d __warned.14353
ffffffff8147d1a6 d __warned.28525
ffffffff8147d1a7 d __warned.27978
ffffffff8147d1a8 d __warned.19960
ffffffff8147d1a9 d __warned.12918
ffffffff8147d1aa d __warned.12956
ffffffff8147d1ab d __warned.13011
ffffffff8147d1ac d __warned.13094
ffffffff8147d1ad d __warned.13165
ffffffff8147d1ae d __warned.31178
ffffffff8147d1af d __warned.32113
ffffffff8147d1b0 d __warned.27151
ffffffff8147d1b1 d __warned.18249
ffffffff8147d1b2 d __warned.18373
ffffffff8147d1b3 d __warned.17568
ffffffff8147d1b4 d __warned.18910
ffffffff8147d1b5 d __warned.15538
ffffffff8147d1b6 d __warned.19233
ffffffff8147d1b7 d __warned.19924
ffffffff8147d1b8 d __warned.19282
ffffffff8147d1b9 d __warned.19254
ffffffff8147d1ba d __warned.18880
ffffffff8147d1bb d __warned.18850
ffffffff8147d1bc d __warned.18863
ffffffff8147d1bd d __warned.18910
ffffffff8147d1be d __warned.18968
ffffffff8147d1bf d __warned.18929
ffffffff8147d1c0 d __warned.18951
ffffffff8147d1c1 d __warned.18998
ffffffff8147d1c2 d __warned.19020
ffffffff8147d1c3 d __warned.19032
ffffffff8147d1c4 d __warned.19543
ffffffff8147d1c5 d __warned.19627
ffffffff8147d1c6 d __warned.19510
ffffffff8147d1c7 d __warned.19995
ffffffff8147d1c8 d __warned.19377
ffffffff8147d1c9 d __warned.19761
ffffffff8147d1ca d __warned.19766
ffffffff8147d1cb d __warned.12712
ffffffff8147d1cc d __warned.28380
ffffffff8147d1cd d __warned.28906
ffffffff8147d1ce d __warned.29010
ffffffff8147d1cf d __warned.29376
ffffffff8147d1d0 d __warned.28496
ffffffff8147d1d1 d __warned.30367
ffffffff8147d1d2 d __warned.29239
ffffffff8147d1d3 d __warned.29029
ffffffff8147d1d4 d __warned.28980
ffffffff8147d1d5 d __warned.28152
ffffffff8147d1d6 d __warned.30391
ffffffff8147d1d7 d __warned.30410
ffffffff8147d1d8 d __warned.30478
ffffffff8147d1d9 d __warned.30517
ffffffff8147d1da d __warned.30543
ffffffff8147d1db d __warned.29919
ffffffff8147d1dc d __warned.25205
ffffffff8147d1dd d __warned.32464
ffffffff8147d1de d __warned.32228
ffffffff8147d1df d __warned.32128
ffffffff8147d1e0 d __warned.32283
ffffffff8147d1e1 d __warned.29300
ffffffff8147d1e2 d __warned.19898
ffffffff8147d1e3 d __warned.25781
ffffffff8147d1e4 d __warned.14901
ffffffff8147d1e5 d __warned.14933
ffffffff8147d1e6 d __warned.14959
ffffffff8147d1e7 d __warned.14976
ffffffff8147d1e8 d __warned.19688
ffffffff8147d1e9 d __warned.19693
ffffffff8147d1ea d __warned.28552
ffffffff8147d1eb d __warned.30110
ffffffff8147d1ec d __warned.30099
ffffffff8147d1ed d __warned.30104
ffffffff8147d1ee d __warned.24936
ffffffff8147d1ef d __warned.31398
ffffffff8147d1f0 d __warned.31541
ffffffff8147d1f1 d __warned.29120
ffffffff8147d1f2 d __warned.29140
ffffffff8147d1f3 d __warned.29226
ffffffff8147d1f4 d __warned.29277
ffffffff8147d1f5 d __warned.29288
ffffffff8147d1f6 d __warned.29293
ffffffff8147d1f7 d __warned.27705
ffffffff8147d1f8 d __warned.27712
ffffffff8147d1f9 d __warned.27717
ffffffff8147d1fa d __warned.35210
ffffffff8147d1fb d __warned.29735
ffffffff8147d1fc d __warned.30357
ffffffff8147d1fd d __warned.27631
ffffffff8147d1fe d __warned.27636
ffffffff8147d1ff d __warned.27179
ffffffff8147d200 d __warned.27259
ffffffff8147d201 d __warned.27661
ffffffff8147d202 d __warned.28298
ffffffff8147d203 d __warned.28164
ffffffff8147d204 d __warned.28185
ffffffff8147d205 d __warned.7007
ffffffff8147d206 d __warned.7026
ffffffff8147d207 d __warned.38975
ffffffff8147d208 d __warned.28879
ffffffff8147d209 d __warned.28753
ffffffff8147d20a d __warned.28737
ffffffff8147d20b d __warned.21311
ffffffff8147d20c d __warned.24136
ffffffff8147d20d d __warned.25775
ffffffff8147d20e d __warned.15305
ffffffff8147d20f d __warned.15324
ffffffff8147d210 d __warned.15355
ffffffff8147d211 d __warned.15373
ffffffff8147d212 d __warned.29054
ffffffff8147d213 d __warned.37879
ffffffff8147d214 d __warned.37927
ffffffff8147d215 d __warned.37937
ffffffff8147d216 d __warned.37942
ffffffff8147d217 d __warned.37961
ffffffff8147d218 d __warned.37988
ffffffff8147d219 d __warned.37993
ffffffff8147d21a d __warned.37998
ffffffff8147d21b d __warned.38003
ffffffff8147d21c d __warned.35270
ffffffff8147d21d d __warned.35278
ffffffff8147d21e d __warned.32881
ffffffff8147d21f d __warned.32896
ffffffff8147d220 d __warned.33111
ffffffff8147d221 d __warned.33116
ffffffff8147d222 d __warned.33129
ffffffff8147d223 d __warned.33023
ffffffff8147d224 d __warned.33008
ffffffff8147d225 d __warned.33202
ffffffff8147d226 d __warned.33606
ffffffff8147d227 d __warned.33613
ffffffff8147d228 d __warned.33186
ffffffff8147d229 d __warned.29180
ffffffff8147d22a d __warned.29169
ffffffff8147d22b d __warned.29099
ffffffff8147d22c d __warned.29087
ffffffff8147d22d d __warned.29122
ffffffff8147d22e d __warned.29110
ffffffff8147d22f d __warned.29053
ffffffff8147d230 d __warned.29040
ffffffff8147d231 d __warned.44566
ffffffff8147d232 d __warned.46927
ffffffff8147d233 d __warned.47518
ffffffff8147d234 d __warned.34960
ffffffff8147d235 d __warned.35008
ffffffff8147d236 d __warned.39918
ffffffff8147d237 d __warned.38792
ffffffff8147d240 D __start___jump_table
ffffffff8147d240 D __start___verbose
ffffffff8147d240 D __stop___jump_table
ffffffff8147d240 D __stop___verbose
ffffffff8147d240 D system_state
ffffffff8147d244 D early_boot_irqs_disabled
ffffffff8147d280 D xen_have_vector_callback
ffffffff8147d284 d cpuid_leaf1_edx_mask
ffffffff8147d288 d cpuid_leaf1_ecx_mask
ffffffff8147d28c d cpuid_leaf1_ecx_set_mask
ffffffff8147d290 d cpuid_leaf5_ecx_val
ffffffff8147d294 d cpuid_leaf5_edx_val
ffffffff8147d2c0 d xen_clocksource
ffffffff8147d380 D xen_max_p2m_pfn
ffffffff8147d388 D xen_swiotlb
ffffffff8147d3c0 D boot_cpu_data
ffffffff8147d480 D iommu_group_mf
ffffffff8147d484 D iommu_pass_through
ffffffff8147d488 D iommu_detected
ffffffff8147d48c D no_iommu
ffffffff8147d490 D iommu_merge
ffffffff8147d494 D force_iommu
ffffffff8147d498 D panic_on_overflow
ffffffff8147d49c d forbid_dac
ffffffff8147d4a0 d iommu_sac_force
ffffffff8147d4a4 D tsc_khz
ffffffff8147d4a8 D cpu_khz
ffffffff8147d4ac d tsc_disabled
ffffffff8147d4b0 d tsc_unstable
ffffffff8147d4b4 D io_delay_type
ffffffff8147d4b8 d mxcsr_feature_mask
ffffffff8147d4c0 d x86_64_regsets
ffffffff8147d5a0 d x86_32_regsets
ffffffff8147d700 D hw_cache_extra_regs
ffffffff8147d860 D hw_cache_event_ids
ffffffff8147d9c0 D x86_pmu
ffffffff8147db20 d intel_core_event_constraints
ffffffff8147dc00 d intel_core2_event_constraints
ffffffff8147ddc0 d intel_nehalem_event_constraints
ffffffff8147df40 d intel_nehalem_extra_regs
ffffffff8147df80 d intel_perfmon_event_map
ffffffff8147dfe0 d intel_gen_event_constraints
ffffffff8147e060 d intel_westmere_event_constraints
ffffffff8147e160 d intel_westmere_extra_regs
ffffffff8147e1c0 d intel_snb_event_constraints
ffffffff8147e2a0 d intel_snb_extra_regs
ffffffff8147e300 d intel_v1_event_constraints
ffffffff8147e320 D mce_banks
ffffffff8147e328 D mce_ser
ffffffff8147e32c D mce_ignore_ce
ffffffff8147e330 D mce_cmci_disabled
ffffffff8147e334 D mce_disabled
ffffffff8147e338 d mce_dont_log_ce
ffffffff8147e33c d banks
ffffffff8147e340 d rip_msr
ffffffff8147e344 d tolerant
ffffffff8147e348 d monarch_timeout
ffffffff8147e34c d mce_panic_timeout
ffffffff8147e350 d mce_bootlog
ffffffff8147e360 d isa_irq_to_gsi
ffffffff8147e3a0 D __per_cpu_offset
ffffffff8147e3e0 d ioapic_chip
ffffffff8147e4a0 d lapic_chip
ffffffff8147e558 D apic
ffffffff8147e560 d valid_flags
ffffffff8147e564 D swiotlb
ffffffff8147e580 D __supported_pte_mask
ffffffff8147e588 D pat_enabled
ffffffff8147e590 d boot_pat_state
ffffffff8147e5a0 D node_data
ffffffff8147eda0 D vdso_enabled
ffffffff8147eda4 D sysctl_vsyscall32
ffffffff8147eda8 D printk_delay_msec
ffffffff8147edac d ignore_loglevel
ffffffff8147edb0 d keep_bootcon
ffffffff8147edb8 d cpu_online_bits
ffffffff8147edc0 d cpu_possible_bits
ffffffff8147edc8 d cpu_present_bits
ffffffff8147edd0 d cpu_active_bits
ffffffff8147edd8 D print_fatal_signals
ffffffff8147ede0 D system_nrt_freezable_wq
ffffffff8147ede8 D system_freezable_wq
ffffffff8147edf0 D system_unbound_wq
ffffffff8147edf8 D system_nrt_wq
ffffffff8147ee00 D system_long_wq
ffffffff8147ee08 D system_wq
ffffffff8147ee10 d hrtimer_hres_enabled
ffffffff8147ee18 D scheduler_running
ffffffff8147ee1c D sched_clock_stable
ffffffff8147ee20 D sched_clock_running
ffffffff8147ee28 D sysctl_sched_shares_window
ffffffff8147ee2c D sysctl_sched_child_runs_first
ffffffff8147ee30 d max_load_balance_interval
ffffffff8147ee38 D timekeeping_suspended
ffffffff8147ee3c D tick_do_timer_cpu
ffffffff8147ee40 D futex_cmpxchg_enabled
ffffffff8147ee44 D nr_cpu_ids
ffffffff8147ee48 d use_task_css_set_links
ffffffff8147ee4c d need_forkexit_callback
ffffffff8147ee50 D cpuset_memory_pressure_enabled
ffffffff8147ee54 D number_of_cpusets
ffffffff8147ee58 D force_irqthreads
ffffffff8147ee5c D noirqdebug
ffffffff8147ee60 d irqfixup
ffffffff8147ee64 D rcu_cpu_stall_timeout
ffffffff8147ee68 D rcu_cpu_stall_suppress
ffffffff8147ee6c D rcu_scheduler_active
ffffffff8147ee70 d rcu_scheduler_fully_active
ffffffff8147ee74 D delayacct_on
ffffffff8147ee78 D sysctl_perf_event_sample_rate
ffffffff8147ee7c D sysctl_perf_event_mlock
ffffffff8147ee80 D sysctl_perf_event_paranoid
ffffffff8147ee84 D perf_sched_events
ffffffff8147ee88 d max_samples_per_tick
ffffffff8147ee8c d nr_mmap_events
ffffffff8147ee90 d nr_comm_events
ffffffff8147ee94 d nr_task_events
ffffffff8147eea0 D oom_killer_disabled
ffffffff8147eea4 D page_group_by_mobility_disabled
ffffffff8147eea8 D nr_online_nodes
ffffffff8147eeac D nr_node_ids
ffffffff8147eeb0 D gfp_allowed_mask
ffffffff8147eeb8 D dirty_balance_reserve
ffffffff8147eec0 D totalreserve_pages
ffffffff8147eec8 D totalram_pages
ffffffff8147eee0 D node_states
ffffffff8147ef60 D zone_reclaim_mode
ffffffff8147ef80 d shmem_backing_dev_info
ffffffff8147f1d8 D sysctl_stat_interval
ffffffff8147f1e0 D pcpu_unit_offsets
ffffffff8147f1e8 D pcpu_base_addr
ffffffff8147f1f0 d pcpu_unit_size
ffffffff8147f1f4 d pcpu_nr_slots
ffffffff8147f1f8 d pcpu_slot
ffffffff8147f200 d pcpu_chunk_struct_size
ffffffff8147f208 d pcpu_atom_size
ffffffff8147f20c d pcpu_nr_groups
ffffffff8147f210 d pcpu_group_sizes
ffffffff8147f218 d pcpu_group_offsets
ffffffff8147f220 d pcpu_unit_pages
ffffffff8147f224 d pcpu_nr_units
ffffffff8147f228 d pcpu_unit_map
ffffffff8147f230 d pcpu_low_unit_cpu
ffffffff8147f234 d pcpu_high_unit_cpu
ffffffff8147f238 D highest_memmap_pfn
ffffffff8147f240 D zero_pfn
ffffffff8147f248 D randomize_va_space
ffffffff8147f24c D sysctl_max_map_count
ffffffff8147f250 D sysctl_overcommit_ratio
ffffffff8147f254 D sysctl_overcommit_memory
ffffffff8147f258 d vmap_initialized
ffffffff8147f25c d use_alien_caches
ffffffff8147f260 D transparent_hugepage_flags
ffffffff8147f268 d mm_slot_cache
ffffffff8147f270 d mm_slots_hash
ffffffff8147f278 d khugepaged_alloc_sleep_millisecs
ffffffff8147f27c d khugepaged_scan_sleep_millisecs
ffffffff8147f280 d khugepaged_pages_to_scan
ffffffff8147f284 d khugepaged_max_ptes_none
ffffffff8147f288 d khugepaged_thread
ffffffff8147f290 D mce_bad_pages
ffffffff8147f298 D sysctl_memory_failure_recovery
ffffffff8147f29c D sysctl_memory_failure_early_kill
ffffffff8147f2a0 D cleancache_enabled
ffffffff8147f2c0 d cleancache_ops
ffffffff8147f300 D files_lglock_cpus
ffffffff8147f308 d filp_cachep
ffffffff8147f310 d pipe_mnt
ffffffff8147f318 d fasync_cache
ffffffff8147f320 D names_cachep
ffffffff8147f328 D sysctl_vfs_cache_pressure
ffffffff8147f32c d d_hash_shift
ffffffff8147f330 d dentry_hashtable
ffffffff8147f338 d d_hash_mask
ffffffff8147f340 d dentry_cache
ffffffff8147f348 d inode_cachep
ffffffff8147f350 d inode_hashtable
ffffffff8147f358 d i_hash_shift
ffffffff8147f35c d i_hash_mask
ffffffff8147f360 D sysctl_nr_open
ffffffff8147f368 D vfsmount_lock_cpus
ffffffff8147f370 d mount_hashtable
ffffffff8147f378 d mnt_cache
ffffffff8147f380 d bvec_slabs
ffffffff8147f410 d bio_split_pool
ffffffff8147f418 d bdev_cachep
ffffffff8147f420 d blockdev_superblock
ffffffff8147f428 d dio_cache
ffffffff8147f430 D dir_notify_enable
ffffffff8147f438 d dnotify_group
ffffffff8147f440 d dnotify_struct_cache
ffffffff8147f448 d dnotify_mark_cache
ffffffff8147f450 D event_priv_cachep
ffffffff8147f458 d inotify_max_user_instances
ffffffff8147f45c d inotify_max_user_watches
ffffffff8147f460 d inotify_max_queued_events
ffffffff8147f468 d inotify_inode_mark_cachep
ffffffff8147f470 d fanotify_mark_cache
ffffffff8147f478 d fanotify_response_event_cache
ffffffff8147f480 d max_user_watches
ffffffff8147f488 d epi_cache
ffffffff8147f490 d pwq_cache
ffffffff8147f498 d anon_inode_mnt
ffffffff8147f4a0 d filelock_cache
ffffffff8147f4a8 d skcipher_default_geniv
ffffffff8147f4b0 d blk_iopoll_budget
ffffffff8147f4c0 d height_to_maxindex
ffffffff8147f520 D kptr_restrict
ffffffff8147f524 D percpu_counter_batch
ffffffff8147f540 D num_registered_fb
ffffffff8147f560 D registered_fb
ffffffff8147f660 d fb_logo
ffffffff8147f678 d ofonly
ffffffff8147f680 d video_options
ffffffff8147f780 d red2
ffffffff8147f784 d green2
ffffffff8147f788 d blue2
ffffffff8147f7a0 d red16
ffffffff8147f7c0 d green16
ffffffff8147f7e0 d blue16
ffffffff8147f800 d red4
ffffffff8147f808 d green4
ffffffff8147f810 d blue4
ffffffff8147f820 d red8
ffffffff8147f830 d green8
ffffffff8147f840 d blue8
ffffffff8147f850 d vga_hardscroll_enabled
ffffffff8147f851 d vga_hardscroll_user_enable
ffffffff8147f854 d vga_can_do_color
ffffffff8147f858 d vga_vram_size
ffffffff8147f860 d vga_vram_base
ffffffff8147f868 d vga_video_port_reg
ffffffff8147f86a d vga_video_type
ffffffff8147f86c d vga_default_font_height
ffffffff8147f870 d vga_video_port_val
ffffffff8147f878 d vga_vram_end
ffffffff8147f880 d vga_scan_lines
ffffffff8147f884 d vga_init_done
ffffffff8147f888 d vga_compat
ffffffff8147f88c d pmi_setpal
ffffffff8147f890 d ypan
ffffffff8147f898 d pmi_start
ffffffff8147f8a0 d pmi_pal
ffffffff8147f8a8 d depth
ffffffff8147f8ac d mtrr
ffffffff8147f8b0 d inverse
ffffffff8147f8b8 d ec_delay
ffffffff8147f8c0 d hest_tab
ffffffff8147f8c8 d ghes_panic_timeout
ffffffff8147f8e0 D xen_features
ffffffff8147f900 d xen_pirq_chip
ffffffff8147f9c0 d xen_dynamic_chip
ffffffff8147fa80 d xen_percpu_chip
ffffffff8147fb38 d ring_ops
ffffffff8147fb40 d selfballoon_uphysteresis
ffffffff8147fb44 d selfballoon_downhysteresis
ffffffff8147fb48 d selfballoon_interval
ffffffff8147fb4c d xen_selfballooning_enabled
ffffffff8147fb50 D tmem_enabled
ffffffff8147fb51 d pci_seg_supported
ffffffff8147fb54 d trickle_thresh
ffffffff8147fb58 d off
ffffffff8147fb5c d off
ffffffff8147fb60 d initialized
ffffffff8147fb64 D pmtmr_ioport
ffffffff8147fb68 D amd_iommu_force_isolation
ffffffff8147fb69 D amd_iommu_v2_present
ffffffff8147fb6c D amd_iommu_max_pasids
ffffffff8147fb70 D amd_iommu_iotlb_sup
ffffffff8147fb71 D amd_iommu_np_cache
ffffffff8147fb78 d pci_seg_supported
ffffffff8147fb80 D raw_pci_ext_ops
ffffffff8147fb88 D raw_pci_ops
ffffffff8147fba0 d sock_mnt
ffffffff8147fba8 d sock_inode_cachep
ffffffff8147fbc0 d net_families
ffffffff8147fd00 D sysctl_optmem_max
ffffffff8147fd04 D sysctl_rmem_default
ffffffff8147fd08 D sysctl_wmem_default
ffffffff8147fd0c D sysctl_rmem_max
ffffffff8147fd10 D sysctl_wmem_max
ffffffff8147fd14 d warned.44065
ffffffff8147fd18 d skbuff_fclone_cache
ffffffff8147fd20 d skbuff_head_cache
ffffffff8147fd40 D rps_needed
ffffffff8147fd48 D rps_sock_flow_table
ffffffff8147fd50 D weight_p
ffffffff8147fd54 D netdev_budget
ffffffff8147fd58 D netdev_tstamp_prequeue
ffffffff8147fd5c D netdev_max_backlog
ffffffff8147fd60 d ptype_base
ffffffff8147fe60 d ptype_all
ffffffff8147fe70 d netstamp_needed
ffffffff8147fe74 d hashrnd
ffffffff8147fe80 d neigh_sysctl_template
ffffffff81480390 D net_msg_warn
ffffffff81480398 d flow_cachep
ffffffff814803a0 D pfifo_fast_ops
ffffffff81480440 D noop_qdisc_ops
ffffffff814804e0 d noqueue_qdisc_ops
ffffffff81480580 D mq_qdisc_ops
ffffffff81480620 d rt_hash_table
ffffffff81480628 d rt_hash_mask
ffffffff8148062c d ip_rt_redirect_silence
ffffffff81480630 d ip_rt_redirect_number
ffffffff81480634 d ip_rt_redirect_load
ffffffff81480638 d ip_rt_min_pmtu
ffffffff8148063c d ip_rt_mtu_expires
ffffffff81480640 d ip_rt_min_advmss
ffffffff81480644 d ip_rt_gc_min_interval
ffffffff81480648 d ip_rt_gc_elasticity
ffffffff8148064c d rt_hash_log
ffffffff81480650 d ip_rt_gc_timeout
ffffffff81480654 d rt_chain_length_max
ffffffff81480658 d ip_rt_error_burst
ffffffff8148065c d ip_rt_error_cost
ffffffff81480660 d ip_rt_gc_interval
ffffffff81480668 D inet_peer_maxttl
ffffffff8148066c D inet_peer_minttl
ffffffff81480670 D inet_peer_threshold
ffffffff81480678 d peer_cachep
ffffffff81480680 D inet_protos
ffffffff81480e80 d sysctl_ipfrag_max_dist
ffffffff81480e84 D sysctl_ip_default_ttl
ffffffff81480e90 D sysctl_local_ports
ffffffff81480ea0 D tcp_memory_pressure
ffffffff81480ea4 D sysctl_tcp_rmem
ffffffff81480eb0 D sysctl_tcp_wmem
ffffffff81480ebc D sysctl_tcp_fin_timeout
ffffffff81480ec0 D sysctl_tcp_abc
ffffffff81480ec4 D sysctl_tcp_moderate_rcvbuf
ffffffff81480ec8 D sysctl_tcp_thin_dupack
ffffffff81480ecc D sysctl_tcp_nometrics_save
ffffffff81480ed0 D sysctl_tcp_frto_response
ffffffff81480ed4 D sysctl_tcp_frto
ffffffff81480ed8 D sysctl_tcp_max_orphans
ffffffff81480edc D sysctl_tcp_rfc1337
ffffffff81480ee0 D sysctl_tcp_stdurg
ffffffff81480ee4 D sysctl_tcp_adv_win_scale
ffffffff81480ee8 D sysctl_tcp_app_win
ffffffff81480eec D sysctl_tcp_dsack
ffffffff81480ef0 D sysctl_tcp_ecn
ffffffff81480ef4 D sysctl_tcp_reordering
ffffffff81480ef8 D sysctl_tcp_fack
ffffffff81480efc D sysctl_tcp_sack
ffffffff81480f00 D sysctl_tcp_window_scaling
ffffffff81480f04 D sysctl_tcp_timestamps
ffffffff81480f08 D sysctl_tcp_cookie_size
ffffffff81480f0c D sysctl_tcp_slow_start_after_idle
ffffffff81480f10 D sysctl_tcp_base_mss
ffffffff81480f14 D sysctl_tcp_mtu_probing
ffffffff81480f18 D sysctl_tcp_tso_win_divisor
ffffffff81480f1c D sysctl_tcp_workaround_signed_windows
ffffffff81480f20 D sysctl_tcp_retrans_collapse
ffffffff81480f24 D sysctl_tcp_thin_linear_timeouts
ffffffff81480f28 D sysctl_tcp_orphan_retries
ffffffff81480f2c D sysctl_tcp_retries2
ffffffff81480f30 D sysctl_tcp_retries1
ffffffff81480f34 D sysctl_tcp_keepalive_intvl
ffffffff81480f38 D sysctl_tcp_keepalive_probes
ffffffff81480f3c D sysctl_tcp_keepalive_time
ffffffff81480f40 D sysctl_tcp_synack_retries
ffffffff81480f44 D sysctl_tcp_syn_retries
ffffffff81480f60 D tcp_request_sock_ops
ffffffff81480fa0 D sysctl_tcp_low_latency
ffffffff81480fa4 D sysctl_tcp_tw_reuse
ffffffff81480fa8 D sysctl_tcp_abort_on_overflow
ffffffff81480fac D sysctl_tcp_syncookies
ffffffff81480fb0 D sysctl_udp_wmem_min
ffffffff81480fb4 D sysctl_udp_rmem_min
ffffffff81480fc0 D sysctl_udp_mem
ffffffff81480fe0 D udp_table
ffffffff81481000 D udplite_table
ffffffff81481020 d arp_packet_type
ffffffff81481080 D sysctl_ip_dynaddr
ffffffff81481084 D sysctl_ip_nonlocal_bind
ffffffff81481088 D inet_ehash_secret
ffffffff814810a0 d ip_packet_type
ffffffff814810f0 D sysctl_igmp_max_msf
ffffffff814810f4 D sysctl_igmp_max_memberships
ffffffff814810f8 d fn_alias_kmem
ffffffff81481100 d trie_leaf_kmem
ffffffff81481120 d cubictcp
ffffffff814811a0 d hystart
ffffffff814811a4 d hystart_low_window
ffffffff814811a8 d hystart_detect
ffffffff814811ac d hystart_ack_delta
ffffffff814811b0 d fast_convergence
ffffffff814811b4 d beta
ffffffff814811b8 d cube_factor
ffffffff814811c0 d cube_rtt_scale
ffffffff814811c4 d tcp_friendliness
ffffffff814811c8 d beta_scale
ffffffff814811cc d initial_ssthresh
ffffffff814811d0 d bic_scale
ffffffff814811d8 d xfrm_policy_hashmax
ffffffff814811e0 d xfrm_dst_cache
ffffffff814811e8 d xfrm_state_hashmax
ffffffff814811f0 d secpath_cachep
ffffffff81481200 D _edata
ffffffff81482000 D __vvar_beginning_hack
ffffffff81482000 A __vvar_page
ffffffff81482000 D jiffies
ffffffff81482000 D jiffies_64
ffffffff81482010 D vgetcpu_mode
ffffffff81482080 D vsyscall_gtod_data
ffffffff81483000 D __init_begin
ffffffff81483000 A __per_cpu_load
ffffffff81483000 A init_per_cpu__irq_stack_union
ffffffff81487000 A init_per_cpu__gdt_page
ffffffff81496000 T _sinittext
ffffffff81496000 T early_idt_handlers
ffffffff81496140 T early_idt_handler
ffffffff814961b1 t early_recursion_flag
ffffffff814961b5 t early_idt_msg
ffffffff814961f1 t early_idt_ripmsg
ffffffff81496200 T startup_xen
ffffffff81496215 T x86_64_start_reservations
ffffffff81496342 T x86_64_start_kernel
ffffffff81496457 T reserve_ebda_region
ffffffff814964b7 t set_reset_devices
ffffffff814964c7 t debug_kernel
ffffffff814964d4 t quiet_kernel
ffffffff814964e1 t init_setup
ffffffff81496504 t rdinit_setup
ffffffff81496527 t loglevel
ffffffff8149655b t do_early_param
ffffffff814965d8 t repair_env_string
ffffffff81496630 t unknown_bootoption
ffffffff814967b0 T parse_early_options
ffffffff814967da T parse_early_param
ffffffff81496815 W smp_setup_processor_id
ffffffff81496816 W thread_info_cache_init
ffffffff81496817 T start_kernel
ffffffff81496b68 t kernel_init
ffffffff81496d38 t rootwait_setup
ffffffff81496d4c t root_data_setup
ffffffff81496d59 t fs_names_setup
ffffffff81496d66 t root_delay_setup
ffffffff81496d7d t root_dev_setup
ffffffff81496d99 t load_ramdisk
ffffffff81496db5 t readonly
ffffffff81496dc6 t readwrite
ffffffff81496dd7 T mount_block_root
ffffffff8149704a T mount_root
ffffffff81497099 T prepare_namespace
ffffffff81497202 t no_initrd
ffffffff81497212 T initrd_load
ffffffff8149724e t error
ffffffff81497260 t flush_buffer
ffffffff81497311 t retain_initrd_param
ffffffff81497325 t clean_path
ffffffff81497373 t do_utime
ffffffff814973ac t do_symlink
ffffffff81497439 t unpack_to_rootfs
ffffffff814976b7 t do_collect
ffffffff81497729 t parse_header
ffffffff8149780c t read_into
ffffffff8149786b t do_start
ffffffff81497884 t do_skip
ffffffff814978ed t do_reset
ffffffff81497960 t do_copy
ffffffff81497a0e t maybe_link.part.5
ffffffff81497b03 t do_name
ffffffff81497d79 t do_header
ffffffff81497eb4 t populate_rootfs
ffffffff81497f51 t lpj_setup
ffffffff81497f69 t xen_banner
ffffffff81497fd3 t xen_hvm_platform
ffffffff81498088 t xen_load_gdt_boot
ffffffff8149819f t xen_write_gdt_entry_boot
ffffffff81498213 T xen_start_kernel
ffffffff814987f0 t xen_hvm_guest_init
ffffffff8149897d T xen_memory_setup
ffffffff81498fa2 T xen_arch_setup
ffffffff814990ba t xen_mark_pinned
ffffffff814990c2 t xen_pagetable_setup_start
ffffffff814990c3 t xen_set_pgd_hyper
ffffffff81499122 t xen_set_pte_init
ffffffff8149918d t xen_pagetable_setup_done
ffffffff8149924b t xen_alloc_pmd_init
ffffffff81499261 t xen_alloc_pte_init
ffffffff81499293 t xen_release_pmd_init
ffffffff814992a9 t xen_release_pte_init
ffffffff814992d2 t xen_mapping_pagetable_reserve
ffffffff81499329 T xen_reserve_top
ffffffff8149932a T xen_setup_machphys_mapping
ffffffff81499374 T xen_setup_kernel_pagetable
ffffffff814996cf T xen_ident_map_ISA
ffffffff8149974d T xen_init_mmu_ops
ffffffff81499950 T xen_hvm_init_mmu_ops
ffffffff81499994 T xen_init_irq_ops
ffffffff814999ed t xen_time_init
ffffffff81499a73 T xen_init_time_ops
ffffffff81499ad0 T xen_hvm_init_time_ops
ffffffff81499b43 t parse_xen_emul_unplug
ffffffff81499cb6 t __early_alloc_p2m
ffffffff81499e32 T xen_build_dynamic_phys_to_machine
ffffffff8149a016 T set_phys_range_identity
ffffffff8149a21a t xen_hvm_smp_prepare_cpus
ffffffff8149a244 t xen_smp_prepare_cpus
ffffffff8149a3cd t xen_smp_prepare_boot_cpu
ffffffff8149a47a T xen_smp_init
ffffffff8149a4df T xen_hvm_smp_init
ffffffff8149a52b T xen_init_spinlocks
ffffffff8149a56e T xen_init_vga
ffffffff8149a6a3 T pci_xen_swiotlb_detect
ffffffff8149a6f1 T pci_xen_swiotlb_init
ffffffff8149a718 T early_trap_init
ffffffff8149a827 T trap_init
ffffffff8149ae82 t x86_late_time_init
ffffffff8149ae8f T setup_default_timer_irq
ffffffff8149ae9d T hpet_time_init
ffffffff8149aeb2 T time_init
ffffffff8149aebe t code_bytes_setup
ffffffff8149aee2 t kstack_setup
ffffffff8149af00 t setup_unknown_nmi_panic
ffffffff8149af10 t parse_reservelow
ffffffff8149af58 T extend_brk
ffffffff8149afa9 T reserve_standard_io_resources
ffffffff8149afce T setup_arch
ffffffff8149b8e2 T x86_init_uint_noop
ffffffff8149b8e3 T x86_init_pgd_noop
ffffffff8149b8e4 T iommu_init_noop
ffffffff8149b8e7 t i8259A_init_ops
ffffffff8149b905 t smp_intr_init
ffffffff8149beef t apic_intr_init
ffffffff8149c1ca T init_ISA_irqs
ffffffff8149c218 T init_IRQ
ffffffff8149c24c T native_init_IRQ
ffffffff8149c326 t romsignature
ffffffff8149c399 t romchecksum
ffffffff8149c428 T probe_roms
ffffffff8149c6da t control_va_addr_alignment
ffffffff8149c78c t vsyscall_init
ffffffff8149c7b0 t vsyscall_setup
ffffffff8149c829 T map_vsyscall
ffffffff8149c889 t sbf_init
ffffffff8149c97a t cpcompare
ffffffff8149c9b3 t e820_mark_nvs_memory
ffffffff8149c9e9 t e820_print_type
ffffffff8149ca61 t __e820_add_region.part.2
ffffffff8149ca6f t __e820_update_range
ffffffff8149ccad t e820_end_pfn.constprop.4
ffffffff8149cd2f T e820_all_mapped
ffffffff8149cd86 T e820_add_region
ffffffff8149cdba t __append_e820_map
ffffffff8149cdf0 T e820_print_map
ffffffff8149ce4b T sanitize_e820_map
ffffffff8149d044 T e820_update_range
ffffffff8149d05b T e820_remove_range
ffffffff8149d1ba t parse_memmap_opt
ffffffff8149d2cd t parse_memopt
ffffffff8149d342 T update_e820
ffffffff8149d393 T e820_search_gap
ffffffff8149d3fd T e820_setup_gap
ffffffff8149d478 T parse_e820_ext
ffffffff8149d4c5 T e820_mark_nosave_regions
ffffffff8149d4c6 T early_reserve_e820
ffffffff8149d541 T e820_end_of_ram_pfn
ffffffff8149d550 T e820_end_of_low_ram_pfn
ffffffff8149d55a T finish_e820_parsing
ffffffff8149d5d0 T e820_reserve_resources
ffffffff8149d72f T e820_reserve_resources_late
ffffffff8149d801 T default_machine_specific_memory_setup
ffffffff8149d8bc T setup_memory_map
ffffffff8149d8f2 T memblock_x86_fill
ffffffff8149d944 T memblock_find_dma_reserve
ffffffff8149da82 t pci_iommu_init
ffffffff8149dab9 t iommu_setup
ffffffff8149dd2f T pci_iommu_alloc
ffffffff8149dd96 t quirk_amd_nb_node
ffffffff8149ddf8 t topology_init
ffffffff8149de86 t arch_kdebugfs_init
ffffffff8149de94 t bootonly
ffffffff8149dea4 t debug_alt
ffffffff8149deb4 t setup_noreplace_smp
ffffffff8149dec4 t setup_noreplace_paravirt
ffffffff8149ded4 T arch_init_ideal_nops
ffffffff8149dee0 T alternative_instructions
ffffffff8149e00e T setup_pit_timer
ffffffff8149e026 T notsc_setup
ffffffff8149e046 t tsc_setup
ffffffff8149e08d t init_tsc_clocksource
ffffffff8149e10e t cpufreq_tsc
ffffffff8149e13d T tsc_init
ffffffff8149e22a t io_delay_param
ffffffff8149e2c7 t dmi_io_delay_0xed_port
ffffffff8149e2f2 T io_delay_init
ffffffff8149e308 t add_rtc_cmos
ffffffff8149e39a T sort_iommu_table
ffffffff8149e475 T check_iommu_entries
ffffffff8149e540 t configure_trampolines
ffffffff8149e563 T setup_trampolines
ffffffff8149e5f9 t idle_setup
ffffffff8149e6e6 T init_amd_e400_c1e_mask
ffffffff8149e6ff t xstate_enable_boot_cpu
ffffffff8149e9a9 t i8237A_init_ops
ffffffff8149e9ba t x86_xsave_setup
ffffffff8149e9e0 t x86_xsaveopt_setup
ffffffff8149e9f6 t setup_disable_smep
ffffffff8149ea06 t setup_noclflush
ffffffff8149ea1c t setup_show_msr
ffffffff8149ea4c t setup_disablecpuid
ffffffff8149ea8e T setup_cpu_local_masks
ffffffff8149ea8f T early_cpu_init
ffffffff8149eb84 T identify_boot_cpu
ffffffff8149ebb1 t vmware_platform
ffffffff8149ec56 t vmware_platform_setup
ffffffff8149ec8a T init_hypervisor_platform
ffffffff8149ecf7 t ms_hyperv_init_platform
ffffffff8149edac t ms_hyperv_platform
ffffffff8149ee23 t x86_rdrand_setup
ffffffff8149ee39 T check_bugs
ffffffff8149ee64 t init_hw_perf_events
ffffffff8149f2a6 T amd_pmu_init
ffffffff8149f505 T p6_pmu_init
ffffffff8149f625 T p4_pmu_init
ffffffff8149f776 t intel_clovertown_quirk
ffffffff8149f79c t intel_nehalem_quirk
ffffffff8149f7ca t intel_sandybridge_quirk
ffffffff8149f7f0 t intel_arch_events_quirk
ffffffff8149f855 T intel_pmu_init
ffffffff8149fe08 t mcheck_disable
ffffffff8149fe18 t mcheck_enable
ffffffff8149ff7f t mcheck_init_device
ffffffff814a0071 T mcheck_init
ffffffff814a0074 t threshold_init_device
ffffffff814a00c7 t mtrr_init_finialize
ffffffff814a00fa T mtrr_bp_init
ffffffff814a02ce t mtrr_if_init
ffffffff814a032f t print_fixed_last.part.1
ffffffff814a0367 t print_fixed
ffffffff814a03f4 T mtrr_state_warn
ffffffff814a0459 T get_mtrr_state
ffffffff814a0741 t disable_mtrr_cleanup_setup
ffffffff814a074e t enable_mtrr_cleanup_setup
ffffffff814a075b t mtrr_cleanup_debug_setup
ffffffff814a0768 t disable_mtrr_trim_setup
ffffffff814a0775 t parse_mtrr_spare_reg
ffffffff814a078f t parse_mtrr_gran_size_opt
ffffffff814a07ba t parse_mtrr_chunk_size_opt
ffffffff814a07e5 t print_out_mtrr_range_state
ffffffff814a08ca t mtrr_print_out_one_result
ffffffff814a09d0 t x86_get_mtrr_mem_range
ffffffff814a0bc3 t real_trim_memory
ffffffff814a0bdd t set_var_mtrr_all
ffffffff814a0c64 t range_to_mtrr
ffffffff814a0dbe t range_to_mtrr_with_hole
ffffffff814a0fd7 t x86_setup_var_mtrrs.constprop.2
ffffffff814a110f t mtrr_calc_range_state.constprop.1
ffffffff814a1267 T mtrr_cleanup
ffffffff814a1645 T amd_special_default_mtrr
ffffffff814a168d T mtrr_trim_uncached_memory
ffffffff814a195b t amd_ibs_init
ffffffff814a1d09 t acpi_parse_lapic_addr_ovr
ffffffff814a1d30 t parse_acpi_skip_timer_override
ffffffff814a1d3d t parse_acpi_use_timer_override
ffffffff814a1d4a t parse_pci
ffffffff814a1d78 t hpet_insert_resource
ffffffff814a1d96 t acpi_parse_nmi_src
ffffffff814a1dba t acpi_parse_hpet
ffffffff814a1ef4 t setup_acpi_sci
ffffffff814a1f99 t parse_acpi
ffffffff814a2093 t dmi_ignore_irq0_timer_override
ffffffff814a20bd t acpi_parse_fadt
ffffffff814a2109 t acpi_parse_sbf
ffffffff814a2134 t disable_acpi_pci
ffffffff814a216a t disable_acpi_irq
ffffffff814a2194 t dmi_disable_acpi
ffffffff814a21e3 t acpi_parse_lapic_nmi
ffffffff814a2225 t acpi_parse_x2apic_nmi
ffffffff814a2266 t acpi_parse_x2apic
ffffffff814a2297 t acpi_parse_lapic
ffffffff814a22cd t acpi_parse_sapic
ffffffff814a230c t acpi_parse_ioapic
ffffffff814a2341 t acpi_parse_madt
ffffffff814a239f T __acpi_map_table
ffffffff814a23b1 T __acpi_unmap_table
ffffffff814a23c1 T acpi_pic_sci_set_trigger
ffffffff814a2431 T acpi_set_irq_model_pic
ffffffff814a2451 T acpi_set_irq_model_ioapic
ffffffff814a2471 T mp_override_legacy_irq
ffffffff814a2510 t acpi_sci_ioapic_setup
ffffffff814a2561 t acpi_parse_int_src_ovr
ffffffff814a262a T mp_config_acpi_legacy_irqs
ffffffff814a270f T acpi_boot_table_init
ffffffff814a2793 T early_acpi_boot_init
ffffffff814a2860 T acpi_boot_init
ffffffff814a2baa T acpi_mps_check
ffffffff814a2bad t ffh_cstate_init
ffffffff814a2bd4 t reboot_setup
ffffffff814a2c3c t set_pci_reboot
ffffffff814a2c68 t pci_reboot_init
ffffffff814a2c83 t nvidia_hpet_check
ffffffff814a2c86 t ati_bugs_contd
ffffffff814a2d32 t fix_hypertransport_config
ffffffff814a2dbf t via_bugs
ffffffff814a2dfa t ati_bugs
ffffffff814a2ee4 t nvidia_bugs
ffffffff814a2f2e T early_quirks
ffffffff814a3037 t nonmi_ipi_setup
ffffffff814a3048 t _setup_possible_cpus
ffffffff814a3069 t disable_smp
ffffffff814a30e7 T native_smp_prepare_cpus
ffffffff814a340c T native_smp_prepare_boot_cpu
ffffffff814a343f T native_smp_cpus_done
ffffffff814a34ef T prefill_possible_map
ffffffff814a35dd t pcpu_cpu_distance
ffffffff814a3630 t pcpup_populate_pte
ffffffff814a3635 t pcpu_fc_free
ffffffff814a3654 t pcpu_fc_alloc
ffffffff814a3708 T setup_per_cpu_areas
ffffffff814a3912 t mpf_checksum
ffffffff814a392a t ELCR_trigger
ffffffff814a3944 t update_mptable_setup
ffffffff814a395b t smp_check_mpc
ffffffff814a3a5c t parse_alloc_mptable_opt
ffffffff814a3aa0 t get_mpc_size
ffffffff814a3ae6 t smp_scan_config
ffffffff814a3bc7 t MP_bus_info
ffffffff814a3c52 t construct_default_ioirq_mptable
ffffffff814a3d61 t print_mp_irq_info
ffffffff814a3dae t MP_lintsrc_info.part.1
ffffffff814a3df2 t smp_dump_mptable.isra.2
ffffffff814a3e48 t update_mp_table
ffffffff814a424b t MP_processor_info.part.5
ffffffff814a4297 T default_mpc_apic_id
ffffffff814a429c T default_mpc_oem_bus_info
ffffffff814a42ce T default_smp_read_mpc_oem
ffffffff814a42cf T default_get_smp_config
ffffffff814a470c T default_find_smp_config
ffffffff814a4767 T early_reserve_e820_mpc_new
ffffffff814a4794 t setup_disableapic
ffffffff814a47b1 t setup_nolapic
ffffffff814a47b3 t parse_lapic_timer_c2_ok
ffffffff814a47c0 t parse_disable_apic_timer
ffffffff814a47cd t parse_nolapic_timer
ffffffff814a47da t lapic_insert_resource
ffffffff814a4816 t init_lapic_sysfs
ffffffff814a4833 t setup_apicpmtimer
ffffffff814a484b t lapic_cal_handler
ffffffff814a4908 t apic_set_verbosity
ffffffff814a4975 T setup_boot_APIC_clock
ffffffff814a4e4b T verify_local_APIC
ffffffff814a5015 T sync_Arb_IDs
ffffffff814a508e T init_bsp_APIC
ffffffff814a510a T bsp_end_local_APIC_setup
ffffffff814a510f T enable_IR
ffffffff814a5113 T enable_IR_x2apic
ffffffff814a5114 T register_lapic_address
ffffffff814a51b4 T init_apic_mappings
ffffffff814a52b2 T connect_bsp_APIC
ffffffff814a52c8 T APIC_init_uniprocessor
ffffffff814a53c2 t parse_noapic
ffffffff814a53e3 t notimercheck
ffffffff814a53f3 t disable_timer_pin_setup
ffffffff814a5400 t io_apic_bug_finalize
ffffffff814a5416 t ioapic_init_ops
ffffffff814a5427 t print_APIC_field
ffffffff814a547a t __ioapic_init_mappings
ffffffff814a55d9 t print_local_APIC
ffffffff814a59ea t setup_show_lapic
ffffffff814a5a3e t find_isa_irq_pin
ffffffff814a5a8c t find_isa_irq_apic
ffffffff814a5afc t setup_timer_IRQ0_pin
ffffffff814a5b92 t timer_irq_works
ffffffff814a5bf4 t print_ICs
ffffffff814a60ec T set_io_apic_ops
ffffffff814a60fe T arch_early_irq_init
ffffffff814a61c3 T enable_IO_APIC
ffffffff814a62ac T setup_IO_APIC
ffffffff814a69b8 T arch_probe_nr_irqs
ffffffff814a69f3 T setup_ioapic_dest
ffffffff814a6a89 T ioapic_and_gsi_init
ffffffff814a6a92 T ioapic_insert_resources
ffffffff814a6ae1 T mp_register_ioapic
ffffffff814a6d2d T pre_init_apic_IRQ0
ffffffff814a6d90 T default_setup_apic_routing
ffffffff814a6e01 T default_acpi_madt_oem_check
ffffffff814a6e65 t early_serial_init
ffffffff814a6fba t setup_early_printk
ffffffff814a724d t disable_hpet
ffffffff814a725d t hpet_setup
ffffffff814a72d3 T hpet_enable
ffffffff814a74d5 t hpet_late_init
ffffffff814a75c7 t init_amd_nbs
ffffffff814a7676 T early_is_amd_nb
ffffffff814a769e T default_banner
ffffffff814a76b3 t add_pcspkr
ffffffff814a7716 T pci_swiotlb_detect_override
ffffffff814a7733 T pci_swiotlb_detect_4gb
ffffffff814a775a T pci_swiotlb_late_init
ffffffff814a777e T pci_swiotlb_init
ffffffff814a779c t audit_classes_init
ffffffff814a7848 T gart_iommu_init
ffffffff814a7c76 T gart_parse_options
ffffffff814a7dd9 t parse_gart_mem
ffffffff814a7e32 t search_agp_bridge
ffffffff814a8113 T early_gart_iommu_check
ffffffff814a8433 T gart_iommu_hole_init
ffffffff814a89d2 t set_check_enable_amd_mmconf
ffffffff814a89df T vsmp_init
ffffffff814a8acc T native_pagetable_reserve
ffffffff814a8ad4 T zone_sizes_init
ffffffff814a8b0e t parse_direct_gbpages_off
ffffffff814a8b1b t parse_direct_gbpages_on
ffffffff814a8b28 t __init_extra_mapping
ffffffff814a8c6d t nonx32_setup
ffffffff814a8cb1 T populate_extra_pmd
ffffffff814a8db4 T populate_extra_pte
ffffffff814a8dc9 T init_extra_mapping_wb
ffffffff814a8ddd T init_extra_mapping_uc
ffffffff814a8df1 T cleanup_highmap
ffffffff814a8e86 T paging_init
ffffffff814a8ea4 T mem_init
ffffffff814a8f89 t early_ioremap_debug_setup
ffffffff814a8f96 t check_early_ioremap_leak
ffffffff814a8fe3 t __early_set_fixmap
ffffffff814a9078 t __early_ioremap
ffffffff814a9213 T is_early_ioremap_ptep
ffffffff814a922b T early_ioremap_init
ffffffff814a9410 T early_ioremap_reset
ffffffff814a941b T fixup_early_ioremap
ffffffff814a944b T early_ioremap
ffffffff814a945d T early_memremap
ffffffff814a946f T early_iounmap
ffffffff814a9585 t pat_debug_setup
ffffffff814a9592 t nopat
ffffffff814a95b6 t setup_userpte
ffffffff814a95de T reserve_top_address
ffffffff814a95df t noexec_setup
ffffffff814a9639 T x86_report_nx
ffffffff814a967b t setup_hugepagesz
ffffffff814a96ec t numa_setup
ffffffff814a9741 t numa_nodemask_from_meminfo
ffffffff814a9769 T setup_node_to_cpumask_map
ffffffff814a97d8 T numa_remove_memblk_from
ffffffff814a9808 T numa_add_memblk
ffffffff814a9888 t dummy_numa_init
ffffffff814a98ed T numa_cleanup_meminfo
ffffffff814a9ab2 T numa_reset_distance
ffffffff814a9aee t numa_init
ffffffff814a9fd5 T numa_set_distance
ffffffff814aa1be T x86_numa_init
ffffffff814aa1f7 T init_cpu_to_node
ffffffff814aa2fd T initmem_init
ffffffff814aa302 T numa_free_all_bootmem
ffffffff814aa382 T amd_numa_init
ffffffff814aa704 t bad_srat
ffffffff814aa71f T acpi_numa_slit_init
ffffffff814aa78a T acpi_numa_x2apic_affinity_init
ffffffff814aa849 T acpi_numa_processor_affinity_init
ffffffff814aa8e6 T acpi_numa_memory_affinity_init
ffffffff814aa9a2 T acpi_numa_arch_fixup
ffffffff814aa9a3 T x86_acpi_numa_init
ffffffff814aa9bb t vdso_setup
ffffffff814aa9cf t init_vdso
ffffffff814aaaec t ia32_binfmt_init
ffffffff814aaafd t vdso_setup
ffffffff814aab14 T sysenter_setup
ffffffff814aadcc t coredump_filter_setup
ffffffff814aaded T fork_init
ffffffff814aae7b T proc_caches_init
ffffffff814aaf5b t proc_execdomains_init
ffffffff814aaf7a t oops_setup
ffffffff814aafa5 t console_suspend_disable
ffffffff814aafb2 t log_buf_len_setup
ffffffff814aaff8 t keep_bootcon_setup
ffffffff814ab015 t ignore_loglevel_setup
ffffffff814ab02f t printk_late_init
ffffffff814ab07e t console_setup
ffffffff814ab136 T setup_log_buf
ffffffff814ab274 t alloc_frozen_cpus
ffffffff814ab277 t cpu_hotplug_pm_sync_init
ffffffff814ab288 t spawn_ksoftirqd
ffffffff814ab2d2 T softirq_init
ffffffff814ab387 t strict_iomem
ffffffff814ab3cb t ioresources_init
ffffffff814ab404 t __reserve_region_with_split
ffffffff814ab49f t reserve_setup
ffffffff814ab56a T reserve_region_with_split
ffffffff814ab5b5 T sysctl_init
ffffffff814ab5c6 t file_caps_disable
ffffffff814ab5d6 T init_timers
ffffffff814ab619 t uid_cache_init
ffffffff814ab6ab t setup_print_fatal_signals
ffffffff814ab6cf T signals_init
ffffffff814ab6f7 T usermodehelper_init
ffffffff814ab725 t init_workqueues
ffffffff814aba89 T pidhash_init
ffffffff814abaef T pidmap_init
ffffffff814abba5 T sort_main_extable
ffffffff814abbb8 t locate_module_kobject
ffffffff814abc85 t param_sysfs_init
ffffffff814abe05 t init_posix_timers
ffffffff814abfe5 t init_posix_cpu_timers
ffffffff814ac0aa t setup_hrtimer_hres
ffffffff814ac0f4 T hrtimers_init
ffffffff814ac12f T nsproxy_cache_init
ffffffff814ac159 t ksysfs_init
ffffffff814ac1f4 T cred_init
ffffffff814ac219 t isolated_cpu_setup
ffffffff814ac235 t setup_relax_domain_level
ffffffff814ac260 t migration_init
ffffffff814ac2ce T sched_create_sysfs_power_savings_entries
ffffffff814ac318 T sched_init_smp
ffffffff814ac416 T sched_init
ffffffff814ac88c T init_sched_fair_class
ffffffff814ac89d t pm_qos_power_init
ffffffff814ac8f7 t pm_init
ffffffff814ac94f t timekeeping_init_ops
ffffffff814ac960 T timekeeping_init
ffffffff814aca74 t ntp_tick_adj_setup
ffffffff814aca90 T ntp_init
ffffffff814aca95 t boot_override_clocksource
ffffffff814acad1 t boot_override_clock
ffffffff814acb0f t init_clocksource_sysfs
ffffffff814acb61 t clocksource_done_booting
ffffffff814acbb8 t init_jiffies_clocksource
ffffffff814acbc4 W clocksource_default_clock
ffffffff814acbcc t init_timer_list_procfs
ffffffff814acbf5 t alarmtimer_init
ffffffff814acdcc T tick_init
ffffffff814acdd8 t futex_init
ffffffff814ace30 t proc_dma_init
ffffffff814ace4f t nrcpus
ffffffff814ace84 T call_function_init
ffffffff814acf04 t maxcpus
ffffffff814acf33 t nosmp
ffffffff814acf49 T setup_nr_cpu_ids
ffffffff814acf66 T smp_init
ffffffff814acffe t proc_modules_init
ffffffff814ad01d t kallsyms_init
ffffffff814ad03f t cgroup_disable
ffffffff814ad0be t cgroup_init_subsys
ffffffff814ad19b T cgroup_init_early
ffffffff814ad3fb T cgroup_init
ffffffff814ad508 T cpuset_init
ffffffff814ad583 T cpuset_init_smp
ffffffff814ad5d9 t pid_namespaces_init
ffffffff814ad616 t ikconfig_init
ffffffff814ad64c t cpu_stop_init
ffffffff814ad6ef t audit_enable
ffffffff814ad78e t audit_init
ffffffff814ad8c9 T audit_register_class
ffffffff814ad946 t audit_watch_init
ffffffff814ad97d t audit_tree_init
ffffffff814ad9d4 T early_irq_init
ffffffff814adabe t setup_forced_irqthreads
ffffffff814adac8 t irqpoll_setup
ffffffff814adaf6 t irqfixup_setup
ffffffff814adb24 t irq_pm_init_ops
ffffffff814adb35 t rcu_scheduler_really_started
ffffffff814adb42 t rcu_init_one
ffffffff814add74 T rcu_init
ffffffff814ade15 t utsname_sysctl_init
ffffffff814ade26 t delayacct_setup_disable
ffffffff814ade36 t taskstats_init
ffffffff814adec5 T taskstats_init_early
ffffffff814adf5d t perf_event_sysfs_init
ffffffff814adff1 T perf_event_init
ffffffff814ae159 T init_hw_breakpoint
ffffffff814ae248 t set_hashdist
ffffffff814ae274 t cmdline_parse_movablecore
ffffffff814ae2a3 t cmdline_parse_kernelcore
ffffffff814ae2d2 t setup_numa_zonelist_order
ffffffff814ae305 t find_min_pfn_for_node
ffffffff814ae380 T setup_per_cpu_pageset
ffffffff814ae45e T free_bootmem_with_active_regions
ffffffff814ae4fc T sparse_memory_present_with_active_regions
ffffffff814ae562 T absent_pages_in_range
ffffffff814ae572 T node_map_pfn_alignment
ffffffff814ae642 T find_min_pfn_with_active_regions
ffffffff814ae64c T set_dma_reserve
ffffffff814ae654 T page_alloc_init
ffffffff814ae660 T free_area_init
ffffffff814ae687 T free_area_init_nodes
ffffffff814aec5d T alloc_large_system_hash
ffffffff814aee91 T page_writeback_init
ffffffff814aeeb8 T swap_setup
ffffffff814aeee1 t kswapd_init
ffffffff814aef51 T shmem_init
ffffffff814af01b t setup_vmstat
ffffffff814af0df t bdi_class_init
ffffffff814af110 t default_bdi_init
ffffffff814af1ad t mm_sysfs_init
ffffffff814af1d3 t set_mminit_loglevel
ffffffff814af1f4 T mminit_verify_pageflags_layout
ffffffff814af30f t percpu_alloc_setup
ffffffff814af363 T pcpu_alloc_alloc_info
ffffffff814af3d7 t pcpu_build_alloc_info
ffffffff814af792 T pcpu_free_alloc_info
ffffffff814af7a8 T pcpu_setup_first_chunk
ffffffff814aff7b T pcpu_embed_first_chunk
ffffffff814b01ea T pcpu_page_first_chunk
ffffffff814b0553 T percpu_init_late
ffffffff814b0613 t disable_randmaps
ffffffff814b0623 t init_zero_pfn
ffffffff814b0655 T mmap_init
ffffffff814b066a T anon_vma_init
ffffffff814b06b8 t proc_vmalloc_init
ffffffff814b06da T vm_area_add_early
ffffffff814b072a T vm_area_register_early
ffffffff814b0777 T vmalloc_init
ffffffff814b0828 t __alloc_memory_core_early
ffffffff814b0887 t __free_pages_memory
ffffffff814b0963 t ___alloc_bootmem_nopanic
ffffffff814b09e1 T free_bootmem_late
ffffffff814b0a2d T free_low_memory_core_early
ffffffff814b0b2e T free_all_bootmem_node
ffffffff814b0b31 T free_all_bootmem
ffffffff814b0b3b T free_bootmem_node
ffffffff814b0b46 T free_bootmem
ffffffff814b0b4b T __alloc_bootmem_nopanic
ffffffff814b0b54 T __alloc_bootmem
ffffffff814b0b87 T __alloc_bootmem_node
ffffffff814b0c2e T __alloc_bootmem_node_high
ffffffff814b0c33 T alloc_bootmem_section
ffffffff814b0c6b T __alloc_bootmem_node_nopanic
ffffffff814b0d02 T __alloc_bootmem_low
ffffffff814b0d36 T __alloc_bootmem_low_node
ffffffff814b0ddb t early_memblock
ffffffff814b0e00 t memblock_alloc_base_nid
ffffffff814b0e4c T memblock_alloc_nid
ffffffff814b0e52 T __memblock_alloc_base
ffffffff814b0e59 T memblock_alloc_base
ffffffff814b0e84 T memblock_alloc
ffffffff814b0e8b T memblock_alloc_try_nid
ffffffff814b0eb3 T memblock_phys_mem_size
ffffffff814b0ebb T memblock_enforce_memory_limit
ffffffff814b0f23 T memblock_is_reserved
ffffffff814b0f42 T memblock_allow_resize
ffffffff814b0f4d t max_swapfiles_check
ffffffff814b0f50 t procswaps_init
ffffffff814b0f6f t hugetlb_default_setup
ffffffff814b0f93 W alloc_bootmem_huge_page
ffffffff814b104a t hugetlb_hstate_alloc_pages
ffffffff814b108e t hugetlb_nrpages_setup
ffffffff814b1108 T hugetlb_add_hstate
ffffffff814b125a t hugetlb_init
ffffffff814b16e0 T numa_policy_init
ffffffff814b17f8 t sparse_early_usemaps_alloc_node
ffffffff814b1878 T memory_present
ffffffff814b196a T node_memmap_size_bytes
ffffffff814b19fa T sparse_init
ffffffff814b1c99 T sparse_mem_maps_populate_node
ffffffff814b1dc0 t noaliencache_setup
ffffffff814b1dd0 t set_up_list3s
ffffffff814b1e7d t slab_proc_init
ffffffff814b1ea1 t cpucache_init
ffffffff814b1ee5 t slab_max_order_setup
ffffffff814b1f30 t init_list.isra.44
ffffffff814b2033 T kmem_cache_init
ffffffff814b24d7 T kmem_cache_init_late
ffffffff814b2542 t setup_transparent_hugepage
ffffffff814b25cf t hugepage_init
ffffffff814b273d t memory_failure_init
ffffffff814b27d8 t init_cleancache
ffffffff814b27db T files_init
ffffffff814b2849 T chrdev_init
ffffffff814b2871 t init_pipe_fs
ffffffff814b28b3 t fcntl_init
ffffffff814b28da t set_dhash_entries
ffffffff814b2907 T vfs_caches_init_early
ffffffff814b2982 T vfs_caches_init
ffffffff814b2a88 t set_ihash_entries
ffffffff814b2ab5 T inode_init_early
ffffffff814b2b2c T inode_init
ffffffff814b2bc6 T files_defer_init
ffffffff814b2c3b t proc_filesystems_init
ffffffff814b2c5a T get_filesystem_list
ffffffff814b2cd1 T mnt_init
ffffffff814b2e4c T buffer_init
ffffffff814b2e98 t init_bio
ffffffff814b2f75 T bdev_cache_init
ffffffff814b2ff2 t dio_init
ffffffff814b301c t fsnotify_init
ffffffff814b303f t fsnotify_notification_init
ffffffff814b30c9 t fsnotify_mark_init
ffffffff814b3106 t dnotify_init
ffffffff814b317e t inotify_user_setup
ffffffff814b31eb t fanotify_user_setup
ffffffff814b323a t eventpoll_init
ffffffff814b3311 t anon_inode_init
ffffffff814b336f t aio_setup
ffffffff814b33e8 t filelock_init
ffffffff814b340f t proc_locks_init
ffffffff814b342e t init_sys32_ioctl_cmp
ffffffff814b343c t init_sys32_ioctl
ffffffff814b3461 t init_script_binfmt
ffffffff814b3474 t init_elf_binfmt
ffffffff814b3488 t init_compat_elf_binfmt
ffffffff814b349b T proc_init_inodecache
ffffffff814b34c4 T proc_root_init
ffffffff814b3568 T proc_tty_init
ffffffff814b35e4 t proc_cmdline_init
ffffffff814b3603 t proc_consoles_init
ffffffff814b3622 t proc_cpuinfo_init
ffffffff814b3641 t proc_devices_init
ffffffff814b3660 t proc_interrupts_init
ffffffff814b367f t proc_loadavg_init
ffffffff814b369f t proc_meminfo_init
ffffffff814b36be t proc_stat_init
ffffffff814b36dd t proc_uptime_init
ffffffff814b36fc t proc_version_init
ffffffff814b371b t proc_softirqs_init
ffffffff814b373a T proc_sys_init
ffffffff814b3767 T proc_net_init
ffffffff814b378a t proc_kcore_init
ffffffff814b3832 t proc_kmsg_init
ffffffff814b3854 t proc_page_init
ffffffff814b3893 T sysfs_inode_init
ffffffff814b389f T sysfs_init
ffffffff814b394d T configfs_inode_init
ffffffff814b3959 t configfs_init
ffffffff814b39f9 t init_devpts_fs
ffffffff814b3a56 t init_ramfs_fs
ffffffff814b3a62 T init_rootfs
ffffffff814b3a9f t init_hugetlbfs_fs
ffffffff814b3b30 t init_pstore_fs
ffffffff814b3b3c t ipc_init
ffffffff814b3b5c T ipc_init_proc_interface
ffffffff814b3bcf T msg_init
ffffffff814b3c11 T sem_init
ffffffff814b3c3b t ipc_ns_init
ffffffff814b3c4e T shm_init
ffffffff814b3c6d t ipc_sysctl_init
ffffffff814b3c7e t init_mqueue_fs
ffffffff814b3d15 t init_mmap_min_addr
ffffffff814b3d26 t crypto_wq_init
ffffffff814b3d56 t crypto_algapi_init
ffffffff814b3d60 T crypto_init_proc
ffffffff814b3d7a t skcipher_module_init
ffffffff814b3dae t chainiv_module_init
ffffffff814b3dba t eseqiv_module_init
ffffffff814b3dc6 t cryptomgr_init
ffffffff814b3dd2 t aes_init
ffffffff814b3dde t crc32c_mod_init
ffffffff814b3dea t krng_mod_init
ffffffff814b3df6 t elevator_setup
ffffffff814b3e12 T blk_dev_init
ffffffff814b3e8d t blk_settings_init
ffffffff814b3eb2 t blk_ioc_init
ffffffff814b3ed9 t blk_softirq_init
ffffffff814b3f41 t blk_iopoll_setup
ffffffff814b3fab t proc_genhd_init
ffffffff814b3fe4 t genhd_device_init
ffffffff814b4055 T printk_all_partitions
ffffffff814b428b t blk_scsi_ioctl_init
ffffffff814b450f t bsg_init
ffffffff814b463a t init_cgroup_blkio
ffffffff814b4646 t noop_init
ffffffff814b4652 t cfq_init
ffffffff814b46e8 t get_bits
ffffffff814b47bd t get_next_block
ffffffff814b4e31 t nofill
ffffffff814b4e35 T bunzip2
ffffffff814b52a6 t nofill
ffffffff814b52aa T gunzip
ffffffff814b559f t nofill
ffffffff814b55a3 t rc_read
ffffffff814b55d8 t rc_do_normalize
ffffffff814b560a t rc_get_bit
ffffffff814b5694 T unlzma
ffffffff814b65b9 T parse_header
ffffffff814b666c T unlzo
ffffffff814b6ac8 T unxz
ffffffff814b6d3e T idr_init_cache
ffffffff814b6d65 t kobject_uevent_init
ffffffff814b6d82 T prio_tree_init
ffffffff814b6db5 T radix_tree_init
ffffffff814b6e8b t random32_init
ffffffff814b6f58 t random32_reseed
ffffffff814b6ffa t percpu_counter_startup
ffffffff814b7010 t setup_io_tlb_npages
ffffffff814b707b T swiotlb_init_with_tbl
ffffffff814b7147 T swiotlb_init_with_default_size
ffffffff814b71b6 T swiotlb_init
ffffffff814b71c2 T swiotlb_free
ffffffff814b7311 t pcibus_class_init
ffffffff814b7324 t pci_sort_bf_cmp
ffffffff814b737a T pci_sort_breadthfirst
ffffffff814b738d t pci_resource_alignment_sysfs_init
ffffffff814b73a0 T pci_register_set_vga_state
ffffffff814b73a8 t pci_setup
ffffffff814b76b2 t pci_driver_init
ffffffff814b76c5 t pci_sysfs_init
ffffffff814b7713 t pci_proc_init
ffffffff814b7773 t quirk_eisa_bridge
ffffffff814b777b t quirk_tc86c001_ide
ffffffff814b779b t quirk_ide_samemode
ffffffff814b7808 t quirk_disable_all_msi
ffffffff814b7827 t asus_hides_smbus_hostbridge
ffffffff814b7a59 t pci_apply_final_quirks
ffffffff814b7b58 t quirk_ioapic_rmw
ffffffff814b7b76 t quirk_amd_8131_mmrbc
ffffffff814b7baf t quirk_alimagik
ffffffff814b7bd7 t quirk_alder_ioapic
ffffffff814b7c61 t pcie_aspm_disable
ffffffff814b7cce t pciehp_setup
ffffffff814b7cf2 t dmi_pcie_pme_disable_msi
ffffffff814b7d10 t pcie_portdrv_init
ffffffff814b7d7d t pcie_port_setup
ffffffff814b7df4 t aer_service_init
ffffffff814b7e1b t pcie_pme_service_init
ffffffff814b7e27 t pcie_pme_setup
ffffffff814b7e4b t ioapic_init
ffffffff814b7e60 t pci_bus_get_depth
ffffffff814b7e98 T pci_realloc_get_opt
ffffffff814b7ee1 T pci_assign_unassigned_resources
ffffffff814b8106 t acpi_pci_init
ffffffff814b815f t video_setup
ffffffff814b81d5 t fbmem_init
ffffffff814b826b t text_mode
ffffffff814b827b t no_scroll
ffffffff814b828f t fb_console_init
ffffffff814b83a6 t fb_console_setup
ffffffff814b85df t vesafb_init
ffffffff814b8830 t vesafb_probe
ffffffff814b8f81 t acpi_parse_apic_instance
ffffffff814b8faf T acpi_table_parse_entries
ffffffff814b90ec T acpi_table_parse_madt
ffffffff814b9105 T acpi_table_parse
ffffffff814b9182 T acpi_table_init
ffffffff814b921b t dmi_enable_osi_linux
ffffffff814b922d t dmi_disable_osi_win7
ffffffff814b9250 t dmi_disable_osi_vista
ffffffff814b928d T acpi_blacklisted
ffffffff814b93eb t acpi_serialize_setup
ffffffff814b940a t acpi_request_region
ffffffff814b9458 t acpi_reserve_resources
ffffffff814b9541 t acpi_os_name_setup
ffffffff814b9599 t acpi_enforce_resources_setup
ffffffff814b9612 T acpi_os_get_root_pointer
ffffffff814b9633 T early_acpi_os_unmap_memory
ffffffff814b9642 T acpi_osi_setup
ffffffff814b96e4 t set_osi_linux
ffffffff814b971d t osi_setup
ffffffff814b978f T acpi_dmi_osi_linux
ffffffff814b97ba T acpi_os_initialize
ffffffff814b97f1 T acpi_os_initialize1
ffffffff814b98e0 T acpi_wakeup_device_init
ffffffff814b994a T acpi_sleep_init
ffffffff814b9a34 t acpi_init
ffffffff814b9ce5 T acpi_early_init
ffffffff814b9dda T init_acpi_device_notify
ffffffff814b9e1b T acpi_scan_init
ffffffff814b9eee t set_no_mwait
ffffffff814b9f10 t early_init_pdc
ffffffff814b9fba T acpi_early_processor_set_pdc
ffffffff814ba00b T acpi_boot_ec_enable
ffffffff814ba044 T acpi_ec_ecdt_probe
ffffffff814ba214 T acpi_ec_init
ffffffff814ba22d t acpi_pci_root_init
ffffffff814ba257 t acpi_irq_nobalance_set
ffffffff814ba267 t acpi_irq_balance_set
ffffffff814ba277 t acpi_irq_penalty_update
ffffffff814ba2e0 t acpi_irq_pci
ffffffff814ba2e4 t acpi_irq_isa
ffffffff814ba2eb t acpi_pci_link_init
ffffffff814ba327 t irqrouter_init_ops
ffffffff814ba34b T acpi_irq_penalty_init
ffffffff814ba3c5 T acpi_power_init
ffffffff814ba3e6 t acpi_event_init
ffffffff814ba462 T acpi_sysfs_init
ffffffff814ba5a7 t acpi_parse_srat
ffffffff814ba5bd t acpi_table_print_srat_entry
ffffffff814ba5de t acpi_parse_slit
ffffffff814ba647 t acpi_parse_memory_affinity
ffffffff814ba666 t acpi_parse_processor_affinity
ffffffff814ba696 t acpi_parse_x2apic_affinity
ffffffff814ba6b5 T acpi_numa_init
ffffffff814ba75e T acpi_tb_parse_root_table
ffffffff814baa12 t acpi_no_auto_ssdt_setup
ffffffff814baa32 T acpi_initialize_tables
ffffffff814baa8a T acpi_initialize_subsystem
ffffffff814bab2c t acpi_hed_init
ffffffff814bab4f t hest_parse_ghes_count
ffffffff814bab5a t setup_hest_disable
ffffffff814bab64 t hest_parse_ghes
ffffffff814bac32 T acpi_hest_init
ffffffff814bad4c t setup_erst_disable
ffffffff814bad59 t erst_init
ffffffff814baffd t ghes_init
ffffffff814bb165 t pnp_init
ffffffff814bb178 t pnp_setup_reserve_mem
ffffffff814bb1af t pnp_setup_reserve_io
ffffffff814bb1e6 t pnp_setup_reserve_dma
ffffffff814bb21d t pnp_setup_reserve_irq
ffffffff814bb254 t pnp_system_init
ffffffff814bb260 t pnpacpi_setup
ffffffff814bb28c t acpi_pnp_find_device
ffffffff814bb2c8 t acpi_pnp_match
ffffffff814bb316 t ispnpidacpi
ffffffff814bb374 t pnpacpi_add_device_handler
ffffffff814bb587 t pnpacpi_init
ffffffff814bb610 t pnpacpi_option_resource
ffffffff814bb971 T pnpacpi_parse_resource_option_data
ffffffff814bb9d1 T xen_init_IRQ
ffffffff814bbb29 t balloon_init
ffffffff814bbc84 T xenbus_ring_ops_init
ffffffff814bbca5 t xenbus_init
ffffffff814bbf66 t xenbus_probe_initcall
ffffffff814bbf9c t xenbus_probe_backend_init
ffffffff814bbfc6 t xenbus_init
ffffffff814bbff9 t xenbus_backend_init
ffffffff814bc039 t boot_wait_for_devices
ffffffff814bc066 t xenbus_probe_frontend_init
ffffffff814bc090 t setup_vcpu_hotplug_event
ffffffff814bc0af t balloon_init
ffffffff814bc1a6 t xen_selfballooning_setup
ffffffff814bc1b3 t xen_selfballoon_init
ffffffff814bc22f t hypervisor_subsys_init
ffffffff814bc24f t hyper_sysfs_init
ffffffff814bc351 t platform_pci_module_init
ffffffff814bc366 t enable_tmem
ffffffff814bc373 t no_cleancache
ffffffff814bc380 t xen_tmem_init
ffffffff814bc3dd T xen_swiotlb_init
ffffffff814bc5b2 t register_xen_pci_notifier
ffffffff814bc5de t tty_class_init
ffffffff814bc60f T console_init
ffffffff814bc62f T tty_init
ffffffff814bc766 t pty_init
ffffffff814bc9c4 T vcs_init
ffffffff814bca5a T kbd_init
ffffffff814bcb0f T console_map_init
ffffffff814bcb49 t vtconsole_class_init
ffffffff814bcc20 t con_init
ffffffff814bce6a T vty_init
ffffffff814bcfe4 t hvc_console_setup
ffffffff814bd008 t hvc_console_init
ffffffff814bd019 t xen_hvc_init
ffffffff814bd23f t chr_dev_init
ffffffff814bd2fc t random_int_secret_init
ffffffff814bd312 t misc_init
ffffffff814bd3c9 t nvram_init
ffffffff814bd441 t vga_arb_device_init
ffffffff814bd52b T devices_init
ffffffff814bd5d5 T buses_init
ffffffff814bd629 T classes_init
ffffffff814bd64c T early_platform_driver_register
ffffffff814bd7aa T early_platform_add_devices
ffffffff814bd7ee T early_platform_driver_register_all
ffffffff814bd7f3 T early_platform_driver_probe
ffffffff814bda82 T early_platform_cleanup
ffffffff814bdb28 T platform_bus_init
ffffffff814bdb71 T cpu_dev_init
ffffffff814bdba4 T firmware_init
ffffffff814bdbc5 T driver_init
ffffffff814bdbef t mount_param
ffffffff814bdc06 T devtmpfs_init
ffffffff814bdcd3 t wakeup_sources_debugfs_init
ffffffff814bdce1 t firmware_class_init
ffffffff814bdcf4 t register_node_type
ffffffff814bdd07 T hypervisor_init
ffffffff814bdd28 t init_scsi
ffffffff814bddaa T scsi_init_queue
ffffffff814bdecf T scsi_init_devinfo
ffffffff814bdf5d T scsi_init_sysctl
ffffffff814bdf7c T scsi_init_procfs
ffffffff814bdfd4 t init_sd
ffffffff814be0fa t ata_init
ffffffff814be4a6 T libata_transport_init
ffffffff814be507 T ata_sff_init
ffffffff814be537 t probe_list2
ffffffff814be57f t net_olddevs_init
ffffffff814be612 t serio_init
ffffffff814be640 t i8042_aux_test_irq
ffffffff814be6ef t i8042_create_aux_port
ffffffff814be7f2 t i8042_free_aux_ports
ffffffff814be818 t i8042_toggle_aux
ffffffff814be88c t i8042_init
ffffffff814bec4f t i8042_probe
ffffffff814bf27e t input_init
ffffffff814bf380 t mousedev_init
ffffffff814bf3d7 t atkbd_setup_forced_release
ffffffff814bf3f6 t atkbd_setup_scancode_fixup
ffffffff814bf40a t atkbd_init
ffffffff814bf42d t psmouse_init
ffffffff814bf4a9 T synaptics_module_init
ffffffff814bf4d7 T lifebook_module_init
ffffffff814bf4ef t rtc_hctosys
ffffffff814bf608 t rtc_init
ffffffff814bf66f T rtc_dev_init
ffffffff814bf6a6 T rtc_sysfs_init
ffffffff814bf6af t cmos_init
ffffffff814bf71a t cmos_platform_probe
ffffffff814bf764 t init_cpufreq_transition_notifier_list
ffffffff814bf77c t cpufreq_core_init
ffffffff814bf833 t cpufreq_stats_init
ffffffff814bf8c3 t cpufreq_gov_performance_init
ffffffff814bf8cf t cpuidle_init
ffffffff814bf905 t cpuidle_sysfs_setup
ffffffff814bf915 t init_ladder
ffffffff814bf921 t print_filtered
ffffffff814bf970 t dmi_string_nosave
ffffffff814bf9f4 t dmi_string
ffffffff814bfa61 t dmi_save_ident
ffffffff814bfa8b t dmi_save_one_device
ffffffff814bfb17 t dmi_decode
ffffffff814bff9a T dmi_scan_machine
ffffffff814c01d6 T find_ibft_region
ffffffff814c02d5 t memmap_init
ffffffff814c0303 T firmware_map_add_early
ffffffff814c039b t acpi_pm_good_setup
ffffffff814c03ab t parse_pmtmr
ffffffff814c0403 t init_acpi_pm_clocksource
ffffffff814c04d6 T clockevent_i8253_init
ffffffff814c0524 t hid_init
ffffffff814c0567 t alloc_passthrough_domain.part.23
ffffffff814c058c T amd_iommu_uninit_devices
ffffffff814c05d3 T amd_iommu_init_devices
ffffffff814c063a T amd_iommu_init_api
ffffffff814c064d T amd_iommu_init_dma_ops
ffffffff814c08f0 T amd_iommu_init_passthrough
ffffffff814c0986 t set_device_exclusion_range
ffffffff814c09cd t early_amd_iommu_detect
ffffffff814c09d0 t parse_amd_iommu_dump
ffffffff814c09dd t parse_amd_iommu_options
ffffffff814c0a4f T amd_iommu_detect
ffffffff814c0ac2 t init_memory_definitions
ffffffff814c0cdf t free_on_init_error
ffffffff814c0e70 t find_last_devid_acpi
ffffffff814c0f8f t set_dev_entry_from_acpi.isra.23
ffffffff814c1096 t init_iommu_all
ffffffff814c19cd T amd_iommu_init_hardware
ffffffff814c1c23 t amd_iommu_init
ffffffff814c1ca7 t pcibios_allocate_bus_resources
ffffffff814c1d2b t pcibios_assign_resources
ffffffff814c1e1e t pcibios_allocate_resources
ffffffff814c2063 T pcibios_resource_survey
ffffffff814c208e t pci_arch_init
ffffffff814c20ed T pci_mmcfg_arch_free
ffffffff814c2128 T pci_mmcfg_arch_init
ffffffff814c21a5 t pci_sanity_check.isra.0
ffffffff814c225f T pci_direct_init
ffffffff814c22c0 T pci_direct_probe
ffffffff814c2493 t is_mmconf_reserved
ffffffff814c25a5 t pci_mmconfig_add
ffffffff814c26c7 t pci_mmcfg_intel_945
ffffffff814c2766 t pci_mmcfg_e7520
ffffffff814c27d0 t free_all_mmcfg
ffffffff814c2839 t pci_mmcfg_amd_fam10h
ffffffff814c28ee t is_acpi_reserved
ffffffff814c294d t find_mboard_resource
ffffffff814c2977 t pci_mmcfg_late_insert_resources
ffffffff814c29c9 t pci_mmcfg_nvidia_mcp55
ffffffff814c2ac1 t __pci_mmcfg_init
ffffffff814c2cc2 t check_mcfg_resource
ffffffff814c2d3f t pci_parse_mcfg
ffffffff814c2e75 T pci_mmcfg_early_init
ffffffff814c2e7f T pci_mmcfg_late_init
ffffffff814c2e86 T pci_xen_init
ffffffff814c2efc T pci_xen_hvm_init
ffffffff814c2f32 T pci_xen_initial_domain
ffffffff814c307b t set_use_crs
ffffffff814c3085 t set_nouse_crs
ffffffff814c308f T pci_acpi_crs_quirks
ffffffff814c3129 T pci_acpi_init
ffffffff814c31b5 T pci_legacy_init
ffffffff814c31f0 T pci_subsys_init
ffffffff814c3233 t via_router_probe
ffffffff814c32c6 t vlsi_router_probe
ffffffff814c32ed t serverworks_router_probe
ffffffff814c3314 t sis_router_probe
ffffffff814c3336 t cyrix_router_probe
ffffffff814c335c t opti_router_probe
ffffffff814c3383 t ite_router_probe
ffffffff814c33aa t ali_router_probe
ffffffff814c33d8 t amd_router_probe
ffffffff814c341f t pico_router_probe
ffffffff814c346b t pirq_peer_trick
ffffffff814c3515 t fix_acer_tm360_irqrouting
ffffffff814c353f t fix_broken_hp_bios_irq9
ffffffff814c356a t intel_router_probe
ffffffff814c3787 T pcibios_fixup_irqs
ffffffff814c384d T pcibios_irq_init
ffffffff814c3ab4 T dmi_check_skip_isa_align
ffffffff814c3ac0 T dmi_check_pciprobe
ffffffff814c3acc T pcibios_set_cache_line_size
ffffffff814c3b0d T pcibios_init
ffffffff814c3b45 t early_fill_mp_bus_info
ffffffff814c42db t amd_postcore_init
ffffffff814c440f t sock_init
ffffffff814c4481 t proto_init
ffffffff814c448d t net_inuse_init
ffffffff814c44b2 T sk_init
ffffffff814c4507 T skb_init
ffffffff814c454e t net_ns_init
ffffffff814c4635 t net_secret_init
ffffffff814c464b t sysctl_core_init
ffffffff814c467f t initialize_hashrnd
ffffffff814c4695 t net_dev_init
ffffffff814c48c1 T netdev_boot_setup
ffffffff814c49fb T dev_mcast_init
ffffffff814c4a07 T dst_init
ffffffff814c4a13 t neigh_init
ffffffff814c4a90 T rtnetlink_init
ffffffff814c4bab t sock_diag_init
ffffffff814c4bde t flow_cache_init_global
ffffffff814c4d6e t netlink_proto_init
ffffffff814c4f11 t genl_init
ffffffff814c4f9f t set_rhash_entries
ffffffff814c4fcc T ip_rt_init
ffffffff814c51e1 T ip_static_sysctl_init
ffffffff814c51f4 T inet_initpeers
ffffffff814c52a4 T ipfrag_init
ffffffff814c5329 T ip_init
ffffffff814c5337 t set_thash_entries
ffffffff814c5364 T tcp_init
ffffffff814c56b4 T tcp4_proc_init
ffffffff814c56c0 T tcp_v4_init
ffffffff814c56ed t tcp_congestion_default
ffffffff814c56f9 T raw_proc_init
ffffffff814c5705 T raw_proc_exit
ffffffff814c5711 t set_uhash_entries
ffffffff814c5756 T udp4_proc_init
ffffffff814c5762 T udp_table_init
ffffffff814c5862 T udp_init
ffffffff814c58c6 T udplite4_register
ffffffff814c595a T arp_init
ffffffff814c59a5 T icmp_init
ffffffff814c59b1 T devinet_init
ffffffff814c5a52 t inet_init
ffffffff814c5ca8 T igmp_mc_proc_init
ffffffff814c5cb4 T ip_fib_init
ffffffff814c5d30 T fib_trie_init
ffffffff814c5d79 T ping_proc_init
ffffffff814c5d85 T ping_init
ffffffff814c5dab t sysctl_ipv4_init
ffffffff814c5e28 T ip_misc_proc_init
ffffffff814c5e34 t init_syncookies
ffffffff814c5e4c t cubictcp_register
ffffffff814c5eb0 T xfrm4_init
ffffffff814c5f0a T xfrm4_state_init
ffffffff814c5f16 T xfrm_init
ffffffff814c5f29 T xfrm_input_init
ffffffff814c5f50 t af_unix_init
ffffffff814c5f9b t net_sysctl_init
ffffffff814c5fe5 t save_mr
ffffffff814c6022 t phys_pte_init
ffffffff814c6156 t phys_pmd_init
ffffffff814c642b t phys_pud_init
ffffffff814c670e T kernel_physical_mapping_init
ffffffff814c68f9 T vmemmap_populate
ffffffff814c6b3b T vmemmap_populate_print_last
ffffffff814c6b9b t adjust_zone_range_for_zone_movable.isra.44
ffffffff814c6be3 T init_currently_empty_zone
ffffffff814c6cdf T __early_pfn_to_nid
ffffffff814c6d4d T early_pfn_to_nid
ffffffff814c6d5e T early_pfn_in_nid
ffffffff814c6d79 T get_pfn_range_for_nid
ffffffff814c6e15 t zone_spanned_pages_in_node.isra.60
ffffffff814c6e9a T __absent_pages_in_range
ffffffff814c6f4d t zone_absent_pages_in_node.isra.61
ffffffff814c6fd8 T init_per_zone_wmark_min
ffffffff814c7059 T memmap_init_zone
ffffffff814c723d T free_area_init_node
ffffffff814c7558 T __free_pages_bootmem
ffffffff814c75a3 T mminit_verify_page_links
ffffffff814c75e8 t memblock_search.isra.4
ffffffff814c7619 t memblock_dump.isra.5
ffffffff814c770b t memblock_insert_region
ffffffff814c7767 t memblock_merge_regions.isra.7
ffffffff814c77e5 T get_allocated_memblock_reserved_regions_info
ffffffff814c7819 T __next_free_mem_range
ffffffff814c7972 T __next_free_mem_range_rev
ffffffff814c7aa1 T memblock_find_in_range_node
ffffffff814c7b92 T memblock_find_in_range
ffffffff814c7b9d t memblock_double_array
ffffffff814c7df7 t memblock_isolate_range
ffffffff814c7f3b t __memblock_remove
ffffffff814c8006 T memblock_free
ffffffff814c8047 T memblock_remove
ffffffff814c8059 t memblock_add_region
ffffffff814c81c7 T memblock_reserve
ffffffff814c820d T memblock_add
ffffffff814c8224 T memblock_add_node
ffffffff814c8238 T __next_mem_pfn_range
ffffffff814c82ba T memblock_set_node
ffffffff814c8323 T memblock_start_of_DRAM
ffffffff814c832e T memblock_end_of_DRAM
ffffffff814c834d T memblock_is_memory
ffffffff814c836c T memblock_is_region_memory
ffffffff814c83cc T memblock_is_region_reserved
ffffffff814c842b T memblock_set_current_limit
ffffffff814c8433 T __memblock_dump_all
ffffffff814c8493 T mminit_validate_memmodel_limits
ffffffff814c8582 T vmemmap_alloc_block
ffffffff814c865e T vmemmap_alloc_block_buf
ffffffff814c8692 T vmemmap_verify
ffffffff814c86ee T vmemmap_pte_populate
ffffffff814c8798 T vmemmap_pmd_populate
ffffffff814c8836 T vmemmap_pud_populate
ffffffff814c88d2 T vmemmap_pgd_populate
ffffffff814c894f T vmemmap_populate_basepages
ffffffff814c89e0 T sparse_mem_map_populate
ffffffff814c8a14 T firmware_map_add_hotplug
ffffffff814c8aa2 T _einittext
ffffffff814c8ac0 T boot_command_line
ffffffff814c92c0 T late_time_init
ffffffff814c92c8 t done.37417
ffffffff814c92e0 t tmp_cmdline.37418
ffffffff814c9ae0 t kthreadd_done
ffffffff814c9b00 t initcall_level_names
ffffffff814c9b40 t initcall_levels
ffffffff814c9ba0 T rd_doload
ffffffff814c9bc0 t saved_root_name
ffffffff814c9c00 t root_mount_data
ffffffff814c9c08 t root_fs_names
ffffffff814c9c10 t root_delay
ffffffff814c9c18 t root_device_name
ffffffff814c9c20 t mount_initrd
ffffffff814c9c40 t do_retain_initrd
ffffffff814c9c48 t header_buf
ffffffff814c9c50 t symlink_buf
ffffffff814c9c58 t name_buf
ffffffff814c9c60 t state
ffffffff814c9c68 t this_header
ffffffff814c9c70 t message
ffffffff814c9c78 t count
ffffffff814c9c80 t victim
ffffffff814c9ca0 t actions
ffffffff814c9ce0 t msg_buf.27339
ffffffff814c9d20 t dir_list
ffffffff814c9d30 t collected
ffffffff814c9d38 t name_len
ffffffff814c9d40 t body_len
ffffffff814c9d48 t gid
ffffffff814c9d4c t uid
ffffffff814c9d50 t mtime
ffffffff814c9d58 t next_state
ffffffff814c9d5c t wfd
ffffffff814c9d60 t vcollected
ffffffff814c9d80 t head
ffffffff814c9e80 t mode
ffffffff814c9e88 t nlink
ffffffff814c9e90 t rdev
ffffffff814c9e98 t ino
ffffffff814c9ea0 t minor
ffffffff814c9ea8 t major
ffffffff814c9eb0 t next_header
ffffffff814c9eb8 t collect
ffffffff814c9ec0 t remains
ffffffff814c9ee0 T xen_extra_mem
ffffffff814ca6e0 t map.22252
ffffffff814cb0e0 T boot_params
ffffffff814cc0e0 t _brk_start
ffffffff814cc100 t command_line
ffffffff814cc900 T x86_init
ffffffff814cc9e0 T sbf_port
ffffffff814cca00 t userdef
ffffffff814cca20 t change_point_list.26921
ffffffff814cda20 t change_point.26922
ffffffff814ce220 t overlap_list.26923
ffffffff814ce620 t new_bios.26924
ffffffff814cf020 t e820_res
ffffffff814cf040 t io_delay_override
ffffffff814cf060 t io_delay_0xed_port_dmi_table
ffffffff814cf880 t __quirk.26039
ffffffff814cf890 t __quirk.26043
ffffffff814cf8a0 t __quirk.26052
ffffffff814cf8b0 t __quirk.26060
ffffffff814cf8c0 T changed_by_mtrr_cleanup
ffffffff814cf8c4 t last_fixed_end
ffffffff814cf8c8 t last_fixed_start
ffffffff814cf8cc t last_fixed_type
ffffffff814cf8e0 t enable_mtrr_cleanup
ffffffff814cf900 t range_state
ffffffff814d1100 t range
ffffffff814d2100 t nr_range
ffffffff814d2108 t range_sums
ffffffff814d2110 t mtrr_chunk_size
ffffffff814d2118 t mtrr_gran_size
ffffffff814d2120 t result
ffffffff814d3220 t min_loss_pfn
ffffffff814d3a20 t debug_print
ffffffff814d3a28 t nr_mtrr_spare_reg
ffffffff814d3a40 T acpi_fix_pin2_polarity
ffffffff814d3a44 T acpi_use_timer_override
ffffffff814d3a48 T acpi_skip_timer_override
ffffffff814d3a4c T acpi_sci_override_gsi
ffffffff814d3a50 T acpi_sci_flags
ffffffff814d3a60 t acpi_dmi_table
ffffffff814d43c8 t acpi_force
ffffffff814d43d0 t acpi_lapic_addr
ffffffff814d43e0 t acpi_dmi_table_late
ffffffff814d4c00 t pci_reboot_dmi_table
ffffffff814d5980 t early_qrk
ffffffff814d5a40 t setup_possible_cpus
ffffffff814d5a60 t alloc_mptable
ffffffff814d5a68 t mpc_new_length
ffffffff814d5a70 t mpc_new_phys
ffffffff814d5a80 t irq_used
ffffffff814d5e80 t m_spare
ffffffff814d5f20 T x86_bios_cpu_apicid_early_map
ffffffff814d5f30 T x86_cpu_to_apicid_early_map
ffffffff814d5f40 t disable_apic_timer
ffffffff814d5f44 t lapic_cal_loops
ffffffff814d5f48 t lapic_cal_t1
ffffffff814d5f50 t lapic_cal_t2
ffffffff814d5f58 t lapic_cal_tsc2
ffffffff814d5f60 t lapic_cal_tsc1
ffffffff814d5f68 t lapic_cal_pm2
ffffffff814d5f70 t lapic_cal_pm1
ffffffff814d5f78 t lapic_cal_j2
ffffffff814d5f80 t lapic_cal_j1
ffffffff814d5f88 t apic_calibrate_pmtmr
ffffffff814d5f8c T timer_through_8259
ffffffff814d5f90 T no_timer_check
ffffffff814d5f94 t show_lapic
ffffffff814d5f98 t disable_timer_pin_1
ffffffff814d5f9c t early_console_initialized
ffffffff814d5fa0 T fix_aperture
ffffffff814d5fa4 T fallback_aper_force
ffffffff814d5fa8 T fallback_aper_order
ffffffff814d5fac T gart_iommu_aperture_allowed
ffffffff814d5fb0 T gart_iommu_aperture_disabled
ffffffff814d5fb4 t gart_fix_e820
ffffffff814d5fb8 t printed_gart_size_msg
ffffffff814d5fc0 T pgt_buf_start
ffffffff814d5fe0 t early_ioremap_debug
ffffffff814d6000 t prev_map
ffffffff814d6020 t slot_virt
ffffffff814d6040 t after_paging_init
ffffffff814d6060 t prev_size
ffffffff814d6080 T x86_cpu_to_node_map_early_map
ffffffff814d60a0 T numa_nodes_parsed
ffffffff814d60c0 T numa_off
ffffffff814d60e0 t numa_meminfo
ffffffff814d90e8 t nodeids
ffffffff814d90f0 T acpi_numa
ffffffff814d90f4 T vdso32_int80_start
ffffffff814d9790 T vdso32_int80_end
ffffffff814d9790 T vdso32_syscall_start
ffffffff814d9e38 T vdso32_syscall_end
ffffffff814d9e38 T vdso32_sysenter_start
ffffffff814da4b8 t new_log_buf_len
ffffffff814da4b8 T vdso32_sysenter_end
ffffffff814da4c0 t required_kernelcore
ffffffff814da4c8 t required_movablecore
ffffffff814da4e0 T pcpu_chosen_fc
ffffffff814da4f0 T pcpu_fc_names
ffffffff814da520 t cpus_buf.20616
ffffffff814db520 t smap.20617
ffffffff814db720 t dmap.20618
ffffffff814db920 t group_map.20665
ffffffff814db940 t group_cnt.20666
ffffffff814db960 t vm_init_off.24867
ffffffff814db970 T huge_boot_pages
ffffffff814db980 t default_hstate_size
ffffffff814db988 t default_hstate_max_huge_pages
ffffffff814db990 t parsed_hstate
ffffffff814db9a0 t initkmem_list3
ffffffff814ef1a0 t slab_max_order_set
ffffffff814ef1c0 t initarray_cache
ffffffff814ef1e0 t cache_names
ffffffff814ef320 t dhash_entries
ffffffff814ef328 t ihash_entries
ffffffff814ef340 t pcie_portdrv_dmi_table
ffffffff814ef5f0 t pci_realloc_enable
ffffffff814ef600 t logo_linux_mono_data
ffffffff814ef920 t logo_linux_vga16_data
ffffffff814f05a0 t logo_linux_clut224_clut
ffffffff814f07e0 t logo_linux_clut224_data
ffffffff814f20e0 t vesafb_fix
ffffffff814f2140 t vesafb_defined
ffffffff814f21e0 t vram_total
ffffffff814f21e4 t vram_remap
ffffffff814f21f0 t acpi_apic_instance
ffffffff814f2200 t initial_tables
ffffffff814f3200 t acpi_blacklist
ffffffff814f3320 t acpi_osi_dmi_table
ffffffff814f45f0 t osi_setup_entries
ffffffff814f4a00 t dsdt_dmi_table
ffffffff814f4cb0 t processor_idle_dmi_table
ffffffff814f4f60 t ec_dmi_table
ffffffff814f5cd0 T acpi_srat_revision
ffffffff814f5ce0 T pnpacpi_disabled
ffffffff814f5d00 t acpi_pnp_bus
ffffffff814f5d40 t excluded_id_list
ffffffff814f5dc0 t use_selfballooning
ffffffff814f5de0 t use_cleancache
ffffffff814f5e00 t tmem_cleancache_ops
ffffffff814f5e40 t early_platform_driver_list
ffffffff814f5e50 t early_platform_device_list
ffffffff814f5e60 t scsi_static_device_list
ffffffff814f7420 t ata_force_param_buf
ffffffff814f8420 t force_tbl.38377
ffffffff814f8ba0 t m68k_probes
ffffffff814f8bb0 t eisa_probes
ffffffff814f8bc0 t mca_probes
ffffffff814f8bd0 t isa_probes
ffffffff814f8be0 t parport_probes
ffffffff814f8c00 t i8042_aux_irq_delivered
ffffffff814f8c20 t i8042_irq_being_tested
ffffffff814f8c24 t amd_iommu_detected
ffffffff814f8c28 t amd_iommu_init_err
ffffffff814f8c2c t amd_iommu_disabled
ffffffff814f8c40 t pci_mmcfg_resources_inserted
ffffffff814f8c44 t known_bridge
ffffffff814f8c60 t pci_mmcfg_probes
ffffffff814f8cd8 t mcp55_checked
ffffffff814f8ce0 t pciirq_dmi_table
ffffffff814f9100 t pirq_routers
ffffffff814f91c0 t pirq_440gx.29225
ffffffff814f9220 t pci_probes
ffffffff814f9260 t rhash_entries
ffffffff814f9268 t thash_entries
ffffffff814f9270 t uhash_entries
ffffffff814f9278 T pgt_buf_top
ffffffff814f9280 T pgt_buf_end
ffffffff814f9288 t addr_end
ffffffff814f9290 t p_end
ffffffff814f9298 t node_start
ffffffff814f92a0 t p_start
ffffffff814f92a8 t addr_start
ffffffff814f92c0 t arch_zone_lowest_possible_pfn
ffffffff814f92e0 t arch_zone_highest_possible_pfn
ffffffff814f9300 t zone_movable_pfn
ffffffff814f9b00 t dma_reserve
ffffffff814f9b08 t nr_kernel_pages
ffffffff814f9b10 t nr_all_pages
ffffffff814f9b20 T memblock_debug
ffffffff814f9b40 T memblock
ffffffff814f9ba0 t memblock_memory_init_regions
ffffffff814fa7a0 t memblock_reserved_init_regions
ffffffff814fb3a0 t memblock_can_resize
ffffffff814fb3a4 t memblock_memory_in_slab
ffffffff814fb3a8 t memblock_reserved_in_slab
ffffffff814fb3ac t __setup_str_rdinit_setup
ffffffff814fb3b4 t __setup_str_init_setup
ffffffff814fb3ba t __setup_str_loglevel
ffffffff814fb3c3 t __setup_str_quiet_kernel
ffffffff814fb3c9 t __setup_str_debug_kernel
ffffffff814fb3cf t __setup_str_set_reset_devices
ffffffff814fb3dd t __setup_str_root_delay_setup
ffffffff814fb3e8 t __setup_str_fs_names_setup
ffffffff814fb3f4 t __setup_str_root_data_setup
ffffffff814fb3ff t __setup_str_rootwait_setup
ffffffff814fb408 t __setup_str_root_dev_setup
ffffffff814fb40e t __setup_str_readwrite
ffffffff814fb411 t __setup_str_readonly
ffffffff814fb414 t __setup_str_load_ramdisk
ffffffff814fb422 t __setup_str_no_initrd
ffffffff814fb42b t __setup_str_retain_initrd_param
ffffffff814fb439 t __setup_str_lpj_setup
ffffffff814fb440 t __setup_str_parse_xen_emul_unplug
ffffffff814fb460 t xen_smp_ops
ffffffff814fb4c0 T interrupt
ffffffff814fbbc0 t __setup_str_code_bytes_setup
ffffffff814fbbcc t __setup_str_kstack_setup
ffffffff814fbbd3 t __setup_str_setup_unknown_nmi_panic
ffffffff814fbbe5 t __setup_str_parse_reservelow
ffffffff814fbbf0 t __setup_str_control_va_addr_alignment
ffffffff814fbbfe t __setup_str_vsyscall_setup
ffffffff814fbc07 t __setup_str_parse_memmap_opt
ffffffff814fbc0e t __setup_str_parse_memopt
ffffffff814fbc12 t __setup_str_iommu_setup
ffffffff814fbc18 t __setup_str_setup_noreplace_paravirt
ffffffff814fbc2b t __setup_str_setup_noreplace_smp
ffffffff814fbc39 t __setup_str_debug_alt
ffffffff814fbc4b t __setup_str_bootonly
ffffffff814fbc58 t __setup_str_tsc_setup
ffffffff814fbc5d t __setup_str_notsc_setup
ffffffff814fbc63 t __setup_str_io_delay_param
ffffffff814fbc70 t ids.25619
ffffffff814fbc88 t __setup_str_idle_setup
ffffffff814fbca0 t __setup_str_setup_disablecpuid
ffffffff814fbcac t __setup_str_setup_noclflush
ffffffff814fbcb6 t __setup_str_setup_show_msr
ffffffff814fbcc0 t __setup_str_setup_disable_smep
ffffffff814fbcc7 t __setup_str_x86_xsaveopt_setup
ffffffff814fbcd2 t __setup_str_x86_xsave_setup
ffffffff814fbce0 t hypervisors
ffffffff814fbcf8 t __setup_str_x86_rdrand_setup
ffffffff814fbd20 t amd_hw_cache_event_ids
ffffffff814fbe80 t p4_hw_cache_event_ids
ffffffff814fbfe0 t core2_hw_cache_event_ids
ffffffff814fc140 t nehalem_hw_cache_event_ids
ffffffff814fc2a0 t nehalem_hw_cache_extra_regs
ffffffff814fc400 t atom_hw_cache_event_ids
ffffffff814fc560 t westmere_hw_cache_event_ids
ffffffff814fc6c0 t snb_hw_cache_event_ids
ffffffff814fc820 t intel_arch_events_map
ffffffff814fc890 t __setup_str_mcheck_disable
ffffffff814fc896 t __setup_str_mcheck_enable
ffffffff814fc89a t __setup_str_disable_mtrr_trim_setup
ffffffff814fc8ac t __setup_str_parse_mtrr_spare_reg
ffffffff814fc8be t __setup_str_parse_mtrr_gran_size_opt
ffffffff814fc8cd t __setup_str_parse_mtrr_chunk_size_opt
ffffffff814fc8dd t __setup_str_mtrr_cleanup_debug_setup
ffffffff814fc8f0 t __setup_str_enable_mtrr_cleanup_setup
ffffffff814fc904 t __setup_str_disable_mtrr_cleanup_setup
ffffffff814fc919 t __setup_str_setup_acpi_sci
ffffffff814fc922 t __setup_str_parse_acpi_use_timer_override
ffffffff814fc93a t __setup_str_parse_acpi_skip_timer_override
ffffffff814fc953 t __setup_str_parse_pci
ffffffff814fc957 t __setup_str_parse_acpi
ffffffff814fc95c t __setup_str_reboot_setup
ffffffff814fc964 t __setup_str_nonmi_ipi_setup
ffffffff814fc96e t __setup_str__setup_possible_cpus
ffffffff814fc97c t __setup_str_parse_alloc_mptable_opt
ffffffff814fc98a t __setup_str_update_mptable_setup
ffffffff814fc999 t __setup_str_apic_set_verbosity
ffffffff814fc99e t __setup_str_parse_nolapic_timer
ffffffff814fc9ac t __setup_str_parse_disable_apic_timer
ffffffff814fc9b8 t __setup_str_parse_lapic_timer_c2_ok
ffffffff814fc9ca t __setup_str_setup_nolapic
ffffffff814fc9d2 t __setup_str_setup_disableapic
ffffffff814fc9de t __setup_str_setup_apicpmtimer
ffffffff814fc9ea t __setup_str_disable_timer_pin_setup
ffffffff814fc9fe t __setup_str_notimercheck
ffffffff814fca0d t __setup_str_setup_show_lapic
ffffffff814fca19 t __setup_str_parse_noapic
ffffffff814fca20 t bases.10458
ffffffff814fca28 t __setup_str_setup_early_printk
ffffffff814fca34 t __setup_str_disable_hpet
ffffffff814fca3b t __setup_str_hpet_setup
ffffffff814fca41 T amd_nb_bus_dev_ranges
ffffffff814fca4d t __setup_str_parse_gart_mem
ffffffff814fca60 t mmconf_dmi_table
ffffffff814fcd10 t __setup_str_nonx32_setup
ffffffff814fcd1a t __setup_str_parse_direct_gbpages_on
ffffffff814fcd22 t __setup_str_parse_direct_gbpages_off
ffffffff814fcd2c t __setup_str_early_ioremap_debug_setup
ffffffff814fcd40 t __setup_str_pat_debug_setup
ffffffff814fcd49 t __setup_str_nopat
ffffffff814fcd4f t __setup_str_setup_userpte
ffffffff814fcd57 t __setup_str_noexec_setup
ffffffff814fcd5e t __setup_str_setup_hugepagesz
ffffffff814fcd6a t __setup_str_numa_setup
ffffffff814fcd6f t __setup_str_vdso_setup
ffffffff814fcd75 t __setup_str_vdso_setup
ffffffff814fcd7d t __setup_str_coredump_filter_setup
ffffffff814fcd8e t __setup_str_oops_setup
ffffffff814fcd93 t __setup_str_keep_bootcon_setup
ffffffff814fcda0 t __setup_str_console_suspend_disable
ffffffff814fcdb3 t __setup_str_console_setup
ffffffff814fcdbc t __setup_str_ignore_loglevel_setup
ffffffff814fcdcc t __setup_str_log_buf_len_setup
ffffffff814fcdd8 t __setup_str_strict_iomem
ffffffff814fcddf t __setup_str_reserve_setup
ffffffff814fcde8 t __setup_str_file_caps_disable
ffffffff814fcdf5 t __setup_str_setup_print_fatal_signals
ffffffff814fce0a t __setup_str_setup_hrtimer_hres
ffffffff814fce13 t __setup_str_setup_relax_domain_level
ffffffff814fce27 t __setup_str_isolated_cpu_setup
ffffffff814fce31 t __setup_str_ntp_tick_adj_setup
ffffffff814fce3f t __setup_str_boot_override_clock
ffffffff814fce46 t __setup_str_boot_override_clocksource
ffffffff814fce53 t __setup_str_maxcpus
ffffffff814fce5b t __setup_str_nrcpus
ffffffff814fce63 t __setup_str_nosmp
ffffffff814fce69 t __setup_str_cgroup_disable
ffffffff814fce79 t __setup_str_audit_enable
ffffffff814fce80 t __setup_str_setup_forced_irqthreads
ffffffff814fce8b t __setup_str_irqpoll_setup
ffffffff814fce93 t __setup_str_irqfixup_setup
ffffffff814fce9c t __setup_str_noirqdebug_setup
ffffffff814fcea7 t __setup_str_delayacct_setup_disable
ffffffff814fceb3 t __setup_str_set_hashdist
ffffffff814fcebd t __setup_str_cmdline_parse_movablecore
ffffffff814fcec9 t __setup_str_cmdline_parse_kernelcore
ffffffff814fced4 t __setup_str_setup_numa_zonelist_order
ffffffff814fcee8 t __setup_str_set_mminit_loglevel
ffffffff814fcef8 t __setup_str_percpu_alloc_setup
ffffffff814fcf05 t __setup_str_disable_randmaps
ffffffff814fcf10 t __setup_str_early_memblock
ffffffff814fcf19 t __setup_str_hugetlb_default_setup
ffffffff814fcf2d t __setup_str_hugetlb_nrpages_setup
ffffffff814fcf38 t __setup_str_slab_max_order_setup
ffffffff814fcf48 t __setup_str_noaliencache_setup
ffffffff814fcf55 t __setup_str_setup_transparent_hugepage
ffffffff814fcf6b t __setup_str_set_dhash_entries
ffffffff814fcf7a t __setup_str_set_ihash_entries
ffffffff814fcf89 t __setup_str_elevator_setup
ffffffff814fcf93 t __setup_str_setup_io_tlb_npages
ffffffff814fcfa0 t __setup_str_pci_setup
ffffffff814fcfa4 t __setup_str_pcie_aspm_disable
ffffffff814fcfaf t __setup_str_pciehp_setup
ffffffff814fcfb8 t __setup_str_pcie_port_setup
ffffffff814fcfc4 t __setup_str_pcie_pme_setup
ffffffff814fcfe0 t __setup_str_video_setup
ffffffff814fcfe7 t __setup_str_no_scroll
ffffffff814fcff1 t __setup_str_text_mode
ffffffff814fcffb t __setup_str_fb_console_setup
ffffffff814fd020 T logo_linux_mono
ffffffff814fd040 T logo_linux_vga16
ffffffff814fd060 T logo_linux_clut224
ffffffff814fd080 t __setup_str_acpi_parse_apic_instance
ffffffff814fd093 t __setup_str_acpi_enforce_resources_setup
ffffffff814fd0ab t __setup_str_acpi_serialize_setup
ffffffff814fd0ba t __setup_str_osi_setup
ffffffff814fd0c4 t __setup_str_acpi_os_name_setup
ffffffff814fd0d2 t __setup_str_acpi_irq_balance_set
ffffffff814fd0e3 t __setup_str_acpi_irq_nobalance_set
ffffffff814fd0f6 t __setup_str_acpi_irq_pci
ffffffff814fd104 t __setup_str_acpi_irq_isa
ffffffff814fd112 t __setup_str_acpi_no_auto_ssdt_setup
ffffffff814fd124 t __setup_str_setup_hest_disable
ffffffff814fd131 t __setup_str_setup_erst_disable
ffffffff814fd13e t __setup_str_pnp_setup_reserve_mem
ffffffff814fd14f t __setup_str_pnp_setup_reserve_io
ffffffff814fd15f t __setup_str_pnp_setup_reserve_dma
ffffffff814fd170 t __setup_str_pnp_setup_reserve_irq
ffffffff814fd181 t __setup_str_pnpacpi_setup
ffffffff814fd18a t __setup_str_xen_selfballooning_setup
ffffffff814fd199 t __setup_str_no_cleancache
ffffffff814fd1a6 t __setup_str_enable_tmem
ffffffff814fd1ab t __setup_str_mount_param
ffffffff814fd1c0 t i8042_dmi_reset_table
ffffffff814fe1e0 t i8042_dmi_noloop_table
ffffffff814ff620 t i8042_dmi_nomux_table
ffffffff81502280 t i8042_dmi_notimeout_table
ffffffff815026a0 t i8042_dmi_dritek_table
ffffffff81503580 t i8042_dmi_nopnp_table
ffffffff815039a0 t i8042_dmi_laptop_table
ffffffff81504060 t atkbd_dmi_quirk_table
ffffffff81505740 t toshiba_dmi_table
ffffffff81505e00 t olpc_dmi_table
ffffffff81505f60 t lifebook_dmi_table
ffffffff815070d8 t __setup_str_cpuidle_sysfs_setup
ffffffff815070ed t __setup_str_parse_pmtmr
ffffffff815070f4 t __setup_str_acpi_pm_good_setup
ffffffff81507101 t __setup_str_parse_amd_iommu_options
ffffffff8150710c t __setup_str_parse_amd_iommu_dump
ffffffff81507120 t pci_use_crs_table
ffffffff81507a88 t __setup_str_netdev_boot_setup
ffffffff81507a90 t __setup_str_netdev_boot_setup
ffffffff81507a97 t __setup_str_set_rhash_entries
ffffffff81507aa6 t __setup_str_set_thash_entries
ffffffff81507ab5 t __setup_str_set_uhash_entries
ffffffff81507ae0 T __dtb_end
ffffffff81507ae0 T __dtb_start
ffffffff81507ae0 t __setup_rdinit_setup
ffffffff81507ae0 T __setup_start
ffffffff81507af8 t __setup_init_setup
ffffffff81507b10 t __setup_loglevel
ffffffff81507b28 t __setup_quiet_kernel
ffffffff81507b40 t __setup_debug_kernel
ffffffff81507b58 t __setup_set_reset_devices
ffffffff81507b70 t __setup_root_delay_setup
ffffffff81507b88 t __setup_fs_names_setup
ffffffff81507ba0 t __setup_root_data_setup
ffffffff81507bb8 t __setup_rootwait_setup
ffffffff81507bd0 t __setup_root_dev_setup
ffffffff81507be8 t __setup_readwrite
ffffffff81507c00 t __setup_readonly
ffffffff81507c18 t __setup_load_ramdisk
ffffffff81507c30 t __setup_no_initrd
ffffffff81507c48 t __setup_retain_initrd_param
ffffffff81507c60 t __setup_lpj_setup
ffffffff81507c78 t __setup_parse_xen_emul_unplug
ffffffff81507c90 t __setup_code_bytes_setup
ffffffff81507ca8 t __setup_kstack_setup
ffffffff81507cc0 t __setup_setup_unknown_nmi_panic
ffffffff81507cd8 t __setup_parse_reservelow
ffffffff81507cf0 t __setup_control_va_addr_alignment
ffffffff81507d08 t __setup_vsyscall_setup
ffffffff81507d20 t __setup_parse_memmap_opt
ffffffff81507d38 t __setup_parse_memopt
ffffffff81507d50 t __setup_iommu_setup
ffffffff81507d68 t __setup_setup_noreplace_paravirt
ffffffff81507d80 t __setup_setup_noreplace_smp
ffffffff81507d98 t __setup_debug_alt
ffffffff81507db0 t __setup_bootonly
ffffffff81507dc8 t __setup_tsc_setup
ffffffff81507de0 t __setup_notsc_setup
ffffffff81507df8 t __setup_io_delay_param
ffffffff81507e10 t __setup_idle_setup
ffffffff81507e28 t __setup_setup_disablecpuid
ffffffff81507e40 t __setup_setup_noclflush
ffffffff81507e58 t __setup_setup_show_msr
ffffffff81507e70 t __setup_setup_disable_smep
ffffffff81507e88 t __setup_x86_xsaveopt_setup
ffffffff81507ea0 t __setup_x86_xsave_setup
ffffffff81507eb8 t __setup_x86_rdrand_setup
ffffffff81507ed0 t __setup_mcheck_disable
ffffffff81507ee8 t __setup_mcheck_enable
ffffffff81507f00 t __setup_disable_mtrr_trim_setup
ffffffff81507f18 t __setup_parse_mtrr_spare_reg
ffffffff81507f30 t __setup_parse_mtrr_gran_size_opt
ffffffff81507f48 t __setup_parse_mtrr_chunk_size_opt
ffffffff81507f60 t __setup_mtrr_cleanup_debug_setup
ffffffff81507f78 t __setup_enable_mtrr_cleanup_setup
ffffffff81507f90 t __setup_disable_mtrr_cleanup_setup
ffffffff81507fa8 t __setup_setup_acpi_sci
ffffffff81507fc0 t __setup_parse_acpi_use_timer_override
ffffffff81507fd8 t __setup_parse_acpi_skip_timer_override
ffffffff81507ff0 t __setup_parse_pci
ffffffff81508008 t __setup_parse_acpi
ffffffff81508020 t __setup_reboot_setup
ffffffff81508038 t __setup_nonmi_ipi_setup
ffffffff81508050 t __setup__setup_possible_cpus
ffffffff81508068 t __setup_parse_alloc_mptable_opt
ffffffff81508080 t __setup_update_mptable_setup
ffffffff81508098 t __setup_apic_set_verbosity
ffffffff815080b0 t __setup_parse_nolapic_timer
ffffffff815080c8 t __setup_parse_disable_apic_timer
ffffffff815080e0 t __setup_parse_lapic_timer_c2_ok
ffffffff815080f8 t __setup_setup_nolapic
ffffffff81508110 t __setup_setup_disableapic
ffffffff81508128 t __setup_setup_apicpmtimer
ffffffff81508140 t __setup_disable_timer_pin_setup
ffffffff81508158 t __setup_notimercheck
ffffffff81508170 t __setup_setup_show_lapic
ffffffff81508188 t __setup_parse_noapic
ffffffff815081a0 t __setup_setup_early_printk
ffffffff815081b8 t __setup_disable_hpet
ffffffff815081d0 t __setup_hpet_setup
ffffffff815081e8 t __setup_parse_gart_mem
ffffffff81508200 t __setup_nonx32_setup
ffffffff81508218 t __setup_parse_direct_gbpages_on
ffffffff81508230 t __setup_parse_direct_gbpages_off
ffffffff81508248 t __setup_early_ioremap_debug_setup
ffffffff81508260 t __setup_pat_debug_setup
ffffffff81508278 t __setup_nopat
ffffffff81508290 t __setup_setup_userpte
ffffffff815082a8 t __setup_noexec_setup
ffffffff815082c0 t __setup_setup_hugepagesz
ffffffff815082d8 t __setup_numa_setup
ffffffff815082f0 t __setup_vdso_setup
ffffffff81508308 t __setup_vdso_setup
ffffffff81508320 t __setup_coredump_filter_setup
ffffffff81508338 t __setup_oops_setup
ffffffff81508350 t __setup_keep_bootcon_setup
ffffffff81508368 t __setup_console_suspend_disable
ffffffff81508380 t __setup_console_setup
ffffffff81508398 t __setup_ignore_loglevel_setup
ffffffff815083b0 t __setup_log_buf_len_setup
ffffffff815083c8 t __setup_strict_iomem
ffffffff815083e0 t __setup_reserve_setup
ffffffff815083f8 t __setup_file_caps_disable
ffffffff81508410 t __setup_setup_print_fatal_signals
ffffffff81508428 t __setup_setup_hrtimer_hres
ffffffff81508440 t __setup_setup_relax_domain_level
ffffffff81508458 t __setup_isolated_cpu_setup
ffffffff81508470 t __setup_ntp_tick_adj_setup
ffffffff81508488 t __setup_boot_override_clock
ffffffff815084a0 t __setup_boot_override_clocksource
ffffffff815084b8 t __setup_maxcpus
ffffffff815084d0 t __setup_nrcpus
ffffffff815084e8 t __setup_nosmp
ffffffff81508500 t __setup_cgroup_disable
ffffffff81508518 t __setup_audit_enable
ffffffff81508530 t __setup_setup_forced_irqthreads
ffffffff81508548 t __setup_irqpoll_setup
ffffffff81508560 t __setup_irqfixup_setup
ffffffff81508578 t __setup_noirqdebug_setup
ffffffff81508590 t __setup_delayacct_setup_disable
ffffffff815085a8 t __setup_set_hashdist
ffffffff815085c0 t __setup_cmdline_parse_movablecore
ffffffff815085d8 t __setup_cmdline_parse_kernelcore
ffffffff815085f0 t __setup_setup_numa_zonelist_order
ffffffff81508608 t __setup_set_mminit_loglevel
ffffffff81508620 t __setup_percpu_alloc_setup
ffffffff81508638 t __setup_disable_randmaps
ffffffff81508650 t __setup_early_memblock
ffffffff81508668 t __setup_hugetlb_default_setup
ffffffff81508680 t __setup_hugetlb_nrpages_setup
ffffffff81508698 t __setup_slab_max_order_setup
ffffffff815086b0 t __setup_noaliencache_setup
ffffffff815086c8 t __setup_setup_transparent_hugepage
ffffffff815086e0 t __setup_set_dhash_entries
ffffffff815086f8 t __setup_set_ihash_entries
ffffffff81508710 t __setup_elevator_setup
ffffffff81508728 t __setup_setup_io_tlb_npages
ffffffff81508740 t __setup_pci_setup
ffffffff81508758 t __setup_pcie_aspm_disable
ffffffff81508770 t __setup_pciehp_setup
ffffffff81508788 t __setup_pcie_port_setup
ffffffff815087a0 t __setup_pcie_pme_setup
ffffffff815087b8 t __setup_video_setup
ffffffff815087d0 t __setup_no_scroll
ffffffff815087e8 t __setup_text_mode
ffffffff81508800 t __setup_fb_console_setup
ffffffff81508818 t __setup_acpi_parse_apic_instance
ffffffff81508830 t __setup_acpi_enforce_resources_setup
ffffffff81508848 t __setup_acpi_serialize_setup
ffffffff81508860 t __setup_osi_setup
ffffffff81508878 t __setup_acpi_os_name_setup
ffffffff81508890 t __setup_acpi_irq_balance_set
ffffffff815088a8 t __setup_acpi_irq_nobalance_set
ffffffff815088c0 t __setup_acpi_irq_pci
ffffffff815088d8 t __setup_acpi_irq_isa
ffffffff815088f0 t __setup_acpi_no_auto_ssdt_setup
ffffffff81508908 t __setup_setup_hest_disable
ffffffff81508920 t __setup_setup_erst_disable
ffffffff81508938 t __setup_pnp_setup_reserve_mem
ffffffff81508950 t __setup_pnp_setup_reserve_io
ffffffff81508968 t __setup_pnp_setup_reserve_dma
ffffffff81508980 t __setup_pnp_setup_reserve_irq
ffffffff81508998 t __setup_pnpacpi_setup
ffffffff815089b0 t __setup_xen_selfballooning_setup
ffffffff815089c8 t __setup_no_cleancache
ffffffff815089e0 t __setup_enable_tmem
ffffffff815089f8 t __setup_mount_param
ffffffff81508a10 t __setup_cpuidle_sysfs_setup
ffffffff81508a28 t __setup_parse_pmtmr
ffffffff81508a40 t __setup_acpi_pm_good_setup
ffffffff81508a58 t __setup_parse_amd_iommu_options
ffffffff81508a70 t __setup_parse_amd_iommu_dump
ffffffff81508a88 t __setup_netdev_boot_setup
ffffffff81508aa0 t __setup_netdev_boot_setup
ffffffff81508ab8 t __setup_set_rhash_entries
ffffffff81508ad0 t __setup_set_thash_entries
ffffffff81508ae8 t __setup_set_uhash_entries
ffffffff81508b00 t __initcall_init_hw_perf_eventsearly
ffffffff81508b00 T __initcall_start
ffffffff81508b00 T __setup_end
ffffffff81508b08 t __initcall_spawn_ksoftirqdearly
ffffffff81508b10 t __initcall_init_workqueuesearly
ffffffff81508b18 t __initcall_migration_initearly
ffffffff81508b20 t __initcall_cpu_stop_initearly
ffffffff81508b28 t __initcall_rcu_scheduler_really_startedearly
ffffffff81508b30 T __initcall0_start
ffffffff81508b30 t __initcall_ipc_ns_init0
ffffffff81508b38 t __initcall_init_mmap_min_addr0
ffffffff81508b40 t __initcall_init_cpufreq_transition_notifier_list0
ffffffff81508b48 t __initcall_net_ns_init0
ffffffff81508b50 T __initcall1_start
ffffffff81508b50 t __initcall_e820_mark_nvs_memory1
ffffffff81508b58 t __initcall_cpufreq_tsc1
ffffffff81508b60 t __initcall_pci_reboot_init1
ffffffff81508b68 t __initcall_init_lapic_sysfs1
ffffffff81508b70 t __initcall_init_smp_flush1
ffffffff81508b78 t __initcall_cpu_hotplug_pm_sync_init1
ffffffff81508b80 t __initcall_alloc_frozen_cpus1
ffffffff81508b88 t __initcall_ksysfs_init1
ffffffff81508b90 t __initcall_pm_init1
ffffffff81508b98 t __initcall_init_jiffies_clocksource1
ffffffff81508ba0 t __initcall_init_zero_pfn1
ffffffff81508ba8 t __initcall_memory_failure_init1
ffffffff81508bb0 t __initcall_fsnotify_init1
ffffffff81508bb8 t __initcall_filelock_init1
ffffffff81508bc0 t __initcall_init_script_binfmt1
ffffffff81508bc8 t __initcall_init_elf_binfmt1
ffffffff81508bd0 t __initcall_init_compat_elf_binfmt1
ffffffff81508bd8 t __initcall_random32_init1
ffffffff81508be0 t __initcall___gnttab_init1
ffffffff81508be8 t __initcall_cpufreq_core_init1
ffffffff81508bf0 t __initcall_cpuidle_init1
ffffffff81508bf8 t __initcall_sock_init1
ffffffff81508c00 t __initcall_net_inuse_init1
ffffffff81508c08 t __initcall_netlink_proto_init1
ffffffff81508c10 T __initcall2_start
ffffffff81508c10 t __initcall_bdi_class_init2
ffffffff81508c18 t __initcall_kobject_uevent_init2
ffffffff81508c20 t __initcall_pcibus_class_init2
ffffffff81508c28 t __initcall_pci_driver_init2
ffffffff81508c30 t __initcall_xenbus_init2
ffffffff81508c38 t __initcall_tty_class_init2
ffffffff81508c40 t __initcall_vtconsole_class_init2
ffffffff81508c48 t __initcall_wakeup_sources_debugfs_init2
ffffffff81508c50 t __initcall_register_node_type2
ffffffff81508c58 t __initcall_amd_postcore_init2
ffffffff81508c60 T __initcall3_start
ffffffff81508c60 t __initcall_arch_kdebugfs_init3
ffffffff81508c68 t __initcall_configure_trampolines3
ffffffff81508c70 t __initcall_mtrr_if_init3
ffffffff81508c78 t __initcall_ffh_cstate_init3
ffffffff81508c80 t __initcall_acpi_pci_init3
ffffffff81508c88 t __initcall_setup_vcpu_hotplug_event3
ffffffff81508c90 t __initcall_register_xen_pci_notifier3
ffffffff81508c98 t __initcall_pci_arch_init3
ffffffff81508ca0 T __initcall4_start
ffffffff81508ca0 t __initcall_topology_init4
ffffffff81508ca8 t __initcall_mtrr_init_finialize4
ffffffff81508cb0 t __initcall_init_vdso4
ffffffff81508cb8 t __initcall_sysenter_setup4
ffffffff81508cc0 t __initcall_param_sysfs_init4
ffffffff81508cc8 t __initcall_default_bdi_init4
ffffffff81508cd0 t __initcall_init_bio4
ffffffff81508cd8 t __initcall_fsnotify_notification_init4
ffffffff81508ce0 t __initcall_cryptomgr_init4
ffffffff81508ce8 t __initcall_blk_settings_init4
ffffffff81508cf0 t __initcall_blk_ioc_init4
ffffffff81508cf8 t __initcall_blk_softirq_init4
ffffffff81508d00 t __initcall_blk_iopoll_setup4
ffffffff81508d08 t __initcall_genhd_device_init4
ffffffff81508d10 t __initcall_pci_slot_init4
ffffffff81508d18 t __initcall_fbmem_init4
ffffffff81508d20 t __initcall_acpi_init4
ffffffff81508d28 t __initcall_acpi_pci_root_init4
ffffffff81508d30 t __initcall_acpi_pci_link_init4
ffffffff81508d38 t __initcall_pnp_init4
ffffffff81508d40 t __initcall_xen_setup_shutdown_event4
ffffffff81508d48 t __initcall_balloon_init4
ffffffff81508d50 t __initcall_xenbus_probe_backend_init4
ffffffff81508d58 t __initcall_xenbus_probe_frontend_init4
ffffffff81508d60 t __initcall_balloon_init4
ffffffff81508d68 t __initcall_xen_selfballoon_init4
ffffffff81508d70 t __initcall_misc_init4
ffffffff81508d78 t __initcall_vga_arb_device_init4
ffffffff81508d80 t __initcall_init_scsi4
ffffffff81508d88 t __initcall_ata_init4
ffffffff81508d90 t __initcall_serio_init4
ffffffff81508d98 t __initcall_input_init4
ffffffff81508da0 t __initcall_rtc_init4
ffffffff81508da8 t __initcall_pci_subsys_init4
ffffffff81508db0 t __initcall_proto_init4
ffffffff81508db8 t __initcall_net_dev_init4
ffffffff81508dc0 t __initcall_neigh_init4
ffffffff81508dc8 t __initcall_genl_init4
ffffffff81508dd0 t __initcall_net_sysctl_init4
ffffffff81508dd8 T __initcall5_start
ffffffff81508dd8 t __initcall_hpet_late_init5
ffffffff81508de0 t __initcall_init_amd_nbs5
ffffffff81508de8 t __initcall_clocksource_done_booting5
ffffffff81508df0 t __initcall_init_pipe_fs5
ffffffff81508df8 t __initcall_eventpoll_init5
ffffffff81508e00 t __initcall_anon_inode_init5
ffffffff81508e08 t __initcall_blk_scsi_ioctl_init5
ffffffff81508e10 t __initcall_acpi_event_init5
ffffffff81508e18 t __initcall_pnp_system_init5
ffffffff81508e20 t __initcall_pnpacpi_init5
ffffffff81508e28 t __initcall_chr_dev_init5
ffffffff81508e30 t __initcall_firmware_class_init5
ffffffff81508e38 t __initcall_cpufreq_gov_performance_init5
ffffffff81508e40 t __initcall_init_acpi_pm_clocksource5
ffffffff81508e48 t __initcall_pcibios_assign_resources5
ffffffff81508e50 t __initcall_sysctl_core_init5
ffffffff81508e58 t __initcall_inet_init5
ffffffff81508e60 t __initcall_af_unix_init5
ffffffff81508e68 t __initcall_pci_apply_final_quirks5s
ffffffff81508e70 t __initcall_populate_rootfsrootfs
ffffffff81508e70 T __initcallrootfs_start
ffffffff81508e78 t __initcall_pci_iommu_initrootfs
ffffffff81508e80 T __initcall6_start
ffffffff81508e80 t __initcall_i8259A_init_ops6
ffffffff81508e88 t __initcall_vsyscall_init6
ffffffff81508e90 t __initcall_sbf_init6
ffffffff81508e98 t __initcall_init_tsc_clocksource6
ffffffff81508ea0 t __initcall_add_rtc_cmos6
ffffffff81508ea8 t __initcall_i8237A_init_ops6
ffffffff81508eb0 t __initcall_cache_sysfs_init6
ffffffff81508eb8 t __initcall_mcheck_init_device6
ffffffff81508ec0 t __initcall_threshold_init_device6
ffffffff81508ec8 t __initcall_amd_ibs_init6
ffffffff81508ed0 t __initcall_ioapic_init_ops6
ffffffff81508ed8 t __initcall_add_pcspkr6
ffffffff81508ee0 t __initcall_audit_classes_init6
ffffffff81508ee8 t __initcall_ia32_binfmt_init6
ffffffff81508ef0 t __initcall_proc_execdomains_init6
ffffffff81508ef8 t __initcall_ioresources_init6
ffffffff81508f00 t __initcall_uid_cache_init6
ffffffff81508f08 t __initcall_init_posix_timers6
ffffffff81508f10 t __initcall_init_posix_cpu_timers6
ffffffff81508f18 t __initcall_timekeeping_init_ops6
ffffffff81508f20 t __initcall_init_clocksource_sysfs6
ffffffff81508f28 t __initcall_init_timer_list_procfs6
ffffffff81508f30 t __initcall_alarmtimer_init6
ffffffff81508f38 t __initcall_futex_init6
ffffffff81508f40 t __initcall_proc_dma_init6
ffffffff81508f48 t __initcall_proc_modules_init6
ffffffff81508f50 t __initcall_kallsyms_init6
ffffffff81508f58 t __initcall_pid_namespaces_init6
ffffffff81508f60 t __initcall_ikconfig_init6
ffffffff81508f68 t __initcall_audit_init6
ffffffff81508f70 t __initcall_audit_watch_init6
ffffffff81508f78 t __initcall_audit_tree_init6
ffffffff81508f80 t __initcall_irq_pm_init_ops6
ffffffff81508f88 t __initcall_utsname_sysctl_init6
ffffffff81508f90 t __initcall_perf_event_sysfs_init6
ffffffff81508f98 t __initcall_init_per_zone_wmark_min6
ffffffff81508fa0 t __initcall_kswapd_init6
ffffffff81508fa8 t __initcall_setup_vmstat6
ffffffff81508fb0 t __initcall_mm_sysfs_init6
ffffffff81508fb8 t __initcall_proc_vmalloc_init6
ffffffff81508fc0 t __initcall_procswaps_init6
ffffffff81508fc8 t __initcall_hugetlb_init6
ffffffff81508fd0 t __initcall_slab_proc_init6
ffffffff81508fd8 t __initcall_cpucache_init6
ffffffff81508fe0 t __initcall_hugepage_init6
ffffffff81508fe8 t __initcall_init_cleancache6
ffffffff81508ff0 t __initcall_fcntl_init6
ffffffff81508ff8 t __initcall_proc_filesystems_init6
ffffffff81509000 t __initcall_dio_init6
ffffffff81509008 t __initcall_fsnotify_mark_init6
ffffffff81509010 t __initcall_dnotify_init6
ffffffff81509018 t __initcall_inotify_user_setup6
ffffffff81509020 t __initcall_fanotify_user_setup6
ffffffff81509028 t __initcall_aio_setup6
ffffffff81509030 t __initcall_proc_locks_init6
ffffffff81509038 t __initcall_init_sys32_ioctl6
ffffffff81509040 t __initcall_proc_cmdline_init6
ffffffff81509048 t __initcall_proc_consoles_init6
ffffffff81509050 t __initcall_proc_cpuinfo_init6
ffffffff81509058 t __initcall_proc_devices_init6
ffffffff81509060 t __initcall_proc_interrupts_init6
ffffffff81509068 t __initcall_proc_loadavg_init6
ffffffff81509070 t __initcall_proc_meminfo_init6
ffffffff81509078 t __initcall_proc_stat_init6
ffffffff81509080 t __initcall_proc_uptime_init6
ffffffff81509088 t __initcall_proc_version_init6
ffffffff81509090 t __initcall_proc_softirqs_init6
ffffffff81509098 t __initcall_proc_kcore_init6
ffffffff815090a0 t __initcall_proc_kmsg_init6
ffffffff815090a8 t __initcall_proc_page_init6
ffffffff815090b0 t __initcall_configfs_init6
ffffffff815090b8 t __initcall_init_devpts_fs6
ffffffff815090c0 t __initcall_init_ramfs_fs6
ffffffff815090c8 t __initcall_init_hugetlbfs_fs6
ffffffff815090d0 t __initcall_init_pstore_fs6
ffffffff815090d8 t __initcall_ipc_init6
ffffffff815090e0 t __initcall_ipc_sysctl_init6
ffffffff815090e8 t __initcall_init_mqueue_fs6
ffffffff815090f0 t __initcall_crypto_wq_init6
ffffffff815090f8 t __initcall_crypto_algapi_init6
ffffffff81509100 t __initcall_skcipher_module_init6
ffffffff81509108 t __initcall_chainiv_module_init6
ffffffff81509110 t __initcall_eseqiv_module_init6
ffffffff81509118 t __initcall_aes_init6
ffffffff81509120 t __initcall_crc32c_mod_init6
ffffffff81509128 t __initcall_krng_mod_init6
ffffffff81509130 t __initcall_proc_genhd_init6
ffffffff81509138 t __initcall_bsg_init6
ffffffff81509140 t __initcall_init_cgroup_blkio6
ffffffff81509148 t __initcall_noop_init6
ffffffff81509150 t __initcall_cfq_init6
ffffffff81509158 t __initcall_percpu_counter_startup6
ffffffff81509160 t __initcall_pci_proc_init6
ffffffff81509168 t __initcall_pcie_portdrv_init6
ffffffff81509170 t __initcall_aer_service_init6
ffffffff81509178 t __initcall_pcie_pme_service_init6
ffffffff81509180 t __initcall_ioapic_init6
ffffffff81509188 t __initcall_fb_console_init6
ffffffff81509190 t __initcall_vesafb_init6
ffffffff81509198 t __initcall_acpi_reserve_resources6
ffffffff815091a0 t __initcall_irqrouter_init_ops6
ffffffff815091a8 t __initcall_acpi_hed_init6
ffffffff815091b0 t __initcall_erst_init6
ffffffff815091b8 t __initcall_ghes_init6
ffffffff815091c0 t __initcall_xenbus_probe_initcall6
ffffffff815091c8 t __initcall_xenbus_init6
ffffffff815091d0 t __initcall_xenbus_backend_init6
ffffffff815091d8 t __initcall_hypervisor_subsys_init6
ffffffff815091e0 t __initcall_hyper_sysfs_init6
ffffffff815091e8 t __initcall_platform_pci_module_init6
ffffffff815091f0 t __initcall_xen_tmem_init6
ffffffff815091f8 t __initcall_pty_init6
ffffffff81509200 t __initcall_xen_hvc_init6
ffffffff81509208 t __initcall_rand_initialize6
ffffffff81509210 t __initcall_nvram_init6
ffffffff81509218 t __initcall_topology_sysfs_init6
ffffffff81509220 t __initcall_init_sd6
ffffffff81509228 t __initcall_net_olddevs_init6
ffffffff81509230 t __initcall_i8042_init6
ffffffff81509238 t __initcall_mousedev_init6
ffffffff81509240 t __initcall_atkbd_init6
ffffffff81509248 t __initcall_psmouse_init6
ffffffff81509250 t __initcall_cmos_init6
ffffffff81509258 t __initcall_cpufreq_stats_init6
ffffffff81509260 t __initcall_init_ladder6
ffffffff81509268 t __initcall_hid_init6
ffffffff81509270 t __initcall_sock_diag_init6
ffffffff81509278 t __initcall_flow_cache_init_global6
ffffffff81509280 t __initcall_sysctl_ipv4_init6
ffffffff81509288 t __initcall_init_syncookies6
ffffffff81509290 t __initcall_cubictcp_register6
ffffffff81509298 T __initcall7_start
ffffffff81509298 t __initcall_hpet_insert_resource7
ffffffff815092a0 t __initcall_update_mp_table7
ffffffff815092a8 t __initcall_lapic_insert_resource7
ffffffff815092b0 t __initcall_io_apic_bug_finalize7
ffffffff815092b8 t __initcall_print_ICs7
ffffffff815092c0 t __initcall_check_early_ioremap_leak7
ffffffff815092c8 t __initcall_init_oops_id7
ffffffff815092d0 t __initcall_printk_late_init7
ffffffff815092d8 t __initcall_pm_qos_power_init7
ffffffff815092e0 t __initcall_taskstats_init7
ffffffff815092e8 t __initcall_max_swapfiles_check7
ffffffff815092f0 t __initcall_set_recommended_min_free_kbytes7
ffffffff815092f8 t __initcall_random32_reseed7
ffffffff81509300 t __initcall_pci_resource_alignment_sysfs_init7
ffffffff81509308 t __initcall_pci_sysfs_init7
ffffffff81509310 t __initcall_boot_wait_for_devices7
ffffffff81509318 t __initcall_random_int_secret_init7
ffffffff81509320 t __initcall_deferred_probe_initcall7
ffffffff81509328 t __initcall_scsi_complete_async_scans7
ffffffff81509330 t __initcall_rtc_hctosys7
ffffffff81509338 t __initcall_memmap_init7
ffffffff81509340 t __initcall_pci_mmcfg_late_insert_resources7
ffffffff81509348 t __initcall_net_secret_init7
ffffffff81509350 t __initcall_tcp_congestion_default7
ffffffff81509358 t __initcall_initialize_hashrnd7s
ffffffff81509360 T __con_initcall_start
ffffffff81509360 t __initcall_con_init
ffffffff81509360 T __initcall_end
ffffffff81509368 t __initcall_hvc_console_init
ffffffff81509370 t __initcall_xen_cons_init
ffffffff81509378 T __con_initcall_end
ffffffff81509378 T __initramfs_start
ffffffff81509378 t __irf_start
ffffffff81509378 T __security_initcall_end
ffffffff81509378 T __security_initcall_start
ffffffff81509578 T __initramfs_size
ffffffff81509578 t __irf_end
ffffffff8150a000 r r_base
ffffffff8150a000 R trampoline_data
ffffffff8150a000 R x86_trampoline_start
ffffffff8150a068 r startup_32
ffffffff8150a09c r startup_64
ffffffff8150a0a5 r no_longmode
ffffffff8150a0a8 r verify_cpu
ffffffff8150a0fd r verify_cpu_noamd
ffffffff8150a146 r verify_cpu_clear_xd
ffffffff8150a157 r verify_cpu_check
ffffffff8150a197 r verify_cpu_sse_test
ffffffff8150a1c6 r verify_cpu_no_longmode
ffffffff8150a1cf r verify_cpu_sse_ok
ffffffff8150a1d8 r tidt
ffffffff8150a1e0 r tgdt
ffffffff8150a200 r startup_32_vector
ffffffff8150a200 r tgdt_end
ffffffff8150a208 r startup_64_vector
ffffffff8150a210 R trampoline_status
ffffffff8150a214 r trampoline_stack
ffffffff8150b000 R trampoline_level4_pgt
ffffffff8150b000 r trampoline_stack_end
ffffffff8150c000 r __cpu_dev_intel_cpu_dev
ffffffff8150c000 R __x86_cpu_dev_start
ffffffff8150c000 R trampoline_end
ffffffff8150c000 R x86_trampoline_end
ffffffff8150c008 r __cpu_dev_amd_cpu_dev
ffffffff8150c010 r __cpu_dev_centaur_cpu_dev
ffffffff8150c018 R __parainstructions
ffffffff8150c018 R __x86_cpu_dev_end
ffffffff8151e554 R __parainstructions_end
ffffffff8151e558 R __alt_instructions
ffffffff8151f11c R __alt_instructions_end
ffffffff8151f5b8 r __iommu_entry_pci_xen_swiotlb_detect
ffffffff8151f5b8 R __iommu_table
ffffffff8151f5e0 r __iommu_entry_pci_swiotlb_detect_4gb
ffffffff8151f608 r __iommu_entry_pci_swiotlb_detect_override
ffffffff8151f630 r __iommu_entry_gart_iommu_hole_init
ffffffff8151f658 r __iommu_entry_amd_iommu_detect
ffffffff8151f680 D __apicdrivers
ffffffff8151f680 d __apicdrivers_apic_physflatapic_flat
ffffffff8151f680 R __iommu_table_end
ffffffff8151f690 D __apicdrivers_end
ffffffff8151f690 t ffh_cstate_exit
ffffffff8151f6aa t ikconfig_cleanup
ffffffff8151f6b8 t hugetlb_exit
ffffffff8151f747 t exit_script_binfmt
ffffffff8151f753 t exit_elf_binfmt
ffffffff8151f75f t exit_compat_elf_binfmt
ffffffff8151f76b t configfs_exit
ffffffff8151f7a1 t exit_hugetlbfs_fs
ffffffff8151f7d5 t crypto_wq_exit
ffffffff8151f7e1 t crypto_algapi_exit
ffffffff8151f7e6 T crypto_exit_proc
ffffffff8151f7f4 t eseqiv_module_exit
ffffffff8151f800 t cryptomgr_exit
ffffffff8151f816 t aes_fini
ffffffff8151f822 t crc32c_mod_fini
ffffffff8151f82e t krng_mod_fini
ffffffff8151f83a t exit_cgroup_blkio
ffffffff8151f846 t noop_exit
ffffffff8151f852 t cfq_exit
ffffffff8151f87a t aer_service_exit
ffffffff8151f886 t ioapic_exit
ffffffff8151f892 t interrupt_stats_exit
ffffffff8151f8ad t acpi_hed_exit
ffffffff8151f8b9 t ghes_exit
ffffffff8151f8d8 t xenbus_exit
ffffffff8151f8e4 t xenbus_backend_exit
ffffffff8151f8f0 t hyper_sysfs_exit
ffffffff8151f952 t hvc_exit
ffffffff8151f990 t xen_hvc_fini
ffffffff8151f9bc t nvram_cleanup_module
ffffffff8151f9d8 t firmware_class_exit
ffffffff8151f9e4 t exit_scsi
ffffffff8151fa04 t exit_sd
ffffffff8151fa74 t ata_exit
ffffffff8151fa98 T libata_transport_exit
ffffffff8151fabe t serio_exit
ffffffff8151fad8 t i8042_exit
ffffffff8151fb03 t input_exit
ffffffff8151fb30 t mousedev_exit
ffffffff8151fb4a t atkbd_exit
ffffffff8151fb56 t psmouse_exit
ffffffff8151fb70 t rtc_exit
ffffffff8151fb8f T rtc_dev_exit
ffffffff8151fba4 t cmos_exit
ffffffff8151fbd2 t cmos_do_remove
ffffffff8151fc5c t cmos_pnp_remove
ffffffff8151fc61 t cmos_platform_remove
ffffffff8151fc71 t cpufreq_stats_exit
ffffffff8151fce1 t cpufreq_gov_performance_exit
ffffffff8151fced t exit_ladder
ffffffff8151fcf9 t hid_exit
ffffffff8151fd05 t sock_diag_exit
ffffffff8151fd11 t cubictcp_unregister
ffffffff8151fd1d t xfrm4_policy_fini
ffffffff8151fd3c t af_unix_exit
ffffffff81520000 R __init_end
ffffffff81520000 R __smp_locks
ffffffff81524000 B __bss_start
ffffffff81524000 R __nosave_begin
ffffffff81524000 R __nosave_end
ffffffff81524000 R __smp_locks_end
ffffffff81524000 B empty_zero_page
ffffffff81525000 b level3_user_vsyscall
ffffffff81526000 b dummy_mapping
ffffffff81527000 b fake_ioapic_mapping
ffffffff81528000 b bm_pte
ffffffff81529000 B idt_table
ffffffff8152a000 B nmi_idt_table
ffffffff8152b000 B initcall_debug
ffffffff8152b004 B reset_devices
ffffffff8152b008 B saved_command_line
ffffffff8152b010 b static_command_line
ffffffff8152b018 b panic_later
ffffffff8152b020 b panic_param
ffffffff8152b028 b execute_command
ffffffff8152b030 b ramdisk_execute_command
ffffffff8152b040 b msgbuf
ffffffff8152b080 B ROOT_DEV
ffffffff8152b084 b root_wait
ffffffff8152b088 B real_root_dev
ffffffff8152b08c B initrd_below_start_ok
ffffffff8152b090 B initrd_end
ffffffff8152b098 B initrd_start
ffffffff8152b0a0 b my_inptr
ffffffff8152b0a8 B preset_lpj
ffffffff8152b0b0 B lpj_fine
ffffffff8152b0b8 b printed.10632
ffffffff8152b0c0 B xen_initial_gdt
ffffffff8152b0e0 B xen_dummy_shared_info
ffffffff8152bd08 B xen_start_info
ffffffff8152bd10 B machine_to_phys_nr
ffffffff8152bd18 B xen_domain_type
ffffffff8152bd1c b lock.35222
ffffffff8152bd20 b traps.35224
ffffffff8152cd30 b __force_order
ffffffff8152cd38 b shared_info_page.35578
ffffffff8152cd40 B xen_released_pages
ffffffff8152cd60 B xen_reservation_lock
ffffffff8152cd68 b level1_ident_pgt
ffffffff8152cd80 b discontig_frames
ffffffff8152dd80 B xen_platform_pci_unplug
ffffffff8152dd84 b xen_emul_unplug
ffffffff8152dd88 b p2m_top_mfn
ffffffff8152dd90 b p2m_mid_missing_mfn
ffffffff8152dd98 b p2m_top_mfn_p
ffffffff8152dda0 b p2m_top
ffffffff8152dda8 b p2m_mid_missing
ffffffff8152ddb0 b p2m_missing
ffffffff8152ddb8 b p2m_identity
ffffffff8152ddc0 b m2p_overrides
ffffffff8152ddc8 b m2p_override_lock
ffffffff8152ddd0 B xen_cpu_initialized_map
ffffffff8152dde0 B used_vectors
ffffffff8152de00 B x86_platform_ipi_callback
ffffffff8152de08 B irq_err_count
ffffffff8152de0c b warned.22284
ffffffff8152de10 B sysctl_panic_on_stackoverflow
ffffffff8152de14 b __key.22954
ffffffff8152de14 B panic_on_io_nmi
ffffffff8152de18 B panic_on_unrecovered_nmi
ffffffff8152de1c b die_lock
ffffffff8152de20 b die_nest_count
ffffffff8152de24 b die_counter
ffffffff8152de28 B unknown_nmi_panic
ffffffff8152de2c b ignore_nmis
ffffffff8152de30 b nmi_reason_lock
ffffffff8152de40 B saved_video_mode
ffffffff8152de60 B edid_info
ffffffff8152dee0 B screen_info
ffffffff8152df20 B bootloader_version
ffffffff8152df24 B bootloader_type
ffffffff8152df28 B mmu_cr4_features
ffffffff8152df30 B max_pfn_mapped
ffffffff8152df38 B max_low_pfn_mapped
ffffffff8152df40 B io_apic_irqs
ffffffff8152df48 B i8259A_lock
ffffffff8152df4c b i8259A_auto_eoi
ffffffff8152df50 b irq_trigger
ffffffff8152df54 b spurious_irq_mask.25340
ffffffff8152df58 b vsyscall_mode
ffffffff8152df60 B e820_saved
ffffffff8152e980 B e820
ffffffff8152f388 B force_hpet_address
ffffffff8152f390 b force_hpet_resume_type
ffffffff8152f398 b cached_dev
ffffffff8152f3a0 b rcba_base
ffffffff8152f3a8 B arch_debugfs_dir
ffffffff8152f3b0 B skip_smp_alternatives
ffffffff8152f3b4 b debug_alternative
ffffffff8152f3b8 b smp_alt_once
ffffffff8152f3bc b noreplace_smp
ffffffff8152f3c0 b noreplace_paravirt
ffffffff8152f3c4 b stop_machine_first
ffffffff8152f3c8 b wrote_text
ffffffff8152f3d0 B global_clock_event
ffffffff8152f3d8 B tsc_clocksource_reliable
ffffffff8152f3e0 b cyc2ns_suspend
ffffffff8152f3e8 b no_sched_irq_time
ffffffff8152f3ec b ref_freq
ffffffff8152f3f0 b loops_per_jiffy_ref
ffffffff8152f3f8 b tsc_khz_ref
ffffffff8152f400 b hpet.23389
ffffffff8152f408 b ref_start.23388
ffffffff8152f410 B rtc_lock
ffffffff8152f412 b __print_once.25558
ffffffff8152f418 B x86_trampoline_base
ffffffff8152f420 B amd_e400_c1e_detected
ffffffff8152f428 B pm_idle
ffffffff8152f430 B boot_option_idle_override
ffffffff8152f438 B task_xstate_cachep
ffffffff8152f440 b idle_notifier
ffffffff8152f450 b amd_e400_c1e_mask
ffffffff8152f458 b __print_once.28414
ffffffff8152f45c B xstate_size
ffffffff8152f460 B fx_sw_reserved_ia32
ffffffff8152f4a0 B fx_sw_reserved
ffffffff8152f4d0 B pcntxt_mask
ffffffff8152f4d8 b xstate_offsets
ffffffff8152f4e0 b xstate_sizes
ffffffff8152f4e8 b init_xstate_buf
ffffffff8152f4f0 b xstate_features
ffffffff8152f500 B xstate_fx_sw_bytes
ffffffff8152f540 B num_cache_leaves
ffffffff8152f548 b cache_dev_map
ffffffff8152f550 b attrs.22127
ffffffff8152f558 b is_initialized.21853
ffffffff8152f55c b printed.11602
ffffffff8152f560 B kernel_eflags
ffffffff8152f568 B cpu_sibling_setup_mask
ffffffff8152f570 B cpu_callin_mask
ffffffff8152f578 B cpu_callout_mask
ffffffff8152f580 B cpu_initialized_mask
ffffffff8152f588 b __print_once.29704
ffffffff8152f589 b printed.29702
ffffffff8152f58a b __print_once.29713
ffffffff8152f590 B x86_hyper
ffffffff8152f598 B ms_hyperv
ffffffff8152f5a0 b __print_once.18294
ffffffff8152f5c0 B unconstrained
ffffffff8152f5e0 B emptyconstraint
ffffffff8152f600 b active_events
ffffffff8152f620 B mce_info
ffffffff8152f820 B x86_mce_decoder_chain
ffffffff8152f830 B mce_entry
ffffffff8152f838 b mce_need_notify
ffffffff8152f840 b global_nwo
ffffffff8152f844 b mce_callin
ffffffff8152f848 b mce_executing
ffffffff8152f84c b mce_paniced
ffffffff8152f850 b cpu_missing
ffffffff8152f860 b mce_helper
ffffffff8152f8e0 b mce_write
ffffffff8152f8e8 b mce_device_initialized
ffffffff8152f8f0 b mce_chrdev_state_lock
ffffffff8152f8f4 b mce_chrdev_open_count
ffffffff8152f8f8 b mce_chrdev_open_exclu
ffffffff8152f8fc b mce_apei_read_done
ffffffff8152f900 B mtrr_if
ffffffff8152f908 B size_and_mask
ffffffff8152f910 B size_or_mask
ffffffff8152f920 B mtrr_usage_table
ffffffff8152fd20 B num_var_ranges
ffffffff8152fd40 b mtrr_ops
ffffffff8152fd88 b mtrr_aps_delayed_init
ffffffff8152fda0 b mtrr_value
ffffffff815315a0 B mtrr_state
ffffffff81532600 B mtrr_tom2
ffffffff81532608 b mtrr_state_set
ffffffff8153260c b set_atomicity_lock
ffffffff81532610 b cr4
ffffffff81532618 b deftype_lo
ffffffff8153261c b deftype_hi
ffffffff81532620 b smp_changes_mask
ffffffff81532640 b range_new.27148
ffffffff81533640 b nr_range_new.27150
ffffffff81533644 b disable_mtrr_trim
ffffffff81533650 b perfctr_nmi_owner
ffffffff81533660 b evntsel_nmi_owner
ffffffff81533670 b ibs_caps
ffffffff81533680 B acpi_irq_model
ffffffff81533684 B acpi_strict
ffffffff81533688 B acpi_ioapic
ffffffff8153368c B acpi_lapic
ffffffff81533690 B acpi_pci_disabled
ffffffff81533694 B acpi_noirq
ffffffff81533698 B acpi_disabled
ffffffff8153369c B acpi_rsdt_forced
ffffffff815336a0 b hpet_res
ffffffff815336b0 b cpu_cstate_entry
ffffffff815336c0 b mwait_supported
ffffffff815336d0 B port_cf9_safe
ffffffff815336d4 B reboot_force
ffffffff815336d8 B pm_power_off
ffffffff815336e0 b reboot_emergency
ffffffff815336e4 b crashing_cpu
ffffffff815336e8 b shootdown_callback
ffffffff815336f0 b waiting_for_crash_ipi
ffffffff815336f4 b reboot_mode
ffffffff815336f8 B init_deasserted
ffffffff815336fc b __key.8440
ffffffff81533700 B enable_update_mptable
ffffffff81533708 b mpf_found
ffffffff81533720 B apic_version
ffffffff81553720 B lapic_timer_frequency
ffffffff81553724 B smp_found_config
ffffffff81553728 B pic_mode
ffffffff8155372c B apic_verbosity
ffffffff81553730 B local_apic_timer_c2_ok
ffffffff81553734 B disable_apic
ffffffff81553738 B mp_lapic_addr
ffffffff81553740 B x2apic_mode
ffffffff81553760 B phys_cpu_present_map
ffffffff81554760 B max_physical_apicid
ffffffff81554764 B num_processors
ffffffff81554770 b eilvt_offsets
ffffffff81554780 b apic_phys
ffffffff815547a0 b apic_pm_state
ffffffff815547e0 B irq_mis_count
ffffffff815547e4 B skip_ioapic_setup
ffffffff81554800 B mp_bus_not_pci
ffffffff81554820 B mp_irq_entries
ffffffff81554840 B mp_irqs
ffffffff81556840 B gsi_top
ffffffff81556844 B nr_ioapics
ffffffff81556860 b ioapics
ffffffff81558060 b ioapic_resources
ffffffff81558080 b irq_cfgx
ffffffff81558280 b ioapic_lock
ffffffff81558282 b vector_lock
ffffffff81558284 b current_xpos
ffffffff815582a0 B hpet_force_user
ffffffff815582a4 B hpet_msi_disable
ffffffff815582a5 B hpet_blockid
ffffffff815582a8 B hpet_address
ffffffff815582b0 b hpet_virt_address
ffffffff815582b8 b boot_hpet_disable
ffffffff815582bc b hpet_legacy_int_enabled
ffffffff815582c0 b hpet_freq
ffffffff815582c8 b hpet_verbose
ffffffff815582d0 b hpet_devs
ffffffff815582d8 b hpet_num_timers
ffffffff815582e0 b __key.8255
ffffffff815582e0 b irq_handler
ffffffff815582e8 b hpet_rtc_flags
ffffffff815582f0 b hpet_default_delta
ffffffff815582f8 b hpet_pie_limit
ffffffff81558300 b hpet_pie_delta
ffffffff81558304 b hpet_t1_cmp
ffffffff81558308 b hpet_prev_update_sec
ffffffff81558320 b hpet_alarm_time
ffffffff81558348 b hpet_pie_count
ffffffff81558350 B amd_northbridges
ffffffff81558368 b reset.19590
ffffffff8155836c b ban.19591
ffffffff81558370 b gart_lock.19612
ffffffff81558378 b flush_words
ffffffff81558380 B paravirt_steal_rq_enabled
ffffffff81558384 B paravirt_steal_enabled
ffffffff81558388 b __force_order
ffffffff81558390 b last_value
ffffffff81558398 B agp_gatt_table
ffffffff815583a0 B agp_memory_reserved
ffffffff815583a4 b fix_up_north_bridges
ffffffff815583a8 b aperture_order
ffffffff815583ac b aperture_alloc
ffffffff815583b0 b no_agp
ffffffff815583b8 b iommu_size
ffffffff815583c0 b iommu_pages
ffffffff815583c8 b iommu_gart_bitmap
ffffffff815583d0 b iommu_bus_base
ffffffff815583d8 b bad_dma_addr
ffffffff815583e0 b iommu_gatt_base
ffffffff815583e8 b gart_unmapped_entry
ffffffff815583ec b iommu_bitmap_lock
ffffffff815583f0 b next_bit
ffffffff815583f8 b need_flush
ffffffff815583fc B gart_iommu_aperture
ffffffff81558400 B after_bootmem
ffffffff81558420 B force_personality32
ffffffff81558440 b kcore_vsyscall
ffffffff81558468 B pgd_lock
ffffffff8155846a b __print_once.27720
ffffffff81558480 b direct_pages_count
ffffffff815584a0 b cpa_lock
ffffffff815584a4 B pat_debug_enable
ffffffff815584a8 b memtype_lock
ffffffff815584ac B fixmaps_set
ffffffff815584b0 b memtype_rbroot
ffffffff815584c0 b flush_state
ffffffff815586c0 B node_to_cpumask_map
ffffffff81558ec0 b numa_distance_cnt
ffffffff81558ec8 b numa_distance
ffffffff81558ed0 b __print_once.22825
ffffffff81558ed1 b __print_once.22826
ffffffff81558ed8 b vdso_size
ffffffff81558ee0 B vdso_pages
ffffffff81558ee8 b vdso32_pages
ffffffff81558ef0 b lastcomm.29388
ffffffff81558f00 B vm_area_cachep
ffffffff81558f08 B fs_cachep
ffffffff81558f10 B files_cachep
ffffffff81558f18 B sighand_cachep
ffffffff81558f20 B max_threads
ffffffff81558f24 B nr_threads
ffffffff81558f28 B total_forks
ffffffff81558f30 b task_struct_cachep
ffffffff81558f38 b signal_cachep
ffffffff81558f40 b mm_cachep
ffffffff81558f48 b __key.41693
ffffffff81558f48 b __key.41831
ffffffff81558f48 b __key.41832
ffffffff81558f48 b __key.41833
ffffffff81558f48 b __key.41962
ffffffff81558f48 b __key.7292
ffffffff81558f60 B panic_blink
ffffffff81558f70 B panic_notifier_list
ffffffff81558f80 B panic_timeout
ffffffff81558f84 B panic_on_oops
ffffffff81558f88 b panic_lock.18615
ffffffff81558fa0 b buf.18617
ffffffff815593a0 b tainted_mask
ffffffff815593b0 b buf.18648
ffffffff815593c8 b pause_on_oops_flag
ffffffff815593cc b pause_on_oops
ffffffff815593d0 b pause_on_oops_lock
ffffffff815593d4 b spin_counter.18695
ffffffff815593d8 b oops_id
ffffffff815593e0 B dmesg_restrict
ffffffff815593e4 B console_set_on_cmdline
ffffffff815593e8 B console_drivers
ffffffff815593f0 B oops_in_progress
ffffffff815593f4 b logbuf_lock
ffffffff815593f8 b log_end
ffffffff815593fc b con_start
ffffffff81559400 b log_start
ffffffff81559420 b __log_buf
ffffffff81599420 b __print_once.30681
ffffffff81599424 b logged_chars
ffffffff81599428 b recursion_bug
ffffffff81599430 b oops_timestamp.30862
ffffffff81599440 b printk_buf
ffffffff81599840 b printk_time
ffffffff81599844 b console_locked
ffffffff81599860 b console_cmdline
ffffffff81599920 b console_suspended
ffffffff81599924 b console_may_schedule
ffffffff81599928 b always_kmsg_dump
ffffffff81599930 b exclusive_console
ffffffff81599938 b dump_list_lock
ffffffff81599940 b cpu_chain
ffffffff81599948 b cpu_hotplug_disabled
ffffffff81599950 b frozen_cpus
ffffffff81599958 b __print_once.28158
ffffffff8159995c B sys_tz
ffffffff81599980 b reserved.20581
ffffffff815999a0 b reserve.20582
ffffffff81599a80 b strict_iomem_checks
ffffffff81599aa0 B sysctl_legacy_va_layout
ffffffff81599ac0 b dev_table
ffffffff81599b00 b minolduid
ffffffff81599b04 b zero
ffffffff81599b08 b min_extfrag_threshold
ffffffff81599b20 b warn_once_bitmap
ffffffff81599b40 b warned.28374
ffffffff81599b44 b warned.28369
ffffffff81599b80 B boot_tvec_bases
ffffffff8159bbc0 b boot_done.29689
ffffffff8159bbc8 b uidhash_lock
ffffffff8159bbd0 b uid_cachep
ffffffff8159bbd8 b sigqueue_cachep
ffffffff8159bbe0 B pm_power_off_prepare
ffffffff8159bbe8 B cad_pid
ffffffff8159bbf0 b kmod_concurrent.29717
ffffffff8159bbf4 b kmod_loop_msg.29718
ffffffff8159bbf8 b umh_sysctl_lock
ffffffff8159bbfc b running_helpers
ffffffff8159bc00 b khelper_wq
ffffffff8159bc40 b unbound_gcwq_nr_running
ffffffff8159bc80 b unbound_global_cwq
ffffffff8159bf80 b __key.7579
ffffffff8159bf80 b workqueue_lock
ffffffff8159bf82 b __key.25301
ffffffff8159bf82 b workqueue_freezing
ffffffff8159bf83 b __key.25611
ffffffff8159bf88 b pid_hash
ffffffff8159bf90 b __key.8440
ffffffff8159bf90 B module_sysfs_initialized
ffffffff8159bf98 B module_kset
ffffffff8159bfa0 b posix_timers_cache
ffffffff8159bfc0 b posix_timers_id
ffffffff8159bfe0 b posix_clocks
ffffffff8159c4e0 b idr_lock
ffffffff8159c4e8 B kthreadd_task
ffffffff8159c4f0 b __key.7576
ffffffff8159c4f0 b kthread_create_lock
ffffffff8159c500 b onecputick
ffffffff8159c520 b zero_it.22584
ffffffff8159c540 b __print_once.28419
ffffffff8159c548 b nsproxy_cachep
ffffffff8159c550 b __key.11426
ffffffff8159c550 b __key.15371
ffffffff8159c550 b die_chain
ffffffff8159c560 B kernel_kobj
ffffffff8159c568 b cred_jar
ffffffff8159c570 b entry_count
ffffffff8159c574 b async_lock
ffffffff8159c580 B root_task_group
ffffffff8159c748 B sched_domain_level_max
ffffffff8159c74c B sched_mc_power_savings
ffffffff8159c750 B sched_smt_power_savings
ffffffff8159c760 B def_root_domain
ffffffff8159ce20 B root_cpuacct
ffffffff8159ce50 B avenrun
ffffffff8159ce68 b calc_load_update
ffffffff8159ce70 b calc_load_tasks
ffffffff8159ce78 b cpu_isolated_map
ffffffff8159ce80 b doms_cur
ffffffff8159ce88 b dattr_cur
ffffffff8159ce90 b ndoms_cur
ffffffff8159ce98 b fallback_doms
ffffffff8159cea0 b sched_domains_tmpmask
ffffffff8159cea8 b task_group_lock
ffffffff8159ceac b balancing
ffffffff8159cec0 B def_rt_bandwidth
ffffffff8159cf18 b once.18238
ffffffff8159cf20 b pm_qos_lock
ffffffff8159cf40 b null_pm_qos
ffffffff8159cf98 B pm_wq
ffffffff8159cfa0 B power_kobj
ffffffff8159cfa8 b orig_fgconsole
ffffffff8159cfac b orig_kmsg
ffffffff8159cfb0 B pm_nosig_freezing
ffffffff8159cfb1 B pm_freezing
ffffffff8159cfb4 B system_freezing_cnt
ffffffff8159cfb8 b freezer_lock
ffffffff8159cfc0 b timekeeper
ffffffff8159d060 b timekeeping_suspend_time
ffffffff8159d070 b old_delta.21960
ffffffff8159d080 b __print_once.22010
ffffffff8159d088 B tick_nsec
ffffffff8159d090 B ntp_lock
ffffffff8159d098 b time_adjust
ffffffff8159d0a0 b tick_length_base
ffffffff8159d0a8 b tick_length
ffffffff8159d0b0 b time_offset
ffffffff8159d0b8 b ntp_tick_adj
ffffffff8159d0c0 b time_freq
ffffffff8159d0c8 b time_state
ffffffff8159d0d0 b time_tai
ffffffff8159d0d8 b time_reftime
ffffffff8159d0e0 b watchdog_lock
ffffffff8159d0e4 b finished_booting
ffffffff8159d0e8 b watchdog_running
ffffffff8159d0f0 b watchdog
ffffffff8159d100 b watchdog_timer
ffffffff8159d140 b override_name
ffffffff8159d160 b curr_clocksource
ffffffff8159d168 b watchdog_reset_pending
ffffffff8159d16c b __key.28066
ffffffff8159d180 b rtctimer
ffffffff8159d1c0 b alarm_bases
ffffffff8159d290 b freezer_delta_lock
ffffffff8159d298 b freezer_delta
ffffffff8159d2a0 b rtcdev_lock
ffffffff8159d2a8 b rtcdev
ffffffff8159d2b0 b clockevents_lock
ffffffff8159d2b8 b clockevents_chain
ffffffff8159d2c0 B tick_period
ffffffff8159d2c8 B tick_next_period
ffffffff8159d2d0 b tick_device_lock
ffffffff8159d2e0 b tick_broadcast_device
ffffffff8159d2f0 b tick_broadcast_mask
ffffffff8159d2f8 b tick_broadcast_lock
ffffffff8159d300 b tick_broadcast_oneshot_mask
ffffffff8159d308 b tick_broadcast_force
ffffffff8159d310 b tmpmask
ffffffff8159d318 b last_jiffies_update
ffffffff8159d320 b futex_queues
ffffffff8159eb20 b prev_max.15450
ffffffff8159eb24 B dma_spin_lock
ffffffff8159eb40 B modules_disabled
ffffffff8159eb60 b last_unloaded_module
ffffffff8159eba0 b module_addr_max
ffffffff8159eba8 b acct_lock
ffffffff8159ebc0 b rootnode
ffffffff8159ff20 b init_css_set
ffffffff815a0168 b root_count
ffffffff815a016c b css_set_count
ffffffff815a0180 b css_set_table
ffffffff815a0580 b release_list_lock
ffffffff815a0582 b __key.31011
ffffffff815a0582 b __key.31644
ffffffff815a0582 b __key.31972
ffffffff815a05a0 b init_css_set_link
ffffffff815a05d0 b __key.31946
ffffffff815a05d0 b cgroup_kobj
ffffffff815a05d8 b hierarchy_id_lock
ffffffff815a05e0 b hierarchy_ida
ffffffff815a0608 b next_hierarchy_id
ffffffff815a0620 b cpuset_wq
ffffffff815a0628 b cpuset_being_rebound
ffffffff815a0640 b newmems.27058
ffffffff815a0660 b cpus_attach
ffffffff815a0680 b cpuset_attach_nodemask_to
ffffffff815a06a0 b cpuset_attach_nodemask_from
ffffffff815a06c0 b oldmems.27329
ffffffff815a06e0 b cpuset_buffer_lock
ffffffff815a0700 b cpuset_name
ffffffff815a0780 b cpuset_nodelist
ffffffff815a0880 b pid_ns_cachep
ffffffff815a0888 b __key.6723
ffffffff815a0888 b stop_machine_initialized
ffffffff815a08a0 B audit_inode_hash
ffffffff815a0aa0 B audit_sig_sid
ffffffff815a0aa4 B audit_pid
ffffffff815a0aa8 B audit_ever_enabled
ffffffff815a0aac B audit_enabled
ffffffff815a0ab0 b audit_lost
ffffffff815a0ab4 b audit_rate_limit
ffffffff815a0ab8 b lock.37057
ffffffff815a0ac0 b last_msg.37056
ffffffff815a0ac8 b audit_sock
ffffffff815a0ad0 b serial_lock.37336
ffffffff815a0ad4 b serial.37338
ffffffff815a0ad8 b audit_initialized
ffffffff815a0ae0 b audit_skb_queue
ffffffff815a0af8 b lock.37044
ffffffff815a0afc b messages.37043
ffffffff815a0b00 b last_check.37042
ffffffff815a0b08 b audit_freelist_lock
ffffffff815a0b0c b audit_freelist_count
ffffffff815a0b10 b audit_default
ffffffff815a0b20 b audit_skb_hold_queue
ffffffff815a0b38 b kauditd_task
ffffffff815a0b40 b audit_nlk_pid
ffffffff815a0b60 b classes
ffffffff815a0be0 B audit_signals
ffffffff815a0be4 B audit_n_rules
ffffffff815a0be8 b session_id
ffffffff815a0bf0 b audit_watch_group
ffffffff815a0c00 b audit_tree_group
ffffffff815a0c20 b chunk_hash_heads
ffffffff815a1420 b allocated_irqs
ffffffff815a1a48 B irq_default_affinity
ffffffff815a1a50 b __key.18867
ffffffff815a1a50 b irq_poll_cpu
ffffffff815a1a54 b irq_poll_active
ffffffff815a1a58 B no_irq_affinity
ffffffff815a1a60 b root_irq_dir
ffffffff815a1a68 b prec.20938
ffffffff815a1a80 B rcutorture_vernum
ffffffff815a1a88 B rcutorture_testseq
ffffffff815a1a90 b sync_sched_expedited_started
ffffffff815a1a94 b sync_sched_expedited_done
ffffffff815a1aa0 b rcu_barrier_completion
ffffffff815a1ac0 b __key.8440
ffffffff815a1ac0 b rcu_barrier_cpu_count
ffffffff815a1ac8 B delayacct_cache
ffffffff815a1ad0 B taskstats_cache
ffffffff815a1ad8 b family_registered
ffffffff815a1adc b __key.28132
ffffffff815a1ae0 B perf_swevent_enabled
ffffffff815a1b08 B perf_guest_cbs
ffffffff815a1b10 b __key.30784
ffffffff815a1b10 b pmu_bus_running
ffffffff815a1b20 b pmu_idr
ffffffff815a1b40 b __key.28849
ffffffff815a1b40 b pmus_srcu
ffffffff815a1b70 b __key.30265
ffffffff815a1b70 b __key.30266
ffffffff815a1b70 b __key.30267
ffffffff815a1b70 b perf_event_id
ffffffff815a1b78 b __key.30618
ffffffff815a1b78 b __key.30631
ffffffff815a1b78 b nr_callchain_events
ffffffff815a1b80 b callchain_cpus_entries
ffffffff815a1b88 b constraints_initialized
ffffffff815a1b8c b nr_slots
ffffffff815a1bc0 b __key.26012
ffffffff815a1bc0 B sysctl_oom_kill_allocating_task
ffffffff815a1bc4 B sysctl_panic_on_oom
ffffffff815a1bc8 b zone_scan_lock
ffffffff815a1be0 B movable_zone
ffffffff815a1be4 B percpu_pagelist_fraction
ffffffff815a1be8 b saved_gfp_mask
ffffffff815a1bf0 b nr_shown.31811
ffffffff815a1bf8 b resume.31810
ffffffff815a1c00 b nr_unshown.31812
ffffffff815a1c08 b cpus_with_pcps.32133
ffffffff815a1c10 b user_zonelist_order
ffffffff815a1c14 b current_zonelist_order
ffffffff815a1c20 b node_load
ffffffff815a2020 b node_order
ffffffff815a2420 b __key.32950
ffffffff815a2420 b __key.33168
ffffffff815a2420 B global_dirty_limit
ffffffff815a2428 B laptop_mode
ffffffff815a242c B block_dump
ffffffff815a2430 B vm_dirty_bytes
ffffffff815a2438 B vm_highmem_is_dirtyable
ffffffff815a2440 B dirty_background_bytes
ffffffff815a2460 b vm_completions
ffffffff815a24e8 b bdi_min_ratio
ffffffff815a24f0 b update_time.31857
ffffffff815a24f8 b dirty_lock.31855
ffffffff815a24fc B page_cluster
ffffffff815a2500 B scan_unevictable_pages
ffffffff815a2508 B vm_total_pages
ffffffff815a2510 b __print_once.30921
ffffffff815a2518 b __key.29772
ffffffff815a2518 b lock.29717
ffffffff815a2520 b shmem_inode_cachep
ffffffff815a2528 b shm_mnt
ffffffff815a2540 B bdi_lock
ffffffff815a2560 b sync_supers_timer
ffffffff815a2598 b bdi_class
ffffffff815a25a0 b __key.25684
ffffffff815a25a0 b sync_supers_tsk
ffffffff815a25a8 b __key.25839
ffffffff815a25a8 b bdi_seq
ffffffff815a25b0 b nr_bdi_congested
ffffffff815a25b8 B mm_kobj
ffffffff815a25c0 B mminit_loglevel
ffffffff815a25e0 b pcpu_lock
ffffffff815a25e8 b pcpu_reserved_chunk
ffffffff815a25f0 b pages.20023
ffffffff815a25f8 b bitmap.20024
ffffffff815a2600 b pcpu_first_chunk
ffffffff815a2608 b pcpu_reserved_chunk_limit
ffffffff815a2620 b vm.20788
ffffffff815a2660 B high_memory
ffffffff815a2668 B num_physpages
ffffffff815a2670 b nr_shown.27006
ffffffff815a2678 b resume.27005
ffffffff815a2680 b nr_unshown.27007
ffffffff815a2688 b shmlock_user_lock
ffffffff815a26c0 B vm_committed_as
ffffffff815a26e8 b __key.31699
ffffffff815a26e8 b anon_vma_chain_cachep
ffffffff815a26f0 b anon_vma_cachep
ffffffff815a26f8 b __key.25857
ffffffff815a26f8 B vmlist
ffffffff815a2700 b vmap_lazy_nr
ffffffff815a2704 b purge_lock.24595
ffffffff815a2706 b vmap_area_lock
ffffffff815a2708 b vmap_block_tree_lock
ffffffff815a2710 b free_vmap_cache
ffffffff815a2718 b cached_vstart
ffffffff815a2720 b vmap_area_root
ffffffff815a2728 b vmap_area_pcpu_hole
ffffffff815a2730 b cached_hole_size
ffffffff815a2738 b cached_align
ffffffff815a2740 B max_pfn
ffffffff815a2748 B min_low_pfn
ffffffff815a2750 B max_low_pfn
ffffffff815a2758 b isa_page_pool
ffffffff815a2760 b page_pool
ffffffff815a2780 b swap_cache_info
ffffffff815a27a0 B total_swap_pages
ffffffff815a27a8 B nr_swap_pages
ffffffff815a27b0 b swap_lock
ffffffff815a27c0 b swap_info
ffffffff815a28a8 b proc_poll_event
ffffffff815a28ac b nr_swapfiles
ffffffff815a28b0 b least_priority
ffffffff815a28b8 B swap_token_mm
ffffffff815a28c0 b global_faults.20701
ffffffff815a28c4 b swap_token_lock
ffffffff815a28c8 b last_aging.20702
ffffffff815a28d0 b swap_token_memcg
ffffffff815a28d8 b __key.19966
ffffffff815a28e0 B node_hstates
ffffffff815a40e0 B hstates
ffffffff815a79b0 B default_hstate_idx
ffffffff815a79b8 B hugepages_treat_as_movable
ffffffff815a79c0 b max_hstate
ffffffff815a79c8 b hugepages_kobj
ffffffff815a79d0 b hstate_kobjs
ffffffff815a79e0 b hugetlb_lock
ffffffff815a79e8 b last_mhp.27642
ffffffff815a79f0 B policy_zone
ffffffff815a79f8 b policy_cache
ffffffff815a7a00 b sn_cache
ffffffff815a7a40 B mem_section
ffffffff815aba40 b index_init_lock.19662
ffffffff815aba48 b vmemmap_buf
ffffffff815aba50 b vmemmap_buf_end
ffffffff815aba58 B sysctl_compact_memory
ffffffff815aba60 b g_cpucache_up
ffffffff815aba64 b slab_max_order
ffffffff815aba70 b cache_chain
ffffffff815aba80 b cache_cache_nodelists
ffffffff815ac280 b khugepaged_full_scans
ffffffff815ac284 b khugepaged_pages_collapsed
ffffffff815ac288 b khugepaged_mm_lock
ffffffff815ac290 b cleancache_succ_gets
ffffffff815ac298 b cleancache_failed_gets
ffffffff815ac2a0 b cleancache_puts
ffffffff815ac2a8 b cleancache_invalidates
ffffffff815ac2c0 B files_lglock_cpu_lock
ffffffff815ac2c8 b old_max.22967
ffffffff815ac2d0 b __key.23098
ffffffff815ac2e0 B sb_lock
ffffffff815ac2e2 b __key.28206
ffffffff815ac2e2 b __key.28207
ffffffff815ac2e2 b __key.28208
ffffffff815ac2e2 b __key.28209
ffffffff815ac2e2 b __key.28210
ffffffff815ac2e2 b __key.28211
ffffffff815ac2e2 b __key.28212
ffffffff815ac300 b default_op.28195
ffffffff815ac3c0 b unnamed_dev_ida
ffffffff815ac3e8 b unnamed_dev_lock
ffffffff815ac3ec b unnamed_dev_start
ffffffff815ac400 b chrdevs
ffffffff815acbf8 b cdev_lock
ffffffff815acc00 b cdev_map
ffffffff815acc08 B suid_dumpable
ffffffff815acc0c B core_pipe_limit
ffffffff815acc10 B core_uses_pid
ffffffff815acc14 b core_dump_count.33760
ffffffff815acc18 b __key.29214
ffffffff815acc18 b __key.7292
ffffffff815acc18 b fasync_lock
ffffffff815acc40 B inodes_stat
ffffffff815acc80 b empty_iops.27685
ffffffff815acd40 b empty_fops.27686
ffffffff815ace10 b __key.27690
ffffffff815ace10 b __key.27850
ffffffff815ace10 b shared_last_ino.28209
ffffffff815ace14 b iunique_lock.28287
ffffffff815ace18 b counter.28289
ffffffff815ace20 b file_systems
ffffffff815ace40 B vfsmount_lock_cpu_lock
ffffffff815ace48 B fs_kobj
ffffffff815ace60 b mnt_group_ida
ffffffff815acea0 b mnt_id_ida
ffffffff815acec8 b mnt_id_lock
ffffffff815acecc b mnt_id_start
ffffffff815acee0 b namespace_sem
ffffffff815acf00 b event
ffffffff815acf04 b __key.15525
ffffffff815acf04 b __key.30317
ffffffff815acf04 b __key.30425
ffffffff815acf04 b pin_fs_lock
ffffffff815acf06 b simple_transaction_lock.25039
ffffffff815acf08 b __key.25076
ffffffff815acf08 B nr_pdflush_threads
ffffffff815acf0c B sysctl_drop_caches
ffffffff815acf10 B buffer_heads_over_limit
ffffffff815acf14 b msg_count.33407
ffffffff815acf18 b bh_cachep
ffffffff815acf20 b max_buffer_heads
ffffffff815acf28 B fs_bio_set
ffffffff815acf30 b bio_slab_max
ffffffff815acf34 b bio_slab_nr
ffffffff815acf38 b bio_slabs
ffffffff815acf40 b bio_dirty_lock
ffffffff815acf48 b bio_dirty_list
ffffffff815acf50 b bd_mnt.29014
ffffffff815acf58 b __key.28986
ffffffff815acf58 b __key.28987
ffffffff815acf60 b fsnotify_sync_cookie
ffffffff815acf68 b fsnotify_event_cachep
ffffffff815acf70 b fsnotify_event_holder_cachep
ffffffff815acf78 b q_overflow_event
ffffffff815acf80 b __key.19553
ffffffff815acf80 b __key.19554
ffffffff815acf80 B fsnotify_mark_srcu
ffffffff815acfb0 b destroy_lock
ffffffff815acfb4 b warned.19061
ffffffff815acfb8 b zero
ffffffff815acfc0 b poll_loop_ncalls
ffffffff815acfe0 b poll_safewake_ncalls
ffffffff815ad000 b poll_readywalk_ncalls
ffffffff815ad018 b __key.27570
ffffffff815ad018 b __key.27571
ffffffff815ad018 b __key.27572
ffffffff815ad020 b path_count
ffffffff815ad038 b zero
ffffffff815ad040 b anon_inode_inode
ffffffff815ad060 b anon_inode_fops
ffffffff815ad130 b __key.27365
ffffffff815ad130 b cancel_lock
ffffffff815ad134 b __key.27430
ffffffff815ad138 B aio_nr
ffffffff815ad140 b kiocb_cachep
ffffffff815ad148 b kioctx_cachep
ffffffff815ad150 b aio_wq
ffffffff815ad158 b aio_nr_lock
ffffffff815ad15a b fput_lock
ffffffff815ad15c b __key.31994
ffffffff815ad15c b file_lock_lock
ffffffff815ad15e b __key.28900
ffffffff815ad160 b proc_inode_cachep
ffffffff815ad168 b __print_once.26962
ffffffff815ad180 B proc_subdir_lock
ffffffff815ad1a0 b proc_inum_ida
ffffffff815ad1c8 b proc_inum_lock
ffffffff815ad1d0 b proc_tty_driver
ffffffff815ad1d8 b proc_tty_ldisc
ffffffff815ad1e0 b sysctl_lock
ffffffff815ad1e2 b __key.7469
ffffffff815ad200 B kcore_modules
ffffffff815ad228 b proc_root_kcore
ffffffff815ad240 b kcore_text
ffffffff815ad280 b kcore_vmalloc
ffffffff815ad2c0 b sysfs_open_dirent_lock
ffffffff815ad2c2 b __key.21320
ffffffff815ad2c2 b __key.21342
ffffffff815ad2c8 b sysfs_workqueue
ffffffff815ad2e0 B sysfs_assoc_lock
ffffffff815ad2e2 b sysfs_ino_lock
ffffffff815ad300 b sysfs_ino_ida
ffffffff815ad328 B sysfs_dir_cachep
ffffffff815ad330 b sysfs_mnt
ffffffff815ad338 b __key.17715
ffffffff815ad338 b __key.20269
ffffffff815ad338 B configfs_dirent_lock
ffffffff815ad340 B configfs_dir_cachep
ffffffff815ad348 b configfs_mount
ffffffff815ad350 b configfs_mnt_count
ffffffff815ad358 b config_kobj
ffffffff815ad360 b devpts_mnt
ffffffff815ad368 b pty_count
ffffffff815ad36c b pty_limit_min
ffffffff815ad370 B sysctl_hugetlb_shm_group
ffffffff815ad378 b hugetlbfs_vfsmount
ffffffff815ad380 b __print_once.26277
ffffffff815ad388 b hugetlbfs_inode_cachep
ffffffff815ad390 b nls_lock
ffffffff815ad398 b pstore_sb
ffffffff815ad3a0 b allpstore_lock
ffffffff815ad3a8 b pstore_lock
ffffffff815ad3b0 b psinfo
ffffffff815ad3b8 b backend
ffffffff815ad3c0 b __key.14983
ffffffff815ad3c0 b pstore_new_entry
ffffffff815ad3c4 b oopscount
ffffffff815ad3c8 b __key.24392
ffffffff815ad3c8 B mq_lock
ffffffff815ad3cc b zero
ffffffff815ad3d0 b mqueue_inode_cachep
ffffffff815ad3d8 b mq_sysctl_table
ffffffff815ad3e0 b __key.39861
ffffffff815ad3e0 b warned.30716
ffffffff815ad3e8 B mmap_min_addr
ffffffff815ad3f0 b __key.7292
ffffffff815ad3f0 B kcrypto_wq
ffffffff815ad3f8 B crypto_default_rng
ffffffff815ad400 b crypto_default_rng_refcnt
ffffffff815ad420 b chosen_elevator
ffffffff815ad430 b elv_list_lock
ffffffff815ad432 b __key.29063
ffffffff815ad434 b printed.29234
ffffffff815ad440 B blk_requestq_cachep
ffffffff815ad460 B blk_queue_ida
ffffffff815ad488 b kblockd_workqueue
ffffffff815ad490 b __key.30561
ffffffff815ad490 b __key.30562
ffffffff815ad490 b __key.30579
ffffffff815ad490 b request_cachep
ffffffff815ad498 B blk_max_pfn
ffffffff815ad4a0 B blk_max_low_pfn
ffffffff815ad4a8 b iocontext_cachep
ffffffff815ad4c0 B block_depr
ffffffff815ad4e0 b major_names
ffffffff815adce0 b ext_devt_idr
ffffffff815add00 b bdev_map
ffffffff815add08 b __key.28287
ffffffff815add08 b disk_events_dfl_poll_msecs
ffffffff815add10 b __key.27786
ffffffff815add10 b p.27759
ffffffff815add20 b blk_default_cmd_filter
ffffffff815add60 b bsg_minor_idr
ffffffff815add80 b bsg_cmd_cachep
ffffffff815adda0 b bsg_device_list
ffffffff815adde0 b __key.28860
ffffffff815adde0 b bsg_class
ffffffff815adde8 b bsg_major
ffffffff815ade00 b bsg_cdev
ffffffff815ade68 b __key.28709
ffffffff815ade68 b __key.28710
ffffffff815ade68 b blkio_list_lock
ffffffff815ade70 b cfq_pool
ffffffff815ade78 b idr_layer_cache
ffffffff815ade80 b simple_ida_lock
ffffffff815ade90 b kobj_ns_type_lock
ffffffff815adea0 b kobj_ns_ops_tbl
ffffffff815adec0 B uevent_helper
ffffffff815adfc0 B uevent_seqnum
ffffffff815adfe0 b index_bits_to_maxindex
ffffffff815ae1e0 b __key.10819
ffffffff815ae1e0 b __key.10820
ffffffff815ae1e0 b __key.10823
ffffffff815ae1e0 b __key.10865
ffffffff815ae1e0 b radix_tree_node_cachep
ffffffff815ae1e8 B debug_locks_silent
ffffffff815ae1ec b __print_once.13863
ffffffff815ae1f0 B swiotlb_force
ffffffff815ae1f8 b io_tlb_nslabs
ffffffff815ae200 b io_tlb_start
ffffffff815ae208 b io_tlb_end
ffffffff815ae210 b io_tlb_list
ffffffff815ae218 b io_tlb_index
ffffffff815ae220 b io_tlb_orig_addr
ffffffff815ae228 b io_tlb_overflow_buffer
ffffffff815ae230 b late_alloc
ffffffff815ae234 b io_tlb_lock
ffffffff815ae240 B pci_lock
ffffffff815ae242 b __key.22838
ffffffff815ae244 b __key.19869
ffffffff815ae260 B pci_cache_line_size
ffffffff815ae264 B pcie_bus_config
ffffffff815ae268 B pci_pm_d3_delay
ffffffff815ae26c B pci_pci_problems
ffffffff815ae270 B isa_dma_bridge_buggy
ffffffff815ae278 b pci_platform_pm
ffffffff815ae280 b pcie_ari_disabled
ffffffff815ae284 b pci_acs_enable
ffffffff815ae288 b arch_set_vga_state
ffffffff815ae290 b resource_alignment_lock
ffffffff815ae2a0 b resource_alignment_param
ffffffff815aeaa0 b __key.28938
ffffffff815aeaa0 b sysfs_initialized
ffffffff815aeaa8 b proc_initialized
ffffffff815aeab0 b proc_bus_pci_dir
ffffffff815aeab8 B pci_slots_kset
ffffffff815aeac0 b asus_hides_smbus
ffffffff815aeac8 b asus_rcba_base
ffffffff815aead0 b __print_once.28775
ffffffff815aead4 b aspm_disabled
ffffffff815aead8 b aspm_force
ffffffff815aeadc b aspm_policy
ffffffff815aeae0 B pciehp_msi_disabled
ffffffff815aeae4 B pcie_ports_disabled
ffffffff815aeae8 b __key.19573
ffffffff815aeae8 b forceload
ffffffff815aeae9 b nosourceid
ffffffff815aeaea b aer_recover_ring_lock
ffffffff815aeaec b pcie_aer_disable
ffffffff815aeaf0 b __key.30056
ffffffff815aeaf0 b __key.30057
ffffffff815aeaf0 b parsed.31133
ffffffff815aeaf1 b aer_firmware_first
ffffffff815aeaf4 B pcie_pme_msi_disabled
ffffffff815aeaf8 b ht_irq_lock
ffffffff815aeafc b __key.18200
ffffffff815aeafc B pci_flags
ffffffff815aeb00 B fb_class
ffffffff815aeb08 b __key.30092
ffffffff815aeb08 b __key.30093
ffffffff815aeb08 b __key.30135
ffffffff815aeb08 B fb_mode_option
ffffffff815aeb20 b vgacon_text_mode_force
ffffffff815aeb24 b vga_bootup_console.22750
ffffffff815aeb28 b vga_is_gfx
ffffffff815aeb2c b vga_palette_blanked
ffffffff815aeb30 b vga_lock
ffffffff815aeb34 b vga_rolled_over
ffffffff815aeb40 b state
ffffffff815aeb78 b vgacon_xres
ffffffff815aeb7c b vgacon_yres
ffffffff815aeb80 b vga_512_chars
ffffffff815aeb84 b vga_video_font_height
ffffffff815aeb88 b cursor_size_lastfrom
ffffffff815aeb8c b cursor_size_lastto
ffffffff815aeb90 b vga_vesa_blanked
ffffffff815aeb94 b vga_state
ffffffff815aeba0 b vga_video_num_columns
ffffffff815aeba4 b vga_video_num_lines
ffffffff815aebb0 b vgacon_uni_pagedir
ffffffff815aebc0 b fontname
ffffffff815aec00 b con2fb_map_boot
ffffffff815aec40 b map_override
ffffffff815aec44 b first_fb_vc
ffffffff815aec48 b initial_rotation
ffffffff815aec50 b fbcon_device
ffffffff815aec58 b fbcon_has_sysfs
ffffffff815aec60 b con2fb_map
ffffffff815aeca0 b fbcon_has_exited
ffffffff815aeca4 b fbcon_cursor_noblink
ffffffff815aeca8 b softback_lines
ffffffff815aecc0 b fb_display
ffffffff815b0c40 b softback_curr
ffffffff815b0c48 b softback_end
ffffffff815b0c50 b softback_buf
ffffffff815b0c58 b softback_in
ffffffff815b0c60 b softback_top
ffffffff815b0c68 b logo_lines
ffffffff815b0c6c b scrollback_phys_max
ffffffff815b0c70 b scrollback_current
ffffffff815b0c74 b scrollback_max
ffffffff815b0c80 b palette_red
ffffffff815b0ca0 b palette_green
ffffffff815b0cc0 b palette_blue
ffffffff815b0ce0 b vbl_cursor_cnt
ffffffff815b0ce4 b fbcon_has_console_bind
ffffffff815b0ce8 b nologo
ffffffff815b0cf0 b vesafb_device
ffffffff815b0d00 B kacpi_hotplug_wq
ffffffff815b0d10 b buffer.31437
ffffffff815b0f10 b acpi_os_name
ffffffff815b0f74 b osi_linux
ffffffff815b0f78 b acpi_irq_handler
ffffffff815b0f80 b acpi_irq_context
ffffffff815b0f88 b t.31605
ffffffff815b0f90 b kacpi_notify_wq
ffffffff815b0f98 b kacpid_wq
ffffffff815b0fa0 b __print_once.31418
ffffffff815b0fa8 b __acpi_os_prepare_sleep
ffffffff815b0fb0 B wake_sleep_flags
ffffffff815b0fb4 b gts
ffffffff815b0fb8 b bfs
ffffffff815b0fbc b sleep_states
ffffffff815b0fc8 B acpi_kobj
ffffffff815b0fd0 B acpi_gbl_permanent_mmap
ffffffff815b0fd1 B osc_sb_apei_support_acked
ffffffff815b0fd8 B acpi_root_dir
ffffffff815b0fe0 B acpi_root
ffffffff815b0fe8 b acpi_bus_event_lock
ffffffff815b0fec b __key.24499
ffffffff815b0ff0 b read_madt.23771
ffffffff815b0ff8 b madt.23770
ffffffff815b1000 B first_ec
ffffffff815b1008 B boot_ec
ffffffff815b1010 b EC_FLAGS_MSI
ffffffff815b1014 b EC_FLAGS_VALIDATE_ECDT
ffffffff815b1018 b EC_FLAGS_SKIP_DSDT_SCAN
ffffffff815b101c b __key.25681
ffffffff815b101c b __key.25682
ffffffff815b1020 b sub_driver
ffffffff815b1028 b acpi_prt_lock
ffffffff815b1030 b acpi_power_resource_list
ffffffff815b1040 b __key.24186
ffffffff815b1040 B event_is_open
ffffffff815b1044 b acpi_event_seqnum
ffffffff815b1048 b acpi_system_event_lock
ffffffff815b104c b chars_remaining.30556
ffffffff815b1050 b str.30555
ffffffff815b10a0 b ptr.30557
ffffffff815b10a8 B acpi_irq_not_handled
ffffffff815b10ac B acpi_irq_handled
ffffffff815b10b0 b all_counters
ffffffff815b10b8 b num_gpes
ffffffff815b10bc b num_counters
ffffffff815b10c0 b all_attrs
ffffffff815b10c8 b counter_attrs
ffffffff815b10d0 b acpi_gpe_count
ffffffff815b10d8 b tables_kobj
ffffffff815b10e0 b dynamic_tables_kobj
ffffffff815b10f0 b nodes_found_map
ffffffff815b1110 b acpi_ac_dir
ffffffff815b1118 b lock_ac_dir_cnt
ffffffff815b1120 b acpi_battery_dir
ffffffff815b1128 b lock_battery_dir_cnt
ffffffff815b1130 b acpi_gbl_depth
ffffffff815b1134 b no_auto_ssdt
ffffffff815b1140 B acpi_gbl_startup_flags
ffffffff815b1144 B acpi_gbl_method_executing
ffffffff815b1145 B acpi_gbl_abort_method
ffffffff815b1146 B acpi_gbl_db_terminate_threads
ffffffff815b1148 B acpi_gbl_nesting_level
ffffffff815b114c B acpi_dbg_layer
ffffffff815b1150 B acpi_gbl_db_output_flags
ffffffff815b1158 B acpi_gbl_global_event_handler_context
ffffffff815b1160 B acpi_gbl_global_event_handler
ffffffff815b1168 B acpi_gbl_all_gpes_initialized
ffffffff815b1170 B acpi_gbl_gpe_fadt_blocks
ffffffff815b1180 B acpi_gbl_gpe_xrupt_list_head
ffffffff815b1190 B acpi_gbl_fixed_event_handlers
ffffffff815b11e0 B acpi_gbl_sleep_type_b
ffffffff815b11e1 B acpi_gbl_sleep_type_a
ffffffff815b11e2 B acpi_gbl_cm_single_step
ffffffff815b11e8 B acpi_gbl_current_walk_list
ffffffff815b11f0 B acpi_gbl_module_code_list
ffffffff815b11f8 B acpi_gbl_fadt_gpe_device
ffffffff815b1200 B acpi_gbl_root_node
ffffffff815b1210 B acpi_gbl_root_node_struct
ffffffff815b1240 B acpi_gbl_address_range_list
ffffffff815b1250 B acpi_gbl_supported_interfaces
ffffffff815b1258 B acpi_gbl_osi_data
ffffffff815b1259 B acpi_gbl_events_initialized
ffffffff815b125a B acpi_gbl_acpi_hardware_present
ffffffff815b125b B acpi_gbl_step_to_next_call
ffffffff815b125c B acpi_gbl_debugger_configuration
ffffffff815b125e B acpi_gbl_pm1_enable_register_save
ffffffff815b1260 B acpi_gbl_ps_find_count
ffffffff815b1264 B acpi_gbl_ns_lookup_count
ffffffff815b1268 B acpi_gbl_rsdp_original_location
ffffffff815b126c B acpi_gbl_original_mode
ffffffff815b1270 B acpi_gbl_reg_methods_executed
ffffffff815b1271 B acpi_gbl_next_owner_id_offset
ffffffff815b1272 B acpi_gbl_last_owner_id_index
ffffffff815b1280 B acpi_gbl_owner_id_mask
ffffffff815b12a0 B acpi_gbl_interface_handler
ffffffff815b12a8 B acpi_gbl_breakpoint_walk
ffffffff815b12b0 B acpi_gbl_table_handler_context
ffffffff815b12b8 B acpi_gbl_table_handler
ffffffff815b12c0 B acpi_gbl_init_handler
ffffffff815b12c8 B acpi_gbl_exception_handler
ffffffff815b12d0 B acpi_gbl_system_notify
ffffffff815b1310 B acpi_gbl_device_notify
ffffffff815b1348 B acpi_gbl_operand_cache
ffffffff815b1350 B acpi_gbl_ps_node_ext_cache
ffffffff815b1358 B acpi_gbl_ps_node_cache
ffffffff815b1360 B acpi_gbl_state_cache
ffffffff815b1368 B acpi_gbl_namespace_cache
ffffffff815b1370 B acpi_gbl_hardware_lock
ffffffff815b1378 B acpi_gbl_gpe_lock
ffffffff815b1380 B acpi_gbl_global_lock_pending
ffffffff815b1381 B acpi_gbl_global_lock_present
ffffffff815b1382 B acpi_gbl_global_lock_acquired
ffffffff815b1384 B acpi_gbl_global_lock_handle
ffffffff815b1388 B acpi_gbl_global_lock_pending_lock
ffffffff815b1390 B acpi_gbl_global_lock_semaphore
ffffffff815b1398 B acpi_gbl_global_lock_mutex
ffffffff815b13a0 B acpi_gbl_mutex_info
ffffffff815b1460 B acpi_gbl_namespace_rw_lock
ffffffff815b1478 B acpi_gbl_osi_mutex
ffffffff815b1480 B acpi_gbl_integer_nybble_width
ffffffff815b1481 B acpi_gbl_integer_byte_width
ffffffff815b1482 B acpi_gbl_integer_bit_width
ffffffff815b1490 B acpi_gbl_original_dsdt_header
ffffffff815b14b8 B acpi_gbl_DSDT
ffffffff815b14c0 B acpi_gbl_xpm1b_enable
ffffffff815b14d0 B acpi_gbl_xpm1b_status
ffffffff815b14e0 B acpi_gbl_xpm1a_enable
ffffffff815b14f0 B acpi_gbl_xpm1a_status
ffffffff815b1500 B acpi_gbl_FACS
ffffffff815b1510 B acpi_gbl_root_table_list
ffffffff815b1528 B acpi_gbl_trace_dbg_layer
ffffffff815b152c B acpi_gbl_trace_dbg_level
ffffffff815b1530 B acpi_gbl_original_dbg_layer
ffffffff815b1534 B acpi_gbl_original_dbg_level
ffffffff815b1540 B acpi_fixed_event_count
ffffffff815b1554 B acpi_gpe_count
ffffffff815b1558 B acpi_gbl_no_resource_disassembly
ffffffff815b1559 B acpi_gbl_reduced_hardware
ffffffff815b155a B acpi_gbl_system_awake_and_running
ffffffff815b155c B acpi_gbl_trace_method_name
ffffffff815b1560 B acpi_gbl_trace_flags
ffffffff815b1564 B acpi_current_gpe_count
ffffffff815b1570 B acpi_gbl_FADT
ffffffff815b167c B acpi_gbl_disable_auto_repair
ffffffff815b167d B acpi_gbl_truncate_io_addresses
ffffffff815b167e B acpi_gbl_copy_dsdt_locally
ffffffff815b167f B acpi_gbl_enable_aml_debug_object
ffffffff815b1680 B acpi_gbl_all_methods_serialized
ffffffff815b1681 B acpi_gbl_enable_interpreter_slack
ffffffff815b1688 b hed_handle
ffffffff815b16a0 b dapei.26110
ffffffff815b16a8 B hest_disable
ffffffff815b16b0 b seq.23950
ffffffff815b16c0 B erst_disable
ffffffff815b16c4 b erst_lock
ffffffff815b16c8 b erst_tab
ffffffff815b16e0 b erst_erange
ffffffff815b1700 b reader_pos
ffffffff815b1720 B ghes_estatus_caches
ffffffff815b1740 B ghes_disable
ffffffff815b1748 b ghes_estatus_pool_size_request
ffffffff815b1750 b ghes_ioremap_lock_nmi
ffffffff815b1758 b ghes_ioremap_area
ffffffff815b1760 b ghes_ioremap_lock_irq
ffffffff815b1770 b ghes_proc_irq_work
ffffffff815b1788 b ghes_estatus_pool
ffffffff815b1790 b ghes_estatus_llist
ffffffff815b1798 b ghes_nmi_lock
ffffffff815b179c b seqno.30600
ffffffff815b17a0 b ghes_estatus_cache_alloced
ffffffff815b17a4 B pnp_debug
ffffffff815b17a8 B pnp_platform_devices
ffffffff815b17ac B pnp_lock
ffffffff815b17ae b __key.18648
ffffffff815b17b0 b num
ffffffff815b17c0 B xen_hvm_resume_frames
ffffffff815b17c8 b gnttab_interface
ffffffff815b17d0 b gnttab_list_lock
ffffffff815b17d4 b gnttab_free_count
ffffffff815b17d8 b nr_grant_frames
ffffffff815b17dc b grant_table_version
ffffffff815b17e0 b gnttab_free_head
ffffffff815b17e8 b gnttab_list
ffffffff815b17f0 b gnttab_free_callback_list
ffffffff815b17f8 b gnttab_shared
ffffffff815b1800 b boot_max_nr_grant_frames
ffffffff815b1808 b grstatus
ffffffff815b1810 b evtchn_to_irq
ffffffff815b1818 b pirq_needs_eoi
ffffffff815b1820 b debug_lock.24388
ffffffff815b1828 b pirq_eoi_map
ffffffff815b1830 b handler.23387
ffffffff815b1840 B balloon_stats
ffffffff815b1880 b frame_list
ffffffff815b2880 b xenbus_valloc_lock
ffffffff815b2884 b xenbus_irq
ffffffff815b28a0 b xs_state
ffffffff815b2970 b watches_lock
ffffffff815b2974 b xenwatch_pid
ffffffff815b2978 b watch_events_lock
ffffffff815b297a b __key.21476
ffffffff815b297a b __key.21477
ffffffff815b297a b __key.21478
ffffffff815b297a b __key.21479
ffffffff815b297a b __key.21480
ffffffff815b297a b __key.21481
ffffffff815b2980 B xenstored_ready
ffffffff815b2988 B xen_store_interface
ffffffff815b2990 B xen_store_evtchn
ffffffff815b2994 b __key.7311
ffffffff815b2998 b xen_store_mfn
ffffffff815b29a0 b __key.19510
ffffffff815b29a0 b __key.25302
ffffffff815b29a0 b __key.25303
ffffffff815b29a0 b __key.25304
ffffffff815b29a0 b __key.27381
ffffffff815b29a0 b ready_to_wait_for_devices
ffffffff815b29a4 b backend_state
ffffffff815b29c0 b balloon_dev
ffffffff815b2c38 b selfballoon_min_usable_mb
ffffffff815b2c40 b platform_mmio
ffffffff815b2c48 b platform_mmio_alloc
ffffffff815b2c50 b platform_mmiolen
ffffffff815b2c58 b callback_via
ffffffff815b2c60 B start_dma_addr
ffffffff815b2c68 b xen_io_tlb_nslabs
ffffffff815b2c70 b xen_io_tlb_start
ffffffff815b2c78 b xen_io_tlb_end
ffffffff815b2c80 B tty_class
ffffffff815b2c88 B tty_files_lock
ffffffff815b2c8a b redirect_lock
ffffffff815b2c90 b redirect
ffffffff815b2c98 b __key.27651
ffffffff815b2c98 b __key.27652
ffffffff815b2c98 b __key.27653
ffffffff815b2c98 b __key.27654
ffffffff815b2c98 b __key.27656
ffffffff815b2c98 b __key.27657
ffffffff815b2c98 b __key.27658
ffffffff815b2c98 b __key.27659
ffffffff815b2c98 b __key.27824
ffffffff815b2c98 b consdev
ffffffff815b2ca0 b tty_cdev
ffffffff815b2d20 b console_cdev
ffffffff815b2da0 b tty_ldisc_lock
ffffffff815b2dc0 b tty_ldiscs
ffffffff815b2eb0 b __key.21834
ffffffff815b2eb0 b __key.21835
ffffffff815b2eb0 b __key.21836
ffffffff815b2eb0 b __key.21837
ffffffff815b2eb0 b __key.21838
ffffffff815b2ec0 b ptm_driver
ffffffff815b2ec8 b pts_driver
ffffffff815b2ee0 b ptmx_fops
ffffffff815b2fc0 b ptmx_cdev
ffffffff815b3028 b __key.21283
ffffffff815b3040 B vt_dont_switch
ffffffff815b3042 b vt_event_lock
ffffffff815b3044 b disable_vt_switch
ffffffff815b3048 b vc_class
ffffffff815b3050 b __key.25317
ffffffff815b3050 b __key.25458
ffffffff815b3050 B sel_cons
ffffffff815b3058 b sel_end
ffffffff815b305c b use_unicode
ffffffff815b3060 b sel_buffer
ffffffff815b3068 b sel_buffer_lth
ffffffff815b3080 B vt_spawn_con
ffffffff815b30a0 b keyboard_notifier_list
ffffffff815b30b0 b zero.26537
ffffffff815b30b4 b kbd_event_lock
ffffffff815b30c0 b kbd_table
ffffffff815b31fb b rep
ffffffff815b3200 b key_down
ffffffff815b3260 b shift_state
ffffffff815b3264 b pressed.26837
ffffffff815b3268 b committing.26838
ffffffff815b326c b diacr
ffffffff815b3270 b dead_key_next
ffffffff815b3278 b releasestart.26839
ffffffff815b3280 b committed.26831
ffffffff815b3288 b chords.26830
ffffffff815b3290 b shift_down
ffffffff815b3299 b ledioctl
ffffffff815b32a0 b inv_translate
ffffffff815b33a0 b dflt
ffffffff815b33c0 B console_driver
ffffffff815b33c8 B console_blank_hook
ffffffff815b33d0 B last_console
ffffffff815b33d4 B fg_console
ffffffff815b33d8 B console_blanked
ffffffff815b33dc B do_poke_blanked_console
ffffffff815b33e0 B vc_cons
ffffffff815b3db8 B conswitchp
ffffffff815b3dc0 b vt_notifier_list
ffffffff815b3dd0 b scrollback_delta
ffffffff815b3dd4 b blank_timer_expired
ffffffff815b3dd8 b softcursor_original
ffffffff815b3ddc b old.26373
ffffffff815b3dde b oldx.26374
ffffffff815b3de0 b oldy.26375
ffffffff815b3de8 b tty0dev
ffffffff815b3e00 b con_driver_map
ffffffff815b3ff8 b master_display_fg
ffffffff815b4000 b __key.27116
ffffffff815b4000 b registered_con_driver
ffffffff815b4280 b blank_state
ffffffff815b4284 b printable
ffffffff815b4288 b printing_lock.26941
ffffffff815b428c b kmsg_con.26925
ffffffff815b4290 b __key.27387
ffffffff815b4290 b vtconsole_class
ffffffff815b4298 b vesa_blank_mode
ffffffff815b429c b ignore_poke
ffffffff815b42a0 b vc0_cdev
ffffffff815b4308 b __print_once.26904
ffffffff815b430c b vesa_off_interval
ffffffff815b4310 b saved_fg_console
ffffffff815b4314 b saved_last_console
ffffffff815b4318 b saved_want_console
ffffffff815b431c b saved_vc_mode
ffffffff815b4320 b saved_console_blanked
ffffffff815b4324 B funcbufleft
ffffffff815b4340 b hvc_structs_lock
ffffffff815b4348 b hvc_driver
ffffffff815b4360 b cons_ops
ffffffff815b43e0 b hvc_kicked
ffffffff815b43e8 b hvc_task
ffffffff815b4400 b xencons_lock
ffffffff815b4420 b buf.24502
ffffffff815b4640 b __key.25448
ffffffff815b4640 b mem_class
ffffffff815b4680 b last_value.25091
ffffffff815b46a0 b input_timer_state
ffffffff815b46c0 b fasync
ffffffff815b46c8 b bootid_spinlock.25353
ffffffff815b4700 b random_int_secret
ffffffff815b4740 b min_write_thresh
ffffffff815b4750 b sysctl_bootid
ffffffff815b4760 b input_pool_data
ffffffff815b4960 b nonblocking_pool_data
ffffffff815b49e0 b blocking_pool_data
ffffffff815b4a60 b misc_minors
ffffffff815b4a68 b misc_class
ffffffff815b4a70 b __key.19518
ffffffff815b4a70 b nvram_state_lock
ffffffff815b4a74 b nvram_open_cnt
ffffffff815b4a78 b nvram_open_mode
ffffffff815b4a80 b vga_default
ffffffff815b4a88 b vga_arbiter_used
ffffffff815b4a8a b vga_lock
ffffffff815b4a8c b vga_count
ffffffff815b4a90 b vga_decode_count
ffffffff815b4a94 b vga_user_lock
ffffffff815b4aa0 B devices_kset
ffffffff815b4aa8 B sysfs_dev_block_kobj
ffffffff815b4ab0 B sysfs_dev_char_kobj
ffffffff815b4ab8 B platform_notify_remove
ffffffff815b4ac0 B platform_notify
ffffffff815b4ac8 b __key.19604
ffffffff815b4ac8 b virtual_dir.19609
ffffffff815b4ad0 b dev_kobj
ffffffff815b4ad8 B system_kset
ffffffff815b4ae0 b __key.15238
ffffffff815b4ae0 b bus_kset
ffffffff815b4ae8 b __key.15400
ffffffff815b4ae8 b deferred_wq
ffffffff815b4af0 b driver_deferred_probe_enable
ffffffff815b4af4 b probe_count
ffffffff815b4af8 b class_kset
ffffffff815b4b00 b __key.19027
ffffffff815b4b00 B total_cpus
ffffffff815b4b08 B firmware_kobj
ffffffff815b4b10 b __key.9039
ffffffff815b4b10 b thread
ffffffff815b4b18 b __key.7055
ffffffff815b4b18 b req_lock
ffffffff815b4b20 b requests
ffffffff815b4b40 b power_attrs
ffffffff815b4b48 b __key.12994
ffffffff815b4b60 B suspend_stats
ffffffff815b4bf4 b __key.7985
ffffffff815b4bf4 b pm_transition
ffffffff815b4bf8 b async_error
ffffffff815b4c00 B events_check_enabled
ffffffff815b4c02 b events_lock
ffffffff815b4c04 b combined_event_count
ffffffff815b4c08 b saved_count
ffffffff815b4c10 b wakeup_sources_stats_dentry
ffffffff815b4c18 b __key.17262
ffffffff815b4c18 b __key.24895
ffffffff815b4c18 b __key.8112
ffffffff815b4c20 B node_devices
ffffffff815dc420 b __hugetlb_register_node
ffffffff815dc428 b __hugetlb_unregister_node
ffffffff815dc430 B hypervisor_kobj
ffffffff815dc440 B scsi_logging_level
ffffffff815dc444 b __key.28590
ffffffff815dc444 b __key.28591
ffffffff815dc444 b scsi_host_next_hn
ffffffff815dc448 b __key.28653
ffffffff815dc448 b tur_command.29813
ffffffff815dc450 B scsi_sdb_cache
ffffffff815dc458 b __key.7489
ffffffff815dc458 b async_scan_lock
ffffffff815dc460 B blank_transport_template
ffffffff815dc5d8 b __key.29243
ffffffff815dc5d8 b __key.29245
ffffffff815dc5e0 b scsi_dev_flags
ffffffff815dc6e0 b scsi_default_dev_flags
ffffffff815dc6e8 b scsi_table_header
ffffffff815dc6f0 b proc_scsi
ffffffff815dc700 b sd_cdb_pool
ffffffff815dc708 b sd_cdb_cache
ffffffff815dc710 b sd_index_lock
ffffffff815dc720 b sd_index_ida
ffffffff815dc748 b __key.30132
ffffffff815dc760 B libata_allow_tpm
ffffffff815dc764 B libata_noacpi
ffffffff815dc768 B libata_fua
ffffffff815dc76c B ata_print_id
ffffffff815dc770 b ata_force_tbl_size
ffffffff815dc778 b ata_force_tbl
ffffffff815dc780 b atapi_dmadir
ffffffff815dc784 b ata_probe_timeout
ffffffff815dc788 b ata_ignore_hpa
ffffffff815dc78c b atapi_an
ffffffff815dc790 b __key.38144
ffffffff815dc790 b __key.38147
ffffffff815dc790 b __key.38168
ffffffff815dc790 b __key.7489
ffffffff815dc790 b lock.38209
ffffffff815dc792 b __key.38252
ffffffff815dc7a0 b ata_scsi_rbuf_lock
ffffffff815dc7c0 b ata_scsi_rbuf
ffffffff815dd7c0 B ata_scsi_transport_template
ffffffff815dd7c8 b ata_sff_wq
ffffffff815dd7e0 b amd_lock
ffffffff815dd800 b amd_chipset
ffffffff815dd840 b serio_event_lock
ffffffff815dd842 b __key.19460
ffffffff815dd842 b __key.19711
ffffffff815dd844 b serio_no.19458
ffffffff815dd860 b i8042_nokbd
ffffffff815dd862 b i8042_lock
ffffffff815dd868 b i8042_platform_filter
ffffffff815dd870 b i8042_noaux
ffffffff815dd871 b i8042_noloop
ffffffff815dd872 b i8042_debug
ffffffff815dd878 b i8042_start_time
ffffffff815dd880 b i8042_ports
ffffffff815dd8e0 b i8042_platform_device
ffffffff815dd8e8 b i8042_aux_irq_registered
ffffffff815dd8ec b i8042_aux_irq
ffffffff815dd8f0 b i8042_kbd_irq_registered
ffffffff815dd8f4 b i8042_kbd_irq
ffffffff815dd8f8 b i8042_ctr
ffffffff815dd8f9 b i8042_mux_present
ffffffff815dd8fa b i8042_reset
ffffffff815dd8fb b i8042_initial_ctr
ffffffff815dd8fc b i8042_nomux
ffffffff815dd8fd b i8042_direct
ffffffff815dd8fe b i8042_dritek
ffffffff815dd900 b last_transmit.17396
ffffffff815dd908 b last_str.17397
ffffffff815dd909 b i8042_notimeout
ffffffff815dd90a b i8042_suppress_kbd_ack
ffffffff815dd90b b i8042_unlock
ffffffff815dd90c b i8042_dumbkbd
ffffffff815dd90d b i8042_pnp_kbd_registered
ffffffff815dd90e b i8042_pnp_aux_registered
ffffffff815dd910 b i8042_pnp_data_reg
ffffffff815dd914 b i8042_pnp_command_reg
ffffffff815dd918 b i8042_pnp_kbd_irq
ffffffff815dd920 b i8042_pnp_kbd_name
ffffffff815dd940 b i8042_pnp_kbd_devices
ffffffff815dd944 b i8042_pnp_aux_irq
ffffffff815dd960 b i8042_pnp_aux_name
ffffffff815dd980 b i8042_pnp_aux_devices
ffffffff815dd984 b i8042_nopnp
ffffffff815dd985 b i8042_bypass_aux_irq_test
ffffffff815dd986 b __key.7517
ffffffff815dd988 b __key.22765
ffffffff815dd988 b __key.22766
ffffffff815dd9a0 b __key.23900
ffffffff815dd9a0 b __key.24090
ffffffff815dd9a0 b proc_bus_input_dir
ffffffff815dd9c0 b input_table
ffffffff815dda00 b input_devices_state
ffffffff815dda04 b input_no.23956
ffffffff815dda08 b __key.21339
ffffffff815dda20 b mousedev_mix
ffffffff815dda40 b mousedev_table
ffffffff815ddb40 b __key.21952
ffffffff815ddb40 b __key.21953
ffffffff815ddb40 b atkbd_reset
ffffffff815ddb41 b atkbd_softrepeat
ffffffff815ddb42 b atkbd_terminal
ffffffff815ddb48 b atkbd_platform_fixup
ffffffff815ddb50 b atkbd_platform_fixup_data
ffffffff815ddb58 b atkbd_scroll
ffffffff815ddb59 b atkbd_extra
ffffffff815ddb60 b atkbd_platform_scancode_fixup
ffffffff815ddb68 b __key.20960
ffffffff815ddb68 b psmouse_resync_time
ffffffff815ddb70 b kpsmoused_wq
ffffffff815ddb78 b impaired_toshiba_kbc
ffffffff815ddb79 b broken_olpc_ec
ffffffff815ddb80 b lifebook_present
ffffffff815ddb81 b lifebook_use_6byte_proto
ffffffff815ddb88 b desired_serio_phys
ffffffff815ddba0 B rtc_class
ffffffff815ddbc0 b rtc_ida
ffffffff815ddbe8 b __key.20252
ffffffff815ddbe8 b __key.20255
ffffffff815ddbe8 b __key.20275
ffffffff815ddbf0 b old_rtc
ffffffff815ddc00 b old_system
ffffffff815ddc10 b old_delta
ffffffff815ddc20 b rtc_devt
ffffffff815ddc40 b pnp_driver_registered
ffffffff815ddc60 b cmos_rtc
ffffffff815ddc98 b platform_driver_registered
ffffffff815ddca0 b acpi_rtc_info
ffffffff815ddcb8 b watchdog_dev_busy
ffffffff815ddcc0 b wdd
ffffffff815ddcc8 B edac_err_assert
ffffffff815ddccc B edac_handlers
ffffffff815ddcd0 b edac_subsys_valid
ffffffff815ddce0 B cpufreq_global_kobject
ffffffff815ddd00 b cpufreq_transition_notifier_list
ffffffff815ddd58 b init_cpufreq_transition_notifier_list_called
ffffffff815ddd5a b cpufreq_driver_lock
ffffffff815ddd60 b cpufreq_driver
ffffffff815ddd68 b __key.17462
ffffffff815ddd68 b __key.7489
ffffffff815ddd68 b cpufreq_stats_lock
ffffffff815ddd70 b cpuidle_enter_ops
ffffffff815ddd78 b enabled_devices
ffffffff815ddd7c b __key.7607
ffffffff815ddd80 B cpuidle_driver_lock
ffffffff815ddd88 b cpuidle_curr_driver
ffffffff815ddd90 B cpuidle_curr_governor
ffffffff815ddd98 b sysfs_switch
ffffffff815ddd9c b __key.8440
ffffffff815ddda0 B dmi_available
ffffffff815ddda4 b dmi_initialized
ffffffff815ddda8 b dmi_num
ffffffff815dddaa b dmi_len
ffffffff815dddac b dmi_base
ffffffff815dddc0 b dmi_ident
ffffffff815dde58 B ibft_addr
ffffffff815dde60 b mmap_kset.14003
ffffffff815dde68 b map_entries_nr.14002
ffffffff815dde6c B i8253_lock
ffffffff815dde70 B hid_debug
ffffffff815dde74 b hid_ignore_special_drivers
ffffffff815dde78 b __key.25028
ffffffff815dde78 b id.24953
ffffffff815dde7c b __key.24971
ffffffff815dde80 b dev_data_list_lock
ffffffff815dde90 b ppr_notifier
ffffffff815ddea0 b iommu_pd_list_lock
ffffffff815ddea8 b pt_domain
ffffffff815ddeb0 b __key.23093
ffffffff815ddec0 B amd_iommu_pd_alloc_bitmap
ffffffff815ddec8 B amd_iommu_rlookup_table
ffffffff815dded0 B amd_iommu_alias_table
ffffffff815dded8 B amd_iommu_dev_table
ffffffff815ddee0 B amd_iommu_pd_lock
ffffffff815ddee4 B amd_iommus_present
ffffffff815ddf00 B amd_iommus
ffffffff815de000 B amd_iommu_unmap_flush
ffffffff815de002 B amd_iommu_last_bdf
ffffffff815de004 B amd_iommu_dump
ffffffff815de008 b dev_table_size
ffffffff815de00c b alias_table_size
ffffffff815de010 b rlookup_table_size
ffffffff815de020 b pcibios_fwaddrmap_lock
ffffffff815de028 B xen_pci_frontend
ffffffff815de030 b dev_domain_list_spinlock
ffffffff815de040 b quirk_aspm_offset
ffffffff815de100 b toshiba_line_size
ffffffff815de120 B pcibios_disable_irq
ffffffff815de128 b eisa_irq_mask.29018
ffffffff815de130 b pirq_table
ffffffff815de138 b broken_hp_bios_irq9
ffffffff815de140 b pirq_router
ffffffff815de160 b pirq_router_dev
ffffffff815de168 b acer_tm360_irqrouting
ffffffff815de170 B pci_config_lock
ffffffff815de178 B pci_root_bus
ffffffff815de180 B pirq_table_addr
ffffffff815de188 B noioapicreroute
ffffffff815de18c B noioapicquirk
ffffffff815de190 B pci_routeirq
ffffffff815de194 B pci_early_dump_regs
ffffffff815de198 b smbios_type_b1_flag
ffffffff815de19c b pci_bf_sort
ffffffff815de1a0 B pci_root_info
ffffffff815df020 B pci_root_num
ffffffff815df040 B saved_context
ffffffff815df180 b br_ioctl_hook
ffffffff815df188 b vlan_ioctl_hook
ffffffff815df190 b dlci_ioctl_hook
ffffffff815df198 b __key.41667
ffffffff815df198 b warned.42188
ffffffff815df19c b net_family_lock
ffffffff815df1c0 B memcg_socket_limit_enabled
ffffffff815df1d0 b warncomm.44070
ffffffff815df1e0 b warned.44069
ffffffff815df1e4 b __key.44310
ffffffff815df1e8 b proto_inuse_idx
ffffffff815df200 b est_tree_lock
ffffffff815df220 b elist
ffffffff815df3d0 b est_root
ffffffff815df400 B init_net
ffffffff815df900 b net_cachep
ffffffff815df908 b netns_wq
ffffffff815df910 b cleanup_list_lock
ffffffff815df920 b net_generic_ids
ffffffff815df980 b net_secret
ffffffff815df9c0 b empty.37194
ffffffff815dfa00 b ptype_lock
ffffffff815dfa20 b dev_boot_setup
ffffffff815dfb60 b netdev_chain
ffffffff815dfb80 b gifconf_list
ffffffff815dfcc0 b ifindex.47961
ffffffff815dfcc4 b busy.33051
ffffffff815dfce0 B dst_default_metrics
ffffffff815dfd18 b dst_busy_list
ffffffff815dfd20 b netevent_notif_chain
ffffffff815dfd30 b neigh_tables
ffffffff815dfd40 b rtnl_msg_handlers
ffffffff815e0150 b rtattr_max
ffffffff815e0158 b rta_buf
ffffffff815e0160 b lweventlist_lock
ffffffff815e0168 b linkwatch_nextevent
ffffffff815e0170 b linkwatch_flags
ffffffff815e0180 B sock_diag_nlsk
ffffffff815e0188 b inet_rcv_compat
ffffffff815e01a0 b sock_diag_handlers
ffffffff815e02e0 B flow_cache_genid
ffffffff815e0300 b flow_cache_global
ffffffff815e0368 b flow_cache_gc_lock
ffffffff815e036a b __key.7489
ffffffff815e036c b rps_dev_flow_lock.36355
ffffffff815e036e b rps_map_lock.36313
ffffffff815e0370 b __key.36720
ffffffff815e0380 b nl_table_users
ffffffff815e0388 b nl_table
ffffffff815e0390 b __key.37411
ffffffff815e0390 b __key.37412
ffffffff815e0390 b netlink_chain
ffffffff815e03a0 b family_ht
ffffffff815e04c0 b rt_hash_locks
ffffffff815e04c8 b __rt_peer_genid
ffffffff815e04cc b ip_fb_id_lock.45713
ffffffff815e04d0 b ip_fallback_id.45715
ffffffff815e04d8 b last_gc.45428
ffffffff815e04e0 b ip_rt_max_size
ffffffff815e04e4 b equilibrium.45430
ffffffff815e04e8 b rover.45429
ffffffff815e04ec b __key.28822
ffffffff815e0500 b expires_work
ffffffff815e0558 b expires_ljiffies
ffffffff815e0560 b rover.45369
ffffffff815e0580 b empty
ffffffff815e05c0 b gc_work
ffffffff815e0618 b gc_lock
ffffffff815e0620 b ip4_frags
ffffffff815e0898 b zero
ffffffff815e08a0 B ip_ra_chain
ffffffff815e08a8 b ip_ra_lock
ffffffff815e08ac b hint.39836
ffffffff815e08b0 B sysctl_local_reserved_ports
ffffffff815e08c0 B tcp_sockets_allocated
ffffffff815e08e8 B tcp_memory_allocated
ffffffff815e0900 B tcp_orphan_count
ffffffff815e0928 b tcp_secret_generating
ffffffff815e0930 b tcp_secret_locker
ffffffff815e0938 b tcp_secret_primary
ffffffff815e0940 b tcp_secret_secondary
ffffffff815e0948 b tcp_secret_retiring
ffffffff815e0950 b __key.45701
ffffffff815e0950 b __key.45703
ffffffff815e0960 b tcp_secret_one
ffffffff815e09c0 b tcp_secret_two
ffffffff815e0a40 B tcp_hashinfo
ffffffff815e0cc0 B sysctl_tcp_max_ssthresh
ffffffff815e0cc4 b tcp_cong_list_lock
ffffffff815e0cc8 b __print_once.41336
ffffffff815e0cd0 B udp_memory_allocated
ffffffff815e0ce0 b inet_addr_lst
ffffffff815e14e0 b inet_addr_hash_lock
ffffffff815e1500 B ipv4_config
ffffffff815e1508 b inetsw_lock
ffffffff815e1520 b inetsw
ffffffff815e15e0 b fib_info_cnt
ffffffff815e15e4 b fib_info_lock
ffffffff815e1600 b fib_info_devhash
ffffffff815e1e00 b fib_info_hash_size
ffffffff815e1e08 b fib_info_hash
ffffffff815e1e10 b fib_info_laddrhash
ffffffff815e1e18 b tnode_free_head
ffffffff815e1e20 b tnode_free_size
ffffffff815e1e40 b ping_table
ffffffff815e2048 b ping_port_rover
ffffffff815e204c b ip_ping_group_range_min
ffffffff815e2054 b zero
ffffffff815e2060 B syncookie_secret
ffffffff815e20e8 b sysctl_hdr
ffffffff815e20f0 b __key.27703
ffffffff815e2100 b idx_generator.41565
ffffffff815e2104 b xfrm_policy_sk_bundle_lock
ffffffff815e2108 b xfrm_policy_sk_bundles
ffffffff815e2120 b xfrm_policy_afinfo
ffffffff815e2260 b dummy.42214
ffffffff815e22a0 b xfrm_state_afinfo
ffffffff815e23e0 b xfrm_state_gc_lock
ffffffff815e23e2 b xfrm_state_lock
ffffffff815e23e4 b acqseq.41924
ffffffff815e23e8 b __key.42243
ffffffff815e2400 B unix_table_lock
ffffffff815e2420 B unix_socket_table
ffffffff815e2c28 b unix_nr_socks
ffffffff815e2c30 b __key.37161
ffffffff815e2c30 b __key.37162
ffffffff815e2c30 B unix_tot_inflight
ffffffff815e2c34 b unix_gc_lock
ffffffff815e2c36 b gc_in_progress
ffffffff815e2c38 b klist_remove_lock
ffffffff815e3000 B __brk_base
ffffffff815e3000 B __bss_stop
ffffffff815f3000 b .brk.shared_info_page_brk
ffffffff815f4000 b .brk.level1_ident_pgt
ffffffff815f8000 b .brk.p2m_missing
ffffffff815f9000 b .brk.p2m_mid_missing
ffffffff815fa000 b .brk.p2m_mid_missing_mfn
ffffffff815fb000 b .brk.p2m_top
ffffffff815fc000 b .brk.p2m_top_mfn
ffffffff815fd000 b .brk.p2m_top_mfn_p
ffffffff815fe000 b .brk.p2m_identity
ffffffff815ff000 b .brk.p2m_mid
ffffffff817f3000 b .brk.p2m_mid_mfn
ffffffff819e7000 b .brk.p2m_mid_identity
ffffffff819ed000 b .brk.m2p_overrides
ffffffff819f1000 b .brk.dmi_alloc
ffffffff81a01000 B __brk_limit
ffffffff81a01000 A _end
ffffffffff700000 A VDSO64_PRELINK



------=_20120801183353_31106
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

------=_20120801183353_31106--



From xen-devel-bounces@lists.xen.org Wed Aug 01 16:45:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 16:45:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swc2J-00022T-9x; Wed, 01 Aug 2012 16:44:39 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Swc2H-00022H-Bl
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 16:44:37 +0000
Received: from [85.158.143.35:10085] by server-1.bemta-4.messagelabs.com id
	F1/F9-24392-4FC59105; Wed, 01 Aug 2012 16:44:36 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1343839474!13000563!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjEzMjU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31077 invoked from network); 1 Aug 2012 16:44:36 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 16:44:36 -0000
X-IronPort-AV: E=Sophos;i="4.77,695,1336363200"; d="scan'208";a="33230818"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 12:43:59 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 1 Aug 2012 12:43:59 -0400
Received: from [10.80.246.74] (helo=army.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1Swc1e-0005EM-Fx;
	Wed, 01 Aug 2012 17:43:58 +0100
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Wed, 1 Aug 2012 16:43:57 +0000
Message-ID: <1343839438-3321-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.9.1
MIME-Version: 1.0
Cc: tim@xen.org, Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH] arm: fix gic_init_secondary_cpu.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Using spin_lock_irq here is unnecessary (interrupts are not yet enabled) and
wrong (since they will get unexpectedly re-enabled by spin_unlock_irq).

We can just use spin_lock/spin_unlock.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Cc: tim@xen.org
---

Now an SMP model gets as far as hanging at dom0 "Calibrating delay loop...".

Tim, didn't you diagnose that a while ago?
---
 xen/arch/arm/gic.c |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 378158b..6f5b0e1 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -323,10 +323,10 @@ int __init gic_init(void)
 /* Set up the per-CPU parts of the GIC for a secondary CPU */
 void __cpuinit gic_init_secondary_cpu(void)
 {
-    spin_lock_irq(&gic.lock);
+    spin_lock(&gic.lock);
     gic_cpu_init();
     gic_hyp_init();
-    spin_unlock_irq(&gic.lock);
+    spin_unlock(&gic.lock);
 }
 
 /* Shut down the per-CPU GIC interface */
-- 
1.7.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 16:47:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 16:47:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swc51-0002E6-TS; Wed, 01 Aug 2012 16:47:27 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin.df@gmail.com>) id 1Swc50-0002Dr-F8
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 16:47:26 +0000
Received: from [85.158.143.99:30564] by server-3.bemta-4.messagelabs.com id
	8E/8F-01511-D9D59105; Wed, 01 Aug 2012 16:47:25 +0000
X-Env-Sender: raistlin.df@gmail.com
X-Msg-Ref: server-15.tower-216.messagelabs.com!1343839644!29192974!1
X-Originating-IP: [209.85.212.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27420 invoked from network); 1 Aug 2012 16:47:24 -0000
Received: from mail-wi0-f173.google.com (HELO mail-wi0-f173.google.com)
	(209.85.212.173)
	by server-15.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 16:47:24 -0000
Received: by wibhm6 with SMTP id hm6so3453755wib.14
	for <xen-devel@lists.xen.org>; Wed, 01 Aug 2012 09:47:24 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:subject:from:to:cc:date:in-reply-to:references
	:content-type:x-mailer:mime-version;
	bh=w3BfDctLJ/MbFbQ6zbD5rZt2tddb5yA82OsTFbEvK14=;
	b=uTuDapdXQyCcikVZ+wNE8U5Sd3hTT3Urya/5uvErABNqu45DekgtmmFXD9Yn2IV9Jg
	o3O9FRoeqJsNj+xIdZsHhfXlFOFCYThM1xvIBn4AkGjeZEoo9lUbtVs6JNRRv45aNsHF
	gjJD2xLNJH2jDUGRucBU5H4QsA1SAbVIiOUADZ5WjiExzINb6LBKo/nGgjEPu7n2Ds21
	lRlXejm4DJpoPKIT+YANlK6w1DCKkuq/dbOEzV96Q5562LY9oSz0OjIW5G82G17v3r5o
	oqrUc034HjMU1ji8LHQGyJAVP98WbM0yqshEy4M8IK8+ZF8Y4wUmG5IaAwWZCyhweD9K
	PgfQ==
Received: by 10.217.2.197 with SMTP id p47mr8541577wes.55.1343839644101;
	Wed, 01 Aug 2012 09:47:24 -0700 (PDT)
Received: from [192.168.0.40] (ip-176-206.sn2.eutelia.it. [83.211.176.206])
	by mx.google.com with ESMTPS id j6sm9913971wiy.4.2012.08.01.09.47.22
	(version=TLSv1/SSLv3 cipher=OTHER);
	Wed, 01 Aug 2012 09:47:23 -0700 (PDT)
Message-ID: <1343839634.4958.39.camel@Solace>
From: Dario Faggioli <raistlin@linux.it>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Date: Wed, 01 Aug 2012 18:47:14 +0200
In-Reply-To: <501959BE.60801@citrix.com>
References: <1343837796.4958.32.camel@Solace> <501959BE.60801@citrix.com>
X-Mailer: Evolution 3.4.3-1
Mime-Version: 1.0
Cc: Andre Przywara <andre.przywara@amd.com>,
	Anil Madhavapeddy <anil@recoil.org>, George Dunlap <dunlapg@gmail.com>,
	xen-devel <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>, "Zhang, Yang Z" <yang.z.zhang@intel.com>
Subject: Re: [Xen-devel] NUMA TODO-list for xen-devel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8435584703076205533=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--===============8435584703076205533==
Content-Type: multipart/signed; micalg="pgp-sha1"; protocol="application/pgp-signature";
	boundary="=-hxuxtGRndUgIcnhdldcP"


--=-hxuxtGRndUgIcnhdldcP
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Wed, 2012-08-01 at 17:30 +0100, Andrew Cooper wrote:
> On 01/08/12 17:16, Dario Faggioli wrote:
>
> ...
>
> > - Automatic placement at guest creation time. Basics are there and
> > will be shipping with 4.2. However, a lot of other things are
> > missing and/or can be improved, for instance:
> > [D] * automated verification and testing of the placement;
> > * benchmarks and improvements of the placement heuristic;
> > [D] * choosing/building up some measure of node load (more accurate
> > than just counting vcpus) onto which to rely during placement;
> > * consider IONUMA during placement;
> > * automatic placement of Dom0, if possible (my current series is
> > only affecting DomU)
> > * having internal xen data structure honour the placement (e.g.,=20
> > I've been told that right now vcpu stacks are always allocated
> > on node 0... Andrew?).
> >
>=20
> - Xen NUMA internals.  Placing items such as the per-cpu stacks and
> data area on the local NUMA node, rather than unconditionally on node
> 0 at the moment.  As part of this, there will be changes to
> alloc_{dom,xen}heap_page() to allow specification of which node(s) to
> allocate memory from.

As you see, I had already tried to capture that (since you told me about it a
couple of weeks ago :-) ). I'll add your wording of it (much better than mine)
to the wiki... I understand you're working on this, aren't you? Can I put that
down too?

Thanks and Regards,
Dario

--=20
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-hxuxtGRndUgIcnhdldcP
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iEYEABECAAYFAlAZXZMACgkQk4XaBE3IOsTNSwCgrGgKU8jQt8a2kbhcsLYMbqib
kEsAoKgFvRSZFTTICahBuzYjV8Z+SARl
=Lbo6
-----END PGP SIGNATURE-----

--=-hxuxtGRndUgIcnhdldcP--



--===============8435584703076205533==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8435584703076205533==--



From xen-devel-bounces@lists.xen.org Wed Aug 01 16:47:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 16:47:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swc51-0002E6-TS; Wed, 01 Aug 2012 16:47:27 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin.df@gmail.com>) id 1Swc50-0002Dr-F8
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 16:47:26 +0000
Received: from [85.158.143.99:30564] by server-3.bemta-4.messagelabs.com id
	8E/8F-01511-D9D59105; Wed, 01 Aug 2012 16:47:25 +0000
X-Env-Sender: raistlin.df@gmail.com
X-Msg-Ref: server-15.tower-216.messagelabs.com!1343839644!29192974!1
X-Originating-IP: [209.85.212.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27420 invoked from network); 1 Aug 2012 16:47:24 -0000
Received: from mail-wi0-f173.google.com (HELO mail-wi0-f173.google.com)
	(209.85.212.173)
	by server-15.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 16:47:24 -0000
Received: by wibhm6 with SMTP id hm6so3453755wib.14
	for <xen-devel@lists.xen.org>; Wed, 01 Aug 2012 09:47:24 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:subject:from:to:cc:date:in-reply-to:references
	:content-type:x-mailer:mime-version;
	bh=w3BfDctLJ/MbFbQ6zbD5rZt2tddb5yA82OsTFbEvK14=;
	b=uTuDapdXQyCcikVZ+wNE8U5Sd3hTT3Urya/5uvErABNqu45DekgtmmFXD9Yn2IV9Jg
	o3O9FRoeqJsNj+xIdZsHhfXlFOFCYThM1xvIBn4AkGjeZEoo9lUbtVs6JNRRv45aNsHF
	gjJD2xLNJH2jDUGRucBU5H4QsA1SAbVIiOUADZ5WjiExzINb6LBKo/nGgjEPu7n2Ds21
	lRlXejm4DJpoPKIT+YANlK6w1DCKkuq/dbOEzV96Q5562LY9oSz0OjIW5G82G17v3r5o
	oqrUc034HjMU1ji8LHQGyJAVP98WbM0yqshEy4M8IK8+ZF8Y4wUmG5IaAwWZCyhweD9K
	PgfQ==
Received: by 10.217.2.197 with SMTP id p47mr8541577wes.55.1343839644101;
	Wed, 01 Aug 2012 09:47:24 -0700 (PDT)
Received: from [192.168.0.40] (ip-176-206.sn2.eutelia.it. [83.211.176.206])
	by mx.google.com with ESMTPS id j6sm9913971wiy.4.2012.08.01.09.47.22
	(version=TLSv1/SSLv3 cipher=OTHER);
	Wed, 01 Aug 2012 09:47:23 -0700 (PDT)
Message-ID: <1343839634.4958.39.camel@Solace>
From: Dario Faggioli <raistlin@linux.it>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Date: Wed, 01 Aug 2012 18:47:14 +0200
In-Reply-To: <501959BE.60801@citrix.com>
References: <1343837796.4958.32.camel@Solace> <501959BE.60801@citrix.com>
X-Mailer: Evolution 3.4.3-1
Mime-Version: 1.0
Cc: Andre Przywara <andre.przywara@amd.com>,
	Anil Madhavapeddy <anil@recoil.org>, George Dunlap <dunlapg@gmail.com>,
	xen-devel <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>, "Zhang, Yang Z" <yang.z.zhang@intel.com>
Subject: Re: [Xen-devel] NUMA TODO-list for xen-devel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8435584703076205533=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--===============8435584703076205533==
Content-Type: multipart/signed; micalg="pgp-sha1"; protocol="application/pgp-signature";
	boundary="=-hxuxtGRndUgIcnhdldcP"


--=-hxuxtGRndUgIcnhdldcP
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Wed, 2012-08-01 at 17:30 +0100, Andrew Cooper wrote:
> On 01/08/12 17:16, Dario Faggioli wrote:
>
> ...
>
> > - Automatic placement at guest creation time. Basics are there and
> > will be shipping with 4.2. However, a lot of other things are
> > missing and/or can be improved, for instance:
> > [D] * automated verification and testing of the placement;
> > * benchmarks and improvements of the placement heuristic;
> > [D] * choosing/building up some measure of node load (more accurate
> > than just counting vcpus) onto which to rely during placement;
> > * consider IONUMA during placement;
> > * automatic placement of Dom0, if possible (my current series is
> > only affecting DomU)
> > * having internal xen data structure honour the placement (e.g.,
> > I've been told that right now vcpu stacks are always allocated
> > on node 0... Andrew?).
> >
> 
> - Xen NUMA internals.  Placing items such as the per-cpu stacks and
> data area on the local NUMA node, rather than unconditionally on node
> 0 at the moment.  As part of this, there will be changes to
> alloc_{dom,xen}heap_page() to allow specification of which node(s) to
> allocate memory from.

As you see, I already tried to consider that (as you told me it does
that a couple of weeks ago :-) ). I'll add your wording of it (much
better than mine) to the wiki... I understand you're working on this,
aren't you? Can I put you down for it?

Thanks and Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-hxuxtGRndUgIcnhdldcP
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iEYEABECAAYFAlAZXZMACgkQk4XaBE3IOsTNSwCgrGgKU8jQt8a2kbhcsLYMbqib
kEsAoKgFvRSZFTTICahBuzYjV8Z+SARl
=Lbo6
-----END PGP SIGNATURE-----

--=-hxuxtGRndUgIcnhdldcP--



--===============8435584703076205533==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8435584703076205533==--



From xen-devel-bounces@lists.xen.org Wed Aug 01 16:52:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 16:52:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swc9X-0002T2-LB; Wed, 01 Aug 2012 16:52:07 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1Swc9V-0002Sv-Sv
	for Xen-devel@lists.xensource.com; Wed, 01 Aug 2012 16:52:06 +0000
Received: from [85.158.143.99:46948] by server-3.bemta-4.messagelabs.com id
	95/53-01511-5BE59105; Wed, 01 Aug 2012 16:52:05 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-5.tower-216.messagelabs.com!1343839921!29466359!1
X-Originating-IP: [188.40.164.121]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_SEX
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23898 invoked from network); 1 Aug 2012 16:52:04 -0000
Received: from static.121.164.40.188.clients.your-server.de (HELO
	smtp.eikelenboom.it) (188.40.164.121)
	by server-5.tower-216.messagelabs.com with AES256-SHA encrypted SMTP;
	1 Aug 2012 16:52:04 -0000
Received: from 225-68-ftth.onsneteindhoven.nl ([88.159.68.225]:50384
	helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.72) (envelope-from <linux@eikelenboom.it>)
	id 1Swc5x-0005BU-Ma; Wed, 01 Aug 2012 18:48:25 +0200
Date: Wed, 1 Aug 2012 18:51:56 +0200
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <336853058.20120801185156@eikelenboom.it>
To: George Dunlap <George.Dunlap@eu.citrix.com>
In-Reply-To: <CAFLBxZYXOiWhAni3X23O62DbbigzFECMbvpUFnGs38y12h2V0g@mail.gmail.com>
References: <20120626181707.4203d336@mantra.us.oracle.com>
	<CAFLBxZagm=53bZ=KaWmd7XQkkAduD+znECJRsaJUqrGJDZgkQQ@mail.gmail.com>
	<20120801152508.GA7132@phenom.dumpdata.com>
	<CAFLBxZb=yzcmxewCei49M6DX16x7eKZRGmxMUDbaKOTXgGZCGA@mail.gmail.com>
	<20120801160515.GA16155@phenom.dumpdata.com>
	<CAFLBxZYXOiWhAni3X23O62DbbigzFECMbvpUFnGs38y12h2V0g@mail.gmail.com>
MIME-Version: 1.0
Cc: Ian Campbell <Ian.Campbell@citrix.com>, Keir Fraser <keir.xen@gmail.com>,
	"Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	"stefano.stabellini@eu.citrix.com" <stefano.stabellini@eu.citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [HYBRID]: status update...
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Wednesday, August 1, 2012, 6:21:57 PM, you wrote:

> On Wed, Aug 1, 2012 at 5:05 PM, Konrad Rzeszutek Wilk
> <konrad.wilk@oracle.com> wrote:
>> On Wed, Aug 01, 2012 at 04:59:58PM +0100, George Dunlap wrote:
>>> On Wed, Aug 1, 2012 at 4:25 PM, Konrad Rzeszutek Wilk
>>> <konrad.wilk@oracle.com> wrote:
>>> > On Wed, Aug 01, 2012 at 04:25:01PM +0100, George Dunlap wrote:
>>> >> I hope this isn't bikeshedding; but I don't like "Hybrid" as a name
>>> >> for this feature, mainly for "marketing" reasons.  I think it will
>>> >> probably give people the wrong idea about what the technology does.
>>> >> PV domains is one of Xen's really distinct advantages -- much simpler
>>> >> interface, lighter-weight (no qemu, legacy boot), &c &c.  As I
>>> >> understand it, the mode you've been calling "hybrid" still has all of
>>> >> these advantages -- it just uses some of the HVM hardware extensions
>>> >> to make the interface even simpler / faster.  I'm afraid "hybrid" may
>>> >> be seen as, "Even Xen has had to give up on PV."
>>> >>
>>> >> Can I suggest something like "PVH" instead?  That (at least to me)
>>> >> makes it clear that PV domains are still fully PV, but just use some
>>> >> HVM extensions.
>>> >
>>> > if (xen_pvh_domain()) ?
>>> >
>>> > if (xen_pv_h_domain()) ?
>>> >
>>> > if (xen_h_domain()) ?
>>> >
>>> > if (xen_pvplus_domain()) ?
>>> >
>>> > if (xen_pv_ext_domain()) ?
>>> >
>>> > I think I like 'pv+'?
>>>
>>> I could deal with pv+.  However, in general I dislike that kind of
>>> "now even better!" marketing.  PV+, EPV (Enhanced / extended PV), PVX
>>> (Extreme PV!) -- they all sound cool when they come out, but five
>>> years later, when they're not so new or sexy anymore, they all sound
>>> lame.  PVH is just descriptive -- it will always be PV with HVM
>>> extensions, so it will age much better. :-)
>>
>> How about pv_with_mmu_in_hvm_container_domain() ?
>>
>> Ok, that is a bit too lengthy. How about then:
>>
>> if (xen_pvhvm_ext_domain()) ?
>>
>> The 'if (xen_pvh_domain())' is just one character short of 'xen_pv_domain()'
>> and one might not notice it. Perhaps then 'if (xen_pv_h_domain())' ?

> Hmm -- that's an interesting issue I hadn't thought of.  "PVHVM" has
> already been sort of taken by Stefano's extensions to allow Linux
> kernels booted in HVM mode to use some of the PV extensions.  I tend
> to think "xen_pvh_domain()" is probably OK, but maybe calling it
> "pvext" (or "pvhext") in the code, and "PVH" in documentation /
> stories?  Just using "pvext" everywhere could work as well; it's a
> little bit "now even better!", but not as much as pvplus.

Am I mistaken in saying that "Hybrid" and "PVHVM" are both targeting the same thing, but approaching it from a different base / direction (PV + HVM extensions versus HVM + PV extensions) ?
If that's the case, how far apart are they from meeting each other in the middle, and what would be the fundamental difference ?
It seems to be the full qemu container / emulated driver model availability on HVM + PV extensions ?

Just wondering if the naming should express that most explicit and fundamental difference (if there is one :-) ) ?



>  -George





_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 16:53:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 16:53:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwcAr-0002Y7-4F; Wed, 01 Aug 2012 16:53:29 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1SwcAp-0002Xz-G3
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 16:53:27 +0000
Received: from [85.158.143.99:51811] by server-2.bemta-4.messagelabs.com id
	30/3D-17938-60F59105; Wed, 01 Aug 2012 16:53:26 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-8.tower-216.messagelabs.com!1343840005!20032069!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjEzMjU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21445 invoked from network); 1 Aug 2012 16:53:26 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 16:53:26 -0000
X-IronPort-AV: E=Sophos;i="4.77,695,1336363200"; d="scan'208,217";a="33231965"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 12:53:21 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 1 Aug 2012 12:53:22 -0400
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1SwcAj-0005Mz-4Z;
	Wed, 01 Aug 2012 17:53:21 +0100
Message-ID: <50195F01.5020405@citrix.com>
Date: Wed, 1 Aug 2012 17:53:21 +0100
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120714 Thunderbird/14.0
MIME-Version: 1.0
To: Dario Faggioli <raistlin@linux.it>
References: <1343837796.4958.32.camel@Solace> <501959BE.60801@citrix.com>
	<1343839634.4958.39.camel@Solace>
In-Reply-To: <1343839634.4958.39.camel@Solace>
X-Enigmail-Version: 1.4.3
Cc: Andre Przywara <andre.przywara@amd.com>,
	Anil Madhavapeddy <anil@recoil.org>, George Dunlap <dunlapg@gmail.com>,
	xen-devel <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>, "Zhang, Yang Z" <yang.z.zhang@intel.com>
Subject: Re: [Xen-devel] NUMA TODO-list for xen-devel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2161362334832821623=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2161362334832821623==
Content-Type: multipart/alternative;
	boundary="------------030103050606070408050209"

--------------030103050606070408050209
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit


On 01/08/12 17:47, Dario Faggioli wrote:
> On Wed, 2012-08-01 at 17:30 +0100, Andrew Cooper wrote:
>> On 01/08/12 17:16, Dario Faggioli wrote:
>>
>> ...
>>
>>> - Automatic placement at guest creation time. Basics are there and
>>> will be shipping with 4.2. However, a lot of other things are
>>> missing and/or can be improved, for instance:
>>> [D] * automated verification and testing of the placement;
>>> * benchmarks and improvements of the placement heuristic;
>>> [D] * choosing/building up some measure of node load (more accurate
>>> than just counting vcpus) onto which to rely during placement;
>>> * consider IONUMA during placement;
>>> * automatic placement of Dom0, if possible (my current series is
>>> only affecting DomU)
>>> * having internal xen data structure honour the placement (e.g.,
>>> I've been told that right now vcpu stacks are always allocated
>>> on node 0... Andrew?).
>>>
>>
>> - Xen NUMA internals. Placing items such as the per-cpu stacks and
>> data area on the local NUMA node, rather than unconditionally on node
>> 0 at the moment. As part of this, there will be changes to
>> alloc_{dom,xen}heap_page() to allow specification of which node(s) to
>> allocate memory from.
>
> As you see, I already tried to consider that (as you told me it does
> that couple of weeks ago :-) ). I'll add your wording of it (much better
> than mine) to the wiki... I understand you're working on this, aren't
> you? Can I put that down to?
>
> Thanks and Regards,
> Dario
>

Wow - I completely managed to miss that while reading.  Someone will be
working on it for XS.next, and that someone will probably be me - put me
down for it.

-- 
Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
T: +44 (0)1223 225 900, http://www.citrix.com


--------------030103050606070408050209
Content-Type: text/html; charset="UTF-8"
Content-Transfer-Encoding: 8bit

<html>
  <head>
    <meta content="text/html; charset=UTF-8" http-equiv="Content-Type">
  </head>
  <body bgcolor="#FFFFFF" text="#000000">
    <br>
    On 01/08/12 17:47, Dario Faggioli wrote:<br>
    <span style="white-space: pre;">&gt; On Wed, 2012-08-01 at 17:30
      +0100, Andrew Cooper wrote:<br>
      &gt;&gt; On 01/08/12 17:16, Dario Faggioli wrote:<br>
      &gt;&gt;<br>
      &gt;&gt; ...<br>
      &gt;&gt;<br>
      &gt;&gt;&gt; - Automatic placement at guest creation time. Basics
      are there and<br>
      &gt;&gt;&gt; will be shipping with 4.2. However, a lot of other
      things are<br>
      &gt;&gt;&gt; missing and/or can be improved, for instance:<br>
      &gt;&gt;&gt; [D] * automated verification and testing of the
      placement;<br>
      &gt;&gt;&gt; * benchmarks and improvements of the placement
      heuristic;<br>
      &gt;&gt;&gt; [D] * choosing/building up some measure of node load
      (more accurate<br>
      &gt;&gt;&gt; than just counting vcpus) onto which to rely during
      placement;<br>
      &gt;&gt;&gt; * consider IONUMA during placement;<br>
      &gt;&gt;&gt; * automatic placement of Dom0, if possible (my
      current series is<br>
      &gt;&gt;&gt; only affecting DomU)<br>
      &gt;&gt;&gt; * having internal xen data structure honour the
      placement (e.g., <br>
      &gt;&gt;&gt; I've been told that right now vcpu stacks are always
      allocated<br>
      &gt;&gt;&gt; on node 0... Andrew?).<br>
      &gt;&gt;&gt;<br>
      &gt;&gt;<br>
      &gt;&gt; - Xen NUMA internals. Placing items such as the per-cpu
      stacks and<br>
      &gt;&gt; data area on the local NUMA node, rather than
      unconditionally on node<br>
      &gt;&gt; 0 at the moment. As part of this, there will be changes
      to<br>
      &gt;&gt; alloc_{dom,xen}heap_page() to allow specification of
      which node(s) to<br>
      &gt;&gt; allocate memory from.<br>
      &gt;<br>
      &gt; As you see, I already tried to consider that (as you told me
      it does<br>
      &gt; that couple of weeks ago :-) ). I'll add your wording of it
      (much better<br>
      &gt; than mine) to the wiki... I understand you're working on
      this, aren't<br>
      &gt; you? Can I put that down to?<br>
      &gt;<br>
      &gt; Thanks and Regards,<br>
      &gt; Dario<br>
      &gt;</span><br>
    <br>
    Wow - I completely managed to miss that while reading.  Someone will
    be working on it for XS.next, and that someone will probably be me -
    put me down for it.<br>
    <br>
    -- <br>
    Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer<br>
    T: +44 (0)1223 225 900, <a class="moz-txt-link-freetext" href="http://www.citrix.com">http://www.citrix.com</a><br>
    <br>
  </body>
</html>

--------------030103050606070408050209--


--===============2161362334832821623==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2161362334832821623==--


From xen-devel-bounces@lists.xen.org Wed Aug 01 16:53:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 16:53:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwcAr-0002Y7-4F; Wed, 01 Aug 2012 16:53:29 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1SwcAp-0002Xz-G3
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 16:53:27 +0000
Received: from [85.158.143.99:51811] by server-2.bemta-4.messagelabs.com id
	30/3D-17938-60F59105; Wed, 01 Aug 2012 16:53:26 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-8.tower-216.messagelabs.com!1343840005!20032069!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjEzMjU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21445 invoked from network); 1 Aug 2012 16:53:26 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 16:53:26 -0000
X-IronPort-AV: E=Sophos;i="4.77,695,1336363200"; d="scan'208,217";a="33231965"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 12:53:21 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 1 Aug 2012 12:53:22 -0400
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1SwcAj-0005Mz-4Z;
	Wed, 01 Aug 2012 17:53:21 +0100
Message-ID: <50195F01.5020405@citrix.com>
Date: Wed, 1 Aug 2012 17:53:21 +0100
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120714 Thunderbird/14.0
MIME-Version: 1.0
To: Dario Faggioli <raistlin@linux.it>
References: <1343837796.4958.32.camel@Solace> <501959BE.60801@citrix.com>
	<1343839634.4958.39.camel@Solace>
In-Reply-To: <1343839634.4958.39.camel@Solace>
X-Enigmail-Version: 1.4.3
Cc: Andre Przywara <andre.przywara@amd.com>,
	Anil Madhavapeddy <anil@recoil.org>, George Dunlap <dunlapg@gmail.com>,
	xen-devel <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>, "Zhang, Yang Z" <yang.z.zhang@intel.com>
Subject: Re: [Xen-devel] NUMA TODO-list for xen-devel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2161362334832821623=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2161362334832821623==
Content-Type: multipart/alternative;
	boundary="------------030103050606070408050209"

--------------030103050606070408050209
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit


On 01/08/12 17:47, Dario Faggioli wrote:
> On Wed, 2012-08-01 at 17:30 +0100, Andrew Cooper wrote:
>> On 01/08/12 17:16, Dario Faggioli wrote:
>>
>> ...
>>
>>> - Automatic placement at guest creation time. Basics are there and
>>> will be shipping with 4.2. However, a lot of other things are
>>> missing and/or can be improved, for instance:
>>> [D] * automated verification and testing of the placement;
>>> * benchmarks and improvements of the placement heuristic;
>>> [D] * choosing/building up some measure of node load (more accurate
>>> than just counting vcpus) onto which to rely during placement;
>>> * consider IONUMA during placement;
>>> * automatic placement of Dom0, if possible (my current series is
>>> only affecting DomU)
>>> * having internal xen data structure honour the placement (e.g.,
>>> I've been told that right now vcpu stacks are always allocated
>>> on node 0... Andrew?).
>>>
>>
>> - Xen NUMA internals. Placing items such as the per-cpu stacks and
>> data area on the local NUMA node, rather than unconditionally on node
>> 0 at the moment. As part of this, there will be changes to
>> alloc_{dom,xen}heap_page() to allow specification of which node(s) to
>> allocate memory from.
>
> As you see, I already tried to consider that (as you told me it does
> that couple of weeks ago :-) ). I'll add your wording of it (much better
> than mine) to the wiki... I understand you're working on this, aren't
> you? Can I put that down to?
>
> Thanks and Regards,
> Dario
>

Wow - I completely managed to miss that while reading.  Someone will be
working on it for XS.next, and that someone will probably be me - put me
down for it.

-- 
Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
T: +44 (0)1223 225 900, http://www.citrix.com


--------------030103050606070408050209
Content-Type: text/html; charset="UTF-8"
Content-Transfer-Encoding: 8bit

<html>
  <head>
    <meta content="text/html; charset=UTF-8" http-equiv="Content-Type">
  </head>
  <body bgcolor="#FFFFFF" text="#000000">
    <br>
    On 01/08/12 17:47, Dario Faggioli wrote:<br>
    <span style="white-space: pre;">&gt; On Wed, 2012-08-01 at 17:30
      +0100, Andrew Cooper wrote:<br>
      &gt;&gt; On 01/08/12 17:16, Dario Faggioli wrote:<br>
      &gt;&gt;<br>
      &gt;&gt; ...<br>
      &gt;&gt;<br>
      &gt;&gt;&gt; - Automatic placement at guest creation time. Basics
      are there and<br>
      &gt;&gt;&gt; will be shipping with 4.2. However, a lot of other
      things are<br>
      &gt;&gt;&gt; missing and/or can be improved, for instance:<br>
      &gt;&gt;&gt; [D] * automated verification and testing of the
      placement;<br>
      &gt;&gt;&gt; * benchmarks and improvements of the placement
      heuristic;<br>
      &gt;&gt;&gt; [D] * choosing/building up some measure of node load
      (more accurate<br>
      &gt;&gt;&gt; than just counting vcpus) onto which to rely during
      placement;<br>
      &gt;&gt;&gt; * consider IONUMA during placement;<br>
      &gt;&gt;&gt; * automatic placement of Dom0, if possible (my
      current series is<br>
      &gt;&gt;&gt; only affecting DomU)<br>
      &gt;&gt;&gt; * having internal xen data structure honour the
      placement (e.g., <br>
      &gt;&gt;&gt; I've been told that right now vcpu stacks are always
      allocated<br>
      &gt;&gt;&gt; on node 0... Andrew?).<br>
      &gt;&gt;&gt;<br>
      &gt;&gt;<br>
      &gt;&gt; - Xen NUMA internals. Placing items such as the per-cpu
      stacks and<br>
      &gt;&gt; data area on the local NUMA node, rather than
      unconditionally on node<br>
      &gt;&gt; 0 at the moment. As part of this, there will be changes
      to<br>
      &gt;&gt; alloc_{dom,xen}heap_page() to allow specification of
      which node(s) to<br>
      &gt;&gt; allocate memory from.<br>
      &gt;<br>
      &gt; As you see, I already tried to consider that (as you told me
      it does<br>
      &gt; that couple of weeks ago :-) ). I'll add your wording of it
      (much better<br>
      &gt; than mine) to the wiki... I understand you're working on
      this, aren't<br>
      &gt; you? Can I put that down too?<br>
      &gt;<br>
      &gt; Thanks and Regards,<br>
      &gt; Dario<br>
      &gt;</span><br>
    <br>
    Wow - I completely managed to miss that while reading.  Someone will
    be working on it for XS.next, and that someone will probably be me -
    put me down for it.<br>
    <br>
    -- <br>
    Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer<br>
    T: +44 (0)1223 225 900, <a class="moz-txt-link-freetext" href="http://www.citrix.com">http://www.citrix.com</a><br>
    <br>
  </body>
</html>

--------------030103050606070408050209--


--===============2161362334832821623==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2161362334832821623==--


From xen-devel-bounces@lists.xen.org Wed Aug 01 16:54:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 16:54:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwcBS-0002bq-HX; Wed, 01 Aug 2012 16:54:06 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <msw@amazon.com>) id 1SwcBS-0002bg-3O
	for Xen-devel@lists.xensource.com; Wed, 01 Aug 2012 16:54:06 +0000
Received: from [85.158.143.99:8907] by server-3.bemta-4.messagelabs.com id
	00/15-01511-D2F59105; Wed, 01 Aug 2012 16:54:05 +0000
X-Env-Sender: msw@amazon.com
X-Msg-Ref: server-16.tower-216.messagelabs.com!1343840042!18124285!1
X-Originating-IP: [207.171.189.228]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA3LjE3MS4xODkuMjI4ID0+IDcxOTQy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17102 invoked from network); 1 Aug 2012 16:54:04 -0000
Received: from smtp-fw-33001.amazon.com (HELO smtp-fw-33001.amazon.com)
	(207.171.189.228)
	by server-16.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 16:54:04 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=amazon.com; i=msw@amazon.com; q=dns/txt; s=rte02;
	t=1343840044; x=1375376044;
	h=date:from:to:cc:subject:message-id:references:
	mime-version:in-reply-to;
	bh=bInllnmcIFSD9yolE3EEIPcfsokpUiQ2+xAZonhbXQo=;
	b=GuuyL2J9kSp/HTz5+o+qHOdxOGgCyl3UdxD1RLHlzBdGu8pDGiiz6zmS
	3ok4Fab4Kj8jeesQnZi9zGdzXTVRyg==;
X-IronPort-AV: E=Sophos;i="4.77,695,1336348800"; d="scan'208";a="338920273"
Received: from smtp-in-9003.sea19.amazon.com ([10.186.104.20])
	by smtp-border-fw-out-33001.sea14.amazon.com with
	ESMTP/TLS/DHE-RSA-AES256-SHA; 01 Aug 2012 16:54:01 +0000
Received: from ex10-hub-9003.ant.amazon.com (ex10-hub-9003.ant.amazon.com
	[10.185.137.132])
	by smtp-in-9003.sea19.amazon.com (8.13.8/8.13.8) with ESMTP id
	q71Gs1iR012625
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=FAIL);
	Wed, 1 Aug 2012 16:54:01 GMT
Received: from US-SEA-R8XVZTX (10.224.80.34) by ex10-hub-9003.ant.amazon.com
	(10.185.137.132) with Microsoft SMTP Server id 14.2.247.3;
	Wed, 1 Aug 2012 09:53:59 -0700
Received: by US-SEA-R8XVZTX (sSMTP sendmail emulation); Wed, 01 Aug 2012
	09:53:59 -0700
Date: Wed, 1 Aug 2012 09:53:59 -0700
From: Matt Wilson <msw@amazon.com>
To: George Dunlap <George.Dunlap@eu.citrix.com>
Message-ID: <20120801165359.GF8228@US-SEA-R8XVZTX>
References: <20120626181707.4203d336@mantra.us.oracle.com>
	<CAFLBxZagm=53bZ=KaWmd7XQkkAduD+znECJRsaJUqrGJDZgkQQ@mail.gmail.com>
	<20120801152508.GA7132@phenom.dumpdata.com>
	<CAFLBxZb=yzcmxewCei49M6DX16x7eKZRGmxMUDbaKOTXgGZCGA@mail.gmail.com>
	<20120801160515.GA16155@phenom.dumpdata.com>
	<CAFLBxZYXOiWhAni3X23O62DbbigzFECMbvpUFnGs38y12h2V0g@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAFLBxZYXOiWhAni3X23O62DbbigzFECMbvpUFnGs38y12h2V0g@mail.gmail.com>
User-Agent: Mutt/1.5.20 (2009-12-10)
Cc: Ian Campbell <Ian.Campbell@citrix.com>, Keir Fraser <keir.xen@gmail.com>,
	"Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	"stefano.stabellini@eu.citrix.com" <stefano.stabellini@eu.citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] HYBRID naming [Was: Re: [HYBRID]: status update...]
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Aug 01, 2012 at 09:21:57AM -0700, George Dunlap wrote:
> On Wed, Aug 1, 2012 at 5:05 PM, Konrad Rzeszutek Wilk
> <konrad.wilk@oracle.com> wrote:
> > On Wed, Aug 01, 2012 at 04:59:58PM +0100, George Dunlap wrote:
> >> On Wed, Aug 1, 2012 at 4:25 PM, Konrad Rzeszutek Wilk
> >> <konrad.wilk@oracle.com> wrote:
> >> > On Wed, Aug 01, 2012 at 04:25:01PM +0100, George Dunlap wrote:
> >> >> I hope this isn't bikeshedding, but I don't like "Hybrid" as a name
> >> >> for this feature, mainly for "marketing" reasons.  I think it will
> >> >> probably give people the wrong idea about what the technology does.
> >> >> PV domains are one of Xen's really distinct advantages -- much simpler
> >> >> interface, lighter-weight (no qemu, legacy boot), &c &c.  As I
> >> >> understand it, the mode you've been calling "hybrid" still has all of
> >> >> these advantages -- it just uses some of the HVM hardware extensions
> >> >> to make the interface even simpler / faster.  I'm afraid "hybrid" may
> >> >> be seen as, "Even Xen has had to give up on PV."
> >> >>
> >> >> Can I suggest something like "PVH" instead?  That (at least to me)
> >> >> makes it clear that PV domains are still fully PV, but just use some
> >> >> HVM extensions.
> >> >
> >> > if (xen_pvh_domain()) ?
> >> >
> >> > if (xen_pv_h_domain()) ?
> >> >
> >> > if (xen_h_domain()) ?
> >> >
> >> > if (xen_pvplus_domain()) ?
> >> >
> >> > if (xen_pv_ext_domain()) ?
> >> >
> >> > I think I like 'pv+'?
> >>
> >> I could deal with pv+.  However, in general I dislike that kind of
> >> "now even better!" marketing.  PV+, EPV (Enhanced / extended PV), PVX
> >> (Extreme PV!) -- they all sound cool when they come out, but five
> >> years later, when they're not so new or sexy anymore, they all sound
> >> lame.  PVH is just descriptive -- it will always be PV with HVM
> >> extensions, so it will age much better. :-)
> >
> > How about pv_with_mmu_in_hvm_container_domain() ?
> >
> > Ok, that is a bit too lengthy. How about then:
> >
> > if (xen_pvhvm_ext_domain()) ?
> >
> > The 'if (xen_pvh_domain())' is just one character short of 'xen_pv_domain()'
> > and one might not notice it. Perhaps then 'if (xen_pv_h_domain())' ?
> 
> Hmm -- that's an interesting issue I hadn't thought of.  "PVHVM" has
> already been sort of taken by Stefano's extensions to allow Linux
> kernels booted in HVM mode to use some of the PV extensions.  I tend
> to think "xen_pvh_domain()" is probably OK, but maybe calling it
> "pvext" (or "pvhext") in the code, and "PVH" in documentation /
> stories?  Just using "pvext" everywhere could work as well; it's a
> little bit "now even better!", but not as much as pvplus.

How about HAPV, for "Hardware Assisted Paravirtualization"? It's
nicely pronounceable as "hap-vee" and follows the general
"hardware-assisted paging" (HAP) Xen terminology that spans both Intel
EPT and AMD RVI. 'if (xen_hapv_domain())'

Matt

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 16:59:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 16:59:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwcGI-0002yD-9L; Wed, 01 Aug 2012 16:59:06 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin.df@gmail.com>) id 1SwcGG-0002y8-PA
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 16:59:04 +0000
Received: from [85.158.143.99:39574] by server-2.bemta-4.messagelabs.com id
	FB/C1-17938-85069105; Wed, 01 Aug 2012 16:59:04 +0000
X-Env-Sender: raistlin.df@gmail.com
X-Msg-Ref: server-2.tower-216.messagelabs.com!1343840343!24493461!1
X-Originating-IP: [209.85.212.169]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14716 invoked from network); 1 Aug 2012 16:59:03 -0000
Received: from mail-wi0-f169.google.com (HELO mail-wi0-f169.google.com)
	(209.85.212.169)
	by server-2.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 16:59:03 -0000
Received: by wibhm2 with SMTP id hm2so4674007wib.2
	for <xen-devel@lists.xen.org>; Wed, 01 Aug 2012 09:59:03 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:subject:from:to:cc:date:in-reply-to:references
	:content-type:x-mailer:mime-version;
	bh=flWyjpvNEQoVY+n5f3V6CJCMTIMCwkfryfx/gejyF3I=;
	b=q8ZVKflEuNs5afiSJehAjp0jnVN/cMFi/P9WhrCCCGsZX14fx3QgTRIrZ6oT6swtVo
	lKUzcPqSyN86VVT8lN39t8PmOHI/8bVnENtBOAJR5xHAxAtbi5/8/7hUYxjTUwoVJywO
	vp9+8A4f3RmpyHZvBvLIj7P0I5ZB3U8zFectPhlfxCQeABtFtdWgeRd0mlUPt2qFkO9Y
	Xkc/NdAt29wPrq+qMOZnvGs0tSVDncgwvfE/kI94Ct/xsw/PZ+JAmBX5evgMFTRiG2FR
	UbkfzaA+bfbJJ0m1RxS0um/GZa1BpyNra4pknVzua3TbgDyfmw17v2aB7v9qHOZWgeZw
	v2Jw==
Received: by 10.216.116.70 with SMTP id f48mr8881676weh.162.1343840342968;
	Wed, 01 Aug 2012 09:59:02 -0700 (PDT)
Received: from [192.168.0.40] (ip-176-206.sn2.eutelia.it. [83.211.176.206])
	by mx.google.com with ESMTPS id bc2sm9997736wib.0.2012.08.01.09.59.01
	(version=TLSv1/SSLv3 cipher=OTHER);
	Wed, 01 Aug 2012 09:59:02 -0700 (PDT)
Message-ID: <1343840334.4958.45.camel@Solace>
From: Dario Faggioli <raistlin@linux.it>
To: Anil Madhavapeddy <anil@recoil.org>
Date: Wed, 01 Aug 2012 18:58:54 +0200
In-Reply-To: <A178E46B-25C1-4251-BB86-292B4CE3082D@recoil.org>
References: <1343837796.4958.32.camel@Solace>
	<A178E46B-25C1-4251-BB86-292B4CE3082D@recoil.org>
X-Mailer: Evolution 3.4.3-1
Mime-Version: 1.0
Cc: Andre Przywara <andre.przywara@amd.com>,
	Steven Smith <steven.smith@cl.cam.ac.uk>,
	George Dunlap <dunlapg@gmail.com>,
	Malte Schwarzkopf <malte.schwarzkopf@cl.cam.ac.uk>,
	xen-devel <xen-devel@lists.xen.org>, Jan Beulich <JBeulich@suse.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>, "Zhang,
	Yang Z" <yang.z.zhang@intel.com>
Subject: Re: [Xen-devel] NUMA TODO-list for xen-devel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3130871632526775411=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--===============3130871632526775411==
Content-Type: multipart/signed; micalg="pgp-sha1"; protocol="application/pgp-signature";
	boundary="=-fCDbrOow2yB1iujZDfz9"


--=-fCDbrOow2yB1iujZDfz9
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Wed, 2012-08-01 at 17:32 +0100, Anil Madhavapeddy wrote:
> On 1 Aug 2012, at 17:16, Dario Faggioli <raistlin@linux.it> wrote:
> 
> >    - Inter-VM dependencies and communication issues. If a workload is
> >      made up of more than just one VM and they all share the same (NUMA)
> >      host, it might be best to have them share the nodes as much as
> >      possible, or perhaps do just the opposite, depending on the
> >      specific characteristics of the workload itself, and this might be
> >      considered during placement, memory migration and perhaps
> >      scheduling.
> > 
> >    - Benchmarking and performance evaluation in general. Meaning both
> >      agreeing on a (set of) relevant workload(s) and on how to extract
> >      meaningful performance data from there (and maybe how to do that
> >      automatically?).
> 
> I haven't tried out the latest Xen NUMA features yet, but we've been
> keeping track of the IPC benchmarks as we get newer machines here:
> 
> http://www.cl.cam.ac.uk/research/srg/netos/ipc-bench/results.html
> 
Wow... That's really cool. I'll definitely take a deep look at all these
data! I'm also adding the link to the wiki, if you're fine with that...

> Happy to share the raw data if you have cycles to figure out the best
> way to auto-place multiple VMs so they are near each other from a memory
> latency perspective.
>
I don't have anything precise in mind yet, but we need to think about
this.

> We haven't run many macro-benchmarks though, so
> in practice it might not matter, so it would be nice to settle on a good
> set of benchmarks to determine that for sure.
> 
Yes, that's what we need. I'm open to trying to figure this out
anytime... I seem to recall you're going to be in San Diego for
XenSummit, am I right? If yes, we can discuss this more there.

Thanks and Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-fCDbrOow2yB1iujZDfz9
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iEYEABECAAYFAlAZYE4ACgkQk4XaBE3IOsQGvACeLRo4slcAvX4DaAgZR1KgnDPT
bTQAnAnBMFBH/ZS2v7DOLZbeO0bi8WWe
=J2V+
-----END PGP SIGNATURE-----

--=-fCDbrOow2yB1iujZDfz9--



--===============3130871632526775411==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3130871632526775411==--



From xen-devel-bounces@lists.xen.org Wed Aug 01 17:05:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 17:05:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwcLs-0003LE-7e; Wed, 01 Aug 2012 17:04:52 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1SwcLq-0003L8-AV
	for Xen-devel@lists.xensource.com; Wed, 01 Aug 2012 17:04:50 +0000
Received: from [85.158.143.99:9818] by server-1.bemta-4.messagelabs.com id
	14/4B-24392-1B169105; Wed, 01 Aug 2012 17:04:49 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-10.tower-216.messagelabs.com!1343840687!23413200!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDY3MjQwNA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5555 invoked from network); 1 Aug 2012 17:04:49 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-10.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 1 Aug 2012 17:04:49 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q71H4YY7001652
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 1 Aug 2012 17:04:35 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q71H4Xso001620
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 1 Aug 2012 17:04:33 GMT
Received: from abhmt110.oracle.com (abhmt110.oracle.com [141.146.116.62])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q71H4WXd030525; Wed, 1 Aug 2012 12:04:32 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 01 Aug 2012 10:04:32 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 4F211402B2; Wed,  1 Aug 2012 12:55:32 -0400 (EDT)
Date: Wed, 1 Aug 2012 12:55:32 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Sander Eikelenboom <linux@eikelenboom.it>
Message-ID: <20120801165532.GB11447@phenom.dumpdata.com>
References: <20120626181707.4203d336@mantra.us.oracle.com>
	<CAFLBxZagm=53bZ=KaWmd7XQkkAduD+znECJRsaJUqrGJDZgkQQ@mail.gmail.com>
	<20120801152508.GA7132@phenom.dumpdata.com>
	<CAFLBxZb=yzcmxewCei49M6DX16x7eKZRGmxMUDbaKOTXgGZCGA@mail.gmail.com>
	<20120801160515.GA16155@phenom.dumpdata.com>
	<CAFLBxZYXOiWhAni3X23O62DbbigzFECMbvpUFnGs38y12h2V0g@mail.gmail.com>
	<336853058.20120801185156@eikelenboom.it>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <336853058.20120801185156@eikelenboom.it>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	Keir Fraser <keir.xen@gmail.com>,
	"Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	"stefano.stabellini@eu.citrix.com" <stefano.stabellini@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [HYBRID]: status update...
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Aug 01, 2012 at 06:51:56PM +0200, Sander Eikelenboom wrote:
> Wednesday, August 1, 2012, 6:21:57 PM, you wrote:
> 
> > On Wed, Aug 1, 2012 at 5:05 PM, Konrad Rzeszutek Wilk
> > <konrad.wilk@oracle.com> wrote:
> >> On Wed, Aug 01, 2012 at 04:59:58PM +0100, George Dunlap wrote:
> >>> On Wed, Aug 1, 2012 at 4:25 PM, Konrad Rzeszutek Wilk
> >>> <konrad.wilk@oracle.com> wrote:
> >>> > On Wed, Aug 01, 2012 at 04:25:01PM +0100, George Dunlap wrote:
> >>> >> I hope this isn't bikeshedding; but I don't like "Hybrid" as a name
> >>> >> for this feature, mainly for "marketing" reasons.  I think it will
> >>> >> probably give people the wrong idea about what the technology does.
> >>> >> PV domains is one of Xen's really distinct advantages -- much simpler
> >>> >> interface, lighter-weight (no qemu, legacy boot), &c &c.  As I
> >>> >> understand it, the mode you've been calling "hybrid" still has all of
> >>> >> these advantages -- it just uses some of the HVM hardware extensions
> >>> >> to make the interface even simpler / faster.  I'm afraid "hybrid" may
> >>> >> be seen as, "Even Xen has had to give up on PV."
> >>> >>
> >>> >> Can I suggest something like "PVH" instead?  That (at least to me)
> >>> >> makes it clear that PV domains are still fully PV, but just use some
> >>> >> HVM extensions.
> >>> >
> >>> > if (xen_pvh_domain()) ?
> >>> >
> >>> > if (xen_pv_h_domain()) ?
> >>> >
> >>> > if (xen_h_domain()) ?
> >>> >
> >>> > if (xen_pvplus_domain()) ?
> >>> >
> >>> > if (xen_pv_ext_domain()) ?
> >>> >
> >>> > I think I like 'pv+'?
> >>>
> >>> I could deal with pv+.  However, in general I dislike that kind of
> >>> "now even better!" marketing.  PV+, EPV (Enhanced / extended PV), PVX
> >>> (Extreme PV!) -- they all sound cool when they come out, but five
> >>> years later, when they're not so new or sexy anymore, they all sound
> >>> lame.  PVH is just descriptive -- it will always be PV with HVM
> >>> extensions, so it will age much better. :-)
> >>
> >> How about pv_with_mmu_in_hvm_container_domain() ?
> >>
> >> Ok, that is a bit to lengthy. How about then:
> >>
> >> if (xen_pvhvm_ext_domain()) ?
> >>
> >> The 'if (xen_pvh_domain())' is just one character short of 'xen_pv_domain()'
> >> and one might not notice it. Perhaps then 'if (xen_pv_h_domain())' ?
> 
> > Hmm -- that's an interesting issue I hadn't thought of.  "PVHVM" has
> > already been sort of taken by Stefano's extensions to allow Linux
> > kernels booted in HVM mode to use some of the PV extensions.  I tend
> > to think "xen_pvh_domain()" is probably OK, but maybe calling it
> > "pvext" (or "pvhext") in the code, and "PVH" in documentation /
> > stories?  Just using "pvext" everywhere could work as well; it's a
> > little bit "now even better!", but not as much as pvplus.
> 
> Am I mistaken in saying that "Hybrid" and "PVHVM" are both targeting the same thing, but approaching it from a different base / direction (PV + HVM extensions versus HVM + PV extensions)?

Pretty much. HVM starts as QEMU with emulated devices, then ditches most of them
and turns on PV for everything it can.

PV ext starts with PV and uses the SVM/VMX container (so no QEMU) to deal
with the memory management. The PV functionality is still present, except that
MMU operations and syscalls don't go through the hypervisor anymore.

> If that's the case, how far are these apart from meeting each other in the middle, and what would be the fundamental difference ?

At the end of the day they are pretty similar... a PVHVM guest tries to
use the least amount of QEMU functionality and uses PV drivers. For MMU
and such it uses the VMX/SVM container. PV ext would use no QEMU at all,
but still use the VMX/SVM container.
> It seems to be the full qemu container / emulated driver model availability on HVM + PV extensions ?

Sure, that is the big difference - PVHVM needs QEMU to boot up, at least.

> 
> Just wondering if the naming should express that most explicit and fundamental difference (if there is one :-) ) ?

'if (xen_fat_domain())' (so QEMU) and 'if (xen_slim_domain())' (no QEMU)?
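
The predicates being bikeshedded above already exist for PV and HVM in the kernel (include/xen/xen.h). A minimal standalone sketch of how a PVH variant might slot into that pattern; the XEN_PVH_DOMAIN value and the xen_set_domain_type() helper are assumptions for illustration, not the actual kernel interface:

```c
/* Standalone sketch of the xen_*_domain() predicate pattern from
 * include/xen/xen.h.  XEN_PVH_DOMAIN and xen_set_domain_type() are
 * hypothetical additions, not the real kernel interface. */
enum xen_domain_type {
	XEN_NATIVE,		/* running on bare hardware */
	XEN_PV_DOMAIN,		/* classic paravirtualized domain */
	XEN_HVM_DOMAIN,		/* fully virtualized domain (QEMU-backed) */
	XEN_PVH_DOMAIN,		/* PV in a VMX/SVM container, no QEMU */
};

static enum xen_domain_type xen_domain_type = XEN_NATIVE;

void xen_set_domain_type(enum xen_domain_type t) { xen_domain_type = t; }

int xen_domain(void)     { return xen_domain_type != XEN_NATIVE; }
int xen_pv_domain(void)  { return xen_domain_type == XEN_PV_DOMAIN; }
int xen_hvm_domain(void) { return xen_domain_type == XEN_HVM_DOMAIN; }
int xen_pvh_domain(void) { return xen_domain_type == XEN_PVH_DOMAIN; }
```

As noted above, 'xen_pvh_domain()' is one character away from 'xen_pv_domain()', so call sites would need a careful read.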


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 17:09:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 17:09:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwcQ9-0003Xf-U8; Wed, 01 Aug 2012 17:09:17 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SwcQ7-0003XP-TG
	for xen-devel@lists.xensource.com; Wed, 01 Aug 2012 17:09:16 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1343840947!10371587!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31079 invoked from network); 1 Aug 2012 17:09:08 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 17:09:08 -0000
X-IronPort-AV: E=Sophos;i="4.77,695,1336348800"; d="scan'208";a="13808668"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 17:09:07 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 1 Aug 2012 18:09:07 +0100
Date: Wed, 1 Aug 2012 18:08:50 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <20120801144059.GL7227@phenom.dumpdata.com>
Message-ID: <alpine.DEB.2.02.1208011752020.4645@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1207251741470.26163@kaball.uk.xensource.com>
	<1343316846-25860-14-git-send-email-stefano.stabellini@eu.citrix.com>
	<20120801144059.GL7227@phenom.dumpdata.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linaro-dev@lists.linaro.org" <linaro-dev@lists.linaro.org>,
	Ian Campbell <Ian.Campbell@citrix.com>, "arnd@arndb.de" <arnd@arndb.de>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"catalin.marinas@arm.com" <catalin.marinas@arm.com>,
	"Tim \(Xen.org\)" <tim@xen.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [PATCH 14/24] xen/arm: initialize grant_table on ARM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 1 Aug 2012, Konrad Rzeszutek Wilk wrote:
> On Thu, Jul 26, 2012 at 04:33:56PM +0100, Stefano Stabellini wrote:
> > Initialize the grant table mapping at the address specified at index 0
> > in the DT under the /xen node.
> 
> Is it always index 0? If so, should it have a #define for the
> other index values?

There are no other values at the moment, but I'll add a #define.


> > After the grant table is initialized, call xenbus_probe (if not dom0).
> > 
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > ---
> >  arch/arm/xen/enlighten.c  |   13 +++++++++++++
> >  drivers/xen/grant-table.c |    2 +-
> >  2 files changed, 14 insertions(+), 1 deletions(-)
> > 
> > diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
> > index 2e013cf..854af1e 100644
> > --- a/arch/arm/xen/enlighten.c
> > +++ b/arch/arm/xen/enlighten.c
> > @@ -1,8 +1,12 @@
> >  #include <xen/xen.h>
> >  #include <xen/interface/xen.h>
> >  #include <xen/interface/memory.h>
> > +#include <xen/interface/hvm/params.h>
> >  #include <xen/platform_pci.h>
> >  #include <xen/features.h>
> > +#include <xen/grant_table.h>
> > +#include <xen/hvm.h>
> > +#include <xen/xenbus.h>
> >  #include <asm/xen/hypervisor.h>
> >  #include <asm/xen/hypercall.h>
> >  #include <linux/module.h>
> > @@ -51,12 +55,16 @@ int __init xen_guest_init(void)
> >  	struct xen_add_to_physmap xatp;
> >  	static struct shared_info *shared_info_page = 0;
> >  	struct device_node *node;
> > +	struct resource res;
> >  
> >  	node = of_find_compatible_node(NULL, NULL, "arm,xen");
> >  	if (!node) {
> >  		pr_info("No Xen support\n");
> >  		return 0;
> >  	}
> > +	if (of_address_to_resource(node, 0, &res))
> > +		return -EINVAL;
> > +	xen_hvm_resume_frames = res.start >> PAGE_SHIFT;
> >  	xen_domain_type = XEN_HVM_DOMAIN;
> >  
> >  	xen_setup_features();
> > @@ -97,6 +105,11 @@ int __init xen_guest_init(void)
> >  		per_cpu(xen_vcpu, cpu) =
> >  			&HYPERVISOR_shared_info->vcpu_info[cpu];
> >  	}
> > +
> > +	gnttab_init();
> > +	if (!xen_initial_domain())
> > +		xenbus_probe(NULL);
> > +
> >  	return 0;
> >  }
> >  EXPORT_SYMBOL_GPL(xen_guest_init);
> > diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
> > index 1d0d95e..fd2137a 100644
> > --- a/drivers/xen/grant-table.c
> > +++ b/drivers/xen/grant-table.c
> > @@ -62,7 +62,7 @@
> >  
> >  static grant_ref_t **gnttab_list;
> >  static unsigned int nr_grant_frames;
> > -static unsigned int boot_max_nr_grant_frames;
> > +static unsigned int boot_max_nr_grant_frames = 1;
> 
> Is this going to impact x86 version?

It is not needed, so I'll remove this change.
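
For context, a minimal sketch of the device-tree layout the patch above assumes: the node name, "arm,xen" compatible string, and reg index 0 come from the quoted code; the address and size values here are invented placeholders, not real values.

```dts
/ {
	xen {
		/* matched by of_find_compatible_node(NULL, NULL, "arm,xen") */
		compatible = "arm,xen";
		/* index 0: grant-table region, read via of_address_to_resource();
		 * address/size are placeholders for illustration only */
		reg = <0xb0000000 0x20000>;
	};
};
```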

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 17:52:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 17:52:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swd5n-00054n-12; Wed, 01 Aug 2012 17:52:19 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <halcyon1981@gmail.com>) id 1Swd5k-00054S-St
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 17:52:17 +0000
X-Env-Sender: halcyon1981@gmail.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1343843526!6220035!1
X-Originating-IP: [74.125.82.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11796 invoked from network); 1 Aug 2012 17:52:06 -0000
Received: from mail-we0-f173.google.com (HELO mail-we0-f173.google.com)
	(74.125.82.173)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 17:52:06 -0000
Received: by weyz53 with SMTP id z53so6150030wey.32
	for <xen-devel@lists.xen.org>; Wed, 01 Aug 2012 10:52:05 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=ddzI3Tmdb/xDceflx/35Y5Uz5RHCH1yEMQo2FSqWZuY=;
	b=yysUmrJIwBacMenduBejmmiitfPzcQagIiEfnIujTYKI/llFAqE4Yef230bdCpJx5h
	zS4c7PmN7hNme31Du+nYYbGX3Pu9DW43K+9+e8HtocZR9gkQHMrOLCwA6dReDmpMFPUC
	cIkF31USOdN8lkB/OjsaAUm0Zirw1udZ6tgQXQU77l0Ua/XYzMTCq2O9j1lOx/T3vXK+
	guiCF8AR1PqTumJJ3xj1uzzd6DUtDMRGc7MFy62BhOusirGsuJ1cc18Jp0QZHzK5BtNq
	qLoq59aeeR4kEz4nc97AzvghSBphvT83K1/ey5G8LMgkXMlHpD+OqxWAP34WPWSfH380
	p5Ug==
MIME-Version: 1.0
Received: by 10.180.19.169 with SMTP id g9mr13811415wie.9.1343843523780; Wed,
	01 Aug 2012 10:52:03 -0700 (PDT)
Received: by 10.223.83.9 with HTTP; Wed, 1 Aug 2012 10:52:03 -0700 (PDT)
In-Reply-To: <alpine.DEB.2.02.1208011150330.4645@kaball.uk.xensource.com>
References: <CACA08Dwza8TjixUYz1o6PWzfaPwfz1jMiGVNyZE9KcJZ_Ln2oA@mail.gmail.com>
	<CANKx4w9XJGnD5u4hN+RhyOy7OUmF5nh-G8549ER1W19jy__u3Q@mail.gmail.com>
	<CANKx4w8Azvf1s0aqGSRSn9_f-Zq_uL_CAN2rfh5kd=udsT2YDg@mail.gmail.com>
	<CACA08Dw0B3xK_Lr6WQiCSgDewt9ED6GPm-_KaTZzQarsELPrcg@mail.gmail.com>
	<CANKx4w8Y0Uddmre-nsm0TT+GgP3AMiCuoXfbc2-2L+PG3TUgbg@mail.gmail.com>
	<CANKx4w9O0bW=GJknP=hnYW5nUAas8AyKhDVjXGMmgeBkRBon5w@mail.gmail.com>
	<CAHyyzzTMX4wcg+DNEL91dWmo0R-6oGJLNH5O50bSUeHkmTWAwQ@mail.gmail.com>
	<CANKx4w9BVBrmQwaCtJr6CsZ6OK=jV+pb35QtNLHp+Jp6=739aA@mail.gmail.com>
	<1342682536.18848.50.camel@dagon.hellion.org.uk>
	<alpine.DEB.2.02.1207191258580.23783@kaball.uk.xensource.com>
	<CANKx4w-UKwt9H45N9Kbox708OBaQEqG3niELp593h2P7oc+pjw@mail.gmail.com>
	<CANKx4w9=X7iXvQBrORgh6aNHEQ--+4QeFwXDBG0HV-DYk7HpxQ@mail.gmail.com>
	<alpine.DEB.2.02.1207311119500.4645@kaball.uk.xensource.com>
	<CANKx4w_9sX9hsgSkwV6Vb8xSVQ_4n+_sFjfXSDEPHQ1seH=9=g@mail.gmail.com>
	<alpine.DEB.2.02.1208011150330.4645@kaball.uk.xensource.com>
Date: Wed, 1 Aug 2012 10:52:03 -0700
Message-ID: <CANKx4w9AUuAUWtnW59NL7az8VJ-jLJn984hMqvGVwTEfVAvs1A@mail.gmail.com>
From: David Erickson <halcyon1981@gmail.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Content-Type: multipart/mixed; boundary=bcaec53d5ed7de620704c637f31e
Cc: Anthony Perard <anthony.perard@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	jacek burghardt <jaceksburghardt@gmail.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] LSI SAS2008 Option Rom Failure
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--bcaec53d5ed7de620704c637f31e
Content-Type: text/plain; charset=ISO-8859-1

On Wed, Aug 1, 2012 at 4:13 AM, Stefano Stabellini
<stefano.stabellini@eu.citrix.com> wrote:
> On Tue, 31 Jul 2012, David Erickson wrote:
>> On Tue, Jul 31, 2012 at 4:39 AM, Stefano Stabellini
>> <stefano.stabellini@eu.citrix.com> wrote:
>> > On Tue, 31 Jul 2012, David Erickson wrote:
>> >> Just got back in town, following up on the prior discussion.  I
>> >> successfully compiled the latest code (25688 and qemu upstream
>> >> 5e3bc7144edd6e4fa2824944e5eb16c28197dd5a), but am still having
>> >> problems during initialization of the card in the guest, in particular
>> >> the unsupported delivery mode 3 which seems to cause interrupt related
>> >> problems during init.  I've again attached the qemu-dm-log, and xl
>> >> dmesg log files, and additionally screenshots of the guest dmesg and
>> >> also for comparison starting the same livecd natively on the box.
>> >
>> > "unsupported delivery mode 3" means that the Linux guest is trying to
>> > remap the MSI onto an event channel but Xen is still trying to deliver
>> > the MSI using the emulated code path anyway.
>> >
>> > Adding
>> >
>> > #define XEN_PT_LOGGING_ENABLED 1
>> >
>> > at the top of hw/xen_pt.h and posting the additional QEMU logs could
>> > be helpful.
>> >
>> > The full Xen logs might also be useful. I would add some more tracing to
>> > the hypervisor too:
>> >
>> > diff --git a/xen/drivers/passthrough/io.c b/xen/drivers/passthrough/io.c
>> > index b5975d1..08f4ab7 100644
>> > --- a/xen/drivers/passthrough/io.c
>> > +++ b/xen/drivers/passthrough/io.c
>> > @@ -474,6 +474,11 @@ static void hvm_pci_msi_assert(
>> >  {
>> >      struct pirq *pirq = dpci_pirq(pirq_dpci);
>> >
>> > +    printk("DEBUG %s pirq=%d hvm_domain_use_pirq=%d emuirq=%d\n", __func__,
>> > +            pirq->pirq,
>> > +            hvm_domain_use_pirq(d, pirq),
>> > +            pirq->arch.hvm.emuirq);
>> > +
>> >      if ( hvm_domain_use_pirq(d, pirq) )
>> >          send_guest_pirq(d, pirq);
>> >      else
>>
>> Hi Stefano-
>> I made the modifications (it looks like that #define hasn't been used
>> in a while; it caused a few compilation issues, and I had to prefix
>> most of the logged variables with s->hostaddr), and am attaching the
>> qemu-dm-ubuntu.log and dmesg from xl.  You referred to full Xen logs;
>> where do I find those?
>
> Thanks for the logs!
> You can get the full Xen logs from the serial console but you can also
> grab the last few lines with "xl dmesg", like you did and it seems to be
> enough in this case.
>
>
> The initial MSI remapping has been done:
>
> [00:05.0] msi_msix_setup: requested pirq 4 for MSI-X (vec: 0, entry: 0)
> [00:05.0] msi_msix_update: Updating MSI-X with pirq 4 gvec 0 gflags 0x3037 (entry: 0)
>
> But the guest is not issuing the EVTCHNOP_bind_pirq hypercall that is
> necessary to be able to receive event notifications (emuirq=-1 in the
> Xen logs).
>
> Now we need to figure out why: we still need more logs, this time on the
> guest side.
> What is the kernel version that you are using in the guest?
> Could you please add "debug loglevel=9" to the guest kernel command line
> and then post the guest dmesg again?
> It would be great if you could use the emulated serial to get the logs
> rather than a picture. You can do that by adding serial='pty' to the VM
> config file and console=ttyS0 to the guest command line.
> This additional Xen change could also tell us if the EVTCHNOP_bind_pirq
> has been done:
>
>
> diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
> index 53777f8..d65a97a 100644
> --- a/xen/common/event_channel.c
> +++ b/xen/common/event_channel.c
> @@ -405,6 +405,8 @@ static long evtchn_bind_pirq(evtchn_bind_pirq_t *bind)
>  #ifdef CONFIG_X86
>      if ( is_hvm_domain(d) && domain_pirq_to_irq(d, pirq) > 0 )
>          map_domain_emuirq_pirq(d, pirq, IRQ_PT);
> +    printk("DEBUG %s %d pirq=%d irq=%d emuirq=%d\n", __func__, __LINE__,
> +            pirq, domain_pirq_to_irq(d, pirq), domain_pirq_to_emuirq(d, pirq));
>  #endif
>
>   out:

The guest is an Ubuntu 11.10 livecd, kernel version 3.0.0-12-generic.
I've attached all the logs. Thanks for the tip on the serial console;
very useful.

Additionally, I've attached logs for booting a Solaris livecd (my
ultimate goal is to use this HBA card in Solaris); with the serial
console tip I was able to capture its kernel boot as well.
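
For reference, this is roughly the setup I used to capture the guest
boot over the emulated serial port (the domain name "ubuntu" is just an
example; use whatever name your config defines):

```shell
# In the xl domain config file, emulate a serial port backed by a pty:
#   serial = 'pty'
# and append console=ttyS0 to the guest kernel command line.

# Then, from dom0, attach to the guest's serial console (detach with Ctrl-]):
xl console ubuntu

# Or log the whole boot to a file while watching it:
xl console ubuntu | tee ubuntu_guest_boot.log
```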

Lastly, I still have the issue where no NIC is presented as enabled in
the guests. My assumption is that this is caused by the error in
xen-hotplug.log: "RTNETLINK answers: Operation not supported". Is
there some way to debug this? I checked my dom0's dmesg, and this is
what I get when I boot the Ubuntu livecd VM:

[ 2506.619039] device vif8.0 entered promiscuous mode
[ 2506.624304] ADDRCONF(NETDEV_UP): vif8.0: link is not ready
[ 2506.777865] device vif8.0-emu entered promiscuous mode
[ 2506.783073] xenbr0: port 3(vif8.0-emu) entering forwarding state
[ 2506.783079] xenbr0: port 3(vif8.0-emu) entering forwarding state
[ 2506.895379] pciback 0000:02:00.0: restoring config space at offset
0xf (was 0x100, writing 0x10a)
[ 2506.895393] pciback 0000:02:00.0: restoring config space at offset
0xc (was 0x0, writing 0xf7a00000)
[ 2506.895410] pciback 0000:02:00.0: restoring config space at offset
0x7 (was 0x4, writing 0xf7a80004)
[ 2506.895420] pciback 0000:02:00.0: restoring config space at offset
0x5 (was 0x4, writing 0xf7ac0004)
[ 2506.895428] pciback 0000:02:00.0: restoring config space at offset
0x4 (was 0x1, writing 0xd001)
[ 2506.895435] pciback 0000:02:00.0: restoring config space at offset
0x3 (was 0x0, writing 0x10)
[ 2507.056216] xen-pciback: vpci: 0000:02:00.0: assign to virtual slot 0
[ 2507.056398] pciback 0000:02:00.0: device has been assigned to
another domain! Over-writting the ownership, but beware.
[ 2517.191338] vif8.0-emu: no IPv6 routers present
[ 2521.799320] xenbr0: port 3(vif8.0-emu) entering forwarding state

And here is my ifconfig output while the Ubuntu VM is booted (without
any VMs running it doesn't have the vifs):
eth0      Link encap:Ethernet  HWaddr 00:e0:81:cb:db:74
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:71977 errors:0 dropped:0 overruns:0 frame:0
          TX packets:126629 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:5472933 (5.4 MB)  TX bytes:57934759 (57.9 MB)
          Interrupt:16 Memory:f7900000-f7920000

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:4002 errors:0 dropped:0 overruns:0 frame:0
          TX packets:4002 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:26667388 (26.6 MB)  TX bytes:26667388 (26.6 MB)

vif8.0    Link encap:Ethernet  HWaddr fe:ff:ff:ff:ff:ff
          UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:32
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

vif8.0-emu Link encap:Ethernet  HWaddr fe:ff:ff:ff:ff:ff
          inet6 addr: fe80::fcff:ffff:feff:ffff/64 Scope:Link
          UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:172 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:500
          RX bytes:0 (0.0 B)  TX bytes:26220 (26.2 KB)

xenbr0    Link encap:Ethernet  HWaddr 00:e0:81:cb:db:74
          inet addr:192.168.1.7  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::2e0:81ff:fecb:db74/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:71245 errors:0 dropped:0 overruns:0 frame:0
          TX packets:98995 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:3595409 (3.5 MB)  TX bytes:55909320 (55.9 MB)

And the line from my ubuntu.conf:
vif = ['bridge=xenbr0']
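
In case it's useful, this is how I've been poking at the bridge state
from dom0 (vif8.0 is the vif from the boot above; these are just the
standard bridge-utils/iproute2 tools, nothing Xen-specific):

```shell
# List which ports are actually attached to the bridge:
brctl show xenbr0

# Check the PV vif's link state (it never comes up here):
ip link show vif8.0

# xen-hotplug.log only captures the script's stderr; re-running the
# hotplug script under shell tracing would show which command triggers
# the RTNETLINK error, e.g.:
#   sh -x /etc/xen/scripts/vif-bridge online
# (left commented out -- it needs the XENBUS_PATH/vif environment that
# the hotplug machinery normally provides)
```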

Thanks-
David

--bcaec53d5ed7de620704c637f31e
Content-Type: application/octet-stream; name="xen-hotplug.log"
Content-Disposition: attachment; filename="xen-hotplug.log"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_h5coz3wa0

UlRORVRMSU5LIGFuc3dlcnM6IE9wZXJhdGlvbiBub3Qgc3VwcG9ydGVkCg==
--bcaec53d5ed7de620704c637f31e
Content-Type: application/octet-stream; name="qemu-dm-ubuntu.log"
Content-Disposition: attachment; filename="qemu-dm-ubuntu.log"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_h5coz3wc1

Y2hhciBkZXZpY2UgcmVkaXJlY3RlZCB0byAvZGV2L3B0cy8yCnhjOiBlcnJvcjogbGludXhfZ250
dGFiX3NldF9tYXhfZ3JhbnRzOiBpb2N0bCBTRVRfTUFYX0dSQU5UUyBmYWlsZWQgKDIyID0gSW52
YWxpZCBhcmd1bWVudCk6IEludGVybmFsIGVycm9yCnhlbiBiZTogcWRpc2stNTYzMjogeGNfZ250
dGFiX3NldF9tYXhfZ3JhbnRzIGZhaWxlZDogSW52YWxpZCBhcmd1bWVudApbMDA6MDUuMF0geGVu
X3B0X2luaXRmbjogQXNzaWduaW5nIHJlYWwgcGh5c2ljYWwgZGV2aWNlIDAyOjAwLjAgdG8gZGV2
Zm4gMHgyOApbMDA6MDUuMF0geGVuX3B0X3JlZ2lzdGVyX3JlZ2lvbnM6IElPIHJlZ2lvbiAwIHJl
Z2lzdGVyZWQgKHNpemU9MHgwMDAwMDEwMCBiYXNlX2FkZHI9MHgwMDAwZDAwMCB0eXBlOiAweDEp
ClswMDowNS4wXSB4ZW5fcHRfcmVnaXN0ZXJfcmVnaW9uczogSU8gcmVnaW9uIDEgcmVnaXN0ZXJl
ZCAoc2l6ZT0weDAwMDA0MDAwIGJhc2VfYWRkcj0weGY3YWMwMDAwIHR5cGU6IDApClswMDowNS4w
XSB4ZW5fcHRfcmVnaXN0ZXJfcmVnaW9uczogSU8gcmVnaW9uIDMgcmVnaXN0ZXJlZCAoc2l6ZT0w
eDAwMDQwMDAwIGJhc2VfYWRkcj0weGY3YTgwMDAwIHR5cGU6IDApClswMDowNS4wXSB4ZW5fcHRf
cmVnaXN0ZXJfcmVnaW9uczogRXhwYW5zaW9uIFJPTSByZWdpc3RlcmVkIChzaXplPTB4MDAwODAw
MDAgYmFzZV9hZGRyPTB4ZjdhMDAwMDApClswMDowNS4wXSB4ZW5fcHRfbXNpeF9pbml0OiBnZXQg
TVNJLVggdGFibGUgQkFSIGJhc2UgMHhmN2FjMDAwMApbMDA6MDUuMF0geGVuX3B0X21zaXhfaW5p
dDogdGFibGVfb2ZmID0gMHgyMDAwLCB0b3RhbF9lbnRyaWVzID0gMTUKWzAwOjA1LjBdIHhlbl9w
dF9tc2l4X2luaXQ6IG1hcHBpbmcgcGh5c2ljYWwgTVNJLVggdGFibGUgdG8gMHg3ZmI0ZDAyY2Ew
MDAKWzAwOjA1LjBdIHhlbl9wdF9wY2lfaW50eDogaW50eD0xClswMDowNS4wXSB4ZW5fcHRfaW5p
dGZuOiBSZWFsIHBoeXNpY2FsIGRldmljZSAwMjowMC4wIHJlZ2lzdGVyZWQgc3VjY2Vzc2Z1bHkh
ClswMDowNS4wXSB4ZW5fcHRfbXNpeGN0cmxfcmVnX3dyaXRlOiBlbmFibGUgTVNJLVgKWzAwOjA1
LjBdIG1zaV9tc2l4X3NldHVwOiByZXF1ZXN0ZWQgcGlycSA0IGZvciBNU0ktWCAodmVjOiAwLCBl
bnRyeTogMCkKWzAwOjA1LjBdIG1zaV9tc2l4X3VwZGF0ZTogVXBkYXRpbmcgTVNJLVggd2l0aCBw
aXJxIDQgZ3ZlYyAwIGdmbGFncyAweDMwMzcgKGVudHJ5OiAwKQpbMDA6MDUuMF0gcGNpX21zaXhf
d3JpdGU6IEVycm9yOiBDYW4ndCB1cGRhdGUgbXNpeCBlbnRyeSAwIHNpbmNlIE1TSS1YIGlzIGFs
cmVhZHkgZW5hYmxlZC4KWzAwOjA1LjBdIHBjaV9tc2l4X3dyaXRlOiBFcnJvcjogQ2FuJ3QgdXBk
YXRlIG1zaXggZW50cnkgMCBzaW5jZSBNU0ktWCBpcyBhbHJlYWR5IGVuYWJsZWQuClswMDowNS4w
XSBwY2lfbXNpeF93cml0ZTogRXJyb3I6IENhbid0IHVwZGF0ZSBtc2l4IGVudHJ5IDAgc2luY2Ug
TVNJLVggaXMgYWxyZWFkeSBlbmFibGVkLgpbMDA6MDUuMF0gcGNpX21zaXhfd3JpdGU6IEVycm9y
OiBDYW4ndCB1cGRhdGUgbXNpeCBlbnRyeSAwIHNpbmNlIE1TSS1YIGlzIGFscmVhZHkgZW5hYmxl
ZC4KWzAwOjA1LjBdIHhlbl9wdF9tc2l4Y3RybF9yZWdfd3JpdGU6IGRpc2FibGUgTVNJLVgK
--bcaec53d5ed7de620704c637f31e
Content-Type: application/octet-stream; name="ubuntu_guest_boot.log"
Content-Disposition: attachment; filename="ubuntu_guest_boot.log"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_h5coz3wd2

WyAgICAwLjAwMDAwMF0gSW5pdGlhbGl6aW5nIGNncm91cCBzdWJzeXMgY3B1c2V0DQpbICAgIDAu
MDAwMDAwXSBJbml0aWFsaXppbmcgY2dyb3VwIHN1YnN5cyBjcHUNClsgICAgMC4wMDAwMDBdIExp
bnV4IHZlcnNpb24gMy4wLjAtMTItZ2VuZXJpYyAoYnVpbGRkQGNyZXN0ZWQpIChnY2MgdmVyc2lv
biA0LjYuMSAoVWJ1bnR1L0xpbmFybyA0LjYuMS05dWJ1bnR1MykgKSAjMjAtVWJ1bnR1IFNNUCBG
cmkgT2N0IDcgMTQ6NTY6MjUgVVRDIDIwMTEgKFVidW50dSAzLjAuMC0xMi4yMC1nZW5lcmljIDMu
MC40KQ0KWyAgICAwLjAwMDAwMF0gQ29tbWFuZCBsaW5lOiBmaWxlPS9jZHJvbS9wcmVzZWVkL3Vi
dW50dS5zZWVkIGJvb3Q9Y2FzcGVyIGluaXRyZD0vY2FzcGVyL2luaXRyZC5seiBxdWlldCBzcGxh
c2ggZGVidWcgbG9nbGV2ZWw9OSBjb25zb2xlPXR0eTAgY29uc29sZT10dHlTMCwxMTUyMDBuOCBj
b25zb2xlPXR0eVMwIC0tDQpbICAgIDAuMDAwMDAwXSBLRVJORUwgc3VwcG9ydGVkIGNwdXM6DQpb
ICAgIDAuMDAwMDAwXSAgIEludGVsIEdlbnVpbmVJbnRlbA0KWyAgICAwLjAwMDAwMF0gICBBTUQg
QXV0aGVudGljQU1EDQpbICAgIDAuMDAwMDAwXSAgIENlbnRhdXIgQ2VudGF1ckhhdWxzDQpbICAg
IDAuMDAwMDAwXSBCSU9TLXByb3ZpZGVkIHBoeXNpY2FsIFJBTSBtYXA6DQpbICAgIDAuMDAwMDAw
XSAgQklPUy1lODIwOiAwMDAwMDAwMDAwMDAwMDAwIC0gMDAwMDAwMDAwMDA5ZTQwMCAodXNhYmxl
KQ0KWyAgICAwLjAwMDAwMF0gIEJJT1MtZTgyMDogMDAwMDAwMDAwMDA5ZTQwMCAtIDAwMDAwMDAw
MDAwYTAwMDAgKHJlc2VydmVkKQ0KWyAgICAwLjAwMDAwMF0gIEJJT1MtZTgyMDogMDAwMDAwMDAw
MDBmMDAwMCAtIDAwMDAwMDAwMDAxMDAwMDAgKHJlc2VydmVkKQ0KWyAgICAwLjAwMDAwMF0gIEJJ
T1MtZTgyMDogMDAwMDAwMDAwMDEwMDAwMCAtIDAwMDAwMDAwN2Y3ZmYwMDAgKHVzYWJsZSkNClsg
ICAgMC4wMDAwMDBdICBCSU9TLWU4MjA6IDAwMDAwMDAwN2Y3ZmYwMDAgLSAwMDAwMDAwMDdmODAw
MDAwIChyZXNlcnZlZCkNClsgICAgMC4wMDAwMDBdICBCSU9TLWU4MjA6IDAwMDAwMDAwZmMwMDAw
MDAgLSAwMDAwMDAwMTAwMDAwMDAwIChyZXNlcnZlZCkNClsgICAgMC4wMDAwMDBdIE5YIChFeGVj
dXRlIERpc2FibGUpIHByb3RlY3Rpb246IGFjdGl2ZQ0KWyAgICAwLjAwMDAwMF0gRE1JIDIuNCBw
cmVzZW50Lg0KWyAgICAwLjAwMDAwMF0gRE1JOiBYZW4gSFZNIGRvbVUsIEJJT1MgNC4yLXVuc3Rh
YmxlIDA3LzMwLzIwMTINClsgICAgMC4wMDAwMDBdIEh5cGVydmlzb3IgZGV0ZWN0ZWQ6IFhlbiBI
Vk0NClsgICAgMC4wMDAwMDBdIFhlbiB2ZXJzaW9uIDQuMi4NClsgICAgMC4wMDAwMDBdIFhlbiBQ
bGF0Zm9ybSBQQ0k6IEkvTyBwcm90b2NvbCB2ZXJzaW9uIDENClsgICAgMC4wMDAwMDBdIE5ldGZy
b250IGFuZCB0aGUgWGVuIHBsYXRmb3JtIFBDSSBkcml2ZXIgaGF2ZSBiZWVuIGNvbXBpbGVkIGZv
ciB0aGlzIGtlcm5lbDogdW5wbHVnIGVtdWxhdGVkIE5JQ3MuDQpbICAgIDAuMDAwMDAwXSBCbGtm
cm9udCBhbmQgdGhlIFhlbiBwbGF0Zm9ybSBQQ0kgZHJpdmVyIGhhdmUgYmVlbiBjb21waWxlZCBm
b3IgdGhpcyBrZXJuZWw6IHVucGx1ZyBlbXVsYXRlZCBkaXNrcy4NClsgICAgMC4wMDAwMDBdIFlv
dSBtaWdodCBoYXZlIHRvIGNoYW5nZSB0aGUgcm9vdCBkZXZpY2UNClsgICAgMC4wMDAwMDBdIGZy
b20gL2Rldi9oZFthLWRdIHRvIC9kZXYveHZkW2EtZF0NClsgICAgMC4wMDAwMDBdIGluIHlvdXIg
cm9vdD0ga2VybmVsIGNvbW1hbmQgbGluZSBvcHRpb24NClsgICAgMC4wMDAwMDBdIEhWTU9QX3Bh
Z2V0YWJsZV9keWluZyBub3Qgc3VwcG9ydGVkDQpbICAgIDAuMDAwMDAwXSBlODIwIHVwZGF0ZSBy
YW5nZTogMDAwMDAwMDAwMDAwMDAwMCAtIDAwMDAwMDAwMDAwMTAwMDAgKHVzYWJsZSkgPT0+IChy
ZXNlcnZlZCkNClsgICAgMC4wMDAwMDBdIGU4MjAgcmVtb3ZlIHJhbmdlOiAwMDAwMDAwMDAwMGEw
MDAwIC0gMDAwMDAwMDAwMDEwMDAwMCAodXNhYmxlKQ0KWyAgICAwLjAwMDAwMF0gTm8gQUdQIGJy
aWRnZSBmb3VuZA0KWyAgICAwLjAwMDAwMF0gbGFzdF9wZm4gPSAweDdmN2ZmIG1heF9hcmNoX3Bm
biA9IDB4NDAwMDAwMDAwDQpbICAgIDAuMDAwMDAwXSBNVFJSIGRlZmF1bHQgdHlwZTogd3JpdGUt
YmFjaw0KWyAgICAwLjAwMDAwMF0gTVRSUiBmaXhlZCByYW5nZXMgZW5hYmxlZDoNClsgICAgMC4w
MDAwMDBdICAgMDAwMDAtOUZGRkYgd3JpdGUtYmFjaw0KWyAgICAwLjAwMDAwMF0gICBBMDAwMC1C
RkZGRiB3cml0ZS1jb21iaW5pbmcNClsgICAgMC4wMDAwMDBdICAgQzAwMDAtRkZGRkYgd3JpdGUt
YmFjaw0KWyAgICAwLjAwMDAwMF0gTVRSUiB2YXJpYWJsZSByYW5nZXMgZW5hYmxlZDoNClsgICAg
MC4wMDAwMDBdICAgMCBiYXNlIDBGMDAwMDAwMCBtYXNrIEZGODAwMDAwMCB1bmNhY2hhYmxlDQpb
ICAgIDAuMDAwMDAwXSAgIDEgYmFzZSAwRjgwMDAwMDAgbWFzayBGRkMwMDAwMDAgdW5jYWNoYWJs
ZQ0KWyAgICAwLjAwMDAwMF0gICAyIGRpc2FibGVkDQpbICAgIDAuMDAwMDAwXSAgIDMgZGlzYWJs
ZWQNClsgICAgMC4wMDAwMDBdICAgNCBkaXNhYmxlZA0KWyAgICAwLjAwMDAwMF0gICA1IGRpc2Fi
bGVkDQpbICAgIDAuMDAwMDAwXSAgIDYgZGlzYWJsZWQNClsgICAgMC4wMDAwMDBdICAgNyBkaXNh
YmxlZA0KWyAgICAwLjAwMDAwMF0geDg2IFBBVCBlbmFibGVkOiBjcHUgMCwgb2xkIDB4NzA0MDYw
MDA3MDQwNiwgbmV3IDB4NzAxMDYwMDA3MDEwNg0KWyAgICAwLjAwMDAwMF0gZm91bmQgU01QIE1Q
LXRhYmxlIGF0IFtmZmZmODgwMDAwMGZkYWQwXSBmZGFkMA0KWyAgICAwLjAwMDAwMF0gaW5pdGlh
bCBtZW1vcnkgbWFwcGVkIDogMCAtIDIwMDAwMDAwDQpbICAgIDAuMDAwMDAwXSBCYXNlIG1lbW9y
eSB0cmFtcG9saW5lIGF0IFtmZmZmODgwMDAwMDk5MDAwXSA5OTAwMCBzaXplIDIwNDgwDQpbICAg
IDAuMDAwMDAwXSBpbml0X21lbW9yeV9tYXBwaW5nOiAwMDAwMDAwMDAwMDAwMDAwLTAwMDAwMDAw
N2Y3ZmYwMDANClsgICAgMC4wMDAwMDBdICAwMDAwMDAwMDAwIC0gMDA3ZjYwMDAwMCBwYWdlIDJN
DQpbICAgIDAuMDAwMDAwXSAgMDA3ZjYwMDAwMCAtIDAwN2Y3ZmYwMDAgcGFnZSA0aw0KWyAgICAw
LjAwMDAwMF0ga2VybmVsIGRpcmVjdCBtYXBwaW5nIHRhYmxlcyB1cCB0byA3ZjdmZjAwMCBAIDdl
YTE4MDAwLTdlYTFjMDAwDQpbICAgIDAuMDAwMDAwXSBSQU1ESVNLOiA3ZWExYzAwMCAtIDdmN2Zm
MDAwDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBSU0RQIDAwMDAwMDAwMDAwZmRhMjAgMDAwMjQgKHYw
MiAgICBYZW4pDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBYU0RUIDAwMDAwMDAwZmMwMDlmZDAgMDAw
NTQgKHYwMSAgICBYZW4gICAgICBIVk0gMDAwMDAwMDAgSFZNTCAwMDAwMDAwMCkNClsgICAgMC4w
MDAwMDBdIEFDUEk6IEZBQ1AgMDAwMDAwMDBmYzAwOTkwMCAwMDBGNCAodjA0ICAgIFhlbiAgICAg
IEhWTSAwMDAwMDAwMCBIVk1MIDAwMDAwMDAwKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogRFNEVCAw
MDAwMDAwMGZjMDAxMmIwIDA4NUNEICh2MDIgICAgWGVuICAgICAgSFZNIDAwMDAwMDAwIElOVEwg
MjAxMDA1MjgpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBGQUNTIDAwMDAwMDAwZmMwMDEyNzAgMDAw
NDANClsgICAgMC4wMDAwMDBdIEFDUEk6IEFQSUMgMDAwMDAwMDBmYzAwOWEwMCAwMDQ2MCAodjAy
ICAgIFhlbiAgICAgIEhWTSAwMDAwMDAwMCBIVk1MIDAwMDAwMDAwKQ0KWyAgICAwLjAwMDAwMF0g
QUNQSTogSFBFVCAwMDAwMDAwMGZjMDA5ZWUwIDAwMDM4ICh2MDEgICAgWGVuICAgICAgSFZNIDAw
MDAwMDAwIEhWTUwgMDAwMDAwMDApDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBXQUVUIDAwMDAwMDAw
ZmMwMDlmMjAgMDAwMjggKHYwMSAgICBYZW4gICAgICBIVk0gMDAwMDAwMDAgSFZNTCAwMDAwMDAw
MCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IFNTRFQgMDAwMDAwMDBmYzAwOWY1MCAwMDAzMSAodjAy
ICAgIFhlbiAgICAgIEhWTSAwMDAwMDAwMCBJTlRMIDIwMTAwNTI4KQ0KWyAgICAwLjAwMDAwMF0g
QUNQSTogU1NEVCAwMDAwMDAwMGZjMDA5ZjkwIDAwMDMxICh2MDIgICAgWGVuICAgICAgSFZNIDAw
MDAwMDAwIElOVEwgMjAxMDA1MjgpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBMb2NhbCBBUElDIGFk
ZHJlc3MgMHhmZWUwMDAwMA0KWyAgICAwLjAwMDAwMF0gTm8gTlVNQSBjb25maWd1cmF0aW9uIGZv
dW5kDQpbICAgIDAuMDAwMDAwXSBGYWtpbmcgYSBub2RlIGF0IDAwMDAwMDAwMDAwMDAwMDAtMDAw
MDAwMDA3ZjdmZjAwMA0KWyAgICAwLjAwMDAwMF0gSW5pdG1lbSBzZXR1cCBub2RlIDAgMDAwMDAw
MDAwMDAwMDAwMC0wMDAwMDAwMDdmN2ZmMDAwDQpbICAgIDAuMDAwMDAwXSAgIE5PREVfREFUQSBb
MDAwMDAwMDA3ZWExMzAwMCAtIDAwMDAwMDAwN2VhMTdmZmZdDQpbICAgIDAuMDAwMDAwXSAgW2Zm
ZmZlYTAwMDAwMDAwMDAtZmZmZmVhMDAwMWJmZmZmZl0gUE1EIC0+IFtmZmZmODgwMDdjMjAwMDAw
LWZmZmY4ODAwN2RkZmZmZmZdIG9uIG5vZGUgMA0KWyAgICAwLjAwMDAwMF0gWm9uZSBQRk4gcmFu
Z2VzOg0KWyAgICAwLjAwMDAwMF0gICBETUEgICAgICAweDAwMDAwMDEwIC0+IDB4MDAwMDEwMDAN
ClsgICAgMC4wMDAwMDBdICAgRE1BMzIgICAgMHgwMDAwMTAwMCAtPiAweDAwMTAwMDAwDQpbICAg
IDAuMDAwMDAwXSAgIE5vcm1hbCAgIGVtcHR5DQpbICAgIDAuMDAwMDAwXSBNb3ZhYmxlIHpvbmUg
c3RhcnQgUEZOIGZvciBlYWNoIG5vZGUNClsgICAgMC4wMDAwMDBdIGVhcmx5X25vZGVfbWFwWzJd
IGFjdGl2ZSBQRk4gcmFuZ2VzDQpbICAgIDAuMDAwMDAwXSAgICAgMDogMHgwMDAwMDAxMCAtPiAw
eDAwMDAwMDllDQpbICAgIDAuMDAwMDAwXSAgICAgMDogMHgwMDAwMDEwMCAtPiAweDAwMDdmN2Zm
DQpbICAgIDAuMDAwMDAwXSBPbiBub2RlIDAgdG90YWxwYWdlczogNTIyMTI1DQpbICAgIDAuMDAw
MDAwXSAgIERNQSB6b25lOiA1NiBwYWdlcyB1c2VkIGZvciBtZW1tYXANClsgICAgMC4wMDAwMDBd
ICAgRE1BIHpvbmU6IDUgcGFnZXMgcmVzZXJ2ZWQNClsgICAgMC4wMDAwMDBdICAgRE1BIHpvbmU6
IDM5MjEgcGFnZXMsIExJRk8gYmF0Y2g6MA0KWyAgICAwLjAwMDAwMF0gICBETUEzMiB6b25lOiA3
MDg0IHBhZ2VzIHVzZWQgZm9yIG1lbW1hcA0KWyAgICAwLjAwMDAwMF0gICBETUEzMiB6b25lOiA1
MTEwNTkgcGFnZXMsIExJRk8gYmF0Y2g6MzENClsgICAgMC4wMDAwMDBdIEFDUEk6IFBNLVRpbWVy
IElPIFBvcnQ6IDB4YjAwOA0KWyAgICAwLjAwMDAwMF0gQUNQSTogTG9jYWwgQVBJQyBhZGRyZXNz
IDB4ZmVlMDAwMDANClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElDIChhY3BpX2lkWzB4MDBdIGxh
cGljX2lkWzB4MDBdIGVuYWJsZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBMQVBJQyAoYWNwaV9p
ZFsweDAxXSBsYXBpY19pZFsweDAyXSBlbmFibGVkKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogTEFQ
SUMgKGFjcGlfaWRbMHgwMl0gbGFwaWNfaWRbMHgwNF0gZGlzYWJsZWQpDQpbICAgIDAuMDAwMDAw
XSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDAzXSBsYXBpY19pZFsweDA2XSBkaXNhYmxlZCkNClsg
ICAgMC4wMDAwMDBdIEFDUEk6IExBUElDIChhY3BpX2lkWzB4MDRdIGxhcGljX2lkWzB4MDhdIGRp
c2FibGVkKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgwNV0gbGFwaWNf
aWRbMHgwYV0gZGlzYWJsZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsw
eDA2XSBsYXBpY19pZFsweDBjXSBkaXNhYmxlZCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElD
IChhY3BpX2lkWzB4MDddIGxhcGljX2lkWzB4MGVdIGRpc2FibGVkKQ0KWyAgICAwLjAwMDAwMF0g
QUNQSTogTEFQSUMgKGFjcGlfaWRbMHgwOF0gbGFwaWNfaWRbMHgxMF0gZGlzYWJsZWQpDQpbICAg
IDAuMDAwMDAwXSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDA5XSBsYXBpY19pZFsweDEyXSBkaXNh
YmxlZCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElDIChhY3BpX2lkWzB4MGFdIGxhcGljX2lk
WzB4MTRdIGRpc2FibGVkKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgw
Yl0gbGFwaWNfaWRbMHgxNl0gZGlzYWJsZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBMQVBJQyAo
YWNwaV9pZFsweDBjXSBsYXBpY19pZFsweDE4XSBkaXNhYmxlZCkNClsgICAgMC4wMDAwMDBdIEFD
UEk6IExBUElDIChhY3BpX2lkWzB4MGRdIGxhcGljX2lkWzB4MWFdIGRpc2FibGVkKQ0KWyAgICAw
LjAwMDAwMF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgwZV0gbGFwaWNfaWRbMHgxY10gZGlzYWJs
ZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDBmXSBsYXBpY19pZFsw
eDFlXSBkaXNhYmxlZCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElDIChhY3BpX2lkWzB4MTBd
IGxhcGljX2lkWzB4MjBdIGRpc2FibGVkKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogTEFQSUMgKGFj
cGlfaWRbMHgxMV0gbGFwaWNfaWRbMHgyMl0gZGlzYWJsZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJ
OiBMQVBJQyAoYWNwaV9pZFsweDEyXSBsYXBpY19pZFsweDI0XSBkaXNhYmxlZCkNClsgICAgMC4w
MDAwMDBdIEFDUEk6IExBUElDIChhY3BpX2lkWzB4MTNdIGxhcGljX2lkWzB4MjZdIGRpc2FibGVk
KQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgxNF0gbGFwaWNfaWRbMHgy
OF0gZGlzYWJsZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDE1XSBs
YXBpY19pZFsweDJhXSBkaXNhYmxlZCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElDIChhY3Bp
X2lkWzB4MTZdIGxhcGljX2lkWzB4MmNdIGRpc2FibGVkKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTog
TEFQSUMgKGFjcGlfaWRbMHgxN10gbGFwaWNfaWRbMHgyZV0gZGlzYWJsZWQpDQpbICAgIDAuMDAw
MDAwXSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDE4XSBsYXBpY19pZFsweDMwXSBkaXNhYmxlZCkN
ClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElDIChhY3BpX2lkWzB4MTldIGxhcGljX2lkWzB4MzJd
IGRpc2FibGVkKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgxYV0gbGFw
aWNfaWRbMHgzNF0gZGlzYWJsZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBMQVBJQyAoYWNwaV9p
ZFsweDFiXSBsYXBpY19pZFsweDM2XSBkaXNhYmxlZCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IExB
UElDIChhY3BpX2lkWzB4MWNdIGxhcGljX2lkWzB4MzhdIGRpc2FibGVkKQ0KWyAgICAwLjAwMDAw
MF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgxZF0gbGFwaWNfaWRbMHgzYV0gZGlzYWJsZWQpDQpb
ICAgIDAuMDAwMDAwXSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDFlXSBsYXBpY19pZFsweDNjXSBk
aXNhYmxlZCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElDIChhY3BpX2lkWzB4MWZdIGxhcGlj
X2lkWzB4M2VdIGRpc2FibGVkKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogTEFQSUMgKGFjcGlfaWRb
MHgyMF0gbGFwaWNfaWRbMHg0MF0gZGlzYWJsZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBMQVBJ
QyAoYWNwaV9pZFsweDIxXSBsYXBpY19pZFsweDQyXSBkaXNhYmxlZCkNClsgICAgMC4wMDAwMDBd
IEFDUEk6IExBUElDIChhY3BpX2lkWzB4MjJdIGxhcGljX2lkWzB4NDRdIGRpc2FibGVkKQ0KWyAg
ICAwLjAwMDAwMF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgyM10gbGFwaWNfaWRbMHg0Nl0gZGlz
YWJsZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDI0XSBsYXBpY19p
ZFsweDQ4XSBkaXNhYmxlZCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElDIChhY3BpX2lkWzB4
MjVdIGxhcGljX2lkWzB4NGFdIGRpc2FibGVkKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogTEFQSUMg
KGFjcGlfaWRbMHgyNl0gbGFwaWNfaWRbMHg0Y10gZGlzYWJsZWQpDQpbICAgIDAuMDAwMDAwXSBB
Q1BJOiBMQVBJQyAoYWNwaV9pZFsweDI3XSBsYXBpY19pZFsweDRlXSBkaXNhYmxlZCkNClsgICAg
MC4wMDAwMDBdIEFDUEk6IExBUElDIChhY3BpX2lkWzB4MjhdIGxhcGljX2lkWzB4NTBdIGRpc2Fi
bGVkKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgyOV0gbGFwaWNfaWRb
MHg1Ml0gZGlzYWJsZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDJh
XSBsYXBpY19pZFsweDU0XSBkaXNhYmxlZCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElDIChh
Y3BpX2lkWzB4MmJdIGxhcGljX2lkWzB4NTZdIGRpc2FibGVkKQ0KWyAgICAwLjAwMDAwMF0gQUNQ
STogTEFQSUMgKGFjcGlfaWRbMHgyY10gbGFwaWNfaWRbMHg1OF0gZGlzYWJsZWQpDQpbICAgIDAu
MDAwMDAwXSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDJkXSBsYXBpY19pZFsweDVhXSBkaXNhYmxl
ZCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElDIChhY3BpX2lkWzB4MmVdIGxhcGljX2lkWzB4
NWNdIGRpc2FibGVkKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgyZl0g
bGFwaWNfaWRbMHg1ZV0gZGlzYWJsZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBMQVBJQyAoYWNw
aV9pZFsweDMwXSBsYXBpY19pZFsweDYwXSBkaXNhYmxlZCkNClsgICAgMC4wMDAwMDBdIEFDUEk6
IExBUElDIChhY3BpX2lkWzB4MzFdIGxhcGljX2lkWzB4NjJdIGRpc2FibGVkKQ0KWyAgICAwLjAw
MDAwMF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgzMl0gbGFwaWNfaWRbMHg2NF0gZGlzYWJsZWQp
DQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDMzXSBsYXBpY19pZFsweDY2
XSBkaXNhYmxlZCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElDIChhY3BpX2lkWzB4MzRdIGxh
cGljX2lkWzB4NjhdIGRpc2FibGVkKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogTEFQSUMgKGFjcGlf
aWRbMHgzNV0gbGFwaWNfaWRbMHg2YV0gZGlzYWJsZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBM
QVBJQyAoYWNwaV9pZFsweDM2XSBsYXBpY19pZFsweDZjXSBkaXNhYmxlZCkNClsgICAgMC4wMDAw
MDBdIEFDUEk6IExBUElDIChhY3BpX2lkWzB4MzddIGxhcGljX2lkWzB4NmVdIGRpc2FibGVkKQ0K
WyAgICAwLjAwMDAwMF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgzOF0gbGFwaWNfaWRbMHg3MF0g
ZGlzYWJsZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDM5XSBsYXBp
Y19pZFsweDcyXSBkaXNhYmxlZCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElDIChhY3BpX2lk
WzB4M2FdIGxhcGljX2lkWzB4NzRdIGRpc2FibGVkKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogTEFQ
SUMgKGFjcGlfaWRbMHgzYl0gbGFwaWNfaWRbMHg3Nl0gZGlzYWJsZWQpDQpbICAgIDAuMDAwMDAw
XSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDNjXSBsYXBpY19pZFsweDc4XSBkaXNhYmxlZCkNClsg
ICAgMC4wMDAwMDBdIEFDUEk6IExBUElDIChhY3BpX2lkWzB4M2RdIGxhcGljX2lkWzB4N2FdIGRp
c2FibGVkKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgzZV0gbGFwaWNf
aWRbMHg3Y10gZGlzYWJsZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsw
eDNmXSBsYXBpY19pZFsweDdlXSBkaXNhYmxlZCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElD
IChhY3BpX2lkWzB4NDBdIGxhcGljX2lkWzB4ODBdIGRpc2FibGVkKQ0KWyAgICAwLjAwMDAwMF0g
QUNQSTogTEFQSUMgKGFjcGlfaWRbMHg0MV0gbGFwaWNfaWRbMHg4Ml0gZGlzYWJsZWQpDQpbICAg
IDAuMDAwMDAwXSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDQyXSBsYXBpY19pZFsweDg0XSBkaXNh
YmxlZCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElDIChhY3BpX2lkWzB4NDNdIGxhcGljX2lk
WzB4ODZdIGRpc2FibGVkKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHg0
NF0gbGFwaWNfaWRbMHg4OF0gZGlzYWJsZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBMQVBJQyAo
YWNwaV9pZFsweDQ1XSBsYXBpY19pZFsweDhhXSBkaXNhYmxlZCkNClsgICAgMC4wMDAwMDBdIEFD
UEk6IExBUElDIChhY3BpX2lkWzB4NDZdIGxhcGljX2lkWzB4OGNdIGRpc2FibGVkKQ0KWyAgICAw
LjAwMDAwMF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHg0N10gbGFwaWNfaWRbMHg4ZV0gZGlzYWJs
ZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDQ4XSBsYXBpY19pZFsw
eDkwXSBkaXNhYmxlZCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElDIChhY3BpX2lkWzB4NDld
IGxhcGljX2lkWzB4OTJdIGRpc2FibGVkKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogTEFQSUMgKGFj
cGlfaWRbMHg0YV0gbGFwaWNfaWRbMHg5NF0gZGlzYWJsZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJ
OiBMQVBJQyAoYWNwaV9pZFsweDRiXSBsYXBpY19pZFsweDk2XSBkaXNhYmxlZCkNClsgICAgMC4w
MDAwMDBdIEFDUEk6IExBUElDIChhY3BpX2lkWzB4NGNdIGxhcGljX2lkWzB4OThdIGRpc2FibGVk
KQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHg0ZF0gbGFwaWNfaWRbMHg5
YV0gZGlzYWJsZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDRlXSBs
YXBpY19pZFsweDljXSBkaXNhYmxlZCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElDIChhY3Bp
X2lkWzB4NGZdIGxhcGljX2lkWzB4OWVdIGRpc2FibGVkKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTog
TEFQSUMgKGFjcGlfaWRbMHg1MF0gbGFwaWNfaWRbMHhhMF0gZGlzYWJsZWQpDQpbICAgIDAuMDAw
MDAwXSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDUxXSBsYXBpY19pZFsweGEyXSBkaXNhYmxlZCkN
ClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElDIChhY3BpX2lkWzB4NTJdIGxhcGljX2lkWzB4YTRd
IGRpc2FibGVkKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHg1M10gbGFw
aWNfaWRbMHhhNl0gZGlzYWJsZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBMQVBJQyAoYWNwaV9p
ZFsweDU0XSBsYXBpY19pZFsweGE4XSBkaXNhYmxlZCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IExB
UElDIChhY3BpX2lkWzB4NTVdIGxhcGljX2lkWzB4YWFdIGRpc2FibGVkKQ0KWyAgICAwLjAwMDAw
MF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHg1Nl0gbGFwaWNfaWRbMHhhY10gZGlzYWJsZWQpDQpb
ICAgIDAuMDAwMDAwXSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDU3XSBsYXBpY19pZFsweGFlXSBk
aXNhYmxlZCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElDIChhY3BpX2lkWzB4NThdIGxhcGlj
X2lkWzB4YjBdIGRpc2FibGVkKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogTEFQSUMgKGFjcGlfaWRb
MHg1OV0gbGFwaWNfaWRbMHhiMl0gZGlzYWJsZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBMQVBJ
QyAoYWNwaV9pZFsweDVhXSBsYXBpY19pZFsweGI0XSBkaXNhYmxlZCkNClsgICAgMC4wMDAwMDBd
IEFDUEk6IExBUElDIChhY3BpX2lkWzB4NWJdIGxhcGljX2lkWzB4YjZdIGRpc2FibGVkKQ0KWyAg
ICAwLjAwMDAwMF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHg1Y10gbGFwaWNfaWRbMHhiOF0gZGlz
YWJsZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDVkXSBsYXBpY19p
ZFsweGJhXSBkaXNhYmxlZCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElDIChhY3BpX2lkWzB4
NWVdIGxhcGljX2lkWzB4YmNdIGRpc2FibGVkKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogTEFQSUMg
KGFjcGlfaWRbMHg1Zl0gbGFwaWNfaWRbMHhiZV0gZGlzYWJsZWQpDQpbICAgIDAuMDAwMDAwXSBB
Q1BJOiBMQVBJQyAoYWNwaV9pZFsweDYwXSBsYXBpY19pZFsweGMwXSBkaXNhYmxlZCkNClsgICAg
MC4wMDAwMDBdIEFDUEk6IExBUElDIChhY3BpX2lkWzB4NjFdIGxhcGljX2lkWzB4YzJdIGRpc2Fi
bGVkKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHg2Ml0gbGFwaWNfaWRb
MHhjNF0gZGlzYWJsZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDYz
XSBsYXBpY19pZFsweGM2XSBkaXNhYmxlZCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElDIChh
Y3BpX2lkWzB4NjRdIGxhcGljX2lkWzB4YzhdIGRpc2FibGVkKQ0KWyAgICAwLjAwMDAwMF0gQUNQ
STogTEFQSUMgKGFjcGlfaWRbMHg2NV0gbGFwaWNfaWRbMHhjYV0gZGlzYWJsZWQpDQpbICAgIDAu
MDAwMDAwXSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDY2XSBsYXBpY19pZFsweGNjXSBkaXNhYmxl
ZCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElDIChhY3BpX2lkWzB4NjddIGxhcGljX2lkWzB4
Y2VdIGRpc2FibGVkKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHg2OF0g
bGFwaWNfaWRbMHhkMF0gZGlzYWJsZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBMQVBJQyAoYWNw
aV9pZFsweDY5XSBsYXBpY19pZFsweGQyXSBkaXNhYmxlZCkNClsgICAgMC4wMDAwMDBdIEFDUEk6
IExBUElDIChhY3BpX2lkWzB4NmFdIGxhcGljX2lkWzB4ZDRdIGRpc2FibGVkKQ0KWyAgICAwLjAw
MDAwMF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHg2Yl0gbGFwaWNfaWRbMHhkNl0gZGlzYWJsZWQp
DQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDZjXSBsYXBpY19pZFsweGQ4
XSBkaXNhYmxlZCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElDIChhY3BpX2lkWzB4NmRdIGxh
cGljX2lkWzB4ZGFdIGRpc2FibGVkKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogTEFQSUMgKGFjcGlf
aWRbMHg2ZV0gbGFwaWNfaWRbMHhkY10gZGlzYWJsZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBM
QVBJQyAoYWNwaV9pZFsweDZmXSBsYXBpY19pZFsweGRlXSBkaXNhYmxlZCkNClsgICAgMC4wMDAw
MDBdIEFDUEk6IExBUElDIChhY3BpX2lkWzB4NzBdIGxhcGljX2lkWzB4ZTBdIGRpc2FibGVkKQ0K
WyAgICAwLjAwMDAwMF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHg3MV0gbGFwaWNfaWRbMHhlMl0g
ZGlzYWJsZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDcyXSBsYXBp
Y19pZFsweGU0XSBkaXNhYmxlZCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElDIChhY3BpX2lk
WzB4NzNdIGxhcGljX2lkWzB4ZTZdIGRpc2FibGVkKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogTEFQ
SUMgKGFjcGlfaWRbMHg3NF0gbGFwaWNfaWRbMHhlOF0gZGlzYWJsZWQpDQpbICAgIDAuMDAwMDAw
XSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDc1XSBsYXBpY19pZFsweGVhXSBkaXNhYmxlZCkNClsg
ICAgMC4wMDAwMDBdIEFDUEk6IExBUElDIChhY3BpX2lkWzB4NzZdIGxhcGljX2lkWzB4ZWNdIGRp
c2FibGVkKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHg3N10gbGFwaWNf
aWRbMHhlZV0gZGlzYWJsZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsw
eDc4XSBsYXBpY19pZFsweGYwXSBkaXNhYmxlZCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElD
IChhY3BpX2lkWzB4NzldIGxhcGljX2lkWzB4ZjJdIGRpc2FibGVkKQ0KWyAgICAwLjAwMDAwMF0g
QUNQSTogTEFQSUMgKGFjcGlfaWRbMHg3YV0gbGFwaWNfaWRbMHhmNF0gZGlzYWJsZWQpDQpbICAg
IDAuMDAwMDAwXSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDdiXSBsYXBpY19pZFsweGY2XSBkaXNh
YmxlZCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElDIChhY3BpX2lkWzB4N2NdIGxhcGljX2lk
WzB4ZjhdIGRpc2FibGVkKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHg3
ZF0gbGFwaWNfaWRbMHhmYV0gZGlzYWJsZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBMQVBJQyAo
YWNwaV9pZFsweDdlXSBsYXBpY19pZFsweGZjXSBkaXNhYmxlZCkNClsgICAgMC4wMDAwMDBdIEFD
UEk6IExBUElDIChhY3BpX2lkWzB4N2ZdIGxhcGljX2lkWzB4ZmVdIGRpc2FibGVkKQ0KWyAgICAw
LjAwMDAwMF0gQUNQSTogSU9BUElDIChpZFsweDAxXSBhZGRyZXNzWzB4ZmVjMDAwMDBdIGdzaV9i
YXNlWzBdKQ0KWyAgICAwLjAwMDAwMF0gSU9BUElDWzBdOiBhcGljX2lkIDEsIHZlcnNpb24gMTcs
IGFkZHJlc3MgMHhmZWMwMDAwMCwgR1NJIDAtNDcNClsgICAgMC4wMDAwMDBdIEFDUEk6IElOVF9T
UkNfT1ZSIChidXMgMCBidXNfaXJxIDAgZ2xvYmFsX2lycSAyIGRmbCBkZmwpDQpbICAgIDAuMDAw
MDAwXSBBQ1BJOiBJTlRfU1JDX09WUiAoYnVzIDAgYnVzX2lycSA1IGdsb2JhbF9pcnEgNSBsb3cg
bGV2ZWwpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBJTlRfU1JDX09WUiAoYnVzIDAgYnVzX2lycSAx
MCBnbG9iYWxfaXJxIDEwIGxvdyBsZXZlbCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IElOVF9TUkNf
T1ZSIChidXMgMCBidXNfaXJxIDExIGdsb2JhbF9pcnEgMTEgbG93IGxldmVsKQ0KWyAgICAwLjAw
MDAwMF0gQUNQSTogSVJRMCB1c2VkIGJ5IG92ZXJyaWRlLg0KWyAgICAwLjAwMDAwMF0gQUNQSTog
SVJRMiB1c2VkIGJ5IG92ZXJyaWRlLg0KWyAgICAwLjAwMDAwMF0gQUNQSTogSVJRNSB1c2VkIGJ5
IG92ZXJyaWRlLg0KWyAgICAwLjAwMDAwMF0gQUNQSTogSVJROSB1c2VkIGJ5IG92ZXJyaWRlLg0K
WyAgICAwLjAwMDAwMF0gQUNQSTogSVJRMTAgdXNlZCBieSBvdmVycmlkZS4NClsgICAgMC4wMDAw
MDBdIEFDUEk6IElSUTExIHVzZWQgYnkgb3ZlcnJpZGUuDQpbICAgIDAuMDAwMDAwXSBVc2luZyBB
Q1BJIChNQURUKSBmb3IgU01QIGNvbmZpZ3VyYXRpb24gaW5mb3JtYXRpb24NClsgICAgMC4wMDAw
MDBdIEFDUEk6IEhQRVQgaWQ6IDB4ODA4NmEyMDEgYmFzZTogMHhmZWQwMDAwMA0KWyAgICAwLjAw
MDAwMF0gU01QOiBBbGxvd2luZyAxMjggQ1BVcywgMTI2IGhvdHBsdWcgQ1BVcw0KWyAgICAwLjAw
MDAwMF0gbnJfaXJxc19nc2k6IDY0DQpbICAgIDAuMDAwMDAwXSBQTTogUmVnaXN0ZXJlZCBub3Nh
dmUgbWVtb3J5OiAwMDAwMDAwMDAwMDllMDAwIC0gMDAwMDAwMDAwMDA5ZjAwMA0KWyAgICAwLjAw
MDAwMF0gUE06IFJlZ2lzdGVyZWQgbm9zYXZlIG1lbW9yeTogMDAwMDAwMDAwMDA5ZjAwMCAtIDAw
MDAwMDAwMDAwYTAwMDANClsgICAgMC4wMDAwMDBdIFBNOiBSZWdpc3RlcmVkIG5vc2F2ZSBtZW1v
cnk6IDAwMDAwMDAwMDAwYTAwMDAgLSAwMDAwMDAwMDAwMGYwMDAwDQpbICAgIDAuMDAwMDAwXSBQ
TTogUmVnaXN0ZXJlZCBub3NhdmUgbWVtb3J5OiAwMDAwMDAwMDAwMGYwMDAwIC0gMDAwMDAwMDAw
MDEwMDAwMA0KWyAgICAwLjAwMDAwMF0gQWxsb2NhdGluZyBQQ0kgcmVzb3VyY2VzIHN0YXJ0aW5n
IGF0IDdmODAwMDAwIChnYXA6IDdmODAwMDAwOjdjODAwMDAwKQ0KWyAgICAwLjAwMDAwMF0gQm9v
dGluZyBwYXJhdmlydHVhbGl6ZWQga2VybmVsIG9uIFhlbiBIVk0NClsgICAgMC4wMDAwMDBdIHNl
dHVwX3BlcmNwdTogTlJfQ1BVUzoyNTYgbnJfY3B1bWFza19iaXRzOjI1NiBucl9jcHVfaWRzOjEy
OCBucl9ub2RlX2lkczoxDQpbICAgIDAuMDAwMDAwXSBQRVJDUFU6IEVtYmVkZGVkIDI3IHBhZ2Vz
L2NwdSBAZmZmZjg4MDA3YjIwMDAwMCBzNzk2MTYgcjgxOTIgZDIyNzg0IHUxMzEwNzINClsgICAg
MC4wMDAwMDBdIHBjcHUtYWxsb2M6IHM3OTYxNiByODE5MiBkMjI3ODQgdTEzMTA3MiBhbGxvYz0x
KjIwOTcxNTINClsgICAgMC4wMDAwMDBdIHBjcHUtYWxsb2M6IFswXSAwMDAgMDAxIDAwMiAwMDMg
MDA0IDAwNSAwMDYgMDA3IDAwOCAwMDkgMDEwIDAxMSAwMTIgMDEzIDAxNCAwMTUgDQpbICAgIDAu
MDAwMDAwXSBwY3B1LWFsbG9jOiBbMF0gMDE2IDAxNyAwMTggMDE5IDAyMCAwMjEgMDIyIDAyMyAw
MjQgMDI1IDAyNiAwMjcgMDI4IDAyOSAwMzAgMDMxIA0KWyAgICAwLjAwMDAwMF0gcGNwdS1hbGxv
YzogWzBdIDAzMiAwMzMgMDM0IDAzNSAwMzYgMDM3IDAzOCAwMzkgMDQwIDA0MSAwNDIgMDQzIDA0
NCAwNDUgMDQ2IDA0NyANClsgICAgMC4wMDAwMDBdIHBjcHUtYWxsb2M6IFswXSAwNDggMDQ5IDA1
MCAwNTEgMDUyIDA1MyAwNTQgMDU1IDA1NiAwNTcgMDU4IDA1OSAwNjAgMDYxIDA2MiAwNjMgDQpb
ICAgIDAuMDAwMDAwXSBwY3B1LWFsbG9jOiBbMF0gMDY0IDA2NSAwNjYgMDY3IDA2OCAwNjkgMDcw
IDA3MSAwNzIgMDczIDA3NCAwNzUgMDc2IDA3NyAwNzggMDc5IA0KWyAgICAwLjAwMDAwMF0gcGNw
dS1hbGxvYzogWzBdIDA4MCAwODEgMDgyIDA4MyAwODQgMDg1IDA4NiAwODcgMDg4IDA4OSAwOTAg
MDkxIDA5MiAwOTMgMDk0IDA5NSANClsgICAgMC4wMDAwMDBdIHBjcHUtYWxsb2M6IFswXSAwOTYg
MDk3IDA5OCAwOTkgMTAwIDEwMSAxMDIgMTAzIDEwNCAxMDUgMTA2IDEwNyAxMDggMTA5IDExMCAx
MTEgDQpbICAgIDAuMDAwMDAwXSBwY3B1LWFsbG9jOiBbMF0gMTEyIDExMyAxMTQgMTE1IDExNiAx
MTcgMTE4IDExOSAxMjAgMTIxIDEyMiAxMjMgMTI0IDEyNSAxMjYgMTI3IA0KWyAgICAwLjAwMDAw
MF0gQnVpbHQgMSB6b25lbGlzdHMgaW4gTm9kZSBvcmRlciwgbW9iaWxpdHkgZ3JvdXBpbmcgb24u
ICBUb3RhbCBwYWdlczogNTE0OTgwDQpbICAgIDAuMDAwMDAwXSBQb2xpY3kgem9uZTogRE1BMzIN
ClsgICAgMC4wMDAwMDBdIEtlcm5lbCBjb21tYW5kIGxpbmU6IGZpbGU9L2Nkcm9tL3ByZXNlZWQv
dWJ1bnR1LnNlZWQgYm9vdD1jYXNwZXIgaW5pdHJkPS9jYXNwZXIvaW5pdHJkLmx6IHF1aWV0IHNw
bGFzaCBkZWJ1ZyBsb2dsZXZlbD05IGNvbnNvbGU9dHR5MCBjb25zb2xlPXR0eVMwLDExNTIwMG44
IGNvbnNvbGU9dHR5UzAgLS0NClsgICAgMC4wMDAwMDBdIFBJRCBoYXNoIHRhYmxlIGVudHJpZXM6
IDQwOTYgKG9yZGVyOiAzLCAzMjc2OCBieXRlcykNClsgICAgMC4wMDAwMDBdIENoZWNraW5nIGFw
ZXJ0dXJlLi4uDQpbICAgIDAuMDAwMDAwXSBObyBBR1AgYnJpZGdlIGZvdW5kDQpbICAgIDAuMDAw
MDAwXSBDYWxnYXJ5OiBkZXRlY3RpbmcgQ2FsZ2FyeSB2aWEgQklPUyBFQkRBIGFyZWENClsgICAg
MC4wMDAwMDBdIENhbGdhcnk6IFVuYWJsZSB0byBsb2NhdGUgUmlvIEdyYW5kZSB0YWJsZSBpbiBF
QkRBIC0gYmFpbGluZyENClsgICAgMC4wMDAwMDBdIE1lbW9yeTogMjAxODI2NGsvMjA4ODk1Nmsg
YXZhaWxhYmxlICg2MTA0ayBrZXJuZWwgY29kZSwgNDU2ayBhYnNlbnQsIDcwMjM2ayByZXNlcnZl
ZCwgNDg4MGsgZGF0YSwgOTg0ayBpbml0KQ0KWyAgICAwLjAwMDAwMF0gU0xVQjogR2Vuc2xhYnM9
MTUsIEhXYWxpZ249NjQsIE9yZGVyPTAtMywgTWluT2JqZWN0cz0wLCBDUFVzPTEyOCwgTm9kZXM9
MQ0KWyAgICAwLjAwMDAwMF0gSGllcmFyY2hpY2FsIFJDVSBpbXBsZW1lbnRhdGlvbi4NClsgICAg
MC4wMDAwMDBdIAlSQ1UgZHludGljay1pZGxlIGdyYWNlLXBlcmlvZCBhY2NlbGVyYXRpb24gaXMg
ZW5hYmxlZC4NClsgICAgMC4wMDAwMDBdIE5SX0lSUVM6MTY2NDAgbnJfaXJxczoyMTEyIDE2DQpb
ICAgIDAuMDAwMDAwXSBYZW4gSFZNIGNhbGxiYWNrIHZlY3RvciBmb3IgZXZlbnQgZGVsaXZlcnkg
aXMgZW5hYmxlZA0KWyAgICAwLjAwMDAwMF0gQ29uc29sZTogY29sb3VyIFZHQSsgODB4MjUNClsg
ICAgMC4wMDAwMDBdIGNvbnNvbGUgW3R0eTBdIGVuYWJsZWQNClsgICAgMC4wMDAwMDBdIGNvbnNv
bGUgW3R0eVMwXSBlbmFibGVkDQpbICAgIDAuMDAwMDAwXSBhbGxvY2F0ZWQgMTY3NzcyMTYgYnl0
ZXMgb2YgcGFnZV9jZ3JvdXANClsgICAgMC4wMDAwMDBdIHBsZWFzZSB0cnkgJ2Nncm91cF9kaXNh
YmxlPW1lbW9yeScgb3B0aW9uIGlmIHlvdSBkb24ndCB3YW50IG1lbW9yeSBjZ3JvdXBzDQpbICAg
IDAuMDAwMDAwXSBocGV0IGNsb2NrZXZlbnQgcmVnaXN0ZXJlZA0KWyAgICAwLjAwMDAwMF0gRGV0
ZWN0ZWQgMzI5Mi41NzggTUh6IHByb2Nlc3Nvci4NClsgICAgMC4wMDgwMDBdIENhbGlicmF0aW5n
IGRlbGF5IGxvb3AgKHNraXBwZWQpLCB2YWx1ZSBjYWxjdWxhdGVkIHVzaW5nIHRpbWVyIGZyZXF1
ZW5jeS4uIDY1ODUuMTUgQm9nb01JUFMgKGxwaj0xMzE3MDMxMikNClsgICAgMC4wMTQ1MTZdIHBp
ZF9tYXg6IGRlZmF1bHQ6IDEzMTA3MiBtaW5pbXVtOiAxMDI0DQpbICAgIDAuMDE2MDg4XSBTZWN1
cml0eSBGcmFtZXdvcmsgaW5pdGlhbGl6ZWQNClsgICAgMC4wMjAwMTNdIEFwcEFybW9yOiBBcHBB
cm1vciBpbml0aWFsaXplZA0KWyAgICAwLjAyNDAwNF0gWWFtYTogYmVjb21pbmcgbWluZGZ1bC4N
ClsgICAgMC4wMjgzMDRdIERlbnRyeSBjYWNoZSBoYXNoIHRhYmxlIGVudHJpZXM6IDI2MjE0NCAo
b3JkZXI6IDksIDIwOTcxNTIgYnl0ZXMpDQpbICAgIDAuMDMyNzg2XSBJbm9kZS1jYWNoZSBoYXNo
IHRhYmxlIGVudHJpZXM6IDEzMTA3MiAob3JkZXI6IDgsIDEwNDg1NzYgYnl0ZXMpDQpbICAgIDAu
MDQwMzQyXSBNb3VudC1jYWNoZSBoYXNoIHRhYmxlIGVudHJpZXM6IDI1Ng0KWyAgICAwLjA0NDI2
MV0gSW5pdGlhbGl6aW5nIGNncm91cCBzdWJzeXMgY3B1YWNjdA0KWyAgICAwLjA0ODAxM10gSW5p
dGlhbGl6aW5nIGNncm91cCBzdWJzeXMgbWVtb3J5DQpbICAgIDAuMDUxMzk5XSBJbml0aWFsaXpp
bmcgY2dyb3VwIHN1YnN5cyBkZXZpY2VzDQpbICAgIDAuMDUyMDA2XSBJbml0aWFsaXppbmcgY2dy
b3VwIHN1YnN5cyBmcmVlemVyDQpbICAgIDAuMDU2MDA2XSBJbml0aWFsaXppbmcgY2dyb3VwIHN1
YnN5cyBuZXRfY2xzDQpbICAgIDAuMDYwMDA2XSBJbml0aWFsaXppbmcgY2dyb3VwIHN1YnN5cyBi
bGtpbw0KWyAgICAwLjA2NDAxMF0gSW5pdGlhbGl6aW5nIGNncm91cCBzdWJzeXMgcGVyZl9ldmVu
dA0KWyAgICAwLjA2ODExMV0gQ1BVOiBQaHlzaWNhbCBQcm9jZXNzb3IgSUQ6IDANClsgICAgMC4w
NzIwMDZdIENQVTogUHJvY2Vzc29yIENvcmUgSUQ6IDANClsgICAgMC4wNzQ5ODRdIG1jZTogQ1BV
IHN1cHBvcnRzIDkgTUNFIGJhbmtzDQpbICAgIDAuMDg1NzI1XSBBQ1BJOiBDb3JlIHJldmlzaW9u
IDIwMTEwNDEzDQpbICAgIDAuMDk1MDIxXSBmdHJhY2U6IGFsbG9jYXRpbmcgMjU2NTEgZW50cmll
cyBpbiAxMDEgcGFnZXMNClsgICAgMC4xMjUwMjddIHgyYXBpYyBub3QgZW5hYmxlZCwgSVJRIHJl
bWFwcGluZyBpbml0IGZhaWxlZA0KWyAgICAwLjEzMjAxNF0gU3dpdGNoZWQgQVBJQyByb3V0aW5n
IHRvIHBoeXNpY2FsIGZsYXQuDQpbICAgIDAuMTQxMzczXSAuLlRJTUVSOiB2ZWN0b3I9MHgzMCBh
cGljMT0wIHBpbjE9MiBhcGljMj0wIHBpbjI9MA0KWyAgICAwLjE4OTg0NV0gQ1BVMDogSW50ZWwo
UikgWGVvbihSKSBDUFUgRTMtMTIzMCBWMiBAIDMuMzBHSHogc3RlcHBpbmcgMDkNClsgICAgMC4y
MDAwMjRdIFhlbjogdXNpbmcgdmNwdW9wIHRpbWVyIGludGVyZmFjZQ0KWyAgICAwLjIwNDAyN10g
aW5zdGFsbGluZyBYZW4gdGltZXIgZm9yIENQVSAwDQpbICAgIDAuMjA4MTM5XSBQZXJmb3JtYW5j
ZSBFdmVudHM6IGdlbmVyaWMgYXJjaGl0ZWN0ZWQgcGVyZm1vbiwgSW50ZWwgUE1VIGRyaXZlci4N
ClsgICAgMC4yMTQzOTZdIC4uLiB2ZXJzaW9uOiAgICAgICAgICAgICAgICAzDQpbICAgIDAuMjE2
MDE2XSAuLi4gYml0IHdpZHRoOiAgICAgICAgICAgICAgNDgNClsgICAgMC4yMTk0NTldIC4uLiBn
ZW5lcmljIHJlZ2lzdGVyczogICAgICA0DQpbICAgIDAuMjIwMDE3XSAuLi4gdmFsdWUgbWFzazog
ICAgICAgICAgICAgMDAwMGZmZmZmZmZmZmZmZg0KWyAgICAwLjIyNDAxN10gLi4uIG1heCBwZXJp
b2Q6ICAgICAgICAgICAgIDAwMDAwMDAwN2ZmZmZmZmYNClsgICAgMC4yMjgwMTddIC4uLiBmaXhl
ZC1wdXJwb3NlIGV2ZW50czogICAzDQpbICAgIDAuMjMyMDEzXSAuLi4gZXZlbnQgbWFzazogICAg
ICAgICAgICAgMDAwMDAwMDcwMDAwMDAwZg0KWyAgICAwLjIzMjk1MV0gQm9vdGluZyBOb2RlICAg
MCwgUHJvY2Vzc29ycyAgIzENClsgICAgMC4yMzYwMThdIHNtcGJvb3QgY3B1IDE6IHN0YXJ0X2lw
ID0gOTkwMDANClsgICAgMC4zMzIwNzBdIEJyb3VnaHQgdXAgMiBDUFVzDQpbICAgIDAuMzMyMDY5
XSBpbnN0YWxsaW5nIFhlbiB0aW1lciBmb3IgQ1BVIDENClsgICAgMC4zMzYwMzNdIFRvdGFsIG9m
IDIgcHJvY2Vzc29ycyBhY3RpdmF0ZWQgKDEzMjMwLjUwIEJvZ29NSVBTKS4NClsgICAgMC4zNDQx
NzJdIGRldnRtcGZzOiBpbml0aWFsaXplZA0KWyAgICAwLjM0OTkxN10gcHJpbnRfY29uc3RyYWlu
dHM6IGR1bW15OiANClsgICAgMC4zNTIwNjVdIFRpbWU6IDE3OjIzOjEyICBEYXRlOiAwOC8wMS8x
Mg0KWyAgICAwLjM1NjEzNF0gTkVUOiBSZWdpc3RlcmVkIHByb3RvY29sIGZhbWlseSAxNg0KWyAg
ICAwLjM2MDE1OF0gVHJ5aW5nIHRvIHVucGFjayByb290ZnMgaW1hZ2UgYXMgaW5pdHJhbWZzLi4u
DQpbICAgIDEuMzM2MTczXSBBQ1BJOiBidXMgdHlwZSBwY2kgcmVnaXN0ZXJlZA0KWyAgICAxLjM0
MDA4OF0gUENJOiBVc2luZyBjb25maWd1cmF0aW9uIHR5cGUgMSBmb3IgYmFzZSBhY2Nlc3MNClsg
ICAgMS4zNDg5NDZdIGJpbzogY3JlYXRlIHNsYWIgPGJpby0wPiBhdCAwDQpbICAgIDEuMzU4MTkw
XSBBQ1BJOiBFQzogTG9vayB1cCBFQyBpbiBEU0RUDQpbICAgIDEuMzY3MTQzXSBBQ1BJOiBJbnRl
cnByZXRlciBlbmFibGVkDQpbICAgIDEuMzY4MDg5XSBBQ1BJOiAoc3VwcG9ydHMgUzAgUzMgUzQg
UzUpDQpbICAgIDEuMzc2MDkxXSBBQ1BJOiBVc2luZyBJT0FQSUMgZm9yIGludGVycnVwdCByb3V0
aW5nDQpbICAgIDIuMzU4NTA1XSBBQ1BJOiBObyBkb2NrIGRldmljZXMgZm91bmQuDQpbICAgIDIu
MzYwMTUzXSBIRVNUOiBUYWJsZSBub3QgZm91bmQuDQpbICAgIDIuMzYzMTYwXSBGcmVlaW5nIGlu
aXRyZCBtZW1vcnk6IDE0MjIwayBmcmVlZA0KWyAgICAyLjM3MjE2Ml0gUENJOiBVc2luZyBob3N0
IGJyaWRnZSB3aW5kb3dzIGZyb20gQUNQSTsgaWYgbmVjZXNzYXJ5LCB1c2UgInBjaT1ub2NycyIg
YW5kIHJlcG9ydCBhIGJ1Zw0KWyAgICAyLjM4NDI1Nl0gQUNQSTogUENJIFJvb3QgQnJpZGdlIFtQ
Q0kwXSAoZG9tYWluIDAwMDAgW2J1cyAwMC1mZl0pDQpbICAgIDIuMzg4Mjk3XSBwY2lfcm9vdCBQ
TlAwQTAzOjAwOiBob3N0IGJyaWRnZSB3aW5kb3cgW2lvICAweDAwMDAtMHgwY2Y3XQ0KWyAgICAy
LjM5NjE1NV0gcGNpX3Jvb3QgUE5QMEEwMzowMDogaG9zdCBicmlkZ2Ugd2luZG93IFtpbyAgMHgw
ZDAwLTB4ZmZmZl0NClsgICAgMi40MDAxNTRdIHBjaV9yb290IFBOUDBBMDM6MDA6IGhvc3QgYnJp
ZGdlIHdpbmRvdyBbbWVtIDB4MDAwYTAwMDAtMHgwMDBiZmZmZl0NClsgICAgMi40MDQxNTRdIHBj
aV9yb290IFBOUDBBMDM6MDA6IGhvc3QgYnJpZGdlIHdpbmRvdyBbbWVtIDB4ZjAwMDAwMDAtMHhm
YmZmZmZmZl0NClsgICAgMi40MTIyODFdIHBjaSAwMDAwOjAwOjAwLjA6IFs4MDg2OjEyMzddIHR5
cGUgMCBjbGFzcyAweDAwMDYwMA0KWyAgICAyLjQxNzI1Ml0gcGNpIDAwMDA6MDA6MDEuMDogWzgw
ODY6NzAwMF0gdHlwZSAwIGNsYXNzIDB4MDAwNjAxDQpbICAgIDIuNDI1ODY0XSBwY2kgMDAwMDow
MDowMS4xOiBbODA4Njo3MDEwXSB0eXBlIDAgY2xhc3MgMHgwMDAxMDENClsgICAgMi40MzAxNjdd
IHBjaSAwMDAwOjAwOjAxLjE6IHJlZyAyMDogW2lvICAweGMzMDAtMHhjMzBmXQ0KWyAgICAyLjQz
NzI1OV0gcGNpIDAwMDA6MDA6MDEuMzogWzgwODY6NzExM10gdHlwZSAwIGNsYXNzIDB4MDAwNjgw
DQpbICAgIDIuNDQxNzAwXSBwY2kgMDAwMDowMDowMS4zOiBxdWlyazogW2lvICAweGIwMDAtMHhi
MDNmXSBjbGFpbWVkIGJ5IFBJSVg0IEFDUEkNClsgICAgMi40NDgxODVdIHBjaSAwMDAwOjAwOjAx
LjM6IHF1aXJrOiBbaW8gIDB4YjEwMC0weGIxMGZdIGNsYWltZWQgYnkgUElJWDQgU01CDQpbICAg
IDIuNDU2NjQyXSBwY2kgMDAwMDowMDowMi4wOiBbMTAxMzowMGI4XSB0eXBlIDAgY2xhc3MgMHgw
MDAzMDANClsgICAgMi40NjA3ODVdIHBjaSAwMDAwOjAwOjAyLjA6IHJlZyAxMDogW21lbSAweGYw
MDAwMDAwLTB4ZjFmZmZmZmYgcHJlZl0NClsgICAgMi40Njg3MTldIHBjaSAwMDAwOjAwOjAyLjA6
IHJlZyAxNDogW21lbSAweGYzMGU0MDAwLTB4ZjMwZTRmZmZdDQpbICAgIDIuNDc0OTU5XSBwY2kg
MDAwMDowMDowMi4wOiByZWcgMzA6IFttZW0gMHhmMzBjMDAwMC0weGYzMGNmZmZmIHByZWZdDQpb
ICAgIDIuNDg0NDQwXSBwY2kgMDAwMDowMDowMy4wOiBbNTg1MzowMDAxXSB0eXBlIDAgY2xhc3Mg
MHgwMGZmODANClsgICAgMi40ODg4MjBdIHBjaSAwMDAwOjAwOjAzLjA6IHJlZyAxMDogW2lvICAw
eGMwMDAtMHhjMGZmXQ0KWyAgICAyLjQ5MjcyN10gcGNpIDAwMDA6MDA6MDMuMDogcmVnIDE0OiBb
bWVtIDB4ZjIwMDAwMDAtMHhmMmZmZmZmZiBwcmVmXQ0KWyAgICAyLjUwNDIyNl0gcGNpIDAwMDA6
MDA6MDUuMDogWzEwMDA6MDA3Ml0gdHlwZSAwIGNsYXNzIDB4MDAwMTA3DQpbICAgIDIuNTA4ODE3
XSBwY2kgMDAwMDowMDowNS4wOiByZWcgMTA6IFtpbyAgMHhjMjAwLTB4YzJmZl0NClsgICAgMi41
MTY4ODFdIHBjaSAwMDAwOjAwOjA1LjA6IHJlZyAxNDogW21lbSAweGYzMGUwMDAwLTB4ZjMwZTNm
ZmYgNjRiaXRdDQpbICAgIDIuNTI0ODgwXSBwY2kgMDAwMDowMDowNS4wOiByZWcgMWM6IFttZW0g
MHhmMzA4MDAwMC0weGYzMGJmZmZmIDY0Yml0XQ0KWyAgICAyLjUyOTMyNV0gcGNpIDAwMDA6MDA6
MDUuMDogcmVnIDMwOiBbbWVtIDB4ZjMwMDAwMDAtMHhmMzA3ZmZmZiBwcmVmXQ0KWyAgICAyLjUz
NjkwMF0gcGNpIDAwMDA6MDA6MDUuMDogc3VwcG9ydHMgRDEgRDINClsgICAgMi41NDExMzBdIEFD
UEk6IFBDSSBJbnRlcnJ1cHQgUm91dGluZyBUYWJsZSBbXF9TQl8uUENJMC5fUFJUXQ0KWyAgICAy
LjU0ODY1NV0gIHBjaTAwMDA6MDA6IFVuYWJsZSB0byByZXF1ZXN0IF9PU0MgY29udHJvbCAoX09T
QyBzdXBwb3J0IG1hc2s6IDB4MWUpDQpbICAgIDIuNTYwMTcyXSBBQ1BJOiBQQ0kgSW50ZXJydXB0
IExpbmsgW0xOS0FdIChJUlFzICo1IDEwIDExKQ0KWyAgICAyLjU2NzM3NF0gQUNQSTogUENJIElu
dGVycnVwdCBMaW5rIFtMTktCXSAoSVJRcyA1ICoxMCAxMSkNClsgICAgMi41NzE0NjVdIEFDUEk6
IFBDSSBJbnRlcnJ1cHQgTGluayBbTE5LQ10gKElSUXMgNSAxMCAqMTEpDQpbICAgIDIuNTc5Mzg2
XSBBQ1BJOiBQQ0kgSW50ZXJydXB0IExpbmsgW0xOS0RdIChJUlFzICo1IDEwIDExKQ0KWyAgICAy
LjU4NzM0N10geGVuL2JhbGxvb246IEluaXRpYWxpc2luZyBiYWxsb29uIGRyaXZlci4NClsgICAg
Mi41OTIxNjZdIGxhc3RfcGZuID0gMHg3ZjdmZiBtYXhfYXJjaF9wZm4gPSAweDQwMDAwMDAwMA0K
WyAgICAyLjU5NjE3Ml0geGVuLWJhbGxvb246IEluaXRpYWxpc2luZyBiYWxsb29uIGRyaXZlci4N
ClsgICAgMi42MDAzMzZdIHZnYWFyYjogZGV2aWNlIGFkZGVkOiBQQ0k6MDAwMDowMDowMi4wLGRl
Y29kZXM9aW8rbWVtLG93bnM9aW8rbWVtLGxvY2tzPW5vbmUNClsgICAgMi42MDgxNjhdIHZnYWFy
YjogbG9hZGVkDQpbICAgIDIuNjEwOTc2XSB2Z2FhcmI6IGJyaWRnZSBjb250cm9sIHBvc3NpYmxl
IDAwMDA6MDA6MDIuMA0KWyAgICAyLjYxNjM2Ml0gU0NTSSBzdWJzeXN0ZW0gaW5pdGlhbGl6ZWQN
ClsgICAgMi42MjAxOTNdIGxpYmF0YSB2ZXJzaW9uIDMuMDAgbG9hZGVkLg0KWyAgICAyLjYyNDIw
Ml0gdXNiY29yZTogcmVnaXN0ZXJlZCBuZXcgaW50ZXJmYWNlIGRyaXZlciB1c2Jmcw0KWyAgICAy
LjYyODE3NV0gdXNiY29yZTogcmVnaXN0ZXJlZCBuZXcgaW50ZXJmYWNlIGRyaXZlciBodWINClsg
ICAgMi42MzYxOTNdIHVzYmNvcmU6IHJlZ2lzdGVyZWQgbmV3IGRldmljZSBkcml2ZXIgdXNiDQpb
ICAgIDIuNjQwMjMxXSBQQ0k6IFVzaW5nIEFDUEkgZm9yIElSUSByb3V0aW5nDQpbICAgIDIuNjQ0
MTY5XSBQQ0k6IHBjaV9jYWNoZV9saW5lX3NpemUgc2V0IHRvIDY0IGJ5dGVzDQpbICAgIDIuNjQ4
NTgzXSByZXNlcnZlIFJBTSBidWZmZXI6IDAwMDAwMDAwMDAwOWU0MDAgLSAwMDAwMDAwMDAwMDlm
ZmZmIA0KWyAgICAyLjY1MjE3MF0gcmVzZXJ2ZSBSQU0gYnVmZmVyOiAwMDAwMDAwMDdmN2ZmMDAw
IC0gMDAwMDAwMDA3ZmZmZmZmZiANClsgICAgMi42NjAyOTddIE5ldExhYmVsOiBJbml0aWFsaXpp
bmcNClsgICAgMi42NjQxNzBdIE5ldExhYmVsOiAgZG9tYWluIGhhc2ggc2l6ZSA9IDEyOA0KWyAg
ICAyLjY3MjE3MF0gTmV0TGFiZWw6ICBwcm90b2NvbHMgPSBVTkxBQkVMRUQgQ0lQU092NA0KWyAg
ICAyLjY3NjE3N10gTmV0TGFiZWw6ICB1bmxhYmVsZWQgdHJhZmZpYyBhbGxvd2VkIGJ5IGRlZmF1
bHQNClsgICAgMi42ODAyMjZdIEhQRVQ6IDMgdGltZXJzIGluIHRvdGFsLCAwIHRpbWVycyB3aWxs
IGJlIHVzZWQgZm9yIHBlci1jcHUgdGltZXINClsgICAgMi42ODgxODFdIGhwZXQwOiBhdCBNTUlP
IDB4ZmVkMDAwMDAsIElSUXMgMiwgOCwgMA0KWyAgICAyLjY5NTkxOF0gaHBldDA6IDMgY29tcGFy
YXRvcnMsIDY0LWJpdCA2Mi41MDAwMDAgTUh6IGNvdW50ZXINClsgICAgMi43MDgyMDhdIFN3aXRj
aGluZyB0byBjbG9ja3NvdXJjZSB4ZW4NClsgICAgMi43MjAwODRdIFN3aXRjaGVkIHRvIE5PSHog
bW9kZSBvbiBDUFUgIzANClsgICAgMi43MjAwODldIFN3aXRjaGVkIHRvIE5PSHogbW9kZSBvbiBD
UFUgIzENClsgICAgMi43MjQwMzRdIEFwcEFybW9yOiBBcHBBcm1vciBGaWxlc3lzdGVtIEVuYWJs
ZWQNClsgICAgMi43MjQxMDVdIHBucDogUG5QIEFDUEkgaW5pdA0KWyAgICAyLjcyNDEyNF0gQUNQ
STogYnVzIHR5cGUgcG5wIHJlZ2lzdGVyZWQNClsgICAgMi43MjQxNjJdIHBucCAwMDowMDogW21l
bSAweDAwMDAwMDAwLTB4MDAwOWZmZmZdDQpbICAgIDIuNzI0MjEyXSBzeXN0ZW0gMDA6MDA6IFtt
ZW0gMHgwMDAwMDAwMC0weDAwMDlmZmZmXSBjb3VsZCBub3QgYmUgcmVzZXJ2ZWQNClsgICAgMi43
MjQyMThdIHN5c3RlbSAwMDowMDogUGx1ZyBhbmQgUGxheSBBQ1BJIGRldmljZSwgSURzIFBOUDBj
MDIgKGFjdGl2ZSkNClsgICAgMi43MjQzMjddIHBucCAwMDowMTogW2J1cyAwMC1mZl0NClsgICAg
Mi43MjQzMzBdIHBucCAwMDowMTogW2lvICAweDBjZjgtMHgwY2ZmXQ0KWyAgICAyLjcyNDMzM10g
cG5wIDAwOjAxOiBbaW8gIDB4MDAwMC0weDBjZjcgd2luZG93XQ0KWyAgICAyLjcyNDMzNl0gcG5w
IDAwOjAxOiBbaW8gIDB4MGQwMC0weGZmZmYgd2luZG93XQ0KWyAgICAyLjcyNDMzOV0gcG5wIDAw
OjAxOiBbbWVtIDB4MDAwYTAwMDAtMHgwMDBiZmZmZiB3aW5kb3ddDQpbICAgIDIuNzI0MzQzXSBw
bnAgMDA6MDE6IFttZW0gMHhmMDAwMDAwMC0weGZiZmZmZmZmIHdpbmRvd10NClsgICAgMi43MjQ0
MDVdIHBucCAwMDowMTogUGx1ZyBhbmQgUGxheSBBQ1BJIGRldmljZSwgSURzIFBOUDBhMDMgKGFj
dGl2ZSkNClsgICAgMi43MjQ0NDddIHBucCAwMDowMjogW21lbSAweGZlZDAwMDAwLTB4ZmVkMDAz
ZmZdDQpbICAgIDIuNzI0NDc4XSBwbnAgMDA6MDI6IFBsdWcgYW5kIFBsYXkgQUNQSSBkZXZpY2Us
IElEcyBQTlAwMTAzIChhY3RpdmUpDQpbICAgIDIuNzI0NTA5XSBwbnAgMDA6MDM6IFtpbyAgMHgw
MDEwLTB4MDAxZl0NClsgICAgMi43MjQ1MTJdIHBucCAwMDowMzogW2lvICAweDAwMjItMHgwMDJk
XQ0KWyAgICAyLjcyNDUxNF0gcG5wIDAwOjAzOiBbaW8gIDB4MDAzMC0weDAwM2ZdDQpbICAgIDIu
NzI0NTE3XSBwbnAgMDA6MDM6IFtpbyAgMHgwMDQ0LTB4MDA1Zl0NClsgICAgMi43MjQ1MTldIHBu
cCAwMDowMzogW2lvICAweDAwNjItMHgwMDYzXQ0KWyAgICAyLjcyNDUyMl0gcG5wIDAwOjAzOiBb
aW8gIDB4MDA2NS0weDAwNmZdDQpbICAgIDIuNzI0NTI0XSBwbnAgMDA6MDM6IFtpbyAgMHgwMDcy
LTB4MDA3Zl0NClsgICAgMi43MjQ1MjddIHBucCAwMDowMzogW2lvICAweDAwODBdDQpbICAgIDIu
NzI0NTMwXSBwbnAgMDA6MDM6IFtpbyAgMHgwMDg0LTB4MDA4Nl0NClsgICAgMi43MjQ1MzNdIHBu
cCAwMDowMzogW2lvICAweDAwODhdDQpbICAgIDIuNzI0NTM1XSBwbnAgMDA6MDM6IFtpbyAgMHgw
MDhjLTB4MDA4ZV0NClsgICAgMi43MjQ1MzddIHBucCAwMDowMzogW2lvICAweDAwOTAtMHgwMDlm
XQ0KWyAgICAyLjcyNDU0MF0gcG5wIDAwOjAzOiBbaW8gIDB4MDBhMi0weDAwYmRdDQpbICAgIDIu
NzI0NTQyXSBwbnAgMDA6MDM6IFtpbyAgMHgwMGUwLTB4MDBlZl0NClsgICAgMi43MjQ1NDVdIHBu
cCAwMDowMzogW2lvICAweDA4YTAtMHgwOGEzXQ0KWyAgICAyLjcyNDU0N10gcG5wIDAwOjAzOiBb
aW8gIDB4MGNjMC0weDBjY2ZdDQpbICAgIDIuNzI0NTQ5XSBwbnAgMDA6MDM6IFtpbyAgMHgwNGQw
LTB4MDRkMV0NClsgICAgMi43MjQ2MDRdIHN5c3RlbSAwMDowMzogW2lvICAweDA4YTAtMHgwOGEz
XSBoYXMgYmVlbiByZXNlcnZlZA0KWyAgICAyLjcyNDYwN10gc3lzdGVtIDAwOjAzOiBbaW8gIDB4
MGNjMC0weDBjY2ZdIGhhcyBiZWVuIHJlc2VydmVkDQpbICAgIDIuNzI0NjExXSBzeXN0ZW0gMDA6
MDM6IFtpbyAgMHgwNGQwLTB4MDRkMV0gaGFzIGJlZW4gcmVzZXJ2ZWQNClsgICAgMi43MjQ2MTZd
IHN5c3RlbSAwMDowMzogUGx1ZyBhbmQgUGxheSBBQ1BJIGRldmljZSwgSURzIFBOUDBjMDIgKGFj
dGl2ZSkNClsgICAgMi43MjQ2MzZdIHBucCAwMDowNDogW2RtYSA0XQ0KWyAgICAyLjcyNDYzOF0g
cG5wIDAwOjA0OiBbaW8gIDB4MDAwMC0weDAwMGZdDQpbICAgIDIuNzI0NjQxXSBwbnAgMDA6MDQ6
IFtpbyAgMHgwMDgxLTB4MDA4M10NClsgICAgMi43MjQ2NDNdIHBucCAwMDowNDogW2lvICAweDAw
ODddDQpbICAgIDIuNzI0NjQ2XSBwbnAgMDA6MDQ6IFtpbyAgMHgwMDg5LTB4MDA4Yl0NClsgICAg
Mi43MjQ2NTJdIHBucCAwMDowNDogW2lvICAweDAwOGZdDQpbICAgIDIuNzI0NjU0XSBwbnAgMDA6
MDQ6IFtpbyAgMHgwMGMwLTB4MDBkZl0NClsgICAgMi43MjQ2NTddIHBucCAwMDowNDogW2lvICAw
eDA0ODAtMHgwNDhmXQ0KWyAgICAyLjcyNDY4OF0gcG5wIDAwOjA0OiBQbHVnIGFuZCBQbGF5IEFD
UEkgZGV2aWNlLCBJRHMgUE5QMDIwMCAoYWN0aXZlKQ0KWyAgICAyLjcyNDcwNV0gcG5wIDAwOjA1
OiBbaW8gIDB4MDA3MC0weDAwNzFdDQpbICAgIDIuNzI0NzQwXSB4ZW46IC0tPiBpcnE9OCwgcGly
cT0xNw0KWyAgICAyLjcyNDc0NF0gcG5wIDAwOjA1OiBbaXJxIDhdDQpbICAgIDIuNzI0Nzc2XSBw
bnAgMDA6MDU6IFBsdWcgYW5kIFBsYXkgQUNQSSBkZXZpY2UsIElEcyBQTlAwYjAwIChhY3RpdmUp
DQpbICAgIDIuNzI0NzkwXSBwbnAgMDA6MDY6IFtpbyAgMHgwMDYxXQ0KWyAgICAyLjcyNDgyM10g
cG5wIDAwOjA2OiBQbHVnIGFuZCBQbGF5IEFDUEkgZGV2aWNlLCBJRHMgUE5QMDgwMCAoYWN0aXZl
KQ0KWyAgICAyLjcyNDg2OV0geGVuOiAtLT4gaXJxPTEyLCBwaXJxPTE4DQpbICAgIDIuNzI0ODcz
XSBwbnAgMDA6MDc6IFtpcnEgMTJdDQpbICAgIDIuNzI0OTAzXSBwbnAgMDA6MDc6IFBsdWcgYW5k
IFBsYXkgQUNQSSBkZXZpY2UsIElEcyBQTlAwZjEzIChhY3RpdmUpDQpbICAgIDIuNzI0OTMyXSBw
bnAgMDA6MDg6IFtpbyAgMHgwMDYwXQ0KWyAgICAyLjcyNDkzNV0gcG5wIDAwOjA4OiBbaW8gIDB4
MDA2NF0NClsgICAgMi43MjQ5NTldIHhlbjogLS0+IGlycT0xLCBwaXJxPTE5DQpbICAgIDIuNzI0
OTYzXSBwbnAgMDA6MDg6IFtpcnEgMV0NClsgICAgMi43MjQ5OThdIHBucCAwMDowODogUGx1ZyBh
bmQgUGxheSBBQ1BJIGRldmljZSwgSURzIFBOUDAzMDMgUE5QMDMwYiAoYWN0aXZlKQ0KWyAgICAy
LjcyNTAyNF0gcG5wIDAwOjA5OiBbaW8gIDB4MDNmMC0weDAzZjVdDQpbICAgIDIuNzI1MDI3XSBw
bnAgMDA6MDk6IFtpbyAgMHgwM2Y3XQ0KWyAgICAyLjcyNTA0OV0geGVuOiAtLT4gaXJxPTYsIHBp
cnE9MjANClsgICAgMi43MjUwNTJdIHBucCAwMDowOTogW2lycSA2XQ0KWyAgICAyLjcyNTA1NV0g
cG5wIDAwOjA5OiBbZG1hIDJdDQpbICAgIDIuNzI1MDkzXSBwbnAgMDA6MDk6IFBsdWcgYW5kIFBs
YXkgQUNQSSBkZXZpY2UsIElEcyBQTlAwNzAwIChhY3RpdmUpDQpbICAgIDIuNzI1MTM1XSBwbnAg
MDA6MGE6IFtpbyAgMHgwM2Y4LTB4MDNmZl0NClsgICAgMi43MjUxNTldIHhlbjogLS0+IGlycT00
LCBwaXJxPTIxDQpbICAgIDIuNzI1MTYyXSBwbnAgMDA6MGE6IFtpcnEgNF0NClsgICAgMi43MjUx
OTddIHBucCAwMDowYTogUGx1ZyBhbmQgUGxheSBBQ1BJIGRldmljZSwgSURzIFBOUDA1MDEgKGFj
dGl2ZSkNClsgICAgMi43MjUyNDldIHBucCAwMDowYjogW2lvICAweDAzNzgtMHgwMzdmXQ0KWyAg
ICAyLjcyNTI3M10geGVuOiAtLT4gaXJxPTcsIHBpcnE9MjINClsgICAgMi43MjUyNzZdIHBucCAw
MDowYjogW2lycSA3XQ0KWyAgICAyLjcyNTMxNV0gcG5wIDAwOjBiOiBQbHVnIGFuZCBQbGF5IEFD
UEkgZGV2aWNlLCBJRHMgUE5QMDQwMCAoYWN0aXZlKQ0KWyAgICAyLjcyNTM1Nl0gcG5wIDAwOjBj
OiBbaW8gIDB4YWUwMC0weGFlMGZdDQpbICAgIDIuNzI1MzU5XSBwbnAgMDA6MGM6IFtpbyAgMHhi
MDQ0LTB4YjA0N10NClsgICAgMi43MjU0MDZdIHN5c3RlbSAwMDowYzogW2lvICAweGFlMDAtMHhh
ZTBmXSBoYXMgYmVlbiByZXNlcnZlZA0KWyAgICAyLjcyNTQxMF0gc3lzdGVtIDAwOjBjOiBbaW8g
IDB4YjA0NC0weGIwNDddIGhhcyBiZWVuIHJlc2VydmVkDQpbICAgIDIuNzI1NDE1XSBzeXN0ZW0g
MDA6MGM6IFBsdWcgYW5kIFBsYXkgQUNQSSBkZXZpY2UsIElEcyBQTlAwYzAyIChhY3RpdmUpDQpb
ICAgIDIuNzI2MDYzXSBwbnA6IFBuUCBBQ1BJOiBmb3VuZCAxMyBkZXZpY2VzDQpbICAgIDIuNzI2
MDY1XSBBQ1BJOiBBQ1BJIGJ1cyB0eXBlIHBucCB1bnJlZ2lzdGVyZWQNClsgICAgMi43MzI0MzVd
IFBDSTogbWF4IGJ1cyBkZXB0aDogMCBwY2lfdHJ5X251bTogMQ0KWyAgICAyLjczMjQ0NV0gcGNp
X2J1cyAwMDAwOjAwOiByZXNvdXJjZSA0IFtpbyAgMHgwMDAwLTB4MGNmN10NClsgICAgMi43MzI0
NDldIHBjaV9idXMgMDAwMDowMDogcmVzb3VyY2UgNSBbaW8gIDB4MGQwMC0weGZmZmZdDQpbICAg
IDIuNzMyNDUyXSBwY2lfYnVzIDAwMDA6MDA6IHJlc291cmNlIDYgW21lbSAweDAwMGEwMDAwLTB4
MDAwYmZmZmZdDQpbICAgIDIuNzMyNDU2XSBwY2lfYnVzIDAwMDA6MDA6IHJlc291cmNlIDcgW21l
bSAweGYwMDAwMDAwLTB4ZmJmZmZmZmZdDQpbICAgIDIuNzMyNTMyXSBORVQ6IFJlZ2lzdGVyZWQg
cHJvdG9jb2wgZmFtaWx5IDINClsgICAgMi43MzI3NTFdIElQIHJvdXRlIGNhY2hlIGhhc2ggdGFi
bGUgZW50cmllczogNjU1MzYgKG9yZGVyOiA3LCA1MjQyODggYnl0ZXMpDQpbICAgIDIuNzMzNDI3
XSBUQ1AgZXN0YWJsaXNoZWQgaGFzaCB0YWJsZSBlbnRyaWVzOiAyNjIxNDQgKG9yZGVyOiAxMCwg
NDE5NDMwNCBieXRlcykNClsgICAgMi43MzQyNDBdIFRDUCBiaW5kIGhhc2ggdGFibGUgZW50cmll
czogNjU1MzYgKG9yZGVyOiA4LCAxMDQ4NTc2IGJ5dGVzKQ0KWyAgICAyLjczNDM5MV0gVENQOiBI
YXNoIHRhYmxlcyBjb25maWd1cmVkIChlc3RhYmxpc2hlZCAyNjIxNDQgYmluZCA2NTUzNikNClsg
ICAgMi43MzQzOTNdIFRDUCByZW5vIHJlZ2lzdGVyZWQNClsgICAgMi43MzQ0MDJdIFVEUCBoYXNo
IHRhYmxlIGVudHJpZXM6IDEwMjQgKG9yZGVyOiAzLCAzMjc2OCBieXRlcykNClsgICAgMi43MzQ0
MTVdIFVEUC1MaXRlIGhhc2ggdGFibGUgZW50cmllczogMTAyNCAob3JkZXI6IDMsIDMyNzY4IGJ5
dGVzKQ0KWyAgICAyLjczNDkyMF0gTkVUOiBSZWdpc3RlcmVkIHByb3RvY29sIGZhbWlseSAxDQpb
ICAgIDIuNzM0OTM0XSBwY2kgMDAwMDowMDowMC4wOiBMaW1pdGluZyBkaXJlY3QgUENJL1BDSSB0
cmFuc2ZlcnMNClsgICAgMi43MzUxMTddIHBjaSAwMDAwOjAwOjAxLjA6IFBJSVgzOiBFbmFibGlu
ZyBQYXNzaXZlIFJlbGVhc2UNClsgICAgMi43MzU0OTBdIHBjaSAwMDAwOjAwOjAxLjA6IEFjdGl2
YXRpbmcgSVNBIERNQSBoYW5nIHdvcmthcm91bmRzDQpbICAgIDIuNzM2MjM2XSBwY2kgMDAwMDow
MDowMi4wOiBCb290IHZpZGVvIGRldmljZQ0KWyAgICAyLjczNjc3Nl0gUENJOiBDTFMgMCBieXRl
cywgZGVmYXVsdCA2NA0KWyAgICAzLjEyOTQ5NF0gYXVkaXQ6IGluaXRpYWxpemluZyBuZXRsaW5r
IHNvY2tldCAoZGlzYWJsZWQpDQpbICAgIDMuMTM0NTcyXSB0eXBlPTIwMDAgYXVkaXQoMTM0Mzg0
MTc5NC45ODI6MSk6IGluaXRpYWxpemVkDQpbICAgIDMuMTU5NDEzXSBIdWdlVExCIHJlZ2lzdGVy
ZWQgMiBNQiBwYWdlIHNpemUsIHByZS1hbGxvY2F0ZWQgMCBwYWdlcw0KWyAgICAzLjE3NTIxNF0g
VkZTOiBEaXNrIHF1b3RhcyBkcXVvdF82LjUuMg0KWyAgICAzLjE4MTk4N10gRHF1b3QtY2FjaGUg
aGFzaCB0YWJsZSBlbnRyaWVzOiA1MTIgKG9yZGVyIDAsIDQwOTYgYnl0ZXMpDQpbICAgIDMuMTky
NzI2XSBmdXNlIGluaXQgKEFQSSB2ZXJzaW9uIDcuMTYpDQpbICAgIDMuMTk4OTkxXSBtc2dtbmkg
aGFzIGJlZW4gc2V0IHRvIDM5NjkNClsgICAgMy4yMDcyNTZdIEJsb2NrIGxheWVyIFNDU0kgZ2Vu
ZXJpYyAoYnNnKSBkcml2ZXIgdmVyc2lvbiAwLjQgbG9hZGVkIChtYWpvciAyNTMpDQpbICAgIDMu
MjE4NTUyXSBpbyBzY2hlZHVsZXIgbm9vcCByZWdpc3RlcmVkDQpbICAgIDMuMjI0OTk1XSBpbyBz
Y2hlZHVsZXIgZGVhZGxpbmUgcmVnaXN0ZXJlZA0KWyAgICAzLjIzMDQyNl0gaW8gc2NoZWR1bGVy
IGNmcSByZWdpc3RlcmVkIChkZWZhdWx0KQ0KWyAgICAzLjIzNTAyM10gcGNpX2hvdHBsdWc6IFBD
SSBIb3QgUGx1ZyBQQ0kgQ29yZSB2ZXJzaW9uOiAwLjUNClsgICAgMy4yMzk0NTJdIHBjaWVocDog
UENJIEV4cHJlc3MgSG90IFBsdWcgQ29udHJvbGxlciBEcml2ZXIgdmVyc2lvbjogMC40DQpbICAg
IDMuMjQ0NjAzXSBpbnB1dDogUG93ZXIgQnV0dG9uIGFzIC9kZXZpY2VzL0xOWFNZU1RNOjAwL0xO
WFBXUkJOOjAwL2lucHV0L2lucHV0MA0KWyAgICAzLjI1MDc0Ml0gQUNQSTogUG93ZXIgQnV0dG9u
IFtQV1JGXQ0KWyAgICAzLjI1Mzk4MF0gaW5wdXQ6IFNsZWVwIEJ1dHRvbiBhcyAvZGV2aWNlcy9M
TlhTWVNUTTowMC9MTlhTTFBCTjowMC9pbnB1dC9pbnB1dDENClsgICAgMy4yNTk5NjddIEFDUEk6
IFNsZWVwIEJ1dHRvbiBbU0xQRl0NClsgICAgMy4yNjM3NjJdIEFDUEk6IGFjcGlfaWRsZSByZWdp
c3RlcmVkIHdpdGggY3B1aWRsZQ0KWyAgICAzLjI2ODcxOF0gRVJTVDogVGFibGUgaXMgbm90IGZv
dW5kIQ0KWyAgICAzLjI3MjAyNF0gU2VyaWFsOiA4MjUwLzE2NTUwIGRyaXZlciwgMzIgcG9ydHMs
IElSUSBzaGFyaW5nIGVuYWJsZWQNClsgICAgMy4zMDQ5MDZdIHNlcmlhbDgyNTA6IHR0eVMwIGF0
IEkvTyAweDNmOCAoaXJxID0gNCkgaXMgYSAxNjU1MEENClsgICAgMy40OTAwNjRdIDAwOjBhOiB0
dHlTMCBhdCBJL08gMHgzZjggKGlycSA9IDQpIGlzIGEgMTY1NTBBDQpbICAgIDMuNDk4MjE0XSBM
aW51eCBhZ3BnYXJ0IGludGVyZmFjZSB2MC4xMDMNClsgICAgMy41MDUwODNdIGJyZDogbW9kdWxl
IGxvYWRlZA0KWyAgICAzLjUxMDQzM10gbG9vcDogbW9kdWxlIGxvYWRlZA0KWyAgICAzLjUxMzQ1
MV0gYXRhX3BpaXggMDAwMDowMDowMS4xOiB2ZXJzaW9uIDIuMTMNClsgICAgMy41MTczNjhdIGF0
YV9waWl4IDAwMDA6MDA6MDEuMTogc2V0dGluZyBsYXRlbmN5IHRpbWVyIHRvIDY0DQpbICAgIDMu
NTIyMzM3XSBzY3NpMCA6IGF0YV9waWl4DQpbICAgIDMuNTI1MjMwXSBzY3NpMSA6IGF0YV9waWl4
DQpbICAgIDMuNTI4MDI1XSBhdGExOiBQQVRBIG1heCBNV0RNQTIgY21kIDB4MWYwIGN0bCAweDNm
NiBibWRtYSAweGMzMDAgaXJxIDE0DQpbICAgIDMuNTMzMTU5XSBhdGEyOiBQQVRBIG1heCBNV0RN
QTIgY21kIDB4MTcwIGN0bCAweDM3NiBibWRtYSAweGMzMDggaXJxIDE1DQpbICAgIDMuNTM4Nzc5
XSBGaXhlZCBNRElPIEJ1czogcHJvYmVkDQpbICAgIDMuNTQyNTU1XSBQUFAgZ2VuZXJpYyBkcml2
ZXIgdmVyc2lvbiAyLjQuMg0KWyAgICAzLjU0NzEwNV0gdHVuOiBVbml2ZXJzYWwgVFVOL1RBUCBk
ZXZpY2UgZHJpdmVyLCAxLjYNClsgICAgMy41NTE0OTRdIHR1bjogKEMpIDE5OTktMjAwNCBNYXgg
S3Jhc255YW5za3kgPG1heGtAcXVhbGNvbW0uY29tPg0KWyAgICAzLjU1NjY3N10gZWhjaV9oY2Q6
IFVTQiAyLjAgJ0VuaGFuY2VkJyBIb3N0IENvbnRyb2xsZXIgKEVIQ0kpIERyaXZlcg0KWyAgICAz
LjU2MjA4MF0gb2hjaV9oY2Q6IFVTQiAxLjEgJ09wZW4nIEhvc3QgQ29udHJvbGxlciAoT0hDSSkg
RHJpdmVyDQpbICAgIDMuNTY3OTU3XSB1aGNpX2hjZDogVVNCIFVuaXZlcnNhbCBIb3N0IENvbnRy
b2xsZXIgSW50ZXJmYWNlIGRyaXZlcg0KWyAgICAzLjU3MzMwN10gaTgwNDI6IFBOUDogUFMvMiBD
b250cm9sbGVyIFtQTlAwMzAzOlBTMkssUE5QMGYxMzpQUzJNXSBhdCAweDYwLDB4NjQgaXJxIDEs
MTINClsgICAgMy41ODM2ODVdIHNlcmlvOiBpODA0MiBLQkQgcG9ydCBhdCAweDYwLDB4NjQgaXJx
IDENClsgICAgMy41ODc4NzZdIHNlcmlvOiBpODA0MiBBVVggcG9ydCBhdCAweDYwLDB4NjQgaXJx
IDEyDQpbICAgIDMuNTkyNDM3XSBtb3VzZWRldjogUFMvMiBtb3VzZSBkZXZpY2UgY29tbW9uIGZv
ciBhbGwgbWljZQ0KWyAgICAzLjU5NzY2NF0gaW5wdXQ6IEFUIFRyYW5zbGF0ZWQgU2V0IDIga2V5
Ym9hcmQgYXMgL2RldmljZXMvcGxhdGZvcm0vaTgwNDIvc2VyaW8wL2lucHV0L2lucHV0Mg0KWyAg
ICAzLjYwNDk3NF0gcnRjX2Ntb3MgMDA6MDU6IHJ0YyBjb3JlOiByZWdpc3RlcmVkIHJ0Y19jbW9z
IGFzIHJ0YzANClsgICAgMy42MTAwMjFdIHJ0YzA6IGFsYXJtcyB1cCB0byBvbmUgZGF5LCAxMTQg
Ynl0ZXMgbnZyYW0sIGhwZXQgaXJxcw0KWyAgICAzLjYxNTIzOF0gZGV2aWNlLW1hcHBlcjogdWV2
ZW50OiB2ZXJzaW9uIDEuMC4zDQpbICAgIDMuNjE5NjMwXSBkZXZpY2UtbWFwcGVyOiBpb2N0bDog
NC4yMC4wLWlvY3RsICgyMDExLTAyLTAyKSBpbml0aWFsaXNlZDogZG0tZGV2ZWxAcmVkaGF0LmNv
bQ0KWyAgICAzLjYyNjgxOF0gY3B1aWRsZTogdXNpbmcgZ292ZXJub3IgbGFkZGVyDQpbICAgIDMu
NjMxMTU2XSBjcHVpZGxlOiB1c2luZyBnb3Zlcm5vciBtZW51DQpbICAgIDMuNjM0NzUyXSBFRkkg
VmFyaWFibGVzIEZhY2lsaXR5IHYwLjA4IDIwMDQtTWF5LTE3DQpbICAgIDMuNjM5MjQxXSBUQ1Ag
Y3ViaWMgcmVnaXN0ZXJlZA0KWyAgICAzLjY0MjU4NV0gTkVUOiBSZWdpc3RlcmVkIHByb3RvY29s
IGZhbWlseSAxMA0KWyAgICAzLjY0NzUwM10gTkVUOiBSZWdpc3RlcmVkIHByb3RvY29sIGZhbWls
eSAxNw0KWyAgICAzLjY1MTUzNF0gUmVnaXN0ZXJpbmcgdGhlIGRuc19yZXNvbHZlciBrZXkgdHlw
ZQ0KWyAgICAzLjY1NTc0MV0gUE06IEhpYmVybmF0aW9uIGltYWdlIG5vdCBwcmVzZW50IG9yIGNv
dWxkIG5vdCBiZSBsb2FkZWQuDQpbICAgIDMuNjYxNjY5XSByZWdpc3RlcmVkIHRhc2tzdGF0cyB2
ZXJzaW9uIDENClsgICAgMy42NzY4OTddICAgTWFnaWMgbnVtYmVyOiA4OjcxMTozODkNClsgICAg
My42ODA0NTFdIHJ0Y19jbW9zIDAwOjA1OiBzZXR0aW5nIHN5c3RlbSBjbG9jayB0byAyMDEyLTA4
LTAxIDE3OjIzOjE1IFVUQyAoMTM0Mzg0MTc5NSkNClsgICAgMy42ODc0NzddIEJJT1MgRUREIGZh
Y2lsaXR5IHYwLjE2IDIwMDQtSnVuLTI1LCAwIGRldmljZXMgZm91bmQNClsgICAgMy42OTUxNDFd
IEVERCBpbmZvcm1hdGlvbiBub3QgYXZhaWxhYmxlLg0KWyAgICAzLjY5OTk3M10gYXRhMi4wMTog
Tk9ERVYgYWZ0ZXIgcG9sbGluZyBkZXRlY3Rpb24NClsgICAgMy43MDY1OTZdIGF0YTIuMDA6IEFU
QVBJOiBRRU1VIERWRC1ST00sIDEuMS41MCwgbWF4IFVETUEvMTAwDQpbICAgIDMuNzA5OTgyXSBh
dGEyLjAwOiBjb25maWd1cmVkIGZvciBNV0RNQTINClsgICAgMy43Mjc5OTZdIHNjc2kgMTowOjA6
MDogQ0QtUk9NICAgICAgICAgICAgUUVNVSAgICAgUUVNVSBEVkQtUk9NICAgICAxLjEuIFBROiAw
IEFOU0k6IDUNClsgICAgMy43MzYyNzBdIHNyMDogc2NzaTMtbW1jIGRyaXZlOiA0eC80eCBjZC9y
dyB4YS9mb3JtMiB0cmF5DQpbICAgIDMuNzQxMTY1XSBjZHJvbTogVW5pZm9ybSBDRC1ST00gZHJp
dmVyIFJldmlzaW9uOiAzLjIwDQpbICAgIDMuNzQ1ODUyXSBzciAxOjA6MDowOiBBdHRhY2hlZCBz
Y3NpIENELVJPTSBzcjANClsgICAgMy43NTIxNzNdIHNyIDE6MDowOjA6IEF0dGFjaGVkIHNjc2kg
Z2VuZXJpYyBzZzAgdHlwZSA1DQpbICAgIDMuNzYwMjkxXSBGcmVlaW5nIHVudXNlZCBrZXJuZWwg
bWVtb3J5OiA5ODRrIGZyZWVkDQpbICAgIDMuNzY1NTExXSBXcml0ZSBwcm90ZWN0aW5nIHRoZSBr
ZXJuZWwgcmVhZC1vbmx5IGRhdGE6IDEwMjQwaw0KWyAgICAzLjc3MTU4NF0gRnJlZWluZyB1bnVz
ZWQga2VybmVsIG1lbW9yeTogMjBrIGZyZWVkDQpbICAgIDMuNzgxMTQ3XSBGcmVlaW5nIHVudXNl
ZCBrZXJuZWwgbWVtb3J5OiAxNDAwayBmcmVlZA0KWyAgICAzLjgxMjAyMF0gdWRldmRbOTNdOiBz
dGFydGluZyB2ZXJzaW9uIDE3Mw0KWyAgICAzLjg2NTc4NF0gbXB0MnNhcyB2ZXJzaW9uIDA4LjEw
MC4wMC4wMiBsb2FkZWQNClsgICAgMy44ODU5MTZdIEZsb3BweSBkcml2ZShzKTogZmQwIGlzIDEu
NDRNDQpbICAgIDMuODk5MjE4XSBzY3NpMiA6IEZ1c2lvbiBNUFQgU0FTIEhvc3QNClsgICAgMy45
MDQ4OTRdIHhlbjogLS0+IGlycT0zNiwgcGlycT0xNg0KWyAgICAzLjkxNjczNl0gbXB0MnNhcyAw
MDAwOjAwOjA1LjA6IFBDSSBJTlQgQSAtPiBHU0kgMzYgKGxldmVsLCBsb3cpIC0+IElSUSAzNg0K
WyAgICAzLjkyODI2OV0gbXB0MnNhczA6IDMyIEJJVCBQQ0kgQlVTIERNQSBBRERSRVNTSU5HIFNV
UFBPUlRFRCwgdG90YWwgbWVtICgyMDM0ODg4IGtCKQ0KWyAgICAzLjkzMjc0MF0gRkRDIDAgaXMg
YSBTODIwNzhCDQpbICAgIDMuOTY3Njk0XSBtcHQyc2FzMDogUENJLU1TSS1YIGVuYWJsZWQ6IElS
USA3NQ0KWyAgICAzLjk3MjgwMV0gbXB0MnNhczA6IGlvbWVtKDB4MDAwMDAwMDBmMzBlMDAwMCks
IG1hcHBlZCgweGZmZmZjOTAwMDA4YzAwMDApLCBzaXplKDE2Mzg0KQ0KWyAgICAzLjk4MDUwMF0g
bXB0MnNhczA6IGlvcG9ydCgweDAwMDAwMDAwMDAwMGMyMDApLCBzaXplKDI1NikNClsgICAgNC4x
MjgxMjhdIFJlZmluZWQgVFNDIGNsb2Nrc291cmNlIGNhbGlicmF0aW9uOiAzMjkyLjUyNSBNSHou
DQpbICAgIDQuMjgwMDgwXSBtcHQyc2FzMDogc2VuZGluZyBkaWFnIHJlc2V0ICEhDQpbICAgIDUu
NDIwODQxXSBtcHQyc2FzMDogZGlhZyByZXNldDogU1VDQ0VTUw0KWyAgICA1LjU3Mzc1Nl0gbXB0
MnNhczA6IEFsbG9jYXRlZCBwaHlzaWNhbCBtZW1vcnk6IHNpemUoMjI3NyBrQikNClsgICAgNS41
ODM1NDldIG1wdDJzYXMwOiBDdXJyZW50IENvbnRyb2xsZXIgUXVldWUgRGVwdGgoMTQ4MSksIE1h
eCBDb250cm9sbGVyIFF1ZXVlIERlcHRoKDE3MjApDQpbICAgIDUuNTk3NDQwXSBtcHQyc2FzMDog
U2NhdHRlciBHYXRoZXIgRWxlbWVudHMgcGVyIElPKDEyOCkNClsgICAzNS44MzYxODFdIG1wdDJz
YXMwOiBfYmFzZV9ldmVudF9ub3RpZmljYXRpb246IHRpbWVvdXQNClsgICAzNS44NDUwNTldIG1w
dDJzYXMwOiBzZW5kaW5nIGRpYWcgcmVzZXQgISENClsgICAzNi45ODkwMjZdIG1wdDJzYXMwOiBk
aWFnIHJlc2V0OiBTVUNDRVNTDQpbICAgMzYuOTk3OTk5XSBtcHQyc2FzIDAwMDA6MDA6MDUuMDog
UENJIElOVCBBIGRpc2FibGVkDQpbICAgMzcuMDE1ODk2XSBtcHQyc2FzMDogZmFpbHVyZSBhdCAv
YnVpbGQvYnVpbGRkL2xpbnV4LTMuMC4wL2RyaXZlcnMvc2NzaS9tcHQyc2FzL21wdDJzYXNfc2Nz
aWguYzo3NDY0L19zY3NpaF9wcm9iZSgpIQ0KWyAgIDM3LjU5NTY1OF0gQnRyZnMgbG9hZGVkDQpb
ICAgMzcuNjA1MDU4XSB4b3I6IGF1dG9tYXRpY2FsbHkgdXNpbmcgYmVzdCBjaGVja3N1bW1pbmcg
ZnVuY3Rpb246IGdlbmVyaWNfc3NlDQpbICAgMzcuNjMyMDY1XSAgICBnZW5lcmljX3NzZTogIDE1
NTQuMDAwIE1CL3NlYw0KWyAgIDM3LjYzNzg0M10geG9yOiB1c2luZyBmdW5jdGlvbjogZ2VuZXJp
Y19zc2UgKDE1NTQuMDAwIE1CL3NlYykNClsgICAzNy42NDY2MjNdIGRldmljZS1tYXBwZXI6IGRt
LXJhaWQ0NTogaW5pdGlhbGl6ZWQgdjAuMjU5NGINClsgICAzNy43NDk5MjBdIElTTyA5NjYwIEV4
dGVuc2lvbnM6IE1pY3Jvc29mdCBKb2xpZXQgTGV2ZWwgMw0KWyAgIDM3Ljc3MTg3N10gSVNPIDk2
NjAgRXh0ZW5zaW9uczogUlJJUF8xOTkxQQ0KWyAgIDM3Ljg5NTUyMV0gc3F1YXNoZnM6IHZlcnNp
b24gNC4wICgyMDA5LzAxLzMxKSBQaGlsbGlwIExvdWdoZXINCiAqIFN0YXJ0aW5nIG1ETlMvRE5T
LVNEIGRhZW1vbhtbNzRHWyBPSyBdICogU3RhcnRpbmcgbUROUy9ETlMtU0QgZGFlbW9uG1s3NEdb
IE9LIF1bICAgNDQuMzk1NjU3XSBwaWl4NF9zbWJ1cyAwMDAwOjAwOjAxLjM6IEhvc3QgU01CdXMg
Y29udHJvbGxlciBub3QgZW5hYmxlZCENCg0NCiAqIFN0YXJ0aW5nIG5ldHdvcmsgY29ubmVjdGlv
biBtYW5hZ2VyG1s3NEdbIE9LIF0NDQoNDQogKiBTdGFydGluZyBuZXR3b3JrIGNvbm5lY3Rpb24g
bWFuYWdlchtbNzRHWyBPSyBdDQ0KICogU3RhcnRpbmcgY29uZmlndXJlIG5ldHdvcmsgZGV2aWNl
IHNlY3VyaXR5G1s3NEdbIE9LIF0NDQogKiBTdGFydGluZyBjb25maWd1cmUgbmV0d29yayBkZXZp
Y2Ugc2VjdXJpdHkbWzc0R1sgT0sgXQ0NCiAqIFN0YXJ0aW5nIE1vdW50IG5ldHdvcmsgZmlsZXN5
c3RlbXMbWzc0R1sgT0sgXQ0NCiAqIFN0YXJ0aW5nIEZhaWxzYWZlIEJvb3QgRGVsYXkbWzc0R1sg
T0sgXQ0NCiAqIFN0YXJ0aW5nIE1vdW50IG5ldHdvcmsgZmlsZXN5c3RlbXMbWzc0R1sgT0sgXQ0N
CiAqIFN0YXJ0aW5nIEZhaWxzYWZlIEJvb3QgRGVsYXkbWzc0R1sgT0sgXQ0NCiAqIFN0b3BwaW5n
IE1vdW50IG5ldHdvcmsgZmlsZXN5c3RlbXMbWzc0R1sgT0sgXQ0NCiAqIFN0YXJ0aW5nIEJyaWRn
ZSBzb2NrZXQgZXZlbnRzIGludG8gdXBzdGFydBtbNzRHWyBPSyBdDQ0KICogU3RvcHBpbmcgTW91
bnQgbmV0d29yayBmaWxlc3lzdGVtcxtbNzRHWyBPSyBdDQ0KICogU3RhcnRpbmcgQnJpZGdlIHNv
Y2tldCBldmVudHMgaW50byB1cHN0YXJ0G1s3NEdbIE9LIF0NDQogKiBTdGFydGluZyBjb25maWd1
cmUgbmV0d29yayBkZXZpY2UbWzc0R1sgT0sgXQ0NCiAqIFN0YXJ0aW5nIFN5c3RlbSBWIGluaXRp
YWxpc2F0aW9uIGNvbXBhdGliaWxpdHkbWzc0R1sgT0sgXQ0NCiAqIFN0YXJ0aW5nIGNvbmZpZ3Vy
ZSBuZXR3b3JrIGRldmljZRtbNzRHWyBPSyBdDQ0KICogU3RhcnRpbmcgU3lzdGVtIFYgaW5pdGlh
bGlzYXRpb24gY29tcGF0aWJpbGl0eRtbNzRHWyBPSyBdDQ0Kc3BlZWNoLWRpc3BhdGNoZXIgZGlz
YWJsZWQ7IGVkaXQgL2V0Yy9kZWZhdWx0L3NwZWVjaC1kaXNwYXRjaGVyDQ0KQ2hlY2tpbmcgZm9y
IHJ1bm5pbmcgdW5hdHRlbmRlZC11cGdyYWRlczogDQ0KICogU3RvcHBpbmcgRmFpbHNhZmUgQm9v
dCBEZWxheRtbNzRHWyBPSyBdDQ0KICogU3RvcHBpbmcgU3lzdGVtIFYgaW5pdGlhbGlzYXRpb24g
Y29tcGF0aWJpbGl0eRtbNzRHWyBPSyBdDQ0KICogU3RhcnRpbmcgU3lzdGVtIFYgcnVubGV2ZWwg
Y29tcGF0aWJpbGl0eRtbNzRHWyBPSyBdDQ0KICogU3RhcnRpbmcgYXV0b21hdGljIGNyYXNoIHJl
cG9ydCBnZW5lcmF0aW9uG1s3NEdbIE9LIF0NDQogKiBTdGFydGluZyBBQ1BJIGRhZW1vbhtbNzRH
WyBPSyBdDQ0KICogU3RhcnRpbmcgc2F2ZSBrZXJuZWwgbWVzc2FnZXMbWzc0R1sgT0sgXQ0NCiAq
IFN0YXJ0aW5nIGRlZmVycmVkIGV4ZWN1dGlvbiBzY2hlZHVsZXIbWzc0R1sgT0sgXQ0NCiAqIFN0
YXJ0aW5nIHJlZ3VsYXIgYmFja2dyb3VuZCBwcm9ncmFtIHByb2Nlc3NpbmcgZGFlbW9uG1s3NEdb
IE9LIF0NDQogKiBTdG9wcGluZyBjb2xkIHBsdWcgZGV2aWNlcxtbNzRHWyBPSyBdDQ0KICogU3Rv
cHBpbmcgbG9nIGluaXRpYWwgZGV2aWNlIGNyZWF0aW9uG1s3NEdbIE9LIF0NDQpzcGVlY2gtZGlz
cGF0Y2hlciBkaXNhYmxlZDsgZWRpdCAvZXRjL2RlZmF1bHQvc3BlZWNoLWRpc3BhdGNoZXINDQpD
aGVja2luZyBmb3IgcnVubmluZyB1bmF0dGVuZGVkLXVwZ3JhZGVzOiANDQogKiBTdG9wcGluZyBG
YWlsc2FmZSBCb290IERlbGF5G1s3NEdbIE9LIF0NDQogKiBTdG9wcGluZyBTeXN0ZW0gViBpbml0
aWFsaXNhdGlvbiBjb21wYXRpYmlsaXR5G1s3NEdbIE9LIF0NDQogKiBTdGFydGluZyBTeXN0ZW0g
ViBydW5sZXZlbCBjb21wYXRpYmlsaXR5G1s3NEdbIE9LIF0NDQogKiBTdGFydGluZyBhdXRvbWF0
aWMgY3Jhc2ggcmVwb3J0IGdlbmVyYXRpb24bWzc0R1sgT0sgXQ0NCiAqIFN0YXJ0aW5nIEFDUEkg
ZGFlbW9uG1s3NEdbIE9LIF0NDQogKiBTdGFydGluZyBzYXZlIGtlcm5lbCBtZXNzYWdlcxtbNzRH
WyBPSyBdDQ0KICogU3RhcnRpbmcgZGVmZXJyZWQgZXhlY3V0aW9uIHNjaGVkdWxlchtbNzRHWyBP
SyBdDQ0KICogU3RhcnRpbmcgcmVndWxhciBiYWNrZ3JvdW5kIHByb2dyYW0gcHJvY2Vzc2luZyBk
YWVtb24bWzc0R1sgT0sgXQ0NCiAqIFN0b3BwaW5nIGNvbGQgcGx1ZyBkZXZpY2VzG1s3NEdbIE9L
IF0NDQogKiBTdG9wcGluZyBsb2cgaW5pdGlhbCBkZXZpY2UgY3JlYXRpb24bWzc0R1sgT0sgXQ0N
CiAqIFN0YXJ0aW5nIGxvYWQgZmFsbGJhY2sgZ3JhcGhpY3MgZGV2aWNlcxtbNzRHWyBPSyBdDQ0K
ICogU3RhcnRpbmcgZW5hYmxlIHJlbWFpbmluZyBib290LXRpbWUgZW5jcnlwdGVkIGJsb2NrIGRl
dmljZXMbWzc0R1sgT0sgXQ0NCiAqIFN0YXJ0aW5nIGNvbmZpZ3VyZSB2aXJ0dWFsIG5ldHdvcmsg
ZGV2aWNlcxtbNzRHWyBPSyBdDQ0KICogU3RhcnRpbmcgQ1BVIGludGVycnVwdHMgYmFsYW5jaW5n
IGRhZW1vbhtbNzRHWyBPSyBdDQ0KICogU3RvcHBpbmcgY29uZmlndXJlIHZpcnR1YWwgbmV0d29y
ayBkZXZpY2VzG1s3NEdbIE9LIF0NDQogKiBTdGFydGluZyBjb25maWd1cmUgbmV0d29yayBkZXZp
Y2Ugc2VjdXJpdHkbWzc0R1sgT0sgXQ0NCiAqIFN0YXJ0aW5nIGxvYWQgZmFsbGJhY2sgZ3JhcGhp
Y3MgZGV2aWNlcxtbNzRHWyBPSyBdDQ0KICogU3RhcnRpbmcgZW5hYmxlIHJlbWFpbmluZyBib290
LXRpbWUgZW5jcnlwdGVkIGJsb2NrIGRldmljZXMbWzc0R1sgT0sgXQ0NCiAqIFN0YXJ0aW5nIGNv
bmZpZ3VyZSB2aXJ0dWFsIG5ldHdvcmsgZGV2aWNlcxtbNzRHWyBPSyBdDQ0KICogU3RhcnRpbmcg
Q1BVIGludGVycnVwdHMgYmFsYW5jaW5nIGRhZW1vbhtbNzRHWyBPSyBdDQ0KICogU3RvcHBpbmcg
Y29uZmlndXJlIHZpcnR1YWwgbmV0d29yayBkZXZpY2VzG1s3NEdbIE9LIF0NDQogKiBTdGFydGlu
ZyBjb25maWd1cmUgbmV0d29yayBkZXZpY2Ugc2VjdXJpdHkbWzc0R1sgT0sgXQ0NCiAqIFN0YXJ0
aW5nIHNhdmUgdWRldiBsb2cgYW5kIHVwZGF0ZSBydWxlcxtbNzRHWyBPSyBdICogU3RhcnRpbmcg
c2F2ZSB1ZGV2IGxvZyBhbmQgdXBkYXRlIHJ1bGVzG1s3NEdbIE9LIF0NDQoNDQogKiBTdG9wcGlu
ZyBzYXZlIHVkZXYgbG9nIGFuZCB1cGRhdGUgcnVsZXMbWzc0R1sgT0sgXQ0NCiAqIFN0b3BwaW5n
IHNhdmUgdWRldiBsb2cgYW5kIHVwZGF0ZSBydWxlcxtbNzRHWyBPSyBdDQ0KICogU3RhcnRpbmcg
bG9hZCBmYWxsYmFjayBncmFwaGljcyBkZXZpY2VzG1s3NEdbG1szMW1mYWlsG1szOTs0OW1dDQ0K
ICogU3RhcnRpbmcgbG9hZCBmYWxsYmFjayBncmFwaGljcyBkZXZpY2VzG1s3NEdbG1szMW1mYWls
G1szOTs0OW1dDQ0KICogU3RhcnRpbmcgVWJ1bnR1IGxpdmUgQ0QgaW5zdGFsbGVyG1s3NEdbIE9L
IF0gKiBTdGFydGluZyBVYnVudHUgbGl2ZSBDRCBpbnN0YWxsZXIbWzc0R1sgT0sgXQ0NCg0NCiAq
IFN0b3BwaW5nIFVidW50dSBsaXZlIENEIGluc3RhbGxlchtbNzRHWyBPSyBdICogU3RvcHBpbmcg
VWJ1bnR1IGxpdmUgQ0QgaW5zdGFsbGVyG1s3NEdbIE9LIF0NDQogKiBTdGFydGluZyBMaWdodERN
IERpc3BsYXkgTWFuYWdlchtbNzRHWyBPSyBdDQ0KDQ0KICogU3RhcnRpbmcgTGlnaHRETSBEaXNw
bGF5IE1hbmFnZXIbWzc0R1sgT0sgXQ0NCiAqIFN0b3BwaW5nIGVuYWJsZSByZW1haW5pbmcgYm9v
dC10aW1lIGVuY3J5cHRlZCBibG9jayBkZXZpY2VzG1s3NEdbIE9LIF0NDQogKiBTdG9wcGluZyBl
bmFibGUgcmVtYWluaW5nIGJvb3QtdGltZSBlbmNyeXB0ZWQgYmxvY2sgZGV2aWNlcxtbNzRHWyBP
SyBdDQ0KICogU3RvcHBpbmcgc2F2ZSBrZXJuZWwgbWVzc2FnZXMbWzc0R1sgT0sgXQ0NCiAqIFN0
b3BwaW5nIHNhdmUga2VybmVsIG1lc3NhZ2VzG1s3NEdbIE9LIF0NDQoNG1s3NEdbIE9LIF0NCiAb
WzMzbSobWzM5OzQ5bSBQdWxzZUF1ZGlvIGNvbmZpZ3VyZWQgZm9yIHBlci11c2VyIHNlc3Npb25z
DQpzYW5lZCBkaXNhYmxlZDsgZWRpdCAvZXRjL2RlZmF1bHQvc2FuZWQNCiAqIENoZWNraW5nIGJh
dHRlcnkgc3RhdGUuLi4gICAgICAgG1s4MEcgDRtbNzRHWyBPSyBdDQo=
--bcaec53d5ed7de620704c637f31e
Content-Type: application/octet-stream; name="ubuntu_guest_xl_dmesg.log"
Content-Disposition: attachment; filename="ubuntu_guest_xl_dmesg.log"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_h5coz3we3

IF9fICBfXyAgICAgICAgICAgIF8gIF8gICAgX19fXyAgICAgICAgICAgICAgICAgICAgIF8gICAg
ICAgIF8gICAgIF8gICAgICAKIFwgXC8gL19fXyBfIF9fICAgfCB8fCB8ICB8X19fIFwgICAgXyAg
IF8gXyBfXyAgX19ffCB8XyBfXyBffCB8X18gfCB8IF9fXyAKICBcICAvLyBfIFwgJ18gXCAgfCB8
fCB8XyAgIF9fKSB8X198IHwgfCB8ICdfIFwvIF9ffCBfXy8gX2AgfCAnXyBcfCB8LyBfIFwKICAv
ICBcICBfXy8gfCB8IHwgfF9fICAgX3wgLyBfXy98X198IHxffCB8IHwgfCBcX18gXCB8fCAoX3wg
fCB8XykgfCB8ICBfXy8KIC9fL1xfXF9fX3xffCB8X3wgICAgfF98KF8pX19fX198ICAgXF9fLF98
X3wgfF98X19fL1xfX1xfXyxffF8uX18vfF98XF9fX3wKICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAKKFhFTikg
WGVuIHZlcnNpb24gNC4yLXVuc3RhYmxlIChkZXJpY2tzb0Boc2QxLmNhLmNvbWNhc3QubmV0KSAo
Z2NjIHZlcnNpb24gNC42LjMgKFVidW50dS9MaW5hcm8gNC42LjMtMXVidW50dTUpICkgVHVlIEp1
bCAzMSAwODo0NzowNCBQRFQgMjAxMgooWEVOKSBMYXRlc3QgQ2hhbmdlU2V0OiBGcmkgSnVsIDI3
IDEyOjIyOjEzIDIwMTIgKzAyMDAgMjU2ODg6ZTYyNjZmYzc2ZDA4CihYRU4pIEJvb3Rsb2FkZXI6
IEdSVUIgMS45OS0yMXVidW50dTMuMQooWEVOKSBDb21tYW5kIGxpbmU6IHBsYWNlaG9sZGVyIGRv
bTBfbWVtPTQwOTZNIHhzYXZlPTAKKFhFTikgVmlkZW8gaW5mb3JtYXRpb246CihYRU4pICBWR0Eg
aXMgdGV4dCBtb2RlIDgweDI1LCBmb250IDh4MTYKKFhFTikgIFZCRS9EREMgbWV0aG9kczogVjI7
IEVESUQgdHJhbnNmZXIgdGltZTogMSBzZWNvbmRzCihYRU4pIERpc2MgaW5mb3JtYXRpb246CihY
RU4pICBGb3VuZCAxIE1CUiBzaWduYXR1cmVzCihYRU4pICBGb3VuZCAyIEVERCBpbmZvcm1hdGlv
biBzdHJ1Y3R1cmVzCihYRU4pIFhlbi1lODIwIFJBTSBtYXA6CihYRU4pICAwMDAwMDAwMDAwMDAw
MDAwIC0gMDAwMDAwMDAwMDA5YzgwMCAodXNhYmxlKQooWEVOKSAgMDAwMDAwMDAwMDA5YzgwMCAt
IDAwMDAwMDAwMDAwYTAwMDAgKHJlc2VydmVkKQooWEVOKSAgMDAwMDAwMDAwMDBlMDAwMCAtIDAw
MDAwMDAwMDAxMDAwMDAgKHJlc2VydmVkKQooWEVOKSAgMDAwMDAwMDAwMDEwMDAwMCAtIDAwMDAw
MDAwZGRkMDcwMDAgKHVzYWJsZSkKKFhFTikgIDAwMDAwMDAwZGRkMDcwMDAgLSAwMDAwMDAwMGRk
ZGJiMDAwIChyZXNlcnZlZCkKKFhFTikgIDAwMDAwMDAwZGRkYmIwMDAgLSAwMDAwMDAwMGRkZGJj
MDAwIChBQ1BJIGRhdGEpCihYRU4pICAwMDAwMDAwMGRkZGJjMDAwIC0gMDAwMDAwMDBkZGVkNzAw
MCAoQUNQSSBOVlMpCihYRU4pICAwMDAwMDAwMGRkZWQ3MDAwIC0gMDAwMDAwMDBkZWY5MjAwMCAo
cmVzZXJ2ZWQpCihYRU4pICAwMDAwMDAwMGRlZjkyMDAwIC0gMDAwMDAwMDBkZWY5MzAwMCAodXNh
YmxlKQooWEVOKSAgMDAwMDAwMDBkZWY5MzAwMCAtIDAwMDAwMDAwZGVmZDYwMDAgKEFDUEkgTlZT
KQooWEVOKSAgMDAwMDAwMDBkZWZkNjAwMCAtIDAwMDAwMDAwZGY4MDAwMDAgKHVzYWJsZSkKKFhF
TikgIDAwMDAwMDAwZjgwMDAwMDAgLSAwMDAwMDAwMGZjMDAwMDAwIChyZXNlcnZlZCkKKFhFTikg
IDAwMDAwMDAwZmVjMDAwMDAgLSAwMDAwMDAwMGZlYzAxMDAwIChyZXNlcnZlZCkKKFhFTikgIDAw
MDAwMDAwZmVkMDAwMDAgLSAwMDAwMDAwMGZlZDA0MDAwIChyZXNlcnZlZCkKKFhFTikgIDAwMDAw
MDAwZmVkMWMwMDAgLSAwMDAwMDAwMGZlZDIwMDAwIChyZXNlcnZlZCkKKFhFTikgIDAwMDAwMDAw
ZmVlMDAwMDAgLSAwMDAwMDAwMGZlZTAxMDAwIChyZXNlcnZlZCkKKFhFTikgIDAwMDAwMDAwZmYw
MDAwMDAgLSAwMDAwMDAwMTAwMDAwMDAwIChyZXNlcnZlZCkKKFhFTikgIDAwMDAwMDAxMDAwMDAw
MDAgLSAwMDAwMDAwNDIwMDAwMDAwICh1c2FibGUpCihYRU4pIEFDUEk6IFJTRFAgMDAwRjA0OTAs
IDAwMjQgKHIyIEFMQVNLQSkKKFhFTikgQUNQSTogWFNEVCBEREVDNzA5MCwgMDA5QyAocjEgQUxB
U0tBICAgIEEgTSBJICAxMDcyMDA5IEFNSSAgICAgMTAwMTMpCihYRU4pIEFDUEk6IEZBQ1AgRERF
RDFDQTAsIDAwRjQgKHI0IEFMQVNLQSAgICBBIE0gSSAgMTA3MjAwOSBBTUkgICAgIDEwMDEzKQoo
WEVOKSBBQ1BJOiBEU0RUIERERUM3MUMwLCBBQURBIChyMiBBTEFTS0EgICAgQSBNIEkgICAgICAg
NkYgSU5UTCAyMDA1MTExNykKKFhFTikgQUNQSTogRkFDUyBEREVENUY4MCwgMDA0MAooWEVOKSBB
Q1BJOiBBUElDIERERUQxRDk4LCAwMDkyIChyMyBBTEFTS0EgICAgQSBNIEkgIDEwNzIwMDkgQU1J
ICAgICAxMDAxMykKKFhFTikgQUNQSTogRlBEVCBEREVEMUUzMCwgMDA0NCAocjEgQUxBU0tBICAg
IEEgTSBJICAxMDcyMDA5IEFNSSAgICAgMTAwMTMpCihYRU4pIEFDUEk6IE1DRkcgRERFRDFFNzgs
IDAwM0MgKHIxIEFMQVNLQSAgICBBIE0gSSAgMTA3MjAwOSBNU0ZUICAgICAgIDk3KQooWEVOKSBB
Q1BJOiBQUkFEIERERUQxRUI4LCAwMEJFIChyMiBQUkFESUQgIFBSQURUSUQgICAgICAgIDEgTVNG
VCAgMzAwMDAwMSkKKFhFTikgQUNQSTogSFBFVCBEREVEMUY3OCwgMDAzOCAocjEgQUxBU0tBICAg
IEEgTSBJICAxMDcyMDA5IEFNSS4gICAgICAgIDUpCihYRU4pIEFDUEk6IFNTRFQgRERFRDFGQjAs
IDAzNkQgKHIxIFNhdGFSZSBTYXRhVGFibCAgICAgMTAwMCBJTlRMIDIwMDkxMTEyKQooWEVOKSBB
Q1BJOiBTUE1JIERERUQyMzIwLCAwMDQwIChyNSBBIE0gSSAgIE9FTVNQTUkgICAgICAgIDAgQU1J
LiAgICAgICAgMCkKKFhFTikgQUNQSTogU1NEVCBEREVEMjM2MCwgMDlBNCAocjEgIFBtUmVmICBD
cHUwSXN0ICAgICAzMDAwIElOVEwgMjAwNTExMTcpCihYRU4pIEFDUEk6IFNTRFQgRERFRDJEMDgs
IDBBODggKHIxICBQbVJlZiAgICBDcHVQbSAgICAgMzAwMCBJTlRMIDIwMDUxMTE3KQooWEVOKSBB
Q1BJOiBETUFSIERERUQzNzkwLCAwMDc4IChyMSBJTlRFTCAgICAgIFNOQiAgICAgICAgIDEgSU5U
TCAgICAgICAgMSkKKFhFTikgQUNQSTogRUlOSiBEREVEMzgwOCwgMDEzMCAocjEgICAgQU1JIEFN
SSBFSU5KICAgICAgICAwICAgICAgICAgICAgIDApCihYRU4pIEFDUEk6IEVSU1QgRERFRDM5Mzgs
IDAyMTAgKHIxICBBTUlFUiBBTUkgRVJTVCAgICAgICAgMCAgICAgICAgICAgICAwKQooWEVOKSBB
Q1BJOiBIRVNUIERERUQzQjQ4LCAwMEE4IChyMSAgICBBTUkgQU1JIEhFU1QgICAgICAgIDAgICAg
ICAgICAgICAgMCkKKFhFTikgQUNQSTogQkVSVCBEREVEM0JGMCwgMDAzMCAocjEgICAgQU1JIEFN
SSBCRVJUICAgICAgICAwICAgICAgICAgICAgIDApCihYRU4pIFN5c3RlbSBSQU06IDE2MzU2TUIg
KDE2NzQ5MzY4a0IpCihYRU4pIE5vIE5VTUEgY29uZmlndXJhdGlvbiBmb3VuZAooWEVOKSBGYWtp
bmcgYSBub2RlIGF0IDAwMDAwMDAwMDAwMDAwMDAtMDAwMDAwMDQyMDAwMDAwMAooWEVOKSBEb21h
aW4gaGVhcCBpbml0aWFsaXNlZAooWEVOKSBmb3VuZCBTTVAgTVAtdGFibGUgYXQgMDAwZmQ3YjAK
KFhFTikgRE1JIDIuNyBwcmVzZW50LgooWEVOKSBVc2luZyBBUElDIGRyaXZlciBkZWZhdWx0CihY
RU4pIEFDUEk6IFBNLVRpbWVyIElPIFBvcnQ6IDB4NDA4CihYRU4pIEFDUEk6IEFDUEkgU0xFRVAg
SU5GTzogcG0xeF9jbnRbNDA0LDBdLCBwbTF4X2V2dFs0MDAsMF0KKFhFTikgQUNQSTogMzIvNjRY
IEZBQ1MgYWRkcmVzcyBtaXNtYXRjaCBpbiBGQURUIC0gZGRlZDVmODAvMDAwMDAwMDAwMDAwMDAw
MCwgdXNpbmcgMzIKKFhFTikgQUNQSTogICAgICAgICAgICAgICAgICB3YWtldXBfdmVjW2RkZWQ1
ZjhjXSwgdmVjX3NpemVbMjBdCihYRU4pIEFDUEk6IExvY2FsIEFQSUMgYWRkcmVzcyAweGZlZTAw
MDAwCihYRU4pIEFDUEk6IExBUElDIChhY3BpX2lkWzB4MDFdIGxhcGljX2lkWzB4MDBdIGVuYWJs
ZWQpCihYRU4pIFByb2Nlc3NvciAjMCA3OjEwIEFQSUMgdmVyc2lvbiAyMQooWEVOKSBBQ1BJOiBM
QVBJQyAoYWNwaV9pZFsweDAyXSBsYXBpY19pZFsweDAyXSBlbmFibGVkKQooWEVOKSBQcm9jZXNz
b3IgIzIgNzoxMCBBUElDIHZlcnNpb24gMjEKKFhFTikgQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgw
M10gbGFwaWNfaWRbMHgwNF0gZW5hYmxlZCkKKFhFTikgUHJvY2Vzc29yICM0IDc6MTAgQVBJQyB2
ZXJzaW9uIDIxCihYRU4pIEFDUEk6IExBUElDIChhY3BpX2lkWzB4MDRdIGxhcGljX2lkWzB4MDZd
IGVuYWJsZWQpCihYRU4pIFByb2Nlc3NvciAjNiA3OjEwIEFQSUMgdmVyc2lvbiAyMQooWEVOKSBB
Q1BJOiBMQVBJQyAoYWNwaV9pZFsweDA1XSBsYXBpY19pZFsweDAxXSBlbmFibGVkKQooWEVOKSBQ
cm9jZXNzb3IgIzEgNzoxMCBBUElDIHZlcnNpb24gMjEKKFhFTikgQUNQSTogTEFQSUMgKGFjcGlf
aWRbMHgwNl0gbGFwaWNfaWRbMHgwM10gZW5hYmxlZCkKKFhFTikgUHJvY2Vzc29yICMzIDc6MTAg
QVBJQyB2ZXJzaW9uIDIxCihYRU4pIEFDUEk6IExBUElDIChhY3BpX2lkWzB4MDddIGxhcGljX2lk
WzB4MDVdIGVuYWJsZWQpCihYRU4pIFByb2Nlc3NvciAjNSA3OjEwIEFQSUMgdmVyc2lvbiAyMQoo
WEVOKSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDA4XSBsYXBpY19pZFsweDA3XSBlbmFibGVkKQoo
WEVOKSBQcm9jZXNzb3IgIzcgNzoxMCBBUElDIHZlcnNpb24gMjEKKFhFTikgQUNQSTogTEFQSUNf
Tk1JIChhY3BpX2lkWzB4ZmZdIGhpZ2ggZWRnZSBsaW50WzB4MV0pCihYRU4pIEFDUEk6IElPQVBJ
QyAoaWRbMHgwMl0gYWRkcmVzc1sweGZlYzAwMDAwXSBnc2lfYmFzZVswXSkKKFhFTikgSU9BUElD
WzBdOiBhcGljX2lkIDIsIHZlcnNpb24gMzIsIGFkZHJlc3MgMHhmZWMwMDAwMCwgR1NJIDAtMjMK
KFhFTikgQUNQSTogSU5UX1NSQ19PVlIgKGJ1cyAwIGJ1c19pcnEgMCBnbG9iYWxfaXJxIDIgZGZs
IGRmbCkKKFhFTikgQUNQSTogSU5UX1NSQ19PVlIgKGJ1cyAwIGJ1c19pcnEgOSBnbG9iYWxfaXJx
IDkgaGlnaCBsZXZlbCkKKFhFTikgQUNQSTogSVJRMCB1c2VkIGJ5IG92ZXJyaWRlLgooWEVOKSBB
Q1BJOiBJUlEyIHVzZWQgYnkgb3ZlcnJpZGUuCihYRU4pIEFDUEk6IElSUTkgdXNlZCBieSBvdmVy
cmlkZS4KKFhFTikgRW5hYmxpbmcgQVBJQyBtb2RlOiAgRmxhdC4gIFVzaW5nIDEgSS9PIEFQSUNz
CihYRU4pIEFDUEk6IEhQRVQgaWQ6IDB4ODA4NmE3MDEgYmFzZTogMHhmZWQwMDAwMAooWEVOKSBF
UlNUIHRhYmxlIGlzIGludmFsaWQKKFhFTikgVXNpbmcgQUNQSSAoTUFEVCkgZm9yIFNNUCBjb25m
aWd1cmF0aW9uIGluZm9ybWF0aW9uCihYRU4pIFNNUDogQWxsb3dpbmcgOCBDUFVzICgwIGhvdHBs
dWcgQ1BVcykKKFhFTikgSVJRIGxpbWl0czogMjQgR1NJLCAxNTI4IE1TSS9NU0ktWAooWEVOKSBT
d2l0Y2hlZCB0byBBUElDIGRyaXZlciB4MmFwaWNfY2x1c3Rlci4KKFhFTikgVXNpbmcgc2NoZWR1
bGVyOiBTTVAgQ3JlZGl0IFNjaGVkdWxlciAoY3JlZGl0KQooWEVOKSBEZXRlY3RlZCAzMjkyLjU3
OCBNSHogcHJvY2Vzc29yLgooWEVOKSBJbml0aW5nIG1lbW9yeSBzaGFyaW5nLgooWEVOKSBtY2Vf
aW50ZWwuYzoxMjM5OiBNQ0EgQ2FwYWJpbGl0eTogQkNBU1QgMSBTRVIgMCBDTUNJIDEgZmlyc3Ri
YW5rIDAgZXh0ZW5kZWQgTUNFIE1TUiAwCihYRU4pIEludGVsIG1hY2hpbmUgY2hlY2sgcmVwb3J0
aW5nIGVuYWJsZWQKKFhFTikgUENJOiBNQ0ZHIGNvbmZpZ3VyYXRpb24gMDogYmFzZSBmODAwMDAw
MCBzZWdtZW50IDAwMDAgYnVzZXMgMDAgLSAzZgooWEVOKSBQQ0k6IE1DRkcgYXJlYSBhdCBmODAw
MDAwMCByZXNlcnZlZCBpbiBFODIwCihYRU4pIFBDSTogVXNpbmcgTUNGRyBmb3Igc2VnbWVudCAw
MDAwIGJ1cyAwMC0zZgooWEVOKSBJbnRlbCBWVC1kIFNub29wIENvbnRyb2wgZW5hYmxlZC4KKFhF
TikgSW50ZWwgVlQtZCBEb20wIERNQSBQYXNzdGhyb3VnaCBub3QgZW5hYmxlZC4KKFhFTikgSW50
ZWwgVlQtZCBRdWV1ZWQgSW52YWxpZGF0aW9uIGVuYWJsZWQuCihYRU4pIEludGVsIFZULWQgSW50
ZXJydXB0IFJlbWFwcGluZyBlbmFibGVkLgooWEVOKSBJbnRlbCBWVC1kIFNoYXJlZCBFUFQgdGFi
bGVzIG5vdCBlbmFibGVkLgooWEVOKSBJL08gdmlydHVhbGlzYXRpb24gZW5hYmxlZAooWEVOKSAg
LSBEb20wIG1vZGU6IFJlbGF4ZWQKKFhFTikgRW5hYmxlZCBkaXJlY3RlZCBFT0kgd2l0aCBpb2Fw
aWNfYWNrX29sZCBvbiEKKFhFTikgRU5BQkxJTkcgSU8tQVBJQyBJUlFzCihYRU4pICAtPiBVc2lu
ZyBvbGQgQUNLIG1ldGhvZAooWEVOKSAuLlRJTUVSOiB2ZWN0b3I9MHhGMCBhcGljMT0wIHBpbjE9
MiBhcGljMj0tMSBwaW4yPS0xCihYRU4pIFRTQyBkZWFkbGluZSB0aW1lciBlbmFibGVkCihYRU4p
IFBsYXRmb3JtIHRpbWVyIGlzIDE0LjMxOE1IeiBIUEVUCihYRU4pIEFsbG9jYXRlZCBjb25zb2xl
IHJpbmcgb2YgNjQgS2lCLgooWEVOKSBWTVg6IFN1cHBvcnRlZCBhZHZhbmNlZCBmZWF0dXJlczoK
KFhFTikgIC0gQVBJQyBNTUlPIGFjY2VzcyB2aXJ0dWFsaXNhdGlvbgooWEVOKSAgLSBBUElDIFRQ
UiBzaGFkb3cKKFhFTikgIC0gRXh0ZW5kZWQgUGFnZSBUYWJsZXMgKEVQVCkKKFhFTikgIC0gVmly
dHVhbC1Qcm9jZXNzb3IgSWRlbnRpZmllcnMgKFZQSUQpCihYRU4pICAtIFZpcnR1YWwgTk1JCihY
RU4pICAtIE1TUiBkaXJlY3QtYWNjZXNzIGJpdG1hcAooWEVOKSAgLSBVbnJlc3RyaWN0ZWQgR3Vl
c3QKKFhFTikgSFZNOiBBU0lEcyBlbmFibGVkLgooWEVOKSBIVk06IFZNWCBlbmFibGVkCihYRU4p
IEhWTTogSGFyZHdhcmUgQXNzaXN0ZWQgUGFnaW5nIChIQVApIGRldGVjdGVkCihYRU4pIEhWTTog
SEFQIHBhZ2Ugc2l6ZXM6IDRrQiwgMk1CCihYRU4pIEJyb3VnaHQgdXAgOCBDUFVzCihYRU4pIEFD
UEkgc2xlZXAgbW9kZXM6IFMzCihYRU4pIG1jaGVja19wb2xsOiBNYWNoaW5lIGNoZWNrIHBvbGxp
bmcgdGltZXIgc3RhcnRlZC4KKFhFTikgKioqIExPQURJTkcgRE9NQUlOIDAgKioqCihYRU4pIGVs
Zl9wYXJzZV9iaW5hcnk6IHBoZHI6IHBhZGRyPTB4MTAwMDAwMCBtZW1zej0weGFjNTAwMAooWEVO
KSBlbGZfcGFyc2VfYmluYXJ5OiBwaGRyOiBwYWRkcj0weDFjMDAwMDAgbWVtc3o9MHhlNjBlMAoo
WEVOKSBlbGZfcGFyc2VfYmluYXJ5OiBwaGRyOiBwYWRkcj0weDFjZTcwMDAgbWVtc3o9MHgxNDQ4
MAooWEVOKSBlbGZfcGFyc2VfYmluYXJ5OiBwaGRyOiBwYWRkcj0weDFjZmMwMDAgbWVtc3o9MHgz
NjIwMDAKKFhFTikgZWxmX3BhcnNlX2JpbmFyeTogbWVtb3J5OiAweDEwMDAwMDAgLT4gMHgyMDVl
MDAwCihYRU4pIGVsZl94ZW5fcGFyc2Vfbm90ZTogR1VFU1RfT1MgPSAibGludXgiCihYRU4pIGVs
Zl94ZW5fcGFyc2Vfbm90ZTogR1VFU1RfVkVSU0lPTiA9ICIyLjYiCihYRU4pIGVsZl94ZW5fcGFy
c2Vfbm90ZTogWEVOX1ZFUlNJT04gPSAieGVuLTMuMCIKKFhFTikgZWxmX3hlbl9wYXJzZV9ub3Rl
OiBWSVJUX0JBU0UgPSAweGZmZmZmZmZmODAwMDAwMDAKKFhFTikgZWxmX3hlbl9wYXJzZV9ub3Rl
OiBFTlRSWSA9IDB4ZmZmZmZmZmY4MWNmYzIwMAooWEVOKSBlbGZfeGVuX3BhcnNlX25vdGU6IEhZ
UEVSQ0FMTF9QQUdFID0gMHhmZmZmZmZmZjgxMDAxMDAwCihYRU4pIGVsZl94ZW5fcGFyc2Vfbm90
ZTogRkVBVFVSRVMgPSAiIXdyaXRhYmxlX3BhZ2VfdGFibGVzfHBhZV9wZ2Rpcl9hYm92ZV80Z2Ii
CihYRU4pIGVsZl94ZW5fcGFyc2Vfbm90ZTogUEFFX01PREUgPSAieWVzIgooWEVOKSBlbGZfeGVu
X3BhcnNlX25vdGU6IExPQURFUiA9ICJnZW5lcmljIgooWEVOKSBlbGZfeGVuX3BhcnNlX25vdGU6
IHVua25vd24geGVuIGVsZiBub3RlICgweGQpCihYRU4pIGVsZl94ZW5fcGFyc2Vfbm90ZTogU1VT
UEVORF9DQU5DRUwgPSAweDEKKFhFTikgZWxmX3hlbl9wYXJzZV9ub3RlOiBIVl9TVEFSVF9MT1cg
PSAweGZmZmY4MDAwMDAwMDAwMDAKKFhFTikgZWxmX3hlbl9wYXJzZV9ub3RlOiBQQUREUl9PRkZT
RVQgPSAweDAKKFhFTikgZWxmX3hlbl9hZGRyX2NhbGNfY2hlY2s6IGFkZHJlc3NlczoKKFhFTikg
ICAgIHZpcnRfYmFzZSAgICAgICAgPSAweGZmZmZmZmZmODAwMDAwMDAKKFhFTikgICAgIGVsZl9w
YWRkcl9vZmZzZXQgPSAweDAKKFhFTikgICAgIHZpcnRfb2Zmc2V0ICAgICAgPSAweGZmZmZmZmZm
ODAwMDAwMDAKKFhFTikgICAgIHZpcnRfa3N0YXJ0ICAgICAgPSAweGZmZmZmZmZmODEwMDAwMDAK
KFhFTikgICAgIHZpcnRfa2VuZCAgICAgICAgPSAweGZmZmZmZmZmODIwNWUwMDAKKFhFTikgICAg
IHZpcnRfZW50cnkgICAgICAgPSAweGZmZmZmZmZmODFjZmMyMDAKKFhFTikgICAgIHAybV9iYXNl
ICAgICAgICAgPSAweGZmZmZmZmZmZmZmZmZmZmYKKFhFTikgIFhlbiAga2VybmVsOiA2NC1iaXQs
IGxzYiwgY29tcGF0MzIKKFhFTikgIERvbTAga2VybmVsOiA2NC1iaXQsIFBBRSwgbHNiLCBwYWRk
ciAweDEwMDAwMDAgLT4gMHgyMDVlMDAwCihYRU4pIFBIWVNJQ0FMIE1FTU9SWSBBUlJBTkdFTUVO
VDoKKFhFTikgIERvbTAgYWxsb2MuOiAgIDAwMDAwMDA0MGMwMDAwMDAtPjAwMDAwMDA0MTAwMDAw
MDAgKDEwMjIzNTcgcGFnZXMgdG8gYmUgYWxsb2NhdGVkKQooWEVOKSAgSW5pdC4gcmFtZGlzazog
MDAwMDAwMDQxZDk5NTAwMC0+MDAwMDAwMDQxZmZmZjIwMAooWEVOKSBWSVJUVUFMIE1FTU9SWSBB
UlJBTkdFTUVOVDoKKFhFTikgIExvYWRlZCBrZXJuZWw6IGZmZmZmZmZmODEwMDAwMDAtPmZmZmZm
ZmZmODIwNWUwMDAKKFhFTikgIEluaXQuIHJhbWRpc2s6IGZmZmZmZmZmODIwNWUwMDAtPmZmZmZm
ZmZmODQ2YzgyMDAKKFhFTikgIFBoeXMtTWFjaCBtYXA6IGZmZmZmZmZmODQ2YzkwMDAtPmZmZmZm
ZmZmODRlYzkwMDAKKFhFTikgIFN0YXJ0IGluZm86ICAgIGZmZmZmZmZmODRlYzkwMDAtPmZmZmZm
ZmZmODRlYzk0YjQKKFhFTikgIFBhZ2UgdGFibGVzOiAgIGZmZmZmZmZmODRlY2EwMDAtPmZmZmZm
ZmZmODRlZjUwMDAKKFhFTikgIEJvb3Qgc3RhY2s6ICAgIGZmZmZmZmZmODRlZjUwMDAtPmZmZmZm
ZmZmODRlZjYwMDAKKFhFTikgIFRPVEFMOiAgICAgICAgIGZmZmZmZmZmODAwMDAwMDAtPmZmZmZm
ZmZmODUwMDAwMDAKKFhFTikgIEVOVFJZIEFERFJFU1M6IGZmZmZmZmZmODFjZmMyMDAKKFhFTikg
RG9tMCBoYXMgbWF4aW11bSA4IFZDUFVzCihYRU4pIGVsZl9sb2FkX2JpbmFyeTogcGhkciAwIGF0
IDB4ZmZmZmZmZmY4MTAwMDAwMCAtPiAweGZmZmZmZmZmODFhYzUwMDAKKFhFTikgZWxmX2xvYWRf
YmluYXJ5OiBwaGRyIDEgYXQgMHhmZmZmZmZmZjgxYzAwMDAwIC0+IDB4ZmZmZmZmZmY4MWNlNjBl
MAooWEVOKSBlbGZfbG9hZF9iaW5hcnk6IHBoZHIgMiBhdCAweGZmZmZmZmZmODFjZTcwMDAgLT4g
MHhmZmZmZmZmZjgxY2ZiNDgwCihYRU4pIGVsZl9sb2FkX2JpbmFyeTogcGhkciAzIGF0IDB4ZmZm
ZmZmZmY4MWNmYzAwMCAtPiAweGZmZmZmZmZmODFkZDIwMDAKKFhFTikgU2NydWJiaW5nIEZyZWUg
UkFNOiAuLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4u
Li4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4u
Li4uLi4uLi4uLi5kb25lLgooWEVOKSBJbml0aWFsIGxvdyBtZW1vcnkgdmlycSB0aHJlc2hvbGQg
c2V0IGF0IDB4NDAwMCBwYWdlcy4KKFhFTikgU3RkLiBMb2dsZXZlbDogQWxsCihYRU4pIEd1ZXN0
IExvZ2xldmVsOiBBbGwKKFhFTikgWGVuIGlzIHJlbGlucXVpc2hpbmcgVkdBIGNvbnNvbGUuCihY
RU4pICoqKiBTZXJpYWwgaW5wdXQgLT4gRE9NMCAodHlwZSAnQ1RSTC1hJyB0aHJlZSB0aW1lcyB0
byBzd2l0Y2ggaW5wdXQgdG8gWGVuKQooWEVOKSBGcmVlZCAyNDBrQiBpbml0IG1lbW9yeS4KKFhF
TikgREVCVUcgZXZ0Y2huX2JpbmRfcGlycSA0MDggcGlycT05IGlycT05IGVtdWlycT00NzI4NTA0
MzIKKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMDowMC4wCihYRU4pIFBDSSBhZGQgZGV2aWNl
IDAwMDA6MDA6MDEuMAooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjAxLjEKKFhFTikgUENJ
IGFkZCBkZXZpY2UgMDAwMDowMDowNi4wCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6MWEu
MAooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjFjLjAKKFhFTikgUENJIGFkZCBkZXZpY2Ug
MDAwMDowMDoxYy42CihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6MWQuMAooWEVOKSBQQ0kg
YWRkIGRldmljZSAwMDAwOjAwOjFlLjAKKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMDoxZi4w
CihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6MWYuMgooWEVOKSBQQ0kgYWRkIGRldmljZSAw
MDAwOjAwOjFmLjMKKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMTowMC4wCihYRU4pIFBDSSBh
ZGQgZGV2aWNlIDAwMDA6MDE6MDAuMQooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAyOjAwLjAK
KFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowNDowMC4wCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAw
MDA6MDU6MDAuMAooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjA2OjAwLjAKKFhFTikgREVCVUcg
ZXZ0Y2huX2JpbmRfcGlycSA0MDggcGlycT0yNzYgaXJxPTI4IGVtdWlycT0tNjA3ODM1MTIxCihY
RU4pIERFQlVHIGV2dGNobl9iaW5kX3BpcnEgNDA4IHBpcnE9MTYgaXJxPTE2IGVtdWlycT0yODE4
MDU1NzEKKFhFTikgREVCVUcgZXZ0Y2huX2JpbmRfcGlycSA0MDggcGlycT04IGlycT04IGVtdWly
cT0wCihYRU4pIERFQlVHIGV2dGNobl9iaW5kX3BpcnEgNDA4IHBpcnE9Mjc1IGlycT0yOSBlbXVp
cnE9MTk5NjUxODc4OQooWEVOKSBERUJVRyBldnRjaG5fYmluZF9waXJxIDQwOCBwaXJxPTI3NCBp
cnE9MzAgZW11aXJxPTEzMjM2OAooWEVOKSBERUJVRyBldnRjaG5fYmluZF9waXJxIDQwOCBwaXJx
PTI3MyBpcnE9MzEgZW11aXJxPTQ5NjU4MTI0OAooWEVOKSBIVk0xOiBIVk0gTG9hZGVyCihYRU4p
IEhWTTE6IERldGVjdGVkIFhlbiB2NC4yLXVuc3RhYmxlCihYRU4pIEhWTTE6IFhlbmJ1cyByaW5n
cyBAMHhmZWZmYzAwMCwgZXZlbnQgY2hhbm5lbCA0CihYRU4pIEhWTTE6IFN5c3RlbSByZXF1ZXN0
ZWQgU2VhQklPUwooWEVOKSBIVk0xOiBDUFUgc3BlZWQgaXMgMzI5MyBNSHoKKFhFTikgaXJxLmM6
MjcwOiBEb20xIFBDSSBsaW5rIDAgY2hhbmdlZCAwIC0+IDUKKFhFTikgSFZNMTogUENJLUlTQSBs
aW5rIDAgcm91dGVkIHRvIElSUTUKKFhFTikgaXJxLmM6MjcwOiBEb20xIFBDSSBsaW5rIDEgY2hh
bmdlZCAwIC0+IDEwCihYRU4pIEhWTTE6IFBDSS1JU0EgbGluayAxIHJvdXRlZCB0byBJUlExMAoo
WEVOKSBpcnEuYzoyNzA6IERvbTEgUENJIGxpbmsgMiBjaGFuZ2VkIDAgLT4gMTEKKFhFTikgSFZN
MTogUENJLUlTQSBsaW5rIDIgcm91dGVkIHRvIElSUTExCihYRU4pIGlycS5jOjI3MDogRG9tMSBQ
Q0kgbGluayAzIGNoYW5nZWQgMCAtPiA1CihYRU4pIEhWTTE6IFBDSS1JU0EgbGluayAzIHJvdXRl
ZCB0byBJUlE1CihYRU4pIEhWTTE6IHBjaSBkZXYgMDE6MyBJTlRBLT5JUlExMAooWEVOKSBIVk0x
OiBwY2kgZGV2IDAzOjAgSU5UQS0+SVJRNQooWEVOKSBIVk0xOiBwY2kgZGV2IDA0OjAgSU5UQS0+
SVJRNQooWEVOKSBIVk0xOiBwY2kgZGV2IDA1OjAgSU5UQS0+SVJRMTAKKFhFTikgSFZNMTogcGNp
IGRldiAwMjowIGJhciAxMCBzaXplIDAyMDAwMDAwOiBmMDAwMDAwOAooWEVOKSBIVk0xOiBwY2kg
ZGV2IDAzOjAgYmFyIDE0IHNpemUgMDEwMDAwMDA6IGYyMDAwMDA4CihYRU4pIEhWTTE6IHBjaSBk
ZXYgMDU6MCBiYXIgMzAgc2l6ZSAwMDA4MDAwMDogZjMwMDAwMDAKKFhFTikgbWVtb3J5X21hcDph
ZGQ6IGRvbTEgZ2ZuPWYzMDgwIG1mbj1mN2E4MCBucj00MAooWEVOKSBIVk0xOiBwY2kgZGV2IDA1
OjAgYmFyIDFjIHNpemUgMDAwNDAwMDA6IGYzMDgwMDA0CihYRU4pIEhWTTE6IHBjaSBkZXYgMDI6
MCBiYXIgMzAgc2l6ZSAwMDAxMDAwMDogZjMwYzAwMDAKKFhFTikgSFZNMTogcGNpIGRldiAwNDow
IGJhciAzMCBzaXplIDAwMDEwMDAwOiBmMzBkMDAwMAooWEVOKSBtZW1vcnlfbWFwOmFkZDogZG9t
MSBnZm49ZjMwZTAgbWZuPWY3YWMwIG5yPTIKKFhFTikgbWVtb3J5X21hcDphZGQ6IGRvbTEgZ2Zu
PWYzMGUzIG1mbj1mN2FjMyBucj0xCihYRU4pIEhWTTE6IHBjaSBkZXYgMDU6MCBiYXIgMTQgc2l6
ZSAwMDAwNDAwMDogZjMwZTAwMDQKKFhFTikgSFZNMTogcGNpIGRldiAwMjowIGJhciAxNCBzaXpl
IDAwMDAxMDAwOiBmMzBlNDAwMAooWEVOKSBIVk0xOiBwY2kgZGV2IDAzOjAgYmFyIDEwIHNpemUg
MDAwMDAxMDA6IDAwMDBjMDAxCihYRU4pIEhWTTE6IHBjaSBkZXYgMDQ6MCBiYXIgMTAgc2l6ZSAw
MDAwMDEwMDogMDAwMGMxMDEKKFhFTikgSFZNMTogcGNpIGRldiAwNDowIGJhciAxNCBzaXplIDAw
MDAwMTAwOiBmMzBlNTAwMAooWEVOKSBIVk0xOiBwY2kgZGV2IDA1OjAgYmFyIDEwIHNpemUgMDAw
MDAxMDA6IDAwMDBjMjAxCihYRU4pIGlvcG9ydF9tYXA6YWRkOiBkb20xIGdwb3J0PWMyMDAgbXBv
cnQ9ZDAwMCBucj0xMDAKKFhFTikgSFZNMTogcGNpIGRldiAwMToxIGJhciAyMCBzaXplIDAwMDAw
MDEwOiAwMDAwYzMwMQooWEVOKSBIVk0xOiBNdWx0aXByb2Nlc3NvciBpbml0aWFsaXNhdGlvbjoK
KFhFTikgSFZNMTogIC0gQ1BVMCAuLi4gMzYtYml0IHBoeXMgLi4uIGZpeGVkIE1UUlJzIC4uLiB2
YXIgTVRSUnMgWzIvOF0gLi4uIGRvbmUuCihYRU4pIEhWTTE6ICAtIENQVTEgLi4uIDM2LWJpdCBw
aHlzIC4uLiBmaXhlZCBNVFJScyAuLi4gdmFyIE1UUlJzIFsyLzhdIC4uLiBkb25lLgooWEVOKSBI
Vk0xOiBUZXN0aW5nIEhWTSBlbnZpcm9ubWVudDoKKFhFTikgSFZNMTogIC0gUkVQIElOU0IgYWNy
b3NzIHBhZ2UgYm91bmRhcmllcyAuLi4gcGFzc2VkCihYRU4pIEhWTTE6ICAtIEdTIGJhc2UgTVNS
cyBhbmQgU1dBUEdTIC4uLiBwYXNzZWQKKFhFTikgSFZNMTogUGFzc2VkIDIgb2YgMiB0ZXN0cwoo
WEVOKSBIVk0xOiBXcml0aW5nIFNNQklPUyB0YWJsZXMgLi4uCihYRU4pIEhWTTE6IExvYWRpbmcg
U2VhQklPUyAuLi4KKFhFTikgSFZNMTogQ3JlYXRpbmcgTVAgdGFibGVzIC4uLgooWEVOKSBIVk0x
OiBMb2FkaW5nIEFDUEkgLi4uCihYRU4pIEhWTTE6IHZtODYgVFNTIGF0IGZjMDBhMDgwCihYRU4p
IEhWTTE6IEJJT1MgbWFwOgooWEVOKSBIVk0xOiAgMTAwMDAtMTAwZDM6IFNjcmF0Y2ggc3BhY2UK
KFhFTikgSFZNMTogIGUwMDAwLWZmZmZmOiBNYWluIEJJT1MKKFhFTikgSFZNMTogRTgyMCB0YWJs
ZToKKFhFTikgSFZNMTogIFswMF06IDAwMDAwMDAwOjAwMDAwMDAwIC0gMDAwMDAwMDA6MDAwYTAw
MDA6IFJBTQooWEVOKSBIVk0xOiAgSE9MRTogMDAwMDAwMDA6MDAwYTAwMDAgLSAwMDAwMDAwMDow
MDBlMDAwMAooWEVOKSBIVk0xOiAgWzAxXTogMDAwMDAwMDA6MDAwZTAwMDAgLSAwMDAwMDAwMDow
MDEwMDAwMDogUkVTRVJWRUQKKFhFTikgSFZNMTogIFswMl06IDAwMDAwMDAwOjAwMTAwMDAwIC0g
MDAwMDAwMDA6N2Y4MDAwMDA6IFJBTQooWEVOKSBIVk0xOiAgSE9MRTogMDAwMDAwMDA6N2Y4MDAw
MDAgLSAwMDAwMDAwMDpmYzAwMDAwMAooWEVOKSBIVk0xOiAgWzAzXTogMDAwMDAwMDA6ZmMwMDAw
MDAgLSAwMDAwMDAwMTowMDAwMDAwMDogUkVTRVJWRUQKKFhFTikgSFZNMTogSW52b2tpbmcgU2Vh
QklPUyAuLi4KKFhFTikgc3RkdmdhLmM6MTQ3OmQxIGVudGVyaW5nIHN0ZHZnYSBhbmQgY2FjaGlu
ZyBtb2RlcwooWEVOKSBtZW1vcnlfbWFwOmFkZDogZG9tMSBnZm49ZjMwMDAgbWZuPWY3YTAwIG5y
PTgwCihYRU4pIG1lbW9yeV9tYXA6cmVtb3ZlOiBkb20xIGdmbj1mMzAwMCBtZm49ZjdhMDAgbnI9
ODAKKFhFTikgSFZNMjogSFZNIExvYWRlcgooWEVOKSBIVk0yOiBEZXRlY3RlZCBYZW4gdjQuMi11
bnN0YWJsZQooWEVOKSBIVk0yOiBYZW5idXMgcmluZ3MgQDB4ZmVmZmMwMDAsIGV2ZW50IGNoYW5u
ZWwgNAooWEVOKSBIVk0yOiBTeXN0ZW0gcmVxdWVzdGVkIFNlYUJJT1MKKFhFTikgSFZNMjogQ1BV
IHNwZWVkIGlzIDMyOTMgTUh6CihYRU4pIGlycS5jOjI3MDogRG9tMiBQQ0kgbGluayAwIGNoYW5n
ZWQgMCAtPiA1CihYRU4pIEhWTTI6IFBDSS1JU0EgbGluayAwIHJvdXRlZCB0byBJUlE1CihYRU4p
IGlycS5jOjI3MDogRG9tMiBQQ0kgbGluayAxIGNoYW5nZWQgMCAtPiAxMAooWEVOKSBIVk0yOiBQ
Q0ktSVNBIGxpbmsgMSByb3V0ZWQgdG8gSVJRMTAKKFhFTikgaXJxLmM6MjcwOiBEb20yIFBDSSBs
aW5rIDIgY2hhbmdlZCAwIC0+IDExCihYRU4pIEhWTTI6IFBDSS1JU0EgbGluayAyIHJvdXRlZCB0
byBJUlExMQooWEVOKSBpcnEuYzoyNzA6IERvbTIgUENJIGxpbmsgMyBjaGFuZ2VkIDAgLT4gNQoo
WEVOKSBIVk0yOiBQQ0ktSVNBIGxpbmsgMyByb3V0ZWQgdG8gSVJRNQooWEVOKSBIVk0yOiBwY2kg
ZGV2IDAxOjMgSU5UQS0+SVJRMTAKKFhFTikgSFZNMjogcGNpIGRldiAwMzowIElOVEEtPklSUTUK
KFhFTikgSFZNMjogcGNpIGRldiAwNDowIElOVEEtPklSUTUKKFhFTikgSFZNMjogcGNpIGRldiAw
NTowIElOVEEtPklSUTEwCihYRU4pIEhWTTI6IHBjaSBkZXYgMDI6MCBiYXIgMTAgc2l6ZSAwMjAw
MDAwMDogZjAwMDAwMDgKKFhFTikgSFZNMjogcGNpIGRldiAwMzowIGJhciAxNCBzaXplIDAxMDAw
MDAwOiBmMjAwMDAwOAooWEVOKSBIVk0yOiBwY2kgZGV2IDA1OjAgYmFyIDMwIHNpemUgMDAwODAw
MDA6IGYzMDAwMDAwCihYRU4pIG1lbW9yeV9tYXA6YWRkOiBkb20yIGdmbj1mMzA4MCBtZm49Zjdh
ODAgbnI9NDAKKFhFTikgSFZNMjogcGNpIGRldiAwNTowIGJhciAxYyBzaXplIDAwMDQwMDAwOiBm
MzA4MDAwNAooWEVOKSBIVk0yOiBwY2kgZGV2IDAyOjAgYmFyIDMwIHNpemUgMDAwMTAwMDA6IGYz
MGMwMDAwCihYRU4pIEhWTTI6IHBjaSBkZXYgMDQ6MCBiYXIgMzAgc2l6ZSAwMDAxMDAwMDogZjMw
ZDAwMDAKKFhFTikgbWVtb3J5X21hcDphZGQ6IGRvbTIgZ2ZuPWYzMGUwIG1mbj1mN2FjMCBucj0y
CihYRU4pIG1lbW9yeV9tYXA6YWRkOiBkb20yIGdmbj1mMzBlMyBtZm49ZjdhYzMgbnI9MQooWEVO
KSBIVk0yOiBwY2kgZGV2IDA1OjAgYmFyIDE0IHNpemUgMDAwMDQwMDA6IGYzMGUwMDA0CihYRU4p
IEhWTTI6IHBjaSBkZXYgMDI6MCBiYXIgMTQgc2l6ZSAwMDAwMTAwMDogZjMwZTQwMDAKKFhFTikg
SFZNMjogcGNpIGRldiAwMzowIGJhciAxMCBzaXplIDAwMDAwMTAwOiAwMDAwYzAwMQooWEVOKSBI
Vk0yOiBwY2kgZGV2IDA0OjAgYmFyIDEwIHNpemUgMDAwMDAxMDA6IDAwMDBjMTAxCihYRU4pIEhW
TTI6IHBjaSBkZXYgMDQ6MCBiYXIgMTQgc2l6ZSAwMDAwMDEwMDogZjMwZTUwMDAKKFhFTikgSFZN
MjogcGNpIGRldiAwNTowIGJhciAxMCBzaXplIDAwMDAwMTAwOiAwMDAwYzIwMQooWEVOKSBpb3Bv
cnRfbWFwOmFkZDogZG9tMiBncG9ydD1jMjAwIG1wb3J0PWQwMDAgbnI9MTAwCihYRU4pIEhWTTI6
IHBjaSBkZXYgMDE6MSBiYXIgMjAgc2l6ZSAwMDAwMDAxMDogMDAwMGMzMDEKKFhFTikgSFZNMjog
TXVsdGlwcm9jZXNzb3IgaW5pdGlhbGlzYXRpb246CihYRU4pIEhWTTI6ICAtIENQVTAgLi4uIDM2
LWJpdCBwaHlzIC4uLiBmaXhlZCBNVFJScyAuLi4gdmFyIE1UUlJzIFsyLzhdIC4uLiBkb25lLgoo
WEVOKSBIVk0yOiAgLSBDUFUxIC4uLiAzNi1iaXQgcGh5cyAuLi4gZml4ZWQgTVRSUnMgLi4uIHZh
ciBNVFJScyBbMi84XSAuLi4gZG9uZS4KKFhFTikgSFZNMjogVGVzdGluZyBIVk0gZW52aXJvbm1l
bnQ6CihYRU4pIEhWTTI6ICAtIFJFUCBJTlNCIGFjcm9zcyBwYWdlIGJvdW5kYXJpZXMgLi4uIHBh
c3NlZAooWEVOKSBIVk0yOiAgLSBHUyBiYXNlIE1TUnMgYW5kIFNXQVBHUyAuLi4gcGFzc2VkCihY
RU4pIEhWTTI6IFBhc3NlZCAyIG9mIDIgdGVzdHMKKFhFTikgSFZNMjogV3JpdGluZyBTTUJJT1Mg
dGFibGVzIC4uLgooWEVOKSBIVk0yOiBMb2FkaW5nIFNlYUJJT1MgLi4uCihYRU4pIEhWTTI6IENy
ZWF0aW5nIE1QIHRhYmxlcyAuLi4KKFhFTikgSFZNMjogTG9hZGluZyBBQ1BJIC4uLgooWEVOKSBI
Vk0yOiB2bTg2IFRTUyBhdCBmYzAwYTA4MAooWEVOKSBIVk0yOiBCSU9TIG1hcDoKKFhFTikgSFZN
MjogIDEwMDAwLTEwMGQzOiBTY3JhdGNoIHNwYWNlCihYRU4pIEhWTTI6ICBlMDAwMC1mZmZmZjog
TWFpbiBCSU9TCihYRU4pIEhWTTI6IEU4MjAgdGFibGU6CihYRU4pIEhWTTI6ICBbMDBdOiAwMDAw
MDAwMDowMDAwMDAwMCAtIDAwMDAwMDAwOjAwMGEwMDAwOiBSQU0KKFhFTikgSFZNMjogIEhPTEU6
IDAwMDAwMDAwOjAwMGEwMDAwIC0gMDAwMDAwMDA6MDAwZTAwMDAKKFhFTikgSFZNMjogIFswMV06
IDAwMDAwMDAwOjAwMGUwMDAwIC0gMDAwMDAwMDA6MDAxMDAwMDA6IFJFU0VSVkVECihYRU4pIEhW
TTI6ICBbMDJdOiAwMDAwMDAwMDowMDEwMDAwMCAtIDAwMDAwMDAwOjdmODAwMDAwOiBSQU0KKFhF
TikgSFZNMjogIEhPTEU6IDAwMDAwMDAwOjdmODAwMDAwIC0gMDAwMDAwMDA6ZmMwMDAwMDAKKFhF
TikgSFZNMjogIFswM106IDAwMDAwMDAwOmZjMDAwMDAwIC0gMDAwMDAwMDE6MDAwMDAwMDA6IFJF
U0VSVkVECihYRU4pIEhWTTI6IEludm9raW5nIFNlYUJJT1MgLi4uCihYRU4pIHN0ZHZnYS5jOjE0
NzpkMiBlbnRlcmluZyBzdGR2Z2EgYW5kIGNhY2hpbmcgbW9kZXMKKFhFTikgbWVtb3J5X21hcDph
ZGQ6IGRvbTIgZ2ZuPWYzMDAwIG1mbj1mN2EwMCBucj04MAooWEVOKSBtZW1vcnlfbWFwOnJlbW92
ZTogZG9tMiBnZm49ZjMwMDAgbWZuPWY3YTAwIG5yPTgwCihYRU4pIHN0ZHZnYS5jOjE1MTpkMiBs
ZWF2aW5nIHN0ZHZnYQooWEVOKSBIVk0zOiBIVk0gTG9hZGVyCihYRU4pIEhWTTM6IERldGVjdGVk
IFhlbiB2NC4yLXVuc3RhYmxlCihYRU4pIEhWTTM6IFhlbmJ1cyByaW5ncyBAMHhmZWZmYzAwMCwg
ZXZlbnQgY2hhbm5lbCA0CihYRU4pIEhWTTM6IFN5c3RlbSByZXF1ZXN0ZWQgU2VhQklPUwooWEVO
KSBIVk0zOiBDUFUgc3BlZWQgaXMgMzI5MyBNSHoKKFhFTikgaXJxLmM6MjcwOiBEb20zIFBDSSBs
aW5rIDAgY2hhbmdlZCAwIC0+IDUKKFhFTikgSFZNMzogUENJLUlTQSBsaW5rIDAgcm91dGVkIHRv
IElSUTUKKFhFTikgaXJxLmM6MjcwOiBEb20zIFBDSSBsaW5rIDEgY2hhbmdlZCAwIC0+IDEwCihY
RU4pIEhWTTM6IFBDSS1JU0EgbGluayAxIHJvdXRlZCB0byBJUlExMAooWEVOKSBpcnEuYzoyNzA6
IERvbTMgUENJIGxpbmsgMiBjaGFuZ2VkIDAgLT4gMTEKKFhFTikgSFZNMzogUENJLUlTQSBsaW5r
IDIgcm91dGVkIHRvIElSUTExCihYRU4pIGlycS5jOjI3MDogRG9tMyBQQ0kgbGluayAzIGNoYW5n
ZWQgMCAtPiA1CihYRU4pIEhWTTM6IFBDSS1JU0EgbGluayAzIHJvdXRlZCB0byBJUlE1CihYRU4p
IEhWTTM6IHBjaSBkZXYgMDE6MyBJTlRBLT5JUlExMAooWEVOKSBIVk0zOiBwY2kgZGV2IDAzOjAg
SU5UQS0+SVJRNQooWEVOKSBIVk0zOiBwY2kgZGV2IDA0OjAgSU5UQS0+SVJRNQooWEVOKSBIVk0z
OiBwY2kgZGV2IDA1OjAgSU5UQS0+SVJRMTAKKFhFTikgSFZNMzogcGNpIGRldiAwMjowIGJhciAx
MCBzaXplIDAyMDAwMDAwOiBmMDAwMDAwOAooWEVOKSBIVk0zOiBwY2kgZGV2IDAzOjAgYmFyIDE0
IHNpemUgMDEwMDAwMDA6IGYyMDAwMDA4CihYRU4pIEhWTTM6IHBjaSBkZXYgMDU6MCBiYXIgMzAg
c2l6ZSAwMDA4MDAwMDogZjMwMDAwMDAKKFhFTikgbWVtb3J5X21hcDphZGQ6IGRvbTMgZ2ZuPWYz
MDgwIG1mbj1mN2E4MCBucj00MAooWEVOKSBIVk0zOiBwY2kgZGV2IDA1OjAgYmFyIDFjIHNpemUg
MDAwNDAwMDA6IGYzMDgwMDA0CihYRU4pIEhWTTM6IHBjaSBkZXYgMDI6MCBiYXIgMzAgc2l6ZSAw
MDAxMDAwMDogZjMwYzAwMDAKKFhFTikgSFZNMzogcGNpIGRldiAwNDowIGJhciAzMCBzaXplIDAw
MDEwMDAwOiBmMzBkMDAwMAooWEVOKSBtZW1vcnlfbWFwOmFkZDogZG9tMyBnZm49ZjMwZTAgbWZu
PWY3YWMwIG5yPTIKKFhFTikgbWVtb3J5X21hcDphZGQ6IGRvbTMgZ2ZuPWYzMGUzIG1mbj1mN2Fj
MyBucj0xCihYRU4pIEhWTTM6IHBjaSBkZXYgMDU6MCBiYXIgMTQgc2l6ZSAwMDAwNDAwMDogZjMw
ZTAwMDQKKFhFTikgSFZNMzogcGNpIGRldiAwMjowIGJhciAxNCBzaXplIDAwMDAxMDAwOiBmMzBl
NDAwMAooWEVOKSBIVk0zOiBwY2kgZGV2IDAzOjAgYmFyIDEwIHNpemUgMDAwMDAxMDA6IDAwMDBj
MDAxCihYRU4pIEhWTTM6IHBjaSBkZXYgMDQ6MCBiYXIgMTAgc2l6ZSAwMDAwMDEwMDogMDAwMGMx
MDEKKFhFTikgSFZNMzogcGNpIGRldiAwNDowIGJhciAxNCBzaXplIDAwMDAwMTAwOiBmMzBlNTAw
MAooWEVOKSBIVk0zOiBwY2kgZGV2IDA1OjAgYmFyIDEwIHNpemUgMDAwMDAxMDA6IDAwMDBjMjAx
CihYRU4pIGlvcG9ydF9tYXA6YWRkOiBkb20zIGdwb3J0PWMyMDAgbXBvcnQ9ZDAwMCBucj0xMDAK
KFhFTikgSFZNMzogcGNpIGRldiAwMToxIGJhciAyMCBzaXplIDAwMDAwMDEwOiAwMDAwYzMwMQoo
WEVOKSBIVk0zOiBNdWx0aXByb2Nlc3NvciBpbml0aWFsaXNhdGlvbjoKKFhFTikgSFZNMzogIC0g
Q1BVMCAuLi4gMzYtYml0IHBoeXMgLi4uIGZpeGVkIE1UUlJzIC4uLiB2YXIgTVRSUnMgWzIvOF0g
Li4uIGRvbmUuCihYRU4pIEhWTTM6ICAtIENQVTEgLi4uIDM2LWJpdCBwaHlzIC4uLiBmaXhlZCBN
VFJScyAuLi4gdmFyIE1UUlJzIFsyLzhdIC4uLiBkb25lLgooWEVOKSBIVk0zOiBUZXN0aW5nIEhW
TSBlbnZpcm9ubWVudDoKKFhFTikgSFZNMzogIC0gUkVQIElOU0IgYWNyb3NzIHBhZ2UgYm91bmRh
cmllcyAuLi4gcGFzc2VkCihYRU4pIEhWTTM6ICAtIEdTIGJhc2UgTVNScyBhbmQgU1dBUEdTIC4u
LiBwYXNzZWQKKFhFTikgSFZNMzogUGFzc2VkIDIgb2YgMiB0ZXN0cwooWEVOKSBIVk0zOiBXcml0
aW5nIFNNQklPUyB0YWJsZXMgLi4uCihYRU4pIEhWTTM6IExvYWRpbmcgU2VhQklPUyAuLi4KKFhF
TikgSFZNMzogQ3JlYXRpbmcgTVAgdGFibGVzIC4uLgooWEVOKSBIVk0zOiBMb2FkaW5nIEFDUEkg
Li4uCihYRU4pIEhWTTM6IHZtODYgVFNTIGF0IGZjMDBhMDgwCihYRU4pIEhWTTM6IEJJT1MgbWFw
OgooWEVOKSBIVk0zOiAgMTAwMDAtMTAwZDM6IFNjcmF0Y2ggc3BhY2UKKFhFTikgSFZNMzogIGUw
MDAwLWZmZmZmOiBNYWluIEJJT1MKKFhFTikgSFZNMzogRTgyMCB0YWJsZToKKFhFTikgSFZNMzog
IFswMF06IDAwMDAwMDAwOjAwMDAwMDAwIC0gMDAwMDAwMDA6MDAwYTAwMDA6IFJBTQooWEVOKSBI
Vk0zOiAgSE9MRTogMDAwMDAwMDA6MDAwYTAwMDAgLSAwMDAwMDAwMDowMDBlMDAwMAooWEVOKSBI
Vk0zOiAgWzAxXTogMDAwMDAwMDA6MDAwZTAwMDAgLSAwMDAwMDAwMDowMDEwMDAwMDogUkVTRVJW
RUQKKFhFTikgSFZNMzogIFswMl06IDAwMDAwMDAwOjAwMTAwMDAwIC0gMDAwMDAwMDA6N2Y4MDAw
MDA6IFJBTQooWEVOKSBIVk0zOiAgSE9MRTogMDAwMDAwMDA6N2Y4MDAwMDAgLSAwMDAwMDAwMDpm
YzAwMDAwMAooWEVOKSBIVk0zOiAgWzAzXTogMDAwMDAwMDA6ZmMwMDAwMDAgLSAwMDAwMDAwMTow
MDAwMDAwMDogUkVTRVJWRUQKKFhFTikgSFZNMzogSW52b2tpbmcgU2VhQklPUyAuLi4KKFhFTikg
c3RkdmdhLmM6MTQ3OmQzIGVudGVyaW5nIHN0ZHZnYSBhbmQgY2FjaGluZyBtb2RlcwooWEVOKSBt
ZW1vcnlfbWFwOmFkZDogZG9tMyBnZm49ZjMwMDAgbWZuPWY3YTAwIG5yPTgwCihYRU4pIG1lbW9y
eV9tYXA6cmVtb3ZlOiBkb20zIGdmbj1mMzAwMCBtZm49ZjdhMDAgbnI9ODAKKFhFTikgc3Rkdmdh
LmM6MTUxOmQzIGxlYXZpbmcgc3RkdmdhCihYRU4pIHN0ZHZnYS5jOjE0NzpkMyBlbnRlcmluZyBz
dGR2Z2EgYW5kIGNhY2hpbmcgbW9kZXMKKFhFTikgaXJxLmM6Mzc1OiBEb20zIGNhbGxiYWNrIHZp
YSBjaGFuZ2VkIHRvIERpcmVjdCBWZWN0b3IgMHhmMwooWEVOKSBtZW1vcnlfbWFwOnJlbW92ZTog
ZG9tMyBnZm49ZjMwODAgbWZuPWY3YTgwIG5yPTQwCihYRU4pIG1lbW9yeV9tYXA6cmVtb3ZlOiBk
b20zIGdmbj1mMzBlMCBtZm49ZjdhYzAgbnI9MgooWEVOKSBtZW1vcnlfbWFwOnJlbW92ZTogZG9t
MyBnZm49ZjMwZTMgbWZuPWY3YWMzIG5yPTEKKFhFTikgaW9wb3J0X21hcDpyZW1vdmU6IGRvbTMg
Z3BvcnQ9YzIwMCBtcG9ydD1kMDAwIG5yPTEwMAooWEVOKSBtZW1vcnlfbWFwOmFkZDogZG9tMyBn
Zm49ZjMwODAgbWZuPWY3YTgwIG5yPTQwCihYRU4pIG1lbW9yeV9tYXA6YWRkOiBkb20zIGdmbj1m
MzBlMCBtZm49ZjdhYzAgbnI9MgooWEVOKSBtZW1vcnlfbWFwOmFkZDogZG9tMyBnZm49ZjMwZTMg
bWZuPWY3YWMzIG5yPTEKKFhFTikgaW9wb3J0X21hcDphZGQ6IGRvbTMgZ3BvcnQ9YzIwMCBtcG9y
dD1kMDAwIG5yPTEwMAooWEVOKSBtZW1vcnlfbWFwOnJlbW92ZTogZG9tMyBnZm49ZjMwODAgbWZu
PWY3YTgwIG5yPTQwCihYRU4pIG1lbW9yeV9tYXA6cmVtb3ZlOiBkb20zIGdmbj1mMzBlMCBtZm49
ZjdhYzAgbnI9MgooWEVOKSBtZW1vcnlfbWFwOnJlbW92ZTogZG9tMyBnZm49ZjMwZTMgbWZuPWY3
YWMzIG5yPTEKKFhFTikgaW9wb3J0X21hcDpyZW1vdmU6IGRvbTMgZ3BvcnQ9YzIwMCBtcG9ydD1k
MDAwIG5yPTEwMAooWEVOKSBtZW1vcnlfbWFwOmFkZDogZG9tMyBnZm49ZjMwODAgbWZuPWY3YTgw
IG5yPTQwCihYRU4pIG1lbW9yeV9tYXA6YWRkOiBkb20zIGdmbj1mMzBlMCBtZm49ZjdhYzAgbnI9
MgooWEVOKSBtZW1vcnlfbWFwOmFkZDogZG9tMyBnZm49ZjMwZTMgbWZuPWY3YWMzIG5yPTEKKFhF
TikgaW9wb3J0X21hcDphZGQ6IGRvbTMgZ3BvcnQ9YzIwMCBtcG9ydD1kMDAwIG5yPTEwMAooWEVO
KSBtZW1vcnlfbWFwOnJlbW92ZTogZG9tMyBnZm49ZjMwODAgbWZuPWY3YTgwIG5yPTQwCihYRU4p
IG1lbW9yeV9tYXA6cmVtb3ZlOiBkb20zIGdmbj1mMzBlMCBtZm49ZjdhYzAgbnI9MgooWEVOKSBt
ZW1vcnlfbWFwOnJlbW92ZTogZG9tMyBnZm49ZjMwZTMgbWZuPWY3YWMzIG5yPTEKKFhFTikgaW9w
b3J0X21hcDpyZW1vdmU6IGRvbTMgZ3BvcnQ9YzIwMCBtcG9ydD1kMDAwIG5yPTEwMAooWEVOKSBt
ZW1vcnlfbWFwOmFkZDogZG9tMyBnZm49ZjMwODAgbWZuPWY3YTgwIG5yPTQwCihYRU4pIG1lbW9y
eV9tYXA6YWRkOiBkb20zIGdmbj1mMzBlMCBtZm49ZjdhYzAgbnI9MgooWEVOKSBtZW1vcnlfbWFw
OmFkZDogZG9tMyBnZm49ZjMwZTMgbWZuPWY3YWMzIG5yPTEKKFhFTikgaW9wb3J0X21hcDphZGQ6
IGRvbTMgZ3BvcnQ9YzIwMCBtcG9ydD1kMDAwIG5yPTEwMAooWEVOKSBtZW1vcnlfbWFwOnJlbW92
ZTogZG9tMyBnZm49ZjMwODAgbWZuPWY3YTgwIG5yPTQwCihYRU4pIG1lbW9yeV9tYXA6cmVtb3Zl
OiBkb20zIGdmbj1mMzBlMCBtZm49ZjdhYzAgbnI9MgooWEVOKSBtZW1vcnlfbWFwOnJlbW92ZTog
ZG9tMyBnZm49ZjMwZTMgbWZuPWY3YWMzIG5yPTEKKFhFTikgaW9wb3J0X21hcDpyZW1vdmU6IGRv
bTMgZ3BvcnQ9YzIwMCBtcG9ydD1kMDAwIG5yPTEwMAooWEVOKSBtZW1vcnlfbWFwOmFkZDogZG9t
MyBnZm49ZjMwODAgbWZuPWY3YTgwIG5yPTQwCihYRU4pIG1lbW9yeV9tYXA6YWRkOiBkb20zIGdm
bj1mMzBlMCBtZm49ZjdhYzAgbnI9MgooWEVOKSBtZW1vcnlfbWFwOmFkZDogZG9tMyBnZm49ZjMw
ZTMgbWZuPWY3YWMzIG5yPTEKKFhFTikgaW9wb3J0X21hcDphZGQ6IGRvbTMgZ3BvcnQ9YzIwMCBt
cG9ydD1kMDAwIG5yPTEwMAooWEVOKSBtZW1vcnlfbWFwOnJlbW92ZTogZG9tMyBnZm49ZjMwODAg
bWZuPWY3YTgwIG5yPTQwCihYRU4pIG1lbW9yeV9tYXA6cmVtb3ZlOiBkb20zIGdmbj1mMzBlMCBt
Zm49ZjdhYzAgbnI9MgooWEVOKSBtZW1vcnlfbWFwOnJlbW92ZTogZG9tMyBnZm49ZjMwZTMgbWZu
PWY3YWMzIG5yPTEKKFhFTikgaW9wb3J0X21hcDpyZW1vdmU6IGRvbTMgZ3BvcnQ9YzIwMCBtcG9y
dD1kMDAwIG5yPTEwMAooWEVOKSBtZW1vcnlfbWFwOmFkZDogZG9tMyBnZm49ZjMwODAgbWZuPWY3
YTgwIG5yPTQwCihYRU4pIG1lbW9yeV9tYXA6YWRkOiBkb20zIGdmbj1mMzBlMCBtZm49ZjdhYzAg
bnI9MgooWEVOKSBtZW1vcnlfbWFwOmFkZDogZG9tMyBnZm49ZjMwZTMgbWZuPWY3YWMzIG5yPTEK
KFhFTikgaW9wb3J0X21hcDphZGQ6IGRvbTMgZ3BvcnQ9YzIwMCBtcG9ydD1kMDAwIG5yPTEwMAoo
WEVOKSBpcnEuYzoyNzA6IERvbTMgUENJIGxpbmsgMCBjaGFuZ2VkIDUgLT4gMAooWEVOKSBpcnEu
YzoyNzA6IERvbTMgUENJIGxpbmsgMSBjaGFuZ2VkIDEwIC0+IDAKKFhFTikgaXJxLmM6MjcwOiBE
b20zIFBDSSBsaW5rIDIgY2hhbmdlZCAxMSAtPiAwCihYRU4pIGlycS5jOjI3MDogRG9tMyBQQ0kg
bGluayAzIGNoYW5nZWQgNSAtPiAwCihYRU4pIERFQlVHIGV2dGNobl9iaW5kX3BpcnEgNDA4IHBp
cnE9MTggaXJxPTAgZW11aXJxPTEyCihYRU4pIERFQlVHIGV2dGNobl9iaW5kX3BpcnEgNDA4IHBp
cnE9MTggaXJxPTAgZW11aXJxPTEyCihYRU4pIERFQlVHIGV2dGNobl9iaW5kX3BpcnEgNDA4IHBp
cnE9MTkgaXJxPTAgZW11aXJxPTEKKFhFTikgREVCVUcgZXZ0Y2huX2JpbmRfcGlycSA0MDggcGly
cT0xNyBpcnE9MCBlbXVpcnE9OAooWEVOKSBERUJVRyBldnRjaG5fYmluZF9waXJxIDQwOCBwaXJx
PTIxIGlycT0wIGVtdWlycT00CihYRU4pIERFQlVHIGV2dGNobl9iaW5kX3BpcnEgNDA4IHBpcnE9
MjAgaXJxPTAgZW11aXJxPTYKKFhFTikgREVCVUcgZXZ0Y2huX2JpbmRfcGlycSA0MDggcGlycT01
NSBpcnE9LTEgZW11aXJxPS0xCihYRU4pIERFQlVHIGh2bV9wY2lfbXNpX2Fzc2VydCBwaXJxPTQg
aHZtX2RvbWFpbl91c2VfcGlycT0wIGVtdWlycT0tMQooWEVOKSB2bXNpLmM6MTA4OmQzMjc2NyBV
bnN1cHBvcnRlZCBkZWxpdmVyeSBtb2RlIDMKKFhFTikgREVCVUcgZXZ0Y2huX2JpbmRfcGlycSA0
MDggcGlycT0yMiBpcnE9MCBlbXVpcnE9NwooWEVOKSBERUJVRyBldnRjaG5fYmluZF9waXJxIDQw
OCBwaXJxPTIxIGlycT0wIGVtdWlycT00CihYRU4pIEhWTTQ6IEhWTSBMb2FkZXIKKFhFTikgSFZN
NDogRGV0ZWN0ZWQgWGVuIHY0LjItdW5zdGFibGUKKFhFTikgSFZNNDogWGVuYnVzIHJpbmdzIEAw
eGZlZmZjMDAwLCBldmVudCBjaGFubmVsIDQKKFhFTikgSFZNNDogU3lzdGVtIHJlcXVlc3RlZCBT
ZWFCSU9TCihYRU4pIEhWTTQ6IENQVSBzcGVlZCBpcyAzMjkzIE1IegooWEVOKSBpcnEuYzoyNzA6
IERvbTQgUENJIGxpbmsgMCBjaGFuZ2VkIDAgLT4gNQooWEVOKSBIVk00OiBQQ0ktSVNBIGxpbmsg
MCByb3V0ZWQgdG8gSVJRNQooWEVOKSBpcnEuYzoyNzA6IERvbTQgUENJIGxpbmsgMSBjaGFuZ2Vk
IDAgLT4gMTAKKFhFTikgSFZNNDogUENJLUlTQSBsaW5rIDEgcm91dGVkIHRvIElSUTEwCihYRU4p
IGlycS5jOjI3MDogRG9tNCBQQ0kgbGluayAyIGNoYW5nZWQgMCAtPiAxMQooWEVOKSBIVk00OiBQ
Q0ktSVNBIGxpbmsgMiByb3V0ZWQgdG8gSVJRMTEKKFhFTikgaXJxLmM6MjcwOiBEb200IFBDSSBs
aW5rIDMgY2hhbmdlZCAwIC0+IDUKKFhFTikgSFZNNDogUENJLUlTQSBsaW5rIDMgcm91dGVkIHRv
IElSUTUKKFhFTikgSFZNNDogcGNpIGRldiAwMTozIElOVEEtPklSUTEwCihYRU4pIEhWTTQ6IHBj
aSBkZXYgMDM6MCBJTlRBLT5JUlE1CihYRU4pIEhWTTQ6IHBjaSBkZXYgMDQ6MCBJTlRBLT5JUlE1
CihYRU4pIEhWTTQ6IHBjaSBkZXYgMDU6MCBJTlRBLT5JUlExMAooWEVOKSBIVk00OiBwY2kgZGV2
IDAyOjAgYmFyIDEwIHNpemUgMDIwMDAwMDA6IGYwMDAwMDA4CihYRU4pIEhWTTQ6IHBjaSBkZXYg
MDM6MCBiYXIgMTQgc2l6ZSAwMTAwMDAwMDogZjIwMDAwMDgKKFhFTikgSFZNNDogcGNpIGRldiAw
NTowIGJhciAzMCBzaXplIDAwMDgwMDAwOiBmMzAwMDAwMAooWEVOKSBtZW1vcnlfbWFwOmFkZDog
ZG9tNCBnZm49ZjMwODAgbWZuPWY3YTgwIG5yPTQwCihYRU4pIEhWTTQ6IHBjaSBkZXYgMDU6MCBi
YXIgMWMgc2l6ZSAwMDA0MDAwMDogZjMwODAwMDQKKFhFTikgSFZNNDogcGNpIGRldiAwMjowIGJh
ciAzMCBzaXplIDAwMDEwMDAwOiBmMzBjMDAwMAooWEVOKSBIVk00OiBwY2kgZGV2IDA0OjAgYmFy
IDMwIHNpemUgMDAwMTAwMDA6IGYzMGQwMDAwCihYRU4pIG1lbW9yeV9tYXA6YWRkOiBkb200IGdm
bj1mMzBlMCBtZm49ZjdhYzAgbnI9MgooWEVOKSBtZW1vcnlfbWFwOmFkZDogZG9tNCBnZm49ZjMw
ZTMgbWZuPWY3YWMzIG5yPTEKKFhFTikgSFZNNDogcGNpIGRldiAwNTowIGJhciAxNCBzaXplIDAw
MDA0MDAwOiBmMzBlMDAwNAooWEVOKSBIVk00OiBwY2kgZGV2IDAyOjAgYmFyIDE0IHNpemUgMDAw
MDEwMDA6IGYzMGU0MDAwCihYRU4pIEhWTTQ6IHBjaSBkZXYgMDM6MCBiYXIgMTAgc2l6ZSAwMDAw
MDEwMDogMDAwMGMwMDEKKFhFTikgSFZNNDogcGNpIGRldiAwNDowIGJhciAxMCBzaXplIDAwMDAw
MTAwOiAwMDAwYzEwMQooWEVOKSBIVk00OiBwY2kgZGV2IDA0OjAgYmFyIDE0IHNpemUgMDAwMDAx
MDA6IGYzMGU1MDAwCihYRU4pIEhWTTQ6IHBjaSBkZXYgMDU6MCBiYXIgMTAgc2l6ZSAwMDAwMDEw
MDogMDAwMGMyMDEKKFhFTikgaW9wb3J0X21hcDphZGQ6IGRvbTQgZ3BvcnQ9YzIwMCBtcG9ydD1k
MDAwIG5yPTEwMAooWEVOKSBIVk00OiBwY2kgZGV2IDAxOjEgYmFyIDIwIHNpemUgMDAwMDAwMTA6
IDAwMDBjMzAxCihYRU4pIEhWTTQ6IE11bHRpcHJvY2Vzc29yIGluaXRpYWxpc2F0aW9uOgooWEVO
KSBIVk00OiAgLSBDUFUwIC4uLiAzNi1iaXQgcGh5cyAuLi4gZml4ZWQgTVRSUnMgLi4uIHZhciBN
VFJScyBbMi84XSAuLi4gZG9uZS4KKFhFTikgSFZNNDogIC0gQ1BVMSAuLi4gMzYtYml0IHBoeXMg
Li4uIGZpeGVkIE1UUlJzIC4uLiB2YXIgTVRSUnMgWzIvOF0gLi4uIGRvbmUuCihYRU4pIEhWTTQ6
IFRlc3RpbmcgSFZNIGVudmlyb25tZW50OgooWEVOKSBIVk00OiAgLSBSRVAgSU5TQiBhY3Jvc3Mg
cGFnZSBib3VuZGFyaWVzIC4uLiBwYXNzZWQKKFhFTikgSFZNNDogIC0gR1MgYmFzZSBNU1JzIGFu
ZCBTV0FQR1MgLi4uIHBhc3NlZAooWEVOKSBIVk00OiBQYXNzZWQgMiBvZiAyIHRlc3RzCihYRU4p
IEhWTTQ6IFdyaXRpbmcgU01CSU9TIHRhYmxlcyAuLi4KKFhFTikgSFZNNDogTG9hZGluZyBTZWFC
SU9TIC4uLgooWEVOKSBIVk00OiBDcmVhdGluZyBNUCB0YWJsZXMgLi4uCihYRU4pIEhWTTQ6IExv
YWRpbmcgQUNQSSAuLi4KKFhFTikgSFZNNDogdm04NiBUU1MgYXQgZmMwMGEwODAKKFhFTikgSFZN
NDogQklPUyBtYXA6CihYRU4pIEhWTTQ6ICAxMDAwMC0xMDBkMzogU2NyYXRjaCBzcGFjZQooWEVO
KSBIVk00OiAgZTAwMDAtZmZmZmY6IE1haW4gQklPUwooWEVOKSBIVk00OiBFODIwIHRhYmxlOgoo
WEVOKSBIVk00OiAgWzAwXTogMDAwMDAwMDA6MDAwMDAwMDAgLSAwMDAwMDAwMDowMDBhMDAwMDog
UkFNCihYRU4pIEhWTTQ6ICBIT0xFOiAwMDAwMDAwMDowMDBhMDAwMCAtIDAwMDAwMDAwOjAwMGUw
MDAwCihYRU4pIEhWTTQ6ICBbMDFdOiAwMDAwMDAwMDowMDBlMDAwMCAtIDAwMDAwMDAwOjAwMTAw
MDAwOiBSRVNFUlZFRAooWEVOKSBIVk00OiAgWzAyXTogMDAwMDAwMDA6MDAxMDAwMDAgLSAwMDAw
MDAwMDo3ZjgwMDAwMDogUkFNCihYRU4pIEhWTTQ6ICBIT0xFOiAwMDAwMDAwMDo3ZjgwMDAwMCAt
IDAwMDAwMDAwOmZjMDAwMDAwCihYRU4pIEhWTTQ6ICBbMDNdOiAwMDAwMDAwMDpmYzAwMDAwMCAt
IDAwMDAwMDAxOjAwMDAwMDAwOiBSRVNFUlZFRAooWEVOKSBIVk00OiBJbnZva2luZyBTZWFCSU9T
IC4uLgooWEVOKSBzdGR2Z2EuYzoxNDc6ZDQgZW50ZXJpbmcgc3RkdmdhIGFuZCBjYWNoaW5nIG1v
ZGVzCihYRU4pIG1lbW9yeV9tYXA6YWRkOiBkb200IGdmbj1mMzAwMCBtZm49ZjdhMDAgbnI9ODAK
KFhFTikgbWVtb3J5X21hcDpyZW1vdmU6IGRvbTQgZ2ZuPWYzMDAwIG1mbj1mN2EwMCBucj04MAoo
WEVOKSBzdGR2Z2EuYzoxNTE6ZDQgbGVhdmluZyBzdGR2Z2EKKFhFTikgc3RkdmdhLmM6MTQ3OmQ0
IGVudGVyaW5nIHN0ZHZnYSBhbmQgY2FjaGluZyBtb2RlcwooWEVOKSBpcnEuYzozNzU6IERvbTQg
Y2FsbGJhY2sgdmlhIGNoYW5nZWQgdG8gRGlyZWN0IFZlY3RvciAweGYzCihYRU4pIG1lbW9yeV9t
YXA6cmVtb3ZlOiBkb200IGdmbj1mMzA4MCBtZm49ZjdhODAgbnI9NDAKKFhFTikgbWVtb3J5X21h
cDpyZW1vdmU6IGRvbTQgZ2ZuPWYzMGUwIG1mbj1mN2FjMCBucj0yCihYRU4pIG1lbW9yeV9tYXA6
cmVtb3ZlOiBkb200IGdmbj1mMzBlMyBtZm49ZjdhYzMgbnI9MQooWEVOKSBpb3BvcnRfbWFwOnJl
bW92ZTogZG9tNCBncG9ydD1jMjAwIG1wb3J0PWQwMDAgbnI9MTAwCihYRU4pIG1lbW9yeV9tYXA6
YWRkOiBkb200IGdmbj1mMzA4MCBtZm49ZjdhODAgbnI9NDAKKFhFTikgbWVtb3J5X21hcDphZGQ6
IGRvbTQgZ2ZuPWYzMGUwIG1mbj1mN2FjMCBucj0yCihYRU4pIG1lbW9yeV9tYXA6YWRkOiBkb200
IGdmbj1mMzBlMyBtZm49ZjdhYzMgbnI9MQooWEVOKSBpb3BvcnRfbWFwOmFkZDogZG9tNCBncG9y
dD1jMjAwIG1wb3J0PWQwMDAgbnI9MTAwCihYRU4pIG1lbW9yeV9tYXA6cmVtb3ZlOiBkb200IGdm
bj1mMzA4MCBtZm49ZjdhODAgbnI9NDAKKFhFTikgbWVtb3J5X21hcDpyZW1vdmU6IGRvbTQgZ2Zu
PWYzMGUwIG1mbj1mN2FjMCBucj0yCihYRU4pIG1lbW9yeV9tYXA6cmVtb3ZlOiBkb200IGdmbj1m
MzBlMyBtZm49ZjdhYzMgbnI9MQooWEVOKSBpb3BvcnRfbWFwOnJlbW92ZTogZG9tNCBncG9ydD1j
MjAwIG1wb3J0PWQwMDAgbnI9MTAwCihYRU4pIG1lbW9yeV9tYXA6YWRkOiBkb200IGdmbj1mMzA4
MCBtZm49ZjdhODAgbnI9NDAKKFhFTikgbWVtb3J5X21hcDphZGQ6IGRvbTQgZ2ZuPWYzMGUwIG1m
bj1mN2FjMCBucj0yCihYRU4pIG1lbW9yeV9tYXA6YWRkOiBkb200IGdmbj1mMzBlMyBtZm49Zjdh
YzMgbnI9MQooWEVOKSBpb3BvcnRfbWFwOmFkZDogZG9tNCBncG9ydD1jMjAwIG1wb3J0PWQwMDAg
bnI9MTAwCihYRU4pIG1lbW9yeV9tYXA6cmVtb3ZlOiBkb200IGdmbj1mMzA4MCBtZm49ZjdhODAg
bnI9NDAKKFhFTikgbWVtb3J5X21hcDpyZW1vdmU6IGRvbTQgZ2ZuPWYzMGUwIG1mbj1mN2FjMCBu
cj0yCihYRU4pIG1lbW9yeV9tYXA6cmVtb3ZlOiBkb200IGdmbj1mMzBlMyBtZm49ZjdhYzMgbnI9
MQooWEVOKSBpb3BvcnRfbWFwOnJlbW92ZTogZG9tNCBncG9ydD1jMjAwIG1wb3J0PWQwMDAgbnI9
MTAwCihYRU4pIG1lbW9yeV9tYXA6YWRkOiBkb200IGdmbj1mMzA4MCBtZm49ZjdhODAgbnI9NDAK
KFhFTikgbWVtb3J5X21hcDphZGQ6IGRvbTQgZ2ZuPWYzMGUwIG1mbj1mN2FjMCBucj0yCihYRU4p
IG1lbW9yeV9tYXA6YWRkOiBkb200IGdmbj1mMzBlMyBtZm49ZjdhYzMgbnI9MQooWEVOKSBpb3Bv
cnRfbWFwOmFkZDogZG9tNCBncG9ydD1jMjAwIG1wb3J0PWQwMDAgbnI9MTAwCihYRU4pIG1lbW9y
eV9tYXA6cmVtb3ZlOiBkb200IGdmbj1mMzA4MCBtZm49ZjdhODAgbnI9NDAKKFhFTikgbWVtb3J5
X21hcDpyZW1vdmU6IGRvbTQgZ2ZuPWYzMGUwIG1mbj1mN2FjMCBucj0yCihYRU4pIG1lbW9yeV9t
YXA6cmVtb3ZlOiBkb200IGdmbj1mMzBlMyBtZm49ZjdhYzMgbnI9MQooWEVOKSBpb3BvcnRfbWFw
OnJlbW92ZTogZG9tNCBncG9ydD1jMjAwIG1wb3J0PWQwMDAgbnI9MTAwCihYRU4pIG1lbW9yeV9t
YXA6YWRkOiBkb200IGdmbj1mMzA4MCBtZm49ZjdhODAgbnI9NDAKKFhFTikgbWVtb3J5X21hcDph
ZGQ6IGRvbTQgZ2ZuPWYzMGUwIG1mbj1mN2FjMCBucj0yCihYRU4pIG1lbW9yeV9tYXA6YWRkOiBk
b200IGdmbj1mMzBlMyBtZm49ZjdhYzMgbnI9MQooWEVOKSBpb3BvcnRfbWFwOmFkZDogZG9tNCBn
cG9ydD1jMjAwIG1wb3J0PWQwMDAgbnI9MTAwCihYRU4pIG1lbW9yeV9tYXA6cmVtb3ZlOiBkb200
IGdmbj1mMzA4MCBtZm49ZjdhODAgbnI9NDAKKFhFTikgbWVtb3J5X21hcDpyZW1vdmU6IGRvbTQg
Z2ZuPWYzMGUwIG1mbj1mN2FjMCBucj0yCihYRU4pIG1lbW9yeV9tYXA6cmVtb3ZlOiBkb200IGdm
bj1mMzBlMyBtZm49ZjdhYzMgbnI9MQooWEVOKSBpb3BvcnRfbWFwOnJlbW92ZTogZG9tNCBncG9y
dD1jMjAwIG1wb3J0PWQwMDAgbnI9MTAwCihYRU4pIG1lbW9yeV9tYXA6YWRkOiBkb200IGdmbj1m
MzA4MCBtZm49ZjdhODAgbnI9NDAKKFhFTikgbWVtb3J5X21hcDphZGQ6IGRvbTQgZ2ZuPWYzMGUw
IG1mbj1mN2FjMCBucj0yCihYRU4pIG1lbW9yeV9tYXA6YWRkOiBkb200IGdmbj1mMzBlMyBtZm49
ZjdhYzMgbnI9MQooWEVOKSBpb3BvcnRfbWFwOmFkZDogZG9tNCBncG9ydD1jMjAwIG1wb3J0PWQw
MDAgbnI9MTAwCihYRU4pIGlycS5jOjI3MDogRG9tNCBQQ0kgbGluayAwIGNoYW5nZWQgNSAtPiAw
CihYRU4pIGlycS5jOjI3MDogRG9tNCBQQ0kgbGluayAxIGNoYW5nZWQgMTAgLT4gMAooWEVOKSBp
cnEuYzoyNzA6IERvbTQgUENJIGxpbmsgMiBjaGFuZ2VkIDExIC0+IDAKKFhFTikgaXJxLmM6Mjcw
OiBEb200IFBDSSBsaW5rIDMgY2hhbmdlZCA1IC0+IDAKKFhFTikgREVCVUcgZXZ0Y2huX2JpbmRf
cGlycSA0MDggcGlycT0xOCBpcnE9MCBlbXVpcnE9MTIKKFhFTikgREVCVUcgZXZ0Y2huX2JpbmRf
cGlycSA0MDggcGlycT0xOCBpcnE9MCBlbXVpcnE9MTIKKFhFTikgREVCVUcgZXZ0Y2huX2JpbmRf
cGlycSA0MDggcGlycT0xOSBpcnE9MCBlbXVpcnE9MQooWEVOKSBERUJVRyBldnRjaG5fYmluZF9w
aXJxIDQwOCBwaXJxPTE3IGlycT0wIGVtdWlycT04CihYRU4pIERFQlVHIGV2dGNobl9iaW5kX3Bp
cnEgNDA4IHBpcnE9MjEgaXJxPTAgZW11aXJxPTQKKFhFTikgREVCVUcgZXZ0Y2huX2JpbmRfcGly
cSA0MDggcGlycT0yMCBpcnE9MCBlbXVpcnE9NgooWEVOKSBERUJVRyBldnRjaG5fYmluZF9waXJx
IDQwOCBwaXJxPTU1IGlycT0tMSBlbXVpcnE9LTEKKFhFTikgREVCVUcgaHZtX3BjaV9tc2lfYXNz
ZXJ0IHBpcnE9NCBodm1fZG9tYWluX3VzZV9waXJxPTAgZW11aXJxPS0xCihYRU4pIHZtc2kuYzox
MDg6ZDMyNzY3IFVuc3VwcG9ydGVkIGRlbGl2ZXJ5IG1vZGUgMwooWEVOKSBERUJVRyBldnRjaG5f
YmluZF9waXJxIDQwOCBwaXJxPTIyIGlycT0wIGVtdWlycT03CihYRU4pIERFQlVHIGV2dGNobl9i
aW5kX3BpcnEgNDA4IHBpcnE9MjEgaXJxPTAgZW11aXJxPTQKKFhFTikgSFZNNTogSFZNIExvYWRl
cgooWEVOKSBIVk01OiBEZXRlY3RlZCBYZW4gdjQuMi11bnN0YWJsZQooWEVOKSBIVk01OiBYZW5i
dXMgcmluZ3MgQDB4ZmVmZmMwMDAsIGV2ZW50IGNoYW5uZWwgNAooWEVOKSBIVk01OiBTeXN0ZW0g
cmVxdWVzdGVkIFNlYUJJT1MKKFhFTikgSFZNNTogQ1BVIHNwZWVkIGlzIDMyOTMgTUh6CihYRU4p
IGlycS5jOjI3MDogRG9tNSBQQ0kgbGluayAwIGNoYW5nZWQgMCAtPiA1CihYRU4pIEhWTTU6IFBD
SS1JU0EgbGluayAwIHJvdXRlZCB0byBJUlE1CihYRU4pIGlycS5jOjI3MDogRG9tNSBQQ0kgbGlu
ayAxIGNoYW5nZWQgMCAtPiAxMAooWEVOKSBIVk01OiBQQ0ktSVNBIGxpbmsgMSByb3V0ZWQgdG8g
SVJRMTAKKFhFTikgaXJxLmM6MjcwOiBEb201IFBDSSBsaW5rIDIgY2hhbmdlZCAwIC0+IDExCihY
RU4pIEhWTTU6IFBDSS1JU0EgbGluayAyIHJvdXRlZCB0byBJUlExMQooWEVOKSBpcnEuYzoyNzA6
IERvbTUgUENJIGxpbmsgMyBjaGFuZ2VkIDAgLT4gNQooWEVOKSBIVk01OiBQQ0ktSVNBIGxpbmsg
MyByb3V0ZWQgdG8gSVJRNQooWEVOKSBIVk01OiBwY2kgZGV2IDAxOjMgSU5UQS0+SVJRMTAKKFhF
TikgSFZNNTogcGNpIGRldiAwMzowIElOVEEtPklSUTUKKFhFTikgSFZNNTogcGNpIGRldiAwNDow
IElOVEEtPklSUTUKKFhFTikgSFZNNTogcGNpIGRldiAwNTowIElOVEEtPklSUTEwCihYRU4pIEhW
TTU6IHBjaSBkZXYgMDI6MCBiYXIgMTAgc2l6ZSAwMjAwMDAwMDogZjAwMDAwMDgKKFhFTikgSFZN
NTogcGNpIGRldiAwMzowIGJhciAxNCBzaXplIDAxMDAwMDAwOiBmMjAwMDAwOAooWEVOKSBIVk01
OiBwY2kgZGV2IDA1OjAgYmFyIDMwIHNpemUgMDAwODAwMDA6IGYzMDAwMDAwCihYRU4pIG1lbW9y
eV9tYXA6YWRkOiBkb201IGdmbj1mMzA4MCBtZm49ZjdhODAgbnI9NDAKKFhFTikgSFZNNTogcGNp
IGRldiAwNTowIGJhciAxYyBzaXplIDAwMDQwMDAwOiBmMzA4MDAwNAooWEVOKSBIVk01OiBwY2kg
ZGV2IDAyOjAgYmFyIDMwIHNpemUgMDAwMTAwMDA6IGYzMGMwMDAwCihYRU4pIEhWTTU6IHBjaSBk
ZXYgMDQ6MCBiYXIgMzAgc2l6ZSAwMDAxMDAwMDogZjMwZDAwMDAKKFhFTikgbWVtb3J5X21hcDph
ZGQ6IGRvbTUgZ2ZuPWYzMGUwIG1mbj1mN2FjMCBucj0yCihYRU4pIG1lbW9yeV9tYXA6YWRkOiBk
b201IGdmbj1mMzBlMyBtZm49ZjdhYzMgbnI9MQooWEVOKSBIVk01OiBwY2kgZGV2IDA1OjAgYmFy
IDE0IHNpemUgMDAwMDQwMDA6IGYzMGUwMDA0CihYRU4pIEhWTTU6IHBjaSBkZXYgMDI6MCBiYXIg
MTQgc2l6ZSAwMDAwMTAwMDogZjMwZTQwMDAKKFhFTikgSFZNNTogcGNpIGRldiAwMzowIGJhciAx
MCBzaXplIDAwMDAwMTAwOiAwMDAwYzAwMQooWEVOKSBIVk01OiBwY2kgZGV2IDA0OjAgYmFyIDEw
IHNpemUgMDAwMDAxMDA6IDAwMDBjMTAxCihYRU4pIEhWTTU6IHBjaSBkZXYgMDQ6MCBiYXIgMTQg
c2l6ZSAwMDAwMDEwMDogZjMwZTUwMDAKKFhFTikgSFZNNTogcGNpIGRldiAwNTowIGJhciAxMCBz
aXplIDAwMDAwMTAwOiAwMDAwYzIwMQooWEVOKSBpb3BvcnRfbWFwOmFkZDogZG9tNSBncG9ydD1j
MjAwIG1wb3J0PWQwMDAgbnI9MTAwCihYRU4pIEhWTTU6IHBjaSBkZXYgMDE6MSBiYXIgMjAgc2l6
ZSAwMDAwMDAxMDogMDAwMGMzMDEKKFhFTikgSFZNNTogTXVsdGlwcm9jZXNzb3IgaW5pdGlhbGlz
YXRpb246CihYRU4pIEhWTTU6ICAtIENQVTAgLi4uIDM2LWJpdCBwaHlzIC4uLiBmaXhlZCBNVFJS
cyAuLi4gdmFyIE1UUlJzIFsyLzhdIC4uLiBkb25lLgooWEVOKSBIVk01OiAgLSBDUFUxIC4uLiAz
Ni1iaXQgcGh5cyAuLi4gZml4ZWQgTVRSUnMgLi4uIHZhciBNVFJScyBbMi84XSAuLi4gZG9uZS4K
KFhFTikgSFZNNTogVGVzdGluZyBIVk0gZW52aXJvbm1lbnQ6CihYRU4pIEhWTTU6ICAtIFJFUCBJ
TlNCIGFjcm9zcyBwYWdlIGJvdW5kYXJpZXMgLi4uIHBhc3NlZAooWEVOKSBIVk01OiAgLSBHUyBi
YXNlIE1TUnMgYW5kIFNXQVBHUyAuLi4gcGFzc2VkCihYRU4pIEhWTTU6IFBhc3NlZCAyIG9mIDIg
dGVzdHMKKFhFTikgSFZNNTogV3JpdGluZyBTTUJJT1MgdGFibGVzIC4uLgooWEVOKSBIVk01OiBM
b2FkaW5nIFNlYUJJT1MgLi4uCihYRU4pIEhWTTU6IENyZWF0aW5nIE1QIHRhYmxlcyAuLi4KKFhF
TikgSFZNNTogTG9hZGluZyBBQ1BJIC4uLgooWEVOKSBIVk01OiB2bTg2IFRTUyBhdCBmYzAwYTA4
MAooWEVOKSBIVk01OiBCSU9TIG1hcDoKKFhFTikgSFZNNTogIDEwMDAwLTEwMGQzOiBTY3JhdGNo
IHNwYWNlCihYRU4pIEhWTTU6ICBlMDAwMC1mZmZmZjogTWFpbiBCSU9TCihYRU4pIEhWTTU6IEU4
MjAgdGFibGU6CihYRU4pIEhWTTU6ICBbMDBdOiAwMDAwMDAwMDowMDAwMDAwMCAtIDAwMDAwMDAw
OjAwMGEwMDAwOiBSQU0KKFhFTikgSFZNNTogIEhPTEU6IDAwMDAwMDAwOjAwMGEwMDAwIC0gMDAw
MDAwMDA6MDAwZTAwMDAKKFhFTikgSFZNNTogIFswMV06IDAwMDAwMDAwOjAwMGUwMDAwIC0gMDAw
MDAwMDA6MDAxMDAwMDA6IFJFU0VSVkVECihYRU4pIEhWTTU6ICBbMDJdOiAwMDAwMDAwMDowMDEw
MDAwMCAtIDAwMDAwMDAwOjdmODAwMDAwOiBSQU0KKFhFTikgSFZNNTogIEhPTEU6IDAwMDAwMDAw
OjdmODAwMDAwIC0gMDAwMDAwMDA6ZmMwMDAwMDAKKFhFTikgSFZNNTogIFswM106IDAwMDAwMDAw
OmZjMDAwMDAwIC0gMDAwMDAwMDE6MDAwMDAwMDA6IFJFU0VSVkVECihYRU4pIEhWTTU6IEludm9r
aW5nIFNlYUJJT1MgLi4uCihYRU4pIHN0ZHZnYS5jOjE0NzpkNSBlbnRlcmluZyBzdGR2Z2EgYW5k
IGNhY2hpbmcgbW9kZXMKKFhFTikgbWVtb3J5X21hcDphZGQ6IGRvbTUgZ2ZuPWYzMDAwIG1mbj1m
N2EwMCBucj04MAooWEVOKSBtZW1vcnlfbWFwOnJlbW92ZTogZG9tNSBnZm49ZjMwMDAgbWZuPWY3
YTAwIG5yPTgwCihYRU4pIHN0ZHZnYS5jOjE1MTpkNSBsZWF2aW5nIHN0ZHZnYQooWEVOKSBIVk02
OiBIVk0gTG9hZGVyCihYRU4pIEhWTTY6IERldGVjdGVkIFhlbiB2NC4yLXVuc3RhYmxlCihYRU4p
IEhWTTY6IFhlbmJ1cyByaW5ncyBAMHhmZWZmYzAwMCwgZXZlbnQgY2hhbm5lbCA0CihYRU4pIEhW
TTY6IFN5c3RlbSByZXF1ZXN0ZWQgU2VhQklPUwooWEVOKSBIVk02OiBDUFUgc3BlZWQgaXMgMzI5
MyBNSHoKKFhFTikgaXJxLmM6MjcwOiBEb202IFBDSSBsaW5rIDAgY2hhbmdlZCAwIC0+IDUKKFhF
TikgSFZNNjogUENJLUlTQSBsaW5rIDAgcm91dGVkIHRvIElSUTUKKFhFTikgaXJxLmM6MjcwOiBE
b202IFBDSSBsaW5rIDEgY2hhbmdlZCAwIC0+IDEwCihYRU4pIEhWTTY6IFBDSS1JU0EgbGluayAx
IHJvdXRlZCB0byBJUlExMAooWEVOKSBpcnEuYzoyNzA6IERvbTYgUENJIGxpbmsgMiBjaGFuZ2Vk
IDAgLT4gMTEKKFhFTikgSFZNNjogUENJLUlTQSBsaW5rIDIgcm91dGVkIHRvIElSUTExCihYRU4p
IGlycS5jOjI3MDogRG9tNiBQQ0kgbGluayAzIGNoYW5nZWQgMCAtPiA1CihYRU4pIEhWTTY6IFBD
SS1JU0EgbGluayAzIHJvdXRlZCB0byBJUlE1CihYRU4pIEhWTTY6IHBjaSBkZXYgMDE6MyBJTlRB
LT5JUlExMAooWEVOKSBIVk02OiBwY2kgZGV2IDAzOjAgSU5UQS0+SVJRNQooWEVOKSBIVk02OiBw
Y2kgZGV2IDA0OjAgSU5UQS0+SVJRNQooWEVOKSBIVk02OiBwY2kgZGV2IDA1OjAgSU5UQS0+SVJR
MTAKKFhFTikgSFZNNjogcGNpIGRldiAwMjowIGJhciAxMCBzaXplIDAyMDAwMDAwOiBmMDAwMDAw
OAooWEVOKSBIVk02OiBwY2kgZGV2IDAzOjAgYmFyIDE0IHNpemUgMDEwMDAwMDA6IGYyMDAwMDA4
CihYRU4pIEhWTTY6IHBjaSBkZXYgMDU6MCBiYXIgMzAgc2l6ZSAwMDA4MDAwMDogZjMwMDAwMDAK
KFhFTikgbWVtb3J5X21hcDphZGQ6IGRvbTYgZ2ZuPWYzMDgwIG1mbj1mN2E4MCBucj00MAooWEVO
KSBIVk02OiBwY2kgZGV2IDA1OjAgYmFyIDFjIHNpemUgMDAwNDAwMDA6IGYzMDgwMDA0CihYRU4p
IEhWTTY6IHBjaSBkZXYgMDI6MCBiYXIgMzAgc2l6ZSAwMDAxMDAwMDogZjMwYzAwMDAKKFhFTikg
SFZNNjogcGNpIGRldiAwNDowIGJhciAzMCBzaXplIDAwMDEwMDAwOiBmMzBkMDAwMAooWEVOKSBt
ZW1vcnlfbWFwOmFkZDogZG9tNiBnZm49ZjMwZTAgbWZuPWY3YWMwIG5yPTIKKFhFTikgbWVtb3J5
X21hcDphZGQ6IGRvbTYgZ2ZuPWYzMGUzIG1mbj1mN2FjMyBucj0xCihYRU4pIEhWTTY6IHBjaSBk
ZXYgMDU6MCBiYXIgMTQgc2l6ZSAwMDAwNDAwMDogZjMwZTAwMDQKKFhFTikgSFZNNjogcGNpIGRl
diAwMjowIGJhciAxNCBzaXplIDAwMDAxMDAwOiBmMzBlNDAwMAooWEVOKSBIVk02OiBwY2kgZGV2
IDAzOjAgYmFyIDEwIHNpemUgMDAwMDAxMDA6IDAwMDBjMDAxCihYRU4pIEhWTTY6IHBjaSBkZXYg
MDQ6MCBiYXIgMTAgc2l6ZSAwMDAwMDEwMDogMDAwMGMxMDEKKFhFTikgSFZNNjogcGNpIGRldiAw
NDowIGJhciAxNCBzaXplIDAwMDAwMTAwOiBmMzBlNTAwMAooWEVOKSBIVk02OiBwY2kgZGV2IDA1
OjAgYmFyIDEwIHNpemUgMDAwMDAxMDA6IDAwMDBjMjAxCihYRU4pIGlvcG9ydF9tYXA6YWRkOiBk
b202IGdwb3J0PWMyMDAgbXBvcnQ9ZDAwMCBucj0xMDAKKFhFTikgSFZNNjogcGNpIGRldiAwMTox
IGJhciAyMCBzaXplIDAwMDAwMDEwOiAwMDAwYzMwMQooWEVOKSBIVk02OiBNdWx0aXByb2Nlc3Nv
ciBpbml0aWFsaXNhdGlvbjoKKFhFTikgSFZNNjogIC0gQ1BVMCAuLi4gMzYtYml0IHBoeXMgLi4u
IGZpeGVkIE1UUlJzIC4uLiB2YXIgTVRSUnMgWzIvOF0gLi4uIGRvbmUuCihYRU4pIEhWTTY6ICAt
IENQVTEgLi4uIDM2LWJpdCBwaHlzIC4uLiBmaXhlZCBNVFJScyAuLi4gdmFyIE1UUlJzIFsyLzhd
IC4uLiBkb25lLgooWEVOKSBIVk02OiBUZXN0aW5nIEhWTSBlbnZpcm9ubWVudDoKKFhFTikgSFZN
NjogIC0gUkVQIElOU0IgYWNyb3NzIHBhZ2UgYm91bmRhcmllcyAuLi4gcGFzc2VkCihYRU4pIEhW
TTY6ICAtIEdTIGJhc2UgTVNScyBhbmQgU1dBUEdTIC4uLiBwYXNzZWQKKFhFTikgSFZNNjogUGFz
c2VkIDIgb2YgMiB0ZXN0cwooWEVOKSBIVk02OiBXcml0aW5nIFNNQklPUyB0YWJsZXMgLi4uCihY
RU4pIEhWTTY6IExvYWRpbmcgU2VhQklPUyAuLi4KKFhFTikgSFZNNjogQ3JlYXRpbmcgTVAgdGFi
bGVzIC4uLgooWEVOKSBIVk02OiBMb2FkaW5nIEFDUEkgLi4uCihYRU4pIEhWTTY6IHZtODYgVFNT
IGF0IGZjMDBhMDgwCihYRU4pIEhWTTY6IEJJT1MgbWFwOgooWEVOKSBIVk02OiAgMTAwMDAtMTAw
ZDM6IFNjcmF0Y2ggc3BhY2UKKFhFTikgSFZNNjogIGUwMDAwLWZmZmZmOiBNYWluIEJJT1MKKFhF
TikgSFZNNjogRTgyMCB0YWJsZToKKFhFTikgSFZNNjogIFswMF06IDAwMDAwMDAwOjAwMDAwMDAw
IC0gMDAwMDAwMDA6MDAwYTAwMDA6IFJBTQooWEVOKSBIVk02OiAgSE9MRTogMDAwMDAwMDA6MDAw
YTAwMDAgLSAwMDAwMDAwMDowMDBlMDAwMAooWEVOKSBIVk02OiAgWzAxXTogMDAwMDAwMDA6MDAw
ZTAwMDAgLSAwMDAwMDAwMDowMDEwMDAwMDogUkVTRVJWRUQKKFhFTikgSFZNNjogIFswMl06IDAw
MDAwMDAwOjAwMTAwMDAwIC0gMDAwMDAwMDA6N2Y4MDAwMDA6IFJBTQooWEVOKSBIVk02OiAgSE9M
RTogMDAwMDAwMDA6N2Y4MDAwMDAgLSAwMDAwMDAwMDpmYzAwMDAwMAooWEVOKSBIVk02OiAgWzAz
XTogMDAwMDAwMDA6ZmMwMDAwMDAgLSAwMDAwMDAwMTowMDAwMDAwMDogUkVTRVJWRUQKKFhFTikg
SFZNNjogSW52b2tpbmcgU2VhQklPUyAuLi4KKFhFTikgc3RkdmdhLmM6MTQ3OmQ2IGVudGVyaW5n
IHN0ZHZnYSBhbmQgY2FjaGluZyBtb2RlcwooWEVOKSBtZW1vcnlfbWFwOmFkZDogZG9tNiBnZm49
ZjMwMDAgbWZuPWY3YTAwIG5yPTgwCihYRU4pIG1lbW9yeV9tYXA6cmVtb3ZlOiBkb202IGdmbj1m
MzAwMCBtZm49ZjdhMDAgbnI9ODAKKFhFTikgc3RkdmdhLmM6MTUxOmQ2IGxlYXZpbmcgc3Rkdmdh
CihYRU4pIHN0ZHZnYS5jOjE0NzpkNiBlbnRlcmluZyBzdGR2Z2EgYW5kIGNhY2hpbmcgbW9kZXMK
KFhFTikgaXJxLmM6Mzc1OiBEb202IGNhbGxiYWNrIHZpYSBjaGFuZ2VkIHRvIERpcmVjdCBWZWN0
b3IgMHhmMwooWEVOKSBtZW1vcnlfbWFwOnJlbW92ZTogZG9tNiBnZm49ZjMwODAgbWZuPWY3YTgw
IG5yPTQwCihYRU4pIG1lbW9yeV9tYXA6cmVtb3ZlOiBkb202IGdmbj1mMzBlMCBtZm49ZjdhYzAg
bnI9MgooWEVOKSBtZW1vcnlfbWFwOnJlbW92ZTogZG9tNiBnZm49ZjMwZTMgbWZuPWY3YWMzIG5y
PTEKKFhFTikgaW9wb3J0X21hcDpyZW1vdmU6IGRvbTYgZ3BvcnQ9YzIwMCBtcG9ydD1kMDAwIG5y
PTEwMAooWEVOKSBtZW1vcnlfbWFwOmFkZDogZG9tNiBnZm49ZjMwODAgbWZuPWY3YTgwIG5yPTQw
CihYRU4pIG1lbW9yeV9tYXA6YWRkOiBkb202IGdmbj1mMzBlMCBtZm49ZjdhYzAgbnI9MgooWEVO
KSBtZW1vcnlfbWFwOmFkZDogZG9tNiBnZm49ZjMwZTMgbWZuPWY3YWMzIG5yPTEKKFhFTikgaW9w
b3J0X21hcDphZGQ6IGRvbTYgZ3BvcnQ9YzIwMCBtcG9ydD1kMDAwIG5yPTEwMAooWEVOKSBtZW1v
cnlfbWFwOnJlbW92ZTogZG9tNiBnZm49ZjMwODAgbWZuPWY3YTgwIG5yPTQwCihYRU4pIG1lbW9y
eV9tYXA6cmVtb3ZlOiBkb202IGdmbj1mMzBlMCBtZm49ZjdhYzAgbnI9MgooWEVOKSBtZW1vcnlf
bWFwOnJlbW92ZTogZG9tNiBnZm49ZjMwZTMgbWZuPWY3YWMzIG5yPTEKKFhFTikgaW9wb3J0X21h
cDpyZW1vdmU6IGRvbTYgZ3BvcnQ9YzIwMCBtcG9ydD1kMDAwIG5yPTEwMAooWEVOKSBtZW1vcnlf
bWFwOmFkZDogZG9tNiBnZm49ZjMwODAgbWZuPWY3YTgwIG5yPTQwCihYRU4pIG1lbW9yeV9tYXA6
YWRkOiBkb202IGdmbj1mMzBlMCBtZm49ZjdhYzAgbnI9MgooWEVOKSBtZW1vcnlfbWFwOmFkZDog
ZG9tNiBnZm49ZjMwZTMgbWZuPWY3YWMzIG5yPTEKKFhFTikgaW9wb3J0X21hcDphZGQ6IGRvbTYg
Z3BvcnQ9YzIwMCBtcG9ydD1kMDAwIG5yPTEwMAooWEVOKSBtZW1vcnlfbWFwOnJlbW92ZTogZG9t
NiBnZm49ZjMwODAgbWZuPWY3YTgwIG5yPTQwCihYRU4pIG1lbW9yeV9tYXA6cmVtb3ZlOiBkb202
IGdmbj1mMzBlMCBtZm49ZjdhYzAgbnI9MgooWEVOKSBtZW1vcnlfbWFwOnJlbW92ZTogZG9tNiBn
Zm49ZjMwZTMgbWZuPWY3YWMzIG5yPTEKKFhFTikgaW9wb3J0X21hcDpyZW1vdmU6IGRvbTYgZ3Bv
cnQ9YzIwMCBtcG9ydD1kMDAwIG5yPTEwMAooWEVOKSBtZW1vcnlfbWFwOmFkZDogZG9tNiBnZm49
ZjMwODAgbWZuPWY3YTgwIG5yPTQwCihYRU4pIG1lbW9yeV9tYXA6YWRkOiBkb202IGdmbj1mMzBl
MCBtZm49ZjdhYzAgbnI9MgooWEVOKSBtZW1vcnlfbWFwOmFkZDogZG9tNiBnZm49ZjMwZTMgbWZu
PWY3YWMzIG5yPTEKKFhFTikgaW9wb3J0X21hcDphZGQ6IGRvbTYgZ3BvcnQ9YzIwMCBtcG9ydD1k
MDAwIG5yPTEwMAooWEVOKSBtZW1vcnlfbWFwOnJlbW92ZTogZG9tNiBnZm49ZjMwODAgbWZuPWY3
YTgwIG5yPTQwCihYRU4pIG1lbW9yeV9tYXA6cmVtb3ZlOiBkb202IGdmbj1mMzBlMCBtZm49Zjdh
YzAgbnI9MgooWEVOKSBtZW1vcnlfbWFwOnJlbW92ZTogZG9tNiBnZm49ZjMwZTMgbWZuPWY3YWMz
IG5yPTEKKFhFTikgaW9wb3J0X21hcDpyZW1vdmU6IGRvbTYgZ3BvcnQ9YzIwMCBtcG9ydD1kMDAw
IG5yPTEwMAooWEVOKSBtZW1vcnlfbWFwOmFkZDogZG9tNiBnZm49ZjMwODAgbWZuPWY3YTgwIG5y
PTQwCihYRU4pIG1lbW9yeV9tYXA6YWRkOiBkb202IGdmbj1mMzBlMCBtZm49ZjdhYzAgbnI9Mgoo
WEVOKSBtZW1vcnlfbWFwOmFkZDogZG9tNiBnZm49ZjMwZTMgbWZuPWY3YWMzIG5yPTEKKFhFTikg
aW9wb3J0X21hcDphZGQ6IGRvbTYgZ3BvcnQ9YzIwMCBtcG9ydD1kMDAwIG5yPTEwMAooWEVOKSBt
ZW1vcnlfbWFwOnJlbW92ZTogZG9tNiBnZm49ZjMwODAgbWZuPWY3YTgwIG5yPTQwCihYRU4pIG1l
bW9yeV9tYXA6cmVtb3ZlOiBkb202IGdmbj1mMzBlMCBtZm49ZjdhYzAgbnI9MgooWEVOKSBtZW1v
cnlfbWFwOnJlbW92ZTogZG9tNiBnZm49ZjMwZTMgbWZuPWY3YWMzIG5yPTEKKFhFTikgaW9wb3J0
X21hcDpyZW1vdmU6IGRvbTYgZ3BvcnQ9YzIwMCBtcG9ydD1kMDAwIG5yPTEwMAooWEVOKSBtZW1v
cnlfbWFwOmFkZDogZG9tNiBnZm49ZjMwODAgbWZuPWY3YTgwIG5yPTQwCihYRU4pIG1lbW9yeV9t
YXA6YWRkOiBkb202IGdmbj1mMzBlMCBtZm49ZjdhYzAgbnI9MgooWEVOKSBtZW1vcnlfbWFwOmFk
ZDogZG9tNiBnZm49ZjMwZTMgbWZuPWY3YWMzIG5yPTEKKFhFTikgaW9wb3J0X21hcDphZGQ6IGRv
bTYgZ3BvcnQ9YzIwMCBtcG9ydD1kMDAwIG5yPTEwMAooWEVOKSBpcnEuYzoyNzA6IERvbTYgUENJ
IGxpbmsgMCBjaGFuZ2VkIDUgLT4gMAooWEVOKSBpcnEuYzoyNzA6IERvbTYgUENJIGxpbmsgMSBj
aGFuZ2VkIDEwIC0+IDAKKFhFTikgaXJxLmM6MjcwOiBEb202IFBDSSBsaW5rIDIgY2hhbmdlZCAx
MSAtPiAwCihYRU4pIGlycS5jOjI3MDogRG9tNiBQQ0kgbGluayAzIGNoYW5nZWQgNSAtPiAwCihY
RU4pIERFQlVHIGV2dGNobl9iaW5kX3BpcnEgNDA4IHBpcnE9MTggaXJxPTAgZW11aXJxPTEyCihY
RU4pIERFQlVHIGV2dGNobl9iaW5kX3BpcnEgNDA4IHBpcnE9MTggaXJxPTAgZW11aXJxPTEyCihY
RU4pIERFQlVHIGV2dGNobl9iaW5kX3BpcnEgNDA4IHBpcnE9MTkgaXJxPTAgZW11aXJxPTEKKFhF
TikgREVCVUcgZXZ0Y2huX2JpbmRfcGlycSA0MDggcGlycT0xNyBpcnE9MCBlbXVpcnE9OAooWEVO
KSBERUJVRyBldnRjaG5fYmluZF9waXJxIDQwOCBwaXJxPTIxIGlycT0wIGVtdWlycT00CihYRU4p
IERFQlVHIGV2dGNobl9iaW5kX3BpcnEgNDA4IHBpcnE9MjAgaXJxPTAgZW11aXJxPTYKKFhFTikg
REVCVUcgZXZ0Y2huX2JpbmRfcGlycSA0MDggcGlycT01NSBpcnE9LTEgZW11aXJxPS0xCihYRU4p
IERFQlVHIGh2bV9wY2lfbXNpX2Fzc2VydCBwaXJxPTQgaHZtX2RvbWFpbl91c2VfcGlycT0wIGVt
dWlycT0tMQooWEVOKSB2bXNpLmM6MTA4OmQzMjc2NyBVbnN1cHBvcnRlZCBkZWxpdmVyeSBtb2Rl
IDMKKFhFTikgREVCVUcgZXZ0Y2huX2JpbmRfcGlycSA0MDggcGlycT0yMiBpcnE9MCBlbXVpcnE9
NwooWEVOKSBERUJVRyBldnRjaG5fYmluZF9waXJxIDQwOCBwaXJxPTIxIGlycT0wIGVtdWlycT00
Cg==
--bcaec53d5ed7de620704c637f31e
Content-Type: application/octet-stream; name="qemu-dm-solaris.log"
Content-Disposition: attachment; filename="qemu-dm-solaris.log"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_h5cpgkmt4

Y2hhciBkZXZpY2UgcmVkaXJlY3RlZCB0byAvZGV2L3B0cy8yCnhjOiBlcnJvcjogbGludXhfZ250
dGFiX3NldF9tYXhfZ3JhbnRzOiBpb2N0bCBTRVRfTUFYX0dSQU5UUyBmYWlsZWQgKDIyID0gSW52
YWxpZCBhcmd1bWVudCk6IEludGVybmFsIGVycm9yCnhlbiBiZTogcWRpc2stNzY4OiB4Y19nbnR0
YWJfc2V0X21heF9ncmFudHMgZmFpbGVkOiBJbnZhbGlkIGFyZ3VtZW50CnhjOiBlcnJvcjogbGlu
dXhfZ250dGFiX3NldF9tYXhfZ3JhbnRzOiBpb2N0bCBTRVRfTUFYX0dSQU5UUyBmYWlsZWQgKDIy
ID0gSW52YWxpZCBhcmd1bWVudCk6IEludGVybmFsIGVycm9yCnhlbiBiZTogcWRpc2stNTYzMjog
eGNfZ250dGFiX3NldF9tYXhfZ3JhbnRzIGZhaWxlZDogSW52YWxpZCBhcmd1bWVudApbMDA6MDUu
MF0geGVuX3B0X2luaXRmbjogQXNzaWduaW5nIHJlYWwgcGh5c2ljYWwgZGV2aWNlIDAyOjAwLjAg
dG8gZGV2Zm4gMHgyOApbMDA6MDUuMF0geGVuX3B0X3JlZ2lzdGVyX3JlZ2lvbnM6IElPIHJlZ2lv
biAwIHJlZ2lzdGVyZWQgKHNpemU9MHgwMDAwMDEwMCBiYXNlX2FkZHI9MHgwMDAwZDAwMCB0eXBl
OiAweDEpClswMDowNS4wXSB4ZW5fcHRfcmVnaXN0ZXJfcmVnaW9uczogSU8gcmVnaW9uIDEgcmVn
aXN0ZXJlZCAoc2l6ZT0weDAwMDA0MDAwIGJhc2VfYWRkcj0weGY3YWMwMDAwIHR5cGU6IDApClsw
MDowNS4wXSB4ZW5fcHRfcmVnaXN0ZXJfcmVnaW9uczogSU8gcmVnaW9uIDMgcmVnaXN0ZXJlZCAo
c2l6ZT0weDAwMDQwMDAwIGJhc2VfYWRkcj0weGY3YTgwMDAwIHR5cGU6IDApClswMDowNS4wXSB4
ZW5fcHRfcmVnaXN0ZXJfcmVnaW9uczogRXhwYW5zaW9uIFJPTSByZWdpc3RlcmVkIChzaXplPTB4
MDAwODAwMDAgYmFzZV9hZGRyPTB4ZjdhMDAwMDApClswMDowNS4wXSB4ZW5fcHRfbXNpeF9pbml0
OiBnZXQgTVNJLVggdGFibGUgQkFSIGJhc2UgMHhmN2FjMDAwMApbMDA6MDUuMF0geGVuX3B0X21z
aXhfaW5pdDogdGFibGVfb2ZmID0gMHgyMDAwLCB0b3RhbF9lbnRyaWVzID0gMTUKWzAwOjA1LjBd
IHhlbl9wdF9tc2l4X2luaXQ6IG1hcHBpbmcgcGh5c2ljYWwgTVNJLVggdGFibGUgdG8gMHg3ZmM1
Mjg0ODkwMDAKWzAwOjA1LjBdIHhlbl9wdF9wY2lfaW50eDogaW50eD0xClswMDowNS4wXSB4ZW5f
cHRfaW5pdGZuOiBSZWFsIHBoeXNpY2FsIGRldmljZSAwMjowMC4wIHJlZ2lzdGVyZWQgc3VjY2Vz
c2Z1bHkhCg==
--bcaec53d5ed7de620704c637f31e
Content-Type: application/octet-stream; name="solaris_guest_boot.log"
Content-Disposition: attachment; filename="solaris_guest_boot.log"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_h5cpgkmv5

G2Ntb2R1bGUgL3BsYXRmb3JtL2k4NnBjL2tlcm5lbC9hbWQ2NC91bml4OiB0ZXh0IGF0IFsweGZm
ZmZmZmZmZmI4MDAwMDAsIDB4ZmZmZmZmZmZmYjk1ZDA2M10gZGF0YSBhdCAweGZmZmZmZmZmZmJj
MDAwMDANCm1vZHVsZSAva2VybmVsL2FtZDY0L2dlbnVuaXg6IHRleHQgYXQgWzB4ZmZmZmZmZmZm
Yjk1ZDA2OCwgMHhmZmZmZmZmZmZiYmUzNjg3XSBkYXRhIGF0IDB4ZmZmZmZmZmZmYmNhMmY4MA0K
TG9hZGluZyBrbWRiLi4uDQptb2R1bGUgL2tlcm5lbC9taXNjL2FtZDY0L2ttZGJtb2Q6IHRleHQg
YXQgWzB4ZmZmZmZmZmZmYmQxM2RjMCwgMHhmZmZmZmZmZmZiZGMzNjE3XSBkYXRhIGF0IDB4ZmZm
ZmZmZmZmYmRjMzYyMA0KbW9kdWxlIC9rZXJuZWwvbWlzYy9hbWQ2NC9jdGY6IHRleHQgYXQgWzB4
ZmZmZmZmZmZmYmJlMzY4OCwgMHhmZmZmZmZmZmZiYmVkNDBmXSBkYXRhIGF0IDB4ZmZmZmZmZmZm
YmRkZTkzMA0KDQ1TdW5PUyBSZWxlYXNlIDUuMTEgVmVyc2lvbiAxMS4wIDY0LWJpdA0NDQpDb3B5
cmlnaHQgKGMpIDE5ODMsIDIwMTEsIE9yYWNsZSBhbmQvb3IgaXRzIGFmZmlsaWF0ZXMuIEFsbCBy
aWdodHMgcmVzZXJ2ZWQuDQ0NCng4Nl9mZWF0dXJlOiBsZ3BnDQ0NCng4Nl9mZWF0dXJlOiB0c2MN
DQ0KeDg2X2ZlYXR1cmU6IG1zcg0NDQp4ODZfZmVhdHVyZTogbXRycg0NDQp4ODZfZmVhdHVyZTog
cGdlDQ0NCng4Nl9mZWF0dXJlOiBkZQ0NDQp4ODZfZmVhdHVyZTogY21vdg0NDQp4ODZfZmVhdHVy
ZTogbW14DQ0NCng4Nl9mZWF0dXJlOiBtY2ENDQ0KeDg2X2ZlYXR1cmU6IHBhZQ0NDQp4ODZfZmVh
dHVyZTogY3Y4DQ0NCng4Nl9mZWF0dXJlOiBwYXQNDQ0KeDg2X2ZlYXR1cmU6IHNlcA0NDQp4ODZf
ZmVhdHVyZTogc3NlDQ0NCng4Nl9mZWF0dXJlOiBzc2UyDQ0NCng4Nl9mZWF0dXJlOiBodHQNDQ0K
eDg2X2ZlYXR1cmU6IGFzeXNjDQ0NCng4Nl9mZWF0dXJlOiBueA0NDQp4ODZfZmVhdHVyZTogc3Nl
Mw0NDQp4ODZfZmVhdHVyZTogY3gxNg0NDQp4ODZfZmVhdHVyZTogY21wDQ0NCng4Nl9mZWF0dXJl
OiB0c2NwDQ0NCng4Nl9mZWF0dXJlOiBjcHVpZA0NDQp4ODZfZmVhdHVyZTogc3NzZTMNDQ0KeDg2
X2ZlYXR1cmU6IHNzZTRfMQ0NDQp4ODZfZmVhdHVyZTogc3NlNF8yDQ0NCng4Nl9mZWF0dXJlOiBj
bGZzaA0NDQp4ODZfZmVhdHVyZTogNjQNDQ0KeDg2X2ZlYXR1cmU6IGFlcw0NDQp4ODZfZmVhdHVy
ZTogcGNsbXVscWRxDQ0NCm1lbSA9IDIwODg1NjRLICgweDdmNzlkMDAwKQ0NDQpVc2luZyBkZWZh
dWx0IGRldmljZSBpbnN0YW5jZSBkYXRhDQ0NClNNQklPUyB2Mi40IGxvYWRlZCAoMzUzIGJ5dGVz
KXJvb3QgbmV4dXMgPSBpODZwYw0NDQpwc2V1ZG8wIGF0IHJvb3QNDQ0KcHNldWRvMCBpcyAvcHNl
dWRvDQ0NCnNjc2lfdmhjaTAgYXQgcm9vdA0NDQpzY3NpX3ZoY2kwIGlzIC9zY3NpX3ZoY2kNDQ0K
bnBlMCBhdCByb290OiBzcGFjZSAwIG9mZnNldCAwDQ0NCm5wZTAgaXMgL3BjaUAwLDANDQ0KdHJh
cDogVW5rbm93biB0cmFwIHR5cGUgOCBpbiB1c2VyIG1vZGUNDQ0KDQ0NCg0NcGFuaWNbY3B1MF0v
dGhyZWFkPWZmZmZmZmZmZmJjMzZkZTA6IEJBRCBUUkFQOiB0eXBlPWQgKCNncCBHZW5lcmFsIHBy
b3RlY3Rpb24pIHJwPWZmZmZmZmZmZmJjNDg2NjAgYWRkcj1mMDAwZmY1M2YwMDBmZjAwDQ0NCg0N
DQojZ3AgR2VuZXJhbCBwcm90ZWN0aW9uDQ0NCmFkZHI9MHhmMDAwZmY1M2YwMDBmZjAwDQ0NCnBp
ZD0wLCBwYz0weGZmZmZmZmZmZmI4NjVmMWQsIHNwPTB4ZmZmZmZmZmZmYmM0ODc1OCwgZWZsYWdz
PTB4MTAyODYNDQ0KY3IwOiA4MDA1MDAzYjxwZyx3cCxuZSxldCx0cyxtcCxwZT4gY3I0OiA2Yjg8
eG1tZSxmeHNyLHBnZSxwYWUscHNlLGRlPg0NDQpjcjI6IDBjcjM6IGY4ZTYwMDBjcjg6IDANDQ0K
DQ0NCiAgICAgICAgcmRpOiAgICAgICAgICAgICAgICAwIHJzaTogICAgICAgICAgICAgICAgMSBy
ZHg6ICAgICAgICAgICAgICAgNDANDQ0KICAgICAgICByY3g6ICAgICAgICAgICAgICAgIDIgIHI4
OiBmZmZmZmZmZmZiYzQ4ODcwICByOTogICAgICAgICAgICAgICAgMA0NDQogICAgICAgIHJheDog
ZmZmZmZmZmZmYmMzNmRlMCByYng6ICAgICAgICAgICAgICAgIDAgcmJwOiBmZmZmZmZmZmZiYzQ4
N2IwDQ0NCiAgICAgICAgcjEwOiBmZmZmZmZmZmZiODViN2Q4IHIxMTogZjAwMGZmNTNmMDAwZmYw
MCByMTI6ICAgICAgICAgICAgICAgIDANDQ0KICAgICAgICByMTM6IGYwMDBmZjUzZjAwMGZmMDAg
cjE0OiAgICAgICAgICAgICAgICAxIHIxNTogZjAwMGZmNTNmMDAwZmYwMA0NDQogICAgICAgIGZz
YjogICAgICAgIDIwMDAwMDAwMCBnc2I6IGZmZmZmZmZmZmJjM2ViYzAgIGRzOiAgICAgICAgICAg
ICAgICAwDQ0NCiAgICAgICAgIGVzOiAgICAgICAgICAgICAgICAwICBmczogICAgICAgICAgICAg
ICAgMCAgZ3M6ICAgICAgICAgICAgICAgIDANDQ0KICAgICAgICB0cnA6ICAgICAgICAgICAgICAg
IGQgZXJyOiAgICAgICAgICAgICAgICAwIHJpcDogZmZmZmZmZmZmYjg2NWYxZA0NDQogICAgICAg
ICBjczogICAgICAgICAgICAgICAzMCByZmw6ICAgICAgICAgICAgMTAyODYgcnNwOiBmZmZmZmZm
ZmZiYzQ4NzU4DQ0NCiAgICAgICAgIHNzOiAgICAgICAgICAgICAgIDM4DQ0NCg0NDQpXYXJuaW5n
IC0gc3RhY2sgbm90IHdyaXR0ZW4gdG8gdGhlIGR1bXAgYnVmZmVyDQ0NCmZmZmZmZmZmZmJjNDg1
ODAgdW5peDpkaWUrMTMxICgpDQ0NCmZmZmZmZmZmZmJjNDg2NTAgdW5peDp0cmFwKzNiMiAoKQ0N
DQpmZmZmZmZmZmZiYzQ4NjYwIHVuaXg6Y21udHJhcCtlNiAoKQ0NDQpmZmZmZmZmZmZiYzQ4N2Iw
IHVuaXg6bXV0ZXhfb3duZXJfcnVubmluZytkICgpDQ0NCmZmZmZmZmZmZmJjNDg4NDAgZ2VudW5p
eDpkdW1wX29uZV9jb3JlKzZiICgpDQ0NCmZmZmZmZmZmZmJjNDg4ZTAgZ2VudW5peDpjb3JlKzQx
OSAoKQ0NDQpmZmZmZmZmZmZiYzQ4YTEwIHVuaXg6a2Vybl9ncGZhdWx0KzE4OCAoKQ0NDQpmZmZm
ZmZmZmZiYzQ4YWUwIHVuaXg6dHJhcCszOTMgKCkNDQ0KZmZmZmZmZmZmYmM0OGFmMCB1bml4OmNt
bnRyYXArZTYgKCkNDQ0KDQ0NCnBhbmljOiBlbnRlcmluZyBkZWJ1Z2dlciAobm8gZHVtcCBkZXZp
Y2UsIGNvbnRpbnVlIHRvIHJlYm9vdCkNDQ0KDQpXZWxjb21lIHRvIGttZGINCmttZGI6IHVuYWJs
ZSB0byBkZXRlcm1pbmUgdGVybWluYWwgdHlwZTogYXNzdW1pbmcgYHZ0MTAwJw0KGyhCGykwTG9h
ZGVkIG1vZHVsZXM6IFsgc2NzaV92aGNpIG1hYyB1cHBjIHVuaXgga3J0bGQgYXBpeCBnZW51bml4
IHNwZWNmcyBwY3BsdXNtcCBdDQpbMF0+IA==
--bcaec53d5ed7de620704c637f31e
Content-Type: application/octet-stream; name="solaris_guest_xl_dmesg.log"
Content-Disposition: attachment; filename="solaris_guest_xl_dmesg.log"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_h5cpgkmw6

IF9fICBfXyAgICAgICAgICAgIF8gIF8gICAgX19fXyAgICAgICAgICAgICAgICAgICAgIF8gICAg
ICAgIF8gICAgIF8gICAgICAKIFwgXC8gL19fXyBfIF9fICAgfCB8fCB8ICB8X19fIFwgICAgXyAg
IF8gXyBfXyAgX19ffCB8XyBfXyBffCB8X18gfCB8IF9fXyAKICBcICAvLyBfIFwgJ18gXCAgfCB8
fCB8XyAgIF9fKSB8X198IHwgfCB8ICdfIFwvIF9ffCBfXy8gX2AgfCAnXyBcfCB8LyBfIFwKICAv
ICBcICBfXy8gfCB8IHwgfF9fICAgX3wgLyBfXy98X198IHxffCB8IHwgfCBcX18gXCB8fCAoX3wg
fCB8XykgfCB8ICBfXy8KIC9fL1xfXF9fX3xffCB8X3wgICAgfF98KF8pX19fX198ICAgXF9fLF98
X3wgfF98X19fL1xfX1xfXyxffF8uX18vfF98XF9fX3wKICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAKKFhFTikg
WGVuIHZlcnNpb24gNC4yLXVuc3RhYmxlIChkZXJpY2tzb0Boc2QxLmNhLmNvbWNhc3QubmV0KSAo
Z2NjIHZlcnNpb24gNC42LjMgKFVidW50dS9MaW5hcm8gNC42LjMtMXVidW50dTUpICkgVHVlIEp1
bCAzMSAwODo0NzowNCBQRFQgMjAxMgooWEVOKSBMYXRlc3QgQ2hhbmdlU2V0OiBGcmkgSnVsIDI3
IDEyOjIyOjEzIDIwMTIgKzAyMDAgMjU2ODg6ZTYyNjZmYzc2ZDA4CihYRU4pIEJvb3Rsb2FkZXI6
IEdSVUIgMS45OS0yMXVidW50dTMuMQooWEVOKSBDb21tYW5kIGxpbmU6IHBsYWNlaG9sZGVyIGRv
bTBfbWVtPTQwOTZNIHhzYXZlPTAKKFhFTikgVmlkZW8gaW5mb3JtYXRpb246CihYRU4pICBWR0Eg
aXMgdGV4dCBtb2RlIDgweDI1LCBmb250IDh4MTYKKFhFTikgIFZCRS9EREMgbWV0aG9kczogVjI7
IEVESUQgdHJhbnNmZXIgdGltZTogMSBzZWNvbmRzCihYRU4pIERpc2MgaW5mb3JtYXRpb246CihY
RU4pICBGb3VuZCAxIE1CUiBzaWduYXR1cmVzCihYRU4pICBGb3VuZCAyIEVERCBpbmZvcm1hdGlv
biBzdHJ1Y3R1cmVzCihYRU4pIFhlbi1lODIwIFJBTSBtYXA6CihYRU4pICAwMDAwMDAwMDAwMDAw
MDAwIC0gMDAwMDAwMDAwMDA5YzgwMCAodXNhYmxlKQooWEVOKSAgMDAwMDAwMDAwMDA5YzgwMCAt
IDAwMDAwMDAwMDAwYTAwMDAgKHJlc2VydmVkKQooWEVOKSAgMDAwMDAwMDAwMDBlMDAwMCAtIDAw
MDAwMDAwMDAxMDAwMDAgKHJlc2VydmVkKQooWEVOKSAgMDAwMDAwMDAwMDEwMDAwMCAtIDAwMDAw
MDAwZGRkMDcwMDAgKHVzYWJsZSkKKFhFTikgIDAwMDAwMDAwZGRkMDcwMDAgLSAwMDAwMDAwMGRk
ZGJiMDAwIChyZXNlcnZlZCkKKFhFTikgIDAwMDAwMDAwZGRkYmIwMDAgLSAwMDAwMDAwMGRkZGJj
MDAwIChBQ1BJIGRhdGEpCihYRU4pICAwMDAwMDAwMGRkZGJjMDAwIC0gMDAwMDAwMDBkZGVkNzAw
MCAoQUNQSSBOVlMpCihYRU4pICAwMDAwMDAwMGRkZWQ3MDAwIC0gMDAwMDAwMDBkZWY5MjAwMCAo
cmVzZXJ2ZWQpCihYRU4pICAwMDAwMDAwMGRlZjkyMDAwIC0gMDAwMDAwMDBkZWY5MzAwMCAodXNh
YmxlKQooWEVOKSAgMDAwMDAwMDBkZWY5MzAwMCAtIDAwMDAwMDAwZGVmZDYwMDAgKEFDUEkgTlZT
KQooWEVOKSAgMDAwMDAwMDBkZWZkNjAwMCAtIDAwMDAwMDAwZGY4MDAwMDAgKHVzYWJsZSkKKFhF
TikgIDAwMDAwMDAwZjgwMDAwMDAgLSAwMDAwMDAwMGZjMDAwMDAwIChyZXNlcnZlZCkKKFhFTikg
IDAwMDAwMDAwZmVjMDAwMDAgLSAwMDAwMDAwMGZlYzAxMDAwIChyZXNlcnZlZCkKKFhFTikgIDAw
MDAwMDAwZmVkMDAwMDAgLSAwMDAwMDAwMGZlZDA0MDAwIChyZXNlcnZlZCkKKFhFTikgIDAwMDAw
MDAwZmVkMWMwMDAgLSAwMDAwMDAwMGZlZDIwMDAwIChyZXNlcnZlZCkKKFhFTikgIDAwMDAwMDAw
ZmVlMDAwMDAgLSAwMDAwMDAwMGZlZTAxMDAwIChyZXNlcnZlZCkKKFhFTikgIDAwMDAwMDAwZmYw
MDAwMDAgLSAwMDAwMDAwMTAwMDAwMDAwIChyZXNlcnZlZCkKKFhFTikgIDAwMDAwMDAxMDAwMDAw
MDAgLSAwMDAwMDAwNDIwMDAwMDAwICh1c2FibGUpCihYRU4pIEFDUEk6IFJTRFAgMDAwRjA0OTAs
IDAwMjQgKHIyIEFMQVNLQSkKKFhFTikgQUNQSTogWFNEVCBEREVDNzA5MCwgMDA5QyAocjEgQUxB
U0tBICAgIEEgTSBJICAxMDcyMDA5IEFNSSAgICAgMTAwMTMpCihYRU4pIEFDUEk6IEZBQ1AgRERF
RDFDQTAsIDAwRjQgKHI0IEFMQVNLQSAgICBBIE0gSSAgMTA3MjAwOSBBTUkgICAgIDEwMDEzKQoo
WEVOKSBBQ1BJOiBEU0RUIERERUM3MUMwLCBBQURBIChyMiBBTEFTS0EgICAgQSBNIEkgICAgICAg
NkYgSU5UTCAyMDA1MTExNykKKFhFTikgQUNQSTogRkFDUyBEREVENUY4MCwgMDA0MAooWEVOKSBB
Q1BJOiBBUElDIERERUQxRDk4LCAwMDkyIChyMyBBTEFTS0EgICAgQSBNIEkgIDEwNzIwMDkgQU1J
ICAgICAxMDAxMykKKFhFTikgQUNQSTogRlBEVCBEREVEMUUzMCwgMDA0NCAocjEgQUxBU0tBICAg
IEEgTSBJICAxMDcyMDA5IEFNSSAgICAgMTAwMTMpCihYRU4pIEFDUEk6IE1DRkcgRERFRDFFNzgs
IDAwM0MgKHIxIEFMQVNLQSAgICBBIE0gSSAgMTA3MjAwOSBNU0ZUICAgICAgIDk3KQooWEVOKSBB
Q1BJOiBQUkFEIERERUQxRUI4LCAwMEJFIChyMiBQUkFESUQgIFBSQURUSUQgICAgICAgIDEgTVNG
VCAgMzAwMDAwMSkKKFhFTikgQUNQSTogSFBFVCBEREVEMUY3OCwgMDAzOCAocjEgQUxBU0tBICAg
IEEgTSBJICAxMDcyMDA5IEFNSS4gICAgICAgIDUpCihYRU4pIEFDUEk6IFNTRFQgRERFRDFGQjAs
IDAzNkQgKHIxIFNhdGFSZSBTYXRhVGFibCAgICAgMTAwMCBJTlRMIDIwMDkxMTEyKQooWEVOKSBB
Q1BJOiBTUE1JIERERUQyMzIwLCAwMDQwIChyNSBBIE0gSSAgIE9FTVNQTUkgICAgICAgIDAgQU1J
LiAgICAgICAgMCkKKFhFTikgQUNQSTogU1NEVCBEREVEMjM2MCwgMDlBNCAocjEgIFBtUmVmICBD
cHUwSXN0ICAgICAzMDAwIElOVEwgMjAwNTExMTcpCihYRU4pIEFDUEk6IFNTRFQgRERFRDJEMDgs
IDBBODggKHIxICBQbVJlZiAgICBDcHVQbSAgICAgMzAwMCBJTlRMIDIwMDUxMTE3KQooWEVOKSBB
Q1BJOiBETUFSIERERUQzNzkwLCAwMDc4IChyMSBJTlRFTCAgICAgIFNOQiAgICAgICAgIDEgSU5U
TCAgICAgICAgMSkKKFhFTikgQUNQSTogRUlOSiBEREVEMzgwOCwgMDEzMCAocjEgICAgQU1JIEFN
SSBFSU5KICAgICAgICAwICAgICAgICAgICAgIDApCihYRU4pIEFDUEk6IEVSU1QgRERFRDM5Mzgs
IDAyMTAgKHIxICBBTUlFUiBBTUkgRVJTVCAgICAgICAgMCAgICAgICAgICAgICAwKQooWEVOKSBB
Q1BJOiBIRVNUIERERUQzQjQ4LCAwMEE4IChyMSAgICBBTUkgQU1JIEhFU1QgICAgICAgIDAgICAg
ICAgICAgICAgMCkKKFhFTikgQUNQSTogQkVSVCBEREVEM0JGMCwgMDAzMCAocjEgICAgQU1JIEFN
SSBCRVJUICAgICAgICAwICAgICAgICAgICAgIDApCihYRU4pIFN5c3RlbSBSQU06IDE2MzU2TUIg
KDE2NzQ5MzY4a0IpCihYRU4pIE5vIE5VTUEgY29uZmlndXJhdGlvbiBmb3VuZAooWEVOKSBGYWtp
bmcgYSBub2RlIGF0IDAwMDAwMDAwMDAwMDAwMDAtMDAwMDAwMDQyMDAwMDAwMAooWEVOKSBEb21h
aW4gaGVhcCBpbml0aWFsaXNlZAooWEVOKSBmb3VuZCBTTVAgTVAtdGFibGUgYXQgMDAwZmQ3YjAK
KFhFTikgRE1JIDIuNyBwcmVzZW50LgooWEVOKSBVc2luZyBBUElDIGRyaXZlciBkZWZhdWx0CihY
RU4pIEFDUEk6IFBNLVRpbWVyIElPIFBvcnQ6IDB4NDA4CihYRU4pIEFDUEk6IEFDUEkgU0xFRVAg
SU5GTzogcG0xeF9jbnRbNDA0LDBdLCBwbTF4X2V2dFs0MDAsMF0KKFhFTikgQUNQSTogMzIvNjRY
IEZBQ1MgYWRkcmVzcyBtaXNtYXRjaCBpbiBGQURUIC0gZGRlZDVmODAvMDAwMDAwMDAwMDAwMDAw
MCwgdXNpbmcgMzIKKFhFTikgQUNQSTogICAgICAgICAgICAgICAgICB3YWtldXBfdmVjW2RkZWQ1
ZjhjXSwgdmVjX3NpemVbMjBdCihYRU4pIEFDUEk6IExvY2FsIEFQSUMgYWRkcmVzcyAweGZlZTAw
MDAwCihYRU4pIEFDUEk6IExBUElDIChhY3BpX2lkWzB4MDFdIGxhcGljX2lkWzB4MDBdIGVuYWJs
ZWQpCihYRU4pIFByb2Nlc3NvciAjMCA3OjEwIEFQSUMgdmVyc2lvbiAyMQooWEVOKSBBQ1BJOiBM
QVBJQyAoYWNwaV9pZFsweDAyXSBsYXBpY19pZFsweDAyXSBlbmFibGVkKQooWEVOKSBQcm9jZXNz
b3IgIzIgNzoxMCBBUElDIHZlcnNpb24gMjEKKFhFTikgQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgw
M10gbGFwaWNfaWRbMHgwNF0gZW5hYmxlZCkKKFhFTikgUHJvY2Vzc29yICM0IDc6MTAgQVBJQyB2
ZXJzaW9uIDIxCihYRU4pIEFDUEk6IExBUElDIChhY3BpX2lkWzB4MDRdIGxhcGljX2lkWzB4MDZd
IGVuYWJsZWQpCihYRU4pIFByb2Nlc3NvciAjNiA3OjEwIEFQSUMgdmVyc2lvbiAyMQooWEVOKSBB
Q1BJOiBMQVBJQyAoYWNwaV9pZFsweDA1XSBsYXBpY19pZFsweDAxXSBlbmFibGVkKQooWEVOKSBQ
cm9jZXNzb3IgIzEgNzoxMCBBUElDIHZlcnNpb24gMjEKKFhFTikgQUNQSTogTEFQSUMgKGFjcGlf
aWRbMHgwNl0gbGFwaWNfaWRbMHgwM10gZW5hYmxlZCkKKFhFTikgUHJvY2Vzc29yICMzIDc6MTAg
QVBJQyB2ZXJzaW9uIDIxCihYRU4pIEFDUEk6IExBUElDIChhY3BpX2lkWzB4MDddIGxhcGljX2lk
WzB4MDVdIGVuYWJsZWQpCihYRU4pIFByb2Nlc3NvciAjNSA3OjEwIEFQSUMgdmVyc2lvbiAyMQoo
WEVOKSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDA4XSBsYXBpY19pZFsweDA3XSBlbmFibGVkKQoo
WEVOKSBQcm9jZXNzb3IgIzcgNzoxMCBBUElDIHZlcnNpb24gMjEKKFhFTikgQUNQSTogTEFQSUNf
Tk1JIChhY3BpX2lkWzB4ZmZdIGhpZ2ggZWRnZSBsaW50WzB4MV0pCihYRU4pIEFDUEk6IElPQVBJ
QyAoaWRbMHgwMl0gYWRkcmVzc1sweGZlYzAwMDAwXSBnc2lfYmFzZVswXSkKKFhFTikgSU9BUElD
WzBdOiBhcGljX2lkIDIsIHZlcnNpb24gMzIsIGFkZHJlc3MgMHhmZWMwMDAwMCwgR1NJIDAtMjMK
KFhFTikgQUNQSTogSU5UX1NSQ19PVlIgKGJ1cyAwIGJ1c19pcnEgMCBnbG9iYWxfaXJxIDIgZGZs
IGRmbCkKKFhFTikgQUNQSTogSU5UX1NSQ19PVlIgKGJ1cyAwIGJ1c19pcnEgOSBnbG9iYWxfaXJx
IDkgaGlnaCBsZXZlbCkKKFhFTikgQUNQSTogSVJRMCB1c2VkIGJ5IG92ZXJyaWRlLgooWEVOKSBB
Q1BJOiBJUlEyIHVzZWQgYnkgb3ZlcnJpZGUuCihYRU4pIEFDUEk6IElSUTkgdXNlZCBieSBvdmVy
cmlkZS4KKFhFTikgRW5hYmxpbmcgQVBJQyBtb2RlOiAgRmxhdC4gIFVzaW5nIDEgSS9PIEFQSUNz
CihYRU4pIEFDUEk6IEhQRVQgaWQ6IDB4ODA4NmE3MDEgYmFzZTogMHhmZWQwMDAwMAooWEVOKSBF
UlNUIHRhYmxlIGlzIGludmFsaWQKKFhFTikgVXNpbmcgQUNQSSAoTUFEVCkgZm9yIFNNUCBjb25m
aWd1cmF0aW9uIGluZm9ybWF0aW9uCihYRU4pIFNNUDogQWxsb3dpbmcgOCBDUFVzICgwIGhvdHBs
dWcgQ1BVcykKKFhFTikgSVJRIGxpbWl0czogMjQgR1NJLCAxNTI4IE1TSS9NU0ktWAooWEVOKSBT
d2l0Y2hlZCB0byBBUElDIGRyaXZlciB4MmFwaWNfY2x1c3Rlci4KKFhFTikgVXNpbmcgc2NoZWR1
bGVyOiBTTVAgQ3JlZGl0IFNjaGVkdWxlciAoY3JlZGl0KQooWEVOKSBEZXRlY3RlZCAzMjkyLjU3
OCBNSHogcHJvY2Vzc29yLgooWEVOKSBJbml0aW5nIG1lbW9yeSBzaGFyaW5nLgooWEVOKSBtY2Vf
aW50ZWwuYzoxMjM5OiBNQ0EgQ2FwYWJpbGl0eTogQkNBU1QgMSBTRVIgMCBDTUNJIDEgZmlyc3Ri
YW5rIDAgZXh0ZW5kZWQgTUNFIE1TUiAwCihYRU4pIEludGVsIG1hY2hpbmUgY2hlY2sgcmVwb3J0
aW5nIGVuYWJsZWQKKFhFTikgUENJOiBNQ0ZHIGNvbmZpZ3VyYXRpb24gMDogYmFzZSBmODAwMDAw
MCBzZWdtZW50IDAwMDAgYnVzZXMgMDAgLSAzZgooWEVOKSBQQ0k6IE1DRkcgYXJlYSBhdCBmODAw
MDAwMCByZXNlcnZlZCBpbiBFODIwCihYRU4pIFBDSTogVXNpbmcgTUNGRyBmb3Igc2VnbWVudCAw
MDAwIGJ1cyAwMC0zZgooWEVOKSBJbnRlbCBWVC1kIFNub29wIENvbnRyb2wgZW5hYmxlZC4KKFhF
TikgSW50ZWwgVlQtZCBEb20wIERNQSBQYXNzdGhyb3VnaCBub3QgZW5hYmxlZC4KKFhFTikgSW50
ZWwgVlQtZCBRdWV1ZWQgSW52YWxpZGF0aW9uIGVuYWJsZWQuCihYRU4pIEludGVsIFZULWQgSW50
ZXJydXB0IFJlbWFwcGluZyBlbmFibGVkLgooWEVOKSBJbnRlbCBWVC1kIFNoYXJlZCBFUFQgdGFi
bGVzIG5vdCBlbmFibGVkLgooWEVOKSBJL08gdmlydHVhbGlzYXRpb24gZW5hYmxlZAooWEVOKSAg
LSBEb20wIG1vZGU6IFJlbGF4ZWQKKFhFTikgRW5hYmxlZCBkaXJlY3RlZCBFT0kgd2l0aCBpb2Fw
aWNfYWNrX29sZCBvbiEKKFhFTikgRU5BQkxJTkcgSU8tQVBJQyBJUlFzCihYRU4pICAtPiBVc2lu
ZyBvbGQgQUNLIG1ldGhvZAooWEVOKSAuLlRJTUVSOiB2ZWN0b3I9MHhGMCBhcGljMT0wIHBpbjE9
MiBhcGljMj0tMSBwaW4yPS0xCihYRU4pIFRTQyBkZWFkbGluZSB0aW1lciBlbmFibGVkCihYRU4p
IFBsYXRmb3JtIHRpbWVyIGlzIDE0LjMxOE1IeiBIUEVUCihYRU4pIEFsbG9jYXRlZCBjb25zb2xl
IHJpbmcgb2YgNjQgS2lCLgooWEVOKSBWTVg6IFN1cHBvcnRlZCBhZHZhbmNlZCBmZWF0dXJlczoK
KFhFTikgIC0gQVBJQyBNTUlPIGFjY2VzcyB2aXJ0dWFsaXNhdGlvbgooWEVOKSAgLSBBUElDIFRQ
UiBzaGFkb3cKKFhFTikgIC0gRXh0ZW5kZWQgUGFnZSBUYWJsZXMgKEVQVCkKKFhFTikgIC0gVmly
dHVhbC1Qcm9jZXNzb3IgSWRlbnRpZmllcnMgKFZQSUQpCihYRU4pICAtIFZpcnR1YWwgTk1JCihY
RU4pICAtIE1TUiBkaXJlY3QtYWNjZXNzIGJpdG1hcAooWEVOKSAgLSBVbnJlc3RyaWN0ZWQgR3Vl
c3QKKFhFTikgSFZNOiBBU0lEcyBlbmFibGVkLgooWEVOKSBIVk06IFZNWCBlbmFibGVkCihYRU4p
IEhWTTogSGFyZHdhcmUgQXNzaXN0ZWQgUGFnaW5nIChIQVApIGRldGVjdGVkCihYRU4pIEhWTTog
SEFQIHBhZ2Ugc2l6ZXM6IDRrQiwgMk1CCihYRU4pIEJyb3VnaHQgdXAgOCBDUFVzCihYRU4pIEFD
UEkgc2xlZXAgbW9kZXM6IFMzCihYRU4pIG1jaGVja19wb2xsOiBNYWNoaW5lIGNoZWNrIHBvbGxp
bmcgdGltZXIgc3RhcnRlZC4KKFhFTikgKioqIExPQURJTkcgRE9NQUlOIDAgKioqCihYRU4pIGVs
Zl9wYXJzZV9iaW5hcnk6IHBoZHI6IHBhZGRyPTB4MTAwMDAwMCBtZW1zej0weGFjNTAwMAooWEVO
KSBlbGZfcGFyc2VfYmluYXJ5OiBwaGRyOiBwYWRkcj0weDFjMDAwMDAgbWVtc3o9MHhlNjBlMAoo
WEVOKSBlbGZfcGFyc2VfYmluYXJ5OiBwaGRyOiBwYWRkcj0weDFjZTcwMDAgbWVtc3o9MHgxNDQ4
MAooWEVOKSBlbGZfcGFyc2VfYmluYXJ5OiBwaGRyOiBwYWRkcj0weDFjZmMwMDAgbWVtc3o9MHgz
NjIwMDAKKFhFTikgZWxmX3BhcnNlX2JpbmFyeTogbWVtb3J5OiAweDEwMDAwMDAgLT4gMHgyMDVl
MDAwCihYRU4pIGVsZl94ZW5fcGFyc2Vfbm90ZTogR1VFU1RfT1MgPSAibGludXgiCihYRU4pIGVs
Zl94ZW5fcGFyc2Vfbm90ZTogR1VFU1RfVkVSU0lPTiA9ICIyLjYiCihYRU4pIGVsZl94ZW5fcGFy
c2Vfbm90ZTogWEVOX1ZFUlNJT04gPSAieGVuLTMuMCIKKFhFTikgZWxmX3hlbl9wYXJzZV9ub3Rl
OiBWSVJUX0JBU0UgPSAweGZmZmZmZmZmODAwMDAwMDAKKFhFTikgZWxmX3hlbl9wYXJzZV9ub3Rl
OiBFTlRSWSA9IDB4ZmZmZmZmZmY4MWNmYzIwMAooWEVOKSBlbGZfeGVuX3BhcnNlX25vdGU6IEhZ
UEVSQ0FMTF9QQUdFID0gMHhmZmZmZmZmZjgxMDAxMDAwCihYRU4pIGVsZl94ZW5fcGFyc2Vfbm90
ZTogRkVBVFVSRVMgPSAiIXdyaXRhYmxlX3BhZ2VfdGFibGVzfHBhZV9wZ2Rpcl9hYm92ZV80Z2Ii
CihYRU4pIGVsZl94ZW5fcGFyc2Vfbm90ZTogUEFFX01PREUgPSAieWVzIgooWEVOKSBlbGZfeGVu
X3BhcnNlX25vdGU6IExPQURFUiA9ICJnZW5lcmljIgooWEVOKSBlbGZfeGVuX3BhcnNlX25vdGU6
IHVua25vd24geGVuIGVsZiBub3RlICgweGQpCihYRU4pIGVsZl94ZW5fcGFyc2Vfbm90ZTogU1VT
UEVORF9DQU5DRUwgPSAweDEKKFhFTikgZWxmX3hlbl9wYXJzZV9ub3RlOiBIVl9TVEFSVF9MT1cg
PSAweGZmZmY4MDAwMDAwMDAwMDAKKFhFTikgZWxmX3hlbl9wYXJzZV9ub3RlOiBQQUREUl9PRkZT
RVQgPSAweDAKKFhFTikgZWxmX3hlbl9hZGRyX2NhbGNfY2hlY2s6IGFkZHJlc3NlczoKKFhFTikg
ICAgIHZpcnRfYmFzZSAgICAgICAgPSAweGZmZmZmZmZmODAwMDAwMDAKKFhFTikgICAgIGVsZl9w
YWRkcl9vZmZzZXQgPSAweDAKKFhFTikgICAgIHZpcnRfb2Zmc2V0ICAgICAgPSAweGZmZmZmZmZm
ODAwMDAwMDAKKFhFTikgICAgIHZpcnRfa3N0YXJ0ICAgICAgPSAweGZmZmZmZmZmODEwMDAwMDAK
KFhFTikgICAgIHZpcnRfa2VuZCAgICAgICAgPSAweGZmZmZmZmZmODIwNWUwMDAKKFhFTikgICAg
IHZpcnRfZW50cnkgICAgICAgPSAweGZmZmZmZmZmODFjZmMyMDAKKFhFTikgICAgIHAybV9iYXNl
ICAgICAgICAgPSAweGZmZmZmZmZmZmZmZmZmZmYKKFhFTikgIFhlbiAga2VybmVsOiA2NC1iaXQs
IGxzYiwgY29tcGF0MzIKKFhFTikgIERvbTAga2VybmVsOiA2NC1iaXQsIFBBRSwgbHNiLCBwYWRk
ciAweDEwMDAwMDAgLT4gMHgyMDVlMDAwCihYRU4pIFBIWVNJQ0FMIE1FTU9SWSBBUlJBTkdFTUVO
VDoKKFhFTikgIERvbTAgYWxsb2MuOiAgIDAwMDAwMDA0MGMwMDAwMDAtPjAwMDAwMDA0MTAwMDAw
MDAgKDEwMjIzNTcgcGFnZXMgdG8gYmUgYWxsb2NhdGVkKQooWEVOKSAgSW5pdC4gcmFtZGlzazog
MDAwMDAwMDQxZDk5NTAwMC0+MDAwMDAwMDQxZmZmZjIwMAooWEVOKSBWSVJUVUFMIE1FTU9SWSBB
UlJBTkdFTUVOVDoKKFhFTikgIExvYWRlZCBrZXJuZWw6IGZmZmZmZmZmODEwMDAwMDAtPmZmZmZm
ZmZmODIwNWUwMDAKKFhFTikgIEluaXQuIHJhbWRpc2s6IGZmZmZmZmZmODIwNWUwMDAtPmZmZmZm
ZmZmODQ2YzgyMDAKKFhFTikgIFBoeXMtTWFjaCBtYXA6IGZmZmZmZmZmODQ2YzkwMDAtPmZmZmZm
ZmZmODRlYzkwMDAKKFhFTikgIFN0YXJ0IGluZm86ICAgIGZmZmZmZmZmODRlYzkwMDAtPmZmZmZm
ZmZmODRlYzk0YjQKKFhFTikgIFBhZ2UgdGFibGVzOiAgIGZmZmZmZmZmODRlY2EwMDAtPmZmZmZm
ZmZmODRlZjUwMDAKKFhFTikgIEJvb3Qgc3RhY2s6ICAgIGZmZmZmZmZmODRlZjUwMDAtPmZmZmZm
ZmZmODRlZjYwMDAKKFhFTikgIFRPVEFMOiAgICAgICAgIGZmZmZmZmZmODAwMDAwMDAtPmZmZmZm
ZmZmODUwMDAwMDAKKFhFTikgIEVOVFJZIEFERFJFU1M6IGZmZmZmZmZmODFjZmMyMDAKKFhFTikg
RG9tMCBoYXMgbWF4aW11bSA4IFZDUFVzCihYRU4pIGVsZl9sb2FkX2JpbmFyeTogcGhkciAwIGF0
IDB4ZmZmZmZmZmY4MTAwMDAwMCAtPiAweGZmZmZmZmZmODFhYzUwMDAKKFhFTikgZWxmX2xvYWRf
YmluYXJ5OiBwaGRyIDEgYXQgMHhmZmZmZmZmZjgxYzAwMDAwIC0+IDB4ZmZmZmZmZmY4MWNlNjBl
MAooWEVOKSBlbGZfbG9hZF9iaW5hcnk6IHBoZHIgMiBhdCAweGZmZmZmZmZmODFjZTcwMDAgLT4g
MHhmZmZmZmZmZjgxY2ZiNDgwCihYRU4pIGVsZl9sb2FkX2JpbmFyeTogcGhkciAzIGF0IDB4ZmZm
ZmZmZmY4MWNmYzAwMCAtPiAweGZmZmZmZmZmODFkZDIwMDAKKFhFTikgU2NydWJiaW5nIEZyZWUg
UkFNOiAuLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4u
Li4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4u
Li4uLi4uLi4uLi5kb25lLgooWEVOKSBJbml0aWFsIGxvdyBtZW1vcnkgdmlycSB0aHJlc2hvbGQg
c2V0IGF0IDB4NDAwMCBwYWdlcy4KKFhFTikgU3RkLiBMb2dsZXZlbDogQWxsCihYRU4pIEd1ZXN0
IExvZ2xldmVsOiBBbGwKKFhFTikgWGVuIGlzIHJlbGlucXVpc2hpbmcgVkdBIGNvbnNvbGUuCihY
RU4pICoqKiBTZXJpYWwgaW5wdXQgLT4gRE9NMCAodHlwZSAnQ1RSTC1hJyB0aHJlZSB0aW1lcyB0
byBzd2l0Y2ggaW5wdXQgdG8gWGVuKQooWEVOKSBGcmVlZCAyNDBrQiBpbml0IG1lbW9yeS4KKFhF
TikgREVCVUcgZXZ0Y2huX2JpbmRfcGlycSA0MDggcGlycT05IGlycT05IGVtdWlycT00NzI4NTA0
MzIKKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMDowMC4wCihYRU4pIFBDSSBhZGQgZGV2aWNl
IDAwMDA6MDA6MDEuMAooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjAxLjEKKFhFTikgUENJ
IGFkZCBkZXZpY2UgMDAwMDowMDowNi4wCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6MWEu
MAooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjFjLjAKKFhFTikgUENJIGFkZCBkZXZpY2Ug
MDAwMDowMDoxYy42CihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6MWQuMAooWEVOKSBQQ0kg
YWRkIGRldmljZSAwMDAwOjAwOjFlLjAKKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMDoxZi4w
CihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6MWYuMgooWEVOKSBQQ0kgYWRkIGRldmljZSAw
MDAwOjAwOjFmLjMKKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMTowMC4wCihYRU4pIFBDSSBh
ZGQgZGV2aWNlIDAwMDA6MDE6MDAuMQooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAyOjAwLjAK
KFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowNDowMC4wCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAw
MDA6MDU6MDAuMAooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjA2OjAwLjAKKFhFTikgREVCVUcg
ZXZ0Y2huX2JpbmRfcGlycSA0MDggcGlycT0yNzYgaXJxPTI4IGVtdWlycT0tNjA3ODM1MTIxCihY
RU4pIERFQlVHIGV2dGNobl9iaW5kX3BpcnEgNDA4IHBpcnE9MTYgaXJxPTE2IGVtdWlycT0yODE4
MDU1NzEKKFhFTikgREVCVUcgZXZ0Y2huX2JpbmRfcGlycSA0MDggcGlycT04IGlycT04IGVtdWly
cT0wCihYRU4pIERFQlVHIGV2dGNobl9iaW5kX3BpcnEgNDA4IHBpcnE9Mjc1IGlycT0yOSBlbXVp
cnE9MTk5NjUxODc4OQooWEVOKSBERUJVRyBldnRjaG5fYmluZF9waXJxIDQwOCBwaXJxPTI3NCBp
cnE9MzAgZW11aXJxPTEzMjM2OAooWEVOKSBERUJVRyBldnRjaG5fYmluZF9waXJxIDQwOCBwaXJx
PTI3MyBpcnE9MzEgZW11aXJxPTQ5NjU4MTI0OAooWEVOKSBIVk0xOiBIVk0gTG9hZGVyCihYRU4p
IEhWTTE6IERldGVjdGVkIFhlbiB2NC4yLXVuc3RhYmxlCihYRU4pIEhWTTE6IFhlbmJ1cyByaW5n
cyBAMHhmZWZmYzAwMCwgZXZlbnQgY2hhbm5lbCA0CihYRU4pIEhWTTE6IFN5c3RlbSByZXF1ZXN0
ZWQgU2VhQklPUwooWEVOKSBIVk0xOiBDUFUgc3BlZWQgaXMgMzI5MyBNSHoKKFhFTikgaXJxLmM6
MjcwOiBEb20xIFBDSSBsaW5rIDAgY2hhbmdlZCAwIC0+IDUKKFhFTikgSFZNMTogUENJLUlTQSBs
aW5rIDAgcm91dGVkIHRvIElSUTUKKFhFTikgaXJxLmM6MjcwOiBEb20xIFBDSSBsaW5rIDEgY2hh
bmdlZCAwIC0+IDEwCihYRU4pIEhWTTE6IFBDSS1JU0EgbGluayAxIHJvdXRlZCB0byBJUlExMAoo
WEVOKSBpcnEuYzoyNzA6IERvbTEgUENJIGxpbmsgMiBjaGFuZ2VkIDAgLT4gMTEKKFhFTikgSFZN
MTogUENJLUlTQSBsaW5rIDIgcm91dGVkIHRvIElSUTExCihYRU4pIGlycS5jOjI3MDogRG9tMSBQ
Q0kgbGluayAzIGNoYW5nZWQgMCAtPiA1CihYRU4pIEhWTTE6IFBDSS1JU0EgbGluayAzIHJvdXRl
ZCB0byBJUlE1CihYRU4pIEhWTTE6IHBjaSBkZXYgMDE6MyBJTlRBLT5JUlExMAooWEVOKSBIVk0x
OiBwY2kgZGV2IDAzOjAgSU5UQS0+SVJRNQooWEVOKSBIVk0xOiBwY2kgZGV2IDA0OjAgSU5UQS0+
SVJRNQooWEVOKSBIVk0xOiBwY2kgZGV2IDA1OjAgSU5UQS0+SVJRMTAKKFhFTikgSFZNMTogcGNp
IGRldiAwMjowIGJhciAxMCBzaXplIDAyMDAwMDAwOiBmMDAwMDAwOAooWEVOKSBIVk0xOiBwY2kg
ZGV2IDAzOjAgYmFyIDE0IHNpemUgMDEwMDAwMDA6IGYyMDAwMDA4CihYRU4pIEhWTTE6IHBjaSBk
ZXYgMDU6MCBiYXIgMzAgc2l6ZSAwMDA4MDAwMDogZjMwMDAwMDAKKFhFTikgbWVtb3J5X21hcDph
ZGQ6IGRvbTEgZ2ZuPWYzMDgwIG1mbj1mN2E4MCBucj00MAooWEVOKSBIVk0xOiBwY2kgZGV2IDA1
OjAgYmFyIDFjIHNpemUgMDAwNDAwMDA6IGYzMDgwMDA0CihYRU4pIEhWTTE6IHBjaSBkZXYgMDI6
MCBiYXIgMzAgc2l6ZSAwMDAxMDAwMDogZjMwYzAwMDAKKFhFTikgSFZNMTogcGNpIGRldiAwNDow
IGJhciAzMCBzaXplIDAwMDEwMDAwOiBmMzBkMDAwMAooWEVOKSBtZW1vcnlfbWFwOmFkZDogZG9t
MSBnZm49ZjMwZTAgbWZuPWY3YWMwIG5yPTIKKFhFTikgbWVtb3J5X21hcDphZGQ6IGRvbTEgZ2Zu
PWYzMGUzIG1mbj1mN2FjMyBucj0xCihYRU4pIEhWTTE6IHBjaSBkZXYgMDU6MCBiYXIgMTQgc2l6
ZSAwMDAwNDAwMDogZjMwZTAwMDQKKFhFTikgSFZNMTogcGNpIGRldiAwMjowIGJhciAxNCBzaXpl
IDAwMDAxMDAwOiBmMzBlNDAwMAooWEVOKSBIVk0xOiBwY2kgZGV2IDAzOjAgYmFyIDEwIHNpemUg
MDAwMDAxMDA6IDAwMDBjMDAxCihYRU4pIEhWTTE6IHBjaSBkZXYgMDQ6MCBiYXIgMTAgc2l6ZSAw
MDAwMDEwMDogMDAwMGMxMDEKKFhFTikgSFZNMTogcGNpIGRldiAwNDowIGJhciAxNCBzaXplIDAw
MDAwMTAwOiBmMzBlNTAwMAooWEVOKSBIVk0xOiBwY2kgZGV2IDA1OjAgYmFyIDEwIHNpemUgMDAw
MDAxMDA6IDAwMDBjMjAxCihYRU4pIGlvcG9ydF9tYXA6YWRkOiBkb20xIGdwb3J0PWMyMDAgbXBv
cnQ9ZDAwMCBucj0xMDAKKFhFTikgSFZNMTogcGNpIGRldiAwMToxIGJhciAyMCBzaXplIDAwMDAw
MDEwOiAwMDAwYzMwMQooWEVOKSBIVk0xOiBNdWx0aXByb2Nlc3NvciBpbml0aWFsaXNhdGlvbjoK
KFhFTikgSFZNMTogIC0gQ1BVMCAuLi4gMzYtYml0IHBoeXMgLi4uIGZpeGVkIE1UUlJzIC4uLiB2
YXIgTVRSUnMgWzIvOF0gLi4uIGRvbmUuCihYRU4pIEhWTTE6ICAtIENQVTEgLi4uIDM2LWJpdCBw
aHlzIC4uLiBmaXhlZCBNVFJScyAuLi4gdmFyIE1UUlJzIFsyLzhdIC4uLiBkb25lLgooWEVOKSBI
Vk0xOiBUZXN0aW5nIEhWTSBlbnZpcm9ubWVudDoKKFhFTikgSFZNMTogIC0gUkVQIElOU0IgYWNy
b3NzIHBhZ2UgYm91bmRhcmllcyAuLi4gcGFzc2VkCihYRU4pIEhWTTE6ICAtIEdTIGJhc2UgTVNS
cyBhbmQgU1dBUEdTIC4uLiBwYXNzZWQKKFhFTikgSFZNMTogUGFzc2VkIDIgb2YgMiB0ZXN0cwoo
WEVOKSBIVk0xOiBXcml0aW5nIFNNQklPUyB0YWJsZXMgLi4uCihYRU4pIEhWTTE6IExvYWRpbmcg
U2VhQklPUyAuLi4KKFhFTikgSFZNMTogQ3JlYXRpbmcgTVAgdGFibGVzIC4uLgooWEVOKSBIVk0x
OiBMb2FkaW5nIEFDUEkgLi4uCihYRU4pIEhWTTE6IHZtODYgVFNTIGF0IGZjMDBhMDgwCihYRU4p
IEhWTTE6IEJJT1MgbWFwOgooWEVOKSBIVk0xOiAgMTAwMDAtMTAwZDM6IFNjcmF0Y2ggc3BhY2UK
KFhFTikgSFZNMTogIGUwMDAwLWZmZmZmOiBNYWluIEJJT1MKKFhFTikgSFZNMTogRTgyMCB0YWJs
ZToKKFhFTikgSFZNMTogIFswMF06IDAwMDAwMDAwOjAwMDAwMDAwIC0gMDAwMDAwMDA6MDAwYTAw
MDA6IFJBTQooWEVOKSBIVk0xOiAgSE9MRTogMDAwMDAwMDA6MDAwYTAwMDAgLSAwMDAwMDAwMDow
MDBlMDAwMAooWEVOKSBIVk0xOiAgWzAxXTogMDAwMDAwMDA6MDAwZTAwMDAgLSAwMDAwMDAwMDow
MDEwMDAwMDogUkVTRVJWRUQKKFhFTikgSFZNMTogIFswMl06IDAwMDAwMDAwOjAwMTAwMDAwIC0g
MDAwMDAwMDA6N2Y4MDAwMDA6IFJBTQooWEVOKSBIVk0xOiAgSE9MRTogMDAwMDAwMDA6N2Y4MDAw
MDAgLSAwMDAwMDAwMDpmYzAwMDAwMAooWEVOKSBIVk0xOiAgWzAzXTogMDAwMDAwMDA6ZmMwMDAw
MDAgLSAwMDAwMDAwMTowMDAwMDAwMDogUkVTRVJWRUQKKFhFTikgSFZNMTogSW52b2tpbmcgU2Vh
QklPUyAuLi4KKFhFTikgc3RkdmdhLmM6MTQ3OmQxIGVudGVyaW5nIHN0ZHZnYSBhbmQgY2FjaGlu
ZyBtb2RlcwooWEVOKSBtZW1vcnlfbWFwOmFkZDogZG9tMSBnZm49ZjMwMDAgbWZuPWY3YTAwIG5y
PTgwCihYRU4pIG1lbW9yeV9tYXA6cmVtb3ZlOiBkb20xIGdmbj1mMzAwMCBtZm49ZjdhMDAgbnI9
ODAKKFhFTikgSFZNMjogSFZNIExvYWRlcgooWEVOKSBIVk0yOiBEZXRlY3RlZCBYZW4gdjQuMi11
bnN0YWJsZQooWEVOKSBIVk0yOiBYZW5idXMgcmluZ3MgQDB4ZmVmZmMwMDAsIGV2ZW50IGNoYW5u
ZWwgNAooWEVOKSBIVk0yOiBTeXN0ZW0gcmVxdWVzdGVkIFNlYUJJT1MKKFhFTikgSFZNMjogQ1BV
IHNwZWVkIGlzIDMyOTMgTUh6CihYRU4pIGlycS5jOjI3MDogRG9tMiBQQ0kgbGluayAwIGNoYW5n
ZWQgMCAtPiA1CihYRU4pIEhWTTI6IFBDSS1JU0EgbGluayAwIHJvdXRlZCB0byBJUlE1CihYRU4p
IGlycS5jOjI3MDogRG9tMiBQQ0kgbGluayAxIGNoYW5nZWQgMCAtPiAxMAooWEVOKSBIVk0yOiBQ
Q0ktSVNBIGxpbmsgMSByb3V0ZWQgdG8gSVJRMTAKKFhFTikgaXJxLmM6MjcwOiBEb20yIFBDSSBs
aW5rIDIgY2hhbmdlZCAwIC0+IDExCihYRU4pIEhWTTI6IFBDSS1JU0EgbGluayAyIHJvdXRlZCB0
byBJUlExMQooWEVOKSBpcnEuYzoyNzA6IERvbTIgUENJIGxpbmsgMyBjaGFuZ2VkIDAgLT4gNQoo
WEVOKSBIVk0yOiBQQ0ktSVNBIGxpbmsgMyByb3V0ZWQgdG8gSVJRNQooWEVOKSBIVk0yOiBwY2kg
ZGV2IDAxOjMgSU5UQS0+SVJRMTAKKFhFTikgSFZNMjogcGNpIGRldiAwMzowIElOVEEtPklSUTUK
KFhFTikgSFZNMjogcGNpIGRldiAwNDowIElOVEEtPklSUTUKKFhFTikgSFZNMjogcGNpIGRldiAw
NTowIElOVEEtPklSUTEwCihYRU4pIEhWTTI6IHBjaSBkZXYgMDI6MCBiYXIgMTAgc2l6ZSAwMjAw
MDAwMDogZjAwMDAwMDgKKFhFTikgSFZNMjogcGNpIGRldiAwMzowIGJhciAxNCBzaXplIDAxMDAw
MDAwOiBmMjAwMDAwOAooWEVOKSBIVk0yOiBwY2kgZGV2IDA1OjAgYmFyIDMwIHNpemUgMDAwODAw
MDA6IGYzMDAwMDAwCihYRU4pIG1lbW9yeV9tYXA6YWRkOiBkb20yIGdmbj1mMzA4MCBtZm49Zjdh
ODAgbnI9NDAKKFhFTikgSFZNMjogcGNpIGRldiAwNTowIGJhciAxYyBzaXplIDAwMDQwMDAwOiBm
MzA4MDAwNAooWEVOKSBIVk0yOiBwY2kgZGV2IDAyOjAgYmFyIDMwIHNpemUgMDAwMTAwMDA6IGYz
MGMwMDAwCihYRU4pIEhWTTI6IHBjaSBkZXYgMDQ6MCBiYXIgMzAgc2l6ZSAwMDAxMDAwMDogZjMw
ZDAwMDAKKFhFTikgbWVtb3J5X21hcDphZGQ6IGRvbTIgZ2ZuPWYzMGUwIG1mbj1mN2FjMCBucj0y
CihYRU4pIG1lbW9yeV9tYXA6YWRkOiBkb20yIGdmbj1mMzBlMyBtZm49ZjdhYzMgbnI9MQooWEVO
KSBIVk0yOiBwY2kgZGV2IDA1OjAgYmFyIDE0IHNpemUgMDAwMDQwMDA6IGYzMGUwMDA0CihYRU4p
IEhWTTI6IHBjaSBkZXYgMDI6MCBiYXIgMTQgc2l6ZSAwMDAwMTAwMDogZjMwZTQwMDAKKFhFTikg
SFZNMjogcGNpIGRldiAwMzowIGJhciAxMCBzaXplIDAwMDAwMTAwOiAwMDAwYzAwMQooWEVOKSBI
Vk0yOiBwY2kgZGV2IDA0OjAgYmFyIDEwIHNpemUgMDAwMDAxMDA6IDAwMDBjMTAxCihYRU4pIEhW
TTI6IHBjaSBkZXYgMDQ6MCBiYXIgMTQgc2l6ZSAwMDAwMDEwMDogZjMwZTUwMDAKKFhFTikgSFZN
MjogcGNpIGRldiAwNTowIGJhciAxMCBzaXplIDAwMDAwMTAwOiAwMDAwYzIwMQooWEVOKSBpb3Bv
cnRfbWFwOmFkZDogZG9tMiBncG9ydD1jMjAwIG1wb3J0PWQwMDAgbnI9MTAwCihYRU4pIEhWTTI6
IHBjaSBkZXYgMDE6MSBiYXIgMjAgc2l6ZSAwMDAwMDAxMDogMDAwMGMzMDEKKFhFTikgSFZNMjog
TXVsdGlwcm9jZXNzb3IgaW5pdGlhbGlzYXRpb246CihYRU4pIEhWTTI6ICAtIENQVTAgLi4uIDM2
LWJpdCBwaHlzIC4uLiBmaXhlZCBNVFJScyAuLi4gdmFyIE1UUlJzIFsyLzhdIC4uLiBkb25lLgoo
WEVOKSBIVk0yOiAgLSBDUFUxIC4uLiAzNi1iaXQgcGh5cyAuLi4gZml4ZWQgTVRSUnMgLi4uIHZh
ciBNVFJScyBbMi84XSAuLi4gZG9uZS4KKFhFTikgSFZNMjogVGVzdGluZyBIVk0gZW52aXJvbm1l
bnQ6CihYRU4pIEhWTTI6ICAtIFJFUCBJTlNCIGFjcm9zcyBwYWdlIGJvdW5kYXJpZXMgLi4uIHBh
c3NlZAooWEVOKSBIVk0yOiAgLSBHUyBiYXNlIE1TUnMgYW5kIFNXQVBHUyAuLi4gcGFzc2VkCihY
RU4pIEhWTTI6IFBhc3NlZCAyIG9mIDIgdGVzdHMKKFhFTikgSFZNMjogV3JpdGluZyBTTUJJT1Mg
dGFibGVzIC4uLgooWEVOKSBIVk0yOiBMb2FkaW5nIFNlYUJJT1MgLi4uCihYRU4pIEhWTTI6IENy
ZWF0aW5nIE1QIHRhYmxlcyAuLi4KKFhFTikgSFZNMjogTG9hZGluZyBBQ1BJIC4uLgooWEVOKSBI
Vk0yOiB2bTg2IFRTUyBhdCBmYzAwYTA4MAooWEVOKSBIVk0yOiBCSU9TIG1hcDoKKFhFTikgSFZN
MjogIDEwMDAwLTEwMGQzOiBTY3JhdGNoIHNwYWNlCihYRU4pIEhWTTI6ICBlMDAwMC1mZmZmZjog
TWFpbiBCSU9TCihYRU4pIEhWTTI6IEU4MjAgdGFibGU6CihYRU4pIEhWTTI6ICBbMDBdOiAwMDAw
MDAwMDowMDAwMDAwMCAtIDAwMDAwMDAwOjAwMGEwMDAwOiBSQU0KKFhFTikgSFZNMjogIEhPTEU6
IDAwMDAwMDAwOjAwMGEwMDAwIC0gMDAwMDAwMDA6MDAwZTAwMDAKKFhFTikgSFZNMjogIFswMV06
IDAwMDAwMDAwOjAwMGUwMDAwIC0gMDAwMDAwMDA6MDAxMDAwMDA6IFJFU0VSVkVECihYRU4pIEhW
TTI6ICBbMDJdOiAwMDAwMDAwMDowMDEwMDAwMCAtIDAwMDAwMDAwOjdmODAwMDAwOiBSQU0KKFhF
TikgSFZNMjogIEhPTEU6IDAwMDAwMDAwOjdmODAwMDAwIC0gMDAwMDAwMDA6ZmMwMDAwMDAKKFhF
TikgSFZNMjogIFswM106IDAwMDAwMDAwOmZjMDAwMDAwIC0gMDAwMDAwMDE6MDAwMDAwMDA6IFJF
U0VSVkVECihYRU4pIEhWTTI6IEludm9raW5nIFNlYUJJT1MgLi4uCihYRU4pIHN0ZHZnYS5jOjE0
NzpkMiBlbnRlcmluZyBzdGR2Z2EgYW5kIGNhY2hpbmcgbW9kZXMKKFhFTikgbWVtb3J5X21hcDph
ZGQ6IGRvbTIgZ2ZuPWYzMDAwIG1mbj1mN2EwMCBucj04MAooWEVOKSBtZW1vcnlfbWFwOnJlbW92
ZTogZG9tMiBnZm49ZjMwMDAgbWZuPWY3YTAwIG5yPTgwCihYRU4pIHN0ZHZnYS5jOjE1MTpkMiBs
ZWF2aW5nIHN0ZHZnYQooWEVOKSBIVk0zOiBIVk0gTG9hZGVyCihYRU4pIEhWTTM6IERldGVjdGVk
IFhlbiB2NC4yLXVuc3RhYmxlCihYRU4pIEhWTTM6IFhlbmJ1cyByaW5ncyBAMHhmZWZmYzAwMCwg
ZXZlbnQgY2hhbm5lbCA0CihYRU4pIEhWTTM6IFN5c3RlbSByZXF1ZXN0ZWQgU2VhQklPUwooWEVO
KSBIVk0zOiBDUFUgc3BlZWQgaXMgMzI5MyBNSHoKKFhFTikgaXJxLmM6MjcwOiBEb20zIFBDSSBs
aW5rIDAgY2hhbmdlZCAwIC0+IDUKKFhFTikgSFZNMzogUENJLUlTQSBsaW5rIDAgcm91dGVkIHRv
IElSUTUKKFhFTikgaXJxLmM6MjcwOiBEb20zIFBDSSBsaW5rIDEgY2hhbmdlZCAwIC0+IDEwCihY
RU4pIEhWTTM6IFBDSS1JU0EgbGluayAxIHJvdXRlZCB0byBJUlExMAooWEVOKSBpcnEuYzoyNzA6
IERvbTMgUENJIGxpbmsgMiBjaGFuZ2VkIDAgLT4gMTEKKFhFTikgSFZNMzogUENJLUlTQSBsaW5r
IDIgcm91dGVkIHRvIElSUTExCihYRU4pIGlycS5jOjI3MDogRG9tMyBQQ0kgbGluayAzIGNoYW5n
ZWQgMCAtPiA1CihYRU4pIEhWTTM6IFBDSS1JU0EgbGluayAzIHJvdXRlZCB0byBJUlE1CihYRU4p
IEhWTTM6IHBjaSBkZXYgMDE6MyBJTlRBLT5JUlExMAooWEVOKSBIVk0zOiBwY2kgZGV2IDAzOjAg
SU5UQS0+SVJRNQooWEVOKSBIVk0zOiBwY2kgZGV2IDA0OjAgSU5UQS0+SVJRNQooWEVOKSBIVk0z
OiBwY2kgZGV2IDA1OjAgSU5UQS0+SVJRMTAKKFhFTikgSFZNMzogcGNpIGRldiAwMjowIGJhciAx
MCBzaXplIDAyMDAwMDAwOiBmMDAwMDAwOAooWEVOKSBIVk0zOiBwY2kgZGV2IDAzOjAgYmFyIDE0
IHNpemUgMDEwMDAwMDA6IGYyMDAwMDA4CihYRU4pIEhWTTM6IHBjaSBkZXYgMDU6MCBiYXIgMzAg
c2l6ZSAwMDA4MDAwMDogZjMwMDAwMDAKKFhFTikgbWVtb3J5X21hcDphZGQ6IGRvbTMgZ2ZuPWYz
MDgwIG1mbj1mN2E4MCBucj00MAooWEVOKSBIVk0zOiBwY2kgZGV2IDA1OjAgYmFyIDFjIHNpemUg
MDAwNDAwMDA6IGYzMDgwMDA0CihYRU4pIEhWTTM6IHBjaSBkZXYgMDI6MCBiYXIgMzAgc2l6ZSAw
MDAxMDAwMDogZjMwYzAwMDAKKFhFTikgSFZNMzogcGNpIGRldiAwNDowIGJhciAzMCBzaXplIDAw
MDEwMDAwOiBmMzBkMDAwMAooWEVOKSBtZW1vcnlfbWFwOmFkZDogZG9tMyBnZm49ZjMwZTAgbWZu
PWY3YWMwIG5yPTIKKFhFTikgbWVtb3J5X21hcDphZGQ6IGRvbTMgZ2ZuPWYzMGUzIG1mbj1mN2Fj
MyBucj0xCihYRU4pIEhWTTM6IHBjaSBkZXYgMDU6MCBiYXIgMTQgc2l6ZSAwMDAwNDAwMDogZjMw
ZTAwMDQKKFhFTikgSFZNMzogcGNpIGRldiAwMjowIGJhciAxNCBzaXplIDAwMDAxMDAwOiBmMzBl
NDAwMAooWEVOKSBIVk0zOiBwY2kgZGV2IDAzOjAgYmFyIDEwIHNpemUgMDAwMDAxMDA6IDAwMDBj
MDAxCihYRU4pIEhWTTM6IHBjaSBkZXYgMDQ6MCBiYXIgMTAgc2l6ZSAwMDAwMDEwMDogMDAwMGMx
MDEKKFhFTikgSFZNMzogcGNpIGRldiAwNDowIGJhciAxNCBzaXplIDAwMDAwMTAwOiBmMzBlNTAw
MAooWEVOKSBIVk0zOiBwY2kgZGV2IDA1OjAgYmFyIDEwIHNpemUgMDAwMDAxMDA6IDAwMDBjMjAx
CihYRU4pIGlvcG9ydF9tYXA6YWRkOiBkb20zIGdwb3J0PWMyMDAgbXBvcnQ9ZDAwMCBucj0xMDAK
KFhFTikgSFZNMzogcGNpIGRldiAwMToxIGJhciAyMCBzaXplIDAwMDAwMDEwOiAwMDAwYzMwMQoo
WEVOKSBIVk0zOiBNdWx0aXByb2Nlc3NvciBpbml0aWFsaXNhdGlvbjoKKFhFTikgSFZNMzogIC0g
Q1BVMCAuLi4gMzYtYml0IHBoeXMgLi4uIGZpeGVkIE1UUlJzIC4uLiB2YXIgTVRSUnMgWzIvOF0g
Li4uIGRvbmUuCihYRU4pIEhWTTM6ICAtIENQVTEgLi4uIDM2LWJpdCBwaHlzIC4uLiBmaXhlZCBN
VFJScyAuLi4gdmFyIE1UUlJzIFsyLzhdIC4uLiBkb25lLgooWEVOKSBIVk0zOiBUZXN0aW5nIEhW
TSBlbnZpcm9ubWVudDoKKFhFTikgSFZNMzogIC0gUkVQIElOU0IgYWNyb3NzIHBhZ2UgYm91bmRh
cmllcyAuLi4gcGFzc2VkCihYRU4pIEhWTTM6ICAtIEdTIGJhc2UgTVNScyBhbmQgU1dBUEdTIC4u
LiBwYXNzZWQKKFhFTikgSFZNMzogUGFzc2VkIDIgb2YgMiB0ZXN0cwooWEVOKSBIVk0zOiBXcml0
aW5nIFNNQklPUyB0YWJsZXMgLi4uCihYRU4pIEhWTTM6IExvYWRpbmcgU2VhQklPUyAuLi4KKFhF
TikgSFZNMzogQ3JlYXRpbmcgTVAgdGFibGVzIC4uLgooWEVOKSBIVk0zOiBMb2FkaW5nIEFDUEkg
Li4uCihYRU4pIEhWTTM6IHZtODYgVFNTIGF0IGZjMDBhMDgwCihYRU4pIEhWTTM6IEJJT1MgbWFw
OgooWEVOKSBIVk0zOiAgMTAwMDAtMTAwZDM6IFNjcmF0Y2ggc3BhY2UKKFhFTikgSFZNMzogIGUw
MDAwLWZmZmZmOiBNYWluIEJJT1MKKFhFTikgSFZNMzogRTgyMCB0YWJsZToKKFhFTikgSFZNMzog
IFswMF06IDAwMDAwMDAwOjAwMDAwMDAwIC0gMDAwMDAwMDA6MDAwYTAwMDA6IFJBTQooWEVOKSBI
Vk0zOiAgSE9MRTogMDAwMDAwMDA6MDAwYTAwMDAgLSAwMDAwMDAwMDowMDBlMDAwMAooWEVOKSBI
Vk0zOiAgWzAxXTogMDAwMDAwMDA6MDAwZTAwMDAgLSAwMDAwMDAwMDowMDEwMDAwMDogUkVTRVJW
RUQKKFhFTikgSFZNMzogIFswMl06IDAwMDAwMDAwOjAwMTAwMDAwIC0gMDAwMDAwMDA6N2Y4MDAw
MDA6IFJBTQooWEVOKSBIVk0zOiAgSE9MRTogMDAwMDAwMDA6N2Y4MDAwMDAgLSAwMDAwMDAwMDpm
YzAwMDAwMAooWEVOKSBIVk0zOiAgWzAzXTogMDAwMDAwMDA6ZmMwMDAwMDAgLSAwMDAwMDAwMTow
MDAwMDAwMDogUkVTRVJWRUQKKFhFTikgSFZNMzogSW52b2tpbmcgU2VhQklPUyAuLi4KKFhFTikg
c3RkdmdhLmM6MTQ3OmQzIGVudGVyaW5nIHN0ZHZnYSBhbmQgY2FjaGluZyBtb2RlcwooWEVOKSBt
ZW1vcnlfbWFwOmFkZDogZG9tMyBnZm49ZjMwMDAgbWZuPWY3YTAwIG5yPTgwCihYRU4pIG1lbW9y
eV9tYXA6cmVtb3ZlOiBkb20zIGdmbj1mMzAwMCBtZm49ZjdhMDAgbnI9ODAKKFhFTikgc3Rkdmdh
LmM6MTUxOmQzIGxlYXZpbmcgc3RkdmdhCihYRU4pIHN0ZHZnYS5jOjE0NzpkMyBlbnRlcmluZyBz
dGR2Z2EgYW5kIGNhY2hpbmcgbW9kZXMKKFhFTikgaXJxLmM6Mzc1OiBEb20zIGNhbGxiYWNrIHZp
YSBjaGFuZ2VkIHRvIERpcmVjdCBWZWN0b3IgMHhmMwooWEVOKSBtZW1vcnlfbWFwOnJlbW92ZTog
ZG9tMyBnZm49ZjMwODAgbWZuPWY3YTgwIG5yPTQwCihYRU4pIG1lbW9yeV9tYXA6cmVtb3ZlOiBk
b20zIGdmbj1mMzBlMCBtZm49ZjdhYzAgbnI9MgooWEVOKSBtZW1vcnlfbWFwOnJlbW92ZTogZG9t
MyBnZm49ZjMwZTMgbWZuPWY3YWMzIG5yPTEKKFhFTikgaW9wb3J0X21hcDpyZW1vdmU6IGRvbTMg
Z3BvcnQ9YzIwMCBtcG9ydD1kMDAwIG5yPTEwMAooWEVOKSBtZW1vcnlfbWFwOmFkZDogZG9tMyBn
Zm49ZjMwODAgbWZuPWY3YTgwIG5yPTQwCihYRU4pIG1lbW9yeV9tYXA6YWRkOiBkb20zIGdmbj1m
MzBlMCBtZm49ZjdhYzAgbnI9MgooWEVOKSBtZW1vcnlfbWFwOmFkZDogZG9tMyBnZm49ZjMwZTMg
bWZuPWY3YWMzIG5yPTEKKFhFTikgaW9wb3J0X21hcDphZGQ6IGRvbTMgZ3BvcnQ9YzIwMCBtcG9y
dD1kMDAwIG5yPTEwMAooWEVOKSBtZW1vcnlfbWFwOnJlbW92ZTogZG9tMyBnZm49ZjMwODAgbWZu
PWY3YTgwIG5yPTQwCihYRU4pIG1lbW9yeV9tYXA6cmVtb3ZlOiBkb20zIGdmbj1mMzBlMCBtZm49
ZjdhYzAgbnI9MgooWEVOKSBtZW1vcnlfbWFwOnJlbW92ZTogZG9tMyBnZm49ZjMwZTMgbWZuPWY3
YWMzIG5yPTEKKFhFTikgaW9wb3J0X21hcDpyZW1vdmU6IGRvbTMgZ3BvcnQ9YzIwMCBtcG9ydD1k
MDAwIG5yPTEwMAooWEVOKSBtZW1vcnlfbWFwOmFkZDogZG9tMyBnZm49ZjMwODAgbWZuPWY3YTgw
IG5yPTQwCihYRU4pIG1lbW9yeV9tYXA6YWRkOiBkb20zIGdmbj1mMzBlMCBtZm49ZjdhYzAgbnI9
MgooWEVOKSBtZW1vcnlfbWFwOmFkZDogZG9tMyBnZm49ZjMwZTMgbWZuPWY3YWMzIG5yPTEKKFhF
TikgaW9wb3J0X21hcDphZGQ6IGRvbTMgZ3BvcnQ9YzIwMCBtcG9ydD1kMDAwIG5yPTEwMAooWEVO
KSBtZW1vcnlfbWFwOnJlbW92ZTogZG9tMyBnZm49ZjMwODAgbWZuPWY3YTgwIG5yPTQwCihYRU4p
IG1lbW9yeV9tYXA6cmVtb3ZlOiBkb20zIGdmbj1mMzBlMCBtZm49ZjdhYzAgbnI9MgooWEVOKSBt
ZW1vcnlfbWFwOnJlbW92ZTogZG9tMyBnZm49ZjMwZTMgbWZuPWY3YWMzIG5yPTEKKFhFTikgaW9w
b3J0X21hcDpyZW1vdmU6IGRvbTMgZ3BvcnQ9YzIwMCBtcG9ydD1kMDAwIG5yPTEwMAooWEVOKSBt
ZW1vcnlfbWFwOmFkZDogZG9tMyBnZm49ZjMwODAgbWZuPWY3YTgwIG5yPTQwCihYRU4pIG1lbW9y
eV9tYXA6YWRkOiBkb20zIGdmbj1mMzBlMCBtZm49ZjdhYzAgbnI9MgooWEVOKSBtZW1vcnlfbWFw
OmFkZDogZG9tMyBnZm49ZjMwZTMgbWZuPWY3YWMzIG5yPTEKKFhFTikgaW9wb3J0X21hcDphZGQ6
IGRvbTMgZ3BvcnQ9YzIwMCBtcG9ydD1kMDAwIG5yPTEwMAooWEVOKSBtZW1vcnlfbWFwOnJlbW92
ZTogZG9tMyBnZm49ZjMwODAgbWZuPWY3YTgwIG5yPTQwCihYRU4pIG1lbW9yeV9tYXA6cmVtb3Zl
OiBkb20zIGdmbj1mMzBlMCBtZm49ZjdhYzAgbnI9MgooWEVOKSBtZW1vcnlfbWFwOnJlbW92ZTog
ZG9tMyBnZm49ZjMwZTMgbWZuPWY3YWMzIG5yPTEKKFhFTikgaW9wb3J0X21hcDpyZW1vdmU6IGRv
bTMgZ3BvcnQ9YzIwMCBtcG9ydD1kMDAwIG5yPTEwMAooWEVOKSBtZW1vcnlfbWFwOmFkZDogZG9t
MyBnZm49ZjMwODAgbWZuPWY3YTgwIG5yPTQwCihYRU4pIG1lbW9yeV9tYXA6YWRkOiBkb20zIGdm
bj1mMzBlMCBtZm49ZjdhYzAgbnI9MgooWEVOKSBtZW1vcnlfbWFwOmFkZDogZG9tMyBnZm49ZjMw
ZTMgbWZuPWY3YWMzIG5yPTEKKFhFTikgaW9wb3J0X21hcDphZGQ6IGRvbTMgZ3BvcnQ9YzIwMCBt
cG9ydD1kMDAwIG5yPTEwMAooWEVOKSBtZW1vcnlfbWFwOnJlbW92ZTogZG9tMyBnZm49ZjMwODAg
bWZuPWY3YTgwIG5yPTQwCihYRU4pIG1lbW9yeV9tYXA6cmVtb3ZlOiBkb20zIGdmbj1mMzBlMCBt
Zm49ZjdhYzAgbnI9MgooWEVOKSBtZW1vcnlfbWFwOnJlbW92ZTogZG9tMyBnZm49ZjMwZTMgbWZu
PWY3YWMzIG5yPTEKKFhFTikgaW9wb3J0X21hcDpyZW1vdmU6IGRvbTMgZ3BvcnQ9YzIwMCBtcG9y
dD1kMDAwIG5yPTEwMAooWEVOKSBtZW1vcnlfbWFwOmFkZDogZG9tMyBnZm49ZjMwODAgbWZuPWY3
YTgwIG5yPTQwCihYRU4pIG1lbW9yeV9tYXA6YWRkOiBkb20zIGdmbj1mMzBlMCBtZm49ZjdhYzAg
bnI9MgooWEVOKSBtZW1vcnlfbWFwOmFkZDogZG9tMyBnZm49ZjMwZTMgbWZuPWY3YWMzIG5yPTEK
KFhFTikgaW9wb3J0X21hcDphZGQ6IGRvbTMgZ3BvcnQ9YzIwMCBtcG9ydD1kMDAwIG5yPTEwMAoo
WEVOKSBpcnEuYzoyNzA6IERvbTMgUENJIGxpbmsgMCBjaGFuZ2VkIDUgLT4gMAooWEVOKSBpcnEu
YzoyNzA6IERvbTMgUENJIGxpbmsgMSBjaGFuZ2VkIDEwIC0+IDAKKFhFTikgaXJxLmM6MjcwOiBE
b20zIFBDSSBsaW5rIDIgY2hhbmdlZCAxMSAtPiAwCihYRU4pIGlycS5jOjI3MDogRG9tMyBQQ0kg
bGluayAzIGNoYW5nZWQgNSAtPiAwCihYRU4pIERFQlVHIGV2dGNobl9iaW5kX3BpcnEgNDA4IHBp
cnE9MTggaXJxPTAgZW11aXJxPTEyCihYRU4pIERFQlVHIGV2dGNobl9iaW5kX3BpcnEgNDA4IHBp
cnE9MTggaXJxPTAgZW11aXJxPTEyCihYRU4pIERFQlVHIGV2dGNobl9iaW5kX3BpcnEgNDA4IHBp
cnE9MTkgaXJxPTAgZW11aXJxPTEKKFhFTikgREVCVUcgZXZ0Y2huX2JpbmRfcGlycSA0MDggcGly
cT0xNyBpcnE9MCBlbXVpcnE9OAooWEVOKSBERUJVRyBldnRjaG5fYmluZF9waXJxIDQwOCBwaXJx
PTIxIGlycT0wIGVtdWlycT00CihYRU4pIERFQlVHIGV2dGNobl9iaW5kX3BpcnEgNDA4IHBpcnE9
MjAgaXJxPTAgZW11aXJxPTYKKFhFTikgREVCVUcgZXZ0Y2huX2JpbmRfcGlycSA0MDggcGlycT01
NSBpcnE9LTEgZW11aXJxPS0xCihYRU4pIERFQlVHIGh2bV9wY2lfbXNpX2Fzc2VydCBwaXJxPTQg
aHZtX2RvbWFpbl91c2VfcGlycT0wIGVtdWlycT0tMQooWEVOKSB2bXNpLmM6MTA4OmQzMjc2NyBV
bnN1cHBvcnRlZCBkZWxpdmVyeSBtb2RlIDMKKFhFTikgREVCVUcgZXZ0Y2huX2JpbmRfcGlycSA0
MDggcGlycT0yMiBpcnE9MCBlbXVpcnE9NwooWEVOKSBERUJVRyBldnRjaG5fYmluZF9waXJxIDQw
OCBwaXJxPTIxIGlycT0wIGVtdWlycT00CihYRU4pIEhWTTQ6IEhWTSBMb2FkZXIKKFhFTikgSFZN
NDogRGV0ZWN0ZWQgWGVuIHY0LjItdW5zdGFibGUKKFhFTikgSFZNNDogWGVuYnVzIHJpbmdzIEAw
eGZlZmZjMDAwLCBldmVudCBjaGFubmVsIDQKKFhFTikgSFZNNDogU3lzdGVtIHJlcXVlc3RlZCBT
ZWFCSU9TCihYRU4pIEhWTTQ6IENQVSBzcGVlZCBpcyAzMjkzIE1IegooWEVOKSBpcnEuYzoyNzA6
IERvbTQgUENJIGxpbmsgMCBjaGFuZ2VkIDAgLT4gNQooWEVOKSBIVk00OiBQQ0ktSVNBIGxpbmsg
MCByb3V0ZWQgdG8gSVJRNQooWEVOKSBpcnEuYzoyNzA6IERvbTQgUENJIGxpbmsgMSBjaGFuZ2Vk
IDAgLT4gMTAKKFhFTikgSFZNNDogUENJLUlTQSBsaW5rIDEgcm91dGVkIHRvIElSUTEwCihYRU4p
IGlycS5jOjI3MDogRG9tNCBQQ0kgbGluayAyIGNoYW5nZWQgMCAtPiAxMQooWEVOKSBIVk00OiBQ
Q0ktSVNBIGxpbmsgMiByb3V0ZWQgdG8gSVJRMTEKKFhFTikgaXJxLmM6MjcwOiBEb200IFBDSSBs
aW5rIDMgY2hhbmdlZCAwIC0+IDUKKFhFTikgSFZNNDogUENJLUlTQSBsaW5rIDMgcm91dGVkIHRv
IElSUTUKKFhFTikgSFZNNDogcGNpIGRldiAwMTozIElOVEEtPklSUTEwCihYRU4pIEhWTTQ6IHBj
aSBkZXYgMDM6MCBJTlRBLT5JUlE1CihYRU4pIEhWTTQ6IHBjaSBkZXYgMDQ6MCBJTlRBLT5JUlE1
CihYRU4pIEhWTTQ6IHBjaSBkZXYgMDU6MCBJTlRBLT5JUlExMAooWEVOKSBIVk00OiBwY2kgZGV2
IDAyOjAgYmFyIDEwIHNpemUgMDIwMDAwMDA6IGYwMDAwMDA4CihYRU4pIEhWTTQ6IHBjaSBkZXYg
MDM6MCBiYXIgMTQgc2l6ZSAwMTAwMDAwMDogZjIwMDAwMDgKKFhFTikgSFZNNDogcGNpIGRldiAw
NTowIGJhciAzMCBzaXplIDAwMDgwMDAwOiBmMzAwMDAwMAooWEVOKSBtZW1vcnlfbWFwOmFkZDog
ZG9tNCBnZm49ZjMwODAgbWZuPWY3YTgwIG5yPTQwCihYRU4pIEhWTTQ6IHBjaSBkZXYgMDU6MCBi
YXIgMWMgc2l6ZSAwMDA0MDAwMDogZjMwODAwMDQKKFhFTikgSFZNNDogcGNpIGRldiAwMjowIGJh
ciAzMCBzaXplIDAwMDEwMDAwOiBmMzBjMDAwMAooWEVOKSBIVk00OiBwY2kgZGV2IDA0OjAgYmFy
IDMwIHNpemUgMDAwMTAwMDA6IGYzMGQwMDAwCihYRU4pIG1lbW9yeV9tYXA6YWRkOiBkb200IGdm
bj1mMzBlMCBtZm49ZjdhYzAgbnI9MgooWEVOKSBtZW1vcnlfbWFwOmFkZDogZG9tNCBnZm49ZjMw
ZTMgbWZuPWY3YWMzIG5yPTEKKFhFTikgSFZNNDogcGNpIGRldiAwNTowIGJhciAxNCBzaXplIDAw
MDA0MDAwOiBmMzBlMDAwNAooWEVOKSBIVk00OiBwY2kgZGV2IDAyOjAgYmFyIDE0IHNpemUgMDAw
MDEwMDA6IGYzMGU0MDAwCihYRU4pIEhWTTQ6IHBjaSBkZXYgMDM6MCBiYXIgMTAgc2l6ZSAwMDAw
MDEwMDogMDAwMGMwMDEKKFhFTikgSFZNNDogcGNpIGRldiAwNDowIGJhciAxMCBzaXplIDAwMDAw
MTAwOiAwMDAwYzEwMQooWEVOKSBIVk00OiBwY2kgZGV2IDA0OjAgYmFyIDE0IHNpemUgMDAwMDAx
MDA6IGYzMGU1MDAwCihYRU4pIEhWTTQ6IHBjaSBkZXYgMDU6MCBiYXIgMTAgc2l6ZSAwMDAwMDEw
MDogMDAwMGMyMDEKKFhFTikgaW9wb3J0X21hcDphZGQ6IGRvbTQgZ3BvcnQ9YzIwMCBtcG9ydD1k
MDAwIG5yPTEwMAooWEVOKSBIVk00OiBwY2kgZGV2IDAxOjEgYmFyIDIwIHNpemUgMDAwMDAwMTA6
IDAwMDBjMzAxCihYRU4pIEhWTTQ6IE11bHRpcHJvY2Vzc29yIGluaXRpYWxpc2F0aW9uOgooWEVO
KSBIVk00OiAgLSBDUFUwIC4uLiAzNi1iaXQgcGh5cyAuLi4gZml4ZWQgTVRSUnMgLi4uIHZhciBN
VFJScyBbMi84XSAuLi4gZG9uZS4KKFhFTikgSFZNNDogIC0gQ1BVMSAuLi4gMzYtYml0IHBoeXMg
Li4uIGZpeGVkIE1UUlJzIC4uLiB2YXIgTVRSUnMgWzIvOF0gLi4uIGRvbmUuCihYRU4pIEhWTTQ6
IFRlc3RpbmcgSFZNIGVudmlyb25tZW50OgooWEVOKSBIVk00OiAgLSBSRVAgSU5TQiBhY3Jvc3Mg
cGFnZSBib3VuZGFyaWVzIC4uLiBwYXNzZWQKKFhFTikgSFZNNDogIC0gR1MgYmFzZSBNU1JzIGFu
ZCBTV0FQR1MgLi4uIHBhc3NlZAooWEVOKSBIVk00OiBQYXNzZWQgMiBvZiAyIHRlc3RzCihYRU4p
IEhWTTQ6IFdyaXRpbmcgU01CSU9TIHRhYmxlcyAuLi4KKFhFTikgSFZNNDogTG9hZGluZyBTZWFC
SU9TIC4uLgooWEVOKSBIVk00OiBDcmVhdGluZyBNUCB0YWJsZXMgLi4uCihYRU4pIEhWTTQ6IExv
YWRpbmcgQUNQSSAuLi4KKFhFTikgSFZNNDogdm04NiBUU1MgYXQgZmMwMGEwODAKKFhFTikgSFZN
NDogQklPUyBtYXA6CihYRU4pIEhWTTQ6ICAxMDAwMC0xMDBkMzogU2NyYXRjaCBzcGFjZQooWEVO
KSBIVk00OiAgZTAwMDAtZmZmZmY6IE1haW4gQklPUwooWEVOKSBIVk00OiBFODIwIHRhYmxlOgoo
WEVOKSBIVk00OiAgWzAwXTogMDAwMDAwMDA6MDAwMDAwMDAgLSAwMDAwMDAwMDowMDBhMDAwMDog
UkFNCihYRU4pIEhWTTQ6ICBIT0xFOiAwMDAwMDAwMDowMDBhMDAwMCAtIDAwMDAwMDAwOjAwMGUw
MDAwCihYRU4pIEhWTTQ6ICBbMDFdOiAwMDAwMDAwMDowMDBlMDAwMCAtIDAwMDAwMDAwOjAwMTAw
MDAwOiBSRVNFUlZFRAooWEVOKSBIVk00OiAgWzAyXTogMDAwMDAwMDA6MDAxMDAwMDAgLSAwMDAw
MDAwMDo3ZjgwMDAwMDogUkFNCihYRU4pIEhWTTQ6ICBIT0xFOiAwMDAwMDAwMDo3ZjgwMDAwMCAt
IDAwMDAwMDAwOmZjMDAwMDAwCihYRU4pIEhWTTQ6ICBbMDNdOiAwMDAwMDAwMDpmYzAwMDAwMCAt
IDAwMDAwMDAxOjAwMDAwMDAwOiBSRVNFUlZFRAooWEVOKSBIVk00OiBJbnZva2luZyBTZWFCSU9T
IC4uLgooWEVOKSBzdGR2Z2EuYzoxNDc6ZDQgZW50ZXJpbmcgc3RkdmdhIGFuZCBjYWNoaW5nIG1v
ZGVzCihYRU4pIG1lbW9yeV9tYXA6YWRkOiBkb200IGdmbj1mMzAwMCBtZm49ZjdhMDAgbnI9ODAK
KFhFTikgbWVtb3J5X21hcDpyZW1vdmU6IGRvbTQgZ2ZuPWYzMDAwIG1mbj1mN2EwMCBucj04MAoo
WEVOKSBzdGR2Z2EuYzoxNTE6ZDQgbGVhdmluZyBzdGR2Z2EKKFhFTikgc3RkdmdhLmM6MTQ3OmQ0
IGVudGVyaW5nIHN0ZHZnYSBhbmQgY2FjaGluZyBtb2RlcwooWEVOKSBpcnEuYzozNzU6IERvbTQg
Y2FsbGJhY2sgdmlhIGNoYW5nZWQgdG8gRGlyZWN0IFZlY3RvciAweGYzCihYRU4pIG1lbW9yeV9t
YXA6cmVtb3ZlOiBkb200IGdmbj1mMzA4MCBtZm49ZjdhODAgbnI9NDAKKFhFTikgbWVtb3J5X21h
cDpyZW1vdmU6IGRvbTQgZ2ZuPWYzMGUwIG1mbj1mN2FjMCBucj0yCihYRU4pIG1lbW9yeV9tYXA6
cmVtb3ZlOiBkb200IGdmbj1mMzBlMyBtZm49ZjdhYzMgbnI9MQooWEVOKSBpb3BvcnRfbWFwOnJl
bW92ZTogZG9tNCBncG9ydD1jMjAwIG1wb3J0PWQwMDAgbnI9MTAwCihYRU4pIG1lbW9yeV9tYXA6
YWRkOiBkb200IGdmbj1mMzA4MCBtZm49ZjdhODAgbnI9NDAKKFhFTikgbWVtb3J5X21hcDphZGQ6
IGRvbTQgZ2ZuPWYzMGUwIG1mbj1mN2FjMCBucj0yCihYRU4pIG1lbW9yeV9tYXA6YWRkOiBkb200
IGdmbj1mMzBlMyBtZm49ZjdhYzMgbnI9MQooWEVOKSBpb3BvcnRfbWFwOmFkZDogZG9tNCBncG9y
dD1jMjAwIG1wb3J0PWQwMDAgbnI9MTAwCihYRU4pIG1lbW9yeV9tYXA6cmVtb3ZlOiBkb200IGdm
bj1mMzA4MCBtZm49ZjdhODAgbnI9NDAKKFhFTikgbWVtb3J5X21hcDpyZW1vdmU6IGRvbTQgZ2Zu
PWYzMGUwIG1mbj1mN2FjMCBucj0yCihYRU4pIG1lbW9yeV9tYXA6cmVtb3ZlOiBkb200IGdmbj1m
MzBlMyBtZm49ZjdhYzMgbnI9MQooWEVOKSBpb3BvcnRfbWFwOnJlbW92ZTogZG9tNCBncG9ydD1j
MjAwIG1wb3J0PWQwMDAgbnI9MTAwCihYRU4pIG1lbW9yeV9tYXA6YWRkOiBkb200IGdmbj1mMzA4
MCBtZm49ZjdhODAgbnI9NDAKKFhFTikgbWVtb3J5X21hcDphZGQ6IGRvbTQgZ2ZuPWYzMGUwIG1m
bj1mN2FjMCBucj0yCihYRU4pIG1lbW9yeV9tYXA6YWRkOiBkb200IGdmbj1mMzBlMyBtZm49Zjdh
YzMgbnI9MQooWEVOKSBpb3BvcnRfbWFwOmFkZDogZG9tNCBncG9ydD1jMjAwIG1wb3J0PWQwMDAg
bnI9MTAwCihYRU4pIG1lbW9yeV9tYXA6cmVtb3ZlOiBkb200IGdmbj1mMzA4MCBtZm49ZjdhODAg
bnI9NDAKKFhFTikgbWVtb3J5X21hcDpyZW1vdmU6IGRvbTQgZ2ZuPWYzMGUwIG1mbj1mN2FjMCBu
cj0yCihYRU4pIG1lbW9yeV9tYXA6cmVtb3ZlOiBkb200IGdmbj1mMzBlMyBtZm49ZjdhYzMgbnI9
MQooWEVOKSBpb3BvcnRfbWFwOnJlbW92ZTogZG9tNCBncG9ydD1jMjAwIG1wb3J0PWQwMDAgbnI9
MTAwCihYRU4pIG1lbW9yeV9tYXA6YWRkOiBkb200IGdmbj1mMzA4MCBtZm49ZjdhODAgbnI9NDAK
KFhFTikgbWVtb3J5X21hcDphZGQ6IGRvbTQgZ2ZuPWYzMGUwIG1mbj1mN2FjMCBucj0yCihYRU4p
IG1lbW9yeV9tYXA6YWRkOiBkb200IGdmbj1mMzBlMyBtZm49ZjdhYzMgbnI9MQooWEVOKSBpb3Bv
cnRfbWFwOmFkZDogZG9tNCBncG9ydD1jMjAwIG1wb3J0PWQwMDAgbnI9MTAwCihYRU4pIG1lbW9y
eV9tYXA6cmVtb3ZlOiBkb200IGdmbj1mMzA4MCBtZm49ZjdhODAgbnI9NDAKKFhFTikgbWVtb3J5
X21hcDpyZW1vdmU6IGRvbTQgZ2ZuPWYzMGUwIG1mbj1mN2FjMCBucj0yCihYRU4pIG1lbW9yeV9t
YXA6cmVtb3ZlOiBkb200IGdmbj1mMzBlMyBtZm49ZjdhYzMgbnI9MQooWEVOKSBpb3BvcnRfbWFw
OnJlbW92ZTogZG9tNCBncG9ydD1jMjAwIG1wb3J0PWQwMDAgbnI9MTAwCihYRU4pIG1lbW9yeV9t
YXA6YWRkOiBkb200IGdmbj1mMzA4MCBtZm49ZjdhODAgbnI9NDAKKFhFTikgbWVtb3J5X21hcDph
ZGQ6IGRvbTQgZ2ZuPWYzMGUwIG1mbj1mN2FjMCBucj0yCihYRU4pIG1lbW9yeV9tYXA6YWRkOiBk
b200IGdmbj1mMzBlMyBtZm49ZjdhYzMgbnI9MQooWEVOKSBpb3BvcnRfbWFwOmFkZDogZG9tNCBn
cG9ydD1jMjAwIG1wb3J0PWQwMDAgbnI9MTAwCihYRU4pIG1lbW9yeV9tYXA6cmVtb3ZlOiBkb200
IGdmbj1mMzA4MCBtZm49ZjdhODAgbnI9NDAKKFhFTikgbWVtb3J5X21hcDpyZW1vdmU6IGRvbTQg
Z2ZuPWYzMGUwIG1mbj1mN2FjMCBucj0yCihYRU4pIG1lbW9yeV9tYXA6cmVtb3ZlOiBkb200IGdm
bj1mMzBlMyBtZm49ZjdhYzMgbnI9MQooWEVOKSBpb3BvcnRfbWFwOnJlbW92ZTogZG9tNCBncG9y
dD1jMjAwIG1wb3J0PWQwMDAgbnI9MTAwCihYRU4pIG1lbW9yeV9tYXA6YWRkOiBkb200IGdmbj1m
MzA4MCBtZm49ZjdhODAgbnI9NDAKKFhFTikgbWVtb3J5X21hcDphZGQ6IGRvbTQgZ2ZuPWYzMGUw
IG1mbj1mN2FjMCBucj0yCihYRU4pIG1lbW9yeV9tYXA6YWRkOiBkb200IGdmbj1mMzBlMyBtZm49
ZjdhYzMgbnI9MQooWEVOKSBpb3BvcnRfbWFwOmFkZDogZG9tNCBncG9ydD1jMjAwIG1wb3J0PWQw
MDAgbnI9MTAwCihYRU4pIGlycS5jOjI3MDogRG9tNCBQQ0kgbGluayAwIGNoYW5nZWQgNSAtPiAw
CihYRU4pIGlycS5jOjI3MDogRG9tNCBQQ0kgbGluayAxIGNoYW5nZWQgMTAgLT4gMAooWEVOKSBp
cnEuYzoyNzA6IERvbTQgUENJIGxpbmsgMiBjaGFuZ2VkIDExIC0+IDAKKFhFTikgaXJxLmM6Mjcw
OiBEb200IFBDSSBsaW5rIDMgY2hhbmdlZCA1IC0+IDAKKFhFTikgREVCVUcgZXZ0Y2huX2JpbmRf
cGlycSA0MDggcGlycT0xOCBpcnE9MCBlbXVpcnE9MTIKKFhFTikgREVCVUcgZXZ0Y2huX2JpbmRf
cGlycSA0MDggcGlycT0xOCBpcnE9MCBlbXVpcnE9MTIKKFhFTikgREVCVUcgZXZ0Y2huX2JpbmRf
cGlycSA0MDggcGlycT0xOSBpcnE9MCBlbXVpcnE9MQooWEVOKSBERUJVRyBldnRjaG5fYmluZF9w
aXJxIDQwOCBwaXJxPTE3IGlycT0wIGVtdWlycT04CihYRU4pIERFQlVHIGV2dGNobl9iaW5kX3Bp
cnEgNDA4IHBpcnE9MjEgaXJxPTAgZW11aXJxPTQKKFhFTikgREVCVUcgZXZ0Y2huX2JpbmRfcGly
cSA0MDggcGlycT0yMCBpcnE9MCBlbXVpcnE9NgooWEVOKSBERUJVRyBldnRjaG5fYmluZF9waXJx
IDQwOCBwaXJxPTU1IGlycT0tMSBlbXVpcnE9LTEKKFhFTikgREVCVUcgaHZtX3BjaV9tc2lfYXNz
ZXJ0IHBpcnE9NCBodm1fZG9tYWluX3VzZV9waXJxPTAgZW11aXJxPS0xCihYRU4pIHZtc2kuYzox
MDg6ZDMyNzY3IFVuc3VwcG9ydGVkIGRlbGl2ZXJ5IG1vZGUgMwooWEVOKSBERUJVRyBldnRjaG5f
YmluZF9waXJxIDQwOCBwaXJxPTIyIGlycT0wIGVtdWlycT03CihYRU4pIERFQlVHIGV2dGNobl9i
aW5kX3BpcnEgNDA4IHBpcnE9MjEgaXJxPTAgZW11aXJxPTQKKFhFTikgSFZNNTogSFZNIExvYWRl
cgooWEVOKSBIVk01OiBEZXRlY3RlZCBYZW4gdjQuMi11bnN0YWJsZQooWEVOKSBIVk01OiBYZW5i
dXMgcmluZ3MgQDB4ZmVmZmMwMDAsIGV2ZW50IGNoYW5uZWwgNAooWEVOKSBIVk01OiBTeXN0ZW0g
cmVxdWVzdGVkIFNlYUJJT1MKKFhFTikgSFZNNTogQ1BVIHNwZWVkIGlzIDMyOTMgTUh6CihYRU4p
IGlycS5jOjI3MDogRG9tNSBQQ0kgbGluayAwIGNoYW5nZWQgMCAtPiA1CihYRU4pIEhWTTU6IFBD
SS1JU0EgbGluayAwIHJvdXRlZCB0byBJUlE1CihYRU4pIGlycS5jOjI3MDogRG9tNSBQQ0kgbGlu
ayAxIGNoYW5nZWQgMCAtPiAxMAooWEVOKSBIVk01OiBQQ0ktSVNBIGxpbmsgMSByb3V0ZWQgdG8g
SVJRMTAKKFhFTikgaXJxLmM6MjcwOiBEb201IFBDSSBsaW5rIDIgY2hhbmdlZCAwIC0+IDExCihY
RU4pIEhWTTU6IFBDSS1JU0EgbGluayAyIHJvdXRlZCB0byBJUlExMQooWEVOKSBpcnEuYzoyNzA6
IERvbTUgUENJIGxpbmsgMyBjaGFuZ2VkIDAgLT4gNQooWEVOKSBIVk01OiBQQ0ktSVNBIGxpbmsg
MyByb3V0ZWQgdG8gSVJRNQooWEVOKSBIVk01OiBwY2kgZGV2IDAxOjMgSU5UQS0+SVJRMTAKKFhF
TikgSFZNNTogcGNpIGRldiAwMzowIElOVEEtPklSUTUKKFhFTikgSFZNNTogcGNpIGRldiAwNDow
IElOVEEtPklSUTUKKFhFTikgSFZNNTogcGNpIGRldiAwNTowIElOVEEtPklSUTEwCihYRU4pIEhW
TTU6IHBjaSBkZXYgMDI6MCBiYXIgMTAgc2l6ZSAwMjAwMDAwMDogZjAwMDAwMDgKKFhFTikgSFZN
NTogcGNpIGRldiAwMzowIGJhciAxNCBzaXplIDAxMDAwMDAwOiBmMjAwMDAwOAooWEVOKSBIVk01
OiBwY2kgZGV2IDA1OjAgYmFyIDMwIHNpemUgMDAwODAwMDA6IGYzMDAwMDAwCihYRU4pIG1lbW9y
eV9tYXA6YWRkOiBkb201IGdmbj1mMzA4MCBtZm49ZjdhODAgbnI9NDAKKFhFTikgSFZNNTogcGNp
IGRldiAwNTowIGJhciAxYyBzaXplIDAwMDQwMDAwOiBmMzA4MDAwNAooWEVOKSBIVk01OiBwY2kg
ZGV2IDAyOjAgYmFyIDMwIHNpemUgMDAwMTAwMDA6IGYzMGMwMDAwCihYRU4pIEhWTTU6IHBjaSBk
ZXYgMDQ6MCBiYXIgMzAgc2l6ZSAwMDAxMDAwMDogZjMwZDAwMDAKKFhFTikgbWVtb3J5X21hcDph
ZGQ6IGRvbTUgZ2ZuPWYzMGUwIG1mbj1mN2FjMCBucj0yCihYRU4pIG1lbW9yeV9tYXA6YWRkOiBk
b201IGdmbj1mMzBlMyBtZm49ZjdhYzMgbnI9MQooWEVOKSBIVk01OiBwY2kgZGV2IDA1OjAgYmFy
IDE0IHNpemUgMDAwMDQwMDA6IGYzMGUwMDA0CihYRU4pIEhWTTU6IHBjaSBkZXYgMDI6MCBiYXIg
MTQgc2l6ZSAwMDAwMTAwMDogZjMwZTQwMDAKKFhFTikgSFZNNTogcGNpIGRldiAwMzowIGJhciAx
MCBzaXplIDAwMDAwMTAwOiAwMDAwYzAwMQooWEVOKSBIVk01OiBwY2kgZGV2IDA0OjAgYmFyIDEw
IHNpemUgMDAwMDAxMDA6IDAwMDBjMTAxCihYRU4pIEhWTTU6IHBjaSBkZXYgMDQ6MCBiYXIgMTQg
c2l6ZSAwMDAwMDEwMDogZjMwZTUwMDAKKFhFTikgSFZNNTogcGNpIGRldiAwNTowIGJhciAxMCBz
aXplIDAwMDAwMTAwOiAwMDAwYzIwMQooWEVOKSBpb3BvcnRfbWFwOmFkZDogZG9tNSBncG9ydD1j
MjAwIG1wb3J0PWQwMDAgbnI9MTAwCihYRU4pIEhWTTU6IHBjaSBkZXYgMDE6MSBiYXIgMjAgc2l6
ZSAwMDAwMDAxMDogMDAwMGMzMDEKKFhFTikgSFZNNTogTXVsdGlwcm9jZXNzb3IgaW5pdGlhbGlz
YXRpb246CihYRU4pIEhWTTU6ICAtIENQVTAgLi4uIDM2LWJpdCBwaHlzIC4uLiBmaXhlZCBNVFJS
cyAuLi4gdmFyIE1UUlJzIFsyLzhdIC4uLiBkb25lLgooWEVOKSBIVk01OiAgLSBDUFUxIC4uLiAz
Ni1iaXQgcGh5cyAuLi4gZml4ZWQgTVRSUnMgLi4uIHZhciBNVFJScyBbMi84XSAuLi4gZG9uZS4K
KFhFTikgSFZNNTogVGVzdGluZyBIVk0gZW52aXJvbm1lbnQ6CihYRU4pIEhWTTU6ICAtIFJFUCBJ
TlNCIGFjcm9zcyBwYWdlIGJvdW5kYXJpZXMgLi4uIHBhc3NlZAooWEVOKSBIVk01OiAgLSBHUyBi
YXNlIE1TUnMgYW5kIFNXQVBHUyAuLi4gcGFzc2VkCihYRU4pIEhWTTU6IFBhc3NlZCAyIG9mIDIg
dGVzdHMKKFhFTikgSFZNNTogV3JpdGluZyBTTUJJT1MgdGFibGVzIC4uLgooWEVOKSBIVk01OiBM
b2FkaW5nIFNlYUJJT1MgLi4uCihYRU4pIEhWTTU6IENyZWF0aW5nIE1QIHRhYmxlcyAuLi4KKFhF
TikgSFZNNTogTG9hZGluZyBBQ1BJIC4uLgooWEVOKSBIVk01OiB2bTg2IFRTUyBhdCBmYzAwYTA4
MAooWEVOKSBIVk01OiBCSU9TIG1hcDoKKFhFTikgSFZNNTogIDEwMDAwLTEwMGQzOiBTY3JhdGNo
IHNwYWNlCihYRU4pIEhWTTU6ICBlMDAwMC1mZmZmZjogTWFpbiBCSU9TCihYRU4pIEhWTTU6IEU4
MjAgdGFibGU6CihYRU4pIEhWTTU6ICBbMDBdOiAwMDAwMDAwMDowMDAwMDAwMCAtIDAwMDAwMDAw
OjAwMGEwMDAwOiBSQU0KKFhFTikgSFZNNTogIEhPTEU6IDAwMDAwMDAwOjAwMGEwMDAwIC0gMDAw
MDAwMDA6MDAwZTAwMDAKKFhFTikgSFZNNTogIFswMV06IDAwMDAwMDAwOjAwMGUwMDAwIC0gMDAw
MDAwMDA6MDAxMDAwMDA6IFJFU0VSVkVECihYRU4pIEhWTTU6ICBbMDJdOiAwMDAwMDAwMDowMDEw
MDAwMCAtIDAwMDAwMDAwOjdmODAwMDAwOiBSQU0KKFhFTikgSFZNNTogIEhPTEU6IDAwMDAwMDAw
OjdmODAwMDAwIC0gMDAwMDAwMDA6ZmMwMDAwMDAKKFhFTikgSFZNNTogIFswM106IDAwMDAwMDAw
OmZjMDAwMDAwIC0gMDAwMDAwMDE6MDAwMDAwMDA6IFJFU0VSVkVECihYRU4pIEhWTTU6IEludm9r
aW5nIFNlYUJJT1MgLi4uCihYRU4pIHN0ZHZnYS5jOjE0NzpkNSBlbnRlcmluZyBzdGR2Z2EgYW5k
IGNhY2hpbmcgbW9kZXMKKFhFTikgbWVtb3J5X21hcDphZGQ6IGRvbTUgZ2ZuPWYzMDAwIG1mbj1m
N2EwMCBucj04MAooWEVOKSBtZW1vcnlfbWFwOnJlbW92ZTogZG9tNSBnZm49ZjMwMDAgbWZuPWY3
YTAwIG5yPTgwCihYRU4pIHN0ZHZnYS5jOjE1MTpkNSBsZWF2aW5nIHN0ZHZnYQooWEVOKSBIVk02
OiBIVk0gTG9hZGVyCihYRU4pIEhWTTY6IERldGVjdGVkIFhlbiB2NC4yLXVuc3RhYmxlCihYRU4p
IEhWTTY6IFhlbmJ1cyByaW5ncyBAMHhmZWZmYzAwMCwgZXZlbnQgY2hhbm5lbCA0CihYRU4pIEhW
TTY6IFN5c3RlbSByZXF1ZXN0ZWQgU2VhQklPUwooWEVOKSBIVk02OiBDUFUgc3BlZWQgaXMgMzI5
MyBNSHoKKFhFTikgaXJxLmM6MjcwOiBEb202IFBDSSBsaW5rIDAgY2hhbmdlZCAwIC0+IDUKKFhF
TikgSFZNNjogUENJLUlTQSBsaW5rIDAgcm91dGVkIHRvIElSUTUKKFhFTikgaXJxLmM6MjcwOiBE
b202IFBDSSBsaW5rIDEgY2hhbmdlZCAwIC0+IDEwCihYRU4pIEhWTTY6IFBDSS1JU0EgbGluayAx
IHJvdXRlZCB0byBJUlExMAooWEVOKSBpcnEuYzoyNzA6IERvbTYgUENJIGxpbmsgMiBjaGFuZ2Vk
IDAgLT4gMTEKKFhFTikgSFZNNjogUENJLUlTQSBsaW5rIDIgcm91dGVkIHRvIElSUTExCihYRU4p
IGlycS5jOjI3MDogRG9tNiBQQ0kgbGluayAzIGNoYW5nZWQgMCAtPiA1CihYRU4pIEhWTTY6IFBD
SS1JU0EgbGluayAzIHJvdXRlZCB0byBJUlE1CihYRU4pIEhWTTY6IHBjaSBkZXYgMDE6MyBJTlRB
LT5JUlExMAooWEVOKSBIVk02OiBwY2kgZGV2IDAzOjAgSU5UQS0+SVJRNQooWEVOKSBIVk02OiBw
Y2kgZGV2IDA0OjAgSU5UQS0+SVJRNQooWEVOKSBIVk02OiBwY2kgZGV2IDA1OjAgSU5UQS0+SVJR
MTAKKFhFTikgSFZNNjogcGNpIGRldiAwMjowIGJhciAxMCBzaXplIDAyMDAwMDAwOiBmMDAwMDAw
OAooWEVOKSBIVk02OiBwY2kgZGV2IDAzOjAgYmFyIDE0IHNpemUgMDEwMDAwMDA6IGYyMDAwMDA4
CihYRU4pIEhWTTY6IHBjaSBkZXYgMDU6MCBiYXIgMzAgc2l6ZSAwMDA4MDAwMDogZjMwMDAwMDAK
KFhFTikgbWVtb3J5X21hcDphZGQ6IGRvbTYgZ2ZuPWYzMDgwIG1mbj1mN2E4MCBucj00MAooWEVO
KSBIVk02OiBwY2kgZGV2IDA1OjAgYmFyIDFjIHNpemUgMDAwNDAwMDA6IGYzMDgwMDA0CihYRU4p
IEhWTTY6IHBjaSBkZXYgMDI6MCBiYXIgMzAgc2l6ZSAwMDAxMDAwMDogZjMwYzAwMDAKKFhFTikg
SFZNNjogcGNpIGRldiAwNDowIGJhciAzMCBzaXplIDAwMDEwMDAwOiBmMzBkMDAwMAooWEVOKSBt
ZW1vcnlfbWFwOmFkZDogZG9tNiBnZm49ZjMwZTAgbWZuPWY3YWMwIG5yPTIKKFhFTikgbWVtb3J5
X21hcDphZGQ6IGRvbTYgZ2ZuPWYzMGUzIG1mbj1mN2FjMyBucj0xCihYRU4pIEhWTTY6IHBjaSBk
ZXYgMDU6MCBiYXIgMTQgc2l6ZSAwMDAwNDAwMDogZjMwZTAwMDQKKFhFTikgSFZNNjogcGNpIGRl
diAwMjowIGJhciAxNCBzaXplIDAwMDAxMDAwOiBmMzBlNDAwMAooWEVOKSBIVk02OiBwY2kgZGV2
IDAzOjAgYmFyIDEwIHNpemUgMDAwMDAxMDA6IDAwMDBjMDAxCihYRU4pIEhWTTY6IHBjaSBkZXYg
MDQ6MCBiYXIgMTAgc2l6ZSAwMDAwMDEwMDogMDAwMGMxMDEKKFhFTikgSFZNNjogcGNpIGRldiAw
NDowIGJhciAxNCBzaXplIDAwMDAwMTAwOiBmMzBlNTAwMAooWEVOKSBIVk02OiBwY2kgZGV2IDA1
OjAgYmFyIDEwIHNpemUgMDAwMDAxMDA6IDAwMDBjMjAxCihYRU4pIGlvcG9ydF9tYXA6YWRkOiBk
b202IGdwb3J0PWMyMDAgbXBvcnQ9ZDAwMCBucj0xMDAKKFhFTikgSFZNNjogcGNpIGRldiAwMTox
IGJhciAyMCBzaXplIDAwMDAwMDEwOiAwMDAwYzMwMQooWEVOKSBIVk02OiBNdWx0aXByb2Nlc3Nv
ciBpbml0aWFsaXNhdGlvbjoKKFhFTikgSFZNNjogIC0gQ1BVMCAuLi4gMzYtYml0IHBoeXMgLi4u
IGZpeGVkIE1UUlJzIC4uLiB2YXIgTVRSUnMgWzIvOF0gLi4uIGRvbmUuCihYRU4pIEhWTTY6ICAt
IENQVTEgLi4uIDM2LWJpdCBwaHlzIC4uLiBmaXhlZCBNVFJScyAuLi4gdmFyIE1UUlJzIFsyLzhd
IC4uLiBkb25lLgooWEVOKSBIVk02OiBUZXN0aW5nIEhWTSBlbnZpcm9ubWVudDoKKFhFTikgSFZN
NjogIC0gUkVQIElOU0IgYWNyb3NzIHBhZ2UgYm91bmRhcmllcyAuLi4gcGFzc2VkCihYRU4pIEhW
TTY6ICAtIEdTIGJhc2UgTVNScyBhbmQgU1dBUEdTIC4uLiBwYXNzZWQKKFhFTikgSFZNNjogUGFz
c2VkIDIgb2YgMiB0ZXN0cwooWEVOKSBIVk02OiBXcml0aW5nIFNNQklPUyB0YWJsZXMgLi4uCihY
RU4pIEhWTTY6IExvYWRpbmcgU2VhQklPUyAuLi4KKFhFTikgSFZNNjogQ3JlYXRpbmcgTVAgdGFi
bGVzIC4uLgooWEVOKSBIVk02OiBMb2FkaW5nIEFDUEkgLi4uCihYRU4pIEhWTTY6IHZtODYgVFNT
IGF0IGZjMDBhMDgwCihYRU4pIEhWTTY6IEJJT1MgbWFwOgooWEVOKSBIVk02OiAgMTAwMDAtMTAw
ZDM6IFNjcmF0Y2ggc3BhY2UKKFhFTikgSFZNNjogIGUwMDAwLWZmZmZmOiBNYWluIEJJT1MKKFhF
TikgSFZNNjogRTgyMCB0YWJsZToKKFhFTikgSFZNNjogIFswMF06IDAwMDAwMDAwOjAwMDAwMDAw
IC0gMDAwMDAwMDA6MDAwYTAwMDA6IFJBTQooWEVOKSBIVk02OiAgSE9MRTogMDAwMDAwMDA6MDAw
YTAwMDAgLSAwMDAwMDAwMDowMDBlMDAwMAooWEVOKSBIVk02OiAgWzAxXTogMDAwMDAwMDA6MDAw
ZTAwMDAgLSAwMDAwMDAwMDowMDEwMDAwMDogUkVTRVJWRUQKKFhFTikgSFZNNjogIFswMl06IDAw
MDAwMDAwOjAwMTAwMDAwIC0gMDAwMDAwMDA6N2Y4MDAwMDA6IFJBTQooWEVOKSBIVk02OiAgSE9M
RTogMDAwMDAwMDA6N2Y4MDAwMDAgLSAwMDAwMDAwMDpmYzAwMDAwMAooWEVOKSBIVk02OiAgWzAz
XTogMDAwMDAwMDA6ZmMwMDAwMDAgLSAwMDAwMDAwMTowMDAwMDAwMDogUkVTRVJWRUQKKFhFTikg
SFZNNjogSW52b2tpbmcgU2VhQklPUyAuLi4KKFhFTikgc3RkdmdhLmM6MTQ3OmQ2IGVudGVyaW5n
IHN0ZHZnYSBhbmQgY2FjaGluZyBtb2RlcwooWEVOKSBtZW1vcnlfbWFwOmFkZDogZG9tNiBnZm49
ZjMwMDAgbWZuPWY3YTAwIG5yPTgwCihYRU4pIG1lbW9yeV9tYXA6cmVtb3ZlOiBkb202IGdmbj1m
MzAwMCBtZm49ZjdhMDAgbnI9ODAKKFhFTikgc3RkdmdhLmM6MTUxOmQ2IGxlYXZpbmcgc3Rkdmdh
CihYRU4pIHN0ZHZnYS5jOjE0NzpkNiBlbnRlcmluZyBzdGR2Z2EgYW5kIGNhY2hpbmcgbW9kZXMK
KFhFTikgaXJxLmM6Mzc1OiBEb202IGNhbGxiYWNrIHZpYSBjaGFuZ2VkIHRvIERpcmVjdCBWZWN0
b3IgMHhmMwooWEVOKSBtZW1vcnlfbWFwOnJlbW92ZTogZG9tNiBnZm49ZjMwODAgbWZuPWY3YTgw
IG5yPTQwCihYRU4pIG1lbW9yeV9tYXA6cmVtb3ZlOiBkb202IGdmbj1mMzBlMCBtZm49ZjdhYzAg
bnI9MgooWEVOKSBtZW1vcnlfbWFwOnJlbW92ZTogZG9tNiBnZm49ZjMwZTMgbWZuPWY3YWMzIG5y
PTEKKFhFTikgaW9wb3J0X21hcDpyZW1vdmU6IGRvbTYgZ3BvcnQ9YzIwMCBtcG9ydD1kMDAwIG5y
PTEwMAooWEVOKSBtZW1vcnlfbWFwOmFkZDogZG9tNiBnZm49ZjMwODAgbWZuPWY3YTgwIG5yPTQw
CihYRU4pIG1lbW9yeV9tYXA6YWRkOiBkb202IGdmbj1mMzBlMCBtZm49ZjdhYzAgbnI9MgooWEVO
KSBtZW1vcnlfbWFwOmFkZDogZG9tNiBnZm49ZjMwZTMgbWZuPWY3YWMzIG5yPTEKKFhFTikgaW9w
b3J0X21hcDphZGQ6IGRvbTYgZ3BvcnQ9YzIwMCBtcG9ydD1kMDAwIG5yPTEwMAooWEVOKSBtZW1v
cnlfbWFwOnJlbW92ZTogZG9tNiBnZm49ZjMwODAgbWZuPWY3YTgwIG5yPTQwCihYRU4pIG1lbW9y
eV9tYXA6cmVtb3ZlOiBkb202IGdmbj1mMzBlMCBtZm49ZjdhYzAgbnI9MgooWEVOKSBtZW1vcnlf
bWFwOnJlbW92ZTogZG9tNiBnZm49ZjMwZTMgbWZuPWY3YWMzIG5yPTEKKFhFTikgaW9wb3J0X21h
cDpyZW1vdmU6IGRvbTYgZ3BvcnQ9YzIwMCBtcG9ydD1kMDAwIG5yPTEwMAooWEVOKSBtZW1vcnlf
bWFwOmFkZDogZG9tNiBnZm49ZjMwODAgbWZuPWY3YTgwIG5yPTQwCihYRU4pIG1lbW9yeV9tYXA6
YWRkOiBkb202IGdmbj1mMzBlMCBtZm49ZjdhYzAgbnI9MgooWEVOKSBtZW1vcnlfbWFwOmFkZDog
ZG9tNiBnZm49ZjMwZTMgbWZuPWY3YWMzIG5yPTEKKFhFTikgaW9wb3J0X21hcDphZGQ6IGRvbTYg
Z3BvcnQ9YzIwMCBtcG9ydD1kMDAwIG5yPTEwMAooWEVOKSBtZW1vcnlfbWFwOnJlbW92ZTogZG9t
NiBnZm49ZjMwODAgbWZuPWY3YTgwIG5yPTQwCihYRU4pIG1lbW9yeV9tYXA6cmVtb3ZlOiBkb202
IGdmbj1mMzBlMCBtZm49ZjdhYzAgbnI9MgooWEVOKSBtZW1vcnlfbWFwOnJlbW92ZTogZG9tNiBn
Zm49ZjMwZTMgbWZuPWY3YWMzIG5yPTEKKFhFTikgaW9wb3J0X21hcDpyZW1vdmU6IGRvbTYgZ3Bv
cnQ9YzIwMCBtcG9ydD1kMDAwIG5yPTEwMAooWEVOKSBtZW1vcnlfbWFwOmFkZDogZG9tNiBnZm49
ZjMwODAgbWZuPWY3YTgwIG5yPTQwCihYRU4pIG1lbW9yeV9tYXA6YWRkOiBkb202IGdmbj1mMzBl
MCBtZm49ZjdhYzAgbnI9MgooWEVOKSBtZW1vcnlfbWFwOmFkZDogZG9tNiBnZm49ZjMwZTMgbWZu
PWY3YWMzIG5yPTEKKFhFTikgaW9wb3J0X21hcDphZGQ6IGRvbTYgZ3BvcnQ9YzIwMCBtcG9ydD1k
MDAwIG5yPTEwMAooWEVOKSBtZW1vcnlfbWFwOnJlbW92ZTogZG9tNiBnZm49ZjMwODAgbWZuPWY3
YTgwIG5yPTQwCihYRU4pIG1lbW9yeV9tYXA6cmVtb3ZlOiBkb202IGdmbj1mMzBlMCBtZm49Zjdh
YzAgbnI9MgooWEVOKSBtZW1vcnlfbWFwOnJlbW92ZTogZG9tNiBnZm49ZjMwZTMgbWZuPWY3YWMz
IG5yPTEKKFhFTikgaW9wb3J0X21hcDpyZW1vdmU6IGRvbTYgZ3BvcnQ9YzIwMCBtcG9ydD1kMDAw
IG5yPTEwMAooWEVOKSBtZW1vcnlfbWFwOmFkZDogZG9tNiBnZm49ZjMwODAgbWZuPWY3YTgwIG5y
PTQwCihYRU4pIG1lbW9yeV9tYXA6YWRkOiBkb202IGdmbj1mMzBlMCBtZm49ZjdhYzAgbnI9Mgoo
WEVOKSBtZW1vcnlfbWFwOmFkZDogZG9tNiBnZm49ZjMwZTMgbWZuPWY3YWMzIG5yPTEKKFhFTikg
aW9wb3J0X21hcDphZGQ6IGRvbTYgZ3BvcnQ9YzIwMCBtcG9ydD1kMDAwIG5yPTEwMAooWEVOKSBt
ZW1vcnlfbWFwOnJlbW92ZTogZG9tNiBnZm49ZjMwODAgbWZuPWY3YTgwIG5yPTQwCihYRU4pIG1l
bW9yeV9tYXA6cmVtb3ZlOiBkb202IGdmbj1mMzBlMCBtZm49ZjdhYzAgbnI9MgooWEVOKSBtZW1v
cnlfbWFwOnJlbW92ZTogZG9tNiBnZm49ZjMwZTMgbWZuPWY3YWMzIG5yPTEKKFhFTikgaW9wb3J0
X21hcDpyZW1vdmU6IGRvbTYgZ3BvcnQ9YzIwMCBtcG9ydD1kMDAwIG5yPTEwMAooWEVOKSBtZW1v
cnlfbWFwOmFkZDogZG9tNiBnZm49ZjMwODAgbWZuPWY3YTgwIG5yPTQwCihYRU4pIG1lbW9yeV9t
YXA6YWRkOiBkb202IGdmbj1mMzBlMCBtZm49ZjdhYzAgbnI9MgooWEVOKSBtZW1vcnlfbWFwOmFk
ZDogZG9tNiBnZm49ZjMwZTMgbWZuPWY3YWMzIG5yPTEKKFhFTikgaW9wb3J0X21hcDphZGQ6IGRv
bTYgZ3BvcnQ9YzIwMCBtcG9ydD1kMDAwIG5yPTEwMAooWEVOKSBpcnEuYzoyNzA6IERvbTYgUENJ
IGxpbmsgMCBjaGFuZ2VkIDUgLT4gMAooWEVOKSBpcnEuYzoyNzA6IERvbTYgUENJIGxpbmsgMSBj
aGFuZ2VkIDEwIC0+IDAKKFhFTikgaXJxLmM6MjcwOiBEb202IFBDSSBsaW5rIDIgY2hhbmdlZCAx
MSAtPiAwCihYRU4pIGlycS5jOjI3MDogRG9tNiBQQ0kgbGluayAzIGNoYW5nZWQgNSAtPiAwCihY
RU4pIERFQlVHIGV2dGNobl9iaW5kX3BpcnEgNDA4IHBpcnE9MTggaXJxPTAgZW11aXJxPTEyCihY
RU4pIERFQlVHIGV2dGNobl9iaW5kX3BpcnEgNDA4IHBpcnE9MTggaXJxPTAgZW11aXJxPTEyCihY
RU4pIERFQlVHIGV2dGNobl9iaW5kX3BpcnEgNDA4IHBpcnE9MTkgaXJxPTAgZW11aXJxPTEKKFhF
TikgREVCVUcgZXZ0Y2huX2JpbmRfcGlycSA0MDggcGlycT0xNyBpcnE9MCBlbXVpcnE9OAooWEVO
KSBERUJVRyBldnRjaG5fYmluZF9waXJxIDQwOCBwaXJxPTIxIGlycT0wIGVtdWlycT00CihYRU4p
IERFQlVHIGV2dGNobl9iaW5kX3BpcnEgNDA4IHBpcnE9MjAgaXJxPTAgZW11aXJxPTYKKFhFTikg
REVCVUcgZXZ0Y2huX2JpbmRfcGlycSA0MDggcGlycT01NSBpcnE9LTEgZW11aXJxPS0xCihYRU4p
IERFQlVHIGh2bV9wY2lfbXNpX2Fzc2VydCBwaXJxPTQgaHZtX2RvbWFpbl91c2VfcGlycT0wIGVt
dWlycT0tMQooWEVOKSB2bXNpLmM6MTA4OmQzMjc2NyBVbnN1cHBvcnRlZCBkZWxpdmVyeSBtb2Rl
IDMKKFhFTikgREVCVUcgZXZ0Y2huX2JpbmRfcGlycSA0MDggcGlycT0yMiBpcnE9MCBlbXVpcnE9
NwooWEVOKSBERUJVRyBldnRjaG5fYmluZF9waXJxIDQwOCBwaXJxPTIxIGlycT0wIGVtdWlycT00
CihYRU4pIEhWTTc6IEhWTSBMb2FkZXIKKFhFTikgSFZNNzogRGV0ZWN0ZWQgWGVuIHY0LjItdW5z
dGFibGUKKFhFTikgSFZNNzogWGVuYnVzIHJpbmdzIEAweGZlZmZjMDAwLCBldmVudCBjaGFubmVs
IDQKKFhFTikgSFZNNzogU3lzdGVtIHJlcXVlc3RlZCBTZWFCSU9TCihYRU4pIEhWTTc6IENQVSBz
cGVlZCBpcyAzMjkzIE1IegooWEVOKSBpcnEuYzoyNzA6IERvbTcgUENJIGxpbmsgMCBjaGFuZ2Vk
IDAgLT4gNQooWEVOKSBIVk03OiBQQ0ktSVNBIGxpbmsgMCByb3V0ZWQgdG8gSVJRNQooWEVOKSBp
cnEuYzoyNzA6IERvbTcgUENJIGxpbmsgMSBjaGFuZ2VkIDAgLT4gMTAKKFhFTikgSFZNNzogUENJ
LUlTQSBsaW5rIDEgcm91dGVkIHRvIElSUTEwCihYRU4pIGlycS5jOjI3MDogRG9tNyBQQ0kgbGlu
ayAyIGNoYW5nZWQgMCAtPiAxMQooWEVOKSBIVk03OiBQQ0ktSVNBIGxpbmsgMiByb3V0ZWQgdG8g
SVJRMTEKKFhFTikgaXJxLmM6MjcwOiBEb203IFBDSSBsaW5rIDMgY2hhbmdlZCAwIC0+IDUKKFhF
TikgSFZNNzogUENJLUlTQSBsaW5rIDMgcm91dGVkIHRvIElSUTUKKFhFTikgSFZNNzogcGNpIGRl
diAwMTozIElOVEEtPklSUTEwCihYRU4pIEhWTTc6IHBjaSBkZXYgMDM6MCBJTlRBLT5JUlE1CihY
RU4pIEhWTTc6IHBjaSBkZXYgMDQ6MCBJTlRBLT5JUlE1CihYRU4pIEhWTTc6IHBjaSBkZXYgMDU6
MCBJTlRBLT5JUlExMAooWEVOKSBIVk03OiBwY2kgZGV2IDAyOjAgYmFyIDEwIHNpemUgMDIwMDAw
MDA6IGYwMDAwMDA4CihYRU4pIEhWTTc6IHBjaSBkZXYgMDM6MCBiYXIgMTQgc2l6ZSAwMTAwMDAw
MDogZjIwMDAwMDgKKFhFTikgSFZNNzogcGNpIGRldiAwNTowIGJhciAzMCBzaXplIDAwMDgwMDAw
OiBmMzAwMDAwMAooWEVOKSBtZW1vcnlfbWFwOmFkZDogZG9tNyBnZm49ZjMwODAgbWZuPWY3YTgw
IG5yPTQwCihYRU4pIEhWTTc6IHBjaSBkZXYgMDU6MCBiYXIgMWMgc2l6ZSAwMDA0MDAwMDogZjMw
ODAwMDQKKFhFTikgSFZNNzogcGNpIGRldiAwMjowIGJhciAzMCBzaXplIDAwMDEwMDAwOiBmMzBj
MDAwMAooWEVOKSBIVk03OiBwY2kgZGV2IDA0OjAgYmFyIDMwIHNpemUgMDAwMTAwMDA6IGYzMGQw
MDAwCihYRU4pIG1lbW9yeV9tYXA6YWRkOiBkb203IGdmbj1mMzBlMCBtZm49ZjdhYzAgbnI9Mgoo
WEVOKSBtZW1vcnlfbWFwOmFkZDogZG9tNyBnZm49ZjMwZTMgbWZuPWY3YWMzIG5yPTEKKFhFTikg
SFZNNzogcGNpIGRldiAwNTowIGJhciAxNCBzaXplIDAwMDA0MDAwOiBmMzBlMDAwNAooWEVOKSBI
Vk03OiBwY2kgZGV2IDAyOjAgYmFyIDE0IHNpemUgMDAwMDEwMDA6IGYzMGU0MDAwCihYRU4pIEhW
TTc6IHBjaSBkZXYgMDM6MCBiYXIgMTAgc2l6ZSAwMDAwMDEwMDogMDAwMGMwMDEKKFhFTikgSFZN
NzogcGNpIGRldiAwNDowIGJhciAxMCBzaXplIDAwMDAwMTAwOiAwMDAwYzEwMQooWEVOKSBIVk03
OiBwY2kgZGV2IDA0OjAgYmFyIDE0IHNpemUgMDAwMDAxMDA6IGYzMGU1MDAwCihYRU4pIEhWTTc6
IHBjaSBkZXYgMDU6MCBiYXIgMTAgc2l6ZSAwMDAwMDEwMDogMDAwMGMyMDEKKFhFTikgaW9wb3J0
X21hcDphZGQ6IGRvbTcgZ3BvcnQ9YzIwMCBtcG9ydD1kMDAwIG5yPTEwMAooWEVOKSBIVk03OiBw
Y2kgZGV2IDAxOjEgYmFyIDIwIHNpemUgMDAwMDAwMTA6IDAwMDBjMzAxCihYRU4pIEhWTTc6IE11
bHRpcHJvY2Vzc29yIGluaXRpYWxpc2F0aW9uOgooWEVOKSBIVk03OiAgLSBDUFUwIC4uLiAzNi1i
aXQgcGh5cyAuLi4gZml4ZWQgTVRSUnMgLi4uIHZhciBNVFJScyBbMi84XSAuLi4gZG9uZS4KKFhF
TikgSFZNNzogIC0gQ1BVMSAuLi4gMzYtYml0IHBoeXMgLi4uIGZpeGVkIE1UUlJzIC4uLiB2YXIg
TVRSUnMgWzIvOF0gLi4uIGRvbmUuCihYRU4pIEhWTTc6IFRlc3RpbmcgSFZNIGVudmlyb25tZW50
OgooWEVOKSBIVk03OiAgLSBSRVAgSU5TQiBhY3Jvc3MgcGFnZSBib3VuZGFyaWVzIC4uLiBwYXNz
ZWQKKFhFTikgSFZNNzogIC0gR1MgYmFzZSBNU1JzIGFuZCBTV0FQR1MgLi4uIHBhc3NlZAooWEVO
KSBIVk03OiBQYXNzZWQgMiBvZiAyIHRlc3RzCihYRU4pIEhWTTc6IFdyaXRpbmcgU01CSU9TIHRh
YmxlcyAuLi4KKFhFTikgSFZNNzogTG9hZGluZyBTZWFCSU9TIC4uLgooWEVOKSBIVk03OiBDcmVh
dGluZyBNUCB0YWJsZXMgLi4uCihYRU4pIEhWTTc6IExvYWRpbmcgQUNQSSAuLi4KKFhFTikgSFZN
Nzogdm04NiBUU1MgYXQgZmMwMGEwODAKKFhFTikgSFZNNzogQklPUyBtYXA6CihYRU4pIEhWTTc6
ICAxMDAwMC0xMDBkMzogU2NyYXRjaCBzcGFjZQooWEVOKSBIVk03OiAgZTAwMDAtZmZmZmY6IE1h
aW4gQklPUwooWEVOKSBIVk03OiBFODIwIHRhYmxlOgooWEVOKSBIVk03OiAgWzAwXTogMDAwMDAw
MDA6MDAwMDAwMDAgLSAwMDAwMDAwMDowMDBhMDAwMDogUkFNCihYRU4pIEhWTTc6ICBIT0xFOiAw
MDAwMDAwMDowMDBhMDAwMCAtIDAwMDAwMDAwOjAwMGUwMDAwCihYRU4pIEhWTTc6ICBbMDFdOiAw
MDAwMDAwMDowMDBlMDAwMCAtIDAwMDAwMDAwOjAwMTAwMDAwOiBSRVNFUlZFRAooWEVOKSBIVk03
OiAgWzAyXTogMDAwMDAwMDA6MDAxMDAwMDAgLSAwMDAwMDAwMDo3ZjgwMDAwMDogUkFNCihYRU4p
IEhWTTc6ICBIT0xFOiAwMDAwMDAwMDo3ZjgwMDAwMCAtIDAwMDAwMDAwOmZjMDAwMDAwCihYRU4p
IEhWTTc6ICBbMDNdOiAwMDAwMDAwMDpmYzAwMDAwMCAtIDAwMDAwMDAxOjAwMDAwMDAwOiBSRVNF
UlZFRAooWEVOKSBIVk03OiBJbnZva2luZyBTZWFCSU9TIC4uLgooWEVOKSBzdGR2Z2EuYzoxNDc6
ZDcgZW50ZXJpbmcgc3RkdmdhIGFuZCBjYWNoaW5nIG1vZGVzCihYRU4pIG1lbW9yeV9tYXA6YWRk
OiBkb203IGdmbj1mMzAwMCBtZm49ZjdhMDAgbnI9ODAKKFhFTikgbWVtb3J5X21hcDpyZW1vdmU6
IGRvbTcgZ2ZuPWYzMDAwIG1mbj1mN2EwMCBucj04MAooWEVOKSBzdGR2Z2EuYzoxNTE6ZDcgbGVh
dmluZyBzdGR2Z2EKKFhFTikgbWVtb3J5X21hcDpyZW1vdmU6IGRvbTcgZ2ZuPWYzMDgwIG1mbj1m
N2E4MCBucj00MAooWEVOKSBtZW1vcnlfbWFwOnJlbW92ZTogZG9tNyBnZm49ZjMwZTAgbWZuPWY3
YWMwIG5yPTIKKFhFTikgbWVtb3J5X21hcDpyZW1vdmU6IGRvbTcgZ2ZuPWYzMGUzIG1mbj1mN2Fj
MyBucj0xCihYRU4pIGlvcG9ydF9tYXA6cmVtb3ZlOiBkb203IGdwb3J0PWMyMDAgbXBvcnQ9ZDAw
MCBucj0xMDAKKFhFTikgbWVtb3J5X21hcDphZGQ6IGRvbTcgZ2ZuPWYzMDgwIG1mbj1mN2E4MCBu
cj00MAooWEVOKSBtZW1vcnlfbWFwOmFkZDogZG9tNyBnZm49ZjMwZTAgbWZuPWY3YWMwIG5yPTIK
KFhFTikgbWVtb3J5X21hcDphZGQ6IGRvbTcgZ2ZuPWYzMGUzIG1mbj1mN2FjMyBucj0xCihYRU4p
IGlvcG9ydF9tYXA6YWRkOiBkb203IGdwb3J0PWMyMDAgbXBvcnQ9ZDAwMCBucj0xMDAKKFhFTikg
bWVtb3J5X21hcDpyZW1vdmU6IGRvbTcgZ2ZuPWYzMDgwIG1mbj1mN2E4MCBucj00MAooWEVOKSBt
ZW1vcnlfbWFwOnJlbW92ZTogZG9tNyBnZm49ZjMwZTAgbWZuPWY3YWMwIG5yPTIKKFhFTikgbWVt
b3J5X21hcDpyZW1vdmU6IGRvbTcgZ2ZuPWYzMGUzIG1mbj1mN2FjMyBucj0xCihYRU4pIGlvcG9y
dF9tYXA6cmVtb3ZlOiBkb203IGdwb3J0PWMyMDAgbXBvcnQ9ZDAwMCBucj0xMDAKKFhFTikgbWVt
b3J5X21hcDphZGQ6IGRvbTcgZ2ZuPWYzMDgwIG1mbj1mN2E4MCBucj00MAooWEVOKSBtZW1vcnlf
bWFwOmFkZDogZG9tNyBnZm49ZjMwZTAgbWZuPWY3YWMwIG5yPTIKKFhFTikgbWVtb3J5X21hcDph
ZGQ6IGRvbTcgZ2ZuPWYzMGUzIG1mbj1mN2FjMyBucj0xCihYRU4pIGlvcG9ydF9tYXA6YWRkOiBk
b203IGdwb3J0PWMyMDAgbXBvcnQ9ZDAwMCBucj0xMDAKKFhFTikgbWVtb3J5X21hcDpyZW1vdmU6
IGRvbTcgZ2ZuPWYzMDgwIG1mbj1mN2E4MCBucj00MAooWEVOKSBtZW1vcnlfbWFwOnJlbW92ZTog
ZG9tNyBnZm49ZjMwZTAgbWZuPWY3YWMwIG5yPTIKKFhFTikgbWVtb3J5X21hcDpyZW1vdmU6IGRv
bTcgZ2ZuPWYzMGUzIG1mbj1mN2FjMyBucj0xCihYRU4pIGlvcG9ydF9tYXA6cmVtb3ZlOiBkb203
IGdwb3J0PWMyMDAgbXBvcnQ9ZDAwMCBucj0xMDAKKFhFTikgbWVtb3J5X21hcDphZGQ6IGRvbTcg
Z2ZuPWYzMDgwIG1mbj1mN2E4MCBucj00MAooWEVOKSBtZW1vcnlfbWFwOmFkZDogZG9tNyBnZm49
ZjMwZTAgbWZuPWY3YWMwIG5yPTIKKFhFTikgbWVtb3J5X21hcDphZGQ6IGRvbTcgZ2ZuPWYzMGUz
IG1mbj1mN2FjMyBucj0xCihYRU4pIGlvcG9ydF9tYXA6YWRkOiBkb203IGdwb3J0PWMyMDAgbXBv
cnQ9ZDAwMCBucj0xMDAKKFhFTikgbWVtb3J5X21hcDpyZW1vdmU6IGRvbTcgZ2ZuPWYzMDgwIG1m
bj1mN2E4MCBucj00MAooWEVOKSBtZW1vcnlfbWFwOnJlbW92ZTogZG9tNyBnZm49ZjMwZTAgbWZu
PWY3YWMwIG5yPTIKKFhFTikgbWVtb3J5X21hcDpyZW1vdmU6IGRvbTcgZ2ZuPWYzMGUzIG1mbj1m
N2FjMyBucj0xCihYRU4pIGlvcG9ydF9tYXA6cmVtb3ZlOiBkb203IGdwb3J0PWMyMDAgbXBvcnQ9
ZDAwMCBucj0xMDAKKFhFTikgbWVtb3J5X21hcDphZGQ6IGRvbTcgZ2ZuPWYzMDgwIG1mbj1mN2E4
MCBucj00MAooWEVOKSBtZW1vcnlfbWFwOmFkZDogZG9tNyBnZm49ZjMwZTAgbWZuPWY3YWMwIG5y
PTIKKFhFTikgbWVtb3J5X21hcDphZGQ6IGRvbTcgZ2ZuPWYzMGUzIG1mbj1mN2FjMyBucj0xCihY
RU4pIGlvcG9ydF9tYXA6YWRkOiBkb203IGdwb3J0PWMyMDAgbXBvcnQ9ZDAwMCBucj0xMDAK
--bcaec53d5ed7de620704c637f31e
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--bcaec53d5ed7de620704c637f31e--


From xen-devel-bounces@lists.xen.org Wed Aug 01 17:52:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 17:52:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swd5n-00054n-12; Wed, 01 Aug 2012 17:52:19 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <halcyon1981@gmail.com>) id 1Swd5k-00054S-St
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 17:52:17 +0000
X-Env-Sender: halcyon1981@gmail.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1343843526!6220035!1
X-Originating-IP: [74.125.82.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11796 invoked from network); 1 Aug 2012 17:52:06 -0000
Received: from mail-we0-f173.google.com (HELO mail-we0-f173.google.com)
	(74.125.82.173)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 17:52:06 -0000
Received: by weyz53 with SMTP id z53so6150030wey.32
	for <xen-devel@lists.xen.org>; Wed, 01 Aug 2012 10:52:05 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=ddzI3Tmdb/xDceflx/35Y5Uz5RHCH1yEMQo2FSqWZuY=;
	b=yysUmrJIwBacMenduBejmmiitfPzcQagIiEfnIujTYKI/llFAqE4Yef230bdCpJx5h
	zS4c7PmN7hNme31Du+nYYbGX3Pu9DW43K+9+e8HtocZR9gkQHMrOLCwA6dReDmpMFPUC
	cIkF31USOdN8lkB/OjsaAUm0Zirw1udZ6tgQXQU77l0Ua/XYzMTCq2O9j1lOx/T3vXK+
	guiCF8AR1PqTumJJ3xj1uzzd6DUtDMRGc7MFy62BhOusirGsuJ1cc18Jp0QZHzK5BtNq
	qLoq59aeeR4kEz4nc97AzvghSBphvT83K1/ey5G8LMgkXMlHpD+OqxWAP34WPWSfH380
	p5Ug==
MIME-Version: 1.0
Received: by 10.180.19.169 with SMTP id g9mr13811415wie.9.1343843523780; Wed,
	01 Aug 2012 10:52:03 -0700 (PDT)
Received: by 10.223.83.9 with HTTP; Wed, 1 Aug 2012 10:52:03 -0700 (PDT)
In-Reply-To: <alpine.DEB.2.02.1208011150330.4645@kaball.uk.xensource.com>
References: <CACA08Dwza8TjixUYz1o6PWzfaPwfz1jMiGVNyZE9KcJZ_Ln2oA@mail.gmail.com>
	<CANKx4w9XJGnD5u4hN+RhyOy7OUmF5nh-G8549ER1W19jy__u3Q@mail.gmail.com>
	<CANKx4w8Azvf1s0aqGSRSn9_f-Zq_uL_CAN2rfh5kd=udsT2YDg@mail.gmail.com>
	<CACA08Dw0B3xK_Lr6WQiCSgDewt9ED6GPm-_KaTZzQarsELPrcg@mail.gmail.com>
	<CANKx4w8Y0Uddmre-nsm0TT+GgP3AMiCuoXfbc2-2L+PG3TUgbg@mail.gmail.com>
	<CANKx4w9O0bW=GJknP=hnYW5nUAas8AyKhDVjXGMmgeBkRBon5w@mail.gmail.com>
	<CAHyyzzTMX4wcg+DNEL91dWmo0R-6oGJLNH5O50bSUeHkmTWAwQ@mail.gmail.com>
	<CANKx4w9BVBrmQwaCtJr6CsZ6OK=jV+pb35QtNLHp+Jp6=739aA@mail.gmail.com>
	<1342682536.18848.50.camel@dagon.hellion.org.uk>
	<alpine.DEB.2.02.1207191258580.23783@kaball.uk.xensource.com>
	<CANKx4w-UKwt9H45N9Kbox708OBaQEqG3niELp593h2P7oc+pjw@mail.gmail.com>
	<CANKx4w9=X7iXvQBrORgh6aNHEQ--+4QeFwXDBG0HV-DYk7HpxQ@mail.gmail.com>
	<alpine.DEB.2.02.1207311119500.4645@kaball.uk.xensource.com>
	<CANKx4w_9sX9hsgSkwV6Vb8xSVQ_4n+_sFjfXSDEPHQ1seH=9=g@mail.gmail.com>
	<alpine.DEB.2.02.1208011150330.4645@kaball.uk.xensource.com>
Date: Wed, 1 Aug 2012 10:52:03 -0700
Message-ID: <CANKx4w9AUuAUWtnW59NL7az8VJ-jLJn984hMqvGVwTEfVAvs1A@mail.gmail.com>
From: David Erickson <halcyon1981@gmail.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Content-Type: multipart/mixed; boundary=bcaec53d5ed7de620704c637f31e
Cc: Anthony Perard <anthony.perard@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	jacek burghardt <jaceksburghardt@gmail.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] LSI SAS2008 Option Rom Failure
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--bcaec53d5ed7de620704c637f31e
Content-Type: text/plain; charset=ISO-8859-1

On Wed, Aug 1, 2012 at 4:13 AM, Stefano Stabellini
<stefano.stabellini@eu.citrix.com> wrote:
> On Tue, 31 Jul 2012, David Erickson wrote:
>> On Tue, Jul 31, 2012 at 4:39 AM, Stefano Stabellini
>> <stefano.stabellini@eu.citrix.com> wrote:
>> > On Tue, 31 Jul 2012, David Erickson wrote:
>> >> Just got back in town, following up on the prior discussion.  I
>> >> successfully compiled the latest code (25688 and qemu upstream
>> >> 5e3bc7144edd6e4fa2824944e5eb16c28197dd5a), but am still having
>> >> problems during initialization of the card in the guest, in particular
>> >> the unsupported delivery mode 3 which seems to cause interrupt related
>> >> problems during init.  I've again attached the qemu-dm-log, and xl
>> >> dmesg log files, and additionally screenshots of the guest dmesg and
>> >> also for comparison starting the same livecd natively on the box.
>> >
>> > "unsupported delivery mode 3" means that the Linux guest is trying to
>> > remap the MSI onto an event channel but Xen is still trying to deliver
>> > the MSI using the emulated code path anyway.
>> >
>> > Adding
>> >
>> > #define XEN_PT_LOGGING_ENABLED 1
>> >
>> > at the top of hw/xen_pt.h and posting the additional QEMU logs could
>> > be helpful.
>> >
>> > The full Xen logs might also be useful. I would add some more tracing to
>> > the hypervisor too:
>> >
>> > diff --git a/xen/drivers/passthrough/io.c b/xen/drivers/passthrough/io.c
>> > index b5975d1..08f4ab7 100644
>> > --- a/xen/drivers/passthrough/io.c
>> > +++ b/xen/drivers/passthrough/io.c
>> > @@ -474,6 +474,11 @@ static void hvm_pci_msi_assert(
>> >  {
>> >      struct pirq *pirq = dpci_pirq(pirq_dpci);
>> >
>> > +    printk("DEBUG %s pirq=%d hvm_domain_use_pirq=%d emuirq=%d\n", __func__,
>> > +            pirq->pirq,
>> > +            hvm_domain_use_pirq(d, pirq),
>> > +            pirq->arch.hvm.emuirq);
>> > +
>> >      if ( hvm_domain_use_pirq(d, pirq) )
>> >          send_guest_pirq(d, pirq);
>> >      else
>>
>> Hi Stefano-
>> I made the modifications (it looks like that #define hasn't been used
>> in a while and caused a few compilation issues; I had to prefix most of
>> the logged variables with s->hostaddr.), and am attaching the
>> qemu-dm-ubuntu.log and the dmesg from xl.  You referred to full Xen
>> logs; where do I find those?
>
> Thanks for the logs!
> You can get the full Xen logs from the serial console, but you can also
> grab the last few lines with "xl dmesg", like you did, and that seems to
> be enough in this case.
>
>
> The initial MSI remapping has been done:
>
> [00:05.0] msi_msix_setup: requested pirq 4 for MSI-X (vec: 0, entry: 0)
> [00:05.0] msi_msix_update: Updating MSI-X with pirq 4 gvec 0 gflags 0x3037 (entry: 0)
>
> But the guest is not issuing the EVTCHNOP_bind_pirq hypercall that is
> necessary to be able to receive event notifications (emuirq=-1 in the
> Xen logs).
>
> Now we need to figure out why: we still need more logs, this time on the
> guest side.
> What is the kernel version that you are using in the guest?
> Could you please add "debug loglevel=9" to the guest kernel command line
> and then post the guest dmesg again?
> It would be great if you could use the emulated serial to get the logs
> rather than a picture. You can do that by adding serial='pty' to the VM
> config file and console=ttyS0 to the guest command line.
> This additional Xen change could also tell us if the EVTCHNOP_bind_pirq
> has been done:
>
>
> diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
> index 53777f8..d65a97a 100644
> --- a/xen/common/event_channel.c
> +++ b/xen/common/event_channel.c
> @@ -405,6 +405,8 @@ static long evtchn_bind_pirq(evtchn_bind_pirq_t *bind)
>  #ifdef CONFIG_X86
>      if ( is_hvm_domain(d) && domain_pirq_to_irq(d, pirq) > 0 )
>          map_domain_emuirq_pirq(d, pirq, IRQ_PT);
> +    printk("DEBUG %s %d pirq=%d irq=%d emuirq=%d\n", __func__, __LINE__,
> +            pirq, domain_pirq_to_irq(d, pirq), domain_pirq_to_emuirq(d, pirq));
>  #endif
>
>   out:
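
(For reference, the serial-console suggestion above corresponds to a VM
config fragment roughly like the one below. This is a sketch, not taken
from any config in this thread; for an HVM livecd, the console=ttyS0
part belongs on the guest's own kernel command line, e.g. set via the
guest bootloader, rather than in the VM config file.)

```
# xl/xm VM config: attach the guest's emulated serial port to a pty on
# dom0, readable with "xl console <domain>" or by opening the pty
serial='pty'
```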

The guest is an Ubuntu 11.10 livecd, kernel version 3.0.0-12-generic.
I've attached all the logs; thanks for the tip on the serial console,
very useful.

Additionally, I've attached logs from booting a Solaris livecd (my
ultimate goal is to use this HBA card in Solaris); with the serial
console tip I was able to capture its kernel boot as well.

Lastly, I still have the issue where no NIC is presented enabled in the
guests; my assumption is that this is caused by the error in
xen-hotplug.log ("RTNETLINK answers: Operation not supported"). Is
there some way to debug this? I checked my dom0's dmesg, and this is
what I get when I boot the Ubuntu livecd VM:

[ 2506.619039] device vif8.0 entered promiscuous mode
[ 2506.624304] ADDRCONF(NETDEV_UP): vif8.0: link is not ready
[ 2506.777865] device vif8.0-emu entered promiscuous mode
[ 2506.783073] xenbr0: port 3(vif8.0-emu) entering forwarding state
[ 2506.783079] xenbr0: port 3(vif8.0-emu) entering forwarding state
[ 2506.895379] pciback 0000:02:00.0: restoring config space at offset 0xf (was 0x100, writing 0x10a)
[ 2506.895393] pciback 0000:02:00.0: restoring config space at offset 0xc (was 0x0, writing 0xf7a00000)
[ 2506.895410] pciback 0000:02:00.0: restoring config space at offset 0x7 (was 0x4, writing 0xf7a80004)
[ 2506.895420] pciback 0000:02:00.0: restoring config space at offset 0x5 (was 0x4, writing 0xf7ac0004)
[ 2506.895428] pciback 0000:02:00.0: restoring config space at offset 0x4 (was 0x1, writing 0xd001)
[ 2506.895435] pciback 0000:02:00.0: restoring config space at offset 0x3 (was 0x0, writing 0x10)
[ 2507.056216] xen-pciback: vpci: 0000:02:00.0: assign to virtual slot 0
[ 2507.056398] pciback 0000:02:00.0: device has been assigned to another domain! Over-writting the ownership, but beware.
[ 2517.191338] vif8.0-emu: no IPv6 routers present
[ 2521.799320] xenbr0: port 3(vif8.0-emu) entering forwarding state

And here is my ifconfig output while the Ubuntu guest is booted (when no
VMs are running it doesn't have the vifs):
eth0      Link encap:Ethernet  HWaddr 00:e0:81:cb:db:74
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:71977 errors:0 dropped:0 overruns:0 frame:0
          TX packets:126629 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:5472933 (5.4 MB)  TX bytes:57934759 (57.9 MB)
          Interrupt:16 Memory:f7900000-f7920000

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:4002 errors:0 dropped:0 overruns:0 frame:0
          TX packets:4002 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:26667388 (26.6 MB)  TX bytes:26667388 (26.6 MB)

vif8.0    Link encap:Ethernet  HWaddr fe:ff:ff:ff:ff:ff
          UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:32
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

vif8.0-emu Link encap:Ethernet  HWaddr fe:ff:ff:ff:ff:ff
          inet6 addr: fe80::fcff:ffff:feff:ffff/64 Scope:Link
          UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:172 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:500
          RX bytes:0 (0.0 B)  TX bytes:26220 (26.2 KB)

xenbr0    Link encap:Ethernet  HWaddr 00:e0:81:cb:db:74
          inet addr:192.168.1.7  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::2e0:81ff:fecb:db74/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:71245 errors:0 dropped:0 overruns:0 frame:0
          TX packets:98995 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:3595409 (3.5 MB)  TX bytes:55909320 (55.9 MB)

And here is the vif line from my ubuntu.conf:
vif = ['bridge=xenbr0']
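(For comparison, a fully spelled-out vif line would look something like the
following; the MAC and NIC model here are made-up example values, not taken
from my actual config:

vif = ['mac=00:16:3e:12:34:56, bridge=xenbr0, model=e1000']

I'm only setting the bridge and letting Xen pick the rest.)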

Thanks-
David

--bcaec53d5ed7de620704c637f31e
Content-Type: application/octet-stream; name="xen-hotplug.log"
Content-Disposition: attachment; filename="xen-hotplug.log"
Content-Transfer-Encoding: 7bit
X-Attachment-Id: f_h5coz3wa0

RTNETLINK answers: Operation not supported
--bcaec53d5ed7de620704c637f31e
Content-Type: application/octet-stream; name="qemu-dm-ubuntu.log"
Content-Disposition: attachment; filename="qemu-dm-ubuntu.log"
Content-Transfer-Encoding: 7bit
X-Attachment-Id: f_h5coz3wc1

char device redirected to /dev/pts/2
xc: error: linux_gnttab_set_max_grants: ioctl SET_MAX_GRANTS failed (22 = Invalid argument): Internal error
xen be: qdisk-5632: xc_gnttab_set_max_grants failed: Invalid argument
[00:05.0] xen_pt_initfn: Assigning real physical device 02:00.0 to devfn 0x28
[00:05.0] xen_pt_register_regions: IO region 0 registered (size=0x00000100 base_addr=0x0000d000 type: 0x1)
[00:05.0] xen_pt_register_regions: IO region 1 registered (size=0x00004000 base_addr=0xf7ac0000 type: 0)
[00:05.0] xen_pt_register_regions: IO region 3 registered (size=0x00040000 base_addr=0xf7a80000 type: 0)
[00:05.0] xen_pt_register_regions: Expansion ROM registered (size=0x00080000 base_addr=0xf7a00000)
[00:05.0] xen_pt_msix_init: get MSI-X table BAR base 0xf7ac0000
[00:05.0] xen_pt_msix_init: table_off = 0x2000, total_entries = 15
[00:05.0] xen_pt_msix_init: mapping physical MSI-X table to 0x7fb4d02ca000
[00:05.0] xen_pt_pci_intx: intx=1
[00:05.0] xen_pt_initfn: Real physical device 02:00.0 registered successfuly!
[00:05.0] xen_pt_msixctrl_reg_write: enable MSI-X
[00:05.0] msi_msix_setup: requested pirq 4 for MSI-X (vec: 0, entry: 0)
[00:05.0] msi_msix_update: Updating MSI-X with pirq 4 gvec 0 gflags 0x3037 (entry: 0)
[00:05.0] pci_msix_write: Error: Can't update msix entry 0 since MSI-X is already enabled.
[00:05.0] pci_msix_write: Error: Can't update msix entry 0 since MSI-X is already enabled.
[00:05.0] pci_msix_write: Error: Can't update msix entry 0 since MSI-X is already enabled.
[00:05.0] pci_msix_write: Error: Can't update msix entry 0 since MSI-X is already enabled.
[00:05.0] xen_pt_msixctrl_reg_write: disable MSI-X
--bcaec53d5ed7de620704c637f31e
Content-Type: application/octet-stream; name="ubuntu_guest_boot.log"
Content-Disposition: attachment; filename="ubuntu_guest_boot.log"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_h5coz3wd2

WyAgICAwLjAwMDAwMF0gSW5pdGlhbGl6aW5nIGNncm91cCBzdWJzeXMgY3B1c2V0DQpbICAgIDAu
MDAwMDAwXSBJbml0aWFsaXppbmcgY2dyb3VwIHN1YnN5cyBjcHUNClsgICAgMC4wMDAwMDBdIExp
bnV4IHZlcnNpb24gMy4wLjAtMTItZ2VuZXJpYyAoYnVpbGRkQGNyZXN0ZWQpIChnY2MgdmVyc2lv
biA0LjYuMSAoVWJ1bnR1L0xpbmFybyA0LjYuMS05dWJ1bnR1MykgKSAjMjAtVWJ1bnR1IFNNUCBG
cmkgT2N0IDcgMTQ6NTY6MjUgVVRDIDIwMTEgKFVidW50dSAzLjAuMC0xMi4yMC1nZW5lcmljIDMu
MC40KQ0KWyAgICAwLjAwMDAwMF0gQ29tbWFuZCBsaW5lOiBmaWxlPS9jZHJvbS9wcmVzZWVkL3Vi
dW50dS5zZWVkIGJvb3Q9Y2FzcGVyIGluaXRyZD0vY2FzcGVyL2luaXRyZC5seiBxdWlldCBzcGxh
c2ggZGVidWcgbG9nbGV2ZWw9OSBjb25zb2xlPXR0eTAgY29uc29sZT10dHlTMCwxMTUyMDBuOCBj
b25zb2xlPXR0eVMwIC0tDQpbICAgIDAuMDAwMDAwXSBLRVJORUwgc3VwcG9ydGVkIGNwdXM6DQpb
ICAgIDAuMDAwMDAwXSAgIEludGVsIEdlbnVpbmVJbnRlbA0KWyAgICAwLjAwMDAwMF0gICBBTUQg
QXV0aGVudGljQU1EDQpbICAgIDAuMDAwMDAwXSAgIENlbnRhdXIgQ2VudGF1ckhhdWxzDQpbICAg
IDAuMDAwMDAwXSBCSU9TLXByb3ZpZGVkIHBoeXNpY2FsIFJBTSBtYXA6DQpbICAgIDAuMDAwMDAw
XSAgQklPUy1lODIwOiAwMDAwMDAwMDAwMDAwMDAwIC0gMDAwMDAwMDAwMDA5ZTQwMCAodXNhYmxl
KQ0KWyAgICAwLjAwMDAwMF0gIEJJT1MtZTgyMDogMDAwMDAwMDAwMDA5ZTQwMCAtIDAwMDAwMDAw
MDAwYTAwMDAgKHJlc2VydmVkKQ0KWyAgICAwLjAwMDAwMF0gIEJJT1MtZTgyMDogMDAwMDAwMDAw
MDBmMDAwMCAtIDAwMDAwMDAwMDAxMDAwMDAgKHJlc2VydmVkKQ0KWyAgICAwLjAwMDAwMF0gIEJJ
T1MtZTgyMDogMDAwMDAwMDAwMDEwMDAwMCAtIDAwMDAwMDAwN2Y3ZmYwMDAgKHVzYWJsZSkNClsg
ICAgMC4wMDAwMDBdICBCSU9TLWU4MjA6IDAwMDAwMDAwN2Y3ZmYwMDAgLSAwMDAwMDAwMDdmODAw
MDAwIChyZXNlcnZlZCkNClsgICAgMC4wMDAwMDBdICBCSU9TLWU4MjA6IDAwMDAwMDAwZmMwMDAw
MDAgLSAwMDAwMDAwMTAwMDAwMDAwIChyZXNlcnZlZCkNClsgICAgMC4wMDAwMDBdIE5YIChFeGVj
dXRlIERpc2FibGUpIHByb3RlY3Rpb246IGFjdGl2ZQ0KWyAgICAwLjAwMDAwMF0gRE1JIDIuNCBw
cmVzZW50Lg0KWyAgICAwLjAwMDAwMF0gRE1JOiBYZW4gSFZNIGRvbVUsIEJJT1MgNC4yLXVuc3Rh
YmxlIDA3LzMwLzIwMTINClsgICAgMC4wMDAwMDBdIEh5cGVydmlzb3IgZGV0ZWN0ZWQ6IFhlbiBI
Vk0NClsgICAgMC4wMDAwMDBdIFhlbiB2ZXJzaW9uIDQuMi4NClsgICAgMC4wMDAwMDBdIFhlbiBQ
bGF0Zm9ybSBQQ0k6IEkvTyBwcm90b2NvbCB2ZXJzaW9uIDENClsgICAgMC4wMDAwMDBdIE5ldGZy
b250IGFuZCB0aGUgWGVuIHBsYXRmb3JtIFBDSSBkcml2ZXIgaGF2ZSBiZWVuIGNvbXBpbGVkIGZv
ciB0aGlzIGtlcm5lbDogdW5wbHVnIGVtdWxhdGVkIE5JQ3MuDQpbICAgIDAuMDAwMDAwXSBCbGtm
cm9udCBhbmQgdGhlIFhlbiBwbGF0Zm9ybSBQQ0kgZHJpdmVyIGhhdmUgYmVlbiBjb21waWxlZCBm
b3IgdGhpcyBrZXJuZWw6IHVucGx1ZyBlbXVsYXRlZCBkaXNrcy4NClsgICAgMC4wMDAwMDBdIFlv
dSBtaWdodCBoYXZlIHRvIGNoYW5nZSB0aGUgcm9vdCBkZXZpY2UNClsgICAgMC4wMDAwMDBdIGZy
b20gL2Rldi9oZFthLWRdIHRvIC9kZXYveHZkW2EtZF0NClsgICAgMC4wMDAwMDBdIGluIHlvdXIg
cm9vdD0ga2VybmVsIGNvbW1hbmQgbGluZSBvcHRpb24NClsgICAgMC4wMDAwMDBdIEhWTU9QX3Bh
Z2V0YWJsZV9keWluZyBub3Qgc3VwcG9ydGVkDQpbICAgIDAuMDAwMDAwXSBlODIwIHVwZGF0ZSBy
YW5nZTogMDAwMDAwMDAwMDAwMDAwMCAtIDAwMDAwMDAwMDAwMTAwMDAgKHVzYWJsZSkgPT0+IChy
ZXNlcnZlZCkNClsgICAgMC4wMDAwMDBdIGU4MjAgcmVtb3ZlIHJhbmdlOiAwMDAwMDAwMDAwMGEw
MDAwIC0gMDAwMDAwMDAwMDEwMDAwMCAodXNhYmxlKQ0KWyAgICAwLjAwMDAwMF0gTm8gQUdQIGJy
aWRnZSBmb3VuZA0KWyAgICAwLjAwMDAwMF0gbGFzdF9wZm4gPSAweDdmN2ZmIG1heF9hcmNoX3Bm
biA9IDB4NDAwMDAwMDAwDQpbICAgIDAuMDAwMDAwXSBNVFJSIGRlZmF1bHQgdHlwZTogd3JpdGUt
YmFjaw0KWyAgICAwLjAwMDAwMF0gTVRSUiBmaXhlZCByYW5nZXMgZW5hYmxlZDoNClsgICAgMC4w
MDAwMDBdICAgMDAwMDAtOUZGRkYgd3JpdGUtYmFjaw0KWyAgICAwLjAwMDAwMF0gICBBMDAwMC1C
RkZGRiB3cml0ZS1jb21iaW5pbmcNClsgICAgMC4wMDAwMDBdICAgQzAwMDAtRkZGRkYgd3JpdGUt
YmFjaw0KWyAgICAwLjAwMDAwMF0gTVRSUiB2YXJpYWJsZSByYW5nZXMgZW5hYmxlZDoNClsgICAg
MC4wMDAwMDBdICAgMCBiYXNlIDBGMDAwMDAwMCBtYXNrIEZGODAwMDAwMCB1bmNhY2hhYmxlDQpb
ICAgIDAuMDAwMDAwXSAgIDEgYmFzZSAwRjgwMDAwMDAgbWFzayBGRkMwMDAwMDAgdW5jYWNoYWJs
ZQ0KWyAgICAwLjAwMDAwMF0gICAyIGRpc2FibGVkDQpbICAgIDAuMDAwMDAwXSAgIDMgZGlzYWJs
ZWQNClsgICAgMC4wMDAwMDBdICAgNCBkaXNhYmxlZA0KWyAgICAwLjAwMDAwMF0gICA1IGRpc2Fi
bGVkDQpbICAgIDAuMDAwMDAwXSAgIDYgZGlzYWJsZWQNClsgICAgMC4wMDAwMDBdICAgNyBkaXNh
YmxlZA0KWyAgICAwLjAwMDAwMF0geDg2IFBBVCBlbmFibGVkOiBjcHUgMCwgb2xkIDB4NzA0MDYw
MDA3MDQwNiwgbmV3IDB4NzAxMDYwMDA3MDEwNg0KWyAgICAwLjAwMDAwMF0gZm91bmQgU01QIE1Q
LXRhYmxlIGF0IFtmZmZmODgwMDAwMGZkYWQwXSBmZGFkMA0KWyAgICAwLjAwMDAwMF0gaW5pdGlh
bCBtZW1vcnkgbWFwcGVkIDogMCAtIDIwMDAwMDAwDQpbICAgIDAuMDAwMDAwXSBCYXNlIG1lbW9y
eSB0cmFtcG9saW5lIGF0IFtmZmZmODgwMDAwMDk5MDAwXSA5OTAwMCBzaXplIDIwNDgwDQpbICAg
IDAuMDAwMDAwXSBpbml0X21lbW9yeV9tYXBwaW5nOiAwMDAwMDAwMDAwMDAwMDAwLTAwMDAwMDAw
N2Y3ZmYwMDANClsgICAgMC4wMDAwMDBdICAwMDAwMDAwMDAwIC0gMDA3ZjYwMDAwMCBwYWdlIDJN
DQpbICAgIDAuMDAwMDAwXSAgMDA3ZjYwMDAwMCAtIDAwN2Y3ZmYwMDAgcGFnZSA0aw0KWyAgICAw
LjAwMDAwMF0ga2VybmVsIGRpcmVjdCBtYXBwaW5nIHRhYmxlcyB1cCB0byA3ZjdmZjAwMCBAIDdl
YTE4MDAwLTdlYTFjMDAwDQpbICAgIDAuMDAwMDAwXSBSQU1ESVNLOiA3ZWExYzAwMCAtIDdmN2Zm
MDAwDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBSU0RQIDAwMDAwMDAwMDAwZmRhMjAgMDAwMjQgKHYw
MiAgICBYZW4pDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBYU0RUIDAwMDAwMDAwZmMwMDlmZDAgMDAw
NTQgKHYwMSAgICBYZW4gICAgICBIVk0gMDAwMDAwMDAgSFZNTCAwMDAwMDAwMCkNClsgICAgMC4w
MDAwMDBdIEFDUEk6IEZBQ1AgMDAwMDAwMDBmYzAwOTkwMCAwMDBGNCAodjA0ICAgIFhlbiAgICAg
IEhWTSAwMDAwMDAwMCBIVk1MIDAwMDAwMDAwKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogRFNEVCAw
MDAwMDAwMGZjMDAxMmIwIDA4NUNEICh2MDIgICAgWGVuICAgICAgSFZNIDAwMDAwMDAwIElOVEwg
MjAxMDA1MjgpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBGQUNTIDAwMDAwMDAwZmMwMDEyNzAgMDAw
NDANClsgICAgMC4wMDAwMDBdIEFDUEk6IEFQSUMgMDAwMDAwMDBmYzAwOWEwMCAwMDQ2MCAodjAy
ICAgIFhlbiAgICAgIEhWTSAwMDAwMDAwMCBIVk1MIDAwMDAwMDAwKQ0KWyAgICAwLjAwMDAwMF0g
QUNQSTogSFBFVCAwMDAwMDAwMGZjMDA5ZWUwIDAwMDM4ICh2MDEgICAgWGVuICAgICAgSFZNIDAw
MDAwMDAwIEhWTUwgMDAwMDAwMDApDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBXQUVUIDAwMDAwMDAw
ZmMwMDlmMjAgMDAwMjggKHYwMSAgICBYZW4gICAgICBIVk0gMDAwMDAwMDAgSFZNTCAwMDAwMDAw
MCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IFNTRFQgMDAwMDAwMDBmYzAwOWY1MCAwMDAzMSAodjAy
ICAgIFhlbiAgICAgIEhWTSAwMDAwMDAwMCBJTlRMIDIwMTAwNTI4KQ0KWyAgICAwLjAwMDAwMF0g
QUNQSTogU1NEVCAwMDAwMDAwMGZjMDA5ZjkwIDAwMDMxICh2MDIgICAgWGVuICAgICAgSFZNIDAw
MDAwMDAwIElOVEwgMjAxMDA1MjgpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBMb2NhbCBBUElDIGFk
ZHJlc3MgMHhmZWUwMDAwMA0KWyAgICAwLjAwMDAwMF0gTm8gTlVNQSBjb25maWd1cmF0aW9uIGZv
dW5kDQpbICAgIDAuMDAwMDAwXSBGYWtpbmcgYSBub2RlIGF0IDAwMDAwMDAwMDAwMDAwMDAtMDAw
MDAwMDA3ZjdmZjAwMA0KWyAgICAwLjAwMDAwMF0gSW5pdG1lbSBzZXR1cCBub2RlIDAgMDAwMDAw
MDAwMDAwMDAwMC0wMDAwMDAwMDdmN2ZmMDAwDQpbICAgIDAuMDAwMDAwXSAgIE5PREVfREFUQSBb
MDAwMDAwMDA3ZWExMzAwMCAtIDAwMDAwMDAwN2VhMTdmZmZdDQpbICAgIDAuMDAwMDAwXSAgW2Zm
ZmZlYTAwMDAwMDAwMDAtZmZmZmVhMDAwMWJmZmZmZl0gUE1EIC0+IFtmZmZmODgwMDdjMjAwMDAw
LWZmZmY4ODAwN2RkZmZmZmZdIG9uIG5vZGUgMA0KWyAgICAwLjAwMDAwMF0gWm9uZSBQRk4gcmFu
Z2VzOg0KWyAgICAwLjAwMDAwMF0gICBETUEgICAgICAweDAwMDAwMDEwIC0+IDB4MDAwMDEwMDAN
ClsgICAgMC4wMDAwMDBdICAgRE1BMzIgICAgMHgwMDAwMTAwMCAtPiAweDAwMTAwMDAwDQpbICAg
IDAuMDAwMDAwXSAgIE5vcm1hbCAgIGVtcHR5DQpbICAgIDAuMDAwMDAwXSBNb3ZhYmxlIHpvbmUg
c3RhcnQgUEZOIGZvciBlYWNoIG5vZGUNClsgICAgMC4wMDAwMDBdIGVhcmx5X25vZGVfbWFwWzJd
IGFjdGl2ZSBQRk4gcmFuZ2VzDQpbICAgIDAuMDAwMDAwXSAgICAgMDogMHgwMDAwMDAxMCAtPiAw
eDAwMDAwMDllDQpbICAgIDAuMDAwMDAwXSAgICAgMDogMHgwMDAwMDEwMCAtPiAweDAwMDdmN2Zm
DQpbICAgIDAuMDAwMDAwXSBPbiBub2RlIDAgdG90YWxwYWdlczogNTIyMTI1DQpbICAgIDAuMDAw
MDAwXSAgIERNQSB6b25lOiA1NiBwYWdlcyB1c2VkIGZvciBtZW1tYXANClsgICAgMC4wMDAwMDBd
ICAgRE1BIHpvbmU6IDUgcGFnZXMgcmVzZXJ2ZWQNClsgICAgMC4wMDAwMDBdICAgRE1BIHpvbmU6
IDM5MjEgcGFnZXMsIExJRk8gYmF0Y2g6MA0KWyAgICAwLjAwMDAwMF0gICBETUEzMiB6b25lOiA3
MDg0IHBhZ2VzIHVzZWQgZm9yIG1lbW1hcA0KWyAgICAwLjAwMDAwMF0gICBETUEzMiB6b25lOiA1
MTEwNTkgcGFnZXMsIExJRk8gYmF0Y2g6MzENClsgICAgMC4wMDAwMDBdIEFDUEk6IFBNLVRpbWVy
IElPIFBvcnQ6IDB4YjAwOA0KWyAgICAwLjAwMDAwMF0gQUNQSTogTG9jYWwgQVBJQyBhZGRyZXNz
IDB4ZmVlMDAwMDANClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElDIChhY3BpX2lkWzB4MDBdIGxh
cGljX2lkWzB4MDBdIGVuYWJsZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBMQVBJQyAoYWNwaV9p
ZFsweDAxXSBsYXBpY19pZFsweDAyXSBlbmFibGVkKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogTEFQ
SUMgKGFjcGlfaWRbMHgwMl0gbGFwaWNfaWRbMHgwNF0gZGlzYWJsZWQpDQpbICAgIDAuMDAwMDAw
XSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDAzXSBsYXBpY19pZFsweDA2XSBkaXNhYmxlZCkNClsg
ICAgMC4wMDAwMDBdIEFDUEk6IExBUElDIChhY3BpX2lkWzB4MDRdIGxhcGljX2lkWzB4MDhdIGRp
c2FibGVkKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgwNV0gbGFwaWNf
aWRbMHgwYV0gZGlzYWJsZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsw
eDA2XSBsYXBpY19pZFsweDBjXSBkaXNhYmxlZCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElD
IChhY3BpX2lkWzB4MDddIGxhcGljX2lkWzB4MGVdIGRpc2FibGVkKQ0KWyAgICAwLjAwMDAwMF0g
QUNQSTogTEFQSUMgKGFjcGlfaWRbMHgwOF0gbGFwaWNfaWRbMHgxMF0gZGlzYWJsZWQpDQpbICAg
IDAuMDAwMDAwXSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDA5XSBsYXBpY19pZFsweDEyXSBkaXNh
YmxlZCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElDIChhY3BpX2lkWzB4MGFdIGxhcGljX2lk
WzB4MTRdIGRpc2FibGVkKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgw
Yl0gbGFwaWNfaWRbMHgxNl0gZGlzYWJsZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBMQVBJQyAo
YWNwaV9pZFsweDBjXSBsYXBpY19pZFsweDE4XSBkaXNhYmxlZCkNClsgICAgMC4wMDAwMDBdIEFD
UEk6IExBUElDIChhY3BpX2lkWzB4MGRdIGxhcGljX2lkWzB4MWFdIGRpc2FibGVkKQ0KWyAgICAw
LjAwMDAwMF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgwZV0gbGFwaWNfaWRbMHgxY10gZGlzYWJs
ZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDBmXSBsYXBpY19pZFsw
eDFlXSBkaXNhYmxlZCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElDIChhY3BpX2lkWzB4MTBd
IGxhcGljX2lkWzB4MjBdIGRpc2FibGVkKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogTEFQSUMgKGFj
cGlfaWRbMHgxMV0gbGFwaWNfaWRbMHgyMl0gZGlzYWJsZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJ
OiBMQVBJQyAoYWNwaV9pZFsweDEyXSBsYXBpY19pZFsweDI0XSBkaXNhYmxlZCkNClsgICAgMC4w
MDAwMDBdIEFDUEk6IExBUElDIChhY3BpX2lkWzB4MTNdIGxhcGljX2lkWzB4MjZdIGRpc2FibGVk
KQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgxNF0gbGFwaWNfaWRbMHgy
OF0gZGlzYWJsZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDE1XSBs
YXBpY19pZFsweDJhXSBkaXNhYmxlZCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElDIChhY3Bp
X2lkWzB4MTZdIGxhcGljX2lkWzB4MmNdIGRpc2FibGVkKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTog
TEFQSUMgKGFjcGlfaWRbMHgxN10gbGFwaWNfaWRbMHgyZV0gZGlzYWJsZWQpDQpbICAgIDAuMDAw
MDAwXSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDE4XSBsYXBpY19pZFsweDMwXSBkaXNhYmxlZCkN
ClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElDIChhY3BpX2lkWzB4MTldIGxhcGljX2lkWzB4MzJd
IGRpc2FibGVkKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgxYV0gbGFw
aWNfaWRbMHgzNF0gZGlzYWJsZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBMQVBJQyAoYWNwaV9p
ZFsweDFiXSBsYXBpY19pZFsweDM2XSBkaXNhYmxlZCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IExB
UElDIChhY3BpX2lkWzB4MWNdIGxhcGljX2lkWzB4MzhdIGRpc2FibGVkKQ0KWyAgICAwLjAwMDAw
MF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgxZF0gbGFwaWNfaWRbMHgzYV0gZGlzYWJsZWQpDQpb
ICAgIDAuMDAwMDAwXSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDFlXSBsYXBpY19pZFsweDNjXSBk
aXNhYmxlZCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElDIChhY3BpX2lkWzB4MWZdIGxhcGlj
X2lkWzB4M2VdIGRpc2FibGVkKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogTEFQSUMgKGFjcGlfaWRb
MHgyMF0gbGFwaWNfaWRbMHg0MF0gZGlzYWJsZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBMQVBJ
QyAoYWNwaV9pZFsweDIxXSBsYXBpY19pZFsweDQyXSBkaXNhYmxlZCkNClsgICAgMC4wMDAwMDBd
IEFDUEk6IExBUElDIChhY3BpX2lkWzB4MjJdIGxhcGljX2lkWzB4NDRdIGRpc2FibGVkKQ0KWyAg
ICAwLjAwMDAwMF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgyM10gbGFwaWNfaWRbMHg0Nl0gZGlz
YWJsZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDI0XSBsYXBpY19p
ZFsweDQ4XSBkaXNhYmxlZCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElDIChhY3BpX2lkWzB4
MjVdIGxhcGljX2lkWzB4NGFdIGRpc2FibGVkKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogTEFQSUMg
KGFjcGlfaWRbMHgyNl0gbGFwaWNfaWRbMHg0Y10gZGlzYWJsZWQpDQpbICAgIDAuMDAwMDAwXSBB
Q1BJOiBMQVBJQyAoYWNwaV9pZFsweDI3XSBsYXBpY19pZFsweDRlXSBkaXNhYmxlZCkNClsgICAg
MC4wMDAwMDBdIEFDUEk6IExBUElDIChhY3BpX2lkWzB4MjhdIGxhcGljX2lkWzB4NTBdIGRpc2Fi
bGVkKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgyOV0gbGFwaWNfaWRb
MHg1Ml0gZGlzYWJsZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDJh
XSBsYXBpY19pZFsweDU0XSBkaXNhYmxlZCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElDIChh
Y3BpX2lkWzB4MmJdIGxhcGljX2lkWzB4NTZdIGRpc2FibGVkKQ0KWyAgICAwLjAwMDAwMF0gQUNQ
STogTEFQSUMgKGFjcGlfaWRbMHgyY10gbGFwaWNfaWRbMHg1OF0gZGlzYWJsZWQpDQpbICAgIDAu
MDAwMDAwXSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDJkXSBsYXBpY19pZFsweDVhXSBkaXNhYmxl
ZCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElDIChhY3BpX2lkWzB4MmVdIGxhcGljX2lkWzB4
NWNdIGRpc2FibGVkKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgyZl0g
bGFwaWNfaWRbMHg1ZV0gZGlzYWJsZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBMQVBJQyAoYWNw
aV9pZFsweDMwXSBsYXBpY19pZFsweDYwXSBkaXNhYmxlZCkNClsgICAgMC4wMDAwMDBdIEFDUEk6
IExBUElDIChhY3BpX2lkWzB4MzFdIGxhcGljX2lkWzB4NjJdIGRpc2FibGVkKQ0KWyAgICAwLjAw
MDAwMF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgzMl0gbGFwaWNfaWRbMHg2NF0gZGlzYWJsZWQp
DQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDMzXSBsYXBpY19pZFsweDY2
XSBkaXNhYmxlZCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElDIChhY3BpX2lkWzB4MzRdIGxh
cGljX2lkWzB4NjhdIGRpc2FibGVkKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogTEFQSUMgKGFjcGlf
aWRbMHgzNV0gbGFwaWNfaWRbMHg2YV0gZGlzYWJsZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBM
QVBJQyAoYWNwaV9pZFsweDM2XSBsYXBpY19pZFsweDZjXSBkaXNhYmxlZCkNClsgICAgMC4wMDAw
MDBdIEFDUEk6IExBUElDIChhY3BpX2lkWzB4MzddIGxhcGljX2lkWzB4NmVdIGRpc2FibGVkKQ0K
WyAgICAwLjAwMDAwMF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgzOF0gbGFwaWNfaWRbMHg3MF0g
ZGlzYWJsZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDM5XSBsYXBp
Y19pZFsweDcyXSBkaXNhYmxlZCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElDIChhY3BpX2lk
WzB4M2FdIGxhcGljX2lkWzB4NzRdIGRpc2FibGVkKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogTEFQ
SUMgKGFjcGlfaWRbMHgzYl0gbGFwaWNfaWRbMHg3Nl0gZGlzYWJsZWQpDQpbICAgIDAuMDAwMDAw
XSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDNjXSBsYXBpY19pZFsweDc4XSBkaXNhYmxlZCkNClsg
ICAgMC4wMDAwMDBdIEFDUEk6IExBUElDIChhY3BpX2lkWzB4M2RdIGxhcGljX2lkWzB4N2FdIGRp
c2FibGVkKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgzZV0gbGFwaWNf
aWRbMHg3Y10gZGlzYWJsZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsw
eDNmXSBsYXBpY19pZFsweDdlXSBkaXNhYmxlZCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElD
IChhY3BpX2lkWzB4NDBdIGxhcGljX2lkWzB4ODBdIGRpc2FibGVkKQ0KWyAgICAwLjAwMDAwMF0g
QUNQSTogTEFQSUMgKGFjcGlfaWRbMHg0MV0gbGFwaWNfaWRbMHg4Ml0gZGlzYWJsZWQpDQpbICAg
IDAuMDAwMDAwXSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDQyXSBsYXBpY19pZFsweDg0XSBkaXNh
YmxlZCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElDIChhY3BpX2lkWzB4NDNdIGxhcGljX2lk
WzB4ODZdIGRpc2FibGVkKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHg0
NF0gbGFwaWNfaWRbMHg4OF0gZGlzYWJsZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBMQVBJQyAo
YWNwaV9pZFsweDQ1XSBsYXBpY19pZFsweDhhXSBkaXNhYmxlZCkNClsgICAgMC4wMDAwMDBdIEFD
UEk6IExBUElDIChhY3BpX2lkWzB4NDZdIGxhcGljX2lkWzB4OGNdIGRpc2FibGVkKQ0KWyAgICAw
LjAwMDAwMF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHg0N10gbGFwaWNfaWRbMHg4ZV0gZGlzYWJs
ZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDQ4XSBsYXBpY19pZFsw
eDkwXSBkaXNhYmxlZCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElDIChhY3BpX2lkWzB4NDld
IGxhcGljX2lkWzB4OTJdIGRpc2FibGVkKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogTEFQSUMgKGFj
cGlfaWRbMHg0YV0gbGFwaWNfaWRbMHg5NF0gZGlzYWJsZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJ
OiBMQVBJQyAoYWNwaV9pZFsweDRiXSBsYXBpY19pZFsweDk2XSBkaXNhYmxlZCkNClsgICAgMC4w
MDAwMDBdIEFDUEk6IExBUElDIChhY3BpX2lkWzB4NGNdIGxhcGljX2lkWzB4OThdIGRpc2FibGVk
KQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHg0ZF0gbGFwaWNfaWRbMHg5
YV0gZGlzYWJsZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDRlXSBs
YXBpY19pZFsweDljXSBkaXNhYmxlZCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElDIChhY3Bp
X2lkWzB4NGZdIGxhcGljX2lkWzB4OWVdIGRpc2FibGVkKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTog
TEFQSUMgKGFjcGlfaWRbMHg1MF0gbGFwaWNfaWRbMHhhMF0gZGlzYWJsZWQpDQpbICAgIDAuMDAw
MDAwXSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDUxXSBsYXBpY19pZFsweGEyXSBkaXNhYmxlZCkN
ClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElDIChhY3BpX2lkWzB4NTJdIGxhcGljX2lkWzB4YTRd
IGRpc2FibGVkKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHg1M10gbGFw
aWNfaWRbMHhhNl0gZGlzYWJsZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBMQVBJQyAoYWNwaV9p
ZFsweDU0XSBsYXBpY19pZFsweGE4XSBkaXNhYmxlZCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IExB
UElDIChhY3BpX2lkWzB4NTVdIGxhcGljX2lkWzB4YWFdIGRpc2FibGVkKQ0KWyAgICAwLjAwMDAw
MF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHg1Nl0gbGFwaWNfaWRbMHhhY10gZGlzYWJsZWQpDQpb
ICAgIDAuMDAwMDAwXSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDU3XSBsYXBpY19pZFsweGFlXSBk
aXNhYmxlZCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElDIChhY3BpX2lkWzB4NThdIGxhcGlj
X2lkWzB4YjBdIGRpc2FibGVkKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogTEFQSUMgKGFjcGlfaWRb
MHg1OV0gbGFwaWNfaWRbMHhiMl0gZGlzYWJsZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBMQVBJ
QyAoYWNwaV9pZFsweDVhXSBsYXBpY19pZFsweGI0XSBkaXNhYmxlZCkNClsgICAgMC4wMDAwMDBd
IEFDUEk6IExBUElDIChhY3BpX2lkWzB4NWJdIGxhcGljX2lkWzB4YjZdIGRpc2FibGVkKQ0KWyAg
ICAwLjAwMDAwMF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHg1Y10gbGFwaWNfaWRbMHhiOF0gZGlz
YWJsZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDVkXSBsYXBpY19p
ZFsweGJhXSBkaXNhYmxlZCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElDIChhY3BpX2lkWzB4
NWVdIGxhcGljX2lkWzB4YmNdIGRpc2FibGVkKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogTEFQSUMg
KGFjcGlfaWRbMHg1Zl0gbGFwaWNfaWRbMHhiZV0gZGlzYWJsZWQpDQpbICAgIDAuMDAwMDAwXSBB
Q1BJOiBMQVBJQyAoYWNwaV9pZFsweDYwXSBsYXBpY19pZFsweGMwXSBkaXNhYmxlZCkNClsgICAg
MC4wMDAwMDBdIEFDUEk6IExBUElDIChhY3BpX2lkWzB4NjFdIGxhcGljX2lkWzB4YzJdIGRpc2Fi
bGVkKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHg2Ml0gbGFwaWNfaWRb
MHhjNF0gZGlzYWJsZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDYz
XSBsYXBpY19pZFsweGM2XSBkaXNhYmxlZCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElDIChh
Y3BpX2lkWzB4NjRdIGxhcGljX2lkWzB4YzhdIGRpc2FibGVkKQ0KWyAgICAwLjAwMDAwMF0gQUNQ
STogTEFQSUMgKGFjcGlfaWRbMHg2NV0gbGFwaWNfaWRbMHhjYV0gZGlzYWJsZWQpDQpbICAgIDAu
MDAwMDAwXSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDY2XSBsYXBpY19pZFsweGNjXSBkaXNhYmxl
ZCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElDIChhY3BpX2lkWzB4NjddIGxhcGljX2lkWzB4
Y2VdIGRpc2FibGVkKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHg2OF0g
bGFwaWNfaWRbMHhkMF0gZGlzYWJsZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBMQVBJQyAoYWNw
aV9pZFsweDY5XSBsYXBpY19pZFsweGQyXSBkaXNhYmxlZCkNClsgICAgMC4wMDAwMDBdIEFDUEk6
IExBUElDIChhY3BpX2lkWzB4NmFdIGxhcGljX2lkWzB4ZDRdIGRpc2FibGVkKQ0KWyAgICAwLjAw
MDAwMF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHg2Yl0gbGFwaWNfaWRbMHhkNl0gZGlzYWJsZWQp
DQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDZjXSBsYXBpY19pZFsweGQ4
XSBkaXNhYmxlZCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElDIChhY3BpX2lkWzB4NmRdIGxh
cGljX2lkWzB4ZGFdIGRpc2FibGVkKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogTEFQSUMgKGFjcGlf
aWRbMHg2ZV0gbGFwaWNfaWRbMHhkY10gZGlzYWJsZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBM
QVBJQyAoYWNwaV9pZFsweDZmXSBsYXBpY19pZFsweGRlXSBkaXNhYmxlZCkNClsgICAgMC4wMDAw
MDBdIEFDUEk6IExBUElDIChhY3BpX2lkWzB4NzBdIGxhcGljX2lkWzB4ZTBdIGRpc2FibGVkKQ0K
WyAgICAwLjAwMDAwMF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHg3MV0gbGFwaWNfaWRbMHhlMl0g
ZGlzYWJsZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDcyXSBsYXBp
Y19pZFsweGU0XSBkaXNhYmxlZCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElDIChhY3BpX2lk
WzB4NzNdIGxhcGljX2lkWzB4ZTZdIGRpc2FibGVkKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogTEFQ
SUMgKGFjcGlfaWRbMHg3NF0gbGFwaWNfaWRbMHhlOF0gZGlzYWJsZWQpDQpbICAgIDAuMDAwMDAw
XSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDc1XSBsYXBpY19pZFsweGVhXSBkaXNhYmxlZCkNClsg
ICAgMC4wMDAwMDBdIEFDUEk6IExBUElDIChhY3BpX2lkWzB4NzZdIGxhcGljX2lkWzB4ZWNdIGRp
c2FibGVkKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHg3N10gbGFwaWNf
aWRbMHhlZV0gZGlzYWJsZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsw
eDc4XSBsYXBpY19pZFsweGYwXSBkaXNhYmxlZCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElD
IChhY3BpX2lkWzB4NzldIGxhcGljX2lkWzB4ZjJdIGRpc2FibGVkKQ0KWyAgICAwLjAwMDAwMF0g
QUNQSTogTEFQSUMgKGFjcGlfaWRbMHg3YV0gbGFwaWNfaWRbMHhmNF0gZGlzYWJsZWQpDQpbICAg
IDAuMDAwMDAwXSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDdiXSBsYXBpY19pZFsweGY2XSBkaXNh
YmxlZCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElDIChhY3BpX2lkWzB4N2NdIGxhcGljX2lk
WzB4ZjhdIGRpc2FibGVkKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHg3
ZF0gbGFwaWNfaWRbMHhmYV0gZGlzYWJsZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBMQVBJQyAo
YWNwaV9pZFsweDdlXSBsYXBpY19pZFsweGZjXSBkaXNhYmxlZCkNClsgICAgMC4wMDAwMDBdIEFD
UEk6IExBUElDIChhY3BpX2lkWzB4N2ZdIGxhcGljX2lkWzB4ZmVdIGRpc2FibGVkKQ0KWyAgICAw
LjAwMDAwMF0gQUNQSTogSU9BUElDIChpZFsweDAxXSBhZGRyZXNzWzB4ZmVjMDAwMDBdIGdzaV9i
YXNlWzBdKQ0KWyAgICAwLjAwMDAwMF0gSU9BUElDWzBdOiBhcGljX2lkIDEsIHZlcnNpb24gMTcs
IGFkZHJlc3MgMHhmZWMwMDAwMCwgR1NJIDAtNDcNClsgICAgMC4wMDAwMDBdIEFDUEk6IElOVF9T
UkNfT1ZSIChidXMgMCBidXNfaXJxIDAgZ2xvYmFsX2lycSAyIGRmbCBkZmwpDQpbICAgIDAuMDAw
MDAwXSBBQ1BJOiBJTlRfU1JDX09WUiAoYnVzIDAgYnVzX2lycSA1IGdsb2JhbF9pcnEgNSBsb3cg
bGV2ZWwpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBJTlRfU1JDX09WUiAoYnVzIDAgYnVzX2lycSAx
MCBnbG9iYWxfaXJxIDEwIGxvdyBsZXZlbCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IElOVF9TUkNf
T1ZSIChidXMgMCBidXNfaXJxIDExIGdsb2JhbF9pcnEgMTEgbG93IGxldmVsKQ0KWyAgICAwLjAw
MDAwMF0gQUNQSTogSVJRMCB1c2VkIGJ5IG92ZXJyaWRlLg0KWyAgICAwLjAwMDAwMF0gQUNQSTog
SVJRMiB1c2VkIGJ5IG92ZXJyaWRlLg0KWyAgICAwLjAwMDAwMF0gQUNQSTogSVJRNSB1c2VkIGJ5
IG92ZXJyaWRlLg0KWyAgICAwLjAwMDAwMF0gQUNQSTogSVJROSB1c2VkIGJ5IG92ZXJyaWRlLg0K
WyAgICAwLjAwMDAwMF0gQUNQSTogSVJRMTAgdXNlZCBieSBvdmVycmlkZS4NClsgICAgMC4wMDAw
MDBdIEFDUEk6IElSUTExIHVzZWQgYnkgb3ZlcnJpZGUuDQpbICAgIDAuMDAwMDAwXSBVc2luZyBB
Q1BJIChNQURUKSBmb3IgU01QIGNvbmZpZ3VyYXRpb24gaW5mb3JtYXRpb24NClsgICAgMC4wMDAw
MDBdIEFDUEk6IEhQRVQgaWQ6IDB4ODA4NmEyMDEgYmFzZTogMHhmZWQwMDAwMA0KWyAgICAwLjAw
MDAwMF0gU01QOiBBbGxvd2luZyAxMjggQ1BVcywgMTI2IGhvdHBsdWcgQ1BVcw0KWyAgICAwLjAw
MDAwMF0gbnJfaXJxc19nc2k6IDY0DQpbICAgIDAuMDAwMDAwXSBQTTogUmVnaXN0ZXJlZCBub3Nh
dmUgbWVtb3J5OiAwMDAwMDAwMDAwMDllMDAwIC0gMDAwMDAwMDAwMDA5ZjAwMA0KWyAgICAwLjAw
MDAwMF0gUE06IFJlZ2lzdGVyZWQgbm9zYXZlIG1lbW9yeTogMDAwMDAwMDAwMDA5ZjAwMCAtIDAw
MDAwMDAwMDAwYTAwMDANClsgICAgMC4wMDAwMDBdIFBNOiBSZWdpc3RlcmVkIG5vc2F2ZSBtZW1v
cnk6IDAwMDAwMDAwMDAwYTAwMDAgLSAwMDAwMDAwMDAwMGYwMDAwDQpbICAgIDAuMDAwMDAwXSBQ
TTogUmVnaXN0ZXJlZCBub3NhdmUgbWVtb3J5OiAwMDAwMDAwMDAwMGYwMDAwIC0gMDAwMDAwMDAw
MDEwMDAwMA0KWyAgICAwLjAwMDAwMF0gQWxsb2NhdGluZyBQQ0kgcmVzb3VyY2VzIHN0YXJ0aW5n
IGF0IDdmODAwMDAwIChnYXA6IDdmODAwMDAwOjdjODAwMDAwKQ0KWyAgICAwLjAwMDAwMF0gQm9v
dGluZyBwYXJhdmlydHVhbGl6ZWQga2VybmVsIG9uIFhlbiBIVk0NClsgICAgMC4wMDAwMDBdIHNl
dHVwX3BlcmNwdTogTlJfQ1BVUzoyNTYgbnJfY3B1bWFza19iaXRzOjI1NiBucl9jcHVfaWRzOjEy
OCBucl9ub2RlX2lkczoxDQpbICAgIDAuMDAwMDAwXSBQRVJDUFU6IEVtYmVkZGVkIDI3IHBhZ2Vz
L2NwdSBAZmZmZjg4MDA3YjIwMDAwMCBzNzk2MTYgcjgxOTIgZDIyNzg0IHUxMzEwNzINClsgICAg
MC4wMDAwMDBdIHBjcHUtYWxsb2M6IHM3OTYxNiByODE5MiBkMjI3ODQgdTEzMTA3MiBhbGxvYz0x
KjIwOTcxNTINClsgICAgMC4wMDAwMDBdIHBjcHUtYWxsb2M6IFswXSAwMDAgMDAxIDAwMiAwMDMg
MDA0IDAwNSAwMDYgMDA3IDAwOCAwMDkgMDEwIDAxMSAwMTIgMDEzIDAxNCAwMTUgDQpbICAgIDAu
MDAwMDAwXSBwY3B1LWFsbG9jOiBbMF0gMDE2IDAxNyAwMTggMDE5IDAyMCAwMjEgMDIyIDAyMyAw
MjQgMDI1IDAyNiAwMjcgMDI4IDAyOSAwMzAgMDMxIA0KWyAgICAwLjAwMDAwMF0gcGNwdS1hbGxv
YzogWzBdIDAzMiAwMzMgMDM0IDAzNSAwMzYgMDM3IDAzOCAwMzkgMDQwIDA0MSAwNDIgMDQzIDA0
NCAwNDUgMDQ2IDA0NyANClsgICAgMC4wMDAwMDBdIHBjcHUtYWxsb2M6IFswXSAwNDggMDQ5IDA1
MCAwNTEgMDUyIDA1MyAwNTQgMDU1IDA1NiAwNTcgMDU4IDA1OSAwNjAgMDYxIDA2MiAwNjMgDQpb
ICAgIDAuMDAwMDAwXSBwY3B1LWFsbG9jOiBbMF0gMDY0IDA2NSAwNjYgMDY3IDA2OCAwNjkgMDcw
IDA3MSAwNzIgMDczIDA3NCAwNzUgMDc2IDA3NyAwNzggMDc5IA0KWyAgICAwLjAwMDAwMF0gcGNw
dS1hbGxvYzogWzBdIDA4MCAwODEgMDgyIDA4MyAwODQgMDg1IDA4NiAwODcgMDg4IDA4OSAwOTAg
MDkxIDA5MiAwOTMgMDk0IDA5NSANClsgICAgMC4wMDAwMDBdIHBjcHUtYWxsb2M6IFswXSAwOTYg
MDk3IDA5OCAwOTkgMTAwIDEwMSAxMDIgMTAzIDEwNCAxMDUgMTA2IDEwNyAxMDggMTA5IDExMCAx
MTEgDQpbICAgIDAuMDAwMDAwXSBwY3B1LWFsbG9jOiBbMF0gMTEyIDExMyAxMTQgMTE1IDExNiAx
MTcgMTE4IDExOSAxMjAgMTIxIDEyMiAxMjMgMTI0IDEyNSAxMjYgMTI3IA0KWyAgICAwLjAwMDAw
MF0gQnVpbHQgMSB6b25lbGlzdHMgaW4gTm9kZSBvcmRlciwgbW9iaWxpdHkgZ3JvdXBpbmcgb24u
ICBUb3RhbCBwYWdlczogNTE0OTgwDQpbICAgIDAuMDAwMDAwXSBQb2xpY3kgem9uZTogRE1BMzIN
ClsgICAgMC4wMDAwMDBdIEtlcm5lbCBjb21tYW5kIGxpbmU6IGZpbGU9L2Nkcm9tL3ByZXNlZWQv
dWJ1bnR1LnNlZWQgYm9vdD1jYXNwZXIgaW5pdHJkPS9jYXNwZXIvaW5pdHJkLmx6IHF1aWV0IHNw
bGFzaCBkZWJ1ZyBsb2dsZXZlbD05IGNvbnNvbGU9dHR5MCBjb25zb2xlPXR0eVMwLDExNTIwMG44
IGNvbnNvbGU9dHR5UzAgLS0NClsgICAgMC4wMDAwMDBdIFBJRCBoYXNoIHRhYmxlIGVudHJpZXM6
IDQwOTYgKG9yZGVyOiAzLCAzMjc2OCBieXRlcykNClsgICAgMC4wMDAwMDBdIENoZWNraW5nIGFw
ZXJ0dXJlLi4uDQpbICAgIDAuMDAwMDAwXSBObyBBR1AgYnJpZGdlIGZvdW5kDQpbICAgIDAuMDAw
MDAwXSBDYWxnYXJ5OiBkZXRlY3RpbmcgQ2FsZ2FyeSB2aWEgQklPUyBFQkRBIGFyZWENClsgICAg
MC4wMDAwMDBdIENhbGdhcnk6IFVuYWJsZSB0byBsb2NhdGUgUmlvIEdyYW5kZSB0YWJsZSBpbiBF
QkRBIC0gYmFpbGluZyENClsgICAgMC4wMDAwMDBdIE1lbW9yeTogMjAxODI2NGsvMjA4ODk1Nmsg
YXZhaWxhYmxlICg2MTA0ayBrZXJuZWwgY29kZSwgNDU2ayBhYnNlbnQsIDcwMjM2ayByZXNlcnZl
ZCwgNDg4MGsgZGF0YSwgOTg0ayBpbml0KQ0KWyAgICAwLjAwMDAwMF0gU0xVQjogR2Vuc2xhYnM9
MTUsIEhXYWxpZ249NjQsIE9yZGVyPTAtMywgTWluT2JqZWN0cz0wLCBDUFVzPTEyOCwgTm9kZXM9
MQ0KWyAgICAwLjAwMDAwMF0gSGllcmFyY2hpY2FsIFJDVSBpbXBsZW1lbnRhdGlvbi4NClsgICAg
MC4wMDAwMDBdIAlSQ1UgZHludGljay1pZGxlIGdyYWNlLXBlcmlvZCBhY2NlbGVyYXRpb24gaXMg
ZW5hYmxlZC4NClsgICAgMC4wMDAwMDBdIE5SX0lSUVM6MTY2NDAgbnJfaXJxczoyMTEyIDE2DQpb
ICAgIDAuMDAwMDAwXSBYZW4gSFZNIGNhbGxiYWNrIHZlY3RvciBmb3IgZXZlbnQgZGVsaXZlcnkg
aXMgZW5hYmxlZA0KWyAgICAwLjAwMDAwMF0gQ29uc29sZTogY29sb3VyIFZHQSsgODB4MjUNClsg
ICAgMC4wMDAwMDBdIGNvbnNvbGUgW3R0eTBdIGVuYWJsZWQNClsgICAgMC4wMDAwMDBdIGNvbnNv
bGUgW3R0eVMwXSBlbmFibGVkDQpbICAgIDAuMDAwMDAwXSBhbGxvY2F0ZWQgMTY3NzcyMTYgYnl0
ZXMgb2YgcGFnZV9jZ3JvdXANClsgICAgMC4wMDAwMDBdIHBsZWFzZSB0cnkgJ2Nncm91cF9kaXNh
YmxlPW1lbW9yeScgb3B0aW9uIGlmIHlvdSBkb24ndCB3YW50IG1lbW9yeSBjZ3JvdXBzDQpbICAg
IDAuMDAwMDAwXSBocGV0IGNsb2NrZXZlbnQgcmVnaXN0ZXJlZA0KWyAgICAwLjAwMDAwMF0gRGV0
ZWN0ZWQgMzI5Mi41NzggTUh6IHByb2Nlc3Nvci4NClsgICAgMC4wMDgwMDBdIENhbGlicmF0aW5n
IGRlbGF5IGxvb3AgKHNraXBwZWQpLCB2YWx1ZSBjYWxjdWxhdGVkIHVzaW5nIHRpbWVyIGZyZXF1
ZW5jeS4uIDY1ODUuMTUgQm9nb01JUFMgKGxwaj0xMzE3MDMxMikNClsgICAgMC4wMTQ1MTZdIHBp
ZF9tYXg6IGRlZmF1bHQ6IDEzMTA3MiBtaW5pbXVtOiAxMDI0DQpbICAgIDAuMDE2MDg4XSBTZWN1
cml0eSBGcmFtZXdvcmsgaW5pdGlhbGl6ZWQNClsgICAgMC4wMjAwMTNdIEFwcEFybW9yOiBBcHBB
cm1vciBpbml0aWFsaXplZA0KWyAgICAwLjAyNDAwNF0gWWFtYTogYmVjb21pbmcgbWluZGZ1bC4N
ClsgICAgMC4wMjgzMDRdIERlbnRyeSBjYWNoZSBoYXNoIHRhYmxlIGVudHJpZXM6IDI2MjE0NCAo
b3JkZXI6IDksIDIwOTcxNTIgYnl0ZXMpDQpbICAgIDAuMDMyNzg2XSBJbm9kZS1jYWNoZSBoYXNo
IHRhYmxlIGVudHJpZXM6IDEzMTA3MiAob3JkZXI6IDgsIDEwNDg1NzYgYnl0ZXMpDQpbICAgIDAu
MDQwMzQyXSBNb3VudC1jYWNoZSBoYXNoIHRhYmxlIGVudHJpZXM6IDI1Ng0KWyAgICAwLjA0NDI2
MV0gSW5pdGlhbGl6aW5nIGNncm91cCBzdWJzeXMgY3B1YWNjdA0KWyAgICAwLjA0ODAxM10gSW5p
dGlhbGl6aW5nIGNncm91cCBzdWJzeXMgbWVtb3J5DQpbICAgIDAuMDUxMzk5XSBJbml0aWFsaXpp
bmcgY2dyb3VwIHN1YnN5cyBkZXZpY2VzDQpbICAgIDAuMDUyMDA2XSBJbml0aWFsaXppbmcgY2dy
b3VwIHN1YnN5cyBmcmVlemVyDQpbICAgIDAuMDU2MDA2XSBJbml0aWFsaXppbmcgY2dyb3VwIHN1
YnN5cyBuZXRfY2xzDQpbICAgIDAuMDYwMDA2XSBJbml0aWFsaXppbmcgY2dyb3VwIHN1YnN5cyBi
bGtpbw0KWyAgICAwLjA2NDAxMF0gSW5pdGlhbGl6aW5nIGNncm91cCBzdWJzeXMgcGVyZl9ldmVu
dA0KWyAgICAwLjA2ODExMV0gQ1BVOiBQaHlzaWNhbCBQcm9jZXNzb3IgSUQ6IDANClsgICAgMC4w
NzIwMDZdIENQVTogUHJvY2Vzc29yIENvcmUgSUQ6IDANClsgICAgMC4wNzQ5ODRdIG1jZTogQ1BV
IHN1cHBvcnRzIDkgTUNFIGJhbmtzDQpbICAgIDAuMDg1NzI1XSBBQ1BJOiBDb3JlIHJldmlzaW9u
IDIwMTEwNDEzDQpbICAgIDAuMDk1MDIxXSBmdHJhY2U6IGFsbG9jYXRpbmcgMjU2NTEgZW50cmll
cyBpbiAxMDEgcGFnZXMNClsgICAgMC4xMjUwMjddIHgyYXBpYyBub3QgZW5hYmxlZCwgSVJRIHJl
bWFwcGluZyBpbml0IGZhaWxlZA0KWyAgICAwLjEzMjAxNF0gU3dpdGNoZWQgQVBJQyByb3V0aW5n
IHRvIHBoeXNpY2FsIGZsYXQuDQpbICAgIDAuMTQxMzczXSAuLlRJTUVSOiB2ZWN0b3I9MHgzMCBh
cGljMT0wIHBpbjE9MiBhcGljMj0wIHBpbjI9MA0KWyAgICAwLjE4OTg0NV0gQ1BVMDogSW50ZWwo
UikgWGVvbihSKSBDUFUgRTMtMTIzMCBWMiBAIDMuMzBHSHogc3RlcHBpbmcgMDkNClsgICAgMC4y
MDAwMjRdIFhlbjogdXNpbmcgdmNwdW9wIHRpbWVyIGludGVyZmFjZQ0KWyAgICAwLjIwNDAyN10g
aW5zdGFsbGluZyBYZW4gdGltZXIgZm9yIENQVSAwDQpbICAgIDAuMjA4MTM5XSBQZXJmb3JtYW5j
ZSBFdmVudHM6IGdlbmVyaWMgYXJjaGl0ZWN0ZWQgcGVyZm1vbiwgSW50ZWwgUE1VIGRyaXZlci4N
ClsgICAgMC4yMTQzOTZdIC4uLiB2ZXJzaW9uOiAgICAgICAgICAgICAgICAzDQpbICAgIDAuMjE2
MDE2XSAuLi4gYml0IHdpZHRoOiAgICAgICAgICAgICAgNDgNClsgICAgMC4yMTk0NTldIC4uLiBn
ZW5lcmljIHJlZ2lzdGVyczogICAgICA0DQpbICAgIDAuMjIwMDE3XSAuLi4gdmFsdWUgbWFzazog
ICAgICAgICAgICAgMDAwMGZmZmZmZmZmZmZmZg0KWyAgICAwLjIyNDAxN10gLi4uIG1heCBwZXJp
b2Q6ICAgICAgICAgICAgIDAwMDAwMDAwN2ZmZmZmZmYNClsgICAgMC4yMjgwMTddIC4uLiBmaXhl
ZC1wdXJwb3NlIGV2ZW50czogICAzDQpbICAgIDAuMjMyMDEzXSAuLi4gZXZlbnQgbWFzazogICAg
ICAgICAgICAgMDAwMDAwMDcwMDAwMDAwZg0KWyAgICAwLjIzMjk1MV0gQm9vdGluZyBOb2RlICAg
MCwgUHJvY2Vzc29ycyAgIzENClsgICAgMC4yMzYwMThdIHNtcGJvb3QgY3B1IDE6IHN0YXJ0X2lw
ID0gOTkwMDANClsgICAgMC4zMzIwNzBdIEJyb3VnaHQgdXAgMiBDUFVzDQpbICAgIDAuMzMyMDY5
XSBpbnN0YWxsaW5nIFhlbiB0aW1lciBmb3IgQ1BVIDENClsgICAgMC4zMzYwMzNdIFRvdGFsIG9m
IDIgcHJvY2Vzc29ycyBhY3RpdmF0ZWQgKDEzMjMwLjUwIEJvZ29NSVBTKS4NClsgICAgMC4zNDQx
NzJdIGRldnRtcGZzOiBpbml0aWFsaXplZA0KWyAgICAwLjM0OTkxN10gcHJpbnRfY29uc3RyYWlu
dHM6IGR1bW15OiANClsgICAgMC4zNTIwNjVdIFRpbWU6IDE3OjIzOjEyICBEYXRlOiAwOC8wMS8x
Mg0KWyAgICAwLjM1NjEzNF0gTkVUOiBSZWdpc3RlcmVkIHByb3RvY29sIGZhbWlseSAxNg0KWyAg
ICAwLjM2MDE1OF0gVHJ5aW5nIHRvIHVucGFjayByb290ZnMgaW1hZ2UgYXMgaW5pdHJhbWZzLi4u
DQpbICAgIDEuMzM2MTczXSBBQ1BJOiBidXMgdHlwZSBwY2kgcmVnaXN0ZXJlZA0KWyAgICAxLjM0
MDA4OF0gUENJOiBVc2luZyBjb25maWd1cmF0aW9uIHR5cGUgMSBmb3IgYmFzZSBhY2Nlc3MNClsg
ICAgMS4zNDg5NDZdIGJpbzogY3JlYXRlIHNsYWIgPGJpby0wPiBhdCAwDQpbICAgIDEuMzU4MTkw
XSBBQ1BJOiBFQzogTG9vayB1cCBFQyBpbiBEU0RUDQpbICAgIDEuMzY3MTQzXSBBQ1BJOiBJbnRl
cnByZXRlciBlbmFibGVkDQpbICAgIDEuMzY4MDg5XSBBQ1BJOiAoc3VwcG9ydHMgUzAgUzMgUzQg
UzUpDQpbICAgIDEuMzc2MDkxXSBBQ1BJOiBVc2luZyBJT0FQSUMgZm9yIGludGVycnVwdCByb3V0
aW5nDQpbICAgIDIuMzU4NTA1XSBBQ1BJOiBObyBkb2NrIGRldmljZXMgZm91bmQuDQpbICAgIDIu
MzYwMTUzXSBIRVNUOiBUYWJsZSBub3QgZm91bmQuDQpbICAgIDIuMzYzMTYwXSBGcmVlaW5nIGlu
aXRyZCBtZW1vcnk6IDE0MjIwayBmcmVlZA0KWyAgICAyLjM3MjE2Ml0gUENJOiBVc2luZyBob3N0
IGJyaWRnZSB3aW5kb3dzIGZyb20gQUNQSTsgaWYgbmVjZXNzYXJ5LCB1c2UgInBjaT1ub2NycyIg
YW5kIHJlcG9ydCBhIGJ1Zw0KWyAgICAyLjM4NDI1Nl0gQUNQSTogUENJIFJvb3QgQnJpZGdlIFtQ
Q0kwXSAoZG9tYWluIDAwMDAgW2J1cyAwMC1mZl0pDQpbICAgIDIuMzg4Mjk3XSBwY2lfcm9vdCBQ
TlAwQTAzOjAwOiBob3N0IGJyaWRnZSB3aW5kb3cgW2lvICAweDAwMDAtMHgwY2Y3XQ0KWyAgICAy
LjM5NjE1NV0gcGNpX3Jvb3QgUE5QMEEwMzowMDogaG9zdCBicmlkZ2Ugd2luZG93IFtpbyAgMHgw
ZDAwLTB4ZmZmZl0NClsgICAgMi40MDAxNTRdIHBjaV9yb290IFBOUDBBMDM6MDA6IGhvc3QgYnJp
ZGdlIHdpbmRvdyBbbWVtIDB4MDAwYTAwMDAtMHgwMDBiZmZmZl0NClsgICAgMi40MDQxNTRdIHBj
aV9yb290IFBOUDBBMDM6MDA6IGhvc3QgYnJpZGdlIHdpbmRvdyBbbWVtIDB4ZjAwMDAwMDAtMHhm
YmZmZmZmZl0NClsgICAgMi40MTIyODFdIHBjaSAwMDAwOjAwOjAwLjA6IFs4MDg2OjEyMzddIHR5
cGUgMCBjbGFzcyAweDAwMDYwMA0KWyAgICAyLjQxNzI1Ml0gcGNpIDAwMDA6MDA6MDEuMDogWzgw
ODY6NzAwMF0gdHlwZSAwIGNsYXNzIDB4MDAwNjAxDQpbICAgIDIuNDI1ODY0XSBwY2kgMDAwMDow
MDowMS4xOiBbODA4Njo3MDEwXSB0eXBlIDAgY2xhc3MgMHgwMDAxMDENClsgICAgMi40MzAxNjdd
IHBjaSAwMDAwOjAwOjAxLjE6IHJlZyAyMDogW2lvICAweGMzMDAtMHhjMzBmXQ0KWyAgICAyLjQz
NzI1OV0gcGNpIDAwMDA6MDA6MDEuMzogWzgwODY6NzExM10gdHlwZSAwIGNsYXNzIDB4MDAwNjgw
DQpbICAgIDIuNDQxNzAwXSBwY2kgMDAwMDowMDowMS4zOiBxdWlyazogW2lvICAweGIwMDAtMHhi
MDNmXSBjbGFpbWVkIGJ5IFBJSVg0IEFDUEkNClsgICAgMi40NDgxODVdIHBjaSAwMDAwOjAwOjAx
LjM6IHF1aXJrOiBbaW8gIDB4YjEwMC0weGIxMGZdIGNsYWltZWQgYnkgUElJWDQgU01CDQpbICAg
IDIuNDU2NjQyXSBwY2kgMDAwMDowMDowMi4wOiBbMTAxMzowMGI4XSB0eXBlIDAgY2xhc3MgMHgw
MDAzMDANClsgICAgMi40NjA3ODVdIHBjaSAwMDAwOjAwOjAyLjA6IHJlZyAxMDogW21lbSAweGYw
MDAwMDAwLTB4ZjFmZmZmZmYgcHJlZl0NClsgICAgMi40Njg3MTldIHBjaSAwMDAwOjAwOjAyLjA6
IHJlZyAxNDogW21lbSAweGYzMGU0MDAwLTB4ZjMwZTRmZmZdDQpbICAgIDIuNDc0OTU5XSBwY2kg
MDAwMDowMDowMi4wOiByZWcgMzA6IFttZW0gMHhmMzBjMDAwMC0weGYzMGNmZmZmIHByZWZdDQpb
ICAgIDIuNDg0NDQwXSBwY2kgMDAwMDowMDowMy4wOiBbNTg1MzowMDAxXSB0eXBlIDAgY2xhc3Mg
MHgwMGZmODANClsgICAgMi40ODg4MjBdIHBjaSAwMDAwOjAwOjAzLjA6IHJlZyAxMDogW2lvICAw
eGMwMDAtMHhjMGZmXQ0KWyAgICAyLjQ5MjcyN10gcGNpIDAwMDA6MDA6MDMuMDogcmVnIDE0OiBb
bWVtIDB4ZjIwMDAwMDAtMHhmMmZmZmZmZiBwcmVmXQ0KWyAgICAyLjUwNDIyNl0gcGNpIDAwMDA6
MDA6MDUuMDogWzEwMDA6MDA3Ml0gdHlwZSAwIGNsYXNzIDB4MDAwMTA3DQpbICAgIDIuNTA4ODE3
XSBwY2kgMDAwMDowMDowNS4wOiByZWcgMTA6IFtpbyAgMHhjMjAwLTB4YzJmZl0NClsgICAgMi41
MTY4ODFdIHBjaSAwMDAwOjAwOjA1LjA6IHJlZyAxNDogW21lbSAweGYzMGUwMDAwLTB4ZjMwZTNm
ZmYgNjRiaXRdDQpbICAgIDIuNTI0ODgwXSBwY2kgMDAwMDowMDowNS4wOiByZWcgMWM6IFttZW0g
MHhmMzA4MDAwMC0weGYzMGJmZmZmIDY0Yml0XQ0KWyAgICAyLjUyOTMyNV0gcGNpIDAwMDA6MDA6
MDUuMDogcmVnIDMwOiBbbWVtIDB4ZjMwMDAwMDAtMHhmMzA3ZmZmZiBwcmVmXQ0KWyAgICAyLjUz
NjkwMF0gcGNpIDAwMDA6MDA6MDUuMDogc3VwcG9ydHMgRDEgRDINClsgICAgMi41NDExMzBdIEFD
UEk6IFBDSSBJbnRlcnJ1cHQgUm91dGluZyBUYWJsZSBbXF9TQl8uUENJMC5fUFJUXQ0KWyAgICAy
LjU0ODY1NV0gIHBjaTAwMDA6MDA6IFVuYWJsZSB0byByZXF1ZXN0IF9PU0MgY29udHJvbCAoX09T
QyBzdXBwb3J0IG1hc2s6IDB4MWUpDQpbICAgIDIuNTYwMTcyXSBBQ1BJOiBQQ0kgSW50ZXJydXB0
IExpbmsgW0xOS0FdIChJUlFzICo1IDEwIDExKQ0KWyAgICAyLjU2NzM3NF0gQUNQSTogUENJIElu
dGVycnVwdCBMaW5rIFtMTktCXSAoSVJRcyA1ICoxMCAxMSkNClsgICAgMi41NzE0NjVdIEFDUEk6
IFBDSSBJbnRlcnJ1cHQgTGluayBbTE5LQ10gKElSUXMgNSAxMCAqMTEpDQpbICAgIDIuNTc5Mzg2
XSBBQ1BJOiBQQ0kgSW50ZXJydXB0IExpbmsgW0xOS0RdIChJUlFzICo1IDEwIDExKQ0KWyAgICAy
LjU4NzM0N10geGVuL2JhbGxvb246IEluaXRpYWxpc2luZyBiYWxsb29uIGRyaXZlci4NClsgICAg
Mi41OTIxNjZdIGxhc3RfcGZuID0gMHg3ZjdmZiBtYXhfYXJjaF9wZm4gPSAweDQwMDAwMDAwMA0K
WyAgICAyLjU5NjE3Ml0geGVuLWJhbGxvb246IEluaXRpYWxpc2luZyBiYWxsb29uIGRyaXZlci4N
ClsgICAgMi42MDAzMzZdIHZnYWFyYjogZGV2aWNlIGFkZGVkOiBQQ0k6MDAwMDowMDowMi4wLGRl
Y29kZXM9aW8rbWVtLG93bnM9aW8rbWVtLGxvY2tzPW5vbmUNClsgICAgMi42MDgxNjhdIHZnYWFy
YjogbG9hZGVkDQpbICAgIDIuNjEwOTc2XSB2Z2FhcmI6IGJyaWRnZSBjb250cm9sIHBvc3NpYmxl
IDAwMDA6MDA6MDIuMA0KWyAgICAyLjYxNjM2Ml0gU0NTSSBzdWJzeXN0ZW0gaW5pdGlhbGl6ZWQN
ClsgICAgMi42MjAxOTNdIGxpYmF0YSB2ZXJzaW9uIDMuMDAgbG9hZGVkLg0KWyAgICAyLjYyNDIw
Ml0gdXNiY29yZTogcmVnaXN0ZXJlZCBuZXcgaW50ZXJmYWNlIGRyaXZlciB1c2Jmcw0KWyAgICAy
LjYyODE3NV0gdXNiY29yZTogcmVnaXN0ZXJlZCBuZXcgaW50ZXJmYWNlIGRyaXZlciBodWINClsg
ICAgMi42MzYxOTNdIHVzYmNvcmU6IHJlZ2lzdGVyZWQgbmV3IGRldmljZSBkcml2ZXIgdXNiDQpb
ICAgIDIuNjQwMjMxXSBQQ0k6IFVzaW5nIEFDUEkgZm9yIElSUSByb3V0aW5nDQpbICAgIDIuNjQ0
MTY5XSBQQ0k6IHBjaV9jYWNoZV9saW5lX3NpemUgc2V0IHRvIDY0IGJ5dGVzDQpbICAgIDIuNjQ4
NTgzXSByZXNlcnZlIFJBTSBidWZmZXI6IDAwMDAwMDAwMDAwOWU0MDAgLSAwMDAwMDAwMDAwMDlm
ZmZmIA0KWyAgICAyLjY1MjE3MF0gcmVzZXJ2ZSBSQU0gYnVmZmVyOiAwMDAwMDAwMDdmN2ZmMDAw
IC0gMDAwMDAwMDA3ZmZmZmZmZiANClsgICAgMi42NjAyOTddIE5ldExhYmVsOiBJbml0aWFsaXpp
bmcNClsgICAgMi42NjQxNzBdIE5ldExhYmVsOiAgZG9tYWluIGhhc2ggc2l6ZSA9IDEyOA0KWyAg
ICAyLjY3MjE3MF0gTmV0TGFiZWw6ICBwcm90b2NvbHMgPSBVTkxBQkVMRUQgQ0lQU092NA0KWyAg
ICAyLjY3NjE3N10gTmV0TGFiZWw6ICB1bmxhYmVsZWQgdHJhZmZpYyBhbGxvd2VkIGJ5IGRlZmF1
bHQNClsgICAgMi42ODAyMjZdIEhQRVQ6IDMgdGltZXJzIGluIHRvdGFsLCAwIHRpbWVycyB3aWxs
IGJlIHVzZWQgZm9yIHBlci1jcHUgdGltZXINClsgICAgMi42ODgxODFdIGhwZXQwOiBhdCBNTUlP
IDB4ZmVkMDAwMDAsIElSUXMgMiwgOCwgMA0KWyAgICAyLjY5NTkxOF0gaHBldDA6IDMgY29tcGFy
YXRvcnMsIDY0LWJpdCA2Mi41MDAwMDAgTUh6IGNvdW50ZXINClsgICAgMi43MDgyMDhdIFN3aXRj
aGluZyB0byBjbG9ja3NvdXJjZSB4ZW4NClsgICAgMi43MjAwODRdIFN3aXRjaGVkIHRvIE5PSHog
bW9kZSBvbiBDUFUgIzANClsgICAgMi43MjAwODldIFN3aXRjaGVkIHRvIE5PSHogbW9kZSBvbiBD
UFUgIzENClsgICAgMi43MjQwMzRdIEFwcEFybW9yOiBBcHBBcm1vciBGaWxlc3lzdGVtIEVuYWJs
ZWQNClsgICAgMi43MjQxMDVdIHBucDogUG5QIEFDUEkgaW5pdA0KWyAgICAyLjcyNDEyNF0gQUNQ
STogYnVzIHR5cGUgcG5wIHJlZ2lzdGVyZWQNClsgICAgMi43MjQxNjJdIHBucCAwMDowMDogW21l
bSAweDAwMDAwMDAwLTB4MDAwOWZmZmZdDQpbICAgIDIuNzI0MjEyXSBzeXN0ZW0gMDA6MDA6IFtt
ZW0gMHgwMDAwMDAwMC0weDAwMDlmZmZmXSBjb3VsZCBub3QgYmUgcmVzZXJ2ZWQNClsgICAgMi43
MjQyMThdIHN5c3RlbSAwMDowMDogUGx1ZyBhbmQgUGxheSBBQ1BJIGRldmljZSwgSURzIFBOUDBj
MDIgKGFjdGl2ZSkNClsgICAgMi43MjQzMjddIHBucCAwMDowMTogW2J1cyAwMC1mZl0NClsgICAg
Mi43MjQzMzBdIHBucCAwMDowMTogW2lvICAweDBjZjgtMHgwY2ZmXQ0KWyAgICAyLjcyNDMzM10g
cG5wIDAwOjAxOiBbaW8gIDB4MDAwMC0weDBjZjcgd2luZG93XQ0KWyAgICAyLjcyNDMzNl0gcG5w
IDAwOjAxOiBbaW8gIDB4MGQwMC0weGZmZmYgd2luZG93XQ0KWyAgICAyLjcyNDMzOV0gcG5wIDAw
OjAxOiBbbWVtIDB4MDAwYTAwMDAtMHgwMDBiZmZmZiB3aW5kb3ddDQpbICAgIDIuNzI0MzQzXSBw
bnAgMDA6MDE6IFttZW0gMHhmMDAwMDAwMC0weGZiZmZmZmZmIHdpbmRvd10NClsgICAgMi43MjQ0
MDVdIHBucCAwMDowMTogUGx1ZyBhbmQgUGxheSBBQ1BJIGRldmljZSwgSURzIFBOUDBhMDMgKGFj
dGl2ZSkNClsgICAgMi43MjQ0NDddIHBucCAwMDowMjogW21lbSAweGZlZDAwMDAwLTB4ZmVkMDAz
ZmZdDQpbICAgIDIuNzI0NDc4XSBwbnAgMDA6MDI6IFBsdWcgYW5kIFBsYXkgQUNQSSBkZXZpY2Us
IElEcyBQTlAwMTAzIChhY3RpdmUpDQpbICAgIDIuNzI0NTA5XSBwbnAgMDA6MDM6IFtpbyAgMHgw
MDEwLTB4MDAxZl0NClsgICAgMi43MjQ1MTJdIHBucCAwMDowMzogW2lvICAweDAwMjItMHgwMDJk
XQ0KWyAgICAyLjcyNDUxNF0gcG5wIDAwOjAzOiBbaW8gIDB4MDAzMC0weDAwM2ZdDQpbICAgIDIu
NzI0NTE3XSBwbnAgMDA6MDM6IFtpbyAgMHgwMDQ0LTB4MDA1Zl0NClsgICAgMi43MjQ1MTldIHBu
cCAwMDowMzogW2lvICAweDAwNjItMHgwMDYzXQ0KWyAgICAyLjcyNDUyMl0gcG5wIDAwOjAzOiBb
aW8gIDB4MDA2NS0weDAwNmZdDQpbICAgIDIuNzI0NTI0XSBwbnAgMDA6MDM6IFtpbyAgMHgwMDcy
LTB4MDA3Zl0NClsgICAgMi43MjQ1MjddIHBucCAwMDowMzogW2lvICAweDAwODBdDQpbICAgIDIu
NzI0NTMwXSBwbnAgMDA6MDM6IFtpbyAgMHgwMDg0LTB4MDA4Nl0NClsgICAgMi43MjQ1MzNdIHBu
cCAwMDowMzogW2lvICAweDAwODhdDQpbICAgIDIuNzI0NTM1XSBwbnAgMDA6MDM6IFtpbyAgMHgw
MDhjLTB4MDA4ZV0NClsgICAgMi43MjQ1MzddIHBucCAwMDowMzogW2lvICAweDAwOTAtMHgwMDlm
XQ0KWyAgICAyLjcyNDU0MF0gcG5wIDAwOjAzOiBbaW8gIDB4MDBhMi0weDAwYmRdDQpbICAgIDIu
NzI0NTQyXSBwbnAgMDA6MDM6IFtpbyAgMHgwMGUwLTB4MDBlZl0NClsgICAgMi43MjQ1NDVdIHBu
cCAwMDowMzogW2lvICAweDA4YTAtMHgwOGEzXQ0KWyAgICAyLjcyNDU0N10gcG5wIDAwOjAzOiBb
aW8gIDB4MGNjMC0weDBjY2ZdDQpbICAgIDIuNzI0NTQ5XSBwbnAgMDA6MDM6IFtpbyAgMHgwNGQw
LTB4MDRkMV0NClsgICAgMi43MjQ2MDRdIHN5c3RlbSAwMDowMzogW2lvICAweDA4YTAtMHgwOGEz
XSBoYXMgYmVlbiByZXNlcnZlZA0KWyAgICAyLjcyNDYwN10gc3lzdGVtIDAwOjAzOiBbaW8gIDB4
MGNjMC0weDBjY2ZdIGhhcyBiZWVuIHJlc2VydmVkDQpbICAgIDIuNzI0NjExXSBzeXN0ZW0gMDA6
MDM6IFtpbyAgMHgwNGQwLTB4MDRkMV0gaGFzIGJlZW4gcmVzZXJ2ZWQNClsgICAgMi43MjQ2MTZd
IHN5c3RlbSAwMDowMzogUGx1ZyBhbmQgUGxheSBBQ1BJIGRldmljZSwgSURzIFBOUDBjMDIgKGFj
dGl2ZSkNClsgICAgMi43MjQ2MzZdIHBucCAwMDowNDogW2RtYSA0XQ0KWyAgICAyLjcyNDYzOF0g
cG5wIDAwOjA0OiBbaW8gIDB4MDAwMC0weDAwMGZdDQpbICAgIDIuNzI0NjQxXSBwbnAgMDA6MDQ6
IFtpbyAgMHgwMDgxLTB4MDA4M10NClsgICAgMi43MjQ2NDNdIHBucCAwMDowNDogW2lvICAweDAw
ODddDQpbICAgIDIuNzI0NjQ2XSBwbnAgMDA6MDQ6IFtpbyAgMHgwMDg5LTB4MDA4Yl0NClsgICAg
Mi43MjQ2NTJdIHBucCAwMDowNDogW2lvICAweDAwOGZdDQpbICAgIDIuNzI0NjU0XSBwbnAgMDA6
MDQ6IFtpbyAgMHgwMGMwLTB4MDBkZl0NClsgICAgMi43MjQ2NTddIHBucCAwMDowNDogW2lvICAw
eDA0ODAtMHgwNDhmXQ0KWyAgICAyLjcyNDY4OF0gcG5wIDAwOjA0OiBQbHVnIGFuZCBQbGF5IEFD
UEkgZGV2aWNlLCBJRHMgUE5QMDIwMCAoYWN0aXZlKQ0KWyAgICAyLjcyNDcwNV0gcG5wIDAwOjA1
OiBbaW8gIDB4MDA3MC0weDAwNzFdDQpbICAgIDIuNzI0NzQwXSB4ZW46IC0tPiBpcnE9OCwgcGly
cT0xNw0KWyAgICAyLjcyNDc0NF0gcG5wIDAwOjA1OiBbaXJxIDhdDQpbICAgIDIuNzI0Nzc2XSBw
bnAgMDA6MDU6IFBsdWcgYW5kIFBsYXkgQUNQSSBkZXZpY2UsIElEcyBQTlAwYjAwIChhY3RpdmUp
DQpbICAgIDIuNzI0NzkwXSBwbnAgMDA6MDY6IFtpbyAgMHgwMDYxXQ0KWyAgICAyLjcyNDgyM10g
cG5wIDAwOjA2OiBQbHVnIGFuZCBQbGF5IEFDUEkgZGV2aWNlLCBJRHMgUE5QMDgwMCAoYWN0aXZl
KQ0KWyAgICAyLjcyNDg2OV0geGVuOiAtLT4gaXJxPTEyLCBwaXJxPTE4DQpbICAgIDIuNzI0ODcz
XSBwbnAgMDA6MDc6IFtpcnEgMTJdDQpbICAgIDIuNzI0OTAzXSBwbnAgMDA6MDc6IFBsdWcgYW5k
IFBsYXkgQUNQSSBkZXZpY2UsIElEcyBQTlAwZjEzIChhY3RpdmUpDQpbICAgIDIuNzI0OTMyXSBw
bnAgMDA6MDg6IFtpbyAgMHgwMDYwXQ0KWyAgICAyLjcyNDkzNV0gcG5wIDAwOjA4OiBbaW8gIDB4
MDA2NF0NClsgICAgMi43MjQ5NTldIHhlbjogLS0+IGlycT0xLCBwaXJxPTE5DQpbICAgIDIuNzI0
OTYzXSBwbnAgMDA6MDg6IFtpcnEgMV0NClsgICAgMi43MjQ5OThdIHBucCAwMDowODogUGx1ZyBh
bmQgUGxheSBBQ1BJIGRldmljZSwgSURzIFBOUDAzMDMgUE5QMDMwYiAoYWN0aXZlKQ0KWyAgICAy
LjcyNTAyNF0gcG5wIDAwOjA5OiBbaW8gIDB4MDNmMC0weDAzZjVdDQpbICAgIDIuNzI1MDI3XSBw
bnAgMDA6MDk6IFtpbyAgMHgwM2Y3XQ0KWyAgICAyLjcyNTA0OV0geGVuOiAtLT4gaXJxPTYsIHBp
cnE9MjANClsgICAgMi43MjUwNTJdIHBucCAwMDowOTogW2lycSA2XQ0KWyAgICAyLjcyNTA1NV0g
cG5wIDAwOjA5OiBbZG1hIDJdDQpbICAgIDIuNzI1MDkzXSBwbnAgMDA6MDk6IFBsdWcgYW5kIFBs
YXkgQUNQSSBkZXZpY2UsIElEcyBQTlAwNzAwIChhY3RpdmUpDQpbICAgIDIuNzI1MTM1XSBwbnAg
MDA6MGE6IFtpbyAgMHgwM2Y4LTB4MDNmZl0NClsgICAgMi43MjUxNTldIHhlbjogLS0+IGlycT00
LCBwaXJxPTIxDQpbICAgIDIuNzI1MTYyXSBwbnAgMDA6MGE6IFtpcnEgNF0NClsgICAgMi43MjUx
OTddIHBucCAwMDowYTogUGx1ZyBhbmQgUGxheSBBQ1BJIGRldmljZSwgSURzIFBOUDA1MDEgKGFj
dGl2ZSkNClsgICAgMi43MjUyNDldIHBucCAwMDowYjogW2lvICAweDAzNzgtMHgwMzdmXQ0KWyAg
ICAyLjcyNTI3M10geGVuOiAtLT4gaXJxPTcsIHBpcnE9MjINClsgICAgMi43MjUyNzZdIHBucCAw
MDowYjogW2lycSA3XQ0KWyAgICAyLjcyNTMxNV0gcG5wIDAwOjBiOiBQbHVnIGFuZCBQbGF5IEFD
UEkgZGV2aWNlLCBJRHMgUE5QMDQwMCAoYWN0aXZlKQ0KWyAgICAyLjcyNTM1Nl0gcG5wIDAwOjBj
OiBbaW8gIDB4YWUwMC0weGFlMGZdDQpbICAgIDIuNzI1MzU5XSBwbnAgMDA6MGM6IFtpbyAgMHhi
MDQ0LTB4YjA0N10NClsgICAgMi43MjU0MDZdIHN5c3RlbSAwMDowYzogW2lvICAweGFlMDAtMHhh
ZTBmXSBoYXMgYmVlbiByZXNlcnZlZA0KWyAgICAyLjcyNTQxMF0gc3lzdGVtIDAwOjBjOiBbaW8g
IDB4YjA0NC0weGIwNDddIGhhcyBiZWVuIHJlc2VydmVkDQpbICAgIDIuNzI1NDE1XSBzeXN0ZW0g
MDA6MGM6IFBsdWcgYW5kIFBsYXkgQUNQSSBkZXZpY2UsIElEcyBQTlAwYzAyIChhY3RpdmUpDQpb
ICAgIDIuNzI2MDYzXSBwbnA6IFBuUCBBQ1BJOiBmb3VuZCAxMyBkZXZpY2VzDQpbICAgIDIuNzI2
MDY1XSBBQ1BJOiBBQ1BJIGJ1cyB0eXBlIHBucCB1bnJlZ2lzdGVyZWQNClsgICAgMi43MzI0MzVd
IFBDSTogbWF4IGJ1cyBkZXB0aDogMCBwY2lfdHJ5X251bTogMQ0KWyAgICAyLjczMjQ0NV0gcGNp
X2J1cyAwMDAwOjAwOiByZXNvdXJjZSA0IFtpbyAgMHgwMDAwLTB4MGNmN10NClsgICAgMi43MzI0
NDldIHBjaV9idXMgMDAwMDowMDogcmVzb3VyY2UgNSBbaW8gIDB4MGQwMC0weGZmZmZdDQpbICAg
IDIuNzMyNDUyXSBwY2lfYnVzIDAwMDA6MDA6IHJlc291cmNlIDYgW21lbSAweDAwMGEwMDAwLTB4
MDAwYmZmZmZdDQpbICAgIDIuNzMyNDU2XSBwY2lfYnVzIDAwMDA6MDA6IHJlc291cmNlIDcgW21l
bSAweGYwMDAwMDAwLTB4ZmJmZmZmZmZdDQpbICAgIDIuNzMyNTMyXSBORVQ6IFJlZ2lzdGVyZWQg
cHJvdG9jb2wgZmFtaWx5IDINClsgICAgMi43MzI3NTFdIElQIHJvdXRlIGNhY2hlIGhhc2ggdGFi
bGUgZW50cmllczogNjU1MzYgKG9yZGVyOiA3LCA1MjQyODggYnl0ZXMpDQpbICAgIDIuNzMzNDI3
XSBUQ1AgZXN0YWJsaXNoZWQgaGFzaCB0YWJsZSBlbnRyaWVzOiAyNjIxNDQgKG9yZGVyOiAxMCwg
NDE5NDMwNCBieXRlcykNClsgICAgMi43MzQyNDBdIFRDUCBiaW5kIGhhc2ggdGFibGUgZW50cmll
czogNjU1MzYgKG9yZGVyOiA4LCAxMDQ4NTc2IGJ5dGVzKQ0KWyAgICAyLjczNDM5MV0gVENQOiBI
YXNoIHRhYmxlcyBjb25maWd1cmVkIChlc3RhYmxpc2hlZCAyNjIxNDQgYmluZCA2NTUzNikNClsg
ICAgMi43MzQzOTNdIFRDUCByZW5vIHJlZ2lzdGVyZWQNClsgICAgMi43MzQ0MDJdIFVEUCBoYXNo
IHRhYmxlIGVudHJpZXM6IDEwMjQgKG9yZGVyOiAzLCAzMjc2OCBieXRlcykNClsgICAgMi43MzQ0
MTVdIFVEUC1MaXRlIGhhc2ggdGFibGUgZW50cmllczogMTAyNCAob3JkZXI6IDMsIDMyNzY4IGJ5
dGVzKQ0KWyAgICAyLjczNDkyMF0gTkVUOiBSZWdpc3RlcmVkIHByb3RvY29sIGZhbWlseSAxDQpb
ICAgIDIuNzM0OTM0XSBwY2kgMDAwMDowMDowMC4wOiBMaW1pdGluZyBkaXJlY3QgUENJL1BDSSB0
cmFuc2ZlcnMNClsgICAgMi43MzUxMTddIHBjaSAwMDAwOjAwOjAxLjA6IFBJSVgzOiBFbmFibGlu
ZyBQYXNzaXZlIFJlbGVhc2UNClsgICAgMi43MzU0OTBdIHBjaSAwMDAwOjAwOjAxLjA6IEFjdGl2
YXRpbmcgSVNBIERNQSBoYW5nIHdvcmthcm91bmRzDQpbICAgIDIuNzM2MjM2XSBwY2kgMDAwMDow
MDowMi4wOiBCb290IHZpZGVvIGRldmljZQ0KWyAgICAyLjczNjc3Nl0gUENJOiBDTFMgMCBieXRl
cywgZGVmYXVsdCA2NA0KWyAgICAzLjEyOTQ5NF0gYXVkaXQ6IGluaXRpYWxpemluZyBuZXRsaW5r
IHNvY2tldCAoZGlzYWJsZWQpDQpbICAgIDMuMTM0NTcyXSB0eXBlPTIwMDAgYXVkaXQoMTM0Mzg0
MTc5NC45ODI6MSk6IGluaXRpYWxpemVkDQpbICAgIDMuMTU5NDEzXSBIdWdlVExCIHJlZ2lzdGVy
ZWQgMiBNQiBwYWdlIHNpemUsIHByZS1hbGxvY2F0ZWQgMCBwYWdlcw0KWyAgICAzLjE3NTIxNF0g
VkZTOiBEaXNrIHF1b3RhcyBkcXVvdF82LjUuMg0KWyAgICAzLjE4MTk4N10gRHF1b3QtY2FjaGUg
aGFzaCB0YWJsZSBlbnRyaWVzOiA1MTIgKG9yZGVyIDAsIDQwOTYgYnl0ZXMpDQpbICAgIDMuMTky
NzI2XSBmdXNlIGluaXQgKEFQSSB2ZXJzaW9uIDcuMTYpDQpbICAgIDMuMTk4OTkxXSBtc2dtbmkg
aGFzIGJlZW4gc2V0IHRvIDM5NjkNClsgICAgMy4yMDcyNTZdIEJsb2NrIGxheWVyIFNDU0kgZ2Vu
ZXJpYyAoYnNnKSBkcml2ZXIgdmVyc2lvbiAwLjQgbG9hZGVkIChtYWpvciAyNTMpDQpbICAgIDMu
MjE4NTUyXSBpbyBzY2hlZHVsZXIgbm9vcCByZWdpc3RlcmVkDQpbICAgIDMuMjI0OTk1XSBpbyBz
Y2hlZHVsZXIgZGVhZGxpbmUgcmVnaXN0ZXJlZA0KWyAgICAzLjIzMDQyNl0gaW8gc2NoZWR1bGVy
IGNmcSByZWdpc3RlcmVkIChkZWZhdWx0KQ0KWyAgICAzLjIzNTAyM10gcGNpX2hvdHBsdWc6IFBD
SSBIb3QgUGx1ZyBQQ0kgQ29yZSB2ZXJzaW9uOiAwLjUNClsgICAgMy4yMzk0NTJdIHBjaWVocDog
UENJIEV4cHJlc3MgSG90IFBsdWcgQ29udHJvbGxlciBEcml2ZXIgdmVyc2lvbjogMC40DQpbICAg
IDMuMjQ0NjAzXSBpbnB1dDogUG93ZXIgQnV0dG9uIGFzIC9kZXZpY2VzL0xOWFNZU1RNOjAwL0xO
WFBXUkJOOjAwL2lucHV0L2lucHV0MA0KWyAgICAzLjI1MDc0Ml0gQUNQSTogUG93ZXIgQnV0dG9u
IFtQV1JGXQ0KWyAgICAzLjI1Mzk4MF0gaW5wdXQ6IFNsZWVwIEJ1dHRvbiBhcyAvZGV2aWNlcy9M
TlhTWVNUTTowMC9MTlhTTFBCTjowMC9pbnB1dC9pbnB1dDENClsgICAgMy4yNTk5NjddIEFDUEk6
IFNsZWVwIEJ1dHRvbiBbU0xQRl0NClsgICAgMy4yNjM3NjJdIEFDUEk6IGFjcGlfaWRsZSByZWdp
c3RlcmVkIHdpdGggY3B1aWRsZQ0KWyAgICAzLjI2ODcxOF0gRVJTVDogVGFibGUgaXMgbm90IGZv
dW5kIQ0KWyAgICAzLjI3MjAyNF0gU2VyaWFsOiA4MjUwLzE2NTUwIGRyaXZlciwgMzIgcG9ydHMs
IElSUSBzaGFyaW5nIGVuYWJsZWQNClsgICAgMy4zMDQ5MDZdIHNlcmlhbDgyNTA6IHR0eVMwIGF0
IEkvTyAweDNmOCAoaXJxID0gNCkgaXMgYSAxNjU1MEENClsgICAgMy40OTAwNjRdIDAwOjBhOiB0
dHlTMCBhdCBJL08gMHgzZjggKGlycSA9IDQpIGlzIGEgMTY1NTBBDQpbICAgIDMuNDk4MjE0XSBM
aW51eCBhZ3BnYXJ0IGludGVyZmFjZSB2MC4xMDMNClsgICAgMy41MDUwODNdIGJyZDogbW9kdWxl
IGxvYWRlZA0KWyAgICAzLjUxMDQzM10gbG9vcDogbW9kdWxlIGxvYWRlZA0KWyAgICAzLjUxMzQ1
MV0gYXRhX3BpaXggMDAwMDowMDowMS4xOiB2ZXJzaW9uIDIuMTMNClsgICAgMy41MTczNjhdIGF0
YV9waWl4IDAwMDA6MDA6MDEuMTogc2V0dGluZyBsYXRlbmN5IHRpbWVyIHRvIDY0DQpbICAgIDMu
NTIyMzM3XSBzY3NpMCA6IGF0YV9waWl4DQpbICAgIDMuNTI1MjMwXSBzY3NpMSA6IGF0YV9waWl4
DQpbICAgIDMuNTI4MDI1XSBhdGExOiBQQVRBIG1heCBNV0RNQTIgY21kIDB4MWYwIGN0bCAweDNm
NiBibWRtYSAweGMzMDAgaXJxIDE0DQpbICAgIDMuNTMzMTU5XSBhdGEyOiBQQVRBIG1heCBNV0RN
QTIgY21kIDB4MTcwIGN0bCAweDM3NiBibWRtYSAweGMzMDggaXJxIDE1DQpbICAgIDMuNTM4Nzc5
XSBGaXhlZCBNRElPIEJ1czogcHJvYmVkDQpbICAgIDMuNTQyNTU1XSBQUFAgZ2VuZXJpYyBkcml2
ZXIgdmVyc2lvbiAyLjQuMg0KWyAgICAzLjU0NzEwNV0gdHVuOiBVbml2ZXJzYWwgVFVOL1RBUCBk
ZXZpY2UgZHJpdmVyLCAxLjYNClsgICAgMy41NTE0OTRdIHR1bjogKEMpIDE5OTktMjAwNCBNYXgg
S3Jhc255YW5za3kgPG1heGtAcXVhbGNvbW0uY29tPg0KWyAgICAzLjU1NjY3N10gZWhjaV9oY2Q6
IFVTQiAyLjAgJ0VuaGFuY2VkJyBIb3N0IENvbnRyb2xsZXIgKEVIQ0kpIERyaXZlcg0KWyAgICAz
LjU2MjA4MF0gb2hjaV9oY2Q6IFVTQiAxLjEgJ09wZW4nIEhvc3QgQ29udHJvbGxlciAoT0hDSSkg
RHJpdmVyDQpbICAgIDMuNTY3OTU3XSB1aGNpX2hjZDogVVNCIFVuaXZlcnNhbCBIb3N0IENvbnRy
b2xsZXIgSW50ZXJmYWNlIGRyaXZlcg0KWyAgICAzLjU3MzMwN10gaTgwNDI6IFBOUDogUFMvMiBD
b250cm9sbGVyIFtQTlAwMzAzOlBTMkssUE5QMGYxMzpQUzJNXSBhdCAweDYwLDB4NjQgaXJxIDEs
MTINClsgICAgMy41ODM2ODVdIHNlcmlvOiBpODA0MiBLQkQgcG9ydCBhdCAweDYwLDB4NjQgaXJx
IDENClsgICAgMy41ODc4NzZdIHNlcmlvOiBpODA0MiBBVVggcG9ydCBhdCAweDYwLDB4NjQgaXJx
IDEyDQpbICAgIDMuNTkyNDM3XSBtb3VzZWRldjogUFMvMiBtb3VzZSBkZXZpY2UgY29tbW9uIGZv
ciBhbGwgbWljZQ0KWyAgICAzLjU5NzY2NF0gaW5wdXQ6IEFUIFRyYW5zbGF0ZWQgU2V0IDIga2V5
Ym9hcmQgYXMgL2RldmljZXMvcGxhdGZvcm0vaTgwNDIvc2VyaW8wL2lucHV0L2lucHV0Mg0KWyAg
ICAzLjYwNDk3NF0gcnRjX2Ntb3MgMDA6MDU6IHJ0YyBjb3JlOiByZWdpc3RlcmVkIHJ0Y19jbW9z
IGFzIHJ0YzANClsgICAgMy42MTAwMjFdIHJ0YzA6IGFsYXJtcyB1cCB0byBvbmUgZGF5LCAxMTQg
Ynl0ZXMgbnZyYW0sIGhwZXQgaXJxcw0KWyAgICAzLjYxNTIzOF0gZGV2aWNlLW1hcHBlcjogdWV2
ZW50OiB2ZXJzaW9uIDEuMC4zDQpbICAgIDMuNjE5NjMwXSBkZXZpY2UtbWFwcGVyOiBpb2N0bDog
NC4yMC4wLWlvY3RsICgyMDExLTAyLTAyKSBpbml0aWFsaXNlZDogZG0tZGV2ZWxAcmVkaGF0LmNv
bQ0KWyAgICAzLjYyNjgxOF0gY3B1aWRsZTogdXNpbmcgZ292ZXJub3IgbGFkZGVyDQpbICAgIDMu
NjMxMTU2XSBjcHVpZGxlOiB1c2luZyBnb3Zlcm5vciBtZW51DQpbICAgIDMuNjM0NzUyXSBFRkkg
VmFyaWFibGVzIEZhY2lsaXR5IHYwLjA4IDIwMDQtTWF5LTE3DQpbICAgIDMuNjM5MjQxXSBUQ1Ag
Y3ViaWMgcmVnaXN0ZXJlZA0KWyAgICAzLjY0MjU4NV0gTkVUOiBSZWdpc3RlcmVkIHByb3RvY29s
IGZhbWlseSAxMA0KWyAgICAzLjY0NzUwM10gTkVUOiBSZWdpc3RlcmVkIHByb3RvY29sIGZhbWls
eSAxNw0KWyAgICAzLjY1MTUzNF0gUmVnaXN0ZXJpbmcgdGhlIGRuc19yZXNvbHZlciBrZXkgdHlw
ZQ0KWyAgICAzLjY1NTc0MV0gUE06IEhpYmVybmF0aW9uIGltYWdlIG5vdCBwcmVzZW50IG9yIGNv
dWxkIG5vdCBiZSBsb2FkZWQuDQpbICAgIDMuNjYxNjY5XSByZWdpc3RlcmVkIHRhc2tzdGF0cyB2
ZXJzaW9uIDENClsgICAgMy42NzY4OTddICAgTWFnaWMgbnVtYmVyOiA4OjcxMTozODkNClsgICAg
My42ODA0NTFdIHJ0Y19jbW9zIDAwOjA1OiBzZXR0aW5nIHN5c3RlbSBjbG9jayB0byAyMDEyLTA4
LTAxIDE3OjIzOjE1IFVUQyAoMTM0Mzg0MTc5NSkNClsgICAgMy42ODc0NzddIEJJT1MgRUREIGZh
Y2lsaXR5IHYwLjE2IDIwMDQtSnVuLTI1LCAwIGRldmljZXMgZm91bmQNClsgICAgMy42OTUxNDFd
IEVERCBpbmZvcm1hdGlvbiBub3QgYXZhaWxhYmxlLg0KWyAgICAzLjY5OTk3M10gYXRhMi4wMTog
Tk9ERVYgYWZ0ZXIgcG9sbGluZyBkZXRlY3Rpb24NClsgICAgMy43MDY1OTZdIGF0YTIuMDA6IEFU
QVBJOiBRRU1VIERWRC1ST00sIDEuMS41MCwgbWF4IFVETUEvMTAwDQpbICAgIDMuNzA5OTgyXSBh
dGEyLjAwOiBjb25maWd1cmVkIGZvciBNV0RNQTINClsgICAgMy43Mjc5OTZdIHNjc2kgMTowOjA6
MDogQ0QtUk9NICAgICAgICAgICAgUUVNVSAgICAgUUVNVSBEVkQtUk9NICAgICAxLjEuIFBROiAw
IEFOU0k6IDUNClsgICAgMy43MzYyNzBdIHNyMDogc2NzaTMtbW1jIGRyaXZlOiA0eC80eCBjZC9y
dyB4YS9mb3JtMiB0cmF5DQpbICAgIDMuNzQxMTY1XSBjZHJvbTogVW5pZm9ybSBDRC1ST00gZHJp
dmVyIFJldmlzaW9uOiAzLjIwDQpbICAgIDMuNzQ1ODUyXSBzciAxOjA6MDowOiBBdHRhY2hlZCBz
Y3NpIENELVJPTSBzcjANClsgICAgMy43NTIxNzNdIHNyIDE6MDowOjA6IEF0dGFjaGVkIHNjc2kg
Z2VuZXJpYyBzZzAgdHlwZSA1DQpbICAgIDMuNzYwMjkxXSBGcmVlaW5nIHVudXNlZCBrZXJuZWwg
bWVtb3J5OiA5ODRrIGZyZWVkDQpbICAgIDMuNzY1NTExXSBXcml0ZSBwcm90ZWN0aW5nIHRoZSBr
ZXJuZWwgcmVhZC1vbmx5IGRhdGE6IDEwMjQwaw0KWyAgICAzLjc3MTU4NF0gRnJlZWluZyB1bnVz
ZWQga2VybmVsIG1lbW9yeTogMjBrIGZyZWVkDQpbICAgIDMuNzgxMTQ3XSBGcmVlaW5nIHVudXNl
ZCBrZXJuZWwgbWVtb3J5OiAxNDAwayBmcmVlZA0KWyAgICAzLjgxMjAyMF0gdWRldmRbOTNdOiBz
dGFydGluZyB2ZXJzaW9uIDE3Mw0KWyAgICAzLjg2NTc4NF0gbXB0MnNhcyB2ZXJzaW9uIDA4LjEw
MC4wMC4wMiBsb2FkZWQNClsgICAgMy44ODU5MTZdIEZsb3BweSBkcml2ZShzKTogZmQwIGlzIDEu
NDRNDQpbICAgIDMuODk5MjE4XSBzY3NpMiA6IEZ1c2lvbiBNUFQgU0FTIEhvc3QNClsgICAgMy45
MDQ4OTRdIHhlbjogLS0+IGlycT0zNiwgcGlycT0xNg0KWyAgICAzLjkxNjczNl0gbXB0MnNhcyAw
MDAwOjAwOjA1LjA6IFBDSSBJTlQgQSAtPiBHU0kgMzYgKGxldmVsLCBsb3cpIC0+IElSUSAzNg0K
WyAgICAzLjkyODI2OV0gbXB0MnNhczA6IDMyIEJJVCBQQ0kgQlVTIERNQSBBRERSRVNTSU5HIFNV
UFBPUlRFRCwgdG90YWwgbWVtICgyMDM0ODg4IGtCKQ0KWyAgICAzLjkzMjc0MF0gRkRDIDAgaXMg
YSBTODIwNzhCDQpbICAgIDMuOTY3Njk0XSBtcHQyc2FzMDogUENJLU1TSS1YIGVuYWJsZWQ6IElS
USA3NQ0KWyAgICAzLjk3MjgwMV0gbXB0MnNhczA6IGlvbWVtKDB4MDAwMDAwMDBmMzBlMDAwMCks
IG1hcHBlZCgweGZmZmZjOTAwMDA4YzAwMDApLCBzaXplKDE2Mzg0KQ0KWyAgICAzLjk4MDUwMF0g
bXB0MnNhczA6IGlvcG9ydCgweDAwMDAwMDAwMDAwMGMyMDApLCBzaXplKDI1NikNClsgICAgNC4x
MjgxMjhdIFJlZmluZWQgVFNDIGNsb2Nrc291cmNlIGNhbGlicmF0aW9uOiAzMjkyLjUyNSBNSHou
DQpbICAgIDQuMjgwMDgwXSBtcHQyc2FzMDogc2VuZGluZyBkaWFnIHJlc2V0ICEhDQpbICAgIDUu
NDIwODQxXSBtcHQyc2FzMDogZGlhZyByZXNldDogU1VDQ0VTUw0KWyAgICA1LjU3Mzc1Nl0gbXB0
MnNhczA6IEFsbG9jYXRlZCBwaHlzaWNhbCBtZW1vcnk6IHNpemUoMjI3NyBrQikNClsgICAgNS41
ODM1NDldIG1wdDJzYXMwOiBDdXJyZW50IENvbnRyb2xsZXIgUXVldWUgRGVwdGgoMTQ4MSksIE1h
eCBDb250cm9sbGVyIFF1ZXVlIERlcHRoKDE3MjApDQpbICAgIDUuNTk3NDQwXSBtcHQyc2FzMDog
U2NhdHRlciBHYXRoZXIgRWxlbWVudHMgcGVyIElPKDEyOCkNClsgICAzNS44MzYxODFdIG1wdDJz
YXMwOiBfYmFzZV9ldmVudF9ub3RpZmljYXRpb246IHRpbWVvdXQNClsgICAzNS44NDUwNTldIG1w
dDJzYXMwOiBzZW5kaW5nIGRpYWcgcmVzZXQgISENClsgICAzNi45ODkwMjZdIG1wdDJzYXMwOiBk
aWFnIHJlc2V0OiBTVUNDRVNTDQpbICAgMzYuOTk3OTk5XSBtcHQyc2FzIDAwMDA6MDA6MDUuMDog
UENJIElOVCBBIGRpc2FibGVkDQpbICAgMzcuMDE1ODk2XSBtcHQyc2FzMDogZmFpbHVyZSBhdCAv
YnVpbGQvYnVpbGRkL2xpbnV4LTMuMC4wL2RyaXZlcnMvc2NzaS9tcHQyc2FzL21wdDJzYXNfc2Nz
aWguYzo3NDY0L19zY3NpaF9wcm9iZSgpIQ0KWyAgIDM3LjU5NTY1OF0gQnRyZnMgbG9hZGVkDQpb
ICAgMzcuNjA1MDU4XSB4b3I6IGF1dG9tYXRpY2FsbHkgdXNpbmcgYmVzdCBjaGVja3N1bW1pbmcg
ZnVuY3Rpb246IGdlbmVyaWNfc3NlDQpbICAgMzcuNjMyMDY1XSAgICBnZW5lcmljX3NzZTogIDE1
NTQuMDAwIE1CL3NlYw0KWyAgIDM3LjYzNzg0M10geG9yOiB1c2luZyBmdW5jdGlvbjogZ2VuZXJp
Y19zc2UgKDE1NTQuMDAwIE1CL3NlYykNClsgICAzNy42NDY2MjNdIGRldmljZS1tYXBwZXI6IGRt
LXJhaWQ0NTogaW5pdGlhbGl6ZWQgdjAuMjU5NGINClsgICAzNy43NDk5MjBdIElTTyA5NjYwIEV4
dGVuc2lvbnM6IE1pY3Jvc29mdCBKb2xpZXQgTGV2ZWwgMw0KWyAgIDM3Ljc3MTg3N10gSVNPIDk2
NjAgRXh0ZW5zaW9uczogUlJJUF8xOTkxQQ0KWyAgIDM3Ljg5NTUyMV0gc3F1YXNoZnM6IHZlcnNp
b24gNC4wICgyMDA5LzAxLzMxKSBQaGlsbGlwIExvdWdoZXINCiAqIFN0YXJ0aW5nIG1ETlMvRE5T
LVNEIGRhZW1vbhtbNzRHWyBPSyBdICogU3RhcnRpbmcgbUROUy9ETlMtU0QgZGFlbW9uG1s3NEdb
IE9LIF1bICAgNDQuMzk1NjU3XSBwaWl4NF9zbWJ1cyAwMDAwOjAwOjAxLjM6IEhvc3QgU01CdXMg
Y29udHJvbGxlciBub3QgZW5hYmxlZCENCg0NCiAqIFN0YXJ0aW5nIG5ldHdvcmsgY29ubmVjdGlv
biBtYW5hZ2VyG1s3NEdbIE9LIF0NDQoNDQogKiBTdGFydGluZyBuZXR3b3JrIGNvbm5lY3Rpb24g
bWFuYWdlchtbNzRHWyBPSyBdDQ0KICogU3RhcnRpbmcgY29uZmlndXJlIG5ldHdvcmsgZGV2aWNl
IHNlY3VyaXR5G1s3NEdbIE9LIF0NDQogKiBTdGFydGluZyBjb25maWd1cmUgbmV0d29yayBkZXZp
Y2Ugc2VjdXJpdHkbWzc0R1sgT0sgXQ0NCiAqIFN0YXJ0aW5nIE1vdW50IG5ldHdvcmsgZmlsZXN5
c3RlbXMbWzc0R1sgT0sgXQ0NCiAqIFN0YXJ0aW5nIEZhaWxzYWZlIEJvb3QgRGVsYXkbWzc0R1sg
T0sgXQ0NCiAqIFN0YXJ0aW5nIE1vdW50IG5ldHdvcmsgZmlsZXN5c3RlbXMbWzc0R1sgT0sgXQ0N
CiAqIFN0YXJ0aW5nIEZhaWxzYWZlIEJvb3QgRGVsYXkbWzc0R1sgT0sgXQ0NCiAqIFN0b3BwaW5n
IE1vdW50IG5ldHdvcmsgZmlsZXN5c3RlbXMbWzc0R1sgT0sgXQ0NCiAqIFN0YXJ0aW5nIEJyaWRn
ZSBzb2NrZXQgZXZlbnRzIGludG8gdXBzdGFydBtbNzRHWyBPSyBdDQ0KICogU3RvcHBpbmcgTW91
bnQgbmV0d29yayBmaWxlc3lzdGVtcxtbNzRHWyBPSyBdDQ0KICogU3RhcnRpbmcgQnJpZGdlIHNv
Y2tldCBldmVudHMgaW50byB1cHN0YXJ0G1s3NEdbIE9LIF0NDQogKiBTdGFydGluZyBjb25maWd1
cmUgbmV0d29yayBkZXZpY2UbWzc0R1sgT0sgXQ0NCiAqIFN0YXJ0aW5nIFN5c3RlbSBWIGluaXRp
YWxpc2F0aW9uIGNvbXBhdGliaWxpdHkbWzc0R1sgT0sgXQ0NCiAqIFN0YXJ0aW5nIGNvbmZpZ3Vy
ZSBuZXR3b3JrIGRldmljZRtbNzRHWyBPSyBdDQ0KICogU3RhcnRpbmcgU3lzdGVtIFYgaW5pdGlh
bGlzYXRpb24gY29tcGF0aWJpbGl0eRtbNzRHWyBPSyBdDQ0Kc3BlZWNoLWRpc3BhdGNoZXIgZGlz
YWJsZWQ7IGVkaXQgL2V0Yy9kZWZhdWx0L3NwZWVjaC1kaXNwYXRjaGVyDQ0KQ2hlY2tpbmcgZm9y
IHJ1bm5pbmcgdW5hdHRlbmRlZC11cGdyYWRlczogDQ0KICogU3RvcHBpbmcgRmFpbHNhZmUgQm9v
dCBEZWxheRtbNzRHWyBPSyBdDQ0KICogU3RvcHBpbmcgU3lzdGVtIFYgaW5pdGlhbGlzYXRpb24g
Y29tcGF0aWJpbGl0eRtbNzRHWyBPSyBdDQ0KICogU3RhcnRpbmcgU3lzdGVtIFYgcnVubGV2ZWwg
Y29tcGF0aWJpbGl0eRtbNzRHWyBPSyBdDQ0KICogU3RhcnRpbmcgYXV0b21hdGljIGNyYXNoIHJl
cG9ydCBnZW5lcmF0aW9uG1s3NEdbIE9LIF0NDQogKiBTdGFydGluZyBBQ1BJIGRhZW1vbhtbNzRH
WyBPSyBdDQ0KICogU3RhcnRpbmcgc2F2ZSBrZXJuZWwgbWVzc2FnZXMbWzc0R1sgT0sgXQ0NCiAq
IFN0YXJ0aW5nIGRlZmVycmVkIGV4ZWN1dGlvbiBzY2hlZHVsZXIbWzc0R1sgT0sgXQ0NCiAqIFN0
YXJ0aW5nIHJlZ3VsYXIgYmFja2dyb3VuZCBwcm9ncmFtIHByb2Nlc3NpbmcgZGFlbW9uG1s3NEdb
IE9LIF0NDQogKiBTdG9wcGluZyBjb2xkIHBsdWcgZGV2aWNlcxtbNzRHWyBPSyBdDQ0KICogU3Rv
cHBpbmcgbG9nIGluaXRpYWwgZGV2aWNlIGNyZWF0aW9uG1s3NEdbIE9LIF0NDQpzcGVlY2gtZGlz
cGF0Y2hlciBkaXNhYmxlZDsgZWRpdCAvZXRjL2RlZmF1bHQvc3BlZWNoLWRpc3BhdGNoZXINDQpD
aGVja2luZyBmb3IgcnVubmluZyB1bmF0dGVuZGVkLXVwZ3JhZGVzOiANDQogKiBTdG9wcGluZyBG
YWlsc2FmZSBCb290IERlbGF5G1s3NEdbIE9LIF0NDQogKiBTdG9wcGluZyBTeXN0ZW0gViBpbml0
aWFsaXNhdGlvbiBjb21wYXRpYmlsaXR5G1s3NEdbIE9LIF0NDQogKiBTdGFydGluZyBTeXN0ZW0g
ViBydW5sZXZlbCBjb21wYXRpYmlsaXR5G1s3NEdbIE9LIF0NDQogKiBTdGFydGluZyBhdXRvbWF0
aWMgY3Jhc2ggcmVwb3J0IGdlbmVyYXRpb24bWzc0R1sgT0sgXQ0NCiAqIFN0YXJ0aW5nIEFDUEkg
ZGFlbW9uG1s3NEdbIE9LIF0NDQogKiBTdGFydGluZyBzYXZlIGtlcm5lbCBtZXNzYWdlcxtbNzRH
WyBPSyBdDQ0KICogU3RhcnRpbmcgZGVmZXJyZWQgZXhlY3V0aW9uIHNjaGVkdWxlchtbNzRHWyBP
SyBdDQ0KICogU3RhcnRpbmcgcmVndWxhciBiYWNrZ3JvdW5kIHByb2dyYW0gcHJvY2Vzc2luZyBk
YWVtb24bWzc0R1sgT0sgXQ0NCiAqIFN0b3BwaW5nIGNvbGQgcGx1ZyBkZXZpY2VzG1s3NEdbIE9L
IF0NDQogKiBTdG9wcGluZyBsb2cgaW5pdGlhbCBkZXZpY2UgY3JlYXRpb24bWzc0R1sgT0sgXQ0N
CiAqIFN0YXJ0aW5nIGxvYWQgZmFsbGJhY2sgZ3JhcGhpY3MgZGV2aWNlcxtbNzRHWyBPSyBdDQ0K
ICogU3RhcnRpbmcgZW5hYmxlIHJlbWFpbmluZyBib290LXRpbWUgZW5jcnlwdGVkIGJsb2NrIGRl
dmljZXMbWzc0R1sgT0sgXQ0NCiAqIFN0YXJ0aW5nIGNvbmZpZ3VyZSB2aXJ0dWFsIG5ldHdvcmsg
ZGV2aWNlcxtbNzRHWyBPSyBdDQ0KICogU3RhcnRpbmcgQ1BVIGludGVycnVwdHMgYmFsYW5jaW5n
IGRhZW1vbhtbNzRHWyBPSyBdDQ0KICogU3RvcHBpbmcgY29uZmlndXJlIHZpcnR1YWwgbmV0d29y
ayBkZXZpY2VzG1s3NEdbIE9LIF0NDQogKiBTdGFydGluZyBjb25maWd1cmUgbmV0d29yayBkZXZp
Y2Ugc2VjdXJpdHkbWzc0R1sgT0sgXQ0NCiAqIFN0YXJ0aW5nIGxvYWQgZmFsbGJhY2sgZ3JhcGhp
Y3MgZGV2aWNlcxtbNzRHWyBPSyBdDQ0KICogU3RhcnRpbmcgZW5hYmxlIHJlbWFpbmluZyBib290
LXRpbWUgZW5jcnlwdGVkIGJsb2NrIGRldmljZXMbWzc0R1sgT0sgXQ0NCiAqIFN0YXJ0aW5nIGNv
bmZpZ3VyZSB2aXJ0dWFsIG5ldHdvcmsgZGV2aWNlcxtbNzRHWyBPSyBdDQ0KICogU3RhcnRpbmcg
Q1BVIGludGVycnVwdHMgYmFsYW5jaW5nIGRhZW1vbhtbNzRHWyBPSyBdDQ0KICogU3RvcHBpbmcg
Y29uZmlndXJlIHZpcnR1YWwgbmV0d29yayBkZXZpY2VzG1s3NEdbIE9LIF0NDQogKiBTdGFydGlu
ZyBjb25maWd1cmUgbmV0d29yayBkZXZpY2Ugc2VjdXJpdHkbWzc0R1sgT0sgXQ0NCiAqIFN0YXJ0
aW5nIHNhdmUgdWRldiBsb2cgYW5kIHVwZGF0ZSBydWxlcxtbNzRHWyBPSyBdICogU3RhcnRpbmcg
c2F2ZSB1ZGV2IGxvZyBhbmQgdXBkYXRlIHJ1bGVzG1s3NEdbIE9LIF0NDQoNDQogKiBTdG9wcGlu
ZyBzYXZlIHVkZXYgbG9nIGFuZCB1cGRhdGUgcnVsZXMbWzc0R1sgT0sgXQ0NCiAqIFN0b3BwaW5n
IHNhdmUgdWRldiBsb2cgYW5kIHVwZGF0ZSBydWxlcxtbNzRHWyBPSyBdDQ0KICogU3RhcnRpbmcg
bG9hZCBmYWxsYmFjayBncmFwaGljcyBkZXZpY2VzG1s3NEdbG1szMW1mYWlsG1szOTs0OW1dDQ0K
ICogU3RhcnRpbmcgbG9hZCBmYWxsYmFjayBncmFwaGljcyBkZXZpY2VzG1s3NEdbG1szMW1mYWls
G1szOTs0OW1dDQ0KICogU3RhcnRpbmcgVWJ1bnR1IGxpdmUgQ0QgaW5zdGFsbGVyG1s3NEdbIE9L
IF0gKiBTdGFydGluZyBVYnVudHUgbGl2ZSBDRCBpbnN0YWxsZXIbWzc0R1sgT0sgXQ0NCg0NCiAq
IFN0b3BwaW5nIFVidW50dSBsaXZlIENEIGluc3RhbGxlchtbNzRHWyBPSyBdICogU3RvcHBpbmcg
VWJ1bnR1IGxpdmUgQ0QgaW5zdGFsbGVyG1s3NEdbIE9LIF0NDQogKiBTdGFydGluZyBMaWdodERN
IERpc3BsYXkgTWFuYWdlchtbNzRHWyBPSyBdDQ0KDQ0KICogU3RhcnRpbmcgTGlnaHRETSBEaXNw
bGF5IE1hbmFnZXIbWzc0R1sgT0sgXQ0NCiAqIFN0b3BwaW5nIGVuYWJsZSByZW1haW5pbmcgYm9v
dC10aW1lIGVuY3J5cHRlZCBibG9jayBkZXZpY2VzG1s3NEdbIE9LIF0NDQogKiBTdG9wcGluZyBl
bmFibGUgcmVtYWluaW5nIGJvb3QtdGltZSBlbmNyeXB0ZWQgYmxvY2sgZGV2aWNlcxtbNzRHWyBP
SyBdDQ0KICogU3RvcHBpbmcgc2F2ZSBrZXJuZWwgbWVzc2FnZXMbWzc0R1sgT0sgXQ0NCiAqIFN0
b3BwaW5nIHNhdmUga2VybmVsIG1lc3NhZ2VzG1s3NEdbIE9LIF0NDQoNG1s3NEdbIE9LIF0NCiAb
WzMzbSobWzM5OzQ5bSBQdWxzZUF1ZGlvIGNvbmZpZ3VyZWQgZm9yIHBlci11c2VyIHNlc3Npb25z
DQpzYW5lZCBkaXNhYmxlZDsgZWRpdCAvZXRjL2RlZmF1bHQvc2FuZWQNCiAqIENoZWNraW5nIGJh
dHRlcnkgc3RhdGUuLi4gICAgICAgG1s4MEcgDRtbNzRHWyBPSyBdDQo=
--bcaec53d5ed7de620704c637f31e
Content-Type: application/octet-stream; name="ubuntu_guest_xl_dmesg.log"
Content-Disposition: attachment; filename="ubuntu_guest_xl_dmesg.log"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_h5coz3we3

IF9fICBfXyAgICAgICAgICAgIF8gIF8gICAgX19fXyAgICAgICAgICAgICAgICAgICAgIF8gICAg
ICAgIF8gICAgIF8gICAgICAKIFwgXC8gL19fXyBfIF9fICAgfCB8fCB8ICB8X19fIFwgICAgXyAg
IF8gXyBfXyAgX19ffCB8XyBfXyBffCB8X18gfCB8IF9fXyAKICBcICAvLyBfIFwgJ18gXCAgfCB8
fCB8XyAgIF9fKSB8X198IHwgfCB8ICdfIFwvIF9ffCBfXy8gX2AgfCAnXyBcfCB8LyBfIFwKICAv
ICBcICBfXy8gfCB8IHwgfF9fICAgX3wgLyBfXy98X198IHxffCB8IHwgfCBcX18gXCB8fCAoX3wg
fCB8XykgfCB8ICBfXy8KIC9fL1xfXF9fX3xffCB8X3wgICAgfF98KF8pX19fX198ICAgXF9fLF98
X3wgfF98X19fL1xfX1xfXyxffF8uX18vfF98XF9fX3wKICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAKKFhFTikg
WGVuIHZlcnNpb24gNC4yLXVuc3RhYmxlIChkZXJpY2tzb0Boc2QxLmNhLmNvbWNhc3QubmV0KSAo
Z2NjIHZlcnNpb24gNC42LjMgKFVidW50dS9MaW5hcm8gNC42LjMtMXVidW50dTUpICkgVHVlIEp1
bCAzMSAwODo0NzowNCBQRFQgMjAxMgooWEVOKSBMYXRlc3QgQ2hhbmdlU2V0OiBGcmkgSnVsIDI3
IDEyOjIyOjEzIDIwMTIgKzAyMDAgMjU2ODg6ZTYyNjZmYzc2ZDA4CihYRU4pIEJvb3Rsb2FkZXI6
IEdSVUIgMS45OS0yMXVidW50dTMuMQooWEVOKSBDb21tYW5kIGxpbmU6IHBsYWNlaG9sZGVyIGRv
bTBfbWVtPTQwOTZNIHhzYXZlPTAKKFhFTikgVmlkZW8gaW5mb3JtYXRpb246CihYRU4pICBWR0Eg
aXMgdGV4dCBtb2RlIDgweDI1LCBmb250IDh4MTYKKFhFTikgIFZCRS9EREMgbWV0aG9kczogVjI7
IEVESUQgdHJhbnNmZXIgdGltZTogMSBzZWNvbmRzCihYRU4pIERpc2MgaW5mb3JtYXRpb246CihY
RU4pICBGb3VuZCAxIE1CUiBzaWduYXR1cmVzCihYRU4pICBGb3VuZCAyIEVERCBpbmZvcm1hdGlv
biBzdHJ1Y3R1cmVzCihYRU4pIFhlbi1lODIwIFJBTSBtYXA6CihYRU4pICAwMDAwMDAwMDAwMDAw
MDAwIC0gMDAwMDAwMDAwMDA5YzgwMCAodXNhYmxlKQooWEVOKSAgMDAwMDAwMDAwMDA5YzgwMCAt
IDAwMDAwMDAwMDAwYTAwMDAgKHJlc2VydmVkKQooWEVOKSAgMDAwMDAwMDAwMDBlMDAwMCAtIDAw
MDAwMDAwMDAxMDAwMDAgKHJlc2VydmVkKQooWEVOKSAgMDAwMDAwMDAwMDEwMDAwMCAtIDAwMDAw
MDAwZGRkMDcwMDAgKHVzYWJsZSkKKFhFTikgIDAwMDAwMDAwZGRkMDcwMDAgLSAwMDAwMDAwMGRk
ZGJiMDAwIChyZXNlcnZlZCkKKFhFTikgIDAwMDAwMDAwZGRkYmIwMDAgLSAwMDAwMDAwMGRkZGJj
MDAwIChBQ1BJIGRhdGEpCihYRU4pICAwMDAwMDAwMGRkZGJjMDAwIC0gMDAwMDAwMDBkZGVkNzAw
MCAoQUNQSSBOVlMpCihYRU4pICAwMDAwMDAwMGRkZWQ3MDAwIC0gMDAwMDAwMDBkZWY5MjAwMCAo
cmVzZXJ2ZWQpCihYRU4pICAwMDAwMDAwMGRlZjkyMDAwIC0gMDAwMDAwMDBkZWY5MzAwMCAodXNh
YmxlKQooWEVOKSAgMDAwMDAwMDBkZWY5MzAwMCAtIDAwMDAwMDAwZGVmZDYwMDAgKEFDUEkgTlZT
KQooWEVOKSAgMDAwMDAwMDBkZWZkNjAwMCAtIDAwMDAwMDAwZGY4MDAwMDAgKHVzYWJsZSkKKFhF
TikgIDAwMDAwMDAwZjgwMDAwMDAgLSAwMDAwMDAwMGZjMDAwMDAwIChyZXNlcnZlZCkKKFhFTikg
IDAwMDAwMDAwZmVjMDAwMDAgLSAwMDAwMDAwMGZlYzAxMDAwIChyZXNlcnZlZCkKKFhFTikgIDAw
MDAwMDAwZmVkMDAwMDAgLSAwMDAwMDAwMGZlZDA0MDAwIChyZXNlcnZlZCkKKFhFTikgIDAwMDAw
MDAwZmVkMWMwMDAgLSAwMDAwMDAwMGZlZDIwMDAwIChyZXNlcnZlZCkKKFhFTikgIDAwMDAwMDAw
ZmVlMDAwMDAgLSAwMDAwMDAwMGZlZTAxMDAwIChyZXNlcnZlZCkKKFhFTikgIDAwMDAwMDAwZmYw
MDAwMDAgLSAwMDAwMDAwMTAwMDAwMDAwIChyZXNlcnZlZCkKKFhFTikgIDAwMDAwMDAxMDAwMDAw
MDAgLSAwMDAwMDAwNDIwMDAwMDAwICh1c2FibGUpCihYRU4pIEFDUEk6IFJTRFAgMDAwRjA0OTAs
IDAwMjQgKHIyIEFMQVNLQSkKKFhFTikgQUNQSTogWFNEVCBEREVDNzA5MCwgMDA5QyAocjEgQUxB
U0tBICAgIEEgTSBJICAxMDcyMDA5IEFNSSAgICAgMTAwMTMpCihYRU4pIEFDUEk6IEZBQ1AgRERF
RDFDQTAsIDAwRjQgKHI0IEFMQVNLQSAgICBBIE0gSSAgMTA3MjAwOSBBTUkgICAgIDEwMDEzKQoo
WEVOKSBBQ1BJOiBEU0RUIERERUM3MUMwLCBBQURBIChyMiBBTEFTS0EgICAgQSBNIEkgICAgICAg
NkYgSU5UTCAyMDA1MTExNykKKFhFTikgQUNQSTogRkFDUyBEREVENUY4MCwgMDA0MAooWEVOKSBB
Q1BJOiBBUElDIERERUQxRDk4LCAwMDkyIChyMyBBTEFTS0EgICAgQSBNIEkgIDEwNzIwMDkgQU1J
ICAgICAxMDAxMykKKFhFTikgQUNQSTogRlBEVCBEREVEMUUzMCwgMDA0NCAocjEgQUxBU0tBICAg
IEEgTSBJICAxMDcyMDA5IEFNSSAgICAgMTAwMTMpCihYRU4pIEFDUEk6IE1DRkcgRERFRDFFNzgs
IDAwM0MgKHIxIEFMQVNLQSAgICBBIE0gSSAgMTA3MjAwOSBNU0ZUICAgICAgIDk3KQooWEVOKSBB
Q1BJOiBQUkFEIERERUQxRUI4LCAwMEJFIChyMiBQUkFESUQgIFBSQURUSUQgICAgICAgIDEgTVNG
VCAgMzAwMDAwMSkKKFhFTikgQUNQSTogSFBFVCBEREVEMUY3OCwgMDAzOCAocjEgQUxBU0tBICAg
IEEgTSBJICAxMDcyMDA5IEFNSS4gICAgICAgIDUpCihYRU4pIEFDUEk6IFNTRFQgRERFRDFGQjAs
IDAzNkQgKHIxIFNhdGFSZSBTYXRhVGFibCAgICAgMTAwMCBJTlRMIDIwMDkxMTEyKQooWEVOKSBB
Q1BJOiBTUE1JIERERUQyMzIwLCAwMDQwIChyNSBBIE0gSSAgIE9FTVNQTUkgICAgICAgIDAgQU1J
LiAgICAgICAgMCkKKFhFTikgQUNQSTogU1NEVCBEREVEMjM2MCwgMDlBNCAocjEgIFBtUmVmICBD
cHUwSXN0ICAgICAzMDAwIElOVEwgMjAwNTExMTcpCihYRU4pIEFDUEk6IFNTRFQgRERFRDJEMDgs
 0A88 (r1  PmRef    CpuPm     3000 INTL 20051117)
(XEN) ACPI: DMAR DDED3790, 0078 (r1 INTEL      SNB         1 INTL        1)
(XEN) ACPI: EINJ DDED3808, 0130 (r1    AMI AMI EINJ        0             0)
(XEN) ACPI: ERST DDED3938, 0210 (r1  AMIER AMI ERST        0             0)
(XEN) ACPI: HEST DDED3B48, 00A8 (r1    AMI AMI HEST        0             0)
(XEN) ACPI: BERT DDED3BF0, 0030 (r1    AMI AMI BERT        0             0)
(XEN) System RAM: 16356MB (16749368kB)
(XEN) No NUMA configuration found
(XEN) Faking a node at 0000000000000000-0000000420000000
(XEN) Domain heap initialised
(XEN) found SMP MP-table at 000fd7b0
(XEN) DMI 2.7 present.
(XEN) Using APIC driver default
(XEN) ACPI: PM-Timer IO Port: 0x408
(XEN) ACPI: ACPI SLEEP INFO: pm1x_cnt[404,0], pm1x_evt[400,0]
(XEN) ACPI: 32/64X FACS address mismatch in FADT - dded5f80/0000000000000000, using 32
(XEN) ACPI:                wakeup_vec[dded5f8c], vec_size[20]
(XEN) ACPI: Local APIC address 0xfee00000
(XEN) ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled)
(XEN) Processor #0 7:10 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
(XEN) Processor #2 7:10 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x03] lapic_id[0x04] enabled)
(XEN) Processor #4 7:10 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x04] lapic_id[0x06] enabled)
(XEN) Processor #6 7:10 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x05] lapic_id[0x01] enabled)
(XEN) Processor #1 7:10 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x06] lapic_id[0x03] enabled)
(XEN) Processor #3 7:10 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x07] lapic_id[0x05] enabled)
(XEN) Processor #5 7:10 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x08] lapic_id[0x07] enabled)
(XEN) Processor #7 7:10 APIC version 21
(XEN) ACPI: LAPIC_NMI (acpi_id[0xff] high edge lint[0x1])
(XEN) ACPI: IOAPIC (id[0x02] address[0xfec00000] gsi_base[0])
(XEN) IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-23
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
(XEN) ACPI: IRQ0 used by override.
(XEN) ACPI: IRQ2 used by override.
(XEN) ACPI: IRQ9 used by override.
(XEN) Enabling APIC mode:  Flat.  Using 1 I/O APICs
(XEN) ACPI: HPET id: 0x8086a701 base: 0xfed00000
(XEN) ERST table is invalid
(XEN) Using ACPI (MADT) for SMP configuration information
(XEN) SMP: Allowing 8 CPUs (0 hotplug CPUs)
(XEN) IRQ limits: 24 GSI, 1528 MSI/MSI-X
(XEN) Switched to APIC driver x2apic_cluster.
(XEN) Using scheduler: SMP Credit Scheduler (credit)
(XEN) Detected 3292.578 MHz processor.
(XEN) Initing memory sharing.
(XEN) mce_intel.c:1239: MCA Capability: BCAST 1 SER 0 CMCI 1 firstbank 0 extended MCE MSR 0
(XEN) Intel machine check reporting enabled
(XEN) PCI: MCFG configuration 0: base f8000000 segment 0000 buses 00 - 3f
(XEN) PCI: MCFG area at f8000000 reserved in E820
(XEN) PCI: Using MCFG for segment 0000 bus 00-3f
(XEN) Intel VT-d Snoop Control enabled.
(XEN) Intel VT-d Dom0 DMA Passthrough not enabled.
(XEN) Intel VT-d Queued Invalidation enabled.
(XEN) Intel VT-d Interrupt Remapping enabled.
(XEN) Intel VT-d Shared EPT tables not enabled.
(XEN) I/O virtualisation enabled
(XEN)  - Dom0 mode: Relaxed
(XEN) Enabled directed EOI with ioapic_ack_old on!
(XEN) ENABLING IO-APIC IRQs
(XEN)  -> Using old ACK method
(XEN) ..TIMER: vector=0xF0 apic1=0 pin1=2 apic2=-1 pin2=-1
(XEN) TSC deadline timer enabled
(XEN) Platform timer is 14.318MHz HPET
(XEN) Allocated console ring of 64 KiB.
(XEN) VMX: Supported advanced features:
(XEN)  - APIC MMIO access virtualisation
(XEN)  - APIC TPR shadow
(XEN)  - Extended Page Tables (EPT)
(XEN)  - Virtual-Processor Identifiers (VPID)
(XEN)  - Virtual NMI
(XEN)  - MSR direct-access bitmap
(XEN)  - Unrestricted Guest
(XEN) HVM: ASIDs enabled.
(XEN) HVM: VMX enabled
(XEN) HVM: Hardware Assisted Paging (HAP) detected
(XEN) HVM: HAP page sizes: 4kB, 2MB
(XEN) Brought up 8 CPUs
(XEN) ACPI sleep modes: S3
(XEN) mcheck_poll: Machine check polling timer started.
(XEN) *** LOADING DOMAIN 0 ***
(XEN) elf_parse_binary: phdr: paddr=0x1000000 memsz=0xac5000
(XEN) elf_parse_binary: phdr: paddr=0x1c00000 memsz=0xe60e0
(XEN) elf_parse_binary: phdr: paddr=0x1ce7000 memsz=0x14480
(XEN) elf_parse_binary: phdr: paddr=0x1cfc000 memsz=0x362000
(XEN) elf_parse_binary: memory: 0x1000000 -> 0x205e000
(XEN) elf_xen_parse_note: GUEST_OS = "linux"
(XEN) elf_xen_parse_note: GUEST_VERSION = "2.6"
(XEN) elf_xen_parse_note: XEN_VERSION = "xen-3.0"
(XEN) elf_xen_parse_note: VIRT_BASE = 0xffffffff80000000
(XEN) elf_xen_parse_note: ENTRY = 0xffffffff81cfc200
(XEN) elf_xen_parse_note: HYPERCALL_PAGE = 0xffffffff81001000
(XEN) elf_xen_parse_note: FEATURES = "!writable_page_tables|pae_pgdir_above_4gb"
(XEN) elf_xen_parse_note: PAE_MODE = "yes"
(XEN) elf_xen_parse_note: LOADER = "generic"
(XEN) elf_xen_parse_note: unknown xen elf note (0xd)
(XEN) elf_xen_parse_note: SUSPEND_CANCEL = 0x1
(XEN) elf_xen_parse_note: HV_START_LOW = 0xffff800000000000
(XEN) elf_xen_parse_note: PADDR_OFFSET = 0x0
(XEN) elf_xen_addr_calc_check: addresses:
(XEN)     virt_base        = 0xffffffff80000000
(XEN)     elf_paddr_offset = 0x0
(XEN)     virt_offset      = 0xffffffff80000000
(XEN)     virt_kstart      = 0xffffffff81000000
(XEN)     virt_kend        = 0xffffffff8205e000
(XEN)     virt_entry       = 0xffffffff81cfc200
(XEN)     p2m_base         = 0xffffffffffffffff
(XEN)  Xen  kernel: 64-bit, lsb, compat32
(XEN)  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x205e000
(XEN) PHYSICAL MEMORY ARRANGEMENT:
(XEN)  Dom0 alloc.:   000000040c000000->0000000410000000 (1022357 pages to be allocated)
(XEN)  Init. ramdisk: 000000041d995000->000000041ffff200
(XEN) VIRTUAL MEMORY ARRANGEMENT:
(XEN)  Loaded kernel: ffffffff81000000->ffffffff8205e000
(XEN)  Init. ramdisk: ffffffff8205e000->ffffffff846c8200
(XEN)  Phys-Mach map: ffffffff846c9000->ffffffff84ec9000
(XEN)  Start info:    ffffffff84ec9000->ffffffff84ec94b4
(XEN)  Page tables:   ffffffff84eca000->ffffffff84ef5000
(XEN)  Boot stack:    ffffffff84ef5000->ffffffff84ef6000
(XEN)  TOTAL:         ffffffff80000000->ffffffff85000000
(XEN)  ENTRY ADDRESS: ffffffff81cfc200
(XEN) Dom0 has maximum 8 VCPUs
(XEN) elf_load_binary: phdr 0 at 0xffffffff81000000 -> 0xffffffff81ac5000
(XEN) elf_load_binary: phdr 1 at 0xffffffff81c00000 -> 0xffffffff81ce60e0
(XEN) elf_load_binary: phdr 2 at 0xffffffff81ce7000 -> 0xffffffff81cfb480
(XEN) elf_load_binary: phdr 3 at 0xffffffff81cfc000 -> 0xffffffff81dd2000
(XEN) Scrubbing Free RAM: ........................................................................................................................done.
(XEN) Initial low memory virq threshold set at 0x4000 pages.
(XEN) Std. Loglevel: All
(XEN) Guest Loglevel: All
(XEN) Xen is relinquishing VGA console.
(XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen)
(XEN) Freed 240kB init memory.
(XEN) DEBUG evtchn_bind_pirq 408 pirq=9 irq=9 emuirq=472850432
(XEN) PCI add device 0000:00:00.0
(XEN) PCI add device 0000:00:01.0
(XEN) PCI add device 0000:00:01.1
(XEN) PCI add device 0000:00:06.0
(XEN) PCI add device 0000:00:1a.0
(XEN) PCI add device 0000:00:1c.0
(XEN) PCI add device 0000:00:1c.6
(XEN) PCI add device 0000:00:1d.0
(XEN) PCI add device 0000:00:1e.0
(XEN) PCI add device 0000:00:1f.0
(XEN) PCI add device 0000:00:1f.2
(XEN) PCI add device 0000:00:1f.3
(XEN) PCI add device 0000:01:00.0
(XEN) PCI add device 0000:01:00.1
(XEN) PCI add device 0000:02:00.0
(XEN) PCI add device 0000:04:00.0
(XEN) PCI add device 0000:05:00.0
(XEN) PCI add device 0000:06:00.0
(XEN) DEBUG evtchn_bind_pirq 408 pirq=276 irq=28 emuirq=-607835121
(XEN) DEBUG evtchn_bind_pirq 408 pirq=16 irq=16 emuirq=281805571
(XEN) DEBUG evtchn_bind_pirq 408 pirq=8 irq=8 emuirq=0
(XEN) DEBUG evtchn_bind_pirq 408 pirq=275 irq=29 emuirq=1996518789
(XEN) DEBUG evtchn_bind_pirq 408 pirq=274 irq=30 emuirq=132368
(XEN) DEBUG evtchn_bind_pirq 408 pirq=273 irq=31 emuirq=496581248
(XEN) HVM1: HVM Loader
(XEN) HVM1: Detected Xen v4.2-unstable
(XEN) HVM1: Xenbus rings @0xfeffc000, event channel 4
(XEN) HVM1: System requested SeaBIOS
(XEN) HVM1: CPU speed is 3293 MHz
(XEN) irq.c:270: Dom1 PCI link 0 changed 0 -> 5
(XEN) HVM1: PCI-ISA link 0 routed to IRQ5
(XEN) irq.c:270: Dom1 PCI link 1 changed 0 -> 10
(XEN) HVM1: PCI-ISA link 1 routed to IRQ10
(XEN) irq.c:270: Dom1 PCI link 2 changed 0 -> 11
(XEN) HVM1: PCI-ISA link 2 routed to IRQ11
(XEN) irq.c:270: Dom1 PCI link 3 changed 0 -> 5
(XEN) HVM1: PCI-ISA link 3 routed to IRQ5
(XEN) HVM1: pci dev 01:3 INTA->IRQ10
(XEN) HVM1: pci dev 03:0 INTA->IRQ5
(XEN) HVM1: pci dev 04:0 INTA->IRQ5
(XEN) HVM1: pci dev 05:0 INTA->IRQ10
(XEN) HVM1: pci dev 02:0 bar 10 size 02000000: f0000008
(XEN) HVM1: pci dev 03:0 bar 14 size 01000000: f2000008
(XEN) HVM1: pci dev 05:0 bar 30 size 00080000: f3000000
(XEN) memory_map:add: dom1 gfn=f3080 mfn=f7a80 nr=40
(XEN) HVM1: pci dev 05:0 bar 1c size 00040000: f3080004
(XEN) HVM1: pci dev 02:0 bar 30 size 00010000: f30c0000
(XEN) HVM1: pci dev 04:0 bar 30 size 00010000: f30d0000
(XEN) memory_map:add: dom1 gfn=f30e0 mfn=f7ac0 nr=2
(XEN) memory_map:add: dom1 gfn=f30e3 mfn=f7ac3 nr=1
(XEN) HVM1: pci dev 05:0 bar 14 size 00004000: f30e0004
(XEN) HVM1: pci dev 02:0 bar 14 size 00001000: f30e4000
(XEN) HVM1: pci dev 03:0 bar 10 size 00000100: 0000c001
(XEN) HVM1: pci dev 04:0 bar 10 size 00000100: 0000c101
(XEN) HVM1: pci dev 04:0 bar 14 size 00000100: f30e5000
(XEN) HVM1: pci dev 05:0 bar 10 size 00000100: 0000c201
(XEN) ioport_map:add: dom1 gport=c200 mport=d000 nr=100
(XEN) HVM1: pci dev 01:1 bar 20 size 00000010: 0000c301
(XEN) HVM1: Multiprocessor initialisation:
(XEN) HVM1:  - CPU0 ... 36-bit phys ... fixed MTRRs ... var MTRRs [2/8] ... done.
(XEN) HVM1:  - CPU1 ... 36-bit phys ... fixed MTRRs ... var MTRRs [2/8] ... done.
(XEN) HVM1: Testing HVM environment:
(XEN) HVM1:  - REP INSB across page boundaries ... passed
(XEN) HVM1:  - GS base MSRs and SWAPGS ... passed
(XEN) HVM1: Passed 2 of 2 tests
(XEN) HVM1: Writing SMBIOS tables ...
(XEN) HVM1: Loading SeaBIOS ...
(XEN) HVM1: Creating MP tables ...
(XEN) HVM1: Loading ACPI ...
(XEN) HVM1: vm86 TSS at fc00a080
(XEN) HVM1: BIOS map:
(XEN) HVM1:  10000-100d3: Scratch space
(XEN) HVM1:  e0000-fffff: Main BIOS
(XEN) HVM1: E820 table:
(XEN) HVM1:  [00]: 00000000:00000000 - 00000000:000a0000: RAM
(XEN) HVM1:  HOLE: 00000000:000a0000 - 00000000:000e0000
(XEN) HVM1:  [01]: 00000000:000e0000 - 00000000:00100000: RESERVED
(XEN) HVM1:  [02]: 00000000:00100000 - 00000000:7f800000: RAM
(XEN) HVM1:  HOLE: 00000000:7f800000 - 00000000:fc000000
(XEN) HVM1:  [03]: 00000000:fc000000 - 00000001:00000000: RESERVED
(XEN) HVM1: Invoking SeaBIOS ...
(XEN) stdvga.c:147:d1 entering stdvga and caching modes
(XEN) memory_map:add: dom1 gfn=f3000 mfn=f7a00 nr=80
(XEN) memory_map:remove: dom1 gfn=f3000 mfn=f7a00 nr=80
(XEN) HVM2: HVM Loader
(XEN) HVM2: Detected Xen v4.2-unstable
(XEN) HVM2: Xenbus rings @0xfeffc000, event channel 4
(XEN) HVM2: System requested SeaBIOS
(XEN) HVM2: CPU speed is 3293 MHz
(XEN) irq.c:270: Dom2 PCI link 0 changed 0 -> 5
(XEN) HVM2: PCI-ISA link 0 routed to IRQ5
(XEN) irq.c:270: Dom2 PCI link 1 changed 0 -> 10
(XEN) HVM2: PCI-ISA link 1 routed to IRQ10
(XEN) irq.c:270: Dom2 PCI link 2 changed 0 -> 11
(XEN) HVM2: PCI-ISA link 2 routed to IRQ11
(XEN) irq.c:270: Dom2 PCI link 3 changed 0 -> 5
(XEN) HVM2: PCI-ISA link 3 routed to IRQ5
(XEN) HVM2: pci dev 01:3 INTA->IRQ10
(XEN) HVM2: pci dev 03:0 INTA->IRQ5
(XEN) HVM2: pci dev 04:0 INTA->IRQ5
(XEN) HVM2: pci dev 05:0 INTA->IRQ10
(XEN) HVM2: pci dev 02:0 bar 10 size 02000000: f0000008
(XEN) HVM2: pci dev 03:0 bar 14 size 01000000: f2000008
(XEN) HVM2: pci dev 05:0 bar 30 size 00080000: f3000000
(XEN) memory_map:add: dom2 gfn=f3080 mfn=f7a80 nr=40
(XEN) HVM2: pci dev 05:0 bar 1c size 00040000: f3080004
(XEN) HVM2: pci dev 02:0 bar 30 size 00010000: f30c0000
(XEN) HVM2: pci dev 04:0 bar 30 size 00010000: f30d0000
(XEN) memory_map:add: dom2 gfn=f30e0 mfn=f7ac0 nr=2
(XEN) memory_map:add: dom2 gfn=f30e3 mfn=f7ac3 nr=1
(XEN) HVM2: pci dev 05:0 bar 14 size 00004000: f30e0004
(XEN) HVM2: pci dev 02:0 bar 14 size 00001000: f30e4000
(XEN) HVM2: pci dev 03:0 bar 10 size 00000100: 0000c001
(XEN) HVM2: pci dev 04:0 bar 10 size 00000100: 0000c101
(XEN) HVM2: pci dev 04:0 bar 14 size 00000100: f30e5000
(XEN) HVM2: pci dev 05:0 bar 10 size 00000100: 0000c201
(XEN) ioport_map:add: dom2 gport=c200 mport=d000 nr=100
(XEN) HVM2: pci dev 01:1 bar 20 size 00000010: 0000c301
(XEN) HVM2: Multiprocessor initialisation:
(XEN) HVM2:  - CPU0 ... 36-bit phys ... fixed MTRRs ... var MTRRs [2/8] ... done.
(XEN) HVM2:  - CPU1 ... 36-bit phys ... fixed MTRRs ... var MTRRs [2/8] ... done.
(XEN) HVM2: Testing HVM environment:
(XEN) HVM2:  - REP INSB across page boundaries ... passed
(XEN) HVM2:  - GS base MSRs and SWAPGS ... passed
(XEN) HVM2: Passed 2 of 2 tests
(XEN) HVM2: Writing SMBIOS tables ...
(XEN) HVM2: Loading SeaBIOS ...
(XEN) HVM2: Creating MP tables ...
(XEN) HVM2: Loading ACPI ...
(XEN) HVM2: vm86 TSS at fc00a080
(XEN) HVM2: BIOS map:
(XEN) HVM2:  10000-100d3: Scratch space
(XEN) HVM2:  e0000-fffff: Main BIOS
(XEN) HVM2: E820 table:
(XEN) HVM2:  [00]: 00000000:00000000 - 00000000:000a0000: RAM
(XEN) HVM2:  HOLE: 00000000:000a0000 - 00000000:000e0000
(XEN) HVM2:  [01]: 00000000:000e0000 - 00000000:00100000: RESERVED
(XEN) HVM2:  [02]: 00000000:00100000 - 00000000:7f800000: RAM
(XEN) HVM2:  HOLE: 00000000:7f800000 - 00000000:fc000000
(XEN) HVM2:  [03]: 00000000:fc000000 - 00000001:00000000: RESERVED
(XEN) HVM2: Invoking SeaBIOS ...
(XEN) stdvga.c:147:d2 entering stdvga and caching modes
(XEN) memory_map:add: dom2 gfn=f3000 mfn=f7a00 nr=80
(XEN) memory_map:remove: dom2 gfn=f3000 mfn=f7a00 nr=80
(XEN) stdvga.c:151:d2 leaving stdvga
(XEN) HVM3: HVM Loader
(XEN) HVM3: Detected Xen v4.2-unstable
(XEN) HVM3: Xenbus rings @0xfeffc000, event channel 4
(XEN) HVM3: System requested SeaBIOS
(XEN) HVM3: CPU speed is 3293 MHz
(XEN) irq.c:270: Dom3 PCI link 0 changed 0 -> 5
(XEN) HVM3: PCI-ISA link 0 routed to IRQ5
(XEN) irq.c:270: Dom3 PCI link 1 changed 0 -> 10
(XEN) HVM3: PCI-ISA link 1 routed to IRQ10
(XEN) irq.c:270: Dom3 PCI link 2 changed 0 -> 11
(XEN) HVM3: PCI-ISA link 2 routed to IRQ11
(XEN) irq.c:270: Dom3 PCI link 3 changed 0 -> 5
(XEN) HVM3: PCI-ISA link 3 routed to IRQ5
(XEN) HVM3: pci dev 01:3 INTA->IRQ10
(XEN) HVM3: pci dev 03:0 INTA->IRQ5
(XEN) HVM3: pci dev 04:0 INTA->IRQ5
(XEN) HVM3: pci dev 05:0 INTA->IRQ10
(XEN) HVM3: pci dev 02:0 bar 10 size 02000000: f0000008
(XEN) HVM3: pci dev 03:0 bar 14 size 01000000: f2000008
(XEN) HVM3: pci dev 05:0 bar 30 size 00080000: f3000000
(XEN) memory_map:add: dom3 gfn=f3080 mfn=f7a80 nr=40
(XEN) HVM3: pci dev 05:0 bar 1c size 00040000: f3080004
(XEN) HVM3: pci dev 02:0 bar 30 size 00010000: f30c0000
(XEN) HVM3: pci dev 04:0 bar 30 size 00010000: f30d0000
(XEN) memory_map:add: dom3 gfn=f30e0 mfn=f7ac0 nr=2
(XEN) memory_map:add: dom3 gfn=f30e3 mfn=f7ac3 nr=1
(XEN) HVM3: pci dev 05:0 bar 14 size 00004000: f30e0004
(XEN) HVM3: pci dev 02:0 bar 14 size 00001000: f30e4000
(XEN) HVM3: pci dev 03:0 bar 10 size 00000100: 0000c001
(XEN) HVM3: pci dev 04:0 bar 10 size 00000100: 0000c101
(XEN) HVM3: pci dev 04:0 bar 14 size 00000100: f30e5000
(XEN) HVM3: pci dev 05:0 bar 10 size 00000100: 0000c201
(XEN) ioport_map:add: dom3 gport=c200 mport=d000 nr=100
(XEN) HVM3: pci dev 01:1 bar 20 size 00000010: 0000c301
(XEN) HVM3: Multiprocessor initialisation:
(XEN) HVM3:  - CPU0 ... 36-bit phys ... fixed MTRRs ... var MTRRs [2/8] ... done.
(XEN) HVM3:  - CPU1 ... 36-bit phys ... fixed MTRRs ... var MTRRs [2/8] ... done.
(XEN) HVM3: Testing HVM environment:
(XEN) HVM3:  - REP INSB across page boundaries ... passed
(XEN) HVM3:  - GS base MSRs and SWAPGS ... passed
(XEN) HVM3: Passed 2 of 2 tests
(XEN) HVM3: Writing SMBIOS tables ...
(XEN) HVM3: Loading SeaBIOS ...
(XEN) HVM3: Creating MP tables ...
(XEN) HVM3: Loading ACPI ...
(XEN) HVM3: vm86 TSS at fc00a080
(XEN) HVM3: BIOS map:
(XEN) HVM3:  10000-100d3: Scratch space
(XEN) HVM3:  e0000-fffff: Main BIOS
(XEN) HVM3: E820 table:
(XEN) HVM3:  [00]: 00000000:00000000 - 00000000:000a0000: RAM
(XEN) HVM3:  HOLE: 00000000:000a0000 - 00000000:000e0000
(XEN) HVM3:  [01]: 00000000:000e0000 - 00000000:00100000: RESERVED
(XEN) HVM3:  [02]: 00000000:00100000 - 00000000:7f800000: RAM
(XEN) HVM3:  HOLE: 00000000:7f800000 - 00000000:fc000000
(XEN) HVM3:  [03]: 00000000:fc000000 - 00000001:00000000: RESERVED
(XEN) HVM3: Invoking SeaBIOS ...
(XEN) stdvga.c:147:d3 entering stdvga and caching modes
(XEN) memory_map:add: dom3 gfn=f3000 mfn=f7a00 nr=80
(XEN) memory_map:remove: dom3 gfn=f3000 mfn=f7a00 nr=80
(XEN) stdvga.c:151:d3 leaving stdvga
(XEN) stdvga.c:147:d3 entering stdvga and caching modes
(XEN) irq.c:375: Dom3 callback via changed to Direct Vector 0xf3
(XEN) memory_map:remove: dom3 gfn=f3080 mfn=f7a80 nr=40
(XEN) memory_map:remove: dom3 gfn=f30e0 mfn=f7ac0 nr=2
(XEN) memory_map:remove: dom3 gfn=f30e3 mfn=f7ac3 nr=1
(XEN) ioport_map:remove: dom3 gport=c200 mport=d000 nr=100
(XEN) memory_map:add: dom3 gfn=f3080 mfn=f7a80 nr=40
(XEN) memory_map:add: dom3 gfn=f30e0 mfn=f7ac0 nr=2
(XEN) memory_map:add: dom3 gfn=f30e3 mfn=f7ac3 nr=1
(XEN) ioport_map:add: dom3 gport=c200 mport=d000 nr=100
(XEN) memory_map:remove: dom3 gfn=f3080 mfn=f7a80 nr=40
(XEN) memory_map:remove: dom3 gfn=f30e0 mfn=f7ac0 nr=2
(XEN) memory_map:remove: dom3 gfn=f30e3 mfn=f7ac3 nr=1
(XEN) ioport_map:remove: dom3 gport=c200 mport=d000 nr=100
(XEN) memory_map:add: dom3 gfn=f3080 mfn=f7a80 nr=40
(XEN) memory_map:add: dom3 gfn=f30e0 mfn=f7ac0 nr=2
(XEN) memory_map:add: dom3 gfn=f30e3 mfn=f7ac3 nr=1
(XEN) ioport_map:add: dom3 gport=c200 mport=d000 nr=100
(XEN) memory_map:remove: dom3 gfn=f3080 mfn=f7a80 nr=40
(XEN) memory_map:remove: dom3 gfn=f30e0 mfn=f7ac0 nr=2
(XEN) memory_map:remove: dom3 gfn=f30e3 mfn=f7ac3 nr=1
(XEN) ioport_map:remove: dom3 gport=c200 mport=d000 nr=100
(XEN) memory_map:add: dom3 gfn=f3080 mfn=f7a80 nr=40
(XEN) memory_map:add: dom3 gfn=f30e0 mfn=f7ac0 nr=2
(XEN) memory_map:add: dom3 gfn=f30e3 mfn=f7ac3 nr=1
(XEN) ioport_map:add: dom3 gport=c200 mport=d000 nr=100
(XEN) memory_map:remove: dom3 gfn=f3080 mfn=f7a80 nr=40
(XEN) memory_map:remove: dom3 gfn=f30e0 mfn=f7ac0 nr=2
(XEN) memory_map:remove: dom3 gfn=f30e3 mfn=f7ac3 nr=1
(XEN) ioport_map:remove: dom3 gport=c200 mport=d000 nr=100
(XEN) memory_map:add: dom3 gfn=f3080 mfn=f7a80 nr=40
(XEN) memory_map:add: dom3 gfn=f30e0 mfn=f7ac0 nr=2
(XEN) memory_map:add: dom3 gfn=f30e3 mfn=f7ac3 nr=1
(XEN) ioport_map:add: dom3 gport=c200 mport=d000 nr=100
(XEN) memory_map:remove: dom3 gfn=f3080 mfn=f7a80 nr=40
(XEN) memory_map:remove: dom3 gfn=f30e0 mfn=f7ac0 nr=2
(XEN) memory_map:remove: dom3 gfn=f30e3 mfn=f7ac3 nr=1
(XEN) ioport_map:remove: dom3 gport=c200 mport=d000 nr=100
(XEN) memory_map:add: dom3 gfn=f3080 mfn=f7a80 nr=40
(XEN) memory_map:add: dom3 gfn=f30e0 mfn=f7ac0 nr=2
(XEN) memory_map:add: dom3 gfn=f30e3 mfn=f7ac3 nr=1
(XEN) ioport_map:add: dom3 gport=c200 mport=d000 nr=100
(XEN) irq.c:270: Dom3 PCI link 0 changed 5 -> 0
(XEN) irq.c:270: Dom3 PCI link 1 changed 10 -> 0
(XEN) irq.c:270: Dom3 PCI link 2 changed 11 -> 0
(XEN) irq.c:270: Dom3 PCI link 3 changed 5 -> 0
(XEN) DEBUG evtchn_bind_pirq 408 pirq=18 irq=0 emuirq=12
(XEN) DEBUG evtchn_bind_pirq 408 pirq=18 irq=0 emuirq=12
(XEN) DEBUG evtchn_bind_pirq 408 pirq=19 irq=0 emuirq=1
(XEN) DEBUG evtchn_bind_pirq 408 pirq=17 irq=0 emuirq=8
(XEN) DEBUG evtchn_bind_pirq 408 pirq=21 irq=0 emuirq=4
(XEN) DEBUG evtchn_bind_pirq 408 pirq=20 irq=0 emuirq=6
(XEN) DEBUG evtchn_bind_pirq 408 pirq=55 irq=-1 emuirq=-1
(XEN) DEBUG hvm_pci_msi_assert pirq=4 hvm_domain_use_pirq=0 emuirq=-1
(XEN) vmsi.c:108:d32767 Unsupported delivery mode 3
(XEN) DEBUG evtchn_bind_pirq 408 pirq=22 irq=0 emuirq=7
(XEN) DEBUG evtchn_bind_pirq 408 pirq=21 irq=0 emuirq=4
(XEN) HVM4: HVM Loader
(XEN) HVM4: Detected Xen v4.2-unstable
(XEN) HVM4: Xenbus rings @0xfeffc000, event channel 4
(XEN) HVM4: System requested SeaBIOS
(XEN) HVM4: CPU speed is 3293 MHz
(XEN) irq.c:270: Dom4 PCI link 0 changed 0 -> 5
(XEN) HVM4: PCI-ISA link 0 routed to IRQ5
(XEN) irq.c:270: Dom4 PCI link 1 changed 0 -> 10
(XEN) HVM4: PCI-ISA link 1 routed to IRQ10
(XEN) irq.c:270: Dom4 PCI link 2 changed 0 -> 11
(XEN) HVM4: PCI-ISA link 2 routed to IRQ11
(XEN) irq.c:270: Dom4 PCI link 3 changed 0 -> 5
(XEN) HVM4: PCI-ISA link 3 routed to IRQ5
(XEN) HVM4: pci dev 01:3 INTA->IRQ10
(XEN) HVM4: pci dev 03:0 INTA->IRQ5
(XEN) HVM4: pci dev 04:0 INTA->IRQ5
(XEN) HVM4: pci dev 05:0 INTA->IRQ10
(XEN) HVM4: pci dev 02:0 bar 10 size 02000000: f0000008
(XEN) HVM4: pci dev 03:0 bar 14 size 01000000: f2000008
(XEN) HVM4: pci dev 05:0 bar 30 size 00080000: f3000000
(XEN) memory_map:add: dom4 gfn=f3080 mfn=f7a80 nr=40
(XEN) HVM4: pci dev 05:0 bar 1c size 00040000: f3080004
(XEN) HVM4: pci dev 02:0 bar 30 size 00010000: f30c0000
(XEN) HVM4: pci dev 04:0 bar 30 size 00010000: f30d0000
(XEN) memory_map:add: dom4 gfn=f30e0 mfn=f7ac0 nr=2
(XEN) memory_map:add: dom4 gfn=f30e3 mfn=f7ac3 nr=1
(XEN) HVM4: pci dev 05:0 bar 14 size 00004000: f30e0004
(XEN) HVM4: pci dev 02:0 bar 14 size 00001000: f30e4000
(XEN) HVM4: pci dev 03:0 bar 10 size 00000100: 0000c001
(XEN) HVM4: pci dev 04:0 bar 10 size 00000100: 0000c101
(XEN) HVM4: pci dev 04:0 bar 14 size 00000100: f30e5000
(XEN) HVM4: pci dev 05:0 bar 10 size 00000100: 0000c201
(XEN) ioport_map:add: dom4 gport=c200 mport=d000 nr=100
(XEN) HVM4: pci dev 01:1 bar 20 size 00000010: 0000c301
(XEN) HVM4: Multiprocessor initialisation:
(XEN) HVM4:  - CPU0 ... 36-bit phys ... fixed MTRRs ... var MTRRs [2/8] ... done.
(XEN) HVM4:  - CPU1 ... 36-bit phys ... fixed MTRRs ... var MTRRs [2/8] ... done.
(XEN) HVM4: Testing HVM environment:
(XEN) HVM4:  - REP INSB across page boundaries ... passed
(XEN) HVM4:  - GS base MSRs and SWAPGS ... passed
(XEN) HVM4: Passed 2 of 2 tests
(XEN) HVM4: Writing SMBIOS tables ...
(XEN) HVM4: Loading SeaBIOS ...
(XEN) HVM4: Creating MP tables ...
(XEN) HVM4: Loading ACPI ...
(XEN) HVM4: vm86 TSS at fc00a080
(XEN) HVM4: BIOS map:
(XEN) HVM4:  10000-100d3: Scratch space
(XEN) HVM4:  e0000-fffff: Main BIOS
(XEN) HVM4: E820 table:
(XEN) HVM4:  [00]: 00000000:00000000 - 00000000:000a0000: RAM
(XEN) HVM4:  HOLE: 00000000:000a0000 - 00000000:000e0000
(XEN) HVM4:  [01]: 00000000:000e0000 - 00000000:00100000: RESERVED
(XEN) HVM4:  [02]: 00000000:00100000 - 00000000:7f800000: RAM
(XEN) HVM4:  HOLE: 00000000:7f800000 - 00000000:fc000000
(XEN) HVM4:  [03]: 00000000:fc000000 - 00000001:00000000: RESERVED
(XEN) HVM4: Invoking SeaBIOS ...
(XEN) stdvga.c:147:d4 entering stdvga and caching modes
(XEN) memory_map:add: dom4 gfn=f3000 mfn=f7a00 nr=80
(XEN) memory_map:remove: dom4 gfn=f3000 mfn=f7a00 nr=80
(XEN) stdvga.c:151:d4 leaving stdvga
(XEN) stdvga.c:147:d4 entering stdvga and caching modes
(XEN) irq.c:375: Dom4 callback via changed to Direct Vector 0xf3
(XEN) memory_map:remove: dom4 gfn=f3080 mfn=f7a80 nr=40
(XEN) memory_map:remove: dom4 gfn=f30e0 mfn=f7ac0 nr=2
(XEN) memory_map:remove: dom4 gfn=f30e3 mfn=f7ac3 nr=1
(XEN) ioport_map:remove: dom4 gport=c200 mport=d000 nr=100
(XEN) memory_map:add: dom4 gfn=f3080 mfn=f7a80 nr=40
(XEN) memory_map:add: dom4 gfn=f30e0 mfn=f7ac0 nr=2
(XEN) memory_map:add: dom4 gfn=f30e3 mfn=f7ac3 nr=1
(XEN) ioport_map:add: dom4 gport=c200 mport=d000 nr=100
(XEN) memory_map:remove: dom4 gfn=f3080 mfn=f7a80 nr=40
(XEN) memory_map:remove: dom4 gfn=f30e0 mfn=f7ac0 nr=2
(XEN) memory_map:remove: dom4 gfn=f30e3 mfn=f7ac3 nr=1
(XEN) ioport_map:remove: dom4 gport=c200 mport=d000 nr=100
(XEN) memory_map:add: dom4 gfn=f3080 mfn=f7a80 nr=40
(XEN) memory_map:add: dom4 gfn=f30e0 mfn=f7ac0 nr=2
(XEN) memory_map:add: dom4 gfn=f30e3 mfn=f7ac3 nr=1
(XEN) ioport_map:add: dom4 gport=c200 mport=d000 nr=100
(XEN) memory_map:remove: dom4 gfn=f3080 mfn=f7a80 nr=40
(XEN) memory_map:remove: dom4 gfn=f30e0 mfn=f7ac0 nr=2
(XEN) memory_map:remove: dom4 gfn=f30e3 mfn=f7ac3 nr=1
(XEN) ioport_map:remove: dom4 gport=c200 mport=d000 nr=100
(XEN) memory_map:add: dom4 gfn=f3080 mfn=f7a80 nr=40
(XEN) memory_map:add: dom4 gfn=f30e0 mfn=f7ac0 nr=2
(XEN) memory_map:add: dom4 gfn=f30e3 mfn=f7ac3 nr=1
(XEN) ioport_map:add: dom4 gport=c200 mport=d000 nr=100
(XEN) memory_map:remove: dom4 gfn=f3080 mfn=f7a80 nr=40
(XEN) memory_map:remove: dom4 gfn=f30e0 mfn=f7ac0 nr=2
(XEN) memory_map:remove: dom4 gfn=f30e3 mfn=f7ac3 nr=1
(XEN) ioport_map:remove: dom4 gport=c200 mport=d000 nr=100
(XEN) memory_map:add: dom4 gfn=f3080 mfn=f7a80 nr=40
(XEN) memory_map:add: dom4 gfn=f30e0 mfn=f7ac0 nr=2
(XEN) memory_map:add: dom4 gfn=f30e3 mfn=f7ac3 nr=1
(XEN) ioport_map:add: dom4 gport=c200 mport=d000 nr=100
(XEN) memory_map:remove: dom4 gfn=f3080 mfn=f7a80 nr=40
(XEN) memory_map:remove: dom4 gfn=f30e0 mfn=f7ac0 nr=2
(XEN) memory_map:remove: dom4 gfn=f30e3 mfn=f7ac3 nr=1
(XEN) ioport_map:remove: dom4 gport=c200 mport=d000 nr=100
(XEN) memory_map:add: dom4 gfn=f3080 mfn=f7a80 nr=40
(XEN) memory_map:add: dom4 gfn=f30e0 mfn=f7ac0 nr=2
(XEN) memory_map:add: dom4 gfn=f30e3 mfn=f7ac3 nr=1
(XEN) ioport_map:add: dom4 gport=c200 mport=d000 nr=100
(XEN) irq.c:270: Dom4 PCI link 0 changed 5 -> 0
(XEN) irq.c:270: Dom4 PCI link 1 changed 10 -> 0
(XEN) irq.c:270: Dom4 PCI link 2 changed 11 -> 0
(XEN) irq.c:270: Dom4 PCI link 3 changed 5 -> 0
(XEN) DEBUG evtchn_bind_pirq 408 pirq=18 irq=0 emuirq=12
(XEN) DEBUG evtchn_bind_pirq 408 pirq=18 irq=0 emuirq=12
(XEN) DEBUG evtchn_bind_pirq 408 pirq=19 irq=0 emuirq=1
(XEN) DEBUG evtchn_bind_pirq 408 pirq=17 irq=0 emuirq=8
(XEN) DEBUG evtchn_bind_pirq 408 pirq=21 irq=0 emuirq=4
(XEN) DEBUG evtchn_bind_pirq 408 pirq=20 irq=0 emuirq=6
(XEN) DEBUG evtchn_bind_pirq 408 pirq=55 irq=-1 emuirq=-1
(XEN) DEBUG hvm_pci_msi_assert pirq=4 hvm_domain_use_pirq=0 emuirq=-1
(XEN) vmsi.c:108:d32767 Unsupported delivery mode 3
(XEN) DEBUG evtchn_bind_pirq 408 pirq=22 irq=0 emuirq=7
(XEN) DEBUG evtchn_bind_pirq 408 pirq=21 irq=0 emuirq=4
(XEN) HVM5: HVM Loader
(XEN) HVM5: Detected Xen v4.2-unstable
(XEN) HVM5: Xenbus rings @0xfeffc000, event channel 4
(XEN) HVM5: System requested SeaBIOS
(XEN) HVM5: CPU speed is 3293 MHz
(XEN) irq.c:270: Dom5 PCI link 0 changed 0 -> 5
(XEN) HVM5: PCI-ISA link 0 routed to IRQ5
(XEN) irq.c:270: Dom5 PCI link 1 changed 0 -> 10
(XEN) HVM5: PCI-ISA link 1 routed to IRQ10
(XEN) irq.c:270: Dom5 PCI link 2 changed 0 -> 11
(XEN) HVM5: PCI-ISA link 2 routed to IRQ11
(XEN) irq.c:270: Dom5 PCI link 3 changed 0 -> 5
(XEN) HVM5: PCI-ISA link 3 routed to IRQ5
(XEN) HVM5: pci dev 01:3 INTA->IRQ10
(XEN) HVM5: pci dev 03:0 INTA->IRQ5
(XEN) HVM5: pci dev 04:0 INTA->IRQ5
(XEN) HVM5: pci dev 05:0 INTA->IRQ10
(XEN) HVM5: pci dev 02:0 bar 10 size 02000000: f0000008
(XEN) HVM5: pci dev 03:0 bar 14 size 01000000: f2000008
(XEN) HVM5
OiBwY2kgZGV2IDA1OjAgYmFyIDMwIHNpemUgMDAwODAwMDA6IGYzMDAwMDAwCihYRU4pIG1lbW9y
eV9tYXA6YWRkOiBkb201IGdmbj1mMzA4MCBtZm49ZjdhODAgbnI9NDAKKFhFTikgSFZNNTogcGNp
IGRldiAwNTowIGJhciAxYyBzaXplIDAwMDQwMDAwOiBmMzA4MDAwNAooWEVOKSBIVk01OiBwY2kg
ZGV2IDAyOjAgYmFyIDMwIHNpemUgMDAwMTAwMDA6IGYzMGMwMDAwCihYRU4pIEhWTTU6IHBjaSBk
ZXYgMDQ6MCBiYXIgMzAgc2l6ZSAwMDAxMDAwMDogZjMwZDAwMDAKKFhFTikgbWVtb3J5X21hcDph
ZGQ6IGRvbTUgZ2ZuPWYzMGUwIG1mbj1mN2FjMCBucj0yCihYRU4pIG1lbW9yeV9tYXA6YWRkOiBk
b201IGdmbj1mMzBlMyBtZm49ZjdhYzMgbnI9MQooWEVOKSBIVk01OiBwY2kgZGV2IDA1OjAgYmFy
IDE0IHNpemUgMDAwMDQwMDA6IGYzMGUwMDA0CihYRU4pIEhWTTU6IHBjaSBkZXYgMDI6MCBiYXIg
MTQgc2l6ZSAwMDAwMTAwMDogZjMwZTQwMDAKKFhFTikgSFZNNTogcGNpIGRldiAwMzowIGJhciAx
MCBzaXplIDAwMDAwMTAwOiAwMDAwYzAwMQooWEVOKSBIVk01OiBwY2kgZGV2IDA0OjAgYmFyIDEw
IHNpemUgMDAwMDAxMDA6IDAwMDBjMTAxCihYRU4pIEhWTTU6IHBjaSBkZXYgMDQ6MCBiYXIgMTQg
c2l6ZSAwMDAwMDEwMDogZjMwZTUwMDAKKFhFTikgSFZNNTogcGNpIGRldiAwNTowIGJhciAxMCBz
aXplIDAwMDAwMTAwOiAwMDAwYzIwMQooWEVOKSBpb3BvcnRfbWFwOmFkZDogZG9tNSBncG9ydD1j
MjAwIG1wb3J0PWQwMDAgbnI9MTAwCihYRU4pIEhWTTU6IHBjaSBkZXYgMDE6MSBiYXIgMjAgc2l6
ZSAwMDAwMDAxMDogMDAwMGMzMDEKKFhFTikgSFZNNTogTXVsdGlwcm9jZXNzb3IgaW5pdGlhbGlz
YXRpb246CihYRU4pIEhWTTU6ICAtIENQVTAgLi4uIDM2LWJpdCBwaHlzIC4uLiBmaXhlZCBNVFJS
cyAuLi4gdmFyIE1UUlJzIFsyLzhdIC4uLiBkb25lLgooWEVOKSBIVk01OiAgLSBDUFUxIC4uLiAz
Ni1iaXQgcGh5cyAuLi4gZml4ZWQgTVRSUnMgLi4uIHZhciBNVFJScyBbMi84XSAuLi4gZG9uZS4K
KFhFTikgSFZNNTogVGVzdGluZyBIVk0gZW52aXJvbm1lbnQ6CihYRU4pIEhWTTU6ICAtIFJFUCBJ
TlNCIGFjcm9zcyBwYWdlIGJvdW5kYXJpZXMgLi4uIHBhc3NlZAooWEVOKSBIVk01OiAgLSBHUyBi
YXNlIE1TUnMgYW5kIFNXQVBHUyAuLi4gcGFzc2VkCihYRU4pIEhWTTU6IFBhc3NlZCAyIG9mIDIg
dGVzdHMKKFhFTikgSFZNNTogV3JpdGluZyBTTUJJT1MgdGFibGVzIC4uLgooWEVOKSBIVk01OiBM
b2FkaW5nIFNlYUJJT1MgLi4uCihYRU4pIEhWTTU6IENyZWF0aW5nIE1QIHRhYmxlcyAuLi4KKFhF
TikgSFZNNTogTG9hZGluZyBBQ1BJIC4uLgooWEVOKSBIVk01OiB2bTg2IFRTUyBhdCBmYzAwYTA4
MAooWEVOKSBIVk01OiBCSU9TIG1hcDoKKFhFTikgSFZNNTogIDEwMDAwLTEwMGQzOiBTY3JhdGNo
IHNwYWNlCihYRU4pIEhWTTU6ICBlMDAwMC1mZmZmZjogTWFpbiBCSU9TCihYRU4pIEhWTTU6IEU4
MjAgdGFibGU6CihYRU4pIEhWTTU6ICBbMDBdOiAwMDAwMDAwMDowMDAwMDAwMCAtIDAwMDAwMDAw
OjAwMGEwMDAwOiBSQU0KKFhFTikgSFZNNTogIEhPTEU6IDAwMDAwMDAwOjAwMGEwMDAwIC0gMDAw
MDAwMDA6MDAwZTAwMDAKKFhFTikgSFZNNTogIFswMV06IDAwMDAwMDAwOjAwMGUwMDAwIC0gMDAw
MDAwMDA6MDAxMDAwMDA6IFJFU0VSVkVECihYRU4pIEhWTTU6ICBbMDJdOiAwMDAwMDAwMDowMDEw
MDAwMCAtIDAwMDAwMDAwOjdmODAwMDAwOiBSQU0KKFhFTikgSFZNNTogIEhPTEU6IDAwMDAwMDAw
OjdmODAwMDAwIC0gMDAwMDAwMDA6ZmMwMDAwMDAKKFhFTikgSFZNNTogIFswM106IDAwMDAwMDAw
OmZjMDAwMDAwIC0gMDAwMDAwMDE6MDAwMDAwMDA6IFJFU0VSVkVECihYRU4pIEhWTTU6IEludm9r
aW5nIFNlYUJJT1MgLi4uCihYRU4pIHN0ZHZnYS5jOjE0NzpkNSBlbnRlcmluZyBzdGR2Z2EgYW5k
IGNhY2hpbmcgbW9kZXMKKFhFTikgbWVtb3J5X21hcDphZGQ6IGRvbTUgZ2ZuPWYzMDAwIG1mbj1m
N2EwMCBucj04MAooWEVOKSBtZW1vcnlfbWFwOnJlbW92ZTogZG9tNSBnZm49ZjMwMDAgbWZuPWY3
YTAwIG5yPTgwCihYRU4pIHN0ZHZnYS5jOjE1MTpkNSBsZWF2aW5nIHN0ZHZnYQooWEVOKSBIVk02
OiBIVk0gTG9hZGVyCihYRU4pIEhWTTY6IERldGVjdGVkIFhlbiB2NC4yLXVuc3RhYmxlCihYRU4p
IEhWTTY6IFhlbmJ1cyByaW5ncyBAMHhmZWZmYzAwMCwgZXZlbnQgY2hhbm5lbCA0CihYRU4pIEhW
TTY6IFN5c3RlbSByZXF1ZXN0ZWQgU2VhQklPUwooWEVOKSBIVk02OiBDUFUgc3BlZWQgaXMgMzI5
MyBNSHoKKFhFTikgaXJxLmM6MjcwOiBEb202IFBDSSBsaW5rIDAgY2hhbmdlZCAwIC0+IDUKKFhF
TikgSFZNNjogUENJLUlTQSBsaW5rIDAgcm91dGVkIHRvIElSUTUKKFhFTikgaXJxLmM6MjcwOiBE
b202IFBDSSBsaW5rIDEgY2hhbmdlZCAwIC0+IDEwCihYRU4pIEhWTTY6IFBDSS1JU0EgbGluayAx
IHJvdXRlZCB0byBJUlExMAooWEVOKSBpcnEuYzoyNzA6IERvbTYgUENJIGxpbmsgMiBjaGFuZ2Vk
IDAgLT4gMTEKKFhFTikgSFZNNjogUENJLUlTQSBsaW5rIDIgcm91dGVkIHRvIElSUTExCihYRU4p
IGlycS5jOjI3MDogRG9tNiBQQ0kgbGluayAzIGNoYW5nZWQgMCAtPiA1CihYRU4pIEhWTTY6IFBD
SS1JU0EgbGluayAzIHJvdXRlZCB0byBJUlE1CihYRU4pIEhWTTY6IHBjaSBkZXYgMDE6MyBJTlRB
LT5JUlExMAooWEVOKSBIVk02OiBwY2kgZGV2IDAzOjAgSU5UQS0+SVJRNQooWEVOKSBIVk02OiBw
Y2kgZGV2IDA0OjAgSU5UQS0+SVJRNQooWEVOKSBIVk02OiBwY2kgZGV2IDA1OjAgSU5UQS0+SVJR
MTAKKFhFTikgSFZNNjogcGNpIGRldiAwMjowIGJhciAxMCBzaXplIDAyMDAwMDAwOiBmMDAwMDAw
OAooWEVOKSBIVk02OiBwY2kgZGV2IDAzOjAgYmFyIDE0IHNpemUgMDEwMDAwMDA6IGYyMDAwMDA4
CihYRU4pIEhWTTY6IHBjaSBkZXYgMDU6MCBiYXIgMzAgc2l6ZSAwMDA4MDAwMDogZjMwMDAwMDAK
KFhFTikgbWVtb3J5X21hcDphZGQ6IGRvbTYgZ2ZuPWYzMDgwIG1mbj1mN2E4MCBucj00MAooWEVO
KSBIVk02OiBwY2kgZGV2IDA1OjAgYmFyIDFjIHNpemUgMDAwNDAwMDA6IGYzMDgwMDA0CihYRU4p
IEhWTTY6IHBjaSBkZXYgMDI6MCBiYXIgMzAgc2l6ZSAwMDAxMDAwMDogZjMwYzAwMDAKKFhFTikg
SFZNNjogcGNpIGRldiAwNDowIGJhciAzMCBzaXplIDAwMDEwMDAwOiBmMzBkMDAwMAooWEVOKSBt
ZW1vcnlfbWFwOmFkZDogZG9tNiBnZm49ZjMwZTAgbWZuPWY3YWMwIG5yPTIKKFhFTikgbWVtb3J5
X21hcDphZGQ6IGRvbTYgZ2ZuPWYzMGUzIG1mbj1mN2FjMyBucj0xCihYRU4pIEhWTTY6IHBjaSBk
ZXYgMDU6MCBiYXIgMTQgc2l6ZSAwMDAwNDAwMDogZjMwZTAwMDQKKFhFTikgSFZNNjogcGNpIGRl
diAwMjowIGJhciAxNCBzaXplIDAwMDAxMDAwOiBmMzBlNDAwMAooWEVOKSBIVk02OiBwY2kgZGV2
IDAzOjAgYmFyIDEwIHNpemUgMDAwMDAxMDA6IDAwMDBjMDAxCihYRU4pIEhWTTY6IHBjaSBkZXYg
MDQ6MCBiYXIgMTAgc2l6ZSAwMDAwMDEwMDogMDAwMGMxMDEKKFhFTikgSFZNNjogcGNpIGRldiAw
NDowIGJhciAxNCBzaXplIDAwMDAwMTAwOiBmMzBlNTAwMAooWEVOKSBIVk02OiBwY2kgZGV2IDA1
OjAgYmFyIDEwIHNpemUgMDAwMDAxMDA6IDAwMDBjMjAxCihYRU4pIGlvcG9ydF9tYXA6YWRkOiBk
b202IGdwb3J0PWMyMDAgbXBvcnQ9ZDAwMCBucj0xMDAKKFhFTikgSFZNNjogcGNpIGRldiAwMTox
IGJhciAyMCBzaXplIDAwMDAwMDEwOiAwMDAwYzMwMQooWEVOKSBIVk02OiBNdWx0aXByb2Nlc3Nv
ciBpbml0aWFsaXNhdGlvbjoKKFhFTikgSFZNNjogIC0gQ1BVMCAuLi4gMzYtYml0IHBoeXMgLi4u
IGZpeGVkIE1UUlJzIC4uLiB2YXIgTVRSUnMgWzIvOF0gLi4uIGRvbmUuCihYRU4pIEhWTTY6ICAt
IENQVTEgLi4uIDM2LWJpdCBwaHlzIC4uLiBmaXhlZCBNVFJScyAuLi4gdmFyIE1UUlJzIFsyLzhd
IC4uLiBkb25lLgooWEVOKSBIVk02OiBUZXN0aW5nIEhWTSBlbnZpcm9ubWVudDoKKFhFTikgSFZN
NjogIC0gUkVQIElOU0IgYWNyb3NzIHBhZ2UgYm91bmRhcmllcyAuLi4gcGFzc2VkCihYRU4pIEhW
TTY6ICAtIEdTIGJhc2UgTVNScyBhbmQgU1dBUEdTIC4uLiBwYXNzZWQKKFhFTikgSFZNNjogUGFz
c2VkIDIgb2YgMiB0ZXN0cwooWEVOKSBIVk02OiBXcml0aW5nIFNNQklPUyB0YWJsZXMgLi4uCihY
RU4pIEhWTTY6IExvYWRpbmcgU2VhQklPUyAuLi4KKFhFTikgSFZNNjogQ3JlYXRpbmcgTVAgdGFi
bGVzIC4uLgooWEVOKSBIVk02OiBMb2FkaW5nIEFDUEkgLi4uCihYRU4pIEhWTTY6IHZtODYgVFNT
IGF0IGZjMDBhMDgwCihYRU4pIEhWTTY6IEJJT1MgbWFwOgooWEVOKSBIVk02OiAgMTAwMDAtMTAw
ZDM6IFNjcmF0Y2ggc3BhY2UKKFhFTikgSFZNNjogIGUwMDAwLWZmZmZmOiBNYWluIEJJT1MKKFhF
TikgSFZNNjogRTgyMCB0YWJsZToKKFhFTikgSFZNNjogIFswMF06IDAwMDAwMDAwOjAwMDAwMDAw
IC0gMDAwMDAwMDA6MDAwYTAwMDA6IFJBTQooWEVOKSBIVk02OiAgSE9MRTogMDAwMDAwMDA6MDAw
YTAwMDAgLSAwMDAwMDAwMDowMDBlMDAwMAooWEVOKSBIVk02OiAgWzAxXTogMDAwMDAwMDA6MDAw
ZTAwMDAgLSAwMDAwMDAwMDowMDEwMDAwMDogUkVTRVJWRUQKKFhFTikgSFZNNjogIFswMl06IDAw
MDAwMDAwOjAwMTAwMDAwIC0gMDAwMDAwMDA6N2Y4MDAwMDA6IFJBTQooWEVOKSBIVk02OiAgSE9M
RTogMDAwMDAwMDA6N2Y4MDAwMDAgLSAwMDAwMDAwMDpmYzAwMDAwMAooWEVOKSBIVk02OiAgWzAz
XTogMDAwMDAwMDA6ZmMwMDAwMDAgLSAwMDAwMDAwMTowMDAwMDAwMDogUkVTRVJWRUQKKFhFTikg
SFZNNjogSW52b2tpbmcgU2VhQklPUyAuLi4KKFhFTikgc3RkdmdhLmM6MTQ3OmQ2IGVudGVyaW5n
IHN0ZHZnYSBhbmQgY2FjaGluZyBtb2RlcwooWEVOKSBtZW1vcnlfbWFwOmFkZDogZG9tNiBnZm49
ZjMwMDAgbWZuPWY3YTAwIG5yPTgwCihYRU4pIG1lbW9yeV9tYXA6cmVtb3ZlOiBkb202IGdmbj1m
MzAwMCBtZm49ZjdhMDAgbnI9ODAKKFhFTikgc3RkdmdhLmM6MTUxOmQ2IGxlYXZpbmcgc3Rkdmdh
CihYRU4pIHN0ZHZnYS5jOjE0NzpkNiBlbnRlcmluZyBzdGR2Z2EgYW5kIGNhY2hpbmcgbW9kZXMK
KFhFTikgaXJxLmM6Mzc1OiBEb202IGNhbGxiYWNrIHZpYSBjaGFuZ2VkIHRvIERpcmVjdCBWZWN0
b3IgMHhmMwooWEVOKSBtZW1vcnlfbWFwOnJlbW92ZTogZG9tNiBnZm49ZjMwODAgbWZuPWY3YTgw
IG5yPTQwCihYRU4pIG1lbW9yeV9tYXA6cmVtb3ZlOiBkb202IGdmbj1mMzBlMCBtZm49ZjdhYzAg
bnI9MgooWEVOKSBtZW1vcnlfbWFwOnJlbW92ZTogZG9tNiBnZm49ZjMwZTMgbWZuPWY3YWMzIG5y
PTEKKFhFTikgaW9wb3J0X21hcDpyZW1vdmU6IGRvbTYgZ3BvcnQ9YzIwMCBtcG9ydD1kMDAwIG5y
PTEwMAooWEVOKSBtZW1vcnlfbWFwOmFkZDogZG9tNiBnZm49ZjMwODAgbWZuPWY3YTgwIG5yPTQw
CihYRU4pIG1lbW9yeV9tYXA6YWRkOiBkb202IGdmbj1mMzBlMCBtZm49ZjdhYzAgbnI9MgooWEVO
KSBtZW1vcnlfbWFwOmFkZDogZG9tNiBnZm49ZjMwZTMgbWZuPWY3YWMzIG5yPTEKKFhFTikgaW9w
b3J0X21hcDphZGQ6IGRvbTYgZ3BvcnQ9YzIwMCBtcG9ydD1kMDAwIG5yPTEwMAooWEVOKSBtZW1v
cnlfbWFwOnJlbW92ZTogZG9tNiBnZm49ZjMwODAgbWZuPWY3YTgwIG5yPTQwCihYRU4pIG1lbW9y
eV9tYXA6cmVtb3ZlOiBkb202IGdmbj1mMzBlMCBtZm49ZjdhYzAgbnI9MgooWEVOKSBtZW1vcnlf
bWFwOnJlbW92ZTogZG9tNiBnZm49ZjMwZTMgbWZuPWY3YWMzIG5yPTEKKFhFTikgaW9wb3J0X21h
cDpyZW1vdmU6IGRvbTYgZ3BvcnQ9YzIwMCBtcG9ydD1kMDAwIG5yPTEwMAooWEVOKSBtZW1vcnlf
bWFwOmFkZDogZG9tNiBnZm49ZjMwODAgbWZuPWY3YTgwIG5yPTQwCihYRU4pIG1lbW9yeV9tYXA6
YWRkOiBkb202IGdmbj1mMzBlMCBtZm49ZjdhYzAgbnI9MgooWEVOKSBtZW1vcnlfbWFwOmFkZDog
ZG9tNiBnZm49ZjMwZTMgbWZuPWY3YWMzIG5yPTEKKFhFTikgaW9wb3J0X21hcDphZGQ6IGRvbTYg
Z3BvcnQ9YzIwMCBtcG9ydD1kMDAwIG5yPTEwMAooWEVOKSBtZW1vcnlfbWFwOnJlbW92ZTogZG9t
NiBnZm49ZjMwODAgbWZuPWY3YTgwIG5yPTQwCihYRU4pIG1lbW9yeV9tYXA6cmVtb3ZlOiBkb202
IGdmbj1mMzBlMCBtZm49ZjdhYzAgbnI9MgooWEVOKSBtZW1vcnlfbWFwOnJlbW92ZTogZG9tNiBn
Zm49ZjMwZTMgbWZuPWY3YWMzIG5yPTEKKFhFTikgaW9wb3J0X21hcDpyZW1vdmU6IGRvbTYgZ3Bv
cnQ9YzIwMCBtcG9ydD1kMDAwIG5yPTEwMAooWEVOKSBtZW1vcnlfbWFwOmFkZDogZG9tNiBnZm49
ZjMwODAgbWZuPWY3YTgwIG5yPTQwCihYRU4pIG1lbW9yeV9tYXA6YWRkOiBkb202IGdmbj1mMzBl
MCBtZm49ZjdhYzAgbnI9MgooWEVOKSBtZW1vcnlfbWFwOmFkZDogZG9tNiBnZm49ZjMwZTMgbWZu
PWY3YWMzIG5yPTEKKFhFTikgaW9wb3J0X21hcDphZGQ6IGRvbTYgZ3BvcnQ9YzIwMCBtcG9ydD1k
MDAwIG5yPTEwMAooWEVOKSBtZW1vcnlfbWFwOnJlbW92ZTogZG9tNiBnZm49ZjMwODAgbWZuPWY3
YTgwIG5yPTQwCihYRU4pIG1lbW9yeV9tYXA6cmVtb3ZlOiBkb202IGdmbj1mMzBlMCBtZm49Zjdh
YzAgbnI9MgooWEVOKSBtZW1vcnlfbWFwOnJlbW92ZTogZG9tNiBnZm49ZjMwZTMgbWZuPWY3YWMz
IG5yPTEKKFhFTikgaW9wb3J0X21hcDpyZW1vdmU6IGRvbTYgZ3BvcnQ9YzIwMCBtcG9ydD1kMDAw
IG5yPTEwMAooWEVOKSBtZW1vcnlfbWFwOmFkZDogZG9tNiBnZm49ZjMwODAgbWZuPWY3YTgwIG5y
PTQwCihYRU4pIG1lbW9yeV9tYXA6YWRkOiBkb202IGdmbj1mMzBlMCBtZm49ZjdhYzAgbnI9Mgoo
WEVOKSBtZW1vcnlfbWFwOmFkZDogZG9tNiBnZm49ZjMwZTMgbWZuPWY3YWMzIG5yPTEKKFhFTikg
aW9wb3J0X21hcDphZGQ6IGRvbTYgZ3BvcnQ9YzIwMCBtcG9ydD1kMDAwIG5yPTEwMAooWEVOKSBt
ZW1vcnlfbWFwOnJlbW92ZTogZG9tNiBnZm49ZjMwODAgbWZuPWY3YTgwIG5yPTQwCihYRU4pIG1l
bW9yeV9tYXA6cmVtb3ZlOiBkb202IGdmbj1mMzBlMCBtZm49ZjdhYzAgbnI9MgooWEVOKSBtZW1v
cnlfbWFwOnJlbW92ZTogZG9tNiBnZm49ZjMwZTMgbWZuPWY3YWMzIG5yPTEKKFhFTikgaW9wb3J0
X21hcDpyZW1vdmU6IGRvbTYgZ3BvcnQ9YzIwMCBtcG9ydD1kMDAwIG5yPTEwMAooWEVOKSBtZW1v
cnlfbWFwOmFkZDogZG9tNiBnZm49ZjMwODAgbWZuPWY3YTgwIG5yPTQwCihYRU4pIG1lbW9yeV9t
YXA6YWRkOiBkb202IGdmbj1mMzBlMCBtZm49ZjdhYzAgbnI9MgooWEVOKSBtZW1vcnlfbWFwOmFk
ZDogZG9tNiBnZm49ZjMwZTMgbWZuPWY3YWMzIG5yPTEKKFhFTikgaW9wb3J0X21hcDphZGQ6IGRv
bTYgZ3BvcnQ9YzIwMCBtcG9ydD1kMDAwIG5yPTEwMAooWEVOKSBpcnEuYzoyNzA6IERvbTYgUENJ
IGxpbmsgMCBjaGFuZ2VkIDUgLT4gMAooWEVOKSBpcnEuYzoyNzA6IERvbTYgUENJIGxpbmsgMSBj
aGFuZ2VkIDEwIC0+IDAKKFhFTikgaXJxLmM6MjcwOiBEb202IFBDSSBsaW5rIDIgY2hhbmdlZCAx
MSAtPiAwCihYRU4pIGlycS5jOjI3MDogRG9tNiBQQ0kgbGluayAzIGNoYW5nZWQgNSAtPiAwCihY
RU4pIERFQlVHIGV2dGNobl9iaW5kX3BpcnEgNDA4IHBpcnE9MTggaXJxPTAgZW11aXJxPTEyCihY
RU4pIERFQlVHIGV2dGNobl9iaW5kX3BpcnEgNDA4IHBpcnE9MTggaXJxPTAgZW11aXJxPTEyCihY
RU4pIERFQlVHIGV2dGNobl9iaW5kX3BpcnEgNDA4IHBpcnE9MTkgaXJxPTAgZW11aXJxPTEKKFhF
TikgREVCVUcgZXZ0Y2huX2JpbmRfcGlycSA0MDggcGlycT0xNyBpcnE9MCBlbXVpcnE9OAooWEVO
KSBERUJVRyBldnRjaG5fYmluZF9waXJxIDQwOCBwaXJxPTIxIGlycT0wIGVtdWlycT00CihYRU4p
IERFQlVHIGV2dGNobl9iaW5kX3BpcnEgNDA4IHBpcnE9MjAgaXJxPTAgZW11aXJxPTYKKFhFTikg
REVCVUcgZXZ0Y2huX2JpbmRfcGlycSA0MDggcGlycT01NSBpcnE9LTEgZW11aXJxPS0xCihYRU4p
IERFQlVHIGh2bV9wY2lfbXNpX2Fzc2VydCBwaXJxPTQgaHZtX2RvbWFpbl91c2VfcGlycT0wIGVt
dWlycT0tMQooWEVOKSB2bXNpLmM6MTA4OmQzMjc2NyBVbnN1cHBvcnRlZCBkZWxpdmVyeSBtb2Rl
IDMKKFhFTikgREVCVUcgZXZ0Y2huX2JpbmRfcGlycSA0MDggcGlycT0yMiBpcnE9MCBlbXVpcnE9
NwooWEVOKSBERUJVRyBldnRjaG5fYmluZF9waXJxIDQwOCBwaXJxPTIxIGlycT0wIGVtdWlycT00
Cg==
--bcaec53d5ed7de620704c637f31e
Content-Type: application/octet-stream; name="qemu-dm-solaris.log"
Content-Disposition: attachment; filename="qemu-dm-solaris.log"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_h5cpgkmt4

Y2hhciBkZXZpY2UgcmVkaXJlY3RlZCB0byAvZGV2L3B0cy8yCnhjOiBlcnJvcjogbGludXhfZ250
dGFiX3NldF9tYXhfZ3JhbnRzOiBpb2N0bCBTRVRfTUFYX0dSQU5UUyBmYWlsZWQgKDIyID0gSW52
YWxpZCBhcmd1bWVudCk6IEludGVybmFsIGVycm9yCnhlbiBiZTogcWRpc2stNzY4OiB4Y19nbnR0
YWJfc2V0X21heF9ncmFudHMgZmFpbGVkOiBJbnZhbGlkIGFyZ3VtZW50CnhjOiBlcnJvcjogbGlu
dXhfZ250dGFiX3NldF9tYXhfZ3JhbnRzOiBpb2N0bCBTRVRfTUFYX0dSQU5UUyBmYWlsZWQgKDIy
ID0gSW52YWxpZCBhcmd1bWVudCk6IEludGVybmFsIGVycm9yCnhlbiBiZTogcWRpc2stNTYzMjog
eGNfZ250dGFiX3NldF9tYXhfZ3JhbnRzIGZhaWxlZDogSW52YWxpZCBhcmd1bWVudApbMDA6MDUu
MF0geGVuX3B0X2luaXRmbjogQXNzaWduaW5nIHJlYWwgcGh5c2ljYWwgZGV2aWNlIDAyOjAwLjAg
dG8gZGV2Zm4gMHgyOApbMDA6MDUuMF0geGVuX3B0X3JlZ2lzdGVyX3JlZ2lvbnM6IElPIHJlZ2lv
biAwIHJlZ2lzdGVyZWQgKHNpemU9MHgwMDAwMDEwMCBiYXNlX2FkZHI9MHgwMDAwZDAwMCB0eXBl
OiAweDEpClswMDowNS4wXSB4ZW5fcHRfcmVnaXN0ZXJfcmVnaW9uczogSU8gcmVnaW9uIDEgcmVn
aXN0ZXJlZCAoc2l6ZT0weDAwMDA0MDAwIGJhc2VfYWRkcj0weGY3YWMwMDAwIHR5cGU6IDApClsw
MDowNS4wXSB4ZW5fcHRfcmVnaXN0ZXJfcmVnaW9uczogSU8gcmVnaW9uIDMgcmVnaXN0ZXJlZCAo
c2l6ZT0weDAwMDQwMDAwIGJhc2VfYWRkcj0weGY3YTgwMDAwIHR5cGU6IDApClswMDowNS4wXSB4
ZW5fcHRfcmVnaXN0ZXJfcmVnaW9uczogRXhwYW5zaW9uIFJPTSByZWdpc3RlcmVkIChzaXplPTB4
MDAwODAwMDAgYmFzZV9hZGRyPTB4ZjdhMDAwMDApClswMDowNS4wXSB4ZW5fcHRfbXNpeF9pbml0
OiBnZXQgTVNJLVggdGFibGUgQkFSIGJhc2UgMHhmN2FjMDAwMApbMDA6MDUuMF0geGVuX3B0X21z
aXhfaW5pdDogdGFibGVfb2ZmID0gMHgyMDAwLCB0b3RhbF9lbnRyaWVzID0gMTUKWzAwOjA1LjBd
IHhlbl9wdF9tc2l4X2luaXQ6IG1hcHBpbmcgcGh5c2ljYWwgTVNJLVggdGFibGUgdG8gMHg3ZmM1
Mjg0ODkwMDAKWzAwOjA1LjBdIHhlbl9wdF9wY2lfaW50eDogaW50eD0xClswMDowNS4wXSB4ZW5f
cHRfaW5pdGZuOiBSZWFsIHBoeXNpY2FsIGRldmljZSAwMjowMC4wIHJlZ2lzdGVyZWQgc3VjY2Vz
c2Z1bHkhCg==
--bcaec53d5ed7de620704c637f31e
Content-Type: application/octet-stream; name="solaris_guest_boot.log"
Content-Disposition: attachment; filename="solaris_guest_boot.log"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_h5cpgkmv5

G2Ntb2R1bGUgL3BsYXRmb3JtL2k4NnBjL2tlcm5lbC9hbWQ2NC91bml4OiB0ZXh0IGF0IFsweGZm
ZmZmZmZmZmI4MDAwMDAsIDB4ZmZmZmZmZmZmYjk1ZDA2M10gZGF0YSBhdCAweGZmZmZmZmZmZmJj
MDAwMDANCm1vZHVsZSAva2VybmVsL2FtZDY0L2dlbnVuaXg6IHRleHQgYXQgWzB4ZmZmZmZmZmZm
Yjk1ZDA2OCwgMHhmZmZmZmZmZmZiYmUzNjg3XSBkYXRhIGF0IDB4ZmZmZmZmZmZmYmNhMmY4MA0K
TG9hZGluZyBrbWRiLi4uDQptb2R1bGUgL2tlcm5lbC9taXNjL2FtZDY0L2ttZGJtb2Q6IHRleHQg
YXQgWzB4ZmZmZmZmZmZmYmQxM2RjMCwgMHhmZmZmZmZmZmZiZGMzNjE3XSBkYXRhIGF0IDB4ZmZm
ZmZmZmZmYmRjMzYyMA0KbW9kdWxlIC9rZXJuZWwvbWlzYy9hbWQ2NC9jdGY6IHRleHQgYXQgWzB4
ZmZmZmZmZmZmYmJlMzY4OCwgMHhmZmZmZmZmZmZiYmVkNDBmXSBkYXRhIGF0IDB4ZmZmZmZmZmZm
YmRkZTkzMA0KDQ1TdW5PUyBSZWxlYXNlIDUuMTEgVmVyc2lvbiAxMS4wIDY0LWJpdA0NDQpDb3B5
cmlnaHQgKGMpIDE5ODMsIDIwMTEsIE9yYWNsZSBhbmQvb3IgaXRzIGFmZmlsaWF0ZXMuIEFsbCBy
aWdodHMgcmVzZXJ2ZWQuDQ0NCng4Nl9mZWF0dXJlOiBsZ3BnDQ0NCng4Nl9mZWF0dXJlOiB0c2MN
DQ0KeDg2X2ZlYXR1cmU6IG1zcg0NDQp4ODZfZmVhdHVyZTogbXRycg0NDQp4ODZfZmVhdHVyZTog
cGdlDQ0NCng4Nl9mZWF0dXJlOiBkZQ0NDQp4ODZfZmVhdHVyZTogY21vdg0NDQp4ODZfZmVhdHVy
ZTogbW14DQ0NCng4Nl9mZWF0dXJlOiBtY2ENDQ0KeDg2X2ZlYXR1cmU6IHBhZQ0NDQp4ODZfZmVh
dHVyZTogY3Y4DQ0NCng4Nl9mZWF0dXJlOiBwYXQNDQ0KeDg2X2ZlYXR1cmU6IHNlcA0NDQp4ODZf
ZmVhdHVyZTogc3NlDQ0NCng4Nl9mZWF0dXJlOiBzc2UyDQ0NCng4Nl9mZWF0dXJlOiBodHQNDQ0K
eDg2X2ZlYXR1cmU6IGFzeXNjDQ0NCng4Nl9mZWF0dXJlOiBueA0NDQp4ODZfZmVhdHVyZTogc3Nl
Mw0NDQp4ODZfZmVhdHVyZTogY3gxNg0NDQp4ODZfZmVhdHVyZTogY21wDQ0NCng4Nl9mZWF0dXJl
OiB0c2NwDQ0NCng4Nl9mZWF0dXJlOiBjcHVpZA0NDQp4ODZfZmVhdHVyZTogc3NzZTMNDQ0KeDg2
X2ZlYXR1cmU6IHNzZTRfMQ0NDQp4ODZfZmVhdHVyZTogc3NlNF8yDQ0NCng4Nl9mZWF0dXJlOiBj
bGZzaA0NDQp4ODZfZmVhdHVyZTogNjQNDQ0KeDg2X2ZlYXR1cmU6IGFlcw0NDQp4ODZfZmVhdHVy
ZTogcGNsbXVscWRxDQ0NCm1lbSA9IDIwODg1NjRLICgweDdmNzlkMDAwKQ0NDQpVc2luZyBkZWZh
dWx0IGRldmljZSBpbnN0YW5jZSBkYXRhDQ0NClNNQklPUyB2Mi40IGxvYWRlZCAoMzUzIGJ5dGVz
KXJvb3QgbmV4dXMgPSBpODZwYw0NDQpwc2V1ZG8wIGF0IHJvb3QNDQ0KcHNldWRvMCBpcyAvcHNl
dWRvDQ0NCnNjc2lfdmhjaTAgYXQgcm9vdA0NDQpzY3NpX3ZoY2kwIGlzIC9zY3NpX3ZoY2kNDQ0K
bnBlMCBhdCByb290OiBzcGFjZSAwIG9mZnNldCAwDQ0NCm5wZTAgaXMgL3BjaUAwLDANDQ0KdHJh
cDogVW5rbm93biB0cmFwIHR5cGUgOCBpbiB1c2VyIG1vZGUNDQ0KDQ0NCg0NcGFuaWNbY3B1MF0v
dGhyZWFkPWZmZmZmZmZmZmJjMzZkZTA6IEJBRCBUUkFQOiB0eXBlPWQgKCNncCBHZW5lcmFsIHBy
b3RlY3Rpb24pIHJwPWZmZmZmZmZmZmJjNDg2NjAgYWRkcj1mMDAwZmY1M2YwMDBmZjAwDQ0NCg0N
DQojZ3AgR2VuZXJhbCBwcm90ZWN0aW9uDQ0NCmFkZHI9MHhmMDAwZmY1M2YwMDBmZjAwDQ0NCnBp
ZD0wLCBwYz0weGZmZmZmZmZmZmI4NjVmMWQsIHNwPTB4ZmZmZmZmZmZmYmM0ODc1OCwgZWZsYWdz
PTB4MTAyODYNDQ0KY3IwOiA4MDA1MDAzYjxwZyx3cCxuZSxldCx0cyxtcCxwZT4gY3I0OiA2Yjg8
eG1tZSxmeHNyLHBnZSxwYWUscHNlLGRlPg0NDQpjcjI6IDBjcjM6IGY4ZTYwMDBjcjg6IDANDQ0K
DQ0NCiAgICAgICAgcmRpOiAgICAgICAgICAgICAgICAwIHJzaTogICAgICAgICAgICAgICAgMSBy
ZHg6ICAgICAgICAgICAgICAgNDANDQ0KICAgICAgICByY3g6ICAgICAgICAgICAgICAgIDIgIHI4
OiBmZmZmZmZmZmZiYzQ4ODcwICByOTogICAgICAgICAgICAgICAgMA0NDQogICAgICAgIHJheDog
ZmZmZmZmZmZmYmMzNmRlMCByYng6ICAgICAgICAgICAgICAgIDAgcmJwOiBmZmZmZmZmZmZiYzQ4
N2IwDQ0NCiAgICAgICAgcjEwOiBmZmZmZmZmZmZiODViN2Q4IHIxMTogZjAwMGZmNTNmMDAwZmYw
MCByMTI6ICAgICAgICAgICAgICAgIDANDQ0KICAgICAgICByMTM6IGYwMDBmZjUzZjAwMGZmMDAg
cjE0OiAgICAgICAgICAgICAgICAxIHIxNTogZjAwMGZmNTNmMDAwZmYwMA0NDQogICAgICAgIGZz
YjogICAgICAgIDIwMDAwMDAwMCBnc2I6IGZmZmZmZmZmZmJjM2ViYzAgIGRzOiAgICAgICAgICAg
ICAgICAwDQ0NCiAgICAgICAgIGVzOiAgICAgICAgICAgICAgICAwICBmczogICAgICAgICAgICAg
ICAgMCAgZ3M6ICAgICAgICAgICAgICAgIDANDQ0KICAgICAgICB0cnA6ICAgICAgICAgICAgICAg
IGQgZXJyOiAgICAgICAgICAgICAgICAwIHJpcDogZmZmZmZmZmZmYjg2NWYxZA0NDQogICAgICAg
ICBjczogICAgICAgICAgICAgICAzMCByZmw6ICAgICAgICAgICAgMTAyODYgcnNwOiBmZmZmZmZm
ZmZiYzQ4NzU4DQ0NCiAgICAgICAgIHNzOiAgICAgICAgICAgICAgIDM4DQ0NCg0NDQpXYXJuaW5n
IC0gc3RhY2sgbm90IHdyaXR0ZW4gdG8gdGhlIGR1bXAgYnVmZmVyDQ0NCmZmZmZmZmZmZmJjNDg1
ODAgdW5peDpkaWUrMTMxICgpDQ0NCmZmZmZmZmZmZmJjNDg2NTAgdW5peDp0cmFwKzNiMiAoKQ0N
DQpmZmZmZmZmZmZiYzQ4NjYwIHVuaXg6Y21udHJhcCtlNiAoKQ0NDQpmZmZmZmZmZmZiYzQ4N2Iw
IHVuaXg6bXV0ZXhfb3duZXJfcnVubmluZytkICgpDQ0NCmZmZmZmZmZmZmJjNDg4NDAgZ2VudW5p
eDpkdW1wX29uZV9jb3JlKzZiICgpDQ0NCmZmZmZmZmZmZmJjNDg4ZTAgZ2VudW5peDpjb3JlKzQx
OSAoKQ0NDQpmZmZmZmZmZmZiYzQ4YTEwIHVuaXg6a2Vybl9ncGZhdWx0KzE4OCAoKQ0NDQpmZmZm
ZmZmZmZiYzQ4YWUwIHVuaXg6dHJhcCszOTMgKCkNDQ0KZmZmZmZmZmZmYmM0OGFmMCB1bml4OmNt
bnRyYXArZTYgKCkNDQ0KDQ0NCnBhbmljOiBlbnRlcmluZyBkZWJ1Z2dlciAobm8gZHVtcCBkZXZp
Y2UsIGNvbnRpbnVlIHRvIHJlYm9vdCkNDQ0KDQpXZWxjb21lIHRvIGttZGINCmttZGI6IHVuYWJs
ZSB0byBkZXRlcm1pbmUgdGVybWluYWwgdHlwZTogYXNzdW1pbmcgYHZ0MTAwJw0KGyhCGykwTG9h
ZGVkIG1vZHVsZXM6IFsgc2NzaV92aGNpIG1hYyB1cHBjIHVuaXgga3J0bGQgYXBpeCBnZW51bml4
IHNwZWNmcyBwY3BsdXNtcCBdDQpbMF0+IA==
--bcaec53d5ed7de620704c637f31e
Content-Type: application/octet-stream; name="solaris_guest_xl_dmesg.log"
Content-Disposition: attachment; filename="solaris_guest_xl_dmesg.log"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_h5cpgkmw6

IF9fICBfXyAgICAgICAgICAgIF8gIF8gICAgX19fXyAgICAgICAgICAgICAgICAgICAgIF8gICAg
ICAgIF8gICAgIF8gICAgICAKIFwgXC8gL19fXyBfIF9fICAgfCB8fCB8ICB8X19fIFwgICAgXyAg
IF8gXyBfXyAgX19ffCB8XyBfXyBffCB8X18gfCB8IF9fXyAKICBcICAvLyBfIFwgJ18gXCAgfCB8
fCB8XyAgIF9fKSB8X198IHwgfCB8ICdfIFwvIF9ffCBfXy8gX2AgfCAnXyBcfCB8LyBfIFwKICAv
ICBcICBfXy8gfCB8IHwgfF9fICAgX3wgLyBfXy98X198IHxffCB8IHwgfCBcX18gXCB8fCAoX3wg
fCB8XykgfCB8ICBfXy8KIC9fL1xfXF9fX3xffCB8X3wgICAgfF98KF8pX19fX198ICAgXF9fLF98
X3wgfF98X19fL1xfX1xfXyxffF8uX18vfF98XF9fX3wKICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAKKFhFTikg
WGVuIHZlcnNpb24gNC4yLXVuc3RhYmxlIChkZXJpY2tzb0Boc2QxLmNhLmNvbWNhc3QubmV0KSAo
Z2NjIHZlcnNpb24gNC42LjMgKFVidW50dS9MaW5hcm8gNC42LjMtMXVidW50dTUpICkgVHVlIEp1
bCAzMSAwODo0NzowNCBQRFQgMjAxMgooWEVOKSBMYXRlc3QgQ2hhbmdlU2V0OiBGcmkgSnVsIDI3
IDEyOjIyOjEzIDIwMTIgKzAyMDAgMjU2ODg6ZTYyNjZmYzc2ZDA4CihYRU4pIEJvb3Rsb2FkZXI6
IEdSVUIgMS45OS0yMXVidW50dTMuMQooWEVOKSBDb21tYW5kIGxpbmU6IHBsYWNlaG9sZGVyIGRv
bTBfbWVtPTQwOTZNIHhzYXZlPTAKKFhFTikgVmlkZW8gaW5mb3JtYXRpb246CihYRU4pICBWR0Eg
aXMgdGV4dCBtb2RlIDgweDI1LCBmb250IDh4MTYKKFhFTikgIFZCRS9EREMgbWV0aG9kczogVjI7
IEVESUQgdHJhbnNmZXIgdGltZTogMSBzZWNvbmRzCihYRU4pIERpc2MgaW5mb3JtYXRpb246CihY
RU4pICBGb3VuZCAxIE1CUiBzaWduYXR1cmVzCihYRU4pICBGb3VuZCAyIEVERCBpbmZvcm1hdGlv
biBzdHJ1Y3R1cmVzCihYRU4pIFhlbi1lODIwIFJBTSBtYXA6CihYRU4pICAwMDAwMDAwMDAwMDAw
MDAwIC0gMDAwMDAwMDAwMDA5YzgwMCAodXNhYmxlKQooWEVOKSAgMDAwMDAwMDAwMDA5YzgwMCAt
IDAwMDAwMDAwMDAwYTAwMDAgKHJlc2VydmVkKQooWEVOKSAgMDAwMDAwMDAwMDBlMDAwMCAtIDAw
MDAwMDAwMDAxMDAwMDAgKHJlc2VydmVkKQooWEVOKSAgMDAwMDAwMDAwMDEwMDAwMCAtIDAwMDAw
MDAwZGRkMDcwMDAgKHVzYWJsZSkKKFhFTikgIDAwMDAwMDAwZGRkMDcwMDAgLSAwMDAwMDAwMGRk
ZGJiMDAwIChyZXNlcnZlZCkKKFhFTikgIDAwMDAwMDAwZGRkYmIwMDAgLSAwMDAwMDAwMGRkZGJj
MDAwIChBQ1BJIGRhdGEpCihYRU4pICAwMDAwMDAwMGRkZGJjMDAwIC0gMDAwMDAwMDBkZGVkNzAw
MCAoQUNQSSBOVlMpCihYRU4pICAwMDAwMDAwMGRkZWQ3MDAwIC0gMDAwMDAwMDBkZWY5MjAwMCAo
cmVzZXJ2ZWQpCihYRU4pICAwMDAwMDAwMGRlZjkyMDAwIC0gMDAwMDAwMDBkZWY5MzAwMCAodXNh
YmxlKQooWEVOKSAgMDAwMDAwMDBkZWY5MzAwMCAtIDAwMDAwMDAwZGVmZDYwMDAgKEFDUEkgTlZT
KQooWEVOKSAgMDAwMDAwMDBkZWZkNjAwMCAtIDAwMDAwMDAwZGY4MDAwMDAgKHVzYWJsZSkKKFhF
TikgIDAwMDAwMDAwZjgwMDAwMDAgLSAwMDAwMDAwMGZjMDAwMDAwIChyZXNlcnZlZCkKKFhFTikg
IDAwMDAwMDAwZmVjMDAwMDAgLSAwMDAwMDAwMGZlYzAxMDAwIChyZXNlcnZlZCkKKFhFTikgIDAw
MDAwMDAwZmVkMDAwMDAgLSAwMDAwMDAwMGZlZDA0MDAwIChyZXNlcnZlZCkKKFhFTikgIDAwMDAw
MDAwZmVkMWMwMDAgLSAwMDAwMDAwMGZlZDIwMDAwIChyZXNlcnZlZCkKKFhFTikgIDAwMDAwMDAw
ZmVlMDAwMDAgLSAwMDAwMDAwMGZlZTAxMDAwIChyZXNlcnZlZCkKKFhFTikgIDAwMDAwMDAwZmYw
MDAwMDAgLSAwMDAwMDAwMTAwMDAwMDAwIChyZXNlcnZlZCkKKFhFTikgIDAwMDAwMDAxMDAwMDAw
MDAgLSAwMDAwMDAwNDIwMDAwMDAwICh1c2FibGUpCihYRU4pIEFDUEk6IFJTRFAgMDAwRjA0OTAs
IDAwMjQgKHIyIEFMQVNLQSkKKFhFTikgQUNQSTogWFNEVCBEREVDNzA5MCwgMDA5QyAocjEgQUxB
U0tBICAgIEEgTSBJICAxMDcyMDA5IEFNSSAgICAgMTAwMTMpCihYRU4pIEFDUEk6IEZBQ1AgRERF
RDFDQTAsIDAwRjQgKHI0IEFMQVNLQSAgICBBIE0gSSAgMTA3MjAwOSBBTUkgICAgIDEwMDEzKQoo
WEVOKSBBQ1BJOiBEU0RUIERERUM3MUMwLCBBQURBIChyMiBBTEFTS0EgICAgQSBNIEkgICAgICAg
NkYgSU5UTCAyMDA1MTExNykKKFhFTikgQUNQSTogRkFDUyBEREVENUY4MCwgMDA0MAooWEVOKSBB
Q1BJOiBBUElDIERERUQxRDk4LCAwMDkyIChyMyBBTEFTS0EgICAgQSBNIEkgIDEwNzIwMDkgQU1J
ICAgICAxMDAxMykKKFhFTikgQUNQSTogRlBEVCBEREVEMUUzMCwgMDA0NCAocjEgQUxBU0tBICAg
IEEgTSBJICAxMDcyMDA5IEFNSSAgICAgMTAwMTMpCihYRU4pIEFDUEk6IE1DRkcgRERFRDFFNzgs
IDAwM0MgKHIxIEFMQVNLQSAgICBBIE0gSSAgMTA3MjAwOSBNU0ZUICAgICAgIDk3KQooWEVOKSBB
Q1BJOiBQUkFEIERERUQxRUI4LCAwMEJFIChyMiBQUkFESUQgIFBSQURUSUQgICAgICAgIDEgTVNG
VCAgMzAwMDAwMSkKKFhFTikgQUNQSTogSFBFVCBEREVEMUY3OCwgMDAzOCAocjEgQUxBU0tBICAg
IEEgTSBJICAxMDcyMDA5IEFNSS4gICAgICAgIDUpCihYRU4pIEFDUEk6IFNTRFQgRERFRDFGQjAs
IDAzNkQgKHIxIFNhdGFSZSBTYXRhVGFibCAgICAgMTAwMCBJTlRMIDIwMDkxMTEyKQooWEVOKSBB
Q1BJOiBTUE1JIERERUQyMzIwLCAwMDQwIChyNSBBIE0gSSAgIE9FTVNQTUkgICAgICAgIDAgQU1J
LiAgICAgICAgMCkKKFhFTikgQUNQSTogU1NEVCBEREVEMjM2MCwgMDlBNCAocjEgIFBtUmVmICBD
cHUwSXN0ICAgICAzMDAwIElOVEwgMjAwNTExMTcpCihYRU4pIEFDUEk6IFNTRFQgRERFRDJEMDgs
IDBBODggKHIxICBQbVJlZiAgICBDcHVQbSAgICAgMzAwMCBJTlRMIDIwMDUxMTE3KQooWEVOKSBB
Q1BJOiBETUFSIERERUQzNzkwLCAwMDc4IChyMSBJTlRFTCAgICAgIFNOQiAgICAgICAgIDEgSU5U
TCAgICAgICAgMSkKKFhFTikgQUNQSTogRUlOSiBEREVEMzgwOCwgMDEzMCAocjEgICAgQU1JIEFN
SSBFSU5KICAgICAgICAwICAgICAgICAgICAgIDApCihYRU4pIEFDUEk6IEVSU1QgRERFRDM5Mzgs
IDAyMTAgKHIxICBBTUlFUiBBTUkgRVJTVCAgICAgICAgMCAgICAgICAgICAgICAwKQooWEVOKSBB
Q1BJOiBIRVNUIERERUQzQjQ4LCAwMEE4IChyMSAgICBBTUkgQU1JIEhFU1QgICAgICAgIDAgICAg
ICAgICAgICAgMCkKKFhFTikgQUNQSTogQkVSVCBEREVEM0JGMCwgMDAzMCAocjEgICAgQU1JIEFN
SSBCRVJUICAgICAgICAwICAgICAgICAgICAgIDApCihYRU4pIFN5c3RlbSBSQU06IDE2MzU2TUIg
KDE2NzQ5MzY4a0IpCihYRU4pIE5vIE5VTUEgY29uZmlndXJhdGlvbiBmb3VuZAooWEVOKSBGYWtp
bmcgYSBub2RlIGF0IDAwMDAwMDAwMDAwMDAwMDAtMDAwMDAwMDQyMDAwMDAwMAooWEVOKSBEb21h
aW4gaGVhcCBpbml0aWFsaXNlZAooWEVOKSBmb3VuZCBTTVAgTVAtdGFibGUgYXQgMDAwZmQ3YjAK
KFhFTikgRE1JIDIuNyBwcmVzZW50LgooWEVOKSBVc2luZyBBUElDIGRyaXZlciBkZWZhdWx0CihY
RU4pIEFDUEk6IFBNLVRpbWVyIElPIFBvcnQ6IDB4NDA4CihYRU4pIEFDUEk6IEFDUEkgU0xFRVAg
SU5GTzogcG0xeF9jbnRbNDA0LDBdLCBwbTF4X2V2dFs0MDAsMF0KKFhFTikgQUNQSTogMzIvNjRY
IEZBQ1MgYWRkcmVzcyBtaXNtYXRjaCBpbiBGQURUIC0gZGRlZDVmODAvMDAwMDAwMDAwMDAwMDAw
MCwgdXNpbmcgMzIKKFhFTikgQUNQSTogICAgICAgICAgICAgICAgICB3YWtldXBfdmVjW2RkZWQ1
ZjhjXSwgdmVjX3NpemVbMjBdCihYRU4pIEFDUEk6IExvY2FsIEFQSUMgYWRkcmVzcyAweGZlZTAw
MDAwCihYRU4pIEFDUEk6IExBUElDIChhY3BpX2lkWzB4MDFdIGxhcGljX2lkWzB4MDBdIGVuYWJs
ZWQpCihYRU4pIFByb2Nlc3NvciAjMCA3OjEwIEFQSUMgdmVyc2lvbiAyMQooWEVOKSBBQ1BJOiBM
QVBJQyAoYWNwaV9pZFsweDAyXSBsYXBpY19pZFsweDAyXSBlbmFibGVkKQooWEVOKSBQcm9jZXNz
b3IgIzIgNzoxMCBBUElDIHZlcnNpb24gMjEKKFhFTikgQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgw
M10gbGFwaWNfaWRbMHgwNF0gZW5hYmxlZCkKKFhFTikgUHJvY2Vzc29yICM0IDc6MTAgQVBJQyB2
ZXJzaW9uIDIxCihYRU4pIEFDUEk6IExBUElDIChhY3BpX2lkWzB4MDRdIGxhcGljX2lkWzB4MDZd
IGVuYWJsZWQpCihYRU4pIFByb2Nlc3NvciAjNiA3OjEwIEFQSUMgdmVyc2lvbiAyMQooWEVOKSBB
Q1BJOiBMQVBJQyAoYWNwaV9pZFsweDA1XSBsYXBpY19pZFsweDAxXSBlbmFibGVkKQooWEVOKSBQ
cm9jZXNzb3IgIzEgNzoxMCBBUElDIHZlcnNpb24gMjEKKFhFTikgQUNQSTogTEFQSUMgKGFjcGlf
aWRbMHgwNl0gbGFwaWNfaWRbMHgwM10gZW5hYmxlZCkKKFhFTikgUHJvY2Vzc29yICMzIDc6MTAg
QVBJQyB2ZXJzaW9uIDIxCihYRU4pIEFDUEk6IExBUElDIChhY3BpX2lkWzB4MDddIGxhcGljX2lk
WzB4MDVdIGVuYWJsZWQpCihYRU4pIFByb2Nlc3NvciAjNSA3OjEwIEFQSUMgdmVyc2lvbiAyMQoo
WEVOKSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDA4XSBsYXBpY19pZFsweDA3XSBlbmFibGVkKQoo
WEVOKSBQcm9jZXNzb3IgIzcgNzoxMCBBUElDIHZlcnNpb24gMjEKKFhFTikgQUNQSTogTEFQSUNf
Tk1JIChhY3BpX2lkWzB4ZmZdIGhpZ2ggZWRnZSBsaW50WzB4MV0pCihYRU4pIEFDUEk6IElPQVBJ
QyAoaWRbMHgwMl0gYWRkcmVzc1sweGZlYzAwMDAwXSBnc2lfYmFzZVswXSkKKFhFTikgSU9BUElD
WzBdOiBhcGljX2lkIDIsIHZlcnNpb24gMzIsIGFkZHJlc3MgMHhmZWMwMDAwMCwgR1NJIDAtMjMK
KFhFTikgQUNQSTogSU5UX1NSQ19PVlIgKGJ1cyAwIGJ1c19pcnEgMCBnbG9iYWxfaXJxIDIgZGZs
IGRmbCkKKFhFTikgQUNQSTogSU5UX1NSQ19PVlIgKGJ1cyAwIGJ1c19pcnEgOSBnbG9iYWxfaXJx
IDkgaGlnaCBsZXZlbCkKKFhFTikgQUNQSTogSVJRMCB1c2VkIGJ5IG92ZXJyaWRlLgooWEVOKSBB
Q1BJOiBJUlEyIHVzZWQgYnkgb3ZlcnJpZGUuCihYRU4pIEFDUEk6IElSUTkgdXNlZCBieSBvdmVy
cmlkZS4KKFhFTikgRW5hYmxpbmcgQVBJQyBtb2RlOiAgRmxhdC4gIFVzaW5nIDEgSS9PIEFQSUNz
CihYRU4pIEFDUEk6IEhQRVQgaWQ6IDB4ODA4NmE3MDEgYmFzZTogMHhmZWQwMDAwMAooWEVOKSBF
UlNUIHRhYmxlIGlzIGludmFsaWQKKFhFTikgVXNpbmcgQUNQSSAoTUFEVCkgZm9yIFNNUCBjb25m
aWd1cmF0aW9uIGluZm9ybWF0aW9uCihYRU4pIFNNUDogQWxsb3dpbmcgOCBDUFVzICgwIGhvdHBs
dWcgQ1BVcykKKFhFTikgSVJRIGxpbWl0czogMjQgR1NJLCAxNTI4IE1TSS9NU0ktWAooWEVOKSBT
d2l0Y2hlZCB0byBBUElDIGRyaXZlciB4MmFwaWNfY2x1c3Rlci4KKFhFTikgVXNpbmcgc2NoZWR1
bGVyOiBTTVAgQ3JlZGl0IFNjaGVkdWxlciAoY3JlZGl0KQooWEVOKSBEZXRlY3RlZCAzMjkyLjU3
OCBNSHogcHJvY2Vzc29yLgooWEVOKSBJbml0aW5nIG1lbW9yeSBzaGFyaW5nLgooWEVOKSBtY2Vf
aW50ZWwuYzoxMjM5OiBNQ0EgQ2FwYWJpbGl0eTogQkNBU1QgMSBTRVIgMCBDTUNJIDEgZmlyc3Ri
YW5rIDAgZXh0ZW5kZWQgTUNFIE1TUiAwCihYRU4pIEludGVsIG1hY2hpbmUgY2hlY2sgcmVwb3J0
aW5nIGVuYWJsZWQKKFhFTikgUENJOiBNQ0ZHIGNvbmZpZ3VyYXRpb24gMDogYmFzZSBmODAwMDAw
MCBzZWdtZW50IDAwMDAgYnVzZXMgMDAgLSAzZgooWEVOKSBQQ0k6IE1DRkcgYXJlYSBhdCBmODAw
MDAwMCByZXNlcnZlZCBpbiBFODIwCihYRU4pIFBDSTogVXNpbmcgTUNGRyBmb3Igc2VnbWVudCAw
MDAwIGJ1cyAwMC0zZgooWEVOKSBJbnRlbCBWVC1kIFNub29wIENvbnRyb2wgZW5hYmxlZC4KKFhF
TikgSW50ZWwgVlQtZCBEb20wIERNQSBQYXNzdGhyb3VnaCBub3QgZW5hYmxlZC4KKFhFTikgSW50
ZWwgVlQtZCBRdWV1ZWQgSW52YWxpZGF0aW9uIGVuYWJsZWQuCihYRU4pIEludGVsIFZULWQgSW50
ZXJydXB0IFJlbWFwcGluZyBlbmFibGVkLgooWEVOKSBJbnRlbCBWVC1kIFNoYXJlZCBFUFQgdGFi
bGVzIG5vdCBlbmFibGVkLgooWEVOKSBJL08gdmlydHVhbGlzYXRpb24gZW5hYmxlZAooWEVOKSAg
LSBEb20wIG1vZGU6IFJlbGF4ZWQKKFhFTikgRW5hYmxlZCBkaXJlY3RlZCBFT0kgd2l0aCBpb2Fw
aWNfYWNrX29sZCBvbiEKKFhFTikgRU5BQkxJTkcgSU8tQVBJQyBJUlFzCihYRU4pICAtPiBVc2lu
ZyBvbGQgQUNLIG1ldGhvZAooWEVOKSAuLlRJTUVSOiB2ZWN0b3I9MHhGMCBhcGljMT0wIHBpbjE9
MiBhcGljMj0tMSBwaW4yPS0xCihYRU4pIFRTQyBkZWFkbGluZSB0aW1lciBlbmFibGVkCihYRU4p
IFBsYXRmb3JtIHRpbWVyIGlzIDE0LjMxOE1IeiBIUEVUCihYRU4pIEFsbG9jYXRlZCBjb25zb2xl
IHJpbmcgb2YgNjQgS2lCLgooWEVOKSBWTVg6IFN1cHBvcnRlZCBhZHZhbmNlZCBmZWF0dXJlczoK
KFhFTikgIC0gQVBJQyBNTUlPIGFjY2VzcyB2aXJ0dWFsaXNhdGlvbgooWEVOKSAgLSBBUElDIFRQ
UiBzaGFkb3cKKFhFTikgIC0gRXh0ZW5kZWQgUGFnZSBUYWJsZXMgKEVQVCkKKFhFTikgIC0gVmly
dHVhbC1Qcm9jZXNzb3IgSWRlbnRpZmllcnMgKFZQSUQpCihYRU4pICAtIFZpcnR1YWwgTk1JCihY
RU4pICAtIE1TUiBkaXJlY3QtYWNjZXNzIGJpdG1hcAooWEVOKSAgLSBVbnJlc3RyaWN0ZWQgR3Vl
c3QKKFhFTikgSFZNOiBBU0lEcyBlbmFibGVkLgooWEVOKSBIVk06IFZNWCBlbmFibGVkCihYRU4p
IEhWTTogSGFyZHdhcmUgQXNzaXN0ZWQgUGFnaW5nIChIQVApIGRldGVjdGVkCihYRU4pIEhWTTog
SEFQIHBhZ2Ugc2l6ZXM6IDRrQiwgMk1CCihYRU4pIEJyb3VnaHQgdXAgOCBDUFVzCihYRU4pIEFD
UEkgc2xlZXAgbW9kZXM6IFMzCihYRU4pIG1jaGVja19wb2xsOiBNYWNoaW5lIGNoZWNrIHBvbGxp
bmcgdGltZXIgc3RhcnRlZC4KKFhFTikgKioqIExPQURJTkcgRE9NQUlOIDAgKioqCihYRU4pIGVs
Zl9wYXJzZV9iaW5hcnk6IHBoZHI6IHBhZGRyPTB4MTAwMDAwMCBtZW1zej0weGFjNTAwMAooWEVO
KSBlbGZfcGFyc2VfYmluYXJ5OiBwaGRyOiBwYWRkcj0weDFjMDAwMDAgbWVtc3o9MHhlNjBlMAoo
WEVOKSBlbGZfcGFyc2VfYmluYXJ5OiBwaGRyOiBwYWRkcj0weDFjZTcwMDAgbWVtc3o9MHgxNDQ4
MAooWEVOKSBlbGZfcGFyc2VfYmluYXJ5OiBwaGRyOiBwYWRkcj0weDFjZmMwMDAgbWVtc3o9MHgz
NjIwMDAKKFhFTikgZWxmX3BhcnNlX2JpbmFyeTogbWVtb3J5OiAweDEwMDAwMDAgLT4gMHgyMDVl
MDAwCihYRU4pIGVsZl94ZW5fcGFyc2Vfbm90ZTogR1VFU1RfT1MgPSAibGludXgiCihYRU4pIGVs
Zl94ZW5fcGFyc2Vfbm90ZTogR1VFU1RfVkVSU0lPTiA9ICIyLjYiCihYRU4pIGVsZl94ZW5fcGFy
c2Vfbm90ZTogWEVOX1ZFUlNJT04gPSAieGVuLTMuMCIKKFhFTikgZWxmX3hlbl9wYXJzZV9ub3Rl
OiBWSVJUX0JBU0UgPSAweGZmZmZmZmZmODAwMDAwMDAKKFhFTikgZWxmX3hlbl9wYXJzZV9ub3Rl
OiBFTlRSWSA9IDB4ZmZmZmZmZmY4MWNmYzIwMAooWEVOKSBlbGZfeGVuX3BhcnNlX25vdGU6IEhZ
UEVSQ0FMTF9QQUdFID0gMHhmZmZmZmZmZjgxMDAxMDAwCihYRU4pIGVsZl94ZW5fcGFyc2Vfbm90
e: FEATURES = "!writable_page_tables|pae_pgdir_above_4gb"
(XEN) elf_xen_parse_note: PAE_MODE = "yes"
(XEN) elf_xen_parse_note: LOADER = "generic"
(XEN) elf_xen_parse_note: unknown xen elf note (0xd)
(XEN) elf_xen_parse_note: SUSPEND_CANCEL = 0x1
(XEN) elf_xen_parse_note: HV_START_LOW = 0xffff800000000000
(XEN) elf_xen_parse_note: PADDR_OFFSET = 0x0
(XEN) elf_xen_addr_calc_check: addresses:
(XEN)     virt_base        = 0xffffffff80000000
(XEN)     elf_paddr_offset = 0x0
(XEN)     virt_offset      = 0xffffffff80000000
(XEN)     virt_kstart      = 0xffffffff81000000
(XEN)     virt_kend        = 0xffffffff8205e000
(XEN)     virt_entry       = 0xffffffff81cfc200
(XEN)     p2m_base         = 0xffffffffffffffff
(XEN)  Xen  kernel: 64-bit, lsb, compat32
(XEN)  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x205e000
(XEN) PHYSICAL MEMORY ARRANGEMENT:
(XEN)  Dom0 alloc.:   000000040c000000->0000000410000000 (1022357 pages to be allocated)
(XEN)  Init. ramdisk: 000000041d995000->000000041ffff200
(XEN) VIRTUAL MEMORY ARRANGEMENT:
(XEN)  Loaded kernel: ffffffff81000000->ffffffff8205e000
(XEN)  Init. ramdisk: ffffffff8205e000->ffffffff846c8200
(XEN)  Phys-Mach map: ffffffff846c9000->ffffffff84ec9000
(XEN)  Start info:    ffffffff84ec9000->ffffffff84ec94b4
(XEN)  Page tables:   ffffffff84eca000->ffffffff84ef5000
(XEN)  Boot stack:    ffffffff84ef5000->ffffffff84ef6000
(XEN)  TOTAL:         ffffffff80000000->ffffffff85000000
(XEN)  ENTRY ADDRESS: ffffffff81cfc200
(XEN) Dom0 has maximum 8 VCPUs
(XEN) elf_load_binary: phdr 0 at 0xffffffff81000000 -> 0xffffffff81ac5000
(XEN) elf_load_binary: phdr 1 at 0xffffffff81c00000 -> 0xffffffff81ce60e0
(XEN) elf_load_binary: phdr 2 at 0xffffffff81ce7000 -> 0xffffffff81cfb480
(XEN) elf_load_binary: phdr 3 at 0xffffffff81cfc000 -> 0xffffffff81dd2000
(XEN) Scrubbing Free RAM: ........................................................................................................................done.
(XEN) Initial low memory virq threshold set at 0x4000 pages.
(XEN) Std. Loglevel: All
(XEN) Guest Loglevel: All
(XEN) Xen is relinquishing VGA console.
(XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen)
(XEN) Freed 240kB init memory.
(XEN) DEBUG evtchn_bind_pirq 408 pirq=9 irq=9 emuirq=472850432
(XEN) PCI add device 0000:00:00.0
(XEN) PCI add device 0000:00:01.0
(XEN) PCI add device 0000:00:01.1
(XEN) PCI add device 0000:00:06.0
(XEN) PCI add device 0000:00:1a.0
(XEN) PCI add device 0000:00:1c.0
(XEN) PCI add device 0000:00:1c.6
(XEN) PCI add device 0000:00:1d.0
(XEN) PCI add device 0000:00:1e.0
(XEN) PCI add device 0000:00:1f.0
(XEN) PCI add device 0000:00:1f.2
(XEN) PCI add device 0000:00:1f.3
(XEN) PCI add device 0000:01:00.0
(XEN) PCI add device 0000:01:00.1
(XEN) PCI add device 0000:02:00.0
(XEN) PCI add device 0000:04:00.0
(XEN) PCI add device 0000:05:00.0
(XEN) PCI add device 0000:06:00.0
(XEN) DEBUG evtchn_bind_pirq 408 pirq=276 irq=28 emuirq=-607835121
(XEN) DEBUG evtchn_bind_pirq 408 pirq=16 irq=16 emuirq=281805571
(XEN) DEBUG evtchn_bind_pirq 408 pirq=8 irq=8 emuirq=0
(XEN) DEBUG evtchn_bind_pirq 408 pirq=275 irq=29 emuirq=1996518789
(XEN) DEBUG evtchn_bind_pirq 408 pirq=274 irq=30 emuirq=132368
(XEN) DEBUG evtchn_bind_pirq 408 pirq=273 irq=31 emuirq=496581248
(XEN) HVM1: HVM Loader
(XEN) HVM1: Detected Xen v4.2-unstable
(XEN) HVM1: Xenbus rings @0xfeffc000, event channel 4
(XEN) HVM1: System requested SeaBIOS
(XEN) HVM1: CPU speed is 3293 MHz
(XEN) irq.c:270: Dom1 PCI link 0 changed 0 -> 5
(XEN) HVM1: PCI-ISA link 0 routed to IRQ5
(XEN) irq.c:270: Dom1 PCI link 1 changed 0 -> 10
(XEN) HVM1: PCI-ISA link 1 routed to IRQ10
(XEN) irq.c:270: Dom1 PCI link 2 changed 0 -> 11
(XEN) HVM1: PCI-ISA link 2 routed to IRQ11
(XEN) irq.c:270: Dom1 PCI link 3 changed 0 -> 5
(XEN) HVM1: PCI-ISA link 3 routed to IRQ5
(XEN) HVM1: pci dev 01:3 INTA->IRQ10
(XEN) HVM1: pci dev 03:0 INTA->IRQ5
(XEN) HVM1: pci dev 04:0 INTA->IRQ5
(XEN) HVM1: pci dev 05:0 INTA->IRQ10
(XEN) HVM1: pci dev 02:0 bar 10 size 02000000: f0000008
(XEN) HVM1: pci dev 03:0 bar 14 size 01000000: f2000008
(XEN) HVM1: pci dev 05:0 bar 30 size 00080000: f3000000
(XEN) memory_map:add: dom1 gfn=f3080 mfn=f7a80 nr=40
(XEN) HVM1: pci dev 05:0 bar 1c size 00040000: f3080004
(XEN) HVM1: pci dev 02:0 bar 30 size 00010000: f30c0000
(XEN) HVM1: pci dev 04:0 bar 30 size 00010000: f30d0000
(XEN) memory_map:add: dom1 gfn=f30e0 mfn=f7ac0 nr=2
(XEN) memory_map:add: dom1 gfn=f30e3 mfn=f7ac3 nr=1
(XEN) HVM1: pci dev 05:0 bar 14 size 00004000: f30e0004
(XEN) HVM1: pci dev 02:0 bar 14 size 00001000: f30e4000
(XEN) HVM1: pci dev 03:0 bar 10 size 00000100: 0000c001
(XEN) HVM1: pci dev 04:0 bar 10 size 00000100: 0000c101
(XEN) HVM1: pci dev 04:0 bar 14 size 00000100: f30e5000
(XEN) HVM1: pci dev 05:0 bar 10 size 00000100: 0000c201
(XEN) ioport_map:add: dom1 gport=c200 mport=d000 nr=100
(XEN) HVM1: pci dev 01:1 bar 20 size 00000010: 0000c301
(XEN) HVM1: Multiprocessor initialisation:
(XEN) HVM1:  - CPU0 ... 36-bit phys ... fixed MTRRs ... var MTRRs [2/8] ... done.
(XEN) HVM1:  - CPU1 ... 36-bit phys ... fixed MTRRs ... var MTRRs [2/8] ... done.
(XEN) HVM1: Testing HVM environment:
(XEN) HVM1:  - REP INSB across page boundaries ... passed
(XEN) HVM1:  - GS base MSRs and SWAPGS ... passed
(XEN) HVM1: Passed 2 of 2 tests
(XEN) HVM1: Writing SMBIOS tables ...
(XEN) HVM1: Loading SeaBIOS ...
(XEN) HVM1: Creating MP tables ...
(XEN) HVM1: Loading ACPI ...
(XEN) HVM1: vm86 TSS at fc00a080
(XEN) HVM1: BIOS map:
(XEN) HVM1:  10000-100d3: Scratch space
(XEN) HVM1:  e0000-fffff: Main BIOS
(XEN) HVM1: E820 table:
(XEN) HVM1:  [00]: 00000000:00000000 - 00000000:000a0000: RAM
(XEN) HVM1:  HOLE: 00000000:000a0000 - 00000000:000e0000
(XEN) HVM1:  [01]: 00000000:000e0000 - 00000000:00100000: RESERVED
(XEN) HVM1:  [02]: 00000000:00100000 - 00000000:7f800000: RAM
(XEN) HVM1:  HOLE: 00000000:7f800000 - 00000000:fc000000
(XEN) HVM1:  [03]: 00000000:fc000000 - 00000001:00000000: RESERVED
(XEN) HVM1: Invoking SeaBIOS ...
(XEN) stdvga.c:147:d1 entering stdvga and caching modes
(XEN) memory_map:add: dom1 gfn=f3000 mfn=f7a00 nr=80
(XEN) memory_map:remove: dom1 gfn=f3000 mfn=f7a00 nr=80
(XEN) HVM2: HVM Loader
(XEN) HVM2: Detected Xen v4.2-unstable
(XEN) HVM2: Xenbus rings @0xfeffc000, event channel 4
(XEN) HVM2: System requested SeaBIOS
(XEN) HVM2: CPU speed is 3293 MHz
(XEN) irq.c:270: Dom2 PCI link 0 changed 0 -> 5
(XEN) HVM2: PCI-ISA link 0 routed to IRQ5
(XEN) irq.c:270: Dom2 PCI link 1 changed 0 -> 10
(XEN) HVM2: PCI-ISA link 1 routed to IRQ10
(XEN) irq.c:270: Dom2 PCI link 2 changed 0 -> 11
(XEN) HVM2: PCI-ISA link 2 routed to IRQ11
(XEN) irq.c:270: Dom2 PCI link 3 changed 0 -> 5
(XEN) HVM2: PCI-ISA link 3 routed to IRQ5
(XEN) HVM2: pci dev 01:3 INTA->IRQ10
(XEN) HVM2: pci dev 03:0 INTA->IRQ5
(XEN) HVM2: pci dev 04:0 INTA->IRQ5
(XEN) HVM2: pci dev 05:0 INTA->IRQ10
(XEN) HVM2: pci dev 02:0 bar 10 size 02000000: f0000008
(XEN) HVM2: pci dev 03:0 bar 14 size 01000000: f2000008
(XEN) HVM2: pci dev 05:0 bar 30 size 00080000: f3000000
(XEN) memory_map:add: dom2 gfn=f3080 mfn=f7a80 nr=40
(XEN) HVM2: pci dev 05:0 bar 1c size 00040000: f3080004
(XEN) HVM2: pci dev 02:0 bar 30 size 00010000: f30c0000
(XEN) HVM2: pci dev 04:0 bar 30 size 00010000: f30d0000
(XEN) memory_map:add: dom2 gfn=f30e0 mfn=f7ac0 nr=2
(XEN) memory_map:add: dom2 gfn=f30e3 mfn=f7ac3 nr=1
(XEN) HVM2: pci dev 05:0 bar 14 size 00004000: f30e0004
(XEN) HVM2: pci dev 02:0 bar 14 size 00001000: f30e4000
(XEN) HVM2: pci dev 03:0 bar 10 size 00000100: 0000c001
(XEN) HVM2: pci dev 04:0 bar 10 size 00000100: 0000c101
(XEN) HVM2: pci dev 04:0 bar 14 size 00000100: f30e5000
(XEN) HVM2: pci dev 05:0 bar 10 size 00000100: 0000c201
(XEN) ioport_map:add: dom2 gport=c200 mport=d000 nr=100
(XEN) HVM2: pci dev 01:1 bar 20 size 00000010: 0000c301
(XEN) HVM2: Multiprocessor initialisation:
(XEN) HVM2:  - CPU0 ... 36-bit phys ... fixed MTRRs ... var MTRRs [2/8] ... done.
(XEN) HVM2:  - CPU1 ... 36-bit phys ... fixed MTRRs ... var MTRRs [2/8] ... done.
(XEN) HVM2: Testing HVM environment:
(XEN) HVM2:  - REP INSB across page boundaries ... passed
(XEN) HVM2:  - GS base MSRs and SWAPGS ... passed
(XEN) HVM2: Passed 2 of 2 tests
(XEN) HVM2: Writing SMBIOS tables ...
(XEN) HVM2: Loading SeaBIOS ...
(XEN) HVM2: Creating MP tables ...
(XEN) HVM2: Loading ACPI ...
(XEN) HVM2: vm86 TSS at fc00a080
(XEN) HVM2: BIOS map:
(XEN) HVM2:  10000-100d3: Scratch space
(XEN) HVM2:  e0000-fffff: Main BIOS
(XEN) HVM2: E820 table:
(XEN) HVM2:  [00]: 00000000:00000000 - 00000000:000a0000: RAM
(XEN) HVM2:  HOLE: 00000000:000a0000 - 00000000:000e0000
(XEN) HVM2:  [01]: 00000000:000e0000 - 00000000:00100000: RESERVED
(XEN) HVM2:  [02]: 00000000:00100000 - 00000000:7f800000: RAM
(XEN) HVM2:  HOLE: 00000000:7f800000 - 00000000:fc000000
(XEN) HVM2:  [03]: 00000000:fc000000 - 00000001:00000000: RESERVED
(XEN) HVM2: Invoking SeaBIOS ...
(XEN) stdvga.c:147:d2 entering stdvga and caching modes
(XEN) memory_map:add: dom2 gfn=f3000 mfn=f7a00 nr=80
(XEN) memory_map:remove: dom2 gfn=f3000 mfn=f7a00 nr=80
(XEN) stdvga.c:151:d2 leaving stdvga
(XEN) HVM3: HVM Loader
(XEN) HVM3: Detected Xen v4.2-unstable
(XEN) HVM3: Xenbus rings @0xfeffc000, event channel 4
(XEN) HVM3: System requested SeaBIOS
(XEN) HVM3: CPU speed is 3293 MHz
(XEN) irq.c:270: Dom3 PCI link 0 changed 0 -> 5
(XEN) HVM3: PCI-ISA link 0 routed to IRQ5
(XEN) irq.c:270: Dom3 PCI link 1 changed 0 -> 10
(XEN) HVM3: PCI-ISA link 1 routed to IRQ10
(XEN) irq.c:270: Dom3 PCI link 2 changed 0 -> 11
(XEN) HVM3: PCI-ISA link 2 routed to IRQ11
(XEN) irq.c:270: Dom3 PCI link 3 changed 0 -> 5
(XEN) HVM3: PCI-ISA link 3 routed to IRQ5
(XEN) HVM3: pci dev 01:3 INTA->IRQ10
(XEN) HVM3: pci dev 03:0 INTA->IRQ5
(XEN) HVM3: pci dev 04:0 INTA->IRQ5
(XEN) HVM3: pci dev 05:0 INTA->IRQ10
(XEN) HVM3: pci dev 02:0 bar 10 size 02000000: f0000008
(XEN) HVM3: pci dev 03:0 bar 14 size 01000000: f2000008
(XEN) HVM3: pci dev 05:0 bar 30 size 00080000: f3000000
(XEN) memory_map:add: dom3 gfn=f3080 mfn=f7a80 nr=40
(XEN) HVM3: pci dev 05:0 bar 1c size 00040000: f3080004
(XEN) HVM3: pci dev 02:0 bar 30 size 00010000: f30c0000
(XEN) HVM3: pci dev 04:0 bar 30 size 00010000: f30d0000
(XEN) memory_map:add: dom3 gfn=f30e0 mfn=f7ac0 nr=2
(XEN) memory_map:add: dom3 gfn=f30e3 mfn=f7ac3 nr=1
(XEN) HVM3: pci dev 05:0 bar 14 size 00004000: f30e0004
(XEN) HVM3: pci dev 02:0 bar 14 size 00001000: f30e4000
(XEN) HVM3: pci dev 03:0 bar 10 size 00000100: 0000c001
(XEN) HVM3: pci dev 04:0 bar 10 size 00000100: 0000c101
(XEN) HVM3: pci dev 04:0 bar 14 size 00000100: f30e5000
(XEN) HVM3: pci dev 05:0 bar 10 size 00000100: 0000c201
(XEN) ioport_map:add: dom3 gport=c200 mport=d000 nr=100
(XEN) HVM3: pci dev 01:1 bar 20 size 00000010: 0000c301
(XEN) HVM3: Multiprocessor initialisation:
(XEN) HVM3:  - CPU0 ... 36-bit phys ... fixed MTRRs ... var MTRRs [2/8] ... done.
(XEN) HVM3:  - CPU1 ... 36-bit phys ... fixed MTRRs ... var MTRRs [2/8] ... done.
(XEN) HVM3: Testing HVM environment:
(XEN) HVM3:  - REP INSB across page boundaries ... passed
(XEN) HVM3:  - GS base MSRs and SWAPGS ... passed
(XEN) HVM3: Passed 2 of 2 tests
(XEN) HVM3: Writing SMBIOS tables ...
(XEN) HVM3: Loading SeaBIOS ...
(XEN) HVM3: Creating MP tables ...
(XEN) HVM3: Loading ACPI ...
(XEN) HVM3: vm86 TSS at fc00a080
(XEN) HVM3: BIOS map:
(XEN) HVM3:  10000-100d3: Scratch space
(XEN) HVM3:  e0000-fffff: Main BIOS
(XEN) HVM3: E820 table:
(XEN) HVM3:  [00]: 00000000:00000000 - 00000000:000a0000: RAM
(XEN) HVM3:  HOLE: 00000000:000a0000 - 00000000:000e0000
(XEN) HVM3:  [01]: 00000000:000e0000 - 00000000:00100000: RESERVED
(XEN) HVM3:  [02]: 00000000:00100000 - 00000000:7f800000: RAM
(XEN) HVM3:  HOLE: 00000000:7f800000 - 00000000:fc000000
(XEN) HVM3:  [03]: 00000000:fc000000 - 00000001:00000000: RESERVED
(XEN) HVM3: Invoking SeaBIOS ...
(XEN) stdvga.c:147:d3 entering stdvga and caching modes
(XEN) memory_map:add: dom3 gfn=f3000 mfn=f7a00 nr=80
(XEN) memory_map:remove: dom3 gfn=f3000 mfn=f7a00 nr=80
(XEN) stdvga.c:151:d3 leaving stdvga
(XEN) stdvga.c:147:d3 entering stdvga and caching modes
(XEN) irq.c:375: Dom3 callback via changed to Direct Vector 0xf3
(XEN) memory_map:remove: dom3 gfn=f3080 mfn=f7a80 nr=40
(XEN) memory_map:remove: dom3 gfn=f30e0 mfn=f7ac0 nr=2
(XEN) memory_map:remove: dom3 gfn=f30e3 mfn=f7ac3 nr=1
(XEN) ioport_map:remove: dom3 gport=c200 mport=d000 nr=100
(XEN) memory_map:add: dom3 gfn=f3080 mfn=f7a80 nr=40
(XEN) memory_map:add: dom3 gfn=f30e0 mfn=f7ac0 nr=2
(XEN) memory_map:add: dom3 gfn=f30e3 mfn=f7ac3 nr=1
(XEN) ioport_map:add: dom3 gport=c200 mport=d000 nr=100
(XEN) memory_map:remove: dom3 gfn=f3080 mfn=f7a80 nr=40
(XEN) memory_map:remove: dom3 gfn=f30e0 mfn=f7ac0 nr=2
(XEN) memory_map:remove: dom3 gfn=f30e3 mfn=f7ac3 nr=1
(XEN) ioport_map:remove: dom3 gport=c200 mport=d000 nr=100
(XEN) memory_map:add: dom3 gfn=f3080 mfn=f7a80 nr=40
(XEN) memory_map:add: dom3 gfn=f30e0 mfn=f7ac0 nr=2
(XEN) memory_map:add: dom3 gfn=f30e3 mfn=f7ac3 nr=1
(XEN) ioport_map:add: dom3 gport=c200 mport=d000 nr=100
(XEN) memory_map:remove: dom3 gfn=f3080 mfn=f7a80 nr=40
(XEN) memory_map:remove: dom3 gfn=f30e0 mfn=f7ac0 nr=2
(XEN) memory_map:remove: dom3 gfn=f30e3 mfn=f7ac3 nr=1
(XEN) ioport_map:remove: dom3 gport=c200 mport=d000 nr=100
(XEN) memory_map:add: dom3 gfn=f3080 mfn=f7a80 nr=40
(XEN) memory_map:add: dom3 gfn=f30e0 mfn=f7ac0 nr=2
(XEN) memory_map:add: dom3 gfn=f30e3 mfn=f7ac3 nr=1
(XEN) ioport_map:add: dom3 gport=c200 mport=d000 nr=100
(XEN) memory_map:remove: dom3 gfn=f3080 mfn=f7a80 nr=40
(XEN) memory_map:remove: dom3 gfn=f30e0 mfn=f7ac0 nr=2
(XEN) memory_map:remove: dom3 gfn=f30e3 mfn=f7ac3 nr=1
(XEN) ioport_map:remove: dom3 gport=c200 mport=d000 nr=100
(XEN) memory_map:add: dom3 gfn=f3080 mfn=f7a80 nr=40
(XEN) memory_map:add: dom3 gfn=f30e0 mfn=f7ac0 nr=2
(XEN) memory_map:add: dom3 gfn=f30e3 mfn=f7ac3 nr=1
(XEN) ioport_map:add: dom3 gport=c200 mport=d000 nr=100
(XEN) memory_map:remove: dom3 gfn=f3080 mfn=f7a80 nr=40
(XEN) memory_map:remove: dom3 gfn=f30e0 mfn=f7ac0 nr=2
(XEN) memory_map:remove: dom3 gfn=f30e3 mfn=f7ac3 nr=1
(XEN) ioport_map:remove: dom3 gport=c200 mport=d000 nr=100
(XEN) memory_map:add: dom3 gfn=f3080 mfn=f7a80 nr=40
(XEN) memory_map:add: dom3 gfn=f30e0 mfn=f7ac0 nr=2
(XEN) memory_map:add: dom3 gfn=f30e3 mfn=f7ac3 nr=1
(XEN) ioport_map:add: dom3 gport=c200 mport=d000 nr=100
(XEN) irq.c:270: Dom3 PCI link 0 changed 5 -> 0
(XEN) irq.c:270: Dom3 PCI link 1 changed 10 -> 0
(XEN) irq.c:270: Dom3 PCI link 2 changed 11 -> 0
(XEN) irq.c:270: Dom3 PCI link 3 changed 5 -> 0
(XEN) DEBUG evtchn_bind_pirq 408 pirq=18 irq=0 emuirq=12
(XEN) DEBUG evtchn_bind_pirq 408 pirq=18 irq=0 emuirq=12
(XEN) DEBUG evtchn_bind_pirq 408 pirq=19 irq=0 emuirq=1
(XEN) DEBUG evtchn_bind_pirq 408 pirq=17 irq=0 emuirq=8
(XEN) DEBUG evtchn_bind_pirq 408 pirq=21 irq=0 emuirq=4
(XEN) DEBUG evtchn_bind_pirq 408 pirq=20 irq=0 emuirq=6
(XEN) DEBUG evtchn_bind_pirq 408 pirq=55 irq=-1 emuirq=-1
(XEN) DEBUG hvm_pci_msi_assert pirq=4 hvm_domain_use_pirq=0 emuirq=-1
(XEN) vmsi.c:108:d32767 Unsupported delivery mode 3
(XEN) DEBUG evtchn_bind_pirq 408 pirq=22 irq=0 emuirq=7
(XEN) DEBUG evtchn_bind_pirq 408 pirq=21 irq=0 emuirq=4
(XEN) HVM4: HVM Loader
(XEN) HVM4: Detected Xen v4.2-unstable
(XEN) HVM4: Xenbus rings @0xfeffc000, event channel 4
(XEN) HVM4: System requested SeaBIOS
(XEN) HVM4: CPU speed is 3293 MHz
(XEN) irq.c:270: Dom4 PCI link 0 changed 0 -> 5
(XEN) HVM4: PCI-ISA link 0 routed to IRQ5
(XEN) irq.c:270: Dom4 PCI link 1 changed 0 -> 10
(XEN) HVM4: PCI-ISA link 1 routed to IRQ10
(XEN) irq.c:270: Dom4 PCI link 2 changed 0 -> 11
(XEN) HVM4: PCI-ISA link 2 routed to IRQ11
(XEN) irq.c:270: Dom4 PCI link 3 changed 0 -> 5
(XEN) HVM4: PCI-ISA link 3 routed to IRQ5
(XEN) HVM4: pci dev 01:3 INTA->IRQ10
(XEN) HVM4: pci dev 03:0 INTA->IRQ5
(XEN) HVM4: pci dev 04:0 INTA->IRQ5
(XEN) HVM4: pci dev 05:0 INTA->IRQ10
(XEN) HVM4: pci dev 02:0 bar 10 size 02000000: f0000008
(XEN) HVM4: pci dev 03:0 bar 14 size 01000000: f2000008
(XEN) HVM4: pci dev 05:0 bar 30 size 00080000: f3000000
(XEN) memory_map:add: dom4 gfn=f3080 mfn=f7a80 nr=40
(XEN) HVM4: pci dev 05:0 bar 1c size 00040000: f3080004
(XEN) HVM4: pci dev 02:0 bar 30 size 00010000: f30c0000
(XEN) HVM4: pci dev 04:0 bar 30 size 00010000: f30d0000
(XEN) memory_map:add: dom4 gfn=f30e0 mfn=f7ac0 nr=2
(XEN) memory_map:add: dom4 gfn=f30e3 mfn=f7ac3 nr=1
(XEN) HVM4: pci dev 05:0 bar 14 size 00004000: f30e0004
(XEN) HVM4: pci dev 02:0 bar 14 size 00001000: f30e4000
(XEN) HVM4: pci dev 03:0 bar 10 size 00000100: 0000c001
(XEN) HVM4: pci dev 04:0 bar 10 size 00000100: 0000c101
(XEN) HVM4: pci dev 04:0 bar 14 size 00000100: f30e5000
(XEN) HVM4: pci dev 05:0 bar 10 size 00000100: 0000c201
(XEN) ioport_map:add: dom4 gport=c200 mport=d000 nr=100
(XEN) HVM4: pci dev 01:1 bar 20 size 00000010: 0000c301
(XEN) HVM4: Multiprocessor initialisation:
(XEN) HVM4:  - CPU0 ... 36-bit phys ... fixed MTRRs ... var MTRRs [2/8] ... done.
(XEN) HVM4:  - CPU1 ... 36-bit phys ... fixed MTRRs ... var MTRRs [2/8] ... done.
(XEN) HVM4: Testing HVM environment:
(XEN) HVM4:  - REP INSB across page boundaries ... passed
(XEN) HVM4:  - GS base MSRs and SWAPGS ... passed
(XEN) HVM4: Passed 2 of 2 tests
(XEN) HVM4: Writing SMBIOS tables ...
(XEN) HVM4: Loading SeaBIOS ...
(XEN) HVM4: Creating MP tables ...
(XEN) HVM4: Loading ACPI ...
(XEN) HVM4: vm86 TSS at fc00a080
(XEN) HVM4: BIOS map:
(XEN) HVM4:  10000-100d3: Scratch space
(XEN) HVM4:  e0000-fffff: Main BIOS
(XEN) HVM4: E820 table:
(XEN) HVM4:  [00]: 00000000:00000000 - 00000000:000a0000: RAM
(XEN) HVM4:  HOLE: 00000000:000a0000 - 00000000:000e0000
(XEN) HVM4:  [01]: 00000000:000e0000 - 00000000:00100000: RESERVED
(XEN) HVM4:  [02]: 00000000:00100000 - 00000000:7f800000: RAM
(XEN) HVM4:  HOLE: 00000000:7f800000 - 00000000:fc000000
(XEN) HVM4:  [03]: 00000000:fc000000 - 00000001:00000000: RESERVED
(XEN) HVM4: Invoking SeaBIOS ...
(XEN) stdvga.c:147:d4 entering stdvga and caching modes
(XEN) memory_map:add: dom4 gfn=f3000 mfn=f7a00 nr=80
(XEN) memory_map:remove: dom4 gfn=f3000 mfn=f7a00 nr=80
(XEN) stdvga.c:151:d4 leaving stdvga
(XEN) stdvga.c:147:d4 entering stdvga and caching modes
(XEN) irq.c:375: Dom4 callback via changed to Direct Vector 0xf3
(XEN) memory_map:remove: dom4 gfn=f3080 mfn=f7a80 nr=40
(XEN) memory_map:remove: dom4 gfn=f30e0 mfn=f7ac0 nr=2
(XEN) memory_map:remove: dom4 gfn=f30e3 mfn=f7ac3 nr=1
(XEN) ioport_map:remove: dom4 gport=c200 mport=d000 nr=100
(XEN) memory_map:add: dom4 gfn=f3080 mfn=f7a80 nr=40
(XEN) memory_map:add: dom4 gfn=f30e0 mfn=f7ac0 nr=2
(XEN) memory_map:add: dom4 gfn=f30e3 mfn=f7ac3 nr=1
(XEN) ioport_map:add: dom4 gport=c200 mport=d000 nr=100
(XEN) memory_map:remove: dom4 gfn=f3080 mfn=f7a80 nr=40
(XEN) memory_map:remove: dom4 gfn=f30e0 mfn=f7ac0 nr=2
(XEN) memory_map:remove: dom4 gfn=f30e3 mfn=f7ac3 nr=1
(XEN) ioport_map:remove: dom4 gport=c200 mport=d000 nr=100
(XEN) memory_map:add: dom4 gfn=f3080 mfn=f7a80 nr=40
(XEN) memory_map:add: dom4 gfn=f30e0 mfn=f7ac0 nr=2
(XEN) memory_map:add: dom4 gfn=f30e3 mfn=f7ac3 nr=1
(XEN) ioport_map:add: dom4 gport=c200 mport=d000 nr=100
(XEN) memory_map:remove: dom4 gfn=f3080 mfn=f7a80 nr=40
(XEN) memory_map:remove: dom4 gfn=f30e0 mfn=f7ac0 nr=2
(XEN) memory_map:remove: dom4 gfn=f30e3 mfn=f7ac3 nr=1
(XEN) ioport_map:remove: dom4 gport=c200 mport=d000 nr=100
(XEN) memory_map:add: dom4 gfn=f3080 mfn=f7a80 nr=40
(XEN) memory_map:add: dom4 gfn=f30e0 mfn=f7ac0 nr=2
(XEN) memory_map:add: dom4 gfn=f30e3 mfn=f7ac3 nr=1
(XEN) ioport_map:add: dom4 gport=c200 mport=d000 nr=100
(XEN) memory_map:remove: dom4 gfn=f3080 mfn=f7a80 nr=40
(XEN) memory_map:remove: dom4 gfn=f30e0 mfn=f7ac0 nr=2
(XEN) memory_map:remove: dom4 gfn=f30e3 mfn=f7ac3 nr=1
(XEN) ioport_map:remove: dom4 gport=c200 mport=d000 nr=100
(XEN) memory_map:add: dom4 gfn=f3080 mfn=f7a80 nr=40
(XEN) memory_map:add: dom4 gfn=f30e0 mfn=f7ac0 nr=2
(XEN) memory_map:add: dom4 gfn=f30e3 mfn=f7ac3 nr=1
(XEN) ioport_map:add: dom4 gport=c200 mport=d000 nr=100
(XEN) memory_map:remove: dom4 gfn=f3080 mfn=f7a80 nr=40
(XEN) memory_map:remove: dom4 gfn=f30e0 mfn=f7ac0 nr=2
(XEN) memory_map:remove: dom4 gfn=f30e3 mfn=f7ac3 nr=1
(XEN) ioport_map:remove: dom4 gport=c200 mport=d000 nr=100
(XEN) memory_map:add: dom4 gfn=f3080 mfn=f7a80 nr=40
(XEN) memory_map:add: dom4 gfn=f30e0 mfn=f7ac0 nr=2
(XEN) memory_map:add: dom4 gfn=f30e3 mfn=f7ac3 nr=1
(XEN) ioport_map:add: dom4 gport=c200 mport=d000 nr=100
(XEN) irq.c:270: Dom4 PCI link 0 changed 5 -> 0
(XEN) irq.c:270: Dom4 PCI link 1 changed 10 -> 0
(XEN) irq.c:270: Dom4 PCI link 2 changed 11 -> 0
(XEN) irq.c:270: Dom4 PCI link 3 changed 5 -> 0
(XEN) DEBUG evtchn_bind_pirq 408 pirq=18 irq=0 emuirq=12
(XEN) DEBUG evtchn_bind_pirq 408 pirq=18 irq=0 emuirq=12
(XEN) DEBUG evtchn_bind_pirq 408 pirq=19 irq=0 emuirq=1
(XEN) DEBUG evtchn_bind_pirq 408 pirq=17 irq=0 emuirq=8
(XEN) DEBUG evtchn_bind_pirq 408 pirq=21 irq=0 emuirq=4
(XEN) DEBUG evtchn_bind_pirq 408 pirq=20 irq=0 emuirq=6
(XEN) DEBUG evtchn_bind_pirq 408 pirq=55 irq=-1 emuirq=-1
(XEN) DEBUG hvm_pci_msi_assert pirq=4 hvm_domain_use_pirq=0 emuirq=-1
(XEN) vmsi.c:108:d32767 Unsupported delivery mode 3
(XEN) DEBUG evtchn_bind_pirq 408 pirq=22 irq=0 emuirq=7
(XEN) DEBUG evtchn_bind_pirq 408 pirq=21 irq=0 emuirq=4
(XEN) HVM5: HVM Loader
(XEN) HVM5: Detected Xen v4.2-unstable
(XEN) HVM5: Xenbus rings @0xfeffc000, event channel 4
(XEN) HVM5: System requested SeaBIOS
(XEN) HVM5: CPU speed is 3293 MHz
(XEN) irq.c:270: Dom5 PCI link 0 changed 0 -> 5
(XEN) HVM5: PCI-ISA link 0 routed to IRQ5
(XEN) irq.c:270: Dom5 PCI link 1 changed 0 -> 10
(XEN) HVM5: PCI-ISA link 1 routed to IRQ10
(XEN) irq.c:270: Dom5 PCI link 2 changed 0 -> 11
(XEN) HVM5: PCI-ISA link 2 routed to IRQ11
(XEN) irq.c:270: Dom5 PCI link 3 changed 0 -> 5
(XEN) HVM5: PCI-ISA link 3 routed to IRQ5
(XEN) HVM5: pci dev 01:3 INTA->IRQ10
(XEN) HVM5: pci dev 03:0 INTA->IRQ5
(XEN) HVM5: pci dev 04:0 INTA->IRQ5
(XEN) HVM5: pci dev 05:0 INTA->IRQ10
(XEN) HVM5: pci dev 02:0 bar 10 size 02000000: f0000008
(XEN) HVM5: pci dev 03:0 bar 14 size 01000000: f2000008
(XEN) HVM5: pci dev 05:0 bar 30 size 00080000: f3000000
(XEN) memory_map:add: dom5 gfn=f3080 mfn=f7a80 nr=40
(XEN) HVM5: pci dev 05:0 bar 1c size 00040000: f3080004
(XEN) HVM5: pci dev 02:0 bar 30 size 00010000: f30c0000
(XEN) HVM5: pci dev 04:0 bar 30 size 00010000: f30d0000
(XEN) memory_map:add: dom5 gfn=f30e0 mfn=f7ac0 nr=2
(XEN) memory_map:add: dom5 gfn=f30e3 mfn=f7ac3 nr=1
(XEN) HVM5: pci dev 05:0 bar 14 size 00004000: f30e0004
(XEN) HVM5: pci dev 02:0 bar 14 size 00001000: f30e4000
(XEN) HVM5: pci dev 03:0 bar 10 size 00000100: 0000c001
(XEN) HVM5: pci dev 04:0 bar 10 size 00000100: 0000c101
(XEN) HVM5: pci dev 04:0 bar 14 size 00000100: f30e5000
(XEN) HVM5: pci dev 05:0 bar 10 size 00000100: 0000c201
(XEN) ioport_map:add: dom5 gport=c200 mport=d000 nr=100
(XEN) HVM5: pci dev 01:1 bar 20 size 00000010: 0000c301
(XEN) HVM5: Multiprocessor initialisation:
(XEN) HVM5:  - CPU0 ... 36-bit phys ... fixed MTRRs ... var MTRRs [2/8] ... done.
(XEN) HVM5:  - CPU1 ... 36-bit phys ... fixed MTRRs ... var MTRRs [2/8] ... done.
(XEN) HVM5: Testing HVM environment:
(XEN) HVM5:  - REP INSB across page boundaries ... passed
(XEN) HVM5:  - GS base MSRs and SWAPGS ... passed
(XEN) HVM5: Passed 2 of 2 tests
(XEN) HVM5: Writing SMBIOS tables ...
(XEN) HVM5: Loading SeaBIOS ...
(XEN) HVM5: Creating MP tables ...
(XEN) HVM5: Loading ACPI ...
(XEN) HVM5: vm86 TSS at fc00a080
(XEN) HVM5: BIOS map:
(XEN) HVM5:  10000-100d3: Scratch space
(XEN) HVM5:  e0000-fffff: Main BIOS
(XEN) HVM5: E820 table:
(XEN) HVM5:  [00]: 00000000:00000000 - 00000000:000a0000: RAM
(XEN) HVM5:  HOLE: 00000000:000a0000 - 00000000:000e0000
(XEN) HVM5:  [01]: 00000000:000e0000 - 00000000:00100000: RESERVED
(XEN) HVM5:  [02]: 00000000:00100000 - 00000000:7f800000: RAM
(XEN) HVM5:  HOLE: 00000000:7f800000 - 00000000:fc000000
(XEN) HVM5:  [03]: 00000000:fc000000 - 00000001:00000000: RESERVED
(XEN) HVM5: Invoking SeaBIOS ...
(XEN) stdvga.c:147:d5 entering stdvga and caching modes
(XEN) memory_map:add: dom5 gfn=f3000 mfn=f7a00 nr=80
(XEN) memory_map:remove: dom5 gfn=f3000 mfn=f7a00 nr=80
(XEN) stdvga.c:151:d5 leaving stdvga
(XEN) HVM6: HVM Loader
(XEN) HVM6: Detected Xen v4.2-unstable
(XEN) HVM6: Xenbus rings @0xfeffc000, event channel 4
(XEN) HVM6: System requested SeaBIOS
(XEN) HVM6: CPU speed is 3293 MHz
(XEN) irq.c:270: Dom6 PCI link 0 changed 0 -> 5
(XEN) HVM6: PCI-ISA link 0 routed to IRQ5
(XEN) irq.c:270: Dom6 PCI link 1 changed 0 -> 10
(XEN) HVM6: PCI-ISA link 1 routed to IRQ10
(XEN) irq.c:270: Dom6 PCI link 2 changed 0 -> 11
(XEN) HVM6: PCI-ISA link 2 routed to IRQ11
(XEN) irq.c:270: Dom6 PCI link 3 changed 0 -> 5
(XEN) HVM6: PCI-ISA link 3 routed to IRQ5
(XEN) HVM6: pci dev 01:3 INTA->IRQ10
(XEN) HVM6: pci dev 03:0 INTA->IRQ5
(XEN) HVM6: pci dev 04:0 INTA->IRQ5
(XEN) HVM6: pci dev 05:0 INTA->IRQ10
(XEN) HVM6: pci dev 02:0 bar 10 size 02000000: f0000008
(XEN) HVM6: pci dev 03:0 bar 14 size 01000000: f2000008
(XEN) HVM6: pci dev 05:0 bar 30 size 00080000: f3000000
(XEN) memory_map:add: dom6 gfn=f3080 mfn=f7a80 nr=40
(XEN) HVM6: pci dev 05:0 bar 1c size 00040000: f3080004
(XEN) HVM6: pci dev 02:0 bar 30 size 00010000: f30c0000
(XEN) HVM6: pci dev 04:0 bar 30 size 00010000: f30d0000
(XEN) memory_map:add: dom6 gfn=f30e0 mfn=f7ac0 nr=2
(XEN) memory_map:add: dom6 gfn=f30e3 mfn=f7ac3 nr=1
(XEN) HVM6: pci dev 05:0 bar 14 size 00004000: f30e0004
(XEN) HVM6: pci dev 02:0 bar 14 size 00001000: f30e4000
(XEN) HVM6: pci dev 03:0 bar 10 size 00000100: 0000c001
(XEN) HVM6: pci dev 04:0 bar 10 size 00000100: 0000c101
(XEN) HVM6: pci dev 04:0 bar 14 size 00000100: f30e5000
(XEN) HVM6: pci dev 05:0 bar 10 size 00000100: 0000c201
(XEN) ioport_map:add: dom6 gport=c200 mport=d000 nr=100
(XEN) HVM6: pci dev 01:1 bar 20 size 00000010: 0000c301
(XEN) HVM6: Multiprocessor initialisation:
(XEN) HVM6:  - CPU0 ... 36-bit phys ... fixed MTRRs ... var MTRRs [2/8] ... done.
(XEN) HVM6:  - CPU1 ... 36-bit phys ... fixed MTRRs ... var MTRRs [2/8] ... done.
(XEN) HVM6: Testing HVM environment:
(XEN) HVM6:  - REP INSB across page boundaries ... passed
(XEN) HVM6:  - GS base MSRs and SWAPGS ... passed
(XEN) HVM6: Passed 2 of 2 tests
(XEN) HVM6: Writing SMBIOS tables ...
(XEN) HVM6: Loading SeaBIOS ...
(XEN) HVM6: Creating MP tables ...
(XEN) HVM6: Loading ACPI ...
(XEN) HVM6: vm86 TSS at fc00a080
(XEN) HVM6: BIOS map:
(XEN) HVM6:  10000-100d3: Scratch space
(XEN) HVM6:  e0000-fffff: Main BIOS
(XEN) HVM6: E820 table:
(XEN) HVM6:  [00]: 00000000:00000000 - 00000000:000a0000: RAM
(XEN) HVM6:  HOLE: 00000000:000a0000 - 00000000:000e0000
(XEN) HVM6:  [01]: 00000000:000e0000 - 00000000:00100000: RESERVED
(XEN) HVM6:  [02]: 00000000:00100000 - 00000000:7f800000: RAM
(XEN) HVM6:  HOL
RTogMDAwMDAwMDA6N2Y4MDAwMDAgLSAwMDAwMDAwMDpmYzAwMDAwMAooWEVOKSBIVk02OiAgWzAz
XTogMDAwMDAwMDA6ZmMwMDAwMDAgLSAwMDAwMDAwMTowMDAwMDAwMDogUkVTRVJWRUQKKFhFTikg
SFZNNjogSW52b2tpbmcgU2VhQklPUyAuLi4KKFhFTikgc3RkdmdhLmM6MTQ3OmQ2IGVudGVyaW5n
IHN0ZHZnYSBhbmQgY2FjaGluZyBtb2RlcwooWEVOKSBtZW1vcnlfbWFwOmFkZDogZG9tNiBnZm49
ZjMwMDAgbWZuPWY3YTAwIG5yPTgwCihYRU4pIG1lbW9yeV9tYXA6cmVtb3ZlOiBkb202IGdmbj1m
MzAwMCBtZm49ZjdhMDAgbnI9ODAKKFhFTikgc3RkdmdhLmM6MTUxOmQ2IGxlYXZpbmcgc3Rkdmdh
CihYRU4pIHN0ZHZnYS5jOjE0NzpkNiBlbnRlcmluZyBzdGR2Z2EgYW5kIGNhY2hpbmcgbW9kZXMK
KFhFTikgaXJxLmM6Mzc1OiBEb202IGNhbGxiYWNrIHZpYSBjaGFuZ2VkIHRvIERpcmVjdCBWZWN0
b3IgMHhmMwooWEVOKSBtZW1vcnlfbWFwOnJlbW92ZTogZG9tNiBnZm49ZjMwODAgbWZuPWY3YTgw
IG5yPTQwCihYRU4pIG1lbW9yeV9tYXA6cmVtb3ZlOiBkb202IGdmbj1mMzBlMCBtZm49ZjdhYzAg
bnI9MgooWEVOKSBtZW1vcnlfbWFwOnJlbW92ZTogZG9tNiBnZm49ZjMwZTMgbWZuPWY3YWMzIG5y
PTEKKFhFTikgaW9wb3J0X21hcDpyZW1vdmU6IGRvbTYgZ3BvcnQ9YzIwMCBtcG9ydD1kMDAwIG5y
PTEwMAooWEVOKSBtZW1vcnlfbWFwOmFkZDogZG9tNiBnZm49ZjMwODAgbWZuPWY3YTgwIG5yPTQw
CihYRU4pIG1lbW9yeV9tYXA6YWRkOiBkb202IGdmbj1mMzBlMCBtZm49ZjdhYzAgbnI9MgooWEVO
KSBtZW1vcnlfbWFwOmFkZDogZG9tNiBnZm49ZjMwZTMgbWZuPWY3YWMzIG5yPTEKKFhFTikgaW9w
b3J0X21hcDphZGQ6IGRvbTYgZ3BvcnQ9YzIwMCBtcG9ydD1kMDAwIG5yPTEwMAooWEVOKSBtZW1v
cnlfbWFwOnJlbW92ZTogZG9tNiBnZm49ZjMwODAgbWZuPWY3YTgwIG5yPTQwCihYRU4pIG1lbW9y
eV9tYXA6cmVtb3ZlOiBkb202IGdmbj1mMzBlMCBtZm49ZjdhYzAgbnI9MgooWEVOKSBtZW1vcnlf
bWFwOnJlbW92ZTogZG9tNiBnZm49ZjMwZTMgbWZuPWY3YWMzIG5yPTEKKFhFTikgaW9wb3J0X21h
cDpyZW1vdmU6IGRvbTYgZ3BvcnQ9YzIwMCBtcG9ydD1kMDAwIG5yPTEwMAooWEVOKSBtZW1vcnlf
bWFwOmFkZDogZG9tNiBnZm49ZjMwODAgbWZuPWY3YTgwIG5yPTQwCihYRU4pIG1lbW9yeV9tYXA6
YWRkOiBkb202IGdmbj1mMzBlMCBtZm49ZjdhYzAgbnI9MgooWEVOKSBtZW1vcnlfbWFwOmFkZDog
ZG9tNiBnZm49ZjMwZTMgbWZuPWY3YWMzIG5yPTEKKFhFTikgaW9wb3J0X21hcDphZGQ6IGRvbTYg
Z3BvcnQ9YzIwMCBtcG9ydD1kMDAwIG5yPTEwMAooWEVOKSBtZW1vcnlfbWFwOnJlbW92ZTogZG9t
NiBnZm49ZjMwODAgbWZuPWY3YTgwIG5yPTQwCihYRU4pIG1lbW9yeV9tYXA6cmVtb3ZlOiBkb202
IGdmbj1mMzBlMCBtZm49ZjdhYzAgbnI9MgooWEVOKSBtZW1vcnlfbWFwOnJlbW92ZTogZG9tNiBn
Zm49ZjMwZTMgbWZuPWY3YWMzIG5yPTEKKFhFTikgaW9wb3J0X21hcDpyZW1vdmU6IGRvbTYgZ3Bv
cnQ9YzIwMCBtcG9ydD1kMDAwIG5yPTEwMAooWEVOKSBtZW1vcnlfbWFwOmFkZDogZG9tNiBnZm49
ZjMwODAgbWZuPWY3YTgwIG5yPTQwCihYRU4pIG1lbW9yeV9tYXA6YWRkOiBkb202IGdmbj1mMzBl
MCBtZm49ZjdhYzAgbnI9MgooWEVOKSBtZW1vcnlfbWFwOmFkZDogZG9tNiBnZm49ZjMwZTMgbWZu
PWY3YWMzIG5yPTEKKFhFTikgaW9wb3J0X21hcDphZGQ6IGRvbTYgZ3BvcnQ9YzIwMCBtcG9ydD1k
MDAwIG5yPTEwMAooWEVOKSBtZW1vcnlfbWFwOnJlbW92ZTogZG9tNiBnZm49ZjMwODAgbWZuPWY3
YTgwIG5yPTQwCihYRU4pIG1lbW9yeV9tYXA6cmVtb3ZlOiBkb202IGdmbj1mMzBlMCBtZm49Zjdh
YzAgbnI9MgooWEVOKSBtZW1vcnlfbWFwOnJlbW92ZTogZG9tNiBnZm49ZjMwZTMgbWZuPWY3YWMz
IG5yPTEKKFhFTikgaW9wb3J0X21hcDpyZW1vdmU6IGRvbTYgZ3BvcnQ9YzIwMCBtcG9ydD1kMDAw
IG5yPTEwMAooWEVOKSBtZW1vcnlfbWFwOmFkZDogZG9tNiBnZm49ZjMwODAgbWZuPWY3YTgwIG5y
PTQwCihYRU4pIG1lbW9yeV9tYXA6YWRkOiBkb202IGdmbj1mMzBlMCBtZm49ZjdhYzAgbnI9Mgoo
WEVOKSBtZW1vcnlfbWFwOmFkZDogZG9tNiBnZm49ZjMwZTMgbWZuPWY3YWMzIG5yPTEKKFhFTikg
aW9wb3J0X21hcDphZGQ6IGRvbTYgZ3BvcnQ9YzIwMCBtcG9ydD1kMDAwIG5yPTEwMAooWEVOKSBt
ZW1vcnlfbWFwOnJlbW92ZTogZG9tNiBnZm49ZjMwODAgbWZuPWY3YTgwIG5yPTQwCihYRU4pIG1l
bW9yeV9tYXA6cmVtb3ZlOiBkb202IGdmbj1mMzBlMCBtZm49ZjdhYzAgbnI9MgooWEVOKSBtZW1v
cnlfbWFwOnJlbW92ZTogZG9tNiBnZm49ZjMwZTMgbWZuPWY3YWMzIG5yPTEKKFhFTikgaW9wb3J0
X21hcDpyZW1vdmU6IGRvbTYgZ3BvcnQ9YzIwMCBtcG9ydD1kMDAwIG5yPTEwMAooWEVOKSBtZW1v
cnlfbWFwOmFkZDogZG9tNiBnZm49ZjMwODAgbWZuPWY3YTgwIG5yPTQwCihYRU4pIG1lbW9yeV9t
YXA6YWRkOiBkb202IGdmbj1mMzBlMCBtZm49ZjdhYzAgbnI9MgooWEVOKSBtZW1vcnlfbWFwOmFk
ZDogZG9tNiBnZm49ZjMwZTMgbWZuPWY3YWMzIG5yPTEKKFhFTikgaW9wb3J0X21hcDphZGQ6IGRv
bTYgZ3BvcnQ9YzIwMCBtcG9ydD1kMDAwIG5yPTEwMAooWEVOKSBpcnEuYzoyNzA6IERvbTYgUENJ
IGxpbmsgMCBjaGFuZ2VkIDUgLT4gMAooWEVOKSBpcnEuYzoyNzA6IERvbTYgUENJIGxpbmsgMSBj
aGFuZ2VkIDEwIC0+IDAKKFhFTikgaXJxLmM6MjcwOiBEb202IFBDSSBsaW5rIDIgY2hhbmdlZCAx
MSAtPiAwCihYRU4pIGlycS5jOjI3MDogRG9tNiBQQ0kgbGluayAzIGNoYW5nZWQgNSAtPiAwCihY
RU4pIERFQlVHIGV2dGNobl9iaW5kX3BpcnEgNDA4IHBpcnE9MTggaXJxPTAgZW11aXJxPTEyCihY
RU4pIERFQlVHIGV2dGNobl9iaW5kX3BpcnEgNDA4IHBpcnE9MTggaXJxPTAgZW11aXJxPTEyCihY
RU4pIERFQlVHIGV2dGNobl9iaW5kX3BpcnEgNDA4IHBpcnE9MTkgaXJxPTAgZW11aXJxPTEKKFhF
TikgREVCVUcgZXZ0Y2huX2JpbmRfcGlycSA0MDggcGlycT0xNyBpcnE9MCBlbXVpcnE9OAooWEVO
KSBERUJVRyBldnRjaG5fYmluZF9waXJxIDQwOCBwaXJxPTIxIGlycT0wIGVtdWlycT00CihYRU4p
IERFQlVHIGV2dGNobl9iaW5kX3BpcnEgNDA4IHBpcnE9MjAgaXJxPTAgZW11aXJxPTYKKFhFTikg
REVCVUcgZXZ0Y2huX2JpbmRfcGlycSA0MDggcGlycT01NSBpcnE9LTEgZW11aXJxPS0xCihYRU4p
IERFQlVHIGh2bV9wY2lfbXNpX2Fzc2VydCBwaXJxPTQgaHZtX2RvbWFpbl91c2VfcGlycT0wIGVt
dWlycT0tMQooWEVOKSB2bXNpLmM6MTA4OmQzMjc2NyBVbnN1cHBvcnRlZCBkZWxpdmVyeSBtb2Rl
IDMKKFhFTikgREVCVUcgZXZ0Y2huX2JpbmRfcGlycSA0MDggcGlycT0yMiBpcnE9MCBlbXVpcnE9
NwooWEVOKSBERUJVRyBldnRjaG5fYmluZF9waXJxIDQwOCBwaXJxPTIxIGlycT0wIGVtdWlycT00
CihYRU4pIEhWTTc6IEhWTSBMb2FkZXIKKFhFTikgSFZNNzogRGV0ZWN0ZWQgWGVuIHY0LjItdW5z
dGFibGUKKFhFTikgSFZNNzogWGVuYnVzIHJpbmdzIEAweGZlZmZjMDAwLCBldmVudCBjaGFubmVs
IDQKKFhFTikgSFZNNzogU3lzdGVtIHJlcXVlc3RlZCBTZWFCSU9TCihYRU4pIEhWTTc6IENQVSBz
cGVlZCBpcyAzMjkzIE1IegooWEVOKSBpcnEuYzoyNzA6IERvbTcgUENJIGxpbmsgMCBjaGFuZ2Vk
IDAgLT4gNQooWEVOKSBIVk03OiBQQ0ktSVNBIGxpbmsgMCByb3V0ZWQgdG8gSVJRNQooWEVOKSBp
cnEuYzoyNzA6IERvbTcgUENJIGxpbmsgMSBjaGFuZ2VkIDAgLT4gMTAKKFhFTikgSFZNNzogUENJ
LUlTQSBsaW5rIDEgcm91dGVkIHRvIElSUTEwCihYRU4pIGlycS5jOjI3MDogRG9tNyBQQ0kgbGlu
ayAyIGNoYW5nZWQgMCAtPiAxMQooWEVOKSBIVk03OiBQQ0ktSVNBIGxpbmsgMiByb3V0ZWQgdG8g
SVJRMTEKKFhFTikgaXJxLmM6MjcwOiBEb203IFBDSSBsaW5rIDMgY2hhbmdlZCAwIC0+IDUKKFhF
TikgSFZNNzogUENJLUlTQSBsaW5rIDMgcm91dGVkIHRvIElSUTUKKFhFTikgSFZNNzogcGNpIGRl
diAwMTozIElOVEEtPklSUTEwCihYRU4pIEhWTTc6IHBjaSBkZXYgMDM6MCBJTlRBLT5JUlE1CihY
RU4pIEhWTTc6IHBjaSBkZXYgMDQ6MCBJTlRBLT5JUlE1CihYRU4pIEhWTTc6IHBjaSBkZXYgMDU6
MCBJTlRBLT5JUlExMAooWEVOKSBIVk03OiBwY2kgZGV2IDAyOjAgYmFyIDEwIHNpemUgMDIwMDAw
MDA6IGYwMDAwMDA4CihYRU4pIEhWTTc6IHBjaSBkZXYgMDM6MCBiYXIgMTQgc2l6ZSAwMTAwMDAw
MDogZjIwMDAwMDgKKFhFTikgSFZNNzogcGNpIGRldiAwNTowIGJhciAzMCBzaXplIDAwMDgwMDAw
OiBmMzAwMDAwMAooWEVOKSBtZW1vcnlfbWFwOmFkZDogZG9tNyBnZm49ZjMwODAgbWZuPWY3YTgw
IG5yPTQwCihYRU4pIEhWTTc6IHBjaSBkZXYgMDU6MCBiYXIgMWMgc2l6ZSAwMDA0MDAwMDogZjMw
ODAwMDQKKFhFTikgSFZNNzogcGNpIGRldiAwMjowIGJhciAzMCBzaXplIDAwMDEwMDAwOiBmMzBj
MDAwMAooWEVOKSBIVk03OiBwY2kgZGV2IDA0OjAgYmFyIDMwIHNpemUgMDAwMTAwMDA6IGYzMGQw
MDAwCihYRU4pIG1lbW9yeV9tYXA6YWRkOiBkb203IGdmbj1mMzBlMCBtZm49ZjdhYzAgbnI9Mgoo
WEVOKSBtZW1vcnlfbWFwOmFkZDogZG9tNyBnZm49ZjMwZTMgbWZuPWY3YWMzIG5yPTEKKFhFTikg
SFZNNzogcGNpIGRldiAwNTowIGJhciAxNCBzaXplIDAwMDA0MDAwOiBmMzBlMDAwNAooWEVOKSBI
Vk03OiBwY2kgZGV2IDAyOjAgYmFyIDE0IHNpemUgMDAwMDEwMDA6IGYzMGU0MDAwCihYRU4pIEhW
TTc6IHBjaSBkZXYgMDM6MCBiYXIgMTAgc2l6ZSAwMDAwMDEwMDogMDAwMGMwMDEKKFhFTikgSFZN
NzogcGNpIGRldiAwNDowIGJhciAxMCBzaXplIDAwMDAwMTAwOiAwMDAwYzEwMQooWEVOKSBIVk03
OiBwY2kgZGV2IDA0OjAgYmFyIDE0IHNpemUgMDAwMDAxMDA6IGYzMGU1MDAwCihYRU4pIEhWTTc6
IHBjaSBkZXYgMDU6MCBiYXIgMTAgc2l6ZSAwMDAwMDEwMDogMDAwMGMyMDEKKFhFTikgaW9wb3J0
X21hcDphZGQ6IGRvbTcgZ3BvcnQ9YzIwMCBtcG9ydD1kMDAwIG5yPTEwMAooWEVOKSBIVk03OiBw
Y2kgZGV2IDAxOjEgYmFyIDIwIHNpemUgMDAwMDAwMTA6IDAwMDBjMzAxCihYRU4pIEhWTTc6IE11
bHRpcHJvY2Vzc29yIGluaXRpYWxpc2F0aW9uOgooWEVOKSBIVk03OiAgLSBDUFUwIC4uLiAzNi1i
aXQgcGh5cyAuLi4gZml4ZWQgTVRSUnMgLi4uIHZhciBNVFJScyBbMi84XSAuLi4gZG9uZS4KKFhF
TikgSFZNNzogIC0gQ1BVMSAuLi4gMzYtYml0IHBoeXMgLi4uIGZpeGVkIE1UUlJzIC4uLiB2YXIg
TVRSUnMgWzIvOF0gLi4uIGRvbmUuCihYRU4pIEhWTTc6IFRlc3RpbmcgSFZNIGVudmlyb25tZW50
OgooWEVOKSBIVk03OiAgLSBSRVAgSU5TQiBhY3Jvc3MgcGFnZSBib3VuZGFyaWVzIC4uLiBwYXNz
ZWQKKFhFTikgSFZNNzogIC0gR1MgYmFzZSBNU1JzIGFuZCBTV0FQR1MgLi4uIHBhc3NlZAooWEVO
KSBIVk03OiBQYXNzZWQgMiBvZiAyIHRlc3RzCihYRU4pIEhWTTc6IFdyaXRpbmcgU01CSU9TIHRh
YmxlcyAuLi4KKFhFTikgSFZNNzogTG9hZGluZyBTZWFCSU9TIC4uLgooWEVOKSBIVk03OiBDcmVh
dGluZyBNUCB0YWJsZXMgLi4uCihYRU4pIEhWTTc6IExvYWRpbmcgQUNQSSAuLi4KKFhFTikgSFZN
Nzogdm04NiBUU1MgYXQgZmMwMGEwODAKKFhFTikgSFZNNzogQklPUyBtYXA6CihYRU4pIEhWTTc6
ICAxMDAwMC0xMDBkMzogU2NyYXRjaCBzcGFjZQooWEVOKSBIVk03OiAgZTAwMDAtZmZmZmY6IE1h
aW4gQklPUwooWEVOKSBIVk03OiBFODIwIHRhYmxlOgooWEVOKSBIVk03OiAgWzAwXTogMDAwMDAw
MDA6MDAwMDAwMDAgLSAwMDAwMDAwMDowMDBhMDAwMDogUkFNCihYRU4pIEhWTTc6ICBIT0xFOiAw
MDAwMDAwMDowMDBhMDAwMCAtIDAwMDAwMDAwOjAwMGUwMDAwCihYRU4pIEhWTTc6ICBbMDFdOiAw
MDAwMDAwMDowMDBlMDAwMCAtIDAwMDAwMDAwOjAwMTAwMDAwOiBSRVNFUlZFRAooWEVOKSBIVk03
OiAgWzAyXTogMDAwMDAwMDA6MDAxMDAwMDAgLSAwMDAwMDAwMDo3ZjgwMDAwMDogUkFNCihYRU4p
IEhWTTc6ICBIT0xFOiAwMDAwMDAwMDo3ZjgwMDAwMCAtIDAwMDAwMDAwOmZjMDAwMDAwCihYRU4p
IEhWTTc6ICBbMDNdOiAwMDAwMDAwMDpmYzAwMDAwMCAtIDAwMDAwMDAxOjAwMDAwMDAwOiBSRVNF
UlZFRAooWEVOKSBIVk03OiBJbnZva2luZyBTZWFCSU9TIC4uLgooWEVOKSBzdGR2Z2EuYzoxNDc6
ZDcgZW50ZXJpbmcgc3RkdmdhIGFuZCBjYWNoaW5nIG1vZGVzCihYRU4pIG1lbW9yeV9tYXA6YWRk
OiBkb203IGdmbj1mMzAwMCBtZm49ZjdhMDAgbnI9ODAKKFhFTikgbWVtb3J5X21hcDpyZW1vdmU6
IGRvbTcgZ2ZuPWYzMDAwIG1mbj1mN2EwMCBucj04MAooWEVOKSBzdGR2Z2EuYzoxNTE6ZDcgbGVh
dmluZyBzdGR2Z2EKKFhFTikgbWVtb3J5X21hcDpyZW1vdmU6IGRvbTcgZ2ZuPWYzMDgwIG1mbj1m
N2E4MCBucj00MAooWEVOKSBtZW1vcnlfbWFwOnJlbW92ZTogZG9tNyBnZm49ZjMwZTAgbWZuPWY3
YWMwIG5yPTIKKFhFTikgbWVtb3J5X21hcDpyZW1vdmU6IGRvbTcgZ2ZuPWYzMGUzIG1mbj1mN2Fj
MyBucj0xCihYRU4pIGlvcG9ydF9tYXA6cmVtb3ZlOiBkb203IGdwb3J0PWMyMDAgbXBvcnQ9ZDAw
MCBucj0xMDAKKFhFTikgbWVtb3J5X21hcDphZGQ6IGRvbTcgZ2ZuPWYzMDgwIG1mbj1mN2E4MCBu
cj00MAooWEVOKSBtZW1vcnlfbWFwOmFkZDogZG9tNyBnZm49ZjMwZTAgbWZuPWY3YWMwIG5yPTIK
KFhFTikgbWVtb3J5X21hcDphZGQ6IGRvbTcgZ2ZuPWYzMGUzIG1mbj1mN2FjMyBucj0xCihYRU4p
IGlvcG9ydF9tYXA6YWRkOiBkb203IGdwb3J0PWMyMDAgbXBvcnQ9ZDAwMCBucj0xMDAKKFhFTikg
bWVtb3J5X21hcDpyZW1vdmU6IGRvbTcgZ2ZuPWYzMDgwIG1mbj1mN2E4MCBucj00MAooWEVOKSBt
ZW1vcnlfbWFwOnJlbW92ZTogZG9tNyBnZm49ZjMwZTAgbWZuPWY3YWMwIG5yPTIKKFhFTikgbWVt
b3J5X21hcDpyZW1vdmU6IGRvbTcgZ2ZuPWYzMGUzIG1mbj1mN2FjMyBucj0xCihYRU4pIGlvcG9y
dF9tYXA6cmVtb3ZlOiBkb203IGdwb3J0PWMyMDAgbXBvcnQ9ZDAwMCBucj0xMDAKKFhFTikgbWVt
b3J5X21hcDphZGQ6IGRvbTcgZ2ZuPWYzMDgwIG1mbj1mN2E4MCBucj00MAooWEVOKSBtZW1vcnlf
bWFwOmFkZDogZG9tNyBnZm49ZjMwZTAgbWZuPWY3YWMwIG5yPTIKKFhFTikgbWVtb3J5X21hcDph
ZGQ6IGRvbTcgZ2ZuPWYzMGUzIG1mbj1mN2FjMyBucj0xCihYRU4pIGlvcG9ydF9tYXA6YWRkOiBk
b203IGdwb3J0PWMyMDAgbXBvcnQ9ZDAwMCBucj0xMDAKKFhFTikgbWVtb3J5X21hcDpyZW1vdmU6
IGRvbTcgZ2ZuPWYzMDgwIG1mbj1mN2E4MCBucj00MAooWEVOKSBtZW1vcnlfbWFwOnJlbW92ZTog
ZG9tNyBnZm49ZjMwZTAgbWZuPWY3YWMwIG5yPTIKKFhFTikgbWVtb3J5X21hcDpyZW1vdmU6IGRv
bTcgZ2ZuPWYzMGUzIG1mbj1mN2FjMyBucj0xCihYRU4pIGlvcG9ydF9tYXA6cmVtb3ZlOiBkb203
IGdwb3J0PWMyMDAgbXBvcnQ9ZDAwMCBucj0xMDAKKFhFTikgbWVtb3J5X21hcDphZGQ6IGRvbTcg
Z2ZuPWYzMDgwIG1mbj1mN2E4MCBucj00MAooWEVOKSBtZW1vcnlfbWFwOmFkZDogZG9tNyBnZm49
ZjMwZTAgbWZuPWY3YWMwIG5yPTIKKFhFTikgbWVtb3J5X21hcDphZGQ6IGRvbTcgZ2ZuPWYzMGUz
IG1mbj1mN2FjMyBucj0xCihYRU4pIGlvcG9ydF9tYXA6YWRkOiBkb203IGdwb3J0PWMyMDAgbXBv
cnQ9ZDAwMCBucj0xMDAKKFhFTikgbWVtb3J5X21hcDpyZW1vdmU6IGRvbTcgZ2ZuPWYzMDgwIG1m
bj1mN2E4MCBucj00MAooWEVOKSBtZW1vcnlfbWFwOnJlbW92ZTogZG9tNyBnZm49ZjMwZTAgbWZu
PWY3YWMwIG5yPTIKKFhFTikgbWVtb3J5X21hcDpyZW1vdmU6IGRvbTcgZ2ZuPWYzMGUzIG1mbj1m
N2FjMyBucj0xCihYRU4pIGlvcG9ydF9tYXA6cmVtb3ZlOiBkb203IGdwb3J0PWMyMDAgbXBvcnQ9
ZDAwMCBucj0xMDAKKFhFTikgbWVtb3J5X21hcDphZGQ6IGRvbTcgZ2ZuPWYzMDgwIG1mbj1mN2E4
MCBucj00MAooWEVOKSBtZW1vcnlfbWFwOmFkZDogZG9tNyBnZm49ZjMwZTAgbWZuPWY3YWMwIG5y
PTIKKFhFTikgbWVtb3J5X21hcDphZGQ6IGRvbTcgZ2ZuPWYzMGUzIG1mbj1mN2FjMyBucj0xCihY
RU4pIGlvcG9ydF9tYXA6YWRkOiBkb203IGdwb3J0PWMyMDAgbXBvcnQ9ZDAwMCBucj0xMDAK
--bcaec53d5ed7de620704c637f31e
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--bcaec53d5ed7de620704c637f31e--


From xen-devel-bounces@lists.xen.org Wed Aug 01 17:56:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 17:56:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swd9O-0005G3-Vt; Wed, 01 Aug 2012 17:56:02 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Swd9N-0005Fu-1K
	for xen-devel@lists.xensource.com; Wed, 01 Aug 2012 17:56:01 +0000
Received: from [85.158.138.51:58066] by server-9.bemta-3.messagelabs.com id
	87/21-27628-0BD69105; Wed, 01 Aug 2012 17:56:00 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-7.tower-174.messagelabs.com!1343843759!21023619!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17111 invoked from network); 1 Aug 2012 17:55:59 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-7.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 17:55:59 -0000
X-IronPort-AV: E=Sophos;i="4.77,695,1336348800"; d="scan'208";a="13809146"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 17:55:59 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 1 Aug 2012 18:55:59 +0100
Date: Wed, 1 Aug 2012 18:55:41 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Jean Guyader <jean.guyader@eu.citrix.com>
In-Reply-To: <1321471508-31633-5-git-send-email-jean.guyader@eu.citrix.com>
Message-ID: <alpine.DEB.2.02.1208011846510.4645@kaball.uk.xensource.com>
References: <1321471508-31633-1-git-send-email-jean.guyader@eu.citrix.com>
	<1321471508-31633-2-git-send-email-jean.guyader@eu.citrix.com>
	<1321471508-31633-3-git-send-email-jean.guyader@eu.citrix.com>
	<1321471508-31633-4-git-send-email-jean.guyader@eu.citrix.com>
	<1321471508-31633-5-git-send-email-jean.guyader@eu.citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"Keir \(Xen.org\)" <keir@xen.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, "Tim
	\(Xen.org\)" <tim@xen.org>,
	"allen.m.kay@intel.com" <allen.m.kay@intel.com>,
	Jean Guyader <Jean.Guyader@citrix.com>,
	"JBeulich@suse.com" <JBeulich@suse.com>,
	Attilio Rao <attilio.rao@citrix.com>
Subject: [Xen-devel] Should we revert "mm: New XENMEM space,
	XENMAPSPACE_gmfn_range"?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 16 Nov 2011, Jean Guyader wrote:
> 
> XENMAPSPACE_gmfn_range is like XENMAPSPACE_gmfn but it runs on
> a range of pages. The size of the range is defined in a new field.
> 
> This new field .size is located in the 16 bits padding between .domid
> and .space in struct xen_add_to_physmap to stay compatible with older
> versions.
> 
> Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>

Hi all,
I was reading more about this commit, because this patch breaks the ABI
on ARM, when I realized that on x86 there is no standard that specifies
the alignment of fields in a struct.
As a consequence, I don't think we can really be sure that between .domid
and .space there are always 16 bits of padding.
I am afraid that if a user compiles Linux or another guest kernel with a
compiler other than gcc, this hypercall might break. In fact, it already
happened just by switching from x86 to ARM.
Also, considering that the memory.h interface is supposed to be ANSI C,
isn't it wrong to rely on compiler-specific artifacts anyway?
Considering that we haven't made any releases yet with this change to
memory.h, shouldn't we revert the commit before it is too late?

- Stefano

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 18:07:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 18:07:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwdJz-0005aw-DB; Wed, 01 Aug 2012 18:06:59 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1SwdJx-0005ar-CE
	for xen-devel@lists.xensource.com; Wed, 01 Aug 2012 18:06:57 +0000
Received: from [85.158.138.51:9057] by server-11.bemta-3.messagelabs.com id
	68/D1-00679-04079105; Wed, 01 Aug 2012 18:06:56 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-15.tower-174.messagelabs.com!1343844413!28186974!1
X-Originating-IP: [74.125.83.43]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13828 invoked from network); 1 Aug 2012 18:06:54 -0000
Received: from mail-ee0-f43.google.com (HELO mail-ee0-f43.google.com)
	(74.125.83.43)
	by server-15.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 18:06:54 -0000
Received: by eekd4 with SMTP id d4so2048620eek.30
	for <xen-devel@lists.xensource.com>;
	Wed, 01 Aug 2012 11:06:53 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:user-agent:date:subject:from:to:cc:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=4uDUp/HoIMB751+AbdxLkFzBfzh+iRR+0Y3541aaoLM=;
	b=vETiyCewbDDbqEjJpKDSJZyDa2A0nj0c6ecVei62vXKMCfnO2xqvouy8S/eKcbnJDB
	uTo6N5rz50zZcpuJkAnDhyEUbm6m4KNtrFOzAwbi8nReo1w1v939M2IL3hPuUt1oeXZZ
	7/R9zc6qbw2Qy1K/dWqtkm8PlmItaL2oVh+2r1UR98ltoCfTRBpgQ5Hi25gzLVMhNsn+
	kqBrBhDY3xIh7dOuRYGZwYV9KJWG6zOPmR0V8nO4245db6scKRL9ck/KCypsdyz3CXYl
	4eOuXx7FnNJk6M066EGJrJY3tSQSXej/EQm3C+SYtS/pP4jv+XkIGQ/Yab3zYHzKAjRc
	z96A==
Received: by 10.14.215.129 with SMTP id e1mr3412392eep.46.1343844413826;
	Wed, 01 Aug 2012 11:06:53 -0700 (PDT)
Received: from [192.168.1.3] (host86-178-46-238.range86-178.btcentralplus.com.
	[86.178.46.238])
	by mx.google.com with ESMTPS id 9sm10843274eei.12.2012.08.01.11.06.50
	(version=SSLv3 cipher=OTHER); Wed, 01 Aug 2012 11:06:53 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.33.0.120411
Date: Wed, 01 Aug 2012 19:06:44 +0100
From: Keir Fraser <keir@xen.org>
To: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	Jean Guyader <jean.guyader@eu.citrix.com>
Message-ID: <CC3F2EC4.477D8%keir@xen.org>
Thread-Topic: Should we revert "mm: New XENMEM space, XENMAPSPACE_gmfn_range"?
Thread-Index: Ac1wEGiVKsdlLoHn4UGsDn0mDUe6Qg==
In-Reply-To: <alpine.DEB.2.02.1208011846510.4645@kaball.uk.xensource.com>
Mime-version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"Tim \(Xen.org\)" <tim@xen.org>,
	"allen.m.kay@intel.com" <allen.m.kay@intel.com>,
	Jean Guyader <Jean.Guyader@citrix.com>,
	"JBeulich@suse.com" <JBeulich@suse.com>,
	Attilio Rao <attilio.rao@citrix.com>
Subject: Re: [Xen-devel] Should we revert "mm: New XENMEM space,
	XENMAPSPACE_gmfn_range"?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/08/2012 18:55, "Stefano Stabellini" <Stefano.Stabellini@eu.citrix.com>
wrote:

> On Wed, 16 Nov 2011, Jean Guyader wrote:
>> 
>> XENMAPSPACE_gmfn_range is like XENMAPSPACE_gmfn but it runs on
>> a range of pages. The size of the range is defined in a new field.
>> 
>> This new field .size is located in the 16 bits padding between .domid
>> and .space in struct xen_add_to_physmap to stay compatible with older
>> versions.
>> 
>> Signed-off-by: Jean Guyader <jean.guyader@eu.citrix.com>
> 
> Hi all,
> I was reading more about this commit because this patch breaks the ABI
> on ARM, when I realized that on x86 there is no standard that specifies
> the alignment of fields in a struct.
> As a consequence I don't think we can really be sure that between .domid
> and .space we always have 16 bits of padding.

Well, on x86 there *are* 16 bits of padding between the .domid and .space
fields. That's our ABI, regardless of whether we rely on not-really-existent
padding rules in the compiler. If someone compiles in an environment that
aligns things differently, we have to rewrite our headers. :)

We don't have a supported ABI on ARM until 4.2.0 is released at the
earliest.

Should we be changing our rules on public headers, allowing
compiler-specific extensions to precisely lay out our structures? Quite
possibly. We used to do that, but it got shot down by the ppc and ia64
arches, which wanted an easier life, relying on the compiler's default
layout for a particular platform. Of course those maintainers aren't
actually voting any more. ;)

 -- Keir

> I am afraid that if a user compiles Linux or another guest kernel with a
> compiler other than gcc, this hypercall might break. In fact it already
> happened just switching from x86 to ARM.
> Also, considering that the memory.h interface is supposed to be ANSI C,
> isn't it wrong to assume compiler specific artifacts anyway?
> Considering that we haven't made any releases yet with the change in
> memory.h, shouldn't we revert the commit before it is too late?
> 
> - Stefano



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> and .space we always have 16 bits of padding.

Well, on x86 there *are* 16 bits of padding between the .domid and .space
fields. That's our ABI, regardless of whether we rely on not-really-existent
padding rules in the compiler. If someone compiles in an environment that
aligns things differently, we have to rewrite our headers. :)

We don't have a supported ABI on ARM until 4.2.0 is released at the
earliest.

Should we be changing our rules on public headers, allowing
compiler-specific extensions to lay out our structures precisely? Quite
possibly. We used to do that, but it got shot down by the ppc and ia64
arches, whose maintainers wanted an easier life and preferred to rely on the
compiler's default layout for a particular platform. Of course those
maintainers aren't actually voting any more. ;)

 -- Keir

> I am afraid that if a user compiles Linux or another guest kernel with a
> compiler other than gcc, this hypercall might break. In fact it already
> happened just switching from x86 to ARM.
> Also, considering that the memory.h interface is supposed to be ANSI C,
> isn't it wrong to assume compiler specific artifacts anyway?
> Considering that we haven't made any releases yet with the change in
> memory.h, shouldn't we revert the commit before it is too late?
> 
> - Stefano



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 18:09:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 18:09:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwdLP-0005ev-Sd; Wed, 01 Aug 2012 18:08:27 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1SwdLP-0005el-3I
	for xen-devel@lists.xensource.com; Wed, 01 Aug 2012 18:08:27 +0000
Received: from [85.158.138.51:21185] by server-3.bemta-3.messagelabs.com id
	8E/C6-08301-A9079105; Wed, 01 Aug 2012 18:08:26 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-8.tower-174.messagelabs.com!1343844505!29898688!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8903 invoked from network); 1 Aug 2012 18:08:25 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-8.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 1 Aug 2012 18:08:25 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1SwdLK-0000ZK-1S; Wed, 01 Aug 2012 18:08:22 +0000
Date: Wed, 1 Aug 2012 19:08:21 +0100
From: Tim Deegan <tim@xen.org>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20120801180821.GA96456@ocelot.phlegethon.org>
References: <1321471508-31633-1-git-send-email-jean.guyader@eu.citrix.com>
	<1321471508-31633-2-git-send-email-jean.guyader@eu.citrix.com>
	<1321471508-31633-3-git-send-email-jean.guyader@eu.citrix.com>
	<1321471508-31633-4-git-send-email-jean.guyader@eu.citrix.com>
	<1321471508-31633-5-git-send-email-jean.guyader@eu.citrix.com>
	<alpine.DEB.2.02.1208011846510.4645@kaball.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.02.1208011846510.4645@kaball.uk.xensource.com>
User-Agent: Mutt/1.4.2.1i
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"Keir \(Xen.org\)" <keir@xen.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"allen.m.kay@intel.com" <allen.m.kay@intel.com>,
	Jean Guyader <jean.guyader@eu.citrix.com>,
	Jean Guyader <Jean.Guyader@citrix.com>,
	"JBeulich@suse.com" <JBeulich@suse.com>,
	Attilio Rao <attilio.rao@citrix.com>
Subject: Re: [Xen-devel] Should we revert "mm: New XENMEM space,
	XENMAPSPACE_gmfn_range"?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 18:55 +0100 on 01 Aug (1343847341), Stefano Stabellini wrote:
> I was reading more about this commit because this patch breaks the ABI
> on ARM, when I realized that on x86 there is no standard that specifies
> the alignment of fields in a struct.
> As a consequence I don't think we can really be sure that between .domid
> and .space we always have 16 bits of padding.
> I am afraid that if a user compiles Linux or another guest kernel with a
> compiler other than gcc, this hypercall might break. In fact it already
> happened just switching from x86 to ARM.

AIUI, reverting this patch won't solve that problem - either a compiler adds
16 bits of padding (as GCC and clang do), in which case we can use that
area for the new argument, or it doesn't, in which case it's already not
compatible with a GCC-built Xen.

> Also, considering that the memory.h interface is supposed to be ANSI C,
> isn't it wrong to assume compiler specific artifacts anyway?

Yes - people writing Windows drivers have this sort of alignment/padding
issue already in a number of places.  But since, as you say, there's no
standard describing this, the best we can do is keep any _new_ interfaces
size-aligned and explicitly sized.

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 18:20:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 18:20:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwdWq-0005xL-33; Wed, 01 Aug 2012 18:20:16 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1SwdWo-0005xD-KI
	for xen-devel@lists.xensource.com; Wed, 01 Aug 2012 18:20:14 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1343845207!10808582!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjY5Nzc4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12476 invoked from network); 1 Aug 2012 18:20:08 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-9.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 1 Aug 2012 18:20:08 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q71IJhQC019868
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 1 Aug 2012 18:19:44 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q71IJfGT024842
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 1 Aug 2012 18:19:42 GMT
Received: from abhmt120.oracle.com (abhmt120.oracle.com [141.146.116.72])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q71IJek5007626; Wed, 1 Aug 2012 13:19:41 -0500
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 01 Aug 2012 11:19:40 -0700
Date: Wed, 1 Aug 2012 11:19:37 -0700
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Message-ID: <20120801111937.4c9b3702@mantra.us.oracle.com>
In-Reply-To: <20120801145215.GP7227@phenom.dumpdata.com>
References: <alpine.DEB.2.02.1207251741470.26163@kaball.uk.xensource.com>
	<1343316846-25860-20-git-send-email-stefano.stabellini@eu.citrix.com>
	<20120801145215.GP7227@phenom.dumpdata.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.7.6 (GTK+ 2.18.9; x86_64-redhat-linux-gnu)
Mime-Version: 1.0
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	catalin.marinas@arm.com, tim@xen.org, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH 20/24] xen: update xen_add_to_physmap
	interface
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 1 Aug 2012 10:52:15 -0400
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:

> On Thu, Jul 26, 2012 at 04:34:02PM +0100, Stefano Stabellini wrote:
> > Update struct xen_add_to_physmap to be in sync with Xen's version
> > of the structure.
> > The size field was introduced by:
> > 
> > changeset:   24164:707d27fe03e7
> > user:        Jean Guyader <jean.guyader@eu.citrix.com>
> > date:        Fri Nov 18 13:42:08 2011 +0000
> > summary:     mm: New XENMEM space, XENMAPSPACE_gmfn_range
> > 
> > According to the comment:
> > 
> > "This new field .size is located in the 16 bits padding
> > between .domid and .space in struct xen_add_to_physmap to stay
> > compatible with older versions."
> > 
> > This is not true on ARM, where there is no padding, but it is valid
> > on X86, so introducing size is safe on X86 and it is going to fix
> > the interface for ARM.
> 
> Has this actually been checked for backwards compatibility? It sounds
> like it should work just fine with Xen 4.0, right?
> 
> I believe this also helps Mukesh's patches, so CC-ing him here for
> his Ack.

Yup, I already had that change in my tree.

thanks,
Mukesh



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 18:23:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 18:23:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwdZK-00063A-L7; Wed, 01 Aug 2012 18:22:50 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1SwdZI-00062y-Uj
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 18:22:49 +0000
Received: from [85.158.143.99:34606] by server-3.bemta-4.messagelabs.com id
	49/68-01511-8F379105; Wed, 01 Aug 2012 18:22:48 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-10.tower-216.messagelabs.com!1343845366!23421762!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjEzMjU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17303 invoked from network); 1 Aug 2012 18:22:47 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 18:22:47 -0000
X-IronPort-AV: E=Sophos;i="4.77,695,1336363200"; d="scan'208";a="33243629"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 14:22:46 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 1 Aug 2012 14:22:45 -0400
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1SwdZF-0007ln-Gl;
	Wed, 01 Aug 2012 19:22:45 +0100
Message-ID: <501973F5.4030804@citrix.com>
Date: Wed, 1 Aug 2012 19:22:45 +0100
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120714 Thunderbird/14.0
MIME-Version: 1.0
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>, 
	Ian Campbell <Ian.Campbell@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>
X-Enigmail-Version: 1.4.3
Content-Type: multipart/mixed; boundary="------------050907080400010102090902"
Subject: [Xen-devel] tools/makefile: Add build target
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--------------050907080400010102090902
Content-Type: text/plain; charset="ISO-8859-1"
Content-Transfer-Encoding: 7bit

The alternative is to change the root makefile to call "$(MAKE) -C
tools/ all" instead, but this way feels neater to me.

-- 
Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
T: +44 (0)1223 225 900, http://www.citrix.com


--------------050907080400010102090902
Content-Type: text/x-patch; name="tools-build-target.patch"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="tools-build-target.patch"

# HG changeset patch
# Parent f8ea2e12ed5ab33f160e65f22c9ce8f379863053
tools/makefile: Add build target

The root Makefile has a build target which calls "$(MAKE) build" in each of xen/
tools/ stubdom/ and docs/, which fails because of the tools/ Makefile.

This patch adds a phony build target, allowing a call to "make build" in the
repository root directory to still work.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

diff -r f8ea2e12ed5a tools/Makefile
--- a/tools/Makefile
+++ b/tools/Makefile
@@ -64,6 +64,9 @@ endif
 .PHONY: all
 all: subdirs-all
 
+.PHONY: build
+build: subdirs-all
+
 .PHONY: install
 install: subdirs-install
 	$(INSTALL_DIR) $(DESTDIR)/var/xen/dump

--------------050907080400010102090902
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--------------050907080400010102090902--


From xen-devel-bounces@lists.xen.org Wed Aug 01 19:12:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 19:12:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SweKg-0006n0-61; Wed, 01 Aug 2012 19:11:46 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1SweKd-0006mv-Qu
	for xen-devel@lists.xensource.com; Wed, 01 Aug 2012 19:11:44 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1343848294!11800714!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjY5Nzc4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 346 invoked from network); 1 Aug 2012 19:11:35 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-2.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 1 Aug 2012 19:11:35 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q71JBUNw011963
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 1 Aug 2012 19:11:30 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q71JBRO6010362
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 1 Aug 2012 19:11:28 GMT
Received: from abhmt102.oracle.com (abhmt102.oracle.com [141.146.116.54])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q71JBRSQ012765; Wed, 1 Aug 2012 14:11:27 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 01 Aug 2012 12:11:27 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 18DD5402B5; Wed,  1 Aug 2012 15:02:27 -0400 (EDT)
Date: Wed, 1 Aug 2012 15:02:27 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: xen-devel@lists.xensource.com, Ian Campbell <Ian.Campbell@eu.citrix.com>
Message-ID: <20120801190227.GA13272@phenom.dumpdata.com>
MIME-Version: 1.0
Content-Disposition: inline
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Subject: [Xen-devel] Regression in xen-netfront on v3.6.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

So I haven't done a git bisection yet. But if I choose git commit:
4b24ff71108164e047cf2c95990b77651163e315
    Merge tag 'for-v3.6' of git://git.infradead.org/battery-2.6

    Pull battery updates from Anton Vorontsov:


everything works nicely. Anything past that, so these merges:

konrad@phenom:~/ssd/linux$ git log --oneline --merges 4b24ff71108164e047cf2c95990b77651163e315..linus/master
2d53492 Merge tag 'irqdomain-for-linus' of git://git.secretlab.ca/git/linux-2.6
ac694db Merge branch 'akpm' (Andrew's patch-bomb)
a40a1d3 Merge tag 'vfio-for-v3.6' of git://github.com/awilliam/linux-vfio
3e9a970 Merge tag 'random_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/random
941c872 Merge tag 'rdma-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/roland/infiniband
8762541 Merge branch 'v4l_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-media
6dbb35b Merge tag 'nfs-for-3.6-2' of git://git.linux-nfs.org/projects/trondmy/linux-nfs
fd37ce3 Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
1da9b6b Merge branches 'cma', 'ipoib', 'ocrdma' and 'qib' into for-next
6aeea3e Merge remote-tracking branch 'origin' into irqdomain/next
931efdf Merge branch 'v4l_for_linus' into staging/for_v3.6
80c1834 Merge tag 'v3.5-rc6' into irqdomain/next

contain the culprit. I think it might be the networking pull, but I'm not sure.
Ian, any thoughts?
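The merge range above could also be narrowed mechanically with "git bisect run". Here is a self-contained sketch on a throwaway repository; the real run would use the good commit and linus/master as the endpoints, with a build-and-boot test in place of the trivial `test` predicate (the scratch repo plants the "regression" at commit 4):

```shell
# Demonstrate git bisect run on a scratch repo where commits 4..6 are "bad".
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
for i in 1 2 3 4 5 6; do
    echo "$i" > state
    git add state
    git commit -qm "commit $i"
done
git bisect start HEAD HEAD~5          # HEAD is bad, commit 1 is good
git bisect run sh -c 'test "$(cat state)" -le 3'   # exit 0 = good, else bad
git bisect log | tail -n 2            # reports "first bad commit" (commit 4)
git bisect reset
```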

Using config file "/test.xm".
Started domain latest (id=2)
[    0.000000] console [hvc0] enabled, bootconsole disabled
[    0.000000] Xen: using vcpuop timer interface
[    0.000000] installing Xen timer for CPU 0
[    0.000000] tsc: Detected 2294.530 MHz processor
[    0.000999] Calibrating delay loop (skipped), value calculated using timer frequency.. 4589.06 BogoMIPS (lpj=2294530)
[    0.000999] pid_max: default: 32768 minimum: 301
[    0.000999] Security Framework initialized
[    0.000999] SELinux:  Initializing.
[    0.000999] SELinux:  Starting in permissive mode
[    0.000999] Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes)
[    0.001520] Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes)
[    0.001875] Mount-cache hash table entries: 256
[    0.002007] Initializing cgroup subsys cpuacct
[    0.002013] Initializing cgroup subsys freezer
[    0.002070] ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
[    0.002070] ENERGY_PERF_BIAS: View and update with x86_energy_perf_policy(8)
[    0.002084] CPU: Physical Processor ID: 0
[    0.002087] CPU: Processor Core ID: 0
[    0.002094] Last level iTLB entries: 4KB 512, 2MB 0, 4MB 0
[    0.002094] Last level dTLB entries: 4KB 512, 2MB 32, 4MB 32
[    0.002094] tlb_flushall_shift is 0x5
[    0.002164] SMP alternatives: switching to UP code
[    0.025291] Freeing SMP alternatives: 24k freed
[    0.025356] cpu 0 spinlock event irq 17
[    0.025383] Performance Events: unsupported p6 CPU model 45 no PMU driver, software events only.
[    0.025551] NMI watchdog: disabled (cpu0): hardware events not enabled
[    0.025576] Brought up 1 CPUs
[    0.028642] kworker/u:0 (14) used greatest stack depth: 5936 bytes left
[    0.028675] Grant tables using version 2 layout.
[    0.028691] Grant table initialized
[    0.047616] RTC time: 165:165:165, date: 165/165/65
[    0.047661] NET: Registered protocol family 16
[    0.048184] dca service started, version 1.12.1
[    0.048545] PCI: setting up Xen PCI frontend stub
[    0.048552] PCI: pci_cache_line_size set to 64 bytes
[    0.049543] kworker/u:0 (51) used greatest stack depth: 5472 bytes left
[    0.054147] bio: create slab <bio-0> at 0
[    0.054240] ACPI: Interpreter disabled.
[    0.054288] xen/balloon: Initialising balloon driver.
[    0.055127] xen-balloon: Initialising balloon driver.
[    0.055127] vgaarb: loaded
[    0.056125] usbcore: registered new interface driver usbfs
[    0.056162] usbcore: registered new interface driver hub
[    0.056217] usbcore: registered new device driver usb
[    0.056425] PCI: System does not support PCI
[    0.056431] PCI: System does not support PCI
[    0.056617] NetLabel: Initializing
[    0.056624] NetLabel:  domain hash size = 128
[    0.056627] NetLabel:  protocols = UNLABELED CIPSOv4
[    0.056642] NetLabel:  unlabeled traffic allowed by default
[    0.056725] Switching to clocksource xen
[    0.056795] pnp: PnP ACPI: disabled
[    0.058698] NET: Registered protocol family 2
[    0.059805] TCP established hash table entries: 524288 (order: 11, 8388608 bytes)
[    0.061110] TCP bind hash table entries: 65536 (order: 8, 1048576 bytes)
[    0.061243] TCP: Hash tables configured (established 524288 bind 65536)
[    0.061281] TCP: reno registered
[    0.061304] UDP hash table entries: 2048 (order: 4, 65536 bytes)
[    0.061341] UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes)
[    0.061425] NET: Registered protocol family 1
[    0.061492] RPC: Registered named UNIX socket transport module.
[    0.061498] RPC: Registered udp transport module.
[    0.061504] RPC: Registered tcp transport module.
[    0.061510] RPC: Registered tcp NFSv4.1 backchannel transport module.
[    0.061518] PCI: CLS 0 bytes, default 64
[    0.061643] Trying to unpack rootfs image as initramfs...
[    0.382189] Freeing initrd memory: 362080k freed
[    0.499615] platform rtc_cmos: registered platform RTC device (no PNP device found)
[    0.499831] Machine check injector initialized
[    0.500181] microcode: CPU0 sig=0x206d2, pf=0x1, revision=0x8000020c
[    0.500229] microcode: Microcode Update Driver: v2.00 <tigran@aivazian.fsnet.co.uk>, Peter Oruba
[    0.500544] audit: initializing netlink socket (disabled)
[    0.500566] type=2000 audit(1343845740.901:1): initialized
[    0.515227] HugeTLB registered 2 MB page size, pre-allocated 0 pages
[    0.515358] VFS: Disk quotas dquot_6.5.2
[    0.515386] Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
[    0.515525] NFS: Registering the id_resolver key type
[    0.515544] Key type id_resolver registered
[    0.515551] Key type id_legacy registered
[    0.515599] NTFS driver 2.1.30 [Flags: R/W].
[    0.515706] msgmni has been set to 8021
[    0.515765] SELinux:  Registering netfilter hooks
[    0.516222] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 253)
[    0.516232] io scheduler noop registered
[    0.516236] io scheduler deadline registered
[    0.516243] io scheduler cfq registered (default)
[    0.516337] pci_hotplug: PCI Hot Plug PCI Core version: 0.5
[    0.516442] ioatdma: Intel(R) QuickData Technology Driver 4.00
[    0.532923] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
[    0.533399] Non-volatile memory driver v1.3
[    0.533406] Linux agpgart interface v0.103
[    0.533588] [drm] Initialized drm 1.1.0 20060810
[    0.535196] brd: module loaded
[    0.535992] loop: module loaded
[    0.536344] libphy: Fixed MDIO Bus: probed
[    0.536351] tun: Universal TUN/TAP device driver, 1.6
[    0.536354] tun: (C) 1999-2004 Max Krasnyansky <maxk@qualcomm.com>
[    0.536419] ixgbevf: Intel(R) 10 Gigabit PCI Express Virtual Function Network Driver - version 2.6.0-k
[    0.536428] ixgbevf: Copyright (c) 2009 - 2012 Intel Corporation.
[    0.536721] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
[    0.536729] ehci_hcd: block sizes: qh 104 qtd 96 itd 192 sitd 96
[    0.536770] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
[    0.536776] ohci_hcd: block sizes: ed 80 td 96
[    0.536826] uhci_hcd: USB Universal Host Controller Interface driver
[    0.536929] usbcore: registered new interface driver usblp
[    0.536977] usbcore: registered new interface driver libusual
[    0.537164] i8042: PNP: No PS/2 controller found. Probing ports directly.
[    0.538013] i8042: No controller found
[    0.538103] mousedev: PS/2 mouse device common for all mice
[    0.598349] rtc_cmos rtc_cmos: rtc core: registered rtc_cmos as rtc0
[    0.598404] rtc_cmos: probe of rtc_cmos failed with error -38
[    0.598559] EFI Variables Facility v0.08 2004-May-17
[    0.598701] Netfilter messages via NETLINK v0.30.
[    0.598719] nf_conntrack version 0.5.0 (16384 buckets, 65536 max)
[    0.598790] ctnetlink v0.93: registering with nfnetlink.
[    0.598971] ip_tables: (C) 2000-2006 Netfilter Core Team
[    0.599007] TCP: cubic registered
[    0.599014] Initializing XFRM netlink socket
[    0.599043] NET: Registered protocol family 10
[    0.599238] ip6_tables: (C) 2000-2006 Netfilter Core Team
[    0.599374] sit: IPv6 over IPv4 tunneling driver
[    0.599589] NET: Registered protocol family 17
[    0.599618] Key type dns_resolver registered
[    0.599808] PM: Hibernation image not present or could not be loaded.
[    0.599824] registered taskstats version 1
[    0.599848] XENBUS: Device with no driver: device/vkbd/0
[    0.599852] XENBUS: Device with no driver: device/vfb/0
[    0.599856] XENBUS: Device with no driver: device/vbd/51712
[    0.599860] XENBUS: Device with no driver: device/vif/0
[    0.599886]   Magic number: 1:252:3141
[    0.600271] Freeing unused kernel memory: 704k freed
[    0.600500] Write protecting the kernel read-only data: 8192k
[    0.602437] Freeing unused kernel memory: 132k freed
[    0.602589] Freeing unused kernel memory: 340k freed
init started: BusyBox v1.14.3 (2012-08-01 13:52:44 EDT)
[    0.607489] consoletype (1056) used greatest stack depth: 5288 bytes left
Mounting directories  [  OK  ]
mount: mount point /proc/bus/usb does not exist
[    0.781044] modprobe (1085) used greatest stack depth: 5048 bytes left
mount: mount point /sys/kernel/config does not exist
[    0.785748] core_filesystem (1057) used greatest stack depth: 4968 bytes left
[    0.793721] input: Xen Virtual Keyboard as /devices/virtual/input/input0
[    0.793892] input: Xen Virtual Pointer as /devices/virtual/input/input1
[    1.010121] Initialising Xen virtual ethernet driver.
[    1.124604] blkfront: xvda: flush diskcache: enabled
[    1.126118]  xvda: xvda1 xvda2 xvda3 xvda4
[    1.239316] udevd (1128): /proc/1128/oom_adj is deprecated, please use /proc/1128/oom_score_adj instead.
udevd-work[1130]: error opening ATTR{/sys/devices/system/cpu/cpu0/online} for writing: No such file or directory

[    1.395080] ip (1408) used greatest stack depth: 3912 bytes left
Waiting for devices [  OK  ]
Waiting for init.pre_custom [  OK  ]
Waiting for fb [  OK  ]
Starting..[/dev/fb0]
/dev/fb0: len:0
/dev/fb0: bits/pixel32
(7f44ddbc2000): Writting .. [800:600]
Done!
FATAL: Module agpgart_intel not found.
[    1.652131] [drm] radeon kernel modesetting enabled.
WARNING: Error inserting video (/lib/modules/3.5.0upstream-08854-g444fa66/kernel/drivers/acpi/video.ko): No such device
WARNING: Error inserting mxm_wmi (/lib/modules/3.5.0upstream-08854-g444fa66/kernel/drivers/platform/x86/mxm-wmi.ko): No such device
WARNING: Error inserting drm_kms_helper (/lib/modules/3.5.0upstream-08854-g444fa66/kernel/drivers/gpu/drm/drm_kms_helper.ko): No such device
WARNING: Error inserting ttm (/lib/modules/3.5.0upstream-08854-g444fa66/kernel/drivers/gpu/drm/ttm/ttm.ko): No such device
FATAL: Error inserting nouveau (/lib/modules/3.5.0upstream-08854-g444fa66/kernel/drivers/gpu/drm/nouveau/nouveau.ko): No such device
[    1.660288] Console: switching to colour frame buffer device 100x37
WARNING: Error inserting drm_kms_helper (/lib/modules/3.5.0upstream-08854-g444fa66/kernel/drivers/gpu/drm/drm_kms_helper.ko): No such device
FATAL: Error inserting i915 (/lib/modules/3.5.0upstream-08854-g444fa66/kernel/drivers/gpu/drm/i915/i915.ko): No such device
Starting..[/dev/fb0]
/dev/fb0: len:0
/dev/fb0: bits/pixel32
(7fb8669cc000): Writting .. [800:600]
Done!
VGA: 0000:
Waiting for network [  OK  ]
Bringing up loopback interface:  [  OK  ]
Bringing up interface eth0:  [    1.908592] BUG: unable to handle kernel NULL pointer dereference at 0000000000000010
[    1.908643] IP: [<ffffffffa0037750>] xennet_poll+0x980/0xec0 [xen_netfront]
[    1.908703] PGD ea1df067 PUD e8ada067 PMD 0 
[    1.908774] Oops: 0000 [#1] SMP 
[    1.908797] Modules linked in: fbcon tileblit font radeon bitblit softcursor ttm drm_kms_helper crc32c_intel xen_blkfront xen_netfront xen_fbfront fb_sys_fops sysimgblt sysfillrect syscopyarea xen_kbdfront xenfs xen_privcmd
[    1.908938] CPU 0 
[    1.908950] Pid: 2165, comm: ip Not tainted 3.5.0upstream-08854-g444fa66 #1  
[    1.908983] RIP: e030:[<ffffffffa0037750>]  [<ffffffffa0037750>] xennet_poll+0x980/0xec0 [xen_netfront]
[    1.909029] RSP: e02b:ffff8800ffc03db8  EFLAGS: 00010282
[    1.909055] RAX: ffff8800ea010140 RBX: ffff8800f00e86c0 RCX: 000000000000009a
[    1.909055] RDX: 0000000000000040 RSI: 000000000000005a RDI: ffff8800fa7dee80
[    1.909055] RBP: ffff8800ffc03ee8 R08: ffff8800f00e86d8 R09: ffff8800ea010000
[    1.909055] R10: dead000000200200 R11: dead000000100100 R12: ffff8800fa7dee80
[    1.909055] R13: 000000000000005a R14: ffff8800fa7dee80 R15: 0000000000000200
[    1.909055] FS:  00007fbafc188700(0000) GS:ffff8800ffc00000(0000) knlGS:0000000000000000
[    1.909055] CS:  e033 DS: 0000 ES: 0000 CR0: 000000008005003b
[    1.909055] CR2: 0000000000000010 CR3: 00000000ea108000 CR4: 0000000000002660
[    1.909055] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[    1.909055] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[    1.909055] Process ip (pid: 2165, threadinfo ffff8800ea0f2000, task ffff8800fa783040)
[    1.909055] Stack:
[    1.909055]  ffff8800e27e5040 ffff8800ffc03e88 ffff8800ffc03e68 ffff8800ffc03e48
[    1.909055]  7fffffffffffffff ffff8800ffc03e00 ffff8800e27e5040 ffff8800f00e86d8
[    1.909055]  ffff8800ffc03eb0 00000040ffffffff ffff8800f00e8000 00000000ffc03e30
[    1.909055] Call Trace:
[    1.909055]  <IRQ> 
[    1.909055]  [<ffffffff81066028>] ? pvclock_clocksource_read+0x58/0xd0
[    1.909055]  [<ffffffff81486352>] net_rx_action+0x112/0x240
[    1.909055]  [<ffffffff8107f319>] __do_softirq+0xb9/0x190
[    1.909055]  [<ffffffff815d8d7c>] call_softirq+0x1c/0x30
[    1.909055]  <EOI> 
[    1.909055]  [<ffffffff8103a435>] do_softirq+0x65/0xa0
[    1.909055]  [<ffffffff8107f834>] local_bh_enable_ip+0x94/0xa0
[    1.909055]  [<ffffffff815d11a4>] _raw_spin_unlock_bh+0x24/0x30
[    1.909055]  [<ffffffffa0036d44>] xennet_open+0x54/0xe0 [xen_netfront]
[    1.909055]  [<ffffffff81481dcf>] __dev_open+0xbf/0x120
[    1.909055]  [<ffffffff8148022c>] __dev_change_flags+0x9c/0x180
[    1.909055]  [<ffffffff81481cc3>] dev_change_flags+0x23/0x70
[    1.909055]  [<ffffffff81491062>] do_setlink+0x1c2/0xa10
[    1.909055]  [<ffffffff812b5d74>] ? nla_parse+0x34/0x110
[    1.909055]  [<ffffffff81494005>] rtnl_newlink+0x3a5/0x5c0
[    1.909055]  [<ffffffff812541c4>] ? selinux_capable+0x34/0x50
[    1.909055]  [<ffffffff81250223>] ? security_capable+0x13/0x20
[    1.909055]  [<ffffffff81491e07>] rtnetlink_rcv_msg+0x2c7/0x330
[    1.909055]  [<ffffffff810a18b7>] ? __might_sleep+0xe7/0x100
[    1.909055]  [<ffffffff81149a52>] ? kmem_cache_alloc_node+0x82/0x1d0
[    1.909055]  [<ffffffff8147a00c>] ? __skb_recv_datagram+0x11c/0x2f0
[    1.909055]  [<ffffffff81491b40>] ? rtnetlink_rcv+0x30/0x30
[    1.909055]  [<ffffffff814a9c89>] netlink_rcv_skb+0x99/0xc0
[    1.909055]  [<ffffffff81491b30>] rtnetlink_rcv+0x20/0x30
[    1.909055]  [<ffffffff814a9998>] netlink_unicast+0x1a8/0x220
[    1.909055]  [<ffffffff814aa535>] netlink_sendmsg+0x205/0x300
[    1.909055]  [<ffffffff8146ce19>] sock_sendmsg+0xb9/0xf0
[    1.909055]  [<ffffffff8146c51e>] ? copy_from_user+0x3e/0x50
[    1.909055]  [<ffffffff8146c576>] ? move_addr_to_kernel+0x46/0x80
[    1.909055]  [<ffffffff810a18b7>] ? __might_sleep+0xe7/0x100
[    1.909055]  [<ffffffff8146dd2d>] __sys_sendmsg+0x3dd/0x400
[    1.909055]  [<ffffffff8112c751>] ? handle_mm_fault+0x261/0x380
[    1.909055]  [<ffffffff815d4cd0>] ? do_page_fault+0x250/0x4f0
[    1.909055]  [<ffffffff8114a587>] ? kmem_cache_alloc+0x1a7/0x1f0
[    1.909055]  [<ffffffff811311a4>] ? do_brk+0x1b4/0x350
[    1.909055]  [<ffffffff8146df04>] sys_sendmsg+0x44/0x80
[    1.909055]  [<ffffffff815d7bf9>] system_call_fastpath+0x16/0x1b
[    1.909055] Code: 44 00 00 41 80 8c 24 a8 00 00 00 04 e9 2f ff ff ff 66 2e 0f 1f 84 00 00 00 00 00 41 8b 84 24 c8 00 00 00 49 03 84 24 d0 00 00 00 <80> 3c 25 10 00 00 00 00 48 8d 50 30 74 0f 48 83 3c 25 08 00 00 
[    1.909055] RIP  [<ffffffffa0037750>] xennet_poll+0x980/0xec0 [xen_netfront]
[    1.909055]  RSP <ffff8800ffc03db8>
[    1.909055] CR2: 0000000000000010
[    1.947298] ---[ end trace 3f4ba742dffbe90d ]---
[    1.947824] Kernel panic - not syncing: Fatal exception in interrupt

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

WARNING: Error inserting ttm (/lib/modules/3.5.0upstream-08854-g444fa66/kernel/drivers/gpu/drm/ttm/ttm.ko): No such device
FATAL: Error inserting nouveau (/lib/modules/3.5.0upstream-08854-g444fa66/kernel/drivers/gpu/drm/nouveau/nouveau.ko): No such device
[    1.660288] Console: switching to colour frame buffer device 100x37
WARNING: Error inserting drm_kms_helper (/lib/modules/3.5.0upstream-08854-g444fa66/kernel/drivers/gpu/drm/drm_kms_helper.ko): No such device
FATAL: Error inserting i915 (/lib/modules/3.5.0upstream-08854-g444fa66/kernel/drivers/gpu/drm/i915/i915.ko): No such device
Starting..[/dev/fb0]
/dev/fb0: len:0
/dev/fb0: bits/pixel32
(7fb8669cc000): Writting .. [800:600]
Done!
VGA: 0000:
Waiting for network [  OK  ]
Bringing up loopback interface:  [  OK  ]
Bringing up interface eth0:  [    1.908592] BUG: unable to handle kernel NULL pointer dereference at 0000000000000010
[    1.908643] IP: [<ffffffffa0037750>] xennet_poll+0x980/0xec0 [xen_netfront]
[    1.908703] PGD ea1df067 PUD e8ada067 PMD 0 
[    1.908774] Oops: 0000 [#1] SMP 
[    1.908797] Modules linked in: fbcon tileblit font radeon bitblit softcursor ttm drm_kms_helper crc32c_intel xen_blkfront xen_netfront xen_fbfront fb_sys_fops sysimgblt sysfillrect syscopyarea xen_kbdfront xenfs xen_privcmd
[    1.908938] CPU 0 
[    1.908950] Pid: 2165, comm: ip Not tainted 3.5.0upstream-08854-g444fa66 #1  
[    1.908983] RIP: e030:[<ffffffffa0037750>]  [<ffffffffa0037750>] xennet_poll+0x980/0xec0 [xen_netfront]
[    1.909029] RSP: e02b:ffff8800ffc03db8  EFLAGS: 00010282
[    1.909055] RAX: ffff8800ea010140 RBX: ffff8800f00e86c0 RCX: 000000000000009a
[    1.909055] RDX: 0000000000000040 RSI: 000000000000005a RDI: ffff8800fa7dee80
[    1.909055] RBP: ffff8800ffc03ee8 R08: ffff8800f00e86d8 R09: ffff8800ea010000
[    1.909055] R10: dead000000200200 R11: dead000000100100 R12: ffff8800fa7dee80
[    1.909055] R13: 000000000000005a R14: ffff8800fa7dee80 R15: 0000000000000200
[    1.909055] FS:  00007fbafc188700(0000) GS:ffff8800ffc00000(0000) knlGS:0000000000000000
[    1.909055] CS:  e033 DS: 0000 ES: 0000 CR0: 000000008005003b
[    1.909055] CR2: 0000000000000010 CR3: 00000000ea108000 CR4: 0000000000002660
[    1.909055] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[    1.909055] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[    1.909055] Process ip (pid: 2165, threadinfo ffff8800ea0f2000, task ffff8800fa783040)
[    1.909055] Stack:
[    1.909055]  ffff8800e27e5040 ffff8800ffc03e88 ffff8800ffc03e68 ffff8800ffc03e48
[    1.909055]  7fffffffffffffff ffff8800ffc03e00 ffff8800e27e5040 ffff8800f00e86d8
[    1.909055]  ffff8800ffc03eb0 00000040ffffffff ffff8800f00e8000 00000000ffc03e30
[    1.909055] Call Trace:
[    1.909055]  <IRQ> 
[    1.909055]  [<ffffffff81066028>] ? pvclock_clocksource_read+0x58/0xd0
[    1.909055]  [<ffffffff81486352>] net_rx_action+0x112/0x240
[    1.909055]  [<ffffffff8107f319>] __do_softirq+0xb9/0x190
[    1.909055]  [<ffffffff815d8d7c>] call_softirq+0x1c/0x30
[    1.909055]  <EOI> 
[    1.909055]  [<ffffffff8103a435>] do_softirq+0x65/0xa0
[    1.909055]  [<ffffffff8107f834>] local_bh_enable_ip+0x94/0xa0
[    1.909055]  [<ffffffff815d11a4>] _raw_spin_unlock_bh+0x24/0x30
[    1.909055]  [<ffffffffa0036d44>] xennet_open+0x54/0xe0 [xen_netfront]
[    1.909055]  [<ffffffff81481dcf>] __dev_open+0xbf/0x120
[    1.909055]  [<ffffffff8148022c>] __dev_change_flags+0x9c/0x180
[    1.909055]  [<ffffffff81481cc3>] dev_change_flags+0x23/0x70
[    1.909055]  [<ffffffff81491062>] do_setlink+0x1c2/0xa10
[    1.909055]  [<ffffffff812b5d74>] ? nla_parse+0x34/0x110
[    1.909055]  [<ffffffff81494005>] rtnl_newlink+0x3a5/0x5c0
[    1.909055]  [<ffffffff812541c4>] ? selinux_capable+0x34/0x50
[    1.909055]  [<ffffffff81250223>] ? security_capable+0x13/0x20
[    1.909055]  [<ffffffff81491e07>] rtnetlink_rcv_msg+0x2c7/0x330
[    1.909055]  [<ffffffff810a18b7>] ? __might_sleep+0xe7/0x100
[    1.909055]  [<ffffffff81149a52>] ? kmem_cache_alloc_node+0x82/0x1d0
[    1.909055]  [<ffffffff8147a00c>] ? __skb_recv_datagram+0x11c/0x2f0
[    1.909055]  [<ffffffff81491b40>] ? rtnetlink_rcv+0x30/0x30
[    1.909055]  [<ffffffff814a9c89>] netlink_rcv_skb+0x99/0xc0
[    1.909055]  [<ffffffff81491b30>] rtnetlink_rcv+0x20/0x30
[    1.909055]  [<ffffffff814a9998>] netlink_unicast+0x1a8/0x220
[    1.909055]  [<ffffffff814aa535>] netlink_sendmsg+0x205/0x300
[    1.909055]  [<ffffffff8146ce19>] sock_sendmsg+0xb9/0xf0
[    1.909055]  [<ffffffff8146c51e>] ? copy_from_user+0x3e/0x50
[    1.909055]  [<ffffffff8146c576>] ? move_addr_to_kernel+0x46/0x80
[    1.909055]  [<ffffffff810a18b7>] ? __might_sleep+0xe7/0x100
[    1.909055]  [<ffffffff8146dd2d>] __sys_sendmsg+0x3dd/0x400
[    1.909055]  [<ffffffff8112c751>] ? handle_mm_fault+0x261/0x380
[    1.909055]  [<ffffffff815d4cd0>] ? do_page_fault+0x250/0x4f0
[    1.909055]  [<ffffffff8114a587>] ? kmem_cache_alloc+0x1a7/0x1f0
[    1.909055]  [<ffffffff811311a4>] ? do_brk+0x1b4/0x350
[    1.909055]  [<ffffffff8146df04>] sys_sendmsg+0x44/0x80
[    1.909055]  [<ffffffff815d7bf9>] system_call_fastpath+0x16/0x1b
[    1.909055] Code: 44 00 00 41 80 8c 24 a8 00 00 00 04 e9 2f ff ff ff 66 2e 0f 1f 84 00 00 00 00 00 41 8b 84 24 c8 00 00 00 49 03 84 24 d0 00 00 00 <80> 3c 25 10 00 00 00 00 48 8d 50 30 74 0f 48 83 3c 25 08 00 00 
[    1.909055] RIP  [<ffffffffa0037750>] xennet_poll+0x980/0xec0 [xen_netfront]
[    1.909055]  RSP <ffff8800ffc03db8>
[    1.909055] CR2: 0000000000000010
[    1.947298] ---[ end trace 3f4ba742dffbe90d ]---
[    1.947824] Kernel panic - not syncing: Fatal exception in interrupt

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 19:51:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 19:51:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swew7-0007Ui-42; Wed, 01 Aug 2012 19:50:27 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1Swew5-0007Ud-3r
	for xen-devel@lists.xen.org; Wed, 01 Aug 2012 19:50:25 +0000
Received: from [85.158.143.99:10440] by server-2.bemta-4.messagelabs.com id
	51/D1-17938-08889105; Wed, 01 Aug 2012 19:50:24 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-3.tower-216.messagelabs.com!1343850622!29244260!1
X-Originating-IP: [192.89.123.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjg5LjEyMy4yNSA9PiA0NjkzOTc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13966 invoked from network); 1 Aug 2012 19:50:23 -0000
Received: from smtp.tele.fi (HELO smtp.tele.fi) (192.89.123.25)
	by server-3.tower-216.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 1 Aug 2012 19:50:23 -0000
X-Originating-Ip: [194.89.68.22]
Received: from ydin.reaktio.net (reaktio.net [194.89.68.22])
	by smtp.tele.fi (Postfix) with ESMTP id E0EF42C5C;
	Wed,  1 Aug 2012 22:50:21 +0300 (EEST)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id 987692005D; Wed,  1 Aug 2012 22:50:21 +0300 (EEST)
Date: Wed, 1 Aug 2012 22:50:21 +0300
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: George Dunlap <George.Dunlap@eu.citrix.com>
Message-ID: <20120801195021.GD19851@reaktio.net>
References: <50191C3E.4050003@xen.org>
	<CAFLBxZaWGBv5dxC9rYKJqrKV7faewhmEuGxM2XsTZ633QuMong@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAFLBxZaWGBv5dxC9rYKJqrKV7faewhmEuGxM2XsTZ633QuMong@mail.gmail.com>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: lars.kurth@xen.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Proposal: Xen Test Days
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Aug 01, 2012 at 03:50:16PM +0100, George Dunlap wrote:
> On Wed, Aug 1, 2012 at 1:08 PM, Lars Kurth <lars.kurth@xen.org> wrote:
> > Hi everybody,
> >
> > at OSCON I had a couple of discussions regarding Fedora-like Xen Test Days.
> > It may be a little bit late to put all the documentation together for this
> > release (i.e. a TODO list of what we want the community, and distros which
> > consume Xen, to test) in time to pull this off for this release cycle.
> >
> > But I wanted to raise this as a possibility and maybe something to build into
> > future release cycles.  If I look at http://wiki.xen.org/wiki/Xen_4.2, we have
> > fairly little (in fact almost nothing) in terms of how new
> > functionality would be tested. My gut feel is that the biggest benefit of a
> > Xen Test Day for 4.2 may be in testing XL. There were some improvements last
> > Monday, but I am not sure this is enough.
> >
> > If I look at https://fedoraproject.org/wiki/QA/Test_Days they have spent
> > quite a bit of effort on this, and it would probably take one person a week
> > or two full-time to pull this together. Also see,
> > https://fedoraproject.org/wiki/Test_Day:Current
> >
> > I just wanted to put this out there to see whether we should try for this
> > release cycle and gather views. It would require a volunteer to step up. If
> > the view is that this is not doable for 4.2, this may be a good thing to
> > try for patch releases as well as for Xen 4.3.
> 
> I think for 4.2, the key thing we want to test is the xm -> xl
> transition; and the instructions for that are really simple --
> basically, "Do what you normally do using xl instead of xm". :-)
> Secondary things we want tested involve just installing it on
> different software setups (e.g., distros), and hardware testing.  But
> I think those will come as a matter of course with the first one.
> 

Xen hypervisor UEFI boot testing would be nice as well.. Xen 4.2 has the hypervisor EFI patches,
but the dom0 kernel also needs EFI patches, and that makes testing a bit more difficult..

The upstream Linux pvops dom0 kernel doesn't have EFI support yet;
only Suse's xenlinux patches have EFI support afaik..

-- Pasi


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 20:51:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 20:51:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swfsm-00088N-As; Wed, 01 Aug 2012 20:51:04 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <robherring2@gmail.com>) id 1SwdiA-0006Lv-Mn
	for xen-devel@lists.xensource.com; Wed, 01 Aug 2012 18:31:58 +0000
X-Env-Sender: robherring2@gmail.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1343845675!10809533!1
X-Originating-IP: [209.85.160.171]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27817 invoked from network); 1 Aug 2012 18:27:57 -0000
Received: from mail-gh0-f171.google.com (HELO mail-gh0-f171.google.com)
	(209.85.160.171)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 18:27:57 -0000
Received: by ghy10 with SMTP id 10so9575727ghy.30
	for <xen-devel@lists.xensource.com>;
	Wed, 01 Aug 2012 11:27:55 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=message-id:date:from:user-agent:mime-version:to:cc:subject
	:references:in-reply-to:content-type:content-transfer-encoding;
	bh=B7/1KGcZ+7wyZ3ZOZ6DKam10uMkJthOTWOS+dBdsvYE=;
	b=xc+y3+CG6GTtfEX4PlQ2t6lnqhlCuXMzKIhEYCErQeBdSwoUctECKghU1dXRgpZvdF
	t+d6A9EmqSk6j5DsehAlxCPbaa+4VmblvuUO8gGuaGBhUs1zHAfCuwvXiXONnkxptTtO
	OnVnmOICorzLBKx5oFoLU8wNCYOrZrqgG2MS8Hsl/zkZMYsQ5u0+L1wQffInFgsERQcU
	PUiCKXK5MhErjzFLyDV/By4Go1vgfP5bjWxqDzqwWbV0uYoPoSFDHSX5lzdhjYhVx5tg
	ZHdUVTV1VA2kXM3KBQwsxJBCDFp2F0wypY67dBQMH/hJqY7KjQ1susDv4C0J9kFUFx6V
	PaTw==
Received: by 10.60.29.228 with SMTP id n4mr30191391oeh.27.1343845675511;
	Wed, 01 Aug 2012 11:27:55 -0700 (PDT)
Received: from [10.10.10.90] ([173.226.190.126])
	by mx.google.com with ESMTPS id hd10sm3168007obc.8.2012.08.01.11.27.53
	(version=SSLv3 cipher=OTHER); Wed, 01 Aug 2012 11:27:53 -0700 (PDT)
Message-ID: <50197527.3070007@gmail.com>
Date: Wed, 01 Aug 2012 13:27:51 -0500
From: Rob Herring <robherring2@gmail.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120714 Thunderbird/14.0
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1207251741470.26163@kaball.uk.xensource.com>
	<1343316846-25860-1-git-send-email-stefano.stabellini@eu.citrix.com>
In-Reply-To: <1343316846-25860-1-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailman-Approved-At: Wed, 01 Aug 2012 20:51:02 +0000
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, konrad.wilk@oracle.com,
	catalin.marinas@arm.com, tim@xen.org, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH 01/24] arm: initial Xen support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 07/26/2012 10:33 AM, Stefano Stabellini wrote:
> - Basic hypervisor.h and interface.h definitions.
> - Skeleton enlighten.c, set xen_start_info to an empty struct.
> - Do not limit xen_initial_domain to PV guests.
> 
> The new code only compiles when CONFIG_XEN is set, that is going to be
> added to arch/arm/Kconfig in a later patch.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> ---
>  arch/arm/Makefile                     |    1 +
>  arch/arm/include/asm/hypervisor.h     |    6 +++
>  arch/arm/include/asm/xen/hypervisor.h |   19 ++++++++++
>  arch/arm/include/asm/xen/interface.h  |   64 +++++++++++++++++++++++++++++++++

These headers don't seem particularly ARM specific. Could they be moved
to asm-generic or include/linux?

Rob

>  arch/arm/xen/Makefile                 |    1 +
>  arch/arm/xen/enlighten.c              |   35 ++++++++++++++++++
>  include/xen/xen.h                     |    2 +-
>  7 files changed, 127 insertions(+), 1 deletions(-)
>  create mode 100644 arch/arm/include/asm/hypervisor.h
>  create mode 100644 arch/arm/include/asm/xen/hypervisor.h
>  create mode 100644 arch/arm/include/asm/xen/interface.h
>  create mode 100644 arch/arm/xen/Makefile
>  create mode 100644 arch/arm/xen/enlighten.c
> 
> diff --git a/arch/arm/Makefile b/arch/arm/Makefile
> index 0298b00..70aaa82 100644
> --- a/arch/arm/Makefile
> +++ b/arch/arm/Makefile
> @@ -246,6 +246,7 @@ endif
>  core-$(CONFIG_FPE_NWFPE)	+= arch/arm/nwfpe/
>  core-$(CONFIG_FPE_FASTFPE)	+= $(FASTFPE_OBJ)
>  core-$(CONFIG_VFP)		+= arch/arm/vfp/
> +core-$(CONFIG_XEN)		+= arch/arm/xen/
>  
>  # If we have a machine-specific directory, then include it in the build.
>  core-y				+= arch/arm/kernel/ arch/arm/mm/ arch/arm/common/
> diff --git a/arch/arm/include/asm/hypervisor.h b/arch/arm/include/asm/hypervisor.h
> new file mode 100644
> index 0000000..b90d9e5
> --- /dev/null
> +++ b/arch/arm/include/asm/hypervisor.h
> @@ -0,0 +1,6 @@
> +#ifndef _ASM_ARM_HYPERVISOR_H
> +#define _ASM_ARM_HYPERVISOR_H
> +
> +#include <asm/xen/hypervisor.h>
> +
> +#endif
> diff --git a/arch/arm/include/asm/xen/hypervisor.h b/arch/arm/include/asm/xen/hypervisor.h
> new file mode 100644
> index 0000000..d7ab99a
> --- /dev/null
> +++ b/arch/arm/include/asm/xen/hypervisor.h
> @@ -0,0 +1,19 @@
> +#ifndef _ASM_ARM_XEN_HYPERVISOR_H
> +#define _ASM_ARM_XEN_HYPERVISOR_H
> +
> +extern struct shared_info *HYPERVISOR_shared_info;
> +extern struct start_info *xen_start_info;
> +
> +/* Lazy mode for batching updates / context switch */
> +enum paravirt_lazy_mode {
> +	PARAVIRT_LAZY_NONE,
> +	PARAVIRT_LAZY_MMU,
> +	PARAVIRT_LAZY_CPU,
> +};
> +
> +static inline enum paravirt_lazy_mode paravirt_get_lazy_mode(void)
> +{
> +	return PARAVIRT_LAZY_NONE;
> +}
> +
> +#endif /* _ASM_ARM_XEN_HYPERVISOR_H */
> diff --git a/arch/arm/include/asm/xen/interface.h b/arch/arm/include/asm/xen/interface.h
> new file mode 100644
> index 0000000..6c3ab59
> --- /dev/null
> +++ b/arch/arm/include/asm/xen/interface.h
> @@ -0,0 +1,64 @@
> +/******************************************************************************
> + * Guest OS interface to ARM Xen.
> + *
> + * Stefano Stabellini <stefano.stabellini@eu.citrix.com>, Citrix, 2011
> + */
> +
> +#ifndef _ASM_ARM_XEN_INTERFACE_H
> +#define _ASM_ARM_XEN_INTERFACE_H
> +
> +#include <linux/types.h>
> +
> +#define __DEFINE_GUEST_HANDLE(name, type) \
> +	typedef type * __guest_handle_ ## name
> +
> +#define DEFINE_GUEST_HANDLE_STRUCT(name) \
> +	__DEFINE_GUEST_HANDLE(name, struct name)
> +#define DEFINE_GUEST_HANDLE(name) __DEFINE_GUEST_HANDLE(name, name)
> +#define GUEST_HANDLE(name)        __guest_handle_ ## name
> +
> +#define set_xen_guest_handle(hnd, val)			\
> +	do {						\
> +		if (sizeof(hnd) == 8)			\
> +			*(uint64_t *)&(hnd) = 0;	\
> +		(hnd) = val;				\
> +	} while (0)
> +
> +#ifndef __ASSEMBLY__
> +/* Guest handles for primitive C types. */
> +__DEFINE_GUEST_HANDLE(uchar, unsigned char);
> +__DEFINE_GUEST_HANDLE(uint,  unsigned int);
> +__DEFINE_GUEST_HANDLE(ulong, unsigned long);
> +DEFINE_GUEST_HANDLE(char);
> +DEFINE_GUEST_HANDLE(int);
> +DEFINE_GUEST_HANDLE(long);
> +DEFINE_GUEST_HANDLE(void);
> +DEFINE_GUEST_HANDLE(uint64_t);
> +DEFINE_GUEST_HANDLE(uint32_t);
> +
> +/* Maximum number of virtual CPUs in multi-processor guests. */
> +#define MAX_VIRT_CPUS 1
> +
> +struct arch_vcpu_info { };
> +struct arch_shared_info { };
> +
> +/* XXX: Move pvclock definitions some place arch independent */
> +struct pvclock_vcpu_time_info {
> +	u32   version;
> +	u32   pad0;
> +	u64   tsc_timestamp;
> +	u64   system_time;
> +	u32   tsc_to_system_mul;
> +	s8    tsc_shift;
> +	u8    flags;
> +	u8    pad[2];
> +} __attribute__((__packed__)); /* 32 bytes */
> +
> +struct pvclock_wall_clock {
> +	u32   version;
> +	u32   sec;
> +	u32   nsec;
> +} __attribute__((__packed__));
> +#endif
> +
> +#endif /* _ASM_ARM_XEN_INTERFACE_H */
> diff --git a/arch/arm/xen/Makefile b/arch/arm/xen/Makefile
> new file mode 100644
> index 0000000..0bad594
> --- /dev/null
> +++ b/arch/arm/xen/Makefile
> @@ -0,0 +1 @@
> +obj-y		:= enlighten.o
> diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
> new file mode 100644
> index 0000000..d27c2a6
> --- /dev/null
> +++ b/arch/arm/xen/enlighten.c
> @@ -0,0 +1,35 @@
> +#include <xen/xen.h>
> +#include <xen/interface/xen.h>
> +#include <xen/interface/memory.h>
> +#include <xen/platform_pci.h>
> +#include <asm/xen/hypervisor.h>
> +#include <asm/xen/hypercall.h>
> +#include <linux/module.h>
> +
> +struct start_info _xen_start_info;
> +struct start_info *xen_start_info = &_xen_start_info;
> +EXPORT_SYMBOL_GPL(xen_start_info);
> +
> +enum xen_domain_type xen_domain_type = XEN_NATIVE;
> +EXPORT_SYMBOL_GPL(xen_domain_type);
> +
> +struct shared_info xen_dummy_shared_info;
> +struct shared_info *HYPERVISOR_shared_info = (void *)&xen_dummy_shared_info;
> +
> +DEFINE_PER_CPU(struct vcpu_info *, xen_vcpu);
> +
> +/* XXX: to be removed */
> +__read_mostly int xen_have_vector_callback;
> +EXPORT_SYMBOL_GPL(xen_have_vector_callback);
> +
> +int xen_platform_pci_unplug = XEN_UNPLUG_ALL;
> +EXPORT_SYMBOL_GPL(xen_platform_pci_unplug);
> +
> +int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
> +			       unsigned long addr,
> +			       unsigned long mfn, int nr,
> +			       pgprot_t prot, unsigned domid)
> +{
> +	return -ENOSYS;
> +}
> +EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_range);
> diff --git a/include/xen/xen.h b/include/xen/xen.h
> index a164024..2c0d3a5 100644
> --- a/include/xen/xen.h
> +++ b/include/xen/xen.h
> @@ -23,7 +23,7 @@ extern enum xen_domain_type xen_domain_type;
>  #include <xen/interface/xen.h>
>  #include <asm/xen/hypervisor.h>
>  
> -#define xen_initial_domain()	(xen_pv_domain() && \
> +#define xen_initial_domain()	(xen_domain() && \
>  				 xen_start_info->flags & SIF_INITDOMAIN)
>  #else  /* !CONFIG_XEN_DOM0 */
>  #define xen_initial_domain()	(0)
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 20:51:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 20:51:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swfsm-00088N-As; Wed, 01 Aug 2012 20:51:04 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <robherring2@gmail.com>) id 1SwdiA-0006Lv-Mn
	for xen-devel@lists.xensource.com; Wed, 01 Aug 2012 18:31:58 +0000
X-Env-Sender: robherring2@gmail.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1343845675!10809533!1
X-Originating-IP: [209.85.160.171]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27817 invoked from network); 1 Aug 2012 18:27:57 -0000
Received: from mail-gh0-f171.google.com (HELO mail-gh0-f171.google.com)
	(209.85.160.171)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 18:27:57 -0000
Received: by ghy10 with SMTP id 10so9575727ghy.30
	for <xen-devel@lists.xensource.com>;
	Wed, 01 Aug 2012 11:27:55 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=message-id:date:from:user-agent:mime-version:to:cc:subject
	:references:in-reply-to:content-type:content-transfer-encoding;
	bh=B7/1KGcZ+7wyZ3ZOZ6DKam10uMkJthOTWOS+dBdsvYE=;
	b=xc+y3+CG6GTtfEX4PlQ2t6lnqhlCuXMzKIhEYCErQeBdSwoUctECKghU1dXRgpZvdF
	t+d6A9EmqSk6j5DsehAlxCPbaa+4VmblvuUO8gGuaGBhUs1zHAfCuwvXiXONnkxptTtO
	OnVnmOICorzLBKx5oFoLU8wNCYOrZrqgG2MS8Hsl/zkZMYsQ5u0+L1wQffInFgsERQcU
	PUiCKXK5MhErjzFLyDV/By4Go1vgfP5bjWxqDzqwWbV0uYoPoSFDHSX5lzdhjYhVx5tg
	ZHdUVTV1VA2kXM3KBQwsxJBCDFp2F0wypY67dBQMH/hJqY7KjQ1susDv4C0J9kFUFx6V
	PaTw==
Received: by 10.60.29.228 with SMTP id n4mr30191391oeh.27.1343845675511;
	Wed, 01 Aug 2012 11:27:55 -0700 (PDT)
Received: from [10.10.10.90] ([173.226.190.126])
	by mx.google.com with ESMTPS id hd10sm3168007obc.8.2012.08.01.11.27.53
	(version=SSLv3 cipher=OTHER); Wed, 01 Aug 2012 11:27:53 -0700 (PDT)
Message-ID: <50197527.3070007@gmail.com>
Date: Wed, 01 Aug 2012 13:27:51 -0500
From: Rob Herring <robherring2@gmail.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120714 Thunderbird/14.0
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1207251741470.26163@kaball.uk.xensource.com>
	<1343316846-25860-1-git-send-email-stefano.stabellini@eu.citrix.com>
In-Reply-To: <1343316846-25860-1-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailman-Approved-At: Wed, 01 Aug 2012 20:51:02 +0000
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, konrad.wilk@oracle.com,
	catalin.marinas@arm.com, tim@xen.org, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH 01/24] arm: initial Xen support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 07/26/2012 10:33 AM, Stefano Stabellini wrote:
> - Basic hypervisor.h and interface.h definitions.
> - Skeleton enlighten.c, set xen_start_info to an empty struct.
> - Do not limit xen_initial_domain to PV guests.
> 
> The new code only compiles when CONFIG_XEN is set, which is going to be
> added to arch/arm/Kconfig in a later patch.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> ---
>  arch/arm/Makefile                     |    1 +
>  arch/arm/include/asm/hypervisor.h     |    6 +++
>  arch/arm/include/asm/xen/hypervisor.h |   19 ++++++++++
>  arch/arm/include/asm/xen/interface.h  |   64 +++++++++++++++++++++++++++++++++

These headers don't seem particularly ARM-specific. Could they be moved
to asm-generic or include/linux?

Rob

>  arch/arm/xen/Makefile                 |    1 +
>  arch/arm/xen/enlighten.c              |   35 ++++++++++++++++++
>  include/xen/xen.h                     |    2 +-
>  7 files changed, 127 insertions(+), 1 deletions(-)
>  create mode 100644 arch/arm/include/asm/hypervisor.h
>  create mode 100644 arch/arm/include/asm/xen/hypervisor.h
>  create mode 100644 arch/arm/include/asm/xen/interface.h
>  create mode 100644 arch/arm/xen/Makefile
>  create mode 100644 arch/arm/xen/enlighten.c
> 
> diff --git a/arch/arm/Makefile b/arch/arm/Makefile
> index 0298b00..70aaa82 100644
> --- a/arch/arm/Makefile
> +++ b/arch/arm/Makefile
> @@ -246,6 +246,7 @@ endif
>  core-$(CONFIG_FPE_NWFPE)	+= arch/arm/nwfpe/
>  core-$(CONFIG_FPE_FASTFPE)	+= $(FASTFPE_OBJ)
>  core-$(CONFIG_VFP)		+= arch/arm/vfp/
> +core-$(CONFIG_XEN)		+= arch/arm/xen/
>  
>  # If we have a machine-specific directory, then include it in the build.
>  core-y				+= arch/arm/kernel/ arch/arm/mm/ arch/arm/common/
> diff --git a/arch/arm/include/asm/hypervisor.h b/arch/arm/include/asm/hypervisor.h
> new file mode 100644
> index 0000000..b90d9e5
> --- /dev/null
> +++ b/arch/arm/include/asm/hypervisor.h
> @@ -0,0 +1,6 @@
> +#ifndef _ASM_ARM_HYPERVISOR_H
> +#define _ASM_ARM_HYPERVISOR_H
> +
> +#include <asm/xen/hypervisor.h>
> +
> +#endif
> diff --git a/arch/arm/include/asm/xen/hypervisor.h b/arch/arm/include/asm/xen/hypervisor.h
> new file mode 100644
> index 0000000..d7ab99a
> --- /dev/null
> +++ b/arch/arm/include/asm/xen/hypervisor.h
> @@ -0,0 +1,19 @@
> +#ifndef _ASM_ARM_XEN_HYPERVISOR_H
> +#define _ASM_ARM_XEN_HYPERVISOR_H
> +
> +extern struct shared_info *HYPERVISOR_shared_info;
> +extern struct start_info *xen_start_info;
> +
> +/* Lazy mode for batching updates / context switch */
> +enum paravirt_lazy_mode {
> +	PARAVIRT_LAZY_NONE,
> +	PARAVIRT_LAZY_MMU,
> +	PARAVIRT_LAZY_CPU,
> +};
> +
> +static inline enum paravirt_lazy_mode paravirt_get_lazy_mode(void)
> +{
> +	return PARAVIRT_LAZY_NONE;
> +}
> +
> +#endif /* _ASM_ARM_XEN_HYPERVISOR_H */
> diff --git a/arch/arm/include/asm/xen/interface.h b/arch/arm/include/asm/xen/interface.h
> new file mode 100644
> index 0000000..6c3ab59
> --- /dev/null
> +++ b/arch/arm/include/asm/xen/interface.h
> @@ -0,0 +1,64 @@
> +/******************************************************************************
> + * Guest OS interface to ARM Xen.
> + *
> + * Stefano Stabellini <stefano.stabellini@eu.citrix.com>, Citrix, 2011
> + */
> +
> +#ifndef _ASM_ARM_XEN_INTERFACE_H
> +#define _ASM_ARM_XEN_INTERFACE_H
> +
> +#include <linux/types.h>
> +
> +#define __DEFINE_GUEST_HANDLE(name, type) \
> +	typedef type * __guest_handle_ ## name
> +
> +#define DEFINE_GUEST_HANDLE_STRUCT(name) \
> +	__DEFINE_GUEST_HANDLE(name, struct name)
> +#define DEFINE_GUEST_HANDLE(name) __DEFINE_GUEST_HANDLE(name, name)
> +#define GUEST_HANDLE(name)        __guest_handle_ ## name
> +
> +#define set_xen_guest_handle(hnd, val)			\
> +	do {						\
> +		if (sizeof(hnd) == 8)			\
> +			*(uint64_t *)&(hnd) = 0;	\
> +		(hnd) = val;				\
> +	} while (0)
> +
> +#ifndef __ASSEMBLY__
> +/* Guest handles for primitive C types. */
> +__DEFINE_GUEST_HANDLE(uchar, unsigned char);
> +__DEFINE_GUEST_HANDLE(uint,  unsigned int);
> +__DEFINE_GUEST_HANDLE(ulong, unsigned long);
> +DEFINE_GUEST_HANDLE(char);
> +DEFINE_GUEST_HANDLE(int);
> +DEFINE_GUEST_HANDLE(long);
> +DEFINE_GUEST_HANDLE(void);
> +DEFINE_GUEST_HANDLE(uint64_t);
> +DEFINE_GUEST_HANDLE(uint32_t);
> +
> +/* Maximum number of virtual CPUs in multi-processor guests. */
> +#define MAX_VIRT_CPUS 1
> +
> +struct arch_vcpu_info { };
> +struct arch_shared_info { };
> +
> +/* XXX: Move pvclock definitions some place arch independent */
> +struct pvclock_vcpu_time_info {
> +	u32   version;
> +	u32   pad0;
> +	u64   tsc_timestamp;
> +	u64   system_time;
> +	u32   tsc_to_system_mul;
> +	s8    tsc_shift;
> +	u8    flags;
> +	u8    pad[2];
> +} __attribute__((__packed__)); /* 32 bytes */
> +
> +struct pvclock_wall_clock {
> +	u32   version;
> +	u32   sec;
> +	u32   nsec;
> +} __attribute__((__packed__));
> +#endif
> +
> +#endif /* _ASM_ARM_XEN_INTERFACE_H */
> diff --git a/arch/arm/xen/Makefile b/arch/arm/xen/Makefile
> new file mode 100644
> index 0000000..0bad594
> --- /dev/null
> +++ b/arch/arm/xen/Makefile
> @@ -0,0 +1 @@
> +obj-y		:= enlighten.o
> diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
> new file mode 100644
> index 0000000..d27c2a6
> --- /dev/null
> +++ b/arch/arm/xen/enlighten.c
> @@ -0,0 +1,35 @@
> +#include <xen/xen.h>
> +#include <xen/interface/xen.h>
> +#include <xen/interface/memory.h>
> +#include <xen/platform_pci.h>
> +#include <asm/xen/hypervisor.h>
> +#include <asm/xen/hypercall.h>
> +#include <linux/module.h>
> +
> +struct start_info _xen_start_info;
> +struct start_info *xen_start_info = &_xen_start_info;
> +EXPORT_SYMBOL_GPL(xen_start_info);
> +
> +enum xen_domain_type xen_domain_type = XEN_NATIVE;
> +EXPORT_SYMBOL_GPL(xen_domain_type);
> +
> +struct shared_info xen_dummy_shared_info;
> +struct shared_info *HYPERVISOR_shared_info = (void *)&xen_dummy_shared_info;
> +
> +DEFINE_PER_CPU(struct vcpu_info *, xen_vcpu);
> +
> +/* XXX: to be removed */
> +__read_mostly int xen_have_vector_callback;
> +EXPORT_SYMBOL_GPL(xen_have_vector_callback);
> +
> +int xen_platform_pci_unplug = XEN_UNPLUG_ALL;
> +EXPORT_SYMBOL_GPL(xen_platform_pci_unplug);
> +
> +int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
> +			       unsigned long addr,
> +			       unsigned long mfn, int nr,
> +			       pgprot_t prot, unsigned domid)
> +{
> +	return -ENOSYS;
> +}
> +EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_range);
> diff --git a/include/xen/xen.h b/include/xen/xen.h
> index a164024..2c0d3a5 100644
> --- a/include/xen/xen.h
> +++ b/include/xen/xen.h
> @@ -23,7 +23,7 @@ extern enum xen_domain_type xen_domain_type;
>  #include <xen/interface/xen.h>
>  #include <asm/xen/hypervisor.h>
>  
> -#define xen_initial_domain()	(xen_pv_domain() && \
> +#define xen_initial_domain()	(xen_domain() && \
>  				 xen_start_info->flags & SIF_INITDOMAIN)
>  #else  /* !CONFIG_XEN_DOM0 */
>  #define xen_initial_domain()	(0)
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 21:12:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 21:12:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwgD3-0000Ae-5B; Wed, 01 Aug 2012 21:12:01 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SwgD1-0000AD-6e
	for xen-devel@lists.xensource.com; Wed, 01 Aug 2012 21:11:59 +0000
Received: from [85.158.139.83:59067] by server-1.bemta-5.messagelabs.com id
	75/D9-29759-C9B99105; Wed, 01 Aug 2012 21:11:56 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-7.tower-182.messagelabs.com!1343855515!24126289!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9249 invoked from network); 1 Aug 2012 21:11:55 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-7.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Aug 2012 21:11:55 -0000
X-IronPort-AV: E=Sophos;i="4.77,696,1336348800"; d="scan'208";a="13810759"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	01 Aug 2012 21:11:55 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 1 Aug 2012 22:11:55 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1SwgCx-0000md-5j;
	Wed, 01 Aug 2012 21:11:55 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1SwgCw-0003Ow-KW;
	Wed, 01 Aug 2012 22:11:54 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13534-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 1 Aug 2012 22:11:54 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13534: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13534 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13534/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13533
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13533
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13533
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13533

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  3d622e2c7cfb
baseline version:
 xen                  cf0e661cb321

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Christoph Egger <Christoph.Egger@amd.com>
  Dario Faggioli <dario.faggioli@citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Olaf Hering <olaf@aepfle.de>
  Roger Pau Monne <roger.pau@citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=3d622e2c7cfb
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable 3d622e2c7cfb
+ branch=xen-unstable
+ revision=3d622e2c7cfb
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/staging/xen-unstable.hg
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-unstable.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-unstable.hg
+ hg push -r 3d622e2c7cfb ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 8 changesets with 21 changes to 19 files

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 01 22:35:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
From xen-devel-bounces@lists.xen.org Wed Aug 01 22:35:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Aug 2012 22:35:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwhVN-00015X-6Z; Wed, 01 Aug 2012 22:35:01 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1SwhVL-00015S-PV
	for Xen-devel@lists.xensource.com; Wed, 01 Aug 2012 22:34:59 +0000
Received: from [85.158.143.99:2966] by server-3.bemta-4.messagelabs.com id
	3F/C5-01511-21FA9105; Wed, 01 Aug 2012 22:34:58 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-11.tower-216.messagelabs.com!1343860497!22308121!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDY3MjQwNA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18587 invoked from network); 1 Aug 2012 22:34:58 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-11.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 1 Aug 2012 22:34:58 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q71MYiCO008142
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 1 Aug 2012 22:34:45 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q71MYhF8004410
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 1 Aug 2012 22:34:43 GMT
Received: from abhmt108.oracle.com (abhmt108.oracle.com [141.146.116.60])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q71MYgjn022064; Wed, 1 Aug 2012 17:34:42 -0500
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 01 Aug 2012 15:34:42 -0700
Date: Wed, 1 Aug 2012 15:34:39 -0700
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: George Dunlap <George.Dunlap@eu.citrix.com>
Message-ID: <20120801153439.3f81c923@mantra.us.oracle.com>
In-Reply-To: <CAFLBxZagm=53bZ=KaWmd7XQkkAduD+znECJRsaJUqrGJDZgkQQ@mail.gmail.com>
References: <20120626181707.4203d336@mantra.us.oracle.com>
	<CAFLBxZagm=53bZ=KaWmd7XQkkAduD+znECJRsaJUqrGJDZgkQQ@mail.gmail.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.7.6 (GTK+ 2.18.9; x86_64-redhat-linux-gnu)
Mime-Version: 1.0
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Keir Fraser <keir.xen@gmail.com>,
	"stefano.stabellini@eu.citrix.com" <stefano.stabellini@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [HYBRID]: status update...
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 1 Aug 2012 16:25:01 +0100
George Dunlap <George.Dunlap@eu.citrix.com> wrote:

> I hope this isn't bikeshedding; but I don't like "Hybrid" as a name
> for this feature, mainly for "marketing" reasons.  I think it will
> probably give people the wrong idea about what the technology does.
> PV domains is one of Xen's really distinct advantages -- much simpler
> interface, lighter-weight (no qemu, legacy boot), &c &c.  As I
> understand it, the mode you've been calling "hybrid" still has all of
> these advantages -- it just uses some of the HVM hardware extensions
> to make the interface even simpler / faster.  I'm afraid "hybrid" may
> be seen as, "Even Xen has had to give up on PV."
> 
> Can I suggest something like "PVH" instead?  That (at least to me)
> makes it clear that PV domains are still fully PV, but just use some
> HVM extensions.
> 
> Thoughts?

Hi George,

We gave some thought to finding a name. I figured pure PV will be around for
a while at least. So there's PV on one side and HVM on the other, with hybrid
somewhere in between. 

The issue with "PV in HVM" is that it confines PV to the HVM container only. The
vision I had was that hybrid, a PV ops kernel that sits somewhere in between
PV and HVM, could be configured with options. So, one could run hybrid
with, say, EPT off (although that won't be supported anymore). But a generic name
like hybrid allows it to stay flexible in the future, instead of being confined to
a specific mode. I suppose a PV guest could just be started with various options.

As for the name in code, 'pvh' was confusing, as PVHVM is now routinely used to
refer to HVM with PV drivers. 'hpv' for HVM/hybrid PV, well, that's a certain
virus ;). So I just used hybrid in the code to refer to a PV guest that runs
in an HVM container. I suppose I could change the flag to pv_in_hvm or
something.

In the end, I am flexible on whatever we wanna call it :).

thanks,
Mukesh


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 01:05:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 01:05:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwjqY-0000Dc-A9; Thu, 02 Aug 2012 01:05:02 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1SwjqW-0000DX-8K
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 01:05:00 +0000
Received: from [85.158.139.83:44880] by server-7.bemta-5.messagelabs.com id
	F3/E2-28276-B32D9105; Thu, 02 Aug 2012 01:04:59 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-7.tower-182.messagelabs.com!1343869498!24145237!1
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDMzMjcyMg==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8999 invoked from network); 2 Aug 2012 01:04:58 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-7.tower-182.messagelabs.com with SMTP;
	2 Aug 2012 01:04:58 -0000
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
	by fmsmga102.fm.intel.com with ESMTP; 01 Aug 2012 18:04:56 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.71,315,1320652800"; d="scan'208";a="201196253"
Received: from fmsmsx106.amr.corp.intel.com ([10.19.9.37])
	by fmsmga002.fm.intel.com with ESMTP; 01 Aug 2012 18:04:56 -0700
Received: from fmsmsx102.amr.corp.intel.com (10.19.9.53) by
	FMSMSX106.amr.corp.intel.com (10.19.9.37) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Wed, 1 Aug 2012 18:04:56 -0700
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	FMSMSX102.amr.corp.intel.com (10.19.9.53) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Wed, 1 Aug 2012 18:04:56 -0700
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.82]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.232]) with mapi id
	14.01.0355.002; Thu, 2 Aug 2012 09:04:54 +0800
From: "Zhang, Yang Z" <yang.z.zhang@intel.com>
To: Dario Faggioli <raistlin@linux.it>, xen-devel <xen-devel@lists.xen.org>
Thread-Topic: NUMA TODO-list for xen-devel
Thread-Index: AQHNcAER/p/FU/fZGkSxhN/UzIuHTZdFsI5g
Date: Thu, 2 Aug 2012 01:04:54 +0000
Message-ID: <A9667DDFB95DB7438FA9D7D576C3D87E2032D6@SHSMSX101.ccr.corp.intel.com>
References: <1343837796.4958.32.camel@Solace>
In-Reply-To: <1343837796.4958.32.camel@Solace>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: Andre Przywara <andre.przywara@amd.com>,
	Anil Madhavapeddy <anil@recoil.org>, George Dunlap <dunlapg@gmail.com>,
	Jan Beulich <JBeulich@suse.com>, Andrew Cooper <Andrew.Cooper3@citrix.com>
Subject: Re: [Xen-devel] NUMA TODO-list for xen-devel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Dario Faggioli wrote on 2012-08-02:
> Hi everyone,
> 
> With automatic placement finally landing into xen-unstable, I started
> thinking about what I could work on next, still in the field of
> improving Xen's NUMA support. Well, it turned out that running out of
> things to do is not an option! :-O
> 
> In fact, I can think of quite a bit of open issues in that area, that I'm
> just braindumping here. If anyone has thoughts or ideas or feedback or
> whatever, I'd be happy to serve as a collector of them. I've already
> created a Wiki page to help with the tracking. You can see it here
> (for now it basically replicates this e-mail):
> 
>  http://wiki.xen.org/wiki/Xen_NUMA_Roadmap
> I'm putting a [D] (standing for Dario) near the points I've started
> working on or looking at, and again, I'd be happy to try tracking this
> too, i.e., keeping the list of "who-is-doing-what" updated, in order to
> ease collaboration.
> 
> So, let's cut the talking:
> 
>     - Automatic placement at guest creation time. Basics are there and
>       will be shipping with 4.2. However, a lot of other things are
>       missing and/or can be improved, for instance:
> [D]    * automated verification and testing of the placement;
>        * benchmarks and improvements of the placement heuristic;
> [D]    * choosing/building up some measure of node load (more accurate
>          than just counting vcpus) onto which to rely during placement;
>        * consider IONUMA during placement;
We should consider two things:
1. Dom0 IONUMA: devices used by dom0 should get their DMA buffers from the node on which they reside. Currently, dom0 allocates DMA buffers without providing the node info to the hypercall.
2. Guest IONUMA: when a guest boots up with a pass-through device, we need to allocate the memory from the node where the device resides for further DMA buffer allocation, and let the guest know the IONUMA topology. This relies on guest NUMA support.
This topic was mentioned at Xen Summit 2011:
http://xen.org/files/xensummit_seoul11/nov2/5_XSAsia11_KTian_IO_Scalability_in_Xen.pdf


>        * automatic placement of Dom0, if possible (my current series is
>          only affecting DomU) * having internal xen data structure
>          honour the placement (e.g., I've been told that right now vcpu
>          stacks are always allocated on node 0... Andrew?).
> [D] - NUMA aware scheduling in Xen. Don't pin vcpus on nodes' pcpus,
>       just have them _prefer_ running on the nodes where their memory
>       is.
> [D] - Dynamic memory migration between different nodes of the host. As
>       the counter-part of the NUMA-aware scheduler.
>     - Virtual NUMA topology exposure to guests (a.k.a guest-numa). If a
>       guest ends up on more than one node, make sure it knows it's
>       running on a NUMA platform (smaller than the actual host, but
>       still NUMA). This interacts with some of the above points:
>        * consider this during automatic placement for
>          resuming/migrating domains (if they have a virtual topology,
>          better not to change it); * consider this during memory
>          migration (it can change the actual topology, should we update
>          it on-line or disable memory migration?)
>     - NUMA and ballooning and memory sharing. In some more details:
>        * page sharing on NUMA boxes: it's probably sane to make it
>          possible disabling sharing pages across nodes; * ballooning and
>          its interaction with placement (races, amount of memory needed
>          and reported being different at different time, etc.).
>     - Inter-VM dependencies and communication issues. If a workload is
>       made up of more than just a VM and they all share the same (NUMA)
>       host, it might be best to have them sharing the nodes as much as
>       possible, or perhaps do right the opposite, depending on the
>       specific characteristics of the workload itself, and this might be
>       considered during placement, memory migration and perhaps
>       scheduling.
>     - Benchmarking and performances evaluation in general. Meaning both
>       agreeing on a (set of) relevant workload(s) and on how to extract
>       meaningful performances data from there (and maybe how to do that
>       automatically?).
> So, what do you think?
> 
> Thanks and Regards,
> Dario
> 
> -- <<This happens because I choose it to happen!>> (Raistlin Majere)
> ----------------------------------------------------------------- Dario
> Faggioli, Ph.D, http://retis.sssup.it/people/faggioli Senior Software
> Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
> 


Best regards,
Yang


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 03:07:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 03:07:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swlk0-0001L5-W1; Thu, 02 Aug 2012 03:06:24 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <horms@verge.net.au>) id 1Swljy-0001L0-V1
	for xen-devel@lists.xensource.com; Thu, 02 Aug 2012 03:06:23 +0000
Received: from [85.158.143.99:4757] by server-3.bemta-4.messagelabs.com id
	A9/26-01511-DAEE9105; Thu, 02 Aug 2012 03:06:21 +0000
X-Env-Sender: horms@verge.net.au
X-Msg-Ref: server-10.tower-216.messagelabs.com!1343876779!23465138!1
X-Originating-IP: [202.4.237.240]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10894 invoked from network); 2 Aug 2012 03:06:20 -0000
Received: from kirsty.vergenet.net (HELO kirsty.vergenet.net) (202.4.237.240)
	by server-10.tower-216.messagelabs.com with SMTP;
	2 Aug 2012 03:06:20 -0000
Received: from ayumi.akashicho.tokyo.vergenet.net
	(p6117-ipbfp1901kobeminato.hyogo.ocn.ne.jp [114.172.117.117])
	by kirsty.vergenet.net (Postfix) with ESMTP id 036F325B718;
	Thu,  2 Aug 2012 13:06:18 +1000 (EST)
Received: by ayumi.akashicho.tokyo.vergenet.net (Postfix, from userid 7100)
	id 8B6E7EDE8B3; Thu,  2 Aug 2012 12:06:15 +0900 (JST)
Date: Thu, 2 Aug 2012 12:06:15 +0900
From: Simon Horman <horms@verge.net.au>
To: Daniel Kiper <daniel.kiper@oracle.com>
Message-ID: <20120802030615.GA20262@verge.net.au>
References: <20120705121635.GA2007@host-192-168-1-59.local.net-space.pl>
	<201207231530.55871.ptesarik@suse.cz>
	<20120723201059.GA1848@host-192-168-1-59.local.net-space.pl>
	<201207241018.35292.ptesarik@suse.cz>
	<20120724135410.GA2230@host-192-168-1-59.local.net-space.pl>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20120724135410.GA2230@host-192-168-1-59.local.net-space.pl>
Organisation: Horms Solutions Ltd.
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: kexec@lists.infradead.org, olaf@aepfle.de, xen-devel@lists.xensource.com,
	Petr Tesarik <ptesarik@suse.cz>, konrad.wilk@oracle.com
Subject: Re: [Xen-devel] [PATCH] kexec-tools: Read always one vmcoreinfo file
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jul 24, 2012 at 03:54:10PM +0200, Daniel Kiper wrote:
> On Tue, Jul 24, 2012 at 10:18:34AM +0200, Petr Tesarik wrote:
> > On Monday, 23 July 2012 22:10:59, Daniel Kiper wrote:
> > > Hi Petr,
> > >
> > > On Mon, Jul 23, 2012 at 03:30:55PM +0200, Petr Tesarik wrote:
> > > > On Monday, 23 July 2012 14:56:07, Petr Tesarik wrote:
> > > > > On Thursday, 5 July 2012 14:16:35, Daniel Kiper wrote:
> > > > > > vmcoreinfo file could exist under /sys/kernel (valid on baremetal
> > > > > > only) and/or under /sys/hypervisor (valid when Xen dom0 is running).
> > > > > > Read only one of them. It means that only one PT_NOTE will always
> > > > > > be created. Remove extra code for second PT_NOTE creation.
> > > > >
> > > > > Hi Daniel,
> > > > >
> > > > > are you absolutely sure this is the right thing to do? IIUC these two
> > > > > VMCORINFO notes are very different. The one from /sys/kernel/vmcoreinfo
> > > > > describes the Dom0 kernel (type 'VMCOREINFO'), while the one from
> > > > > /sys/hypervisor describes the Xen hypervisor (type 'XEN_VMCOREINFO').
> > > > > If you keep only the hypervisor note, then e.g. makedumpfile won't be
> > > > > able to use dumplevel greater than 1, nor will it be able to extract
> > > > > the log buffer.
> > > >
> > > > I've just verified this, and I'm confident we have to keep both notes in
> > > > the dump file. Simon, please revert Daniel's patch to avoid regressions.
> > > >
> > > > I'm attaching a sample VMCOREINFO_XEN and VMCOREINFO to demonstrate the
> > > > difference. Note that the VMCOREINFO_XEN note is actually too big,
> > > > because Xen doesn't bother to maintain the correct note size in the note
> > > > header, so it always spans a complete page minus sizeof(Elf64_Nhdr)...
> > >
> > > [...]
> > >
> > > The problem with /sys/kernel/vmcoreinfo under Xen is that it exposes an
> > > invalid physical address, which breaks /proc/vmcore in the crash kernel.
> > > That is why I proposed that fix. Additionally, /sys/kernel/vmcoreinfo is
> > > not available under Xen Linux 2.6.18. However, I did not do any
> > > makedumpfile tests. If you discovered any issues with my patch, please
> > > send me more details about your tests (Xen version, Linux kernel version,
> > > makedumpfile version, command lines, config files, logs, etc.). I will be
> > > more than happy to fix/improve kexec-tools and makedumpfile.
> >
> > Hi Daniel,
> >
> > well, Linux v2.6.18 does not have /sys/kernel/vmcoreinfo, simply because the
> > VMCOREINFO infrastructure was not present in 2.6.18. It was added later with
> 
> Yep.
> 
> > commit fd59d231f81cb02870b9cf15f456a897f3669b4e, which went into 2.6.24.
> 
> Hmmm... As I know 2.6.24 does not support kexec/kdump under Xen dom0. Correct?
> 
> > I tested with the following combinations:
> >
> > * xen-3.3.1 + kernel-xen-2.6.27.54 + kexec-tools-2.0.0 + makedumpfile-1.3.1
> > * xen-4.0.3 + kernel-xen-2.6.32.59 + kexec-tools-2.0.0 + makedumpfile-1.3.1
> > * xen-4.1.2 + kernel-xen-3.0.34 + kexec-tools-2.0.0 + makedumpfile-1.4.0
> >
> > These versions correspond to SLES11-GA, SLES11-SP1 and SLES11-SP2,
> > respectively. All of them work just fine and save both ELF notes into the
> > dump.
> 
> Could you test current kexec-tools development version and
> latest makedumpfile version on latest SLES version?
> 
> > What do you mean by "invalid physical address"? I'm getting the correct
> > physical address under Xen. Obviously, it must be translated to machine
> > addresses if you need them from the secondary kernel.
> 
> Correct vmcoreinfo address should be established by calling
> HYPERVISOR_kexec_op(KEXEC_CMD_kexec_get_range, KEXEC_RANGE_MA_VMCOREINFO).
> Address exposed by /sys/kernel/vmcoreinfo is calculated in other way.
> /sys/hypervisor/vmcoreinfo does it correctly (in older implementations
> and in my new upstream implementation which I am going to post shortly).
> 
> > Anyway, let me re-install my test system and send all the necessary
> > information. What kind of log files are you interested in?
> 
> If you spot any error in any logfile which in your opinion
> is relevent to our testes please send me it.

Hi,

is there any consensus on what to do here?


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 03:07:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 03:07:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swlk0-0001L5-W1; Thu, 02 Aug 2012 03:06:24 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <horms@verge.net.au>) id 1Swljy-0001L0-V1
	for xen-devel@lists.xensource.com; Thu, 02 Aug 2012 03:06:23 +0000
Received: from [85.158.143.99:4757] by server-3.bemta-4.messagelabs.com id
	A9/26-01511-DAEE9105; Thu, 02 Aug 2012 03:06:21 +0000
X-Env-Sender: horms@verge.net.au
X-Msg-Ref: server-10.tower-216.messagelabs.com!1343876779!23465138!1
X-Originating-IP: [202.4.237.240]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10894 invoked from network); 2 Aug 2012 03:06:20 -0000
Received: from kirsty.vergenet.net (HELO kirsty.vergenet.net) (202.4.237.240)
	by server-10.tower-216.messagelabs.com with SMTP;
	2 Aug 2012 03:06:20 -0000
Received: from ayumi.akashicho.tokyo.vergenet.net
	(p6117-ipbfp1901kobeminato.hyogo.ocn.ne.jp [114.172.117.117])
	by kirsty.vergenet.net (Postfix) with ESMTP id 036F325B718;
	Thu,  2 Aug 2012 13:06:18 +1000 (EST)
Received: by ayumi.akashicho.tokyo.vergenet.net (Postfix, from userid 7100)
	id 8B6E7EDE8B3; Thu,  2 Aug 2012 12:06:15 +0900 (JST)
Date: Thu, 2 Aug 2012 12:06:15 +0900
From: Simon Horman <horms@verge.net.au>
To: Daniel Kiper <daniel.kiper@oracle.com>
Message-ID: <20120802030615.GA20262@verge.net.au>
References: <20120705121635.GA2007@host-192-168-1-59.local.net-space.pl>
	<201207231530.55871.ptesarik@suse.cz>
	<20120723201059.GA1848@host-192-168-1-59.local.net-space.pl>
	<201207241018.35292.ptesarik@suse.cz>
	<20120724135410.GA2230@host-192-168-1-59.local.net-space.pl>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20120724135410.GA2230@host-192-168-1-59.local.net-space.pl>
Organisation: Horms Solutions Ltd.
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: kexec@lists.infradead.org, olaf@aepfle.de, xen-devel@lists.xensource.com,
	Petr Tesarik <ptesarik@suse.cz>, konrad.wilk@oracle.com
Subject: Re: [Xen-devel] [PATCH] kexec-tools: Read always one vmcoreinfo file
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jul 24, 2012 at 03:54:10PM +0200, Daniel Kiper wrote:
> On Tue, Jul 24, 2012 at 10:18:34AM +0200, Petr Tesarik wrote:
> > On Monday 23 July 2012 22:10:59, Daniel Kiper wrote:
> > > Hi Petr,
> > >
> > > On Mon, Jul 23, 2012 at 03:30:55PM +0200, Petr Tesarik wrote:
> > > > On Monday 23 July 2012 14:56:07, Petr Tesarik wrote:
> > > > > On Thursday 5 July 2012 14:16:35, Daniel Kiper wrote:
> > > > > > The vmcoreinfo file can exist under /sys/kernel (valid on bare
> > > > > > metal only) and/or under /sys/hypervisor (valid when Xen dom0 is
> > > > > > running). Read only one of them, so that only one PT_NOTE is ever
> > > > > > created, and remove the extra code for second PT_NOTE creation.
> > > > >
> > > > > Hi Daniel,
> > > > >
> > > > > are you absolutely sure this is the right thing to do? IIUC these two
> > > > > VMCOREINFO notes are very different. The one from /sys/kernel/vmcoreinfo
> > > > > describes the Dom0 kernel (type 'VMCOREINFO'), while the one from
> > > > > /sys/hypervisor describes the Xen hypervisor (type 'XEN_VMCOREINFO').
> > > > > If you keep only the hypervisor note, then e.g. makedumpfile won't be
> > > > > able to use a dump level greater than 1, nor will it be able to extract
> > > > > the log buffer.
> > > >
> > > > I've just verified this, and I'm confident we have to keep both notes in
> > > > the dump file. Simon, please revert Daniel's patch to avoid regressions.
> > > >
> > > > I'm attaching a sample VMCOREINFO_XEN and VMCOREINFO to demonstrate the
> > > > difference. Note that the VMCOREINFO_XEN note is actually too big,
> > > > because Xen doesn't bother to maintain the correct note size in the note
> > > > header, so it always spans a complete page minus sizeof(Elf64_Nhdr)...
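The oversized note described above can be put in numbers; the figures below are illustrative assumptions (a 4 KiB page, a 12-byte Elf64_Nhdr of three 4-byte words), not values taken from an actual dump:

```shell
#!/bin/sh
# Illustrative arithmetic only: if the note spans a whole page minus the
# ELF note header, the name plus descriptor occupy the remainder.
page=4096          # assumed page size
nhdr=12            # sizeof(Elf64_Nhdr): three 4-byte words (assumed)
echo $((page - nhdr))    # prints: 4084
```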
> > >
> > > [...]
> > >
> > > The problem with /sys/kernel/vmcoreinfo under Xen is that it exposes an
> > > invalid physical address, which breaks /proc/vmcore in the crash kernel.
> > > That is why I proposed this fix. Additionally, /sys/kernel/vmcoreinfo is
> > > not available under Xen Linux 2.6.18. However, I did not do any
> > > makedumpfile tests. If you discover any issues with my patch, please send
> > > me more details about your tests (Xen version, Linux kernel version,
> > > makedumpfile version, command lines, config files, logs, etc.). I will be
> > > more than happy to fix/improve kexec-tools and makedumpfile.
> >
> > Hi Daniel,
> >
> > well, Linux v2.6.18 does not have /sys/kernel/vmcoreinfo, simply because the
> > VMCOREINFO infrastructure was not present in 2.6.18. It was added later with
> 
> Yep.
> 
> > commit fd59d231f81cb02870b9cf15f456a897f3669b4e, which went into 2.6.24.
> 
> Hmmm... As far as I know, 2.6.24 does not support kexec/kdump under Xen dom0. Correct?
> 
> > I tested with the following combinations:
> >
> > * xen-3.3.1 + kernel-xen-2.6.27.54 + kexec-tools-2.0.0 + makedumpfile-1.3.1
> > * xen-4.0.3 + kernel-xen-2.6.32.59 + kexec-tools-2.0.0 + makedumpfile-1.3.1
> > * xen-4.1.2 + kernel-xen-3.0.34 + kexec-tools-2.0.0 + makedumpfile-1.4.0
> >
> > These versions correspond to SLES11-GA, SLES11-SP1 and SLES11-SP2,
> > respectively. All of them work just fine and save both ELF notes into the
> > dump.
> 
> Could you test the current kexec-tools development version and the
> latest makedumpfile release on the latest SLES version?
> 
> > What do you mean by "invalid physical address"? I'm getting the correct
> > physical address under Xen. Obviously, it must be translated to machine
> > addresses if you need them from the secondary kernel.
> 
> The correct vmcoreinfo address should be established by calling
> HYPERVISOR_kexec_op(KEXEC_CMD_kexec_get_range, KEXEC_RANGE_MA_VMCOREINFO).
> The address exposed by /sys/kernel/vmcoreinfo is calculated differently.
> /sys/hypervisor/vmcoreinfo does it correctly (both in older implementations
> and in my new upstream implementation, which I am going to post shortly).
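Both sysfs files hold a single "<physical-address> <size>" line in hex; a minimal parsing sketch (the helper name is hypothetical, not kexec-tools code, and the sample values are made up):

```shell
#!/bin/sh
# Sketch: split the assumed "<addr> <size>" hex pair exposed by the
# vmcoreinfo sysfs files. parse_vmcoreinfo is a hypothetical helper.
parse_vmcoreinfo() {
    set -- $1                       # word-split "addr size"
    printf 'addr=0x%s size=0x%s\n' "$1" "$2"
}
parse_vmcoreinfo "2f9c6000 1000"    # prints: addr=0x2f9c6000 size=0x1000
```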
> 
> > Anyway, let me re-install my test system and send all the necessary
> > information. What kind of log files are you interested in?
> 
> If you spot any error in any logfile which, in your opinion,
> is relevant to our tests, please send it to me.

Hi,

is there any consensus on what to do here?


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 05:06:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 05:06:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swnb7-000280-B1; Thu, 02 Aug 2012 05:05:21 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ms705@hermes.cam.ac.uk>) id 1Switn-00028o-N7
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 00:04:19 +0000
Received: from [85.158.139.83:16785] by server-2.bemta-5.messagelabs.com id
	98/9B-04598-204C9105; Thu, 02 Aug 2012 00:04:18 +0000
X-Env-Sender: ms705@hermes.cam.ac.uk
X-Msg-Ref: server-8.tower-182.messagelabs.com!1343865858!18545763!1
X-Originating-IP: [131.111.8.143]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMxLjExMS44LjE0MyA9PiAxMjc0OQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15308 invoked from network); 2 Aug 2012 00:04:18 -0000
Received: from ppsw-43.csi.cam.ac.uk (HELO ppsw-43.csi.cam.ac.uk)
	(131.111.8.143) by server-8.tower-182.messagelabs.com with SMTP;
	2 Aug 2012 00:04:18 -0000
X-Cam-AntiVirus: no malware found
X-Cam-SpamDetails: not scanned
X-Cam-ScannerInfo: http://www.cam.ac.uk/cs/email/scanner/
Received: from sjc185n146.joh.private.cam.ac.uk ([172.21.185.146]:45850)
	by ppsw-43.csi.cam.ac.uk (smtp.hermes.cam.ac.uk [131.111.8.159]:465)
	with esmtpsa (PLAIN:ms705) (TLSv1:DHE-RSA-CAMELLIA256-SHA:256)
	id 1Swith-0000YJ-nB (Exim 4.72)
	(return-path <ms705@hermes.cam.ac.uk>); Thu, 02 Aug 2012 01:04:13 +0100
Message-ID: <5019C3F7.1080404@cl.cam.ac.uk>
Date: Thu, 02 Aug 2012 01:04:07 +0100
From: Malte Schwarzkopf <malte.schwarzkopf@cl.cam.ac.uk>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:14.0) Gecko/20120714 Thunderbird/14.0
MIME-Version: 1.0
To: Dario Faggioli <raistlin@linux.it>
References: <1343837796.4958.32.camel@Solace>
	<A178E46B-25C1-4251-BB86-292B4CE3082D@recoil.org>
	<1343840334.4958.45.camel@Solace>
In-Reply-To: <1343840334.4958.45.camel@Solace>
X-Mailman-Approved-At: Thu, 02 Aug 2012 05:05:19 +0000
Cc: Andre Przywara <andre.przywara@amd.com>,
	Anil Madhavapeddy <anil@recoil.org>, George Dunlap <dunlapg@gmail.com>,
	xen-devel <xen-devel@lists.xen.org>, Jan Beulich <JBeulich@suse.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>, "Zhang,
	Yang Z" <yang.z.zhang@intel.com>, Steven Smith <steven.smith@cl.cam.ac.uk>
Subject: Re: [Xen-devel] NUMA TODO-list for xen-devel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/08/12 17:58, Dario Faggioli wrote:
> On Wed, 2012-08-01 at 17:32 +0100, Anil Madhavapeddy wrote:
>> On 1 Aug 2012, at 17:16, Dario Faggioli <raistlin@linux.it> wrote:
>>
>>>    - Inter-VM dependencies and communication issues. If a workload is
>>>      made up of more than just one VM and they all share the same (NUMA)
>>>      host, it might be best to have them share the nodes as much as
>>>      possible, or perhaps to do just the opposite, depending on the
>>>      specific characteristics of the workload itself; this might be
>>>      considered during placement, memory migration and perhaps
>>>      scheduling.
>>>
>>>    - Benchmarking and performance evaluation in general, meaning both
>>>      agreeing on a (set of) relevant workload(s) and on how to extract
>>>      meaningful performance data from them (and maybe how to do that
>>>      automatically?).
>>
>> I haven't tried out the latest Xen NUMA features yet, but we've been
>> keeping track of the IPC benchmarks as we get newer machines here:
>>
> 
>> http://www.cl.cam.ac.uk/research/srg/netos/ipc-bench/results.html
>>
> Wow... That's really cool. I'll definitely take a deep look at all these
> data! I'm also adding the link to the wiki, if you're fine with that...

No problem with adding a link, as this is public data :) If possible,
it'd be splendid to put a note next to this link encouraging people to
submit their own results -- doing so is very simple, and helps us extend
the database. Instructions are at
http://www.cl.cam.ac.uk/research/srg/netos/ipc-bench/ (or, for a short
link, http://fable.io).

>> Happy to share the raw data if you have cycles to figure out the best
>> way to auto-place multiple VMs so they are near each other from a memory
>> latency perspective.  
>>
> I don't have anything precise in mind yet, but we need to think about
> this.

While there has been plenty of work on optimizing co-location of
different kinds of workloads, there's relatively little work (that I am
aware of) on VM scheduling in this environment. One (sadly somewhat
lacking) paper at HotCloud this year [1] looked at NUMA-aware VM
migration to balance memory accesses. Of greater interest is possibly
the Google ISCA paper on the detrimental effect of sharing
micro-architectural resources between different kinds of workloads,
although it is not explicitly focused on NUMA, and the metrics are
defined with regard to specific classes of latency-sensitive jobs [2].

One interesting thing to look at (that we haven't looked at yet) is what
memory allocators do about NUMA these days; there is an AMD whitepaper
from 2009 discussing the performance benefits of a NUMA-aware version of
tcmalloc [3], but I have found it hard to reproduce their results on
modern hardware. Of course, being virtualized may complicate matters
here, since the memory allocator can no longer freely pick and choose
where to allocate from.

Scheduling, notably, is key here, since the CPU a process is scheduled
on may determine where its memory is allocated -- frequent migrations
are likely to be bad for performance due to remote memory accesses,
although we have been unable to quantify a significant difference on
non-synthetic macrobenchmarks; that said, we have not tried very hard so far.

Cheers,
Malte

[1] - Ahn et al., "Dynamic Virtual Machine Scheduling in Clouds for
Architectural Shared Resources", in Proceedings of HotCloud 2012,
https://www.usenix.org/conference/hotcloud12/dynamic-virtual-machine-scheduling-clouds-architectural-shared-resources

[2] - Tang et al., "The impact of memory subsystem resource sharing on
datacenter applications", in Proceedings of ISCA 2011,
http://dl.acm.org/citation.cfm?id=2000099

[3] -
http://developer.amd.com/Assets/NUMA_aware_heap_memory_manager_article_final.pdf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 05:43:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 05:43:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwoAs-0002Nu-D5; Thu, 02 Aug 2012 05:42:18 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1SwoAr-0002Np-Jv
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 05:42:17 +0000
Received: from [85.158.138.51:15595] by server-4.bemta-3.messagelabs.com id
	42/8C-29069-8331A105; Thu, 02 Aug 2012 05:42:16 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-7.tower-174.messagelabs.com!1343886135!21083937!1
X-Originating-IP: [81.169.146.160]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MCA9PiAzNzkxMjA=\n,sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MCA9PiAzNzkxMjA=\n,UPPERCASE_25_50
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32513 invoked from network); 2 Aug 2012 05:42:15 -0000
Received: from mo-p00-ob.rzone.de (HELO mo-p00-ob.rzone.de) (81.169.146.160)
	by server-7.tower-174.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 2 Aug 2012 05:42:15 -0000
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+zrwiavkK6tmQaLfmztM8TOFGjS0PGmfh
X-RZG-CLASS-ID: mo00
Received: from probook.site
	(dslb-088-065-083-197.pools.arcor-ip.net [88.65.83.197])
	by smtp.strato.de (joses mo31) (RZmta 30.2 DYNA|AUTH)
	with (DHE-RSA-AES256-SHA encrypted) ESMTPA id e026fco721r8hk ;
	Thu, 2 Aug 2012 07:42:15 +0200 (CEST)
Received: by probook.site (Postfix, from userid 1000)
	id 6690318639; Thu,  2 Aug 2012 07:42:14 +0200 (CEST)
Date: Thu, 2 Aug 2012 07:42:14 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20120802054214.GA13335@aepfle.de>
References: <1343332476-33765-1-git-send-email-roger.pau@citrix.com>
	<1343332476-33765-2-git-send-email-roger.pau@citrix.com>
	<1343378909.6812.86.camel@zakaz.uk.xensource.com>
	<1343821660.27221.82.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1343821660.27221.82.camel@zakaz.uk.xensource.com>
User-Agent: Mutt/1.5.21.rev5555 (2012-07-20)
Cc: Christoph Egger <Christoph.Egger@amd.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH v2 1/5] tools/build: fix pygrub linking
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Aug 01, Ian Campbell wrote:

> > > diff --git a/tools/pygrub/Makefile b/tools/pygrub/Makefile
> > > index bd22dd4..8c99e11 100644
> > > --- a/tools/pygrub/Makefile
> > > +++ b/tools/pygrub/Makefile
> > > @@ -14,7 +14,10 @@ install: all
> > >  		$(PYTHON_PREFIX_ARG) --root="$(DESTDIR)" \
> > >  		--install-scripts=$(PRIVATE_BINDIR) --force
> > >  	$(INSTALL_DIR) $(DESTDIR)/var/run/xend/boot
> > > -	ln -sf $(PRIVATE_BINDIR)/pygrub $(DESTDIR)/$(BINDIR)
> > > +	set -e; if [ `readlink -f $(DESTDIR)/$(BINDIR)` != \
> > > +	             `readlink -f $(PRIVATE_BINDIR)` ]; then \
> > > +	    ln -sf $(PRIVATE_BINDIR)/pygrub $(DESTDIR)/$(BINDIR); \
> > > +	fi

This needs quoting for the shell, as shown below:

[  148s] set -e; if [ `readlink -f /home/abuild/rpmbuild/BUILD/xen-4.2.25700/non-dbg/dist/install//usr/bin` != \
[  148s]              `readlink -f /usr/lib64/xen/bin` ]; then \
[  148s]     ln -sf /usr/lib64/xen/bin/pygrub /home/abuild/rpmbuild/BUILD/xen-4.2.25700/non-dbg/dist/install//usr/bin; \
[  148s] fi
[  148s] /bin/sh: line 0: [: /home/abuild/rpmbuild/BUILD/xen-4.2.25700/non-dbg/dist/install/usr/bin: unary operator expected

diff -r 3d622e2c7cfb tools/pygrub/Makefile
--- a/tools/pygrub/Makefile
+++ b/tools/pygrub/Makefile
@@ -14,8 +14,8 @@ install: all
 		$(PYTHON_PREFIX_ARG) --root="$(DESTDIR)" \
 		--install-scripts=$(PRIVATE_BINDIR) --force
 	$(INSTALL_DIR) $(DESTDIR)/var/run/xend/boot
-	set -e; if [ `readlink -f $(DESTDIR)/$(BINDIR)` != \
-	             `readlink -f $(PRIVATE_BINDIR)` ]; then \
+	set -e; if [ "`readlink -f $(DESTDIR)/$(BINDIR)`" != \
+	             "`readlink -f $(PRIVATE_BINDIR)`" ]; then \
 	    ln -sf $(PRIVATE_BINDIR)/pygrub $(DESTDIR)/$(BINDIR); \
 	fi
 
Olaf
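
The failure above can be reproduced in isolation: when a command substitution expands to nothing, the unquoted test collapses to a malformed expression for `[`, while the quoted form still compares two strings. A minimal sketch with made-up paths:

```shell
#!/bin/sh
# When $empty is unquoted, `[ $empty != /usr/lib64/xen/bin ]` expands to
# `[ != /usr/lib64/xen/bin ]` and fails: "unary operator expected".
# Quoting keeps the empty operand in place, so the comparison parses.
empty=""
if [ "$empty" != "/usr/lib64/xen/bin" ]; then
    echo "quoted comparison still works with an empty operand"
fi
```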

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 05:43:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 05:43:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwoAs-0002Nu-D5; Thu, 02 Aug 2012 05:42:18 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1SwoAr-0002Np-Jv
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 05:42:17 +0000
Received: from [85.158.138.51:15595] by server-4.bemta-3.messagelabs.com id
	42/8C-29069-8331A105; Thu, 02 Aug 2012 05:42:16 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-7.tower-174.messagelabs.com!1343886135!21083937!1
X-Originating-IP: [81.169.146.160]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MCA9PiAzNzkxMjA=\n,sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MCA9PiAzNzkxMjA=\n,UPPERCASE_25_50
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32513 invoked from network); 2 Aug 2012 05:42:15 -0000
Received: from mo-p00-ob.rzone.de (HELO mo-p00-ob.rzone.de) (81.169.146.160)
	by server-7.tower-174.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 2 Aug 2012 05:42:15 -0000
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+zrwiavkK6tmQaLfmztM8TOFGjS0PGmfh
X-RZG-CLASS-ID: mo00
Received: from probook.site
	(dslb-088-065-083-197.pools.arcor-ip.net [88.65.83.197])
	by smtp.strato.de (joses mo31) (RZmta 30.2 DYNA|AUTH)
	with (DHE-RSA-AES256-SHA encrypted) ESMTPA id e026fco721r8hk ;
	Thu, 2 Aug 2012 07:42:15 +0200 (CEST)
Received: by probook.site (Postfix, from userid 1000)
	id 6690318639; Thu,  2 Aug 2012 07:42:14 +0200 (CEST)
Date: Thu, 2 Aug 2012 07:42:14 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20120802054214.GA13335@aepfle.de>
References: <1343332476-33765-1-git-send-email-roger.pau@citrix.com>
	<1343332476-33765-2-git-send-email-roger.pau@citrix.com>
	<1343378909.6812.86.camel@zakaz.uk.xensource.com>
	<1343821660.27221.82.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1343821660.27221.82.camel@zakaz.uk.xensource.com>
User-Agent: Mutt/1.5.21.rev5555 (2012-07-20)
Cc: Christoph Egger <Christoph.Egger@amd.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH v2 1/5] tools/build: fix pygrub linking
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Aug 01, Ian Campbell wrote:

> > > diff --git a/tools/pygrub/Makefile b/tools/pygrub/Makefile
> > > index bd22dd4..8c99e11 100644
> > > --- a/tools/pygrub/Makefile
> > > +++ b/tools/pygrub/Makefile
> > > @@ -14,7 +14,10 @@ install: all
> > >  		$(PYTHON_PREFIX_ARG) --root="$(DESTDIR)" \
> > >  		--install-scripts=$(PRIVATE_BINDIR) --force
> > >  	$(INSTALL_DIR) $(DESTDIR)/var/run/xend/boot
> > > -	ln -sf $(PRIVATE_BINDIR)/pygrub $(DESTDIR)/$(BINDIR)
> > > +	set -e; if [ `readlink -f $(DESTDIR)/$(BINDIR)` != \
> > > +	             `readlink -f $(PRIVATE_BINDIR)` ]; then \
> > > +	    ln -sf $(PRIVATE_BINDIR)/pygrub $(DESTDIR)/$(BINDIR); \
> > > +	fi

This needs quoting for the shell, as shown below:

[  148s] set -e; if [ `readlink -f /home/abuild/rpmbuild/BUILD/xen-4.2.25700/non-dbg/dist/install//usr/bin` != \
[  148s]              `readlink -f /usr/lib64/xen/bin` ]; then \
[  148s]     ln -sf /usr/lib64/xen/bin/pygrub /home/abuild/rpmbuild/BUILD/xen-4.2.25700/non-dbg/dist/install//usr/bin; \
[  148s] fi
[  148s] /bin/sh: line 0: [: /home/abuild/rpmbuild/BUILD/xen-4.2.25700/non-dbg/dist/install/usr/bin: unary operator expected
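
The failure is easy to reproduce outside the Makefile. This is a minimal
standalone sketch with hypothetical stand-in paths, assuming the second
`readlink -f` printed nothing (e.g. because the directory does not exist
in the build chroot):

```shell
#!/bin/sh
# Stand-ins for the two `readlink -f` results in the install rule;
# in the failing build the second command produced no output.
left="/home/abuild/dist/install/usr/bin"
right=""

# Unquoted: the empty $right disappears entirely, so the shell runs
# `[ /home/... != ]` and complains "unary operator expected"; the
# branch is not taken.
if [ $left != $right ]; then echo "unquoted: differ"; fi 2>/dev/null

# Quoted: the empty string survives as an operand, the comparison is
# well-formed, and the branch is taken.
if [ "$left" != "$right" ]; then echo "quoted: differ"; fi
```

With the quotes in place the test stays well-formed even when one of the
directories does not exist yet.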

diff -r 3d622e2c7cfb tools/pygrub/Makefile
--- a/tools/pygrub/Makefile
+++ b/tools/pygrub/Makefile
@@ -14,8 +14,8 @@ install: all
 		$(PYTHON_PREFIX_ARG) --root="$(DESTDIR)" \
 		--install-scripts=$(PRIVATE_BINDIR) --force
 	$(INSTALL_DIR) $(DESTDIR)/var/run/xend/boot
-	set -e; if [ `readlink -f $(DESTDIR)/$(BINDIR)` != \
-	             `readlink -f $(PRIVATE_BINDIR)` ]; then \
+	set -e; if [ "`readlink -f $(DESTDIR)/$(BINDIR)`" != \
+	             "`readlink -f $(PRIVATE_BINDIR)`" ]; then \
 	    ln -sf $(PRIVATE_BINDIR)/pygrub $(DESTDIR)/$(BINDIR); \
 	fi
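
For what it's worth, the guard amounts to this shell logic (sketched
here with hypothetical directories): skip the symlink when both paths
resolve to the same directory, so `ln -sf` does not drop a
self-referential link inside its own target:

```shell
#!/bin/sh
# Sketch of the corrected install guard, using hypothetical paths.
bindir="/tmp/pygrub-demo/usr/bin"
private_bindir="/tmp/pygrub-demo/usr/bin"   # same dir: the case being guarded
mkdir -p "$bindir"

# Quoted command substitutions keep the test well-formed even if a
# path is missing and readlink prints nothing.
if [ "$(readlink -f "$bindir")" != "$(readlink -f "$private_bindir")" ]; then
    ln -sf "$private_bindir/pygrub" "$bindir"
else
    echo "bindirs identical, skipping symlink"
fi
```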
 
Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 06:45:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 06:45:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swp8s-0002mT-7H; Thu, 02 Aug 2012 06:44:18 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Swp8q-0002mO-RW
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 06:44:17 +0000
Received: from [85.158.143.35:24168] by server-3.bemta-4.messagelabs.com id
	B4/DF-01511-0C12A105; Thu, 02 Aug 2012 06:44:16 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1343889855!5585924!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3ODA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11270 invoked from network); 2 Aug 2012 06:44:15 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 06:44:15 -0000
X-IronPort-AV: E=Sophos;i="4.77,699,1336348800"; d="scan'208";a="13814608"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 06:44:12 +0000
Received: from [127.0.0.1] (10.80.16.67) by smtprelay.citrix.com
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Thu, 2 Aug 2012
	07:44:12 +0100
Message-ID: <1343889851.7571.21.camel@dagon.hellion.org.uk>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Olaf Hering <olaf@aepfle.de>
Date: Thu, 2 Aug 2012 07:44:11 +0100
In-Reply-To: <20120802054214.GA13335@aepfle.de>
References: <1343332476-33765-1-git-send-email-roger.pau@citrix.com>
	<1343332476-33765-2-git-send-email-roger.pau@citrix.com>
	<1343378909.6812.86.camel@zakaz.uk.xensource.com>
	<1343821660.27221.82.camel@zakaz.uk.xensource.com>
	<20120802054214.GA13335@aepfle.de>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Christoph Egger <Christoph.Egger@amd.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH v2 1/5] tools/build: fix pygrub linking
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-02 at 06:42 +0100, Olaf Hering wrote:
> On Wed, Aug 01, Ian Campbell wrote:
> 
> > > > diff --git a/tools/pygrub/Makefile b/tools/pygrub/Makefile
> > > > index bd22dd4..8c99e11 100644
> > > > --- a/tools/pygrub/Makefile
> > > > +++ b/tools/pygrub/Makefile
> > > > @@ -14,7 +14,10 @@ install: all
> > > >  		$(PYTHON_PREFIX_ARG) --root="$(DESTDIR)" \
> > > >  		--install-scripts=$(PRIVATE_BINDIR) --force
> > > >  	$(INSTALL_DIR) $(DESTDIR)/var/run/xend/boot
> > > > -	ln -sf $(PRIVATE_BINDIR)/pygrub $(DESTDIR)/$(BINDIR)
> > > > +	set -e; if [ `readlink -f $(DESTDIR)/$(BINDIR)` != \
> > > > +	             `readlink -f $(PRIVATE_BINDIR)` ]; then \
> > > > +	    ln -sf $(PRIVATE_BINDIR)/pygrub $(DESTDIR)/$(BINDIR); \
> > > > +	fi
> 
> This needs quoting for the shell, as shown below:

Right, thanks.

Can you provide an S-o-b, and I'll fabricate a changelog as I commit
it.

> 
> [  148s] set -e; if [ `readlink -f /home/abuild/rpmbuild/BUILD/xen-4.2.25700/non-dbg/dist/install//usr/bin` != \
> [  148s]              `readlink -f /usr/lib64/xen/bin` ]; then \
> [  148s]     ln -sf /usr/lib64/xen/bin/pygrub /home/abuild/rpmbuild/BUILD/xen-4.2.25700/non-dbg/dist/install//usr/bin; \
> [  148s] fi
> [  148s] /bin/sh: line 0: [: /home/abuild/rpmbuild/BUILD/xen-4.2.25700/non-dbg/dist/install/usr/bin: unary operator expected
> 
> diff -r 3d622e2c7cfb tools/pygrub/Makefile
> --- a/tools/pygrub/Makefile
> +++ b/tools/pygrub/Makefile
> @@ -14,8 +14,8 @@ install: all
>  		$(PYTHON_PREFIX_ARG) --root="$(DESTDIR)" \
>  		--install-scripts=$(PRIVATE_BINDIR) --force
>  	$(INSTALL_DIR) $(DESTDIR)/var/run/xend/boot
> -	set -e; if [ `readlink -f $(DESTDIR)/$(BINDIR)` != \
> -	             `readlink -f $(PRIVATE_BINDIR)` ]; then \
> +	set -e; if [ "`readlink -f $(DESTDIR)/$(BINDIR)`" != \
> +	             "`readlink -f $(PRIVATE_BINDIR)`" ]; then \
>  	    ln -sf $(PRIVATE_BINDIR)/pygrub $(DESTDIR)/$(BINDIR); \
>  	fi
>  
> Olaf



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 06:58:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 06:58:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwpLp-0002wO-KO; Thu, 02 Aug 2012 06:57:41 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SwpLo-0002wJ-1v
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 06:57:40 +0000
Received: from [85.158.143.99:44287] by server-3.bemta-4.messagelabs.com id
	EB/7C-01511-3E42A105; Thu, 02 Aug 2012 06:57:39 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-216.messagelabs.com!1343890658!18198903!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3ODA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22781 invoked from network); 2 Aug 2012 06:57:38 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 06:57:38 -0000
X-IronPort-AV: E=Sophos;i="4.77,699,1336348800"; d="scan'208";a="13814749"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 06:57:17 +0000
Received: from [127.0.0.1] (10.80.16.67) by smtprelay.citrix.com
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Thu, 2 Aug 2012
	07:57:17 +0100
Message-ID: <1343890636.7571.31.camel@dagon.hellion.org.uk>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Pasi =?ISO-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
Date: Thu, 2 Aug 2012 07:57:16 +0100
In-Reply-To: <20120801195021.GD19851@reaktio.net>
References: <50191C3E.4050003@xen.org>
	<CAFLBxZaWGBv5dxC9rYKJqrKV7faewhmEuGxM2XsTZ633QuMong@mail.gmail.com>
	<20120801195021.GD19851@reaktio.net>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	"lars.kurth@xen.org" <lars.kurth@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Proposal: Xen Test Days
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-08-01 at 20:50 +0100, Pasi Kärkkäinen wrote:
> On Wed, Aug 01, 2012 at 03:50:16PM +0100, George Dunlap wrote:
> > On Wed, Aug 1, 2012 at 1:08 PM, Lars Kurth <lars.kurth@xen.org> wrote:
> > > Hi everybody,
> > >
> > > at OSCON I had a couple of discussions regarding Fedora-like Xen
> > > Test Days. It may be a little bit late for this release to put all
> > > the documentation together (i.e. a TODO list of what we want the
> > > community and the distros which consume Xen to test) and pull this
> > > off for this release cycle.
> > >
> > > But I wanted to raise this as a possibility and maybe something to
> > > build into future release cycles.  If I look at
> > > http://wiki.xen.org/wiki/Xen_4.2 there is fairly little (in fact
> > > almost nothing) on how new functionality would be tested. My gut
> > > feeling is that the biggest benefit of a Xen Test Day for 4.2 may
> > > be in testing xl. There were some improvements last Monday, but I
> > > am not sure this is enough.
> > >
> > > If I look at https://fedoraproject.org/wiki/QA/Test_Days they have
> > > spent quite a bit of effort on this, and it would probably take one
> > > person a week or two full-time to pull this together. Also see
> > > https://fedoraproject.org/wiki/Test_Day:Current
> > >
> > > I just wanted to put this out there to see whether we should try
> > > for this release cycle and gather views. It would require a
> > > volunteer to step up. If the view is that this is not doable for
> > > 4.2, it may be a good thing to try for patch releases as well as
> > > for Xen 4.3.
> >
> > I think for 4.2, the key thing we want to test is the xm -> xl
> > transition; and the instructions for that are really simple --
> > basically, "Do what you normally do, using xl instead of xm". :-)
> > Secondary things we want tested involve just installing it on
> > different software setups (e.g., distros), and hardware testing.  But
> > I think those will come as a matter of course with the first one.
>
> Xen hypervisor UEFI boot testing would be nice as well.

I think we need to be careful to keep the scope manageable for any one
event and not overreach ourselves by trying to test too many things at
once.

The xm -> xl transition is a reasonably sized chunk of testing, and it
makes sense to me to give it a day to itself. Of course there will be
installation and setup issues along the way, but the focus should be on
the goal of ensuring that xl can replace xm.

Perhaps UEFI booting (or booting generally) would be a suitable topic
for a separate day.

> Xen 4.2 has the hypervisor EFI patches, but the dom0 kernel also needs
> EFI patches, and that makes testing a bit more difficult.
>
> The upstream Linux pvops dom0 kernel doesn't have EFI support yet;
> only SUSE's xenlinux patches have EFI support, AFAIK.

This will certainly add to the complexity of a test day around UEFI.
Perhaps it would be better to wait until the pvops version lands?

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 07:11:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 07:11:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwpYp-0003KE-HR; Thu, 02 Aug 2012 07:11:07 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SwpYn-0003K9-OL
	for xen-devel@lists.xensource.com; Thu, 02 Aug 2012 07:11:05 +0000
Received: from [85.158.143.99:47249] by server-1.bemta-4.messagelabs.com id
	4E/FC-24392-8082A105; Thu, 02 Aug 2012 07:11:04 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-16.tower-216.messagelabs.com!1343891462!18201094!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3ODA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2372 invoked from network); 2 Aug 2012 07:11:03 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 07:11:03 -0000
X-IronPort-AV: E=Sophos;i="4.77,699,1336348800"; d="scan'208";a="13814958"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 07:11:02 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 2 Aug 2012 08:11:02 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1SwpYk-0004CH-9T;
	Thu, 02 Aug 2012 07:11:02 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1SwpYk-0001re-04;
	Thu, 02 Aug 2012 08:11:02 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13535-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 2 Aug 2012 08:11:02 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13535: tolerable FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13535 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13535/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-rhel6hvm-intel  7 redhat-install            fail pass in 13534
 test-amd64-i386-xl-win-vcpus1  7 windows-install            fail pass in 13534

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13534
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13534
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13534
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13534

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop           fail in 13534 never pass

version targeted for testing:
 xen                  3d622e2c7cfb
baseline version:
 xen                  3d622e2c7cfb

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 07:11:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 07:11:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwpYp-0003KE-HR; Thu, 02 Aug 2012 07:11:07 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SwpYn-0003K9-OL
	for xen-devel@lists.xensource.com; Thu, 02 Aug 2012 07:11:05 +0000
Received: from [85.158.143.99:47249] by server-1.bemta-4.messagelabs.com id
	4E/FC-24392-8082A105; Thu, 02 Aug 2012 07:11:04 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-16.tower-216.messagelabs.com!1343891462!18201094!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3ODA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2372 invoked from network); 2 Aug 2012 07:11:03 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 07:11:03 -0000
X-IronPort-AV: E=Sophos;i="4.77,699,1336348800"; d="scan'208";a="13814958"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 07:11:02 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 2 Aug 2012 08:11:02 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1SwpYk-0004CH-9T;
	Thu, 02 Aug 2012 07:11:02 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1SwpYk-0001re-04;
	Thu, 02 Aug 2012 08:11:02 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13535-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 2 Aug 2012 08:11:02 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13535: tolerable FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13535 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13535/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-rhel6hvm-intel  7 redhat-install            fail pass in 13534
 test-amd64-i386-xl-win-vcpus1  7 windows-install            fail pass in 13534

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13534
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13534
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13534
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13534

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop           fail in 13534 never pass

version targeted for testing:
 xen                  3d622e2c7cfb
baseline version:
 xen                  3d622e2c7cfb

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 07:13:47 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 07:13:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swpaf-0003Q7-6f; Thu, 02 Aug 2012 07:13:01 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Swpad-0003Q0-VV
	for xen-devel@lists.xensource.com; Thu, 02 Aug 2012 07:13:00 +0000
Received: from [85.158.143.35:24283] by server-2.bemta-4.messagelabs.com id
	4A/68-17938-B782A105; Thu, 02 Aug 2012 07:12:59 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1343891576!5189032!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3ODA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19731 invoked from network); 2 Aug 2012 07:12:57 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-5.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 07:12:57 -0000
X-IronPort-AV: E=Sophos;i="4.77,699,1336348800"; d="scan'208";a="13814989"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 07:12:56 +0000
Received: from [127.0.0.1] (10.80.16.67) by smtprelay.citrix.com
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Thu, 2 Aug 2012
	08:12:56 +0100
Message-ID: <1343891576.7571.37.camel@dagon.hellion.org.uk>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Thu, 2 Aug 2012 08:12:56 +0100
In-Reply-To: <20120801190227.GA13272@phenom.dumpdata.com>
References: <20120801190227.GA13272@phenom.dumpdata.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] Regression in xen-netfront on v3.6.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-08-01 at 20:02 +0100, Konrad Rzeszutek Wilk wrote:

> konrad@phenom:~/ssd/linux$ git log --oneline --merges 4b24ff71108164e047cf2c95990b77651163e315..linus/master

What is your linus/master right now? I get many more merges than you
did.

git tells me that nothing changed in netfront (or netback) in this
range, nor in drivers/xen or arch/x86/xen.
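
The path-limited form of that check can be sketched as below. This is a
self-contained demo on a throwaway repository; against the real kernel tree
the range and paths would be the ones quoted above
(4b24ff71108164e047cf2c95990b77651163e315..linus/master, with
drivers/net/xen-netfront.c, drivers/xen and arch/x86/xen as pathspecs):

```shell
# 'git log <range> -- <paths>' prints only commits in the range that touched
# the listed paths, so empty output means nothing changed there.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
mkdir -p drivers/xen net
echo base > net/core.c
git add -A && git commit -qm "base"
base=$(git rev-parse HEAD)
echo change >> net/core.c
git commit -qam "touch net only"
echo "commits touching net/ in range:"
git log --oneline "$base"..HEAD -- net
echo "commits touching drivers/xen/ in range (empty = untouched):"
git log --oneline "$base"..HEAD -- drivers/xen
```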

> [    1.908592] BUG: unable to handle kernel NULL pointer dereference at 0000000000000010
> [    1.908643] IP: [<ffffffffa0037750>] xennet_poll+0x980/0xec0 [xen_netfront]

What is xennet_poll+0x980 in your symbols? There's a lot of inlining
going on in this function so I can't make a sensible by-eye guess.

Nothing leaps out from DaveM's merge. I think you might have to keep
bisecting.

Ian


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 07:36:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 07:36:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swpwq-0003zn-Tn; Thu, 02 Aug 2012 07:35:56 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Swpwo-0003zi-E6
	for xen-devel@lists.xensource.com; Thu, 02 Aug 2012 07:35:54 +0000
Received: from [85.158.138.51:16454] by server-11.bemta-3.messagelabs.com id
	2A/C5-00679-9DD2A105; Thu, 02 Aug 2012 07:35:53 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-174.messagelabs.com!1343892953!21979936!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3ODA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17349 invoked from network); 2 Aug 2012 07:35:53 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-6.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 07:35:53 -0000
X-IronPort-AV: E=Sophos;i="4.77,699,1336348800"; d="scan'208";a="13815371"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 07:35:52 +0000
Received: from [127.0.0.1] (10.80.16.67) by smtprelay.citrix.com
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Thu, 2 Aug 2012
	08:35:52 +0100
Message-ID: <1343892951.7571.50.camel@dagon.hellion.org.uk>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Rob Herring <robherring2@gmail.com>
Date: Thu, 2 Aug 2012 08:35:51 +0100
In-Reply-To: <50197527.3070007@gmail.com>
References: <alpine.DEB.2.02.1207251741470.26163@kaball.uk.xensource.com>
	<1343316846-25860-1-git-send-email-stefano.stabellini@eu.citrix.com>
	<50197527.3070007@gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linaro-dev@lists.linaro.org" <linaro-dev@lists.linaro.org>,
	"arnd@arndb.de" <arnd@arndb.de>,
	"konrad.wilk@oracle.com" <konrad.wilk@oracle.com>,
	"catalin.marinas@arm.com" <catalin.marinas@arm.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"Tim \(Xen.org\)" <tim@xen.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [PATCH 01/24] arm: initial Xen support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-08-01 at 19:27 +0100, Rob Herring wrote:
> On 07/26/2012 10:33 AM, Stefano Stabellini wrote:
> > - Basic hypervisor.h and interface.h definitions.
> > - Skeleton enlighten.c, set xen_start_info to an empty struct.
> > - Do not limit xen_initial_domain to PV guests.
> > 
> > The new code only compiles when CONFIG_XEN is set, that is going to be
> > added to arch/arm/Kconfig in a later patch.
> > 
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > ---
> >  arch/arm/Makefile                     |    1 +
> >  arch/arm/include/asm/hypervisor.h     |    6 +++
> >  arch/arm/include/asm/xen/hypervisor.h |   19 ++++++++++
> >  arch/arm/include/asm/xen/interface.h  |   64 +++++++++++++++++++++++++++++++++
> 
> These headers don't seem particularly ARM specific. Could they be moved
> to asm-generic or include/linux?

Or perhaps include/xen.

A bunch of it also looks like x86-specific stuff which has crept in,
e.g. PARAVIRT_LAZY_FOO and paravirt_get_lazy_mode() are arch/x86
specific and shouldn't be called from common code (and aren't, AFAICT).




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 07:46:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 07:46:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swq5z-0004C7-Uc; Thu, 02 Aug 2012 07:45:23 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Swq5z-0004C2-8m
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 07:45:23 +0000
Received: from [85.158.143.35:25845] by server-2.bemta-4.messagelabs.com id
	5A/F1-17938-2103A105; Thu, 02 Aug 2012 07:45:22 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1343893521!15102184!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3ODA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7367 invoked from network); 2 Aug 2012 07:45:21 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 07:45:21 -0000
X-IronPort-AV: E=Sophos;i="4.77,699,1336348800"; d="scan'208";a="13815495"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 07:45:21 +0000
Received: from [127.0.0.1] (10.80.16.67) by smtprelay.citrix.com
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Thu, 2 Aug 2012
	08:45:20 +0100
Message-ID: <1343893520.7571.58.camel@dagon.hellion.org.uk>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: David Erickson <halcyon1981@gmail.com>
Date: Thu, 2 Aug 2012 08:45:20 +0100
In-Reply-To: <CANKx4w9AUuAUWtnW59NL7az8VJ-jLJn984hMqvGVwTEfVAvs1A@mail.gmail.com>
References: <CACA08Dwza8TjixUYz1o6PWzfaPwfz1jMiGVNyZE9KcJZ_Ln2oA@mail.gmail.com>
	<CANKx4w9XJGnD5u4hN+RhyOy7OUmF5nh-G8549ER1W19jy__u3Q@mail.gmail.com>
	<CANKx4w8Azvf1s0aqGSRSn9_f-Zq_uL_CAN2rfh5kd=udsT2YDg@mail.gmail.com>
	<CACA08Dw0B3xK_Lr6WQiCSgDewt9ED6GPm-_KaTZzQarsELPrcg@mail.gmail.com>
	<CANKx4w8Y0Uddmre-nsm0TT+GgP3AMiCuoXfbc2-2L+PG3TUgbg@mail.gmail.com>
	<CANKx4w9O0bW=GJknP=hnYW5nUAas8AyKhDVjXGMmgeBkRBon5w@mail.gmail.com>
	<CAHyyzzTMX4wcg+DNEL91dWmo0R-6oGJLNH5O50bSUeHkmTWAwQ@mail.gmail.com>
	<CANKx4w9BVBrmQwaCtJr6CsZ6OK=jV+pb35QtNLHp+Jp6=739aA@mail.gmail.com>
	<1342682536.18848.50.camel@dagon.hellion.org.uk>
	<alpine.DEB.2.02.1207191258580.23783@kaball.uk.xensource.com>
	<CANKx4w-UKwt9H45N9Kbox708OBaQEqG3niELp593h2P7oc+pjw@mail.gmail.com>
	<CANKx4w9=X7iXvQBrORgh6aNHEQ--+4QeFwXDBG0HV-DYk7HpxQ@mail.gmail.com>
	<alpine.DEB.2.02.1207311119500.4645@kaball.uk.xensource.com>
	<CANKx4w_9sX9hsgSkwV6Vb8xSVQ_4n+_sFjfXSDEPHQ1seH=9=g@mail.gmail.com>
	<alpine.DEB.2.02.1208011150330.4645@kaball.uk.xensource.com>
	<CANKx4w9AUuAUWtnW59NL7az8VJ-jLJn984hMqvGVwTEfVAvs1A@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Anthony Perard <anthony.perard@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	jacek burghardt <jaceksburghardt@gmail.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] LSI SAS2008 Option Rom Failure
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-08-01 at 18:52 +0100, David Erickson wrote:
> my assumption is this is because of the error
> in xen-hotplug.log: "RTNETLINK answers: Operation not supported",

That's a benign warning AFAIK.

> and here is my ifconfig while ubuntu is booted (without VMs it doesn't
> have the vifs):

This all looks fine. I think you need to be investigating the network
configuration inside the guest. Does the eth* device exist, is it
configured, etc.?

In your Ubuntu boot log I see:
        [    0.000000] Hypervisor detected: Xen HVM
        [    0.000000] Xen version 4.2.
        [    0.000000] Xen Platform PCI: I/O protocol version 1
        [    0.000000] Netfront and the Xen platform PCI driver have been compiled for this kernel: unplug emulated NICs.
        [    0.000000] Blkfront and the Xen platform PCI driver have been compiled for this kernel: unplug emulated disks.

which means you will need the xen-netfront driver to be loaded; I don't
see any logs to that effect. What does lsmod say? What about "ifconfig
-a"? Does the driver even exist on the live CD under /lib/modules
somewhere?
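
The checks suggested above can be sketched as a small script. This is a
hypothetical sketch, not from the thread; the module name and the
/lib/modules layout assume a standard Ubuntu guest kernel:

```shell
# Is the xen_netfront module loaded? (lsmod may be absent on a minimal image)
if lsmod 2>/dev/null | grep -q xen_netfront; then
    netfront=loaded
else
    netfront=missing
fi

# Is the driver shipped for the running kernel at all?
modpath=$(find "/lib/modules/$(uname -r)" -name 'xen-netfront.ko*' 2>/dev/null | head -n1)

echo "xen-netfront: $netfront, module file: ${modpath:-not found}"

# Does any NIC show up, configured or not?
ifconfig -a 2>/dev/null || ip -br link 2>/dev/null || true
```

If the module file is missing entirely, the live CD kernel simply cannot
drive the PV NIC, which matches the symptom described.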

It's a bit odd that you still have vifX.Y-emu in dom0 given that the
emulated device is supposed to have been unplugged. I wonder if that is
a (separate) bug with emu device unplug. Which qemu was this again?

If you added xen_emul_unplug=never to your guest command line then you
would avoid this unplug and you should have an emulated NIC available
instead.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 08:25:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 08:25:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swqi2-0005Ox-99; Thu, 02 Aug 2012 08:24:42 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1Swqi1-0005Os-E6
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 08:24:41 +0000
Received: from [85.158.143.99:45810] by server-2.bemta-4.messagelabs.com id
	E8/14-17938-8493A105; Thu, 02 Aug 2012 08:24:40 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-11.tower-216.messagelabs.com!1343895879!22366644!1
X-Originating-IP: [81.169.146.162]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MiA9PiAzNTM0MjY=\n,sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MiA9PiAzNTM0MjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12861 invoked from network); 2 Aug 2012 08:24:40 -0000
Received: from mo-p00-ob.rzone.de (HELO mo-p00-ob.rzone.de) (81.169.146.162)
	by server-11.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 2 Aug 2012 08:24:40 -0000
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+zrwiavkK6tmQaLfmztM8TOFGjS0PGmfh
X-RZG-CLASS-ID: mo00
Received: from probook.site
	(dslb-088-065-083-197.pools.arcor-ip.net [88.65.83.197])
	by smtp.strato.de (jored mo59) (RZmta 30.2 DYNA|AUTH)
	with (DHE-RSA-AES256-SHA encrypted) ESMTPA id g039bco7276WiA
	for <xen-devel@lists.xen.org>; Thu, 2 Aug 2012 10:24:39 +0200 (CEST)
Received: from probook.site (localhost [IPv6:::1])
	by probook.site (Postfix) with ESMTP id 3F86018638
	for <xen-devel@lists.xen.org>; Thu,  2 Aug 2012 10:24:39 +0200 (CEST)
MIME-Version: 1.0
X-Mercurial-Node: 480b8f03e6635873e32b6fdcb4a8e19b7b8b3a04
Message-Id: <480b8f03e6635873e32b.1343895878@probook.site>
User-Agent: Mercurial-patchbomb/1.7.5
Date: Thu, 02 Aug 2012 10:24:38 +0200
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH] pygrub: add quoting to install recipe
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

# HG changeset patch
# User Olaf Hering <olaf@aepfle.de>
# Date 1343895857 -7200
# Node ID 480b8f03e6635873e32b6fdcb4a8e19b7b8b3a04
# Parent  3d622e2c7cfb15b37498e9bb8f1005516fe99f2f
pygrub: add quoting to install recipe

Changeset 25694:e20085770cb5 causes a syntax error when readlink
returns nothing because the path does not exist:

[  148s] set -e; if [ `readlink -f /home/abuild/rpmbuild/BUILD/xen-4.2.25700/non-dbg/dist/install//usr/bin` != \
[  148s]              `readlink -f /usr/lib64/xen/bin` ]; then \
[  148s]     ln -sf /usr/lib64/xen/bin/pygrub /home/abuild/rpmbuild/BUILD/xen-4.2.25700/non-dbg/dist/install//usr/bin; \
[  148s] fi
[  148s] /bin/sh: line 0: [: /home/abuild/rpmbuild/BUILD/xen-4.2.25700/non-dbg/dist/install/usr/bin: unary operator expected

Add quoting to fix the error.

Signed-off-by: Olaf Hering <olaf@aepfle.de>

diff -r 3d622e2c7cfb -r 480b8f03e663 tools/pygrub/Makefile
--- a/tools/pygrub/Makefile
+++ b/tools/pygrub/Makefile
@@ -14,8 +14,8 @@ install: all
 		$(PYTHON_PREFIX_ARG) --root="$(DESTDIR)" \
 		--install-scripts=$(PRIVATE_BINDIR) --force
 	$(INSTALL_DIR) $(DESTDIR)/var/run/xend/boot
-	set -e; if [ `readlink -f $(DESTDIR)/$(BINDIR)` != \
-	             `readlink -f $(PRIVATE_BINDIR)` ]; then \
+	set -e; if [ "`readlink -f $(DESTDIR)/$(BINDIR)`" != \
+	             "`readlink -f $(PRIVATE_BINDIR)`" ]; then \
 	    ln -sf $(PRIVATE_BINDIR)/pygrub $(DESTDIR)/$(BINDIR); \
 	fi
 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 08:31:42 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 08:31:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwqoA-0005cV-6c; Thu, 02 Aug 2012 08:31:02 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jinsong.liu@intel.com>) id 1Swqo8-0005cP-6l
	for xen-devel@lists.xensource.com; Thu, 02 Aug 2012 08:31:00 +0000
Received: from [85.158.143.35:5616] by server-2.bemta-4.messagelabs.com id
	5A/9F-17938-3CA3A105; Thu, 02 Aug 2012 08:30:59 +0000
X-Env-Sender: jinsong.liu@intel.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1343896256!5202834!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzA0MzQw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27966 invoked from network); 2 Aug 2012 08:30:57 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-5.tower-21.messagelabs.com with SMTP;
	2 Aug 2012 08:30:57 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga101.jf.intel.com with ESMTP; 02 Aug 2012 01:30:31 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.67,351,1309762800"; d="scan'208";a="180635571"
Received: from fmsmsx108.amr.corp.intel.com ([10.19.9.228])
	by orsmga002.jf.intel.com with ESMTP; 02 Aug 2012 01:30:30 -0700
Received: from shsmsx102.ccr.corp.intel.com (10.239.4.154) by
	FMSMSX108.amr.corp.intel.com (10.19.9.228) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Thu, 2 Aug 2012 01:30:30 -0700
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.82]) by
	SHSMSX102.ccr.corp.intel.com ([169.254.2.196]) with mapi id
	14.01.0355.002; Thu, 2 Aug 2012 16:30:29 +0800
From: "Liu, Jinsong" <jinsong.liu@intel.com>
To: Christoph Egger <Christoph.Egger@amd.com>
Thread-Topic: [Patch 7] Xen/MCE: Abort live migration when vMCE occur
Thread-Index: AQHNbi+x3uY79hDDTvi+tsMk+lheYJdGNW2w
Date: Thu, 2 Aug 2012 08:30:29 +0000
Message-ID: <DE8DF0795D48FD4CA783C40EC82923352D3297@SHSMSX101.ccr.corp.intel.com>
References: <DE8DF0795D48FD4CA783C40EC82923352CD8C9@SHSMSX101.ccr.corp.intel.com>
	<501649B2.6010805@amd.com>
In-Reply-To: <501649B2.6010805@amd.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"Keir \(Xen.org\)" <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [Patch 7] Xen/MCE: Abort live migration when vMCE
	occur
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Christoph Egger wrote:
> On 07/27/12 17:24, Liu, Jinsong wrote:
> 
>> Xen/MCE: Abort live migration when vMCE occur
>> 
>> This patch monitors the critical area of live migration (from the vMCE
>> point of view, the copypages stage of migration is the critical area,
>> while the other areas are not).
>> 
>> If a vMCE occurs in the critical area of live migration, abort and
>> retry the migration later.
>> 
>> Signed-off-by: Liu, Jinsong <jinsong.liu@intel.com>
>> 
>> diff -r 8869ba37b577 tools/libxc/xc_domain.c
>> --- a/tools/libxc/xc_domain.c	Thu Jul 19 22:14:08 2012 +0800
>> +++ b/tools/libxc/xc_domain.c	Thu Jul 26 22:52:09 2012 +0800
>> @@ -283,6 +283,37 @@
>>      return ret;
>>  }
>> 
>> +/* Start vmce monitor */
>> +int xc_domain_vmce_monitor_strat(xc_interface *xch,
>> +                                 uint32_t domid)
>> +{
>> +    int ret;
>> +    DECLARE_DOMCTL;
>> +
>> +    domctl.cmd = XEN_DOMCTL_vmce_monitor_start;
>> +    domctl.domain = (domid_t)domid;
>> +    ret = do_domctl(xch, &domctl);
>> +
>> +    return ret ? -1 : 0;
>> +}
>> +
>> +/* End vmce monitor */
>> +int xc_domain_vmce_monitor_end(xc_interface *xch,
>> +                               uint32_t domid,
>> +                               int *vmce_while_migrate)
>> +{
>> +    int ret;
>> +    DECLARE_DOMCTL;
>> +
>> +    domctl.cmd = XEN_DOMCTL_vmce_monitor_end;
>> +    domctl.domain = (domid_t)domid;
>> +    ret = do_domctl(xch, &domctl);
>> +    if ( !ret )
>> +        *vmce_while_migrate = domctl.u.vmce_monitor.vmce_while_migrate;
>> +
>> +    return ret ? -1 : 0;
>> +}
>> +
>>  /* get info from hvm guest for save */
>>  int xc_domain_hvm_getcontext(xc_interface *xch,
>>                               uint32_t domid,
>> diff -r 8869ba37b577 tools/libxc/xc_domain_save.c
>> --- a/tools/libxc/xc_domain_save.c	Thu Jul 19 22:14:08 2012 +0800
>> +++ b/tools/libxc/xc_domain_save.c	Thu Jul 26 22:52:09 2012 +0800
>> @@ -895,6 +895,8 @@
>>       */
>>      int compressing = 0;
>> 
>> +    int vmce_while_migrate = 0;
>> +
>>      int completed = 0;
>> 
>>      if ( hvm && !callbacks->switch_qemu_logdirty )
>> @@ -1109,6 +1111,12 @@
>>          goto out;
>>      }
>> 
>> +    if ( xc_domain_vmce_monitor_strat(xch, dom) )
> 
> 
> You mean s/strat/start/ here, right?

Yep, thanks! will update.

Jinsong

> 
>> +    {
>> +        PERROR("Error when start vmce monitor\n");
>> +        goto out;
>> +    }
>> +
>>    copypages:
>>  #define wrexact(fd, buf, len) write_buffer(xch, last_iter, ob, (fd), (buf), (len))
>>  #define wruncached(fd, live, buf, len) write_uncached(xch, last_iter, ob, (fd), (buf), (len))
>> @@ -1571,6 +1579,17 @@
>> 
>>      DPRINTF("All memory is saved\n");
>> 
>> +    if ( xc_domain_vmce_monitor_end(xch, dom, &vmce_while_migrate) )
>> +    {
>> +        PERROR("Error when end vmce monitor\n");
>> +        goto out;
>> +    }
>> +    else if ( vmce_while_migrate )
>> +    {
>> +        fprintf(stderr, "vMCE occurred, abort this time and try later.\n");
>> +        goto out;
>> +    }
>> +
>>      /* After last_iter, buffer the rest of pagebuf & tailbuf data into a
>>       * separate output buffer and flush it after the compressed page chunks.
>>       */
>> diff -r 8869ba37b577 tools/libxc/xenctrl.h
>> --- a/tools/libxc/xenctrl.h	Thu Jul 19 22:14:08 2012 +0800
>> +++ b/tools/libxc/xenctrl.h	Thu Jul 26 22:52:09 2012 +0800
>> @@ -568,6 +568,26 @@
>>                               xc_domaininfo_t *info);
>> 
>>  /**
>> + * This function start monitor vmce event.
>> + * @parm xch a handle to an open hypervisor interface
>> + * @parm domid the domain id monitored
>> + * @return 0 on success, -1 on failure
>> + */
>> +int xc_domain_vmce_monitor_strat(xc_interface *xch,
>> +                                 uint32_t domid);
> 
> Ditto.
> 
> Christoph
> 
>> +/**
>> + * This function end monitor vmce event
>> + * @parm xch a handle to an open hypervisor interface
>> + * @parm domid the domain id monitored
>> + * @parm vmce_while_migrate a pointer return whether vMCE occur when migrate
>> + * @return 0 on success, -1 on failure
>> + */
>> +int xc_domain_vmce_monitor_end(xc_interface *xch,
>> +                               uint32_t domid,
>> +                               int *vmce_while_migrate);
>> +
>> +/**
>>   * This function returns information about the context of a hvm
>> domain 
>>   * @parm xch a handle to an open hypervisor interface
>>   * @parm domid the domain to get information from
>> diff -r 8869ba37b577 xen/arch/x86/cpu/mcheck/mce_intel.c
>> --- a/xen/arch/x86/cpu/mcheck/mce_intel.c	Thu Jul 19 22:14:08 2012 +0800
>> +++ b/xen/arch/x86/cpu/mcheck/mce_intel.c	Thu Jul 26 22:52:09 2012 +0800
>> @@ -688,6 +688,12 @@
>>                  goto vmce_failed;
>>              }
>> 
>> +                if ( unlikely(d->arch.vmce_monitor) )
>> +                {
>> +                    /* vMCE occur when guest migration */
>> +                    d->arch.vmce_while_migrate = 1;
>> +                }
>> +
>>                  /* We will inject vMCE to DOMU*/
>>                  if ( inject_vmce(d) < 0 )
>>                  {
>> diff -r 8869ba37b577 xen/arch/x86/domctl.c
>> --- a/xen/arch/x86/domctl.c	Thu Jul 19 22:14:08 2012 +0800
>> +++ b/xen/arch/x86/domctl.c	Thu Jul 26 22:52:09 2012 +0800
>> @@ -1517,6 +1517,41 @@
>>      }
>>      break;
>> 
>> +    case XEN_DOMCTL_vmce_monitor_start:
>> +    {
>> +        struct domain *d;
>> +
>> +        d = rcu_lock_domain_by_id(domctl->domain);
>> +        if ( d != NULL )
>> +        {
>> +            d->arch.vmce_while_migrate = 0;
>> +            d->arch.vmce_monitor = 1;
>> +            rcu_unlock_domain(d);
>> +        }
>> +        else
>> +            ret = -ESRCH;
>> +    }
>> +    break;
>> +
>> +    case XEN_DOMCTL_vmce_monitor_end:
>> +    {
>> +        struct domain *d;
>> +
>> +        d = rcu_lock_domain_by_id(domctl->domain);
>> +        if ( d != NULL )
>> +        {
>> +            d->arch.vmce_monitor = 0;
>> +            domctl->u.vmce_monitor.vmce_while_migrate =
>> +                                      d->arch.vmce_while_migrate;
>> +            rcu_unlock_domain(d);
>> +            if ( copy_to_guest(u_domctl, domctl, 1) )
>> +                ret = -EFAULT;
>> +        }
>> +        else
>> +            ret = -ESRCH;
>> +    }
>> +    break;
>> +
>>      default:
>>          ret = iommu_do_domctl(domctl, u_domctl);
>>          break;
>> diff -r 8869ba37b577 xen/include/asm-x86/domain.h
>> --- a/xen/include/asm-x86/domain.h	Thu Jul 19 22:14:08 2012 +0800
>> +++ b/xen/include/asm-x86/domain.h	Thu Jul 26 22:52:09 2012 +0800
>> @@ -292,6 +292,10 @@
>>      bool_t has_32bit_shinfo;
>>      /* Domain cannot handle spurious page faults? */
>>      bool_t suppress_spurious_page_faults;
>> +    /* Monitoring guest memory copy of migration */
>> +    bool_t vmce_monitor;
>> +    /* Whether vMCE occur during guest memory copy of migration */
>> +    bool_t vmce_while_migrate;
>> 
>>      /* Continuable domain_relinquish_resources(). */      enum {
>> diff -r 8869ba37b577 xen/include/public/domctl.h
>> --- a/xen/include/public/domctl.h	Thu Jul 19 22:14:08 2012 +0800
>> +++ b/xen/include/public/domctl.h	Thu Jul 26 22:52:09 2012 +0800
>> @@ -850,6 +850,12 @@
>>  typedef struct xen_domctl_set_access_required xen_domctl_set_access_required_t;
>>  DEFINE_XEN_GUEST_HANDLE(xen_domctl_set_access_required_t);
>> 
>> +struct xen_domctl_vmce_monitor {
>> +    uint8_t vmce_while_migrate;
>> +};
>> +typedef struct xen_domctl_vmce_monitor xen_domctl_vmce_monitor_t;
>> +DEFINE_XEN_GUEST_HANDLE(xen_domctl_vmce_monitor_t);
>> +
>>  struct xen_domctl {
>>      uint32_t cmd;
>>  #define XEN_DOMCTL_createdomain                   1
>> @@ -915,6 +921,8 @@
>>  #define XEN_DOMCTL_set_access_required           64
>>  #define XEN_DOMCTL_audit_p2m                     65
>>  #define XEN_DOMCTL_set_virq_handler              66
>> +#define XEN_DOMCTL_vmce_monitor_start            67
>> +#define XEN_DOMCTL_vmce_monitor_end              68
>>  #define XEN_DOMCTL_gdbsx_guestmemio            1000
>>  #define XEN_DOMCTL_gdbsx_pausevcpu             1001
>>  #define XEN_DOMCTL_gdbsx_unpausevcpu           1002
>> @@ -970,6 +978,7 @@
>>          struct xen_domctl_set_access_required   access_required;
>>          struct xen_domctl_audit_p2m             audit_p2m;
>>          struct xen_domctl_set_virq_handler      set_virq_handler;
>> +        struct xen_domctl_vmce_monitor          vmce_monitor;
>>          struct xen_domctl_gdbsx_memio           gdbsx_guest_memio;
>>          struct xen_domctl_gdbsx_pauseunp_vcpu   gdbsx_pauseunp_vcpu;
>>          struct xen_domctl_gdbsx_domstatus       gdbsx_domstatus;
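
For reference, the abort logic the quoted patch adds around the
copypages phase can be modelled with a toy script. All names below are
stand-ins for the real libxc/domctl plumbing, not actual Xen calls:

```shell
# Toy model: bracket the critical area with monitor start/end, then abort
# the save if a vMCE fired inside it.
vmce_flag=0                            # stands in for d->arch.vmce_while_migrate
vmce_monitor_start() { vmce_flag=0; }
vmce_monitor_end()   { echo "$vmce_flag"; }

save_domain() {                        # $1 = "vmce" simulates a vMCE in-flight
    vmce_monitor_start
    if [ "$1" = vmce ]; then           # copypages: the critical area
        vmce_flag=1
    fi
    if [ "$(vmce_monitor_end)" -ne 0 ]; then
        echo "vMCE occurred, abort this time and try later." >&2
        return 1
    fi
    return 0
}

if save_domain clean; then clean_rc=0; else clean_rc=1; fi
if save_domain vmce;  then vmce_rc=0;  else vmce_rc=1;  fi
echo "clean save rc=$clean_rc, save with vMCE rc=$vmce_rc"
# -> clean save rc=0, save with vMCE rc=1
```

The point of the design is that only the copypages window is monitored:
a vMCE outside it leaves the flag clear, so the save completes normally.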


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 08:31:42 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 08:31:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwqoA-0005cV-6c; Thu, 02 Aug 2012 08:31:02 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jinsong.liu@intel.com>) id 1Swqo8-0005cP-6l
	for xen-devel@lists.xensource.com; Thu, 02 Aug 2012 08:31:00 +0000
Received: from [85.158.143.35:5616] by server-2.bemta-4.messagelabs.com id
	5A/9F-17938-3CA3A105; Thu, 02 Aug 2012 08:30:59 +0000
X-Env-Sender: jinsong.liu@intel.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1343896256!5202834!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzA0MzQw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27966 invoked from network); 2 Aug 2012 08:30:57 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-5.tower-21.messagelabs.com with SMTP;
	2 Aug 2012 08:30:57 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga101.jf.intel.com with ESMTP; 02 Aug 2012 01:30:31 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.67,351,1309762800"; d="scan'208";a="180635571"
Received: from fmsmsx108.amr.corp.intel.com ([10.19.9.228])
	by orsmga002.jf.intel.com with ESMTP; 02 Aug 2012 01:30:30 -0700
Received: from shsmsx102.ccr.corp.intel.com (10.239.4.154) by
	FMSMSX108.amr.corp.intel.com (10.19.9.228) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Thu, 2 Aug 2012 01:30:30 -0700
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.82]) by
	SHSMSX102.ccr.corp.intel.com ([169.254.2.196]) with mapi id
	14.01.0355.002; Thu, 2 Aug 2012 16:30:29 +0800
From: "Liu, Jinsong" <jinsong.liu@intel.com>
To: Christoph Egger <Christoph.Egger@amd.com>
Thread-Topic: [Patch 7] Xen/MCE: Abort live migration when vMCE occur
Thread-Index: AQHNbi+x3uY79hDDTvi+tsMk+lheYJdGNW2w
Date: Thu, 2 Aug 2012 08:30:29 +0000
Message-ID: <DE8DF0795D48FD4CA783C40EC82923352D3297@SHSMSX101.ccr.corp.intel.com>
References: <DE8DF0795D48FD4CA783C40EC82923352CD8C9@SHSMSX101.ccr.corp.intel.com>
	<501649B2.6010805@amd.com>
In-Reply-To: <501649B2.6010805@amd.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"Keir \(Xen.org\)" <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [Patch 7] Xen/MCE: Abort live migration when vMCE
	occur
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Christoph Egger wrote:
> On 07/27/12 17:24, Liu, Jinsong wrote:
> 
>> Xen/MCE: Abort live migration when vMCE occur
>> 
>> This patch monitor the critical area of live migration (from vMCE
>> point of view, 
>> the copypages stage of migration is the critical area while other
>> areas are not). 
>> 
>> If a vMCE occurs in the critical area of live migration, abort and
>> retry the migration later.
>> 
>> Signed-off-by: Liu, Jinsong <jinsong.liu@intel.com>
>> 
>> diff -r 8869ba37b577 tools/libxc/xc_domain.c
>> --- a/tools/libxc/xc_domain.c	Thu Jul 19 22:14:08 2012 +0800
>> +++ b/tools/libxc/xc_domain.c	Thu Jul 26 22:52:09 2012 +0800
>> @@ -283,6 +283,37 @@
>>      return ret;
>>  }
>> 
>> +/* Start vmce monitor */
>> +int xc_domain_vmce_monitor_strat(xc_interface *xch,
>> +                                 uint32_t domid)
>> +{
>> +    int ret;
>> +    DECLARE_DOMCTL;
>> +
>> +    domctl.cmd = XEN_DOMCTL_vmce_monitor_start;
>> +    domctl.domain = (domid_t)domid;
>> +    ret = do_domctl(xch, &domctl);
>> +
>> +    return ret ? -1 : 0;
>> +}
>> +
>> +/* End vmce monitor */
>> +int xc_domain_vmce_monitor_end(xc_interface *xch,
>> +                               uint32_t domid,
>> +                               int *vmce_while_migrate)
>> +{
>> +    int ret;
>> +    DECLARE_DOMCTL;
>> +
>> +    domctl.cmd = XEN_DOMCTL_vmce_monitor_end;
>> +    domctl.domain = (domid_t)domid;
>> +    ret = do_domctl(xch, &domctl);
>> +    if ( !ret )
>> +        *vmce_while_migrate = domctl.u.vmce_monitor.vmce_while_migrate;
>> +
>> +    return ret ? -1 : 0;
>> +}
>> +
>>  /* get info from hvm guest for save */
>>  int xc_domain_hvm_getcontext(xc_interface *xch,
>>                               uint32_t domid,
>> diff -r 8869ba37b577 tools/libxc/xc_domain_save.c
>> --- a/tools/libxc/xc_domain_save.c	Thu Jul 19 22:14:08 2012 +0800
>> +++ b/tools/libxc/xc_domain_save.c	Thu Jul 26 22:52:09 2012 +0800
>> @@ -895,6 +895,8 @@
>>       */
>>      int compressing = 0;
>> 
>> +    int vmce_while_migrate = 0;
>> +
>>      int completed = 0;
>> 
>>      if ( hvm && !callbacks->switch_qemu_logdirty )
>> @@ -1109,6 +1111,12 @@
>>          goto out;
>>      }
>> 
>> +    if ( xc_domain_vmce_monitor_strat(xch, dom) )
> 
> 
> You mean s/strat/start/ here, right?

Yep, thanks! Will update.

Jinsong

> 
>> +    {
>> +        PERROR("Error when start vmce monitor\n");
>> +        goto out;
>> +    }
>> +
>>    copypages:
>>  #define wrexact(fd, buf, len) write_buffer(xch, last_iter, ob, (fd), (buf), (len))
>>  #define wruncached(fd, live, buf, len) write_uncached(xch, last_iter, ob, (fd), (buf), (len))
>> @@ -1571,6 +1579,17 @@
>> 
>>      DPRINTF("All memory is saved\n");
>> 
>> +    if ( xc_domain_vmce_monitor_end(xch, dom, &vmce_while_migrate) )
>> +    {
>> +        PERROR("Error when end vmce monitor\n");
>> +        goto out;
>> +    }
>> +    else if ( vmce_while_migrate )
>> +    {
>> +        fprintf(stderr, "vMCE occurred, abort this time and try later.\n");
>> +        goto out;
>> +    }
>> +
>>      /* After last_iter, buffer the rest of pagebuf & tailbuf data into a
>>       * separate output buffer and flush it after the compressed page chunks.
>>       */
>> diff -r 8869ba37b577 tools/libxc/xenctrl.h
>> --- a/tools/libxc/xenctrl.h	Thu Jul 19 22:14:08 2012 +0800
>> +++ b/tools/libxc/xenctrl.h	Thu Jul 26 22:52:09 2012 +0800
>> @@ -568,6 +568,26 @@
>>                               xc_domaininfo_t *info);
>> 
>>  /**
>> + * This function start monitor vmce event.
>> + * @parm xch a handle to an open hypervisor interface
>> + * @parm domid the domain id monitored
>> + * @return 0 on success, -1 on failure
>> + */
>> +int xc_domain_vmce_monitor_strat(xc_interface *xch,
>> +                                 uint32_t domid);
> 
> Ditto.
> 
> Christoph
> 
>> +/**
>> + * This function end monitor vmce event
>> + * @parm xch a handle to an open hypervisor interface
>> + * @parm domid the domain id monitored
>> + * @parm vmce_while_migrate a pointer return whether vMCE occur when migrate
>> + * @return 0 on success, -1 on failure
>> + */
>> +int xc_domain_vmce_monitor_end(xc_interface *xch,
>> +                               uint32_t domid,
>> +                               int *vmce_while_migrate);
>> +
>> +/**
>>   * This function returns information about the context of a hvm
>> domain 
>>   * @parm xch a handle to an open hypervisor interface
>>   * @parm domid the domain to get information from
>> diff -r 8869ba37b577 xen/arch/x86/cpu/mcheck/mce_intel.c
>> --- a/xen/arch/x86/cpu/mcheck/mce_intel.c	Thu Jul 19 22:14:08 2012 +0800
>> +++ b/xen/arch/x86/cpu/mcheck/mce_intel.c	Thu Jul 26 22:52:09 2012 +0800
>> @@ -688,6 +688,12 @@
>>                      goto vmce_failed;
>>                  }
>> 
>> +                if ( unlikely(d->arch.vmce_monitor) )
>> +                {
>> +                    /* vMCE occur when guest migration */
>> +                    d->arch.vmce_while_migrate = 1;
>> +                }
>> +
>>                  /* We will inject vMCE to DOMU*/
>>                  if ( inject_vmce(d) < 0 )
>>                  {
>> diff -r 8869ba37b577 xen/arch/x86/domctl.c
>> --- a/xen/arch/x86/domctl.c	Thu Jul 19 22:14:08 2012 +0800
>> +++ b/xen/arch/x86/domctl.c	Thu Jul 26 22:52:09 2012 +0800
>> @@ -1517,6 +1517,41 @@
>>      }
>>      break;
>> 
>> +    case XEN_DOMCTL_vmce_monitor_start:
>> +    {
>> +        struct domain *d;
>> +
>> +        d = rcu_lock_domain_by_id(domctl->domain);
>> +        if ( d != NULL )
>> +        {
>> +            d->arch.vmce_while_migrate = 0;
>> +            d->arch.vmce_monitor = 1;
>> +            rcu_unlock_domain(d);
>> +        }
>> +        else
>> +            ret = -ESRCH;
>> +    }
>> +    break;
>> +
>> +    case XEN_DOMCTL_vmce_monitor_end:
>> +    {
>> +        struct domain *d;
>> +
>> +        d = rcu_lock_domain_by_id(domctl->domain);
>> +        if ( d != NULL)
>> +        {
>> +            d->arch.vmce_monitor = 0;
>> +            domctl->u.vmce_monitor.vmce_while_migrate =
>> +                                      d->arch.vmce_while_migrate;
>> +            rcu_unlock_domain(d);
>> +            if ( copy_to_guest(u_domctl, domctl, 1) )
>> +                ret = -EFAULT;
>> +        }
>> +        else
>> +            ret = -ESRCH;
>> +    }
>> +    break;
>> +
>>      default:
>>          ret = iommu_do_domctl(domctl, u_domctl);
>>          break;
>> diff -r 8869ba37b577 xen/include/asm-x86/domain.h
>> --- a/xen/include/asm-x86/domain.h	Thu Jul 19 22:14:08 2012 +0800
>> +++ b/xen/include/asm-x86/domain.h	Thu Jul 26 22:52:09 2012 +0800
>> @@ -292,6 +292,10 @@
>>      bool_t has_32bit_shinfo;
>>      /* Domain cannot handle spurious page faults? */
>>      bool_t suppress_spurious_page_faults;
>> +    /* Monitoring guest memory copy of migration */
>> +    bool_t vmce_monitor;
>> +    /* Whether vMCE occur during guest memory copy of migration */
>> +    bool_t vmce_while_migrate;
>> 
>>      /* Continuable domain_relinquish_resources(). */
>>      enum {
>> diff -r 8869ba37b577 xen/include/public/domctl.h
>> --- a/xen/include/public/domctl.h	Thu Jul 19 22:14:08 2012 +0800
>> +++ b/xen/include/public/domctl.h	Thu Jul 26 22:52:09 2012 +0800
>> @@ -850,6 +850,12 @@
>>  typedef struct xen_domctl_set_access_required xen_domctl_set_access_required_t;
>>  DEFINE_XEN_GUEST_HANDLE(xen_domctl_set_access_required_t);
>> 
>> +struct xen_domctl_vmce_monitor {
>> +    uint8_t vmce_while_migrate;
>> +};
>> +typedef struct xen_domctl_vmce_monitor xen_domctl_vmce_monitor_t;
>> +DEFINE_XEN_GUEST_HANDLE(xen_domctl_vmce_monitor_t);
>> +
>>  struct xen_domctl {
>>      uint32_t cmd;
>>  #define XEN_DOMCTL_createdomain                   1
>> @@ -915,6 +921,8 @@
>>  #define XEN_DOMCTL_set_access_required           64
>>  #define XEN_DOMCTL_audit_p2m                     65
>>  #define XEN_DOMCTL_set_virq_handler              66
>> +#define XEN_DOMCTL_vmce_monitor_start            67
>> +#define XEN_DOMCTL_vmce_monitor_end              68
>>  #define XEN_DOMCTL_gdbsx_guestmemio            1000
>>  #define XEN_DOMCTL_gdbsx_pausevcpu             1001
>>  #define XEN_DOMCTL_gdbsx_unpausevcpu           1002
>> @@ -970,6 +978,7 @@
>>          struct xen_domctl_set_access_required access_required;
>>          struct xen_domctl_audit_p2m         audit_p2m;
>>          struct xen_domctl_set_virq_handler  set_virq_handler;
>> +        struct xen_domctl_vmce_monitor      vmce_monitor;
>>          struct xen_domctl_gdbsx_memio       gdbsx_guest_memio;
>>          struct xen_domctl_gdbsx_pauseunp_vcpu gdbsx_pauseunp_vcpu;
>>          struct xen_domctl_gdbsx_domstatus   gdbsx_domstatus;
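[For reference, the control flow the patch adds around the copypages stage can be sketched self-contained. This is an illustrative reduction, not libxc code: `monitor_start`, `monitor_end`, and `save_with_vmce_monitor` are hypothetical stand-ins for the new xc_domain_vmce_monitor_* calls and the xc_domain_save.c hunks above.]

```c
/* Stand-ins for the libxc wrappers added by the patch; the real functions
 * issue XEN_DOMCTL_vmce_monitor_{start,end} domctls via do_domctl().
 * Here monitor_end pretends a vMCE hit the monitored window. */
static int monitor_start(void) { return 0; }
static int monitor_end(int *vmce_while_migrate)
{
    *vmce_while_migrate = 1;
    return 0;
}

/* Mirrors the flow the patch adds around copypages: returns 0 on success,
 * -1 on a monitor error, 1 if a vMCE occurred during the copy window and
 * the caller should abort this attempt and retry the migration later. */
static int save_with_vmce_monitor(void)
{
    int vmce_while_migrate = 0;

    if (monitor_start())
        return -1;
    /* ... copypages: the critical window the monitor brackets ... */
    if (monitor_end(&vmce_while_migrate))
        return -1;
    if (vmce_while_migrate)
        return 1;   /* vMCE occurred: abort this time, try later */
    return 0;
}
```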


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 09:00:47 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 09:00:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwrGE-0005wc-O0; Thu, 02 Aug 2012 09:00:02 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SwrGD-0005ud-7r
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 09:00:01 +0000
Received: from [85.158.143.99:23109] by server-2.bemta-4.messagelabs.com id
	25/33-17938-0914A105; Thu, 02 Aug 2012 09:00:00 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-216.messagelabs.com!1343897999!26532952!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3ODA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13257 invoked from network); 2 Aug 2012 09:00:00 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-7.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 09:00:00 -0000
X-IronPort-AV: E=Sophos;i="4.77,699,1336348800"; d="scan'208";a="13817072"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 08:59:27 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Thu, 2 Aug 2012
	09:59:26 +0100
Message-ID: <1343897960.27221.105.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <ian.jackson@eu.citrix.com>
Date: Thu, 2 Aug 2012 09:59:20 +0100
In-Reply-To: <1343838260-17725-12-git-send-email-ian.jackson@eu.citrix.com>
References: <1343838260-17725-1-git-send-email-ian.jackson@eu.citrix.com>
	<1343838260-17725-12-git-send-email-ian.jackson@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 11/11] libxl: -Wunused-parameter
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-08-01 at 17:24 +0100, Ian Jackson wrote:
> 
>  * The autogenerated function libxl_event_init_type ignores the type
>    parameter. 

Your wish etc...

8<------------------------------------------

# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1343897835 -3600
# Node ID 5bbb555747204f9b1926741416f79ab6b8b02361
# Parent  5feb45a76581091bd267eecccb078afb91db0b8c
libxl: idl: always initialise the KeyedEnum keyvar in the member init function

Previously we only initialised it if an explicit keyvar_init_val was given but
not if the default was implicitly 0.

In the generated code this only changes the unused libxl_event_init_type
function:

 void libxl_event_init_type(libxl_event *p, libxl_event_type type)
 {
+    assert(!p->type);
+    p->type = type;
     switch (p->type) {
     case LIBXL_EVENT_TYPE_DOMAIN_SHUTDOWN:
         break;

However, I think it is wrong that this function is unused; this and
libxl_event_init should be used by libxl__event_new. As it happens both are
just memset to zero, but for correctness we should use the init functions (in
case the IDL changes).

In the generator we also need to properly handle init_val == 0, which the
current if statements incorrectly treat as False. This doesn't actually have
any impact on the generated code.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

diff -r 5feb45a76581 -r 5bbb55574720 tools/libxl/gentypes.py
--- a/tools/libxl/gentypes.py	Thu Aug 02 09:26:36 2012 +0100
+++ b/tools/libxl/gentypes.py	Thu Aug 02 09:57:15 2012 +0100
@@ -162,17 +162,20 @@ def libxl_C_type_member_init(ty, field):
                                 ku.keyvar.type.make_arg(ku.keyvar.name))
     s += "{\n"
     
-    if ku.keyvar.init_val:
+    if ku.keyvar.init_val is not None:
         init_val = ku.keyvar.init_val
-    elif ku.keyvar.type.init_val:
+    elif ku.keyvar.type.init_val is not None:
         init_val = ku.keyvar.type.init_val
     else:
         init_val = None
         
+    (nparent,fexpr) = ty.member(ty.pass_arg("p"), ku.keyvar, isref=True)
     if init_val is not None:
-        (nparent,fexpr) = ty.member(ty.pass_arg("p"), ku.keyvar, isref=True)
         s += "    assert(%s == %s);\n" % (fexpr, init_val)
-        s += "    %s = %s;\n" % (fexpr, ku.keyvar.name)
+    else:
+        s += "    assert(!%s);\n" % (fexpr)
+    s += "    %s = %s;\n" % (fexpr, ku.keyvar.name)
+
     (nparent,fexpr) = ty.member(ty.pass_arg("p"), field, isref=True)
     s += _libxl_C_type_init(ku, fexpr, parent=nparent, subinit=True)
     s += "}\n"
diff -r 5feb45a76581 -r 5bbb55574720 tools/libxl/libxl_event.c
--- a/tools/libxl/libxl_event.c	Thu Aug 02 09:26:36 2012 +0100
+++ b/tools/libxl/libxl_event.c	Thu Aug 02 09:57:15 2012 +0100
@@ -1163,7 +1163,10 @@ libxl_event *libxl__event_new(libxl__egc
     libxl_event *ev;
 
     ev = libxl__zalloc(NOGC,sizeof(*ev));
-    ev->type = type;
+
+    libxl_event_init(ev);
+    libxl_event_init_type(ev, type);
+
     ev->domid = domid;
 
     return ev;
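[The truthiness pitfall the gentypes.py hunk fixes can be reduced to a standalone sketch. This is a hypothetical illustration, not libxl code: the `pick_init_val_*` names are invented to contrast the old and new conditionals.]

```python
# `if x:` treats an explicit 0 the same as "not given", whereas
# `if x is not None:` only treats None as "not given".

def pick_init_val_buggy(keyvar_init_val, type_init_val):
    # Old logic: an explicit init_val of 0 falls through to the type default.
    if keyvar_init_val:
        return keyvar_init_val
    elif type_init_val:
        return type_init_val
    return None

def pick_init_val_fixed(keyvar_init_val, type_init_val):
    # New logic: only None means "not given"; 0 is a real value.
    if keyvar_init_val is not None:
        return keyvar_init_val
    elif type_init_val is not None:
        return type_init_val
    return None

assert pick_init_val_buggy(0, 7) == 7   # explicit 0 silently lost
assert pick_init_val_fixed(0, 7) == 0   # explicit 0 honoured
```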




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 09:01:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 09:01:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwrGe-0005yE-CU; Thu, 02 Aug 2012 09:00:28 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SwrGc-0005y3-QU
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 09:00:27 +0000
Received: from [85.158.138.51:5388] by server-1.bemta-3.messagelabs.com id
	83/5D-31934-8A14A105; Thu, 02 Aug 2012 09:00:24 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-174.messagelabs.com!1343898020!25986538!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjE0NjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19875 invoked from network); 2 Aug 2012 09:00:21 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 09:00:21 -0000
X-IronPort-AV: E=Sophos;i="4.77,699,1336363200"; d="scan'208";a="33307895"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 05:00:19 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 2 Aug 2012 05:00:19 -0400
Received: from cosworth.uk.xensource.com ([10.80.16.52] ident=ianc)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1SwrGU-0004yc-US;
	Thu, 02 Aug 2012 10:00:18 +0100
MIME-Version: 1.0
X-Mercurial-Node: e638a0aeb9856003661aa75d684855c6fd940b3c
Message-ID: <e638a0aeb9856003661a.1343898018@cosworth.uk.xensource.com>
User-Agent: Mercurial-patchbomb/1.6.4
Date: Thu, 2 Aug 2012 10:00:18 +0100
From: Ian Campbell <ian.campbell@citrix.com>
To: xen-devel@lists.xen.org
Cc: ian.jackson@citrix.com
Subject: [Xen-devel] [PATCH] libxl: prefix *.for-check with _ to mark it as
 a generated file
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1343897973 -3600
# Node ID e638a0aeb9856003661aa75d684855c6fd940b3c
# Parent  5bbb555747204f9b1926741416f79ab6b8b02361
libxl: prefix *.for-check with _ to mark it as a generated file.

Keeps it out of my greps etc.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

diff -r 5bbb55574720 -r e638a0aeb985 .hgignore
--- a/.hgignore	Thu Aug 02 09:57:15 2012 +0100
+++ b/.hgignore	Thu Aug 02 09:59:33 2012 +0100
@@ -187,7 +187,7 @@
 ^tools/libxl/testidl\.c$
 ^tools/libxl/tmp\..*$
 ^tools/libxl/.*\.new$
-^tools/libxl/libxl\.api-for-check
+^tools/libxl/_libxl\.api-for-check
 ^tools/libvchan/vchan-node[12]$
 ^tools/libaio/src/.*\.ol$
 ^tools/libaio/src/.*\.os$
diff -r 5bbb55574720 -r e638a0aeb985 tools/libxl/Makefile
--- a/tools/libxl/Makefile	Thu Aug 02 09:57:15 2012 +0100
+++ b/tools/libxl/Makefile	Thu Aug 02 09:59:33 2012 +0100
@@ -114,10 +114,10 @@ all: $(CLIENTS) libxenlight.so libxenlig
 genpath-target = $(call buildmakevars2file,_paths.h.tmp)
 $(eval $(genpath-target))
 
-libxl.api-ok: check-libxl-api-rules libxl.api-for-check
+libxl.api-ok: check-libxl-api-rules _libxl.api-for-check
 	$(PERL) $^
 
-%.api-for-check: %.h
+_%.api-for-check: %.h
 	$(CC) $(CPPFLAGS) $(CFLAGS) $(CFLAGS_$*.o) -c -E $< $(APPEND_CFLAGS) \
 		-DLIBXL_EXTERNAL_CALLERS_ONLY=LIBXL_EXTERNAL_CALLERS_ONLY \
 		>$@.new
@@ -210,7 +210,7 @@ install: all
 .PHONY: clean
 clean:
 	$(RM) -f _*.h *.o *.so* *.a $(CLIENTS) $(DEPS)
-	$(RM) -f _*.c *.pyc _paths.*.tmp *.api-for-check
+	$(RM) -f _*.c *.pyc _paths.*.tmp _*.api-for-check
 	$(RM) -f testidl.c.new testidl.c
 #	$(RM) -f $(AUTOSRCS) $(AUTOINCS)
 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 09:02:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 09:02:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwrHX-00063z-Qc; Thu, 02 Aug 2012 09:01:23 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SwrHW-00063d-AQ
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 09:01:22 +0000
Received: from [85.158.143.99:31633] by server-3.bemta-4.messagelabs.com id
	BF/F4-01511-1E14A105; Thu, 02 Aug 2012 09:01:21 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-216.messagelabs.com!1343898080!18221585!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3ODA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14370 invoked from network); 2 Aug 2012 09:01:21 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 09:01:21 -0000
X-IronPort-AV: E=Sophos;i="4.77,699,1336348800"; d="scan'208";a="13817123"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 09:01:20 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Thu, 2 Aug 2012
	10:01:19 +0100
Message-ID: <1343898078.27221.106.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Olaf Hering <olaf@aepfle.de>
Date: Thu, 2 Aug 2012 10:01:18 +0100
In-Reply-To: <480b8f03e6635873e32b.1343895878@probook.site>
References: <480b8f03e6635873e32b.1343895878@probook.site>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] pygrub: add quoting to install recipe
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-02 at 09:24 +0100, Olaf Hering wrote:
> # HG changeset patch
> # User Olaf Hering <olaf@aepfle.de>
> # Date 1343895857 -7200
> # Node ID 480b8f03e6635873e32b6fdcb4a8e19b7b8b3a04
> # Parent  3d622e2c7cfb15b37498e9bb8f1005516fe99f2f
> pygrub: add quoting to install recipe
> 
> The changeset 25694:e20085770cb5 causes a syntax error if readlink
> returns nothing due to a non-existent path:
> 
> [  148s] set -e; if [ `readlink -f /home/abuild/rpmbuild/BUILD/xen-4.2.25700/non-dbg/dist/install//usr/bin` != \
> [  148s]              `readlink -f /usr/lib64/xen/bin` ]; then \
> [  148s]     ln -sf /usr/lib64/xen/bin/pygrub /home/abuild/rpmbuild/BUILD/xen-4.2.25700/non-dbg/dist/install//usr/bin; \
> [  148s] fi
> [  148s] /bin/sh: line 0: [: /home/abuild/rpmbuild/BUILD/xen-4.2.25700/non-dbg/dist/install/usr/bin: unary operator expected
> 
> Add quoting to fix the error.
> 
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

I would apply but xenbits appears to be down. I've pinged the admins.

Ian.



From xen-devel-bounces@lists.xen.org Thu Aug 02 09:05:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 09:05:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwrKt-0006Lo-Dy; Thu, 02 Aug 2012 09:04:51 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SwrKs-0006LV-N3
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 09:04:50 +0000
Received: from [85.158.138.51:20287] by server-4.bemta-3.messagelabs.com id
	5A/E5-29069-1B24A105; Thu, 02 Aug 2012 09:04:49 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-174.messagelabs.com!1343898289!20508268!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM3MjA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4002 invoked from network); 2 Aug 2012 09:04:49 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-174.messagelabs.com with SMTP;
	2 Aug 2012 09:04:49 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 02 Aug 2012 10:04:49 +0100
Message-Id: <501A5EF7020000780009219C@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Thu, 02 Aug 2012 10:05:27 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Konrad Rzeszutek Wilk" <konrad.wilk@oracle.com>
References: <1343745804-28028-1-git-send-email-konrad.wilk@oracle.com>
	<20120801155040.GB15812@phenom.dumpdata.com>
In-Reply-To: <20120801155040.GB15812@phenom.dumpdata.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xen.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH] Boot PV guests with more than 128GB (v2)
 for 3.7
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 01.08.12 at 17:50, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> With these patches I've gotten it to boot up to 384GB. Around that area
> something weird happens - mainly the pagetables that the toolstack allocated
> seem to have missing data. I haven't looked at the details, but this is what
> the domain builder tells me:
> 
> 
> xc_dom_alloc_segment:   ramdisk      : 0xffffffff82278000 -> 
> 0xffffffff930b4000  (pfn 0x2278 + 0x10e3c pages)
> xc_dom_malloc            : 1621 kB
> xc_dom_pfn_to_ptr: domU mapping: pfn 0x2278+0x10e3c at 0x7fb0853a2000
> xc_dom_do_gunzip: unzip ok, 0x4ba831c -> 0x10e3be10
> xc_dom_alloc_segment:   phys2mach    : 0xffffffff930b4000 -> 
> 0xffffffffc30b4000  (pfn 0x130b4 + 0x30000 pages)
> xc_dom_malloc            : 4608 kB
> xc_dom_pfn_to_ptr: domU mapping: pfn 0x130b4+0x30000 at 0x7fb0553a2000
> xc_dom_alloc_page   :   start info   : 0xffffffffc30b4000 (pfn 0x430b4)
> xc_dom_alloc_page   :   xenstore     : 0xffffffffc30b5000 (pfn 0x430b5)
> xc_dom_alloc_page   :   console      : 0xffffffffc30b6000 (pfn 0x430b6)
> nr_page_tables: 0x0000ffffffffffff/48: 0xffff000000000000 -> 
> 0xffffffffffffffff, 1 table(s)
> nr_page_tables: 0x0000007fffffffff/39: 0xffffff8000000000 -> 
> 0xffffffffffffffff, 1 table(s)
> nr_page_tables: 0x000000003fffffff/30: 0xffffffff80000000 -> 
> 0xffffffffffffffff, 2 table(s)
> nr_page_tables: 0x00000000001fffff/21: 0xffffffff80000000 -> 
> 0xffffffffc33fffff, 538 table(s)
> xc_dom_alloc_segment:   page tables  : 0xffffffffc30b7000 -> 
> 0xffffffffc32d5000  (pfn 0x430b7 + 0x21e pages)
> xc_dom_pfn_to_ptr: domU mapping: pfn 0x430b7+0x21e at 0x7fb055184000
> xc_dom_alloc_page   :   boot stack   : 0xffffffffc32d5000 (pfn 0x432d5)
> xc_dom_build_image  : virt_alloc_end : 0xffffffffc32d6000
> xc_dom_build_image  : virt_pgtab_end : 0xffffffffc3400000
> 
> Note it is 0xffffffffc30b4000 - so already past level2_kernel_pgt
> (L3[510]) and in level2_fixmap_pgt territory (L3[511]).
> 
> At that stage we are still operating on the Xen-provided pagetables - which
> look to have L4[511][511] empty! That sounds to me like a Xen toolstack
> problem. Jan, have you seen something similar to this?

No, we haven't, but I also don't think anyone has tried to create
as big a DomU. I was, however, under the impression that DomU-s
this big had been created at Oracle before. Or was that only up
to 256GB perhaps?

In any case, setup_pgtables_x86_64() indeed looks flawed
to me: While the clearing of l1tab looks right, l[23]tab get
cleared (and hence a new table allocated) too early. l2tab
should really get cleared only when l1tab gets cleared _and_
the L2 clearing condition is true. Similarly for l3tab then, and
of course - even though it is unlikely ever to matter -
setup_pgtables_x86_32_pae() is broken in the same way.

Afaict this got broken with the domain build re-write between
3.0.4 and 3.1 (the old code looks alright).

Jan



From xen-devel-bounces@lists.xen.org Thu Aug 02 09:23:12 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 09:23:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swrbz-0006cA-22; Thu, 02 Aug 2012 09:22:31 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Swrbx-0006c5-Un
	for xen-devel@lists.xensource.com; Thu, 02 Aug 2012 09:22:30 +0000
Received: from [85.158.143.99:41832] by server-1.bemta-4.messagelabs.com id
	B2/35-24392-5D64A105; Thu, 02 Aug 2012 09:22:29 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1343899348!22863045!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM3NzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12101 invoked from network); 2 Aug 2012 09:22:28 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-6.tower-216.messagelabs.com with SMTP;
	2 Aug 2012 09:22:28 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 02 Aug 2012 10:22:27 +0100
Message-Id: <501A631D02000078000921B8@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Thu, 02 Aug 2012 10:23:09 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Stefano Stabellini" <stefano.stabellini@eu.citrix.com>
References: <1321471508-31633-1-git-send-email-jean.guyader@eu.citrix.com>
	<1321471508-31633-2-git-send-email-jean.guyader@eu.citrix.com>
	<1321471508-31633-3-git-send-email-jean.guyader@eu.citrix.com>
	<1321471508-31633-4-git-send-email-jean.guyader@eu.citrix.com>
	<1321471508-31633-5-git-send-email-jean.guyader@eu.citrix.com>
	<alpine.DEB.2.02.1208011846510.4645@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1208011846510.4645@kaball.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"Keir \(Xen.org\)" <keir@xen.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"Tim\(Xen.org\)" <tim@xen.org>,
	"allen.m.kay@intel.com" <allen.m.kay@intel.com>,
	Jean Guyader <jean.guyader@eu.citrix.com>,
	Jean Guyader <Jean.Guyader@citrix.com>,
	Attilio Rao <attilio.rao@citrix.com>
Subject: Re: [Xen-devel] Should we revert "mm: New XENMEM space,
 XENMAPSPACE_gmfn_range"?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 01.08.12 at 19:55, Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
> I was reading more about this commit because it breaks the ABI on ARM,
> and I realized that on x86 there is no standard that specifies
> the alignment of fields in a struct.

There is - the psABI supplements to the SVR4 ABI.

> As a consequence I don't think we can really be sure that between .domid
> and .space we always have 16 bits of padding.

It would be very strange for a modern ABI (other than perhaps
ones targeting exclusively embedded environments, where space
matters) to allow structure fields at mis-aligned offsets. Is that
really the case for ARM? This would make the compiled code
accessing such fields pretty ugly, since I seem to recall that loads
and stores are required to be aligned there.

> I am afraid that if a user compiles Linux or another guest kernel with a
> compiler other than gcc, this hypercall might break. In fact it already
> happened just switching from x86 to ARM.
> Also, considering that the memory.h interface is supposed to be ANSI C,
> isn't it wrong to assume compiler specific artifacts anyway?

This is not compiler specific, but platform defined. Compilers
merely need to conform to that specification; if they don't they
can't be used for building Xen interfacing code without manual
tweaking (perhaps re-creation) of the interface headers.

Jan



From xen-devel-bounces@lists.xen.org Thu Aug 02 09:23:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 09:23:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swrcn-0006eb-G0; Thu, 02 Aug 2012 09:23:21 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Swrcl-0006eB-IG
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 09:23:19 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1343899370!11889684!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjE3NTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4272 invoked from network); 2 Aug 2012 09:22:52 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 09:22:52 -0000
X-IronPort-AV: E=Sophos;i="4.77,699,1336363200"; d="scan'208";a="33309007"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 05:22:37 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 2 Aug 2012 05:22:37 -0400
Received: from cosworth.uk.xensource.com ([10.80.16.52] ident=ianc)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1Swrc5-0005ML-89;
	Thu, 02 Aug 2012 10:22:37 +0100
MIME-Version: 1.0
X-Mercurial-Node: 075da4778b0a1a84680ef0acd26fcd3b01adeee4
Message-ID: <075da4778b0a1a84680e.1343899357@cosworth.uk.xensource.com>
User-Agent: Mercurial-patchbomb/1.6.4
Date: Thu, 2 Aug 2012 10:22:37 +0100
From: Ian Campbell <ian.campbell@citrix.com>
To: xen-devel@lists.xen.org
Cc: ian.jackson@citrix.com, Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH] libxl: const correctness for
	libxl__xs_path_cleanup
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1343899348 -3600
# Node ID 075da4778b0a1a84680ef0acd26fcd3b01adeee4
# Parent  e638a0aeb9856003661aa75d684855c6fd940b3c
libxl: const correctness for libxl__xs_path_cleanup

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

diff -r e638a0aeb985 -r 075da4778b0a tools/libxl/libxl_device.c
--- a/tools/libxl/libxl_device.c	Thu Aug 02 09:59:33 2012 +0100
+++ b/tools/libxl/libxl_device.c	Thu Aug 02 10:22:28 2012 +0100
@@ -523,7 +523,7 @@ DEFINE_DEVICES_ADD(nic)
 int libxl__device_destroy(libxl__gc *gc, libxl__device *dev)
 {
     char *be_path = libxl__device_backend_path(gc, dev);
-    char *fe_path = libxl__device_frontend_path(gc, dev);
+    const char *fe_path = libxl__device_frontend_path(gc, dev);
     xs_transaction_t t = 0;
     int rc;
 
diff -r e638a0aeb985 -r 075da4778b0a tools/libxl/libxl_internal.h
--- a/tools/libxl/libxl_internal.h	Thu Aug 02 09:59:33 2012 +0100
+++ b/tools/libxl/libxl_internal.h	Thu Aug 02 10:22:28 2012 +0100
@@ -614,7 +614,7 @@ void libxl__xs_transaction_abort(libxl__
  * It mimics xenstore-rm -t behaviour.
  */
 _hidden int libxl__xs_path_cleanup(libxl__gc *gc, xs_transaction_t t,
-                                   char *user_path);
+                                   const char *user_path);
 
 /*
  * Event generation functions provided by the libxl event core to the
diff -r e638a0aeb985 -r 075da4778b0a tools/libxl/libxl_xshelp.c
--- a/tools/libxl/libxl_xshelp.c	Thu Aug 02 09:59:33 2012 +0100
+++ b/tools/libxl/libxl_xshelp.c	Thu Aug 02 10:22:28 2012 +0100
@@ -233,7 +233,8 @@ void libxl__xs_transaction_abort(libxl__
     *t = 0;
 }
 
-int libxl__xs_path_cleanup(libxl__gc *gc, xs_transaction_t t, char *user_path)
+int libxl__xs_path_cleanup(libxl__gc *gc, xs_transaction_t t,
+                           const char *user_path)
 {
     unsigned int nb = 0;
     char *path, *last, *val;

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 09:25:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 09:25:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwreC-0006lS-VH; Thu, 02 Aug 2012 09:24:48 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SwreB-0006lF-Jb
	for xen-devel@lists.xensource.com; Thu, 02 Aug 2012 09:24:47 +0000
Received: from [85.158.143.99:54958] by server-3.bemta-4.messagelabs.com id
	45/94-01511-E574A105; Thu, 02 Aug 2012 09:24:46 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-4.tower-216.messagelabs.com!1343899486!24624287!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM3NzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30706 invoked from network); 2 Aug 2012 09:24:46 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-4.tower-216.messagelabs.com with SMTP;
	2 Aug 2012 09:24:46 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 02 Aug 2012 10:24:46 +0100
Message-Id: <501A63A602000078000921BB@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Thu, 02 Aug 2012 10:25:26 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Jean Guyader" <jean.guyader@eu.citrix.com>,
	"Stefano Stabellini" <Stefano.Stabellini@eu.citrix.com>,
	"Keir Fraser" <keir@xen.org>
References: <alpine.DEB.2.02.1208011846510.4645@kaball.uk.xensource.com>
	<CC3F2EC4.477D8%keir@xen.org>
In-Reply-To: <CC3F2EC4.477D8%keir@xen.org>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"allen.m.kay@intel.com" <allen.m.kay@intel.com>,
	"Tim \(Xen.org\)" <tim@xen.org>, Jean Guyader <Jean.Guyader@citrix.com>,
	Attilio Rao <attilio.rao@citrix.com>
Subject: Re: [Xen-devel] Should we revert "mm: New XENMEM space,
 XENMAPSPACE_gmfn_range"?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 01.08.12 at 20:06, Keir Fraser <keir@xen.org> wrote:
> Should we be changing our rules on public headers, allowing
> compiler-specific extensions to precisely lay out our structures? Quite
> possibly. We used to do that, but it got shot down by the ppc and ia64
> arches who wanted an easier life and just rely on compiler default layout
> for a particular platform. Of course those maintainers aren't actually
> voting any more. ;)

In the expectation that other architectures may yet be ported, keeping the
headers as generic as possible is likely the best thing to do. After all, it
was (hopefully!) for good reason that the PPC and/or IA64 folks wanted the
non-standard stuff removed.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 09:39:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 09:39:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swrrc-000789-9o; Thu, 02 Aug 2012 09:38:40 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Swrrb-000784-4n
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 09:38:39 +0000
Received: from [85.158.138.51:48941] by server-9.bemta-3.messagelabs.com id
	DC/30-27628-E9A4A105; Thu, 02 Aug 2012 09:38:38 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-174.messagelabs.com!1343900316!25995119!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNjkwMjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9549 invoked from network); 2 Aug 2012 09:38:37 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 09:38:37 -0000
X-IronPort-AV: E=Sophos;i="4.77,699,1336363200"; d="scan'208";a="203909949"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 05:38:35 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 2 Aug 2012 05:38:35 -0400
Received: from cosworth.uk.xensource.com ([10.80.16.52] ident=ianc)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1SwrrW-0005aY-S5;
	Thu, 02 Aug 2012 10:38:34 +0100
MIME-Version: 1.0
X-Mercurial-Node: f345fbfa8f50975b5c327669ceaf68c1d098da8f
Message-ID: <f345fbfa8f50975b5c32.1343900314@cosworth.uk.xensource.com>
User-Agent: Mercurial-patchbomb/1.6.4
Date: Thu, 2 Aug 2012 10:38:34 +0100
From: Ian Campbell <ian.campbell@citrix.com>
To: xen-devel@lists.xen.org
Cc: ian.jackson@citrix.com, Greg Wettstein <greg@wind.enjellic.com>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH] libxl: fix cleanup of tap devices in
	libxl__device_destroy
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1343900058 -3600
# Node ID f345fbfa8f50975b5c327669ceaf68c1d098da8f
# Parent  075da4778b0a1a84680ef0acd26fcd3b01adeee4
libxl: fix cleanup of tap devices in libxl__device_destroy

We pass be_path to libxl__device_destroy_tapdisk, but by that point the path
has already been deleted from xenstore, so the attempt to read tapdisk-params
fails. However, it appears that we do need to destroy the tap device after
tearing down xenstore, to avoid the leak reported by Greg Wettstein in
<201207312141.q6VLfJje012656@wind.enjellic.com>.

So instead, read tapdisk-params within the cleanup transaction, before the
removal, and pass the result down to libxl__device_destroy_tapdisk.
tapdisk-params may of course be NULL if the device isn't a tap device.

There is no need to tear down the tap device from libxl__initiate_device_remove
since this ultimately calls libxl__device_destroy.

Propagate and log errors from libxl__device_destroy_tapdisk.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---

This patch depends on Ian Jackson's "libxl: unify libxl__device_destroy and
device_hotplug_done" and my "libxl: const correctness for
libxl__xs_path_cleanup".

diff -r 075da4778b0a -r f345fbfa8f50 tools/libxl/libxl_blktap2.c
--- a/tools/libxl/libxl_blktap2.c	Thu Aug 02 10:22:28 2012 +0100
+++ b/tools/libxl/libxl_blktap2.c	Thu Aug 02 10:34:18 2012 +0100
@@ -51,28 +51,36 @@ char *libxl__blktap_devpath(libxl__gc *g
 }
 
 
-void libxl__device_destroy_tapdisk(libxl__gc *gc, char *be_path)
+int libxl__device_destroy_tapdisk(libxl__gc *gc, char *params)
 {
-    char *path, *params, *type, *disk;
+    char *type, *disk;
     int err;
     tap_list_t tap;
 
-    path = libxl__sprintf(gc, "%s/tapdisk-params", be_path);
-    if (!path) return;
-
-    params = libxl__xs_read(gc, XBT_NULL, path);
-    if (!params) return;
-
     type = params;
     disk = strchr(params, ':');
-    if (!disk) return;
+    if (!disk) {
+        LOG(ERROR, "Unable to parse params %s", params);
+        return ERROR_INVAL;
+    }
 
     *disk++ = '\0';
 
     err = tap_ctl_find(type, disk, &tap);
-    if (err < 0) return;
+    if (err < 0) {
+        /* returns -errno */
+        LOGEV(ERROR, -err, "Unable to find type %s disk %s", type, disk);
+        return ERROR_FAIL;
+    }
 
-    tap_ctl_destroy(tap.id, tap.minor);
+    err = tap_ctl_destroy(tap.id, tap.minor);
+    if (err < 0) {
+        LOGEV(ERROR, -err, "Failed to destroy tap device id %d minor %d",
+              tap.id, tap.minor);
+        return ERROR_FAIL;
+    }
+
+    return 0;
 }
 
 /*
diff -r 075da4778b0a -r f345fbfa8f50 tools/libxl/libxl_device.c
--- a/tools/libxl/libxl_device.c	Thu Aug 02 10:22:28 2012 +0100
+++ b/tools/libxl/libxl_device.c	Thu Aug 02 10:34:18 2012 +0100
@@ -522,8 +522,10 @@ DEFINE_DEVICES_ADD(nic)
 
 int libxl__device_destroy(libxl__gc *gc, libxl__device *dev)
 {
-    char *be_path = libxl__device_backend_path(gc, dev);
+    const char *be_path = libxl__device_backend_path(gc, dev);
     const char *fe_path = libxl__device_frontend_path(gc, dev);
+    const char *tapdisk_path = GCSPRINTF("%s/%s", be_path, "tapdisk-params");
+    char *tapdisk_params;
     xs_transaction_t t = 0;
     int rc;
 
@@ -531,6 +533,9 @@ int libxl__device_destroy(libxl__gc *gc,
         rc = libxl__xs_transaction_start(gc, &t);
         if (rc) goto out;
 
+        /* May not exist if this is not a tap device */
+        tapdisk_params = libxl__xs_read(gc, t, tapdisk_path);
+
         libxl__xs_path_cleanup(gc, t, fe_path);
         libxl__xs_path_cleanup(gc, t, be_path);
 
@@ -539,7 +544,8 @@ int libxl__device_destroy(libxl__gc *gc,
         if (rc < 0) goto out;
     }
 
-    libxl__device_destroy_tapdisk(gc, be_path);
+    if (tapdisk_params)
+        rc = libxl__device_destroy_tapdisk(gc, tapdisk_params);
 
 out:
     return rc;
@@ -789,8 +795,6 @@ void libxl__initiate_device_remove(libxl
         if (rc < 0) goto out;
     }
 
-    libxl__device_destroy_tapdisk(gc, be_path);
-
     rc = libxl__ev_devstate_wait(gc, &aodev->backend_ds,
                                  device_backend_callback,
                                  state_path, XenbusStateClosed,
diff -r 075da4778b0a -r f345fbfa8f50 tools/libxl/libxl_internal.h
--- a/tools/libxl/libxl_internal.h	Thu Aug 02 10:22:28 2012 +0100
+++ b/tools/libxl/libxl_internal.h	Thu Aug 02 10:34:18 2012 +0100
@@ -1344,8 +1344,9 @@ _hidden char *libxl__blktap_devpath(libx
 /* libxl__device_destroy_tapdisk:
  *   Destroys any tapdisk process associated with the backend represented
  *   by be_path.
+ *   Always logs on failure.
  */
-_hidden void libxl__device_destroy_tapdisk(libxl__gc *gc, char *be_path);
+_hidden int libxl__device_destroy_tapdisk(libxl__gc *gc, char *params);
 
 _hidden int libxl__device_from_disk(libxl__gc *gc, uint32_t domid,
                                    libxl_device_disk *disk,
diff -r 075da4778b0a -r f345fbfa8f50 tools/libxl/libxl_noblktap2.c
--- a/tools/libxl/libxl_noblktap2.c	Thu Aug 02 10:22:28 2012 +0100
+++ b/tools/libxl/libxl_noblktap2.c	Thu Aug 02 10:34:18 2012 +0100
@@ -28,8 +28,9 @@ char *libxl__blktap_devpath(libxl__gc *g
     return NULL;
 }
 
-void libxl__device_destroy_tapdisk(libxl__gc *gc, char *be_path)
+int libxl__device_destroy_tapdisk(libxl__gc *gc, char *params)
 {
+    return 0;
 }
 
 /*

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 09:40:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 09:40:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swrsz-0007DZ-T5; Thu, 02 Aug 2012 09:40:05 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Swrsx-0007Cv-S2
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 09:40:04 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1343900387!11893866!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM3NzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23879 invoked from network); 2 Aug 2012 09:39:47 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-27.messagelabs.com with SMTP;
	2 Aug 2012 09:39:47 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 02 Aug 2012 10:39:46 +0100
Message-Id: <501A672B02000078000921EA@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Thu, 02 Aug 2012 10:40:27 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>,
	"Dario Faggioli" <raistlin@linux.it>
References: <1343837796.4958.32.camel@Solace> <501959BE.60801@citrix.com>
In-Reply-To: <501959BE.60801@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Andre Przywara <andre.przywara@amd.com>,
	Anil Madhavapeddy <anil@recoil.org>, George Dunlap <dunlapg@gmail.com>,
	xen-devel <xen-devel@lists.xen.org>, Yang Z Zhang <yang.z.zhang@intel.com>
Subject: Re: [Xen-devel] NUMA TODO-list for xen-devel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 01.08.12 at 18:30, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> - Xen NUMA internals.  Placing items such as the per-cpu stacks and data
> area on the local NUMA node, rather than unconditionally on node 0 at
> the moment.  As part of this, there will be changes to
> alloc_{dom,xen}heap_page() to allow specification of which node(s) to
> allocate memory from.

Those interfaces already support flags to be passed, including a
node ID. It just needs to be made use of in more places.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 09:43:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 09:43:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwrvJ-0007M6-Dj; Thu, 02 Aug 2012 09:42:29 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SwrvI-0007Li-0i
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 09:42:28 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1343900541!11852357!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM3NzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21690 invoked from network); 2 Aug 2012 09:42:21 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-13.tower-27.messagelabs.com with SMTP;
	2 Aug 2012 09:42:21 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 02 Aug 2012 10:42:21 +0100
Message-Id: <501A67C502000078000921FF@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Thu, 02 Aug 2012 10:43:01 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Dario Faggioli" <raistlin@linux.it>
References: <1343837796.4958.32.camel@Solace>
In-Reply-To: <1343837796.4958.32.camel@Solace>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Andre Przywara <andre.przywara@amd.com>,
	Anil Madhavapeddy <anil@recoil.org>, George Dunlap <dunlapg@gmail.com>,
	xen-devel <xen-devel@lists.xen.org>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Yang Z Zhang <yang.z.zhang@intel.com>
Subject: Re: [Xen-devel] NUMA TODO-list for xen-devel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 01.08.12 at 18:16, Dario Faggioli <raistlin@linux.it> wrote:
>     - Virtual NUMA topology exposure to guests (a.k.a guest-numa). If a
>       guest ends up on more than one nodes, make sure it knows it's
>       running on a NUMA platform (smaller than the actual host, but
>       still NUMA). This interacts with some of the above points:

The question is whether this is really useful beyond the (I would
suppose) relatively small set of cases where migration isn't
needed.

>        * consider this during automatic placement for
>          resuming/migrating domains (if they have a virtual topology,
>          better not to change it);
>        * consider this during memory migration (it can change the
>          actual topology, should we update it on-line or disable memory
>          migration?)

The question is whether trading functionality for performance
is an acceptable choice.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 09:46:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 09:46:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwryC-0007We-0C; Thu, 02 Aug 2012 09:45:28 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SwryB-0007WX-3V
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 09:45:27 +0000
Received: from [85.158.143.99:2616] by server-2.bemta-4.messagelabs.com id
	C5/AC-17938-63C4A105; Thu, 02 Aug 2012 09:45:26 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-216.messagelabs.com!1343900724!29300171!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1712 invoked from network); 2 Aug 2012 09:45:25 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 09:45:25 -0000
X-IronPort-AV: E=Sophos;i="4.77,699,1336348800"; d="scan'208";a="13818248"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 09:45:24 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Thu, 2 Aug 2012
	10:45:24 +0100
Message-ID: <1343900722.27221.107.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Date: Thu, 2 Aug 2012 10:45:22 +0100
In-Reply-To: <f345fbfa8f50975b5c32.1343900314@cosworth.uk.xensource.com>
References: <f345fbfa8f50975b5c32.1343900314@cosworth.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Greg Wettstein <greg@wind.enjellic.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH] libxl: fix cleanup of tap devices in
 libxl__device_destroy
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


>          rc = libxl__xs_transaction_start(gc, &t);
>          if (rc) goto out;
>  
> +        /* May not exist if this is not a tap device */
> +        tapdisk_params = libxl__xs_read(gc, t, tapdisk_path);
> +
>          libxl__xs_path_cleanup(gc, t, fe_path);
>          libxl__xs_path_cleanup(gc, t, be_path);

Do we deliberately ignore the error codes from these two?

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 09:47:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 09:47:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sws02-0007ei-GN; Thu, 02 Aug 2012 09:47:22 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Sws01-0007eb-26
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 09:47:21 +0000
Received: from [85.158.138.51:60333] by server-9.bemta-3.messagelabs.com id
	2D/36-27628-8AC4A105; Thu, 02 Aug 2012 09:47:20 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-174.messagelabs.com!1343900839!29999879!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26560 invoked from network); 2 Aug 2012 09:47:19 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 09:47:19 -0000
X-IronPort-AV: E=Sophos;i="4.77,699,1336348800"; d="scan'208";a="13818285"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 09:46:51 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Thu, 2 Aug 2012
	10:46:51 +0100
Message-ID: <1343900809.27221.108.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Olaf Hering <olaf@aepfle.de>
Date: Thu, 2 Aug 2012 10:46:49 +0100
In-Reply-To: <1343898078.27221.106.camel@zakaz.uk.xensource.com>
References: <480b8f03e6635873e32b.1343895878@probook.site>
	<1343898078.27221.106.camel@zakaz.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] pygrub: add quoting to install receipe
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-02 at 10:01 +0100, Ian Campbell wrote:
> On Thu, 2012-08-02 at 09:24 +0100, Olaf Hering wrote:
>  pygrub: add quoting to install receipe
>  
> > Signed-off-by: Olaf Hering <olaf@aepfle.de>
> 
> Acked-by: Ian Campbell <ian.campbell@citrix.com>
> 
> I would apply but xenbits appears to be down. I've pinged the admins.

It's back => applied.

> 
> Ian.
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 09:48:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 09:48:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sws0i-0007il-U6; Thu, 02 Aug 2012 09:48:04 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <aware.why@gmail.com>) id 1Sws0h-0007iT-E1
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 09:48:03 +0000
Received: from [85.158.143.35:61729] by server-2.bemta-4.messagelabs.com id
	77/61-17938-2DC4A105; Thu, 02 Aug 2012 09:48:02 +0000
X-Env-Sender: aware.why@gmail.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1343900869!17051690!1
X-Originating-IP: [209.85.215.173]
X-SpamReason: No, hits=0.9 required=7.0 tests=HTML_40_50,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19142 invoked from network); 2 Aug 2012 09:47:49 -0000
Received: from mail-ey0-f173.google.com (HELO mail-ey0-f173.google.com)
	(209.85.215.173)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 09:47:49 -0000
Received: by eaah1 with SMTP id h1so2126981eaa.32
	for <xen-devel@lists.xen.org>; Thu, 02 Aug 2012 02:47:49 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=+poJLoXtxlE/dICMJX6u6EJnMv34hF6YEUVeX2dcMuc=;
	b=RtmZIfQOep2QowivGiS9ZWXAJioA+dIDvJtkz2vw53Fc7LjSaGrZePIQNqKquST8nn
	tiI6jO25AkrGg9Znw1pH8dIaK0I34AGG05+N/0pb1IS8wF1CbY6PQ5u94iPaBeZrzSDW
	gq9M5yMR/hlHPmASqva2lBgu+oJDBIcymUpjrljyp3WSjZvRYcltA79IwqIaD7x79PgE
	gJDd6tQJB9mu9dZWRedmoqGWs7HavzJFtOCakl+rGtI3sYn96GIWWRxcf/V1M1qJ8awz
	hs53DIQyH83ZKj86va4BDtyssDhD2ElpNqG28QwzXBKUKPOtJt4Q9QctLYIgmcaPptxt
	04sQ==
MIME-Version: 1.0
Received: by 10.14.223.9 with SMTP id u9mr10067788eep.10.1343900869426; Thu,
	02 Aug 2012 02:47:49 -0700 (PDT)
Received: by 10.14.218.200 with HTTP; Thu, 2 Aug 2012 02:47:49 -0700 (PDT)
Date: Thu, 2 Aug 2012 17:47:49 +0800
Message-ID: <CA+ePHTBWQFGFFb=nrLjGf4xLBqMuiQKmc0OYyJZpBjLX4F9rYQ@mail.gmail.com>
From: =?UTF-8?B?6ams56OK?= <aware.why@gmail.com>
To: xen-devel <xen-devel@lists.xen.org>
Subject: [Xen-devel] [problem in `xl_cmdimpl.c`] Why is the pid returned by
 fork() in the parent process not the same as the pid returned by getpid()?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5810520130255350286=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============5810520130255350286==
Content-Type: multipart/alternative; boundary=047d7b670a9def7f2004c6454d9f

--047d7b670a9def7f2004c6454d9f
Content-Type: text/plain; charset=ISO-8859-1

Hi all,
    In xen-4.1.2/tools/libxl/xl_cmdimpl.c : create_domain() starts a
child daemon process that waits for the domain's death;
several lines echo the child's pid, as follows:

1592        if (child1) {                    /* it's in the parent */
1593            printf("Daemon running with PID %d\n", child1);

/* it's in the child */
1643    LOG("Waiting for domain %s (domid %d) to die [pid %ld]",
1644        d_config.c_info.name, domid, (long)getpid());

After running `xl create xp-101.hvm`, the command prints `Daemon running
with PID 26622` on the screen; but the xl log file at
/var/log/xen/xl-xp-101.log contains the line `Waiting for domain xp-101
(domid 1) to die [pid 26624]`.
Why is 26622 != 26624?  And 26622 cannot be found with the `ps -ef` command.
What happened there?


Thanks in advance!

--047d7b670a9def7f2004c6454d9f--


--===============5810520130255350286==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5810520130255350286==--




From xen-devel-bounces@lists.xen.org Thu Aug 02 09:56:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 09:56:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sws87-00081J-Tn; Thu, 02 Aug 2012 09:55:43 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <attilio.rao@citrix.com>) id 1Sws86-00081E-Nr
	for xen-devel@lists.xensource.com; Thu, 02 Aug 2012 09:55:43 +0000
Received: from [85.158.139.83:30171] by server-7.bemta-5.messagelabs.com id
	F6/26-28276-D9E4A105; Thu, 02 Aug 2012 09:55:41 +0000
X-Env-Sender: attilio.rao@citrix.com
X-Msg-Ref: server-9.tower-182.messagelabs.com!1343901338!29193803!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29823 invoked from network); 2 Aug 2012 09:55:38 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-9.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 09:55:38 -0000
X-IronPort-AV: E=Sophos;i="4.77,699,1336348800"; d="scan'208";a="13818487"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 09:55:38 +0000
Received: from [10.80.3.221] (10.80.3.221) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Thu, 2 Aug 2012
	10:55:38 +0100
Message-ID: <501A4C29.5080006@citrix.com>
Date: Thu, 2 Aug 2012 10:45:13 +0100
From: Attilio Rao <attilio.rao@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US;
	rv:1.9.1.16) Gecko/20111110 Icedove/3.0.11
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1321471508-31633-1-git-send-email-jean.guyader@eu.citrix.com>
	<1321471508-31633-2-git-send-email-jean.guyader@eu.citrix.com>
	<1321471508-31633-3-git-send-email-jean.guyader@eu.citrix.com>
	<1321471508-31633-4-git-send-email-jean.guyader@eu.citrix.com>
	<1321471508-31633-5-git-send-email-jean.guyader@eu.citrix.com>
	<alpine.DEB.2.02.1208011846510.4645@kaball.uk.xensource.com>
	<501A631D02000078000921B8@nat28.tlf.novell.com>
In-Reply-To: <501A631D02000078000921B8@nat28.tlf.novell.com>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"Keir \(Xen.org\)" <keir@xen.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"Tim \(Xen.org\)" <tim@xen.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"allen.m.kay@intel.com" <allen.m.kay@intel.com>,
	Jean Guyader <jean.guyader@eu.citrix.com>,
	"Jean Guyader \(3P\)" <jean.guyader@citrix.com>
Subject: Re: [Xen-devel] Should we revert "mm: New XENMEM space,
	XENMAPSPACE_gmfn_range"?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/08/12 10:23, Jan Beulich wrote:
>>>> On 01.08.12 at 19:55, Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
>> I was reading more about this commit because this patch breaks the ABI
>> on ARM, when I realized that on x86 there is no standard that specifies
>> the alignment of fields in a struct.
> There is - the psABI supplements to the SVR4 ABI.

This is a completely different issue.
The problem here is the padding that gcc (or whatever compiler) adds to 
the struct in order to align its members to word boundaries. The 
difference is that this is apparently not enforced in the ARM case 
(from Stefano's report), while it does happen in the x86 case.

This is why it is a good rule to order the members of a struct from 
largest to smallest when compiling with gcc, and this is not the case 
for the struct in question.

In the end it is a compiler decision, not something mandated by the ABI.

Attilio


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Thu Aug 02 09:56:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 09:56:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sws8R-00082P-AB; Thu, 02 Aug 2012 09:56:03 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Sws8P-00082F-Sc
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 09:56:02 +0000
Received: from [85.158.138.51:12544] by server-12.bemta-3.messagelabs.com id
	D0/41-15259-1BE4A105; Thu, 02 Aug 2012 09:56:01 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-174.messagelabs.com!1343901360!21950275!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32694 invoked from network); 2 Aug 2012 09:56:00 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-3.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 09:56:00 -0000
X-IronPort-AV: E=Sophos;i="4.77,699,1336348800"; d="scan'208";a="13818496"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 09:56:00 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Thu, 2 Aug 2012
	10:56:00 +0100
Message-ID: <1343901358.27221.110.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: =?UTF-8?Q?=E9=A9=AC=E7=A3=8A?= <aware.why@gmail.com>
Date: Thu, 2 Aug 2012 10:55:58 +0100
In-Reply-To: <CA+ePHTBWQFGFFb=nrLjGf4xLBqMuiQKmc0OYyJZpBjLX4F9rYQ@mail.gmail.com>
References: <CA+ePHTBWQFGFFb=nrLjGf4xLBqMuiQKmc0OYyJZpBjLX4F9rYQ@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [problem in `xl_cmdimpl.c`] Why pid return by
 fork() in parent process is not the same with pid returned by getpid()?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-02 at 10:47 +0100, 马磊 wrote:

>     In xen-4.1.2/tools/libxl/xl_cmdimpl.c : create_domain(), it starts

If you are doing development then targeting xen-unstable would be
preferable, especially if you are looking at (lib)xl. The xl stuff in
4.1 is effectively a preview and it has changed substantially internally
since 4.1.

> After input `xl create xp-101.hvm`, you got `Daemon running with PID
> 26622` following the command line in the screen; but the xl log file
> lies in /var/log/xen/xl-xp-101.log contains a line `Waiting for domain
> xp-101 (domid 1) to die [pid 26624]`.
> Why 26622 != 26624?  And 26622 could not be found by `ps -ef` command.
> What happened to that?!

There's a call to daemon() in there, which will involve a double fork
IIRC.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Thu Aug 02 09:56:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 09:56:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sws8r-00084s-NR; Thu, 02 Aug 2012 09:56:29 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@eu.citrix.com>) id 1Sws8p-00084f-OG
	for Xen-devel@lists.xensource.com; Thu, 02 Aug 2012 09:56:27 +0000
Received: from [85.158.138.51:16904] by server-2.bemta-3.messagelabs.com id
	46/F4-00359-ACE4A105; Thu, 02 Aug 2012 09:56:26 +0000
X-Env-Sender: George.Dunlap@eu.citrix.com
X-Msg-Ref: server-15.tower-174.messagelabs.com!1343901384!28290294!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNjkwMjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7288 invoked from network); 2 Aug 2012 09:56:26 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 09:56:26 -0000
X-IronPort-AV: E=Sophos;i="4.77,699,1336363200"; d="scan'208";a="203910960"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 05:56:24 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 2 Aug 2012 05:56:24 -0400
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1Sws8l-0005rK-Od;
	Thu, 02 Aug 2012 10:56:23 +0100
Message-ID: <501A4E0C.1090509@eu.citrix.com>
Date: Thu, 2 Aug 2012 10:53:16 +0100
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120714 Thunderbird/14.0
MIME-Version: 1.0
To: Mukesh Rathor <mukesh.rathor@oracle.com>
References: <20120626181707.4203d336@mantra.us.oracle.com>
	<CAFLBxZagm=53bZ=KaWmd7XQkkAduD+znECJRsaJUqrGJDZgkQQ@mail.gmail.com>
	<20120801153439.3f81c923@mantra.us.oracle.com>
In-Reply-To: <20120801153439.3f81c923@mantra.us.oracle.com>
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Keir Fraser <keir.xen@gmail.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [HYBRID]: status update...
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/08/12 23:34, Mukesh Rathor wrote:
> On Wed, 1 Aug 2012 16:25:01 +0100
> George Dunlap <George.Dunlap@eu.citrix.com> wrote:
>
>> I hope this isn't bikeshedding; but I don't like "Hybrid" as a name
>> for this feature, mainly for "marketing" reasons.  I think it will
>> probably give people the wrong idea about what the technology does.
>> PV domains is one of Xen's really distinct advantages -- much simpler
>> interface, lighter-weight (no qemu, legacy boot), &c &c.  As I
>> understand it, the mode you've been calling "hybrid" still has all of
>> these advantages -- it just uses some of the HVM hardware extensions
>> to make the interface even simpler / faster.  I'm afraid "hybrid" may
>> be seen as, "Even Xen has had to give up on PV."
>>
>> Can I suggest something like "PVH" instead?  That (at least to me)
>> makes it clear that PV domains are still fully PV, but just use some
>> HVM extensions.
>>
>> Thoughts?
> Hi George,
>
> We gave some thought to looking for a name. I figured pure PV will be around
> for a while at least. So there's PV on one side and HVM on the other, with
> hybrid somewhere in between.
I understand the idea, but I think it's not very accurate.  I would call 
Stefano's "PVHVM" stuff hybrid -- it has the legacy boot and emulated 
devices, but uses the PV interfaces for event delivery extensively.  The 
mode you're working on is too far towards the "PV" side to be called 
"hybrid".  (And as we've seen, the term has already confused people, who 
interpreted it as basically PVHVM.)
>
> The issue with PV in HVM is that it limits PV to HVM container only. The
> vision I had was that hybrid, a PV ops kernel that is somewhere in between
> PV and HVM, could be configured with options. So, one could run hybrid
> with say EPT off (altho, this won't be supported anymore). But a generic name
> like hybrid allows it in future to be flexible, instead of confined to a
> specific. I suppose a PV guest could just be started with various options.
In general, I think "PV" should mean, "Doesn't use legacy boot, doesn't 
need emulated devices".  So I don't think "PVH" places any limitations 
on what particular subset of HVM hardware you use.  For things that 
specifically depend on knowing whether guest PTs are using mfns or 
gpfns, I think we should have checks for specific things -- for 
instance, "xen_mm_translate()" or something like that.

Also, don't confuse EPT (which is Intel-specific) with HAP (which is the 
generic term for either EPT or RVI); and don't confuse either of those 
with what is called "translate" mode.  Translate mode (where Xen 
translates the guest PTs from gpfns to mfns) can be done either with HAP 
or with shadow; and given the performance issues HAP has with certain 
workloads, we need to make sure that the HVM container mode can use both.

> As for name in code, 'pvh' was confusing, as PVHVM is now routinely used to
> refer to HVM with PV drivers. 'hpv' for HVM/hybrid PV, well, thats a certain
> virus ;). So I just used hybrid in the code to refer to PV guest that runs
> in HVM container. I suppose I could change the flag to pv_in_hvm or
> something.
But is "pvhvm" ever actually used in the code?  If not, it's not a problem.

Actually, perhaps it would be better in any case, rather than having 
checks for "pvh" mode, to have checks for specific things -- e.g., is 
translation on or off (i.e., are running in classic PV mode, or with 
HAP)?  I'm not sure the other things you're doing with HVM, but it 
should be possible to come up with a descriptive name to use in the code 
for those options, rather than limiting to a specific mode.

In ancient days, there used to be options, both within Xen and within 
the classic Xen kernel, to run a PV guest in fully-translated mode 
(i.e., the guest PTs contained gpfns, not mfns), and "shadow_translate" 
was a mode used across guest types (both PV and HVM) to determine 
whether this was the case or not.

  -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 09:56:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 09:56:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sws8r-00084s-NR; Thu, 02 Aug 2012 09:56:29 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@eu.citrix.com>) id 1Sws8p-00084f-OG
	for Xen-devel@lists.xensource.com; Thu, 02 Aug 2012 09:56:27 +0000
Received: from [85.158.138.51:16904] by server-2.bemta-3.messagelabs.com id
	46/F4-00359-ACE4A105; Thu, 02 Aug 2012 09:56:26 +0000
X-Env-Sender: George.Dunlap@eu.citrix.com
X-Msg-Ref: server-15.tower-174.messagelabs.com!1343901384!28290294!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNjkwMjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7288 invoked from network); 2 Aug 2012 09:56:26 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 09:56:26 -0000
X-IronPort-AV: E=Sophos;i="4.77,699,1336363200"; d="scan'208";a="203910960"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 05:56:24 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 2 Aug 2012 05:56:24 -0400
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1Sws8l-0005rK-Od;
	Thu, 02 Aug 2012 10:56:23 +0100
Message-ID: <501A4E0C.1090509@eu.citrix.com>
Date: Thu, 2 Aug 2012 10:53:16 +0100
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120714 Thunderbird/14.0
MIME-Version: 1.0
To: Mukesh Rathor <mukesh.rathor@oracle.com>
References: <20120626181707.4203d336@mantra.us.oracle.com>
	<CAFLBxZagm=53bZ=KaWmd7XQkkAduD+znECJRsaJUqrGJDZgkQQ@mail.gmail.com>
	<20120801153439.3f81c923@mantra.us.oracle.com>
In-Reply-To: <20120801153439.3f81c923@mantra.us.oracle.com>
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Keir Fraser <keir.xen@gmail.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [HYBRID]: status update...
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/08/12 23:34, Mukesh Rathor wrote:
> On Wed, 1 Aug 2012 16:25:01 +0100
> George Dunlap <George.Dunlap@eu.citrix.com> wrote:
>
>> I hope this isn't bikeshedding; but I don't like "Hybrid" as a name
>> for this feature, mainly for "marketing" reasons.  I think it will
>> probably give people the wrong idea about what the technology does.
>> PV domains is one of Xen's really distinct advantages -- much simpler
>> interface, lighter-weight (no qemu, legacy boot), &c &c.  As I
>> understand it, the mode you've been calling "hybrid" still has all of
>> these advantages -- it just uses some of the HVM hardware extensions
>> to make the interface even simpler / faster.  I'm afraid "hybrid" may
>> be seen as, "Even Xen has had to give up on PV."
>>
>> Can I suggest something like "PVH" instead?  That (at least to me)
>> makes it clear that PV domains are still fully PV, but just use some
>> HVM extensions.
>>
>> Thoughts?
> Hi George,
>
> We gave some thought to the name. I figured pure PV will be around for
> a while at least. So there's PV on one side and HVM on the other, hybrid
> somewhere in between.
I understand the idea, but I think it's not very accurate.  I would call 
Stefano's "PVHVM" stuff hybrid -- it has the legacy boot and emulated 
devices, but uses the PV interfaces for event delivery extensively.  The 
mode you're working on is too far towards the "PV" side to be called 
"hybrid".  (And as we've seen, the term has already confused people, who 
interpreted it as basically PVHVM.)
>
> The issue with PV in HVM is that it limits PV to HVM container only. The
> vision I had was that hybrid, a PV ops kernel that is somewhere in between
> PV and HVM, could be configured with options. So, one could run hybrid
> with, say, EPT off (although that won't be supported anymore). But a generic
> name like hybrid allows it to be flexible in the future, instead of being
> confined to a specific mode. I suppose a PV guest could just be started with
> various options.
In general, I think "PV" should mean, "Doesn't use legacy boot, doesn't 
need emulated devices".  So I don't think "PVH" places any limitations 
on what particular subset of HVM hardware you use.  For things that 
specifically depend on knowing whether guest PTs are using mfns or 
gpfns, I think we should have checks for specific things -- for 
instance, "xen_mm_translate()" or something like that.
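
Something like the following, say -- entirely a sketch, with the flag
layout and even the xen_mm_translate() name made up for illustration:

```c
#include <assert.h>
#include <stdbool.h>

/* Sketch only: per-domain paging flags, names invented for
 * illustration (the real structures in Xen differ). */
struct domain_paging {
    bool hap_enabled;   /* HAP in use (EPT on Intel, RVI on AMD)  */
    bool translated;    /* guest PTs hold gpfns; Xen translates   */
};

/* Feature-specific predicate: code that needs to know whether guest
 * page tables contain mfns or gpfns asks exactly that question,
 * rather than checking a catch-all "pvh"-mode flag. */
static bool xen_mm_translate(const struct domain_paging *d)
{
    return d->translated;
}
```

The point is that callers test the one property they depend on, so the
check keeps working however the mode options are combined.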

Also, don't confuse EPT (which is Intel-specific) with HAP (which is the 
generic term for either EPT or RVI); and don't confuse either of those 
with what is called "translate" mode.  Translate mode (where Xen 
translates the guest PTs from gpfns to mfns) can be done either with HAP 
or with shadow; and given the performance issues HAP has with certain 
workloads, we need to make sure that the HVM container mode can use both.

> As for name in code, 'pvh' was confusing, as PVHVM is now routinely used to
> refer to HVM with PV drivers. 'hpv' for HVM/hybrid PV, well, that's a certain
> virus ;). So I just used hybrid in the code to refer to a PV guest that runs
> in an HVM container. I suppose I could change the flag to pv_in_hvm or
> something.
But is "pvhvm" ever actually used in the code?  If not, it's not a problem.

Actually, perhaps it would be better in any case, rather than having 
checks for "pvh" mode, to have checks for specific things -- e.g., is 
translation on or off (i.e., are we running in classic PV mode, or with 
HAP)?  I'm not sure what other things you're doing with HVM, but it 
should be possible to come up with a descriptive name to use in the code 
for those options, rather than limiting them to a specific mode.

In ancient days, there used to be options, both within Xen and within 
the classic Xen kernel, to run a PV guest in fully-translated mode 
(i.e., the guest PTs contained gpfns, not mfns), and "shadow_translate" 
was a mode used across guest types (both PV and HVM) to determine 
whether this was the case or not.

  -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 10:08:47 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 10:08:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwsKG-000096-3P; Thu, 02 Aug 2012 10:08:16 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SwsKF-000091-1K
	for xen-devel@lists.xensource.com; Thu, 02 Aug 2012 10:08:15 +0000
Received: from [85.158.138.51:64882] by server-2.bemta-3.messagelabs.com id
	D3/4C-00359-D815A105; Thu, 02 Aug 2012 10:08:13 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-174.messagelabs.com!1343902092!30004475!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14082 invoked from network); 2 Aug 2012 10:08:13 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 10:08:13 -0000
X-IronPort-AV: E=Sophos;i="4.77,699,1336348800"; d="scan'208";a="13818789"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 10:08:12 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Thu, 2 Aug 2012
	11:08:12 +0100
Message-ID: <1343902091.27221.112.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Thu, 2 Aug 2012 11:08:11 +0100
In-Reply-To: <1343328892-10195-2-git-send-email-stefano.stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1207261949150.26163@kaball.uk.xensource.com>
	<1343328892-10195-2-git-send-email-stefano.stabellini@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"Tim.Deegan@xen.org" <Tim.Deegan@xen.org>
Subject: Re: [Xen-devel] [PATCH v3 2/4] xen/arm: implement get/put_page_type
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-07-26 at 19:54 +0100, Stefano Stabellini wrote:
> Add a basic get_page_type and put_page_type implementation: we don't
> care about typecounts so just return success.
> 
> Also remove PGT_shared_page, that is unused.
> 
> 
> Changes in v3:
> 
> - replace get_page_type and put_page_type with an empty implementation.

Tim suggested that we consider adding an assert that the reference count
is non-zero in both of these, since it is incorrect to take a type count
without a reference count (we think).

In the future maybe we should consider refactoring the gnttab code (and
anything else using type counts directly) to use a "pg_make_writable"
arch abstraction so we can hide the type counts inside x86 code where
they properly belong.
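
As a sketch, the asserted variants Tim suggested might look like this
(the count_info layout and mask here are simplified stand-ins, not the
real PGC_count_mask):

```c
#include <assert.h>

/* Simplified stand-in for Xen's struct page_info: only the general
 * reference count matters for this sketch, and COUNT_MASK below is
 * illustrative, not the real PGC_count_mask. */
#define COUNT_MASK 0xffffffUL

struct page_info {
    unsigned long count_info;
};

/* It is (we think) incorrect to take or drop a type count without
 * holding a general reference, so assert that one is held. */
static int get_page_type(struct page_info *page, unsigned long type)
{
    (void)type;                                  /* no typecounts on ARM */
    assert((page->count_info & COUNT_MASK) != 0);
    return 1;                                    /* always succeed */
}

static void put_page_type(struct page_info *page)
{
    assert((page->count_info & COUNT_MASK) != 0);
}
```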

> 
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> ---
>  xen/arch/arm/dummy.S     |    4 ----
>  xen/arch/arm/mm.c        |   13 +++++++++++++
>  xen/include/asm-arm/mm.h |    1 -
>  3 files changed, 13 insertions(+), 5 deletions(-)
> 
> diff --git a/xen/arch/arm/dummy.S b/xen/arch/arm/dummy.S
> index baced25..2b96d22 100644
> --- a/xen/arch/arm/dummy.S
> +++ b/xen/arch/arm/dummy.S
> @@ -22,10 +22,6 @@ DUMMY(arch_get_info_guest);
>  DUMMY(arch_vcpu_reset);
>  NOP(update_vcpu_system_time);
>  
> -/* Page Reference & Type Maintenance */
> -DUMMY(get_page_type);
> -DUMMY(put_page_type);
> -
>  /* Grant Tables */
>  DUMMY(steal_page);
>  
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index e963af9..96a4ca2 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -594,6 +594,19 @@ int get_page(struct page_info *page, struct domain *domain)
>      return 0;
>  }
>  
> +/* Common code requires get_page_type and put_page_type.
> + * We don't care about typecounts so we just do the minimum to make it
> + * happy. */
> +int get_page_type(struct page_info *page, unsigned long type)
> +{
> +    return 1;
> +}
> +
> +void put_page_type(struct page_info *page)
> +{
> +    return;
> +}
> +
>  void gnttab_clear_flag(unsigned long nr, uint16_t *addr)
>  {
>      /*
> diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
> index 53801b0..b37bd35 100644
> --- a/xen/include/asm-arm/mm.h
> +++ b/xen/include/asm-arm/mm.h
> @@ -71,7 +71,6 @@ struct page_info
>  
>  #define PGT_none          PG_mask(0, 4)  /* no special uses of this page   */
>  #define PGT_writable_page PG_mask(7, 4)  /* has writable mappings?         */
> -#define PGT_shared_page   PG_mask(8, 4)  /* CoW sharable page              */
>  #define PGT_type_mask     PG_mask(15, 4) /* Bits 28-31 or 60-63.           */
>  
>   /* Owning guest has pinned this page to its current type? */



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 10:12:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 10:12:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwsNd-0000Gj-NP; Thu, 02 Aug 2012 10:11:45 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SwsNc-0000Gc-Nu
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 10:11:44 +0000
Received: from [85.158.143.99:4026] by server-3.bemta-4.messagelabs.com id
	7C/E1-01511-F525A105; Thu, 02 Aug 2012 10:11:43 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-216.messagelabs.com!1343902301!29305436!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20502 invoked from network); 2 Aug 2012 10:11:43 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 10:11:43 -0000
X-IronPort-AV: E=Sophos;i="4.77,699,1336348800"; d="scan'208";a="13818845"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 10:11:11 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Thu, 2 Aug 2012
	11:11:11 +0100
Message-ID: <1343902270.27221.114.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <ian.jackson@eu.citrix.com>
Date: Thu, 2 Aug 2012 11:11:10 +0100
In-Reply-To: <1343838260-17725-2-git-send-email-ian.jackson@eu.citrix.com>
References: <1343838260-17725-1-git-send-email-ian.jackson@eu.citrix.com>
	<1343838260-17725-2-git-send-email-ian.jackson@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 01/11] libxl: unify libxl__device_destroy
 and device_hotplug_done
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-08-01 at 17:24 +0100, Ian Jackson wrote:
> device_hotplug_done contains an open-coded but improved version of
> libxl__device_destroy.  So move the contents of device_hotplug_done
> into libxl__device_destroy, deleting the old code, and replace it at
> its old location with a function call.
> 
> Also fix the error handling: the rc from the destroy should be
> propagated into the aodev.
> 
> Reported-by: Ian Campbell <Ian.Campbell@citrix.com>
> Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>

Acked-by: Ian Campbell <ian.campbell@citrix.com>



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 10:16:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 10:16:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwsRu-0000TV-GM; Thu, 02 Aug 2012 10:16:10 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SwsRs-0000TL-RM
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 10:16:09 +0000
Received: from [85.158.139.83:34541] by server-7.bemta-5.messagelabs.com id
	7A/72-28276-7635A105; Thu, 02 Aug 2012 10:16:07 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-182.messagelabs.com!1343902567!29820369!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8911 invoked from network); 2 Aug 2012 10:16:07 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 10:16:07 -0000
X-IronPort-AV: E=Sophos;i="4.77,699,1336348800"; d="scan'208";a="13818934"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 10:16:06 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Thu, 2 Aug 2012
	11:16:06 +0100
Message-ID: <1343902565.27221.117.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <ian.jackson@eu.citrix.com>
Date: Thu, 2 Aug 2012 11:16:05 +0100
In-Reply-To: <1343838260-17725-3-git-send-email-ian.jackson@eu.citrix.com>
References: <1343838260-17725-1-git-send-email-ian.jackson@eu.citrix.com>
	<1343838260-17725-3-git-send-email-ian.jackson@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 02/11] libxl: react correctly to bootloader
 pty master POLLHUP
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-08-01 at 17:24 +0100, Ian Jackson wrote:
> Receiving POLLHUP on the bootloader master pty is not an error.
> Hopefully it means that the bootloader has exited and therefore the
> pty slave side has no process group any more.  (At least NetBSD
> indicates POLLHUP on the master in this case.)
> 
> So send the bootloader SIGTERM; if it has already exited then this has
> no effect (except that on some versions of NetBSD it erroneously
> returns ESRCH and we print a harmless warning) and we will then
> collect the bootloader's exit status and be satisfied.
> 
> However, we remember that we have done this so that if we got POLLHUP
> for some other reason than that the bootloader exited we report
> something resembling a useful message.
> 
> In order to implement this we need to provide a way for users of
> datacopier to handle POLLHUP rather than treating it as fatal.
> 
> We rename bootloader_abort to bootloader_stop since it no longer
> applies only to error situations.
> 
> Signed-off-by: Roger Pau Monne <roger.pau@citrix.com>
> Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
> 
> -
> Changes in v4:
>  * Track whether we sent SIGTERM due to POLLHUP so we can report
>    messages properly.
> 
> Changes in v3:
>  * datacopier provides new interface for handling POLLHUP
>  * Do not ignore errors on the xenconsole pty
>  * Rename bootloader_abort.
> ---
>  tools/libxl/libxl_aoutils.c    |   23 +++++++++++++++++++++++
>  tools/libxl/libxl_bootloader.c |   39 +++++++++++++++++++++++++++++----------
>  tools/libxl/libxl_internal.h   |    7 +++++--
>  3 files changed, 57 insertions(+), 12 deletions(-)
> 
> diff --git a/tools/libxl/libxl_aoutils.c b/tools/libxl/libxl_aoutils.c
> index 99972a2..4bd5484 100644
> --- a/tools/libxl/libxl_aoutils.c
> +++ b/tools/libxl/libxl_aoutils.c
> @@ -97,11 +97,31 @@ void libxl__datacopier_prefixdata(libxl__egc *egc, libxl__datacopier_state *dc,
>      LIBXL_TAILQ_INSERT_TAIL(&dc->bufs, buf, entry);
>  }
> 
> +static int datacopier_pollhup_handled(libxl__egc *egc,
> +                                      libxl__datacopier_state *dc,
> +                                      short revents, int onwrite)
> +{
> +    STATE_AO_GC(dc->ao);
> +
> +    if (dc->callback_pollhup && (revents & POLLHUP)) {
> +        LOG(DEBUG, "received POLLHUP on %s during copy of %s",
> +            onwrite ? dc->writewhat : dc->readwhat,
> +            dc->copywhat);
> +        libxl__datacopier_kill(dc);
> +        dc->callback(egc, dc, onwrite, -1);

You've forgotten to make this ->callback_pollhup as discussed last time.
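For reference, a standalone sketch of the dispatch being asked for (the struct and function names follow the quoted patch, but the libxl context types are stubbed here purely for illustration):

```c
#include <poll.h>    /* POLLHUP */

/* Stand-ins for the libxl types in the quoted patch: only the fields
 * needed to model the dispatch are included; everything else is
 * stubbed for illustration. */
struct egc;                       /* event-generation context, opaque here */
struct datacopier_state;

typedef void datacopier_cb(struct egc *egc, struct datacopier_state *dc,
                           int onwrite, int errnoval);

struct datacopier_state {
    datacopier_cb *callback;          /* ordinary completion/error path */
    datacopier_cb *callback_pollhup;  /* optional POLLHUP-specific path */
};

/* The review point: when a POLLHUP-specific callback is registered and
 * POLLHUP arrives, that callback must be invoked, not ->callback. */
static int datacopier_pollhup_handled(struct egc *egc,
                                      struct datacopier_state *dc,
                                      short revents, int onwrite)
{
    if (dc->callback_pollhup && (revents & POLLHUP)) {
        dc->callback_pollhup(egc, dc, onwrite, -1);  /* was dc->callback */
        return 1;  /* handled; caller should not treat it as fatal */
    }
    return 0;
}
```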



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 10:19:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 10:19:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwsUT-0000bW-22; Thu, 02 Aug 2012 10:18:49 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SwsUR-0000bI-Ii
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 10:18:47 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1343902721!10485629!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8079 invoked from network); 2 Aug 2012 10:18:41 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 10:18:41 -0000
X-IronPort-AV: E=Sophos;i="4.77,699,1336348800"; d="scan'208";a="13818985"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 10:18:27 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Thu, 2 Aug 2012
	11:18:27 +0100
Message-ID: <1343902705.27221.118.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <ian.jackson@eu.citrix.com>
Date: Thu, 2 Aug 2012 11:18:25 +0100
In-Reply-To: <1343838260-17725-4-git-send-email-ian.jackson@eu.citrix.com>
References: <1343838260-17725-1-git-send-email-ian.jackson@eu.citrix.com>
	<1343838260-17725-4-git-send-email-ian.jackson@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 03/11] libxl: fix device counting race in
 libxl__devices_destroy
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-08-01 at 17:24 +0100, Ian Jackson wrote:

> Signed-off-by: Roger Pau Monne <roger.pau@citrix.com>
> Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>

Acked-by: Ian Campbell <ian.campbell@eu.citrix.com>



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 10:20:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 10:20:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwsVC-0000et-Fw; Thu, 02 Aug 2012 10:19:34 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SwsVB-0000ec-CQ
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 10:19:33 +0000
Received: from [85.158.143.35:58564] by server-1.bemta-4.messagelabs.com id
	3F/10-24392-3345A105; Thu, 02 Aug 2012 10:19:31 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1343902731!5017525!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4760 invoked from network); 2 Aug 2012 10:18:54 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 10:18:54 -0000
X-IronPort-AV: E=Sophos;i="4.77,699,1336348800"; d="scan'208";a="13818990"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 10:18:51 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Thu, 2 Aug 2012
	11:18:51 +0100
Message-ID: <1343902729.27221.119.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <ian.jackson@eu.citrix.com>
Date: Thu, 2 Aug 2012 11:18:49 +0100
In-Reply-To: <1343838260-17725-5-git-send-email-ian.jackson@eu.citrix.com>
References: <1343838260-17725-1-git-send-email-ian.jackson@eu.citrix.com>
	<1343838260-17725-5-git-send-email-ian.jackson@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 04/11] libxl: fix formatting of
 DEFINE_DEVICES_ADD
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-08-01 at 17:24 +0100, Ian Jackson wrote:
> These lines were exactly 80 columns wide, which produces hideous wrap
> damage in an 80 column emacs.  Reformat using emacs's C-c \,
> which puts the \ in column 72 (by default) where possible.
> 
> Whitespace change only.
> 
> Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>

Acked-by: Ian Campbell <ian.campbell@citrix.com>



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 10:20:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 10:20:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwsVe-0000iL-Tc; Thu, 02 Aug 2012 10:20:02 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SwsVd-0000i0-1r
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 10:20:01 +0000
Received: from [85.158.139.83:65491] by server-5.bemta-5.messagelabs.com id
	42/E3-02722-0545A105; Thu, 02 Aug 2012 10:20:00 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-182.messagelabs.com!1343902799!29877757!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6025 invoked from network); 2 Aug 2012 10:19:59 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-2.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 10:19:59 -0000
X-IronPort-AV: E=Sophos;i="4.77,699,1336348800"; d="scan'208";a="13819016"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 10:19:59 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Thu, 2 Aug 2012
	11:19:59 +0100
Message-ID: <1343902797.27221.120.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <ian.jackson@eu.citrix.com>
Date: Thu, 2 Aug 2012 11:19:57 +0100
In-Reply-To: <1343838260-17725-6-git-send-email-ian.jackson@eu.citrix.com>
References: <1343838260-17725-1-git-send-email-ian.jackson@eu.citrix.com>
	<1343838260-17725-6-git-send-email-ian.jackson@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 05/11] libxl: abolish useless `start'
 parameter to libxl__add_*
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-08-01 at 17:24 +0100, Ian Jackson wrote:
> 0 is always passed for this parameter, and the code no longer
> actually uses it.
> 
> Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

>      {                                                                   \
>          AO_GC;                                                          \
>          int i;                                                          \
> -        int end = start + d_config->num_##type##s;                      \

This definition of end is pretty dodgy, glad to be rid of it.

> -        for (i = start; i < end; i++) {                                 \
> +        for (i = 0; i < d_config->num_##type##s; i++) {                 \
>              libxl__ao_device *aodev = libxl__multidev_prepare(aodevs);  \
> -            libxl__device_##type##_add(egc, domid, &d_config->type##s[i-start], \
> +            libxl__device_##type##_add(egc, domid, &d_config->type##s[i], \
>                                         aodev);                          \
>          }                                                               \
>      }

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 10:21:24 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 10:21:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwsWT-0000qD-CB; Thu, 02 Aug 2012 10:20:53 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SwsWR-0000p5-E4
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 10:20:51 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1343902845!8770784!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7994 invoked from network); 2 Aug 2012 10:20:45 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 10:20:45 -0000
X-IronPort-AV: E=Sophos;i="4.77,699,1336348800"; d="scan'208";a="13819037"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 10:20:45 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Thu, 2 Aug 2012
	11:20:45 +0100
Message-ID: <1343902843.27221.121.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <ian.jackson@eu.citrix.com>
Date: Thu, 2 Aug 2012 11:20:43 +0100
In-Reply-To: <1343838260-17725-7-git-send-email-ian.jackson@eu.citrix.com>
References: <1343838260-17725-1-git-send-email-ian.jackson@eu.citrix.com>
	<1343838260-17725-7-git-send-email-ian.jackson@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 06/11] libxl: rename aodevs to multidev
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-08-01 at 17:24 +0100, Ian Jackson wrote:
> To be consistent with the new function naming, rename
> libxl__ao_devices to libxl__multidev and all variables aodevs to
> multidev.
> 
> No functional change.
> 
> Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>

Didn't read closely but on the basis that it is mechanical:

Acked-by: Ian Campbell <ian.campbell@citrix.com>



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 10:22:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 10:22:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwsXN-0000yV-RD; Thu, 02 Aug 2012 10:21:49 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SwsXM-0000yJ-F6
	for xen-devel@lists.xensource.com; Thu, 02 Aug 2012 10:21:48 +0000
Received: from [85.158.139.83:19166] by server-11.bemta-5.messagelabs.com id
	A9/3F-20400-BB45A105; Thu, 02 Aug 2012 10:21:47 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-13.tower-182.messagelabs.com!1343902907!29329563!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM3NzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15088 invoked from network); 2 Aug 2012 10:21:47 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-13.tower-182.messagelabs.com with SMTP;
	2 Aug 2012 10:21:47 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 02 Aug 2012 11:21:46 +0100
Message-Id: <501A71030200007800092257@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Thu, 02 Aug 2012 11:22:27 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Attilio Rao" <attilio.rao@citrix.com>
References: <1321471508-31633-1-git-send-email-jean.guyader@eu.citrix.com>
	<1321471508-31633-2-git-send-email-jean.guyader@eu.citrix.com>
	<1321471508-31633-3-git-send-email-jean.guyader@eu.citrix.com>
	<1321471508-31633-4-git-send-email-jean.guyader@eu.citrix.com>
	<1321471508-31633-5-git-send-email-jean.guyader@eu.citrix.com>
	<alpine.DEB.2.02.1208011846510.4645@kaball.uk.xensource.com>
	<501A631D02000078000921B8@nat28.tlf.novell.com>
	<501A4C29.5080006@citrix.com>
In-Reply-To: <501A4C29.5080006@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"Keir \(Xen.org\)" <keir@xen.org>,
	KonradRzeszutek Wilk <konrad.wilk@oracle.com>,
	"Tim \(Xen.org\)" <tim@xen.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"allen.m.kay@intel.com" <allen.m.kay@intel.com>,
	Jean Guyader <jean.guyader@eu.citrix.com>,
	"Jean Guyader \(3P\)" <jean.guyader@citrix.com>
Subject: Re: [Xen-devel] Should we revert "mm: New XENMEM space,
 XENMAPSPACE_gmfn_range"?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 02.08.12 at 11:45, Attilio Rao <attilio.rao@citrix.com> wrote:
> On 02/08/12 10:23, Jan Beulich wrote:
>>>>> On 01.08.12 at 19:55, Stefano Stabellini<stefano.stabellini@eu.citrix.com>  
> wrote:
>>>>>          
>>> I was reading more about this commit because this patch breaks the ABI
>>> on ARM, when I realized that on x86 there is no standard that specifies
>>> the alignment of fields in a struct.
>>>      
>> There is - the psABI supplements to the SVR4 ABI.
>>
>>    
> 
> This is a completely different issue.
> The problem here is the padding gcc (or whatever compiler) adds to the
> struct in order to align the members to the word boundary. The
> difference is that this is not enforced in the ARM case (apparently,
> from Stefano's report) while it happens in the x86 case.
> 
> This is why it is a good rule to organize the members of a struct from
> the biggest to the smallest when compiling with gcc, and this is not
> the case for the struct in question.
> 
> In the end it is a compiler decision, not something decided by the ABI.

No, definitely not. Otherwise inter-operation between code
compiled with different compilers would be impossible. Allowing
this is what the various ABI specifications exist for (and
their absence had, e.g. on DOS, led to a complete mess).

As to the ARM issue - mind pointing out where mis-aligned
structure fields are specified as being the standard?

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 10:23:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 10:23:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwsYJ-00017Q-9K; Thu, 02 Aug 2012 10:22:47 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SwsYH-000175-NF
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 10:22:45 +0000
Received: from [85.158.143.35:16932] by server-3.bemta-4.messagelabs.com id
	62/F5-01511-5F45A105; Thu, 02 Aug 2012 10:22:45 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-21.messagelabs.com!1343902928!10520956!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21871 invoked from network); 2 Aug 2012 10:22:08 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-10.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 10:22:08 -0000
X-IronPort-AV: E=Sophos;i="4.77,699,1336348800"; d="scan'208";a="13819075"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 10:22:08 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Thu, 2 Aug 2012
	11:22:08 +0100
Message-ID: <1343902926.27221.122.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <ian.jackson@eu.citrix.com>
Date: Thu, 2 Aug 2012 11:22:06 +0100
In-Reply-To: <1343838260-17725-8-git-send-email-ian.jackson@eu.citrix.com>
References: <1343838260-17725-1-git-send-email-ian.jackson@eu.citrix.com>
	<1343838260-17725-8-git-send-email-ian.jackson@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 07/11] libxl: do not blunder on if
 bootloader fails (again)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-08-01 at 17:24 +0100, Ian Jackson wrote:
> Do not lose the rc value passed to bootloader_callback.  Do not lose
> the rc value from the bl when the local disk detach succeeds.
> 
> While we're here rationalise the use of bl->rc to make things clearer.
> Set it to zero at the start and always update it conditionally; copy
> it into bootloader_callback's argument each time.
> 
> Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

> ---
>  tools/libxl/libxl_bootloader.c |   11 +++++++++--
>  1 files changed, 9 insertions(+), 2 deletions(-)
> 
> diff --git a/tools/libxl/libxl_bootloader.c b/tools/libxl/libxl_bootloader.c
> index bfc1b56..e103ee9 100644
> --- a/tools/libxl/libxl_bootloader.c
> +++ b/tools/libxl/libxl_bootloader.c
> @@ -206,6 +206,7 @@ static int parse_bootloader_result(libxl__egc *egc,
>  void libxl__bootloader_init(libxl__bootloader_state *bl)
>  {
>      assert(bl->ao);
> +    bl->rc = 0;
>      bl->dls.diskpath = NULL;
>      bl->openpty.ao = bl->ao;
>      bl->dls.ao = bl->ao;
> @@ -255,6 +256,9 @@ static void bootloader_local_detached_cb(libxl__egc *egc,
>  static void bootloader_callback(libxl__egc *egc, libxl__bootloader_state *bl,
>                                  int rc)
>  {
> +    if (!bl->rc)
> +        bl->rc = rc;
> +
>      bootloader_cleanup(egc, bl);
>  
>      bl->dls.callback = bootloader_local_detached_cb;
> @@ -270,9 +274,11 @@ static void bootloader_local_detached_cb(libxl__egc *egc,
>  
>      if (rc) {
>          LOG(ERROR, "unable to detach locally attached disk");
> +        if (!bl->rc)
> +            bl->rc = rc;
>      }
>  
> -    bl->callback(egc, bl, rc);
> +    bl->callback(egc, bl, bl->rc);
>  }
>  
>  /* might be called at any time, provided it's init'd */
> @@ -289,7 +295,8 @@ static void bootloader_stop(libxl__egc *egc,
>          if (r) LOGE(WARN, "%sfailed to kill bootloader [%lu]",
>                      rc ? "after failure, " : "", (unsigned long)bl->child.pid);
>      }
> -    bl->rc = rc;
> +    if (!bl->rc)
> +        bl->rc = rc;
>  }
>  
>  /*----- main flow of control -----*/



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 10:25:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 10:25:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swsa0-0001OJ-VC; Thu, 02 Aug 2012 10:24:32 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SwsZz-0001Nm-Bg
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 10:24:31 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1343903060!10913962!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5389 invoked from network); 2 Aug 2012 10:24:21 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 10:24:21 -0000
X-IronPort-AV: E=Sophos;i="4.77,699,1336348800"; d="scan'208";a="13819111"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 10:23:51 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Thu, 2 Aug 2012
	11:23:51 +0100
Message-ID: <1343903030.27221.124.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <ian.jackson@eu.citrix.com>
Date: Thu, 2 Aug 2012 11:23:50 +0100
In-Reply-To: <1343838260-17725-10-git-send-email-ian.jackson@eu.citrix.com>
References: <1343838260-17725-1-git-send-email-ian.jackson@eu.citrix.com>
	<1343838260-17725-10-git-send-email-ian.jackson@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 09/11] libxl: remus: mark TODOs more clearly
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-08-01 at 17:24 +0100, Ian Jackson wrote:
> Change the TODOs in the remus code to "REMUS TODO" which will make
> them easier to grep for later.  AIUI all of these are essential for
> use of remus in production.
> 
> Also add a new TODO and a new assert, to check rc on entry to
> remus_checkpoint_dm_saved.

CCing Shriram in the hope of getting some actual code here (for 4.3
most likely).

> 
> Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

> ---
>  tools/libxl/libxl_dom.c |    9 +++++----
>  1 files changed, 5 insertions(+), 4 deletions(-)
> 
> diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
> index d749983..06d5e4f 100644
> --- a/tools/libxl/libxl_dom.c
> +++ b/tools/libxl/libxl_dom.c
> @@ -1110,7 +1110,7 @@ int libxl__toolstack_save(uint32_t domid, uint8_t **buf,
>  
>  static int libxl__remus_domain_suspend_callback(void *data)
>  {
> -    /* TODO: Issue disk and network checkpoint reqs. */
> +    /* REMUS TODO: Issue disk and network checkpoint reqs. */
>      return libxl__domain_suspend_common_callback(data);
>  }
>  
> @@ -1124,7 +1124,7 @@ static int libxl__remus_domain_resume_callback(void *data)
>      if (libxl_domain_resume(CTX, dss->domid, /* Fast Suspend */1))
>          return 0;
>  
> -    /* TODO: Deal with disk. Start a new network output buffer */
> +    /* REMUS TODO: Deal with disk. Start a new network output buffer */
>      return 1;
>  }
>  
> @@ -1151,8 +1151,9 @@ static void libxl__remus_domain_checkpoint_callback(void *data)
>  static void remus_checkpoint_dm_saved(libxl__egc *egc,
>                                        libxl__domain_suspend_state *dss, int rc)
>  {
> -    /* TODO: Wait for disk and memory ack, release network buffer */
> -    /* TODO: make this asynchronous */
> +    /* REMUS TODO: Wait for disk and memory ack, release network buffer */
> +    /* REMUS TODO: make this asynchronous */
> +    assert(!rc); /* REMUS TODO handle this error properly */
>      usleep(dss->interval * 1000);
>      libxl__xc_domain_saverestore_async_callback_done(egc, &dss->shs, 1);
>  }



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 10:25:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 10:25:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swsa0-0001OJ-VC; Thu, 02 Aug 2012 10:24:32 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SwsZz-0001Nm-Bg
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 10:24:31 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1343903060!10913962!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5389 invoked from network); 2 Aug 2012 10:24:21 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 10:24:21 -0000
X-IronPort-AV: E=Sophos;i="4.77,699,1336348800"; d="scan'208";a="13819111"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 10:23:51 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Thu, 2 Aug 2012
	11:23:51 +0100
Message-ID: <1343903030.27221.124.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <ian.jackson@eu.citrix.com>
Date: Thu, 2 Aug 2012 11:23:50 +0100
In-Reply-To: <1343838260-17725-10-git-send-email-ian.jackson@eu.citrix.com>
References: <1343838260-17725-1-git-send-email-ian.jackson@eu.citrix.com>
	<1343838260-17725-10-git-send-email-ian.jackson@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 09/11] libxl: remus: mark TODOs more clearly
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-08-01 at 17:24 +0100, Ian Jackson wrote:
> Change the TODOs in the remus code to "REMUS TODO" which will make
> them easier to grep for later.  AIUI all of these are essential for
> use of remus in production.
> 
> Also add a new TODO and a new assert, to check rc on entry to
> remus_checkpoint_dm_saved.

CCing Shriram in the hopes of getting some actual code here (for 4.3
most likely).

> 
> Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

> ---
>  tools/libxl/libxl_dom.c |    9 +++++----
>  1 files changed, 5 insertions(+), 4 deletions(-)
> 
> diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
> index d749983..06d5e4f 100644
> --- a/tools/libxl/libxl_dom.c
> +++ b/tools/libxl/libxl_dom.c
> @@ -1110,7 +1110,7 @@ int libxl__toolstack_save(uint32_t domid, uint8_t **buf,
>  
>  static int libxl__remus_domain_suspend_callback(void *data)
>  {
> -    /* TODO: Issue disk and network checkpoint reqs. */
> +    /* REMUS TODO: Issue disk and network checkpoint reqs. */
>      return libxl__domain_suspend_common_callback(data);
>  }
>  
> @@ -1124,7 +1124,7 @@ static int libxl__remus_domain_resume_callback(void *data)
>      if (libxl_domain_resume(CTX, dss->domid, /* Fast Suspend */1))
>          return 0;
>  
> -    /* TODO: Deal with disk. Start a new network output buffer */
> +    /* REMUS TODO: Deal with disk. Start a new network output buffer */
>      return 1;
>  }
>  
> @@ -1151,8 +1151,9 @@ static void libxl__remus_domain_checkpoint_callback(void *data)
>  static void remus_checkpoint_dm_saved(libxl__egc *egc,
>                                        libxl__domain_suspend_state *dss, int rc)
>  {
> -    /* TODO: Wait for disk and memory ack, release network buffer */
> -    /* TODO: make this asynchronous */
> +    /* REMUS TODO: Wait for disk and memory ack, release network buffer */
> +    /* REMUS TODO: make this asynchronous */
> +    assert(!rc); /* REMUS TODO handle this error properly */
>      usleep(dss->interval * 1000);
>      libxl__xc_domain_saverestore_async_callback_done(egc, &dss->shs, 1);
>  }
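The point of the uniform marker is that all outstanding Remus work can then be found with a single grep (in a real tree: `grep -rn "REMUS TODO" tools/libxl/`). A small self-contained demonstration on a scratch file:

```shell
# Show that the uniform "REMUS TODO" marker matches in one pass while
# plain TODOs are left out; uses a scratch file rather than a real
# Xen checkout.
tmp=$(mktemp)
printf '/* REMUS TODO: wait for disk and memory ack */\n/* TODO: other */\n' > "$tmp"
grep -c "REMUS TODO" "$tmp"   # prints 1
rm -f "$tmp"
```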



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 10:25:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 10:25:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swsa6-0001P9-CB; Thu, 02 Aug 2012 10:24:38 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Swsa4-0001OD-Vu
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 10:24:37 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1343903060!10913962!2
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5703 invoked from network); 2 Aug 2012 10:24:24 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 10:24:24 -0000
X-IronPort-AV: E=Sophos;i="4.77,699,1336348800"; d="scan'208";a="13819120"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 10:24:22 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Thu, 2 Aug 2012
	11:24:22 +0100
Message-ID: <1343903061.27221.125.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <ian.jackson@eu.citrix.com>
Date: Thu, 2 Aug 2012 11:24:21 +0100
In-Reply-To: <1343838260-17725-11-git-send-email-ian.jackson@eu.citrix.com>
References: <1343838260-17725-1-git-send-email-ian.jackson@eu.citrix.com>
	<1343838260-17725-11-git-send-email-ian.jackson@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Dario Faggioli <dario.faggioli@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 10/11] libxl: remove an unused numainfo
 parameter
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-08-01 at 17:24 +0100, Ian Jackson wrote:
> Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

CCing Dario.

> ---
>  tools/libxl/libxl_numa.c |    4 ++--
>  1 files changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/tools/libxl/libxl_numa.c b/tools/libxl/libxl_numa.c
> index 5301ec4..2c8e59f 100644
> --- a/tools/libxl/libxl_numa.c
> +++ b/tools/libxl/libxl_numa.c
> @@ -231,7 +231,7 @@ static int nodemap_to_nr_vcpus(libxl__gc *gc, libxl_cputopology *tinfo,
>   * candidates with just one node).
>   */
>  static int count_cpus_per_node(libxl_cputopology *tinfo, int nr_cpus,
> -                               libxl_numainfo *ninfo, int nr_nodes)
> +                               int nr_nodes)
>  {
>      int cpus_per_node = 0;
>      int j, i;
> @@ -340,7 +340,7 @@ int libxl__get_numa_candidate(libxl__gc *gc,
>      if (!min_nodes) {
>          int cpus_per_node;
>  
> -        cpus_per_node = count_cpus_per_node(tinfo, nr_cpus, ninfo, nr_nodes);
> +        cpus_per_node = count_cpus_per_node(tinfo, nr_cpus, nr_nodes);
>          if (cpus_per_node == 0)
>              min_nodes = 1;
>          else
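With ninfo gone, count_cpus_per_node() depends only on the CPU topology array and the node count. A rough sketch of a function with the same shape; the struct and the max-based counting here are illustrative, not libxl's actual type or algorithm:

```c
#include <assert.h>

/* Illustrative stand-in for libxl_cputopology: only the node index
 * matters for this sketch. */
struct cputopology { int node; };

/* Count the CPUs on the most populated node.  Note the inputs are
 * just the topology plus nr_nodes; no numainfo is required. */
static int max_cpus_per_node(const struct cputopology *tinfo,
                             int nr_cpus, int nr_nodes)
{
    int best = 0;
    for (int n = 0; n < nr_nodes; n++) {
        int count = 0;
        for (int i = 0; i < nr_cpus; i++)
            if (tinfo[i].node == n)
                count++;
        if (count > best)
            best = count;
    }
    return best;
}
```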



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 10:32:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 10:32:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwshX-0001nv-9q; Thu, 02 Aug 2012 10:32:19 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jinsong.liu@intel.com>) id 1SwshW-0001nq-Dz
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 10:32:18 +0000
Received: from [85.158.139.83:48733] by server-9.bemta-5.messagelabs.com id
	C6/E4-01069-1375A105; Thu, 02 Aug 2012 10:32:17 +0000
X-Env-Sender: jinsong.liu@intel.com
X-Msg-Ref: server-6.tower-182.messagelabs.com!1343903536!26035941!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzA0Njcw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9221 invoked from network); 2 Aug 2012 10:32:16 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-6.tower-182.messagelabs.com with SMTP;
	2 Aug 2012 10:32:16 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga101.jf.intel.com with ESMTP; 02 Aug 2012 03:32:15 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.67,352,1309762800"; d="scan'208";a="174595164"
Received: from fmsmsx108.amr.corp.intel.com ([10.19.9.228])
	by orsmga001.jf.intel.com with ESMTP; 02 Aug 2012 03:32:15 -0700
Received: from fmsmsx102.amr.corp.intel.com (10.19.9.53) by
	FMSMSX108.amr.corp.intel.com (10.19.9.228) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Thu, 2 Aug 2012 03:32:15 -0700
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	FMSMSX102.amr.corp.intel.com (10.19.9.53) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Thu, 2 Aug 2012 03:32:14 -0700
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.82]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.232]) with mapi id
	14.01.0355.002; Thu, 2 Aug 2012 18:32:13 +0800
From: "Liu, Jinsong" <jinsong.liu@intel.com>
To: Matt Wilson <msw@amazon.com>, Jan Beulich <JBeulich@suse.com>, Keir Fraser
	<keir@xen.org>
Thread-Topic: [PATCH] xen/x86: Add support for cpuid masking on Intel Xeon
	Processor E5 Family
Thread-Index: AQHNbPY29Zrc8trYhUeGi1wAnxggd5dGVluA
Date: Thu, 2 Aug 2012 10:32:13 +0000
Message-ID: <DE8DF0795D48FD4CA783C40EC82923352D34C1@SHSMSX101.ccr.corp.intel.com>
References: <bf922651da96e045cd8f.1343503160@kaos-source-31003.sea31.amazon.com>
In-Reply-To: <bf922651da96e045cd8f.1343503160@kaos-source-31003.sea31.amazon.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] xen/x86: Add support for cpuid masking on
 Intel Xeon Processor E5 Family
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

IMO, it's risky since it's not architecturally committed.
Only family 6 processors with models 0x17, 0x1d, 0x1a, 0x1e, 0x1f, 0x25, 0x2c, 0x2e, 0x2f, and 0x2a are *architecturally* supported.

Thanks,
Jinsong

Matt Wilson wrote:
> Although the "Intel Virtualization Technology FlexMigration
> Application Note" (http://www.intel.com/Assets/PDF/manual/323850.pdf)
> does not document support for extended model 2H model DH (Intel Xeon
> Processor E5 Family), empirical evidence shows that the same MSR
> addresses can be used for cpuid masking as extended model 2H model AH
> (Intel Xeon Processor E3-1200 Family).
> 
> Signed-off-by: Matt Wilson <msw@amazon.com>
> 
> diff -r e6266fc76d08 -r bf922651da96 xen/arch/x86/cpu/intel.c
> --- a/xen/arch/x86/cpu/intel.c	Fri Jul 27 12:22:13 2012 +0200
> +++ b/xen/arch/x86/cpu/intel.c	Sat Jul 28 17:27:30 2012 +0000
> @@ -104,7 +104,7 @@ static void __devinit set_cpuidmask(cons
>  			return;
>  		extra = "xsave ";
>  		break;
> -	case 0x2a:
> +	case 0x2a: case 0x2d:
>  		wrmsr(MSR_INTEL_CPUID1_FEATURE_MASK_V2,
>  		      opt_cpuid_mask_ecx,
>  		      opt_cpuid_mask_edx);
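For reference, the model numbers compared in this switch (0x2a, 0x2d) are CPUID "display models": the extended model (leaf 1 EAX bits 19:16) shifted left four and combined with the base model (bits 7:4), so extended model 2H with model DH yields 0x2d. A minimal sketch of that decoding; the sample EAX signatures in the test values below are illustrative leaf-1 values for these parts:

```c
#include <assert.h>
#include <stdint.h>

/* Decode the CPUID display model from leaf 1 EAX.  For family 6 (and
 * family 15) parts: display model = (extended model << 4) | model. */
static unsigned int display_model(uint32_t eax)
{
    unsigned int model     = (eax >> 4)  & 0xf;  /* bits 7:4   */
    unsigned int ext_model = (eax >> 16) & 0xf;  /* bits 19:16 */
    return (ext_model << 4) | model;
}
```

For example, a leaf-1 EAX of 0x206d7 (a Xeon E5 / Sandy Bridge-EP signature) decodes to model 0x2d, while 0x206a7 (E3-1200 / Sandy Bridge) decodes to 0x2a.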


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 10:42:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 10:42:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwsqO-00022O-B5; Thu, 02 Aug 2012 10:41:28 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SwsqM-00022J-SD
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 10:41:27 +0000
Received: from [85.158.139.83:60960] by server-3.bemta-5.messagelabs.com id
	7D/6D-03367-5595A105; Thu, 02 Aug 2012 10:41:25 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-182.messagelabs.com!1343904085!18627502!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24123 invoked from network); 2 Aug 2012 10:41:25 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 10:41:25 -0000
X-IronPort-AV: E=Sophos;i="4.77,699,1336348800"; d="scan'208";a="13819479"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 10:41:24 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Thu, 2 Aug 2012
	11:41:24 +0100
Message-ID: <1343904082.27221.136.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <ian.jackson@eu.citrix.com>
Date: Thu, 2 Aug 2012 11:41:22 +0100
In-Reply-To: <1343838260-17725-12-git-send-email-ian.jackson@eu.citrix.com>
References: <1343838260-17725-1-git-send-email-ian.jackson@eu.citrix.com>
	<1343838260-17725-12-git-send-email-ian.jackson@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 11/11] libxl: -Wunused-parameter
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-08-01 at 17:24 +0100, Ian Jackson wrote:
> *** DO NOT APPLY ***
> 
> We have recently had a couple of bugs which basically involved
> ignoring the rc parameter to a callback function.  I thought I would
> try -Wunused-parameter.  Here are the results.
> 
> I found three further problems:
> 
>  * libxl_wait_for_free_memory takes a domid parameter but its
>    semantics don't seem to call for that.  This function is
>    going to have a big warning put on it for 4.2 and that
>    should happen soon.
> 
>  * qmp_synchronous_send has an `ask_timeout' parameter which is
>    ignored.
> 
>  * The autogenerated function libxl_event_init_type ignores the type
>    parameter.
> 
> Things I needed to do to get the rest of the code to compile:
> 
>  * Remove one harmless unused parameter from an internal function.
>    (Earlier in this series.)
> 
>  * Add an assert to make the error handling in the broken remus code
>    slightly less broken.  (Earlier in this series.)
> 
>  * Provide machinery in the Makefile for passing different CFLAGS to
>    libxl as opposed to xl and libxlu.  The flex- and bison-generated
>    files in libxlu can't be compiled with -Wunused-parameter.
> 
>  * Define a new helper macro
>         #define USE(var) ((void)(var))
>    and use it 43 times.  The pattern is something like
>         USE(egc);
>    in a function which takes egc but doesn't need it.  If the
>    parameter is later used, this is harmless.  In functions
>    which are placeholders the USE statement should be placed in the
>    middle of the function where the parameter would be used if the
>    function is changed later, so that the USE gets deleted by the
>    patch introducing the implementation.
> 
>  * Define a new helper macro for use only in other macros
>         #define MAYBE_UNUSED __attribute__((unused))
>    and use it in 10 different places.
> 
>  * Define new macros for helping declare common types of callback
>    functions.  For example:
> 
>         #define EV_XSWATCH_CALLBACK_PARAMS(egc, watch, wpath, epath)  \
>             libxl__egc *egc MAYBE_UNUSED,                             \
>             libxl__ev_xswatch *watch MAYBE_UNUSED,                    \
>             const char *wpath MAYBE_UNUSED,                           \
>             const char *epath MAYBE_UNUSED
> 
>    which is used like this:
> 
>         -static void some_callback(libxl__egc *egc, libxl__ev_xswatch *watch,
>         -                          const char *wpath, const char *epath)
>         +static void some_callback(EV_XSWATCH_CALLBACK_PARAMS
>         +                          (egc, watch, wpath, epath))
>         {
>             ... now we use (or not) egc, watch, wpath, etc. or not as we like
> 
>    This somewhat resembles a Traditional K&R C typeless function
>    definition.  The types of the parameters are actually defined
>    for the compiler of course, along with the information that
>    the parameters might be unused.
> 
>    There are 4 macros of this kind with 22 call sites.
> 
> IMO the cost (65 places in ordinary code where we have to write
> something somewhat ugly) is worth the benefit (finding, if we had
> deployed this right away, around 6 bugs).  But it's arguable.
> 
> *** DO NOT APPLY ***
> 
> Anyway, this patch must not be applied right now because it causes the
> build to fail.
> 
> Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
> Cc: Ian Campbell <Ian.Campbell@citrix.com>
> ---
>  tools/libxl/Makefile             |    4 +++-
>  tools/libxl/libxl.c              |   29 +++++++++++++++++++++++++----
>  tools/libxl/libxl_aoutils.c      |    8 ++++----
>  tools/libxl/libxl_blktap2.c      |    1 +
>  tools/libxl/libxl_bootloader.c   |    6 ++++++
>  tools/libxl/libxl_create.c       |    2 ++
>  tools/libxl/libxl_device.c       |    9 +++++----
>  tools/libxl/libxl_dm.c           |    4 ++++
>  tools/libxl/libxl_dom.c          |    8 ++++----
>  tools/libxl/libxl_event.c        |   22 +++++++++++++---------
>  tools/libxl/libxl_exec.c         |    8 ++++----
>  tools/libxl/libxl_fork.c         |    1 +
>  tools/libxl/libxl_internal.h     |   24 ++++++++++++++++++++++--
>  tools/libxl/libxl_pci.c          |    6 ++++++
>  tools/libxl/libxl_qmp.c          |   17 +++++++++--------
>  tools/libxl/libxl_save_callout.c |    4 ++--
>  tools/libxl/libxl_utils.c        |    2 ++
>  17 files changed, 113 insertions(+), 42 deletions(-)

I'm not entirely sure how I feel about this patch generally (it's quite
a bit of a mess, but the bugs it would have found are real). It's also
quite a lot of churn for 4.2.

On the other hand we are likely to want to backport lots of libxl fixes
for 4.2.1 (I was actually considering an exception to the "no new
features" rule for 4.2.1 for xm parity causing patches) and having this
in 4.2 would make that cleaner.

I guess I come down (just) on the side of taking this, when it is baked.

> @@ -700,6 +702,8 @@ int libxl_domain_remus_start(libxl_ctx *ctx, libxl_domain_remus_info *info,
>      libxl__domain_suspend_state *dss;
>      int rc;
> 
> +    USE(recv_fd); /* TODO get rid of this and actually use it! */

You've only just introduced REMUS TODO...

> @@ -1019,6 +1019,7 @@ void libxl_osevent_occurred_fd(libxl_ctx *ctx, void *for_libxl,
> 
>      assert(fd == ev->fd);
>      revents &= ev->events;
> +    USE(events); /* we use our own idea of what we asked for */

What is the point of this argument then?

Is getting an event we weren't expecting a log-worthy occurrence?

> diff --git a/tools/libxl/libxl_pci.c b/tools/libxl/libxl_pci.c
> index 9c92ae6..a094965 100644
> --- a/tools/libxl/libxl_pci.c
> +++ b/tools/libxl/libxl_pci.c
> @@ -800,6 +800,10 @@ static int pci_ins_check(libxl__gc *gc, uint32_t domid, const char *state, void
>  {
>      char *orig_state = priv;
> 
> +    USE(gc);
> +    USE(domid);
> +    USE(priv);

You actually use priv above.

Ian.
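The USE() and MAYBE_UNUSED helpers proposed in the patch can be sketched standalone; the two functions below are made-up examples (not libxl code) showing the two idioms, and the whole thing compiles cleanly under GCC's -Wunused-parameter:

```c
/* Evaluate-and-discard: suppresses -Wunused-parameter for a parameter
 * that a placeholder function does not need yet. */
#define USE(var) ((void)(var))

/* For use inside parameter-declaring macros: marks a parameter as
 * possibly unused (GCC/Clang attribute). */
#define MAYBE_UNUSED __attribute__((unused))

/* Made-up callback that genuinely ignores its second parameter. */
static int scale(int value, int context MAYBE_UNUSED)
{
    return value * 2;
}

/* Made-up placeholder: USE() sits where context would eventually be
 * consumed, so the patch implementing it deletes the USE line. */
static int placeholder(int value, int context)
{
    USE(context);
    return value;
}
```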


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-08-01 at 17:24 +0100, Ian Jackson wrote:
> *** DO NOT APPLY ***
> 
> We have recently had a couple of bugs which basically involved
> ignoring the rc parameter to a callback function.  I thought I would
> try -Wunused-parameter.  Here are the results.
> 
> I found three further problems:
> 
>  * libxl_wait_for_free_memory takes a domid parameter but its
>    semantics don't seem to call for that.  This function is
>    going to have a big warning put on it for 4.2 and that
>    should happen soon.
> 
>  * qmp_synchronous_send has an `ask_timeout' parameter which is
>    ignored.
> 
>  * The autogenerated function libxl_event_init_type ignores the type
>    parameter.
> 
> Things I needed to do to get the rest of the code to compile:
> 
>  * Remove one harmless unused parameter from an internal function.
>    (Earlier in this series.)
> 
>  * Add an assert to make the error handling in the broken remus code
>    slightly less broken.  (Earlier in this series.)
> 
>  * Provide machinery in the Makefile for passing different CFLAGS to
>    libxl as opposed to xl and libxlu.  The flex- and bison-generated
>    files in libxlu can't be compiled with -Wunused-parameter.
> 
>  * Define a new helper macro
>         #define USE(var) ((void)(var))
>    and use it 43 times.  The pattern is something like
>         USE(egc);
>    in a function which takes egc but doesn't need it.  If the
>    parameter is later used, this is harmless.  In functions
>    which are placeholders the USE statement should be placed in the
>    middle of the function where the parameter would be used if the
>    function is changed later, so that the USE gets deleted by the
>    patch introducing the implementation.
> 
>  * Define a new helper macro for use only in other macros
>         #define MAYBE_UNUSED __attribute__((unused))
>    and use it in 10 different places.
> 
>  * Define new macros for helping declare common types of callback
>    functions.  For example:
> 
>         #define EV_XSWATCH_CALLBACK_PARAMS(egc, watch, wpath, epath)  \
>             libxl__egc *egc MAYBE_UNUSED,                             \
>             libxl__ev_xswatch *watch MAYBE_UNUSED,                    \
>             const char *wpath MAYBE_UNUSED,                           \
>             const char *epath MAYBE_UNUSED
> 
>    which is used like this:
> 
>         -static void some_callback(libxl__egc *egc, libxl__ev_xswatch *watch,
>         -                          const char *wpath, const char *epath)
>         +static void some_callback(EV_XSWATCH_CALLBACK_PARAMS
>         +                          (egc, watch, wpath, epath))
>         {
>             ... now we use (or not) egc, watch, wpath, etc. as we like
> 
>    This somewhat resembles a Traditional K&R C typeless function
>    definition.  The types of the parameters are actually defined
>    for the compiler of course, along with the information that
>    the parameters might be unused.
> 
>    There are 4 macros of this kind with 22 call sites.
> 
> IMO the cost (65 places in ordinary code where we have to write
> something somewhat ugly) is worth the benefit (finding, if we had
> deployed this right away, around 6 bugs).  But it's arguable.
> 
> *** DO NOT APPLY ***
> 
> Anyway, this patch must not be applied right now because it causes the
> build to fail.
> 
> Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
> Cc: Ian Campbell <Ian.Campbell@citrix.com>
> ---
>  tools/libxl/Makefile             |    4 +++-
>  tools/libxl/libxl.c              |   29 +++++++++++++++++++++++++----
>  tools/libxl/libxl_aoutils.c      |    8 ++++----
>  tools/libxl/libxl_blktap2.c      |    1 +
>  tools/libxl/libxl_bootloader.c   |    6 ++++++
>  tools/libxl/libxl_create.c       |    2 ++
>  tools/libxl/libxl_device.c       |    9 +++++----
>  tools/libxl/libxl_dm.c           |    4 ++++
>  tools/libxl/libxl_dom.c          |    8 ++++----
>  tools/libxl/libxl_event.c        |   22 +++++++++++++---------
>  tools/libxl/libxl_exec.c         |    8 ++++----
>  tools/libxl/libxl_fork.c         |    1 +
>  tools/libxl/libxl_internal.h     |   24 ++++++++++++++++++++++--
>  tools/libxl/libxl_pci.c          |    6 ++++++
>  tools/libxl/libxl_qmp.c          |   17 +++++++++--------
>  tools/libxl/libxl_save_callout.c |    4 ++--
>  tools/libxl/libxl_utils.c        |    2 ++
>  17 files changed, 113 insertions(+), 42 deletions(-)

I'm not entirely sure how I feel about this patch generally (it's quite
a bit of a mess, but the bugs it would have found are real). It's also
quite a lot of churn for 4.2.

On the other hand we are likely to want to backport lots of libxl fixes
for 4.2.1 (I was actually considering an exception to the "no new
features" rule for 4.2.1 for patches needed for xm parity), and having
this in 4.2 would make that cleaner.

I guess I come down (just) on the side of taking this, when it is baked.

> @@ -700,6 +702,8 @@ int libxl_domain_remus_start(libxl_ctx *ctx, libxl_domain_remus_info *info,
>      libxl__domain_suspend_state *dss;
>      int rc;
> 
> +    USE(recv_fd); /* TODO get rid of this and actually use it! */

You've only just introduced TODO REMUS...

> @@ -1019,6 +1019,7 @@ void libxl_osevent_occurred_fd(libxl_ctx *ctx, void *for_libxl,
> 
>      assert(fd == ev->fd);
>      revents &= ev->events;
> +    USE(events); /* we use our own idea of what we asked for */

What is the point of this argument then?

Is getting an event we weren't expecting a log-worthy occurrence?

> diff --git a/tools/libxl/libxl_pci.c b/tools/libxl/libxl_pci.c
> index 9c92ae6..a094965 100644
> --- a/tools/libxl/libxl_pci.c
> +++ b/tools/libxl/libxl_pci.c
> @@ -800,6 +800,10 @@ static int pci_ins_check(libxl__gc *gc, uint32_t domid, const char *state, void
>  {
>      char *orig_state = priv;
> 
> +    USE(gc);
> +    USE(domid);
> +    USE(priv);

You actually use priv above.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 10:46:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 10:46:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwsuL-0002Br-3r; Thu, 02 Aug 2012 10:45:33 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1SwsuJ-0002BR-K8
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 10:45:31 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-13.tower-27.messagelabs.com!1343904324!11864832!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14433 invoked from network); 2 Aug 2012 10:45:25 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-13.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 2 Aug 2012 10:45:25 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1SwsuC-00039B-BP; Thu, 02 Aug 2012 10:45:24 +0000
Date: Thu, 2 Aug 2012 11:45:24 +0100
From: Tim Deegan <tim@xen.org>
To: Christoph Egger <Christoph.Egger@amd.com>
Message-ID: <20120802104524.GA11437@ocelot.phlegethon.org>
References: <5008166B.6010603@amd.com>
	<20120726182111.GB4135@ocelot.phlegethon.org>
	<5012822E.2030603@amd.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <5012822E.2030603@amd.com>
User-Agent: Mutt/1.4.2.1i
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] nestedhvm: fix write access fault on ro
	mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 13:57 +0200 on 27 Jul (1343397454), Christoph Egger wrote:
> >> @@ -1291,6 +1291,8 @@ int hvm_hap_nested_page_fault(unsigned l
> >>              if ( !handle_mmio() )
> >>                  hvm_inject_hw_exception(TRAP_gp_fault, 0);
> >>              return 1;
> >> +        case NESTEDHVM_PAGEFAULT_READONLY:
> >> +            break;
> > 
> > Don't we have to translate the faulting PA into an L1 address before
> > letting the rest of this fault handler run?  It explicitly operates on
> > the hostp2m.  
> > 
> > If we do that, we should probably do it for NESTEDHVM_PAGEFAULT_ERROR,
> > rather than special-casing READONLY.  That way any other
> > automatically-fixed types (like the p2m_access magic) will be covered
> > too.
> 
> How do you differentiate whether the error happened while walking the
> l1 npt or the host npt?
> In the first case it isn't possible to provide an l1 address.

It must be _possible_; after all we managed to detect the error. :)  In
any case it's definitely wrong to carry on with this handler with the
wrong address in hand.  So I wonder why this patch actually works for
you.  Does replacing the 'break' above with 'return 1' also fix the
problem?

In the short term, do you only care about pages that are read-only for
log-dirty tracking?  For the L1 walk, that should be handled by the PT
walker's own calls to paging_mark_dirty(), and the nested-p2m handler
could potentially take care of the other case by calling
paging_mark_dirty() (for writes!) before calling nestedhap_walk_L0_p2m().

Tim.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 10:46:26 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 10:46:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swsuc-0002E0-Gr; Thu, 02 Aug 2012 10:45:50 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <attilio.rao@citrix.com>) id 1Swsua-0002Dh-Ot
	for xen-devel@lists.xensource.com; Thu, 02 Aug 2012 10:45:48 +0000
Received: from [85.158.138.51:9003] by server-7.bemta-3.messagelabs.com id
	DD/03-21158-B5A5A105; Thu, 02 Aug 2012 10:45:47 +0000
X-Env-Sender: attilio.rao@citrix.com
X-Msg-Ref: server-15.tower-174.messagelabs.com!1343904347!28300445!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31117 invoked from network); 2 Aug 2012 10:45:47 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 10:45:47 -0000
X-IronPort-AV: E=Sophos;i="4.77,699,1336348800"; d="scan'208";a="13819577"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 10:45:46 +0000
Received: from [10.80.3.221] (10.80.3.221) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Thu, 2 Aug 2012
	11:45:46 +0100
Message-ID: <501A57E9.6070407@citrix.com>
Date: Thu, 2 Aug 2012 11:35:21 +0100
From: Attilio Rao <attilio.rao@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US;
	rv:1.9.1.16) Gecko/20111110 Icedove/3.0.11
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1321471508-31633-1-git-send-email-jean.guyader@eu.citrix.com>
	<1321471508-31633-2-git-send-email-jean.guyader@eu.citrix.com>
	<1321471508-31633-3-git-send-email-jean.guyader@eu.citrix.com>
	<1321471508-31633-4-git-send-email-jean.guyader@eu.citrix.com>
	<1321471508-31633-5-git-send-email-jean.guyader@eu.citrix.com>
	<alpine.DEB.2.02.1208011846510.4645@kaball.uk.xensource.com>
	<501A631D02000078000921B8@nat28.tlf.novell.com>
	<501A4C29.5080006@citrix.com>
	<501A71030200007800092257@nat28.tlf.novell.com>
In-Reply-To: <501A71030200007800092257@nat28.tlf.novell.com>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"Keir \(Xen.org\)" <keir@xen.org>,
	KonradRzeszutek Wilk <konrad.wilk@oracle.com>,
	"Tim \(Xen.org\)" <tim@xen.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"allen.m.kay@intel.com" <allen.m.kay@intel.com>,
	Jean Guyader <jean.guyader@eu.citrix.com>,
	"Jean Guyader \(3P\)" <jean.guyader@citrix.com>
Subject: Re: [Xen-devel] Should we revert "mm: New XENMEM space,
	XENMAPSPACE_gmfn_range"?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/08/12 11:22, Jan Beulich wrote:
>>>> On 02.08.12 at 11:45, Attilio Rao<attilio.rao@citrix.com>  wrote:
>>>>          
>> On 02/08/12 10:23, Jan Beulich wrote:
>>      
>>>>>> On 01.08.12 at 19:55, Stefano Stabellini<stefano.stabellini@eu.citrix.com>
>>>>>>              
>> wrote:
>>      
>>>>>>
>>>>>>              
>>>> I was reading more about this commit because this patch breaks the ABI
>>>> on ARM, when I realized that on x86 there is no standard that specifies
>>>> the alignment of fields in a struct.
>>>>
>>>>          
>>> There is - the psABI supplements to the SVR4 ABI.
>>>
>>>
>>>        
>> This is a completely different issue.
>> The problem here is the padding gcc (or whatever compiler) adds to the
>> struct in order to align the members to the word boundary. The
>> difference is that this is not enforced in the ARM case (apparently,
>> from Stefano's report) while it happens in the x86 case.
>>
>> This is why it is a good rule to organize the members of a struct from
>> the biggest to the smallest when compiling with gcc, and this is not
>> the case for the struct in question.
>>
>> In the end it is a compiler decision, not something decided by
>> the ABI.
>>      
> No, definitely not. Otherwise inter-operation between code
> compiled with different compilers would be impossible. To
> allow this is what the various ABI specifications exist for (and
> their absence had, e.g. on DOS, led to a complete mess).
>
>    

Look, I'm speaking about the problem Stefano is trying to solve, which
has nothing to do with your discussion of the ABI.

> As to the ARM issue - mind pointing out where mis-aligned
> structure fields are specified as being the standard?
>
>    

I think that alignment is important; in fact I'm more surprised by the
ARM side than the x86 one. Of course, this is a compiler-dependent
behaviour (the fact that not only gcc does this doesn't mean it is
"standardized", just as it is not standardized anywhere that the stack
on x86 must be word aligned, even though it is so common that it is
taken for granted now).

I was wondering: maybe ARM is compiled with -fpack-struct (though I
would be surprised)?

Attilio

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 10:48:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 10:48:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwswT-0002Pr-1N; Thu, 02 Aug 2012 10:47:45 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1SwswR-0002Pc-C4
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 10:47:43 +0000
Received: from [85.158.143.35:15327] by server-3.bemta-4.messagelabs.com id
	C6/83-01511-ECA5A105; Thu, 02 Aug 2012 10:47:42 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-13.tower-21.messagelabs.com!1343904461!17062876!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16309 invoked from network); 2 Aug 2012 10:47:42 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-13.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 2 Aug 2012 10:47:42 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1SwswP-00039j-Gq; Thu, 02 Aug 2012 10:47:41 +0000
Date: Thu, 2 Aug 2012 11:47:41 +0100
From: Tim Deegan <tim@xen.org>
To: lmingcsce <lmingcsce@gmail.com>
Message-ID: <20120802104741.GB11437@ocelot.phlegethon.org>
References: <7E3078CC-DDFB-49C4-98A9-CC14395A41ED@gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <7E3078CC-DDFB-49C4-98A9-CC14395A41ED@gmail.com>
User-Agent: Mutt/1.4.2.1i
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] About revoke write access of all the shadows
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 16:18 -0400 on 26 Jul (1343319518), lmingcsce wrote:
> Hi all,
> Recently, I have been reading the shadow page table code. I'm wondering whether the kernel provides a function to revoke write access to all the shadows of one domain. If you know of one, please tell me about it. Thanks.
> BTW, I have my own idea to implement this. My idea is as follows: 
> void sh_revoke_write_access_all(struct domain *d)
> {
>     foreach_pinned_shadow(d, sp, t)
>     {
> 
>        According to sp->u.sh.type (like SH_type_l1_32_shadow ...), get each entry of the page table (shadow_l1e_get_flags), change the flags to read-only, and then write the page table entry back (shadow_set_l1e).
>        When going through the page table, I can use the SHADOW_FOREACH_L1E (L2E, L3E, L4E) macros.
>        However, I have one question: when dealing with shadow page tables L2, L3 and L4, can I change and set flags the same way as for the L1 page table?
> 
>     }
> }
> Do you think this approach is feasible? Thanks for any suggestions.

Yes, that should work.  But since the shadow pagetables never use
superpages, you should only adjust the entries in type_l1* pages;
there's no need to touch L2, L3 or L4.

Cheers,

Tim.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 10:53:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 10:53:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swt1y-0002i1-2S; Thu, 02 Aug 2012 10:53:26 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1Swt1w-0002hw-ME
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 10:53:24 +0000
Received: from [85.158.138.51:10208] by server-2.bemta-3.messagelabs.com id
	5F/5B-00359-32C5A105; Thu, 02 Aug 2012 10:53:23 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-12.tower-174.messagelabs.com!1343904803!20531655!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21584 invoked from network); 2 Aug 2012 10:53:23 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-12.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 2 Aug 2012 10:53:23 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1Swt1o-0003B3-RY; Thu, 02 Aug 2012 10:53:16 +0000
Date: Thu, 2 Aug 2012 11:53:16 +0100
From: Tim Deegan <tim@xen.org>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20120802105316.GC11437@ocelot.phlegethon.org>
References: <1343373880839-5710388.post@n5.nabble.com>
	<501265C60200007800090E27@nat28.tlf.novell.com>
	<1343380992.326297641@f89.mail.ru>
	<5012863B0200007800090EC1@nat28.tlf.novell.com>
	<1343384738.6812.146.camel@zakaz.uk.xensource.com>
	<50128BD70200007800090F11@nat28.tlf.novell.com>
	<50127463.6000109@ts.fujitsu.com>
	<20120727202913.GB23990@ocelot.phlegethon.org>
	<5016539D020000780009134A@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <5016539D020000780009134A@nat28.tlf.novell.com>
User-Agent: Mutt/1.4.2.1i
Cc: Juergen Gross <juergen.gross@ts.fujitsu.com>,
	xen-devel <xen-devel@lists.xen.org>, Yuriy Logvinov <hackroute@mail.ru>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] compiling error "xen-unstable.hg"
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 08:27 +0100 on 30 Jul (1343636877), Jan Beulich wrote:
> > We might do better to switch to '$(CC) --version | head -1', which seems
> > to provide something sensible on all the gcc and clang binaries I have
> > handy. 
> 
> The question is how relevant that string is in the first place:
> My cross compilers, for example, get invoked through a shell
> script that's named gccx. Invoking that with --version
> reproduces the shell script name (which the script forces as
> the argv[0] of the exec-ed "real" gcc):
> 
> gccx (GCC) 4.7.1
> 
> which isn't the case for -v:
> 
> gcc version 4.7.1 (GCC)

Sigh.  

> So as long as no deeper meaning is implied from the string by
> anyone (there's nothing in-tree that I'm aware of), I think
> that's at least as good a change as adding LC_ALL=C (which
> I'm not really certain would be fully reliable in all
> possible cases).

I'm not aware of anyone/anything that relies on that string except for
human consumption.  I'll submit a patch to change to --version. 
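
The behaviour difference can be sketched with canned sample outputs (the strings below are illustrative, including the hypothetical localized last line; they were not captured from real compilers):

```python
# Canned sample outputs; illustrative only, not captured from real tools.
v_output = ("Using built-in specs.\n"
            "Target: x86_64-linux-gnu\n"
            "gcc version 4.7.1 (GCC)\n")
# Hypothetical localized '-v' output: the word 'version' is translated,
# so 'grep version' finds nothing.
v_output_localized = ("Using built-in specs.\n"
                      "gcc-Version 4.7.1 (GCC)\n")
version_output = ("gcc (GCC) 4.7.1\n"
                  "Copyright (C) 2012 Free Software Foundation, Inc.\n")

def old_way(output):
    # Equivalent of: $(CC) -v 2>&1 | grep version | tail -1
    matches = [line for line in output.splitlines() if "version" in line]
    return matches[-1] if matches else ""

def new_way(output):
    # Equivalent of: $(CC) --version | head -1
    return output.splitlines()[0]

print(old_way(v_output))            # gcc version 4.7.1 (GCC)
print(old_way(v_output_localized))  # empty: localization broke the grep
print(new_way(version_output))      # gcc (GCC) 4.7.1
```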

Cheers,

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 11:01:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 11:01:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swt8k-0002t7-Uu; Thu, 02 Aug 2012 11:00:26 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1Swt8j-0002t2-8D
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 11:00:25 +0000
Received: from [85.158.143.99:5311] by server-1.bemta-4.messagelabs.com id
	E1/E6-24392-8CD5A105; Thu, 02 Aug 2012 11:00:24 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-16.tower-216.messagelabs.com!1343905222!18244311!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29875 invoked from network); 2 Aug 2012 11:00:23 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 11:00:23 -0000
X-IronPort-AV: E=Sophos;i="4.77,699,1336348800"; 
	d="asc'?scan'208";a="13819863"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 11:00:22 +0000
Received: from [IPv6:::1] (10.80.16.67) by smtprelay.citrix.com
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Thu, 2 Aug 2012
	12:00:22 +0100
Message-ID: <1343905213.8861.1.camel@Abyss>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Date: Thu, 2 Aug 2012 13:00:13 +0200
In-Reply-To: <1343903061.27221.125.camel@zakaz.uk.xensource.com>
References: <1343838260-17725-1-git-send-email-ian.jackson@eu.citrix.com>
	<1343838260-17725-11-git-send-email-ian.jackson@eu.citrix.com>
	<1343903061.27221.125.camel@zakaz.uk.xensource.com>
X-Mailer: Evolution 3.2.3 (3.2.3-3.fc16) 
MIME-Version: 1.0
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 10/11] libxl: remove an unused numainfo
 parameter
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8274132961945022384=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8274132961945022384==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-z45rZfx8hsX7w00jLYk/"

--=-z45rZfx8hsX7w00jLYk/
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Thu, 2012-08-02 at 11:24 +0100, Ian Campbell wrote:=20
> On Wed, 2012-08-01 at 17:24 +0100, Ian Jackson wrote:
> > Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
>=20
> Acked-by: Ian Campbell <ian.campbell@citrix.com>
>=20
> CCing Dario.
>=20
Oops, sorry, I guess it was a stale leftover from a previous version that
I forgot to remove. Thanks!

Acked-by: Dario Faggioli <dario.faggioli@citrix.com>

--=20
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)



--=-z45rZfx8hsX7w00jLYk/
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iEYEABECAAYFAlAaXb0ACgkQk4XaBE3IOsTACgCgqoRxKw1W4Cx8ms1lUXasA/+e
sW0AoJUBM5WwwdY8F2/Ptw2fUDbYJobo
=cvm4
-----END PGP SIGNATURE-----

--=-z45rZfx8hsX7w00jLYk/--


--===============8274132961945022384==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8274132961945022384==--


From xen-devel-bounces@lists.xen.org Thu Aug 02 11:07:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 11:07:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwtF8-00034B-Pv; Thu, 02 Aug 2012 11:07:02 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1SwtF7-000340-EN
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 11:07:01 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-2.tower-27.messagelabs.com!1343905514!11912513!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNjkwMjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31231 invoked from network); 2 Aug 2012 11:05:16 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 11:05:16 -0000
X-IronPort-AV: E=Sophos;i="4.77,699,1336363200"; d="scan'208";a="203915688"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 07:05:14 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 2 Aug 2012 07:05:14 -0400
Received: from whitby.uk.xensource.com ([10.80.2.60])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<tim@xen.org>) id 1SwtDN-000769-Rk	for xen-devel@lists.xen.org;
	Thu, 02 Aug 2012 12:05:13 +0100
Received: from tdeegan by whitby.uk.xensource.com with local (Exim 4.76)
	(envelope-from <tim@xen.org>)	id 1SwtDN-00013N-GY	for
	xen-devel@lists.xen.org; Thu, 02 Aug 2012 12:05:13 +0100
MIME-Version: 1.0
X-Mercurial-Node: fdd4b7b36959492b3909613867360d9993d420a7
Message-ID: <fdd4b7b36959492b3909.1343905513@whitby.uk.xensource.com>
User-Agent: Mercurial-patchbomb/2.0.1
Date: Thu, 2 Aug 2012 12:05:13 +0100
From: Tim Deegan <tim@xen.org>
To: xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH] xen: detect compiler version with '--version'
	rather than '-v'
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

# HG changeset patch
# User Tim Deegan <tim@xen.org>
# Date 1343905471 -3600
# Node ID fdd4b7b36959492b3909613867360d9993d420a7
# Parent  3d17148e465ce87ddd8f555a001280348d848419
xen: detect compiler version with '--version' rather than '-v'

This allows us to get rid of the 'grep version', which doesn't
work with localized compilers.

Signed-off-by: Tim Deegan <tim@xen.org>

diff -r 3d17148e465c -r fdd4b7b36959 xen/Makefile
--- a/xen/Makefile	Thu Aug 02 11:49:37 2012 +0200
+++ b/xen/Makefile	Thu Aug 02 12:04:31 2012 +0100
@@ -103,7 +103,7 @@ include/xen/compile.h: include/xen/compi
 	    -e 's/@@whoami@@/$(XEN_WHOAMI)/g' \
 	    -e 's/@@domain@@/$(XEN_DOMAIN)/g' \
 	    -e 's/@@hostname@@/$(shell hostname)/g' \
-	    -e 's!@@compiler@@!$(shell $(CC) $(CFLAGS) -v 2>&1 | grep version | tail -1)!g' \
+	    -e 's!@@compiler@@!$(shell $(CC) $(CFLAGS) --version 2>&1 | head -1)!g' \
 	    -e 's/@@version@@/$(XEN_VERSION)/g' \
 	    -e 's/@@subversion@@/$(XEN_SUBVERSION)/g' \
 	    -e 's/@@extraversion@@/$(XEN_EXTRAVERSION)/g' \

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 11:09:42 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 11:09:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwtHD-0003Cq-AW; Thu, 02 Aug 2012 11:09:11 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1SwtHB-0003Ci-TZ
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 11:09:10 +0000
Received: from [85.158.139.83:3542] by server-9.bemta-5.messagelabs.com id
	16/BF-01069-4DF5A105; Thu, 02 Aug 2012 11:09:08 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-11.tower-182.messagelabs.com!1343905748!22691910!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9602 invoked from network); 2 Aug 2012 11:09:08 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-11.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 2 Aug 2012 11:09:08 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1SwtH8-0003Dj-Lm; Thu, 02 Aug 2012 11:09:06 +0000
Date: Thu, 2 Aug 2012 12:09:06 +0100
From: Tim Deegan <tim@xen.org>
To: Ian Campbell <ian.campbell@citrix.com>
Message-ID: <20120802110906.GD11437@ocelot.phlegethon.org>
References: <1343839438-3321-1-git-send-email-ian.campbell@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1343839438-3321-1-git-send-email-ian.campbell@citrix.com>
User-Agent: Mutt/1.4.2.1i
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] arm: fix gic_init_secondary_cpu.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 16:43 +0000 on 01 Aug (1343839437), Ian Campbell wrote:
> Using spin_lock_irq here is unnecessary (interrupts are not yet enabled) and
> wrong (since they will get unexpectedly re-enabled by spin_unlock_irq).
> 
> We can just use spin_lock/spin_unlock.
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

Acked-by: Tim Deegan <tim@xen.org>

> Now an SMP model gets as far as hanging at dom0 "Calibrating delay loop...".
> 
> Tim, didn't you diagnose that a while ago?

I only got as far as showing that dom0 is spinning waiting for jiffies
to increase, but never taking the timer interrupts that would cause that
to happen.

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 11:11:39 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 11:11:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwtJ5-0003KM-UB; Thu, 02 Aug 2012 11:11:07 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1SwtJ4-0003KA-AP
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 11:11:06 +0000
Received: from [85.158.138.51:40173] by server-12.bemta-3.messagelabs.com id
	C6/43-15259-9406A105; Thu, 02 Aug 2012 11:11:05 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-6.tower-174.messagelabs.com!1343905864!22023170!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26692 invoked from network); 2 Aug 2012 11:11:05 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-6.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 2 Aug 2012 11:11:05 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1SwtJ0-0003Ef-2U; Thu, 02 Aug 2012 11:11:02 +0000
Date: Thu, 2 Aug 2012 12:11:02 +0100
From: Tim Deegan <tim@xen.org>
To: Santosh Jodh <Santosh.Jodh@citrix.com>
Message-ID: <20120802111102.GE11437@ocelot.phlegethon.org>
References: <7914B38A4445B34AA16EB9F1352942F1012F0CDD8A71@SJCPMAILBOX01.citrite.net>
	<20120719102326.GC75169@ocelot.phlegethon.org>
	<7914B38A4445B34AA16EB9F1352942F1012F0CDD9588@SJCPMAILBOX01.citrite.net>
	<20120719145318.GA78502@ocelot.phlegethon.org>
	<CAL54oT3yqLtDpEHe7n20K=i-hz27P-TYK11tm+VNL2BXMs+hMg@mail.gmail.com>
	<7914B38A4445B34AA16EB9F1352942F1012F0D841328@SJCPMAILBOX01.citrite.net>
	<20120724194150.GA68065@ocelot.phlegethon.org>
	<7914B38A4445B34AA16EB9F1352942F1012F0DE66157@SJCPMAILBOX01.citrite.net>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <7914B38A4445B34AA16EB9F1352942F1012F0DE66157@SJCPMAILBOX01.citrite.net>
User-Agent: Mutt/1.4.2.1i
Cc: "Nakajima, Jun" <jun.nakajima@intel.com>,
	"xiantao.zhang@intel.com" <xiantao.zhang@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Superpages for VT-D
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 13:44 -0700 on 31 Jul (1343742260), Santosh Jodh wrote:
> I am going to try to add this support. 
> 
> It looks like a new iommu_ops handler would be needed that would do
> the actual work of dumping the entries - one for AMD and one for
> Intel. Am I reading this correctly?

I think that's correct.

> Or is it better to get the root_table + paging_mode (for AMD) and
> pgd_maddr + agaw (for Intel) and then do a generic dump?

No; the IOMMU tables are sufficiently arch-specific that it's best to
add arch-specific dump routines.

Cheers,

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 11:19:34 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 11:19:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwtQi-0003ZW-Se; Thu, 02 Aug 2012 11:19:00 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SwtQh-0003ZR-1s
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 11:18:59 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1343906332!8781521!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18642 invoked from network); 2 Aug 2012 11:18:53 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 11:18:53 -0000
X-IronPort-AV: E=Sophos;i="4.77,699,1336348800"; d="scan'208";a="13820264"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 11:18:52 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 2 Aug 2012 12:18:52 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1SwtQa-0006KG-FY; Thu, 02 Aug 2012 11:18:52 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1SwtQa-0005zr-Cm;
	Thu, 02 Aug 2012 12:18:52 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20506.25116.384446.85569@mariner.uk.xensource.com>
Date: Thu, 2 Aug 2012 12:18:52 +0100
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1343902565.27221.117.camel@zakaz.uk.xensource.com>
References: <1343838260-17725-1-git-send-email-ian.jackson@eu.citrix.com>
	<1343838260-17725-3-git-send-email-ian.jackson@eu.citrix.com>
	<1343902565.27221.117.camel@zakaz.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 02/11] libxl: react correctly to bootloader
 pty master POLLHUP
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [Xen-devel] [PATCH 02/11] libxl: react correctly to bootloader pty master POLLHUP"):
> On Wed, 2012-08-01 at 17:24 +0100, Ian Jackson wrote:
> > Receiving POLLHUP on the bootloader master pty is not an error.
> > Hopefully it means that the bootloader has exited and therefore the
> > pty slave side has no process group any more.  (At least NetBSD
> > indicates POLLHUP on the master in this case.)
...
> > +static int datacopier_pollhup_handled(libxl__egc *egc,
> > +                                      libxl__datacopier_state *dc,
> > +                                      short revents, int onwrite)
> > +{
> > +    STATE_AO_GC(dc->ao);
> > +
> > +    if (dc->callback_pollhup && (revents & POLLHUP)) {
> > +        LOG(DEBUG, "received POLLHUP on %s during copy of %s",
> > +            onwrite ? dc->writewhat : dc->readwhat,
> > +            dc->copywhat);
> > +        libxl__datacopier_kill(dc);
> > +        dc->callback(egc, dc, onwrite, -1);
> 
> You've forgotten to make this ->callback_pollhup as discussed last time.

So I have.  And this didn't show up in my testing because the callers
all either set ->callback_pollhup==0 or ==->callback.

I have fixed this in my tree.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 11:19:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 11:19:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwtR1-0003aX-8d; Thu, 02 Aug 2012 11:19:19 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1SwtR0-0003aP-6O
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 11:19:18 +0000
Received: from [85.158.139.83:12232] by server-8.bemta-5.messagelabs.com id
	D8/8A-10278-5326A105; Thu, 02 Aug 2012 11:19:17 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-4.tower-182.messagelabs.com!1343906356!27206119!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14969 invoked from network); 2 Aug 2012 11:19:16 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-4.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 2 Aug 2012 11:19:16 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1SwtQx-0003G3-Qk; Thu, 02 Aug 2012 11:19:15 +0000
Date: Thu, 2 Aug 2012 12:19:15 +0100
From: Tim Deegan <tim@xen.org>
To: Christoph Egger <Christoph.Egger@amd.com>
Message-ID: <20120802111915.GF11437@ocelot.phlegethon.org>
References: <5017FBB0.7060500@amd.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <5017FBB0.7060500@amd.com>
User-Agent: Mutt/1.4.2.1i
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] nestedhvm: do not translate INVALID_GFN
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi,

At 17:37 +0200 on 31 Jul (1343756240), Christoph Egger wrote:
> Do not translate INVALID_GFN as l2 guest gfn into l1 guest gfn.

Why not?  l2 gfns don't have any special meaning that we can
dictate from inside Xen.

> Pass correct pfec for translation into l1 guest gfn.

This seems like a good idea, but probably should happen for all
entries, not just INVALID_GFN ones -- we shouldn't be returning a PFEC
to the guest that comes from translations outside his control.

How about this:

diff -r fdd4b7b36959 xen/arch/x86/mm/p2m.c
--- a/xen/arch/x86/mm/p2m.c	Thu Aug 02 12:04:31 2012 +0100
+++ b/xen/arch/x86/mm/p2m.c	Thu Aug 02 12:17:48 2012 +0100
@@ -1581,6 +1581,7 @@ unsigned long paging_gva_to_gfn(struct v
         unsigned long gfn;
         struct p2m_domain *p2m;
         const struct paging_mode *mode;
+        uint32_t pfec_21 = *pfec;
         uint64_t ncr3 = nhvm_vcpu_hostcr3(v);
 
         /* translate l2 guest va into l2 guest gfn */
@@ -1590,7 +1591,7 @@ unsigned long paging_gva_to_gfn(struct v
 
         /* translate l2 guest gfn into l1 guest gfn */
         return hostmode->p2m_ga_to_gfn(v, hostp2m, ncr3,
-                                       gfn << PAGE_SHIFT, pfec, NULL);
+                                       gfn << PAGE_SHIFT, &pfec_21, NULL);
     }
 
     return hostmode->gva_to_gfn(v, hostp2m, va, pfec);

Cheers,

Tim.

> Found with Hyper-V.
> 
> Signed-off-by: Christoph Egger <Christoph.Egger@amd.com>
> CC: Tim Deegan <tim@xen.org>
> 
> -- 
> ---to satisfy European Law for business letters:
> Advanced Micro Devices GmbH
> Einsteinring 24, 85689 Dornach b. Muenchen
> Geschaeftsfuehrer: Alberto Bozzo
> Sitz: Dornach, Gemeinde Aschheim, Landkreis Muenchen
> Registergericht Muenchen, HRB Nr. 43632

Content-Description: xen_p2m.diff
> diff -r 8330198c3240 xen/arch/x86/mm/p2m.c
> --- a/xen/arch/x86/mm/p2m.c	Fri Jul 27 12:24:03 2012 +0200
> +++ b/xen/arch/x86/mm/p2m.c	Tue Jul 31 16:49:54 2012 +0200
> @@ -1582,12 +1582,19 @@ unsigned long paging_gva_to_gfn(struct v
>          struct p2m_domain *p2m;
>          const struct paging_mode *mode;
>          uint64_t ncr3 = nhvm_vcpu_hostcr3(v);
> +        uint32_t pfec1 = *pfec;
>  
>          /* translate l2 guest va into l2 guest gfn */
>          p2m = p2m_get_nestedp2m(v, ncr3);
>          mode = paging_get_nestedmode(v);
>          gfn = mode->gva_to_gfn(v, p2m, va, pfec);
>  
> +        /* if l1 guest maps its mmio pages into the
> +         * l2 guest then we see this case here. */
> +        if (gfn == INVALID_GFN)
> +            return INVALID_GFN;
> +        *pfec = pfec1;
> +
>          /* translate l2 guest gfn into l1 guest gfn */
>          return hostmode->p2m_ga_to_gfn(v, hostp2m, ncr3,
>                                         gfn << PAGE_SHIFT, pfec, NULL);

> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 11:25:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 11:25:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwtW5-0003qj-1Z; Thu, 02 Aug 2012 11:24:33 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1SwtW3-0003qa-I3
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 11:24:31 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1343906661!11816509!1
X-Originating-IP: [74.125.83.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9205 invoked from network); 2 Aug 2012 11:24:22 -0000
Received: from mail-ee0-f45.google.com (HELO mail-ee0-f45.google.com)
	(74.125.83.45)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 11:24:22 -0000
Received: by eeke53 with SMTP id e53so2295387eek.32
	for <xen-devel@lists.xen.org>; Thu, 02 Aug 2012 04:24:21 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:user-agent:date:subject:from:to:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=djPXNuZnncgSteG11loYxvUZCOzDXwAFawEa43JYRxI=;
	b=n8Jwjvrz/gQAb0RBdp+n3SoMc7/kpZeKlGXnJW67xELqOP45hXMBINL2T6LHkCou+i
	QnxBtl/s7HKRXFu7KzIii8f15NWsMf4zTbOhi84Zlqo7wFmhTtTNuAxL+bYMkhpqfrlq
	Kbl3G3CWWOro27Zj0MytaMIpJ/+6mNdeayP7eoXWhsJHkWBNr2sED7G15SBSWI6+cRmX
	hbBHPbRat366oKIX6+NHIjMVZZePjChd19bb+v+VrHWzfeMHW+rRxfSLjMDvj03RkxeZ
	0fePldbf6nDSJgsuwMFFVIPp73ciDTDcdsu61I3hi3+5JUx7ZpUID1oh2sD3U/g0lzZ9
	WryQ==
Received: by 10.14.175.7 with SMTP id y7mr91991eel.29.1343906661689;
	Thu, 02 Aug 2012 04:24:21 -0700 (PDT)
Received: from [192.168.1.3] (host86-178-46-238.range86-178.btcentralplus.com.
	[86.178.46.238])
	by mx.google.com with ESMTPS id g46sm16459987eep.15.2012.08.02.04.24.20
	(version=SSLv3 cipher=OTHER); Thu, 02 Aug 2012 04:24:21 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.33.0.120411
Date: Thu, 02 Aug 2012 12:24:18 +0100
From: Keir Fraser <keir@xen.org>
To: Tim Deegan <tim@xen.org>,
	<xen-devel@lists.xen.org>
Message-ID: <CC4021F2.4794F%keir@xen.org>
Thread-Topic: [Xen-devel] [PATCH] xen: detect compiler version with
	'--version' rather than '-v'
Thread-Index: Ac1woVrbSeMJs11/zUqzkRjywBSnuQ==
In-Reply-To: <fdd4b7b36959492b3909.1343905513@whitby.uk.xensource.com>
Mime-version: 1.0
Subject: Re: [Xen-devel] [PATCH] xen: detect compiler version with
 '--version' rather than '-v'
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/08/2012 12:05, "Tim Deegan" <tim@xen.org> wrote:

> # HG changeset patch
> # User Tim Deegan <tim@xen.org>
> # Date 1343905471 -3600
> # Node ID fdd4b7b36959492b3909613867360d9993d420a7
> # Parent  3d17148e465ce87ddd8f555a001280348d848419
> xen: detect compiler version with '--version' rather than '-v'
> 
> This allows us to get rid of the 'grep version', which doesn't
> work with localized compilers.

Seems you've thought about it and tested it more than I ever did.

Acked-by: Keir Fraser <keir@xen.org>

> Signed-off-by: Tim Deegan <tim@xen.org>
> 
> diff -r 3d17148e465c -r fdd4b7b36959 xen/Makefile
> --- a/xen/Makefile Thu Aug 02 11:49:37 2012 +0200
> +++ b/xen/Makefile Thu Aug 02 12:04:31 2012 +0100
> @@ -103,7 +103,7 @@ include/xen/compile.h: include/xen/compi
>    -e 's/@@whoami@@/$(XEN_WHOAMI)/g' \
>    -e 's/@@domain@@/$(XEN_DOMAIN)/g' \
>    -e 's/@@hostname@@/$(shell hostname)/g' \
> -     -e 's!@@compiler@@!$(shell $(CC) $(CFLAGS) -v 2>&1 | grep version | tail
> -1)!g' \
> +     -e 's!@@compiler@@!$(shell $(CC) $(CFLAGS) --version 2>&1 | head -1)!g'
> \
>    -e 's/@@version@@/$(XEN_VERSION)/g' \
>    -e 's/@@subversion@@/$(XEN_SUBVERSION)/g' \
>    -e 's/@@extraversion@@/$(XEN_EXTRAVERSION)/g' \
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
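The difference the patch fixes is easy to demonstrate with a simulated banner; the strings below are invented for illustration (a localized compiler's `-v` output need not contain the English word "version" at all), not real compiler output:

```shell
# A banner as a localized compiler might print it -- note there is
# no literal "version" anywhere in it.
banner='gcc (GCC) 4.7.1
Copyright (C) 2012 Free Software Foundation, Inc.'

# Old detection: greps for the literal word "version", so it finds nothing.
old=$(printf '%s\n' "$banner" | grep version | tail -1)

# New detection: the first line of --version output is the banner itself,
# regardless of locale.
new=$(printf '%s\n' "$banner" | head -1)

printf 'old=[%s]\nnew=[%s]\n' "$old" "$new"
```

With a real compiler the same pipelines apply to `$(CC) $(CFLAGS) --version 2>&1`, as in the patched Makefile rule.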



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 11:29:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 11:29:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwtaY-00041n-TM; Thu, 02 Aug 2012 11:29:10 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Christoph.Egger@amd.com>) id 1SwtaX-00041f-Ie
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 11:29:09 +0000
Received: from [85.158.138.51:8469] by server-10.bemta-3.messagelabs.com id
	4E/08-21993-4846A105; Thu, 02 Aug 2012 11:29:08 +0000
X-Env-Sender: Christoph.Egger@amd.com
X-Msg-Ref: server-6.tower-174.messagelabs.com!1343906948!22026515!1
X-Originating-IP: [213.199.154.207]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32245 invoked from network); 2 Aug 2012 11:29:08 -0000
Received: from am1ehsobe004.messaging.microsoft.com (HELO
	am1outboundpool.messaging.microsoft.com) (213.199.154.207)
	by server-6.tower-174.messagelabs.com with AES128-SHA encrypted SMTP;
	2 Aug 2012 11:29:08 -0000
Received: from mail76-am1-R.bigfish.com (10.3.201.253) by
	AM1EHSOBE008.bigfish.com (10.3.204.28) with Microsoft SMTP Server id
	14.1.225.23; Thu, 2 Aug 2012 11:29:07 +0000
Received: from mail76-am1 (localhost [127.0.0.1])	by mail76-am1-R.bigfish.com
	(Postfix) with ESMTP id DC89D2A02AA;
	Thu,  2 Aug 2012 11:29:07 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:163.181.249.109; KIP:(null); UIP:(null);
	IPV:NLI; H:ausb3twp02.amd.com; RD:none; EFVD:NLI
X-SpamScore: -3
X-BigFish: VPS-3(zzbb2dI98dI1432Izz1202hzz8275bh8275dhz2dh668h839hd25he5bhf0ah107ah)
Received: from mail76-am1 (localhost.localdomain [127.0.0.1]) by mail76-am1
	(MessageSwitch) id 1343906946449314_4042;
	Thu,  2 Aug 2012 11:29:06 +0000 (UTC)
Received: from AM1EHSMHS013.bigfish.com (unknown [10.3.201.243])	by
	mail76-am1.bigfish.com (Postfix) with ESMTP id 69F6E40046;
	Thu,  2 Aug 2012 11:29:06 +0000 (UTC)
Received: from ausb3twp02.amd.com (163.181.249.109) by
	AM1EHSMHS013.bigfish.com (10.3.207.151) with Microsoft SMTP Server id
	14.1.225.23; Thu, 2 Aug 2012 11:29:05 +0000
X-WSS-ID: 0M84L8D-02-6NC-02
X-M-MSG: 
Received: from sausexedgep02.amd.com (sausexedgep02-ext.amd.com
	[163.181.249.73])	(using TLSv1 with cipher AES128-SHA (128/128
	bits))	(No
	client certificate requested)	by ausb3twp02.amd.com (Axway MailGate
	3.8.1)
	with ESMTP id 2F7A9C80F1;	Thu,  2 Aug 2012 06:29:01 -0500 (CDT)
Received: from SAUSEXDAG06.amd.com (163.181.55.7) by sausexedgep02.amd.com
	(163.181.36.59) with Microsoft SMTP Server (TLS) id 8.3.192.1;
	Thu, 2 Aug 2012 06:29:19 -0500
Received: from storexhtp01.amd.com (172.24.4.3) by sausexdag06.amd.com
	(163.181.55.7) with Microsoft SMTP Server (TLS) id 14.1.323.3;
	Thu, 2 Aug 2012 06:29:03 -0500
Received: from rhodium.osrc.amd.com (165.204.15.173) by storexhtp01.amd.com
	(172.24.4.3) with Microsoft SMTP Server id 8.3.213.0; Thu, 2 Aug 2012
	07:29:02 -0400
Message-ID: <501A6478.8070605@amd.com>
Date: Thu, 2 Aug 2012 13:28:56 +0200
From: Christoph Egger <Christoph.Egger@amd.com>
User-Agent: Mozilla/5.0 (X11; NetBSD amd64;
	rv:11.0) Gecko/20120404 Thunderbird/11.0
MIME-Version: 1.0
To: Tim Deegan <tim@xen.org>
References: <5017FBB0.7060500@amd.com>
	<20120802111915.GF11437@ocelot.phlegethon.org>
In-Reply-To: <20120802111915.GF11437@ocelot.phlegethon.org>
X-OriginatorOrg: amd.com
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] nestedhvm: do not translate INVALID_GFN
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/02/12 13:19, Tim Deegan wrote:

> Hi,
> 
> At 17:37 +0200 on 31 Jul (1343756240), Christoph Egger wrote:
>> Do not translate INVALID_GFN as l2 guest gfn into l1 guest gfn.
> 
> Why not?  l2 gfns don't have any special meaning that we can
> dictate from inside Xen.
> 
>> Pass correct pfec for translation into l1 guest gfn.
> 
> This seems like a good idea, but probably should happen for all
> entries, not just INVALID_GFN ones -- we shouldn't be returning a PFEC
> to the guest that comes from translations outside his control.
> 
> How about this:
> 
> diff -r fdd4b7b36959 xen/arch/x86/mm/p2m.c
> --- a/xen/arch/x86/mm/p2m.c	Thu Aug 02 12:04:31 2012 +0100
> +++ b/xen/arch/x86/mm/p2m.c	Thu Aug 02 12:17:48 2012 +0100
> @@ -1581,6 +1581,7 @@ unsigned long paging_gva_to_gfn(struct v
>          unsigned long gfn;
>          struct p2m_domain *p2m;
>          const struct paging_mode *mode;
> +        uint32_t pfec_21 = *pfec;
>          uint64_t ncr3 = nhvm_vcpu_hostcr3(v);
>  
>          /* translate l2 guest va into l2 guest gfn */
> @@ -1590,7 +1591,7 @@ unsigned long paging_gva_to_gfn(struct v
>  
>          /* translate l2 guest gfn into l1 guest gfn */
>          return hostmode->p2m_ga_to_gfn(v, hostp2m, ncr3,
> -                                       gfn << PAGE_SHIFT, pfec, NULL);
> +                                       gfn << PAGE_SHIFT, &pfec_21, NULL);


The caller will see the value of pfec and not the one from pfec_21.
If this is what the caller expects then this is fine with me.

Christoph

>      }
>  
>      return hostmode->gva_to_gfn(v, hostp2m, va, pfec);
> 
> Cheers,
> 
> Tim.
> 
>> Found with Hyper-V.
>>
>> Signed-off-by: Christoph Egger <Christoph.Egger@amd.com>
>> CC: Tim Deegan <tim@xen.org>
>>
>> -- 
>> ---to satisfy European Law for business letters:
>> Advanced Micro Devices GmbH
>> Einsteinring 24, 85689 Dornach b. Muenchen
>> Geschaeftsfuehrer: Alberto Bozzo
>> Sitz: Dornach, Gemeinde Aschheim, Landkreis Muenchen
>> Registergericht Muenchen, HRB Nr. 43632
> 
> Content-Description: xen_p2m.diff
>> diff -r 8330198c3240 xen/arch/x86/mm/p2m.c
>> --- a/xen/arch/x86/mm/p2m.c	Fri Jul 27 12:24:03 2012 +0200
>> +++ b/xen/arch/x86/mm/p2m.c	Tue Jul 31 16:49:54 2012 +0200
>> @@ -1582,12 +1582,19 @@ unsigned long paging_gva_to_gfn(struct v
>>          struct p2m_domain *p2m;
>>          const struct paging_mode *mode;
>>          uint64_t ncr3 = nhvm_vcpu_hostcr3(v);
>> +        uint32_t pfec1 = *pfec;
>>  
>>          /* translate l2 guest va into l2 guest gfn */
>>          p2m = p2m_get_nestedp2m(v, ncr3);
>>          mode = paging_get_nestedmode(v);
>>          gfn = mode->gva_to_gfn(v, p2m, va, pfec);
>>  
>> +        /* if l1 guest maps its mmio pages into the
>> +         * l2 guest then we see this case here. */
>> +        if (gfn == INVALID_GFN)
>> +            return INVALID_GFN;
>> +        *pfec = pfec1;
>> +
>>          /* translate l2 guest gfn into l1 guest gfn */
>>          return hostmode->p2m_ga_to_gfn(v, hostp2m, ncr3,
>>                                         gfn << PAGE_SHIFT, pfec, NULL);
> 
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel
> 
> 



-- 
---to satisfy European Law for business letters:
Advanced Micro Devices GmbH
Einsteinring 24, 85689 Dornach b. Muenchen
Geschaeftsfuehrer: Alberto Bozzo
Sitz: Dornach, Gemeinde Aschheim, Landkreis Muenchen
Registergericht Muenchen, HRB Nr. 43632


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 11:35:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 11:35:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwtgN-0004HY-O8; Thu, 02 Aug 2012 11:35:11 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1SwtgM-0004Gt-MX
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 11:35:10 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-4.tower-27.messagelabs.com!1343907301!8784629!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15923 invoked from network); 2 Aug 2012 11:35:01 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-4.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 2 Aug 2012 11:35:01 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1SwtgC-0003Jc-88; Thu, 02 Aug 2012 11:35:00 +0000
Date: Thu, 2 Aug 2012 12:35:00 +0100
From: Tim Deegan <tim@xen.org>
To: Christoph Egger <Christoph.Egger@amd.com>
Message-ID: <20120802113500.GG11437@ocelot.phlegethon.org>
References: <5017FBB0.7060500@amd.com>
	<20120802111915.GF11437@ocelot.phlegethon.org>
	<501A6478.8070605@amd.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <501A6478.8070605@amd.com>
User-Agent: Mutt/1.4.2.1i
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] nestedhvm: do not translate INVALID_GFN
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 13:28 +0200 on 02 Aug (1343914136), Christoph Egger wrote:
> On 08/02/12 13:19, Tim Deegan wrote:
> 
> > Hi,
> > 
> > At 17:37 +0200 on 31 Jul (1343756240), Christoph Egger wrote:
> >> Do not translate INVALID_GFN as l2 guest gfn into l1 guest gfn.
> > 
> > Why not?  l2 gfns don't have any special meaning that we can
> > dictate from inside Xen.
> > 
> >> Pass correct pfec for translation into l1 guest gfn.
> > 
> > This seems like a good idea, but probably should happen for all
> > entries, not just INVALID_GFN ones -- we shouldn't be returning a PFEC
> > to the guest that comes from translations outside his control.
> > 
> > How about this:
> > 
> > diff -r fdd4b7b36959 xen/arch/x86/mm/p2m.c
> > --- a/xen/arch/x86/mm/p2m.c	Thu Aug 02 12:04:31 2012 +0100
> > +++ b/xen/arch/x86/mm/p2m.c	Thu Aug 02 12:17:48 2012 +0100
> > @@ -1581,6 +1581,7 @@ unsigned long paging_gva_to_gfn(struct v
> >          unsigned long gfn;
> >          struct p2m_domain *p2m;
> >          const struct paging_mode *mode;
> > +        uint32_t pfec_21 = *pfec;
> >          uint64_t ncr3 = nhvm_vcpu_hostcr3(v);
> >  
> >          /* translate l2 guest va into l2 guest gfn */
> > @@ -1590,7 +1591,7 @@ unsigned long paging_gva_to_gfn(struct v
> >  
> >          /* translate l2 guest gfn into l1 guest gfn */
> >          return hostmode->p2m_ga_to_gfn(v, hostp2m, ncr3,
> > -                                       gfn << PAGE_SHIFT, pfec, NULL);
> > +                                       gfn << PAGE_SHIFT, &pfec_21, NULL);
> 
> 
> The caller will see the value of pfec and not the one from pfec_21.
> If this is what the caller expects then this is fine with me.

Yes, I think that is what the caller expects -- the error code is made
up from the pagetable walk rather than from the p2m table.

Can I take that as an ack?

And more importantly, does it fix the Hyper-V problem you encountered?

Cheers,

Tim

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 11:45:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 11:45:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwtpV-0004Tm-Qd; Thu, 02 Aug 2012 11:44:37 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@eu.citrix.com>) id 1SwtpU-0004Th-Ob
	for Xen-devel@lists.xensource.com; Thu, 02 Aug 2012 11:44:36 +0000
Received: from [85.158.143.35:53660] by server-3.bemta-4.messagelabs.com id
	26/A5-01511-4286A105; Thu, 02 Aug 2012 11:44:36 +0000
X-Env-Sender: George.Dunlap@eu.citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1343907867!5639859!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNjkwMjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31287 invoked from network); 2 Aug 2012 11:44:29 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 11:44:29 -0000
X-IronPort-AV: E=Sophos;i="4.77,699,1336363200"; d="scan'208";a="203918447"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 07:44:27 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 2 Aug 2012 07:44:27 -0400
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1SwtpK-0007n5-OK;
	Thu, 02 Aug 2012 12:44:26 +0100
Message-ID: <501A675F.4010804@eu.citrix.com>
Date: Thu, 2 Aug 2012 12:41:19 +0100
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120714 Thunderbird/14.0
MIME-Version: 1.0
To: Matt Wilson <msw@amazon.com>
References: <20120626181707.4203d336@mantra.us.oracle.com>
	<CAFLBxZagm=53bZ=KaWmd7XQkkAduD+znECJRsaJUqrGJDZgkQQ@mail.gmail.com>
	<20120801152508.GA7132@phenom.dumpdata.com>
	<CAFLBxZb=yzcmxewCei49M6DX16x7eKZRGmxMUDbaKOTXgGZCGA@mail.gmail.com>
	<20120801160515.GA16155@phenom.dumpdata.com>
	<CAFLBxZYXOiWhAni3X23O62DbbigzFECMbvpUFnGs38y12h2V0g@mail.gmail.com>
	<20120801165359.GF8228@US-SEA-R8XVZTX>
In-Reply-To: <20120801165359.GF8228@US-SEA-R8XVZTX>
Cc: Ian Campbell <Ian.Campbell@citrix.com>, Keir Fraser <keir.xen@gmail.com>,
	"Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] HYBRID naming [Was: Re: [HYBRID]: status update...]
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/08/12 17:53, Matt Wilson wrote:
> On Wed, Aug 01, 2012 at 09:21:57AM -0700, George Dunlap wrote:
>> Hmm -- that's an interesting issue I hadn't thought of. "PVHVM" has 
>> already been sort of taken by Stefano's extensions to allow Linux 
>> kernels booted in HVM mode to use some of the PV extensions. I tend 
>> to think "xen_pvh_domain()" is probably OK, but maybe calling it 
>> "pvext" (or "pvhext") in the code, and "PVH" in documentation / 
>> stories? Just using "pvext" everywhere could work as well; it's a 
>> little bit "now even better!", but not as much as pvplus. 
> How about HAPV, for "Hardware Assisted Paravirtualization"? It's
> nicely pronounceable as "hap-vee" and follows the general
> "hardware-assisted paging" (HAP) Xen terminology that spans both Intel
> EPT and AMD RVI. 'if (xen_hapv_domain())'
Wouldn't "HAPV" make people think more of HAP than of PV?  It seems like 
it would be much more confusing to distinguish "hapv" from "hap" than 
"pvh" from "pv". :-)

  -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 12:18:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 12:18:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwuM3-0004xY-Gt; Thu, 02 Aug 2012 12:18:15 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Christoph.Egger@amd.com>) id 1SwuM2-0004xQ-2O
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 12:18:14 +0000
X-Env-Sender: Christoph.Egger@amd.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1343909885!8792823!1
X-Originating-IP: [216.32.180.187]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5057 invoked from network); 2 Aug 2012 12:18:06 -0000
Received: from co1ehsobe004.messaging.microsoft.com (HELO
	co1outboundpool.messaging.microsoft.com) (216.32.180.187)
	by server-4.tower-27.messagelabs.com with AES128-SHA encrypted SMTP;
	2 Aug 2012 12:18:06 -0000
Received: from mail142-co1-R.bigfish.com (10.243.78.227) by
	CO1EHSOBE014.bigfish.com (10.243.66.77) with Microsoft SMTP Server id
	14.1.225.23; Thu, 2 Aug 2012 12:18:04 +0000
Received: from mail142-co1 (localhost [127.0.0.1])	by
	mail142-co1-R.bigfish.com (Postfix) with ESMTP id 785C2880181;
	Thu,  2 Aug 2012 12:14:58 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:163.181.249.109; KIP:(null); UIP:(null);
	IPV:NLI; H:ausb3twp02.amd.com; RD:none; EFVD:NLI
X-SpamScore: -4
X-BigFish: VPS-4(zzbb2dI98dI1432I1418Izz1202hzzz2dh668h839hd25he5bhf0ah107ah)
Received: from mail142-co1 (localhost.localdomain [127.0.0.1]) by mail142-co1
	(MessageSwitch) id 1343909696285711_24413;
	Thu,  2 Aug 2012 12:14:56 +0000 (UTC)
Received: from CO1EHSMHS006.bigfish.com (unknown [10.243.78.241])	by
	mail142-co1.bigfish.com (Postfix) with ESMTP id 3A0B1C40083;
	Thu,  2 Aug 2012 12:14:56 +0000 (UTC)
Received: from ausb3twp02.amd.com (163.181.249.109) by
	CO1EHSMHS006.bigfish.com (10.243.66.16) with Microsoft SMTP Server id
	14.1.225.23; Thu, 2 Aug 2012 12:14:55 +0000
X-WSS-ID: 0M84NCS-02-8QS-02
X-M-MSG: 
Received: from sausexedgep01.amd.com (sausexedgep01-ext.amd.com
	[163.181.249.72])	(using TLSv1 with cipher AES128-SHA (128/128
	bits))	(No
	client certificate requested)	by ausb3twp02.amd.com (Axway MailGate
	3.8.1)
	with ESMTP id 2C4EBC80E8;	Thu,  2 Aug 2012 07:14:51 -0500 (CDT)
Received: from SAUSEXDAG05.amd.com (163.181.55.6) by sausexedgep01.amd.com
	(163.181.36.54) with Microsoft SMTP Server (TLS) id 8.3.192.1;
	Thu, 2 Aug 2012 07:15:10 -0500
Received: from storexhtp01.amd.com (172.24.4.3) by sausexdag05.amd.com
	(163.181.55.6) with Microsoft SMTP Server (TLS) id 14.1.323.3;
	Thu, 2 Aug 2012 07:14:53 -0500
Received: from rhodium.osrc.amd.com (165.204.15.173) by storexhtp01.amd.com
	(172.24.4.3) with Microsoft SMTP Server id 8.3.213.0; Thu, 2 Aug 2012
	08:14:53 -0400
Message-ID: <501A6F3B.9060102@amd.com>
Date: Thu, 2 Aug 2012 14:14:51 +0200
From: Christoph Egger <Christoph.Egger@amd.com>
User-Agent: Mozilla/5.0 (X11; NetBSD amd64;
	rv:11.0) Gecko/20120404 Thunderbird/11.0
MIME-Version: 1.0
To: Tim Deegan <tim@xen.org>
References: <5017FBB0.7060500@amd.com>
	<20120802111915.GF11437@ocelot.phlegethon.org>
	<501A6478.8070605@amd.com>
	<20120802113500.GG11437@ocelot.phlegethon.org>
In-Reply-To: <20120802113500.GG11437@ocelot.phlegethon.org>
X-OriginatorOrg: amd.com
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] nestedhvm: do not translate INVALID_GFN
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/02/12 13:35, Tim Deegan wrote:

> At 13:28 +0200 on 02 Aug (1343914136), Christoph Egger wrote:
>> On 08/02/12 13:19, Tim Deegan wrote:
>>
>>> Hi,
>>>
>>> At 17:37 +0200 on 31 Jul (1343756240), Christoph Egger wrote:
>>>> Do not translate INVALID_GFN as l2 guest gfn into l1 guest gfn.
>>>
>>> Why not?  l2 gfns don't have any special meaning that we can
>>> dictate from inside Xen.
>>>
>>>> Pass correct pfec for translation into l1 guest gfn.
>>>
>>> This seems like a good idea, but probably should happen for all
>>> entries, not just INVALID_GFN ones -- we shouldn't be returning a PFEC
>>> to the guest that comes from translations outside his control.
>>>
>>> How about this:
>>>
>>> diff -r fdd4b7b36959 xen/arch/x86/mm/p2m.c
>>> --- a/xen/arch/x86/mm/p2m.c	Thu Aug 02 12:04:31 2012 +0100
>>> +++ b/xen/arch/x86/mm/p2m.c	Thu Aug 02 12:17:48 2012 +0100
>>> @@ -1581,6 +1581,7 @@ unsigned long paging_gva_to_gfn(struct v
>>>          unsigned long gfn;
>>>          struct p2m_domain *p2m;
>>>          const struct paging_mode *mode;
>>> +        uint32_t pfec_21 = *pfec;
>>>          uint64_t ncr3 = nhvm_vcpu_hostcr3(v);
>>>  
>>>          /* translate l2 guest va into l2 guest gfn */
>>> @@ -1590,7 +1591,7 @@ unsigned long paging_gva_to_gfn(struct v
>>>  
>>>          /* translate l2 guest gfn into l1 guest gfn */
>>>          return hostmode->p2m_ga_to_gfn(v, hostp2m, ncr3,
>>> -                                       gfn << PAGE_SHIFT, pfec, NULL);
>>> +                                       gfn << PAGE_SHIFT, &pfec_21, NULL);
>>
>>
>> The caller will see the return value of pfec and not from pfec_21.
>> If this is what the caller expects then this is fine with me.
> 
> Yes, I think that is what the caller expects -- the error code is made
> up from the pagetable walk rather than from the p2m table.
> 
> Can I take that as an ack?

Yes.

> And more importantly, does it fix the Hyper-V problem you encountered?

The one you mean is covered with the other patch.
But I found this with Hyper-V when doing MMIO accesses.

Christoph


-- 
---to satisfy European Law for business letters:
Advanced Micro Devices GmbH
Einsteinring 24, 85689 Dornach b. Muenchen
Geschaeftsfuehrer: Alberto Bozzo
Sitz: Dornach, Gemeinde Aschheim, Landkreis Muenchen
Registergericht Muenchen, HRB Nr. 43632


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 12:19:42 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 12:19:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwuMq-0004zw-UT; Thu, 02 Aug 2012 12:19:04 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <rshriram@cs.ubc.ca>) id 1SwuMp-0004zX-2O
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 12:19:03 +0000
X-Env-Sender: rshriram@cs.ubc.ca
X-Msg-Ref: server-3.tower-27.messagelabs.com!1343909934!10509593!1
X-Originating-IP: [142.103.6.52]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24676 invoked from network); 2 Aug 2012 12:18:56 -0000
Received: from smtp.cs.ubc.ca (HELO smtp.cs.ubc.ca) (142.103.6.52)
	by server-3.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 2 Aug 2012 12:18:56 -0000
Received: from mail-ob0-f173.google.com (mail-ob0-f173.google.com
	[209.85.214.173]) (authenticated bits=0)
	by smtp.cs.ubc.ca (8.14.3/8.13.6) with ESMTP id q72CIqEt018970
	(version=TLSv1/SSLv3 cipher=RC4-SHA bits=128 verify=FAIL)
	for <xen-devel@lists.xen.org>; Thu, 2 Aug 2012 05:18:53 -0700
Received: by obbta14 with SMTP id ta14so17380498obb.32
	for <xen-devel@lists.xen.org>; Thu, 02 Aug 2012 05:18:51 -0700 (PDT)
Received: by 10.182.47.9 with SMTP id z9mr36126166obm.58.1343909931667; Thu,
	02 Aug 2012 05:18:51 -0700 (PDT)
MIME-Version: 1.0
Received: by 10.76.141.165 with HTTP; Thu, 2 Aug 2012 05:18:11 -0700 (PDT)
In-Reply-To: <1343903030.27221.124.camel@zakaz.uk.xensource.com>
References: <1343838260-17725-1-git-send-email-ian.jackson@eu.citrix.com>
	<1343838260-17725-10-git-send-email-ian.jackson@eu.citrix.com>
	<1343903030.27221.124.camel@zakaz.uk.xensource.com>
From: Shriram Rajagopalan <rshriram@cs.ubc.ca>
Date: Thu, 2 Aug 2012 08:18:11 -0400
Message-ID: <CAP8mzPNgh+q59j-eZwHDHwWa4VURiD2dsxAv2OFqyAhNjcxDeg@mail.gmail.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 09/11] libxl: remus: mark TODOs more clearly
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: rshriram@cs.ubc.ca
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5425414441761270241=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============5425414441761270241==
Content-Type: multipart/alternative; boundary=14dae939982516521104c6476ae9

--14dae939982516521104c6476ae9
Content-Type: text/plain; charset=ISO-8859-1

On Thu, Aug 2, 2012 at 6:23 AM, Ian Campbell <Ian.Campbell@citrix.com>wrote:

> On Wed, 2012-08-01 at 17:24 +0100, Ian Jackson wrote:
> > Change the TODOs in the remus code to "REMUS TODO" which will make
> > them easier to grep for later.  AIUI all of these are essential for
> > use of remus in production.
> >
> > Also add a new TODO and a new assert, to check rc on entry to
> > remus_checkpoint_dm_saved.
>
> CCing Shriram in the hopes of getting some actual code here (for 4.3 most
> likely).
>
> >
> > Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
>
> Acked-by: Ian Campbell <ian.campbell@citrix.com>
>
>
Thanks, I saw this yesterday.

shriram

> > ---
> >  tools/libxl/libxl_dom.c |    9 +++++----
> >  1 files changed, 5 insertions(+), 4 deletions(-)
> >
> > diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
> > index d749983..06d5e4f 100644
> > --- a/tools/libxl/libxl_dom.c
> > +++ b/tools/libxl/libxl_dom.c
> > @@ -1110,7 +1110,7 @@ int libxl__toolstack_save(uint32_t domid, uint8_t
> **buf,
> >
> >  static int libxl__remus_domain_suspend_callback(void *data)
> >  {
> > -    /* TODO: Issue disk and network checkpoint reqs. */
> > +    /* REMUS TODO: Issue disk and network checkpoint reqs. */
> >      return libxl__domain_suspend_common_callback(data);
> >  }
> >
> > @@ -1124,7 +1124,7 @@ static int
> libxl__remus_domain_resume_callback(void *data)
> >      if (libxl_domain_resume(CTX, dss->domid, /* Fast Suspend */1))
> >          return 0;
> >
> > -    /* TODO: Deal with disk. Start a new network output buffer */
> > +    /* REMUS TODO: Deal with disk. Start a new network output buffer */
> >      return 1;
> >  }
> >
> > @@ -1151,8 +1151,9 @@ static void
> libxl__remus_domain_checkpoint_callback(void *data)
> >  static void remus_checkpoint_dm_saved(libxl__egc *egc,
> >                                        libxl__domain_suspend_state *dss,
> int rc)
> >  {
> > -    /* TODO: Wait for disk and memory ack, release network buffer */
> > -    /* TODO: make this asynchronous */
> > +    /* REMUS TODO: Wait for disk and memory ack, release network buffer
> */
> > +    /* REMUS TODO: make this asynchronous */
> > +    assert(!rc); /* REMUS TODO handle this error properly */
> >      usleep(dss->interval * 1000);
> >      libxl__xc_domain_saverestore_async_callback_done(egc, &dss->shs, 1);
> >  }
>
>
>

--14dae939982516521104c6476ae9--


--===============5425414441761270241==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5425414441761270241==--


From xen-devel-bounces@lists.xen.org Thu Aug 02 12:33:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 12:33:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwuaK-0005I5-9r; Thu, 02 Aug 2012 12:33:00 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Christoph.Egger@amd.com>) id 1SwuaI-0005I0-KK
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 12:32:58 +0000
Received: from [85.158.143.99:20760] by server-3.bemta-4.messagelabs.com id
	75/18-01511-9737A105; Thu, 02 Aug 2012 12:32:57 +0000
X-Env-Sender: Christoph.Egger@amd.com
X-Msg-Ref: server-13.tower-216.messagelabs.com!1343910775!29032770!1
X-Originating-IP: [216.32.180.16]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7350 invoked from network); 2 Aug 2012 12:32:56 -0000
Received: from va3ehsobe006.messaging.microsoft.com (HELO
	va3outboundpool.messaging.microsoft.com) (216.32.180.16)
	by server-13.tower-216.messagelabs.com with AES128-SHA encrypted SMTP;
	2 Aug 2012 12:32:56 -0000
Received: from mail243-va3-R.bigfish.com (10.7.14.241) by
	VA3EHSOBE004.bigfish.com (10.7.40.24) with Microsoft SMTP Server id
	14.1.225.23; Thu, 2 Aug 2012 12:32:55 +0000
Received: from mail243-va3 (localhost [127.0.0.1])	by
	mail243-va3-R.bigfish.com (Postfix) with ESMTP id 71EB5A400BD;
	Thu,  2 Aug 2012 12:32:55 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:163.181.249.108; KIP:(null); UIP:(null);
	IPV:NLI; H:ausb3twp01.amd.com; RD:none; EFVD:NLI
X-SpamScore: 11
X-BigFish: VPS11(zzbb2dI98dIc85fh1102Ic85dh1432Izz1202hzzz2dh668h839hd25he5bhf0ah107ah133w34h)
Received: from mail243-va3 (localhost.localdomain [127.0.0.1]) by mail243-va3
	(MessageSwitch) id 1343910773437687_11926;
	Thu,  2 Aug 2012 12:32:53 +0000 (UTC)
Received: from VA3EHSMHS007.bigfish.com (unknown [10.7.14.253])	by
	mail243-va3.bigfish.com (Postfix) with ESMTP id 666DE180045;
	Thu,  2 Aug 2012 12:32:53 +0000 (UTC)
Received: from ausb3twp01.amd.com (163.181.249.108) by
	VA3EHSMHS007.bigfish.com (10.7.99.17) with Microsoft SMTP Server id
	14.1.225.23; Thu, 2 Aug 2012 12:32:49 +0000
X-WSS-ID: 0M84O6N-01-5DT-02
X-M-MSG: 
Received: from sausexedgep02.amd.com (sausexedgep02-ext.amd.com
	[163.181.249.73])	(using TLSv1 with cipher AES128-SHA (128/128
	bits))	(No
	client certificate requested)	by ausb3twp01.amd.com (Axway MailGate
	3.8.1)
	with ESMTP id 2202110280BA;	Thu,  2 Aug 2012 07:32:47 -0500 (CDT)
Received: from sausexhtp01.amd.com (163.181.3.165) by sausexedgep02.amd.com
	(163.181.36.59) with Microsoft SMTP Server (TLS) id 8.3.192.1;
	Thu, 2 Aug 2012 07:33:04 -0500
Received: from storexhtp02.amd.com (172.24.4.4) by sausexhtp01.amd.com
	(163.181.3.165) with Microsoft SMTP Server (TLS) id 8.3.213.0;
	Thu, 2 Aug 2012 07:32:47 -0500
Received: from rhodium.osrc.amd.com (165.204.15.173) by storexhtp02.amd.com
	(172.24.4.4) with Microsoft SMTP Server id 8.3.213.0; Thu, 2 Aug 2012
	08:32:46 -0400
Message-ID: <501A736A.7080300@amd.com>
Date: Thu, 2 Aug 2012 14:32:42 +0200
From: Christoph Egger <Christoph.Egger@amd.com>
User-Agent: Mozilla/5.0 (X11; NetBSD amd64;
	rv:11.0) Gecko/20120404 Thunderbird/11.0
MIME-Version: 1.0
To: Tim Deegan <tim@xen.org>
References: <5008166B.6010603@amd.com>
	<20120726182111.GB4135@ocelot.phlegethon.org>
	<5012822E.2030603@amd.com>
	<20120802104524.GA11437@ocelot.phlegethon.org>
In-Reply-To: <20120802104524.GA11437@ocelot.phlegethon.org>
Content-Type: multipart/mixed; boundary="------------020502030609020508090403"
X-OriginatorOrg: amd.com
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] nestedhvm: fix write access fault on ro
	mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--------------020502030609020508090403
Content-Type: text/plain; charset="ISO-8859-1"
Content-Transfer-Encoding: 7bit

On 08/02/12 12:45, Tim Deegan wrote:

> At 13:57 +0200 on 27 Jul (1343397454), Christoph Egger wrote:
>>>> @@ -1291,6 +1291,8 @@ int hvm_hap_nested_page_fault(unsigned l
>>>>              if ( !handle_mmio() )
>>>>                  hvm_inject_hw_exception(TRAP_gp_fault, 0);
>>>>              return 1;
>>>> +        case NESTEDHVM_PAGEFAULT_READONLY:
>>>> +            break;
>>>
>>> Don't we have to translate the faulting PA into an L1 address before
>>> letting the rest of this fault handler run?  It explicitly operates on
>>> the hostp2m.  
>>>
>>> If we do that, we should probably do it for NESTEDHVM_PAGEFAULT_ERROR,
>>> rather than special-casing READONLY.  That way any other
>>> automatically-fixed types (like the p2m_access magic) will be covered
>>> too.
>>
>> How do you differentiate if the error happened from walking l1 npt or
>> host npt ?
>> In the first case it isn't possible to provide l1 address.
> 
> It must be _possible_; after all we managed to detect the error. :)  In
> any case it's definitely wrong to carry on with this handler with the
> wrong address in hand.  So I wonder why this patch actually works for
> you.  Does replacing the 'break' above with 'return 1' also fix the
> problem?


No. Two things have to happen:

1. Calling paging_mark_dirty() and
2. using the same p2mt from the hostp2m in the nestedp2m.

>

> In the short term, do you only care about pages that are read-only for
> log-dirty tracking?  For the L1 walk, that should be handled by the PT
> walker's own calls to paging_mark_dirty(), and the nested-p2m handler
> could potentially take care of the other case by calling
> paging_mark_dirty() (for writes!) before calling nestedhap_walk_L0_p2m().


OK, I consider this a performance improvement rather than a bugfix.

New version is attached.

Christoph

-- 
---to satisfy European Law for business letters:
Advanced Micro Devices GmbH
Einsteinring 24, 85689 Dornach b. Muenchen
Geschaeftsfuehrer: Alberto Bozzo
Sitz: Dornach, Gemeinde Aschheim, Landkreis Muenchen
Registergericht Muenchen, HRB Nr. 43632

--------------020502030609020508090403
Content-Type: text/plain; charset="us-ascii"; name="xen_nh_p2m.diff"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="xen_nh_p2m.diff"
Content-Description: xen_nh_p2m.diff

diff -r 8330198c3240 xen/arch/x86/hvm/hvm.c
--- a/xen/arch/x86/hvm/hvm.c	Fri Jul 27 12:24:03 2012 +0200
+++ b/xen/arch/x86/hvm/hvm.c	Thu Aug 02 13:42:15 2012 +0200
@@ -1278,12 +1278,14 @@ int hvm_hap_nested_page_fault(unsigned l
          * into l1 guest if not fixable. The algorithm is
          * the same as for shadow paging.
          */
-        rv = nestedhvm_hap_nested_page_fault(v, gpa,
+        rv = nestedhvm_hap_nested_page_fault(v, &gpa,
                                              access_r, access_w, access_x);
         switch (rv) {
         case NESTEDHVM_PAGEFAULT_DONE:
             return 1;
-        case NESTEDHVM_PAGEFAULT_ERROR:
+        case NESTEDHVM_PAGEFAULT_L1_ERROR:
+            /* An error occurred while translating gpa from
+             * l2 guest address to l1 guest address. */
             return 0;
         case NESTEDHVM_PAGEFAULT_INJECT:
             return -1;
@@ -1291,6 +1293,10 @@ int hvm_hap_nested_page_fault(unsigned l
             if ( !handle_mmio() )
                 hvm_inject_hw_exception(TRAP_gp_fault, 0);
             return 1;
+        case NESTEDHVM_PAGEFAULT_L0_ERROR:
+            /* gpa is now translated to l1 guest address, update gfn. */
+            gfn = gpa >> PAGE_SHIFT;
+            break;
         }
     }
 
diff -r 8330198c3240 xen/arch/x86/mm/hap/nested_hap.c
--- a/xen/arch/x86/mm/hap/nested_hap.c	Fri Jul 27 12:24:03 2012 +0200
+++ b/xen/arch/x86/mm/hap/nested_hap.c	Thu Aug 02 13:42:15 2012 +0200
@@ -141,26 +141,29 @@ nestedhap_fix_p2m(struct vcpu *v, struct
  */
 static int
 nestedhap_walk_L0_p2m(struct p2m_domain *p2m, paddr_t L1_gpa, paddr_t *L0_gpa,
-                      unsigned int *page_order)
+                      p2m_type_t *p2mt,
+                      unsigned int *page_order,
+                      bool_t access_r, bool_t access_w, bool_t access_x)
 {
     mfn_t mfn;
-    p2m_type_t p2mt;
     p2m_access_t p2ma;
     int rc;
 
     /* walk L0 P2M table */
-    mfn = get_gfn_type_access(p2m, L1_gpa >> PAGE_SHIFT, &p2mt, &p2ma, 
+    mfn = get_gfn_type_access(p2m, L1_gpa >> PAGE_SHIFT, p2mt, &p2ma, 
                               0, page_order);
 
     rc = NESTEDHVM_PAGEFAULT_MMIO;
-    if ( p2m_is_mmio(p2mt) )
+    if ( p2m_is_mmio(*p2mt) )
         goto out;
 
-    rc = NESTEDHVM_PAGEFAULT_ERROR;
-    if ( p2m_is_paging(p2mt) || p2m_is_shared(p2mt) || !p2m_is_ram(p2mt) )
+    rc = NESTEDHVM_PAGEFAULT_L0_ERROR;
+    if ( access_w && p2m_is_readonly(*p2mt) )
         goto out;
 
-    rc = NESTEDHVM_PAGEFAULT_ERROR;
+    if ( p2m_is_paging(*p2mt) || p2m_is_shared(*p2mt) || !p2m_is_ram(*p2mt) )
+        goto out;
+
     if ( !mfn_valid(mfn) )
         goto out;
 
@@ -207,7 +210,7 @@ nestedhap_walk_L1_p2m(struct vcpu *v, pa
  * Returns:
  */
 int
-nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t L2_gpa,
+nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t *L2_gpa,
     bool_t access_r, bool_t access_w, bool_t access_x)
 {
     int rv;
@@ -215,19 +218,20 @@ nestedhvm_hap_nested_page_fault(struct v
     struct domain *d = v->domain;
     struct p2m_domain *p2m, *nested_p2m;
     unsigned int page_order_21, page_order_10, page_order_20;
+    p2m_type_t p2mt_10;
 
     p2m = p2m_get_hostp2m(d); /* L0 p2m */
     nested_p2m = p2m_get_nestedp2m(v, nhvm_vcpu_hostcr3(v));
 
     /* walk the L1 P2M table */
-    rv = nestedhap_walk_L1_p2m(v, L2_gpa, &L1_gpa, &page_order_21,
+    rv = nestedhap_walk_L1_p2m(v, *L2_gpa, &L1_gpa, &page_order_21,
         access_r, access_w, access_x);
 
     /* let caller to handle these two cases */
     switch (rv) {
     case NESTEDHVM_PAGEFAULT_INJECT:
         return rv;
-    case NESTEDHVM_PAGEFAULT_ERROR:
+    case NESTEDHVM_PAGEFAULT_L1_ERROR:
         return rv;
     case NESTEDHVM_PAGEFAULT_DONE:
         break;
@@ -237,13 +241,16 @@ nestedhvm_hap_nested_page_fault(struct v
     }
 
     /* ==> we have to walk L0 P2M */
-    rv = nestedhap_walk_L0_p2m(p2m, L1_gpa, &L0_gpa, &page_order_10);
+    rv = nestedhap_walk_L0_p2m(p2m, L1_gpa, &L0_gpa,
+        &p2mt_10, &page_order_10,
+        access_r, access_w, access_x);
 
     /* let upper level caller to handle these two cases */
     switch (rv) {
     case NESTEDHVM_PAGEFAULT_INJECT:
         return rv;
-    case NESTEDHVM_PAGEFAULT_ERROR:
+    case NESTEDHVM_PAGEFAULT_L0_ERROR:
+        *L2_gpa = L1_gpa;
         return rv;
     case NESTEDHVM_PAGEFAULT_DONE:
         break;
@@ -257,9 +264,9 @@ nestedhvm_hap_nested_page_fault(struct v
     page_order_20 = min(page_order_21, page_order_10);
 
     /* fix p2m_get_pagetable(nested_p2m) */
-    nestedhap_fix_p2m(v, nested_p2m, L2_gpa, L0_gpa, page_order_20,
-        p2m_ram_rw,
-        p2m_access_rwx /* FIXME: Should use same permission as l1 guest */);
+    nestedhap_fix_p2m(v, nested_p2m, *L2_gpa, L0_gpa, page_order_20,
+        p2mt_10,
+        p2m_access_rwx /* FIXME: Should use minimum permission. */);
 
     return NESTEDHVM_PAGEFAULT_DONE;
 }
diff -r 8330198c3240 xen/include/asm-x86/hvm/nestedhvm.h
--- a/xen/include/asm-x86/hvm/nestedhvm.h	Fri Jul 27 12:24:03 2012 +0200
+++ b/xen/include/asm-x86/hvm/nestedhvm.h	Thu Aug 02 13:42:15 2012 +0200
@@ -47,11 +47,12 @@ bool_t nestedhvm_vcpu_in_guestmode(struc
     vcpu_nestedhvm(v).nv_guestmode = 0
 
 /* Nested paging */
-#define NESTEDHVM_PAGEFAULT_DONE   0
-#define NESTEDHVM_PAGEFAULT_INJECT 1
-#define NESTEDHVM_PAGEFAULT_ERROR  2
-#define NESTEDHVM_PAGEFAULT_MMIO   3
-int nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t L2_gpa,
+#define NESTEDHVM_PAGEFAULT_DONE       0
+#define NESTEDHVM_PAGEFAULT_INJECT     1
+#define NESTEDHVM_PAGEFAULT_L1_ERROR   2
+#define NESTEDHVM_PAGEFAULT_L0_ERROR   3
+#define NESTEDHVM_PAGEFAULT_MMIO       4
+int nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t *L2_gpa,
     bool_t access_r, bool_t access_w, bool_t access_x);
 
 /* IO permission map */

--------------020502030609020508090403
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--------------020502030609020508090403--


>>> too.
>>
>> How do you differentiate whether the error happened while walking the l1
>> npt or the host npt?
>> In the first case it isn't possible to provide an l1 address.
> 
> It must be _possible_; after all we managed to detect the error. :)  In
> any case it's definitely wrong to carry on with this handler with the
> wrong address in hand.  So I wonder why this patch actually works for
> you.  Does replacing the 'break' above with 'return 1' also fix the
> problem?


No. Two things have to happen:

1. Calling paging_mark_dirty() and
2. Using the same p2mt from the hostp2m in the nestedp2m.

>

> In the short term, do you only care about pages that are read-only for
> log-dirty tracking?  For the L1 walk, that should be handled by the PT
> walker's own calls to paging_mark_dirty(), and the nested-p2m handler
> could potentially take care of the other case by calling
> paging_mark_dirty() (for writes!) before calling nestedhap_walk_L0_p2m().


OK, I consider this a performance improvement rather than a bugfix.

New version is attached.

Christoph

-- 
---to satisfy European Law for business letters:
Advanced Micro Devices GmbH
Einsteinring 24, 85689 Dornach b. Muenchen
Geschaeftsfuehrer: Alberto Bozzo
Sitz: Dornach, Gemeinde Aschheim, Landkreis Muenchen
Registergericht Muenchen, HRB Nr. 43632

--------------020502030609020508090403
Content-Type: text/plain; charset="us-ascii"; name="xen_nh_p2m.diff"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="xen_nh_p2m.diff"
Content-Description: xen_nh_p2m.diff

diff -r 8330198c3240 xen/arch/x86/hvm/hvm.c
--- a/xen/arch/x86/hvm/hvm.c	Fri Jul 27 12:24:03 2012 +0200
+++ b/xen/arch/x86/hvm/hvm.c	Thu Aug 02 13:42:15 2012 +0200
@@ -1278,12 +1278,14 @@ int hvm_hap_nested_page_fault(unsigned l
          * into l1 guest if not fixable. The algorithm is
          * the same as for shadow paging.
          */
-        rv = nestedhvm_hap_nested_page_fault(v, gpa,
+        rv = nestedhvm_hap_nested_page_fault(v, &gpa,
                                              access_r, access_w, access_x);
         switch (rv) {
         case NESTEDHVM_PAGEFAULT_DONE:
             return 1;
-        case NESTEDHVM_PAGEFAULT_ERROR:
+        case NESTEDHVM_PAGEFAULT_L1_ERROR:
+            /* An error occurred while translating gpa from
+             * l2 guest address to l1 guest address. */
             return 0;
         case NESTEDHVM_PAGEFAULT_INJECT:
             return -1;
@@ -1291,6 +1293,10 @@ int hvm_hap_nested_page_fault(unsigned l
             if ( !handle_mmio() )
                 hvm_inject_hw_exception(TRAP_gp_fault, 0);
             return 1;
+        case NESTEDHVM_PAGEFAULT_L0_ERROR:
+            /* gpa is now translated to l1 guest address, update gfn. */
+            gfn = gpa >> PAGE_SHIFT;
+            break;
         }
     }
 
diff -r 8330198c3240 xen/arch/x86/mm/hap/nested_hap.c
--- a/xen/arch/x86/mm/hap/nested_hap.c	Fri Jul 27 12:24:03 2012 +0200
+++ b/xen/arch/x86/mm/hap/nested_hap.c	Thu Aug 02 13:42:15 2012 +0200
@@ -141,26 +141,29 @@ nestedhap_fix_p2m(struct vcpu *v, struct
  */
 static int
 nestedhap_walk_L0_p2m(struct p2m_domain *p2m, paddr_t L1_gpa, paddr_t *L0_gpa,
-                      unsigned int *page_order)
+                      p2m_type_t *p2mt,
+                      unsigned int *page_order,
+                      bool_t access_r, bool_t access_w, bool_t access_x)
 {
     mfn_t mfn;
-    p2m_type_t p2mt;
     p2m_access_t p2ma;
     int rc;
 
     /* walk L0 P2M table */
-    mfn = get_gfn_type_access(p2m, L1_gpa >> PAGE_SHIFT, &p2mt, &p2ma, 
+    mfn = get_gfn_type_access(p2m, L1_gpa >> PAGE_SHIFT, p2mt, &p2ma, 
                               0, page_order);
 
     rc = NESTEDHVM_PAGEFAULT_MMIO;
-    if ( p2m_is_mmio(p2mt) )
+    if ( p2m_is_mmio(*p2mt) )
         goto out;
 
-    rc = NESTEDHVM_PAGEFAULT_ERROR;
-    if ( p2m_is_paging(p2mt) || p2m_is_shared(p2mt) || !p2m_is_ram(p2mt) )
+    rc = NESTEDHVM_PAGEFAULT_L0_ERROR;
+    if ( access_w && p2m_is_readonly(*p2mt) )
         goto out;
 
-    rc = NESTEDHVM_PAGEFAULT_ERROR;
+    if ( p2m_is_paging(*p2mt) || p2m_is_shared(*p2mt) || !p2m_is_ram(*p2mt) )
+        goto out;
+
     if ( !mfn_valid(mfn) )
         goto out;
 
@@ -207,7 +210,7 @@ nestedhap_walk_L1_p2m(struct vcpu *v, pa
  * Returns:
  */
 int
-nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t L2_gpa,
+nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t *L2_gpa,
     bool_t access_r, bool_t access_w, bool_t access_x)
 {
     int rv;
@@ -215,19 +218,20 @@ nestedhvm_hap_nested_page_fault(struct v
     struct domain *d = v->domain;
     struct p2m_domain *p2m, *nested_p2m;
     unsigned int page_order_21, page_order_10, page_order_20;
+    p2m_type_t p2mt_10;
 
     p2m = p2m_get_hostp2m(d); /* L0 p2m */
     nested_p2m = p2m_get_nestedp2m(v, nhvm_vcpu_hostcr3(v));
 
     /* walk the L1 P2M table */
-    rv = nestedhap_walk_L1_p2m(v, L2_gpa, &L1_gpa, &page_order_21,
+    rv = nestedhap_walk_L1_p2m(v, *L2_gpa, &L1_gpa, &page_order_21,
         access_r, access_w, access_x);
 
     /* let caller to handle these two cases */
     switch (rv) {
     case NESTEDHVM_PAGEFAULT_INJECT:
         return rv;
-    case NESTEDHVM_PAGEFAULT_ERROR:
+    case NESTEDHVM_PAGEFAULT_L1_ERROR:
         return rv;
     case NESTEDHVM_PAGEFAULT_DONE:
         break;
@@ -237,13 +241,16 @@ nestedhvm_hap_nested_page_fault(struct v
     }
 
     /* ==> we have to walk L0 P2M */
-    rv = nestedhap_walk_L0_p2m(p2m, L1_gpa, &L0_gpa, &page_order_10);
+    rv = nestedhap_walk_L0_p2m(p2m, L1_gpa, &L0_gpa,
+        &p2mt_10, &page_order_10,
+        access_r, access_w, access_x);
 
     /* let upper level caller to handle these two cases */
     switch (rv) {
     case NESTEDHVM_PAGEFAULT_INJECT:
         return rv;
-    case NESTEDHVM_PAGEFAULT_ERROR:
+    case NESTEDHVM_PAGEFAULT_L0_ERROR:
+        *L2_gpa = L1_gpa;
         return rv;
     case NESTEDHVM_PAGEFAULT_DONE:
         break;
@@ -257,9 +264,9 @@ nestedhvm_hap_nested_page_fault(struct v
     page_order_20 = min(page_order_21, page_order_10);
 
     /* fix p2m_get_pagetable(nested_p2m) */
-    nestedhap_fix_p2m(v, nested_p2m, L2_gpa, L0_gpa, page_order_20,
-        p2m_ram_rw,
-        p2m_access_rwx /* FIXME: Should use same permission as l1 guest */);
+    nestedhap_fix_p2m(v, nested_p2m, *L2_gpa, L0_gpa, page_order_20,
+        p2mt_10,
+        p2m_access_rwx /* FIXME: Should use minimum permission. */);
 
     return NESTEDHVM_PAGEFAULT_DONE;
 }
diff -r 8330198c3240 xen/include/asm-x86/hvm/nestedhvm.h
--- a/xen/include/asm-x86/hvm/nestedhvm.h	Fri Jul 27 12:24:03 2012 +0200
+++ b/xen/include/asm-x86/hvm/nestedhvm.h	Thu Aug 02 13:42:15 2012 +0200
@@ -47,11 +47,12 @@ bool_t nestedhvm_vcpu_in_guestmode(struc
     vcpu_nestedhvm(v).nv_guestmode = 0
 
 /* Nested paging */
-#define NESTEDHVM_PAGEFAULT_DONE   0
-#define NESTEDHVM_PAGEFAULT_INJECT 1
-#define NESTEDHVM_PAGEFAULT_ERROR  2
-#define NESTEDHVM_PAGEFAULT_MMIO   3
-int nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t L2_gpa,
+#define NESTEDHVM_PAGEFAULT_DONE       0
+#define NESTEDHVM_PAGEFAULT_INJECT     1
+#define NESTEDHVM_PAGEFAULT_L1_ERROR   2
+#define NESTEDHVM_PAGEFAULT_L0_ERROR   3
+#define NESTEDHVM_PAGEFAULT_MMIO       4
+int nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t *L2_gpa,
     bool_t access_r, bool_t access_w, bool_t access_x);
 
 /* IO permission map */

--------------020502030609020508090403
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--------------020502030609020508090403--


From xen-devel-bounces@lists.xen.org Thu Aug 02 13:06:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 13:06:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swv5i-0005gL-A6; Thu, 02 Aug 2012 13:05:26 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1Swv5g-0005gG-Kg
	for xen-devel@lists.xensource.com; Thu, 02 Aug 2012 13:05:24 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1343912713!8801614!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjE3NTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1398 invoked from network); 2 Aug 2012 13:05:14 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 13:05:14 -0000
X-IronPort-AV: E=Sophos;i="4.77,701,1336363200"; d="scan'208";a="33328084"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 09:05:12 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 2 Aug 2012 09:05:11 -0400
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1Swv5T-0000hV-GM;
	Thu, 02 Aug 2012 14:05:11 +0100
From: David Vrabel <david.vrabel@citrix.com>
To: xen-devel@lists.xensource.com
Date: Thu, 2 Aug 2012 14:04:51 +0100
Message-ID: <1343912691-7952-1-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
MIME-Version: 1.0
Cc: David Vrabel <david.vrabel@citrix.com>
Subject: [Xen-devel] [PATCH] cpufreq: P state stats aren't available if
	there is no cpufreq driver
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: David Vrabel <david.vrabel@citrix.com>

If there is no cpufreq driver (e.g., with an AMD Opteron 8212) then
reading the P state statistics causes a deadlock as an uninitialized
spinlock is locked in do_get_pm_info(). The spinlock is initialized in
cpufreq_statistic_init() which is not called if cpufreq_driver == NULL.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 xen/drivers/acpi/pmstat.c |    2 ++
 1 files changed, 2 insertions(+), 0 deletions(-)

diff --git a/xen/drivers/acpi/pmstat.c b/xen/drivers/acpi/pmstat.c
index 8788f01..698711e 100644
--- a/xen/drivers/acpi/pmstat.c
+++ b/xen/drivers/acpi/pmstat.c
@@ -66,6 +66,8 @@ int do_get_pm_info(struct xen_sysctl_get_pmstat *op)
     case PMSTAT_PX:
         if ( !(xen_processor_pmbits & XEN_PROCESSOR_PM_PX) )
             return -ENODEV;
+        if ( !cpufreq_driver )
+            return -ENODEV;
         if ( !pmpt || !(pmpt->perf.init & XEN_PX_INIT) )
             return -EINVAL;
         break;
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 13:08:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 13:08:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swv86-0005nu-0E; Thu, 02 Aug 2012 13:07:54 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <attilio.rao@citrix.com>) id 1Swv84-0005ne-Fz
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 13:07:52 +0000
Received: from [85.158.138.51:12230] by server-4.bemta-3.messagelabs.com id
	A2/02-29069-7AB7A105; Thu, 02 Aug 2012 13:07:51 +0000
X-Env-Sender: attilio.rao@citrix.com
X-Msg-Ref: server-10.tower-174.messagelabs.com!1343912870!26035445!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6048 invoked from network); 2 Aug 2012 13:07:51 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-10.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 13:07:51 -0000
X-IronPort-AV: E=Sophos;i="4.77,701,1336348800"; d="scan'208";a="13822584"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 13:07:50 +0000
Received: from [10.80.3.221] (10.80.3.221) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Thu, 2 Aug 2012
	14:07:50 +0100
Message-ID: <501A7935.9090603@citrix.com>
Date: Thu, 2 Aug 2012 13:57:25 +0100
From: Attilio Rao <attilio.rao@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US;
	rv:1.9.1.16) Gecko/20111110 Icedove/3.0.11
MIME-Version: 1.0
To: <xen-devel@lists.xen.org>
References: <1321471508-31633-1-git-send-email-jean.guyader@eu.citrix.com>	<1321471508-31633-2-git-send-email-jean.guyader@eu.citrix.com>	<1321471508-31633-3-git-send-email-jean.guyader@eu.citrix.com>	<1321471508-31633-4-git-send-email-jean.guyader@eu.citrix.com>	<1321471508-31633-5-git-send-email-jean.guyader@eu.citrix.com>	<alpine.DEB.2.02.1208011846510.4645@kaball.uk.xensource.com>	<501A631D02000078000921B8@nat28.tlf.novell.com>	<501A4C29.5080006@citrix.com>	<501A71030200007800092257@nat28.tlf.novell.com>
	<501A57E9.6070407@citrix.com>
In-Reply-To: <501A57E9.6070407@citrix.com>
Subject: Re: [Xen-devel] Should we revert "mm: New XENMEM space,
	XENMAPSPACE_gmfn_range"?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/08/12 11:35, Attilio Rao wrote:
> On 02/08/12 11:22, Jan Beulich wrote:
>    
>>>>> On 02.08.12 at 11:45, Attilio Rao<attilio.rao@citrix.com>   wrote:
>>>>>
>>>>>            
>>> On 02/08/12 10:23, Jan Beulich wrote:
>>>
>>>        
>>>>>>> On 01.08.12 at 19:55, Stefano Stabellini<stefano.stabellini@eu.citrix.com>
>>>>>>>
>>>>>>>                
>>> wrote:
>>>
>>>        
>>>>>>>
>>>>>>>                
>>>>> I was reading more about this commit because this patch breaks the ABI
>>>>> on ARM, when I realized that on x86 there is no standard that specifies
>>>>> the alignment of fields in a struct.
>>>>>
>>>>>
>>>>>            
>>>> There is - the psABI supplements to the SVR4 ABI.
>>>>
>>>>
>>>>
>>>>          
>>> This is a completely different issue.
>>> The problem here is the padding gcc (or whatever compiler) adds to the
>>> struct in order to align its members to the word boundary. The
>>> difference is that this is not enforced in the ARM case (apparently,
>>> from Stefano's report) while it happens in the x86 case.
>>>
>>> This is why it is a good rule to organize the members of a struct from
>>> the biggest to the smallest when compiling with gcc, which is not the
>>> case for the struct in question.
>>>
>>> In the end it is a compiler decisional thing, not something decided by
>>> the ABI.
>>>
>>>        
>> No, definitely not. Otherwise inter-operation between code
>> compiled with different compilers would be impossible. To
>> allow this is what the various ABI specifications exist for (and
>> their absence had, e.g. on DOS, led to a complete mess).
>>
>>
>>      
> Look, I'm speaking about the problem Stefano is trying to crunch which
> has nothing to do with your discussion on ABI.
>
>    
>> As to the ARM issue - mind pointing out where mis-aligned
>> structure fields are specified as being the standard?
>>
>>
>>      
> I think that alignment is important; in fact I'm more surprised by the
> ARM side than by the x86. Of course, because this is a compiler-dependent
> behaviour (the fact that not only gcc does that doesn't mean it is
> "standardized", just like it is not standardized anywhere that stack on
> x86 must be word aligned, even if it is so common that it is taken for
> granted now).
>
>    

It seems that I was missing something -- the x86 psABI explicitly
specifies the padding of internal members of structures and unions
(Chapter 3, Paragraph 3.1.2, subsection "Aggregates and Unions"), so
this behaviour really comes from the SVR4 ABI, as Jan pointed out.
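The member-ordering point from the thread is easy to demonstrate: since the psABI gives each member its natural alignment, a small/large/small ordering pays for padding twice. A small sketch (the struct names are illustrative; the sizes assume a typical x86-64 LP64 target as specified by the psABI):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Each member is aligned to its natural alignment, so the 8-byte
 * member forces 7 bytes of padding after 'a', and the trailing
 * 1-byte member forces 7 bytes of tail padding: 24 bytes total. */
struct badly_ordered {
    uint8_t  a;
    uint64_t b;
    uint8_t  c;
};

/* Ordering from largest to smallest leaves only 6 bytes of tail
 * padding: 16 bytes total for the same members. */
struct well_ordered {
    uint64_t b;
    uint8_t  a;
    uint8_t  c;
};
```

This is why reordering fields can silently change a structure's size and layout, and why an ABI-visible struct must be laid out deliberately rather than left to incidental ordering.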

Attilio

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 13:14:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 13:14:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwvDt-00061G-TV; Thu, 02 Aug 2012 13:13:53 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SwvDs-00061B-Tq
	for xen-devel@lists.xensource.com; Thu, 02 Aug 2012 13:13:53 +0000
Received: from [85.158.139.83:42985] by server-10.bemta-5.messagelabs.com id
	35/FB-02190-01D7A105; Thu, 02 Aug 2012 13:13:52 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-14.tower-182.messagelabs.com!1343913231!25197294!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28341 invoked from network); 2 Aug 2012 13:13:51 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-14.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 13:13:51 -0000
X-IronPort-AV: E=Sophos;i="4.77,701,1336348800"; d="scan'208";a="13822728"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 13:13:50 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 2 Aug 2012 14:13:50 +0100
Date: Thu, 2 Aug 2012 14:13:31 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Jan Beulich <JBeulich@suse.com>
In-Reply-To: <501A631D02000078000921B8@nat28.tlf.novell.com>
Message-ID: <alpine.DEB.2.02.1208021357320.4645@kaball.uk.xensource.com>
References: <1321471508-31633-1-git-send-email-jean.guyader@eu.citrix.com>
	<1321471508-31633-2-git-send-email-jean.guyader@eu.citrix.com>
	<1321471508-31633-3-git-send-email-jean.guyader@eu.citrix.com>
	<1321471508-31633-4-git-send-email-jean.guyader@eu.citrix.com>
	<1321471508-31633-5-git-send-email-jean.guyader@eu.citrix.com>
	<alpine.DEB.2.02.1208011846510.4645@kaball.uk.xensource.com>
	<501A631D02000078000921B8@nat28.tlf.novell.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"Keir \(Xen.org\)" <keir@xen.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"Tim \(Xen.org\)" <tim@xen.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"allen.m.kay@intel.com" <allen.m.kay@intel.com>,
	Jean Guyader <jean.guyader@eu.citrix.com>,
	"Jean Guyader \(3P\)" <jean.guyader@citrix.com>,
	Attilio Rao <attilio.rao@citrix.com>
Subject: Re: [Xen-devel] Should we revert "mm: New XENMEM space,
 XENMAPSPACE_gmfn_range"?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2 Aug 2012, Jan Beulich wrote:
> >>> On 01.08.12 at 19:55, Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
> > I was reading more about this commit because this patch breaks the ABI
> > on ARM, when I realized that on x86 there is no standard that specifies
> > the alignment of fields in a struct.
> 
> There is - the psABI supplements to the SVR4 ABI.

Thank you very much, that document was exactly what I was looking for.

It also explains where my confusion was coming from: Jean's patch doesn't
break the ABI on ARM or x86, but I am carrying a patch in my patch queue
that does (unless Jean's patch is applied):

http://marc.info/?l=xen-devel&m=134305777903771

As you can see, this patch splits .space into two shorts and, as a side
effect, changes the offset of .space, removing the padding.
That led me to think that Jean's patch was breaking the ABI, when in fact,
once "arm: initial XENMAPSPACE_gmfn_foreign" is applied, Jean's patch
becomes required to keep the binary interface compatible.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 13:19:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 13:19:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwvIw-00069W-M7; Thu, 02 Aug 2012 13:19:06 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <daniel.kiper@oracle.com>) id 1SwvIu-00069P-Ui
	for xen-devel@lists.xensource.com; Thu, 02 Aug 2012 13:19:05 +0000
Received: from [85.158.138.51:43377] by server-2.bemta-3.messagelabs.com id
	30/93-00359-84E7A105; Thu, 02 Aug 2012 13:19:04 +0000
X-Env-Sender: daniel.kiper@oracle.com
X-Msg-Ref: server-3.tower-174.messagelabs.com!1343913541!21989244!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjcyMzQx\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20278 invoked from network); 2 Aug 2012 13:19:03 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-3.tower-174.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 2 Aug 2012 13:19:03 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q72DIsH2002092
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 2 Aug 2012 13:18:55 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q72DIruj022759
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 2 Aug 2012 13:18:53 GMT
Received: from abhmt111.oracle.com (abhmt111.oracle.com [141.146.116.63])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q72DIq3L009308; Thu, 2 Aug 2012 08:18:52 -0500
MIME-Version: 1.0
Message-ID: <25793edb-a702-43d6-b109-4a637ead3536@default>
Date: Thu, 2 Aug 2012 06:18:52 -0700 (PDT)
From: Daniel Kiper <daniel.kiper@oracle.com>
To: <horms@verge.net.au>
X-Mailer: Zimbra on Oracle Beehive
Content-Disposition: inline
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: ptesarik@suse.cz, olaf@aepfle.de, xen-devel@lists.xensource.com,
	kexec@lists.infradead.org, konrad.wilk@oracle.com
Subject: Re: [Xen-devel] [PATCH] kexec-tools: Read always one vmcoreinfo file
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Simon,

[...]

> > If you spot any error in any logfile which in your opinion
> > is relevant to our tests, please send it to me.
>
> Hi,
>
> is there any consensus on what to do here?

As far as I know, Petr was going to do some tests.
I have not received any reply from him so far.
Maybe he is busy or on vacation.

Daniel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 13:22:22 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 13:22:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwvLZ-0006Fu-80; Thu, 02 Aug 2012 13:21:49 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin.df@gmail.com>) id 1SwvLX-0006Fk-Ey
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 13:21:47 +0000
Received: from [85.158.138.51:6051] by server-6.bemta-3.messagelabs.com id
	03/0E-20447-AEE7A105; Thu, 02 Aug 2012 13:21:46 +0000
X-Env-Sender: raistlin.df@gmail.com
X-Msg-Ref: server-5.tower-174.messagelabs.com!1343913705!30167231!1
X-Originating-IP: [74.125.82.41]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
  RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9953 invoked from network); 2 Aug 2012 13:21:45 -0000
Received: from mail-wg0-f41.google.com (HELO mail-wg0-f41.google.com)
	(74.125.82.41)
	by server-5.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 13:21:45 -0000
Received: by wgbds1 with SMTP id ds1so5395803wgb.2
	for <xen-devel@lists.xen.org>; Thu, 02 Aug 2012 06:21:45 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:subject:from:to:cc:date:in-reply-to:references
	:content-type:x-mailer:mime-version;
	bh=mVKbOiNQXguyU+4Zey3dzOs3y/SC9E98ckSIP/9nQgg=;
	b=sx/Xyp+V9MVBA/bORxbFEJlJNkLqBKa4zQE4vzxz05dgR7+/uGtK/0ktDGpbNP5466
	zZwCuXS0mSihHRja5kMnYTpSqH1sE2Yhmqw5sbnGi0tZkvSo+AmeMeveZRrFsnm9tUE6
	UAxkg2aC5o9w7wnDGuG9Nf4RFmOMO67R7w/OZqMn53cDLGHCr9oQ3zXidZZZ0MoA48gg
	U5vwYg37AE/DDQYbeS3HJ8wR6pdbbx58LMdxCB0sdF+0Q0a1p21JycQBohMZ5oRhYvVr
	RM0nu4liQKO6TS/S3wgI26kOUazbi4Os9pH24B1MhDeu3vJVUzJp6fteLiyL2nKBFG0p
	ENgQ==
Received: by 10.216.242.196 with SMTP id i46mr9136859wer.125.1343913705389;
	Thu, 02 Aug 2012 06:21:45 -0700 (PDT)
Received: from [192.168.0.40] (ip-176-206.sn2.eutelia.it. [83.211.176.206])
	by mx.google.com with ESMTPS id k20sm33682627wiv.11.2012.08.02.06.21.43
	(version=TLSv1/SSLv3 cipher=OTHER);
	Thu, 02 Aug 2012 06:21:44 -0700 (PDT)
Message-ID: <1343913696.4873.7.camel@Solace>
From: Dario Faggioli <raistlin@linux.it>
To: Jan Beulich <JBeulich@suse.com>
Date: Thu, 02 Aug 2012 15:21:36 +0200
In-Reply-To: <501A672B02000078000921EA@nat28.tlf.novell.com>
References: <1343837796.4958.32.camel@Solace> <501959BE.60801@citrix.com>
	<501A672B02000078000921EA@nat28.tlf.novell.com>
X-Mailer: Evolution 3.4.3-1
Mime-Version: 1.0
Cc: Andre Przywara <andre.przywara@amd.com>,
	Anil Madhavapeddy <anil@recoil.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>, George Dunlap <dunlapg@gmail.com>,
	Yang Z Zhang <yang.z.zhang@intel.com>
Subject: Re: [Xen-devel] NUMA TODO-list for xen-devel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4600879891119173199=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--===============4600879891119173199==
Content-Type: multipart/signed; micalg="pgp-sha1"; protocol="application/pgp-signature";
	boundary="=-8emHdb/BuF7bVjc4jS9S"


--=-8emHdb/BuF7bVjc4jS9S
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Thu, 2012-08-02 at 10:40 +0100, Jan Beulich wrote:
> >>> On 01.08.12 at 18:30, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> > - Xen NUMA internals.  Placing items such as the per-cpu stacks and data
> > area on the local NUMA node, rather than unconditionally on node 0 at
> > the moment.  As part of this, there will be changes to
> > alloc_{dom,xen}heap_page() to allow specification of which node(s) to
> > allocate memory from.
> 
> Those interfaces already support flags to be passed, including a
> node ID. It just needs to be made use of in more places.
> 
Yes, I also remember it being already node_affinity conscious, and I think
it's more a matter of how it is called. I'll update the wiki accordingly
(it doesn't need to contain this sort of detail anyway).

Thanks and Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-8emHdb/BuF7bVjc4jS9S
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iEYEABECAAYFAlAafuAACgkQk4XaBE3IOsTGMgCdHFyXFI2cYdphRL/+QWTJ7RMB
1sgAnR8QG3grvhsu6NSd6KgXR7N2arNs
=YXRA
-----END PGP SIGNATURE-----

--=-8emHdb/BuF7bVjc4jS9S--



--===============4600879891119173199==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4600879891119173199==--



From xen-devel-bounces@lists.xen.org Thu Aug 02 13:28:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 13:28:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwvRA-0006SE-18; Thu, 02 Aug 2012 13:27:36 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <daniel.kiper@oracle.com>) id 1SwvR9-0006S7-B2
	for xen-devel@lists.xensource.com; Thu, 02 Aug 2012 13:27:35 +0000
Received: from [85.158.139.83:31874] by server-3.bemta-5.messagelabs.com id
	CE/82-03367-6408A105; Thu, 02 Aug 2012 13:27:34 +0000
X-Env-Sender: daniel.kiper@oracle.com
X-Msg-Ref: server-7.tower-182.messagelabs.com!1343914052!24251997!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDY3NDk3OQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7634 invoked from network); 2 Aug 2012 13:27:33 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-7.tower-182.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 2 Aug 2012 13:27:33 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q72DRQoH014214
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 2 Aug 2012 13:27:27 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q72DRPSI017121
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 2 Aug 2012 13:27:25 GMT
Received: from abhmt111.oracle.com (abhmt111.oracle.com [141.146.116.63])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q72DRPt5015821; Thu, 2 Aug 2012 08:27:25 -0500
MIME-Version: 1.0
Message-ID: <1d901f35-be2d-418b-bc80-b8c0420f3c0d@default>
Date: Thu, 2 Aug 2012 06:27:24 -0700 (PDT)
From: Daniel Kiper <daniel.kiper@oracle.com>
To: <horms@verge.net.au>
X-Mailer: Zimbra on Oracle Beehive
Content-Disposition: inline
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: ptesarik@suse.cz, olaf@aepfle.de, xen-devel@lists.xensource.com,
	kexec@lists.infradead.org, konrad.wilk@oracle.com
Subject: Re: [Xen-devel] [PATCH] kexec-tools: Read always one vmcoreinfo file
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi,

> > > If you spot any error in any logfile which in your opinion
> > > is relevant to our tests, please send it to me.
> >
> > Hi,
> >
> > is there any consensus on what to do here?
>
> As far as I know, Petr was going to do some tests.
> I have not received any reply from him so far.
> Maybe he is busy or on vacation.

According to his automatic reply, he is on vacation until 13/08/2012.

Daniel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 13:28:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 13:28:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwvRA-0006SE-18; Thu, 02 Aug 2012 13:27:36 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <daniel.kiper@oracle.com>) id 1SwvR9-0006S7-B2
	for xen-devel@lists.xensource.com; Thu, 02 Aug 2012 13:27:35 +0000
Received: from [85.158.139.83:31874] by server-3.bemta-5.messagelabs.com id
	CE/82-03367-6408A105; Thu, 02 Aug 2012 13:27:34 +0000
X-Env-Sender: daniel.kiper@oracle.com
X-Msg-Ref: server-7.tower-182.messagelabs.com!1343914052!24251997!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDY3NDk3OQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7634 invoked from network); 2 Aug 2012 13:27:33 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-7.tower-182.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 2 Aug 2012 13:27:33 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q72DRQoH014214
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 2 Aug 2012 13:27:27 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q72DRPSI017121
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 2 Aug 2012 13:27:25 GMT
Received: from abhmt111.oracle.com (abhmt111.oracle.com [141.146.116.63])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q72DRPt5015821; Thu, 2 Aug 2012 08:27:25 -0500
MIME-Version: 1.0
Message-ID: <1d901f35-be2d-418b-bc80-b8c0420f3c0d@default>
Date: Thu, 2 Aug 2012 06:27:24 -0700 (PDT)
From: Daniel Kiper <daniel.kiper@oracle.com>
To: <horms@verge.net.au>
X-Mailer: Zimbra on Oracle Beehive
Content-Disposition: inline
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: ptesarik@suse.cz, olaf@aepfle.de, xen-devel@lists.xensource.com,
	kexec@lists.infradead.org, konrad.wilk@oracle.com
Subject: Re: [Xen-devel] [PATCH] kexec-tools: Read always one vmcoreinfo file
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi,

> > > If you spot any error in any logfile which in your opinion
> > > is relevant to our tests, please send it to me.
> >
> > Hi,
> >
> > is there any consensus on what to do here?
>
> As far as I know, Petr was going to do some tests.
> I have not received any reply from him yet.
> Maybe he is busy or on vacation.

According to his automatic reply, he is on vacation until 13/08/2012.

Daniel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 13:34:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 13:34:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwvXM-0006g2-SB; Thu, 02 Aug 2012 13:34:00 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lars.kurth.xen@gmail.com>) id 1SwvXL-0006fx-Mm
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 13:33:59 +0000
Received: from [85.158.143.35:29988] by server-3.bemta-4.messagelabs.com id
	67/C6-01511-7C18A105; Thu, 02 Aug 2012 13:33:59 +0000
X-Env-Sender: lars.kurth.xen@gmail.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1343914437!16473373!1
X-Originating-IP: [74.125.83.45]
X-SpamReason: No, hits=0.6 required=7.0 tests=HTML_60_70,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7803 invoked from network); 2 Aug 2012 13:33:58 -0000
Received: from mail-ee0-f45.google.com (HELO mail-ee0-f45.google.com)
	(74.125.83.45)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 13:33:58 -0000
Received: by eeke53 with SMTP id e53so2341200eek.32
	for <xen-devel@lists.xen.org>; Thu, 02 Aug 2012 06:33:57 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:date:from:reply-to:user-agent:mime-version:to:cc
	:subject:references:in-reply-to:content-type;
	bh=oFAHoAHRXGMr6ZEj0u0XEM3RX1xsznILwu3r0XceA04=;
	b=Q6dq8uJqkq5qPa4tdx75EdBFGVg9c+JPp1lHfx6yDp0RepBNp100Im47NvTaLJYMYA
	W5X0MsnSrk1iLcoz3ULJYYouxKgnRz6g7InQpDZ+u071bScPShzQBMKj+0vO/caC306r
	xBWHCuNMIFKY3Ry/jECleh1YJe1NJT6R6bAKFUqKG64bR4NEuEKGiDGY6L/AqyfROnRn
	5+5xfa2nttFy1X/ek2p5kQOAM8rJafdkXHm4oXWNCcC/hASdGYRLbXDkPdX0qhrw0Pra
	1H2POB1Qp5joi1PfQTSnPPNdHRKv5IJxqqkS/KTFe4TIj7VE4Y8pnE0K2woOdlVrZ3fy
	S3xQ==
Received: by 10.14.214.196 with SMTP id c44mr27117098eep.7.1343914437863;
	Thu, 02 Aug 2012 06:33:57 -0700 (PDT)
Received: from [172.16.26.11] (b01bc490.bb.sky.com. [176.27.196.144])
	by mx.google.com with ESMTPS id w3sm17552741eep.2.2012.08.02.06.33.56
	(version=SSLv3 cipher=OTHER); Thu, 02 Aug 2012 06:33:57 -0700 (PDT)
Message-ID: <501A81C4.8060405@xen.org>
Date: Thu, 02 Aug 2012 14:33:56 +0100
From: Lars Kurth <lars.kurth@xen.org>
User-Agent: Mozilla/5.0 (Windows NT 6.1;
	rv:14.0) Gecko/20120713 Thunderbird/14.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <50191C3E.4050003@xen.org>
	<CAFLBxZaWGBv5dxC9rYKJqrKV7faewhmEuGxM2XsTZ633QuMong@mail.gmail.com>
	<20120801195021.GD19851@reaktio.net>
	<1343890636.7571.31.camel@dagon.hellion.org.uk>
In-Reply-To: <1343890636.7571.31.camel@dagon.hellion.org.uk>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Proposal: Xen Test Days
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: lars.kurth@xen.org
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5684742624663666040=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.
--===============5684742624663666040==
Content-Type: multipart/alternative;
 boundary="------------080904000207070804060800"

This is a multi-part message in MIME format.
--------------080904000207070804060800
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit

Ok,
we added and chose preliminary dates: Aug 14th (the day after RC3) and

  * http://wiki.xen.org/wiki/Xen_Test_Days
  * http://wiki.xen.org/wiki/Xen_4.2_RC3_test_instructions

Feedback and mods are welcome
Lars

--------------080904000207070804060800
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: 7bit

<html>
  <head>
    <meta content="text/html; charset=ISO-8859-1"
      http-equiv="Content-Type">
  </head>
  <body bgcolor="#FFFFFF" text="#000000">
    Ok,<br>
    we added and chose preliminary dates: Aug 14th (the day after RC3)
    and<br>
    <ul>
      <li><a class="moz-txt-link-freetext" href="http://wiki.xen.org/wiki/Xen_Test_Days">http://wiki.xen.org/wiki/Xen_Test_Days</a></li>
      <li><a class="moz-txt-link-freetext" href="http://wiki.xen.org/wiki/Xen_4.2_RC3_test_instructions">http://wiki.xen.org/wiki/Xen_4.2_RC3_test_instructions</a></li>
    </ul>
    Feedback and mods are welcome<br>
    Lars<br>
  </body>
</html>

--------------080904000207070804060800--


--===============5684742624663666040==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5684742624663666040==--


From xen-devel-bounces@lists.xen.org Thu Aug 02 13:35:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 13:35:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwvYN-0006im-B3; Thu, 02 Aug 2012 13:35:03 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin.df@gmail.com>) id 1SwvYL-0006if-Ju
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 13:35:01 +0000
Received: from [85.158.143.99:11887] by server-3.bemta-4.messagelabs.com id
	C4/D8-01511-4028A105; Thu, 02 Aug 2012 13:35:00 +0000
X-Env-Sender: raistlin.df@gmail.com
X-Msg-Ref: server-4.tower-216.messagelabs.com!1343914500!24671204!1
X-Originating-IP: [74.125.82.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10203 invoked from network); 2 Aug 2012 13:35:00 -0000
Received: from mail-we0-f173.google.com (HELO mail-we0-f173.google.com)
	(74.125.82.173)
	by server-4.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 13:35:00 -0000
Received: by weyz53 with SMTP id z53so6811364wey.32
	for <xen-devel@lists.xen.org>; Thu, 02 Aug 2012 06:35:00 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:subject:from:to:cc:date:in-reply-to:references
	:content-type:x-mailer:mime-version;
	bh=qJtMrUdKVpSuWaDh58rRC8BMJhMgdLGXgBoIPFRvl1s=;
	b=molVh23IKa/CteEV3gwnbnx11t6+kzq/Le0c5Nj9QRMC0PmChIBwu5LQ++2r5l2lUL
	mJqaYHe4vCNPlLKMTy2XzznGsSv7glNVEegdLwvoJYDEAOII0SxJ3FzOnXvlm1yrbXIq
	7OrMo7oqDhk77owtTjI2EoZA1yK0cssQMCTA9opNNA5/kjiZ85O+h8XqHgh1ki8WSEtj
	9Cr8wjT6lpBt86lr2wxO1+xLfZ1PV2ZOGPf2dvMgkdQgQ3cel07p5oFvNuPvhd3yICWD
	+64wOH+/LmoEipZypoPe2I0Nd2sBzJneRgrfVw7MF8EJNiBAvDLWOlgMsyUBv2QO55vp
	SxPQ==
Received: by 10.180.14.34 with SMTP id m2mr4769227wic.21.1343914500023;
	Thu, 02 Aug 2012 06:35:00 -0700 (PDT)
Received: from [192.168.0.40] (ip-176-206.sn2.eutelia.it. [83.211.176.206])
	by mx.google.com with ESMTPS id b7sm17305099wiz.9.2012.08.02.06.34.57
	(version=TLSv1/SSLv3 cipher=OTHER);
	Thu, 02 Aug 2012 06:34:58 -0700 (PDT)
Message-ID: <1343914490.4873.18.camel@Solace>
From: Dario Faggioli <raistlin@linux.it>
To: Jan Beulich <JBeulich@suse.com>
Date: Thu, 02 Aug 2012 15:34:50 +0200
In-Reply-To: <501A67C502000078000921FF@nat28.tlf.novell.com>
References: <1343837796.4958.32.camel@Solace>
	<501A67C502000078000921FF@nat28.tlf.novell.com>
X-Mailer: Evolution 3.4.3-1
Mime-Version: 1.0
Cc: Andre Przywara <andre.przywara@amd.com>,
	Anil Madhavapeddy <anil@recoil.org>, George Dunlap <dunlapg@gmail.com>,
	xen-devel <xen-devel@lists.xen.org>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Yang Z Zhang <yang.z.zhang@intel.com>
Subject: Re: [Xen-devel] NUMA TODO-list for xen-devel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6772459713668143152=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--===============6772459713668143152==
Content-Type: multipart/signed; micalg="pgp-sha1"; protocol="application/pgp-signature";
	boundary="=-TDWXkHKQdr55u4QjssW5"


--=-TDWXkHKQdr55u4QjssW5
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Thu, 2012-08-02 at 10:43 +0100, Jan Beulich wrote:
> >>> On 01.08.12 at 18:16, Dario Faggioli <raistlin@linux.it> wrote:
> >     - Virtual NUMA topology exposure to guests (a.k.a. guest-numa). If a
> >       guest ends up on more than one node, make sure it knows it's
> >       running on a NUMA platform (smaller than the actual host, but
> >       still NUMA). This interacts with some of the above points:
>
> The question is whether this is really useful beyond the (I would
> suppose) relatively small set of cases where migration isn't
> needed.
>
Mmm... I'm not sure I'm getting what you're saying here, sorry. Are you
suggesting that exposing a virtual topology is not a good idea because it
constrains or prevents live migration?

If yes, well, I mostly agree that this is a huge issue, and that's why
I think we need some bright idea on how to deal with it. I mean, it's
easy to make it optional and let it automatically disable migration,
giving users the choice of what they prefer, but I think this is more
dodging the problem than dealing with it! :-P

> >        * consider this during automatic placement for
> >          resuming/migrating domains (if they have a virtual topology,
> >          better not to change it);
> >        * consider this during memory migration (it can change the
> >          actual topology, should we update it on-line or disable memory
> >          migration?)
>
> The question is whether trading functionality for performance
> is an acceptable choice.
>
Indeed. Again, I think it is possible to implement things flexibly
enough, but then we need to come up with a sane default, so we cannot
avoid discussing and deciding on this.

One can argue that it is an issue only for big-enough guests (and/or
nearly overcommitted hosts) that don't fit in only one node (as, if they
do, there is no virtual topology to export), but I'm not sure we can
neglect them on this basis.

Thanks for the feedback,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-TDWXkHKQdr55u4QjssW5
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iEYEABECAAYFAlAagfoACgkQk4XaBE3IOsTVowCgmj1vYVaE4f8CI950Q4El8uNz
tdAAn1gzC0QgJGbtni2Ww595rP22Rpls
=iCsx
-----END PGP SIGNATURE-----

--=-TDWXkHKQdr55u4QjssW5--



--===============6772459713668143152==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6772459713668143152==--



From xen-devel-bounces@lists.xen.org Thu Aug 02 13:35:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 13:35:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwvYS-0006jl-Sn; Thu, 02 Aug 2012 13:35:08 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SwvYR-0006jT-SC
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 13:35:08 +0000
Received: from [85.158.143.99:4789] by server-3.bemta-4.messagelabs.com id
	AA/09-01511-B028A105; Thu, 02 Aug 2012 13:35:07 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-14.tower-216.messagelabs.com!1343914505!20320439!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13069 invoked from network); 2 Aug 2012 13:35:06 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-14.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 13:35:06 -0000
X-IronPort-AV: E=Sophos;i="4.77,701,1336348800"; d="scan'208";a="13823263"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 13:35:05 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 2 Aug 2012 14:35:05 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1SwvYP-0007j1-CG; Thu, 02 Aug 2012 13:35:05 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1SwvYP-0001wX-BI;
	Thu, 02 Aug 2012 14:35:05 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20506.33289.314238.601890@mariner.uk.xensource.com>
Date: Thu, 2 Aug 2012 14:35:05 +0100
To: Andrew Cooper <andrew.cooper3@citrix.com>
In-Reply-To: <5018F7ED.3010100@citrix.com>
References: <patchbomb.1343749916@andrewcoop.uk.xensource.com>
	<ae32690d0d740d3aba01.1343749920@andrewcoop.uk.xensource.com>
	<20504.3594.340410.435942@mariner.uk.xensource.com>
	<5018FBB60200007800091CCB@nat28.tlf.novell.com>
	<5018F7ED.3010100@citrix.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: "Keir \(Xen.org\)" <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	Jan Beulich <JBeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 4 of 5] xen/makefile: Allow XEN_CHANGESET to
 be set externally
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Andrew Cooper writes ("Re: [PATCH 4 of 5] xen/makefile: Allow XEN_CHANGESET to be set externally"):
> # compile.h contains dynamic build info. Rebuilt on every 'make'
> invocation.
> 
> include/xen/compile.h: include/xen/compile.h.in .banner
>         @sed -e 's/@@date@@/$(shell LC_ALL=C date)/g' \
> 
> So it should only be executed once (unless someone is messing around
> deleting compile.h)
> 
> Having said that, even if it were executed more than once, there is no
> reasonable circumstance during which the contents of XEN_CHANGESET
> should change.

Uh, "hg up <changeset>", surely?

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 13:35:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 13:35:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwvYj-0006lm-9d; Thu, 02 Aug 2012 13:35:25 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SwvYi-0006lX-AD
	for xen-devel@lists.xensource.com; Thu, 02 Aug 2012 13:35:24 +0000
Received: from [85.158.138.51:56191] by server-9.bemta-3.messagelabs.com id
	38/DE-27628-B128A105; Thu, 02 Aug 2012 13:35:23 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-15.tower-174.messagelabs.com!1343914522!28331206!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM3NzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16560 invoked from network); 2 Aug 2012 13:35:22 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-15.tower-174.messagelabs.com with SMTP;
	2 Aug 2012 13:35:22 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 02 Aug 2012 14:35:22 +0100
Message-Id: <501A9E60020000780009233D@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Thu, 02 Aug 2012 14:36:00 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Stefano Stabellini" <stefano.stabellini@eu.citrix.com>
References: <1321471508-31633-1-git-send-email-jean.guyader@eu.citrix.com>
	<1321471508-31633-2-git-send-email-jean.guyader@eu.citrix.com>
	<1321471508-31633-3-git-send-email-jean.guyader@eu.citrix.com>
	<1321471508-31633-4-git-send-email-jean.guyader@eu.citrix.com>
	<1321471508-31633-5-git-send-email-jean.guyader@eu.citrix.com>
	<alpine.DEB.2.02.1208011846510.4645@kaball.uk.xensource.com>
	<501A631D02000078000921B8@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208021357320.4645@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1208021357320.4645@kaball.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"Keir \(Xen.org\)" <keir@xen.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"Tim \(Xen.org\)" <tim@xen.org>,
	"allen.m.kay@intel.com" <allen.m.kay@intel.com>,
	JeanGuyader <jean.guyader@eu.citrix.com>,
	"Jean Guyader \(3P\)" <jean.guyader@citrix.com>,
	Attilio Rao <attilio.rao@citrix.com>
Subject: Re: [Xen-devel] Should we revert "mm: New XENMEM space,
 XENMAPSPACE_gmfn_range"?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 02.08.12 at 15:13, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
wrote:
> On Thu, 2 Aug 2012, Jan Beulich wrote:
>> >>> On 01.08.12 at 19:55, Stefano Stabellini <stefano.stabellini@eu.citrix.com> 
> wrote:
>> > I was reading more about this commit because this patch breaks the ABI
>> > on ARM, when I realized that on x86 there is no standard that specifies
>> > the alignment of fields in a struct.
>> 
>> There is - the psABI supplements to the SVR4 ABI.
> 
> Thank you very much, that document was exactly what I was looking for.
> 
> Also it explains where my confusion was coming from: Jean's patch doesn't
> break the ABI on ARM or x86, but I am carrying a patch in my patch queue
> that does (unless Jean's patch is applied):
> 
> http://marc.info/?l=xen-devel&m=134305777903771 
> 
> As you can see this patch splits .space into two shorts, and as a side
> effect changes the offset of .space, removing the padding.
> Thus it led me to think that Jean's patch was breaking the ABI when actually
> with "arm: initial XENMAPSPACE_gmfn_foreign" applied, it becomes
> required to keep the binary interface compatible.

And then you wouldn't need to split 'space' and break the ABI
at all, you could simply put 'size' and 'foreign_domid' into a union.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 13:36:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 13:36:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwvYy-0006ob-Mh; Thu, 02 Aug 2012 13:35:40 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SwvYx-0006oA-Ct
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 13:35:39 +0000
Received: from [85.158.139.83:48461] by server-11.bemta-5.messagelabs.com id
	E1/01-20400-A228A105; Thu, 02 Aug 2012 13:35:38 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-15.tower-182.messagelabs.com!1343914537!29860044!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20885 invoked from network); 2 Aug 2012 13:35:38 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 13:35:38 -0000
X-IronPort-AV: E=Sophos;i="4.77,701,1336348800"; d="scan'208,217";a="13823276"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 13:35:37 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 2 Aug 2012 14:35:37 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1SwvYv-0007jE-96; Thu, 02 Aug 2012 13:35:37 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1SwvYv-0001wm-8S;
	Thu, 02 Aug 2012 14:35:37 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20506.33318.752962.567037@mariner.uk.xensource.com>
Date: Thu, 2 Aug 2012 14:35:34 +0100
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1343814942.27221.45.camel@zakaz.uk.xensource.com>
References: <20502.48288.351664.168722@mariner.uk.xensource.com>
	<1343814942.27221.45.camel@zakaz.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] docs: document hotplug script protocol
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [Xen-devel] [PATCH] docs: document hotplug script protocol"):
> > diff --git a/docs/misc/xl-disk-configuration.txt b/docs/misc/xl-disk-configuration.txt
> > index 5da5e11..86c16be 100644
...
> > -are equivalent to "script=<script>".
> > +are equivalent to "script=block-<script>".
...
> Should I pull these hunks into my block-script patch?

That's a good idea.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 13:39:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 13:39:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwvcB-0007Hn-BR; Thu, 02 Aug 2012 13:38:59 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SwvcA-0007Hb-Jt
	for xen-devel@lists.xensource.com; Thu, 02 Aug 2012 13:38:58 +0000
Received: from [85.158.143.35:21776] by server-2.bemta-4.messagelabs.com id
	28/8D-17938-1F28A105; Thu, 02 Aug 2012 13:38:57 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1343914733!15831967!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1976 invoked from network); 2 Aug 2012 13:38:54 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 13:38:54 -0000
X-IronPort-AV: E=Sophos;i="4.77,701,1336348800"; d="scan'208";a="13823349"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 13:38:53 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 2 Aug 2012 14:38:53 +0100
Date: Thu, 2 Aug 2012 14:38:35 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Jan Beulich <JBeulich@suse.com>
In-Reply-To: <501A9E60020000780009233D@nat28.tlf.novell.com>
Message-ID: <alpine.DEB.2.02.1208021437510.4645@kaball.uk.xensource.com>
References: <1321471508-31633-1-git-send-email-jean.guyader@eu.citrix.com>
	<1321471508-31633-2-git-send-email-jean.guyader@eu.citrix.com>
	<1321471508-31633-3-git-send-email-jean.guyader@eu.citrix.com>
	<1321471508-31633-4-git-send-email-jean.guyader@eu.citrix.com>
	<1321471508-31633-5-git-send-email-jean.guyader@eu.citrix.com>
	<alpine.DEB.2.02.1208011846510.4645@kaball.uk.xensource.com>
	<501A631D02000078000921B8@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208021357320.4645@kaball.uk.xensource.com>
	<501A9E60020000780009233D@nat28.tlf.novell.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"Keir \(Xen.org\)" <keir@xen.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"Tim \(Xen.org\)" <tim@xen.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"allen.m.kay@intel.com" <allen.m.kay@intel.com>,
	JeanGuyader <jean.guyader@eu.citrix.com>,
	"Jean Guyader \(3P\)" <jean.guyader@citrix.com>,
	Attilio Rao <attilio.rao@citrix.com>
Subject: Re: [Xen-devel] Should we revert "mm: New XENMEM space,
 XENMAPSPACE_gmfn_range"?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2 Aug 2012, Jan Beulich wrote:
> >>> On 02.08.12 at 15:13, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> wrote:
> > On Thu, 2 Aug 2012, Jan Beulich wrote:
> >> >>> On 01.08.12 at 19:55, Stefano Stabellini <stefano.stabellini@eu.citrix.com> 
> > wrote:
> >> > I was reading more about this commit because this patch breaks the ABI
> >> > on ARM, when I realized that on x86 there is no standard that specifies
> >> > the alignment of fields in a struct.
> >> 
> >> There is - the psABI supplements to the SVR4 ABI.
> > 
> > Thank you very much, that document was exactly what I was looking for.
> > 
> > Also it explains where my confusion was coming from: Jean's patch doesn't
> > break the ABI on ARM or x86, but I am carrying a patch in my patch queue
> > that does (unless Jean's patch is applied):
> > 
> > http://marc.info/?l=xen-devel&m=134305777903771 
> > 
> > As you can see this patch splits .space into two shorts, and as a side
> > effect changes the offset of .space, removing the padding.
> > Thus it led me to think that Jean's patch was breaking the ABI when actually
> > with "arm: initial XENMAPSPACE_gmfn_foreign" applied, it becomes
> > required to keep the binary interface compatible.
> 
> And then you wouldn't need to split 'space' and break the ABI
> at all, you could simply put 'size' and 'foreign_domid' into a union.

Yes, that's a good suggestion.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 13:39:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 13:39:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwvcB-0007Hn-BR; Thu, 02 Aug 2012 13:38:59 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SwvcA-0007Hb-Jt
	for xen-devel@lists.xensource.com; Thu, 02 Aug 2012 13:38:58 +0000
Received: from [85.158.143.35:21776] by server-2.bemta-4.messagelabs.com id
	28/8D-17938-1F28A105; Thu, 02 Aug 2012 13:38:57 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1343914733!15831967!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1976 invoked from network); 2 Aug 2012 13:38:54 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 13:38:54 -0000
X-IronPort-AV: E=Sophos;i="4.77,701,1336348800"; d="scan'208";a="13823349"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 13:38:53 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 2 Aug 2012 14:38:53 +0100
Date: Thu, 2 Aug 2012 14:38:35 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Jan Beulich <JBeulich@suse.com>
In-Reply-To: <501A9E60020000780009233D@nat28.tlf.novell.com>
Message-ID: <alpine.DEB.2.02.1208021437510.4645@kaball.uk.xensource.com>
References: <1321471508-31633-1-git-send-email-jean.guyader@eu.citrix.com>
	<1321471508-31633-2-git-send-email-jean.guyader@eu.citrix.com>
	<1321471508-31633-3-git-send-email-jean.guyader@eu.citrix.com>
	<1321471508-31633-4-git-send-email-jean.guyader@eu.citrix.com>
	<1321471508-31633-5-git-send-email-jean.guyader@eu.citrix.com>
	<alpine.DEB.2.02.1208011846510.4645@kaball.uk.xensource.com>
	<501A631D02000078000921B8@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208021357320.4645@kaball.uk.xensource.com>
	<501A9E60020000780009233D@nat28.tlf.novell.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"Keir \(Xen.org\)" <keir@xen.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"Tim \(Xen.org\)" <tim@xen.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"allen.m.kay@intel.com" <allen.m.kay@intel.com>,
	JeanGuyader <jean.guyader@eu.citrix.com>,
	"Jean Guyader \(3P\)" <jean.guyader@citrix.com>,
	Attilio Rao <attilio.rao@citrix.com>
Subject: Re: [Xen-devel] Should we revert "mm: New XENMEM space,
 XENMAPSPACE_gmfn_range"?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2 Aug 2012, Jan Beulich wrote:
> >>> On 02.08.12 at 15:13, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> wrote:
> > On Thu, 2 Aug 2012, Jan Beulich wrote:
> >> >>> On 01.08.12 at 19:55, Stefano Stabellini <stefano.stabellini@eu.citrix.com> 
> > wrote:
> >> > I was reading more about this commit because this patch breaks the ABI
> >> > on ARM, when I realized that on x86 there is no standard that specifies
> >> > the alignment of fields in a struct.
> >> 
> >> There is - the psABI supplements to the SVR4 ABI.
> > 
> > Thank you very much, that document was exactly what I was looking for.
> > 
> > Also it explains where my confusion was coming from: Jean's patch doesn't
> > break the ABI on ARM or x86, but I am carrying a patch in my patch queue
> > that does (unless Jean's patch is applied):
> > 
> > http://marc.info/?l=xen-devel&m=134305777903771 
> > 
> > As you can see, this patch splits .space into two shorts and, as a side
> > effect, changes the offset of .space by removing the padding.
> > That led me to think that Jean's patch was breaking the ABI, when in fact,
> > with "arm: initial XENMAPSPACE_gmfn_foreign" applied, Jean's patch becomes
> > required to keep the binary interface compatible.
> 
> And then you wouldn't need to split 'space' and break the ABI
> at all; you could simply put 'size' and 'foreign_domid' into a union.

Yes, that's a good suggestion.
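To make the suggestion concrete, here is an illustrative C sketch (not the actual Xen public header — the struct names, typedefs, and 64-bit `xen_ulong_t` are assumptions for demonstration) showing that overlaying 'size' and 'foreign_domid' in a union keeps every other field at its old offset, whereas splitting 'space' would not:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef uint16_t domid_t;
typedef uint64_t xen_ulong_t;   /* assumption: 64-bit guest ABI */

/* Layout with Jean's 'size' field: 2 bytes of 'size' where padding
 * used to be, before 'space'. */
struct xatp_with_size {
    domid_t domid;
    uint16_t size;
    unsigned int space;         /* XENMAPSPACE_* */
    xen_ulong_t idx;
    xen_ulong_t gpfn;
};

/* Jan's suggestion: overlay 'foreign_domid' on 'size' in a union, so
 * every field keeps its offset and the binary interface is unchanged. */
struct xatp_with_union {
    domid_t domid;
    union {
        uint16_t size;          /* for XENMAPSPACE_gmfn_range */
        domid_t foreign_domid;  /* for XENMAPSPACE_gmfn_foreign */
    } u;
    unsigned int space;
    xen_ulong_t idx;
    xen_ulong_t gpfn;
};

/* Returns 1 if the union variant preserves the existing layout. */
int layout_compatible(void)
{
    return sizeof(struct xatp_with_size) == sizeof(struct xatp_with_union)
        && offsetof(struct xatp_with_size, space)
           == offsetof(struct xatp_with_union, space)
        && offsetof(struct xatp_with_size, idx)
           == offsetof(struct xatp_with_union, idx);
}
```

Since the union member has the same size and alignment as the field it replaces, callers compiled against either header see identical offsets.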

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 13:40:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 13:40:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swvdb-0007RU-6J; Thu, 02 Aug 2012 13:40:27 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1SwvdZ-0007Qm-8l
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 13:40:25 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-5.tower-27.messagelabs.com!1343914818!9475054!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17329 invoked from network); 2 Aug 2012 13:40:19 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-5.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 2 Aug 2012 13:40:19 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1SwvdR-0003hZ-Hn; Thu, 02 Aug 2012 13:40:17 +0000
Date: Thu, 2 Aug 2012 14:40:17 +0100
From: Tim Deegan <tim@xen.org>
To: Christoph Egger <Christoph.Egger@amd.com>
Message-ID: <20120802134017.GH11437@ocelot.phlegethon.org>
References: <5008166B.6010603@amd.com>
	<20120726182111.GB4135@ocelot.phlegethon.org>
	<5012822E.2030603@amd.com>
	<20120802104524.GA11437@ocelot.phlegethon.org>
	<501A736A.7080300@amd.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <501A736A.7080300@amd.com>
User-Agent: Mutt/1.4.2.1i
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] nestedhvm: fix write access fault on ro
	mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 14:32 +0200 on 02 Aug (1343917962), Christoph Egger wrote:
> > It must be _possible_; after all we managed to detect the error. :)  In
> > any case it's definitely wrong to carry on with this handler with the
> > wrong address in hand.  So I wonder why this patch actually works for
> > you.  Does replacing the 'break' above with 'return 1' also fix the
> > problem?
> 
> No. Two things have to happen:
> 
> 1. Calling paging_mark_dirty() and
> 2. using the same p2mt from the hostp2m in the nestedp2m.
> 
> 
> > In the short term, do you only care about pages that are read-only for
> > log-dirty tracking?  For the L1 walk, that should be handled by the PT
> > walker's own calls to paging_mark_dirty(), and the nested-p2m handler
> > could potentially take care of the other case by calling
> > paging_mark_dirty() (for writes!) before calling nestedhap_walk_L0_p2m().
> 
> Ok, I consider this a performance improvement rather than a bugfix.
> 
> New version is attached.

Applied, thanks.

Tim.
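A toy model of the ordering being discussed (all names here are invented stand-ins, not Xen code): on a nested write fault, mark the page dirty in the host's log-dirty tracking *before* the L0 p2m walk, so the walk no longer trips over the read-only log-dirty mapping:

```c
#include <assert.h>
#include <stdint.h>

#define NPAGES 16

static uint8_t dirty_bitmap[NPAGES];  /* toy host log-dirty bitmap */
static int page_readonly[NPAGES];     /* 1 while log-dirty keeps the page r/o */

/* Toy stand-in for paging_mark_dirty(): record the write and make the
 * page writable again. */
static void mark_dirty(unsigned long gfn)
{
    dirty_bitmap[gfn] = 1;
    page_readonly[gfn] = 0;
}

/* Toy stand-in for nestedhap_walk_L0_p2m(): fails on a write to a page
 * that log-dirty still maps read-only. */
static int walk_l0_p2m(unsigned long gfn, int is_write)
{
    return !(is_write && page_readonly[gfn]);
}

/* The ordering suggested above: for writes, mark dirty before walking. */
static int nested_fault(unsigned long gfn, int is_write)
{
    if (is_write)
        mark_dirty(gfn);
    return walk_l0_p2m(gfn, is_write);
}

/* Self-check: a write to a log-dirty read-only page now succeeds and
 * leaves the page marked dirty and writable. */
int demo(void)
{
    page_readonly[5] = 1;
    return nested_fault(5, 1) && dirty_bitmap[5] && !page_readonly[5];
}
```

Marking first resolves the read-only condition, which is why the walk then sees a consistent, writable mapping.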

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 13:45:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 13:45:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swvhi-0007iH-S1; Thu, 02 Aug 2012 13:44:42 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Swvhi-0007iB-8x
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 13:44:42 +0000
Received: from [85.158.139.83:9328] by server-6.bemta-5.messagelabs.com id
	FE/A8-11348-9448A105; Thu, 02 Aug 2012 13:44:41 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-15.tower-182.messagelabs.com!1343915080!29861880!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31117 invoked from network); 2 Aug 2012 13:44:41 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 13:44:41 -0000
X-IronPort-AV: E=Sophos;i="4.77,701,1336348800"; d="scan'208";a="13823473"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 13:44:40 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 2 Aug 2012 14:44:40 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Swvhg-0007qd-Fi; Thu, 02 Aug 2012 13:44:40 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Swvhg-00022s-F0;
	Thu, 02 Aug 2012 14:44:40 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20506.33864.452690.133475@mariner.uk.xensource.com>
Date: Thu, 2 Aug 2012 14:44:40 +0100
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1343821631.27221.75.camel@zakaz.uk.xensource.com>
References: <20504.7228.128503.451291@mariner.uk.xensource.com>
	<1343821631.27221.75.camel@zakaz.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v3] libxl: enforce prohibitions of internal
 callers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [Xen-devel] [PATCH v3] libxl: enforce prohibitions of internal callers"):
> On Tue, 2012-07-31 at 18:56 +0100, Ian Jackson wrote:
> > Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
> > Cc: Roger Pau Monne <roger.pau@citrix.com>
> > Acked-by: Ian Campbell <ian.campbell@citrix.com>
> 
> Applied.
> 
> > -
> 
> Can you make this "---" ? Then git am does the right thing...

Not easily, no.  My processing stream would filter it out before I
posted it.  And sadly that's in the topgit linearize step which isn't
very configurable.  (I'm writing a replacement for topgit but it's
going to be a while still.)

I guess I could routinely use git-filter-branch on the results of
linearize to turn "-" into "---" but my own git-to-hg (upstream hat)
commit processing stream removes things after "-" too.
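The git-filter-branch workaround could look roughly like this (a sketch on a throwaway repository; the commit message and paths are invented for demonstration): rewrite each commit message so a lone "-" separator becomes "---", which git-am/git-mailinfo treat as the end of the message proper.

```shell
set -e
export FILTER_BRANCH_SQUELCH_WARNING=1  # newer git warns (and sleeps) otherwise

# Throwaway demo repository with one commit whose message ends in "-".
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.org
git config user.name "Demo User"
echo data > file
git add file
git commit -q -m "$(printf 'libxl: example patch\n\n-\nM  tools/libxl/libxl.c')"

# Turn the lone "-" line into "---" in every commit message on HEAD.
git filter-branch -f --msg-filter "sed 's/^-\$/---/'" HEAD >/dev/null

git log -1 --format=%B
```

The subject and everything before the separator are untouched; only the bare "-" line is rewritten.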

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 13:45:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 13:45:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swvhm-0007ih-8E; Thu, 02 Aug 2012 13:44:46 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andre.Przywara@amd.com>) id 1Swvhl-0007iU-72
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 13:44:45 +0000
Received: from [85.158.143.99:45177] by server-3.bemta-4.messagelabs.com id
	D1/EA-01511-C448A105; Thu, 02 Aug 2012 13:44:44 +0000
X-Env-Sender: Andre.Przywara@amd.com
X-Msg-Ref: server-15.tower-216.messagelabs.com!1343915082!29344056!1
X-Originating-IP: [216.32.180.16]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20948 invoked from network); 2 Aug 2012 13:44:43 -0000
Received: from va3ehsobe006.messaging.microsoft.com (HELO
	va3outboundpool.messaging.microsoft.com) (216.32.180.16)
	by server-15.tower-216.messagelabs.com with AES128-SHA encrypted SMTP;
	2 Aug 2012 13:44:43 -0000
Received: from mail208-va3-R.bigfish.com (10.7.14.252) by
	VA3EHSOBE013.bigfish.com (10.7.40.63) with Microsoft SMTP Server id
	14.1.225.23; Thu, 2 Aug 2012 13:44:42 +0000
Received: from mail208-va3 (localhost [127.0.0.1])	by
	mail208-va3-R.bigfish.com (Postfix) with ESMTP id 6AA41780535;
	Thu,  2 Aug 2012 13:44:42 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:163.181.249.108; KIP:(null); UIP:(null);
	IPV:NLI; H:ausb3twp01.amd.com; RD:none; EFVD:NLI
X-SpamScore: 0
X-BigFish: VPS0(zzzz1202hzzz2dh668h839hd25he5bhf0ah107ah)
Received: from mail208-va3 (localhost.localdomain [127.0.0.1]) by mail208-va3
	(MessageSwitch) id 1343915080102676_8836;
	Thu,  2 Aug 2012 13:44:40 +0000 (UTC)
Received: from VA3EHSMHS002.bigfish.com (unknown [10.7.14.249])	by
	mail208-va3.bigfish.com (Postfix) with ESMTP id 0D77472004A;
	Thu,  2 Aug 2012 13:44:40 +0000 (UTC)
Received: from ausb3twp01.amd.com (163.181.249.108) by
	VA3EHSMHS002.bigfish.com (10.7.99.12) with Microsoft SMTP Server id
	14.1.225.23; Thu, 2 Aug 2012 13:44:39 +0000
X-WSS-ID: 0M84RID-01-9IX-02
X-M-MSG: 
Received: from sausexedgep01.amd.com (sausexedgep01-ext.amd.com
	[163.181.249.72])	(using TLSv1 with cipher AES128-SHA (128/128
	bits))	(No
	client certificate requested)	by ausb3twp01.amd.com (Axway MailGate
	3.8.1)
	with ESMTP id 2981C10280B3;	Thu,  2 Aug 2012 08:44:37 -0500 (CDT)
Received: from SAUSEXDAG05.amd.com (163.181.55.6) by sausexedgep01.amd.com
	(163.181.36.54) with Microsoft SMTP Server (TLS) id 8.3.192.1;
	Thu, 2 Aug 2012 08:44:54 -0500
Received: from storexhtp01.amd.com (172.24.4.3) by sausexdag05.amd.com
	(163.181.55.6) with Microsoft SMTP Server (TLS) id 14.1.323.3;
	Thu, 2 Aug 2012 08:44:37 -0500
Received: from gwo.osrc.amd.com (165.204.16.204) by storexhtp01.amd.com
	(172.24.4.3) with Microsoft SMTP Server id 8.3.213.0; Thu, 2 Aug 2012
	09:44:36 -0400
Received: from mail.osrc.amd.com (aluminium.osrc.amd.com [165.204.15.141])	by
	gwo.osrc.amd.com (Postfix) with ESMTP id 6E45D49C69B;
	Thu,  2 Aug 2012 14:44:35 +0100 (BST)
Received: from [165.204.15.38] (wanderer.osrc.amd.com [165.204.15.38])	by
	mail.osrc.amd.com (Postfix) with ESMTPS id 4EA51594037; Thu,  2 Aug 2012
	15:44:35 +0200 (CEST)
Message-ID: <501A83A6.2060000@amd.com>
Date: Thu, 2 Aug 2012 15:41:58 +0200
From: Andre Przywara <andre.przywara@amd.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:13.0) Gecko/20120615 Thunderbird/13.0.1
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>, Ian Jackson
	<Ian.Jackson@eu.citrix.com>
X-OriginatorOrg: amd.com
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: [Xen-devel] auto-ballooning crashing Dom0?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi,

during some experiments with many guests I get crashing Dom0s because of
too little memory. Actually the OOM killer goes around and kills random
things, preferably qemu-dm's ;-)
The box in question has 128GB of memory; I start with dom0_mem=8192M (or
16384M, it doesn't matter). I also used "dom0_mem=8192M,min:1536M", but
that didn't make any difference. Xen is c/s 25688.

Then I start some guests with 2GB each. This works fine until about 55
guests; then I get some denials from xl when starting guests (which would
be OK). But sometimes the guest start works (even after having failed
before), and it has obviously ripped precious memory away from Dom0: with
around 55 guests, Dom0 has only about 500MB in use.
The whole Dom0 is in trouble then; I get "fork: cannot allocate memory"
messages for a simple "ls" and have to reboot the box.
This is with xl.conf:autoballooning=1 (i.e. the commented-out default).
Setting it to 0 works, but that is obviously not a real option as a default.

I found the hardcoded 128MB limit in libxl_internal.h; I guess this is
way too small for this type of machine.

Either we change this to something higher (768MB worked for me) or we
make it a config option in xl.conf (like it was in xend-config.sxp).

Another option would be to make it dynamic: look at the memory actually
in use in Dom0 and don't balloon down below 110% or so of that.
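That dynamic scheme could be sketched as follows (a minimal illustration with invented names, not the real libxl interface; sizes in KiB and the 110% margin are taken as assumptions from the text above):

```c
#include <assert.h>
#include <stdint.h>

#define MARGIN_PERCENT 110  /* never balloon below 110% of current usage */

/* Hypothetical helper: given how much memory Dom0 currently uses and
 * the configured static minimum (e.g. libxl's 128MB == 131072 KiB),
 * return the lowest target auto-ballooning should ever pick.
 * All sizes are in KiB; 'used_kib' would come from Dom0's own
 * accounting (e.g. MemTotal - MemFree). */
uint64_t dom0_balloon_floor(uint64_t used_kib, uint64_t configured_min_kib)
{
    uint64_t dynamic_floor = used_kib * MARGIN_PERCENT / 100;
    return dynamic_floor > configured_min_kib ? dynamic_floor
                                              : configured_min_kib;
}
```

With 8GB in active use the floor would be about 8.8GB, so Dom0 is never squeezed below what it actually needs, while an idle Dom0 still falls back to the static minimum.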

Sadly (well..) I am about to leave for vacation, so no patch this time;
I leave this as an exercise for the tool buffs ;-)

In any case we should still do something for Xen 4.2, as I guess people
dislike a crashing Dom0 tearing down all the domains with it...

Regards,
Andre.

-- 
Andre Przywara
AMD-Operating System Research Center (OSRC), Dresden, Germany


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 13:46:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 13:46:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swvil-0007pj-P7; Thu, 02 Aug 2012 13:45:47 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1Swvik-0007pN-CB
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 13:45:46 +0000
Received: from [85.158.138.51:3751] by server-6.bemta-3.messagelabs.com id
	41/F7-20447-9848A105; Thu, 02 Aug 2012 13:45:45 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-8.tower-174.messagelabs.com!1343915144!30046081!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22466 invoked from network); 2 Aug 2012 13:45:44 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-8.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 2 Aug 2012 13:45:44 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1Swvih-0003j3-6c; Thu, 02 Aug 2012 13:45:43 +0000
Date: Thu, 2 Aug 2012 14:45:43 +0100
From: Tim Deegan <tim@xen.org>
To: Christoph Egger <Christoph.Egger@amd.com>
Message-ID: <20120802134543.GI11437@ocelot.phlegethon.org>
References: <5017FBB0.7060500@amd.com>
	<20120802111915.GF11437@ocelot.phlegethon.org>
	<501A6478.8070605@amd.com>
	<20120802113500.GG11437@ocelot.phlegethon.org>
	<501A6F3B.9060102@amd.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <501A6F3B.9060102@amd.com>
User-Agent: Mutt/1.4.2.1i
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] nestedhvm: do not translate INVALID_GFN
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 14:14 +0200 on 02 Aug (1343916891), Christoph Egger wrote:
> > Yes, I think that is what the caller expects -- the error code is made
> > up from the pagetable walk rather than from the p2m table.
> > 
> > Can I take that as an ack?
> 
> Yes.

Thanks; I've applied it.

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 13:47:42 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 13:47:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swvk9-0007zi-Dy; Thu, 02 Aug 2012 13:47:13 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1Swvk8-0007zM-5n
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 13:47:12 +0000
Received: from [85.158.143.99:4336] by server-2.bemta-4.messagelabs.com id
	AA/CB-17938-FD48A105; Thu, 02 Aug 2012 13:47:11 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1343915228!22913010!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNjkwMjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13867 invoked from network); 2 Aug 2012 13:47:10 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 13:47:10 -0000
X-IronPort-AV: E=Sophos;i="4.77,701,1336363200"; d="scan'208";a="203932087"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 09:47:08 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 2 Aug 2012 09:47:08 -0400
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1Swvk3-0001Rv-Pn;
	Thu, 02 Aug 2012 14:47:07 +0100
Message-ID: <501A84DB.3040101@citrix.com>
Date: Thu, 2 Aug 2012 14:47:07 +0100
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120714 Thunderbird/14.0
MIME-Version: 1.0
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
References: <patchbomb.1343749916@andrewcoop.uk.xensource.com>
	<ae32690d0d740d3aba01.1343749920@andrewcoop.uk.xensource.com>
	<20504.3594.340410.435942@mariner.uk.xensource.com>
	<5018FBB60200007800091CCB@nat28.tlf.novell.com>	<5018F7ED.3010100@citrix.com>
	<20506.33289.314238.601890@mariner.uk.xensource.com>
In-Reply-To: <20506.33289.314238.601890@mariner.uk.xensource.com>
X-Enigmail-Version: 1.4.3
Cc: "Keir \(Xen.org\)" <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	Jan Beulich <JBeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 4 of 5] xen/makefile: Allow XEN_CHANGESET to
	be set externally
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/08/12 14:35, Ian Jackson wrote:
> Andrew Cooper writes ("Re: [PATCH 4 of 5] xen/makefile: Allow XEN_CHANGESET to be set externally"):
>> # compile.h contains dynamic build info. Rebuilt on every 'make'
>> invocation.
>>
>> include/xen/compile.h: include/xen/compile.h.in .banner
>>         @sed -e 's/@@date@@/$(shell LC_ALL=C date)/g' \
>>
>> So it should only be executed once (unless someone is messing around
>> deleting compile.h)
>>
>> Having said that, even if it were executed more than once, there is no
>> reasonable circumstance during which the contents of XEN_CHANGESET
>> should change.
> Uh, "hg up <changeset>", surely ?
>
> Ian.

Sure, but compile.h is regenerated on every invocation of make.

I should have stated that during an individual build there are no
reasonable circumstances.  Running an hg update during a build is asking
for trouble.

-- 
Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
T: +44 (0)1223 225 900, http://www.citrix.com


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 13:50:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 13:50:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwvnB-0008Hl-2N; Thu, 02 Aug 2012 13:50:21 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SwvnA-0008HZ-3Q
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 13:50:20 +0000
Received: from [85.158.139.83:49276] by server-9.bemta-5.messagelabs.com id
	83/62-01069-B958A105; Thu, 02 Aug 2012 13:50:19 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-182.messagelabs.com!1343915418!29950040!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14454 invoked from network); 2 Aug 2012 13:50:18 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-12.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 13:50:18 -0000
X-IronPort-AV: E=Sophos;i="4.77,701,1336348800"; d="scan'208";a="13823596"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 13:50:18 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Thu, 2 Aug 2012
	14:50:18 +0100
Message-ID: <1343915416.27221.157.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Thu, 2 Aug 2012 14:50:16 +0100
In-Reply-To: <20506.33864.452690.133475@mariner.uk.xensource.com>
References: <20504.7228.128503.451291@mariner.uk.xensource.com>
	<1343821631.27221.75.camel@zakaz.uk.xensource.com>
	<20506.33864.452690.133475@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v3] libxl: enforce prohibitions of internal
 callers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-02 at 14:44 +0100, Ian Jackson wrote:
> Ian Campbell writes ("Re: [Xen-devel] [PATCH v3] libxl: enforce prohibitions of internal callers"):
> > On Tue, 2012-07-31 at 18:56 +0100, Ian Jackson wrote:
> > > Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
> > > Cc: Roger Pau Monne <roger.pau@citrix.com>
> > > Acked-by: Ian Campbell <ian.campbell@citrix.com>
> > 
> > Applied.
> > 
> > > -
> > 
> > Can you make this "---" ? Then git am does the right thing...
> 
> Not easily, no.  My processing stream would filter it out before I
> posted it.  And sadly that's in the topgit linearize step which isn't
> very configurable.  (I'm writing a replacement for topgit but it's
> going to be a while still.)
> 
> I guess I could routinely use git-filter-branch on the results of
> linearize to turn "-" into "---" but my own git-to-hg (upstream hat)
> commit processing stream removes things after "-" too.

Your git2hgapply script? I'm using that but perhaps an old version? Or
maybe I just need to trust it...

> 
> Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 14:07:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 14:07:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sww2n-00009j-Ju; Thu, 02 Aug 2012 14:06:29 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Sww2l-00009e-PL
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 14:06:27 +0000
Received: from [85.158.143.99:18994] by server-1.bemta-4.messagelabs.com id
	4C/84-24392-3698A105; Thu, 02 Aug 2012 14:06:27 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1343916386!29377918!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM3NzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 478 invoked from network); 2 Aug 2012 14:06:26 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-3.tower-216.messagelabs.com with SMTP;
	2 Aug 2012 14:06:26 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 02 Aug 2012 15:06:25 +0100
Message-Id: <501AA5AB0200007800092399@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Thu, 02 Aug 2012 15:07:07 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Dario Faggioli" <raistlin@linux.it>
References: <1343837796.4958.32.camel@Solace>
	<501A67C502000078000921FF@nat28.tlf.novell.com>
	<1343914490.4873.18.camel@Solace>
In-Reply-To: <1343914490.4873.18.camel@Solace>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Andre Przywara <andre.przywara@amd.com>,
	Anil Madhavapeddy <anil@recoil.org>, George Dunlap <dunlapg@gmail.com>,
	xen-devel <xen-devel@lists.xen.org>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Yang Z Zhang <yang.z.zhang@intel.com>
Subject: Re: [Xen-devel] NUMA TODO-list for xen-devel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 02.08.12 at 15:34, Dario Faggioli <raistlin@linux.it> wrote:
> On Thu, 2012-08-02 at 10:43 +0100, Jan Beulich wrote:
>> >>> On 01.08.12 at 18:16, Dario Faggioli <raistlin@linux.it> wrote:
>> >     - Virtual NUMA topology exposure to guests (a.k.a guest-numa). If a
>> >       guest ends up on more than one node, make sure it knows it's
>> >       running on a NUMA platform (smaller than the actual host, but
>> >       still NUMA). This interacts with some of the above points:
>> 
>> The question is whether this is really useful beyond the (I would
>> suppose) relatively small set of cases where migration isn't
>> needed.
>> 
> Mmm... Not sure I'm getting what you're saying here, sorry. Are you
> suggesting that exposing a virtual topology is not a good idea as it
> poses constraints/prevents live migration?

Yes.

> If yes, well, I mostly agree that this is a huge issue, and that's why
> I think we need some bright idea on how to deal with it. I mean, it's
> easy to make it optional and let it automatically disable migration,
> giving users the choice what they prefer, but I think this is more
> dodging the problem than dealing with it! :-P

Indeed.

>> >        * consider this during automatic placement for
>> >          resuming/migrating domains (if they have a virtual topology,
>> >          better not to change it);
>> >        * consider this during memory migration (it can change the
>> >          actual topology, should we update it on-line or disable memory
>> >          migration?)
>> 
>> The question is whether trading functionality for performance
>> is an acceptable choice.
>> 
> Indeed. Again, I think it is possible to implement things flexibly
> enough, but then we need to come out with a sane default, so we're not
> allowed to avoid discussing and deciding on this.
> 
> One can argue that it is an issue only for big-enough guests (and/or
> nearly overcommitted hosts) that don't fit in only one node (as, if they
> do, there is no virtual topology to export), but I'm not sure we can
> neglect them on this basis.

We certainly can't, especially since the "big enough" case may not
be that infrequent going forward.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 14:10:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 14:10:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sww5l-0000HP-6Y; Thu, 02 Aug 2012 14:09:33 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Sww5j-0000H7-6t
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 14:09:31 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1343916565!11947848!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8035 invoked from network); 2 Aug 2012 14:09:25 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 14:09:25 -0000
X-IronPort-AV: E=Sophos;i="4.77,701,1336348800"; d="scan'208";a="13824044"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 14:09:03 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 2 Aug 2012 15:09:03 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Sww5H-0007yb-5p; Thu, 02 Aug 2012 14:09:03 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Sww5H-0002Fv-4x;
	Thu, 02 Aug 2012 15:09:03 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20506.35327.136486.104150@mariner.uk.xensource.com>
Date: Thu, 2 Aug 2012 15:09:03 +0100
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1343897960.27221.105.camel@zakaz.uk.xensource.com>
References: <1343838260-17725-1-git-send-email-ian.jackson@eu.citrix.com>
	<1343838260-17725-12-git-send-email-ian.jackson@eu.citrix.com>
	<1343897960.27221.105.camel@zakaz.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 11/11] libxl: -Wunused-parameter
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [PATCH 11/11] libxl: -Wunused-parameter"):
> libxl: idl: always initialise the KeyedEnum keyvar in the member init function

Thanks.

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

I have incorporated this into my series.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 14:18:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 14:18:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwwE9-0000VC-6O; Thu, 02 Aug 2012 14:18:13 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SwwE7-0000V7-OF
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 14:18:11 +0000
Received: from [85.158.143.35:26946] by server-2.bemta-4.messagelabs.com id
	95/82-17938-32C8A105; Thu, 02 Aug 2012 14:18:11 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1343917090!17102540!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16096 invoked from network); 2 Aug 2012 14:18:10 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 14:18:10 -0000
X-IronPort-AV: E=Sophos;i="4.77,701,1336348800"; d="scan'208";a="13824313"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 14:16:09 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 2 Aug 2012 15:16:09 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1SwwC9-000813-G3; Thu, 02 Aug 2012 14:16:09 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1SwwC9-0002Rq-FA;
	Thu, 02 Aug 2012 15:16:09 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20506.35753.456647.344782@mariner.uk.xensource.com>
Date: Thu, 2 Aug 2012 15:16:09 +0100
To: Ian Campbell <ian.campbell@citrix.com>
In-Reply-To: <e638a0aeb9856003661a.1343898018@cosworth.uk.xensource.com>
References: <e638a0aeb9856003661a.1343898018@cosworth.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] libxl: prefix *.for-check with _ to mark it
 as a generated file
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("[PATCH] libxl: prefix *.for-check with _ to mark it as a generated file"):
> libxl: prefix *.for-check with _ to mark it as a generated file.
> 
> Keeps it out of my greps etc.

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 14:19:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 14:19:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwwEi-0000XW-Jp; Thu, 02 Aug 2012 14:18:48 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SwwEh-0000X0-3w
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 14:18:47 +0000
Received: from [85.158.143.99:30132] by server-1.bemta-4.messagelabs.com id
	7E/5B-24392-44C8A105; Thu, 02 Aug 2012 14:18:44 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-10.tower-216.messagelabs.com!1343917121!23567623!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11722 invoked from network); 2 Aug 2012 14:18:44 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-10.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 14:18:44 -0000
X-IronPort-AV: E=Sophos;i="4.77,701,1336348800"; d="scan'208";a="13824354"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 14:17:42 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 2 Aug 2012 15:17:42 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1SwwDd-00081i-VK; Thu, 02 Aug 2012 14:17:41 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1SwwDd-0002SC-Ub;
	Thu, 02 Aug 2012 15:17:41 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20506.35845.933539.770104@mariner.uk.xensource.com>
Date: Thu, 2 Aug 2012 15:17:41 +0100
To: Ian Campbell <ian.campbell@citrix.com>
In-Reply-To: <075da4778b0a1a84680e.1343899357@cosworth.uk.xensource.com>
References: <075da4778b0a1a84680e.1343899357@cosworth.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] libxl: const correctness for
	libxl__xs_path_cleanup
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("[PATCH] libxl: const correctness for libxl__xs_path_cleanup"):
> libxl: const correctness for libxl__xs_path_cleanup
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> 
> diff -r e638a0aeb985 -r 075da4778b0a tools/libxl/libxl_device.c
> --- a/tools/libxl/libxl_device.c	Thu Aug 02 09:59:33 2012 +0100
> +++ b/tools/libxl/libxl_device.c	Thu Aug 02 10:22:28 2012 +0100
> @@ -523,7 +523,7 @@ DEFINE_DEVICES_ADD(nic)
>  int libxl__device_destroy(libxl__gc *gc, libxl__device *dev)
>  {
>      char *be_path = libxl__device_backend_path(gc, dev);
> -    char *fe_path = libxl__device_frontend_path(gc, dev);
> +    const char *fe_path = libxl__device_frontend_path(gc, dev);

I don't understand why this is needed (or a good thing) for one but
not the other.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 14:22:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 14:22:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwwHm-0000in-72; Thu, 02 Aug 2012 14:21:58 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SwwHk-0000iG-Iq
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 14:21:56 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1343917308!3564769!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2331 invoked from network); 2 Aug 2012 14:21:48 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 14:21:48 -0000
X-IronPort-AV: E=Sophos;i="4.77,701,1336348800"; d="scan'208";a="13824550"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 14:21:46 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Thu, 2 Aug 2012
	15:21:46 +0100
Message-ID: <1343917305.27221.159.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Thu, 2 Aug 2012 15:21:45 +0100
In-Reply-To: <20506.35845.933539.770104@mariner.uk.xensource.com>
References: <075da4778b0a1a84680e.1343899357@cosworth.uk.xensource.com>
	<20506.35845.933539.770104@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] libxl: const correctness for
	libxl__xs_path_cleanup
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-02 at 15:17 +0100, Ian Jackson wrote:
> Ian Campbell writes ("[PATCH] libxl: const correctness for libxl__xs_path_cleanup"):
> > libxl: const correctness for libxl__xs_path_cleanup
> > 
> > Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> > 
> > diff -r e638a0aeb985 -r 075da4778b0a tools/libxl/libxl_device.c
> > --- a/tools/libxl/libxl_device.c	Thu Aug 02 09:59:33 2012 +0100
> > +++ b/tools/libxl/libxl_device.c	Thu Aug 02 10:22:28 2012 +0100
> > @@ -523,7 +523,7 @@ DEFINE_DEVICES_ADD(nic)
> >  int libxl__device_destroy(libxl__gc *gc, libxl__device *dev)
> >  {
> >      char *be_path = libxl__device_backend_path(gc, dev);
> > -    char *fe_path = libxl__device_frontend_path(gc, dev);
> > +    const char *fe_path = libxl__device_frontend_path(gc, dev);
> 
> I don't understand why this is needed (or a good thing) for one but
> not the other.

be_path becomes const in the next patch, after the non-const user is
removed.




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 14:23:39 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 14:23:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwwIt-0000oV-Lw; Thu, 02 Aug 2012 14:23:07 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1SwwIt-0000o1-0m
	for xen-devel@lists.xensource.com; Thu, 02 Aug 2012 14:23:07 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1343917379!1860610!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDY3NDk3OQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29790 invoked from network); 2 Aug 2012 14:23:00 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-15.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 2 Aug 2012 14:23:00 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q72EMijl015231
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 2 Aug 2012 14:22:44 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q72EMhlK024889
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 2 Aug 2012 14:22:43 GMT
Received: from abhmt106.oracle.com (abhmt106.oracle.com [141.146.116.58])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q72EMgBr027615; Thu, 2 Aug 2012 09:22:42 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 02 Aug 2012 07:22:42 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 9F5584029A; Thu,  2 Aug 2012 10:13:41 -0400 (EDT)
Date: Thu, 2 Aug 2012 10:13:41 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20120802141341.GE16749@phenom.dumpdata.com>
References: <alpine.DEB.2.02.1207251741470.26163@kaball.uk.xensource.com>
	<1343316846-25860-1-git-send-email-stefano.stabellini@eu.citrix.com>
	<50197527.3070007@gmail.com>
	<1343892951.7571.50.camel@dagon.hellion.org.uk>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1343892951.7571.50.camel@dagon.hellion.org.uk>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linaro-dev@lists.linaro.org" <linaro-dev@lists.linaro.org>,
	"arnd@arndb.de" <arnd@arndb.de>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"catalin.marinas@arm.com" <catalin.marinas@arm.com>,
	"Tim \(Xen.org\)" <tim@xen.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Rob Herring <robherring2@gmail.com>, "linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [PATCH 01/24] arm: initial Xen support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Aug 02, 2012 at 08:35:51AM +0100, Ian Campbell wrote:
> On Wed, 2012-08-01 at 19:27 +0100, Rob Herring wrote:
> > On 07/26/2012 10:33 AM, Stefano Stabellini wrote:
> > > - Basic hypervisor.h and interface.h definitions.
> > > - Skeleton enlighten.c, set xen_start_info to an empty struct.
> > > - Do not limit xen_initial_domain to PV guests.
> > > 
> > > The new code only compiles when CONFIG_XEN is set, that is going to be
> > > added to arch/arm/Kconfig in a later patch.
> > > 
> > > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > > ---
> > >  arch/arm/Makefile                     |    1 +
> > >  arch/arm/include/asm/hypervisor.h     |    6 +++
> > >  arch/arm/include/asm/xen/hypervisor.h |   19 ++++++++++
> > >  arch/arm/include/asm/xen/interface.h  |   64 +++++++++++++++++++++++++++++++++
> > 
> > These headers don't seem particularly ARM specific. Could they be moved
> > to asm-generic or include/linux?
> 
> Or perhaps include/xen.
> 
> A bunch of it also looks like x86 specific stuff which has crept in.
> e.g. PARAVIRT_LAZY_FOO and paravirt_get_lazy_mode() are arch/x86
> specific and shouldn't be called from common code (and aren't, AFAICT).

They could be moved out.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 14:26:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 14:26:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwwM0-00013D-Ej; Thu, 02 Aug 2012 14:26:20 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1SwwLz-000131-4T
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 14:26:19 +0000
Received: from [85.158.139.83:52521] by server-10.bemta-5.messagelabs.com id
	36/0D-02190-A0E8A105; Thu, 02 Aug 2012 14:26:18 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-14.tower-182.messagelabs.com!1343917576!25212566!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDY3NDk3OQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17006 invoked from network); 2 Aug 2012 14:26:17 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-14.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 2 Aug 2012 14:26:17 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q72EQC35019302
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 2 Aug 2012 14:26:12 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q72EQBIi001097
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 2 Aug 2012 14:26:12 GMT
Received: from abhmt120.oracle.com (abhmt120.oracle.com [141.146.116.72])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q72EQBcO019926; Thu, 2 Aug 2012 09:26:11 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 02 Aug 2012 07:26:11 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 937024029A; Thu,  2 Aug 2012 10:17:10 -0400 (EDT)
Date: Thu, 2 Aug 2012 10:17:10 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Jan Beulich <JBeulich@suse.com>, Mukesh Rathor <mukesh.rathor@oracle.com>
Message-ID: <20120802141710.GF16749@phenom.dumpdata.com>
References: <1343745804-28028-1-git-send-email-konrad.wilk@oracle.com>
	<20120801155040.GB15812@phenom.dumpdata.com>
	<501A5EF7020000780009219C@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <501A5EF7020000780009219C@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Boot PV guests with more than 128GB (v2)
 for 3.7
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Aug 02, 2012 at 10:05:27AM +0100, Jan Beulich wrote:
> >>> On 01.08.12 at 17:50, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> > With these patches I've gotten it to boot up to 384GB. Around that area
> > something weird happens - mainly the pagetables that the toolstack allocated
> > seem to have missing data. I haven't looked at it in detail, but this is what
> > the domain builder tells me:
> > 
> > 
> > xc_dom_alloc_segment:   ramdisk      : 0xffffffff82278000 -> 
> > 0xffffffff930b4000  (pfn 0x2278 + 0x10e3c pages)
> > xc_dom_malloc            : 1621 kB
> > xc_dom_pfn_to_ptr: domU mapping: pfn 0x2278+0x10e3c at 0x7fb0853a2000
> > xc_dom_do_gunzip: unzip ok, 0x4ba831c -> 0x10e3be10
> > xc_dom_alloc_segment:   phys2mach    : 0xffffffff930b4000 -> 
> > 0xffffffffc30b4000  (pfn 0x130b4 + 0x30000 pages)
> > xc_dom_malloc            : 4608 kB
> > xc_dom_pfn_to_ptr: domU mapping: pfn 0x130b4+0x30000 at 0x7fb0553a2000
> > xc_dom_alloc_page   :   start info   : 0xffffffffc30b4000 (pfn 0x430b4)
> > xc_dom_alloc_page   :   xenstore     : 0xffffffffc30b5000 (pfn 0x430b5)
> > xc_dom_alloc_page   :   console      : 0xffffffffc30b6000 (pfn 0x430b6)
> > nr_page_tables: 0x0000ffffffffffff/48: 0xffff000000000000 -> 
> > 0xffffffffffffffff, 1 table(s)
> > nr_page_tables: 0x0000007fffffffff/39: 0xffffff8000000000 -> 
> > 0xffffffffffffffff, 1 table(s)
> > nr_page_tables: 0x000000003fffffff/30: 0xffffffff80000000 -> 
> > 0xffffffffffffffff, 2 table(s)
> > nr_page_tables: 0x00000000001fffff/21: 0xffffffff80000000 -> 
> > 0xffffffffc33fffff, 538 table(s)
> > xc_dom_alloc_segment:   page tables  : 0xffffffffc30b7000 -> 
> > 0xffffffffc32d5000  (pfn 0x430b7 + 0x21e pages)
> > xc_dom_pfn_to_ptr: domU mapping: pfn 0x430b7+0x21e at 0x7fb055184000
> > xc_dom_alloc_page   :   boot stack   : 0xffffffffc32d5000 (pfn 0x432d5)
> > xc_dom_build_image  : virt_alloc_end : 0xffffffffc32d6000
> > xc_dom_build_image  : virt_pgtab_end : 0xffffffffc3400000
> > 
> > Note it is 0xffffffffc30b4000 - so already past level2_kernel_pgt
> > (L3[510]) and in level2_fixmap_pgt territory (L3[511]).
> > 
> > At that stage we are still operating on the Xen-provided pagetables - which
> > look to have L4[511][511] empty! That sounds to me like a Xen tool-stack
> > problem. Jan, have you seen something similar to this?
> 
> No we haven't, but I also don't think anyone tried to create as
> big a DomU. I was, however, under the impression that DomU-s
> this big had been created at Oracle before. Or was that only up
> to 256Gb perhaps?

Mukesh, do you recall? Was it with OVM 2.2.2, which was 3.4-based?
It might be that we did not have the 1TB hardware at that time yet.

Or perhaps I am missing some bug-fix from the old product.

> 
> In any case, setup_pgtables_x86_64() indeed looks flawed
> to me: While the clearing of l1tab looks right, l[23]tab get
> cleared (and hence a new table allocated) too early. l2tab
> should really get cleared only when l1tab gets cleared _and_
> the L2 clearing condition is true. Similarly for l3tab then, and
> of course - even though it would unlikely ever matter -
> setup_pgtables_x86_32_pae() is broken in the same way.
> 
> Afaict this got broken with the domain build re-write between
> 3.0.4 and 3.1 (the old code looks alright).

Oh wow. Long time ago. Thanks for the pointer - will look at this
once I am through with some of the current bug log.
> 
> Jan
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 14:34:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 14:34:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwwT3-0001Gj-Aw; Thu, 02 Aug 2012 14:33:37 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SwwT1-0001Ge-8r
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 14:33:35 +0000
Received: from [85.158.143.35:50177] by server-1.bemta-4.messagelabs.com id
	A0/F4-24392-EBF8A105; Thu, 02 Aug 2012 14:33:34 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1343917937!15182022!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24463 invoked from network); 2 Aug 2012 14:32:17 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 14:32:17 -0000
X-IronPort-AV: E=Sophos;i="4.77,701,1336348800"; d="scan'208";a="13824912"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 14:32:17 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 2 Aug 2012 15:32:17 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1SwwRh-00087y-OC; Thu, 02 Aug 2012 14:32:13 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1SwwRh-0005MG-NI;
	Thu, 02 Aug 2012 15:32:13 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20506.36717.700406.653968@mariner.uk.xensource.com>
Date: Thu, 2 Aug 2012 15:32:13 +0100
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1343904082.27221.136.camel@zakaz.uk.xensource.com>
References: <1343838260-17725-1-git-send-email-ian.jackson@eu.citrix.com>
	<1343838260-17725-12-git-send-email-ian.jackson@eu.citrix.com>
	<1343904082.27221.136.camel@zakaz.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 11/11] libxl: -Wunused-parameter
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [PATCH 11/11] libxl: -Wunused-parameter"):
> I'm not entirely sure how I feel about this patch generally (it's quite
> a bit of a mess, but the bugs it would have found are real). It's also
> quite a lot of churn for 4.2.

Yes.

> On the other hand we are likely to want to backport lots of libxl fixes
> for 4.2.1 (I was actually considering an exception to the "no new
> features" rule for 4.2.1 for xm parity causing patches) and having this
> in 4.2 would make that cleaner.

Indeed.

> I guess I come down (just) on the side of taking this, when it is baked.

OK.  Having slept on it I think overall this is an improvement and
will help us in the future.

> > @@ -700,6 +702,8 @@ int libxl_domain_remus_start(libxl_ctx *ctx, libxl_domain_remus_info *info,
> >      libxl__domain_suspend_state *dss;
> >      int rc;
> > 
> > +    USE(recv_fd); /* TODO get rid of this and actually use it! */
> 
> You've only just introduced TODO REMUS...

Point.

> > @@ -1019,6 +1019,7 @@ void libxl_osevent_occurred_fd(libxl_ctx *ctx, void *for_libxl,
> > 
> >      assert(fd == ev->fd);
> >      revents &= ev->events;
> > +    USE(events); /* we use our own idea of what we asked for */
> 
> What is the point of this argument then?

It's there for consistency with poll(2)'s API.

> Is getting an event we weren't expecting a log-worthy occurrence?

events might be different from those we requested because events might
be "in flight" from the application's call to poll, to us, while we
register/deregister them.  So this is not a logworthy event.  I guess
this property of the registration API is not documented and should be
(although it amounts only to a relaxation from the point of view of
the application).

Also I found a mistake in a related comment so I will fix these
comments in another patch.

> > diff --git a/tools/libxl/libxl_pci.c b/tools/libxl/libxl_pci.c
> > index 9c92ae6..a094965 100644
> > --- a/tools/libxl/libxl_pci.c
> > +++ b/tools/libxl/libxl_pci.c
> > @@ -800,6 +800,10 @@ static int pci_ins_check(libxl__gc *gc, uint32_t domid, const char *state, void
> >  {
> >      char *orig_state = priv;
> > 
> > +    USE(gc);
> > +    USE(domid);
> > +    USE(priv);
> 
> You actually use priv above.

Fixed.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 11/11] libxl: -Wunused-parameter
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [PATCH 11/11] libxl: -Wunused-parameter"):
> I'm not entirely sure how I feel about this patch generally (it's quite
> a bit of mess, but the bugs it would have found are real). It's also
> quite a lot of churn for 4.2.

Yes.

> On the other hand we are likely to want to backport lots of libxl fixes
> for 4.2.1 (I was actually considering an exception to the "no new
> features" rule for 4.2.1 for xm parity causing patches) and having this
> in 4.2 would make that cleaner.

Indeed.

> I guess I come down (just) on the side of taking this, when it is baked.

OK.  Having slept on it I think overall this is an improvement and
will help us in the future.

> > @@ -700,6 +702,8 @@ int libxl_domain_remus_start(libxl_ctx *ctx, libxl_domain_remus_info *info,
> >      libxl__domain_suspend_state *dss;
> >      int rc;
> > 
> > +    USE(recv_fd); /* TODO get rid of this and actually use it! */
> 
> You've only just introduced TODO REMUS...

Point.

> > @@ -1019,6 +1019,7 @@ void libxl_osevent_occurred_fd(libxl_ctx *ctx, void *for_libxl,
> > 
> >      assert(fd == ev->fd);
> >      revents &= ev->events;
> > +    USE(events); /* we use our own idea of what we asked for */
> 
> What is the point of this argument then?

It's there for consistency with poll(2)'s API.
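
[Editorial note: a common definition for this kind of unused-parameter
macro is a cast to void; the actual libxl USE() definition may differ.
A minimal sketch:]

```c
#include <assert.h>

/* Hypothetical sketch; the real libxl USE() macro may be defined
 * differently.  Casting to void marks a parameter as deliberately
 * unused, silencing -Wunused-parameter with no runtime effect. */
#define USE(x) ((void)(x))

static int takes_unused(int fd, short events)
{
    USE(events);   /* we keep our own idea of what we asked for */
    return fd;
}
```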

> Is getting an event we weren't expecting a log-worthy occurrence?

events might differ from those we requested because events can be "in
flight" between the application's call to poll and us, while we
register/deregister them.  So this is not a log-worthy event.  I guess
this property of the registration API is not documented and should be
(although it amounts only to a relaxation from the point of view of
the application).
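
[Editorial note: the in-flight situation described above can be mocked
like this; the struct and function names below are invented for
illustration and are not the actual libxl code.]

```c
#include <assert.h>
#include <poll.h>

/* Invented names for illustration.  The event source remembers which
 * events it registered; any extra bits in revents may be left over from
 * a poll() call that raced with a register/deregister, so they are
 * masked off silently rather than logged as errors. */
struct ev_fd {
    int fd;
    short events;          /* events we currently want */
};

static short filter_revents(const struct ev_fd *ev, int fd, short revents)
{
    assert(fd == ev->fd);
    return revents & ev->events;   /* drop in-flight extras */
}
```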

Also I found a mistake in a related comment so I will fix these
comments in another patch.

> > diff --git a/tools/libxl/libxl_pci.c b/tools/libxl/libxl_pci.c
> > index 9c92ae6..a094965 100644
> > --- a/tools/libxl/libxl_pci.c
> > +++ b/tools/libxl/libxl_pci.c
> > @@ -800,6 +800,10 @@ static int pci_ins_check(libxl__gc *gc, uint32_t domid, const char *state, void
> >  {
> >      char *orig_state = priv;
> > 
> > +    USE(gc);
> > +    USE(domid);
> > +    USE(priv);
> 
> You actually use priv above.

Fixed.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 14:34:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 14:34:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwwTA-0001HC-Nd; Thu, 02 Aug 2012 14:33:44 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1SwwT9-0001H5-J8
	for Xen-devel@lists.xensource.com; Thu, 02 Aug 2012 14:33:43 +0000
Received: from [85.158.143.99:54445] by server-1.bemta-4.messagelabs.com id
	8E/15-24392-6CF8A105; Thu, 02 Aug 2012 14:33:42 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-10.tower-216.messagelabs.com!1343918019!23570498!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDY3NDk3OQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18738 invoked from network); 2 Aug 2012 14:33:40 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-10.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 2 Aug 2012 14:33:40 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q72EXXJA028688
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 2 Aug 2012 14:33:34 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q72EXXiM027465
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 2 Aug 2012 14:33:33 GMT
Received: from abhmt118.oracle.com (abhmt118.oracle.com [141.146.116.70])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q72EXWuT025803; Thu, 2 Aug 2012 09:33:32 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 02 Aug 2012 07:33:32 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id E1C524029A; Thu,  2 Aug 2012 10:24:31 -0400 (EDT)
Date: Thu, 2 Aug 2012 10:24:31 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: George Dunlap <george.dunlap@eu.citrix.com>
Message-ID: <20120802142431.GG16749@phenom.dumpdata.com>
References: <20120626181707.4203d336@mantra.us.oracle.com>
	<CAFLBxZagm=53bZ=KaWmd7XQkkAduD+znECJRsaJUqrGJDZgkQQ@mail.gmail.com>
	<20120801153439.3f81c923@mantra.us.oracle.com>
	<501A4E0C.1090509@eu.citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <501A4E0C.1090509@eu.citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Keir Fraser <keir.xen@gmail.com>, Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [HYBRID]: status update...
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Aug 02, 2012 at 10:53:16AM +0100, George Dunlap wrote:
> On 01/08/12 23:34, Mukesh Rathor wrote:
> >On Wed, 1 Aug 2012 16:25:01 +0100
> >George Dunlap <George.Dunlap@eu.citrix.com> wrote:
> >
> >>I hope this isn't bikeshedding; but I don't like "Hybrid" as a name
> >>for this feature, mainly for "marketing" reasons.  I think it will
> >>probably give people the wrong idea about what the technology does.
> >>PV domains is one of Xen's really distinct advantages -- much simpler
> >>interface, lighter-weight (no qemu, legacy boot), &c &c.  As I
> >>understand it, the mode you've been calling "hybrid" still has all of
> >>these advantages -- it just uses some of the HVM hardware extensions
> >>to make the interface even simpler / faster.  I'm afraid "hybrid" may
> >>be seen as, "Even Xen has had to give up on PV."
> >>
> >>Can I suggest something like "PVH" instead?  That (at least to me)
> >>makes it clear that PV domains are still fully PV, but just use some
> >>HVM extensions.
> >>
> >>Thoughts?
> >Hi George,
> >
> >We gave some thought looking for name. I figured pure PV will be around for
> >a while at least. So there's PV on one side and HVM on the other, hybrid
> >somewhere in between.
> I understand the idea, but I think it's not very accurate.  I would
> call Stefano's "PVHVM" stuff hybrid -- it has the legacy boot and
> emulated devices, but uses the PV interfaces for event delivery
> extensively.  The mode you're working on is too far towards the "PV"
> side to be called "hybrid".  (And as we've seen, the term has
> already confused people, who interpreted it as basically PVHVM.)
> >
> >The issue with 'PV in HVM' is that it limits PV to the HVM container only. The
> >vision I had was that hybrid, a PV ops kernel somewhere in between
> >PV and HVM, could be configured with options. So, one could run hybrid
> >with say EPT off (although that won't be supported anymore). But a generic name
> >like hybrid allows it to stay flexible in future, instead of being confined to a
> >specific mode. I suppose a PV guest could just be started with various options.
> In general, I think "PV" should mean, "Doesn't use legacy boot,
> doesn't need emulated devices".  So I don't think "PVH" places any
> limitations on what particular subset of HVM hardware you use.  For
> things that specifically depend on knowing whether guest PTs are
> using mfns or gpfns, I think we should have checks for specific
> things -- for instance, "xen_mm_translate()" or something like that.


I like that.. We currently have 'if (feature(AUTOTRANSLATE))' .. blah
blah sprinkled around.

If we altered it to 'if (xen_mm_translate())' and replaced a bunch of
'if (xen_pv_domain())' checks with that, it should make things easier. It
might even make 'xen_hybrid_domain()' unnecessary altogether.

This is good - it would also allow us to remove some of the
'xen_hvm_domain()' checks as well.
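
[Editorial note: a sketch of the rename being discussed. xen_mm_translate()
is only a proposed name, and the feature plumbing below is a stub standing
in for the kernel's xen_feature(XENFEAT_auto_translated_physmap) check,
not the real Linux/Xen code.]

```c
/* Stub standing in for the hypervisor's feature answer; in the kernel
 * this would be xen_feature(XENFEAT_auto_translated_physmap). */
static int stub_auto_translated = 1;

/* The property check proposed in this thread: do guest page tables hold
 * gpfns (hypervisor translates) rather than raw mfns (classic PV)? */
static int xen_mm_translate(void)
{
    return stub_auto_translated;
}

/* Callers then ask about the property, not the guest type: */
static unsigned long to_mfn(unsigned long pfn, const unsigned long *p2m)
{
    /* Autotranslated guests use the pfn directly; classic PV looks up
     * the machine frame in its p2m list. */
    return xen_mm_translate() ? pfn : p2m[pfn];
}
```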

> 
> Also, don't confuse EPT (which is Intel-specific) with HAP (which is
> the generic term for either EPT or RVI); and don't confuse either of
> those with what is called "translate" mode.  Translate mode (where
> Xen translates the guest PTs from gpfns to mfns) can be done either
> with HAP or with shadow; and given the performance issues HAP has
> with certain workloads, we need to make sure that the HVM container
> mode can use both.
> 
> >As for name in code, 'pvh' was confusing, as PVHVM is now routinely used to
> >refer to HVM with PV drivers. 'hpv', for HVM/hybrid PV, well, that's a certain
> >virus ;). So I just used hybrid in the code to refer to a PV guest that runs
> >in an HVM container. I suppose I could change the flag to pv_in_hvm or
> >something.
> But is "pvhvm" ever actually used in the code?  If not, it's not a problem.
> 
> Actually, perhaps it would be better in any case, rather than having
> checks for "pvh" mode, to have checks for specific things -- e.g.,
> is translation on or off (i.e., are we running in classic PV mode, or
> with HAP)?  I'm not sure about the other things you're doing with HVM, but
> it should be possible to come up with a descriptive name to use in
> the code for those options, rather than limiting to a specific mode.
> 
> In ancient days, there used to be options, both within Xen and
> within the classic Xen kernel, to run a PV guest in fully-translated
> mode (i.e., the guest PTs contained gpfns, not mfns), and
> "shadow_translate" was a mode used across guest types (both PV and
> HVM) to determine whether this was the case or not.

dom0_shadow=true .. And strangely enough it looks to actually boot
the pvops kernel (dom0) without many issues. Wow. It must be faking it,
I think - no bugs??!

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 14:42:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 14:42:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swwb6-0001dX-Nc; Thu, 02 Aug 2012 14:41:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Swwb5-0001dF-DJ
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 14:41:55 +0000
Received: from [85.158.143.99:39836] by server-2.bemta-4.messagelabs.com id
	45/CB-17938-2B19A105; Thu, 02 Aug 2012 14:41:54 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-2.tower-216.messagelabs.com!1343918514!24652215!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4280 invoked from network); 2 Aug 2012 14:41:54 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-2.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 14:41:54 -0000
X-IronPort-AV: E=Sophos;i="4.77,701,1336348800"; d="scan'208";a="13825118"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 14:41:53 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 2 Aug 2012 15:41:53 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Swwb3-0008BC-A1; Thu, 02 Aug 2012 14:41:53 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Swwb3-0005N6-8z;
	Thu, 02 Aug 2012 15:41:53 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20506.37297.264426.80222@mariner.uk.xensource.com>
Date: Thu, 2 Aug 2012 15:41:53 +0100
To: Ian Campbell <ian.campbell@citrix.com>
In-Reply-To: <f345fbfa8f50975b5c32.1343900314@cosworth.uk.xensource.com>
References: <f345fbfa8f50975b5c32.1343900314@cosworth.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: Greg Wettstein <greg@wind.enjellic.com>,
	Roger Pau Monne <roger.pau@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] libxl: fix cleanup of tap devices in
	libxl__device_destroy
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("[PATCH] libxl: fix cleanup of tap devices in libxl__device_destroy"):
> libxl: fix cleanup of tap devices in libxl__device_destroy
> 
> We pass be_path to tapdisk_destroy but we've already deleted it so it fails to
> read tapdisk-params. However it appears that we need to destroy the tap device
> after tearing down xenstore, to avoid the leak reported by Greg Wettstein in
> <201207312141.q6VLfJje012656@wind.enjellic.com>.
> 
> So read the tapdisk-params in the cleanup transaction, before the remove, and
> pass that down to destroy_tapdisk instead. tapdisk-params may of course be NULL
> if the device isn't a tap device.
> 
> There is no need to tear down the tap device from libxl__initiate_device_remove
> since this ultimately calls libxl__device_destroy.
> 
> Propagate and log errors from libxl__device_destroy_tapdisk.

Can you please wrap your commit messages to 70ish columns rather than 80?
Here is a screenshot of my email client:
  http://www.chiark.greenend.org.uk/~ijackson/volatile/2012/wrap-damage.png

The code all looks good.  Just one comment:

>  int libxl__device_destroy(libxl__gc *gc, libxl__device *dev)
>  {
...
> @@ -531,6 +533,9 @@ int libxl__device_destroy(libxl__gc *gc,
>          rc = libxl__xs_transaction_start(gc, &t);
>          if (rc) goto out;
>  
> +        /* May not exist if this is not a tap device */
> +        tapdisk_params = libxl__xs_read(gc, t, tapdisk_path);

You can still use libxl__xs_read_checked.  It considers ENOENT a
success (and therefore doesn't log about it).
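
[Editorial note: the behaviour Ian refers to can be mocked like this;
the real helper is libxl__xs_read_checked with a different signature,
and mock_xs_read below is invented for illustration.]

```c
#include <errno.h>
#include <stddef.h>

/* Mock xenstore read: pretend the node is absent. */
static const char *mock_xs_read(const char *path)
{
    (void)path;
    errno = ENOENT;
    return NULL;
}

/* ENOENT-as-success pattern: an absent node is reported as rc == 0 with
 * *result == NULL, so optional nodes like tapdisk-params need no
 * special-casing by the caller (and no error is logged for them). */
static int xs_read_checked(const char *path, const char **result)
{
    const char *v = mock_xs_read(path);
    if (!v && errno != ENOENT)
        return -1;           /* a real error: caller should log it */
    *result = v;             /* NULL simply means "node not present" */
    return 0;
}
```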

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 14:43:22 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 14:43:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swwbx-0001gT-5Y; Thu, 02 Aug 2012 14:42:49 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Swwbv-0001gJ-8g
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 14:42:47 +0000
Received: from [85.158.143.35:5963] by server-3.bemta-4.messagelabs.com id
	33/57-01511-6E19A105; Thu, 02 Aug 2012 14:42:46 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1343918565!16528352!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31862 invoked from network); 2 Aug 2012 14:42:46 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 14:42:46 -0000
X-IronPort-AV: E=Sophos;i="4.77,701,1336348800"; d="scan'208";a="13825135"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 14:42:45 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 2 Aug 2012 15:42:45 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Swwbt-0008BY-4M; Thu, 02 Aug 2012 14:42:45 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Swwbt-0005NA-3Z;
	Thu, 02 Aug 2012 15:42:45 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20506.37349.96739.832156@mariner.uk.xensource.com>
Date: Thu, 2 Aug 2012 15:42:45 +0100
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1343900722.27221.107.camel@zakaz.uk.xensource.com>
References: <f345fbfa8f50975b5c32.1343900314@cosworth.uk.xensource.com>
	<1343900722.27221.107.camel@zakaz.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: Greg Wettstein <greg@wind.enjellic.com>,
	Roger Pau Monne <roger.pau@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] libxl: fix cleanup of tap devices in
 libxl__device_destroy
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [Xen-devel] [PATCH] libxl: fix cleanup of tap devices in libxl__device_destroy"):
> 
> >          rc = libxl__xs_transaction_start(gc, &t);
> >          if (rc) goto out;
> >  
> > +        /* May not exist if this is not a tap device */
> > +        tapdisk_params = libxl__xs_read(gc, t, tapdisk_path);
> > +
> >          libxl__xs_path_cleanup(gc, t, fe_path);
> >          libxl__xs_path_cleanup(gc, t, be_path);
> 
> Do we deliberately ignore the error codes from these two?

I don't think so.

In general in this destroy path we should consider whether, on
failure, we should abandon the cleanup or note the error and carry on.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 14:44:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 14:44:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swwct-0001lJ-KH; Thu, 02 Aug 2012 14:43:47 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Swwcs-0001l8-Aw
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 14:43:46 +0000
Received: from [85.158.138.51:39632] by server-8.bemta-3.messagelabs.com id
	97/C2-10412-1229A105; Thu, 02 Aug 2012 14:43:45 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-174.messagelabs.com!1343918624!30148861!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27370 invoked from network); 2 Aug 2012 14:43:45 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-9.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 14:43:45 -0000
X-IronPort-AV: E=Sophos;i="4.77,701,1336348800"; d="scan'208";a="13825159"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 14:43:44 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Thu, 2 Aug 2012
	15:43:44 +0100
Message-ID: <1343918623.27221.163.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Thu, 2 Aug 2012 15:43:43 +0100
In-Reply-To: <20506.37297.264426.80222@mariner.uk.xensource.com>
References: <f345fbfa8f50975b5c32.1343900314@cosworth.uk.xensource.com>
	<20506.37297.264426.80222@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Greg Wettstein <greg@wind.enjellic.com>,
	Roger Pau Monne <roger.pau@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] libxl: fix cleanup of tap devices in
 libxl__device_destroy
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-02 at 15:41 +0100, Ian Jackson wrote:
> Ian Campbell writes ("[PATCH] libxl: fix cleanup of tap devices in libxl__device_destroy"):
> > libxl: fix cleanup of tap devices in libxl__device_destroy
> > 
> > We pass be_path to tapdisk_destroy but we've already deleted it so it fails to
> > read tapdisk-params. However it appears that we need to destroy the tap device
> > after tearing down xenstore, to avoid the leak reported by Greg Wettstein in
> > <201207312141.q6VLfJje012656@wind.enjellic.com>.
> > 
> > So read the tapdisk-params in the cleanup transaction, before the remove, and
> > pass that down to destroy_tapdisk instead. tapdisk-params may of course be NULL
> > if the device isn't a tap device.
> > 
> > There is no need to tear down the tap device from libxl__initiate_device_remove
> > since this ultimately calls libxl__device_destroy.
> > 
> > Propagate and log errors from libxl__device_destroy_tapdisk.
> 
> Can you please wrap your commit messages to 70ish rather than 80 ?
> Here is a screenshot of my email client:
>   http://www.chiark.greenend.org.uk/~ijackson/volatile/2012/wrap-damage.png

If someone tells me the rune to put into vimrc such that "gqj" does this
then sure.
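For what it's worth, `gq` (and so `gqj`) wraps to the 'textwidth' option, so one plausible rune is the following; applying it unconditionally will affect other buffers too, so scoping it with an autocmd may be preferable:

```vim
" Make gq / gqj / gqip wrap at ~70 columns instead of the default width:
set textwidth=70
```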

> 
> The code all looks good.  Just one comment:
> 
> >  int libxl__device_destroy(libxl__gc *gc, libxl__device *dev)
> >  {
> ...
> > @@ -531,6 +533,9 @@ int libxl__device_destroy(libxl__gc *gc,
> >          rc = libxl__xs_transaction_start(gc, &t);
> >          if (rc) goto out;
> >  
> > +        /* May not exist if this is not a tap device */
> > +        tapdisk_params = libxl__xs_read(gc, t, tapdisk_path);
> 
> You can still use libxl__xs_read_checked.  It considers ENOENT a
> success (and therefore doesn't log about it).

OK.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 14:45:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 14:45:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwweH-0001uA-8Y; Thu, 02 Aug 2012 14:45:13 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SwweF-0001tw-Uf
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 14:45:12 +0000
Received: from [85.158.139.83:5259] by server-7.bemta-5.messagelabs.com id
	12/7D-28276-7729A105; Thu, 02 Aug 2012 14:45:11 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-5.tower-182.messagelabs.com!1343918710!29992271!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27520 invoked from network); 2 Aug 2012 14:45:10 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-5.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 14:45:10 -0000
X-IronPort-AV: E=Sophos;i="4.77,701,1336348800"; d="scan'208";a="13825191"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 14:45:01 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 2 Aug 2012 15:45:01 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Swwe4-0008CG-My; Thu, 02 Aug 2012 14:45:00 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Swwe4-0005NN-M2;
	Thu, 02 Aug 2012 15:45:00 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20506.37484.668649.252793@mariner.uk.xensource.com>
Date: Thu, 2 Aug 2012 15:45:00 +0100
To: Andre Przywara <andre.przywara@amd.com>
In-Reply-To: <501A83A6.2060000@amd.com>
References: <501A83A6.2060000@amd.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: Ian Campbell <Ian.Campbell@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] auto-ballooning crashing Dom0?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Andre Przywara writes ("auto-ballooning crashing Dom0?"):
> during some experiments with many guests I get crashing Dom0s because of 
> too little memory. Actually the OOM killer goes 'round and kills random 
> things, preferably qemu-dm's ;-)
> The box in question has 128GB of memory, I start with dom0_mem=8192M (or 
> 16384M, doesn't matter). I also used "dom0_mem=8192M,min:1536M", but 
> that didn't make any difference. Xen is c/s 25688.

I have seen similar effects occasionally but have usually been too busy
in the middle of something else to do anything about it.  The
autoballooning arrangements aren't very good TBH and we are intending
to improve things in 4.3.

> Either we change this to something higher (768 MB worked for me) or we 
> make this a config option in xl.conf (like it was in xend-config.sxp)

Certainly it should be a config option.

> Another option would be to make it dynamic, by looking at the actual 
> memory currently used in Dom0 and not ballooning below 110% or so of it.

That would be a possibility.

> In any case we should do something still for Xen 4.2, as I guess people 
> dislike crashing Dom0, tearing down all the domains with it...

Yes.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 14:46:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 14:46:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swweq-0001z5-Ls; Thu, 02 Aug 2012 14:45:48 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Swweo-0001yX-Nw
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 14:45:47 +0000
Received: from [85.158.138.51:4135] by server-8.bemta-3.messagelabs.com id
	CC/66-10412-A929A105; Thu, 02 Aug 2012 14:45:46 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-174.messagelabs.com!1343918745!30088797!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17453 invoked from network); 2 Aug 2012 14:45:45 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-11.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 14:45:45 -0000
X-IronPort-AV: E=Sophos;i="4.77,701,1336348800"; d="scan'208";a="13825211"
From xen-devel-bounces@lists.xen.org Thu Aug 02 14:46:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 14:46:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swweq-0001z5-Ls; Thu, 02 Aug 2012 14:45:48 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Swweo-0001yX-Nw
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 14:45:47 +0000
Received: from [85.158.138.51:4135] by server-8.bemta-3.messagelabs.com id
	CC/66-10412-A929A105; Thu, 02 Aug 2012 14:45:46 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-174.messagelabs.com!1343918745!30088797!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17453 invoked from network); 2 Aug 2012 14:45:45 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-11.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 14:45:45 -0000
X-IronPort-AV: E=Sophos;i="4.77,701,1336348800"; d="scan'208";a="13825211"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 14:45:45 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Thu, 2 Aug 2012
	15:45:45 +0100
Message-ID: <1343918743.27221.164.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Thu, 2 Aug 2012 15:45:43 +0100
In-Reply-To: <20506.37349.96739.832156@mariner.uk.xensource.com>
References: <f345fbfa8f50975b5c32.1343900314@cosworth.uk.xensource.com>
	<1343900722.27221.107.camel@zakaz.uk.xensource.com>
	<20506.37349.96739.832156@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Greg Wettstein <greg@wind.enjellic.com>,
	Roger Pau Monne <roger.pau@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] libxl: fix cleanup of tap devices in
 libxl__device_destroy
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-02 at 15:42 +0100, Ian Jackson wrote:
> Ian Campbell writes ("Re: [Xen-devel] [PATCH] libxl: fix cleanup of tap devices in libxl__device_destroy"):
> > 
> > >          rc = libxl__xs_transaction_start(gc, &t);
> > >          if (rc) goto out;
> > >  
> > > +        /* May not exist if this is not a tap device */
> > > +        tapdisk_params = libxl__xs_read(gc, t, tapdisk_path);
> > > +
> > >          libxl__xs_path_cleanup(gc, t, fe_path);
> > >          libxl__xs_path_cleanup(gc, t, be_path);
> > 
> > Do we deliberately ignore the error codes from these two?
> 
> I don't think so.
> 
> In general in this destroy path we should consider whether, on
> failure, we should abandon the cleanup or note the error and carry on.

Since this is a destroy operation I think we should note it and carry
on, so as to clean up as much as we are able.

BTW, is there a libxl__xs_transaction_abort missing in this function
too?



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 14:46:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 14:46:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swwev-000209-2A; Thu, 02 Aug 2012 14:45:53 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Swwet-0001zl-Mm
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 14:45:51 +0000
Received: from [85.158.138.51:4502] by server-4.bemta-3.messagelabs.com id
	46/6D-29069-E929A105; Thu, 02 Aug 2012 14:45:50 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-11.tower-174.messagelabs.com!1343918745!30088797!2
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17653 invoked from network); 2 Aug 2012 14:45:50 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-11.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 14:45:50 -0000
X-IronPort-AV: E=Sophos;i="4.77,701,1336348800"; d="scan'208";a="13825213"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 14:45:50 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 2 Aug 2012 15:45:50 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Swwer-0008Cc-RA; Thu, 02 Aug 2012 14:45:49 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Swwer-0005Nb-QQ;
	Thu, 02 Aug 2012 15:45:49 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="na0ZkgbZsl"
Content-Transfer-Encoding: 7bit
Message-ID: <20506.37533.405510.954668@mariner.uk.xensource.com>
Date: Thu, 2 Aug 2012 15:45:49 +0100
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1343915416.27221.157.camel@zakaz.uk.xensource.com>
References: <20504.7228.128503.451291@mariner.uk.xensource.com>
	<1343821631.27221.75.camel@zakaz.uk.xensource.com>
	<20506.33864.452690.133475@mariner.uk.xensource.com>
	<1343915416.27221.157.camel@zakaz.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v3] libxl: enforce prohibitions of internal
 callers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--na0ZkgbZsl
Content-Type: text/plain; charset="us-ascii"
Content-Description: message body text
Content-Transfer-Encoding: 7bit

Ian Campbell writes ("Re: [Xen-devel] [PATCH v3] libxl: enforce prohibitions of internal callers"):
> On Thu, 2012-08-02 at 14:44 +0100, Ian Jackson wrote:
> > I guess I could routinely use git-filter-branch on the results of
> > linearize to turn "-" into "---" but my own git-to-hg (upstream hat)
> > commit processing stream removes things after "-" too.
> 
> Your git2hgapply script? I'm using that but perhaps an old version? Or
> maybe I just need to trust it...

I seem to have updated it at some point.

Ian.


--na0ZkgbZsl
Content-Type: application/octet-stream; name="git2hgapply"
Content-Disposition: attachment; filename="git2hgapply"
Content-Transfer-Encoding: base64

IyEvYmluL2Jhc2gKc2V0IC1lCnVzYWdlICgpIHsgY2F0IDw8RU5ECnVzYWdlOiAuLi4vZ2l0Mmhn
YXBwbHkgWzxnaXQtdHJlZT5dIDxyZXYtbGlzdD4gPGhnLXRyZWU+CiBkZWZhdWx0IGdpdC10cmVl
IGlzIC4KRU5ECn0KCmZhaWwgKCkgewogICAgICAgIGVjaG8gPiYyICIkMDogJCoiCiAgICAgICAg
ZXhpdCAxCn0KCmNhc2UgIiQjLiQxIiBpbgoyLlteLV0qKSAgICAgICAgZ2l0dHJlZT0uOyByZXZs
aXN0PSQxOyBoZ3RyZWU9JDIgOzsKMy5bXi1dKikgICAgICAgIGdpdHRyZWU9JDE7IHJldmxpc3Q9
JDI7IGhndHJlZT0kMyA7Owo/Li0taGVscCkgICAgICAgdXNhZ2U7IGV4aXQgMDs7CiopICAgICAg
ICAgICAgICB1c2FnZTsgZmFpbCAnYmFkIHVzYWdlJyA7Owplc2FjCgpzZXQgLW8gcGlwZWZhaWwK
CmNieT0nSWFuIEphY2tzb24gPElhbi5KYWNrc29uQGV1LmNpdHJpeC5jb20+JwoKY2FzZSAiJHJl
dmxpc3QiIGluCiouLiopICAgOzsKKikgICAgICByZXZsaXN0b3B0cz0tLW5vLXdhbGsgOzsKZXNh
YwoKdHJhcCAnc2V0ICtlOyBybSAtZiAiJHRmIjsgZXhpdCAxNicgMAp0Zj1gbWt0ZW1wYAoKcmV2
cz1gc2V0IC1lOyBjZCAkZ2l0dHJlZTsgZ2l0LXJldi1saXN0IC0tcmV2ZXJzZSAkcmV2bGlzdG9w
dHMgIiRyZXZsaXN0ImAKCmZvciByZXYgaW4gJHJldnM7IGRvCgogICAgICAgIChjZCAkZ2l0dHJl
ZTsgZ2l0LWxvZyAtbjEgLS1wcmV0dHk9Zm9ybWF0OidhcHBseWluZyAlcwonIFwKICAgICAgICAg
ICAgICAgICRyZXYgfCBjYXQpCiAgICAgICAgdGVzdCAkPyA9IDAgIyB3dGYKCiAgICAgICAgKGNk
ICRnaXR0cmVlOyBnaXQtbG9nIC1uMSAtLXByZXR0eT1mb3JtYXQ6XAonRnJvbTogJWFuIDwlYWU+
CgolcwoKJWInIkNvbW1pdHRlZC1ieTogJGNieSInCicgXAogICAgICAgICAgICAgICAgJHJldiAp
ID4kdGYKCglwZXJsIC1pfiAtbmUgJ3ByaW50IHVubGVzcyBtL14tK1xzKiQvLi4wJyAkdGY7Cgog
ICAgICAgIChjZCAkZ2l0dHJlZTsgZ2l0LXNob3cgLW4xIC0tcHJldHR5PWZvcm1hdDpcCiAgICAg
ICAgICAgICAgICAkcmV2ICkgPj4kdGYKICAgICAgICB0ZXN0ICQ/ID0gMCAjIHd0ZgoKICAgICAg
ICAoY2QgJGhndHJlZTsgaGcgaW1wb3J0IC1xICR0ZikKICAgICAgICB0ZXN0ICQ/ID0gMCAjIHd0
ZgoKZG9uZQoKcm0gJHRmCnRyYXAgJycgMAo=
--na0ZkgbZsl
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--na0ZkgbZsl--


From xen-devel-bounces@lists.xen.org Thu Aug 02 14:46:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 14:46:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwwfL-00026N-H1; Thu, 02 Aug 2012 14:46:19 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SwwfJ-00025f-Sd
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 14:46:17 +0000
Received: from [85.158.138.51:52830] by server-1.bemta-3.messagelabs.com id
	CF/22-31934-9B29A105; Thu, 02 Aug 2012 14:46:17 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-16.tower-174.messagelabs.com!1343918776!29997862!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10910 invoked from network); 2 Aug 2012 14:46:16 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 14:46:16 -0000
X-IronPort-AV: E=Sophos;i="4.77,701,1336348800"; d="scan'208";a="13825227"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 14:46:16 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 2 Aug 2012 15:46:16 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1SwwfI-0008Ck-4F; Thu, 02 Aug 2012 14:46:16 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1SwwfI-0005Nj-3O;
	Thu, 02 Aug 2012 15:46:16 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20506.37557.263306.37516@mariner.uk.xensource.com>
Date: Thu, 2 Aug 2012 15:46:13 +0100
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1343917305.27221.159.camel@zakaz.uk.xensource.com>
References: <075da4778b0a1a84680e.1343899357@cosworth.uk.xensource.com>
	<20506.35845.933539.770104@mariner.uk.xensource.com>
	<1343917305.27221.159.camel@zakaz.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] libxl: const correctness for
	libxl__xs_path_cleanup
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [PATCH] libxl: const correctness for libxl__xs_path_cleanup"):
> On Thu, 2012-08-02 at 15:17 +0100, Ian Jackson wrote:
> > I don't understand why this is needed (or a good thing) for one but
> > not the other.
> 
> be_path becomes const in the next patch, after the non-const user is
> removed.

OK then:

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

Thanks,
Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 14:48:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 14:48:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwwhP-0002Sg-2S; Thu, 02 Aug 2012 14:48:27 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SwwhN-0002SK-7I
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 14:48:25 +0000
Received: from [85.158.143.35:63322] by server-3.bemta-4.messagelabs.com id
	CE/F0-01511-8339A105; Thu, 02 Aug 2012 14:48:24 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1343918903!17108098!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16699 invoked from network); 2 Aug 2012 14:48:23 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 14:48:23 -0000
X-IronPort-AV: E=Sophos;i="4.77,701,1336348800"; d="scan'208";a="13825274"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 14:48:23 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 2 Aug 2012 15:48:23 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1SwwhL-0008Db-72; Thu, 02 Aug 2012 14:48:23 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1SwwhL-0005Nx-5t;
	Thu, 02 Aug 2012 15:48:23 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20506.37684.320856.526014@mariner.uk.xensource.com>
Date: Thu, 2 Aug 2012 15:48:20 +0100
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1343918623.27221.163.camel@zakaz.uk.xensource.com>
References: <f345fbfa8f50975b5c32.1343900314@cosworth.uk.xensource.com>
	<20506.37297.264426.80222@mariner.uk.xensource.com>
	<1343918623.27221.163.camel@zakaz.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: Greg Wettstein <greg@wind.enjellic.com>,
	Roger Pau Monne <roger.pau@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: [Xen-devel] linewrapping commit messages (was Re: [PATCH] libxl:
	fix cleanup of tap devices in libxl__device_destroy)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [PATCH] libxl: fix cleanup of tap devices in libxl__device_destroy"):
> On Thu, 2012-08-02 at 15:41 +0100, Ian Jackson wrote:
> > Can you please wrap your commit messages to 70ish rather than 80 ?
> > Here is a screenshot of my email client:
> >   http://www.chiark.greenend.org.uk/~ijackson/volatile/2012/wrap-damage.png
> 
> If someone tells me the rune to put into vimrc such that "gqj" does this
> then sure.

I asked IRC and people said:

  :q! emacs

and

  :set wm=10

Take your pick :-).

Ian.
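
[Editorial note: `wm` (wrapmargin) is relative to the window width, so
`:set wm=10` wraps at column 70 only in an 80-column window. Assuming
stock vim behaviour, an absolute setting that also makes "gq" format to
70 columns would be:]

```vim
" Wrap at an absolute column, independent of window width;
" 'gq' and auto-wrap both honour 'textwidth'.
setlocal textwidth=70
```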

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 14:48:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 14:48:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwwhP-0002Sg-2S; Thu, 02 Aug 2012 14:48:27 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SwwhN-0002SK-7I
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 14:48:25 +0000
Received: from [85.158.143.35:63322] by server-3.bemta-4.messagelabs.com id
	CE/F0-01511-8339A105; Thu, 02 Aug 2012 14:48:24 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1343918903!17108098!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16699 invoked from network); 2 Aug 2012 14:48:23 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 14:48:23 -0000
X-IronPort-AV: E=Sophos;i="4.77,701,1336348800"; d="scan'208";a="13825274"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 14:48:23 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 2 Aug 2012 15:48:23 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1SwwhL-0008Db-72; Thu, 02 Aug 2012 14:48:23 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1SwwhL-0005Nx-5t;
	Thu, 02 Aug 2012 15:48:23 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20506.37684.320856.526014@mariner.uk.xensource.com>
Date: Thu, 2 Aug 2012 15:48:20 +0100
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1343918623.27221.163.camel@zakaz.uk.xensource.com>
References: <f345fbfa8f50975b5c32.1343900314@cosworth.uk.xensource.com>
	<20506.37297.264426.80222@mariner.uk.xensource.com>
	<1343918623.27221.163.camel@zakaz.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: Greg Wettstein <greg@wind.enjellic.com>,
	Roger Pau Monne <roger.pau@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: [Xen-devel] linewrapping commit messages (was Re: [PATCH] libxl:
	fix cleanup of tap devices in libxl__device_destroy)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [PATCH] libxl: fix cleanup of tap devices in libxl__device_destroy"):
> On Thu, 2012-08-02 at 15:41 +0100, Ian Jackson wrote:
> > Can you please wrap your commit messages to 70ish rather than 80 ?
> > Here is a screenshot of my email client:
> >   http://www.chiark.greenend.org.uk/~ijackson/volatile/2012/wrap-damage.png
> 
> If someone tells me the rune to put into vimrc such that "gqj" does this
> then sure.

I asked IRC and people said:

  :q! emacs

and

  :set wm=10

Take your pick :-).
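
(For what it's worth: 'wrapmargin' only applies to automatic wrapping
while typing, and is measured from the right window edge, so "wm=10"
in an 80-column window wraps near column 70.  The gq operator formats
to 'textwidth' instead, so for "gqj" the rune is rather:

  :set textwidth=70

after which gqj (or gqip for a whole paragraph) reformats to 70
columns, and newly typed text auto-wraps there too.)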

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 14:54:42 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 14:54:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swwn0-0002mE-SN; Thu, 02 Aug 2012 14:54:14 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Swwmz-0002m9-Lt
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 14:54:13 +0000
Received: from [85.158.143.99:43582] by server-3.bemta-4.messagelabs.com id
	AF/AA-01511-5949A105; Thu, 02 Aug 2012 14:54:13 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-14.tower-216.messagelabs.com!1343919252!20335463!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4145 invoked from network); 2 Aug 2012 14:54:12 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-14.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 14:54:12 -0000
X-IronPort-AV: E=Sophos;i="4.77,701,1336348800"; d="scan'208";a="13825393"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 14:54:12 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 2 Aug 2012 15:54:12 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Swwmy-0008HK-48; Thu, 02 Aug 2012 14:54:12 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Swwmy-0005Qw-3B;
	Thu, 02 Aug 2012 15:54:12 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20506.38036.72752.596841@mariner.uk.xensource.com>
Date: Thu, 2 Aug 2012 15:54:12 +0100
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1343918743.27221.164.camel@zakaz.uk.xensource.com>
References: <f345fbfa8f50975b5c32.1343900314@cosworth.uk.xensource.com>
	<1343900722.27221.107.camel@zakaz.uk.xensource.com>
	<20506.37349.96739.832156@mariner.uk.xensource.com>
	<1343918743.27221.164.camel@zakaz.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: Greg Wettstein <greg@wind.enjellic.com>,
	Roger Pau Monne <roger.pau@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] libxl: fix cleanup of tap devices in
 libxl__device_destroy
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [Xen-devel] [PATCH] libxl: fix cleanup of tap devices in libxl__device_destroy"):
> On Thu, 2012-08-02 at 15:42 +0100, Ian Jackson wrote:
> > In general in this destroy path we should consider whether, on
> > failure, we should abandon the cleanup or note the error and carry on.
> 
> Since this is a destroy operation note it and carry on I think, so as to
> clean up as much as we are able.

Right.  Good, then I guess it is OK.  (We do risk leaking some stuff in
xenstore which you'd have to use low-level tools to remove but that's
probably acceptable if things are so bad you can't remove stuff from
xenstore.)
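
(The low-level cleanup would be along these lines; the domid/devid
here are made-up examples, the real paths being the device's backend
and frontend paths:

  # inspect what was leaked
  xenstore-ls /local/domain/0/backend/vbd/7/51712
  # remove the stale backend and frontend subtrees
  xenstore-rm /local/domain/0/backend/vbd/7/51712
  xenstore-rm /local/domain/7/device/vbd/51712

)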

> BTW, is there a libxl__xs_transaction_abort missing in this function
> too?

Yes.  I have edited my patch to fix this.  (See below; I will repost
it with v5 of my series, later today I think.)

Ian.

From: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [PATCH] libxl: unify libxl__device_destroy and device_hotplug_done

device_hotplug_done contains an open-coded but improved version of
libxl__device_destroy.  So move the contents of device_hotplug_done
into libxl__device_destroy, deleting the old code, and replace it at
its old location with a function call.

Add the missing call to libxl__xs_transaction_abort (which was present
in neither version and technically speaking is always a no-op with
this code as it stands at the moment because no-one does "goto out"
other than after libxl__xs_transaction_start or _commit).

Also fix the error handling: the rc from the destroy should be
propagated into the aodev.

Reported-by: Ian Campbell <Ian.Campbell@citrix.com>
Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>

-
Changes in v5 of series:
 * Also add missing xs abort.

---
 tools/libxl/libxl_device.c |   36 +++++++++++++-----------------------
 1 files changed, 13 insertions(+), 23 deletions(-)

diff --git a/tools/libxl/libxl_device.c b/tools/libxl/libxl_device.c
index da0c3ea..95b169e 100644
--- a/tools/libxl/libxl_device.c
+++ b/tools/libxl/libxl_device.c
@@ -513,22 +513,24 @@ int libxl__device_destroy(libxl__gc *gc, libxl__device *dev)
     char *be_path = libxl__device_backend_path(gc, dev);
     char *fe_path = libxl__device_frontend_path(gc, dev);
     xs_transaction_t t = 0;
-    int rc = 0;
+    int rc;
+
+    for (;;) {
+        rc = libxl__xs_transaction_start(gc, &t);
+        if (rc) goto out;
 
-    do {
-        t = xs_transaction_start(CTX->xsh);
         libxl__xs_path_cleanup(gc, t, fe_path);
         libxl__xs_path_cleanup(gc, t, be_path);
-        rc = !xs_transaction_end(CTX->xsh, t, 0);
-    } while (rc && errno == EAGAIN);
-    if (rc) {
-        LOGE(ERROR, "unable to finish transaction");
-        goto out;
+
+        rc = libxl__xs_transaction_commit(gc, &t);
+        if (!rc) break;
+        if (rc < 0) goto out;
     }
 
     libxl__device_destroy_tapdisk(gc, be_path);
 
 out:
+    libxl__xs_transaction_abort(gc, &t);
     return rc;
 }
 
@@ -993,29 +995,17 @@ error:
 static void device_hotplug_done(libxl__egc *egc, libxl__ao_device *aodev)
 {
     STATE_AO_GC(aodev->ao);
-    char *be_path = libxl__device_backend_path(gc, aodev->dev);
-    char *fe_path = libxl__device_frontend_path(gc, aodev->dev);
-    xs_transaction_t t = 0;
     int rc;
 
     device_hotplug_clean(gc, aodev);
 
     /* Clean xenstore if it's a disconnection */
     if (aodev->action == DEVICE_DISCONNECT) {
-        for (;;) {
-            rc = libxl__xs_transaction_start(gc, &t);
-            if (rc) goto out;
-
-            libxl__xs_path_cleanup(gc, t, fe_path);
-            libxl__xs_path_cleanup(gc, t, be_path);
-
-            rc = libxl__xs_transaction_commit(gc, &t);
-            if (!rc) break;
-            if (rc < 0) goto out;
-        }
+        rc = libxl__device_destroy(gc, aodev->dev);
+        if (!aodev->rc)
+            aodev->rc = rc;
     }
 
-out:
     aodev->callback(egc, aodev);
     return;
 }
-- 
tg: (7fc019d..) t/xen/xl.device-destroy-unify (depends on: t/xen/gitignore)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 14:56:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 14:56:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwwoM-0002qd-Bi; Thu, 02 Aug 2012 14:55:38 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SwwoK-0002qX-L9
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 14:55:36 +0000
Received: from [85.158.138.51:64131] by server-10.bemta-3.messagelabs.com id
	52/FD-21993-7E49A105; Thu, 02 Aug 2012 14:55:35 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-174.messagelabs.com!1343919334!23751654!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29634 invoked from network); 2 Aug 2012 14:55:35 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-14.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 14:55:35 -0000
X-IronPort-AV: E=Sophos;i="4.77,701,1336348800"; d="scan'208";a="13825415"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 14:55:06 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Thu, 2 Aug 2012
	15:55:06 +0100
Message-ID: <1343919303.27221.165.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Thu, 2 Aug 2012 15:55:03 +0100
In-Reply-To: <20506.37297.264426.80222@mariner.uk.xensource.com>
References: <f345fbfa8f50975b5c32.1343900314@cosworth.uk.xensource.com>
	<20506.37297.264426.80222@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Greg Wettstein <greg@wind.enjellic.com>,
	Roger Pau Monne <roger.pau@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] libxl: fix cleanup of tap devices in
 libxl__device_destroy
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-02 at 15:41 +0100, Ian Jackson wrote:
> Ian Campbell writes ("[PATCH] libxl: fix cleanup of tap devices in libxl__device_destroy"):
> > libxl: fix cleanup of tap devices in libxl__device_destroy
> > 
> > We pass be_path to tapdisk_destroy but we've already deleted it so it fails to
> > read tapdisk-params. However it appears that we need to destroy the tap device
> > after tearing down xenstore, to avoid the leak reported by Greg Wettstein in
> > <201207312141.q6VLfJje012656@wind.enjellic.com>.
> > 
> > So read the tapdisk-params in the cleanup transaction, before the remove, and
> > pass that down to destroy_tapdisk instead. tapdisk-params may of course be NULL
> > if the device isn't a tap device.
> > 
> > There is no need to tear down the tap device from libxl__initiate_device_remove
> > since this ultimately calls libxl__device_destroy.
> > 
> > Propagate and log errors from libxl__device_destroy_tapdisk.
> 
> Can you please wrap your commit messages to 70ish rather than 80 ?
> Here is a screenshot of my email client:
>   http://www.chiark.greenend.org.uk/~ijackson/volatile/2012/wrap-damage.png
> 
> The code all looks good.  Just one comment:
> 
> >  int libxl__device_destroy(libxl__gc *gc, libxl__device *dev)
> >  {
> ...
> > @@ -531,6 +533,9 @@ int libxl__device_destroy(libxl__gc *gc,
> >          rc = libxl__xs_transaction_start(gc, &t);
> >          if (rc) goto out;
> >  
> > +        /* May not exist if this is not a tap device */
> > +        tapdisk_params = libxl__xs_read(gc, t, tapdisk_path);
> 
> You can still use libxl__xs_read_checked.  It considers ENOENT a
> success (and therefore doesn't log about it).

tapdisk_params cannot be const, as read_checked requires, because
tapdisk_destroy modifies the string. I suppose I could add a strdup
inside.


> 
> Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 15:12:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 15:12:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swx3t-0003Jq-FY; Thu, 02 Aug 2012 15:11:41 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Swx3r-0003Ja-Eb
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 15:11:39 +0000
Received: from [85.158.139.83:54948] by server-2.bemta-5.messagelabs.com id
	4C/0D-04598-AA89A105; Thu, 02 Aug 2012 15:11:38 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-16.tower-182.messagelabs.com!1343920298!22618498!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3330 invoked from network); 2 Aug 2012 15:11:38 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 15:11:38 -0000
X-IronPort-AV: E=Sophos;i="4.77,701,1336348800"; d="scan'208";a="13825926"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 15:11:21 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 2 Aug 2012 16:11:21 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Swx3Z-0008RW-9O; Thu, 02 Aug 2012 15:11:21 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Swx3Z-00063x-8c;
	Thu, 02 Aug 2012 16:11:21 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20506.39065.252439.380882@mariner.uk.xensource.com>
Date: Thu, 2 Aug 2012 16:11:21 +0100
To: Ian Campbell <ian.campbell@citrix.com>
In-Reply-To: <6b09cb00e9f4d2dcea48.1343823896@cosworth.uk.xensource.com>
References: <6b09cb00e9f4d2dcea48.1343823896@cosworth.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH V2] libxl: make libxl_device_pci_{add, remove,
 destroy} interfaces asynchronous
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("[PATCH V2] libxl: make libxl_device_pci_{add, remove, destroy} interfaces asynchronous"):
> libxl: make libxl_device_pci_{add,remove,destroy} interfaces asynchronous

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 15:12:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 15:12:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swx3t-0003Jj-4h; Thu, 02 Aug 2012 15:11:41 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Swx3q-0003JZ-Ru
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 15:11:39 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1343920292!2053263!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17043 invoked from network); 2 Aug 2012 15:11:32 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 15:11:32 -0000
X-IronPort-AV: E=Sophos;i="4.77,701,1336348800"; d="scan'208";a="13825898"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 15:10:52 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 2 Aug 2012 16:10:52 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Swx36-0008RE-64; Thu, 02 Aug 2012 15:10:52 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Swx36-00063k-48;
	Thu, 02 Aug 2012 16:10:52 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20506.39036.116372.505121@mariner.uk.xensource.com>
Date: Thu, 2 Aug 2012 16:10:52 +0100
To: Ian Campbell <ian.campbell@citrix.com>
In-Reply-To: <5ba5402335fe0365d2d0.1343821923@cosworth.uk.xensource.com>
References: <5ba5402335fe0365d2d0.1343821923@cosworth.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] libxl: only read script once in
	libxl__hotplug_*
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("[PATCH] libxl: only read script once in libxl__hotplug_*"):
> libxl: only read script once in libxl__hotplug_*
> 
> instead of duplicating the error handling etc in get_hotplug_env
> just pass the script already read by the caller down.

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 15:27:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 15:27:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwxI7-0003d4-Tv; Thu, 02 Aug 2012 15:26:23 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1SwxI7-0003cz-1J
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 15:26:23 +0000
Received: from [85.158.139.83:45013] by server-4.bemta-5.messagelabs.com id
	76/4F-27831-E1C9A105; Thu, 02 Aug 2012 15:26:22 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-2.tower-182.messagelabs.com!1343921180!29935020!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNjkwMjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16233 invoked from network); 2 Aug 2012 15:26:21 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 15:26:21 -0000
X-IronPort-AV: E=Sophos;i="4.77,701,1336363200"; d="scan'208";a="203950081"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 11:26:03 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 2 Aug 2012 11:26:03 -0400
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1SwxHm-00042H-T8;
	Thu, 02 Aug 2012 16:26:02 +0100
MIME-Version: 1.0
Message-ID: <patchbomb.1343921156@andrewcoop.uk.xensource.com>
User-Agent: Mercurial-patchbomb/2.2.3
Date: Thu, 2 Aug 2012 16:25:56 +0100
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: xen-devel@lists.xen.org
Cc: Ian Campbell <Ian.Campbell@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [Xen-devel] [PATCH 0 of 2] [RFC] Tools configuration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is an RFC pair of patches to give ./configure finer-grained control
over which tools are actually built.

There should be no change in which tools are built by default.

./configure needs to be regenerated as a result of the second patch.
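The resulting command-line semantics can be sketched as follows. This is a minimal shell illustration, not the real configure script: the --disable-* flag names are taken from patch 2, the actual parsing is generated by autoconf, and the two tools shown here are picked arbitrarily.

```shell
# Sketch of the "enabled by default, opt out with --disable-*" behaviour
# that the AX_ARG_DEFAULT_ENABLE additions provide. Illustrative only.
set -- --disable-remus          # simulate: ./configure --disable-remus
xenstat=y; remus=y              # every tool defaults to enabled
for arg in "$@"; do
    case $arg in
        --disable-xenstat) xenstat=n ;;
        --disable-remus)   remus=n ;;
    esac
done
echo "xenstat=$xenstat remus=$remus"
```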

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 15:27:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 15:27:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwxIA-0003dI-9J; Thu, 02 Aug 2012 15:26:26 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1SwxI8-0003d6-FY
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 15:26:24 +0000
Received: from [85.158.139.83:45131] by server-8.bemta-5.messagelabs.com id
	56/3E-10278-F1C9A105; Thu, 02 Aug 2012 15:26:23 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-2.tower-182.messagelabs.com!1343921180!29935020!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNjkwMjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16616 invoked from network); 2 Aug 2012 15:26:22 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 15:26:22 -0000
X-IronPort-AV: E=Sophos;i="4.77,701,1336363200"; d="scan'208";a="203950083"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 11:26:03 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 2 Aug 2012 11:26:03 -0400
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1SwxHm-00042H-UU;
	Thu, 02 Aug 2012 16:26:02 +0100
MIME-Version: 1.0
X-Mercurial-Node: ed70a016d37553b041ca684b363c7db2f4de886b
Message-ID: <ed70a016d37553b041ca.1343921158@andrewcoop.uk.xensource.com>
In-Reply-To: <patchbomb.1343921156@andrewcoop.uk.xensource.com>
References: <patchbomb.1343921156@andrewcoop.uk.xensource.com>
User-Agent: Mercurial-patchbomb/2.2.3
Date: Thu, 2 Aug 2012 16:25:58 +0100
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: xen-devel@lists.xen.org
Cc: Ian Campbell <Ian.Campbell@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [Xen-devel] [PATCH 2 of 2] tools/configure: [RFC] Allow all tools
 to be ./configure'd on or off
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

diff -r 4307d512fb26 -r ed70a016d375 config/Tools.mk.in
--- a/config/Tools.mk.in
+++ b/config/Tools.mk.in
@@ -50,6 +50,29 @@ CONFIG_LOMOUNT      := @lomount@
 CONFIG_OVMF         := @ovmf@
 CONFIG_ROMBIOS      := @rombios@
 CONFIG_SEABIOS      := @seabios@
+CONFIG_LIBXC        := @libxc@
+CONFIG_FLASK        := @flask@
+CONFIG_XENSTORE     := @xenstore@
+CONFIG_MISC         := @misctools@
+CONFIG_EXAMPLES     := @examples@
+CONFIG_HOTPLUG      := @hotplug@
+CONFIG_CONSOLE      := @console@
+CONFIG_XENTRACE     := @xentrace@
+CONFIG_XENMON       := @xenmon@
+CONFIG_XENSTAT      := @xenstat@
+CONFIG_FSIMAGE      := @fsimage@
+CONFIG_XENPM        := @xenpm@
+CONFIG_LIBXL        := @libxl@
+CONFIG_REMUS        := @remus@
+CONFIG_MEMSHR       := @memshr@
+CONFIG_BLKTAP       := @blktap@
+CONFIG_BLKTAP2      := @blktap2@
+CONFIG_BACKENDD     := @backendd@
+CONFIG_LIBVCHAN     := @libvchan@
+CONFIG_FIRMWARE     := @firmware@
+CONFIG_XENPAGING    := @xenpaging@
+CONFIG_GDBSX        := @gdbsx@
+CONFIG_KDD          := @kdd@
 
 #System options
 CONFIG_SYSTEM_LIBAIO:= @system_aio@
diff -r 4307d512fb26 -r ed70a016d375 tools/Makefile
--- a/tools/Makefile
+++ b/tools/Makefile
@@ -7,48 +7,48 @@ endif
 
 SUBDIRS-y :=
 SUBDIRS-y += include
-SUBDIRS-y += libxc
-SUBDIRS-y += flask
-SUBDIRS-y += xenstore
-SUBDIRS-y += misc
-SUBDIRS-y += examples
-SUBDIRS-y += hotplug
-SUBDIRS-y += xentrace
+SUBDIRS-$(CONFIG_LIBXC) += libxc
+SUBDIRS-$(CONFIG_FLASK) += flask
+SUBDIRS-$(CONFIG_XENSTORE) += xenstore
+SUBDIRS-$(CONFIG_MISC) += misc
+SUBDIRS-$(CONFIG_EXAMPLES) += examples
+SUBDIRS-$(CONFIG_HOTPLUG) += hotplug
+SUBDIRS-$(CONFIG_XENTRACE) += xentrace
 SUBDIRS-$(CONFIG_XCUTILS) += xcutils
-SUBDIRS-y += console
-SUBDIRS-y += xenmon
+SUBDIRS-$(CONFIG_CONSOLE) += console
+SUBDIRS-$(CONFIG_XENMON) += xenmon
 SUBDIRS-$(VTPM_TOOLS) += vtpm_manager
 SUBDIRS-$(VTPM_TOOLS) += vtpm
-SUBDIRS-y += xenstat
-SUBDIRS-y += libfsimage
+SUBDIRS-$(CONFIG_XENSTAT) += xenstat
+SUBDIRS-$(CONFIG_FSIMAGE) += libfsimage
 SUBDIRS-$(LIBXENAPI_BINDINGS) += libxen
-SUBDIRS-y += xenpmd
-SUBDIRS-y += libxl
-SUBDIRS-y += remus
+SUBDIRS-$(CONFIG_XENPM) += xenpmd
+SUBDIRS-$(CONFIG_LIBXL) += libxl
+SUBDIRS-$(CONFIG_REMUS) += remus
 SUBDIRS-$(CONFIG_TESTS) += tests
 
 # Linux specific tools
 ifeq ($(CONFIG_Linux),y)
 SUBDIRS-y += $(SUBDIRS-libaio)
-SUBDIRS-y += memshr
-SUBDIRS-y += blktap
-SUBDIRS-y += blktap2
-SUBDIRS-y += libvchan
+SUBDIRS-$(CONFIG_MEMSHR) += memshr
+SUBDIRS-$(CONFIG_BLKTAP) += blktap
+SUBDIRS-$(CONFIG_BLKTAP2) += blktap2
+SUBDIRS-$(CONFIG_LIBVCHAN) += libvchan
 endif
 
 # NetBSD specific tools
 ifeq ($(CONFIG_NetBSD),y)
 SUBDIRS-y += $(SUBDIRS-libaio)
-SUBDIRS-y += blktap2
-SUBDIRS-y += xenbackendd
+SUBDIRS-$(CONFIG_BLKTAP2) += blktap2
+SUBDIRS-$(CONFIG_BACKENDD) += xenbackendd
 endif
 
 # x86 specific tools
 ifeq ($(CONFIG_X86),y)
-SUBDIRS-y += firmware
-SUBDIRS-y += xenpaging
-SUBDIRS-y += debugger/gdbsx
-SUBDIRS-y += debugger/kdd
+SUBDIRS-$(CONFIG_FIRMWARE) += firmware
+SUBDIRS-$(CONFIG_XENPAGING) += xenpaging
+SUBDIRS-$(CONFIG_GDBSX) += debugger/gdbsx
+SUBDIRS-$(CONFIG_KDD) += debugger/kdd
 endif
 
 # do not recurse in to a dir we are about to delete
diff -r 4307d512fb26 -r ed70a016d375 tools/configure.ac
--- a/tools/configure.ac
+++ b/tools/configure.ac
@@ -49,6 +49,29 @@ AX_ARG_DEFAULT_DISABLE([ovmf], [Enable O
 AX_ARG_DEFAULT_ENABLE([rombios], [Disable ROM BIOS])
 AX_ARG_DEFAULT_ENABLE([seabios], [Disable SeaBIOS])
 AX_ARG_DEFAULT_ENABLE([debug], [Disable debug build of tools])
+AX_ARG_DEFAULT_ENABLE([libxc], [Disable xc])
+AX_ARG_DEFAULT_ENABLE([flask], [Disable flask])
+AX_ARG_DEFAULT_ENABLE([xenstore], [Disable xenstore])
+AX_ARG_DEFAULT_ENABLE([misctools], [Disable misc tools])
+AX_ARG_DEFAULT_ENABLE([examples], [Disable examples])
+AX_ARG_DEFAULT_ENABLE([hotplug], [Disable hotplug])
+AX_ARG_DEFAULT_ENABLE([xentrace], [Disable xentrace])
+AX_ARG_DEFAULT_ENABLE([console], [Disable guest console])
+AX_ARG_DEFAULT_ENABLE([xenmon], [Disable xenmon])
+AX_ARG_DEFAULT_ENABLE([xenstat], [Disable xenstat])
+AX_ARG_DEFAULT_ENABLE([fsimage], [Disable libfsimage])
+AX_ARG_DEFAULT_ENABLE([xenpm], [Disable xenpm])
+AX_ARG_DEFAULT_ENABLE([libxl], [Disable xl])
+AX_ARG_DEFAULT_ENABLE([remus], [Disable remus])
+AX_ARG_DEFAULT_ENABLE([memshr], [Disable memshr])
+AX_ARG_DEFAULT_ENABLE([blktap], [Disable blktap])
+AX_ARG_DEFAULT_ENABLE([blktap2], [Disable blktap2])
+AX_ARG_DEFAULT_ENABLE([libvchan], [Disable libvchan])
+AX_ARG_DEFAULT_ENABLE([backendd], [Disable xenbackendd])
+AX_ARG_DEFAULT_ENABLE([firmware], [Disable firmware])
+AX_ARG_DEFAULT_ENABLE([xenpaging], [Disable xenpaging])
+AX_ARG_DEFAULT_ENABLE([gdbsx], [Disable gdbsx])
+AX_ARG_DEFAULT_ENABLE([kdd], [Disable kdd])
 
 AC_ARG_VAR([PREPEND_INCLUDES],
     [List of include folders to prepend to CFLAGS (without -I)])
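The Makefile half of the patch relies on a standard GNU make idiom: appending to SUBDIRS-$(CONFIG_FOO) lands the directory in SUBDIRS-y (which gets built) when configure substitutes CONFIG_FOO as y, and in an unused SUBDIRS-n list otherwise. A throwaway demonstration (demo.mk is a scratch file, not part of the tree):

```shell
# SUBDIRS-$(CONFIG_FOO) expands to SUBDIRS-y or SUBDIRS-n depending on
# what configure set CONFIG_FOO to; only SUBDIRS-y is ever recursed into.
cat > demo.mk <<'EOF'
CONFIG_LIBXC := y
CONFIG_REMUS := n
SUBDIRS-y :=
SUBDIRS-$(CONFIG_LIBXC) += libxc
SUBDIRS-$(CONFIG_REMUS) += remus
$(info built: $(SUBDIRS-y))
all: ;
EOF
make -s -f demo.mk    # prints: built: libxc
```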

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 15:27:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 15:27:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwxIA-0003dI-9J; Thu, 02 Aug 2012 15:26:26 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1SwxI8-0003d6-FY
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 15:26:24 +0000
Received: from [85.158.139.83:45131] by server-8.bemta-5.messagelabs.com id
	56/3E-10278-F1C9A105; Thu, 02 Aug 2012 15:26:23 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-2.tower-182.messagelabs.com!1343921180!29935020!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNjkwMjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16616 invoked from network); 2 Aug 2012 15:26:22 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 15:26:22 -0000
X-IronPort-AV: E=Sophos;i="4.77,701,1336363200"; d="scan'208";a="203950083"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 11:26:03 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 2 Aug 2012 11:26:03 -0400
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1SwxHm-00042H-UU;
	Thu, 02 Aug 2012 16:26:02 +0100
MIME-Version: 1.0
X-Mercurial-Node: ed70a016d37553b041ca684b363c7db2f4de886b
Message-ID: <ed70a016d37553b041ca.1343921158@andrewcoop.uk.xensource.com>
In-Reply-To: <patchbomb.1343921156@andrewcoop.uk.xensource.com>
References: <patchbomb.1343921156@andrewcoop.uk.xensource.com>
User-Agent: Mercurial-patchbomb/2.2.3
Date: Thu, 2 Aug 2012 16:25:58 +0100
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: xen-devel@lists.xen.org
Cc: Ian Campbell <Ian.Campbell@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [Xen-devel] [PATCH 2 of 2] tools/configure: [RFC] Allow all tools
 to be ./configure'd on or off
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

diff -r 4307d512fb26 -r ed70a016d375 config/Tools.mk.in
--- a/config/Tools.mk.in
+++ b/config/Tools.mk.in
@@ -50,6 +50,29 @@ CONFIG_LOMOUNT      := @lomount@
 CONFIG_OVMF         := @ovmf@
 CONFIG_ROMBIOS      := @rombios@
 CONFIG_SEABIOS      := @seabios@
+CONFIG_LIBXC        := @libxc@
+CONFIG_FLASK        := @flask@
+CONFIG_XENSTORE     := @xenstore@
+CONFIG_MISC         := @misctools@
+CONFIG_EXAMPLES     := @examples@
+CONFIG_HOTPLUG      := @hotplug@
+CONFIG_CONSOLE      := @console@
+CONFIG_XENTRACE     := @xentrace@
+CONFIG_XENMON       := @xenmon@
+CONFIG_XENSTAT      := @xenstat@
+CONFIG_FSIMAGE      := @fsimage@
+CONFIG_XENPM        := @xenpm@
+CONFIG_LIBXL        := @libxl@
+CONFIG_REMUS        := @remus@
+CONFIG_MEMSHR       := @memshr@
+CONFIG_BLKTAP       := @blktap@
+CONFIG_BLKTAP2      := @blktap2@
+CONFIG_BACKENDD     := @backendd@
+CONFIG_LIBVCHAN     := @libvchan@
+CONFIG_FIRMWARE     := @firmware@
+CONFIG_XENPAGING    := @xenpaging@
+CONFIG_GDBSX        := @gdbsx@
+CONFIG_KDD          := @kdd@
 
 #System options
 CONFIG_SYSTEM_LIBAIO:= @system_aio@
diff -r 4307d512fb26 -r ed70a016d375 tools/Makefile
--- a/tools/Makefile
+++ b/tools/Makefile
@@ -7,48 +7,48 @@ endif
 
 SUBDIRS-y :=
 SUBDIRS-y += include
-SUBDIRS-y += libxc
-SUBDIRS-y += flask
-SUBDIRS-y += xenstore
-SUBDIRS-y += misc
-SUBDIRS-y += examples
-SUBDIRS-y += hotplug
-SUBDIRS-y += xentrace
+SUBDIRS-$(CONFIG_LIBXC) += libxc
+SUBDIRS-$(CONFIG_FLASK) += flask
+SUBDIRS-$(CONFIG_XENSTORE) += xenstore
+SUBDIRS-$(CONFIG_MISC) += misc
+SUBDIRS-$(CONFIG_EXAMPLES) += examples
+SUBDIRS-$(CONFIG_HOTPLUG) += hotplug
+SUBDIRS-$(CONFIG_XENTRACE) += xentrace
 SUBDIRS-$(CONFIG_XCUTILS) += xcutils
-SUBDIRS-y += console
-SUBDIRS-y += xenmon
+SUBDIRS-$(CONFIG_CONSOLE) += console
+SUBDIRS-$(CONFIG_XENMON) += xenmon
 SUBDIRS-$(VTPM_TOOLS) += vtpm_manager
 SUBDIRS-$(VTPM_TOOLS) += vtpm
-SUBDIRS-y += xenstat
-SUBDIRS-y += libfsimage
+SUBDIRS-$(CONFIG_XENSTAT) += xenstat
+SUBDIRS-$(CONFIG_FSIMAGE) += libfsimage
 SUBDIRS-$(LIBXENAPI_BINDINGS) += libxen
-SUBDIRS-y += xenpmd
-SUBDIRS-y += libxl
-SUBDIRS-y += remus
+SUBDIRS-$(CONFIG_XENPM) += xenpmd
+SUBDIRS-$(CONFIG_LIBXL) += libxl
+SUBDIRS-$(CONFIG_REMUS) += remus
 SUBDIRS-$(CONFIG_TESTS) += tests
 
 # Linux specific tools
 ifeq ($(CONFIG_Linux),y)
 SUBDIRS-y += $(SUBDIRS-libaio)
-SUBDIRS-y += memshr
-SUBDIRS-y += blktap
-SUBDIRS-y += blktap2
-SUBDIRS-y += libvchan
+SUBDIRS-$(CONFIG_MEMSHR) += memshr
+SUBDIRS-$(CONFIG_BLKTAP) += blktap
+SUBDIRS-$(CONFIG_BLKTAP2) += blktap2
+SUBDIRS-$(CONFIG_LIBVCHAN) += libvchan
 endif
 
 # NetBSD specific tools
 ifeq ($(CONFIG_NetBSD),y)
 SUBDIRS-y += $(SUBDIRS-libaio)
-SUBDIRS-y += blktap2
-SUBDIRS-y += xenbackendd
+SUBDIRS-$(CONFIG_BLKTAP2) += blktap2
+SUBDIRS-$(CONFIG_BACKENDD) += xenbackendd
 endif
 
 # x86 specific tools
 ifeq ($(CONFIG_X86),y)
-SUBDIRS-y += firmware
-SUBDIRS-y += xenpaging
-SUBDIRS-y += debugger/gdbsx
-SUBDIRS-y += debugger/kdd
+SUBDIRS-$(CONFIG_FIRMWARE) += firmware
+SUBDIRS-$(CONFIG_XENPAGING) += xenpaging
+SUBDIRS-$(CONFIG_GDBSX) += debugger/gdbsx
+SUBDIRS-$(CONFIG_KDD) += debugger/kdd
 endif
 
 # do not recurse in to a dir we are about to delete
diff -r 4307d512fb26 -r ed70a016d375 tools/configure.ac
--- a/tools/configure.ac
+++ b/tools/configure.ac
@@ -49,6 +49,29 @@ AX_ARG_DEFAULT_DISABLE([ovmf], [Enable O
 AX_ARG_DEFAULT_ENABLE([rombios], [Disable ROM BIOS])
 AX_ARG_DEFAULT_ENABLE([seabios], [Disable SeaBIOS])
 AX_ARG_DEFAULT_ENABLE([debug], [Disable debug build of tools])
+AX_ARG_DEFAULT_ENABLE([libxc], [Disable xc])
+AX_ARG_DEFAULT_ENABLE([flask], [Disable flask])
+AX_ARG_DEFAULT_ENABLE([xenstore], [Disable xenstore])
+AX_ARG_DEFAULT_ENABLE([misctools], [Disable misc tools])
+AX_ARG_DEFAULT_ENABLE([examples], [Disable examples])
+AX_ARG_DEFAULT_ENABLE([hotplug], [Disable hotplug])
+AX_ARG_DEFAULT_ENABLE([xentrace], [Disable xentrace])
+AX_ARG_DEFAULT_ENABLE([console], [Disable guest console])
+AX_ARG_DEFAULT_ENABLE([xenmon], [Disable xenmon])
+AX_ARG_DEFAULT_ENABLE([xenstat], [Disable xenstat])
+AX_ARG_DEFAULT_ENABLE([fsimage], [Disable libfsimage])
+AX_ARG_DEFAULT_ENABLE([xenpm], [Disable xenpm])
+AX_ARG_DEFAULT_ENABLE([libxl], [Disable xl])
+AX_ARG_DEFAULT_ENABLE([remus], [Disable remus])
+AX_ARG_DEFAULT_ENABLE([memshr], [Disable memshr])
+AX_ARG_DEFAULT_ENABLE([blktap], [Disable blktap])
+AX_ARG_DEFAULT_ENABLE([blktap2], [Disable blktap2])
+AX_ARG_DEFAULT_ENABLE([libvchan], [Disable libvchan])
+AX_ARG_DEFAULT_ENABLE([backendd], [Disable xenbackendd])
+AX_ARG_DEFAULT_ENABLE([firmware], [Disable firmware])
+AX_ARG_DEFAULT_ENABLE([xenpaging], [Disable xenpaging])
+AX_ARG_DEFAULT_ENABLE([gdbsx], [Disable gdbsx])
+AX_ARG_DEFAULT_ENABLE([kdd], [Disable kdd])
 
 AC_ARG_VAR([PREPEND_INCLUDES],
     [List of include folders to prepend to CFLAGS (without -I)])

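For readers following the mechanism in this patch: each `AX_ARG_DEFAULT_ENABLE([libxc], ...)` line gives `./configure` a `--disable-libxc` style option and substitutes `@libxc@` into Config.mk as `CONFIG_LIBXC` (presumably as `y` or `n`, matching the `CONFIG_FIRMWARE`/`CONFIG_XENPAGING` hunks above). The Makefile side then relies on make's computed variable names: a disabled option simply appends to the never-read `SUBDIRS-n` list. A minimal sketch of that idiom, with made-up `CONFIG_FOO`/`CONFIG_BAR` values for illustration:

```make
# Hypothetical values as ./configure might substitute them:
CONFIG_FOO := y                # --enable-foo (the default)
CONFIG_BAR := n                # --disable-bar

SUBDIRS-y :=
SUBDIRS-$(CONFIG_FOO) += foo   # expands to: SUBDIRS-y += foo
SUBDIRS-$(CONFIG_BAR) += bar   # expands to: SUBDIRS-n += bar, which nothing reads

all:
	@echo SUBDIRS-y = $(SUBDIRS-y)   # prints: SUBDIRS-y = foo
```

Only the `SUBDIRS-y` list is ever walked by the build, so directories whose option expanded to `n` are silently skipped.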
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 15:27:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 15:27:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwxIP-0003eo-MQ; Thu, 02 Aug 2012 15:26:41 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1SwxIN-0003eS-Rv
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 15:26:40 +0000
Received: from [85.158.143.35:55358] by server-1.bemta-4.messagelabs.com id
	84/7A-24392-F2C9A105; Thu, 02 Aug 2012 15:26:39 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1343921178!3936335!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNjkwMjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11823 invoked from network); 2 Aug 2012 15:26:20 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 15:26:20 -0000
X-IronPort-AV: E=Sophos;i="4.77,701,1336363200"; d="scan'208";a="203950082"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 11:26:03 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 2 Aug 2012 11:26:03 -0400
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1SwxHm-00042H-Tt;
	Thu, 02 Aug 2012 16:26:02 +0100
MIME-Version: 1.0
X-Mercurial-Node: 4307d512fb26572e450e3a81fb9d2b8974733f33
Message-ID: <4307d512fb26572e450e.1343921157@andrewcoop.uk.xensource.com>
In-Reply-To: <patchbomb.1343921156@andrewcoop.uk.xensource.com>
References: <patchbomb.1343921156@andrewcoop.uk.xensource.com>
User-Agent: Mercurial-patchbomb/2.2.3
Date: Thu, 2 Aug 2012 16:25:57 +0100
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: xen-devel@lists.xen.org
Cc: Ian Campbell <Ian.Campbell@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [Xen-devel] [PATCH 1 of 2] tools/makefile: [RFC] Group
 system/architecture specific tools
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Group the Linux, NetBSD and x86 specific tools together when filling the
SUBDIRS-y list.  This prepares for easier configuration in the following patch.

No functional change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

diff -r 3d622e2c7cfb -r 4307d512fb26 tools/Makefile
--- a/tools/Makefile
+++ b/tools/Makefile
@@ -15,22 +15,41 @@ SUBDIRS-y += examples
 SUBDIRS-y += hotplug
 SUBDIRS-y += xentrace
 SUBDIRS-$(CONFIG_XCUTILS) += xcutils
-SUBDIRS-$(CONFIG_X86) += firmware
 SUBDIRS-y += console
 SUBDIRS-y += xenmon
 SUBDIRS-$(VTPM_TOOLS) += vtpm_manager
 SUBDIRS-$(VTPM_TOOLS) += vtpm
 SUBDIRS-y += xenstat
-SUBDIRS-$(CONFIG_Linux) += $(SUBDIRS-libaio)
-SUBDIRS-$(CONFIG_Linux) += memshr 
-SUBDIRS-$(CONFIG_Linux) += blktap
-SUBDIRS-$(CONFIG_Linux) += blktap2
-SUBDIRS-$(CONFIG_NetBSD) += $(SUBDIRS-libaio)
-SUBDIRS-$(CONFIG_NetBSD) += blktap2
-SUBDIRS-$(CONFIG_NetBSD) += xenbackendd
 SUBDIRS-y += libfsimage
 SUBDIRS-$(LIBXENAPI_BINDINGS) += libxen
-SUBDIRS-$(CONFIG_Linux) += libvchan
+SUBDIRS-y += xenpmd
+SUBDIRS-y += libxl
+SUBDIRS-y += remus
+SUBDIRS-$(CONFIG_TESTS) += tests
+
+# Linux specific tools
+ifeq ($(CONFIG_Linux),y)
+SUBDIRS-y += $(SUBDIRS-libaio)
+SUBDIRS-y += memshr
+SUBDIRS-y += blktap
+SUBDIRS-y += blktap2
+SUBDIRS-y += libvchan
+endif
+
+# NetBSD specific tools
+ifeq ($(CONFIG_NetBSD),y)
+SUBDIRS-y += $(SUBDIRS-libaio)
+SUBDIRS-y += blktap2
+SUBDIRS-y += xenbackendd
+endif
+
+# x86 specific tools
+ifeq ($(CONFIG_X86),y)
+SUBDIRS-y += firmware
+SUBDIRS-y += xenpaging
+SUBDIRS-y += debugger/gdbsx
+SUBDIRS-y += debugger/kdd
+endif
 
 # do not recurse in to a dir we are about to delete
 ifneq "$(MAKECMDGOALS)" "distclean"
@@ -38,13 +57,6 @@ SUBDIRS-$(CONFIG_IOEMU) += qemu-xen-trad
 SUBDIRS-$(CONFIG_IOEMU) += qemu-xen-dir
 endif
 
-SUBDIRS-y += xenpmd
-SUBDIRS-y += libxl
-SUBDIRS-y += remus
-SUBDIRS-$(CONFIG_X86) += xenpaging
-SUBDIRS-$(CONFIG_X86) += debugger/gdbsx
-SUBDIRS-$(CONFIG_X86) += debugger/kdd
-SUBDIRS-$(CONFIG_TESTS) += tests
 
 # These don't cross-compile
 ifeq ($(XEN_COMPILE_ARCH),$(XEN_TARGET_ARCH))


From xen-devel-bounces@lists.xen.org Thu Aug 02 15:33:42 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 15:33:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwxOd-00046j-KF; Thu, 02 Aug 2012 15:33:07 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SwxOc-00046c-QE
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 15:33:06 +0000
Received: from [85.158.138.51:41603] by server-4.bemta-3.messagelabs.com id
	E5/2E-29069-1BD9A105; Thu, 02 Aug 2012 15:33:05 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-13.tower-174.messagelabs.com!1343921485!9816009!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22328 invoked from network); 2 Aug 2012 15:31:25 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-13.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 15:31:25 -0000
X-IronPort-AV: E=Sophos;i="4.77,701,1336348800"; d="scan'208";a="13826348"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 15:31:24 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 2 Aug 2012 16:31:24 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1SwxMy-0000CV-4N; Thu, 02 Aug 2012 15:31:24 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1SwxMy-00065H-3R;
	Thu, 02 Aug 2012 16:31:24 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20506.40268.92394.694378@mariner.uk.xensource.com>
Date: Thu, 2 Aug 2012 16:31:24 +0100
To: Andrew Cooper <andrew.cooper3@citrix.com>
In-Reply-To: <ed70a016d37553b041ca.1343921158@andrewcoop.uk.xensource.com>
References: <patchbomb.1343921156@andrewcoop.uk.xensource.com>
	<ed70a016d37553b041ca.1343921158@andrewcoop.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: [Xen-devel] [PATCH 2 of 2] tools/configure: [RFC] Allow all tools
 to be ./configure'd on or off
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Andrew Cooper writes ("[PATCH 2 of 2] tools/configure: [RFC] Allow all tools to be ./configure'd on or off"):
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

I'm far from convinced by this.  Just trying to turn on and off
individual directories isn't always going to work.  And I don't think
that this stage of the 4.2 release is the right time to be inventing
new configure options.

Ian.


From xen-devel-bounces@lists.xen.org Thu Aug 02 15:39:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 15:39:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwxUI-0004Kz-DD; Thu, 02 Aug 2012 15:38:58 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <aware.why@gmail.com>) id 1SwxUG-0004Ks-N0
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 15:38:56 +0000
Received: from [85.158.139.83:22157] by server-2.bemta-5.messagelabs.com id
	4C/31-04598-F0F9A105; Thu, 02 Aug 2012 15:38:55 +0000
X-Env-Sender: aware.why@gmail.com
X-Msg-Ref: server-8.tower-182.messagelabs.com!1343921934!18683999!1
X-Originating-IP: [74.125.83.45]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_20_30,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6977 invoked from network); 2 Aug 2012 15:38:54 -0000
Received: from mail-ee0-f45.google.com (HELO mail-ee0-f45.google.com)
	(74.125.83.45)
	by server-8.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 15:38:54 -0000
Received: by eeke53 with SMTP id e53so2396367eek.32
	for <xen-devel@lists.xen.org>; Thu, 02 Aug 2012 08:38:54 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=j7RJM6VbF45a3rvU9W8Aygr0CMUq9Y0GWXNIa1LnTFo=;
	b=eGJ2juHWGGmMNtPIBXfIcI5XhXR3FBNlnm+FvZSz7Hs1Ke+OzZI40oZgLb7G47wKWi
	xtAnWjnOmyU9fPJGYe9xgrrI6EasZnHi3zhmdIJuH9c8TGds/7Z+zHT0mlNpmNOe7VIf
	SmmJkN+uuX6xFolwxWEzIp7vGt9k6eXJmr7+gFNfYp5KEVk66hOCtRex33UAtYiMC07r
	npQaQ23xhW7X39Syo8RjZ+Y1wBIZDCmqFYPSqFlfIm+T0pVwcP5dyg+A8Q3nat3nUktQ
	x936eqFnbC/+WtsCTvh2+UZbUnQBWaXNOxOw5QeP1BCsjFmuoSqbo2qQGILeJ2P/OuXf
	+dWw==
MIME-Version: 1.0
Received: by 10.14.213.137 with SMTP id a9mr5693476eep.38.1343921933937; Thu,
	02 Aug 2012 08:38:53 -0700 (PDT)
Received: by 10.14.218.200 with HTTP; Thu, 2 Aug 2012 08:38:53 -0700 (PDT)
In-Reply-To: <1343901358.27221.110.camel@zakaz.uk.xensource.com>
References: <CA+ePHTBWQFGFFb=nrLjGf4xLBqMuiQKmc0OYyJZpBjLX4F9rYQ@mail.gmail.com>
	<1343901358.27221.110.camel@zakaz.uk.xensource.com>
Date: Thu, 2 Aug 2012 23:38:53 +0800
Message-ID: <CA+ePHTDUVcrsaE8ztCEFPspdvU97aqP09_9d=0-Krprr5=WOHA@mail.gmail.com>
From: =?UTF-8?B?6ams56OK?= <aware.why@gmail.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [problem in `xl_cmdimpl.c`] Why pid return by
 fork() in parent process is not the same with pid returned by getpid()?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1615520170443261022=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============1615520170443261022==
Content-Type: multipart/alternative; boundary=047d7b621e787a6bd104c64a35cb

--047d7b621e787a6bd104c64a35cb
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

On Thu, Aug 2, 2012 at 5:55 PM, Ian Campbell <Ian.Campbell@citrix.com> wrote:

> On Thu, 2012-08-02 at 10:47 +0100, 马磊 wrote:
>
> >     In xen-4.1.2/tools/libxl/xl_cmdimpl.c : create_domain(), it starts
>
> If you are doing development then targeting xen-unstable would be
> preferable, especially if you are looking at (lib)xl. The xl stuff in
> 4.1 is effectively a preview and it has changed substantially internally
> since then.
>
> > After entering `xl create xp-101.hvm`, you get `Daemon running with PID
> > 26622` on the command line, but the xl log file at
> > /var/log/xen/xl-xp-101.log contains the line `Waiting for domain
> > xp-101 (domid 1) to die [pid 26624]`.
> > Why is 26622 != 26624?  And 26622 cannot be found with `ps -ef`.
> > What happened to it?
>
> There's a call to daemon in there, which will involve a double fork
> IIRC.
>
> Ian.

    What do you mean by `double fork`? As I understand it, the parent exits
after invoking waitpid, and the child goes on to the rest of the code.
    The PID recorded in the xl log file also cannot be found with `ps -ef`.
How should I retrieve the PID of the real daemon process?
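The double fork Ian mentions can be sketched as follows (a minimal, hypothetical Python illustration; xl itself is C, and libxl's actual daemonisation code differs in detail). The parent's fork() returns the PID of an intermediate child, which calls setsid(), forks again, and exits immediately; the surviving grandchild is the real daemon. So the PID the parent printed and the PID the daemon logs belong to different processes, and the intermediate PID vanishes from `ps` almost at once:

```python
import os

def start_daemon():
    """Fork twice; return (first_child_pid, daemon_pid) as seen by the parent."""
    r, w = os.pipe()
    first = os.fork()              # this is the PID the parent can print
    if first == 0:
        # first child: start a new session, fork again, and exit at once
        os.close(r)
        os.setsid()
        if os.fork() != 0:
            os._exit(0)
        # grandchild: the "real" daemon; report its PID through the pipe
        os.write(w, str(os.getpid()).encode())
        os._exit(0)
    os.close(w)
    os.waitpid(first, 0)           # reap the short-lived first child
    daemon_pid = int(os.read(r, 32))
    os.close(r)
    return first, daemon_pid

if __name__ == "__main__":
    first, daemon_pid = start_daemon()
    print(f"fork() in the parent returned {first}; the daemon runs as {daemon_pid}")
```

Running this always prints two different PIDs, which matches the 26622 vs 26624 observation: by the time you run `ps -ef`, the intermediate child (26622 in the report) has already exited.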

--047d7b621e787a6bd104c64a35cb--



--===============1615520170443261022==--


From xen-devel-bounces@lists.xen.org Thu Aug 02 15:39:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 15:39:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwxUI-0004Kz-DD; Thu, 02 Aug 2012 15:38:58 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <aware.why@gmail.com>) id 1SwxUG-0004Ks-N0
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 15:38:56 +0000
Received: from [85.158.139.83:22157] by server-2.bemta-5.messagelabs.com id
	4C/31-04598-F0F9A105; Thu, 02 Aug 2012 15:38:55 +0000
X-Env-Sender: aware.why@gmail.com
X-Msg-Ref: server-8.tower-182.messagelabs.com!1343921934!18683999!1
X-Originating-IP: [74.125.83.45]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_20_30,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6977 invoked from network); 2 Aug 2012 15:38:54 -0000
Received: from mail-ee0-f45.google.com (HELO mail-ee0-f45.google.com)
	(74.125.83.45)
	by server-8.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 15:38:54 -0000
Received: by eeke53 with SMTP id e53so2396367eek.32
	for <xen-devel@lists.xen.org>; Thu, 02 Aug 2012 08:38:54 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=j7RJM6VbF45a3rvU9W8Aygr0CMUq9Y0GWXNIa1LnTFo=;
	b=eGJ2juHWGGmMNtPIBXfIcI5XhXR3FBNlnm+FvZSz7Hs1Ke+OzZI40oZgLb7G47wKWi
	xtAnWjnOmyU9fPJGYe9xgrrI6EasZnHi3zhmdIJuH9c8TGds/7Z+zHT0mlNpmNOe7VIf
	SmmJkN+uuX6xFolwxWEzIp7vGt9k6eXJmr7+gFNfYp5KEVk66hOCtRex33UAtYiMC07r
	npQaQ23xhW7X39Syo8RjZ+Y1wBIZDCmqFYPSqFlfIm+T0pVwcP5dyg+A8Q3nat3nUktQ
	x936eqFnbC/+WtsCTvh2+UZbUnQBWaXNOxOw5QeP1BCsjFmuoSqbo2qQGILeJ2P/OuXf
	+dWw==
MIME-Version: 1.0
Received: by 10.14.213.137 with SMTP id a9mr5693476eep.38.1343921933937; Thu,
	02 Aug 2012 08:38:53 -0700 (PDT)
Received: by 10.14.218.200 with HTTP; Thu, 2 Aug 2012 08:38:53 -0700 (PDT)
In-Reply-To: <1343901358.27221.110.camel@zakaz.uk.xensource.com>
References: <CA+ePHTBWQFGFFb=nrLjGf4xLBqMuiQKmc0OYyJZpBjLX4F9rYQ@mail.gmail.com>
	<1343901358.27221.110.camel@zakaz.uk.xensource.com>
Date: Thu, 2 Aug 2012 23:38:53 +0800
Message-ID: <CA+ePHTDUVcrsaE8ztCEFPspdvU97aqP09_9d=0-Krprr5=WOHA@mail.gmail.com>
From: =?UTF-8?B?6ams56OK?= <aware.why@gmail.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [problem in `xl_cmdimpl.c`] Why pid return by
 fork() in parent process is not the same with pid returned by getpid()?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1615520170443261022=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============1615520170443261022==
Content-Type: multipart/alternative; boundary=047d7b621e787a6bd104c64a35cb

--047d7b621e787a6bd104c64a35cb
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On Thu, Aug 2, 2012 at 5:55 PM, Ian Campbell <Ian.Campbell@citrix.com> wrote:

> On Thu, 2012-08-02 at 10:47 +0100, 马磊 wrote:
>
> >     In xen-4.1.2/tools/libxl/xl_cmdimpl.c : create_domain(), it starts
>
> If you are doing development then targeting xen-unstable would be
> preferable, especially if you are looking at (lib)xl. The xl stuff in
> 4.1 is effectively a preview and it has changed substantially internally
> since 4.1.
>
> > After running `xl create xp-101.hvm`, the command prints `Daemon
> > running with PID 26622` on the screen; but the xl log file at
> > /var/log/xen/xl-xp-101.log contains the line `Waiting for domain
> > xp-101 (domid 1) to die [pid 26624]`.
> > Why is 26622 != 26624?  And 26622 cannot be found with `ps -ef`.
> > What happened to it?
>
> There's a call to daemon() in there, which will involve a double fork
> IIRC.
>
> Ian.
>
>
    What do you mean by `double fork`? In my opinion, the parent exits
after invoking waitpid, and the child continues with the rest of the code.
    The PID recorded in the xl log file also can't be found with `ps -ef`.
How should I retrieve the PID of the real daemon process?
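
The double fork Ian mentions can be sketched in a few lines. This is a hypothetical standalone demo, not libxl's actual code: the parent only ever learns the pid of the first child from fork(), while the process that survives as the daemon is the second child (the grandchild), so the two pids necessarily differ.

```python
import os

def daemonize_demo():
    """Sketch of double-fork daemonisation (hypothetical demo, not libxl code).

    Returns (pid_seen_by_parent, real_daemon_pid). The parent learns only
    the FIRST child's pid, but the surviving process is the SECOND child,
    hence the 26622 vs 26624 mismatch in the question."""
    r, w = os.pipe()
    first = os.fork()
    if first == 0:
        # First child: start a new session, fork again, then exit.
        os.setsid()
        second = os.fork()
        if second == 0:
            # Grandchild: the "real daemon"; report its own pid and quit.
            os.write(w, str(os.getpid()).encode())
            os._exit(0)
        # A real daemonizer would NOT wait here; we do so only to keep
        # the demo deterministic and free of orphan processes.
        os.waitpid(second, 0)
        os._exit(0)
    # Parent: `first` is what xl prints as "Daemon running with PID ...".
    os.waitpid(first, 0)
    os.close(w)
    daemon_pid = int(os.read(r, 32))
    return first, daemon_pid
```

In the real `xl` flow, 26622 corresponds to the first value and the `[pid 26624]` in the log to the second; `ps -ef` no longer shows 26622 because that intermediate process has already exited.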

--047d7b621e787a6bd104c64a35cb--


--===============1615520170443261022==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1615520170443261022==--


From xen-devel-bounces@lists.xen.org Thu Aug 02 15:54:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 15:54:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwxiG-0004Xt-Vo; Thu, 02 Aug 2012 15:53:24 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1SwxiF-0004Xo-GH
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 15:53:23 +0000
Received: from [85.158.143.35:61213] by server-1.bemta-4.messagelabs.com id
	25/12-24392-272AA105; Thu, 02 Aug 2012 15:53:22 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-12.tower-21.messagelabs.com!1343922794!15044968!1
X-Originating-IP: [81.169.146.162]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MiA9PiAzNTQxNzA=\n,sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MiA9PiAzNTQxNzA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4101 invoked from network); 2 Aug 2012 15:53:15 -0000
Received: from mo-p00-ob.rzone.de (HELO mo-p00-ob.rzone.de) (81.169.146.162)
	by server-12.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 2 Aug 2012 15:53:15 -0000
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+zrwiavkK6tmQaLfmztM8TOFGjS0PGmfh
X-RZG-CLASS-ID: mo00
Received: from probook.site
	(dslb-088-065-083-197.pools.arcor-ip.net [88.65.83.197])
	by smtp.strato.de (joses mo43) (RZmta 30.4 DYNA|AUTH)
	with (DHE-RSA-AES256-SHA encrypted) ESMTPA id Q0152fo72FbdbJ
	for <xen-devel@lists.xen.org>; Thu, 2 Aug 2012 17:53:14 +0200 (CEST)
Received: from probook.site (localhost [IPv6:::1])
	by probook.site (Postfix) with ESMTP id 2FD3E18638
	for <xen-devel@lists.xen.org>; Thu,  2 Aug 2012 17:53:14 +0200 (CEST)
MIME-Version: 1.0
X-Mercurial-Node: 756f87bda3c3172d34cab60dc7279c3292775275
Message-Id: <756f87bda3c3172d34ca.1343922793@probook.site>
User-Agent: Mercurial-patchbomb/1.7.5
Date: Thu, 02 Aug 2012 17:53:13 +0200
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH] tools/vtpm: fix tpm_version.h error during
	parallel build
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

# HG changeset patch
# User Olaf Hering <olaf@aepfle.de>
# Date 1343922758 -7200
# Node ID 756f87bda3c3172d34cab60dc7279c3292775275
# Parent  983ea7521badb3e05d3379044fb283732ef558d6
tools/vtpm: fix tpm_version.h error during parallel build

Generating tpm_version.h is not make -j safe:

In file included from ../tpm/tpm_emulator.h:25:0,
                 from ../tpm/tpm_startup.c:18:
../tpm/tpm_version.h:1:0: error: unterminated #ifndef
make[5]: *** [tpm_startup.o] Error 1

This happens because make cannot know that 'all-recursive' depends on
'version'. Fix this by calling the individual make targets. Doing it
this way avoids adding yet another patch.

Signed-off-by: Olaf Hering <olaf@aepfle.de>

diff -r 983ea7521bad -r 756f87bda3c3 tools/vtpm/Makefile
--- a/tools/vtpm/Makefile
+++ b/tools/vtpm/Makefile
@@ -23,7 +23,7 @@ build: build_sub
 
 .PHONY: install
 install: build
-	$(MAKE) -C $(VTPM_DIR) $@
+	$(MAKE) -C $(VTPM_DIR) install-recursive
 
 .PHONY: clean
 clean:
@@ -66,7 +66,8 @@ updatepatches: clean orig
 .PHONY: build_sub
 build_sub: $(VTPM_DIR)/tpmd/tpmd
 	set -e; if [ -e $(GMP_HEADER) ]; then \
-		$(MAKE) -C $(VTPM_DIR); \
+		$(MAKE) -C $(VTPM_DIR) version; \
+		$(MAKE) -C $(VTPM_DIR) all-recursive; \
 	else \
 		echo "=== Unable to build VTPMs. libgmp could not be found."; \
 	fi

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 15:54:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 15:54:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swxj0-0004aE-D9; Thu, 02 Aug 2012 15:54:10 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1Swxiz-0004Zn-03
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 15:54:09 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-11.tower-27.messagelabs.com!1343922842!2060465!1
X-Originating-IP: [81.169.146.160]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MCA9PiAzNzk4Mjc=\n,sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MCA9PiAzNzk4Mjc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26638 invoked from network); 2 Aug 2012 15:54:02 -0000
Received: from mo-p00-ob.rzone.de (HELO mo-p00-ob.rzone.de) (81.169.146.160)
	by server-11.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 2 Aug 2012 15:54:02 -0000
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+zrwiavkK6tmQaLfmztM8TOFGjS0PGmfh
X-RZG-CLASS-ID: mo00
Received: from probook.site
	(dslb-088-065-083-197.pools.arcor-ip.net [88.65.83.197])
	by smtp.strato.de (josoe mo75) (RZmta 30.4 DYNA|AUTH)
	with (DHE-RSA-AES256-SHA encrypted) ESMTPA id c00c01o72EhlfS ;
	Thu, 2 Aug 2012 17:54:02 +0200 (CEST)
Received: by probook.site (Postfix, from userid 1000)
	id D972418639; Thu,  2 Aug 2012 17:54:01 +0200 (CEST)
Date: Thu, 2 Aug 2012 17:54:01 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20120802155401.GA10748@aepfle.de>
References: <870b930e816fab3180c1.1343722356@probook.site>
	<1343723653.15432.65.camel@zakaz.uk.xensource.com>
	<1343724173.15432.70.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1343724173.15432.70.camel@zakaz.uk.xensource.com>
User-Agent: Mutt/1.5.21.rev5555 (2012-07-20)
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] tools/vtpm: fix tpm_version.h error during
 parallel build
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jul 31, Ian Campbell wrote:

> I've just seen the original thread which points out that fixing this in
> that way requires patching the downloaded source while this solution
> requires only that we patch our own Makefile.
> 
> It would have been useful to note this in the commit message.

I just sent another version with updated commit message.

Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 15:55:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 15:55:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwxjM-0004c6-Pi; Thu, 02 Aug 2012 15:54:32 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SwxjL-0004bU-3x
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 15:54:31 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1343922865!11920628!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29323 invoked from network); 2 Aug 2012 15:54:25 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 15:54:25 -0000
X-IronPort-AV: E=Sophos;i="4.77,701,1336348800"; d="scan'208";a="13826816"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 15:54:25 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Thu, 2 Aug 2012
	16:54:24 +0100
Message-ID: <1343922863.27221.166.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Thu, 2 Aug 2012 16:54:23 +0100
In-Reply-To: <20506.40268.92394.694378@mariner.uk.xensource.com>
References: <patchbomb.1343921156@andrewcoop.uk.xensource.com>
	<ed70a016d37553b041ca.1343921158@andrewcoop.uk.xensource.com>
	<20506.40268.92394.694378@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 2 of 2] tools/configure: [RFC] Allow all
 tools to be ./configure'd on or off
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-02 at 16:31 +0100, Ian Jackson wrote:
> And I don't think
> that this stage of the 4.2 release is the right time to be inventing
> new configure options. 

I agree, this sort of major reworking belongs in the 4.3 development
cycle.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 15:57:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 15:57:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swxm1-0004pE-C0; Thu, 02 Aug 2012 15:57:17 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Santosh.Jodh@citrix.com>) id 1Swxlz-0004p2-CA
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 15:57:15 +0000
Received: from [85.158.139.83:19026] by server-9.bemta-5.messagelabs.com id
	84/28-01069-A53AA105; Thu, 02 Aug 2012 15:57:14 +0000
X-Env-Sender: Santosh.Jodh@citrix.com
X-Msg-Ref: server-7.tower-182.messagelabs.com!1343923026!24280248!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjE3NTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3742 invoked from network); 2 Aug 2012 15:57:07 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 15:57:07 -0000
X-IronPort-AV: E=Sophos;i="4.77,702,1336363200"; d="scan'208";a="33356774"
Received: from sjcpmailmx01.citrite.net ([10.216.14.74])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 11:57:05 -0400
Received: from SJCPMAILBOX01.citrite.net ([10.216.4.72]) by
	SJCPMAILMX01.citrite.net ([10.216.14.74]) with mapi;
	Thu, 2 Aug 2012 08:57:05 -0700
From: Santosh Jodh <Santosh.Jodh@citrix.com>
To: "Tim (Xen.org)" <tim@xen.org>
Date: Thu, 2 Aug 2012 08:56:50 -0700
Thread-Topic: [Xen-devel] Superpages for VT-D
Thread-Index: Ac1wn4MPYW73O1+gSoWGFyL/abYMXQAJ7WgQ
Message-ID: <7914B38A4445B34AA16EB9F1352942F1012F0DE665E4@SJCPMAILBOX01.citrite.net>
References: <7914B38A4445B34AA16EB9F1352942F1012F0CDD8A71@SJCPMAILBOX01.citrite.net>
	<20120719102326.GC75169@ocelot.phlegethon.org>
	<7914B38A4445B34AA16EB9F1352942F1012F0CDD9588@SJCPMAILBOX01.citrite.net>
	<20120719145318.GA78502@ocelot.phlegethon.org>
	<CAL54oT3yqLtDpEHe7n20K=i-hz27P-TYK11tm+VNL2BXMs+hMg@mail.gmail.com>
	<7914B38A4445B34AA16EB9F1352942F1012F0D841328@SJCPMAILBOX01.citrite.net>
	<20120724194150.GA68065@ocelot.phlegethon.org>
	<7914B38A4445B34AA16EB9F1352942F1012F0DE66157@SJCPMAILBOX01.citrite.net>
	<20120802111102.GE11437@ocelot.phlegethon.org>
In-Reply-To: <20120802111102.GE11437@ocelot.phlegethon.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
MIME-Version: 1.0
Cc: "Nakajima, Jun" <jun.nakajima@intel.com>,
	"xiantao.zhang@intel.com" <xiantao.zhang@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Superpages for VT-D
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Thanks for confirming. BTW, would the IOMMU ever have entries above the domain's max mapped pfn?

-----Original Message-----
From: Tim Deegan [mailto:tim@xen.org] 
Sent: Thursday, August 02, 2012 4:11 AM
To: Santosh Jodh
Cc: xiantao.zhang@intel.com; Nakajima, Jun; xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Superpages for VT-D

At 13:44 -0700 on 31 Jul (1343742260), Santosh Jodh wrote:
> I am going to try to add this support. 
> 
> It looks like a new iommu_ops handler would be needed that would do 
> the actual work of dumping the entries - one for AMD and one for 
> Intel. Am I reading this correctly?

I think that's correct.

> Or is it better to get the root_table + paging_mode (for AMD) and 
> pgd_maddr + agaw (for Intel) and then do a generic dump?

No; the iommu tables are sufficiently arch-specific that it's best to add arch-specific dump routines.

Cheers,

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Thanks for confirming. BTW, would the IOMMU ever have entries above the domain's max mapped pfn?

-----Original Message-----
From: Tim Deegan [mailto:tim@xen.org] 
Sent: Thursday, August 02, 2012 4:11 AM
To: Santosh Jodh
Cc: xiantao.zhang@intel.com; Nakajima, Jun; xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Superpages for VT-D

At 13:44 -0700 on 31 Jul (1343742260), Santosh Jodh wrote:
> I am going to try to add this support. 
> 
> It looks like a new iommu_ops handler would be needed that would do 
> the actual work of dumping the entries - one for AMD and one for 
> Intel. Am I reading this correctly?

I think that's correct.

> Or is it better to get the root_table + paging_mode (for AMD) and 
> pgd_maddr + agaw (for Intel) and then do a generic dump?

No; the IOMMU tables are sufficiently arch-specific that it's best to add arch-specific dump routines.

Cheers,

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 16:05:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 16:05:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swxtk-0005dU-AD; Thu, 02 Aug 2012 16:05:16 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Swxtj-0005dP-Fg
	for xen-devel@lists.xensource.com; Thu, 02 Aug 2012 16:05:15 +0000
Received: from [85.158.139.83:32385] by server-10.bemta-5.messagelabs.com id
	81/F1-02190-A35AA105; Thu, 02 Aug 2012 16:05:14 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-182.messagelabs.com!1343923513!24281629!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10925 invoked from network); 2 Aug 2012 16:05:14 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-7.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 16:05:14 -0000
X-IronPort-AV: E=Sophos;i="4.77,702,1336348800"; d="scan'208";a="13827023"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 16:05:13 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Thu, 2 Aug 2012
	17:05:13 +0100
Message-ID: <1343923512.27221.168.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Thu, 2 Aug 2012 17:05:12 +0100
In-Reply-To: <1342795744-3768-4-git-send-email-stefano.stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1207201539380.26163@kaball.uk.xensource.com>
	<1342795744-3768-1-git-send-email-stefano.stabellini@eu.citrix.com>
	<1342795744-3768-4-git-send-email-stefano.stabellini@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>, "Tim
	\(Xen.org\)" <tim@xen.org>
Subject: Re: [Xen-devel] [PATCH v3 4/4] libxc/arm: allocate xenstore and
	console pages
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-07-20 at 15:49 +0100, Stefano Stabellini wrote:

> +
> +    for (i = 0; i < NR_MAGIC_PAGES; i++)
> +        p2m[i] = dom->rambase_pfn + dom->total_pages + i;
> +
> +    rc = xc_domain_populate_physmap_exact(
> +            dom->xch, dom->guest_domid, NR_MAGIC_PAGES,
> +            0, 0, &p2m[i]);

This isn't right -- it should be just "p2m" not "&p2m[i]" -- the latter
only made sense when the call was in a loop; now it just means we point
off the end of the array.

I'll send a patch tomorrow, right now I have to run.

> +    if ( rc < 0 )
> +        return rc;
> +
> +    console_pfn = dom->rambase_pfn + dom->total_pages + CONSOLE_PFN_OFFSET;
> +    store_pfn = dom->rambase_pfn + dom->total_pages + XENSTORE_PFN_OFFSET;
> +
> +    xc_clear_domain_page(dom->xch, dom->guest_domid, console_pfn);
> +    xc_clear_domain_page(dom->xch, dom->guest_domid, store_pfn);
> +    xc_set_hvm_param(dom->xch, dom->guest_domid, HVM_PARAM_CONSOLE_PFN,
> +            console_pfn);
> +    xc_set_hvm_param(dom->xch, dom->guest_domid, HVM_PARAM_STORE_PFN,
> +            store_pfn);
> +
>      return 0;
>  }
>  



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 16:07:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 16:07:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwxvI-0005h5-Pz; Thu, 02 Aug 2012 16:06:52 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <greg@wind.enjellic.com>) id 1SwxvH-0005gb-AH
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 16:06:51 +0000
X-Env-Sender: greg@wind.enjellic.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1343923603!10554163!1
X-Originating-IP: [76.10.64.91]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13297 invoked from network); 2 Aug 2012 16:06:44 -0000
Received: from wind.enjellic.com (HELO wind.enjellic.com) (76.10.64.91)
	by server-3.tower-27.messagelabs.com with SMTP;
	2 Aug 2012 16:06:44 -0000
Received: from wind.enjellic.com (localhost [127.0.0.1])
	by wind.enjellic.com (8.14.3/8.14.3) with ESMTP id q72G6cte025614;
	Thu, 2 Aug 2012 11:06:38 -0500
Received: (from greg@localhost)
	by wind.enjellic.com (8.14.3/8.14.3/Submit) id q72G6cEf025613;
	Thu, 2 Aug 2012 11:06:38 -0500
Date: Thu, 2 Aug 2012 11:06:38 -0500
From: "Dr. Greg Wettstein" <greg@wind.enjellic.com>
Message-Id: <201208021606.q72G6cEf025613@wind.enjellic.com>
In-Reply-To: Ian Campbell <Ian.Campbell@citrix.com>
	"Re: Blktap fixes and kernel patch." (Aug  1, 10:17am)
X-Mailer: Mail User's Shell (7.2.6-ESD1.0 03/31/2012)
To: Ian Campbell <Ian.Campbell@citrix.com>
X-Greylist: Sender passed SPF test, not delayed by milter-greylist-4.2.3
	(wind.enjellic.com [0.0.0.0]);
	Thu, 02 Aug 2012 11:06:38 -0500 (CDT)
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Blktap fixes and kernel patch.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: greg@enjellic.com
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Aug 1, 10:17am, Ian Campbell wrote:
} Subject: Re: Blktap fixes and kernel patch.

Hi Ian et al., hope your week is proceeding well.

> On Tue, 2012-07-31 at 22:41 +0100, Dr. Greg Wettstein wrote:

> > In the process I updated the blktap2 kernel driver to patch cleanly
> > into the Linux 3.4 kernel.  These fixes have been validated against
> > the 3.4 kernel as well as the 3.2 kernel.

> Just to be clear this is just a straight forward port, there's no
> part of the deadlock fix in here?

The kernel patch is just a forward port to 3.4; there were no issues
with the driver itself that needed to be addressed.

It seemed the community lacked ready access to the kernel driver, so
hopefully others will find a solid reference site for the patches
useful.

> > The first patch is one which was done by Ian for the development tree
> > with minor corrections for 4.1.2.  I'm including it for completeness
> > for those who want a trouble free patch set for a 4.1.2 distribution.
> > This patch fixes the orphaning of the tapdisk2 driver process when xl
> > shuts down.

> This is a fairly straight backport of a patch in unstable?

Correct, the only change besides some version skew noise is a change
in the calling convention for the libxl__gc context passed to
libxl__device_destroy_tapdisk().

> If you send a mail with a subject "Xen 4.1.x backport request
> <commit-id>" explaining which commit it is and CC keir@xen.org &
> ian.jackson@eu.citrix.com then we can see about getting this into a
> future 4.1.x (perhaps even 4.1.3, not sure which stage of rcs we are at
> there).
> 
> If the backport is reasonably trivial then there is often no need to
> include it but since you have done so you might as well include the
> patch for reference.

I will forward along the reference and the patch.

> > The second patch corrects the deadlock which occurs between the
> > blktap2 kernel driver and the blktap2 userspace control plane.  The
> > deadlock causes a delay in the shutdown of a XEN guest and results in
> > the 'orphaning' of tapdisk minor number allocations.  As seems to be
> > typical with these types of things the fix was trivially straight
> > forward once I finally figured out what was going on.

> Thanks for this.
> 
> Am I right that the important functional change here is that the xs_rm
> needs to come after we read the params node but before tap_ctl_destroy?
> Obviously removing the node before calling libxl__device_destroy_tapdisk
> is wrong since libxl__device_destroy_tapdisk reads from be_path!

Correct.

I debated a bit about how to do this in the cleanest fashion
possible.  Since the be_path is passed to libxl__device_destroy_tapdisk(),
the simplest strategy seemed to be to abstract the libxl_ctx context
and pull the entry from xenstore after the tapdisk-params key had been
read from xenstore.

> Looking at 4.2.0-rc1 I see that libxl__device_destroy removes the
> backend before calling libxl__device_destroy_tapdisk, so I think that a
> fix is needed there too.
> 
> I'm less sure about the usage in libxl__initiate_device_remove. I wonder
> if the call to libxl__device_destroy_tapdisk there needs to move to
> device_hotplug_done right after the transaction which cleans up the
> backend back?
> 
> The code which used to be in libxl__devices_destroy is now in
> libxl__initiate_device_remove so I expect that fixing that would be
> sufficient.

The root cause of the problem is a deadlock between the xen-blkback
and blktap2 drivers when the tapdisk2 user space process requests
unmapping of the ring buffer.  The only thing which saves the kernel
is the select timeout on the IPC channel between libxl and the
userspace process.  That timeout allows xl to proceed and tear down
the backend, which releases the deadlock, but the error results in the
orphaning of the tapdisk2 minor.

So the overall fix is pretty straightforward: libxl needs to shut down
the backend before initiating the teardown of the blktap2 userspace
component of the device.

While my fix vaguely felt like a layering violation, it seemed to be
the most correct approach.  Since libxl__device_destroy_tapdisk() is
stubbed out in the non-blktap2 case, having the teardown of the backend
there would seem to generically fix the problem.  That is provided, of
course, that the function can be given the correct context; I'm not
looking at the current code.

> I'd really appreciate it if you could validate whether 4.2.0-rc1
> works for you or not, I suspect not. We would usually want to fix the
> development version before considering fixes for the stable branches
> (even if the actual patch ends up looking totally different)
> otherwise we run the risk of regressions in the next version.

I'd be happy to give it a test.  Is there a tarball cut or should I
hone up my Mercurial skills?

> Is there a simple command which will list the leaked tap devices? If
> so we can consider adding it to the leak-check phase of the
> automated tests (although I'm not sure how much use these make of
> blktap)

The tap-ctl tools make managing all this pretty straightforward.

If you issue the following command:

	tap-ctl list

On a faulty implementation, after the startup and shutdown of a guest
using blktap2, you will see the orphaned minor.  Orphaned minors
steadily accumulate as guests start up and shut down.

> For future reference if you intend for a patch to be applied it is best
> to submit it in the form described in
> http://wiki.xen.org/wiki/Submitting_Xen_Patches, that is one patch per
> email, with a changelog specific to that change and a Signed-off-by. In
> this sort of scenario (a patch going to 4.1 which isn't a backport) the
> changelog should also mention why the patch isn't a backport.

Will do, should have been more methodical in my practice.

It appears that 4.1.2 is not properly cleaning up xenstore.  I'm
chasing that down now and if there are trivial correctness fixes I
will pass them onward.

> > Ian for your reference the following change which you introduced to
> > address this issue:
> > 
> > 79e3dbe4b659e78408a9eea76c51a601bd4a383a
> > tapdisk: respond to destroy request before tearing down the commuication channel
> > 
> > Is not needed and does not provide formally correct behavior in the
> > presence of the two patches noted above.

> Is it incorrect (i.e. should be reverted) or is it just incomplete/not
> helpful?

It's incorrect in that it changes a logically correct implementation
only for the purpose of masking the delay which, in this case, led to
the discovery of the root cause of the problem.  Arguably libxl needs
a bit better error reporting around all this, and the patch arguably
works against that.

I'm currently running without the patch in 4.1.2 and the tapdisk2
devices are performing very nicely.

> Ian.

Let me know if you have any additional questions/issues.

Best wishes for a pleasant weekend.

Greg

}-- End of excerpt from Ian Campbell

As always,
Dr. G.W. Wettstein, Ph.D.   Enjellic Systems Development, LLC.
4206 N. 19th Ave.           Specializing in information infra-structure
Fargo, ND  58102            development.
PH: 701-281-1686
FAX: 701-281-3949           EMAIL: greg@enjellic.com
------------------------------------------------------------------------------
"There is no heavier burden than a great potential."
                                -- Linus' Law

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 16:21:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 16:21:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swy8c-0005wB-5u; Thu, 02 Aug 2012 16:20:38 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Swy8a-0005w6-07
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 16:20:36 +0000
Received: from [85.158.138.51:25107] by server-7.bemta-3.messagelabs.com id
	E8/72-21158-3D8AA105; Thu, 02 Aug 2012 16:20:35 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-11.tower-174.messagelabs.com!1343924434!30104611!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10213 invoked from network); 2 Aug 2012 16:20:34 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-11.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 16:20:34 -0000
X-IronPort-AV: E=Sophos;i="4.77,702,1336348800"; d="scan'208";a="13827367"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 16:20:34 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 2 Aug 2012 17:20:34 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Swy8Y-0000s2-2K; Thu, 02 Aug 2012 16:20:34 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Swy8Y-00067c-1E;
	Thu, 02 Aug 2012 17:20:34 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20506.43218.20724.749302@mariner.uk.xensource.com>
Date: Thu, 2 Aug 2012 17:20:34 +0100
To: Ian Campbell <ian.campbell@citrix.com>
In-Reply-To: <84f0686ebcbfb0fa3a43.1343815057@cosworth.uk.xensource.com>
References: <84f0686ebcbfb0fa3a43.1343815057@cosworth.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH V2] libxl: support custom block hotplug
	scripts
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("[PATCH V2] libxl: support custom block hotplug scripts"):
> libxl: support custom block hotplug scripts

Wow.  Thanks.  Everything looks good apart from this:

>                      DPC->had_depr_prefix=1; DEPRECATE("use `script=...'");
> -                   SAVESTRING("script", script, yytext);
> -               }
> +                    if (DPC->disk->script) {
> +                        if (*DPC->disk->script) {
> +                            xlu__disk_err(DPC,yytext,"script respecified");
> +                            return 0;
> +                        }
> +                        /* do not complain about overwriting empty strings */
> +                        free(DPC->disk->script);
> +                    }
> +                    DPC->disk->script = malloc(strlen("block-")
> +                                               +strlen(yytext) + 1);
> +                    strcpy(DPC->disk->script, "block-");
> +                    strcat(DPC->disk->script, yytext);

Isn't this very like the contents of the savestring() function?
I.e. you could do:
        char *newscript;
        asprintf(&newscript, ...);
        savestring(DPC, "script respecified", &DPC->disk->script, newscript);
        free(newscript);

Other places in xl use asprintf, so you can use it here.
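For illustration, here is a minimal, self-contained sketch of what the
asprintf() route could look like.  The savestring() helper below is a
stand-in with a simplified signature -- the real one in xl's disk-spec
parser takes the parser context (DPC) and reports errors via
xlu__disk_err() -- so treat the names and signatures here as
assumptions, not the actual xl code.

```c
/* Sketch only: simplified stand-ins for xl's parser helpers. */
#define _GNU_SOURCE  /* for asprintf() */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct disk { char *script; };

/* Stand-in for savestring(): reject respecification of a non-empty
 * value, but do not complain about overwriting empty strings. */
static int savestring(const char *errmsg, char **dest, const char *val)
{
    if (*dest) {
        if (**dest) {
            fprintf(stderr, "%s\n", errmsg);  /* real code: xlu__disk_err() */
            return 0;
        }
        free(*dest);
    }
    *dest = strdup(val);
    return *dest != NULL;
}

/* Build "block-<name>" with asprintf and hand it to savestring,
 * rather than open-coding malloc/strcpy/strcat. */
static int set_block_script(struct disk *disk, const char *name)
{
    char *newscript;
    int ok;

    if (asprintf(&newscript, "block-%s", name) < 0)
        return 0;
    ok = savestring("script respecified", &disk->script, newscript);
    free(newscript);
    return ok;
}
```

This keeps the "script respecified" check in one place and avoids the
manual length arithmetic of the quoted hunk.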

> +                }

Is this one-character indentation change intentional ?

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 16:25:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 16:25:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwyCp-00065z-Vz; Thu, 02 Aug 2012 16:24:59 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <halcyon1981@gmail.com>) id 1SwyCo-00065t-JT
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 16:24:58 +0000
Received: from [85.158.138.51:49320] by server-6.bemta-3.messagelabs.com id
	06/A3-20447-9D9AA105; Thu, 02 Aug 2012 16:24:57 +0000
X-Env-Sender: halcyon1981@gmail.com
X-Msg-Ref: server-3.tower-174.messagelabs.com!1343924696!22024494!1
X-Originating-IP: [209.85.212.179]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16613 invoked from network); 2 Aug 2012 16:24:57 -0000
Received: from mail-wi0-f179.google.com (HELO mail-wi0-f179.google.com)
	(209.85.212.179)
	by server-3.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 16:24:57 -0000
Received: by wibhq4 with SMTP id hq4so4358798wib.14
	for <xen-devel@lists.xen.org>; Thu, 02 Aug 2012 09:24:56 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=IYxMxkZx6YRILqGiEgxrv4pI85D7wC8NAy94W1BCFtk=;
	b=0zQQZYsIpyAhK+7bEY9eyoRK20YjMkZ0KXBrAF1uo+5B1BPXJ1VCT2s8xsxIKmZL1f
	BxLaGRRKifiFxdqAOc9zw3/BvaJ3rlKahOsryg/1d0eTSiX1cUxJiqCGSLQBIfQr1awP
	OccmoWE7u1IBdWPHCc9ypBpD2ov46T0qeiOoMbqTDLbGujiaKEK5y4oQPezmjArg7G/l
	2pIqm2MmnDLuKB6Zg58vZ1wQeepEcsRIGEMk5V9MtBINYsmUCVoa+fzqO2Plkfhmd1i0
	e5GQZurISoIs4S4pmJuThXimsqQ4+nK4XcDbFL6x2XUmKHj0B0moxunQYF0i4DBbLALy
	n5bg==
MIME-Version: 1.0
Received: by 10.180.84.164 with SMTP id a4mr6004450wiz.12.1343924696645; Thu,
	02 Aug 2012 09:24:56 -0700 (PDT)
Received: by 10.223.83.9 with HTTP; Thu, 2 Aug 2012 09:24:56 -0700 (PDT)
In-Reply-To: <1343893520.7571.58.camel@dagon.hellion.org.uk>
References: <CACA08Dwza8TjixUYz1o6PWzfaPwfz1jMiGVNyZE9KcJZ_Ln2oA@mail.gmail.com>
	<CANKx4w9XJGnD5u4hN+RhyOy7OUmF5nh-G8549ER1W19jy__u3Q@mail.gmail.com>
	<CANKx4w8Azvf1s0aqGSRSn9_f-Zq_uL_CAN2rfh5kd=udsT2YDg@mail.gmail.com>
	<CACA08Dw0B3xK_Lr6WQiCSgDewt9ED6GPm-_KaTZzQarsELPrcg@mail.gmail.com>
	<CANKx4w8Y0Uddmre-nsm0TT+GgP3AMiCuoXfbc2-2L+PG3TUgbg@mail.gmail.com>
	<CANKx4w9O0bW=GJknP=hnYW5nUAas8AyKhDVjXGMmgeBkRBon5w@mail.gmail.com>
	<CAHyyzzTMX4wcg+DNEL91dWmo0R-6oGJLNH5O50bSUeHkmTWAwQ@mail.gmail.com>
	<CANKx4w9BVBrmQwaCtJr6CsZ6OK=jV+pb35QtNLHp+Jp6=739aA@mail.gmail.com>
	<1342682536.18848.50.camel@dagon.hellion.org.uk>
	<alpine.DEB.2.02.1207191258580.23783@kaball.uk.xensource.com>
	<CANKx4w-UKwt9H45N9Kbox708OBaQEqG3niELp593h2P7oc+pjw@mail.gmail.com>
	<CANKx4w9=X7iXvQBrORgh6aNHEQ--+4QeFwXDBG0HV-DYk7HpxQ@mail.gmail.com>
	<alpine.DEB.2.02.1207311119500.4645@kaball.uk.xensource.com>
	<CANKx4w_9sX9hsgSkwV6Vb8xSVQ_4n+_sFjfXSDEPHQ1seH=9=g@mail.gmail.com>
	<alpine.DEB.2.02.1208011150330.4645@kaball.uk.xensource.com>
	<CANKx4w9AUuAUWtnW59NL7az8VJ-jLJn984hMqvGVwTEfVAvs1A@mail.gmail.com>
	<1343893520.7571.58.camel@dagon.hellion.org.uk>
Date: Thu, 2 Aug 2012 09:24:56 -0700
Message-ID: <CANKx4w_GwULR1gcqJPh37J_rWGxoaxPirsPchdZ=E8iVYgSQSQ@mail.gmail.com>
From: David Erickson <halcyon1981@gmail.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Content-Type: multipart/mixed; boundary=f46d043c06e626020f04c64ada83
Cc: Anthony Perard <anthony.perard@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	jacek burghardt <jaceksburghardt@gmail.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] LSI SAS2008 Option Rom Failure
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--f46d043c06e626020f04c64ada83
Content-Type: text/plain; charset=ISO-8859-1

On Thu, Aug 2, 2012 at 12:45 AM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Wed, 2012-08-01 at 18:52 +0100, David Erickson wrote:
>> my assumption is this is because of the error
>> in xen-hotplug.log: "RTNETLINK answers: Operation not supported",
>
> That's a benign warning AFAIK.
>
>> and here is my ifconfig while ubuntu is booted (without VMs it doesn't
>> have the vifs):
>
> This all looks fine. I think you need to be investigating the network
> configuration inside the guest. Does the eth* device exist, is it
> configured etc
>
> In your Ubuntu boot log I see:
>         [    0.000000] Hypervisor detected: Xen HVM
>         [    0.000000] Xen version 4.2.
>         [    0.000000] Xen Platform PCI: I/O protocol version 1
>         [    0.000000] Netfront and the Xen platform PCI driver have been compiled for this kernel: unplug emulated NICs.
>         [    0.000000] Blkfront and the Xen platform PCI driver have been compiled for this kernel: unplug emulated disks.
>
> which means you will need the xen-netfront driver to be loaded, I don't
> see any logs to that effect. What does lsmod say? What about "ifconfig
> -a". Does the driver even exist on the live cd under /lib/modules
> somewhere?


Hi Ian-
I've attached a log with the list of modules matching "front";
xen-netfront.ko is definitely there, but lsmod doesn't show it loaded,
and ifconfig shows nothing other than the loopback interface.

> It's a bit odd that you still have vifX.Y-emu in dom0 given that the
> emulated device is supposed to have been unplugged. I wonder if that is
> a (separate) bug with emu device unplug. Which qemu was this again?

Upstream, commit 5e3bc7144edd6e4fa2824944e5eb16c28197dd5a from 7/30

> If you added xen_emul_unplug=never to your guest command line then you
> would avoid this unplug and you should have an emulated NIC available
> instead.

Verified this did work.

Thanks,
David

--f46d043c06e626020f04c64ada83
Content-Type: application/octet-stream; name="ubuntu_guest_boot.log"
Content-Disposition: attachment; filename="ubuntu_guest_boot.log"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_h5e20s2r0

dWJ1bnR1QHVidW50dTp+JCBsc21vZA0KTW9kdWxlICAgICAgICAgICAgICAgICAgU2l6ZSAgVXNl
ZCBieQ0KYm5lcCAgICAgICAgICAgICAgICAgICAxODQzNiAgMiANCnJmY29tbSAgICAgICAgICAg
ICAgICAgNDc5NDYgIDAgDQpibHVldG9vdGggICAgICAgICAgICAgMTY2MTEyICAxMCBibmVwLHJm
Y29tbQ0KZG1fY3J5cHQgICAgICAgICAgICAgICAyMzE5OSAgMCANCmxwICAgICAgICAgICAgICAg
ICAgICAgMTc3OTkgIDAgDQpwcGRldiAgICAgICAgICAgICAgICAgIDE3MTEzICAwIA0KcHNtb3Vz
ZSAgICAgICAgICAgICAgICA3Mzg4MiAgMCANCnBhcnBvcnRfcGMgICAgICAgICAgICAgMzY5NjIg
IDEgDQpwYXJwb3J0ICAgICAgICAgICAgICAgIDQ2NTYyICAzIGxwLHBwZGV2LHBhcnBvcnRfcGMN
CnNlcmlvX3JhdyAgICAgICAgICAgICAgMTMxNjYgIDAgDQppMmNfcGlpeDQgICAgICAgICAgICAg
IDEzMzAxICAwIA0KeGVuX3BsYXRmb3JtX3BjaSAgICAgICAxMjg4NSAgMCBbcGVybWFuZW50XQ0K
YmluZm10X21pc2MgICAgICAgICAgICAxNzU0MCAgMSANCnNxdWFzaGZzICAgICAgICAgICAgICAg
MzY3OTkgIDEgDQpvdmVybGF5ZnMgICAgICAgICAgICAgIDI4MjY3ICAxIA0KbmxzX3V0ZjggICAg
ICAgICAgICAgICAxMjU1NyAgMSANCmlzb2ZzICAgICAgICAgICAgICAgICAgNDAyNTMgIDEgDQpk
bV9yYWlkNDUgICAgICAgICAgICAgIDc4MTU1ICAwIA0KeG9yICAgICAgICAgICAgICAgICAgICAx
Mjg5NCAgMSBkbV9yYWlkNDUNCmRtX21pcnJvciAgICAgICAgICAgICAgMjIyMDMgIDAgDQpkbV9y
ZWdpb25faGFzaCAgICAgICAgIDIwOTE4ICAxIGRtX21pcnJvcg0KZG1fbG9nICAgICAgICAgICAg
ICAgICAxODU2NCAgMyBkbV9yYWlkNDUsZG1fbWlycm9yLGRtX3JlZ2lvbl9oYXNoDQpidHJmcyAg
ICAgICAgICAgICAgICAgNjQ4ODk1ICAwIA0KemxpYl9kZWZsYXRlICAgICAgICAgICAyNzEzOSAg
MSBidHJmcw0KbGliY3JjMzJjICAgICAgICAgICAgICAxMjY0NCAgMSBidHJmcw0KbXB0MnNhcyAg
ICAgICAgICAgICAgIDE1Mjg2MCAgMCANCmZsb3BweSAgICAgICAgICAgICAgICAgNzAzNjUgIDAg
DQpzY3NpX3RyYW5zcG9ydF9zYXMgICAgIDQwNTU4ICAxIG1wdDJzYXMNCnJhaWRfY2xhc3MgICAg
ICAgICAgICAgMTM2MjIgIDEgbXB0MnNhcw0KdWJ1bnR1QHVidW50dTp+JCBpZmNvbmZpZyAtYQ0K
bG8gICAgICAgIExpbmsgZW5jYXA6TG9jYWwgTG9vcGJhY2sgIA0KICAgICAgICAgIGluZXQgYWRk
cjoxMjcuMC4wLjEgIE1hc2s6MjU1LjAuMC4wDQogICAgICAgICAgaW5ldDYgYWRkcjogOjoxLzEy
OCBTY29wZTpIb3N0DQogICAgICAgICAgVVAgTE9PUEJBQ0sgUlVOTklORyAgTVRVOjE2NDM2ICBN
ZXRyaWM6MQ0KICAgICAgICAgIFJYIHBhY2tldHM6MzIgZXJyb3JzOjAgZHJvcHBlZDowIG92ZXJy
dW5zOjAgZnJhbWU6MA0KICAgICAgICAgIFRYIHBhY2tldHM6MzIgZXJyb3JzOjAgZHJvcHBlZDow
IG92ZXJydW5zOjAgY2FycmllcjowDQogICAgICAgICAgY29sbGlzaW9uczowIHR4cXVldWVsZW46
MCANCiAgICAgICAgICBSWCBieXRlczoyNDk2ICgyLjQgS0IpICBUWCBieXRlczoyNDk2ICgyLjQg
S0IpDQoNCg==
--f46d043c06e626020f04c64ada83
Content-Type: application/octet-stream; name="find.log"
Content-Disposition: attachment; filename="find.log"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_h5e20s2t1

Li8zLjAuMC0xMi1nZW5lcmljL2tlcm5lbC9kcml2ZXJzL2Jsb2NrL3hlbi1ibGtmcm9udC5rbwou
LzMuMC4wLTEyLWdlbmVyaWMva2VybmVsL2RyaXZlcnMvaW5wdXQvbWlzYy94ZW4ta2JkZnJvbnQu
a28KLi8zLjAuMC0xMi1nZW5lcmljL2tlcm5lbC9kcml2ZXJzL21lZGlhL2R2Yi9mcm9udGVuZHMK
Li8zLjAuMC0xMi1nZW5lcmljL2tlcm5lbC9kcml2ZXJzL25ldC94ZW4tbmV0ZnJvbnQua28KLi8z
LjAuMC0xMi1nZW5lcmljL2tlcm5lbC9kcml2ZXJzL3BjaS94ZW4tcGNpZnJvbnQua28KLi8zLjAu
MC0xMi1nZW5lcmljL2tlcm5lbC9kcml2ZXJzL3N0YWdpbmcvZnJvbnRpZXIKLi8zLjAuMC0xMi1n
ZW5lcmljL2tlcm5lbC9kcml2ZXJzL3ZpZGVvL3hlbi1mYmZyb250LmtvCi4vMy4wLjAtMTItZ2Vu
ZXJpYy9rZXJuZWwvZHJpdmVycy94ZW4veGVuYnVzL3hlbmJ1c19wcm9iZV9mcm9udGVuZC5rbwo=
--f46d043c06e626020f04c64ada83
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--f46d043c06e626020f04c64ada83--


From xen-devel-bounces@lists.xen.org Thu Aug 02 16:36:42 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 16:36:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwyNY-0006IB-6t; Thu, 02 Aug 2012 16:36:04 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1SwyNW-0006I6-JG
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 16:36:02 +0000
Received: from [85.158.143.35:59626] by server-3.bemta-4.messagelabs.com id
	89/3F-01511-17CAA105; Thu, 02 Aug 2012 16:36:01 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1343925360!15202293!1
X-Originating-IP: [209.85.215.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31032 invoked from network); 2 Aug 2012 16:36:01 -0000
Received: from mail-ey0-f173.google.com (HELO mail-ey0-f173.google.com)
	(209.85.215.173)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 16:36:01 -0000
Received: by eaah1 with SMTP id h1so2279920eaa.32
	for <xen-devel@lists.xen.org>; Thu, 02 Aug 2012 09:36:00 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=6bmM7E6aclVkywFyVNVtjITTgtXRGg0/D9UroGacQ+4=;
	b=wZgkb+wNoyFvPEnysswavZ72Y0fYly9mNBcer5F2v3OD3i35qwymfgTLSbddM05shV
	jfpZ8RqaBvzH+IhRyZVf444FbzUZ2yufcXt4WVbx5199DXtzP7VTi4LfR0v1rTPpZuMS
	UGDy3vOh1rJLCECQryoqPWMGgqF8onEBmLkQEl4fMKpqv1jXfzs8KYWTFVC7iM0a9454
	alwdZzTdinc0pT1r6T8YVhkjad8toU6xgcm/Lz6RRAf/9bRnZs4VWMSibnrwsfInrA27
	KGnDE2ApmiQl+/yJBKT3nKa8ojpg2q7lETSCIhTQmyg/OAy8h3gxZJOW4p6xOzw6+g4m
	SRlA==
MIME-Version: 1.0
Received: by 10.14.216.198 with SMTP id g46mr20388448eep.32.1343925360583;
	Thu, 02 Aug 2012 09:36:00 -0700 (PDT)
Received: by 10.14.206.135 with HTTP; Thu, 2 Aug 2012 09:36:00 -0700 (PDT)
In-Reply-To: <1343914490.4873.18.camel@Solace>
References: <1343837796.4958.32.camel@Solace>
	<501A67C502000078000921FF@nat28.tlf.novell.com>
	<1343914490.4873.18.camel@Solace>
Date: Thu, 2 Aug 2012 17:36:00 +0100
X-Google-Sender-Auth: lIneD9z_aA_9233xon-GO2tl6Pg
Message-ID: <CAFLBxZajiMKPvXG35boG9poYNbzFDgh5d-oRDA3T7gS55ofmrg@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: Dario Faggioli <raistlin@linux.it>
Cc: Andre Przywara <andre.przywara@amd.com>,
	Anil Madhavapeddy <anil@recoil.org>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>, Jan Beulich <JBeulich@suse.com>,
	Yang Z Zhang <yang.z.zhang@intel.com>
Subject: Re: [Xen-devel] NUMA TODO-list for xen-devel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Aug 2, 2012 at 2:34 PM, Dario Faggioli <raistlin@linux.it> wrote:
> On Thu, 2012-08-02 at 10:43 +0100, Jan Beulich wrote:
>> >>> On 01.08.12 at 18:16, Dario Faggioli <raistlin@linux.it> wrote:
>> >     - Virtual NUMA topology exposure to guests (a.k.a guest-numa). If a
>> >       guest ends up on more than one node, make sure it knows it's
>> >       running on a NUMA platform (smaller than the actual host, but
>> >       still NUMA). This interacts with some of the above points:
>>
>> The question is whether this is really useful beyond the (I would
>> suppose) relatively small set of cases where migration isn't
>> needed.
>>
> Mmm... Not sure I'm getting what you're saying here, sorry. Are you
> suggesting that exposing a virtual topology is not a good idea as it
> poses constraints/prevents live migration?
>
> If yes, well, I mostly agree that this is a huge issue, and that's why
> I think we need some bright idea on how to deal with it. I mean, it's
> easy to make it optional and let it automatically disable migration,
> giving users the choice of what they prefer, but I think this is more
> dodging the problem than dealing with it! :-P
>
>> >        * consider this during automatic placement for
>> >          resuming/migrating domains (if they have a virtual topology,
>> >          better not to change it);
>> >        * consider this during memory migration (it can change the
>> >          actual topology, should we update it on-line or disable memory
>> >          migration?)

I think we could use cpu hot-plug to change the "virtual topology" of
VMs, couldn't we?  We could probably even do that on a running guest
if we really needed to.

 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 16:42:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 16:42:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwySq-0006Vj-2m; Thu, 02 Aug 2012 16:41:32 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <halcyon1981@gmail.com>) id 1SwySo-0006VY-3A
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 16:41:30 +0000
X-Env-Sender: halcyon1981@gmail.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1343925681!10490950!1
X-Originating-IP: [74.125.82.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2778 invoked from network); 2 Aug 2012 16:41:22 -0000
Received: from mail-we0-f173.google.com (HELO mail-we0-f173.google.com)
	(74.125.82.173)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 16:41:22 -0000
Received: by weyz53 with SMTP id z53so7034769wey.32
	for <xen-devel@lists.xen.org>; Thu, 02 Aug 2012 09:41:21 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=7C+2W0QxINe6OE2CUoS2HNY+OzBzSm46JivB3bl8J6o=;
	b=rTDW+g/sxNtL8WzEnrC6J08EZHZPvwIGmqO4VCA32c9tXCaSYXNeV8SV61SpkZUv+6
	3mIYi0jBqrAzBvEj3HEA+mlbNUPXQxwtyTxObueuwXxqjuNo5xXnPCwU9VmVhCufkM5G
	kY0L5pAPqxR/5EkoFje3FlwJxLIkworewFkibWdPeWQu5d23aLlbxG+OL5osy7vhTVXS
	fiB5zkDJEfcmgIXDRBfzBF+J5NOI1m50olpDK4rS9mnxREoNhVJov1y9BWw5eMNxasQC
	r4wqmFuUK3SOv8LofeUDvMM6mu+DIZGpSn83cCdcQGGCgs6audkmCL2nIO7RRh3MH+DV
	9dSw==
MIME-Version: 1.0
Received: by 10.217.3.7 with SMTP id q7mr4991227wes.47.1343925681012; Thu, 02
	Aug 2012 09:41:21 -0700 (PDT)
Received: by 10.223.83.9 with HTTP; Thu, 2 Aug 2012 09:41:20 -0700 (PDT)
Date: Thu, 2 Aug 2012 09:41:20 -0700
Message-ID: <CANKx4w-wEkwqKcAvt8Fyq_x9h=HwcfvN9R8SkJM_WDgy1mcbXQ@mail.gmail.com>
From: David Erickson <halcyon1981@gmail.com>
To: xen-devel@lists.xen.org
Content-Type: multipart/mixed; boundary=20cf30207888d2409504c64b14b9
Subject: [Xen-devel] Reboot not working
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--20cf30207888d2409504c64b14b9
Content-Type: text/plain; charset=ISO-8859-1

Xen unstable with qemu upstream, running an Ubuntu 11.10 livecd guest
(no PCI passthrough for this test).  Trying to reboot from within the
guest directly, or from xl with or without the -F flag, produces a VNC
screen like the attached screenshot; the guest never reboots.

Thanks,
David

--20cf30207888d2409504c64b14b9
Content-Type: image/png; name="reboot.png"
Content-Disposition: attachment; filename="reboot.png"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_h5e2lxm50

[base64 data for PNG attachment "reboot.png" (VNC screenshot of the hung guest) omitted]
--20cf30207888d2409504c64b14b9
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--20cf30207888d2409504c64b14b9--


fvmkg7Ln2Xj0FT9+/MmTntvxgObEhi+l9zzosDPCTY/uyLOpeMfex9wGU9gcM5XtNaXkSB9yQZEA
z+g0meiBtVptok0nZ7/n9N7LnO6nkTTB3J4bbQbPOMvPIT9GddyYyIR3P+qoxujIzTe3bbs5f3P0
+f8ZLAlrK1YEyduSxbXxGx2CpQcec1TwjV9vLb7Kbn/gxttWr1ja89Vo+cGvP/q2G2+99qpNq49/
yfLmXZuPv+2R7a1FbXv8f8drXrjvpNemmWjkrbDg4plZJxujy0qkcGsyVW99kO4p0v3c851DIG3y
uW27m8ztxpqpjDr2Z2OKIgFm8wtkPk6MNLNldj5q0aJFu3fvnsNrnuzBkrW0a9euxYsXTzlK9tkn
OPLIxhjJNdcEe+/daJTUprui804IvnVe8JV3NN6OO7Tti5pUyZ+efFr4jQ984c4ns0ds33LxaRfd
dsZbVi1LX7k7X5mbtyxb9ZYNt2/efNuGU1alTdK8fdlLXr8m2Hz3g+k960/8nxt/sub1By0ft4Cl
K9a0siWZDDb9qnHnB689+4pWKSSL2vSdXzzRmHrwlouyBSx94ermYh+89r2b4rH/ibgeFyqgdbL4
5ue4/OXh7Tc98ES9vv3OzX//z3k17fjtz177gn0nzJH8PPTFs6h1SaWpDa4843KkM8IAqpYms/xM
kokVK1Zs3bp1YGBgxl3yLPklmayfZC09+uijBxxwQPtoU6/HrVsX3HJL8NRTQyecsDDddtP7a5mt
z+S///CGb1+x4tJ3nPmXzXu87mNfuzYLjXQbz5XnrL8ym9E4OHhZ+uKfLuXAw84Ig/iwA+O8Xho3
L3vN+ZdueFNzP5TkMZdev2r5+C9co1s2/urBk158YFo24XvPXpvcecOGDeHmIK2K5M6r4xtPW3dR
YQHBa5J7npMu9owzzgivzKIhzkdRxvpp2WvefMYn3ve2NwVvv+TadWs+uvrKj7/99uR/6m2nHXX7
fdk9tt7zP444+tJ968nji1cDzk45n+dIW5H0OK9rYGMNwNR+t8ztFeNnvDVnZGTkpS996Y9+9KOh
oaGDDz546dKlc/V/GkVRrVabw1U35wucbpFs3779kUceuf/++0844YS2fXLDiy76eNt1aT760Y9m
Hzeu13f88UmUBJ/7XHDccdmN530r+MItwesPGVm2T2NBZx2+6L4tVzRO6NqUTqUX5Gucb/4Pv/rC
hr//57d94XvNnUiae5SmV9cLp3BZ4nF7jLZ9txQ/aJxK5JG3dJ4ebbqnap1KE+TBEaQbbhJbf/jX
39v/i+85bN/sgnyZZO3l18HJJ4oPnOUVg6d1n7JCQZHsSf+syc7dvHDhwiVLlixevHhJavHibHJx
rbYg7+zs+39kZDT7KQiCuLicTPLjm/5kNH4+sjtk+8VnT5EYHBxMfm9md04miuf+8RWh6ytFFRaY
fLved999ySvu/J1WdQ+wbNmyAw444JBDDilemyX7B3ytx6tIo6TOO6/+0ENRq0gSnzll5ODnB//0
wFja5AvNzsMRpuMezUve7Pvqc7/72Zed8v6Tv558dPoX/nHdgUHzF1Rc2DW29/V+p7L1YenKMy68
6fTP/LJx4tcZFMmkL/Zdt4lkqbHtV1/80L3nXXr8vvnF+fIKya8bnOXIRHuTTCWG5ipH+hYKcgR4
Fo6aJF75ylceeuihFSytqtm5c2fnjq61SV5F1q1re0RSKu8+Pnj38WMP/M3P6vnrdPJPqWQqCqJs
W0g8EIfhi97wnavemD1x48U56Bwgmeh6vx1HXvX4GiX58w+vTkd3pvti2fv1vsclabLy2PvQd33t
PxYGRgpRkivuazJRkcx3jhgdAZ4ldVJimoym5rAn9uA0yYuk+FpfK6z65k2de8P2kDTKpz71qWBs
Z8bGThxhvXWV4PrYVYKbX93WVpsem2+Km2x6fD16pMwMXkGn0gfFwaFmYhWGTNrkG2uKXdL12jfz
miNaBJAms1zadBc1h5/APF1buJpfsmaUtL2cfOQjH2m+yget6+OFzftEzUvcpNf/jZobgLOBgda1
6OLk5migcf71qHA9nLF06IiSzjVerI2pfzHG730yvRfR3qcP6TpSku+1mm2jaeuP/Ja23V37vLHG
lhpAmpS1qLn6BObvaoUVVAuCaby2tXYEietBHNabtRGliZJ1SdYrURwVCmb85WAKURIHk5fHPH0Z
pl4kXV9x891Ui7XRubEm/zDodhb5PSBHFAkgTfoz1PEsGTLpfkG+jhXR9lVpP6i1cC3c+mgQRvXW
dYOjMJ/bFiVTLI/OQZTpfrV6vKL3Pr977zsUj+/NR0TaAqXrGc+KxwwbHQHY49NEl8wgSnq88IRd
10vxFbdwob0ovbExNRqM5iUxt1HSeX2ZrgvpvU/r1AdIel+hpu2saJ0nSZtZkVR23xFFAkiT6faB
TTmzjZL0VCKTvywVX3fT1ZR92H46gTmMkmKFdN3RdV4HSDr/39v6I8+OoNulgJ/ROaJFAGliyKSv
UbJmzZq99torOxgnDCbaqtOrIoLWdXS73mmK50ybeOHxpPkym9N1TPe8qL0vodd7CbN/jVcJ9E02
3pmdIXHBggUDAwPJ+1pBuqv72DdmcZtmYSHN8dRs8LQ1rBjnJ08L0n3UsrO0ZefqznZZyw7ic9o0
5uMb2+dfbiDu3LlzwihJiuTmm2++8cYbJ/u/7t4bPVZOt3kT3n+Ka9lvKOhnkeRnXK11k3VDfr3L
3lGSlEbQ7cJPbVGSFYkogT3VkUceuXbt2qeffrp7lCxZsiQpkssuu6zrg1euXHnXXXeZZZZZZpll
lllmzX7Wueeeu27dunH/LMkHUSQbAFCiyCoAACoRJfaVBAAqESVWAQAgSgAARAkAIEoAAEQJACBK
AAAmjRIncAYAyowSLQIAVCJKrAIAQJQAAIgSAECUAACIEgCg2lESu14wAFBilMStIpEkAECZURIY
IwEASpWlSKRIAIAqiEKndAUAqhAlVgEAULkosSkHAChLLQjCfAtOaFsOAFASm28AAFECAJCK41iU
AACV4JBgAKAaUWIVAACiBABgLEqy7TdhawIAoIwoccI0AKASUWIVAACiBABAlAAAVVILswve2MkV
ACiVkRIAQJQAALTUsr8cFwwAlCgMw8ZISZz8CVQJAFCmSI4AAFVQC8cuExyGLhkMAPRR3h5xHNvR
FQCoBFECAFQjSorH3bg4HwBQWpQoEgCgQlGiSACAcjVPnpbu++roGwCgNIUdXQUJAFCJKLERBwCo
RJQoEgCgElECAFCexmnm891b7egKAJSleJ4SawMAKDtK4hZrBADom2J71LIP0ve23QAApXVJvvlG
kQAAZYoCpycBAKoQJYoEAKiC/JDg9D+HBAMAJcn3Kcl3dwUAKDNKFAkAUIEoUSQAQLlqhRzRJQBA
CbIaGXftGwCAsrhKMABQsSixVwkAUJY4jmvp340/ogQAKJHTzAMA1YiSwlWCAQDKi5LmSeYdhAMA
lCfpEBfkAwAqoTFSEkaGSQCA0mQdUss+MF4CAJRr7OgbWQIA9Flxr9ZIkQAAVTB29I11AQCUGSXO
UwIAlKVYIMVr38TSBAAoS62VI1YFAFCCvEIaUWKHEgCgdMVDgo2WAAAlyCKklv4lRwCAktXybTcG
SgCAEkXZMIltNwBAiZIUqWX7k1gXAEC5onyMxHlKAIAS1QpFYm0AAKWJAmMkAEAFOCQYAKhGlGQX
CQ4cEgwA9Ffbhpqo660AAH3mkGAAoBKifMrurgBAnxXbI8qvxqdIAIASu8QhwQBAJbqkFsf1wI6u
AEDZajt37mpdky8YGRmxRgCAvsl3I2lEyfDwUOvWoF6vWzsAQP+jJBFl221svgEAypXu6FqPdQkA
UHKUFLfl6BIAoLQo0SIAQFWipEmaAABlRokNNwBAJaLEDiUAQBWipJ60SKhIAICyo8QqAACqEiXN
zTdWBgBQYpQUN9zYiAMA9E1beESKBACoQpdEigQAqEKXOHkaAFAJUdi6XrB1AQCUIuuQSI4AAFXg
gnwAQDWiRJEAAKVLgsQZXQGASojaIsUaAQBKjhJFAgBUIkoAAMqPEsMkAED5URLXFQkAUHaU1I2R
AABViJJ8SpsAAJWIEgAAUQIAiBIAAFECACBKAABRAgAgSgAAUQIAIEoAgKpHSWg9AABViBIAAFEC
ACBKAABRAgDQNUriOLY6AIDyowQAoES1YGyMxJHBAEBpjJQAANWIEruSAACViBKrAAAQJQAAnVFi
P1cAoBJRAgAgSgAAUQIAIEoAANqjxPlKAIBKRAkAgCgBAEQJAEBFosTeJABAJaIEAECUAACIEgBA
lAAAiBIAQJQAAIgSAECUAACIEgBAlAAATDVKwvwdAEBJUaJFAIBKRElrQpwAAJWIEgAAUQIAiBKr
AACoRJTE1gEAUIUoyafUCQBQhSjRJABAJaIEAECUAACixCoAAEQJAEBTzSoAAMoSp//F6YQoAQDK
yJG4+daYTCdECQBQRpRkTdKQdUksSgCA0qpktJ6eKS1NkzxKQisHAOibejpIUq9ne5U0usRICQBQ
gjgojJSkYyWiBAAoI0rSg27qrcNvHH0DAJRVJdmuJPlIiZOnAQClCbO3LEyMlAAApbdJ448oAQDK
KZFGioRjx/+KEgCglCZpVIkoAQAqUCVhEBVOlCZKAIAS08RICQBQbo203kQJAFB2lQTjqkSUAAAl
l0nGydMAgEoQJQCAKAEAECUAgCgBABAlAEC1oyS2LgCAKkQJAEDpUWKUBACoRJQAAIgSAIDAtW/g
2elPdx1pllnP0FnswYyUAACiBABAlAAAogQAQJQAAKIEAKBHlDifKwBQiSjJqRMAoBJRAgAgSgAA
UQIAIEoAAJpREiZ/UqHVAQCUGCUAAKIEACBVswrgWehXi+6YYM5Ks8yq+Cw/v1O07F1/1bdZc8VI
CQBQCaIEABAlAACiBAAQJQAAogQAqHaUOJ0rAFAm5ykBAEoQx0EcBPV6LEoAgJKrJI7r9fqoKAEA
yjRaH02SZHh4WJQAAKVGycjo6MjIrp07RQkAUKbh4aHh4eE/PvXvnVESOwAHAOibod0N27dvyz4M
jZQAAKXYtWvn7p07tz3xeH6LKAEAyoiSnQ1PPP57UQIAe7Lt//3b3WesXDnns2YcJbt2Pf3E7/9f
uv9IaPMNAFCO3bt3JX+2bXsybZLGLaIEACjB0NDQ8NDuf//DjjAIRQkAUJrhxMjw03/8Yxg2j/8V
JQBACUYTI6O7d+/KdigpRomTlAAA/VNPoqQ+OrR7KGhtvzFSAgCUoHHlm9H60PBwtqOro28AgHI0
rhBcr48kUdI6/EaUAAAlSIokjusjoyPZthsjJQBAWeJEfbQeOnkaAFBqkqSXA26kSdiYCkUJAFCG
xthIGITZXzbfAADlRUmUvA1EA63TkoSiBAAoQZRWyUBtIL9FlAAAJQijKHmr1WrjoyR2QlcAoL9R
ko2URAPxuChRJABAv6Ok8RZFYR4lkZUCAJQQJYWJtgvyAQD0T3OAJK6P33wDANDvKmlpFYooAQBK
DJN63BoqESUAQDk9EsSt940bwrEdXR2CAwD0vU3GJh19AwCUGSStLIltvgEAypFdiK9xopLGZfnG
RkpiqwYA6F+RZFcJTq8UnJ3f1UgJAFBOlqRVkg6SpBNjUWKoBADoZ5NkIyRxc8QkNFICAJTSJGmS
hGF6YeAGR98AAKVlSXPAJA0UUQIAlFsmYXbNYFECAJQvLhwS7ISuAECZ7OgKAJQmzi5/EzamRAkA
UEaOZKcjya4RnF6Xzz4lAEAliBIAoDSFa/LZfAMAlCE9w3wYRa3xEWd0BQBKEUVh8mfBggWN/UnS
U6jlURI7KhgA6JuBgVryZ3BwsHX1PSMlAEAZagsaluy9d35NYFECAJRgcHBw4cLBpUuXBc0dXY2U
AAClRMmiRcnbc/Z7bmugxEgJAFCGRYsTS/Z73vMDUQIAlCgpkqRLnvu853eOlDj0BgDon8HBRYsW
LV7+nOcGHVECANA/CwYHk7d90h1dRQkAUJpabUHytnjJXr2iZOXX/22ix5tllllmmWWWWWZNfVYP
UTSQvC1YMDg+SpzNFQDorzAT1cZHiSIBAPorDsLsbXyUZLO0CQDQtyiJG2/1etAZJQAAfa2SRL0e
ixIAoNQmSd8XmkSUAABlVUmcbsURJQBA+U1SuEWUAAAlCFtvuchKAQCqwEgJAFCWcRtwRAkAUFaR
TLCjq1OnAQDlpEkWJVoEACgpR5K3sVO62nwDAJQWJWFonxIAoFTZRYIHorFtNuGZG87MdicJg/CU
U0+58MILL7vssq4PXrly5V133WWWWWaZZZZZZpk1+1n/9V3v+cTHP/G7f/u9kRIAoExRGEZRUKtF
ogQAKDVKoiRKogULaqIEACg3ShLhgtrAWJTErTOUxFYPANC/KGmMlNSKUWKlAAD9lx19E0WiBACo
RJSEogQAqESaiBIAoFpRElkdAEAViBIAQJQAAIgSAECUAACIEgBAlAAATBoloXUBAFQhSgAARAkA
IEqsAgBAlAAAiBIAoHJRElsPAEAVogQAQJQAAIgSAKByUWK/EgCg9ChxinkAoBJRAgAgSgAARAkA
IEoAAEQJACBKAABECQDwDIgSZysBACoRJQAAogQAECUAAJWKEpfkAwAqESUAAKIEABAlAAAVihJn
KQEAKhElAACiBAAQJVYBAFCpKHGWEgCgElECACBKAABRYhUAAKIEAECUAACiBACgW5Q4HBgAqESU
AACIEgAAUQIAiBIAgPYoCa0HAKAKUQIAIEoAAIpRYhMOAFCJKAEAECUAgCixCgAAUQIAIEoAAFEC
ACBKAIBKR0lsXQAAVYgSAABRAgCIEqsAABAlAACiBAAQJQAAogQAECUAAKIEABAlAACiBAAQJQAA
ogQAECUAAKIEABAlAACziZLYegAAqhAlAACiBABgXJSEVgYAUGKUaBEAoBJRYhUAAJWIksaRN0ZL
AIDSo8QqAACqFSVOVgIAlCjcsOHMsLn5JnzjG0+89dZb77jjjvY7pX8KHzZv63GftgWE4RRumsKs
GdwNmO2viZYoigYGBmq12oJUrSCZld+t+Nh4vOyW/PZ6vZ7fnj0wSg2k8mVmE74QsId57Wtfu3bt
2qeffjq/pVac/cADDxxzzDEnnnjiRJ3R5ZYebdElU7rffcq/bnrd0e8smO8oSSStsKCld5RkqdHW
JZNGSfYUogSeJYaHh4sf1p73vOc9/vjvsw/uTeU//HlVjE1kvzVat45PgbHHhWFblLTPamuItl83
E88a91GPRwFzKBu6yN4PDg7uldp7772XLFmyOJVMJI0StRSjJHk/Ojpab8kTZDQ1MjKSzc1rJumb
hQsXJs+SvE+WWQyUYrj4osCep/GT/uIXv7iVIAAA5WiMuS5YUDvssMO6zQ07JgAA5t7ChQt//vOf
R5/+9Kf32muvww8/fP/995/8QQ7RAQDmTrZd+Be/+MVll132/wUYANaKDS7U84YkAAAAAElFTkSu
QmCC
--20cf30207888d2409504c64b14b9
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--20cf30207888d2409504c64b14b9--


From xen-devel-bounces@lists.xen.org Thu Aug 02 16:44:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 16:44:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwyUs-0006cA-Jg; Thu, 02 Aug 2012 16:43:38 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SwyUq-0006c2-Rb
	for xen-devel@lists.xensource.com; Thu, 02 Aug 2012 16:43:37 +0000
Received: from [85.158.139.83:41801] by server-5.bemta-5.messagelabs.com id
	2E/EC-02722-83EAA105; Thu, 02 Aug 2012 16:43:36 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-10.tower-182.messagelabs.com!1343925814!30244466!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30040 invoked from network); 2 Aug 2012 16:43:35 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-10.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 16:43:35 -0000
X-IronPort-AV: E=Sophos;i="4.77,702,1336348800"; d="scan'208";a="13827842"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 16:43:05 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 2 Aug 2012 17:43:04 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1SwyUK-0001Sf-Lv;
	Thu, 02 Aug 2012 16:43:04 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1SwyUK-0004f6-5q;
	Thu, 02 Aug 2012 17:43:04 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13536-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 2 Aug 2012 17:43:04 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13536: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13536 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13536/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13535
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13535
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13535
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13535

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  3d17148e465c
baseline version:
 xen                  3d622e2c7cfb

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Olaf Hering <olaf@aepfle.de>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=3d17148e465c
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable 3d17148e465c
+ branch=xen-unstable
+ revision=3d17148e465c
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/staging/xen-unstable.hg
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-unstable.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-unstable.hg
+ hg push -r 3d17148e465c ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 2 changesets with 2 changes to 2 files

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Thu Aug 02 16:45:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 16:45:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwyWa-0006nE-EK; Thu, 02 Aug 2012 16:45:24 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SwyWZ-0006ms-6M
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 16:45:23 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1343925917!10980936!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6295 invoked from network); 2 Aug 2012 16:45:17 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 16:45:17 -0000
X-IronPort-AV: E=Sophos;i="4.77,702,1336348800"; d="scan'208";a="13827869"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 16:45:17 +0000
Received: from [127.0.0.1] (10.80.16.67) by smtprelay.citrix.com
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Thu, 2 Aug 2012
	17:45:16 +0100
Message-ID: <1343925916.7571.67.camel@dagon.hellion.org.uk>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: David Erickson <halcyon1981@gmail.com>
Date: Thu, 2 Aug 2012 17:45:16 +0100
In-Reply-To: <CANKx4w_GwULR1gcqJPh37J_rWGxoaxPirsPchdZ=E8iVYgSQSQ@mail.gmail.com>
References: <CACA08Dwza8TjixUYz1o6PWzfaPwfz1jMiGVNyZE9KcJZ_Ln2oA@mail.gmail.com>
	<CANKx4w9XJGnD5u4hN+RhyOy7OUmF5nh-G8549ER1W19jy__u3Q@mail.gmail.com>
	<CANKx4w8Azvf1s0aqGSRSn9_f-Zq_uL_CAN2rfh5kd=udsT2YDg@mail.gmail.com>
	<CACA08Dw0B3xK_Lr6WQiCSgDewt9ED6GPm-_KaTZzQarsELPrcg@mail.gmail.com>
	<CANKx4w8Y0Uddmre-nsm0TT+GgP3AMiCuoXfbc2-2L+PG3TUgbg@mail.gmail.com>
	<CANKx4w9O0bW=GJknP=hnYW5nUAas8AyKhDVjXGMmgeBkRBon5w@mail.gmail.com>
	<CAHyyzzTMX4wcg+DNEL91dWmo0R-6oGJLNH5O50bSUeHkmTWAwQ@mail.gmail.com>
	<CANKx4w9BVBrmQwaCtJr6CsZ6OK=jV+pb35QtNLHp+Jp6=739aA@mail.gmail.com>
	<1342682536.18848.50.camel@dagon.hellion.org.uk>
	<alpine.DEB.2.02.1207191258580.23783@kaball.uk.xensource.com>
	<CANKx4w-UKwt9H45N9Kbox708OBaQEqG3niELp593h2P7oc+pjw@mail.gmail.com>
	<CANKx4w9=X7iXvQBrORgh6aNHEQ--+4QeFwXDBG0HV-DYk7HpxQ@mail.gmail.com>
	<alpine.DEB.2.02.1207311119500.4645@kaball.uk.xensource.com>
	<CANKx4w_9sX9hsgSkwV6Vb8xSVQ_4n+_sFjfXSDEPHQ1seH=9=g@mail.gmail.com>
	<alpine.DEB.2.02.1208011150330.4645@kaball.uk.xensource.com>
	<CANKx4w9AUuAUWtnW59NL7az8VJ-jLJn984hMqvGVwTEfVAvs1A@mail.gmail.com>
	<1343893520.7571.58.camel@dagon.hellion.org.uk>
	<CANKx4w_GwULR1gcqJPh37J_rWGxoaxPirsPchdZ=E8iVYgSQSQ@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Anthony Perard <anthony.perard@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	jacek burghardt <jaceksburghardt@gmail.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] LSI SAS2008 Option Rom Failure
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-02 at 17:24 +0100, David Erickson wrote:
> On Thu, Aug 2, 2012 at 12:45 AM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> I've attached a log with the list of modules matching front,
> xen-netfront.ko is definitely there, but lsmod doesn't show it loaded.

So did you try manually loading it?

[...]
> > If you added xen_emul_unplug=never to your guest command line then you
> > would avoid this unplug and you should have an emulated NIC available
> > instead.
> 
> Verified this did work.

Great.
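
For reference, a minimal sketch of the workaround quoted above, assuming the guest keeps its kernel command line in a GRUB 2 style defaults file (the helper name is hypothetical; adjust the path for your bootloader):

```shell
# Hypothetical helper: append xen_emul_unplug=never to the
# GRUB_CMDLINE_LINUX line of a GRUB defaults file, in place.
# (xen_emul_unplug=never stops the guest from unplugging its
# emulated devices, so an emulated NIC stays available.)
add_unplug_never() {
    sed -i 's/\(GRUB_CMDLINE_LINUX=".*\)"/\1 xen_emul_unplug=never"/' "$1"
}
```

After editing, the bootloader configuration still has to be regenerated (e.g. with grub2-mkconfig) for the change to take effect on the next boot.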

> 
> Thanks,
> David



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 17:01:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 17:01:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwylH-00076z-39; Thu, 02 Aug 2012 17:00:35 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1SwylF-00076u-HX
	for xen-devel@lists.xensource.com; Thu, 02 Aug 2012 17:00:33 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1343926825!3516742!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjcyMzQx\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16291 invoked from network); 2 Aug 2012 17:00:27 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-7.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 2 Aug 2012 17:00:27 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q72H0L0g004073
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 2 Aug 2012 17:00:22 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q72H0HIH014927
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 2 Aug 2012 17:00:18 GMT
Received: from abhmt101.oracle.com (abhmt101.oracle.com [141.146.116.53])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q72H0HnD023071; Thu, 2 Aug 2012 12:00:17 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 02 Aug 2012 10:00:16 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 400234029B; Thu,  2 Aug 2012 12:51:16 -0400 (EDT)
Date: Thu, 2 Aug 2012 12:51:16 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: "W. Michael Petullo" <mike@flyn.org>, wei.wang2@amd.com,
	xen-devel@lists.xensource.com
Message-ID: <20120802165116.GA26474@phenom.dumpdata.com>
References: <20120702215042.GA15140@imp.flyn.org>
	<20120703100223.GB2058@reaktio.net>
	<20120703121604.GA3987@imp.flyn.org>
	<20120703125934.GA25644@phenom.dumpdata.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20120703125934.GA25644@phenom.dumpdata.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: xen@lists.fedoraproject.org
Subject: Re: [Xen-devel] [Fedora-xen] Xen/Linux 3.4.2 performance
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jul 03, 2012 at 08:59:34AM -0400, Konrad Rzeszutek Wilk wrote:
> On Tue, Jul 03, 2012 at 07:16:04AM -0500, W. Michael Petullo wrote:
> > >> We have seen a significant reduction in performance in our research DomU
> > >> OS kernel when running on Fedora 16 with Linux 3.4.2 vs. 3.3.7. We run
> > >> a series of benchmarks which are DomU-kernel-space-CPU-heavy; many of
> > >> these run 10x slower when using the 3.4.2 Linux kernel as Dom0.
> > >> 
> > >> This is a little surprising---we've been tracking the Fedora kernels for
> > >> a long time with no problem like this. Did anyone else notice any changes?
> > >> 
> > > 
> > > Just to verify.. both the 3.3.7 and 3.4.2 Linux kernel are 'release' builds? 
> > > and not debug-versions from rawhide? 
> > 
> > Yes, they are the Fedora 16 release builds.
> 
> The commits that went in (3.4) were:
...
> So one thing that you might be hitting is that now the CPU freq driver is
> uploading the data to the hypervisor - the hypervisor might be doing
> power-save stuff instead of concentrating on giving your raw performance.
> 
> So can you start with 'cpufreq=verbose,performance' on your hypervisor line.

Michael opened a bug, and on it we found that xen-acpi-processor.off=1
solves the performance problem. What that does is stop uploading C-state
and P-state information to the hypervisor. So I pulled up an AMD box
and found that the problem occurs only when the hypervisor enters C2
states. If I do 'xenpm set-max-cstate 1' it gets back to working nicely.
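
For reference, the two workarounds above can be sketched as small shell helpers (hypothetical names; the first writes a modprobe option file you would install under /etc/modprobe.d, the second only prints the xenpm command rather than touching a live system):

```shell
# Hypothetical helper: write a modprobe options file so the
# xen-acpi-processor driver stops uploading C-/P-state data to
# the hypervisor (the equivalent of xen-acpi-processor.off=1).
write_acpi_off() {
    echo "options xen-acpi-processor off=1" > "$1"
}

# Hypothetical helper: print the xenpm invocation that caps the
# deepest C-state the hypervisor may enter (1 keeps it out of C2).
cap_cstate_cmd() {
    echo "xenpm set-max-cstate $1"
}
```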

Wei, any ideas? This is with Xen 4.1.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 17:03:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 17:03:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwynE-0007BG-KC; Thu, 02 Aug 2012 17:02:36 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1SwynD-0007BA-S7
	for xen-devel@lists.xensource.com; Thu, 02 Aug 2012 17:02:36 +0000
Received: from [85.158.138.51:52612] by server-2.bemta-3.messagelabs.com id
	C5/E1-00359-AA2BA105; Thu, 02 Aug 2012 17:02:34 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-2.tower-174.messagelabs.com!1343926953!30129907!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3834 invoked from network); 2 Aug 2012 17:02:33 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-2.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 17:02:33 -0000
X-IronPort-AV: E=Sophos;i="4.77,702,1336348800"; 
	d="asc'?scan'208";a="13828060"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 17:02:33 +0000
Received: from [127.0.0.1] (10.80.16.67) by smtprelay.citrix.com
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Thu, 2 Aug 2012
	18:02:33 +0100
Message-ID: <1343926951.4873.36.camel@Solace>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: Lars Kurth <lars.kurth@citrix.com>
Date: Thu, 2 Aug 2012 19:02:31 +0200
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: George Dunlap <dunlapg@gmail.com>,
	xen-devel <xen-devel@lists.xensource.com>
Subject: [Xen-devel] What about a Fedora TestDay about Xen?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8390678918115379082=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8390678918115379082==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-CGnZmNE/cSHkPRz4dqQL"

--=-CGnZmNE/cSHkPRz4dqQL
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hi Lars, Hi everyone,

As there is some ongoing discussion on Xen TestDays, allow me to mention
that Fedora runs TestDays as part of its QA process for each release.
I'm fairly convinced it would be worthwhile to (try to) organize one of
them with a focus on Xen[*].

Below is some information I collected from the Fedora Wiki...
Basically, the purpose of this e-mail is to gather the Xen community's
thoughts on such a thing and, more importantly, some willingness to
help a bit with the organization! :-P

Here are some basic links about the TestDays:
 https://fedoraproject.org/wiki/QA/Test_Days
 https://fedoraproject.org/wiki/QA/SOP_Test_Day_management
 https://fedoraproject.org/wiki/QA/Fedora_18_test_days

The last one is the current schedule, which is quite full. Also, there
already is a 'Virtualization' test day. However, I think it could still
be useful to have one dedicated to Xen.

What I was thinking was to file a ticket for it, requesting the
(currently) available spot on 2012-09-27. If that does not work for
them, we could ask for something near the Virtualization test day, like
the day before or so (just as they are doing with the X Test Week).

The ticket proposing the Virtualization test day is this one:
 https://fedorahosted.org/fedora-qa/ticket/303

So, as you can see, there is very little to do right now. However, if
they accept it, there will be work to do, as per
https://fedoraproject.org/wiki/QA/SOP_Test_Day_management, with the
challenging issues being, in my opinion, the following:
 - building a live-CD (not mandatory but good to have)
 - defining test cases
 - promoting the event properly

I can try to deal with the live-CD, and I guess I can also (maybe with
some help from Lars) try to promote it as much as possible, using all
our usual and unusual channels.

Where I could use some help from the whole Xen community is in defining
meaningful test cases for such an event. Here is an example from the
last Virtualization test day:
 https://fedoraproject.org/wiki/Test_Day:2012-04-12_Virtualization_Test_Day
 https://fedoraproject.org/wiki/Test_Day:2012-04-12_Virtualization_Test_Day#News_tests_and_features
 https://fedoraproject.org/wiki/Test_Day:2012-04-12_Virtualization_Test_Day#Previous_test_cases

Apart from that, as one could expect, hanging around on an IRC channel
during the event would be very useful.

Let me know any thoughts you have. It would be great if you could do
that ASAP, before we run out of free slots. :-P

Thanks and Regards,
Dario

[*] Of course this is a different thing from the Xen TestDay we're
discussing in the other thread, and I really think both of them could be
useful, serving the very same purpose, although in different ways.

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-CGnZmNE/cSHkPRz4dqQL
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iEYEABECAAYFAlAasqcACgkQk4XaBE3IOsTljgCfZWNQdVv2gSj3247wMprlgrmz
ILYAnRigCiLYhE8vrSKMTKuPyiHe2NL1
=gjj+
-----END PGP SIGNATURE-----

--=-CGnZmNE/cSHkPRz4dqQL--


--===============8390678918115379082==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8390678918115379082==--


From xen-devel-bounces@lists.xen.org Thu Aug 02 17:03:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 17:03:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwynE-0007BG-KC; Thu, 02 Aug 2012 17:02:36 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1SwynD-0007BA-S7
	for xen-devel@lists.xensource.com; Thu, 02 Aug 2012 17:02:36 +0000
Received: from [85.158.138.51:52612] by server-2.bemta-3.messagelabs.com id
	C5/E1-00359-AA2BA105; Thu, 02 Aug 2012 17:02:34 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-2.tower-174.messagelabs.com!1343926953!30129907!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3834 invoked from network); 2 Aug 2012 17:02:33 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-2.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 17:02:33 -0000
X-IronPort-AV: E=Sophos;i="4.77,702,1336348800"; 
	d="asc'?scan'208";a="13828060"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 17:02:33 +0000
Received: from [127.0.0.1] (10.80.16.67) by smtprelay.citrix.com
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Thu, 2 Aug 2012
	18:02:33 +0100
Message-ID: <1343926951.4873.36.camel@Solace>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: Lars Kurth <lars.kurth@citrix.com>
Date: Thu, 2 Aug 2012 19:02:31 +0200
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: George Dunlap <dunlapg@gmail.com>,
	xen-devel <xen-devel@lists.xensource.com>
Subject: [Xen-devel] What about a Fedora TestDay about Xen?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8390678918115379082=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8390678918115379082==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-CGnZmNE/cSHkPRz4dqQL"

--=-CGnZmNE/cSHkPRz4dqQL
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hi Lars, Hi everyone,

As there is some ongoing discussion on Xen TestDays, allow me to mention
that Fedora does TestDays as a part of their QA process for each
release. I'm quite a bit convinced that it could be worthwhile to (try
to) organize one of them with focus on Xen[*].

Below there are some information I collected from the Fedora Wiki...
Basically the purpose of this e-mail is to gather thoughts of the Xen
community about such a thing and, more important, some will to help a
bit with the organization! :-P

Here some basic links about the TestDays:
 https://fedoraproject.org/wiki/QA/Test_Days
 https://fedoraproject.org/wiki/QA/SOP_Test_Day_management
 https://fedoraproject.org/wiki/QA/Fedora_18_test_days

The last one is the current schedule, which is quite full. Also, there
already is a 'Virtualization' test day. However, I think it could still
be useful to have one dedicated to Xen.

What I was thinking was to file a ticket for it, requesting for the
(currently) available spot on 2012-09-27. If that does not work for
them, we could as for something near to the Virtualization test day,
like the day before or so (just as they are doing with the X Test Week).

The ticket proposing the Virtualization test day is this one:
 https://fedorahosted.org/fedora-qa/ticket/303

So, as you see, there is very few to do right now. However, if they
accept it, there will be work to do, as per
https://fedoraproject.org/wiki/QA/SOP_Test_Day_management, with the
challenging issues being, according to me, the following:
 - building a live-CD (not mandatory but good to have)
 - defining test cases
 - promoting the event properly

I can try to deal with the live-CD, and I guess I also can (maybe with
some help from Lars) try to promote it as much as possible, using all
our usual and unusual channels.

Where I could use some help from the whole Xen community is in defining
meaningful test cases for such an event. Here it is an example for last
year Virtualization test day:
 https://fedoraproject.org/wiki/Test_Day:2012-04-12_Virtualization_Test_Day
 https://fedoraproject.org/wiki/Test_Day:2012-04-12_Virtualization_Test_Day=
#News_tests_and_features
 https://fedoraproject.org/wiki/Test_Day:2012-04-12_Virtualization_Test_Day=
#Previous_test_cases

Apart from that, as one could expect, hanging around on an IRC channel
during the event would be very useful.

Let me know any thoughts you have. It would be great if you could do
that ASAP, before we run out of free slots. :-P

Thanks and Regards,
Dario

[*] Of course this is a different thing from the Xen TestDay we're
discussing in the other thread, and I really think both of them could be
useful, serving the very same purpose, although in different ways.

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-CGnZmNE/cSHkPRz4dqQL
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iEYEABECAAYFAlAasqcACgkQk4XaBE3IOsTljgCfZWNQdVv2gSj3247wMprlgrmz
ILYAnRigCiLYhE8vrSKMTKuPyiHe2NL1
=gjj+
-----END PGP SIGNATURE-----

--=-CGnZmNE/cSHkPRz4dqQL--


--===============8390678918115379082==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8390678918115379082==--


From xen-devel-bounces@lists.xen.org Thu Aug 02 17:10:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 17:10:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swyuh-0007WU-Ip; Thu, 02 Aug 2012 17:10:19 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <halcyon1981@gmail.com>) id 1Swyug-0007WP-7w
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 17:10:18 +0000
Received: from [85.158.143.99:62827] by server-1.bemta-4.messagelabs.com id
	5F/61-24392-974BA105; Thu, 02 Aug 2012 17:10:17 +0000
X-Env-Sender: halcyon1981@gmail.com
X-Msg-Ref: server-10.tower-216.messagelabs.com!1343927416!23593261!1
X-Originating-IP: [209.85.212.169]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7204 invoked from network); 2 Aug 2012 17:10:16 -0000
Received: from mail-wi0-f169.google.com (HELO mail-wi0-f169.google.com)
	(209.85.212.169)
	by server-10.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 17:10:16 -0000
Received: by wibhm2 with SMTP id hm2so5671267wib.2
	for <xen-devel@lists.xen.org>; Thu, 02 Aug 2012 10:10:15 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=FAmVsUE53DEYMCwP3Nk60KZqW0ybQAdwfyLGOCukwmQ=;
	b=KA+C3+zA2yhdBudQyBmYjQvtR7QS4/fWB+YzK4A3+vh6vd7kCWsgWbvoY0fby76H4w
	zrGb2mWV1n1kYNTOeEX4LWXf3jiGAvA3eftHONTPfK65cHVrUmla9J0Ua1QTA18lNgHp
	+JQiWiFsjxU9BA7DX6iYtlYfzcTV3T40rHlVjcmCY1MhrywS3Zz0Bqy6lDmfihXtfULq
	D6CeeazznrYKF45lkpGBHNvXAlQ5VIpgBk+OCkPAx+Fk8Hl05MZ02ZV67cdNBqae91Ln
	62Bi3oLbfowTCXA4H++hyG2667dI5Nj4HPRfbt4oMwuv7bEtmjnOl2KLdf5pPaRW19qf
	oreg==
MIME-Version: 1.0
Received: by 10.180.86.226 with SMTP id s2mr6316018wiz.9.1343927415781; Thu,
	02 Aug 2012 10:10:15 -0700 (PDT)
Received: by 10.223.83.9 with HTTP; Thu, 2 Aug 2012 10:10:15 -0700 (PDT)
In-Reply-To: <1343925916.7571.67.camel@dagon.hellion.org.uk>
References: <CACA08Dwza8TjixUYz1o6PWzfaPwfz1jMiGVNyZE9KcJZ_Ln2oA@mail.gmail.com>
	<CANKx4w9XJGnD5u4hN+RhyOy7OUmF5nh-G8549ER1W19jy__u3Q@mail.gmail.com>
	<CANKx4w8Azvf1s0aqGSRSn9_f-Zq_uL_CAN2rfh5kd=udsT2YDg@mail.gmail.com>
	<CACA08Dw0B3xK_Lr6WQiCSgDewt9ED6GPm-_KaTZzQarsELPrcg@mail.gmail.com>
	<CANKx4w8Y0Uddmre-nsm0TT+GgP3AMiCuoXfbc2-2L+PG3TUgbg@mail.gmail.com>
	<CANKx4w9O0bW=GJknP=hnYW5nUAas8AyKhDVjXGMmgeBkRBon5w@mail.gmail.com>
	<CAHyyzzTMX4wcg+DNEL91dWmo0R-6oGJLNH5O50bSUeHkmTWAwQ@mail.gmail.com>
	<CANKx4w9BVBrmQwaCtJr6CsZ6OK=jV+pb35QtNLHp+Jp6=739aA@mail.gmail.com>
	<1342682536.18848.50.camel@dagon.hellion.org.uk>
	<alpine.DEB.2.02.1207191258580.23783@kaball.uk.xensource.com>
	<CANKx4w-UKwt9H45N9Kbox708OBaQEqG3niELp593h2P7oc+pjw@mail.gmail.com>
	<CANKx4w9=X7iXvQBrORgh6aNHEQ--+4QeFwXDBG0HV-DYk7HpxQ@mail.gmail.com>
	<alpine.DEB.2.02.1207311119500.4645@kaball.uk.xensource.com>
	<CANKx4w_9sX9hsgSkwV6Vb8xSVQ_4n+_sFjfXSDEPHQ1seH=9=g@mail.gmail.com>
	<alpine.DEB.2.02.1208011150330.4645@kaball.uk.xensource.com>
	<CANKx4w9AUuAUWtnW59NL7az8VJ-jLJn984hMqvGVwTEfVAvs1A@mail.gmail.com>
	<1343893520.7571.58.camel@dagon.hellion.org.uk>
	<CANKx4w_GwULR1gcqJPh37J_rWGxoaxPirsPchdZ=E8iVYgSQSQ@mail.gmail.com>
	<1343925916.7571.67.camel@dagon.hellion.org.uk>
Date: Thu, 2 Aug 2012 10:10:15 -0700
Message-ID: <CANKx4w9HApwsiBTcHBHud3A_=BJhT9tzATivpY3WjL+V7=yCaQ@mail.gmail.com>
From: David Erickson <halcyon1981@gmail.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: Anthony Perard <anthony.perard@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	jacek burghardt <jaceksburghardt@gmail.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] LSI SAS2008 Option Rom Failure
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Aug 2, 2012 at 9:45 AM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Thu, 2012-08-02 at 17:24 +0100, David Erickson wrote:
>> On Thu, Aug 2, 2012 at 12:45 AM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>> I've attached a log with the list of modules matching front,
>> xen-netfront.ko is definitely there, but lsmod doesn't show it loaded.
>
> So did you try manually loading it?

I did, I got:
insmod: error inserting
'/lib/modules/3.0.0-12-generic/kernel/drivers/net/xen-netfront.ko': -1
Unknown symbol in module

Presumably it depends on other not-yet-loaded kernel modules; do you know which?
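
[Editorial note: that "Unknown symbol" failure is the usual symptom of
insmod, which loads only the single .ko named and resolves nothing else;
modprobe instead reads modules.dep and loads prerequisites first. A
minimal sketch of that lookup (the sample entry is illustrative, not
taken from the reporter's system):]

```shell
# Each modules.dep line maps a module to the modules that must be loaded
# before it. On a real system: /lib/modules/$(uname -r)/modules.dep
dep_line='kernel/drivers/net/xen-netfront.ko: kernel/drivers/xen/xenbus_probe_frontend.ko'

# The part after ": " is the dependency list modprobe loads first:
deps="${dep_line#*: }"
echo "$deps"   # kernel/drivers/xen/xenbus_probe_frontend.ko
```

In practice, `sudo modprobe xen-netfront` (rather than insmod on the
bare .ko) resolves the dependency chain automatically.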

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 17:13:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 17:13:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwyxJ-0007bt-9w; Thu, 02 Aug 2012 17:13:01 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <halcyon1981@gmail.com>) id 1SwyxH-0007bi-4T
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 17:12:59 +0000
Received: from [85.158.139.83:53687] by server-12.bemta-5.messagelabs.com id
	05/F1-26304-A15BA105; Thu, 02 Aug 2012 17:12:58 +0000
X-Env-Sender: halcyon1981@gmail.com
X-Msg-Ref: server-4.tower-182.messagelabs.com!1343927577!27269227!1
X-Originating-IP: [209.85.212.173]
X-SpamReason: No, hits=0.6 required=7.0 tests=MAILTO_TO_SPAM_ADDR,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10111 invoked from network); 2 Aug 2012 17:12:57 -0000
Received: from mail-wi0-f173.google.com (HELO mail-wi0-f173.google.com)
	(209.85.212.173)
	by server-4.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 17:12:57 -0000
Received: by wibhm6 with SMTP id hm6so4390608wib.14
	for <xen-devel@lists.xen.org>; Thu, 02 Aug 2012 10:12:57 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=BZnjHb2NQdGtxnLQAbGwE6fZp/1kVVGoSQuHLrPzL3Y=;
	b=O/mmZup75W/DOo8jJFaQEe2p0rW6UNfwLY4NWmvnRzLBfhuH/HxaMqZVz1gYbLaoKI
	LiDkpyr9w21btuQGKGthjZ9zgunnSYHgJMEdD1ORz2mbTF5+TLyG2SvbxYHTbG1e6mmJ
	ykO5dfKl/5UER0IbR+VWAqLvqOOaLARgGHYnGzgY4+8PSFzJ4+SElEJ4o89R2sb+NVSP
	rBVVRb0QzuaKTMrbQPfeWqQfi65HomgmkcuHTojukNKHDoXErcCfyWJ1L+fkRwRp2qIo
	2dX0c3R+YawXRBsOM4+iZuc85kpQfixJ3fQCq6JuFOliUwr2yXTD2mUl9FnMJtDuxlpA
	BQqw==
MIME-Version: 1.0
Received: by 10.180.20.11 with SMTP id j11mr6306042wie.12.1343927577610; Thu,
	02 Aug 2012 10:12:57 -0700 (PDT)
Received: by 10.223.83.9 with HTTP; Thu, 2 Aug 2012 10:12:57 -0700 (PDT)
In-Reply-To: <CANKx4w9HApwsiBTcHBHud3A_=BJhT9tzATivpY3WjL+V7=yCaQ@mail.gmail.com>
References: <CACA08Dwza8TjixUYz1o6PWzfaPwfz1jMiGVNyZE9KcJZ_Ln2oA@mail.gmail.com>
	<CANKx4w9XJGnD5u4hN+RhyOy7OUmF5nh-G8549ER1W19jy__u3Q@mail.gmail.com>
	<CANKx4w8Azvf1s0aqGSRSn9_f-Zq_uL_CAN2rfh5kd=udsT2YDg@mail.gmail.com>
	<CACA08Dw0B3xK_Lr6WQiCSgDewt9ED6GPm-_KaTZzQarsELPrcg@mail.gmail.com>
	<CANKx4w8Y0Uddmre-nsm0TT+GgP3AMiCuoXfbc2-2L+PG3TUgbg@mail.gmail.com>
	<CANKx4w9O0bW=GJknP=hnYW5nUAas8AyKhDVjXGMmgeBkRBon5w@mail.gmail.com>
	<CAHyyzzTMX4wcg+DNEL91dWmo0R-6oGJLNH5O50bSUeHkmTWAwQ@mail.gmail.com>
	<CANKx4w9BVBrmQwaCtJr6CsZ6OK=jV+pb35QtNLHp+Jp6=739aA@mail.gmail.com>
	<1342682536.18848.50.camel@dagon.hellion.org.uk>
	<alpine.DEB.2.02.1207191258580.23783@kaball.uk.xensource.com>
	<CANKx4w-UKwt9H45N9Kbox708OBaQEqG3niELp593h2P7oc+pjw@mail.gmail.com>
	<CANKx4w9=X7iXvQBrORgh6aNHEQ--+4QeFwXDBG0HV-DYk7HpxQ@mail.gmail.com>
	<alpine.DEB.2.02.1207311119500.4645@kaball.uk.xensource.com>
	<CANKx4w_9sX9hsgSkwV6Vb8xSVQ_4n+_sFjfXSDEPHQ1seH=9=g@mail.gmail.com>
	<alpine.DEB.2.02.1208011150330.4645@kaball.uk.xensource.com>
	<CANKx4w9AUuAUWtnW59NL7az8VJ-jLJn984hMqvGVwTEfVAvs1A@mail.gmail.com>
	<1343893520.7571.58.camel@dagon.hellion.org.uk>
	<CANKx4w_GwULR1gcqJPh37J_rWGxoaxPirsPchdZ=E8iVYgSQSQ@mail.gmail.com>
	<1343925916.7571.67.camel@dagon.hellion.org.uk>
	<CANKx4w9HApwsiBTcHBHud3A_=BJhT9tzATivpY3WjL+V7=yCaQ@mail.gmail.com>
Date: Thu, 2 Aug 2012 10:12:57 -0700
Message-ID: <CANKx4w9paNXNZaaUYj0tFwoTyDrr1-KHT1-C3o6mfG9QFPTDQQ@mail.gmail.com>
From: David Erickson <halcyon1981@gmail.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: Anthony Perard <anthony.perard@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	jacek burghardt <jaceksburghardt@gmail.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] LSI SAS2008 Option Rom Failure
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Aug 2, 2012 at 10:10 AM, David Erickson <halcyon1981@gmail.com> wrote:
> On Thu, Aug 2, 2012 at 9:45 AM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>> On Thu, 2012-08-02 at 17:24 +0100, David Erickson wrote:
>>> On Thu, Aug 2, 2012 at 12:45 AM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>>> I've attached a log with the list of modules matching front,
>>> xen-netfront.ko is definitely there, but lsmod doesn't show it loaded.
>>
>> So did you try manually loading it?
>
> I did, I got:
> insmod: error inserting
> '/lib/modules/3.0.0-12-generic/kernel/drivers/net/xen-netfront.ko': -1
> Unknown symbol in module
>
> Presumably it depends on other not-yet-loaded kernel modules; do you know which?

As a followup I ran the following:

ubuntu@ubuntu:~$ sudo modprobe xen-netfront
ubuntu@ubuntu:~$ [  238.408574] vbd vbd-5632: 19 xenbus_dev_probe on
device/vbd/5632
[  238.433304] vbd vbd-5632: failed to write error node for
device/vbd/5632 (19 xenbus_dev_probe on device/vbd/5632)

ubuntu@ubuntu:~$ sudo lsmod
Module                  Size  Used by
xen_blkfront           26261  0
xen_netfront           26671  0
xenbus_probe_frontend    13232  2 xen_blkfront,xen_netfront,[permanent]
bnep                   18436  2
rfcomm                 47946  0
bluetooth             166112  10 bnep,rfcomm
dm_crypt               23199  0
lp                     17799  0
ppdev                  17113  0
parport_pc             36962  1
parport                46562  3 lp,ppdev,parport_pc
psmouse                73882  0
serio_raw              13166  0
i2c_piix4              13301  0
xen_platform_pci       12885  0 [permanent]
binfmt_misc            17540  1
squashfs               36799  1
overlayfs              28267  1
nls_utf8               12557  1
isofs                  40253  1
dm_raid45              78155  0
xor                    12894  1 dm_raid45
dm_mirror              22203  0
dm_region_hash         20918  1 dm_mirror
dm_log                 18564  3 dm_raid45,dm_mirror,dm_region_hash
btrfs                 648895  0
zlib_deflate           27139  1 btrfs
libcrc32c              12644  1 btrfs
floppy                 70365  0
mpt2sas               152860  0
scsi_transport_sas     40558  1 mpt2sas
raid_class             13622  1 mpt2sas
ubuntu@ubuntu:~$ ifconfig -a
eth0      Link encap:Ethernet  HWaddr 00:16:3e:68:15:49
          inet addr:192.168.1.126  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::216:3eff:fe68:1549/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:54 errors:0 dropped:0 overruns:0 frame:0
          TX packets:51 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:5952 (5.9 KB)  TX bytes:9522 (9.5 KB)
          Interrupt:75

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:96 errors:0 dropped:0 overruns:0 frame:0
          TX packets:96 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:7392 (7.3 KB)  TX bytes:7392 (7.3 KB)

So it looks like it loaded (with some errors; are those a problem?),
but I'm still not sure why it didn't auto-load on boot.
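
[Editorial note: on an Ubuntu guest of that era, modules that udev does
not load automatically can be forced to load at boot by listing them in
/etc/modules. A harmless sketch (a temp file stands in for /etc/modules,
which needs root to modify):]

```shell
# Modules listed one per line in /etc/modules are loaded at boot by the
# init scripts. A temp file stands in for /etc/modules in this sketch.
modfile=$(mktemp)
echo 'xen-netfront' >> "$modfile"   # real system: echo xen-netfront | sudo tee -a /etc/modules
cat "$modfile"
```

If the module is needed before the root filesystem is mounted, it has to
go into the initramfs instead (on Ubuntu: add it to
/etc/initramfs-tools/modules and run `update-initramfs -u`).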

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 17:28:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 17:28:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwzBO-0007uT-2a; Thu, 02 Aug 2012 17:27:34 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SwzBJ-0007qs-CD
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 17:27:29 +0000
Received: from [85.158.139.83:57195] by server-11.bemta-5.messagelabs.com id
	48/EB-20400-088BA105; Thu, 02 Aug 2012 17:27:28 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-2.tower-182.messagelabs.com!1343928446!29952258!2
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8490 invoked from network); 2 Aug 2012 17:27:26 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-2.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 17:27:26 -0000
From xen-devel-bounces@lists.xen.org Thu Aug 02 17:28:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 17:28:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwzBO-0007uT-2a; Thu, 02 Aug 2012 17:27:34 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SwzBJ-0007qs-CD
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 17:27:29 +0000
Received: from [85.158.139.83:57195] by server-11.bemta-5.messagelabs.com id
	48/EB-20400-088BA105; Thu, 02 Aug 2012 17:27:28 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-2.tower-182.messagelabs.com!1343928446!29952258!2
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8490 invoked from network); 2 Aug 2012 17:27:26 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-2.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 17:27:26 -0000
X-IronPort-AV: E=Sophos;i="4.77,702,1336348800"; d="scan'208";a="13828371"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 17:27:25 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 2 Aug 2012 18:27:25 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1SwzBF-0001vi-6d; Thu, 02 Aug 2012 17:27:25 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1SwzBF-0006Gh-5X;
	Thu, 02 Aug 2012 18:27:25 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 2 Aug 2012 18:27:21 +0100
Message-ID: <1343928442-23966-13-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1343928442-23966-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1343928442-23966-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [Xen-devel] [PATCH 12/13] libxl: add a comment re the memory
	management API instability
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 tools/libxl/libxl.h |   12 ++++++++++++
 1 files changed, 12 insertions(+), 0 deletions(-)

diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index 5ec2d74..f11abc2 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -567,6 +567,17 @@ int libxl_domain_core_dump(libxl_ctx *ctx, uint32_t domid,
 int libxl_domain_setmaxmem(libxl_ctx *ctx, uint32_t domid, uint32_t target_memkb);
 int libxl_set_memory_target(libxl_ctx *ctx, uint32_t domid, int32_t target_memkb, int relative, int enforce);
 int libxl_get_memory_target(libxl_ctx *ctx, uint32_t domid, uint32_t *out_target);
+
+
+/*
+ * WARNING
+ * This memory management API is unstable even in Xen 4.2.
+ * It has a number of deficiencies and we intend to replace it.
+ *
+ * The semantics of these functions should not be relied on to be very
+ * coherent or stable.  We will, however, endeavour to ensure that
+ * existing programs which use them in roughly the same way as libxl
+ * does keep working.
+ */
@@ -567,6 +568,18 @@
 /* how much free memory in the system a domain needs to be built */
 int libxl_domain_need_memory(libxl_ctx *ctx, libxl_domain_build_info *b_info,
                              uint32_t *need_memkb);
@@ -577,6 +588,7 @@ int libxl_wait_for_free_memory(libxl_ctx *ctx, uint32_t domid, uint32_t memory_k
 /* wait for the memory target of a domain to be reached */
 int libxl_wait_for_memory_target(libxl_ctx *ctx, uint32_t domid, int wait_secs);
 
+
 int libxl_vncviewer_exec(libxl_ctx *ctx, uint32_t domid, int autopass);
 int libxl_console_exec(libxl_ctx *ctx, uint32_t domid, int cons_num, libxl_console_type type);
 /* libxl_primary_console_exec finds the domid and console number
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 17:28:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 17:28:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwzBK-0007s3-TB; Thu, 02 Aug 2012 17:27:30 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SwzBH-0007qF-Vs
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 17:27:28 +0000
Received: from [85.158.143.99:51936] by server-2.bemta-4.messagelabs.com id
	31/B6-17938-F78BA105; Thu, 02 Aug 2012 17:27:27 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1343928445!29407608!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21079 invoked from network); 2 Aug 2012 17:27:25 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-3.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 17:27:25 -0000
X-IronPort-AV: E=Sophos;i="4.77,702,1336348800"; d="scan'208";a="13828361"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 17:27:25 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 2 Aug 2012 18:27:25 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1SwzBE-0001vN-QR; Thu, 02 Aug 2012 17:27:24 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1SwzBE-0006Fs-OZ;
	Thu, 02 Aug 2012 18:27:24 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 2 Aug 2012 18:27:09 +0100
Message-ID: <1343928442-23966-1-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.2.5
MIME-Version: 1.0
Subject: [Xen-devel] [PATCH v5 00/13] libxl: Assorted bugfixes and cleanups
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

These should go into 4.2 soon:

 A * 01/13 libxl: unify libxl__device_destroy and device_hotplug_done
   * 02/13 libxl: react correctly to bootloader pty master POLLHUP
 A   03/13 libxl: fix device counting race in libxl__devices_destroy
 A   04/13 libxl: fix formatting of DEFINE_DEVICES_ADD
 A   05/13 libxl: abolish useless `start' parameter to libxl__add_*
 A   06/13 libxl: rename aodevs to multidev
 A   07/13 libxl: do not blunder on if bootloader fails (again)

These are harmless enough (but make no functional difference right
now) and should go in as well:

 A   08/13 libxl: remus: mark TODOs more clearly
 A   09/13 libxl: remove an unused numainfo parameter
 A   10/13 libxl: idl: always initialise KeyedEnum keyvar in member init

These are API doc fixes for 4.2:

   + 11/13 libxl: correct some comments regarding event API and fds
   + 12/13 libxl: add a comment re the memory management API instability

And this one is still waiting on the qmp ask_timeout question:

 X * 13/13 libxl: -Wunused-parameter

Key:

 A   acked
 X   DO NOT APPLY
   * modified since v4
   + new patch, not posted before


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 17:28:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 17:28:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwzBM-0007tX-TG; Thu, 02 Aug 2012 17:27:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SwzBI-0007q0-Fy
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 17:27:28 +0000
Received: from [85.158.143.99:8999] by server-1.bemta-4.messagelabs.com id
	F2/6E-24392-F78BA105; Thu, 02 Aug 2012 17:27:27 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1343928445!29407608!9
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21133 invoked from network); 2 Aug 2012 17:27:26 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-3.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 17:27:26 -0000
X-IronPort-AV: E=Sophos;i="4.77,702,1336348800"; d="scan'208";a="13828370"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 17:27:25 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 2 Aug 2012 18:27:25 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1SwzBF-0001vZ-2F; Thu, 02 Aug 2012 17:27:25 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1SwzBF-0006GK-1J;
	Thu, 02 Aug 2012 18:27:25 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 2 Aug 2012 18:27:15 +0100
Message-ID: <1343928442-23966-7-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1343928442-23966-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1343928442-23966-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [Xen-devel] [PATCH 06/13] libxl: rename aodevs to multidev
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

To be consistent with the new function naming, rename
libxl__ao_devices to libxl__multidev and all variables aodevs to
multidev.

No functional change.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
---
 tools/libxl/libxl_create.c   |   30 +++++++++---------
 tools/libxl/libxl_device.c   |   68 +++++++++++++++++++++---------------------
 tools/libxl/libxl_dm.c       |   30 +++++++++---------
 tools/libxl/libxl_internal.h |   26 ++++++++--------
 4 files changed, 77 insertions(+), 77 deletions(-)

diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index 5275373..5f0d26f 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -599,10 +599,10 @@ static void domcreate_bootloader_done(libxl__egc *egc,
                                       libxl__bootloader_state *bl,
                                       int rc);
 
-static void domcreate_launch_dm(libxl__egc *egc, libxl__ao_devices *aodevs,
+static void domcreate_launch_dm(libxl__egc *egc, libxl__multidev *aodevs,
                                 int ret);
 
-static void domcreate_attach_pci(libxl__egc *egc, libxl__ao_devices *aodevs,
+static void domcreate_attach_pci(libxl__egc *egc, libxl__multidev *aodevs,
                                  int ret);
 
 static void domcreate_console_available(libxl__egc *egc,
@@ -909,10 +909,10 @@ static void domcreate_rebuild_done(libxl__egc *egc,
 
     store_libxl_entry(gc, domid, &d_config->b_info);
 
-    libxl__multidev_begin(ao, &dcs->aodevs);
-    dcs->aodevs.callback = domcreate_launch_dm;
-    libxl__add_disks(egc, ao, domid, d_config, &dcs->aodevs);
-    libxl__multidev_prepared(egc, &dcs->aodevs, 0);
+    libxl__multidev_begin(ao, &dcs->multidev);
+    dcs->multidev.callback = domcreate_launch_dm;
+    libxl__add_disks(egc, ao, domid, d_config, &dcs->multidev);
+    libxl__multidev_prepared(egc, &dcs->multidev, 0);
 
     return;
 
@@ -921,10 +921,10 @@ static void domcreate_rebuild_done(libxl__egc *egc,
     domcreate_complete(egc, dcs, ret);
 }
 
-static void domcreate_launch_dm(libxl__egc *egc, libxl__ao_devices *aodevs,
+static void domcreate_launch_dm(libxl__egc *egc, libxl__multidev *multidev,
                                 int ret)
 {
-    libxl__domain_create_state *dcs = CONTAINER_OF(aodevs, *dcs, aodevs);
+    libxl__domain_create_state *dcs = CONTAINER_OF(multidev, *dcs, multidev);
     STATE_AO_GC(dcs->ao);
     int i;
 
@@ -1039,14 +1039,14 @@ static void domcreate_devmodel_started(libxl__egc *egc,
     /* Plug nic interfaces */
     if (d_config->num_nics > 0) {
         /* Attach nics */
-        libxl__multidev_begin(ao, &dcs->aodevs);
-        dcs->aodevs.callback = domcreate_attach_pci;
-        libxl__add_nics(egc, ao, domid, d_config, &dcs->aodevs);
-        libxl__multidev_prepared(egc, &dcs->aodevs, 0);
+        libxl__multidev_begin(ao, &dcs->multidev);
+        dcs->multidev.callback = domcreate_attach_pci;
+        libxl__add_nics(egc, ao, domid, d_config, &dcs->multidev);
+        libxl__multidev_prepared(egc, &dcs->multidev, 0);
         return;
     }
 
-    domcreate_attach_pci(egc, &dcs->aodevs, 0);
+    domcreate_attach_pci(egc, &dcs->multidev, 0);
     return;
 
 error_out:
@@ -1054,10 +1054,10 @@ error_out:
     domcreate_complete(egc, dcs, ret);
 }
 
-static void domcreate_attach_pci(libxl__egc *egc, libxl__ao_devices *aodevs,
+static void domcreate_attach_pci(libxl__egc *egc, libxl__multidev *multidev,
                                  int ret)
 {
-    libxl__domain_create_state *dcs = CONTAINER_OF(aodevs, *dcs, aodevs);
+    libxl__domain_create_state *dcs = CONTAINER_OF(multidev, *dcs, multidev);
     STATE_AO_GC(dcs->ao);
     int i;
     libxl_ctx *ctx = CTX;
diff --git a/tools/libxl/libxl_device.c b/tools/libxl/libxl_device.c
index 27fbd21..9fc63f1 100644
--- a/tools/libxl/libxl_device.c
+++ b/tools/libxl/libxl_device.c
@@ -403,13 +403,13 @@ void libxl__prepare_ao_device(libxl__ao *ao, libxl__ao_device *aodev)
 
 /* multidev */
 
-void libxl__multidev_begin(libxl__ao *ao, libxl__ao_devices *aodevs)
+void libxl__multidev_begin(libxl__ao *ao, libxl__multidev *multidev)
 {
     AO_GC;
 
-    aodevs->ao = ao;
-    aodevs->array = 0;
-    aodevs->used = aodevs->allocd = 0;
+    multidev->ao = ao;
+    multidev->array = 0;
+    multidev->used = multidev->allocd = 0;
 
     /* We allocate an aodev to represent the operation of preparing
      * all of the other operations.  This operation is completed when
@@ -422,25 +422,25 @@ void libxl__multidev_begin(libxl__ao *ao, libxl__ao_devices *aodevs)
      *  (iii) we have a nice consistent way to deal with any
      *      error that might occur while deciding what to initiate
      */
-    aodevs->preparation = libxl__multidev_prepare(aodevs);
+    multidev->preparation = libxl__multidev_prepare(multidev);
 }
 
 static void multidev_one_callback(libxl__egc *egc, libxl__ao_device *aodev);
 
-libxl__ao_device *libxl__multidev_prepare(libxl__ao_devices *aodevs) {
-    STATE_AO_GC(aodevs->ao);
+libxl__ao_device *libxl__multidev_prepare(libxl__multidev *multidev) {
+    STATE_AO_GC(multidev->ao);
     libxl__ao_device *aodev;
 
     GCNEW(aodev);
-    aodev->aodevs = aodevs;
+    aodev->multidev = multidev;
     aodev->callback = multidev_one_callback;
     libxl__prepare_ao_device(ao, aodev);
 
-    if (aodevs->used >= aodevs->allocd) {
-        aodevs->allocd = aodevs->used * 2 + 5;
-        GCREALLOC_ARRAY(aodevs->array, aodevs->allocd);
+    if (multidev->used >= multidev->allocd) {
+        multidev->allocd = multidev->used * 2 + 5;
+        GCREALLOC_ARRAY(multidev->array, multidev->allocd);
     }
-    aodevs->array[aodevs->used++] = aodev;
+    multidev->array[multidev->used++] = aodev;
 
     return aodev;
 }
@@ -448,28 +448,28 @@ libxl__ao_device *libxl__multidev_prepare(libxl__ao_devices *aodevs) {
 static void multidev_one_callback(libxl__egc *egc, libxl__ao_device *aodev)
 {
     STATE_AO_GC(aodev->ao);
-    libxl__ao_devices *aodevs = aodev->aodevs;
+    libxl__multidev *multidev = aodev->multidev;
     int i, error = 0;
 
     aodev->active = 0;
 
-    for (i = 0; i < aodevs->used; i++) {
-        if (aodevs->array[i]->active)
+    for (i = 0; i < multidev->used; i++) {
+        if (multidev->array[i]->active)
             return;
 
-        if (aodevs->array[i]->rc)
-            error = aodevs->array[i]->rc;
+        if (multidev->array[i]->rc)
+            error = multidev->array[i]->rc;
     }
 
-    aodevs->callback(egc, aodevs, error);
+    multidev->callback(egc, multidev, error);
     return;
 }
 
-void libxl__multidev_prepared(libxl__egc *egc, libxl__ao_devices *aodevs,
-                              int rc)
+void libxl__multidev_prepared(libxl__egc *egc,
+                              libxl__multidev *multidev, int rc)
 {
-    aodevs->preparation->rc = rc;
-    multidev_one_callback(egc, aodevs->preparation);
+    multidev->preparation->rc = rc;
+    multidev_one_callback(egc, multidev->preparation);
 }
 
 /******************************************************************************/
@@ -486,12 +486,12 @@ void libxl__multidev_prepared(libxl__egc *egc, libxl__ao_devices *aodevs,
 #define DEFINE_DEVICES_ADD(type)                                        \
     void libxl__add_##type##s(libxl__egc *egc, libxl__ao *ao, uint32_t domid, \
                               libxl_domain_config *d_config,            \
-                              libxl__ao_devices *aodevs)                \
+                              libxl__multidev *multidev)                \
     {                                                                   \
         AO_GC;                                                          \
         int i;                                                          \
         for (i = 0; i < d_config->num_##type##s; i++) {                 \
-            libxl__ao_device *aodev = libxl__multidev_prepare(aodevs);  \
+            libxl__ao_device *aodev = libxl__multidev_prepare(multidev);  \
             libxl__device_##type##_add(egc, domid, &d_config->type##s[i], \
                                        aodev);                          \
         }                                                               \
@@ -532,8 +532,8 @@ out:
 
 /* Callback for device destruction */
 
-static void devices_remove_callback(libxl__egc *egc, libxl__ao_devices *aodevs,
-                                    int rc);
+static void devices_remove_callback(libxl__egc *egc,
+                                    libxl__multidev *multidev, int rc);
 
 void libxl__devices_destroy(libxl__egc *egc, libxl__devices_remove_state *drs)
 {
@@ -545,12 +545,12 @@ void libxl__devices_destroy(libxl__egc *egc, libxl__devices_remove_state *drs)
     char **kinds = NULL, **devs = NULL;
     int i, j, rc = 0;
     libxl__device *dev;
-    libxl__ao_devices *aodevs = &drs->aodevs;
+    libxl__multidev *multidev = &drs->multidev;
     libxl__ao_device *aodev;
     libxl__device_kind kind;
 
-    libxl__multidev_begin(ao, aodevs);
-    aodevs->callback = devices_remove_callback;
+    libxl__multidev_begin(ao, multidev);
+    multidev->callback = devices_remove_callback;
 
     path = libxl__sprintf(gc, "/local/domain/%d/device", domid);
     kinds = libxl__xs_directory(gc, XBT_NULL, path, &num_kinds);
@@ -587,7 +587,7 @@ void libxl__devices_destroy(libxl__egc *egc, libxl__devices_remove_state *drs)
                     libxl__device_destroy(gc, dev);
                     continue;
                 }
-                aodev = libxl__multidev_prepare(aodevs);
+                aodev = libxl__multidev_prepare(multidev);
                 aodev->action = DEVICE_DISCONNECT;
                 aodev->dev = dev;
                 aodev->force = drs->force;
@@ -613,7 +613,7 @@ void libxl__devices_destroy(libxl__egc *egc, libxl__devices_remove_state *drs)
     }
 
 out:
-    libxl__multidev_prepared(egc, aodevs, rc);
+    libxl__multidev_prepared(egc, multidev, rc);
 }
 
 /* Callbacks for device related operations */
@@ -1003,10 +1003,10 @@ static void device_hotplug_clean(libxl__gc *gc, libxl__ao_device *aodev)
     assert(!libxl__ev_child_inuse(&aodev->child));
 }
 
-static void devices_remove_callback(libxl__egc *egc, libxl__ao_devices *aodevs,
-                                    int rc)
+static void devices_remove_callback(libxl__egc *egc,
+                                    libxl__multidev *multidev, int rc)
 {
-    libxl__devices_remove_state *drs = CONTAINER_OF(aodevs, *drs, aodevs);
+    libxl__devices_remove_state *drs = CONTAINER_OF(multidev, *drs, multidev);
     STATE_AO_GC(drs->ao);
 
     drs->callback(egc, drs, rc);
diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
index 66aa45e..0c0084f 100644
--- a/tools/libxl/libxl_dm.c
+++ b/tools/libxl/libxl_dm.c
@@ -714,10 +714,10 @@ static void spawn_stubdom_pvqemu_cb(libxl__egc *egc,
                                 int rc);
 
 static void spawn_stub_launch_dm(libxl__egc *egc,
-                                 libxl__ao_devices *aodevs, int ret);
+                                 libxl__multidev *aodevs, int ret);
 
 static void stubdom_pvqemu_cb(libxl__egc *egc,
-                              libxl__ao_devices *aodevs,
+                              libxl__multidev *aodevs,
                               int rc);
 
 static void spaw_stubdom_pvqemu_destroy_cb(libxl__egc *egc,
@@ -856,10 +856,10 @@ retry_transaction:
         if (errno == EAGAIN)
             goto retry_transaction;
 
-    libxl__multidev_begin(ao, &sdss->aodevs);
-    sdss->aodevs.callback = spawn_stub_launch_dm;
-    libxl__add_disks(egc, ao, dm_domid, dm_config, &sdss->aodevs);
-    libxl__multidev_prepared(egc, &sdss->aodevs, 0);
+    libxl__multidev_begin(ao, &sdss->multidev);
+    sdss->multidev.callback = spawn_stub_launch_dm;
+    libxl__add_disks(egc, ao, dm_domid, dm_config, &sdss->multidev);
+    libxl__multidev_prepared(egc, &sdss->multidev, 0);
 
     free(args);
     return;
@@ -872,9 +872,9 @@ out:
 }
 
 static void spawn_stub_launch_dm(libxl__egc *egc,
-                                 libxl__ao_devices *aodevs, int ret)
+                                 libxl__multidev *multidev, int ret)
 {
-    libxl__stub_dm_spawn_state *sdss = CONTAINER_OF(aodevs, *sdss, aodevs);
+    libxl__stub_dm_spawn_state *sdss = CONTAINER_OF(multidev, *sdss, multidev);
     STATE_AO_GC(sdss->dm.spawn.ao);
     libxl_ctx *ctx = libxl__gc_owner(gc);
     int i, num_console = STUBDOM_SPECIAL_CONSOLES;
@@ -982,22 +982,22 @@ static void spawn_stubdom_pvqemu_cb(libxl__egc *egc,
     if (rc) goto out;
 
     if (d_config->num_nics > 0) {
-        libxl__multidev_begin(ao, &sdss->aodevs);
-        sdss->aodevs.callback = stubdom_pvqemu_cb;
-        libxl__add_nics(egc, ao, dm_domid, d_config, &sdss->aodevs);
-        libxl__multidev_prepared(egc, &sdss->aodevs, 0);
+        libxl__multidev_begin(ao, &sdss->multidev);
+        sdss->multidev.callback = stubdom_pvqemu_cb;
+        libxl__add_nics(egc, ao, dm_domid, d_config, &sdss->multidev);
+        libxl__multidev_prepared(egc, &sdss->multidev, 0);
         return;
     }
 
 out:
-    stubdom_pvqemu_cb(egc, &sdss->aodevs, rc);
+    stubdom_pvqemu_cb(egc, &sdss->multidev, rc);
 }
 
 static void stubdom_pvqemu_cb(libxl__egc *egc,
-                              libxl__ao_devices *aodevs,
+                              libxl__multidev *multidev,
                               int rc)
 {
-    libxl__stub_dm_spawn_state *sdss = CONTAINER_OF(aodevs, *sdss, aodevs);
+    libxl__stub_dm_spawn_state *sdss = CONTAINER_OF(multidev, *sdss, multidev);
     STATE_AO_GC(sdss->dm.spawn.ao);
     uint32_t dm_domid = sdss->pvqemu.guest_domid;
 
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index bb3eb5f..6528694 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -1797,7 +1797,7 @@ typedef enum {
 } libxl__device_action;
 
 typedef struct libxl__ao_device libxl__ao_device;
-typedef struct libxl__ao_devices libxl__ao_devices;
+typedef struct libxl__multidev libxl__multidev;
 typedef void libxl__device_callback(libxl__egc*, libxl__ao_device*);
 
 /* This functions sets the necessary libxl__ao_device struct values to use
@@ -1827,7 +1827,7 @@ struct libxl__ao_device {
     int rc;
     /* private for multidev */
     int active;
-    libxl__ao_devices *aodevs; /* reference to the containing multidev */
+    libxl__multidev *multidev; /* reference to the containing multidev */
     /* private for add/remove implementation */
     libxl__ev_devstate backend_ds;
     /* Bodge for Qemu devices, also used for timeout of hotplug execution */
@@ -1853,12 +1853,12 @@ struct libxl__ao_device {
  */
 
 /* Starts preparing to add/remove a bunch of devices. */
-_hidden void libxl__multidev_begin(libxl__ao *ao, libxl__ao_devices*);
+_hidden void libxl__multidev_begin(libxl__ao *ao, libxl__multidev*);
 
 /* Prepares to add/remove one of many devices.  Returns a libxl__ao_device
  * which has had libxl__prepare_ao_device called, and which has also
  * had ->callback set.  The user should not mess with aodev->callback. */
-_hidden libxl__ao_device *libxl__multidev_prepare(libxl__ao_devices*);
+_hidden libxl__ao_device *libxl__multidev_prepare(libxl__multidev*);
 
 /* Notifies the multidev machinery that we have now finished preparing
  * and initiating devices.  multidev->callback may then be called as
@@ -1866,10 +1866,10 @@ _hidden libxl__ao_device *libxl__multidev_prepare(libxl__ao_devices*);
  * outstanding, perhaps reentrantly.  If rc!=0 (error should have been
  * logged) multidev->callback will get a non-zero rc.
  * callback may be set by the user at any point before prepared. */
-_hidden void libxl__multidev_prepared(libxl__egc*, libxl__ao_devices*, int rc);
+_hidden void libxl__multidev_prepared(libxl__egc*, libxl__multidev*, int rc);
 
-typedef void libxl__devices_callback(libxl__egc*, libxl__ao_devices*, int rc);
-struct libxl__ao_devices {
+typedef void libxl__devices_callback(libxl__egc*, libxl__multidev*, int rc);
+struct libxl__multidev {
     /* set by user: */
     libxl__devices_callback *callback;
     /* for private use by libxl__...ao_devices... machinery: */
@@ -2342,7 +2342,7 @@ struct libxl__devices_remove_state {
     libxl__devices_remove_callback *callback;
     int force; /* libxl_device_TYPE_destroy rather than _remove */
     /* private */
-    libxl__ao_devices aodevs;
+    libxl__multidev multidev;
     int num_devices;
 };
 
@@ -2386,7 +2386,7 @@ _hidden void libxl__devices_destroy(libxl__egc *egc,
                                     libxl__devices_remove_state *drs);
 
 /* Helper function to add a bunch of disks. This should be used when
- * the caller is inside an async op. "devices" will NOT be prepared by
+ * the caller is inside an async op. "multidev" will NOT be prepared by
  * this function, so the caller must make sure to call
  * libxl__multidev_begin before calling this function.
  *
@@ -2395,11 +2395,11 @@ _hidden void libxl__devices_destroy(libxl__egc *egc,
  */
 _hidden void libxl__add_disks(libxl__egc *egc, libxl__ao *ao, uint32_t domid,
                               libxl_domain_config *d_config,
-                              libxl__ao_devices *aodevs);
+                              libxl__multidev *multidev);
 
 _hidden void libxl__add_nics(libxl__egc *egc, libxl__ao *ao, uint32_t domid,
                              libxl_domain_config *d_config,
-                             libxl__ao_devices *aodevs);
+                             libxl__multidev *multidev);
 
 /*----- device model creation -----*/
 
@@ -2435,7 +2435,7 @@ typedef struct {
     libxl__domain_build_state dm_state;
     libxl__dm_spawn_state pvqemu;
     libxl__destroy_domid_state dis;
-    libxl__ao_devices aodevs;
+    libxl__multidev multidev;
 } libxl__stub_dm_spawn_state;
 
 _hidden void libxl__spawn_stub_dm(libxl__egc *egc, libxl__stub_dm_spawn_state*);
@@ -2467,7 +2467,7 @@ struct libxl__domain_create_state {
     libxl__save_helper_state shs;
     /* necessary if the domain creation failed and we have to destroy it */
     libxl__domain_destroy_state dds;
-    libxl__ao_devices aodevs;
+    libxl__multidev multidev;
 };
 
 /*----- Domain suspend (save) functions -----*/
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 17:28:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 17:28:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwzBM-0007tX-TG; Thu, 02 Aug 2012 17:27:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SwzBI-0007q0-Fy
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 17:27:28 +0000
Received: from [85.158.143.99:8999] by server-1.bemta-4.messagelabs.com id
	F2/6E-24392-F78BA105; Thu, 02 Aug 2012 17:27:27 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1343928445!29407608!9
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21133 invoked from network); 2 Aug 2012 17:27:26 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-3.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 17:27:26 -0000
X-IronPort-AV: E=Sophos;i="4.77,702,1336348800"; d="scan'208";a="13828370"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 17:27:25 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 2 Aug 2012 18:27:25 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1SwzBF-0001vZ-2F; Thu, 02 Aug 2012 17:27:25 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1SwzBF-0006GK-1J;
	Thu, 02 Aug 2012 18:27:25 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 2 Aug 2012 18:27:15 +0100
Message-ID: <1343928442-23966-7-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1343928442-23966-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1343928442-23966-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [Xen-devel] [PATCH 06/13] libxl: rename aodevs to multidev
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

To be consistent with the new function naming, rename
libxl__ao_devices to libxl__multidev and all variables aodevs to
multidev.

No functional change.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
---
 tools/libxl/libxl_create.c   |   30 +++++++++---------
 tools/libxl/libxl_device.c   |   68 +++++++++++++++++++++---------------------
 tools/libxl/libxl_dm.c       |   30 +++++++++---------
 tools/libxl/libxl_internal.h |   26 ++++++++--------
 4 files changed, 77 insertions(+), 77 deletions(-)

diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index 5275373..5f0d26f 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -599,10 +599,10 @@ static void domcreate_bootloader_done(libxl__egc *egc,
                                       libxl__bootloader_state *bl,
                                       int rc);
 
-static void domcreate_launch_dm(libxl__egc *egc, libxl__ao_devices *aodevs,
+static void domcreate_launch_dm(libxl__egc *egc, libxl__multidev *aodevs,
                                 int ret);
 
-static void domcreate_attach_pci(libxl__egc *egc, libxl__ao_devices *aodevs,
+static void domcreate_attach_pci(libxl__egc *egc, libxl__multidev *aodevs,
                                  int ret);
 
 static void domcreate_console_available(libxl__egc *egc,
@@ -909,10 +909,10 @@ static void domcreate_rebuild_done(libxl__egc *egc,
 
     store_libxl_entry(gc, domid, &d_config->b_info);
 
-    libxl__multidev_begin(ao, &dcs->aodevs);
-    dcs->aodevs.callback = domcreate_launch_dm;
-    libxl__add_disks(egc, ao, domid, d_config, &dcs->aodevs);
-    libxl__multidev_prepared(egc, &dcs->aodevs, 0);
+    libxl__multidev_begin(ao, &dcs->multidev);
+    dcs->multidev.callback = domcreate_launch_dm;
+    libxl__add_disks(egc, ao, domid, d_config, &dcs->multidev);
+    libxl__multidev_prepared(egc, &dcs->multidev, 0);
 
     return;
 
@@ -921,10 +921,10 @@ static void domcreate_rebuild_done(libxl__egc *egc,
     domcreate_complete(egc, dcs, ret);
 }
 
-static void domcreate_launch_dm(libxl__egc *egc, libxl__ao_devices *aodevs,
+static void domcreate_launch_dm(libxl__egc *egc, libxl__multidev *multidev,
                                 int ret)
 {
-    libxl__domain_create_state *dcs = CONTAINER_OF(aodevs, *dcs, aodevs);
+    libxl__domain_create_state *dcs = CONTAINER_OF(multidev, *dcs, multidev);
     STATE_AO_GC(dcs->ao);
     int i;
 
@@ -1039,14 +1039,14 @@ static void domcreate_devmodel_started(libxl__egc *egc,
     /* Plug nic interfaces */
     if (d_config->num_nics > 0) {
         /* Attach nics */
-        libxl__multidev_begin(ao, &dcs->aodevs);
-        dcs->aodevs.callback = domcreate_attach_pci;
-        libxl__add_nics(egc, ao, domid, d_config, &dcs->aodevs);
-        libxl__multidev_prepared(egc, &dcs->aodevs, 0);
+        libxl__multidev_begin(ao, &dcs->multidev);
+        dcs->multidev.callback = domcreate_attach_pci;
+        libxl__add_nics(egc, ao, domid, d_config, &dcs->multidev);
+        libxl__multidev_prepared(egc, &dcs->multidev, 0);
         return;
     }
 
-    domcreate_attach_pci(egc, &dcs->aodevs, 0);
+    domcreate_attach_pci(egc, &dcs->multidev, 0);
     return;
 
 error_out:
@@ -1054,10 +1054,10 @@ error_out:
     domcreate_complete(egc, dcs, ret);
 }
 
-static void domcreate_attach_pci(libxl__egc *egc, libxl__ao_devices *aodevs,
+static void domcreate_attach_pci(libxl__egc *egc, libxl__multidev *multidev,
                                  int ret)
 {
-    libxl__domain_create_state *dcs = CONTAINER_OF(aodevs, *dcs, aodevs);
+    libxl__domain_create_state *dcs = CONTAINER_OF(multidev, *dcs, multidev);
     STATE_AO_GC(dcs->ao);
     int i;
     libxl_ctx *ctx = CTX;
diff --git a/tools/libxl/libxl_device.c b/tools/libxl/libxl_device.c
index 27fbd21..9fc63f1 100644
--- a/tools/libxl/libxl_device.c
+++ b/tools/libxl/libxl_device.c
@@ -403,13 +403,13 @@ void libxl__prepare_ao_device(libxl__ao *ao, libxl__ao_device *aodev)
 
 /* multidev */
 
-void libxl__multidev_begin(libxl__ao *ao, libxl__ao_devices *aodevs)
+void libxl__multidev_begin(libxl__ao *ao, libxl__multidev *multidev)
 {
     AO_GC;
 
-    aodevs->ao = ao;
-    aodevs->array = 0;
-    aodevs->used = aodevs->allocd = 0;
+    multidev->ao = ao;
+    multidev->array = 0;
+    multidev->used = multidev->allocd = 0;
 
     /* We allocate an aodev to represent the operation of preparing
      * all of the other operations.  This operation is completed when
@@ -422,25 +422,25 @@ void libxl__multidev_begin(libxl__ao *ao, libxl__ao_devices *aodevs)
      *  (iii) we have a nice consistent way to deal with any
      *      error that might occur while deciding what to initiate
      */
-    aodevs->preparation = libxl__multidev_prepare(aodevs);
+    multidev->preparation = libxl__multidev_prepare(multidev);
 }
 
 static void multidev_one_callback(libxl__egc *egc, libxl__ao_device *aodev);
 
-libxl__ao_device *libxl__multidev_prepare(libxl__ao_devices *aodevs) {
-    STATE_AO_GC(aodevs->ao);
+libxl__ao_device *libxl__multidev_prepare(libxl__multidev *multidev) {
+    STATE_AO_GC(multidev->ao);
     libxl__ao_device *aodev;
 
     GCNEW(aodev);
-    aodev->aodevs = aodevs;
+    aodev->multidev = multidev;
     aodev->callback = multidev_one_callback;
     libxl__prepare_ao_device(ao, aodev);
 
-    if (aodevs->used >= aodevs->allocd) {
-        aodevs->allocd = aodevs->used * 2 + 5;
-        GCREALLOC_ARRAY(aodevs->array, aodevs->allocd);
+    if (multidev->used >= multidev->allocd) {
+        multidev->allocd = multidev->used * 2 + 5;
+        GCREALLOC_ARRAY(multidev->array, multidev->allocd);
     }
-    aodevs->array[aodevs->used++] = aodev;
+    multidev->array[multidev->used++] = aodev;
 
     return aodev;
 }
@@ -448,28 +448,28 @@ libxl__ao_device *libxl__multidev_prepare(libxl__ao_devices *aodevs) {
 static void multidev_one_callback(libxl__egc *egc, libxl__ao_device *aodev)
 {
     STATE_AO_GC(aodev->ao);
-    libxl__ao_devices *aodevs = aodev->aodevs;
+    libxl__multidev *multidev = aodev->multidev;
     int i, error = 0;
 
     aodev->active = 0;
 
-    for (i = 0; i < aodevs->used; i++) {
-        if (aodevs->array[i]->active)
+    for (i = 0; i < multidev->used; i++) {
+        if (multidev->array[i]->active)
             return;
 
-        if (aodevs->array[i]->rc)
-            error = aodevs->array[i]->rc;
+        if (multidev->array[i]->rc)
+            error = multidev->array[i]->rc;
     }
 
-    aodevs->callback(egc, aodevs, error);
+    multidev->callback(egc, multidev, error);
     return;
 }
 
-void libxl__multidev_prepared(libxl__egc *egc, libxl__ao_devices *aodevs,
-                              int rc)
+void libxl__multidev_prepared(libxl__egc *egc,
+                              libxl__multidev *multidev, int rc)
 {
-    aodevs->preparation->rc = rc;
-    multidev_one_callback(egc, aodevs->preparation);
+    multidev->preparation->rc = rc;
+    multidev_one_callback(egc, multidev->preparation);
 }
 
 /******************************************************************************/
@@ -486,12 +486,12 @@ void libxl__multidev_prepared(libxl__egc *egc, libxl__ao_devices *aodevs,
 #define DEFINE_DEVICES_ADD(type)                                        \
     void libxl__add_##type##s(libxl__egc *egc, libxl__ao *ao, uint32_t domid, \
                               libxl_domain_config *d_config,            \
-                              libxl__ao_devices *aodevs)                \
+                              libxl__multidev *multidev)                \
     {                                                                   \
         AO_GC;                                                          \
         int i;                                                          \
         for (i = 0; i < d_config->num_##type##s; i++) {                 \
-            libxl__ao_device *aodev = libxl__multidev_prepare(aodevs);  \
+            libxl__ao_device *aodev = libxl__multidev_prepare(multidev);  \
             libxl__device_##type##_add(egc, domid, &d_config->type##s[i], \
                                        aodev);                          \
         }                                                               \
@@ -532,8 +532,8 @@ out:
 
 /* Callback for device destruction */
 
-static void devices_remove_callback(libxl__egc *egc, libxl__ao_devices *aodevs,
-                                    int rc);
+static void devices_remove_callback(libxl__egc *egc,
+                                    libxl__multidev *multidev, int rc);
 
 void libxl__devices_destroy(libxl__egc *egc, libxl__devices_remove_state *drs)
 {
@@ -545,12 +545,12 @@ void libxl__devices_destroy(libxl__egc *egc, libxl__devices_remove_state *drs)
     char **kinds = NULL, **devs = NULL;
     int i, j, rc = 0;
     libxl__device *dev;
-    libxl__ao_devices *aodevs = &drs->aodevs;
+    libxl__multidev *multidev = &drs->multidev;
     libxl__ao_device *aodev;
     libxl__device_kind kind;
 
-    libxl__multidev_begin(ao, aodevs);
-    aodevs->callback = devices_remove_callback;
+    libxl__multidev_begin(ao, multidev);
+    multidev->callback = devices_remove_callback;
 
     path = libxl__sprintf(gc, "/local/domain/%d/device", domid);
     kinds = libxl__xs_directory(gc, XBT_NULL, path, &num_kinds);
@@ -587,7 +587,7 @@ void libxl__devices_destroy(libxl__egc *egc, libxl__devices_remove_state *drs)
                     libxl__device_destroy(gc, dev);
                     continue;
                 }
-                aodev = libxl__multidev_prepare(aodevs);
+                aodev = libxl__multidev_prepare(multidev);
                 aodev->action = DEVICE_DISCONNECT;
                 aodev->dev = dev;
                 aodev->force = drs->force;
@@ -613,7 +613,7 @@ void libxl__devices_destroy(libxl__egc *egc, libxl__devices_remove_state *drs)
     }
 
 out:
-    libxl__multidev_prepared(egc, aodevs, rc);
+    libxl__multidev_prepared(egc, multidev, rc);
 }
 
 /* Callbacks for device related operations */
@@ -1003,10 +1003,10 @@ static void device_hotplug_clean(libxl__gc *gc, libxl__ao_device *aodev)
     assert(!libxl__ev_child_inuse(&aodev->child));
 }
 
-static void devices_remove_callback(libxl__egc *egc, libxl__ao_devices *aodevs,
-                                    int rc)
+static void devices_remove_callback(libxl__egc *egc,
+                                    libxl__multidev *multidev, int rc)
 {
-    libxl__devices_remove_state *drs = CONTAINER_OF(aodevs, *drs, aodevs);
+    libxl__devices_remove_state *drs = CONTAINER_OF(multidev, *drs, multidev);
     STATE_AO_GC(drs->ao);
 
     drs->callback(egc, drs, rc);
diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
index 66aa45e..0c0084f 100644
--- a/tools/libxl/libxl_dm.c
+++ b/tools/libxl/libxl_dm.c
@@ -714,10 +714,10 @@ static void spawn_stubdom_pvqemu_cb(libxl__egc *egc,
                                 int rc);
 
 static void spawn_stub_launch_dm(libxl__egc *egc,
-                                 libxl__ao_devices *aodevs, int ret);
+                                 libxl__multidev *aodevs, int ret);
 
 static void stubdom_pvqemu_cb(libxl__egc *egc,
-                              libxl__ao_devices *aodevs,
+                              libxl__multidev *aodevs,
                               int rc);
 
 static void spaw_stubdom_pvqemu_destroy_cb(libxl__egc *egc,
@@ -856,10 +856,10 @@ retry_transaction:
         if (errno == EAGAIN)
             goto retry_transaction;
 
-    libxl__multidev_begin(ao, &sdss->aodevs);
-    sdss->aodevs.callback = spawn_stub_launch_dm;
-    libxl__add_disks(egc, ao, dm_domid, dm_config, &sdss->aodevs);
-    libxl__multidev_prepared(egc, &sdss->aodevs, 0);
+    libxl__multidev_begin(ao, &sdss->multidev);
+    sdss->multidev.callback = spawn_stub_launch_dm;
+    libxl__add_disks(egc, ao, dm_domid, dm_config, &sdss->multidev);
+    libxl__multidev_prepared(egc, &sdss->multidev, 0);
 
     free(args);
     return;
@@ -872,9 +872,9 @@ out:
 }
 
 static void spawn_stub_launch_dm(libxl__egc *egc,
-                                 libxl__ao_devices *aodevs, int ret)
+                                 libxl__multidev *multidev, int ret)
 {
-    libxl__stub_dm_spawn_state *sdss = CONTAINER_OF(aodevs, *sdss, aodevs);
+    libxl__stub_dm_spawn_state *sdss = CONTAINER_OF(multidev, *sdss, multidev);
     STATE_AO_GC(sdss->dm.spawn.ao);
     libxl_ctx *ctx = libxl__gc_owner(gc);
     int i, num_console = STUBDOM_SPECIAL_CONSOLES;
@@ -982,22 +982,22 @@ static void spawn_stubdom_pvqemu_cb(libxl__egc *egc,
     if (rc) goto out;
 
     if (d_config->num_nics > 0) {
-        libxl__multidev_begin(ao, &sdss->aodevs);
-        sdss->aodevs.callback = stubdom_pvqemu_cb;
-        libxl__add_nics(egc, ao, dm_domid, d_config, &sdss->aodevs);
-        libxl__multidev_prepared(egc, &sdss->aodevs, 0);
+        libxl__multidev_begin(ao, &sdss->multidev);
+        sdss->multidev.callback = stubdom_pvqemu_cb;
+        libxl__add_nics(egc, ao, dm_domid, d_config, &sdss->multidev);
+        libxl__multidev_prepared(egc, &sdss->multidev, 0);
         return;
     }
 
 out:
-    stubdom_pvqemu_cb(egc, &sdss->aodevs, rc);
+    stubdom_pvqemu_cb(egc, &sdss->multidev, rc);
 }
 
 static void stubdom_pvqemu_cb(libxl__egc *egc,
-                              libxl__ao_devices *aodevs,
+                              libxl__multidev *multidev,
                               int rc)
 {
-    libxl__stub_dm_spawn_state *sdss = CONTAINER_OF(aodevs, *sdss, aodevs);
+    libxl__stub_dm_spawn_state *sdss = CONTAINER_OF(multidev, *sdss, multidev);
     STATE_AO_GC(sdss->dm.spawn.ao);
     uint32_t dm_domid = sdss->pvqemu.guest_domid;
 
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index bb3eb5f..6528694 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -1797,7 +1797,7 @@ typedef enum {
 } libxl__device_action;
 
 typedef struct libxl__ao_device libxl__ao_device;
-typedef struct libxl__ao_devices libxl__ao_devices;
+typedef struct libxl__multidev libxl__multidev;
 typedef void libxl__device_callback(libxl__egc*, libxl__ao_device*);
 
 /* This functions sets the necessary libxl__ao_device struct values to use
@@ -1827,7 +1827,7 @@ struct libxl__ao_device {
     int rc;
     /* private for multidev */
     int active;
-    libxl__ao_devices *aodevs; /* reference to the containing multidev */
+    libxl__multidev *multidev; /* reference to the containing multidev */
     /* private for add/remove implementation */
     libxl__ev_devstate backend_ds;
     /* Bodge for Qemu devices, also used for timeout of hotplug execution */
@@ -1853,12 +1853,12 @@ struct libxl__ao_device {
  */
 
 /* Starts preparing to add/remove a bunch of devices. */
-_hidden void libxl__multidev_begin(libxl__ao *ao, libxl__ao_devices*);
+_hidden void libxl__multidev_begin(libxl__ao *ao, libxl__multidev*);
 
 /* Prepares to add/remove one of many devices.  Returns a libxl__ao_device
  * which has had libxl__prepare_ao_device called, and which has also
  * had ->callback set.  The user should not mess with aodev->callback. */
-_hidden libxl__ao_device *libxl__multidev_prepare(libxl__ao_devices*);
+_hidden libxl__ao_device *libxl__multidev_prepare(libxl__multidev*);
 
 /* Notifies the multidev machinery that we have now finished preparing
  * and initiating devices.  multidev->callback may then be called as
@@ -1866,10 +1866,10 @@ _hidden libxl__ao_device *libxl__multidev_prepare(libxl__ao_devices*);
  * outstanding, perhaps reentrantly.  If rc!=0 (error should have been
  * logged) multidev->callback will get a non-zero rc.
  * callback may be set by the user at any point before prepared. */
-_hidden void libxl__multidev_prepared(libxl__egc*, libxl__ao_devices*, int rc);
+_hidden void libxl__multidev_prepared(libxl__egc*, libxl__multidev*, int rc);
 
-typedef void libxl__devices_callback(libxl__egc*, libxl__ao_devices*, int rc);
-struct libxl__ao_devices {
+typedef void libxl__devices_callback(libxl__egc*, libxl__multidev*, int rc);
+struct libxl__multidev {
     /* set by user: */
     libxl__devices_callback *callback;
     /* for private use by libxl__...ao_devices... machinery: */
@@ -2342,7 +2342,7 @@ struct libxl__devices_remove_state {
     libxl__devices_remove_callback *callback;
     int force; /* libxl_device_TYPE_destroy rather than _remove */
     /* private */
-    libxl__ao_devices aodevs;
+    libxl__multidev multidev;
     int num_devices;
 };
 
@@ -2386,7 +2386,7 @@ _hidden void libxl__devices_destroy(libxl__egc *egc,
                                     libxl__devices_remove_state *drs);
 
 /* Helper function to add a bunch of disks. This should be used when
- * the caller is inside an async op. "devices" will NOT be prepared by
+ * the caller is inside an async op. "multidev" will NOT be prepared by
  * this function, so the caller must make sure to call
  * libxl__multidev_begin before calling this function.
  *
@@ -2395,11 +2395,11 @@ _hidden void libxl__devices_destroy(libxl__egc *egc,
  */
 _hidden void libxl__add_disks(libxl__egc *egc, libxl__ao *ao, uint32_t domid,
                               libxl_domain_config *d_config,
-                              libxl__ao_devices *aodevs);
+                              libxl__multidev *multidev);
 
 _hidden void libxl__add_nics(libxl__egc *egc, libxl__ao *ao, uint32_t domid,
                              libxl_domain_config *d_config,
-                             libxl__ao_devices *aodevs);
+                             libxl__multidev *multidev);
 
 /*----- device model creation -----*/
 
@@ -2435,7 +2435,7 @@ typedef struct {
     libxl__domain_build_state dm_state;
     libxl__dm_spawn_state pvqemu;
     libxl__destroy_domid_state dis;
-    libxl__ao_devices aodevs;
+    libxl__multidev multidev;
 } libxl__stub_dm_spawn_state;
 
 _hidden void libxl__spawn_stub_dm(libxl__egc *egc, libxl__stub_dm_spawn_state*);
@@ -2467,7 +2467,7 @@ struct libxl__domain_create_state {
     libxl__save_helper_state shs;
     /* necessary if the domain creation failed and we have to destroy it */
     libxl__domain_destroy_state dds;
-    libxl__ao_devices aodevs;
+    libxl__multidev multidev;
 };
 
 /*----- Domain suspend (save) functions -----*/
-- 
1.7.2.5



From xen-devel-bounces@lists.xen.org Thu Aug 02 17:28:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 17:28:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwzBJ-0007rA-E1; Thu, 02 Aug 2012 17:27:29 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SwzBH-0007q0-4t
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 17:27:27 +0000
Received: from [85.158.143.99:51900] by server-1.bemta-4.messagelabs.com id
	BE/5E-24392-E78BA105; Thu, 02 Aug 2012 17:27:26 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1343928445!29407608!4
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21094 invoked from network); 2 Aug 2012 17:27:25 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-3.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 17:27:25 -0000
X-IronPort-AV: E=Sophos;i="4.77,702,1336348800"; d="scan'208";a="13828364"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 17:27:25 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 2 Aug 2012 18:27:25 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1SwzBE-0001vS-V6; Thu, 02 Aug 2012 17:27:25 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1SwzBE-0006G8-T5;
	Thu, 02 Aug 2012 18:27:24 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 2 Aug 2012 18:27:12 +0100
Message-ID: <1343928442-23966-4-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1343928442-23966-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1343928442-23966-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 03/13] libxl: fix device counting race in
	libxl__devices_destroy
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Don't have a fixed number of devices in the aodevs array, and instead
size it depending on the devices present in xenstore.  Somewhat
formalise the multiple device addition/removal machinery to make this
clearer and easier to do.

As a side-effect we fix a few "lost thread of control" bugs which would
occur if there were no devices of a particular kind.  (Various if
statements which checked for there being no devices have become
redundant, but are retained to avoid making the patch bigger.)

Specifically:

 * Users of libxl__ao_devices are no longer expected to know in
   advance how many device operations they are going to do.  Instead
   they can initiate them one at a time, between bracketing calls to
   "begin" and "prepared".

 * The array of aodevs used for this is dynamically sized; to support
   this it's an array of pointers rather than of structs.

 * Users of libxl__ao_devices are presented with a more opaque interface.
   They are no longer expected to, themselves,
      - look into the array of aodevs (this is now private)
      - know that the individual addition/removal completions are
        handled by libxl__ao_devices_callback (this callback function
        is now a private function for the multidev machinery)
      - ever deal with populating the contents of an aodevs

 * The doc comments relating to some of the members of
   libxl__ao_device are clarified.  (And the member `aodevs' is moved
   to put it with the other members with the same status.)

 * The multidev machinery allocates an aodev to represent the
   operation of preparing all of the other operations.  See
   the comment in libxl__multidev_begin.
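
The begin/prepare/prepared bracketing and the "preparation" sentinel
described above can be modelled in a few dozen lines of plain C.  This
is a sketch of the idea only: the struct and function names (md_begin
and so on) are invented for illustration and are not libxl's, and the
fixed-size ops array stands in for the GCREALLOC'd dynamic array in
the real patch.

```c
#include <assert.h>

struct op { int active; int rc; };

struct multi {
    struct op *ops[16];      /* stands in for the dynamic aodev array */
    int used;
    int done;                /* models: the final callback has fired */
    int final_rc;
    struct op preparation;   /* sentinel op, completed by md_prepared */
};

/* Fire the final callback iff nothing is still active (mirrors the
 * loop in multidev_one_callback below). */
static void md_check(struct multi *m) {
    int i, error = 0;
    for (i = 0; i < m->used; i++) {
        if (m->ops[i]->active) return;
        if (m->ops[i]->rc) error = m->ops[i]->rc;
    }
    m->done = 1;
    m->final_rc = error;
}

static void md_begin(struct multi *m) {
    m->used = 0; m->done = 0; m->final_rc = 0;
    /* The sentinel stays active until md_prepared(), so the final
     * callback cannot fire while we are still initiating operations,
     * and it fires exactly once even for zero operations. */
    m->preparation.active = 1; m->preparation.rc = 0;
    m->ops[m->used++] = &m->preparation;
}

static void md_prepare(struct multi *m, struct op *o) {
    o->active = 1; o->rc = 0;
    assert(m->used < 16);    /* the real code grows the array instead */
    m->ops[m->used++] = o;
}

static void md_op_done(struct multi *m, struct op *o, int rc) {
    o->active = 0; o->rc = rc;
    md_check(m);
}

static void md_prepared(struct multi *m, int rc) {
    md_op_done(m, &m->preparation, rc);
}
```

With zero prepared operations, md_prepared() alone completes the
multidev; with several, a non-zero per-operation rc is propagated to
the final callback, matching the error handling in the patch.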

A wrinkle is that the functions are called "multidev" but the structs
are called "libxl__ao_devices" and "aodevs".  I have given these
functions this name to distinguish them from "libxl__ao_device" and
"aodev" and so forth by more than just the use of the plural "s"
suffix.

In the next patch we will rename the structs.

Signed-off-by: Roger Pau Monne <roger.pau@citrix.com>
Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
Acked-by: Ian Campbell <ian.campbell@eu.citrix.com>

-
Changes in v4:
 * Actually honour errors in rc argument to libxl__multidev_prepared.
 * Fix the doc comment for libxl__add_*.
 * In comments, consistently use "multidev" not "multidevs".

Changes in v3:
 * New multidev interfaces - extensive changes.
---
 tools/libxl/libxl_create.c   |    8 +-
 tools/libxl/libxl_device.c   |  129 +++++++++++++++++++-----------------------
 tools/libxl/libxl_dm.c       |    8 +-
 tools/libxl/libxl_internal.h |   75 +++++++++++++++----------
 4 files changed, 111 insertions(+), 109 deletions(-)

diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index aafacd8..3265d69 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -909,10 +909,10 @@ static void domcreate_rebuild_done(libxl__egc *egc,
 
     store_libxl_entry(gc, domid, &d_config->b_info);
 
-    dcs->aodevs.size = d_config->num_disks;
+    libxl__multidev_begin(ao, &dcs->aodevs);
     dcs->aodevs.callback = domcreate_launch_dm;
-    libxl__prepare_ao_devices(ao, &dcs->aodevs);
     libxl__add_disks(egc, ao, domid, 0, d_config, &dcs->aodevs);
+    libxl__multidev_prepared(egc, &dcs->aodevs, 0);
 
     return;
 
@@ -1039,10 +1039,10 @@ static void domcreate_devmodel_started(libxl__egc *egc,
     /* Plug nic interfaces */
     if (d_config->num_nics > 0) {
         /* Attach nics */
-        dcs->aodevs.size = d_config->num_nics;
+        libxl__multidev_begin(ao, &dcs->aodevs);
         dcs->aodevs.callback = domcreate_attach_pci;
-        libxl__prepare_ao_devices(ao, &dcs->aodevs);
         libxl__add_nics(egc, ao, domid, 0, d_config, &dcs->aodevs);
+        libxl__multidev_prepared(egc, &dcs->aodevs, 0);
         return;
     }
 
diff --git a/tools/libxl/libxl_device.c b/tools/libxl/libxl_device.c
index 95b169e..79dd502 100644
--- a/tools/libxl/libxl_device.c
+++ b/tools/libxl/libxl_device.c
@@ -58,50 +58,6 @@ int libxl__parse_backend_path(libxl__gc *gc,
     return libxl__device_kind_from_string(strkind, &dev->backend_kind);
 }
 
-static int libxl__num_devices(libxl__gc *gc, uint32_t domid)
-{
-    char *path;
-    unsigned int num_kinds, num_devs;
-    char **kinds = NULL, **devs = NULL;
-    int i, j, rc = 0;
-    libxl__device dev;
-    libxl__device_kind kind;
-    int numdevs = 0;
-
-    path = GCSPRINTF("/local/domain/%d/device", domid);
-    kinds = libxl__xs_directory(gc, XBT_NULL, path, &num_kinds);
-    if (!kinds) {
-        if (errno != ENOENT) {
-            LOGE(ERROR, "unable to get xenstore device listing %s", path);
-            rc = ERROR_FAIL;
-            goto out;
-        }
-        num_kinds = 0;
-    }
-    for (i = 0; i < num_kinds; i++) {
-        if (libxl__device_kind_from_string(kinds[i], &kind))
-            continue;
-        if (kind == LIBXL__DEVICE_KIND_CONSOLE)
-            continue;
-
-        path = GCSPRINTF("/local/domain/%d/device/%s", domid, kinds[i]);
-        devs = libxl__xs_directory(gc, XBT_NULL, path, &num_devs);
-        if (!devs)
-            continue;
-        for (j = 0; j < num_devs; j++) {
-            path = GCSPRINTF("/local/domain/%d/device/%s/%s/backend",
-                             domid, kinds[i], devs[j]);
-            path = libxl__xs_read(gc, XBT_NULL, path);
-            if (path && libxl__parse_backend_path(gc, path, &dev) == 0) {
-                numdevs++;
-            }
-        }
-    }
-out:
-    if (rc) return rc;
-    return numdevs;
-}
-
 int libxl__nic_type(libxl__gc *gc, libxl__device *dev, libxl_nic_type *nictype)
 {
     char *snictype, *be_path;
@@ -445,40 +401,81 @@ void libxl__prepare_ao_device(libxl__ao *ao, libxl__ao_device *aodev)
     libxl__ev_child_init(&aodev->child);
 }
 
-void libxl__prepare_ao_devices(libxl__ao *ao, libxl__ao_devices *aodevs)
+/* multidev */
+
+void libxl__multidev_begin(libxl__ao *ao, libxl__ao_devices *aodevs)
 {
     AO_GC;
 
-    GCNEW_ARRAY(aodevs->array, aodevs->size);
-    for (int i = 0; i < aodevs->size; i++) {
-        aodevs->array[i].aodevs = aodevs;
-        libxl__prepare_ao_device(ao, &aodevs->array[i]);
+    aodevs->ao = ao;
+    aodevs->array = 0;
+    aodevs->used = aodevs->allocd = 0;
+
+    /* We allocate an aodev to represent the operation of preparing
+     * all of the other operations.  This operation is completed when
+     * we have started all the others (ie, when the user calls
+     * _prepared).  That arranges automatically that
+     *  (i) we do not think we have finished even if one of the
+     *      operations completes while we are still preparing
+     *  (ii) if we are starting zero operations, we do still
+     *      make the callback as soon as we know this fact
+     *  (iii) we have a nice consistent way to deal with any
+     *      error that might occur while deciding what to initiate
+     */
+    aodevs->preparation = libxl__multidev_prepare(aodevs);
+}
+
+static void multidev_one_callback(libxl__egc *egc, libxl__ao_device *aodev);
+
+libxl__ao_device *libxl__multidev_prepare(libxl__ao_devices *aodevs) {
+    STATE_AO_GC(aodevs->ao);
+    libxl__ao_device *aodev;
+
+    GCNEW(aodev);
+    aodev->aodevs = aodevs;
+    aodev->callback = multidev_one_callback;
+    libxl__prepare_ao_device(ao, aodev);
+
+    if (aodevs->used >= aodevs->allocd) {
+        aodevs->allocd = aodevs->used * 2 + 5;
+        GCREALLOC_ARRAY(aodevs->array, aodevs->allocd);
     }
+    aodevs->array[aodevs->used++] = aodev;
+
+    return aodev;
 }
 
-void libxl__ao_devices_callback(libxl__egc *egc, libxl__ao_device *aodev)
+static void multidev_one_callback(libxl__egc *egc, libxl__ao_device *aodev)
 {
     STATE_AO_GC(aodev->ao);
     libxl__ao_devices *aodevs = aodev->aodevs;
     int i, error = 0;
 
     aodev->active = 0;
-    for (i = 0; i < aodevs->size; i++) {
-        if (aodevs->array[i].active)
+
+    for (i = 0; i < aodevs->used; i++) {
+        if (aodevs->array[i]->active)
             return;
 
-        if (aodevs->array[i].rc)
-            error = aodevs->array[i].rc;
+        if (aodevs->array[i]->rc)
+            error = aodevs->array[i]->rc;
     }
 
     aodevs->callback(egc, aodevs, error);
     return;
 }
 
+void libxl__multidev_prepared(libxl__egc *egc, libxl__ao_devices *aodevs,
+                              int rc)
+{
+    aodevs->preparation->rc = rc;
+    multidev_one_callback(egc, aodevs->preparation);
+}
+
 /******************************************************************************/
 
 /* Macro for defining the functions that will add a bunch of disks when
- * inside an async op.
+ * inside an async op with multidev.
  * This macro is added to prevent repetition of code.
  *
  * The following functions are defined:
@@ -495,9 +492,9 @@ void libxl__ao_devices_callback(libxl__egc *egc, libxl__ao_device *aodev)
         int i;                                                                 \
         int end = start + d_config->num_##type##s;                             \
         for (i = start; i < end; i++) {                                        \
-            aodevs->array[i].callback = libxl__ao_devices_callback;            \
+            libxl__ao_device *aodev = libxl__multidev_prepare(aodevs);         \
             libxl__device_##type##_add(egc, domid, &d_config->type##s[i-start],\
-                                       &aodevs->array[i]);                     \
+                                       aodev);                                 \
         }                                                                      \
     }
 
@@ -547,20 +544,13 @@ void libxl__devices_destroy(libxl__egc *egc, libxl__devices_remove_state *drs)
     char *path;
     unsigned int num_kinds, num_dev_xsentries;
     char **kinds = NULL, **devs = NULL;
-    int i, j, numdev = 0, rc = 0;
+    int i, j, rc = 0;
     libxl__device *dev;
     libxl__ao_devices *aodevs = &drs->aodevs;
     libxl__ao_device *aodev;
     libxl__device_kind kind;
 
-    aodevs->size = libxl__num_devices(gc, drs->domid);
-    if (aodevs->size < 0) {
-        LOG(ERROR, "unable to get number of devices for domain %u", drs->domid);
-        rc = aodevs->size;
-        goto out;
-    }
-
-    libxl__prepare_ao_devices(drs->ao, aodevs);
+    libxl__multidev_begin(ao, aodevs);
     aodevs->callback = devices_remove_callback;
 
     path = libxl__sprintf(gc, "/local/domain/%d/device", domid);
@@ -598,13 +588,11 @@ void libxl__devices_destroy(libxl__egc *egc, libxl__devices_remove_state *drs)
                     libxl__device_destroy(gc, dev);
                     continue;
                 }
-                aodev = &aodevs->array[numdev];
+                aodev = libxl__multidev_prepare(aodevs);
                 aodev->action = DEVICE_DISCONNECT;
                 aodev->dev = dev;
-                aodev->callback = libxl__ao_devices_callback;
                 aodev->force = drs->force;
                 libxl__initiate_device_remove(egc, aodev);
-                numdev++;
             }
         }
     }
@@ -626,8 +614,7 @@ void libxl__devices_destroy(libxl__egc *egc, libxl__devices_remove_state *drs)
     }
 
 out:
-    if (!numdev) drs->callback(egc, drs, rc);
-    return;
+    libxl__multidev_prepared(egc, aodevs, rc);
 }
 
 /* Callbacks for device related operations */
diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
index f2e9572..177642b 100644
--- a/tools/libxl/libxl_dm.c
+++ b/tools/libxl/libxl_dm.c
@@ -856,10 +856,10 @@ retry_transaction:
         if (errno == EAGAIN)
             goto retry_transaction;
 
-    sdss->aodevs.size = dm_config->num_disks;
+    libxl__multidev_begin(ao, &sdss->aodevs);
     sdss->aodevs.callback = spawn_stub_launch_dm;
-    libxl__prepare_ao_devices(ao, &sdss->aodevs);
     libxl__add_disks(egc, ao, dm_domid, 0, dm_config, &sdss->aodevs);
+    libxl__multidev_prepared(egc, &sdss->aodevs, 0);
 
     free(args);
     return;
@@ -982,10 +982,10 @@ static void spawn_stubdom_pvqemu_cb(libxl__egc *egc,
     if (rc) goto out;
 
     if (d_config->num_nics > 0) {
-        sdss->aodevs.size = d_config->num_nics;
+        libxl__multidev_begin(ao, &sdss->aodevs);
         sdss->aodevs.callback = stubdom_pvqemu_cb;
-        libxl__prepare_ao_devices(ao, &sdss->aodevs);
         libxl__add_nics(egc, ao, dm_domid, 0, d_config, &sdss->aodevs);
+        libxl__multidev_prepared(egc, &sdss->aodevs, 0);
         return;
     }
 
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index 2d6c71a..07e92fb 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -1816,20 +1816,6 @@ typedef void libxl__device_callback(libxl__egc*, libxl__ao_device*);
  */
 _hidden void libxl__prepare_ao_device(libxl__ao *ao, libxl__ao_device *aodev);
 
-/* Prepare a bunch of devices for addition/removal. Every ao_device in
- * ao_devices is set to 'active', and the ao_device 'base' field is set to
- * the one pointed by aodevs.
- */
-_hidden void libxl__prepare_ao_devices(libxl__ao *ao,
-                                       libxl__ao_devices *aodevs);
-
-/* Generic callback to use when adding/removing several devices, this will
- * check if the given aodev is the last one, and call the callback in the
- * parent libxl__ao_devices struct, passing the appropriate error if found.
- */
-_hidden void libxl__ao_devices_callback(libxl__egc *egc,
-                                        libxl__ao_device *aodev);
-
 struct libxl__ao_device {
     /* filled in by user */
     libxl__ao *ao;
@@ -1837,32 +1823,60 @@ struct libxl__ao_device {
     libxl__device *dev;
     int force;
     libxl__device_callback *callback;
-    /* private for implementation */
-    int active;
+    /* return value, zeroed by user on entry, is valid on callback */
     int rc;
+    /* private for multidev */
+    int active;
+    libxl__ao_devices *aodevs; /* reference to the containing multidev */
+    /* private for add/remove implementation */
     libxl__ev_devstate backend_ds;
     /* Bodge for Qemu devices, also used for timeout of hotplug execution */
     libxl__ev_time timeout;
-    /* Used internally to have a reference to the upper libxl__ao_devices
-     * struct when present */
-    libxl__ao_devices *aodevs;
     /* device hotplug execution */
     const char *what;
     int num_exec;
     libxl__ev_child child;
 };
 
-/* Helper struct to simply the plug/unplug of multiple devices at the same
- * time.
- *
- * This structure holds several devices, and the callback is only called
- * when all the devices inside of the array have finished.
- */
+/*
+ * Multiple devices "multidev" handling.
+ *
+ * Firstly, you should
+ *    libxl__multidev_begin
+ *    multidev->callback = ...
+ * Then zero or more times
+ *    libxl__multidev_prepare
+ *    libxl__initiate_device_{remove/addition}.
+ * Finally, once
+ *    libxl__multidev_prepared
+ * which will result (perhaps reentrantly) in one call to callback().
+ */
+
+/* Starts preparing to add/remove a bunch of devices. */
+_hidden void libxl__multidev_begin(libxl__ao *ao, libxl__ao_devices*);
+
+/* Prepares to add/remove one of many devices.  Returns a libxl__ao_device
+ * which has had libxl__prepare_ao_device called, and which has also
+ * had ->callback set.  The user should not mess with aodev->callback. */
+_hidden libxl__ao_device *libxl__multidev_prepare(libxl__ao_devices*);
+
+/* Notifies the multidev machinery that we have now finished preparing
+ * and initiating devices.  multidev->callback may then be called as
+ * soon as there are no prepared but not completed operations
+ * outstanding, perhaps reentrantly.  If rc!=0 (error should have been
+ * logged) multidev->callback will get a non-zero rc.
+ * callback may be set by the user at any point before prepared. */
+_hidden void libxl__multidev_prepared(libxl__egc*, libxl__ao_devices*, int rc);
+
 typedef void libxl__devices_callback(libxl__egc*, libxl__ao_devices*, int rc);
 struct libxl__ao_devices {
-    libxl__ao_device *array;
-    int size;
+    /* set by user: */
     libxl__devices_callback *callback;
+    /* for private use by libxl__...ao_devices... machinery: */
+    libxl__ao *ao;
+    libxl__ao_device **array;
+    int used, allocd;
+    libxl__ao_device *preparation;
 };
 
 /*
@@ -2372,10 +2386,11 @@ _hidden void libxl__devices_destroy(libxl__egc *egc,
                                     libxl__devices_remove_state *drs);
 
 /* Helper function to add a bunch of disks. This should be used when
- * the caller is inside an async op. "devices" will NOT be prepared by this
- * function, so the caller must make sure to call _prepare before calling this
- * function. The start parameter contains the position inside the aodevs array
- * that should be used to store the state of this devices.
+ * the caller is inside an async op. "devices" will NOT be prepared by
+ * this function, so the caller must make sure to call
+ * libxl__multidev_begin before calling this function. The start
+ * parameter contains the position inside the aodevs array that should
+ * be used to store the state of these devices.
  *
  * The "callback" will be called for each device, and the user is responsible
  * for calling libxl__ao_device_check_last on the callback.
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 17:28:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 17:28:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwzBJ-0007rT-UW; Thu, 02 Aug 2012 17:27:29 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SwzBH-0007q1-Gq
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 17:27:27 +0000
Received: from [85.158.143.99:51927] by server-3.bemta-4.messagelabs.com id
	C6/E9-01511-F78BA105; Thu, 02 Aug 2012 17:27:27 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1343928445!29407608!10
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21149 invoked from network); 2 Aug 2012 17:27:26 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-3.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 17:27:26 -0000
X-IronPort-AV: E=Sophos;i="4.77,702,1336348800"; d="scan'208";a="13828372"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 17:27:25 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 2 Aug 2012 18:27:25 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1SwzBF-0001vh-6Y; Thu, 02 Aug 2012 17:27:25 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1SwzBF-0006Ge-53;
	Thu, 02 Aug 2012 18:27:25 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 2 Aug 2012 18:27:20 +0100
Message-ID: <1343928442-23966-12-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1343928442-23966-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1343928442-23966-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [Xen-devel] [PATCH 11/13] libxl: correct some comments regarding
	event API and fds
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

* libxl may indeed register more than one callback for the same fd,
  with some restrictions.  The allowable range of responses to this by
  the application means that this should pose no problems for users.
  But the documentation comment should be fixed.

* Document the relaxed synchronicity semantics of the fd_modify
  registration callback.

* A couple of comments referred to old names for functions.
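
The rule in the first bullet can be sketched concretely.  The names
below (struct reg, occurred) are invented for illustration and are not
libxl's API; only the bitmask discipline (each registration on a given
fd owns at least one unique bit in its requested events, and an
occurrence may be delivered to any subset of matching registrations)
comes from the documentation change.

```c
#include <assert.h>
#include <poll.h>

struct reg { int fd; short events; int hits; };

/* Route one occurrence to the registrations whose requested-events
 * mask overlaps revents.  The documentation permits the application
 * to call back for one, some, or all matches; this sketch calls all
 * of them. */
static void occurred(struct reg *regs, int n, int fd, short revents) {
    for (int i = 0; i < n; i++)
        if (regs[i].fd == fd && (regs[i].events & revents))
            regs[i].hits++;
}
```

Because each registration's mask contains a unique bit, the event
system can always tell the registrations apart even though they share
an fd.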

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 tools/libxl/libxl_event.h |   17 ++++++++++++++---
 1 files changed, 14 insertions(+), 3 deletions(-)

diff --git a/tools/libxl/libxl_event.h b/tools/libxl/libxl_event.h
index 3344bc8..cead71b 100644
--- a/tools/libxl/libxl_event.h
+++ b/tools/libxl/libxl_event.h
@@ -320,13 +320,24 @@ typedef struct libxl_osevent_hooks {
  * *for_registration_update is honoured by libxl and will be passed
  * to future modify or deregister calls.
  *
- * libxl will only attempt to register one callback for any one fd.
+ * libxl may want to register more than one callback for any one fd;
+ * in that case: (i) each such registration will have at least one bit
+ * set in revents which is unique to that registration; (ii) if an
+ * event occurs which is relevant for multiple registrations the
+ * application's event system may call libxl_osevent_occurred_fd
+ * for one, some, or all of those registrations.
+ *
+ * If fd_modify is used, it is permitted for the application's event
+ * system to still make calls to libxl_osevent_occurred_fd for the
+ * "old" set of requested events; these will be safely ignored by
+ * libxl.
+ *
  * libxl will remember the value stored in *for_app_registration_out
  * (or *for_app_registration_update) by a successful call to
  * register (or modify), and pass it to subsequent calls to modify
  * or deregister.
  *
- * register_fd_hooks may be called only once for each libxl_ctx.
+ * osevent_register_hooks may be called only once for each libxl_ctx.
  * libxl may make calls to register/modify/deregister from within
  * any libxl function (indeed, it will usually call register from
  * register_event_hooks).  Conversely, the application MUST NOT make
@@ -357,7 +368,7 @@ void libxl_osevent_register_hooks(libxl_ctx *ctx,
 /* It is NOT legal to call _occurred_ reentrantly within any libxl
  * function.  Specifically it is NOT legal to call it from within
  * a register callback.  Conversely, libxl MAY call register/deregister
- * from within libxl_event_registered_call_*.
+ * from within libxl_event_occurred_call_*.
  */
 
 void libxl_osevent_occurred_fd(libxl_ctx *ctx, void *for_libxl,
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 17:28:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 17:28:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwzBJ-0007rA-E1; Thu, 02 Aug 2012 17:27:29 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SwzBH-0007q0-4t
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 17:27:27 +0000
Received: from [85.158.143.99:51900] by server-1.bemta-4.messagelabs.com id
	BE/5E-24392-E78BA105; Thu, 02 Aug 2012 17:27:26 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1343928445!29407608!4
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21094 invoked from network); 2 Aug 2012 17:27:25 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-3.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 17:27:25 -0000
X-IronPort-AV: E=Sophos;i="4.77,702,1336348800"; d="scan'208";a="13828364"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 17:27:25 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 2 Aug 2012 18:27:25 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1SwzBE-0001vS-V6; Thu, 02 Aug 2012 17:27:25 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1SwzBE-0006G8-T5;
	Thu, 02 Aug 2012 18:27:24 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 2 Aug 2012 18:27:12 +0100
Message-ID: <1343928442-23966-4-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1343928442-23966-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1343928442-23966-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 03/13] libxl: fix device counting race in
	libxl__devices_destroy
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Don't have a fixed number of devices in the aodevs array, and instead
size it depending on the devices present in xenstore.  Somewhat
formalise the multiple device addition/removal machinery to make this
clearer and easier to do.

As a side-effect we fix a few "lost thread of control" bugs which would
occur if there were no devices of a particular kind.  (Various if
statements which checked for there being no devices have become
redundant, but are retained to avoid making the patch bigger.)

Specifically:

 * Users of libxl__ao_devices are no longer expected to know in
   advance how many device operations they are going to do.  Instead
   they can initiate them one at a time, between bracketing calls to
   "begin" and "prepared".

 * The array of aodevs used for this is dynamically sized; to support
   this it's an array of pointers rather than of structs.

 * Users of libxl__ao_devices are presented with a more opaque interface.
   They are no longer expected to, themselves,
      - look into the array of aodevs (this is now private)
      - know that the individual addition/removal completions are
        handled by libxl__ao_devices_callback (this callback function
        is now a private function for the multidev machinery)
      - ever deal with populating the contents of an aodevs

 * The doc comments relating to some of the members of
   libxl__ao_device are clarified.  (And the member `aodevs' is moved
   to put it with the other members with the same status.)

 * The multidev machinery allocates an aodev to represent the
   operation of preparing all of the other operations.  See
   the comment in libxl__multidev_begin.

A wrinkle is that the functions are called "multidev" but the structs
are called "libxl__ao_devices" and "aodevs".  I have given these
functions this name to distinguish them from "libxl__ao_device" and
"aodev" and so forth by more than just the use of the plural "s"
suffix.

In the next patch we will rename the structs.

Signed-off-by: Roger Pau Monne <roger.pau@citrix.com>
Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
Acked-by: Ian Campbell <ian.campbell@eu.citrix.com>

-
Changes in v4:
 * Actually honour errors in rc argument to libxl__multidev_prepared.
 * Fix the doc comment for libxl__add_*.
 * In comments, consistently use "multidev" not "multidevs".

Changes in v3:
 * New multidev interfaces - extensive changes.
---
 tools/libxl/libxl_create.c   |    8 +-
 tools/libxl/libxl_device.c   |  129 +++++++++++++++++++-----------------------
 tools/libxl/libxl_dm.c       |    8 +-
 tools/libxl/libxl_internal.h |   75 +++++++++++++++----------
 4 files changed, 111 insertions(+), 109 deletions(-)

diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index aafacd8..3265d69 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -909,10 +909,10 @@ static void domcreate_rebuild_done(libxl__egc *egc,
 
     store_libxl_entry(gc, domid, &d_config->b_info);
 
-    dcs->aodevs.size = d_config->num_disks;
+    libxl__multidev_begin(ao, &dcs->aodevs);
     dcs->aodevs.callback = domcreate_launch_dm;
-    libxl__prepare_ao_devices(ao, &dcs->aodevs);
     libxl__add_disks(egc, ao, domid, 0, d_config, &dcs->aodevs);
+    libxl__multidev_prepared(egc, &dcs->aodevs, 0);
 
     return;
 
@@ -1039,10 +1039,10 @@ static void domcreate_devmodel_started(libxl__egc *egc,
     /* Plug nic interfaces */
     if (d_config->num_nics > 0) {
         /* Attach nics */
-        dcs->aodevs.size = d_config->num_nics;
+        libxl__multidev_begin(ao, &dcs->aodevs);
         dcs->aodevs.callback = domcreate_attach_pci;
-        libxl__prepare_ao_devices(ao, &dcs->aodevs);
         libxl__add_nics(egc, ao, domid, 0, d_config, &dcs->aodevs);
+        libxl__multidev_prepared(egc, &dcs->aodevs, 0);
         return;
     }
 
diff --git a/tools/libxl/libxl_device.c b/tools/libxl/libxl_device.c
index 95b169e..79dd502 100644
--- a/tools/libxl/libxl_device.c
+++ b/tools/libxl/libxl_device.c
@@ -58,50 +58,6 @@ int libxl__parse_backend_path(libxl__gc *gc,
     return libxl__device_kind_from_string(strkind, &dev->backend_kind);
 }
 
-static int libxl__num_devices(libxl__gc *gc, uint32_t domid)
-{
-    char *path;
-    unsigned int num_kinds, num_devs;
-    char **kinds = NULL, **devs = NULL;
-    int i, j, rc = 0;
-    libxl__device dev;
-    libxl__device_kind kind;
-    int numdevs = 0;
-
-    path = GCSPRINTF("/local/domain/%d/device", domid);
-    kinds = libxl__xs_directory(gc, XBT_NULL, path, &num_kinds);
-    if (!kinds) {
-        if (errno != ENOENT) {
-            LOGE(ERROR, "unable to get xenstore device listing %s", path);
-            rc = ERROR_FAIL;
-            goto out;
-        }
-        num_kinds = 0;
-    }
-    for (i = 0; i < num_kinds; i++) {
-        if (libxl__device_kind_from_string(kinds[i], &kind))
-            continue;
-        if (kind == LIBXL__DEVICE_KIND_CONSOLE)
-            continue;
-
-        path = GCSPRINTF("/local/domain/%d/device/%s", domid, kinds[i]);
-        devs = libxl__xs_directory(gc, XBT_NULL, path, &num_devs);
-        if (!devs)
-            continue;
-        for (j = 0; j < num_devs; j++) {
-            path = GCSPRINTF("/local/domain/%d/device/%s/%s/backend",
-                             domid, kinds[i], devs[j]);
-            path = libxl__xs_read(gc, XBT_NULL, path);
-            if (path && libxl__parse_backend_path(gc, path, &dev) == 0) {
-                numdevs++;
-            }
-        }
-    }
-out:
-    if (rc) return rc;
-    return numdevs;
-}
-
 int libxl__nic_type(libxl__gc *gc, libxl__device *dev, libxl_nic_type *nictype)
 {
     char *snictype, *be_path;
@@ -445,40 +401,81 @@ void libxl__prepare_ao_device(libxl__ao *ao, libxl__ao_device *aodev)
     libxl__ev_child_init(&aodev->child);
 }
 
-void libxl__prepare_ao_devices(libxl__ao *ao, libxl__ao_devices *aodevs)
+/* multidev */
+
+void libxl__multidev_begin(libxl__ao *ao, libxl__ao_devices *aodevs)
 {
     AO_GC;
 
-    GCNEW_ARRAY(aodevs->array, aodevs->size);
-    for (int i = 0; i < aodevs->size; i++) {
-        aodevs->array[i].aodevs = aodevs;
-        libxl__prepare_ao_device(ao, &aodevs->array[i]);
+    aodevs->ao = ao;
+    aodevs->array = 0;
+    aodevs->used = aodevs->allocd = 0;
+
+    /* We allocate an aodev to represent the operation of preparing
+     * all of the other operations.  This operation is completed when
+     * we have started all the others (ie, when the user calls
+     * _prepared).  That arranges automatically that
+     *  (i) we do not think we have finished even if one of the
+     *      operations completes while we are still preparing
+     *  (ii) if we are starting zero operations, we do still
+     *      make the callback as soon as we know this fact
+     *  (iii) we have a nice consistent way to deal with any
+     *      error that might occur while deciding what to initiate
+     */
+    aodevs->preparation = libxl__multidev_prepare(aodevs);
+}
+
+static void multidev_one_callback(libxl__egc *egc, libxl__ao_device *aodev);
+
+libxl__ao_device *libxl__multidev_prepare(libxl__ao_devices *aodevs) {
+    STATE_AO_GC(aodevs->ao);
+    libxl__ao_device *aodev;
+
+    GCNEW(aodev);
+    aodev->aodevs = aodevs;
+    aodev->callback = multidev_one_callback;
+    libxl__prepare_ao_device(ao, aodev);
+
+    if (aodevs->used >= aodevs->allocd) {
+        aodevs->allocd = aodevs->used * 2 + 5;
+        GCREALLOC_ARRAY(aodevs->array, aodevs->allocd);
     }
+    aodevs->array[aodevs->used++] = aodev;
+
+    return aodev;
 }
 
-void libxl__ao_devices_callback(libxl__egc *egc, libxl__ao_device *aodev)
+static void multidev_one_callback(libxl__egc *egc, libxl__ao_device *aodev)
 {
     STATE_AO_GC(aodev->ao);
     libxl__ao_devices *aodevs = aodev->aodevs;
     int i, error = 0;
 
     aodev->active = 0;
-    for (i = 0; i < aodevs->size; i++) {
-        if (aodevs->array[i].active)
+
+    for (i = 0; i < aodevs->used; i++) {
+        if (aodevs->array[i]->active)
             return;
 
-        if (aodevs->array[i].rc)
-            error = aodevs->array[i].rc;
+        if (aodevs->array[i]->rc)
+            error = aodevs->array[i]->rc;
     }
 
     aodevs->callback(egc, aodevs, error);
     return;
 }
 
+void libxl__multidev_prepared(libxl__egc *egc, libxl__ao_devices *aodevs,
+                              int rc)
+{
+    aodevs->preparation->rc = rc;
+    multidev_one_callback(egc, aodevs->preparation);
+}
+
 /******************************************************************************/
 
 /* Macro for defining the functions that will add a bunch of disks when
- * inside an async op.
+ * inside an async op with multidev.
  * This macro is added to prevent repetition of code.
  *
  * The following functions are defined:
@@ -495,9 +492,9 @@ void libxl__ao_devices_callback(libxl__egc *egc, libxl__ao_device *aodev)
         int i;                                                                 \
         int end = start + d_config->num_##type##s;                             \
         for (i = start; i < end; i++) {                                        \
-            aodevs->array[i].callback = libxl__ao_devices_callback;            \
+            libxl__ao_device *aodev = libxl__multidev_prepare(aodevs);         \
             libxl__device_##type##_add(egc, domid, &d_config->type##s[i-start],\
-                                       &aodevs->array[i]);                     \
+                                       aodev);                                 \
         }                                                                      \
     }
 
@@ -547,20 +544,13 @@ void libxl__devices_destroy(libxl__egc *egc, libxl__devices_remove_state *drs)
     char *path;
     unsigned int num_kinds, num_dev_xsentries;
     char **kinds = NULL, **devs = NULL;
-    int i, j, numdev = 0, rc = 0;
+    int i, j, rc = 0;
     libxl__device *dev;
     libxl__ao_devices *aodevs = &drs->aodevs;
     libxl__ao_device *aodev;
     libxl__device_kind kind;
 
-    aodevs->size = libxl__num_devices(gc, drs->domid);
-    if (aodevs->size < 0) {
-        LOG(ERROR, "unable to get number of devices for domain %u", drs->domid);
-        rc = aodevs->size;
-        goto out;
-    }
-
-    libxl__prepare_ao_devices(drs->ao, aodevs);
+    libxl__multidev_begin(ao, aodevs);
     aodevs->callback = devices_remove_callback;
 
     path = libxl__sprintf(gc, "/local/domain/%d/device", domid);
@@ -598,13 +588,11 @@ void libxl__devices_destroy(libxl__egc *egc, libxl__devices_remove_state *drs)
                     libxl__device_destroy(gc, dev);
                     continue;
                 }
-                aodev = &aodevs->array[numdev];
+                aodev = libxl__multidev_prepare(aodevs);
                 aodev->action = DEVICE_DISCONNECT;
                 aodev->dev = dev;
-                aodev->callback = libxl__ao_devices_callback;
                 aodev->force = drs->force;
                 libxl__initiate_device_remove(egc, aodev);
-                numdev++;
             }
         }
     }
@@ -626,8 +614,7 @@ void libxl__devices_destroy(libxl__egc *egc, libxl__devices_remove_state *drs)
     }
 
 out:
-    if (!numdev) drs->callback(egc, drs, rc);
-    return;
+    libxl__multidev_prepared(egc, aodevs, rc);
 }
 
 /* Callbacks for device related operations */
diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
index f2e9572..177642b 100644
--- a/tools/libxl/libxl_dm.c
+++ b/tools/libxl/libxl_dm.c
@@ -856,10 +856,10 @@ retry_transaction:
         if (errno == EAGAIN)
             goto retry_transaction;
 
-    sdss->aodevs.size = dm_config->num_disks;
+    libxl__multidev_begin(ao, &sdss->aodevs);
     sdss->aodevs.callback = spawn_stub_launch_dm;
-    libxl__prepare_ao_devices(ao, &sdss->aodevs);
     libxl__add_disks(egc, ao, dm_domid, 0, dm_config, &sdss->aodevs);
+    libxl__multidev_prepared(egc, &sdss->aodevs, 0);
 
     free(args);
     return;
@@ -982,10 +982,10 @@ static void spawn_stubdom_pvqemu_cb(libxl__egc *egc,
     if (rc) goto out;
 
     if (d_config->num_nics > 0) {
-        sdss->aodevs.size = d_config->num_nics;
+        libxl__multidev_begin(ao, &sdss->aodevs);
         sdss->aodevs.callback = stubdom_pvqemu_cb;
-        libxl__prepare_ao_devices(ao, &sdss->aodevs);
         libxl__add_nics(egc, ao, dm_domid, 0, d_config, &sdss->aodevs);
+        libxl__multidev_prepared(egc, &sdss->aodevs, 0);
         return;
     }
 
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index 2d6c71a..07e92fb 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -1816,20 +1816,6 @@ typedef void libxl__device_callback(libxl__egc*, libxl__ao_device*);
  */
 _hidden void libxl__prepare_ao_device(libxl__ao *ao, libxl__ao_device *aodev);
 
-/* Prepare a bunch of devices for addition/removal. Every ao_device in
- * ao_devices is set to 'active', and the ao_device 'base' field is set to
- * the one pointed by aodevs.
- */
-_hidden void libxl__prepare_ao_devices(libxl__ao *ao,
-                                       libxl__ao_devices *aodevs);
-
-/* Generic callback to use when adding/removing several devices, this will
- * check if the given aodev is the last one, and call the callback in the
- * parent libxl__ao_devices struct, passing the appropriate error if found.
- */
-_hidden void libxl__ao_devices_callback(libxl__egc *egc,
-                                        libxl__ao_device *aodev);
-
 struct libxl__ao_device {
     /* filled in by user */
     libxl__ao *ao;
@@ -1837,32 +1823,60 @@ struct libxl__ao_device {
     libxl__device *dev;
     int force;
     libxl__device_callback *callback;
-    /* private for implementation */
-    int active;
+    /* return value, zeroed by user on entry, is valid on callback */
     int rc;
+    /* private for multidev */
+    int active;
+    libxl__ao_devices *aodevs; /* reference to the containing multidev */
+    /* private for add/remove implementation */
     libxl__ev_devstate backend_ds;
     /* Bodge for Qemu devices, also used for timeout of hotplug execution */
     libxl__ev_time timeout;
-    /* Used internally to have a reference to the upper libxl__ao_devices
-     * struct when present */
-    libxl__ao_devices *aodevs;
     /* device hotplug execution */
     const char *what;
     int num_exec;
     libxl__ev_child child;
 };
 
-/* Helper struct to simply the plug/unplug of multiple devices at the same
- * time.
- *
- * This structure holds several devices, and the callback is only called
- * when all the devices inside of the array have finished.
- */
+/*
+ * Multiple devices "multidev" handling.
+ *
+ * Firstly, you should
+ *    libxl__multidev_begin
+ *    multidev->callback = ...
+ * Then zero or more times
+ *    libxl__multidev_prepare
+ *    libxl__initiate_device_{remove/addition}.
+ * Finally, once
+ *    libxl__multidev_prepared
+ * which will result (perhaps reentrantly) in one call to callback().
+ */
+
+/* Starts preparing to add/remove a bunch of devices. */
+_hidden void libxl__multidev_begin(libxl__ao *ao, libxl__ao_devices*);
+
+/* Prepares to add/remove one of many devices.  Returns a libxl__ao_device
+ * which has had libxl__prepare_ao_device called, and which has also
+ * had ->callback set.  The user should not mess with aodev->callback. */
+_hidden libxl__ao_device *libxl__multidev_prepare(libxl__ao_devices*);
+
+/* Notifies the multidev machinery that we have now finished preparing
+ * and initiating devices.  multidev->callback may then be called as
+ * soon as there are no prepared but not completed operations
+ * outstanding, perhaps reentrantly.  If rc!=0 (error should have been
+ * logged) multidev->callback will get a non-zero rc.
+ * callback may be set by the user at any point before prepared. */
+_hidden void libxl__multidev_prepared(libxl__egc*, libxl__ao_devices*, int rc);
+
 typedef void libxl__devices_callback(libxl__egc*, libxl__ao_devices*, int rc);
 struct libxl__ao_devices {
-    libxl__ao_device *array;
-    int size;
+    /* set by user: */
     libxl__devices_callback *callback;
+    /* for private use by libxl__...ao_devices... machinery: */
+    libxl__ao *ao;
+    libxl__ao_device **array;
+    int used, allocd;
+    libxl__ao_device *preparation;
 };
 
 /*
@@ -2372,10 +2386,11 @@ _hidden void libxl__devices_destroy(libxl__egc *egc,
                                     libxl__devices_remove_state *drs);
 
 /* Helper function to add a bunch of disks. This should be used when
- * the caller is inside an async op. "devices" will NOT be prepared by this
- * function, so the caller must make sure to call _prepare before calling this
- * function. The start parameter contains the position inside the aodevs array
- * that should be used to store the state of this devices.
+ * the caller is inside an async op. "devices" will NOT be prepared by
+ * this function, so the caller must make sure to call
+ * libxl__multidev_begin before calling this function. The start
+ * parameter contains the position inside the aodevs array that should
+ * be used to store the state of these devices.
  *
  * The "callback" will be called for each device, and the user is responsible
  * for calling libxl__ao_device_check_last on the callback.
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 17:28:00 2012
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 2 Aug 2012 18:27:20 +0100
Message-ID: <1343928442-23966-12-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1343928442-23966-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1343928442-23966-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [Xen-devel] [PATCH 11/13] libxl: correct some comments regarding
	event API and fds

* libxl may indeed register more than one callback for the same fd,
  with some restrictions.  The allowable range of responses to this by
  the application means that this should pose no problems for users.
  But the documentation comment should be fixed.

* Document the relaxed synchronicity semantics of the fd_modify
  registration callback.

* A couple of comments referred to old names for functions.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 tools/libxl/libxl_event.h |   17 ++++++++++++++---
 1 files changed, 14 insertions(+), 3 deletions(-)

diff --git a/tools/libxl/libxl_event.h b/tools/libxl/libxl_event.h
index 3344bc8..cead71b 100644
--- a/tools/libxl/libxl_event.h
+++ b/tools/libxl/libxl_event.h
@@ -320,13 +320,24 @@ typedef struct libxl_osevent_hooks {
  * *for_registration_update is honoured by libxl and will be passed
  * to future modify or deregister calls.
  *
- * libxl will only attempt to register one callback for any one fd.
+ * libxl may want to register more than one callback for any one fd;
+ * in that case: (i) each such registration will have at least one bit
+ * set in revents which is unique to that registration; (ii) if an
+ * event occurs which is relevant for multiple registrations the
+ * application's event system may call libxl_osevent_occurred_fd
+ * for one, some, or all of those registrations.
+ *
+ * If fd_modify is used, it is permitted for the application's event
+ * system to still make calls to libxl_osevent_occurred_fd for the
+ * "old" set of requested events; these will be safely ignored by
+ * libxl.
+ *
  * libxl will remember the value stored in *for_app_registration_out
  * (or *for_app_registration_update) by a successful call to
  * register (or modify), and pass it to subsequent calls to modify
  * or deregister.
  *
- * register_fd_hooks may be called only once for each libxl_ctx.
+ * osevent_register_hooks may be called only once for each libxl_ctx.
  * libxl may make calls to register/modify/deregister from within
  * any libxl function (indeed, it will usually call register from
  * register_event_hooks).  Conversely, the application MUST NOT make
@@ -357,7 +368,7 @@ void libxl_osevent_register_hooks(libxl_ctx *ctx,
 /* It is NOT legal to call _occurred_ reentrantly within any libxl
  * function.  Specifically it is NOT legal to call it from within
  * a register callback.  Conversely, libxl MAY call register/deregister
- * from within libxl_event_registered_call_*.
+ * from within libxl_event_occurred_call_*.
  */
 
 void libxl_osevent_occurred_fd(libxl_ctx *ctx, void *for_libxl,
-- 
1.7.2.5



From xen-devel-bounces@lists.xen.org Thu Aug 02 17:28:00 2012
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 2 Aug 2012 18:27:18 +0100
Message-ID: <1343928442-23966-10-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1343928442-23966-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1343928442-23966-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [Xen-devel] [PATCH 09/13] libxl: remove an unused numainfo parameter

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Dario Faggioli <dario.faggioli@citrix.com>
---
 tools/libxl/libxl_numa.c |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/tools/libxl/libxl_numa.c b/tools/libxl/libxl_numa.c
index 5301ec4..2c8e59f 100644
--- a/tools/libxl/libxl_numa.c
+++ b/tools/libxl/libxl_numa.c
@@ -231,7 +231,7 @@ static int nodemap_to_nr_vcpus(libxl__gc *gc, libxl_cputopology *tinfo,
  * candidates with just one node).
  */
 static int count_cpus_per_node(libxl_cputopology *tinfo, int nr_cpus,
-                               libxl_numainfo *ninfo, int nr_nodes)
+                               int nr_nodes)
 {
     int cpus_per_node = 0;
     int j, i;
@@ -340,7 +340,7 @@ int libxl__get_numa_candidate(libxl__gc *gc,
     if (!min_nodes) {
         int cpus_per_node;
 
-        cpus_per_node = count_cpus_per_node(tinfo, nr_cpus, ninfo, nr_nodes);
+        cpus_per_node = count_cpus_per_node(tinfo, nr_cpus, nr_nodes);
         if (cpus_per_node == 0)
             min_nodes = 1;
         else
-- 
1.7.2.5



From xen-devel-bounces@lists.xen.org Thu Aug 02 17:28:00 2012
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 2 Aug 2012 18:27:19 +0100
Message-ID: <1343928442-23966-11-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1343928442-23966-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1343928442-23966-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH 10/13] libxl: idl: always initialise KeyedEnum
	keyvar in member init

libxl: idl: always initialise the KeyedEnum keyvar in the member init function

Previously we only initialised it if an explicit keyvar_init_val was
given but not if the default was implicitly 0.

In the generated code this only changes the unused libxl_event_init_type
function:

 void libxl_event_init_type(libxl_event *p, libxl_event_type type)
 {
+    assert(!p->type);
+    p->type = type;
     switch (p->type) {
     case LIBXL_EVENT_TYPE_DOMAIN_SHUTDOWN:
         break;

However, I think it is wrong that this function is unused: it and
libxl_event_init should be used by libxl__event_new. As it happens,
both amount to a memset to zero, but for correctness we should use the
init functions (in case the IDL changes).

In the generator we also need to handle init_val == 0 properly; the
current if statements incorrectly treat it as False. This doesn't
actually have any impact on the generated code.
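
The pitfall here is Python truthiness: 0 is falsy, so a bare
`if init_val:` cannot distinguish an explicit init value of 0 from no
value at all. A standalone sketch of the difference (hypothetical helper
names, not code from gentypes.py):

```python
def pick_init_val(keyvar_init_val, type_init_val):
    # Buggy variant: an explicit value of 0 is treated as "not given".
    if keyvar_init_val:
        return keyvar_init_val
    elif type_init_val:
        return type_init_val
    return None

def pick_init_val_fixed(keyvar_init_val, type_init_val):
    # Fixed variant: only a genuinely missing value (None) falls through.
    if keyvar_init_val is not None:
        return keyvar_init_val
    elif type_init_val is not None:
        return type_init_val
    return None

# An explicit init value of 0 is lost by the truthiness check...
assert pick_init_val(0, None) is None
# ...but preserved once the comparison is against None.
assert pick_init_val_fixed(0, None) == 0
```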

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 tools/libxl/gentypes.py   |   11 +++++++----
 tools/libxl/libxl_event.c |    5 ++++-
 2 files changed, 11 insertions(+), 5 deletions(-)

diff --git a/tools/libxl/gentypes.py b/tools/libxl/gentypes.py
index 1d13201..30f29ba 100644
--- a/tools/libxl/gentypes.py
+++ b/tools/libxl/gentypes.py
@@ -162,17 +162,20 @@ def libxl_C_type_member_init(ty, field):
                                 ku.keyvar.type.make_arg(ku.keyvar.name))
     s += "{\n"
     
-    if ku.keyvar.init_val:
+    if ku.keyvar.init_val is not None:
         init_val = ku.keyvar.init_val
-    elif ku.keyvar.type.init_val:
+    elif ku.keyvar.type.init_val is not None:
         init_val = ku.keyvar.type.init_val
     else:
         init_val = None
         
+    (nparent,fexpr) = ty.member(ty.pass_arg("p"), ku.keyvar, isref=True)
     if init_val is not None:
-        (nparent,fexpr) = ty.member(ty.pass_arg("p"), ku.keyvar, isref=True)
         s += "    assert(%s == %s);\n" % (fexpr, init_val)
-        s += "    %s = %s;\n" % (fexpr, ku.keyvar.name)
+    else:
+        s += "    assert(!%s);\n" % (fexpr)
+    s += "    %s = %s;\n" % (fexpr, ku.keyvar.name)
+
     (nparent,fexpr) = ty.member(ty.pass_arg("p"), field, isref=True)
     s += _libxl_C_type_init(ku, fexpr, parent=nparent, subinit=True)
     s += "}\n"
diff --git a/tools/libxl/libxl_event.c b/tools/libxl/libxl_event.c
index 1af64c8..939906c 100644
--- a/tools/libxl/libxl_event.c
+++ b/tools/libxl/libxl_event.c
@@ -1163,7 +1163,10 @@ libxl_event *libxl__event_new(libxl__egc *egc,
     libxl_event *ev;
 
     ev = libxl__zalloc(NOGC,sizeof(*ev));
-    ev->type = type;
+
+    libxl_event_init(ev);
+    libxl_event_init_type(ev, type);
+
     ev->domid = domid;
 
     return ev;
-- 
1.7.2.5



From xen-devel-bounces@lists.xen.org Thu Aug 02 17:28:00 2012
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 2 Aug 2012 18:27:11 +0100
Message-ID: <1343928442-23966-3-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1343928442-23966-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1343928442-23966-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 02/13] libxl: react correctly to bootloader pty
	master POLLHUP

Receiving POLLHUP on the bootloader master pty is not an error.
Hopefully it means that the bootloader has exited and therefore the
pty slave side has no process group any more.  (At least NetBSD
indicates POLLHUP on the master in this case.)

So send the bootloader SIGTERM; if it has already exited then this has
no effect (except that on some versions of NetBSD it erroneously
returns ESRCH and we print a harmless warning) and we will then
collect the bootloader's exit status and be satisfied.

However, we remember that we have done this, so that if the POLLHUP
occurred for some reason other than the bootloader exiting we can
report something resembling a useful message.

In order to implement this we need to provide a way for users of
datacopier to handle POLLHUP rather than treating it as fatal.

We rename bootloader_abort to bootloader_stop, since it no longer
applies only to error situations.

Signed-off-by: Roger Pau Monne <roger.pau@citrix.com>
Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>

-
Changes in v5:
 * Correctly call dc->callback_pollhup, not dc->callback,
   in datacopier_pollhup_handled.

Changes in v4:
 * Track whether we sent SIGTERM due to POLLHUP so we can report
   messages properly.

Changes in v3:
 * datacopier provides new interface for handling POLLHUP
 * Do not ignore errors on the xenconsole pty
 * Rename bootloader_abort.
---
 tools/libxl/libxl_aoutils.c    |   23 +++++++++++++++++++++++
 tools/libxl/libxl_bootloader.c |   39 +++++++++++++++++++++++++++++----------
 tools/libxl/libxl_internal.h   |    7 +++++--
 3 files changed, 57 insertions(+), 12 deletions(-)

diff --git a/tools/libxl/libxl_aoutils.c b/tools/libxl/libxl_aoutils.c
index 99972a2..983a60a 100644
--- a/tools/libxl/libxl_aoutils.c
+++ b/tools/libxl/libxl_aoutils.c
@@ -97,11 +97,31 @@ void libxl__datacopier_prefixdata(libxl__egc *egc, libxl__datacopier_state *dc,
     LIBXL_TAILQ_INSERT_TAIL(&dc->bufs, buf, entry);
 }
 
+static int datacopier_pollhup_handled(libxl__egc *egc,
+                                      libxl__datacopier_state *dc,
+                                      short revents, int onwrite)
+{
+    STATE_AO_GC(dc->ao);
+
+    if (dc->callback_pollhup && (revents & POLLHUP)) {
+        LOG(DEBUG, "received POLLHUP on %s during copy of %s",
+            onwrite ? dc->writewhat : dc->readwhat,
+            dc->copywhat);
+        libxl__datacopier_kill(dc);
+        dc->callback_pollhup(egc, dc, onwrite, -1);
+        return 1;
+    }
+    return 0;
+}
+
 static void datacopier_readable(libxl__egc *egc, libxl__ev_fd *ev,
                                 int fd, short events, short revents) {
     libxl__datacopier_state *dc = CONTAINER_OF(ev, *dc, toread);
     STATE_AO_GC(dc->ao);
 
+    if (datacopier_pollhup_handled(egc, dc, revents, 0))
+        return;
+
     if (revents & ~POLLIN) {
         LOG(ERROR, "unexpected poll event 0x%x (should be POLLIN)"
             " on %s during copy of %s", revents, dc->readwhat, dc->copywhat);
@@ -163,6 +183,9 @@ static void datacopier_writable(libxl__egc *egc, libxl__ev_fd *ev,
     libxl__datacopier_state *dc = CONTAINER_OF(ev, *dc, towrite);
     STATE_AO_GC(dc->ao);
 
+    if (datacopier_pollhup_handled(egc, dc, revents, 1))
+        return;
+
     if (revents & ~POLLOUT) {
         LOG(ERROR, "unexpected poll event 0x%x (should be POLLOUT)"
             " on %s during copy of %s", revents, dc->writewhat, dc->copywhat);
diff --git a/tools/libxl/libxl_bootloader.c b/tools/libxl/libxl_bootloader.c
index ef5a91b..bfc1b56 100644
--- a/tools/libxl/libxl_bootloader.c
+++ b/tools/libxl/libxl_bootloader.c
@@ -215,6 +215,7 @@ void libxl__bootloader_init(libxl__bootloader_state *bl)
     libxl__domaindeathcheck_init(&bl->deathcheck);
     bl->keystrokes.ao = bl->ao;  libxl__datacopier_init(&bl->keystrokes);
     bl->display.ao = bl->ao;     libxl__datacopier_init(&bl->display);
+    bl->got_pollhup = 0;
 }
 
 static void bootloader_cleanup(libxl__egc *egc, libxl__bootloader_state *bl)
@@ -275,7 +276,7 @@ static void bootloader_local_detached_cb(libxl__egc *egc,
 }
 
 /* might be called at any time, provided it's init'd */
-static void bootloader_abort(libxl__egc *egc,
+static void bootloader_stop(libxl__egc *egc,
                              libxl__bootloader_state *bl, int rc)
 {
     STATE_AO_GC(bl->ao);
@@ -285,8 +286,8 @@ static void bootloader_abort(libxl__egc *egc,
     libxl__datacopier_kill(&bl->display);
     if (libxl__ev_child_inuse(&bl->child)) {
         r = kill(bl->child.pid, SIGTERM);
-        if (r) LOGE(WARN, "after failure, failed to kill bootloader [%lu]",
-                    (unsigned long)bl->child.pid);
+        if (r) LOGE(WARN, "%sfailed to kill bootloader [%lu]",
+                    rc ? "after failure, " : "", (unsigned long)bl->child.pid);
     }
     bl->rc = rc;
 }
@@ -508,7 +509,10 @@ static void bootloader_gotptys(libxl__egc *egc, libxl__openpty_state *op)
     bl->keystrokes.maxsz = BOOTLOADER_BUF_OUT;
     bl->keystrokes.copywhat =
         GCSPRINTF("bootloader input for domain %"PRIu32, bl->domid);
-    bl->keystrokes.callback = bootloader_keystrokes_copyfail;
+    bl->keystrokes.callback =         bootloader_keystrokes_copyfail;
+    bl->keystrokes.callback_pollhup = bootloader_keystrokes_copyfail;
+        /* pollhup gets called with errnoval==-1 which is not otherwise
+         * possible since errnos are nonnegative, so it's unambiguous */
     rc = libxl__datacopier_start(&bl->keystrokes);
     if (rc) goto out;
 
@@ -516,7 +520,8 @@ static void bootloader_gotptys(libxl__egc *egc, libxl__openpty_state *op)
     bl->display.maxsz = BOOTLOADER_BUF_IN;
     bl->display.copywhat =
         GCSPRINTF("bootloader output for domain %"PRIu32, bl->domid);
-    bl->display.callback = bootloader_display_copyfail;
+    bl->display.callback =         bootloader_display_copyfail;
+    bl->display.callback_pollhup = bootloader_display_copyfail;
     rc = libxl__datacopier_start(&bl->display);
     if (rc) goto out;
 
@@ -562,30 +567,42 @@ static void bootloader_gotptys(libxl__egc *egc, libxl__openpty_state *op)
 
 /* perhaps one of these will be called, but perhaps not */
 static void bootloader_copyfail(libxl__egc *egc, const char *which,
-       libxl__bootloader_state *bl, int onwrite, int errnoval)
+        libxl__bootloader_state *bl, int ondisplay, int onwrite, int errnoval)
 {
     STATE_AO_GC(bl->ao);
+    int rc = ERROR_FAIL;
+
+    if (errnoval==-1) {
+        /* POLLHUP */
+        if (!!ondisplay != !!onwrite) {
+            rc = 0;
+            bl->got_pollhup = 1;
+        } else {
+            LOG(ERROR, "unexpected POLLHUP on %s", which);
+        }
+    }
     if (!onwrite && !errnoval)
         LOG(ERROR, "unexpected eof copying %s", which);
-    bootloader_abort(egc, bl, ERROR_FAIL);
+
+    bootloader_stop(egc, bl, rc);
 }
 static void bootloader_keystrokes_copyfail(libxl__egc *egc,
        libxl__datacopier_state *dc, int onwrite, int errnoval)
 {
     libxl__bootloader_state *bl = CONTAINER_OF(dc, *bl, keystrokes);
-    bootloader_copyfail(egc, "bootloader input", bl, onwrite, errnoval);
+    bootloader_copyfail(egc, "bootloader input", bl, 0, onwrite, errnoval);
 }
 static void bootloader_display_copyfail(libxl__egc *egc,
        libxl__datacopier_state *dc, int onwrite, int errnoval)
 {
     libxl__bootloader_state *bl = CONTAINER_OF(dc, *bl, display);
-    bootloader_copyfail(egc, "bootloader output", bl, onwrite, errnoval);
+    bootloader_copyfail(egc, "bootloader output", bl, 1, onwrite, errnoval);
 }
 
 static void bootloader_domaindeath(libxl__egc *egc, libxl__domaindeathcheck *dc)
 {
     libxl__bootloader_state *bl = CONTAINER_OF(dc, *bl, deathcheck);
-    bootloader_abort(egc, bl, ERROR_FAIL);
+    bootloader_stop(egc, bl, ERROR_FAIL);
 }
 
 static void bootloader_finished(libxl__egc *egc, libxl__ev_child *child,
@@ -599,6 +616,8 @@ static void bootloader_finished(libxl__egc *egc, libxl__ev_child *child,
     libxl__datacopier_kill(&bl->display);
 
     if (status) {
+        if (bl->got_pollhup && WIFSIGNALED(status) && WTERMSIG(status)==SIGTERM)
+            LOG(ERROR, "got POLLHUP, sent SIGTERM");
         LOG(ERROR, "bootloader failed - consult logfile %s", bl->logfile);
         libxl_report_child_exitstatus(CTX, XTL_ERROR, "bootloader",
                                       pid, status);
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index 58004b3..2d6c71a 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -2076,7 +2076,9 @@ typedef struct libxl__datacopier_buf libxl__datacopier_buf;
  *     errnoval==0 means we got eof and all data was written
  *     errnoval!=0 means we had a read error, logged
  * onwrite==-1 means some other internal failure, errnoval not valid, logged
- * in all cases copier is killed before calling this callback */
+ * If we get POLLHUP, we call callback_pollhup(..., onwrite, -1);
+ * or if callback_pollhup==0 this is an internal failure, as above.
+ * In all cases copier is killed before calling this callback */
 typedef void libxl__datacopier_callback(libxl__egc *egc,
      libxl__datacopier_state *dc, int onwrite, int errnoval);
 
@@ -2095,6 +2097,7 @@ struct libxl__datacopier_state {
     const char *copywhat, *readwhat, *writewhat; /* for error msgs */
     FILE *log; /* gets a copy of everything */
     libxl__datacopier_callback *callback;
+    libxl__datacopier_callback *callback_pollhup;
     /* remaining fields are private to datacopier */
     libxl__ev_fd toread, towrite;
     ssize_t used;
@@ -2279,7 +2282,7 @@ struct libxl__bootloader_state {
     int nargs, argsspace;
     const char **args;
     libxl__datacopier_state keystrokes, display;
-    int rc;
+    int rc, got_pollhup;
 };
 
 _hidden void libxl__bootloader_init(libxl__bootloader_state *bl);
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 17:28:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 17:28:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwzBM-0007ss-5d; Thu, 02 Aug 2012 17:27:32 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SwzBI-0007qK-Ag
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 17:27:28 +0000
Received: from [85.158.139.83:48800] by server-12.bemta-5.messagelabs.com id
	AD/C3-26304-F78BA105; Thu, 02 Aug 2012 17:27:27 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-2.tower-182.messagelabs.com!1343928446!29952258!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8485 invoked from network); 2 Aug 2012 17:27:26 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-2.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 17:27:26 -0000
X-IronPort-AV: E=Sophos;i="4.77,702,1336348800"; d="scan'208";a="13828369"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 17:27:25 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 2 Aug 2012 18:27:25 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1SwzBF-0001ve-4v; Thu, 02 Aug 2012 17:27:25 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1SwzBF-0006GW-2u;
	Thu, 02 Aug 2012 18:27:25 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 2 Aug 2012 18:27:18 +0100
Message-ID: <1343928442-23966-10-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1343928442-23966-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1343928442-23966-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [Xen-devel] [PATCH 09/13] libxl: remove an unused numainfo parameter
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Dario Faggioli <dario.faggioli@citrix.com>
---
 tools/libxl/libxl_numa.c |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/tools/libxl/libxl_numa.c b/tools/libxl/libxl_numa.c
index 5301ec4..2c8e59f 100644
--- a/tools/libxl/libxl_numa.c
+++ b/tools/libxl/libxl_numa.c
@@ -231,7 +231,7 @@ static int nodemap_to_nr_vcpus(libxl__gc *gc, libxl_cputopology *tinfo,
  * candidates with just one node).
  */
 static int count_cpus_per_node(libxl_cputopology *tinfo, int nr_cpus,
-                               libxl_numainfo *ninfo, int nr_nodes)
+                               int nr_nodes)
 {
     int cpus_per_node = 0;
     int j, i;
@@ -340,7 +340,7 @@ int libxl__get_numa_candidate(libxl__gc *gc,
     if (!min_nodes) {
         int cpus_per_node;
 
-        cpus_per_node = count_cpus_per_node(tinfo, nr_cpus, ninfo, nr_nodes);
+        cpus_per_node = count_cpus_per_node(tinfo, nr_cpus, nr_nodes);
         if (cpus_per_node == 0)
             min_nodes = 1;
         else
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 17:28:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 17:28:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwzBN-0007u7-NZ; Thu, 02 Aug 2012 17:27:33 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SwzBJ-0007q0-1u
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 17:27:29 +0000
Received: from [85.158.143.99:51966] by server-1.bemta-4.messagelabs.com id
	B5/6E-24392-088BA105; Thu, 02 Aug 2012 17:27:28 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1343928445!29407608!6
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21108 invoked from network); 2 Aug 2012 17:27:26 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-3.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 17:27:26 -0000
X-IronPort-AV: E=Sophos;i="4.77,702,1336348800"; d="scan'208";a="13828366"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 17:27:25 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 2 Aug 2012 18:27:25 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1SwzBF-0001vY-1l; Thu, 02 Aug 2012 17:27:25 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1SwzBF-0006GG-0i;
	Thu, 02 Aug 2012 18:27:25 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 2 Aug 2012 18:27:14 +0100
Message-ID: <1343928442-23966-6-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1343928442-23966-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1343928442-23966-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [Xen-devel] [PATCH 05/13] libxl: abolish useless `start' parameter
	to libxl__add_*
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

0 is always passed for this parameter, and the code no longer
actually uses it.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
---
 tools/libxl/libxl_create.c   |    4 ++--
 tools/libxl/libxl_device.c   |    7 +++----
 tools/libxl/libxl_dm.c       |    4 ++--
 tools/libxl/libxl_internal.h |    8 +++-----
 4 files changed, 10 insertions(+), 13 deletions(-)

diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index 3265d69..5275373 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -911,7 +911,7 @@ static void domcreate_rebuild_done(libxl__egc *egc,
 
     libxl__multidev_begin(ao, &dcs->aodevs);
     dcs->aodevs.callback = domcreate_launch_dm;
-    libxl__add_disks(egc, ao, domid, 0, d_config, &dcs->aodevs);
+    libxl__add_disks(egc, ao, domid, d_config, &dcs->aodevs);
     libxl__multidev_prepared(egc, &dcs->aodevs, 0);
 
     return;
@@ -1041,7 +1041,7 @@ static void domcreate_devmodel_started(libxl__egc *egc,
         /* Attach nics */
         libxl__multidev_begin(ao, &dcs->aodevs);
         dcs->aodevs.callback = domcreate_attach_pci;
-        libxl__add_nics(egc, ao, domid, 0, d_config, &dcs->aodevs);
+        libxl__add_nics(egc, ao, domid, d_config, &dcs->aodevs);
         libxl__multidev_prepared(egc, &dcs->aodevs, 0);
         return;
     }
diff --git a/tools/libxl/libxl_device.c b/tools/libxl/libxl_device.c
index 4a53181..27fbd21 100644
--- a/tools/libxl/libxl_device.c
+++ b/tools/libxl/libxl_device.c
@@ -485,15 +485,14 @@ void libxl__multidev_prepared(libxl__egc *egc, libxl__ao_devices *aodevs,
 
 #define DEFINE_DEVICES_ADD(type)                                        \
     void libxl__add_##type##s(libxl__egc *egc, libxl__ao *ao, uint32_t domid, \
-                              int start, libxl_domain_config *d_config, \
+                              libxl_domain_config *d_config,            \
                               libxl__ao_devices *aodevs)                \
     {                                                                   \
         AO_GC;                                                          \
         int i;                                                          \
-        int end = start + d_config->num_##type##s;                      \
-        for (i = start; i < end; i++) {                                 \
+        for (i = 0; i < d_config->num_##type##s; i++) {                 \
             libxl__ao_device *aodev = libxl__multidev_prepare(aodevs);  \
-            libxl__device_##type##_add(egc, domid, &d_config->type##s[i-start], \
+            libxl__device_##type##_add(egc, domid, &d_config->type##s[i], \
                                        aodev);                          \
         }                                                               \
     }
diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
index 177642b..66aa45e 100644
--- a/tools/libxl/libxl_dm.c
+++ b/tools/libxl/libxl_dm.c
@@ -858,7 +858,7 @@ retry_transaction:
 
     libxl__multidev_begin(ao, &sdss->aodevs);
     sdss->aodevs.callback = spawn_stub_launch_dm;
-    libxl__add_disks(egc, ao, dm_domid, 0, dm_config, &sdss->aodevs);
+    libxl__add_disks(egc, ao, dm_domid, dm_config, &sdss->aodevs);
     libxl__multidev_prepared(egc, &sdss->aodevs, 0);
 
     free(args);
@@ -984,7 +984,7 @@ static void spawn_stubdom_pvqemu_cb(libxl__egc *egc,
     if (d_config->num_nics > 0) {
         libxl__multidev_begin(ao, &sdss->aodevs);
         sdss->aodevs.callback = stubdom_pvqemu_cb;
-        libxl__add_nics(egc, ao, dm_domid, 0, d_config, &sdss->aodevs);
+        libxl__add_nics(egc, ao, dm_domid, d_config, &sdss->aodevs);
         libxl__multidev_prepared(egc, &sdss->aodevs, 0);
         return;
     }
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index 07e92fb..bb3eb5f 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -2388,19 +2388,17 @@ _hidden void libxl__devices_destroy(libxl__egc *egc,
 /* Helper function to add a bunch of disks. This should be used when
  * the caller is inside an async op. "devices" will NOT be prepared by
  * this function, so the caller must make sure to call
- * libxl__multidev_begin before calling this function. The start
- * parameter contains the position inside the aodevs array that should
- * be used to store the state of this devices.
+ * libxl__multidev_begin before calling this function.
  *
  * The "callback" will be called for each device, and the user is responsible
  * for calling libxl__ao_device_check_last on the callback.
  */
 _hidden void libxl__add_disks(libxl__egc *egc, libxl__ao *ao, uint32_t domid,
-                              int start, libxl_domain_config *d_config,
+                              libxl_domain_config *d_config,
                               libxl__ao_devices *aodevs);
 
 _hidden void libxl__add_nics(libxl__egc *egc, libxl__ao *ao, uint32_t domid,
-                             int start, libxl_domain_config *d_config,
+                             libxl_domain_config *d_config,
                              libxl__ao_devices *aodevs);
 
 /*----- device model creation -----*/
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 17:28:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 17:28:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwzBN-0007tr-AU; Thu, 02 Aug 2012 17:27:33 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SwzBI-0007qF-FZ
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 17:27:28 +0000
Received: from [85.158.143.99:9004] by server-2.bemta-4.messagelabs.com id
	B1/B6-17938-F78BA105; Thu, 02 Aug 2012 17:27:27 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1343928445!29407608!11
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21155 invoked from network); 2 Aug 2012 17:27:26 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-3.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 17:27:26 -0000
X-IronPort-AV: E=Sophos;i="4.77,702,1336348800"; d="scan'208";a="13828374"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 17:27:26 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 2 Aug 2012 18:27:25 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1SwzBF-0001vj-75; Thu, 02 Aug 2012 17:27:25 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1SwzBF-0006Gk-6B;
	Thu, 02 Aug 2012 18:27:25 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 2 Aug 2012 18:27:22 +0100
Message-ID: <1343928442-23966-14-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1343928442-23966-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1343928442-23966-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 13/13] libxl: -Wunused-parameter
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

*** DO NOT APPLY ***

We have recently had a couple of bugs which basically involved
ignoring the rc parameter to a callback function.  I thought I would
try -Wunused-parameter.  Here are the results.

I found three further problems:

 * libxl_wait_for_free_memory takes a domid parameter but its
   semantics don't seem to call for that.  However this function has
   an API stability warning, and we will leave it be in case the domid
   turns out to be useful later for some kind of compatibility bodge.

 * qmp_synchronous_send has an `ask_timeout' parameter which is
   ignored.

 * The autogenerated function libxl_event_init_type ignored the type
   parameter.  This is now fixed by Ian Campbell in an earlier
   patch in this series.

Things I needed to do to get the rest of the code to compile:

 * Remove one harmless unused parameter from an internal function.
   (Earlier in this series.)

 * Add an assert to make the error handling in the broken remus code
   slightly less broken.  (Earlier in this series.)

 * Provide machinery in the Makefile for passing different CFLAGS to
   libxl as opposed to xl and libxlu.  The flex- and bison-generated
   files in libxlu can't be compiled with -Wunused-parameter.

 * Define a new helper macro
        #define USE(var) ((void)(var))
   and use it 43 times.  The pattern is something like
        USE(egc);
   in a function which takes egc but doesn't need it.  If the
   parameter is later used, this is harmless.  In functions
   which are placeholders the USE statement should be placed in the
   middle of the function where the parameter would be used if the
   function is changed later, so that the USE gets deleted by the
   patch introducing the implementation.

 * Define a new helper macro for use only in other macros
        #define MAYBE_UNUSED __attribute__((unused))
   and use it in 10 different places.

 * Define new macros for helping declare common types of callback
   functions.  For example:

        #define EV_XSWATCH_CALLBACK_PARAMS(egc, watch, wpath, epath)  \
            libxl__egc *egc MAYBE_UNUSED,                             \
            libxl__ev_xswatch *watch MAYBE_UNUSED,                    \
            const char *wpath MAYBE_UNUSED,                           \
            const char *epath MAYBE_UNUSED

   which is used like this:

        -static void some_callback(libxl__egc *egc, libxl__ev_xswatch *watch,
        -                          const char *wpath, const char *epath)
        +static void some_callback(EV_XSWATCH_CALLBACK_PARAMS
        +                          (egc, watch, wpath, epath))
        {
            ... now we use (or not) egc, watch, wpath, etc. as we like

   This somewhat resembles a Traditional K&R C typeless function
   definition.  The types of the parameters are actually defined
   for the compiler of course, along with the information that
   the parameters might be unused.

   There are 4 macros of this kind with 22 call sites.

IMO the cost (65 places in ordinary code where we have to write
something somewhat ugly) is worth the benefit (had we deployed this
right away, it would have found around 6 bugs).  But it's arguable.

*** DO NOT APPLY ***

Anyway, this patch must not be applied right now because it causes the
build to fail.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>

-
Changes in v5 of series:
 * Use REMUS TODO in comment.
 * Ignore domid parameter to libxl_wait_for_free_memory.
 * Do not unnecessarily but harmlessly USE(priv) in pci_ins_check.
 * Mention that libxl_event_init_type problem is now fixed.
---
 tools/libxl/Makefile             |    4 +++-
 tools/libxl/libxl.c              |   31 +++++++++++++++++++++++++++----
 tools/libxl/libxl_aoutils.c      |    8 ++++----
 tools/libxl/libxl_blktap2.c      |    1 +
 tools/libxl/libxl_bootloader.c   |    6 ++++++
 tools/libxl/libxl_create.c       |    2 ++
 tools/libxl/libxl_device.c       |    9 +++++----
 tools/libxl/libxl_dm.c           |    4 ++++
 tools/libxl/libxl_dom.c          |    8 ++++----
 tools/libxl/libxl_event.c        |   22 +++++++++++++---------
 tools/libxl/libxl_exec.c         |    8 ++++----
 tools/libxl/libxl_fork.c         |    1 +
 tools/libxl/libxl_internal.h     |   24 ++++++++++++++++++++++--
 tools/libxl/libxl_pci.c          |    5 +++++
 tools/libxl/libxl_qmp.c          |   17 +++++++++--------
 tools/libxl/libxl_save_callout.c |    4 ++--
 tools/libxl/libxl_utils.c        |    2 ++
 17 files changed, 114 insertions(+), 42 deletions(-)

diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
index 63a8157..f5ff115 100644
--- a/tools/libxl/Makefile
+++ b/tools/libxl/Makefile
@@ -13,6 +13,8 @@ XLUMINOR = 0
 
 CFLAGS += -Werror -Wno-format-zero-length -Wmissing-declarations \
 	-Wno-declaration-after-statement -Wformat-nonliteral
+CFLAGS_FOR_LIBXL = -Wunused-parameter
+
 CFLAGS += -I. -fPIC
 
 ifeq ($(CONFIG_Linux),y)
@@ -71,7 +73,7 @@ LIBXL_OBJS = flexarray.o libxl.o libxl_create.o libxl_dm.o libxl_pci.o \
 			libxl_qmp.o libxl_event.o libxl_fork.o $(LIBXL_OBJS-y)
 LIBXL_OBJS += _libxl_types.o libxl_flask.o _libxl_types_internal.o
 
-$(LIBXL_OBJS): CFLAGS += $(CFLAGS_libxenctrl) $(CFLAGS_libxenguest) $(CFLAGS_libxenstore) $(CFLAGS_libblktapctl) -include $(XEN_ROOT)/tools/config.h
+$(LIBXL_OBJS): CFLAGS += $(CFLAGS_FOR_LIBXL) $(CFLAGS_libxenctrl) $(CFLAGS_libxenguest) $(CFLAGS_libxenstore) $(CFLAGS_libblktapctl) -include $(XEN_ROOT)/tools/config.h
 
 AUTOINCS= libxlu_cfg_y.h libxlu_cfg_l.h _libxl_list.h _paths.h \
 	_libxl_save_msgs_callout.h _libxl_save_msgs_helper.h \
diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index 726a70e..4f387df 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -28,6 +28,8 @@ int libxl_ctx_alloc(libxl_ctx **pctx, int version,
     struct stat stat_buf;
     int rc;
 
+    USE(flags);
+
     if (version != LIBXL_VERSION) { rc = ERROR_VERSION; goto out; }
 
     ctx = malloc(sizeof(*ctx));
@@ -700,6 +702,8 @@ int libxl_domain_remus_start(libxl_ctx *ctx, libxl_domain_remus_info *info,
     libxl__domain_suspend_state *dss;
     int rc;
 
+    USE(recv_fd); /* REMUS TODO: get rid of this and actually use it! */
+
     libxl_domain_type type = libxl__domain_type(gc, domid);
     if (type == LIBXL_DOMAIN_TYPE_INVALID) {
         rc = ERROR_FAIL;
@@ -979,8 +983,9 @@ static void domain_death_occurred(libxl__egc *egc,
     LIBXL_TAILQ_INSERT_HEAD(&CTX->death_reported, evg, entry);
 }
 
-static void domain_death_xswatch_callback(libxl__egc *egc, libxl__ev_xswatch *w,
-                                        const char *wpath, const char *epath) {
+static void domain_death_xswatch_callback(EV_XSWATCH_CALLBACK_PARAMS
+                                          (egc, watch, wpath, epath))
+{
     EGC_GC;
     libxl_evgen_domain_death *evg;
     uint32_t domid;
@@ -1137,8 +1142,9 @@ void libxl_evdisable_domain_death(libxl_ctx *ctx,
     GC_FREE;
 }
 
-static void disk_eject_xswatch_callback(libxl__egc *egc, libxl__ev_xswatch *w,
-                                        const char *wpath, const char *epath) {
+static void disk_eject_xswatch_callback(EV_XSWATCH_CALLBACK_PARAMS
+                                        (egc, w, wpath, epath))
+{
     EGC_GC;
     libxl_evgen_disk_eject *evg = (void*)w;
     char *backend;
@@ -2525,6 +2531,8 @@ static int libxl__device_from_nic(libxl__gc *gc, uint32_t domid,
                                   libxl_device_nic *nic,
                                   libxl__device *device)
 {
+    USE(gc);
+
     device->backend_devid    = nic->devid;
     device->backend_domid    = nic->backend_domid;
     device->backend_kind     = LIBXL__DEVICE_KIND_VIF;
@@ -2903,6 +2911,9 @@ out:
 
 int libxl__device_vkb_setdefault(libxl__gc *gc, libxl_device_vkb *vkb)
 {
+    USE(gc);
+
+    USE(vkb);
     return 0;
 }
 
@@ -2910,6 +2921,7 @@ static int libxl__device_from_vkb(libxl__gc *gc, uint32_t domid,
                                   libxl_device_vkb *vkb,
                                   libxl__device *device)
 {
+    USE(gc);
     device->backend_devid = vkb->devid;
     device->backend_domid = vkb->backend_domid;
     device->backend_kind = LIBXL__DEVICE_KIND_VKBD;
@@ -2991,6 +3003,7 @@ out:
 
 int libxl__device_vfb_setdefault(libxl__gc *gc, libxl_device_vfb *vfb)
 {
+    USE(gc);
     libxl_defbool_setdefault(&vfb->vnc.enable, true);
     if (libxl_defbool_val(vfb->vnc.enable)) {
         if (!vfb->vnc.listen) {
@@ -3011,6 +3024,7 @@ static int libxl__device_from_vfb(libxl__gc *gc, uint32_t domid,
                                   libxl_device_vfb *vfb,
                                   libxl__device *device)
 {
+    USE(gc);
     device->backend_devid = vfb->devid;
     device->backend_domid = vfb->backend_domid;
     device->backend_kind = LIBXL__DEVICE_KIND_VFB;
@@ -3567,6 +3581,8 @@ int libxl_wait_for_free_memory(libxl_ctx *ctx, uint32_t domid, uint32_t
     uint32_t freemem_slack;
     GC_INIT(ctx);
 
+    USE(domid); /* this may turn out to be useful for compatibility, later */
+
     rc = libxl__get_free_memory_slack(gc, &freemem_slack);
     if (rc < 0)
         goto out;
@@ -3937,8 +3953,13 @@ libxl_scheduler libxl_get_scheduler(libxl_ctx *ctx)
 static int sched_arinc653_domain_set(libxl__gc *gc, uint32_t domid,
                                      const libxl_domain_sched_params *scinfo)
 {
+    USE(gc);
+
     /* Currently, the ARINC 653 scheduler does not take any domain-specific
          configuration, so we simply return success. */
+    USE(domid);
+    USE(scinfo);
+
     return 0;
 }
 
@@ -4386,6 +4407,8 @@ int libxl_xen_console_read_line(libxl_ctx *ctx,
 void libxl_xen_console_read_finish(libxl_ctx *ctx,
                                    libxl_xen_console_reader *cr)
 {
+    USE(ctx);
+
     free(cr->buffer);
     free(cr);
 }
diff --git a/tools/libxl/libxl_aoutils.c b/tools/libxl/libxl_aoutils.c
index 983a60a..6d85f56 100644
--- a/tools/libxl/libxl_aoutils.c
+++ b/tools/libxl/libxl_aoutils.c
@@ -114,8 +114,8 @@ static int datacopier_pollhup_handled(libxl__egc *egc,
     return 0;
 }
 
-static void datacopier_readable(libxl__egc *egc, libxl__ev_fd *ev,
-                                int fd, short events, short revents) {
+static void datacopier_readable(EV_FD_CALLBACK_PARAMS
+                                (egc, ev, fd, events, revents)) {
     libxl__datacopier_state *dc = CONTAINER_OF(ev, *dc, toread);
     STATE_AO_GC(dc->ao);
 
@@ -178,8 +178,8 @@ static void datacopier_readable(libxl__egc *egc, libxl__ev_fd *ev,
     datacopier_check_state(egc, dc);
 }
 
-static void datacopier_writable(libxl__egc *egc, libxl__ev_fd *ev,
-                                int fd, short events, short revents) {
+static void datacopier_writable(EV_FD_CALLBACK_PARAMS(
+                                egc, ev, fd, events, revents)) {
     libxl__datacopier_state *dc = CONTAINER_OF(ev, *dc, towrite);
     STATE_AO_GC(dc->ao);
 
diff --git a/tools/libxl/libxl_blktap2.c b/tools/libxl/libxl_blktap2.c
index 2c40182..660a669 100644
--- a/tools/libxl/libxl_blktap2.c
+++ b/tools/libxl/libxl_blktap2.c
@@ -19,6 +19,7 @@
 
 int libxl__blktap_enabled(libxl__gc *gc)
 {
+    USE(gc);
     const char *msg;
     return !tap_ctl_check(&msg);
 }
diff --git a/tools/libxl/libxl_bootloader.c b/tools/libxl/libxl_bootloader.c
index e103ee9..902dfe6 100644
--- a/tools/libxl/libxl_bootloader.c
+++ b/tools/libxl/libxl_bootloader.c
@@ -88,6 +88,7 @@ static void make_bootloader_args(libxl__gc *gc, libxl__bootloader_state *bl,
 static int setup_xenconsoled_pty(libxl__egc *egc, libxl__bootloader_state *bl,
                                  char *slave_path, size_t slave_path_len)
 {
+    USE(egc);
     STATE_AO_GC(bl->ao);
     struct termios termattr;
     int r, rc;
@@ -141,6 +142,7 @@ static const char *bootloader_result_command(libxl__gc *gc, const char *buf,
 static int parse_bootloader_result(libxl__egc *egc,
                                    libxl__bootloader_state *bl)
 {
+    USE(egc);
     STATE_AO_GC(bl->ao);
     char buf[PATH_MAX*2];
     FILE *f = 0;
@@ -221,6 +223,7 @@ void libxl__bootloader_init(libxl__bootloader_state *bl)
 
 static void bootloader_cleanup(libxl__egc *egc, libxl__bootloader_state *bl)
 {
+    USE(egc);
     STATE_AO_GC(bl->ao);
     int i;
 
@@ -256,6 +259,8 @@ static void bootloader_local_detached_cb(libxl__egc *egc,
 static void bootloader_callback(libxl__egc *egc, libxl__bootloader_state *bl,
                                 int rc)
 {
+    USE(egc);
+
     if (!bl->rc)
         bl->rc = rc;
 
@@ -285,6 +290,7 @@ static void bootloader_local_detached_cb(libxl__egc *egc,
 static void bootloader_stop(libxl__egc *egc,
                              libxl__bootloader_state *bl, int rc)
 {
+    USE(egc);
     STATE_AO_GC(bl->ao);
     int r;
 
diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index 5f0d26f..5d56d67 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -62,6 +62,8 @@ void libxl_domain_config_dispose(libxl_domain_config *d_config)
 int libxl__domain_create_info_setdefault(libxl__gc *gc,
                                          libxl_domain_create_info *c_info)
 {
+    USE(gc);
+
     if (!c_info->type)
         return ERROR_INVAL;
 
diff --git a/tools/libxl/libxl_device.c b/tools/libxl/libxl_device.c
index 9fc63f1..80c5511 100644
--- a/tools/libxl/libxl_device.c
+++ b/tools/libxl/libxl_device.c
@@ -44,6 +44,8 @@ int libxl__parse_backend_path(libxl__gc *gc,
                               const char *path,
                               libxl__device *dev)
 {
+    USE(gc);
+
     /* /local/domain/<domid>/backend/<kind>/<domid>/<devid> */
     char strkind[16]; /* Longest is actually "console" */
     int rc = sscanf(path, "/local/domain/%d/backend/%15[^/]/%u/%d",
@@ -796,8 +798,7 @@ out:
     return;
 }
 
-static void device_qemu_timeout(libxl__egc *egc, libxl__ev_time *ev,
-                                const struct timeval *requested_abs)
+static void device_qemu_timeout(EV_TIME_CALLBACK_PARAMS(egc, ev, requested_abs))
 {
     libxl__ao_device *aodev = CONTAINER_OF(ev, *aodev, timeout);
     STATE_AO_GC(aodev->ao);
@@ -919,8 +920,8 @@ out:
     return;
 }
 
-static void device_hotplug_timeout_cb(libxl__egc *egc, libxl__ev_time *ev,
-                                      const struct timeval *requested_abs)
+static void device_hotplug_timeout_cb(EV_TIME_CALLBACK_PARAMS
+                                      (egc, ev, requested_abs))
 {
     libxl__ao_device *aodev = CONTAINER_OF(ev, *aodev, timeout);
     STATE_AO_GC(aodev->ao);
diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
index 0c0084f..6995306 100644
--- a/tools/libxl/libxl_dm.c
+++ b/tools/libxl/libxl_dm.c
@@ -640,6 +640,7 @@ static int libxl__vfb_and_vkb_from_hvm_guest_config(libxl__gc *gc,
                                         libxl_device_vfb *vfb,
                                         libxl_device_vkb *vkb)
 {
+    USE(gc);
     const libxl_domain_build_info *b_info = &guest_config->b_info;
 
     if (b_info->type != LIBXL_DOMAIN_TYPE_HVM)
@@ -1177,6 +1178,7 @@ static void device_model_confirm(libxl__egc *egc, libxl__spawn_state *spawn,
 {
     libxl__dm_spawn_state *dmss = CONTAINER_OF(spawn, *dmss, spawn);
     STATE_AO_GC(spawn->ao);
+    USE(egc);
 
     if (!xsdata)
         return;
@@ -1262,6 +1264,7 @@ int libxl__need_xenpv_qemu(libxl__gc *gc,
         int nr_vfbs, libxl_device_vfb *vfbs,
         int nr_disks, libxl_device_disk *disks)
 {
+    USE(gc);
     int i, ret = 0;
 
     /*
@@ -1283,6 +1286,7 @@ int libxl__need_xenpv_qemu(libxl__gc *gc,
     }
 
     if (nr_vfbs > 0) {
+        USE(vfbs);
         ret = 1;
         goto out;
     }
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index 06d5e4f..4a36652 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -756,8 +756,8 @@ void libxl__domain_suspend_common_switch_qemu_logdirty
     switch_logdirty_done(egc,dss,-1);
 }
 
-static void switch_logdirty_timeout(libxl__egc *egc, libxl__ev_time *ev,
-                                    const struct timeval *requested_abs)
+static void switch_logdirty_timeout(EV_TIME_CALLBACK_PARAMS
+                                    (egc, ev, requested_abs))
 {
     libxl__domain_suspend_state *dss = CONTAINER_OF(ev, *dss, logdirty.timeout);
     STATE_AO_GC(dss->ao);
@@ -765,8 +765,8 @@ static void switch_logdirty_timeout(libxl__egc *egc, libxl__ev_time *ev,
     switch_logdirty_done(egc,dss,-1);
 }
 
-static void switch_logdirty_xswatch(libxl__egc *egc, libxl__ev_xswatch *watch,
-                            const char *watch_path, const char *event_path)
+static void switch_logdirty_xswatch(EV_XSWATCH_CALLBACK_PARAMS
+                                    (egc, watch, wpath, epath))
 {
     libxl__domain_suspend_state *dss =
         CONTAINER_OF(watch, *dss, logdirty.watch);
diff --git a/tools/libxl/libxl_event.c b/tools/libxl/libxl_event.c
index 939906c..3ec9361 100644
--- a/tools/libxl/libxl_event.c
+++ b/tools/libxl/libxl_event.c
@@ -197,6 +197,8 @@ static void time_done_debug(libxl__gc *gc, const char *func,
                "ev_time=%p done rc=%d .func=%p infinite=%d abs=%lu.%06lu",
                ev, rc, ev->func, ev->infinite,
                (unsigned long)ev->abs.tv_sec, (unsigned long)ev->abs.tv_usec);
+#else
+    USE(gc); USE(func); USE(ev); USE(rc);
 #endif
 }
 
@@ -381,8 +383,7 @@ static void libxl__set_watch_slot_contents(libxl__ev_watch_slot *slot,
     slot->empty.sle_next = (void*)w;
 }
 
-static void watchfd_callback(libxl__egc *egc, libxl__ev_fd *ev,
-                             int fd, short events, short revents)
+static void watchfd_callback(EV_FD_CALLBACK_PARAMS(egc,ev, fd,events,revents))
 {
     EGC_GC;
 
@@ -571,8 +572,8 @@ void libxl__ev_xswatch_deregister(libxl__gc *gc, libxl__ev_xswatch *w)
  * waiting for device state
  */
 
-static void devstate_watch_callback(libxl__egc *egc, libxl__ev_xswatch *watch,
-                                const char *watch_path, const char *event_path)
+static void devstate_watch_callback(EV_XSWATCH_CALLBACK_PARAMS
+                                    (egc, watch, watch_path, event_path))
 {
     EGC_GC;
     libxl__ev_devstate *ds = CONTAINER_OF(watch, *ds, watch);
@@ -605,8 +606,7 @@ static void devstate_watch_callback(libxl__egc *egc, libxl__ev_xswatch *watch,
     ds->callback(egc, ds, rc);
 }
 
-static void devstate_timeout(libxl__egc *egc, libxl__ev_time *ev,
-                             const struct timeval *requested_abs)
+static void devstate_timeout(EV_TIME_CALLBACK_PARAMS(egc, ev, requested_abs))
 {
     EGC_GC;
     libxl__ev_devstate *ds = CONTAINER_OF(ev, *ds, timeout);
@@ -662,8 +662,8 @@ int libxl__ev_devstate_wait(libxl__gc *gc, libxl__ev_devstate *ds,
  * futile.
  */
 
-static void domaindeathcheck_callback(libxl__egc *egc, libxl__ev_xswatch *w,
-                            const char *watch_path, const char *event_path)
+static void domaindeathcheck_callback(EV_XSWATCH_CALLBACK_PARAMS
+                                      (egc, w, watch_path, event_path))
 {
     libxl__domaindeathcheck *dc = CONTAINER_OF(w, *dc, watch);
     EGC_GC;
@@ -1019,6 +1019,7 @@ void libxl_osevent_occurred_fd(libxl_ctx *ctx, void *for_libxl,
 
     assert(fd == ev->fd);
     revents &= ev->events;
+    USE(events); /* we use our own idea of what we asked for */
     if (revents)
         ev->func(egc, ev, fd, ev->events, revents);
 
@@ -1151,6 +1152,8 @@ void libxl__event_occurred(libxl__egc *egc, libxl_event *event)
 
 void libxl_event_free(libxl_ctx *ctx, libxl_event *event)
 {
+    USE(ctx);
+
     /* Exceptionally, this function may be called from libxl, with ctx==0 */
     libxl_event_dispose(event);
     free(event);
@@ -1648,7 +1651,8 @@ int libxl__ao_inprogress(libxl__ao *ao,
  * for how.  But we want to copy *how.  So we have this dummy function
  * whose address is stored in callback if the app passed how==NULL. */
 static void dummy_asyncprogress_callback_ignore
-  (libxl_ctx *ctx, libxl_event *ev, void *for_callback) { }
+  (libxl_ctx *ctx, libxl_event *ev, void *for_callback)
+    { USE(ctx); USE(ev); USE(for_callback); }
 
 void libxl__ao_progress_gethow(libxl_asyncprogress_how *in_state,
                                const libxl_asyncprogress_how *from_app) {
diff --git a/tools/libxl/libxl_exec.c b/tools/libxl/libxl_exec.c
index 0477386..ed6b44e 100644
--- a/tools/libxl/libxl_exec.c
+++ b/tools/libxl/libxl_exec.c
@@ -280,6 +280,7 @@ void libxl__spawn_init(libxl__spawn_state *ss)
 
 int libxl__spawn_spawn(libxl__egc *egc, libxl__spawn_state *ss)
 {
+    USE(egc);
     STATE_AO_GC(ss->ao);
     int r;
     pid_t child;
@@ -387,8 +388,7 @@ static void spawn_fail(libxl__egc *egc, libxl__spawn_state *ss)
     spawn_detach(gc, ss);
 }
 
-static void spawn_timeout(libxl__egc *egc, libxl__ev_time *ev,
-                          const struct timeval *requested_abs)
+static void spawn_timeout(EV_TIME_CALLBACK_PARAMS(egc, ev, requested_abs))
 {
     /* Before event, was Attached. */
     EGC_GC;
@@ -397,8 +397,8 @@ static void spawn_timeout(libxl__egc *egc, libxl__ev_time *ev,
     spawn_fail(egc, ss); /* must be last */
 }
 
-static void spawn_watch_event(libxl__egc *egc, libxl__ev_xswatch *xsw,
-                              const char *watch_path, const char *event_path)
+static void spawn_watch_event(EV_XSWATCH_CALLBACK_PARAMS
+                              (egc, xsw, wpath, epath))
 {
     /* On entry, is Attached. */
     EGC_GC;
diff --git a/tools/libxl/libxl_fork.c b/tools/libxl/libxl_fork.c
index 044ddad..0379604 100644
--- a/tools/libxl/libxl_fork.c
+++ b/tools/libxl/libxl_fork.c
@@ -157,6 +157,7 @@ int libxl__carefd_fd(const libxl__carefd *cf)
 
 static void sigchld_handler(int signo)
 {
+    USE(signo);
     int e = libxl__self_pipe_wakeup(sigchld_owner->sigchld_selfpipe[1]);
     assert(!e); /* errors are probably EBADF, very bad */
 }
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index 6528694..2381388 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -101,6 +101,8 @@
 #define DISABLE_UDEV_PATH "libxl/disable_udev"
 
 #define ARRAY_SIZE(a) (sizeof(a) / sizeof(a[0]))
+#define USE(var) ((void)(var))
+#define MAYBE_UNUSED __attribute__((unused))
 
 #define LIBXL__LOGGING_ENABLED
 
@@ -153,6 +155,14 @@ typedef void libxl__ev_fd_callback(libxl__egc *egc, libxl__ev_fd *ev,
    * It is not permitted to listen for the same or overlapping events
    * on the same fd using multiple different libxl__ev_fd's.
    */
+
+/* Declare your callback functions with this helper and you avoid unused
+ * parameter warnings (and don't have to list all the types either): */
+#define EV_FD_CALLBACK_PARAMS(egc, ev, fd, events, revents)             \
+     libxl__egc *egc MAYBE_UNUSED, libxl__ev_fd *ev MAYBE_UNUSED,       \
+     int fd MAYBE_UNUSED,                                               \
+     short events MAYBE_UNUSED, short revents MAYBE_UNUSED
+
 struct libxl__ev_fd {
     /* caller should include this in their own struct */
     /* read-only for caller, who may read only when registered: */
@@ -168,6 +178,11 @@ struct libxl__ev_fd {
 typedef struct libxl__ev_time libxl__ev_time;
 typedef void libxl__ev_time_callback(libxl__egc *egc, libxl__ev_time *ev,
                                      const struct timeval *requested_abs);
+
+#define EV_TIME_CALLBACK_PARAMS(egc, ev, requested_abs)                 \
+    libxl__egc *egc MAYBE_UNUSED, libxl__ev_time *ev MAYBE_UNUSED,      \
+    const struct timeval *requested_abs MAYBE_UNUSED
+
 struct libxl__ev_time {
     /* caller should include this in their own struct */
     /* read-only for caller, who may read only when registered: */
@@ -180,8 +195,13 @@ struct libxl__ev_time {
 };
 
 typedef struct libxl__ev_xswatch libxl__ev_xswatch;
-typedef void libxl__ev_xswatch_callback(libxl__egc *egc, libxl__ev_xswatch*,
-                            const char *watch_path, const char *event_path);
+typedef void libxl__ev_xswatch_callback(libxl__egc *egc,
+    libxl__ev_xswatch *watch, const char *watch_path, const char *event_path);
+
+#define EV_XSWATCH_CALLBACK_PARAMS(egc, watch, wpath, epath)    \
+    libxl__egc *egc MAYBE_UNUSED, libxl__ev_xswatch *watch MAYBE_UNUSED,            \
+    const char *wpath MAYBE_UNUSED, const char *epath MAYBE_UNUSED
+
 struct libxl__ev_xswatch {
     /* caller should include this in their own struct */
     /* read-only for caller, who may read only when registered: */
diff --git a/tools/libxl/libxl_pci.c b/tools/libxl/libxl_pci.c
index 9c92ae6..5e866f2 100644
--- a/tools/libxl/libxl_pci.c
+++ b/tools/libxl/libxl_pci.c
@@ -798,8 +798,11 @@ static int pci_multifunction_check(libxl__gc *gc, libxl_device_pci *pcidev, unsi
 
 static int pci_ins_check(libxl__gc *gc, uint32_t domid, const char *state, void *priv)
 {
+    USE(gc);
     char *orig_state = priv;
 
+    USE(domid);
+
     if ( !strcmp(state, "pci-insert-failed") )
         return -1;
     if ( !strcmp(state, "pci-inserted") )
@@ -1007,6 +1010,8 @@ static int libxl__device_pci_reset(libxl__gc *gc, unsigned int domain, unsigned
 
 int libxl__device_pci_setdefault(libxl__gc *gc, libxl_device_pci *pci)
 {
+    USE(gc);
+    USE(pci);
     return 0;
 }
 
diff --git a/tools/libxl/libxl_qmp.c b/tools/libxl/libxl_qmp.c
index e33b130..c354c71 100644
--- a/tools/libxl/libxl_qmp.c
+++ b/tools/libxl/libxl_qmp.c
@@ -46,6 +46,10 @@
 typedef int (*qmp_callback_t)(libxl__qmp_handler *qmp,
                               const libxl__json_object *tree,
                               void *opaque);
+#define QMP_CALLBACK_PARAMS(qmp, tree, opaque)          \
+    libxl__qmp_handler *qmp MAYBE_UNUSED,               \
+    const libxl__json_object *tree MAYBE_UNUSED,        \
+    void *opaque MAYBE_UNUSED
 
 typedef struct qmp_request_context {
     int rc;
@@ -109,9 +113,7 @@ static int store_serial_port_info(libxl__qmp_handler *qmp,
     return ret;
 }
 
-static int register_serials_chardev_callback(libxl__qmp_handler *qmp,
-                                             const libxl__json_object *o,
-                                             void *unused)
+static int register_serials_chardev_callback(QMP_CALLBACK_PARAMS(qmp,o,unused))
 {
     const libxl__json_object *obj = NULL;
     const libxl__json_object *label = NULL;
@@ -165,9 +167,7 @@ static int qmp_write_domain_console_item(libxl__gc *gc, int domid,
     return libxl__xs_write(gc, XBT_NULL, path, "%s", value);
 }
 
-static int qmp_register_vnc_callback(libxl__qmp_handler *qmp,
-                                     const libxl__json_object *o,
-                                     void *unused)
+static int qmp_register_vnc_callback(QMP_CALLBACK_PARAMS(qmp,o,unused))
 {
     GC_INIT(qmp->ctx);
     const libxl__json_object *obj;
@@ -203,8 +203,7 @@ out:
     return rc;
 }
 
-static int qmp_capabilities_callback(libxl__qmp_handler *qmp,
-                                     const libxl__json_object *o, void *unused)
+static int qmp_capabilities_callback(QMP_CALLBACK_PARAMS(qmp,o,unused))
 {
     qmp->connected = true;
 
@@ -228,6 +227,8 @@ static int enable_qmp_capabilities(libxl__qmp_handler *qmp)
 static libxl__qmp_message_type qmp_response_type(libxl__qmp_handler *qmp,
                                                  const libxl__json_object *o)
 {
+    USE(qmp);
+
     libxl__qmp_message_type type;
     libxl__json_map_node *node = NULL;
     int i = 0;
diff --git a/tools/libxl/libxl_save_callout.c b/tools/libxl/libxl_save_callout.c
index 078b7ee..78bb67e 100644
--- a/tools/libxl/libxl_save_callout.c
+++ b/tools/libxl/libxl_save_callout.c
@@ -252,8 +252,8 @@ static void helper_failed(libxl__egc *egc, libxl__save_helper_state *shs,
                 (unsigned long)shs->child.pid);
 }
 
-static void helper_stdout_readable(libxl__egc *egc, libxl__ev_fd *ev,
-                                   int fd, short events, short revents)
+static void helper_stdout_readable(EV_FD_CALLBACK_PARAMS
+                                   (egc, ev, fd, events, revents))
 {
     libxl__save_helper_state *shs = CONTAINER_OF(ev, *shs, readable);
     STATE_AO_GC(shs->ao);
diff --git a/tools/libxl/libxl_utils.c b/tools/libxl/libxl_utils.c
index f7b44a0..a7c34a9 100644
--- a/tools/libxl/libxl_utils.c
+++ b/tools/libxl/libxl_utils.c
@@ -230,6 +230,7 @@ out:
 
 int libxl_string_to_backend(libxl_ctx *ctx, char *s, libxl_disk_backend *backend)
 {
+    USE(ctx);
     char *p;
     int rc = 0;
 
@@ -513,6 +514,7 @@ void libxl_bitmap_dispose(libxl_bitmap *map)
 void libxl_bitmap_copy(libxl_ctx *ctx, libxl_bitmap *dptr,
                        const libxl_bitmap *sptr)
 {
+    USE(ctx);
     int sz;
 
     assert(dptr->size == sptr->size);
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 17:28:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 17:28:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwzBM-0007t8-Gi; Thu, 02 Aug 2012 17:27:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SwzBI-0007q1-J1
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 17:27:28 +0000
Received: from [85.158.143.99:51971] by server-3.bemta-4.messagelabs.com id
	AA/E9-01511-088BA105; Thu, 02 Aug 2012 17:27:28 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1343928445!29407608!8
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21122 invoked from network); 2 Aug 2012 17:27:26 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-3.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 17:27:26 -0000
X-IronPort-AV: E=Sophos;i="4.77,702,1336348800"; d="scan'208";a="13828368"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 17:27:25 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 2 Aug 2012 18:27:25 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1SwzBF-0001vb-3D; Thu, 02 Aug 2012 17:27:25 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1SwzBF-0006GS-2R;
	Thu, 02 Aug 2012 18:27:25 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 2 Aug 2012 18:27:17 +0100
Message-ID: <1343928442-23966-9-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1343928442-23966-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1343928442-23966-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [Xen-devel] [PATCH 08/13] libxl: remus: mark TODOs more clearly
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Change the TODOs in the remus code to "REMUS TODO", which will make
them easier to grep for later.  AIUI all of these are essential for
use of remus in production.

Also add a new TODO and a new assert, to check rc on entry to
remus_checkpoint_dm_saved.
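
As an illustration of the intent, once the markers are uniform the
outstanding items are one grep away.  This is a hypothetical sketch
(the /tmp path and sample file are invented for the demo, not from
the patch):

```shell
# Create a throwaway tree containing one marked work item.
mkdir -p /tmp/remus_demo
cat > /tmp/remus_demo/example.c <<'EOF'
/* REMUS TODO: Issue disk and network checkpoint reqs. */
EOF

# All outstanding Remus work items can now be located recursively,
# with filenames and line numbers:
grep -rn "REMUS TODO" /tmp/remus_demo
```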

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
---
 tools/libxl/libxl_dom.c |    9 +++++----
 1 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index d749983..06d5e4f 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -1110,7 +1110,7 @@ int libxl__toolstack_save(uint32_t domid, uint8_t **buf,
 
 static int libxl__remus_domain_suspend_callback(void *data)
 {
-    /* TODO: Issue disk and network checkpoint reqs. */
+    /* REMUS TODO: Issue disk and network checkpoint reqs. */
     return libxl__domain_suspend_common_callback(data);
 }
 
@@ -1124,7 +1124,7 @@ static int libxl__remus_domain_resume_callback(void *data)
     if (libxl_domain_resume(CTX, dss->domid, /* Fast Suspend */1))
         return 0;
 
-    /* TODO: Deal with disk. Start a new network output buffer */
+    /* REMUS TODO: Deal with disk. Start a new network output buffer */
     return 1;
 }
 
@@ -1151,8 +1151,9 @@ static void libxl__remus_domain_checkpoint_callback(void *data)
 static void remus_checkpoint_dm_saved(libxl__egc *egc,
                                       libxl__domain_suspend_state *dss, int rc)
 {
-    /* TODO: Wait for disk and memory ack, release network buffer */
-    /* TODO: make this asynchronous */
+    /* REMUS TODO: Wait for disk and memory ack, release network buffer */
+    /* REMUS TODO: make this asynchronous */
+    assert(!rc); /* REMUS TODO handle this error properly */
     usleep(dss->interval * 1000);
     libxl__xc_domain_saverestore_async_callback_done(egc, &dss->shs, 1);
 }
-- 
1.7.2.5



From xen-devel-bounces@lists.xen.org Thu Aug 02 17:28:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 17:28:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwzBL-0007sc-No; Thu, 02 Aug 2012 17:27:31 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SwzBH-0007qA-Ux
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 17:27:28 +0000
Received: from [85.158.139.83:57153] by server-3.bemta-5.messagelabs.com id
	6F/54-03367-F78BA105; Thu, 02 Aug 2012 17:27:27 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-2.tower-182.messagelabs.com!1343928446!29952258!3
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8492 invoked from network); 2 Aug 2012 17:27:26 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-2.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 17:27:26 -0000
X-IronPort-AV: E=Sophos;i="4.77,702,1336348800"; d="scan'208";a="13828373"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 17:27:25 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 2 Aug 2012 18:27:25 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1SwzBF-0001vf-5U; Thu, 02 Aug 2012 17:27:25 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1SwzBF-0006Ga-4V;
	Thu, 02 Aug 2012 18:27:25 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 2 Aug 2012 18:27:19 +0100
Message-ID: <1343928442-23966-11-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1343928442-23966-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1343928442-23966-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH 10/13] libxl: idl: always initialise KeyedEnum
	keyvar in member init
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

libxl: idl: always initialise the KeyedEnum keyvar in the member init function

Previously we only initialised it if an explicit keyvar_init_val was
given, but not if the default was implicitly 0.

In the generated code this only changes the unused libxl_event_init_type
function:

 void libxl_event_init_type(libxl_event *p, libxl_event_type type)
 {
+    assert(!p->type);
+    p->type = type;
     switch (p->type) {
     case LIBXL_EVENT_TYPE_DOMAIN_SHUTDOWN:
         break;

However, I think it is wrong that this function is unused: it and
libxl_event_init should be used by libxl__event_new. As it happens,
both currently amount to a memset to zero, but for correctness we
should use the init functions (in case the IDL changes).

In the generator we also need to handle init_val == 0 properly; the
current if statements incorrectly treat it as False. This doesn't
actually have any impact on the generated code.
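
The truthiness trap the generator change avoids can be shown in
isolation (a hedged sketch; the pick_init_val_* helpers are
hypothetical stand-ins for the generator's logic, not functions from
gentypes.py):

```python
def pick_init_val_buggy(init_val):
    # Truth-testing conflates 0 with "no value given": 0 is falsy in Python.
    return init_val if init_val else None

def pick_init_val_fixed(init_val):
    # Only None means "no value given"; 0 is a legitimate initial value.
    return init_val if init_val is not None else None

print(pick_init_val_buggy(0))   # None -- 0 silently dropped
print(pick_init_val_fixed(0))   # 0
```

This is why the patch changes `if ku.keyvar.init_val:` to
`if ku.keyvar.init_val is not None:`.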

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 tools/libxl/gentypes.py   |   11 +++++++----
 tools/libxl/libxl_event.c |    5 ++++-
 2 files changed, 11 insertions(+), 5 deletions(-)

diff --git a/tools/libxl/gentypes.py b/tools/libxl/gentypes.py
index 1d13201..30f29ba 100644
--- a/tools/libxl/gentypes.py
+++ b/tools/libxl/gentypes.py
@@ -162,17 +162,20 @@ def libxl_C_type_member_init(ty, field):
                                 ku.keyvar.type.make_arg(ku.keyvar.name))
     s += "{\n"
     
-    if ku.keyvar.init_val:
+    if ku.keyvar.init_val is not None:
         init_val = ku.keyvar.init_val
-    elif ku.keyvar.type.init_val:
+    elif ku.keyvar.type.init_val is not None:
         init_val = ku.keyvar.type.init_val
     else:
         init_val = None
         
+    (nparent,fexpr) = ty.member(ty.pass_arg("p"), ku.keyvar, isref=True)
     if init_val is not None:
-        (nparent,fexpr) = ty.member(ty.pass_arg("p"), ku.keyvar, isref=True)
         s += "    assert(%s == %s);\n" % (fexpr, init_val)
-        s += "    %s = %s;\n" % (fexpr, ku.keyvar.name)
+    else:
+        s += "    assert(!%s);\n" % (fexpr)
+    s += "    %s = %s;\n" % (fexpr, ku.keyvar.name)
+
     (nparent,fexpr) = ty.member(ty.pass_arg("p"), field, isref=True)
     s += _libxl_C_type_init(ku, fexpr, parent=nparent, subinit=True)
     s += "}\n"
diff --git a/tools/libxl/libxl_event.c b/tools/libxl/libxl_event.c
index 1af64c8..939906c 100644
--- a/tools/libxl/libxl_event.c
+++ b/tools/libxl/libxl_event.c
@@ -1163,7 +1163,10 @@ libxl_event *libxl__event_new(libxl__egc *egc,
     libxl_event *ev;
 
     ev = libxl__zalloc(NOGC,sizeof(*ev));
-    ev->type = type;
+
+    libxl_event_init(ev);
+    libxl_event_init_type(ev, type);
+
     ev->domid = domid;
 
     return ev;
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 17:28:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 17:28:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwzBK-0007rf-Bq; Thu, 02 Aug 2012 17:27:30 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SwzBH-0007q0-N7
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 17:27:27 +0000
Received: from [85.158.143.99:8952] by server-1.bemta-4.messagelabs.com id
	EE/5E-24392-E78BA105; Thu, 02 Aug 2012 17:27:26 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1343928445!29407608!5
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21103 invoked from network); 2 Aug 2012 17:27:26 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-3.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 17:27:26 -0000
X-IronPort-AV: E=Sophos;i="4.77,702,1336348800"; d="scan'208";a="13828365"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 17:27:25 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 2 Aug 2012 18:27:25 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1SwzBF-0001vU-1S; Thu, 02 Aug 2012 17:27:25 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1SwzBE-0006GC-Uj;
	Thu, 02 Aug 2012 18:27:25 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 2 Aug 2012 18:27:13 +0100
Message-ID: <1343928442-23966-5-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1343928442-23966-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1343928442-23966-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [Xen-devel] [PATCH 04/13] libxl: fix formatting of
	DEFINE_DEVICES_ADD
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

These lines were exactly 80 columns wide, which produces hideous wrap
damage in an 80-column emacs.  Reformat using emacs's C-c \,
which puts the \ in column 72 (by default) where possible.

Whitespace change only.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
---
 tools/libxl/libxl_device.c |   26 +++++++++++++-------------
 1 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/tools/libxl/libxl_device.c b/tools/libxl/libxl_device.c
index 79dd502..4a53181 100644
--- a/tools/libxl/libxl_device.c
+++ b/tools/libxl/libxl_device.c
@@ -483,19 +483,19 @@ void libxl__multidev_prepared(libxl__egc *egc, libxl__ao_devices *aodevs,
  * libxl__add_nics
  */
 
-#define DEFINE_DEVICES_ADD(type)                                               \
-    void libxl__add_##type##s(libxl__egc *egc, libxl__ao *ao, uint32_t domid,  \
-                              int start, libxl_domain_config *d_config,        \
-                              libxl__ao_devices *aodevs)                       \
-    {                                                                          \
-        AO_GC;                                                                 \
-        int i;                                                                 \
-        int end = start + d_config->num_##type##s;                             \
-        for (i = start; i < end; i++) {                                        \
-            libxl__ao_device *aodev = libxl__multidev_prepare(aodevs);         \
-            libxl__device_##type##_add(egc, domid, &d_config->type##s[i-start],\
-                                       aodev);                                 \
-        }                                                                      \
+#define DEFINE_DEVICES_ADD(type)                                        \
+    void libxl__add_##type##s(libxl__egc *egc, libxl__ao *ao, uint32_t domid, \
+                              int start, libxl_domain_config *d_config, \
+                              libxl__ao_devices *aodevs)                \
+    {                                                                   \
+        AO_GC;                                                          \
+        int i;                                                          \
+        int end = start + d_config->num_##type##s;                      \
+        for (i = start; i < end; i++) {                                 \
+            libxl__ao_device *aodev = libxl__multidev_prepare(aodevs);  \
+            libxl__device_##type##_add(egc, domid, &d_config->type##s[i-start], \
+                                       aodev);                          \
+        }                                                               \
     }
 
 DEFINE_DEVICES_ADD(disk)
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 17:28:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 17:28:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwzBI-0007qf-MQ; Thu, 02 Aug 2012 17:27:28 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SwzBG-0007q0-Ls
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 17:27:26 +0000
Received: from [85.158.143.99:51881] by server-1.bemta-4.messagelabs.com id
	5D/5E-24392-D78BA105; Thu, 02 Aug 2012 17:27:25 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1343928445!29407608!2
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21081 invoked from network); 2 Aug 2012 17:27:25 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-3.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 17:27:25 -0000
X-IronPort-AV: E=Sophos;i="4.77,702,1336348800"; d="scan'208";a="13828362"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 17:27:25 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 2 Aug 2012 18:27:25 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1SwzBE-0001vO-SG; Thu, 02 Aug 2012 17:27:24 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1SwzBE-0006Fu-P5;
	Thu, 02 Aug 2012 18:27:24 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 2 Aug 2012 18:27:10 +0100
Message-ID: <1343928442-23966-2-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1343928442-23966-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1343928442-23966-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [Xen-devel] [PATCH 01/13] libxl: unify libxl__device_destroy and
	device_hotplug_done
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

device_hotplug_done contains an open-coded but improved version of
libxl__device_destroy.  So move the contents of device_hotplug_done
into libxl__device_destroy, deleting the old code, and replace it at
its old location with a function call.

Add the missing call to libxl__xs_transaction_abort, which was
present in neither version. (Technically speaking it is always a
no-op with the code as it stands, because no-one does "goto out"
other than after libxl__xs_transaction_start or _commit.)
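
The retry idiom that libxl__device_destroy adopts here can be
sketched generically (a Python sketch against a hypothetical
in-memory store; Store, Txn and destroy_paths are illustrative names,
not libxl or xenstore API):

```python
class Conflict(Exception):
    """Commit lost a race with a concurrent writer (xenstore's EAGAIN analogue)."""

class Store:
    """Minimal in-memory stand-in for xenstore; purely illustrative."""
    def __init__(self):
        self.data = {}
        self.version = 0
    def begin(self):
        return Txn(self)

class Txn:
    def __init__(self, store):
        self.store = store
        self.start_version = store.version
        self.deletes = []
    def delete(self, path):
        self.deletes.append(path)
    def commit(self):
        if self.store.version != self.start_version:
            raise Conflict()      # someone else wrote in the meantime
        for p in self.deletes:
            self.store.data.pop(p, None)
        self.store.version += 1

def destroy_paths(store, paths):
    # Shape of the new loop: restart the whole transaction on a
    # commit conflict, succeed once a commit goes through.
    while True:
        t = store.begin()
        for p in paths:
            t.delete(p)
        try:
            t.commit()
            return 0
        except Conflict:
            continue              # EAGAIN: retry from scratch

s = Store()
s.data = {"/local/frontend": "f", "/local/backend": "b", "/other": "x"}
destroy_paths(s, ["/local/frontend", "/local/backend"])
print(s.data)   # {'/other': 'x'}
```

In the C version the abort on the `out:` path then guarantees no
transaction is left open whichever way the loop exits.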

Also fix the error handling: the rc from the destroy should be
propagated into the aodev.

Reported-by: Ian Campbell <Ian.Campbell@citrix.com>
Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>

-
Changes in v5 of series:
 * Also add missing xs abort.
---
 tools/libxl/libxl_device.c |   36 +++++++++++++-----------------------
 1 files changed, 13 insertions(+), 23 deletions(-)

diff --git a/tools/libxl/libxl_device.c b/tools/libxl/libxl_device.c
index da0c3ea..95b169e 100644
--- a/tools/libxl/libxl_device.c
+++ b/tools/libxl/libxl_device.c
@@ -513,22 +513,24 @@ int libxl__device_destroy(libxl__gc *gc, libxl__device *dev)
     char *be_path = libxl__device_backend_path(gc, dev);
     char *fe_path = libxl__device_frontend_path(gc, dev);
     xs_transaction_t t = 0;
-    int rc = 0;
+    int rc;
+
+    for (;;) {
+        rc = libxl__xs_transaction_start(gc, &t);
+        if (rc) goto out;
 
-    do {
-        t = xs_transaction_start(CTX->xsh);
         libxl__xs_path_cleanup(gc, t, fe_path);
         libxl__xs_path_cleanup(gc, t, be_path);
-        rc = !xs_transaction_end(CTX->xsh, t, 0);
-    } while (rc && errno == EAGAIN);
-    if (rc) {
-        LOGE(ERROR, "unable to finish transaction");
-        goto out;
+
+        rc = libxl__xs_transaction_commit(gc, &t);
+        if (!rc) break;
+        if (rc < 0) goto out;
     }
 
     libxl__device_destroy_tapdisk(gc, be_path);
 
 out:
+    libxl__xs_transaction_abort(gc, &t);
     return rc;
 }
 
@@ -993,29 +995,17 @@ error:
 static void device_hotplug_done(libxl__egc *egc, libxl__ao_device *aodev)
 {
     STATE_AO_GC(aodev->ao);
-    char *be_path = libxl__device_backend_path(gc, aodev->dev);
-    char *fe_path = libxl__device_frontend_path(gc, aodev->dev);
-    xs_transaction_t t = 0;
     int rc;
 
     device_hotplug_clean(gc, aodev);
 
     /* Clean xenstore if it's a disconnection */
     if (aodev->action == DEVICE_DISCONNECT) {
-        for (;;) {
-            rc = libxl__xs_transaction_start(gc, &t);
-            if (rc) goto out;
-
-            libxl__xs_path_cleanup(gc, t, fe_path);
-            libxl__xs_path_cleanup(gc, t, be_path);
-
-            rc = libxl__xs_transaction_commit(gc, &t);
-            if (!rc) break;
-            if (rc < 0) goto out;
-        }
+        rc = libxl__device_destroy(gc, aodev->dev);
+        if (!aodev->rc)
+            aodev->rc = rc;
     }
 
-out:
     aodev->callback(egc, aodev);
     return;
 }
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 17:28:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 17:28:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwzBJ-0007qz-1n; Thu, 02 Aug 2012 17:27:29 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SwzBG-0007q1-TI
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 17:27:27 +0000
Received: from [85.158.143.99:51891] by server-3.bemta-4.messagelabs.com id
	E0/E9-01511-E78BA105; Thu, 02 Aug 2012 17:27:26 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1343928445!29407608!3
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21086 invoked from network); 2 Aug 2012 17:27:25 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-3.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 17:27:25 -0000
X-IronPort-AV: E=Sophos;i="4.77,702,1336348800"; d="scan'208";a="13828363"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 17:27:25 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 2 Aug 2012 18:27:25 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1SwzBE-0001vR-TK; Thu, 02 Aug 2012 17:27:24 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1SwzBE-0006G4-Rx;
	Thu, 02 Aug 2012 18:27:24 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 2 Aug 2012 18:27:11 +0100
Message-ID: <1343928442-23966-3-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1343928442-23966-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1343928442-23966-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 02/13] libxl: react correctly to bootloader pty
	master POLLHUP
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Receiving POLLHUP on the bootloader master pty is not an error.
Hopefully it means that the bootloader has exited and therefore the
pty slave side has no process group any more.  (At least NetBSD
indicates POLLHUP on the master in this case.)
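
The hang-up condition itself can be demonstrated with an ordinary
pipe (a sketch, not libxl code; on Linux, closing the write side
makes poll report POLLHUP on the read side, though the exact flags
are platform-dependent, as the NetBSD remark above suggests):

```python
import os
import select

r, w = os.pipe()
os.close(w)                     # the "peer" goes away, cf. the exiting bootloader

p = select.poll()
p.register(r, select.POLLIN)
fd, revents = p.poll(1000)[0]   # returns immediately with the hang-up event
print(bool(revents & select.POLLHUP))   # True on Linux
os.close(r)
```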

So send the bootloader SIGTERM; if it has already exited then this has
no effect (except that on some versions of NetBSD it erroneously
returns ESRCH and we print a harmless warning) and we will then
collect the bootloader's exit status and be satisfied.

However, we remember that we have done this so that, if we got
POLLHUP for some reason other than the bootloader exiting, we report
something resembling a useful message.

In order to implement this we need to provide a way for users of
datacopier to handle POLLHUP rather than treating it as fatal.

We rename bootloader_abort to bootloader_stop, since it no longer
applies only to error situations.

Signed-off-by: Roger Pau Monne <roger.pau@citrix.com>
Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>

-
Changes in v5:
 * Correctly call dc->callback_pollhup, not dc->callback,
   in datacopier_pollhup_handled.

Changes in v4:
 * Track whether we sent SIGTERM due to POLLHUP so we can report
   messages properly.

Changes in v3:
 * datacopier provides new interface for handling POLLHUP
 * Do not ignore errors on the xenconsole pty
 * Rename bootloader_abort.
---
 tools/libxl/libxl_aoutils.c    |   23 +++++++++++++++++++++++
 tools/libxl/libxl_bootloader.c |   39 +++++++++++++++++++++++++++++----------
 tools/libxl/libxl_internal.h   |    7 +++++--
 3 files changed, 57 insertions(+), 12 deletions(-)

diff --git a/tools/libxl/libxl_aoutils.c b/tools/libxl/libxl_aoutils.c
index 99972a2..983a60a 100644
--- a/tools/libxl/libxl_aoutils.c
+++ b/tools/libxl/libxl_aoutils.c
@@ -97,11 +97,31 @@ void libxl__datacopier_prefixdata(libxl__egc *egc, libxl__datacopier_state *dc,
     LIBXL_TAILQ_INSERT_TAIL(&dc->bufs, buf, entry);
 }
 
+static int datacopier_pollhup_handled(libxl__egc *egc,
+                                      libxl__datacopier_state *dc,
+                                      short revents, int onwrite)
+{
+    STATE_AO_GC(dc->ao);
+
+    if (dc->callback_pollhup && (revents & POLLHUP)) {
+        LOG(DEBUG, "received POLLHUP on %s during copy of %s",
+            onwrite ? dc->writewhat : dc->readwhat,
+            dc->copywhat);
+        libxl__datacopier_kill(dc);
+        dc->callback_pollhup(egc, dc, onwrite, -1);
+        return 1;
+    }
+    return 0;
+}
+
 static void datacopier_readable(libxl__egc *egc, libxl__ev_fd *ev,
                                 int fd, short events, short revents) {
     libxl__datacopier_state *dc = CONTAINER_OF(ev, *dc, toread);
     STATE_AO_GC(dc->ao);
 
+    if (datacopier_pollhup_handled(egc, dc, revents, 0))
+        return;
+
     if (revents & ~POLLIN) {
         LOG(ERROR, "unexpected poll event 0x%x (should be POLLIN)"
             " on %s during copy of %s", revents, dc->readwhat, dc->copywhat);
@@ -163,6 +183,9 @@ static void datacopier_writable(libxl__egc *egc, libxl__ev_fd *ev,
     libxl__datacopier_state *dc = CONTAINER_OF(ev, *dc, towrite);
     STATE_AO_GC(dc->ao);
 
+    if (datacopier_pollhup_handled(egc, dc, revents, 1))
+        return;
+
     if (revents & ~POLLOUT) {
         LOG(ERROR, "unexpected poll event 0x%x (should be POLLOUT)"
             " on %s during copy of %s", revents, dc->writewhat, dc->copywhat);
diff --git a/tools/libxl/libxl_bootloader.c b/tools/libxl/libxl_bootloader.c
index ef5a91b..bfc1b56 100644
--- a/tools/libxl/libxl_bootloader.c
+++ b/tools/libxl/libxl_bootloader.c
@@ -215,6 +215,7 @@ void libxl__bootloader_init(libxl__bootloader_state *bl)
     libxl__domaindeathcheck_init(&bl->deathcheck);
     bl->keystrokes.ao = bl->ao;  libxl__datacopier_init(&bl->keystrokes);
     bl->display.ao = bl->ao;     libxl__datacopier_init(&bl->display);
+    bl->got_pollhup = 0;
 }
 
 static void bootloader_cleanup(libxl__egc *egc, libxl__bootloader_state *bl)
@@ -275,7 +276,7 @@ static void bootloader_local_detached_cb(libxl__egc *egc,
 }
 
 /* might be called at any time, provided it's init'd */
-static void bootloader_abort(libxl__egc *egc,
+static void bootloader_stop(libxl__egc *egc,
                              libxl__bootloader_state *bl, int rc)
 {
     STATE_AO_GC(bl->ao);
@@ -285,8 +286,8 @@ static void bootloader_abort(libxl__egc *egc,
     libxl__datacopier_kill(&bl->display);
     if (libxl__ev_child_inuse(&bl->child)) {
         r = kill(bl->child.pid, SIGTERM);
-        if (r) LOGE(WARN, "after failure, failed to kill bootloader [%lu]",
-                    (unsigned long)bl->child.pid);
+        if (r) LOGE(WARN, "%sfailed to kill bootloader [%lu]",
+                    rc ? "after failure, " : "", (unsigned long)bl->child.pid);
     }
     bl->rc = rc;
 }
@@ -508,7 +509,10 @@ static void bootloader_gotptys(libxl__egc *egc, libxl__openpty_state *op)
     bl->keystrokes.maxsz = BOOTLOADER_BUF_OUT;
     bl->keystrokes.copywhat =
         GCSPRINTF("bootloader input for domain %"PRIu32, bl->domid);
-    bl->keystrokes.callback = bootloader_keystrokes_copyfail;
+    bl->keystrokes.callback =         bootloader_keystrokes_copyfail;
+    bl->keystrokes.callback_pollhup = bootloader_keystrokes_copyfail;
+        /* pollhup gets called with errnoval==-1 which is not otherwise
+         * possible since errnos are nonnegative, so it's unambiguous */
     rc = libxl__datacopier_start(&bl->keystrokes);
     if (rc) goto out;
 
@@ -516,7 +520,8 @@ static void bootloader_gotptys(libxl__egc *egc, libxl__openpty_state *op)
     bl->display.maxsz = BOOTLOADER_BUF_IN;
     bl->display.copywhat =
         GCSPRINTF("bootloader output for domain %"PRIu32, bl->domid);
-    bl->display.callback = bootloader_display_copyfail;
+    bl->display.callback =         bootloader_display_copyfail;
+    bl->display.callback_pollhup = bootloader_display_copyfail;
     rc = libxl__datacopier_start(&bl->display);
     if (rc) goto out;
 
@@ -562,30 +567,42 @@ static void bootloader_gotptys(libxl__egc *egc, libxl__openpty_state *op)
 
 /* perhaps one of these will be called, but perhaps not */
 static void bootloader_copyfail(libxl__egc *egc, const char *which,
-       libxl__bootloader_state *bl, int onwrite, int errnoval)
+        libxl__bootloader_state *bl, int ondisplay, int onwrite, int errnoval)
 {
     STATE_AO_GC(bl->ao);
+    int rc = ERROR_FAIL;
+
+    if (errnoval==-1) {
+        /* POLLHUP */
+        if (!!ondisplay != !!onwrite) {
+            rc = 0;
+            bl->got_pollhup = 1;
+        } else {
+            LOG(ERROR, "unexpected POLLHUP on %s", which);
+        }
+    }
     if (!onwrite && !errnoval)
         LOG(ERROR, "unexpected eof copying %s", which);
-    bootloader_abort(egc, bl, ERROR_FAIL);
+
+    bootloader_stop(egc, bl, rc);
 }
 static void bootloader_keystrokes_copyfail(libxl__egc *egc,
        libxl__datacopier_state *dc, int onwrite, int errnoval)
 {
     libxl__bootloader_state *bl = CONTAINER_OF(dc, *bl, keystrokes);
-    bootloader_copyfail(egc, "bootloader input", bl, onwrite, errnoval);
+    bootloader_copyfail(egc, "bootloader input", bl, 0, onwrite, errnoval);
 }
 static void bootloader_display_copyfail(libxl__egc *egc,
        libxl__datacopier_state *dc, int onwrite, int errnoval)
 {
     libxl__bootloader_state *bl = CONTAINER_OF(dc, *bl, display);
-    bootloader_copyfail(egc, "bootloader output", bl, onwrite, errnoval);
+    bootloader_copyfail(egc, "bootloader output", bl, 1, onwrite, errnoval);
 }
 
 static void bootloader_domaindeath(libxl__egc *egc, libxl__domaindeathcheck *dc)
 {
     libxl__bootloader_state *bl = CONTAINER_OF(dc, *bl, deathcheck);
-    bootloader_abort(egc, bl, ERROR_FAIL);
+    bootloader_stop(egc, bl, ERROR_FAIL);
 }
 
 static void bootloader_finished(libxl__egc *egc, libxl__ev_child *child,
@@ -599,6 +616,8 @@ static void bootloader_finished(libxl__egc *egc, libxl__ev_child *child,
     libxl__datacopier_kill(&bl->display);
 
     if (status) {
+        if (bl->got_pollhup && WIFSIGNALED(status) && WTERMSIG(status)==SIGTERM)
+            LOG(ERROR, "got POLLHUP, sent SIGTERM");
         LOG(ERROR, "bootloader failed - consult logfile %s", bl->logfile);
         libxl_report_child_exitstatus(CTX, XTL_ERROR, "bootloader",
                                       pid, status);
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index 58004b3..2d6c71a 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -2076,7 +2076,9 @@ typedef struct libxl__datacopier_buf libxl__datacopier_buf;
  *     errnoval==0 means we got eof and all data was written
  *     errnoval!=0 means we had a read error, logged
  * onwrite==-1 means some other internal failure, errnoval not valid, logged
- * in all cases copier is killed before calling this callback */
+ * If we get POLLHUP, we call callback_pollhup(..., onwrite, -1);
+ * or if callback_pollhup==0 this is an internal failure, as above.
+ * In all cases copier is killed before calling this callback */
 typedef void libxl__datacopier_callback(libxl__egc *egc,
      libxl__datacopier_state *dc, int onwrite, int errnoval);
 
@@ -2095,6 +2097,7 @@ struct libxl__datacopier_state {
     const char *copywhat, *readwhat, *writewhat; /* for error msgs */
     FILE *log; /* gets a copy of everything */
     libxl__datacopier_callback *callback;
+    libxl__datacopier_callback *callback_pollhup;
     /* remaining fields are private to datacopier */
     libxl__ev_fd toread, towrite;
     ssize_t used;
@@ -2279,7 +2282,7 @@ struct libxl__bootloader_state {
     int nargs, argsspace;
     const char **args;
     libxl__datacopier_state keystrokes, display;
-    int rc;
+    int rc, got_pollhup;
 };
 
 _hidden void libxl__bootloader_init(libxl__bootloader_state *bl);
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 17:28:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 17:28:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwzBN-0007u7-NZ; Thu, 02 Aug 2012 17:27:33 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SwzBJ-0007q0-1u
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 17:27:29 +0000
Received: from [85.158.143.99:51966] by server-1.bemta-4.messagelabs.com id
	B5/6E-24392-088BA105; Thu, 02 Aug 2012 17:27:28 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1343928445!29407608!6
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21108 invoked from network); 2 Aug 2012 17:27:26 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-3.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 17:27:26 -0000
X-IronPort-AV: E=Sophos;i="4.77,702,1336348800"; d="scan'208";a="13828366"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 17:27:25 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 2 Aug 2012 18:27:25 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1SwzBF-0001vY-1l; Thu, 02 Aug 2012 17:27:25 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1SwzBF-0006GG-0i;
	Thu, 02 Aug 2012 18:27:25 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 2 Aug 2012 18:27:14 +0100
Message-ID: <1343928442-23966-6-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1343928442-23966-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1343928442-23966-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [Xen-devel] [PATCH 05/13] libxl: abolish useless `start' parameter
	to libxl__add_*
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

0 is always passed for this parameter, and the code no longer
actually uses it.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
---
 tools/libxl/libxl_create.c   |    4 ++--
 tools/libxl/libxl_device.c   |    7 +++----
 tools/libxl/libxl_dm.c       |    4 ++--
 tools/libxl/libxl_internal.h |    8 +++-----
 4 files changed, 10 insertions(+), 13 deletions(-)

diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index 3265d69..5275373 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -911,7 +911,7 @@ static void domcreate_rebuild_done(libxl__egc *egc,
 
     libxl__multidev_begin(ao, &dcs->aodevs);
     dcs->aodevs.callback = domcreate_launch_dm;
-    libxl__add_disks(egc, ao, domid, 0, d_config, &dcs->aodevs);
+    libxl__add_disks(egc, ao, domid, d_config, &dcs->aodevs);
     libxl__multidev_prepared(egc, &dcs->aodevs, 0);
 
     return;
@@ -1041,7 +1041,7 @@ static void domcreate_devmodel_started(libxl__egc *egc,
         /* Attach nics */
         libxl__multidev_begin(ao, &dcs->aodevs);
         dcs->aodevs.callback = domcreate_attach_pci;
-        libxl__add_nics(egc, ao, domid, 0, d_config, &dcs->aodevs);
+        libxl__add_nics(egc, ao, domid, d_config, &dcs->aodevs);
         libxl__multidev_prepared(egc, &dcs->aodevs, 0);
         return;
     }
diff --git a/tools/libxl/libxl_device.c b/tools/libxl/libxl_device.c
index 4a53181..27fbd21 100644
--- a/tools/libxl/libxl_device.c
+++ b/tools/libxl/libxl_device.c
@@ -485,15 +485,14 @@ void libxl__multidev_prepared(libxl__egc *egc, libxl__ao_devices *aodevs,
 
 #define DEFINE_DEVICES_ADD(type)                                        \
     void libxl__add_##type##s(libxl__egc *egc, libxl__ao *ao, uint32_t domid, \
-                              int start, libxl_domain_config *d_config, \
+                              libxl_domain_config *d_config,            \
                               libxl__ao_devices *aodevs)                \
     {                                                                   \
         AO_GC;                                                          \
         int i;                                                          \
-        int end = start + d_config->num_##type##s;                      \
-        for (i = start; i < end; i++) {                                 \
+        for (i = 0; i < d_config->num_##type##s; i++) {                 \
             libxl__ao_device *aodev = libxl__multidev_prepare(aodevs);  \
-            libxl__device_##type##_add(egc, domid, &d_config->type##s[i-start], \
+            libxl__device_##type##_add(egc, domid, &d_config->type##s[i], \
                                        aodev);                          \
         }                                                               \
     }
diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
index 177642b..66aa45e 100644
--- a/tools/libxl/libxl_dm.c
+++ b/tools/libxl/libxl_dm.c
@@ -858,7 +858,7 @@ retry_transaction:
 
     libxl__multidev_begin(ao, &sdss->aodevs);
     sdss->aodevs.callback = spawn_stub_launch_dm;
-    libxl__add_disks(egc, ao, dm_domid, 0, dm_config, &sdss->aodevs);
+    libxl__add_disks(egc, ao, dm_domid, dm_config, &sdss->aodevs);
     libxl__multidev_prepared(egc, &sdss->aodevs, 0);
 
     free(args);
@@ -984,7 +984,7 @@ static void spawn_stubdom_pvqemu_cb(libxl__egc *egc,
     if (d_config->num_nics > 0) {
         libxl__multidev_begin(ao, &sdss->aodevs);
         sdss->aodevs.callback = stubdom_pvqemu_cb;
-        libxl__add_nics(egc, ao, dm_domid, 0, d_config, &sdss->aodevs);
+        libxl__add_nics(egc, ao, dm_domid, d_config, &sdss->aodevs);
         libxl__multidev_prepared(egc, &sdss->aodevs, 0);
         return;
     }
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index 07e92fb..bb3eb5f 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -2388,19 +2388,17 @@ _hidden void libxl__devices_destroy(libxl__egc *egc,
 /* Helper function to add a bunch of disks. This should be used when
  * the caller is inside an async op. "devices" will NOT be prepared by
  * this function, so the caller must make sure to call
- * libxl__multidev_begin before calling this function. The start
- * parameter contains the position inside the aodevs array that should
- * be used to store the state of this devices.
+ * libxl__multidev_begin before calling this function.
  *
  * The "callback" will be called for each device, and the user is responsible
  * for calling libxl__ao_device_check_last on the callback.
  */
 _hidden void libxl__add_disks(libxl__egc *egc, libxl__ao *ao, uint32_t domid,
-                              int start, libxl_domain_config *d_config,
+                              libxl_domain_config *d_config,
                               libxl__ao_devices *aodevs);
 
 _hidden void libxl__add_nics(libxl__egc *egc, libxl__ao *ao, uint32_t domid,
-                             int start, libxl_domain_config *d_config,
+                             libxl_domain_config *d_config,
                              libxl__ao_devices *aodevs);
 
 /*----- device model creation -----*/
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 17:28:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 17:28:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwzBK-0007rf-Bq; Thu, 02 Aug 2012 17:27:30 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SwzBH-0007q0-N7
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 17:27:27 +0000
Received: from [85.158.143.99:8952] by server-1.bemta-4.messagelabs.com id
	EE/5E-24392-E78BA105; Thu, 02 Aug 2012 17:27:26 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1343928445!29407608!5
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21103 invoked from network); 2 Aug 2012 17:27:26 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-3.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 17:27:26 -0000
X-IronPort-AV: E=Sophos;i="4.77,702,1336348800"; d="scan'208";a="13828365"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 17:27:25 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 2 Aug 2012 18:27:25 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1SwzBF-0001vU-1S; Thu, 02 Aug 2012 17:27:25 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1SwzBE-0006GC-Uj;
	Thu, 02 Aug 2012 18:27:25 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 2 Aug 2012 18:27:13 +0100
Message-ID: <1343928442-23966-5-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1343928442-23966-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1343928442-23966-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [Xen-devel] [PATCH 04/13] libxl: fix formatting of
	DEFINE_DEVICES_ADD
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

These lines were exactly 80 columns wide, which produces hideous wrap
damage in an 80-column emacs.  Reformat using emacs's C-c \,
which puts the \ in column 72 (by default) where possible.

Whitespace change only.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
---
 tools/libxl/libxl_device.c |   26 +++++++++++++-------------
 1 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/tools/libxl/libxl_device.c b/tools/libxl/libxl_device.c
index 79dd502..4a53181 100644
--- a/tools/libxl/libxl_device.c
+++ b/tools/libxl/libxl_device.c
@@ -483,19 +483,19 @@ void libxl__multidev_prepared(libxl__egc *egc, libxl__ao_devices *aodevs,
  * libxl__add_nics
  */
 
-#define DEFINE_DEVICES_ADD(type)                                               \
-    void libxl__add_##type##s(libxl__egc *egc, libxl__ao *ao, uint32_t domid,  \
-                              int start, libxl_domain_config *d_config,        \
-                              libxl__ao_devices *aodevs)                       \
-    {                                                                          \
-        AO_GC;                                                                 \
-        int i;                                                                 \
-        int end = start + d_config->num_##type##s;                             \
-        for (i = start; i < end; i++) {                                        \
-            libxl__ao_device *aodev = libxl__multidev_prepare(aodevs);         \
-            libxl__device_##type##_add(egc, domid, &d_config->type##s[i-start],\
-                                       aodev);                                 \
-        }                                                                      \
+#define DEFINE_DEVICES_ADD(type)                                        \
+    void libxl__add_##type##s(libxl__egc *egc, libxl__ao *ao, uint32_t domid, \
+                              int start, libxl_domain_config *d_config, \
+                              libxl__ao_devices *aodevs)                \
+    {                                                                   \
+        AO_GC;                                                          \
+        int i;                                                          \
+        int end = start + d_config->num_##type##s;                      \
+        for (i = start; i < end; i++) {                                 \
+            libxl__ao_device *aodev = libxl__multidev_prepare(aodevs);  \
+            libxl__device_##type##_add(egc, domid, &d_config->type##s[i-start], \
+                                       aodev);                          \
+        }                                                               \
     }
 
 DEFINE_DEVICES_ADD(disk)
-- 
1.7.2.5



From xen-devel-bounces@lists.xen.org Thu Aug 02 17:28:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 17:28:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwzBN-0007tr-AU; Thu, 02 Aug 2012 17:27:33 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SwzBI-0007qF-FZ
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 17:27:28 +0000
Received: from [85.158.143.99:9004] by server-2.bemta-4.messagelabs.com id
	B1/B6-17938-F78BA105; Thu, 02 Aug 2012 17:27:27 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1343928445!29407608!11
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21155 invoked from network); 2 Aug 2012 17:27:26 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-3.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 17:27:26 -0000
X-IronPort-AV: E=Sophos;i="4.77,702,1336348800"; d="scan'208";a="13828374"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 17:27:26 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 2 Aug 2012 18:27:25 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1SwzBF-0001vj-75; Thu, 02 Aug 2012 17:27:25 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1SwzBF-0006Gk-6B;
	Thu, 02 Aug 2012 18:27:25 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 2 Aug 2012 18:27:22 +0100
Message-ID: <1343928442-23966-14-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1343928442-23966-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1343928442-23966-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 13/13] libxl: -Wunused-parameter
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

*** DO NOT APPLY ***

We have recently had a couple of bugs which basically involved
ignoring the rc parameter to a callback function.  I thought I would
try -Wunused-parameter.  Here are the results.

I found three further problems:

 * libxl_wait_for_free_memory takes a domid parameter but its
   semantics don't seem to call for that.  However this function has
   an API stability warning, and we will leave it be in case the domid
   turns out to be useful later for some kind of compatibility bodge.

 * qmp_synchronous_send has an `ask_timeout' parameter which is
   ignored.

 * The autogenerated function libxl_event_init_type ignored the type
   parameter.  This is now fixed by Ian Campbell in an earlier
   patch in this series.

Things I needed to do to get the rest of the code to compile:

 * Remove one harmless unused parameter from an internal function.
   (Earlier in this series.)

 * Add an assert to make the error handling in the broken remus code
   slightly less broken.  (Earlier in this series.)

 * Provide machinery in the Makefile for passing different CFLAGS to
   libxl as opposed to xl and libxlu.  The flex- and bison-generated
   files in libxlu can't be compiled with -Wunused-parameter.

 * Define a new helper macro
        #define USE(var) ((void)(var))
   and use it 43 times.  The pattern is something like
        USE(egc);
   in a function which takes egc but doesn't need it.  If the
   parameter is later used, this is harmless.  In functions
   which are placeholders the USE statement should be placed in the
   middle of the function where the parameter would be used if the
   function is changed later, so that the USE gets deleted by the
   patch introducing the implementation.

 * Define a new helper macro for use only in other macros
        #define MAYBE_UNUSED __attribute__((unused))
   and use it in 10 different places.
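   As a sketch of how these two helpers fit together (the macro
   definitions are as given above; the surrounding function is a
   hypothetical example, not taken from libxl):

```c
#include <assert.h>

/* The two helpers described above. */
#define USE(var) ((void)(var))                /* for ordinary code */
#define MAYBE_UNUSED __attribute__((unused))  /* for use inside other macros */

/* Hypothetical placeholder function: its prototype requires gc and vkb,
 * but the current implementation needs neither parameter. */
static int example_vkb_setdefault(void *gc, void *vkb MAYBE_UNUSED)
{
    USE(gc);  /* evaluates gc and discards it, silencing -Wunused-parameter */
    return 0;
}
```

   Either form keeps -Wunused-parameter quiet without changing
   behaviour; USE suits function bodies, while MAYBE_UNUSED suits
   parameter lists generated by macros.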

 * Define new macros for helping declare common types of callback
   functions.  For example:

        #define EV_XSWATCH_CALLBACK_PARAMS(egc, watch, wpath, epath)  \
            libxl__egc *egc MAYBE_UNUSED,                             \
            libxl__ev_xswatch *watch MAYBE_UNUSED,                    \
            const char *wpath MAYBE_UNUSED,                           \
            const char *epath MAYBE_UNUSED

   which is used like this:

        -static void some_callback(libxl__egc *egc, libxl__ev_xswatch *watch,
        -                          const char *wpath, const char *epath)
        +static void some_callback(EV_XSWATCH_CALLBACK_PARAMS
        +                          (egc, watch, wpath, epath))
        {
            ... now we use (or not) egc, watch, wpath, etc. as we like

   This somewhat resembles a traditional K&R C typeless function
   definition.  The types of the parameters are of course actually
   defined for the compiler, along with the information that the
   parameters might be unused.

   There are 4 macros of this kind with 22 call sites.

IMO the cost (65 places in ordinary code where we have to write
something somewhat ugly) is worth the benefit (finding, if we had
deployed this right away, around 6 bugs).  But it's arguable.

*** DO NOT APPLY ***

Anyway, this patch must not be applied right now because it causes the
build to fail.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>

-
Changes in v5 of series:
 * Use REMUS TODO in comment.
 * Ignore domid parameter to libxl_wait_for_free_memory.
 * Do not unnecessarily but harmlessly USE(priv) in pci_ins_check.
 * Mention that libxl_event_init_type problem is now fixed.
---
 tools/libxl/Makefile             |    4 +++-
 tools/libxl/libxl.c              |   31 +++++++++++++++++++++++++++----
 tools/libxl/libxl_aoutils.c      |    8 ++++----
 tools/libxl/libxl_blktap2.c      |    1 +
 tools/libxl/libxl_bootloader.c   |    6 ++++++
 tools/libxl/libxl_create.c       |    2 ++
 tools/libxl/libxl_device.c       |    9 +++++----
 tools/libxl/libxl_dm.c           |    4 ++++
 tools/libxl/libxl_dom.c          |    8 ++++----
 tools/libxl/libxl_event.c        |   22 +++++++++++++---------
 tools/libxl/libxl_exec.c         |    8 ++++----
 tools/libxl/libxl_fork.c         |    1 +
 tools/libxl/libxl_internal.h     |   24 ++++++++++++++++++++++--
 tools/libxl/libxl_pci.c          |    5 +++++
 tools/libxl/libxl_qmp.c          |   17 +++++++++--------
 tools/libxl/libxl_save_callout.c |    4 ++--
 tools/libxl/libxl_utils.c        |    2 ++
 17 files changed, 114 insertions(+), 42 deletions(-)

diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
index 63a8157..f5ff115 100644
--- a/tools/libxl/Makefile
+++ b/tools/libxl/Makefile
@@ -13,6 +13,8 @@ XLUMINOR = 0
 
 CFLAGS += -Werror -Wno-format-zero-length -Wmissing-declarations \
 	-Wno-declaration-after-statement -Wformat-nonliteral
+CFLAGS_FOR_LIBXL = -Wunused-parameter
+
 CFLAGS += -I. -fPIC
 
 ifeq ($(CONFIG_Linux),y)
@@ -71,7 +73,7 @@ LIBXL_OBJS = flexarray.o libxl.o libxl_create.o libxl_dm.o libxl_pci.o \
 			libxl_qmp.o libxl_event.o libxl_fork.o $(LIBXL_OBJS-y)
 LIBXL_OBJS += _libxl_types.o libxl_flask.o _libxl_types_internal.o
 
-$(LIBXL_OBJS): CFLAGS += $(CFLAGS_libxenctrl) $(CFLAGS_libxenguest) $(CFLAGS_libxenstore) $(CFLAGS_libblktapctl) -include $(XEN_ROOT)/tools/config.h
+$(LIBXL_OBJS): CFLAGS += $(CFLAGS_FOR_LIBXL) $(CFLAGS_libxenctrl) $(CFLAGS_libxenguest) $(CFLAGS_libxenstore) $(CFLAGS_libblktapctl) -include $(XEN_ROOT)/tools/config.h
 
 AUTOINCS= libxlu_cfg_y.h libxlu_cfg_l.h _libxl_list.h _paths.h \
 	_libxl_save_msgs_callout.h _libxl_save_msgs_helper.h \
diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index 726a70e..4f387df 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -28,6 +28,8 @@ int libxl_ctx_alloc(libxl_ctx **pctx, int version,
     struct stat stat_buf;
     int rc;
 
+    USE(flags);
+
     if (version != LIBXL_VERSION) { rc = ERROR_VERSION; goto out; }
 
     ctx = malloc(sizeof(*ctx));
@@ -700,6 +702,8 @@ int libxl_domain_remus_start(libxl_ctx *ctx, libxl_domain_remus_info *info,
     libxl__domain_suspend_state *dss;
     int rc;
 
+    USE(recv_fd); /* REMUS TODO: get rid of this and actually use it! */
+
     libxl_domain_type type = libxl__domain_type(gc, domid);
     if (type == LIBXL_DOMAIN_TYPE_INVALID) {
         rc = ERROR_FAIL;
@@ -979,8 +983,9 @@ static void domain_death_occurred(libxl__egc *egc,
     LIBXL_TAILQ_INSERT_HEAD(&CTX->death_reported, evg, entry);
 }
 
-static void domain_death_xswatch_callback(libxl__egc *egc, libxl__ev_xswatch *w,
-                                        const char *wpath, const char *epath) {
+static void domain_death_xswatch_callback(EV_XSWATCH_CALLBACK_PARAMS
+                                          (egc, watch, wpath, epath))
+{
     EGC_GC;
     libxl_evgen_domain_death *evg;
     uint32_t domid;
@@ -1137,8 +1142,9 @@ void libxl_evdisable_domain_death(libxl_ctx *ctx,
     GC_FREE;
 }
 
-static void disk_eject_xswatch_callback(libxl__egc *egc, libxl__ev_xswatch *w,
-                                        const char *wpath, const char *epath) {
+static void disk_eject_xswatch_callback(EV_XSWATCH_CALLBACK_PARAMS
+                                        (egc, w, wpath, epath))
+{
     EGC_GC;
     libxl_evgen_disk_eject *evg = (void*)w;
     char *backend;
@@ -2525,6 +2531,8 @@ static int libxl__device_from_nic(libxl__gc *gc, uint32_t domid,
                                   libxl_device_nic *nic,
                                   libxl__device *device)
 {
+    USE(gc);
+
     device->backend_devid    = nic->devid;
     device->backend_domid    = nic->backend_domid;
     device->backend_kind     = LIBXL__DEVICE_KIND_VIF;
@@ -2903,6 +2911,9 @@ out:
 
 int libxl__device_vkb_setdefault(libxl__gc *gc, libxl_device_vkb *vkb)
 {
+    USE(gc);
+
+    USE(vkb);
     return 0;
 }
 
@@ -2910,6 +2921,7 @@ static int libxl__device_from_vkb(libxl__gc *gc, uint32_t domid,
                                   libxl_device_vkb *vkb,
                                   libxl__device *device)
 {
+    USE(gc);
     device->backend_devid = vkb->devid;
     device->backend_domid = vkb->backend_domid;
     device->backend_kind = LIBXL__DEVICE_KIND_VKBD;
@@ -2991,6 +3003,7 @@ out:
 
 int libxl__device_vfb_setdefault(libxl__gc *gc, libxl_device_vfb *vfb)
 {
+    USE(gc);
     libxl_defbool_setdefault(&vfb->vnc.enable, true);
     if (libxl_defbool_val(vfb->vnc.enable)) {
         if (!vfb->vnc.listen) {
@@ -3011,6 +3024,7 @@ static int libxl__device_from_vfb(libxl__gc *gc, uint32_t domid,
                                   libxl_device_vfb *vfb,
                                   libxl__device *device)
 {
+    USE(gc);
     device->backend_devid = vfb->devid;
     device->backend_domid = vfb->backend_domid;
     device->backend_kind = LIBXL__DEVICE_KIND_VFB;
@@ -3567,6 +3581,8 @@ int libxl_wait_for_free_memory(libxl_ctx *ctx, uint32_t domid, uint32_t
     uint32_t freemem_slack;
     GC_INIT(ctx);
 
+    USE(domid); /* this may turn out to be useful for compatibility, later */
+
     rc = libxl__get_free_memory_slack(gc, &freemem_slack);
     if (rc < 0)
         goto out;
@@ -3937,8 +3953,13 @@ libxl_scheduler libxl_get_scheduler(libxl_ctx *ctx)
 static int sched_arinc653_domain_set(libxl__gc *gc, uint32_t domid,
                                      const libxl_domain_sched_params *scinfo)
 {
+    USE(gc);
+
     /* Currently, the ARINC 653 scheduler does not take any domain-specific
          configuration, so we simply return success. */
+    USE(domid);
+    USE(scinfo);
+
     return 0;
 }
 
@@ -4386,6 +4407,8 @@ int libxl_xen_console_read_line(libxl_ctx *ctx,
 void libxl_xen_console_read_finish(libxl_ctx *ctx,
                                    libxl_xen_console_reader *cr)
 {
+    USE(ctx);
+
     free(cr->buffer);
     free(cr);
 }
diff --git a/tools/libxl/libxl_aoutils.c b/tools/libxl/libxl_aoutils.c
index 983a60a..6d85f56 100644
--- a/tools/libxl/libxl_aoutils.c
+++ b/tools/libxl/libxl_aoutils.c
@@ -114,8 +114,8 @@ static int datacopier_pollhup_handled(libxl__egc *egc,
     return 0;
 }
 
-static void datacopier_readable(libxl__egc *egc, libxl__ev_fd *ev,
-                                int fd, short events, short revents) {
+static void datacopier_readable(EV_FD_CALLBACK_PARAMS
+                                (egc, ev, fd, events, revents)) {
     libxl__datacopier_state *dc = CONTAINER_OF(ev, *dc, toread);
     STATE_AO_GC(dc->ao);
 
@@ -178,8 +178,8 @@ static void datacopier_readable(libxl__egc *egc, libxl__ev_fd *ev,
     datacopier_check_state(egc, dc);
 }
 
-static void datacopier_writable(libxl__egc *egc, libxl__ev_fd *ev,
-                                int fd, short events, short revents) {
+static void datacopier_writable(EV_FD_CALLBACK_PARAMS(
+                                egc, ev, fd, events, revents)) {
     libxl__datacopier_state *dc = CONTAINER_OF(ev, *dc, towrite);
     STATE_AO_GC(dc->ao);
 
diff --git a/tools/libxl/libxl_blktap2.c b/tools/libxl/libxl_blktap2.c
index 2c40182..660a669 100644
--- a/tools/libxl/libxl_blktap2.c
+++ b/tools/libxl/libxl_blktap2.c
@@ -19,6 +19,7 @@
 
 int libxl__blktap_enabled(libxl__gc *gc)
 {
+    USE(gc);
     const char *msg;
     return !tap_ctl_check(&msg);
 }
diff --git a/tools/libxl/libxl_bootloader.c b/tools/libxl/libxl_bootloader.c
index e103ee9..902dfe6 100644
--- a/tools/libxl/libxl_bootloader.c
+++ b/tools/libxl/libxl_bootloader.c
@@ -88,6 +88,7 @@ static void make_bootloader_args(libxl__gc *gc, libxl__bootloader_state *bl,
 static int setup_xenconsoled_pty(libxl__egc *egc, libxl__bootloader_state *bl,
                                  char *slave_path, size_t slave_path_len)
 {
+    USE(egc);
     STATE_AO_GC(bl->ao);
     struct termios termattr;
     int r, rc;
@@ -141,6 +142,7 @@ static const char *bootloader_result_command(libxl__gc *gc, const char *buf,
 static int parse_bootloader_result(libxl__egc *egc,
                                    libxl__bootloader_state *bl)
 {
+    USE(egc);
     STATE_AO_GC(bl->ao);
     char buf[PATH_MAX*2];
     FILE *f = 0;
@@ -221,6 +223,7 @@ void libxl__bootloader_init(libxl__bootloader_state *bl)
 
 static void bootloader_cleanup(libxl__egc *egc, libxl__bootloader_state *bl)
 {
+    USE(egc);
     STATE_AO_GC(bl->ao);
     int i;
 
@@ -256,6 +259,8 @@ static void bootloader_local_detached_cb(libxl__egc *egc,
 static void bootloader_callback(libxl__egc *egc, libxl__bootloader_state *bl,
                                 int rc)
 {
+    USE(egc);
+
     if (!bl->rc)
         bl->rc = rc;
 
@@ -285,6 +290,7 @@ static void bootloader_local_detached_cb(libxl__egc *egc,
 static void bootloader_stop(libxl__egc *egc,
                              libxl__bootloader_state *bl, int rc)
 {
+    USE(egc);
     STATE_AO_GC(bl->ao);
     int r;
 
diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index 5f0d26f..5d56d67 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -62,6 +62,8 @@ void libxl_domain_config_dispose(libxl_domain_config *d_config)
 int libxl__domain_create_info_setdefault(libxl__gc *gc,
                                          libxl_domain_create_info *c_info)
 {
+    USE(gc);
+
     if (!c_info->type)
         return ERROR_INVAL;
 
diff --git a/tools/libxl/libxl_device.c b/tools/libxl/libxl_device.c
index 9fc63f1..80c5511 100644
--- a/tools/libxl/libxl_device.c
+++ b/tools/libxl/libxl_device.c
@@ -44,6 +44,8 @@ int libxl__parse_backend_path(libxl__gc *gc,
                               const char *path,
                               libxl__device *dev)
 {
+    USE(gc);
+
     /* /local/domain/<domid>/backend/<kind>/<domid>/<devid> */
     char strkind[16]; /* Longest is actually "console" */
     int rc = sscanf(path, "/local/domain/%d/backend/%15[^/]/%u/%d",
@@ -796,8 +798,7 @@ out:
     return;
 }
 
-static void device_qemu_timeout(libxl__egc *egc, libxl__ev_time *ev,
-                                const struct timeval *requested_abs)
+static void device_qemu_timeout(EV_TIME_CALLBACK_PARAMS(egc, ev, requested_abs))
 {
     libxl__ao_device *aodev = CONTAINER_OF(ev, *aodev, timeout);
     STATE_AO_GC(aodev->ao);
@@ -919,8 +920,8 @@ out:
     return;
 }
 
-static void device_hotplug_timeout_cb(libxl__egc *egc, libxl__ev_time *ev,
-                                      const struct timeval *requested_abs)
+static void device_hotplug_timeout_cb(EV_TIME_CALLBACK_PARAMS
+                                      (egc, ev, requested_abs))
 {
     libxl__ao_device *aodev = CONTAINER_OF(ev, *aodev, timeout);
     STATE_AO_GC(aodev->ao);
diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
index 0c0084f..6995306 100644
--- a/tools/libxl/libxl_dm.c
+++ b/tools/libxl/libxl_dm.c
@@ -640,6 +640,7 @@ static int libxl__vfb_and_vkb_from_hvm_guest_config(libxl__gc *gc,
                                         libxl_device_vfb *vfb,
                                         libxl_device_vkb *vkb)
 {
+    USE(gc);
     const libxl_domain_build_info *b_info = &guest_config->b_info;
 
     if (b_info->type != LIBXL_DOMAIN_TYPE_HVM)
@@ -1177,6 +1178,7 @@ static void device_model_confirm(libxl__egc *egc, libxl__spawn_state *spawn,
 {
     libxl__dm_spawn_state *dmss = CONTAINER_OF(spawn, *dmss, spawn);
     STATE_AO_GC(spawn->ao);
+    USE(egc);
 
     if (!xsdata)
         return;
@@ -1262,6 +1264,7 @@ int libxl__need_xenpv_qemu(libxl__gc *gc,
         int nr_vfbs, libxl_device_vfb *vfbs,
         int nr_disks, libxl_device_disk *disks)
 {
+    USE(gc);
     int i, ret = 0;
 
     /*
@@ -1283,6 +1286,7 @@ int libxl__need_xenpv_qemu(libxl__gc *gc,
     }
 
     if (nr_vfbs > 0) {
+        USE(vfbs);
         ret = 1;
         goto out;
     }
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index 06d5e4f..4a36652 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -756,8 +756,8 @@ void libxl__domain_suspend_common_switch_qemu_logdirty
     switch_logdirty_done(egc,dss,-1);
 }
 
-static void switch_logdirty_timeout(libxl__egc *egc, libxl__ev_time *ev,
-                                    const struct timeval *requested_abs)
+static void switch_logdirty_timeout(EV_TIME_CALLBACK_PARAMS
+                                    (egc, ev, requested_abs))
 {
     libxl__domain_suspend_state *dss = CONTAINER_OF(ev, *dss, logdirty.timeout);
     STATE_AO_GC(dss->ao);
@@ -765,8 +765,8 @@ static void switch_logdirty_timeout(libxl__egc *egc, libxl__ev_time *ev,
     switch_logdirty_done(egc,dss,-1);
 }
 
-static void switch_logdirty_xswatch(libxl__egc *egc, libxl__ev_xswatch *watch,
-                            const char *watch_path, const char *event_path)
+static void switch_logdirty_xswatch(EV_XSWATCH_CALLBACK_PARAMS
+                                    (egc, watch, wpath, epath))
 {
     libxl__domain_suspend_state *dss =
         CONTAINER_OF(watch, *dss, logdirty.watch);
diff --git a/tools/libxl/libxl_event.c b/tools/libxl/libxl_event.c
index 939906c..3ec9361 100644
--- a/tools/libxl/libxl_event.c
+++ b/tools/libxl/libxl_event.c
@@ -197,6 +197,8 @@ static void time_done_debug(libxl__gc *gc, const char *func,
                "ev_time=%p done rc=%d .func=%p infinite=%d abs=%lu.%06lu",
                ev, rc, ev->func, ev->infinite,
                (unsigned long)ev->abs.tv_sec, (unsigned long)ev->abs.tv_usec);
+#else
+    USE(gc); USE(func); USE(ev); USE(rc);
 #endif
 }
 
@@ -381,8 +383,7 @@ static void libxl__set_watch_slot_contents(libxl__ev_watch_slot *slot,
     slot->empty.sle_next = (void*)w;
 }
 
-static void watchfd_callback(libxl__egc *egc, libxl__ev_fd *ev,
-                             int fd, short events, short revents)
+static void watchfd_callback(EV_FD_CALLBACK_PARAMS(egc,ev, fd,events,revents))
 {
     EGC_GC;
 
@@ -571,8 +572,8 @@ void libxl__ev_xswatch_deregister(libxl__gc *gc, libxl__ev_xswatch *w)
  * waiting for device state
  */
 
-static void devstate_watch_callback(libxl__egc *egc, libxl__ev_xswatch *watch,
-                                const char *watch_path, const char *event_path)
+static void devstate_watch_callback(EV_XSWATCH_CALLBACK_PARAMS
+                                    (egc, watch, watch_path, event_path))
 {
     EGC_GC;
     libxl__ev_devstate *ds = CONTAINER_OF(watch, *ds, watch);
@@ -605,8 +606,7 @@ static void devstate_watch_callback(libxl__egc *egc, libxl__ev_xswatch *watch,
     ds->callback(egc, ds, rc);
 }
 
-static void devstate_timeout(libxl__egc *egc, libxl__ev_time *ev,
-                             const struct timeval *requested_abs)
+static void devstate_timeout(EV_TIME_CALLBACK_PARAMS(egc, ev, requested_abs))
 {
     EGC_GC;
     libxl__ev_devstate *ds = CONTAINER_OF(ev, *ds, timeout);
@@ -662,8 +662,8 @@ int libxl__ev_devstate_wait(libxl__gc *gc, libxl__ev_devstate *ds,
  * futile.
  */
 
-static void domaindeathcheck_callback(libxl__egc *egc, libxl__ev_xswatch *w,
-                            const char *watch_path, const char *event_path)
+static void domaindeathcheck_callback(EV_XSWATCH_CALLBACK_PARAMS
+                                      (egc, w, watch_path, event_path))
 {
     libxl__domaindeathcheck *dc = CONTAINER_OF(w, *dc, watch);
     EGC_GC;
@@ -1019,6 +1019,7 @@ void libxl_osevent_occurred_fd(libxl_ctx *ctx, void *for_libxl,
 
     assert(fd == ev->fd);
     revents &= ev->events;
+    USE(events); /* we use our own idea of what we asked for */
     if (revents)
         ev->func(egc, ev, fd, ev->events, revents);
 
@@ -1151,6 +1152,8 @@ void libxl__event_occurred(libxl__egc *egc, libxl_event *event)
 
 void libxl_event_free(libxl_ctx *ctx, libxl_event *event)
 {
+    USE(ctx);
+
     /* Exceptionally, this function may be called from libxl, with ctx==0 */
     libxl_event_dispose(event);
     free(event);
@@ -1648,7 +1651,8 @@ int libxl__ao_inprogress(libxl__ao *ao,
  * for how.  But we want to copy *how.  So we have this dummy function
  * whose address is stored in callback if the app passed how==NULL. */
 static void dummy_asyncprogress_callback_ignore
-  (libxl_ctx *ctx, libxl_event *ev, void *for_callback) { }
+  (libxl_ctx *ctx, libxl_event *ev, void *for_callback)
+    { USE(ctx); USE(ev); USE(for_callback); }
 
 void libxl__ao_progress_gethow(libxl_asyncprogress_how *in_state,
                                const libxl_asyncprogress_how *from_app) {
diff --git a/tools/libxl/libxl_exec.c b/tools/libxl/libxl_exec.c
index 0477386..ed6b44e 100644
--- a/tools/libxl/libxl_exec.c
+++ b/tools/libxl/libxl_exec.c
@@ -280,6 +280,7 @@ void libxl__spawn_init(libxl__spawn_state *ss)
 
 int libxl__spawn_spawn(libxl__egc *egc, libxl__spawn_state *ss)
 {
+    USE(egc);
     STATE_AO_GC(ss->ao);
     int r;
     pid_t child;
@@ -387,8 +388,7 @@ static void spawn_fail(libxl__egc *egc, libxl__spawn_state *ss)
     spawn_detach(gc, ss);
 }
 
-static void spawn_timeout(libxl__egc *egc, libxl__ev_time *ev,
-                          const struct timeval *requested_abs)
+static void spawn_timeout(EV_TIME_CALLBACK_PARAMS(egc, ev, requested_abs))
 {
     /* Before event, was Attached. */
     EGC_GC;
@@ -397,8 +397,8 @@ static void spawn_timeout(libxl__egc *egc, libxl__ev_time *ev,
     spawn_fail(egc, ss); /* must be last */
 }
 
-static void spawn_watch_event(libxl__egc *egc, libxl__ev_xswatch *xsw,
-                              const char *watch_path, const char *event_path)
+static void spawn_watch_event(EV_XSWATCH_CALLBACK_PARAMS
+                              (egc, xsw, wpath, epath))
 {
     /* On entry, is Attached. */
     EGC_GC;
diff --git a/tools/libxl/libxl_fork.c b/tools/libxl/libxl_fork.c
index 044ddad..0379604 100644
--- a/tools/libxl/libxl_fork.c
+++ b/tools/libxl/libxl_fork.c
@@ -157,6 +157,7 @@ int libxl__carefd_fd(const libxl__carefd *cf)
 
 static void sigchld_handler(int signo)
 {
+    USE(signo);
     int e = libxl__self_pipe_wakeup(sigchld_owner->sigchld_selfpipe[1]);
     assert(!e); /* errors are probably EBADF, very bad */
 }
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index 6528694..2381388 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -101,6 +101,8 @@
 #define DISABLE_UDEV_PATH "libxl/disable_udev"
 
 #define ARRAY_SIZE(a) (sizeof(a) / sizeof(a[0]))
+#define USE(var) ((void)(var))
+#define MAYBE_UNUSED __attribute__((unused))
 
 #define LIBXL__LOGGING_ENABLED
 
@@ -153,6 +155,14 @@ typedef void libxl__ev_fd_callback(libxl__egc *egc, libxl__ev_fd *ev,
    * It is not permitted to listen for the same or overlapping events
    * on the same fd using multiple different libxl__ev_fd's.
    */
+
+/* Declare your callback functions with this helper and you avoid unused
+ * parameter warnings (and don't have to list all the types either): */
+#define EV_FD_CALLBACK_PARAMS(egc, ev, fd, events, revents)             \
+     libxl__egc *egc MAYBE_UNUSED, libxl__ev_fd *ev MAYBE_UNUSED,       \
+     int fd MAYBE_UNUSED,                                               \
+     short events MAYBE_UNUSED, short revents MAYBE_UNUSED
+
 struct libxl__ev_fd {
     /* caller should include this in their own struct */
     /* read-only for caller, who may read only when registered: */
@@ -168,6 +178,11 @@ struct libxl__ev_fd {
 typedef struct libxl__ev_time libxl__ev_time;
 typedef void libxl__ev_time_callback(libxl__egc *egc, libxl__ev_time *ev,
                                      const struct timeval *requested_abs);
+
+#define EV_TIME_CALLBACK_PARAMS(egc, ev, requested_abs)                 \
+    libxl__egc *egc MAYBE_UNUSED, libxl__ev_time *ev MAYBE_UNUSED,      \
+    const struct timeval *requested_abs MAYBE_UNUSED
+
 struct libxl__ev_time {
     /* caller should include this in their own struct */
     /* read-only for caller, who may read only when registered: */
@@ -180,8 +195,13 @@ struct libxl__ev_time {
 };
 
 typedef struct libxl__ev_xswatch libxl__ev_xswatch;
-typedef void libxl__ev_xswatch_callback(libxl__egc *egc, libxl__ev_xswatch*,
-                            const char *watch_path, const char *event_path);
+typedef void libxl__ev_xswatch_callback(libxl__egc *egc,
+    libxl__ev_xswatch *watch, const char *watch_path, const char *event_path);
+
+#define EV_XSWATCH_CALLBACK_PARAMS(egc, watch, wpath, epath)    \
+    libxl__egc *egc MAYBE_UNUSED, libxl__ev_xswatch *watch MAYBE_UNUSED,            \
+    const char *wpath MAYBE_UNUSED, const char *epath MAYBE_UNUSED
+
 struct libxl__ev_xswatch {
     /* caller should include this in their own struct */
     /* read-only for caller, who may read only when registered: */
diff --git a/tools/libxl/libxl_pci.c b/tools/libxl/libxl_pci.c
index 9c92ae6..5e866f2 100644
--- a/tools/libxl/libxl_pci.c
+++ b/tools/libxl/libxl_pci.c
@@ -798,8 +798,11 @@ static int pci_multifunction_check(libxl__gc *gc, libxl_device_pci *pcidev, unsi
 
 static int pci_ins_check(libxl__gc *gc, uint32_t domid, const char *state, void *priv)
 {
+    USE(gc);
     char *orig_state = priv;
 
+    USE(domid);
+
     if ( !strcmp(state, "pci-insert-failed") )
         return -1;
     if ( !strcmp(state, "pci-inserted") )
@@ -1007,6 +1010,8 @@ static int libxl__device_pci_reset(libxl__gc *gc, unsigned int domain, unsigned
 
 int libxl__device_pci_setdefault(libxl__gc *gc, libxl_device_pci *pci)
 {
+    USE(gc);
+    USE(pci);
     return 0;
 }
 
diff --git a/tools/libxl/libxl_qmp.c b/tools/libxl/libxl_qmp.c
index e33b130..c354c71 100644
--- a/tools/libxl/libxl_qmp.c
+++ b/tools/libxl/libxl_qmp.c
@@ -46,6 +46,10 @@
 typedef int (*qmp_callback_t)(libxl__qmp_handler *qmp,
                               const libxl__json_object *tree,
                               void *opaque);
+#define QMP_CALLBACK_PARAMS(qmp, tree, opaque)          \
+    libxl__qmp_handler *qmp MAYBE_UNUSED,               \
+    const libxl__json_object *tree MAYBE_UNUSED,        \
+    void *opaque MAYBE_UNUSED
 
 typedef struct qmp_request_context {
     int rc;
@@ -109,9 +113,7 @@ static int store_serial_port_info(libxl__qmp_handler *qmp,
     return ret;
 }
 
-static int register_serials_chardev_callback(libxl__qmp_handler *qmp,
-                                             const libxl__json_object *o,
-                                             void *unused)
+static int register_serials_chardev_callback(QMP_CALLBACK_PARAMS(qmp,o,unused))
 {
     const libxl__json_object *obj = NULL;
     const libxl__json_object *label = NULL;
@@ -165,9 +167,7 @@ static int qmp_write_domain_console_item(libxl__gc *gc, int domid,
     return libxl__xs_write(gc, XBT_NULL, path, "%s", value);
 }
 
-static int qmp_register_vnc_callback(libxl__qmp_handler *qmp,
-                                     const libxl__json_object *o,
-                                     void *unused)
+static int qmp_register_vnc_callback(QMP_CALLBACK_PARAMS(qmp,o,unused))
 {
     GC_INIT(qmp->ctx);
     const libxl__json_object *obj;
@@ -203,8 +203,7 @@ out:
     return rc;
 }
 
-static int qmp_capabilities_callback(libxl__qmp_handler *qmp,
-                                     const libxl__json_object *o, void *unused)
+static int qmp_capabilities_callback(QMP_CALLBACK_PARAMS(qmp,o,unused))
 {
     qmp->connected = true;
 
@@ -228,6 +227,8 @@ static int enable_qmp_capabilities(libxl__qmp_handler *qmp)
 static libxl__qmp_message_type qmp_response_type(libxl__qmp_handler *qmp,
                                                  const libxl__json_object *o)
 {
+    USE(qmp);
+
     libxl__qmp_message_type type;
     libxl__json_map_node *node = NULL;
     int i = 0;
diff --git a/tools/libxl/libxl_save_callout.c b/tools/libxl/libxl_save_callout.c
index 078b7ee..78bb67e 100644
--- a/tools/libxl/libxl_save_callout.c
+++ b/tools/libxl/libxl_save_callout.c
@@ -252,8 +252,8 @@ static void helper_failed(libxl__egc *egc, libxl__save_helper_state *shs,
                 (unsigned long)shs->child.pid);
 }
 
-static void helper_stdout_readable(libxl__egc *egc, libxl__ev_fd *ev,
-                                   int fd, short events, short revents)
+static void helper_stdout_readable(EV_FD_CALLBACK_PARAMS
+                                   (egc, ev, fd, events, revents))
 {
     libxl__save_helper_state *shs = CONTAINER_OF(ev, *shs, readable);
     STATE_AO_GC(shs->ao);
diff --git a/tools/libxl/libxl_utils.c b/tools/libxl/libxl_utils.c
index f7b44a0..a7c34a9 100644
--- a/tools/libxl/libxl_utils.c
+++ b/tools/libxl/libxl_utils.c
@@ -230,6 +230,7 @@ out:
 
 int libxl_string_to_backend(libxl_ctx *ctx, char *s, libxl_disk_backend *backend)
 {
+    USE(ctx);
     char *p;
     int rc = 0;
 
@@ -513,6 +514,7 @@ void libxl_bitmap_dispose(libxl_bitmap *map)
 void libxl_bitmap_copy(libxl_ctx *ctx, libxl_bitmap *dptr,
                        const libxl_bitmap *sptr)
 {
+    USE(ctx);
     int sz;
 
     assert(dptr->size == sptr->size);
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 17:28:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 17:28:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwzBM-0007t8-Gi; Thu, 02 Aug 2012 17:27:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SwzBI-0007q1-J1
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 17:27:28 +0000
Received: from [85.158.143.99:51971] by server-3.bemta-4.messagelabs.com id
	AA/E9-01511-088BA105; Thu, 02 Aug 2012 17:27:28 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1343928445!29407608!8
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21122 invoked from network); 2 Aug 2012 17:27:26 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-3.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 17:27:26 -0000
X-IronPort-AV: E=Sophos;i="4.77,702,1336348800"; d="scan'208";a="13828368"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 17:27:25 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 2 Aug 2012 18:27:25 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1SwzBF-0001vb-3D; Thu, 02 Aug 2012 17:27:25 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1SwzBF-0006GS-2R;
	Thu, 02 Aug 2012 18:27:25 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 2 Aug 2012 18:27:17 +0100
Message-ID: <1343928442-23966-9-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1343928442-23966-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1343928442-23966-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [Xen-devel] [PATCH 08/13] libxl: remus: mark TODOs more clearly
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Change the TODOs in the remus code to "REMUS TODO", which will make
them easier to grep for later.  AIUI all of these are essential for
use of remus in production.

Also add a new TODO and a new assert, to check rc on entry to
remus_checkpoint_dm_saved.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
---
 tools/libxl/libxl_dom.c |    9 +++++----
 1 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index d749983..06d5e4f 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -1110,7 +1110,7 @@ int libxl__toolstack_save(uint32_t domid, uint8_t **buf,
 
 static int libxl__remus_domain_suspend_callback(void *data)
 {
-    /* TODO: Issue disk and network checkpoint reqs. */
+    /* REMUS TODO: Issue disk and network checkpoint reqs. */
     return libxl__domain_suspend_common_callback(data);
 }
 
@@ -1124,7 +1124,7 @@ static int libxl__remus_domain_resume_callback(void *data)
     if (libxl_domain_resume(CTX, dss->domid, /* Fast Suspend */1))
         return 0;
 
-    /* TODO: Deal with disk. Start a new network output buffer */
+    /* REMUS TODO: Deal with disk. Start a new network output buffer */
     return 1;
 }
 
@@ -1151,8 +1151,9 @@ static void libxl__remus_domain_checkpoint_callback(void *data)
 static void remus_checkpoint_dm_saved(libxl__egc *egc,
                                       libxl__domain_suspend_state *dss, int rc)
 {
-    /* TODO: Wait for disk and memory ack, release network buffer */
-    /* TODO: make this asynchronous */
+    /* REMUS TODO: Wait for disk and memory ack, release network buffer */
+    /* REMUS TODO: make this asynchronous */
+    assert(!rc); /* REMUS TODO handle this error properly */
     usleep(dss->interval * 1000);
     libxl__xc_domain_saverestore_async_callback_done(egc, &dss->shs, 1);
 }
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 17:28:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 17:28:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwzBI-0007qf-MQ; Thu, 02 Aug 2012 17:27:28 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SwzBG-0007q0-Ls
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 17:27:26 +0000
Received: from [85.158.143.99:51881] by server-1.bemta-4.messagelabs.com id
	5D/5E-24392-D78BA105; Thu, 02 Aug 2012 17:27:25 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1343928445!29407608!2
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21081 invoked from network); 2 Aug 2012 17:27:25 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-3.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 17:27:25 -0000
X-IronPort-AV: E=Sophos;i="4.77,702,1336348800"; d="scan'208";a="13828362"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 17:27:25 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 2 Aug 2012 18:27:25 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1SwzBE-0001vO-SG; Thu, 02 Aug 2012 17:27:24 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1SwzBE-0006Fu-P5;
	Thu, 02 Aug 2012 18:27:24 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 2 Aug 2012 18:27:10 +0100
Message-ID: <1343928442-23966-2-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1343928442-23966-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1343928442-23966-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [Xen-devel] [PATCH 01/13] libxl: unify libxl__device_destroy and
	device_hotplug_done
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

device_hotplug_done contains an open-coded but improved version of
libxl__device_destroy.  So move the contents of device_hotplug_done
into libxl__device_destroy, deleting the old code, and replace it at
its old location with a function call.

Add the missing call to libxl__xs_transaction_abort.  (It was present
in neither version, and with the code as it stands it is always a
no-op, because no-one does "goto out" other than after
libxl__xs_transaction_start or _commit.)

Also fix the error handling: the rc from the destroy should be
propagated into the aodev.
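
For reviewers unfamiliar with the idiom: the consolidated loop follows the
convention visible in the hunk below, where libxl__xs_transaction_commit
returns 0 on success, a positive value when the transaction raced and must
be retried, and a negative value on a hard error.  A minimal self-contained
sketch of that control flow, with an illustrative stub standing in for the
real libxl helpers (names here are not the libxl API):

```c
#include <assert.h>

/* Stand-in for libxl__xs_transaction_commit: returns >0 to ask the
 * caller to retry the whole transaction (xenstore conflict), 0 on a
 * successful commit, <0 on a hard error.  Illustrative stub only. */
static int commits_until_success;
static int fake_commit(void)
{
    if (commits_until_success-- > 0)
        return 1;   /* conflict: caller must restart the transaction */
    return 0;       /* committed */
}

/* The retry pattern this patch moves into libxl__device_destroy. */
static int destroy_pattern(void)
{
    int rc;
    for (;;) {
        /* ...start transaction, do the cleanup writes... */
        rc = fake_commit();
        if (!rc) break;       /* committed, leave the loop */
        if (rc < 0) goto out; /* hard error */
        /* rc > 0: transaction raced, go round again */
    }
out:
    /* the unconditional abort added by this patch is a no-op after a
     * successful commit, but releases the transaction on error paths */
    return rc;
}
```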

Reported-by: Ian Campbell <Ian.Campbell@citrix.com>
Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>

-
Changes in v5 of series:
 * Also add missing xs abort.
---
 tools/libxl/libxl_device.c |   36 +++++++++++++-----------------------
 1 files changed, 13 insertions(+), 23 deletions(-)

diff --git a/tools/libxl/libxl_device.c b/tools/libxl/libxl_device.c
index da0c3ea..95b169e 100644
--- a/tools/libxl/libxl_device.c
+++ b/tools/libxl/libxl_device.c
@@ -513,22 +513,24 @@ int libxl__device_destroy(libxl__gc *gc, libxl__device *dev)
     char *be_path = libxl__device_backend_path(gc, dev);
     char *fe_path = libxl__device_frontend_path(gc, dev);
     xs_transaction_t t = 0;
-    int rc = 0;
+    int rc;
+
+    for (;;) {
+        rc = libxl__xs_transaction_start(gc, &t);
+        if (rc) goto out;
 
-    do {
-        t = xs_transaction_start(CTX->xsh);
         libxl__xs_path_cleanup(gc, t, fe_path);
         libxl__xs_path_cleanup(gc, t, be_path);
-        rc = !xs_transaction_end(CTX->xsh, t, 0);
-    } while (rc && errno == EAGAIN);
-    if (rc) {
-        LOGE(ERROR, "unable to finish transaction");
-        goto out;
+
+        rc = libxl__xs_transaction_commit(gc, &t);
+        if (!rc) break;
+        if (rc < 0) goto out;
     }
 
     libxl__device_destroy_tapdisk(gc, be_path);
 
 out:
+    libxl__xs_transaction_abort(gc, &t);
     return rc;
 }
 
@@ -993,29 +995,17 @@ error:
 static void device_hotplug_done(libxl__egc *egc, libxl__ao_device *aodev)
 {
     STATE_AO_GC(aodev->ao);
-    char *be_path = libxl__device_backend_path(gc, aodev->dev);
-    char *fe_path = libxl__device_frontend_path(gc, aodev->dev);
-    xs_transaction_t t = 0;
     int rc;
 
     device_hotplug_clean(gc, aodev);
 
     /* Clean xenstore if it's a disconnection */
     if (aodev->action == DEVICE_DISCONNECT) {
-        for (;;) {
-            rc = libxl__xs_transaction_start(gc, &t);
-            if (rc) goto out;
-
-            libxl__xs_path_cleanup(gc, t, fe_path);
-            libxl__xs_path_cleanup(gc, t, be_path);
-
-            rc = libxl__xs_transaction_commit(gc, &t);
-            if (!rc) break;
-            if (rc < 0) goto out;
-        }
+        rc = libxl__device_destroy(gc, aodev->dev);
+        if (!aodev->rc)
+            aodev->rc = rc;
     }
 
-out:
     aodev->callback(egc, aodev);
     return;
 }
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 17:28:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 17:28:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwzBL-0007sC-9a; Thu, 02 Aug 2012 17:27:31 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SwzBI-0007q0-30
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 17:27:28 +0000
Received: from [85.158.143.99:8976] by server-1.bemta-4.messagelabs.com id
	00/6E-24392-E78BA105; Thu, 02 Aug 2012 17:27:26 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1343928445!29407608!7
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21116 invoked from network); 2 Aug 2012 17:27:26 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-3.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 17:27:26 -0000
X-IronPort-AV: E=Sophos;i="4.77,702,1336348800"; d="scan'208";a="13828367"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 17:27:25 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 2 Aug 2012 18:27:25 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1SwzBF-0001va-2g; Thu, 02 Aug 2012 17:27:25 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1SwzBF-0006GO-1u;
	Thu, 02 Aug 2012 18:27:25 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 2 Aug 2012 18:27:16 +0100
Message-ID: <1343928442-23966-8-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1343928442-23966-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1343928442-23966-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [Xen-devel] [PATCH 07/13] libxl: do not blunder on if bootloader
	fails (again)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Do not lose the rc value passed to bootloader_callback.  Do not lose
the rc value from the bl when the local disk detach succeeds.

While we're here, rationalise the use of bl->rc to make things clearer.
Set it to zero at the start and always update it conditionally; copy
it into bootloader_callback's argument each time.
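
The "update it conditionally" rule above is a first-error-wins
accumulator: later failures must not overwrite the rc of the failure
that started the teardown.  A reduced illustration of the convention
(the helper name is made up for this sketch, not libxl code):

```c
#include <assert.h>

/* Record an error code only if none has been recorded yet, so the
 * first failure in a multi-step teardown is the one ultimately
 * reported -- the pattern this patch applies to bl->rc. */
static void record_rc(int *acc, int rc)
{
    if (!*acc)
        *acc = rc;
}
```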

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
---
 tools/libxl/libxl_bootloader.c |   11 +++++++++--
 1 files changed, 9 insertions(+), 2 deletions(-)

diff --git a/tools/libxl/libxl_bootloader.c b/tools/libxl/libxl_bootloader.c
index bfc1b56..e103ee9 100644
--- a/tools/libxl/libxl_bootloader.c
+++ b/tools/libxl/libxl_bootloader.c
@@ -206,6 +206,7 @@ static int parse_bootloader_result(libxl__egc *egc,
 void libxl__bootloader_init(libxl__bootloader_state *bl)
 {
     assert(bl->ao);
+    bl->rc = 0;
     bl->dls.diskpath = NULL;
     bl->openpty.ao = bl->ao;
     bl->dls.ao = bl->ao;
@@ -255,6 +256,9 @@ static void bootloader_local_detached_cb(libxl__egc *egc,
 static void bootloader_callback(libxl__egc *egc, libxl__bootloader_state *bl,
                                 int rc)
 {
+    if (!bl->rc)
+        bl->rc = rc;
+
     bootloader_cleanup(egc, bl);
 
     bl->dls.callback = bootloader_local_detached_cb;
@@ -270,9 +274,11 @@ static void bootloader_local_detached_cb(libxl__egc *egc,
 
     if (rc) {
         LOG(ERROR, "unable to detach locally attached disk");
+        if (!bl->rc)
+            bl->rc = rc;
     }
 
-    bl->callback(egc, bl, rc);
+    bl->callback(egc, bl, bl->rc);
 }
 
 /* might be called at any time, provided it's init'd */
@@ -289,7 +295,8 @@ static void bootloader_stop(libxl__egc *egc,
         if (r) LOGE(WARN, "%sfailed to kill bootloader [%lu]",
                     rc ? "after failure, " : "", (unsigned long)bl->child.pid);
     }
-    bl->rc = rc;
+    if (!bl->rc)
+        bl->rc = rc;
 }
 
 /*----- main flow of control -----*/
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

@@ -270,9 +274,11 @@ static void bootloader_local_detached_cb(libxl__egc *egc,
 
     if (rc) {
         LOG(ERROR, "unable to detach locally attached disk");
+        if (!bl->rc)
+            bl->rc = rc;
     }
 
-    bl->callback(egc, bl, rc);
+    bl->callback(egc, bl, bl->rc);
 }
 
 /* might be called at any time, provided it's init'd */
@@ -289,7 +295,8 @@ static void bootloader_stop(libxl__egc *egc,
         if (r) LOGE(WARN, "%sfailed to kill bootloader [%lu]",
                     rc ? "after failure, " : "", (unsigned long)bl->child.pid);
     }
-    bl->rc = rc;
+    if (!bl->rc)
+        bl->rc = rc;
 }
 
 /*----- main flow of control -----*/
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 17:38:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 17:38:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwzLp-0001VF-L5; Thu, 02 Aug 2012 17:38:21 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <halcyon1981@gmail.com>) id 1SwzLn-0001V5-As
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 17:38:19 +0000
X-Env-Sender: halcyon1981@gmail.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1343929089!11933908!1
X-Originating-IP: [209.85.212.173]
X-SpamReason: No, hits=0.6 required=7.0 tests=MAILTO_TO_SPAM_ADDR,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27812 invoked from network); 2 Aug 2012 17:38:10 -0000
Received: from mail-wi0-f173.google.com (HELO mail-wi0-f173.google.com)
	(209.85.212.173)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 17:38:10 -0000
Received: by wibhm6 with SMTP id hm6so4409546wib.14
	for <xen-devel@lists.xen.org>; Thu, 02 Aug 2012 10:38:09 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=sGRROJoVyyyoHwqUJVJ5yx2G7jN/5/y5+ijX7dg4JKo=;
	b=Fw33ttFoZIpMXNioY/EuTGFfLjE2s4lTHp9AZC7AyZVrp9F+Ug1PrDS28dAqt7cRtD
	m2c9lGxn+me6AN2Wvz7OU2aaMADTANHt01d/ov388ewOCYqlPSUl64mGIC/tPqd4/9d1
	cq5MI2LQ5onM/PCdkdBwOYoZ5rWKVPyXN4Xh9jvCQYo/J/vu4N3LgbmKjFFauEzHFNTT
	M/SnHedWOjG9u75h9rPCXZ6xlGiF+xSUsRTAzyHK5x4ctaRWJ1HrxJQs0CTeVEiyKUOZ
	kPpIFMpLG2yhcdSCLa399x/g0nHbBPh+QIFVHyPGLZ3DqWwoxBayl+klpu7YkiNhQE8W
	lVPA==
MIME-Version: 1.0
Received: by 10.180.84.164 with SMTP id a4mr6474172wiz.12.1343929089388; Thu,
	02 Aug 2012 10:38:09 -0700 (PDT)
Received: by 10.223.83.9 with HTTP; Thu, 2 Aug 2012 10:38:08 -0700 (PDT)
In-Reply-To: <CANKx4w9AUuAUWtnW59NL7az8VJ-jLJn984hMqvGVwTEfVAvs1A@mail.gmail.com>
References: <CACA08Dwza8TjixUYz1o6PWzfaPwfz1jMiGVNyZE9KcJZ_Ln2oA@mail.gmail.com>
	<CANKx4w9XJGnD5u4hN+RhyOy7OUmF5nh-G8549ER1W19jy__u3Q@mail.gmail.com>
	<CANKx4w8Azvf1s0aqGSRSn9_f-Zq_uL_CAN2rfh5kd=udsT2YDg@mail.gmail.com>
	<CACA08Dw0B3xK_Lr6WQiCSgDewt9ED6GPm-_KaTZzQarsELPrcg@mail.gmail.com>
	<CANKx4w8Y0Uddmre-nsm0TT+GgP3AMiCuoXfbc2-2L+PG3TUgbg@mail.gmail.com>
	<CANKx4w9O0bW=GJknP=hnYW5nUAas8AyKhDVjXGMmgeBkRBon5w@mail.gmail.com>
	<CAHyyzzTMX4wcg+DNEL91dWmo0R-6oGJLNH5O50bSUeHkmTWAwQ@mail.gmail.com>
	<CANKx4w9BVBrmQwaCtJr6CsZ6OK=jV+pb35QtNLHp+Jp6=739aA@mail.gmail.com>
	<1342682536.18848.50.camel@dagon.hellion.org.uk>
	<alpine.DEB.2.02.1207191258580.23783@kaball.uk.xensource.com>
	<CANKx4w-UKwt9H45N9Kbox708OBaQEqG3niELp593h2P7oc+pjw@mail.gmail.com>
	<CANKx4w9=X7iXvQBrORgh6aNHEQ--+4QeFwXDBG0HV-DYk7HpxQ@mail.gmail.com>
	<alpine.DEB.2.02.1207311119500.4645@kaball.uk.xensource.com>
	<CANKx4w_9sX9hsgSkwV6Vb8xSVQ_4n+_sFjfXSDEPHQ1seH=9=g@mail.gmail.com>
	<alpine.DEB.2.02.1208011150330.4645@kaball.uk.xensource.com>
	<CANKx4w9AUuAUWtnW59NL7az8VJ-jLJn984hMqvGVwTEfVAvs1A@mail.gmail.com>
Date: Thu, 2 Aug 2012 10:38:08 -0700
Message-ID: <CANKx4w8Gddh=GAz=mL4Xccykc9_kqi7dS2ER0=-=NxxwWo6VHA@mail.gmail.com>
From: David Erickson <halcyon1981@gmail.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Content-Type: multipart/mixed; boundary=f46d043c06e6f9f25804c64bdf1e
Cc: Anthony Perard <anthony.perard@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	jacek burghardt <jaceksburghardt@gmail.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] LSI SAS2008 Option Rom Failure
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--f46d043c06e6f9f25804c64bdf1e
Content-Type: text/plain; charset=ISO-8859-1

On Wed, Aug 1, 2012 at 10:52 AM, David Erickson <halcyon1981@gmail.com> wrote:
> On Wed, Aug 1, 2012 at 4:13 AM, Stefano Stabellini
> <stefano.stabellini@eu.citrix.com> wrote:
>> On Tue, 31 Jul 2012, David Erickson wrote:
>>> On Tue, Jul 31, 2012 at 4:39 AM, Stefano Stabellini
>>> <stefano.stabellini@eu.citrix.com> wrote:
>>> > On Tue, 31 Jul 2012, David Erickson wrote:
>>> >> Just got back in town, following up on the prior discussion.  I
>>> >> successfully compiled the latest code (25688 and qemu upstream
>>> >> 5e3bc7144edd6e4fa2824944e5eb16c28197dd5a), but am still having
>>> >> problems during initialization of the card in the guest, in particular
>>> >> the unsupported delivery mode 3 which seems to cause interrupt related
>>> >> problems during init.  I've again attached the qemu-dm-log, and xl
>>> >> dmesg log files, and additionally screenshots of the guest dmesg and
>>> >> also for comparison starting the same livecd natively on the box.
>>> >
>>> > "unsupported delivery mode 3" means that the Linux guest is trying to
>>> > remap the MSI onto an event channel but Xen is still trying to deliver
>>> > the MSI using the emulated code path anyway.
>>> >
>>> > Adding
>>> >
>>> > #define XEN_PT_LOGGING_ENABLED 1
>>> >
>>> > at the top of hw/xen_pt.h and posting the additional QEMU logs could
>>> > be helpful.
>>> >
>>> > The full Xen logs might also be useful. I would add some more tracing to
>>> > the hypervisor too:
>>> >
>>> > diff --git a/xen/drivers/passthrough/io.c b/xen/drivers/passthrough/io.c
>>> > index b5975d1..08f4ab7 100644
>>> > --- a/xen/drivers/passthrough/io.c
>>> > +++ b/xen/drivers/passthrough/io.c
>>> > @@ -474,6 +474,11 @@ static void hvm_pci_msi_assert(
>>> >  {
>>> >      struct pirq *pirq = dpci_pirq(pirq_dpci);
>>> >
>>> > +    printk("DEBUG %s pirq=%d hvm_domain_use_pirq=%d emuirq=%d\n", __func__,
>>> > +            pirq->pirq,
>>> > +            hvm_domain_use_pirq(d, pirq),
>>> > +            pirq->arch.hvm.emuirq);
>>> > +
>>> >      if ( hvm_domain_use_pirq(d, pirq) )
>>> >          send_guest_pirq(d, pirq);
>>> >      else
>>>
>>> Hi Stefano-
>>> I made the modifications (it looks like that #define hasn't been used
>>> in a while; it caused a few compilation issues, and I had to prefix most
>>> of the logged variables with s->hostaddr), and am attaching the
>>> qemu-dm-ubuntu.log and dmesg from xl.  You referred to full Xen logs;
>>> where do I find those?
>>
>> Thanks for the logs!
>> You can get the full Xen logs from the serial console but you can also
>> grab the last few lines with "xl dmesg", like you did and it seems to be
>> enough in this case.
>>
>>
>> The initial MSI remapping has been done:
>>
>> [00:05.0] msi_msix_setup: requested pirq 4 for MSI-X (vec: 0, entry: 0)
>> [00:05.0] msi_msix_update: Updating MSI-X with pirq 4 gvec 0 gflags 0x3037 (entry: 0)
>>
>> But the guest is not issuing the EVTCHNOP_bind_pirq hypercall that is
>> necessary to be able to receive event notifications (emuirq=-1 in the
>> Xen logs).
>>
>> Now we need to figure out why: we still need more logs, this time on the
>> guest side.
>> What is the kernel version that you are using in the guest?
>> Could you please add "debug loglevel=9" to the guest kernel command line
>> and then post the guest dmesg again?
>> It would be great if you could use the emulated serial to get the logs
>> rather than a picture. You can do that by adding serial='pty' to the VM
>> config file and console=ttyS0 to the guest command line.
>> This additional Xen change could also tell us if the EVTCHNOP_bind_pirq
>> has been done:
>>
>>
>> diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
>> index 53777f8..d65a97a 100644
>> --- a/xen/common/event_channel.c
>> +++ b/xen/common/event_channel.c
>> @@ -405,6 +405,8 @@ static long evtchn_bind_pirq(evtchn_bind_pirq_t *bind)
>>  #ifdef CONFIG_X86
>>      if ( is_hvm_domain(d) && domain_pirq_to_irq(d, pirq) > 0 )
>>          map_domain_emuirq_pirq(d, pirq, IRQ_PT);
>> +    printk("DEBUG %s %d pirq=%d irq=%d emuirq=%d\n", __func__, __LINE__,
>> +            pirq, domain_pirq_to_irq(d, pirq), domain_pirq_to_emuirq(d, pirq));
>>  #endif
>>
>>   out:
>
> The guest is an Ubuntu 11.10 livecd, kernel version 3.0.0-12-generic.
> I've also attached all the logs, thanks for the tip on the serial
> console, very useful.
>
> Additionally I've attached logs for booting a solaris livecd (my
> ultimate goal is to use this HBA card in Solaris), with the serial
> console tip I was able to capture its kernel boot as well.

I'm attaching another log from Solaris' kernel debugger. I'm not sure
how helpful it is, but I found it interesting that it didn't detect an
Intel IOMMU/ACPI table and unloaded that module, then tried AMD. I'm
new to Solaris, but comparing this log to one without PCI passthrough,
the npe module never gets loaded in the non-passthrough case, so I
assume it failed while setting up the AMD IOMMU module and then loaded
the npe module to report the error.

-D
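
For reference, the serial-console setup Stefano suggests above boils down to a guest config fragment like the following (a minimal sketch; the exact placement of console=ttyS0 depends on how the guest kernel is booted, e.g. via the livecd's own bootloader entry rather than the extra= line shown here):

```
# xl guest config: expose an emulated serial port as a pty on the host.
# The allocated pty path is printed in the qemu log at guest start.
serial = 'pty'

# Guest kernel command line (direct-boot case): send console output to
# the emulated serial port with verbose logging.
extra = 'console=ttyS0 debug loglevel=9'
```

Reading the pty device on the host then captures the full guest boot log as text, avoiding screenshots.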

--f46d043c06e6f9f25804c64bdf1e
Content-Type: application/octet-stream; name="solaris_kernel_panic.log"
Content-Disposition: attachment; filename="solaris_kernel_panic.log"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_h5e4mxap7

bG9hZCAnbWlzYy9idXNyYScgaWQgMjAgbG9hZGVkIEAgMHhmZmZmZmZmZmY3OTIwMDAwLzB4ZmZm
ZmZmZmZmYmU1YTM4YSBzaXplIDEwMjQKOC85Ngpsb2FkICdtaXNjL2lvdmNmZycgaWQgMjEgbG9h
ZGVkIEAgMHhmZmZmZmZmZmY3OTIzMDAwLzB4ZmZmZmZmZmZmYmU1YTQxMiBzaXplIDEyMgowOC80
ODgKbG9hZCAnbWlzYy9wY2llJyBpZCAxOSBsb2FkZWQgQCAweGZmZmZmZmZmZjc4ZGQwMDAvMHhm
ZmZmZmZmZmZiZTU5YWUyIHNpemUgMjcyNjAKMC8yMjE2CmxvYWQgJ21pc2MvcGNpX2F1dG9jb25m
aWcnIGlkIDE3IGxvYWRlZCBAIDB4ZmZmZmZmZmZmNzg3MzAwMC8weGZmZmZmZmZmZmJlNTgwYmEg
CnNpemUgMzk5NjAvMzI4Cmluc3RhbGxpbmcgcGNpX2F1dG9jb25maWcsIG1vZHVsZSBpZCAxNy4K
aW5zdGFsbGluZyBhY3BpY2EsIG1vZHVsZSBpZCAxOC4KQUNQSTogUlNEUCBmZGEyMCAwMDAyNCAo
djIgICAgWGVuKQpBQ1BJOiBYU0RUIGZjMDA5ZmQwIDAwMDU0ICh2MSAgICBYZW4gICAgICBIVk0g
MDAwMDAwMDAgSFZNTCAwMDAwMDAwMCkKQUNQSTogRkFDUCBmYzAwOTkwMCAwMDBGNCAodjQgICAg
WGVuICAgICAgSFZNIDAwMDAwMDAwIEhWTUwgMDAwMDAwMDApCkFDUEk6IERTRFQgZmMwMDEyYjAg
MDg1Q0QgKHYyICAgIFhlbiAgICAgIEhWTSAwMDAwMDAwMCBJTlRMIDIwMTAwNTI4KQpBQ1BJOiBG
QUNTIGZjMDAxMjcwIDAwMDQwCkFDUEk6IEFQSUMgZmMwMDlhMDAgMDA0NjAgKHYyICAgIFhlbiAg
ICAgIEhWTSAwMDAwMDAwMCBIVk1MIDAwMDAwMDAwKQpBQ1BJOiBIUEVUIGZjMDA5ZWUwIDAwMDM4
ICh2MSAgICBYZW4gICAgICBIVk0gMDAwMDAwMDAgSFZNTCAwMDAwMDAwMCkKQUNQSTogV0FFVCBm
YzAwOWYyMCAwMDAyOCAodjEgICAgWGVuICAgICAgSFZNIDAwMDAwMDAwIEhWTUwgMDAwMDAwMDAp
CkFDUEk6IFNTRFQgZmMwMDlmNTAgMDAwMzEgKHYyICAgIFhlbiAgICAgIEhWTSAwMDAwMDAwMCBJ
TlRMIDIwMTAwNTI4KQpBQ1BJOiBTU0RUIGZjMDA5ZjkwIDAwMDMxICh2MiAgICBYZW4gICAgICBI
Vk0gMDAwMDAwMDAgSU5UTCAyMDEwMDUyOCkKaW5zdGFsbGluZyBwY2llLCBtb2R1bGUgaWQgMTku
Cmluc3RhbGxpbmcgYnVzcmEsIG1vZHVsZSBpZCAyMC4KaW5zdGFsbGluZyBpb3ZjZmcsIG1vZHVs
ZSBpZCAyMS4gICAgICAKbG9hZCAnbWlzYy9hY3BpZGV2JyBpZCAyMiBsb2FkZWQgQCAweGZmZmZm
ZmZmZjc5MjYwMDAvMHhmZmZmZmZmZmZiZTVjNmEyIHNpemUgNzMKNDQ4LzI3MjAKaW5zdGFsbGlu
ZyBhY3BpZGV2LCBtb2R1bGUgaWQgMjIuCmxvYWQgJ2Rydi9pc2EnIGlkIDIzIGxvYWRlZCBAIDB4
ZmZmZmZmZmZmNzkzODAwMC8weGZmZmZmZmZmZmJlNWQxNzIgc2l6ZSAxNjI4MC8xCjMzNgppbnN0
YWxsaW5nIGlzYSwgbW9kdWxlIGlkIDIzLgpVc2luZyBkZWZhdWx0IGRldmljZSBpbnN0YW5jZSBk
YXRhClNNQklPUyB2Mi40IGxvYWRlZCAoMzUzIGJ5dGVzKQpsb2FkICdtYWNoL3VwcGMnIGlkIDI0
IGxvYWRlZCBAIDB4ZmZmZmZmZmZmNzkzYzAwMC8weGZmZmZmZmZmZmJlNWQ2YWEgc2l6ZSAxMzM1
MgovNDg4Cmluc3RhbGxpbmcgdXBwYywgbW9kdWxlIGlkIDI0LgpTa2lwcGluZyBwc206IHhwdl9w
c20KbG9hZCAnbWFjaC9hcGl4JyBpZCAyNiBsb2FkZWQgQCAweGZmZmZmZmZmZjc5NDIwMDAvMHhm
ZmZmZmZmZmZiZTVlYTYyIHNpemUgODQzNDQKLzE4NTYKaW5zdGFsbGluZyBhcGl4LCBtb2R1bGUg
aWQgMjYuCmxvYWQgJ21hY2gvcGNwbHVzbXAnIGlkIDI3IGxvYWRlZCBAIDB4ZmZmZmZmZmZmNzk1
YTAwMC8weGZmZmZmZmZmZmJlNjIyNTMgc2l6ZSA2Cjg3ODQvMTc3NgppbnN0YWxsaW5nIHBjcGx1
c21wLCBtb2R1bGUgaWQgMjcuCmxvYWQgJ2Rydi9yb290bmV4JyBpZCAyOCBsb2FkZWQgQCAweGZm
ZmZmZmZmZjc5NmUwMDAvMHhmZmZmZmZmZmZiZTY2YmY0IHNpemUgMTkwCjU2LzcyMAppbnN0YWxs
aW5nIHJvb3RuZXgsIG1vZHVsZSBpZCAyOC4Kcm9vdCBuZXh1cyA9IGk4NnBjCmxvYWQgJ2Rydi9v
cHRpb25zJyBpZCAyOSBsb2FkZWQgQCAweGZmZmZmZmZmZjc4MTRkMzAvMHhmZmZmZmZmZmZiZTY2
ZWNjIHNpemUgNTkyCi8xOTIKaW5zdGFsbGluZyBvcHRpb25zLCBtb2R1bGUgaWQgMjkuCmxvYWQg
J2Rydi9wc2V1ZG8nIGlkIDMwIGxvYWRlZCBAIDB4ZmZmZmZmZmZmNzg0ZTRmOC8weGZmZmZmZmZm
ZmJlNjZmOGMgc2l6ZSAyNjg4Ci81NjgKaW5zdGFsbGluZyBwc2V1ZG8sIG1vZHVsZSBpZCAzMC4K
cHNldWRvMCBhdCByb290CnBzZXVkbzAgaXMgL3BzZXVkbwpsb2FkICdkcnYvY2xvbmUnIGlkIDMx
IGxvYWRlZCBAIDB4ZmZmZmZmZmZmNzk1OWE5OC8weGZmZmZmZmZmZmJlNjcxYzQgc2l6ZSAxMzEy
Lwo1NjgKaW5zdGFsbGluZyBjbG9uZSwgbW9kdWxlIGlkIDMxLgpsb2FkICdtaXNjL3Njc2knIGlk
IDMzIGxvYWRlZCBAIDB4ZmZmZmZmZmZmNzk4NjAwMC8weGZmZmZmZmZmZmJlNjc2MjQgc2l6ZSAx
NDM0NAowLzE2MDE2CmxvYWQgJ2Rydi9zY3NpX3ZoY2knIGlkIDMyIGxvYWRlZCBAIDB4ZmZmZmZm
ZmZmNzk3MzAwMC8weGZmZmZmZmZmZmJlNjczZmMgc2l6ZSA3CjQwNjQvNTUyCmluc3RhbGxpbmcg
c2NzaV92aGNpLCBtb2R1bGUgaWQgMzIuCmluc3RhbGxpbmcgc2NzaSwgbW9kdWxlIGlkIDMzLgps
b2FkICdtaXNjL3Njc2lfdmhjaS9zY3NpX3ZoY2lfZl9hc3ltX3N1bicgaWQgMzQgbG9hZGVkIEAg
MHhmZmZmZmZmZmY3OTg1MTUwLzB4ZgpmZmZmZmZmZmJlNmI2MWEgc2l6ZSAzNzEyLzIwOAppbnN0
YWxsaW5nIHNjc2lfdmhjaV9mX2FzeW1fc3VuLCBtb2R1bGUgaWQgMzQuCmxvYWQgJ21pc2Mvc2Nz
aV92aGNpL3Njc2lfdmhjaV9mX2FzeW1fbHNpJyBpZCAzNSBsb2FkZWQgQCAweGZmZmZmZmZmZjc5
YWQwMDAvMHhmCmZmZmZmZmZmYmU2YjZlYSBzaXplIDcxNTIvNjQwCmluc3RhbGxpbmcgc2NzaV92
aGNpX2ZfYXN5bV9sc2ksIG1vZHVsZSBpZCAzNS4KbG9hZCAnbWlzYy9zY3NpX3ZoY2kvc2NzaV92
aGNpX2ZfYXN5bV9lbWMnIGlkIDM2IGxvYWRlZCBAIDB4ZmZmZmZmZmZmNzlhZjAwMC8weGYKZmZm
ZmZmZmZiZTZiOTZhIHNpemUgNDI0OC8yMzIKaW5zdGFsbGluZyBzY3NpX3ZoY2lfZl9hc3ltX2Vt
YywgbW9kdWxlIGlkIDM2Lgpsb2FkICdtaXNjL3Njc2lfdmhjaS9zY3NpX3ZoY2lfZl9zeW1fZW1j
JyBpZCAzNyBsb2FkZWQgQCAweGZmZmZmZmZmZjc4NmVjNjgvMHhmZgpmZmZmZmZmYmU2YmE1MiBz
aXplIDg2NC8yMDgKaW5zdGFsbGluZyBzY3NpX3ZoY2lfZl9zeW1fZW1jLCBtb2R1bGUgaWQgMzcu
CmxvYWQgJ21pc2Mvc2NzaV92aGNpL3Njc2lfdmhjaV9mX3N5bV9oZHMnIGlkIDM4IGxvYWRlZCBA
IDB4ZmZmZmZmZmZmNzgwMThhMC8weGZmCmZmZmZmZmZiZTZiYjIyIHNpemUgMTg1Ni8yMDAKaW5z
dGFsbGluZyBzY3NpX3ZoY2lfZl9zeW1faGRzLCBtb2R1bGUgaWQgMzguCmxvYWQgJ21pc2Mvc2Nz
aV92aGNpL3Njc2lfdmhjaV9mX3N5bScgaWQgMzkgbG9hZGVkIEAgMHhmZmZmZmZmZmY3OTU2OTc4
LzB4ZmZmZmZmCmZmZmJlNmJiZWEgc2l6ZSAxNTA0LzMzNgppbnN0YWxsaW5nIHNjc2lfdmhjaV9m
X3N5bSwgbW9kdWxlIGlkIDM5Lgpsb2FkICdtaXNjL3Njc2lfdmhjaS9zY3NpX3ZoY2lfZl90cGdz
JyBpZCA0MCBsb2FkZWQgQCAweGZmZmZmZmZmZmJiZmYzNTAvMHhmZmZmZgpmZmZmYmU2YmQzYSBz
aXplIDMwNzIvMTkyCmluc3RhbGxpbmcgc2NzaV92aGNpX2ZfdHBncywgbW9kdWxlIGlkIDQwLgpz
Y3NpX3ZoY2kwIGF0IHJvb3QKc2NzaV92aGNpMCBpcyAvc2NzaV92aGNpCnVuaW5zdGFsbGVkIGFw
aXgKdW5sb2FkaW5nIGFwaXgsIG1vZHVsZSBpZCAyNiwgbG9hZGNudCAxLgpsb2FkICdtaXNjL2lv
bW11JyBpZCA0MiBsb2FkZWQgQCAweGZmZmZmZmZmZjc5NGFlZDAvMHhmZmZmZmZmZmZiZTVlZGIy
IHNpemUgMjI2OAowLzU4NApsb2FkICdkcnYvaW50ZWxfaW9tbXUnIGlkIDQxIGxvYWRlZCBAIDB4
ZmZmZmZmZmZmNzk0MjAwMC8weGZmZmZmZmZmZmJlNWVhNjIgc2l6ZQogMzY1NjAvODQ4Cmluc3Rh
bGxpbmcgaW50ZWxfaW9tbXUsIG1vZHVsZSBpZCA0MS4gCmluc3RhbGxpbmcgaW9tbXUsIG1vZHVs
ZSBpZCA0Mi4KTm8gRE1BUiBBQ1BJIHRhYmxlLiBObyBJbnRlbCBJT01NVSBwcmVzZW50Cgp1bmxv
YWRpbmcgaW50ZWxfaW9tbXUsIG1vZHVsZSBpZCA0MSwgbG9hZGNudCAxLgpsb2FkICdkcnYvYW1k
X2lvbW11JyBpZCA0MyBsb2FkZWQgQCAweGZmZmZmZmZmZjc5NDIwMDAvMHhmZmZmZmZmZmZiZTVl
YTYyIHNpemUgMwo2MDgwLzUyMAppbnN0YWxsaW5nIGFtZF9pb21tdSwgbW9kdWxlIGlkIDQzLgp1
bmxvYWRpbmcgYW1kX2lvbW11LCBtb2R1bGUgaWQgNDMsIGxvYWRjbnQgMS4KbG9hZCAnZHJ2L25w
ZScgaWQgNDQgbG9hZGVkIEAgMHhmZmZmZmZmZmY3OTQyMDAwLzB4ZmZmZmZmZmZmYmU1ZjAzMiBz
aXplIDMyNjcyLzMKMTM2Cmluc3RhbGxpbmcgbnBlLCBtb2R1bGUgaWQgNDQuCm5wZTAgYXQgcm9v
dDogc3BhY2UgMCBvZmZzZXQgMApucGUwIGlzIC9wY2lAMCwwCnRyYXA6IFVua25vd24gdHJhcCB0
eXBlIDggaW4gdXNlciBtb2RlCgpwYW5pY1tjcHUwXS90aHJlYWQ9ZmZmZmZmZmZmYmMzNmRlMDog
CkJBRCBUUkFQOiB0eXBlPWQgKCNncCBHZW5lcmFsIHByb3RlY3Rpb24pIHJwPWZmZmZmZmZmZmJj
NDg2NjAgYWRkcj1mMDAwZmY1M2YwMDBmCmYwMAoKCiNncCBHZW5lcmFsIHByb3RlY3Rpb24KYWRk
cj0weGYwMDBmZjUzZjAwMGZmMDAKcGlkPTAsIHBjPTB4ZmZmZmZmZmZmYjg2NWYxZCwgc3A9MHhm
ZmZmZmZmZmZiYzQ4NzU4LCBlZmxhZ3M9MHgxMDI4NgpjcjA6IDgwMDUwMDNiPHBnLHdwLG5lLGV0
LHRzLG1wLHBlPiBjcjQ6IDZiODx4bW1lLGZ4c3IscGdlLHBhZSxwc2UsZGU+CmNyMjogMApjcjM6
IGY4ZTYwMDAKY3I4OiAwCgogICAgICAgIHJkaTogICAgICAgICAgICAgICAgMCByc2k6ICAgICAg
ICAgICAgICAgIDEgcmR4OiAgICAgICAgICAgICAgIDQwCiAgICAgICAgcmN4OiAgICAgICAgICAg
ICAgICAyICByODogZmZmZmZmZmZmYmM0ODg3MCAgcjk6ICAgICAgICAgICAgICAgIDAKICAgICAg
ICByYXg6IGZmZmZmZmZmZmJjMzZkZTAgcmJ4OiAgICAgICAgICAgICAgICAwIHJicDogZmZmZmZm
ZmZmYmM0ODdiMAogICAgICAgIHIxMDogZmZmZmZmZmZmYjg1YjdkOCByMTE6IGYwMDBmZjUzZjAw
MGZmMDAgcjEyOiAgICAgICAgICAgICAgICAwCiAgICAgICAgcjEzOiBmMDAwZmY1M2YwMDBmZjAw
IHIxNDogICAgICAgICAgICAgICAgMSByMTU6IGYwMDBmZjUzZjAwMGZmMDAKICAgICAgICBmc2I6
ICAgICAgICAyMDAwMDAwMDAgZ3NiOiBmZmZmZmZmZmZiYzNlYmMwICBkczogICAgICAgICAgICAg
ICAgMAogICAgICAgICBlczogICAgICAgICAgICAgICAgMCAgZnM6ICAgICAgICAgICAgICAgIDAg
IGdzOiAgICAgICAgICAgICAgICAwCiAgICAgICAgdHJwOiAgICAgICAgICAgICAgICBkIGVycjog
ICAgICAgICAgICAgICAgMCByaXA6IGZmZmZmZmZmZmI4NjVmMWQKICAgICAgICAgY3M6ICAgICAg
ICAgICAgICAgMzAgcmZsOiAgICAgICAgICAgIDEwMjg2IHJzcDogZmZmZmZmZmZmYmM0ODc1OAog
ICAgICAgICBzczogICAgICAgICAgICAgICAzOAoKV2FybmluZyAtIHN0YWNrIG5vdCB3cml0dGVu
IHRvIHRoZSBkdW1wIGJ1ZmZlcgpmZmZmZmZmZmZiYzQ4NTgwIHVuaXg6ZGllKzEzMSAoKQpmZmZm
ZmZmZmZiYzQ4NjUwIHVuaXg6dHJhcCszYjIgKCkKZmZmZmZmZmZmYmM0ODY2MCB1bml4OmNtbnRy
YXArZTYgKCkKZmZmZmZmZmZmYmM0ODdiMCB1bml4Om11dGV4X293bmVyX3J1bm5pbmcrZCAoKQpm
ZmZmZmZmZmZiYzQ4ODQwIGdlbnVuaXg6ZHVtcF9vbmVfY29yZSs2YiAoKQpmZmZmZmZmZmZiYzQ4
OGUwIGdlbnVuaXg6Y29yZSs0MTkgKCkgIApmZmZmZmZmZmZiYzQ4YTEwIHVuaXg6a2Vybl9ncGZh
dWx0KzE4OCAoKQpmZmZmZmZmZmZiYzQ4YWUwIHVuaXg6dHJhcCszOTMgKCkKZmZmZmZmZmZmYmM0
OGFmMCB1bml4OmNtbnRyYXArZTYgKCkKClswXT4gOjpzdGF0dXMKZGVidWdnaW5nIGxpdmUga2Vy
bmVsICg2NC1iaXQpIG9uIChub3Qgc2V0KQpvcGVyYXRpbmcgc3lzdGVtOiA1LjExIDExLjAgKGk4
NnBjKQppbWFnZSB1dWlkOiAobm90IHNldCkKRFRyYWNlIHN0YXRlOiBpbmFjdGl2ZQpzdG9wcGVk
IG9uOiBkZWJ1Z2dlciBlbnRyeSB0cmFwClswXT4gJGMKa21kYl9lbnRlcisweGIoKQpkZWJ1Z19l
bnRlcisweDNmKGZmZmZmZmZmZmI5NTlkNjApCnBhbmljc3lzKzB4NWJkKGZmZmZmZmZmZmI5NTc0
MTgsIGZmZmZmZmZmZmJjNDg0ZDAsIGZmZmZmZmZmZmJjODdiYjAsIDEpCnZwYW5pYysweDE1Yygp
CnBhbmljKzB4OTQoKQpkaWUrMHgxMzEoZCwgZmZmZmZmZmZmYmM0ODY2MCwgZjAwMGZmNTNmMDAw
ZmYwMCwgMCkKdHJhcCsweDNiMihmZmZmZmZmZmZiYzQ4NjYwLCBmMDAwZmY1M2YwMDBmZjAwLCAw
KQoweGZmZmZmZmZmZmI4MDAxZDYoKQptdXRleF9vd25lcl9ydW5uaW5nKzB4ZCgpCmR1bXBfb25l
X2NvcmUrMHg2YihiLCBmZmZmZmZmZiwgMSwgMCwgZmZmZmZmZmZmYmM0ODg3MCkKY29yZSsweDQx
OShiLCAwKQprZXJuX2dwZmF1bHQrMHgxODgoZmZmZmZmZmZmYmM0OGFmMCkKdHJhcCsweDM5Myhm
ZmZmZmZmZmZiYzQ4YWYwLCBmZmZmZmZmZmZiYzdmNmQ4LCAwKQoweGZmZmZmZmZmZmI4MDAxZDYo
KQpbMF0+IAoK
--f46d043c06e6f9f25804c64bdf1e
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--f46d043c06e6f9f25804c64bdf1e--


ZmZmZmZmZjc5NmUwMDAvMHhmZmZmZmZmZmZiZTY2YmY0IHNpemUgMTkwCjU2LzcyMAppbnN0YWxs
aW5nIHJvb3RuZXgsIG1vZHVsZSBpZCAyOC4Kcm9vdCBuZXh1cyA9IGk4NnBjCmxvYWQgJ2Rydi9v
cHRpb25zJyBpZCAyOSBsb2FkZWQgQCAweGZmZmZmZmZmZjc4MTRkMzAvMHhmZmZmZmZmZmZiZTY2
ZWNjIHNpemUgNTkyCi8xOTIKaW5zdGFsbGluZyBvcHRpb25zLCBtb2R1bGUgaWQgMjkuCmxvYWQg
J2Rydi9wc2V1ZG8nIGlkIDMwIGxvYWRlZCBAIDB4ZmZmZmZmZmZmNzg0ZTRmOC8weGZmZmZmZmZm
ZmJlNjZmOGMgc2l6ZSAyNjg4Ci81NjgKaW5zdGFsbGluZyBwc2V1ZG8sIG1vZHVsZSBpZCAzMC4K
cHNldWRvMCBhdCByb290CnBzZXVkbzAgaXMgL3BzZXVkbwpsb2FkICdkcnYvY2xvbmUnIGlkIDMx
IGxvYWRlZCBAIDB4ZmZmZmZmZmZmNzk1OWE5OC8weGZmZmZmZmZmZmJlNjcxYzQgc2l6ZSAxMzEy
Lwo1NjgKaW5zdGFsbGluZyBjbG9uZSwgbW9kdWxlIGlkIDMxLgpsb2FkICdtaXNjL3Njc2knIGlk
IDMzIGxvYWRlZCBAIDB4ZmZmZmZmZmZmNzk4NjAwMC8weGZmZmZmZmZmZmJlNjc2MjQgc2l6ZSAx
NDM0NAowLzE2MDE2CmxvYWQgJ2Rydi9zY3NpX3ZoY2knIGlkIDMyIGxvYWRlZCBAIDB4ZmZmZmZm
ZmZmNzk3MzAwMC8weGZmZmZmZmZmZmJlNjczZmMgc2l6ZSA3CjQwNjQvNTUyCmluc3RhbGxpbmcg
c2NzaV92aGNpLCBtb2R1bGUgaWQgMzIuCmluc3RhbGxpbmcgc2NzaSwgbW9kdWxlIGlkIDMzLgps
b2FkICdtaXNjL3Njc2lfdmhjaS9zY3NpX3ZoY2lfZl9hc3ltX3N1bicgaWQgMzQgbG9hZGVkIEAg
MHhmZmZmZmZmZmY3OTg1MTUwLzB4ZgpmZmZmZmZmZmJlNmI2MWEgc2l6ZSAzNzEyLzIwOAppbnN0
YWxsaW5nIHNjc2lfdmhjaV9mX2FzeW1fc3VuLCBtb2R1bGUgaWQgMzQuCmxvYWQgJ21pc2Mvc2Nz
aV92aGNpL3Njc2lfdmhjaV9mX2FzeW1fbHNpJyBpZCAzNSBsb2FkZWQgQCAweGZmZmZmZmZmZjc5
YWQwMDAvMHhmCmZmZmZmZmZmYmU2YjZlYSBzaXplIDcxNTIvNjQwCmluc3RhbGxpbmcgc2NzaV92
aGNpX2ZfYXN5bV9sc2ksIG1vZHVsZSBpZCAzNS4KbG9hZCAnbWlzYy9zY3NpX3ZoY2kvc2NzaV92
aGNpX2ZfYXN5bV9lbWMnIGlkIDM2IGxvYWRlZCBAIDB4ZmZmZmZmZmZmNzlhZjAwMC8weGYKZmZm
ZmZmZmZiZTZiOTZhIHNpemUgNDI0OC8yMzIKaW5zdGFsbGluZyBzY3NpX3ZoY2lfZl9hc3ltX2Vt
YywgbW9kdWxlIGlkIDM2Lgpsb2FkICdtaXNjL3Njc2lfdmhjaS9zY3NpX3ZoY2lfZl9zeW1fZW1j
JyBpZCAzNyBsb2FkZWQgQCAweGZmZmZmZmZmZjc4NmVjNjgvMHhmZgpmZmZmZmZmYmU2YmE1MiBz
aXplIDg2NC8yMDgKaW5zdGFsbGluZyBzY3NpX3ZoY2lfZl9zeW1fZW1jLCBtb2R1bGUgaWQgMzcu
CmxvYWQgJ21pc2Mvc2NzaV92aGNpL3Njc2lfdmhjaV9mX3N5bV9oZHMnIGlkIDM4IGxvYWRlZCBA
IDB4ZmZmZmZmZmZmNzgwMThhMC8weGZmCmZmZmZmZmZiZTZiYjIyIHNpemUgMTg1Ni8yMDAKaW5z
dGFsbGluZyBzY3NpX3ZoY2lfZl9zeW1faGRzLCBtb2R1bGUgaWQgMzguCmxvYWQgJ21pc2Mvc2Nz
aV92aGNpL3Njc2lfdmhjaV9mX3N5bScgaWQgMzkgbG9hZGVkIEAgMHhmZmZmZmZmZmY3OTU2OTc4
LzB4ZmZmZmZmCmZmZmJlNmJiZWEgc2l6ZSAxNTA0LzMzNgppbnN0YWxsaW5nIHNjc2lfdmhjaV9m
X3N5bSwgbW9kdWxlIGlkIDM5Lgpsb2FkICdtaXNjL3Njc2lfdmhjaS9zY3NpX3ZoY2lfZl90cGdz
JyBpZCA0MCBsb2FkZWQgQCAweGZmZmZmZmZmZmJiZmYzNTAvMHhmZmZmZgpmZmZmYmU2YmQzYSBz
aXplIDMwNzIvMTkyCmluc3RhbGxpbmcgc2NzaV92aGNpX2ZfdHBncywgbW9kdWxlIGlkIDQwLgpz
Y3NpX3ZoY2kwIGF0IHJvb3QKc2NzaV92aGNpMCBpcyAvc2NzaV92aGNpCnVuaW5zdGFsbGVkIGFw
aXgKdW5sb2FkaW5nIGFwaXgsIG1vZHVsZSBpZCAyNiwgbG9hZGNudCAxLgpsb2FkICdtaXNjL2lv
bW11JyBpZCA0MiBsb2FkZWQgQCAweGZmZmZmZmZmZjc5NGFlZDAvMHhmZmZmZmZmZmZiZTVlZGIy
IHNpemUgMjI2OAowLzU4NApsb2FkICdkcnYvaW50ZWxfaW9tbXUnIGlkIDQxIGxvYWRlZCBAIDB4
ZmZmZmZmZmZmNzk0MjAwMC8weGZmZmZmZmZmZmJlNWVhNjIgc2l6ZQogMzY1NjAvODQ4Cmluc3Rh
bGxpbmcgaW50ZWxfaW9tbXUsIG1vZHVsZSBpZCA0MS4gCmluc3RhbGxpbmcgaW9tbXUsIG1vZHVs
ZSBpZCA0Mi4KTm8gRE1BUiBBQ1BJIHRhYmxlLiBObyBJbnRlbCBJT01NVSBwcmVzZW50Cgp1bmxv
YWRpbmcgaW50ZWxfaW9tbXUsIG1vZHVsZSBpZCA0MSwgbG9hZGNudCAxLgpsb2FkICdkcnYvYW1k
X2lvbW11JyBpZCA0MyBsb2FkZWQgQCAweGZmZmZmZmZmZjc5NDIwMDAvMHhmZmZmZmZmZmZiZTVl
YTYyIHNpemUgMwo2MDgwLzUyMAppbnN0YWxsaW5nIGFtZF9pb21tdSwgbW9kdWxlIGlkIDQzLgp1
bmxvYWRpbmcgYW1kX2lvbW11LCBtb2R1bGUgaWQgNDMsIGxvYWRjbnQgMS4KbG9hZCAnZHJ2L25w
ZScgaWQgNDQgbG9hZGVkIEAgMHhmZmZmZmZmZmY3OTQyMDAwLzB4ZmZmZmZmZmZmYmU1ZjAzMiBz
aXplIDMyNjcyLzMKMTM2Cmluc3RhbGxpbmcgbnBlLCBtb2R1bGUgaWQgNDQuCm5wZTAgYXQgcm9v
dDogc3BhY2UgMCBvZmZzZXQgMApucGUwIGlzIC9wY2lAMCwwCnRyYXA6IFVua25vd24gdHJhcCB0
eXBlIDggaW4gdXNlciBtb2RlCgpwYW5pY1tjcHUwXS90aHJlYWQ9ZmZmZmZmZmZmYmMzNmRlMDog
CkJBRCBUUkFQOiB0eXBlPWQgKCNncCBHZW5lcmFsIHByb3RlY3Rpb24pIHJwPWZmZmZmZmZmZmJj
NDg2NjAgYWRkcj1mMDAwZmY1M2YwMDBmCmYwMAoKCiNncCBHZW5lcmFsIHByb3RlY3Rpb24KYWRk
cj0weGYwMDBmZjUzZjAwMGZmMDAKcGlkPTAsIHBjPTB4ZmZmZmZmZmZmYjg2NWYxZCwgc3A9MHhm
ZmZmZmZmZmZiYzQ4NzU4LCBlZmxhZ3M9MHgxMDI4NgpjcjA6IDgwMDUwMDNiPHBnLHdwLG5lLGV0
LHRzLG1wLHBlPiBjcjQ6IDZiODx4bW1lLGZ4c3IscGdlLHBhZSxwc2UsZGU+CmNyMjogMApjcjM6
IGY4ZTYwMDAKY3I4OiAwCgogICAgICAgIHJkaTogICAgICAgICAgICAgICAgMCByc2k6ICAgICAg
ICAgICAgICAgIDEgcmR4OiAgICAgICAgICAgICAgIDQwCiAgICAgICAgcmN4OiAgICAgICAgICAg
ICAgICAyICByODogZmZmZmZmZmZmYmM0ODg3MCAgcjk6ICAgICAgICAgICAgICAgIDAKICAgICAg
ICByYXg6IGZmZmZmZmZmZmJjMzZkZTAgcmJ4OiAgICAgICAgICAgICAgICAwIHJicDogZmZmZmZm
ZmZmYmM0ODdiMAogICAgICAgIHIxMDogZmZmZmZmZmZmYjg1YjdkOCByMTE6IGYwMDBmZjUzZjAw
MGZmMDAgcjEyOiAgICAgICAgICAgICAgICAwCiAgICAgICAgcjEzOiBmMDAwZmY1M2YwMDBmZjAw
IHIxNDogICAgICAgICAgICAgICAgMSByMTU6IGYwMDBmZjUzZjAwMGZmMDAKICAgICAgICBmc2I6
ICAgICAgICAyMDAwMDAwMDAgZ3NiOiBmZmZmZmZmZmZiYzNlYmMwICBkczogICAgICAgICAgICAg
ICAgMAogICAgICAgICBlczogICAgICAgICAgICAgICAgMCAgZnM6ICAgICAgICAgICAgICAgIDAg
IGdzOiAgICAgICAgICAgICAgICAwCiAgICAgICAgdHJwOiAgICAgICAgICAgICAgICBkIGVycjog
ICAgICAgICAgICAgICAgMCByaXA6IGZmZmZmZmZmZmI4NjVmMWQKICAgICAgICAgY3M6ICAgICAg
ICAgICAgICAgMzAgcmZsOiAgICAgICAgICAgIDEwMjg2IHJzcDogZmZmZmZmZmZmYmM0ODc1OAog
ICAgICAgICBzczogICAgICAgICAgICAgICAzOAoKV2FybmluZyAtIHN0YWNrIG5vdCB3cml0dGVu
IHRvIHRoZSBkdW1wIGJ1ZmZlcgpmZmZmZmZmZmZiYzQ4NTgwIHVuaXg6ZGllKzEzMSAoKQpmZmZm
ZmZmZmZiYzQ4NjUwIHVuaXg6dHJhcCszYjIgKCkKZmZmZmZmZmZmYmM0ODY2MCB1bml4OmNtbnRy
YXArZTYgKCkKZmZmZmZmZmZmYmM0ODdiMCB1bml4Om11dGV4X293bmVyX3J1bm5pbmcrZCAoKQpm
ZmZmZmZmZmZiYzQ4ODQwIGdlbnVuaXg6ZHVtcF9vbmVfY29yZSs2YiAoKQpmZmZmZmZmZmZiYzQ4
OGUwIGdlbnVuaXg6Y29yZSs0MTkgKCkgIApmZmZmZmZmZmZiYzQ4YTEwIHVuaXg6a2Vybl9ncGZh
dWx0KzE4OCAoKQpmZmZmZmZmZmZiYzQ4YWUwIHVuaXg6dHJhcCszOTMgKCkKZmZmZmZmZmZmYmM0
OGFmMCB1bml4OmNtbnRyYXArZTYgKCkKClswXT4gOjpzdGF0dXMKZGVidWdnaW5nIGxpdmUga2Vy
bmVsICg2NC1iaXQpIG9uIChub3Qgc2V0KQpvcGVyYXRpbmcgc3lzdGVtOiA1LjExIDExLjAgKGk4
NnBjKQppbWFnZSB1dWlkOiAobm90IHNldCkKRFRyYWNlIHN0YXRlOiBpbmFjdGl2ZQpzdG9wcGVk
IG9uOiBkZWJ1Z2dlciBlbnRyeSB0cmFwClswXT4gJGMKa21kYl9lbnRlcisweGIoKQpkZWJ1Z19l
bnRlcisweDNmKGZmZmZmZmZmZmI5NTlkNjApCnBhbmljc3lzKzB4NWJkKGZmZmZmZmZmZmI5NTc0
MTgsIGZmZmZmZmZmZmJjNDg0ZDAsIGZmZmZmZmZmZmJjODdiYjAsIDEpCnZwYW5pYysweDE1Yygp
CnBhbmljKzB4OTQoKQpkaWUrMHgxMzEoZCwgZmZmZmZmZmZmYmM0ODY2MCwgZjAwMGZmNTNmMDAw
ZmYwMCwgMCkKdHJhcCsweDNiMihmZmZmZmZmZmZiYzQ4NjYwLCBmMDAwZmY1M2YwMDBmZjAwLCAw
KQoweGZmZmZmZmZmZmI4MDAxZDYoKQptdXRleF9vd25lcl9ydW5uaW5nKzB4ZCgpCmR1bXBfb25l
X2NvcmUrMHg2YihiLCBmZmZmZmZmZiwgMSwgMCwgZmZmZmZmZmZmYmM0ODg3MCkKY29yZSsweDQx
OShiLCAwKQprZXJuX2dwZmF1bHQrMHgxODgoZmZmZmZmZmZmYmM0OGFmMCkKdHJhcCsweDM5Myhm
ZmZmZmZmZmZiYzQ4YWYwLCBmZmZmZmZmZmZiYzdmNmQ4LCAwKQoweGZmZmZmZmZmZmI4MDAxZDYo
KQpbMF0+IAoK
--f46d043c06e6f9f25804c64bdf1e
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--f46d043c06e6f9f25804c64bdf1e--


From xen-devel-bounces@lists.xen.org Thu Aug 02 17:56:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 17:56:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Swzci-0001pW-Fh; Thu, 02 Aug 2012 17:55:48 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lars.kurth@citrix.com>) id 1Swzcg-0001pJ-TG
	for xen-devel@lists.xensource.com; Thu, 02 Aug 2012 17:55:47 +0000
X-Env-Sender: lars.kurth@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1343930140!10567654!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24745 invoked from network); 2 Aug 2012 17:55:40 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 17:55:40 -0000
X-IronPort-AV: E=Sophos;i="4.77,702,1336348800"; d="scan'208";a="13828730"
Received: from lonpmailmx02.citrite.net ([10.30.203.163])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 17:55:16 +0000
Received: from LONPMAILBOX01.citrite.net ([10.30.224.160]) by
	LONPMAILMX02.citrite.net ([10.30.203.163]) with mapi; Thu, 2 Aug 2012
	18:55:16 +0100
From: Lars Kurth <lars.kurth@citrix.com>
To: Dario Faggioli <dario.faggioli@citrix.com>
Date: Thu, 2 Aug 2012 18:55:14 +0100
Thread-Topic: What about a Fedora TestDay about Xen?
Thread-Index: Ac1w0JuLkMnyhBD7QXGjhLgXeGagRwABsuZg
Message-ID: <344C0F67BC927847A2C92F9EE358DB0EED0B6AD9A7@LONPMAILBOX01.citrite.net>
References: <1343926951.4873.36.camel@Solace>
In-Reply-To: <1343926951.4873.36.camel@Solace>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
MIME-Version: 1.0
Cc: George Dunlap <dunlapg@gmail.com>,
	xen-devel <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] What about a Fedora TestDay about Xen?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

RGFyaW8sDQoNCkkgYW0gbm90IHN1cmUgd2hldGhlciB3ZSBzaG91bGQgYW5nbGUgZm9yIGEgWGVu
IEZlZG9yYSBUZXN0IGRheSB0aGlzIHRpbWUgcm91bmQuIA0KDQpXZSBzaG91bGQgZmlyc3QgbWFr
ZSBzdXJlIHRoYXQgd2UgaGF2ZSBnb29kIHByZXNlbmNlIGF0IHRoZSBWaXJ0dWFsaXphdGlvbiBU
ZXN0IERheSAod2UgZ290IGEgYml0LCBidXQgbmV2ZXIgYXMgbXVjaCBhcyBLVk0pIGFuZCB0cnkg
YW5kIGRvIGFsbCB0aGUgdGhpbmdzIHdlIHdvdWxkIGZvciBhIFhlbiBUZXN0IERheS4gVGhhdCB3
YXksIHdlIGdldCB0byBwcmFjdGljZSBhbmQgYnVpbGQgdXAgY3JlZGliaWxpdHkgZm9yIHRoZSBu
ZXh0IEZlZG9yYSByZWxlYXNlIHdoZXJlIGl0IHByb2JhYmx5IGRvZXMgbWFrZSBzZW5zZSB0byBk
byBvdXIgb3duIHRlc3QgZGF5LiBJZiB3ZSBkbyBhIFhlbiBGZWRvcmEgVGVzdCBEYXkgYW5kIG5v
Ym9keSB0dXJucyB1cCB0aGlzIGNhbiBiYWNrZmlyZSBiYWRseQ0KDQpEb2VzIHRoaXMgbWFrZSBz
ZW5zZT8NCg0KUmVnYXJkcw0KTGFycw0KDQotLS0tLU9yaWdpbmFsIE1lc3NhZ2UtLS0tLQ0KRnJv
bTogRGFyaW8gRmFnZ2lvbGkgW21haWx0bzpkYXJpby5mYWdnaW9saUBjaXRyaXguY29tXSANClNl
bnQ6IDAyIEF1Z3VzdCAyMDEyIDE4OjAzDQpUbzogTGFycyBLdXJ0aA0KQ2M6IHhlbi1kZXZlbDsg
R2VvcmdlIER1bmxhcDsgUGFzaSBLw6Rya2vDpGluZW4NClN1YmplY3Q6IFdoYXQgYWJvdXQgYSBG
ZWRvcmEgVGVzdERheSBhYm91dCBYZW4/DQoNCkhpIExhcnMsIEhpIGV2ZXJ5b25lLA0KDQpBcyB0
aGVyZSBpcyBzb21lIG9uZ29pbmcgZGlzY3Vzc2lvbiBvbiBYZW4gVGVzdERheXMsIGFsbG93IG1l
IHRvIG1lbnRpb24gdGhhdCBGZWRvcmEgZG9lcyBUZXN0RGF5cyBhcyBhIHBhcnQgb2YgdGhlaXIg
UUEgcHJvY2VzcyBmb3IgZWFjaCByZWxlYXNlLiBJJ20gcXVpdGUgYSBiaXQgY29udmluY2VkIHRo
YXQgaXQgY291bGQgYmUgd29ydGh3aGlsZSB0byAodHJ5DQp0bykgb3JnYW5pemUgb25lIG9mIHRo
ZW0gd2l0aCBmb2N1cyBvbiBYZW5bKl0uDQoNCkJlbG93IHRoZXJlIGFyZSBzb21lIGluZm9ybWF0
aW9uIEkgY29sbGVjdGVkIGZyb20gdGhlIEZlZG9yYSBXaWtpLi4uDQpCYXNpY2FsbHkgdGhlIHB1
cnBvc2Ugb2YgdGhpcyBlLW1haWwgaXMgdG8gZ2F0aGVyIHRob3VnaHRzIG9mIHRoZSBYZW4gY29t
bXVuaXR5IGFib3V0IHN1Y2ggYSB0aGluZyBhbmQsIG1vcmUgaW1wb3J0YW50LCBzb21lIHdpbGwg
dG8gaGVscCBhIGJpdCB3aXRoIHRoZSBvcmdhbml6YXRpb24hIDotUA0KDQpIZXJlIHNvbWUgYmFz
aWMgbGlua3MgYWJvdXQgdGhlIFRlc3REYXlzOg0KIGh0dHBzOi8vZmVkb3JhcHJvamVjdC5vcmcv
d2lraS9RQS9UZXN0X0RheXMNCiBodHRwczovL2ZlZG9yYXByb2plY3Qub3JnL3dpa2kvUUEvU09Q
X1Rlc3RfRGF5X21hbmFnZW1lbnQNCiBodHRwczovL2ZlZG9yYXByb2plY3Qub3JnL3dpa2kvUUEv
RmVkb3JhXzE4X3Rlc3RfZGF5cw0KDQpUaGUgbGFzdCBvbmUgaXMgdGhlIGN1cnJlbnQgc2NoZWR1
bGUsIHdoaWNoIGlzIHF1aXRlIGZ1bGwuIEFsc28sIHRoZXJlIGFscmVhZHkgaXMgYSAnVmlydHVh
bGl6YXRpb24nIHRlc3QgZGF5LiBIb3dldmVyLCBJIHRoaW5rIGl0IGNvdWxkIHN0aWxsIGJlIHVz
ZWZ1bCB0byBoYXZlIG9uZSBkZWRpY2F0ZWQgdG8gWGVuLg0KDQpXaGF0IEkgd2FzIHRoaW5raW5n
IHdhcyB0byBmaWxlIGEgdGlja2V0IGZvciBpdCwgcmVxdWVzdGluZyBmb3IgdGhlDQooY3VycmVu
dGx5KSBhdmFpbGFibGUgc3BvdCBvbiAyMDEyLTA5LTI3LiBJZiB0aGF0IGRvZXMgbm90IHdvcmsg
Zm9yIHRoZW0sIHdlIGNvdWxkIGFzIGZvciBzb21ldGhpbmcgbmVhciB0byB0aGUgVmlydHVhbGl6
YXRpb24gdGVzdCBkYXksIGxpa2UgdGhlIGRheSBiZWZvcmUgb3Igc28gKGp1c3QgYXMgdGhleSBh
cmUgZG9pbmcgd2l0aCB0aGUgWCBUZXN0IFdlZWspLg0KDQpUaGUgdGlja2V0IHByb3Bvc2luZyB0
aGUgVmlydHVhbGl6YXRpb24gdGVzdCBkYXkgaXMgdGhpcyBvbmU6DQogaHR0cHM6Ly9mZWRvcmFo
b3N0ZWQub3JnL2ZlZG9yYS1xYS90aWNrZXQvMzAzDQoNClNvLCBhcyB5b3Ugc2VlLCB0aGVyZSBp
cyB2ZXJ5IGZldyB0byBkbyByaWdodCBub3cuIEhvd2V2ZXIsIGlmIHRoZXkgYWNjZXB0IGl0LCB0
aGVyZSB3aWxsIGJlIHdvcmsgdG8gZG8sIGFzIHBlciBodHRwczovL2ZlZG9yYXByb2plY3Qub3Jn
L3dpa2kvUUEvU09QX1Rlc3RfRGF5X21hbmFnZW1lbnQsIHdpdGggdGhlIGNoYWxsZW5naW5nIGlz
c3VlcyBiZWluZywgYWNjb3JkaW5nIHRvIG1lLCB0aGUgZm9sbG93aW5nOg0KIC0gYnVpbGRpbmcg
YSBsaXZlLUNEIChub3QgbWFuZGF0b3J5IGJ1dCBnb29kIHRvIGhhdmUpDQogLSBkZWZpbmluZyB0
ZXN0IGNhc2VzDQogLSBwcm9tb3RpbmcgdGhlIGV2ZW50IHByb3Blcmx5DQoNCkkgY2FuIHRyeSB0
byBkZWFsIHdpdGggdGhlIGxpdmUtQ0QsIGFuZCBJIGd1ZXNzIEkgYWxzbyBjYW4gKG1heWJlIHdp
dGggc29tZSBoZWxwIGZyb20gTGFycykgdHJ5IHRvIHByb21vdGUgaXQgYXMgbXVjaCBhcyBwb3Nz
aWJsZSwgdXNpbmcgYWxsIG91ciB1c3VhbCBhbmQgdW51c3VhbCBjaGFubmVscy4NCg0KV2hlcmUg
SSBjb3VsZCB1c2Ugc29tZSBoZWxwIGZyb20gdGhlIHdob2xlIFhlbiBjb21tdW5pdHkgaXMgaW4g
ZGVmaW5pbmcgbWVhbmluZ2Z1bCB0ZXN0IGNhc2VzIGZvciBzdWNoIGFuIGV2ZW50LiBIZXJlIGl0
IGlzIGFuIGV4YW1wbGUgZm9yIGxhc3QgeWVhciBWaXJ0dWFsaXphdGlvbiB0ZXN0IGRheToNCiBo
dHRwczovL2ZlZG9yYXByb2plY3Qub3JnL3dpa2kvVGVzdF9EYXk6MjAxMi0wNC0xMl9WaXJ0dWFs
aXphdGlvbl9UZXN0X0RheQ0KIGh0dHBzOi8vZmVkb3JhcHJvamVjdC5vcmcvd2lraS9UZXN0X0Rh
eToyMDEyLTA0LTEyX1ZpcnR1YWxpemF0aW9uX1Rlc3RfRGF5I05ld3NfdGVzdHNfYW5kX2ZlYXR1
cmVzDQogaHR0cHM6Ly9mZWRvcmFwcm9qZWN0Lm9yZy93aWtpL1Rlc3RfRGF5OjIwMTItMDQtMTJf
VmlydHVhbGl6YXRpb25fVGVzdF9EYXkjUHJldmlvdXNfdGVzdF9jYXNlcw0KDQpBcGFydCBmcm9t
IHRoYXQsIGFzIG9uZSBjb3VsZCBleHBlY3QsIGhhbmdpbmcgYXJvdW5kIG9uIGFuIElSQyBjaGFu
bmVsIGR1cmluZyB0aGUgZXZlbnQgd291bGQgYmUgdmVyeSB1c2VmdWwuDQoNCkxldCBtZSBrbm93
IGFueSB0aG91Z2h0cyB5b3UgaGF2ZS4gSXQgd291bGQgYmUgZ3JlYXQgaWYgeW91IGNhbiBkbyB0
aGF0IEFTQVAsIGJlZm9yZSB3ZSBydW4gb3ZlciBvZiBmcmVlIHNsb3RzLiA6LVANCg0KVGhhbmtz
IGFuZCBSZWdhcmRzLA0KRGFyaW8NCg0KWypdIE9mIGNvdXJzZSB0aGlzIGlzIGEgZGlmZmVyZW50
IHRoaW5nIGZyb20gdGhlIFhlbiBUZXN0RGF5IHdlJ3JlIGRpc2N1c3NpbmcgaW4gdGhlIG90aGVy
IHRocmVhZCwgYW5kIEkgcmVhbGx5IHRoaW5rIGJvdGggb2YgdGhlbSBjb3VsZCBiZSB1c2VmdWws
IHNlcnZpbmcgdGhlIHZlcnkgc2FtZSBwdXJwb3NlLCBhbHRob3VnaCBpbiBkaWZmZXJlbnQgd2F5
cy4NCg0KLS0NCjw8VGhpcyBoYXBwZW5zIGJlY2F1c2UgSSBjaG9vc2UgaXQgdG8gaGFwcGVuIT4+
IChSYWlzdGxpbiBNYWplcmUpDQotLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLQ0KRGFyaW8gRmFnZ2lvbGksIFBoLkQsIGh0dHA6
Ly9yZXRpcy5zc3N1cC5pdC9wZW9wbGUvZmFnZ2lvbGkNClNlbmlvciBTb2Z0d2FyZSBFbmdpbmVl
ciwgQ2l0cml4IFN5c3RlbXMgUiZEIEx0ZC4sIENhbWJyaWRnZSAoVUspDQoNCl9fX19fX19fX19f
X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fClhlbi1kZXZlbCBtYWlsaW5nIGxp
c3QKWGVuLWRldmVsQGxpc3RzLnhlbi5vcmcKaHR0cDovL2xpc3RzLnhlbi5vcmcveGVuLWRldmVs
Cg==

From xen-devel-bounces@lists.xen.org Thu Aug 02 18:14:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 18:14:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwzuV-0002GC-9T; Thu, 02 Aug 2012 18:14:11 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SwzuU-0002G7-3a
	for xen-devel@lists.xensource.com; Thu, 02 Aug 2012 18:14:10 +0000
Received: from [85.158.143.35:20543] by server-2.bemta-4.messagelabs.com id
	8E/F3-17938-173CA105; Thu, 02 Aug 2012 18:14:09 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1343931248!16557798!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30028 invoked from network); 2 Aug 2012 18:14:08 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 18:14:08 -0000
X-IronPort-AV: E=Sophos;i="4.77,702,1336348800"; d="scan'208";a="13828911"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 18:14:07 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 2 Aug 2012 19:14:07 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1SwzuR-0002Cv-9t;
	Thu, 02 Aug 2012 18:14:07 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1SwzuR-0003Hr-46;
	Thu, 02 Aug 2012 19:14:07 +0100
To: xen-devel@lists.xensource.com
Message-ID: <osstest-13537-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 2 Aug 2012 19:14:07 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.0 test] 13537: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13537 linux-3.0 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13537/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore         fail REGR. vs. 13489
 test-amd64-amd64-xl-sedf      9 guest-start               fail REGR. vs. 13489
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13489
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13489
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13489

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  8 debian-fixup                fail never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass

version targeted for testing:
 linux                f351a1d7efda2edd52c23a150b07b8380c47b6c0
baseline version:
 linux                ce05b1d31e57b7de6b814073e88bdd403ce71229

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=linux-3.0
+ revision=f351a1d7efda2edd52c23a150b07b8380c47b6c0
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push linux-3.0 f351a1d7efda2edd52c23a150b07b8380c47b6c0
+ branch=linux-3.0
+ revision=f351a1d7efda2edd52c23a150b07b8380c47b6c0
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=linux
+ xenbranch=xen-unstable
+ '[' xlinux = xlinux ']'
+ linuxbranch=linux-3.0
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/staging/xen-unstable.hg
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.linux-3.0
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.linux-3.0
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-unstable.git
+ info_linux_tree linux-3.0
+ case $1 in
+ : git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
+ : linux-3.0.y
+ : linux-3.0.y
+ : git
+ : git
+ : git://xenbits.xen.org/linux-pvops.git
+ : xen@xenbits.xensource.com:git/linux-pvops.git
+ : tested/linux-3.0
+ : tested/linux-3.0
+ return 0
+ cd /export/home/osstest/repos/linux
+ git push xen@xenbits.xensource.com:git/linux-pvops.git f351a1d7efda2edd52c23a150b07b8380c47b6c0:tested/linux-3.0
Counting objects: 312, done.
Compressing objects: 100% (40/40), done.
Writing objects: 100% (249/249), 69.74 KiB, done.
Total 249 (delta 208), reused 249 (delta 208)
To xen@xenbits.xensource.com:git/linux-pvops.git
   ce05b1d..f351a1d  f351a1d7efda2edd52c23a150b07b8380c47b6c0 -> tested/linux-3.0
+ exit 0
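[Editorial aside: the trace above ends with ap-push advancing the tested/linux-3.0 ref by pushing an explicit commit hash, not a branch head. A minimal sketch of that same `<sha>:<dst-ref>` push form, run against a throwaway local "remote" rather than the xenbits hosts (all paths and names below are illustrative):]

```shell
#!/bin/sh
# Demonstrate the <sha>:<dst-ref> refspec used by ap-push, against a
# scratch bare repository standing in for xenbits.
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/remote.git"
git init -q "$tmp/work"
cd "$tmp/work"
git -c user.name=t -c user.email=t@example.invalid \
    commit -q --allow-empty -m "revision under test"
sha=$(git rev-parse HEAD)
# Advance the tested/* ref to the exact revision that passed testing;
# for a new ref the destination must be spelled out in full.
git push -q "$tmp/remote.git" "$sha:refs/heads/tested/linux-3.0"
git ls-remote "$tmp/remote.git" refs/heads/tested/linux-3.0
```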

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 18:14:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 18:14:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SwzuV-0002GC-9T; Thu, 02 Aug 2012 18:14:11 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SwzuU-0002G7-3a
	for xen-devel@lists.xensource.com; Thu, 02 Aug 2012 18:14:10 +0000
Received: from [85.158.143.35:20543] by server-2.bemta-4.messagelabs.com id
	8E/F3-17938-173CA105; Thu, 02 Aug 2012 18:14:09 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1343931248!16557798!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30028 invoked from network); 2 Aug 2012 18:14:08 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 18:14:08 -0000
X-IronPort-AV: E=Sophos;i="4.77,702,1336348800"; d="scan'208";a="13828911"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 18:14:07 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 2 Aug 2012 19:14:07 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1SwzuR-0002Cv-9t;
	Thu, 02 Aug 2012 18:14:07 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1SwzuR-0003Hr-46;
	Thu, 02 Aug 2012 19:14:07 +0100
To: xen-devel@lists.xensource.com
Message-ID: <osstest-13537-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 2 Aug 2012 19:14:07 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.0 test] 13537: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13537 linux-3.0 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13537/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore         fail REGR. vs. 13489
 test-amd64-amd64-xl-sedf      9 guest-start               fail REGR. vs. 13489
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13489
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13489
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13489

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  8 debian-fixup                fail never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass

version targeted for testing:
 linux                f351a1d7efda2edd52c23a150b07b8380c47b6c0
baseline version:
 linux                ce05b1d31e57b7de6b814073e88bdd403ce71229

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=linux-3.0
+ revision=f351a1d7efda2edd52c23a150b07b8380c47b6c0
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push linux-3.0 f351a1d7efda2edd52c23a150b07b8380c47b6c0
+ branch=linux-3.0
+ revision=f351a1d7efda2edd52c23a150b07b8380c47b6c0
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=linux
+ xenbranch=xen-unstable
+ '[' xlinux = xlinux ']'
+ linuxbranch=linux-3.0
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/staging/xen-unstable.hg
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.linux-3.0
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.linux-3.0
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-unstable.git
+ info_linux_tree linux-3.0
+ case $1 in
+ : git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
+ : linux-3.0.y
+ : linux-3.0.y
+ : git
+ : git
+ : git://xenbits.xen.org/linux-pvops.git
+ : xen@xenbits.xensource.com:git/linux-pvops.git
+ : tested/linux-3.0
+ : tested/linux-3.0
+ return 0
+ cd /export/home/osstest/repos/linux
+ git push xen@xenbits.xensource.com:git/linux-pvops.git f351a1d7efda2edd52c23a150b07b8380c47b6c0:tested/linux-3.0
Counting objects: 312, done.
Compressing objects: 100% (40/40), done.
Writing objects: 100% (249/249), 69.74 KiB, done.
Total 249 (delta 208), reused 249 (delta 208)
To xen@xenbits.xensource.com:git/linux-pvops.git
   ce05b1d..f351a1d  f351a1d7efda2edd52c23a150b07b8380c47b6c0 -> tested/linux-3.0
+ exit 0

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 18:39:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 18:39:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sx0Ip-0002Za-M6; Thu, 02 Aug 2012 18:39:19 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Sx0In-0002ZV-Pj
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 18:39:18 +0000
Received: from [85.158.139.83:16416] by server-2.bemta-5.messagelabs.com id
	72/09-04598-459CA105; Thu, 02 Aug 2012 18:39:16 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-182.messagelabs.com!1343932755!29411496!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14574 invoked from network); 2 Aug 2012 18:39:15 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-13.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 18:39:15 -0000
X-IronPort-AV: E=Sophos;i="4.77,702,1336348800"; d="scan'208";a="13829117"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 18:39:14 +0000
Received: from [127.0.0.1] (10.80.16.67) by smtprelay.citrix.com
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Thu, 2 Aug 2012
	19:39:14 +0100
Message-ID: <1343932754.7571.71.camel@dagon.hellion.org.uk>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: David Erickson <halcyon1981@gmail.com>
Date: Thu, 2 Aug 2012 19:39:14 +0100
In-Reply-To: <CANKx4w9paNXNZaaUYj0tFwoTyDrr1-KHT1-C3o6mfG9QFPTDQQ@mail.gmail.com>
References: <CACA08Dwza8TjixUYz1o6PWzfaPwfz1jMiGVNyZE9KcJZ_Ln2oA@mail.gmail.com>
	<CANKx4w9XJGnD5u4hN+RhyOy7OUmF5nh-G8549ER1W19jy__u3Q@mail.gmail.com>
	<CANKx4w8Azvf1s0aqGSRSn9_f-Zq_uL_CAN2rfh5kd=udsT2YDg@mail.gmail.com>
	<CACA08Dw0B3xK_Lr6WQiCSgDewt9ED6GPm-_KaTZzQarsELPrcg@mail.gmail.com>
	<CANKx4w8Y0Uddmre-nsm0TT+GgP3AMiCuoXfbc2-2L+PG3TUgbg@mail.gmail.com>
	<CANKx4w9O0bW=GJknP=hnYW5nUAas8AyKhDVjXGMmgeBkRBon5w@mail.gmail.com>
	<CAHyyzzTMX4wcg+DNEL91dWmo0R-6oGJLNH5O50bSUeHkmTWAwQ@mail.gmail.com>
	<CANKx4w9BVBrmQwaCtJr6CsZ6OK=jV+pb35QtNLHp+Jp6=739aA@mail.gmail.com>
	<1342682536.18848.50.camel@dagon.hellion.org.uk>
	<alpine.DEB.2.02.1207191258580.23783@kaball.uk.xensource.com>
	<CANKx4w-UKwt9H45N9Kbox708OBaQEqG3niELp593h2P7oc+pjw@mail.gmail.com>
	<CANKx4w9=X7iXvQBrORgh6aNHEQ--+4QeFwXDBG0HV-DYk7HpxQ@mail.gmail.com>
	<alpine.DEB.2.02.1207311119500.4645@kaball.uk.xensource.com>
	<CANKx4w_9sX9hsgSkwV6Vb8xSVQ_4n+_sFjfXSDEPHQ1seH=9=g@mail.gmail.com>
	<alpine.DEB.2.02.1208011150330.4645@kaball.uk.xensource.com>
	<CANKx4w9AUuAUWtnW59NL7az8VJ-jLJn984hMqvGVwTEfVAvs1A@mail.gmail.com>
	<1343893520.7571.58.camel@dagon.hellion.org.uk>
	<CANKx4w_GwULR1gcqJPh37J_rWGxoaxPirsPchdZ=E8iVYgSQSQ@mail.gmail.com>
	<1343925916.7571.67.camel@dagon.hellion.org.uk>
	<CANKx4w9HApwsiBTcHBHud3A_=BJhT9tzATivpY3WjL+V7=yCaQ@mail.gmail.com>
	<CANKx4w9paNXNZaaUYj0tFwoTyDrr1-KHT1-C3o6mfG9QFPTDQQ@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Anthony Perard <anthony.perard@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	jacek burghardt <jaceksburghardt@gmail.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] LSI SAS2008 Option Rom Failure
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-02 at 18:12 +0100, David Erickson wrote:

> ubuntu@ubuntu:~$ sudo modprobe xen-netfront
> ubuntu@ubuntu:~$ [  238.408574] vbd vbd-5632: 19 xenbus_dev_probe on
> device/vbd/5632
> [  238.433304] vbd vbd-5632: failed to write error node for
> device/vbd/5632 (19 xenbus_dev_probe on device/vbd/5632)
> 
> ubuntu@ubuntu:~$ sudo lsmod
> Module                  Size  Used by
> xen_blkfront           26261  0
> xen_netfront           26671  0
> xenbus_probe_frontend    13232  2 xen_blkfront,xen_netfront,[permanent]
[...]
> ubuntu@ubuntu:~$ ifconfig -a
> eth0      Link encap:Ethernet  HWaddr 00:16:3e:68:15:49
[...]
> So it looks like it loaded

Great.

>  (with some errors, are those problems?),

Strangely, those were vbd (aka disk) errors.

> but still not sure why it didn't auto load on boot.

xenbus_probe_frontend is a module. Either it should be built in, or
Ubuntu's tools (primarily the one which builds initrds, but perhaps also
something in the live-cd suite) need to learn to load it at the
appropriate times. It's a tiny module, so we would generally recommend
building it in.
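[Editorial aside: the usual Ubuntu-side workaround for a module that must be
present at boot is to list it in /etc/modules and in initramfs-tools'
module list, then rebuild the initrd with `update-initramfs -u`. This is
an illustration of that layout, not something osstest or xen-devel ships;
scratch files stand in for the real /etc paths so the sketch is safe to run:]

```shell
#!/bin/sh
# On a real guest these would be /etc/modules and
# /etc/initramfs-tools/modules; scratch files stand in for them here.
set -e
etc_modules=$(mktemp)
initramfs_modules=$(mktemp)
for m in xenbus_probe_frontend xen-blkfront xen-netfront; do
    for f in "$etc_modules" "$initramfs_modules"; do
        # append each module name once, whole-line match
        grep -qx "$m" "$f" || echo "$m" >>"$f"
    done
done
# On the real files one would then run: update-initramfs -u
cat "$etc_modules"
```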

You should file this as a bug against Ubuntu, I think. I'd have sworn
this had been reported to them before but perhaps it has regressed.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	<CANKx4w9=X7iXvQBrORgh6aNHEQ--+4QeFwXDBG0HV-DYk7HpxQ@mail.gmail.com>
	<alpine.DEB.2.02.1207311119500.4645@kaball.uk.xensource.com>
	<CANKx4w_9sX9hsgSkwV6Vb8xSVQ_4n+_sFjfXSDEPHQ1seH=9=g@mail.gmail.com>
	<alpine.DEB.2.02.1208011150330.4645@kaball.uk.xensource.com>
	<CANKx4w9AUuAUWtnW59NL7az8VJ-jLJn984hMqvGVwTEfVAvs1A@mail.gmail.com>
	<1343893520.7571.58.camel@dagon.hellion.org.uk>
	<CANKx4w_GwULR1gcqJPh37J_rWGxoaxPirsPchdZ=E8iVYgSQSQ@mail.gmail.com>
	<1343925916.7571.67.camel@dagon.hellion.org.uk>
	<CANKx4w9HApwsiBTcHBHud3A_=BJhT9tzATivpY3WjL+V7=yCaQ@mail.gmail.com>
	<CANKx4w9paNXNZaaUYj0tFwoTyDrr1-KHT1-C3o6mfG9QFPTDQQ@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Anthony Perard <anthony.perard@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	jacek burghardt <jaceksburghardt@gmail.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] LSI SAS2008 Option Rom Failure
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-02 at 18:12 +0100, David Erickson wrote:

> ubuntu@ubuntu:~$ sudo modprobe xen-netfront
> ubuntu@ubuntu:~$ [  238.408574] vbd vbd-5632: 19 xenbus_dev_probe on
> device/vbd/5632
> [  238.433304] vbd vbd-5632: failed to write error node for
> device/vbd/5632 (19 xenbus_dev_probe on device/vbd/5632)
> 
> ubuntu@ubuntu:~$ sudo lsmod
> Module                  Size  Used by
> xen_blkfront           26261  0
> xen_netfront           26671  0
> xenbus_probe_frontend    13232  2 xen_blkfront,xen_netfront,[permanent]
[...]
> ubuntu@ubuntu:~$ ifconfig -a
> eth0      Link encap:Ethernet  HWaddr 00:16:3e:68:15:49
[...]
> So it looks like it loaded

Great.

>  (with some errors, are those problems?),

Strangely, those were vbd (aka disk) errors, not network ones.

> but still not sure why it didn't auto load on boot.

xenbus_probe_frontend is a module. Either it should be built in, or
Ubuntu's tools (primarily the one which builds initrds, but perhaps also
something in the live-cd suite) need to learn to load it at the
appropriate times. It's a tiny module, so we would generally recommend
building it in.
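
One way to arrange that on Ubuntu, assuming the stock initramfs-tools
layout (these paths and commands are illustrative, not taken from the
thread):

```shell
# Have initramfs-tools include the module in the initrd and load it
# early during boot, then rebuild the initrd:
echo xenbus_probe_frontend | sudo tee -a /etc/initramfs-tools/modules
sudo update-initramfs -u

# Alternatively, load it late in boot via the static module list:
echo xenbus_probe_frontend | sudo tee -a /etc/modules
```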

You should file this as a bug against Ubuntu, I think. I'd have sworn
this had been reported to them before but perhaps it has regressed.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 18:54:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 18:54:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sx0Wp-0002jP-1b; Thu, 02 Aug 2012 18:53:47 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Sx0Wo-0002jJ-B4
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 18:53:46 +0000
Received: from [85.158.143.99:37266] by server-3.bemta-4.messagelabs.com id
	D7/BF-01511-9BCCA105; Thu, 02 Aug 2012 18:53:45 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-216.messagelabs.com!1343933624!24736542!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6371 invoked from network); 2 Aug 2012 18:53:45 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-12.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 18:53:45 -0000
X-IronPort-AV: E=Sophos;i="4.77,702,1336348800"; d="scan'208";a="13829267"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 18:53:43 +0000
Received: from [127.0.0.1] (10.80.16.67) by smtprelay.citrix.com
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Thu, 2 Aug 2012
	19:53:43 +0100
Message-ID: <1343933623.7571.77.camel@dagon.hellion.org.uk>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Thu, 2 Aug 2012 19:53:43 +0100
In-Reply-To: <20506.43218.20724.749302@mariner.uk.xensource.com>
References: <84f0686ebcbfb0fa3a43.1343815057@cosworth.uk.xensource.com>
	<20506.43218.20724.749302@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH V2] libxl: support custom block hotplug
	scripts
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-02 at 17:20 +0100, Ian Jackson wrote:
> Ian Campbell writes ("[PATCH V2] libxl: support custom block hotplug scripts"):
> > libxl: support custom block hotplug scripts
> 
> Wow.  Thanks.  Everything looks good apart from this:
> 
> >                      DPC->had_depr_prefix=1; DEPRECATE("use `script=...'");
> > -                   SAVESTRING("script", script, yytext);
> > -               }
> > +                    if (DPC->disk->script) {
> > +                        if (*DPC->disk->script) {
> > +                            xlu__disk_err(DPC,yytext,"script respecified");
> > +                            return 0;
> > +                        }
> > +                        /* do not complain about overwriting empty strings */
> > +                        free(DPC->disk->script);
> > +                    }
> > +                    DPC->disk->script = malloc(strlen("block-")
> > +                                               +strlen(yytext) + 1);
> > +                    strcpy(DPC->disk->script, "block-");
> > +                    strcat(DPC->disk->script, yytext);
> 
> Isn't this very like the contents of the savestring() function ?

> Ie you could do:
>         char *newscript;
>         asprintf(&newscript, ...);
>         savestring(DPC, "script respecified", &DPC->disk->script, newscript);
>         free(newscript);
> 
> Other places in xl use asprintf so you can use it here.

I hadn't realised asprintf was fair game. I'll rework along these lines.

> > +                }
> 
> Is this one-character indentation change intentional ?

There are a few stray hard tabs in this file (which predominantly uses
8-space indentation). I think I fixed them in the blocks I was changing,
which will look like a one-character reindent in an MUA.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 20:18:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 20:18:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sx1qG-0003Tv-Oj; Thu, 02 Aug 2012 20:17:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Sx1qE-0003Tq-RG
	for xen-devel@lists.xensource.com; Thu, 02 Aug 2012 20:17:55 +0000
Received: from [85.158.143.99:65005] by server-3.bemta-4.messagelabs.com id
	A0/00-01511-270EA105; Thu, 02 Aug 2012 20:17:54 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-13.tower-216.messagelabs.com!1343938673!29098070!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc3OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26996 invoked from network); 2 Aug 2012 20:17:53 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-13.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 20:17:53 -0000
X-IronPort-AV: E=Sophos;i="4.77,703,1336348800"; d="scan'208";a="13830017"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 20:17:51 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 2 Aug 2012 21:17:51 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Sx1qB-00036h-HI;
	Thu, 02 Aug 2012 20:17:51 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Sx1qB-0000E8-9J;
	Thu, 02 Aug 2012 21:17:51 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13538-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 2 Aug 2012 21:17:51 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13538: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13538 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13538/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-oldkern            4 xen-build                 fail REGR. vs. 13536
 build-i386                    4 xen-build                 fail REGR. vs. 13536

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13536
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13536
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13536

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  983ea7521bad
baseline version:
 xen                  3d17148e465c

------------------------------------------------------------
People who touched revisions under test:
  Christoph Egger <Christoph.Egger@amd.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   fail    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   25705:983ea7521bad
tag:         tip
user:        Tim Deegan <tim@xen.org>
date:        Thu Aug 02 14:44:53 2012 +0100
    
    nestedhvm: return the pfec from the pagetable walk.
    
    Don't clobber it with the pfec from the p2m walk behind it; the guest
    will not expect (or be able to handle) error codes that come from the
    p2m table, which it can't see or control.
    
    Signed-off-by: Tim Deegan <tim@xen.org>
    Acked-by: Christoph Egger <Christoph.Egger@amd.com>
    Committed-by: Tim Deegan <tim@xen.org>
    
    
changeset:   25704:c323f1af7e67
user:        Christoph Egger <Christoph.Egger@amd.com>
date:        Thu Aug 02 14:38:09 2012 +0100
    
    nestedhvm: fix write access fault on ro mapping
    
    Fix write access fault when host npt is mapped read-only.
    In this case let the host handle the #NPF.
    Apply host p2mt to hap-on-hap pagetable entry.
    This fixes the l2 guest graphic display refresh problem.
    
    Signed-off-by: Christoph Egger <Christoph.Egger@amd.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Committed-by: Tim Deegan <tim@xen.org>
    
    
changeset:   25703:90bc5e0a67b5
user:        Tim Deegan <tim@xen.org>
date:        Thu Aug 02 12:04:31 2012 +0100
    
    xen: detect compiler version with '--version' rather than '-v'
    
    This allows us to get rid of the 'grep version', which doesn't
    work with localized compilers.
    
    Signed-off-by: Tim Deegan <tim@xen.org>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Tim Deegan <tim@xen.org>
    
    
changeset:   25702:3d17148e465c
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Aug 02 11:49:37 2012 +0200
    
    x86: also allow disabling LAPIC NMI watchdog on newer CPUs
    
    This complements c/s 9146:941897e98591, and also replaces a literal
    zero with a proper manifest constant.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   25705:983ea7521bad
tag:         tip
user:        Tim Deegan <tim@xen.org>
date:        Thu Aug 02 14:44:53 2012 +0100
    
    nestedhvm: return the pfec from the pagetable walk.
    
    Don't clobber it with the pfec from the p2m walk behind it; the guest
    will not expect (or be able to handle) error codes that come from the
    p2m table, which it can't see or control.
    
    Signed-off-by: Tim Deegan <tim@xen.org>
    Acked-by: Christoph Egger <Christoph.Egger@amd.com>
    Committed-by: Tim Deegan <tim@xen.org>
    
    
changeset:   25704:c323f1af7e67
user:        Christoph Egger <Christoph.Egger@amd.com>
date:        Thu Aug 02 14:38:09 2012 +0100
    
    nestedhvm: fix write access fault on ro mapping
    
    Fix write access fault when host npt is mapped read-only.
    In this case let the host handle the #NPF.
    Apply host p2mt to hap-on-hap pagetable entry.
    This fixes the l2 guest graphic display refresh problem.
    
    Signed-off-by: Christoph Egger <Christoph.Egger@amd.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Committed-by: Tim Deegan <tim@xen.org>
    
    
changeset:   25703:90bc5e0a67b5
user:        Tim Deegan <tim@xen.org>
date:        Thu Aug 02 12:04:31 2012 +0100
    
    xen: detect compiler version with '--version' rather than '-v'
    
    This allows us to get rid of the 'grep version', which doesn't
    work with localized compilers.
    
    Signed-off-by: Tim Deegan <tim@xen.org>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Tim Deegan <tim@xen.org>
    
    
changeset:   25702:3d17148e465c
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Aug 02 11:49:37 2012 +0200
    
    x86: also allow disabling LAPIC NMI watchdog on newer CPUs
    
    This complements c/s 9146:941897e98591, and also replaces a literal
    zero with a proper manifest constant.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 21:13:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 21:13:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sx2hE-0003wp-5v; Thu, 02 Aug 2012 21:12:40 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <msw@amazon.com>) id 1Sx2hC-0003wk-Po
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 21:12:39 +0000
Received: from [85.158.138.51:17476] by server-11.bemta-3.messagelabs.com id
	D5/0B-00679-64DEA105; Thu, 02 Aug 2012 21:12:38 +0000
X-Env-Sender: msw@amazon.com
X-Msg-Ref: server-8.tower-174.messagelabs.com!1343941956!30103507!1
X-Originating-IP: [72.21.196.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNzIuMjEuMTk2LjI1ID0+IDEwMTY5Nw==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8000 invoked from network); 2 Aug 2012 21:12:37 -0000
Received: from smtp-fw-2101.amazon.com (HELO smtp-fw-2101.amazon.com)
	(72.21.196.25)
	by server-8.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 21:12:37 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=amazon.com; i=msw@amazon.com; q=dns/txt; s=rte02;
	t=1343941957; x=1375477957;
	h=date:from:to:cc:subject:message-id:mime-version;
	bh=wXRGGeSIkTUP1HuECUkXMoS34gw/xb17HDG80Wca+2k=;
	b=XEAXThMPfbt2x7ZqGBo6RxNZ/YStMF7RwnocS8KGXwemYZ4j8yHpQtgL
	IR+y66yO7QOF1gu/a4vlNf7Y/ugUdQ==;
X-IronPort-AV: E=Sophos;i="4.77,703,1336348800"; d="scan'208";a="417925968"
Received: from smtp-in-9001.sea19.amazon.com ([10.186.144.32])
	by smtp-border-fw-out-2101.iad2.amazon.com with
	ESMTP/TLS/DHE-RSA-AES256-SHA; 02 Aug 2012 21:12:00 +0000
Received: from ex10-hub-9004.ant.amazon.com (ex10-hub-9004.ant.amazon.com
	[10.185.137.182])
	by smtp-in-9001.sea19.amazon.com (8.13.8/8.13.8) with ESMTP id
	q72LBxev005307
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=FAIL);
	Thu, 2 Aug 2012 21:11:59 GMT
Received: from US-SEA-R8XVZTX (10.224.80.44) by ex10-hub-9004.ant.amazon.com
	(10.185.137.182) with Microsoft SMTP Server id 14.2.247.3;
	Thu, 2 Aug 2012 14:11:57 -0700
Received: by US-SEA-R8XVZTX (sSMTP sendmail emulation); Thu, 02 Aug 2012
	14:11:57 -0700
Date: Thu, 2 Aug 2012 14:11:57 -0700
From: Matt Wilson <msw@amazon.com>
To: <xen-devel@lists.xen.org>, Lars Kurth <lars.kurth@xen.org>
Message-ID: <20120802211157.GG8228@US-SEA-R8XVZTX>
MIME-Version: 1.0
Content-Disposition: inline
User-Agent: Mutt/1.5.20 (2009-12-10)
Cc: George Dunlap <George.Dunlap@eu.citrix.com>
Subject: [Xen-devel] lists.xen.org Mailman configuration and DKIM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi,

Several folks have let me know that my messages sent via lists.xen.org
are marked as spam / spoofed, especially when using Gmail to receive
Xen mail. I believe this is because outbound Amazon email contains a
DKIM signature. When Mailman modifies my message and re-sends it, the
DKIM signature is invalidated [1].

To work around this, Mailman 2.1.10 and later contain a configuration
variable called "REMOVE_DKIM_HEADERS" [2]. Perhaps if this were turned
on we'd work around the problem.

Matt

[1] http://wiki.list.org/display/DEV/DKIM
[2] https://bugs.launchpad.net/mailman/+bug/557493
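
For illustration, the setting Matt mentions would be enabled in the
site-wide Mailman 2.1.10+ configuration roughly like this (a sketch only;
the mm_cfg.py location and accepted values should be verified against the
installed Mailman):

```python
# Sketch of a Mailman 2.1.10+ site override in $prefix/Mailman/mm_cfg.py.
# REMOVE_DKIM_HEADERS strips DKIM signature headers from incoming posts
# before Mailman modifies and re-sends them, so recipients never see a
# signature that the list's rewriting has invalidated.
from Defaults import *   # standard first line of mm_cfg.py; defines Yes/No

REMOVE_DKIM_HEADERS = Yes
```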

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 22:12:12 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 22:12:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sx3c3-0004Ub-2C; Thu, 02 Aug 2012 22:11:23 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Sx3c1-0004UW-FI
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 22:11:21 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1343945472!11010951!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5945 invoked from network); 2 Aug 2012 22:11:12 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-9.tower-27.messagelabs.com with SMTP;
	2 Aug 2012 22:11:12 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 30 Jul 2012 07:57:06 +0100
Message-Id: <50164C5F0200007800091325@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Mon, 30 Jul 2012 07:57:03 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Matt Wilson" <msw@amazon.com>,
	"Donald D Dugger" <donald.d.dugger@intel.com>,
	"Jun Nakajima" <jun.nakajima@intel.com>
References: <bf922651da96e045cd8f.1343503160@kaos-source-31003.sea31.amazon.com>
In-Reply-To: <bf922651da96e045cd8f.1343503160@kaos-source-31003.sea31.amazon.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Jinsong Liu <jinsong.liu@intel.com>, Keir Fraser <keir@xen.org>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen/x86: Add support for cpuid masking on
 Intel Xeon Processor E5 Family
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 28.07.12 at 21:19, Matt Wilson <msw@amazon.com> wrote:
> Although the "Intel Virtualization Technology FlexMigration
> Application Note" (http://www.intel.com/Assets/PDF/manual/323850.pdf)
> does not document support for extended model 2H model DH (Intel Xeon
> Processor E5 Family), empirical evidence shows that the same MSR
> addresses can be used for cpuid masking as extended model 2H model AH
> (Intel Xeon Processor E3-1200 Family).

Empirical evidence isn't really enough - let's have someone at Intel
confirm this - Jun, Don?

Jan

> Signed-off-by: Matt Wilson <msw@amazon.com>
> 
> diff -r e6266fc76d08 -r bf922651da96 xen/arch/x86/cpu/intel.c
> --- a/xen/arch/x86/cpu/intel.c	Fri Jul 27 12:22:13 2012 +0200
> +++ b/xen/arch/x86/cpu/intel.c	Sat Jul 28 17:27:30 2012 +0000
> @@ -104,7 +104,7 @@ static void __devinit set_cpuidmask(cons
>  			return;
>  		extra = "xsave ";
>  		break;
> -	case 0x2a:
> +	case 0x2a: case 0x2d:
>  		wrmsr(MSR_INTEL_CPUID1_FEATURE_MASK_V2,
>  		      opt_cpuid_mask_ecx,
>  		      opt_cpuid_mask_edx);
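
As an aside for readers mapping the model names to the case labels: the
"extended model"/"model" naming from the Intel note combines into the
value tested in the switch. A small sketch of that arithmetic
(illustrative only, not part of the patch):

```python
# CPUID leaf 1 EAX encodes model in bits 7:4 and extended model in bits
# 19:16; for family 6 the display model is (ext_model << 4) | model.

def display_model(eax: int) -> int:
    model = (eax >> 4) & 0xF
    ext_model = (eax >> 16) & 0xF
    return (ext_model << 4) | model

# Xeon E5: extended model 2, model DH  -> case 0x2d
print(hex(display_model((2 << 16) | (0xD << 4))))  # 0x2d
# Xeon E3-1200: extended model 2, model AH -> case 0x2a
print(hex(display_model((2 << 16) | (0xA << 4))))  # 0x2a
```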




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 23:05:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 23:05:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sx4RG-0004r5-D6; Thu, 02 Aug 2012 23:04:18 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1Sx4RF-0004r0-7h
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 23:04:17 +0000
Received: from [85.158.138.51:26849] by server-5.bemta-3.messagelabs.com id
	63/B0-28237-0770B105; Thu, 02 Aug 2012 23:04:16 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-7.tower-174.messagelabs.com!1343948654!21240227!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjczNTY5\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31544 invoked from network); 2 Aug 2012 23:04:15 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-7.tower-174.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 2 Aug 2012 23:04:15 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q72N47OS028879
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 2 Aug 2012 23:04:08 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q72N46v5003255
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 2 Aug 2012 23:04:07 GMT
Received: from abhmt110.oracle.com (abhmt110.oracle.com [141.146.116.62])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q72N461G028243; Thu, 2 Aug 2012 18:04:06 -0500
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 02 Aug 2012 16:04:06 -0700
Date: Thu, 2 Aug 2012 16:04:03 -0700
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Message-ID: <20120802160403.02de484e@mantra.us.oracle.com>
In-Reply-To: <20120802141710.GF16749@phenom.dumpdata.com>
References: <1343745804-28028-1-git-send-email-konrad.wilk@oracle.com>
	<20120801155040.GB15812@phenom.dumpdata.com>
	<501A5EF7020000780009219C@nat28.tlf.novell.com>
	<20120802141710.GF16749@phenom.dumpdata.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.7.6 (GTK+ 2.18.9; x86_64-redhat-linux-gnu)
Mime-Version: 1.0
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Jan Beulich <JBeulich@suse.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Boot PV guests with more than 128GB (v2)
 for 3.7
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2 Aug 2012 10:17:10 -0400
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:

> On Thu, Aug 02, 2012 at 10:05:27AM +0100, Jan Beulich wrote:
> > >>> On 01.08.12 at 17:50, Konrad Rzeszutek Wilk
> > >>> <konrad.wilk@oracle.com> wrote:
> > > With these patches I've gotten it to boot up to 384GB. Around
> > > that area something weird happens - mainly the pagetables that
> > > the toolstack allocated seem to be missing data. I haven't
> > > looked at it in detail, but this is what the domain builder tells me:
> > > 
> > > 
> > > xc_dom_alloc_segment:   ramdisk      : 0xffffffff82278000 -> 
> > > 0xffffffff930b4000  (pfn 0x2278 + 0x10e3c pages)
> > > xc_dom_malloc            : 1621 kB
> > > xc_dom_pfn_to_ptr: domU mapping: pfn 0x2278+0x10e3c at
> > > 0x7fb0853a2000 xc_dom_do_gunzip: unzip ok, 0x4ba831c -> 0x10e3be10
> > > xc_dom_alloc_segment:   phys2mach    : 0xffffffff930b4000 -> 
> > > 0xffffffffc30b4000  (pfn 0x130b4 + 0x30000 pages)
> > > xc_dom_malloc            : 4608 kB
> > > xc_dom_pfn_to_ptr: domU mapping: pfn 0x130b4+0x30000 at
> > > 0x7fb0553a2000 xc_dom_alloc_page   :   start info   :
> > > 0xffffffffc30b4000 (pfn 0x430b4) xc_dom_alloc_page   :
> > > xenstore     : 0xffffffffc30b5000 (pfn 0x430b5)
> > > xc_dom_alloc_page   :   console      : 0xffffffffc30b6000 (pfn
> > > 0x430b6) nr_page_tables: 0x0000ffffffffffff/48:
> > > 0xffff000000000000 -> 0xffffffffffffffff, 1 table(s)
> > > nr_page_tables: 0x0000007fffffffff/39: 0xffffff8000000000 ->
> > > 0xffffffffffffffff, 1 table(s) nr_page_tables:
> > > 0x000000003fffffff/30: 0xffffffff80000000 -> 0xffffffffffffffff,
> > > 2 table(s) nr_page_tables: 0x00000000001fffff/21:
> > > 0xffffffff80000000 -> 0xffffffffc33fffff, 538 table(s)
> > > xc_dom_alloc_segment:   page tables  : 0xffffffffc30b7000 -> 
> > > 0xffffffffc32d5000  (pfn 0x430b7 + 0x21e pages)
> > > xc_dom_pfn_to_ptr: domU mapping: pfn 0x430b7+0x21e at
> > > 0x7fb055184000 xc_dom_alloc_page   :   boot stack   :
> > > 0xffffffffc32d5000 (pfn 0x432d5) xc_dom_build_image  :
> > > virt_alloc_end : 0xffffffffc32d6000 xc_dom_build_image  :
> > > virt_pgtab_end : 0xffffffffc3400000
> > > 
> > > Note it is 0xffffffffc30b4000 - so already past the
> > > level2_kernel_pgt (L3[510])
> > > and in level2_fixmap_pgt territory (L3[511]).
> > > 
> > > At that stage we are still operating using the Xen provided
> > > pagetable - which look to have the L4[511][511] empty! Which
> > > sounds to me like a Xen tool-stack problem? Jan, have you seen
> > > something similar to this?
> > 
> > No we haven't, but I also don't think anyone tried to create as
> > big a DomU. I was, however, under the impression that DomU-s
> > this big had been created at Oracle before. Or was that only up
> > to 256Gb perhaps?
> 
> Mukesh do you recall? Was it with OVM2.2.2 which was 3.4 based?
> It might be that we did not have the 1TB hardware at that time yet.

Yes, in ovm2.x, I debugged/booted up to 500GB domU. So it looks like
something got broken after that. I can debug later if it becomes hot.

thanks,
Mukesh
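
The index arithmetic behind "L4[511][511]" and "L3[510]/L3[511]" in the
quoted log can be sketched as follows (plain x86-64 4-level paging
arithmetic for illustration, not osstest or toolstack code):

```python
# x86-64 4-level paging: each level consumes 9 bits of the virtual
# address; L4 (PML4) starts at bit 39, then L3 at 30, L2 at 21, L1 at 12.
def pt_indices(vaddr: int):
    return tuple((vaddr >> shift) & 0x1FF for shift in (39, 30, 21, 12))

# The start-info page address from the domain builder log above:
l4, l3, l2, l1 = pt_indices(0xFFFFFFFFC30B4000)
print(l4, l3, l2)  # 511 511 24 -> in L4[511], L3[511] (level2_fixmap_pgt range)
```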




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 23:05:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 23:05:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sx4RG-0004r5-D6; Thu, 02 Aug 2012 23:04:18 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1Sx4RF-0004r0-7h
	for xen-devel@lists.xen.org; Thu, 02 Aug 2012 23:04:17 +0000
Received: from [85.158.138.51:26849] by server-5.bemta-3.messagelabs.com id
	63/B0-28237-0770B105; Thu, 02 Aug 2012 23:04:16 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-7.tower-174.messagelabs.com!1343948654!21240227!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjczNTY5\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31544 invoked from network); 2 Aug 2012 23:04:15 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-7.tower-174.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 2 Aug 2012 23:04:15 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q72N47OS028879
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 2 Aug 2012 23:04:08 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q72N46v5003255
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 2 Aug 2012 23:04:07 GMT
Received: from abhmt110.oracle.com (abhmt110.oracle.com [141.146.116.62])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q72N461G028243; Thu, 2 Aug 2012 18:04:06 -0500
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 02 Aug 2012 16:04:06 -0700
Date: Thu, 2 Aug 2012 16:04:03 -0700
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Message-ID: <20120802160403.02de484e@mantra.us.oracle.com>
In-Reply-To: <20120802141710.GF16749@phenom.dumpdata.com>
References: <1343745804-28028-1-git-send-email-konrad.wilk@oracle.com>
	<20120801155040.GB15812@phenom.dumpdata.com>
	<501A5EF7020000780009219C@nat28.tlf.novell.com>
	<20120802141710.GF16749@phenom.dumpdata.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.7.6 (GTK+ 2.18.9; x86_64-redhat-linux-gnu)
Mime-Version: 1.0
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Jan Beulich <JBeulich@suse.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Boot PV guests with more than 128GB (v2)
 for 3.7
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2 Aug 2012 10:17:10 -0400
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:

> On Thu, Aug 02, 2012 at 10:05:27AM +0100, Jan Beulich wrote:
> > >>> On 01.08.12 at 17:50, Konrad Rzeszutek Wilk
> > >>> <konrad.wilk@oracle.com> wrote:
> > > With these patches I've gotten it to boot up to 384GB. Around
> > > that area something weird happens - mainly the pagetables that
> > > the toolstack allocated seems to have missing data. I hadn't
> > > looked in details, but this is what domain builder tells me:
> > > 
> > > 
> > > xc_dom_alloc_segment:   ramdisk      : 0xffffffff82278000 -> 0xffffffff930b4000  (pfn 0x2278 + 0x10e3c pages)
> > > xc_dom_malloc            : 1621 kB
> > > xc_dom_pfn_to_ptr: domU mapping: pfn 0x2278+0x10e3c at 0x7fb0853a2000
> > > xc_dom_do_gunzip: unzip ok, 0x4ba831c -> 0x10e3be10
> > > xc_dom_alloc_segment:   phys2mach    : 0xffffffff930b4000 -> 0xffffffffc30b4000  (pfn 0x130b4 + 0x30000 pages)
> > > xc_dom_malloc            : 4608 kB
> > > xc_dom_pfn_to_ptr: domU mapping: pfn 0x130b4+0x30000 at 0x7fb0553a2000
> > > xc_dom_alloc_page   :   start info   : 0xffffffffc30b4000 (pfn 0x430b4)
> > > xc_dom_alloc_page   :   xenstore     : 0xffffffffc30b5000 (pfn 0x430b5)
> > > xc_dom_alloc_page   :   console      : 0xffffffffc30b6000 (pfn 0x430b6)
> > > nr_page_tables: 0x0000ffffffffffff/48: 0xffff000000000000 -> 0xffffffffffffffff, 1 table(s)
> > > nr_page_tables: 0x0000007fffffffff/39: 0xffffff8000000000 -> 0xffffffffffffffff, 1 table(s)
> > > nr_page_tables: 0x000000003fffffff/30: 0xffffffff80000000 -> 0xffffffffffffffff, 2 table(s)
> > > nr_page_tables: 0x00000000001fffff/21: 0xffffffff80000000 -> 0xffffffffc33fffff, 538 table(s)
> > > xc_dom_alloc_segment:   page tables  : 0xffffffffc30b7000 -> 0xffffffffc32d5000  (pfn 0x430b7 + 0x21e pages)
> > > xc_dom_pfn_to_ptr: domU mapping: pfn 0x430b7+0x21e at 0x7fb055184000
> > > xc_dom_alloc_page   :   boot stack   : 0xffffffffc32d5000 (pfn 0x432d5)
> > > xc_dom_build_image  : virt_alloc_end : 0xffffffffc32d6000
> > > xc_dom_build_image  : virt_pgtab_end : 0xffffffffc3400000
> > > 
> > > Note it is 0xffffffffc30b4000 - so already past
> > > level2_kernel_pgt (L3[510])
> > > and in level2_fixmap_pgt territory (L3[511]).
> > > 
> > > At that stage we are still operating using the Xen-provided
> > > pagetables - which look to have L4[511][511] empty! That
> > > sounds to me like a Xen toolstack problem. Jan, have you seen
> > > something similar to this?
> > 
> > No, we haven't, but I also don't think anyone tried to create as
> > big a DomU. I was, however, under the impression that DomU-s
> > this big had been created at Oracle before. Or was that only up
> > to 256GB perhaps?
> 
> Mukesh do you recall? Was it with OVM2.2.2 which was 3.4 based?
> It might be that we did not have the 1TB hardware at that time yet.

Yes, in ovm2.x, I debugged/booted up to a 500GB domU. So something
got broken since then, it looks like. I can debug later if it becomes hot. 

thanks,
Mukesh
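
The allocation arithmetic in the quoted xc_dom log can be checked with a
short sketch. Assumptions: the builder sizes each paging level as
((to >> shift) - (from >> shift)) + 1, which reproduces the quoted table
counts; nr_page_tables and pt_index here are illustrative helpers, not
libxc's actual functions.

```python
# Sketch of the per-level pagetable sizing visible in the xc_dom log above.
# Assumption: one table per covered index at a level's shift, i.e.
# ((to >> shift) - (frm >> shift)) + 1 tables for the range [frm, to].

def nr_page_tables(frm, to, shift):
    """Tables needed at one paging level to cover [frm, to] inclusive."""
    return (to >> shift) - (frm >> shift) + 1

def pt_index(va, shift):
    """9-bit index selecting the entry for va at a given level shift."""
    return (va >> shift) & 0x1ff

# The four levels from the quoted log:
print(nr_page_tables(0xffff000000000000, 0xffffffffffffffff, 48))  # 1
print(nr_page_tables(0xffffff8000000000, 0xffffffffffffffff, 39))  # 1
print(nr_page_tables(0xffffffff80000000, 0xffffffffffffffff, 30))  # 2
print(nr_page_tables(0xffffffff80000000, 0xffffffffc33fffff, 21))  # 538

# The start info page at 0xffffffffc30b4000 lands in L4[511], L3[511]:
print(pt_index(0xffffffffc30b4000, 39))  # 511 (L4 index)
print(pt_index(0xffffffffc30b4000, 30))  # 511 (L3 index)
```

The 542 tables in total (1 + 1 + 2 + 538) match the 0x21e pages allocated
for the "page tables" segment, and the L3 index of 511 confirms the start
info page sits in level2_fixmap_pgt territory rather than under
level2_kernel_pgt (L3[510]).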




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 23:42:42 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 23:42:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sx51d-0005FJ-K8; Thu, 02 Aug 2012 23:41:53 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Sx51c-0005FE-J9
	for xen-devel@lists.xensource.com; Thu, 02 Aug 2012 23:41:52 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1343950904!10597652!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc4OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30038 invoked from network); 2 Aug 2012 23:41:44 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 23:41:44 -0000
X-IronPort-AV: E=Sophos;i="4.77,704,1336348800"; d="scan'208";a="13831355"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	02 Aug 2012 23:41:44 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 3 Aug 2012 00:41:44 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Sx51U-0004da-2Z;
	Thu, 02 Aug 2012 23:41:44 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Sx51T-0004hD-U3;
	Fri, 03 Aug 2012 00:41:43 +0100
To: xen-devel@lists.xensource.com
Message-ID: <osstest-13539-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 3 Aug 2012 00:41:43 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13539: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13539 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13539/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-oldkern            4 xen-build                 fail REGR. vs. 13536
 build-i386                    4 xen-build                 fail REGR. vs. 13536

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 12 guest-saverestore.2      fail blocked in 13536
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13536
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13536

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  983ea7521bad
baseline version:
 xen                  3d17148e465c

------------------------------------------------------------
People who touched revisions under test:
  Christoph Egger <Christoph.Egger@amd.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   fail    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   25705:983ea7521bad
tag:         tip
user:        Tim Deegan <tim@xen.org>
date:        Thu Aug 02 14:44:53 2012 +0100
    
    nestedhvm: return the pfec from the pagetable walk.
    
    Don't clobber it with the pfec from the p2m walk behind it; the guest
    will not expect (or be able to handle) error codes that come from the
    p2m table, which it can't see or control.
    
    Signed-off-by: Tim Deegan <tim@xen.org>
    Acked-by: Christoph Egger <Christoph.Egger@amd.com>
    Committed-by: Tim Deegan <tim@xen.org>
    
    
changeset:   25704:c323f1af7e67
user:        Christoph Egger <Christoph.Egger@amd.com>
date:        Thu Aug 02 14:38:09 2012 +0100
    
    nestedhvm: fix write access fault on ro mapping
    
    Fix write access fault when host npt is mapped read-only.
    In this case let the host handle the #NPF.
    Apply host p2mt to hap-on-hap pagetable entry.
    This fixes the l2 guest graphic display refresh problem.
    
    Signed-off-by: Christoph Egger <Christoph.Egger@amd.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Committed-by: Tim Deegan <tim@xen.org>
    
    
changeset:   25703:90bc5e0a67b5
user:        Tim Deegan <tim@xen.org>
date:        Thu Aug 02 12:04:31 2012 +0100
    
    xen: detect compiler version with '--version' rather than '-v'
    
    This allows us to get rid of the 'grep version', which doesn't
    work with localized compilers.
    
    Signed-off-by: Tim Deegan <tim@xen.org>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Tim Deegan <tim@xen.org>
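
    The rationale can be seen with a quick sketch; this is illustrative
    only, not the exact Config.mk recipe the changeset modifies:

```shell
# Illustrative only -- not the exact recipe from the changeset.
# 'cc -v' writes localized diagnostics to stderr, so 'grep version' can
# miss the version line under e.g. LC_ALL=de_DE; '--version' keeps the
# version number on the first line of stdout regardless of locale.
CC=${CC:-cc}
$CC --version | head -n1
```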
    
    
changeset:   25702:3d17148e465c
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Aug 02 11:49:37 2012 +0200
    
    x86: also allow disabling LAPIC NMI watchdog on newer CPUs
    
    This complements c/s 9146:941897e98591, and also replaces a literal
    zero with a proper manifest constant.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 02 23:53:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Aug 2012 23:53:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sx5CR-0005Qc-V7; Thu, 02 Aug 2012 23:53:03 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linus971@gmail.com>) id 1Sx5CQ-0005QX-Gs
	for xen-devel@lists.xensource.com; Thu, 02 Aug 2012 23:53:02 +0000
Received: from [85.158.139.83:31729] by server-8.bemta-5.messagelabs.com id
	FB/DE-10278-DD21B105; Thu, 02 Aug 2012 23:53:01 +0000
X-Env-Sender: linus971@gmail.com
X-Msg-Ref: server-7.tower-182.messagelabs.com!1343951580!24325571!1
X-Originating-IP: [209.85.212.177]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1500 invoked from network); 2 Aug 2012 23:53:01 -0000
Received: from mail-wi0-f177.google.com (HELO mail-wi0-f177.google.com)
	(209.85.212.177)
	by server-7.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Aug 2012 23:53:01 -0000
Received: by wibhm11 with SMTP id hm11so73698wib.6
	for <xen-devel@lists.xensource.com>;
	Thu, 02 Aug 2012 16:53:00 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:from:date
	:x-google-sender-auth:message-id:subject:to:cc:content-type;
	bh=1cPYJot2+6HEsj43FOR27iiPXtb6okBJmCjyxq8JbTQ=;
	b=tvYToUVdnEfMvpf5hMzXsUmEGvPSkEHzbJdVdIaG2Rf5aF/Ike+zfor3oAXGxyUUmX
	MhV3oN/+hr2ARgqpwgmAZgflF6PyUFhgim3Hb+xsztb0AGL15la1srGaoN+rWGMzAOal
	YNBtHwuqcvSXOOe9EiHubeeYn3fdm1hq/1Q+oL5/0QY7xQFJbe82uBG3bGa89qAQEra5
	ay2tKFmN7Et6eVncooDJP7V1pQO6gPdVzb7JIfOIBt+ILsQBWRI18+RVhGl2YsxX+qZF
	tC8eXU7KiuJdCSGC4UK7UsV2nUrfx26TKrhp8AvUcaLBp1J9nhybnq9NXa1QboZ+2iyQ
	+PNA==
Received: by 10.216.153.207 with SMTP id f57mr11855871wek.196.1343951580475;
	Thu, 02 Aug 2012 16:53:00 -0700 (PDT)
MIME-Version: 1.0
Received: by 10.216.203.207 with HTTP; Thu, 2 Aug 2012 16:52:40 -0700 (PDT)
In-Reply-To: <500DB2E2.6070703@linaro.org>
References: <20120722133441.GA6874@gmail.com>
	<20120723144917.GF793@phenom.dumpdata.com>
	<500D8CDD.3060309@linaro.org>
	<20120723182431.GD21870@phenom.dumpdata.com>
	<500D9EBC.204@linaro.org> <20120723195144.GA3454@phenom.dumpdata.com>
	<500DB2E2.6070703@linaro.org>
From: Linus Torvalds <torvalds@linux-foundation.org>
Date: Thu, 2 Aug 2012 16:52:40 -0700
X-Google-Sender-Auth: _A0ebqoUPuWpV5wYbIRnLnYpVZo
Message-ID: <CA+55aFwK8y2p=m7fEQxiHj0L8BKEpiHDX=cKX80XgAT9DLs6Sg@mail.gmail.com>
To: John Stultz <john.stultz@linaro.org>
Cc: xen-devel@lists.xensource.com, Peter Zijlstra <a.p.zijlstra@chello.nl>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	linux-kernel@vger.kernel.org, Thomas Gleixner <tglx@linutronix.de>,
	Andrew Morton <akpm@linux-foundation.org>, Ingo Molnar <mingo@kernel.org>
Subject: Re: [Xen-devel] Was: Re: [GIT PULL] timer changes for v3.6,
 Is: Regression introduced by
 1e75fa8be9fb61e1af46b5b3b176347a4c958ca1
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Jul 23, 2012 at 1:24 PM, John Stultz <john.stultz@linaro.org> wrote:
>
> Great! Thanks again so much for the testing and quick reporting!

Hmm. I'm just cutting 3.6-rc1, and noticing that apparently this patch
never reached me. So now -rc1 is broken on 32 bit under Xen.

I'm not going to delay rc1 for this, but I thought I'd point this out
in the hope that we get it fixed soon. I'll be around for small fixes
for another day and a half before I'm traveling for vacation.

              Linus

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 00:30:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 00:30:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sx5lx-0006ld-CU; Fri, 03 Aug 2012 00:29:45 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <john.stultz@linaro.org>) id 1Sx5lv-0006lX-Sy
	for xen-devel@lists.xensource.com; Fri, 03 Aug 2012 00:29:44 +0000
Received: from [85.158.139.83:20524] by server-5.bemta-5.messagelabs.com id
	DD/1F-02722-77B1B105; Fri, 03 Aug 2012 00:29:43 +0000
X-Env-Sender: john.stultz@linaro.org
X-Msg-Ref: server-3.tower-182.messagelabs.com!1343953779!28079728!1
X-Originating-IP: [32.97.110.151]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMzIuOTcuMTEwLjE1MSA9PiA0Njc1OTc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23287 invoked from network); 3 Aug 2012 00:29:41 -0000
Received: from e33.co.us.ibm.com (HELO e33.co.us.ibm.com) (32.97.110.151)
	by server-3.tower-182.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Aug 2012 00:29:41 -0000
Received: from /spool/local
	by e33.co.us.ibm.com with IBM ESMTP SMTP Gateway: Authorized Use Only!
	Violators will be prosecuted
	for <xen-devel@lists.xensource.com> from <john.stultz@linaro.org>;
	Thu, 2 Aug 2012 18:29:39 -0600
Received: from d03dlp02.boulder.ibm.com (9.17.202.178)
	by e33.co.us.ibm.com (192.168.1.133) with IBM ESMTP SMTP Gateway:
	Authorized Use Only! Violators will be prosecuted; 
	Thu, 2 Aug 2012 18:28:45 -0600
Received: from d03relay01.boulder.ibm.com (d03relay01.boulder.ibm.com
	[9.17.195.226])
	by d03dlp02.boulder.ibm.com (Postfix) with ESMTP id 28FBD3E4003E
	for <xen-devel@lists.xensource.com>;
	Fri,  3 Aug 2012 00:28:43 +0000 (WET)
Received: from d03av03.boulder.ibm.com (d03av03.boulder.ibm.com [9.17.195.169])
	by d03relay01.boulder.ibm.com (8.13.8/8.13.8/NCO v10.0) with ESMTP id
	q730SiRo160692
	for <xen-devel@lists.xensource.com>; Thu, 2 Aug 2012 18:28:44 -0600
Received: from d03av03.boulder.ibm.com (loopback [127.0.0.1])
	by d03av03.boulder.ibm.com (8.14.4/8.13.1/NCO v10.0 AVout) with ESMTP
	id q730SheW004243
	for <xen-devel@lists.xensource.com>; Thu, 2 Aug 2012 18:28:44 -0600
Received: from [9.49.148.209] (sig-9-49-148-209.mts.ibm.com [9.49.148.209])
	by d03av03.boulder.ibm.com (8.14.4/8.13.1/NCO v10.0 AVin) with ESMTP id
	q730Seil004141; Thu, 2 Aug 2012 18:28:41 -0600
Message-ID: <501B1B38.3060808@linaro.org>
Date: Thu, 02 Aug 2012 17:28:40 -0700
From: John Stultz <john.stultz@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120714 Thunderbird/14.0
MIME-Version: 1.0
To: Linus Torvalds <torvalds@linux-foundation.org>
References: <20120722133441.GA6874@gmail.com>
	<20120723144917.GF793@phenom.dumpdata.com>
	<500D8CDD.3060309@linaro.org>
	<20120723182431.GD21870@phenom.dumpdata.com>
	<500D9EBC.204@linaro.org>
	<20120723195144.GA3454@phenom.dumpdata.com>
	<500DB2E2.6070703@linaro.org>
	<CA+55aFwK8y2p=m7fEQxiHj0L8BKEpiHDX=cKX80XgAT9DLs6Sg@mail.gmail.com>
In-Reply-To: <CA+55aFwK8y2p=m7fEQxiHj0L8BKEpiHDX=cKX80XgAT9DLs6Sg@mail.gmail.com>
X-Content-Scanned: Fidelis XPS MAILER
x-cbid: 12080300-2398-0000-0000-000009167F86
Cc: xen-devel@lists.xensource.com, Peter Zijlstra <a.p.zijlstra@chello.nl>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	linux-kernel@vger.kernel.org, Thomas Gleixner <tglx@linutronix.de>,
	Andrew Morton <akpm@linux-foundation.org>, Ingo Molnar <mingo@kernel.org>
Subject: Re: [Xen-devel] Was: Re: [GIT PULL] timer changes for v3.6,
 Is: Regression introduced by
 1e75fa8be9fb61e1af46b5b3b176347a4c958ca1
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/02/2012 04:52 PM, Linus Torvalds wrote:
> On Mon, Jul 23, 2012 at 1:24 PM, John Stultz <john.stultz@linaro.org> wrote:
>> Great! Thanks again so much for the testing and quick reporting!
> Hmm. I'm just cutting 3.6-rc1, and noticing that apparently this patch
> never reached me. So now -rc1 is broken on 32 bit under Xen.
>
> I'm not going to delay rc1 for this, but I thought I'd point this out
> in the hope that we get it fixed soon. I'll be around for small fixes
> for another day and a half before I'm traveling for vacation.

Yea, the fix has been sitting in tip/timers/urgent.  I heard Thomas was 
on vacation, so maybe that's why he's not sent the pull request?

Ingo, could you make the pull request?  Sorry if the commit log didn't 
make it clear this was more urgent.

thanks
-john



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 02:45:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 02:45:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sx7sx-0003Qb-NM; Fri, 03 Aug 2012 02:45:07 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Sx7sw-0003QW-6k
	for xen-devel@lists.xensource.com; Fri, 03 Aug 2012 02:45:06 +0000
Received: from [85.158.138.51:47875] by server-10.bemta-3.messagelabs.com id
	96/8E-21993-03B3B105; Fri, 03 Aug 2012 02:45:04 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-8.tower-174.messagelabs.com!1343961903!30130681!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc4OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1019 invoked from network); 3 Aug 2012 02:45:03 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 02:45:03 -0000
X-IronPort-AV: E=Sophos;i="4.77,704,1336348800"; d="scan'208";a="13832172"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Aug 2012 02:45:02 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 3 Aug 2012 03:45:02 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Sx7sr-0005bT-O9;
	Fri, 03 Aug 2012 02:45:01 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Sx7sr-0007b8-Bi;
	Fri, 03 Aug 2012 03:45:01 +0100
To: xen-devel@lists.xensource.com
Message-ID: <osstest-13540-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 3 Aug 2012 03:45:01 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13540: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13540 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13540/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-oldkern            4 xen-build                 fail REGR. vs. 13536
 build-i386                    4 xen-build                 fail REGR. vs. 13536

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13536
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13536
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13536

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  983ea7521bad
baseline version:
 xen                  3d17148e465c

------------------------------------------------------------
People who touched revisions under test:
  Christoph Egger <Christoph.Egger@amd.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   fail    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   25705:983ea7521bad
tag:         tip
user:        Tim Deegan <tim@xen.org>
date:        Thu Aug 02 14:44:53 2012 +0100
    
    nestedhvm: return the pfec from the pagetable walk.
    
    Don't clobber it with the pfec from the p2m walk behind it; the guest
    will not expect (or be able to handle) error codes that come from the
    p2m table, which it can't see or control.
    
    Signed-off-by: Tim Deegan <tim@xen.org>
    Acked-by: Christoph Egger <Christoph.Egger@amd.com>
    Committed-by: Tim Deegan <tim@xen.org>
    
    
changeset:   25704:c323f1af7e67
user:        Christoph Egger <Christoph.Egger@amd.com>
date:        Thu Aug 02 14:38:09 2012 +0100
    
    nestedhvm: fix write access fault on ro mapping
    
    Fix write access fault when host npt is mapped read-only.
    In this case let the host handle the #NPF.
    Apply host p2mt to hap-on-hap pagetable entry.
    This fixes the l2 guest graphic display refresh problem.
    
    Signed-off-by: Christoph Egger <Christoph.Egger@amd.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Committed-by: Tim Deegan <tim@xen.org>
    
    
changeset:   25703:90bc5e0a67b5
user:        Tim Deegan <tim@xen.org>
date:        Thu Aug 02 12:04:31 2012 +0100
    
    xen: detect compiler version with '--version' rather than '-v'
    
    This allows us to get rid of the 'grep version', which doesn't
    work with localized compilers.
    
    Signed-off-by: Tim Deegan <tim@xen.org>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Tim Deegan <tim@xen.org>
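    [Editor's note: a minimal shell sketch of the idea behind c/s 25703, not
    the actual xen build rule. The sample string and the last-field position
    are assumptions; 'cc -v' emits localized prose, so 'grep version' can
    fail under non-English locales, whereas the first line of 'cc --version'
    ends with the version number.]

    ```shell
    # Sketch (assumed, not Xen's real Makefile rule): parse the version
    # out of the first line of 'cc --version' instead of grepping the
    # localized 'cc -v' output for the English word "version".
    sample='gcc (Debian 4.7.1-7) 4.7.1'      # typical 'gcc --version' line 1
    ver=$(printf '%s\n' "$sample" | awk '{ print $NF }')
    echo "$ver"                              # prints 4.7.1 for this sample
    ```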
    
    
changeset:   25702:3d17148e465c
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Aug 02 11:49:37 2012 +0200
    
    x86: also allow disabling LAPIC NMI watchdog on newer CPUs
    
    This complements c/s 9146:941897e98591, and also replaces a literal
    zero with a proper manifest constant.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   25705:983ea7521bad
tag:         tip
user:        Tim Deegan <tim@xen.org>
date:        Thu Aug 02 14:44:53 2012 +0100
    
    nestedhvm: return the pfec from the pagetable walk.
    
    Don't clobber it with the pfec from the p2m walk behind it; the guest
    will not expect (or be able to handle) error codes that come from the
    p2m table, which it can't see or control.
    
    Signed-off-by: Tim Deegan <tim@xen.org>
    Acked-by: Christoph Egger <Christoph.Egger@amd.com>
    Committed-by: Tim Deegan <tim@xen.org>
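
    The behaviour this changeset fixes can be illustrated with a small,
    self-contained C sketch (all names and flag values here are
    hypothetical; the real Xen code is considerably more involved):

    ```c
    #include <assert.h>
    #include <stdint.h>

    /* Illustrative flag values only, not Xen's actual PFEC encoding. */
    #define PFEC_WRITE 0x2u
    #define PFEC_P2M   0x80000000u

    /* Hypothetical walks: the guest pagetable walk produces an error
     * code the guest can understand; the host-side p2m walk produces
     * one it cannot, since the p2m table is invisible to the guest. */
    static uint32_t guest_walk(void) { return PFEC_WRITE; }
    static uint32_t p2m_walk(void)   { return PFEC_P2M; }

    static uint32_t nested_fault(void)
    {
        uint32_t pfec = guest_walk();
        /* Do the p2m walk, but do not let its error code clobber the
         * one the guest should observe (the pre-fix code effectively
         * returned the p2m walk's code instead). */
        (void)p2m_walk();
        return pfec;
    }

    int main(void)
    {
        /* The guest sees only the pagetable-walk error code. */
        assert(nested_fault() == PFEC_WRITE);
        return 0;
    }
    ```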
    
    
changeset:   25704:c323f1af7e67
user:        Christoph Egger <Christoph.Egger@amd.com>
date:        Thu Aug 02 14:38:09 2012 +0100
    
    nestedhvm: fix write access fault on ro mapping
    
    Fix write access fault when host npt is mapped read-only.
    In this case let the host handle the #NPF.
    Apply host p2mt to hap-on-hap pagetable entry.
    This fixes the l2 guest graphic display refresh problem.
    
    Signed-off-by: Christoph Egger <Christoph.Egger@amd.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Committed-by: Tim Deegan <tim@xen.org>
    
    
changeset:   25703:90bc5e0a67b5
user:        Tim Deegan <tim@xen.org>
date:        Thu Aug 02 12:04:31 2012 +0100
    
    xen: detect compiler version with '--version' rather than '-v'
    
    This allows us to get rid of the 'grep version', which doesn't
    work with localized compilers.
    
    Signed-off-by: Tim Deegan <tim@xen.org>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Tim Deegan <tim@xen.org>
    
    
changeset:   25702:3d17148e465c
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Aug 02 11:49:37 2012 +0200
    
    x86: also allow disabling LAPIC NMI watchdog on newer CPUs
    
    This complements c/s 9146:941897e98591, and also replaces a literal
    zero with a proper manifest constant.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 02:59:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 02:59:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sx86q-0003aP-47; Fri, 03 Aug 2012 02:59:28 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1Sx86n-0003aK-VT
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 02:59:26 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1343962759!3567986!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzA1MDE3\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28157 invoked from network); 3 Aug 2012 02:59:20 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-7.tower-27.messagelabs.com with SMTP;
	3 Aug 2012 02:59:20 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga101.jf.intel.com with ESMTP; 02 Aug 2012 19:59:18 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.67,352,1309762800"; d="scan'208";a="175005181"
Received: from fmsmsx104.amr.corp.intel.com ([10.19.9.35])
	by orsmga001.jf.intel.com with ESMTP; 02 Aug 2012 19:59:18 -0700
Received: from fmsmsx151.amr.corp.intel.com (10.19.17.220) by
	FMSMSX104.amr.corp.intel.com (10.19.9.35) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Thu, 2 Aug 2012 19:59:18 -0700
Received: from shsmsx102.ccr.corp.intel.com (10.239.4.154) by
	FMSMSX151.amr.corp.intel.com (10.19.17.220) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Thu, 2 Aug 2012 19:59:18 -0700
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.82]) by
	SHSMSX102.ccr.corp.intel.com ([169.254.2.196]) with mapi id
	14.01.0355.002; Fri, 3 Aug 2012 10:59:16 +0800
From: "Zhang, Yang Z" <yang.z.zhang@intel.com>
To: xen-devel <xen-devel@lists.xen.org>
Thread-Topic: error when pass through device to guest with qemu-xen-dir-remote
Thread-Index: Ac1xI/d2CwL/sZ+3RsmMyYPR9jF2hQ==
Date: Fri, 3 Aug 2012 02:59:15 +0000
Message-ID: <A9667DDFB95DB7438FA9D7D576C3D87E203D54@SHSMSX101.ccr.corp.intel.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "anthony.perard@citrix.com" <anthony.perard@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: [Xen-devel] error when pass through device to guest with
	qemu-xen-dir-remote
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

When I create a guest with a device assigned, the following error is shown and the device does not work inside the guest:
libxl: error: libxl_qmp.c:288:qmp_handle_error_response: received an error message from QMP server: Parameter 'driver' expects a driver name

It only fails with qemu-xen-dir-remote (is this tree closer to upstream qemu?). I don't see the error with the traditional qemu.
I also tried qemu-upstream, but it fails when I try to enable PCI pass-through for Xen. I think Anthony's patch adding PCI pass-through support for Xen has been accepted into qemu-upstream; am I right?

Another question:
I am now trying to add some features (related to device pass-through) to qemu. Which tree should I use? Traditional qemu is very different from qemu-upstream and too old to base patches on, but apart from the old one I cannot find a working qemu tree.

best regards
yang

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 05:13:07 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 05:13:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxABi-00056U-Rj; Fri, 03 Aug 2012 05:12:38 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SxABh-00056O-HH
	for xen-devel@lists.xensource.com; Fri, 03 Aug 2012 05:12:37 +0000
Received: from [85.158.143.99:34098] by server-2.bemta-4.messagelabs.com id
	9E/61-17938-4CD5B105; Fri, 03 Aug 2012 05:12:36 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-216.messagelabs.com!1343970756!24730787!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc4OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8732 invoked from network); 3 Aug 2012 05:12:36 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-2.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 05:12:36 -0000
X-IronPort-AV: E=Sophos;i="4.77,705,1336348800"; d="scan'208";a="13832935"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Aug 2012 05:12:36 +0000
Received: from [127.0.0.1] (10.80.16.67) by smtprelay.citrix.com
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Fri, 3 Aug 2012
	06:12:36 +0100
Message-ID: <1343970755.24794.0.camel@dagon.hellion.org.uk>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 3 Aug 2012 06:12:35 +0100
In-Reply-To: <osstest-13539-mainreport@xen.org>
References: <osstest-13539-mainreport@xen.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Christoph Egger <Christoph.Egger@amd.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>, Tim
	Deegan <tim@xen.org>
Subject: Re: [Xen-devel] [xen-unstable test] 13539: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-08-03 at 00:41 +0100, xen.org wrote:
> flight 13539 xen-unstable real [real]
> http://www.chiark.greenend.org.uk/~xensrcts/logs/13539/
> 
> Regressions :-(
> 
> Tests which did not succeed and are blocking,
> including tests which could not be run:
>  build-i386-oldkern            4 xen-build                 fail REGR. vs. 13536
>  build-i386                    4 xen-build                 fail REGR. vs. 13536

gcc -O1 -fno-omit-frame-pointer -m32 -march=i686 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement   -fno-builtin -fno-common -Wredundant-decls -iwithprefix include -Werror -Wno-pointer-arith -pipe -I/home/osstest/build.13539.build-i386/xen-unstable/xen/include  -I/home/osstest/build.13539.build-i386/xen-unstable/xen/include/asm-x86/mach-generic -I/home/osstest/build.13539.build-i386/xen-unstable/xen/include/asm-x86/mach-default -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs -fno-optimize-sibling-calls -nostdinc -g -D__XEN__ -include /home/osstest/build.13539.build-i386/xen-unstable/xen/include/xen/config.h -DVERBOSE -DHAS_ACPI -DHAS_GDBSX -DHAS_PASSTHROUGH -fno-omit-frame-pointer -DCONFIG_FRAME_POINTER -MMD -MF .mce_amd_quirks.o.d -c mce_amd_quirks.c -o mce_amd_quirks.o
cc1: warnings being treated as errors
hvm.c: In function 'hvm_hap_nested_page_fault':
hvm.c:1282: error: passing argument 2 of 'nestedhvm_hap_nested_page_fault' from incompatible pointer type
/home/osstest/build.13539.build-i386/xen-unstable/xen/include/asm/hvm/nestedhvm.h:55: note: expected 'paddr_t *' but argument is of type 'long unsigned int *'

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 05:35:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 05:35:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxAX3-0005Hx-QX; Fri, 03 Aug 2012 05:34:41 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <maheen_butt26@yahoo.com>) id 1SxAX2-0005Hs-SY
	for xen-devel@lists.xensource.com; Fri, 03 Aug 2012 05:34:41 +0000
X-Env-Sender: maheen_butt26@yahoo.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1343972073!1950596!1
X-Originating-IP: [98.138.90.45]
X-SpamReason: No, hits=0.9 required=7.0 tests=FROM_HAS_ULINE_NUMS,
	HTML_50_60, HTML_MESSAGE, ML_RADAR_SPEW_LINKS_12, ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21487 invoked from network); 3 Aug 2012 05:34:34 -0000
Received: from nm24-vm1.bullet.mail.ne1.yahoo.com (HELO
	nm24-vm1.bullet.mail.ne1.yahoo.com) (98.138.90.45)
	by server-15.tower-27.messagelabs.com with SMTP;
	3 Aug 2012 05:34:34 -0000
Received: from [98.138.90.54] by nm24.bullet.mail.ne1.yahoo.com with NNFMP;
	03 Aug 2012 05:34:33 -0000
Received: from [98.138.226.166] by tm7.bullet.mail.ne1.yahoo.com with NNFMP;
	03 Aug 2012 05:34:33 -0000
Received: from [127.0.0.1] by omp1067.mail.ne1.yahoo.com with NNFMP;
	03 Aug 2012 05:34:33 -0000
X-Yahoo-Newman-Property: ymail-3
X-Yahoo-Newman-Id: 501753.67461.bm@omp1067.mail.ne1.yahoo.com
Received: (qmail 72092 invoked by uid 60001); 3 Aug 2012 05:34:33 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024;
	t=1343972073; bh=/gwe+RT/eFyS4SeOLWrl4XtBSEormcGkXW+2floq5GE=;
	h=X-YMail-OSG:Received:X-Mailer:Message-ID:Date:From:Reply-To:Subject:To:MIME-Version:Content-Type;
	b=Qn8h8A0AWrweZIq3FzD46GwY0o/qkaXSaccb3sc5ctJRA2smh8ZdJXoaBD8OGuiTJvrl3SWp4E+ZdbWTix6Sd1HKBUuoO2LxDNye6+2pqmItbTssKTa9O7ysQcXW0hbIqFYM85NXwecJ7KG48tdtp6K4V63Q+J9SMr43of+i/XI=
DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=s1024; d=yahoo.com;
	h=X-YMail-OSG:Received:X-Mailer:Message-ID:Date:From:Reply-To:Subject:To:MIME-Version:Content-Type;
	b=I0kltLkZzah+ATTgWuYPmBIEyvAi9HmgZMmmB8Hyhic7lTRnauXLAwdrGcwqPvhe61znBk+XPiOZiPNG8aoQRJnmvSMUkTyFaIGpB+LVrsuDrcGM/b+ah5Rijes3p1orPHNTAHo/0aJW9Cjp13nc1va/WsWTg96mQVnLsVU+nJw=;
X-YMail-OSG: Py0X9boVM1mrZw9ruvTMfb0gyEMVrW9V1BMjH_NbTkuUHdT
	V15CyQJf6ayenN1OjoK0uxJqpubvqTV.o9bSOZ2mhtTrkNysFUDXLfxxfpNp
	zL39T0QwKmR7jU2TfrRN5mYVsCxwAB79tTyj7d9AdrRZ4lIZmW5M1zvmI442
	NjS03LxSta_CTXsSd85viLuYreuRTLR15zZAJoqxaWW_lPoSlP8yCDeHwc1L
	JipV2G5mgUtacwjYg7u2OTxjfqXjoqv_t2nXwA7jPodoEv8FHs5c67HpceY4
	JQWcPHI.2eyRpKUaxhaHqlPSkkr505AaSYG.b6UaHMRpjo1Vkn3gCX.b8Thc
	rHDGd.6vp.2wIpqnxP.ZJVulmoPs.55tzadnGO_iDk2fC4iX3_ABu_hrdf9H
	_iVrOOWCVc9hVYGt3dBpekDnDw.ggcVLvk6emcDRy9sssoVgSLUKpI2dVAxS
	5WatHzrtITtN77eUa_3ztX91GRQ--
Received: from [58.27.199.186] by web126004.mail.ne1.yahoo.com via HTTP;
	Thu, 02 Aug 2012 22:34:33 PDT
X-Mailer: YahooMailWebService/0.8.120.356233
Message-ID: <1343972073.61454.YahooMailNeo@web126004.mail.ne1.yahoo.com>
Date: Thu, 2 Aug 2012 22:34:33 -0700 (PDT)
From: maheen butt <maheen_butt26@yahoo.com>
To: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
MIME-Version: 1.0
Subject: [Xen-devel] Debian squeeze as Dom 0?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: maheen butt <maheen_butt26@yahoo.com>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4546846732720811062=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4546846732720811062==
Content-Type: multipart/alternative; boundary="1688457910-1237027740-1343972073=:61454"

--1688457910-1237027740-1343972073=:61454
Content-Type: text/plain; charset=iso-8859-1
Content-Transfer-Encoding: 8bit

Hi,
I'm learning Xen. My intention is to track down the changes (at source code level) required to turn a kernel into a dom0 kernel (considering a kernel earlier than 2.6.18). While going through the Xen wiki, something confused me.
Debian squeeze 6.0 is said to be a dom0 kernel. Debian squeeze uses the 2.6.32 Linux kernel, and after installing Debian we have to install the Xen packages, which include a Xen-enabled Linux kernel. So why do we consider Debian squeeze a dom0 kernel? Is it because of its readily available packages?
It is also stated that dom0 support was added in mainline as of 2.6.37, and that with a distribution shipping kernel 3.0 we only need to install the hypervisor. Then why do we call Debian squeeze a dom0 kernel even though it uses the 2.6.32 kernel?

Thanks in advance
maheen
--1688457910-1237027740-1343972073=:61454--


--===============4546846732720811062==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4546846732720811062==--


From xen-devel-bounces@lists.xen.org Fri Aug 03 05:35:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 05:35:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxAX3-0005Hx-QX; Fri, 03 Aug 2012 05:34:41 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <maheen_butt26@yahoo.com>) id 1SxAX2-0005Hs-SY
	for xen-devel@lists.xensource.com; Fri, 03 Aug 2012 05:34:41 +0000
X-Env-Sender: maheen_butt26@yahoo.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1343972073!1950596!1
X-Originating-IP: [98.138.90.45]
X-SpamReason: No, hits=0.9 required=7.0 tests=FROM_HAS_ULINE_NUMS,
	HTML_50_60, HTML_MESSAGE, ML_RADAR_SPEW_LINKS_12, ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21487 invoked from network); 3 Aug 2012 05:34:34 -0000
Received: from nm24-vm1.bullet.mail.ne1.yahoo.com (HELO
	nm24-vm1.bullet.mail.ne1.yahoo.com) (98.138.90.45)
	by server-15.tower-27.messagelabs.com with SMTP;
	3 Aug 2012 05:34:34 -0000
Received: from [98.138.90.54] by nm24.bullet.mail.ne1.yahoo.com with NNFMP;
	03 Aug 2012 05:34:33 -0000
Received: from [98.138.226.166] by tm7.bullet.mail.ne1.yahoo.com with NNFMP;
	03 Aug 2012 05:34:33 -0000
Received: from [127.0.0.1] by omp1067.mail.ne1.yahoo.com with NNFMP;
	03 Aug 2012 05:34:33 -0000
X-Yahoo-Newman-Property: ymail-3
X-Yahoo-Newman-Id: 501753.67461.bm@omp1067.mail.ne1.yahoo.com
Received: (qmail 72092 invoked by uid 60001); 3 Aug 2012 05:34:33 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024;
	t=1343972073; bh=/gwe+RT/eFyS4SeOLWrl4XtBSEormcGkXW+2floq5GE=;
	h=X-YMail-OSG:Received:X-Mailer:Message-ID:Date:From:Reply-To:Subject:To:MIME-Version:Content-Type;
	b=Qn8h8A0AWrweZIq3FzD46GwY0o/qkaXSaccb3sc5ctJRA2smh8ZdJXoaBD8OGuiTJvrl3SWp4E+ZdbWTix6Sd1HKBUuoO2LxDNye6+2pqmItbTssKTa9O7ysQcXW0hbIqFYM85NXwecJ7KG48tdtp6K4V63Q+J9SMr43of+i/XI=
DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=s1024; d=yahoo.com;
	h=X-YMail-OSG:Received:X-Mailer:Message-ID:Date:From:Reply-To:Subject:To:MIME-Version:Content-Type;
	b=I0kltLkZzah+ATTgWuYPmBIEyvAi9HmgZMmmB8Hyhic7lTRnauXLAwdrGcwqPvhe61znBk+XPiOZiPNG8aoQRJnmvSMUkTyFaIGpB+LVrsuDrcGM/b+ah5Rijes3p1orPHNTAHo/0aJW9Cjp13nc1va/WsWTg96mQVnLsVU+nJw=;
X-YMail-OSG: Py0X9boVM1mrZw9ruvTMfb0gyEMVrW9V1BMjH_NbTkuUHdT
	V15CyQJf6ayenN1OjoK0uxJqpubvqTV.o9bSOZ2mhtTrkNysFUDXLfxxfpNp
	zL39T0QwKmR7jU2TfrRN5mYVsCxwAB79tTyj7d9AdrRZ4lIZmW5M1zvmI442
	NjS03LxSta_CTXsSd85viLuYreuRTLR15zZAJoqxaWW_lPoSlP8yCDeHwc1L
	JipV2G5mgUtacwjYg7u2OTxjfqXjoqv_t2nXwA7jPodoEv8FHs5c67HpceY4
	JQWcPHI.2eyRpKUaxhaHqlPSkkr505AaSYG.b6UaHMRpjo1Vkn3gCX.b8Thc
	rHDGd.6vp.2wIpqnxP.ZJVulmoPs.55tzadnGO_iDk2fC4iX3_ABu_hrdf9H
	_iVrOOWCVc9hVYGt3dBpekDnDw.ggcVLvk6emcDRy9sssoVgSLUKpI2dVAxS
	5WatHzrtITtN77eUa_3ztX91GRQ--
Received: from [58.27.199.186] by web126004.mail.ne1.yahoo.com via HTTP;
	Thu, 02 Aug 2012 22:34:33 PDT
X-Mailer: YahooMailWebService/0.8.120.356233
Message-ID: <1343972073.61454.YahooMailNeo@web126004.mail.ne1.yahoo.com>
Date: Thu, 2 Aug 2012 22:34:33 -0700 (PDT)
From: maheen butt <maheen_butt26@yahoo.com>
To: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
MIME-Version: 1.0
Subject: [Xen-devel] Debian squeeze as Dom 0?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: maheen butt <maheen_butt26@yahoo.com>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4546846732720811062=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4546846732720811062==
Content-Type: multipart/alternative; boundary="1688457910-1237027740-1343972073=:61454"

--1688457910-1237027740-1343972073=:61454
Content-Type: text/plain; charset=iso-8859-1
Content-Transfer-Encoding: 7bit

Hi,
I'm going through the experience of learning XEN. My intentions are to track down changes (at source code level) required to make a kernel into dom 0
kernel (if we consider that particular kernel is earlier than 2.6.18). while going through Xen wiki I have a confusion in my mind.
Debian squeeze 6.0 is said to be Dom 0 kernel. Debian squeeze is using 2.6.32 linux kernel and once we install Debian we have to install Xen packages
in which Xen enabled Linux kernel is also included. Then why we consider Debian squeeze as Dom 0 kernel because of its readily available packages?
it is also stated that Dom 0 support was added since 2.6.37 and if we have a distribution having kernel 3.0 we just need to install hypervisor.  then why we call
Debian squeeze a Dom 0 kernel though it is using 2.6.32 kernel?

Thanks in advance
maheen
--1688457910-1237027740-1343972073=:61454--


--===============4546846732720811062==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4546846732720811062==--


From xen-devel-bounces@lists.xen.org Fri Aug 03 05:48:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 05:48:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxAjw-0005YH-9k; Fri, 03 Aug 2012 05:48:00 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <james.harper@bendigoit.com.au>) id 1SxAju-0005YC-HN
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 05:47:58 +0000
X-Env-Sender: james.harper@bendigoit.com.au
X-Msg-Ref: server-4.tower-27.messagelabs.com!1343972869!8909783!1
X-Originating-IP: [203.16.207.99]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29607 invoked from network); 3 Aug 2012 05:47:52 -0000
Received: from mail.bendigoit.com.au (HELO smtp2.bendigoit.com.au)
	(203.16.207.99)
	by server-4.tower-27.messagelabs.com with AES256-SHA encrypted SMTP;
	3 Aug 2012 05:47:52 -0000
Received: from trantor.int.sbss.com.au ([192.168.200.206]
	helo=mail.bendigoit.com.au)
	by smtp2.bendigoit.com.au with esmtp (Exim 4.72)
	(envelope-from <james.harper@bendigoit.com.au>) id 1SxAjj-0006yI-Le
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 15:47:47 +1000
Received: from BITCOM1.int.sbss.com.au ([192.168.200.237]) by
	mail.bendigoit.com.au with Microsoft SMTPSVC(6.0.3790.4675); 
	Fri, 3 Aug 2012 15:47:47 +1000
Received: from BITCOM1.int.sbss.com.au ([fe80::a5ca:4fd3:14f:ad5d]) by
	BITCOM1.int.sbss.com.au ([fe80::a5ca:4fd3:14f:ad5d%12]) with mapi id
	14.01.0355.002; Fri, 3 Aug 2012 15:47:46 +1000
From: James Harper <james.harper@bendigoit.com.au>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Thread-Topic: vscsi
Thread-Index: Ac1xO1POgSvT8XwuRd6KRavh/MT4Xw==
Date: Fri, 3 Aug 2012 05:47:46 +0000
Message-ID: <6035A0D088A63A46850C3988ED045A4B299DF56E@BITCOM1.int.sbss.com.au>
Accept-Language: en-AU, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [2001:388:e000:712:21b2:229a:69c9:684e]
x-tm-as-product-ver: SMEX-10.2.0.1135-7.000.1014-19082.002
x-tm-as-result: No--35.351100-0.000000-31
x-tm-as-user-approved-sender: Yes
x-tm-as-user-blocked-sender: No
MIME-Version: 1.0
X-OriginalArrivalTime: 03 Aug 2012 05:47:47.0494 (UTC)
	FILETIME=[82CAA460:01CD713B]
X-Really-From-Bendigo-IT: magichashvalue
Subject: [Xen-devel] vscsi
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I'm not sure if this is a debian thing or if debian is using an older version, but I see this in xend debug log when I try to use vscsi:

cat: /sys/bus/scsi/devices/host6/vendor: No such file or directory
cat: /sys/bus/scsi/devices/host6/model: No such file or directory
cat: /sys/bus/scsi/devices/host6/type: No such file or directory
cat: /sys/bus/scsi/devices/host6/rev: No such file or directory
cat: /sys/bus/scsi/devices/host6/scsi_level: No such file or directory
cat: /sys/bus/scsi/devices/target6:0:3/vendor: No such file or directory
cat: /sys/bus/scsi/devices/target6:0:3/model: No such file or directory
cat: /sys/bus/scsi/devices/target6:0:3/type: No such file or directory
cat: /sys/bus/scsi/devices/target6:0:3/rev: No such file or directory
cat: /sys/bus/scsi/devices/target6:0:3/scsi_level: No such file or directory

In my case the files it is looking for are actually in /sys/bus/scsi/devices/target6:0:3/0:6:0:3/...

Is anyone maintaining vscsi anymore?

Thanks

James
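
The lookup mismatch described above can be reproduced with a plain sysfs walk. The sketch below mocks up the reported layout in a temporary directory, since the real paths depend on the machine; the target and LUN directory names are illustrative, copied from the error output:

```shell
# Recreate the reported layout: the attribute files live one level
# below the target directory, under the LUN directory 0:6:0:3/
# (names are illustrative, taken from the error output above).
sysfs=$(mktemp -d)
mkdir -p "$sysfs/devices/target6:0:3/0:6:0:3"
echo "ATA" > "$sysfs/devices/target6:0:3/0:6:0:3/vendor"

# What the log shows xend doing: cat directly under the target dir.
cat "$sysfs/devices/target6:0:3/vendor" 2>/dev/null \
    || echo "vendor not at target level"

# Where the attribute actually is: search the subtree instead.
find "$sysfs/devices/target6:0:3" -name vendor -exec cat {} \;

rm -rf "$sysfs"
```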

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 05:49:12 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 05:49:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxAkk-0005an-Mj; Fri, 03 Aug 2012 05:48:50 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SxAkj-0005aZ-F7
	for xen-devel@lists.xensource.com; Fri, 03 Aug 2012 05:48:49 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1343972923!3653796!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc4OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9714 invoked from network); 3 Aug 2012 05:48:43 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 05:48:43 -0000
X-IronPort-AV: E=Sophos;i="4.77,705,1336348800"; d="scan'208";a="13833331"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Aug 2012 05:48:43 +0000
Received: from [127.0.0.1] (10.80.16.67) by smtprelay.citrix.com
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Fri, 3 Aug 2012
	06:48:43 +0100
Message-ID: <1343972922.24794.2.camel@dagon.hellion.org.uk>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: maheen butt <maheen_butt26@yahoo.com>
Date: Fri, 3 Aug 2012 06:48:42 +0100
In-Reply-To: <1343972073.61454.YahooMailNeo@web126004.mail.ne1.yahoo.com>
References: <1343972073.61454.YahooMailNeo@web126004.mail.ne1.yahoo.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] Debian squeeze as Dom 0?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-08-03 at 06:34 +0100, maheen butt wrote:
> Hi,
> I'm going through the experience of learning XEN. My intentions are to
> track down changes (at source code level) required to make a kernel
> into dom 0
> kernel (if we consider that particular kernel is earlier than 2.6.18).
> while going through Xen wiki I have a confusion in my mind.
> Debian squeeze 6.0 is said to be Dom 0 kernel. Debian squeeze is using
> 2.6.32 linux kernel and once we install Debian we have to install Xen
> packages 
> in which Xen enabled Linux kernel is also included. Then why we
> consider Debian squeeze as Dom 0 kernel because of its readily
> available packages?
> it is also stated that Dom 0 support was added since 2.6.37 and if we
> have a distribution having kernel 3.0 we just need to install
> hypervisor.  then why we call 
> Debian squeeze a Dom 0 kernel though it is using 2.6.32 kernel?

The Squeeze kernel's xen flavour is based on the old out of tree 2.6.32
pvops xen.git tree.

Ian.
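
A quick way to check the distinction under discussion, i.e. whether the kernel you booted actually came up as Dom0; a minimal sketch, assuming xenfs is mounted at /proc/xen (the xen-utils packages normally arrange this):

```shell
# Dom0 advertises "control_d" in /proc/xen/capabilities; a domU (or a
# kernel not booted under Xen) does not. Assumes xenfs at /proc/xen.
if grep -q control_d /proc/xen/capabilities 2>/dev/null; then
    echo "running as Dom0"
else
    echo "not Dom0 (domU, bare metal, or xenfs not mounted)"
fi
```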




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 06:01:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 06:01:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxAwv-0005uX-UV; Fri, 03 Aug 2012 06:01:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <james.harper@bendigoit.com.au>) id 1SxAwu-0005uS-Qs
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 06:01:25 +0000
Received: from [85.158.143.35:64372] by server-2.bemta-4.messagelabs.com id
	39/9B-17938-4396B105; Fri, 03 Aug 2012 06:01:24 +0000
X-Env-Sender: james.harper@bendigoit.com.au
X-Msg-Ref: server-14.tower-21.messagelabs.com!1343973672!16570056!1
X-Originating-IP: [203.16.224.4]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12992 invoked from network); 3 Aug 2012 06:01:16 -0000
Received: from smtp1.bendigoit.com.au (HELO smtp1.bendigoit.com.au)
	(203.16.224.4)
	by server-14.tower-21.messagelabs.com with AES256-SHA encrypted SMTP;
	3 Aug 2012 06:01:15 -0000
Received: from smtp2.bendigoit.com.au ([203.16.207.99]
	helo=mail.bendigoit.com.au)
	by smtp1.bendigoit.com.au with esmtp (Exim 4.69)
	(envelope-from <james.harper@bendigoit.com.au>) id 1SxAwg-0000vr-H9
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 16:01:10 +1000
Received: from BITCOM1.int.sbss.com.au ([192.168.200.237]) by
	mail.bendigoit.com.au with Microsoft SMTPSVC(6.0.3790.4675); 
	Fri, 3 Aug 2012 16:01:10 +1000
Received: from BITCOM1.int.sbss.com.au ([fe80::a5ca:4fd3:14f:ad5d]) by
	BITCOM1.int.sbss.com.au ([fe80::a5ca:4fd3:14f:ad5d%12]) with mapi id
	14.01.0355.002; Fri, 3 Aug 2012 16:01:10 +1000
From: James Harper <james.harper@bendigoit.com.au>
To: James Harper <james.harper@bendigoit.com.au>, "xen-devel@lists.xen.org"
	<xen-devel@lists.xen.org>
Thread-Topic: vscsi
Thread-Index: Ac1xO1POgSvT8XwuRd6KRavh/MT4XwAAgMXg
Date: Fri, 3 Aug 2012 06:01:09 +0000
Message-ID: <6035A0D088A63A46850C3988ED045A4B299DF656@BITCOM1.int.sbss.com.au>
References: <6035A0D088A63A46850C3988ED045A4B299DF56E@BITCOM1.int.sbss.com.au>
In-Reply-To: <6035A0D088A63A46850C3988ED045A4B299DF56E@BITCOM1.int.sbss.com.au>
Accept-Language: en-AU, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [2001:388:e000:712:21b2:229a:69c9:684e]
x-tm-as-product-ver: SMEX-10.2.0.1135-7.000.1014-19082.002
x-tm-as-result: No--33.503300-0.000000-31
x-tm-as-user-approved-sender: Yes
x-tm-as-user-blocked-sender: No
MIME-Version: 1.0
X-OriginalArrivalTime: 03 Aug 2012 06:01:10.0687 (UTC)
	FILETIME=[61882AF0:01CD713D]
X-Really-From-Bendigo-IT: magichashvalue
Subject: Re: [Xen-devel] vscsi
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> 
> I'm not sure if this is a debian thing or if debian is using an older version, but I
> see this in xend debug log when I try to use vscsi:
> 
> cat: /sys/bus/scsi/devices/host6/vendor: No such file or directory
> cat: /sys/bus/scsi/devices/host6/model: No such file or directory
> cat: /sys/bus/scsi/devices/host6/type: No such file or directory
> cat: /sys/bus/scsi/devices/host6/rev: No such file or directory
> cat: /sys/bus/scsi/devices/host6/scsi_level: No such file or directory
> cat: /sys/bus/scsi/devices/target6:0:3/vendor: No such file or directory
> cat: /sys/bus/scsi/devices/target6:0:3/model: No such file or directory
> cat: /sys/bus/scsi/devices/target6:0:3/type: No such file or directory
> cat: /sys/bus/scsi/devices/target6:0:3/rev: No such file or directory
> cat: /sys/bus/scsi/devices/target6:0:3/scsi_level: No such file or directory
> 
> In my case the files it is looking for are actually in
> /sys/bus/scsi/devices/target6:0:3/0:6:0:3/...
> 

It seems that this was fixed by installing lsscsi...

james
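
The behaviour the fix points at can be sketched as: prefer lsscsi for enumeration when it is installed, and only fall back to reading sysfs attributes directly (the path that produced the cat errors above). This is an illustrative sketch, not the actual xend code:

```shell
# Prefer lsscsi when available; otherwise read vendor strings straight
# out of sysfs (the fallback that failed in the log above).
if command -v lsscsi >/dev/null 2>&1; then
    lsscsi
else
    for dev in /sys/bus/scsi/devices/[0-9]*:*; do
        [ -f "$dev/vendor" ] && printf '%s %s\n' "${dev##*/}" "$(cat "$dev/vendor")"
    done
    true  # enumeration may legitimately find nothing
fi
```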

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> cat: /sys/bus/scsi/devices/target6:0:3/model: No such file or directory
> cat: /sys/bus/scsi/devices/target6:0:3/type: No such file or directory
> cat: /sys/bus/scsi/devices/target6:0:3/rev: No such file or directory
> cat: /sys/bus/scsi/devices/target6:0:3/scsi_level: No such file or directory
> 
> In my case the files it is looking for are actually in
> /sys/bus/scsi/devices/target6:0:3/0:6:0:3/...
> 

It seems that this was fixed by installing lsscsi...
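The mismatch above (attributes living under a `h:c:t:l` child directory such as `target6:0:3/0:6:0:3/` rather than directly under the target directory) can be worked around generically. This is a hedged sketch, not xend's actual code; the helper name and the throwaway demo tree are illustrative only:

```shell
# find_scsi_attr DEVDIR ATTR: print the path of a sysfs attribute,
# checking DEVDIR itself first (older layout), then any h:c:t:l
# child directory (newer layout, as in the log above).
find_scsi_attr() {
    if [ -e "$1/$2" ]; then
        printf '%s\n' "$1/$2"
        return 0
    fi
    for d in "$1"/*:*:*:*; do
        if [ -e "$d/$2" ]; then
            printf '%s\n' "$d/$2"
            return 0
        fi
    done
    return 1
}

# Demo against a throwaway tree mimicking the layout from the log:
root=$(mktemp -d)
mkdir -p "$root/target6:0:3/0:6:0:3"
echo DEMO > "$root/target6:0:3/0:6:0:3/vendor"
find_scsi_attr "$root/target6:0:3" vendor
```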

james

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 06:04:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 06:04:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxAzP-00060Q-GF; Fri, 03 Aug 2012 06:03:59 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SxAzN-00060J-SV
	for xen-devel@lists.xensource.com; Fri, 03 Aug 2012 06:03:58 +0000
Received: from [85.158.143.35:13161] by server-2.bemta-4.messagelabs.com id
	F2/1E-17938-DC96B105; Fri, 03 Aug 2012 06:03:57 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1343973836!18445108!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc4OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31977 invoked from network); 3 Aug 2012 06:03:56 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 06:03:56 -0000
X-IronPort-AV: E=Sophos;i="4.77,705,1336348800"; d="scan'208";a="13833454"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Aug 2012 06:03:30 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 3 Aug 2012 07:03:31 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1SxAyw-00075M-Nu;
	Fri, 03 Aug 2012 06:03:30 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1SxAyw-0005fZ-J2;
	Fri, 03 Aug 2012 07:03:30 +0100
To: xen-devel@lists.xensource.com
Message-ID: <osstest-13541-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 3 Aug 2012 07:03:30 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13541: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13541 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13541/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-oldkern            4 xen-build                 fail REGR. vs. 13536
 build-i386                    4 xen-build                 fail REGR. vs. 13536

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13536
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13536
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13536

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  983ea7521bad
baseline version:
 xen                  3d17148e465c

------------------------------------------------------------
People who touched revisions under test:
  Christoph Egger <Christoph.Egger@amd.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   fail    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   25705:983ea7521bad
tag:         tip
user:        Tim Deegan <tim@xen.org>
date:        Thu Aug 02 14:44:53 2012 +0100
    
    nestedhvm: return the pfec from the pagetable walk.
    
    Don't clobber it with the pfec from the p2m walk behind it; the guest
    will not expect (or be able to handle) error codes that come from the
    p2m table, which it can't see or control.
    
    Signed-off-by: Tim Deegan <tim@xen.org>
    Acked-by: Christoph Egger <Christoph.Egger@amd.com>
    Committed-by: Tim Deegan <tim@xen.org>
    
    
changeset:   25704:c323f1af7e67
user:        Christoph Egger <Christoph.Egger@amd.com>
date:        Thu Aug 02 14:38:09 2012 +0100
    
    nestedhvm: fix write access fault on ro mapping
    
    Fix write access fault when host npt is mapped read-only.
    In this case let the host handle the #NPF.
    Apply host p2mt to hap-on-hap pagetable entry.
    This fixes the l2 guest graphic display refresh problem.
    
    Signed-off-by: Christoph Egger <Christoph.Egger@amd.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Committed-by: Tim Deegan <tim@xen.org>
    
    
changeset:   25703:90bc5e0a67b5
user:        Tim Deegan <tim@xen.org>
date:        Thu Aug 02 12:04:31 2012 +0100
    
    xen: detect compiler version with '--version' rather than '-v'
    
    This allows us to get rid of the 'grep version', which doesn't
    work with localized compilers.
    
    Signed-off-by: Tim Deegan <tim@xen.org>
    Acked-by: Keir Fraser <keir@xen.org>
    Committed-by: Tim Deegan <tim@xen.org>
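    A minimal sketch of the portability point in this changeset (the compiler
    name is assumed, and this is not the Xen build system's actual recipe):

```shell
# Old approach: parse 'cc -v', whose "gcc version ..." line is localized,
# which forces a fragile 'grep version'. New approach: 'cc --version'
# prints the version on the first line regardless of locale.
if command -v cc >/dev/null 2>&1; then
    cc --version | head -n1
else
    echo "no C compiler found"
fi
```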
    
    
changeset:   25702:3d17148e465c
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Aug 02 11:49:37 2012 +0200
    
    x86: also allow disabling LAPIC NMI watchdog on newer CPUs
    
    This complements c/s 9146:941897e98591, and also replaces a literal
    zero with a proper manifest constant.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 07:16:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 07:16:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxC6z-00074W-0G; Fri, 03 Aug 2012 07:15:53 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mingo.kernel.org@gmail.com>) id 1SxBU5-0006M2-Eq
	for xen-devel@lists.xensource.com; Fri, 03 Aug 2012 06:35:41 +0000
X-Env-Sender: mingo.kernel.org@gmail.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1343975734!1956827!1
X-Originating-IP: [74.125.82.171]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6850 invoked from network); 3 Aug 2012 06:35:34 -0000
Received: from mail-we0-f171.google.com (HELO mail-we0-f171.google.com)
	(74.125.82.171)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 06:35:34 -0000
Received: by weyx43 with SMTP id x43so237732wey.30
	for <xen-devel@lists.xensource.com>;
	Thu, 02 Aug 2012 23:35:34 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:date:from:to:cc:subject:message-id:references:mime-version
	:content-type:content-disposition:in-reply-to:user-agent;
	bh=xdqQZ2OhcyOhxd21LzTHjV8SZCK53PEB2YVODsBDRxo=;
	b=eDX01aWFTf11w0tCv1VOVowVhp0BXJyWX9FaOiai8cpTzsgzAOGjupXKn7OlCHMBTC
	83KXRHTxrJL+DMJC6eak3wyqM6TPuDAFOvR/vVoVN1Y10chcc7QiDetMS7fMMQlJTaL4
	GOplPinFdnDFa6AxCOdE3FfweZOLtMNsv7ml5Yn9Uus8yBbs2wawAeQSvoFXNj/4bWI3
	U7I8ITVZaLK/vvQ6QVEMW4idibe3GmRTXooxLUeUa7GDk3I1MXI+wAa42jxaPj67XGOv
	gQn7DuYgPIi65ESAXo/QbZHlegOXFyHsVkFpkYnqlyB38ZBCDxDlSlkjGCh1HqVjg07V
	Np3Q==
Received: by 10.216.122.203 with SMTP id t53mr386810weh.5.1343975734447;
	Thu, 02 Aug 2012 23:35:34 -0700 (PDT)
Received: from gmail.com (54033750.catv.pool.telekom.hu. [84.3.55.80])
	by mx.google.com with ESMTPS id w7sm37781809wiz.0.2012.08.02.23.35.31
	(version=TLSv1/SSLv3 cipher=OTHER);
	Thu, 02 Aug 2012 23:35:32 -0700 (PDT)
Date: Fri, 3 Aug 2012 08:35:29 +0200
From: Ingo Molnar <mingo@kernel.org>
To: John Stultz <john.stultz@linaro.org>
Message-ID: <20120803063529.GA12424@gmail.com>
References: <20120722133441.GA6874@gmail.com>
	<20120723144917.GF793@phenom.dumpdata.com>
	<500D8CDD.3060309@linaro.org>
	<20120723182431.GD21870@phenom.dumpdata.com>
	<500D9EBC.204@linaro.org>
	<20120723195144.GA3454@phenom.dumpdata.com>
	<500DB2E2.6070703@linaro.org>
	<CA+55aFwK8y2p=m7fEQxiHj0L8BKEpiHDX=cKX80XgAT9DLs6Sg@mail.gmail.com>
	<501B1B38.3060808@linaro.org>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <501B1B38.3060808@linaro.org>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Mailman-Approved-At: Fri, 03 Aug 2012 07:15:51 +0000
Cc: xen-devel@lists.xensource.com, Peter Zijlstra <a.p.zijlstra@chello.nl>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	linux-kernel@vger.kernel.org, Andrew Morton <akpm@linux-foundation.org>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>
Subject: Re: [Xen-devel] Was: Re: [GIT PULL] timer changes for v3.6,
 Is: Regression introduced by
 1e75fa8be9fb61e1af46b5b3b176347a4c958ca1
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


* John Stultz <john.stultz@linaro.org> wrote:

> On 08/02/2012 04:52 PM, Linus Torvalds wrote:
> >On Mon, Jul 23, 2012 at 1:24 PM, John Stultz <john.stultz@linaro.org> wrote:
> >> Great! Thanks again so much for the testing and quick 
> >> reporting!
> > Hmm. I'm just cutting 3.6-rc1, and noticing that apparently 
> > this patch never reached me. So now -rc1 is broken on 32 bit 
> > under Xen.
> >
> > I'm not going to delay rc1 for this, but I thought I'd point 
> > this out in the hope that we get it fixed soon. I'll be 
> > around for small fixes for another day and a half before I'm 
> > traveling for vacation.
> 
> Yea, the fix has been sitting in tip/timers/urgent.  I heard 
> Thomas was on vacation, so maybe that's why he's not sent the 
> pull request?
> 
> Ingo, could you make the pull request?  Sorry if the commit 
> log didn't make it clear this was more urgent.

Yeah, it's pending - I've got some other urgent bits pending as 
well, will send the timers/urgent bits with them later today.

Thanks,

	Ingo

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <501B1B38.3060808@linaro.org>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Mailman-Approved-At: Fri, 03 Aug 2012 07:15:51 +0000
Cc: xen-devel@lists.xensource.com, Peter Zijlstra <a.p.zijlstra@chello.nl>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	linux-kernel@vger.kernel.org, Andrew Morton <akpm@linux-foundation.org>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>
Subject: Re: [Xen-devel] Was: Re: [GIT PULL] timer changes for v3.6,
 Is: Regression introduced by
 1e75fa8be9fb61e1af46b5b3b176347a4c958ca1
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


* John Stultz <john.stultz@linaro.org> wrote:

> On 08/02/2012 04:52 PM, Linus Torvalds wrote:
> >On Mon, Jul 23, 2012 at 1:24 PM, John Stultz <john.stultz@linaro.org> wrote:
> >> Great! Thanks again so much for the testing and quick 
> >> reporting!
> > Hmm. I'm just cutting 3.6-rc1, and noticing that apparently 
> > this patch never reached me. So now -rc1 is broken on 32 bit 
> > under Xen.
> >
> > I'm not going to delay rc1 for this, but I thought I'd point 
> > this out in the hope that we get it fixed soon. I'll be 
> > around for small fixes for another day and a half before I'm 
> > traveling for vacation.
> 
> Yea, the fix has been sitting in tip/timers/urgent.  I heard 
> Thomas was on vacation, so maybe that's why he's not sent the 
> pull request?
> 
> Ingo, could you make the pull request?  Sorry if the commit 
> log didn't make it clear this was more urgent.

Yeah, it's pending - I've got some other urgent bits pending as 
well, will send the timers/urgent bits with them later today.

Thanks,

	Ingo

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 07:48:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 07:48:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxCce-0007NF-3z; Fri, 03 Aug 2012 07:48:36 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SxCcd-0007N9-82
	for xen-devel@lists.xensource.com; Fri, 03 Aug 2012 07:48:35 +0000
Received: from [85.158.139.83:41960] by server-1.bemta-5.messagelabs.com id
	92/04-29759-2528B105; Fri, 03 Aug 2012 07:48:34 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-182.messagelabs.com!1343980113!25316121!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc4OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6222 invoked from network); 3 Aug 2012 07:48:33 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-14.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 07:48:33 -0000
X-IronPort-AV: E=Sophos;i="4.77,705,1336348800"; d="scan'208";a="13834572"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Aug 2012 07:48:33 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Fri, 3 Aug 2012
	08:48:33 +0100
Message-ID: <1343980112.21372.4.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Fri, 3 Aug 2012 08:48:32 +0100
In-Reply-To: <1343970755.24794.0.camel@dagon.hellion.org.uk>
References: <osstest-13539-mainreport@xen.org>
	<1343970755.24794.0.camel@dagon.hellion.org.uk>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Christoph Egger <Christoph.Egger@amd.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"Tim \(Xen.org\)" <tim@xen.org>
Subject: Re: [Xen-devel] [xen-unstable test] 13539: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-08-03 at 06:12 +0100, Ian Campbell wrote:
> On Fri, 2012-08-03 at 00:41 +0100, xen.org wrote:
> > flight 13539 xen-unstable real [real]
> > http://www.chiark.greenend.org.uk/~xensrcts/logs/13539/
> > 
> > Regressions :-(
> > 
> > Tests which did not succeed and are blocking,
> > including tests which could not be run:
> >  build-i386-oldkern            4 xen-build                 fail REGR. vs. 13536
> >  build-i386                    4 xen-build                 fail REGR. vs. 13536

8<--------------------------------

# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1343980045 -3600
# Node ID 23fdca3adb3346090ea8b65b77cad7d279cf9daf
# Parent  95a4ab632ac25ce0ec6a245dcc46ad57d3c7030f
nestedhvm: fix nested page fault build error on 32-bit

    cc1: warnings being treated as errors
    hvm.c: In function ‘hvm_hap_nested_page_fault’:
    hvm.c:1282: error: passing argument 2 of ‘nestedhvm_hap_nested_page_fault’ from incompatible pointer type /local/scratch/ianc/devel/xen-unstable.hg/xen/include/asm/hvm/nestedhvm.h:55: note: expected ‘paddr_t *’ but argument is of type ‘long unsigned int *’

hvm_hap_nested_page_fault takes an unsigned long gpa and passes &gpa
to nestedhvm_hap_nested_page_fault which takes a paddr_t *. Since both
of the callers of hvm_hap_nested_page_fault (svm_do_nested_pgfault and
ept_handle_violation) actually have the gpa which they pass to
hvm_hap_nested_page_fault as a paddr_t I think it makes sense to
change the argument to hvm_hap_nested_page_fault.

The other user of gpa in hvm_hap_nested_page_fault is a call to
p2m_mem_access_check, which currently also takes a paddr_t gpa but I
think a paddr_t is appropriate there too.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

diff -r 95a4ab632ac2 -r 23fdca3adb33 xen/arch/x86/hvm/hvm.c
--- a/xen/arch/x86/hvm/hvm.c	Fri Aug 03 08:43:10 2012 +0100
+++ b/xen/arch/x86/hvm/hvm.c	Fri Aug 03 08:47:25 2012 +0100
@@ -1242,7 +1242,7 @@ void hvm_inject_page_fault(int errcode, 
     hvm_inject_trap(&trap);
 }
 
-int hvm_hap_nested_page_fault(unsigned long gpa,
+int hvm_hap_nested_page_fault(paddr_t gpa,
                               bool_t gla_valid,
                               unsigned long gla,
                               bool_t access_r,
diff -r 95a4ab632ac2 -r 23fdca3adb33 xen/arch/x86/mm/p2m.c
--- a/xen/arch/x86/mm/p2m.c	Fri Aug 03 08:43:10 2012 +0100
+++ b/xen/arch/x86/mm/p2m.c	Fri Aug 03 08:47:25 2012 +0100
@@ -1233,7 +1233,7 @@ void p2m_mem_paging_resume(struct domain
     }
 }
 
-bool_t p2m_mem_access_check(unsigned long gpa, bool_t gla_valid, unsigned long gla, 
+bool_t p2m_mem_access_check(paddr_t gpa, bool_t gla_valid, unsigned long gla, 
                           bool_t access_r, bool_t access_w, bool_t access_x,
                           mem_event_request_t **req_ptr)
 {
diff -r 95a4ab632ac2 -r 23fdca3adb33 xen/include/asm-x86/hvm/hvm.h
--- a/xen/include/asm-x86/hvm/hvm.h	Fri Aug 03 08:43:10 2012 +0100
+++ b/xen/include/asm-x86/hvm/hvm.h	Fri Aug 03 08:47:25 2012 +0100
@@ -433,7 +433,7 @@ static inline void hvm_set_info_guest(st
 
 int hvm_debug_op(struct vcpu *v, int32_t op);
 
-int hvm_hap_nested_page_fault(unsigned long gpa,
+int hvm_hap_nested_page_fault(paddr_t gpa,
                               bool_t gla_valid, unsigned long gla,
                               bool_t access_r,
                               bool_t access_w,
diff -r 95a4ab632ac2 -r 23fdca3adb33 xen/include/asm-x86/p2m.h
--- a/xen/include/asm-x86/p2m.h	Fri Aug 03 08:43:10 2012 +0100
+++ b/xen/include/asm-x86/p2m.h	Fri Aug 03 08:47:25 2012 +0100
@@ -589,7 +589,7 @@ static inline void p2m_mem_paging_popula
  * been promoted with no underlying vcpu pause. If the req_ptr has been populated, 
  * then the caller must put the event in the ring (once having released get_gfn*
  * locks -- caller must also xfree the request. */
-bool_t p2m_mem_access_check(unsigned long gpa, bool_t gla_valid, unsigned long gla, 
+bool_t p2m_mem_access_check(paddr_t gpa, bool_t gla_valid, unsigned long gla, 
                           bool_t access_r, bool_t access_w, bool_t access_x,
                           mem_event_request_t **req_ptr);
 /* Resumes the running of the VCPU, restarting the last instruction */
@@ -606,7 +606,7 @@ int p2m_get_mem_access(struct domain *d,
                       hvmmem_access_t *access);
 
 #else
-static inline bool_t p2m_mem_access_check(unsigned long gpa, bool_t gla_valid, 
+static inline bool_t p2m_mem_access_check(paddr_t gpa, bool_t gla_valid, 
                                          unsigned long gla, bool_t access_r, 
                                          bool_t access_w, bool_t access_x,
                                         mem_event_request_t **req_ptr)



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 07:57:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 07:57:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxClN-0007YU-Aj; Fri, 03 Aug 2012 07:57:37 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SxClL-0007YP-Cu
	for xen-devel@lists.xensource.com; Fri, 03 Aug 2012 07:57:35 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1343980648!10576660!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc4OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31466 invoked from network); 3 Aug 2012 07:57:29 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 07:57:29 -0000
X-IronPort-AV: E=Sophos;i="4.77,705,1336348800"; d="scan'208";a="13834689"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Aug 2012 07:57:28 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Fri, 3 Aug 2012
	08:57:28 +0100
Message-ID: <1343980647.21372.7.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Fri, 3 Aug 2012 08:57:27 +0100
In-Reply-To: <1343980112.21372.4.camel@zakaz.uk.xensource.com>
References: <osstest-13539-mainreport@xen.org>
	<1343970755.24794.0.camel@dagon.hellion.org.uk>
	<1343980112.21372.4.camel@zakaz.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Keir Fraser <keir@xen.org>, Christoph Egger <Christoph.Egger@amd.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"Tim \(Xen.org\)" <tim@xen.org>, Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [xen-unstable test] 13539: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

(adding Keir & Jan in case Christoph & Tim aren't around)

On Fri, 2012-08-03 at 08:48 +0100, Ian Campbell wrote:
> On Fri, 2012-08-03 at 06:12 +0100, Ian Campbell wrote:
> > On Fri, 2012-08-03 at 00:41 +0100, xen.org wrote:
> > > flight 13539 xen-unstable real [real]
> > > http://www.chiark.greenend.org.uk/~xensrcts/logs/13539/
> > > 
> > > Regressions :-(
> > > 
> > > Tests which did not succeed and are blocking,
> > > including tests which could not be run:
> > >  build-i386-oldkern            4 xen-build                 fail REGR. vs. 13536
> > >  build-i386                    4 xen-build                 fail REGR. vs. 13536
> 
> 8<--------------------------------
> 
> # HG changeset patch
> # User Ian Campbell <ian.campbell@citrix.com>
> # Date 1343980045 -3600
> # Node ID 23fdca3adb3346090ea8b65b77cad7d279cf9daf
> # Parent  95a4ab632ac25ce0ec6a245dcc46ad57d3c7030f
> nestedhvm: fix nested page fault build error on 32-bit
> 
>     cc1: warnings being treated as errors
>     hvm.c: In function ‘hvm_hap_nested_page_fault’:
>     hvm.c:1282: error: passing argument 2 of ‘nestedhvm_hap_nested_page_fault’ from incompatible pointer type /local/scratch/ianc/devel/xen-unstable.hg/xen/include/asm/hvm/nestedhvm.h:55: note: expected ‘paddr_t *’ but argument is of type ‘long unsigned int *’
> 
> hvm_hap_nested_page_fault takes an unsigned long gpa and passes &gpa
> to nestedhvm_hap_nested_page_fault which takes a paddr_t *. Since both
> of the callers of hvm_hap_nested_page_fault (svm_do_nested_pgfault and
> ept_handle_violation) actually have the gpa which they pass to
> hvm_hap_nested_page_fault as a paddr_t I think it makes sense to
> change the argument to hvm_hap_nested_page_fault.
> 
> The other user of gpa in hvm_hap_nested_page_fault is a call to
> p2m_mem_access_check, which currently also takes a paddr_t gpa but I
> think a paddr_t is appropriate there too.
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> 
> diff -r 95a4ab632ac2 -r 23fdca3adb33 xen/arch/x86/hvm/hvm.c
> --- a/xen/arch/x86/hvm/hvm.c	Fri Aug 03 08:43:10 2012 +0100
> +++ b/xen/arch/x86/hvm/hvm.c	Fri Aug 03 08:47:25 2012 +0100
> @@ -1242,7 +1242,7 @@ void hvm_inject_page_fault(int errcode, 
>      hvm_inject_trap(&trap);
>  }
>  
> -int hvm_hap_nested_page_fault(unsigned long gpa,
> +int hvm_hap_nested_page_fault(paddr_t gpa,
>                                bool_t gla_valid,
>                                unsigned long gla,
>                                bool_t access_r,
> diff -r 95a4ab632ac2 -r 23fdca3adb33 xen/arch/x86/mm/p2m.c
> --- a/xen/arch/x86/mm/p2m.c	Fri Aug 03 08:43:10 2012 +0100
> +++ b/xen/arch/x86/mm/p2m.c	Fri Aug 03 08:47:25 2012 +0100
> @@ -1233,7 +1233,7 @@ void p2m_mem_paging_resume(struct domain
>      }
>  }
>  
> -bool_t p2m_mem_access_check(unsigned long gpa, bool_t gla_valid, unsigned long gla, 
> +bool_t p2m_mem_access_check(paddr_t gpa, bool_t gla_valid, unsigned long gla, 
>                            bool_t access_r, bool_t access_w, bool_t access_x,
>                            mem_event_request_t **req_ptr)
>  {
> diff -r 95a4ab632ac2 -r 23fdca3adb33 xen/include/asm-x86/hvm/hvm.h
> --- a/xen/include/asm-x86/hvm/hvm.h	Fri Aug 03 08:43:10 2012 +0100
> +++ b/xen/include/asm-x86/hvm/hvm.h	Fri Aug 03 08:47:25 2012 +0100
> @@ -433,7 +433,7 @@ static inline void hvm_set_info_guest(st
>  
>  int hvm_debug_op(struct vcpu *v, int32_t op);
>  
> -int hvm_hap_nested_page_fault(unsigned long gpa,
> +int hvm_hap_nested_page_fault(paddr_t gpa,
>                                bool_t gla_valid, unsigned long gla,
>                                bool_t access_r,
>                                bool_t access_w,
> diff -r 95a4ab632ac2 -r 23fdca3adb33 xen/include/asm-x86/p2m.h
> --- a/xen/include/asm-x86/p2m.h	Fri Aug 03 08:43:10 2012 +0100
> +++ b/xen/include/asm-x86/p2m.h	Fri Aug 03 08:47:25 2012 +0100
> @@ -589,7 +589,7 @@ static inline void p2m_mem_paging_popula
>   * been promoted with no underlying vcpu pause. If the req_ptr has been populated, 
>   * then the caller must put the event in the ring (once having released get_gfn*
>   * locks -- caller must also xfree the request. */
> -bool_t p2m_mem_access_check(unsigned long gpa, bool_t gla_valid, unsigned long gla, 
> +bool_t p2m_mem_access_check(paddr_t gpa, bool_t gla_valid, unsigned long gla, 
>                            bool_t access_r, bool_t access_w, bool_t access_x,
>                            mem_event_request_t **req_ptr);
>  /* Resumes the running of the VCPU, restarting the last instruction */
> @@ -606,7 +606,7 @@ int p2m_get_mem_access(struct domain *d,
>                        hvmmem_access_t *access);
>  
>  #else
> -static inline bool_t p2m_mem_access_check(unsigned long gpa, bool_t gla_valid, 
> +static inline bool_t p2m_mem_access_check(paddr_t gpa, bool_t gla_valid, 
>                                           unsigned long gla, bool_t access_r, 
>                                           bool_t access_w, bool_t access_x,
>                                           mem_event_request_t **req_ptr)
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 07:57:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 07:57:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxClN-0007YU-Aj; Fri, 03 Aug 2012 07:57:37 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SxClL-0007YP-Cu
	for xen-devel@lists.xensource.com; Fri, 03 Aug 2012 07:57:35 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1343980648!10576660!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc4OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31466 invoked from network); 3 Aug 2012 07:57:29 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 07:57:29 -0000
X-IronPort-AV: E=Sophos;i="4.77,705,1336348800"; d="scan'208";a="13834689"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Aug 2012 07:57:28 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Fri, 3 Aug 2012
	08:57:28 +0100
Message-ID: <1343980647.21372.7.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Fri, 3 Aug 2012 08:57:27 +0100
In-Reply-To: <1343980112.21372.4.camel@zakaz.uk.xensource.com>
References: <osstest-13539-mainreport@xen.org>
	<1343970755.24794.0.camel@dagon.hellion.org.uk>
	<1343980112.21372.4.camel@zakaz.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Keir Fraser <keir@xen.org>, Christoph Egger <Christoph.Egger@amd.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"Tim \(Xen.org\)" <tim@xen.org>, Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [xen-unstable test] 13539: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

KGFkZGluZyBLZWlyICYgSmFuIGluIGNhc2UgQ2hyaXN0b3BoICYgVGltIGFyZW4ndCBhcm91bmQp
CgpPbiBGcmksIDIwMTItMDgtMDMgYXQgMDg6NDggKzAxMDAsIElhbiBDYW1wYmVsbCB3cm90ZToK
PiBPbiBGcmksIDIwMTItMDgtMDMgYXQgMDY6MTIgKzAxMDAsIElhbiBDYW1wYmVsbCB3cm90ZToK
PiA+IE9uIEZyaSwgMjAxMi0wOC0wMyBhdCAwMDo0MSArMDEwMCwgeGVuLm9yZyB3cm90ZToKPiA+
ID4gZmxpZ2h0IDEzNTM5IHhlbi11bnN0YWJsZSByZWFsIFtyZWFsXQo+ID4gPiBodHRwOi8vd3d3
LmNoaWFyay5ncmVlbmVuZC5vcmcudWsvfnhlbnNyY3RzL2xvZ3MvMTM1MzkvCj4gPiA+IAo+ID4g
PiBSZWdyZXNzaW9ucyA6LSgKPiA+ID4gCj4gPiA+IFRlc3RzIHdoaWNoIGRpZCBub3Qgc3VjY2Vl
ZCBhbmQgYXJlIGJsb2NraW5nLAo+ID4gPiBpbmNsdWRpbmcgdGVzdHMgd2hpY2ggY291bGQgbm90
IGJlIHJ1bjoKPiA+ID4gIGJ1aWxkLWkzODYtb2xka2VybiAgICAgICAgICAgIDQgeGVuLWJ1aWxk
ICAgICAgICAgICAgICAgICBmYWlsIFJFR1IuIHZzLiAxMzUzNgo+ID4gPiAgYnVpbGQtaTM4NiAg
ICAgICAgICAgICAgICAgICAgNCB4ZW4tYnVpbGQgICAgICAgICAgICAgICAgIGZhaWwgUkVHUi4g
dnMuIDEzNTM2Cj4gCj4gODwtLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tCj4gCj4g
IyBIRyBjaGFuZ2VzZXQgcGF0Y2gKPiAjIFVzZXIgSWFuIENhbXBiZWxsIDxpYW4uY2FtcGJlbGxA
Y2l0cml4LmNvbT4KPiAjIERhdGUgMTM0Mzk4MDA0NSAtMzYwMAo+ICMgTm9kZSBJRCAyM2ZkY2Ez
YWRiMzM0NjA5MGVhOGI2NWI3N2NhZDdkMjc5Y2Y5ZGFmCj4gIyBQYXJlbnQgIDk1YTRhYjYzMmFj
MjVjZTBlYzZhMjQ1ZGNjNDZhZDU3ZDNjNzAzMGYKPiBuZXN0ZWRodm06IGZpeCBuZXN0ZWQgcGFn
ZSBmYXVsdCBidWlsZCBlcnJvciBvbiAzMi1iaXQKPiAKPiAgICAgY2MxOiB3YXJuaW5ncyBiZWlu
ZyB0cmVhdGVkIGFzIGVycm9ycwo+ICAgICBodm0uYzogSW4gZnVuY3Rpb24g4oCYaHZtX2hhcF9u
ZXN0ZWRfcGFnZV9mYXVsdOKAmToKPiAgICAgaHZtLmM6MTI4MjogZXJyb3I6IHBhc3NpbmcgYXJn
dW1lbnQgMiBvZiDigJhuZXN0ZWRodm1faGFwX25lc3RlZF9wYWdlX2ZhdWx04oCZIGZyb20gaW5j
b21wYXRpYmxlIHBvaW50ZXIgdHlwZSAvbG9jYWwvc2NyYXRjaC9pYW5jL2RldmVsL3hlbi11bnN0
YWJsZS5oZy94ZW4vaW5jbHVkZS9hc20vaHZtL25lc3RlZGh2bS5oOjU1OiBub3RlOiBleHBlY3Rl
ZCDigJhwYWRkcl90ICrigJkgYnV0IGFyZ3VtZW50IGlzIG9mIHR5cGUg4oCYbG9uZyB1bnNpZ25l
ZCBpbnQgKuKAmQo+IAo+IGh2bV9oYXBfbmVzdGVkX3BhZ2VfZmF1bHQgdGFrZXMgYW4gdW5zaWdu
ZWQgbG9uZyBncGEgYW5kIHBhc3NlcyAmZ3BhCj4gdG8gbmVzdGVkaHZtX2hhcF9uZXN0ZWRfcGFn
ZV9mYXVsdCB3aGljaCB0YWtlcyBhIHBhZGRyX3QgKi4gU2luY2UgYm90aAo+IG9mIHRoZSBjYWxs
ZXJzIG9mIGh2bV9oYXBfbmVzdGVkX3BhZ2VfZmF1bHQgKHN2bV9kb19uZXN0ZWRfcGdmYXVsdCBh
bmQKPiBlcHRfaGFuZGxlX3Zpb2xhdGlvbikgYWN0dWFsbHkgaGF2ZSB0aGUgZ3BhIHdoaWNoIHRo
ZXkgcGFzcyB0bwo+IGh2bV9oYXBfbmVzdGVkX3BhZ2VfZmF1bHQgYXMgYSBwYWRkcl90IEkgdGhp
bmsgaXQgbWFrZXMgc2Vuc2UgdG8KPiBjaGFuZ2UgdGhlIGFyZ3VtZW50IHRvIGh2bV9oYXBfbmVz
dGVkX3BhZ2VfZmF1bHQuCj4gCj4gVGhlIG90aGVyIHVzZXIgb2YgZ3BhIGluIGh2bV9oYXBfbmVz
dGVkX3BhZ2VfZmF1bHQgaXMgYSBjYWxsIHRvCj4gcDJtX21lbV9hY2Nlc3NfY2hlY2ssIHdoaWNo
IGN1cnJlbnRseSBhbHNvIHRha2VzIGEgcGFkZHJfdCBncGEgYnV0IEkKPiB0aGluayBhIHBhZGRy
X3QgaXMgYXBwcm9wcmlhdGUgdGhlcmUgdG9vLgo+IAo+IFNpZ25lZC1vZmYtYnk6IElhbiBDYW1w
YmVsbCA8aWFuLmNhbXBiZWxsQGNpdHJpeC5jb20+Cj4gCj4gZGlmZiAtciA5NWE0YWI2MzJhYzIg
LXIgMjNmZGNhM2FkYjMzIHhlbi9hcmNoL3g4Ni9odm0vaHZtLmMKPiAtLS0gYS94ZW4vYXJjaC94
ODYvaHZtL2h2bS5jCUZyaSBBdWcgMDMgMDg6NDM6MTAgMjAxMiArMDEwMAo+ICsrKyBiL3hlbi9h
cmNoL3g4Ni9odm0vaHZtLmMJRnJpIEF1ZyAwMyAwODo0NzoyNSAyMDEyICswMTAwCj4gQEAgLTEy
NDIsNyArMTI0Miw3IEBAIHZvaWQgaHZtX2luamVjdF9wYWdlX2ZhdWx0KGludCBlcnJjb2RlLCAK
PiAgICAgIGh2bV9pbmplY3RfdHJhcCgmdHJhcCk7Cj4gIH0KPiAgCj4gLWludCBodm1faGFwX25l
c3RlZF9wYWdlX2ZhdWx0KHVuc2lnbmVkIGxvbmcgZ3BhLAo+ICtpbnQgaHZtX2hhcF9uZXN0ZWRf
cGFnZV9mYXVsdChwYWRkcl90IGdwYSwKPiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
Ym9vbF90IGdsYV92YWxpZCwKPiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgdW5zaWdu
ZWQgbG9uZyBnbGEsCj4gICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGJvb2xfdCBhY2Nl
c3NfciwKPiBkaWZmIC1yIDk1YTRhYjYzMmFjMiAtciAyM2ZkY2EzYWRiMzMgeGVuL2FyY2gveDg2
L21tL3AybS5jCj4gLS0tIGEveGVuL2FyY2gveDg2L21tL3AybS5jCUZyaSBBdWcgMDMgMDg6NDM6
MTAgMjAxMiArMDEwMAo+ICsrKyBiL3hlbi9hcmNoL3g4Ni9tbS9wMm0uYwlGcmkgQXVnIDAzIDA4
OjQ3OjI1IDIwMTIgKzAxMDAKPiBAQCAtMTIzMyw3ICsxMjMzLDcgQEAgdm9pZCBwMm1fbWVtX3Bh
Z2luZ19yZXN1bWUoc3RydWN0IGRvbWFpbgo+ICAgICAgfQo+ICB9Cj4gIAo+IC1ib29sX3QgcDJt
X21lbV9hY2Nlc3NfY2hlY2sodW5zaWduZWQgbG9uZyBncGEsIGJvb2xfdCBnbGFfdmFsaWQsIHVu
c2lnbmVkIGxvbmcgZ2xhLCAKPiArYm9vbF90IHAybV9tZW1fYWNjZXNzX2NoZWNrKHBhZGRyX3Qg
Z3BhLCBib29sX3QgZ2xhX3ZhbGlkLCB1bnNpZ25lZCBsb25nIGdsYSwgCj4gICAgICAgICAgICAg
ICAgICAgICAgICAgICAgYm9vbF90IGFjY2Vzc19yLCBib29sX3QgYWNjZXNzX3csIGJvb2xfdCBh
Y2Nlc3NfeCwKPiAgICAgICAgICAgICAgICAgICAgICAgICAgICBtZW1fZXZlbnRfcmVxdWVzdF90
ICoqcmVxX3B0cikKPiAgewo+IGRpZmYgLXIgOTVhNGFiNjMyYWMyIC1yIDIzZmRjYTNhZGIzMyB4
ZW4vaW5jbHVkZS9hc20teDg2L2h2bS9odm0uaAo+IC0tLSBhL3hlbi9pbmNsdWRlL2FzbS14ODYv
aHZtL2h2bS5oCUZyaSBBdWcgMDMgMDg6NDM6MTAgMjAxMiArMDEwMAo+ICsrKyBiL3hlbi9pbmNs
dWRlL2FzbS14ODYvaHZtL2h2bS5oCUZyaSBBdWcgMDMgMDg6NDc6MjUgMjAxMiArMDEwMAo+IEBA
IC00MzMsNyArNDMzLDcgQEAgc3RhdGljIGlubGluZSB2b2lkIGh2bV9zZXRfaW5mb19ndWVzdChz
dAo+ICAKPiAgaW50IGh2bV9kZWJ1Z19vcChzdHJ1Y3QgdmNwdSAqdiwgaW50MzJfdCBvcCk7Cj4g
IAo+IC1pbnQgaHZtX2hhcF9uZXN0ZWRfcGFnZV9mYXVsdCh1bnNpZ25lZCBsb25nIGdwYSwKPiAr
aW50IGh2bV9oYXBfbmVzdGVkX3BhZ2VfZmF1bHQocGFkZHJfdCBncGEsCj4gICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgIGJvb2xfdCBnbGFfdmFsaWQsIHVuc2lnbmVkIGxvbmcgZ2xhLAo+
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBib29sX3QgYWNjZXNzX3IsCj4gICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgIGJvb2xfdCBhY2Nlc3NfdywKPiBkaWZmIC1yIDk1YTRh
YjYzMmFjMiAtciAyM2ZkY2EzYWRiMzMgeGVuL2luY2x1ZGUvYXNtLXg4Ni9wMm0uaAo+IC0tLSBh
L3hlbi9pbmNsdWRlL2FzbS14ODYvcDJtLmgJRnJpIEF1ZyAwMyAwODo0MzoxMCAyMDEyICswMTAw
Cj4gKysrIGIveGVuL2luY2x1ZGUvYXNtLXg4Ni9wMm0uaAlGcmkgQXVnIDAzIDA4OjQ3OjI1IDIw
MTIgKzAxMDAKPiBAQCAtNTg5LDcgKzU4OSw3IEBAIHN0YXRpYyBpbmxpbmUgdm9pZCBwMm1fbWVt
X3BhZ2luZ19wb3B1bGEKPiAgICogYmVlbiBwcm9tb3RlZCB3aXRoIG5vIHVuZGVybHlpbmcgdmNw
dSBwYXVzZS4gSWYgdGhlIHJlcV9wdHIgaGFzIGJlZW4gcG9wdWxhdGVkLCAKPiAgICogdGhlbiB0
aGUgY2FsbGVyIG11c3QgcHV0IHRoZSBldmVudCBpbiB0aGUgcmluZyAob25jZSBoYXZpbmcgcmVs
ZWFzZWQgZ2V0X2dmbioKPiAgICogbG9ja3MgLS0gY2FsbGVyIG11c3QgYWxzbyB4ZnJlZSB0aGUg
cmVxdWVzdC4gKi8KPiAtYm9vbF90IHAybV9tZW1fYWNjZXNzX2NoZWNrKHVuc2lnbmVkIGxvbmcg
Z3BhLCBib29sX3QgZ2xhX3ZhbGlkLCB1bnNpZ25lZCBsb25nIGdsYSwgCj4gK2Jvb2xfdCBwMm1f
bWVtX2FjY2Vzc19jaGVjayhwYWRkcl90IGdwYSwgYm9vbF90IGdsYV92YWxpZCwgdW5zaWduZWQg
bG9uZyBnbGEsIAo+ICAgICAgICAgICAgICAgICAgICAgICAgICAgIGJvb2xfdCBhY2Nlc3Nfciwg
Ym9vbF90IGFjY2Vzc193LCBib29sX3QgYWNjZXNzX3gsCj4gICAgICAgICAgICAgICAgICAgICAg
ICAgICAgbWVtX2V2ZW50X3JlcXVlc3RfdCAqKnJlcV9wdHIpOwo+ICAvKiBSZXN1bWVzIHRoZSBy
dW5uaW5nIG9mIHRoZSBWQ1BVLCByZXN0YXJ0aW5nIHRoZSBsYXN0IGluc3RydWN0aW9uICovCj4g
QEAgLTYwNiw3ICs2MDYsNyBAQCBpbnQgcDJtX2dldF9tZW1fYWNjZXNzKHN0cnVjdCBkb21haW4g
KmQsCj4gICAgICAgICAgICAgICAgICAgICAgICAgaHZtbWVtX2FjY2Vzc190ICphY2Nlc3MpOwo+
ICAKPiAgI2Vsc2UKPiAtc3RhdGljIGlubGluZSBib29sX3QgcDJtX21lbV9hY2Nlc3NfY2hlY2so
dW5zaWduZWQgbG9uZyBncGEsIGJvb2xfdCBnbGFfdmFsaWQsIAo+ICtzdGF0aWMgaW5saW5lIGJv
b2xfdCBwMm1fbWVtX2FjY2Vzc19jaGVjayhwYWRkcl90IGdwYSwgYm9vbF90IGdsYV92YWxpZCwg
Cj4gICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICB1bnNpZ25lZCBsb25n
IGdsYSwgYm9vbF90IGFjY2Vzc19yLCAKPiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgIGJvb2xfdCBhY2Nlc3NfdywgYm9vbF90IGFjY2Vzc194LAo+ICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgbWVtX2V2ZW50X3JlcXVlc3RfdCAqKnJlcV9w
dHIpCj4gCj4gCj4gCj4gX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fX18KPiBYZW4tZGV2ZWwgbWFpbGluZyBsaXN0Cj4gWGVuLWRldmVsQGxpc3RzLnhlbi5vcmcK
PiBodHRwOi8vbGlzdHMueGVuLm9yZy94ZW4tZGV2ZWwKCgoKX19fX19fX19fX19fX19fX19fX19f
X19fX19fX19fX19fX19fX19fX19fX19fX18KWGVuLWRldmVsIG1haWxpbmcgbGlzdApYZW4tZGV2
ZWxAbGlzdHMueGVuLm9yZwpodHRwOi8vbGlzdHMueGVuLm9yZy94ZW4tZGV2ZWwK

From xen-devel-bounces@lists.xen.org Fri Aug 03 07:58:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 07:58:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxCmE-0007an-PP; Fri, 03 Aug 2012 07:58:30 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SxCmC-0007ai-Px
	for xen-devel@lists.xensource.com; Fri, 03 Aug 2012 07:58:29 +0000
Received: from [85.158.139.83:38274] by server-2.bemta-5.messagelabs.com id
	A6/63-04598-3A48B105; Fri, 03 Aug 2012 07:58:27 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-15.tower-182.messagelabs.com!1343980701!29977317!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18838 invoked from network); 3 Aug 2012 07:58:21 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-15.tower-182.messagelabs.com with SMTP;
	3 Aug 2012 07:58:21 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 03 Aug 2012 08:58:19 +0100
Message-Id: <501BA0E6020000780009263B@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 03 Aug 2012 08:59:02 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>
References: <osstest-13539-mainreport@xen.org>
	<1343970755.24794.0.camel@dagon.hellion.org.uk>
	<1343980112.21372.4.camel@zakaz.uk.xensource.com>
In-Reply-To: <1343980112.21372.4.camel@zakaz.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "Tim \(Xen.org\)" <tim@xen.org>, Christoph Egger <Christoph.Egger@amd.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: Re: [Xen-devel] [xen-unstable test] 13539: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Pj4+IE9uIDAzLjA4LjEyIGF0IDA5OjQ4LCBJYW4gQ2FtcGJlbGwgPElhbi5DYW1wYmVsbEBjaXRy
aXguY29tPiB3cm90ZToKPiBPbiBGcmksIDIwMTItMDgtMDMgYXQgMDY6MTIgKzAxMDAsIElhbiBD
YW1wYmVsbCB3cm90ZToKPj4gT24gRnJpLCAyMDEyLTA4LTAzIGF0IDAwOjQxICswMTAwLCB4ZW4u
b3JnIHdyb3RlOgo+PiA+IGZsaWdodCAxMzUzOSB4ZW4tdW5zdGFibGUgcmVhbCBbcmVhbF0KPj4g
PiBodHRwOi8vd3d3LmNoaWFyay5ncmVlbmVuZC5vcmcudWsvfnhlbnNyY3RzL2xvZ3MvMTM1Mzkv
IAo+PiA+IAo+PiA+IFJlZ3Jlc3Npb25zIDotKAo+PiA+IAo+PiA+IFRlc3RzIHdoaWNoIGRpZCBu
b3Qgc3VjY2VlZCBhbmQgYXJlIGJsb2NraW5nLAo+PiA+IGluY2x1ZGluZyB0ZXN0cyB3aGljaCBj
b3VsZCBub3QgYmUgcnVuOgo+PiA+ICBidWlsZC1pMzg2LW9sZGtlcm4gICAgICAgICAgICA0IHhl
bi1idWlsZCAgICAgICAgICAgICAgICAgZmFpbCBSRUdSLiB2cy4gCj4gMTM1MzYKPj4gPiAgYnVp
bGQtaTM4NiAgICAgICAgICAgICAgICAgICAgNCB4ZW4tYnVpbGQgICAgICAgICAgICAgICAgIGZh
aWwgUkVHUi4gdnMuIAo+IDEzNTM2Cj4gCj4gODwtLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tCj4gCj4gIyBIRyBjaGFuZ2VzZXQgcGF0Y2gKPiAjIFVzZXIgSWFuIENhbXBiZWxsIDxp
YW4uY2FtcGJlbGxAY2l0cml4LmNvbT4KPiAjIERhdGUgMTM0Mzk4MDA0NSAtMzYwMAo+ICMgTm9k
ZSBJRCAyM2ZkY2EzYWRiMzM0NjA5MGVhOGI2NWI3N2NhZDdkMjc5Y2Y5ZGFmCj4gIyBQYXJlbnQg
IDk1YTRhYjYzMmFjMjVjZTBlYzZhMjQ1ZGNjNDZhZDU3ZDNjNzAzMGYKPiBuZXN0ZWRodm06IGZp
eCBuZXN0ZWQgcGFnZSBmYXVsdCBidWlsZCBlcnJvciBvbiAzMi1iaXQKPiAKPiAgICAgY2MxOiB3
YXJuaW5ncyBiZWluZyB0cmVhdGVkIGFzIGVycm9ycwo+ICAgICBodm0uYzogSW4gZnVuY3Rpb24g
4oCYaHZtX2hhcF9uZXN0ZWRfcGFnZV9mYXVsdOKAmToKPiAgICAgaHZtLmM6MTI4MjogZXJyb3I6
IHBhc3NpbmcgYXJndW1lbnQgMiBvZiAKPiDigJhuZXN0ZWRodm1faGFwX25lc3RlZF9wYWdlX2Zh
dWx04oCZIGZyb20gaW5jb21wYXRpYmxlIHBvaW50ZXIgdHlwZSAKPiAvbG9jYWwvc2NyYXRjaC9p
YW5jL2RldmVsL3hlbi11bnN0YWJsZS5oZy94ZW4vaW5jbHVkZS9hc20vaHZtL25lc3RlZGh2bS5o
OjU1OiAKPiBub3RlOiBleHBlY3RlZCDigJhwYWRkcl90ICrigJkgYnV0IGFyZ3VtZW50IGlzIG9m
IHR5cGUg4oCYbG9uZyB1bnNpZ25lZCBpbnQgKuKAmQo+IAo+IGh2bV9oYXBfbmVzdGVkX3BhZ2Vf
ZmF1bHQgdGFrZXMgYW4gdW5zaWduZWQgbG9uZyBncGEgYW5kIHBhc3NlcyAmZ3BhCj4gdG8gbmVz
dGVkaHZtX2hhcF9uZXN0ZWRfcGFnZV9mYXVsdCB3aGljaCB0YWtlcyBhIHBhZGRyX3QgKi4gU2lu
Y2UgYm90aAo+IG9mIHRoZSBjYWxsZXJzIG9mIGh2bV9oYXBfbmVzdGVkX3BhZ2VfZmF1bHQgKHN2
bV9kb19uZXN0ZWRfcGdmYXVsdCBhbmQKPiBlcHRfaGFuZGxlX3Zpb2xhdGlvbikgYWN0dWFsbHkg
aGF2ZSB0aGUgZ3BhIHdoaWNoIHRoZXkgcGFzcyB0bwo+IGh2bV9oYXBfbmVzdGVkX3BhZ2VfZmF1
bHQgYXMgYSBwYWRkcl90IEkgdGhpbmsgaXQgbWFrZXMgc2Vuc2UgdG8KPiBjaGFuZ2UgdGhlIGFy
Z3VtZW50IHRvIGh2bV9oYXBfbmVzdGVkX3BhZ2VfZmF1bHQuCgpBbmQgdGhhdCdzIGV2ZW4gb3V0
c2lkZSBvZiB0aGUgY3VycmVudCBidWlsZCBmYWlsdXJlIC0gaXQganVzdApjYW4ndCBoYXZlIHdv
cmtlZCBmb3IgPjRHYiBndWVzdHMgb24gdGhlIDMyLWJpdCBoeXBlcnZpc29yLgoKPiBUaGUgb3Ro
ZXIgdXNlciBvZiBncGEgaW4gaHZtX2hhcF9uZXN0ZWRfcGFnZV9mYXVsdCBpcyBhIGNhbGwgdG8K
PiBwMm1fbWVtX2FjY2Vzc19jaGVjaywgd2hpY2ggY3VycmVudGx5IGFsc28gdGFrZXMgYSBwYWRk
cl90IGdwYSBidXQgSQo+IHRoaW5rIGEgcGFkZHJfdCBpcyBhcHByb3ByaWF0ZSB0aGVyZSB0b28u
Cj4gCj4gU2lnbmVkLW9mZi1ieTogSWFuIENhbXBiZWxsIDxpYW4uY2FtcGJlbGxAY2l0cml4LmNv
bT4KCkFja2VkLWJ5OiBKYW4gQmV1bGljaCA8amJldWxpY2hAc3VzZS5jb20+Cgo+IGRpZmYgLXIg
OTVhNGFiNjMyYWMyIC1yIDIzZmRjYTNhZGIzMyB4ZW4vYXJjaC94ODYvaHZtL2h2bS5jCj4gLS0t
IGEveGVuL2FyY2gveDg2L2h2bS9odm0uYwlGcmkgQXVnIDAzIDA4OjQzOjEwIDIwMTIgKzAxMDAK
PiArKysgYi94ZW4vYXJjaC94ODYvaHZtL2h2bS5jCUZyaSBBdWcgMDMgMDg6NDc6MjUgMjAxMiAr
MDEwMAo+IEBAIC0xMjQyLDcgKzEyNDIsNyBAQCB2b2lkIGh2bV9pbmplY3RfcGFnZV9mYXVsdChp
bnQgZXJyY29kZSwgCj4gICAgICBodm1faW5qZWN0X3RyYXAoJnRyYXApOwo+ICB9Cj4gIAo+IC1p
bnQgaHZtX2hhcF9uZXN0ZWRfcGFnZV9mYXVsdCh1bnNpZ25lZCBsb25nIGdwYSwKPiAraW50IGh2
bV9oYXBfbmVzdGVkX3BhZ2VfZmF1bHQocGFkZHJfdCBncGEsCj4gICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgIGJvb2xfdCBnbGFfdmFsaWQsCj4gICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgIHVuc2lnbmVkIGxvbmcgZ2xhLAo+ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICBib29sX3QgYWNjZXNzX3IsCj4gZGlmZiAtciA5NWE0YWI2MzJhYzIgLXIgMjNmZGNhM2FkYjMz
IHhlbi9hcmNoL3g4Ni9tbS9wMm0uYwo+IC0tLSBhL3hlbi9hcmNoL3g4Ni9tbS9wMm0uYwlGcmkg
QXVnIDAzIDA4OjQzOjEwIDIwMTIgKzAxMDAKPiArKysgYi94ZW4vYXJjaC94ODYvbW0vcDJtLmMJ
RnJpIEF1ZyAwMyAwODo0NzoyNSAyMDEyICswMTAwCj4gQEAgLTEyMzMsNyArMTIzMyw3IEBAIHZv
aWQgcDJtX21lbV9wYWdpbmdfcmVzdW1lKHN0cnVjdCBkb21haW4KPiAgICAgIH0KPiAgfQo+ICAK
PiAtYm9vbF90IHAybV9tZW1fYWNjZXNzX2NoZWNrKHVuc2lnbmVkIGxvbmcgZ3BhLCBib29sX3Qg
Z2xhX3ZhbGlkLCB1bnNpZ25lZCAKPiBsb25nIGdsYSwgCj4gK2Jvb2xfdCBwMm1fbWVtX2FjY2Vz
c19jaGVjayhwYWRkcl90IGdwYSwgYm9vbF90IGdsYV92YWxpZCwgdW5zaWduZWQgbG9uZyAKPiBn
bGEsIAo+ICAgICAgICAgICAgICAgICAgICAgICAgICAgIGJvb2xfdCBhY2Nlc3NfciwgYm9vbF90
IGFjY2Vzc193LCBib29sX3QgCj4gYWNjZXNzX3gsCj4gICAgICAgICAgICAgICAgICAgICAgICAg
ICAgbWVtX2V2ZW50X3JlcXVlc3RfdCAqKnJlcV9wdHIpCj4gIHsKPiBkaWZmIC1yIDk1YTRhYjYz
MmFjMiAtciAyM2ZkY2EzYWRiMzMgeGVuL2luY2x1ZGUvYXNtLXg4Ni9odm0vaHZtLmgKPiAtLS0g
YS94ZW4vaW5jbHVkZS9hc20teDg2L2h2bS9odm0uaAlGcmkgQXVnIDAzIDA4OjQzOjEwIDIwMTIg
KzAxMDAKPiArKysgYi94ZW4vaW5jbHVkZS9hc20teDg2L2h2bS9odm0uaAlGcmkgQXVnIDAzIDA4
OjQ3OjI1IDIwMTIgKzAxMDAKPiBAQCAtNDMzLDcgKzQzMyw3IEBAIHN0YXRpYyBpbmxpbmUgdm9p
ZCBodm1fc2V0X2luZm9fZ3Vlc3Qoc3QKPiAgCj4gIGludCBodm1fZGVidWdfb3Aoc3RydWN0IHZj
cHUgKnYsIGludDMyX3Qgb3ApOwo+ICAKPiAtaW50IGh2bV9oYXBfbmVzdGVkX3BhZ2VfZmF1bHQo
dW5zaWduZWQgbG9uZyBncGEsCj4gK2ludCBodm1faGFwX25lc3RlZF9wYWdlX2ZhdWx0KHBhZGRy
X3QgZ3BhLAo+ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBib29sX3QgZ2xhX3ZhbGlk
LCB1bnNpZ25lZCBsb25nIGdsYSwKPiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgYm9v
bF90IGFjY2Vzc19yLAo+ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBib29sX3QgYWNj
ZXNzX3csCj4gZGlmZiAtciA5NWE0YWI2MzJhYzIgLXIgMjNmZGNhM2FkYjMzIHhlbi9pbmNsdWRl
L2FzbS14ODYvcDJtLmgKPiAtLS0gYS94ZW4vaW5jbHVkZS9hc20teDg2L3AybS5oCUZyaSBBdWcg
MDMgMDg6NDM6MTAgMjAxMiArMDEwMAo+ICsrKyBiL3hlbi9pbmNsdWRlL2FzbS14ODYvcDJtLmgJ
RnJpIEF1ZyAwMyAwODo0NzoyNSAyMDEyICswMTAwCj4gQEAgLTU4OSw3ICs1ODksNyBAQCBzdGF0
aWMgaW5saW5lIHZvaWQgcDJtX21lbV9wYWdpbmdfcG9wdWxhCj4gICAqIGJlZW4gcHJvbW90ZWQg
d2l0aCBubyB1bmRlcmx5aW5nIHZjcHUgcGF1c2UuIElmIHRoZSByZXFfcHRyIGhhcyBiZWVuIAo+
IHBvcHVsYXRlZCwgCj4gICAqIHRoZW4gdGhlIGNhbGxlciBtdXN0IHB1dCB0aGUgZXZlbnQgaW4g
dGhlIHJpbmcgKG9uY2UgaGF2aW5nIHJlbGVhc2VkIAo+IGdldF9nZm4qCj4gICAqIGxvY2tzIC0t
IGNhbGxlciBtdXN0IGFsc28geGZyZWUgdGhlIHJlcXVlc3QuICovCj4gLWJvb2xfdCBwMm1fbWVt
X2FjY2Vzc19jaGVjayh1bnNpZ25lZCBsb25nIGdwYSwgYm9vbF90IGdsYV92YWxpZCwgdW5zaWdu
ZWQgCj4gbG9uZyBnbGEsIAo+ICtib29sX3QgcDJtX21lbV9hY2Nlc3NfY2hlY2socGFkZHJfdCBn
cGEsIGJvb2xfdCBnbGFfdmFsaWQsIHVuc2lnbmVkIGxvbmcgCj4gZ2xhLCAKPiAgICAgICAgICAg
ICAgICAgICAgICAgICAgICBib29sX3QgYWNjZXNzX3IsIGJvb2xfdCBhY2Nlc3NfdywgYm9vbF90
IAo+IGFjY2Vzc194LAo+ICAgICAgICAgICAgICAgICAgICAgICAgICAgIG1lbV9ldmVudF9yZXF1
ZXN0X3QgKipyZXFfcHRyKTsKPiAgLyogUmVzdW1lcyB0aGUgcnVubmluZyBvZiB0aGUgVkNQVSwg
cmVzdGFydGluZyB0aGUgbGFzdCBpbnN0cnVjdGlvbiAqLwo+IEBAIC02MDYsNyArNjA2LDcgQEAg
aW50IHAybV9nZXRfbWVtX2FjY2VzcyhzdHJ1Y3QgZG9tYWluICpkLAo+ICAgICAgICAgICAgICAg
ICAgICAgICAgIGh2bW1lbV9hY2Nlc3NfdCAqYWNjZXNzKTsKPiAgCj4gICNlbHNlCj4gLXN0YXRp
YyBpbmxpbmUgYm9vbF90IHAybV9tZW1fYWNjZXNzX2NoZWNrKHVuc2lnbmVkIGxvbmcgZ3BhLCBi
b29sX3QgCj4gZ2xhX3ZhbGlkLCAKPiArc3RhdGljIGlubGluZSBib29sX3QgcDJtX21lbV9hY2Nl
c3NfY2hlY2socGFkZHJfdCBncGEsIGJvb2xfdCBnbGFfdmFsaWQsIAo+ICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgdW5zaWduZWQgbG9uZyBnbGEsIGJvb2xfdCBhY2Nl
c3NfciwgCj4gCj4gICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBib29s
X3QgYWNjZXNzX3csIGJvb2xfdCBhY2Nlc3NfeCwKPiAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgIG1lbV9ldmVudF9yZXF1ZXN0X3QgKipyZXFfcHRyKQo+IAo+IAo+IAo+
IF9fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fCj4gWGVuLWRl
dmVsIG1haWxpbmcgbGlzdAo+IFhlbi1kZXZlbEBsaXN0cy54ZW4ub3JnIAo+IGh0dHA6Ly9saXN0
cy54ZW4ub3JnL3hlbi1kZXZlbCAKCgoKX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fX19fX19fX19fX18KWGVuLWRldmVsIG1haWxpbmcgbGlzdApYZW4tZGV2ZWxAbGlzdHMueGVu
Lm9yZwpodHRwOi8vbGlzdHMueGVuLm9yZy94ZW4tZGV2ZWwK

From xen-devel-bounces@lists.xen.org Fri Aug 03 08:00:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 08:00:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxCo9-00089X-1Z; Fri, 03 Aug 2012 08:00:29 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SxCo8-00089M-0G
	for xen-devel@lists.xensource.com; Fri, 03 Aug 2012 08:00:28 +0000
Received: from [85.158.143.99:56502] by server-3.bemta-4.messagelabs.com id
	61/14-01511-B158B105; Fri, 03 Aug 2012 08:00:27 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-216.messagelabs.com!1343980819!24750788!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc4OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23142 invoked from network); 3 Aug 2012 08:00:26 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-2.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 08:00:26 -0000
X-IronPort-AV: E=Sophos;i="4.77,705,1336348800"; d="scan'208";a="13834731"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Aug 2012 08:00:18 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Fri, 3 Aug 2012
	09:00:18 +0100
Message-ID: <1343980817.21372.8.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Fri, 3 Aug 2012 09:00:17 +0100
In-Reply-To: <501BA0E6020000780009263B@nat28.tlf.novell.com>
References: <osstest-13539-mainreport@xen.org>
	<1343970755.24794.0.camel@dagon.hellion.org.uk>
	<1343980112.21372.4.camel@zakaz.uk.xensource.com>
	<501BA0E6020000780009263B@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "Tim \(Xen.org\)" <tim@xen.org>, Christoph Egger <Christoph.Egger@amd.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: Re: [Xen-devel] [xen-unstable test] 13539: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

T24gRnJpLCAyMDEyLTA4LTAzIGF0IDA4OjU5ICswMTAwLCBKYW4gQmV1bGljaCB3cm90ZToKPiA+
Pj4gT24gMDMuMDguMTIgYXQgMDk6NDgsIElhbiBDYW1wYmVsbCA8SWFuLkNhbXBiZWxsQGNpdHJp
eC5jb20+IHdyb3RlOgo+ID4gT24gRnJpLCAyMDEyLTA4LTAzIGF0IDA2OjEyICswMTAwLCBJYW4g
Q2FtcGJlbGwgd3JvdGU6Cj4gPj4gT24gRnJpLCAyMDEyLTA4LTAzIGF0IDAwOjQxICswMTAwLCB4
ZW4ub3JnIHdyb3RlOgo+ID4+ID4gZmxpZ2h0IDEzNTM5IHhlbi11bnN0YWJsZSByZWFsIFtyZWFs
XQo+ID4+ID4gaHR0cDovL3d3dy5jaGlhcmsuZ3JlZW5lbmQub3JnLnVrL354ZW5zcmN0cy9sb2dz
LzEzNTM5LyAKPiA+PiA+IAo+ID4+ID4gUmVncmVzc2lvbnMgOi0oCj4gPj4gPiAKPiA+PiA+IFRl
c3RzIHdoaWNoIGRpZCBub3Qgc3VjY2VlZCBhbmQgYXJlIGJsb2NraW5nLAo+ID4+ID4gaW5jbHVk
aW5nIHRlc3RzIHdoaWNoIGNvdWxkIG5vdCBiZSBydW46Cj4gPj4gPiAgYnVpbGQtaTM4Ni1vbGRr
ZXJuICAgICAgICAgICAgNCB4ZW4tYnVpbGQgICAgICAgICAgICAgICAgIGZhaWwgUkVHUi4gdnMu
IAo+ID4gMTM1MzYKPiA+PiA+ICBidWlsZC1pMzg2ICAgICAgICAgICAgICAgICAgICA0IHhlbi1i
dWlsZCAgICAgICAgICAgICAgICAgZmFpbCBSRUdSLiB2cy4gCj4gPiAxMzUzNgo+ID4gCj4gPiA4
PC0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0KPiA+IAo+ID4gIyBIRyBjaGFuZ2Vz
ZXQgcGF0Y2gKPiA+ICMgVXNlciBJYW4gQ2FtcGJlbGwgPGlhbi5jYW1wYmVsbEBjaXRyaXguY29t
Pgo+ID4gIyBEYXRlIDEzNDM5ODAwNDUgLTM2MDAKPiA+ICMgTm9kZSBJRCAyM2ZkY2EzYWRiMzM0
NjA5MGVhOGI2NWI3N2NhZDdkMjc5Y2Y5ZGFmCj4gPiAjIFBhcmVudCAgOTVhNGFiNjMyYWMyNWNl
MGVjNmEyNDVkY2M0NmFkNTdkM2M3MDMwZgo+ID4gbmVzdGVkaHZtOiBmaXggbmVzdGVkIHBhZ2Ug
ZmF1bHQgYnVpbGQgZXJyb3Igb24gMzItYml0Cj4gPiAKPiA+ICAgICBjYzE6IHdhcm5pbmdzIGJl
aW5nIHRyZWF0ZWQgYXMgZXJyb3JzCj4gPiAgICAgaHZtLmM6IEluIGZ1bmN0aW9uIOKAmGh2bV9o
YXBfbmVzdGVkX3BhZ2VfZmF1bHTigJk6Cj4gPiAgICAgaHZtLmM6MTI4MjogZXJyb3I6IHBhc3Np
bmcgYXJndW1lbnQgMiBvZiAKPiA+IOKAmG5lc3RlZGh2bV9oYXBfbmVzdGVkX3BhZ2VfZmF1bHTi
gJkgZnJvbSBpbmNvbXBhdGlibGUgcG9pbnRlciB0eXBlIAo+ID4gL2xvY2FsL3NjcmF0Y2gvaWFu
Yy9kZXZlbC94ZW4tdW5zdGFibGUuaGcveGVuL2luY2x1ZGUvYXNtL2h2bS9uZXN0ZWRodm0uaDo1
NTogCj4gPiBub3RlOiBleHBlY3RlZCDigJhwYWRkcl90ICrigJkgYnV0IGFyZ3VtZW50IGlzIG9m
IHR5cGUg4oCYbG9uZyB1bnNpZ25lZCBpbnQgKuKAmQo+ID4gCj4gPiBodm1faGFwX25lc3RlZF9w
YWdlX2ZhdWx0IHRha2VzIGFuIHVuc2lnbmVkIGxvbmcgZ3BhIGFuZCBwYXNzZXMgJmdwYQo+ID4g
dG8gbmVzdGVkaHZtX2hhcF9uZXN0ZWRfcGFnZV9mYXVsdCB3aGljaCB0YWtlcyBhIHBhZGRyX3Qg
Ki4gU2luY2UgYm90aAo+ID4gb2YgdGhlIGNhbGxlcnMgb2YgaHZtX2hhcF9uZXN0ZWRfcGFnZV9m
YXVsdCAoc3ZtX2RvX25lc3RlZF9wZ2ZhdWx0IGFuZAo+ID4gZXB0X2hhbmRsZV92aW9sYXRpb24p
IGFjdHVhbGx5IGhhdmUgdGhlIGdwYSB3aGljaCB0aGV5IHBhc3MgdG8KPiA+IGh2bV9oYXBfbmVz
dGVkX3BhZ2VfZmF1bHQgYXMgYSBwYWRkcl90IEkgdGhpbmsgaXQgbWFrZXMgc2Vuc2UgdG8KPiA+
IGNoYW5nZSB0aGUgYXJndW1lbnQgdG8gaHZtX2hhcF9uZXN0ZWRfcGFnZV9mYXVsdC4KPiAKPiBB
bmQgdGhhdCdzIGV2ZW4gb3V0c2lkZSBvZiB0aGUgY3VycmVudCBidWlsZCBmYWlsdXJlIC0gaXQg
anVzdAo+IGNhbid0IGhhdmUgd29ya2VkIGZvciA+NEdiIGd1ZXN0cyBvbiB0aGUgMzItYml0IGh5
cGVydmlzb3IuCgpSaWdodC4gSSBtdXN0IGFkbWl0IEkgd2FzIHN1cnByaXNlZCB0byBmaW5kIHRo
YXQgbmVzdGVkaHZtIHdhcyBhIGZlYXR1cmUKb2YgMzIgYml0IGF0IGFsbCwgSSBoYWQgZXhwZWN0
IHRoZSBmaXggdG8gYmUgY2hhbmdpbmcgc29tZSBzb3J0IG9mIHN0dWIKZnVuY3Rpb24uLi4KCj4g
Cj4gPiBUaGUgb3RoZXIgdXNlciBvZiBncGEgaW4gaHZtX2hhcF9uZXN0ZWRfcGFnZV9mYXVsdCBp
cyBhIGNhbGwgdG8KPiA+IHAybV9tZW1fYWNjZXNzX2NoZWNrLCB3aGljaCBjdXJyZW50bHkgYWxz
byB0YWtlcyBhIHBhZGRyX3QgZ3BhIGJ1dCBJCj4gPiB0aGluayBhIHBhZGRyX3QgaXMgYXBwcm9w
cmlhdGUgdGhlcmUgdG9vLgo+ID4gCj4gPiBTaWduZWQtb2ZmLWJ5OiBJYW4gQ2FtcGJlbGwgPGlh
bi5jYW1wYmVsbEBjaXRyaXguY29tPgo+IAo+IEFja2VkLWJ5OiBKYW4gQmV1bGljaCA8amJldWxp
Y2hAc3VzZS5jb20+Cj4gCj4gPiBkaWZmIC1yIDk1YTRhYjYzMmFjMiAtciAyM2ZkY2EzYWRiMzMg
eGVuL2FyY2gveDg2L2h2bS9odm0uYwo+ID4gLS0tIGEveGVuL2FyY2gveDg2L2h2bS9odm0uYwlG
cmkgQXVnIDAzIDA4OjQzOjEwIDIwMTIgKzAxMDAKPiA+ICsrKyBiL3hlbi9hcmNoL3g4Ni9odm0v
aHZtLmMJRnJpIEF1ZyAwMyAwODo0NzoyNSAyMDEyICswMTAwCj4gPiBAQCAtMTI0Miw3ICsxMjQy
LDcgQEAgdm9pZCBodm1faW5qZWN0X3BhZ2VfZmF1bHQoaW50IGVycmNvZGUsIAo+ID4gICAgICBo
dm1faW5qZWN0X3RyYXAoJnRyYXApOwo+ID4gIH0KPiA+ICAKPiA+IC1pbnQgaHZtX2hhcF9uZXN0
ZWRfcGFnZV9mYXVsdCh1bnNpZ25lZCBsb25nIGdwYSwKPiA+ICtpbnQgaHZtX2hhcF9uZXN0ZWRf
cGFnZV9mYXVsdChwYWRkcl90IGdwYSwKPiA+ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICBib29sX3QgZ2xhX3ZhbGlkLAo+ID4gICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIHVu
c2lnbmVkIGxvbmcgZ2xhLAo+ID4gICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGJvb2xf
dCBhY2Nlc3NfciwKPiA+IGRpZmYgLXIgOTVhNGFiNjMyYWMyIC1yIDIzZmRjYTNhZGIzMyB4ZW4v
YXJjaC94ODYvbW0vcDJtLmMKPiA+IC0tLSBhL3hlbi9hcmNoL3g4Ni9tbS9wMm0uYwlGcmkgQXVn
IDAzIDA4OjQzOjEwIDIwMTIgKzAxMDAKPiA+ICsrKyBiL3hlbi9hcmNoL3g4Ni9tbS9wMm0uYwlG
cmkgQXVnIDAzIDA4OjQ3OjI1IDIwMTIgKzAxMDAKPiA+IEBAIC0xMjMzLDcgKzEyMzMsNyBAQCB2
b2lkIHAybV9tZW1fcGFnaW5nX3Jlc3VtZShzdHJ1Y3QgZG9tYWluCj4gPiAgICAgIH0KPiA+ICB9
Cj4gPiAgCj4gPiAtYm9vbF90IHAybV9tZW1fYWNjZXNzX2NoZWNrKHVuc2lnbmVkIGxvbmcgZ3Bh
LCBib29sX3QgZ2xhX3ZhbGlkLCB1bnNpZ25lZCAKPiA+IGxvbmcgZ2xhLCAKPiA+ICtib29sX3Qg
cDJtX21lbV9hY2Nlc3NfY2hlY2socGFkZHJfdCBncGEsIGJvb2xfdCBnbGFfdmFsaWQsIHVuc2ln
bmVkIGxvbmcgCj4gPiBnbGEsIAo+ID4gICAgICAgICAgICAgICAgICAgICAgICAgICAgYm9vbF90
IGFjY2Vzc19yLCBib29sX3QgYWNjZXNzX3csIGJvb2xfdCAKPiA+IGFjY2Vzc194LAo+ID4gICAg
ICAgICAgICAgICAgICAgICAgICAgICAgbWVtX2V2ZW50X3JlcXVlc3RfdCAqKnJlcV9wdHIpCj4g
PiAgewo+ID4gZGlmZiAtciA5NWE0YWI2MzJhYzIgLXIgMjNmZGNhM2FkYjMzIHhlbi9pbmNsdWRl
L2FzbS14ODYvaHZtL2h2bS5oCj4gPiAtLS0gYS94ZW4vaW5jbHVkZS9hc20teDg2L2h2bS9odm0u
aAlGcmkgQXVnIDAzIDA4OjQzOjEwIDIwMTIgKzAxMDAKPiA+ICsrKyBiL3hlbi9pbmNsdWRlL2Fz
bS14ODYvaHZtL2h2bS5oCUZyaSBBdWcgMDMgMDg6NDc6MjUgMjAxMiArMDEwMAo+ID4gQEAgLTQz
Myw3ICs0MzMsNyBAQCBzdGF0aWMgaW5saW5lIHZvaWQgaHZtX3NldF9pbmZvX2d1ZXN0KHN0Cj4g
PiAgCj4gPiAgaW50IGh2bV9kZWJ1Z19vcChzdHJ1Y3QgdmNwdSAqdiwgaW50MzJfdCBvcCk7Cj4g
PiAgCj4gPiAtaW50IGh2bV9oYXBfbmVzdGVkX3BhZ2VfZmF1bHQodW5zaWduZWQgbG9uZyBncGEs
Cj4gPiAraW50IGh2bV9oYXBfbmVzdGVkX3BhZ2VfZmF1bHQocGFkZHJfdCBncGEsCj4gPiAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgYm9vbF90IGdsYV92YWxpZCwgdW5zaWduZWQgbG9u
ZyBnbGEsCj4gPiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgYm9vbF90IGFjY2Vzc19y
LAo+ID4gICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGJvb2xfdCBhY2Nlc3NfdywKPiA+
IGRpZmYgLXIgOTVhNGFiNjMyYWMyIC1yIDIzZmRjYTNhZGIzMyB4ZW4vaW5jbHVkZS9hc20teDg2
L3AybS5oCj4gPiAtLS0gYS94ZW4vaW5jbHVkZS9hc20teDg2L3AybS5oCUZyaSBBdWcgMDMgMDg6
NDM6MTAgMjAxMiArMDEwMAo+ID4gKysrIGIveGVuL2luY2x1ZGUvYXNtLXg4Ni9wMm0uaAlGcmkg
QXVnIDAzIDA4OjQ3OjI1IDIwMTIgKzAxMDAKPiA+IEBAIC01ODksNyArNTg5LDcgQEAgc3RhdGlj
IGlubGluZSB2b2lkIHAybV9tZW1fcGFnaW5nX3BvcHVsYQo+ID4gICAqIGJlZW4gcHJvbW90ZWQg
d2l0aCBubyB1bmRlcmx5aW5nIHZjcHUgcGF1c2UuIElmIHRoZSByZXFfcHRyIGhhcyBiZWVuIAo+
ID4gcG9wdWxhdGVkLCAKPiA+ICAgKiB0aGVuIHRoZSBjYWxsZXIgbXVzdCBwdXQgdGhlIGV2ZW50
IGluIHRoZSByaW5nIChvbmNlIGhhdmluZyByZWxlYXNlZCAKPiA+IGdldF9nZm4qCj4gPiAgICog
bG9ja3MgLS0gY2FsbGVyIG11c3QgYWxzbyB4ZnJlZSB0aGUgcmVxdWVzdC4gKi8KPiA+IC1ib29s
X3QgcDJtX21lbV9hY2Nlc3NfY2hlY2sodW5zaWduZWQgbG9uZyBncGEsIGJvb2xfdCBnbGFfdmFs
aWQsIHVuc2lnbmVkIAo+ID4gbG9uZyBnbGEsIAo+ID4gK2Jvb2xfdCBwMm1fbWVtX2FjY2Vzc19j
aGVjayhwYWRkcl90IGdwYSwgYm9vbF90IGdsYV92YWxpZCwgdW5zaWduZWQgbG9uZyAKPiA+IGds
YSwgCj4gPiAgICAgICAgICAgICAgICAgICAgICAgICAgICBib29sX3QgYWNjZXNzX3IsIGJvb2xf
dCBhY2Nlc3NfdywgYm9vbF90IAo+ID4gYWNjZXNzX3gsCj4gPiAgICAgICAgICAgICAgICAgICAg
ICAgICAgICBtZW1fZXZlbnRfcmVxdWVzdF90ICoqcmVxX3B0cik7Cj4gPiAgLyogUmVzdW1lcyB0
aGUgcnVubmluZyBvZiB0aGUgVkNQVSwgcmVzdGFydGluZyB0aGUgbGFzdCBpbnN0cnVjdGlvbiAq
Lwo+ID4gQEAgLTYwNiw3ICs2MDYsNyBAQCBpbnQgcDJtX2dldF9tZW1fYWNjZXNzKHN0cnVjdCBk
b21haW4gKmQsCj4gPiAgICAgICAgICAgICAgICAgICAgICAgICBodm1tZW1fYWNjZXNzX3QgKmFj
Y2Vzcyk7Cj4gPiAgCj4gPiAgI2Vsc2UKPiA+IC1zdGF0aWMgaW5saW5lIGJvb2xfdCBwMm1fbWVt
X2FjY2Vzc19jaGVjayh1bnNpZ25lZCBsb25nIGdwYSwgYm9vbF90IAo+ID4gZ2xhX3ZhbGlkLCAK
PiA+ICtzdGF0aWMgaW5saW5lIGJvb2xfdCBwMm1fbWVtX2FjY2Vzc19jaGVjayhwYWRkcl90IGdw
YSwgYm9vbF90IGdsYV92YWxpZCwgCj4gPiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgIHVuc2lnbmVkIGxvbmcgZ2xhLCBib29sX3QgYWNjZXNzX3IsIAo+ID4gCj4gPiAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGJvb2xfdCBhY2Nlc3Nfdywg
Ym9vbF90IGFjY2Vzc194LAo+ID4gICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICBtZW1fZXZlbnRfcmVxdWVzdF90ICoqcmVxX3B0cikKPiA+IAo+ID4gCj4gPiAKPiA+IF9f
X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fCj4gPiBYZW4tZGV2
ZWwgbWFpbGluZyBsaXN0Cj4gPiBYZW4tZGV2ZWxAbGlzdHMueGVuLm9yZyAKPiA+IGh0dHA6Ly9s
aXN0cy54ZW4ub3JnL3hlbi1kZXZlbCAKPiAKPiAKCgoKX19fX19fX19fX19fX19fX19fX19fX19f
X19fX19fX19fX19fX19fX19fX19fX18KWGVuLWRldmVsIG1haWxpbmcgbGlzdApYZW4tZGV2ZWxA
bGlzdHMueGVuLm9yZwpodHRwOi8vbGlzdHMueGVuLm9yZy94ZW4tZGV2ZWwK

From xen-devel-bounces@lists.xen.org Fri Aug 03 08:02:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 08:02:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxCpR-0008Go-L4; Fri, 03 Aug 2012 08:01:49 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SxCpQ-0008GV-2i
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 08:01:48 +0000
Received: from [85.158.143.99:4268] by server-3.bemta-4.messagelabs.com id
	1C/B6-01511-B658B105; Fri, 03 Aug 2012 08:01:47 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1343980903!29483049!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjE4NjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10139 invoked from network); 3 Aug 2012 08:01:45 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 08:01:45 -0000
X-IronPort-AV: E=Sophos;i="4.77,705,1336363200"; d="scan'208";a="33438614"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Aug 2012 04:01:43 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 3 Aug 2012 04:01:42 -0400
Received: from cosworth.uk.xensource.com ([10.80.16.52] ident=ianc)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1SxCpK-0003QC-3J;
	Fri, 03 Aug 2012 09:01:42 +0100
MIME-Version: 1.0
X-Mercurial-Node: bfd5e107774c3ba5020719861288b9de4e4da68b
Message-ID: <bfd5e107774c3ba50207.1343980901@cosworth.uk.xensource.com>
User-Agent: Mercurial-patchbomb/1.6.4
Date: Fri, 3 Aug 2012 09:01:41 +0100
From: Ian Campbell <ian.campbell@citrix.com>
To: xen-devel@lists.xen.org
Cc: ian.jackson@citrix.com, Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH V3] libxl: support custom block hotplug scripts
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1343980890 -3600
# Node ID bfd5e107774c3ba5020719861288b9de4e4da68b
# Parent  8e1090d822e51e484414856a43f297b34ecfeb2d
libxl: support custom block hotplug scripts

These are provided using the "script=" syntax described in
docs/misc/xl-disk-configuration.txt.
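
As a hypothetical illustration (the vdev and iSCSI target names here are
invented, not taken from the patch), an xl disk stanza using a custom
script might look like:

```
disk = [ 'vdev=xvda,backendtype=phy,script=block-iscsi,target=iqn.2012-08.example.com:lun0' ]
```

With this, libxl runs the named script (looked for in /etc/xen/scripts
unless it contains a slash) instead of treating <target> as a host path.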

The existing hotplug scripts currently conflate two different
concepts, namely that of making a datapath available in the backend
domain (logging into iSCSI LUNs and the like) and that of actually
connecting that datapath to a Xen backend path (e.g. writing
"physical-device" node in xenstore to bring up blkback).

For this reason the script support implemented here is only supported
in conjunction with backendtype=phy.

Eventually we hope to rework the hotplug scripts to separate the two
concepts, but that is not 4.2 material.

In addition there are some other subtleties:

 - Previously in the blktap case we would add "script = .../blktap" to
   the backend flex array, but then jumped to the PHY case which added
   "script = .../block" too. The block one takes precedence since it
   comes second.

   This was, accidentally, correct. The blktap script is for blktap1
   devices and not blktap2 devices. libxl completely manages the
   blktap2 side of things without resorting to hotplug scripts and
   creates a blkback device directly.  Therefore the "block" script is
   always the correct one to call. Custom scripts are not supported in
   this context.

 - libxl should not write the "physical-device" node. This is the
   responsibility of the block script. Writing the "physical-device"
   node in libxl basically completely short-cuts the standard block
   hotplug script which uses "physical-device" to know if it has run
   already or not.

   In the case of more complex scripts libxl cannot know the right
   value to write here anyway; in particular, the device may not
   exist until after the script is called.

   This change has the side effect of re-enabling the device sharing
   checks in the default block script, which I have tested and which
   now cause libxl to abort properly, since libxl now checks for
   hotplug script errors.

   There is no sharing check for blktap2 since even if you reuse the
   same vhd the resulting tap device is different. I would have preferred
   to simply write the "physical-device" node for the blktap2 case but
   the hotplug script infrastructure is not currently set up to handle
   LIBXL__DEVICE_KIND_VBD
   devices without a hotplug script (backendtype phy and tap both end
   up as KIND_VBD). Changing this was more surgery than I was happy doing
   for 4.2 and therefore I have simply hardcoded to the block script for
   the LIBXL_DISK_BACKEND_TAP case.

 - libxl__device_disk_set_backend running against a phy device with a
   script cannot stat the device to check its properties since it may
   not exist until the script is run. Therefore I have special cased
   this in disk_try_backend to simply assume that backend == phy is
   always ok if a script was
   configured.  Similarly the other backend types are always rejected
   if a script was configured.

   Note that the reason for implementing the default script behaviour
   in device_disk_add instead of libxl__device_disk_setdefault is
   that we need to be able to tell when the script was
   user-supplied rather than defaulted by libxl in order to correctly
   implement the above. The setdefault function must be idempotent so
   we cannot simply update disk->script.

   I suspect that for 4.3 a script member should be added to
   libxl__device, this would also help in the case above of handling
   devices with no script in a consistent manner. This is not 4.2
   material.

 - When the block script falls through and shells out to a block-$type
   script it used to pass "$node" however the only place this was
   assigned was in the remove+phy case (in which case it contains the
   file:// derived /dev/loopN device), and in that case the script
   exits without falling through to the block-$type case.

   Since libxl never creates a type other than phy this never happens
   in practice anyway and we now call the correct block-$type script
   directly.  But fix it up anyway since it is confusing.

 - The block-nbd and block-enbd scripts which we supply appear to be
   broken WRT the hotplug calling convention, in that they seem to
   expect a command line parameter (perhaps the $node described above)
   rather than reading the appropriate node from xenstore.

   I rather suspect this was broken by 7774:e2e7f47e6f79 in November
   2005. I think it is safe to say no one is using these scripts! I
   haven't fixed this here. It would be good to track down some working
   scripts and either incorporate them or defer to them in their existing
   home (e.g. if they live somewhere useful like the nbd tools
   package).

 - Added a few block script related entries to check-xl-disk-parse
   from http://backdrift.org/xen-block-iscsi-script-with-multipath-support
   and http://lists.linbit.com/pipermail/drbd-user/2008-September/010221.html /
   http://www.drbd.org/users-guide-emb/s-xen-configure-domu.html (and
   snuck in another interesting empty CDROM case)

   This highlighted two bugs in the libxlu disk parser handling of the
   deprecated "<script>:" prefix:

   - It was failing to prefix with "block-" to construct the actual
     script name

   - The regex for matching iscsi or drbd or e?nbd was incorrect

 - Use libxl__abs_path for the nic script too, since the existing
   code nearly tricked me into repeating the mistake.
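
The two libxlu parser fixes above (prefixing the script name with
"block-", and correcting the match for the iscsi/drbd/e?nbd prefixes)
can be sketched in isolation; this is an illustrative approximation
only, not the actual libxlu lexer code:

```shell
#!/bin/sh
# Sketch: map the deprecated "<prefix>:" target syntax to the hotplug
# script it is equivalent to, i.e. "script=block-<prefix>".
# (Illustrative only -- the real handling lives in the libxlu disk lexer.)
to_script() {
    case "$1" in
        iscsi:*|drbd:*|nbd:*|enbd:*) echo "block-${1%%:*}" ;;
        *) echo "" ;;   # not a recognised deprecated prefix
    esac
}

to_script "iscsi:iqn.2012-08.example.com:lun0"   # -> block-iscsi
to_script "enbd:somedev"                         # -> block-enbd
```

Note the "block-" prefixing: "iscsi:..." is equivalent to
"script=block-iscsi", matching the documentation change below.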

I have tested with a custom block script which uses "lvchange -a" to
dynamically add/remove the referenced device (simulates iSCSI
login/logout without requiring me to faff around setting up an iSCSI
target). I also tested on a blktap2 system.

I haven't directly tested anything more complex like iscsi: or nbd:
other than what check-xl-disk-parse exercises.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
v3:
  - incorporate IanJ's improvements to docs/misc/xl-disk-configuration.txt
    from <20502.48288.351664.168722@mariner.uk.xensource.com>
  - wrap commit message to 70 columns
  - use asprintf instead of open-coding savestring. Using this
    requires that libxl_osdeps.h be included before stdio.h, which
    means moving it from libxlu_disk_i.h (included in the %{ } section
    of the .l file) into a new %top{ } section so that it goes before
    the boilerplate's own include of stdio.h. The other includer of
    libxlu_disk_i.h (libxlu_disk.c) already has its own libxl_osdeps.h
    as the first include.

v2:
  - observe that script= requires backendtype=phy and substantially rework to
    correctly reflect that.
  - remove unintentional braces change in SAVESTRING macro

diff -r 8e1090d822e5 -r bfd5e107774c docs/misc/xl-disk-configuration.txt
--- a/docs/misc/xl-disk-configuration.txt	Fri Aug 03 08:21:19 2012 +0100
+++ b/docs/misc/xl-disk-configuration.txt	Fri Aug 03 09:01:30 2012 +0100
@@ -160,7 +160,10 @@ script=<script>
 ---------------
 
 Specifies that <target> is not a normal host path, but rather
-information to be interpreted by /etc/xen/scripts/block-<script>.
+information to be interpreted by the executable program <script>
+(looked for in /etc/xen/scripts if it does not contain a slash).
+
+These scripts are normally called "block-<script>".
 
 
 
@@ -204,7 +207,7 @@ Supported values:      iscsi:  nbd:  enb
 In xend and old versions of libxl it was necessary to specify the
 "script" (see above) with a prefix.  For compatibility, these four
 prefixes are recognised as specifying the corresponding script.  They
-are equivalent to "script=<script>".
+are equivalent to "script=block-<script>".
 
 
 <deprecated-prefix>:
diff -r 8e1090d822e5 -r bfd5e107774c tools/hotplug/Linux/block
--- a/tools/hotplug/Linux/block	Fri Aug 03 08:21:19 2012 +0100
+++ b/tools/hotplug/Linux/block	Fri Aug 03 09:01:30 2012 +0100
@@ -342,4 +342,4 @@ esac
 
 # If we've reached here, $t is neither phy nor file, so fire a helper script.
 [ -x ${XEN_SCRIPT_DIR}/block-"$t" ] && \
-  ${XEN_SCRIPT_DIR}/block-"$t" "$command" $node
+  ${XEN_SCRIPT_DIR}/block-"$t" "$command"
diff -r 8e1090d822e5 -r bfd5e107774c tools/libxl/check-xl-disk-parse
--- a/tools/libxl/check-xl-disk-parse	Fri Aug 03 08:21:19 2012 +0100
+++ b/tools/libxl/check-xl-disk-parse	Fri Aug 03 09:01:30 2012 +0100
@@ -142,5 +142,44 @@ disk: {
 
 EOF
 one 0 vdev=hdc,access=r,devtype=cdrom,format=empty
+one 0 vdev=hdc,access=r,devtype=cdrom
+
+expected <<EOF
+disk: {
+    "backend_domid": 0,
+    "pdev_path": "iqn.2001-05.com.equallogic:0-8a0906-23fe93404-c82797962054a96d-examplehost",
+    "vdev": "xvda",
+    "backend": "unknown",
+    "format": "raw",
+    "script": "block-iscsi",
+    "removable": 0,
+    "readwrite": 1,
+    "is_cdrom": 0
+}
+
+EOF
+
+# http://backdrift.org/xen-block-iscsi-script-with-multipath-support
+one 0 iscsi:iqn.2001-05.com.equallogic:0-8a0906-23fe93404-c82797962054a96d-examplehost,xvda,w
+one 0 vdev=xvda,access=w,script=block-iscsi,target=iqn.2001-05.com.equallogic:0-8a0906-23fe93404-c82797962054a96d-examplehost
+
+expected <<EOF
+disk: {
+    "backend_domid": 0,
+    "pdev_path": "app01",
+    "vdev": "hda",
+    "backend": "unknown",
+    "format": "raw",
+    "script": "block-drbd",
+    "removable": 0,
+    "readwrite": 1,
+    "is_cdrom": 0
+}
+
+EOF
+
+# http://lists.linbit.com/pipermail/drbd-user/2008-September/010221.html
+# http://www.drbd.org/users-guide-emb/s-xen-configure-domu.html
+one 0 drbd:app01,hda,w
 
 complete
diff -r 8e1090d822e5 -r bfd5e107774c tools/libxl/libxl.c
--- a/tools/libxl/libxl.c	Fri Aug 03 08:21:19 2012 +0100
+++ b/tools/libxl/libxl.c	Fri Aug 03 09:01:30 2012 +0100
@@ -1796,9 +1796,9 @@ static void device_disk_add(libxl__egc *
     STATE_AO_GC(aodev->ao);
     flexarray_t *front = NULL;
     flexarray_t *back = NULL;
-    char *dev;
+    char *dev, *script;
     libxl__device *device;
-    int major, minor, rc;
+    int rc;
     libxl_ctx *ctx = gc->owner;
     xs_transaction_t t = XBT_NULL;
 
@@ -1833,13 +1833,6 @@ static void device_disk_add(libxl__egc *
             goto out_free;
         }
 
-        if (disk->script) {
-            LIBXL__LOG(ctx, LIBXL__LOG_ERROR, "External block scripts"
-                       " not yet supported, sorry");
-            rc = ERROR_INVAL;
-            goto out_free;
-        }
-
         GCNEW(device);
         rc = libxl__device_from_disk(gc, domid, disk, device);
         if (rc != 0) {
@@ -1851,18 +1844,16 @@ static void device_disk_add(libxl__egc *
         switch (disk->backend) {
             case LIBXL_DISK_BACKEND_PHY:
                 dev = disk->pdev_path;
+
+                script = libxl__abs_path(gc, disk->script ?: "block",
+                                         libxl__xen_script_dir_path());
+
         do_backend_phy:
-                libxl__device_physdisk_major_minor(dev, &major, &minor);
-                flexarray_append(back, "physical-device");
-                flexarray_append(back, libxl__sprintf(gc, "%x:%x", major, minor));
-
                 flexarray_append(back, "params");
                 flexarray_append(back, dev);
 
-                flexarray_append(back, "script");
-                flexarray_append(back, GCSPRINTF("%s/%s",
-                                                 libxl__xen_script_dir_path(),
-                                                 "block"));
+                assert(script);
+                flexarray_append_pair(back, "script", script);
 
                 assert(device->backend_kind == LIBXL__DEVICE_KIND_VBD);
                 break;
@@ -1879,10 +1870,12 @@ static void device_disk_add(libxl__egc *
                     libxl__device_disk_string_of_format(disk->format),
                     disk->pdev_path));
 
-                flexarray_append(back, "script");
-                flexarray_append(back, GCSPRINTF("%s/%s",
-                                                 libxl__xen_script_dir_path(),
-                                                 "blktap"));
+                /*
+                 * tap devices do not support custom block scripts and
+                 * always use the plain block script.
+                 */
+                script = libxl__abs_path(gc, "block",
+                                         libxl__xen_script_dir_path());
 
                 /* now create a phy device to export the device to the guest */
                 goto do_backend_phy;
@@ -2582,13 +2575,10 @@ void libxl__device_nic_add(libxl__egc *e
     flexarray_append(back, "1");
     flexarray_append(back, "state");
     flexarray_append(back, libxl__sprintf(gc, "%d", 1));
-    if (nic->script) {
-        flexarray_append(back, "script");
-        flexarray_append(back, nic->script[0]=='/' ? nic->script
-                         : libxl__sprintf(gc, "%s/%s",
-                                          libxl__xen_script_dir_path(),
-                                          nic->script));
-    }
+    if (nic->script)
+        flexarray_append_pair(back, "script",
+                              libxl__abs_path(gc, nic->script,
+                                              libxl__xen_script_dir_path()));
 
     if (nic->ifname) {
         flexarray_append(back, "vifname");
diff -r 8e1090d822e5 -r bfd5e107774c tools/libxl/libxl_device.c
--- a/tools/libxl/libxl_device.c	Fri Aug 03 08:21:19 2012 +0100
+++ b/tools/libxl/libxl_device.c	Fri Aug 03 09:01:30 2012 +0100
@@ -191,18 +191,26 @@ typedef struct {
 } disk_try_backend_args;
 
 static int disk_try_backend(disk_try_backend_args *a,
-                            libxl_disk_backend backend) {
+                            libxl_disk_backend backend)
+{
+    libxl__gc *gc = a->gc;
     /* returns 0 (ie, DISK_BACKEND_UNKNOWN) on failure, or
      * backend on success */
-    libxl_ctx *ctx = libxl__gc_owner(a->gc);
+    libxl_ctx *ctx = libxl__gc_owner(gc);
+
     switch (backend) {
-
     case LIBXL_DISK_BACKEND_PHY:
         if (!(a->disk->format == LIBXL_DISK_FORMAT_RAW ||
               a->disk->format == LIBXL_DISK_FORMAT_EMPTY)) {
             goto bad_format;
         }
 
+        if (a->disk->script) {
+            LOG(DEBUG, "Disk vdev=%s uses script=..., assuming phy backend",
+                a->disk->vdev);
+            return backend;
+        }
+
         if (libxl__try_phy_backend(a->stab.st_mode))
             return backend;
 
@@ -212,6 +220,8 @@ static int disk_try_backend(disk_try_bac
         return 0;
 
     case LIBXL_DISK_BACKEND_TAP:
+        if (a->disk->script) goto bad_script;
+
         if (!libxl__blktap_enabled(a->gc)) {
             LIBXL__LOG(ctx, LIBXL__LOG_DEBUG, "Disk vdev=%s, backend tap"
                        " unsuitable because blktap not available",
@@ -225,6 +235,7 @@ static int disk_try_backend(disk_try_bac
         return backend;
 
     case LIBXL_DISK_BACKEND_QDISK:
+        if (a->disk->script) goto bad_script;
         return backend;
 
     default:
@@ -242,6 +253,11 @@ static int disk_try_backend(disk_try_bac
                libxl_disk_backend_to_string(backend),
                libxl_disk_format_to_string(a->disk->format));
     return 0;
+
+ bad_script:
+    LOG(DEBUG, "Disk vdev=%s, backend %s not compatible with script=...",
+        a->disk->vdev, libxl_disk_backend_to_string(backend));
+    return 0;
 }
 
 int libxl__device_disk_set_backend(libxl__gc *gc, libxl_device_disk *disk) {
@@ -264,7 +280,7 @@ int libxl__device_disk_set_backend(libxl
             return ERROR_INVAL;
         }
         memset(&a.stab, 0, sizeof(a.stab));
-    } else {
+    } else if (!disk->script) {
         if (stat(disk->pdev_path, &a.stab)) {
             LIBXL__LOG_ERRNO(ctx, LIBXL__LOG_ERROR, "Disk vdev=%s "
                              "failed to stat: %s",
diff -r 8e1090d822e5 -r bfd5e107774c tools/libxl/libxlu_disk_i.h
--- a/tools/libxl/libxlu_disk_i.h	Fri Aug 03 08:21:19 2012 +0100
+++ b/tools/libxl/libxlu_disk_i.h	Fri Aug 03 09:01:30 2012 +0100
@@ -1,8 +1,6 @@
 #ifndef LIBXLU_DISK_I_H
 #define LIBXLU_DISK_I_H
 
-#include "libxl_osdeps.h" /* must come before any other headers */
-
 #include "libxlu_internal.h"
 
 
diff -r 8e1090d822e5 -r bfd5e107774c tools/libxl/libxlu_disk_l.c
--- a/tools/libxl/libxlu_disk_l.c	Fri Aug 03 08:21:19 2012 +0100
+++ b/tools/libxl/libxlu_disk_l.c	Fri Aug 03 09:01:30 2012 +0100
@@ -1,6 +1,10 @@
 #line 2 "libxlu_disk_l.c"
+#line 31 "libxlu_disk_l.l"
+#include "libxl_osdeps.h" /* must come before any other headers */
 
-#line 4 "libxlu_disk_l.c"
+
+
+#line 8 "libxlu_disk_l.c"
 
 #define  YY_INT_ALIGNED short int
 
@@ -366,7 +370,7 @@ struct yy_trans_info
 	flex_int32_t yy_verify;
 	flex_int32_t yy_nxt;
 	};
-static yyconst flex_int16_t yy_acclist[456] =
+static yyconst flex_int16_t yy_acclist[447] =
     {   0,
        24,   24,   26,   22,   23,   25, 8193,   22,   23,   25,
     16385, 8193,   22,   25,16385,   22,   23,   25,   23,   25,
@@ -379,77 +383,76 @@ static yyconst flex_int16_t yy_acclist[4
      8213,   22,16405,   22,   22,   22,   22,   22,   22,   22,
        22,   22,   22,   22,   22,   22,   22,   22,   22,   22,
 
-       22,   24, 8193,   22, 8193,   22, 8193, 8213,   22, 8213,
-       22, 8213,   12,   22,   22,   22,   22,   22,   22,   22,
+       22,   22,   24, 8193,   22, 8193,   22, 8193, 8213,   22,
+     8213,   22, 8213,   12,   22,   22,   22,   22,   22,   22,
        22,   22,   22,   22,   22,   22,   22,   22,   22,   22,
-     8213,   22, 8213,   22, 8213,   12,   22,   17, 8213,   22,
-    16405,   22,   22,   22,   22,   22,   22,   22, 8213,   22,
-    16405,   20, 8213,   22,16405,   22, 8205, 8213,   22,16397,
-    16405,   22,   22, 8208, 8213,   22,16400,16405,   22,   22,
-       22,   22,   17, 8213,   22,   17, 8213,   22,   17,   22,
-       17, 8213,   22,    3,   22,   22,   19, 8213,   22,16405,
-       22,   22,   22,   22,   20, 8213,   22,   20, 8213,   22,
+       22,   22, 8213,   22, 8213,   22, 8213,   12,   22,   17,
+     8213,   22,16405,   22,   22,   22,   22,   22,   22,   22,
+     8206, 8213,   22,16398,16405,   20, 8213,   22,16405,   22,
+     8205, 8213,   22,16397,16405,   22,   22, 8208, 8213,   22,
+    16400,16405,   22,   22,   22,   22,   17, 8213,   22,   17,
+     8213,   22,   17,   22,   17, 8213,   22,    3,   22,   22,
+       19, 8213,   22,16405,   22,   22, 8206, 8213,   22, 8206,
 
-       20,   22,   20, 8213, 8205, 8213,   22, 8205, 8213,   22,
-     8205,   22, 8205, 8213,   22, 8208, 8213,   22, 8208, 8213,
-       22, 8208,   22, 8208, 8213,   22,   22,    9,   22,   17,
-     8213,   22,   17, 8213,   22,   17, 8213,   17,   22,   17,
-       22,    3,   22,   22,   19, 8213,   22,   19, 8213,   22,
-       19,   22,   19, 8213,   22,   18, 8213,   22,16405, 8206,
-     8213,   22,16398,16405,   22,   20, 8213,   22,   20, 8213,
-       22,   20, 8213,   20,   22,   20, 8205, 8213,   22, 8205,
-     8213,   22, 8205, 8213, 8205,   22, 8205,   22, 8208, 8213,
-       22, 8208, 8213,   22, 8208, 8213, 8208,   22, 8208,   22,
+     8213,   22, 8206,   22, 8206, 8213,   20, 8213,   22,   20,
+     8213,   22,   20,   22,   20, 8213, 8205, 8213,   22, 8205,
+     8213,   22, 8205,   22, 8205, 8213,   22, 8208, 8213,   22,
+     8208, 8213,   22, 8208,   22, 8208, 8213,   22,   22,    9,
+       22,   17, 8213,   22,   17, 8213,   22,   17, 8213,   17,
+       22,   17,   22,    3,   22,   22,   19, 8213,   22,   19,
+     8213,   22,   19,   22,   19, 8213,   22,   18, 8213,   22,
+    16405, 8206, 8213,   22, 8206, 8213,   22, 8206, 8213, 8206,
+       22, 8206,   20, 8213,   22,   20, 8213,   22,   20, 8213,
+       20,   22,   20, 8205, 8213,   22, 8205, 8213,   22, 8205,
 
-       22,    9,   12,    9,    7,   22,   22,   19, 8213,   22,
-       19, 8213,   22,   19, 8213,   19,   22,   19,    2,   18,
-     8213,   22,   18, 8213,   22,   18,   22,   18, 8213, 8206,
-     8213,   22, 8206, 8213,   22, 8206,   22, 8206, 8213,   22,
-       10,   22,   11,    9,    9,   12,    7,   12,    7,   22,
-        6,    2,   12,    2,   18, 8213,   22,   18, 8213,   22,
-       18, 8213,   18,   22,   18, 8206, 8213,   22, 8206, 8213,
-       22, 8206, 8213, 8206,   22, 8206,   22,   10,   12,   10,
-       15, 8213,   22,16405,   11,   12,   11,    7,    7,   12,
-       22,    6,   12,    6,    6,   12,    6,   12,    2,    2,
+     8213, 8205,   22, 8205,   22, 8208, 8213,   22, 8208, 8213,
+       22, 8208, 8213, 8208,   22, 8208,   22,   22,    9,   12,
+        9,    7,   22,   22,   19, 8213,   22,   19, 8213,   22,
+       19, 8213,   19,   22,   19,    2,   18, 8213,   22,   18,
+     8213,   22,   18,   22,   18, 8213,   10,   22,   11,    9,
+        9,   12,    7,   12,    7,   22,    6,    2,   12,    2,
+       18, 8213,   22,   18, 8213,   22,   18, 8213,   18,   22,
+       18,   10,   12,   10,   15, 8213,   22,16405,   11,   12,
+       11,    7,    7,   12,   22,    6,   12,    6,    6,   12,
+        6,   12,    2,    2,   12,   10,   10,   12,   15, 8213,
 
-       12, 8206,   22,16398,   10,   10,   12,   15, 8213,   22,
-       15, 8213,   22,   15,   22,   15, 8213,   11,   12,   22,
-        6,    6,   12,    6,    6,   15, 8213,   22,   15, 8213,
-       22,   15, 8213,   15,   22,   15,   22,    6,    6,    8,
-        6,    5,    6,    8,   12,    8,    4,    6,    5,    6,
-        8,    8,   12,    4,    6
+       22,   15, 8213,   22,   15,   22,   15, 8213,   11,   12,
+       22,    6,    6,   12,    6,    6,   15, 8213,   22,   15,
+     8213,   22,   15, 8213,   15,   22,   15,   22,    6,    6,
+        8,    6,    5,    6,    8,   12,    8,    4,    6,    5,
+        6,    8,    8,   12,    4,    6
     } ;
 
-static yyconst flex_int16_t yy_accept[257] =
+static yyconst flex_int16_t yy_accept[252] =
     {   0,
         1,    1,    1,    2,    3,    4,    7,   12,   16,   19,
        21,   24,   27,   30,   33,   36,   39,   42,   45,   48,
        51,   54,   57,   60,   63,   66,   68,   69,   70,   71,
        73,   76,   78,   79,   80,   81,   84,   84,   85,   86,
        87,   88,   89,   90,   91,   92,   93,   94,   95,   96,
-       97,   98,   99,  100,  101,  102,  103,  105,  107,  108,
-      110,  112,  113,  114,  115,  116,  117,  118,  119,  120,
+       97,   98,   99,  100,  101,  102,  103,  104,  106,  108,
+      109,  111,  113,  114,  115,  116,  117,  118,  119,  120,
       121,  122,  123,  124,  125,  126,  127,  128,  129,  130,
-      131,  133,  135,  136,  137,  138,  142,  143,  144,  145,
-      146,  147,  148,  149,  152,  156,  157,  162,  163,  164,
+      131,  132,  133,  135,  137,  138,  139,  140,  144,  145,
+      146,  147,  148,  149,  150,  151,  156,  160,  161,  166,
 
-      169,  170,  171,  172,  173,  176,  179,  181,  183,  184,
-      186,  187,  191,  192,  193,  194,  195,  198,  201,  203,
-      205,  208,  211,  213,  215,  216,  219,  222,  224,  226,
-      227,  228,  229,  230,  233,  236,  238,  240,  241,  242,
-      244,  245,  248,  251,  253,  255,  256,  260,  265,  266,
-      269,  272,  274,  276,  277,  280,  283,  285,  287,  288,
-      289,  292,  295,  297,  299,  300,  301,  302,  304,  305,
-      306,  307,  308,  311,  314,  316,  318,  319,  320,  323,
-      326,  328,  330,  333,  336,  338,  340,  341,  342,  343,
-      344,  345,  347,  349,  350,  351,  352,  354,  355,  358,
+      167,  168,  173,  174,  175,  176,  177,  180,  183,  185,
+      187,  188,  190,  191,  195,  196,  197,  200,  203,  205,
+      207,  210,  213,  215,  217,  220,  223,  225,  227,  228,
+      231,  234,  236,  238,  239,  240,  241,  242,  245,  248,
+      250,  252,  253,  254,  256,  257,  260,  263,  265,  267,
+      268,  272,  275,  278,  280,  282,  283,  286,  289,  291,
+      293,  294,  297,  300,  302,  304,  305,  306,  309,  312,
+      314,  316,  317,  318,  319,  321,  322,  323,  324,  325,
+      328,  331,  333,  335,  336,  337,  340,  343,  345,  347,
+      348,  349,  350,  351,  353,  355,  356,  357,  358,  360,
 
-      361,  363,  365,  366,  369,  372,  374,  376,  377,  378,
-      380,  381,  385,  387,  388,  389,  391,  392,  394,  395,
-      397,  399,  400,  402,  405,  406,  408,  411,  414,  416,
-      418,  420,  421,  422,  424,  425,  426,  429,  432,  434,
-      436,  437,  438,  439,  440,  441,  442,  444,  446,  447,
-      449,  451,  452,  454,  456,  456
+      361,  364,  367,  369,  371,  372,  374,  375,  379,  381,
+      382,  383,  385,  386,  388,  389,  391,  393,  394,  396,
+      397,  399,  402,  405,  407,  409,  411,  412,  413,  415,
+      416,  417,  420,  423,  425,  427,  428,  429,  430,  431,
+      432,  433,  435,  437,  438,  440,  442,  443,  445,  447,
+      447
     } ;
 
 static yyconst flex_int32_t yy_ec[256] =
@@ -492,244 +495,238 @@ static yyconst flex_int32_t yy_meta[34] 
         1,    1,    1
     } ;
 
-static yyconst flex_int16_t yy_base[313] =
+static yyconst flex_int16_t yy_base[308] =
     {   0,
-        0,    0,  572,  560,  559,  551,   32,   35,  662,  662,
-       44,   62,   30,   40,   32,   50,  533,   49,   47,   59,
-       68,  525,   69,  517,   72,    0,  662,  515,  662,   83,
-       91,    0,    0,  100,  501,  109,    0,   78,   51,   86,
-       89,   74,   96,  105,  109,  110,  111,  112,  117,   73,
-      119,  118,  121,  120,  122,    0,  134,    0,    0,  138,
-        0,    0,  495,  130,  144,  129,  143,  145,  146,  147,
-      148,  149,  153,  154,  155,  158,  161,  165,  166,  170,
-      180,    0,    0,  662,  171,  201,  176,  175,  178,  183,
-      465,  182,  190,  455,  212,  188,  221,  208,  224,  234,
+        0,    0,  546,  538,  533,  521,   32,   35,  656,  656,
+       44,   62,   30,   41,   50,   51,  507,   64,   47,   66,
+       67,  499,   68,  487,   72,    0,  656,  465,  656,   87,
+       91,    0,    0,  100,  452,  109,    0,   74,   95,   87,
+       32,   96,  105,  110,   77,   97,   40,  113,  116,  112,
+      118,  120,  121,  122,  123,  125,    0,  137,    0,    0,
+      147,    0,    0,  449,  129,  126,  134,  143,  145,  147,
+      148,  149,  151,  153,  156,  160,  155,  167,  162,  175,
+      168,  159,  188,    0,    0,  656,  166,  197,  179,  185,
+      176,  200,  435,  186,  193,  216,  225,  205,  234,  221,
 
-      209,  230,  236,  221,  244,    0,  247,    0,  184,  248,
-      244,  269,  231,  247,  251,  258,  272,    0,  279,    0,
-      283,    0,  286,    0,  255,  290,    0,  293,    0,  270,
-      281,  455,  254,  297,    0,    0,    0,    0,  294,  662,
-      295,  308,    0,  310,    0,  257,  319,  328,  304,  331,
-        0,    0,    0,    0,  335,    0,    0,    0,    0,  316,
-      338,    0,    0,    0,    0,  333,  336,  447,  662,  429,
-      338,  340,  348,    0,    0,    0,    0,  428,  351,    0,
-      355,    0,  359,    0,  362,    0,  357,  427,  308,  369,
-      426,  662,  425,  662,  346,  365,  423,  662,  371,    0,
+      237,  247,  204,  230,  244,  213,  254,    0,  256,    0,
+      251,  258,  254,  279,  256,  259,  267,    0,  269,    0,
+      286,    0,  288,    0,  290,    0,  297,    0,  267,  299,
+        0,  301,    0,  288,  297,  421,  302,  310,    0,    0,
+        0,    0,  305,  656,  307,  319,    0,  321,    0,  322,
+      332,  335,    0,    0,    0,    0,  339,    0,    0,    0,
+        0,  342,    0,    0,    0,    0,  340,  349,    0,    0,
+        0,    0,  337,  345,  420,  656,  419,  346,  350,  358,
+        0,    0,    0,    0,  418,  360,    0,  362,    0,  417,
+      319,  369,  416,  656,  415,  656,  276,  364,  414,  656,
 
-        0,    0,    0,  378,    0,    0,    0,    0,  380,  421,
-      662,  388,  420,    0,  419,  662,  373,  418,  662,  372,
-      382,  417,  662,  398,  416,  662,  400,    0,  402,    0,
-        0,  385,  415,  662,  390,  275,  409,    0,    0,    0,
-        0,  405,  404,  406,  264,  412,  224,  129,  662,   87,
-      662,   47,  662,  662,  662,  434,  438,  441,  445,  449,
-      453,  457,  461,  465,  469,  473,  477,  481,  485,  489,
-      493,  497,  501,  505,  509,  513,  517,  521,  525,  529,
-      533,  537,  541,  545,  549,  553,  557,  561,  565,  569,
-      573,  577,  581,  585,  589,  593,  597,  601,  605,  609,
+      375,    0,    0,    0,    0,  413,  656,  384,  412,    0,
+      410,  656,  370,  409,  656,  370,  378,  408,  656,  366,
+      656,  394,    0,  396,    0,    0,  380,  316,  656,  377,
+      387,  398,    0,    0,    0,    0,  399,  402,  407,  271,
+      406,  228,  200,  656,  175,  656,   77,  656,  656,  656,
+      428,  432,  435,  439,  443,  447,  451,  455,  459,  463,
+      467,  471,  475,  479,  483,  487,  491,  495,  499,  503,
+      507,  511,  515,  519,  523,  527,  531,  535,  539,  543,
+      547,  551,  555,  559,  563,  567,  571,  575,  579,  583,
+      587,  591,  595,  599,  603,  607,  611,  615,  619,  623,
 
-      613,  617,  621,  625,  629,  633,  637,  641,  645,  649,
-      653,  657
+      627,  631,  635,  639,  643,  647,  651
     } ;
 
-static yyconst flex_int16_t yy_def[313] =
+static yyconst flex_int16_t yy_def[308] =
     {   0,
-      255,    1,  256,  256,  255,  257,  258,  258,  255,  255,
-      259,  259,   12,   12,   12,   12,   12,   12,   12,   12,
-       12,   12,   12,   12,   12,  260,  255,  257,  255,  261,
-      258,  262,  262,  263,   12,  257,  264,   12,   12,   12,
+      250,    1,  251,  251,  250,  252,  253,  253,  250,  250,
+      254,  254,   12,   12,   12,   12,   12,   12,   12,   12,
+       12,   12,   12,   12,   12,  255,  250,  252,  250,  256,
+      253,  257,  257,  258,   12,  252,  259,   12,   12,   12,
        12,   12,   12,   12,   12,   12,   12,   12,   12,   12,
-       12,   12,   12,   12,   12,  260,  261,  262,  262,  265,
-      266,  266,  255,   12,   12,   12,   12,   12,   12,   12,
+       12,   12,   12,   12,   12,   12,  255,  256,  257,  257,
+      260,  261,  261,  250,   12,   12,   12,   12,   12,   12,
        12,   12,   12,   12,   12,   12,   12,   12,   12,   12,
-      265,  266,  266,  255,   12,  267,   12,   12,   12,   12,
-       12,   12,   12,   36,  268,   12,  269,   12,   12,  270,
+       12,   12,  260,  261,  261,  250,   12,  262,   12,   12,
+       12,   12,   12,   12,   12,  263,  264,   12,  265,   12,
 
-       12,   12,   12,   12,  271,  272,  267,  272,   12,   12,
-       12,  273,   12,   12,   12,  257,  274,  275,  268,  275,
-      276,  277,  269,  277,   12,  278,  279,  270,  279,   12,
-       12,  280,   12,  271,  272,  272,  281,  281,   12,  255,
-       12,  282,  283,  273,  283,   12,  284,  285,  257,  274,
-      275,  275,  286,  286,  276,  277,  277,  287,  287,   12,
-      278,  279,  279,  288,  288,   12,   12,  289,  255,  290,
-       12,   12,  282,  283,  283,  291,  291,  292,  293,  294,
-      284,  294,  295,  296,  285,  296,  257,  297,   12,  298,
-      289,  255,  299,  255,   12,  300,  301,  255,  293,  294,
+       12,  266,   12,   12,   12,   12,  267,  268,  262,  268,
+       12,   12,   12,  269,   12,   12,  270,  271,  263,  271,
+      272,  273,  264,  273,  274,  275,  265,  275,   12,  276,
+      277,  266,  277,   12,   12,  278,   12,  267,  268,  268,
+      279,  279,   12,  250,   12,  280,  281,  269,  281,   12,
+      282,  270,  271,  271,  283,  283,  272,  273,  273,  284,
+      284,  274,  275,  275,  285,  285,   12,  276,  277,  277,
+      286,  286,   12,   12,  287,  250,  288,   12,   12,  280,
+      281,  281,  289,  289,  290,  291,  292,  282,  292,  293,
+       12,  294,  287,  250,  295,  250,   12,  296,  297,  250,
 
-      294,  302,  302,  295,  296,  296,  303,  303,  257,  304,
-      255,  305,  306,  306,  299,  255,   12,  307,  255,  307,
-      307,  301,  255,  285,  304,  255,  308,  309,  305,  309,
-      306,   12,  307,  255,  307,  307,  308,  309,  309,  310,
-      310,   12,  307,  307,  311,  307,  307,  312,  255,  307,
-      255,  312,  255,  255,    0,  255,  255,  255,  255,  255,
-      255,  255,  255,  255,  255,  255,  255,  255,  255,  255,
-      255,  255,  255,  255,  255,  255,  255,  255,  255,  255,
-      255,  255,  255,  255,  255,  255,  255,  255,  255,  255,
-      255,  255,  255,  255,  255,  255,  255,  255,  255,  255,
+      291,  292,  292,  298,  298,  299,  250,  300,  301,  301,
+      295,  250,   12,  302,  250,  302,  302,  297,  250,  299,
+      250,  303,  304,  300,  304,  301,   12,  302,  250,  302,
+      302,  303,  304,  304,  305,  305,   12,  302,  302,  306,
+      302,  302,  307,  250,  302,  250,  307,  250,  250,    0,
+      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
+      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
+      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
+      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
+      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
 
-      255,  255,  255,  255,  255,  255,  255,  255,  255,  255,
-      255,  255
+      250,  250,  250,  250,  250,  250,  250
     } ;
 
-static yyconst flex_int16_t yy_nxt[696] =
+static yyconst flex_int16_t yy_nxt[690] =
     {   0,
         6,    7,    8,    9,    6,    6,    6,    6,   10,   11,
        12,   13,   14,   15,   16,   17,   17,   18,   17,   17,
        17,   17,   19,   17,   20,   21,   22,   23,   24,   17,
        25,   17,   17,   31,   31,   32,   31,   31,   32,   35,
        33,   35,   41,   33,   28,   28,   28,   29,   34,   35,
-      249,   36,   37,   42,   43,   38,   35,   48,   35,   35,
-       35,   39,   28,   28,   28,   29,   34,   44,   35,   36,
-       37,   40,   46,   45,   65,   49,   47,   35,   35,   50,
-       52,   35,   35,   35,   54,   28,   58,   35,   55,   64,
-      254,   59,   31,   31,   32,   35,   75,   66,   35,   33,
+       35,   36,   37,   73,   42,   38,   35,   49,   68,   35,
+       35,   39,   28,   28,   28,   29,   34,   43,   45,   36,
+       37,   40,   44,   35,   46,   35,   35,   35,   51,   53,
+      244,   35,   50,   35,   55,   65,   35,   47,   56,   28,
+       59,   48,   31,   31,   32,   60,   35,   71,   67,   33,
 
-       28,   28,   28,   29,   68,   35,   48,   28,   37,   60,
-       60,   60,   61,   60,   35,   67,   60,   62,   35,   35,
-       35,   35,   72,   71,   73,   69,   35,   35,   35,   35,
-       35,   35,  253,   80,   76,   70,   28,   58,   35,   35,
-       28,   82,   59,   85,   77,   78,   83,   79,   87,   74,
-       76,   86,   35,   35,   35,   35,   35,   35,   35,   90,
-       94,   95,   35,   35,   35,   97,   88,   35,   91,   92,
-       35,   99,  100,   89,   35,   35,   93,  101,   98,   35,
-       35,  102,   28,   82,   35,   35,   96,   35,   83,  109,
-      112,   35,   35,   35,   76,   97,  110,   35,  104,   35,
+       28,   28,   28,   29,   35,   35,   35,   28,   37,   61,
+       61,   61,   62,   61,   35,   70,   61,   63,   66,   35,
+       49,   35,   35,   72,   74,   35,   69,   35,   75,   35,
+       35,   35,   35,   88,   35,   35,   82,   78,   35,   28,
+       59,   77,   87,   35,   76,   60,   80,   79,   81,   28,
+       84,   78,   35,   89,   35,   85,   35,   35,   35,   75,
+       35,   92,   35,   96,   35,   35,   90,   97,   35,   35,
+       93,   35,   94,   91,   99,   35,   35,   35,  249,  100,
+       95,  101,  102,  104,   35,   35,   98,  103,   35,  105,
+       28,   84,  111,  106,   35,   35,   85,  107,  107,   61,
 
-      103,  105,  105,   60,  106,  105,  139,  115,  105,  108,
-      111,  114,  117,  117,   60,  118,  117,   35,   35,  117,
-      120,  121,  121,   60,  122,  121,  130,  251,  121,  124,
-       35,  100,  125,   35,  126,  126,   60,  127,  126,   35,
-       35,  126,  129,  131,  132,   35,   28,  135,  133,   28,
-      137,  140,  136,   35,  147,  138,   35,   35,  148,  146,
-       35,   29,  170,   35,   35,  178,   35,  249,  141,  142,
-      142,   60,  143,  142,   28,  151,  142,  145,  219,   35,
-      152,   28,  153,  160,  149,   28,  156,  154,   28,  158,
-       35,  157,   28,  162,  159,   28,  164,  166,  163,   28,
+      108,  107,   35,  248,  107,  110,  112,  114,  113,   35,
+       75,   78,   99,   35,   35,  116,  117,  117,   61,  118,
+      117,  134,   35,  117,  120,  121,  121,   61,  122,  121,
+       35,  246,  121,  124,  125,  125,   61,  126,  125,   35,
+      137,  125,  128,  135,  102,  129,   35,  130,  130,   61,
+      131,  130,  136,   35,  130,  133,   28,  139,   28,  141,
+       35,  144,  140,   35,  142,   35,  151,   35,   35,   28,
+      153,   28,  155,  143,  244,  154,   35,  156,  145,  146,
+      146,   61,  147,  146,  150,   35,  146,  149,   28,  158,
+       28,  160,   28,  163,  159,  167,  161,   35,  164,   28,
 
-      135,  165,  244,   35,   35,  136,  171,   29,  172,  167,
-       28,  174,   28,  176,  187,  212,  175,   35,  177,  179,
-      179,   60,  180,  179,  188,   35,  179,  182,  183,  183,
-       60,  184,  183,   28,  151,  183,  186,   28,  156,  152,
-       28,  162,   35,  157,  190,   35,  163,   35,  196,   35,
-       28,  174,  189,   28,  200,   35,  175,   28,  202,  201,
-       29,   28,  205,  203,   28,  207,  195,  206,  219,  209,
-      208,   63,  214,   28,  200,  234,  220,  221,  217,  201,
-       28,  205,   35,   29,  235,  234,  206,  224,  227,  227,
-       60,  228,  227,  219,   35,  227,  230,  232,  242,  236,
+      165,   28,  169,   28,  171,  166,   35,  170,  213,  172,
+      177,   35,   28,  139,   35,  173,   35,  178,  140,  215,
+      179,   28,  181,   28,  183,  174,  208,  182,   35,  184,
+      185,   35,  186,  186,   61,  187,  186,   28,  153,  186,
+      189,   28,  158,  154,   28,  163,   35,  159,  190,   35,
+      164,   28,  169,  192,   35,   35,  191,  170,  198,   35,
+       28,  181,   28,  202,   28,  204,  182,  215,  203,  207,
+      205,   64,  210,  229,  197,  216,  217,   28,  202,   35,
+      215,  229,  230,  203,  222,  222,   61,  223,  222,   35,
+      215,  222,  225,  237,  227,  231,   28,  233,   28,  235,
 
-       28,  207,   28,  238,   28,  240,  208,  219,  239,  219,
-      241,   28,  238,  245,   35,  219,  243,  239,  219,  211,
-      198,  234,  194,  231,  226,  247,  223,  246,  216,  169,
-      211,  198,  194,  250,   26,   26,   26,   26,   28,   28,
-       28,   30,   30,   30,   30,   35,   35,   35,   35,   56,
-      192,   56,   56,   57,   57,   57,   57,   59,  169,   59,
-       59,   34,   34,   34,   34,   63,   63,  116,   63,   81,
-       81,   81,   81,   83,  113,   83,   83,  107,  107,  107,
-      107,  119,  119,  119,  119,  123,  123,  123,  123,  128,
-      128,  128,  128,  134,  134,  134,  134,  136,   84,  136,
+       28,  233,  234,  238,  236,  215,  234,  240,   35,  215,
+      215,  200,  229,  196,  239,  226,  221,  219,  212,  176,
+      207,  200,  196,  194,  176,  241,  242,  245,   26,   26,
+       26,   26,   28,   28,   28,   30,   30,   30,   30,   35,
+       35,   35,   35,   57,  115,   57,   57,   58,   58,   58,
+       58,   60,   86,   60,   60,   34,   34,   34,   34,   64,
+       64,   35,   64,   83,   83,   83,   83,   85,   29,   85,
+       85,  109,  109,  109,  109,  119,  119,  119,  119,  123,
+      123,  123,  123,  127,  127,  127,  127,  132,  132,  132,
+      132,  138,  138,  138,  138,  140,   54,  140,  140,  148,
 
-      136,  144,  144,  144,  144,  150,  150,  150,  150,  152,
-       35,  152,  152,  155,  155,  155,  155,  157,   29,  157,
-      157,  161,  161,  161,  161,  163,   53,  163,  163,  168,
-      168,  168,  168,  138,   51,  138,  138,  173,  173,  173,
-      173,  175,   35,  175,  175,  181,  181,  181,  181,  185,
-      185,  185,  185,  154,   29,  154,  154,  159,  255,  159,
-      159,  165,   27,  165,  165,  191,  191,  191,  191,  193,
-      193,  193,  193,  177,   27,  177,  177,  197,  197,  197,
-      197,  199,  199,  199,  199,  201,  255,  201,  201,  204,
-      204,  204,  204,  206,  255,  206,  206,  210,  210,  210,
+      148,  148,  148,  152,  152,  152,  152,  154,   52,  154,
+      154,  157,  157,  157,  157,  159,   35,  159,  159,  162,
+      162,  162,  162,  164,   29,  164,  164,  168,  168,  168,
+      168,  170,  250,  170,  170,  175,  175,  175,  175,  142,
+       27,  142,  142,  180,  180,  180,  180,  182,   27,  182,
+      182,  188,  188,  188,  188,  156,  250,  156,  156,  161,
+      250,  161,  161,  166,  250,  166,  166,  172,  250,  172,
+      172,  193,  193,  193,  193,  195,  195,  195,  195,  184,
+      250,  184,  184,  199,  199,  199,  199,  201,  201,  201,
+      201,  203,  250,  203,  203,  206,  206,  206,  206,  209,
 
-      210,  213,  213,  213,  213,  215,  215,  215,  215,  218,
-      218,  218,  218,  222,  222,  222,  222,  203,  255,  203,
-      203,  208,  255,  208,  208,  225,  225,  225,  225,  229,
-      229,  229,  229,  214,  255,  214,  214,  233,  233,  233,
-      233,  237,  237,  237,  237,  239,  255,  239,  239,  241,
-      255,  241,  241,  248,  248,  248,  248,  252,  252,  252,
-      252,    5,  255,  255,  255,  255,  255,  255,  255,  255,
-      255,  255,  255,  255,  255,  255,  255,  255,  255,  255,
-      255,  255,  255,  255,  255,  255,  255,  255,  255,  255,
-      255,  255,  255,  255,  255
-
+      209,  209,  209,  211,  211,  211,  211,  214,  214,  214,
+      214,  218,  218,  218,  218,  205,  250,  205,  205,  220,
+      220,  220,  220,  224,  224,  224,  224,  210,  250,  210,
+      210,  228,  228,  228,  228,  232,  232,  232,  232,  234,
+      250,  234,  234,  236,  250,  236,  236,  243,  243,  243,
+      243,  247,  247,  247,  247,    5,  250,  250,  250,  250,
+      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
+      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
+      250,  250,  250,  250,  250,  250,  250,  250,  250
     } ;
 
-static yyconst flex_int16_t yy_chk[696] =
+static yyconst flex_int16_t yy_chk[690] =
     {   0,
         1,    1,    1,    1,    1,    1,    1,    1,    1,    1,
         1,    1,    1,    1,    1,    1,    1,    1,    1,    1,
         1,    1,    1,    1,    1,    1,    1,    1,    1,    1,
         1,    1,    1,    7,    7,    7,    8,    8,    8,   13,
-        7,   15,   13,    8,   11,   11,   11,   11,   11,   14,
-      252,   11,   11,   14,   15,   11,   19,   19,   18,   16,
-       39,   11,   12,   12,   12,   12,   12,   16,   20,   12,
-       12,   12,   18,   16,   39,   20,   18,   21,   23,   21,
-       23,   25,   50,   42,   25,   30,   30,   38,   25,   38,
-      250,   30,   31,   31,   31,   40,   50,   40,   41,   31,
+        7,   41,   13,    8,   11,   11,   11,   11,   11,   47,
+       14,   11,   11,   47,   14,   11,   19,   19,   41,   15,
+       16,   11,   12,   12,   12,   12,   12,   14,   16,   12,
+       12,   12,   15,   18,   16,   20,   21,   23,   21,   23,
+      247,   25,   20,   38,   25,   38,   45,   18,   25,   30,
+       30,   18,   31,   31,   31,   30,   40,   45,   40,   31,
 
-       34,   34,   34,   34,   42,   43,   43,   34,   34,   36,
-       36,   36,   36,   36,   44,   41,   36,   36,   45,   46,
-       47,   48,   47,   46,   48,   44,   49,   52,   51,   54,
-       53,   55,  248,   54,   55,   45,   57,   57,   66,   64,
-       60,   60,   57,   64,   52,   53,   60,   53,   66,   49,
-       51,   65,   67,   65,   68,   69,   70,   71,   72,   69,
-       73,   74,   73,   74,   75,   76,   67,   76,   70,   71,
-       77,   78,   78,   68,   78,   79,   72,   78,   77,   80,
-       85,   79,   81,   81,   88,   87,   75,   89,   81,   87,
-       90,   92,   90,  109,   96,   96,   88,   96,   85,   93,
+       34,   34,   34,   34,   39,   42,   46,   34,   34,   36,
+       36,   36,   36,   36,   43,   43,   36,   36,   39,   44,
+       44,   50,   48,   46,   48,   49,   42,   51,   49,   52,
+       53,   54,   55,   66,   56,   66,   55,   56,   65,   58,
+       58,   51,   65,   67,   50,   58,   54,   53,   54,   61,
+       61,   52,   68,   67,   69,   61,   70,   71,   72,   70,
+       73,   71,   74,   75,   77,   75,   68,   76,   82,   76,
+       72,   79,   73,   69,   78,   87,   78,   81,  245,   79,
+       74,   80,   80,   81,   80,   91,   77,   80,   89,   82,
+       83,   83,   89,   87,   90,   94,   83,   88,   88,   88,
 
-       80,   86,   86,   86,   86,   86,  109,   93,   86,   86,
-       89,   92,   95,   95,   95,   95,   95,   98,  101,   95,
-       95,   97,   97,   97,   97,   97,  101,  247,   97,   97,
-      104,   99,   98,   99,  100,  100,  100,  100,  100,  102,
-      113,  100,  100,  102,  103,  103,  105,  105,  104,  107,
-      107,  110,  105,  111,  114,  107,  114,  110,  115,  113,
-      115,  116,  133,  133,  125,  146,  146,  245,  111,  112,
-      112,  112,  112,  112,  117,  117,  112,  112,  236,  130,
-      117,  119,  119,  125,  116,  121,  121,  119,  123,  123,
-      131,  121,  126,  126,  123,  128,  128,  130,  126,  134,
+       88,   88,   95,  243,   88,   88,   90,   92,   91,   92,
+       95,   98,   98,  103,   98,   94,   96,   96,   96,   96,
+       96,  103,  106,   96,   96,   97,   97,   97,   97,   97,
+      100,  242,   97,   97,   99,   99,   99,   99,   99,  104,
+      106,   99,   99,  104,  101,  100,  101,  102,  102,  102,
+      102,  102,  105,  105,  102,  102,  107,  107,  109,  109,
+      111,  112,  107,  113,  109,  115,  116,  112,  116,  117,
+      117,  119,  119,  111,  240,  117,  129,  119,  113,  114,
+      114,  114,  114,  114,  115,  197,  114,  114,  121,  121,
+      123,  123,  125,  125,  121,  129,  123,  134,  125,  127,
 
-      134,  128,  236,  139,  141,  134,  139,  149,  141,  131,
-      142,  142,  144,  144,  149,  189,  142,  189,  144,  147,
-      147,  147,  147,  147,  160,  160,  147,  147,  148,  148,
-      148,  148,  148,  150,  150,  148,  148,  155,  155,  150,
-      161,  161,  166,  155,  167,  167,  161,  171,  172,  172,
-      173,  173,  166,  179,  179,  195,  173,  181,  181,  179,
-      187,  183,  183,  181,  185,  185,  171,  183,  196,  187,
-      185,  190,  190,  199,  199,  220,  196,  196,  195,  199,
-      204,  204,  217,  209,  220,  221,  204,  209,  212,  212,
-      212,  212,  212,  235,  232,  212,  212,  217,  232,  221,
+      127,  130,  130,  132,  132,  127,  135,  130,  197,  132,
+      137,  137,  138,  138,  143,  134,  145,  143,  138,  228,
+      145,  146,  146,  148,  148,  135,  191,  146,  191,  148,
+      150,  150,  151,  151,  151,  151,  151,  152,  152,  151,
+      151,  157,  157,  152,  162,  162,  173,  157,  167,  167,
+      162,  168,  168,  174,  174,  178,  173,  168,  179,  179,
+      180,  180,  186,  186,  188,  188,  180,  198,  186,  220,
+      188,  192,  192,  216,  178,  198,  198,  201,  201,  213,
+      230,  217,  216,  201,  208,  208,  208,  208,  208,  227,
+      231,  208,  208,  227,  213,  217,  222,  222,  224,  224,
 
-      224,  224,  227,  227,  229,  229,  224,  243,  227,  244,
-      229,  237,  237,  242,  242,  246,  235,  237,  233,  225,
-      222,  218,  215,  213,  210,  244,  197,  243,  193,  191,
-      188,  178,  170,  246,  256,  256,  256,  256,  257,  257,
-      257,  258,  258,  258,  258,  259,  259,  259,  259,  260,
-      168,  260,  260,  261,  261,  261,  261,  262,  132,  262,
-      262,  263,  263,  263,  263,  264,  264,   94,  264,  265,
-      265,  265,  265,  266,   91,  266,  266,  267,  267,  267,
-      267,  268,  268,  268,  268,  269,  269,  269,  269,  270,
-      270,  270,  270,  271,  271,  271,  271,  272,   63,  272,
+      232,  232,  222,  230,  224,  238,  232,  237,  237,  241,
+      239,  218,  214,  211,  231,  209,  206,  199,  195,  193,
+      190,  185,  177,  175,  136,  238,  239,  241,  251,  251,
+      251,  251,  252,  252,  252,  253,  253,  253,  253,  254,
+      254,  254,  254,  255,   93,  255,  255,  256,  256,  256,
+      256,  257,   64,  257,  257,  258,  258,  258,  258,  259,
+      259,   35,  259,  260,  260,  260,  260,  261,   28,  261,
+      261,  262,  262,  262,  262,  263,  263,  263,  263,  264,
+      264,  264,  264,  265,  265,  265,  265,  266,  266,  266,
+      266,  267,  267,  267,  267,  268,   24,  268,  268,  269,
 
-      272,  273,  273,  273,  273,  274,  274,  274,  274,  275,
-       35,  275,  275,  276,  276,  276,  276,  277,   28,  277,
-      277,  278,  278,  278,  278,  279,   24,  279,  279,  280,
-      280,  280,  280,  281,   22,  281,  281,  282,  282,  282,
-      282,  283,   17,  283,  283,  284,  284,  284,  284,  285,
-      285,  285,  285,  286,    6,  286,  286,  287,    5,  287,
-      287,  288,    4,  288,  288,  289,  289,  289,  289,  290,
-      290,  290,  290,  291,    3,  291,  291,  292,  292,  292,
-      292,  293,  293,  293,  293,  294,    0,  294,  294,  295,
-      295,  295,  295,  296,    0,  296,  296,  297,  297,  297,
+      269,  269,  269,  270,  270,  270,  270,  271,   22,  271,
+      271,  272,  272,  272,  272,  273,   17,  273,  273,  274,
+      274,  274,  274,  275,    6,  275,  275,  276,  276,  276,
+      276,  277,    5,  277,  277,  278,  278,  278,  278,  279,
+        4,  279,  279,  280,  280,  280,  280,  281,    3,  281,
+      281,  282,  282,  282,  282,  283,    0,  283,  283,  284,
+        0,  284,  284,  285,    0,  285,  285,  286,    0,  286,
+      286,  287,  287,  287,  287,  288,  288,  288,  288,  289,
+        0,  289,  289,  290,  290,  290,  290,  291,  291,  291,
+      291,  292,    0,  292,  292,  293,  293,  293,  293,  294,
 
-      297,  298,  298,  298,  298,  299,  299,  299,  299,  300,
-      300,  300,  300,  301,  301,  301,  301,  302,    0,  302,
-      302,  303,    0,  303,  303,  304,  304,  304,  304,  305,
-      305,  305,  305,  306,    0,  306,  306,  307,  307,  307,
-      307,  308,  308,  308,  308,  309,    0,  309,  309,  310,
-        0,  310,  310,  311,  311,  311,  311,  312,  312,  312,
-      312,  255,  255,  255,  255,  255,  255,  255,  255,  255,
-      255,  255,  255,  255,  255,  255,  255,  255,  255,  255,
-      255,  255,  255,  255,  255,  255,  255,  255,  255,  255,
-      255,  255,  255,  255,  255
-
+      294,  294,  294,  295,  295,  295,  295,  296,  296,  296,
+      296,  297,  297,  297,  297,  298,    0,  298,  298,  299,
+      299,  299,  299,  300,  300,  300,  300,  301,    0,  301,
+      301,  302,  302,  302,  302,  303,  303,  303,  303,  304,
+        0,  304,  304,  305,    0,  305,  305,  306,  306,  306,
+      306,  307,  307,  307,  307,  250,  250,  250,  250,  250,
+      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
+      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
+      250,  250,  250,  250,  250,  250,  250,  250,  250
     } ;
 
 #define YY_TRAILING_MASK 0x2000
@@ -776,7 +773,8 @@ goto find_rule; \
  * syntax; if the target string has to contain "," or ":" the new
  * syntax's "target=" should be used.
  */
-#line 31 "libxlu_disk_l.l"
+
+#line 35 "libxlu_disk_l.l"
 #include "libxlu_disk_i.h"
 
 #define YY_NO_INPUT
@@ -885,7 +883,7 @@ static int vdev_and_devtype(DiskParseCon
 #define DPC ((DiskParseContext*)yyextra)
 
 
-#line 889 "libxlu_disk_l.c"
+#line 887 "libxlu_disk_l.c"
 
 #define INITIAL 0
 #define LEXERR 1
@@ -1121,12 +1119,12 @@ YY_DECL
 	register int yy_act;
     struct yyguts_t * yyg = (struct yyguts_t*)yyscanner;
 
-#line 151 "libxlu_disk_l.l"
+#line 155 "libxlu_disk_l.l"
 
 
  /*----- the scanner rules which do the parsing -----*/
 
-#line 1130 "libxlu_disk_l.c"
+#line 1128 "libxlu_disk_l.c"
 
 	if ( !yyg->yy_init )
 		{
@@ -1190,14 +1188,14 @@ yy_match:
 			while ( yy_chk[yy_base[yy_current_state] + yy_c] != yy_current_state )
 				{
 				yy_current_state = (int) yy_def[yy_current_state];
-				if ( yy_current_state >= 256 )
+				if ( yy_current_state >= 251 )
 					yy_c = yy_meta[(unsigned int) yy_c];
 				}
 			yy_current_state = yy_nxt[yy_base[yy_current_state] + (unsigned int) yy_c];
 			*yyg->yy_state_ptr++ = yy_current_state;
 			++yy_cp;
 			}
-		while ( yy_current_state != 255 );
+		while ( yy_current_state != 250 );
 
 yy_find_action:
 		yy_current_state = *--yyg->yy_state_ptr;
@@ -1247,72 +1245,72 @@ do_action:	/* This label is used only to
 case 1:
 /* rule 1 can match eol */
 YY_RULE_SETUP
-#line 155 "libxlu_disk_l.l"
+#line 159 "libxlu_disk_l.l"
 { /* ignore whitespace before parameters */ }
 	YY_BREAK
 /* ordinary parameters setting enums or strings */
 case 2:
 /* rule 2 can match eol */
 YY_RULE_SETUP
-#line 159 "libxlu_disk_l.l"
+#line 163 "libxlu_disk_l.l"
 { STRIP(','); setformat(DPC, FROMEQUALS); }
 	YY_BREAK
 case 3:
 YY_RULE_SETUP
-#line 161 "libxlu_disk_l.l"
+#line 165 "libxlu_disk_l.l"
 { DPC->disk->is_cdrom = 1; }
 	YY_BREAK
 case 4:
 YY_RULE_SETUP
-#line 162 "libxlu_disk_l.l"
+#line 166 "libxlu_disk_l.l"
 { DPC->disk->is_cdrom = 1; }
 	YY_BREAK
 case 5:
 YY_RULE_SETUP
-#line 163 "libxlu_disk_l.l"
+#line 167 "libxlu_disk_l.l"
 { DPC->disk->is_cdrom = 0; }
 	YY_BREAK
 case 6:
 /* rule 6 can match eol */
 YY_RULE_SETUP
-#line 164 "libxlu_disk_l.l"
+#line 168 "libxlu_disk_l.l"
 { xlu__disk_err(DPC,yytext,"unknown value for type"); }
 	YY_BREAK
 case 7:
 /* rule 7 can match eol */
 YY_RULE_SETUP
-#line 166 "libxlu_disk_l.l"
+#line 170 "libxlu_disk_l.l"
 { STRIP(','); setaccess(DPC, FROMEQUALS); }
 	YY_BREAK
 case 8:
 /* rule 8 can match eol */
 YY_RULE_SETUP
-#line 167 "libxlu_disk_l.l"
+#line 171 "libxlu_disk_l.l"
 { STRIP(','); setbackendtype(DPC,FROMEQUALS); }
 	YY_BREAK
 case 9:
 /* rule 9 can match eol */
 YY_RULE_SETUP
-#line 169 "libxlu_disk_l.l"
+#line 173 "libxlu_disk_l.l"
 { STRIP(','); SAVESTRING("vdev", vdev, FROMEQUALS); }
 	YY_BREAK
 case 10:
 /* rule 10 can match eol */
 YY_RULE_SETUP
-#line 170 "libxlu_disk_l.l"
+#line 174 "libxlu_disk_l.l"
 { STRIP(','); SAVESTRING("script", script, FROMEQUALS); }
 	YY_BREAK
 /* the target magic parameter, eats the rest of the string */
 case 11:
 YY_RULE_SETUP
-#line 174 "libxlu_disk_l.l"
+#line 178 "libxlu_disk_l.l"
 { STRIP(','); SAVESTRING("target", pdev_path, FROMEQUALS); }
 	YY_BREAK
 /* unknown parameters */
 case 12:
 /* rule 12 can match eol */
 YY_RULE_SETUP
-#line 178 "libxlu_disk_l.l"
+#line 182 "libxlu_disk_l.l"
 { xlu__disk_err(DPC,yytext,"unknown parameter"); }
 	YY_BREAK
 /* deprecated prefixes */
@@ -1320,7 +1318,7 @@ YY_RULE_SETUP
    * matched the whole string, so these patterns take precedence */
 case 13:
 YY_RULE_SETUP
-#line 185 "libxlu_disk_l.l"
+#line 189 "libxlu_disk_l.l"
 {
                     STRIP(':');
                     DPC->had_depr_prefix=1; DEPRECATE("use `[format=]...,'");
@@ -1329,24 +1327,31 @@ YY_RULE_SETUP
 	YY_BREAK
 case 14:
 YY_RULE_SETUP
-#line 191 "libxlu_disk_l.l"
+#line 195 "libxlu_disk_l.l"
 {
-		    STRIP(':');
+                    char *newscript;
+                    STRIP(':');
                     DPC->had_depr_prefix=1; DEPRECATE("use `script=...'");
-		    SAVESTRING("script", script, yytext);
-		}
+                    if (asprintf(&newscript, "block-%s", yytext) < 0) {
+                            xlu__disk_err(DPC,yytext,"unable to format script");
+                            return 0;
+                    }
+                    savestring(DPC, "script respecified",
+                               &DPC->disk->script, newscript);
+                    free(newscript);
+                }
 	YY_BREAK
 case 15:
 *yy_cp = yyg->yy_hold_char; /* undo effects of setting up yytext */
 yyg->yy_c_buf_p = yy_cp = yy_bp + 8;
 YY_DO_BEFORE_ACTION; /* set up yytext again */
 YY_RULE_SETUP
-#line 197 "libxlu_disk_l.l"
+#line 208 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
 case 16:
 YY_RULE_SETUP
-#line 198 "libxlu_disk_l.l"
+#line 209 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
 case 17:
@@ -1354,7 +1359,7 @@ case 17:
 yyg->yy_c_buf_p = yy_cp = yy_bp + 4;
 YY_DO_BEFORE_ACTION; /* set up yytext again */
 YY_RULE_SETUP
-#line 199 "libxlu_disk_l.l"
+#line 210 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
 case 18:
@@ -1362,7 +1367,7 @@ case 18:
 yyg->yy_c_buf_p = yy_cp = yy_bp + 6;
 YY_DO_BEFORE_ACTION; /* set up yytext again */
 YY_RULE_SETUP
-#line 200 "libxlu_disk_l.l"
+#line 211 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
 case 19:
@@ -1370,7 +1375,7 @@ case 19:
 yyg->yy_c_buf_p = yy_cp = yy_bp + 5;
 YY_DO_BEFORE_ACTION; /* set up yytext again */
 YY_RULE_SETUP
-#line 201 "libxlu_disk_l.l"
+#line 212 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
 case 20:
@@ -1378,13 +1383,13 @@ case 20:
 yyg->yy_c_buf_p = yy_cp = yy_bp + 4;
 YY_DO_BEFORE_ACTION; /* set up yytext again */
 YY_RULE_SETUP
-#line 202 "libxlu_disk_l.l"
+#line 213 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
 case 21:
 /* rule 21 can match eol */
 YY_RULE_SETUP
-#line 204 "libxlu_disk_l.l"
+#line 215 "libxlu_disk_l.l"
 {
 		  xlu__disk_err(DPC,yytext,"unknown deprecated disk prefix");
 		  return 0;
@@ -1394,7 +1399,7 @@ YY_RULE_SETUP
 case 22:
 /* rule 22 can match eol */
 YY_RULE_SETUP
-#line 211 "libxlu_disk_l.l"
+#line 222 "libxlu_disk_l.l"
 {
     STRIP(',');
 
@@ -1423,7 +1428,7 @@ YY_RULE_SETUP
 	YY_BREAK
 case 23:
 YY_RULE_SETUP
-#line 237 "libxlu_disk_l.l"
+#line 248 "libxlu_disk_l.l"
 {
     BEGIN(LEXERR);
     yymore();
@@ -1431,17 +1436,17 @@ YY_RULE_SETUP
 	YY_BREAK
 case 24:
 YY_RULE_SETUP
-#line 241 "libxlu_disk_l.l"
+#line 252 "libxlu_disk_l.l"
 {
     xlu__disk_err(DPC,yytext,"bad disk syntax"); return 0;
 }
 	YY_BREAK
 case 25:
 YY_RULE_SETUP
-#line 244 "libxlu_disk_l.l"
+#line 255 "libxlu_disk_l.l"
 YY_FATAL_ERROR( "flex scanner jammed" );
 	YY_BREAK
-#line 1445 "libxlu_disk_l.c"
+#line 1450 "libxlu_disk_l.c"
 			case YY_STATE_EOF(INITIAL):
 			case YY_STATE_EOF(LEXERR):
 				yyterminate();
@@ -1705,7 +1710,7 @@ static int yy_get_next_buffer (yyscan_t 
 		while ( yy_chk[yy_base[yy_current_state] + yy_c] != yy_current_state )
 			{
 			yy_current_state = (int) yy_def[yy_current_state];
-			if ( yy_current_state >= 256 )
+			if ( yy_current_state >= 251 )
 				yy_c = yy_meta[(unsigned int) yy_c];
 			}
 		yy_current_state = yy_nxt[yy_base[yy_current_state] + (unsigned int) yy_c];
@@ -1729,11 +1734,11 @@ static int yy_get_next_buffer (yyscan_t 
 	while ( yy_chk[yy_base[yy_current_state] + yy_c] != yy_current_state )
 		{
 		yy_current_state = (int) yy_def[yy_current_state];
-		if ( yy_current_state >= 256 )
+		if ( yy_current_state >= 251 )
 			yy_c = yy_meta[(unsigned int) yy_c];
 		}
 	yy_current_state = yy_nxt[yy_base[yy_current_state] + (unsigned int) yy_c];
-	yy_is_jam = (yy_current_state == 255);
+	yy_is_jam = (yy_current_state == 250);
 	if ( ! yy_is_jam )
 		*yyg->yy_state_ptr++ = yy_current_state;
 
@@ -2533,4 +2538,4 @@ void xlu__disk_yyfree (void * ptr , yysc
 
 #define YYTABLES_NAME "yytables"
 
-#line 244 "libxlu_disk_l.l"
+#line 255 "libxlu_disk_l.l"
diff -r 8e1090d822e5 -r bfd5e107774c tools/libxl/libxlu_disk_l.h
--- a/tools/libxl/libxlu_disk_l.h	Fri Aug 03 08:21:19 2012 +0100
+++ b/tools/libxl/libxlu_disk_l.h	Fri Aug 03 09:01:30 2012 +0100
@@ -3,8 +3,12 @@
 #define xlu__disk_yyIN_HEADER 1
 
 #line 6 "libxlu_disk_l.h"
+#line 31 "libxlu_disk_l.l"
+#include "libxl_osdeps.h" /* must come before any other headers */
 
-#line 8 "libxlu_disk_l.h"
+
+
+#line 12 "libxlu_disk_l.h"
 
 #define  YY_INT_ALIGNED short int
 
@@ -340,8 +344,8 @@ extern int xlu__disk_yylex (yyscan_t yys
 #undef YY_DECL
 #endif
 
-#line 244 "libxlu_disk_l.l"
+#line 255 "libxlu_disk_l.l"
 
-#line 346 "libxlu_disk_l.h"
+#line 350 "libxlu_disk_l.h"
 #undef xlu__disk_yyIN_HEADER
 #endif /* xlu__disk_yyHEADER_H */
diff -r 8e1090d822e5 -r bfd5e107774c tools/libxl/libxlu_disk_l.l
--- a/tools/libxl/libxlu_disk_l.l	Fri Aug 03 08:21:19 2012 +0100
+++ b/tools/libxl/libxlu_disk_l.l	Fri Aug 03 09:01:30 2012 +0100
@@ -27,6 +27,10 @@
  * syntax's "target=" should be used.
  */
 
+%top{
+#include "libxl_osdeps.h" /* must come before any other headers */
+}
+
 %{
 #include "libxlu_disk_i.h"
 
@@ -188,11 +192,18 @@ target=.*	{ STRIP(','); SAVESTRING("targ
                     setformat(DPC, yytext);
                  }
 
-iscsi:|e?nbd:drbd:/.* {
-		    STRIP(':');
+(iscsi|e?nbd|drbd):/.* {
+                    char *newscript;
+                    STRIP(':');
                     DPC->had_depr_prefix=1; DEPRECATE("use `script=...'");
-		    SAVESTRING("script", script, yytext);
-		}
+                    if (asprintf(&newscript, "block-%s", yytext) < 0) {
+                            xlu__disk_err(DPC,yytext,"unable to format script");
+                            return 0;
+                    }
+                    savestring(DPC, "script respecified",
+                               &DPC->disk->script, newscript);
+                    free(newscript);
+                }
 
 tapdisk:/.*	{ DPC->had_depr_prefix=1; DEPRECATE(0); }
 tap2?:/.*	{ DPC->had_depr_prefix=1; DEPRECATE(0); }

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 08:02:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 08:02:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxCpR-0008Go-L4; Fri, 03 Aug 2012 08:01:49 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SxCpQ-0008GV-2i
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 08:01:48 +0000
Received: from [85.158.143.99:4268] by server-3.bemta-4.messagelabs.com id
	1C/B6-01511-B658B105; Fri, 03 Aug 2012 08:01:47 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1343980903!29483049!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjE4NjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10139 invoked from network); 3 Aug 2012 08:01:45 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 08:01:45 -0000
X-IronPort-AV: E=Sophos;i="4.77,705,1336363200"; d="scan'208";a="33438614"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Aug 2012 04:01:43 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 3 Aug 2012 04:01:42 -0400
Received: from cosworth.uk.xensource.com ([10.80.16.52] ident=ianc)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1SxCpK-0003QC-3J;
	Fri, 03 Aug 2012 09:01:42 +0100
MIME-Version: 1.0
X-Mercurial-Node: bfd5e107774c3ba5020719861288b9de4e4da68b
Message-ID: <bfd5e107774c3ba50207.1343980901@cosworth.uk.xensource.com>
User-Agent: Mercurial-patchbomb/1.6.4
Date: Fri, 3 Aug 2012 09:01:41 +0100
From: Ian Campbell <ian.campbell@citrix.com>
To: xen-devel@lists.xen.org
Cc: ian.jackson@citrix.com, Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH V3] libxl: support custom block hotplug scripts
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1343980890 -3600
# Node ID bfd5e107774c3ba5020719861288b9de4e4da68b
# Parent  8e1090d822e51e484414856a43f297b34ecfeb2d
libxl: support custom block hotplug scripts

These are provided using the "script=" syntax described in
docs/misc/xl-disk-configuration.txt.

The existing hotplug scripts currently conflate two different
concepts, namely that of making a datapath available in the backend
domain (logging into iSCSI LUNs and the like) and that of actually
connecting that datapath to a Xen backend path (e.g. writing the
"physical-device" node in xenstore to bring up blkback).

For this reason the script support implemented here is only supported
in conjunction with backendtype=phy.
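
For illustration (a hypothetical example; the authoritative syntax is
in docs/misc/xl-disk-configuration.txt), a guest disk stanza using a
custom script might look like:

```
disk = [ "backendtype=phy,vdev=xvda,script=block-iscsi,target=iqn.2012-08.org.example:lun1" ]
```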

Eventually we hope to rework the hotplug scripts to separate the two
concepts, but that is not 4.2 material.

In addition there are some other subtleties:

 - Previously in the blktap case we would add "script = .../blktap" to
   the backend flex array, but then jumped to the PHY case which added
   "script = .../block" too. The block one takes precedence since it
   comes second.

   This was, accidentally, correct. The blktap script is for blktap1
   devices and not blktap2 devices. libxl completely manages the
   blktap2 side of things without resorting to hotplug scripts and
   creates a blkback device directly.  Therefore the "block" script is
   always the correct one to call. Custom scripts are not supported in
   this context.

 - libxl should not write the "physical-device" node. This is the
   responsibility of the block script. Writing the "physical-device"
   node in libxl effectively short-cuts the standard block
   hotplug script which uses "physical-device" to know if it has run
   already or not.

   In the case of more complex scripts libxl cannot know the right
   value to write here anyway, in particular the device may not exist
   until after the script is called.

   This change has the side effect of re-enabling the device sharing
   checks in the default block script, which I have tested and which
   now cause libxl to abort properly, since libxl now checks for
   hotplug script errors.

   There is no sharing check for blktap2 since even if you reuse the
   same vhd the resulting tap device is different. I would have preferred
   to simply write the "physical-device" node for the blktap2 case but
   the hotplug script infrastructure is not currently set up to handle
   LIBXL__DEVICE_KIND_VBD
   devices without a hotplug script (backendtype phy and tap both end
   up as KIND_VBD). Changing this was more surgery than I was happy doing
   for 4.2 and therefore I have simply hardcoded to the block script for
   the LIBXL_DISK_BACKEND_TAP case.

 - libxl__device_disk_set_backend running against a phy device with a
   script cannot stat the device to check its properties since it may
   not exist until the script is run. Therefore I have special cased
   this in disk_try_backend to simply assume that backend == phy is
   always ok if a script was
   configured.  Similarly the other backend types are always rejected
   if a script was configured.

   Note that the reason for implementing the default script behaviour
   in device_disk_add instead of libxl__device_disk_setdefault is
   because we need to be able to tell when the script was
   user-supplied rather than defaulted by libxl in order to correctly
   implement the above. The setdefault function must be idempotent so
   we cannot simply update disk->script.

   I suspect that for 4.3 a script member should be added to
   libxl__device, this would also help in the case above of handling
   devices with no script in a consistent manner. This is not 4.2
   material.

 - When the block script falls through and shells out to a block-$type
   script it used to pass "$node" however the only place this was
   assigned was in the remove+phy case (in which case it contains the
   file:// derived /dev/loopN device), and in that case the script
   exits without falling through to the block-$type case.

   Since libxl never creates a type other than phy this never happens
   in practice anyway and we now call the correct block-$type script
   directly.  But fix it up anyway since it is confusing.

 - The block-nbd and block-enbd scripts which we supply appear to be
   broken WRT the hotplug calling convention, in that they seem to
   expect a command line parameter (perhaps the $node described above)
   rather than reading the appropriate node from xenstore.

   I rather suspect this was broken by 7774:e2e7f47e6f79 in November
   2005. I think it is safe to say no one is using these scripts! I
   haven't fixed this here. It would be good to track down some working
   scripts and either incorporate them or defer to them in their existing
   home (e.g. if they live somewhere useful like the nbd tools
   package).

 - Added a few block script related entries to check-xl-disk-parse
   from http://backdrift.org/xen-block-iscsi-script-with-multipath-support
   and http://lists.linbit.com/pipermail/drbd-user/2008-September/010221.html /
   http://www.drbd.org/users-guide-emb/s-xen-configure-domu.html (and
   snuck in another interesting empty CDROM case)

   This highlighted two bugs in the libxlu disk parser handling of the
   deprecated "<script>:" prefix:

   - It was failing to prefix with "block-" to construct the actual
     script name

   - The regex for matching iscsi or drbd or e?nbd was incorrect
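
   For illustration, the intended handling of the deprecated prefixes
   can be sketched in shell (the real implementation is a flex rule in
   libxlu_disk_l.l; parse_prefix here is purely hypothetical):

```shell
# Recognise the four deprecated "<script>:" prefixes and turn them into
# the actual "block-<script>" name, equivalent to "script=block-<script>".
parse_prefix() {
    case "$1" in
        iscsi:*|nbd:*|enbd:*|drbd:*)
            type="${1%%:*}"     # text before the first colon
            target="${1#*:}"    # everything after the first colon
            echo "script=block-$type target=$target"
            ;;
        *)
            echo "no-prefix"
            ;;
    esac
}
```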

 - Use libxl__abs_path for the nic script too, just because the
   existing code nearly tricked me into repeating the mistake.
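
   The behaviour relied on is roughly the following (a shell rendition
   for illustration only; the real helper is libxl__abs_path() in C,
   and the precise slash test is its own. This sketch treats a leading
   slash as absolute, matching the old open-coded nic check):

```shell
# Leave absolute paths alone; otherwise look the name up under the
# script directory (normally /etc/xen/scripts).
abs_path() {
    case "$1" in
        /*) echo "$1" ;;        # already absolute: use as-is
        *)  echo "$2/$1" ;;     # relative: prefix with the script dir
    esac
}
```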

I have tested with a custom block script which uses "lvchange -a" to
dynamically add and remove the referenced device (simulating iSCSI
login/logout without requiring me to faff around setting up an iSCSI
target). I also tested on a blktap2 system.
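
The core of such a test script can be sketched as follows (hypothetical;
the actual script used for testing is not included here). Activating or
deactivating the LV makes the device node appear or disappear, much as
an iSCSI login/logout would:

```shell
# Hypothetical custom block script core used for testing: $1 is the
# hotplug command, $2 an LV such as "vg0/test".
block_lv() {
    command="$1"
    lv="$2"
    case "$command" in
        add)    lvchange -ay "$lv" ;;   # activate: node appears
        remove) lvchange -an "$lv" ;;   # deactivate: node disappears
    esac
}
```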

I haven't directly tested anything more complex like iscsi: or nbd:
other than what check-xl-disk-parse exercises.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
v3:
  - incorporate IanJ's improvements to docs/misc/xl-disk-configuration.txt
    from <20502.48288.351664.168722@mariner.uk.xensource.com>
  - wrap commit message to 70 columns
  - use asprintf instead of open coding savestring. Using this
    requires that libxl_osdeps.h is included before stdio.h, which means
    moving it from libxlu_disk_i.h (included in the %{ } section of
    the .l file) into a new %top{ } section so that it goes before
    the boilerplate's own include of stdio.h. The other includer of
    libxlu_disk_i.h (libxlu_disk.c) already has its own libxl_osdeps.h
    as the first include.

v2:
  - observe that script= requires backendtype=phy and substantially rework to
    correctly reflect that.
  - remove unintentional braces change in SAVESTRING macro

diff -r 8e1090d822e5 -r bfd5e107774c docs/misc/xl-disk-configuration.txt
--- a/docs/misc/xl-disk-configuration.txt	Fri Aug 03 08:21:19 2012 +0100
+++ b/docs/misc/xl-disk-configuration.txt	Fri Aug 03 09:01:30 2012 +0100
@@ -160,7 +160,10 @@ script=<script>
 ---------------
 
 Specifies that <target> is not a normal host path, but rather
-information to be interpreted by /etc/xen/scripts/block-<script>.
+information to be interpreted by the executable program <script>,
+(looked for in /etc/xen/scripts, if it doesn't contain a slash).
+
+These scripts are normally called "block-<script>".
 
 
 
@@ -204,7 +207,7 @@ Supported values:      iscsi:  nbd:  enb
 In xend and old versions of libxl it was necessary to specify the
 "script" (see above) with a prefix.  For compatibility, these four
 prefixes are recognised as specifying the corresponding script.  They
-are equivalent to "script=<script>".
+are equivalent to "script=block-<script>".
 
 
 <deprecated-prefix>:
diff -r 8e1090d822e5 -r bfd5e107774c tools/hotplug/Linux/block
--- a/tools/hotplug/Linux/block	Fri Aug 03 08:21:19 2012 +0100
+++ b/tools/hotplug/Linux/block	Fri Aug 03 09:01:30 2012 +0100
@@ -342,4 +342,4 @@ esac
 
 # If we've reached here, $t is neither phy nor file, so fire a helper script.
 [ -x ${XEN_SCRIPT_DIR}/block-"$t" ] && \
-  ${XEN_SCRIPT_DIR}/block-"$t" "$command" $node
+  ${XEN_SCRIPT_DIR}/block-"$t" "$command"
diff -r 8e1090d822e5 -r bfd5e107774c tools/libxl/check-xl-disk-parse
--- a/tools/libxl/check-xl-disk-parse	Fri Aug 03 08:21:19 2012 +0100
+++ b/tools/libxl/check-xl-disk-parse	Fri Aug 03 09:01:30 2012 +0100
@@ -142,5 +142,44 @@ disk: {
 
 EOF
 one 0 vdev=hdc,access=r,devtype=cdrom,format=empty
+one 0 vdev=hdc,access=r,devtype=cdrom
+
+expected <<EOF
+disk: {
+    "backend_domid": 0,
+    "pdev_path": "iqn.2001-05.com.equallogic:0-8a0906-23fe93404-c82797962054a96d-examplehost",
+    "vdev": "xvda",
+    "backend": "unknown",
+    "format": "raw",
+    "script": "block-iscsi",
+    "removable": 0,
+    "readwrite": 1,
+    "is_cdrom": 0
+}
+
+EOF
+
+# http://backdrift.org/xen-block-iscsi-script-with-multipath-support
+one 0 iscsi:iqn.2001-05.com.equallogic:0-8a0906-23fe93404-c82797962054a96d-examplehost,xvda,w
+one 0 vdev=xvda,access=w,script=block-iscsi,target=iqn.2001-05.com.equallogic:0-8a0906-23fe93404-c82797962054a96d-examplehost
+
+expected <<EOF
+disk: {
+    "backend_domid": 0,
+    "pdev_path": "app01",
+    "vdev": "hda",
+    "backend": "unknown",
+    "format": "raw",
+    "script": "block-drbd",
+    "removable": 0,
+    "readwrite": 1,
+    "is_cdrom": 0
+}
+
+EOF
+
+# http://lists.linbit.com/pipermail/drbd-user/2008-September/010221.html
+# http://www.drbd.org/users-guide-emb/s-xen-configure-domu.html
+one 0 drbd:app01,hda,w
 
 complete
diff -r 8e1090d822e5 -r bfd5e107774c tools/libxl/libxl.c
--- a/tools/libxl/libxl.c	Fri Aug 03 08:21:19 2012 +0100
+++ b/tools/libxl/libxl.c	Fri Aug 03 09:01:30 2012 +0100
@@ -1796,9 +1796,9 @@ static void device_disk_add(libxl__egc *
     STATE_AO_GC(aodev->ao);
     flexarray_t *front = NULL;
     flexarray_t *back = NULL;
-    char *dev;
+    char *dev, *script;
     libxl__device *device;
-    int major, minor, rc;
+    int rc;
     libxl_ctx *ctx = gc->owner;
     xs_transaction_t t = XBT_NULL;
 
@@ -1833,13 +1833,6 @@ static void device_disk_add(libxl__egc *
             goto out_free;
         }
 
-        if (disk->script) {
-            LIBXL__LOG(ctx, LIBXL__LOG_ERROR, "External block scripts"
-                       " not yet supported, sorry");
-            rc = ERROR_INVAL;
-            goto out_free;
-        }
-
         GCNEW(device);
         rc = libxl__device_from_disk(gc, domid, disk, device);
         if (rc != 0) {
@@ -1851,18 +1844,16 @@ static void device_disk_add(libxl__egc *
         switch (disk->backend) {
             case LIBXL_DISK_BACKEND_PHY:
                 dev = disk->pdev_path;
+
+                script = libxl__abs_path(gc, disk->script ?: "block",
+                                         libxl__xen_script_dir_path());
+
         do_backend_phy:
-                libxl__device_physdisk_major_minor(dev, &major, &minor);
-                flexarray_append(back, "physical-device");
-                flexarray_append(back, libxl__sprintf(gc, "%x:%x", major, minor));
-
                 flexarray_append(back, "params");
                 flexarray_append(back, dev);
 
-                flexarray_append(back, "script");
-                flexarray_append(back, GCSPRINTF("%s/%s",
-                                                 libxl__xen_script_dir_path(),
-                                                 "block"));
+                assert(script);
+                flexarray_append_pair(back, "script", script);
 
                 assert(device->backend_kind == LIBXL__DEVICE_KIND_VBD);
                 break;
@@ -1879,10 +1870,12 @@ static void device_disk_add(libxl__egc *
                     libxl__device_disk_string_of_format(disk->format),
                     disk->pdev_path));
 
-                flexarray_append(back, "script");
-                flexarray_append(back, GCSPRINTF("%s/%s",
-                                                 libxl__xen_script_dir_path(),
-                                                 "blktap"));
+                /*
+                 * tap devices do not support custom block scripts and
+                 * always use the plain block script.
+                 */
+                script = libxl__abs_path(gc, "block",
+                                         libxl__xen_script_dir_path());
 
                 /* now create a phy device to export the device to the guest */
                 goto do_backend_phy;
@@ -2582,13 +2575,10 @@ void libxl__device_nic_add(libxl__egc *e
     flexarray_append(back, "1");
     flexarray_append(back, "state");
     flexarray_append(back, libxl__sprintf(gc, "%d", 1));
-    if (nic->script) {
-        flexarray_append(back, "script");
-        flexarray_append(back, nic->script[0]=='/' ? nic->script
-                         : libxl__sprintf(gc, "%s/%s",
-                                          libxl__xen_script_dir_path(),
-                                          nic->script));
-    }
+    if (nic->script)
+        flexarray_append_pair(back, "script",
+                              libxl__abs_path(gc, nic->script,
+                                              libxl__xen_script_dir_path()));
 
     if (nic->ifname) {
         flexarray_append(back, "vifname");
diff -r 8e1090d822e5 -r bfd5e107774c tools/libxl/libxl_device.c
--- a/tools/libxl/libxl_device.c	Fri Aug 03 08:21:19 2012 +0100
+++ b/tools/libxl/libxl_device.c	Fri Aug 03 09:01:30 2012 +0100
@@ -191,18 +191,26 @@ typedef struct {
 } disk_try_backend_args;
 
 static int disk_try_backend(disk_try_backend_args *a,
-                            libxl_disk_backend backend) {
+                            libxl_disk_backend backend)
+ {
+    libxl__gc *gc = a->gc;
     /* returns 0 (ie, DISK_BACKEND_UNKNOWN) on failure, or
      * backend on success */
-    libxl_ctx *ctx = libxl__gc_owner(a->gc);
+    libxl_ctx *ctx = libxl__gc_owner(gc);
+
     switch (backend) {
-
     case LIBXL_DISK_BACKEND_PHY:
         if (!(a->disk->format == LIBXL_DISK_FORMAT_RAW ||
               a->disk->format == LIBXL_DISK_FORMAT_EMPTY)) {
             goto bad_format;
         }
 
+        if (a->disk->script) {
+            LOG(DEBUG, "Disk vdev=%s, uses script=... assuming phy backend",
+                a->disk->vdev);
+            return backend;
+        }
+
         if (libxl__try_phy_backend(a->stab.st_mode))
             return backend;
 
@@ -212,6 +220,8 @@ static int disk_try_backend(disk_try_bac
         return 0;
 
     case LIBXL_DISK_BACKEND_TAP:
+        if (a->disk->script) goto bad_script;
+
         if (!libxl__blktap_enabled(a->gc)) {
             LIBXL__LOG(ctx, LIBXL__LOG_DEBUG, "Disk vdev=%s, backend tap"
                        " unsuitable because blktap not available",
@@ -225,6 +235,7 @@ static int disk_try_backend(disk_try_bac
         return backend;
 
     case LIBXL_DISK_BACKEND_QDISK:
+        if (a->disk->script) goto bad_script;
         return backend;
 
     default:
@@ -242,6 +253,11 @@ static int disk_try_backend(disk_try_bac
                libxl_disk_backend_to_string(backend),
                libxl_disk_format_to_string(a->disk->format));
     return 0;
+
+ bad_script:
+    LOG(DEBUG, "Disk vdev=%s, backend %s not compatible with script=...",
+        a->disk->vdev, libxl_disk_backend_to_string(backend));
+    return 0;
 }
 
 int libxl__device_disk_set_backend(libxl__gc *gc, libxl_device_disk *disk) {
@@ -264,7 +280,7 @@ int libxl__device_disk_set_backend(libxl
             return ERROR_INVAL;
         }
         memset(&a.stab, 0, sizeof(a.stab));
-    } else {
+    } else if (!disk->script) {
         if (stat(disk->pdev_path, &a.stab)) {
             LIBXL__LOG_ERRNO(ctx, LIBXL__LOG_ERROR, "Disk vdev=%s "
                              "failed to stat: %s",
diff -r 8e1090d822e5 -r bfd5e107774c tools/libxl/libxlu_disk_i.h
--- a/tools/libxl/libxlu_disk_i.h	Fri Aug 03 08:21:19 2012 +0100
+++ b/tools/libxl/libxlu_disk_i.h	Fri Aug 03 09:01:30 2012 +0100
@@ -1,8 +1,6 @@
 #ifndef LIBXLU_DISK_I_H
 #define LIBXLU_DISK_I_H
 
-#include "libxl_osdeps.h" /* must come before any other headers */
-
 #include "libxlu_internal.h"
 
 
diff -r 8e1090d822e5 -r bfd5e107774c tools/libxl/libxlu_disk_l.c
--- a/tools/libxl/libxlu_disk_l.c	Fri Aug 03 08:21:19 2012 +0100
+++ b/tools/libxl/libxlu_disk_l.c	Fri Aug 03 09:01:30 2012 +0100
@@ -1,6 +1,10 @@
 #line 2 "libxlu_disk_l.c"
+#line 31 "libxlu_disk_l.l"
+#include "libxl_osdeps.h" /* must come before any other headers */
 
-#line 4 "libxlu_disk_l.c"
+
+
+#line 8 "libxlu_disk_l.c"
 
 #define  YY_INT_ALIGNED short int
 
@@ -366,7 +370,7 @@ struct yy_trans_info
 	flex_int32_t yy_verify;
 	flex_int32_t yy_nxt;
 	};
-static yyconst flex_int16_t yy_acclist[456] =
+static yyconst flex_int16_t yy_acclist[447] =
     {   0,
        24,   24,   26,   22,   23,   25, 8193,   22,   23,   25,
     16385, 8193,   22,   25,16385,   22,   23,   25,   23,   25,
@@ -379,77 +383,76 @@ static yyconst flex_int16_t yy_acclist[4
      8213,   22,16405,   22,   22,   22,   22,   22,   22,   22,
        22,   22,   22,   22,   22,   22,   22,   22,   22,   22,
 
-       22,   24, 8193,   22, 8193,   22, 8193, 8213,   22, 8213,
-       22, 8213,   12,   22,   22,   22,   22,   22,   22,   22,
+       22,   22,   24, 8193,   22, 8193,   22, 8193, 8213,   22,
+     8213,   22, 8213,   12,   22,   22,   22,   22,   22,   22,
        22,   22,   22,   22,   22,   22,   22,   22,   22,   22,
-     8213,   22, 8213,   22, 8213,   12,   22,   17, 8213,   22,
-    16405,   22,   22,   22,   22,   22,   22,   22, 8213,   22,
-    16405,   20, 8213,   22,16405,   22, 8205, 8213,   22,16397,
-    16405,   22,   22, 8208, 8213,   22,16400,16405,   22,   22,
-       22,   22,   17, 8213,   22,   17, 8213,   22,   17,   22,
-       17, 8213,   22,    3,   22,   22,   19, 8213,   22,16405,
-       22,   22,   22,   22,   20, 8213,   22,   20, 8213,   22,
+       22,   22, 8213,   22, 8213,   22, 8213,   12,   22,   17,
+     8213,   22,16405,   22,   22,   22,   22,   22,   22,   22,
+     8206, 8213,   22,16398,16405,   20, 8213,   22,16405,   22,
+     8205, 8213,   22,16397,16405,   22,   22, 8208, 8213,   22,
+    16400,16405,   22,   22,   22,   22,   17, 8213,   22,   17,
+     8213,   22,   17,   22,   17, 8213,   22,    3,   22,   22,
+       19, 8213,   22,16405,   22,   22, 8206, 8213,   22, 8206,
 
-       20,   22,   20, 8213, 8205, 8213,   22, 8205, 8213,   22,
-     8205,   22, 8205, 8213,   22, 8208, 8213,   22, 8208, 8213,
-       22, 8208,   22, 8208, 8213,   22,   22,    9,   22,   17,
-     8213,   22,   17, 8213,   22,   17, 8213,   17,   22,   17,
-       22,    3,   22,   22,   19, 8213,   22,   19, 8213,   22,
-       19,   22,   19, 8213,   22,   18, 8213,   22,16405, 8206,
-     8213,   22,16398,16405,   22,   20, 8213,   22,   20, 8213,
-       22,   20, 8213,   20,   22,   20, 8205, 8213,   22, 8205,
-     8213,   22, 8205, 8213, 8205,   22, 8205,   22, 8208, 8213,
-       22, 8208, 8213,   22, 8208, 8213, 8208,   22, 8208,   22,
+     8213,   22, 8206,   22, 8206, 8213,   20, 8213,   22,   20,
+     8213,   22,   20,   22,   20, 8213, 8205, 8213,   22, 8205,
+     8213,   22, 8205,   22, 8205, 8213,   22, 8208, 8213,   22,
+     8208, 8213,   22, 8208,   22, 8208, 8213,   22,   22,    9,
+       22,   17, 8213,   22,   17, 8213,   22,   17, 8213,   17,
+       22,   17,   22,    3,   22,   22,   19, 8213,   22,   19,
+     8213,   22,   19,   22,   19, 8213,   22,   18, 8213,   22,
+    16405, 8206, 8213,   22, 8206, 8213,   22, 8206, 8213, 8206,
+       22, 8206,   20, 8213,   22,   20, 8213,   22,   20, 8213,
+       20,   22,   20, 8205, 8213,   22, 8205, 8213,   22, 8205,
 
-       22,    9,   12,    9,    7,   22,   22,   19, 8213,   22,
-       19, 8213,   22,   19, 8213,   19,   22,   19,    2,   18,
-     8213,   22,   18, 8213,   22,   18,   22,   18, 8213, 8206,
-     8213,   22, 8206, 8213,   22, 8206,   22, 8206, 8213,   22,
-       10,   22,   11,    9,    9,   12,    7,   12,    7,   22,
-        6,    2,   12,    2,   18, 8213,   22,   18, 8213,   22,
-       18, 8213,   18,   22,   18, 8206, 8213,   22, 8206, 8213,
-       22, 8206, 8213, 8206,   22, 8206,   22,   10,   12,   10,
-       15, 8213,   22,16405,   11,   12,   11,    7,    7,   12,
-       22,    6,   12,    6,    6,   12,    6,   12,    2,    2,
+     8213, 8205,   22, 8205,   22, 8208, 8213,   22, 8208, 8213,
+       22, 8208, 8213, 8208,   22, 8208,   22,   22,    9,   12,
+        9,    7,   22,   22,   19, 8213,   22,   19, 8213,   22,
+       19, 8213,   19,   22,   19,    2,   18, 8213,   22,   18,
+     8213,   22,   18,   22,   18, 8213,   10,   22,   11,    9,
+        9,   12,    7,   12,    7,   22,    6,    2,   12,    2,
+       18, 8213,   22,   18, 8213,   22,   18, 8213,   18,   22,
+       18,   10,   12,   10,   15, 8213,   22,16405,   11,   12,
+       11,    7,    7,   12,   22,    6,   12,    6,    6,   12,
+        6,   12,    2,    2,   12,   10,   10,   12,   15, 8213,
 
-       12, 8206,   22,16398,   10,   10,   12,   15, 8213,   22,
-       15, 8213,   22,   15,   22,   15, 8213,   11,   12,   22,
-        6,    6,   12,    6,    6,   15, 8213,   22,   15, 8213,
-       22,   15, 8213,   15,   22,   15,   22,    6,    6,    8,
-        6,    5,    6,    8,   12,    8,    4,    6,    5,    6,
-        8,    8,   12,    4,    6
+       22,   15, 8213,   22,   15,   22,   15, 8213,   11,   12,
+       22,    6,    6,   12,    6,    6,   15, 8213,   22,   15,
+     8213,   22,   15, 8213,   15,   22,   15,   22,    6,    6,
+        8,    6,    5,    6,    8,   12,    8,    4,    6,    5,
+        6,    8,    8,   12,    4,    6
     } ;
 
-static yyconst flex_int16_t yy_accept[257] =
+static yyconst flex_int16_t yy_accept[252] =
     {   0,
         1,    1,    1,    2,    3,    4,    7,   12,   16,   19,
        21,   24,   27,   30,   33,   36,   39,   42,   45,   48,
        51,   54,   57,   60,   63,   66,   68,   69,   70,   71,
        73,   76,   78,   79,   80,   81,   84,   84,   85,   86,
        87,   88,   89,   90,   91,   92,   93,   94,   95,   96,
-       97,   98,   99,  100,  101,  102,  103,  105,  107,  108,
-      110,  112,  113,  114,  115,  116,  117,  118,  119,  120,
+       97,   98,   99,  100,  101,  102,  103,  104,  106,  108,
+      109,  111,  113,  114,  115,  116,  117,  118,  119,  120,
       121,  122,  123,  124,  125,  126,  127,  128,  129,  130,
-      131,  133,  135,  136,  137,  138,  142,  143,  144,  145,
-      146,  147,  148,  149,  152,  156,  157,  162,  163,  164,
+      131,  132,  133,  135,  137,  138,  139,  140,  144,  145,
+      146,  147,  148,  149,  150,  151,  156,  160,  161,  166,
 
-      169,  170,  171,  172,  173,  176,  179,  181,  183,  184,
-      186,  187,  191,  192,  193,  194,  195,  198,  201,  203,
-      205,  208,  211,  213,  215,  216,  219,  222,  224,  226,
-      227,  228,  229,  230,  233,  236,  238,  240,  241,  242,
-      244,  245,  248,  251,  253,  255,  256,  260,  265,  266,
-      269,  272,  274,  276,  277,  280,  283,  285,  287,  288,
-      289,  292,  295,  297,  299,  300,  301,  302,  304,  305,
-      306,  307,  308,  311,  314,  316,  318,  319,  320,  323,
-      326,  328,  330,  333,  336,  338,  340,  341,  342,  343,
-      344,  345,  347,  349,  350,  351,  352,  354,  355,  358,
+      167,  168,  173,  174,  175,  176,  177,  180,  183,  185,
+      187,  188,  190,  191,  195,  196,  197,  200,  203,  205,
+      207,  210,  213,  215,  217,  220,  223,  225,  227,  228,
+      231,  234,  236,  238,  239,  240,  241,  242,  245,  248,
+      250,  252,  253,  254,  256,  257,  260,  263,  265,  267,
+      268,  272,  275,  278,  280,  282,  283,  286,  289,  291,
+      293,  294,  297,  300,  302,  304,  305,  306,  309,  312,
+      314,  316,  317,  318,  319,  321,  322,  323,  324,  325,
+      328,  331,  333,  335,  336,  337,  340,  343,  345,  347,
+      348,  349,  350,  351,  353,  355,  356,  357,  358,  360,
 
-      361,  363,  365,  366,  369,  372,  374,  376,  377,  378,
-      380,  381,  385,  387,  388,  389,  391,  392,  394,  395,
-      397,  399,  400,  402,  405,  406,  408,  411,  414,  416,
-      418,  420,  421,  422,  424,  425,  426,  429,  432,  434,
-      436,  437,  438,  439,  440,  441,  442,  444,  446,  447,
-      449,  451,  452,  454,  456,  456
+      361,  364,  367,  369,  371,  372,  374,  375,  379,  381,
+      382,  383,  385,  386,  388,  389,  391,  393,  394,  396,
+      397,  399,  402,  405,  407,  409,  411,  412,  413,  415,
+      416,  417,  420,  423,  425,  427,  428,  429,  430,  431,
+      432,  433,  435,  437,  438,  440,  442,  443,  445,  447,
+      447
     } ;
 
 static yyconst flex_int32_t yy_ec[256] =
@@ -492,244 +495,238 @@ static yyconst flex_int32_t yy_meta[34] 
         1,    1,    1
     } ;
 
-static yyconst flex_int16_t yy_base[313] =
+static yyconst flex_int16_t yy_base[308] =
     {   0,
-        0,    0,  572,  560,  559,  551,   32,   35,  662,  662,
-       44,   62,   30,   40,   32,   50,  533,   49,   47,   59,
-       68,  525,   69,  517,   72,    0,  662,  515,  662,   83,
-       91,    0,    0,  100,  501,  109,    0,   78,   51,   86,
-       89,   74,   96,  105,  109,  110,  111,  112,  117,   73,
-      119,  118,  121,  120,  122,    0,  134,    0,    0,  138,
-        0,    0,  495,  130,  144,  129,  143,  145,  146,  147,
-      148,  149,  153,  154,  155,  158,  161,  165,  166,  170,
-      180,    0,    0,  662,  171,  201,  176,  175,  178,  183,
-      465,  182,  190,  455,  212,  188,  221,  208,  224,  234,
+        0,    0,  546,  538,  533,  521,   32,   35,  656,  656,
+       44,   62,   30,   41,   50,   51,  507,   64,   47,   66,
+       67,  499,   68,  487,   72,    0,  656,  465,  656,   87,
+       91,    0,    0,  100,  452,  109,    0,   74,   95,   87,
+       32,   96,  105,  110,   77,   97,   40,  113,  116,  112,
+      118,  120,  121,  122,  123,  125,    0,  137,    0,    0,
+      147,    0,    0,  449,  129,  126,  134,  143,  145,  147,
+      148,  149,  151,  153,  156,  160,  155,  167,  162,  175,
+      168,  159,  188,    0,    0,  656,  166,  197,  179,  185,
+      176,  200,  435,  186,  193,  216,  225,  205,  234,  221,
 
-      209,  230,  236,  221,  244,    0,  247,    0,  184,  248,
-      244,  269,  231,  247,  251,  258,  272,    0,  279,    0,
-      283,    0,  286,    0,  255,  290,    0,  293,    0,  270,
-      281,  455,  254,  297,    0,    0,    0,    0,  294,  662,
-      295,  308,    0,  310,    0,  257,  319,  328,  304,  331,
-        0,    0,    0,    0,  335,    0,    0,    0,    0,  316,
-      338,    0,    0,    0,    0,  333,  336,  447,  662,  429,
-      338,  340,  348,    0,    0,    0,    0,  428,  351,    0,
-      355,    0,  359,    0,  362,    0,  357,  427,  308,  369,
-      426,  662,  425,  662,  346,  365,  423,  662,  371,    0,
+      237,  247,  204,  230,  244,  213,  254,    0,  256,    0,
+      251,  258,  254,  279,  256,  259,  267,    0,  269,    0,
+      286,    0,  288,    0,  290,    0,  297,    0,  267,  299,
+        0,  301,    0,  288,  297,  421,  302,  310,    0,    0,
+        0,    0,  305,  656,  307,  319,    0,  321,    0,  322,
+      332,  335,    0,    0,    0,    0,  339,    0,    0,    0,
+        0,  342,    0,    0,    0,    0,  340,  349,    0,    0,
+        0,    0,  337,  345,  420,  656,  419,  346,  350,  358,
+        0,    0,    0,    0,  418,  360,    0,  362,    0,  417,
+      319,  369,  416,  656,  415,  656,  276,  364,  414,  656,
 
-        0,    0,    0,  378,    0,    0,    0,    0,  380,  421,
-      662,  388,  420,    0,  419,  662,  373,  418,  662,  372,
-      382,  417,  662,  398,  416,  662,  400,    0,  402,    0,
-        0,  385,  415,  662,  390,  275,  409,    0,    0,    0,
-        0,  405,  404,  406,  264,  412,  224,  129,  662,   87,
-      662,   47,  662,  662,  662,  434,  438,  441,  445,  449,
-      453,  457,  461,  465,  469,  473,  477,  481,  485,  489,
-      493,  497,  501,  505,  509,  513,  517,  521,  525,  529,
-      533,  537,  541,  545,  549,  553,  557,  561,  565,  569,
-      573,  577,  581,  585,  589,  593,  597,  601,  605,  609,
+      375,    0,    0,    0,    0,  413,  656,  384,  412,    0,
+      410,  656,  370,  409,  656,  370,  378,  408,  656,  366,
+      656,  394,    0,  396,    0,    0,  380,  316,  656,  377,
+      387,  398,    0,    0,    0,    0,  399,  402,  407,  271,
+      406,  228,  200,  656,  175,  656,   77,  656,  656,  656,
+      428,  432,  435,  439,  443,  447,  451,  455,  459,  463,
+      467,  471,  475,  479,  483,  487,  491,  495,  499,  503,
+      507,  511,  515,  519,  523,  527,  531,  535,  539,  543,
+      547,  551,  555,  559,  563,  567,  571,  575,  579,  583,
+      587,  591,  595,  599,  603,  607,  611,  615,  619,  623,
 
-      613,  617,  621,  625,  629,  633,  637,  641,  645,  649,
-      653,  657
+      627,  631,  635,  639,  643,  647,  651
     } ;
 
-static yyconst flex_int16_t yy_def[313] =
+static yyconst flex_int16_t yy_def[308] =
     {   0,
-      255,    1,  256,  256,  255,  257,  258,  258,  255,  255,
-      259,  259,   12,   12,   12,   12,   12,   12,   12,   12,
-       12,   12,   12,   12,   12,  260,  255,  257,  255,  261,
-      258,  262,  262,  263,   12,  257,  264,   12,   12,   12,
+      250,    1,  251,  251,  250,  252,  253,  253,  250,  250,
+      254,  254,   12,   12,   12,   12,   12,   12,   12,   12,
+       12,   12,   12,   12,   12,  255,  250,  252,  250,  256,
+      253,  257,  257,  258,   12,  252,  259,   12,   12,   12,
        12,   12,   12,   12,   12,   12,   12,   12,   12,   12,
-       12,   12,   12,   12,   12,  260,  261,  262,  262,  265,
-      266,  266,  255,   12,   12,   12,   12,   12,   12,   12,
+       12,   12,   12,   12,   12,   12,  255,  256,  257,  257,
+      260,  261,  261,  250,   12,   12,   12,   12,   12,   12,
        12,   12,   12,   12,   12,   12,   12,   12,   12,   12,
-      265,  266,  266,  255,   12,  267,   12,   12,   12,   12,
-       12,   12,   12,   36,  268,   12,  269,   12,   12,  270,
+       12,   12,  260,  261,  261,  250,   12,  262,   12,   12,
+       12,   12,   12,   12,   12,  263,  264,   12,  265,   12,
 
-       12,   12,   12,   12,  271,  272,  267,  272,   12,   12,
-       12,  273,   12,   12,   12,  257,  274,  275,  268,  275,
-      276,  277,  269,  277,   12,  278,  279,  270,  279,   12,
-       12,  280,   12,  271,  272,  272,  281,  281,   12,  255,
-       12,  282,  283,  273,  283,   12,  284,  285,  257,  274,
-      275,  275,  286,  286,  276,  277,  277,  287,  287,   12,
-      278,  279,  279,  288,  288,   12,   12,  289,  255,  290,
-       12,   12,  282,  283,  283,  291,  291,  292,  293,  294,
-      284,  294,  295,  296,  285,  296,  257,  297,   12,  298,
-      289,  255,  299,  255,   12,  300,  301,  255,  293,  294,
+       12,  266,   12,   12,   12,   12,  267,  268,  262,  268,
+       12,   12,   12,  269,   12,   12,  270,  271,  263,  271,
+      272,  273,  264,  273,  274,  275,  265,  275,   12,  276,
+      277,  266,  277,   12,   12,  278,   12,  267,  268,  268,
+      279,  279,   12,  250,   12,  280,  281,  269,  281,   12,
+      282,  270,  271,  271,  283,  283,  272,  273,  273,  284,
+      284,  274,  275,  275,  285,  285,   12,  276,  277,  277,
+      286,  286,   12,   12,  287,  250,  288,   12,   12,  280,
+      281,  281,  289,  289,  290,  291,  292,  282,  292,  293,
+       12,  294,  287,  250,  295,  250,   12,  296,  297,  250,
 
-      294,  302,  302,  295,  296,  296,  303,  303,  257,  304,
-      255,  305,  306,  306,  299,  255,   12,  307,  255,  307,
-      307,  301,  255,  285,  304,  255,  308,  309,  305,  309,
-      306,   12,  307,  255,  307,  307,  308,  309,  309,  310,
-      310,   12,  307,  307,  311,  307,  307,  312,  255,  307,
-      255,  312,  255,  255,    0,  255,  255,  255,  255,  255,
-      255,  255,  255,  255,  255,  255,  255,  255,  255,  255,
-      255,  255,  255,  255,  255,  255,  255,  255,  255,  255,
-      255,  255,  255,  255,  255,  255,  255,  255,  255,  255,
-      255,  255,  255,  255,  255,  255,  255,  255,  255,  255,
+      291,  292,  292,  298,  298,  299,  250,  300,  301,  301,
+      295,  250,   12,  302,  250,  302,  302,  297,  250,  299,
+      250,  303,  304,  300,  304,  301,   12,  302,  250,  302,
+      302,  303,  304,  304,  305,  305,   12,  302,  302,  306,
+      302,  302,  307,  250,  302,  250,  307,  250,  250,    0,
+      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
+      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
+      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
+      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
+      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
 
-      255,  255,  255,  255,  255,  255,  255,  255,  255,  255,
-      255,  255
+      250,  250,  250,  250,  250,  250,  250
     } ;
 
-static yyconst flex_int16_t yy_nxt[696] =
+static yyconst flex_int16_t yy_nxt[690] =
     {   0,
         6,    7,    8,    9,    6,    6,    6,    6,   10,   11,
        12,   13,   14,   15,   16,   17,   17,   18,   17,   17,
        17,   17,   19,   17,   20,   21,   22,   23,   24,   17,
        25,   17,   17,   31,   31,   32,   31,   31,   32,   35,
        33,   35,   41,   33,   28,   28,   28,   29,   34,   35,
-      249,   36,   37,   42,   43,   38,   35,   48,   35,   35,
-       35,   39,   28,   28,   28,   29,   34,   44,   35,   36,
-       37,   40,   46,   45,   65,   49,   47,   35,   35,   50,
-       52,   35,   35,   35,   54,   28,   58,   35,   55,   64,
-      254,   59,   31,   31,   32,   35,   75,   66,   35,   33,
+       35,   36,   37,   73,   42,   38,   35,   49,   68,   35,
+       35,   39,   28,   28,   28,   29,   34,   43,   45,   36,
+       37,   40,   44,   35,   46,   35,   35,   35,   51,   53,
+      244,   35,   50,   35,   55,   65,   35,   47,   56,   28,
+       59,   48,   31,   31,   32,   60,   35,   71,   67,   33,
 
-       28,   28,   28,   29,   68,   35,   48,   28,   37,   60,
-       60,   60,   61,   60,   35,   67,   60,   62,   35,   35,
-       35,   35,   72,   71,   73,   69,   35,   35,   35,   35,
-       35,   35,  253,   80,   76,   70,   28,   58,   35,   35,
-       28,   82,   59,   85,   77,   78,   83,   79,   87,   74,
-       76,   86,   35,   35,   35,   35,   35,   35,   35,   90,
-       94,   95,   35,   35,   35,   97,   88,   35,   91,   92,
-       35,   99,  100,   89,   35,   35,   93,  101,   98,   35,
-       35,  102,   28,   82,   35,   35,   96,   35,   83,  109,
-      112,   35,   35,   35,   76,   97,  110,   35,  104,   35,
+       28,   28,   28,   29,   35,   35,   35,   28,   37,   61,
+       61,   61,   62,   61,   35,   70,   61,   63,   66,   35,
+       49,   35,   35,   72,   74,   35,   69,   35,   75,   35,
+       35,   35,   35,   88,   35,   35,   82,   78,   35,   28,
+       59,   77,   87,   35,   76,   60,   80,   79,   81,   28,
+       84,   78,   35,   89,   35,   85,   35,   35,   35,   75,
+       35,   92,   35,   96,   35,   35,   90,   97,   35,   35,
+       93,   35,   94,   91,   99,   35,   35,   35,  249,  100,
+       95,  101,  102,  104,   35,   35,   98,  103,   35,  105,
+       28,   84,  111,  106,   35,   35,   85,  107,  107,   61,
 
-      103,  105,  105,   60,  106,  105,  139,  115,  105,  108,
-      111,  114,  117,  117,   60,  118,  117,   35,   35,  117,
-      120,  121,  121,   60,  122,  121,  130,  251,  121,  124,
-       35,  100,  125,   35,  126,  126,   60,  127,  126,   35,
-       35,  126,  129,  131,  132,   35,   28,  135,  133,   28,
-      137,  140,  136,   35,  147,  138,   35,   35,  148,  146,
-       35,   29,  170,   35,   35,  178,   35,  249,  141,  142,
-      142,   60,  143,  142,   28,  151,  142,  145,  219,   35,
-      152,   28,  153,  160,  149,   28,  156,  154,   28,  158,
-       35,  157,   28,  162,  159,   28,  164,  166,  163,   28,
+      108,  107,   35,  248,  107,  110,  112,  114,  113,   35,
+       75,   78,   99,   35,   35,  116,  117,  117,   61,  118,
+      117,  134,   35,  117,  120,  121,  121,   61,  122,  121,
+       35,  246,  121,  124,  125,  125,   61,  126,  125,   35,
+      137,  125,  128,  135,  102,  129,   35,  130,  130,   61,
+      131,  130,  136,   35,  130,  133,   28,  139,   28,  141,
+       35,  144,  140,   35,  142,   35,  151,   35,   35,   28,
+      153,   28,  155,  143,  244,  154,   35,  156,  145,  146,
+      146,   61,  147,  146,  150,   35,  146,  149,   28,  158,
+       28,  160,   28,  163,  159,  167,  161,   35,  164,   28,
 
-      135,  165,  244,   35,   35,  136,  171,   29,  172,  167,
-       28,  174,   28,  176,  187,  212,  175,   35,  177,  179,
-      179,   60,  180,  179,  188,   35,  179,  182,  183,  183,
-       60,  184,  183,   28,  151,  183,  186,   28,  156,  152,
-       28,  162,   35,  157,  190,   35,  163,   35,  196,   35,
-       28,  174,  189,   28,  200,   35,  175,   28,  202,  201,
-       29,   28,  205,  203,   28,  207,  195,  206,  219,  209,
-      208,   63,  214,   28,  200,  234,  220,  221,  217,  201,
-       28,  205,   35,   29,  235,  234,  206,  224,  227,  227,
-       60,  228,  227,  219,   35,  227,  230,  232,  242,  236,
+      165,   28,  169,   28,  171,  166,   35,  170,  213,  172,
+      177,   35,   28,  139,   35,  173,   35,  178,  140,  215,
+      179,   28,  181,   28,  183,  174,  208,  182,   35,  184,
+      185,   35,  186,  186,   61,  187,  186,   28,  153,  186,
+      189,   28,  158,  154,   28,  163,   35,  159,  190,   35,
+      164,   28,  169,  192,   35,   35,  191,  170,  198,   35,
+       28,  181,   28,  202,   28,  204,  182,  215,  203,  207,
+      205,   64,  210,  229,  197,  216,  217,   28,  202,   35,
+      215,  229,  230,  203,  222,  222,   61,  223,  222,   35,
+      215,  222,  225,  237,  227,  231,   28,  233,   28,  235,
 
-       28,  207,   28,  238,   28,  240,  208,  219,  239,  219,
-      241,   28,  238,  245,   35,  219,  243,  239,  219,  211,
-      198,  234,  194,  231,  226,  247,  223,  246,  216,  169,
-      211,  198,  194,  250,   26,   26,   26,   26,   28,   28,
-       28,   30,   30,   30,   30,   35,   35,   35,   35,   56,
-      192,   56,   56,   57,   57,   57,   57,   59,  169,   59,
-       59,   34,   34,   34,   34,   63,   63,  116,   63,   81,
-       81,   81,   81,   83,  113,   83,   83,  107,  107,  107,
-      107,  119,  119,  119,  119,  123,  123,  123,  123,  128,
-      128,  128,  128,  134,  134,  134,  134,  136,   84,  136,
+       28,  233,  234,  238,  236,  215,  234,  240,   35,  215,
+      215,  200,  229,  196,  239,  226,  221,  219,  212,  176,
+      207,  200,  196,  194,  176,  241,  242,  245,   26,   26,
+       26,   26,   28,   28,   28,   30,   30,   30,   30,   35,
+       35,   35,   35,   57,  115,   57,   57,   58,   58,   58,
+       58,   60,   86,   60,   60,   34,   34,   34,   34,   64,
+       64,   35,   64,   83,   83,   83,   83,   85,   29,   85,
+       85,  109,  109,  109,  109,  119,  119,  119,  119,  123,
+      123,  123,  123,  127,  127,  127,  127,  132,  132,  132,
+      132,  138,  138,  138,  138,  140,   54,  140,  140,  148,
 
-      136,  144,  144,  144,  144,  150,  150,  150,  150,  152,
-       35,  152,  152,  155,  155,  155,  155,  157,   29,  157,
-      157,  161,  161,  161,  161,  163,   53,  163,  163,  168,
-      168,  168,  168,  138,   51,  138,  138,  173,  173,  173,
-      173,  175,   35,  175,  175,  181,  181,  181,  181,  185,
-      185,  185,  185,  154,   29,  154,  154,  159,  255,  159,
-      159,  165,   27,  165,  165,  191,  191,  191,  191,  193,
-      193,  193,  193,  177,   27,  177,  177,  197,  197,  197,
-      197,  199,  199,  199,  199,  201,  255,  201,  201,  204,
-      204,  204,  204,  206,  255,  206,  206,  210,  210,  210,
+      148,  148,  148,  152,  152,  152,  152,  154,   52,  154,
+      154,  157,  157,  157,  157,  159,   35,  159,  159,  162,
+      162,  162,  162,  164,   29,  164,  164,  168,  168,  168,
+      168,  170,  250,  170,  170,  175,  175,  175,  175,  142,
+       27,  142,  142,  180,  180,  180,  180,  182,   27,  182,
+      182,  188,  188,  188,  188,  156,  250,  156,  156,  161,
+      250,  161,  161,  166,  250,  166,  166,  172,  250,  172,
+      172,  193,  193,  193,  193,  195,  195,  195,  195,  184,
+      250,  184,  184,  199,  199,  199,  199,  201,  201,  201,
+      201,  203,  250,  203,  203,  206,  206,  206,  206,  209,
 
-      210,  213,  213,  213,  213,  215,  215,  215,  215,  218,
-      218,  218,  218,  222,  222,  222,  222,  203,  255,  203,
-      203,  208,  255,  208,  208,  225,  225,  225,  225,  229,
-      229,  229,  229,  214,  255,  214,  214,  233,  233,  233,
-      233,  237,  237,  237,  237,  239,  255,  239,  239,  241,
-      255,  241,  241,  248,  248,  248,  248,  252,  252,  252,
-      252,    5,  255,  255,  255,  255,  255,  255,  255,  255,
-      255,  255,  255,  255,  255,  255,  255,  255,  255,  255,
-      255,  255,  255,  255,  255,  255,  255,  255,  255,  255,
-      255,  255,  255,  255,  255
-
+      209,  209,  209,  211,  211,  211,  211,  214,  214,  214,
+      214,  218,  218,  218,  218,  205,  250,  205,  205,  220,
+      220,  220,  220,  224,  224,  224,  224,  210,  250,  210,
+      210,  228,  228,  228,  228,  232,  232,  232,  232,  234,
+      250,  234,  234,  236,  250,  236,  236,  243,  243,  243,
+      243,  247,  247,  247,  247,    5,  250,  250,  250,  250,
+      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
+      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
+      250,  250,  250,  250,  250,  250,  250,  250,  250
     } ;
 
-static yyconst flex_int16_t yy_chk[696] =
+static yyconst flex_int16_t yy_chk[690] =
     {   0,
         1,    1,    1,    1,    1,    1,    1,    1,    1,    1,
         1,    1,    1,    1,    1,    1,    1,    1,    1,    1,
         1,    1,    1,    1,    1,    1,    1,    1,    1,    1,
         1,    1,    1,    7,    7,    7,    8,    8,    8,   13,
-        7,   15,   13,    8,   11,   11,   11,   11,   11,   14,
-      252,   11,   11,   14,   15,   11,   19,   19,   18,   16,
-       39,   11,   12,   12,   12,   12,   12,   16,   20,   12,
-       12,   12,   18,   16,   39,   20,   18,   21,   23,   21,
-       23,   25,   50,   42,   25,   30,   30,   38,   25,   38,
-      250,   30,   31,   31,   31,   40,   50,   40,   41,   31,
+        7,   41,   13,    8,   11,   11,   11,   11,   11,   47,
+       14,   11,   11,   47,   14,   11,   19,   19,   41,   15,
+       16,   11,   12,   12,   12,   12,   12,   14,   16,   12,
+       12,   12,   15,   18,   16,   20,   21,   23,   21,   23,
+      247,   25,   20,   38,   25,   38,   45,   18,   25,   30,
+       30,   18,   31,   31,   31,   30,   40,   45,   40,   31,
 
-       34,   34,   34,   34,   42,   43,   43,   34,   34,   36,
-       36,   36,   36,   36,   44,   41,   36,   36,   45,   46,
-       47,   48,   47,   46,   48,   44,   49,   52,   51,   54,
-       53,   55,  248,   54,   55,   45,   57,   57,   66,   64,
-       60,   60,   57,   64,   52,   53,   60,   53,   66,   49,
-       51,   65,   67,   65,   68,   69,   70,   71,   72,   69,
-       73,   74,   73,   74,   75,   76,   67,   76,   70,   71,
-       77,   78,   78,   68,   78,   79,   72,   78,   77,   80,
-       85,   79,   81,   81,   88,   87,   75,   89,   81,   87,
-       90,   92,   90,  109,   96,   96,   88,   96,   85,   93,
+       34,   34,   34,   34,   39,   42,   46,   34,   34,   36,
+       36,   36,   36,   36,   43,   43,   36,   36,   39,   44,
+       44,   50,   48,   46,   48,   49,   42,   51,   49,   52,
+       53,   54,   55,   66,   56,   66,   55,   56,   65,   58,
+       58,   51,   65,   67,   50,   58,   54,   53,   54,   61,
+       61,   52,   68,   67,   69,   61,   70,   71,   72,   70,
+       73,   71,   74,   75,   77,   75,   68,   76,   82,   76,
+       72,   79,   73,   69,   78,   87,   78,   81,  245,   79,
+       74,   80,   80,   81,   80,   91,   77,   80,   89,   82,
+       83,   83,   89,   87,   90,   94,   83,   88,   88,   88,
 
-       80,   86,   86,   86,   86,   86,  109,   93,   86,   86,
-       89,   92,   95,   95,   95,   95,   95,   98,  101,   95,
-       95,   97,   97,   97,   97,   97,  101,  247,   97,   97,
-      104,   99,   98,   99,  100,  100,  100,  100,  100,  102,
-      113,  100,  100,  102,  103,  103,  105,  105,  104,  107,
-      107,  110,  105,  111,  114,  107,  114,  110,  115,  113,
-      115,  116,  133,  133,  125,  146,  146,  245,  111,  112,
-      112,  112,  112,  112,  117,  117,  112,  112,  236,  130,
-      117,  119,  119,  125,  116,  121,  121,  119,  123,  123,
-      131,  121,  126,  126,  123,  128,  128,  130,  126,  134,
+       88,   88,   95,  243,   88,   88,   90,   92,   91,   92,
+       95,   98,   98,  103,   98,   94,   96,   96,   96,   96,
+       96,  103,  106,   96,   96,   97,   97,   97,   97,   97,
+      100,  242,   97,   97,   99,   99,   99,   99,   99,  104,
+      106,   99,   99,  104,  101,  100,  101,  102,  102,  102,
+      102,  102,  105,  105,  102,  102,  107,  107,  109,  109,
+      111,  112,  107,  113,  109,  115,  116,  112,  116,  117,
+      117,  119,  119,  111,  240,  117,  129,  119,  113,  114,
+      114,  114,  114,  114,  115,  197,  114,  114,  121,  121,
+      123,  123,  125,  125,  121,  129,  123,  134,  125,  127,
 
-      134,  128,  236,  139,  141,  134,  139,  149,  141,  131,
-      142,  142,  144,  144,  149,  189,  142,  189,  144,  147,
-      147,  147,  147,  147,  160,  160,  147,  147,  148,  148,
-      148,  148,  148,  150,  150,  148,  148,  155,  155,  150,
-      161,  161,  166,  155,  167,  167,  161,  171,  172,  172,
-      173,  173,  166,  179,  179,  195,  173,  181,  181,  179,
-      187,  183,  183,  181,  185,  185,  171,  183,  196,  187,
-      185,  190,  190,  199,  199,  220,  196,  196,  195,  199,
-      204,  204,  217,  209,  220,  221,  204,  209,  212,  212,
-      212,  212,  212,  235,  232,  212,  212,  217,  232,  221,
+      127,  130,  130,  132,  132,  127,  135,  130,  197,  132,
+      137,  137,  138,  138,  143,  134,  145,  143,  138,  228,
+      145,  146,  146,  148,  148,  135,  191,  146,  191,  148,
+      150,  150,  151,  151,  151,  151,  151,  152,  152,  151,
+      151,  157,  157,  152,  162,  162,  173,  157,  167,  167,
+      162,  168,  168,  174,  174,  178,  173,  168,  179,  179,
+      180,  180,  186,  186,  188,  188,  180,  198,  186,  220,
+      188,  192,  192,  216,  178,  198,  198,  201,  201,  213,
+      230,  217,  216,  201,  208,  208,  208,  208,  208,  227,
+      231,  208,  208,  227,  213,  217,  222,  222,  224,  224,
 
-      224,  224,  227,  227,  229,  229,  224,  243,  227,  244,
-      229,  237,  237,  242,  242,  246,  235,  237,  233,  225,
-      222,  218,  215,  213,  210,  244,  197,  243,  193,  191,
-      188,  178,  170,  246,  256,  256,  256,  256,  257,  257,
-      257,  258,  258,  258,  258,  259,  259,  259,  259,  260,
-      168,  260,  260,  261,  261,  261,  261,  262,  132,  262,
-      262,  263,  263,  263,  263,  264,  264,   94,  264,  265,
-      265,  265,  265,  266,   91,  266,  266,  267,  267,  267,
-      267,  268,  268,  268,  268,  269,  269,  269,  269,  270,
-      270,  270,  270,  271,  271,  271,  271,  272,   63,  272,
+      232,  232,  222,  230,  224,  238,  232,  237,  237,  241,
+      239,  218,  214,  211,  231,  209,  206,  199,  195,  193,
+      190,  185,  177,  175,  136,  238,  239,  241,  251,  251,
+      251,  251,  252,  252,  252,  253,  253,  253,  253,  254,
+      254,  254,  254,  255,   93,  255,  255,  256,  256,  256,
+      256,  257,   64,  257,  257,  258,  258,  258,  258,  259,
+      259,   35,  259,  260,  260,  260,  260,  261,   28,  261,
+      261,  262,  262,  262,  262,  263,  263,  263,  263,  264,
+      264,  264,  264,  265,  265,  265,  265,  266,  266,  266,
+      266,  267,  267,  267,  267,  268,   24,  268,  268,  269,
 
-      272,  273,  273,  273,  273,  274,  274,  274,  274,  275,
-       35,  275,  275,  276,  276,  276,  276,  277,   28,  277,
-      277,  278,  278,  278,  278,  279,   24,  279,  279,  280,
-      280,  280,  280,  281,   22,  281,  281,  282,  282,  282,
-      282,  283,   17,  283,  283,  284,  284,  284,  284,  285,
-      285,  285,  285,  286,    6,  286,  286,  287,    5,  287,
-      287,  288,    4,  288,  288,  289,  289,  289,  289,  290,
-      290,  290,  290,  291,    3,  291,  291,  292,  292,  292,
-      292,  293,  293,  293,  293,  294,    0,  294,  294,  295,
-      295,  295,  295,  296,    0,  296,  296,  297,  297,  297,
+      269,  269,  269,  270,  270,  270,  270,  271,   22,  271,
+      271,  272,  272,  272,  272,  273,   17,  273,  273,  274,
+      274,  274,  274,  275,    6,  275,  275,  276,  276,  276,
+      276,  277,    5,  277,  277,  278,  278,  278,  278,  279,
+        4,  279,  279,  280,  280,  280,  280,  281,    3,  281,
+      281,  282,  282,  282,  282,  283,    0,  283,  283,  284,
+        0,  284,  284,  285,    0,  285,  285,  286,    0,  286,
+      286,  287,  287,  287,  287,  288,  288,  288,  288,  289,
+        0,  289,  289,  290,  290,  290,  290,  291,  291,  291,
+      291,  292,    0,  292,  292,  293,  293,  293,  293,  294,
 
-      297,  298,  298,  298,  298,  299,  299,  299,  299,  300,
-      300,  300,  300,  301,  301,  301,  301,  302,    0,  302,
-      302,  303,    0,  303,  303,  304,  304,  304,  304,  305,
-      305,  305,  305,  306,    0,  306,  306,  307,  307,  307,
-      307,  308,  308,  308,  308,  309,    0,  309,  309,  310,
-        0,  310,  310,  311,  311,  311,  311,  312,  312,  312,
-      312,  255,  255,  255,  255,  255,  255,  255,  255,  255,
-      255,  255,  255,  255,  255,  255,  255,  255,  255,  255,
-      255,  255,  255,  255,  255,  255,  255,  255,  255,  255,
-      255,  255,  255,  255,  255
-
+      294,  294,  294,  295,  295,  295,  295,  296,  296,  296,
+      296,  297,  297,  297,  297,  298,    0,  298,  298,  299,
+      299,  299,  299,  300,  300,  300,  300,  301,    0,  301,
+      301,  302,  302,  302,  302,  303,  303,  303,  303,  304,
+        0,  304,  304,  305,    0,  305,  305,  306,  306,  306,
+      306,  307,  307,  307,  307,  250,  250,  250,  250,  250,
+      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
+      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
+      250,  250,  250,  250,  250,  250,  250,  250,  250
     } ;
 
 #define YY_TRAILING_MASK 0x2000
@@ -776,7 +773,8 @@ goto find_rule; \
  * syntax; if the target string has to contain "," or ":" the new
  * syntax's "target=" should be used.
  */
-#line 31 "libxlu_disk_l.l"
+
+#line 35 "libxlu_disk_l.l"
 #include "libxlu_disk_i.h"
 
 #define YY_NO_INPUT
@@ -885,7 +883,7 @@ static int vdev_and_devtype(DiskParseCon
 #define DPC ((DiskParseContext*)yyextra)
 
 
-#line 889 "libxlu_disk_l.c"
+#line 887 "libxlu_disk_l.c"
 
 #define INITIAL 0
 #define LEXERR 1
@@ -1121,12 +1119,12 @@ YY_DECL
 	register int yy_act;
     struct yyguts_t * yyg = (struct yyguts_t*)yyscanner;
 
-#line 151 "libxlu_disk_l.l"
+#line 155 "libxlu_disk_l.l"
 
 
  /*----- the scanner rules which do the parsing -----*/
 
-#line 1130 "libxlu_disk_l.c"
+#line 1128 "libxlu_disk_l.c"
 
 	if ( !yyg->yy_init )
 		{
@@ -1190,14 +1188,14 @@ yy_match:
 			while ( yy_chk[yy_base[yy_current_state] + yy_c] != yy_current_state )
 				{
 				yy_current_state = (int) yy_def[yy_current_state];
-				if ( yy_current_state >= 256 )
+				if ( yy_current_state >= 251 )
 					yy_c = yy_meta[(unsigned int) yy_c];
 				}
 			yy_current_state = yy_nxt[yy_base[yy_current_state] + (unsigned int) yy_c];
 			*yyg->yy_state_ptr++ = yy_current_state;
 			++yy_cp;
 			}
-		while ( yy_current_state != 255 );
+		while ( yy_current_state != 250 );
 
 yy_find_action:
 		yy_current_state = *--yyg->yy_state_ptr;
@@ -1247,72 +1245,72 @@ do_action:	/* This label is used only to
 case 1:
 /* rule 1 can match eol */
 YY_RULE_SETUP
-#line 155 "libxlu_disk_l.l"
+#line 159 "libxlu_disk_l.l"
 { /* ignore whitespace before parameters */ }
 	YY_BREAK
 /* ordinary parameters setting enums or strings */
 case 2:
 /* rule 2 can match eol */
 YY_RULE_SETUP
-#line 159 "libxlu_disk_l.l"
+#line 163 "libxlu_disk_l.l"
 { STRIP(','); setformat(DPC, FROMEQUALS); }
 	YY_BREAK
 case 3:
 YY_RULE_SETUP
-#line 161 "libxlu_disk_l.l"
+#line 165 "libxlu_disk_l.l"
 { DPC->disk->is_cdrom = 1; }
 	YY_BREAK
 case 4:
 YY_RULE_SETUP
-#line 162 "libxlu_disk_l.l"
+#line 166 "libxlu_disk_l.l"
 { DPC->disk->is_cdrom = 1; }
 	YY_BREAK
 case 5:
 YY_RULE_SETUP
-#line 163 "libxlu_disk_l.l"
+#line 167 "libxlu_disk_l.l"
 { DPC->disk->is_cdrom = 0; }
 	YY_BREAK
 case 6:
 /* rule 6 can match eol */
 YY_RULE_SETUP
-#line 164 "libxlu_disk_l.l"
+#line 168 "libxlu_disk_l.l"
 { xlu__disk_err(DPC,yytext,"unknown value for type"); }
 	YY_BREAK
 case 7:
 /* rule 7 can match eol */
 YY_RULE_SETUP
-#line 166 "libxlu_disk_l.l"
+#line 170 "libxlu_disk_l.l"
 { STRIP(','); setaccess(DPC, FROMEQUALS); }
 	YY_BREAK
 case 8:
 /* rule 8 can match eol */
 YY_RULE_SETUP
-#line 167 "libxlu_disk_l.l"
+#line 171 "libxlu_disk_l.l"
 { STRIP(','); setbackendtype(DPC,FROMEQUALS); }
 	YY_BREAK
 case 9:
 /* rule 9 can match eol */
 YY_RULE_SETUP
-#line 169 "libxlu_disk_l.l"
+#line 173 "libxlu_disk_l.l"
 { STRIP(','); SAVESTRING("vdev", vdev, FROMEQUALS); }
 	YY_BREAK
 case 10:
 /* rule 10 can match eol */
 YY_RULE_SETUP
-#line 170 "libxlu_disk_l.l"
+#line 174 "libxlu_disk_l.l"
 { STRIP(','); SAVESTRING("script", script, FROMEQUALS); }
 	YY_BREAK
 /* the target magic parameter, eats the rest of the string */
 case 11:
 YY_RULE_SETUP
-#line 174 "libxlu_disk_l.l"
+#line 178 "libxlu_disk_l.l"
 { STRIP(','); SAVESTRING("target", pdev_path, FROMEQUALS); }
 	YY_BREAK
 /* unknown parameters */
 case 12:
 /* rule 12 can match eol */
 YY_RULE_SETUP
-#line 178 "libxlu_disk_l.l"
+#line 182 "libxlu_disk_l.l"
 { xlu__disk_err(DPC,yytext,"unknown parameter"); }
 	YY_BREAK
 /* deprecated prefixes */
@@ -1320,7 +1318,7 @@ YY_RULE_SETUP
    * matched the whole string, so these patterns take precedence */
 case 13:
 YY_RULE_SETUP
-#line 185 "libxlu_disk_l.l"
+#line 189 "libxlu_disk_l.l"
 {
                     STRIP(':');
                     DPC->had_depr_prefix=1; DEPRECATE("use `[format=]...,'");
@@ -1329,24 +1327,31 @@ YY_RULE_SETUP
 	YY_BREAK
 case 14:
 YY_RULE_SETUP
-#line 191 "libxlu_disk_l.l"
+#line 195 "libxlu_disk_l.l"
 {
-		    STRIP(':');
+                    char *newscript;
+                    STRIP(':');
                     DPC->had_depr_prefix=1; DEPRECATE("use `script=...'");
-		    SAVESTRING("script", script, yytext);
-		}
+                    if (asprintf(&newscript, "block-%s", yytext) < 0) {
+                            xlu__disk_err(DPC,yytext,"unable to format script");
+                            return 0;
+                    }
+                    savestring(DPC, "script respecified",
+                               &DPC->disk->script, newscript);
+                    free(newscript);
+                }
 	YY_BREAK
 case 15:
 *yy_cp = yyg->yy_hold_char; /* undo effects of setting up yytext */
 yyg->yy_c_buf_p = yy_cp = yy_bp + 8;
 YY_DO_BEFORE_ACTION; /* set up yytext again */
 YY_RULE_SETUP
-#line 197 "libxlu_disk_l.l"
+#line 208 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
 case 16:
 YY_RULE_SETUP
-#line 198 "libxlu_disk_l.l"
+#line 209 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
 case 17:
@@ -1354,7 +1359,7 @@ case 17:
 yyg->yy_c_buf_p = yy_cp = yy_bp + 4;
 YY_DO_BEFORE_ACTION; /* set up yytext again */
 YY_RULE_SETUP
-#line 199 "libxlu_disk_l.l"
+#line 210 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
 case 18:
@@ -1362,7 +1367,7 @@ case 18:
 yyg->yy_c_buf_p = yy_cp = yy_bp + 6;
 YY_DO_BEFORE_ACTION; /* set up yytext again */
 YY_RULE_SETUP
-#line 200 "libxlu_disk_l.l"
+#line 211 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
 case 19:
@@ -1370,7 +1375,7 @@ case 19:
 yyg->yy_c_buf_p = yy_cp = yy_bp + 5;
 YY_DO_BEFORE_ACTION; /* set up yytext again */
 YY_RULE_SETUP
-#line 201 "libxlu_disk_l.l"
+#line 212 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
 case 20:
@@ -1378,13 +1383,13 @@ case 20:
 yyg->yy_c_buf_p = yy_cp = yy_bp + 4;
 YY_DO_BEFORE_ACTION; /* set up yytext again */
 YY_RULE_SETUP
-#line 202 "libxlu_disk_l.l"
+#line 213 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
 case 21:
 /* rule 21 can match eol */
 YY_RULE_SETUP
-#line 204 "libxlu_disk_l.l"
+#line 215 "libxlu_disk_l.l"
 {
 		  xlu__disk_err(DPC,yytext,"unknown deprecated disk prefix");
 		  return 0;
@@ -1394,7 +1399,7 @@ YY_RULE_SETUP
 case 22:
 /* rule 22 can match eol */
 YY_RULE_SETUP
-#line 211 "libxlu_disk_l.l"
+#line 222 "libxlu_disk_l.l"
 {
     STRIP(',');
 
@@ -1423,7 +1428,7 @@ YY_RULE_SETUP
 	YY_BREAK
 case 23:
 YY_RULE_SETUP
-#line 237 "libxlu_disk_l.l"
+#line 248 "libxlu_disk_l.l"
 {
     BEGIN(LEXERR);
     yymore();
@@ -1431,17 +1436,17 @@ YY_RULE_SETUP
 	YY_BREAK
 case 24:
 YY_RULE_SETUP
-#line 241 "libxlu_disk_l.l"
+#line 252 "libxlu_disk_l.l"
 {
     xlu__disk_err(DPC,yytext,"bad disk syntax"); return 0;
 }
 	YY_BREAK
 case 25:
 YY_RULE_SETUP
-#line 244 "libxlu_disk_l.l"
+#line 255 "libxlu_disk_l.l"
 YY_FATAL_ERROR( "flex scanner jammed" );
 	YY_BREAK
-#line 1445 "libxlu_disk_l.c"
+#line 1450 "libxlu_disk_l.c"
 			case YY_STATE_EOF(INITIAL):
 			case YY_STATE_EOF(LEXERR):
 				yyterminate();
@@ -1705,7 +1710,7 @@ static int yy_get_next_buffer (yyscan_t 
 		while ( yy_chk[yy_base[yy_current_state] + yy_c] != yy_current_state )
 			{
 			yy_current_state = (int) yy_def[yy_current_state];
-			if ( yy_current_state >= 256 )
+			if ( yy_current_state >= 251 )
 				yy_c = yy_meta[(unsigned int) yy_c];
 			}
 		yy_current_state = yy_nxt[yy_base[yy_current_state] + (unsigned int) yy_c];
@@ -1729,11 +1734,11 @@ static int yy_get_next_buffer (yyscan_t 
 	while ( yy_chk[yy_base[yy_current_state] + yy_c] != yy_current_state )
 		{
 		yy_current_state = (int) yy_def[yy_current_state];
-		if ( yy_current_state >= 256 )
+		if ( yy_current_state >= 251 )
 			yy_c = yy_meta[(unsigned int) yy_c];
 		}
 	yy_current_state = yy_nxt[yy_base[yy_current_state] + (unsigned int) yy_c];
-	yy_is_jam = (yy_current_state == 255);
+	yy_is_jam = (yy_current_state == 250);
 	if ( ! yy_is_jam )
 		*yyg->yy_state_ptr++ = yy_current_state;
 
@@ -2533,4 +2538,4 @@ void xlu__disk_yyfree (void * ptr , yysc
 
 #define YYTABLES_NAME "yytables"
 
-#line 244 "libxlu_disk_l.l"
+#line 255 "libxlu_disk_l.l"
diff -r 8e1090d822e5 -r bfd5e107774c tools/libxl/libxlu_disk_l.h
--- a/tools/libxl/libxlu_disk_l.h	Fri Aug 03 08:21:19 2012 +0100
+++ b/tools/libxl/libxlu_disk_l.h	Fri Aug 03 09:01:30 2012 +0100
@@ -3,8 +3,12 @@
 #define xlu__disk_yyIN_HEADER 1
 
 #line 6 "libxlu_disk_l.h"
+#line 31 "libxlu_disk_l.l"
+#include "libxl_osdeps.h" /* must come before any other headers */
 
-#line 8 "libxlu_disk_l.h"
+
+
+#line 12 "libxlu_disk_l.h"
 
 #define  YY_INT_ALIGNED short int
 
@@ -340,8 +344,8 @@ extern int xlu__disk_yylex (yyscan_t yys
 #undef YY_DECL
 #endif
 
-#line 244 "libxlu_disk_l.l"
+#line 255 "libxlu_disk_l.l"
 
-#line 346 "libxlu_disk_l.h"
+#line 350 "libxlu_disk_l.h"
 #undef xlu__disk_yyIN_HEADER
 #endif /* xlu__disk_yyHEADER_H */
diff -r 8e1090d822e5 -r bfd5e107774c tools/libxl/libxlu_disk_l.l
--- a/tools/libxl/libxlu_disk_l.l	Fri Aug 03 08:21:19 2012 +0100
+++ b/tools/libxl/libxlu_disk_l.l	Fri Aug 03 09:01:30 2012 +0100
@@ -27,6 +27,10 @@
  * syntax's "target=" should be used.
  */
 
+%top{
+#include "libxl_osdeps.h" /* must come before any other headers */
+}
+
 %{
 #include "libxlu_disk_i.h"
 
@@ -188,11 +192,18 @@ target=.*	{ STRIP(','); SAVESTRING("targ
                     setformat(DPC, yytext);
                  }
 
-iscsi:|e?nbd:drbd:/.* {
-		    STRIP(':');
+(iscsi|e?nbd|drbd):/.* {
+                    char *newscript;
+                    STRIP(':');
                     DPC->had_depr_prefix=1; DEPRECATE("use `script=...'");
-		    SAVESTRING("script", script, yytext);
-		}
+                    if (asprintf(&newscript, "block-%s", yytext) < 0) {
+                            xlu__disk_err(DPC,yytext,"unable to format script");
+                            return 0;
+                    }
+                    savestring(DPC, "script respecified",
+                               &DPC->disk->script, newscript);
+                    free(newscript);
+                }
 
 tapdisk:/.*	{ DPC->had_depr_prefix=1; DEPRECATE(0); }
 tap2?:/.*	{ DPC->had_depr_prefix=1; DEPRECATE(0); }

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 08:09:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 08:09:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxCwH-0000J1-Pk; Fri, 03 Aug 2012 08:08:53 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SxCwG-0000Iw-Le
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 08:08:52 +0000
Received: from [85.158.143.99:42916] by server-2.bemta-4.messagelabs.com id
	61/96-17938-3178B105; Fri, 03 Aug 2012 08:08:51 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-216.messagelabs.com!1343981330!29159367!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc4OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14805 invoked from network); 3 Aug 2012 08:08:50 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-13.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 08:08:50 -0000
X-IronPort-AV: E=Sophos;i="4.77,705,1336348800"; d="scan'208";a="13834877"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Aug 2012 08:08:50 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Fri, 3 Aug 2012
	09:08:50 +0100
Message-ID: <1343981328.21372.9.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <ian.jackson@eu.citrix.com>
Date: Fri, 3 Aug 2012 09:08:48 +0100
In-Reply-To: <1343928442-23966-14-git-send-email-ian.jackson@eu.citrix.com>
References: <1343928442-23966-1-git-send-email-ian.jackson@eu.citrix.com>
	<1343928442-23966-14-git-send-email-ian.jackson@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 13/13] libxl: -Wunused-parameter
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-02 at 18:27 +0100, Ian Jackson wrote:
> diff --git a/tools/libxl/libxl_blktap2.c b/tools/libxl/libxl_blktap2.c
> index 2c40182..660a669 100644
> --- a/tools/libxl/libxl_blktap2.c
> +++ b/tools/libxl/libxl_blktap2.c
> @@ -19,6 +19,7 @@
> 
>  int libxl__blktap_enabled(libxl__gc *gc)
>  {
> +    USE(gc);
>      const char *msg;
>      return !tap_ctl_check(&msg);
>  } 

You'll need to consider libxl_noblktap2.c too. And perhaps some other
conditionally compiled stuff, e.g. libxl_netbsd.c

Ian.



From xen-devel-bounces@lists.xen.org Fri Aug 03 08:33:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 08:33:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxDJo-0000Uc-SI; Fri, 03 Aug 2012 08:33:12 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SxDJn-0000UX-Mc
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 08:33:12 +0000
Received: from [85.158.139.83:63199] by server-9.bemta-5.messagelabs.com id
	F4/E6-01069-6CC8B105; Fri, 03 Aug 2012 08:33:10 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-182.messagelabs.com!1343982790!18784570!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc4OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30414 invoked from network); 3 Aug 2012 08:33:10 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 08:33:10 -0000
X-IronPort-AV: E=Sophos;i="4.77,706,1336348800"; d="scan'208";a="13835295"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Aug 2012 08:32:51 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Fri, 3 Aug 2012
	09:32:52 +0100
Message-ID: <1343982770.21372.14.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <ian.jackson@eu.citrix.com>
Date: Fri, 3 Aug 2012 09:32:50 +0100
In-Reply-To: <1343928442-23966-3-git-send-email-ian.jackson@eu.citrix.com>
References: <1343928442-23966-1-git-send-email-ian.jackson@eu.citrix.com>
	<1343928442-23966-3-git-send-email-ian.jackson@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 02/13] libxl: react correctly to bootloader
 pty master POLLHUP
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-02 at 18:27 +0100, Ian Jackson wrote:
> Receiving POLLHUP on the bootloader master pty is not an error.
> Hopefully it means that the bootloader has exited and therefore the
> pty slave side has no process group any more.  (At least NetBSD
> indicates POLLHUP on the master in this case.)
> 
> So send the bootloader SIGTERM; if it has already exited then this has
> no effect (except that on some versions of NetBSD it erroneously
> returns ESRCH and we print a harmless warning) and we will then
> collect the bootloader's exit status and be satisfied.
> 
> However, we remember that we have done this so that if we got POLLHUP
> for some other reason than that the bootloader exited we report
> something resembling a useful message.
> 
> In order to implement this we need to provide a way for users of
> datacopier to handle POLLHUP rather than treating it as fatal.
> 
> We rename bootloader_abort to bootloader_stop since it now no longer
> only applies to error situations.
> 
> Signed-off-by: Roger Pau Monne <roger.pau@citrix.com>
> Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
> 
> -
> Changes in v5:
>  * Correctly call dc->callback_pollhup, not dc->callback,
>    in datacopier_pollhup_handled.

Therefore:

Acked-by: Ian Campbell <ian.campbell@citrix.com>

> 
> Changes in v4:
>  * Track whether we sent SIGTERM due to POLLHUP so we can report
>    messages properly.
> 
> Changes in v3:
>  * datacopier provides new interface for handling POLLHUP
>  * Do not ignore errors on the xenconsole pty
>  * Rename bootloader_abort.
> ---
>  tools/libxl/libxl_aoutils.c    |   23 +++++++++++++++++++++++
>  tools/libxl/libxl_bootloader.c |   39 +++++++++++++++++++++++++++++----------
>  tools/libxl/libxl_internal.h   |    7 +++++--
>  3 files changed, 57 insertions(+), 12 deletions(-)
> 
> diff --git a/tools/libxl/libxl_aoutils.c b/tools/libxl/libxl_aoutils.c
> index 99972a2..983a60a 100644
> --- a/tools/libxl/libxl_aoutils.c
> +++ b/tools/libxl/libxl_aoutils.c
> @@ -97,11 +97,31 @@ void libxl__datacopier_prefixdata(libxl__egc *egc, libxl__datacopier_state *dc,
>      LIBXL_TAILQ_INSERT_TAIL(&dc->bufs, buf, entry);
>  }
> 
> +static int datacopier_pollhup_handled(libxl__egc *egc,
> +                                      libxl__datacopier_state *dc,
> +                                      short revents, int onwrite)
> +{
> +    STATE_AO_GC(dc->ao);
> +
> +    if (dc->callback_pollhup && (revents & POLLHUP)) {
> +        LOG(DEBUG, "received POLLHUP on %s during copy of %s",
> +            onwrite ? dc->writewhat : dc->readwhat,
> +            dc->copywhat);
> +        libxl__datacopier_kill(dc);
> +        dc->callback_pollhup(egc, dc, onwrite, -1);
> +        return 1;
> +    }
> +    return 0;
> +}
> +
>  static void datacopier_readable(libxl__egc *egc, libxl__ev_fd *ev,
>                                  int fd, short events, short revents) {
>      libxl__datacopier_state *dc = CONTAINER_OF(ev, *dc, toread);
>      STATE_AO_GC(dc->ao);
> 
> +    if (datacopier_pollhup_handled(egc, dc, revents, 0))
> +        return;
> +
>      if (revents & ~POLLIN) {
>          LOG(ERROR, "unexpected poll event 0x%x (should be POLLIN)"
>              " on %s during copy of %s", revents, dc->readwhat, dc->copywhat);
> @@ -163,6 +183,9 @@ static void datacopier_writable(libxl__egc *egc, libxl__ev_fd *ev,
>      libxl__datacopier_state *dc = CONTAINER_OF(ev, *dc, towrite);
>      STATE_AO_GC(dc->ao);
> 
> +    if (datacopier_pollhup_handled(egc, dc, revents, 1))
> +        return;
> +
>      if (revents & ~POLLOUT) {
>          LOG(ERROR, "unexpected poll event 0x%x (should be POLLOUT)"
>              " on %s during copy of %s", revents, dc->writewhat, dc->copywhat);
> diff --git a/tools/libxl/libxl_bootloader.c b/tools/libxl/libxl_bootloader.c
> index ef5a91b..bfc1b56 100644
> --- a/tools/libxl/libxl_bootloader.c
> +++ b/tools/libxl/libxl_bootloader.c
> @@ -215,6 +215,7 @@ void libxl__bootloader_init(libxl__bootloader_state *bl)
>      libxl__domaindeathcheck_init(&bl->deathcheck);
>      bl->keystrokes.ao = bl->ao;  libxl__datacopier_init(&bl->keystrokes);
>      bl->display.ao = bl->ao;     libxl__datacopier_init(&bl->display);
> +    bl->got_pollhup = 0;
>  }
> 
>  static void bootloader_cleanup(libxl__egc *egc, libxl__bootloader_state *bl)
> @@ -275,7 +276,7 @@ static void bootloader_local_detached_cb(libxl__egc *egc,
>  }
> 
>  /* might be called at any time, provided it's init'd */
> -static void bootloader_abort(libxl__egc *egc,
> +static void bootloader_stop(libxl__egc *egc,
>                               libxl__bootloader_state *bl, int rc)
>  {
>      STATE_AO_GC(bl->ao);
> @@ -285,8 +286,8 @@ static void bootloader_abort(libxl__egc *egc,
>      libxl__datacopier_kill(&bl->display);
>      if (libxl__ev_child_inuse(&bl->child)) {
>          r = kill(bl->child.pid, SIGTERM);
> -        if (r) LOGE(WARN, "after failure, failed to kill bootloader [%lu]",
> -                    (unsigned long)bl->child.pid);
> +        if (r) LOGE(WARN, "%sfailed to kill bootloader [%lu]",
> +                    rc ? "after failure, " : "", (unsigned long)bl->child.pid);
>      }
>      bl->rc = rc;
>  }
> @@ -508,7 +509,10 @@ static void bootloader_gotptys(libxl__egc *egc, libxl__openpty_state *op)
>      bl->keystrokes.maxsz = BOOTLOADER_BUF_OUT;
>      bl->keystrokes.copywhat =
>          GCSPRINTF("bootloader input for domain %"PRIu32, bl->domid);
> -    bl->keystrokes.callback = bootloader_keystrokes_copyfail;
> +    bl->keystrokes.callback =         bootloader_keystrokes_copyfail;
> +    bl->keystrokes.callback_pollhup = bootloader_keystrokes_copyfail;
> +        /* pollhup gets called with errnoval==-1 which is not otherwise
> +         * possible since errnos are nonnegative, so it's unambiguous */
>      rc = libxl__datacopier_start(&bl->keystrokes);
>      if (rc) goto out;
> 
> @@ -516,7 +520,8 @@ static void bootloader_gotptys(libxl__egc *egc, libxl__openpty_state *op)
>      bl->display.maxsz = BOOTLOADER_BUF_IN;
>      bl->display.copywhat =
>          GCSPRINTF("bootloader output for domain %"PRIu32, bl->domid);
> -    bl->display.callback = bootloader_display_copyfail;
> +    bl->display.callback =         bootloader_display_copyfail;
> +    bl->display.callback_pollhup = bootloader_display_copyfail;
>      rc = libxl__datacopier_start(&bl->display);
>      if (rc) goto out;
> 
> @@ -562,30 +567,42 @@ static void bootloader_gotptys(libxl__egc *egc, libxl__openpty_state *op)
> 
>  /* perhaps one of these will be called, but perhaps not */
>  static void bootloader_copyfail(libxl__egc *egc, const char *which,
> -       libxl__bootloader_state *bl, int onwrite, int errnoval)
> +        libxl__bootloader_state *bl, int ondisplay, int onwrite, int errnoval)
>  {
>      STATE_AO_GC(bl->ao);
> +    int rc = ERROR_FAIL;
> +
> +    if (errnoval==-1) {
> +        /* POLLHUP */
> +        if (!!ondisplay != !!onwrite) {
> +            rc = 0;
> +            bl->got_pollhup = 1;
> +        } else {
> +            LOG(ERROR, "unexpected POLLHUP on %s", which);
> +        }
> +    }
>      if (!onwrite && !errnoval)
>          LOG(ERROR, "unexpected eof copying %s", which);
> -    bootloader_abort(egc, bl, ERROR_FAIL);
> +
> +    bootloader_stop(egc, bl, rc);
>  }
>  static void bootloader_keystrokes_copyfail(libxl__egc *egc,
>         libxl__datacopier_state *dc, int onwrite, int errnoval)
>  {
>      libxl__bootloader_state *bl = CONTAINER_OF(dc, *bl, keystrokes);
> -    bootloader_copyfail(egc, "bootloader input", bl, onwrite, errnoval);
> +    bootloader_copyfail(egc, "bootloader input", bl, 0, onwrite, errnoval);
>  }
>  static void bootloader_display_copyfail(libxl__egc *egc,
>         libxl__datacopier_state *dc, int onwrite, int errnoval)
>  {
>      libxl__bootloader_state *bl = CONTAINER_OF(dc, *bl, display);
> -    bootloader_copyfail(egc, "bootloader output", bl, onwrite, errnoval);
> +    bootloader_copyfail(egc, "bootloader output", bl, 1, onwrite, errnoval);
>  }
> 
>  static void bootloader_domaindeath(libxl__egc *egc, libxl__domaindeathcheck *dc)
>  {
>      libxl__bootloader_state *bl = CONTAINER_OF(dc, *bl, deathcheck);
> -    bootloader_abort(egc, bl, ERROR_FAIL);
> +    bootloader_stop(egc, bl, ERROR_FAIL);
>  }
> 
>  static void bootloader_finished(libxl__egc *egc, libxl__ev_child *child,
> @@ -599,6 +616,8 @@ static void bootloader_finished(libxl__egc *egc, libxl__ev_child *child,
>      libxl__datacopier_kill(&bl->display);
> 
>      if (status) {
> +        if (bl->got_pollhup && WIFSIGNALED(status) && WTERMSIG(status)==SIGTERM)
> +            LOG(ERROR, "got POLLHUP, sent SIGTERM");
>          LOG(ERROR, "bootloader failed - consult logfile %s", bl->logfile);
>          libxl_report_child_exitstatus(CTX, XTL_ERROR, "bootloader",
>                                        pid, status);
> diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
> index 58004b3..2d6c71a 100644
> --- a/tools/libxl/libxl_internal.h
> +++ b/tools/libxl/libxl_internal.h
> @@ -2076,7 +2076,9 @@ typedef struct libxl__datacopier_buf libxl__datacopier_buf;
>   *     errnoval==0 means we got eof and all data was written
>   *     errnoval!=0 means we had a read error, logged
>   * onwrite==-1 means some other internal failure, errnoval not valid, logged
> - * in all cases copier is killed before calling this callback */
> + * If we get POLLHUP, we call callback_pollhup(..., onwrite, -1);
> + * or if callback_pollhup==0 this is an internal failure, as above.
> + * In all cases copier is killed before calling this callback */
>  typedef void libxl__datacopier_callback(libxl__egc *egc,
>       libxl__datacopier_state *dc, int onwrite, int errnoval);
> 
> @@ -2095,6 +2097,7 @@ struct libxl__datacopier_state {
>      const char *copywhat, *readwhat, *writewhat; /* for error msgs */
>      FILE *log; /* gets a copy of everything */
>      libxl__datacopier_callback *callback;
> +    libxl__datacopier_callback *callback_pollhup;
>      /* remaining fields are private to datacopier */
>      libxl__ev_fd toread, towrite;
>      ssize_t used;
> @@ -2279,7 +2282,7 @@ struct libxl__bootloader_state {
>      int nargs, argsspace;
>      const char **args;
>      libxl__datacopier_state keystrokes, display;
> -    int rc;
> +    int rc, got_pollhup;
>  };
> 
>  _hidden void libxl__bootloader_init(libxl__bootloader_state *bl);
> --
> 1.7.2.5
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 08:35:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 08:35:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxDLN-0000bX-Bb; Fri, 03 Aug 2012 08:34:49 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1SxDLL-0000bM-C2
	for xen-devel@lists.xensource.com; Fri, 03 Aug 2012 08:34:47 +0000
Received: from [85.158.143.99:45238] by server-1.bemta-4.messagelabs.com id
	B3/F7-24392-62D8B105; Fri, 03 Aug 2012 08:34:46 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-4.tower-216.messagelabs.com!1343982886!24789067!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26645 invoked from network); 3 Aug 2012 08:34:46 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-4.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 3 Aug 2012 08:34:46 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1SxDLF-0006p0-VE; Fri, 03 Aug 2012 08:34:41 +0000
Date: Fri, 3 Aug 2012 09:34:41 +0100
From: Tim Deegan <tim@xen.org>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20120803083441.GA25286@ocelot.phlegethon.org>
References: <osstest-13539-mainreport@xen.org>
	<1343970755.24794.0.camel@dagon.hellion.org.uk>
	<1343980112.21372.4.camel@zakaz.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1343980112.21372.4.camel@zakaz.uk.xensource.com>
User-Agent: Mutt/1.4.2.1i
Cc: keir@xen.org, Christoph Egger <Christoph.Egger@amd.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: Re: [Xen-devel] [xen-unstable test] 13539: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 08:48 +0100 on 03 Aug (1343983712), Ian Campbell wrote:
> nestedhvm: fix nested page fault build error on 32-bit
> 
>     cc1: warnings being treated as errors
>     hvm.c: In function 'hvm_hap_nested_page_fault':
>     hvm.c:1282: error: passing argument 2 of 'nestedhvm_hap_nested_page_fault' from incompatible pointer type
>     /local/scratch/ianc/devel/xen-unstable.hg/xen/include/asm/hvm/nestedhvm.h:55: note: expected 'paddr_t *' but argument is of type 'long unsigned int *'
> 
> hvm_hap_nested_page_fault takes an unsigned long gpa and passes &gpa
> to nestedhvm_hap_nested_page_fault which takes a paddr_t *. Since both
> of the callers of hvm_hap_nested_page_fault (svm_do_nested_pgfault and
> ept_handle_violation) actually have the gpa which they pass to
> hvm_hap_nested_page_fault as a paddr_t I think it makes sense to
> change the argument to hvm_hap_nested_page_fault.
> 
> The other user of gpa in hvm_hap_nested_page_fault is a call to
> p2m_mem_access_check, which currently also takes a paddr_t gpa but I
> think a paddr_t is appropriate there too.
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

Acked-by: Tim Deegan <tim@xen.org>

I think this is a candidate for backporting, too.  As Jan points out,
this is a HAP bug on 32-bit with >4G guests.

Tim.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Errors-To: xen-devel-bounces@lists.xen.org

At 08:48 +0100 on 03 Aug (1343983712), Ian Campbell wrote:
> nestedhvm: fix nested page fault build error on 32-bit
> 
>     cc1: warnings being treated as errors
>     hvm.c: In function 'hvm_hap_nested_page_fault':
>     hvm.c:1282: error: passing argument 2 of 'nestedhvm_hap_nested_page_fault' from incompatible pointer type
>     /local/scratch/ianc/devel/xen-unstable.hg/xen/include/asm/hvm/nestedhvm.h:55: note: expected 'paddr_t *' but argument is of type 'long unsigned int *'
> 
> hvm_hap_nested_page_fault takes an unsigned long gpa and passes &gpa
> to nestedhvm_hap_nested_page_fault which takes a paddr_t *. Since both
> of the callers of hvm_hap_nested_page_fault (svm_do_nested_pgfault and
> ept_handle_violation) actually have the gpa which they pass to
> hvm_hap_nested_page_fault as a paddr_t I think it makes sense to
> change the argument to hvm_hap_nested_page_fault.
> 
> The other user of gpa in hvm_hap_nested_page_fault is a call to
> p2m_mem_access_check, which currently also takes a paddr_t gpa but I
> think a paddr_t is appropriate there too.
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

Acked-by: Tim Deegan <tim@xen.org>

I think this is a candidate for backporting, too.  As Jan points out,
this is a HAP bug on 32-bit with >4G guests.

Tim.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 08:35:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 08:35:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxDLj-0000dz-ON; Fri, 03 Aug 2012 08:35:11 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SxDLh-0000dg-LZ
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 08:35:09 +0000
Received: from [85.158.138.51:64490] by server-6.bemta-3.messagelabs.com id
	DC/A6-20447-C3D8B105; Fri, 03 Aug 2012 08:35:08 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-174.messagelabs.com!1343982907!8714374!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc4OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3116 invoked from network); 3 Aug 2012 08:35:08 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-13.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 08:35:08 -0000
X-IronPort-AV: E=Sophos;i="4.77,706,1336348800"; d="scan'208";a="13835349"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Aug 2012 08:35:07 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Fri, 3 Aug 2012
	09:35:07 +0100
Message-ID: <1343982906.21372.16.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <ian.jackson@eu.citrix.com>
Date: Fri, 3 Aug 2012 09:35:06 +0100
In-Reply-To: <1343928442-23966-12-git-send-email-ian.jackson@eu.citrix.com>
References: <1343928442-23966-1-git-send-email-ian.jackson@eu.citrix.com>
	<1343928442-23966-12-git-send-email-ian.jackson@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 11/13] libxl: correct some comments
 regarding event API and fds
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-02 at 18:27 +0100, Ian Jackson wrote:
> * libxl may indeed register more than one callback for the same fd,
>   with some restrictions.  The allowable range of responses to this by
>   the application means that this should pose no problems for users.
>   But the documentation comment should be fixed.
> 
> * Document the relaxed synchronicity semantics of the fd_modify
>   registration callback.
> 
> * A couple of comments referred to old names for functions.
> 
> Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
> ---
>  tools/libxl/libxl_event.h |   17 ++++++++++++++---
>  1 files changed, 14 insertions(+), 3 deletions(-)
> 
> diff --git a/tools/libxl/libxl_event.h b/tools/libxl/libxl_event.h
> index 3344bc8..cead71b 100644
> --- a/tools/libxl/libxl_event.h
> +++ b/tools/libxl/libxl_event.h
> @@ -320,13 +320,24 @@ typedef struct libxl_osevent_hooks {
>   * *for_registration_update is honoured by libxl and will be passed
>   * to future modify or deregister calls.
>   *
> - * libxl will only attempt to register one callback for any one fd.
> + * libxl may want to register more than one callback for any one fd;
> + * in that case: (i) each such registration will have at least one bit
> + * set in revents which is unique to that registration; (ii) if an
> + * event occurs which is relevant for multiple registrations the
> + * application's event system is may call libxl_osevent_occurred_fd

                                 is may ?

Probably meant just "may".

Otherwise:
Acked-by: Ian Campbell <ian.campbell@citrix.com>

(no need to resend, if you confirm the intended words are as I suggest
I'll tweak on commit)



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 08:35:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 08:35:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxDME-0000jw-5M; Fri, 03 Aug 2012 08:35:42 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SxDMD-0000jh-IA
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 08:35:41 +0000
Received: from [85.158.138.51:44705] by server-6.bemta-3.messagelabs.com id
	F4/87-20447-C5D8B105; Fri, 03 Aug 2012 08:35:40 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-174.messagelabs.com!1343982940!30107752!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc4OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25106 invoked from network); 3 Aug 2012 08:35:40 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 08:35:40 -0000
X-IronPort-AV: E=Sophos;i="4.77,706,1336348800"; d="scan'208";a="13835366"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Aug 2012 08:35:40 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Fri, 3 Aug 2012
	09:35:40 +0100
Message-ID: <1343982938.21372.17.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <ian.jackson@eu.citrix.com>
Date: Fri, 3 Aug 2012 09:35:38 +0100
In-Reply-To: <1343928442-23966-13-git-send-email-ian.jackson@eu.citrix.com>
References: <1343928442-23966-1-git-send-email-ian.jackson@eu.citrix.com>
	<1343928442-23966-13-git-send-email-ian.jackson@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 12/13] libxl: add a comment re the memory
 management API instability
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-02 at 18:27 +0100, Ian Jackson wrote:
> Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>

Acked-by: Ian Campbell <ian.campbell@citrix.com>



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 08:38:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 08:38:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxDOc-00010V-Ng; Fri, 03 Aug 2012 08:38:10 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SxDOb-00010N-Dw
	for xen-devel@lists.xensource.com; Fri, 03 Aug 2012 08:38:09 +0000
Received: from [85.158.143.35:12074] by server-2.bemta-4.messagelabs.com id
	2E/95-17938-0FD8B105; Fri, 03 Aug 2012 08:38:08 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1343983088!13105997!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc4OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26988 invoked from network); 3 Aug 2012 08:38:08 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 08:38:08 -0000
X-IronPort-AV: E=Sophos;i="4.77,706,1336348800"; d="scan'208";a="13835416"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Aug 2012 08:38:00 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Fri, 3 Aug 2012
	09:38:00 +0100
Message-ID: <1343983078.21372.18.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Tim Deegan <tim@xen.org>
Date: Fri, 3 Aug 2012 09:37:58 +0100
In-Reply-To: <20120803083441.GA25286@ocelot.phlegethon.org>
References: <osstest-13539-mainreport@xen.org>
	<1343970755.24794.0.camel@dagon.hellion.org.uk>
	<1343980112.21372.4.camel@zakaz.uk.xensource.com>
	<20120803083441.GA25286@ocelot.phlegethon.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "Keir \(Xen.org\)" <keir@xen.org>,
	Christoph Egger <Christoph.Egger@amd.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: Re: [Xen-devel] [xen-unstable test] 13539: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-08-03 at 09:34 +0100, Tim Deegan wrote:
> At 08:48 +0100 on 03 Aug (1343983712), Ian Campbell wrote:
> > nestedhvm: fix nested page fault build error on 32-bit
> > 
> >     cc1: warnings being treated as errors
> >     hvm.c: In function 'hvm_hap_nested_page_fault':
> >     hvm.c:1282: error: passing argument 2 of 'nestedhvm_hap_nested_page_fault' from incompatible pointer type
> >     /local/scratch/ianc/devel/xen-unstable.hg/xen/include/asm/hvm/nestedhvm.h:55: note: expected 'paddr_t *' but argument is of type 'long unsigned int *'
> > 
> > hvm_hap_nested_page_fault takes an unsigned long gpa and passes &gpa
> > to nestedhvm_hap_nested_page_fault which takes a paddr_t *. Since both
> > of the callers of hvm_hap_nested_page_fault (svm_do_nested_pgfault and
> > ept_handle_violation) actually have the gpa which they pass to
> > hvm_hap_nested_page_fault as a paddr_t I think it makes sense to
> > change the argument to hvm_hap_nested_page_fault.
> > 
> > The other user of gpa in hvm_hap_nested_page_fault is a call to
> > p2m_mem_access_check, which currently also takes a paddr_t gpa but I
> > think a paddr_t is appropriate there too.
> > 
> > Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> 
> Acked-by: Tim Deegan <tim@xen.org>

Is one of you or Jan going to apply or shall I? (I'm doing a tools
commit sweep right now)

> I think this is a candidate for backporting, too.  As Jan points out,
> this is a HAP bug on 32-bit with >4G guests.
> 
> Tim.
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 08:39:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 08:39:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxDQB-0001CB-Cu; Fri, 03 Aug 2012 08:39:47 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SxDQ9-0001Bs-Jw
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 08:39:46 +0000
Received: from [85.158.139.83:28261] by server-1.bemta-5.messagelabs.com id
	16/22-29759-05E8B105; Fri, 03 Aug 2012 08:39:44 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-11.tower-182.messagelabs.com!1343983183!22845469!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23977 invoked from network); 3 Aug 2012 08:39:43 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-11.tower-182.messagelabs.com with SMTP;
	3 Aug 2012 08:39:43 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 03 Aug 2012 09:39:43 +0100
Message-Id: <501BAA990200007800092661@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 03 Aug 2012 09:40:25 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__PartC4F58269.0__="
Subject: [Xen-devel] [PATCH] x86: fix wait code asm() constraints
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__PartC4F58269.0__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

In __prepare_to_wait(), properly mark early clobbered registers. By
doing so, we at once eliminate the need to save/restore rCX and rDI.

In check_wakeup_from_wait(), make the current constraints match by
removing the code that actually alters registers. By adjusting the
resume address in __prepare_to_wait(), we can simply re-use the copying
operation there (rather than doing a second pointless copy in the
opposite direction after branching to the resume point), which at once
eliminates the need for re-loading rCX and rDI inside the asm().

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/common/wait.c
+++ b/xen/common/wait.c
@@ -126,6 +126,7 @@ static void __prepare_to_wait(struct wai
 {
     char *cpu_info = (char *)get_cpu_info();
     struct vcpu *curr = current;
+    unsigned long dummy;
 
     ASSERT(wqv->esp == 0);
 
@@ -140,27 +141,27 @@ static void __prepare_to_wait(struct wai
 
     asm volatile (
 #ifdef CONFIG_X86_64
-        "push %%rax; push %%rbx; push %%rcx; push %%rdx; push %%rdi; "
+        "push %%rax; push %%rbx; push %%rdx; "
         "push %%rbp; push %%r8; push %%r9; push %%r10; push %%r11; "
         "push %%r12; push %%r13; push %%r14; push %%r15; call 1f; "
-        "1: mov 80(%%rsp),%%rdi; mov 96(%%rsp),%%rcx; mov %%rsp,%%rsi; "
+        "1: mov %%rsp,%%rsi; addq $2f-1b,(%%rsp); "
         "sub %%rsi,%%rcx; cmp %3,%%rcx; jbe 2f; "
         "xor %%esi,%%esi; jmp 3f; "
        "2: rep movsb; mov %%rsp,%%rsi; 3: pop %%rax; "
         "pop %%r15; pop %%r14; pop %%r13; pop %%r12; "
         "pop %%r11; pop %%r10; pop %%r9; pop %%r8; "
-        "pop %%rbp; pop %%rdi; pop %%rdx; pop %%rcx; pop %%rbx; pop %%rax"
+        "pop %%rbp; pop %%rdx; pop %%rbx; pop %%rax"
 #else
-        "push %%eax; push %%ebx; push %%ecx; push %%edx; push %%edi; "
+        "push %%eax; push %%ebx; push %%edx; "
         "push %%ebp; call 1f; "
-        "1: mov 8(%%esp),%%edi; mov 16(%%esp),%%ecx; mov %%esp,%%esi; "
+        "1: mov %%esp,%%esi; addl $2f-1b,(%%esp); "
         "sub %%esi,%%ecx; cmp %3,%%ecx; jbe 2f; "
         "xor %%esi,%%esi; jmp 3f; "
         "2: rep movsb; mov %%esp,%%esi; 3: pop %%eax; "
-        "pop %%ebp; pop %%edi; pop %%edx; pop %%ecx; pop %%ebx; pop %%eax"
+        "pop %%ebp; pop %%edx; pop %%ebx; pop %%eax"
 #endif
-        : "=S" (wqv->esp)
-        : "c" (cpu_info), "D" (wqv->stack), "i" (PAGE_SIZE)
+        : "=&S" (wqv->esp), "=&c" (dummy), "=&D" (dummy)
+        : "i" (PAGE_SIZE), "1" (cpu_info), "2" (wqv->stack)
         : "memory" );
 
     if ( unlikely(wqv->esp == 0) )
@@ -200,7 +201,7 @@ void check_wakeup_from_wait(void)
     }
 
     asm volatile (
-        "mov %1,%%"__OP"sp; rep movsb; jmp *(%%"__OP"sp)"
+        "mov %1,%%"__OP"sp; jmp *(%0)"
         : : "S" (wqv->stack), "D" (wqv->esp),
         "c" ((char *)get_cpu_info() - (char *)wqv->esp)
         : "memory" );




--=__PartC4F58269.0__=
Content-Type: text/plain; name="x86-wait-constraints.patch"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="x86-wait-constraints.patch"

x86: fix wait code asm() constraints

In __prepare_to_wait(), properly mark early clobbered registers. By
doing so, we at once eliminate the need to save/restore rCX and rDI.

In check_wakeup_from_wait(), make the current constraints match by
removing the code that actually alters registers. By adjusting the
resume address in __prepare_to_wait(), we can simply re-use the copying
operation there (rather than doing a second pointless copy in the
opposite direction after branching to the resume point), which at once
eliminates the need for re-loading rCX and rDI inside the asm().

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/common/wait.c
+++ b/xen/common/wait.c
@@ -126,6 +126,7 @@ static void __prepare_to_wait(struct wai
 {
     char *cpu_info = (char *)get_cpu_info();
     struct vcpu *curr = current;
+    unsigned long dummy;
 
     ASSERT(wqv->esp == 0);
 
@@ -140,27 +141,27 @@ static void __prepare_to_wait(struct wai
 
     asm volatile (
 #ifdef CONFIG_X86_64
-        "push %%rax; push %%rbx; push %%rcx; push %%rdx; push %%rdi; "
+        "push %%rax; push %%rbx; push %%rdx; "
         "push %%rbp; push %%r8; push %%r9; push %%r10; push %%r11; "
         "push %%r12; push %%r13; push %%r14; push %%r15; call 1f; "
-        "1: mov 80(%%rsp),%%rdi; mov 96(%%rsp),%%rcx; mov %%rsp,%%rsi; "
+        "1: mov %%rsp,%%rsi; addq $2f-1b,(%%rsp); "
         "sub %%rsi,%%rcx; cmp %3,%%rcx; jbe 2f; "
         "xor %%esi,%%esi; jmp 3f; "
         "2: rep movsb; mov %%rsp,%%rsi; 3: pop %%rax; "
         "pop %%r15; pop %%r14; pop %%r13; pop %%r12; "
         "pop %%r11; pop %%r10; pop %%r9; pop %%r8; "
-        "pop %%rbp; pop %%rdi; pop %%rdx; pop %%rcx; pop %%rbx; pop %%rax"
+        "pop %%rbp; pop %%rdx; pop %%rbx; pop %%rax"
 #else
-        "push %%eax; push %%ebx; push %%ecx; push %%edx; push %%edi; "
+        "push %%eax; push %%ebx; push %%edx; "
         "push %%ebp; call 1f; "
-        "1: mov 8(%%esp),%%edi; mov 16(%%esp),%%ecx; mov %%esp,%%esi; "
+        "1: mov %%esp,%%esi; addl $2f-1b,(%%esp); "
         "sub %%esi,%%ecx; cmp %3,%%ecx; jbe 2f; "
         "xor %%esi,%%esi; jmp 3f; "
         "2: rep movsb; mov %%esp,%%esi; 3: pop %%eax; "
-        "pop %%ebp; pop %%edi; pop %%edx; pop %%ecx; pop %%ebx; pop %%eax"
+        "pop %%ebp; pop %%edx; pop %%ebx; pop %%eax"
 #endif
-        : "=S" (wqv->esp)
-        : "c" (cpu_info), "D" (wqv->stack), "i" (PAGE_SIZE)
+        : "=&S" (wqv->esp), "=&c" (dummy), "=&D" (dummy)
+        : "i" (PAGE_SIZE), "1" (cpu_info), "2" (wqv->stack)
         : "memory" );
 
     if ( unlikely(wqv->esp == 0) )
@@ -200,7 +201,7 @@ void check_wakeup_from_wait(void)
     }
 
     asm volatile (
-        "mov %1,%%"__OP"sp; rep movsb; jmp *(%%"__OP"sp)"
+        "mov %1,%%"__OP"sp; jmp *(%0)"
         : : "S" (wqv->stack), "D" (wqv->esp),
         "c" ((char *)get_cpu_info() - (char *)wqv->esp)
         : "memory" );
--=__PartC4F58269.0__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__PartC4F58269.0__=--


From xen-devel-bounces@lists.xen.org Fri Aug 03 08:40:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 08:40:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxDQO-0001E1-Ps; Fri, 03 Aug 2012 08:40:00 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1SxDQN-0001Dj-Uk
	for xen-devel@lists.xensource.com; Fri, 03 Aug 2012 08:40:00 +0000
Received: from [85.158.138.51:39393] by server-2.bemta-3.messagelabs.com id
	2B/76-00359-F5E8B105; Fri, 03 Aug 2012 08:39:59 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-3.tower-174.messagelabs.com!1343983198!22117228!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12861 invoked from network); 3 Aug 2012 08:39:58 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-3.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 3 Aug 2012 08:39:58 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1SxDQK-0006qf-0B; Fri, 03 Aug 2012 08:39:56 +0000
Date: Fri, 3 Aug 2012 09:39:55 +0100
From: Tim Deegan <tim@xen.org>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20120803083955.GB25286@ocelot.phlegethon.org>
References: <osstest-13539-mainreport@xen.org>
	<1343970755.24794.0.camel@dagon.hellion.org.uk>
	<1343980112.21372.4.camel@zakaz.uk.xensource.com>
	<20120803083441.GA25286@ocelot.phlegethon.org>
	<1343983078.21372.18.camel@zakaz.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1343983078.21372.18.camel@zakaz.uk.xensource.com>
User-Agent: Mutt/1.4.2.1i
Cc: "Keir \(Xen.org\)" <keir@xen.org>,
	Christoph Egger <Christoph.Egger@amd.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: Re: [Xen-devel] [xen-unstable test] 13539: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 09:37 +0100 on 03 Aug (1343986678), Ian Campbell wrote:
> On Fri, 2012-08-03 at 09:34 +0100, Tim Deegan wrote:
> > At 08:48 +0100 on 03 Aug (1343983712), Ian Campbell wrote:
> > > nestedhvm: fix nested page fault build error on 32-bit
> > > 
> > >     cc1: warnings being treated as errors
> > >     hvm.c: In function ‘hvm_hap_nested_page_fault’:
> > >     hvm.c:1282: error: passing argument 2 of ‘nestedhvm_hap_nested_page_fault’ from incompatible pointer type /local/scratch/ianc/devel/xen-unstable.hg/xen/include/asm/hvm/nestedhvm.h:55: note: expected ‘paddr_t *’ but argument is of type ‘long unsigned int *’
> > > 
> > > hvm_hap_nested_page_fault takes an unsigned long gpa and passes &gpa
> > > to nestedhvm_hap_nested_page_fault which takes a paddr_t *. Since both
> > > of the callers of hvm_hap_nested_page_fault (svm_do_nested_pgfault and
> > > ept_handle_violation) actually have the gpa which they pass to
> > > hvm_hap_nested_page_fault as a paddr_t I think it makes sense to
> > > change the argument to hvm_hap_nested_page_fault.
> > > 
> > > The other user of gpa in hvm_hap_nested_page_fault is a call to
> > > p2m_mem_access_check, which currently also takes a paddr_t gpa but I
> > > think a paddr_t is appropriate there too.
> > > 
> > > Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> > 
> > Acked-by: Tim Deegan <tim@xen.org>
> 
> Is one of you or Jan going to apply or shall I? (I'm doing a tools
> commit sweep right now)

If you're already applying things, please do apply this one too.

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 08:43:34 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 08:43:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxDTg-0001VW-Cp; Fri, 03 Aug 2012 08:43:24 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SxDTf-0001VL-5t
	for xen-devel@lists.xensource.com; Fri, 03 Aug 2012 08:43:23 +0000
Received: from [85.158.139.83:19916] by server-12.bemta-5.messagelabs.com id
	AB/2A-26304-A2F8B105; Fri, 03 Aug 2012 08:43:22 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-182.messagelabs.com!1343983401!30072055!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17495 invoked from network); 3 Aug 2012 08:43:21 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-182.messagelabs.com with SMTP;
	3 Aug 2012 08:43:21 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 03 Aug 2012 09:43:20 +0100
Message-Id: <501BAB710200007800092664@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 03 Aug 2012 09:44:01 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>
References: <osstest-13539-mainreport@xen.org>
	<1343970755.24794.0.camel@dagon.hellion.org.uk>
	<1343980112.21372.4.camel@zakaz.uk.xensource.com>
	<501BA0E6020000780009263B@nat28.tlf.novell.com>
	<1343980817.21372.8.camel@zakaz.uk.xensource.com>
In-Reply-To: <1343980817.21372.8.camel@zakaz.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "Tim \(Xen.org\)" <tim@xen.org>, Christoph Egger <Christoph.Egger@amd.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: Re: [Xen-devel] [xen-unstable test] 13539: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 03.08.12 at 10:00, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Fri, 2012-08-03 at 08:59 +0100, Jan Beulich wrote:
>> >>> On 03.08.12 at 09:48, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>> > On Fri, 2012-08-03 at 06:12 +0100, Ian Campbell wrote:
>> >> On Fri, 2012-08-03 at 00:41 +0100, xen.org wrote:
>> >> > flight 13539 xen-unstable real [real]
>> >> > http://www.chiark.greenend.org.uk/~xensrcts/logs/13539/ 
>> >> > 
>> >> > Regressions :-(
>> >> > 
>> >> > Tests which did not succeed and are blocking,
>> >> > including tests which could not be run:
>> >> >  build-i386-oldkern            4 xen-build                 fail REGR. vs. 
>> > 13536
>> >> >  build-i386                    4 xen-build                 fail REGR. vs. 
>> > 13536
>> > 
>> > 8<-------------------------------------
>> > 
>> > # HG changeset patch
>> > # User Ian Campbell <ian.campbell@citrix.com>
>> > # Date 1343980045 -3600
>> > # Node ID 23fdca3adb3346090ea8b65b77cad7d279cf9daf
>> > # Parent  95a4ab632ac25ce0ec6a245dcc46ad57d3c7030f
>> > nestedhvm: fix nested page fault build error on 32-bit
>> > 
>> >     cc1: warnings being treated as errors
>> >     hvm.c: In function ‘hvm_hap_nested_page_fault’:
>> >     hvm.c:1282: error: passing argument 2 of 
>> > ‘nestedhvm_hap_nested_page_fault’ from incompatible pointer type 
>> > 
> /local/scratch/ianc/devel/xen-unstable.hg/xen/include/asm/hvm/nestedhvm.h:55: 
> 
>> > note: expected ‘paddr_t *’ but argument is of type ‘long unsigned int *’
>> > 
>> > hvm_hap_nested_page_fault takes an unsigned long gpa and passes &gpa
>> > to nestedhvm_hap_nested_page_fault which takes a paddr_t *. Since both
>> > of the callers of hvm_hap_nested_page_fault (svm_do_nested_pgfault and
>> > ept_handle_violation) actually have the gpa which they pass to
>> > hvm_hap_nested_page_fault as a paddr_t I think it makes sense to
>> > change the argument to hvm_hap_nested_page_fault.
>> 
>> And that's even outside of the current build failure - it just
>> can't have worked for >4Gb guests on the 32-bit hypervisor.
> 
> Right. I must admit I was surprised to find that nestedhvm was a feature
> of 32 bit at all, I had expect the fix to be changing some sort of stub
> function...

So did I first think. But the function that needed changing really
isn't nested-HVM only, so the fix was required in any case (just
that without the exposing c/s, the problem would likely have been
found a lot later).

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 03.08.12 at 10:00, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Fri, 2012-08-03 at 08:59 +0100, Jan Beulich wrote:
>> >>> On 03.08.12 at 09:48, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>> > On Fri, 2012-08-03 at 06:12 +0100, Ian Campbell wrote:
>> >> On Fri, 2012-08-03 at 00:41 +0100, xen.org wrote:
>> >> > flight 13539 xen-unstable real [real]
>> >> > http://www.chiark.greenend.org.uk/~xensrcts/logs/13539/ 
>> >> > 
>> >> > Regressions :-(
>> >> > 
>> >> > Tests which did not succeed and are blocking,
>> >> > including tests which could not be run:
>> >> >  build-i386-oldkern            4 xen-build                 fail REGR. vs. 
>> > 13536
>> >> >  build-i386                    4 xen-build                 fail REGR. vs. 
>> > 13536
>> > 
>> > 8<----------------------------------
>> > 
>> > # HG changeset patch
>> > # User Ian Campbell <ian.campbell@citrix.com>
>> > # Date 1343980045 -3600
>> > # Node ID 23fdca3adb3346090ea8b65b77cad7d279cf9daf
>> > # Parent  95a4ab632ac25ce0ec6a245dcc46ad57d3c7030f
>> > nestedhvm: fix nested page fault build error on 32-bit
>> > 
>> >     cc1: warnings being treated as errors
>> >     hvm.c: In function ‘hvm_hap_nested_page_fault’:
>> >     hvm.c:1282: error: passing argument 2 of 
>> > ‘nestedhvm_hap_nested_page_fault’ from incompatible pointer type 
>> > 
> /local/scratch/ianc/devel/xen-unstable.hg/xen/include/asm/hvm/nestedhvm.h:55: 
> 
>> > note: expected ‘paddr_t *’ but argument is of type ‘long unsigned int *’
>> > 
>> > hvm_hap_nested_page_fault takes an unsigned long gpa and passes &gpa
>> > to nestedhvm_hap_nested_page_fault which takes a paddr_t *. Since both
>> > of the callers of hvm_hap_nested_page_fault (svm_do_nested_pgfault and
>> > ept_handle_violation) actually have the gpa which they pass to
>> > hvm_hap_nested_page_fault as a paddr_t I think it makes sense to
>> > change the argument to hvm_hap_nested_page_fault.
>> 
>> And that's even outside of the current build failure - it just
>> can't have worked for >4Gb guests on the 32-bit hypervisor.
> 
> Right. I must admit I was surprised to find that nestedhvm was a feature
> of 32 bit at all, I had expect the fix to be changing some sort of stub
> function...

So did I first think. But the function that needed changing really
isn't nested-HVM only, so the fix was required in any case (just
that without the exposing c/s, the problem would likely have been
found a lot later).

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 08:55:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 08:55:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxDfc-0001kf-Kv; Fri, 03 Aug 2012 08:55:44 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SxDfa-0001ka-Ko
	for xen-devel@lists.xensource.com; Fri, 03 Aug 2012 08:55:42 +0000
Received: from [85.158.143.99:43733] by server-2.bemta-4.messagelabs.com id
	A1/D2-17938-E029B105; Fri, 03 Aug 2012 08:55:42 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1343984139!29492501!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc4OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26946 invoked from network); 3 Aug 2012 08:55:39 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-3.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 08:55:39 -0000
X-IronPort-AV: E=Sophos;i="4.77,706,1336348800"; d="scan'208";a="13835799"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Aug 2012 08:55:39 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Fri, 3 Aug 2012
	09:55:39 +0100
Message-ID: <1343984137.21372.22.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Tim Deegan <tim@xen.org>
Date: Fri, 3 Aug 2012 09:55:37 +0100
In-Reply-To: <20120803083955.GB25286@ocelot.phlegethon.org>
References: <osstest-13539-mainreport@xen.org>
	<1343970755.24794.0.camel@dagon.hellion.org.uk>
	<1343980112.21372.4.camel@zakaz.uk.xensource.com>
	<20120803083441.GA25286@ocelot.phlegethon.org>
	<1343983078.21372.18.camel@zakaz.uk.xensource.com>
	<20120803083955.GB25286@ocelot.phlegethon.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "Keir \(Xen.org\)" <keir@xen.org>,
	Christoph Egger <Christoph.Egger@amd.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: Re: [Xen-devel] [xen-unstable test] 13539: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-08-03 at 09:39 +0100, Tim Deegan wrote:
> At 09:37 +0100 on 03 Aug (1343986678), Ian Campbell wrote:
> > On Fri, 2012-08-03 at 09:34 +0100, Tim Deegan wrote:
> > > At 08:48 +0100 on 03 Aug (1343983712), Ian Campbell wrote:
> > > > nestedhvm: fix nested page fault build error on 32-bit
> > > > 
> > > >     cc1: warnings being treated as errors
> > > >     hvm.c: In function ‘hvm_hap_nested_page_fault’:
> > > >     hvm.c:1282: error: passing argument 2 of ‘nestedhvm_hap_nested_page_fault’ from incompatible pointer type /local/scratch/ianc/devel/xen-unstable.hg/xen/include/asm/hvm/nestedhvm.h:55: note: expected ‘paddr_t *’ but argument is of type ‘long unsigned int *’
> > > > 
> > > > hvm_hap_nested_page_fault takes an unsigned long gpa and passes &gpa
> > > > to nestedhvm_hap_nested_page_fault which takes a paddr_t *. Since both
> > > > of the callers of hvm_hap_nested_page_fault (svm_do_nested_pgfault and
> > > > ept_handle_violation) actually have the gpa which they pass to
> > > > hvm_hap_nested_page_fault as a paddr_t I think it makes sense to
> > > > change the argument to hvm_hap_nested_page_fault.
> > > > 
> > > > The other user of gpa in hvm_hap_nested_page_fault is a call to
> > > > p2m_mem_access_check, which currently also takes a paddr_t gpa but I
> > > > think a paddr_t is appropriate there too.
> > > > 
> > > > Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> > > 
> > > Acked-by: Tim Deegan <tim@xen.org>
> > 
> > Is one of you or Jan going to apply or shall I? (I'm doing a tools
> > commit sweep right now)
> 
> If you're already applying things, please do apply up this one too.

Done.

I added an extra sentence to the commit log:
        Jan points out that this is also an issue for >4GB guests on the
        32 bit hypervisor.

> 
> Tim.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 08:56:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 08:56:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxDg8-0001mY-1e; Fri, 03 Aug 2012 08:56:16 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SxDg6-0001mQ-Jr
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 08:56:14 +0000
Received: from [85.158.143.35:57642] by server-3.bemta-4.messagelabs.com id
	71/A2-01511-D229B105; Fri, 03 Aug 2012 08:56:13 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1343984160!16594834!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc5NTM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5059 invoked from network); 3 Aug 2012 08:56:00 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 08:56:00 -0000
X-IronPort-AV: E=Sophos;i="4.77,706,1336348800"; d="scan'208";a="13835810"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Aug 2012 08:56:00 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Fri, 3 Aug 2012
	09:56:00 +0100
Message-ID: <1343984158.21372.23.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Olaf Hering <olaf@aepfle.de>
Date: Fri, 3 Aug 2012 09:55:58 +0100
In-Reply-To: <756f87bda3c3172d34ca.1343922793@probook.site>
References: <756f87bda3c3172d34ca.1343922793@probook.site>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] tools/vtpm: fix tpm_version.h error during
 parallel build
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-02 at 16:53 +0100, Olaf Hering wrote:
> # HG changeset patch
> # User Olaf Hering <olaf@aepfle.de>
> # Date 1343922758 -7200
> # Node ID 756f87bda3c3172d34cab60dc7279c3292775275
> # Parent  983ea7521badb3e05d3379044fb283732ef558d6
> tools/vtpm: fix tpm_version.h error during parallel build
> 
> Generating the tpm_version.h is not make -j safe:
> 
> In file included from ../tpm/tpm_emulator.h:25:0,
>                  from ../tpm/tpm_startup.c:18:
> ../tpm/tpm_version.h:1:0: error: unterminated #ifndef
> make[5]: *** [tpm_startup.o] Error 1
> 
> This happens because make can not know that 'all-recursive' depends on
> 'version'. Fix this by calling the individual make targets. Doing it
> this way avoids adding yet another patch.
> 
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

Applied.

I made the last paragraph:
        This happens because make can not know that 'all-recursive' depends on
        'version'. Fix this by calling the individual make targets. Doing it
        this way avoids adding yet another patch to the downloaded source.

(i.e. gave some hint why we want to avoid patching)

> 
> diff -r 983ea7521bad -r 756f87bda3c3 tools/vtpm/Makefile
> --- a/tools/vtpm/Makefile
> +++ b/tools/vtpm/Makefile
> @@ -23,7 +23,7 @@ build: build_sub
>  
>  .PHONY: install
>  install: build
> -	$(MAKE) -C $(VTPM_DIR) $@
> +	$(MAKE) -C $(VTPM_DIR) install-recursive
>  
>  .PHONY: clean
>  clean:
> @@ -66,7 +66,8 @@ updatepatches: clean orig
>  .PHONY: build_sub
>  build_sub: $(VTPM_DIR)/tpmd/tpmd
>  	set -e; if [ -e $(GMP_HEADER) ]; then \
> -		$(MAKE) -C $(VTPM_DIR); \
> +		$(MAKE) -C $(VTPM_DIR) version; \
> +		$(MAKE) -C $(VTPM_DIR) all-recursive; \
>  	else \
>  		echo "=== Unable to build VTPMs. libgmp could not be found."; \
>  	fi
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 08:56:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 08:56:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxDgV-0001pP-EA; Fri, 03 Aug 2012 08:56:39 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SxDgU-0001pC-Ik
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 08:56:38 +0000
Received: from [85.158.139.83:9375] by server-9.bemta-5.messagelabs.com id
	9A/E8-01069-5429B105; Fri, 03 Aug 2012 08:56:37 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-182.messagelabs.com!1343984197!24381874!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc4OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8903 invoked from network); 3 Aug 2012 08:56:37 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-7.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 08:56:37 -0000
X-IronPort-AV: E=Sophos;i="4.77,706,1336348800"; d="scan'208";a="13835825"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Aug 2012 08:56:36 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Fri, 3 Aug 2012
	09:56:36 +0100
Message-ID: <1343984195.21372.24.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Fri, 3 Aug 2012 09:56:35 +0100
In-Reply-To: <20506.35753.456647.344782@mariner.uk.xensource.com>
References: <e638a0aeb9856003661a.1343898018@cosworth.uk.xensource.com>
	<20506.35753.456647.344782@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] libxl: prefix *.for-check with _ to mark it
 as a generated file
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-02 at 15:16 +0100, Ian Jackson wrote:
> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

Applied, thanks.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 08:56:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 08:56:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxDgc-0001qc-Qz; Fri, 03 Aug 2012 08:56:46 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SxDgb-0001qE-0u
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 08:56:45 +0000
Received: from [85.158.138.51:15672] by server-2.bemta-3.messagelabs.com id
	62/E3-00359-C429B105; Fri, 03 Aug 2012 08:56:44 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-174.messagelabs.com!1343984203!21298744!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc4OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1596 invoked from network); 3 Aug 2012 08:56:43 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-7.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 08:56:43 -0000
X-IronPort-AV: E=Sophos;i="4.77,706,1336348800"; d="scan'208";a="13835828"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Aug 2012 08:56:43 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Fri, 3 Aug 2012
	09:56:43 +0100
Message-ID: <1343984201.21372.25.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Tim Deegan <tim@xen.org>
Date: Fri, 3 Aug 2012 09:56:41 +0100
In-Reply-To: <20120802110906.GD11437@ocelot.phlegethon.org>
References: <1343839438-3321-1-git-send-email-ian.campbell@citrix.com>
	<20120802110906.GD11437@ocelot.phlegethon.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] arm: fix gic_init_secondary_cpu.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-02 at 12:09 +0100, Tim Deegan wrote:
> Acked-by: Tim Deegan <tim@xen.org>

Applied, thanks.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 08:57:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 08:57:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxDgy-0001vv-Ei; Fri, 03 Aug 2012 08:57:08 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SxDgx-0001vV-6C
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 08:57:07 +0000
Received: from [85.158.138.51:7555] by server-10.bemta-3.messagelabs.com id
	85/C6-21993-2629B105; Fri, 03 Aug 2012 08:57:06 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-174.messagelabs.com!1343984225!23866265!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc4OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18141 invoked from network); 3 Aug 2012 08:57:05 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-14.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 08:57:05 -0000
X-IronPort-AV: E=Sophos;i="4.77,706,1336348800"; d="scan'208";a="13835837"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Aug 2012 08:57:05 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Fri, 3 Aug 2012
	09:57:05 +0100
Message-ID: <1343984224.21372.26.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Fri, 3 Aug 2012 09:57:04 +0100
In-Reply-To: <20506.39065.252439.380882@mariner.uk.xensource.com>
References: <6b09cb00e9f4d2dcea48.1343823896@cosworth.uk.xensource.com>
	<20506.39065.252439.380882@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH V2] libxl: make libxl_device_pci_{add, remove,
 destroy} interfaces asynchronous
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-02 at 16:11 +0100, Ian Jackson wrote:
> Ian Campbell writes ("[PATCH V2] libxl: make libxl_device_pci_{add, remove, destroy} interfaces asynchronous"):
> > libxl: make libxl_device_pci_{add,remove,destroy} interfaces asynchronous
> 
> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

Applied, thanks.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 08:59:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 08:59:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxDj0-0002FZ-28; Fri, 03 Aug 2012 08:59:14 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SxDiy-0002FI-9c
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 08:59:12 +0000
Received: from [85.158.143.35:12411] by server-3.bemta-4.messagelabs.com id
	92/97-01511-FD29B105; Fri, 03 Aug 2012 08:59:11 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-21.messagelabs.com!1343984350!10678718!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc5NTM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18465 invoked from network); 3 Aug 2012 08:59:10 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-10.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 08:59:10 -0000
X-IronPort-AV: E=Sophos;i="4.77,706,1336348800"; d="scan'208";a="13835893"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Aug 2012 08:59:10 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Fri, 3 Aug 2012
	09:59:10 +0100
Message-ID: <1343984349.21372.27.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <ian.jackson@eu.citrix.com>
Date: Fri, 3 Aug 2012 09:59:09 +0100
In-Reply-To: <1343928442-23966-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1343928442-23966-1-git-send-email-ian.jackson@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v5 00/13] libxl: Assorted bugfixes and
 cleanups
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-02 at 18:27 +0100, Ian Jackson wrote:
> These should go into 4.2 soon:

I've applied 01..12 but skipped 11 pending your confirmation of the
intended wording.

> 
>  A * 01/13 libxl: unify libxl__device_destroy and device_hotplug_done
>    * 02/13 libxl: react correctly to bootloader pty master POLLHUP
>  A   03/13 libxl: fix device counting race in libxl__devices_destroy
>  A   04/13 libxl: fix formatting of DEFINE_DEVICES_ADD
>  A   05/13 libxl: abolish useless `start' parameter to libxl__add_*
>  A   06/13 libxl: rename aodevs to multidev
>  A   07/13 libxl: do not blunder on if bootloader fails (again)
> 
> These are harmless enough (but make no functional difference right
> now) and should go in as well:
> 
>  A   08/13 libxl: remus: mark TODOs more clearly
>  A   09/13 libxl: remove an unused numainfo parameter
>  A   10/13 libxl: idl: always initialise KeyedEnum keyvar in member init
> 
> These are API doc fixes for 4.2:
> 
>    + 11/13 libxl: correct some comments regarding event API and fds
>    + 12/13 libxl: add a comment re the memory management API instability
> 
> And this one is still waiting on the qmp ask_timeout question:
> 
>  X * 13/13 libxl: -Wunused-parameter
> 
> Key:
> 
>  A   acked
>  X   DO NOT APPLY
>    * modified since v4
>    + new patch, not posted before
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 09:01:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 09:01:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxDlF-0002aM-A0; Fri, 03 Aug 2012 09:01:33 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SxDlE-0002aA-HC
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 09:01:32 +0000
Received: from [85.158.143.99:22472] by server-3.bemta-4.messagelabs.com id
	D4/4C-01511-B639B105; Fri, 03 Aug 2012 09:01:31 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-216.messagelabs.com!1343984491!18395545!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc4OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9224 invoked from network); 3 Aug 2012 09:01:31 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 09:01:31 -0000
X-IronPort-AV: E=Sophos;i="4.77,706,1336348800"; d="scan'208";a="13835948"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Aug 2012 09:01:31 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Fri, 3 Aug 2012
	10:01:31 +0100
Message-ID: <1343984489.21372.28.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Fri, 3 Aug 2012 10:01:29 +0100
In-Reply-To: <20506.39036.116372.505121@mariner.uk.xensource.com>
References: <5ba5402335fe0365d2d0.1343821923@cosworth.uk.xensource.com>
	<20506.39036.116372.505121@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] libxl: only read script once in
	libxl__hotplug_*
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-02 at 16:10 +0100, Ian Jackson wrote:
> Ian Campbell writes ("[PATCH] libxl: only read script once in libxl__hotplug_*"):
> > libxl: only read script once in libxl__hotplug_*
> > 
> > instead of duplicating the error handling etc in get_hotplug_env
> > just pass the script already read by the caller down.
> 
> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

Applied, thanks.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 09:02:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 09:02:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxDla-0002d2-Mv; Fri, 03 Aug 2012 09:01:54 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SxDlZ-0002cj-2d
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 09:01:53 +0000
Received: from [85.158.138.51:52040] by server-12.bemta-3.messagelabs.com id
	46/1B-15259-0839B105; Fri, 03 Aug 2012 09:01:52 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-174.messagelabs.com!1343984511!28460178!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc4OTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25869 invoked from network); 3 Aug 2012 09:01:51 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 09:01:51 -0000
X-IronPort-AV: E=Sophos;i="4.77,706,1336348800"; d="scan'208";a="13835953"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Aug 2012 09:01:51 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Fri, 3 Aug 2012
	10:01:51 +0100
Message-ID: <1343984510.21372.29.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Fri, 3 Aug 2012 10:01:50 +0100
In-Reply-To: <20506.37557.263306.37516@mariner.uk.xensource.com>
References: <075da4778b0a1a84680e.1343899357@cosworth.uk.xensource.com>
	<20506.35845.933539.770104@mariner.uk.xensource.com>
	<1343917305.27221.159.camel@zakaz.uk.xensource.com>
	<20506.37557.263306.37516@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] libxl: const correctness for
	libxl__xs_path_cleanup
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-02 at 15:46 +0100, Ian Jackson wrote:
> Ian Campbell writes ("Re: [PATCH] libxl: const correctness for libxl__xs_path_cleanup"):
> > On Thu, 2012-08-02 at 15:17 +0100, Ian Jackson wrote:
> > > I don't understand why this is needed (or a good thing) for one but
> > > not the other.
> > 
> > be_path becomes const in the next patch, after the non-const user is
> > removed.
> 
> OK then:
> 
> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

Applied, thanks,


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 09:11:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 09:11:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxDv5-00038p-Qx; Fri, 03 Aug 2012 09:11:43 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Christoph.Egger@amd.com>) id 1SxDv3-00038k-O3
	for xen-devel@lists.xensource.com; Fri, 03 Aug 2012 09:11:42 +0000
Received: from [85.158.143.99:17293] by server-1.bemta-4.messagelabs.com id
	6F/65-24392-DC59B105; Fri, 03 Aug 2012 09:11:41 +0000
X-Env-Sender: Christoph.Egger@amd.com
X-Msg-Ref: server-16.tower-216.messagelabs.com!1343985097!18397523!1
X-Originating-IP: [216.32.180.188]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15226 invoked from network); 3 Aug 2012 09:11:39 -0000
Received: from co1ehsobe005.messaging.microsoft.com (HELO
	co1outboundpool.messaging.microsoft.com) (216.32.180.188)
	by server-16.tower-216.messagelabs.com with AES128-SHA encrypted SMTP;
	3 Aug 2012 09:11:39 -0000
Received: from mail181-co1-R.bigfish.com (10.243.78.237) by
	CO1EHSOBE013.bigfish.com (10.243.66.76) with Microsoft SMTP Server id
	14.1.225.23; Fri, 3 Aug 2012 09:11:37 +0000
Received: from mail181-co1 (localhost [127.0.0.1])	by
	mail181-co1-R.bigfish.com (Postfix) with ESMTP id 32C61C0520;
	Fri,  3 Aug 2012 09:11:37 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:163.181.249.108; KIP:(null); UIP:(null);
	IPV:NLI; H:ausb3twp01.amd.com; RD:none; EFVD:NLI
X-SpamScore: -4
X-BigFish: VPS-4(zzbb2dI98dIc89bh936eI1432Izz1202hzz8275bh8275dhz2dh668h839h93fhd25he5bhf0ah107ah)
Received: from mail181-co1 (localhost.localdomain [127.0.0.1]) by mail181-co1
	(MessageSwitch) id 1343985094932137_3253;
	Fri,  3 Aug 2012 09:11:34 +0000 (UTC)
Received: from CO1EHSMHS015.bigfish.com (unknown [10.243.78.234])	by
	mail181-co1.bigfish.com (Postfix) with ESMTP id E14A4800044;
	Fri,  3 Aug 2012 09:11:34 +0000 (UTC)
Received: from ausb3twp01.amd.com (163.181.249.108) by
	CO1EHSMHS015.bigfish.com (10.243.66.25) with Microsoft SMTP Server id
	14.1.225.23; Fri, 3 Aug 2012 09:11:34 +0000
X-WSS-ID: 0M869J8-01-6AU-02
X-M-MSG: 
Received: from sausexedgep01.amd.com (sausexedgep01-ext.amd.com
	[163.181.249.72])	(using TLSv1 with cipher AES128-SHA (128/128
	bits))	(No
	client certificate requested)	by ausb3twp01.amd.com (Axway MailGate
	3.8.1)
	with ESMTP id 2175910280BE;	Fri,  3 Aug 2012 04:11:31 -0500 (CDT)
Received: from sausexhtp02.amd.com (163.181.3.152) by sausexedgep01.amd.com
	(163.181.36.54) with Microsoft SMTP Server (TLS) id 8.3.192.1;
	Fri, 3 Aug 2012 04:11:50 -0500
Received: from storexhtp02.amd.com (172.24.4.4) by sausexhtp02.amd.com
	(163.181.3.152) with Microsoft SMTP Server (TLS) id 8.3.213.0;
	Fri, 3 Aug 2012 04:11:30 -0500
Received: from rhodium.osrc.amd.com (165.204.15.173) by storexhtp02.amd.com
	(172.24.4.4) with Microsoft SMTP Server id 8.3.213.0; Fri, 3 Aug 2012
	05:11:30 -0400
Message-ID: <501B95C0.1080708@amd.com>
Date: Fri, 3 Aug 2012 11:11:28 +0200
From: Christoph Egger <Christoph.Egger@amd.com>
User-Agent: Mozilla/5.0 (X11; NetBSD amd64;
	rv:11.0) Gecko/20120404 Thunderbird/11.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <osstest-13539-mainreport@xen.org>
	<1343970755.24794.0.camel@dagon.hellion.org.uk>
	<1343980112.21372.4.camel@zakaz.uk.xensource.com>
	<501BA0E6020000780009263B@nat28.tlf.novell.com>
In-Reply-To: <501BA0E6020000780009263B@nat28.tlf.novell.com>
X-OriginatorOrg: amd.com
Cc: "Tim \(Xen.org\)" <tim@xen.org>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [xen-unstable test] 13539: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/03/12 09:59, Jan Beulich wrote:

>>>> On 03.08.12 at 09:48, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>> On Fri, 2012-08-03 at 06:12 +0100, Ian Campbell wrote:
>>> On Fri, 2012-08-03 at 00:41 +0100, xen.org wrote:
>>>> flight 13539 xen-unstable real [real]
>>>> http://www.chiark.greenend.org.uk/~xensrcts/logs/13539/ 
>>>>
>>>> Regressions :-(
>>>>
>>>> Tests which did not succeed and are blocking,
>>>> including tests which could not be run:
>>>>  build-i386-oldkern            4 xen-build                 fail REGR. vs. 
>> 13536
>>>>  build-i386                    4 xen-build                 fail REGR. vs. 
>> 13536
>>
>> 8<----------------------------------
>>
>> # HG changeset patch
>> # User Ian Campbell <ian.campbell@citrix.com>
>> # Date 1343980045 -3600
>> # Node ID 23fdca3adb3346090ea8b65b77cad7d279cf9daf
>> # Parent  95a4ab632ac25ce0ec6a245dcc46ad57d3c7030f
>> nestedhvm: fix nested page fault build error on 32-bit
>>
>>     cc1: warnings being treated as errors
>>     hvm.c: In function ‘hvm_hap_nested_page_fault’:
>>     hvm.c:1282: error: passing argument 2 of 
>> ‘nestedhvm_hap_nested_page_fault’ from incompatible pointer type 
>> /local/scratch/ianc/devel/xen-unstable.hg/xen/include/asm/hvm/nestedhvm.h:55: 
>> note: expected ‘paddr_t *’ but argument is of type ‘long unsigned int *’
>>
>> hvm_hap_nested_page_fault takes an unsigned long gpa and passes &gpa
>> to nestedhvm_hap_nested_page_fault which takes a paddr_t *. Since both
>> of the callers of hvm_hap_nested_page_fault (svm_do_nested_pgfault and
>> ept_handle_violation) actually have the gpa which they pass to
>> hvm_hap_nested_page_fault as a paddr_t I think it makes sense to
>> change the argument to hvm_hap_nested_page_fault.
> 
> And that's even outside of the current build failure - it just
> can't have worked for >4Gb guests on the 32-bit hypervisor.
> 
>> The other user of gpa in hvm_hap_nested_page_fault is a call to
>> p2m_mem_access_check, which currently also takes a paddr_t gpa but I
>> think a paddr_t is appropriate there too.
>>
>> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> 
> Acked-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Christoph Egger <Christoph.Egger@amd.com>

> 
>> diff -r 95a4ab632ac2 -r 23fdca3adb33 xen/arch/x86/hvm/hvm.c
>> --- a/xen/arch/x86/hvm/hvm.c	Fri Aug 03 08:43:10 2012 +0100
>> +++ b/xen/arch/x86/hvm/hvm.c	Fri Aug 03 08:47:25 2012 +0100
>> @@ -1242,7 +1242,7 @@ void hvm_inject_page_fault(int errcode, 
>>      hvm_inject_trap(&trap);
>>  }
>>  
>> -int hvm_hap_nested_page_fault(unsigned long gpa,
>> +int hvm_hap_nested_page_fault(paddr_t gpa,
>>                                bool_t gla_valid,
>>                                unsigned long gla,
>>                                bool_t access_r,
>> diff -r 95a4ab632ac2 -r 23fdca3adb33 xen/arch/x86/mm/p2m.c
>> --- a/xen/arch/x86/mm/p2m.c	Fri Aug 03 08:43:10 2012 +0100
>> +++ b/xen/arch/x86/mm/p2m.c	Fri Aug 03 08:47:25 2012 +0100
>> @@ -1233,7 +1233,7 @@ void p2m_mem_paging_resume(struct domain
>>      }
>>  }
>>  
>> -bool_t p2m_mem_access_check(unsigned long gpa, bool_t gla_valid, unsigned 
>> long gla, 
>> +bool_t p2m_mem_access_check(paddr_t gpa, bool_t gla_valid, unsigned long 
>> gla, 
>>                            bool_t access_r, bool_t access_w, bool_t 
>> access_x,
>>                            mem_event_request_t **req_ptr)
>>  {
>> diff -r 95a4ab632ac2 -r 23fdca3adb33 xen/include/asm-x86/hvm/hvm.h
>> --- a/xen/include/asm-x86/hvm/hvm.h	Fri Aug 03 08:43:10 2012 +0100
>> +++ b/xen/include/asm-x86/hvm/hvm.h	Fri Aug 03 08:47:25 2012 +0100
>> @@ -433,7 +433,7 @@ static inline void hvm_set_info_guest(st
>>  
>>  int hvm_debug_op(struct vcpu *v, int32_t op);
>>  
>> -int hvm_hap_nested_page_fault(unsigned long gpa,
>> +int hvm_hap_nested_page_fault(paddr_t gpa,
>>                                bool_t gla_valid, unsigned long gla,
>>                                bool_t access_r,
>>                                bool_t access_w,
>> diff -r 95a4ab632ac2 -r 23fdca3adb33 xen/include/asm-x86/p2m.h
>> --- a/xen/include/asm-x86/p2m.h	Fri Aug 03 08:43:10 2012 +0100
>> +++ b/xen/include/asm-x86/p2m.h	Fri Aug 03 08:47:25 2012 +0100
>> @@ -589,7 +589,7 @@ static inline void p2m_mem_paging_popula
>>   * been promoted with no underlying vcpu pause. If the req_ptr has been 
>> populated, 
>>   * then the caller must put the event in the ring (once having released 
>> get_gfn*
>>   * locks -- caller must also xfree the request. */
>> -bool_t p2m_mem_access_check(unsigned long gpa, bool_t gla_valid, unsigned 
>> long gla, 
>> +bool_t p2m_mem_access_check(paddr_t gpa, bool_t gla_valid, unsigned long 
>> gla, 
>>                            bool_t access_r, bool_t access_w, bool_t 
>> access_x,
>>                            mem_event_request_t **req_ptr);
>>  /* Resumes the running of the VCPU, restarting the last instruction */
>> @@ -606,7 +606,7 @@ int p2m_get_mem_access(struct domain *d,
>>                         hvmmem_access_t *access);
>>  
>>  #else
>> -static inline bool_t p2m_mem_access_check(unsigned long gpa, bool_t 
>> gla_valid, 
>> +static inline bool_t p2m_mem_access_check(paddr_t gpa, bool_t gla_valid, 
>>                                          unsigned long gla, bool_t access_r, 
>>
>>                                          bool_t access_w, bool_t access_x,
>>                                          mem_event_request_t **req_ptr)
>>
>>
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org 
>> http://lists.xen.org/xen-devel 


-- 
---to satisfy European Law for business letters:
Advanced Micro Devices GmbH
Einsteinring 24, 85689 Dornach b. Muenchen
Geschaeftsfuehrer: Alberto Bozzo
Sitz: Dornach, Gemeinde Aschheim, Landkreis Muenchen
Registergericht Muenchen, HRB Nr. 43632


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 09:13:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 09:13:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxDws-0003DN-As; Fri, 03 Aug 2012 09:13:34 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SxDwq-0003DE-Og
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 09:13:33 +0000
Received: from [85.158.138.51:29659] by server-5.bemta-3.messagelabs.com id
	4D/5D-28237-B369B105; Fri, 03 Aug 2012 09:13:31 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-174.messagelabs.com!1343985209!30300670!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjE4NjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11774 invoked from network); 3 Aug 2012 09:13:31 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 09:13:31 -0000
X-IronPort-AV: E=Sophos;i="4.77,706,1336363200"; d="scan'208";a="33443706"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Aug 2012 05:13:29 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 3 Aug 2012 05:13:29 -0400
Received: from cosworth.uk.xensource.com ([10.80.16.52] ident=ianc)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1SxDwm-0004T6-Kb;
	Fri, 03 Aug 2012 10:13:28 +0100
MIME-Version: 1.0
X-Mercurial-Node: 2c21d5c75dcbdf52987fbc47e4c8181b9236bca3
Message-ID: <2c21d5c75dcbdf52987f.1343985208@cosworth.uk.xensource.com>
User-Agent: Mercurial-patchbomb/1.6.4
Date: Fri, 3 Aug 2012 10:13:28 +0100
From: Ian Campbell <ian.campbell@citrix.com>
To: xen-devel@lists.xen.org
Cc: ian.jackson@citrix.com, Greg Wettstein <greg@wind.enjellic.com>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH V2] libxl: fix cleanup of tap devices in
	libxl__device_destroy
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1343985174 -3600
# Node ID 2c21d5c75dcbdf52987fbc47e4c8181b9236bca3
# Parent  f9d3acc755bd1b05f5d2c6a592a760953f0e83bd
libxl: fix cleanup of tap devices in libxl__device_destroy

We pass be_path to tapdisk_destroy, but by then we have already
deleted it, so the read of tapdisk-params fails. However it appears
that we need to
destroy the tap device after tearing down xenstore, to avoid the leak
reported by Greg Wettstein in
<201207312141.q6VLfJje012656@wind.enjellic.com>.

So read the tapdisk-params in the cleanup transaction, before the
remove, and pass that down to destroy_tapdisk instead. tapdisk-params
may of course be NULL if the device isn't a tap device.

There is no need to tear down the tap device from
libxl__initiate_device_remove since this ultimately calls
libxl__device_destroy.

Propagate and log errors from libxl__device_destroy_tapdisk.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
v2:
  - use libxl__xs_read_checked. libxl__device_destroy_tapdisk
    therefore takes a const char * params and dups itself a writeable
    copy.
  - prerequisites are now in tree.

diff -r f9d3acc755bd -r 2c21d5c75dcb tools/libxl/libxl_blktap2.c
--- a/tools/libxl/libxl_blktap2.c	Fri Aug 03 10:12:33 2012 +0100
+++ b/tools/libxl/libxl_blktap2.c	Fri Aug 03 10:12:54 2012 +0100
@@ -51,28 +51,37 @@ char *libxl__blktap_devpath(libxl__gc *g
 }
 
 
-void libxl__device_destroy_tapdisk(libxl__gc *gc, char *be_path)
+int libxl__device_destroy_tapdisk(libxl__gc *gc, const char *params)
 {
-    char *path, *params, *type, *disk;
+    char *type, *disk;
     int err;
     tap_list_t tap;
 
-    path = libxl__sprintf(gc, "%s/tapdisk-params", be_path);
-    if (!path) return;
+    type = libxl__strdup(gc, params);
 
-    params = libxl__xs_read(gc, XBT_NULL, path);
-    if (!params) return;
-
-    type = params;
-    disk = strchr(params, ':');
-    if (!disk) return;
+    disk = strchr(type, ':');
+    if (!disk) {
+        LOG(ERROR, "Unable to parse params %s", params);
+        return ERROR_INVAL;
+    }
 
     *disk++ = '\0';
 
     err = tap_ctl_find(type, disk, &tap);
-    if (err < 0) return;
+    if (err < 0) {
+        /* returns -errno */
+        LOGEV(ERROR, -err, "Unable to find type %s disk %s", type, disk);
+        return ERROR_FAIL;
+    }
 
-    tap_ctl_destroy(tap.id, tap.minor);
+    err = tap_ctl_destroy(tap.id, tap.minor);
+    if (err < 0) {
+        LOGEV(ERROR, -err, "Failed to destroy tap device id %d minor %d",
+              tap.id, tap.minor);
+        return ERROR_FAIL;
+    }
+
+    return 0;
 }
 
 /*
diff -r f9d3acc755bd -r 2c21d5c75dcb tools/libxl/libxl_device.c
--- a/tools/libxl/libxl_device.c	Fri Aug 03 10:12:33 2012 +0100
+++ b/tools/libxl/libxl_device.c	Fri Aug 03 10:12:54 2012 +0100
@@ -522,8 +522,10 @@ DEFINE_DEVICES_ADD(nic)
 
 int libxl__device_destroy(libxl__gc *gc, libxl__device *dev)
 {
-    char *be_path = libxl__device_backend_path(gc, dev);
+    const char *be_path = libxl__device_backend_path(gc, dev);
     const char *fe_path = libxl__device_frontend_path(gc, dev);
+    const char *tapdisk_path = GCSPRINTF("%s/%s", be_path, "tapdisk-params");
+    const char *tapdisk_params;
     xs_transaction_t t = 0;
     int rc;
 
@@ -531,6 +533,10 @@ int libxl__device_destroy(libxl__gc *gc,
         rc = libxl__xs_transaction_start(gc, &t);
         if (rc) goto out;
 
+        /* May not exist if this is not a tap device */
+        rc = libxl__xs_read_checked(gc, t, tapdisk_path, &tapdisk_params);
+        if (rc) goto out;
+
         libxl__xs_path_cleanup(gc, t, fe_path);
         libxl__xs_path_cleanup(gc, t, be_path);
 
@@ -539,7 +545,8 @@ int libxl__device_destroy(libxl__gc *gc,
         if (rc < 0) goto out;
     }
 
-    libxl__device_destroy_tapdisk(gc, be_path);
+    if (tapdisk_params)
+        rc = libxl__device_destroy_tapdisk(gc, tapdisk_params);
 
 out:
     libxl__xs_transaction_abort(gc, &t);
@@ -790,8 +797,6 @@ void libxl__initiate_device_remove(libxl
         if (rc < 0) goto out;
     }
 
-    libxl__device_destroy_tapdisk(gc, be_path);
-
     rc = libxl__ev_devstate_wait(gc, &aodev->backend_ds,
                                  device_backend_callback,
                                  state_path, XenbusStateClosed,
diff -r f9d3acc755bd -r 2c21d5c75dcb tools/libxl/libxl_internal.h
--- a/tools/libxl/libxl_internal.h	Fri Aug 03 10:12:33 2012 +0100
+++ b/tools/libxl/libxl_internal.h	Fri Aug 03 10:12:54 2012 +0100
@@ -1344,8 +1344,9 @@ _hidden char *libxl__blktap_devpath(libx
 /* libxl__device_destroy_tapdisk:
  *   Destroys any tapdisk process associated with the backend represented
  *   by be_path.
+ *   Always logs on failure.
  */
-_hidden void libxl__device_destroy_tapdisk(libxl__gc *gc, char *be_path);
+_hidden int libxl__device_destroy_tapdisk(libxl__gc *gc, const char *params);
 
 _hidden int libxl__device_from_disk(libxl__gc *gc, uint32_t domid,
                                    libxl_device_disk *disk,
diff -r f9d3acc755bd -r 2c21d5c75dcb tools/libxl/libxl_noblktap2.c
--- a/tools/libxl/libxl_noblktap2.c	Fri Aug 03 10:12:33 2012 +0100
+++ b/tools/libxl/libxl_noblktap2.c	Fri Aug 03 10:12:54 2012 +0100
@@ -28,8 +28,9 @@ char *libxl__blktap_devpath(libxl__gc *g
     return NULL;
 }
 
-void libxl__device_destroy_tapdisk(libxl__gc *gc, char *be_path)
+int libxl__device_destroy_tapdisk(libxl__gc *gc, const char *params)
 {
+    return 0;
 }
 
 /*

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 09:22:22 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 09:22:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxE5D-0003SZ-Fc; Fri, 03 Aug 2012 09:22:11 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SxE5B-0003SU-L0
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 09:22:09 +0000
Received: from [85.158.143.35:62123] by server-3.bemta-4.messagelabs.com id
	64/32-01511-0489B105; Fri, 03 Aug 2012 09:22:08 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1343985695!5333394!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc5NTM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11712 invoked from network); 3 Aug 2012 09:21:36 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 09:21:36 -0000
X-IronPort-AV: E=Sophos;i="4.77,706,1336348800"; d="scan'208";a="13836477"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Aug 2012 09:21:35 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Fri, 3 Aug 2012
	10:21:34 +0100
Message-ID: <1343985693.21372.40.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "greg@enjellic.com" <greg@enjellic.com>
Date: Fri, 3 Aug 2012 10:21:33 +0100
In-Reply-To: <201208021606.q72G6cEf025613@wind.enjellic.com>
References: <201208021606.q72G6cEf025613@wind.enjellic.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Mike McClurg <mike.mcclurg@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Blktap fixes and kernel patch.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-02 at 17:06 +0100, Dr. Greg Wettstein wrote:
> On Aug 1, 10:17am, Ian Campbell wrote:
> } Subject: Re: Blktap fixes and kernel patch.
> 
> Hi Ian et. al, hope your week is proceeding well.
> 
> > On Tue, 2012-07-31 at 22:41 +0100, Dr. Greg Wettstein wrote:
> 
> > > In the process I updated the blktap2 kernel driver to patch cleanly
> > > into the Linux 3.4 kernel.  These fixes have been validated against
> > > the 3.4 kernel as well as the 3.2 kernel.
> 
> > Just to be clear this is just a straight forward port, there's no
> > part of the deadlock fix in here?
> 
> The kernel patch is just a forward port to 3.4, there were no issues
> with the driver itself that needed to be addressed.
> 
> It seemed the community lacks ready access to the kernel driver so
> hopefully others will find a solid reference site for the patches
> useful.

I think they will, thanks.

The closest thing we have to an "upstream" for these now is the
blktap-dkms packages in Debian and Ubuntu, but I don't think those
really have a maintainer as such (I think Mike, CCd, wrangles with
them a bit). Might that be something you would like to maintain?

[...]
> > If you send a mail with a subject "Xen 4.1.x backport request
> > <commit-id>" explaining which commit it is and CC keir@xen.org &
> > ian.jackson@eu.citrix.com then we can see about getting this into a
> > future 4.1.x (perhaps even 4.1.3, not sure which stage of rcs we are at
> > there).
> > 
> > If the backport is reasonably trivial then there is often no need to
> > include it but since you have done so you might as well include the
> > patch for reference.
> 
> I will forward along the reference and the patch.

Great thanks.

> > > The second patch corrects the deadlock which occurs between the
> > > blktap2 kernel driver and the blktap2 userspace control plane.  The
> > > deadlock causes a delay in the shutdown of a XEN guest and results in
> > > the 'orphaning' of tapdisk minor number allocations.  As seems to be
> > > typical with these types of things the fix was trivially straight
> > > forward once I finally figured out what was going on.
> 
> > Thanks for this.
> > 
> > Am I right that the important functional change here is that the xs_rm
> > needs to come after we read the params node but before tap_ctl_destroy?
> > Obviously removing the node before calling libxl__device_destroy_tapdisk
> > is wrong since libxl__device_destroy_tapdisk reads from be_path!
> 
> Correct.
> 
> I debated a bit about how to do this in the cleanest fashion
> possible.  Since the be_path is passed to libxl__device_destroy_tapdisk()
> the simplest strategy seemed to be to abstract the libxl_ctx context
> and pull the entry from xenstore after the tapdisk-params key was read
> from the xenstore.

[...]
> While my fix vaguely felt like a layering violation it seemed to be
> the most correct approach.  Since libxl__device_destroy_tapdisk() is
> stubbed out in the non-blktap2 case having the teardown of the backend
> there would seem to generically fix the problem.  Provided of course
> the function can be provided with the correct context, I'm not looking
> at the current code.

It was a little more complicated in unstable, because the rm is now in a
transaction. What I ended up doing was making
libxl__device_destroy_tapdisk take the actual params string instead of
the be_path, so I could read it as part of the transaction and then call
destroy_tapdisk.

I've just sent out V2 of that patch; I CCd you.
<2c21d5c75dcbdf52987f.1343985208@cosworth.uk.xensource.com>

> > I'd really appreciate it if you could validate whether 4.2.0-rc1
> > works for you or not, I suspect not. We would usually want to fix the
> > development version before considering fixes for the stable branches
> > (even if the actual patch ends up looking totally different)
> > otherwise we run the risk of regressions in the next version.
> 
> I'd be happy to give it a test.  Is there a tarball cut or should I
> hone up my Mercurial skills?

We don't do tarballs for rcs. I suggest you use the Mercurial tip in
any case, since it has the prerequisites for my patch in it now.

Those prerequisites are still being tested, so you will need the
staging tree:
        hg clone http://xenbits.xen.org/hg/staging/xen-unstable.hg

Once they pass testing (this afternoon BST, assuming all is well) you
can drop the "/staging" bit. We'll probably do rc2 early next week.

> > Is there a simple command which will list the leaked tap devices? If
> > so we can consider adding it to the leak-check phase of the
> > automated tests (although I'm not sure how much use these make of
> > blktap)
> 
> The tap-ctl tools make managing all this pretty straightforward.
> 
> If you issue the following command:
> 
> 	tap-ctl list
> 
> On a faulty implementation, after the startup and shutdown of a blktap2
> using guest, you will see the orphaned minor.  The orphans steadily
> accumulate as guests start up and shut down.

Thanks, this rings a bell from last time I looked at this.
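
For reference, the leak check discussed here could be wired into a
test harness as a diff of `tap-ctl list` output captured before guest
start and after shutdown. A minimal sketch, with here-docs standing in
for the real `tap-ctl list` invocations (the column layout shown is
illustrative, not a documented format):

```shell
# Capture "tap-ctl list" before starting the guest and after shutting
# it down; any entry present only afterwards is a leaked tapdisk.
# Here-docs stand in for the real invocations in this sketch.
before=$(mktemp)
after=$(mktemp)

cat >"$before" <<'EOF'
EOF

cat >"$after" <<'EOF'
12345 0 0 vhd:/dev/vg/guest-disk
EOF

# comm(1) needs sorted input; lines unique to "$after" are leaks.
sort -o "$before" "$before"
sort -o "$after" "$after"
leaks=$(comm -13 "$before" "$after")

if [ -n "$leaks" ]; then
    echo "leaked tap devices:"
    echo "$leaks"
fi

rm -f "$before" "$after"
```

On a correct implementation the two captures match and nothing is
printed.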

> [..]

> It appears that 4.1.2 is not properly cleaning up xenstore.  I'm
> chasing that down now and if there are trivial correctness fixes I
> will pass them onward.

Thanks.

> 
> > > Ian for your reference the following change which you introduced to
> > > address this issue:
> > > 
> > > 79e3dbe4b659e78408a9eea76c51a601bd4a383a
> > > tapdisk: respond to destroy request before tearing down the communication channel
> > > 
> > > Is not needed and does not provide formally correct behavior in the
> > > presence of the two patches noted above.
> 
> > Is it incorrect (i.e. should be reverted) or is it just incomplete/not
> > helpful?
> 
> It's incorrect in that it changes a logically correct implementation
> only for the purposes of masking the delay, which in this case led
> to the discovery of the root cause of the problem.  Arguably libxl
> needs a bit better error reporting around all this and the patch
> arguably works against that.
> 
> I'm currently running without the patch in 4.1.2 and the tapdisk2
> devices are performing very nicely.

I remember this patch now, but I can't seem to find it in any tree; I
suspect it was never applied, which sounds like a good thing.

> Best wishes for a pleasant weekend.

Thanks, and you.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 09:22:22 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 09:22:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxE5D-0003SZ-Fc; Fri, 03 Aug 2012 09:22:11 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SxE5B-0003SU-L0
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 09:22:09 +0000
Received: from [85.158.143.35:62123] by server-3.bemta-4.messagelabs.com id
	64/32-01511-0489B105; Fri, 03 Aug 2012 09:22:08 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1343985695!5333394!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc5NTM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11712 invoked from network); 3 Aug 2012 09:21:36 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 09:21:36 -0000
X-IronPort-AV: E=Sophos;i="4.77,706,1336348800"; d="scan'208";a="13836477"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Aug 2012 09:21:35 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Fri, 3 Aug 2012
	10:21:34 +0100
Message-ID: <1343985693.21372.40.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "greg@enjellic.com" <greg@enjellic.com>
Date: Fri, 3 Aug 2012 10:21:33 +0100
In-Reply-To: <201208021606.q72G6cEf025613@wind.enjellic.com>
References: <201208021606.q72G6cEf025613@wind.enjellic.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Mike McClurg <mike.mcclurg@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Blktap fixes and kernel patch.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-02 at 17:06 +0100, Dr. Greg Wettstein wrote:
> On Aug 1, 10:17am, Ian Campbell wrote:
> } Subject: Re: Blktap fixes and kernel patch.
> 
> Hi Ian et. al, hope your week is proceeding well.
> 
> > On Tue, 2012-07-31 at 22:41 +0100, Dr. Greg Wettstein wrote:
> 
> > > In the process I updated the blktap2 kernel driver to patch cleanly
> > > into the Linux 3.4 kernel.  These fixes have been validated against
> > > the 3.4 kernel as well as the 3.2 kernel.
> 
> > Just to be clear this is just a straight forward port, there's no
> > part of the deadlock fix in here?
> 
> The kernel patch is just a forward port to 3.4, there were no issues
> with the driver itself that needed to be addressed.
> 
> It seemed the community lacks ready access to the kernel driver so
> hopefully others will find a solid reference site for the patches
> useful.

I think they will, thanks.

The closest thing we have for an "upstream" for these now is the
blktap-dkms packages which are in Debian and Ubuntu, but I don't think
those really have a maintainer as such (I think Mike, CCd, wrangles with
them a bit). Might that be something you would like to maintain?

[...]
> > If you send a mail with a subject "Xen 4.1.x backport request
> > <commit-id>" explaining which commit it is and CC keir@xen.org &
> > ian.jackson@eu.citrix.com then we can see about getting this into a
> > future 4.1.x (perhaps even 4.1.3, not sure which stage of rcs we are at
> > there).
> > 
> > If the backport is reasonably trivial then there is often no need to
> > include it but since you have done so you might as well include the
> > patch for reference.
> 
> I will forward along the reference and the patch.

Great thanks.

> > > The second patch corrects the deadlock which occurs between the
> > > blktap2 kernel driver and the blktap2 userspace control plane.  The
> > > deadlock causes a delay in the shutdown of a XEN guest and results in
> > > the 'orphaning' of tapdisk minor number allocations.  As seems to be
> > > typical with these types of things the fix was trivially straight
> > > forward once I finally figured out what was going on.
> 
> > Thanks for this.
> > 
> > Am I right that the important functional change here is that the xs_rm
> > needs to come after we read the params node but before tap_ctl_destroy?
> > Obviously removing the node before calling libxl__device_destroy_tapdisk
> > is wrong since libxl__device_destroy_tapdisk reads from be_path!
> 
> Correct.
> 
> I debated a bit about how to do this in the cleanest fashion
> possible.  Since the be_path is passed to libxl__device_destroy_tapdisk()
> the simplest strategy seemed to be to abstract the libxl_ctx context
> and pull the entry from xenstore after the tapdisk-params key was read
> from the xenstore.

[...]
> While my fix vaguely felt like a layering violation it seemed to be
> the most correct approach.  Since libxl__device_destroy_tapdisk() is
> stubbed out in the non-blktap2 case having the teardown of the backend
> there would seem to generically fix the problem.  Provided, of course,
> that the function can be given the correct context; I'm not looking
> at the current code.

It was a little more complicated in unstable, because the rm is now in a
transaction. What I ended up doing was making
libxl__device_destroy_tapdisk take the actual params string instead of
the be_path, so I could read it as part of the transaction and then call
destroy_tapdisk.
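
For reference, the ordering this settles on (read tapdisk-params, remove
the node, then destroy the tapdisk) can be sketched with mocked-up shell
functions; xs_read, xs_rm, tap_destroy and the params string here are all
invented stand-ins for the real xenstore/tap-ctl operations:

```shell
# Mocked sketch of the ordering fix; none of these names are the real APIs.
order=""
node="present"

xs_read() {             # read .../tapdisk-params while the node still exists
    [ "$node" = "present" ] || return 1
    params="aio:/srv/xen/disk0.img"
    order="${order}read:"
}

xs_rm() {               # then remove the backend node (inside the transaction)
    node="gone"
    order="${order}rm:"
}

tap_destroy() {         # finally tear down the tapdisk from the saved params
    [ -n "$1" ] || return 1
    order="${order}destroy"
}

xs_read                 # 1. read params first
xs_rm                   # 2. remove the xenstore node
tap_destroy "$params"   # 3. destroy using the copy we already hold
echo "$order"           # read:rm:destroy
```

The point is simply that destroy never has to look at the (by then
removed) be_path, because the params were captured inside the transaction.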

I've just sent out V2 of that patch; I CC'd you.
<2c21d5c75dcbdf52987f.1343985208@cosworth.uk.xensource.com>

> > I'd really appreciate it if you could validate whether 4.2.0-rc1
> > works for you or not; I suspect not. We would usually want to fix the
> > development version before considering fixes for the stable branches
> > (even if the actual patch ends up looking totally different)
> > otherwise we run the risk of regressions in the next version.
> 
> I'd be happy to give it a test.  Is there a tarball cut or should I
> hone up my Mercurial skills?

We don't do tarballs for rcs. In any case I suggest you use the
Mercurial tip, since it has the prerequisites for my patch in it
now.

Those prerequisites are still being tested, so you will need the
staging tree:
        hg clone http://xenbits.xen.org/hg/staging/xen-unstable.hg

Once they pass testing (this afternoon BST, assuming all is well) you
can drop the "/staging" bit. We'll probably do rc2 early next week.

> > Is there a simple command which will list the leaked tap devices? If
> > so we can consider adding it to the leak-check phase of the
> > automated tests (although I'm not sure how much use these make of
> > blktap)
> 
> The tap-ctl tools make managing all this pretty straightforward.
> 
> If you issue the following command:
> 
> 	tap-ctl list
> 
> On a faulty implementation, after the startup and shutdown of a
> blktap2-using guest you will see the orphaned minor.  The orphans
> steadily accumulate as guests start up and shut down.

Thanks, this rings a bell from last time I looked at this.
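
For the leak-check phase, something along these lines could count the
orphans; the `tap-ctl list` capture below is invented, and the real
column layout may well differ:

```shell
# Invented capture of `tap-ctl list` after one guest start/stop cycle;
# an orphaned minor shows up with no attached tapdisk process.
sample='1234  0  0  vhd /srv/xen/guest0.vhd
-     1  -  -   -'

# Count rows that have a minor allocated but no pid.
orphans=$(printf '%s\n' "$sample" | awk '$1 == "-" { n++ } END { print n + 0 }')
echo "orphaned minors: $orphans"
```

A non-zero count after a clean start/stop cycle would flag the leak.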

> [..]

> It appears that 4.1.2 is not properly cleaning up xenstore.  I'm
> chasing that down now and if there are trivial correctness fixes I
> will pass them onward.

Thanks.

> 
> > > Ian for your reference the following change which you introduced to
> > > address this issue:
> > > 
> > > 79e3dbe4b659e78408a9eea76c51a601bd4a383a
> > > tapdisk: respond to destroy request before tearing down the commuication channel
> > > 
> > > Is not needed and does not provide formally correct behavior in the
> > > presence of the two patches noted above.
> 
> > Is it incorrect (i.e. should be reverted) or is it just incomplete/not
> > helpful?
> 
> It's incorrect in that it changes a logically correct implementation
> only for the purpose of masking the delay, which in this case led
> to the discovery of the root cause of the problem.  Arguably libxl
> needs somewhat better error reporting around all this, and the patch
> works against that.
> 
> I'm currently running without the patch in 4.1.2 and the tapdisk2
> devices are performing very nicely.

I remember this patch now, but I can't seem to find it in any tree; I
suspect it was never applied, which sounds like a good thing.

> Best wishes for a pleasant weekend.

Thanks, and you.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 09:23:18 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 09:23:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxE68-0003VV-UP; Fri, 03 Aug 2012 09:23:08 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SxE67-0003VJ-7L
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 09:23:07 +0000
Received: from [85.158.143.99:13129] by server-1.bemta-4.messagelabs.com id
	74/D7-24392-A789B105; Fri, 03 Aug 2012 09:23:06 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-216.messagelabs.com!1343985786!20307944!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3010 invoked from network); 3 Aug 2012 09:23:06 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-216.messagelabs.com with SMTP;
	3 Aug 2012 09:23:06 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 03 Aug 2012 10:23:04 +0100
Message-Id: <501BB4C202000078000926DB@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 03 Aug 2012 10:23:46 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "George Dunlap" <George.Dunlap@eu.citrix.com>,
	"Dario Faggioli" <raistlin@linux.it>
References: <1343837796.4958.32.camel@Solace>
	<501A67C502000078000921FF@nat28.tlf.novell.com>
	<1343914490.4873.18.camel@Solace>
	<CAFLBxZajiMKPvXG35boG9poYNbzFDgh5d-oRDA3T7gS55ofmrg@mail.gmail.com>
In-Reply-To: <CAFLBxZajiMKPvXG35boG9poYNbzFDgh5d-oRDA3T7gS55ofmrg@mail.gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Andre Przywara <andre.przywara@amd.com>,
	Anil Madhavapeddy <anil@recoil.org>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>, Yang Z Zhang <yang.z.zhang@intel.com>
Subject: Re: [Xen-devel] NUMA TODO-list for xen-devel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 02.08.12 at 18:36, George Dunlap <George.Dunlap@eu.citrix.com> wrote:
> On Thu, Aug 2, 2012 at 2:34 PM, Dario Faggioli <raistlin@linux.it> wrote:
>> On Thu, 2012-08-02 at 10:43 +0100, Jan Beulich wrote:
>>> >>> On 01.08.12 at 18:16, Dario Faggioli <raistlin@linux.it> wrote:
>>> >     - Virtual NUMA topology exposure to guests (a.k.a guest-numa). If a
>>> >       guest ends up on more than one node, make sure it knows it's
>>> >       running on a NUMA platform (smaller than the actual host, but
>>> >       still NUMA). This interacts with some of the above points:
>>>
>>> The question is whether this is really useful beyond the (I would
>>> suppose) relatively small set of cases where migration isn't
>>> needed.
>>>
>> Mmm... Not sure I'm getting what you're saying here, sorry. Are you
>> suggesting that exposing a virtual topology is not a good idea as it
>> poses constraints/prevents live migration?
>>
>> If yes, well, I mostly agree that this is a huge issue, and that's why
>> I think we need some bright idea on how to deal with it. I mean, it's
>> easy to make it optional and let it automatically disable migration,
>> giving users the choice what they prefer, but I think this is more
>> dodging the problem than dealing with it! :-P
>>
>>> >        * consider this during automatic placement for
>>> >          resuming/migrating domains (if they have a virtual topology,
>>> >          better not to change it);
>>> >        * consider this during memory migration (it can change the
>>> >          actual topology, should we update it on-line or disable memory
>>> >          migration?)
> 
> I think we could use cpu hot-plug to change the "virtual topology" of
> VMs, couldn't we?  We could probably even do that on a running guest
> if we really needed to.

Hmm, not sure - using hotplug behind the back of the guest might
be possible, but you'd first need to hot-unplug the vCPU. That's
something that I don't think you can do on HVM guests (and for
PV guests, guest visible NUMA support makes even less sense
than for HVM ones).

Jan



From xen-devel-bounces@lists.xen.org Fri Aug 03 09:29:42 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 09:29:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxECK-0003kD-Rt; Fri, 03 Aug 2012 09:29:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SxECJ-0003k8-9M
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 09:29:31 +0000
Received: from [85.158.143.99:15894] by server-2.bemta-4.messagelabs.com id
	83/3D-17938-AF99B105; Fri, 03 Aug 2012 09:29:30 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-5.tower-216.messagelabs.com!1343986169!29742350!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22241 invoked from network); 3 Aug 2012 09:29:29 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-5.tower-216.messagelabs.com with SMTP;
	3 Aug 2012 09:29:29 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 03 Aug 2012 10:29:29 +0100
Message-Id: <501BB64402000078000926F0@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 03 Aug 2012 10:30:12 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "W. Michael Petullo" <mike@flyn.org>,
	"Konrad Rzeszutek Wilk" <konrad.wilk@oracle.com>
References: <20120702215042.GA15140@imp.flyn.org>
	<20120703100223.GB2058@reaktio.net>
	<20120703121604.GA3987@imp.flyn.org>
	<20120703125934.GA25644@phenom.dumpdata.com>
	<20120802165116.GA26474@phenom.dumpdata.com>
In-Reply-To: <20120802165116.GA26474@phenom.dumpdata.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: wei.wang2@amd.com, xen@lists.fedoraproject.org,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Fedora-xen] Xen/Linux 3.4.2 performance
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 02.08.12 at 18:51, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> Michael opened a bug and on it we found that the xen-acpi-processor.off=1
> would solve the performance problem. What that does is to not upload C-states
> and P-state information to the hypervisor. So I pulled up an AMD box
> and found that the problem only occurs if the hypervisor enters C-2
> states. If I do 'xenpm set-max-cstate 1' it gets back to working nicely.
> 
> Wei, any ideas? This is with Xen 4.1

Is that 4.1 you're using lacking a backport of -unstable
25195:a06e6cdeafe3 (4.1-testing 23298:435493696053)?

Jan



From xen-devel-bounces@lists.xen.org Fri Aug 03 09:52:24 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 09:52:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxEY9-00045p-18; Fri, 03 Aug 2012 09:52:05 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andre.Przywara@amd.com>) id 1SxEY8-00045k-2M
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 09:52:04 +0000
Received: from [85.158.139.83:9279] by server-4.bemta-5.messagelabs.com id
	47/D2-27831-34F9B105; Fri, 03 Aug 2012 09:52:03 +0000
X-Env-Sender: Andre.Przywara@amd.com
X-Msg-Ref: server-13.tower-182.messagelabs.com!1343987522!29506163!1
X-Originating-IP: [213.199.154.142]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15826 invoked from network); 3 Aug 2012 09:52:02 -0000
Received: from db3ehsobe004.messaging.microsoft.com (HELO
	db3outboundpool.messaging.microsoft.com) (213.199.154.142)
	by server-13.tower-182.messagelabs.com with AES128-SHA encrypted SMTP;
	3 Aug 2012 09:52:02 -0000
Received: from mail58-db3-R.bigfish.com (10.3.81.227) by
	DB3EHSOBE005.bigfish.com (10.3.84.25) with Microsoft SMTP Server id
	14.1.225.23; Fri, 3 Aug 2012 09:52:02 +0000
Received: from mail58-db3 (localhost [127.0.0.1])	by mail58-db3-R.bigfish.com
	(Postfix) with ESMTP id E960E120133;
	Fri,  3 Aug 2012 09:52:01 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:163.181.249.109; KIP:(null); UIP:(null);
	IPV:NLI; H:ausb3twp02.amd.com; RD:none; EFVD:NLI
X-SpamScore: -11
X-BigFish: VPS-11(zzbb2dI98dI9371I936eI1432I179dNzz1202hzz8275bhz2dh668h839hd25he5bhf0ah107ah)
Received: from mail58-db3 (localhost.localdomain [127.0.0.1]) by mail58-db3
	(MessageSwitch) id 1343987519980492_8694;
	Fri,  3 Aug 2012 09:51:59 +0000 (UTC)
Received: from DB3EHSMHS004.bigfish.com (unknown [10.3.81.233])	by
	mail58-db3.bigfish.com (Postfix) with ESMTP id EC5C11E0060;
	Fri,  3 Aug 2012 09:51:59 +0000 (UTC)
Received: from ausb3twp02.amd.com (163.181.249.109) by
	DB3EHSMHS004.bigfish.com (10.3.87.104) with Microsoft SMTP Server id
	14.1.225.23; Fri, 3 Aug 2012 09:51:58 +0000
X-WSS-ID: 0M86BEH-02-APT-02
X-M-MSG: 
Received: from sausexedgep02.amd.com (sausexedgep02-ext.amd.com
	[163.181.249.73])	(using TLSv1 with cipher AES128-SHA (128/128
	bits))	(No
	client certificate requested)	by ausb3twp02.amd.com (Axway MailGate
	3.8.1)
	with ESMTP id 2A6BDC80E8;	Fri,  3 Aug 2012 04:51:52 -0500 (CDT)
Received: from SAUSEXDAG05.amd.com (163.181.55.6) by sausexedgep02.amd.com
	(163.181.36.59) with Microsoft SMTP Server (TLS) id 8.3.192.1;
	Fri, 3 Aug 2012 04:52:14 -0500
Received: from storexhtp01.amd.com (172.24.4.3) by sausexdag05.amd.com
	(163.181.55.6) with Microsoft SMTP Server (TLS) id 14.1.323.3;
	Fri, 3 Aug 2012 04:51:54 -0500
Received: from gwo.osrc.amd.com (165.204.16.204) by storexhtp01.amd.com
	(172.24.4.3) with Microsoft SMTP Server id 8.3.213.0; Fri, 3 Aug 2012
	05:51:53 -0400
Received: from mail.osrc.amd.com (aluminium.osrc.amd.com [165.204.15.141])	by
	gwo.osrc.amd.com (Postfix) with ESMTP id 2745D49C20C;
	Fri,  3 Aug 2012 10:51:52 +0100 (BST)
Received: from [165.204.15.38] (wanderer.osrc.amd.com [165.204.15.38])	by
	mail.osrc.amd.com (Postfix) with ESMTPS id CEEC1594037; Fri,  3 Aug 2012
	11:51:51 +0200 (CEST)
Message-ID: <501B9E81.1020302@amd.com>
Date: Fri, 3 Aug 2012 11:48:49 +0200
From: Andre Przywara <andre.przywara@amd.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:13.0) Gecko/20120615 Thunderbird/13.0.1
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1343837796.4958.32.camel@Solace>
	<501A67C502000078000921FF@nat28.tlf.novell.com>
	<1343914490.4873.18.camel@Solace>
	<CAFLBxZajiMKPvXG35boG9poYNbzFDgh5d-oRDA3T7gS55ofmrg@mail.gmail.com>
	<501BB4C202000078000926DB@nat28.tlf.novell.com>
In-Reply-To: <501BB4C202000078000926DB@nat28.tlf.novell.com>
X-OriginatorOrg: amd.com
Cc: Anil Madhavapeddy <anil@recoil.org>,
	George Dunlap <George.Dunlap@eu.citrix.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>, Dario Faggioli <raistlin@linux.it>,
	Yang Z Zhang <yang.z.zhang@intel.com>
Subject: Re: [Xen-devel] NUMA TODO-list for xen-devel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/03/2012 11:23 AM, Jan Beulich wrote:
>>>> On 02.08.12 at 18:36, George Dunlap <George.Dunlap@eu.citrix.com> wrote:
>> On Thu, Aug 2, 2012 at 2:34 PM, Dario Faggioli <raistlin@linux.it> wrote:
>>> On Thu, 2012-08-02 at 10:43 +0100, Jan Beulich wrote:
>>>>>>> On 01.08.12 at 18:16, Dario Faggioli <raistlin@linux.it> wrote:
>>>>>      - Virtual NUMA topology exposure to guests (a.k.a guest-numa). If a
>>>>>        guest ends up on more than one node, make sure it knows it's
>>>>>        running on a NUMA platform (smaller than the actual host, but
>>>>>        still NUMA). This interacts with some of the above points:
>>>>
>>>> The question is whether this is really useful beyond the (I would
>>>> suppose) relatively small set of cases where migration isn't
>>>> needed.
>>>>
>>> Mmm... Not sure I'm getting what you're saying here, sorry. Are you
>>> suggesting that exposing a virtual topology is not a good idea as it
>>> poses constraints/prevents live migration?

Honestly, what would the problems with migration be? NUMA awareness is
really a software optimization, so we will not break anything if the
advertised topology isn't the real one. This is especially true if we
lower the number of NUMA nodes. Say the guest starts with two nodes and
then gets migrated to a machine where it can happily live in one node.
There would be some extra effort by the guest OS to obey the virtual
NUMA topology, but if there isn't actually a NUMA penalty anymore, this
shouldn't really hurt, right?
Even if we needed to go to a machine with more nodes for a certain
guest than before, that is simply what we have today: guest NUMA
unawareness. I am not sure this is really a migration showstopper, and
it is certainly not a NUMA guest showstopper.

But we could make it a config file option, so we leave this decision to
the admin. I have talked to people with huge guests; they keep asking me
about this feature.

>>>
>>> If yes, well, I mostly agree that this is a huge issue, and that's why
>>> I think we need some bright idea on how to deal with it. I mean, it's
>>> easy to make it optional and let it automatically disable migration,
>>> giving users the choice what they prefer, but I think this is more
>>> dodging the problem than dealing with it! :-P
>>>
>>>>>         * consider this during automatic placement for
>>>>>           resuming/migrating domains (if they have a virtual topology,
>>>>>           better not to change it);
>>>>>         * consider this during memory migration (it can change the
>>>>>           actual topology, should we update it on-line or disable memory
>>>>>           migration?)
>>
>> I think we could use cpu hot-plug to change the "virtual topology" of
>> VMs, couldn't we?  We could probably even do that on a running guest
>> if we really needed to.
>
> Hmm, not sure - using hotplug behind the back of the guest might
> be possible, but you'd first need to hot-unplug the vCPU. That's
> something that I don't think you can do on HVM guests (and for
> PV guests, guest visible NUMA support makes even less sense
> than for HVM ones).

I don't think that hotplug would really work. I checked this some time
ago; at least the Linux NUMA code cannot really be fooled by this. The
SRAT table is firmware-defined and static by nature, so there is no
code in Linux to change the NUMA topology at runtime. This is especially
true for the memory layout.
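
As a quick illustration of that static-table point: the node topology
Linux derived from the SRAT at boot is exactly what sysfs exposes, and
there is no interface for rewriting it at runtime (the path below is the
standard one; a non-NUMA or container environment may show zero nodes):

```shell
# Count the NUMA nodes the running kernel parsed from its (static) SRAT;
# prints 0 where the sysfs directory does not exist or has no nodeN entries.
nodes=$(ls /sys/devices/system/node 2>/dev/null | grep -c '^node' || true)
echo "NUMA nodes visible to this kernel: ${nodes:-0}"
```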

But as said above, I don't really buy this as an argument against guest
NUMA. At least provide it as an option to people who know what they are
doing.

Regards,
Andre.


-- 
Andre Przywara
AMD-OSRC (Dresden)
Tel: x29712



	xen-devel <xen-devel@lists.xen.org>, Dario Faggioli <raistlin@linux.it>,
	Yang Z Zhang <yang.z.zhang@intel.com>
Subject: Re: [Xen-devel] NUMA TODO-list for xen-devel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/03/2012 11:23 AM, Jan Beulich wrote:
>>>> On 02.08.12 at 18:36, George Dunlap <George.Dunlap@eu.citrix.com> wrote:
>> On Thu, Aug 2, 2012 at 2:34 PM, Dario Faggioli <raistlin@linux.it> wrote:
>>> On Thu, 2012-08-02 at 10:43 +0100, Jan Beulich wrote:
>>>>>>> On 01.08.12 at 18:16, Dario Faggioli <raistlin@linux.it> wrote:
>>>>>      - Virtual NUMA topology exposure to guests (a.k.a guest-numa). If a
>>>>>        guest ends up on more than one nodes, make sure it knows it's
>>>>>        running on a NUMA platform (smaller than the actual host, but
>>>>>        still NUMA). This interacts with some of the above points:
>>>>
>>>> The question is whether this is really useful beyond the (I would
>>>> suppose) relatively small set of cases where migration isn't
>>>> needed.
>>>>
>>> Mmm... Not sure I'm getting what you're saying here, sorry. Are you
>>> suggesting that exposing a virtual topology is not a good idea as it
>>> poses constraints/prevents live migration?

Honestly, what would be the problems with migration? NUMA awareness is 
actually a software optimization, so we will not really break anything 
if the advertised topology isn't the real one. This is especially true 
if we lower the number of NUMA nodes. Say the guest starts with two 
nodes and then gets migrated to a machine where it can happily live in 
one node. There would be some extra effort by the guest OS to obey the 
virtual NUMA topology, but if there isn't actually a NUMA penalty 
anymore, this shouldn't really hurt, right?
Even if we needed to go to a machine with more nodes for a certain 
guest than before, that is exactly what we have today: guest NUMA 
unawareness. I am not sure this is really a migration showstopper, and 
it is certainly not a NUMA guest showstopper.

But we could make it a config file option, so we leave this decision to 
the admin. I have talked to people with huge guests; they keep asking me 
about this feature.
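Purely hypothetical syntax, just to sketch what such a knob could look 
like in a guest config file (no such option exists in xl as of this 
writing):

```
# hypothetical: let the admin opt in to a 2-node virtual topology,
# accepting the migration caveats discussed above
guest_numa = 1
vnuma_nodes = 2
```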

>>>
>>> If yes, well, I mostly agree that this is an huge issue, and that's why
>>> I think wee need some bright idea on how to deal with it. I mean, it's
>>> easy to make it optional and let it automatically disable migration,
>>> giving users the choice what they prefer, but I think this is more
>>> dodging the problem than dealing with it! :-P
>>>
>>>>>         * consider this during automatic placement for
>>>>>           resuming/migrating domains (if they have a virtual topology,
>>>>>           better not to change it);
>>>>>         * consider this during memory migration (it can change the
>>>>>           actual topology, should we update it on-line or disable memory
>>>>>           migration?)
>>
>> I think we could use cpu hot-plug to change the "virtual topology" of
>> VMs, couldn't we?  We could probably even do that on a running guest
>> if we really needed to.
>
> Hmm, not sure - using hotplug behind the back of the guest might
> be possible, but you'd first need to hot-unplug the vCPU. That's
> something that I don't think you can do on HVM guests (and for
> PV guests, guest visible NUMA support makes even less sense
> than for HVM ones).

I don't think that hotplug would really work. I checked this some time 
ago; at least the Linux NUMA code cannot really be fooled by this. The 
SRAT table is firmware-defined and static by nature, so there is no 
code in Linux to change the NUMA topology at runtime. This is 
especially true for the memory layout.

But as said above, I don't really buy this as an argument against guest 
NUMA. At least provide it as an option to people who know what they are 
doing.

Regards,
Andre.


-- 
Andre Przywara
AMD-OSRC (Dresden)
Tel: x29712


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 10:02:47 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 10:02:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxEi7-0004Ki-5j; Fri, 03 Aug 2012 10:02:23 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SxEi5-0004Kd-66
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 10:02:21 +0000
Received: from [85.158.139.83:49091] by server-10.bemta-5.messagelabs.com id
	3F/0A-02190-CA1AB105; Fri, 03 Aug 2012 10:02:20 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-182.messagelabs.com!1343988139!18803542!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11051 invoked from network); 3 Aug 2012 10:02:19 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-182.messagelabs.com with SMTP;
	3 Aug 2012 10:02:19 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 03 Aug 2012 11:02:18 +0100
Message-Id: <501BBDF50200007800092728@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 03 Aug 2012 11:03:01 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andre Przywara" <andre.przywara@amd.com>
References: <1343837796.4958.32.camel@Solace>
	<501A67C502000078000921FF@nat28.tlf.novell.com>
	<1343914490.4873.18.camel@Solace>
	<CAFLBxZajiMKPvXG35boG9poYNbzFDgh5d-oRDA3T7gS55ofmrg@mail.gmail.com>
	<501BB4C202000078000926DB@nat28.tlf.novell.com>
	<501B9E81.1020302@amd.com>
In-Reply-To: <501B9E81.1020302@amd.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Anil Madhavapeddy <anil@recoil.org>,
	George Dunlap <George.Dunlap@eu.citrix.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>, Dario Faggioli <raistlin@linux.it>,
	Yang Z Zhang <yang.z.zhang@intel.com>
Subject: Re: [Xen-devel] NUMA TODO-list for xen-devel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 03.08.12 at 11:48, Andre Przywara <andre.przywara@amd.com> wrote:
> On 08/03/2012 11:23 AM, Jan Beulich wrote:
>>>>> On 02.08.12 at 18:36, George Dunlap <George.Dunlap@eu.citrix.com> wrote:
>>> On Thu, Aug 2, 2012 at 2:34 PM, Dario Faggioli <raistlin@linux.it> wrote:
>>>> On Thu, 2012-08-02 at 10:43 +0100, Jan Beulich wrote:
>>>>>>>> On 01.08.12 at 18:16, Dario Faggioli <raistlin@linux.it> wrote:
>>>>>>      - Virtual NUMA topology exposure to guests (a.k.a guest-numa). If a
>>>>>>        guest ends up on more than one nodes, make sure it knows it's
>>>>>>        running on a NUMA platform (smaller than the actual host, but
>>>>>>        still NUMA). This interacts with some of the above points:
>>>>>
>>>>> The question is whether this is really useful beyond the (I would
>>>>> suppose) relatively small set of cases where migration isn't
>>>>> needed.
>>>>>
>>>> Mmm... Not sure I'm getting what you're saying here, sorry. Are you
>>>> suggesting that exposing a virtual topology is not a good idea as it
>>>> poses constraints/prevents live migration?
> 
> Honestly, what would be the problems with migration? NUMA awareness is 
> actually a software optimization, so we will not really break anything 
> if the advertised topology isn't the real one.

Sure, nothing would break, but the purpose of the whole feature
is improving performance, and that might get entirely lost (or
even worse) after a migration to a different topology host.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 10:05:24 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 10:05:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxEki-0004Tm-NN; Fri, 03 Aug 2012 10:05:04 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1SxEkg-0004Th-O0
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 10:05:02 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1343988294!8952645!1
X-Originating-IP: [74.125.83.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25233 invoked from network); 3 Aug 2012 10:04:55 -0000
Received: from mail-ee0-f45.google.com (HELO mail-ee0-f45.google.com)
	(74.125.83.45)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 10:04:55 -0000
Received: by eeke53 with SMTP id e53so128556eek.32
	for <xen-devel@lists.xen.org>; Fri, 03 Aug 2012 03:04:54 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:user-agent:date:subject:from:to:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=Hn6j5lVjA89Yx7jTDcy1O4iVm/wL2C2wJPDBHyIBuV4=;
	b=ATHXweDDIMhpVaVgHQhgLf0P+ajWxy1g8qbHshgTy42X+a8E6jYhklV2JRGcc+DUR/
	csYjs4yEDzkp7Ey2LdXbMvfUGxbkTOCHWN2BkNWyahluMPeUXKeOpbgCEQVBky3LywOk
	9uI3GBSRicXt0BTRb5qUDqR60/Ny6QkeydB9DTKFQ+U5ohr3YUc6BxUc5Q6DgU1rvIEA
	dqIpvRSdAIi2+RZljQLm0W5xX00R6UPcKossAKFGzutLYIThNVGe25b0cO9/DnL+5jBG
	kgKxkG4vF7ui+0uyW2c8DdiLEFr5putSnN0L+/v/ZLDG+t3Z9HJ3fORyT0qOGebEUcu/
	Pdtg==
Received: by 10.14.175.130 with SMTP id z2mr1484279eel.0.1343988294486;
	Fri, 03 Aug 2012 03:04:54 -0700 (PDT)
Received: from [192.168.1.3] (host86-178-46-238.range86-178.btcentralplus.com.
	[86.178.46.238])
	by mx.google.com with ESMTPS id s8sm24373833eeo.8.2012.08.03.03.04.53
	(version=SSLv3 cipher=OTHER); Fri, 03 Aug 2012 03:04:54 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.33.0.120411
Date: Fri, 03 Aug 2012 11:04:47 +0100
From: Keir Fraser <keir@xen.org>
To: Jan Beulich <JBeulich@suse.com>,
	xen-devel <xen-devel@lists.xen.org>
Message-ID: <CC4160CF.47AAC%keir@xen.org>
Thread-Topic: [Xen-devel] [PATCH] x86: fix wait code asm() constraints
Thread-Index: Ac1xX2mIHLBZS20rkUCfZ63g32bRZA==
In-Reply-To: <501BAA990200007800092661@nat28.tlf.novell.com>
Mime-version: 1.0
Subject: Re: [Xen-devel] [PATCH] x86: fix wait code asm() constraints
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 03/08/2012 09:40, "Jan Beulich" <JBeulich@suse.com> wrote:

> In __prepare_to_wait(), properly mark early clobbered registers. By
> doing so, we at once eliminate the need to save/restore rCX and rDI.
> 
> In check_wakeup_from_wait(), make the current constraints match by
> removing the code that actually alters registers. By adjusting the
> resume address in __prepare_to_wait(), we can simply re-use the copying
> operation there (rather than doing a second pointless copy in the
> opposite direction after branching to the resume point), which at once
> eliminates the need for re-loading rCX and rDI inside the asm().

First of all, this is a code improvement rather than a bug fix, right? The
asm constraints are correct for the code as it is, I believe.

It also seems the patch splits into two independent parts:

 A. I'm not sure whether trading the rCX/rDI save/restore for more
complex asm constraints makes sense.

 B. Separately, the adjustment of the restore return address, and avoiding
needing to reload rCX/rDI after label 1, as well as avoiding the copy in
check_wakeup_from_wait(), is very nice.

I'm inclined to take the second part only, and make it clearer in the
changeset comment that it is not a bug fix.

What do you think?

 -- Keir

> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> --- a/xen/common/wait.c
> +++ b/xen/common/wait.c
> @@ -126,6 +126,7 @@ static void __prepare_to_wait(struct wai
>  {
>      char *cpu_info = (char *)get_cpu_info();
>      struct vcpu *curr = current;
> +    unsigned long dummy;
>  
>      ASSERT(wqv->esp == 0);
>  
> @@ -140,27 +141,27 @@ static void __prepare_to_wait(struct wai
>  
>      asm volatile (
>  #ifdef CONFIG_X86_64
> -        "push %%rax; push %%rbx; push %%rcx; push %%rdx; push %%rdi; "
> +        "push %%rax; push %%rbx; push %%rdx; "
>          "push %%rbp; push %%r8; push %%r9; push %%r10; push %%r11; "
>          "push %%r12; push %%r13; push %%r14; push %%r15; call 1f; "
> -        "1: mov 80(%%rsp),%%rdi; mov 96(%%rsp),%%rcx; mov %%rsp,%%rsi; "
> +        "1: mov %%rsp,%%rsi; addq $2f-1b,(%%rsp); "
>          "sub %%rsi,%%rcx; cmp %3,%%rcx; jbe 2f; "
>          "xor %%esi,%%esi; jmp 3f; "
>          "2: rep movsb; mov %%rsp,%%rsi; 3: pop %%rax; "
>          "pop %%r15; pop %%r14; pop %%r13; pop %%r12; "
>          "pop %%r11; pop %%r10; pop %%r9; pop %%r8; "
> -        "pop %%rbp; pop %%rdi; pop %%rdx; pop %%rcx; pop %%rbx; pop %%rax"
> +        "pop %%rbp; pop %%rdx; pop %%rbx; pop %%rax"
>  #else
> -        "push %%eax; push %%ebx; push %%ecx; push %%edx; push %%edi; "
> +        "push %%eax; push %%ebx; push %%edx; "
>          "push %%ebp; call 1f; "
> -        "1: mov 8(%%esp),%%edi; mov 16(%%esp),%%ecx; mov %%esp,%%esi; "
> +        "1: mov %%esp,%%esi; addl $2f-1b,(%%esp); "
>          "sub %%esi,%%ecx; cmp %3,%%ecx; jbe 2f; "
>          "xor %%esi,%%esi; jmp 3f; "
>          "2: rep movsb; mov %%esp,%%esi; 3: pop %%eax; "
> -        "pop %%ebp; pop %%edi; pop %%edx; pop %%ecx; pop %%ebx; pop %%eax"
> +        "pop %%ebp; pop %%edx; pop %%ebx; pop %%eax"
>  #endif
> -        : "=S" (wqv->esp)
> -        : "c" (cpu_info), "D" (wqv->stack), "i" (PAGE_SIZE)
> +        : "=&S" (wqv->esp), "=&c" (dummy), "=&D" (dummy)
> +        : "i" (PAGE_SIZE), "1" (cpu_info), "2" (wqv->stack)
>          : "memory" );
>  
>      if ( unlikely(wqv->esp == 0) )
> @@ -200,7 +201,7 @@ void check_wakeup_from_wait(void)
>      }
>  
>      asm volatile (
> -        "mov %1,%%"__OP"sp; rep movsb; jmp *(%%"__OP"sp)"
> +        "mov %1,%%"__OP"sp; jmp *(%0)"
>          : : "S" (wqv->stack), "D" (wqv->esp),
>          "c" ((char *)get_cpu_info() - (char *)wqv->esp)
>          : "memory" );
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 10:06:12 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 10:06:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxElZ-0004dL-9e; Fri, 03 Aug 2012 10:05:57 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andre.Przywara@amd.com>) id 1SxElX-0004cz-QU
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 10:05:56 +0000
Received: from [85.158.138.51:4920] by server-9.bemta-3.messagelabs.com id
	0C/9F-27628-182AB105; Fri, 03 Aug 2012 10:05:53 +0000
X-Env-Sender: Andre.Przywara@amd.com
X-Msg-Ref: server-16.tower-174.messagelabs.com!1343988351!30126242!1
X-Originating-IP: [65.55.88.13]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25199 invoked from network); 3 Aug 2012 10:05:52 -0000
Received: from tx2ehsobe003.messaging.microsoft.com (HELO
	tx2outboundpool.messaging.microsoft.com) (65.55.88.13)
	by server-16.tower-174.messagelabs.com with AES128-SHA encrypted SMTP;
	3 Aug 2012 10:05:52 -0000
Received: from mail224-tx2-R.bigfish.com (10.9.14.241) by
	TX2EHSOBE013.bigfish.com (10.9.40.33) with Microsoft SMTP Server id
	14.1.225.23; Fri, 3 Aug 2012 10:05:50 +0000
Received: from mail224-tx2 (localhost [127.0.0.1])	by
	mail224-tx2-R.bigfish.com (Postfix) with ESMTP id 45EE9880350;
	Fri,  3 Aug 2012 10:05:50 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:163.181.249.109; KIP:(null); UIP:(null);
	IPV:NLI; H:ausb3twp02.amd.com; RD:none; EFVD:NLI
X-SpamScore: -5
X-BigFish: VPS-5(zzbb2dI98dI9371I1432I14ffIzz1202hzzz2dh668h839h93fhd25he5bhf0ah107ah)
Received: from mail224-tx2 (localhost.localdomain [127.0.0.1]) by mail224-tx2
	(MessageSwitch) id 1343988348781639_17719;
	Fri,  3 Aug 2012 10:05:48 +0000 (UTC)
Received: from TX2EHSMHS015.bigfish.com (unknown [10.9.14.246])	by
	mail224-tx2.bigfish.com (Postfix) with ESMTP id B1E7B2C0044;
	Fri,  3 Aug 2012 10:05:48 +0000 (UTC)
Received: from ausb3twp02.amd.com (163.181.249.109) by
	TX2EHSMHS015.bigfish.com (10.9.99.115) with Microsoft SMTP Server id
	14.1.225.23; Fri, 3 Aug 2012 10:05:48 +0000
X-WSS-ID: 0M86C1L-02-BA1-02
X-M-MSG: 
Received: from sausexedgep02.amd.com (sausexedgep02-ext.amd.com
	[163.181.249.73])	(using TLSv1 with cipher AES128-SHA (128/128
	bits))	(No
	client certificate requested)	by ausb3twp02.amd.com (Axway MailGate
	3.8.1)
	with ESMTP id 2B680C80E8;	Fri,  3 Aug 2012 05:05:44 -0500 (CDT)
Received: from SAUSEXDAG03.amd.com (163.181.55.3) by sausexedgep02.amd.com
	(163.181.36.59) with Microsoft SMTP Server (TLS) id 8.3.192.1;
	Fri, 3 Aug 2012 05:06:05 -0500
Received: from storexhtp01.amd.com (172.24.4.3) by sausexdag03.amd.com
	(163.181.55.3) with Microsoft SMTP Server (TLS) id 14.1.323.3;
	Fri, 3 Aug 2012 05:05:46 -0500
Received: from gwo.osrc.amd.com (165.204.16.204) by storexhtp01.amd.com
	(172.24.4.3) with Microsoft SMTP Server id 8.3.213.0; Fri, 3 Aug 2012
	06:05:45 -0400
Received: from mail.osrc.amd.com (aluminium.osrc.amd.com [165.204.15.141])	by
	gwo.osrc.amd.com (Postfix) with ESMTP id 89A7549C20C;
	Fri,  3 Aug 2012 11:05:44 +0100 (BST)
Received: from [165.204.15.38] (wanderer.osrc.amd.com [165.204.15.38])	by
	mail.osrc.amd.com (Postfix) with ESMTPS id 63825594037; Fri,  3 Aug 2012
	12:05:44 +0200 (CEST)
Message-ID: <501BA1C0.7040100@amd.com>
Date: Fri, 3 Aug 2012 12:02:40 +0200
From: Andre Przywara <andre.przywara@amd.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:13.0) Gecko/20120615 Thunderbird/13.0.1
MIME-Version: 1.0
To: Dario Faggioli <raistlin@linux.it>
References: <1343837796.4958.32.camel@Solace>
In-Reply-To: <1343837796.4958.32.camel@Solace>
X-OriginatorOrg: amd.com
Cc: Anil Madhavapeddy <anil@recoil.org>, George Dunlap <dunlapg@gmail.com>,
	xen-devel <xen-devel@lists.xen.org>, Jan Beulich <JBeulich@suse.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>, "Zhang,
	Yang Z" <yang.z.zhang@intel.com>
Subject: Re: [Xen-devel] NUMA TODO-list for xen-devel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/01/2012 06:16 PM, Dario Faggioli wrote:
> Hi everyone,
>
> With automatic placement finally landing into xen-unstable, I started
> thinking about what I could work on next, still in the field of
> improving Xen's NUMA support. Well, it turned out that running out of
> things to do is not an option! :-O
>
> In fact, I can think of quite a bit of open issues in that area, that I'm
> just braindumping here.

> ...
>
>         * automatic placement of Dom0, if possible (my current series is
>           only affecting DomU)

I think Dom0 NUMA awareness should be one of the top priorities. If I 
boot my 8-node box with Xen, I end up with a NUMA-clueless Dom0 which 
actually has memory from all 8 nodes and thinks its memory is flat.
There are some tricks to confine it to node 0 (dom0_mem=<memory of 
node0> dom0_vcpus=<cores in node0> dom0_vcpus_pin), but this requires 
intimate knowledge of the system's parameters and is error-prone. It 
also does not work well with ballooning.
Actually we could improve NUMA placement here by explicitly asking 
Dom0 for memory from a certain node.
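The confinement trick mentioned above, spelled out as a hypothetical GRUB2 entry for a box whose first node has 16 GB and 8 cores; the values are illustrative and the exact parameter spellings vary between Xen versions, so check your version's documentation:

```shell
# Illustrative Xen boot line confining Dom0 to (roughly) node 0:
#   dom0_mem        caps Dom0 memory at the size of node 0
#   dom0_max_vcpus  limits Dom0 to as many vcpus as node 0 has cores
#   dom0_vcpus_pin  pins those vcpus to the first physical cpus
multiboot /boot/xen.gz dom0_mem=16G dom0_max_vcpus=8 dom0_vcpus_pin
```

As the paragraph above notes, getting these numbers right requires knowing the node sizes up front, which is exactly what automatic Dom0 placement would avoid.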

>         * having internal xen data structure honour the placement (e.g.,
>           I've been told that right now vcpu stacks are always allocated
>           on node 0... Andrew?).
>
> [D] - NUMA aware scheduling in Xen. Don't pin vcpus on nodes' pcpus,
>        just have them _prefer_ running on the nodes where their memory
>        is.

This would be really cool. I once thought about something like a 
home node. We start with placement to allocate memory from one node. 
Then we relax the VCPU pinning, but mark this node as special for this 
guest, so that it preferably gets run there. But in times of CPU 
pressure we are happy to let it run on other nodes: CPU starvation is 
much worse than the NUMA penalty.
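The home-node preference described above can be sketched as a tiny CPU-picking routine. This is a minimal illustration with an invented node/cpu layout and function names, not the Xen scheduler:

```c
#include <stddef.h>

/* Sketch of the "home node" idea: prefer an idle pcpu on the vcpu's
 * home node, but fall back to any idle pcpu rather than starve it.
 * NR_CPUS, CPUS_PER_NODE and pick_cpu() are invented for illustration. */
#define NR_CPUS 8
#define CPUS_PER_NODE 4

static int cpu_node(int cpu) { return cpu / CPUS_PER_NODE; }

/* idle[] marks which pcpus are currently idle; returns chosen pcpu or -1. */
int pick_cpu(const int idle[NR_CPUS], int home_node)
{
    int fallback = -1;
    for (int cpu = 0; cpu < NR_CPUS; cpu++) {
        if (!idle[cpu])
            continue;
        if (cpu_node(cpu) == home_node)
            return cpu;          /* preferred: idle pcpu on home node */
        if (fallback < 0)
            fallback = cpu;      /* remember any idle pcpu */
    }
    return fallback;             /* avoid starvation: run off-node */
}
```

The key design point matches the paragraph above: the home node biases placement but never blocks it, so CPU pressure always wins over NUMA affinity.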

>
> [D] - Dynamic memory migration between different nodes of the host. As
>        the counter-part of the NUMA-aware scheduler.

I once read about a VMware feature: bandwidth-limited migration in the 
background, hot pages first. So we get flexibility and avoid CPU 
starvation, but still don't hog the system with memory copying.
Sounds quite ambitious, though.

Regards,
Andre.

-- 
Andre Przywara
AMD-Operating System Research Center (OSRC), Dresden, Germany


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 10:09:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 10:09:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxEp9-0004tQ-2A; Fri, 03 Aug 2012 10:09:39 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SxEp7-0004tG-8T
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 10:09:37 +0000
Received: from [85.158.138.51:61331] by server-12.bemta-3.messagelabs.com id
	49/95-15259-063AB105; Fri, 03 Aug 2012 10:09:36 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-174.messagelabs.com!1343988575!30312088!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc5NTM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7141 invoked from network); 3 Aug 2012 10:09:35 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-5.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 10:09:35 -0000
X-IronPort-AV: E=Sophos;i="4.77,706,1336348800"; d="scan'208";a="13837529"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Aug 2012 10:09:35 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Fri, 3 Aug 2012
	11:09:34 +0100
Message-ID: <1343988573.21372.45.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: xen-devel <xen-devel@lists.xen.org>
Date: Fri, 3 Aug 2012 11:09:33 +0100
In-Reply-To: <1343637026.14184.9.camel@zakaz.uk.xensource.com>
References: <1343637026.14184.9.camel@zakaz.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "Liu, Jinsong" <jinsong.liu@intel.com>, Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] Xen 4.2 TODO / Release Plan
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2012-07-30 at 09:30 +0100, Ian Campbell wrote:
>     * vMCE save/restore changes, to simplify migration 4.2->4.3 with
>      new vMCE in 4.3. (Jinsong Liu, Jan Beulich)

Where are we with this?

Is it still a viable candidate for 4.2, now that we have reached rc1
(almost 2)?


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 10:10:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 10:10:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxEq1-0004xr-F4; Fri, 03 Aug 2012 10:10:33 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SxEq0-0004xg-8W
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 10:10:32 +0000
Received: from [85.158.143.35:57681] by server-2.bemta-4.messagelabs.com id
	43/54-17938-793AB105; Fri, 03 Aug 2012 10:10:31 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1343988630!13123790!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc5NTM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25860 invoked from network); 3 Aug 2012 10:10:30 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 10:10:30 -0000
X-IronPort-AV: E=Sophos;i="4.77,706,1336348800"; d="scan'208";a="13837574"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Aug 2012 10:10:29 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Fri, 3 Aug 2012
	11:10:29 +0100
Message-ID: <1343988628.21372.46.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Wangzhenguo <wangzhenguo@huawei.com>
Date: Fri, 3 Aug 2012 11:10:28 +0100
In-Reply-To: <1343221925.18971.93.camel@zakaz.uk.xensource.com>
References: <B44CA5218606DC4FA941D19CCEB27B5323BEF12C@szxeml528-mbs.china.huawei.com>
	<1341221926.4625.12.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B5323BEF99A@szxeml528-mbs.china.huawei.com>
	<1343135163.18971.15.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B532CF769F2@szxeml528-mbx.china.huawei.com>
	<1343221925.18971.93.camel@zakaz.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Yangxiaowei <xiaowei.yang@huawei.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Hongtao <bobby.hong@huawei.com>, Yechuan <yechuan@huawei.com>
Subject: Re: [Xen-devel] The hypercall will fail and return EFAULT when the
 page becomes COW by forking process in linux
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-07-25 at 14:12 +0100, Ian Campbell wrote:
> On Wed, 2012-07-25 at 12:48 +0100, Wangzhenguo wrote:
> > > From: Ian Campbell [mailto:Ian.Campbell@citrix.com]
> > > Sent: Tuesday, July 24, 2012 9:06 PM
> > > 
> > > Hrm. yes. This basically harks back to the use of mlock as a surrogate
> > > for the requirement to properly lock down memory for use as a hypercall
> > > argument. This is flawed in other ways too (e.g. NUMA memory
> > > migration would also break it).
> > > 
> > > The correct answer here is a special device (e.g. /dev/xen/hypercall?)
> > > which can be mmapped by libxc to provide memory specifically for this
> > > purpose, which is fully locked down (e.g. VM_DONTCOPY and whatever else
> > > is required).
> > 
> > Thanks for your reply. I see the madvise syscall can make the vma
> > VM_DONTCOPY by delivering MADV_DONTFORK advice.
> 
> I didn't know about this MADV option, it sounds like the right idea to
> me.
> 
> >  I'll fixup and test it.

BTW, I think this would be a good fix to have for 4.2.0 if you are able
to produce a patch.

> 
> > > 
> > > The libxc "osdep" interface (see xenctrlosdep.h) provides a framework
> > > for doing this, it just doesn't actually implement the use of the
> > > special driver yet.
> > > 
> > > Is that something you might be interested in coding up?
> > > 
> > > Ian.
> > 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 10:17:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 10:17:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxEw3-0005HH-FQ; Fri, 03 Aug 2012 10:16:47 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SxEw2-0005H5-BX
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 10:16:46 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1343988997!11094666!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNjkzODk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16473 invoked from network); 3 Aug 2012 10:16:39 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 10:16:39 -0000
X-IronPort-AV: E=Sophos;i="4.77,706,1336363200"; d="scan'208";a="204040172"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Aug 2012 06:16:37 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 3 Aug 2012 06:16:36 -0400
Received: from [10.80.246.74] (helo=army.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1SxEvs-0005UW-G7;
	Fri, 03 Aug 2012 11:16:36 +0100
From: Ian Campbell <ian.campbell@citrix.com>
To: xen-devel@lists.xen.org
Date: Fri, 3 Aug 2012 10:16:35 +0000
Message-ID: <1343988996-20803-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.9.1
MIME-Version: 1.0
Cc: stefano.stabellini@citrix.com, Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH] arm/tools: pass correct p2m array to popphysmap
	in alloc_magic_pages
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 tools/libxc/xc_dom_arm.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/tools/libxc/xc_dom_arm.c b/tools/libxc/xc_dom_arm.c
index aac63e5..b743a6c 100644
--- a/tools/libxc/xc_dom_arm.c
+++ b/tools/libxc/xc_dom_arm.c
@@ -60,7 +60,7 @@ static int alloc_magic_pages(struct xc_dom_image *dom)
 
     rc = xc_domain_populate_physmap_exact(
             dom->xch, dom->guest_domid, NR_MAGIC_PAGES,
-            0, 0, &p2m[i]);
+            0, 0, p2m);
     if ( rc < 0 )
         return rc;
 
-- 
1.7.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 10:29:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 10:29:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxF7Z-0005RN-Q9; Fri, 03 Aug 2012 10:28:41 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1SxF7X-0005RI-Mw
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 10:28:39 +0000
Received: from [85.158.143.99:14488] by server-3.bemta-4.messagelabs.com id
	6A/3A-01511-6D7AB105; Fri, 03 Aug 2012 10:28:38 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-16.tower-216.messagelabs.com!1343989717!18411722!1
X-Originating-IP: [209.85.215.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24649 invoked from network); 3 Aug 2012 10:28:37 -0000
Received: from mail-ey0-f173.google.com (HELO mail-ey0-f173.google.com)
	(209.85.215.173)
	by server-16.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 10:28:37 -0000
Received: by eaah1 with SMTP id h1so137558eaa.32
	for <xen-devel@lists.xen.org>; Fri, 03 Aug 2012 03:28:37 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:cc:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=YWb1kUIB23qjnPwvp/eawU2jUbcsUmyYvhKuGWIWGFU=;
	b=YVfP4cAUT+VW6/OjHbg5S8xqNaBXVxxRqoABSKZceS322mb8A34mVp13wechAnSRNR
	8ocXAoYT0530RLxVCx0h/q3n5rFIzVM0fsXO3JNLYz03AadxO4AnDGrkLxO7KwcIw6ze
	pfO/O83rFIYc57Sg5UOLOP8/KPippdvJ/hPZMPVUhw8YK8a2P42oI9+ml+1XwRN/Yg3i
	i4u1FmLWbFCwztuJGwYjJYkaUzAleAUz5r3/ZMgJDo9LfIMYTQ2haZVVIo/9rS+oJFcj
	bbIUwZYOVefR5HXNpUKUC4hxk86xdaH4Za6htDDj6rE6lqOcyM1NvNzuk0fyUnZ+aaHi
	5FnQ==
Received: by 10.14.215.197 with SMTP id e45mr1481642eep.36.1343989717407;
	Fri, 03 Aug 2012 03:28:37 -0700 (PDT)
Received: from [192.168.1.68] (host86-178-46-238.range86-178.btcentralplus.com.
	[86.178.46.238])
	by mx.google.com with ESMTPS id v5sm24564290eel.6.2012.08.03.03.28.35
	(version=SSLv3 cipher=OTHER); Fri, 03 Aug 2012 03:28:36 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.32.0.111121
Date: Fri, 03 Aug 2012 11:28:33 +0100
From: Keir Fraser <keir.xen@gmail.com>
To: Ian Campbell <Ian.Campbell@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Message-ID: <CC416661.3A655%keir.xen@gmail.com>
Thread-Topic: [Xen-devel] Xen 4.2 TODO / Release Plan
Thread-Index: Ac1xYrt+GqT0K7XsckeGlIqYi0qT3A==
In-Reply-To: <1343988573.21372.45.camel@zakaz.uk.xensource.com>
Mime-version: 1.0
Cc: "Liu, Jinsong" <jinsong.liu@intel.com>, Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] Xen 4.2 TODO / Release Plan
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 03/08/2012 11:09, "Ian Campbell" <Ian.Campbell@citrix.com> wrote:

> On Mon, 2012-07-30 at 09:30 +0100, Ian Campbell wrote:
>>     * vMCE save/restore changes, to simplify migration 4.2->4.3 with
>>      new vMCE in 4.3. (Jinsong Liu, Jan Beulich)
> 
> Where are we with this?
> 
> Is it still a viable candidate for 4.2, now that we have reached rc1
> (almost 2)?

Didn't we already take the trivial patch that will ease the transition to
4.3?

 -- Keir

> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 10:30:26 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 10:30:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxF90-0005Vw-8t; Fri, 03 Aug 2012 10:30:10 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SxF8z-0005Vk-Gh
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 10:30:09 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1343989802!3702317!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc5NTM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8561 invoked from network); 3 Aug 2012 10:30:03 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 10:30:03 -0000
X-IronPort-AV: E=Sophos;i="4.77,706,1336348800"; d="scan'208";a="13838018"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Aug 2012 10:29:55 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 3 Aug 2012 11:29:55 +0100
Date: Fri, 3 Aug 2012 11:29:36 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: "Zhang, Yang Z" <yang.z.zhang@intel.com>
In-Reply-To: <A9667DDFB95DB7438FA9D7D576C3D87E203D54@SHSMSX101.ccr.corp.intel.com>
Message-ID: <alpine.DEB.2.02.1208031122170.4645@kaball.uk.xensource.com>
References: <A9667DDFB95DB7438FA9D7D576C3D87E203D54@SHSMSX101.ccr.corp.intel.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Anthony Perard <anthony.perard@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] error when pass through device to guest with
 qemu-xen-dir-remote
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 3 Aug 2012, Zhang, Yang Z wrote:
> When creating a guest with a device assigned, it shows the error and the device isn't able to work inside the guest:
> libxl: error: libxl_qmp.c:288:qmp_handle_error_response: received an error message from QMP server: Parameter 'driver' expects a driver name
> 
> It only fails with qemu-xen-dir-remote (is this tree closer to upstream qemu?). I don't see the error with the traditional Qemu.
> I also tried qemu-upstream, but it fails when I try to enable PCI pass-through for Xen. I think Anthony's patch to add PCI pass-through support for Xen was accepted by qemu-upstream, am I right?

Yes, it was accepted, but it is present only in upstream QEMU (from
git://git.qemu.org/qemu.git), not the tree we are currently using in
xen-unstable for development
(git://xenbits.xensource.com/qemu-upstream-unstable.git).
Make sure you are using the right tree!

Anthony is currently on vacation and is going to be back in about a
week.

> Another question:
> Now I am trying to add some features (relevant to pass-through devices) to Qemu; which tree should I use? Since traditional qemu differs greatly from qemu-upstream, it is too old to develop patches based on it. But besides the old one, I cannot find a working qemu.

You should use upstream QEMU; I am going to rebase our tree on it
early in the 4.3 release cycle.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 10:30:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 10:30:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxF9K-0005Xp-Lw; Fri, 03 Aug 2012 10:30:30 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SxF9I-0005XW-QT
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 10:30:29 +0000
Received: from [85.158.138.51:9708] by server-2.bemta-3.messagelabs.com id
	CF/98-00359-348AB105; Fri, 03 Aug 2012 10:30:27 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-174.messagelabs.com!1343989826!30206530!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc5NTM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9869 invoked from network); 3 Aug 2012 10:30:26 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-4.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 10:30:26 -0000
X-IronPort-AV: E=Sophos;i="4.77,706,1336348800"; d="scan'208";a="13838028"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Aug 2012 10:30:26 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Fri, 3 Aug 2012
	11:30:26 +0100
Message-ID: <1343989824.21372.47.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Keir Fraser <keir.xen@gmail.com>
Date: Fri, 3 Aug 2012 11:30:24 +0100
In-Reply-To: <CC416661.3A655%keir.xen@gmail.com>
References: <CC416661.3A655%keir.xen@gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "Liu, Jinsong" <jinsong.liu@intel.com>, Jan Beulich <JBeulich@suse.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.2 TODO / Release Plan
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-08-03 at 11:28 +0100, Keir Fraser wrote:
> On 03/08/2012 11:09, "Ian Campbell" <Ian.Campbell@citrix.com> wrote:
> 
> > On Mon, 2012-07-30 at 09:30 +0100, Ian Campbell wrote:
> >>     * vMCE save/restore changes, to simplify migration 4.2->4.3 with
> >>      new vMCE in 4.3. (Jinsong Liu, Jan Beulich)
> > 
> > Where are we with this?
> > 
> > Is it still a viable candidate for 4.2, now that we have reached rc1
> > (almost 2)?
> 
> Didn't we already take the trivial patch that will ease the transition to
> 4.3?

Possibly, in which case I can scratch it off the list.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 10:33:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 10:33:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxFCF-0005mg-8a; Fri, 03 Aug 2012 10:33:31 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SxFCD-0005mT-LM
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 10:33:29 +0000
Received: from [85.158.143.99:62055] by server-2.bemta-4.messagelabs.com id
	A3/7C-17938-9F8AB105; Fri, 03 Aug 2012 10:33:29 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-9.tower-216.messagelabs.com!1343990008!30114053!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19263 invoked from network); 3 Aug 2012 10:33:28 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-9.tower-216.messagelabs.com with SMTP;
	3 Aug 2012 10:33:28 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 03 Aug 2012 11:33:27 +0100
Message-Id: <501BC5410200007800092765@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 03 Aug 2012 11:34:09 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Keir Fraser" <keir@xen.org>
References: <501BAA990200007800092661@nat28.tlf.novell.com>
	<CC4160CF.47AAC%keir@xen.org>
In-Reply-To: <CC4160CF.47AAC%keir@xen.org>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] x86: fix wait code asm() constraints
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 03.08.12 at 12:04, Keir Fraser <keir@xen.org> wrote:
> On 03/08/2012 09:40, "Jan Beulich" <JBeulich@suse.com> wrote:
> 
>> In __prepare_to_wait(), properly mark early clobbered registers. By
>> doing so, we at once eliminate the need to save/restore rCX and rDI.
>> 
>> In check_wakeup_from_wait(), make the current constraints match by
>> removing the code that actually alters registers. By adjusting the
>> resume address in __prepare_to_wait(), we can simply re-use the copying
>> operation there (rather than doing a second pointless copy in the
>> opposite direction after branching to the resume point), which at once
>> eliminates the need for re-loading rCX and rDI inside the asm().
> 
> First of all, this is code improvement, rather than a bug fix, right? The
> asm constraints are correct for the code as it is, I believe.

No, the constraints aren't really correct at present (yet this is
not visible as a functional bug in any way) - from a formal
perspective, the early clobber specification is needed on _any_
operand that doesn't retain its value throughout an asm(). Any
future compiler could derive something from this that we don't
intend.

> It also seems the patch splits into two independent parts:
> 
>  A. Not sure whether the trade-off of the rCX/rDI save/restore versus more
> complex asm constraints makes sense.
> 
>  B. Separately, the adjustment of the restore return address, and avoiding
> needing to reload rCX/rDI after label 1, as well as avoiding the copy in
> check_wakeup_from_wait(), is very nice.
> 
> I'm inclined to take the second part only, and make it clearer in the
> changeset comment that it is not a bug fix.
> 
> What do you think?

The patch could be split, yes, but where exactly the split(s)
should be isn't that obvious to me. And as it's fixing the same
kind of issue on both asm()-s, it seemed sensible to keep the
changes together.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 10:36:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 10:36:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxFFA-0006DU-Ob; Fri, 03 Aug 2012 10:36:32 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SxFF9-0006Cn-GN
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 10:36:31 +0000
Received: from [85.158.139.83:22766] by server-5.bemta-5.messagelabs.com id
	D6/C0-02722-EA9AB105; Fri, 03 Aug 2012 10:36:30 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-182.messagelabs.com!1343990189!26217765!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc5NTM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32766 invoked from network); 3 Aug 2012 10:36:29 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-6.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 10:36:29 -0000
X-IronPort-AV: E=Sophos;i="4.77,706,1336348800"; d="scan'208";a="13838146"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Aug 2012 10:36:29 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Fri, 3 Aug 2012
	11:36:29 +0100
Message-ID: <1343990187.21372.48.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Date: Fri, 3 Aug 2012 11:36:27 +0100
In-Reply-To: <alpine.DEB.2.02.1208031122170.4645@kaball.uk.xensource.com>
References: <A9667DDFB95DB7438FA9D7D576C3D87E203D54@SHSMSX101.ccr.corp.intel.com>
	<alpine.DEB.2.02.1208031122170.4645@kaball.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "Zhang, Yang Z" <yang.z.zhang@intel.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] error when pass through device to guest with
	qemu-xen-dir-remote
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-08-03 at 11:29 +0100, Stefano Stabellini wrote:
> On Fri, 3 Aug 2012, Zhang, Yang Z wrote:
> > When creating a guest with a device assigned, this error shows up and the device isn't able to work inside the guest:
> > libxl: error: libxl_qmp.c:288:qmp_handle_error_response: received an error message from QMP server: Parameter 'driver' expects a driver name
> > 
> > It only fails with qemu-xen-dir-remote (is this tree closer to upstream qemu?). I don't see the error with the traditional Qemu.
> > I also tried qemu-upstream, but it fails when I try to enable pci pass-through for xen. I think Anthony's patch to add pci pass-through support for Xen has been accepted by qemu-upstream, am I right?
> 
> Yes, it was accepted, but it is present only in upstream QEMU (from
> git://git.qemu.org/qemu.git), not the tree we are currently using in
> xen-unstable for development
> (git://xenbits.xensource.com/qemu-upstream-unstable.git).
> Make sure you are using the right tree!

http://wiki.xen.org/wiki/QEMU_Upstream has some notes on how to use the
upstream qemu tree instead of our stable branch of upstream.

> 
> Anthony is currently on vacation and is going to be back in about a
> week.
> 
> > Another question:
> > Now I am trying to add some features (relevant to pass-through devices) to Qemu; which tree should I use? Since traditional qemu is greatly different from qemu-upstream, it is too old to develop patches based on it. But besides the old one, I cannot find a working qemu.
> 
> You should use upstream QEMU, I am going to rebase our tree on that
> early on in the 4.3 release cycle.
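
For reference, a sketch of what switching to upstream QEMU amounts to (the Config.mk variable names are assumptions from the xen-unstable tree of this era, per the wiki page Ian mentions; verify against your checkout's Config.mk before relying on them):

```shell
# Clone upstream QEMU (the tree Stefano refers to):
git clone git://git.qemu.org/qemu.git

# Then point the xen-unstable build at it, e.g. on the make command line
# (QEMU_UPSTREAM_URL / QEMU_UPSTREAM_REVISION are assumed names; check
# Config.mk in your tree):
make tools QEMU_UPSTREAM_URL=git://git.qemu.org/qemu.git \
           QEMU_UPSTREAM_REVISION=master
```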



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 10:39:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 10:39:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxFIK-0006UI-B0; Fri, 03 Aug 2012 10:39:48 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SxFIJ-0006U4-5X
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 10:39:47 +0000
Received: from [85.158.139.83:48435] by server-8.bemta-5.messagelabs.com id
	59/43-10278-27AAB105; Fri, 03 Aug 2012 10:39:46 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-182.messagelabs.com!1343990385!30095241!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17504 invoked from network); 3 Aug 2012 10:39:45 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-182.messagelabs.com with SMTP;
	3 Aug 2012 10:39:45 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 03 Aug 2012 11:39:44 +0100
Message-Id: <501BC6B8020000780009277B@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 03 Aug 2012 11:40:24 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andre Przywara" <andre.przywara@amd.com>,
	"Dario Faggioli" <raistlin@linux.it>
References: <1343837796.4958.32.camel@Solace> <501BA1C0.7040100@amd.com>
In-Reply-To: <501BA1C0.7040100@amd.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Anil Madhavapeddy <anil@recoil.org>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>, George Dunlap <dunlapg@gmail.com>,
	Yang Z Zhang <yang.z.zhang@intel.com>
Subject: Re: [Xen-devel] NUMA TODO-list for xen-devel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 03.08.12 at 12:02, Andre Przywara <andre.przywara@amd.com> wrote:
> On 08/01/2012 06:16 PM, Dario Faggioli wrote:
>> Hi everyone,
>>
>> With automatic placement finally landing into xen-unstable, I started
>> thinking about what I could work on next, still in the field of
>> improving Xen's NUMA support. Well, it turned out that running out of
>> things to do is not an option! :-O
>>
>> In fact, I can think of quite a bit of open issues in that area, that I'm
>> just braindumping here.
> 
>> ...
>>
>>         * automatic placement of Dom0, if possible (my current series is
>>           only affecting DomU)
> 
> I think Dom0 NUMA awareness should be one of the top priorities. If I 
> boot my 8-node box with Xen, I end up with a NUMA-clueless Dom0 which 
> actually has memory from all 8 nodes and thinks its memory is flat.
> There are some tricks to confine it to node 0 (dom0_mem=<memory of 
> node0> dom0_vcpus=<cores in node0> dom0_vcpus_pin), but this requires 
> intimate knowledge of the system's parameters and is error-prone.

How about "dom0_mem=node<n> dom0_vcpus=node<n>" as
an extension to the current options?
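
Concretely, the contrast would look something like this on the hypervisor command line (values are illustrative; the option spellings follow the mail above and Jan's proposal, and the extended forms are hypothetical syntax, not implemented options — check Xen's command-line documentation for the real names):

```shell
# Today: manual confinement to an (illustrative) 16GB, 8-core node 0,
# which requires knowing the node's size and core count in advance:
#   xen.gz ... dom0_mem=16G dom0_vcpus=8 dom0_vcpus_pin
#
# Jan's suggested extension (hypothetical), needing no such knowledge:
#   xen.gz ... dom0_mem=node0 dom0_vcpus=node0
```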

> Also this does not work well with ballooning.
> Actually we could improve the NUMA placement with that: By asking the 
> Dom0 explicitly for memory from a certain node.

Yes, passing sideband information to the balloon driver was
always a missing item, not only for NUMA support, but also
for address-restricted memory (e.g. as needed to start
32-bit PV guests on big systems).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 10:43:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 10:43:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxFLt-0006gj-Vr; Fri, 03 Aug 2012 10:43:29 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SxFLt-0006gc-6E
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 10:43:29 +0000
Received: from [85.158.139.83:18427] by server-9.bemta-5.messagelabs.com id
	8F/79-01069-05BAB105; Fri, 03 Aug 2012 10:43:28 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-13.tower-182.messagelabs.com!1343990607!29516551!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 752 invoked from network); 3 Aug 2012 10:43:27 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-13.tower-182.messagelabs.com with SMTP;
	3 Aug 2012 10:43:27 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 03 Aug 2012 11:43:27 +0100
Message-Id: <501BC799020000780009278B@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 03 Aug 2012 11:44:09 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>,
	"Keir Fraser" <keir.xen@gmail.com>
References: <1343988573.21372.45.camel@zakaz.uk.xensource.com>
	<CC416661.3A655%keir.xen@gmail.com>
In-Reply-To: <CC416661.3A655%keir.xen@gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Jinsong Liu <jinsong.liu@intel.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.2 TODO / Release Plan
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 03.08.12 at 12:28, Keir Fraser <keir.xen@gmail.com> wrote:
> On 03/08/2012 11:09, "Ian Campbell" <Ian.Campbell@citrix.com> wrote:
> 
>> On Mon, 2012-07-30 at 09:30 +0100, Ian Campbell wrote:
>>>     * vMCE save/restore changes, to simplify migration 4.2->4.3 with
>>>      new vMCE in 4.3. (Jinsong Liu, Jan Beulich)
>> 
>> Where are we with this?
>> 
>> Is it still a viable candidate for 4.2, now that we have reached rc1
>> (almost 2)?
> 
> Didn't we already take the trivial patch that will ease the transition to
> 4.3?

We took one necessary patch, but I think at least the second
one of the recently posted series would also be needed. And
the really important patch for migration forward compatibility
was patch 5 in that series, yet I wouldn't want to take patches
3 and 4 for 4.2.

In any case, the series is in need of resubmission anyway.
Perhaps (if that's possible, I didn't check in too much detail)
reordering patch 5 could be done at once.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 10:43:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 10:43:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxFLt-0006gj-Vr; Fri, 03 Aug 2012 10:43:29 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SxFLt-0006gc-6E
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 10:43:29 +0000
Received: from [85.158.139.83:18427] by server-9.bemta-5.messagelabs.com id
	8F/79-01069-05BAB105; Fri, 03 Aug 2012 10:43:28 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-13.tower-182.messagelabs.com!1343990607!29516551!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 752 invoked from network); 3 Aug 2012 10:43:27 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-13.tower-182.messagelabs.com with SMTP;
	3 Aug 2012 10:43:27 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 03 Aug 2012 11:43:27 +0100
Message-Id: <501BC799020000780009278B@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 03 Aug 2012 11:44:09 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>,
	"Keir Fraser" <keir.xen@gmail.com>
References: <1343988573.21372.45.camel@zakaz.uk.xensource.com>
	<CC416661.3A655%keir.xen@gmail.com>
In-Reply-To: <CC416661.3A655%keir.xen@gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Jinsong Liu <jinsong.liu@intel.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.2 TODO / Release Plan
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 03.08.12 at 12:28, Keir Fraser <keir.xen@gmail.com> wrote:
> On 03/08/2012 11:09, "Ian Campbell" <Ian.Campbell@citrix.com> wrote:
> 
>> On Mon, 2012-07-30 at 09:30 +0100, Ian Campbell wrote:
>>>     * vMCE save/restore changes, to simplify migration 4.2->4.3 with
>>>      new vMCE in 4.3. (Jinsong Liu, Jan Beulich)
>> 
>> Where are we with this?
>> 
>> Is it still a viable candidate for 4.2, now that we have reached rc1
>> (almost 2)?
> 
> Didn't we already take the trivial patch that will ease the transition to
> 4.3?

We took one necessary patch, but I think at least the second
patch of the recently posted series would also be needed. And
the really important patch for migration forward compatibility
was patch 5 in that series, but I wouldn't want to take
patches 3 and 4 for 4.2.

In any case, the series needs to be resubmitted anyway.
Perhaps (if that's possible; I didn't check in much detail)
patch 5 could be reordered at the same time.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 10:51:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 10:51:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxFT1-0006uT-SD; Fri, 03 Aug 2012 10:50:51 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SxFT0-0006uN-NA
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 10:50:50 +0000
Received: from [85.158.138.51:22275] by server-5.bemta-3.messagelabs.com id
	57/37-28237-90DAB105; Fri, 03 Aug 2012 10:50:49 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-16.tower-174.messagelabs.com!1343991048!30134985!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc5NTM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27036 invoked from network); 3 Aug 2012 10:50:49 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 10:50:49 -0000
X-IronPort-AV: E=Sophos;i="4.77,706,1336348800"; d="scan'208";a="13838437"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Aug 2012 10:50:11 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 3 Aug 2012 11:50:10 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1SxFSM-0000wp-Jk; Fri, 03 Aug 2012 10:50:10 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1SxFSM-0006w7-G3;
	Fri, 03 Aug 2012 11:50:10 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20507.44258.468642.677482@mariner.uk.xensource.com>
Date: Fri, 3 Aug 2012 11:50:10 +0100
To: Ian Campbell <ian.campbell@citrix.com>
In-Reply-To: <bfd5e107774c3ba50207.1343980901@cosworth.uk.xensource.com>
References: <bfd5e107774c3ba50207.1343980901@cosworth.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH V3] libxl: support custom block hotplug
	scripts
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("[PATCH V3] libxl: support custom block hotplug scripts"):
> libxl: support custom block hotplug scripts

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 10:51:07 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 10:51:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxFT5-0006ul-87; Fri, 03 Aug 2012 10:50:55 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SxFT3-0006ua-EP
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 10:50:53 +0000
Received: from [85.158.138.51:22483] by server-2.bemta-3.messagelabs.com id
	45/AB-00359-C0DAB105; Fri, 03 Aug 2012 10:50:52 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-16.tower-174.messagelabs.com!1343991048!30134985!2
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc5NTM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27450 invoked from network); 3 Aug 2012 10:50:52 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 10:50:52 -0000
X-IronPort-AV: E=Sophos;i="4.77,706,1336348800"; d="scan'208";a="13838436"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Aug 2012 10:50:09 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 3 Aug 2012 11:50:09 +0100
Date: Fri, 3 Aug 2012 11:49:50 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Jesper Dahl Nyerup <nyerup@one.com>
In-Reply-To: <20120801140137.GA4866@one.com>
Message-ID: <alpine.DEB.2.02.1208031140370.4645@kaball.uk.xensource.com>
References: <20120801140137.GA4866@one.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] File system passthrough using v9fs?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 1 Aug 2012, Jesper Dahl Nyerup wrote:
> Hi,
> 
> I need to deploy a bunch of VMs, that need data from NFS shares residing
> on a network I don't trust my VMs to connect to directly.
> 
> What would it take for me to mount the shares on the hosts, and export
> them to my VMs using v9fs, for instance?

There is a chance that v9fs will work, but it is a completely
untested configuration. You need to use upstream QEMU and
configure it with the right options to enable v9fs. Once you
have done that, you probably also need to pass a particular
command line option to QEMU to enable it, but unfortunately xl
doesn't know about it. However, you can pass any command line
parameters you like to QEMU by adding a "device_model_args"
option to your VM's config file.
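
As a rough illustration only (the virtfs path, ids and mount tag
below are made up, and this is exactly the untested configuration
described above), a guest config fragment might look like:

```
# Guest config sketch: forward extra arguments to the upstream QEMU
# device model. The -fsdev/-device pair is QEMU's virtio-9p syntax;
# whether virtio-9p actually works for a Xen guest is untested.
device_model_version = "qemu-xen"
device_model_args = [
    "-fsdev", "local,id=fs0,path=/mnt/nfs-export,security_model=mapped",
    "-device", "virtio-9p-pci,fsdev=fs0,mount_tag=share0",
]
```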


> In practice, only one of my VMs will access a portion of the NFS at a
> time, and the host won't touch it at all, so I'm pretty confident that
> the VMs' VFS caching and locking won't be an issue.
> 
> I understand that KVM can do v9fs exports using virtio and qemu[1], and
> I was wondering if this was possible with Xen as well, as Xen also makes
> use of qemu. Allegedly using virtio devices should theoretically be
> possible for HVM guests, but I'm not sure if this has been implemented
> in Xen's qemu.
> 
> I don't have a preference for v9fs at all, so any hints or insights to
> similar solutions will be greatly appreciated.

Wouldn't it be easier to create a special vlan that lets your VM connect
just to that NFS share rather than the entire secure network?

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 10:52:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 10:52:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxFUr-00074M-P5; Fri, 03 Aug 2012 10:52:45 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SxFUq-00074E-Do
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 10:52:44 +0000
Received: from [85.158.143.35:40055] by server-1.bemta-4.messagelabs.com id
	58/D1-24392-B7DAB105; Fri, 03 Aug 2012 10:52:43 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1343991097!15962121!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc5NTM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19415 invoked from network); 3 Aug 2012 10:51:37 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 10:51:37 -0000
X-IronPort-AV: E=Sophos;i="4.77,706,1336348800"; d="scan'208";a="13838464"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Aug 2012 10:51:37 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 3 Aug 2012 11:51:37 +0100
Date: Fri, 3 Aug 2012 11:51:18 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Campbell <ian.campbell@citrix.com>
In-Reply-To: <1343988996-20803-1-git-send-email-ian.campbell@citrix.com>
Message-ID: <alpine.DEB.2.02.1208031151120.4645@kaball.uk.xensource.com>
References: <1343988996-20803-1-git-send-email-ian.campbell@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] arm/tools: pass correct p2m array to
 popphysmap in alloc_magic_pages
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 3 Aug 2012, Ian Campbell wrote:
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>


Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

> ---
>  tools/libxc/xc_dom_arm.c |    2 +-
>  1 files changed, 1 insertions(+), 1 deletions(-)
> 
> diff --git a/tools/libxc/xc_dom_arm.c b/tools/libxc/xc_dom_arm.c
> index aac63e5..b743a6c 100644
> --- a/tools/libxc/xc_dom_arm.c
> +++ b/tools/libxc/xc_dom_arm.c
> @@ -60,7 +60,7 @@ static int alloc_magic_pages(struct xc_dom_image *dom)
>  
>      rc = xc_domain_populate_physmap_exact(
>              dom->xch, dom->guest_domid, NR_MAGIC_PAGES,
> -            0, 0, &p2m[i]);
> +            0, 0, p2m);
>      if ( rc < 0 )
>          return rc;
>  
> -- 
> 1.7.9.1
> 
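
For context, the one-character fix above hinges on the difference
between an array's base pointer and the address of element i after a
fill loop. A standalone sketch (hypothetical code, not the libxc
source):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical illustration: after a loop that fills p2m[0..n-1],
 * the index variable equals n, so &p2m[i] is a past-the-end pointer,
 * while a call that populates the whole range wants the array base,
 * i.e. p2m (equivalently &p2m[0]). */
static unsigned long *base_after_fill(unsigned long *p2m, size_t n,
                                      size_t *i_out)
{
    size_t i;
    for (i = 0; i < n; i++)
        p2m[i] = i;
    *i_out = i;   /* i == n here */
    return p2m;   /* correct argument: the array base */
}
```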

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 10:53:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 10:53:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxFVX-00078p-6u; Fri, 03 Aug 2012 10:53:27 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SxFVV-00078Y-JQ
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 10:53:25 +0000
Received: from [85.158.139.83:36529] by server-9.bemta-5.messagelabs.com id
	F1/83-01069-4ADAB105; Fri, 03 Aug 2012 10:53:24 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-6.tower-182.messagelabs.com!1343991204!26221307!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc5NTM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13946 invoked from network); 3 Aug 2012 10:53:24 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-6.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 10:53:24 -0000
X-IronPort-AV: E=Sophos;i="4.77,706,1336348800"; d="scan'208";a="13838503"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Aug 2012 10:53:10 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 3 Aug 2012 11:53:09 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1SxFVF-0000zw-O7; Fri, 03 Aug 2012 10:53:09 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1SxFVF-0006xm-N7;
	Fri, 03 Aug 2012 11:53:09 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20507.44437.704456.214860@mariner.uk.xensource.com>
Date: Fri, 3 Aug 2012 11:53:09 +0100
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1343982906.21372.16.camel@zakaz.uk.xensource.com>
References: <1343928442-23966-1-git-send-email-ian.jackson@eu.citrix.com>
	<1343928442-23966-12-git-send-email-ian.jackson@eu.citrix.com>
	<1343982906.21372.16.camel@zakaz.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 11/13] libxl: correct some comments
 regarding event API and fds
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [Xen-devel] [PATCH 11/13] libxl: correct some comments regarding event API and fds"):
> On Thu, 2012-08-02 at 18:27 +0100, Ian Jackson wrote:
> > - * libxl will only attempt to register one callback for any one fd.
> > + * libxl may want to register more than one callback for any one fd;
> > + * in that case: (i) each such registration will have at least one bit
> > + * set in revents which is unique to that registration; (ii) if an
> > + * event occurs which is relevant for multiple registrations the
> > + * application's event system is may call libxl_osevent_occurred_fd
> 
>                                  is may ?
> 
> Probably meant just "may".

Yes.

> Otherwise:
> Acked-by: Ian Campbell <ian.campbell@citrix.com>
> 
> (no need to resend, if you confirm the intended words are as I suggest
> I'll tweak on commit)

It's probably easier if I commit the series myself, when I've finished
collecting the acks etc., as I already have it in a git branch.

Ian.
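
The scheme being documented in the quoted comment (several
registrations on one fd, each owning at least one unique revents bit,
with the callback possibly invoked once per matching registration) can
be sketched generically; this is a hypothetical dispatcher, not
libxl's implementation:

```c
#include <assert.h>
#include <poll.h>   /* POLLIN, POLLOUT */

/* Each registration on a given fd owns at least one unique bit in its
 * events mask. When an event fires, the event system may invoke the
 * occurred callback once per registration whose mask overlaps the
 * reported revents. */
struct reg {
    short events;   /* unique bit(s) identifying this registration */
    int fired;      /* how many times its callback has run */
};

static int dispatch(struct reg *regs, int n, short revents)
{
    int calls = 0, i;
    for (i = 0; i < n; i++) {
        if (regs[i].events & revents) {
            regs[i].fired++;   /* stands in for the occurred callback */
            calls++;
        }
    }
    return calls;
}
```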

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [Xen-devel] [PATCH 11/13] libxl: correct some comments regarding event API and fds"):
> On Thu, 2012-08-02 at 18:27 +0100, Ian Jackson wrote:
> > - * libxl will only attempt to register one callback for any one fd.
> > + * libxl may want to register more than one callback for any one fd;
> > + * in that case: (i) each such registration will have at least one bit
> > + * set in revents which is unique to that registration; (ii) if an
> > + * event occurs which is relevant for multiple registrations the
> > + * application's event system is may call libxl_osevent_occurred_fd
> 
>                                  is may ?
> 
> Probably meant just "may".

Yes.

> Otherwise:
> Acked-by: Ian Campbell <ian.campbell@citrix.com>
> 
> (no need to resend, if you confirm the intended words are as I suggest
> I'll tweak on commit)

It's probably easier if I commit the series myself, when I've finished
collecting the acks etc., as I already have it in a git branch.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 10:56:18 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 10:56:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxFY9-0007OR-PK; Fri, 03 Aug 2012 10:56:09 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SxFY8-0007OF-6Y
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 10:56:08 +0000
Received: from [85.158.143.99:58281] by server-1.bemta-4.messagelabs.com id
	8A/07-24392-74EAB105; Fri, 03 Aug 2012 10:56:07 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-216.messagelabs.com!1343991366!24815583!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc5NTM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13660 invoked from network); 3 Aug 2012 10:56:06 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-4.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 10:56:06 -0000
X-IronPort-AV: E=Sophos;i="4.77,706,1336348800"; d="scan'208";a="13838550"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Aug 2012 10:55:24 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Fri, 3 Aug 2012
	11:55:24 +0100
Message-ID: <1343991322.21372.49.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Date: Fri, 3 Aug 2012 11:55:22 +0100
In-Reply-To: <alpine.DEB.2.02.1208031151120.4645@kaball.uk.xensource.com>
References: <1343988996-20803-1-git-send-email-ian.campbell@citrix.com>
	<alpine.DEB.2.02.1208031151120.4645@kaball.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] arm/tools: pass correct p2m array to
 popphysmap in alloc_magic_pages
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-08-03 at 11:51 +0100, Stefano Stabellini wrote:
> On Fri, 3 Aug 2012, Ian Campbell wrote:
> > Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> 
> 
> Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

I may end up folding this into your original patch in my branch.
> 
> > ---
> >  tools/libxc/xc_dom_arm.c |    2 +-
> >  1 files changed, 1 insertions(+), 1 deletions(-)
> > 
> > diff --git a/tools/libxc/xc_dom_arm.c b/tools/libxc/xc_dom_arm.c
> > index aac63e5..b743a6c 100644
> > --- a/tools/libxc/xc_dom_arm.c
> > +++ b/tools/libxc/xc_dom_arm.c
> > @@ -60,7 +60,7 @@ static int alloc_magic_pages(struct xc_dom_image *dom)
> >  
> >      rc = xc_domain_populate_physmap_exact(
> >              dom->xch, dom->guest_domid, NR_MAGIC_PAGES,
> > -            0, 0, &p2m[i]);
> > +            0, 0, p2m);
> >      if ( rc < 0 )
> >          return rc;
> >  
> > -- 
> > 1.7.9.1
> > 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Fri Aug 03 11:00:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 11:00:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxFbs-0007bQ-DU; Fri, 03 Aug 2012 11:00:00 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SxFbq-0007bH-Bl
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 10:59:58 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1343991592!8963064!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc5NTM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6607 invoked from network); 3 Aug 2012 10:59:52 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 10:59:52 -0000
X-IronPort-AV: E=Sophos;i="4.77,706,1336348800"; d="scan'208";a="13838634"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Aug 2012 10:59:52 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 3 Aug 2012 11:59:52 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1SxFbj-00012D-Qk; Fri, 03 Aug 2012 10:59:51 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1SxFbj-0007J8-Pg;
	Fri, 03 Aug 2012 11:59:51 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20507.44839.755096.252376@mariner.uk.xensource.com>
Date: Fri, 3 Aug 2012 11:59:51 +0100
To: Ian Campbell <ian.campbell@citrix.com>
In-Reply-To: <bfd5e107774c3ba50207.1343980901@cosworth.uk.xensource.com>
References: <bfd5e107774c3ba50207.1343980901@cosworth.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH V3] libxl: support custom block hotplug
	scripts
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("[PATCH V3] libxl: support custom block hotplug scripts"):
> libxl: support custom block hotplug scripts

Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 11:00:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 11:00:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxFc9-0007fS-UT; Fri, 03 Aug 2012 11:00:17 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1SxFc8-0007ei-5c
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 11:00:16 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1343991607!11102056!1
X-Originating-IP: [209.85.215.173]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1044 invoked from network); 3 Aug 2012 11:00:07 -0000
Received: from mail-ey0-f173.google.com (HELO mail-ey0-f173.google.com)
	(209.85.215.173)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 11:00:07 -0000
Received: by eaah1 with SMTP id h1so148343eaa.32
	for <xen-devel@lists.xen.org>; Fri, 03 Aug 2012 04:00:07 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:cc:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=Bo1ah9sOyvgGkosWsAORgSYnD45+pBG63cWhp6E/47A=;
	b=vPRcet+9rIxc1WohF94hpyAIVyjUmi53ziiqNIdAdY4WD0FnWUMPKOCv54a6QVk7S/
	3v7PpitiS83yRyoaWO67khzaQ6M8QH6Bun/3r7RvNhDyvH0rYZbPhzbU8FpW1Nww0DVC
	yEFc88QivJ7xWWiPOu/N3bmpKOstrtXy++0vT+WqhdOjPbTugVBkVagbhG+9rEnfre9S
	4HmxdZgi/TrjYE995XqT+rq+n5e6OTyuYt1KPrgwZ12DI0GjavFBF1FtgijbfvilxXGc
	ncwcWHTQUojjvvtgXGLH6CohATB6NRpKSwUMJxX6KK6dsvnXwzUdmIKgRz7wIDwo7TT6
	8Lmg==
Received: by 10.14.223.9 with SMTP id u9mr1641022eep.10.1343991607294;
	Fri, 03 Aug 2012 04:00:07 -0700 (PDT)
Received: from [192.168.1.68] (host86-178-46-238.range86-178.btcentralplus.com.
	[86.178.46.238])
	by mx.google.com with ESMTPS id w5sm24816415eeo.1.2012.08.03.04.00.05
	(version=SSLv3 cipher=OTHER); Fri, 03 Aug 2012 04:00:06 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.32.0.111121
Date: Fri, 03 Aug 2012 12:00:02 +0100
From: Keir Fraser <keir.xen@gmail.com>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <CC416DC2.3A667%keir.xen@gmail.com>
Thread-Topic: [Xen-devel] [PATCH] x86: fix wait code asm() constraints
Thread-Index: Ac1xZyFtZwAETaYibkqhWDrDLe1Dlw==
In-Reply-To: <501BC5410200007800092765@nat28.tlf.novell.com>
Mime-version: 1.0
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] x86: fix wait code asm() constraints
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 03/08/2012 11:34, "Jan Beulich" <JBeulich@suse.com> wrote:

>>>> On 03.08.12 at 12:04, Keir Fraser <keir@xen.org> wrote:
>> On 03/08/2012 09:40, "Jan Beulich" <JBeulich@suse.com> wrote:
>> 
>>> In __prepare_to_wait(), properly mark early clobbered registers. By
>>> doing so, we at once eliminate the need to save/restore rCX and rDI.
>>> 
>>> In check_wakeup_from_wait(), make the current constraints match by
> >> removing the code that actually alters registers. By adjusting the
>>> resume address in __prepare_to_wait(), we can simply re-use the copying
>>> operation there (rather than doing a second pointless copy in the
>>> opposite direction after branching to the resume point), which at once
>>> eliminates the need for re-loading rCX and rDI inside the asm().
>> 
>> First of all, this is code improvement, rather than a bug fix, right? The
>> asm constraints are correct for the code as it is, I believe.
> 
> No, the constraints aren't really correct at present (yet this is
> not visible as a functional bug in any way) - from a formal
> perspective, the early clobber specification is needed on _any_
> operand that doesn't retain its value throughout an asm(). Any
> future compiler could derive something from this that we don't
> intend.

I'm confused. The registers have the same values at the start and the end of
the asm statement. How can it possibly matter, even in theory, whether they
temporarily change in the middle? Is this fairly strong assumption written
down in the gcc documentation anywhere?

>> It also seems the patch splits into two independent parts:
>> 
>>  A. Not sure whether the trade-off of the rCX/rDI save/restore versus more
>> complex asm constraints makes sense.
>> 
>>  B. Separately, the adjustment of the restore return address, and avoiding
>> needing to reload rCX/rDI after label 1, as well as avoiding the copy in
>> check_wakeup_from_wait(), is very nice.
>> 
>> I'm inclined to take the second part only, and make it clearer in the
>> changeset comment that it is not a bug fix.
>> 
>> What do you think?
> 
> The patch could be split, yes, but where exactly the split(s)
> should be isn't that obvious to me. And as it's fixing the same
> kind of issue on both asm()-s, it seemed sensible to keep the
> changes together.

Yes, that confused me too -- the output constraints on the second asm can
hardly be wrong, or at least matter, since it never returns! Execution state
is completely reloaded within the asm statement.

 -- Keir

> Jan
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 11:05:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 11:05:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxFh4-0007z9-PL; Fri, 03 Aug 2012 11:05:22 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SxFh3-0007z3-B0
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 11:05:21 +0000
Received: from [85.158.143.35:53038] by server-2.bemta-4.messagelabs.com id
	40/50-17938-070BB105; Fri, 03 Aug 2012 11:05:20 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1343991902!15979174!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc5NTM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8977 invoked from network); 3 Aug 2012 11:05:03 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 11:05:03 -0000
X-IronPort-AV: E=Sophos;i="4.77,706,1336348800"; d="scan'208";a="13838758"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Aug 2012 11:05:02 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 3 Aug 2012 12:05:02 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1SxFgk-000149-EC; Fri, 03 Aug 2012 11:05:02 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1SxFgk-0007Jg-C4;
	Fri, 03 Aug 2012 12:05:02 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20507.45150.356453.778391@mariner.uk.xensource.com>
Date: Fri, 3 Aug 2012 12:05:02 +0100
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1343984349.21372.27.camel@zakaz.uk.xensource.com>
References: <1343928442-23966-1-git-send-email-ian.jackson@eu.citrix.com>
	<1343984349.21372.27.camel@zakaz.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v5 00/13] libxl: Assorted bugfixes and
 cleanups
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [Xen-devel] [PATCH v5 00/13] libxl: Assorted bugfixes and cleanups"):
> On Thu, 2012-08-02 at 18:27 +0100, Ian Jackson wrote:
> > These should go into 4.2 soon:
> 
> I've applied 01..12 but skipped 11 pending your confirmation of the
> intended wording.

Ah.  I have now committed the fixed 11/13.

Thanks,
Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 11:05:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 11:05:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxFhV-000814-5O; Fri, 03 Aug 2012 11:05:49 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@eu.citrix.com>) id 1SxFhU-00080P-Jb
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 11:05:48 +0000
X-Env-Sender: George.Dunlap@eu.citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1343991875!2006586!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNjkzODk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3812 invoked from network); 3 Aug 2012 11:04:38 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 11:04:38 -0000
X-IronPort-AV: E=Sophos;i="4.77,706,1336363200"; d="scan'208";a="204042883"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Aug 2012 07:04:34 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 3 Aug 2012 07:03:55 -0400
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1SxFff-0006Kt-E7;
	Fri, 03 Aug 2012 12:03:55 +0100
Message-ID: <501BAF5E.6080805@eu.citrix.com>
Date: Fri, 3 Aug 2012 12:00:46 +0100
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120714 Thunderbird/14.0
MIME-Version: 1.0
To: Andre Przywara <andre.przywara@amd.com>
References: <1343837796.4958.32.camel@Solace>
	<501A67C502000078000921FF@nat28.tlf.novell.com>
	<1343914490.4873.18.camel@Solace>
	<CAFLBxZajiMKPvXG35boG9poYNbzFDgh5d-oRDA3T7gS55ofmrg@mail.gmail.com>
	<501BB4C202000078000926DB@nat28.tlf.novell.com>
	<501B9E81.1020302@amd.com>
In-Reply-To: <501B9E81.1020302@amd.com>
Cc: Anil Madhavapeddy <anil@recoil.org>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>, Dario Faggioli <raistlin@linux.it>,
	Jan Beulich <JBeulich@suse.com>, Yang Z Zhang <yang.z.zhang@intel.com>
Subject: Re: [Xen-devel] NUMA TODO-list for xen-devel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 03/08/12 10:48, Andre Przywara wrote:
>>> I think we could use cpu hot-plug to change the "virtual topology" of
>>> VMs, couldn't we?  We could probably even do that on a running guest
>>> if we really needed to.
>> Hmm, not sure - using hotplug behind the back of the guest might
>> be possible, but you'd first need to hot-unplug the vCPU. That's
>> something that I don't think you can do on HVM guests (and for
>> PV guests, guest visible NUMA support makes even less sense
>> than for HVM ones).
> I don't think that hotplug would really work. I checked this some
> time ago; the Linux NUMA code cannot really be fooled by this.
> The SRAT table is firmware-defined and static by nature, so there is no
> code in Linux to change the NUMA topology at runtime. This is especially
> true for the memory layout.
I was thinking more of giving a VM the biggest topology you would want 
at boot, and then asking Linux to online or offline vcpus; for example, 
giving it a 4x2 topology (4 vcores x 2 vnodes).  When running on a 
system with 2 cores per node, you offline 2 vcpus per vnode, giving it 
an effective layout of 2x2.  When running on a system with 4 cores per 
node, you could offline all of the cores on one node, giving it an 
effective topology of 4x1.

Unfortunately, I just realized that you could change the number of vcpus 
in a given node, but you couldn't move the memory around very easily.   
Unless you have memory hotplug? Hmm..... :-)

  -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 11:08:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 11:08:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxFjv-0008DV-Nn; Fri, 03 Aug 2012 11:08:19 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SxFju-0008DO-WA
	for xen-devel@lists.xensource.com; Fri, 03 Aug 2012 11:08:19 +0000
Received: from [85.158.143.35:10459] by server-3.bemta-4.messagelabs.com id
	FE/5E-01511-221BB105; Fri, 03 Aug 2012 11:08:18 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1343992097!17247542!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc5NTM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12047 invoked from network); 3 Aug 2012 11:08:17 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 11:08:17 -0000
X-IronPort-AV: E=Sophos;i="4.77,706,1336348800"; d="scan'208";a="13838830"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Aug 2012 11:08:17 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 3 Aug 2012 12:08:17 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1SxFjs-00015E-PN; Fri, 03 Aug 2012 11:08:16 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1SxFjs-0007KD-OF;
	Fri, 03 Aug 2012 12:08:16 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20507.45344.552468.930223@mariner.uk.xensource.com>
Date: Fri, 3 Aug 2012 12:08:16 +0100
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1343824081.27221.84.camel@zakaz.uk.xensource.com>
References: <4FC766F7.2030802@tiscali.it>
	<20434.1848.747678.259199@mariner.uk.xensource.com>
	<1343824081.27221.84.camel@zakaz.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: xen-devel <xen-devel@lists.xensource.com>,
	"fantonifabio@tiscali.it" <fantonifabio@tiscali.it>
Subject: Re: [Xen-devel] [PATCH] tools/hotplug/Linux/init.d/: added other
 xen kernel modules on xencommons start
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [Xen-devel] [PATCH] tools/hotplug/Linux/init.d/: added other xen kernel modules on xencommons start"):
> On Fri, 2012-06-08 at 15:07 +0100, Ian Jackson wrote:
> > Fabio Fantoni writes ("[Xen-devel] [PATCH] tools/hotplug/Linux/init.d/: added other xen kernel modules on xencommons start"):
> > > tools/hotplug/Linux/init.d/: added other xen kernel modules on 
> > > xencommons start
> > 
> > This looks at least harmless to me.
> > 
> > I'm surprised, however, that these things aren't loaded automatically.
> > For example, shouldn't the xenbus driver's enumeration automatically
> > load blkback too ?
> 
> Yes it should; there is autoloading stuff for all the backends.
> 
> Not sure about gntalloc. I suspect not.
> 
> > Having said that, I'm inclined to apply this unless someone 
> > explains that it's a bad idea.

I have applied it.

But we should still try to fix the upstream kernels not to need it.

Thanks,
Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 11:25:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 11:25:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxG0j-0000Du-Va; Fri, 03 Aug 2012 11:25:41 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SxG0j-0000Dh-3D
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 11:25:41 +0000
Received: from [85.158.139.83:26352] by server-2.bemta-5.messagelabs.com id
	09/4C-04598-435BB105; Fri, 03 Aug 2012 11:25:40 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-6.tower-182.messagelabs.com!1343993139!26228133!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc5NTM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31145 invoked from network); 3 Aug 2012 11:25:39 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-6.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 11:25:39 -0000
X-IronPort-AV: E=Sophos;i="4.77,706,1336348800"; d="scan'208";a="13839142"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Aug 2012 11:25:38 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 3 Aug 2012 12:25:38 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1SxG0g-0001Db-DL; Fri, 03 Aug 2012 11:25:38 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1SxG0g-0007Ux-CD;
	Fri, 03 Aug 2012 12:25:38 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20507.46386.364669.437762@mariner.uk.xensource.com>
Date: Fri, 3 Aug 2012 12:25:38 +0100
To: Ian Campbell <ian.campbell@citrix.com>
In-Reply-To: <2c21d5c75dcbdf52987f.1343985208@cosworth.uk.xensource.com>
References: <2c21d5c75dcbdf52987f.1343985208@cosworth.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: Greg Wettstein <greg@wind.enjellic.com>,
	Roger Pau Monne <roger.pau@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH V2] libxl: fix cleanup of tap devices in
 libxl__device_destroy
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("[PATCH V2] libxl: fix cleanup of tap devices in libxl__device_destroy"):
> libxl: fix cleanup of tap devices in libxl__device_destroy

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 11:29:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 11:29:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxG4V-0000T9-Ku; Fri, 03 Aug 2012 11:29:35 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andre.Przywara@amd.com>) id 1SxG4T-0000T3-UD
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 11:29:34 +0000
Received: from [85.158.138.51:16259] by server-10.bemta-3.messagelabs.com id
	C1/33-21993-D16BB105; Fri, 03 Aug 2012 11:29:33 +0000
X-Env-Sender: Andre.Przywara@amd.com
X-Msg-Ref: server-14.tower-174.messagelabs.com!1343993370!23897521!1
X-Originating-IP: [216.32.180.189]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27806 invoked from network); 3 Aug 2012 11:29:32 -0000
Received: from co1ehsobe006.messaging.microsoft.com (HELO
	co1outboundpool.messaging.microsoft.com) (216.32.180.189)
	by server-14.tower-174.messagelabs.com with AES128-SHA encrypted SMTP;
	3 Aug 2012 11:29:32 -0000
Received: from mail134-co1-R.bigfish.com (10.243.78.229) by
	CO1EHSOBE009.bigfish.com (10.243.66.72) with Microsoft SMTP Server id
	14.1.225.23; Fri, 3 Aug 2012 11:29:29 +0000
Received: from mail134-co1 (localhost [127.0.0.1])	by
	mail134-co1-R.bigfish.com (Postfix) with ESMTP id E14D85803A6;
	Fri,  3 Aug 2012 11:29:29 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:163.181.249.108; KIP:(null); UIP:(null);
	IPV:NLI; H:ausb3twp01.amd.com; RD:none; EFVD:NLI
X-SpamScore: -5
X-BigFish: VPS-5(zzbb2dI98dI9371I1432I14ffIzz1202hzz8275bhz2dh668h839hd25he5bhf0ah107ah)
Received: from mail134-co1 (localhost.localdomain [127.0.0.1]) by mail134-co1
	(MessageSwitch) id 1343993368369050_30083;
	Fri,  3 Aug 2012 11:29:28 +0000 (UTC)
Received: from CO1EHSMHS032.bigfish.com (unknown [10.243.78.245])	by
	mail134-co1.bigfish.com (Postfix) with ESMTP id 54BA2BC0048;
	Fri,  3 Aug 2012 11:29:28 +0000 (UTC)
Received: from ausb3twp01.amd.com (163.181.249.108) by
	CO1EHSMHS032.bigfish.com (10.243.66.42) with Microsoft SMTP Server id
	14.1.225.23; Fri, 3 Aug 2012 11:29:28 +0000
X-WSS-ID: 0M86FX2-01-BEJ-02
X-M-MSG: 
Received: from sausexedgep02.amd.com (sausexedgep02-ext.amd.com
	[163.181.249.73])	(using TLSv1 with cipher AES128-SHA (128/128
	bits))	(No
	client certificate requested)	by ausb3twp01.amd.com (Axway MailGate
	3.8.1)
	with ESMTP id 2E1AB10280BF;	Fri,  3 Aug 2012 06:29:25 -0500 (CDT)
Received: from SAUSEXDAG03.amd.com (163.181.55.3) by sausexedgep02.amd.com
	(163.181.36.59) with Microsoft SMTP Server (TLS) id 8.3.192.1;
	Fri, 3 Aug 2012 06:29:45 -0500
Received: from storexhtp02.amd.com (172.24.4.4) by sausexdag03.amd.com
	(163.181.55.3) with Microsoft SMTP Server (TLS) id 14.1.323.3;
	Fri, 3 Aug 2012 06:29:25 -0500
Received: from gwo.osrc.amd.com (165.204.16.204) by storexhtp02.amd.com
	(172.24.4.4) with Microsoft SMTP Server id 8.3.213.0; Fri, 3 Aug 2012
	07:29:25 -0400
Received: from mail.osrc.amd.com (aluminium.osrc.amd.com [165.204.15.141])	by
	gwo.osrc.amd.com (Postfix) with ESMTP id 250B749C20C;
	Fri,  3 Aug 2012 12:29:24 +0100 (BST)
Received: from [165.204.15.38] (wanderer.osrc.amd.com [165.204.15.38])	by
	mail.osrc.amd.com (Postfix) with ESMTPS id 569CB594037; Fri,  3 Aug 2012
	13:29:23 +0200 (CEST)
Message-ID: <501BB54E.1050302@amd.com>
Date: Fri, 3 Aug 2012 13:26:06 +0200
From: Andre Przywara <andre.przywara@amd.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:13.0) Gecko/20120615 Thunderbird/13.0.1
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1343837796.4958.32.camel@Solace> <501BA1C0.7040100@amd.com>
	<501BC6B8020000780009277B@nat28.tlf.novell.com>
In-Reply-To: <501BC6B8020000780009277B@nat28.tlf.novell.com>
X-OriginatorOrg: amd.com
Cc: Anil Madhavapeddy <anil@recoil.org>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>, Dario Faggioli <raistlin@linux.it>,
	George Dunlap <dunlapg@gmail.com>, Yang Z Zhang <yang.z.zhang@intel.com>
Subject: Re: [Xen-devel] NUMA TODO-list for xen-devel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/03/2012 12:40 PM, Jan Beulich wrote:
>>>> On 03.08.12 at 12:02, Andre Przywara <andre.przywara@amd.com> wrote:
>> On 08/01/2012 06:16 PM, Dario Faggioli wrote:
>>> Hi everyone,
>>>
>>> With automatic placement finally landing into xen-unstable, I started
>>> thinking about what I could work on next, still in the field of
>>> improving Xen's NUMA support. Well, it turned out that running out of
>>> things to do is not an option! :-O
>>>
>>> In fact, I can think of quite a few open issues in that area, which I'm
>>> just braindumping here.
>>
>>> ...
>>>
>>>          * automatic placement of Dom0, if possible (my current series is
>>>            only affecting DomU)
>>
>> I think Dom0 NUMA awareness should be one of the top priorities. If I
>> boot my 8-node box with Xen, I end up with a NUMA-clueless Dom0 which
>> actually has memory from all 8 nodes and thinks its memory is flat.
>> There are some tricks to confine it to node 0 (dom0_mem=<memory of
>> node0> dom0_vcpus=<cores in node0> dom0_vcpus_pin), but this requires
>> intimate knowledge of the system's parameters and is error-prone.
>
> How about "dom0_mem=node<n> dom0_vcpus=node<n>" as
> an extension to the current options?

Yes, that sounds like a good idea, and it should be relatively easy to
implement. Maybe also allow a list or a number of nodes (to make it more
complicated ;-)
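
For comparison, the manual confinement trick quoted above versus the proposed
node-based syntax might look something like this in a GRUB entry (a sketch
only: the memory/vcpu values are made up for an example machine, and the
option spellings for the proposed extension are hypothetical):

```
# Today: hand-computed per-machine values for node 0 (error-prone)
multiboot /boot/xen.gz dom0_mem=16384M dom0_max_vcpus=8 dom0_vcpus_pin

# Proposed: let the hypervisor derive memory and vcpus from the node itself
multiboot /boot/xen.gz dom0_mem=node0 dom0_vcpus=node0
```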

Regards,
Andre.

-- 
Andre Przywara
AMD-Operating System Research Center (OSRC), Dresden, Germany


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 11:36:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 11:36:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxGAp-0000rq-HI; Fri, 03 Aug 2012 11:36:07 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SxGAn-0000rh-CN
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 11:36:05 +0000
Received: from [85.158.138.51:3664] by server-12.bemta-3.messagelabs.com id
	FE/75-15259-4A7BB105; Fri, 03 Aug 2012 11:36:04 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-11.tower-174.messagelabs.com!1343993763!30233704!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12846 invoked from network); 3 Aug 2012 11:36:03 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-11.tower-174.messagelabs.com with SMTP;
	3 Aug 2012 11:36:03 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 03 Aug 2012 12:36:03 +0100
Message-Id: <501BD3ED02000078000927F6@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 03 Aug 2012 12:36:44 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Keir Fraser" <keir.xen@gmail.com>
References: <501BC5410200007800092765@nat28.tlf.novell.com>
	<CC416DC2.3A667%keir.xen@gmail.com>
In-Reply-To: <CC416DC2.3A667%keir.xen@gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] x86: fix wait code asm() constraints
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 03.08.12 at 13:00, Keir Fraser <keir.xen@gmail.com> wrote:
> On 03/08/2012 11:34, "Jan Beulich" <JBeulich@suse.com> wrote:
> 
>>>>> On 03.08.12 at 12:04, Keir Fraser <keir@xen.org> wrote:
>>> On 03/08/2012 09:40, "Jan Beulich" <JBeulich@suse.com> wrote:
>>> 
>>>> In __prepare_to_wait(), properly mark early clobbered registers. By
>>>> doing so, we at once eliminate the need to save/restore rCX and rDI.
>>>> 
>>>> In check_wakeup_from_wait(), make the current constraints match by
>>>> removing the code that actuall alters registers. By adjusting the
>>>> resume address in __prepare_to_wait(), we can simply re-use the copying
>>>> operation there (rather than doing a second pointless copy in the
>>>> opposite direction after branching to the resume point), which at once
>>>> eliminates the need for re-loading rCX and rDI inside the asm().
>>> 
>>> First of all, this is code improvement, rather than a bug fix, right? The
>>> asm constraints are correct for the code as it is, I believe.
>> 
>> No, the constraints aren't really correct at present (yet this is
>> not visible as a functional bug in any way) - from a formal
>> perspective, the early clobber specification is needed on _any_
>> operand that doesn't retain its value throughout an asm(). Any
>> future compiler could derive something from this that we don't
>> intend.
> 
> I'm confused. The registers have the same values at the start and the end of
> the asm statement. How can it possibly matter, even in theory, whether they
> temporarily change in the middle? Is this fairly strong assumption written
> down in the gcc documentation anywhere?

It's in the specification of the & modifier:

"‘&’ Means (in a particular alternative) that this operand is an
 earlyclobber operand, which is modified before the instruction
 is finished using the input operands. Therefore, this operand
 may not lie in a register that is used as an input operand or as
 part of any memory address."

Of course, here we're not having any other operands, which
is why at least at present getting this wrong does no harm.

>>> It also seems the patch splits into two independent parts:
>>> 
>>>  A. Not sure whether the trade-off of the rCX/rDI save/restore versus more
>>> complex asm constraints makes sense.
>>> 
>>>  B. Separately, the adjustment of the restore return address, and avoiding
>>> needing to reload rCX/rDI after label 1, as well as avoiding the copy in
>>> check_wakeup_from_wait(), is very nice.
>>> 
>>> I'm inclined to take the second part only, and make it clearer in the
>>> changeset comment that it is not a bug fix.
>>> 
>>> What do you think?
>> 
>> The patch could be split, yes, but where exactly the split(s)
>> should be isn't that obvious to me. And as it's fixing the same
>> kind of issue on both asm()-s, it seemed sensible to keep the
>> changes together.
> 
> Yes, that confused me too -- the output constraints on the second asm can
> hardly be wrong, or at least matter, since it never returns! Execution state
> is completely reloaded within the asm statement.

Formally they're wrong too without that change. And the
fact that the asm() does not "return" is irrelevant here, as
the restriction is because of the potential use of the register
inside the asm(), after it already got modified.

Formally it's also not permitted for the asm() to branch
elsewhere, but that is violated in so many places (Linux not
the least) that they can hardly dare to ever come up with
something breaking this.

"Speaking of labels, jumps from one asm to another are not
 supported. The compiler's optimizers do not know about these
 jumps, and therefore they cannot take account of them when
 deciding how to optimize."

Plus, this likely is really targeting jumps from one asm to another
_within_ one function, albeit that's not being said explicitly.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 11:38:22 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 11:38:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxGCn-00010I-63; Fri, 03 Aug 2012 11:38:09 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SxGCm-00010C-0L
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 11:38:08 +0000
Received: from [85.158.138.51:18989] by server-10.bemta-3.messagelabs.com id
	4B/32-21993-F18BB105; Fri, 03 Aug 2012 11:38:07 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-5.tower-174.messagelabs.com!1343993885!30328373!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28497 invoked from network); 3 Aug 2012 11:38:05 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-5.tower-174.messagelabs.com with SMTP;
	3 Aug 2012 11:38:05 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 03 Aug 2012 12:38:03 +0100
Message-Id: <501BD46402000078000927F9@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 03 Aug 2012 12:38:44 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andre Przywara" <andre.przywara@amd.com>
References: <1343837796.4958.32.camel@Solace> <501BA1C0.7040100@amd.com>
	<501BC6B8020000780009277B@nat28.tlf.novell.com>
	<501BB54E.1050302@amd.com>
In-Reply-To: <501BB54E.1050302@amd.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Anil Madhavapeddy <anil@recoil.org>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>, Dario Faggioli <raistlin@linux.it>,
	George Dunlap <dunlapg@gmail.com>, Yang Z Zhang <yang.z.zhang@intel.com>
Subject: Re: [Xen-devel] NUMA TODO-list for xen-devel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 03.08.12 at 13:26, Andre Przywara <andre.przywara@amd.com> wrote:
> On 08/03/2012 12:40 PM, Jan Beulich wrote:
>>>>> On 03.08.12 at 12:02, Andre Przywara <andre.przywara@amd.com> wrote:
>>> On 08/01/2012 06:16 PM, Dario Faggioli wrote:
>>>> Hi everyone,
>>>>
>>>> With automatic placement finally landing into xen-unstable, I started
>>>> thinking about what I could work on next, still in the field of
>>>> improving Xen's NUMA support. Well, it turned out that running out of
>>>> things to do is not an option! :-O
>>>>
>>>> In fact, I can think of quite a few open issues in that area, which I'm
>>>> just braindumping here.
>>>
>>>> ...
>>>>
>>>>          * automatic placement of Dom0, if possible (my current series is
>>>>            only affecting DomU)
>>>
>>> I think Dom0 NUMA awareness should be one of the top priorities. If I
>>> boot my 8-node box with Xen, I end up with a NUMA-clueless Dom0 which
>>> actually has memory from all 8 nodes and thinks its memory is flat.
>>> There are some tricks to confine it to node 0 (dom0_mem=<memory of
>>> node0> dom0_vcpus=<cores in node0> dom0_vcpus_pin), but this requires
>>> intimate knowledge of the system's parameters and is error-prone.
>>
>> How about "dom0_mem=node<n> dom0_vcpus=node<n>" as
>> an extension to the current options?
> 
> Yes, that sounds like a good idea. And relatively easy to implement.
> Maybe a list or a number of nodes (to make it more complicated ;-)

Oh yes, of course I implied this flexibility. I just wanted to give
an easy-to-read example.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 11:41:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 11:41:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxGFa-0001A4-OQ; Fri, 03 Aug 2012 11:41:02 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1SxGFZ-00019x-Lq
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 11:41:01 +0000
Received: from [85.158.143.99:51553] by server-2.bemta-4.messagelabs.com id
	6A/87-17938-DC8BB105; Fri, 03 Aug 2012 11:41:01 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-9.tower-216.messagelabs.com!1343994056!30126328!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13548 invoked from network); 3 Aug 2012 11:40:56 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-9.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 3 Aug 2012 11:40:56 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1SxGFP-0007LL-Ra; Fri, 03 Aug 2012 11:40:51 +0000
Date: Fri, 3 Aug 2012 12:40:51 +0100
From: Tim Deegan <tim@xen.org>
To: Santosh Jodh <Santosh.Jodh@citrix.com>
Message-ID: <20120803114051.GC25286@ocelot.phlegethon.org>
References: <7914B38A4445B34AA16EB9F1352942F1012F0CDD8A71@SJCPMAILBOX01.citrite.net>
	<20120719102326.GC75169@ocelot.phlegethon.org>
	<7914B38A4445B34AA16EB9F1352942F1012F0CDD9588@SJCPMAILBOX01.citrite.net>
	<20120719145318.GA78502@ocelot.phlegethon.org>
	<CAL54oT3yqLtDpEHe7n20K=i-hz27P-TYK11tm+VNL2BXMs+hMg@mail.gmail.com>
	<7914B38A4445B34AA16EB9F1352942F1012F0D841328@SJCPMAILBOX01.citrite.net>
	<20120724194150.GA68065@ocelot.phlegethon.org>
	<7914B38A4445B34AA16EB9F1352942F1012F0DE66157@SJCPMAILBOX01.citrite.net>
	<20120802111102.GE11437@ocelot.phlegethon.org>
	<7914B38A4445B34AA16EB9F1352942F1012F0DE665E4@SJCPMAILBOX01.citrite.net>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <7914B38A4445B34AA16EB9F1352942F1012F0DE665E4@SJCPMAILBOX01.citrite.net>
User-Agent: Mutt/1.4.2.1i
Cc: "xiantao.zhang@intel.com" <xiantao.zhang@intel.com>, "Nakajima,
	Jun" <jun.nakajima@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Superpages for VT-D
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 08:56 -0700 on 02 Aug (1343897810), Santosh Jodh wrote:
> Thanks for confirming. BTW, would the IOMMU ever have entries above
> the domain's max mapped pfn?

I don't believe so, but potentially some grant-table-style operations
might want to map pages in for DMA that don't need to be mapped in for
CPU access. 

But if you're writing a dump routine from scratch it should be easy
enough (and more efficient) to dump all the entries with a depth-first
pass over the trie, rather than individually querying all frames up to
max_mapped_pfn.

Tim.

> -----Original Message-----
> From: Tim Deegan [mailto:tim@xen.org] 
> Sent: Thursday, August 02, 2012 4:11 AM
> To: Santosh Jodh
> Cc: xiantao.zhang@intel.com; Nakajima, Jun; xen-devel@lists.xen.org
> Subject: Re: [Xen-devel] Superpages for VT-D
> 
> At 13:44 -0700 on 31 Jul (1343742260), Santosh Jodh wrote:
> > I am going to try to add this support. 
> > 
> > It looks like a new iommu_ops handler would be needed that would do 
> > the actual work of dumping the entries - one for AMD and one for 
> > Intel. Am I reading this correctly?
> 
> I think that's correct.
> 
> > Or is it better to get the root_table + paging_mode (for AMD) and 
> > pgd_maddr + agaw (for Intel) and then do a generic dump?
> 
> No; the iommu tables are sufficiently arch-specific that it's best to add arch-specific dump routines.
> 
> Cheers,
> 
> Tim.
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 12:04:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 12:04:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxGcH-0001tW-Gy; Fri, 03 Aug 2012 12:04:29 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <darnok@68k.org>) id 1SxGcG-0001tR-OP
	for xen-devel@lists.xensource.com; Fri, 03 Aug 2012 12:04:29 +0000
Received: from [85.158.138.51:27353] by server-3.bemta-3.messagelabs.com id
	C7/EA-08301-B4EBB105; Fri, 03 Aug 2012 12:04:27 +0000
X-Env-Sender: darnok@68k.org
X-Msg-Ref: server-6.tower-174.messagelabs.com!1343995463!22214354!1
X-Originating-IP: [206.212.254.10]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8506 invoked from network); 3 Aug 2012 12:04:24 -0000
Received: from andromeda.dapyr.net (HELO andromeda.dapyr.net) (206.212.254.10)
	by server-6.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 3 Aug 2012 12:04:24 -0000
Received: from andromeda.dapyr.net (darnok@localhost [127.0.0.1])
	by andromeda.dapyr.net (8.13.4/8.13.4/Debian-3sarge3) with ESMTP id
	q73C4Eib011455
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NOT);
	Fri, 3 Aug 2012 08:04:15 -0400
Received: (from darnok@localhost)
	by andromeda.dapyr.net (8.13.4/8.13.4/Submit) id q73C4E8a011453;
	Fri, 3 Aug 2012 08:04:14 -0400
Date: Fri, 3 Aug 2012 08:04:14 -0400
From: Konrad Rzeszutek Wilk <konrad@darnok.org>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, akpm@linux-foundation.org, 
	linux-kernel@vger.kernel.org
Message-ID: <20120803120414.GA10670@andromeda.dapyr.net>
References: <20120801190227.GA13272@phenom.dumpdata.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20120801190227.GA13272@phenom.dumpdata.com>
User-Agent: Mutt/1.5.9i
Cc: Ian Campbell <Ian.Campbell@eu.citrix.com>, xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] Regression in xen-netfront on v3.6.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Aug 01, 2012 at 03:02:27PM -0400, Konrad Rzeszutek Wilk wrote:
> So I hadn't done a git bisection yet. But if I choose git commit:
> 4b24ff71108164e047cf2c95990b77651163e315
>     Merge tag 'for-v3.6' of git://git.infradead.org/battery-2.6
> 
>     Pull battery updates from Anton Vorontsov:
> 
> 
> everything works nicely. Anything past that, so these merges:
> 
> konrad@phenom:~/ssd/linux$ git log --oneline --merges 4b24ff71108164e047cf2c95990b77651163e315..linus/master
> 2d53492 Merge tag 'irqdomain-for-linus' of git://git.secretlab.ca/git/linux-2.6
===> ac694db Merge branch 'akpm' (Andrew's patch-bomb)

Somewhere in there is the culprit. I haven't done the full bisection yet
(I was just checking out each merge to see where it stopped working).

Andrew, CC'ing you here; the serial log is below.
> a40a1d3 Merge tag 'vfio-for-v3.6' of git://github.com/awilliam/linux-vfio
> 3e9a970 Merge tag 'random_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/random
> 941c872 Merge tag 'rdma-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/roland/infiniband
> 8762541 Merge branch 'v4l_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-media
> 6dbb35b Merge tag 'nfs-for-3.6-2' of git://git.linux-nfs.org/projects/trondmy/linux-nfs
> fd37ce3 Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
> 1da9b6b Merge branches 'cma', 'ipoib', 'ocrdma' and 'qib' into for-next
> 6aeea3e Merge remote-tracking branch 'origin' into irqdomain/next
> 931efdf Merge branch 'v4l_for_linus' into staging/for_v3.6
> 80c1834 Merge tag 'v3.5-rc6' into irqdomain/next
> 
> are the culprit. I think it might be the networking pull, but I'm not sure. Ian,
> any thoughts?
> 
> Using config file "/test.xm".
> Started domain latest (id=2)
> [    0.000000] console [hvc0] enabled, bootconsole disabled
> [    0.000000] Xen: using vcpuop timer interface
> [    0.000000] installing Xen timer for CPU 0
> [    0.000000] tsc: Detected 2294.530 MHz processor
> [    0.000999] Calibrating delay loop (skipped), value calculated using timer frequency.. 4589.06 BogoMIPS (lpj=2294530)
> [    0.000999] pid_max: default: 32768 minimum: 301
> [    0.000999] Security Framework initialized
> [    0.000999] SELinux:  Initializing.
> [    0.000999] SELinux:  Starting in permissive mode
> [    0.000999] Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes)
> [    0.001520] Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes)
> [    0.001875] Mount-cache hash table entries: 256
> [    0.002007] Initializing cgroup subsys cpuacct
> [    0.002013] Initializing cgroup subsys freezer
> [    0.002070] ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
> [    0.002070] ENERGY_PERF_BIAS: View and update with x86_energy_perf_policy(8)
> [    0.002084] CPU: Physical Processor ID: 0
> [    0.002087] CPU: Processor Core ID: 0
> [    0.002094] Last level iTLB entries: 4KB 512, 2MB 0, 4MB 0
> [    0.002094] Last level dTLB entries: 4KB 512, 2MB 32, 4MB 32
> [    0.002094] tlb_flushall_shift is 0x5
> [    0.002164] SMP alternatives: switching to UP code
> [    0.025291] Freeing SMP alternatives: 24k freed
> [    0.025356] cpu 0 spinlock event irq 17
> [    0.025383] Performance Events: unsupported p6 CPU model 45 no PMU driver, software events only.
> [    0.025551] NMI watchdog: disabled (cpu0): hardware events not enabled
> [    0.025576] Brought up 1 CPUs
> [    0.028642] kworker/u:0 (14) used greatest stack depth: 5936 bytes left
> [    0.028675] Grant tables using version 2 layout.
> [    0.028691] Grant table initialized
> [    0.047616] RTC time: 165:165:165, date: 165/165/65
> [    0.047661] NET: Registered protocol family 16
> [    0.048184] dca service started, version 1.12.1
> [    0.048545] PCI: setting up Xen PCI frontend stub
> [    0.048552] PCI: pci_cache_line_size set to 64 bytes
> [    0.049543] kworker/u:0 (51) used greatest stack depth: 5472 bytes left
> [    0.054147] bio: create slab <bio-0> at 0
> [    0.054240] ACPI: Interpreter disabled.
> [    0.054288] xen/balloon: Initialising balloon driver.
> [    0.055127] xen-balloon: Initialising balloon driver.
> [    0.055127] vgaarb: loaded
> [    0.056125] usbcore: registered new interface driver usbfs
> [    0.056162] usbcore: registered new interface driver hub
> [    0.056217] usbcore: registered new device driver usb
> [    0.056425] PCI: System does not support PCI
> [    0.056431] PCI: System does not support PCI
> [    0.056617] NetLabel: Initializing
> [    0.056624] NetLabel:  domain hash size = 128
> [    0.056627] NetLabel:  protocols = UNLABELED CIPSOv4
> [    0.056642] NetLabel:  unlabeled traffic allowed by default
> [    0.056725] Switching to clocksource xen
> [    0.056795] pnp: PnP ACPI: disabled
> [    0.058698] NET: Registered protocol family 2
> [    0.059805] TCP established hash table entries: 524288 (order: 11, 8388608 bytes)
> [    0.061110] TCP bind hash table entries: 65536 (order: 8, 1048576 bytes)
> [    0.061243] TCP: Hash tables configured (established 524288 bind 65536)
> [    0.061281] TCP: reno registered
> [    0.061304] UDP hash table entries: 2048 (order: 4, 65536 bytes)
> [    0.061341] UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes)
> [    0.061425] NET: Registered protocol family 1
> [    0.061492] RPC: Registered named UNIX socket transport module.
> [    0.061498] RPC: Registered udp transport module.
> [    0.061504] RPC: Registered tcp transport module.
> [    0.061510] RPC: Registered tcp NFSv4.1 backchannel transport module.
> [    0.061518] PCI: CLS 0 bytes, default 64
> [    0.061643] Trying to unpack rootfs image as initramfs...
> [    0.382189] Freeing initrd memory: 362080k freed
> [    0.499615] platform rtc_cmos: registered platform RTC device (no PNP device found)
> [    0.499831] Machine check injector initialized
> [    0.500181] microcode: CPU0 sig=0x206d2, pf=0x1, revision=0x8000020c
> [    0.500229] microcode: Microcode Update Driver: v2.00 <tigran@aivazian.fsnet.co.uk>, Peter Oruba
> [    0.500544] audit: initializing netlink socket (disabled)
> [    0.500566] type=2000 audit(1343845740.901:1): initialized
> [    0.515227] HugeTLB registered 2 MB page size, pre-allocated 0 pages
> [    0.515358] VFS: Disk quotas dquot_6.5.2
> [    0.515386] Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
> [    0.515525] NFS: Registering the id_resolver key type
> [    0.515544] Key type id_resolver registered
> [    0.515551] Key type id_legacy registered
> [    0.515599] NTFS driver 2.1.30 [Flags: R/W].
> [    0.515706] msgmni has been set to 8021
> [    0.515765] SELinux:  Registering netfilter hooks
> [    0.516222] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 253)
> [    0.516232] io scheduler noop registered
> [    0.516236] io scheduler deadline registered
> [    0.516243] io scheduler cfq registered (default)
> [    0.516337] pci_hotplug: PCI Hot Plug PCI Core version: 0.5
> [    0.516442] ioatdma: Intel(R) QuickData Technology Driver 4.00
> [    0.532923] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
> [    0.533399] Non-volatile memory driver v1.3
> [    0.533406] Linux agpgart interface v0.103
> [    0.533588] [drm] Initialized drm 1.1.0 20060810
> [    0.535196] brd: module loaded
> [    0.535992] loop: module loaded
> [    0.536344] libphy: Fixed MDIO Bus: probed
> [    0.536351] tun: Universal TUN/TAP device driver, 1.6
> [    0.536354] tun: (C) 1999-2004 Max Krasnyansky <maxk@qualcomm.com>
> [    0.536419] ixgbevf: Intel(R) 10 Gigabit PCI Express Virtual Function Network Driver - version 2.6.0-k
> [    0.536428] ixgbevf: Copyright (c) 2009 - 2012 Intel Corporation.
> [    0.536721] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
> [    0.536729] ehci_hcd: block sizes: qh 104 qtd 96 itd 192 sitd 96
> [    0.536770] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
> [    0.536776] ohci_hcd: block sizes: ed 80 td 96
> [    0.536826] uhci_hcd: USB Universal Host Controller Interface driver
> [    0.536929] usbcore: registered new interface driver usblp
> [    0.536977] usbcore: registered new interface driver libusual
> [    0.537164] i8042: PNP: No PS/2 controller found. Probing ports directly.
> [    0.538013] i8042: No controller found
> [    0.538103] mousedev: PS/2 mouse device common for all mice
> [    0.598349] rtc_cmos rtc_cmos: rtc core: registered rtc_cmos as rtc0
> [    0.598404] rtc_cmos: probe of rtc_cmos failed with error -38
> [    0.598559] EFI Variables Facility v0.08 2004-May-17
> [    0.598701] Netfilter messages via NETLINK v0.30.
> [    0.598719] nf_conntrack version 0.5.0 (16384 buckets, 65536 max)
> [    0.598790] ctnetlink v0.93: registering with nfnetlink.
> [    0.598971] ip_tables: (C) 2000-2006 Netfilter Core Team
> [    0.599007] TCP: cubic registered
> [    0.599014] Initializing XFRM netlink socket
> [    0.599043] NET: Registered protocol family 10
> [    0.599238] ip6_tables: (C) 2000-2006 Netfilter Core Team
> [    0.599374] sit: IPv6 over IPv4 tunneling driver
> [    0.599589] NET: Registered protocol family 17
> [    0.599618] Key type dns_resolver registered
> [    0.599808] PM: Hibernation image not present or could not be loaded.
> [    0.599824] registered taskstats version 1
> [    0.599848] XENBUS: Device with no driver: device/vkbd/0
> [    0.599852] XENBUS: Device with no driver: device/vfb/0
> [    0.599856] XENBUS: Device with no driver: device/vbd/51712
> [    0.599860] XENBUS: Device with no driver: device/vif/0
> [    0.599886]   Magic number: 1:252:3141
> [    0.600271] Freeing unused kernel memory: 704k freed
> [    0.600500] Write protecting the kernel read-only data: 8192k
> [    0.602437] Freeing unused kernel memory: 132k freed
> [    0.602589] Freeing unused kernel memory: 340k freed
> init started: BusyBox v1.14.3 (2012-08-01 13:52:44 EDT)
> [    0.607489] consoletype (1056) used greatest stack depth: 5288 bytes left
> Mounting directories  [  OK  ]
> mount: mount point /proc/bus/usb does not exist
> [    0.781044] modprobe (1085) used greatest stack depth: 5048 bytes left
> mount: mount point /sys/kernel/config does not exist
> [    0.785748] core_filesystem (1057) used greatest stack depth: 4968 bytes left
> [    0.793721] input: Xen Virtual Keyboard as /devices/virtual/input/input0
> [    0.793892] input: Xen Virtual Pointer as /devices/virtual/input/input1
> [    1.010121] Initialising Xen virtual ethernet driver.
> [    1.124604] blkfront: xvda: flush diskcache: enabled
> [    1.126118]  xvda: xvda1 xvda2 xvda3 xvda4
> [    1.239316] udevd (1128): /proc/1128/oom_adj is deprecated, please use /proc/1128/oom_score_adj instead.
> udevd-work[1130]: error opening ATTR{/sys/devices/system/cpu/cpu0/online} for writing: No such file or directory
> 
> [    1.395080] ip (1408) used greatest stack depth: 3912 bytes left
> Waiting for devices [  OK  ]
> Waiting for init.pre_custom [  OK  ]
> Waiting for fb [  OK  ]
> Starting..[/dev/fb0]
> /dev/fb0: len:0
> /dev/fb0: bits/pixel32
> (7f44ddbc2000): Writting .. [800:600]
> Done!
> FATAL: Module agpgart_intel not found.
> [    1.652131] [drm] radeon kernel modesetting enabled.
> WARNING: Error inserting video (/lib/modules/3.5.0upstream-08854-g444fa66/kernel/drivers/acpi/video.ko): No such device
> WARNING: Error inserting mxm_wmi (/lib/modules/3.5.0upstream-08854-g444fa66/kernel/drivers/platform/x86/mxm-wmi.ko): No such device
> WARNING: Error inserting drm_kms_helper (/lib/modules/3.5.0upstream-08854-g444fa66/kernel/drivers/gpu/drm/drm_kms_helper.ko): No such device
> WARNING: Error inserting ttm (/lib/modules/3.5.0upstream-08854-g444fa66/kernel/drivers/gpu/drm/ttm/ttm.ko): No such device
> FATAL: Error inserting nouveau (/lib/modules/3.5.0upstream-08854-g444fa66/kernel/drivers/gpu/drm/nouveau/nouveau.ko): No such device
> [    1.660288] Console: switching to colour frame buffer device 100x37
> WARNING: Error inserting drm_kms_helper (/lib/modules/3.5.0upstream-08854-g444fa66/kernel/drivers/gpu/drm/drm_kms_helper.ko): No such device
> FATAL: Error inserting i915 (/lib/modules/3.5.0upstream-08854-g444fa66/kernel/drivers/gpu/drm/i915/i915.ko): No such device
> Starting..[/dev/fb0]
> /dev/fb0: len:0
> /dev/fb0: bits/pixel32
> (7fb8669cc000): Writting .. [800:600]
> Done!
> VGA: 0000:
> Waiting for network [  OK  ]
> Bringing up loopback interface:  [  OK  ]
> Bringing up interface eth0:  [    1.908592] BUG: unable to handle kernel NULL pointer dereference at 0000000000000010
> [    1.908643] IP: [<ffffffffa0037750>] xennet_poll+0x980/0xec0 [xen_netfront]
> [    1.908703] PGD ea1df067 PUD e8ada067 PMD 0 
> [    1.908774] Oops: 0000 [#1] SMP 
> [    1.908797] Modules linked in: fbcon tileblit font radeon bitblit softcursor ttm drm_kms_helper crc32c_intel xen_blkfront xen_netfront xen_fbfront fb_sys_fops sysimgblt sysfillrect syscopyarea xen_kbdfront xenfs xen_privcmd
> [    1.908938] CPU 0 
> [    1.908950] Pid: 2165, comm: ip Not tainted 3.5.0upstream-08854-g444fa66 #1  
> [    1.908983] RIP: e030:[<ffffffffa0037750>]  [<ffffffffa0037750>] xennet_poll+0x980/0xec0 [xen_netfront]
> [    1.909029] RSP: e02b:ffff8800ffc03db8  EFLAGS: 00010282
> [    1.909055] RAX: ffff8800ea010140 RBX: ffff8800f00e86c0 RCX: 000000000000009a
> [    1.909055] RDX: 0000000000000040 RSI: 000000000000005a RDI: ffff8800fa7dee80
> [    1.909055] RBP: ffff8800ffc03ee8 R08: ffff8800f00e86d8 R09: ffff8800ea010000
> [    1.909055] R10: dead000000200200 R11: dead000000100100 R12: ffff8800fa7dee80
> [    1.909055] R13: 000000000000005a R14: ffff8800fa7dee80 R15: 0000000000000200
> [    1.909055] FS:  00007fbafc188700(0000) GS:ffff8800ffc00000(0000) knlGS:0000000000000000
> [    1.909055] CS:  e033 DS: 0000 ES: 0000 CR0: 000000008005003b
> [    1.909055] CR2: 0000000000000010 CR3: 00000000ea108000 CR4: 0000000000002660
> [    1.909055] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> [    1.909055] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
> [    1.909055] Process ip (pid: 2165, threadinfo ffff8800ea0f2000, task ffff8800fa783040)
> [    1.909055] Stack:
> [    1.909055]  ffff8800e27e5040 ffff8800ffc03e88 ffff8800ffc03e68 ffff8800ffc03e48
> [    1.909055]  7fffffffffffffff ffff8800ffc03e00 ffff8800e27e5040 ffff8800f00e86d8
> [    1.909055]  ffff8800ffc03eb0 00000040ffffffff ffff8800f00e8000 00000000ffc03e30
> [    1.909055] Call Trace:
> [    1.909055]  <IRQ> 
> [    1.909055]  [<ffffffff81066028>] ? pvclock_clocksource_read+0x58/0xd0
> [    1.909055]  [<ffffffff81486352>] net_rx_action+0x112/0x240
> [    1.909055]  [<ffffffff8107f319>] __do_softirq+0xb9/0x190
> [    1.909055]  [<ffffffff815d8d7c>] call_softirq+0x1c/0x30
> [    1.909055]  <EOI> 
> [    1.909055]  [<ffffffff8103a435>] do_softirq+0x65/0xa0
> [    1.909055]  [<ffffffff8107f834>] local_bh_enable_ip+0x94/0xa0
> [    1.909055]  [<ffffffff815d11a4>] _raw_spin_unlock_bh+0x24/0x30
> [    1.909055]  [<ffffffffa0036d44>] xennet_open+0x54/0xe0 [xen_netfront]
> [    1.909055]  [<ffffffff81481dcf>] __dev_open+0xbf/0x120
> [    1.909055]  [<ffffffff8148022c>] __dev_change_flags+0x9c/0x180
> [    1.909055]  [<ffffffff81481cc3>] dev_change_flags+0x23/0x70
> [    1.909055]  [<ffffffff81491062>] do_setlink+0x1c2/0xa10
> [    1.909055]  [<ffffffff812b5d74>] ? nla_parse+0x34/0x110
> [    1.909055]  [<ffffffff81494005>] rtnl_newlink+0x3a5/0x5c0
> [    1.909055]  [<ffffffff812541c4>] ? selinux_capable+0x34/0x50
> [    1.909055]  [<ffffffff81250223>] ? security_capable+0x13/0x20
> [    1.909055]  [<ffffffff81491e07>] rtnetlink_rcv_msg+0x2c7/0x330
> [    1.909055]  [<ffffffff810a18b7>] ? __might_sleep+0xe7/0x100
> [    1.909055]  [<ffffffff81149a52>] ? kmem_cache_alloc_node+0x82/0x1d0
> [    1.909055]  [<ffffffff8147a00c>] ? __skb_recv_datagram+0x11c/0x2f0
> [    1.909055]  [<ffffffff81491b40>] ? rtnetlink_rcv+0x30/0x30
> [    1.909055]  [<ffffffff814a9c89>] netlink_rcv_skb+0x99/0xc0
> [    1.909055]  [<ffffffff81491b30>] rtnetlink_rcv+0x20/0x30
> [    1.909055]  [<ffffffff814a9998>] netlink_unicast+0x1a8/0x220
> [    1.909055]  [<ffffffff814aa535>] netlink_sendmsg+0x205/0x300
> [    1.909055]  [<ffffffff8146ce19>] sock_sendmsg+0xb9/0xf0
> [    1.909055]  [<ffffffff8146c51e>] ? copy_from_user+0x3e/0x50
> [    1.909055]  [<ffffffff8146c576>] ? move_addr_to_kernel+0x46/0x80
> [    1.909055]  [<ffffffff810a18b7>] ? __might_sleep+0xe7/0x100
> [    1.909055]  [<ffffffff8146dd2d>] __sys_sendmsg+0x3dd/0x400
> [    1.909055]  [<ffffffff8112c751>] ? handle_mm_fault+0x261/0x380
> [    1.909055]  [<ffffffff815d4cd0>] ? do_page_fault+0x250/0x4f0
> [    1.909055]  [<ffffffff8114a587>] ? kmem_cache_alloc+0x1a7/0x1f0
> [    1.909055]  [<ffffffff811311a4>] ? do_brk+0x1b4/0x350
> [    1.909055]  [<ffffffff8146df04>] sys_sendmsg+0x44/0x80
> [    1.909055]  [<ffffffff815d7bf9>] system_call_fastpath+0x16/0x1b
> [    1.909055] Code: 44 00 00 41 80 8c 24 a8 00 00 00 04 e9 2f ff ff ff 66 2e 0f 1f 84 00 00 00 00 00 41 8b 84 24 c8 00 00 00 49 03 84 24 d0 00 00 00 <80> 3c 25 10 00 00 00 00 48 8d 50 30 74 0f 48 83 3c 25 08 00 00 
> [    1.909055] RIP  [<ffffffffa0037750>] xennet_poll+0x980/0xec0 [xen_netfront]
> [    1.909055]  RSP <ffff8800ffc03db8>
> [    1.909055] CR2: 0000000000000010
> [    1.947298] ---[ end trace 3f4ba742dffbe90d ]---
> [    1.947824] Kernel panic - not syncing: Fatal exception in interrupt
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 12:04:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 12:04:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxGcH-0001tW-Gy; Fri, 03 Aug 2012 12:04:29 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <darnok@68k.org>) id 1SxGcG-0001tR-OP
	for xen-devel@lists.xensource.com; Fri, 03 Aug 2012 12:04:29 +0000
Received: from [85.158.138.51:27353] by server-3.bemta-3.messagelabs.com id
	C7/EA-08301-B4EBB105; Fri, 03 Aug 2012 12:04:27 +0000
X-Env-Sender: darnok@68k.org
X-Msg-Ref: server-6.tower-174.messagelabs.com!1343995463!22214354!1
X-Originating-IP: [206.212.254.10]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8506 invoked from network); 3 Aug 2012 12:04:24 -0000
Received: from andromeda.dapyr.net (HELO andromeda.dapyr.net) (206.212.254.10)
	by server-6.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 3 Aug 2012 12:04:24 -0000
Received: from andromeda.dapyr.net (darnok@localhost [127.0.0.1])
	by andromeda.dapyr.net (8.13.4/8.13.4/Debian-3sarge3) with ESMTP id
	q73C4Eib011455
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NOT);
	Fri, 3 Aug 2012 08:04:15 -0400
Received: (from darnok@localhost)
	by andromeda.dapyr.net (8.13.4/8.13.4/Submit) id q73C4E8a011453;
	Fri, 3 Aug 2012 08:04:14 -0400
Date: Fri, 3 Aug 2012 08:04:14 -0400
From: Konrad Rzeszutek Wilk <konrad@darnok.org>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, akpm@linux-foundation.org, 
	linux-kernel@vger.kernel.org
Message-ID: <20120803120414.GA10670@andromeda.dapyr.net>
References: <20120801190227.GA13272@phenom.dumpdata.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20120801190227.GA13272@phenom.dumpdata.com>
User-Agent: Mutt/1.5.9i
Cc: Ian Campbell <Ian.Campbell@eu.citrix.com>, xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] Regression in xen-netfront on v3.6.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Aug 01, 2012 at 03:02:27PM -0400, Konrad Rzeszutek Wilk wrote:
> So I haven't done a git bisection yet. But if I check out commit:
> 4b24ff71108164e047cf2c95990b77651163e315
>     Merge tag 'for-v3.6' of git://git.infradead.org/battery-2.6
> 
>     Pull battery updates from Anton Vorontsov:
> 
> 
> everything works nicely. Anything past that, i.e. these merges:
> 
> konrad@phenom:~/ssd/linux$ git log --oneline --merges 4b24ff71108164e047cf2c95990b77651163e315..linus/master
> 2d53492 Merge tag 'irqdomain-for-linus' of git://git.secretlab.ca/git/linux-2.6
===> ac694db Merge branch 'akpm' (Andrew's patch-bomb)

Somewhere in there is the culprit. I haven't done the full bisection yet
(I was just checking out each merge to see when it stopped working).

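The merge-by-merge checkout described here can be automated with `git bisect run`. A minimal self-contained sketch on a throwaway repository (the file name, commit messages, and grep-based "test" are invented for illustration; in the real case the run script would boot the guest and check for the oops):

```shell
#!/bin/sh
# Build a throwaway repo with a known "regression" at commit 4, then let
# git bisect find it automatically.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "you@example.com"
git config user.name "you"

for i in 1 2 3 4 5; do
    echo "rev $i" > file
    # the "regression": present from revision 4 onwards
    if [ "$i" -ge 4 ]; then echo broken >> file; fi
    git add file
    git commit -q -m "rev $i"
done

good=$(git rev-list --max-parents=0 HEAD)   # first commit is known good
git bisect start HEAD "$good"               # <bad> <good>
# git bisect run drives the search: exit 0 = good, non-zero = bad
git bisect run sh -c '! grep -q broken file' >/dev/null
first_bad=$(git log -1 --format=%s refs/bisect/bad)
echo "first bad commit: $first_bad"         # -> first bad commit: rev 4
git bisect reset >/dev/null
```

With roughly a dozen merges in the suspect range, this needs only three or four boots instead of checking every merge by hand.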
Andrew, I'm CC-ing you here; the serial log is below.
> a40a1d3 Merge tag 'vfio-for-v3.6' of git://github.com/awilliam/linux-vfio
> 3e9a970 Merge tag 'random_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/random
> 941c872 Merge tag 'rdma-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/roland/infiniband
> 8762541 Merge branch 'v4l_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-media
> 6dbb35b Merge tag 'nfs-for-3.6-2' of git://git.linux-nfs.org/projects/trondmy/linux-nfs
> fd37ce3 Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
> 1da9b6b Merge branches 'cma', 'ipoib', 'ocrdma' and 'qib' into for-next
> 6aeea3e Merge remote-tracking branch 'origin' into irqdomain/next
> 931efdf Merge branch 'v4l_for_linus' into staging/for_v3.6
> 80c1834 Merge tag 'v3.5-rc6' into irqdomain/next
> 
> are the culprit. I think it might be the networking pull, but I'm not sure.
> Ian, any thoughts?
> 
> Using config file "/test.xm".
> Started domain latest (id=2)
> [    0.000000] console [hvc0] enabled, bootconsole disabled
> [    0.000000] Xen: using vcpuop timer interface
> [    0.000000] installing Xen timer for CPU 0
> [    0.000000] tsc: Detected 2294.530 MHz processor
> [    0.000999] Calibrating delay loop (skipped), value calculated using timer frequency.. 4589.06 BogoMIPS (lpj=2294530)
> [    0.000999] pid_max: default: 32768 minimum: 301
> [    0.000999] Security Framework initialized
> [    0.000999] SELinux:  Initializing.
> [    0.000999] SELinux:  Starting in permissive mode
> [    0.000999] Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes)
> [    0.001520] Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes)
> [    0.001875] Mount-cache hash table entries: 256
> [    0.002007] Initializing cgroup subsys cpuacct
> [    0.002013] Initializing cgroup subsys freezer
> [    0.002070] ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
> [    0.002070] ENERGY_PERF_BIAS: View and update with x86_energy_perf_policy(8)
> [    0.002084] CPU: Physical Processor ID: 0
> [    0.002087] CPU: Processor Core ID: 0
> [    0.002094] Last level iTLB entries: 4KB 512, 2MB 0, 4MB 0
> [    0.002094] Last level dTLB entries: 4KB 512, 2MB 32, 4MB 32
> [    0.002094] tlb_flushall_shift is 0x5
> [    0.002164] SMP alternatives: switching to UP code
> [    0.025291] Freeing SMP alternatives: 24k freed
> [    0.025356] cpu 0 spinlock event irq 17
> [    0.025383] Performance Events: unsupported p6 CPU model 45 no PMU driver, software events only.
> [    0.025551] NMI watchdog: disabled (cpu0): hardware events not enabled
> [    0.025576] Brought up 1 CPUs
> [    0.028642] kworker/u:0 (14) used greatest stack depth: 5936 bytes left
> [    0.028675] Grant tables using version 2 layout.
> [    0.028691] Grant table initialized
> [    0.047616] RTC time: 165:165:165, date: 165/165/65
> [    0.047661] NET: Registered protocol family 16
> [    0.048184] dca service started, version 1.12.1
> [    0.048545] PCI: setting up Xen PCI frontend stub
> [    0.048552] PCI: pci_cache_line_size set to 64 bytes
> [    0.049543] kworker/u:0 (51) used greatest stack depth: 5472 bytes left
> [    0.054147] bio: create slab <bio-0> at 0
> [    0.054240] ACPI: Interpreter disabled.
> [    0.054288] xen/balloon: Initialising balloon driver.
> [    0.055127] xen-balloon: Initialising balloon driver.
> [    0.055127] vgaarb: loaded
> [    0.056125] usbcore: registered new interface driver usbfs
> [    0.056162] usbcore: registered new interface driver hub
> [    0.056217] usbcore: registered new device driver usb
> [    0.056425] PCI: System does not support PCI
> [    0.056431] PCI: System does not support PCI
> [    0.056617] NetLabel: Initializing
> [    0.056624] NetLabel:  domain hash size = 128
> [    0.056627] NetLabel:  protocols = UNLABELED CIPSOv4
> [    0.056642] NetLabel:  unlabeled traffic allowed by default
> [    0.056725] Switching to clocksource xen
> [    0.056795] pnp: PnP ACPI: disabled
> [    0.058698] NET: Registered protocol family 2
> [    0.059805] TCP established hash table entries: 524288 (order: 11, 8388608 bytes)
> [    0.061110] TCP bind hash table entries: 65536 (order: 8, 1048576 bytes)
> [    0.061243] TCP: Hash tables configured (established 524288 bind 65536)
> [    0.061281] TCP: reno registered
> [    0.061304] UDP hash table entries: 2048 (order: 4, 65536 bytes)
> [    0.061341] UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes)
> [    0.061425] NET: Registered protocol family 1
> [    0.061492] RPC: Registered named UNIX socket transport module.
> [    0.061498] RPC: Registered udp transport module.
> [    0.061504] RPC: Registered tcp transport module.
> [    0.061510] RPC: Registered tcp NFSv4.1 backchannel transport module.
> [    0.061518] PCI: CLS 0 bytes, default 64
> [    0.061643] Trying to unpack rootfs image as initramfs...
> [    0.382189] Freeing initrd memory: 362080k freed
> [    0.499615] platform rtc_cmos: registered platform RTC device (no PNP device found)
> [    0.499831] Machine check injector initialized
> [    0.500181] microcode: CPU0 sig=0x206d2, pf=0x1, revision=0x8000020c
> [    0.500229] microcode: Microcode Update Driver: v2.00 <tigran@aivazian.fsnet.co.uk>, Peter Oruba
> [    0.500544] audit: initializing netlink socket (disabled)
> [    0.500566] type=2000 audit(1343845740.901:1): initialized
> [    0.515227] HugeTLB registered 2 MB page size, pre-allocated 0 pages
> [    0.515358] VFS: Disk quotas dquot_6.5.2
> [    0.515386] Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
> [    0.515525] NFS: Registering the id_resolver key type
> [    0.515544] Key type id_resolver registered
> [    0.515551] Key type id_legacy registered
> [    0.515599] NTFS driver 2.1.30 [Flags: R/W].
> [    0.515706] msgmni has been set to 8021
> [    0.515765] SELinux:  Registering netfilter hooks
> [    0.516222] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 253)
> [    0.516232] io scheduler noop registered
> [    0.516236] io scheduler deadline registered
> [    0.516243] io scheduler cfq registered (default)
> [    0.516337] pci_hotplug: PCI Hot Plug PCI Core version: 0.5
> [    0.516442] ioatdma: Intel(R) QuickData Technology Driver 4.00
> [    0.532923] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
> [    0.533399] Non-volatile memory driver v1.3
> [    0.533406] Linux agpgart interface v0.103
> [    0.533588] [drm] Initialized drm 1.1.0 20060810
> [    0.535196] brd: module loaded
> [    0.535992] loop: module loaded
> [    0.536344] libphy: Fixed MDIO Bus: probed
> [    0.536351] tun: Universal TUN/TAP device driver, 1.6
> [    0.536354] tun: (C) 1999-2004 Max Krasnyansky <maxk@qualcomm.com>
> [    0.536419] ixgbevf: Intel(R) 10 Gigabit PCI Express Virtual Function Network Driver - version 2.6.0-k
> [    0.536428] ixgbevf: Copyright (c) 2009 - 2012 Intel Corporation.
> [    0.536721] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
> [    0.536729] ehci_hcd: block sizes: qh 104 qtd 96 itd 192 sitd 96
> [    0.536770] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
> [    0.536776] ohci_hcd: block sizes: ed 80 td 96
> [    0.536826] uhci_hcd: USB Universal Host Controller Interface driver
> [    0.536929] usbcore: registered new interface driver usblp
> [    0.536977] usbcore: registered new interface driver libusual
> [    0.537164] i8042: PNP: No PS/2 controller found. Probing ports directly.
> [    0.538013] i8042: No controller found
> [    0.538103] mousedev: PS/2 mouse device common for all mice
> [    0.598349] rtc_cmos rtc_cmos: rtc core: registered rtc_cmos as rtc0
> [    0.598404] rtc_cmos: probe of rtc_cmos failed with error -38
> [    0.598559] EFI Variables Facility v0.08 2004-May-17
> [    0.598701] Netfilter messages via NETLINK v0.30.
> [    0.598719] nf_conntrack version 0.5.0 (16384 buckets, 65536 max)
> [    0.598790] ctnetlink v0.93: registering with nfnetlink.
> [    0.598971] ip_tables: (C) 2000-2006 Netfilter Core Team
> [    0.599007] TCP: cubic registered
> [    0.599014] Initializing XFRM netlink socket
> [    0.599043] NET: Registered protocol family 10
> [    0.599238] ip6_tables: (C) 2000-2006 Netfilter Core Team
> [    0.599374] sit: IPv6 over IPv4 tunneling driver
> [    0.599589] NET: Registered protocol family 17
> [    0.599618] Key type dns_resolver registered
> [    0.599808] PM: Hibernation image not present or could not be loaded.
> [    0.599824] registered taskstats version 1
> [    0.599848] XENBUS: Device with no driver: device/vkbd/0
> [    0.599852] XENBUS: Device with no driver: device/vfb/0
> [    0.599856] XENBUS: Device with no driver: device/vbd/51712
> [    0.599860] XENBUS: Device with no driver: device/vif/0
> [    0.599886]   Magic number: 1:252:3141
> [    0.600271] Freeing unused kernel memory: 704k freed
> [    0.600500] Write protecting the kernel read-only data: 8192k
> [    0.602437] Freeing unused kernel memory: 132k freed
> [    0.602589] Freeing unused kernel memory: 340k freed
> init started: BusyBox v1.14.3 (2012-08-01 13:52:44 EDT)
> [    0.607489] consoletype (1056) used greatest stack depth: 5288 bytes left
> Mounting directories  [  OK  ]
> mount: mount point /proc/bus/usb does not exist
> [    0.781044] modprobe (1085) used greatest stack depth: 5048 bytes left
> mount: mount point /sys/kernel/config does not exist
> [    0.785748] core_filesystem (1057) used greatest stack depth: 4968 bytes left
> [    0.793721] input: Xen Virtual Keyboard as /devices/virtual/input/input0
> [    0.793892] input: Xen Virtual Pointer as /devices/virtual/input/input1
> [    1.010121] Initialising Xen virtual ethernet driver.
> [    1.124604] blkfront: xvda: flush diskcache: enabled
> [    1.126118]  xvda: xvda1 xvda2 xvda3 xvda4
> [    1.239316] udevd (1128): /proc/1128/oom_adj is deprecated, please use /proc/1128/oom_score_adj instead.
> udevd-work[1130]: error opening ATTR{/sys/devices/system/cpu/cpu0/online} for writing: No such file or directory
> 
> [    1.395080] ip (1408) used greatest stack depth: 3912 bytes left
> Waiting for devices [  OK  ]
> Waiting for init.pre_custom [  OK  ]
> Waiting for fb [  OK  ]
> Starting..[/dev/fb0]
> /dev/fb0: len:0
> /dev/fb0: bits/pixel32
> (7f44ddbc2000): Writting .. [800:600]
> Done!
> FATAL: Module agpgart_intel not found.
> [    1.652131] [drm] radeon kernel modesetting enabled.
> WARNING: Error inserting video (/lib/modules/3.5.0upstream-08854-g444fa66/kernel/drivers/acpi/video.ko): No such device
> WARNING: Error inserting mxm_wmi (/lib/modules/3.5.0upstream-08854-g444fa66/kernel/drivers/platform/x86/mxm-wmi.ko): No such device
> WARNING: Error inserting drm_kms_helper (/lib/modules/3.5.0upstream-08854-g444fa66/kernel/drivers/gpu/drm/drm_kms_helper.ko): No such device
> WARNING: Error inserting ttm (/lib/modules/3.5.0upstream-08854-g444fa66/kernel/drivers/gpu/drm/ttm/ttm.ko): No such device
> FATAL: Error inserting nouveau (/lib/modules/3.5.0upstream-08854-g444fa66/kernel/drivers/gpu/drm/nouveau/nouveau.ko): No such device
> [    1.660288] Console: switching to colour frame buffer device 100x37
> WARNING: Error inserting drm_kms_helper (/lib/modules/3.5.0upstream-08854-g444fa66/kernel/drivers/gpu/drm/drm_kms_helper.ko): No such device
> FATAL: Error inserting i915 (/lib/modules/3.5.0upstream-08854-g444fa66/kernel/drivers/gpu/drm/i915/i915.ko): No such device
> Starting..[/dev/fb0]
> /dev/fb0: len:0
> /dev/fb0: bits/pixel32
> (7fb8669cc000): Writting .. [800:600]
> Done!
> VGA: 0000:
> Waiting for network [  OK  ]
> Bringing up loopback interface:  [  OK  ]
> Bringing up interface eth0:  [    1.908592] BUG: unable to handle kernel NULL pointer dereference at 0000000000000010
> [    1.908643] IP: [<ffffffffa0037750>] xennet_poll+0x980/0xec0 [xen_netfront]
> [    1.908703] PGD ea1df067 PUD e8ada067 PMD 0 
> [    1.908774] Oops: 0000 [#1] SMP 
> [    1.908797] Modules linked in: fbcon tileblit font radeon bitblit softcursor ttm drm_kms_helper crc32c_intel xen_blkfront xen_netfront xen_fbfront fb_sys_fops sysimgblt sysfillrect syscopyarea xen_kbdfront xenfs xen_privcmd
> [    1.908938] CPU 0 
> [    1.908950] Pid: 2165, comm: ip Not tainted 3.5.0upstream-08854-g444fa66 #1  
> [    1.908983] RIP: e030:[<ffffffffa0037750>]  [<ffffffffa0037750>] xennet_poll+0x980/0xec0 [xen_netfront]
> [    1.909029] RSP: e02b:ffff8800ffc03db8  EFLAGS: 00010282
> [    1.909055] RAX: ffff8800ea010140 RBX: ffff8800f00e86c0 RCX: 000000000000009a
> [    1.909055] RDX: 0000000000000040 RSI: 000000000000005a RDI: ffff8800fa7dee80
> [    1.909055] RBP: ffff8800ffc03ee8 R08: ffff8800f00e86d8 R09: ffff8800ea010000
> [    1.909055] R10: dead000000200200 R11: dead000000100100 R12: ffff8800fa7dee80
> [    1.909055] R13: 000000000000005a R14: ffff8800fa7dee80 R15: 0000000000000200
> [    1.909055] FS:  00007fbafc188700(0000) GS:ffff8800ffc00000(0000) knlGS:0000000000000000
> [    1.909055] CS:  e033 DS: 0000 ES: 0000 CR0: 000000008005003b
> [    1.909055] CR2: 0000000000000010 CR3: 00000000ea108000 CR4: 0000000000002660
> [    1.909055] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> [    1.909055] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
> [    1.909055] Process ip (pid: 2165, threadinfo ffff8800ea0f2000, task ffff8800fa783040)
> [    1.909055] Stack:
> [    1.909055]  ffff8800e27e5040 ffff8800ffc03e88 ffff8800ffc03e68 ffff8800ffc03e48
> [    1.909055]  7fffffffffffffff ffff8800ffc03e00 ffff8800e27e5040 ffff8800f00e86d8
> [    1.909055]  ffff8800ffc03eb0 00000040ffffffff ffff8800f00e8000 00000000ffc03e30
> [    1.909055] Call Trace:
> [    1.909055]  <IRQ> 
> [    1.909055]  [<ffffffff81066028>] ? pvclock_clocksource_read+0x58/0xd0
> [    1.909055]  [<ffffffff81486352>] net_rx_action+0x112/0x240
> [    1.909055]  [<ffffffff8107f319>] __do_softirq+0xb9/0x190
> [    1.909055]  [<ffffffff815d8d7c>] call_softirq+0x1c/0x30
> [    1.909055]  <EOI> 
> [    1.909055]  [<ffffffff8103a435>] do_softirq+0x65/0xa0
> [    1.909055]  [<ffffffff8107f834>] local_bh_enable_ip+0x94/0xa0
> [    1.909055]  [<ffffffff815d11a4>] _raw_spin_unlock_bh+0x24/0x30
> [    1.909055]  [<ffffffffa0036d44>] xennet_open+0x54/0xe0 [xen_netfront]
> [    1.909055]  [<ffffffff81481dcf>] __dev_open+0xbf/0x120
> [    1.909055]  [<ffffffff8148022c>] __dev_change_flags+0x9c/0x180
> [    1.909055]  [<ffffffff81481cc3>] dev_change_flags+0x23/0x70
> [    1.909055]  [<ffffffff81491062>] do_setlink+0x1c2/0xa10
> [    1.909055]  [<ffffffff812b5d74>] ? nla_parse+0x34/0x110
> [    1.909055]  [<ffffffff81494005>] rtnl_newlink+0x3a5/0x5c0
> [    1.909055]  [<ffffffff812541c4>] ? selinux_capable+0x34/0x50
> [    1.909055]  [<ffffffff81250223>] ? security_capable+0x13/0x20
> [    1.909055]  [<ffffffff81491e07>] rtnetlink_rcv_msg+0x2c7/0x330
> [    1.909055]  [<ffffffff810a18b7>] ? __might_sleep+0xe7/0x100
> [    1.909055]  [<ffffffff81149a52>] ? kmem_cache_alloc_node+0x82/0x1d0
> [    1.909055]  [<ffffffff8147a00c>] ? __skb_recv_datagram+0x11c/0x2f0
> [    1.909055]  [<ffffffff81491b40>] ? rtnetlink_rcv+0x30/0x30
> [    1.909055]  [<ffffffff814a9c89>] netlink_rcv_skb+0x99/0xc0
> [    1.909055]  [<ffffffff81491b30>] rtnetlink_rcv+0x20/0x30
> [    1.909055]  [<ffffffff814a9998>] netlink_unicast+0x1a8/0x220
> [    1.909055]  [<ffffffff814aa535>] netlink_sendmsg+0x205/0x300
> [    1.909055]  [<ffffffff8146ce19>] sock_sendmsg+0xb9/0xf0
> [    1.909055]  [<ffffffff8146c51e>] ? copy_from_user+0x3e/0x50
> [    1.909055]  [<ffffffff8146c576>] ? move_addr_to_kernel+0x46/0x80
> [    1.909055]  [<ffffffff810a18b7>] ? __might_sleep+0xe7/0x100
> [    1.909055]  [<ffffffff8146dd2d>] __sys_sendmsg+0x3dd/0x400
> [    1.909055]  [<ffffffff8112c751>] ? handle_mm_fault+0x261/0x380
> [    1.909055]  [<ffffffff815d4cd0>] ? do_page_fault+0x250/0x4f0
> [    1.909055]  [<ffffffff8114a587>] ? kmem_cache_alloc+0x1a7/0x1f0
> [    1.909055]  [<ffffffff811311a4>] ? do_brk+0x1b4/0x350
> [    1.909055]  [<ffffffff8146df04>] sys_sendmsg+0x44/0x80
> [    1.909055]  [<ffffffff815d7bf9>] system_call_fastpath+0x16/0x1b
> [    1.909055] Code: 44 00 00 41 80 8c 24 a8 00 00 00 04 e9 2f ff ff ff 66 2e 0f 1f 84 00 00 00 00 00 41 8b 84 24 c8 00 00 00 49 03 84 24 d0 00 00 00 <80> 3c 25 10 00 00 00 00 48 8d 50 30 74 0f 48 83 3c 25 08 00 00 
> [    1.909055] RIP  [<ffffffffa0037750>] xennet_poll+0x980/0xec0 [xen_netfront]
> [    1.909055]  RSP <ffff8800ffc03db8>
> [    1.909055] CR2: 0000000000000010
> [    1.947298] ---[ end trace 3f4ba742dffbe90d ]---
> [    1.947824] Kernel panic - not syncing: Fatal exception in interrupt
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 12:05:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 12:05:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxGd4-0001x0-Up; Fri, 03 Aug 2012 12:05:18 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1SxGd4-0001wr-3C
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 12:05:18 +0000
Received: from [85.158.143.99:42272] by server-1.bemta-4.messagelabs.com id
	34/14-24392-D7EBB105; Fri, 03 Aug 2012 12:05:17 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-11.tower-216.messagelabs.com!1343995515!22579655!1
X-Originating-IP: [209.85.215.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27298 invoked from network); 3 Aug 2012 12:05:15 -0000
Received: from mail-ey0-f173.google.com (HELO mail-ey0-f173.google.com)
	(209.85.215.173)
	by server-11.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 12:05:15 -0000
Received: by eaah1 with SMTP id h1so170192eaa.32
	for <xen-devel@lists.xen.org>; Fri, 03 Aug 2012 05:05:15 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=6zM5FOSUZ1EQidPSAV/874EqYHTx9+oPIjAQ/K15Br8=;
	b=igPxUJr14ZSwFx5MSuZFX0gByg7JFtQYH6F8NEsSReIeiGyjnQPpQDC7hhVgX2x29C
	qVjo+SsOuI4vphkATiE61vcWGYNQjXayMypeDfbDJc8PlregDbCI1bJASjoEBIIsp1Kt
	UMHyFuw2LSDZB6k0vx2GvmKDbpb55GGwvfOPyffBBBMY3wBCkNpua1IUwj0xkVXXTavz
	AgiSKFTht9/BBN2nNPz0fXp/R0MPd03ScHICY+TFQjdCHn9hJ2V3Sn9eH066J/pmrMgx
	Zi661w43MNX1bZYncXLyBZaxRhx8uZJEsgWgU8hsqx8JnPLYQW6mPQvaLdAR5zssAG4K
	+vwg==
Received: by 10.14.175.7 with SMTP id y7mr1811245eel.29.1343995514993;
	Fri, 03 Aug 2012 05:05:14 -0700 (PDT)
Received: from [192.168.1.68] (host86-178-46-238.range86-178.btcentralplus.com.
	[86.178.46.238])
	by mx.google.com with ESMTPS id v5sm25252262eel.6.2012.08.03.05.05.12
	(version=SSLv3 cipher=OTHER); Fri, 03 Aug 2012 05:05:13 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.32.0.111121
Date: Fri, 03 Aug 2012 13:05:07 +0100
From: Keir Fraser <keir.xen@gmail.com>
To: Jan Beulich <JBeulich@suse.com>,
	xen-devel <xen-devel@lists.xen.org>
Message-ID: <CC417D03.3A67E%keir.xen@gmail.com>
Thread-Topic: [Xen-devel] [PATCH] x86: fix wait code asm() constraints
Thread-Index: Ac1xcDj90K7DGDNe50S7BP6+ti2jOg==
In-Reply-To: <501BAA990200007800092661@nat28.tlf.novell.com>
Mime-version: 1.0
Subject: Re: [Xen-devel] [PATCH] x86: fix wait code asm() constraints
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 03/08/2012 09:40, "Jan Beulich" <JBeulich@suse.com> wrote:

> In __prepare_to_wait(), properly mark early-clobbered registers. By
> doing so, we at once eliminate the need to save/restore rCX and rDI.

Okay, this patch has my blessing as is. But please add a remark that the
existing constraints fall foul of a strict reading of the gcc
specification, yet are actually okay in practice (being very
straightforward, no memory constraints, etc.). I initially thought you had
found a bug in practice, but that was not the case.

> In check_wakeup_from_wait(), make the current constraints match by
> removing the code that actually alters registers. By adjusting the
> resume address in __prepare_to_wait(), we can simply re-use the copying
> operation there (rather than doing a second pointless copy in the
> opposite direction after branching to the resume point), which at once
> eliminates the need for re-loading rCX and rDI inside the asm().
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Keir Fraser <keir@xen.org>

> --- a/xen/common/wait.c
> +++ b/xen/common/wait.c
> @@ -126,6 +126,7 @@ static void __prepare_to_wait(struct wai
>  {
>      char *cpu_info = (char *)get_cpu_info();
>      struct vcpu *curr = current;
> +    unsigned long dummy;
>  
>      ASSERT(wqv->esp == 0);
>  
> @@ -140,27 +141,27 @@ static void __prepare_to_wait(struct wai
>  
>      asm volatile (
>  #ifdef CONFIG_X86_64
> -        "push %%rax; push %%rbx; push %%rcx; push %%rdx; push %%rdi; "
> +        "push %%rax; push %%rbx; push %%rdx; "
>          "push %%rbp; push %%r8; push %%r9; push %%r10; push %%r11; "
>          "push %%r12; push %%r13; push %%r14; push %%r15; call 1f; "
> -        "1: mov 80(%%rsp),%%rdi; mov 96(%%rsp),%%rcx; mov %%rsp,%%rsi; "
> +        "1: mov %%rsp,%%rsi; addq $2f-1b,(%%rsp); "
>          "sub %%rsi,%%rcx; cmp %3,%%rcx; jbe 2f; "
>          "xor %%esi,%%esi; jmp 3f; "
>          "2: rep movsb; mov %%rsp,%%rsi; 3: pop %%rax; "
>          "pop %%r15; pop %%r14; pop %%r13; pop %%r12; "
>          "pop %%r11; pop %%r10; pop %%r9; pop %%r8; "
> -        "pop %%rbp; pop %%rdi; pop %%rdx; pop %%rcx; pop %%rbx; pop %%rax"
> +        "pop %%rbp; pop %%rdx; pop %%rbx; pop %%rax"
>  #else
> -        "push %%eax; push %%ebx; push %%ecx; push %%edx; push %%edi; "
> +        "push %%eax; push %%ebx; push %%edx; "
>          "push %%ebp; call 1f; "
> -        "1: mov 8(%%esp),%%edi; mov 16(%%esp),%%ecx; mov %%esp,%%esi; "
> +        "1: mov %%esp,%%esi; addl $2f-1b,(%%esp); "
>          "sub %%esi,%%ecx; cmp %3,%%ecx; jbe 2f; "
>          "xor %%esi,%%esi; jmp 3f; "
>          "2: rep movsb; mov %%esp,%%esi; 3: pop %%eax; "
> -        "pop %%ebp; pop %%edi; pop %%edx; pop %%ecx; pop %%ebx; pop %%eax"
> +        "pop %%ebp; pop %%edx; pop %%ebx; pop %%eax"
>  #endif
> -        : "=S" (wqv->esp)
> -        : "c" (cpu_info), "D" (wqv->stack), "i" (PAGE_SIZE)
> +        : "=&S" (wqv->esp), "=&c" (dummy), "=&D" (dummy)
> +        : "i" (PAGE_SIZE), "1" (cpu_info), "2" (wqv->stack)
>          : "memory" );
>  
>      if ( unlikely(wqv->esp == 0) )
> @@ -200,7 +201,7 @@ void check_wakeup_from_wait(void)
>      }
>  
>      asm volatile (
> -        "mov %1,%%"__OP"sp; rep movsb; jmp *(%%"__OP"sp)"
> +        "mov %1,%%"__OP"sp; jmp *(%0)"
>          : : "S" (wqv->stack), "D" (wqv->esp),
>          "c" ((char *)get_cpu_info() - (char *)wqv->esp)
>          : "memory" );
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 12:05:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 12:05:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxGd4-0001x0-Up; Fri, 03 Aug 2012 12:05:18 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1SxGd4-0001wr-3C
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 12:05:18 +0000
Received: from [85.158.143.99:42272] by server-1.bemta-4.messagelabs.com id
	34/14-24392-D7EBB105; Fri, 03 Aug 2012 12:05:17 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-11.tower-216.messagelabs.com!1343995515!22579655!1
X-Originating-IP: [209.85.215.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27298 invoked from network); 3 Aug 2012 12:05:15 -0000
Received: from mail-ey0-f173.google.com (HELO mail-ey0-f173.google.com)
	(209.85.215.173)
	by server-11.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 12:05:15 -0000
Received: by eaah1 with SMTP id h1so170192eaa.32
	for <xen-devel@lists.xen.org>; Fri, 03 Aug 2012 05:05:15 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=6zM5FOSUZ1EQidPSAV/874EqYHTx9+oPIjAQ/K15Br8=;
	b=igPxUJr14ZSwFx5MSuZFX0gByg7JFtQYH6F8NEsSReIeiGyjnQPpQDC7hhVgX2x29C
	qVjo+SsOuI4vphkATiE61vcWGYNQjXayMypeDfbDJc8PlregDbCI1bJASjoEBIIsp1Kt
	UMHyFuw2LSDZB6k0vx2GvmKDbpb55GGwvfOPyffBBBMY3wBCkNpua1IUwj0xkVXXTavz
	AgiSKFTht9/BBN2nNPz0fXp/R0MPd03ScHICY+TFQjdCHn9hJ2V3Sn9eH066J/pmrMgx
	Zi661w43MNX1bZYncXLyBZaxRhx8uZJEsgWgU8hsqx8JnPLYQW6mPQvaLdAR5zssAG4K
	+vwg==
Received: by 10.14.175.7 with SMTP id y7mr1811245eel.29.1343995514993;
	Fri, 03 Aug 2012 05:05:14 -0700 (PDT)
Received: from [192.168.1.68] (host86-178-46-238.range86-178.btcentralplus.com.
	[86.178.46.238])
	by mx.google.com with ESMTPS id v5sm25252262eel.6.2012.08.03.05.05.12
	(version=SSLv3 cipher=OTHER); Fri, 03 Aug 2012 05:05:13 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.32.0.111121
Date: Fri, 03 Aug 2012 13:05:07 +0100
From: Keir Fraser <keir.xen@gmail.com>
To: Jan Beulich <JBeulich@suse.com>,
	xen-devel <xen-devel@lists.xen.org>
Message-ID: <CC417D03.3A67E%keir.xen@gmail.com>
Thread-Topic: [Xen-devel] [PATCH] x86: fix wait code asm() constraints
Thread-Index: Ac1xcDj90K7DGDNe50S7BP6+ti2jOg==
In-Reply-To: <501BAA990200007800092661@nat28.tlf.novell.com>
Mime-version: 1.0
Subject: Re: [Xen-devel] [PATCH] x86: fix wait code asm() constraints
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 03/08/2012 09:40, "Jan Beulich" <JBeulich@suse.com> wrote:

> In __prepare_to_wait(), properly mark early clobbered registers. By
> doing so, we at once eliminate the need to save/restore rCX and rDI.

Okay, this patch has my blessing as is. But please add a remark that the
existing constraints are falling foul of a strict reading of the gcc
specification, and are actually okay in practice (being very
straightforward, no memory constraints, etc). I really thought you had found
a bug in practice, but this was not the case.

> In check_wakeup_from_wait(), make the current constraints match by
> removing the code that actually alters registers. By adjusting the
> resume address in __prepare_to_wait(), we can simply re-use the copying
> operation there (rather than doing a second pointless copy in the
> opposite direction after branching to the resume point), which at once
> eliminates the need for re-loading rCX and rDI inside the asm().
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Keir Fraser <keir@xen.org>

> --- a/xen/common/wait.c
> +++ b/xen/common/wait.c
> @@ -126,6 +126,7 @@ static void __prepare_to_wait(struct wai
>  {
>      char *cpu_info = (char *)get_cpu_info();
>      struct vcpu *curr = current;
> +    unsigned long dummy;
>  
>      ASSERT(wqv->esp == 0);
>  
> @@ -140,27 +141,27 @@ static void __prepare_to_wait(struct wai
>  
>      asm volatile (
>  #ifdef CONFIG_X86_64
> -        "push %%rax; push %%rbx; push %%rcx; push %%rdx; push %%rdi; "
> +        "push %%rax; push %%rbx; push %%rdx; "
>          "push %%rbp; push %%r8; push %%r9; push %%r10; push %%r11; "
>          "push %%r12; push %%r13; push %%r14; push %%r15; call 1f; "
> -        "1: mov 80(%%rsp),%%rdi; mov 96(%%rsp),%%rcx; mov %%rsp,%%rsi; "
> +        "1: mov %%rsp,%%rsi; addq $2f-1b,(%%rsp); "
>          "sub %%rsi,%%rcx; cmp %3,%%rcx; jbe 2f; "
>          "xor %%esi,%%esi; jmp 3f; "
>          "2: rep movsb; mov %%rsp,%%rsi; 3: pop %%rax; "
>          "pop %%r15; pop %%r14; pop %%r13; pop %%r12; "
>          "pop %%r11; pop %%r10; pop %%r9; pop %%r8; "
> -        "pop %%rbp; pop %%rdi; pop %%rdx; pop %%rcx; pop %%rbx; pop %%rax"
> +        "pop %%rbp; pop %%rdx; pop %%rbx; pop %%rax"
>  #else
> -        "push %%eax; push %%ebx; push %%ecx; push %%edx; push %%edi; "
> +        "push %%eax; push %%ebx; push %%edx; "
>          "push %%ebp; call 1f; "
> -        "1: mov 8(%%esp),%%edi; mov 16(%%esp),%%ecx; mov %%esp,%%esi; "
> +        "1: mov %%esp,%%esi; addl $2f-1b,(%%esp); "
>          "sub %%esi,%%ecx; cmp %3,%%ecx; jbe 2f; "
>          "xor %%esi,%%esi; jmp 3f; "
>          "2: rep movsb; mov %%esp,%%esi; 3: pop %%eax; "
> -        "pop %%ebp; pop %%edi; pop %%edx; pop %%ecx; pop %%ebx; pop %%eax"
> +        "pop %%ebp; pop %%edx; pop %%ebx; pop %%eax"
>  #endif
> -        : "=S" (wqv->esp)
> -        : "c" (cpu_info), "D" (wqv->stack), "i" (PAGE_SIZE)
> +        : "=&S" (wqv->esp), "=&c" (dummy), "=&D" (dummy)
> +        : "i" (PAGE_SIZE), "1" (cpu_info), "2" (wqv->stack)
>          : "memory" );
>  
>      if ( unlikely(wqv->esp == 0) )
> @@ -200,7 +201,7 @@ void check_wakeup_from_wait(void)
>      }
>  
>      asm volatile (
> -        "mov %1,%%"__OP"sp; rep movsb; jmp *(%%"__OP"sp)"
> +        "mov %1,%%"__OP"sp; jmp *(%0)"
>          : : "S" (wqv->stack), "D" (wqv->esp),
>          "c" ((char *)get_cpu_info() - (char *)wqv->esp)
>          : "memory" );
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 12:09:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 12:09:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxGga-0002A8-IZ; Fri, 03 Aug 2012 12:08:56 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1SxGgZ-00029y-CP
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 12:08:55 +0000
Received: from [85.158.139.83:13043] by server-9.bemta-5.messagelabs.com id
	76/98-01069-65FBB105; Fri, 03 Aug 2012 12:08:54 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-2.tower-182.messagelabs.com!1343995733!30076561!1
X-Originating-IP: [209.85.215.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29066 invoked from network); 3 Aug 2012 12:08:53 -0000
Received: from mail-ey0-f173.google.com (HELO mail-ey0-f173.google.com)
	(209.85.215.173)
	by server-2.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 12:08:53 -0000
Received: by eaah1 with SMTP id h1so171386eaa.32
	for <xen-devel@lists.xen.org>; Fri, 03 Aug 2012 05:08:53 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:cc:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=7Y60XOQYpMY4JdXzw+kHXsZPRxMpOkQpGBddd9gpTyA=;
	b=Nx2Z4tIkjOilN/bQhljCfYyG4Ns+b8mZQf6KlrWbnryEU9f7VdfeiRhoAnh0bdZvmn
	MbDFlaP/h+8jRP+CypEfKadmMlWIWyeTJFOU/h9M0HkoDX7DbKHC7B2eSRue9CggTvvM
	3zI8ZIf+n6dtDiNUtBYRlwzoIoG7gAxszkK9reFs0BSX7BUb+keRi0jyClJQxDtfvPwX
	kDlNRbat3Io9yrCaMVGbfIOybLw5i00sw5CKQm/LXn5ZTtFyDtYfdlXO7BGpQ36TTmf1
	GAc98kajp2/PTHWUnUZNBNAswvO8sikzzoR3LMbC3ISJV9uEqCtFIBnh/rT+JxA2kNAP
	pW6g==
Received: by 10.14.203.70 with SMTP id e46mr1386375eeo.2.1343995733543;
	Fri, 03 Aug 2012 05:08:53 -0700 (PDT)
Received: from [192.168.1.68] (host86-178-46-238.range86-178.btcentralplus.com.
	[86.178.46.238])
	by mx.google.com with ESMTPS id j4sm25245186eeo.11.2012.08.03.05.08.51
	(version=SSLv3 cipher=OTHER); Fri, 03 Aug 2012 05:08:52 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.32.0.111121
Date: Fri, 03 Aug 2012 13:08:43 +0100
From: Keir Fraser <keir.xen@gmail.com>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <CC417DDB.3A680%keir.xen@gmail.com>
Thread-Topic: [Xen-devel] [PATCH] x86: fix wait code asm() constraints
Thread-Index: Ac1xcLm89wyAfWE0V0CPs2ReDjOCcQ==
In-Reply-To: <501BD3ED02000078000927F6@nat28.tlf.novell.com>
Mime-version: 1.0
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] x86: fix wait code asm() constraints
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 03/08/2012 12:36, "Jan Beulich" <JBeulich@suse.com> wrote:

>> I'm confused. The registers have the same values at the start and the end of
>> the asm statement. How can it possibly matter, even in theory, whether they
>> temporarily change in the middle? Is this fairly strong assumption written
>> down in the gcc documentation anywhere?
> 
> It's in the specification of the & modifier:
> 
> "'&' Means (in a particular alternative) that this operand is an
>  earlyclobber operand, which is modified before the instruction
>  is finished using the input operands. Therefore, this operand
>  may not lie in a register that is used as an input operand or as
>  part of any memory address."
> 
> Of course, here we're not having any other operands, which
> is why at least at present getting this wrong does no harm.

Yep, okay, that makes sense. Especially the use of an input operand to form
a memory address, although of course that cannot happen in our specific case
here. I have acked your patch, although I'd like an update to the patch
comment.

 Thanks,
 -- Keir



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 12:15:07 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 12:15:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxGmL-0002UW-Hq; Fri, 03 Aug 2012 12:14:53 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SxGmK-0002UR-Pr
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 12:14:52 +0000
Received: from [85.158.138.51:26690] by server-3.bemta-3.messagelabs.com id
	FA/DA-08301-CB0CB105; Fri, 03 Aug 2012 12:14:52 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-9.tower-174.messagelabs.com!1343996091!30303193!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17172 invoked from network); 3 Aug 2012 12:14:51 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-9.tower-174.messagelabs.com with SMTP;
	3 Aug 2012 12:14:51 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 03 Aug 2012 13:14:51 +0100
Message-Id: <501BDD05020000780009284B@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 03 Aug 2012 13:15:33 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Keir Fraser" <keir.xen@gmail.com>
References: <501BD3ED02000078000927F6@nat28.tlf.novell.com>
	<CC417DDB.3A680%keir.xen@gmail.com>
In-Reply-To: <CC417DDB.3A680%keir.xen@gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] x86: fix wait code asm() constraints
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 03.08.12 at 14:08, Keir Fraser <keir.xen@gmail.com> wrote:
> On 03/08/2012 12:36, "Jan Beulich" <JBeulich@suse.com> wrote:
> 
>>> I'm confused. The registers have the same values at the start and the end of
>>> the asm statement. How can it possibly matter, even in theory, whether they
>>> temporarily change in the middle? Is this fairly strong assumption written
>>> down in the gcc documentation anywhere?
>> 
>> It's in the specification of the & modifier:
>> 
>> "'&' Means (in a particular alternative) that this operand is an
>>  earlyclobber operand, which is modified before the instruction
>>  is finished using the input operands. Therefore, this operand
>>  may not lie in a register that is used as an input operand or as
>>  part of any memory address."
>> 
>> Of course, here we're not having any other operands, which
>> is why at least at present getting this wrong does no harm.
> 
> Yep, okay, that makes sense. Especially the use of an input operand to form
> a memory address, although of course that cannot happen in our specific case
> here. I have acked your patch, although I'd like an update to the patch
> comment.

How about this as an added initial paragraph?

This fixes theoretical issues with those constraints - operands that
get clobbered before consuming all input operands must be marked so
according to the gcc documentation. Beyond that, the change is merely
code improvement, not a bug fix.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 12:24:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 12:24:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxGvG-0002fI-IP; Fri, 03 Aug 2012 12:24:06 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andre.Przywara@amd.com>) id 1SxGvF-0002fD-CL
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 12:24:05 +0000
Received: from [85.158.138.51:5878] by server-10.bemta-3.messagelabs.com id
	59/C5-21993-4E2CB105; Fri, 03 Aug 2012 12:24:04 +0000
X-Env-Sender: Andre.Przywara@amd.com
X-Msg-Ref: server-2.tower-174.messagelabs.com!1343996643!30262094!1
X-Originating-IP: [213.199.154.139]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3539 invoked from network); 3 Aug 2012 12:24:03 -0000
Received: from db3ehsobe001.messaging.microsoft.com (HELO
	db3outboundpool.messaging.microsoft.com) (213.199.154.139)
	by server-2.tower-174.messagelabs.com with AES128-SHA encrypted SMTP;
	3 Aug 2012 12:24:03 -0000
Received: from mail53-db3-R.bigfish.com (10.3.81.237) by
	DB3EHSOBE005.bigfish.com (10.3.84.25) with Microsoft SMTP Server id
	14.1.225.23; Fri, 3 Aug 2012 12:24:03 +0000
Received: from mail53-db3 (localhost [127.0.0.1])	by mail53-db3-R.bigfish.com
	(Postfix) with ESMTP id 63A601803FB;
	Fri,  3 Aug 2012 12:24:03 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:163.181.249.109; KIP:(null); UIP:(null);
	IPV:NLI; H:ausb3twp02.amd.com; RD:none; EFVD:NLI
X-SpamScore: 0
X-BigFish: VPS0(zzzz1202hzzz2dh668h839hd25he5bhf0ah107ah)
Received: from mail53-db3 (localhost.localdomain [127.0.0.1]) by mail53-db3
	(MessageSwitch) id 1343996641303343_30372;
	Fri,  3 Aug 2012 12:24:01 +0000 (UTC)
Received: from DB3EHSMHS008.bigfish.com (unknown [10.3.81.242])	by
	mail53-db3.bigfish.com (Postfix) with ESMTP id 3E0C1100048;
	Fri,  3 Aug 2012 12:24:01 +0000 (UTC)
Received: from ausb3twp02.amd.com (163.181.249.109) by
	DB3EHSMHS008.bigfish.com (10.3.87.108) with Microsoft SMTP Server id
	14.1.225.23; Fri, 3 Aug 2012 12:24:00 +0000
X-WSS-ID: 0M86IFW-02-43U-02
X-M-MSG: 
Received: from sausexedgep02.amd.com (sausexedgep02-ext.amd.com
	[163.181.249.73])	(using TLSv1 with cipher AES128-SHA (128/128
	bits))	(No
	client certificate requested)	by ausb3twp02.amd.com (Axway MailGate
	3.8.1)
	with ESMTP id 27512C80E8;	Fri,  3 Aug 2012 07:23:56 -0500 (CDT)
Received: from sausexhtp02.amd.com (163.181.3.152) by sausexedgep02.amd.com
	(163.181.36.59) with Microsoft SMTP Server (TLS) id 8.3.192.1;
	Fri, 3 Aug 2012 07:24:18 -0500
Received: from storexhtp01.amd.com (172.24.4.3) by sausexhtp02.amd.com
	(163.181.3.152) with Microsoft SMTP Server (TLS) id 8.3.213.0;
	Fri, 3 Aug 2012 07:23:58 -0500
Received: from gwo.osrc.amd.com (165.204.16.204) by storexhtp01.amd.com
	(172.24.4.3) with Microsoft SMTP Server id 8.3.213.0; Fri, 3 Aug 2012
	08:23:57 -0400
Received: from mail.osrc.amd.com (aluminium.osrc.amd.com [165.204.15.141])	by
	gwo.osrc.amd.com (Postfix) with ESMTP id D3FD049C20C;
	Fri,  3 Aug 2012 13:23:55 +0100 (BST)
Received: from [165.204.15.38] (wanderer.osrc.amd.com [165.204.15.38])	by
	mail.osrc.amd.com (Postfix) with ESMTPS id B1029594037; Fri,  3 Aug 2012
	14:23:55 +0200 (CEST)
Message-ID: <501BC20F.3040205@amd.com>
Date: Fri, 3 Aug 2012 14:20:31 +0200
From: Andre Przywara <andre.przywara@amd.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:13.0) Gecko/20120615 Thunderbird/13.0.1
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, Jeremy Fitzhardinge
	<jeremy@goop.org>
X-OriginatorOrg: amd.com
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: [Xen-devel] Dom0 crash with old style AMD NUMA detection
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi,

we see Dom0 crashes because the kernel detects the NUMA topology not via 
ACPI, but directly from the northbridge (CONFIG_AMD_NUMA).

This detects the actual NUMA configuration of the physical machine, but 
then crashes because of the mismatch with Dom0's virtual memory layout. 
A variation on the theme: Dom0 sees what it's not supposed to see.

This happens with said config option enabled and on a machine where 
this scanning is still used (K8 and Fam10h, not Bulldozer class).

We then get this dump:
[    0.000000] NUMA: Warning: node ids are out of bound, from=-1 to=-1
distance=10
[    0.000000] Scanning NUMA topology in Northbridge 24
[    0.000000] Number of physical nodes 4
[    0.000000] Node 0 MemBase 0000000000000000 Limit 0000000040000000
[    0.000000] Node 1 MemBase 0000000040000000 Limit 0000000138000000
[    0.000000] Node 2 MemBase 0000000138000000 Limit 00000001f8000000
[    0.000000] Node 3 MemBase 00000001f8000000 Limit 0000000238000000
[    0.000000] Initmem setup node 0 0000000000000000-0000000040000000
[    0.000000]   NODE_DATA [000000003ffd9000 - 000000003fffffff]
[    0.000000] Initmem setup node 1 0000000040000000-0000000138000000
[    0.000000]   NODE_DATA [0000000137fd9000 - 0000000137ffffff]
[    0.000000] Initmem setup node 2 0000000138000000-00000001f8000000
[    0.000000]   NODE_DATA [00000001f095e000 - 00000001f0984fff]
[    0.000000] Initmem setup node 3 00000001f8000000-0000000238000000
[    0.000000] Cannot find 159744 bytes in node 3
[    0.000000] BUG: unable to handle kernel NULL pointer dereference at 
(null)
[    0.000000] IP: [<ffffffff81d220e6>] __alloc_bootmem_node+0x43/0x96
[    0.000000] PGD 0
[    0.000000] Oops: 0000 [#1] SMP
[    0.000000] CPU 0
[    0.000000] Modules linked in:
[    0.000000]
[    0.000000] Pid: 0, comm: swapper Not tainted 3.3.6 #1 AMD Dinar/Dinar
[    0.000000] RIP: e030:[<ffffffff81d220e6>]  [<ffffffff81d220e6>] 
__alloc_bootmem_node+0x43/0x96
[    0.000000] RSP: e02b:ffffffff81c01de8  EFLAGS: 00010046
[    0.000000] RAX: 0000000000000000 RBX: 00000000000000c0 RCX: 
0000000000000000
[    0.000000] RDX: 0000000000000040 RSI: 00000000000000c0 RDI: 
0000000000000000
[    0.000000] RBP: ffffffff81c01e08 R08: 0000000000000000 R09: 
0000000000000000
[    0.000000] R10: 0000000000098000 R11: 0000000000000000 R12: 
0000000000000000
[    0.000000] R13: 0000000000000000 R14: 0000000000000040 R15: 
0000000000000003
[    0.000000] FS:  0000000000000000(0000) GS:ffffffff81ced000(0000) 
knlGS:0000000000000000
[    0.000000] CS:  e033 DS: 0000 ES: 0000 CR0: 0000000080050033
[    0.000000] CR2: 0000000000000000 CR3: 0000000001c05000 CR4: 
0000000000000660
[    0.000000] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 
0000000000000000
[    0.000000] DR3: 0000000000000000 DR6: 0000000000000000 DR7: 
0000000000000000
[    0.000000] Process swapper (pid: 0, threadinfo ffffffff81c00000, 
task ffffffff81c0d020)
[    0.000000] Stack:
[    0.000000]  00000000000000c0 0000000000000003 0000000000000000 
000000000000003f
[    0.000000]  ffffffff81c01e68 ffffffff81d23024 0000000000400000 
0000000000000002
[    0.000000]  0000000000080000 ffff8801f055e000 ffff8801f055e1f8 
0000000000000000
[    0.000000] Call Trace:
[    0.000000]  [<ffffffff81d23024>] 
sparse_early_usemaps_alloc_node+0x64/0x178
[    0.000000]  [<ffffffff81d23348>] sparse_init+0xe4/0x25a
[    0.000000]  [<ffffffff81d16840>] paging_init+0x13/0x22
[    0.000000]  [<ffffffff81d07fbb>] setup_arch+0x9c6/0xa9b
[    0.000000]  [<ffffffff81683954>] ? printk+0x3c/0x3e
[    0.000000]  [<ffffffff81d01a38>] start_kernel+0xe5/0x468
[    0.000000]  [<ffffffff81d012cf>] x86_64_start_reservations+0xba/0xc1
[    0.000000]  [<ffffffff81007153>] ? xen_setup_runstate_info+0x2c/0x36
[    0.000000]  [<ffffffff81d050ee>] xen_start_kernel+0x565/0x56c
[    0.000000] Code: 79 bc 3e ff 85 c0 74 23 80 3d 19 e9 21 00 00 75 59 
be 2a
01 00 00 48 c7 c7 d0 55 a8 81 e8 b6 dc 31 ff c6 05 ff e8 21 00 01 eb 3f 
<41> 8b
bc 24 60 60 02 00 49 83 c8 ff 4c 89 e9 4c 89 f2 48 89 de
[    0.000000] RIP  [<ffffffff81d220e6>] __alloc_bootmem_node+0x43/0x96
[    0.000000]  RSP <ffffffff81c01de8>
[    0.000000] CR2: 0000000000000000
[    0.000000] ---[ end trace a7919e7f17c0a725 ]---
[    0.000000] Kernel panic - not syncing: Attempted to kill the idle task!
(XEN) Domain 0 crashed: 'noreboot' set - not rebooting.



The obvious solution would be to explicitly deny northbridge scanning 
when running as Dom0, though I am not sure how to implement this without 
upsetting the other kernel folks with "that crappy Xen thing" again ;-)
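A minimal sketch of that gating, purely illustrative (the helper names here, such as running_as_pv_domain, are hypothetical and not real kernel API; the real code would presumably check something like xen_pv_domain() before invoking the scan):

```c
#include <stdbool.h>

/* Hypothetical sketch: gate the old-style AMD northbridge NUMA scan on
 * whether we are running as a paravirtualized (Xen) domain.  The scan
 * only applies to K8/Fam10h parts, and must be skipped under Dom0, whose
 * pseudo-physical memory does not match the northbridge-reported node
 * ranges. */

enum cpu_family { FAM_K8 = 0x0f, FAM_10H = 0x10, FAM_BULLDOZER = 0x15 };

bool should_scan_amd_northbridge(bool running_as_pv_domain,
                                 enum cpu_family family)
{
	if (running_as_pv_domain)
		return false;	/* Dom0 must not see the host NUMA layout */
	return family == FAM_K8 || family == FAM_10H;
}
```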

Could someone propose a fix for this? (I am out of office for the next 
two weeks.)

Regards,
Andre.

-- 
Andre Przywara
AMD-Operating System Research Center (OSRC), Dresden, Germany


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 12:36:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 12:36:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxH7S-00036A-HL; Fri, 03 Aug 2012 12:36:42 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <darnok@68k.org>) id 1SxH7R-000363-Ja
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 12:36:41 +0000
Received: from [85.158.138.51:26881] by server-2.bemta-3.messagelabs.com id
	CB/50-00359-8D5CB105; Fri, 03 Aug 2012 12:36:40 +0000
X-Env-Sender: darnok@68k.org
X-Msg-Ref: server-11.tower-174.messagelabs.com!1343997396!30243579!1
X-Originating-IP: [206.212.254.10]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15007 invoked from network); 3 Aug 2012 12:36:37 -0000
Received: from andromeda.dapyr.net (HELO andromeda.dapyr.net) (206.212.254.10)
	by server-11.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 3 Aug 2012 12:36:37 -0000
Received: from andromeda.dapyr.net (darnok@localhost [127.0.0.1])
	by andromeda.dapyr.net (8.13.4/8.13.4/Debian-3sarge3) with ESMTP id
	q73CaT0F012048
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NOT);
	Fri, 3 Aug 2012 08:36:29 -0400
Received: (from darnok@localhost)
	by andromeda.dapyr.net (8.13.4/8.13.4/Submit) id q73CaSIo012046;
	Fri, 3 Aug 2012 08:36:28 -0400
Date: Fri, 3 Aug 2012 08:36:28 -0400
From: Konrad Rzeszutek Wilk <konrad@darnok.org>
To: Andre Przywara <andre.przywara@amd.com>
Message-ID: <20120803123628.GB10670@andromeda.dapyr.net>
References: <501BC20F.3040205@amd.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <501BC20F.3040205@amd.com>
User-Agent: Mutt/1.5.9i
Cc: Jeremy Fitzhardinge <jeremy@goop.org>, xen-devel <xen-devel@lists.xen.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] Dom0 crash with old style AMD NUMA detection
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Aug 03, 2012 at 02:20:31PM +0200, Andre Przywara wrote:
> Hi,
> 
> we see Dom0 crashes due to the kernel detecting the NUMA topology not by 
> ACPI, but directly from the northbridge (CONFIG_AMD_NUMA).
> 
> This will detect the actual NUMA config of the physical machine, but 
> will crash about the mismatch with Dom0's virtual memory. Variation of 
> the theme: Dom0 sees what it's not supposed to see.
> 
> This happens with the said config option enabled and on a machine where 
> this scanning is still enabled (K8 and Fam10h, not Bulldozer class)
> 
> We have this dump then:
> [    0.000000] NUMA: Warning: node ids are out of bound, from=-1 to=-1
> distance=10
> [    0.000000] Scanning NUMA topology in Northbridge 24
> [    0.000000] Number of physical nodes 4
> [    0.000000] Node 0 MemBase 0000000000000000 Limit 0000000040000000
> [    0.000000] Node 1 MemBase 0000000040000000 Limit 0000000138000000
> [    0.000000] Node 2 MemBase 0000000138000000 Limit 00000001f8000000
> [    0.000000] Node 3 MemBase 00000001f8000000 Limit 0000000238000000
> [    0.000000] Initmem setup node 0 0000000000000000-0000000040000000
> [    0.000000]   NODE_DATA [000000003ffd9000 - 000000003fffffff]
> [    0.000000] Initmem setup node 1 0000000040000000-0000000138000000
> [    0.000000]   NODE_DATA [0000000137fd9000 - 0000000137ffffff]
> [    0.000000] Initmem setup node 2 0000000138000000-00000001f8000000
> [    0.000000]   NODE_DATA [00000001f095e000 - 00000001f0984fff]
> [    0.000000] Initmem setup node 3 00000001f8000000-0000000238000000
> [    0.000000] Cannot find 159744 bytes in node 3
> [    0.000000] BUG: unable to handle kernel NULL pointer dereference at 
> (null)
> [    0.000000] IP: [<ffffffff81d220e6>] __alloc_bootmem_node+0x43/0x96
> [    0.000000] PGD 0
> [    0.000000] Oops: 0000 [#1] SMP
> [    0.000000] CPU 0
> [    0.000000] Modules linked in:
> [    0.000000]
> [    0.000000] Pid: 0, comm: swapper Not tainted 3.3.6 #1 AMD Dinar/Dinar
> [    0.000000] RIP: e030:[<ffffffff81d220e6>]  [<ffffffff81d220e6>] 
> __alloc_bootmem_node+0x43/0x96
> [    0.000000] RSP: e02b:ffffffff81c01de8  EFLAGS: 00010046
> [    0.000000] RAX: 0000000000000000 RBX: 00000000000000c0 RCX: 
> 0000000000000000
> [    0.000000] RDX: 0000000000000040 RSI: 00000000000000c0 RDI: 
> 0000000000000000
> [    0.000000] RBP: ffffffff81c01e08 R08: 0000000000000000 R09: 
> 0000000000000000
> [    0.000000] R10: 0000000000098000 R11: 0000000000000000 R12: 
> 0000000000000000
> [    0.000000] R13: 0000000000000000 R14: 0000000000000040 R15: 
> 0000000000000003
> [    0.000000] FS:  0000000000000000(0000) GS:ffffffff81ced000(0000) 
> knlGS:0000000000000000
> [    0.000000] CS:  e033 DS: 0000 ES: 0000 CR0: 0000000080050033
> [    0.000000] CR2: 0000000000000000 CR3: 0000000001c05000 CR4: 
> 0000000000000660
> [    0.000000] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 
> 0000000000000000
> [    0.000000] DR3: 0000000000000000 DR6: 0000000000000000 DR7: 
> 0000000000000000
> [    0.000000] Process swapper (pid: 0, threadinfo ffffffff81c00000, 
> task ffffffff81c0d020)
> [    0.000000] Stack:
> [    0.000000]  00000000000000c0 0000000000000003 0000000000000000 
> 000000000000003f
> [    0.000000]  ffffffff81c01e68 ffffffff81d23024 0000000000400000 
> 0000000000000002
> [    0.000000]  0000000000080000 ffff8801f055e000 ffff8801f055e1f8 
> 0000000000000000
> [    0.000000] Call Trace:
> [    0.000000]  [<ffffffff81d23024>] 
> sparse_early_usemaps_alloc_node+0x64/0x178
> [    0.000000]  [<ffffffff81d23348>] sparse_init+0xe4/0x25a
> [    0.000000]  [<ffffffff81d16840>] paging_init+0x13/0x22
> [    0.000000]  [<ffffffff81d07fbb>] setup_arch+0x9c6/0xa9b
> [    0.000000]  [<ffffffff81683954>] ? printk+0x3c/0x3e
> [    0.000000]  [<ffffffff81d01a38>] start_kernel+0xe5/0x468
> [    0.000000]  [<ffffffff81d012cf>] x86_64_start_reservations+0xba/0xc1
> [    0.000000]  [<ffffffff81007153>] ? xen_setup_runstate_info+0x2c/0x36
> [    0.000000]  [<ffffffff81d050ee>] xen_start_kernel+0x565/0x56c
> [    0.000000] Code: 79 bc 3e ff 85 c0 74 23 80 3d 19 e9 21 00 00 75 59 
> be 2a
> 01 00 00 48 c7 c7 d0 55 a8 81 e8 b6 dc 31 ff c6 05 ff e8 21 00 01 eb 3f 
> <41> 8b
> bc 24 60 60 02 00 49 83 c8 ff 4c 89 e9 4c 89 f2 48 89 de
> [    0.000000] RIP  [<ffffffff81d220e6>] __alloc_bootmem_node+0x43/0x96
> [    0.000000]  RSP <ffffffff81c01de8>
> [    0.000000] CR2: 0000000000000000
> [    0.000000] ---[ end trace a7919e7f17c0a725 ]---
> [    0.000000] Kernel panic - not syncing: Attempted to kill the idle task!
> (XEN) Domain 0 crashed: 'noreboot' set - not rebooting.
> 
> 
> 
> The obvious solution would be to explicitly deny northbridge scanning 
> when running as Dom0, though I am not sure how to implement this without 
> upsetting the other kernel folks about "that crappy Xen thing" again ;-)

Heh.
Is there a numa=0 option that could be used to turn it off?
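
(For reference, the x86-64 boot option is spelled numa=off rather than
numa=0. A rough sketch of such an early-parameter check, simplified and
with an illustrative function name rather than the kernel's exact code:)

```c
#include <stdbool.h>
#include <string.h>

/* Sketch of numa= boot-option handling (illustrative, not the kernel's
 * exact code): "numa=off" disables NUMA setup entirely, which would also
 * sidestep the northbridge scan; "numa=0" is not a recognized spelling. */
bool numa_disabled_by_cmdline(const char *opt)
{
	return opt != NULL && strcmp(opt, "off") == 0;
}
```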
> 
> Could someone propose a fix for this (I am OoO for the next two weeks).
> 
> Regards,
> Andre.
> 
> -- 
> Andre Przywara
> AMD-Operating System Research Center (OSRC), Dresden, Germany
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 12:54:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 12:54:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxHOo-0003jj-Sc; Fri, 03 Aug 2012 12:54:38 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1SxHOo-0003je-5o
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 12:54:38 +0000
Received: from [85.158.139.83:32194] by server-12.bemta-5.messagelabs.com id
	4A/A6-26304-D0ACB105; Fri, 03 Aug 2012 12:54:37 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-6.tower-182.messagelabs.com!1343998474!26245499!1
X-Originating-IP: [209.85.215.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23495 invoked from network); 3 Aug 2012 12:54:34 -0000
Received: from mail-ey0-f173.google.com (HELO mail-ey0-f173.google.com)
	(209.85.215.173)
	by server-6.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 12:54:34 -0000
Received: by eaah1 with SMTP id h1so186770eaa.32
	for <xen-devel@lists.xen.org>; Fri, 03 Aug 2012 05:54:34 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:cc:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=ieL0bZrAM+bzXJCJAIF+JijknxuClLljins47FAnjcs=;
	b=OFo37WehwKgO9hnKUkjPP8qb7epJ6zuVJbrnC91Dfx9d2vuNe/qhQZu4i0inYkvG+x
	EoHdpULDqd22u4uUDw+18lyzp2/ZlMUoQ3MwrMka+0EjF/WAUFWUdowe0QwZDf/1Sn//
	RMLC2p1i9O8fLeGUkG6Jl6QGSj5Is7+kNCgfZ1U+nBgAkUMb2VV4V+or6FUm+lAKXKA1
	Z7xoKD26DQLoK3ObtBKVY/vhOsi82NsZx0N6Q6JBeELnfNcUurSSR+mTPIlQknhTFN7d
	NAziO9IdUwYudCEdQ36McCE8cEC2I/Jmby31ResAt6yjjoCfc8kgmZ0FPOpbMz/AHPHj
	t/CQ==
Received: by 10.14.5.78 with SMTP id 54mr2057421eek.1.1343998474351;
	Fri, 03 Aug 2012 05:54:34 -0700 (PDT)
Received: from [192.168.1.68] (host86-178-46-238.range86-178.btcentralplus.com.
	[86.178.46.238])
	by mx.google.com with ESMTPS id h42sm25592027eem.5.2012.08.03.05.54.32
	(version=SSLv3 cipher=OTHER); Fri, 03 Aug 2012 05:54:33 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.32.0.111121
Date: Fri, 03 Aug 2012 13:54:29 +0100
From: Keir Fraser <keir.xen@gmail.com>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <CC418895.3A686%keir.xen@gmail.com>
Thread-Topic: [Xen-devel] [PATCH] x86: fix wait code asm() constraints
Thread-Index: Ac1xdx56CD5mc/MmC0OW366DUBe/yw==
In-Reply-To: <501BDD05020000780009284B@nat28.tlf.novell.com>
Mime-version: 1.0
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] x86: fix wait code asm() constraints
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 03/08/2012 13:15, "Jan Beulich" <JBeulich@suse.com> wrote:

>> Yep, okay, that makes sense. Especially the use of an input operand to form
>> a memory address, although of course that cannot happen in our specific case
>> here. I have acked your patch, although I'd like an update to the patch
>> comment.
> 
> How about this as an added initial paragraph?
> 
> This fixes theoretical issues with those constraints - operands that
> get clobbered before consuming all input operands must be marked so
> according to the gcc documentation. Beyond that, the change is merely
> code improvement, not a bug fix.

Perfect!

 K.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 12:55:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 12:55:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxHPi-0003n6-AB; Fri, 03 Aug 2012 12:55:34 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1SxHPg-0003mg-Nz
	for xen-devel@lists.xensource.com; Fri, 03 Aug 2012 12:55:32 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1343998522!11123315!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc5NTM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25048 invoked from network); 3 Aug 2012 12:55:22 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 12:55:22 -0000
X-IronPort-AV: E=Sophos;i="4.77,706,1336348800"; 
	d="asc'?scan'208";a="13840897"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Aug 2012 12:55:22 +0000
Received: from [IPv6:::1] (10.80.16.67) by smtprelay.citrix.com
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Fri, 3 Aug 2012
	13:55:21 +0100
Message-ID: <1343998520.11787.4.camel@Abyss>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: Lars Kurth <lars.kurth@citrix.com>
Date: Fri, 3 Aug 2012 14:55:20 +0200
In-Reply-To: <344C0F67BC927847A2C92F9EE358DB0EED0B6AD9A7@LONPMAILBOX01.citrite.net>
References: <1343926951.4873.36.camel@Solace>
	<344C0F67BC927847A2C92F9EE358DB0EED0B6AD9A7@LONPMAILBOX01.citrite.net>
X-Mailer: Evolution 3.2.3 (3.2.3-3.fc16) 
MIME-Version: 1.0
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	xen-devel <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] What about a Fedora TestDay about Xen?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3795562523033574207=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3795562523033574207==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-1+0wEBWNSCsYr9qrYIQS"

--=-1+0wEBWNSCsYr9qrYIQS
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Thu, 2012-08-02 at 18:55 +0100, Lars Kurth wrote:=20
> Dario,
>=20
Hi,

> I am not sure whether we should angle for a Xen Fedora Test day this time=
 round.=20
>=20
> We should first make sure that we have good presence at the Virtualizatio=
n Test Day (we got a bit, but never as much as KVM) and try and do all the =
things we would for a Xen Test Day. That way, we get to practice and build =
up credibility for the next Fedora release where it probably does make sens=
e to do our own test day. If we do a Xen Fedora Test Day and nobody turns u=
p this can backfire badly
>=20
Ok, I see your point, thanks for sharing your view.

> Does this make sense?
>=20
I guess so, I'll keep an eye on the (Fedora) Virtualization TestDay and
see if we can have some Xen related testcases there.

Thanks and Regards,
Dario

--=20
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)



--=-1+0wEBWNSCsYr9qrYIQS
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iEYEABECAAYFAlAbyjgACgkQk4XaBE3IOsSzTACgoz0OISDcA3n9vKg2uwdxtwE+
QYoAn0ry+SvlTuzYijdPZUZ44AsnuqEA
=UjBq
-----END PGP SIGNATURE-----

--=-1+0wEBWNSCsYr9qrYIQS--


--===============3795562523033574207==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3795562523033574207==--


From xen-devel-bounces@lists.xen.org Fri Aug 03 13:14:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 13:14:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxHi2-0004I4-2g; Fri, 03 Aug 2012 13:14:30 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin@linux.it>) id 1SxHi0-0004Hz-CU
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 13:14:28 +0000
Received: from [85.158.138.51:65123] by server-8.bemta-3.messagelabs.com id
	15/3D-10412-3BECB105; Fri, 03 Aug 2012 13:14:27 +0000
X-Env-Sender: raistlin@linux.it
X-Msg-Ref: server-2.tower-174.messagelabs.com!1343999665!30271384!1
X-Originating-IP: [193.205.80.99]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32520 invoked from network); 3 Aug 2012 13:14:25 -0000
Received: from ms01.sssup.it (HELO sssup.it) (193.205.80.99)
	by server-2.tower-174.messagelabs.com with SMTP;
	3 Aug 2012 13:14:25 -0000
Received: from [83.211.176.206] (account d.faggioli@sssup.it HELO
	[192.168.0.20]) by sssup.it (CommuniGate Pro SMTP 5.3.14)
	with ESMTPSA id 79962942; Fri, 03 Aug 2012 15:14:24 +0200
Message-ID: <1343999644.11787.9.camel@Abyss>
From: Dario Faggioli <raistlin@linux.it>
To: Jan Beulich <JBeulich@suse.com>
Date: Fri, 03 Aug 2012 15:14:04 +0200
In-Reply-To: <501BD46402000078000927F9@nat28.tlf.novell.com>
References: <1343837796.4958.32.camel@Solace> <501BA1C0.7040100@amd.com>
	<501BC6B8020000780009277B@nat28.tlf.novell.com>
	<501BB54E.1050302@amd.com>
	<501BD46402000078000927F9@nat28.tlf.novell.com>
X-Mailer: Evolution 3.2.3 (3.2.3-3.fc16) 
Mime-Version: 1.0
Cc: Andre Przywara <andre.przywara@amd.com>,
	Anil Madhavapeddy <anil@recoil.org>,
	George Dunlap <George.Dunlap@eu.citrix.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>, Yang Z Zhang <yang.z.zhang@intel.com>
Subject: Re: [Xen-devel] NUMA TODO-list for xen-devel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7029823554544432922=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--===============7029823554544432922==
Content-Type: multipart/signed; micalg="pgp-sha1"; protocol="application/pgp-signature";
	boundary="=-OcrXWu5uAJdLW1T+Lk6D"


--=-OcrXWu5uAJdLW1T+Lk6D
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Fri, 2012-08-03 at 12:38 +0100, Jan Beulich wrote:=20
> >> How about "dom0_mem=3Dnode<n> dom0_vcpus=3Dnode<n>" as
> >> an extension to the current options?
> >=20
> > Yes, that sounds like a good idea. And relatively easy to implement.
> > Maybe a list or a number of nodes (to make it more complicated ;-)
>=20
> Oh yes, of course I implied this flexibility. Just wanted to give
> an easy to read example.
>=20
Yep, I agree it sounds nice and should not be too hard. I'll update the
Wiki page.

I only have one question: should we try to take IONUMA into account here
as well? I mean, if it turns out that I/O hubs are connected to some
specific node(s), shouldn't we consider pinning/"affining" Dom0 to those
node(s), as it most likely will be responsible for some/most DomUs' I/O?
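For reference, the kind of command line being discussed might look as follows. dom0_mem, dom0_max_vcpus and dom0_vcpus_pin are existing Xen boot options; the node<n> forms are only the proposal from this thread, not something current Xen accepts:

```text
# Today: static Dom0 sizing and pinning, NUMA-agnostic
multiboot /xen.gz dom0_mem=4096M dom0_max_vcpus=8 dom0_vcpus_pin

# Proposed: confine Dom0's memory and vCPUs to a given node
multiboot /xen.gz dom0_mem=node0 dom0_vcpus=node0
```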

Thanks and Regards,
Dario

--=20
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)



--=-OcrXWu5uAJdLW1T+Lk6D
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iEYEABECAAYFAlAbzpwACgkQk4XaBE3IOsQkVACeP6/51ZXEqLiXLGnBdHTTZW1F
/aAAniUjq/9sAKL72MztTk9IgSmNDtGS
=sPRo
-----END PGP SIGNATURE-----

--=-OcrXWu5uAJdLW1T+Lk6D--



--===============7029823554544432922==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7029823554544432922==--



From xen-devel-bounces@lists.xen.org Fri Aug 03 13:30:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 13:30:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxHxI-0004V8-Mm; Fri, 03 Aug 2012 13:30:16 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <darnok@68k.org>) id 1SxHxH-0004V3-CN
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 13:30:15 +0000
X-Env-Sender: darnok@68k.org
X-Msg-Ref: server-7.tower-27.messagelabs.com!1344000604!3665504!1
X-Originating-IP: [206.212.254.10]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30662 invoked from network); 3 Aug 2012 13:30:05 -0000
Received: from andromeda.dapyr.net (HELO andromeda.dapyr.net) (206.212.254.10)
	by server-7.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 3 Aug 2012 13:30:05 -0000
Received: from andromeda.dapyr.net (darnok@localhost [127.0.0.1])
	by andromeda.dapyr.net (8.13.4/8.13.4/Debian-3sarge3) with ESMTP id
	q73DU1nW014210
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NOT);
	Fri, 3 Aug 2012 09:30:01 -0400
Received: (from darnok@localhost)
	by andromeda.dapyr.net (8.13.4/8.13.4/Submit) id q73DU1wF014208;
	Fri, 3 Aug 2012 09:30:01 -0400
Date: Fri, 3 Aug 2012 09:30:01 -0400
From: Konrad Rzeszutek Wilk <konrad@darnok.org>
To: Mukesh Rathor <mukesh.rathor@oracle.com>
Message-ID: <20120803133001.GA13750@andromeda.dapyr.net>
References: <1343745804-28028-1-git-send-email-konrad.wilk@oracle.com>
	<20120801155040.GB15812@phenom.dumpdata.com>
	<501A5EF7020000780009219C@nat28.tlf.novell.com>
	<20120802141710.GF16749@phenom.dumpdata.com>
	<20120802160403.02de484e@mantra.us.oracle.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20120802160403.02de484e@mantra.us.oracle.com>
User-Agent: Mutt/1.5.9i
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	xen-devel <xen-devel@lists.xen.org>, Jan Beulich <JBeulich@suse.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [PATCH] Boot PV guests with more than 128GB (v2)
	for 3.7
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> > > > Note it is 0xffffffffc30b4000 - so already past the
> > > > level2_kernel_pgt (L3[510])
> > > > and in level2_fixmap_pgt territory (L3[511]).
> > > > 
> > > > At that stage we are still operating using the Xen provided
> > > > pagetable - which look to have the L4[511][511] empty! Which
> > > > sounds to me like a Xen tool-stack problem? Jan, have you seen
> > > > something similar to this?
> > > 
> > > No, we haven't, but I also don't think anyone tried to create as
> > > big a DomU. I was, however, under the impression that DomU-s
> > > this big had been created at Oracle before. Or was that only up
> > > to 256GB perhaps?
> > 
> > Mukesh do you recall? Was it with OVM2.2.2 which was 3.4 based?
> > It might be that we did not have the 1TB hardware at that time yet.
> 
> Yes, in ovm2.x, I debugged/booted upto 500GB domU. So something
> got broken after it looks like. I can debug later if it becomes hot. 

I got the kernel part fixed, but it's the toolstack that has bugs in it.
If you recall - were there any patches in the toolstack for this, or
did you just concentrate on the kernel?
Thanks!

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 13:52:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 13:52:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxIIo-0004u3-DD; Fri, 03 Aug 2012 13:52:30 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SxIIm-0004td-EC
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 13:52:28 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1344001935!11135766!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6040 invoked from network); 3 Aug 2012 13:52:15 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-9.tower-27.messagelabs.com with SMTP;
	3 Aug 2012 13:52:15 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 03 Aug 2012 14:52:14 +0100
Message-Id: <501BF3D802000078000928A4@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 03 Aug 2012 14:52:56 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Dario Faggioli" <raistlin@linux.it>
References: <1343837796.4958.32.camel@Solace> <501BA1C0.7040100@amd.com>
	<501BC6B8020000780009277B@nat28.tlf.novell.com>
	<501BB54E.1050302@amd.com>
	<501BD46402000078000927F9@nat28.tlf.novell.com>
	<1343999644.11787.9.camel@Abyss>
In-Reply-To: <1343999644.11787.9.camel@Abyss>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Andre Przywara <andre.przywara@amd.com>,
	Anil Madhavapeddy <anil@recoil.org>,
	George Dunlap <George.Dunlap@eu.citrix.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>, Yang Z Zhang <yang.z.zhang@intel.com>
Subject: Re: [Xen-devel] NUMA TODO-list for xen-devel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 03.08.12 at 15:14, Dario Faggioli <raistlin@linux.it> wrote:
> On Fri, 2012-08-03 at 12:38 +0100, Jan Beulich wrote: 
>> >> How about "dom0_mem=node<n> dom0_vcpus=node<n>" as
>> >> an extension to the current options?
>> > 
>> > Yes, that sounds like a good idea. And relatively easy to implement.
>> > Maybe a list or a number of nodes (to make it more complicated ;-)
>> 
>> Oh yes, of course I implied this flexibility. Just wanted to give
>> an easy to read example.
>> 
> Yep, I agree it sounds nice and should be not to hard. I'll update the
> Wiki page.
> 
> I only have one question, should we try to take IONUMA into account here
> as well? I mean, if it turns out that I/O hubs are connected to some
> specific node(s), shouldn't we consider pinning/"affining" Dom0 to those
> node(s), as it most likely will be responsible for some/most DomUs' I/O?

I don't think the necessary information is available at the time
when Dom0 gets constructed.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 13:54:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 13:54:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxIKS-00052j-TW; Fri, 03 Aug 2012 13:54:12 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SxIKR-000523-C6
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 13:54:11 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1344002043!10645988!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1924 invoked from network); 3 Aug 2012 13:54:04 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-27.messagelabs.com with SMTP;
	3 Aug 2012 13:54:04 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 03 Aug 2012 14:54:03 +0100
Message-Id: <501BF44602000078000928B4@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 03 Aug 2012 14:54:46 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Konrad Rzeszutek Wilk" <konrad@darnok.org>
References: <1343745804-28028-1-git-send-email-konrad.wilk@oracle.com>
	<20120801155040.GB15812@phenom.dumpdata.com>
	<501A5EF7020000780009219C@nat28.tlf.novell.com>
	<20120802141710.GF16749@phenom.dumpdata.com>
	<20120802160403.02de484e@mantra.us.oracle.com>
	<20120803133001.GA13750@andromeda.dapyr.net>
In-Reply-To: <20120803133001.GA13750@andromeda.dapyr.net>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xen.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH] Boot PV guests with more than 128GB (v2)
 for 3.7
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 03.08.12 at 15:30, Konrad Rzeszutek Wilk <konrad@darnok.org> wrote:
>> > > > Note it is is 0xffffffffc30b4000 - so already past the
>> > > > level2_kernel_pgt (L3[510]
>> > > > and in level2_fixmap_pgt territory (L3[511]).
>> > > > 
>> > > > At that stage we are still operating using the Xen provided
>> > > > pagetable - which look to have the L4[511][511] empty! Which
>> > > > sounds to me like a Xen tool-stack problem? Jan, have you seen
>> > > > something similar to this?
>> > > 
>> > > No we haven't, but I also don't think anyone tried to create as
>> > > big a DomU. I was, however, under the impression that DomU-s
>> > > this big had been created at Oracle before. Or was that only up
>> > > to 256Gb perhaps?
>> > 
>> > Mukesh do you recall? Was it with OVM2.2.2 which was 3.4 based?
>> > It might be that we did not have the 1TB hardware at that time yet.
>> 
>> Yes, in ovm2.x, I debugged/booted upto 500GB domU. So something
>> got broken after it looks like. I can debug later if it becomes hot. 
> 
> I got the kernel part fixed but its the toolstack that got bugs in it.

So did you try the suggested fix? Or are you waiting for me to
put this in patch form?

Jan

> If you recall - where there any patches in the toolstack for this or
> did you just concentrate on the kernel?
> Thanks!




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 14:21:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 14:21:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxIkM-0005eb-80; Fri, 03 Aug 2012 14:20:58 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Christoph.Egger@amd.com>) id 1SxIkK-0005eW-QO
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 14:20:57 +0000
Received: from [85.158.138.51:63384] by server-9.bemta-3.messagelabs.com id
	E0/84-27628-84EDB105; Fri, 03 Aug 2012 14:20:56 +0000
X-Env-Sender: Christoph.Egger@amd.com
X-Msg-Ref: server-16.tower-174.messagelabs.com!1344003654!30175968!1
X-Originating-IP: [216.32.181.186]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22483 invoked from network); 3 Aug 2012 14:20:55 -0000
Received: from ch1ehsobe006.messaging.microsoft.com (HELO
	ch1outboundpool.messaging.microsoft.com) (216.32.181.186)
	by server-16.tower-174.messagelabs.com with AES128-SHA encrypted SMTP;
	3 Aug 2012 14:20:55 -0000
Received: from mail158-ch1-R.bigfish.com (10.43.68.249) by
	CH1EHSOBE008.bigfish.com (10.43.70.58) with Microsoft SMTP Server id
	14.1.225.23; Fri, 3 Aug 2012 14:20:53 +0000
Received: from mail158-ch1 (localhost [127.0.0.1])	by
	mail158-ch1-R.bigfish.com (Postfix) with ESMTP id 50F3840246	for
	<xen-devel@lists.xen.org>; Fri,  3 Aug 2012 14:20:53 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:163.181.249.108; KIP:(null); UIP:(null);
	IPV:NLI; H:ausb3twp01.amd.com; RD:none; EFVD:NLI
X-SpamScore: 3
X-BigFish: VPS3(zzc8kzz1202hzz8275dhz2dh668h839hd25he5bhf0ah107ah)
Received: from mail158-ch1 (localhost.localdomain [127.0.0.1]) by mail158-ch1
	(MessageSwitch) id 1344003651422765_10954;
	Fri,  3 Aug 2012 14:20:51 +0000 (UTC)
Received: from CH1EHSMHS029.bigfish.com (snatpool1.int.messaging.microsoft.com
	[10.43.68.250])	by mail158-ch1.bigfish.com (Postfix) with ESMTP id
	5B3F82A00CF	for <xen-devel@lists.xen.org>;
	Fri,  3 Aug 2012 14:20:51 +0000 (UTC)
Received: from ausb3twp01.amd.com (163.181.249.108) by
	CH1EHSMHS029.bigfish.com (10.43.70.29) with Microsoft SMTP Server id
	14.1.225.23; Fri, 3 Aug 2012 14:20:49 +0000
X-WSS-ID: 0M86NUO-01-6OZ-02
X-M-MSG: 
Received: from sausexedgep01.amd.com (sausexedgep01-ext.amd.com
	[163.181.249.72])	(using TLSv1 with cipher AES128-SHA (128/128
	bits))	(No
	client certificate requested)	by ausb3twp01.amd.com (Axway MailGate
	3.8.1) with ESMTP id 2DDD110280C5	for <xen-devel@lists.xen.org>;
	Fri,  3 Aug 2012 09:20:48 -0500 (CDT)
Received: from sausexhtp02.amd.com (163.181.3.152) by sausexedgep01.amd.com
	(163.181.36.54) with Microsoft SMTP Server (TLS) id 8.3.192.1;
	Fri, 3 Aug 2012 09:21:08 -0500
Received: from storexhtp02.amd.com (172.24.4.4) by sausexhtp02.amd.com
	(163.181.3.152) with Microsoft SMTP Server (TLS) id 8.3.213.0;
	Fri, 3 Aug 2012 09:20:48 -0500
Received: from donner.osrc.amd.com (165.204.15.15) by storexhtp02.amd.com
	(172.24.4.4) with Microsoft SMTP Server id 8.3.213.0; Fri, 3 Aug 2012
	10:20:47 -0400
Message-ID: <501BDF23.50409@amd.com>
Date: Fri, 3 Aug 2012 16:24:35 +0200
From: Christoph Egger <Christoph.Egger@amd.com>
User-Agent: Mozilla/5.0 (X11; NetBSD amd64;
	rv:11.0) Gecko/20120404 Thunderbird/11.0
MIME-Version: 1.0
To: xen-devel <xen-devel@lists.xen.org>
X-OriginatorOrg: amd.com
Subject: [Xen-devel] xl segfault when starting a guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Hi,

I installed c/s 25729 and xl crashes.
The prior c/s I used was 25687 and worked fine.
I can't find any c/s that could touch libxl in such a way.

xl segfaults with xl create.


# gdb xl
GNU gdb (GDB) 7.3.1
Copyright (C) 2011 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later
<http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64--netbsd".
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>...
Reading symbols from /usr/local.25729/sbin/xl...done.
(gdb) set args -vvv create -c /hvm-guest/win2008.conf
(gdb) run
Starting program: /usr/local.25729/sbin/xl create -c /hvm-guest/win2008.conf
Parsing config from /hvm-guest/win2008.conf

Program received signal SIGSEGV, Segmentation fault.
0x00007f7ff7806c57 in xlu__disk_yylex (yyscanner=0x7f7ff7b1f0c0)
    at libxlu_disk_l.c:1195
1195    libxlu_disk_l.c: No such file or directory.
        in libxlu_disk_l.c
(gdb) bt
#0  0x00007f7ff7806c57 in xlu__disk_yylex (yyscanner=0x7f7ff7b1f0c0)
    at libxlu_disk_l.c:1195
#1  0x00007f7ff7807d28 in xlu_disk_parse (cfg=<optimized out>, nspecs=1,
    specs=<optimized out>, disk=0x7f7ff7b24080) at libxlu_disk.c:66
#2  0x0000000000408213 in parse_disk_config_multistring
(config=0x7f7fffffd158,
    nspecs=1, specs=0x7f7fffffd0b8, disk=0x7f7ff7b24080) at xl_cmdimpl.c:423
#3  0x0000000000408284 in parse_disk_config (config=<optimized out>,
    spec=0x7f7ff7b01500 "file:/hvm-guest/win2008.img,ioemu:hda,w",
    disk=<optimized out>) at xl_cmdimpl.c:434
#4  0x000000000040a905 in parse_config_data (config_source=<optimized out>,
    config_data=<optimized out>, config_len=<optimized out>,
    d_config=0x7f7fffffd340, dom_info=<optimized out>) at xl_cmdimpl.c:920
#5  0x000000000040c35e in create_domain (dom_info=0x7f7fffffd610)
    at xl_cmdimpl.c:1742
#6  0x0000000000410466 in main_create (argc=3, argv=<optimized out>)
    at xl_cmdimpl.c:3771
#7  0x0000000000406f55 in main (argc=3, argv=0x7f7fffffdb68) at xl.c:267
(gdb)



-- 
---to satisfy European Law for business letters:
Advanced Micro Devices GmbH
Einsteinring 24, 85689 Dornach b. Muenchen
Geschaeftsfuehrer: Alberto Bozzo
Sitz: Dornach, Gemeinde Aschheim, Landkreis Muenchen
Registergericht Muenchen, HRB Nr. 43632


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

    d_config=0x7f7fffffd340, dom_info=<optimized out>) at xl_cmdimpl.c:920
#5  0x000000000040c35e in create_domain (dom_info=0x7f7fffffd610)
    at xl_cmdimpl.c:1742
#6  0x0000000000410466 in main_create (argc=3, argv=<optimized out>)
    at xl_cmdimpl.c:3771
#7  0x0000000000406f55 in main (argc=3, argv=0x7f7fffffdb68) at xl.c:267
(gdb)



-- 
---to satisfy European Law for business letters:
Advanced Micro Devices GmbH
Einsteinring 24, 85689 Dornach b. Muenchen
Geschaeftsfuehrer: Alberto Bozzo
Sitz: Dornach, Gemeinde Aschheim, Landkreis Muenchen
Registergericht Muenchen, HRB Nr. 43632


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 14:44:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 14:44:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxJ7C-0005yf-AD; Fri, 03 Aug 2012 14:44:34 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SxJ7A-0005ya-Kd
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 14:44:32 +0000
Received: from [85.158.139.83:45471] by server-5.bemta-5.messagelabs.com id
	09/82-02722-FC3EB105; Fri, 03 Aug 2012 14:44:31 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-12.tower-182.messagelabs.com!1344005070!30145158!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc5NTM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27674 invoked from network); 3 Aug 2012 14:44:31 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-12.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 14:44:31 -0000
X-IronPort-AV: E=Sophos;i="4.77,706,1336348800"; d="scan'208";a="13842958"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Aug 2012 14:44:30 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 3 Aug 2012 15:44:30 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1SxJ78-0002ld-Eo; Fri, 03 Aug 2012 14:44:30 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1SxJ78-0007g2-Du;
	Fri, 03 Aug 2012 15:44:30 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20507.58318.416753.917851@mariner.uk.xensource.com>
Date: Fri, 3 Aug 2012 15:44:30 +0100
To: Matt Wilson <msw@amazon.com>
Newsgroups: chiark.mail.xen.devel
In-Reply-To: <20120802211157.GG8228@US-SEA-R8XVZTX>
References: <20120802211157.GG8228@US-SEA-R8XVZTX>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	Lars Kurth <lars.kurth@xen.org>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] lists.xen.org Mailman configuration and DKIM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Matt Wilson writes ("[Xen-devel] lists.xen.org Mailman configuration and DKIM"):
> Several folks have let me know that my messages sent via lists.xen.org
> are marked as spam / spoofed, especially when using Gmail to receive
> Xen mail. I believe this is because outbound Amazon email contains a
> DKIM signature. When Mailman modifies my message and re-sends it, the
> DKIM signature is invalidated [1].
> 
> To work around this, Mailman 2.1.10 and later contain a configuration
> variable called "REMOVE_DKIM_HEADERS" [2]. Perhaps if this were turned
> on we'd work around the problem.
...
> [1] http://wiki.list.org/display/DEV/DKIM
> [2] https://bugs.launchpad.net/mailman/+bug/557493

Having checked RFC4871 I think it is clear that according to the
standards
  - Mailman SHOULD NOT [1] strip DKIM-Signature
  - No-one should treat a message with an invalid DKIM signature
    differently from a message with no DKIM signature at all [2]

[1] RFC4871 says in s3.5 that DKIM-Signature SHOULD be treated the same
way as a trace header (i.e. a Received), so removing it would be a
violation of that SHOULD, not necessarily a violation of the MUST NOT
that covers messing with Received headers.

[2] RFC4871 6.1:
   A verifier SHOULD NOT treat a message that has one or more bad
   signatures and no good signatures differently from a message with
   no signature at all; such treatment is a matter of local policy and
   is beyond the scope of this document.

I think it would be better if you would do one of:
  (a)  Get Gmail fixed to comply with RFC4871 6.1;
  (b)  Get your correspondents to use a non-broken email host;
  (c)  Get the DKIM spec changed or clarified;
  (d)  Stop putting these abused things in your email headers.

That would be better than asking lists.xen.org to start violating the
specified protocol.  Now of course a SHOULD is not an absolute
requirement.  Perhaps mailing lists are a special case somehow; but if
so I would expect this to be addressed in the relevant standards
documents.  I don't see any particular reason to think that
lists.xen.org is somehow unusual.

Do you agree ?

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 14:46:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 14:46:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxJ8F-00061M-OM; Fri, 03 Aug 2012 14:45:39 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SxJ8E-00061F-3x
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 14:45:38 +0000
Received: from [85.158.143.35:60124] by server-3.bemta-4.messagelabs.com id
	83/1A-01511-114EB105; Fri, 03 Aug 2012 14:45:37 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1344005135!16020083!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc5NTM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7267 invoked from network); 3 Aug 2012 14:45:36 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 14:45:36 -0000
X-IronPort-AV: E=Sophos;i="4.77,706,1336348800"; d="scan'208";a="13842979"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Aug 2012 14:45:35 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Fri, 3 Aug 2012
	15:45:35 +0100
Message-ID: <1344005133.21372.54.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Christoph Egger <Christoph.Egger@amd.com>
Date: Fri, 3 Aug 2012 15:45:33 +0100
In-Reply-To: <501BDF23.50409@amd.com>
References: <501BDF23.50409@amd.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] xl segfault when starting a guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-08-03 at 15:24 +0100, Christoph Egger wrote:
> Hi,
> 
> I installed c/s 25729 and xl crashes.
> The prior c/s I used was 25687 and worked fine.

These numbers can be ambiguous; what are the long hashes?

> I can't find any c/s that could touch libxl in such a way.

Me neither, but the xlu disk parser has been touched recently, e.g.
25727:a8d708fcb347 is in there.

Recently there was 25665:fab03d9ee1ba and 25663:968b205da696 too.

A dry run with your syntax seems to work for me, e.g.:
        xl -N block-attach 0 'file:/hvm-guest/win2008.img,ioemu:hda,w'

Did the generated flex files get rebuilt in your environment?

> Starting program: /usr/local.25729/sbin/xl create -c /hvm-guest/win2008.conf

Can you send win2008.conf please.

Does a dry run create reproduce it? xl -N create .../win2008.conf --
that'll be easier for me to repro.

Is this under NetBSD?

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 14:51:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 14:51:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxJDJ-0006Ff-FK; Fri, 03 Aug 2012 14:50:53 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <frediano.ziglio@citrix.com>) id 1SxJDI-0006FZ-EF
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 14:50:52 +0000
Received: from [85.158.143.99:34895] by server-1.bemta-4.messagelabs.com id
	AC/D3-24392-B45EB105; Fri, 03 Aug 2012 14:50:51 +0000
X-Env-Sender: frediano.ziglio@citrix.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1344005450!29553521!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc5NTM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 890 invoked from network); 3 Aug 2012 14:50:51 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-3.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 14:50:51 -0000
X-IronPort-AV: E=Sophos;i="4.77,706,1336348800"; d="scan'208";a="13843111"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Aug 2012 14:50:50 +0000
Received: from LONPMAILBOX01.citrite.net ([10.30.224.160]) by
	LONPMAILMX01.citrite.net ([10.30.203.162]) with mapi; Fri, 3 Aug 2012
	15:50:50 +0100
From: Frediano Ziglio <frediano.ziglio@citrix.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Date: Fri, 3 Aug 2012 15:50:50 +0100
Thread-Topic: [PATCH] Increment buffer used to read first boot sector in
	order to accommodate space for 4k sector
Thread-Index: Ac1xh1/K/CCLHNCBRpaT0ze5//VGGA==
Message-ID: <7CE799CC0E4DE04B88D5FDF226E18AC2CDFFB08D0D@LONPMAILBOX01.citrite.net>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
MIME-Version: 1.0
Subject: [Xen-devel] [PATCH] Increment buffer used to read first boot sector
 in order to accommodate space for 4k sector
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


If a 4k-sector disk is used as the first BIOS disk, the loader corrupts itself.

This patch increases the sector buffer in order to avoid this overflow.

Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
---
 xen/arch/x86/boot/edd.S        |    2 +-
 xen/arch/x86/boot/trampoline.S |    2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/boot/edd.S b/xen/arch/x86/boot/edd.S
index 2c8df8c..1c802a6 100644
--- a/xen/arch/x86/boot/edd.S
+++ b/xen/arch/x86/boot/edd.S
@@ -154,4 +154,4 @@ boot_mbr_signature_nr:
 boot_mbr_signature:
         .fill   EDD_MBR_SIG_MAX*8,1,0
 boot_edd_info:
-        .fill   512,1,0                         # big enough for a disc sector
+        .fill   4096,1,0                         # big enough for a disc sector
diff --git a/xen/arch/x86/boot/trampoline.S b/xen/arch/x86/boot/trampoline.S
index 4421fc2..bd54c9e 100644
--- a/xen/arch/x86/boot/trampoline.S
+++ b/xen/arch/x86/boot/trampoline.S
@@ -224,6 +224,6 @@ skip_realmode:
 rm_idt: .word   256*4-1, 0, 0
 
 #include "mem.S"
-#include "edd.S"
 #include "video.S"
 #include "wakeup.S"
+#include "edd.S"
-- 
1.7.5.4

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 15:10:18 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 15:10:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxJVq-0006iY-U9; Fri, 03 Aug 2012 15:10:02 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <d.vrabel.98@gmail.com>) id 1SxJVq-0006iM-1X
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 15:10:02 +0000
Received: from [85.158.139.83:52978] by server-2.bemta-5.messagelabs.com id
	E0/79-04598-9C9EB105; Fri, 03 Aug 2012 15:10:01 +0000
X-Env-Sender: d.vrabel.98@gmail.com
X-Msg-Ref: server-13.tower-182.messagelabs.com!1344006599!29567934!1
X-Originating-IP: [209.85.213.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30713 invoked from network); 3 Aug 2012 15:10:00 -0000
Received: from mail-yx0-f173.google.com (HELO mail-yx0-f173.google.com)
	(209.85.213.173)
	by server-13.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 15:10:00 -0000
Received: by yenl1 with SMTP id l1so1084022yen.32
	for <xen-devel@lists.xen.org>; Fri, 03 Aug 2012 08:09:59 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:date:from:user-agent:mime-version:to:cc:subject
	:references:in-reply-to:content-type:content-transfer-encoding;
	bh=yPLYPDyAVXQGpDwpoOwOE+wXPN9jT3nWS62EFdOkQ/A=;
	b=tIRLqNquj1JGJHijpjvao6v8Ki5PA3BGtD4lzHhYVq35ptB9iHRTHYahBDH91mq5j1
	R6yqITHuTwJlEzmmMfsv1c+MYaYNDctN1QabieSVPthYgbGUdG3LCX7PfEIPmlh4aOyl
	BLyoVI2zr9CgHYduiiNKQmaD2YyXy2h/MFo95owSq6xuuewlxlJExZ6N30cuvuzVpPEC
	h7qiBValQ5Rb6YuP0VXfJeOmy73aJQcTtXXs3E4nYcNEeYstzdULpph4tjfFpYtyFFZy
	ehf0ttW+WLW3HZZUkqmdnbhNl12LWj3LPKj3BJB8O6vW1f4KtYjBMscb8cj4QXkfeUPr
	HehQ==
Received: by 10.236.179.98 with SMTP id g62mr2248003yhm.44.1344006599227;
	Fri, 03 Aug 2012 08:09:59 -0700 (PDT)
Received: from [10.80.2.76] (firewall.ctxuk.citrix.com. [62.200.22.2])
	by mx.google.com with ESMTPS id x3sm16694005yhd.9.2012.08.03.08.09.56
	(version=SSLv3 cipher=OTHER); Fri, 03 Aug 2012 08:09:58 -0700 (PDT)
Message-ID: <501BE9C3.1070806@cantab.net>
Date: Fri, 03 Aug 2012 16:09:55 +0100
From: David Vrabel <dvrabel@cantab.net>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20120428 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Frediano Ziglio <frediano.ziglio@citrix.com>
References: <7CE799CC0E4DE04B88D5FDF226E18AC2CDFFB08D0D@LONPMAILBOX01.citrite.net>
In-Reply-To: <7CE799CC0E4DE04B88D5FDF226E18AC2CDFFB08D0D@LONPMAILBOX01.citrite.net>
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Increment buffer used to read first boot
 sector in order to accommodate space for 4k sector
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 03/08/12 15:50, Frediano Ziglio wrote:
> 
> If a disk with 4k sectors is used as the first BIOS disk, the loader
> corrupts itself.
> 
> This patch increases the sector buffer so that the read no longer
> overflows it.
> 
> Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
> ---
>  xen/arch/x86/boot/edd.S        |    2 +-
>  xen/arch/x86/boot/trampoline.S |    2 +-
>  2 files changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/arch/x86/boot/edd.S b/xen/arch/x86/boot/edd.S
> index 2c8df8c..1c802a6 100644
> --- a/xen/arch/x86/boot/edd.S
> +++ b/xen/arch/x86/boot/edd.S
> @@ -154,4 +154,4 @@ boot_mbr_signature_nr:
>  boot_mbr_signature:
>          .fill   EDD_MBR_SIG_MAX*8,1,0
>  boot_edd_info:
> -        .fill   512,1,0                         # big enough for a disc sector
> +        .fill   4096,1,0                         # big enough for a disc sector

Can we get a #define for this value?

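[Editor's note: a minimal sketch of what that suggestion could look like. The macro name EDD_INFO_BUF_SIZE and its placement are assumptions for illustration, not taken from the Xen tree.]

```asm
/* Hypothetical: replace the magic 4096 with a named constant.
 * EDD_INFO_BUF_SIZE must hold one sector of the largest supported
 * sector size (4k-sector discs), per the patch under review.
 */
#define EDD_INFO_BUF_SIZE 4096

boot_edd_info:
        .fill   EDD_INFO_BUF_SIZE,1,0           # big enough for a disc sector
```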
> diff --git a/xen/arch/x86/boot/trampoline.S b/xen/arch/x86/boot/trampoline.S
> index 4421fc2..bd54c9e 100644
> --- a/xen/arch/x86/boot/trampoline.S
> +++ b/xen/arch/x86/boot/trampoline.S
> @@ -224,6 +224,6 @@ skip_realmode:
>  rm_idt: .word   256*4-1, 0, 0
>  
>  #include "mem.S"
> -#include "edd.S"
>  #include "video.S"
>  #include "wakeup.S"
> +#include "edd.S"

This part looks unnecessary.  Included by mistake?

David


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 15:15:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 15:15:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxJaO-0006zX-QI; Fri, 03 Aug 2012 15:14:44 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Christoph.Egger@amd.com>) id 1SxJaN-0006zQ-2L
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 15:14:43 +0000
Received: from [85.158.139.83:23053] by server-11.bemta-5.messagelabs.com id
	41/5D-20400-2EAEB105; Fri, 03 Aug 2012 15:14:42 +0000
X-Env-Sender: Christoph.Egger@amd.com
X-Msg-Ref: server-4.tower-182.messagelabs.com!1344006880!27430668!1
X-Originating-IP: [216.32.180.14]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21381 invoked from network); 3 Aug 2012 15:14:41 -0000
Received: from va3ehsobe004.messaging.microsoft.com (HELO
	va3outboundpool.messaging.microsoft.com) (216.32.180.14)
	by server-4.tower-182.messagelabs.com with AES128-SHA encrypted SMTP;
	3 Aug 2012 15:14:41 -0000
Received: from mail242-va3-R.bigfish.com (10.7.14.254) by
	VA3EHSOBE006.bigfish.com (10.7.40.26) with Microsoft SMTP Server id
	14.1.225.23; Fri, 3 Aug 2012 15:14:39 +0000
Received: from mail242-va3 (localhost [127.0.0.1])	by
	mail242-va3-R.bigfish.com (Postfix) with ESMTP id F2DFD4E0467;
	Fri,  3 Aug 2012 15:14:39 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:163.181.249.108; KIP:(null); UIP:(null);
	IPV:NLI; H:ausb3twp01.amd.com; RD:none; EFVD:NLI
X-SpamScore: -4
X-BigFish: VPS-4(zzbb2dI98dI936eIc85fh1432Ic857hzz1202hzzz2dh668h839hd25he5bhf0ah107ah34h)
Received: from mail242-va3 (localhost.localdomain [127.0.0.1]) by mail242-va3
	(MessageSwitch) id 1344006878551487_20746;
	Fri,  3 Aug 2012 15:14:38 +0000 (UTC)
Received: from VA3EHSMHS003.bigfish.com (unknown [10.7.14.254])	by
	mail242-va3.bigfish.com (Postfix) with ESMTP id 81C944C0049;
	Fri,  3 Aug 2012 15:14:38 +0000 (UTC)
Received: from ausb3twp01.amd.com (163.181.249.108) by
	VA3EHSMHS003.bigfish.com (10.7.99.13) with Microsoft SMTP Server id
	14.1.225.23; Fri, 3 Aug 2012 15:14:37 +0000
X-WSS-ID: 0M86QCC-01-9ZH-02
X-M-MSG: 
Received: from sausexedgep02.amd.com (sausexedgep02-ext.amd.com
	[163.181.249.73])	(using TLSv1 with cipher AES128-SHA (128/128
	bits))	(No
	client certificate requested)	by ausb3twp01.amd.com (Axway MailGate
	3.8.1)
	with ESMTP id 2A5361028147;	Fri,  3 Aug 2012 10:14:35 -0500 (CDT)
Received: from SAUSEXDAG04.amd.com (163.181.55.4) by sausexedgep02.amd.com
	(163.181.36.59) with Microsoft SMTP Server (TLS) id 8.3.192.1;
	Fri, 3 Aug 2012 10:14:55 -0500
Received: from storexhtp01.amd.com (172.24.4.3) by sausexdag04.amd.com
	(163.181.55.4) with Microsoft SMTP Server (TLS) id 14.1.323.3;
	Fri, 3 Aug 2012 10:14:35 -0500
Received: from rhodium.osrc.amd.com (165.204.15.173) by storexhtp01.amd.com
	(172.24.4.3) with Microsoft SMTP Server id 8.3.213.0; Fri, 3 Aug 2012
	11:14:34 -0400
Message-ID: <501BEAD8.3040300@amd.com>
Date: Fri, 3 Aug 2012 17:14:32 +0200
From: Christoph Egger <Christoph.Egger@amd.com>
User-Agent: Mozilla/5.0 (X11; NetBSD amd64;
	rv:11.0) Gecko/20120404 Thunderbird/11.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <501BDF23.50409@amd.com>
	<1344005133.21372.54.camel@zakaz.uk.xensource.com>
In-Reply-To: <1344005133.21372.54.camel@zakaz.uk.xensource.com>
Content-Type: multipart/mixed; boundary="------------060704090007000900070207"
X-OriginatorOrg: amd.com
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] xl segfault when starting a guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--------------060704090007000900070207
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit

On 08/03/12 16:45, Ian Campbell wrote:

> On Fri, 2012-08-03 at 15:24 +0100, Christoph Egger wrote:
>> Hi,
>>
>> I installed c/s 25729 and xl crashes.
>> The prior c/s I used was 25687 and worked fine.
> 
> These numbers can be ambiguous; what are the long hashes?


25729:6ccad16b50b6
25687:fab4434f5145

> 
>> I can't find any c/s that could touch libxl in such a way.
> 
> Me neither, but the xlu disk parser has been touched recently, e.g.
> 25727:a8d708fcb347 is in there.

Yes, that's it.
Reverting this changeset makes xl work again for me.


> Recently there was 25665:fab03d9ee1ba and 25663:968b205da696 too.
> 
> A dry run with your syntax seems to work for me e.g. :
>         xl -N block-attach 0 'file:/hvm-guest/win2008.img,ioemu:hda,w'


This also crashes for me, but works with c/s 25727:a8d708fcb347
reverted.
xl info works.

 
> Did the generated flex files get rebuilt in your environment?
> 
>> Starting program: /usr/local.25729/sbin/xl create -c /hvm-guest/win2008.conf
> 
> Can you send win2008.conf please.


Attached.

> 
> Does a dry run create reproduce it? xl -N create .../win2008.conf --
> that'll be easier for me to repro.


Yes.

 
> Is this under NetBSD?


Yes.

Christoph


-- 
---to satisfy European Law for business letters:
Advanced Micro Devices GmbH
Einsteinring 24, 85689 Dornach b. Muenchen
Geschaeftsfuehrer: Alberto Bozzo
Sitz: Dornach, Gemeinde Aschheim, Landkreis Muenchen
Registergericht Muenchen, HRB Nr. 43632

--------------060704090007000900070207
Content-Type: text/plain; charset="us-ascii"; name="win2008.conf"
Content-Transfer-Encoding: base64
Content-Disposition: attachment; filename="win2008.conf"
Content-Description: win2008.conf

IyAgLSotIG1vZGU6IHB5dGhvbjsgLSotCiM9PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09CiMgUHl0
aG9uIGNvbmZpZ3VyYXRpb24gc2V0dXAgZm9yICd4bSBjcmVhdGUnLgojIFRoaXMgc2NyaXB0
IHNldHMgdGhlIHBhcmFtZXRlcnMgdXNlZCB3aGVuIGEgZG9tYWluIGlzIGNyZWF0ZWQgdXNp
bmcgJ3htIGNyZWF0ZScuCiMgWW91IHVzZSBhIHNlcGFyYXRlIHNjcmlwdCBmb3IgZWFjaCBk
b21haW4geW91IHdhbnQgdG8gY3JlYXRlLCBvciAKIyB5b3UgY2FuIHNldCB0aGUgcGFyYW1l
dGVycyBmb3IgdGhlIGRvbWFpbiBvbiB0aGUgeG0gY29tbWFuZCBsaW5lLgojPT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PQoKIy0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0KIyBLZXJuZWwgaW1hZ2Ug
ZmlsZS4KI2tlcm5lbCA9ICJodm1sb2FkZXIiCgojIFRoZSBkb21haW4gYnVpbGQgZnVuY3Rp
b24uIEhWTSBkb21haW4gdXNlcyAnaHZtJy4KYnVpbGRlcj0naHZtJwoKIyBJbml0aWFsIG1l
bW9yeSBhbGxvY2F0aW9uIChpbiBtZWdhYnl0ZXMpIGZvciB0aGUgbmV3IGRvbWFpbi4KIwoj
IFdBUk5JTkc6IENyZWF0aW5nIGEgZG9tYWluIHdpdGggaW5zdWZmaWNpZW50IG1lbW9yeSBt
YXkgY2F1c2Ugb3V0IG9mCiMgICAgICAgICAgbWVtb3J5IGVycm9ycy4gVGhlIGRvbWFpbiBu
ZWVkcyBlbm91Z2ggbWVtb3J5IHRvIGJvb3Qga2VybmVsCiMgICAgICAgICAgYW5kIG1vZHVs
ZXMuIEFsbG9jYXRpbmcgbGVzcyB0aGFuIDMyTUJzIGlzIG5vdCByZWNvbW1lbmRlZC4KI21l
bW9yeSA9IDY1MDAKI21lbW9yeSA9IDUxMgojbWVtb3J5ID0gMjA0OAptZW1vcnkgPSAzMDcy
CgpuZXN0ZWRodm09MQoKIyBTaGFkb3cgcGFnZXRhYmxlIG1lbW9yeSBmb3IgdGhlIGRvbWFp
biwgaW4gTUIuCiMgU2hvdWxkIGJlIGF0IGxlYXN0IDJLQiBwZXIgTUIgb2YgZG9tYWluIG1l
bW9yeSwgcGx1cyBhIGZldyBNQiBwZXIgdmNwdS4KI3NoYWRvd19tZW1vcnkgPSAyMAoKIyBB
IG5hbWUgZm9yIHlvdXIgZG9tYWluLiBBbGwgZG9tYWlucyBtdXN0IGhhdmUgZGlmZmVyZW50
IG5hbWVzLgpuYW1lID0gIndpbjIwMDgiCgojIDEyOC1iaXQgVVVJRCBmb3IgdGhlIGRvbWFp
bi4gIFRoZSBkZWZhdWx0IGJlaGF2aW9yIGlzIHRvIGdlbmVyYXRlIGEgbmV3IFVVSUQKIyBv
biBlYWNoIGNhbGwgdG8gJ3htIGNyZWF0ZScuCiN1dWlkID0gIjA2ZWQwMGZlLTExNjItNGZj
NC1iNWQ4LTExOTkzZWU0YThiOSIKCiMtLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLQojIHRoZSBu
dW1iZXIgb2YgY3B1cyBndWVzdCBwbGF0Zm9ybSBoYXMsIGRlZmF1bHQ9MQp2Y3B1cz0xCgoj
IGVuYWJsZS9kaXNhYmxlIEhWTSBndWVzdCBQQUUsIGRlZmF1bHQ9MCAoZGlzYWJsZWQpCiNw
YWU9MQoKIyBlbmFibGUvZGlzYWJsZSBIVk0gZ3Vlc3QgQUNQSSwgZGVmYXVsdD0wIChkaXNh
YmxlZCkKYWNwaT0xCgojIGVuYWJsZS9kaXNhYmxlIEhWTSBndWVzdCBBUElDLCBkZWZhdWx0
PTAgKGRpc2FibGVkKQphcGljPTEKCiNjcHVpZD0iaG9zdCxwYWdlMWdiPWssaHlwZXJ2aXNv
cj0wLHN2bV9ucHQ9MCIKY3B1aWQ9Imhvc3QscGFnZTFnYj1rLGh5cGVydmlzb3I9MCIKCiMg
TGlzdCBvZiB3aGljaCBDUFVTIHRoaXMgZG9tYWluIGlzIGFsbG93ZWQgdG8gdXNlLCBkZWZh
dWx0IFhlbiBwaWNrcwojY3B1cyA9ICIiICAgICAgICAgIyBsZWF2ZSB0byBYZW4gdG8gcGlj
awojY3B1cyA9ICIwIiAgICAgICAgIyBhbGwgdmNwdXMgcnVuIG9uIENQVTAKI2NwdXMgPSAi
MC0zLDUsXjEiICMgcnVuIG9uIGNwdXMgMCwyLDMsNQojY3B1cyA9ICIyLTMiCgojIE9wdGlv
bmFsbHkgZGVmaW5lIG1hYyBhbmQvb3IgYnJpZGdlIGZvciB0aGUgbmV0d29yayBpbnRlcmZh
Y2VzLgojIFJhbmRvbSBNQUNzIGFyZSBhc3NpZ25lZCBpZiBub3QgZ2l2ZW4uCiN2aWYgPSBb
ICd0eXBlPWlvZW11LCBtYWM9MDA6MTY6M2U6MDA6Y2U6YTMsIGJyaWRnZT1icmlkZ2UwLCBt
b2RlbD1uZTJrX3BjaScgXQojIHR5cGU9aW9lbXUgc3BlY2lmeSB0aGUgTklDIGlzIGFuIGlv
ZW11IGRldmljZSBub3QgbmV0ZnJvbnQKdmlmID0gWyAndHlwZT1pb2VtdSwgbWFjPTAwOjE2
OjNlOjAxOmNlOmEzLCBicmlkZ2U9YnJpZGdlMCwgbW9kZWw9ZTEwMDAnICBdCgojLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLQojIERlZmluZSB0aGUgZGlzayBkZXZpY2VzIHlvdSB3YW50IHRo
ZSBkb21haW4gdG8gaGF2ZSBhY2Nlc3MgdG8sIGFuZAojIHdoYXQgeW91IHdhbnQgdGhlbSBh
Y2Nlc3NpYmxlIGFzLgojIEVhY2ggZGlzayBlbnRyeSBpcyBvZiB0aGUgZm9ybSBwaHk6VU5B
TUUsREVWLE1PREUKIyB3aGVyZSBVTkFNRSBpcyB0aGUgZGV2aWNlLCBERVYgaXMgdGhlIGRl
dmljZSBuYW1lIHRoZSBkb21haW4gd2lsbCBzZWUsCiMgYW5kIE1PREUgaXMgciBmb3IgcmVh
ZC1vbmx5LCB3IGZvciByZWFkLXdyaXRlLgoKZGlzayA9IFsgJ2ZpbGU6L2h2bS1ndWVzdC93
aW4yMDA4LmltZyxpb2VtdTpoZGEsdycsICdmaWxlOi9odm0tZ3Vlc3Qvd2luMjAwOC5pc28s
aGRjOmNkcm9tLHInIF0gCgojLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLQojIENvbmZpZ3VyZSB0
aGUgYmVoYXZpb3VyIHdoZW4gYSBkb21haW4gZXhpdHMuICBUaGVyZSBhcmUgdGhyZWUgJ3Jl
YXNvbnMnCiMgZm9yIGEgZG9tYWluIHRvIHN0b3A6IHBvd2Vyb2ZmLCByZWJvb3QsIGFuZCBj
cmFzaC4gIEZvciBlYWNoIG9mIHRoZXNlIHlvdQojIG1heSBzcGVjaWZ5OgojCiMgICAiZGVz
dHJveSIsICAgICAgICBtZWFuaW5nIHRoYXQgdGhlIGRvbWFpbiBpcyBjbGVhbmVkIHVwIGFz
IG5vcm1hbDsKIyAgICJyZXN0YXJ0IiwgICAgICAgIG1lYW5pbmcgdGhhdCBhIG5ldyBkb21h
aW4gaXMgc3RhcnRlZCBpbiBwbGFjZSBvZiB0aGUgb2xkCiMgICAgICAgICAgICAgICAgICAg
ICBvbmU7CiMgICAicHJlc2VydmUiLCAgICAgICBtZWFuaW5nIHRoYXQgbm8gY2xlYW4tdXAg
aXMgZG9uZSB1bnRpbCB0aGUgZG9tYWluIGlzCiMgICAgICAgICAgICAgICAgICAgICBtYW51
YWxseSBkZXN0cm95ZWQgKHVzaW5nIHhtIGRlc3Ryb3ksIGZvciBleGFtcGxlKTsgb3IKIyAg
ICJyZW5hbWUtcmVzdGFydCIsIG1lYW5pbmcgdGhhdCB0aGUgb2xkIGRvbWFpbiBpcyBub3Qg
Y2xlYW5lZCB1cCwgYnV0IGlzCiMgICAgICAgICAgICAgICAgICAgICByZW5hbWVkIGFuZCBh
IG5ldyBkb21haW4gc3RhcnRlZCBpbiBpdHMgcGxhY2UuCiMKIyBUaGUgZGVmYXVsdCBpcwoj
CiMgICBvbl9wb3dlcm9mZiA9ICdkZXN0cm95JwojICAgb25fcmVib290ICAgPSAncmVzdGFy
dCcKIyAgIG9uX2NyYXNoICAgID0gJ3Jlc3RhcnQnCiMKIyBGb3IgYmFja3dhcmRzIGNvbXBh
dGliaWxpdHkgd2UgYWxzbyBzdXBwb3J0IHRoZSBkZXByZWNhdGVkIG9wdGlvbiByZXN0YXJ0
CiMKIyByZXN0YXJ0ID0gJ29ucmVib290JyBtZWFucyBvbl9wb3dlcm9mZiA9ICdkZXN0cm95
JwojICAgICAgICAgICAgICAgICAgICAgICAgICAgIG9uX3JlYm9vdCAgID0gJ3Jlc3RhcnQn
CiMgICAgICAgICAgICAgICAgICAgICAgICAgICAgb25fY3Jhc2ggICAgPSAnZGVzdHJveScK
IwojIHJlc3RhcnQgPSAnYWx3YXlzJyAgIG1lYW5zIG9uX3Bvd2Vyb2ZmID0gJ3Jlc3RhcnQn
CiMgICAgICAgICAgICAgICAgICAgICAgICAgICAgb25fcmVib290ICAgPSAncmVzdGFydCcK
IyAgICAgICAgICAgICAgICAgICAgICAgICAgICBvbl9jcmFzaCAgICA9ICdyZXN0YXJ0Jwoj
CiMgcmVzdGFydCA9ICduZXZlcicgICAgbWVhbnMgb25fcG93ZXJvZmYgPSAnZGVzdHJveScK
IyAgICAgICAgICAgICAgICAgICAgICAgICAgICBvbl9yZWJvb3QgICA9ICdkZXN0cm95Jwoj
ICAgICAgICAgICAgICAgICAgICAgICAgICAgIG9uX2NyYXNoICAgID0gJ2Rlc3Ryb3knCgoj
b25fcG93ZXJvZmYgPSAnZGVzdHJveScKI29uX3JlYm9vdCAgID0gJ3Jlc3RhcnQnCiNvbl9j
cmFzaCAgICA9ICdyZXN0YXJ0Jwojb25fcG93ZXJvZmYgPSAncHJlc2VydmUnCiNvbl9yZWJv
b3QgICA9ICdwcmVzZXJ2ZScKI29uX2NyYXNoICAgID0gJ3ByZXNlcnZlJwpvbl9jcmFzaCAg
ICA9ICdkZXN0cm95Jwpvbl9wb3dlcm9mZiA9ICdkZXN0cm95Jwpvbl9yZWJvb3QgPSAnZGVz
dHJveScKCiM9PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09CgojIERldmljZSBNb2RlbAojZGV2aWNl
X21vZGVsID0gJ3FlbXUtZG0nCgojLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0KIyBib290IG9u
IGZsb3BweSAoYSksIGhhcmQgZGlzayAoYykgb3IgQ0QtUk9NIChkKSAKIyBkZWZhdWx0OiBo
YXJkIGRpc2ssIGNkLXJvbSwgZmxvcHB5CmJvb3Q9ImNkIgojYm9vdD0iZGMiCgojLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0KIyAgd3JpdGUgdG8gdGVtcG9yYXJ5IGZpbGVzIGluc3RlYWQg
b2YgZGlzayBpbWFnZSBmaWxlcwojc25hcHNob3Q9MQoKIy0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0KIyBlbmFibGUgU0RMIGxpYnJhcnkgZm9yIGdyYXBoaWNzLCBkZWZhdWx0ID0gMApzZGw9
MAoKIy0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0KIyBlbmFibGUgVk5DIGxpYnJhcnkgZm9yIGdy
YXBoaWNzLCBkZWZhdWx0ID0gMQp2bmM9MQoKIy0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0KIyBh
ZGRyZXNzIHRoYXQgc2hvdWxkIGJlIGxpc3RlbmVkIG9uIGZvciB0aGUgVk5DIHNlcnZlciBp
ZiB2bmMgaXMgc2V0LgojIGRlZmF1bHQgaXMgdG8gdXNlICd2bmMtbGlzdGVuJyBzZXR0aW5n
IGZyb20gL2V0Yy94ZW4veGVuZC1jb25maWcuc3hwCnZuY2xpc3Rlbj0iMC4wLjAuMCIKCiMt
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tCiMgc2V0IFZOQyBkaXNwbGF5IG51bWJlciwgZGVmYXVs
dCA9IGRvbWlkCiN2bmNkaXNwbGF5PTEKCiMtLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tCiMgdHJ5
IHRvIGZpbmQgYW4gdW51c2VkIHBvcnQgZm9yIHRoZSBWTkMgc2VydmVyLCBkZWZhdWx0ID0g
MQojdm5jdW51c2VkPTEKCiMtLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tCiMgZW5hYmxlIHNwYXdu
aW5nIHZuY3ZpZXdlciBmb3IgZG9tYWluJ3MgY29uc29sZQojIChvbmx5IHZhbGlkIHdoZW4g
dm5jPTEpLCBkZWZhdWx0ID0gMAojdm5jY29uc29sZT0wCgojLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLQojIHNldCBwYXNzd29yZCBmb3IgZG9tYWluJ3MgVk5DIGNvbnNvbGUKIyBkZWZhdWx0
IGlzIGRlcGVudHMgb24gdm5jcGFzc3dkIGluIHhlbmQtY29uZmlnLnN4cAp2bmNwYXNzd2Q9
JycKCiMtLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tCiMgbm8gZ3JhcGhpY3MsIHVzZSBzZXJpYWwg
cG9ydApub2dyYXBoaWM9MAoKIy0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0KIyBlbmFibGUgc3Rk
dmdhLCBkZWZhdWx0ID0gMCAodXNlIGNpcnJ1cyBsb2dpYyBkZXZpY2UgbW9kZWwpCnN0ZHZn
YT0xCgojLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0KIyAgIHNlcmlhbCBwb3J0IHJlLWRpcmVj
dCB0byBwdHkgZGVpdmNlLCAvZGV2L3B0cy9uIAojICAgdGhlbiB4bSBjb25zb2xlIG9yIG1p
bmljb20gY2FuIGNvbm5lY3QKI3NlcmlhbD0ncHR5JwoKCiMtLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLQojICAgZW5hYmxlIHNvdW5kIGNhcmQgc3VwcG9ydCwgW3NiMTZ8ZXMxMzcwfGFsbHwu
LiwuLl0sIGRlZmF1bHQgbm9uZQojc291bmRodz0nc2IxNicKCgojLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0KIyAgICBzZXQgdGhlIHJlYWwgdGltZSBjbG9jayB0byBsb2NhbCB0aW1lIFtk
ZWZhdWx0PTAgaS5lLiBzZXQgdG8gdXRjXQojbG9jYWx0aW1lPTEKCgojLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0KIyAgICBzdGFydCBpbiBmdWxsIHNjcmVlbgojZnVsbC1zY3JlZW49MSAg
IAoKCiMtLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLQojICAgRW5hYmxlIFVTQiBzdXBwb3J0IChz
cGVjaWZpYyBkZXZpY2VzIHNwZWNpZmllZCBhdCBydW50aW1lIHRocm91Z2ggdGhlCiMJCQlt
b25pdG9yIHdpbmRvdykKI3VzYj0xCnVzYj0xCgojICAgRW5hYmxlIFVTQiBtb3VzZSBzdXBw
b3J0IChvbmx5IGVuYWJsZSBvbmUgb2YgdGhlIGZvbGxvd2luZywgYG1vdXNlJyBmb3IKIwkJ
CSAgICAgIFBTLzIgcHJvdG9jb2wgcmVsYXRpdmUgbW91c2UsIGB0YWJsZXQnIGZvcgojCQkJ
ICAgICAgYWJzb2x1dGUgbW91c2UpCiN1c2JkZXZpY2U9J21vdXNlJwp1c2JkZXZpY2U9J3Rh
YmxldCcKCgojdmlyaWRpYW49MQojaHBldD0wCg==
--------------060704090007000900070207
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--------------060704090007000900070207--


Y2VzLgojIFJhbmRvbSBNQUNzIGFyZSBhc3NpZ25lZCBpZiBub3QgZ2l2ZW4uCiN2aWYgPSBb
ICd0eXBlPWlvZW11LCBtYWM9MDA6MTY6M2U6MDA6Y2U6YTMsIGJyaWRnZT1icmlkZ2UwLCBt
b2RlbD1uZTJrX3BjaScgXQojIHR5cGU9aW9lbXUgc3BlY2lmeSB0aGUgTklDIGlzIGFuIGlv
ZW11IGRldmljZSBub3QgbmV0ZnJvbnQKdmlmID0gWyAndHlwZT1pb2VtdSwgbWFjPTAwOjE2
OjNlOjAxOmNlOmEzLCBicmlkZ2U9YnJpZGdlMCwgbW9kZWw9ZTEwMDAnICBdCgojLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLQojIERlZmluZSB0aGUgZGlzayBkZXZpY2VzIHlvdSB3YW50IHRo
ZSBkb21haW4gdG8gaGF2ZSBhY2Nlc3MgdG8sIGFuZAojIHdoYXQgeW91IHdhbnQgdGhlbSBh
Y2Nlc3NpYmxlIGFzLgojIEVhY2ggZGlzayBlbnRyeSBpcyBvZiB0aGUgZm9ybSBwaHk6VU5B
TUUsREVWLE1PREUKIyB3aGVyZSBVTkFNRSBpcyB0aGUgZGV2aWNlLCBERVYgaXMgdGhlIGRl
dmljZSBuYW1lIHRoZSBkb21haW4gd2lsbCBzZWUsCiMgYW5kIE1PREUgaXMgciBmb3IgcmVh
ZC1vbmx5LCB3IGZvciByZWFkLXdyaXRlLgoKZGlzayA9IFsgJ2ZpbGU6L2h2bS1ndWVzdC93
aW4yMDA4LmltZyxpb2VtdTpoZGEsdycsICdmaWxlOi9odm0tZ3Vlc3Qvd2luMjAwOC5pc28s
aGRjOmNkcm9tLHInIF0gCgojLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLQojIENvbmZpZ3VyZSB0
aGUgYmVoYXZpb3VyIHdoZW4gYSBkb21haW4gZXhpdHMuICBUaGVyZSBhcmUgdGhyZWUgJ3Jl
YXNvbnMnCiMgZm9yIGEgZG9tYWluIHRvIHN0b3A6IHBvd2Vyb2ZmLCByZWJvb3QsIGFuZCBj
cmFzaC4gIEZvciBlYWNoIG9mIHRoZXNlIHlvdQojIG1heSBzcGVjaWZ5OgojCiMgICAiZGVz
dHJveSIsICAgICAgICBtZWFuaW5nIHRoYXQgdGhlIGRvbWFpbiBpcyBjbGVhbmVkIHVwIGFz
IG5vcm1hbDsKIyAgICJyZXN0YXJ0IiwgICAgICAgIG1lYW5pbmcgdGhhdCBhIG5ldyBkb21h
aW4gaXMgc3RhcnRlZCBpbiBwbGFjZSBvZiB0aGUgb2xkCiMgICAgICAgICAgICAgICAgICAg
ICBvbmU7CiMgICAicHJlc2VydmUiLCAgICAgICBtZWFuaW5nIHRoYXQgbm8gY2xlYW4tdXAg
aXMgZG9uZSB1bnRpbCB0aGUgZG9tYWluIGlzCiMgICAgICAgICAgICAgICAgICAgICBtYW51
YWxseSBkZXN0cm95ZWQgKHVzaW5nIHhtIGRlc3Ryb3ksIGZvciBleGFtcGxlKTsgb3IKIyAg
ICJyZW5hbWUtcmVzdGFydCIsIG1lYW5pbmcgdGhhdCB0aGUgb2xkIGRvbWFpbiBpcyBub3Qg
Y2xlYW5lZCB1cCwgYnV0IGlzCiMgICAgICAgICAgICAgICAgICAgICByZW5hbWVkIGFuZCBh
IG5ldyBkb21haW4gc3RhcnRlZCBpbiBpdHMgcGxhY2UuCiMKIyBUaGUgZGVmYXVsdCBpcwoj
CiMgICBvbl9wb3dlcm9mZiA9ICdkZXN0cm95JwojICAgb25fcmVib290ICAgPSAncmVzdGFy
dCcKIyAgIG9uX2NyYXNoICAgID0gJ3Jlc3RhcnQnCiMKIyBGb3IgYmFja3dhcmRzIGNvbXBh
dGliaWxpdHkgd2UgYWxzbyBzdXBwb3J0IHRoZSBkZXByZWNhdGVkIG9wdGlvbiByZXN0YXJ0
CiMKIyByZXN0YXJ0ID0gJ29ucmVib290JyBtZWFucyBvbl9wb3dlcm9mZiA9ICdkZXN0cm95
JwojICAgICAgICAgICAgICAgICAgICAgICAgICAgIG9uX3JlYm9vdCAgID0gJ3Jlc3RhcnQn
CiMgICAgICAgICAgICAgICAgICAgICAgICAgICAgb25fY3Jhc2ggICAgPSAnZGVzdHJveScK
IwojIHJlc3RhcnQgPSAnYWx3YXlzJyAgIG1lYW5zIG9uX3Bvd2Vyb2ZmID0gJ3Jlc3RhcnQn
CiMgICAgICAgICAgICAgICAgICAgICAgICAgICAgb25fcmVib290ICAgPSAncmVzdGFydCcK
IyAgICAgICAgICAgICAgICAgICAgICAgICAgICBvbl9jcmFzaCAgICA9ICdyZXN0YXJ0Jwoj
CiMgcmVzdGFydCA9ICduZXZlcicgICAgbWVhbnMgb25fcG93ZXJvZmYgPSAnZGVzdHJveScK
IyAgICAgICAgICAgICAgICAgICAgICAgICAgICBvbl9yZWJvb3QgICA9ICdkZXN0cm95Jwoj
ICAgICAgICAgICAgICAgICAgICAgICAgICAgIG9uX2NyYXNoICAgID0gJ2Rlc3Ryb3knCgoj
b25fcG93ZXJvZmYgPSAnZGVzdHJveScKI29uX3JlYm9vdCAgID0gJ3Jlc3RhcnQnCiNvbl9j
cmFzaCAgICA9ICdyZXN0YXJ0Jwojb25fcG93ZXJvZmYgPSAncHJlc2VydmUnCiNvbl9yZWJv
b3QgICA9ICdwcmVzZXJ2ZScKI29uX2NyYXNoICAgID0gJ3ByZXNlcnZlJwpvbl9jcmFzaCAg
ICA9ICdkZXN0cm95Jwpvbl9wb3dlcm9mZiA9ICdkZXN0cm95Jwpvbl9yZWJvb3QgPSAnZGVz
dHJveScKCiM9PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09CgojIERldmljZSBNb2RlbAojZGV2aWNl
X21vZGVsID0gJ3FlbXUtZG0nCgojLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0KIyBib290IG9u
IGZsb3BweSAoYSksIGhhcmQgZGlzayAoYykgb3IgQ0QtUk9NIChkKSAKIyBkZWZhdWx0OiBo
YXJkIGRpc2ssIGNkLXJvbSwgZmxvcHB5CmJvb3Q9ImNkIgojYm9vdD0iZGMiCgojLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0KIyAgd3JpdGUgdG8gdGVtcG9yYXJ5IGZpbGVzIGluc3RlYWQg
b2YgZGlzayBpbWFnZSBmaWxlcwojc25hcHNob3Q9MQoKIy0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0KIyBlbmFibGUgU0RMIGxpYnJhcnkgZm9yIGdyYXBoaWNzLCBkZWZhdWx0ID0gMApzZGw9
MAoKIy0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0KIyBlbmFibGUgVk5DIGxpYnJhcnkgZm9yIGdy
YXBoaWNzLCBkZWZhdWx0ID0gMQp2bmM9MQoKIy0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0KIyBh
ZGRyZXNzIHRoYXQgc2hvdWxkIGJlIGxpc3RlbmVkIG9uIGZvciB0aGUgVk5DIHNlcnZlciBp
ZiB2bmMgaXMgc2V0LgojIGRlZmF1bHQgaXMgdG8gdXNlICd2bmMtbGlzdGVuJyBzZXR0aW5n
IGZyb20gL2V0Yy94ZW4veGVuZC1jb25maWcuc3hwCnZuY2xpc3Rlbj0iMC4wLjAuMCIKCiMt
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tCiMgc2V0IFZOQyBkaXNwbGF5IG51bWJlciwgZGVmYXVs
dCA9IGRvbWlkCiN2bmNkaXNwbGF5PTEKCiMtLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tCiMgdHJ5
IHRvIGZpbmQgYW4gdW51c2VkIHBvcnQgZm9yIHRoZSBWTkMgc2VydmVyLCBkZWZhdWx0ID0g
MQojdm5jdW51c2VkPTEKCiMtLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tCiMgZW5hYmxlIHNwYXdu
aW5nIHZuY3ZpZXdlciBmb3IgZG9tYWluJ3MgY29uc29sZQojIChvbmx5IHZhbGlkIHdoZW4g
dm5jPTEpLCBkZWZhdWx0ID0gMAojdm5jY29uc29sZT0wCgojLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLQojIHNldCBwYXNzd29yZCBmb3IgZG9tYWluJ3MgVk5DIGNvbnNvbGUKIyBkZWZhdWx0
IGlzIGRlcGVudHMgb24gdm5jcGFzc3dkIGluIHhlbmQtY29uZmlnLnN4cAp2bmNwYXNzd2Q9
JycKCiMtLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tCiMgbm8gZ3JhcGhpY3MsIHVzZSBzZXJpYWwg
cG9ydApub2dyYXBoaWM9MAoKIy0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0KIyBlbmFibGUgc3Rk
dmdhLCBkZWZhdWx0ID0gMCAodXNlIGNpcnJ1cyBsb2dpYyBkZXZpY2UgbW9kZWwpCnN0ZHZn
YT0xCgojLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0KIyAgIHNlcmlhbCBwb3J0IHJlLWRpcmVj
dCB0byBwdHkgZGVpdmNlLCAvZGV2L3B0cy9uIAojICAgdGhlbiB4bSBjb25zb2xlIG9yIG1p
bmljb20gY2FuIGNvbm5lY3QKI3NlcmlhbD0ncHR5JwoKCiMtLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLQojICAgZW5hYmxlIHNvdW5kIGNhcmQgc3VwcG9ydCwgW3NiMTZ8ZXMxMzcwfGFsbHwu
LiwuLl0sIGRlZmF1bHQgbm9uZQojc291bmRodz0nc2IxNicKCgojLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0KIyAgICBzZXQgdGhlIHJlYWwgdGltZSBjbG9jayB0byBsb2NhbCB0aW1lIFtk
ZWZhdWx0PTAgaS5lLiBzZXQgdG8gdXRjXQojbG9jYWx0aW1lPTEKCgojLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0KIyAgICBzdGFydCBpbiBmdWxsIHNjcmVlbgojZnVsbC1zY3JlZW49MSAg
IAoKCiMtLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLQojICAgRW5hYmxlIFVTQiBzdXBwb3J0IChz
cGVjaWZpYyBkZXZpY2VzIHNwZWNpZmllZCBhdCBydW50aW1lIHRocm91Z2ggdGhlCiMJCQlt
b25pdG9yIHdpbmRvdykKI3VzYj0xCnVzYj0xCgojICAgRW5hYmxlIFVTQiBtb3VzZSBzdXBw
b3J0IChvbmx5IGVuYWJsZSBvbmUgb2YgdGhlIGZvbGxvd2luZywgYG1vdXNlJyBmb3IKIwkJ
CSAgICAgIFBTLzIgcHJvdG9jb2wgcmVsYXRpdmUgbW91c2UsIGB0YWJsZXQnIGZvcgojCQkJ
ICAgICAgYWJzb2x1dGUgbW91c2UpCiN1c2JkZXZpY2U9J21vdXNlJwp1c2JkZXZpY2U9J3Rh
YmxldCcKCgojdmlyaWRpYW49MQojaHBldD0wCg==
--------------060704090007000900070207
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--------------060704090007000900070207--


From xen-devel-bounces@lists.xen.org Fri Aug 03 15:22:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 15:22:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxJho-0007Ew-Ng; Fri, 03 Aug 2012 15:22:24 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SxJhn-0007Ep-Fo
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 15:22:23 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1344007335!10663265!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12013 invoked from network); 3 Aug 2012 15:22:16 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-27.messagelabs.com with SMTP;
	3 Aug 2012 15:22:16 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 03 Aug 2012 16:22:23 +0100
Message-Id: <501C08F20200007800092920@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 03 Aug 2012 16:22:58 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Frediano Ziglio" <frediano.ziglio@citrix.com>
References: <7CE799CC0E4DE04B88D5FDF226E18AC2CDFFB08D0D@LONPMAILBOX01.citrite.net>
In-Reply-To: <7CE799CC0E4DE04B88D5FDF226E18AC2CDFFB08D0D@LONPMAILBOX01.citrite.net>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Increment buffer used to read first boot
 sector in order to accomodate space for 4k sector
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 03.08.12 at 16:50, Frediano Ziglio <frediano.ziglio@citrix.com> wrote:
> If a 4k disk is used for first BIOS disk loader corrupt itself.

That would only matter if such sector sizes are really permitted by
the specification (which I doubt they are for the standard, old-style
INT13 functions; it's a different story for functions 42 and 43, where
the caller can be expected to call function 48 first).

> This patch increase sector buffer in order to avoid this overflow

And if we indeed need to adjust for this, then let's fix this properly:
Don't just increase the buffer size, but also check that the sector
size reported actually fits. That may require calling Fn48 first,
before doing the actual read.

> --- a/xen/arch/x86/boot/edd.S
> +++ b/xen/arch/x86/boot/edd.S
> @@ -154,4 +154,4 @@ boot_mbr_signature_nr:
>  boot_mbr_signature:
>          .fill   EDD_MBR_SIG_MAX*8,1,0
>  boot_edd_info:
> -        .fill   512,1,0                         # big enough for a disc sector
> +        .fill   4096,1,0                         # big enough for a disc sector

Also I wonder whether it wouldn't be smarter to re-use the
wakeup stack (which is already 4k in size), and shrink this buffer
to the maximum size ever used without reading sectors into it
(EDD_INFO_MAX*(EDDEXTSIZE+EDDPARMSIZE)).

> --- a/xen/arch/x86/boot/trampoline.S
> +++ b/xen/arch/x86/boot/trampoline.S
> @@ -224,6 +224,6 @@ skip_realmode:
>  rm_idt: .word   256*4-1, 0, 0
>  
>  #include "mem.S"
> -#include "edd.S"
>  #include "video.S"
>  #include "wakeup.S"
> +#include "edd.S"

Finally, you should also explain why this change is needed.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 15:28:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 15:28:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxJn3-0007OO-Ej; Fri, 03 Aug 2012 15:27:49 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SxJn2-0007OG-FT
	for xen-devel@lists.xensource.com; Fri, 03 Aug 2012 15:27:48 +0000
Received: from [85.158.138.51:26443] by server-10.bemta-3.messagelabs.com id
	8A/40-21993-3FDEB105; Fri, 03 Aug 2012 15:27:47 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-14.tower-174.messagelabs.com!1344007663!23940255!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc5NTM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21462 invoked from network); 3 Aug 2012 15:27:43 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-14.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 15:27:43 -0000
X-IronPort-AV: E=Sophos;i="4.77,706,1336348800"; d="scan'208";a="13843752"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Aug 2012 15:27:43 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 3 Aug 2012 16:27:43 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1SxJmw-0003Pu-Qs;
	Fri, 03 Aug 2012 15:27:42 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1SxJmw-0001dd-Hb;
	Fri, 03 Aug 2012 16:27:42 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13542-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 3 Aug 2012 16:27:42 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 13542: trouble: blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13542 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13542/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 13525
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 13525
 build-i386                    2 host-install(2)         broken REGR. vs. 13525
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 13525
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 13525
 build-amd64                   2 host-install(2)         broken REGR. vs. 13525

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  f8f8912b3de0
baseline version:
 xen                  fa34499e8f6c

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Joe Jin <joe.jin@oracle.com>
  Keir Fraser <keir@xen.org>
  Santosh Jodh <santosh.jodh@citrix.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23331:f8f8912b3de0
tag:         tip
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Fri Aug 03 10:43:24 2012 +0100
    
    nestedhvm: fix nested page fault build error on 32-bit
    
        cc1: warnings being treated as errors
        hvm.c: In function 'hvm_hap_nested_page_fault':
        hvm.c:1282: error: passing argument 2 of
        'nestedhvm_hap_nested_page_fault' from incompatible pointer type
        /local/scratch/ianc/devel/xen-unstable.hg/xen/include/asm/hvm/nestedhvm.h:55:
        note: expected 'paddr_t *' but argument is of type 'long unsigned
        int *'
    
    hvm_hap_nested_page_fault takes an unsigned long gpa and passes &gpa
    to nestedhvm_hap_nested_page_fault which takes a paddr_t *. Since both
    of the callers of hvm_hap_nested_page_fault (svm_do_nested_pgfault and
    ept_handle_violation) actually have the gpa which they pass to
    hvm_hap_nested_page_fault as a paddr_t I think it makes sense to
    change the argument to hvm_hap_nested_page_fault.
    
    The other user of gpa in hvm_hap_nested_page_fault is a call to
    p2m_mem_access_check, which currently also takes a paddr_t gpa but I
    think a paddr_t is appropriate there too.
    
    Jan points out that this is also an issue for >4GB guests on the 32
    bit hypervisor.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Committed-by: Ian Campbell <ian.campbell@citrix.com>
    xen-unstable changeset:   25724:612898732e66
    xen-unstable date:        Fri Aug 03 09:54:17 2012 +0100
    Backported-by: Keir Fraser <keir@xen.org>
    
    
changeset:   23330:5a65d6a1aab7
user:        Santosh Jodh <santosh.jodh@citrix.com>
date:        Fri Aug 03 10:39:13 2012 +0100
    
    Intel VT-d: Dump IOMMU supported page sizes
    
    Signed-off-by: Santosh Jodh <santosh.jodh@citrix.com>
    xen-unstable changeset:   25725:9ad379939b78
    xen-unstable date:        Fri Aug 03 10:38:04 2012 +0100
    
    
changeset:   23329:fa34499e8f6c
user:        Jan Beulich <jbeulich@suse.com>
date:        Mon Jul 30 13:38:58 2012 +0100
    
    x86: fix off-by-one in nr_irqs_gsi calculation
    
    highest_gsi() returns the last valid GSI, not a count.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Joe Jin <joe.jin@oracle.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset:   25688:e6266fc76d08
    xen-unstable date:        Fri Jul 27 12:22:13 2012 +0200
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


flight 13542 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13542/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 13525
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 13525
 build-i386                    2 host-install(2)         broken REGR. vs. 13525
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 13525
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 13525
 build-amd64                   2 host-install(2)         broken REGR. vs. 13525

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  f8f8912b3de0
baseline version:
 xen                  fa34499e8f6c

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Joe Jin <joe.jin@oracle.com>
  Keir Fraser <keir@xen.org>
  Santosh Jodh <santosh.jodh@citrix.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23331:f8f8912b3de0
tag:         tip
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Fri Aug 03 10:43:24 2012 +0100
    
    nestedhvm: fix nested page fault build error on 32-bit
    
        cc1: warnings being treated as errors
        hvm.c: In function 'hvm_hap_nested_page_fault':
        hvm.c:1282: error: passing argument 2 of
        'nestedhvm_hap_nested_page_fault' from incompatible pointer type
        /local/scratch/ianc/devel/xen-unstable.hg/xen/include/asm/hvm/nestedhvm.h:55:
        note: expected 'paddr_t *' but argument is of type 'long unsigned
        int *'
    
    hvm_hap_nested_page_fault takes an unsigned long gpa and passes &gpa
    to nestedhvm_hap_nested_page_fault which takes a paddr_t *. Since both
    of the callers of hvm_hap_nested_page_fault (svm_do_nested_pgfault and
    ept_handle_violation) actually have the gpa which they pass to
    hvm_hap_nested_page_fault as a paddr_t I think it makes sense to
    change the argument to hvm_hap_nested_page_fault.
    
    The other user of gpa in hvm_hap_nested_page_fault is a call to
    p2m_mem_access_check, which currently also takes a paddr_t gpa but I
    think a paddr_t is appropriate there too.
    
    Jan points out that this is also an issue for >4GB guests on the 32
    bit hypervisor.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Committed-by: Ian Campbell <ian.campbell@citrix.com>
    xen-unstable changeset:   25724:612898732e66
    xen-unstable date:        Fri Aug 03 09:54:17 2012 +0100
    Backported-by: Keir Fraser <keir@xen.org>
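    The incompatible-pointer error above comes down to type widths: on a
    32-bit PAE build, physical addresses (paddr_t) are 64-bit while
    unsigned long is only 32-bit, so passing &gpa of an unsigned long
    where a paddr_t * is expected is wrong, not just a warning. A minimal
    illustrative sketch (hypothetical types and names, not the Xen
    source):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical type mirroring a 32-bit PAE build: physical addresses
 * are 64-bit even though unsigned long is 32-bit. */
typedef uint64_t paddr_t;

/* Callee writes back through a paddr_t pointer, in the way
 * nestedhvm_hap_nested_page_fault is described above. */
static void nested_fault(paddr_t *gpa)
{
    *gpa |= (paddr_t)1 << 32;   /* touch bits above 4GB */
}

/* Correct: keep gpa as paddr_t all the way through, as the patch does.
 * Had gpa been unsigned long here, &gpa would point at a 32-bit object
 * and the write above would clobber adjacent stack. */
static paddr_t handle_fault(paddr_t gpa)
{
    nested_fault(&gpa);
    return gpa;
}
```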
    
    
changeset:   23330:5a65d6a1aab7
user:        Santosh Jodh <santosh.jodh@citrix.com>
date:        Fri Aug 03 10:39:13 2012 +0100
    
    Intel VT-d: Dump IOMMU supported page sizes
    
    Signed-off-by: Santosh Jodh <santosh.jodh@citrix.com>
    xen-unstable changeset:   25725:9ad379939b78
    xen-unstable date:        Fri Aug 03 10:38:04 2012 +0100
    
    
changeset:   23329:fa34499e8f6c
user:        Jan Beulich <jbeulich@suse.com>
date:        Mon Jul 30 13:38:58 2012 +0100
    
    x86: fix off-by-one in nr_irqs_gsi calculation
    
    highest_gsi() returns the last valid GSI, not a count.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Joe Jin <joe.jin@oracle.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset:   25688:e6266fc76d08
    xen-unstable date:        Fri Jul 27 12:22:13 2012 +0200
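    The off-by-one can be illustrated with a minimal sketch (assumed
    values and names, not the Xen source): if highest_gsi() returns the
    last valid GSI number, the count of GSIs is that value plus one.

```c
#include <assert.h>

/* Illustrative: e.g. a single 24-pin IO-APIC exposing GSIs 0..23. */
static unsigned int highest_gsi(void) { return 23; }

/* Buggy: treats the last valid GSI as a count, losing one GSI. */
static unsigned int nr_irqs_gsi_buggy(void) { return highest_gsi(); }

/* Fixed: the number of GSIs is the last valid GSI plus one. */
static unsigned int nr_irqs_gsi_fixed(void) { return highest_gsi() + 1; }
```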
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 16:28:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 16:28:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxKiz-0000rw-6g; Fri, 03 Aug 2012 16:27:41 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@eu.citrix.com>) id 1SxKix-0000rr-Il
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 16:27:39 +0000
Received: from [85.158.138.51:38594] by server-3.bemta-3.messagelabs.com id
	3A/32-08301-9FBFB105; Fri, 03 Aug 2012 16:27:37 +0000
X-Env-Sender: George.Dunlap@eu.citrix.com
X-Msg-Ref: server-5.tower-174.messagelabs.com!1344011255!30379536!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjIxMTM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22562 invoked from network); 3 Aug 2012 16:27:36 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 16:27:36 -0000
X-IronPort-AV: E=Sophos;i="4.77,707,1336363200"; d="scan'208";a="33483327"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Aug 2012 12:27:33 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 3 Aug 2012 12:27:33 -0400
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1SxKiq-0003cV-Pr;
	Fri, 03 Aug 2012 17:27:32 +0100
Message-ID: <501BFB37.4010307@eu.citrix.com>
Date: Fri, 3 Aug 2012 17:24:23 +0100
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120714 Thunderbird/14.0
MIME-Version: 1.0
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
References: <20120802211157.GG8228@US-SEA-R8XVZTX>
	<20507.58318.416753.917851@mariner.uk.xensource.com>
In-Reply-To: <20507.58318.416753.917851@mariner.uk.xensource.com>
Cc: Lars Kurth <lars.kurth@xen.org>, Matt Wilson <msw@amazon.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] lists.xen.org Mailman configuration and DKIM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 03/08/12 15:44, Ian Jackson wrote:
> Matt Wilson writes ("[Xen-devel] lists.xen.org Mailman configuration and DKIM"):
>> Several folks have let me know that my messages sent via lists.xen.org
>> are marked as spam / spoofed, especially when using Gmail to receive
>> Xen mail. I believe this is because outbound Amazon email contains a
>> DKIM signature. When Mailman modifies my message and re-sends it, the
>> DKIM signature is invalidated [1].
>>
>> To work around this, Mailman 2.1.10 and later contain a configuration
>> variable called "REMOVE_DKIM_HEADERS" [2]. Perhaps if this were turned
>> on we'd work around the problem.
> ...
>> [1] http://wiki.list.org/display/DEV/DKIM
>> [2] https://bugs.launchpad.net/mailman/+bug/557493
> Having checked RFC4871 I think it is clear that according to the
> standards
>    - Mailman SHOULD NOT [1] strip DKIM-Signature
>    - No-one should treat a message with an invalid DKIM signature
>      differently from a message with no DKIM signature at all [2]
It's actually pretty likely that gmail would also reject the mail if it 
had no DKIM signature, isn't it?  In which case stripping the signature 
wouldn't really help.

  -George
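The invalidation Matt describes can be sketched with a toy checksum
(illustrative only; real DKIM signs a SHA-256 digest of the canonicalized
body, per RFC 4871/6376): a list footer appended by Mailman changes the
signed bytes, so the digest recorded in the DKIM-Signature header no
longer matches.

```c
#include <assert.h>
#include <stdint.h>

/* Toy stand-in for DKIM's body hash (the real thing is SHA-256 over a
 * canonicalized body). The only point demonstrated: any bytes appended
 * to the signed body -- such as a mailing-list footer -- change the
 * digest, so the original signature fails to verify. */
static uint64_t toy_body_digest(const char *body)
{
    uint64_t sum = 0;
    for (; *body; body++)
        sum += (uint8_t)*body;   /* appending bytes always changes this */
    return sum;
}
```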

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 16:40:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 16:40:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxKum-00018G-Iw; Fri, 03 Aug 2012 16:39:52 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <halcyon1981@gmail.com>) id 1SxKul-00018B-3R
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 16:39:51 +0000
Received: from [85.158.143.35:41308] by server-1.bemta-4.messagelabs.com id
	1A/84-24392-6DEFB105; Fri, 03 Aug 2012 16:39:50 +0000
X-Env-Sender: halcyon1981@gmail.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1344011987!6271879!1
X-Originating-IP: [74.125.82.51]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20988 invoked from network); 3 Aug 2012 16:39:48 -0000
Received: from mail-wg0-f51.google.com (HELO mail-wg0-f51.google.com)
	(74.125.82.51)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 16:39:48 -0000
Received: by wgbed3 with SMTP id ed3so580034wgb.32
	for <xen-devel@lists.xen.org>; Fri, 03 Aug 2012 09:39:47 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=9Iyh8xVKI5tbZho6IHvmQXpkEqDz9VXm1mc5lEj891E=;
	b=Nmcsyrh2sBxJiUloKhZhlIfHCHCFcDsuv2xB9Zhbv30k2rQZacM8Do1IzYN4FYlAOv
	jLyFCJBfOtKwi/X/rDNOaKe/8m5Nelgi42dr1MS3VgZJGc2TQdViOkV/45bOH71yj29R
	rQch5qjxnXxK5Kv23xM/DE8LdskEeI1HQi9AwYDpt6j5cdMf3rtUThkHxHzXN68Ve/+D
	x26j0GMXZw7sBjZlduNTxodqaYhkGz00RrgR3tlyphB2/5ummsgxErkx5eAkRJdVnzbc
	SEs4YGBsnCkYSq8LoabagaVTaRDmxJxTaNoGtpp9TZwbsWJeAeS2esUioAy3aD2pXNJK
	YhMQ==
MIME-Version: 1.0
Received: by 10.180.83.106 with SMTP id p10mr14593879wiy.21.1344011987590;
	Fri, 03 Aug 2012 09:39:47 -0700 (PDT)
Received: by 10.223.83.9 with HTTP; Fri, 3 Aug 2012 09:39:47 -0700 (PDT)
Date: Fri, 3 Aug 2012 09:39:47 -0700
Message-ID: <CANKx4w8GQnbw-1apA0Dk-0p=fBUaCM4fXA8nh+84RP97Di5K0Q@mail.gmail.com>
From: David Erickson <halcyon1981@gmail.com>
To: xen-devel@lists.xen.org
Subject: [Xen-devel] xen_platform_pci
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I tried setting xen_platform_pci=0 on my test Ubuntu 11.10 livecd VM,
hoping it would run in HVM-only mode, but the guest's logs showed it
detecting the Xen host and loading the PV netfront driver (after
modprobe).  Is this expected behavior?  Is there no way to force HVM
only, rather than PVHVM or full PV?

Setup:
Xen Unstable
Qemu Upstream

Thanks,
David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 16:44:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 16:44:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxKyV-0001Ei-7E; Fri, 03 Aug 2012 16:43:43 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <frediano.ziglio@citrix.com>) id 1SxKyS-0001EW-V7
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 16:43:41 +0000
Received: from [85.158.138.51:12136] by server-11.bemta-3.messagelabs.com id
	04/56-00679-CBFFB105; Fri, 03 Aug 2012 16:43:40 +0000
X-Env-Sender: frediano.ziglio@citrix.com
X-Msg-Ref: server-7.tower-174.messagelabs.com!1344012219!21383274!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc5NTM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2189 invoked from network); 3 Aug 2012 16:43:39 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-7.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 16:43:39 -0000
X-IronPort-AV: E=Sophos;i="4.77,707,1336348800"; d="scan'208";a="13844611"
Received: from lonpmailmx02.citrite.net ([10.30.203.163])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Aug 2012 16:43:38 +0000
Received: from LONPMAILBOX01.citrite.net ([10.30.224.160]) by
	LONPMAILMX02.citrite.net ([10.30.203.163]) with mapi; Fri, 3 Aug 2012
	17:43:39 +0100
From: Frediano Ziglio <frediano.ziglio@citrix.com>
To: "JBeulich@suse.com" <JBeulich@suse.com>
Date: Fri, 3 Aug 2012 17:43:37 +0100
Thread-Topic: [Xen-devel] [PATCH] Increment buffer used to read first boot
	sector in order to accomodate space for 4k sector
Thread-Index: Ac1xlyFaOswY2d3USeaGhxY5O1Otfw==
Message-ID: <7CE799CC0E4DE04B88D5FDF226E18AC2CDFFB08D0E@LONPMAILBOX01.citrite.net>
References: <7CE799CC0E4DE04B88D5FDF226E18AC2CDFFB08D0D@LONPMAILBOX01.citrite.net>
	<501C08F20200007800092920@nat28.tlf.novell.com>
In-Reply-To: <501C08F20200007800092920@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Increment buffer used to read first boot
 sector in order to accomodate space for 4k sector
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-08-03 at 16:22 +0100, Jan Beulich wrote:
> >>> On 03.08.12 at 16:50, Frediano Ziglio <frediano.ziglio@citrix.com> wrote:
> > If a 4k disk is used for first BIOS disk loader corrupt itself.
> 
> If such is really permitted by the specification (which I doubt it
> is for the standard, old-style INT13 functions - it's a different
> story for functions 42 and 43, where the caller can be expected
> to call function 48 first).
> 

I don't know. The specification always speaks about the sector size, and
for int13/5 (floppy format) you can specify different sector sizes (up
to 1024).

> > This patch increase sector buffer in order to avoid this overflow
> 
> And if we indeed need to adjust for this, then let's fix this properly:
> Don't just increase the buffer size, but also check that the sector
> size reported actually fits. That may require calling Fn48 first,
> before doing the actual read.
> 

Or read to a memory location where we are sure there is enough space
(something like 2000:0000).

> > --- a/xen/arch/x86/boot/edd.S
> > +++ b/xen/arch/x86/boot/edd.S
> > @@ -154,4 +154,4 @@ boot_mbr_signature_nr:
> >  boot_mbr_signature:
> >          .fill   EDD_MBR_SIG_MAX*8,1,0
> >  boot_edd_info:
> > -        .fill   512,1,0                         # big enough for a disc sector
> > +        .fill   4096,1,0                         # big enough for a disc sector
> 
> Also I wonder whether it wouldn't be more smart to re-use the
> wakeup stack (which is already 4k in size), and shrink this buffer
> to the maximum size ever used without reading sectors into it
> (EDD_INFO_MAX*(EDDEXTSIZE+EDDPARMSIZE)).
> 

Yes, reusing this buffer could be useful. It could also be useful to put
it at the end of the trampoline code, to try to avoid future problems if
the sector size grows.

> > --- a/xen/arch/x86/boot/trampoline.S
> > +++ b/xen/arch/x86/boot/trampoline.S
> > @@ -224,6 +224,6 @@ skip_realmode:
> >  rm_idt: .word   256*4-1, 0, 0
> >  
> >  #include "mem.S"
> > -#include "edd.S"
> >  #include "video.S"
> >  #include "wakeup.S"
> > +#include "edd.S"
> 
> Finally, you should also explain why this change is needed.
> 

This is to move the buffer to the end and avoid overflowing into other
code.

> Jan
> 

Frediano

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 16:49:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 16:49:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxL48-0001QG-0k; Fri, 03 Aug 2012 16:49:32 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SxL46-0001QB-Oh
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 16:49:30 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1344012564!10675395!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc5NTM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 599 invoked from network); 3 Aug 2012 16:49:24 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 16:49:24 -0000
X-IronPort-AV: E=Sophos;i="4.77,707,1336348800"; d="scan'208";a="13844682"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Aug 2012 16:49:24 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Fri, 3 Aug 2012
	17:49:24 +0100
Message-ID: <1344012563.21372.55.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: David Erickson <halcyon1981@gmail.com>
Date: Fri, 3 Aug 2012 17:49:23 +0100
In-Reply-To: <CANKx4w8GQnbw-1apA0Dk-0p=fBUaCM4fXA8nh+84RP97Di5K0Q@mail.gmail.com>
References: <CANKx4w8GQnbw-1apA0Dk-0p=fBUaCM4fXA8nh+84RP97Di5K0Q@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Anthony Perard <anthony.perard@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] xen_platform_pci
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-08-03 at 17:39 +0100, David Erickson wrote:
> I tried setting xen_platform_pci=0 on my test ubuntu 11.10 livecd VM
> hoping it would run in HVM only mode, but the guest's logs showed it
> detecting the Xen host and loaded the PV netfront driver (after
> modprobe).  Is this expected behavior?  Is there no way to force HVM
> only and not PVHVM or total PV?
> 
> Setup:
> Xen Unstable
> Qemu Upstream

It looks like libxl doesn't propagate the xen_platform_pci setting to
upstream qemu (it only does so for qemu-traditional). Anthony -- is this
something you can look into, please?
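For reference, the option being discussed is a boolean in the xl domain
configuration; an illustrative HVM config fragment (option names as in
xl.cfg, the specific values here are just an example):

```
builder = "hvm"
device_model_version = "qemu-xen"   # upstream qemu
xen_platform_pci = 0                # request no Xen platform PCI device
```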


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 16:50:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 16:50:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxL56-0001TL-F0; Fri, 03 Aug 2012 16:50:32 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <halcyon1981@gmail.com>) id 1SxL54-0001TB-W4
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 16:50:31 +0000
Received: from [85.158.138.51:4607] by server-4.bemta-3.messagelabs.com id
	78/A2-29069-6510C105; Fri, 03 Aug 2012 16:50:30 +0000
X-Env-Sender: halcyon1981@gmail.com
X-Msg-Ref: server-4.tower-174.messagelabs.com!1344012627!30274570!1
X-Originating-IP: [209.85.212.173]
X-SpamReason: No, hits=0.6 required=7.0 tests=MAILTO_TO_SPAM_ADDR,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1191 invoked from network); 3 Aug 2012 16:50:27 -0000
Received: from mail-wi0-f173.google.com (HELO mail-wi0-f173.google.com)
	(209.85.212.173)
	by server-4.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 16:50:27 -0000
Received: by wibhm6 with SMTP id hm6so5226885wib.14
	for <xen-devel@lists.xen.org>; Fri, 03 Aug 2012 09:50:27 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=wgEhuUgtLEWZxBIxHDUIMqF3J4+pOETbVFly+9wUg+4=;
	b=HaGlilGqZ0pMem5yIKzz+d4HaT9HQ8llGqj44fOSnC76n52POdGMCvwFgPZUpRB3tj
	yk9wXSoMVKAQVQi/sQxLzQ+7KHTiJMgyLEnEI9RmTDQSVwYx7ALD8IilEgavWjLOphGv
	xowXtlUa8wd6g3Gxr6oIOAJoP7fpRU6DvB4OaOjBz1SskhmCxt9cPL8dNVWffc/Vjt/U
	BpaQiJ2mNEAQM0m1qLeripeaQ5e/m+G0N87viMscOFkDmrujKEsAoFTdlG4fjterZe31
	vdXnqzE5wA1EI6NwXo7b+vZFEekroaX38NoMVR6/YAVtY5tK3zr2WTNYRwlri/VekmB+
	tsSg==
MIME-Version: 1.0
Received: by 10.217.0.145 with SMTP id l17mr1175373wes.133.1344012627355; Fri,
	03 Aug 2012 09:50:27 -0700 (PDT)
Received: by 10.223.83.9 with HTTP; Fri, 3 Aug 2012 09:50:27 -0700 (PDT)
In-Reply-To: <CANKx4w8Gddh=GAz=mL4Xccykc9_kqi7dS2ER0=-=NxxwWo6VHA@mail.gmail.com>
References: <CACA08Dwza8TjixUYz1o6PWzfaPwfz1jMiGVNyZE9KcJZ_Ln2oA@mail.gmail.com>
	<CANKx4w9XJGnD5u4hN+RhyOy7OUmF5nh-G8549ER1W19jy__u3Q@mail.gmail.com>
	<CANKx4w8Azvf1s0aqGSRSn9_f-Zq_uL_CAN2rfh5kd=udsT2YDg@mail.gmail.com>
	<CACA08Dw0B3xK_Lr6WQiCSgDewt9ED6GPm-_KaTZzQarsELPrcg@mail.gmail.com>
	<CANKx4w8Y0Uddmre-nsm0TT+GgP3AMiCuoXfbc2-2L+PG3TUgbg@mail.gmail.com>
	<CANKx4w9O0bW=GJknP=hnYW5nUAas8AyKhDVjXGMmgeBkRBon5w@mail.gmail.com>
	<CAHyyzzTMX4wcg+DNEL91dWmo0R-6oGJLNH5O50bSUeHkmTWAwQ@mail.gmail.com>
	<CANKx4w9BVBrmQwaCtJr6CsZ6OK=jV+pb35QtNLHp+Jp6=739aA@mail.gmail.com>
	<1342682536.18848.50.camel@dagon.hellion.org.uk>
	<alpine.DEB.2.02.1207191258580.23783@kaball.uk.xensource.com>
	<CANKx4w-UKwt9H45N9Kbox708OBaQEqG3niELp593h2P7oc+pjw@mail.gmail.com>
	<CANKx4w9=X7iXvQBrORgh6aNHEQ--+4QeFwXDBG0HV-DYk7HpxQ@mail.gmail.com>
	<alpine.DEB.2.02.1207311119500.4645@kaball.uk.xensource.com>
	<CANKx4w_9sX9hsgSkwV6Vb8xSVQ_4n+_sFjfXSDEPHQ1seH=9=g@mail.gmail.com>
	<alpine.DEB.2.02.1208011150330.4645@kaball.uk.xensource.com>
	<CANKx4w9AUuAUWtnW59NL7az8VJ-jLJn984hMqvGVwTEfVAvs1A@mail.gmail.com>
	<CANKx4w8Gddh=GAz=mL4Xccykc9_kqi7dS2ER0=-=NxxwWo6VHA@mail.gmail.com>
Date: Fri, 3 Aug 2012 09:50:27 -0700
Message-ID: <CANKx4w-ofYb+2nLzazuh7J3XcFx1361pitGQqBy7FOTbLN2kFg@mail.gmail.com>
From: David Erickson <halcyon1981@gmail.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: Anthony Perard <anthony.perard@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	jacek burghardt <jaceksburghardt@gmail.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] LSI SAS2008 Option Rom Failure
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Aug 2, 2012 at 10:38 AM, David Erickson <halcyon1981@gmail.com> wrote:
> On Wed, Aug 1, 2012 at 10:52 AM, David Erickson <halcyon1981@gmail.com> wrote:
>> On Wed, Aug 1, 2012 at 4:13 AM, Stefano Stabellini
>> <stefano.stabellini@eu.citrix.com> wrote:
>>> On Tue, 31 Jul 2012, David Erickson wrote:
>>>> On Tue, Jul 31, 2012 at 4:39 AM, Stefano Stabellini
>>>> <stefano.stabellini@eu.citrix.com> wrote:
>>>> > On Tue, 31 Jul 2012, David Erickson wrote:
>>>> >> Just got back in town, following up on the prior discussion.  I
>>>> >> successfully compiled the latest code (25688 and qemu upstream
>>>> >> 5e3bc7144edd6e4fa2824944e5eb16c28197dd5a), but am still having
>>>> >> problems during initialization of the card in the guest, in particular
>>>> >> the unsupported delivery mode 3 which seems to cause interrupt related
>>>> >> problems during init.  I've again attached the qemu-dm-log, and xl
>>>> >> dmesg log files, and additionally screenshots of the guest dmesg and
>>>> >> also for comparison starting the same livecd natively on the box.
>>>> >
>>>> > "unsupported delivery mode 3" means that the Linux guest is trying to
>>>> > remap the MSI onto an event channel but Xen is still trying to deliver
>>>> > the MSI using the emulated code path anyway.
>>>> >
>>>> > Adding
>>>> >
>>>> > #define XEN_PT_LOGGING_ENABLED 1
>>>> >
>>>> > at the top of hw/xen_pt.h and posting the additional QEMU logs could
>>>> > be helpful.
>>>> >
>>>> > The full Xen logs might also be useful. I would add some more tracing to
>>>> > the hypervisor too:
>>>> >
>>>> > diff --git a/xen/drivers/passthrough/io.c b/xen/drivers/passthrough/io.c
>>>> > index b5975d1..08f4ab7 100644
>>>> > --- a/xen/drivers/passthrough/io.c
>>>> > +++ b/xen/drivers/passthrough/io.c
>>>> > @@ -474,6 +474,11 @@ static void hvm_pci_msi_assert(
>>>> >  {
>>>> >      struct pirq *pirq = dpci_pirq(pirq_dpci);
>>>> >
>>>> > +    printk("DEBUG %s pirq=%d hvm_domain_use_pirq=%d emuirq=%d\n", __func__,
>>>> > +            pirq->pirq,
>>>> > +            hvm_domain_use_pirq(d, pirq),
>>>> > +            pirq->arch.hvm.emuirq);
>>>> > +
>>>> >      if ( hvm_domain_use_pirq(d, pirq) )
>>>> >          send_guest_pirq(d, pirq);
>>>> >      else
>>>>
>>>> Hi Stefano-
>>>> I made the modifications (it looks like that DEFINE hasn't been used
>>>> in awhile, caused a few compilation issues, I had to prefix most of
>>>> the logged variables with s->hostaddr.), and am attaching the
>>>> qemu-dm-ubuntu.log and dmesg from xl.  You referred to full Xen logs,
>>>> where do I find those at?
>>>
>>> Thanks for the logs!
>>> You can get the full Xen logs from the serial console but you can also
>>> grab the last few lines with "xl dmesg", like you did and it seems to be
>>> enough in this case.
>>>
>>>
>>> The initial MSI remapping has been done:
>>>
>>> [00:05.0] msi_msix_setup: requested pirq 4 for MSI-X (vec: 0, entry: 0)
>>> [00:05.0] msi_msix_update: Updating MSI-X with pirq 4 gvec 0 gflags 0x3037 (entry: 0)
>>>
>>> But the guest is not issuing the EVTCHNOP_bind_pirq hypercall that is
>>> necessary to be able to receive event notifications (emuirq=-1 in the
>>> Xen logs).
>>>
>>> Now we need to figure out why: we still need more logs, this time on the
>>> guest side.
>>> What is the kernel version that you are using in the guest?
>>> Could you please add "debug loglevel=9" to the guest kernel command line
>>> and then post the guest dmesg again?
>>> It would be great if you could use the emulated serial to get the logs
>>> rather than a picture. You can do that by adding serial='pty' to the VM
>>> config file and console=ttyS0 to the guest command line.
>>> This additional Xen change could also tell us if the EVTCHNOP_bind_pirq
>>> has been done:
>>>
>>>
>>> diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
>>> index 53777f8..d65a97a 100644
>>> --- a/xen/common/event_channel.c
>>> +++ b/xen/common/event_channel.c
>>> @@ -405,6 +405,8 @@ static long evtchn_bind_pirq(evtchn_bind_pirq_t *bind)
>>>  #ifdef CONFIG_X86
>>>      if ( is_hvm_domain(d) && domain_pirq_to_irq(d, pirq) > 0 )
>>>          map_domain_emuirq_pirq(d, pirq, IRQ_PT);
>>> +    printk("DEBUG %s %d pirq=%d irq=%d emuirq=%d\n", __func__, __LINE__,
>>> +            pirq, domain_pirq_to_irq(d, pirq), domain_pirq_to_emuirq(d, pirq));
>>>  #endif
>>>
>>>   out:
>>
>> The guest is an Ubuntu 11.10 livecd, kernel version 3.0.0-12-generic.
>> I've also attached all the logs, thanks for the tip on the serial
>> console, very useful.
>>
>> Additionally I've attached logs for booting a solaris livecd (my
>> ultimate goal is to use this HBA card in Solaris), with the serial
>> console tip I was able to capture its kernel boot as well.
>
> I'm attaching another log from Solaris' kernel debugger, I'm not sure
> how helpful it is but I found it interesting that it didn't detect an
> Intel IOMMU/ACPI table and unloaded it, then tried AMD - and I'm new
> to Solaris but comparing this log to one without PCI Passthrough, the
> npe module never gets loaded without PCI passthrough, so I assume it
> failed while setting up the AMD IOMMU module then loaded the npe
> module to report the error.

Just following up: is there anything I can do to further help debug
and figure out what is causing the problem here?  I'm assuming that,
since it isn't working properly in either PV or HVM guests, there may
be multiple bugs.  Is there an easy way to run Ubuntu as HVM-only (more
friendly than Solaris) to try to isolate whether that is a separate bug
from what is being seen with the PV Ubuntu VM?

Thanks,
David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 16:50:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 16:50:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxL56-0001TL-F0; Fri, 03 Aug 2012 16:50:32 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <halcyon1981@gmail.com>) id 1SxL54-0001TB-W4
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 16:50:31 +0000
Received: from [85.158.138.51:4607] by server-4.bemta-3.messagelabs.com id
	78/A2-29069-6510C105; Fri, 03 Aug 2012 16:50:30 +0000
X-Env-Sender: halcyon1981@gmail.com
X-Msg-Ref: server-4.tower-174.messagelabs.com!1344012627!30274570!1
X-Originating-IP: [209.85.212.173]
X-SpamReason: No, hits=0.6 required=7.0 tests=MAILTO_TO_SPAM_ADDR,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1191 invoked from network); 3 Aug 2012 16:50:27 -0000
Received: from mail-wi0-f173.google.com (HELO mail-wi0-f173.google.com)
	(209.85.212.173)
	by server-4.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 16:50:27 -0000
Received: by wibhm6 with SMTP id hm6so5226885wib.14
	for <xen-devel@lists.xen.org>; Fri, 03 Aug 2012 09:50:27 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=wgEhuUgtLEWZxBIxHDUIMqF3J4+pOETbVFly+9wUg+4=;
	b=HaGlilGqZ0pMem5yIKzz+d4HaT9HQ8llGqj44fOSnC76n52POdGMCvwFgPZUpRB3tj
	yk9wXSoMVKAQVQi/sQxLzQ+7KHTiJMgyLEnEI9RmTDQSVwYx7ALD8IilEgavWjLOphGv
	xowXtlUa8wd6g3Gxr6oIOAJoP7fpRU6DvB4OaOjBz1SskhmCxt9cPL8dNVWffc/Vjt/U
	BpaQiJ2mNEAQM0m1qLeripeaQ5e/m+G0N87viMscOFkDmrujKEsAoFTdlG4fjterZe31
	vdXnqzE5wA1EI6NwXo7b+vZFEekroaX38NoMVR6/YAVtY5tK3zr2WTNYRwlri/VekmB+
	tsSg==
MIME-Version: 1.0
Received: by 10.217.0.145 with SMTP id l17mr1175373wes.133.1344012627355; Fri,
	03 Aug 2012 09:50:27 -0700 (PDT)
Received: by 10.223.83.9 with HTTP; Fri, 3 Aug 2012 09:50:27 -0700 (PDT)
In-Reply-To: <CANKx4w8Gddh=GAz=mL4Xccykc9_kqi7dS2ER0=-=NxxwWo6VHA@mail.gmail.com>
References: <CACA08Dwza8TjixUYz1o6PWzfaPwfz1jMiGVNyZE9KcJZ_Ln2oA@mail.gmail.com>
	<CANKx4w9XJGnD5u4hN+RhyOy7OUmF5nh-G8549ER1W19jy__u3Q@mail.gmail.com>
	<CANKx4w8Azvf1s0aqGSRSn9_f-Zq_uL_CAN2rfh5kd=udsT2YDg@mail.gmail.com>
	<CACA08Dw0B3xK_Lr6WQiCSgDewt9ED6GPm-_KaTZzQarsELPrcg@mail.gmail.com>
	<CANKx4w8Y0Uddmre-nsm0TT+GgP3AMiCuoXfbc2-2L+PG3TUgbg@mail.gmail.com>
	<CANKx4w9O0bW=GJknP=hnYW5nUAas8AyKhDVjXGMmgeBkRBon5w@mail.gmail.com>
	<CAHyyzzTMX4wcg+DNEL91dWmo0R-6oGJLNH5O50bSUeHkmTWAwQ@mail.gmail.com>
	<CANKx4w9BVBrmQwaCtJr6CsZ6OK=jV+pb35QtNLHp+Jp6=739aA@mail.gmail.com>
	<1342682536.18848.50.camel@dagon.hellion.org.uk>
	<alpine.DEB.2.02.1207191258580.23783@kaball.uk.xensource.com>
	<CANKx4w-UKwt9H45N9Kbox708OBaQEqG3niELp593h2P7oc+pjw@mail.gmail.com>
	<CANKx4w9=X7iXvQBrORgh6aNHEQ--+4QeFwXDBG0HV-DYk7HpxQ@mail.gmail.com>
	<alpine.DEB.2.02.1207311119500.4645@kaball.uk.xensource.com>
	<CANKx4w_9sX9hsgSkwV6Vb8xSVQ_4n+_sFjfXSDEPHQ1seH=9=g@mail.gmail.com>
	<alpine.DEB.2.02.1208011150330.4645@kaball.uk.xensource.com>
	<CANKx4w9AUuAUWtnW59NL7az8VJ-jLJn984hMqvGVwTEfVAvs1A@mail.gmail.com>
	<CANKx4w8Gddh=GAz=mL4Xccykc9_kqi7dS2ER0=-=NxxwWo6VHA@mail.gmail.com>
Date: Fri, 3 Aug 2012 09:50:27 -0700
Message-ID: <CANKx4w-ofYb+2nLzazuh7J3XcFx1361pitGQqBy7FOTbLN2kFg@mail.gmail.com>
From: David Erickson <halcyon1981@gmail.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: Anthony Perard <anthony.perard@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	jacek burghardt <jaceksburghardt@gmail.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] LSI SAS2008 Option Rom Failure
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Aug 2, 2012 at 10:38 AM, David Erickson <halcyon1981@gmail.com> wrote:
> On Wed, Aug 1, 2012 at 10:52 AM, David Erickson <halcyon1981@gmail.com> wrote:
>> On Wed, Aug 1, 2012 at 4:13 AM, Stefano Stabellini
>> <stefano.stabellini@eu.citrix.com> wrote:
>>> On Tue, 31 Jul 2012, David Erickson wrote:
>>>> On Tue, Jul 31, 2012 at 4:39 AM, Stefano Stabellini
>>>> <stefano.stabellini@eu.citrix.com> wrote:
>>>> > On Tue, 31 Jul 2012, David Erickson wrote:
>>>> >> Just got back in town, following up on the prior discussion.  I
>>>> >> successfully compiled the latest code (25688 and qemu upstream
>>>> >> 5e3bc7144edd6e4fa2824944e5eb16c28197dd5a), but am still having
>>>> >> problems during initialization of the card in the guest, in particular
>>>> >> the unsupported delivery mode 3 which seems to cause interrupt related
>>>> >> problems during init.  I've again attached the qemu-dm-log, and xl
>>>> >> dmesg log files, and additionally screenshots of the guest dmesg and
>>>> >> also for comparison starting the same livecd natively on the box.
>>>> >
>>>> > "unsupported delivery mode 3" means that the Linux guest is trying to
>>>> > remap the MSI onto an event channel but Xen is still trying to deliver
>>>> > the MSI using the emulated code path anyway.
>>>> >
>>>> > Adding
>>>> >
>>>> > #define XEN_PT_LOGGING_ENABLED 1
>>>> >
>>>> > at the top of hw/xen_pt.h and posting the additional QEMU logs could
>>>> > be helpful.
>>>> >
>>>> > The full Xen logs might also be useful. I would add some more tracing to
>>>> > the hypervisor too:
>>>> >
>>>> > diff --git a/xen/drivers/passthrough/io.c b/xen/drivers/passthrough/io.c
>>>> > index b5975d1..08f4ab7 100644
>>>> > --- a/xen/drivers/passthrough/io.c
>>>> > +++ b/xen/drivers/passthrough/io.c
>>>> > @@ -474,6 +474,11 @@ static void hvm_pci_msi_assert(
>>>> >  {
>>>> >      struct pirq *pirq = dpci_pirq(pirq_dpci);
>>>> >
>>>> > +    printk("DEBUG %s pirq=%d hvm_domain_use_pirq=%d emuirq=%d\n", __func__,
>>>> > +            pirq->pirq,
>>>> > +            hvm_domain_use_pirq(d, pirq),
>>>> > +            pirq->arch.hvm.emuirq);
>>>> > +
>>>> >      if ( hvm_domain_use_pirq(d, pirq) )
>>>> >          send_guest_pirq(d, pirq);
>>>> >      else
>>>>
>>>> Hi Stefano-
>>>> I made the modifications (it looks like that #define hasn't been used
>>>> in a while; it caused a few compilation issues, and I had to prefix
>>>> most of the logged variables with s->hostaddr), and am attaching the
>>>> qemu-dm-ubuntu.log and dmesg from xl.  You referred to full Xen logs;
>>>> where do I find those?
>>>
>>> Thanks for the logs!
>>> You can get the full Xen logs from the serial console, but you can also
>>> grab the last few lines with "xl dmesg", as you did, and that seems to
>>> be enough in this case.
>>>
>>>
>>> The initial MSI remapping has been done:
>>>
>>> [00:05.0] msi_msix_setup: requested pirq 4 for MSI-X (vec: 0, entry: 0)
>>> [00:05.0] msi_msix_update: Updating MSI-X with pirq 4 gvec 0 gflags 0x3037 (entry: 0)
>>>
>>> But the guest is not issuing the EVTCHNOP_bind_pirq hypercall that is
>>> necessary to be able to receive event notifications (emuirq=-1 in the
>>> Xen logs).
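
(For reference, a sketch in pseudocode of the binding step the guest is
expected to perform; the struct and hypercall names follow Xen's public
event_channel.h, and error handling is elided:

    struct evtchn_bind_pirq bind = {
        .pirq  = pirq,                 /* pirq previously mapped for the MSI */
        .flags = BIND_PIRQ__WILL_SHARE,
    };
    rc = HYPERVISOR_event_channel_op(EVTCHNOP_bind_pirq, &bind);
    /* on success, bind.port is the event channel the MSI is delivered on */

The emuirq=-1 in the logs indicates this call is never made for the
passed-through device.)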
>>>
>>> Now we need to figure out why: we still need more logs, this time on the
>>> guest side.
>>> What is the kernel version that you are using in the guest?
>>> Could you please add "debug loglevel=9" to the guest kernel command line
>>> and then post the guest dmesg again?
>>> It would be great if you could use the emulated serial to get the logs
>>> rather than a picture. You can do that by adding serial='pty' to the VM
>>> config file and console=ttyS0 to the guest command line.
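
(Concretely, the suggested setup amounts to something like this; a
sketch, with the rest of the VM config file unchanged:

    serial='pty'

and on the guest kernel command line:

    console=ttyS0 debug loglevel=9

after which "xl console <domain>" should attach to the emulated serial
port and let you capture the boot messages as text.)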
>>> This additional Xen change could also tell us if the EVTCHNOP_bind_pirq
>>> has been done:
>>>
>>>
>>> diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
>>> index 53777f8..d65a97a 100644
>>> --- a/xen/common/event_channel.c
>>> +++ b/xen/common/event_channel.c
>>> @@ -405,6 +405,8 @@ static long evtchn_bind_pirq(evtchn_bind_pirq_t *bind)
>>>  #ifdef CONFIG_X86
>>>      if ( is_hvm_domain(d) && domain_pirq_to_irq(d, pirq) > 0 )
>>>          map_domain_emuirq_pirq(d, pirq, IRQ_PT);
>>> +    printk("DEBUG %s %d pirq=%d irq=%d emuirq=%d\n", __func__, __LINE__,
>>> +            pirq, domain_pirq_to_irq(d, pirq), domain_pirq_to_emuirq(d, pirq));
>>>  #endif
>>>
>>>   out:
>>
>> The guest is an Ubuntu 11.10 livecd, kernel version 3.0.0-12-generic.
>> I've also attached all the logs, thanks for the tip on the serial
>> console, very useful.
>>
>> Additionally I've attached logs for booting a solaris livecd (my
>> ultimate goal is to use this HBA card in Solaris), with the serial
>> console tip I was able to capture its kernel boot as well.
>
> I'm attaching another log from Solaris' kernel debugger.  I'm not sure
> how helpful it is, but I found it interesting that it didn't detect an
> Intel IOMMU ACPI table, unloaded that module, and then tried AMD.  I'm
> new to Solaris, but comparing this log to one without PCI passthrough,
> the npe module is never loaded in the non-passthrough case, so I assume
> it failed while setting up the AMD IOMMU module and then loaded the npe
> module to report the error.

Just following up: is there anything I can do to further help debug
and figure out what is causing the problem here?  Since it isn't
working properly in either PV or HVM guests, I'm assuming there may be
multiple bugs.  Is there an easy way to run Ubuntu as HVM only (more
friendly than Solaris) to try to isolate whether that is a separate
bug from what is being seen with the PV Ubuntu VM?

Thanks,
David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 17:31:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 17:31:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxLik-0001ql-Ol; Fri, 03 Aug 2012 17:31:30 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1SxLii-0001qg-AB
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 17:31:28 +0000
Received: from [85.158.139.83:19989] by server-7.bemta-5.messagelabs.com id
	BA/FC-28276-FEA0C105; Fri, 03 Aug 2012 17:31:27 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-3.tower-182.messagelabs.com!1344015086!28224473!1
X-Originating-IP: [74.125.83.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29352 invoked from network); 3 Aug 2012 17:31:26 -0000
Received: from mail-ee0-f45.google.com (HELO mail-ee0-f45.google.com)
	(74.125.83.45)
	by server-3.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 17:31:26 -0000
Received: by eeke53 with SMTP id e53so286287eek.32
	for <xen-devel@lists.xen.org>; Fri, 03 Aug 2012 10:31:25 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=SuPCmQ83eoOZ7ynVbM00VxGvEPYxLQXrmZdeizvMLe4=;
	b=hoGU9Nre0UXU5GLt9Pqg9gOKtYjkiXof3eWz2RgALh1u6vIkw0qEvbzFu8ZkYCJd0P
	KfBTlzqO4P/aCWmLMudNx9BIwGBb6nLM28iO6gieRqJRAb0QeN1YUZPok9idfuIutS1l
	aXPv0VHPP8QGskP3HWDx/L4DTp41x1y3ca0iVtYBwqCFA32ZiA+08CsYkF9g5XBscLj2
	n6Aar78GsPMmZim1rCZTHXI/XqluJWAPqOgcLazG2VeM4q4JTBdbza0UIfCDL7/eL0sb
	B5mOL0AXbG+InYQPKMZ8+XiX2ZTN4ONKPTsxwbo4k4d4VG22JlBcpGQKJfWJiWpMpkqF
	aTRQ==
MIME-Version: 1.0
Received: by 10.14.214.197 with SMTP id c45mr3046574eep.37.1344015085869; Fri,
	03 Aug 2012 10:31:25 -0700 (PDT)
Received: by 10.14.206.135 with HTTP; Fri, 3 Aug 2012 10:31:25 -0700 (PDT)
In-Reply-To: <CAFLBxZZKoVfyKuN7_kW_+mJ69PmXwUvukXBK3rpD+we8c-wvXA@mail.gmail.com>
References: <CAFLBxZZKoVfyKuN7_kW_+mJ69PmXwUvukXBK3rpD+we8c-wvXA@mail.gmail.com>
Date: Fri, 3 Aug 2012 18:31:25 +0100
X-Google-Sender-Auth: svQPL6Ezh7b5XtFSWsHkGKGJVOs
Message-ID: <CAFLBxZbCamJqx5GYGEjrkvVVbcRW7Y4-rS1zqHKmkjDqrk7UFQ@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Cc: Lars Kurth <lars.kurth@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Matt Wilson <msw@amazon.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] Security discussion: Summary of proposals and
 criteria (was Re: Security vulnerability process, and CVE-2012-0217)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I said before that I was going to give an analysis, and I had a very
detailed one written out.  The result of that analysis looked very
clear-cut.  But a couple of new arguments have come to light, and they
make the whole thing much less clear to me.  So what I'm going to do
instead is describe the arguments that I think are pertinent, and then
my own recommendation.

Next week I plan on sending a poll out.  The poll won't be structured
like a vote; rather, the purpose of the poll is to help move the
discussion forwards by identifying where the sentiment lies.  The poll
will ask you to rate each option with one of the following selections:
* This is an excellent idea, and I will argue for it.
* I am happy with this idea, but I will not argue for it.
* I am not happy with this idea, but I will not argue against it.
* This is a terrible idea, and I will argue against it.

If we have some options which have at least one "argue for" and no
"argue against"s, then we can simply take a formal vote and move on to
the smaller points in the discussion.  Otherwise, we can eliminate
options for which there are no "argue for"s, and focus on the points
where there are both "argue for"s and "argue against"s.

Back to the discussion.  There are two additional points I want to bring out.

First, as Joanna and others have pointed out, closing a vulnerability
will not close a back-door that has been installed while the user was
vulnerable.  So it may well be worth an attacker's time to develop an
exploit based on a bug report.

Secondly, my original discussion had assumed that the risk during
"public vulnerability" for all users was the same.  Unfortunately, I
don't think that's true.  Some targets may be more valuable than
others.  In particular, the value of attacking a hosting provider may
be correlated to the value to an attacker of the aggregate of all of
their customers.  Thus it is simply more likely for a large provider
to be the target of an attack than a small provider.

Thus public vulnerability for a large provider may be very risky
indeed; and I tend to agree with the idea that large providers, and
other large potential users (such as large governments, &c) should be
given some pre-disclosure to minimize this risk.

However, as has been previously mentioned, being on the pre-disclosure
list is a very large advantage, and is unfair towards the majority of
users, who are also at significant risk during their own public
vulnerability period.

So right now I think the best option is to have a pre-disclosure list
that is fairly easy to join: if the security team has reason to
believe you are a hosting company, they can put you on the list.

Although I am unhappy with the idea of only large providers being on
the list, I still think it's a better option than giving them no
pre-disclosure.

 -George

On Fri, Jul 6, 2012 at 5:46 PM, George Dunlap
<George.Dunlap@eu.citrix.com> wrote:
> We've had a number of viewpoints expressed, and now we need to figure
> out how to move forward in the discussion.
>
> One thing we all seem to agree on is that with regard to the public
> disclosure and the wishes of the discloser:
> * In general, we should default to following the wishes of the discloser
> * We should have a framework available to advise the discloser of a
> reasonable embargo period if they don't have strong opinions of their
> own (many have listed the oCERT guidelines)
> * Disclosing early against the wishes of the discloser is possible if
> the discloser's request is unreasonable, but should only be considered
> in extreme situations.
>
> What next needs to be decided, it seems to me, is concerning
> pre-disclosure: Are we going to have a pre-disclosure list (to whom we
> send details before the public disclosure), and if so who is going to
> be on it?  Then we can start filling in the details.
>
> What I propose is this.  I'll try to summarize the different options
> and angles discussed.  I will also try to synthesize the different
> arguments people have made and make my own recommendation.  Assuming
> that no creative new solutions are introduced in response, I think we
> should take an anonymous "straw poll", just to see what people think
> about the various options.  If that shows a strong consensus, then we
> should have a formal vote.  If it does not show consensus, then we'll
> at least be able to discuss the issue more constructively (by avoiding
> solutions no one is championing).
>
> So below is my summary of the options and the criteria that have been
> brought up so far.  It's fairly long, so I will give my own analysis
> and recommendation in a different mail, perhaps in a day or two.  I
> will also be working with Lars to form a straw poll where members of
> the list can informally express their preference, so we can see where
> we are in terms of agreement, sometime over the next day or two.
>
> = Proposed options =
>
> At a high level, I think we basically have six options to consider.
>
> In all cases, I think that we can make a public announcement that
> there *is* a security vulnerability, and the date we expect to
> publicly disclose the fix, so that anyone who has not been disclosed
> to non-publicly can be prepared to apply it as soon as possible.
>
> 1. No pre-disclosure list.  People are brought in only to help produce
> a fix.  The fix is released to everyone publicly when it's ready (or,
> if the discloser has asked for a longer embargo period, when that
> embargo period is up).
>
> 2. Pre-disclosure list consists only of software vendors -- people who
> compile and ship binaries to others.  No updates may be given to any
> user until the embargo period is up.
>
> 3. Pre-disclosure list consists of software vendors and some subset of
> privileged users (e.g., service providers above a certain size).
> Privileged users will be provided with patches at the same time as
> software vendors.  However, they will not be permitted to update their
> systems until the embargo period is up.
>
> 4. Pre-disclosure list consists of software vendors and privileged
> users. Privileged users will be provided with patches at the same time
> as software vendors.  They will be permitted to update their systems
> at any time.  Software vendors will be permitted to send code updates
> to service providers who are on the pre-disclosure list.  (This is the
> status quo.)
>
> 5. Pre-disclosure list is open to any organization (perhaps with some
> minimal entrance criteria, like having some form of incorporation, or
> having registered a domain name).  Members of the list may update
> their systems at any time; software vendors will be permitted to send
> code updates to anyone on the pre-disclosure list.
>
> 6. Pre-disclosure list open to any organization, but no one permitted
> to roll out fixes until the embargo period is up.
>
> = Criteria =
>
> I think there are several criteria we need to consider.
>
> * _Risk of being exploited_.  The ultimate goal of any pre-disclosure
> process is to try to minimize the total risk for users of being
> exploited.  That said, any policy decision must take into account both
> the benefits in terms of risk reduction as well as the other costs of
> implementing the policy.
>
> To simplify things a bit, I think there are two kinds of risk.
> Between the time a vulnerability has been publicly announced and the
> time a user patches their system, that user is "publicly vulnerable"
> -- running software that contains a public vulnerability.  However,
> the user was vulnerable before that; they were vulnerable from the
> time they deployed the system with the vulnerability.  I will call
> this "privately vulnerable" -- running software that contains a
> non-public vulnerability.
>
> Now at first glance, it would seem obvious that being publicly
> vulnerable carries a much higher risk than being privately vulnerable.
> After all, to exploit a vulnerability you need to have malicious
> intent, the skills to leverage a vulnerability into an exploit, and
> you need to know about a vulnerability.  By announcing it publicly, a
> much greater number of people with malicious intent and the requisite
> skills will now know about the vulnerability; surely this increases
> the chances of someone being actually exploited.
>
> However, one should not under-estimate the risk of private
> vulnerability.  Black hats prize and actively look for vulnerabilities
> which have not yet been made public.  There is, in fact, a black
> market for such "0-day" exploits.  If your infrastructure is at all
> valuable, black hats have already been looking for the bug which makes
> you vulnerable; you have no way of knowing if they have found it yet
> or not.
>
> In fact, one could make the argument that publicly announcing a
> vulnerability along with a fix makes the vulnerability _less_ valuable
> to black-hats.  Developing an exploit from a vulnerability requires a
> significant amount of effort; and you know that security-conscious
> service providers will be working as fast as possible to close the
> hole.  Why would you spend your time and energy on an exploit that's
> only going to be useful for a day or two at most?
>
> Ultimately the only way to say for sure would be to talk to people who
> know the black hat community well.  But we can conclude this: private
> vulnerability is a definite risk which needs to be considered when
> minimizing total risk.
>
> Another thing to consider is how the nature of the pre-disclosure and
> public disclosure affect the risk.  For pre-disclosure, the more
> individuals have access to pre-disclosure information, the higher the
> risk that the information will end up in the hands of a black-hat.
> Having a list anyone can sign up to, for instance, may be little more
> secure than a quiet public disclosure.
>
> For public disclosure, the nature of the disclosure may affect the
> risk, or the perception of risk, materially.  If the fix is simply
> checked into a public repository without fanfare or comment, it may
> not raise the risk of public vulnerability significantly; while if the
> fix is announced in press releases and on blogs, the _perception_ of
> the risk will undoubtedly increase.
>
> * _Fairness_.  Xen is a community project and relies on the good-will
> of the community to continue.  Giving one sub-group of our users an
> advantage over another sub-group will be costly in terms of community
> good will.  Furthermore, depending on what kind of sub-group we have
> and how it's run, it may well be considered anti-competitive and
> illegal in some jurisdictions.  Some might say we should never
> consider such a thing.  At very least, doing so should be very
> carefully considered to make sure the risk is worth the benefit.
>
> The majority of this document will focus on the impact of the policy
> on actual users.  However, I think it is also legitimate to consider
> the impact of the policies on software vendors as well.  Regardless of
> the actual risk to users, the _perception_ of risk may have a
> significant impact on the success of some vendors over others.
>
> It is in fact very difficult to achieve perfect fairness between all
> kinds of parties.  However, as much as possible, unfairness should be
> based on decisions that the party themselves have a reasonable choice
> about.  For instance, having a slight advantage to compiling your own
> hypervisor directly from xen.org rather than using a software vendor
> might be tolerable because 1) those receiving from software vendors
> may have other advantages not available to those consuming directly,
> and 2) anyone can switch to pulling directly from xen.org if they
> wish.
>
> * _Administrative overhead_.  This comprises a number of different
> aspects: for example, how hard is it to come up with a precise and
> "fair" policy?  How much effort will it be for xen.org to determine
> whether or not someone should be on the list?
>
> Another question has to do with robustness of enforcement.  If there
> is a strong incentive for people on the list to break the rules
> ("moral hazard"), then we need to import a whole legal framework: how
> do we detect breaking the rules?  Who decides that the rules have
> indeed been broken, and decides the consequences?  Is there an appeals
> process?  At what point is someone who has broken the rules in the
> past allowed back on the list?  What are the legal, project, and
> community implications of having to do this, and so on?  All of this
> will impose a much heavier burden on not only this discussion, but
> also on the xen.org security team.
>
> (Disclaimer: I am not a lawyer.) It should be noted that because of
> the nature of the GPL, we cannot impose additional contractual
> limitations on the re-distribution of a GPL'ed patch, and thus we
> cannot seek legal redress for those who re-distribute such a patch (or
> resulting binaries) in a way that violates the pre-disclosure policy.
> But for the purposes of this discussion, I am going to assume that we
> can, however, choose to remove them from the pre-disclosure list as a
> result.
>
> I think those cover the main points that have been brought up in the
> discussion.  Please feel free to give feedback.  Next week probably I
> will attempt to give an analysis, applying these criteria to the
> different options.  I haven't yet come up with what I think is a
> satisfactory conclusion.
>
>  -George
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 17:31:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 17:31:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxLik-0001ql-Ol; Fri, 03 Aug 2012 17:31:30 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1SxLii-0001qg-AB
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 17:31:28 +0000
Received: from [85.158.139.83:19989] by server-7.bemta-5.messagelabs.com id
	BA/FC-28276-FEA0C105; Fri, 03 Aug 2012 17:31:27 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-3.tower-182.messagelabs.com!1344015086!28224473!1
X-Originating-IP: [74.125.83.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29352 invoked from network); 3 Aug 2012 17:31:26 -0000
Received: from mail-ee0-f45.google.com (HELO mail-ee0-f45.google.com)
	(74.125.83.45)
	by server-3.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 17:31:26 -0000
Received: by eeke53 with SMTP id e53so286287eek.32
	for <xen-devel@lists.xen.org>; Fri, 03 Aug 2012 10:31:25 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=SuPCmQ83eoOZ7ynVbM00VxGvEPYxLQXrmZdeizvMLe4=;
	b=hoGU9Nre0UXU5GLt9Pqg9gOKtYjkiXof3eWz2RgALh1u6vIkw0qEvbzFu8ZkYCJd0P
	KfBTlzqO4P/aCWmLMudNx9BIwGBb6nLM28iO6gieRqJRAb0QeN1YUZPok9idfuIutS1l
	aXPv0VHPP8QGskP3HWDx/L4DTp41x1y3ca0iVtYBwqCFA32ZiA+08CsYkF9g5XBscLj2
	n6Aar78GsPMmZim1rCZTHXI/XqluJWAPqOgcLazG2VeM4q4JTBdbza0UIfCDL7/eL0sb
	B5mOL0AXbG+InYQPKMZ8+XiX2ZTN4ONKPTsxwbo4k4d4VG22JlBcpGQKJfWJiWpMpkqF
	aTRQ==
MIME-Version: 1.0
Received: by 10.14.214.197 with SMTP id c45mr3046574eep.37.1344015085869; Fri,
	03 Aug 2012 10:31:25 -0700 (PDT)
Received: by 10.14.206.135 with HTTP; Fri, 3 Aug 2012 10:31:25 -0700 (PDT)
In-Reply-To: <CAFLBxZZKoVfyKuN7_kW_+mJ69PmXwUvukXBK3rpD+we8c-wvXA@mail.gmail.com>
References: <CAFLBxZZKoVfyKuN7_kW_+mJ69PmXwUvukXBK3rpD+we8c-wvXA@mail.gmail.com>
Date: Fri, 3 Aug 2012 18:31:25 +0100
X-Google-Sender-Auth: svQPL6Ezh7b5XtFSWsHkGKGJVOs
Message-ID: <CAFLBxZbCamJqx5GYGEjrkvVVbcRW7Y4-rS1zqHKmkjDqrk7UFQ@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Cc: Lars Kurth <lars.kurth@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Matt Wilson <msw@amazon.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] Security discussion: Summary of proposals and
 criteria (was Re: Security vulnerability process, and CVE-2012-0217)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I said before that I was going to give an analysis, and I had a very
detailed one written out.  The result of that analysis looked very
clear-cut.  But a couple of new arguments have come to light, and it
makes the whole thing much less clear to me.  So what I'm going to do
instead is describe the arguments that I think are pertinent, and then
my own recommendation.

Next week I plan on sending a poll out.  The poll won't be structured
like a vote; rather, the purpose of the poll is to help move the
discussion forwards by idenfitying where the sentiment lies.  The poll
will ask you to rate each option with one of the following selections:
* This is an excellent idea, and I will argue for it.
* I am happy with this idea, but I will not argue for it.
* I am not happy with this idea, but I will not argue against it.
* This is a terrible idea, and I will argue against it.

If we have some options which has at least one "argue for" and no
"argue againsts", then we can simply take a formal vote and move on to
the smaller points int he discussion.  Otherwise, we can eliminate
options for which there are no "argue for"s, and focus on the points
where there both "argue for"s and "argue against"s.

Back to the discussion.  There are two additional points I want to bring out.

First, as Joanna and others have pointed out, closing a vulnerability
will not close a back-door that has been installed while the user was
vulnerable.  So it may well be worth an attacker's time to develop an
exploit based on a bug report.

Secondly, my original discussion had assumed that the risk during
"public vulnerability" for all users was the same.  Unfortunately, I
don't think that's true.  Some targets may be more valuable than
others.  In particular, the value of attacking a hosting provider may
be correlated to the value to an attacker of the aggregate of all of
their customers.  Thus it is simply more likely for a large provider
to be the targt of an attack than a small provider.

Thus public vulnerability for a large provider may be very risky
indeed; and I tend to agree with the idea that large providers, and
other large potential users (such as large governmetns, &c) should be
given some pre-disclosure to minimize this risk.

However, as has been previously mentioned, being on the pre-disclosure
list is a very large advantage, and is unfair towards the majority of
users, who are also at significant risk during their own public
vulnerability period.

So right now I think the best option is to have a pre-disclosure list
that is fairly easy to join: if the security team has reason to
believe you are a hosting company, they can put you on the list.

Although I am unhappy with the idea of only large providers being on
the list, I still think it's a better option than giving them no
pre-disclsoure.

 -George

On Fri, Jul 6, 2012 at 5:46 PM, George Dunlap
<George.Dunlap@eu.citrix.com> wrote:
> We've had a number of viewpoints expressed, and now we need to figure
> out how to move forward in the discussion.
>
> One thing we all seem to agree on is that with regard to the public
> disclosure and the wishes of the discloser:
> * In general, we should default to following the wishes of the discloser
> * We should have a framework available to advise the discloser of a
> reasonable embargo period if they don't have strong opinions of their
> own (many have listed the oCERT guidelines)
> * Disclosing early against the wishes of the disclosure is possible if
> the discloser's request is unreasonable, but should only be considered
> in extreme situations.
>
> What next needs to be decided, it seems to me, is concerning
> pre-disclosure: Are we going to have a pre-disclosure list (to whom we
> send details before the public disclosure), and if so who is going to
> be on it?  Then we can start filling in the details.
>
> What I propose is this.  I'll try to summarize the different options
> and angles discussed.  I will also try to synthesize the different
> arguments people have made and make my own recommendation.  Assuming
> that no creative new solutions are introduced in response, I think we
> should take an anonymous "straw poll", just to see what people think
> about the various options.  If that shows a strong consensus, then we
> should have a formal vote.  If it does not show consensus, then we'll
> at least be able to discuss the issue more constructively (by avoiding
> solutions no one is championing).
>
> So below is my summary of the options and the criteria that have been
> brought up so far.  It's fairly long, so I will give my own analysis
> and recommendation in a different mail, perhaps in a day or two.  I
> will also be working with Lars to form a straw poll where members of
> the list can informally express their preference, so we can see where
> we are in terms of agreement, sometime over the next day or two.
>
> = Proposed options =
>
> At a high level, I think we basically have five options to consider.
>
> In all cases, I think that we can make a public announcement that
> there *is* a security vulnerability, and the date we expect to
> publicly disclose the fix, so that anyone who has not been disclosed
> to non-publicly can be prepared to apply it as soon as possible.
>
> 1. No pre-disclosure list.  People are brought in only to help produce
> a fix.  The fix is released to everyone publicly when it's ready (or,
> if the discloser has asked for a longer embargo period, when that
> embargo period is up).
>
> 2. Pre-disclosure list consists only of software vendors -- people who
> compile and ship binaries to others.  No updates may be given to any
> user until the embargo period is up.
>
> 3. Pre-disclosure list consists of software vendors and some subset of
> privleged users (e.g., service providers above a certain size).
> Privileged users will be provided with patches at the same time as
> software vendors.  However, they will not be permitted to update their
> systems until the embargo period is up.
>
> 4. Pre-disclosure list consists of software vendors and privileged
> users. Privleged users will be provided with patches at the same time
> as software vendors.  They will be permitted to update their systems
> at any time.  Software vendors will be permitted to send code updates
> to service providers who are on the pre-disclosure list.  (This is the
> status quo.)
>
> 5. Pre-disclsoure list is open to any organiation (perhaps with some
> minimal entrance criteria, like having some form of incorporation, or
> having registered a domain name).  Members of the list may update
> their systems at any time; software vendors will be permitted to send
> code updates to anyone on the pre-disclosure list.
>
> 6. Pre-disclosure list open to any organization, but no one permitted
> to roll out fixes until the embargo period is up.
>
> = Criteria =
>
> I think there are several criteria we need to consider.
>
> * _Risk of being exploited_.  The ultimate goal any pre-disclosure
> process is to try to minimize the total risk for users of being
> exploited.  That said, any policy decision must take into account both
> the benefits in terms of risk reduction as well as the other costs of
> implementing the policy.
>
> To simplify things a bit, I think there are two kinds of risk.
> Between the time a vulnerability has been publicly announced and the
> time a user patches their system, that user is "publicly vulnerable"
> -- running software that contains a public vulnerability.  However,
> the user was vulnerable before that; they were vulnerable from the
> time they deployed the system with the vulnerability.  I will call
> this "privately vulnerable" -- running software that contains a
> non-public vulnerability.
>
> Now at first glance, it would seem obvious that being publicly
> vulnerable carries a much higher risk of being privately vulnerable.
> After all, to exploit a vulnerability you need to have malicious
> intent, the skills to leverage a vulnerability into an exploit, and
> you need to know about a vulnerability.  By announcing it publicly, a
> much greater number of people with malicious intent and the requisite
> skills will now know about the vulnerability; surely this increases
> the chances of someone being actually exploited.
>
> However, one should not under-estimate the risk of private
> vulnerability.  Black hats prize and actively look for vulnerabilities
> which have not yet been made public.  There is, in fact, a black
> market for such "0-day" exploits.  If your infrastructure is at all
> valuable, black hats have already been looking for the bug which makes
> you vulnerable; you have no way of knowing if they have found it yet
> or not.
>
> In fact, one could make the argument that publicly announcing a
> vulnerability along with a fix makes the vulnerability _less_ valuable
> to black-hats.  Developing an exploit from a vulnerability requires a
> significant amount of effort; and you know that security-conscious
> service providers will be working as fast as possible to close the
> hole.  Why would you spend your time and energy on an exploit that's
> only going to be useful for a day or two at most?
>
> Ultimately the only way to say for sure would be to talk to people who
> know the black hat community well.  But we can conclude this: private
> vulnerability is a definite risk which needs to be considered when
> minimizing total risk.
>
> Another thing to consider is how the nature of the pre-disclosure and
> public disclosure affects the risk.  For pre-disclosure, the more
> individuals have access to pre-disclosure information, the higher the
> risk that the information will end up in the hands of a black-hat.
> Having a list anyone can sign up to, for instance, may be scarcely
> more secure than a quiet public disclosure.
>
> For public disclosure, the nature of the disclosure may affect the
> risk, or the perception of risk, materially.  If the fix is simply
> checked into a public repository without fanfare or comment, it may
> not raise the risk of public vulnerability significantly; while if the
> fix is announced in press releases and on blogs, the _perception_ of
> the risk will undoubtedly increase.
>
> * _Fairness_.  Xen is a community project and relies on the good-will
> of the community to continue.  Giving one sub-group of our users an
> advantage over another sub-group will be costly in terms of community
> good will.  Furthermore, depending on what kind of sub-group we have
> and how it's run, it may well be considered anti-competitive and
> illegal in some jurisdictions.  Some might say we should never
> consider such a thing.  At the very least, doing so should be very
> carefully considered to make sure the risk is worth the benefit.
>
> The majority of this document will focus on the impact of the policy
> on actual users.  However, I think it is also legitimate to consider
> the impact of the policies on software vendors as well.  Regardless of
> the actual risk to users, the _perception_ of risk may have a
> significant impact on the success of some vendors over others.
>
> It is in fact very difficult to achieve perfect fairness between all
> kinds of parties.  However, as much as possible, unfairness should be
> based on decisions over which the parties themselves have a reasonable
> choice.  For instance, a slight advantage for those compiling their
> own hypervisor directly from xen.org rather than using a software
> vendor might be tolerable because 1) those receiving from software vendors
> may have other advantages not available to those consuming directly,
> and 2) anyone can switch to pulling directly from xen.org if they
> wish.
>
> * _Administrative overhead_.  This comprises a number of different
> aspects: for example, how hard is it to come up with a precise and
> "fair" policy?  How much effort will it be for xen.org to determine
> whether or not someone should be on the list?
>
> Another question has to do with robustness of enforcement.  If there
> is a strong incentive for people on the list to break the rules
> ("moral hazard"), then we need to import a whole legal framework: how
> do we detect breaking the rules?  Who decides that the rules have
> indeed been broken, and decides the consequences?  Is there an appeals
> process?  At what point is someone who has broken the rules in the
> past allowed back on the list?  What are the legal, project, and
> community implications of having to do this, and so on?  All of this
> will impose a much heavier burden on not only this discussion, but
> also on the xen.org security team.
>
> (Disclaimer: I am not a lawyer.) It should be noted that because of
> the nature of the GPL, we cannot impose additional contractual
> limitations on the re-distribution of a GPL'ed patch, and thus we
> cannot seek legal redress against those who re-distribute such a patch (or
> resulting binaries) in a way that violates the pre-disclosure policy.
> But for the purposes of this discussion, I am going to assume that we
> can, however, choose to remove them from the pre-disclosure list as a
> result.
>
> I think these cover the main points that have been brought up in the
> discussion.  Please feel free to give feedback.  Next week I will
> probably attempt to give an analysis, applying these criteria to the
> different options.  I haven't yet come up with what I think is a
> satisfactory conclusion.
>
>  -George
>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 18:16:18 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 18:16:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxMPe-0002YX-7a; Fri, 03 Aug 2012 18:15:50 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julian.pidancet@gmail.com>) id 1SxMPc-0002YS-NW
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 18:15:48 +0000
Received: from [85.158.143.35:64292] by server-3.bemta-4.messagelabs.com id
	C4/22-01511-4551C105; Fri, 03 Aug 2012 18:15:48 +0000
X-Env-Sender: julian.pidancet@gmail.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1344017745!16685751!1
X-Originating-IP: [209.85.160.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32613 invoked from network); 3 Aug 2012 18:15:46 -0000
Received: from mail-gh0-f173.google.com (HELO mail-gh0-f173.google.com)
	(209.85.160.173)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 18:15:46 -0000
Received: by ghrr14 with SMTP id r14so1314809ghr.32
	for <xen-devel@lists.xen.org>; Fri, 03 Aug 2012 11:15:45 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=from:to:cc:subject:date:message-id:x-mailer;
	bh=fp7Jj5AGyeSMnB3BhXp+E9c92sjSm737bxG9c3CAdFY=;
	b=qR7IH8c4kzdHGSb6DoPnNj8FuJCQoAje5yrQVLhR94pSlDsYPGZMd5YqkN+gSHtKqH
	8eWvJiQ2UUIqFBZh+lyuRqfRRV2fOBB7qkltOc/pkcXdbjcvJ5TM4RdobRXzc9uz7Sgh
	HX24z8rmTFvtRsC2gv0YuIMH60gkFjCSB42KvvQzYjnt9Ui6JFIaZ9iWlIAV61m+x8Bm
	hFO4DdAmnUCTXfBX+v8pqPFvqP+yYinTjleLSsU12vWAaerxWuTdy+kVtQ3V49puKaJx
	g4S6y06DhByhv2hHlcaTBay0xnMkaEyn7zVzgLYEdK/72tM5cKCiiTIxmAGwH/tCcABD
	5zwg==
Received: by 10.236.177.1 with SMTP id c1mr2998267yhm.71.1344017745407;
	Fri, 03 Aug 2012 11:15:45 -0700 (PDT)
Received: from localhost.localdomain (firewall.ctxuk.citrix.com. [62.200.22.2])
	by mx.google.com with ESMTPS id q17sm5844045anm.12.2012.08.03.11.15.43
	(version=TLSv1/SSLv3 cipher=OTHER);
	Fri, 03 Aug 2012 11:15:44 -0700 (PDT)
From: julian.pidancet@gmail.com
To: linux-kernel@vger.kernel.org
Date: Fri,  3 Aug 2012 19:26:38 +0100
Message-Id: <cf6ab1bc6ccb8e997a4665b59f4dcac374764f96.1344018162.git.julian.pidancet@citrix.com>
X-Mailer: git-send-email 1.7.2.5
Cc: davem@davemloft.net, maze@google.com, xen-devel@lists.xen.org,
	edumazet@google.com, ycheng@google.com,
	Julian Pidancet <julian.pidancet@citrix.com>
Subject: [Xen-devel] [RFC PATCH] Introduce V4V socket family for
	inter-virtual machines communication
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Julian Pidancet <julian.pidancet@citrix.com>

V4V is a hypervisor-based inter-domain communication system being
developed for the Xen hypervisor.

I am currently working on a kernel socket family implementation of this
protocol and realized that socket family numbers are allocated
statically. This makes it impossible to create a new socket family
without first modifying include/linux/socket.h to bump the AF_MAX
definition. (Any attempt to call sock_register() with a family value
greater than or equal to AF_MAX will fail.)

Therefore I am submitting this RFC patch to find out whether it would be
acceptable to add a new socket family to the AF list without breaking
compatibility.

The socket family introduced in this patch is called AF_V4V, which
implies that the family would be Xen-specific. But we could also
consider adding a more generic, hypervisor-agnostic socket family for
inter-VM communication.

Signed-off-by: Julian Pidancet <julian.pidancet@citrix.com>
---
 include/linux/socket.h |    4 +++-
 include/linux/v4v.h    |   27 +++++++++++++++++++++++++++
 2 files changed, 30 insertions(+), 1 deletions(-)
 create mode 100644 include/linux/v4v.h

diff --git a/include/linux/socket.h b/include/linux/socket.h
index ba7b2e8..5e879d0 100644
--- a/include/linux/socket.h
+++ b/include/linux/socket.h
@@ -195,7 +195,8 @@ struct ucred {
 #define AF_CAIF		37	/* CAIF sockets			*/
 #define AF_ALG		38	/* Algorithm sockets		*/
 #define AF_NFC		39	/* NFC sockets			*/
-#define AF_MAX		40	/* For now.. */
+#define AF_V4V		40	/* Inter virtual domain sockets */
+#define AF_MAX		41	/* For now.. */
 
 /* Protocol families, same as address families. */
 #define PF_UNSPEC	AF_UNSPEC
@@ -238,6 +239,7 @@ struct ucred {
 #define PF_CAIF		AF_CAIF
 #define PF_ALG		AF_ALG
 #define PF_NFC		AF_NFC
+#define PF_V4V		AF_V4V
 #define PF_MAX		AF_MAX
 
 /* Maximum queue length specifiable by listen.  */
diff --git a/include/linux/v4v.h b/include/linux/v4v.h
new file mode 100644
index 0000000..172f67f
--- /dev/null
+++ b/include/linux/v4v.h
@@ -0,0 +1,27 @@
+/*
+ * linux/v4v.h
+ *
+ * Definitions for V4V network layer
+ *
+ * Authors: Julian Pidancet <julian.pidancet@citrix.com>
+ * Copyright (c) 2012 Citrix Systems
+ * All rights reserved.
+ *
+ */
+#ifndef V4V_KERNEL_H
+#define V4V_KERNEL_H
+
+#include <linux/types.h>
+#include <linux/socket.h>
+
+typedef struct {
+        __u32                   port;
+        __u16                   domain;
+} v4v_address;
+
+struct sockaddr_v4v {
+        __kernel_sa_family_t    sv4v_family;
+        v4v_address             sv4v_addr;
+};
+
+#endif /* V4V_KERNEL_H */
-- 
Julian Pidancet


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 18:39:24 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 18:39:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxMm2-0002l1-C4; Fri, 03 Aug 2012 18:38:58 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1SxMm1-0002kw-Nj
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 18:38:57 +0000
Received: from [85.158.138.51:45960] by server-1.bemta-3.messagelabs.com id
	07/21-31934-0CA1C105; Fri, 03 Aug 2012 18:38:56 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-10.tower-174.messagelabs.com!1344019134!26267575!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDY3NzY0MA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31167 invoked from network); 3 Aug 2012 18:38:56 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-10.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 3 Aug 2012 18:38:56 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q73Ic2xI009992
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 3 Aug 2012 18:38:04 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q73Ic1gE012080
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 3 Aug 2012 18:38:01 GMT
Received: from abhmt118.oracle.com (abhmt118.oracle.com [141.146.116.70])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q73Ic1wS027240; Fri, 3 Aug 2012 13:38:01 -0500
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 03 Aug 2012 11:38:00 -0700
Date: Fri, 3 Aug 2012 11:37:58 -0700
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Konrad Rzeszutek Wilk <konrad@darnok.org>
Message-ID: <20120803113758.4914281d@mantra.us.oracle.com>
In-Reply-To: <20120803133001.GA13750@andromeda.dapyr.net>
References: <1343745804-28028-1-git-send-email-konrad.wilk@oracle.com>
	<20120801155040.GB15812@phenom.dumpdata.com>
	<501A5EF7020000780009219C@nat28.tlf.novell.com>
	<20120802141710.GF16749@phenom.dumpdata.com>
	<20120802160403.02de484e@mantra.us.oracle.com>
	<20120803133001.GA13750@andromeda.dapyr.net>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.7.6 (GTK+ 2.18.9; x86_64-redhat-linux-gnu)
Mime-Version: 1.0
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: xen-devel <xen-devel@lists.xen.org>, Jan Beulich <JBeulich@suse.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH] Boot PV guests with more than 128GB (v2)
 for 3.7
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 3 Aug 2012 09:30:01 -0400
Konrad Rzeszutek Wilk <konrad@darnok.org> wrote:

> > > > > Note it is 0xffffffffc30b4000 - so already past the
> > > > > level2_kernel_pgt (L3[510]
> > > > > and in level2_fixmap_pgt territory (L3[511]).
> > > > > 
> > > > > At that stage we are still operating using the Xen provided
> > > > > pagetable - which look to have the L4[511][511] empty! Which
> > > > > sounds to me like a Xen tool-stack problem? Jan, have you seen
> > > > > something similar to this?
> > > > 
> > > > No we haven't, but I also don't think anyone tried to create as
> > > > big a DomU. I was, however, under the impression that DomU-s
> > > > this big had been created at Oracle before. Or was that only up
> > > > to 256Gb perhaps?
> > > 
> > > Mukesh do you recall? Was it with OVM2.2.2 which was 3.4 based?
> > > It might be that we did not have the 1TB hardware at that time
> > > yet.
> > 
> > Yes, in ovm2.x, I debugged/booted up to a 500GB domU. So it looks
> > like something got broken since then. I can debug later if it
> > becomes hot.
> 
> I got the kernel part fixed, but it's the toolstack that has bugs in
> it. If you recall - were there any patches in the toolstack for this,
> or did you just concentrate on the kernel?

Ah, I remember, it was an issue in the tool stack, xm, so I punted it
to the tools experts. They were busy, so we hoped xl would fix it.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 18:58:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 18:58:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxN4J-0002wX-32; Fri, 03 Aug 2012 18:57:51 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1SxN4H-0002wP-S7
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 18:57:50 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-15.tower-27.messagelabs.com!1344020249!2059825!1
X-Originating-IP: [63.239.67.10]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14933 invoked from network); 3 Aug 2012 18:57:30 -0000
Received: from emvm-gh1-uea09.nsa.gov (HELO nsa.gov) (63.239.67.10)
	by server-15.tower-27.messagelabs.com with SMTP;
	3 Aug 2012 18:57:30 -0000
X-TM-IMSS-Message-ID: <6c86992e0001f564@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov
	([63.239.67.10]) with ESMTP (TREND IMSS SMTP Service 7.1) id
	6c86992e0001f564 ; Fri, 3 Aug 2012 14:58:05 -0400
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id q73IvOuJ022868; 
	Fri, 3 Aug 2012 14:57:24 -0400
Message-ID: <501C1F14.9000505@tycho.nsa.gov>
Date: Fri, 03 Aug 2012 14:57:24 -0400
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Organization: National Security Agency
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120717 Thunderbird/14.0
MIME-Version: 1.0
To: xen-devel <xen-devel@lists.xen.org>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] Multicall result missing sign extension in Xen or Linux
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

While trying to figure out why a failing component of a multicall did not
properly return its result, I discovered that multicall results are not
sign-extended when placed in the unsigned long result field. For hypercalls
such as do_mmu_update which return a (signed) int, this results in Linux
incorrectly thinking the hypercall succeeded when it has actually failed
since arch/x86/xen/multicalls.c uses a signed long for "result" and checks
(b->entries[i].result < 0).

Is this a bug in Xen (using the wrong return type for do_mmu_update and other
hypercalls) or in Linux (assuming all returns are signed longs)? One or the
other needs to be changed, because the current setup is silently hiding
failed memory mapping operations.

-- 
Daniel De Graaf
National Security Agency

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 19:01:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 19:01:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxN7k-000354-QJ; Fri, 03 Aug 2012 19:01:24 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jeremy@goop.org>) id 1SxN6s-000340-5a
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 19:00:30 +0000
Received: from [85.158.138.51:51779] by server-1.bemta-3.messagelabs.com id
	02/DC-31934-DCF1C105; Fri, 03 Aug 2012 19:00:29 +0000
X-Env-Sender: jeremy@goop.org
X-Msg-Ref: server-16.tower-174.messagelabs.com!1344020427!30211261!1
X-Originating-IP: [74.207.240.146]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14213 invoked from network); 3 Aug 2012 19:00:28 -0000
Received: from claw.goop.org (HELO claw.goop.org) (74.207.240.146)
	by server-16.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 3 Aug 2012 19:00:28 -0000
Received: from saboo.goop.org (50-76-62-73-ip-static.hfc.comcastbusiness.net
	[50.76.62.73]) (Authenticated sender: smtp-saboo)
	by claw.goop.org (Postfix) with ESMTPSA id 25A593EA;
	Fri,  3 Aug 2012 12:00:26 -0700 (PDT)
Received: from saboo.goop.org (localhost [IPv6:::1])
	by saboo.goop.org (Postfix) with ESMTP id 58FE5205B0;
	Fri,  3 Aug 2012 12:00:21 -0700 (PDT)
Message-ID: <501C1FC5.40405@goop.org>
Date: Fri, 03 Aug 2012 12:00:21 -0700
From: Jeremy Fitzhardinge <jeremy@goop.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120717 Thunderbird/14.0
MIME-Version: 1.0
To: Daniel De Graaf <dgdegra@tycho.nsa.gov>
References: <501C1F14.9000505@tycho.nsa.gov>
In-Reply-To: <501C1F14.9000505@tycho.nsa.gov>
X-Enigmail-Version: 1.4.3
X-Mailman-Approved-At: Fri, 03 Aug 2012 19:01:23 +0000
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Multicall result missing sign extension in Xen or
	Linux
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/03/2012 11:57 AM, Daniel De Graaf wrote:
> While trying to figure out why a failing component of a multicall did not
> properly return its result, I discovered that multicall results are not
> sign-extended when placed in the unsigned long result field. For hypercalls
> such as do_mmu_update which return a (signed) int, this results in Linux
> incorrectly thinking the hypercall succeeded when it has actually failed
> since arch/x86/xen/multicalls.c uses a signed long for "result" and checks
> (b->entries[i].result < 0).
>
> Is this a bug in Xen (using the wrong return type for do_mmu_update and other
> hypercalls) or in Linux (assuming all returns are signed longs)? One or the
> other needs to be changed, because the current setup is silently hiding
> failed memory mapping operations.
>

Ah, that explains a long-standing mystery for me that I never got around
to investigating.

If Xen is populating a 64-bit result field with a non-sign-extended 32-bit
result, then it sounds like a Xen bug, but one that's effectively baked
into the ABI now.  Is there any risk of the unextended 32-bit error being
confused with a legitimate result?

    J

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 19:26:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 19:26:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxNVk-0003Lt-0c; Fri, 03 Aug 2012 19:26:12 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1SxNVh-0003Lo-PT
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 19:26:09 +0000
Received: from [85.158.143.99:19651] by server-1.bemta-4.messagelabs.com id
	6B/5F-24392-1D52C105; Fri, 03 Aug 2012 19:26:09 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-2.tower-216.messagelabs.com!1344021966!24860234!1
X-Originating-IP: [63.239.67.10]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17526 invoked from network); 3 Aug 2012 19:26:06 -0000
Received: from emvm-gh1-uea09.nsa.gov (HELO nsa.gov) (63.239.67.10)
	by server-2.tower-216.messagelabs.com with SMTP;
	3 Aug 2012 19:26:06 -0000
X-TM-IMSS-Message-ID: <6ca0c4540001fa4e@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov
	([63.239.67.10]) with ESMTP (TREND IMSS SMTP Service 7.1) id
	6ca0c4540001fa4e ; Fri, 3 Aug 2012 15:26:40 -0400
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id q73JQ36B024533; 
	Fri, 3 Aug 2012 15:26:03 -0400
Message-ID: <501C25CB.1000001@tycho.nsa.gov>
Date: Fri, 03 Aug 2012 15:26:03 -0400
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Organization: National Security Agency
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120717 Thunderbird/14.0
MIME-Version: 1.0
To: Jeremy Fitzhardinge <jeremy@goop.org>
References: <501C1F14.9000505@tycho.nsa.gov> <501C1FC5.40405@goop.org>
In-Reply-To: <501C1FC5.40405@goop.org>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Multicall result missing sign extension in Xen or
	Linux
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/03/2012 03:00 PM, Jeremy Fitzhardinge wrote:
> On 08/03/2012 11:57 AM, Daniel De Graaf wrote:
>> While trying to figure out why a failing component of a multicall did not
>> properly return its result, I discovered that multicall results are not
>> sign-extended when placed in the unsigned long result field. For hypercalls
>> such as do_mmu_update which return a (signed) int, this results in Linux
>> incorrectly thinking the hypercall succeeded when it has actually failed
>> since arch/x86/xen/multicalls.c uses a signed long for "result" and checks
>> (b->entries[i].result < 0).
>>
>> Is this a bug in Xen (using the wrong return type for do_mmu_update and other
>> hypercalls) or in Linux (assuming all returns are signed longs)? One or the
>> other needs to be changed, because the current setup is silently hiding
>> failed memory mapping operations.
>>
> 
> Ah, that explains a long-standing mystery for me that I never got around
> to investigating.
> 
> If Xen is populating a 64-bit result field with a non-sign-extended 32-bit
> result, then it sounds like a Xen bug, but one that's effectively baked
> into the ABI now.  Is there any risk of the unextended 32-bit error being
> confused with a legitimate result?
> 
>     J
> 
> 

There are hypercalls which will return a value that corresponds to a 32-bit
error result - XENMEM_current_reservation, for example - but these are not
likely to be called as part of a multicall. Most of the multicall users
return zero, error, or a count.

-- 
Daniel De Graaf
National Security Agency

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 19:47:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 19:47:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxNpr-0003Yk-Ur; Fri, 03 Aug 2012 19:46:59 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SxNpq-0003Yb-8o
	for xen-devel@lists.xensource.com; Fri, 03 Aug 2012 19:46:58 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1344023188!2064419!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc5NTM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24870 invoked from network); 3 Aug 2012 19:46:29 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 19:46:29 -0000
X-IronPort-AV: E=Sophos;i="4.77,708,1336348800"; d="scan'208";a="13846104"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Aug 2012 19:46:28 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 3 Aug 2012 20:46:28 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1SxNpM-0005zv-6Y;
	Fri, 03 Aug 2012 19:46:28 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1SxNpL-0003Ye-U6;
	Fri, 03 Aug 2012 20:46:28 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13543-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 3 Aug 2012 20:46:27 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13543: trouble: blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13543 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13543/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 13536
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 13536
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 13536
 build-amd64                   2 host-install(2)         broken REGR. vs. 13536
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 13536
 build-i386                    2 host-install(2)         broken REGR. vs. 13536

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  9ad379939b78
baseline version:
 xen                  3d17148e465c

------------------------------------------------------------
People who touched revisions under test:
  Christoph Egger <Christoph.Egger@amd.com>
  Dario Faggioli <dario.faggioli@citrix.com>
  David Vrabel <david.vrabel@citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Campbell <ian.campbell@eu.citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Olaf Hering <olaf@aepfle.de>
  Roger Pau Monne <roger.pau@citrix.com>
  Santosh Jodh <santosh.jodh@citrix.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 463 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 19:47:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 19:47:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxNpr-0003Yk-Ur; Fri, 03 Aug 2012 19:46:59 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SxNpq-0003Yb-8o
	for xen-devel@lists.xensource.com; Fri, 03 Aug 2012 19:46:58 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1344023188!2064419!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc5NTM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24870 invoked from network); 3 Aug 2012 19:46:29 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 19:46:29 -0000
X-IronPort-AV: E=Sophos;i="4.77,708,1336348800"; d="scan'208";a="13846104"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Aug 2012 19:46:28 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 3 Aug 2012 20:46:28 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1SxNpM-0005zv-6Y;
	Fri, 03 Aug 2012 19:46:28 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1SxNpL-0003Ye-U6;
	Fri, 03 Aug 2012 20:46:28 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13543-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 3 Aug 2012 20:46:27 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13543: trouble: blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13543 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13543/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 13536
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 13536
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 13536
 build-amd64                   2 host-install(2)         broken REGR. vs. 13536
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 13536
 build-i386                    2 host-install(2)         broken REGR. vs. 13536

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  9ad379939b78
baseline version:
 xen                  3d17148e465c

------------------------------------------------------------
People who touched revisions under test:
  Christoph Egger <Christoph.Egger@amd.com>
  Dario Faggioli <dario.faggioli@citrix.com>
  David Vrabel <david.vrabel@citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Campbell <ian.campbell@eu.citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Olaf Hering <olaf@aepfle.de>
  Roger Pau Monne <roger.pau@citrix.com>
  Santosh Jodh <santosh.jodh@citrix.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 463 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 19:50:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 19:50:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxNsb-0003gr-Ra; Fri, 03 Aug 2012 19:49:49 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jean.guyader@citrix.com>) id 1SxNsZ-0003g2-QA
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 19:49:47 +0000
Received: from [85.158.143.99:9909] by server-3.bemta-4.messagelabs.com id
	8B/4C-01511-B5B2C105; Fri, 03 Aug 2012 19:49:47 +0000
X-Env-Sender: jean.guyader@citrix.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1344023386!23130264!4
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc5NTM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24429 invoked from network); 3 Aug 2012 19:49:46 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-6.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 19:49:46 -0000
X-IronPort-AV: E=Sophos;i="4.77,708,1336348800"; d="scan'208";a="13846181"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Aug 2012 19:49:46 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 3 Aug 2012 20:49:46 +0100
Received: from spongy.cam.xci-test.com ([10.80.248.53] helo=spongy)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<jean.guyader@citrix.com>)	id 1SxNsX-00063g-RT;
	Fri, 03 Aug 2012 19:49:45 +0000
Received: by spongy (Postfix, from userid 2023)	id 00D7434045A; Fri,  3 Aug
	2012 20:50:57 +0100 (BST)
From: Jean Guyader <jean.guyader@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Fri, 3 Aug 2012 20:50:51 +0100
Message-ID: <1344023454-31425-3-git-send-email-jean.guyader@citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1344023454-31425-1-git-send-email-jean.guyader@citrix.com>
References: <1344023454-31425-1-git-send-email-jean.guyader@citrix.com>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="------------true"
Cc: Jean Guyader <jean.guyader@citrix.com>, Jan Beulich <JBeulich@suse.com>
Subject: [Xen-devel] [PATCH 2/5] xen: Introduce guest_handle_for_field
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--------------true
Content-Type: text/plain; charset="UTF-8"; format=fixed
Content-Transfer-Encoding: quoted-printable


This helper turns a field of a GUEST_HANDLE into a GUEST_HANDLE.

From: Jan Beulich <JBeulich@suse.com>
Signed-off-by: Jean Guyader <jean.guyader@citrix.com>
---
 xen/include/asm-x86/guest_access.h |    3 +++
 1 file changed, 3 insertions(+)


--------------true
Content-Type: text/x-patch;
	name="0002-xen-Introduce-guest_handle_for_field.patch"
Content-Disposition: attachment;
	filename="0002-xen-Introduce-guest_handle_for_field.patch"
Content-Transfer-Encoding: quoted-printable

diff --git a/xen/include/asm-x86/guest_access.h b/xen/include/asm-x86/gue=
st_access.h
index 2b429c2..e3ac1d6 100644
--- a/xen/include/asm-x86/guest_access.h
+++ b/xen/include/asm-x86/guest_access.h
@@ -51,6 +51,9 @@
     (XEN_GUEST_HANDLE(type)) { _x };            \
 })
 
+#define guest_handle_for_field(hnd, type, fld)          \
+    ((XEN_GUEST_HANDLE(type)) { &(hnd).p->fld })
+
 #define guest_handle_from_ptr(ptr, type)        \
     ((XEN_GUEST_HANDLE(type)) { (type *)ptr })
 #define const_guest_handle_from_ptr(ptr, type)  \

--------------true
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--------------true--


From xen-devel-bounces@lists.xen.org Fri Aug 03 19:50:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 19:50:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxNsb-0003gZ-3C; Fri, 03 Aug 2012 19:49:49 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jean.guyader@citrix.com>) id 1SxNsZ-0003g0-G4
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 19:49:47 +0000
Received: from [85.158.143.99:10062] by server-2.bemta-4.messagelabs.com id
	81/12-17938-A5B2C105; Fri, 03 Aug 2012 19:49:46 +0000
X-Env-Sender: jean.guyader@citrix.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1344023386!23130264!2
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc5NTM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24404 invoked from network); 3 Aug 2012 19:49:46 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-6.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 19:49:46 -0000
X-IronPort-AV: E=Sophos;i="4.77,708,1336348800"; d="scan'208";a="13846179"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Aug 2012 19:49:46 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 3 Aug 2012 20:49:45 +0100
Received: from spongy.cam.xci-test.com ([10.80.248.53] helo=spongy)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<jean.guyader@citrix.com>)	id 1SxNsX-00063d-Ni;
	Fri, 03 Aug 2012 19:49:45 +0000
Received: by spongy (Postfix, from userid 2023)	id D452834049D; Fri,  3 Aug
	2012 20:50:57 +0100 (BST)
From: Jean Guyader <jean.guyader@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Fri, 3 Aug 2012 20:50:50 +0100
Message-ID: <1344023454-31425-2-git-send-email-jean.guyader@citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1344023454-31425-1-git-send-email-jean.guyader@citrix.com>
References: <1344023454-31425-1-git-send-email-jean.guyader@citrix.com>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="------------true"
Cc: Jean Guyader <jean.guyader@citrix.com>
Subject: [Xen-devel] [PATCH 1/5] xen: add ssize_t
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--------------true
Content-Type: text/plain; charset="UTF-8"; format=fixed
Content-Transfer-Encoding: quoted-printable


Signed-off-by: Jean Guyader <jean.guyader@citrix.com>
---
 xen/include/asm-arm/types.h |    1 +
 xen/include/asm-x86/types.h |    6 ++++++
 2 files changed, 7 insertions(+)


--------------true
Content-Type: text/x-patch; name="0001-xen-add-ssize_t.patch"
Content-Disposition: attachment; filename="0001-xen-add-ssize_t.patch"
Content-Transfer-Encoding: quoted-printable

diff --git a/xen/include/asm-arm/types.h b/xen/include/asm-arm/types.h
index 48864f9..d2c5612 100644
--- a/xen/include/asm-arm/types.h
+++ b/xen/include/asm-arm/types.h
@@ -35,6 +35,7 @@ typedef u64 paddr_t;
 #define PRIpaddr "016llx"
 
 typedef unsigned long size_t;
+typedef long ssize_t;
 
 typedef char bool_t;
 #define test_and_set_bool(b)   xchg(&(b), 1)
diff --git a/xen/include/asm-x86/types.h b/xen/include/asm-x86/types.h
index 1c4c5d5..bb7ffc2 100644
--- a/xen/include/asm-x86/types.h
+++ b/xen/include/asm-x86/types.h
@@ -59,6 +59,12 @@ typedef char bool_t;
 #define test_and_set_bool(b)   xchg(&(b), 1)
 #define test_and_clear_bool(b) xchg(&(b), 0)
 
+#if defined(__i386__)
+typedef int ssize_t;
+#else /* __x86_64 */
+typedef long ssize_t;
+#endif
+
 #endif /* __ASSEMBLY__ */
 
 #endif /* __X86_TYPES_H__ */

--------------true
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--------------true--


From xen-devel-bounces@lists.xen.org Fri Aug 03 19:50:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 19:50:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxNsa-0003gN-No; Fri, 03 Aug 2012 19:49:48 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jean.guyader@citrix.com>) id 1SxNsZ-0003fz-8U
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 19:49:47 +0000
Received: from [85.158.143.99:10056] by server-1.bemta-4.messagelabs.com id
	3B/98-24392-A5B2C105; Fri, 03 Aug 2012 19:49:46 +0000
X-Env-Sender: jean.guyader@citrix.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1344023386!23130264!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc5NTM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24399 invoked from network); 3 Aug 2012 19:49:46 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-6.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 19:49:46 -0000
X-IronPort-AV: E=Sophos;i="4.77,708,1336348800"; d="scan'208";a="13846178"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Aug 2012 19:49:45 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 3 Aug 2012 20:49:45 +0100
Received: from spongy.cam.xci-test.com ([10.80.248.53] helo=spongy)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<jean.guyader@citrix.com>)	id 1SxNsX-00063a-Jg;
	Fri, 03 Aug 2012 19:49:45 +0000
Received: by spongy (Postfix, from userid 2023)	id A922434045A; Fri,  3 Aug
	2012 20:50:57 +0100 (BST)
From: Jean Guyader <jean.guyader@citrix.com>
To: xen-devel@lists.xen.org
Date: Fri, 3 Aug 2012 20:50:49 +0100
Message-ID: <1344023454-31425-1-git-send-email-jean.guyader@citrix.com>
X-Mailer: git-send-email 1.7.9.5
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="------------true"
Cc: Jean Guyader <jean.guyader@citrix.com>
Subject: [Xen-devel] [PATCH 0/5] RFC: V4V (v3)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--------------true
Content-Type: text/plain; charset="UTF-8"; format=fixed
Content-Transfer-Encoding: quoted-printable

v3 changes:
        - Switch to event channel
                - Allocate an unbound event channel
                  per domain.
                - Add a new v4v call to share the
                  event channel port.
        - Public headers with actual type definition
        - Align all the v4v types to 64 bits
        - Modify v4v MAGIC numbers because we won't
          be backward compatible anymore
        - Merge insert and insertv
        - Merge send and sendv
        - Turn all the lock prerequisites from comments
          to ASSERT()
        - Make use of write_atomic instead of volatile pointers
        - Merge v4v_memcpy_to_guest_ring and
          v4v_memcpy_to_guest_ring_from_guest
                - Introduce copy_from_guest_maybe that can take
                  a void * and a handle as src address.
        - TODO:
                - Add libv4v userspace code
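[Editor's note: the copy_from_guest_maybe item above can be sketched as a small
userspace model. The struct name, field layout, and the plain memcpy are
illustrative stand-ins for Xen's real guest-handle machinery (copy_from_guest),
not the actual implementation in the series:]

```c
#include <stddef.h>
#include <string.h>

/* Simplified model of "copy_from_guest_maybe": one copy routine whose
 * source may be either a plain void * (hypervisor-internal memory) or a
 * guest handle.  In real Xen code the handle branch would call
 * copy_from_guest(); here both branches are modelled with memcpy. */
typedef struct {
    const void *ptr;    /* set when the source is ordinary memory */
    const void *handle; /* stand-in for a real XEN_GUEST_HANDLE    */
} maybe_src_t;

static int copy_from_guest_maybe(void *dst, const maybe_src_t *src, size_t len)
{
    if (src->ptr)
        memcpy(dst, src->ptr, len);    /* internal pointer: direct copy   */
    else
        memcpy(dst, src->handle, len); /* guest handle: copy_from_guest() */
    return 0;                          /* 0 on success, as in the series  */
}
```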

v2 changes:
        - Cleanup plugin header
        - Include basic access control
        - Use guest_handle_for_field


Jan Beulich (1):
  xen: Introduce guest_handle_for_field

Jean Guyader (4):
  xen: add ssize_t
  xen: virq, remove VIRQ_XC_RESERVED
  xen: events, exposes evtchn_alloc_unbound_domain
  xen: Add V4V implementation

 xen/arch/x86/hvm/hvm.c             |    9 +-
 xen/arch/x86/x86_32/entry.S        |    2 +
 xen/arch/x86/x86_64/compat/entry.S |    2 +
 xen/arch/x86/x86_64/entry.S        |    2 +
 xen/common/Makefile                |    1 +
 xen/common/domain.c                |   13 +-
 xen/common/event_channel.c         |   33 +-
 xen/common/v4v.c                   | 1895 ++++++++++++++++++++++++++++++=
++++++
 xen/include/asm-arm/types.h        |    1 +
 xen/include/asm-x86/guest_access.h |    3 +
 xen/include/asm-x86/types.h        |    6 +
 xen/include/public/v4v.h           |  291 ++++++
 xen/include/public/xen.h           |    3 +-
 xen/include/xen/event.h            |    2 +
 xen/include/xen/sched.h            |    4 +
 xen/include/xen/v4v.h              |  134 +++
 xen/include/xen/v4v_utils.h        |  276 ++++++
 17 files changed, 2665 insertions(+), 12 deletions(-)
 create mode 100644 xen/common/v4v.c
 create mode 100644 xen/include/public/v4v.h
 create mode 100644 xen/include/xen/v4v.h
 create mode 100644 xen/include/xen/v4v_utils.h

--=20
1.7.9.5


--------------true
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--------------true--


From xen-devel-bounces@lists.xen.org Fri Aug 03 19:50:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 19:50:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxNsc-0003gy-70; Fri, 03 Aug 2012 19:49:50 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jean.guyader@citrix.com>) id 1SxNsa-0003g2-8z
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 19:49:48 +0000
Received: from [85.158.143.99:10076] by server-3.bemta-4.messagelabs.com id
	2C/4C-01511-B5B2C105; Fri, 03 Aug 2012 19:49:47 +0000
X-Env-Sender: jean.guyader@citrix.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1344023386!23130264!5
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc5NTM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24458 invoked from network); 3 Aug 2012 19:49:46 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-6.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 19:49:46 -0000
X-IronPort-AV: E=Sophos;i="4.77,708,1336348800"; d="scan'208";a="13846182"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Aug 2012 19:49:46 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 3 Aug 2012 20:49:46 +0100
Received: from spongy.cam.xci-test.com ([10.80.248.53] helo=spongy)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<jean.guyader@citrix.com>)	id 1SxNsY-00063m-1z;
	Fri, 03 Aug 2012 19:49:46 +0000
Received: by spongy (Postfix, from userid 2023)	id 3A0CD34045A; Fri,  3 Aug
	2012 20:50:58 +0100 (BST)
From: Jean Guyader <jean.guyader@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Fri, 3 Aug 2012 20:50:53 +0100
Message-ID: <1344023454-31425-5-git-send-email-jean.guyader@citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1344023454-31425-1-git-send-email-jean.guyader@citrix.com>
References: <1344023454-31425-1-git-send-email-jean.guyader@citrix.com>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="------------true"
Cc: Jean Guyader <jean.guyader@citrix.com>
Subject: [Xen-devel] [PATCH 4/5] xen: events,
	exposes evtchn_alloc_unbound_domain
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--------------true
Content-Type: text/plain; charset="UTF-8"; format=fixed
Content-Transfer-Encoding: quoted-printable


Expose evtchn_alloc_unbound_domain to the rest of
Xen so that unbound event channels can be allocated from within Xen.

Signed-off-by: Jean Guyader <jean.guyader@citrix.com>
---
 xen/common/event_channel.c |   33 +++++++++++++++++++++++++++------
 xen/include/xen/event.h    |    2 ++
 2 files changed, 29 insertions(+), 6 deletions(-)
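[Editor's note: the return convention this patch introduces (0 on success with
the port written through an out-parameter, a negative error propagated from the
port allocator otherwise) can be sketched as a self-contained toy model. The
fixed-size table, the in-use flags, and the literal error value are stand-ins;
the real code takes d->event_lock, calls get_free_port(), and marks the channel
ECS_UNBOUND:]

```c
#include <stdint.h>

#define NR_PORTS 8
typedef uint32_t evtchn_port_t;

struct domain { int port_in_use[NR_PORTS]; };

/* Stand-in for Xen's get_free_port(): lowest free port, or negative error. */
static int get_free_port(struct domain *d)
{
    for (int p = 0; p < NR_PORTS; p++)
        if (!d->port_in_use[p])
            return p;
    return -28; /* models -ENOSPC, spelled out to stay self-contained */
}

static int evtchn_alloc_unbound_domain(struct domain *d, evtchn_port_t *port)
{
    int free_port = get_free_port(d);

    if (free_port < 0)
        return free_port;          /* propagate the allocator's error      */
    d->port_in_use[free_port] = 1; /* real code sets chn->state = ECS_UNBOUND */
    *port = free_port;             /* hand the port back via out-parameter */
    return 0;                      /* success */
}
```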


--------------true
Content-Type: text/x-patch;
	name="0004-xen-events-exposes-evtchn_alloc_unbound_domain.patch"
Content-Disposition: attachment;
	filename="0004-xen-events-exposes-evtchn_alloc_unbound_domain.patch"
Content-Transfer-Encoding: quoted-printable

diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index 53777f8..067365b 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -51,6 +51,8 @@
=20
 #define consumer_is_xen(e) (!!(e)->xen_consumer)
=20
+static long __evtchn_close(struct domain *d, int port);
+
 /*
  * The function alloc_unbound_xen_event_channel() allows an arbitrary
  * notifier function to be specified. However, very few unique functions
@@ -161,18 +163,18 @@ static long evtchn_alloc_unbound(evtchn_alloc_unbou=
nd_t *alloc)
 {
     struct evtchn *chn;
     struct domain *d;
-    int            port;
+    evtchn_port_t  port;
     domid_t        dom =3D alloc->dom;
-    long           rc;
+    int            rc;
=20
     rc =3D rcu_lock_target_domain_by_id(dom, &d);
     if ( rc )
         return rc;
=20
-    spin_lock(&d->event_lock);
+    rc =3D evtchn_alloc_unbound_domain(d, &port);
+    if ( rc )
+        ERROR_EXIT_DOM(rc, d);
=20
-    if ( (port =3D get_free_port(d)) < 0 )
-        ERROR_EXIT_DOM(port, d);
     chn =3D evtchn_from_port(d, port);
=20
     rc =3D xsm_evtchn_unbound(d, chn, alloc->remote_dom);
@@ -186,12 +188,31 @@ static long evtchn_alloc_unbound(evtchn_alloc_unbou=
nd_t *alloc)
     alloc->port =3D port;
=20
  out:
-    spin_unlock(&d->event_lock);
     rcu_unlock_domain(d);
=20
     return rc;
 }
=20
+int evtchn_alloc_unbound_domain(struct domain *d, evtchn_port_t *port)
+{
+    struct evtchn *chn;
+    int            free_port =3D 0;
+
+    spin_lock(&d->event_lock);
+
+    if ( (free_port =3D get_free_port(d)) < 0 )
+        goto out;
+    chn =3D evtchn_from_port(d, free_port);
+    chn->state =3D ECS_UNBOUND;
+    chn->u.unbound.remote_domid =3D DOMID_INVALID;
+    *port =3D free_port;
+    /* Everything is fine, returns 0 */
+    free_port =3D 0;
+
+ out:
+    spin_unlock(&d->event_lock);
+    return free_port;
+}
=20
 static long evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind)
 {
diff --git a/xen/include/xen/event.h b/xen/include/xen/event.h
index 71c3e92..d89b0c1 100644
--- a/xen/include/xen/event.h
+++ b/xen/include/xen/event.h
@@ -69,6 +69,8 @@ int guest_enabled_event(struct vcpu *v, uint32_t virq);
 /* Notify remote end of a Xen-attached event channel.*/
 void notify_via_xen_event_channel(struct domain *ld, int lport);
=20
+int evtchn_alloc_unbound_domain(struct domain *d, evtchn_port_t *port);
+
 /* Internal event channel object accessors */
 #define bucket_from_port(d,p) \
     ((d)->evtchn[(p)/EVTCHNS_PER_BUCKET])

--------------true
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--------------true--


From xen-devel-bounces@lists.xen.org Fri Aug 03 19:50:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 19:50:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxNsb-0003gk-FT; Fri, 03 Aug 2012 19:49:49 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jean.guyader@citrix.com>) id 1SxNsZ-0003fz-Na
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 19:49:47 +0000
Received: from [85.158.143.99:10063] by server-1.bemta-4.messagelabs.com id
	DB/98-24392-A5B2C105; Fri, 03 Aug 2012 19:49:46 +0000
X-Env-Sender: jean.guyader@citrix.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1344023386!23130264!3
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc5NTM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24409 invoked from network); 3 Aug 2012 19:49:46 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-6.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 19:49:46 -0000
X-IronPort-AV: E=Sophos;i="4.77,708,1336348800"; d="scan'208";a="13846180"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Aug 2012 19:49:46 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 3 Aug 2012 20:49:46 +0100
Received: from spongy.cam.xci-test.com ([10.80.248.53] helo=spongy)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<jean.guyader@citrix.com>)	id 1SxNsX-00063j-VH;
	Fri, 03 Aug 2012 19:49:45 +0000
Received: by spongy (Postfix, from userid 2023)	id 1D51D34049D; Fri,  3 Aug
	2012 20:50:57 +0100 (BST)
From: Jean Guyader <jean.guyader@citrix.com>
To: xen-devel@lists.xen.org
Date: Fri, 3 Aug 2012 20:50:52 +0100
Message-ID: <1344023454-31425-4-git-send-email-jean.guyader@citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1344023454-31425-1-git-send-email-jean.guyader@citrix.com>
References: <1344023454-31425-1-git-send-email-jean.guyader@citrix.com>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="------------true"
Cc: Jean Guyader <jean.guyader@citrix.com>
Subject: [Xen-devel] [PATCH 3/5] xen: virq, remove VIRQ_XC_RESERVED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--------------true
Content-Type: text/plain; charset="UTF-8"; format=fixed
Content-Transfer-Encoding: quoted-printable


VIRQ_XC_RESERVED was reserved for V4V, but we have switched
to event channels, so this placeholder is no longer required.

Signed-off-by: Jean Guyader <jean.guyader@citrix.com>
---
 xen/include/public/xen.h |    1 -
 1 file changed, 1 deletion(-)


--------------true
Content-Type: text/x-patch;
	name="0003-xen-virq-remove-VIRQ_XC_RESERVED.patch"
Content-Disposition: attachment;
	filename="0003-xen-virq-remove-VIRQ_XC_RESERVED.patch"
Content-Transfer-Encoding: quoted-printable

diff --git a/xen/include/public/xen.h b/xen/include/public/xen.h
index b2f6c50..b19425b 100644
--- a/xen/include/public/xen.h
+++ b/xen/include/public/xen.h
@@ -157,7 +157,6 @@ DEFINE_XEN_GUEST_HANDLE(xen_pfn_t);
 #define VIRQ_CON_RING   8  /* G. (DOM0) Bytes received on console       =
     */
 #define VIRQ_PCPU_STATE 9  /* G. (DOM0) PCPU state changed              =
     */
 #define VIRQ_MEM_EVENT  10 /* G. (DOM0) A memory event has occured      =
     */
-#define VIRQ_XC_RESERVED 11 /* G. Reserved for XenClient                =
     */
 #define VIRQ_ENOMEM     12 /* G. (DOM0) Low on heap memory       */
=20
 /* Architecture-specific VIRQ definitions. */

--------------true
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--------------true--


From xen-devel-bounces@lists.xen.org Fri Aug 03 19:50:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 19:50:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxNsc-0003hB-Kt; Fri, 03 Aug 2012 19:49:50 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jean.guyader@citrix.com>) id 1SxNsa-0003fz-3W
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 19:49:48 +0000
Received: from [85.158.143.99:10077] by server-1.bemta-4.messagelabs.com id
	9C/98-24392-B5B2C105; Fri, 03 Aug 2012 19:49:47 +0000
X-Env-Sender: jean.guyader@citrix.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1344023386!23130264!6
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc5NTM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24491 invoked from network); 3 Aug 2012 19:49:46 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-6.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 19:49:46 -0000
X-IronPort-AV: E=Sophos;i="4.77,708,1336348800"; d="scan'208";a="13846183"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Aug 2012 19:49:46 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 3 Aug 2012 20:49:46 +0100
Received: from spongy.cam.xci-test.com ([10.80.248.53] helo=spongy)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<jean.guyader@citrix.com>)	id 1SxNsY-00063n-3f;
	Fri, 03 Aug 2012 19:49:46 +0000
Received: by spongy (Postfix, from userid 2023)	id 5091734049D; Fri,  3 Aug
	2012 20:50:58 +0100 (BST)
From: Jean Guyader <jean.guyader@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Fri, 3 Aug 2012 20:50:54 +0100
Message-ID: <1344023454-31425-6-git-send-email-jean.guyader@citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1344023454-31425-1-git-send-email-jean.guyader@citrix.com>
References: <1344023454-31425-1-git-send-email-jean.guyader@citrix.com>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="------------true"
Cc: Jean Guyader <jean.guyader@citrix.com>
Subject: [Xen-devel] [PATCH 5/5] xen: Add V4V implementation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--------------true
Content-Type: text/plain; charset="UTF-8"; format=fixed
Content-Transfer-Encoding: quoted-printable


Set up v4v when a domain is created and clean it up
when a domain dies. Wire up the v4v hypercall.

Include v4v internal and public headers.

Signed-off-by: Jean Guyader <jean.guyader@citrix.com>
---
 xen/arch/x86/hvm/hvm.c             |    9 +-
 xen/arch/x86/x86_32/entry.S        |    2 +
 xen/arch/x86/x86_64/compat/entry.S |    2 +
 xen/arch/x86/x86_64/entry.S        |    2 +
 xen/common/Makefile                |    1 +
 xen/common/domain.c                |   13 +-
 xen/common/v4v.c                   | 1895 ++++++++++++++++++++++++++++++=
++++++
 xen/include/public/v4v.h           |  291 ++++++
 xen/include/public/xen.h           |    2 +-
 xen/include/xen/sched.h            |    4 +
 xen/include/xen/v4v.h              |  134 +++
 xen/include/xen/v4v_utils.h        |  276 ++++++
 12 files changed, 2626 insertions(+), 5 deletions(-)
 create mode 100644 xen/common/v4v.c
 create mode 100644 xen/include/public/v4v.h
 create mode 100644 xen/include/xen/v4v.h
 create mode 100644 xen/include/xen/v4v_utils.h
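For reference, the free-space arithmetic implemented by v4v_ringbuf_payload_space()
in the attached patch can be sketched as a standalone model. This is not the
hypervisor code: MSG_HDR_SIZE and ALIGN here are assumptions standing in for
sizeof(struct v4v_ring_message_header) and the V4V_ROUNDUP(1) granularity.

```c
#include <stdint.h>

/* Standalone model of the ring free-space computation.
 * MSG_HDR_SIZE and ALIGN are illustrative assumptions; the real values
 * come from struct v4v_ring_message_header and V4V_ROUNDUP() in the patch. */
#define MSG_HDR_SIZE 16u
#define ALIGN        16u  /* V4V_ROUNDUP(1) rounds 1 up to this granularity */

static int32_t payload_space(uint32_t len, uint32_t tx_ptr, uint32_t rx_ptr)
{
    int32_t ret;

    if (rx_ptr == tx_ptr)       /* empty ring: all but one message header */
        return (int32_t)(len - MSG_HDR_SIZE);

    ret = (int32_t)rx_ptr - (int32_t)tx_ptr;
    if (ret < 0)                /* tx_ptr has wrapped past rx_ptr */
        ret += (int32_t)len;

    ret -= MSG_HDR_SIZE;        /* room for the next message header */
    ret -= ALIGN;               /* keep tx_ptr from catching up to rx_ptr */

    return ret < 0 ? 0 : ret;
}
```

With a 4096-byte ring, an empty ring reports 4080 bytes free, while tx_ptr
ahead of rx_ptr wraps around the ring length before the header and alignment
reservations are subtracted.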


--------------true
Content-Type: text/x-patch; name="0005-xen-Add-V4V-implementation.patch"
Content-Disposition: attachment;
	filename="0005-xen-Add-V4V-implementation.patch"
Content-Transfer-Encoding: quoted-printable

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 22c136b..2671069 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -3125,7 +3125,8 @@ static hvm_hypercall_t *hvm_hypercall32_table[NR_hy=
percalls] =3D {
     HYPERCALL(set_timer_op),
     HYPERCALL(hvm_op),
     HYPERCALL(sysctl),
-    HYPERCALL(tmem_op)
+    HYPERCALL(tmem_op),
+    HYPERCALL(v4v_op)
 };
=20
 #else /* defined(__x86_64__) */
@@ -3210,7 +3211,8 @@ static hvm_hypercall_t *hvm_hypercall64_table[NR_hy=
percalls] =3D {
     HYPERCALL(set_timer_op),
     HYPERCALL(hvm_op),
     HYPERCALL(sysctl),
-    HYPERCALL(tmem_op)
+    HYPERCALL(tmem_op),
+    HYPERCALL(v4v_op)
 };
=20
 #define COMPAT_CALL(x)                                        \
@@ -3227,7 +3229,8 @@ static hvm_hypercall_t *hvm_hypercall32_table[NR_hy=
percalls] =3D {
     COMPAT_CALL(set_timer_op),
     HYPERCALL(hvm_op),
     HYPERCALL(sysctl),
-    HYPERCALL(tmem_op)
+    HYPERCALL(tmem_op),
+    HYPERCALL(v4v_op)
 };
=20
 #endif /* defined(__x86_64__) */
diff --git a/xen/arch/x86/x86_32/entry.S b/xen/arch/x86/x86_32/entry.S
index 2982679..5b1f55b 100644
--- a/xen/arch/x86/x86_32/entry.S
+++ b/xen/arch/x86/x86_32/entry.S
@@ -700,6 +700,7 @@ ENTRY(hypercall_table)
         .long do_domctl
         .long do_kexec_op
         .long do_tmem_op
+        .long do_v4v_op
         .rept __HYPERVISOR_arch_0-((.-hypercall_table)/4)
         .long do_ni_hypercall
         .endr
@@ -748,6 +749,7 @@ ENTRY(hypercall_args_table)
         .byte 1 /* do_domctl            */
         .byte 2 /* do_kexec_op          */
         .byte 1 /* do_tmem_op           */
+        .byte 5 /* do_v4v_op		*/
         .rept __HYPERVISOR_arch_0-(.-hypercall_args_table)
         .byte 0 /* do_ni_hypercall      */
         .endr
diff --git a/xen/arch/x86/x86_64/compat/entry.S b/xen/arch/x86/x86_64/com=
pat/entry.S
index f49ff2d..6b838d3 100644
--- a/xen/arch/x86/x86_64/compat/entry.S
+++ b/xen/arch/x86/x86_64/compat/entry.S
@@ -414,6 +414,7 @@ ENTRY(compat_hypercall_table)
         .quad do_domctl
         .quad compat_kexec_op
         .quad do_tmem_op
+        .quad do_v4v_op
         .rept __HYPERVISOR_arch_0-((.-compat_hypercall_table)/8)
         .quad compat_ni_hypercall
         .endr
@@ -462,6 +463,7 @@ ENTRY(compat_hypercall_args_table)
         .byte 1 /* do_domctl                */
         .byte 2 /* compat_kexec_op          */
         .byte 1 /* do_tmem_op               */
+        .byte 5 /* do_v4v_op		    */
         .rept __HYPERVISOR_arch_0-(.-compat_hypercall_args_table)
         .byte 0 /* compat_ni_hypercall      */
         .endr
diff --git a/xen/arch/x86/x86_64/entry.S b/xen/arch/x86/x86_64/entry.S
index 997bc94..e6a7fdd 100644
--- a/xen/arch/x86/x86_64/entry.S
+++ b/xen/arch/x86/x86_64/entry.S
@@ -707,6 +707,7 @@ ENTRY(hypercall_table)
         .quad do_domctl
         .quad do_kexec_op
         .quad do_tmem_op
+        .quad do_v4v_op
         .rept __HYPERVISOR_arch_0-((.-hypercall_table)/8)
         .quad do_ni_hypercall
         .endr
@@ -755,6 +756,7 @@ ENTRY(hypercall_args_table)
         .byte 1 /* do_domctl            */
         .byte 2 /* do_kexec             */
         .byte 1 /* do_tmem_op           */
+        .byte 5 /* do_v4v_op		*/
         .rept __HYPERVISOR_arch_0-(.-hypercall_args_table)
         .byte 0 /* do_ni_hypercall      */
         .endr
diff --git a/xen/common/Makefile b/xen/common/Makefile
index 9eba8bc..fe3c72c 100644
--- a/xen/common/Makefile
+++ b/xen/common/Makefile
@@ -45,6 +45,7 @@ obj-y +=3D tmem_xen.o
 obj-y +=3D radix-tree.o
 obj-y +=3D rbtree.o
 obj-y +=3D lzo.o
+obj-y +=3D v4v.o
=20
 obj-bin-$(CONFIG_X86) +=3D $(foreach n,decompress bunzip2 unxz unlzma un=
lzo,$(n).init.o)
=20
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 4c5d241..1600f45 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -195,7 +195,8 @@ struct domain *domain_create(
 {
     struct domain *d, **pd;
     enum { INIT_xsm =3D 1u<<0, INIT_watchdog =3D 1u<<1, INIT_rangeset =3D=
 1u<<2,
-           INIT_evtchn =3D 1u<<3, INIT_gnttab =3D 1u<<4, INIT_arch =3D 1=
u<<5 };
+           INIT_evtchn =3D 1u<<3, INIT_gnttab =3D 1u<<4, INIT_arch =3D 1=
u<<5,
+           INIT_v4v =3D 1u<<6 };
     int init_status =3D 0;
     int poolid =3D CPUPOOLID_NONE;
=20
@@ -305,6 +306,13 @@ struct domain *domain_create(
         spin_unlock(&domlist_update_lock);
     }
=20
+    if ( !is_idle_domain(d) )
+    {
+        if ( v4v_init(d) !=3D 0 )
+            goto fail;
+        init_status |=3D INIT_v4v;
+    }
+
     return d;
=20
  fail:
@@ -313,6 +321,8 @@ struct domain *domain_create(
     xfree(d->mem_event);
     if ( init_status & INIT_arch )
         arch_domain_destroy(d);
+    if ( init_status & INIT_v4v )
+	v4v_destroy(d);
     if ( init_status & INIT_gnttab )
         grant_table_destroy(d);
     if ( init_status & INIT_evtchn )
@@ -466,6 +476,7 @@ int domain_kill(struct domain *d)
         domain_pause(d);
         d->is_dying =3D DOMDYING_dying;
         spin_barrier(&d->domain_lock);
+        v4v_destroy(d);
         evtchn_destroy(d);
         gnttab_release_mappings(d);
         tmem_destroy(d->tmem);
diff --git a/xen/common/v4v.c b/xen/common/v4v.c
new file mode 100644
index 0000000..8b96b38
--- /dev/null
+++ b/xen/common/v4v.c
@@ -0,0 +1,1895 @@
+/***********************************************************************=
*******
+ * V4V
+ *
+ * Version 2 of v2v (Virtual-to-Virtual)
+ *
+ * Copyright (c) 2010, Citrix Systems
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307 =
 USA
+ */
+
+#include <xen/config.h>
+#include <xen/mm.h>
+#include <xen/compat.h>
+#include <xen/init.h>
+#include <xen/lib.h>
+#include <xen/errno.h>
+#include <xen/sched.h>
+#include <xen/domain.h>
+#include <xen/v4v.h>
+#include <xen/event.h>
+#include <xen/guest_access.h>
+#include <asm/paging.h>
+#include <asm/p2m.h>
+#include <xen/keyhandler.h>
+#include <xen/v4v_utils.h>
+
+#ifdef V4V_DEBUG
+#define v4v_dprintk(format, args...)            \
+    do {                                        \
+        printk("%s:%d " format,                 \
+               __FILE__, __LINE__, ## args );   \
+    } while ( 1 =3D=3D 0 )
+#else
+#define v4v_dprintk(format, ... ) (void)0
+#endif
+
+
+struct list_head viprules =3D LIST_HEAD_INIT(viprules);
+static DEFINE_RWLOCK(viprules_lock);
+
+DEFINE_XEN_GUEST_HANDLE(uint8_t);
+static struct v4v_ring_info *v4v_ring_find_info(struct domain *d,
+                                                v4v_ring_id_t *id);
+
+static struct v4v_ring_info *v4v_ring_find_info_by_addr(struct domain *d=
,
+                                                        struct v4v_addr =
*a,
+                                                        domid_t p);
+
+typedef struct internal_v4v_iov
+{
+    XEN_GUEST_HANDLE(v4v_iov_t) guest_iov;
+    v4v_iov_t                   *xen_iov;
+} internal_v4v_iov_t;
+
+/*
+ * locks
+ */
+
+/*
+ * locking is organized as follows:
+ *
+ * the global lock v4v_lock: L1 protects the v4v elements
+ * of all struct domain *d in the system; it does not
+ * protect any of the elements of d->v4v, just their
+ * addresses. By extension, since the destruction of
+ * a domain with a non-NULL d->v4v will need to free
+ * the d->v4v pointer, holding this lock guarantees
+ * that no domain pointer in which v4v is interested
+ * becomes invalid whilst this lock is held.
+ */
+
+static DEFINE_RWLOCK(v4v_lock); /* L1 */
+
+/*
+ * the lock d->v4v->lock: L2: Read on L2 protects the hash table,
+ * the elements in the hash table d->v4v->ring_hash, and
+ * the node and id fields in struct v4v_ring_info in the
+ * hash table. Write on L2 protects all of the elements of
+ * struct v4v_ring_info. To take L2 you must already have R(L1);
+ * W(L1) implies W(L2) and L3.
+ *
+ * the lock v4v_ring_info *ringinfo; ringinfo->lock: L3:
+ * protects len, tx_ptr, the guest ring, the
+ * guest ring_data and the pending list. To take L3 you must
+ * already have R(L2). W(L2) implies L3.
+ */
+
+
+/*
+ * Debugs
+ */
+
+#ifdef V4V_DEBUG
+static void
+v4v_hexdump(void *_p, int len)
+{
+    uint8_t *buf =3D (uint8_t *)_p;
+    int i, j;
+
+    for ( i =3D 0; i < len; i +=3D 16 )
+    {
+        printk(KERN_ERR "%p:", &buf[i]);
+        for ( j =3D 0; j < 16; ++j )
+        {
+            int k =3D i + j;
+            if ( k < len )
+                printk(" %02x", buf[k]);
+            else
+                printk("   ");
+        }
+        printk(" ");
+
+        for ( j =3D 0; j < 16; ++j )
+        {
+            int k =3D i + j;
+            if ( k < len )
+                printk("%c", ((buf[k] > 32) && (buf[k] < 127)) ? buf[k] =
: '.');
+            else
+                printk(" ");
+        }
+        printk("\n");
+    }
+}
+#endif
+
+
+/*
+ * Event channel
+ */
+
+static void
+v4v_signal_domain(struct domain *d)
+{
+    v4v_dprintk("send guest VIRQ_V4V domid:%d\n", d->domain_id);
+
+    evtchn_send(d, d->v4v->evtchn_port);
+}
+
+static void
+v4v_signal_domid(domid_t id)
+{
+    struct domain *d =3D get_domain_by_id(id);
+    if ( !d )
+        return;
+    v4v_signal_domain(d);
+    put_domain(d);
+}
+
+
+/*
+ * ring buffer
+ */
+
+static void
+v4v_ring_unmap(struct v4v_ring_info *ring_info)
+{
+    int i;
+
+    ASSERT(spin_is_locked(&ring_info->lock));
+
+    for ( i =3D 0; i < ring_info->npage; ++i )
+    {
+        if ( !ring_info->mfn_mapping[i] )
+            continue;
+        v4v_dprintk("unmapping page %p from %p\n",
+                    (void*)mfn_x(ring_info->mfns[i]),
+                    ring_info->mfn_mapping[i]);
+
+        unmap_domain_page(ring_info->mfn_mapping[i]);
+        ring_info->mfn_mapping[i] =3D NULL;
+    }
+}
+
+static uint8_t *
+v4v_ring_map_page(struct v4v_ring_info *ring_info, int i)
+{
+    ASSERT(spin_is_locked(&ring_info->lock));
+
+    if ( i >=3D ring_info->npage )
+        return NULL;
+    if ( ring_info->mfn_mapping[i] )
+        return ring_info->mfn_mapping[i];
+    ring_info->mfn_mapping[i] =3D map_domain_page(mfn_x(ring_info->mfns[=
i]));
+
+    v4v_dprintk("mapping page %p to %p\n",
+                (void *)mfn_x(ring_info->mfns[i]),
+                ring_info->mfn_mapping[i]);
+    return ring_info->mfn_mapping[i];
+}
+
+static int
+v4v_memcpy_from_guest_ring(void *_dst, struct v4v_ring_info *ring_info,
+                           uint32_t offset, uint32_t len)
+{
+    int page =3D offset >> PAGE_SHIFT;
+    uint8_t *src;
+    uint8_t *dst =3D _dst;
+
+    ASSERT(spin_is_locked(&ring_info->lock));
+
+    offset &=3D PAGE_SIZE - 1;
+
+    while ( (offset + len) > PAGE_SIZE )
+    {
+        src =3D v4v_ring_map_page(ring_info, page);
+
+        if ( !src )
+            return -EFAULT;
+
+        v4v_dprintk("memcpy(%p,%p+%d,%d)\n",
+                    dst, src, offset,
+                    (int)(PAGE_SIZE - offset));
+        memcpy(dst, src + offset, PAGE_SIZE - offset);
+
+        page++;
+        len -=3D PAGE_SIZE - offset;
+        dst +=3D PAGE_SIZE - offset;
+        offset =3D 0;
+    }
+
+    src =3D v4v_ring_map_page(ring_info, page);
+    if ( !src )
+        return -EFAULT;
+
+    v4v_dprintk("memcpy(%p,%p+%d,%d)\n", dst, src, offset, len);
+    memcpy(dst, src + offset, len);
+
+    return 0;
+}
+
+static int
+v4v_update_tx_ptr(struct v4v_ring_info *ring_info, uint32_t tx_ptr)
+{
+    uint8_t *dst =3D v4v_ring_map_page(ring_info, 0);
+    uint32_t *p =3D (uint32_t *)(dst + offsetof(v4v_ring_t, tx_ptr));
+
+    ASSERT(spin_is_locked(&ring_info->lock));
+
+    if ( !dst )
+        return -EFAULT;
+    write_atomic(p, tx_ptr);
+    mb();
+    return 0;
+}
+
+static int
+v4v_copy_from_guest_maybe(void *dst, void *src,
+                          XEN_GUEST_HANDLE(uint8_t) src_hnd,
+                          uint32_t len)
+{
+    int rc =3D 0;
+
+    if ( src )
+        memcpy(dst, src, len);
+    else
+        rc =3D copy_from_guest(dst, src_hnd, len);
+    return rc;
+}
+
+static int
+v4v_memcpy_to_guest_ring(struct v4v_ring_info *ring_info,
+                         uint32_t offset,
+                         void *src,
+                         XEN_GUEST_HANDLE(uint8_t) src_hnd,
+                         uint32_t len)
+{
+    int page =3D offset >> PAGE_SHIFT;
+    uint8_t *dst;
+
+    ASSERT(spin_is_locked(&ring_info->lock));
+
+    offset &=3D PAGE_SIZE - 1;
+
+    while ( (offset + len) > PAGE_SIZE )
+    {
+        dst =3D v4v_ring_map_page(ring_info, page);
+        if ( !dst )
+            return -EFAULT;
+
+        if ( v4v_copy_from_guest_maybe(dst + offset, src, src_hnd,
+                                       PAGE_SIZE - offset) )
+            return -EFAULT;
+
+        page++;
+        len -=3D PAGE_SIZE - offset;
+        if ( src )
+            src +=3D (PAGE_SIZE - offset);
+        else
+            guest_handle_add_offset(src_hnd, PAGE_SIZE - offset);
+        offset =3D 0;
+    }
+
+    dst =3D v4v_ring_map_page(ring_info, page);
+    if ( !dst )
+        return -EFAULT;
+
+    if ( v4v_copy_from_guest_maybe(dst + offset, src, src_hnd, len) )
+        return -EFAULT;
+
+    return 0;
+}
+
+static int
+v4v_ringbuf_get_rx_ptr(struct domain *d, struct v4v_ring_info *ring_info=
,
+                        uint32_t * rx_ptr)
+{
+    v4v_ring_t *ringp;
+
+    if ( ring_info->npage =3D=3D 0 )
+        return -1;
+
+    ringp =3D map_domain_page(mfn_x(ring_info->mfns[0]));
+
+    v4v_dprintk("v4v_ringbuf_payload_space: mapped %p to %p\n",
+                (void *)mfn_x(ring_info->mfns[0]), ringp);
+    if ( !ringp )
+        return -1;
+
+    write_atomic(rx_ptr, ringp->rx_ptr);
+    mb();
+
+    unmap_domain_page(ringp);
+    return 0;
+}
+
+uint32_t
+v4v_ringbuf_payload_space(struct domain * d, struct v4v_ring_info * ring=
_info)
+{
+    v4v_ring_t ring;
+    int32_t ret;
+
+    ring.tx_ptr =3D ring_info->tx_ptr;
+    ring.len =3D ring_info->len;
+
+    if ( v4v_ringbuf_get_rx_ptr(d, ring_info, &ring.rx_ptr) )
+        return 0;
+
+    v4v_dprintk("v4v_ringbuf_payload_space:tx_ptr=3D%d rx_ptr=3D%d\n",
+                (int)ring.tx_ptr, (int)ring.rx_ptr);
+    if ( ring.rx_ptr =3D=3D ring.tx_ptr )
+        return ring.len - sizeof (struct v4v_ring_message_header);
+
+    ret =3D ring.rx_ptr - ring.tx_ptr;
+    if ( ret < 0 )
+        ret +=3D ring.len;
+
+    ret -=3D sizeof (struct v4v_ring_message_header);
+    ret -=3D V4V_ROUNDUP(1);
+
+    return (ret < 0) ? 0 : ret;
+}
+
+static unsigned long
+v4v_iov_copy(v4v_iov_t *iov, internal_v4v_iov_t *iovs)
+{
+    if (iovs->xen_iov =3D=3D NULL)
+    {
+        return copy_from_guest(iov, iovs->guest_iov, 1);
+    }
+    else
+    {
+        *iov =3D *(iovs->xen_iov);
+        return 0;
+    }
+}
+
+static void
+v4v_iov_add_offset(internal_v4v_iov_t *iovs, int offset)
+{
+    if (iovs->xen_iov =3D=3D NULL)
+        guest_handle_add_offset(iovs->guest_iov, offset);
+    else
+        iovs->xen_iov +=3D offset;
+}
+
+static size_t
+v4v_iov_count(internal_v4v_iov_t iovs, int niov)
+{
+    v4v_iov_t iov;
+    size_t ret =3D 0;
+
+    while ( niov-- )
+    {
+        if ( v4v_iov_copy(&iov, &iovs) )
+            return -EFAULT;
+
+        ret +=3D iov.iov_len;
+        v4v_iov_add_offset(&iovs, 1);
+    }
+
+    return ret;
+}
+
+static ssize_t
+v4v_ringbuf_insertv(struct domain *d,
+                    struct v4v_ring_info *ring_info,
+                    v4v_ring_id_t *src_id, uint32_t proto,
+                    internal_v4v_iov_t iovs, uint32_t niov,
+                    size_t len)
+{
+    v4v_ring_t ring;
+    struct v4v_ring_message_header mh =3D { 0 };
+    int32_t sp;
+    int32_t happy_ret;
+    int32_t ret =3D 0;
+    XEN_GUEST_HANDLE(uint8_t) empty_hnd =3D { 0 };
+
+    ASSERT(spin_is_locked(&ring_info->lock));
+
+    happy_ret =3D len;
+
+    if ( (V4V_ROUNDUP(len) + sizeof (struct v4v_ring_message_header) ) >=
=3D
+            ring_info->len)
+        return -EMSGSIZE;
+
+    do
+    {
+        if ( (ret =3D v4v_memcpy_from_guest_ring(&ring, ring_info, 0,
+                                               sizeof (ring))) )
+            break;
+
+        ring.tx_ptr =3D ring_info->tx_ptr;
+        ring.len =3D ring_info->len;
+
+        v4v_dprintk("ring.tx_ptr=3D%d ring.rx_ptr=3D%d ring.len=3D%d rin=
g_info->tx_ptr=3D%d\n",
+                    ring.tx_ptr, ring.rx_ptr, ring.len, ring_info->tx_pt=
r);
+
+        if ( ring.rx_ptr =3D=3D ring.tx_ptr )
+            sp =3D ring_info->len;
+        else
+        {
+            sp =3D ring.rx_ptr - ring.tx_ptr;
+            if ( sp < 0 )
+                sp +=3D ring.len;
+        }
+
+        if ( (V4V_ROUNDUP(len) + sizeof (struct v4v_ring_message_header)=
) >=3D sp )
+        {
+            v4v_dprintk("EAGAIN\n");
+            ret =3D -EAGAIN;
+            break;
+        }
+
+        mh.len =3D len + sizeof (struct v4v_ring_message_header);
+        mh.source =3D src_id->addr;
+        mh.protocol =3D proto;
+
+        if ( (ret =3D v4v_memcpy_to_guest_ring(ring_info,
+                                             ring.tx_ptr + sizeof (v4v_r=
ing_t),
+                                             &mh, empty_hnd,
+                                             sizeof (mh))) )
+            break;
+
+        ring.tx_ptr +=3D sizeof (mh);
+        if ( ring.tx_ptr =3D=3D ring_info->len )
+            ring.tx_ptr =3D 0;
+
+        while ( niov-- )
+        {
+            XEN_GUEST_HANDLE(uint8_t) buf_hnd;
+            v4v_iov_t iov;
+
+            if ( v4v_iov_copy(&iov, &iovs) )
+            {
+                ret =3D -EFAULT;
+                break;
+            }
+
+            buf_hnd.p =3D (uint8_t *)iov.iov_base; //FIXME
+            len =3D iov.iov_len;
+
+            if ( unlikely(!guest_handle_okay(buf_hnd, len)) )
+            {
+                ret =3D -EFAULT;
+                break;
+            }
+
+            sp =3D ring.len - ring.tx_ptr;
+
+            if ( len > sp )
+            {
+                ret =3D v4v_memcpy_to_guest_ring(ring_info,
+                        ring.tx_ptr + sizeof (v4v_ring_t),
+                        NULL, buf_hnd, sp);
+                if ( ret )
+                    break;
+
+                ring.tx_ptr =3D 0;
+                len -=3D sp;
+                guest_handle_add_offset(buf_hnd, sp);
+            }
+
+            ret =3D v4v_memcpy_to_guest_ring(ring_info,
+                    ring.tx_ptr + sizeof (v4v_ring_t),
+                    NULL, buf_hnd, len);
+            if ( ret )
+                break;
+
+            ring.tx_ptr +=3D len;
+
+            if ( ring.tx_ptr =3D=3D ring_info->len )
+                ring.tx_ptr =3D 0;
+
+            v4v_iov_add_offset(&iovs, 1);
+        }
+        if ( ret )
+            break;
+
+        ring.tx_ptr =3D V4V_ROUNDUP(ring.tx_ptr);
+
+        if ( ring.tx_ptr >=3D ring_info->len )
+            ring.tx_ptr -=3D ring_info->len;
+
+        mb();
+        ring_info->tx_ptr =3D ring.tx_ptr;
+        if ( (ret =3D v4v_update_tx_ptr(ring_info, ring.tx_ptr)) )
+            break;
+    }
+    while ( 0 );
+
+    v4v_ring_unmap(ring_info);
+
+    return ret ? ret : happy_ret;
+}
+
+
+
+/* pending */
+static void
+v4v_pending_remove_ent(struct v4v_pending_ent *ent)
+{
+    hlist_del(&ent->node);
+    xfree(ent);
+}
+
+static void
+v4v_pending_remove_all(struct v4v_ring_info *info)
+{
+    struct hlist_node *node, *next;
+    struct v4v_pending_ent *pending_ent;
+
+    ASSERT(spin_is_locked(&info->lock));
+    hlist_for_each_entry_safe(pending_ent, node, next, &info->pending,
+            node) v4v_pending_remove_ent(pending_ent);
+}
+
+static void
+v4v_pending_notify(struct domain *caller_d, struct hlist_head *to_notify=
)
+{
+    struct hlist_node *node, *next;
+    struct v4v_pending_ent *pending_ent;
+
+    ASSERT(rw_is_locked(&v4v_lock));
+
+    hlist_for_each_entry_safe(pending_ent, node, next, to_notify, node)
+    {
+        hlist_del(&pending_ent->node);
+        v4v_signal_domid(pending_ent->id);
+        xfree(pending_ent);
+    }
+
+}
+
+static void
+v4v_pending_find(struct domain *d, struct v4v_ring_info *ring_info,
+                 uint32_t payload_space, struct hlist_head *to_notify)
+{
+    struct hlist_node *node, *next;
+    struct v4v_pending_ent *ent;
+
+    ASSERT(rw_is_locked(&d->v4v->lock));
+
+    spin_lock(&ring_info->lock);
+    hlist_for_each_entry_safe(ent, node, next, &ring_info->pending, node=
)
+    {
+        if ( payload_space >=3D ent->len )
+        {
+            hlist_del(&ent->node);
+            hlist_add_head(&ent->node, to_notify);
+        }
+    }
+    spin_unlock(&ring_info->lock);
+}
+
+/*caller must have L3 */
+static int
+v4v_pending_queue(struct v4v_ring_info *ring_info, domid_t src_id, int l=
en)
+{
+    struct v4v_pending_ent *ent =3D xmalloc(struct v4v_pending_ent);
+
+    if ( !ent )
+    {
+        v4v_dprintk("ENOMEM\n");
+        return -ENOMEM;
+    }
+
+    ent->len =3D len;
+    ent->id =3D src_id;
+
+    hlist_add_head(&ent->node, &ring_info->pending);
+
+    return 0;
+}
+
+/* L3 */
+static int
+v4v_pending_requeue(struct v4v_ring_info *ring_info, domid_t src_id, int=
 len)
+{
+    struct hlist_node *node;
+    struct v4v_pending_ent *ent;
+
+    hlist_for_each_entry(ent, node, &ring_info->pending, node)
+    {
+        if ( ent->id =3D=3D src_id )
+        {
+            if ( ent->len < len )
+                ent->len =3D len;
+            return 0;
+        }
+    }
+
+    return v4v_pending_queue(ring_info, src_id, len);
+}
+
+
+/* L3 */
+static void
+v4v_pending_cancel(struct v4v_ring_info *ring_info, domid_t src_id)
+{
+    struct hlist_node *node, *next;
+    struct v4v_pending_ent *ent;
+
+    hlist_for_each_entry_safe(ent, node, next, &ring_info->pending, node=
)
+    {
+        if ( ent->id =3D=3D src_id)
+        {
+            hlist_del(&ent->node);
+            xfree(ent);
+        }
+    }
+}
+
+/*
+ * ring data
+ */
+
+/* Caller should hold R(L1) */
+static int
+v4v_fill_ring_data(struct domain *src_d,
+                   XEN_GUEST_HANDLE(v4v_ring_data_ent_t) data_ent_hnd)
+{
+    v4v_ring_data_ent_t ent;
+    struct domain *dst_d;
+    struct v4v_ring_info *ring_info;
+
+    if ( copy_from_guest(&ent, data_ent_hnd, 1) )
+    {
+        v4v_dprintk("EFAULT\n");
+        return -EFAULT;
+    }
+
+    v4v_dprintk("v4v_fill_ring_data: ent.ring.domain=3D%d,ent.ring.port=3D=
%u\n",
+                (int)ent.ring.domain, (int)ent.ring.port);
+
+    ent.flags =3D 0;
+
+    dst_d =3D get_domain_by_id(ent.ring.domain);
+
+    if ( dst_d && dst_d->v4v )
+    {
+        read_lock(&dst_d->v4v->lock);
+        ring_info =3D v4v_ring_find_info_by_addr(dst_d, &ent.ring,
+                                               src_d->domain_id);
+
+        if ( ring_info )
+        {
+            uint32_t space_avail;
+
+            ent.flags |=3D V4V_RING_DATA_F_EXISTS;
+            ent.max_message_size =3D
+                ring_info->len - sizeof (struct v4v_ring_message_header)=
 -
+                V4V_ROUNDUP(1);
+            spin_lock(&ring_info->lock);
+
+            space_avail =3D v4v_ringbuf_payload_space(dst_d, ring_info);
+
+            if ( space_avail >=3D ent.space_required )
+            {
+                v4v_pending_cancel(ring_info, src_d->domain_id);
+                ent.flags |=3D V4V_RING_DATA_F_SUFFICIENT;
+            }
+            else
+            {
+                v4v_pending_requeue(ring_info, src_d->domain_id,
+                        ent.space_required);
+                ent.flags |=3D V4V_RING_DATA_F_PENDING;
+            }
+
+            spin_unlock(&ring_info->lock);
+
+            if ( space_avail =3D=3D ent.max_message_size )
+                ent.flags |=3D V4V_RING_DATA_F_EMPTY;
+
+        }
+        read_unlock(&dst_d->v4v->lock);
+    }
+
+    if ( dst_d )
+        put_domain(dst_d);
+
+    if ( copy_field_to_guest(data_ent_hnd, &ent, flags) )
+    {
+        v4v_dprintk("EFAULT\n");
+        return -EFAULT;
+    }
+    return 0;
+}
+
+/* Caller should hold no more than R(L1) */
+static int
+v4v_fill_ring_datas(struct domain *d, int nent,
+                     XEN_GUEST_HANDLE(v4v_ring_data_ent_t) data_ent_hnd)
+{
+    int ret =3D 0;
+
+    read_lock(&v4v_lock);
+    while ( !ret && nent-- )
+    {
+        ret =3D v4v_fill_ring_data(d, data_ent_hnd);
+        guest_handle_add_offset(data_ent_hnd, 1);
+    }
+    read_unlock(&v4v_lock);
+    return ret;
+}
+
+/*
+ * ring
+ */
+static int
+v4v_find_ring_mfns(struct domain *d, struct v4v_ring_info *ring_info,
+                   uint32_t npage, XEN_GUEST_HANDLE(v4v_pfn_t) pfn_hnd)
+{
+    int i,j;
+    mfn_t *mfns;
+    uint8_t **mfn_mapping;
+    unsigned long mfn;
+    struct page_info *page;
+    int ret =3D 0;
+
+    if ( (npage << PAGE_SHIFT) < ring_info->len )
+    {
+        v4v_dprintk("EINVAL\n");
+        return -EINVAL;
+    }
+
+    mfns =3D xmalloc_array(mfn_t, npage);
+    if ( !mfns )
+    {
+        v4v_dprintk("ENOMEM\n");
+        return -ENOMEM;
+    }
+
+    mfn_mapping =3D xmalloc_array(uint8_t *, npage);
+    if ( !mfn_mapping )
+    {
+        xfree(mfns);
+        return -ENOMEM;
+    }
+
+    for ( i =3D 0; i < npage; ++i )
+    {
+        unsigned long pfn;
+        p2m_type_t p2mt;
+
+        if ( copy_from_guest_offset(&pfn, pfn_hnd, i, 1) )
+        {
+            ret =3D -EFAULT;
+            v4v_dprintk("EFAULT\n");
+            break;
+        }
+
+        mfn =3D mfn_x(get_gfn(d, pfn, &p2mt));
+        if ( !mfn_valid(mfn) )
+        {
+            printk(KERN_ERR "v4v domain %d passed invalid mfn %"PRI_mfn"=
 ring %p seq %d\n",
+                    d->domain_id, mfn, ring_info, i);
+            ret =3D -EINVAL;
+            break;
+        }
+        page =3D mfn_to_page(mfn);
+        if ( !get_page_and_type(page, d, PGT_writable_page) )
+        {
+            printk(KERN_ERR "v4v domain %d passed wrong type mfn %"PRI_m=
fn" ring %p seq %d\n",
+                    d->domain_id, mfn, ring_info, i);
+            ret =3D -EINVAL;
+            break;
+        }
+        mfns[i] =3D _mfn(mfn);
+        v4v_dprintk("v4v_find_ring_mfns: %d: %lx -> %lx\n",
+                    i, (unsigned long)pfn, (unsigned long)mfn_x(mfns[i])=
);
+        mfn_mapping[i] =3D NULL;
+        put_gfn(d, pfn);
+    }
+
+    if ( !ret )
+    {
+        ring_info->npage =3D npage;
+        ring_info->mfns =3D mfns;
+        ring_info->mfn_mapping =3D mfn_mapping;
+    }
+    else
+    {
+        j =3D i;
+        for ( i =3D 0; i < j; ++i )
+            if ( mfn_x(mfns[i]) !=3D 0 )
+                put_page_and_type(mfn_to_page(mfn_x(mfns[i])));
+        xfree(mfn_mapping);
+        xfree(mfns);
+    }
+    return ret;
+}
+
+
+static struct v4v_ring_info *
+v4v_ring_find_info(struct domain *d, v4v_ring_id_t *id)
+{
+    uint16_t hash;
+    struct hlist_node *node;
+    struct v4v_ring_info *ring_info;
+
+    ASSERT(rw_is_locked(&d->v4v->lock));
+
+    hash = v4v_hash_fn(id);
+
+    v4v_dprintk("ring_find_info: d->v4v=%p, d->v4v->ring_hash[%d]=%p id=%p\n",
+                d->v4v, (int)hash, d->v4v->ring_hash[hash].first, id);
+    v4v_dprintk("ring_find_info: id.addr.port=%d id.addr.domain=%d id.partner=%d\n",
+                id->addr.port, id->addr.domain, id->partner);
+
+    hlist_for_each_entry(ring_info, node, &d->v4v->ring_hash[hash], node)
+    {
+        v4v_ring_id_t *cmpid = &ring_info->id;
+
+        if ( cmpid->addr.port == id->addr.port &&
+             cmpid->addr.domain == id->addr.domain &&
+             cmpid->partner == id->partner )
+        {
+            v4v_dprintk("ring_find_info: ring_info=3D%p\n", ring_info);
+            return ring_info;
+        }
+    }
+    v4v_dprintk("ring_find_info: no ring_info found\n");
+    return NULL;
+}
+
+static struct v4v_ring_info *
+v4v_ring_find_info_by_addr(struct domain *d, struct v4v_addr *a, domid_t p)
+{
+    v4v_ring_id_t id;
+    struct v4v_ring_info *ret;
+
+    ASSERT(rw_is_locked(&d->v4v->lock));
+
+    if ( !a )
+        return NULL;
+
+    id.addr.port = a->port;
+    id.addr.domain = d->domain_id;
+    id.partner = p;
+
+    ret = v4v_ring_find_info(d, &id);
+    if ( ret )
+        return ret;
+
+    id.partner = V4V_DOMID_ANY;
+
+    return v4v_ring_find_info(d, &id);
+}
+}
+
+static void v4v_ring_remove_mfns(struct domain *d, struct v4v_ring_info *ring_info)
+{
+    int i;
+
+    ASSERT(rw_is_write_locked(&d->v4v->lock));
+
+    if ( ring_info->mfns )
+    {
+        for ( i = 0; i < ring_info->npage; ++i )
+            if ( mfn_x(ring_info->mfns[i]) != 0 )
+                put_page_and_type(mfn_to_page(mfn_x(ring_info->mfns[i])));
+        xfree(ring_info->mfns);
+    }
+    if ( ring_info->mfn_mapping )
+        xfree(ring_info->mfn_mapping);
+    ring_info->mfns = NULL;
+}
+
+static void
+v4v_ring_remove_info(struct domain *d, struct v4v_ring_info *ring_info)
+{
+    ASSERT(rw_is_write_locked(&d->v4v->lock));
+
+    spin_lock(&ring_info->lock);
+
+    v4v_pending_remove_all(ring_info);
+    hlist_del(&ring_info->node);
+    v4v_ring_remove_mfns(d, ring_info);
+
+    spin_unlock(&ring_info->lock);
+
+    xfree(ring_info);
+}
+
+/* Call from guest to unpublish a ring */
+static long
+v4v_ring_remove(struct domain *d, XEN_GUEST_HANDLE(v4v_ring_t) ring_hnd)
+{
+    struct v4v_ring ring;
+    struct v4v_ring_info *ring_info;
+    int ret = 0;
+
+    read_lock(&v4v_lock);
+
+    do
+    {
+        if ( !d->v4v )
+        {
+            v4v_dprintk("EINVAL\n");
+            ret = -EINVAL;
+            break;
+        }
+
+        if ( copy_from_guest(&ring, ring_hnd, 1) )
+        {
+            v4v_dprintk("EFAULT\n");
+            ret = -EFAULT;
+            break;
+        }
+
+        if ( ring.magic != V4V_RING_MAGIC )
+        {
+            v4v_dprintk("ring.magic(%lx) != V4V_RING_MAGIC(%lx), EINVAL\n",
+                        ring.magic, V4V_RING_MAGIC);
+            ret = -EINVAL;
+            break;
+        }
+
+        ring.id.addr.domain = d->domain_id;
+
+        write_lock(&d->v4v->lock);
+        ring_info = v4v_ring_find_info(d, &ring.id);
+
+        if ( ring_info )
+            v4v_ring_remove_info(d, ring_info);
+
+        write_unlock(&d->v4v->lock);
+
+        if ( !ring_info )
+        {
+            v4v_dprintk("ENOENT\n");
+            ret = -ENOENT;
+            break;
+        }
+    }
+    while ( 0 );
+
+    read_unlock(&v4v_lock);
+    return ret;
+}
+
+/* call from guest to publish a ring */
+static long
+v4v_ring_add(struct domain *d, XEN_GUEST_HANDLE(v4v_ring_t) ring_hnd,
+             uint32_t npage, XEN_GUEST_HANDLE(v4v_pfn_t) pfn_hnd)
+{
+    struct v4v_ring ring;
+    struct v4v_ring_info *ring_info;
+    int need_to_insert = 0;
+    int ret = 0;
+
+    if ( (long)ring_hnd.p & (PAGE_SIZE - 1) )
+    {
+        v4v_dprintk("EINVAL\n");
+        return -EINVAL;
+    }
+
+    read_lock(&v4v_lock);
+    do
+    {
+        if ( !d->v4v )
+        {
+            v4v_dprintk(" !d->v4v, EINVAL\n");
+            ret = -EINVAL;
+            break;
+        }
+
+        if ( copy_from_guest(&ring, ring_hnd, 1) )
+        {
+            v4v_dprintk(" copy_from_guest failed, EFAULT\n");
+            ret = -EFAULT;
+            break;
+        }
+
+        if ( ring.magic != V4V_RING_MAGIC )
+        {
+            v4v_dprintk("ring.magic(%lx) != V4V_RING_MAGIC(%lx), EINVAL\n",
+                        ring.magic, V4V_RING_MAGIC);
+            ret = -EINVAL;
+            break;
+        }
+
+        if ( (ring.len <
+              (sizeof (struct v4v_ring_message_header) + V4V_ROUNDUP(1) +
+               V4V_ROUNDUP(1))) || (V4V_ROUNDUP(ring.len) != ring.len) )
+        {
+            v4v_dprintk("EINVAL\n");
+            ret = -EINVAL;
+            break;
+        }
+
+        ring.id.addr.domain = d->domain_id;
+        if ( copy_field_to_guest(ring_hnd, &ring, id) )
+        {
+            v4v_dprintk("EFAULT\n");
+            ret = -EFAULT;
+            break;
+        }
+
+        /*
+         * No lock is needed yet: nobody else knows about this ring.
+         * Fix up the tx pointer if it looks bogus (we don't reset it
+         * unconditionally because this might be a re-register after S4).
+         */
+        if ( (ring.tx_ptr >= ring.len)
+             || (V4V_ROUNDUP(ring.tx_ptr) != ring.tx_ptr) )
+        {
+            ring.tx_ptr = ring.rx_ptr;
+        }
+        copy_field_to_guest(ring_hnd, &ring, tx_ptr);
+
+        read_lock(&d->v4v->lock);
+        ring_info = v4v_ring_find_info(d, &ring.id);
+
+        if ( !ring_info )
+        {
+            read_unlock(&d->v4v->lock);
+            ring_info = xmalloc(struct v4v_ring_info);
+            if ( !ring_info )
+            {
+                v4v_dprintk("ENOMEM\n");
+                ret = -ENOMEM;
+                break;
+            }
+            need_to_insert++;
+            spin_lock_init(&ring_info->lock);
+            INIT_HLIST_HEAD(&ring_info->pending);
+            ring_info->mfns = NULL;
+        }
+        else
+        {
+            /* Ring already registered: refuse to register it again. */
+            printk(KERN_INFO "v4v: dom%d ring already registered\n",
+                   current->domain->domain_id);
+            read_unlock(&d->v4v->lock);
+            ret = -EEXIST;
+            break;
+        }
+
+        spin_lock(&ring_info->lock);
+        ring_info->id = ring.id;
+        ring_info->len = ring.len;
+        ring_info->tx_ptr = ring.tx_ptr;
+        ring_info->ring = ring_hnd;
+        if ( ring_info->mfns )
+            xfree(ring_info->mfns);
+        ret = v4v_find_ring_mfns(d, ring_info, npage, pfn_hnd);
+        spin_unlock(&ring_info->lock);
+        if ( ret )
+            break;
+
+        if ( !need_to_insert )
+        {
+            read_unlock(&d->v4v->lock);
+        }
+        else
+        {
+            uint16_t hash = v4v_hash_fn(&ring.id);
+
+            write_lock(&d->v4v->lock);
+            hlist_add_head(&ring_info->node, &d->v4v->ring_hash[hash]);
+            write_unlock(&d->v4v->lock);
+        }
+    }
+    while ( 0 );
+
+    read_unlock(&v4v_lock);
+    return ret;
+}
+
+
+/*
+ * io
+ */
+
+static void
+v4v_notify_ring(struct domain *d, struct v4v_ring_info *ring_info,
+                struct hlist_head *to_notify)
+{
+    uint32_t space;
+
+    ASSERT(rw_is_locked(&v4v_lock));
+    ASSERT(rw_is_locked(&d->v4v->lock));
+
+    spin_lock(&ring_info->lock);
+    space = v4v_ringbuf_payload_space(d, ring_info);
+    spin_unlock(&ring_info->lock);
+
+    v4v_pending_find(d, ring_info, space, to_notify);
+}
+
+/*notify hypercall*/
+static long
+v4v_notify(struct domain *d,
+           XEN_GUEST_HANDLE(v4v_ring_data_t) ring_data_hnd)
+{
+    v4v_ring_data_t ring_data;
+    HLIST_HEAD(to_notify);
+    int i;
+    int ret = 0;
+
+    read_lock(&v4v_lock);
+
+    if ( !d->v4v )
+    {
+        read_unlock(&v4v_lock);
+        v4v_dprintk("!d->v4v, ENODEV\n");
+        return -ENODEV;
+    }
+
+    read_lock(&d->v4v->lock);
+    for ( i = 0; i < V4V_HTABLE_SIZE; ++i )
+    {
+        struct hlist_node *node, *next;
+        struct v4v_ring_info *ring_info;
+
+        hlist_for_each_entry_safe(ring_info, node, next,
+                                  &d->v4v->ring_hash[i], node)
+        {
+            v4v_notify_ring(d, ring_info, &to_notify);
+        }
+    }
+    read_unlock(&d->v4v->lock);
+
+    if ( !hlist_empty(&to_notify) )
+        v4v_pending_notify(d, &to_notify);
+
+    do
+    {
+        if ( !guest_handle_is_null(ring_data_hnd) )
+        {
+            /* Quick sanity check on ring_data_hnd */
+            if ( copy_field_from_guest(&ring_data, ring_data_hnd, magic) )
+            {
+                v4v_dprintk("copy_field_from_guest failed\n");
+                ret = -EFAULT;
+                break;
+            }
+
+            if ( ring_data.magic != V4V_RING_DATA_MAGIC )
+            {
+                v4v_dprintk("ring_data.magic(%lx) != V4V_RING_DATA_MAGIC(%lx), EINVAL\n",
+                            ring_data.magic, V4V_RING_DATA_MAGIC);
+                ret = -EINVAL;
+                break;
+            }
+
+            if ( copy_from_guest(&ring_data, ring_data_hnd, 1) )
+            {
+                v4v_dprintk("copy_from_guest failed\n");
+                ret = -EFAULT;
+                break;
+            }
+
+            {
+                XEN_GUEST_HANDLE(v4v_ring_data_ent_t) ring_data_ent_hnd;
+
+                ring_data_ent_hnd =
+                    guest_handle_for_field(ring_data_hnd, v4v_ring_data_ent_t, data[0]);
+                ret = v4v_fill_ring_datas(d, ring_data.nent, ring_data_ent_hnd);
+            }
+        }
+    }
+    while ( 0 );
+
+    read_unlock(&v4v_lock);
+
+    return ret;
+}
+
+#ifdef V4V_DEBUG
+void
+v4v_viptables_print_rule(struct v4v_viptables_rule_node *node)
+{
+    v4v_viptables_rule_t *rule;
+
+    if ( node == NULL )
+    {
+        printk("(null)\n");
+        return;
+    }
+
+    rule = &node->rule;
+
+    if ( rule->accept == 1 )
+        printk("ACCEPT");
+    else
+        printk("REJECT");
+
+    printk(" ");
+
+    if ( rule->src.domain == DOMID_ANY )
+        printk("*");
+    else
+        printk("%i", rule->src.domain);
+
+    printk(":");
+
+    if ( rule->src.port == -1 )
+        printk("*");
+    else
+        printk("%i", rule->src.port);
+
+    printk(" -> ");
+
+    if ( rule->dst.domain == DOMID_ANY )
+        printk("*");
+    else
+        printk("%i", rule->dst.domain);
+
+    printk(":");
+
+    if ( rule->dst.port == -1 )
+        printk("*");
+    else
+        printk("%i", rule->dst.port);
+
+    printk("\n");
+}
+#endif /* V4V_DEBUG */
+
+int
+v4v_viptables_add(struct domain *src_d,
+                  XEN_GUEST_HANDLE(v4v_viptables_rule_t) rule,
+                  int32_t position)
+{
+    struct v4v_viptables_rule_node *new = NULL;
+    struct list_head *tmp;
+
+    ASSERT(rw_is_write_locked(&viprules_lock));
+
+    /* First rule is number 1 */
+    position--;
+
+    new = xmalloc(struct v4v_viptables_rule_node);
+    if ( new == NULL )
+        return -ENOMEM;
+
+    if ( copy_from_guest(&new->rule, rule, 1) )
+    {
+        xfree(new);
+        return -EFAULT;
+    }
+
+#ifdef V4V_DEBUG
+    printk(KERN_ERR "VIPTables: ");
+    v4v_viptables_print_rule(new);
+#endif /* V4V_DEBUG */
+
+    tmp = &viprules;
+    while ( position != 0 && tmp->next != &viprules )
+    {
+        tmp = tmp->next;
+        position--;
+    }
+    list_add(&new->list, tmp);
+
+    return 0;
+}
+
+int
+v4v_viptables_del(struct domain *src_d,
+                  XEN_GUEST_HANDLE(v4v_viptables_rule_t) rule,
+                  int32_t position)
+{
+    struct list_head *tmp = NULL;
+    struct list_head *next = NULL;
+    struct v4v_viptables_rule_node *node;
+
+    ASSERT(rw_is_write_locked(&viprules_lock));
+
+    if ( position != -1 )
+    {
+        /* We want to delete the rule number <position> */
+        tmp = &viprules;
+        while ( position != 0 && tmp->next != &viprules )
+        {
+            tmp = tmp->next;
+            position--;
+        }
+    }
+    else if ( !guest_handle_is_null(rule) )
+    {
+        struct v4v_viptables_rule r;
+
+        if ( copy_field_from_guest(&r, rule, src) ||
+             copy_field_from_guest(&r, rule, dst) ||
+             copy_field_from_guest(&r, rule, accept) )
+        {
+            return -EFAULT;
+        }
+
+        list_for_each(tmp, &viprules)
+        {
+            node = list_entry(tmp, struct v4v_viptables_rule_node, list);
+
+            if ( (node->rule.src.domain == r.src.domain) &&
+                 (node->rule.src.port   == r.src.port)   &&
+                 (node->rule.dst.domain == r.dst.domain) &&
+                 (node->rule.dst.port   == r.dst.port) )
+            {
+                position = 0;
+                break;
+            }
+        }
+    }
+    else
+    {
+        /* We want to flush the rules! */
+        printk(KERN_ERR "VIPTables: flushing rules\n");
+        list_for_each_safe(tmp, next, &viprules)
+        {
+            node = list_entry(tmp, struct v4v_viptables_rule_node, list);
+            list_del(tmp);
+            xfree(node);
+        }
+    }
+
+    if ( position == 0 && tmp != &viprules )
+    {
+        node = list_entry(tmp, struct v4v_viptables_rule_node, list);
+#ifdef V4V_DEBUG
+        printk(KERN_ERR "VIPTables: deleting rule: ");
+        v4v_viptables_print_rule(node);
+#endif /* V4V_DEBUG */
+        list_del(tmp);
+        xfree(node);
+    }
+
+    return 0;
+}
+
+static long
+v4v_viptables_list(struct domain *src_d,
+                   XEN_GUEST_HANDLE(v4v_viptables_list_t) list_hnd)
+{
+    struct list_head *ptr;
+    struct v4v_viptables_rule_node *node;
+    struct v4v_viptables_list rules_list;
+    uint32_t nbrules;
+    XEN_GUEST_HANDLE(v4v_viptables_rule_t) guest_rules;
+
+    ASSERT(rw_is_locked(&viprules_lock));
+
+    memset(&rules_list, 0, sizeof (rules_list));
+    if ( copy_from_guest(&rules_list, list_hnd, 1) )
+        return -EFAULT;
+
+    ptr = viprules.next;
+    while ( rules_list.start_rule != 0 && ptr->next != &viprules )
+    {
+        ptr = ptr->next;
+        rules_list.start_rule--;
+    }
+
+    if ( rules_list.nb_rules == 0 )
+        return -EINVAL;
+
+    guest_rules = guest_handle_for_field(list_hnd, v4v_viptables_rule_t, rules[0]);
+
+    nbrules = 0;
+    while ( nbrules < rules_list.nb_rules && ptr != &viprules )
+    {
+        node = list_entry(ptr, struct v4v_viptables_rule_node, list);
+
+        if ( !guest_handle_okay(guest_rules, 1) )
+            break;
+
+        if ( copy_to_guest(guest_rules, &node->rule, 1) )
+            break;
+
+        guest_handle_add_offset(guest_rules, 1);
+
+        nbrules++;
+        ptr = ptr->next;
+    }
+
+    rules_list.nb_rules = nbrules;
+    if ( copy_field_to_guest(list_hnd, &rules_list, nb_rules) )
+        return -EFAULT;
+
+    return 0;
+}
+
+static int
+v4v_viptables_check(v4v_addr_t *src, v4v_addr_t *dst)
+{
+    struct list_head *ptr;
+    struct v4v_viptables_rule_node *node;
+    int ret = 0; /* Default to ACCEPT */
+
+    read_lock(&viprules_lock);
+
+    list_for_each(ptr, &viprules)
+    {
+        node = list_entry(ptr, struct v4v_viptables_rule_node, list);
+
+        if ( (node->rule.src.domain == V4V_DOMID_ANY ||
+              node->rule.src.domain == src->domain) &&
+             (node->rule.src.port == V4V_PORT_ANY ||
+              node->rule.src.port == src->port) &&
+             (node->rule.dst.domain == V4V_DOMID_ANY ||
+              node->rule.dst.domain == dst->domain) &&
+             (node->rule.dst.port == V4V_PORT_ANY ||
+              node->rule.dst.port == dst->port) )
+        {
+            ret = !node->rule.accept;
+            break;
+        }
+    }
+
+    read_unlock(&viprules_lock);
+    return ret;
+}
+
+/*
+ * Hypercall to do the send
+ */
+static long
+v4v_sendv(struct domain *src_d, v4v_addr_t *src_addr,
+          v4v_addr_t *dst_addr, uint32_t proto,
+          internal_v4v_iov_t iovs, size_t niov)
+{
+    struct domain *dst_d;
+    v4v_ring_id_t src_id;
+    struct v4v_ring_info *ring_info;
+    int ret = 0;
+
+    if ( !dst_addr )
+    {
+        v4v_dprintk("!dst_addr, EINVAL\n");
+        return -EINVAL;
+    }
+
+    read_lock(&v4v_lock);
+    if ( !src_d->v4v )
+    {
+        read_unlock(&v4v_lock);
+        v4v_dprintk("!src_d->v4v, EINVAL\n");
+        return -EINVAL;
+    }
+
+    src_id.addr.port = src_addr->port;
+    src_id.addr.domain = src_d->domain_id;
+    src_id.partner = dst_addr->domain;
+
+    dst_d = get_domain_by_id(dst_addr->domain);
+    if ( !dst_d )
+    {
+        read_unlock(&v4v_lock);
+        v4v_dprintk("!dst_d, ECONNREFUSED\n");
+        return -ECONNREFUSED;
+    }
+
+    if ( v4v_viptables_check(src_addr, dst_addr) != 0 )
+    {
+        gdprintk(XENLOG_WARNING,
+                 "V4V: VIPTables REJECTED %i:%i -> %i:%i\n",
+                 src_addr->domain, src_addr->port,
+                 dst_addr->domain, dst_addr->port);
+        put_domain(dst_d);
+        read_unlock(&v4v_lock);
+        return -ECONNREFUSED;
+    }
+
+    do
+    {
+        if ( !dst_d->v4v )
+        {
+            v4v_dprintk("!dst_d->v4v, ECONNREFUSED\n");
+            ret = -ECONNREFUSED;
+            break;
+        }
+
+        read_lock(&dst_d->v4v->lock);
+        ring_info =
+            v4v_ring_find_info_by_addr(dst_d, dst_addr, src_addr->domain);
+
+        if ( !ring_info )
+        {
+            ret = -ECONNREFUSED;
+            v4v_dprintk(" !ring_info, ECONNREFUSED\n");
+        }
+        else
+        {
+            long len = v4v_iov_count(iovs, niov);
+
+            if ( len < 0 )
+            {
+                read_unlock(&dst_d->v4v->lock);
+                ret = len;
+                break;
+            }
+
+            spin_lock(&ring_info->lock);
+            ret = v4v_ringbuf_insertv(dst_d, ring_info, &src_id, proto, iovs,
+                                      niov, len);
+            if ( ret == -EAGAIN )
+            {
+                v4v_dprintk("v4v_ringbuf_insertv failed, EAGAIN\n");
+                /* Schedule a wake up on the event channel when space is there */
+                if ( v4v_pending_requeue(ring_info, src_d->domain_id, len) )
+                {
+                    v4v_dprintk("v4v_pending_requeue failed, ENOMEM\n");
+                    ret = -ENOMEM;
+                }
+            }
+            spin_unlock(&ring_info->lock);
+
+            if ( ret >= 0 )
+                v4v_signal_domain(dst_d);
+        }
+        read_unlock(&dst_d->v4v->lock);
+    }
+    while ( 0 );
+
+    put_domain(dst_d);
+    read_unlock(&v4v_lock);
+    return ret;
+}
+
+static void
+v4v_info(struct domain *d, v4v_info_t *info)
+{
+    read_lock(&d->v4v->lock);
+    info->ring_magic = V4V_RING_MAGIC;
+    info->data_magic = V4V_RING_DATA_MAGIC;
+    info->evtchn = d->v4v->evtchn_port;
+    read_unlock(&d->v4v->lock);
+}
+
+/*
+ * hypercall glue
+ */
+long
+do_v4v_op(int cmd, XEN_GUEST_HANDLE(void) arg1,
+          XEN_GUEST_HANDLE(void) arg2,
+          uint32_t arg3, uint32_t arg4)
+{
+    struct domain *d = current->domain;
+    long rc = -EFAULT;
+
+    v4v_dprintk("->do_v4v_op(%d,%p,%p,%d,%d)\n", cmd,
+                (void *)arg1.p, (void *)arg2.p, (int)arg3, (int)arg4);
+
+    domain_lock(d);
+    switch (cmd)
+    {
+        case V4VOP_register_ring:
+            {
+                XEN_GUEST_HANDLE(v4v_ring_t) ring_hnd =
+                    guest_handle_cast(arg1, v4v_ring_t);
+                XEN_GUEST_HANDLE(v4v_pfn_t) pfn_hnd =
+                    guest_handle_cast(arg2, v4v_pfn_t);
+                uint32_t npage = arg3;
+
+                if ( unlikely(!guest_handle_okay(ring_hnd, 1)) )
+                    goto out;
+                if ( unlikely(!guest_handle_okay(pfn_hnd, npage)) )
+                    goto out;
+                rc = v4v_ring_add(d, ring_hnd, npage, pfn_hnd);
+                break;
+            }
+        case V4VOP_unregister_ring:
+            {
+                XEN_GUEST_HANDLE(v4v_ring_t) ring_hnd =
+                    guest_handle_cast(arg1, v4v_ring_t);
+
+                if ( unlikely(!guest_handle_okay(ring_hnd, 1)) )
+                    goto out;
+                rc = v4v_ring_remove(d, ring_hnd);
+                break;
+            }
+        case V4VOP_send:
+            {
+                uint32_t len = arg3;
+                uint32_t protocol = arg4;
+                v4v_iov_t iov;
+                internal_v4v_iov_t iovs;
+                XEN_GUEST_HANDLE(v4v_send_addr_t) addr_hnd =
+                    guest_handle_cast(arg1, v4v_send_addr_t);
+                v4v_send_addr_t addr;
+
+                if ( unlikely(!guest_handle_okay(addr_hnd, 1)) )
+                    goto out;
+                if ( copy_from_guest(&addr, addr_hnd, 1) )
+                    goto out;
+
+                iov.iov_base = (uint64_t)arg2.p; /* FIXME */
+                iov.iov_len = len;
+                iovs.xen_iov = &iov;
+                rc = v4v_sendv(d, &addr.src, &addr.dst, protocol, iovs, 1);
+                break;
+            }
+        case V4VOP_sendv:
+            {
+                internal_v4v_iov_t iovs;
+                uint32_t niov = arg3;
+                uint32_t protocol = arg4;
+                XEN_GUEST_HANDLE(v4v_send_addr_t) addr_hnd =
+                    guest_handle_cast(arg1, v4v_send_addr_t);
+                v4v_send_addr_t addr;
+
+                memset(&iovs, 0, sizeof (iovs));
+                iovs.guest_iov = guest_handle_cast(arg2, v4v_iov_t);
+
+                if ( unlikely(!guest_handle_okay(addr_hnd, 1)) )
+                    goto out;
+                if ( copy_from_guest(&addr, addr_hnd, 1) )
+                    goto out;
+
+                if ( unlikely(!guest_handle_okay(iovs.guest_iov, niov)) )
+                    goto out;
+
+                rc = v4v_sendv(d, &addr.src, &addr.dst, protocol, iovs, niov);
+                break;
+            }
+        case V4VOP_notify:
+            {
+                XEN_GUEST_HANDLE(v4v_ring_data_t) ring_data_hnd =
+                    guest_handle_cast(arg1, v4v_ring_data_t);
+                rc = v4v_notify(d, ring_data_hnd);
+                break;
+            }
+        case V4VOP_viptables_add:
+            {
+                uint32_t position = arg3;
+                XEN_GUEST_HANDLE(v4v_viptables_rule_t) rule_hnd =
+                    guest_handle_cast(arg1, v4v_viptables_rule_t);
+
+                rc = -EPERM;
+                if ( !IS_PRIV(d) )
+                    goto out;
+
+                write_lock(&viprules_lock);
+                rc = v4v_viptables_add(d, rule_hnd, position);
+                write_unlock(&viprules_lock);
+                break;
+            }
+        case V4VOP_viptables_del:
+            {
+                uint32_t position = arg3;
+                XEN_GUEST_HANDLE(v4v_viptables_rule_t) rule_hnd =
+                    guest_handle_cast(arg1, v4v_viptables_rule_t);
+
+                rc = -EPERM;
+                if ( !IS_PRIV(d) )
+                    goto out;
+
+                write_lock(&viprules_lock);
+                rc = v4v_viptables_del(d, rule_hnd, position);
+                write_unlock(&viprules_lock);
+                break;
+            }
+        case V4VOP_viptables_list:
+            {
+                XEN_GUEST_HANDLE(v4v_viptables_list_t) rules_list_hnd =
+                    guest_handle_cast(arg1, v4v_viptables_list_t);
+
+                rc = -EPERM;
+                if ( !IS_PRIV(d) )
+                    goto out;
+
+                rc = -EFAULT;
+                if ( unlikely(!guest_handle_okay(rules_list_hnd, 1)) )
+                    goto out;
+
+                read_lock(&viprules_lock);
+                rc = v4v_viptables_list(d, rules_list_hnd);
+                read_unlock(&viprules_lock);
+                break;
+            }
+        case V4VOP_info:
+            {
+                XEN_GUEST_HANDLE(v4v_info_t) info_hnd =
+                    guest_handle_cast(arg1, v4v_info_t);
+                v4v_info_t info;
+
+                if ( unlikely(!guest_handle_okay(info_hnd, 1)) )
+                    goto out;
+                v4v_info(d, &info);
+                if ( copy_to_guest(info_hnd, &info, 1) )
+                    goto out;
+                rc = 0;
+                break;
+            }
+        default:
+            rc = -ENOSYS;
+            break;
+    }
+out:
+    domain_unlock(d);
+    v4v_dprintk("<-do_v4v_op()=%d\n", (int)rc);
+    return rc;
+}
+
+/*
+ * init
+ */
+
+void
+v4v_destroy(struct domain *d)
+{
+    int i;
+
+    BUG_ON(!d->is_dying);
+    write_lock(&v4v_lock);
+
+    v4v_dprintk("d->v4v=%p\n", d->v4v);
+
+    if ( d->v4v )
+    {
+        for ( i = 0; i < V4V_HTABLE_SIZE; ++i )
+        {
+            struct hlist_node *node, *next;
+            struct v4v_ring_info *ring_info;
+
+            hlist_for_each_entry_safe(ring_info, node, next,
+                                      &d->v4v->ring_hash[i], node)
+            {
+                v4v_ring_remove_info(d, ring_info);
+            }
+        }
+    }
+
+    xfree(d->v4v);
+    d->v4v = NULL;
+    write_unlock(&v4v_lock);
+}
+
+int
+v4v_init(struct domain *d)
+{
+    struct v4v_domain *v4v;
+    evtchn_port_t port;
+    struct evtchn *chn;
+    int i;
+    int rc;
+
+    v4v = xmalloc(struct v4v_domain);
+    if ( !v4v )
+        return -ENOMEM;
+
+    rc = evtchn_alloc_unbound_domain(d, &port);
+    if ( rc )
+    {
+        xfree(v4v);
+        return rc;
+    }
+
+    chn = evtchn_from_port(d, port);
+    chn->u.unbound.remote_domid = d->domain_id;
+
+    rwlock_init(&v4v->lock);
+
+    v4v->evtchn_port = port;
+    for ( i = 0; i < V4V_HTABLE_SIZE; ++i )
+        INIT_HLIST_HEAD(&v4v->ring_hash[i]);
+
+    write_lock(&v4v_lock);
+    d->v4v = v4v;
+    write_unlock(&v4v_lock);
+
+    return 0;
+}
+
+
+/*
+ * debug
+ */
+
+static void
+dump_domain_ring(struct domain *d, struct v4v_ring_info *ring_info)
+{
+    uint32_t rx_ptr;
+
+    printk(KERN_ERR "  ring: domid=%d port=0x%08x partner=%d npage=%d\n",
+           (int)d->domain_id, (int)ring_info->id.addr.port,
+           (int)ring_info->id.partner, (int)ring_info->npage);
+
+    if ( v4v_ringbuf_get_rx_ptr(d, ring_info, &rx_ptr) )
+    {
+        printk(KERN_ERR "   Failed to read rx_ptr\n");
+        return;
+    }
+
+    printk(KERN_ERR "   tx_ptr=%d rx_ptr=%d len=%d\n",
+           (int)ring_info->tx_ptr, (int)rx_ptr, (int)ring_info->len);
+}
+
+static void
+dump_domain(struct domain *d)
+{
+    int i;
+
+    printk(KERN_ERR " domain %d:\n", (int)d->domain_id);
+
+    read_lock(&d->v4v->lock);
+
+    for ( i = 0; i < V4V_HTABLE_SIZE; ++i )
+    {
+        struct hlist_node *node;
+        struct v4v_ring_info *ring_info;
+
+        hlist_for_each_entry(ring_info, node, &d->v4v->ring_hash[i], node)
+            dump_domain_ring(d, ring_info);
+    }
+    }
+
+    printk(KERN_ERR "  event channel: %d\n",  d->v4v->evtchn_port);
+    read_unlock(&d->v4v->lock);
+
+    printk(KERN_ERR "\n");
+    v4v_signal_domain(d);
+}
+
+static void
+dump_state(unsigned char key)
+{
+    struct domain *d;
+
+    printk(KERN_ERR "\n\nV4V:\n");
+    read_lock(&v4v_lock);
+
+    rcu_read_lock(&domlist_read_lock);
+
+    for_each_domain(d)
+        dump_domain(d);
+
+    rcu_read_unlock(&domlist_read_lock);
+
+    read_unlock(&v4v_lock);
+}
+
+struct keyhandler v4v_info_keyhandler =
+{
+    .diagnostic = 1,
+    .u.fn = dump_state,
+    .desc = "dump v4v state and interrupt"
+};
+
+static int __init
+setup_dump_rings(void)
+{
+    register_keyhandler('4', &v4v_info_keyhandler);
+    return 0;
+}
+
+__initcall(setup_dump_rings);
+
+
+/*
+ * Local variables:
+ * mode: C
+ * c-set-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/public/v4v.h b/xen/include/public/v4v.h
new file mode 100644
index 0000000..1f1c156
--- /dev/null
+++ b/xen/include/public/v4v.h
@@ -0,0 +1,291 @@
+/******************************************************************************
+ * V4V
+ *
+ * Version 2 of v2v (Virtual-to-Virtual)
+ *
+ * Copyright (c) 2010, Citrix Systems
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
+ */
+
+#ifndef __XEN_PUBLIC_V4V_H__
+#define __XEN_PUBLIC_V4V_H__
+
+#include "xen.h"
+#include "event_channel.h"
+
+/*
+ * Structure definitions
+ */
+
+#define V4V_RING_MAGIC          0xA822f72bb0b9d8cc
+#define V4V_RING_DATA_MAGIC     0x45fe852220b801d4
+
+#define V4V_PROTO_DGRAM         0x3c2c1db8
+#define V4V_PROTO_STREAM        0x70f6a8e5
+
+#define V4V_DOMID_ANY           0x7fffU
+#define V4V_PORT_ANY            0
+
+typedef struct v4v_iov
+{
+    uint64_t iov_base;
+    uint64_t iov_len;
+} v4v_iov_t;
+
+typedef struct v4v_addr
+{
+    uint32_t port;
+    domid_t domain;
+    uint16_t pad;
+} v4v_addr_t;
+
+typedef struct v4v_ring_id
+{
+    v4v_addr_t addr;
+    domid_t partner;
+    uint16_t pad;
+} v4v_ring_id_t;
+
+typedef uint64_t v4v_pfn_t;
+
+typedef struct
+{
+    v4v_addr_t src;
+    v4v_addr_t dst;
+} v4v_send_addr_t;
+
+/*
+ * v4v_ring
+ * id: xen only looks at this during register/unregister
+ *     and will fill in id.addr.domain
+ * rx_ptr: rx pointer, modified by domain
+ * tx_ptr: tx pointer, modified by xen
+ *
+ */
+struct v4v_ring
+{
+    uint64_t magic;
+    v4v_ring_id_t id;
+    uint32_t len;
+    uint32_t rx_ptr;
+    uint32_t tx_ptr;
+    uint8_t reserved[32];
+    uint8_t ring[0];
+};
+typedef struct v4v_ring v4v_ring_t;
+
+#define V4V_RING_DATA_F_EMPTY       (1U << 0) /* Ring is empty */
+#define V4V_RING_DATA_F_EXISTS      (1U << 1) /* Ring exists */
+#define V4V_RING_DATA_F_PENDING     (1U << 2) /* Pending interrupt exists - do
+                                               * not rely on this field - for
+                                               * profiling only */
+#define V4V_RING_DATA_F_SUFFICIENT  (1U << 3) /* Sufficient space to queue
+                                               * space_required bytes exists */
+
+typedef struct v4v_ring_data_ent
+{
+    v4v_addr_t ring;
+    uint16_t flags;
+    uint16_t pad;
+    uint32_t space_required;
+    uint32_t max_message_size;
+} v4v_ring_data_ent_t;
+
+typedef struct v4v_ring_data
+{
+    uint64_t magic;
+    uint32_t nent;
+    uint32_t pad;
+    uint64_t reserved[4];
+    v4v_ring_data_ent_t data[0];
+} v4v_ring_data_t;
+
+struct v4v_info
+{
+    uint64_t ring_magic;
+    uint64_t data_magic;
+    evtchn_port_t evtchn;
+};
+typedef struct v4v_info v4v_info_t;
+
+#define V4V_ROUNDUP(a) (((a) + 0xf) & ~0xf)
+/*
+ * Messages on the ring are padded to 128 bits
+ * Len here refers to the exact length of the data not including the
+ * 128 bit header. The message uses
+ * ((len +0xf) & ~0xf) + sizeof(v4v_ring_message_header) bytes
+ */
+
+#define V4V_SHF_SYN		(1 << 0)
+#define V4V_SHF_ACK		(1 << 1)
+#define V4V_SHF_RST		(1 << 2)
+
+#define V4V_SHF_PING		(1 << 8)
+#define V4V_SHF_PONG		(1 << 9)
+
+struct v4v_stream_header
+{
+    uint32_t flags;
+    uint32_t conid;
+};
+
+struct v4v_ring_message_header
+{
+    uint32_t len;
+    uint32_t pad0;
+    v4v_addr_t source;
+    uint32_t protocol;
+    uint32_t pad1;
+    uint8_t data[0];
+};
+
+typedef struct v4v_viptables_rule
+{
+    v4v_addr_t src;
+    v4v_addr_t dst;
+    uint32_t accept;
+    uint32_t pad;
+} v4v_viptables_rule_t;
+
+typedef struct v4v_viptables_list
+{
+    uint32_t start_rule;
+    uint32_t nb_rules;
+    struct v4v_viptables_rule rules[0];
+} v4v_viptables_list_t;
+
+/*
+ * HYPERCALLS
+ */
+
+#define V4VOP_register_ring 	1
+/*
+ * Registers a ring with Xen. If a ring with the same v4v_ring_id
+ * exists, this ring takes its place; registration will not change
+ * tx_ptr unless it is invalid.
+ *
+ * do_v4v_op(V4VOP_register_ring,
+ *           v4v_ring, XEN_GUEST_HANDLE(v4v_pfn),
+ *           npage, 0)
+ */
+
+
+#define V4VOP_unregister_ring 	2
+/*
+ * Unregister a ring.
+ *
+ * v4v_hypercall(V4VOP_unregister_ring, v4v_ring, NULL, 0, 0)
+ */
+
+#define V4VOP_send 		3
+/*
+ * Sends len bytes of buf to dst, giving src as the source address (xen
+ * will ignore src->domain and put your domain in the actual message).
+ * Xen first looks for a ring with id.addr==dst and
+ * id.partner==sending_domain; if that fails it looks for id.addr==dst
+ * and id.partner==DOMID_ANY.
+ * protocol is the 32 bit protocol number used for the message, most
+ * likely V4V_PROTO_DGRAM or STREAM. If insufficient space exists
+ * it will return -EAGAIN and xen will raise the V4V interrupt when
+ * sufficient space becomes available.
+ *
+ * v4v_hypercall(V4VOP_send,
+ *               v4v_send_addr_t addr,
+ *               void* buf,
+ *               uint32_t len,
+ *               uint32_t protocol)
+ */
+
+
+#define V4VOP_notify 		4
+/* Asks xen for information about other rings in the system.
+ *
+ * ent->ring is the v4v_addr_t of the ring you want information on;
+ * the same matching rules are used as for V4VOP_send.
+ *
+ * ent->space_required: if this field is non-zero xen will check
+ * that there is space in the destination ring for this many bytes
+ * of payload. If there is, it will set V4V_RING_DATA_F_SUFFICIENT
+ * and CANCEL any pending interrupt for that ent->ring; if insufficient
+ * space is available it will schedule an interrupt and the flag will
+ * not be set.
+ *
+ * The flags are set by xen when notify replies:
+ * V4V_RING_DATA_F_EMPTY	ring is empty
+ * V4V_RING_DATA_F_PENDING	interrupt is pending - don't rely on this
+ * V4V_RING_DATA_F_SUFFICIENT	sufficient space for space_required exists
+ * V4V_RING_DATA_F_EXISTS	ring exists
+ *
+ * v4v_hypercall(V4VOP_notify,
+ *               XEN_GUEST_HANDLE(v4v_ring_data_ent) ent,
+ *               NULL, nent, 0)
+ */
+
+#define V4VOP_sendv		5
+/*
+ * Identical to V4VOP_send except rather than buf and len it takes
+ * an array of v4v_iov and a length of the array.
+ *
+ * v4v_hypercall(V4VOP_sendv,
+ *               v4v_send_addr_t addr,
+ *               v4v_iov iov,
+ *               uint32_t niov,
+ *               uint32_t protocol)
+ */
+
+#define V4VOP_viptables_add     6
+/*
+ * Insert a filtering rule after a given position.
+ *
+ * v4v_hypercall(V4VOP_viptables_add,
+ *               v4v_viptables_rule_t rule,
+ *               NULL,
+ *               uint32_t position, 0)
+ */
+
+#define V4VOP_viptables_del     7
+/*
+ * Delete the filtering rule at a given position, or the rule
+ * that matches "rule".
+ *
+ * v4v_hypercall(V4VOP_viptables_del,
+ *               v4v_viptables_rule_t rule,
+ *               NULL,
+ *               uint32_t position, 0)
+ */
+
+#define V4VOP_viptables_list    8
+/*
+ * List the filtering rules.
+ *
+ * v4v_hypercall(V4VOP_viptables_list,
+ *               v4v_viptables_list_t list,
+ *               NULL, 0, 0)
+ */
+
+#define V4VOP_info              9
+/*
+ * v4v_hypercall(V4VOP_info,
+ *               XEN_GUEST_HANDLE(v4v_info_t) info,
+ *               NULL, 0, 0)
+ */
+
+#endif /* __XEN_PUBLIC_V4V_H__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-set-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
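The 16 byte padding rule documented above can be sanity-checked in isolation. A minimal, self-contained sketch; the helper name `v4v_msg_footprint` and its `hdr` parameter are illustrative, not part of the patch:

```c
#include <stdint.h>
#include <assert.h>

/* Mirror of the header's macro: round a length up to the next 16 byte
 * (128 bit) boundary. */
#define V4V_ROUNDUP(a) (((a) + 0xf) & ~0xf)

/* Illustrative helper: total ring space a payload of 'len' bytes
 * consumes once a fixed message header of 'hdr' bytes is added,
 * per the comment in the public header. */
static uint32_t v4v_msg_footprint(uint32_t len, uint32_t hdr)
{
    return (uint32_t)V4V_ROUNDUP(len) + hdr;
}
```

Note that a 1 byte payload and a 16 byte payload occupy the same amount of ring space, since both round up to one 16 byte slot.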
diff --git a/xen/include/public/xen.h b/xen/include/public/xen.h
index b19425b..868d119 100644
--- a/xen/include/public/xen.h
+++ b/xen/include/public/xen.h
@@ -99,7 +99,7 @@ DEFINE_XEN_GUEST_HANDLE(xen_pfn_t);
 #define __HYPERVISOR_domctl               36
 #define __HYPERVISOR_kexec_op             37
 #define __HYPERVISOR_tmem_op              38
-#define __HYPERVISOR_xc_reserved_op       39 /* reserved for XenClient */
+#define __HYPERVISOR_v4v_op               39
=20
 /* Architecture-specific hypercall definitions. */
 #define __HYPERVISOR_arch_0               48
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 53804c8..296de52 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -23,6 +23,7 @@
 #include <public/sysctl.h>
 #include <public/vcpu.h>
 #include <public/mem_event.h>
+#include <xen/v4v.h>
=20
 #ifdef CONFIG_COMPAT
 #include <compat/vcpu.h>
@@ -350,6 +351,9 @@ struct domain
     nodemask_t node_affinity;
     unsigned int last_alloc_node;
     spinlock_t node_affinity_lock;
+
+    /* v4v */
+    struct v4v_domain *v4v;
 };
=20
 struct domain_setup_info
diff --git a/xen/include/xen/v4v.h b/xen/include/xen/v4v.h
new file mode 100644
index 0000000..cba5ea7
--- /dev/null
+++ b/xen/include/xen/v4v.h
@@ -0,0 +1,134 @@
+/******************************************************************************
+ * V4V
+ *
+ * Version 2 of v2v (Virtual-to-Virtual)
+ *
+ * Copyright (c) 2010, Citrix Systems
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
+ */
+
+#ifndef __V4V_PRIVATE_H__
+#define __V4V_PRIVATE_H__
+
+#include <xen/config.h>
+#include <xen/types.h>
+#include <xen/spinlock.h>
+#include <xen/smp.h>
+#include <xen/shared.h>
+#include <xen/list.h>
+#include <public/v4v.h>
+
+#define V4V_HTABLE_SIZE 32
+
+/*
+ * Handlers
+ */
+
+DEFINE_XEN_GUEST_HANDLE (v4v_iov_t);
+DEFINE_XEN_GUEST_HANDLE (v4v_addr_t);
+DEFINE_XEN_GUEST_HANDLE (v4v_send_addr_t);
+DEFINE_XEN_GUEST_HANDLE (v4v_pfn_t);
+DEFINE_XEN_GUEST_HANDLE (v4v_ring_t);
+DEFINE_XEN_GUEST_HANDLE (v4v_ring_data_ent_t);
+DEFINE_XEN_GUEST_HANDLE (v4v_ring_data_t);
+DEFINE_XEN_GUEST_HANDLE (v4v_info_t);
+
+DEFINE_XEN_GUEST_HANDLE (v4v_viptables_rule_t);
+DEFINE_XEN_GUEST_HANDLE (v4v_viptables_list_t);
+
+/*
+ * Helper functions
+ */
+
+static inline uint16_t
+v4v_hash_fn (v4v_ring_id_t *id)
+{
+    uint16_t ret;
+    ret = (uint16_t) (id->addr.port >> 16);
+    ret ^= (uint16_t) id->addr.port;
+    ret ^= id->addr.domain;
+    ret ^= id->partner;
+
+    ret &= (V4V_HTABLE_SIZE - 1);
+
+    return ret;
+}
+
+struct v4v_pending_ent
+{
+    struct hlist_node node;
+    domid_t id;
+    uint32_t len;
+};
+
+
+struct v4v_ring_info
+{
+    /* next node in the hash, protected by L2  */
+    struct hlist_node node;
+    /* this ring's id, protected by L2 */
+    v4v_ring_id_t id;
+    /* L3 */
+    spinlock_t lock;
+    /* cached length of the ring (from ring->len), protected by L3 */
+    uint32_t len;
+    uint32_t npage;
+    /* cached tx pointer location, protected by L3 */
+    uint32_t tx_ptr;
+    /* guest ring, protected by L3 */
+    XEN_GUEST_HANDLE(v4v_ring_t) ring;
+    /* mapped ring pages protected by L3*/
+    uint8_t **mfn_mapping;
+    /* list of mfns of guest ring */
+    mfn_t *mfns;
+    /* list of struct v4v_pending_ent for this ring, L3 */
+    struct hlist_head pending;
+};
+
+/*
+ * The value of the v4v element in a struct domain is
+ * protected by the global lock L1
+ */
+struct v4v_domain
+{
+    /* L2 */
+    rwlock_t lock;
+    /* event channel */
+    evtchn_port_t evtchn_port;
+    /* protected by L2 */
+    struct hlist_head ring_hash[V4V_HTABLE_SIZE];
+};
+
+typedef struct v4v_viptables_rule_node
+{
+    struct list_head list;
+    v4v_viptables_rule_t rule;
+} v4v_viptables_rule_node_t;
+
+void v4v_destroy(struct domain *d);
+int v4v_init(struct domain *d);
+long do_v4v_op (int cmd,
+                XEN_GUEST_HANDLE (void) arg1,
+                XEN_GUEST_HANDLE (void) arg2,
+                uint32_t arg3,
+                uint32_t arg4);
+
+#endif /* __V4V_PRIVATE_H__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-set-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
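The bucket selection done by `v4v_hash_fn` above can be restated standalone. A sketch with the `v4v_ring_id_t` fields flattened into parameters (the function name and parameter layout are illustrative, not part of the patch):

```c
#include <stdint.h>
#include <assert.h>

#define V4V_HTABLE_SIZE 32 /* must remain a power of two for the mask */

/* Restatement of v4v_hash_fn: fold the 32 bit port into 16 bits,
 * xor in both domain ids, then mask down to a bucket index in
 * [0, V4V_HTABLE_SIZE). */
static uint16_t v4v_hash_ring_id(uint32_t port, uint16_t domain,
                                 uint16_t partner)
{
    uint16_t ret = (uint16_t)(port >> 16);
    ret ^= (uint16_t)port;
    ret ^= domain;
    ret ^= partner;
    return ret & (V4V_HTABLE_SIZE - 1);
}
```

Because the final mask relies on `V4V_HTABLE_SIZE` being a power of two, changing the table size to anything else would silently skew the distribution.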
diff --git a/xen/include/xen/v4v_utils.h b/xen/include/xen/v4v_utils.h
new file mode 100644
index 0000000..67b2d77
--- /dev/null
+++ b/xen/include/xen/v4v_utils.h
@@ -0,0 +1,276 @@
+/******************************************************************************
+ * V4V
+ *
+ * Version 2 of v2v (Virtual-to-Virtual)
+ *
+ * Copyright (c) 2010, Citrix Systems
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
+ */
+
+#ifndef __V4V_UTILS_H__
+# define __V4V_UTILS_H__
+
+/* Compiler specific hacks */
+#if defined(__GNUC__)
+# define V4V_UNUSED __attribute__ ((unused))
+# ifndef __STRICT_ANSI__
+#  define V4V_INLINE inline
+# else
+#  define V4V_INLINE
+# endif
+#else /* !__GNUC__ */
+# define V4V_UNUSED
+# define V4V_INLINE
+#endif
+
+
+/*
+ * Utility functions
+ */
+
+static V4V_INLINE uint32_t
+v4v_ring_bytes_to_read (volatile struct v4v_ring *r)
+{
+    int32_t ret;
+    ret = r->tx_ptr - r->rx_ptr;
+    if (ret >= 0)
+        return ret;
+    return (uint32_t) (r->len + ret);
+}
+
+
+/*
+ * Copy at most t bytes of the next message in the ring, into the buffer
+ * at _buf, setting from and protocol if they are not NULL, returns
+ * the actual length of the message, or -1 if there is nothing to read
+ */
+V4V_UNUSED static V4V_INLINE ssize_t
+v4v_copy_out (struct v4v_ring *r, struct v4v_addr *from, uint32_t *protocol,
+              void *_buf, size_t t, int consume)
+{
+    volatile struct v4v_ring_message_header *mh;
+    /* unnecessary cast from void * required by MSVC compiler */
+    uint8_t *buf = (uint8_t *) _buf;
+    uint32_t btr = v4v_ring_bytes_to_read (r);
+    uint32_t rxp = r->rx_ptr;
+    uint32_t bte;
+    uint32_t len;
+    ssize_t ret;
+
+    if (btr < sizeof (*mh))
+        return -1;
+
+    /*
+     * Because the message_header is 128 bits long and the ring is 128 bit
+     * aligned, we're guaranteed never to wrap
+     */
+    mh = (volatile struct v4v_ring_message_header *) &r->ring[r->rx_ptr];
+
+    len = mh->len;
+    if (btr < len)
+        return -1;
+
+#if defined(__GNUC__)
+    if (from)
+        *from = mh->source;
+#else
+    /* MSVC can't do the above */
+    if (from)
+	memcpy((void *) from, (void *) &(mh->source), sizeof(struct v4v_addr));
+#endif
+
+    if (protocol)
+        *protocol = mh->protocol;
+
+    rxp += sizeof (*mh);
+    if (rxp == r->len)
+        rxp = 0;
+    len -= sizeof (*mh);
+    ret = len;
+
+    bte = r->len - rxp;
+
+    if (bte < len)
+    {
+        if (t < bte)
+        {
+            if (buf)
+            {
+                memcpy (buf, (void *) &r->ring[rxp], t);
+                buf += t;
+            }
+
+            rxp = 0;
+            len -= bte;
+            t = 0;
+        }
+        else
+        {
+            if (buf)
+            {
+                memcpy (buf, (void *) &r->ring[rxp], bte);
+                buf += bte;
+            }
+            rxp = 0;
+            len -= bte;
+            t -= bte;
+        }
+    }
+
+    if (buf && t)
+        memcpy (buf, (void *) &r->ring[rxp], (t < len) ? t : len);
+
+    rxp += V4V_ROUNDUP (len);
+    if (rxp == r->len)
+        rxp = 0;
+
+    mb ();
+
+    if (consume)
+        r->rx_ptr = rxp;
+
+    return ret;
+}
+
+static V4V_INLINE void
+v4v_memcpy_skip (void *_dst, const void *_src, size_t len, size_t *skip)
+{
+    const uint8_t *src = (const uint8_t *) _src;
+    uint8_t *dst = (uint8_t *) _dst;
+
+    if (!*skip)
+    {
+        memcpy (dst, src, len);
+        return;
+    }
+
+    if (*skip >= len)
+    {
+        *skip -= len;
+        return;
+    }
+
+    src += *skip;
+    dst += *skip;
+    len -= *skip;
+    *skip = 0;
+
+    memcpy (dst, src, len);
+}
+
+/*
+ * Copy at most t bytes of the next message in the ring, into the buffer
+ * at _buf, skipping skip bytes, setting from and protocol if they are
+ * not NULL, returns the actual length of the message, or -1 if there is
+ * nothing to read
+ */
+static ssize_t
+v4v_copy_out_offset (struct v4v_ring *r, struct v4v_addr *from,
+                     uint32_t *protocol, void *_buf, size_t t, int consume,
+                     size_t skip) V4V_UNUSED;
+
+V4V_INLINE static ssize_t
+v4v_copy_out_offset (struct v4v_ring *r, struct v4v_addr *from,
+                     uint32_t *protocol, void *_buf, size_t t, int consume,
+                     size_t skip)
+{
+    volatile struct v4v_ring_message_header *mh;
+    /* unnecessary cast from void * required by MSVC compiler */
+    uint8_t *buf = (uint8_t *) _buf;
+    uint32_t btr = v4v_ring_bytes_to_read (r);
+    uint32_t rxp = r->rx_ptr;
+    uint32_t bte;
+    uint32_t len;
+    ssize_t ret;
+
+    buf -= skip;
+
+    if (btr < sizeof (*mh))
+        return -1;
+
+    /*
+     * Because the message_header is 128 bits long and the ring is 128 bit
+     * aligned, we're guaranteed never to wrap
+     */
+    mh = (volatile struct v4v_ring_message_header *) &r->ring[r->rx_ptr];
+
+    len = mh->len;
+    if (btr < len)
+        return -1;
+
+#if defined(__GNUC__)
+    if (from)
+        *from = mh->source;
+#else
+    /* MSVC can't do the above */
+    if (from)
+	memcpy((void *) from, (void *) &(mh->source), sizeof(struct v4v_addr));
+#endif
+
+    if (protocol)
+        *protocol = mh->protocol;
+
+    rxp += sizeof (*mh);
+    if (rxp == r->len)
+        rxp = 0;
+    len -= sizeof (*mh);
+    ret = len;
+
+    bte = r->len - rxp;
+
+    if (bte < len)
+    {
+        if (t < bte)
+        {
+            if (buf)
+            {
+                v4v_memcpy_skip (buf, (void *) &r->ring[rxp], t, &skip);
+                buf += t;
+            }
+
+            rxp = 0;
+            len -= bte;
+            t = 0;
+        }
+        else
+        {
+            if (buf)
+            {
+                v4v_memcpy_skip (buf, (void *) &r->ring[rxp], bte, &skip);
+                buf += bte;
+            }
+            rxp = 0;
+            len -= bte;
+            t -= bte;
+        }
+    }
+
+    if (buf && t)
+        v4v_memcpy_skip (buf, (void *) &r->ring[rxp], (t < len) ? t : len,
+                         &skip);
+
+    rxp += V4V_ROUNDUP (len);
+    if (rxp == r->len)
+        rxp = 0;
+
+    mb ();
+
+    if (consume)
+        r->rx_ptr = rxp;
+
+    return ret;
+}
+
+#endif /* !__V4V_UTILS_H__ */
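The wrap-around arithmetic that `v4v_ring_bytes_to_read` in the header above relies on can be demonstrated standalone. A sketch with the struct access flattened into parameters (the function name and signature are illustrative, not part of the patch):

```c
#include <stdint.h>
#include <assert.h>

/* Distance between the producer and consumer pointers of a circular
 * ring of 'len' bytes, mirroring v4v_ring_bytes_to_read: a negative
 * raw difference means tx_ptr has wrapped past the end of the ring,
 * so the ring length is added back. */
static uint32_t ring_bytes_to_read(uint32_t tx_ptr, uint32_t rx_ptr,
                                   uint32_t len)
{
    int32_t ret = (int32_t)(tx_ptr - rx_ptr);
    if (ret >= 0)
        return (uint32_t)ret;
    return len + (uint32_t)ret; /* ret is negative here */
}
```

With a 4096 byte ring, a reader at offset 4080 and a writer that has wrapped to offset 16 still sees 32 readable bytes: 16 before the end of the ring and 16 after the wrap.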

--------------true
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--------------true--


From xen-devel-bounces@lists.xen.org Fri Aug 03 19:50:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 19:50:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxNsc-0003hB-Kt; Fri, 03 Aug 2012 19:49:50 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jean.guyader@citrix.com>) id 1SxNsa-0003fz-3W
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 19:49:48 +0000
Received: from [85.158.143.99:10077] by server-1.bemta-4.messagelabs.com id
	9C/98-24392-B5B2C105; Fri, 03 Aug 2012 19:49:47 +0000
X-Env-Sender: jean.guyader@citrix.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1344023386!23130264!6
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc5NTM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24491 invoked from network); 3 Aug 2012 19:49:46 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-6.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 19:49:46 -0000
X-IronPort-AV: E=Sophos;i="4.77,708,1336348800"; d="scan'208";a="13846183"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Aug 2012 19:49:46 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 3 Aug 2012 20:49:46 +0100
Received: from spongy.cam.xci-test.com ([10.80.248.53] helo=spongy)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<jean.guyader@citrix.com>)	id 1SxNsY-00063n-3f;
	Fri, 03 Aug 2012 19:49:46 +0000
Received: by spongy (Postfix, from userid 2023)	id 5091734049D; Fri,  3 Aug
	2012 20:50:58 +0100 (BST)
From: Jean Guyader <jean.guyader@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Fri, 3 Aug 2012 20:50:54 +0100
Message-ID: <1344023454-31425-6-git-send-email-jean.guyader@citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1344023454-31425-1-git-send-email-jean.guyader@citrix.com>
References: <1344023454-31425-1-git-send-email-jean.guyader@citrix.com>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="------------true"
Cc: Jean Guyader <jean.guyader@citrix.com>
Subject: [Xen-devel] [PATCH 5/5] xen: Add V4V implementation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--------------true
Content-Type: text/plain; charset="UTF-8"; format=fixed
Content-Transfer-Encoding: quoted-printable


Set up v4v when a domain gets created and clean it up
when a domain dies. Wire up the v4v hypercall.

Include v4v internal and public headers.

Signed-off-by: Jean Guyader <jean.guyader@citrix.com>
---
 xen/arch/x86/hvm/hvm.c             |    9 +-
 xen/arch/x86/x86_32/entry.S        |    2 +
 xen/arch/x86/x86_64/compat/entry.S |    2 +
 xen/arch/x86/x86_64/entry.S        |    2 +
 xen/common/Makefile                |    1 +
 xen/common/domain.c                |   13 +-
 xen/common/v4v.c                   | 1895 ++++++++++++++++++++++++++++++++++++
 xen/include/public/v4v.h           |  291 ++++++
 xen/include/public/xen.h           |    2 +-
 xen/include/xen/sched.h            |    4 +
 xen/include/xen/v4v.h              |  134 +++
 xen/include/xen/v4v_utils.h        |  276 ++++++
 12 files changed, 2626 insertions(+), 5 deletions(-)
 create mode 100644 xen/common/v4v.c
 create mode 100644 xen/include/public/v4v.h
 create mode 100644 xen/include/xen/v4v.h
 create mode 100644 xen/include/xen/v4v_utils.h


--------------true
Content-Type: text/x-patch; name="0005-xen-Add-V4V-implementation.patch"
Content-Disposition: attachment;
	filename="0005-xen-Add-V4V-implementation.patch"
Content-Transfer-Encoding: quoted-printable

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 22c136b..2671069 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -3125,7 +3125,8 @@ static hvm_hypercall_t *hvm_hypercall32_table[NR_hypercalls] = {
     HYPERCALL(set_timer_op),
     HYPERCALL(hvm_op),
     HYPERCALL(sysctl),
-    HYPERCALL(tmem_op)
+    HYPERCALL(tmem_op),
+    HYPERCALL(v4v_op)
 };
=20
 #else /* defined(__x86_64__) */
@@ -3210,7 +3211,8 @@ static hvm_hypercall_t *hvm_hypercall64_table[NR_hypercalls] = {
     HYPERCALL(set_timer_op),
     HYPERCALL(hvm_op),
     HYPERCALL(sysctl),
-    HYPERCALL(tmem_op)
+    HYPERCALL(tmem_op),
+    HYPERCALL(v4v_op)
 };
=20
 #define COMPAT_CALL(x)                                        \
@@ -3227,7 +3229,8 @@ static hvm_hypercall_t *hvm_hypercall32_table[NR_hypercalls] = {
     COMPAT_CALL(set_timer_op),
     HYPERCALL(hvm_op),
     HYPERCALL(sysctl),
-    HYPERCALL(tmem_op)
+    HYPERCALL(tmem_op),
+    HYPERCALL(v4v_op)
 };
=20
 #endif /* defined(__x86_64__) */
diff --git a/xen/arch/x86/x86_32/entry.S b/xen/arch/x86/x86_32/entry.S
index 2982679..5b1f55b 100644
--- a/xen/arch/x86/x86_32/entry.S
+++ b/xen/arch/x86/x86_32/entry.S
@@ -700,6 +700,7 @@ ENTRY(hypercall_table)
         .long do_domctl
         .long do_kexec_op
         .long do_tmem_op
+        .long do_v4v_op
         .rept __HYPERVISOR_arch_0-((.-hypercall_table)/4)
         .long do_ni_hypercall
         .endr
@@ -748,6 +749,7 @@ ENTRY(hypercall_args_table)
         .byte 1 /* do_domctl            */
         .byte 2 /* do_kexec_op          */
         .byte 1 /* do_tmem_op           */
+        .byte 5 /* do_v4v_op            */
         .rept __HYPERVISOR_arch_0-(.-hypercall_args_table)
         .byte 0 /* do_ni_hypercall      */
         .endr
diff --git a/xen/arch/x86/x86_64/compat/entry.S b/xen/arch/x86/x86_64/compat/entry.S
index f49ff2d..6b838d3 100644
--- a/xen/arch/x86/x86_64/compat/entry.S
+++ b/xen/arch/x86/x86_64/compat/entry.S
@@ -414,6 +414,7 @@ ENTRY(compat_hypercall_table)
         .quad do_domctl
         .quad compat_kexec_op
         .quad do_tmem_op
+        .quad do_v4v_op
         .rept __HYPERVISOR_arch_0-((.-compat_hypercall_table)/8)
         .quad compat_ni_hypercall
         .endr
@@ -462,6 +463,7 @@ ENTRY(compat_hypercall_args_table)
         .byte 1 /* do_domctl                */
         .byte 2 /* compat_kexec_op          */
         .byte 1 /* do_tmem_op               */
+        .byte 5 /* do_v4v_op                */
         .rept __HYPERVISOR_arch_0-(.-compat_hypercall_args_table)
         .byte 0 /* compat_ni_hypercall      */
         .endr
diff --git a/xen/arch/x86/x86_64/entry.S b/xen/arch/x86/x86_64/entry.S
index 997bc94..e6a7fdd 100644
--- a/xen/arch/x86/x86_64/entry.S
+++ b/xen/arch/x86/x86_64/entry.S
@@ -707,6 +707,7 @@ ENTRY(hypercall_table)
         .quad do_domctl
         .quad do_kexec_op
         .quad do_tmem_op
+        .quad do_v4v_op
         .rept __HYPERVISOR_arch_0-((.-hypercall_table)/8)
         .quad do_ni_hypercall
         .endr
@@ -755,6 +756,7 @@ ENTRY(hypercall_args_table)
         .byte 1 /* do_domctl            */
         .byte 2 /* do_kexec             */
         .byte 1 /* do_tmem_op           */
+        .byte 5 /* do_v4v_op            */
         .rept __HYPERVISOR_arch_0-(.-hypercall_args_table)
         .byte 0 /* do_ni_hypercall      */
         .endr
diff --git a/xen/common/Makefile b/xen/common/Makefile
index 9eba8bc..fe3c72c 100644
--- a/xen/common/Makefile
+++ b/xen/common/Makefile
@@ -45,6 +45,7 @@ obj-y += tmem_xen.o
 obj-y += radix-tree.o
 obj-y += rbtree.o
 obj-y += lzo.o
+obj-y += v4v.o
=20
 obj-bin-$(CONFIG_X86) += $(foreach n,decompress bunzip2 unxz unlzma unlzo,$(n).init.o)
=20
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 4c5d241..1600f45 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -195,7 +195,8 @@ struct domain *domain_create(
 {
     struct domain *d, **pd;
     enum { INIT_xsm = 1u<<0, INIT_watchdog = 1u<<1, INIT_rangeset = 1u<<2,
-           INIT_evtchn = 1u<<3, INIT_gnttab = 1u<<4, INIT_arch = 1u<<5 };
+           INIT_evtchn = 1u<<3, INIT_gnttab = 1u<<4, INIT_arch = 1u<<5,
+           INIT_v4v = 1u<<6 };
     int init_status = 0;
     int poolid = CPUPOOLID_NONE;
=20
@@ -305,6 +306,13 @@ struct domain *domain_create(
         spin_unlock(&domlist_update_lock);
     }
=20
+    if ( !is_idle_domain(d) )
+    {
+        if ( v4v_init(d) != 0 )
+            goto fail;
+        init_status |= INIT_v4v;
+    }
+
     return d;
=20
  fail:
@@ -313,6 +321,8 @@ struct domain *domain_create(
     xfree(d->mem_event);
     if ( init_status & INIT_arch )
         arch_domain_destroy(d);
+    if ( init_status & INIT_v4v )
+        v4v_destroy(d);
     if ( init_status & INIT_gnttab )
         grant_table_destroy(d);
     if ( init_status & INIT_evtchn )
@@ -466,6 +476,7 @@ int domain_kill(struct domain *d)
         domain_pause(d);
         d->is_dying =3D DOMDYING_dying;
         spin_barrier(&d->domain_lock);
+        v4v_destroy(d);
         evtchn_destroy(d);
         gnttab_release_mappings(d);
         tmem_destroy(d->tmem);
diff --git a/xen/common/v4v.c b/xen/common/v4v.c
new file mode 100644
index 0000000..8b96b38
--- /dev/null
+++ b/xen/common/v4v.c
@@ -0,0 +1,1895 @@
+/******************************************************************************
+ * V4V
+ *
+ * Version 2 of v2v (Virtual-to-Virtual)
+ *
+ * Copyright (c) 2010, Citrix Systems
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
+ */
+
+#include <xen/config.h>
+#include <xen/mm.h>
+#include <xen/compat.h>
+#include <xen/init.h>
+#include <xen/lib.h>
+#include <xen/errno.h>
+#include <xen/sched.h>
+#include <xen/domain.h>
+#include <xen/v4v.h>
+#include <xen/event.h>
+#include <xen/guest_access.h>
+#include <asm/paging.h>
+#include <asm/p2m.h>
+#include <xen/keyhandler.h>
+#include <xen/v4v_utils.h>
+
+#ifdef V4V_DEBUG
+#define v4v_dprintk(format, args...)            \
+    do {                                        \
+        printk("%s:%d " format,                 \
+               __FILE__, __LINE__, ## args );   \
+    } while ( 1 == 0 )
+#else
+#define v4v_dprintk(format, ... ) (void)0
+#endif
+
+
+struct list_head viprules = LIST_HEAD_INIT(viprules);
+static DEFINE_RWLOCK(viprules_lock);
+
+DEFINE_XEN_GUEST_HANDLE(uint8_t);
+static struct v4v_ring_info *v4v_ring_find_info(struct domain *d,
+                                                v4v_ring_id_t *id);
+
+static struct v4v_ring_info *v4v_ring_find_info_by_addr(struct domain *d,
+                                                        struct v4v_addr *a,
+                                                        domid_t p);
+
+typedef struct internal_v4v_iov
+{
+    XEN_GUEST_HANDLE(v4v_iov_t) guest_iov;
+    v4v_iov_t                   *xen_iov;
+} internal_v4v_iov_t;
+
+/*
+ * locks
+ */
+
+/*
+ * locking is organized as follows:
+ *
+ * the global lock v4v_lock: L1 protects the v4v elements
+ * of all struct domain *d in the system. It does not
+ * protect any of the elements of d->v4v, just their
+ * addresses. By extension, since the destruction of
+ * a domain with a non-NULL d->v4v will need to free
+ * the d->v4v pointer, holding this lock guarantees
+ * that no domain pointers in which v4v is interested
+ * become invalid whilst this lock is held.
+ */
+
+static DEFINE_RWLOCK(v4v_lock); /* L1 */
+
+/*
+ * the lock d->v4v->lock: L2: Read on L2 protects the hash table and
+ * the elements in the hash table d->v4v->ring_hash, and
+ * the node and id fields in struct v4v_ring_info in the
+ * hash table. Write on L2 protects all of the elements of
+ * struct v4v_ring_info. To take L2 you must already have R(L1);
+ * W(L1) implies W(L2) and L3.
+ *
+ * the lock v4v_ring_info *ringinfo; ringinfo->lock: L3:
+ * protects len, tx_ptr, the guest ring, the
+ * guest ring_data and the pending list. To take L3 you must
+ * already have R(L2). W(L2) implies L3.
+ */
+
+
+/*
+ * Debugs
+ */
+
+#ifdef V4V_DEBUG
+static void
+v4v_hexdump(void *_p, int len)
+{
+    uint8_t *buf = (uint8_t *)_p;
+    int i, j;
+
+    for ( i = 0; i < len; i += 16 )
+    {
+        printk(KERN_ERR "%p:", &buf[i]);
+        for ( j = 0; j < 16; ++j )
+        {
+            int k = i + j;
+            if ( k < len )
+                printk(" %02x", buf[k]);
+            else
+                printk("   ");
+        }
+        printk(" ");
+
+        for ( j = 0; j < 16; ++j )
+        {
+            int k = i + j;
+            if ( k < len )
+                printk("%c", ((buf[k] > 32) && (buf[k] < 127)) ? buf[k] : '.');
+            else
+                printk(" ");
+        }
+        printk("\n");
+    }
+}
+#endif
+
+
+/*
+ * Event channel
+ */
+
+static void
+v4v_signal_domain(struct domain *d)
+{
+    v4v_dprintk("send guest VIRQ_V4V domid:%d\n", d->domain_id);
+
+    evtchn_send(d, d->v4v->evtchn_port);
+}
+
+static void
+v4v_signal_domid(domid_t id)
+{
+    struct domain *d = get_domain_by_id(id);
+    if ( !d )
+        return;
+    v4v_signal_domain(d);
+    put_domain(d);
+}
+
+
+/*
+ * ring buffer
+ */
+
+static void
+v4v_ring_unmap(struct v4v_ring_info *ring_info)
+{
+    int i;
+
+    ASSERT(spin_is_locked(&ring_info->lock));
+
+    for ( i = 0; i < ring_info->npage; ++i )
+    {
+        if ( !ring_info->mfn_mapping[i] )
+            continue;
+        v4v_dprintk("unmapping page %p from %p\n",
+                    (void *)mfn_x(ring_info->mfns[i]),
+                    ring_info->mfn_mapping[i]);
+
+        unmap_domain_page(ring_info->mfn_mapping[i]);
+        ring_info->mfn_mapping[i] = NULL;
+    }
+}
+
+static uint8_t *
+v4v_ring_map_page(struct v4v_ring_info *ring_info, int i)
+{
+    ASSERT(spin_is_locked(&ring_info->lock));
+
+    if ( i >= ring_info->npage )
+        return NULL;
+    if ( ring_info->mfn_mapping[i] )
+        return ring_info->mfn_mapping[i];
+    ring_info->mfn_mapping[i] = map_domain_page(mfn_x(ring_info->mfns[i]));
+
+    v4v_dprintk("mapping page %p to %p\n",
+                (void *)mfn_x(ring_info->mfns[i]),
+                ring_info->mfn_mapping[i]);
+    return ring_info->mfn_mapping[i];
+}
+
+static int
+v4v_memcpy_from_guest_ring(void *_dst, struct v4v_ring_info *ring_info,
+                           uint32_t offset, uint32_t len)
+{
+    int page = offset >> PAGE_SHIFT;
+    uint8_t *src;
+    uint8_t *dst = _dst;
+
+    ASSERT(spin_is_locked(&ring_info->lock));
+
+    offset &= PAGE_SIZE - 1;
+
+    while ( (offset + len) > PAGE_SIZE )
+    {
+        src = v4v_ring_map_page(ring_info, page);
+
+        if ( !src )
+            return -EFAULT;
+
+        v4v_dprintk("memcpy(%p,%p+%d,%d)\n",
+                    dst, src, offset,
+                    (int)(PAGE_SIZE - offset));
+        memcpy(dst, src + offset, PAGE_SIZE - offset);
+
+        page++;
+        len -= PAGE_SIZE - offset;
+        dst += PAGE_SIZE - offset;
+        offset = 0;
+    }
+
+    src = v4v_ring_map_page(ring_info, page);
+    if ( !src )
+        return -EFAULT;
+
+    v4v_dprintk("memcpy(%p,%p+%d,%d)\n", dst, src, offset, len);
+    memcpy(dst, src + offset, len);
+
+    return 0;
+}
+
+static int
+v4v_update_tx_ptr(struct v4v_ring_info *ring_info, uint32_t tx_ptr)
+{
+    uint8_t *dst = v4v_ring_map_page(ring_info, 0);
+    uint32_t *p;
+
+    ASSERT(spin_is_locked(&ring_info->lock));
+
+    if ( !dst )
+        return -EFAULT;
+
+    p = (uint32_t *)(dst + offsetof(v4v_ring_t, tx_ptr));
+    write_atomic(p, tx_ptr);
+    mb();
+    return 0;
+}
+
+static int
+v4v_copy_from_guest_maybe(void *dst, void *src,
+                          XEN_GUEST_HANDLE(uint8_t) src_hnd,
+                          uint32_t len)
+{
+    int rc = 0;
+
+    if ( src )
+        memcpy(dst, src, len);
+    else
+        rc = copy_from_guest(dst, src_hnd, len);
+    return rc;
+}
+
+static int
+v4v_memcpy_to_guest_ring(struct v4v_ring_info *ring_info,
+                         uint32_t offset,
+                         void *src,
+                         XEN_GUEST_HANDLE(uint8_t) src_hnd,
+                         uint32_t len)
+{
+    int page = offset >> PAGE_SHIFT;
+    uint8_t *dst;
+
+    ASSERT(spin_is_locked(&ring_info->lock));
+
+    offset &= PAGE_SIZE - 1;
+
+    while ( (offset + len) > PAGE_SIZE )
+    {
+        dst = v4v_ring_map_page(ring_info, page);
+        if ( !dst )
+            return -EFAULT;
+
+        if ( v4v_copy_from_guest_maybe(dst + offset, src, src_hnd,
+                                       PAGE_SIZE - offset) )
+            return -EFAULT;
+
+        page++;
+        len -= PAGE_SIZE - offset;
+        if ( src )
+            src += (PAGE_SIZE - offset);
+        else
+            guest_handle_add_offset(src_hnd, PAGE_SIZE - offset);
+        offset = 0;
+    }
+
+    dst = v4v_ring_map_page(ring_info, page);
+    if ( !dst )
+        return -EFAULT;
+
+    if ( v4v_copy_from_guest_maybe(dst + offset, src, src_hnd, len) )
+        return -EFAULT;
+
+    return 0;
+}
+
+static int
+v4v_ringbuf_get_rx_ptr(struct domain *d, struct v4v_ring_info *ring_info,
+                       uint32_t *rx_ptr)
+{
+    v4v_ring_t *ringp;
+
+    if ( ring_info->npage == 0 )
+        return -1;
+
+    ringp = map_domain_page(mfn_x(ring_info->mfns[0]));
+
+    v4v_dprintk("v4v_ringbuf_payload_space: mapped %p to %p\n",
+                (void *)mfn_x(ring_info->mfns[0]), ringp);
+    if ( !ringp )
+        return -1;
+
+    write_atomic(rx_ptr, ringp->rx_ptr);
+    mb();
+
+    unmap_domain_page(ringp);
+    return 0;
+}
+
+uint32_t
+v4v_ringbuf_payload_space(struct domain *d, struct v4v_ring_info *ring_info)
+{
+    v4v_ring_t ring;
+    int32_t ret;
+
+    ring.tx_ptr = ring_info->tx_ptr;
+    ring.len = ring_info->len;
+
+    if ( v4v_ringbuf_get_rx_ptr(d, ring_info, &ring.rx_ptr) )
+        return 0;
+
+    v4v_dprintk("v4v_ringbuf_payload_space: tx_ptr=%d rx_ptr=%d\n",
+                (int)ring.tx_ptr, (int)ring.rx_ptr);
+    if ( ring.rx_ptr == ring.tx_ptr )
+        return ring.len - sizeof (struct v4v_ring_message_header);
+
+    ret = ring.rx_ptr - ring.tx_ptr;
+    if ( ret < 0 )
+        ret += ring.len;
+
+    ret -= sizeof (struct v4v_ring_message_header);
+    ret -= V4V_ROUNDUP(1);
+
+    return (ret < 0) ? 0 : ret;
+}
+
+static unsigned long
+v4v_iov_copy(v4v_iov_t *iov, internal_v4v_iov_t *iovs)
+{
+    if ( iovs->xen_iov == NULL )
+    {
+        return copy_from_guest(iov, iovs->guest_iov, 1);
+    }
+    else
+    {
+        *iov = *(iovs->xen_iov);
+        return 0;
+    }
+}
+
+static void
+v4v_iov_add_offset(internal_v4v_iov_t *iovs, int offset)
+{
+    if ( iovs->xen_iov == NULL )
+        guest_handle_add_offset(iovs->guest_iov, offset);
+    else
+        iovs->xen_iov += offset;
+}
+
+static ssize_t
+v4v_iov_count(internal_v4v_iov_t iovs, int niov)
+{
+    v4v_iov_t iov;
+    ssize_t ret = 0;
+
+    while ( niov-- )
+    {
+        if ( v4v_iov_copy(&iov, &iovs) )
+            return -EFAULT;
+
+        ret += iov.iov_len;
+        v4v_iov_add_offset(&iovs, 1);
+    }
+
+    return ret;
+}
+
+static ssize_t
+v4v_ringbuf_insertv(struct domain *d,
+                    struct v4v_ring_info *ring_info,
+                    v4v_ring_id_t *src_id, uint32_t proto,
+                    internal_v4v_iov_t iovs, uint32_t niov,
+                    size_t len)
+{
+    v4v_ring_t ring;
+    struct v4v_ring_message_header mh = { 0 };
+    int32_t sp;
+    int32_t happy_ret;
+    int32_t ret = 0;
+    XEN_GUEST_HANDLE(uint8_t) empty_hnd = { 0 };
+
+    ASSERT(spin_is_locked(&ring_info->lock));
+
+    happy_ret = len;
+
+    if ( (V4V_ROUNDUP(len) + sizeof (struct v4v_ring_message_header)) >=
+         ring_info->len )
+        return -EMSGSIZE;
+
+    do
+    {
+        if ( (ret = v4v_memcpy_from_guest_ring(&ring, ring_info, 0,
+                                               sizeof (ring))) )
+            break;
+
+        ring.tx_ptr = ring_info->tx_ptr;
+        ring.len = ring_info->len;
+
+        v4v_dprintk("ring.tx_ptr=%d ring.rx_ptr=%d ring.len=%d ring_info->tx_ptr=%d\n",
+                    ring.tx_ptr, ring.rx_ptr, ring.len, ring_info->tx_ptr);
+
+        if ( ring.rx_ptr == ring.tx_ptr )
+            sp = ring_info->len;
+        else
+        {
+            sp = ring.rx_ptr - ring.tx_ptr;
+            if ( sp < 0 )
+                sp += ring.len;
+        }
+
+        if ( (V4V_ROUNDUP(len) + sizeof (struct v4v_ring_message_header)) >= sp )
+        {
+            v4v_dprintk("EAGAIN\n");
+            ret = -EAGAIN;
+            break;
+        }
+
+        mh.len = len + sizeof (struct v4v_ring_message_header);
+        mh.source = src_id->addr;
+        mh.protocol = proto;
+
+        if ( (ret = v4v_memcpy_to_guest_ring(ring_info,
+                                             ring.tx_ptr + sizeof (v4v_ring_t),
+                                             &mh, empty_hnd,
+                                             sizeof (mh))) )
+            break;
+
+        ring.tx_ptr += sizeof (mh);
+        if ( ring.tx_ptr == ring_info->len )
+            ring.tx_ptr = 0;
+
+        while ( niov-- )
+        {
+            XEN_GUEST_HANDLE(uint8_t) buf_hnd;
+            v4v_iov_t iov;
+
+            if ( v4v_iov_copy(&iov, &iovs) )
+            {
+                ret = -EFAULT;
+                break;
+            }
+
+            buf_hnd.p = (uint8_t *)iov.iov_base; /* FIXME */
+            len = iov.iov_len;
+
+            if ( unlikely(!guest_handle_okay(buf_hnd, len)) )
+            {
+                ret = -EFAULT;
+                break;
+            }
+
+            sp = ring.len - ring.tx_ptr;
+
+            if ( len > sp )
+            {
+                ret = v4v_memcpy_to_guest_ring(ring_info,
+                        ring.tx_ptr + sizeof (v4v_ring_t),
+                        NULL, buf_hnd, sp);
+                if ( ret )
+                    break;
+
+                ring.tx_ptr = 0;
+                len -= sp;
+                guest_handle_add_offset(buf_hnd, sp);
+            }
+
+            ret = v4v_memcpy_to_guest_ring(ring_info,
+                    ring.tx_ptr + sizeof (v4v_ring_t),
+                    NULL, buf_hnd, len);
+            if ( ret )
+                break;
+
+            ring.tx_ptr += len;
+
+            if ( ring.tx_ptr == ring_info->len )
+                ring.tx_ptr = 0;
+
+            v4v_iov_add_offset(&iovs, 1);
+        }
+        if ( ret )
+            break;
+
+        ring.tx_ptr = V4V_ROUNDUP(ring.tx_ptr);
+
+        if ( ring.tx_ptr >= ring_info->len )
+            ring.tx_ptr -= ring_info->len;
+
+        mb();
+        ring_info->tx_ptr = ring.tx_ptr;
+        if ( (ret = v4v_update_tx_ptr(ring_info, ring.tx_ptr)) )
+            break;
+    }
+    while ( 0 );
+
+    v4v_ring_unmap(ring_info);
+
+    return ret ? ret : happy_ret;
+}
+
+
+
+/* pending */
+static void
+v4v_pending_remove_ent(struct v4v_pending_ent *ent)
+{
+    hlist_del(&ent->node);
+    xfree(ent);
+}
+
+static void
+v4v_pending_remove_all(struct v4v_ring_info *info)
+{
+    struct hlist_node *node, *next;
+    struct v4v_pending_ent *pending_ent;
+
+    ASSERT(spin_is_locked(&info->lock));
+    hlist_for_each_entry_safe(pending_ent, node, next, &info->pending, node)
+        v4v_pending_remove_ent(pending_ent);
+}
+
+static void
+v4v_pending_notify(struct domain *caller_d, struct hlist_head *to_notify)
+{
+    struct hlist_node *node, *next;
+    struct v4v_pending_ent *pending_ent;
+
+    ASSERT(rw_is_locked(&v4v_lock));
+
+    hlist_for_each_entry_safe(pending_ent, node, next, to_notify, node)
+    {
+        hlist_del(&pending_ent->node);
+        v4v_signal_domid(pending_ent->id);
+        xfree(pending_ent);
+    }
+}
+
+static void
+v4v_pending_find(struct domain *d, struct v4v_ring_info *ring_info,
+                 uint32_t payload_space, struct hlist_head *to_notify)
+{
+    struct hlist_node *node, *next;
+    struct v4v_pending_ent *ent;
+
+    ASSERT(rw_is_locked(&d->v4v->lock));
+
+    spin_lock(&ring_info->lock);
+    hlist_for_each_entry_safe(ent, node, next, &ring_info->pending, node)
+    {
+        if ( payload_space >= ent->len )
+        {
+            hlist_del(&ent->node);
+            hlist_add_head(&ent->node, to_notify);
+        }
+    }
+    spin_unlock(&ring_info->lock);
+}
+
+/* Caller must hold L3 */
+static int
+v4v_pending_queue(struct v4v_ring_info *ring_info, domid_t src_id, int len)
+{
+    struct v4v_pending_ent *ent = xmalloc(struct v4v_pending_ent);
+
+    if ( !ent )
+    {
+        v4v_dprintk("ENOMEM\n");
+        return -ENOMEM;
+    }
+
+    ent->len = len;
+    ent->id = src_id;
+
+    hlist_add_head(&ent->node, &ring_info->pending);
+
+    return 0;
+}
+
+/* L3 */
+static int
+v4v_pending_requeue(struct v4v_ring_info *ring_info, domid_t src_id, int len)
+{
+    struct hlist_node *node;
+    struct v4v_pending_ent *ent;
+
+    hlist_for_each_entry(ent, node, &ring_info->pending, node)
+    {
+        if ( ent->id == src_id )
+        {
+            if ( ent->len < len )
+                ent->len = len;
+            return 0;
+        }
+    }
+
+    return v4v_pending_queue(ring_info, src_id, len);
+}
+
+
+/* L3 */
+static void
+v4v_pending_cancel(struct v4v_ring_info *ring_info, domid_t src_id)
+{
+    struct hlist_node *node, *next;
+    struct v4v_pending_ent *ent;
+
+    hlist_for_each_entry_safe(ent, node, next, &ring_info->pending, node)
+    {
+        if ( ent->id == src_id )
+        {
+            hlist_del(&ent->node);
+            xfree(ent);
+        }
+    }
+}
+
+/*
+ * ring data
+ */
+
+/* Caller should hold R(L1) */
+static int
+v4v_fill_ring_data(struct domain *src_d,
+                   XEN_GUEST_HANDLE(v4v_ring_data_ent_t) data_ent_hnd)
+{
+    v4v_ring_data_ent_t ent;
+    struct domain *dst_d;
+    struct v4v_ring_info *ring_info;
+
+    if ( copy_from_guest(&ent, data_ent_hnd, 1) )
+    {
+        v4v_dprintk("EFAULT\n");
+        return -EFAULT;
+    }
+
+    v4v_dprintk("v4v_fill_ring_data: ent.ring.domain=%d, ent.ring.port=%u\n",
+                (int)ent.ring.domain, (int)ent.ring.port);
+
+    ent.flags = 0;
+
+    dst_d = get_domain_by_id(ent.ring.domain);
+
+    if ( dst_d && dst_d->v4v )
+    {
+        read_lock(&dst_d->v4v->lock);
+        ring_info = v4v_ring_find_info_by_addr(dst_d, &ent.ring,
+                                               src_d->domain_id);
+
+        if ( ring_info )
+        {
+            uint32_t space_avail;
+
+            ent.flags |= V4V_RING_DATA_F_EXISTS;
+            ent.max_message_size =
+                ring_info->len - sizeof (struct v4v_ring_message_header) -
+                V4V_ROUNDUP(1);
+            spin_lock(&ring_info->lock);
+
+            space_avail = v4v_ringbuf_payload_space(dst_d, ring_info);
+
+            if ( space_avail >= ent.space_required )
+            {
+                v4v_pending_cancel(ring_info, src_d->domain_id);
+                ent.flags |= V4V_RING_DATA_F_SUFFICIENT;
+            }
+            else
+            {
+                v4v_pending_requeue(ring_info, src_d->domain_id,
+                        ent.space_required);
+                ent.flags |= V4V_RING_DATA_F_PENDING;
+            }
+
+            spin_unlock(&ring_info->lock);
+
+            if ( space_avail == ent.max_message_size )
+                ent.flags |= V4V_RING_DATA_F_EMPTY;
+        }
+        read_unlock(&dst_d->v4v->lock);
+    }
+
+    if ( dst_d )
+        put_domain(dst_d);
+
+    if ( copy_field_to_guest(data_ent_hnd, &ent, flags) )
+    {
+        v4v_dprintk("EFAULT\n");
+        return -EFAULT;
+    }
+    return 0;
+}
+
+/* Caller should hold no more than R(L1) */
+static int
+v4v_fill_ring_datas(struct domain *d, int nent,
+                     XEN_GUEST_HANDLE(v4v_ring_data_ent_t) data_ent_hnd)
+{
+    int ret = 0;
+
+    read_lock(&v4v_lock);
+    while ( !ret && nent-- )
+    {
+        ret = v4v_fill_ring_data(d, data_ent_hnd);
+        guest_handle_add_offset(data_ent_hnd, 1);
+    }
+    read_unlock(&v4v_lock);
+    return ret;
+}
+
+/*
+ * ring
+ */
+static int
+v4v_find_ring_mfns(struct domain *d, struct v4v_ring_info *ring_info,
+                   uint32_t npage, XEN_GUEST_HANDLE(v4v_pfn_t) pfn_hnd)
+{
+    int i,j;
+    mfn_t *mfns;
+    uint8_t **mfn_mapping;
+    unsigned long mfn;
+    struct page_info *page;
+    int ret =3D 0;
+
+    if ( (npage << PAGE_SHIFT) < ring_info->len )
+    {
+        v4v_dprintk("EINVAL\n");
+        return -EINVAL;
+    }
+
+    mfns = xmalloc_array(mfn_t, npage);
+    if ( !mfns )
+    {
+        v4v_dprintk("ENOMEM\n");
+        return -ENOMEM;
+    }
+
+    mfn_mapping = xmalloc_array(uint8_t *, npage);
+    if ( !mfn_mapping )
+    {
+        xfree(mfns);
+        return -ENOMEM;
+    }
+
+    for ( i = 0; i < npage; ++i )
+    {
+        unsigned long pfn;
+        p2m_type_t p2mt;
+
+        if ( copy_from_guest_offset(&pfn, pfn_hnd, i, 1) )
+        {
+            ret = -EFAULT;
+            v4v_dprintk("EFAULT\n");
+            break;
+        }
+
+        mfn = mfn_x(get_gfn(d, pfn, &p2mt));
+        if ( !mfn_valid(mfn) )
+        {
+            printk(KERN_ERR "v4v domain %d passed invalid mfn %"PRI_mfn" ring %p seq %d\n",
+                    d->domain_id, mfn, ring_info, i);
+            put_gfn(d, pfn);
+            ret = -EINVAL;
+            break;
+        }
+        page = mfn_to_page(mfn);
+        if ( !get_page_and_type(page, d, PGT_writable_page) )
+        {
+            printk(KERN_ERR "v4v domain %d passed wrong type mfn %"PRI_mfn" ring %p seq %d\n",
+                    d->domain_id, mfn, ring_info, i);
+            put_gfn(d, pfn);
+            ret = -EINVAL;
+            break;
+        }
+        mfns[i] = _mfn(mfn);
+        v4v_dprintk("v4v_find_ring_mfns: %d: %lx -> %lx\n",
+                    i, (unsigned long)pfn, (unsigned long)mfn_x(mfns[i]));
+        mfn_mapping[i] = NULL;
+        put_gfn(d, pfn);
+    }
+
+    if ( !ret )
+    {
+        ring_info->npage = npage;
+        ring_info->mfns = mfns;
+        ring_info->mfn_mapping = mfn_mapping;
+    }
+    else
+    {
+        j = i;
+        for ( i = 0; i < j; ++i )
+            if ( mfn_x(mfns[i]) != 0 )
+                put_page_and_type(mfn_to_page(mfn_x(mfns[i])));
+        xfree(mfn_mapping);
+        xfree(mfns);
+    }
+    return ret;
+}
+
+
+static struct v4v_ring_info *
+v4v_ring_find_info(struct domain *d, v4v_ring_id_t *id)
+{
+    uint16_t hash;
+    struct hlist_node *node;
+    struct v4v_ring_info *ring_info;
+
+    ASSERT(rw_is_locked(&d->v4v->lock));
+
+    hash = v4v_hash_fn(id);
+
+    v4v_dprintk("ring_find_info: d->v4v=%p, d->v4v->ring_hash[%d]=%p id=%p\n",
+                d->v4v, (int)hash, d->v4v->ring_hash[hash].first, id);
+    v4v_dprintk("ring_find_info: id.addr.port=%d id.addr.domain=%d id.partner=%d\n",
+                id->addr.port, id->addr.domain, id->partner);
+
+    hlist_for_each_entry(ring_info, node, &d->v4v->ring_hash[hash], node)
+    {
+        v4v_ring_id_t *cmpid = &ring_info->id;
+
+        if ( cmpid->addr.port == id->addr.port &&
+             cmpid->addr.domain == id->addr.domain &&
+             cmpid->partner == id->partner )
+        {
+            v4v_dprintk("ring_find_info: ring_info=%p\n", ring_info);
+            return ring_info;
+        }
+    }
+    v4v_dprintk("ring_find_info: no ring_info found\n");
+    return NULL;
+}
+
+static struct v4v_ring_info *
+v4v_ring_find_info_by_addr(struct domain *d, struct v4v_addr *a, domid_t p)
+{
+    v4v_ring_id_t id;
+    struct v4v_ring_info *ret;
+
+    ASSERT(rw_is_locked(&d->v4v->lock));
+
+    if ( !a )
+        return NULL;
+
+    id.addr.port = a->port;
+    id.addr.domain = d->domain_id;
+    id.partner = p;
+
+    ret = v4v_ring_find_info(d, &id);
+    if ( ret )
+        return ret;
+
+    id.partner = V4V_DOMID_ANY;
+
+    return v4v_ring_find_info(d, &id);
+}
+
+static void
+v4v_ring_remove_mfns(struct domain *d, struct v4v_ring_info *ring_info)
+{
+    int i;
+
+    ASSERT(rw_is_write_locked(&d->v4v->lock));
+
+    if ( ring_info->mfns )
+    {
+        for ( i = 0; i < ring_info->npage; ++i )
+            if ( mfn_x(ring_info->mfns[i]) != 0 )
+                put_page_and_type(mfn_to_page(mfn_x(ring_info->mfns[i])));
+        xfree(ring_info->mfns);
+    }
+    if ( ring_info->mfn_mapping )
+        xfree(ring_info->mfn_mapping);
+    ring_info->mfns = NULL;
+}
+
+static void
+v4v_ring_remove_info(struct domain *d, struct v4v_ring_info *ring_info)
+{
+    ASSERT(rw_is_write_locked(&d->v4v->lock));
+
+    spin_lock(&ring_info->lock);
+
+    v4v_pending_remove_all(ring_info);
+    hlist_del(&ring_info->node);
+    v4v_ring_remove_mfns(d, ring_info);
+
+    spin_unlock(&ring_info->lock);
+
+    xfree(ring_info);
+}
+
+/* Call from guest to unpublish a ring */
+static long
+v4v_ring_remove(struct domain *d, XEN_GUEST_HANDLE(v4v_ring_t) ring_hnd)
+{
+    struct v4v_ring ring;
+    struct v4v_ring_info *ring_info;
+    int ret = 0;
+
+    read_lock(&v4v_lock);
+
+    do
+    {
+        if ( !d->v4v )
+        {
+            v4v_dprintk("EINVAL\n");
+            ret = -EINVAL;
+            break;
+        }
+
+        if ( copy_from_guest(&ring, ring_hnd, 1) )
+        {
+            v4v_dprintk("EFAULT\n");
+            ret = -EFAULT;
+            break;
+        }
+
+        if ( ring.magic != V4V_RING_MAGIC )
+        {
+            v4v_dprintk("ring.magic(%lx) != V4V_RING_MAGIC(%lx), EINVAL\n",
+                    ring.magic, V4V_RING_MAGIC);
+            ret = -EINVAL;
+            break;
+        }
+
+        ring.id.addr.domain = d->domain_id;
+
+        write_lock(&d->v4v->lock);
+        ring_info = v4v_ring_find_info(d, &ring.id);
+
+        if ( ring_info )
+            v4v_ring_remove_info(d, ring_info);
+
+        write_unlock(&d->v4v->lock);
+
+        if ( !ring_info )
+        {
+            v4v_dprintk("ENOENT\n");
+            ret = -ENOENT;
+            break;
+        }
+    }
+    while ( 0 );
+
+    read_unlock(&v4v_lock);
+    return ret;
+}
+
+/* call from guest to publish a ring */
+static long
+v4v_ring_add(struct domain *d, XEN_GUEST_HANDLE(v4v_ring_t) ring_hnd,
+             uint32_t npage, XEN_GUEST_HANDLE(v4v_pfn_t) pfn_hnd)
+{
+    struct v4v_ring ring;
+    struct v4v_ring_info *ring_info;
+    int need_to_insert = 0;
+    int ret = 0;
+
+    if ( (long)ring_hnd.p & (PAGE_SIZE - 1) )
+    {
+        v4v_dprintk("EINVAL\n");
+        return -EINVAL;
+    }
+
+    read_lock(&v4v_lock);
+    do
+    {
+        if ( !d->v4v )
+        {
+            v4v_dprintk(" !d->v4v, EINVAL\n");
+            ret = -EINVAL;
+            break;
+        }
+
+        if ( copy_from_guest(&ring, ring_hnd, 1) )
+        {
+            v4v_dprintk(" copy_from_guest failed, EFAULT\n");
+            ret = -EFAULT;
+            break;
+        }
+
+        if ( ring.magic != V4V_RING_MAGIC )
+        {
+            v4v_dprintk("ring.magic(%lx) != V4V_RING_MAGIC(%lx), EINVAL\n",
+                        ring.magic, V4V_RING_MAGIC);
+            ret = -EINVAL;
+            break;
+        }
+
+        if ( (ring.len <
+                    (sizeof (struct v4v_ring_message_header) + V4V_ROUNDUP(1) +
+                     V4V_ROUNDUP(1))) || (V4V_ROUNDUP(ring.len) != ring.len) )
+        {
+            v4v_dprintk("EINVAL\n");
+            ret = -EINVAL;
+            break;
+        }
+
+        ring.id.addr.domain = d->domain_id;
+        if ( copy_field_to_guest(ring_hnd, &ring, id) )
+        {
+            v4v_dprintk("EFAULT\n");
+            ret = -EFAULT;
+            break;
+        }
+
+        /*
+         * No need for a lock yet, because only we know about this ring.
+         * Set the tx pointer if it looks bogus (we don't reset it
+         * because this might be a re-register after S4).
+         */
+        if ( (ring.tx_ptr >= ring.len)
+                || (V4V_ROUNDUP(ring.tx_ptr) != ring.tx_ptr) )
+        {
+            ring.tx_ptr = ring.rx_ptr;
+        }
+        copy_field_to_guest(ring_hnd, &ring, tx_ptr);
+
+        read_lock(&d->v4v->lock);
+        ring_info = v4v_ring_find_info(d, &ring.id);
+
+        if ( !ring_info )
+        {
+            read_unlock(&d->v4v->lock);
+            ring_info = xmalloc(struct v4v_ring_info);
+            if ( !ring_info )
+            {
+                v4v_dprintk("ENOMEM\n");
+                ret = -ENOMEM;
+                break;
+            }
+            need_to_insert++;
+            spin_lock_init(&ring_info->lock);
+            INIT_HLIST_HEAD(&ring_info->pending);
+            ring_info->mfns = NULL;
+        }
+        else
+        {
+            /*
+             * Ring info already existed; re-registering an
+             * existing ring is not allowed.
+             */
+            printk(KERN_INFO "v4v: dom%d ring already registered\n",
+                    current->domain->domain_id);
+            ret = -EEXIST;
+            break;
+        }
+
+        spin_lock(&ring_info->lock);
+        ring_info->id = ring.id;
+        ring_info->len = ring.len;
+        ring_info->tx_ptr = ring.tx_ptr;
+        ring_info->ring = ring_hnd;
+        if ( ring_info->mfns )
+            xfree(ring_info->mfns);
+        ret = v4v_find_ring_mfns(d, ring_info, npage, pfn_hnd);
+        spin_unlock(&ring_info->lock);
+        if ( ret )
+            break;
+
+        if ( !need_to_insert )
+        {
+            read_unlock(&d->v4v->lock);
+        }
+        else
+        {
+            uint16_t hash = v4v_hash_fn(&ring.id);
+            write_lock(&d->v4v->lock);
+            hlist_add_head(&ring_info->node, &d->v4v->ring_hash[hash]);
+            write_unlock(&d->v4v->lock);
+        }
+    }
+    while ( 0 );
+
+    read_unlock(&v4v_lock);
+    return ret;
+}
+
+
+/*
+ * io
+ */
+
+static void
+v4v_notify_ring(struct domain *d, struct v4v_ring_info *ring_info,
+                struct hlist_head *to_notify)
+{
+    uint32_t space;
+
+    ASSERT(rw_is_locked(&v4v_lock));
+    ASSERT(rw_is_locked(&d->v4v->lock));
+
+    spin_lock(&ring_info->lock);
+    space = v4v_ringbuf_payload_space(d, ring_info);
+    spin_unlock(&ring_info->lock);
+
+    v4v_pending_find(d, ring_info, space, to_notify);
+}
+
+/* Notify hypercall */
+static long
+v4v_notify(struct domain *d,
+           XEN_GUEST_HANDLE(v4v_ring_data_t) ring_data_hnd)
+{
+    v4v_ring_data_t ring_data;
+    HLIST_HEAD(to_notify);
+    int i;
+    int ret = 0;
+
+    read_lock(&v4v_lock);
+
+    if ( !d->v4v )
+    {
+        read_unlock(&v4v_lock);
+        v4v_dprintk("!d->v4v, ENODEV\n");
+        return -ENODEV;
+    }
+
+    read_lock(&d->v4v->lock);
+    for ( i = 0; i < V4V_HTABLE_SIZE; ++i )
+    {
+        struct hlist_node *node, *next;
+        struct v4v_ring_info *ring_info;
+
+        hlist_for_each_entry_safe(ring_info, node, next,
+                                  &d->v4v->ring_hash[i], node)
+        {
+            v4v_notify_ring(d, ring_info, &to_notify);
+        }
+    }
+    read_unlock(&d->v4v->lock);
+
+    if ( !hlist_empty(&to_notify) )
+        v4v_pending_notify(d, &to_notify);
+
+    do
+    {
+        if ( !guest_handle_is_null(ring_data_hnd) )
+        {
+            /* Quick sanity check on ring_data_hnd */
+            if ( copy_field_from_guest(&ring_data, ring_data_hnd, magic) )
+            {
+                v4v_dprintk("copy_field_from_guest failed\n");
+                ret = -EFAULT;
+                break;
+            }
+
+            if ( ring_data.magic != V4V_RING_DATA_MAGIC )
+            {
+                v4v_dprintk("ring_data.magic(%lx) != V4V_RING_DATA_MAGIC(%lx), EINVAL\n",
+                        ring_data.magic, V4V_RING_DATA_MAGIC);
+                ret = -EINVAL;
+                break;
+            }
+
+            if ( copy_from_guest(&ring_data, ring_data_hnd, 1) )
+            {
+                v4v_dprintk("copy_from_guest failed\n");
+                ret = -EFAULT;
+                break;
+            }
+
+            {
+                XEN_GUEST_HANDLE(v4v_ring_data_ent_t) ring_data_ent_hnd;
+                ring_data_ent_hnd =
+                    guest_handle_for_field(ring_data_hnd, v4v_ring_data_ent_t, data[0]);
+                ret = v4v_fill_ring_datas(d, ring_data.nent, ring_data_ent_hnd);
+            }
+        }
+    }
+    while ( 0 );
+
+    read_unlock(&v4v_lock);
+
+    return ret;
+}
+
+#ifdef V4V_DEBUG
+void
+v4v_viptables_print_rule(struct v4v_viptables_rule_node *node)
+{
+    v4v_viptables_rule_t *rule;
+
+    if ( node == NULL )
+    {
+        printk("(null)\n");
+        return;
+    }
+
+    rule = &node->rule;
+
+    if ( rule->accept == 1 )
+        printk("ACCEPT");
+    else
+        printk("REJECT");
+
+    printk(" ");
+
+    if ( rule->src.domain == DOMID_ANY )
+        printk("*");
+    else
+        printk("%i", rule->src.domain);
+
+    printk(":");
+
+    if ( rule->src.port == -1 )
+        printk("*");
+    else
+        printk("%i", rule->src.port);
+
+    printk(" -> ");
+
+    if ( rule->dst.domain == DOMID_ANY )
+        printk("*");
+    else
+        printk("%i", rule->dst.domain);
+
+    printk(":");
+
+    if ( rule->dst.port == -1 )
+        printk("*");
+    else
+        printk("%i", rule->dst.port);
+
+    printk("\n");
+}
+#endif /* V4V_DEBUG */
+
+int
+v4v_viptables_add(struct domain *src_d,
+                  XEN_GUEST_HANDLE(v4v_viptables_rule_t) rule,
+                  int32_t position)
+{
+    struct v4v_viptables_rule_node *new = NULL;
+    struct list_head *tmp;
+
+    ASSERT(rw_is_write_locked(&viprules_lock));
+
+    /* The first rule is number 1 */
+    position--;
+
+    new = xmalloc(struct v4v_viptables_rule_node);
+    if ( new == NULL )
+        return -ENOMEM;
+
+    if ( copy_from_guest(&new->rule, rule, 1) )
+    {
+        xfree(new);
+        return -EFAULT;
+    }
+
+#ifdef V4V_DEBUG
+    printk(KERN_ERR "VIPTables: ");
+    v4v_viptables_print_rule(new);
+#endif /* V4V_DEBUG */
+
+    tmp = &viprules;
+    while ( position != 0 && tmp->next != &viprules )
+    {
+        tmp = tmp->next;
+        position--;
+    }
+    list_add(&new->list, tmp);
+
+    return 0;
+}
+
+int
+v4v_viptables_del(struct domain *src_d,
+                  XEN_GUEST_HANDLE(v4v_viptables_rule_t) rule,
+                  int32_t position)
+{
+    struct list_head *tmp = NULL;
+    struct list_head *next = NULL;
+    struct v4v_viptables_rule_node *node;
+
+    ASSERT(rw_is_write_locked(&viprules_lock));
+
+    if ( position != -1 )
+    {
+        /* We want to delete rule number <position> */
+        tmp = &viprules;
+        while ( position != 0 && tmp->next != &viprules )
+        {
+            tmp = tmp->next;
+            position--;
+        }
+    }
+    else if ( !guest_handle_is_null(rule) )
+    {
+        struct v4v_viptables_rule r;
+
+        if ( copy_field_from_guest(&r, rule, src) ||
+             copy_field_from_guest(&r, rule, dst) ||
+             copy_field_from_guest(&r, rule, accept) )
+        {
+            return -EFAULT;
+        }
+
+        list_for_each(tmp, &viprules)
+        {
+            node = list_entry(tmp, struct v4v_viptables_rule_node, list);
+
+            if ( (node->rule.src.domain == r.src.domain) &&
+                 (node->rule.src.port   == r.src.port)   &&
+                 (node->rule.dst.domain == r.dst.domain) &&
+                 (node->rule.dst.port   == r.dst.port) )
+            {
+                position = 0;
+                break;
+            }
+        }
+    }
+    else
+    {
+        /* We want to flush the rules! */
+        printk(KERN_ERR "VIPTables: flushing rules\n");
+        list_for_each_safe(tmp, next, &viprules)
+        {
+            node = list_entry(tmp, struct v4v_viptables_rule_node, list);
+            list_del(tmp);
+            xfree(node);
+        }
+    }
+
+    if ( position == 0 && tmp != &viprules )
+    {
+        node = list_entry(tmp, struct v4v_viptables_rule_node, list);
+#ifdef V4V_DEBUG
+        printk(KERN_ERR "VIPTables: deleting rule: ");
+        v4v_viptables_print_rule(node);
+#endif /* V4V_DEBUG */
+        list_del(tmp);
+        xfree(node);
+    }
+
+    return 0;
+}
+
+static ssize_t
+v4v_viptables_list(struct domain *src_d,
+                   XEN_GUEST_HANDLE(v4v_viptables_list_t) list_hnd)
+{
+    struct list_head *ptr;
+    struct v4v_viptables_rule_node *node;
+    struct v4v_viptables_list rules_list;
+    uint32_t nbrules;
+    XEN_GUEST_HANDLE(v4v_viptables_rule_t) guest_rules;
+
+    ASSERT(rw_is_locked(&viprules_lock));
+
+    memset(&rules_list, 0, sizeof (rules_list));
+    if ( copy_from_guest(&rules_list, list_hnd, 1) )
+        return -EFAULT;
+
+    ptr = viprules.next;
+    while ( rules_list.start_rule != 0 && ptr->next != &viprules )
+    {
+        ptr = ptr->next;
+        rules_list.start_rule--;
+    }
+
+    if ( rules_list.nb_rules == 0 )
+        return -EINVAL;
+
+    guest_rules = guest_handle_for_field(list_hnd, v4v_viptables_rule_t, rules[0]);
+
+    nbrules = 0;
+    while ( nbrules < rules_list.nb_rules && ptr != &viprules )
+    {
+        node = list_entry(ptr, struct v4v_viptables_rule_node, list);
+
+        if ( !guest_handle_okay(guest_rules, 1) )
+            break;
+
+        if ( copy_to_guest(guest_rules, &node->rule, 1) )
+            break;
+
+        guest_handle_add_offset(guest_rules, 1);
+
+        nbrules++;
+        ptr = ptr->next;
+    }
+
+    rules_list.nb_rules = nbrules;
+    if ( copy_field_to_guest(list_hnd, &rules_list, nb_rules) )
+        return -EFAULT;
+
+    return 0;
+}
+
+static size_t
+v4v_viptables_check(v4v_addr_t * src, v4v_addr_t * dst)
+{
+    struct list_head *ptr;
+    struct v4v_viptables_rule_node *node;
+    size_t ret = 0; /* Defaulting to ACCEPT */
+
+    read_lock(&viprules_lock);
+
+    list_for_each(ptr, &viprules)
+    {
+        node = list_entry(ptr, struct v4v_viptables_rule_node, list);
+
+        if ( (node->rule.src.domain == V4V_DOMID_ANY ||
+              node->rule.src.domain == src->domain) &&
+             (node->rule.src.port == V4V_PORT_ANY ||
+              node->rule.src.port == src->port) &&
+             (node->rule.dst.domain == V4V_DOMID_ANY ||
+              node->rule.dst.domain == dst->domain) &&
+             (node->rule.dst.port == V4V_PORT_ANY ||
+              node->rule.dst.port == dst->port) )
+        {
+            ret = !node->rule.accept;
+            break;
+        }
+    }
+
+    read_unlock(&viprules_lock);
+    return ret;
+}
+
+/*
+ * Hypercall to do the send
+ */
+static size_t
+v4v_sendv(struct domain *src_d, v4v_addr_t * src_addr,
+          v4v_addr_t * dst_addr, uint32_t proto,
+          internal_v4v_iov_t iovs, size_t niov)
+{
+    struct domain *dst_d;
+    v4v_ring_id_t src_id;
+    struct v4v_ring_info *ring_info;
+    int ret = 0;
+
+    if ( !dst_addr )
+    {
+        v4v_dprintk("!dst_addr, EINVAL\n");
+        return -EINVAL;
+    }
+
+    read_lock(&v4v_lock);
+    if ( !src_d->v4v )
+    {
+        read_unlock(&v4v_lock);
+        v4v_dprintk("!src_d->v4v, EINVAL\n");
+        return -EINVAL;
+    }
+
+    src_id.addr.port = src_addr->port;
+    src_id.addr.domain = src_d->domain_id;
+    src_id.partner = dst_addr->domain;
+
+    dst_d = get_domain_by_id(dst_addr->domain);
+    if ( !dst_d )
+    {
+        read_unlock(&v4v_lock);
+        v4v_dprintk("!dst_d, ECONNREFUSED\n");
+        return -ECONNREFUSED;
+    }
+
+    if ( v4v_viptables_check(src_addr, dst_addr) != 0 )
+    {
+        read_unlock(&v4v_lock);
+        gdprintk(XENLOG_WARNING,
+                 "V4V: VIPTables REJECTED %i:%i -> %i:%i\n",
+                 src_addr->domain, src_addr->port,
+                 dst_addr->domain, dst_addr->port);
+        return -ECONNREFUSED;
+    }
+
+    do
+    {
+        if ( !dst_d->v4v )
+        {
+            v4v_dprintk("!dst_d->v4v, ECONNREFUSED\n");
+            ret = -ECONNREFUSED;
+            break;
+        }
+
+        read_lock(&dst_d->v4v->lock);
+        ring_info =
+            v4v_ring_find_info_by_addr(dst_d, dst_addr, src_addr->domain);
+
+        if ( !ring_info )
+        {
+            ret = -ECONNREFUSED;
+            v4v_dprintk("!ring_info, ECONNREFUSED\n");
+        }
+        else
+        {
+            ssize_t len = v4v_iov_count(iovs, niov);
+
+            if ( len < 0 )
+            {
+                ret = len;
+                break;
+            }
+
+            spin_lock(&ring_info->lock);
+            ret =
+                v4v_ringbuf_insertv(dst_d, ring_info, &src_id, proto, iovs,
+                        niov, len);
+            if ( ret == -EAGAIN )
+            {
+                v4v_dprintk("v4v_ringbuf_insertv failed, EAGAIN\n");
+                /* Schedule a wake up on the event channel when space is there */
+                if ( v4v_pending_requeue(ring_info, src_d->domain_id, len) )
+                {
+                    v4v_dprintk("v4v_pending_requeue failed, ENOMEM\n");
+                    ret = -ENOMEM;
+                }
+            }
+            spin_unlock(&ring_info->lock);
+
+            if ( ret >= 0 )
+            {
+                v4v_signal_domain(dst_d);
+            }
+
+        }
+        read_unlock(&dst_d->v4v->lock);
+
+    }
+    while ( 0 );
+
+    put_domain(dst_d);
+    read_unlock(&v4v_lock);
+    return ret;
+}
+
+static void
+v4v_info(struct domain *d, v4v_info_t *info)
+{
+    read_lock(&d->v4v->lock);
+    info->ring_magic = V4V_RING_MAGIC;
+    info->data_magic = V4V_RING_DATA_MAGIC;
+    info->evtchn = d->v4v->evtchn_port;
+    read_unlock(&d->v4v->lock);
+}
+
+/*
+ * hypercall glue
+ */
+long
+do_v4v_op(int cmd, XEN_GUEST_HANDLE(void) arg1,
+          XEN_GUEST_HANDLE(void) arg2,
+          uint32_t arg3, uint32_t arg4)
+{
+    struct domain *d = current->domain;
+    long rc = -EFAULT;
+
+    v4v_dprintk("->do_v4v_op(%d,%p,%p,%d,%d)\n", cmd,
+                (void *)arg1.p, (void *)arg2.p, (int) arg3, (int) arg4);
+
+    domain_lock(d);
+    switch (cmd)
+    {
+        case V4VOP_register_ring:
+            {
+                XEN_GUEST_HANDLE(v4v_ring_t) ring_hnd =
+                    guest_handle_cast(arg1, v4v_ring_t);
+                XEN_GUEST_HANDLE(v4v_pfn_t) pfn_hnd =
+                    guest_handle_cast(arg2, v4v_pfn_t);
+                uint32_t npage = arg3;
+                if ( unlikely(!guest_handle_okay(ring_hnd, 1)) )
+                    goto out;
+                if ( unlikely(!guest_handle_okay(pfn_hnd, npage)) )
+                    goto out;
+                rc = v4v_ring_add(d, ring_hnd, npage, pfn_hnd);
+                break;
+            }
+        case V4VOP_unregister_ring:
+            {
+                XEN_GUEST_HANDLE(v4v_ring_t) ring_hnd =
+                    guest_handle_cast(arg1, v4v_ring_t);
+                if ( unlikely(!guest_handle_okay(ring_hnd, 1)) )
+                    goto out;
+                rc = v4v_ring_remove(d, ring_hnd);
+                break;
+            }
+        case V4VOP_send:
+            {
+                uint32_t len = arg3;
+                uint32_t protocol = arg4;
+                v4v_iov_t iov;
+                internal_v4v_iov_t iovs;
+                XEN_GUEST_HANDLE(v4v_send_addr_t) addr_hnd =
+                    guest_handle_cast(arg1, v4v_send_addr_t);
+                v4v_send_addr_t addr;
+
+                if ( unlikely(!guest_handle_okay(addr_hnd, 1)) )
+                    goto out;
+                if ( copy_from_guest(&addr, addr_hnd, 1) )
+                    goto out;
+
+                iov.iov_base = (uint64_t)arg2.p; /* FIXME */
+                iov.iov_len = len;
+                iovs.xen_iov = &iov;
+                rc = v4v_sendv(d, &addr.src, &addr.dst, protocol, iovs, 1);
+                break;
+            }
+        case V4VOP_sendv:
+            {
+                internal_v4v_iov_t iovs;
+                uint32_t niov = arg3;
+                uint32_t protocol = arg4;
+                XEN_GUEST_HANDLE(v4v_send_addr_t) addr_hnd =
+                    guest_handle_cast(arg1, v4v_send_addr_t);
+                v4v_send_addr_t addr;
+
+                memset(&iovs, 0, sizeof (iovs));
+                iovs.guest_iov = guest_handle_cast(arg2, v4v_iov_t);
+
+                if ( unlikely(!guest_handle_okay(addr_hnd, 1)) )
+                    goto out;
+                if ( copy_from_guest(&addr, addr_hnd, 1) )
+                    goto out;
+
+                if ( unlikely(!guest_handle_okay(iovs.guest_iov, niov)) )
+                    goto out;
+
+                rc = v4v_sendv(d, &addr.src, &addr.dst, protocol, iovs, niov);
+                break;
+            }
+        case V4VOP_notify:
+            {
+                XEN_GUEST_HANDLE(v4v_ring_data_t) ring_data_hnd =
+                    guest_handle_cast(arg1, v4v_ring_data_t);
+                rc = v4v_notify(d, ring_data_hnd);
+                break;
+            }
+        case V4VOP_viptables_add:
+            {
+                uint32_t position = arg3;
+                XEN_GUEST_HANDLE(v4v_viptables_rule_t) rule_hnd =
+                    guest_handle_cast(arg1, v4v_viptables_rule_t);
+                rc = -EPERM;
+                if ( !IS_PRIV(d) )
+                    goto out;
+
+                write_lock(&viprules_lock);
+                rc = v4v_viptables_add(d, rule_hnd, position);
+                write_unlock(&viprules_lock);
+                break;
+            }
+        case V4VOP_viptables_del:
+            {
+                uint32_t position = arg3;
+                XEN_GUEST_HANDLE(v4v_viptables_rule_t) rule_hnd =
+                    guest_handle_cast(arg1, v4v_viptables_rule_t);
+                rc = -EPERM;
+                if ( !IS_PRIV(d) )
+                    goto out;
+
+                write_lock(&viprules_lock);
+                rc = v4v_viptables_del(d, rule_hnd, position);
+                write_unlock(&viprules_lock);
+                break;
+            }
+        case V4VOP_viptables_list:
+            {
+                XEN_GUEST_HANDLE(v4v_viptables_list_t) rules_list_hnd =
+                    guest_handle_cast(arg1, v4v_viptables_list_t);
+                rc = -EPERM;
+                if ( !IS_PRIV(d) )
+                    goto out;
+
+                rc = -EFAULT;
+                if ( unlikely(!guest_handle_okay(rules_list_hnd, 1)) )
+                    goto out;
+
+                read_lock(&viprules_lock);
+                rc = v4v_viptables_list(d, rules_list_hnd);
+                read_unlock(&viprules_lock);
+                break;
+            }
+        case V4VOP_info:
+            {
+                XEN_GUEST_HANDLE(v4v_info_t) info_hnd =
+                    guest_handle_cast(arg1, v4v_info_t);
+                v4v_info_t info;
+
+                if ( unlikely(!guest_handle_okay(info_hnd, 1)) )
+                    goto out;
+                v4v_info(d, &info);
+                if ( copy_to_guest(info_hnd, &info, 1) )
+                    goto out;
+                rc = 0;
+                break;
+            }
+        default:
+            rc = -ENOSYS;
+            break;
+    }
+out:
+    domain_unlock(d);
+    v4v_dprintk("<-do_v4v_op()=%d\n", (int)rc);
+    return rc;
+}
+
+/*
+ * init
+ */
+
+void
+v4v_destroy(struct domain *d)
+{
+    int i;
+
+    BUG_ON(!d->is_dying);
+    write_lock(&v4v_lock);
+
+    v4v_dprintk("d->v4v=%p\n", d->v4v);
+
+    if ( d->v4v )
+    {
+        for ( i = 0; i < V4V_HTABLE_SIZE; ++i )
+        {
+            struct hlist_node *node, *next;
+            struct v4v_ring_info *ring_info;
+
+            hlist_for_each_entry_safe(ring_info, node,
+                    next, &d->v4v->ring_hash[i],
+                    node)
+            {
+                v4v_ring_remove_info(d, ring_info);
+            }
+        }
+    }
+
+    d->v4v = NULL;
+    write_unlock(&v4v_lock);
+}
+
+int
+v4v_init(struct domain *d)
+{
+    struct v4v_domain *v4v;
+    evtchn_port_t port;
+    struct evtchn *chn;
+    int i;
+    int rc;
+
+    v4v = xmalloc(struct v4v_domain);
+    if ( !v4v )
+        return -ENOMEM;
+
+    rc = evtchn_alloc_unbound_domain(d, &port);
+    if ( rc )
+    {
+        xfree(v4v); /* don't leak the allocation on failure */
+        return rc;
+    }
+
+    chn = evtchn_from_port(d, port);
+    chn->u.unbound.remote_domid = d->domain_id;
+
+    rwlock_init(&v4v->lock);
+
+    v4v->evtchn_port = port;
+    for ( i = 0; i < V4V_HTABLE_SIZE; ++i )
+        INIT_HLIST_HEAD(&v4v->ring_hash[i]);
+
+    write_lock(&v4v_lock);
+    d->v4v = v4v;
+    write_unlock(&v4v_lock);
+
+    return 0;
+}
+
+
+/*
+ * debug
+ */
+
+static void
+dump_domain_ring(struct domain *d, struct v4v_ring_info *ring_info)
+{
+    uint32_t rx_ptr;
+
+    printk(KERN_ERR "  ring: domid=%d port=0x%08x partner=%d npage=%d\n",
+           (int)d->domain_id, (int)ring_info->id.addr.port,
+           (int)ring_info->id.partner, (int)ring_info->npage);
+
+    if ( v4v_ringbuf_get_rx_ptr(d, ring_info, &rx_ptr) )
+    {
+        printk(KERN_ERR "   Failed to read rx_ptr\n");
+        return;
+    }
+
+    printk(KERN_ERR "   tx_ptr=%d rx_ptr=%d len=%d\n",
+           (int)ring_info->tx_ptr, (int)rx_ptr, (int)ring_info->len);
+}
+
+static void
+dump_domain(struct domain *d)
+{
+    int i;
+
+    printk(KERN_ERR " domain %d:\n", (int)d->domain_id);
+
+    read_lock(&d->v4v->lock);
+
+    for ( i = 0; i < V4V_HTABLE_SIZE; ++i )
+    {
+        struct hlist_node *node;
+        struct v4v_ring_info *ring_info;
+
+        hlist_for_each_entry(ring_info, node, &d->v4v->ring_hash[i], node)
+            dump_domain_ring(d, ring_info);
+    }
+
+    printk(KERN_ERR "  event channel: %d\n",  d->v4v->evtchn_port);
+    read_unlock(&d->v4v->lock);
+
+    printk(KERN_ERR "\n");
+    v4v_signal_domain(d);
+}
+
+static void
+dump_state(unsigned char key)
+{
+    struct domain *d;
+
+    printk(KERN_ERR "\n\nV4V:\n");
+    read_lock(&v4v_lock);
+
+    rcu_read_lock(&domlist_read_lock);
+
+    for_each_domain(d)
+        dump_domain(d);
+
+    rcu_read_unlock(&domlist_read_lock);
+
+    read_unlock(&v4v_lock);
+}
+
+struct keyhandler v4v_info_keyhandler =
+{
+    .diagnostic = 1,
+    .u.fn = dump_state,
+    .desc = "dump v4v state and interrupt domains"
+};
+
+static int __init
+setup_dump_rings(void)
+{
+    register_keyhandler('4', &v4v_info_keyhandler);
+    return 0;
+}
+
+__initcall(setup_dump_rings);
+
+
+/*
+ * Local variables:
+ * mode: C
+ * c-set-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/public/v4v.h b/xen/include/public/v4v.h
new file mode 100644
index 0000000..1f1c156
--- /dev/null
+++ b/xen/include/public/v4v.h
@@ -0,0 +1,291 @@
+/******************************************************************************
+ * V4V
+ *
+ * Version 2 of v2v (Virtual-to-Virtual)
+ *
+ * Copyright (c) 2010, Citrix Systems
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
+ */
+
+#ifndef __XEN_PUBLIC_V4V_H__
+#define __XEN_PUBLIC_V4V_H__
+
+#include "xen.h"
+#include "event_channel.h"
+
+/*
+ * Structure definitions
+ */
+
+#define V4V_RING_MAGIC          0xA822f72bb0b9d8cc
+#define V4V_RING_DATA_MAGIC	0x45fe852220b801d4
+
+#define V4V_PROTO_DGRAM		0x3c2c1db8
+#define V4V_PROTO_STREAM 	0x70f6a8e5
+
+#define V4V_DOMID_ANY           0x7fffU
+#define V4V_PORT_ANY            0
+
+typedef struct v4v_iov
+{
+    uint64_t iov_base;
+    uint64_t iov_len;
+} v4v_iov_t;
+
+typedef struct v4v_addr
+{
+    uint32_t port;
+    domid_t domain;
+    uint16_t pad;
+} v4v_addr_t;
+
+typedef struct v4v_ring_id
+{
+    v4v_addr_t addr;
+    domid_t partner;
+    uint16_t pad;
+} v4v_ring_id_t;
+
+typedef uint64_t v4v_pfn_t;
+
+typedef struct
+{
+    v4v_addr_t src;
+    v4v_addr_t dst;
+} v4v_send_addr_t;
+
+/*
+ * v4v_ring
+ * id: xen only looks at this during register/unregister
+ *     and will fill in id.addr.domain
+ * rx_ptr: rx pointer, modified by domain
+ * tx_ptr: tx pointer, modified by xen
+ *
+ */
+struct v4v_ring
+{
+    uint64_t magic;
+    v4v_ring_id_t id;
+    uint32_t len;
+    uint32_t rx_ptr;
+    uint32_t tx_ptr;
+    uint8_t reserved[32];
+    uint8_t ring[0];
+};
+typedef struct v4v_ring v4v_ring_t;
+
+#define V4V_RING_DATA_F_EMPTY       (1U << 0) /* Ring is empty */
+#define V4V_RING_DATA_F_EXISTS      (1U << 1) /* Ring exists */
+#define V4V_RING_DATA_F_PENDING     (1U << 2) /* Pending interrupt exists - do not
+                                               * rely on this field - for
+                                               * profiling only */
+#define V4V_RING_DATA_F_SUFFICIENT  (1U << 3) /* Sufficient space to queue
+                                               * space_required bytes exists */
+
+typedef struct v4v_ring_data_ent
+{
+    v4v_addr_t ring;
+    uint16_t flags;
+    uint16_t pad;
+    uint32_t space_required;
+    uint32_t max_message_size;
+} v4v_ring_data_ent_t;
+
+typedef struct v4v_ring_data
+{
+    uint64_t magic;
+    uint32_t nent;
+    uint32_t pad;
+    uint64_t reserved[4];
+    v4v_ring_data_ent_t data[0];
+} v4v_ring_data_t;
+
+struct v4v_info
+{
+    uint64_t ring_magic;
+    uint64_t data_magic;
+    evtchn_port_t evtchn;
+};
+typedef struct v4v_info v4v_info_t;
+
+#define V4V_ROUNDUP(a) (((a) + 0xf) & ~0xf)
+/*
+ * Messages on the ring are padded to 128 bits.
+ * Len here refers to the exact length of the data, not including the
+ * 128 bit header. The message uses
+ * ((len + 0xf) & ~0xf) + sizeof(v4v_ring_message_header) bytes.
+ */
+
+#define V4V_SHF_SYN		(1 << 0)
+#define V4V_SHF_ACK		(1 << 1)
+#define V4V_SHF_RST		(1 << 2)
+
+#define V4V_SHF_PING		(1 << 8)
+#define V4V_SHF_PONG		(1 << 9)
+
+struct v4v_stream_header
+{
+    uint32_t flags;
+    uint32_t conid;
+};
+
+struct v4v_ring_message_header
+{
+    uint32_t len;
+    uint32_t pad0;
+    v4v_addr_t source;
+    uint32_t protocol;
+    uint32_t pad1;
+    uint8_t data[0];
+};
+
+typedef struct v4v_viptables_rule
+{
+    v4v_addr_t src;
+    v4v_addr_t dst;
+    uint32_t accept;
+    uint32_t pad;
+} v4v_viptables_rule_t;
+
+typedef struct v4v_viptables_list
+{
+    uint32_t start_rule;
+    uint32_t nb_rules;
+    struct v4v_viptables_rule rules[0];
+} v4v_viptables_list_t;
+
+/*
+ * HYPERCALLS
+ */
+
+#define V4VOP_register_ring 	1
+/*
+ * Registers a ring with Xen. If a ring with the same v4v_ring_id exists,
+ * this ring takes its place; registration will not change tx_ptr
+ * unless it is invalid.
+ *
+ * v4v_hypercall(V4VOP_register_ring,
+ *               v4v_ring, XEN_GUEST_HANDLE(v4v_pfn),
+ *               npage, 0)
+ */
+
+
+#define V4VOP_unregister_ring 	2
+/*
+ * Unregister a ring.
+ *
+ * v4v_hypercall(V4VOP_unregister_ring, v4v_ring, NULL, 0, 0)
+ */
+
+#define V4VOP_send 		3
+/*
+ * Sends len bytes of buf to dst, giving src as the source address (xen will
+ * ignore src->domain and use the sending domain in the actual message). Xen
+ * first looks for a ring with id.addr==dst and id.partner==sending_domain;
+ * if that fails it looks for id.addr==dst and id.partner==V4V_DOMID_ANY.
+ * protocol is the 32 bit protocol number used for the message,
+ * most likely V4V_PROTO_DGRAM or STREAM. If insufficient space exists,
+ * the call returns -EAGAIN and xen will raise the V4V interrupt when
+ * sufficient space becomes available.
+ *
+ * v4v_hypercall(V4VOP_send,
+ *               v4v_send_addr_t addr,
+ *               void* buf,
+ *               uint32_t len,
+ *               uint32_t protocol)
+ */
+
+
+#define V4VOP_notify 		4
+/* Asks xen for information about other rings in the system.
+ *
+ * ent->ring is the v4v_addr_t of the ring you want information on;
+ * the same matching rules are used as for V4VOP_send.
+ *
+ * ent->space_required: if this field is not null, xen will check
+ * that there is space in the destination ring for this many bytes
+ * of payload. If there is, it will set V4V_RING_DATA_F_SUFFICIENT
+ * and CANCEL any pending interrupt for that ent->ring; if insufficient
+ * space is available, it will schedule an interrupt and the flag will
+ * not be set.
+ *
+ * The flags are set by xen when notify replies:
+ * V4V_RING_DATA_F_EMPTY	ring is empty
+ * V4V_RING_DATA_F_PENDING	interrupt is pending - don't rely on this
+ * V4V_RING_DATA_F_SUFFICIENT	sufficient space for space_required is there
+ * V4V_RING_DATA_F_EXISTS	ring exists
+ *
+ * v4v_hypercall(V4VOP_notify,
+ *               XEN_GUEST_HANDLE(v4v_ring_data_ent) ent,
+ *               NULL, nent, 0)
+ */
+
+#define V4VOP_sendv		5
+/*
+ * Identical to V4VOP_send except rather than buf and len it takes
+ * an array of v4v_iov and a length of the array.
+ *
+ * v4v_hypercall(V4VOP_sendv,
+ *               v4v_send_addr_t addr,
+ *               v4v_iov iov,
+ *               uint32_t niov,
+ *               uint32_t protocol)
+ */
+
+#define V4VOP_viptables_add     6
+/*
+ * Insert a filtering rule after a given position.
+ *
+ * v4v_hypercall(V4VOP_viptables_add,
+ *               v4v_viptables_rule_t rule,
+ *               NULL,
+ *               uint32_t position, 0)
+ */
+
+#define V4VOP_viptables_del     7
+/*
+ * Delete the filtering rule at a given position, or the rule
+ * that matches "rule".
+ *
+ * v4v_hypercall(V4VOP_viptables_del,
+ *               v4v_viptables_rule_t rule,
+ *               NULL,
+ *               uint32_t position, 0)
+ */
+
+#define V4VOP_viptables_list    8
+/*
+ * List the filtering rules.
+ *
+ * v4v_hypercall(V4VOP_viptables_list,
+ *               v4v_viptables_list_t list,
+ *               NULL, 0, 0)
+ */
+
+#define V4VOP_info              9
+/*
+ * v4v_hypercall(V4VOP_info,
+ *               XEN_GUEST_HANDLE(v4v_info_t) info,
+ *               NULL, 0, 0)
+ */
+
+#endif /* __XEN_PUBLIC_V4V_H__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-set-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/public/xen.h b/xen/include/public/xen.h
index b19425b..868d119 100644
--- a/xen/include/public/xen.h
+++ b/xen/include/public/xen.h
@@ -99,7 +99,7 @@ DEFINE_XEN_GUEST_HANDLE(xen_pfn_t);
 #define __HYPERVISOR_domctl               36
 #define __HYPERVISOR_kexec_op             37
 #define __HYPERVISOR_tmem_op              38
-#define __HYPERVISOR_xc_reserved_op       39 /* reserved for XenClient */
+#define __HYPERVISOR_v4v_op               39
 
 /* Architecture-specific hypercall definitions. */
 #define __HYPERVISOR_arch_0               48
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 53804c8..296de52 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -23,6 +23,7 @@
 #include <public/sysctl.h>
 #include <public/vcpu.h>
 #include <public/mem_event.h>
+#include <xen/v4v.h>
 
 #ifdef CONFIG_COMPAT
 #include <compat/vcpu.h>
@@ -350,6 +351,9 @@ struct domain
     nodemask_t node_affinity;
     unsigned int last_alloc_node;
     spinlock_t node_affinity_lock;
+
+    /* v4v */
+    struct v4v_domain *v4v;
 };
 
 struct domain_setup_info
diff --git a/xen/include/xen/v4v.h b/xen/include/xen/v4v.h
new file mode 100644
index 0000000..cba5ea7
--- /dev/null
+++ b/xen/include/xen/v4v.h
@@ -0,0 +1,134 @@
+/******************************************************************************
+ * V4V
+ *
+ * Version 2 of v2v (Virtual-to-Virtual)
+ *
+ * Copyright (c) 2010, Citrix Systems
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
+ */
+
+#ifndef __V4V_PRIVATE_H__
+#define __V4V_PRIVATE_H__
+
+#include <xen/config.h>
+#include <xen/types.h>
+#include <xen/spinlock.h>
+#include <xen/smp.h>
+#include <xen/shared.h>
+#include <xen/list.h>
+#include <public/v4v.h>
+
+#define V4V_HTABLE_SIZE 32
+
+/*
+ * Handlers
+ */
+
+DEFINE_XEN_GUEST_HANDLE (v4v_iov_t);
+DEFINE_XEN_GUEST_HANDLE (v4v_addr_t);
+DEFINE_XEN_GUEST_HANDLE (v4v_send_addr_t);
+DEFINE_XEN_GUEST_HANDLE (v4v_pfn_t);
+DEFINE_XEN_GUEST_HANDLE (v4v_ring_t);
+DEFINE_XEN_GUEST_HANDLE (v4v_ring_data_ent_t);
+DEFINE_XEN_GUEST_HANDLE (v4v_ring_data_t);
+DEFINE_XEN_GUEST_HANDLE (v4v_info_t);
+
+DEFINE_XEN_GUEST_HANDLE (v4v_viptables_rule_t);
+DEFINE_XEN_GUEST_HANDLE (v4v_viptables_list_t);
+
+/*
+ * Helper functions
+ */
+
+static inline uint16_t
+v4v_hash_fn (v4v_ring_id_t *id)
+{
+    uint16_t ret;
+    ret = (uint16_t) (id->addr.port >> 16);
+    ret ^= (uint16_t) id->addr.port;
+    ret ^= id->addr.domain;
+    ret ^= id->partner;
+
+    ret &= (V4V_HTABLE_SIZE-1);
+
+    return ret;
+}
+
+struct v4v_pending_ent
+{
+    struct hlist_node node;
+    domid_t id;
+    uint32_t len;
+};
+
+
+struct v4v_ring_info
+{
+    /* next node in the hash, protected by L2  */
+    struct hlist_node node;
+    /* this ring's id, protected by L2 */
+    v4v_ring_id_t id;
+    /* L3 */
+    spinlock_t lock;
+    /* cached length of the ring (from ring->len), protected by L3 */
+    uint32_t len;
+    uint32_t npage;
+    /* cached tx pointer location, protected by L3 */
+    uint32_t tx_ptr;
+    /* guest ring, protected by L3 */
+    XEN_GUEST_HANDLE(v4v_ring_t) ring;
+    /* mapped ring pages protected by L3*/
+    uint8_t **mfn_mapping;
+    /* list of mfns of guest ring */
+    mfn_t *mfns;
+    /* list of struct v4v_pending_ent for this ring, L3 */
+    struct hlist_head pending;
+};
+
+/*
+ * The value of the v4v element in a struct domain is
+ * protected by the global lock L1
+ */
+struct v4v_domain
+{
+    /* L2 */
+    rwlock_t lock;
+    /* event channel */
+    evtchn_port_t evtchn_port;
+    /* protected by L2 */
+    struct hlist_head ring_hash[V4V_HTABLE_SIZE];
+};
+
+typedef struct v4v_viptables_rule_node
+{
+    struct list_head list;
+    v4v_viptables_rule_t rule;
+} v4v_viptables_rule_node_t;
+
+void v4v_destroy(struct domain *d);
+int v4v_init(struct domain *d);
+long do_v4v_op (int cmd,
+                XEN_GUEST_HANDLE (void) arg1,
+                XEN_GUEST_HANDLE (void) arg2,
+                uint32_t arg3,
+                uint32_t arg4);
+
+#endif /* __V4V_PRIVATE_H__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-set-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/xen/v4v_utils.h b/xen/include/xen/v4v_utils.h
new file mode 100644
index 0000000..67b2d77
--- /dev/null
+++ b/xen/include/xen/v4v_utils.h
@@ -0,0 +1,276 @@
+/******************************************************************************
+ * V4V
+ *
+ * Version 2 of v2v (Virtual-to-Virtual)
+ *
+ * Copyright (c) 2010, Citrix Systems
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
+ */
+
+#ifndef __V4V_UTILS_H__
+# define __V4V_UTILS_H__
+
+/* Compiler specific hacks */
+#if defined(__GNUC__)
+# define V4V_UNUSED __attribute__ ((unused))
+# ifndef __STRICT_ANSI__
+#  define V4V_INLINE inline
+# else
+#  define V4V_INLINE
+# endif
+#else /* !__GNUC__ */
+# define V4V_UNUSED
+# define V4V_INLINE
+#endif
+
+
+/*
+ * Utility functions
+ */
+
+static V4V_INLINE uint32_t
+v4v_ring_bytes_to_read (volatile struct v4v_ring *r)
+{
+    int32_t ret;
+    ret = r->tx_ptr - r->rx_ptr;
+    if (ret >= 0)
+        return ret;
+    return (uint32_t) (r->len + ret);
+}
+
+
+/*
+ * Copy at most t bytes of the next message in the ring, into the buffer
+ * at _buf, setting from and protocol if they are not NULL, returns
+ * the actual length of the message, or -1 if there is nothing to read
+ */
+V4V_UNUSED static V4V_INLINE ssize_t
+v4v_copy_out (struct v4v_ring *r, struct v4v_addr *from, uint32_t * protocol,
+              void *_buf, size_t t, int consume)
+{
+    volatile struct v4v_ring_message_header *mh;
+    /* unnecessary cast from void * required by MSVC compiler */
+    uint8_t *buf = (uint8_t *) _buf;
+    uint32_t btr = v4v_ring_bytes_to_read (r);
+    uint32_t rxp = r->rx_ptr;
+    uint32_t bte;
+    uint32_t len;
+    ssize_t ret;
+
+
+    if (btr < sizeof (*mh))
+        return -1;
+
+    /*
+     * Because the message_header is 128 bits long and the ring is 128 bit
+     * aligned, we're guaranteed never to wrap
+     */
+    mh = (volatile struct v4v_ring_message_header *) &r->ring[r->rx_ptr];
+
+    len = mh->len;
+    if (btr < len)
+        return -1;
+
+#if defined(__GNUC__)
+    if (from)
+        *from = mh->source;
+#else
+    /* MSVC can't do the above */
+    if (from)
+	memcpy((void *) from, (void *) &(mh->source), sizeof(struct v4v_addr));
+#endif
+
+    if (protocol)
+        *protocol = mh->protocol;
+
+    rxp += sizeof (*mh);
+    if (rxp == r->len)
+        rxp = 0;
+    len -= sizeof (*mh);
+    ret = len;
+
+    bte = r->len - rxp;
+
+    if (bte < len)
+    {
+        if (t < bte)
+        {
+            if (buf)
+            {
+                memcpy (buf, (void *) &r->ring[rxp], t);
+                buf += t;
+            }
+
+            rxp = 0;
+            len -= bte;
+            t = 0;
+        }
+        else
+        {
+            if (buf)
+            {
+                memcpy (buf, (void *) &r->ring[rxp], bte);
+                buf += bte;
+            }
+            rxp = 0;
+            len -= bte;
+            t -= bte;
+        }
+    }
+
+    if (buf && t)
+        memcpy (buf, (void *) &r->ring[rxp], (t < len) ? t : len);
+
+
+    rxp += V4V_ROUNDUP (len);
+    if (rxp == r->len)
+        rxp = 0;
+
+    mb ();
+
+    if (consume)
+        r->rx_ptr = rxp;
+
+    return ret;
+}
+
+static V4V_INLINE void
+v4v_memcpy_skip (void *_dst, const void *_src, size_t len, size_t *skip)
+{
+    const uint8_t *src = (const uint8_t *) _src;
+    uint8_t *dst = (uint8_t *) _dst;
+
+    if (!*skip)
+    {
+        memcpy (dst, src, len);
+        return;
+    }
+
+    if (*skip >= len)
+    {
+        *skip -= len;
+        return;
+    }
+
+    src += *skip;
+    dst += *skip;
+    len -= *skip;
+    *skip = 0;
+
+    memcpy (dst, src, len);
+}
+
+/*
+ * Copy at most t bytes of the next message in the ring, into the buffer
+ * at _buf, skipping skip bytes, setting from and protocol if they are not
+ * NULL, returns the actual length of the message, or -1 if there is
+ * nothing to read
+ */
+static ssize_t
+v4v_copy_out_offset (struct v4v_ring *r, struct v4v_addr *from,
+                     uint32_t * protocol, void *_buf, size_t t, int cons=
ume,
+                     size_t skip) V4V_UNUSED;
+
+V4V_INLINE static ssize_t
+v4v_copy_out_offset (struct v4v_ring *r, struct v4v_addr *from,
+                     uint32_t * protocol, void *_buf, size_t t, int consume,
+                     size_t skip)
+{
+    volatile struct v4v_ring_message_header *mh;
+    /* unnecessary cast from void * required by MSVC compiler */
+    uint8_t *buf = (uint8_t *) _buf;
+    uint32_t btr = v4v_ring_bytes_to_read (r);
+    uint32_t rxp = r->rx_ptr;
+    uint32_t bte;
+    uint32_t len;
+    ssize_t ret;
+
+    buf -= skip;
+
+    if (btr < sizeof (*mh))
+        return -1;
+
+    /*
+     * Because the message_header is 128 bits long and the ring is 128 bit
+     * aligned, we're guaranteed never to wrap
+     */
+    mh = (volatile struct v4v_ring_message_header *) &r->ring[r->rx_ptr];
+
+    len = mh->len;
+    if (btr < len)
+        return -1;
+
+#if defined(__GNUC__)
+    if (from)
+        *from = mh->source;
+#else
+    /* MSVC can't do the above */
+    if (from)
+	memcpy((void *) from, (void *) &(mh->source), sizeof(struct v4v_addr));
+#endif
+
+    if (protocol)
+        *protocol = mh->protocol;
+
+    rxp += sizeof (*mh);
+    if (rxp == r->len)
+        rxp = 0;
+    len -= sizeof (*mh);
+    ret = len;
+
+    bte = r->len - rxp;
+
+    if (bte < len)
+    {
+        if (t < bte)
+        {
+            if (buf)
+            {
+                v4v_memcpy_skip (buf, (void *) &r->ring[rxp], t, &skip);
+                buf += t;
+            }
+
+            rxp = 0;
+            len -= bte;
+            t = 0;
+        }
+        else
+        {
+            if (buf)
+            {
+                v4v_memcpy_skip (buf, (void *) &r->ring[rxp], bte,
+                        &skip);
+                buf += bte;
+            }
+            rxp = 0;
+            len -= bte;
+            t -= bte;
+        }
+    }
+
+    if (buf && t)
+        v4v_memcpy_skip (buf, (void *) &r->ring[rxp], (t < len) ? t : len,
+                         &skip);
+
+
+    rxp += V4V_ROUNDUP (len);
+    if (rxp == r->len)
+        rxp = 0;
+
+    mb ();
+
+    if (consume)
+        r->rx_ptr = rxp;
+
+    return ret;
+}
+
+#endif /* !__V4V_UTILS_H__ */

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	by mx.google.com with ESMTPS id o4sm8654356oef.11.2012.08.03.13.44.31
	(version=TLSv1/SSLv3 cipher=OTHER);
	Fri, 03 Aug 2012 13:44:32 -0700 (PDT)
From: Anthony Liguori <anthony@codemonkey.ws>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	qemu-devel@nongnu.org
In-Reply-To: <alpine.DEB.2.02.1208011447240.4645@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208011447240.4645@kaball.uk.xensource.com>
User-Agent: Notmuch/0.13.2+93~ged93d79 (http://notmuchmail.org) Emacs/23.3.1
	(x86_64-pc-linux-gnu)
Date: Fri, 03 Aug 2012 15:44:30 -0500
Message-ID: <87wr1fu2mp.fsf@codemonkey.ws>
MIME-Version: 1.0
X-Gm-Message-State: ALoCoQlR2SupnNyYU22iTpPrPT74mJ2QkIXrwkWM1F3ZU66ofLyuqBflw91x71Cm2EgjcEQTHUJp
Cc: Anthony.Perard@citrix.com, xen-devel@lists.xensource.com,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PULL] Xen fixes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Stefano Stabellini <stefano.stabellini@eu.citrix.com> writes:

> Hi Anthony,
> please pull a couple of simple Xen compilation fixes from:
>
> git://xenbits.xen.org/people/sstabellini/qemu-dm.git xen-fixes-20120801
>
> Anthony PERARD (1):
>       configure: Fix xen probe with Xen 4.2 and later
>
> Stefano Stabellini (1):
>       fix Xen compilation

Pulled. Thanks.

Regards,

Anthony Liguori

>
>  configure   |    1 -
>  hw/xen_pt.c |    4 +---
>  2 files changed, 1 insertions(+), 4 deletions(-)
>
> Cheers,
>
> Stefano

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 20:52:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 20:52:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxOqt-0005YH-BC; Fri, 03 Aug 2012 20:52:07 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <msw@amazon.com>) id 1SxOqr-0005YB-Q8
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 20:52:06 +0000
Received: from [85.158.138.51:32821] by server-1.bemta-3.messagelabs.com id
	35/18-31934-4F93C105; Fri, 03 Aug 2012 20:52:04 +0000
X-Env-Sender: msw@amazon.com
X-Msg-Ref: server-5.tower-174.messagelabs.com!1344027121!30405293!1
X-Originating-IP: [72.21.196.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNzIuMjEuMTk2LjI1ID0+IDEwMjkxMQ==\n,
	ML_RADAR_SPEW_LINKS_8, spamassassin: ,
	surbl: (ASYNC_NO) c3VyYmxfcmVjaGVja19kZWxheTogMCAoYWJhbmRv
	bmVkOiBnbWFpbGJsb2cuYmxvZ3Nwb3QuY29t\nKQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 616 invoked from network); 3 Aug 2012 20:52:02 -0000
Received: from smtp-fw-2101.amazon.com (HELO smtp-fw-2101.amazon.com)
	(72.21.196.25)
	by server-5.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 20:52:02 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=amazon.com; i=msw@amazon.com; q=dns/txt; s=rte02;
	t=1344027122; x=1375563122;
	h=date:from:to:cc:subject:message-id:references:
	mime-version:in-reply-to;
	bh=09V2FDNo+tt8RvM0pVIHit9xqQ33W8qP8rLGY70BjjU=;
	b=QUsSJxxB9FTpWdoKl5WzR4LLmxYl01e2xOOBz16MAZXY7byYlfLpeQbI
	q8/5NeF9HavSC5Zq+uESXYZNcwmy2g==;
X-IronPort-AV: E=Sophos;i="4.77,708,1336348800"; d="scan'208";a="418479008"
Received: from smtp-in-1101.vdc.amazon.com ([10.146.54.37])
	by smtp-border-fw-out-2101.iad2.amazon.com with
	ESMTP/TLS/DHE-RSA-AES256-SHA; 03 Aug 2012 20:51:54 +0000
Received: from ex10-hub-31005.ant.amazon.com (ex10-hub-31005.sea31.amazon.com
	[10.185.176.12])
	by smtp-in-1101.vdc.amazon.com (8.13.8/8.13.8) with ESMTP id
	q73KprEj022077
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=FAIL);
	Fri, 3 Aug 2012 20:51:54 GMT
Received: from US-SEA-R8XVZTX (10.224.80.38) by ex10-hub-31005.ant.amazon.com
	(10.185.176.12) with Microsoft SMTP Server id 14.2.247.3;
	Fri, 3 Aug 2012 13:51:46 -0700
Received: by US-SEA-R8XVZTX (sSMTP sendmail emulation); Fri, 03 Aug 2012
	13:51:47 -0700
Date: Fri, 3 Aug 2012 13:51:46 -0700
From: Matt Wilson <msw@amazon.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Message-ID: <20120803205146.GA6268@US-SEA-R8XVZTX>
References: <20120802211157.GG8228@US-SEA-R8XVZTX>
	<20507.58318.416753.917851@mariner.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20507.58318.416753.917851@mariner.uk.xensource.com>
User-Agent: Mutt/1.5.20 (2009-12-10)
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	Lars Kurth <lars.kurth@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] lists.xen.org Mailman configuration and DKIM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Aug 03, 2012 at 07:44:30AM -0700, Ian Jackson wrote:
> Matt Wilson writes ("[Xen-devel] lists.xen.org Mailman configuration and DKIM"):
> > Several folks have let me know that my messages sent via lists.xen.org
> > are marked as spam / spoofed, especially when using Gmail to receive
> > Xen mail. I believe this is because outbound Amazon email contains a
> > DKIM signature. When Mailman modifies my message and re-sends it, the
> > DKIM signature is invalidated [1].
> > 
> > To work around this, Mailman 2.1.10 and later contain a configuration
> > variable called "REMOVE_DKIM_HEADERS" [2]. Perhaps if this were turned
> > on we'd work around the problem.
> ...
> > [1] http://wiki.list.org/display/DEV/DKIM
> > [2] https://bugs.launchpad.net/mailman/+bug/557493
> 
> Having checked RFC4871 I think it is clear that according to the
> standards
>   - Mailman SHOULD NOT [1] strip DKIM-Signature
>   - No-one should treat a message with an invalid DKIM signature
>     differently from a message with no DKIM signature at all [2]
> 
> [1] 4871 says in s3.5 that DKIM-Signature SHOULD be treated the same
> way as a trace header (ie a Received), so removing it would be a
> violation of that SHOULD not necessarily a violation of the MUST NOT
> mess with Received headers.
> 
> [2] RFC4871 6.1:
>    A verifier SHOULD NOT treat a message that has one or more bad
>    signatures and no good signatures differently from a message with
>    no signature at all; such treatment is a matter of local policy and
>    is beyond the scope of this document.

> I think it would be better if you would do one of:
>   (a)  Get Gmail fixed to comply with RFC4871 6.1;

I agree that the Gmail implementation is inconvenient, but I do not
think that they are out of compliance with RFC 4871 6.1, given the
RFC 2119 definition of "SHOULD NOT". I should also mention that I'm
not confident that stripping DKIM headers will resolve the problem.
In fact, Gmail marks messages sent from ebay.com and paypal.com that
do not pass DKIM validation as phishing [1][2][3]. I do not know if
messages from amazon.com are handled similarly.

>   (b)  Get your correspondents to use a non-broken email host;

Lars, George - is that an option?

>   (c)  Get the DKIM spec changed or clarified;

I think that RFC 4871 is pretty clear in the intent, but leaves room
for interpretation via SHOULD / SHOULD NOT.

>   (d)  Stop putting these abused things in your email headers.

Obviously this isn't going to happen. The amazon.com domain is a
popular target for spammers and phishers, and providing DKIM headers
may help prevent phishing attacks.

> That would be better than asking lists.xen.org to start violating the
> specified protocol.  Now of course a SHOULD is not an absolute
> requirement.  Perhaps mailing lists are a special case somehow; but if
> so I would expect this to be addressed in the relevant standards
> documents.  I don't see any particular reason to think that
> lists.xen.org is somehow unusual.

Ultimately I think that Mailman should verify DKIM signatures, provide
a new signature for the modified message (or have the outbound MTA do
the signing), and retain the original DKIM signature as a trace. I
believe that this is in line with the recommendations for intermediary
email handlers like Mailman in RFC 5863 [4]. Of course, I don't know
if Gmail will rework their implementation to ignore the invalid
signature. At least one Mailman user reported success simply by adding
a new signature and not stripping any headers [5].

If a test of removing DKIM headers to see if it helps with delivery to
Gmail is off the table, then perhaps configuring Mailman in a way that
doesn't break DKIM signatures would be an option? Amazon's signed
headers include date, from, to, cc, subject, message-id, and
mime-version. If the subject manipulation of adding [Xen-devel] were
removed, the signature would likely still be valid.

Personally, I think that stripping DKIM headers as a short-term
workaround is less objectionable.

Matt

[1] http://gmailblog.blogspot.com/2008/07/fighting-phishing-with-ebay-and-paypal.html
[2] https://support.google.com/mail/bin/answer.py?hl=en&answer=105760
[3] https://support.google.com/mail/bin/answer.py?hl=en&answer=175365
[4] http://tools.ietf.org/html/rfc5863#page-25
[5] http://mail.python.org/pipermail/mailman-users/2011-October/072304.html

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 21:26:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 21:26:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxPOJ-0005th-6d; Fri, 03 Aug 2012 21:26:39 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1SxPOH-0005tc-N9
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 21:26:37 +0000
Received: from [85.158.138.51:29960] by server-10.bemta-3.messagelabs.com id
	54/B1-21993-C024C105; Fri, 03 Aug 2012 21:26:36 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-6.tower-174.messagelabs.com!1344029196!22289844!1
X-Originating-IP: [74.125.82.51]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29168 invoked from network); 3 Aug 2012 21:26:36 -0000
Received: from mail-wg0-f51.google.com (HELO mail-wg0-f51.google.com)
	(74.125.82.51)
	by server-6.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 21:26:36 -0000
Received: by wgbed3 with SMTP id ed3so731475wgb.32
	for <xen-devel@lists.xen.org>; Fri, 03 Aug 2012 14:26:36 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:user-agent:date:subject:from:to:cc:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=7+L7iDZlqPPvfC72UnVoKugJUTn4B6FBATkRK0UWmCE=;
	b=GgIyW7m17jXBKNpo8jcX4GNd4MYV5X16ED+3HmUIl7ZJeIUTxNxVAeZqfDN+Bduh/u
	j44WndBjEtpV7f2sKoqqTyY1dFvSt/hdf6uCzNfydv6y0Ld9z1NXp3yKdNywiEbj9NMb
	qOS/yx3ucioyFeRbN1VFkkz7q0qeyy0luptOo7lZlLq5LrE6S+uo3vvTTueTeC3XyBez
	FuzPoqfNaqguu7mx3Xpa+sWCnoVaB4kf76n5goR33vofdtqE6+qoKpzvbSBZpW7bHUxo
	SSBMFQXoyTgbKUCOc2iWFafoHgYXtYBojadxVR7gdUswNMp+LGeMhVJYHfHTgA22M5pB
	UtLg==
Received: by 10.180.89.65 with SMTP id bm1mr18756159wib.1.1344029196102;
	Fri, 03 Aug 2012 14:26:36 -0700 (PDT)
Received: from [192.168.1.3] (host86-178-46-238.range86-178.btcentralplus.com.
	[86.178.46.238])
	by mx.google.com with ESMTPS id cl8sm41811482wib.10.2012.08.03.14.26.34
	(version=SSLv3 cipher=OTHER); Fri, 03 Aug 2012 14:26:35 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.33.0.120411
Date: Fri, 03 Aug 2012 22:26:29 +0100
From: Keir Fraser <keir@xen.org>
To: Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	xen-devel <xen-devel@lists.xen.org>
Message-ID: <CC420095.47B75%keir@xen.org>
Thread-Topic: [Xen-devel] Multicall result missing sign extension in Xen or
	Linux
Thread-Index: Ac1xvqUGha5Adwpvl0iBvvSGCuu10g==
In-Reply-To: <501C1F14.9000505@tycho.nsa.gov>
Mime-version: 1.0
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] Multicall result missing sign extension in Xen or
 Linux
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 03/08/2012 19:57, "Daniel De Graaf" <dgdegra@tycho.nsa.gov> wrote:

> While trying to figure out why a failing component of a multicall did not
> properly return its result, I discovered that multicall results are not
> sign-extended when placed in the unsigned long result field. For hypercalls
> such as do_mmu_update which return a (signed) int, this results in Linux
> incorrectly thinking the hypercall succeeded when it has actually failed
> since arch/x86/xen/multicalls.c uses a signed long for "result" and checks
> (b->entries[i].result < 0).
> 
> Is this a bug in Xen (using the wrong return type for do_mmu_op and other
> hypercalls) or in Linux (assuming all returns are signed longs)? One or the
> other needs to be changed, because the current setup is silently hiding
> failed memory mapping operations.

I think this is a Xen bug, and we should update all hypercalls to explicitly
return a long. Nice and straightforward.
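[Editorial note: the zero- vs sign-extension behaviour under discussion can be illustrated as below. This is a sketch assuming an LP64 platform (64-bit `long`); the stub and helper names are hypothetical, not actual Xen or Linux code.]

```c
/* Stand-in for a hypercall that returns a (signed) int error code,
 * e.g. -22 for -EINVAL. */
static int stub_hypercall(void)
{
    return -22;
}

/* Zero-extension: widening the 32-bit int through unsigned int turns
 * the sign bit into data, so the value reads back as a large positive
 * number and a "result < 0" error check no longer fires. */
static long widen_zero_extend(int rc)
{
    return (long)(unsigned long)(unsigned int)rc;
}

/* Sign-extension: widening through (long) first preserves negativity,
 * so the caller's "result < 0" check still detects the failure. */
static long widen_sign_extend(int rc)
{
    return (long)(unsigned long)(long)rc;
}
```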

 -- Keir



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


> While trying to figure out why a failing component of a multicall did not
> properly return its result, I discovered that multicall results are not
> sign-extended when placed in the unsigned long result field. For hypercalls
> such as do_mmu_update which return a (signed) int, this results in Linux
> incorrectly thinking the hypercall succeeded when it has actually failed
> since arch/x86/xen/multicalls.c uses a signed long for "result" and checks
> (b->entries[i].result < 0).
> 
> Is this a bug in Xen (using the wrong return type for do_mmu_op and other
> hypercalls) or in Linux (assuming all returns are signed longs)? One or the
> other needs to be changed, because the current setup is silently hiding
> failed memory mapping operations.

I think this is a Xen bug, and we should update all hypercalls to explicitly
return a long. Nice and straightforward.

 -- Keir



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 22:23:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 22:23:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxQGt-0006YU-Bi; Fri, 03 Aug 2012 22:23:03 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dan.magenheimer@oracle.com>) id 1SxQGs-0006YP-BV
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 22:23:02 +0000
Received: from [85.158.143.35:32368] by server-1.bemta-4.messagelabs.com id
	2A/DC-24392-54F4C105; Fri, 03 Aug 2012 22:23:01 +0000
X-Env-Sender: dan.magenheimer@oracle.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1344032579!15410391!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDY3ODc2Nw==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19304 invoked from network); 3 Aug 2012 22:23:00 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-8.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Aug 2012 22:23:00 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q73MMkuT032600
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 3 Aug 2012 22:22:47 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q73MMhrB019074
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 3 Aug 2012 22:22:44 GMT
Received: from abhmt116.oracle.com (abhmt116.oracle.com [141.146.116.68])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q73MMgKu009254; Fri, 3 Aug 2012 17:22:42 -0500
MIME-Version: 1.0
Message-ID: <ed108492-5601-4bc3-8a5f-a8a7cb5916fb@default>
Date: Fri, 3 Aug 2012 15:22:26 -0700 (PDT)
From: Dan Magenheimer <dan.magenheimer@oracle.com>
To: Dario Faggioli <raistlin@linux.it>, xen-devel <xen-devel@lists.xen.org>
References: <1343837796.4958.32.camel@Solace>
In-Reply-To: <1343837796.4958.32.camel@Solace>
X-Priority: 3
X-Mailer: Oracle Beehive Extensions for Outlook 2.0.1.6  (510070) [OL
	12.0.6661.5003 (x86)]
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Andre Przywara <andre.przywara@amd.com>,
	Anil Madhavapeddy <anil@recoil.org>, George Dunlap <dunlapg@gmail.com>,
	Konrad Wilk <konrad.wilk@oracle.com>, Kurt Hackel <kurt.hackel@oracle.com>,
	Jan Beulich <JBeulich@suse.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>, "Zhang, Yang
	Z" <yang.z.zhang@intel.com>
Subject: Re: [Xen-devel] NUMA TODO-list for xen-devel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> From: Dario Faggioli [mailto:raistlin@linux.it]
> Subject: [Xen-devel] NUMA TODO-list for xen-devel
> 
> Hi everyone,

Hi Dario --

Thanks for your great work on NUMA... an area of interest of
mine, but one I sadly haven't been able to give much time to,
so I'm glad you've taken this bull by the horns.

I've been sitting on an idea for some time that probably
deserves some exposure on your list.  Naturally, it involves
my favorite topic tmem (readers, please don't tune out yet :-).

It has occurred to me that a fundamental tenet of NUMA
is to put infrequently used data on "other" nodes, while
pulling frequently used data onto a "local" node.

Tmem very nicely separates infrequently-used data from
frequently-used data with an API/ABI that is now fully
implemented in upstream Linux.

If Xen had an "alloc_page_on_any_node_but_the_current_one()"
(or "any_node_except_this_guests_node_set" for multinode guests)
and Xen's tmem implementation were to use it, especially
in combination with selfballooning (also upstream), this
could solve a significant part of the NUMA problem when running
tmem-enabled guests.  The most frequently used data
stays in the guest (thus in the guest's "current node")
and the less frequently used data lives in tmem in the
hypervisor (on the complement of the guest's node set).

Naturally, this doesn't solve any NUMA problems at all for
tmem-ignorant or tmem-disabled guests, but if it works
sufficiently well for tmem-enabled guests, that might
encourage other OS's to do a simple implementation of tmem.

Sadly, I'm not able to invest much time in this idea,
but the combination of tmem and NUMA might interest some
developers and/or grad students, in which case I'd be happy
to spend a little time assisting.

I'll be at Xen Summit for at least the first day, so we
can chat more if you are interested.  George/Jan, I suspect
you have the best knowledge of tmem outside of Oracle as well
as being NUMA-fluent, so I'd appreciate your thoughts as well!

Thanks,
Dan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 22:23:34 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 22:23:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxQH4-0006Z6-OQ; Fri, 03 Aug 2012 22:23:14 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jean.guyader@citrix.com>) id 1SxQH2-0006Yw-M0
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 22:23:13 +0000
Received: from [85.158.139.83:41714] by server-3.bemta-5.messagelabs.com id
	0D/27-03367-F4F4C105; Fri, 03 Aug 2012 22:23:11 +0000
X-Env-Sender: jean.guyader@citrix.com
X-Msg-Ref: server-15.tower-182.messagelabs.com!1344032590!30106025!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc5NTM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13845 invoked from network); 3 Aug 2012 22:23:11 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Aug 2012 22:23:11 -0000
X-IronPort-AV: E=Sophos;i="4.77,709,1336348800"; d="scan'208";a="13847581"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	03 Aug 2012 22:23:10 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 3 Aug 2012 23:23:10 +0100
Received: from spongy.cam.xci-test.com ([10.80.248.53] helo=spongy)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<jean.guyader@citrix.com>)	id 1SxQGz-00077V-NT;
	Fri, 03 Aug 2012 22:23:09 +0000
Received: by spongy (Postfix, from userid 2023)	id 0FFEB34045A; Fri,  3 Aug
	2012 23:24:22 +0100 (BST)
From: Jean Guyader <jean.guyader@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Fri, 3 Aug 2012 23:24:20 +0100
Message-ID: <1344032660-1251-1-git-send-email-jean.guyader@citrix.com>
X-Mailer: git-send-email 1.7.9.5
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="------------true"
Cc: Jean Guyader <jean.guyader@citrix.com>
Subject: [Xen-devel] [PATCH] RFC: V4V Linux Driver
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--------------true
Content-Type: text/plain; charset="UTF-8"; format=fixed
Content-Transfer-Encoding: quoted-printable

This is a Linux driver for the V4V inter VM communication system.

I've posted the V4V Xen patches for comments; to find more info about
V4V you can check out this link:
http://osdir.com/ml/general/2012-08/msg05904.html

This Linux driver exposes two char devices, one for TCP and one for UDP.
The interface exposed to userspace is made of IOCTLs, one per
network operation (listen, bind, accept, send, recv, ...).

Signed-off-by: Jean Guyader <jean.guyader@citrix.com>
---
 drivers/xen/Kconfig         |    4 +
 drivers/xen/Makefile        |    1 +
 drivers/xen/v4v.c           | 2639 ++++++++++++++++++++++++++++++++++++++++++++
 drivers/xen/v4v_utils.h     |  278 +++++
 include/xen/interface/v4v.h |  299 +++++
 include/xen/interface/xen.h |    1 +
 include/xen/v4vdev.h        |   34 +
 7 files changed, 3256 insertions(+)
 create mode 100644 drivers/xen/v4v.c
 create mode 100644 drivers/xen/v4v_utils.h
 create mode 100644 include/xen/interface/v4v.h
 create mode 100644 include/xen/v4vdev.h


--------------true
Content-Type: text/x-patch; name="0001-v4v.patch"
Content-Disposition: attachment; filename="0001-v4v.patch"
Content-Transfer-Encoding: quoted-printable

diff --git a/drivers/xen/Kconfig b/drivers/xen/Kconfig
index 8d2501e..db500cc 100644
--- a/drivers/xen/Kconfig
+++ b/drivers/xen/Kconfig
@@ -196,4 +196,8 @@ config XEN_ACPI_PROCESSOR
 	  called xen_acpi_processor  If you do not know what to choose, select
 	  M here. If the CPUFREQ drivers are built in, select Y here.
 
+config XEN_V4V
+	tristate "Xen V4V driver"
+        default m
+
 endmenu
diff --git a/drivers/xen/Makefile b/drivers/xen/Makefile
index fc34886..a3d3014 100644
--- a/drivers/xen/Makefile
+++ b/drivers/xen/Makefile
@@ -21,6 +21,7 @@ obj-$(CONFIG_XEN_DOM0)			+= pci.o acpi.o
 obj-$(CONFIG_XEN_PCIDEV_BACKEND)	+= xen-pciback/
 obj-$(CONFIG_XEN_PRIVCMD)		+= xen-privcmd.o
 obj-$(CONFIG_XEN_ACPI_PROCESSOR)	+= xen-acpi-processor.o
+obj-$(CONFIG_XEN_V4V)			+= v4v.o
 xen-evtchn-y				:= evtchn.o
 xen-gntdev-y				:= gntdev.o
 xen-gntalloc-y				:= gntalloc.o
diff --git a/drivers/xen/v4v.c b/drivers/xen/v4v.c
new file mode 100644
index 0000000..141be66
--- /dev/null
+++ b/drivers/xen/v4v.c
@@ -0,0 +1,2639 @@
+/******************************************************************************
+ * drivers/xen/v4v/v4v.c
+ *
+ * V4V interdomain communication driver.
+ *
+ * Copyright (c) 2012 Jean Guyader
+ * Copyright (c) 2009 Ross Philipson
+ * Copyright (c) 2009 James McKenzie
+ * Copyright (c) 2009 Citrix Systems, Inc.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2
+ * as published by the Free Software Foundation; or, when distributed
+ * separately from the Linux kernel or incorporated into other
+ * software packages, subject to the following license:
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this source file (the "Software"), to deal in the Software without
+ * restriction, including without limitation the rights to use, copy, modify,
+ * merge, publish, distribute, sublicense, and/or sell copies of the Software,
+ * and to permit persons to whom the Software is furnished to do so, subject to
+ * the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ */
+
+#include <linux/mm.h>
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/vmalloc.h>
+#include <linux/interrupt.h>
+#include <linux/spinlock.h>
+#include <linux/list.h>
+#include <linux/socket.h>
+#include <linux/sched.h>
+#include <xen/events.h>
+#include <xen/evtchn.h>
+#include <xen/page.h>
+#include <xen/xen.h>
+#include <linux/fs.h>
+#include <linux/platform_device.h>
+#include <linux/miscdevice.h>
+#include <linux/major.h>
+#include <linux/proc_fs.h>
+#include <linux/poll.h>
+#include <linux/random.h>
+#include <linux/wait.h>
+#include <linux/file.h>
+#include <linux/mount.h>
+
+#include <xen/interface/v4v.h>
+#include <xen/v4vdev.h>
+#include "v4v_utils.h"
+
+#define DEFAULT_RING_SIZE \
+    (V4V_ROUNDUP((((PAGE_SIZE)*32) - sizeof(v4v_ring_t)-V4V_ROUNDUP(1))))
+
+/* The type of a ring */
+typedef enum {
+        V4V_RTYPE_IDLE = 0,
+        V4V_RTYPE_DGRAM,
+        V4V_RTYPE_LISTENER,
+        V4V_RTYPE_CONNECTOR,
+} v4v_rtype;
+
+/* The state of a v4v_private */
+typedef enum {
+        V4V_STATE_IDLE = 0,
+        V4V_STATE_BOUND,
+        V4V_STATE_LISTENING,
+        V4V_STATE_ACCEPTED,
+        V4V_STATE_CONNECTING,
+        V4V_STATE_CONNECTED,
+        V4V_STATE_DISCONNECTED
+} v4v_state;
+
+typedef enum {
+        V4V_PTYPE_DGRAM = 1,
+        V4V_PTYPE_STREAM,
+} v4v_ptype;
+
+static rwlock_t list_lock;
+static struct list_head ring_list;
+
+struct v4v_private;
+
+/*
+ * The ring pointer itself is protected by the refcnt; the lists it is in
+ * are protected by list_lock.
+ *
+ * It is permissible to decrement the refcnt whilst holding the read lock,
+ * and then clean up refcnt=0 rings later.
+ *
+ * If a ring has refcnt!=0 we expect ->ring to be non-NULL, and for the
+ * ring to be registered with Xen.
+ */
+
+struct ring {
+        struct list_head node;
+        atomic_t refcnt;
+
+        spinlock_t lock;        /* Protects the data in the v4v_ring_t, also privates and sponsor */
+
+        struct list_head privates;      /* Protected by lock */
+        struct v4v_private *sponsor;    /* Protected by lock */
+
+        v4v_rtype type;
+
+        /* Ring */
+        v4v_ring_t *ring;
+        v4v_pfn_t *pfn_list;
+        size_t pfn_list_npages;
+        int order;
+};
+
+struct v4v_private {
+        struct list_head node;
+        v4v_state state;
+        v4v_ptype ptype;
+        uint32_t desired_ring_size;
+        struct ring *r;
+        wait_queue_head_t readq;
+        wait_queue_head_t writeq;
+        v4v_addr_t peer;
+        uint32_t conid;
+        spinlock_t pending_recv_lock;   /* Protects pending messages, and pending_error */
+        struct list_head pending_recv_list;     /* For LISTENER contains only ... */
+        atomic_t pending_recv_count;
+        int pending_error;
+        int full;
+        int send_blocked;
+        int rx;
+};
+
+struct pending_recv {
+        struct list_head node;
+        v4v_addr_t from;
+        size_t data_len, data_ptr;
+        struct v4v_stream_header sh;
+        uint8_t data[0];
+} V4V_PACKED;
+
+static spinlock_t interrupt_lock;
+static spinlock_t pending_xmit_lock;
+static struct list_head pending_xmit_list;
+static atomic_t pending_xmit_count;
+
+enum v4v_pending_xmit_type {
+        V4V_PENDING_XMIT_INLINE = 1,    /* Send the inline xmit */
+        V4V_PENDING_XMIT_WAITQ_MATCH_SPONSOR,   /* Wake up writeq of sponsor of the ringid from */
+        V4V_PENDING_XMIT_WAITQ_MATCH_PRIVATES,  /* Wake up writeq of a private of ringid from with conid */
+};
+
+struct pending_xmit {
+        struct list_head node;
+        enum v4v_pending_xmit_type type;
+        uint32_t conid;
+        struct v4v_ring_id from;
+        v4v_addr_t to;
+        size_t len;
+        uint32_t protocol;
+        uint8_t data[0];
+};
+
+#define MAX_PENDING_RECVS        16
+
+/* Hypercalls */
+
+static inline int __must_check
+HYPERVISOR_v4v_op(int cmd, void *arg1, void *arg2,
+                  uint32_t arg3, uint32_t arg4)
+{
+        return _hypercall5(int, v4v_op, cmd, arg1, arg2, arg3, arg4);
+}
+
+static int v4v_info(v4v_info_t *info)
+{
+        (void)(*(volatile int*)info);
+        return HYPERVISOR_v4v_op (V4VOP_info, info, NULL, 0, 0);
+}
+
+static int H_v4v_register_ring(v4v_ring_t * r, v4v_pfn_t * l, size_t npages)
+{
+        (void)(*(volatile int *)r);
+        return HYPERVISOR_v4v_op(V4VOP_register_ring, r, l, npages, 0);
+}
+
+static int H_v4v_unregister_ring(v4v_ring_t * r)
+{
+        (void)(*(volatile int *)r);
+        return HYPERVISOR_v4v_op(V4VOP_unregister_ring, r, NULL, 0, 0);
+}
+
+static int
+H_v4v_send(v4v_addr_t * s, v4v_addr_t * d, const void *buf, uint32_t len,
+           uint32_t protocol)
+{
+        v4v_send_addr_t addr;
+        addr.src = *s;
+        addr.dst = *d;
+        return HYPERVISOR_v4v_op(V4VOP_send, &addr, (void *)buf, len, protocol);
+}
+
+static int
+H_v4v_sendv(v4v_addr_t * s, v4v_addr_t * d, const v4v_iov_t * iovs,
+            uint32_t niov, uint32_t protocol)
+{
+        v4v_send_addr_t addr;
+        addr.src = *s;
+        addr.dst = *d;
+        return HYPERVISOR_v4v_op(V4VOP_sendv, &addr, (void *)iovs, niov,
+                                 protocol);
+}
+
+static int H_v4v_notify(v4v_ring_data_t * rd)
+{
+        return HYPERVISOR_v4v_op(V4VOP_notify, rd, NULL, 0, 0);
+}
+
+static int H_v4v_viptables_add(v4v_viptables_rule_t * rule, int position)
+{
+        return HYPERVISOR_v4v_op(V4VOP_viptables_add, rule, NULL,
+                                 position, 0);
+}
+
+static int H_v4v_viptables_del(v4v_viptables_rule_t * rule, int position)
+{
+        return HYPERVISOR_v4v_op(V4VOP_viptables_del, rule, NULL,
+                                 position, 0);
+}
+
+static int H_v4v_viptables_list(struct v4v_viptables_list *list)
+{
+        return HYPERVISOR_v4v_op(V4VOP_viptables_list, list, NULL, 0, 0);
+}
+
+/* Port/Ring uniqueness */
+
+/* Need to hold write lock for all of these */
+
+static int v4v_id_in_use(struct v4v_ring_id *id)
+{
+        struct ring *r;
+
+        list_for_each_entry(r, &ring_list, node) {
+                if ((r->ring->id.addr.port == id->addr.port)
+                    && (r->ring->id.partner == id->partner))
+                        return 1;
+        }
+
+        return 0;
+}
+
+static int v4v_port_in_use(uint32_t port, uint32_t * max)
+{
+        uint32_t ret = 0;
+        struct ring *r;
+
+        list_for_each_entry(r, &ring_list, node) {
+                if (r->ring->id.addr.port == port)
+                        ret++;
+                if (max && (r->ring->id.addr.port > *max))
+                        *max = r->ring->id.addr.port;
+        }
+
+        return ret;
+}
+
+static uint32_t v4v_random_port(void)
+{
+        uint32_t port;
+
+        port = random32();
+        port |= 0x80000000U;
+        if (port > 0xf0000000U) {
+                port -= 0x10000000;
+        }
+
+        return port;
+}
+
+/* Caller needs to hold lock */
+static uint32_t v4v_find_spare_port_number(void)
+{
+        uint32_t port, max = 0x80000000U;
+
+        port = v4v_random_port();
+        if (!v4v_port_in_use(port, &max)) {
+                return port;
+        } else {
+                port = max + 1;
+        }
+
+        return port;
+}
+
+/* Ring Goo */
+
+static int register_ring(struct ring *r)
+{
+        return H_v4v_register_ring((void *)r->ring,
+                                   r->pfn_list,
+                                   r->pfn_list_npages);
+}
+
+static int unregister_ring(struct ring *r)
+{
+        return H_v4v_unregister_ring((void *)r->ring);
+}
+
+static void refresh_pfn_list(struct ring *r)
+{
+        uint8_t *b = (void *)r->ring;
+        int i;
+
+        for (i = 0; i < r->pfn_list_npages; ++i) {
+                r->pfn_list[i] = pfn_to_mfn(vmalloc_to_pfn(b));
+                b += PAGE_SIZE;
+        }
+}
+
+static void allocate_pfn_list(struct ring *r)
+{
+        int n = (r->ring->len + PAGE_SIZE - 1) >> PAGE_SHIFT;
+        int len = sizeof(v4v_pfn_t) * n;
+
+        r->pfn_list = kmalloc(len, GFP_KERNEL);
+        if (!r->pfn_list)
+                return;
+        r->pfn_list_npages = n;
+
+        refresh_pfn_list(r);
+}
+
+static int allocate_ring(struct ring *r, int ring_len)
+{
+        int len = ring_len + sizeof(v4v_ring_t);
+        int ret = 0;
+
+        if (ring_len != V4V_ROUNDUP(ring_len)) {
+                ret = -EINVAL;
+                goto fail;
+        }
+
+        r->ring = NULL;
+        r->pfn_list = NULL;
+        r->order = 0;
+
+        r->order = get_order(len);
+
+        r->ring = vmalloc(len);
+
+        if (!r->ring) {
+                ret = -ENOMEM;
+                goto fail;
+        }
+
+        memset((void *)r->ring, 0, len);
+
+        r->ring->magic = V4V_RING_MAGIC;
+        r->ring->len = ring_len;
+        r->ring->rx_ptr = r->ring->tx_ptr = 0;
+
+        memset((void *)r->ring->ring, 0x5a, ring_len);
+
+        allocate_pfn_list(r);
+        if (!r->pfn_list) {
+                ret = -ENOMEM;
+                goto fail;
+        }
+
+        return 0;
+ fail:
+        if (r->ring)
+                vfree(r->ring);
+        if (r->pfn_list)
+                kfree(r->pfn_list);
+
+        r->ring = NULL;
+        r->pfn_list = NULL;
+
+        return ret;
+}
+
+/* Caller must hold lock */
+static void recover_ring(struct ring *r)
+{
+        /* It's all gone horribly wrong */
+        r->ring->rx_ptr = r->ring->tx_ptr;
+        /* Xen updates tx_ptr atomically to always be pointing somewhere sensible */
+}
+
+/* Caller must hold no locks, ring is allocated with a refcnt of 1 */
+static int new_ring(struct v4v_private *sponsor, struct v4v_ring_id *pid)
+{
+        struct v4v_ring_id id =3D *pid;
+        struct ring *r;
+        int ret;
+        unsigned long flags;
+
+        if (id.addr.domain !=3D V4V_DOMID_NONE)
+                return -EINVAL;
+
+        r =3D kmalloc(sizeof(struct ring), GFP_KERNEL);
+        memset(r, 0, sizeof(struct ring));
+
+        ret = allocate_ring(r, sponsor->desired_ring_size);
+        if (ret) {
+                kfree(r);
+                return ret;
+        }
+
+        INIT_LIST_HEAD(&r->privates);
+        spin_lock_init(&r->lock);
+        atomic_set(&r->refcnt, 1);
+
+        write_lock_irqsave(&list_lock, flags);
+        if (sponsor->state != V4V_STATE_IDLE) {
+                ret = -EINVAL;
+                goto fail;
+        }
+
+        if (!id.addr.port) {
+                id.addr.port = v4v_find_spare_port_number();
+        } else if (v4v_id_in_use(&id)) {
+                ret = -EADDRINUSE;
+                goto fail;
+        }
+
+        r->ring->id = id;
+        r->sponsor = sponsor;
+        sponsor->r = r;
+        sponsor->state = V4V_STATE_BOUND;
+
+        ret = register_ring(r);
+        if (ret)
+                goto fail;
+
+        list_add(&r->node, &ring_list);
+        write_unlock_irqrestore(&list_lock, flags);
+        return 0;
+
+ fail:
+        write_unlock_irqrestore(&list_lock, flags);
+
+        /* Only undo the bind if we got far enough to make it */
+        if (sponsor->r == r) {
+                sponsor->r = NULL;
+                sponsor->state = V4V_STATE_IDLE;
+        }
+
+        vfree(r->ring);
+        kfree(r->pfn_list);
+        kfree(r);
+
+        return ret;
+}
+
+/* Cleans up old rings */
+static void delete_ring(struct ring *r)
+{
+        int ret;
+
+        list_del(&r->node);
+
+        ret = unregister_ring(r);
+        if (ret) {
+                printk(KERN_ERR
+                       "unregister_ring hypercall failed: %d. Leaking ring.\n",
+                       ret);
+        } else {
+                vfree(r->ring);
+        }
+
+        kfree(r->pfn_list);
+        kfree(r);
+}
+
+/* Returns nonzero if you successfully got a reference to the ring */
+static int get_ring(struct ring *r)
+{
+        return atomic_add_unless(&r->refcnt, 1, 0);
+}
+
+/* Must be called with DEBUG_WRITELOCK; v4v_write_lock */
+static void put_ring(struct ring *r)
+{
+        if (!r)
+                return;
+
+        if (atomic_dec_and_test(&r->refcnt)) {
+                delete_ring(r);
+        }
+}
+
+/* Caller must hold ring_lock */
+static struct ring *find_ring_by_id(struct v4v_ring_id *id)
+{
+        struct ring *r;
+
+        list_for_each_entry(r, &ring_list, node) {
+                if (!memcmp(&r->ring->id, id, sizeof(struct v4v_ring_id)))
+                        return r;
+        }
+        return NULL;
+}
+
+/* Caller must hold ring_lock */
+struct ring *find_ring_by_id_type(struct v4v_ring_id *id, v4v_rtype t)
+{
+        struct ring *r;
+
+        list_for_each_entry(r, &ring_list, node) {
+                if (r->type != t)
+                        continue;
+                if (!memcmp(&r->ring->id, id, sizeof(struct v4v_ring_id)))
+                        return r;
+        }
+
+
+        return NULL;
+}
+
+/* Pending xmits */
+
+/* Caller must hold pending_xmit_lock */
+
+static void
+xmit_queue_wakeup_private(struct v4v_ring_id *from,
+                          uint32_t conid, v4v_addr_t *to, int len, int delete)
+{
+        struct pending_xmit *p;
+
+        list_for_each_entry(p, &pending_xmit_list, node) {
+                if (p->type != V4V_PENDING_XMIT_WAITQ_MATCH_PRIVATES)
+                        continue;
+                if (p->conid != conid)
+                        continue;
+
+                if (!memcmp(from, &p->from, sizeof(struct v4v_ring_id))
+                    && !memcmp(to, &p->to, sizeof(v4v_addr_t))) {
+                        if (delete) {
+                                atomic_dec(&pending_xmit_count);
+                                list_del(&p->node);
+                        } else {
+                                p->len = len;
+                        }
+                        return;
+                }
+        }
+
+        if (delete)
+                return;
+
+        p = kmalloc(sizeof(struct pending_xmit), GFP_ATOMIC);
+        if (!p) {
+                printk(KERN_ERR
+                       "Out of memory trying to queue an xmit private wakeup\n");
+                return;
+        }
+        p->type = V4V_PENDING_XMIT_WAITQ_MATCH_PRIVATES;
+        p->conid = conid;
+        p->from = *from;
+        p->to = *to;
+        p->len = len;
+
+        atomic_inc(&pending_xmit_count);
+        list_add_tail(&p->node, &pending_xmit_list);
+}
+
+/* Caller must hold pending_xmit_lock */
+static void
+xmit_queue_wakeup_sponsor(struct v4v_ring_id *from, v4v_addr_t *to,
+                          int len, int delete)
+{
+        struct pending_xmit *p;
+
+        list_for_each_entry(p, &pending_xmit_list, node) {
+                if (p->type != V4V_PENDING_XMIT_WAITQ_MATCH_SPONSOR)
+                        continue;
+                if (!memcmp(from, &p->from, sizeof(struct v4v_ring_id))
+                    && !memcmp(to, &p->to, sizeof(v4v_addr_t))) {
+                        if (delete) {
+                                atomic_dec(&pending_xmit_count);
+                                list_del(&p->node);
+                        } else {
+                                p->len = len;
+                        }
+                        return;
+                }
+        }
+
+        if (delete)
+                return;
+
+        p = kmalloc(sizeof(struct pending_xmit), GFP_ATOMIC);
+        if (!p) {
+                printk(KERN_ERR
+                       "Out of memory trying to queue an xmit sponsor wakeup\n");
+                return;
+        }
+        p->type = V4V_PENDING_XMIT_WAITQ_MATCH_SPONSOR;
+        p->from = *from;
+        p->to = *to;
+        p->len = len;
+        atomic_inc(&pending_xmit_count);
+        list_add_tail(&p->node, &pending_xmit_list);
+}
+
+static int
+xmit_queue_inline(struct v4v_ring_id *from, v4v_addr_t *to,
+                  void *buf, size_t len, uint32_t protocol)
+{
+        ssize_t ret;
+        unsigned long flags;
+        struct pending_xmit *p;
+
+        spin_lock_irqsave(&pending_xmit_lock, flags);
+
+        ret = H_v4v_send(&from->addr, to, buf, len, protocol);
+        if (ret != -EAGAIN) {
+                spin_unlock_irqrestore(&pending_xmit_lock, flags);
+                return ret;
+        }
+
+        p = kmalloc(sizeof(struct pending_xmit) + len, GFP_ATOMIC);
+        if (!p) {
+                spin_unlock_irqrestore(&pending_xmit_lock, flags);
+                printk(KERN_ERR
+                       "Out of memory trying to queue an xmit of %zu bytes\n",
+                       len);
+
+                return -ENOMEM;
+        }
+
+        p->type = V4V_PENDING_XMIT_INLINE;
+        p->from = *from;
+        p->to = *to;
+        p->len = len;
+        p->protocol = protocol;
+
+        if (len)
+                memcpy(p->data, buf, len);
+
+        list_add_tail(&p->node, &pending_xmit_list);
+        atomic_inc(&pending_xmit_count);
+        spin_unlock_irqrestore(&pending_xmit_lock, flags);
+
+        return len;
+}
+
+static void
+xmit_queue_rst_to(struct v4v_ring_id *from, uint32_t conid, v4v_addr_t *to)
+{
+        struct v4v_stream_header sh;
+
+        if (!to)
+                return;
+
+        sh.conid = conid;
+        sh.flags = V4V_SHF_RST;
+        xmit_queue_inline(from, to, &sh, sizeof(sh), V4V_PROTO_STREAM);
+}
+
+/* RX */
+
+static int
+copy_into_pending_recv(struct ring *r, int len, struct v4v_private *p)
+{
+        struct pending_recv *pending;
+        int k;
+
+        /* Too much queued? Let the ring take the strain */
+        if (atomic_read(&p->pending_recv_count) > MAX_PENDING_RECVS) {
+                spin_lock(&p->pending_recv_lock);
+                p->full = 1;
+                spin_unlock(&p->pending_recv_lock);
+
+                return -1;
+        }
+
+        pending =
+            kmalloc(sizeof(struct pending_recv) -
+                    sizeof(struct v4v_stream_header) + len, GFP_ATOMIC);
+        if (!pending)
+                return -1;
+
+        pending->data_ptr = 0;
+        pending->data_len = len - sizeof(struct v4v_stream_header);
+
+        k = v4v_copy_out(r->ring, &pending->from, NULL, &pending->sh, len, 1);
+        if (k < 0) {
+                /* Don't queue a bogus entry if the copy failed */
+                kfree(pending);
+                return -1;
+        }
+
+        spin_lock(&p->pending_recv_lock);
+        list_add_tail(&pending->node, &p->pending_recv_list);
+        atomic_inc(&p->pending_recv_count);
+        p->full = 0;
+        spin_unlock(&p->pending_recv_lock);
+
+        return 0;
+}
+
+/* Notify */
+
+/* Caller must hold list_lock */
+static void
+wakeup_privates(struct v4v_ring_id *id, v4v_addr_t *peer, uint32_t conid)
+{
+        struct ring *r = find_ring_by_id_type(id, V4V_RTYPE_LISTENER);
+        struct v4v_private *p;
+
+        if (!r)
+                return;
+
+        list_for_each_entry(p, &r->privates, node) {
+                if ((p->conid == conid)
+                    && !memcmp(peer, &p->peer, sizeof(v4v_addr_t))) {
+                        p->send_blocked = 0;
+                        wake_up_interruptible_all(&p->writeq);
+                        return;
+                }
+        }
+}
+
+/* Caller must hold list_lock */
+static void wakeup_sponsor(struct v4v_ring_id *id)
+{
+        struct ring *r = find_ring_by_id(id);
+
+        if (!r || !r->sponsor)
+                return;
+
+        r->sponsor->send_blocked = 0;
+        wake_up_interruptible_all(&r->sponsor->writeq);
+}
+
+static void v4v_null_notify(void)
+{
+        H_v4v_notify(NULL);
+}
+
+/* Caller must hold list_lock */
+static void v4v_notify(void)
+{
+        unsigned long flags;
+        int ret;
+        int nent;
+        struct pending_xmit *p, *n;
+        v4v_ring_data_t *d;
+        int i = 0;
+
+        spin_lock_irqsave(&pending_xmit_lock, flags);
+
+        nent = atomic_read(&pending_xmit_count);
+        d = kmalloc(sizeof(v4v_ring_data_t) +
+                    nent * sizeof(v4v_ring_data_ent_t), GFP_ATOMIC);
+        if (!d) {
+                spin_unlock_irqrestore(&pending_xmit_lock, flags);
+                return;
+        }
+        memset(d, 0, sizeof(v4v_ring_data_t));
+
+        d->magic = V4V_RING_DATA_MAGIC;
+
+        list_for_each_entry(p, &pending_xmit_list, node) {
+                if (i != nent) {
+                        d->data[i].ring = p->to;
+                        d->data[i].space_required = p->len;
+                        i++;
+                }
+        }
+        d->nent = i;
+
+        if (H_v4v_notify(d)) {
+                kfree(d);
+                spin_unlock_irqrestore(&pending_xmit_lock, flags);
+                //MOAN;
+                return;
+        }
+
+        i = 0;
+        list_for_each_entry_safe(p, n, &pending_xmit_list, node) {
+                int processed = 1;
+
+                if (i == nent)
+                        continue;
+
+                if (d->data[i].flags & V4V_RING_DATA_F_EXISTS) {
+                        switch (p->type) {
+                        case V4V_PENDING_XMIT_INLINE:
+                                if (!(d->data[i].flags &
+                                      V4V_RING_DATA_F_SUFFICIENT)) {
+                                        processed = 0;
+                                        break;
+                                }
+                                ret = H_v4v_send(&p->from.addr, &p->to,
+                                                 p->data, p->len,
+                                                 p->protocol);
+                                if (ret == -EAGAIN)
+                                        processed = 0;
+                                break;
+                        case V4V_PENDING_XMIT_WAITQ_MATCH_SPONSOR:
+                                if (d->data[i].flags &
+                                    V4V_RING_DATA_F_SUFFICIENT) {
+                                        wakeup_sponsor(&p->from);
+                                } else {
+                                        processed = 0;
+                                }
+                                break;
+                        case V4V_PENDING_XMIT_WAITQ_MATCH_PRIVATES:
+                                if (d->data[i].flags &
+                                    V4V_RING_DATA_F_SUFFICIENT) {
+                                        wakeup_privates(&p->from, &p->to,
+                                                        p->conid);
+                                } else {
+                                        processed = 0;
+                                }
+                                break;
+                        }
+                }
+                if (processed) {
+                        list_del(&p->node);     /* No one to talk to */
+                        atomic_dec(&pending_xmit_count);
+                        kfree(p);
+                }
+                i++;
+        }
+
+        spin_unlock_irqrestore(&pending_xmit_lock, flags);
+        kfree(d);
+}
+
+/* VIPtables */
+static void
+v4v_viptables_add(struct v4v_private *p, struct v4v_viptables_rule *rule,
+                  int position)
+{
+        H_v4v_viptables_add(rule, position);
+}
+
+static void
+v4v_viptables_del(struct v4v_private *p, struct v4v_viptables_rule *rule,
+                  int position)
+{
+        H_v4v_viptables_del(rule, position);
+}
+
+static int
+v4v_viptables_list(struct v4v_private *p, struct v4v_viptables_list *list)
+{
+        return H_v4v_viptables_list(list);
+}
+
+/* State Machines */
+static int
+connector_state_machine(struct v4v_private *p, struct v4v_stream_header *sh)
+{
+        if (sh->flags & V4V_SHF_ACK) {
+                switch (p->state) {
+                case V4V_STATE_CONNECTING:
+                        p->state = V4V_STATE_CONNECTED;
+
+                        spin_lock(&p->pending_recv_lock);
+                        p->pending_error = 0;
+                        spin_unlock(&p->pending_recv_lock);
+
+                        wake_up_interruptible_all(&p->writeq);
+                        return 0;
+                case V4V_STATE_CONNECTED:
+                case V4V_STATE_DISCONNECTED:
+                        p->state = V4V_STATE_DISCONNECTED;
+
+                        wake_up_interruptible_all(&p->readq);
+                        wake_up_interruptible_all(&p->writeq);
+                        return 1;       /* Send RST */
+                default:
+                        break;
+                }
+        }
+
+        if (sh->flags & V4V_SHF_RST) {
+                switch (p->state) {
+                case V4V_STATE_CONNECTING:
+                        spin_lock(&p->pending_recv_lock);
+                        p->pending_error = -ECONNREFUSED;
+                        spin_unlock(&p->pending_recv_lock);
+                        /* Fall through */
+                case V4V_STATE_CONNECTED:
+                        p->state = V4V_STATE_DISCONNECTED;
+                        wake_up_interruptible_all(&p->readq);
+                        wake_up_interruptible_all(&p->writeq);
+                        return 0;
+                default:
+                        break;
+                }
+        }
+
+        return 0;
+}
+
+static void
+acceptor_state_machine(struct v4v_private *p, struct v4v_stream_header *sh)
+{
+        if ((sh->flags & V4V_SHF_RST)
+            && (p->state == V4V_STATE_ACCEPTED)) {
+                p->state = V4V_STATE_DISCONNECTED;
+                wake_up_interruptible_all(&p->readq);
+                wake_up_interruptible_all(&p->writeq);
+        }
+}
+
+/* Interrupt handler */
+
+static int connector_interrupt(struct ring *r)
+{
+        ssize_t msg_len;
+        uint32_t protocol;
+        struct v4v_stream_header sh;
+        v4v_addr_t from;
+        int ret = 0;
+
+        if (!r->sponsor) {
+                //MOAN;
+                return -1;
+        }
+
+        /* Peek the header */
+        msg_len = v4v_copy_out(r->ring, &from, &protocol, &sh, sizeof(sh), 0);
+        if (msg_len == -1) {
+                recover_ring(r);
+                return ret;
+        }
+
+        if ((protocol != V4V_PROTO_STREAM) || (msg_len < sizeof(sh))) {
+                /* Wrong protocol; bin it */
+                v4v_copy_out(r->ring, NULL, NULL, NULL, 0, 1);
+                return ret;
+        }
+
+        /* This is a connector: no-one should send us SYN, so send an RST back */
+        if (sh.flags & V4V_SHF_SYN) {
+                msg_len =
+                    v4v_copy_out(r->ring, &from, &protocol, &sh, sizeof(sh), 1);
+                if (msg_len == sizeof(sh))
+                        xmit_queue_rst_to(&r->ring->id, sh.conid, &from);
+                return ret;
+        }
+
+        /* Right connection? */
+        if (sh.conid != r->sponsor->conid) {
+                msg_len =
+                    v4v_copy_out(r->ring, &from, &protocol, &sh, sizeof(sh), 1);
+                xmit_queue_rst_to(&r->ring->id, sh.conid, &from);
+                return ret;
+        }
+
+        /* Any messages to eat? */
+        if (sh.flags & (V4V_SHF_ACK | V4V_SHF_RST)) {
+                msg_len =
+                    v4v_copy_out(r->ring, &from, &protocol, &sh, sizeof(sh), 1);
+                if (msg_len == sizeof(sh)) {
+                        if (connector_state_machine(r->sponsor, &sh))
+                                xmit_queue_rst_to(&r->ring->id, sh.conid,
+                                                  &from);
+                }
+                return ret;
+        }
+
+        /*
+         * FIXME: set a flag to say "wake up the userland process next time",
+         * and do that rather than copy.
+         */
+        ret = copy_into_pending_recv(r, msg_len, r->sponsor);
+        wake_up_interruptible_all(&r->sponsor->readq);
+
+        return ret;
+}
+
+static int
+acceptor_interrupt(struct v4v_private *p, struct ring *r,
+                   struct v4v_stream_header *sh, ssize_t msg_len)
+{
+        v4v_addr_t from;
+        int ret = 0;
+
+        /* This is an acceptor: no-one should send us SYN or ACK, so send an RST back */
+        if (sh->flags & (V4V_SHF_SYN | V4V_SHF_ACK)) {
+                msg_len =
+                    v4v_copy_out(r->ring, &from, NULL, sh, sizeof(*sh), 1);
+                if (msg_len == sizeof(*sh))
+                        xmit_queue_rst_to(&r->ring->id, sh->conid, &from);
+                return ret;
+        }
+
+        /* Is it all over? */
+        if (sh->flags & V4V_SHF_RST) {
+                /* Consume the RST */
+                msg_len =
+                    v4v_copy_out(r->ring, &from, NULL, sh, sizeof(*sh), 1);
+                if (msg_len == sizeof(*sh))
+                        acceptor_state_machine(p, sh);
+                return ret;
+        }
+
+        /* Copy the message out */
+        ret = copy_into_pending_recv(r, msg_len, p);
+        wake_up_interruptible_all(&p->readq);
+
+        return ret;
+}
+
+static int listener_interrupt(struct ring *r)
+{
+        int ret = 0;
+        ssize_t msg_len;
+        uint32_t protocol;
+        struct v4v_stream_header sh;
+        struct v4v_private *p;
+        v4v_addr_t from;
+
+        /* Peek the header */
+        msg_len = v4v_copy_out(r->ring, &from, &protocol, &sh, sizeof(sh), 0);
+        if (msg_len == -1) {
+                recover_ring(r);
+                return ret;
+        }
+
+        if ((protocol != V4V_PROTO_STREAM) || (msg_len < sizeof(sh))) {
+                /* Wrong protocol; bin it */
+                v4v_copy_out(r->ring, NULL, NULL, NULL, 0, 1);
+                return ret;
+        }
+
+        list_for_each_entry(p, &r->privates, node) {
+                if ((p->conid == sh.conid)
+                    && !memcmp(&p->peer, &from, sizeof(v4v_addr_t))) {
+                        ret = acceptor_interrupt(p, r, &sh, msg_len);
+                        return ret;
+                }
+        }
+
+        /* Consume it */
+        if (r->sponsor && (sh.flags & V4V_SHF_RST)) {
+                /*
+                 * If we previously received a SYN which has not been pulled by
+                 * v4v_accept() from the pending queue yet, the RST will be
+                 * dropped here and the connection will never be closed.
+                 * Hence we must make sure to evict the SYN header from the
+                 * pending queue before it gets picked up by v4v_accept().
+                 */
+                struct pending_recv *pending, *t;
+
+                spin_lock(&r->sponsor->pending_recv_lock);
+                list_for_each_entry_safe(pending, t,
+                                         &r->sponsor->pending_recv_list, node) {
+                        if (pending->sh.flags & V4V_SHF_SYN
+                            && pending->sh.conid == sh.conid) {
+                                list_del(&pending->node);
+                                atomic_dec(&r->sponsor->pending_recv_count);
+                                kfree(pending);
+                                break;
+                        }
+                }
+                spin_unlock(&r->sponsor->pending_recv_lock);
+
+                /*
+                 * An RST to a listener should have been picked up above for an
+                 * established connection; drop it.
+                 */
+                v4v_copy_out(r->ring, NULL, NULL, NULL, sizeof(sh), 1);
+                return ret;
+        }
+
+        if (sh.flags & V4V_SHF_SYN) {
+                /* SYN for a new connection */
+                if ((!r->sponsor) || (msg_len != sizeof(sh))) {
+                        v4v_copy_out(r->ring, NULL, NULL, NULL,
+                                     sizeof(sh), 1);
+                        return ret;
+                }
+                ret = copy_into_pending_recv(r, msg_len, r->sponsor);
+                wake_up_interruptible_all(&r->sponsor->readq);
+                return ret;
+        }
+
+        /* Data for an unknown destination: RST them */
+        v4v_copy_out(r->ring, NULL, NULL, NULL, sizeof(sh), 1);
+        xmit_queue_rst_to(&r->ring->id, sh.conid, &from);
+
+        return ret;
+}
+
+static void v4v_interrupt_rx(void)
+{
+        struct ring *r;
+
+        read_lock(&list_lock);
+
+        /* Wake up anyone pending */
+        list_for_each_entry(r, &ring_list, node) {
+                if (r->ring->tx_ptr == r->ring->rx_ptr)
+                        continue;
+
+                switch (r->type) {
+                case V4V_RTYPE_IDLE:
+                        v4v_copy_out(r->ring, NULL, NULL, NULL, 1, 1);
+                        break;
+                case V4V_RTYPE_DGRAM:
+                        /* For datagrams we just wake up the reader */
+                        if (r->sponsor)
+                                wake_up_interruptible_all(&r->sponsor->readq);
+                        break;
+                case V4V_RTYPE_CONNECTOR:
+                        spin_lock(&r->lock);
+                        while ((r->ring->tx_ptr != r->ring->rx_ptr)
+                               && !connector_interrupt(r))
+                                ;
+                        spin_unlock(&r->lock);
+                        break;
+                case V4V_RTYPE_LISTENER:
+                        spin_lock(&r->lock);
+                        while ((r->ring->tx_ptr != r->ring->rx_ptr)
+                               && !listener_interrupt(r))
+                                ;
+                        spin_unlock(&r->lock);
+                        break;
+                default:       /* enum warning */
+                        break;
+                }
+        }
+        read_unlock(&list_lock);
+}
+
+static irqreturn_t v4v_interrupt(int irq, void *dev_id)
+{
+        unsigned long flags;
+
+        spin_lock_irqsave(&interrupt_lock, flags);
+        v4v_interrupt_rx();
+        v4v_notify();
+        spin_unlock_irqrestore(&interrupt_lock, flags);
+
+        return IRQ_HANDLED;
+}
+
+static void v4v_fake_irq(void)
+{
+        unsigned long flags;
+
+        spin_lock_irqsave(&interrupt_lock, flags);
+        v4v_interrupt_rx();
+        v4v_null_notify();
+        spin_unlock_irqrestore(&interrupt_lock, flags);
+}
+
+/* Filesystem gunge */
+
+#define V4VFS_MAGIC 0x56345644  /* "V4VD" */
+
+static struct vfsmount *v4v_mnt = NULL;
+static const struct file_operations v4v_fops_stream;
+
+static struct dentry *v4vfs_mount_pseudo(struct file_system_type *fs_type,
+                                         int flags, const char *dev_name,
+                                         void *data)
+{
+        return mount_pseudo(fs_type, "v4v:", NULL, NULL, V4VFS_MAGIC);
+}
+
+static struct file_system_type v4v_fs = {
+        /* No owner field so the module can be unloaded */
+        .name = "v4vfs",
+        .mount = v4vfs_mount_pseudo,
+        .kill_sb = kill_litter_super
+};
+
+static int setup_fs(void)
+{
+        int ret;
+
+        ret = register_filesystem(&v4v_fs);
+        if (ret) {
+                printk(KERN_ERR
+                       "v4v: couldn't register tedious filesystem thingy\n");
+                return ret;
+        }
+
+        v4v_mnt = kern_mount(&v4v_fs);
+        if (IS_ERR(v4v_mnt)) {
+                unregister_filesystem(&v4v_fs);
+                ret = PTR_ERR(v4v_mnt);
+                printk(KERN_ERR
+                       "v4v: couldn't mount tedious filesystem thingy\n");
+                return ret;
+        }
+
+        return 0;
+}
+
+static void unsetup_fs(void)
+{
+        mntput(v4v_mnt);
+        unregister_filesystem(&v4v_fs);
+}
+
+/* Methods */
+
+static int stream_connected(struct v4v_private *p)
+{
+        switch (p->state) {
+        case V4V_STATE_ACCEPTED:
+        case V4V_STATE_CONNECTED:
+                return 1;
+        default:
+                return 0;
+        }
+}
+
+static ssize_t
+v4v_try_send_sponsor(struct v4v_private *p,
+                     v4v_addr_t *dest,
+                     const void *buf, size_t len, uint32_t protocol)
+{
+        ssize_t ret;
+        unsigned long flags;
+
+        ret = H_v4v_send(&p->r->ring->id.addr, dest, buf, len, protocol);
+        spin_lock_irqsave(&pending_xmit_lock, flags);
+        if (ret == -EAGAIN) {
+                /* Add a pending xmit */
+                xmit_queue_wakeup_sponsor(&p->r->ring->id, dest, len, 0);
+                p->send_blocked++;
+        } else {
+                /* Remove the pending xmit */
+                xmit_queue_wakeup_sponsor(&p->r->ring->id, dest, len, 1);
+                p->send_blocked = 0;
+        }
+
+        spin_unlock_irqrestore(&pending_xmit_lock, flags);
+
+        return ret;
+}
+
+static ssize_t
+v4v_try_sendv_sponsor(struct v4v_private *p,
+                      v4v_addr_t *dest,
+                      const v4v_iov_t *iovs, size_t niov, size_t len,
+                      uint32_t protocol)
+{
+        ssize_t ret;
+        unsigned long flags;
+
+        ret = H_v4v_sendv(&p->r->ring->id.addr, dest, iovs, niov, protocol);
+
+        spin_lock_irqsave(&pending_xmit_lock, flags);
+        if (ret == -EAGAIN) {
+                /* Add a pending xmit */
+                xmit_queue_wakeup_sponsor(&p->r->ring->id, dest, len, 0);
+                p->send_blocked++;
+        } else {
+                /* Remove the pending xmit */
+                xmit_queue_wakeup_sponsor(&p->r->ring->id, dest, len, 1);
+                p->send_blocked = 0;
+        }
+        spin_unlock_irqrestore(&pending_xmit_lock, flags);
+
+        return ret;
+}
+
+/*
+ * Try to send from one of the ring's privates (not its sponsor),
+ * and queue a writeq wakeup if we fail
+ */
+static ssize_t
+v4v_try_sendv_privates(struct v4v_private *p,
+                       v4v_addr_t *dest,
+                       const v4v_iov_t *iovs, size_t niov, size_t len,
+                       uint32_t protocol)
+{
+        ssize_t ret;
+        unsigned long flags;
+
+        ret = H_v4v_sendv(&p->r->ring->id.addr, dest, iovs, niov, protocol);
+
+        spin_lock_irqsave(&pending_xmit_lock, flags);
+        if (ret == -EAGAIN) {
+                /* Add a pending xmit */
+                xmit_queue_wakeup_private(&p->r->ring->id, p->conid, dest,
+                                          len, 0);
+                p->send_blocked++;
+        } else {
+                /* Remove the pending xmit */
+                xmit_queue_wakeup_private(&p->r->ring->id, p->conid, dest,
+                                          len, 1);
+                p->send_blocked = 0;
+        }
+        spin_unlock_irqrestore(&pending_xmit_lock, flags);
+
+        return ret;
+}
+
+static ssize_t
+v4v_sendto_from_sponsor(struct v4v_private *p,
+                        const void *buf, size_t len,
+                        int nonblock, v4v_addr_t *dest, uint32_t protocol)
+{
+        ssize_t ret = 0, ts_ret;
+
+        switch (p->state) {
+        case V4V_STATE_CONNECTING:
+                ret = -ENOTCONN;
+                break;
+        case V4V_STATE_DISCONNECTED:
+                ret = -EPIPE;
+                break;
+        case V4V_STATE_BOUND:
+        case V4V_STATE_CONNECTED:
+                break;
+        default:
+                ret = -EINVAL;
+        }
+
+        if (len > (p->r->ring->len - sizeof(struct v4v_ring_message_header)))
+                return -EMSGSIZE;
+
+        if (ret)
+                return ret;
+
+        if (nonblock) {
+                return H_v4v_send(&p->r->ring->id.addr, dest, buf, len,
+                                  protocol);
+        }
+        /*
+         * I happen to know that wait_event_interruptible will never
+         * evaluate the 2nd argument once it has returned true, but
+         * I shouldn't rely on that.
+         *
+         * The EAGAIN will cause Xen to send an interrupt which will,
+         * via the pending_xmit_list and writeq, wake us up.
+         */
+        ret = wait_event_interruptible(p->writeq,
+                                       ((ts_ret =
+                                         v4v_try_send_sponsor
+                                         (p, dest,
+                                          buf, len, protocol)) != -EAGAIN));
+        if (ret == 0)
+                ret = ts_ret;
+
+        return ret;
+}
+
+static ssize_t
+v4v_stream_sendvto_from_sponsor(struct v4v_private *p,
+                                const v4v_iov_t *iovs, size_t niov,
+                                size_t len, int nonblock,
+                                v4v_addr_t *dest, uint32_t protocol)
+{
+        ssize_t ret, ts_ret;
+
+        switch (p->state) {
+        case V4V_STATE_CONNECTING:
+                return -ENOTCONN;
+        case V4V_STATE_DISCONNECTED:
+                return -EPIPE;
+        case V4V_STATE_BOUND:
+        case V4V_STATE_CONNECTED:
+                break;
+        default:
+                return -EINVAL;
+        }
+
+        if (len > (p->r->ring->len - sizeof(struct v4v_ring_message_header)))
+                return -EMSGSIZE;
+
+        if (nonblock) {
+                return H_v4v_sendv(&p->r->ring->id.addr, dest, iovs, niov,
+                                   protocol);
+        }
+        /*
+         * I happen to know that wait_event_interruptible will never
+         * evaluate the 2nd argument once it has returned true, but
+         * I shouldn't rely on that.
+         *
+         * The EAGAIN will cause Xen to send an interrupt which will,
+         * via the pending_xmit_list and writeq, wake us up.
+         */
+        ret = wait_event_interruptible(p->writeq,
+                                       ((ts_ret =
+                                         v4v_try_sendv_sponsor
+                                         (p, dest,
+                                          iovs, niov, len,
+                                          protocol)) != -EAGAIN)
+                                       || !stream_connected(p));
+        if (ret == 0)
+                ret = ts_ret;
+
+        return ret;
+}
+
+static ssize_t
+v4v_stream_sendvto_from_private(struct v4v_private *p,
+                                const v4v_iov_t * iovs, size_t niov,
+                                size_t len, int nonblock,
+                                v4v_addr_t * dest, uint32_t protocol)
+{
+        ssize_t ret = 0, ts_ret;
+
+        switch (p->state) {
+        case V4V_STATE_DISCONNECTED:
+                return -EPIPE;
+        case V4V_STATE_ACCEPTED:
+                break;
+        default:
+                return -EINVAL;
+        }
+
+        if (len > (p->r->ring->len - sizeof(struct v4v_ring_message_header)))
+                return -EMSGSIZE;
+
+        if (nonblock) {
+                return H_v4v_sendv(&p->r->ring->id.addr, dest, iovs, niov,
+                                   protocol);
+        }
+        /*
+         * We rely on wait_event_interruptible() not re-evaluating its
+         * condition argument once it has returned true, so ts_ret still
+         * holds the result of the last send attempt; strictly speaking
+         * we shouldn't.
+         *
+         * On -EAGAIN, Xen sends an interrupt which, via the
+         * pending_xmit_list handling, wakes us up through writeq.
+         */
+        ret = wait_event_interruptible(p->writeq,
+                                       ((ts_ret =
+                                         v4v_try_sendv_privates
+                                         (p, dest,
+                                          iovs, niov, len,
+                                          protocol)) != -EAGAIN)
+                                       || !stream_connected(p));
+        if (ret == 0)
+                ret = ts_ret;
+
+        return ret;
+}
+
+static int v4v_get_sock_name(struct v4v_private *p, struct v4v_ring_id *id)
+{
+        int rc = 0;
+
+        read_lock(&list_lock);
+        if ((p->r) && (p->r->ring)) {
+                *id = p->r->ring->id;
+        } else {
+                rc = -EINVAL;
+        }
+        read_unlock(&list_lock);
+
+        return rc;
+}
+
+static int v4v_get_peer_name(struct v4v_private *p, v4v_addr_t * id)
+{
+        int rc = 0;
+        read_lock(&list_lock);
+
+        switch (p->state) {
+        case V4V_STATE_CONNECTING:
+        case V4V_STATE_CONNECTED:
+        case V4V_STATE_ACCEPTED:
+                *id = p->peer;
+                break;
+        default:
+                rc = -ENOTCONN;
+        }
+
+        read_unlock(&list_lock);
+        return rc;
+}
+
+static int v4v_set_ring_size(struct v4v_private *p, uint32_t ring_size)
+{
+        if (ring_size <
+            (sizeof(struct v4v_ring_message_header) + V4V_ROUNDUP(1)))
+                return -EINVAL;
+        if (ring_size != V4V_ROUNDUP(ring_size))
+                return -EINVAL;
+
+        read_lock(&list_lock);
+        if (p->state != V4V_STATE_IDLE) {
+                read_unlock(&list_lock);
+                return -EINVAL;
+        }
+
+        p->desired_ring_size = ring_size;
+        read_unlock(&list_lock);
+
+        return 0;
+}
+
+static ssize_t
+v4v_recvfrom_dgram(struct v4v_private *p, void *buf, size_t len,
+                   int nonblock, int peek, v4v_addr_t * src)
+{
+        ssize_t ret;
+        uint32_t protocol;
+        v4v_addr_t lsrc;
+
+        if (!src)
+                src = &lsrc;
+
+retry:
+        if (!nonblock) {
+                ret = wait_event_interruptible(p->readq,
+                                               (p->r->ring->rx_ptr !=
+                                                p->r->ring->tx_ptr));
+                if (ret)
+                        return ret;
+        }
+
+        read_lock(&list_lock);
+
+        /*
+         * For datagrams the interrupt handler never touches the ring,
+         * so irqs can stay enabled.
+         */
+        spin_lock(&p->r->lock);
+        if (p->r->ring->rx_ptr == p->r->ring->tx_ptr) {
+                spin_unlock(&p->r->lock);
+                if (nonblock) {
+                        ret = -EAGAIN;
+                        goto unlock;
+                }
+                read_unlock(&list_lock);
+                goto retry;
+        }
+        ret = v4v_copy_out(p->r->ring, src, &protocol, buf, len, !peek);
+        if (ret < 0) {
+                recover_ring(p->r);
+                spin_unlock(&p->r->lock);
+                read_unlock(&list_lock);
+                goto retry;
+        }
+        spin_unlock(&p->r->lock);
+
+        if (!peek)
+                v4v_null_notify();
+
+        if (protocol != V4V_PROTO_DGRAM) {
+                /* If peeking, consume the rubbish */
+                if (peek)
+                        v4v_copy_out(p->r->ring, NULL, NULL, NULL, 1, 1);
+                read_unlock(&list_lock);
+                goto retry;
+        }
+
+        if ((p->state == V4V_STATE_CONNECTED) &&
+            memcmp(src, &p->peer, sizeof(v4v_addr_t))) {
+                /* Wrong source - bin it */
+                if (peek)
+                        v4v_copy_out(p->r->ring, NULL, NULL, NULL, 1, 1);
+                read_unlock(&list_lock);
+                goto retry;
+        }
+
+unlock:
+        read_unlock(&list_lock);
+
+        return ret;
+}
+
+static ssize_t
+v4v_recv_stream(struct v4v_private *p, void *_buf, int len, int recv_flags,
+                int nonblock)
+{
+        size_t count = 0;
+        int ret = 0;
+        unsigned long flags;
+        int schedule_irq = 0;
+        uint8_t *buf = (void *)_buf;
+
+        read_lock(&list_lock);
+
+        switch (p->state) {
+        case V4V_STATE_DISCONNECTED:
+                ret = -EPIPE;
+                goto unlock;
+        case V4V_STATE_CONNECTING:
+                ret = -ENOTCONN;
+                goto unlock;
+        case V4V_STATE_CONNECTED:
+        case V4V_STATE_ACCEPTED:
+                break;
+        default:
+                ret = -EINVAL;
+                goto unlock;
+        }
+
+        do {
+                if (!nonblock) {
+                        ret = wait_event_interruptible(p->readq,
+                                                       (!list_empty(&p->pending_recv_list)
+                                                        || !stream_connected(p)));
+
+                        if (ret)
+                                break;
+                }
+
+                spin_lock_irqsave(&p->pending_recv_lock, flags);
+
+                while (!list_empty(&p->pending_recv_list) && len) {
+                        size_t to_copy;
+                        struct pending_recv *pending;
+                        int unlink = 0;
+
+                        pending = list_first_entry(&p->pending_recv_list,
+                                                   struct pending_recv, node);
+
+                        if ((pending->data_len - pending->data_ptr) > len) {
+                                to_copy = len;
+                        } else {
+                                unlink = 1;
+                                to_copy = pending->data_len - pending->data_ptr;
+                        }
+
+                        if (!access_ok(VERIFY_WRITE, buf, to_copy)) {
+                                printk(KERN_ERR
+                                       "V4V - ERROR: buf invalid _buf=%p buf=%p len=%d to_copy=%zu count=%zu\n",
+                                       _buf, buf, len, to_copy, count);
+                                spin_unlock_irqrestore(&p->pending_recv_lock, flags);
+                                read_unlock(&list_lock);
+                                return -EFAULT;
+                        }
+
+                        if (copy_to_user(buf, pending->data + pending->data_ptr,
+                                         to_copy)) {
+                                spin_unlock_irqrestore(&p->pending_recv_lock, flags);
+                                read_unlock(&list_lock);
+                                return -EFAULT;
+                        }
+
+                        if (unlink) {
+                                list_del(&pending->node);
+                                kfree(pending);
+                                atomic_dec(&p->pending_recv_count);
+                                if (p->full)
+                                        schedule_irq = 1;
+                        } else
+                                pending->data_ptr += to_copy;
+
+                        buf += to_copy;
+                        count += to_copy;
+                        len -= to_copy;
+                }
+
+                spin_unlock_irqrestore(&p->pending_recv_lock, flags);
+
+                if (p->state == V4V_STATE_DISCONNECTED) {
+                        ret = -EPIPE;
+                        break;
+                }
+
+                if (nonblock)
+                        ret = -EAGAIN;
+
+        } while ((recv_flags & MSG_WAITALL) && len);
+
+unlock:
+        read_unlock(&list_lock);
+
+        if (schedule_irq)
+                v4v_fake_irq();
+
+        return count ? count : ret;
+}
+
+static ssize_t
+v4v_send_stream(struct v4v_private *p, const void *_buf, int len, int nonblock)
+{
+        int write_lump;
+        const uint8_t *buf = _buf;
+        size_t count = 0;
+        ssize_t ret;
+        int to_send;
+
+        write_lump = DEFAULT_RING_SIZE >> 2;
+
+        switch (p->state) {
+        case V4V_STATE_DISCONNECTED:
+                return -EPIPE;
+        case V4V_STATE_CONNECTING:
+                return -ENOTCONN;
+        case V4V_STATE_CONNECTED:
+        case V4V_STATE_ACCEPTED:
+                break;
+        default:
+                return -EINVAL;
+        }
+
+        while (len) {
+                struct v4v_stream_header sh;
+                v4v_iov_t iovs[2];
+
+                to_send = len > write_lump ? write_lump : len;
+                sh.flags = 0;
+                sh.conid = p->conid;
+
+                iovs[0].iov_base = (uintptr_t)&sh;
+                iovs[0].iov_len = sizeof(sh);
+
+                iovs[1].iov_base = (uintptr_t)buf;
+                iovs[1].iov_len = to_send;
+
+                if (p->state == V4V_STATE_CONNECTED)
+                        ret = v4v_stream_sendvto_from_sponsor(
+                                        p, iovs, 2,
+                                        to_send + sizeof(struct v4v_stream_header),
+                                        nonblock, &p->peer, V4V_PROTO_STREAM);
+                else
+                        ret = v4v_stream_sendvto_from_private(
+                                        p, iovs, 2,
+                                        to_send + sizeof(struct v4v_stream_header),
+                                        nonblock, &p->peer, V4V_PROTO_STREAM);
+
+                if (ret < 0) {
+                        return count ? count : ret;
+                }
+
+                len -= to_send;
+                buf += to_send;
+                count += to_send;
+
+                if (nonblock)
+                        return count;
+        }
+
+        return count;
+}
+
+static int v4v_bind(struct v4v_private *p, struct v4v_ring_id *ring_id)
+{
+        int ret = 0;
+
+        if (ring_id->addr.domain != V4V_DOMID_NONE) {
+                return -EINVAL;
+        }
+
+        switch (p->ptype) {
+        case V4V_PTYPE_DGRAM:
+                ret = new_ring(p, ring_id);
+                if (!ret)
+                        p->r->type = V4V_RTYPE_DGRAM;
+                break;
+        case V4V_PTYPE_STREAM:
+                ret = new_ring(p, ring_id);
+                break;
+        }
+
+        return ret;
+}
+
+static int v4v_listen(struct v4v_private *p)
+{
+        if (p->ptype != V4V_PTYPE_STREAM)
+                return -EINVAL;
+
+        if (p->state != V4V_STATE_BOUND) {
+                return -EINVAL;
+        }
+
+        p->r->type = V4V_RTYPE_LISTENER;
+        p->state = V4V_STATE_LISTENING;
+
+        return 0;
+}
+
+static int v4v_connect(struct v4v_private *p, v4v_addr_t *peer, int nonblock)
+{
+        struct v4v_stream_header sh;
+        int ret = -EINVAL;
+
+        if (p->ptype == V4V_PTYPE_DGRAM) {
+                switch (p->state) {
+                case V4V_STATE_BOUND:
+                case V4V_STATE_CONNECTED:
+                        if (peer) {
+                                p->state = V4V_STATE_CONNECTED;
+                                memcpy(&p->peer, peer, sizeof(v4v_addr_t));
+                        } else {
+                                p->state = V4V_STATE_BOUND;
+                        }
+                        return 0;
+                default:
+                        return -EINVAL;
+                }
+        }
+        if (p->ptype != V4V_PTYPE_STREAM) {
+                return -EINVAL;
+        }
+
+        /* Irritatingly, we need to be restartable */
+        switch (p->state) {
+        case V4V_STATE_BOUND:
+                p->r->type = V4V_RTYPE_CONNECTOR;
+                p->state = V4V_STATE_CONNECTING;
+                p->conid = random32();
+                p->peer = *peer;
+
+                sh.flags = V4V_SHF_SYN;
+                sh.conid = p->conid;
+
+                ret = xmit_queue_inline(&p->r->ring->id, &p->peer, &sh,
+                                        sizeof(sh), V4V_PROTO_STREAM);
+                if (ret == sizeof(sh))
+                        ret = 0;
+
+                if (ret && (ret != -EAGAIN)) {
+                        p->state = V4V_STATE_BOUND;
+                        p->r->type = V4V_RTYPE_DGRAM;
+                        return ret;
+                }
+
+                break;
+        case V4V_STATE_CONNECTED:
+                if (memcmp(peer, &p->peer, sizeof(v4v_addr_t))) {
+                        return -EINVAL;
+                } else {
+                        return 0;
+                }
+        case V4V_STATE_CONNECTING:
+                if (memcmp(peer, &p->peer, sizeof(v4v_addr_t))) {
+                        return -EINVAL;
+                }
+                break;
+        default:
+                return -EINVAL;
+        }
+
+        if (nonblock) {
+                return -EINPROGRESS;
+        }
+
+        while (p->state != V4V_STATE_CONNECTED) {
+                ret = wait_event_interruptible(p->writeq,
+                                               (p->state !=
+                                                V4V_STATE_CONNECTING));
+                if (ret)
+                        return ret;
+
+                if (p->state == V4V_STATE_DISCONNECTED) {
+                        p->state = V4V_STATE_BOUND;
+                        p->r->type = V4V_RTYPE_DGRAM;
+                        ret = -ECONNREFUSED;
+                        break;
+                }
+        }
+
+        return ret;
+}
+
+static int allocate_fd_with_private(void *private)
+{
+        int fd;
+        struct file *f;
+        struct qstr name = { .name = "" };
+        struct path path;
+        struct inode *ind;
+
+        fd = get_unused_fd();
+        if (fd < 0)
+                return fd;
+
+        path.dentry = d_alloc_pseudo(v4v_mnt->mnt_sb, &name);
+        if (unlikely(!path.dentry)) {
+                put_unused_fd(fd);
+                return -ENOMEM;
+        }
+        ind = new_inode(v4v_mnt->mnt_sb);
+        if (!ind) {
+                dput(path.dentry);
+                put_unused_fd(fd);
+                return -ENOMEM;
+        }
+        ind->i_ino = get_next_ino();
+        ind->i_fop = v4v_mnt->mnt_root->d_inode->i_fop;
+        ind->i_state = v4v_mnt->mnt_root->d_inode->i_state;
+        ind->i_mode = v4v_mnt->mnt_root->d_inode->i_mode;
+        ind->i_uid = current_fsuid();
+        ind->i_gid = current_fsgid();
+        d_instantiate(path.dentry, ind);
+
+        path.mnt = mntget(v4v_mnt);
+
+        f = alloc_file(&path, FMODE_READ | FMODE_WRITE, &v4v_fops_stream);
+        if (!f) {
+                path_put(&path);
+                put_unused_fd(fd);
+                return -ENFILE;
+        }
+
+        f->private_data = private;
+        fd_install(fd, f);
+
+        return fd;
+}
+
+static int
+v4v_accept(struct v4v_private *p, struct v4v_addr *peer, int nonblock)
+{
+        int fd;
+        int ret = 0;
+        struct v4v_private *a = NULL;
+        struct pending_recv *r = NULL;
+        unsigned long flags;
+        struct v4v_stream_header sh;
+
+        if (p->ptype != V4V_PTYPE_STREAM)
+                return -ENOTTY;
+
+        if (p->state != V4V_STATE_LISTENING) {
+                return -EINVAL;
+        }
+
+        /* FIXME: leak! */
+        for (;;) {
+                ret = wait_event_interruptible(p->readq,
+                                               !list_empty(&p->pending_recv_list)
+                                               || nonblock);
+                if (ret)
+                        return ret;
+
+                /* Write lock implicitly has pending_recv_lock */
+                write_lock_irqsave(&list_lock, flags);
+
+                if (!list_empty(&p->pending_recv_list)) {
+                        r = list_first_entry(&p->pending_recv_list,
+                                             struct pending_recv, node);
+
+                        list_del(&r->node);
+                        atomic_dec(&p->pending_recv_count);
+
+                        if ((!r->data_len) && (r->sh.flags & V4V_SHF_SYN))
+                                break;
+
+                        kfree(r);
+                }
+
+                write_unlock_irqrestore(&list_lock, flags);
+                if (nonblock)
+                        return -EAGAIN;
+        }
+        write_unlock_irqrestore(&list_lock, flags);
+
+        a = kzalloc(sizeof(struct v4v_private), GFP_KERNEL);
+        if (!a) {
+                ret = -ENOMEM;
+                goto release;
+        }
+
+        a->state = V4V_STATE_ACCEPTED;
+        a->ptype = V4V_PTYPE_STREAM;
+        a->r = p->r;
+        if (!get_ring(a->r)) {
+                a->r = NULL;
+                ret = -EINVAL;
+                goto release;
+        }
+
+        init_waitqueue_head(&a->readq);
+        init_waitqueue_head(&a->writeq);
+        spin_lock_init(&a->pending_recv_lock);
+        INIT_LIST_HEAD(&a->pending_recv_list);
+        atomic_set(&a->pending_recv_count, 0);
+
+        a->send_blocked = 0;
+        a->peer = r->from;
+        a->conid = r->sh.conid;
+
+        if (peer)
+                *peer = r->from;
+
+        fd = allocate_fd_with_private(a);
+        if (fd < 0) {
+                ret = fd;
+                goto release;
+        }
+
+        write_lock_irqsave(&list_lock, flags);
+        list_add(&a->node, &a->r->privates);
+        write_unlock_irqrestore(&list_lock, flags);
+
+        /* Ship the ACK */
+        sh.conid = a->conid;
+        sh.flags = V4V_SHF_ACK;
+
+        xmit_queue_inline(&a->r->ring->id, &a->peer, &sh,
+                          sizeof(sh), V4V_PROTO_STREAM);
+        kfree(r);
+
+        return fd;
+
+ release:
+        kfree(r);
+        if (a) {
+                write_lock_irqsave(&list_lock, flags);
+                if (a->r)
+                        put_ring(a->r);
+                write_unlock_irqrestore(&list_lock, flags);
+                kfree(a);
+        }
+        return ret;
+}
+
+ssize_t
+v4v_sendto(struct v4v_private *p, const void *buf, size_t len, int flags,
+           v4v_addr_t *addr, int nonblock)
+{
+        ssize_t rc;
+
+        if (!access_ok(VERIFY_READ, buf, len))
+                return -EFAULT;
+        if (addr && !access_ok(VERIFY_READ, addr, sizeof(v4v_addr_t)))
+                return -EFAULT;
+
+        if (flags & MSG_DONTWAIT)
+                nonblock++;
+
+        switch (p->ptype) {
+        case V4V_PTYPE_DGRAM:
+                switch (p->state) {
+                case V4V_STATE_BOUND:
+                        if (!addr)
+                                return -ENOTCONN;
+                        rc = v4v_sendto_from_sponsor(p, buf, len, nonblock,
+                                                     addr, V4V_PROTO_DGRAM);
+                        break;
+
+                case V4V_STATE_CONNECTED:
+                        if (addr)
+                                return -EISCONN;
+
+                        rc = v4v_sendto_from_sponsor(p, buf, len, nonblock,
+                                                     &p->peer, V4V_PROTO_DGRAM);
+                        break;
+
+                default:
+                        return -EINVAL;
+                }
+                break;
+        case V4V_PTYPE_STREAM:
+                if (addr)
+                        return -EISCONN;
+                switch (p->state) {
+                case V4V_STATE_CONNECTING:
+                case V4V_STATE_BOUND:
+                        return -ENOTCONN;
+                case V4V_STATE_CONNECTED:
+                case V4V_STATE_ACCEPTED:
+                        rc = v4v_send_stream(p, buf, len, nonblock);
+                        break;
+                case V4V_STATE_DISCONNECTED:
+                        rc = -EPIPE;
+                        break;
+                default:
+                        return -EINVAL;
+                }
+                break;
+        default:
+                return -ENOTTY;
+        }
+
+        if ((rc == -EPIPE) && !(flags & MSG_NOSIGNAL))
+                send_sig(SIGPIPE, current, 0);
+
+        return rc;
+}
+
+ssize_t
+v4v_recvfrom(struct v4v_private *p, void *buf, size_t len, int flags,
+             v4v_addr_t *addr, int nonblock)
+{
+        int peek = 0;
+        ssize_t rc = 0;
+
+        if (!access_ok(VERIFY_WRITE, buf, len))
+                return -EFAULT;
+        if (addr && !access_ok(VERIFY_WRITE, addr, sizeof(v4v_addr_t)))
+                return -EFAULT;
+
+        if (flags & MSG_DONTWAIT)
+                nonblock++;
+        if (flags & MSG_PEEK)
+                peek++;
+
+        switch (p->ptype) {
+        case V4V_PTYPE_DGRAM:
+                rc = v4v_recvfrom_dgram(p, buf, len, nonblock, peek, addr);
+                break;
+        case V4V_PTYPE_STREAM:
+                if (peek)
+                        return -EINVAL;
+
+                switch (p->state) {
+                case V4V_STATE_BOUND:
+                        return -ENOTCONN;
+                case V4V_STATE_CONNECTED:
+                case V4V_STATE_ACCEPTED:
+                        if (addr)
+                                *addr = p->peer;
+                        rc = v4v_recv_stream(p, buf, len, flags, nonblock);
+                        break;
+                case V4V_STATE_DISCONNECTED:
+                        rc = 0;
+                        break;
+                default:
+                        rc = -EINVAL;
+                }
+        }
+
+        if ((rc > (ssize_t) len) && !(flags & MSG_TRUNC))
+                rc = len;
+
+        return rc;
+}
+
+/* fops */
+
+static int v4v_open_dgram(struct inode *inode, struct file *f)
+{
+        struct v4v_private *p;
+
+        p = kzalloc(sizeof(struct v4v_private), GFP_KERNEL);
+        if (!p)
+                return -ENOMEM;
+
+        p->state = V4V_STATE_IDLE;
+        p->desired_ring_size = DEFAULT_RING_SIZE;
+        p->ptype = V4V_PTYPE_DGRAM;
+
+        init_waitqueue_head(&p->readq);
+        init_waitqueue_head(&p->writeq);
+
+        spin_lock_init(&p->pending_recv_lock);
+        INIT_LIST_HEAD(&p->pending_recv_list);
+        atomic_set(&p->pending_recv_count, 0);
+
+        f->private_data = p;
+        return 0;
+}
+
+static int v4v_open_stream(struct inode *inode, struct file *f)
+{
+        struct v4v_private *p;
+
+        p = kzalloc(sizeof(struct v4v_private), GFP_KERNEL);
+        if (!p)
+                return -ENOMEM;
+
+        p->state = V4V_STATE_IDLE;
+        p->desired_ring_size = DEFAULT_RING_SIZE;
+        p->ptype = V4V_PTYPE_STREAM;
+
+        init_waitqueue_head(&p->readq);
+        init_waitqueue_head(&p->writeq);
+
+        spin_lock_init(&p->pending_recv_lock);
+        INIT_LIST_HEAD(&p->pending_recv_list);
+        atomic_set(&p->pending_recv_count, 0);
+
+        f->private_data = p;
+        return 0;
+}
+
+static int v4v_release(struct inode *inode, struct file *f)
+{
+        struct v4v_private *p = f->private_data;
+        unsigned long flags;
+        struct pending_recv *pending;
+
+        if (p->ptype == V4V_PTYPE_STREAM) {
+                switch (p->state) {
+                case V4V_STATE_CONNECTED:
+                case V4V_STATE_CONNECTING:
+                case V4V_STATE_ACCEPTED:
+                        xmit_queue_rst_to(&p->r->ring->id, p->conid, &p->peer);
+                        break;
+                default:
+                        break;
+                }
+        }
+
+        write_lock_irqsave(&list_lock, flags);
+        if (!p->r) {
+                write_unlock_irqrestore(&list_lock, flags);
+                goto release;
+        }
+
+        if (p != p->r->sponsor) {
+                put_ring(p->r);
+                list_del(&p->node);
+                write_unlock_irqrestore(&list_lock, flags);
+                goto release;
+        }
+
+        p->r->sponsor = NULL;
+        put_ring(p->r);
+        write_unlock_irqrestore(&list_lock, flags);
+
+        while (!list_empty(&p->pending_recv_list)) {
+                pending = list_first_entry(&p->pending_recv_list,
+                                           struct pending_recv, node);
+
+                list_del(&pending->node);
+                kfree(pending);
+                atomic_dec(&p->pending_recv_count);
+        }
+
+ release:
+        kfree(p);
+
+        return 0;
+}
+
+static ssize_t
+v4v_write(struct file *f, const char __user *buf, size_t count, loff_t *ppos)
+{
+        struct v4v_private *p = f->private_data;
+        int nonblock = f->f_flags & O_NONBLOCK;
+
+        return v4v_sendto(p, buf, count, 0, NULL, nonblock);
+}
+
+static ssize_t
+v4v_read(struct file *f, char __user * buf, size_t count, loff_t * ppos)
+{
+        struct v4v_private *p = f->private_data;
+        int nonblock = f->f_flags & O_NONBLOCK;
+
+        return v4v_recvfrom(p, (void *)buf, count, 0, NULL, nonblock);
+}
+
+static long v4v_ioctl(struct file *f, unsigned int cmd, unsigned long arg)
+{
+        int rc = -ENOTTY;
+        int nonblock = f->f_flags & O_NONBLOCK;
+        struct v4v_private *p = f->private_data;
+
+        if (_IOC_TYPE(cmd) != V4V_TYPE)
+                return rc;
+
+        switch (cmd) {
+        case V4VIOCSETRINGSIZE:
+                if (!access_ok(VERIFY_READ, arg, sizeof(uint32_t)))
+                        return -EFAULT;
+                rc = v4v_set_ring_size(p, *(uint32_t *) arg);
+                break;
+        case V4VIOCBIND:
+                if (!access_ok(VERIFY_READ, arg, sizeof(struct v4v_ring_id)))
+                        return -EFAULT;
+                rc = v4v_bind(p, (struct v4v_ring_id *)arg);
+                break;
+        case V4VIOCGETSOCKNAME:
+                if (!access_ok(VERIFY_WRITE, arg, sizeof(struct v4v_ring_id)))
+                        return -EFAULT;
+                rc = v4v_get_sock_name(p, (struct v4v_ring_id *)arg);
+                break;
+        case V4VIOCGETPEERNAME:
+                if (!access_ok(VERIFY_WRITE, arg, sizeof(v4v_addr_t)))
+                        return -EFAULT;
+                rc = v4v_get_peer_name(p, (v4v_addr_t *) arg);
+                break;
+        case V4VIOCCONNECT:
+                if (!access_ok(VERIFY_READ, arg, sizeof(v4v_addr_t)))
+                        return -EFAULT;
+                /* Bind if not done */
+                if (p->state == V4V_STATE_IDLE) {
+                        struct v4v_ring_id id;
+                        memset(&id, 0, sizeof(id));
+                        id.partner = V4V_DOMID_NONE;
+                        id.addr.domain = V4V_DOMID_NONE;
+                        id.addr.port = 0;
+                        rc = v4v_bind(p, &id);
+                        if (rc)
+                                break;
+                }
+                rc = v4v_connect(p, (v4v_addr_t *) arg, nonblock);
+                break;
+        case V4VIOCGETCONNECTERR:
+                {
+                        unsigned long flags;
+                        if (!access_ok(VERIFY_WRITE, arg, sizeof(int)))
+                                return -EFAULT;
+
+                        spin_lock_irqsave(&p->pending_recv_lock, flags);
+                        *(int *)arg = p->pending_error;
+                        p->pending_error = 0;
+                        spin_unlock_irqrestore(&p->pending_recv_lock, flags);
+                        rc = 0;
+                }
+                break;
+        case V4VIOCLISTEN:
+                rc = v4v_listen(p);
+                break;
+        case V4VIOCACCEPT:
+                if (!access_ok(VERIFY_WRITE, arg, sizeof(v4v_addr_t)))
+                        return -EFAULT;
+                rc = v4v_accept(p, (v4v_addr_t *) arg, nonblock);
+                break;
+        case V4VIOCSEND:
+                if (!access_ok(VERIFY_READ, arg, sizeof(struct v4v_dev)))
+                        return -EFAULT;
+                {
+                        struct v4v_dev a = *(struct v4v_dev *)arg;
+
+                        rc = v4v_sendto(p, a.buf, a.len, a.flags, a.addr,
+                                        nonblock);
+                }
+                break;
+        case V4VIOCRECV:
+                if (!access_ok(VERIFY_READ, arg, sizeof(struct v4v_dev)))
+                        return -EFAULT;
+                {
+                        struct v4v_dev a = *(struct v4v_dev *)arg;
+
+                        rc = v4v_recvfrom(p, a.buf, a.len, a.flags, a.addr,
+                                          nonblock);
+                }
+                break;
+        case V4VIOCVIPTABLESADD:
+                if (!access_ok(VERIFY_READ, arg,
+                               sizeof(struct v4v_viptables_rule_pos)))
+                        return -EFAULT;
+                {
+                        struct v4v_viptables_rule_pos *rule =
+                            (struct v4v_viptables_rule_pos *)arg;
+                        v4v_viptables_add(p, rule->rule, rule->position);
+                        rc = 0;
+                }
+                break;
+        case V4VIOCVIPTABLESDEL:
+                if (!access_ok(VERIFY_READ, arg,
+                               sizeof(struct v4v_viptables_rule_pos)))
+                        return -EFAULT;
+                {
+                        struct v4v_viptables_rule_pos *rule =
+                            (struct v4v_viptables_rule_pos *)arg;
+                        v4v_viptables_del(p, rule->rule, rule->position);
+                        rc = 0;
+                }
+                break;
+        case V4VIOCVIPTABLESLIST:
+                if (!access_ok(VERIFY_READ, arg,
+                               sizeof(struct v4v_viptables_list)))
+                        return -EFAULT;
+                {
+                        struct v4v_viptables_list *list =
+                            (struct v4v_viptables_list *)arg;
+                        rc = v4v_viptables_list(p, list);
+                }
+                break;
+        default:
+                printk(KERN_ERR "v4v: unknown ioctl, cmd:0x%x nr:%d size:0x%x\n",
+                       cmd, _IOC_NR(cmd), _IOC_SIZE(cmd));
+        }
+
+        return rc;
+}
+
+static unsigned int v4v_poll(struct file *f, poll_table * pt)
+{
+        unsigned int mask = 0;
+        struct v4v_private *p = f->private_data;
+
+        read_lock(&list_lock);
+
+        switch (p->ptype) {
+        case V4V_PTYPE_DGRAM:
+                switch (p->state) {
+                case V4V_STATE_CONNECTED:
+                case V4V_STATE_BOUND:
+                        poll_wait(f, &p->readq, pt);
+                        mask |=3D POLLOUT | POLLWRNORM;
+                        if (p->r->ring->tx_ptr !=3D p->r->ring->rx_ptr)
+                                mask |=3D POLLIN | POLLRDNORM;
+                        break;
+                default:
+                        break;
+                }
+                break;
+        case V4V_PTYPE_STREAM:
+                switch (p->state) {
+                case V4V_STATE_BOUND:
+                        break;
+                case V4V_STATE_LISTENING:
+                        poll_wait(f, &p->readq, pt);
+                        if (!list_empty(&p->pending_recv_list))
+                                mask |=3D POLLIN | POLLRDNORM;
+                        break;
+                case V4V_STATE_ACCEPTED:
+                case V4V_STATE_CONNECTED:
+                        poll_wait(f, &p->readq, pt);
+                        poll_wait(f, &p->writeq, pt);
+                        if (!p->send_blocked)
+                                mask |=3D POLLOUT | POLLWRNORM;
+                        if (!list_empty(&p->pending_recv_list))
+                                mask |=3D POLLIN | POLLRDNORM;
+                        break;
+                case V4V_STATE_CONNECTING:
+                        poll_wait(f, &p->writeq, pt);
+                        break;
+                case V4V_STATE_DISCONNECTED:
+                        mask |=3D POLLOUT | POLLWRNORM;
+                        mask |=3D POLLIN | POLLRDNORM;
+                        break;
+                case V4V_STATE_IDLE:
+                        break;
+                }
+                break;
+        }
+
+        read_unlock(&list_lock);
+        return mask;
+}
+
+static const struct file_operations v4v_fops_stream = {
+        .owner = THIS_MODULE,
+        .write = v4v_write,
+        .read = v4v_read,
+        .unlocked_ioctl = v4v_ioctl,
+        .open = v4v_open_stream,
+        .release = v4v_release,
+        .poll = v4v_poll,
+};
+
+static const struct file_operations v4v_fops_dgram = {
+        .owner = THIS_MODULE,
+        .write = v4v_write,
+        .read = v4v_read,
+        .unlocked_ioctl = v4v_ioctl,
+        .open = v4v_open_dgram,
+        .release = v4v_release,
+        .poll = v4v_poll,
+};
+
+/* Xen VIRQ */
+static int v4v_irq = -1;
+
+static void unbind_virq(void)
+{
+        unbind_from_irqhandler(v4v_irq, NULL);
+        v4v_irq = -1;
+}
+
+static int bind_evtchn(void)
+{
+        v4v_info_t info;
+        int result;
+
+        v4v_info(&info);
+        if (info.ring_magic != V4V_RING_MAGIC)
+                return 1;
+
+        result =
+                bind_interdomain_evtchn_to_irqhandler(
+                        0, info.evtchn,
+                        v4v_interrupt, IRQF_SAMPLE_RANDOM, "v4v", NULL);
+
+        if (result < 0)
+                return result; /* v4v_irq is still -1, nothing to unbind */
+
+        v4v_irq = result;
+
+        return 0;
+}
+
+/* V4V Device */
+
+static struct miscdevice v4v_miscdev_dgram = {
+        .minor = MISC_DYNAMIC_MINOR,
+        .name = "v4v_dgram",
+        .fops = &v4v_fops_dgram,
+};
+
+static struct miscdevice v4v_miscdev_stream = {
+        .minor = MISC_DYNAMIC_MINOR,
+        .name = "v4v_stream",
+        .fops = &v4v_fops_stream,
+};
+
+static int v4v_suspend(struct platform_device *dev, pm_message_t state)
+{
+        unbind_virq();
+        return 0;
+}
+
+static int v4v_resume(struct platform_device *dev)
+{
+        struct ring *r;
+
+        read_lock(&list_lock);
+        list_for_each_entry(r, &ring_list, node) {
+                refresh_pfn_list(r);
+                if (register_ring(r)) {
+                        printk(KERN_ERR
+                               "Failed to re-register a v4v ring on resume, port=0x%08x\n",
+                               r->ring->id.addr.port);
+                }
+        }
+        read_unlock(&list_lock);
+
+        if (bind_evtchn()) {
+                printk(KERN_ERR "v4v_resume: failed to bind v4v evtchn\n");
+                return -ENODEV;
+        }
+
+        return 0;
+}
+
+static void v4v_shutdown(struct platform_device *dev)
+{
+}
+
+static int __devinit v4v_probe(struct platform_device *dev)
+{
+        int err = 0;
+        int ret;
+
+        ret = setup_fs();
+        if (ret)
+                return ret;
+
+        INIT_LIST_HEAD(&ring_list);
+        rwlock_init(&list_lock);
+        INIT_LIST_HEAD(&pending_xmit_list);
+        spin_lock_init(&pending_xmit_lock);
+        spin_lock_init(&interrupt_lock);
+        atomic_set(&pending_xmit_count, 0);
+
+        if (bind_evtchn()) {
+                printk(KERN_ERR "failed to bind v4v evtchn\n");
+                unsetup_fs();
+                return -ENODEV;
+        }
+
+        err = misc_register(&v4v_miscdev_dgram);
+        if (err != 0) {
+                printk(KERN_ERR "Could not register /dev/v4v_dgram\n");
+                unbind_virq();
+                unsetup_fs();
+                return err;
+        }
+
+        err = misc_register(&v4v_miscdev_stream);
+        if (err != 0) {
+                printk(KERN_ERR "Could not register /dev/v4v_stream\n");
+                misc_deregister(&v4v_miscdev_dgram);
+                unbind_virq();
+                unsetup_fs();
+                return err;
+        }
+
+        printk(KERN_INFO "Xen V4V device installed.\n");
+        return 0;
+}
+
+/* Platform Gunge */
+
+static int __devexit v4v_remove(struct platform_device *dev)
+{
+        unbind_virq();
+        misc_deregister(&v4v_miscdev_dgram);
+        misc_deregister(&v4v_miscdev_stream);
+        unsetup_fs();
+        return 0;
+}
+
+static struct platform_driver v4v_driver = {
+        .driver = {
+                   .name = "v4v",
+                   .owner = THIS_MODULE,
+                   },
+        .probe = v4v_probe,
+        .remove = __devexit_p(v4v_remove),
+        .shutdown = v4v_shutdown,
+        .suspend = v4v_suspend,
+        .resume = v4v_resume,
+};
+
+static struct platform_device *v4v_platform_device;
+
+static int __init v4v_init(void)
+{
+        int error;
+
+        if (!xen_domain()) {
+                printk(KERN_ERR "v4v only works under Xen\n");
+                return -ENODEV;
+        }
+
+        error = platform_driver_register(&v4v_driver);
+        if (error)
+                return error;
+
+        v4v_platform_device = platform_device_alloc("v4v", -1);
+        if (!v4v_platform_device) {
+                platform_driver_unregister(&v4v_driver);
+                return -ENOMEM;
+        }
+
+        error = platform_device_add(v4v_platform_device);
+        if (error) {
+                platform_device_put(v4v_platform_device);
+                platform_driver_unregister(&v4v_driver);
+                return error;
+        }
+
+        return 0;
+}
+
+static void __exit v4v_cleanup(void)
+{
+        platform_device_unregister(v4v_platform_device);
+        platform_driver_unregister(&v4v_driver);
+}
+
+module_init(v4v_init);
+module_exit(v4v_cleanup);
+MODULE_LICENSE("GPL");
new file mode 100644
index 0000000..91c00b6
--- /dev/null
+++ b/drivers/xen/v4v_utils.h
@@ -0,0 +1,278 @@
+/**************************************************************************
+ * V4V
+ *
+ * Version 2 of v2v (Virtual-to-Virtual)
+ *
+ * Copyright (c) 2010, Citrix Systems
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
+ */
+
+#ifndef __V4V_UTILS_H__
+# define __V4V_UTILS_H__
+
+/* Compiler specific hacks */
+#if defined(__GNUC__)
+# define V4V_UNUSED __attribute__ ((unused))
+# ifndef __STRICT_ANSI__
+#  define V4V_INLINE inline
+# else
+#  define V4V_INLINE
+# endif
+#else /* !__GNUC__ */
+# define V4V_UNUSED
+# define V4V_INLINE
+#endif
+
+
+/*
+ * Utility functions
+ */
+static V4V_INLINE uint32_t
+v4v_ring_bytes_to_read (volatile struct v4v_ring *r)
+{
+        int32_t ret;
+        ret = r->tx_ptr - r->rx_ptr;
+        if (ret >= 0)
+                return ret;
+        return (uint32_t) (r->len + ret);
+}
+
+
+/*
+ * Copy at most t bytes of the next message in the ring into the buffer
+ * at _buf, setting from and protocol if they are not NULL; returns
+ * the actual length of the message, or -1 if there is nothing to read.
+ */
+V4V_UNUSED static V4V_INLINE ssize_t
+v4v_copy_out (struct v4v_ring *r, struct v4v_addr *from, uint32_t * protocol,
+              void *_buf, size_t t, int consume)
+{
+        volatile struct v4v_ring_message_header *mh;
+        /* unnecessary cast from void * required by MSVC compiler */
+        uint8_t *buf = (uint8_t *) _buf;
+        uint32_t btr = v4v_ring_bytes_to_read (r);
+        uint32_t rxp = r->rx_ptr;
+        uint32_t bte;
+        uint32_t len;
+        ssize_t ret;
+
+        if (btr < sizeof (*mh))
+                return -1;
+
+        /*
+         * Because the message_header is 128 bits long and the ring is 128 bit
+         * aligned, we're guaranteed never to wrap
+         */
+        mh = (volatile struct v4v_ring_message_header *) &r->ring[r->rx_ptr];
+
+        len = mh->len;
+
+        if (btr < len)
+        {
+                return -1;
+        }
+
+#if defined(__GNUC__)
+        if (from)
+                *from = mh->source;
+#else
+        /* MSVC can't do the above */
+        if (from)
+                memcpy((void *) from, (void *) &(mh->source), sizeof(struct v4v_addr));
+#endif
+
+        if (protocol)
+                *protocol = mh->protocol;
+
+        rxp += sizeof (*mh);
+        if (rxp == r->len)
+                rxp = 0;
+        len -= sizeof (*mh);
+        ret = len;
+
+        bte = r->len - rxp;
+
+        if (bte < len)
+        {
+                if (t < bte)
+                {
+                        if (buf)
+                        {
+                                memcpy (buf, (void *) &r->ring[rxp], t);
+                                buf += t;
+                        }
+
+                        rxp = 0;
+                        len -= bte;
+                        t = 0;
+                }
+                else
+                {
+                        if (buf)
+                        {
+                                memcpy (buf, (void *) &r->ring[rxp], bte);
+                                buf += bte;
+                        }
+                        rxp = 0;
+                        len -= bte;
+                        t -= bte;
+                }
+        }
+
+        if (buf && t)
+                memcpy (buf, (void *) &r->ring[rxp], (t < len) ? t : len);
+
+        rxp += V4V_ROUNDUP (len);
+        if (rxp == r->len)
+                rxp = 0;
+
+        mb ();
+
+        if (consume)
+                r->rx_ptr = rxp;
+
+        return ret;
+}
+
+static V4V_INLINE void
+v4v_memcpy_skip (void *_dst, const void *_src, size_t len, size_t *skip)
+{
+        const uint8_t *src = (const uint8_t *) _src;
+        uint8_t *dst = (uint8_t *) _dst;
+
+        if (!*skip)
+        {
+                memcpy (dst, src, len);
+                return;
+        }
+
+        if (*skip >= len)
+        {
+                *skip -= len;
+                return;
+        }
+
+        src += *skip;
+        dst += *skip;
+        len -= *skip;
+        *skip = 0;
+
+        memcpy (dst, src, len);
+}
+
+/*
+ * Copy at most t bytes of the next message in the ring into the buffer
+ * at _buf, skipping skip bytes, setting from and protocol if they are not
+ * NULL; returns the actual length of the message, or -1 if there is
+ * nothing to read.
+ */
+static ssize_t
+v4v_copy_out_offset(struct v4v_ring *r, struct v4v_addr *from,
+                    uint32_t * protocol, void *_buf, size_t t, int consume,
+                    size_t skip) V4V_UNUSED;
+
+V4V_INLINE static ssize_t
+v4v_copy_out_offset(struct v4v_ring *r, struct v4v_addr *from,
+                    uint32_t * protocol, void *_buf, size_t t, int consume,
+                    size_t skip)
+{
+        volatile struct v4v_ring_message_header *mh;
+        /* unnecessary cast from void * required by MSVC compiler */
+        uint8_t *buf = (uint8_t *) _buf;
+        uint32_t btr = v4v_ring_bytes_to_read (r);
+        uint32_t rxp = r->rx_ptr;
+        uint32_t bte;
+        uint32_t len;
+        ssize_t ret;
+
+        buf -= skip;
+
+        if (btr < sizeof (*mh))
+                return -1;
+
+        /*
+         * Because the message_header is 128 bits long and the ring is 128 bit
+         * aligned, we're guaranteed never to wrap
+         */
+        mh = (volatile struct v4v_ring_message_header *)&r->ring[r->rx_ptr];
+
+        len = mh->len;
+        if (btr < len)
+                return -1;
+
+#if defined(__GNUC__)
+        if (from)
+                *from = mh->source;
+#else
+        /* MSVC can't do the above */
+        if (from)
+                memcpy((void *)from, (void *)&(mh->source), sizeof(struct v4v_addr));
+#endif
+
+        if (protocol)
+                *protocol = mh->protocol;
+
+        rxp += sizeof (*mh);
+        if (rxp == r->len)
+                rxp = 0;
+        len -= sizeof (*mh);
+        ret = len;
+
+        bte = r->len - rxp;
+
+        if (bte < len)
+        {
+                if (t < bte)
+                {
+                        if (buf)
+                        {
+                                v4v_memcpy_skip (buf, (void *) &r->ring[rxp], t, &skip);
+                                buf += t;
+                        }
+
+                        rxp = 0;
+                        len -= bte;
+                        t = 0;
+                }
+                else
+                {
+                        if (buf)
+                        {
+                                v4v_memcpy_skip (buf, (void *) &r->ring[rxp], bte,
+                                                &skip);
+                                buf += bte;
+                        }
+                        rxp = 0;
+                        len -= bte;
+                        t -= bte;
+                }
+        }
+
+        if (buf && t)
+                v4v_memcpy_skip (buf, (void *) &r->ring[rxp], (t < len) ? t : len,
+                                &skip);
+
+        rxp += V4V_ROUNDUP (len);
+        if (rxp == r->len)
+                rxp = 0;
+
+        mb ();
+
+        if (consume)
+                r->rx_ptr = rxp;
+
+        return ret;
+}
+
+#endif /* !__V4V_UTILS_H__ */
diff --git a/include/xen/interface/v4v.h b/include/xen/interface/v4v.h
new file mode 100644
index 0000000..36ff95c
--- /dev/null
+++ b/include/xen/interface/v4v.h
@@ -0,0 +1,299 @@
+/**************************************************************************
+ * V4V
+ *
+ * Version 2 of v2v (Virtual-to-Virtual)
+ *
+ * Copyright (c) 2010, Citrix Systems
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
+ */
+ */
+
+#ifndef __XEN_PUBLIC_V4V_H__
+#define __XEN_PUBLIC_V4V_H__
+
+/*
+ * Structure definitions
+ */
+
+#define V4V_RING_MAGIC          0xA822F72BB0B9D8CC
+#define V4V_RING_DATA_MAGIC	0x45FE852220B801E4
+
+#define V4V_PROTO_DGRAM		0x3c2c1db8
+#define V4V_PROTO_STREAM 	0x70f6a8e5
+
+#define V4V_DOMID_INVALID       (0x7FFFU)
+#define V4V_DOMID_NONE          V4V_DOMID_INVALID
+#define V4V_DOMID_ANY           V4V_DOMID_INVALID
+#define V4V_PORT_NONE           0
+
+typedef struct v4v_iov
+{
+    uint64_t iov_base;
+    uint64_t iov_len;
+} v4v_iov_t;
+
+typedef struct v4v_addr
+{
+    uint32_t port;
+    domid_t domain;
+    uint16_t pad;
+} v4v_addr_t;
+
+typedef struct v4v_ring_id
+{
+    v4v_addr_t addr;
+    domid_t partner;
+    uint16_t pad;
+} v4v_ring_id_t;
+
+typedef uint64_t v4v_pfn_t;
+
+typedef struct
+{
+    v4v_addr_t src;
+    v4v_addr_t dst;
+} v4v_send_addr_t;
+
+/*
+ * v4v_ring
+ * id:
+ * xen only looks at this during register/unregister
+ * and will fill in id.addr.domain
+ *
+ * rx_ptr: rx pointer, modified by domain
+ * tx_ptr: tx pointer, modified by xen
+ *
+ */
+struct v4v_ring
+{
+    uint64_t magic;
+    v4v_ring_id_t id;
+    uint32_t len;
+    uint32_t rx_ptr;
+    uint32_t tx_ptr;
+    uint8_t reserved[32];
+    uint8_t ring[0];
+};
+typedef struct v4v_ring v4v_ring_t;
+
+#define V4V_RING_DATA_F_EMPTY       (1U << 0) /* Ring is empty */
+#define V4V_RING_DATA_F_EXISTS      (1U << 1) /* Ring exists */
+#define V4V_RING_DATA_F_PENDING     (1U << 2) /* Pending interrupt exists - do not
+                                               * rely on this field - for
+                                               * profiling only */
+#define V4V_RING_DATA_F_SUFFICIENT  (1U << 3) /* Sufficient space to queue
+                                               * space_required bytes exists */
+
+#if defined(__GNUC__)
+# define V4V_RING_DATA_ENT_FULLRING
+# define V4V_RING_DATA_ENT_FULL
+#else
+# define V4V_RING_DATA_ENT_FULLRING fullring
+# define V4V_RING_DATA_ENT_FULL full
+#endif
+typedef struct v4v_ring_data_ent
+{
+    v4v_addr_t ring;
+    uint16_t flags;
+    uint16_t pad;
+    uint32_t space_required;
+    uint32_t max_message_size;
+} v4v_ring_data_ent_t;
+
+typedef struct v4v_ring_data
+{
+    uint64_t magic;
+    uint32_t nent;
+    uint32_t pad;
+    uint64_t reserved[4];
+    v4v_ring_data_ent_t data[0];
+} v4v_ring_data_t;
+
+struct v4v_info
+{
+    uint64_t ring_magic;
+    uint64_t data_magic;
+    evtchn_port_t evtchn;
+};
+typedef struct v4v_info v4v_info_t;
+
+#define V4V_ROUNDUP(a) (((a) + 0xf) & ~0xf)
+/*
+ * Messages on the ring are padded to 128 bits
+ * Len here refers to the exact length of the data not including the
+ * 128 bit header. The message uses
+ * ((len +0xf) & ~0xf) + sizeof(v4v_ring_message_header) bytes
+ */
+
+#define V4V_SHF_SYN		(1 << 0)
+#define V4V_SHF_ACK		(1 << 1)
+#define V4V_SHF_RST		(1 << 2)
+
+#define V4V_SHF_PING		(1 << 8)
+#define V4V_SHF_PONG		(1 << 9)
+
+struct v4v_stream_header
+{
+    uint32_t flags;
+    uint32_t conid;
+};
+
+struct v4v_ring_message_header
+{
+    uint32_t len;
+    uint32_t pad0;
+    v4v_addr_t source;
+    uint32_t protocol;
+    uint32_t pad1;
+    uint8_t data[0];
+};
+
+typedef struct v4v_viptables_rule
+{
+    v4v_addr_t src;
+    v4v_addr_t dst;
+    uint32_t accept;
+    uint32_t pad;
+} v4v_viptables_rule_t;
+
+typedef struct v4v_viptables_list
+{
+    uint32_t start_rule;
+    uint32_t nb_rules;
+    struct v4v_viptables_rule rules[0];
+} v4v_viptables_list_t;
+
+/*
+ * HYPERCALLS
+ */
+
+#define V4VOP_register_ring 	1
+/*
+ * Registers a ring with Xen. If a ring with the same v4v_ring_id exists,
+ * this ring takes its place; registration will not change tx_ptr
+ * unless it is invalid.
+ *
+ * do_v4v_op(V4VOP_register_ring,
+ *           v4v_ring, XEN_GUEST_HANDLE(v4v_pfn),
+ *           npage, 0)
+ */
+
+
+#define V4VOP_unregister_ring 	2
+/*
+ * Unregister a ring.
+ *
+ * v4v_hypercall(V4VOP_unregister_ring, v4v_ring, NULL, 0, 0)
+ */
+
+#define V4VOP_send 		3
+/*
+ * Sends len bytes of buf to dst, giving src as the source address (xen will
+ * ignore src->domain and put your domain in the actual message). Xen
+ * first looks for a ring with id.addr==dst and id.partner==sending_domain;
+ * if that fails it looks for id.addr==dst and id.partner==DOMID_ANY.
+ * protocol is the 32 bit protocol number used for the message,
+ * most likely V4V_PROTO_DGRAM or STREAM. If insufficient space exists
+ * it will return -EAGAIN and xen will raise the V4V_INTERRUPT when
+ * sufficient space becomes available.
+ *
+ * v4v_hypercall(V4VOP_send,
+ *               v4v_send_addr_t addr,
+ *               void* buf,
+ *               uint32_t len,
+ *               uint32_t protocol)
+ */
+
+
+#define V4VOP_notify 		4
+/* Asks xen for information about other rings in the system.
+ *
+ * ent->ring is the v4v_addr_t of the ring you want information on;
+ * the same matching rules are used as for V4VOP_send.
+ *
+ * ent->space_required: if this field is not null xen will check
+ * that there is space in the destination ring for this many bytes
+ * of payload. If there is, it will set V4V_RING_DATA_F_SUFFICIENT
+ * and CANCEL any pending interrupt for that ent->ring; if insufficient
+ * space is available it will schedule an interrupt and the flag will
+ * not be set.
+ *
+ * The flags are set by xen when notify replies:
+ * V4V_RING_DATA_F_EMPTY	ring is empty
+ * V4V_RING_DATA_F_PENDING	interrupt is pending - don't rely on this
+ * V4V_RING_DATA_F_SUFFICIENT	sufficient space for space_required is there
+ * V4V_RING_DATA_F_EXISTS	ring exists
+ *
+ * v4v_hypercall(V4VOP_notify,
+ *               XEN_GUEST_HANDLE(v4v_ring_data_ent) ent,
+ *               NULL, nent, 0)
+ */
+
+#define V4VOP_sendv		5
+/*
+ * Identical to V4VOP_send except rather than buf and len it takes
+ * an array of v4v_iov and a length of the array.
+ *
+ * v4v_hypercall(V4VOP_sendv,
+ *               v4v_send_addr_t addr,
+ *               v4v_iov iov,
+ *               uint32_t niov,
+ *               uint32_t protocol)
+ */
+
+#define V4VOP_viptables_add     6
+/*
+ * Insert a filtering rule after a given position.
+ *
+ * v4v_hypercall(V4VOP_viptables_add,
+ *               v4v_viptables_rule_t rule,
+ *               NULL,
+ *               uint32_t position, 0)
+ */
+
+#define V4VOP_viptables_del     7
+/*
+ * Delete the filtering rule at a given position, or the rule
+ * that matches "rule".
+ *
+ * v4v_hypercall(V4VOP_viptables_del,
+ *               v4v_viptables_rule_t rule,
+ *               NULL,
+ *               uint32_t position, 0)
+ */
+
+#define V4VOP_viptables_list    8
+/*
+ * List the filtering rules.
+ *
+ * v4v_hypercall(V4VOP_viptables_list,
+ *               v4v_viptables_list_t list,
+ *               NULL, 0, 0)
+ */
+
+#define V4VOP_info              9
+/*
+ * v4v_hypercall(V4VOP_info,
+ *               XEN_GUEST_HANDLE(v4v_info_t) info,
+ *               NULL, 0, 0)
+ */
+
+#endif /* __XEN_PUBLIC_V4V_H__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-set-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/include/xen/interface/xen.h b/include/xen/interface/xen.h
index a890804..395f6cd 100644
--- a/include/xen/interface/xen.h
+++ b/include/xen/interface/xen.h
@@ -59,6 +59,7 @@
 #define __HYPERVISOR_physdev_op           33
 #define __HYPERVISOR_hvm_op               34
 #define __HYPERVISOR_tmem_op              38
+#define __HYPERVISOR_v4v_op               39
 
 /* Architecture-specific hypercall definitions. */
 #define __HYPERVISOR_arch_0               48
diff --git a/include/xen/v4vdev.h b/include/xen/v4vdev.h
new file mode 100644
index 0000000..a30b608
--- /dev/null
+++ b/include/xen/v4vdev.h
@@ -0,0 +1,34 @@
+#ifndef __V4V_DGRAM_H__
+#define __V4V_DGRAM_H__
+
+struct v4v_dev
+{
+    void *buf;
+    size_t len;
+    int flags;
+    v4v_addr_t *addr;
+};
+
+struct v4v_viptables_rule_pos
+{
+    struct v4v_viptables_rule* rule;
+    int position;
+};
+
+#define V4V_TYPE 'W'
+
+#define V4VIOCSETRINGSIZE 	_IOW (V4V_TYPE,  1, uint32_t)
+#define V4VIOCBIND		_IOW (V4V_TYPE,  2, v4v_ring_id_t)
+#define V4VIOCGETSOCKNAME	_IOW (V4V_TYPE,  3, v4v_ring_id_t)
+#define V4VIOCGETPEERNAME	_IOW (V4V_TYPE,  4, v4v_addr_t)
+#define V4VIOCCONNECT		_IOW (V4V_TYPE,  5, v4v_addr_t)
+#define V4VIOCGETCONNECTERR	_IOW (V4V_TYPE,  6, int)
+#define V4VIOCLISTEN		_IOW (V4V_TYPE,  7, uint32_t) /* unused args */
+#define V4VIOCACCEPT		_IOW (V4V_TYPE,  8, v4v_addr_t)
+#define V4VIOCSEND		_IOW (V4V_TYPE,  9, struct v4v_dev)
+#define V4VIOCRECV		_IOW (V4V_TYPE, 10, struct v4v_dev)
+#define V4VIOCVIPTABLESADD	_IOW (V4V_TYPE, 11, struct v4v_viptables_rule_pos)
+#define V4VIOCVIPTABLESDEL	_IOW (V4V_TYPE, 12, struct v4v_viptables_rule_pos)
+#define V4VIOCVIPTABLESLIST	_IOW (V4V_TYPE, 13, struct v4v_viptables_list)
+
+#endif


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel



From xen-devel-bounces@lists.xen.org Fri Aug 03 22:23:34 2012
From: Jean Guyader <jean.guyader@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Fri, 3 Aug 2012 23:24:20 +0100
Message-ID: <1344032660-1251-1-git-send-email-jean.guyader@citrix.com>
Cc: Jean Guyader <jean.guyader@citrix.com>
Subject: [Xen-devel] [PATCH] RFC: V4V Linux Driver


This is a Linux driver for the V4V inter-VM communication system.

I've posted the V4V Xen patches for comments; for more info about
V4V you can check out this link:
http://osdir.com/ml/general/2012-08/msg05904.html

This Linux driver exposes two char devices, one for TCP-style (stream)
traffic and one for UDP-style (datagram) traffic. The interface exposed
to userspace is made of ioctls, one per network operation
(listen, bind, accept, send, recv, ...).

Signed-off-by: Jean Guyader <jean.guyader@citrix.com>
---
 drivers/xen/Kconfig         |    4 +
 drivers/xen/Makefile        |    1 +
 drivers/xen/v4v.c           | 2639 +++++++++++++++++++++++++++++++++++++++++++
 drivers/xen/v4v_utils.h     |  278 +++++
 include/xen/interface/v4v.h |  299 +++++
 include/xen/interface/xen.h |    1 +
 include/xen/v4vdev.h        |   34 +
 7 files changed, 3256 insertions(+)
 create mode 100644 drivers/xen/v4v.c
 create mode 100644 drivers/xen/v4v_utils.h
 create mode 100644 include/xen/interface/v4v.h
 create mode 100644 include/xen/v4vdev.h


Content-Type: text/x-patch; name="0001-v4v.patch"
Content-Disposition: attachment; filename="0001-v4v.patch"

diff --git a/drivers/xen/Kconfig b/drivers/xen/Kconfig
index 8d2501e..db500cc 100644
--- a/drivers/xen/Kconfig
+++ b/drivers/xen/Kconfig
@@ -196,4 +196,8 @@ config XEN_ACPI_PROCESSOR
 	  called xen_acpi_processor  If you do not know what to choose, select
 	  M here. If the CPUFREQ drivers are built in, select Y here.
 
+config XEN_V4V
+	tristate "Xen V4V driver"
+	default m
+
 endmenu
diff --git a/drivers/xen/Makefile b/drivers/xen/Makefile
index fc34886..a3d3014 100644
--- a/drivers/xen/Makefile
+++ b/drivers/xen/Makefile
 obj-$(CONFIG_XEN_DOM0)			+= pci.o acpi.o
 obj-$(CONFIG_XEN_PCIDEV_BACKEND)	+= xen-pciback/
 obj-$(CONFIG_XEN_PRIVCMD)		+= xen-privcmd.o
 obj-$(CONFIG_XEN_ACPI_PROCESSOR)	+= xen-acpi-processor.o
+obj-$(CONFIG_XEN_V4V)			+= v4v.o
 xen-evtchn-y				:= evtchn.o
 xen-gntdev-y				:= gntdev.o
 xen-gntalloc-y				:= gntalloc.o
diff --git a/drivers/xen/v4v.c b/drivers/xen/v4v.c
new file mode 100644
index 0000000..141be66
--- /dev/null
+++ b/drivers/xen/v4v.c
@@ -0,0 +1,2639 @@
+/**************************************************************************
+ * drivers/xen/v4v/v4v.c
+ *
+ * V4V interdomain communication driver.
+ *
+ * Copyright (c) 2012 Jean Guyader
+ * Copyright (c) 2009 Ross Philipson
+ * Copyright (c) 2009 James McKenzie
+ * Copyright (c) 2009 Citrix Systems, Inc.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2
+ * as published by the Free Software Foundation; or, when distributed
+ * separately from the Linux kernel or incorporated into other
+ * software packages, subject to the following license:
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this source file (the "Software"), to deal in the Software without
+ * restriction, including without limitation the rights to use, copy, modify,
+ * merge, publish, distribute, sublicense, and/or sell copies of the Software,
+ * and to permit persons to whom the Software is furnished to do so, subject to
+ * the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ */
+
+#include <linux/mm.h>
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/vmalloc.h>
+#include <linux/interrupt.h>
+#include <linux/spinlock.h>
+#include <linux/list.h>
+#include <linux/socket.h>
+#include <linux/sched.h>
+#include <xen/events.h>
+#include <xen/evtchn.h>
+#include <xen/page.h>
+#include <xen/xen.h>
+#include <linux/fs.h>
+#include <linux/platform_device.h>
+#include <linux/miscdevice.h>
+#include <linux/major.h>
+#include <linux/proc_fs.h>
+#include <linux/poll.h>
+#include <linux/random.h>
+#include <linux/wait.h>
+#include <linux/file.h>
+#include <linux/mount.h>
+
+#include <xen/interface/v4v.h>
+#include <xen/v4vdev.h>
+#include "v4v_utils.h"
+
+#define DEFAULT_RING_SIZE \
+    (V4V_ROUNDUP((((PAGE_SIZE)*32) - sizeof(v4v_ring_t) - V4V_ROUNDUP(1))))
+
+/* The type of a ring */
+typedef enum {
+        V4V_RTYPE_IDLE = 0,
+        V4V_RTYPE_DGRAM,
+        V4V_RTYPE_LISTENER,
+        V4V_RTYPE_CONNECTOR,
+} v4v_rtype;
+
+/* The state of a v4v_private */
+typedef enum {
+        V4V_STATE_IDLE = 0,
+        V4V_STATE_BOUND,
+        V4V_STATE_LISTENING,
+        V4V_STATE_ACCEPTED,
+        V4V_STATE_CONNECTING,
+        V4V_STATE_CONNECTED,
+        V4V_STATE_DISCONNECTED
+} v4v_state;
+
+typedef enum {
+        V4V_PTYPE_DGRAM = 1,
+        V4V_PTYPE_STREAM,
+} v4v_ptype;
+
+static rwlock_t list_lock;
+static struct list_head ring_list;
+
+struct v4v_private;
+
+/*
+ * The ring pointer itself is protected by the refcnt; the lists it is on
+ * are protected by list_lock.
+ *
+ * It is permissible to decrement the refcnt whilst holding the read lock,
+ * and then clean up refcnt == 0 rings later.
+ *
+ * If a ring has refcnt != 0 we expect ->ring to be non-NULL, and for the
+ * ring to be registered with Xen.
+ */
+
+struct ring {
+        struct list_head node;
+        atomic_t refcnt;
+
+        spinlock_t lock;        /* Protects the data in the v4v_ring_t, also privates and sponsor */
+
+        struct list_head privates;      /* Protected by lock */
+        struct v4v_private *sponsor;    /* Protected by lock */
+
+        v4v_rtype type;
+
+        /* Ring */
+        v4v_ring_t *ring;
+        v4v_pfn_t *pfn_list;
+        size_t pfn_list_npages;
+        int order;
+};
+
+struct v4v_private {
+        struct list_head node;
+        v4v_state state;
+        v4v_ptype ptype;
+        uint32_t desired_ring_size;
+        struct ring *r;
+        wait_queue_head_t readq;
+        wait_queue_head_t writeq;
+        v4v_addr_t peer;
+        uint32_t conid;
+        spinlock_t pending_recv_lock;   /* Protects pending messages, and pending_error */
+        struct list_head pending_recv_list;     /* For LISTENER contains only ... */
+        atomic_t pending_recv_count;
+        int pending_error;
+        int full;
+        int send_blocked;
+        int rx;
+};
+
+struct pending_recv {
+        struct list_head node;
+        v4v_addr_t from;
+        size_t data_len, data_ptr;
+        struct v4v_stream_header sh;
+        uint8_t data[0];
+} V4V_PACKED;
+
+static spinlock_t interrupt_lock;
+static spinlock_t pending_xmit_lock;
+static struct list_head pending_xmit_list;
+static atomic_t pending_xmit_count;
+
+enum v4v_pending_xmit_type {
+        V4V_PENDING_XMIT_INLINE = 1,    /* Send the inline xmit */
+        V4V_PENDING_XMIT_WAITQ_MATCH_SPONSOR,   /* Wake up writeq of sponsor of the ringid from */
+        V4V_PENDING_XMIT_WAITQ_MATCH_PRIVATES,  /* Wake up writeq of a private of ringid from with conid */
+};
+
+struct pending_xmit {
+        struct list_head node;
+        enum v4v_pending_xmit_type type;
+        uint32_t conid;
+        struct v4v_ring_id from;
+        v4v_addr_t to;
+        size_t len;
+        uint32_t protocol;
+        uint8_t data[0];
+};
+
+#define MAX_PENDING_RECVS        16
+
+/* Hypercalls */
+
+static inline int __must_check
+HYPERVISOR_v4v_op(int cmd, void *arg1, void *arg2,
+                  uint32_t arg3, uint32_t arg4)
+{
+        return _hypercall5(int, v4v_op, cmd, arg1, arg2, arg3, arg4);
+}
+
+static int v4v_info(v4v_info_t *info)
+{
+        (void)(*(volatile int *)info);
+        return HYPERVISOR_v4v_op(V4VOP_info, info, NULL, 0, 0);
+}
+
+static int H_v4v_register_ring(v4v_ring_t *r, v4v_pfn_t *l, size_t npages)
+{
+        (void)(*(volatile int *)r);
+        return HYPERVISOR_v4v_op(V4VOP_register_ring, r, l, npages, 0);
+}
+
+static int H_v4v_unregister_ring(v4v_ring_t *r)
+{
+        (void)(*(volatile int *)r);
+        return HYPERVISOR_v4v_op(V4VOP_unregister_ring, r, NULL, 0, 0);
+}
+
+static int
+H_v4v_send(v4v_addr_t *s, v4v_addr_t *d, const void *buf, uint32_t len,
+           uint32_t protocol)
+{
+        v4v_send_addr_t addr;
+        addr.src = *s;
+        addr.dst = *d;
+        return HYPERVISOR_v4v_op(V4VOP_send, &addr, (void *)buf, len, protocol);
+}
+
+static int
+H_v4v_sendv(v4v_addr_t *s, v4v_addr_t *d, const v4v_iov_t *iovs,
+            uint32_t niov, uint32_t protocol)
+{
+        v4v_send_addr_t addr;
+        addr.src = *s;
+        addr.dst = *d;
+        return HYPERVISOR_v4v_op(V4VOP_sendv, &addr, (void *)iovs, niov,
+                                 protocol);
+}
+
+static int H_v4v_notify(v4v_ring_data_t * rd)
+{
+        return HYPERVISOR_v4v_op(V4VOP_notify, rd, NULL, 0, 0);
+}
+
+static int H_v4v_viptables_add(v4v_viptables_rule_t *rule, int position)
+{
+        return HYPERVISOR_v4v_op(V4VOP_viptables_add, rule, NULL,
+                                 position, 0);
+}
+
+static int H_v4v_viptables_del(v4v_viptables_rule_t *rule, int position)
+{
+        return HYPERVISOR_v4v_op(V4VOP_viptables_del, rule, NULL,
+                                 position, 0);
+}
+
+static int H_v4v_viptables_list(struct v4v_viptables_list *list)
+{
+        return HYPERVISOR_v4v_op(V4VOP_viptables_list, list, NULL, 0, 0);
+}
+
+/* Port/Ring uniqueness */
+
+/* Need to hold write lock for all of these */
+
+static int v4v_id_in_use(struct v4v_ring_id *id)
+{
+        struct ring *r;
+
+        list_for_each_entry(r, &ring_list, node) {
+                if ((r->ring->id.addr.port == id->addr.port)
+                    && (r->ring->id.partner == id->partner))
+                        return 1;
+        }
+
+        return 0;
+}
+
+static int v4v_port_in_use(uint32_t port, uint32_t *max)
+{
+        uint32_t ret = 0;
+        struct ring *r;
+
+        list_for_each_entry(r, &ring_list, node) {
+                if (r->ring->id.addr.port == port)
+                        ret++;
+                if (max && (r->ring->id.addr.port > *max))
+                        *max = r->ring->id.addr.port;
+        }
+
+        return ret;
+}
+
+static uint32_t v4v_random_port(void)
+{
+        uint32_t port;
+
+        port = random32();
+        port |= 0x80000000U;
+        if (port > 0xf0000000U) {
+                port -= 0x10000000;
+        }
+
+        return port;
+}
+
+/* Caller needs to hold lock */
+static uint32_t v4v_find_spare_port_number(void)
+{
+        uint32_t port, max = 0x80000000U;
+
+        port = v4v_random_port();
+        if (!v4v_port_in_use(port, &max)) {
+                return port;
+        } else {
+                port = max + 1;
+        }
+
+        return port;
+}
+
+/* Ring Goo */
+
+static int register_ring(struct ring *r)
+{
+        return H_v4v_register_ring((void *)r->ring,
+                                   r->pfn_list,
+                                   r->pfn_list_npages);
+}
+
+static int unregister_ring(struct ring *r)
+{
+        return H_v4v_unregister_ring((void *)r->ring);
+}
+
+static void refresh_pfn_list(struct ring *r)
+{
+        uint8_t *b = (void *)r->ring;
+        int i;
+
+        for (i = 0; i < r->pfn_list_npages; ++i) {
+                r->pfn_list[i] = pfn_to_mfn(vmalloc_to_pfn(b));
+                b += PAGE_SIZE;
+        }
+}
+
+static void allocate_pfn_list(struct ring *r)
+{
+        int n = (r->ring->len + PAGE_SIZE - 1) >> PAGE_SHIFT;
+        int len = sizeof(v4v_pfn_t) * n;
+
+        r->pfn_list = kmalloc(len, GFP_KERNEL);
+        if (!r->pfn_list)
+                return;
+        r->pfn_list_npages = n;
+
+        refresh_pfn_list(r);
+}
+
+static int allocate_ring(struct ring *r, int ring_len)
+{
+        int len = ring_len + sizeof(v4v_ring_t);
+        int ret = 0;
+
+        if (ring_len != V4V_ROUNDUP(ring_len)) {
+                ret = -EINVAL;
+                goto fail;
+        }
+
+        r->ring = NULL;
+        r->pfn_list = NULL;
+        r->order = 0;
+
+        r->order = get_order(len);
+
+        r->ring = vmalloc(len);
+        if (!r->ring) {
+                ret = -ENOMEM;
+                goto fail;
+        }
+
+        memset((void *)r->ring, 0, len);
+
+        r->ring->magic = V4V_RING_MAGIC;
+        r->ring->len = ring_len;
+        r->ring->rx_ptr = r->ring->tx_ptr = 0;
+
+        memset((void *)r->ring->ring, 0x5a, ring_len);
+
+        allocate_pfn_list(r);
+        if (!r->pfn_list) {
+                ret = -ENOMEM;
+                goto fail;
+        }
+
+        return 0;
+ fail:
+        if (r->ring)
+                vfree(r->ring);
+        if (r->pfn_list)
+                kfree(r->pfn_list);
+
+        r->ring = NULL;
+        r->pfn_list = NULL;
+
+        return ret;
+}
+
+/* Caller must hold lock */
+static void recover_ring(struct ring *r)
+{
+        /* It's all gone horribly wrong */
+        r->ring->rx_ptr = r->ring->tx_ptr;
+        /* Xen updates tx_ptr atomically to always be pointing somewhere sensible */
+}
+
+/* Caller must hold no locks, ring is allocated with a refcnt of 1 */
+static int new_ring(struct v4v_private *sponsor, struct v4v_ring_id *pid)
+{
+        struct v4v_ring_id id = *pid;
+        struct ring *r;
+        int ret;
+        unsigned long flags;
+
+        if (id.addr.domain != V4V_DOMID_NONE)
+                return -EINVAL;
+
+        r = kzalloc(sizeof(struct ring), GFP_KERNEL);
+        if (!r)
+                return -ENOMEM;
+
+        ret = allocate_ring(r, sponsor->desired_ring_size);
+        if (ret) {
+                kfree(r);
+                return ret;
+        }
+
+        INIT_LIST_HEAD(&r->privates);
+        spin_lock_init(&r->lock);
+        atomic_set(&r->refcnt, 1);
+
+        write_lock_irqsave(&list_lock, flags);
+        if (sponsor->state != V4V_STATE_IDLE) {
+                ret = -EINVAL;
+                goto fail;
+        }
+
+        if (!id.addr.port) {
+                id.addr.port = v4v_find_spare_port_number();
+        } else if (v4v_id_in_use(&id)) {
+                ret = -EADDRINUSE;
+                goto fail;
+        }
+
+        r->ring->id = id;
+        r->sponsor = sponsor;
+        sponsor->r = r;
+        sponsor->state = V4V_STATE_BOUND;
+
+        ret = register_ring(r);
+        if (ret)
+                goto fail;
+
+        list_add(&r->node, &ring_list);
+        write_unlock_irqrestore(&list_lock, flags);
+        return 0;
+
+ fail:
+        write_unlock_irqrestore(&list_lock, flags);
+
+        vfree(r->ring);
+        kfree(r->pfn_list);
+        kfree(r);
+
+        sponsor->r = NULL;
+        sponsor->state = V4V_STATE_IDLE;
+
+        return ret;
+}
+
+/* Cleans up old rings */
+static void delete_ring(struct ring *r)
+{
+        int ret;
+
+        list_del(&r->node);
+
+        if ((ret = unregister_ring(r))) {
+                printk(KERN_ERR
+                       "unregister_ring hypercall failed: %d. Leaking ring.\n",
+                       ret);
+        } else {
+                vfree(r->ring);
+        }
+
+        kfree(r->pfn_list);
+        kfree(r);
+}
+
+/* Returns non-zero if a reference to the ring was successfully taken */
+static int get_ring(struct ring *r)
+{
+        return atomic_add_unless(&r->refcnt, 1, 0);
+}
+
+/* Must be called with DEBUG_WRITELOCK; v4v_write_lock */
+static void put_ring(struct ring *r)
+{
+        if (!r)
+                return;
+
+        if (atomic_dec_and_test(&r->refcnt)) {
+                delete_ring(r);
+        }
+}
+
+/* Caller must hold ring_lock */
+static struct ring *find_ring_by_id(struct v4v_ring_id *id)
+{
+        struct ring *r;
+
+        list_for_each_entry(r, &ring_list, node) {
+                if (!memcmp(&r->ring->id, id, sizeof(struct v4v_ring_id)))
+                        return r;
+        }
+        return NULL;
+}
+
+/* Caller must hold ring_lock */
+struct ring *find_ring_by_id_type(struct v4v_ring_id *id, v4v_rtype t)
+{
+        struct ring *r;
+
+        list_for_each_entry(r, &ring_list, node) {
+                if (r->type != t)
+                        continue;
+                if (!memcmp(&r->ring->id, id, sizeof(struct v4v_ring_id)))
+                        return r;
+        }
+
+        return NULL;
+}
+
+/* Pending xmits */
+
+/* Caller must hold pending_xmit_lock */
+
+static void
+xmit_queue_wakeup_private(struct v4v_ring_id *from,
+                          uint32_t conid, v4v_addr_t *to, int len, int delete)
+{
+        struct pending_xmit *p;
+
+        list_for_each_entry(p, &pending_xmit_list, node) {
+                if (p->type != V4V_PENDING_XMIT_WAITQ_MATCH_PRIVATES)
+                        continue;
+                if (p->conid != conid)
+                        continue;
+
+                if ((!memcmp(from, &p->from, sizeof(struct v4v_ring_id)))
+                    && (!memcmp(to, &p->to, sizeof(v4v_addr_t)))) {
+                        if (delete) {
+                                atomic_dec(&pending_xmit_count);
+                                list_del(&p->node);
+                        } else {
+                                p->len = len;
+                        }
+                        return;
+                }
+        }
+
+        if (delete)
+                return;
+
+        p = kmalloc(sizeof(struct pending_xmit), GFP_ATOMIC);
+        if (!p) {
+                printk(KERN_ERR
+                       "Out of memory trying to queue an xmit private wakeup\n");
+                return;
+        }
+        p->type = V4V_PENDING_XMIT_WAITQ_MATCH_PRIVATES;
+        p->conid = conid;
+        p->from = *from;
+        p->to = *to;
+        p->len = len;
+
+        atomic_inc(&pending_xmit_count);
+        list_add_tail(&p->node, &pending_xmit_list);
+}
+
+/* Caller must hold pending_xmit_lock */
+static void
+xmit_queue_wakeup_sponsor(struct v4v_ring_id *from, v4v_addr_t *to,
+                          int len, int delete)
+{
+        struct pending_xmit *p;
+
+        list_for_each_entry(p, &pending_xmit_list, node) {
+                if (p->type != V4V_PENDING_XMIT_WAITQ_MATCH_SPONSOR)
+                        continue;
+                if ((!memcmp(from, &p->from, sizeof(struct v4v_ring_id)))
+                    && (!memcmp(to, &p->to, sizeof(v4v_addr_t)))) {
+                        if (delete) {
+                                atomic_dec(&pending_xmit_count);
+                                list_del(&p->node);
+                        } else {
+                                p->len = len;
+                        }
+                        return;
+                }
+        }
+
+        if (delete)
+                return;
+
+        p = kmalloc(sizeof(struct pending_xmit), GFP_ATOMIC);
+        if (!p) {
+                printk(KERN_ERR
+                       "Out of memory trying to queue an xmit sponsor wakeup\n");
+                return;
+        }
+        p->type = V4V_PENDING_XMIT_WAITQ_MATCH_SPONSOR;
+        p->from = *from;
+        p->to = *to;
+        p->len = len;
+        atomic_inc(&pending_xmit_count);
+        list_add_tail(&p->node, &pending_xmit_list);
+}
+
+static int
+xmit_queue_inline(struct v4v_ring_id *from, v4v_addr_t *to,
+                  void *buf, size_t len, uint32_t protocol)
+{
+        ssize_t ret;
+        unsigned long flags;
+        struct pending_xmit *p;
+
+        spin_lock_irqsave(&pending_xmit_lock, flags);
+
+        ret = H_v4v_send(&from->addr, to, buf, len, protocol);
+        if (ret != -EAGAIN) {
+                spin_unlock_irqrestore(&pending_xmit_lock, flags);
+                return ret;
+        }
+
+        p = kmalloc(sizeof(struct pending_xmit) + len, GFP_ATOMIC);
+        if (!p) {
+                spin_unlock_irqrestore(&pending_xmit_lock, flags);
+                printk(KERN_ERR
+                       "Out of memory trying to queue an xmit of %zu bytes\n",
+                       len);
+
+                return -ENOMEM;
+        }
+
+        p->type = V4V_PENDING_XMIT_INLINE;
+        p->from = *from;
+        p->to = *to;
+        p->len = len;
+        p->protocol = protocol;
+
+        if (len)
+                memcpy(p->data, buf, len);
+
+        list_add_tail(&p->node, &pending_xmit_list);
+        atomic_inc(&pending_xmit_count);
+        spin_unlock_irqrestore(&pending_xmit_lock, flags);
+
+        return len;
+}
+
+static void
+xmit_queue_rst_to(struct v4v_ring_id *from, uint32_t conid, v4v_addr_t *to)
+{
+        struct v4v_stream_header sh;
+
+        if (!to)
+                return;
+
+        sh.conid = conid;
+        sh.flags = V4V_SHF_RST;
+        xmit_queue_inline(from, to, &sh, sizeof(sh), V4V_PROTO_STREAM);
+}
+
+/* RX */
+
+static int
+copy_into_pending_recv(struct ring *r, int len, struct v4v_private *p)
+{
+        struct pending_recv *pending;
+        int k;
+
+        /* Too much queued? Let the ring take the strain */
+        if (atomic_read(&p->pending_recv_count) > MAX_PENDING_RECVS) {
+                spin_lock(&p->pending_recv_lock);
+                p->full = 1;
+                spin_unlock(&p->pending_recv_lock);
+
+                return -1;
+        }
+
+        pending = kmalloc(sizeof(struct pending_recv) -
+                          sizeof(struct v4v_stream_header) + len, GFP_ATOMIC);
+
+        if (!pending)
+                return -1;
+
+        pending->data_ptr = 0;
+        pending->data_len = len - sizeof(struct v4v_stream_header);
+
+        k = v4v_copy_out(r->ring, &pending->from, NULL, &pending->sh, len, 1);
+
+        spin_lock(&p->pending_recv_lock);
+        list_add_tail(&pending->node, &p->pending_recv_list);
+        atomic_inc(&p->pending_recv_count);
+        p->full = 0;
+        spin_unlock(&p->pending_recv_lock);
+
+        return 0;
+}
+
+/* Notify */
+
+/* Caller must hold list_lock */
+static void
+wakeup_privates(struct v4v_ring_id *id, v4v_addr_t *peer, uint32_t conid)
+{
+        struct ring *r = find_ring_by_id_type(id, V4V_RTYPE_LISTENER);
+        struct v4v_private *p;
+
+        if (!r)
+                return;
+
+        list_for_each_entry(p, &r->privates, node) {
+                if ((p->conid == conid)
+                    && !memcmp(peer, &p->peer, sizeof(v4v_addr_t))) {
+                        p->send_blocked = 0;
+                        wake_up_interruptible_all(&p->writeq);
+                        return;
+                }
+        }
+}
+
+/* Caller must hold list_lock */
+static void wakeup_sponsor(struct v4v_ring_id *id)
+{
+        struct ring *r = find_ring_by_id(id);
+
+        if (!r)
+                return;
+
+        if (!r->sponsor)
+                return;
+
+        r->sponsor->send_blocked = 0;
+        wake_up_interruptible_all(&r->sponsor->writeq);
+}
+
+static void v4v_null_notify(void)
+{
+        H_v4v_notify(NULL);
+}
+
+/* Caller must hold list_lock */
+static void v4v_notify(void)
+{
+        unsigned long flags;
+        int ret;
+        int nent;
+        struct pending_xmit *p, *n;
+        v4v_ring_data_t *d;
+        int i =3D 0;
+
+        spin_lock_irqsave(&pending_xmit_lock, flags);
+
+        nent = atomic_read(&pending_xmit_count);
+        d = kmalloc(sizeof(v4v_ring_data_t) +
+                    nent * sizeof(v4v_ring_data_ent_t), GFP_ATOMIC);
+        if (!d) {
+                spin_unlock_irqrestore(&pending_xmit_lock, flags);
+                return;
+        }
+        memset(d, 0, sizeof(v4v_ring_data_t));
+
+        d->magic = V4V_RING_DATA_MAGIC;
+
+        list_for_each_entry(p, &pending_xmit_list, node) {
+                if (i != nent) {
+                        d->data[i].ring = p->to;
+                        d->data[i].space_required = p->len;
+                        i++;
+                }
+        }
+        d->nent = i;
+
+        if (H_v4v_notify(d)) {
+                kfree(d);
+                spin_unlock_irqrestore(&pending_xmit_lock, flags);
+                //MOAN;
+                return;
+        }
+
+        i = 0;
+        list_for_each_entry_safe(p, n, &pending_xmit_list, node) {
+                int processed = 1;
+
+                if (i == nent)
+                        continue;
+
+                if (d->data[i].flags & V4V_RING_DATA_F_EXISTS) {
+                        switch (p->type) {
+                        case V4V_PENDING_XMIT_INLINE:
+                                if (!(d->data[i].flags &
+                                      V4V_RING_DATA_F_SUFFICIENT)) {
+                                        processed = 0;
+                                        break;
+                                }
+                                ret = H_v4v_send(&p->from.addr, &p->to,
+                                                 p->data, p->len, p->protocol);
+                                if (ret == -EAGAIN)
+                                        processed = 0;
+                                break;
+                        case V4V_PENDING_XMIT_WAITQ_MATCH_SPONSOR:
+                                if (d->data[i].flags &
+                                    V4V_RING_DATA_F_SUFFICIENT) {
+                                        wakeup_sponsor(&p->from);
+                                } else {
+                                        processed = 0;
+                                }
+                                break;
+                        case V4V_PENDING_XMIT_WAITQ_MATCH_PRIVATES:
+                                if (d->data[i].flags &
+                                    V4V_RING_DATA_F_SUFFICIENT) {
+                                        wakeup_privates(&p->from, &p->to,
+                                                        p->conid);
+                                } else {
+                                        processed = 0;
+                                }
+                                break;
+                        }
+                }
+                if (processed) {
+                        list_del(&p->node);     /* No one to talk to */
+                        atomic_dec(&pending_xmit_count);
+                        kfree(p);
+                }
+                i++;
+        }
+
+        spin_unlock_irqrestore(&pending_xmit_lock, flags);
+        kfree(d);
+}
+
+/* VIPtables */
+static void
+v4v_viptables_add(struct v4v_private *p, struct v4v_viptables_rule *rule,
+                  int position)
+{
+        H_v4v_viptables_add(rule, position);
+}
+
+static void
+v4v_viptables_del(struct v4v_private *p, struct v4v_viptables_rule *rule,
+                  int position)
+{
+        H_v4v_viptables_del(rule, position);
+}
+
+static int v4v_viptables_list(struct v4v_private *p,
+                              struct v4v_viptables_list *list)
+{
+        return H_v4v_viptables_list(list);
+}
+
+/* State Machines */
+static int
+connector_state_machine(struct v4v_private *p, struct v4v_stream_header *sh)
+{
+        if (sh->flags & V4V_SHF_ACK) {
+                switch (p->state) {
+                case V4V_STATE_CONNECTING:
+                        p->state = V4V_STATE_CONNECTED;
+
+                        spin_lock(&p->pending_recv_lock);
+                        p->pending_error = 0;
+                        spin_unlock(&p->pending_recv_lock);
+
+                        wake_up_interruptible_all(&p->writeq);
+                        return 0;
+                case V4V_STATE_CONNECTED:
+                case V4V_STATE_DISCONNECTED:
+                        p->state = V4V_STATE_DISCONNECTED;
+
+                        wake_up_interruptible_all(&p->readq);
+                        wake_up_interruptible_all(&p->writeq);
+                        return 1;       /* Send RST */
+                default:
+                        break;
+                }
+        }
+
+        if (sh->flags & V4V_SHF_RST) {
+                switch (p->state) {
+                case V4V_STATE_CONNECTING:
+                        spin_lock(&p->pending_recv_lock);
+                        p->pending_error = -ECONNREFUSED;
+                        spin_unlock(&p->pending_recv_lock);
+                        /* fall through */
+                case V4V_STATE_CONNECTED:
+                        p->state = V4V_STATE_DISCONNECTED;
+                        wake_up_interruptible_all(&p->readq);
+                        wake_up_interruptible_all(&p->writeq);
+                        return 0;
+                default:
+                        break;
+                }
+        }
+
+        return 0;
+}
+
+static void
+acceptor_state_machine(struct v4v_private *p, struct v4v_stream_header *sh)
+{
+        if ((sh->flags & V4V_SHF_RST)
+            && (p->state == V4V_STATE_ACCEPTED)) {
+                p->state = V4V_STATE_DISCONNECTED;
+                wake_up_interruptible_all(&p->readq);
+                wake_up_interruptible_all(&p->writeq);
+        }
+}
+
+/* Interrupt handler */
+
+static int connector_interrupt(struct ring *r)
+{
+        ssize_t msg_len;
+        uint32_t protocol;
+        struct v4v_stream_header sh;
+        v4v_addr_t from;
+        int ret = 0;
+
+        if (!r->sponsor) {
+                //MOAN;
+                return -1;
+        }
+
+        /* Peek the header */
+        msg_len = v4v_copy_out(r->ring, &from, &protocol, &sh, sizeof(sh), 0);
+        if (msg_len == -1) {
+                recover_ring(r);
+                return ret;
+        }
+
+        if ((protocol != V4V_PROTO_STREAM) || (msg_len < sizeof(sh))) {
+                /* Wrong protocol, bin it */
+                v4v_copy_out(r->ring, NULL, NULL, NULL, 0, 1);
+                return ret;
+        }
+
+        if (sh.flags & V4V_SHF_SYN) {
+                /* This is a connector; no-one should send SYN, send RST back */
+                msg_len = v4v_copy_out(r->ring, &from, &protocol, &sh,
+                                       sizeof(sh), 1);
+                if (msg_len == sizeof(sh))
+                        xmit_queue_rst_to(&r->ring->id, sh.conid, &from);
+                return ret;
+        }
+
+        /* Right connection? */
+        if (sh.conid != r->sponsor->conid) {
+                msg_len = v4v_copy_out(r->ring, &from, &protocol, &sh,
+                                       sizeof(sh), 1);
+                xmit_queue_rst_to(&r->ring->id, sh.conid, &from);
+                return ret;
+        }
+
+        /* Any messages to eat? */
+        if (sh.flags & (V4V_SHF_ACK | V4V_SHF_RST)) {
+                msg_len = v4v_copy_out(r->ring, &from, &protocol, &sh,
+                                       sizeof(sh), 1);
+                if (msg_len == sizeof(sh)) {
+                        if (connector_state_machine(r->sponsor, &sh))
+                                xmit_queue_rst_to(&r->ring->id, sh.conid,
+                                                  &from);
+                }
+                return ret;
+        }
+        /*
+         * FIXME: set a flag to say wake up the userland process next time,
+         * and do that rather than copy.
+         */
+        ret = copy_into_pending_recv(r, msg_len, r->sponsor);
+        wake_up_interruptible_all(&r->sponsor->readq);
+
+        return ret;
+}
+
+static int
+acceptor_interrupt(struct v4v_private *p, struct ring *r,
+                   struct v4v_stream_header *sh, ssize_t msg_len)
+{
+        v4v_addr_t from;
+        int ret = 0;
+
+        if (sh->flags & (V4V_SHF_SYN | V4V_SHF_ACK)) {
+                /*
+                 * This is an acceptor; no-one should send SYN or ACK,
+                 * send RST back.
+                 */
+                msg_len = v4v_copy_out(r->ring, &from, NULL, sh,
+                                       sizeof(*sh), 1);
+                if (msg_len == sizeof(*sh))
+                        xmit_queue_rst_to(&r->ring->id, sh->conid, &from);
+                return ret;
+        }
+
+        /* Is it all over? */
+        if (sh->flags & V4V_SHF_RST) {
+                /* Consume the RST */
+                msg_len = v4v_copy_out(r->ring, &from, NULL, sh,
+                                       sizeof(*sh), 1);
+                if (msg_len == sizeof(*sh))
+                        acceptor_state_machine(p, sh);
+                return ret;
+        }
+
+        /* Copy the message out */
+        ret = copy_into_pending_recv(r, msg_len, p);
+        wake_up_interruptible_all(&p->readq);
+
+        return ret;
+}
+
+static int listener_interrupt(struct ring *r)
+{
+        int ret = 0;
+        ssize_t msg_len;
+        uint32_t protocol;
+        struct v4v_stream_header sh;
+        struct v4v_private *p;
+        v4v_addr_t from;
+
+        /* Peek the header */
+        msg_len = v4v_copy_out(r->ring, &from, &protocol, &sh, sizeof(sh), 0);
+        if (msg_len == -1) {
+                recover_ring(r);
+                return ret;
+        }
+
+        if ((protocol != V4V_PROTO_STREAM) || (msg_len < sizeof(sh))) {
+                /* Wrong protocol, bin it */
+                v4v_copy_out(r->ring, NULL, NULL, NULL, 0, 1);
+                return ret;
+        }
+
+        list_for_each_entry(p, &r->privates, node) {
+                if ((p->conid == sh.conid)
+                    && (!memcmp(&p->peer, &from, sizeof(v4v_addr_t)))) {
+                        ret = acceptor_interrupt(p, r, &sh, msg_len);
+                        return ret;
+                }
+        }
+
+        /* Consume it */
+        if (r->sponsor && (sh.flags & V4V_SHF_RST)) {
+                /*
+                 * If we previously received a SYN which has not been pulled
+                 * by v4v_accept() from the pending queue yet, the RST will be
+                 * dropped here and the connection will never be closed.
+                 * Hence we must make sure to evict the SYN header from the
+                 * pending queue before it gets picked up by v4v_accept().
+                 */
+                struct pending_recv *pending, *t;
+
+                spin_lock(&r->sponsor->pending_recv_lock);
+                list_for_each_entry_safe(pending, t,
+                                         &r->sponsor->pending_recv_list, node) {
+                        if (pending->sh.flags & V4V_SHF_SYN
+                            && pending->sh.conid == sh.conid) {
+                                list_del(&pending->node);
+                                atomic_dec(&r->sponsor->pending_recv_count);
+                                kfree(pending);
+                                break;
+                        }
+                }
+                spin_unlock(&r->sponsor->pending_recv_lock);
+
+                /*
+                 * RST to a listener; it should have been picked up above for
+                 * an existing connection, so drop it.
+                 */
+                v4v_copy_out(r->ring, NULL, NULL, NULL, sizeof(sh), 1);
+                return ret;
+        }
+
+        if (sh.flags & V4V_SHF_SYN) {
+                /* SYN for a new connection */
+                if ((!r->sponsor) || (msg_len != sizeof(sh))) {
+                        v4v_copy_out(r->ring, NULL, NULL, NULL,
+                                     sizeof(sh), 1);
+                        return ret;
+                }
+                ret = copy_into_pending_recv(r, msg_len, r->sponsor);
+                wake_up_interruptible_all(&r->sponsor->readq);
+                return ret;
+        }
+
+        v4v_copy_out(r->ring, NULL, NULL, NULL, sizeof(sh), 1);
+        /* Data for unknown destination, RST them */
+        xmit_queue_rst_to(&r->ring->id, sh.conid, &from);
+
+        return ret;
+}
+
+static void v4v_interrupt_rx(void)
+{
+        struct ring *r;
+
+        read_lock(&list_lock);
+
+        /* Wake up anyone pending */
+        list_for_each_entry(r, &ring_list, node) {
+                if (r->ring->tx_ptr == r->ring->rx_ptr)
+                        continue;
+
+                switch (r->type) {
+                case V4V_RTYPE_IDLE:
+                        v4v_copy_out(r->ring, NULL, NULL, NULL, 1, 1);
+                        break;
+                case V4V_RTYPE_DGRAM:  /* For datagrams we just wake up the reader */
+                        if (r->sponsor)
+                                wake_up_interruptible_all(&r->sponsor->readq);
+                        break;
+                case V4V_RTYPE_CONNECTOR:
+                        spin_lock(&r->lock);
+                        while ((r->ring->tx_ptr != r->ring->rx_ptr)
+                               && !connector_interrupt(r))
+                                ;
+                        spin_unlock(&r->lock);
+                        break;
+                case V4V_RTYPE_LISTENER:
+                        spin_lock(&r->lock);
+                        while ((r->ring->tx_ptr != r->ring->rx_ptr)
+                               && !listener_interrupt(r))
+                                ;
+                        spin_unlock(&r->lock);
+                        break;
+                default:       /* enum warning */
+                        break;
+                }
+        }
+        read_unlock(&list_lock);
+}
+
+static irqreturn_t v4v_interrupt(int irq, void *dev_id)
+{
+        unsigned long flags;
+
+        spin_lock_irqsave(&interrupt_lock, flags);
+        v4v_interrupt_rx();
+        v4v_notify();
+        spin_unlock_irqrestore(&interrupt_lock, flags);
+
+        return IRQ_HANDLED;
+}
+
+static void v4v_fake_irq(void)
+{
+        unsigned long flags;
+
+        spin_lock_irqsave(&interrupt_lock, flags);
+        v4v_interrupt_rx();
+        v4v_null_notify();
+        spin_unlock_irqrestore(&interrupt_lock, flags);
+}
+
+/* Filesystem gunge */
+
+#define V4VFS_MAGIC 0x56345644  /* "V4VD" */
+
+static struct vfsmount *v4v_mnt = NULL;
+static const struct file_operations v4v_fops_stream;
+
+static struct dentry *v4vfs_mount_pseudo(struct file_system_type *fs_type,
+                                         int flags, const char *dev_name,
+                                         void *data)
+{
+        return mount_pseudo(fs_type, "v4v:", NULL, NULL, V4VFS_MAGIC);
+}
+
+static struct file_system_type v4v_fs = {
+        /* No owner field so the module can be unloaded */
+        .name = "v4vfs",
+        .mount = v4vfs_mount_pseudo,
+        .kill_sb = kill_litter_super,
+};
+
+static int setup_fs(void)
+{
+        int ret;
+
+        ret = register_filesystem(&v4v_fs);
+        if (ret) {
+                printk(KERN_ERR
+                       "v4v: couldn't register tedious filesystem thingy\n");
+                return ret;
+        }
+
+        v4v_mnt = kern_mount(&v4v_fs);
+        if (IS_ERR(v4v_mnt)) {
+                unregister_filesystem(&v4v_fs);
+                ret = PTR_ERR(v4v_mnt);
+                printk(KERN_ERR
+                       "v4v: couldn't mount tedious filesystem thingy\n");
+                return ret;
+        }
+
+        return 0;
+}
+
+static void unsetup_fs(void)
+{
+        mntput(v4v_mnt);
+        unregister_filesystem(&v4v_fs);
+}
+
+/* Methods */
+
+static int stream_connected(struct v4v_private *p)
+{
+        switch (p->state) {
+        case V4V_STATE_ACCEPTED:
+        case V4V_STATE_CONNECTED:
+                return 1;
+        default:
+                return 0;
+        }
+}
+
+static size_t
+v4v_try_send_sponsor(struct v4v_private *p,
+                     v4v_addr_t *dest,
+                     const void *buf, size_t len, uint32_t protocol)
+{
+        size_t ret;
+        unsigned long flags;
+
+        ret = H_v4v_send(&p->r->ring->id.addr, dest, buf, len, protocol);
+        spin_lock_irqsave(&pending_xmit_lock, flags);
+        if (ret == -EAGAIN) {
+                /* Add pending xmit */
+                xmit_queue_wakeup_sponsor(&p->r->ring->id, dest, len, 0);
+                p->send_blocked++;
+        } else {
+                /* Remove pending xmit */
+                xmit_queue_wakeup_sponsor(&p->r->ring->id, dest, len, 1);
+                p->send_blocked = 0;
+        }
+
+        spin_unlock_irqrestore(&pending_xmit_lock, flags);
+
+        return ret;
+}
+
+static size_t
+v4v_try_sendv_sponsor(struct v4v_private *p,
+                      v4v_addr_t *dest,
+                      const v4v_iov_t *iovs, size_t niov, size_t len,
+                      uint32_t protocol)
+{
+        size_t ret;
+        unsigned long flags;
+
+        ret = H_v4v_sendv(&p->r->ring->id.addr, dest, iovs, niov, protocol);
+
+        spin_lock_irqsave(&pending_xmit_lock, flags);
+        if (ret == -EAGAIN) {
+                /* Add pending xmit */
+                xmit_queue_wakeup_sponsor(&p->r->ring->id, dest, len, 0);
+                p->send_blocked++;
+        } else {
+                /* Remove pending xmit */
+                xmit_queue_wakeup_sponsor(&p->r->ring->id, dest, len, 1);
+                p->send_blocked = 0;
+        }
+        spin_unlock_irqrestore(&pending_xmit_lock, flags);
+
+        return ret;
+}
+
+/*
+ * Try to send from one of the ring's privates (not its sponsor),
+ * and queue a writeq wakeup if we fail
+ */
+static size_t
+v4v_try_sendv_privates(struct v4v_private *p,
+                       v4v_addr_t *dest,
+                       const v4v_iov_t *iovs, size_t niov, size_t len,
+                       uint32_t protocol)
+{
+        size_t ret;
+        unsigned long flags;
+
+        ret = H_v4v_sendv(&p->r->ring->id.addr, dest, iovs, niov, protocol);
+
+        spin_lock_irqsave(&pending_xmit_lock, flags);
+        if (ret == -EAGAIN) {
+                /* Add pending xmit */
+                xmit_queue_wakeup_private(&p->r->ring->id, p->conid, dest, len,
+                                          0);
+                p->send_blocked++;
+        } else {
+                /* Remove pending xmit */
+                xmit_queue_wakeup_private(&p->r->ring->id, p->conid, dest, len,
+                                          1);
+                p->send_blocked = 0;
+        }
+        spin_unlock_irqrestore(&pending_xmit_lock, flags);
+
+        return ret;
+}
+
+static ssize_t
+v4v_sendto_from_sponsor(struct v4v_private *p,
+                        const void *buf, size_t len,
+                        int nonblock, v4v_addr_t *dest, uint32_t protocol)
+{
+        ssize_t ret = 0, ts_ret;
+
+        switch (p->state) {
+        case V4V_STATE_CONNECTING:
+                ret = -ENOTCONN;
+                break;
+        case V4V_STATE_DISCONNECTED:
+                ret = -EPIPE;
+                break;
+        case V4V_STATE_BOUND:
+        case V4V_STATE_CONNECTED:
+                break;
+        default:
+                ret = -EINVAL;
+        }
+
+        if (len > (p->r->ring->len - sizeof(struct v4v_ring_message_header)))
+                return -EMSGSIZE;
+
+        if (ret)
+                return ret;
+
+        if (nonblock) {
+                return H_v4v_send(&p->r->ring->id.addr, dest, buf, len,
+                                  protocol);
+        }
+        /*
+         * I happen to know that wait_event_interruptible will never
+         * evaluate the 2nd argument once it has returned true, but
+         * I shouldn't rely on that.
+         *
+         * The -EAGAIN will cause Xen to send an interrupt, which will
+         * wake us up via the pending_xmit_list and writeq.
+         */
+        ret = wait_event_interruptible(p->writeq,
+                                       ((ts_ret =
+                                         v4v_try_send_sponsor
+                                         (p, dest,
+                                          buf, len, protocol)) != -EAGAIN));
+        if (ret == 0)
+                ret = ts_ret;
+
+        return ret;
+}
+
+static ssize_t
+v4v_stream_sendvto_from_sponsor(struct v4v_private *p,
+                                const v4v_iov_t *iovs, size_t niov,
+                                size_t len, int nonblock,
+                                v4v_addr_t *dest, uint32_t protocol)
+{
+        ssize_t ret = 0, ts_ret;
+
+        switch (p->state) {
+        case V4V_STATE_CONNECTING:
+                return -ENOTCONN;
+        case V4V_STATE_DISCONNECTED:
+                return -EPIPE;
+        case V4V_STATE_BOUND:
+        case V4V_STATE_CONNECTED:
+                break;
+        default:
+                return -EINVAL;
+        }
+
+        if (len > (p->r->ring->len - sizeof(struct v4v_ring_message_header)))
+                return -EMSGSIZE;
+
+        if (ret)
+                return ret;
+
+        if (nonblock) {
+                return H_v4v_sendv(&p->r->ring->id.addr, dest, iovs, niov,
+                                   protocol);
+        }
+        /*
+         * I happen to know that wait_event_interruptible will never
+         * evaluate the 2nd argument once it has returned true, but
+         * I shouldn't rely on that.
+         *
+         * The -EAGAIN will cause Xen to send an interrupt, which will
+         * wake us up via the pending_xmit_list and writeq.
+         */
+        ret = wait_event_interruptible(p->writeq,
+                                       ((ts_ret =
+                                         v4v_try_sendv_sponsor
+                                         (p, dest,
+                                          iovs, niov, len,
+                                          protocol)) != -EAGAIN)
+                                       || !stream_connected(p));
+        if (ret == 0)
+                ret = ts_ret;
+
+        return ret;
+}
+
+static ssize_t
+v4v_stream_sendvto_from_private(struct v4v_private *p,
+                                const v4v_iov_t *iovs, size_t niov,
+                                size_t len, int nonblock,
+                                v4v_addr_t *dest, uint32_t protocol)
+{
+        ssize_t ret = 0, ts_ret;
+
+        switch (p->state) {
+        case V4V_STATE_DISCONNECTED:
+                return -EPIPE;
+        case V4V_STATE_ACCEPTED:
+                break;
+        default:
+                return -EINVAL;
+        }
+
+        if (len > (p->r->ring->len - sizeof(struct v4v_ring_message_header)))
+                return -EMSGSIZE;
+
+        if (ret)
+                return ret;
+
+        if (nonblock) {
+                return H_v4v_sendv(&p->r->ring->id.addr, dest, iovs, niov,
+                                   protocol);
+        }
+        /*
+         * I happen to know that wait_event_interruptible will never
+         * evaluate the 2nd argument once it has returned true, but
+         * I shouldn't rely on that.
+         *
+         * The -EAGAIN will cause Xen to send an interrupt, which will
+         * wake us up via the pending_xmit_list and writeq.
+         */
+        ret = wait_event_interruptible(p->writeq,
+                                       ((ts_ret =
+                                         v4v_try_sendv_privates
+                                         (p, dest,
+                                          iovs, niov, len,
+                                          protocol)) != -EAGAIN)
+                                       || !stream_connected(p));
+        if (ret == 0)
+                ret = ts_ret;
+
+        return ret;
+}
+
+static int v4v_get_sock_name(struct v4v_private *p, struct v4v_ring_id *id)
+{
+        int rc = 0;
+
+        read_lock(&list_lock);
+        if ((p->r) && (p->r->ring)) {
+                *id = p->r->ring->id;
+        } else {
+                rc = -EINVAL;
+        }
+        read_unlock(&list_lock);
+
+        return rc;
+}
+
+static int v4v_get_peer_name(struct v4v_private *p, v4v_addr_t *id)
+{
+        int rc = 0;
+
+        read_lock(&list_lock);
+
+        switch (p->state) {
+        case V4V_STATE_CONNECTING:
+        case V4V_STATE_CONNECTED:
+        case V4V_STATE_ACCEPTED:
+                *id = p->peer;
+                break;
+        default:
+                rc = -ENOTCONN;
+        }
+
+        read_unlock(&list_lock);
+        return rc;
+}
+
+static int v4v_set_ring_size(struct v4v_private *p, uint32_t ring_size)
+{
+        if (ring_size <
+            (sizeof(struct v4v_ring_message_header) + V4V_ROUNDUP(1)))
+                return -EINVAL;
+        if (ring_size != V4V_ROUNDUP(ring_size))
+                return -EINVAL;
+
+        read_lock(&list_lock);
+        if (p->state != V4V_STATE_IDLE) {
+                read_unlock(&list_lock);
+                return -EINVAL;
+        }
+
+        p->desired_ring_size = ring_size;
+        read_unlock(&list_lock);
+
+        return 0;
+}
+
+static ssize_t
+v4v_recvfrom_dgram(struct v4v_private *p, void *buf, size_t len,
+                   int nonblock, int peek, v4v_addr_t *src)
+{
+        ssize_t ret;
+        uint32_t protocol;
+        v4v_addr_t lsrc;
+
+        if (!src)
+                src = &lsrc;
+
+retry:
+        if (!nonblock) {
+                ret = wait_event_interruptible(p->readq,
+                                               (p->r->ring->rx_ptr !=
+                                                p->r->ring->tx_ptr));
+                if (ret)
+                        return ret;
+        }
+
+        read_lock(&list_lock);
+
+        /*
+         * For datagrams, we know the interrupt handler will never use
+         * the ring, so leave irqs on
+         */
+        spin_lock(&p->r->lock);
+        if (p->r->ring->rx_ptr == p->r->ring->tx_ptr) {
+                spin_unlock(&p->r->lock);
+                if (nonblock) {
+                        ret = -EAGAIN;
+                        goto unlock;
+                }
+                read_unlock(&list_lock);
+                goto retry;
+        }
+        ret = v4v_copy_out(p->r->ring, src, &protocol, buf, len, !peek);
+        if (ret < 0) {
+                recover_ring(p->r);
+                spin_unlock(&p->r->lock);
+                read_unlock(&list_lock);
+                goto retry;
+        }
+        spin_unlock(&p->r->lock);
+
+        if (!peek)
+                v4v_null_notify();
+
+        if (protocol != V4V_PROTO_DGRAM) {
+                /* If peeking, consume the rubbish */
+                if (peek)
+                        v4v_copy_out(p->r->ring, NULL, NULL, NULL, 1, 1);
+                read_unlock(&list_lock);
+                goto retry;
+        }
+
+        if ((p->state == V4V_STATE_CONNECTED) &&
+            memcmp(src, &p->peer, sizeof(v4v_addr_t))) {
+                /* Wrong source; bin it */
+                if (peek)
+                        v4v_copy_out(p->r->ring, NULL, NULL, NULL, 1, 1);
+                read_unlock(&list_lock);
+                goto retry;
+        }
+
+unlock:
+        read_unlock(&list_lock);
+
+        return ret;
+}
+
+static ssize_t
+v4v_recv_stream(struct v4v_private *p, void *_buf, int len, int recv_flags,
+                int nonblock)
+{
+        size_t count = 0;
+        int ret = 0;
+        unsigned long flags;
+        int schedule_irq = 0;
+        uint8_t *buf = (void *)_buf;
+
+        read_lock(&list_lock);
+
+        switch (p->state) {
+        case V4V_STATE_DISCONNECTED:
+                ret = -EPIPE;
+                goto unlock;
+        case V4V_STATE_CONNECTING:
+                ret = -ENOTCONN;
+                goto unlock;
+        case V4V_STATE_CONNECTED:
+        case V4V_STATE_ACCEPTED:
+                break;
+        default:
+                ret = -EINVAL;
+                goto unlock;
+        }
+
+        do {
+                if (!nonblock) {
+                        ret = wait_event_interruptible(p->readq,
+                                                       (!list_empty(&p->pending_recv_list)
+                                                        || !stream_connected(p)));
+
+                        if (ret)
+                                break;
+                }
+
+                spin_lock_irqsave(&p->pending_recv_lock, flags);
+
+                while (!list_empty(&p->pending_recv_list) && len) {
+                        size_t to_copy;
+                        struct pending_recv *pending;
+                        int unlink = 0;
+
+                        pending = list_first_entry(&p->pending_recv_list,
+                                                   struct pending_recv, node);
+
+                        if ((pending->data_len - pending->data_ptr) > len) {
+                                to_copy = len;
+                        } else {
+                                unlink = 1;
+                                to_copy = pending->data_len - pending->data_ptr;
+                        }
+
+                        if (!access_ok(VERIFY_WRITE, buf, to_copy)) {
+                                printk(KERN_ERR
+                                       "V4V - ERROR: buf invalid _buf=%p buf=%p len=%d to_copy=%zu count=%zu\n",
+                                       _buf, buf, len, to_copy, count);
+                                spin_unlock_irqrestore(&p->pending_recv_lock, flags);
+                                read_unlock(&list_lock);
+                                return -EFAULT;
+                        }
+
+                        if (copy_to_user(buf, pending->data + pending->data_ptr,
+                                         to_copy)) {
+                                spin_unlock_irqrestore(&p->pending_recv_lock, flags);
+                                read_unlock(&list_lock);
+                                return -EFAULT;
+                        }
+
+                        if (unlink) {
+                                list_del(&pending->node);
+                                kfree(pending);
+                                atomic_dec(&p->pending_recv_count);
+                                if (p->full)
+                                        schedule_irq = 1;
+                        } else
+                                pending->data_ptr += to_copy;
+
+                        buf += to_copy;
+                        count += to_copy;
+                        len -= to_copy;
+                }
+
+                spin_unlock_irqrestore(&p->pending_recv_lock, flags);
+
+                if (p->state == V4V_STATE_DISCONNECTED) {
+                        ret = -EPIPE;
+                        break;
+                }
+
+                if (nonblock) {
+                        ret = -EAGAIN;
+                        break;
+                }
+
+        } while ((recv_flags & MSG_WAITALL) && len);
+
+unlock:
+        read_unlock(&list_lock);
+
+        if (schedule_irq)
+                v4v_fake_irq();
+
+        return count ? count : ret;
+}
+
+static ssize_t
+v4v_send_stream(struct v4v_private *p, const void *_buf, int len, int nonblock)
+{
+        int write_lump;
+        const uint8_t *buf = _buf;
+        size_t count = 0;
+        ssize_t ret;
+        int to_send;
+
+        write_lump = DEFAULT_RING_SIZE >> 2;
+
+        switch (p->state) {
+        case V4V_STATE_DISCONNECTED:
+                return -EPIPE;
+        case V4V_STATE_CONNECTING:
+                return -ENOTCONN;
+        case V4V_STATE_CONNECTED:
+        case V4V_STATE_ACCEPTED:
+                break;
+        default:
+                return -EINVAL;
+        }
+
+        while (len) {
+                struct v4v_stream_header sh;
+                v4v_iov_t iovs[2];
+
+                to_send = len > write_lump ? write_lump : len;
+                sh.flags = 0;
+                sh.conid = p->conid;
+
+                iovs[0].iov_base = (uintptr_t)&sh;
+                iovs[0].iov_len = sizeof(sh);
+
+                iovs[1].iov_base = (uintptr_t)buf;
+                iovs[1].iov_len = to_send;
+
+                if (p->state == V4V_STATE_CONNECTED)
+                        ret = v4v_stream_sendvto_from_sponsor(
+                                p, iovs, 2,
+                                to_send + sizeof(struct v4v_stream_header),
+                                nonblock, &p->peer, V4V_PROTO_STREAM);
+                else
+                        ret = v4v_stream_sendvto_from_private(
+                                p, iovs, 2,
+                                to_send + sizeof(struct v4v_stream_header),
+                                nonblock, &p->peer, V4V_PROTO_STREAM);
+
+                if (ret < 0)
+                        return count ? count : ret;
+
+                len -= to_send;
+                buf += to_send;
+                count += to_send;
+
+                if (nonblock)
+                        return count;
+        }
+
+        return count;
+}
+
+static int v4v_bind(struct v4v_private *p, struct v4v_ring_id *ring_id)
+{
+        int ret = 0;
+
+        if (ring_id->addr.domain != V4V_DOMID_NONE)
+                return -EINVAL;
+
+        switch (p->ptype) {
+        case V4V_PTYPE_DGRAM:
+                ret = new_ring(p, ring_id);
+                if (!ret)
+                        p->r->type = V4V_RTYPE_DGRAM;
+                break;
+        case V4V_PTYPE_STREAM:
+                ret = new_ring(p, ring_id);
+                break;
+        }
+
+        return ret;
+}
+
+static int v4v_listen(struct v4v_private *p)
+{
+        if (p->ptype != V4V_PTYPE_STREAM)
+                return -EINVAL;
+
+        if (p->state != V4V_STATE_BOUND)
+                return -EINVAL;
+
+        p->r->type = V4V_RTYPE_LISTENER;
+        p->state = V4V_STATE_LISTENING;
+
+        return 0;
+}
+
+static int v4v_connect(struct v4v_private *p, v4v_addr_t *peer, int nonblock)
+{
+        struct v4v_stream_header sh;
+        int ret = -EINVAL;
+
+        if (p->ptype == V4V_PTYPE_DGRAM) {
+                switch (p->state) {
+                case V4V_STATE_BOUND:
+                case V4V_STATE_CONNECTED:
+                        if (peer) {
+                                p->state = V4V_STATE_CONNECTED;
+                                memcpy(&p->peer, peer, sizeof(v4v_addr_t));
+                        } else {
+                                p->state = V4V_STATE_BOUND;
+                        }
+                        return 0;
+                default:
+                        return -EINVAL;
+                }
+        }
+        if (p->ptype != V4V_PTYPE_STREAM)
+                return -EINVAL;
+
+        /* Irritatingly, we need to be restartable */
+        switch (p->state) {
+        case V4V_STATE_BOUND:
+                p->r->type = V4V_RTYPE_CONNECTOR;
+                p->state = V4V_STATE_CONNECTING;
+                p->conid = random32();
+                p->peer = *peer;
+
+                sh.flags = V4V_SHF_SYN;
+                sh.conid = p->conid;
+
+                ret = xmit_queue_inline(&p->r->ring->id, &p->peer, &sh,
+                                        sizeof(sh), V4V_PROTO_STREAM);
+                if (ret == sizeof(sh))
+                        ret = 0;
+
+                if (ret && (ret != -EAGAIN)) {
+                        p->state = V4V_STATE_BOUND;
+                        p->r->type = V4V_RTYPE_DGRAM;
+                        return ret;
+                }
+
+                break;
+        case V4V_STATE_CONNECTED:
+                if (memcmp(peer, &p->peer, sizeof(v4v_addr_t)))
+                        return -EINVAL;
+                return 0;
+        case V4V_STATE_CONNECTING:
+                if (memcmp(peer, &p->peer, sizeof(v4v_addr_t)))
+                        return -EINVAL;
+                break;
+        default:
+                return -EINVAL;
+        }
+
+        if (nonblock)
+                return -EINPROGRESS;
+
+        while (p->state != V4V_STATE_CONNECTED) {
+                ret = wait_event_interruptible(p->writeq,
+                                               (p->state !=
+                                                V4V_STATE_CONNECTING));
+                if (ret)
+                        return ret;
+
+                if (p->state == V4V_STATE_DISCONNECTED) {
+                        p->state = V4V_STATE_BOUND;
+                        p->r->type = V4V_RTYPE_DGRAM;
+                        ret = -ECONNREFUSED;
+                        break;
+                }
+        }
+
+        return ret;
+}
+
+static int allocate_fd_with_private(void *private)
+{
+        int fd;
+        struct file *f;
+        struct qstr name = { .name = "" };
+        struct path path;
+        struct inode *ind;
+
+        fd = get_unused_fd();
+        if (fd < 0)
+                return fd;
+
+        path.dentry = d_alloc_pseudo(v4v_mnt->mnt_sb, &name);
+        if (unlikely(!path.dentry)) {
+                put_unused_fd(fd);
+                return -ENOMEM;
+        }
+        ind = new_inode(v4v_mnt->mnt_sb);
+        if (!ind) {
+                dput(path.dentry);
+                put_unused_fd(fd);
+                return -ENOMEM;
+        }
+        ind->i_ino = get_next_ino();
+        ind->i_fop = v4v_mnt->mnt_root->d_inode->i_fop;
+        ind->i_state = v4v_mnt->mnt_root->d_inode->i_state;
+        ind->i_mode = v4v_mnt->mnt_root->d_inode->i_mode;
+        ind->i_uid = current_fsuid();
+        ind->i_gid = current_fsgid();
+        d_instantiate(path.dentry, ind);
+
+        path.mnt = mntget(v4v_mnt);
+
+        f = alloc_file(&path, FMODE_READ | FMODE_WRITE, &v4v_fops_stream);
+        if (!f) {
+                /* Drop the fd and path references taken above */
+                put_unused_fd(fd);
+                path_put(&path);
+                return -ENFILE;
+        }
+
+        f->private_data = private;
+        fd_install(fd, f);
+
+        return fd;
+}
+
+static int
+v4v_accept(struct v4v_private *p, struct v4v_addr *peer, int nonblock)
+{
+        int fd;
+        int ret = 0;
+        struct v4v_private *a = NULL;
+        struct pending_recv *r = NULL;
+        unsigned long flags;
+        struct v4v_stream_header sh;
+
+        if (p->ptype != V4V_PTYPE_STREAM)
+                return -ENOTTY;
+
+        if (p->state != V4V_STATE_LISTENING)
+                return -EINVAL;
+
+        /* FIXME: leak! */
+        for (;;) {
+                ret = wait_event_interruptible(p->readq,
+                                               (!list_empty
+                                                (&p->pending_recv_list))
+                                               || nonblock);
+                if (ret)
+                        return ret;
+
+                /* Write lock implicitly has pending_recv_lock */
+                write_lock_irqsave(&list_lock, flags);
+
+                if (!list_empty(&p->pending_recv_list)) {
+                        r = list_first_entry(&p->pending_recv_list,
+                                             struct pending_recv, node);
+
+                        list_del(&r->node);
+                        atomic_dec(&p->pending_recv_count);
+
+                        if ((!r->data_len) && (r->sh.flags & V4V_SHF_SYN))
+                                break;
+
+                        kfree(r);
+                }
+
+                write_unlock_irqrestore(&list_lock, flags);
+                if (nonblock)
+                        return -EAGAIN;
+        }
+        write_unlock_irqrestore(&list_lock, flags);
+
+        a = kmalloc(sizeof(struct v4v_private), GFP_KERNEL);
+        if (!a) {
+                ret = -ENOMEM;
+                goto release;
+        }
+
+        memset(a, 0, sizeof(struct v4v_private));
+        a->state = V4V_STATE_ACCEPTED;
+        a->ptype = V4V_PTYPE_STREAM;
+        a->r = p->r;
+        if (!get_ring(a->r)) {
+                a->r = NULL;
+                ret = -EINVAL;
+                goto release;
+        }
+
+        init_waitqueue_head(&a->readq);
+        init_waitqueue_head(&a->writeq);
+        spin_lock_init(&a->pending_recv_lock);
+        INIT_LIST_HEAD(&a->pending_recv_list);
+        atomic_set(&a->pending_recv_count, 0);
+
+        a->send_blocked = 0;
+        a->peer = r->from;
+        a->conid = r->sh.conid;
+
+        if (peer)
+                *peer = r->from;
+
+        fd = allocate_fd_with_private(a);
+        if (fd < 0) {
+                ret = fd;
+                goto release;
+        }
+
+        write_lock_irqsave(&list_lock, flags);
+        list_add(&a->node, &a->r->privates);
+        write_unlock_irqrestore(&list_lock, flags);
+
+        /* Ship the ACK */
+        sh.conid = a->conid;
+        sh.flags = V4V_SHF_ACK;
+
+        xmit_queue_inline(&a->r->ring->id, &a->peer, &sh,
+                          sizeof(sh), V4V_PROTO_STREAM);
+        kfree(r);
+
+        return fd;
+
+release:
+        kfree(r);
+        if (a) {
+                write_lock_irqsave(&list_lock, flags);
+                if (a->r)
+                        put_ring(a->r);
+                write_unlock_irqrestore(&list_lock, flags);
+                kfree(a);
+        }
+        return ret;
+}
+
+ssize_t
+v4v_sendto(struct v4v_private * p, const void *buf, size_t len, int flag=
s,
+           v4v_addr_t * addr, int nonblock)
+{
+        ssize_t rc;
+
+        if (!access_ok(VERIFY_READ, buf, len))
+                return -EFAULT;
+        if (!access_ok(VERIFY_READ, addr, len))
+                return -EFAULT;
+
+        if (flags & MSG_DONTWAIT)
+                nonblock++;
+
+        switch (p->ptype) {
+        case V4V_PTYPE_DGRAM:
+                switch (p->state) {
+                case V4V_STATE_BOUND:
+                        if (!addr)
+                                return -ENOTCONN;
+                        rc =3D v4v_sendto_from_sponsor(p, buf, len, nonb=
lock,
+                                                     addr, V4V_PROTO_DGR=
AM);
+                        break;
+
+                case V4V_STATE_CONNECTED:
+                        if (addr)
+                                return -EISCONN;
+
+                        rc =3D v4v_sendto_from_sponsor(p, buf, len, nonb=
lock,
+                                                     &p->peer, V4V_PROTO=
_DGRAM);
+                        break;
+
+                default:
+                        return -EINVAL;
+                }
+                break;
+        case V4V_PTYPE_STREAM:
+                if (addr)
+                        return -EISCONN;
+                switch (p->state) {
+                case V4V_STATE_CONNECTING:
+                case V4V_STATE_BOUND:
+                        return -ENOTCONN;
+                case V4V_STATE_CONNECTED:
+                case V4V_STATE_ACCEPTED:
+                        rc =3D v4v_send_stream(p, buf, len, nonblock);
+                        break;
+                case V4V_STATE_DISCONNECTED:
+
+                        rc =3D -EPIPE;
+                        break;
+                default:
+
+                        return -EINVAL;
+                }
+                break;
+        default:
+                return -ENOTTY;
+        }
+
+        if ((rc == -EPIPE) && !(flags & MSG_NOSIGNAL))
+                send_sig(SIGPIPE, current, 0);
+
+        return rc;
+}
+
+ssize_t
+v4v_recvfrom(struct v4v_private * p, void *buf, size_t len, int flags,
+             v4v_addr_t * addr, int nonblock)
+{
+        int peek = 0;
+        ssize_t rc = 0;
+
+        if (!access_ok(VERIFY_WRITE, buf, len))
+                return -EFAULT;
+        if ((addr) && (!access_ok(VERIFY_WRITE, addr, sizeof(v4v_addr_t))))
+                return -EFAULT;
+
+        if (flags & MSG_DONTWAIT)
+                nonblock++;
+        if (flags & MSG_PEEK)
+                peek++;
+
+        switch (p->ptype) {
+        case V4V_PTYPE_DGRAM:
+                rc = v4v_recvfrom_dgram(p, buf, len, nonblock, peek, addr);
+                break;
+        case V4V_PTYPE_STREAM:
+                if (peek)
+                        return -EINVAL;
+
+                switch (p->state) {
+                case V4V_STATE_BOUND:
+                        return -ENOTCONN;
+                case V4V_STATE_CONNECTED:
+                case V4V_STATE_ACCEPTED:
+                        if (addr)
+                                *addr = p->peer;
+                        rc = v4v_recv_stream(p, buf, len, flags, nonblock);
+                        break;
+                case V4V_STATE_DISCONNECTED:
+                        rc = 0;
+                        break;
+                default:
+                        rc = -EINVAL;
+                }
+        }
+
+        if ((rc > (ssize_t) len) && !(flags & MSG_TRUNC))
+                rc = len;
+
+        return rc;
+}
+
+/* fops */
+
+static int v4v_open_dgram(struct inode *inode, struct file *f)
+{
+        struct v4v_private *p;
+
+        p = kmalloc(sizeof(struct v4v_private), GFP_KERNEL);
+        if (!p)
+                return -ENOMEM;
+
+        memset(p, 0, sizeof(struct v4v_private));
+        p->state = V4V_STATE_IDLE;
+        p->desired_ring_size = DEFAULT_RING_SIZE;
+        p->r = NULL;
+        p->ptype = V4V_PTYPE_DGRAM;
+        p->send_blocked = 0;
+
+        init_waitqueue_head(&p->readq);
+        init_waitqueue_head(&p->writeq);
+
+        spin_lock_init(&p->pending_recv_lock);
+        INIT_LIST_HEAD(&p->pending_recv_list);
+        atomic_set(&p->pending_recv_count, 0);
+
+        f->private_data = p;
+        return 0;
+}
+
+static int v4v_open_stream(struct inode *inode, struct file *f)
+{
+        struct v4v_private *p;
+
+        p = kmalloc(sizeof(struct v4v_private), GFP_KERNEL);
+        if (!p)
+                return -ENOMEM;
+
+        memset(p, 0, sizeof(struct v4v_private));
+        p->state = V4V_STATE_IDLE;
+        p->desired_ring_size = DEFAULT_RING_SIZE;
+        p->r = NULL;
+        p->ptype = V4V_PTYPE_STREAM;
+        p->send_blocked = 0;
+
+        init_waitqueue_head(&p->readq);
+        init_waitqueue_head(&p->writeq);
+
+        spin_lock_init(&p->pending_recv_lock);
+        INIT_LIST_HEAD(&p->pending_recv_list);
+        atomic_set(&p->pending_recv_count, 0);
+
+        f->private_data = p;
+        return 0;
+}
+
+static int v4v_release(struct inode *inode, struct file *f)
+{
+        struct v4v_private *p = (struct v4v_private *)f->private_data;
+        unsigned long flags;
+        struct pending_recv *pending;
+
+        if (p->ptype == V4V_PTYPE_STREAM && p->r) {
+                switch (p->state) {
+                case V4V_STATE_CONNECTED:
+                case V4V_STATE_CONNECTING:
+                case V4V_STATE_ACCEPTED:
+                        xmit_queue_rst_to(&p->r->ring->id, p->conid, &p->peer);
+                        break;
+                default:
+                        break;
+                }
+        }
+
+        write_lock_irqsave(&list_lock, flags);
+        if (!p->r) {
+                write_unlock_irqrestore(&list_lock, flags);
+                goto release;
+        }
+
+        if (p != p->r->sponsor) {
+                put_ring(p->r);
+                list_del(&p->node);
+                write_unlock_irqrestore(&list_lock, flags);
+                goto release;
+        }
+
+        p->r->sponsor = NULL;
+        put_ring(p->r);
+        write_unlock_irqrestore(&list_lock, flags);
+
+        while (!list_empty(&p->pending_recv_list)) {
+                pending =
+                    list_first_entry(&p->pending_recv_list,
+                                     struct pending_recv, node);
+
+                list_del(&pending->node);
+                kfree(pending);
+                atomic_dec(&p->pending_recv_count);
+        }
+
+ release:
+        kfree(p);
+
+        return 0;
+}
+
+static ssize_t
+v4v_write(struct file *f, const char __user * buf, size_t count, loff_t * ppos)
+{
+        struct v4v_private *p = f->private_data;
+        int nonblock = f->f_flags & O_NONBLOCK;
+
+        return v4v_sendto(p, buf, count, 0, NULL, nonblock);
+}
+
+static ssize_t
+v4v_read(struct file *f, char __user * buf, size_t count, loff_t * ppos)
+{
+        struct v4v_private *p = f->private_data;
+        int nonblock = f->f_flags & O_NONBLOCK;
+
+        return v4v_recvfrom(p, (void *)buf, count, 0, NULL, nonblock);
+}
+
+static long v4v_ioctl(struct file *f, unsigned int cmd, unsigned long arg)
+{
+        int rc = -ENOTTY;
+
+        int nonblock = f->f_flags & O_NONBLOCK;
+        struct v4v_private *p = f->private_data;
+
+        if (_IOC_TYPE(cmd) != V4V_TYPE)
+                return rc;
+
+        switch (cmd) {
+        case V4VIOCSETRINGSIZE:
+                if (!access_ok(VERIFY_READ, arg, sizeof(uint32_t)))
+                        return -EFAULT;
+                rc = v4v_set_ring_size(p, *(uint32_t *) arg);
+                break;
+        case V4VIOCBIND:
+                if (!access_ok(VERIFY_READ, arg, sizeof(struct v4v_ring_id)))
+                        return -EFAULT;
+                rc = v4v_bind(p, (struct v4v_ring_id *)arg);
+                break;
+        case V4VIOCGETSOCKNAME:
+                if (!access_ok(VERIFY_WRITE, arg, sizeof(struct v4v_ring_id)))
+                        return -EFAULT;
+                rc = v4v_get_sock_name(p, (struct v4v_ring_id *)arg);
+                break;
+        case V4VIOCGETPEERNAME:
+                if (!access_ok(VERIFY_WRITE, arg, sizeof(v4v_addr_t)))
+                        return -EFAULT;
+                rc = v4v_get_peer_name(p, (v4v_addr_t *) arg);
+                break;
+        case V4VIOCCONNECT:
+                if (!access_ok(VERIFY_READ, arg, sizeof(v4v_addr_t)))
+                        return -EFAULT;
+                /* Bind if not done */
+                if (p->state == V4V_STATE_IDLE) {
+                        struct v4v_ring_id id;
+                        memset(&id, 0, sizeof(id));
+                        id.partner = V4V_DOMID_NONE;
+                        id.addr.domain = V4V_DOMID_NONE;
+                        id.addr.port = 0;
+                        rc = v4v_bind(p, &id);
+                        if (rc)
+                                break;
+                }
+                rc = v4v_connect(p, (v4v_addr_t *) arg, nonblock);
+                break;
+        case V4VIOCGETCONNECTERR:
+                {
+                        unsigned long flags;
+                        if (!access_ok(VERIFY_WRITE, arg, sizeof(int)))
+                                return -EFAULT;
+
+                        spin_lock_irqsave(&p->pending_recv_lock, flags);
+                        *(int *)arg = p->pending_error;
+                        p->pending_error = 0;
+                        spin_unlock_irqrestore(&p->pending_recv_lock, flags);
+                        rc = 0;
+                }
+                break;
+        case V4VIOCLISTEN:
+                rc = v4v_listen(p);
+                break;
+        case V4VIOCACCEPT:
+                if (!access_ok(VERIFY_WRITE, arg, sizeof(v4v_addr_t)))
+                        return -EFAULT;
+                rc = v4v_accept(p, (v4v_addr_t *) arg, nonblock);
+                break;
+        case V4VIOCSEND:
+                if (!access_ok(VERIFY_READ, arg, sizeof(struct v4v_dev)))
+                        return -EFAULT;
+                {
+                        struct v4v_dev a = *(struct v4v_dev *)arg;
+
+                        rc = v4v_sendto(p, a.buf, a.len, a.flags, a.addr,
+                                        nonblock);
+                }
+                break;
+        case V4VIOCRECV:
+                if (!access_ok(VERIFY_READ, arg, sizeof(struct v4v_dev)))
+                        return -EFAULT;
+                {
+                        struct v4v_dev a = *(struct v4v_dev *)arg;
+                        rc = v4v_recvfrom(p, a.buf, a.len, a.flags, a.addr,
+                                          nonblock);
+                }
+                break;
+        case V4VIOCVIPTABLESADD:
+                if (!access_ok
+                    (VERIFY_READ, arg, sizeof(struct v4v_viptables_rule_pos)))
+                        return -EFAULT;
+                {
+                        struct v4v_viptables_rule_pos *rule =
+                            (struct v4v_viptables_rule_pos *)arg;
+                        v4v_viptables_add(p, rule->rule, rule->position);
+                        rc = 0;
+                }
+                break;
+        case V4VIOCVIPTABLESDEL:
+                if (!access_ok
+                    (VERIFY_READ, arg, sizeof(struct v4v_viptables_rule_pos)))
+                        return -EFAULT;
+                {
+                        struct v4v_viptables_rule_pos *rule =
+                            (struct v4v_viptables_rule_pos *)arg;
+                        v4v_viptables_del(p, rule->rule, rule->position);
+                        rc = 0;
+                }
+                break;
+        case V4VIOCVIPTABLESLIST:
+                if (!access_ok
+                    (VERIFY_READ, arg, sizeof(struct v4v_viptables_list)))
+                        return -EFAULT;
+                {
+                        struct v4v_viptables_list *list =
+                            (struct v4v_viptables_list *)arg;
+                        rc = v4v_viptables_list(p, list);
+                }
+                break;
+        default:
+                printk(KERN_ERR "v4v: unknown ioctl, cmd:0x%x nr:%d size:0x%x\n",
+                       cmd, _IOC_NR(cmd), _IOC_SIZE(cmd));
+        }
+
+        return rc;
+}
+
+static unsigned int v4v_poll(struct file *f, poll_table * pt)
+{
+        unsigned int mask = 0;
+        struct v4v_private *p =3D f->private_data;
+
+        read_lock(&list_lock);
+
+        switch (p->ptype) {
+        case V4V_PTYPE_DGRAM:
+                switch (p->state) {
+                case V4V_STATE_CONNECTED:
+                case V4V_STATE_BOUND:
+                        poll_wait(f, &p->readq, pt);
+                        mask |= POLLOUT | POLLWRNORM;
+                        if (p->r->ring->tx_ptr != p->r->ring->rx_ptr)
+                                mask |= POLLIN | POLLRDNORM;
+                        break;
+                default:
+                        break;
+                }
+                break;
+        case V4V_PTYPE_STREAM:
+                switch (p->state) {
+                case V4V_STATE_BOUND:
+                        break;
+                case V4V_STATE_LISTENING:
+                        poll_wait(f, &p->readq, pt);
+                        if (!list_empty(&p->pending_recv_list))
+                                mask |= POLLIN | POLLRDNORM;
+                        break;
+                case V4V_STATE_ACCEPTED:
+                case V4V_STATE_CONNECTED:
+                        poll_wait(f, &p->readq, pt);
+                        poll_wait(f, &p->writeq, pt);
+                        if (!p->send_blocked)
+                                mask |= POLLOUT | POLLWRNORM;
+                        if (!list_empty(&p->pending_recv_list))
+                                mask |= POLLIN | POLLRDNORM;
+                        break;
+                case V4V_STATE_CONNECTING:
+                        poll_wait(f, &p->writeq, pt);
+                        break;
+                case V4V_STATE_DISCONNECTED:
+                        mask |= POLLOUT | POLLWRNORM;
+                        mask |= POLLIN | POLLRDNORM;
+                        break;
+                case V4V_STATE_IDLE:
+                        break;
+                }
+                break;
+        }
+
+        read_unlock(&list_lock);
+        return mask;
+}
+
+static const struct file_operations v4v_fops_stream = {
+        .owner = THIS_MODULE,
+        .write = v4v_write,
+        .read = v4v_read,
+        .unlocked_ioctl = v4v_ioctl,
+        .open = v4v_open_stream,
+        .release = v4v_release,
+        .poll = v4v_poll,
+};
+
+static const struct file_operations v4v_fops_dgram = {
+        .owner = THIS_MODULE,
+        .write = v4v_write,
+        .read = v4v_read,
+        .unlocked_ioctl = v4v_ioctl,
+        .open = v4v_open_dgram,
+        .release = v4v_release,
+        .poll = v4v_poll,
+};
+
+/* Xen VIRQ */
+static int v4v_irq = -1;
+
+static void unbind_virq(void)
+{
+        unbind_from_irqhandler(v4v_irq, NULL);
+        v4v_irq = -1;
+}
+
+static int bind_evtchn(void)
+{
+        v4v_info_t info;
+        int result;
+
+        v4v_info(&info);
+        if (info.ring_magic != V4V_RING_MAGIC)
+                return 1;
+
+        result =
+                bind_interdomain_evtchn_to_irqhandler(
+                        0, info.evtchn,
+                        v4v_interrupt, IRQF_SAMPLE_RANDOM, "v4v", NULL);
+
+        if (result < 0)
+                return result;
+
+        v4v_irq = result;
+
+        return 0;
+}
+
+/* V4V Device */
+
+static struct miscdevice v4v_miscdev_dgram = {
+        .minor = MISC_DYNAMIC_MINOR,
+        .name = "v4v_dgram",
+        .fops = &v4v_fops_dgram,
+};
+
+static struct miscdevice v4v_miscdev_stream = {
+        .minor = MISC_DYNAMIC_MINOR,
+        .name = "v4v_stream",
+        .fops = &v4v_fops_stream,
+};
+
+static int v4v_suspend(struct platform_device *dev, pm_message_t state)
+{
+        unbind_virq();
+        return 0;
+}
+
+static int v4v_resume(struct platform_device *dev)
+{
+        struct ring *r;
+
+        read_lock(&list_lock);
+        list_for_each_entry(r, &ring_list, node) {
+                refresh_pfn_list(r);
+                if (register_ring(r)) {
+                        printk(KERN_ERR
+                               "Failed to re-register a v4v ring on resume, port=0x%08x\n",
+                               r->ring->id.addr.port);
+                }
+        }
+        read_unlock(&list_lock);
+
+        if (bind_evtchn()) {
+                printk(KERN_ERR "v4v_resume: failed to bind v4v evtchn\n");
+                return -ENODEV;
+        }
+
+        return 0;
+}
+
+static void v4v_shutdown(struct platform_device *dev)
+{
+}
+
+static int __devinit v4v_probe(struct platform_device *dev)
+{
+        int err = 0;
+        int ret;
+
+        ret = setup_fs();
+        if (ret)
+                return ret;
+
+        INIT_LIST_HEAD(&ring_list);
+        rwlock_init(&list_lock);
+        INIT_LIST_HEAD(&pending_xmit_list);
+        spin_lock_init(&pending_xmit_lock);
+        spin_lock_init(&interrupt_lock);
+        atomic_set(&pending_xmit_count, 0);
+
+        if (bind_evtchn()) {
+                printk(KERN_ERR "failed to bind v4v evtchn\n");
+                unsetup_fs();
+                return -ENODEV;
+        }
+
+        err = misc_register(&v4v_miscdev_dgram);
+        if (err != 0) {
+                printk(KERN_ERR "Could not register /dev/v4v_dgram\n");
+                unsetup_fs();
+                return err;
+        }
+
+        err = misc_register(&v4v_miscdev_stream);
+        if (err != 0) {
+                printk(KERN_ERR "Could not register /dev/v4v_stream\n");
+                misc_deregister(&v4v_miscdev_dgram);
+                unsetup_fs();
+                return err;
+        }
+
+        printk(KERN_INFO "Xen V4V device installed.\n");
+        return 0;
+}
+
+/* Platform Gunge */
+
+static int __devexit v4v_remove(struct platform_device *dev)
+{
+        unbind_virq();
+        misc_deregister(&v4v_miscdev_dgram);
+        misc_deregister(&v4v_miscdev_stream);
+        unsetup_fs();
+        return 0;
+}
+
+static struct platform_driver v4v_driver = {
+        .driver = {
+                   .name = "v4v",
+                   .owner = THIS_MODULE,
+                   },
+        .probe = v4v_probe,
+        .remove = __devexit_p(v4v_remove),
+        .shutdown = v4v_shutdown,
+        .suspend = v4v_suspend,
+        .resume = v4v_resume,
+};
+
+static struct platform_device *v4v_platform_device;
+
+static int __init v4v_init(void)
+{
+        int error;
+
+        if (!xen_domain()) {
+                printk(KERN_ERR "v4v only works under Xen\n");
+                return -ENODEV;
+        }
+
+        error = platform_driver_register(&v4v_driver);
+        if (error)
+                return error;
+
+        v4v_platform_device = platform_device_alloc("v4v", -1);
+        if (!v4v_platform_device) {
+                platform_driver_unregister(&v4v_driver);
+                return -ENOMEM;
+        }
+
+        error = platform_device_add(v4v_platform_device);
+        if (error) {
+                platform_device_put(v4v_platform_device);
+                platform_driver_unregister(&v4v_driver);
+                return error;
+        }
+
+        return 0;
+}
+
+static void __exit v4v_cleanup(void)
+{
+        platform_device_unregister(v4v_platform_device);
+        platform_driver_unregister(&v4v_driver);
+}
+
+module_init(v4v_init);
+module_exit(v4v_cleanup);
+MODULE_LICENSE("GPL");
diff --git a/drivers/xen/v4v_utils.h b/drivers/xen/v4v_utils.h
new file mode 100644
index 0000000..91c00b6
--- /dev/null
+++ b/drivers/xen/v4v_utils.h
@@ -0,0 +1,278 @@
+/******************************************************************************
+ * V4V
+ *
+ * Version 2 of v2v (Virtual-to-Virtual)
+ *
+ * Copyright (c) 2010, Citrix Systems
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
+ */
+
+#ifndef __V4V_UTILS_H__
+# define __V4V_UTILS_H__
+
+/* Compiler specific hacks */
+#if defined(__GNUC__)
+# define V4V_UNUSED __attribute__ ((unused))
+# ifndef __STRICT_ANSI__
+#  define V4V_INLINE inline
+# else
+#  define V4V_INLINE
+# endif
+#else /* !__GNUC__ */
+# define V4V_UNUSED
+# define V4V_INLINE
+#endif
+
+
+/*
+ * Utility functions
+ */
+static V4V_INLINE uint32_t
+v4v_ring_bytes_to_read (volatile struct v4v_ring *r)
+{
+        int32_t ret;
+        ret = r->tx_ptr - r->rx_ptr;
+        if (ret >= 0)
+                return ret;
+        return (uint32_t) (r->len + ret);
+}
+
+
+/*
+ * Copy at most t bytes of the next message in the ring, into the buffer
+ * at _buf, setting from and protocol if they are not NULL, returns
+ * the actual length of the message, or -1 if there is nothing to read
+ */
+V4V_UNUSED static V4V_INLINE ssize_t
+v4v_copy_out (struct v4v_ring *r, struct v4v_addr *from, uint32_t * protocol,
+              void *_buf, size_t t, int consume)
+{
+        volatile struct v4v_ring_message_header *mh;
+        /* unnecessary cast from void * required by MSVC compiler */
+        uint8_t *buf = (uint8_t *) _buf;
+        uint32_t btr = v4v_ring_bytes_to_read (r);
+        uint32_t rxp = r->rx_ptr;
+        uint32_t bte;
+        uint32_t len;
+        ssize_t ret;
+
+
+        if (btr < sizeof (*mh))
+                return -1;
+
+        /*
+         * Because the message_header is 128 bits long and the ring is 128 bit
+         * aligned, we're guaranteed never to wrap
+         */
+        mh = (volatile struct v4v_ring_message_header *) &r->ring[r->rx_ptr];
+
+        len = mh->len;
+
+        if (btr < len)
+        {
+                return -1;
+        }
+
+#if defined(__GNUC__)
+        if (from)
+                *from = mh->source;
+#else
+        /* MSVC can't do the above */
+        if (from)
+                memcpy((void *) from, (void *) &(mh->source), sizeof(struct v4v_addr));
+#endif
+
+        if (protocol)
+                *protocol = mh->protocol;
+
+        rxp += sizeof (*mh);
+        if (rxp == r->len)
+                rxp = 0;
+        len -= sizeof (*mh);
+        ret = len;
+
+        bte = r->len - rxp;
+
+        if (bte < len)
+        {
+                if (t < bte)
+                {
+                        if (buf)
+                        {
+                                memcpy (buf, (void *) &r->ring[rxp], t);
+                                buf += t;
+                        }
+
+                        rxp = 0;
+                        len -= bte;
+                        t = 0;
+                }
+                else
+                {
+                        if (buf)
+                        {
+                                memcpy (buf, (void *) &r->ring[rxp], bte);
+                                buf += bte;
+                        }
+                        rxp = 0;
+                        len -= bte;
+                        t -= bte;
+                }
+        }
+
+        if (buf && t)
+                memcpy (buf, (void *) &r->ring[rxp], (t < len) ? t : len);
+
+
+        rxp += V4V_ROUNDUP (len);
+        if (rxp == r->len)
+                rxp = 0;
+
+        mb ();
+
+        if (consume)
+                r->rx_ptr = rxp;
+
+        return ret;
+}
+
+static V4V_INLINE void
+v4v_memcpy_skip (void *_dst, const void *_src, size_t len, size_t *skip)
+{
+        const uint8_t *src = (const uint8_t *) _src;
+        uint8_t *dst = (uint8_t *) _dst;
+
+        if (!*skip)
+        {
+                memcpy (dst, src, len);
+                return;
+        }
+
+        if (*skip >= len)
+        {
+                *skip -= len;
+                return;
+        }
+
+        src += *skip;
+        dst += *skip;
+        len -= *skip;
+        *skip = 0;
+
+        memcpy (dst, src, len);
+}
+
+/*
+ * Copy at most t bytes of the next message in the ring, into the buffer
+ * at _buf, skipping skip bytes, setting from and protocol if they are not
+ * NULL, returns the actual length of the message, or -1 if there is
+ * nothing to read
+ */
+static ssize_t
+v4v_copy_out_offset(struct v4v_ring *r, struct v4v_addr *from,
+                    uint32_t * protocol, void *_buf, size_t t, int consume,
+                    size_t skip) V4V_UNUSED;
+
+V4V_INLINE static ssize_t
+v4v_copy_out_offset(struct v4v_ring *r, struct v4v_addr *from,
+                    uint32_t * protocol, void *_buf, size_t t, int consume,
+                    size_t skip)
+{
+        volatile struct v4v_ring_message_header *mh;
+        /* unnecessary cast from void * required by MSVC compiler */
+        uint8_t *buf = (uint8_t *) _buf;
+        uint32_t btr = v4v_ring_bytes_to_read (r);
+        uint32_t rxp = r->rx_ptr;
+        uint32_t bte;
+        uint32_t len;
+        ssize_t ret;
+
+        buf -= skip;
+
+        if (btr < sizeof (*mh))
+                return -1;
+
+        /*
+         * Because the message_header is 128 bits long and the ring is 128 bit
+         * aligned, we're guaranteed never to wrap
+         */
+        mh = (volatile struct v4v_ring_message_header *)&r->ring[r->rx_ptr];
+
+        len = mh->len;
+        if (btr < len)
+                return -1;
+
+#if defined(__GNUC__)
+        if (from)
+                *from = mh->source;
+#else
+        /* MSVC can't do the above */
+        if (from)
+                memcpy((void *)from, (void *)&(mh->source), sizeof(struct v4v_addr));
+#endif
+
+        if (protocol)
+                *protocol = mh->protocol;
+
+        rxp += sizeof (*mh);
+        if (rxp == r->len)
+                rxp = 0;
+        len -= sizeof (*mh);
+        ret = len;
+
+        bte = r->len - rxp;
+
+        if (bte < len)
+        {
+                if (t < bte)
+                {
+                        if (buf)
+                        {
+                                v4v_memcpy_skip (buf, (void *) &r->ring[rxp], t, &skip);
+                                buf += t;
+                        }
+
+                        rxp = 0;
+                        len -= bte;
+                        t = 0;
+                }
+                else
+                {
+                        if (buf)
+                        {
+                                v4v_memcpy_skip (buf, (void *) &r->ring[rxp], bte,
+                                                &skip);
+                                buf += bte;
+                        }
+                        rxp = 0;
+                        len -= bte;
+                        t -= bte;
+                }
+        }
+
+        if (buf && t)
+                v4v_memcpy_skip (buf, (void *) &r->ring[rxp], (t < len) ? t : len,
+                                &skip);
+
+
+        rxp += V4V_ROUNDUP (len);
+        if (rxp == r->len)
+                rxp = 0;
+
+        mb ();
+
+        if (consume)
+                r->rx_ptr = rxp;
+
+        return ret;
+}
+
+#endif /* !__V4V_UTILS_H__ */
diff --git a/include/xen/interface/v4v.h b/include/xen/interface/v4v.h
new file mode 100644
index 0000000..36ff95c
--- /dev/null
+++ b/include/xen/interface/v4v.h
@@ -0,0 +1,299 @@
+/******************************************************************************
+ * V4V
+ *
+ * Version 2 of v2v (Virtual-to-Virtual)
+ *
+ * Copyright (c) 2010, Citrix Systems
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
+ */
+
+#ifndef __XEN_PUBLIC_V4V_H__
+#define __XEN_PUBLIC_V4V_H__
+
+/*
+ * Structure definitions
+ */
+
+#define V4V_RING_MAGIC          0xA822F72BB0B9D8CC
+#define V4V_RING_DATA_MAGIC	0x45FE852220B801E4
+
+#define V4V_PROTO_DGRAM		0x3c2c1db8
+#define V4V_PROTO_STREAM 	0x70f6a8e5
+
+#define V4V_DOMID_INVALID       (0x7FFFU)
+#define V4V_DOMID_NONE          V4V_DOMID_INVALID
+#define V4V_DOMID_ANY           V4V_DOMID_INVALID
+#define V4V_PORT_NONE           0
+
+typedef struct v4v_iov
+{
+    uint64_t iov_base;
+    uint64_t iov_len;
+} v4v_iov_t;
+
+typedef struct v4v_addr
+{
+    uint32_t port;
+    domid_t domain;
+    uint16_t pad;
+} v4v_addr_t;
+
+typedef struct v4v_ring_id
+{
+    v4v_addr_t addr;
+    domid_t partner;
+    uint16_t pad;
+} v4v_ring_id_t;
+
+typedef uint64_t v4v_pfn_t;
+
+typedef struct
+{
+    v4v_addr_t src;
+    v4v_addr_t dst;
+} v4v_send_addr_t;
+
+/*
+ * v4v_ring
+ * id:
+ * xen only looks at this during register/unregister
+ * and will fill in id.addr.domain
+ *
+ * rx_ptr: rx pointer, modified by domain
+ * tx_ptr: tx pointer, modified by xen
+ *
+ */
+struct v4v_ring
+{
+    uint64_t magic;
+    v4v_ring_id_t id;
+    uint32_t len;
+    uint32_t rx_ptr;
+    uint32_t tx_ptr;
+    uint8_t reserved[32];
+    uint8_t ring[0];
+};
+typedef struct v4v_ring v4v_ring_t;
+
+#define V4V_RING_DATA_F_EMPTY       (1U << 0) /* Ring is empty */
+#define V4V_RING_DATA_F_EXISTS      (1U << 1) /* Ring exists */
+#define V4V_RING_DATA_F_PENDING     (1U << 2) /* Pending interrupt exists - do not
+                                               * rely on this field - for
+                                               * profiling only */
+#define V4V_RING_DATA_F_SUFFICIENT  (1U << 3) /* Sufficient space to queue
+                                               * space_required bytes exists */
+
+#if defined(__GNUC__)
+# define V4V_RING_DATA_ENT_FULLRING
+# define V4V_RING_DATA_ENT_FULL
+#else
+# define V4V_RING_DATA_ENT_FULLRING fullring
+# define V4V_RING_DATA_ENT_FULL full
+#endif
+typedef struct v4v_ring_data_ent
+{
+    v4v_addr_t ring;
+    uint16_t flags;
+    uint16_t pad;
+    uint32_t space_required;
+    uint32_t max_message_size;
+} v4v_ring_data_ent_t;
+
+typedef struct v4v_ring_data
+{
+    uint64_t magic;
+    uint32_t nent;
+    uint32_t pad;
+    uint64_t reserved[4];
+    v4v_ring_data_ent_t data[0];
+} v4v_ring_data_t;
+
+struct v4v_info
+{
+    uint64_t ring_magic;
+    uint64_t data_magic;
+    evtchn_port_t evtchn;
+};
+typedef struct v4v_info v4v_info_t;
+
+#define V4V_ROUNDUP(a) (((a) + 0xf) & ~0xf)
+/*
+ * Messages on the ring are padded to 128 bits
+ * Len here refers to the exact length of the data not including the
+ * 128 bit header. The message uses
+ * ((len +0xf) & ~0xf) + sizeof(v4v_ring_message_header) bytes
+ */
+
+#define V4V_SHF_SYN		(1 << 0)
+#define V4V_SHF_ACK		(1 << 1)
+#define V4V_SHF_RST		(1 << 2)
+
+#define V4V_SHF_PING		(1 << 8)
+#define V4V_SHF_PONG		(1 << 9)
+
+struct v4v_stream_header
+{
+    uint32_t flags;
+    uint32_t conid;
+};
+
+struct v4v_ring_message_header
+{
+    uint32_t len;
+    uint32_t pad0;
+    v4v_addr_t source;
+    uint32_t protocol;
+    uint32_t pad1;
+    uint8_t data[0];
+};
+
+typedef struct v4v_viptables_rule
+{
+    v4v_addr_t src;
+    v4v_addr_t dst;
+    uint32_t accept;
+    uint32_t pad;
+} v4v_viptables_rule_t;
+
+typedef struct v4v_viptables_list
+{
+    uint32_t start_rule;
+    uint32_t nb_rules;
+    struct v4v_viptables_rule rules[0];
+} v4v_viptables_list_t;
+
+/*
+ * HYPERCALLS
+ */
+
+#define V4VOP_register_ring 	1
+/*
+ * Registers a ring with Xen, if a ring with the same v4v_ring_id exists=
,
+ * this ring takes its place, registration will not change tx_ptr
+ * unless it is invalid
+ *
+ * v4v_hypercall(V4VOP_register_ring,
+ *               v4v_ring, XEN_GUEST_HANDLE(v4v_pfn),
+ *               npage, 0)
+ */
+
+
+#define V4VOP_unregister_ring 	2
+/*
+ * Unregister a ring.
+ *
+ * v4v_hypercall(V4VOP_unregister_ring, v4v_ring, NULL, 0, 0)
+ */
+
+#define V4VOP_send 		3
+/*
+ * Sends len bytes of buf to dst, giving src as the source address (xen will
+ * ignore src->domain and put your domain in the actual message), xen
+ * first looks for a ring with id.addr==dst and id.partner==sending_domain
+ * if that fails it looks for id.addr==dst and id.partner==DOMID_ANY.
+ * protocol is the 32 bit protocol number used for the message,
+ * most likely V4V_PROTO_DGRAM or STREAM. If insufficient space exists
+ * it will return -EAGAIN and xen will trigger the V4V_INTERRUPT when
+ * sufficient space becomes available
+ *
+ * v4v_hypercall(V4VOP_send,
+ *               v4v_send_addr_t addr,
+ *               void* buf,
+ *               uint32_t len,
+ *               uint32_t protocol)
+ */
+
+
+#define V4VOP_notify 		4
+/* Asks xen for information about other rings in the system
+ *
+ * ent->ring is the v4v_addr_t of the ring you want information on
+ * the same matching rules are used as for V4VOP_send.
+ *
+ * ent->space_required: if this field is not zero, xen will check
+ * that there is space in the destination ring for this many bytes
+ * of payload. If there is, it will set V4V_RING_DATA_F_SUFFICIENT
+ * and CANCEL any pending interrupt for that ent->ring; if insufficient
+ * space is available it will schedule an interrupt and the flag will
+ * not be set.
+ *
+ * The flags are set by Xen when notify replies:
+ * V4V_RING_DATA_F_EMPTY	ring is empty
+ * V4V_RING_DATA_F_PENDING	interrupt is pending - don't rely on this
+ * V4V_RING_DATA_F_SUFFICIENT	sufficient space for space_required is there
+ * V4V_RING_DATA_F_EXISTS	ring exists
+ *
+ * v4v_hypercall(V4VOP_notify,
+ *               XEN_GUEST_HANDLE(v4v_ring_data_ent) ent,
+ *               NULL, nent, 0)
+ */
+
+#define V4VOP_sendv		5
+/*
+ * Identical to V4VOP_send except rather than buf and len it takes
+ * an array of v4v_iov and a length of the array.
+ *
+ * v4v_hypercall(V4VOP_sendv,
+ *               v4v_send_addr_t addr,
+ *               v4v_iov iov,
+ *               uint32_t niov,
+ *               uint32_t protocol)
+ */
+
+#define V4VOP_viptables_add     6
+/*
+ * Insert a filtering rule after a given position.
+ *
+ * v4v_hypercall(V4VOP_viptables_add,
+ *               v4v_viptables_rule_t rule,
+ *               NULL,
+ *               uint32_t position, 0)
+ */
+
+#define V4VOP_viptables_del     7
+/*
+ * Delete the filtering rule at a given position, or the rule
+ * that matches "rule".
+ *
+ * v4v_hypercall(V4VOP_viptables_del,
+ *               v4v_viptables_rule_t rule,
+ *               NULL,
+ *               uint32_t position, 0)
+ */
+
+#define V4VOP_viptables_list    8
+/*
+ * List the currently installed filtering rules.
+ *
+ * v4v_hypercall(V4VOP_viptables_list,
+ *               v4v_viptables_list_t list,
+ *               NULL, 0, 0)
+ */
+
+#define V4VOP_info              9
+/*
+ * v4v_hypercall(V4VOP_info,
+ *               XEN_GUEST_HANDLE(v4v_info_t) info,
+ *               NULL, 0, 0)
+ */
+
+#endif /* __XEN_PUBLIC_V4V_H__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-set-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/include/xen/interface/xen.h b/include/xen/interface/xen.h
index a890804..395f6cd 100644
--- a/include/xen/interface/xen.h
+++ b/include/xen/interface/xen.h
@@ -59,6 +59,7 @@
 #define __HYPERVISOR_physdev_op           33
 #define __HYPERVISOR_hvm_op               34
 #define __HYPERVISOR_tmem_op              38
+#define __HYPERVISOR_v4v_op               39
 
 /* Architecture-specific hypercall definitions. */
 #define __HYPERVISOR_arch_0               48
diff --git a/include/xen/v4vdev.h b/include/xen/v4vdev.h
new file mode 100644
index 0000000..a30b608
--- /dev/null
+++ b/include/xen/v4vdev.h
@@ -0,0 +1,34 @@
+#ifndef __V4V_DGRAM_H__
+#define __V4V_DGRAM_H__
+
+struct v4v_dev
+{
+    void *buf;
+    size_t len;
+    int flags;
+    v4v_addr_t *addr;
+};
+
+struct v4v_viptables_rule_pos
+{
+    struct v4v_viptables_rule* rule;
+    int position;
+};
+
+#define V4V_TYPE 'W'
+
+#define V4VIOCSETRINGSIZE 	_IOW (V4V_TYPE,  1, uint32_t)
+#define V4VIOCBIND		_IOW (V4V_TYPE,  2, v4v_ring_id_t)
+#define V4VIOCGETSOCKNAME	_IOW (V4V_TYPE,  3, v4v_ring_id_t)
+#define V4VIOCGETPEERNAME	_IOW (V4V_TYPE,  4, v4v_addr_t)
+#define V4VIOCCONNECT		_IOW (V4V_TYPE,  5, v4v_addr_t)
+#define V4VIOCGETCONNECTERR	_IOW (V4V_TYPE,  6, int)
+#define V4VIOCLISTEN		_IOW (V4V_TYPE,  7, uint32_t) /* unused args */
+#define V4VIOCACCEPT		_IOW (V4V_TYPE,  8, v4v_addr_t)
+#define V4VIOCSEND		_IOW (V4V_TYPE,  9, struct v4v_dev)
+#define V4VIOCRECV		_IOW (V4V_TYPE, 10, struct v4v_dev)
+#define V4VIOCVIPTABLESADD	_IOW (V4V_TYPE, 11, struct v4v_viptables_rule_pos)
+#define V4VIOCVIPTABLESDEL	_IOW (V4V_TYPE, 12, struct v4v_viptables_rule_pos)
+#define V4VIOCVIPTABLESLIST	_IOW (V4V_TYPE, 13, struct v4v_viptables_list)
+
+#endif

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel



From xen-devel-bounces@lists.xen.org Fri Aug 03 22:35:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 22:35:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxQT1-0006v2-9A; Fri, 03 Aug 2012 22:35:35 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dan.magenheimer@oracle.com>) id 1SxQSz-0006ux-Bn
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 22:35:33 +0000
X-Env-Sender: dan.magenheimer@oracle.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1344033325!10706048!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjcxMDI2\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9585 invoked from network); 3 Aug 2012 22:35:27 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-8.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Aug 2012 22:35:27 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q73MZEde003431
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 3 Aug 2012 22:35:14 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q73MZBsV002844
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 3 Aug 2012 22:35:12 GMT
Received: from abhmt116.oracle.com (abhmt116.oracle.com [141.146.116.68])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q73MZ9nh015088; Fri, 3 Aug 2012 17:35:09 -0500
MIME-Version: 1.0
Message-ID: <6843caa4-9ef7-4e9d-97e5-9ebee55ec6e4@default>
Date: Fri, 3 Aug 2012 15:34:53 -0700 (PDT)
From: Dan Magenheimer <dan.magenheimer@oracle.com>
To: Jan Beulich <JBeulich@suse.com>, Dario Faggioli <raistlin@linux.it>
References: <1343837796.4958.32.camel@Solace>
	<501A67C502000078000921FF@nat28.tlf.novell.com>
In-Reply-To: <501A67C502000078000921FF@nat28.tlf.novell.com>
X-Priority: 3
X-Mailer: Oracle Beehive Extensions for Outlook 2.0.1.6  (510070) [OL
	12.0.6661.5003 (x86)]
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Andre Przywara <andre.przywara@amd.com>,
	Anil Madhavapeddy <anil@recoil.org>, George Dunlap <dunlapg@gmail.com>,
	xen-devel <xen-devel@lists.xen.org>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Yang Z Zhang <yang.z.zhang@intel.com>
Subject: Re: [Xen-devel] NUMA TODO-list for xen-devel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: Thursday, August 02, 2012 3:43 AM
> To: Dario Faggioli
> Cc: Andre Przywara; Anil Madhavapeddy; George Dunlap; xen-devel; Andrew Cooper; Yang Z Zhang
> Subject: Re: [Xen-devel] NUMA TODO-list for xen-devel
> 
> >>> On 01.08.12 at 18:16, Dario Faggioli <raistlin@linux.it> wrote:
> >     - Virtual NUMA topology exposure to guests (a.k.a guest-numa). If a
> >       guest ends up on more than one nodes, make sure it knows it's
> >       running on a NUMA platform (smaller than the actual host, but
> >       still NUMA). This interacts with some of the above points:
> 
> The question is whether this is really useful beyond the (I would
> suppose) relatively small set of cases where migration isn't
> needed.
> 
> >        * consider this during automatic placement for
> >          resuming/migrating domains (if they have a virtual topology,
> >          better not to change it);
> >        * consider this during memory migration (it can change the
> >          actual topology, should we update it on-line or disable memory
> >          migration?)
> 
> The question is whether trading functionality for performance
> is an acceptable choice.

If there were a lwn.net equivalent for Xen, I'd be pushing to get
quoted on the following:

"Virtualization: You can have flexibility or you can have performance.
Pick one."

A couple of years ago when NUMA was first being extensively discussed
for Xen, I suggested that this should really be a "top level" flag
that a sysadmin should be able to select: Either optimize for
performance or optimize for flexibility.  Then Xen and the Xen tools
should "do the right thing" depending on the selection.

I still think this is a good way to surface the tradeoffs for
a very complex problem to the vast majority of users/admins.
Clearly they will want "both" but forcing the choice will
provoke more thought about their use model, as well as provide
important guidance to the underlying implementations.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 22:41:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 22:41:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxQYB-00072x-16; Fri, 03 Aug 2012 22:40:55 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dan.magenheimer@oracle.com>) id 1SxQY9-00072s-R5
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 22:40:54 +0000
Received: from [85.158.139.83:19213] by server-10.bemta-5.messagelabs.com id
	2A/DF-02190-5735C105; Fri, 03 Aug 2012 22:40:53 +0000
X-Env-Sender: dan.magenheimer@oracle.com
X-Msg-Ref: server-13.tower-182.messagelabs.com!1344033649!29612717!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDY3NzY0MA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26587 invoked from network); 3 Aug 2012 22:40:51 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-13.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 3 Aug 2012 22:40:51 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q73MeaWC010714
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 3 Aug 2012 22:40:37 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q73MeZQa028878
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 3 Aug 2012 22:40:36 GMT
Received: from abhmt116.oracle.com (abhmt116.oracle.com [141.146.116.68])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q73MeZF3012287; Fri, 3 Aug 2012 17:40:35 -0500
MIME-Version: 1.0
Message-ID: <419a545d-77d1-4120-8b0f-ef2164cfd492@default>
Date: Fri, 3 Aug 2012 15:40:18 -0700 (PDT)
From: Dan Magenheimer <dan.magenheimer@oracle.com>
To: Jan Beulich <JBeulich@suse.com>, Andre Przywara <andre.przywara@amd.com>
References: <1343837796.4958.32.camel@Solace>
	<501A67C502000078000921FF@nat28.tlf.novell.com>
	<1343914490.4873.18.camel@Solace>
	<CAFLBxZajiMKPvXG35boG9poYNbzFDgh5d-oRDA3T7gS55ofmrg@mail.gmail.com>
	<501BB4C202000078000926DB@nat28.tlf.novell.com>
	<501B9E81.1020302@amd.com>
	<501BBDF50200007800092728@nat28.tlf.novell.com>
In-Reply-To: <501BBDF50200007800092728@nat28.tlf.novell.com>
X-Priority: 3
X-Mailer: Oracle Beehive Extensions for Outlook 2.0.1.6  (510070) [OL
	12.0.6661.5003 (x86)]
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Anil Madhavapeddy <anil@recoil.org>,
	George Dunlap <George.Dunlap@eu.citrix.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>, Dario Faggioli <raistlin@linux.it>,
	Yang Z Zhang <yang.z.zhang@intel.com>
Subject: Re: [Xen-devel] NUMA TODO-list for xen-devel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> >>>>> The question is whether this is really useful beyond the (I would
> >>>>> suppose) relatively small set of cases where migration isn't
> >>>>> needed.
> >>>>>
> >>>> Mmm... Not sure I'm getting what you're saying here, sorry. Are you
> >>>> suggesting that exposing a virtual topology is not a good idea as it
> >>>> poses constraints/prevents live migration?
> >
> > Honestly, what would be the problems with migration? NUMA awareness is
> > actually a software optimization, so we will not really break something
> > if the advertised topology isn't the real one.
> 
> Sure, nothing would break, but the purpose of the whole feature
> is improving performance, and that might get entirely lost (or
> even worse) after a migration to a different topology host.

+1

In the end, customers who care about getting 99.9% of native performance
should use physical hardware.  Live migration means that someone/something
is trying to do resource optimization and so performance optimization is
secondary.  But claiming great performance before migration and getting
sucky performance after migration is IMHO a disaster, especially when
future "cloud users" won't have a clue whether their environment has
migrated or not.

Just my two cents...


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 22:43:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 22:43:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxQaM-00079M-Hs; Fri, 03 Aug 2012 22:43:10 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dan.magenheimer@oracle.com>) id 1SxQaL-00079E-FC
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 22:43:09 +0000
Received: from [85.158.138.51:30524] by server-5.bemta-3.messagelabs.com id
	85/BF-28237-CF35C105; Fri, 03 Aug 2012 22:43:08 +0000
X-Env-Sender: dan.magenheimer@oracle.com
X-Msg-Ref: server-7.tower-174.messagelabs.com!1344033786!21414133!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjc0OTg3\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7055 invoked from network); 3 Aug 2012 22:43:07 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-7.tower-174.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Aug 2012 22:43:07 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q73Mgtv2007725
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 3 Aug 2012 22:42:56 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q73MgsZL006835
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 3 Aug 2012 22:42:55 GMT
Received: from abhmt116.oracle.com (abhmt116.oracle.com [141.146.116.68])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q73MgsTw013424; Fri, 3 Aug 2012 17:42:54 -0500
MIME-Version: 1.0
Message-ID: <45216a40-585d-47df-86a0-3b78843d7ef7@default>
Date: Fri, 3 Aug 2012 15:42:38 -0700 (PDT)
From: Dan Magenheimer <dan.magenheimer@oracle.com>
To: Andre Przywara <andre.przywara@amd.com>, Dario Faggioli <raistlin@linux.it>
References: <1343837796.4958.32.camel@Solace> <501BA1C0.7040100@amd.com>
In-Reply-To: <501BA1C0.7040100@amd.com>
X-Priority: 3
X-Mailer: Oracle Beehive Extensions for Outlook 2.0.1.6  (510070) [OL
	12.0.6661.5003 (x86)]
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: Anil Madhavapeddy <anil@recoil.org>, George Dunlap <dunlapg@gmail.com>,
	xen-devel <xen-devel@lists.xen.org>, Jan Beulich <JBeulich@suse.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>, "Zhang,
	Yang Z" <yang.z.zhang@intel.com>
Subject: Re: [Xen-devel] NUMA TODO-list for xen-devel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> > [D] - Dynamic memory migration between different nodes of the host. As
> >        the counter-part of the NUMA-aware scheduler.
> 
> I once read about a VMware feature: bandwith-limited migration in the
> background, hot pages first. So we get flexibility and avoid CPU
> starving, but still don't hog the system with memory copying.
> Sounds quite ambitious, though.

Something like this, but between NUMA nodes instead of physical systems?

http://osnet.cs.binghamton.edu/publications/hines09postcopy_osr.pdf 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 03 22:43:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Aug 2012 22:43:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxQaM-00079M-Hs; Fri, 03 Aug 2012 22:43:10 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dan.magenheimer@oracle.com>) id 1SxQaL-00079E-FC
	for xen-devel@lists.xen.org; Fri, 03 Aug 2012 22:43:09 +0000
Received: from [85.158.138.51:30524] by server-5.bemta-3.messagelabs.com id
	85/BF-28237-CF35C105; Fri, 03 Aug 2012 22:43:08 +0000
X-Env-Sender: dan.magenheimer@oracle.com
X-Msg-Ref: server-7.tower-174.messagelabs.com!1344033786!21414133!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjc0OTg3\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7055 invoked from network); 3 Aug 2012 22:43:07 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-7.tower-174.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Aug 2012 22:43:07 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q73Mgtv2007725
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 3 Aug 2012 22:42:56 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q73MgsZL006835
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 3 Aug 2012 22:42:55 GMT
Received: from abhmt116.oracle.com (abhmt116.oracle.com [141.146.116.68])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q73MgsTw013424; Fri, 3 Aug 2012 17:42:54 -0500
MIME-Version: 1.0
Message-ID: <45216a40-585d-47df-86a0-3b78843d7ef7@default>
Date: Fri, 3 Aug 2012 15:42:38 -0700 (PDT)
From: Dan Magenheimer <dan.magenheimer@oracle.com>
To: Andre Przywara <andre.przywara@amd.com>, Dario Faggioli <raistlin@linux.it>
References: <1343837796.4958.32.camel@Solace> <501BA1C0.7040100@amd.com>
In-Reply-To: <501BA1C0.7040100@amd.com>
X-Priority: 3
X-Mailer: Oracle Beehive Extensions for Outlook 2.0.1.6  (510070) [OL
	12.0.6661.5003 (x86)]
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: Anil Madhavapeddy <anil@recoil.org>, George Dunlap <dunlapg@gmail.com>,
	xen-devel <xen-devel@lists.xen.org>, Jan Beulich <JBeulich@suse.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>, "Zhang,
	Yang Z" <yang.z.zhang@intel.com>
Subject: Re: [Xen-devel] NUMA TODO-list for xen-devel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> > [D] - Dynamic memory migration between different nodes of the host. As
> >        the counter-part of the NUMA-aware scheduler.
> 
> I once read about a VMware feature: bandwidth-limited migration in the
> background, hot pages first. So we get flexibility and avoid CPU
> starving, but still don't hog the system with memory copying.
> Sounds quite ambitious, though.

Something like this, but between NUMA nodes instead of physical systems?

http://osnet.cs.binghamton.edu/publications/hines09postcopy_osr.pdf 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Aug 04 00:19:24 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Aug 2012 00:19:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxS55-00085c-VU; Sat, 04 Aug 2012 00:18:59 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SxS53-00085X-PD
	for xen-devel@lists.xensource.com; Sat, 04 Aug 2012 00:18:58 +0000
Received: from [85.158.143.99:55301] by server-3.bemta-4.messagelabs.com id
	41/93-01511-17A6C105; Sat, 04 Aug 2012 00:18:57 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-8.tower-216.messagelabs.com!1344039536!20421949!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDc5NTM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27480 invoked from network); 4 Aug 2012 00:18:56 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Aug 2012 00:18:56 -0000
X-IronPort-AV: E=Sophos;i="4.77,709,1336348800"; d="scan'208";a="13848088"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Aug 2012 00:18:46 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Sat, 4 Aug 2012 01:18:46 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1SxS4r-0007q2-N9;
	Sat, 04 Aug 2012 00:18:45 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1SxS4r-000384-Lx;
	Sat, 04 Aug 2012 01:18:45 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13544-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 4 Aug 2012 01:18:45 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 13544: trouble: blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13544 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13544/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 13525
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 13525
 build-i386                    2 host-install(2)         broken REGR. vs. 13525
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 13525
 build-amd64                   2 host-install(2)         broken REGR. vs. 13525
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 13525

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  f8f8912b3de0
baseline version:
 xen                  fa34499e8f6c

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Joe Jin <joe.jin@oracle.com>
  Keir Fraser <keir@xen.org>
  Santosh Jodh <santosh.jodh@citrix.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23331:f8f8912b3de0
tag:         tip
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Fri Aug 03 10:43:24 2012 +0100
    
    nestedhvm: fix nested page fault build error on 32-bit
    
        cc1: warnings being treated as errors
        hvm.c: In function 'hvm_hap_nested_page_fault':
        hvm.c:1282: error: passing argument 2 of
        'nestedhvm_hap_nested_page_fault' from incompatible pointer type
        /local/scratch/ianc/devel/xen-unstable.hg/xen/include/asm/hvm/nestedhvm.h:55:
        note: expected 'paddr_t *' but argument is of type 'long unsigned
        int *'
    
    hvm_hap_nested_page_fault takes an unsigned long gpa and passes &gpa
    to nestedhvm_hap_nested_page_fault which takes a paddr_t *. Since both
    of the callers of hvm_hap_nested_page_fault (svm_do_nested_pgfault and
    ept_handle_violation) actually have the gpa which they pass to
    hvm_hap_nested_page_fault as a paddr_t I think it makes sense to
    change the argument to hvm_hap_nested_page_fault.
    
    The other user of gpa in hvm_hap_nested_page_fault is a call to
    p2m_mem_access_check, which currently also takes a paddr_t gpa but I
    think a paddr_t is appropriate there too.
    
    Jan points out that this is also an issue for >4GB guests on the 32
    bit hypervisor.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Committed-by: Ian Campbell <ian.campbell@citrix.com>
    xen-unstable changeset:   25724:612898732e66
    xen-unstable date:        Fri Aug 03 09:54:17 2012 +0100
    Backported-by: Keir Fraser <keir@xen.org>
    
    
changeset:   23330:5a65d6a1aab7
user:        Santosh Jodh <santosh.jodh@citrix.com>
date:        Fri Aug 03 10:39:13 2012 +0100
    
    Intel VT-d: Dump IOMMU supported page sizes
    
    Signed-off-by: Santosh Jodh <santosh.jodh@citrix.com>
    xen-unstable changeset:   25725:9ad379939b78
    xen-unstable date:        Fri Aug 03 10:38:04 2012 +0100
    
    
changeset:   23329:fa34499e8f6c
user:        Jan Beulich <jbeulich@suse.com>
date:        Mon Jul 30 13:38:58 2012 +0100
    
    x86: fix off-by-one in nr_irqs_gsi calculation
    
    highest_gsi() returns the last valid GSI, not a count.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Joe Jin <joe.jin@oracle.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset:   25688:e6266fc76d08
    xen-unstable date:        Fri Jul 27 12:22:13 2012 +0200
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Aug 04 01:07:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Aug 2012 01:07:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxSpD-0003jX-UA; Sat, 04 Aug 2012 01:06:39 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1SxSpC-0003jS-5B
	for Xen-devel@lists.xensource.com; Sat, 04 Aug 2012 01:06:38 +0000
Received: from [85.158.143.35:50145] by server-2.bemta-4.messagelabs.com id
	32/AB-17938-D957C105; Sat, 04 Aug 2012 01:06:37 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-10.tower-21.messagelabs.com!1344042395!10804181!1
X-Originating-IP: [192.89.123.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjg5LjEyMy4yNSA9PiA0NzI3ODk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13549 invoked from network); 4 Aug 2012 01:06:35 -0000
Received: from smtp.tele.fi (HELO smtp.tele.fi) (192.89.123.25)
	by server-10.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Aug 2012 01:06:35 -0000
X-Originating-Ip: [194.89.68.22]
Received: from ydin.reaktio.net (reaktio.net [194.89.68.22])
	by smtp.tele.fi (Postfix) with ESMTP id DAE101BA4;
	Sat,  4 Aug 2012 04:06:32 +0300 (EEST)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id 985B32005D; Sat,  4 Aug 2012 04:06:32 +0300 (EEST)
Date: Sat, 4 Aug 2012 04:06:32 +0300
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: George Dunlap <george.dunlap@eu.citrix.com>
Message-ID: <20120804010632.GE19851@reaktio.net>
References: <20120626181707.4203d336@mantra.us.oracle.com>
	<CAFLBxZagm=53bZ=KaWmd7XQkkAduD+znECJRsaJUqrGJDZgkQQ@mail.gmail.com>
	<20120801152508.GA7132@phenom.dumpdata.com>
	<CAFLBxZb=yzcmxewCei49M6DX16x7eKZRGmxMUDbaKOTXgGZCGA@mail.gmail.com>
	<20120801160515.GA16155@phenom.dumpdata.com>
	<CAFLBxZYXOiWhAni3X23O62DbbigzFECMbvpUFnGs38y12h2V0g@mail.gmail.com>
	<20120801165359.GF8228@US-SEA-R8XVZTX>
	<501A675F.4010804@eu.citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <501A675F.4010804@eu.citrix.com>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	Keir Fraser <keir.xen@gmail.com>, Matt Wilson <msw@amazon.com>
Subject: Re: [Xen-devel] HYBRID naming [Was: Re: [HYBRID]: status update...]
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Aug 02, 2012 at 12:41:19PM +0100, George Dunlap wrote:
> On 01/08/12 17:53, Matt Wilson wrote:
> >On Wed, Aug 01, 2012 at 09:21:57AM -0700, George Dunlap wrote:
> >>Hmm -- that's an interesting issue I hadn't thought of. "PVHVM"
> >>has already been sort of taken by Stefano's extensions to allow
> >>Linux kernels booted in HVM mode to use some of the PV
> >>extensions. I tend to think "xen_pvh_domain()" is probably OK,
> >>but maybe calling it "pvext" (or "pvhext") in the code, and
> >>"PVH" in documentation / stories? Just using "pvext" everywhere
> >>could work as well; it's a little bit "now even better!", but
> >>not as much as pvplus.
> >How about HAPV, for "Hardware Assisted Paravirtualization"? It's
> >nicely pronounceable as "hap-vee" and follows the general
> >"hardware-assisted paging" (HAP) Xen terminology that spans both Intel
> >EPT and AMD RVI. 'if (xen_hapv_domain())'
> Wouldn't "HAPV" make people think more of HAP than of PV?  It seems
> like it would be much more confusing to distinguish "hapv" from
> "hap" than "pvh" from "pv". :-)
> 

HAPV sounds like Highly Available PV.. not very good :)

-- Pasi


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Aug 04 01:27:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Aug 2012 01:27:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxT8k-0003vq-Rh; Sat, 04 Aug 2012 01:26:50 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1SxT8i-0003vl-Vj
	for xen-devel@lists.xen.org; Sat, 04 Aug 2012 01:26:49 +0000
Received: from [85.158.139.83:18386] by server-1.bemta-5.messagelabs.com id
	AD/6E-29759-85A7C105; Sat, 04 Aug 2012 01:26:48 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-10.tower-182.messagelabs.com!1344043607!30470970!1
X-Originating-IP: [192.89.123.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjg5LjEyMy4yNSA9PiA0NzE5NjE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16368 invoked from network); 4 Aug 2012 01:26:47 -0000
Received: from smtp.tele.fi (HELO smtp.tele.fi) (192.89.123.25)
	by server-10.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 4 Aug 2012 01:26:47 -0000
X-Originating-Ip: [194.89.68.22]
Received: from ydin.reaktio.net (reaktio.net [194.89.68.22])
	by smtp.tele.fi (Postfix) with ESMTP id 800E41DC3;
	Sat,  4 Aug 2012 04:26:46 +0300 (EEST)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id F124D2005D; Sat,  4 Aug 2012 04:26:45 +0300 (EEST)
Date: Sat, 4 Aug 2012 04:26:45 +0300
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: James Harper <james.harper@bendigoit.com.au>
Message-ID: <20120804012645.GF19851@reaktio.net>
References: <6035A0D088A63A46850C3988ED045A4B299DF56E@BITCOM1.int.sbss.com.au>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <6035A0D088A63A46850C3988ED045A4B299DF56E@BITCOM1.int.sbss.com.au>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] vscsi
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Aug 03, 2012 at 05:47:46AM +0000, James Harper wrote:
> I'm not sure if this is a Debian thing or if Debian is using an older version, but I see this in the xend debug log when I try to use vscsi:
> 
> cat: /sys/bus/scsi/devices/host6/vendor: No such file or directory
> cat: /sys/bus/scsi/devices/host6/model: No such file or directory
> cat: /sys/bus/scsi/devices/host6/type: No such file or directory
> cat: /sys/bus/scsi/devices/host6/rev: No such file or directory
> cat: /sys/bus/scsi/devices/host6/scsi_level: No such file or directory
> cat: /sys/bus/scsi/devices/target6:0:3/vendor: No such file or directory
> cat: /sys/bus/scsi/devices/target6:0:3/model: No such file or directory
> cat: /sys/bus/scsi/devices/target6:0:3/type: No such file or directory
> cat: /sys/bus/scsi/devices/target6:0:3/rev: No such file or directory
> cat: /sys/bus/scsi/devices/target6:0:3/scsi_level: No such file or directory
> 
> In my case the files it is looking for are actually in /sys/bus/scsi/devices/target6:0:3/0:6:0:3/...
> 
> Is anyone maintaining vscsi anymore?
> 

Konrad put the Xen pvscsi Linux kernel patches into his git tree,
but other than that I don't think anyone is really maintaining pvscsi.

Do you want to become the pvscsi maintainer? :) Someone should get the drivers into upstream Linux...
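(The failure above comes from xend assuming the vendor/model/type/rev/scsi_level
attribute files sit directly under the host/target directories, while, as James
observes, newer kernels place them one level further down in the per-device
node. A minimal, hypothetical sketch of a layout-tolerant lookup -- the
read_scsi_attr helper and its sysfs_root parameter are illustrative names, not
xend's actual code:)

```python
# Hypothetical sketch, not xend's actual code: tolerate both sysfs
# layouts by also searching one level below the named directory.
import glob
import os


def read_scsi_attr(name, attr, sysfs_root="/sys/bus/scsi/devices"):
    """Read a SCSI sysfs attribute (e.g. "vendor") for entry `name`.

    Checks <sysfs_root>/<name>/<attr> first, then any child directory
    such as <sysfs_root>/<name>/<h:c:t:l>/<attr>, covering kernels
    that moved the attributes into the per-LUN device node.
    """
    base = os.path.join(sysfs_root, name)
    candidates = [os.path.join(base, attr)]
    candidates += sorted(glob.glob(os.path.join(base, "*", attr)))
    for path in candidates:
        if os.path.isfile(path):
            with open(path) as f:
                return f.read().strip()
    return None  # attribute not found in either layout
```

(Something along these lines in xend's vscsi helper would avoid the hard-coded
path assumption entirely, rather than chasing each kernel's sysfs reshuffle.)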

-- Pasi


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Aug 04 04:37:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Aug 2012 04:37:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxW6k-0005Nj-TP; Sat, 04 Aug 2012 04:36:58 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SxW6i-0005Ne-F2
	for xen-devel@lists.xensource.com; Sat, 04 Aug 2012 04:36:56 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1344055010!2312134!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgwNTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5695 invoked from network); 4 Aug 2012 04:36:50 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Aug 2012 04:36:50 -0000
X-IronPort-AV: E=Sophos;i="4.77,710,1336348800"; d="scan'208";a="13849097"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Aug 2012 04:36:50 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Sat, 4 Aug 2012 05:36:50 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1SxW6b-0000sd-Ur;
	Sat, 04 Aug 2012 04:36:49 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1SxW6b-0005Xa-UJ;
	Sat, 04 Aug 2012 05:36:49 +0100
To: xen-devel@lists.xensource.com
Message-ID: <osstest-13545-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 4 Aug 2012 05:36:49 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13545: trouble: blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13545 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13545/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 13536
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 13536
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 13536
 build-amd64                   2 host-install(2)         broken REGR. vs. 13536
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 13536
 build-i386                    2 host-install(2)         broken REGR. vs. 13536

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  6ccad16b50b6
baseline version:
 xen                  3d17148e465c

------------------------------------------------------------
People who touched revisions under test:
  Christoph Egger <Christoph.Egger@amd.com>
  Dario Faggioli <dario.faggioli@citrix.com>
  David Vrabel <david.vrabel@citrix.com>
  Fabio Fantoni <fabio.fantoni@heliman.it>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Campbell <ian.campbell@eu.citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Olaf Hering <olaf@aepfle.de>
  Roger Pau Monne <roger.pau@citrix.com>
  Santosh Jodh <santosh.jodh@citrix.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 653 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 653 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Aug 04 06:36:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Aug 2012 06:36:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxXxm-00066G-OQ; Sat, 04 Aug 2012 06:35:50 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <james.harper@bendigoit.com.au>) id 1SxXxl-00066B-5F
	for xen-devel@lists.xen.org; Sat, 04 Aug 2012 06:35:49 +0000
Received: from [85.158.139.83:35127] by server-4.bemta-5.messagelabs.com id
	07/D1-27831-4C2CC105; Sat, 04 Aug 2012 06:35:48 +0000
X-Env-Sender: james.harper@bendigoit.com.au
X-Msg-Ref: server-5.tower-182.messagelabs.com!1344062144!30258058!1
X-Originating-IP: [203.16.207.99]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2007 invoked from network); 4 Aug 2012 06:35:47 -0000
Received: from smtp2.bendigoit.com.au (HELO smtp2.bendigoit.com.au)
	(203.16.207.99)
	by server-5.tower-182.messagelabs.com with AES256-SHA encrypted SMTP;
	4 Aug 2012 06:35:47 -0000
Received: from trantor.int.sbss.com.au ([192.168.200.206]
	helo=mail.bendigoit.com.au)
	by smtp2.bendigoit.com.au with esmtp (Exim 4.72)
	(envelope-from <james.harper@bendigoit.com.au>)
	id 1SxXxb-0003B9-CE; Sat, 04 Aug 2012 16:35:39 +1000
Received: from BITCOM1.int.sbss.com.au ([192.168.200.237]) by
	mail.bendigoit.com.au with Microsoft SMTPSVC(6.0.3790.4675); 
	Sat, 4 Aug 2012 16:35:39 +1000
Received: from BITCOM1.int.sbss.com.au ([fe80::a5ca:4fd3:14f:ad5d]) by
	BITCOM1.int.sbss.com.au ([fe80::a5ca:4fd3:14f:ad5d%12]) with mapi id
	14.01.0355.002; Sat, 4 Aug 2012 16:35:38 +1000
From: James Harper <james.harper@bendigoit.com.au>
To: =?iso-8859-1?Q?Pasi_K=E4rkk=E4inen?= <pasik@iki.fi>
Thread-Topic: [Xen-devel] vscsi
Thread-Index: Ac1xO1POgSvT8XwuRd6KRavh/MT4XwAURAmAAB+oxmA=
Date: Sat, 4 Aug 2012 06:35:38 +0000
Message-ID: <6035A0D088A63A46850C3988ED045A4B299E603F@BITCOM1.int.sbss.com.au>
References: <6035A0D088A63A46850C3988ED045A4B299DF56E@BITCOM1.int.sbss.com.au>
	<20120804012645.GF19851@reaktio.net>
In-Reply-To: <20120804012645.GF19851@reaktio.net>
Accept-Language: en-AU, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [172.16.3.132]
x-tm-as-product-ver: SMEX-10.2.0.1135-7.000.1014-19084.005
x-tm-as-result: No--24.622000-0.000000-31
x-tm-as-user-approved-sender: Yes
x-tm-as-user-blocked-sender: No
MIME-Version: 1.0
X-OriginalArrivalTime: 04 Aug 2012 06:35:39.0157 (UTC)
	FILETIME=[5CD96450:01CD720B]
X-Really-From-Bendigo-IT: magichashvalue
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] vscsi
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> > Is anyone maintaining vscsi anymore?
> >
> 
> Konrad put the Xen pvscsi Linux kernel patches into his git tree, but other than
> that I don't think anyone is really maintaining pvscsi.
> 
> Do you want to become the pvscsi maintainer? :) Someone should get the
> drivers into upstream Linux...
> 

Since I have a need for it, I can have a go.

One thing I just noticed: /etc/xen/scripts/vscsi starts with #!/bin/sh when it should be #!/bin/bash (sh != bash under Debian by default).
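A minimal sketch of why the shebang matters (hypothetical fragment, not the actual vscsi script; the device IDs and loop are made up): Debian's default /bin/sh is dash, a strict POSIX shell that rejects bash-only constructs such as arrays, so a script relying on them must declare #!/bin/bash:

```shell
#!/bin/bash
# Hypothetical illustration of a bashism: arrays are a bash extension.
# Under dash (Debian's /bin/sh) this fails with
# "Syntax error: '(' unexpected", which is why the shebang must be bash.
devices=("0:0:0:0" "0:0:1:0")
for dev in "${devices[@]}"; do
    echo "attach vscsi dev ${dev}"
done
```

Running the same file via `sh script.sh` on Debian aborts at the array assignment, while `bash script.sh` prints both attach lines.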

I guess the main thing is that, AFAIK, the vscsi support isn't in xl... can anyone estimate the work required to port it, if it hasn't been done already?

Thanks

James


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Aug 04 09:09:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Aug 2012 09:09:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxaLq-0007Hh-P7; Sat, 04 Aug 2012 09:08:50 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SxaLp-0007Hc-0g
	for xen-devel@lists.xensource.com; Sat, 04 Aug 2012 09:08:49 +0000
Received: from [85.158.143.35:29386] by server-2.bemta-4.messagelabs.com id
	02/C4-17938-0A6EC105; Sat, 04 Aug 2012 09:08:48 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-10.tower-21.messagelabs.com!1344071327!10839448!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgwNTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24787 invoked from network); 4 Aug 2012 09:08:47 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-10.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Aug 2012 09:08:47 -0000
X-IronPort-AV: E=Sophos;i="4.77,711,1336348800"; d="scan'208";a="13849883"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Aug 2012 09:08:47 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Sat, 4 Aug 2012 10:08:47 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1SxaLm-0002M4-QN;
	Sat, 04 Aug 2012 09:08:46 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1SxaLm-0006v8-Ov;
	Sat, 04 Aug 2012 10:08:46 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13546-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 4 Aug 2012 10:08:46 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 13546: trouble: blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13546 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13546/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 13525
 build-i386                    2 host-install(2)         broken REGR. vs. 13525
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 13525
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 13525
 build-amd64                   2 host-install(2)         broken REGR. vs. 13525
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 13525

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  f8f8912b3de0
baseline version:
 xen                  fa34499e8f6c

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Joe Jin <joe.jin@oracle.com>
  Keir Fraser <keir@xen.org>
  Santosh Jodh <santosh.jodh@citrix.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23331:f8f8912b3de0
tag:         tip
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Fri Aug 03 10:43:24 2012 +0100
    
    nestedhvm: fix nested page fault build error on 32-bit
    
        cc1: warnings being treated as errors
        hvm.c: In function 'hvm_hap_nested_page_fault':
        hvm.c:1282: error: passing argument 2 of
        'nestedhvm_hap_nested_page_fault' from incompatible pointer type
        /local/scratch/ianc/devel/xen-unstable.hg/xen/include/asm/hvm/nestedhvm.h:55:
        note: expected 'paddr_t *' but argument is of type 'long unsigned
        int *'
    
    hvm_hap_nested_page_fault takes an unsigned long gpa and passes &gpa
    to nestedhvm_hap_nested_page_fault which takes a paddr_t *. Since both
    of the callers of hvm_hap_nested_page_fault (svm_do_nested_pgfault and
    ept_handle_violation) actually have the gpa which they pass to
    hvm_hap_nested_page_fault as a paddr_t I think it makes sense to
    change the argument to hvm_hap_nested_page_fault.
    
    The other user of gpa in hvm_hap_nested_page_fault is a call to
    p2m_mem_access_check, which currently also takes a paddr_t gpa but I
    think a paddr_t is appropriate there too.
    
    Jan points out that this is also an issue for >4GB guests on the 32
    bit hypervisor.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Committed-by: Ian Campbell <ian.campbell@citrix.com>
    xen-unstable changeset:   25724:612898732e66
    xen-unstable date:        Fri Aug 03 09:54:17 2012 +0100
    Backported-by: Keir Fraser <keir@xen.org>
    
    
changeset:   23330:5a65d6a1aab7
user:        Santosh Jodh <santosh.jodh@citrix.com>
date:        Fri Aug 03 10:39:13 2012 +0100
    
    Intel VT-d: Dump IOMMU supported page sizes
    
    Signed-off-by: Santosh Jodh <santosh.jodh@citrix.com>
    xen-unstable changeset:   25725:9ad379939b78
    xen-unstable date:        Fri Aug 03 10:38:04 2012 +0100
    
    
changeset:   23329:fa34499e8f6c
user:        Jan Beulich <jbeulich@suse.com>
date:        Mon Jul 30 13:38:58 2012 +0100
    
    x86: fix off-by-one in nr_irqs_gsi calculation
    
    highest_gsi() returns the last valid GSI, not a count.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Joe Jin <joe.jin@oracle.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset:   25688:e6266fc76d08
    xen-unstable date:        Fri Jul 27 12:22:13 2012 +0200
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23331:f8f8912b3de0
tag:         tip
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Fri Aug 03 10:43:24 2012 +0100
    
    nestedhvm: fix nested page fault build error on 32-bit
    
        cc1: warnings being treated as errors
        hvm.c: In function 'hvm_hap_nested_page_fault':
        hvm.c:1282: error: passing argument 2 of
        'nestedhvm_hap_nested_page_fault' from incompatible pointer type
        /local/scratch/ianc/devel/xen-unstable.hg/xen/include/asm/hvm/nestedhvm.h:55:
        note: expected 'paddr_t *' but argument is of type 'long unsigned
        int *'
    
    hvm_hap_nested_page_fault takes an unsigned long gpa and passes &gpa
    to nestedhvm_hap_nested_page_fault which takes a paddr_t *. Since both
    of the callers of hvm_hap_nested_page_fault (svm_do_nested_pgfault and
    ept_handle_violation) actually have the gpa which they pass to
    hvm_hap_nested_page_fault as a paddr_t I think it makes sense to
    change the argument to hvm_hap_nested_page_fault.
    
    The other user of gpa in hvm_hap_nested_page_fault is a call to
    p2m_mem_access_check, which currently also takes a paddr_t gpa but I
    think a paddr_t is appropriate there too.
    
    Jan points out that this is also an issue for >4GB guests on the 32
    bit hypervisor.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Committed-by: Ian Campbell <ian.campbell@citrix.com>
    xen-unstable changeset:   25724:612898732e66
    xen-unstable date:        Fri Aug 03 09:54:17 2012 +0100
    Backported-by: Keir Fraser <keir@xen.org>
    
    
changeset:   23330:5a65d6a1aab7
user:        Santosh Jodh <santosh.jodh@citrix.com>
date:        Fri Aug 03 10:39:13 2012 +0100
    
    Intel VT-d: Dump IOMMU supported page sizes
    
    Signed-off-by: Santosh Jodh <santosh.jodh@citrix.com>
    xen-unstable changeset:   25725:9ad379939b78
    xen-unstable date:        Fri Aug 03 10:38:04 2012 +0100
    
    
changeset:   23329:fa34499e8f6c
user:        Jan Beulich <jbeulich@suse.com>
date:        Mon Jul 30 13:38:58 2012 +0100
    
    x86: fix off-by-one in nr_irqs_gsi calculation
    
    highest_gsi() returns the last valid GSI, not a count.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Joe Jin <joe.jin@oracle.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset:   25688:e6266fc76d08
    xen-unstable date:        Fri Jul 27 12:22:13 2012 +0200
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Aug 04 11:04:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Aug 2012 11:04:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sxc9R-0007nW-BA; Sat, 04 Aug 2012 11:04:09 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <darnok@68k.org>) id 1Sxc9Q-0007nR-DW
	for xen-devel@lists.xensource.com; Sat, 04 Aug 2012 11:04:08 +0000
Received: from [85.158.139.83:48296] by server-4.bemta-5.messagelabs.com id
	9F/AC-27831-7A10D105; Sat, 04 Aug 2012 11:04:07 +0000
X-Env-Sender: darnok@68k.org
X-Msg-Ref: server-5.tower-182.messagelabs.com!1344078243!30280001!1
X-Originating-IP: [206.212.254.10]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21070 invoked from network); 4 Aug 2012 11:04:05 -0000
Received: from andromeda.dapyr.net (HELO andromeda.dapyr.net) (206.212.254.10)
	by server-5.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 4 Aug 2012 11:04:05 -0000
Received: from andromeda.dapyr.net (darnok@localhost [127.0.0.1])
	by andromeda.dapyr.net (8.13.4/8.13.4/Debian-3sarge3) with ESMTP id
	q74B3uU7018596
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NOT);
	Sat, 4 Aug 2012 07:03:56 -0400
Received: (from darnok@localhost)
	by andromeda.dapyr.net (8.13.4/8.13.4/Submit) id q74B3thF018594;
	Sat, 4 Aug 2012 07:03:55 -0400
Date: Sat, 4 Aug 2012 07:03:55 -0400
From: Konrad Rzeszutek Wilk <konrad@darnok.org>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, akpm@linux-foundation.org, 
	linux-kernel@vger.kernel.org, mgorman@suse.de, davem@davemloft.net
Message-ID: <20120804110355.GA17640@andromeda.dapyr.net>
References: <20120801190227.GA13272@phenom.dumpdata.com>
	<20120803120414.GA10670@andromeda.dapyr.net>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20120803120414.GA10670@andromeda.dapyr.net>
User-Agent: Mutt/1.5.9i
Cc: Ian Campbell <Ian.Campbell@eu.citrix.com>, xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] Regression in xen-netfront on v3.6 (git commit
	c48a11c7ad2623b99bbd6859b0b4234e7f11176f,
	netvm: propagate page->pfmemalloc to skb)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Aug 03, 2012 at 08:04:14AM -0400, Konrad Rzeszutek Wilk wrote:
> On Wed, Aug 01, 2012 at 03:02:27PM -0400, Konrad Rzeszutek Wilk wrote:
> > So I hadn't done a git bisection yet. But if I choose git commit:
> > 4b24ff71108164e047cf2c95990b77651163e315
> >     Merge tag 'for-v3.6' of git://git.infradead.org/battery-2.6
> > 
> >     Pull battery updates from Anton Vorontsov:
> > 
> > 
> > everything works nicely. Anything past that, so these merges:
> > 
> > konrad@phenom:~/ssd/linux$ git log --oneline --merges 4b24ff71108164e047cf2c95990b77651163e315..linus/master
> > 2d53492 Merge tag 'irqdomain-for-linus' of git://git.secretlab.ca/git/linux-2.6
> ===> ac694db Merge branch 'akpm' (Andrew's patch-bomb)
> 
> Somewhere in there is the culprit. I hadn't yet done the full bisection
> (I was just checking out each merge to see when it stopped working).

Mel, your:
commit c48a11c7ad2623b99bbd6859b0b4234e7f11176f
Author: Mel Gorman <mgorman@suse.de>
Date:   Tue Jul 31 16:44:23 2012 -0700

    netvm: propagate page->pfmemalloc to skb

is the culprit per git bisect. Any ideas - do the drivers need to do
some extra processing? Here is the git bisect log:

git bisect start
# good: [a40a1d3d0a2fd613fdec6d89d3c053268ced76ed] Merge tag
# 'vfio-for-v3.6' of git://github.com/awilliam/linux-vfio
git bisect good a40a1d3d0a2fd613fdec6d89d3c053268ced76ed
# bad: [ac694dbdbc403c00e2c14d10bc7b8412cc378259] Merge branch 'akpm'
# (Andrew's patch-bomb)
git bisect bad ac694dbdbc403c00e2c14d10bc7b8412cc378259
# good: [62ce1c706f817cb9defef3ac2dfdd815149f2968] mm, oom: move
# declaration for mem_cgroup_out_of_memory to oom.h
git bisect good 62ce1c706f817cb9defef3ac2dfdd815149f2968
# bad: [5a178119b0fbe37f7dfb602b37df9cc4b1dc9d71] mm: add support for
# direct_IO to highmem pages
git bisect bad 5a178119b0fbe37f7dfb602b37df9cc4b1dc9d71
# good: [7cb0240492caea2f6467f827313478f41877e6ef] netvm: allow the use
# of __GFP_MEMALLOC by specific sockets
git bisect good 7cb0240492caea2f6467f827313478f41877e6ef
# bad: [5515061d22f0f9976ae7815864bfd22042d36848] mm: throttle direct
# reclaimers if PF_MEMALLOC reserves are low and swap is backed by
# network storage
git bisect bad 5515061d22f0f9976ae7815864bfd22042d36848
# bad: [0614002bb5f7411e61ffa0dfe5be1f2c84df3da3] netvm: propagate
# page->pfmemalloc from skb_alloc_page to skb
git bisect bad 0614002bb5f7411e61ffa0dfe5be1f2c84df3da3
# bad: [c48a11c7ad2623b99bbd6859b0b4234e7f11176f] netvm: propagate
# page->pfmemalloc to skb
git bisect bad c48a11c7ad2623b99bbd6859b0b4234e7f11176f
# good: [c93bdd0e03e848555d144eb44a1f275b871a8dd5] netvm: allow skb
# allocation to use PFMEMALLOC reserves
git bisect good c93bdd0e03e848555d144eb44a1f275b871a8dd5


> 
> Andrew CC-ing you here, the serial log is below.
> > a40a1d3 Merge tag 'vfio-for-v3.6' of git://github.com/awilliam/linux-vfio
> > 3e9a970 Merge tag 'random_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/random
> > 941c872 Merge tag 'rdma-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/roland/infiniband
> > 8762541 Merge branch 'v4l_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-media
> > 6dbb35b Merge tag 'nfs-for-3.6-2' of git://git.linux-nfs.org/projects/trondmy/linux-nfs
> > fd37ce3 Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
> > 1da9b6b Merge branches 'cma', 'ipoib', 'ocrdma' and 'qib' into for-next
> > 6aeea3e Merge remote-tracking branch 'origin' into irqdomain/next
> > 931efdf Merge branch 'v4l_for_linus' into staging/for_v3.6
> > 80c1834 Merge tag 'v3.5-rc6' into irqdomain/next
> > 
> > are the culprit. I think it might be the networking pull, but I'm not
> > sure. Ian, any thoughts?
> > 
> > Using config file "/test.xm".
> > Started domain latest (id=2)
> > [    0.000000] console [hvc0] enabled, bootconsole disabled
> > [    0.000000] Xen: using vcpuop timer interface
> > [    0.000000] installing Xen timer for CPU 0
> > [    0.000000] tsc: Detected 2294.530 MHz processor
> > [    0.000999] Calibrating delay loop (skipped), value calculated using timer frequency.. 4589.06 BogoMIPS (lpj=2294530)
> > [    0.000999] pid_max: default: 32768 minimum: 301
> > [    0.000999] Security Framework initialized
> > [    0.000999] SELinux:  Initializing.
> > [    0.000999] SELinux:  Starting in permissive mode
> > [    0.000999] Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes)
> > [    0.001520] Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes)
> > [    0.001875] Mount-cache hash table entries: 256
> > [    0.002007] Initializing cgroup subsys cpuacct
> > [    0.002013] Initializing cgroup subsys freezer
> > [    0.002070] ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
> > [    0.002070] ENERGY_PERF_BIAS: View and update with x86_energy_perf_policy(8)
> > [    0.002084] CPU: Physical Processor ID: 0
> > [    0.002087] CPU: Processor Core ID: 0
> > [    0.002094] Last level iTLB entries: 4KB 512, 2MB 0, 4MB 0
> > [    0.002094] Last level dTLB entries: 4KB 512, 2MB 32, 4MB 32
> > [    0.002094] tlb_flushall_shift is 0x5
> > [    0.002164] SMP alternatives: switching to UP code
> > [    0.025291] Freeing SMP alternatives: 24k freed
> > [    0.025356] cpu 0 spinlock event irq 17
> > [    0.025383] Performance Events: unsupported p6 CPU model 45 no PMU driver, software events only.
> > [    0.025551] NMI watchdog: disabled (cpu0): hardware events not enabled
> > [    0.025576] Brought up 1 CPUs
> > [    0.028642] kworker/u:0 (14) used greatest stack depth: 5936 bytes left
> > [    0.028675] Grant tables using version 2 layout.
> > [    0.028691] Grant table initialized
> > [    0.047616] RTC time: 165:165:165, date: 165/165/65
> > [    0.047661] NET: Registered protocol family 16
> > [    0.048184] dca service started, version 1.12.1
> > [    0.048545] PCI: setting up Xen PCI frontend stub
> > [    0.048552] PCI: pci_cache_line_size set to 64 bytes
> > [    0.049543] kworker/u:0 (51) used greatest stack depth: 5472 bytes left
> > [    0.054147] bio: create slab <bio-0> at 0
> > [    0.054240] ACPI: Interpreter disabled.
> > [    0.054288] xen/balloon: Initialising balloon driver.
> > [    0.055127] xen-balloon: Initialising balloon driver.
> > [    0.055127] vgaarb: loaded
> > [    0.056125] usbcore: registered new interface driver usbfs
> > [    0.056162] usbcore: registered new interface driver hub
> > [    0.056217] usbcore: registered new device driver usb
> > [    0.056425] PCI: System does not support PCI
> > [    0.056431] PCI: System does not support PCI
> > [    0.056617] NetLabel: Initializing
> > [    0.056624] NetLabel:  domain hash size = 128
> > [    0.056627] NetLabel:  protocols = UNLABELED CIPSOv4
> > [    0.056642] NetLabel:  unlabeled traffic allowed by default
> > [    0.056725] Switching to clocksource xen
> > [    0.056795] pnp: PnP ACPI: disabled
> > [    0.058698] NET: Registered protocol family 2
> > [    0.059805] TCP established hash table entries: 524288 (order: 11, 8388608 bytes)
> > [    0.061110] TCP bind hash table entries: 65536 (order: 8, 1048576 bytes)
> > [    0.061243] TCP: Hash tables configured (established 524288 bind 65536)
> > [    0.061281] TCP: reno registered
> > [    0.061304] UDP hash table entries: 2048 (order: 4, 65536 bytes)
> > [    0.061341] UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes)
> > [    0.061425] NET: Registered protocol family 1
> > [    0.061492] RPC: Registered named UNIX socket transport module.
> > [    0.061498] RPC: Registered udp transport module.
> > [    0.061504] RPC: Registered tcp transport module.
> > [    0.061510] RPC: Registered tcp NFSv4.1 backchannel transport module.
> > [    0.061518] PCI: CLS 0 bytes, default 64
> > [    0.061643] Trying to unpack rootfs image as initramfs...
> > [    0.382189] Freeing initrd memory: 362080k freed
> > [    0.499615] platform rtc_cmos: registered platform RTC device (no PNP device found)
> > [    0.499831] Machine check injector initialized
> > [    0.500181] microcode: CPU0 sig=0x206d2, pf=0x1, revision=0x8000020c
> > [    0.500229] microcode: Microcode Update Driver: v2.00 <tigran@aivazian.fsnet.co.uk>, Peter Oruba
> > [    0.500544] audit: initializing netlink socket (disabled)
> > [    0.500566] type=2000 audit(1343845740.901:1): initialized
> > [    0.515227] HugeTLB registered 2 MB page size, pre-allocated 0 pages
> > [    0.515358] VFS: Disk quotas dquot_6.5.2
> > [    0.515386] Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
> > [    0.515525] NFS: Registering the id_resolver key type
> > [    0.515544] Key type id_resolver registered
> > [    0.515551] Key type id_legacy registered
> > [    0.515599] NTFS driver 2.1.30 [Flags: R/W].
> > [    0.515706] msgmni has been set to 8021
> > [    0.515765] SELinux:  Registering netfilter hooks
> > [    0.516222] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 253)
> > [    0.516232] io scheduler noop registered
> > [    0.516236] io scheduler deadline registered
> > [    0.516243] io scheduler cfq registered (default)
> > [    0.516337] pci_hotplug: PCI Hot Plug PCI Core version: 0.5
> > [    0.516442] ioatdma: Intel(R) QuickData Technology Driver 4.00
> > [    0.532923] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
> > [    0.533399] Non-volatile memory driver v1.3
> > [    0.533406] Linux agpgart interface v0.103
> > [    0.533588] [drm] Initialized drm 1.1.0 20060810
> > [    0.535196] brd: module loaded
> > [    0.535992] loop: module loaded
> > [    0.536344] libphy: Fixed MDIO Bus: probed
> > [    0.536351] tun: Universal TUN/TAP device driver, 1.6
> > [    0.536354] tun: (C) 1999-2004 Max Krasnyansky <maxk@qualcomm.com>
> > [    0.536419] ixgbevf: Intel(R) 10 Gigabit PCI Express Virtual Function Network Driver - version 2.6.0-k
> > [    0.536428] ixgbevf: Copyright (c) 2009 - 2012 Intel Corporation.
> > [    0.536721] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
> > [    0.536729] ehci_hcd: block sizes: qh 104 qtd 96 itd 192 sitd 96
> > [    0.536770] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
> > [    0.536776] ohci_hcd: block sizes: ed 80 td 96
> > [    0.536826] uhci_hcd: USB Universal Host Controller Interface driver
> > [    0.536929] usbcore: registered new interface driver usblp
> > [    0.536977] usbcore: registered new interface driver libusual
> > [    0.537164] i8042: PNP: No PS/2 controller found. Probing ports directly.
> > [    0.538013] i8042: No controller found
> > [    0.538103] mousedev: PS/2 mouse device common for all mice
> > [    0.598349] rtc_cmos rtc_cmos: rtc core: registered rtc_cmos as rtc0
> > [    0.598404] rtc_cmos: probe of rtc_cmos failed with error -38
> > [    0.598559] EFI Variables Facility v0.08 2004-May-17
> > [    0.598701] Netfilter messages via NETLINK v0.30.
> > [    0.598719] nf_conntrack version 0.5.0 (16384 buckets, 65536 max)
> > [    0.598790] ctnetlink v0.93: registering with nfnetlink.
> > [    0.598971] ip_tables: (C) 2000-2006 Netfilter Core Team
> > [    0.599007] TCP: cubic registered
> > [    0.599014] Initializing XFRM netlink socket
> > [    0.599043] NET: Registered protocol family 10
> > [    0.599238] ip6_tables: (C) 2000-2006 Netfilter Core Team
> > [    0.599374] sit: IPv6 over IPv4 tunneling driver
> > [    0.599589] NET: Registered protocol family 17
> > [    0.599618] Key type dns_resolver registered
> > [    0.599808] PM: Hibernation image not present or could not be loaded.
> > [    0.599824] registered taskstats version 1
> > [    0.599848] XENBUS: Device with no driver: device/vkbd/0
> > [    0.599852] XENBUS: Device with no driver: device/vfb/0
> > [    0.599856] XENBUS: Device with no driver: device/vbd/51712
> > [    0.599860] XENBUS: Device with no driver: device/vif/0
> > [    0.599886]   Magic number: 1:252:3141
> > [    0.600271] Freeing unused kernel memory: 704k freed
> > [    0.600500] Write protecting the kernel read-only data: 8192k
> > [    0.602437] Freeing unused kernel memory: 132k freed
> > [    0.602589] Freeing unused kernel memory: 340k freed
> > init started: BusyBox v1.14.3 (2012-08-01 13:52:44 EDT)
> > [    0.607489] consoletype (1056) used greatest stack depth: 5288 bytes left
> > Mounting directories  [  OK  ]
> > mount: mount point /proc/bus/usb does not exist
> > [    0.781044] modprobe (1085) used greatest stack depth: 5048 bytes left
> > mount: mount point /sys/kernel/config does not exist
> > [    0.785748] core_filesystem (1057) used greatest stack depth: 4968 bytes left
> > [    0.793721] input: Xen Virtual Keyboard as /devices/virtual/input/input0
> > [    0.793892] input: Xen Virtual Pointer as /devices/virtual/input/input1
> > [    1.010121] Initialising Xen virtual ethernet driver.
> > [    1.124604] blkfront: xvda: flush diskcache: enabled
> > [    1.126118]  xvda: xvda1 xvda2 xvda3 xvda4
> > [    1.239316] udevd (1128): /proc/1128/oom_adj is deprecated, please use /proc/1128/oom_score_adj instead.
> > udevd-work[1130]: error opening ATTR{/sys/devices/system/cpu/cpu0/online} for writing: No such file or directory
> > 
> > [    1.395080] ip (1408) used greatest stack depth: 3912 bytes left
> > Waiting for devices [  OK  ]
> > Waiting for init.pre_custom [  OK  ]
> > Waiting for fb [  OK  ]
> > Starting..[/dev/fb0]
> > /dev/fb0: len:0
> > /dev/fb0: bits/pixel32
> > (7f44ddbc2000): Writting .. [800:600]
> > Done!
> > FATAL: Module agpgart_intel not found.
> > [    1.652131] [drm] radeon kernel modesetting enabled.
> > WARNING: Error inserting video (/lib/modules/3.5.0upstream-08854-g444fa66/kernel/drivers/acpi/video.ko): No such device
> > WARNING: Error inserting mxm_wmi (/lib/modules/3.5.0upstream-08854-g444fa66/kernel/drivers/platform/x86/mxm-wmi.ko): No such device
> > WARNING: Error inserting drm_kms_helper (/lib/modules/3.5.0upstream-08854-g444fa66/kernel/drivers/gpu/drm/drm_kms_helper.ko): No such device
> > WARNING: Error inserting ttm (/lib/modules/3.5.0upstream-08854-g444fa66/kernel/drivers/gpu/drm/ttm/ttm.ko): No such device
> > FATAL: Error inserting nouveau (/lib/modules/3.5.0upstream-08854-g444fa66/kernel/drivers/gpu/drm/nouveau/nouveau.ko): No such device
> > [    1.660288] Console: switching to colour frame buffer device 100x37
> > WARNING: Error inserting drm_kms_helper (/lib/modules/3.5.0upstream-08854-g444fa66/kernel/drivers/gpu/drm/drm_kms_helper.ko): No such device
> > FATAL: Error inserting i915 (/lib/modules/3.5.0upstream-08854-g444fa66/kernel/drivers/gpu/drm/i915/i915.ko): No such device
> > Starting..[/dev/fb0]
> > /dev/fb0: len:0
> > /dev/fb0: bits/pixel32
> > (7fb8669cc000): Writting .. [800:600]
> > Done!
> > VGA: 0000:
> > Waiting for network [  OK  ]
> > Bringing up loopback interface:  [  OK  ]
> > Bringing up interface eth0:  [    1.908592] BUG: unable to handle kernel NULL pointer dereference at 0000000000000010
> > [    1.908643] IP: [<ffffffffa0037750>] xennet_poll+0x980/0xec0 [xen_netfront]
> > [    1.908703] PGD ea1df067 PUD e8ada067 PMD 0 
> > [    1.908774] Oops: 0000 [#1] SMP 
> > [    1.908797] Modules linked in: fbcon tileblit font radeon bitblit softcursor ttm drm_kms_helper crc32c_intel xen_blkfront xen_netfront xen_fbfront fb_sys_fops sysimgblt sysfillrect syscopyarea xen_kbdfront xenfs xen_privcmd
> > [    1.908938] CPU 0 
> > [    1.908950] Pid: 2165, comm: ip Not tainted 3.5.0upstream-08854-g444fa66 #1  
> > [    1.908983] RIP: e030:[<ffffffffa0037750>]  [<ffffffffa0037750>] xennet_poll+0x980/0xec0 [xen_netfront]
> > [    1.909029] RSP: e02b:ffff8800ffc03db8  EFLAGS: 00010282
> > [    1.909055] RAX: ffff8800ea010140 RBX: ffff8800f00e86c0 RCX: 000000000000009a
> > [    1.909055] RDX: 0000000000000040 RSI: 000000000000005a RDI: ffff8800fa7dee80
> > [    1.909055] RBP: ffff8800ffc03ee8 R08: ffff8800f00e86d8 R09: ffff8800ea010000
> > [    1.909055] R10: dead000000200200 R11: dead000000100100 R12: ffff8800fa7dee80
> > [    1.909055] R13: 000000000000005a R14: ffff8800fa7dee80 R15: 0000000000000200
> > [    1.909055] FS:  00007fbafc188700(0000) GS:ffff8800ffc00000(0000) knlGS:0000000000000000
> > [    1.909055] CS:  e033 DS: 0000 ES: 0000 CR0: 000000008005003b
> > [    1.909055] CR2: 0000000000000010 CR3: 00000000ea108000 CR4: 0000000000002660
> > [    1.909055] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> > [    1.909055] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
> > [    1.909055] Process ip (pid: 2165, threadinfo ffff8800ea0f2000, task ffff8800fa783040)
> > [    1.909055] Stack:
> > [    1.909055]  ffff8800e27e5040 ffff8800ffc03e88 ffff8800ffc03e68 ffff8800ffc03e48
> > [    1.909055]  7fffffffffffffff ffff8800ffc03e00 ffff8800e27e5040 ffff8800f00e86d8
> > [    1.909055]  ffff8800ffc03eb0 00000040ffffffff ffff8800f00e8000 00000000ffc03e30
> > [    1.909055] Call Trace:
> > [    1.909055]  <IRQ> 
> > [    1.909055]  [<ffffffff81066028>] ? pvclock_clocksource_read+0x58/0xd0
> > [    1.909055]  [<ffffffff81486352>] net_rx_action+0x112/0x240
> > [    1.909055]  [<ffffffff8107f319>] __do_softirq+0xb9/0x190
> > [    1.909055]  [<ffffffff815d8d7c>] call_softirq+0x1c/0x30
> > [    1.909055]  <EOI> 
> > [    1.909055]  [<ffffffff8103a435>] do_softirq+0x65/0xa0
> > [    1.909055]  [<ffffffff8107f834>] local_bh_enable_ip+0x94/0xa0
> > [    1.909055]  [<ffffffff815d11a4>] _raw_spin_unlock_bh+0x24/0x30
> > [    1.909055]  [<ffffffffa0036d44>] xennet_open+0x54/0xe0 [xen_netfront]
> > [    1.909055]  [<ffffffff81481dcf>] __dev_open+0xbf/0x120
> > [    1.909055]  [<ffffffff8148022c>] __dev_change_flags+0x9c/0x180
> > [    1.909055]  [<ffffffff81481cc3>] dev_change_flags+0x23/0x70
> > [    1.909055]  [<ffffffff81491062>] do_setlink+0x1c2/0xa10
> > [    1.909055]  [<ffffffff812b5d74>] ? nla_parse+0x34/0x110
> > [    1.909055]  [<ffffffff81494005>] rtnl_newlink+0x3a5/0x5c0
> > [    1.909055]  [<ffffffff812541c4>] ? selinux_capable+0x34/0x50
> > [    1.909055]  [<ffffffff81250223>] ? security_capable+0x13/0x20
> > [    1.909055]  [<ffffffff81491e07>] rtnetlink_rcv_msg+0x2c7/0x330
> > [    1.909055]  [<ffffffff810a18b7>] ? __might_sleep+0xe7/0x100
> > [    1.909055]  [<ffffffff81149a52>] ? kmem_cache_alloc_node+0x82/0x1d0
> > [    1.909055]  [<ffffffff8147a00c>] ? __skb_recv_datagram+0x11c/0x2f0
> > [    1.909055]  [<ffffffff81491b40>] ? rtnetlink_rcv+0x30/0x30
> > [    1.909055]  [<ffffffff814a9c89>] netlink_rcv_skb+0x99/0xc0
> > [    1.909055]  [<ffffffff81491b30>] rtnetlink_rcv+0x20/0x30
> > [    1.909055]  [<ffffffff814a9998>] netlink_unicast+0x1a8/0x220
> > [    1.909055]  [<ffffffff814aa535>] netlink_sendmsg+0x205/0x300
> > [    1.909055]  [<ffffffff8146ce19>] sock_sendmsg+0xb9/0xf0
> > [    1.909055]  [<ffffffff8146c51e>] ? copy_from_user+0x3e/0x50
> > [    1.909055]  [<ffffffff8146c576>] ? move_addr_to_kernel+0x46/0x80
> > [    1.909055]  [<ffffffff810a18b7>] ? __might_sleep+0xe7/0x100
> > [    1.909055]  [<ffffffff8146dd2d>] __sys_sendmsg+0x3dd/0x400
> > [    1.909055]  [<ffffffff8112c751>] ? handle_mm_fault+0x261/0x380
> > [    1.909055]  [<ffffffff815d4cd0>] ? do_page_fault+0x250/0x4f0
> > [    1.909055]  [<ffffffff8114a587>] ? kmem_cache_alloc+0x1a7/0x1f0
> > [    1.909055]  [<ffffffff811311a4>] ? do_brk+0x1b4/0x350
> > [    1.909055]  [<ffffffff8146df04>] sys_sendmsg+0x44/0x80
> > [    1.909055]  [<ffffffff815d7bf9>] system_call_fastpath+0x16/0x1b
> > [    1.909055] Code: 44 00 00 41 80 8c 24 a8 00 00 00 04 e9 2f ff ff ff 66 2e 0f 1f 84 00 00 00 00 00 41 8b 84 24 c8 00 00 00 49 03 84 24 d0 00 00 00 <80> 3c 25 10 00 00 00 00 48 8d 50 30 74 0f 48 83 3c 25 08 00 00 
> > [    1.909055] RIP  [<ffffffffa0037750>] xennet_poll+0x980/0xec0 [xen_netfront]
> > [    1.909055]  RSP <ffff8800ffc03db8>
> > [    1.909055] CR2: 0000000000000010
> > [    1.947298] ---[ end trace 3f4ba742dffbe90d ]---
> > [    1.947824] Kernel panic - not syncing: Fatal exception in interrupt
> > 
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Aug 04 11:04:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Aug 2012 11:04:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sxc9R-0007nW-BA; Sat, 04 Aug 2012 11:04:09 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <darnok@68k.org>) id 1Sxc9Q-0007nR-DW
	for xen-devel@lists.xensource.com; Sat, 04 Aug 2012 11:04:08 +0000
Received: from [85.158.139.83:48296] by server-4.bemta-5.messagelabs.com id
	9F/AC-27831-7A10D105; Sat, 04 Aug 2012 11:04:07 +0000
X-Env-Sender: darnok@68k.org
X-Msg-Ref: server-5.tower-182.messagelabs.com!1344078243!30280001!1
X-Originating-IP: [206.212.254.10]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21070 invoked from network); 4 Aug 2012 11:04:05 -0000
Received: from andromeda.dapyr.net (HELO andromeda.dapyr.net) (206.212.254.10)
	by server-5.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 4 Aug 2012 11:04:05 -0000
Received: from andromeda.dapyr.net (darnok@localhost [127.0.0.1])
	by andromeda.dapyr.net (8.13.4/8.13.4/Debian-3sarge3) with ESMTP id
	q74B3uU7018596
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NOT);
	Sat, 4 Aug 2012 07:03:56 -0400
Received: (from darnok@localhost)
	by andromeda.dapyr.net (8.13.4/8.13.4/Submit) id q74B3thF018594;
	Sat, 4 Aug 2012 07:03:55 -0400
Date: Sat, 4 Aug 2012 07:03:55 -0400
From: Konrad Rzeszutek Wilk <konrad@darnok.org>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, akpm@linux-foundation.org, 
	linux-kernel@vger.kernel.org, mgorman@suse.de, davem@davemloft.net
Message-ID: <20120804110355.GA17640@andromeda.dapyr.net>
References: <20120801190227.GA13272@phenom.dumpdata.com>
	<20120803120414.GA10670@andromeda.dapyr.net>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20120803120414.GA10670@andromeda.dapyr.net>
User-Agent: Mutt/1.5.9i
Cc: Ian Campbell <Ian.Campbell@eu.citrix.com>, xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] Regression in xen-netfront on v3.6 (git commit
	c48a11c7ad2623b99bbd6859b0b4234e7f11176f,
	netvm: propagate page->pfmemalloc to skb)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Aug 03, 2012 at 08:04:14AM -0400, Konrad Rzeszutek Wilk wrote:
> On Wed, Aug 01, 2012 at 03:02:27PM -0400, Konrad Rzeszutek Wilk wrote:
> > So I hadn't done a git bisection yet. But if I choose git commit:
> > 4b24ff71108164e047cf2c95990b77651163e315
> >     Merge tag 'for-v3.6' of git://git.infradead.org/battery-2.6
> > 
> >     Pull battery updates from Anton Vorontsov:
> > 
> > 
> > everything works nicely. Anything past that, so these merges:
> > 
> > konrad@phenom:~/ssd/linux$ git log --oneline --merges 4b24ff71108164e047cf2c95990b77651163e315..linus/master
> > 2d53492 Merge tag 'irqdomain-for-linus' of git://git.secretlab.ca/git/linux-2.6
> ===> ac694db Merge branch 'akpm' (Andrew's patch-bomb)
> 
> Somewhere in there is the culprit. I hadn't yet done the full bisection
> (I was just checking out each merge to see when it stopped working)

Mel, your:
commit c48a11c7ad2623b99bbd6859b0b4234e7f11176f
Author: Mel Gorman <mgorman@suse.de>
Date:   Tue Jul 31 16:44:23 2012 -0700

    netvm: propagate page->pfmemalloc to skb

is the culprit per git bisect. Any ideas - do the drivers need to do
some extra processing? Here is the git bisect log

git bisect start
# good: [a40a1d3d0a2fd613fdec6d89d3c053268ced76ed] Merge tag
# 'vfio-for-v3.6' of git://github.com/awilliam/linux-vfio
git bisect good a40a1d3d0a2fd613fdec6d89d3c053268ced76ed
# bad: [ac694dbdbc403c00e2c14d10bc7b8412cc378259] Merge branch 'akpm'
# (Andrew's patch-bomb)
git bisect bad ac694dbdbc403c00e2c14d10bc7b8412cc378259
# good: [62ce1c706f817cb9defef3ac2dfdd815149f2968] mm, oom: move
# declaration for mem_cgroup_out_of_memory to oom.h
git bisect good 62ce1c706f817cb9defef3ac2dfdd815149f2968
# bad: [5a178119b0fbe37f7dfb602b37df9cc4b1dc9d71] mm: add support for
# direct_IO to highmem pages
git bisect bad 5a178119b0fbe37f7dfb602b37df9cc4b1dc9d71
# good: [7cb0240492caea2f6467f827313478f41877e6ef] netvm: allow the use
# of __GFP_MEMALLOC by specific sockets
git bisect good 7cb0240492caea2f6467f827313478f41877e6ef
# bad: [5515061d22f0f9976ae7815864bfd22042d36848] mm: throttle direct
# reclaimers if PF_MEMALLOC reserves are low and swap is backed by
# network storage
git bisect bad 5515061d22f0f9976ae7815864bfd22042d36848
# bad: [0614002bb5f7411e61ffa0dfe5be1f2c84df3da3] netvm: propagate
# page->pfmemalloc from skb_alloc_page to skb
git bisect bad 0614002bb5f7411e61ffa0dfe5be1f2c84df3da3
# bad: [c48a11c7ad2623b99bbd6859b0b4234e7f11176f] netvm: propagate
# page->pfmemalloc to skb
git bisect bad c48a11c7ad2623b99bbd6859b0b4234e7f11176f
# good: [c93bdd0e03e848555d144eb44a1f275b871a8dd5] netvm: allow skb
# allocation to use PFMEMALLOC reserves
git bisect good c93bdd0e03e848555d144eb44a1f275b871a8dd5
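
The manual good/bad marking above can also be driven automatically with `git bisect run`. Here is a toy, self-contained sketch (the repo and the "bug" marker are synthetic stand-ins; in the real case the run script would boot the domU and check the console log for the oops):

```shell
# Build a throwaway repo where commit 4 introduces a synthetic "bug",
# then let `git bisect run` find it automatically.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name you
for i in 1 2 3 4 5 6; do
    echo "change $i" > file
    # commit 4 and everything after it carry the synthetic bug
    if [ "$i" -ge 4 ]; then echo bug >> file; fi
    git add file
    git commit -qm "commit $i"
done
git bisect start HEAD HEAD~5        # bad = tip, good = first commit
# the run command must exit non-zero exactly when the bug is present
git bisect run sh -c '! grep -q bug file'
git bisect log | grep "first bad commit"
```

The only real-world change needed is swapping the `grep` one-liner for a script that builds the kernel, boots the guest, and greps the serial log.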


> 
> Andrew CC-ing you here, the serial log is below.
> > a40a1d3 Merge tag 'vfio-for-v3.6' of git://github.com/awilliam/linux-vfio
> > 3e9a970 Merge tag 'random_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/random
> > 941c872 Merge tag 'rdma-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/roland/infiniband
> > 8762541 Merge branch 'v4l_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-media
> > 6dbb35b Merge tag 'nfs-for-3.6-2' of git://git.linux-nfs.org/projects/trondmy/linux-nfs
> > fd37ce3 Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
> > 1da9b6b Merge branches 'cma', 'ipoib', 'ocrdma' and 'qib' into for-next
> > 6aeea3e Merge remote-tracking branch 'origin' into irqdomain/next
> > 931efdf Merge branch 'v4l_for_linus' into staging/for_v3.6
> > 80c1834 Merge tag 'v3.5-rc6' into irqdomain/next
> > 
> > are the culprits. I think it might be the networking pull, but I am not sure.
> > Ian, any thoughts?
> > 
> > Using config file "/test.xm".
> > Started domain latest (id=2)
> > [    0.000000] console [hvc0] enabled, bootconsole disabled
> > [    0.000000] Xen: using vcpuop timer interface
> > [    0.000000] installing Xen timer for CPU 0
> > [    0.000000] tsc: Detected 2294.530 MHz processor
> > [    0.000999] Calibrating delay loop (skipped), value calculated using timer frequency.. 4589.06 BogoMIPS (lpj=2294530)
> > [    0.000999] pid_max: default: 32768 minimum: 301
> > [    0.000999] Security Framework initialized
> > [    0.000999] SELinux:  Initializing.
> > [    0.000999] SELinux:  Starting in permissive mode
> > [    0.000999] Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes)
> > [    0.001520] Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes)
> > [    0.001875] Mount-cache hash table entries: 256
> > [    0.002007] Initializing cgroup subsys cpuacct
> > [    0.002013] Initializing cgroup subsys freezer
> > [    0.002070] ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
> > [    0.002070] ENERGY_PERF_BIAS: View and update with x86_energy_perf_policy(8)
> > [    0.002084] CPU: Physical Processor ID: 0
> > [    0.002087] CPU: Processor Core ID: 0
> > [    0.002094] Last level iTLB entries: 4KB 512, 2MB 0, 4MB 0
> > [    0.002094] Last level dTLB entries: 4KB 512, 2MB 32, 4MB 32
> > [    0.002094] tlb_flushall_shift is 0x5
> > [    0.002164] SMP alternatives: switching to UP code
> > [    0.025291] Freeing SMP alternatives: 24k freed
> > [    0.025356] cpu 0 spinlock event irq 17
> > [    0.025383] Performance Events: unsupported p6 CPU model 45 no PMU driver, software events only.
> > [    0.025551] NMI watchdog: disabled (cpu0): hardware events not enabled
> > [    0.025576] Brought up 1 CPUs
> > [    0.028642] kworker/u:0 (14) used greatest stack depth: 5936 bytes left
> > [    0.028675] Grant tables using version 2 layout.
> > [    0.028691] Grant table initialized
> > [    0.047616] RTC time: 165:165:165, date: 165/165/65
> > [    0.047661] NET: Registered protocol family 16
> > [    0.048184] dca service started, version 1.12.1
> > [    0.048545] PCI: setting up Xen PCI frontend stub
> > [    0.048552] PCI: pci_cache_line_size set to 64 bytes
> > [    0.049543] kworker/u:0 (51) used greatest stack depth: 5472 bytes left
> > [    0.054147] bio: create slab <bio-0> at 0
> > [    0.054240] ACPI: Interpreter disabled.
> > [    0.054288] xen/balloon: Initialising balloon driver.
> > [    0.055127] xen-balloon: Initialising balloon driver.
> > [    0.055127] vgaarb: loaded
> > [    0.056125] usbcore: registered new interface driver usbfs
> > [    0.056162] usbcore: registered new interface driver hub
> > [    0.056217] usbcore: registered new device driver usb
> > [    0.056425] PCI: System does not support PCI
> > [    0.056431] PCI: System does not support PCI
> > [    0.056617] NetLabel: Initializing
> > [    0.056624] NetLabel:  domain hash size = 128
> > [    0.056627] NetLabel:  protocols = UNLABELED CIPSOv4
> > [    0.056642] NetLabel:  unlabeled traffic allowed by default
> > [    0.056725] Switching to clocksource xen
> > [    0.056795] pnp: PnP ACPI: disabled
> > [    0.058698] NET: Registered protocol family 2
> > [    0.059805] TCP established hash table entries: 524288 (order: 11, 8388608 bytes)
> > [    0.061110] TCP bind hash table entries: 65536 (order: 8, 1048576 bytes)
> > [    0.061243] TCP: Hash tables configured (established 524288 bind 65536)
> > [    0.061281] TCP: reno registered
> > [    0.061304] UDP hash table entries: 2048 (order: 4, 65536 bytes)
> > [    0.061341] UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes)
> > [    0.061425] NET: Registered protocol family 1
> > [    0.061492] RPC: Registered named UNIX socket transport module.
> > [    0.061498] RPC: Registered udp transport module.
> > [    0.061504] RPC: Registered tcp transport module.
> > [    0.061510] RPC: Registered tcp NFSv4.1 backchannel transport module.
> > [    0.061518] PCI: CLS 0 bytes, default 64
> > [    0.061643] Trying to unpack rootfs image as initramfs...
> > [    0.382189] Freeing initrd memory: 362080k freed
> > [    0.499615] platform rtc_cmos: registered platform RTC device (no PNP device found)
> > [    0.499831] Machine check injector initialized
> > [    0.500181] microcode: CPU0 sig=0x206d2, pf=0x1, revision=0x8000020c
> > [    0.500229] microcode: Microcode Update Driver: v2.00 <tigran@aivazian.fsnet.co.uk>, Peter Oruba
> > [    0.500544] audit: initializing netlink socket (disabled)
> > [    0.500566] type=2000 audit(1343845740.901:1): initialized
> > [    0.515227] HugeTLB registered 2 MB page size, pre-allocated 0 pages
> > [    0.515358] VFS: Disk quotas dquot_6.5.2
> > [    0.515386] Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
> > [    0.515525] NFS: Registering the id_resolver key type
> > [    0.515544] Key type id_resolver registered
> > [    0.515551] Key type id_legacy registered
> > [    0.515599] NTFS driver 2.1.30 [Flags: R/W].
> > [    0.515706] msgmni has been set to 8021
> > [    0.515765] SELinux:  Registering netfilter hooks
> > [    0.516222] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 253)
> > [    0.516232] io scheduler noop registered
> > [    0.516236] io scheduler deadline registered
> > [    0.516243] io scheduler cfq registered (default)
> > [    0.516337] pci_hotplug: PCI Hot Plug PCI Core version: 0.5
> > [    0.516442] ioatdma: Intel(R) QuickData Technology Driver 4.00
> > [    0.532923] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
> > [    0.533399] Non-volatile memory driver v1.3
> > [    0.533406] Linux agpgart interface v0.103
> > [    0.533588] [drm] Initialized drm 1.1.0 20060810
> > [    0.535196] brd: module loaded
> > [    0.535992] loop: module loaded
> > [    0.536344] libphy: Fixed MDIO Bus: probed
> > [    0.536351] tun: Universal TUN/TAP device driver, 1.6
> > [    0.536354] tun: (C) 1999-2004 Max Krasnyansky <maxk@qualcomm.com>
> > [    0.536419] ixgbevf: Intel(R) 10 Gigabit PCI Express Virtual Function Network Driver - version 2.6.0-k
> > [    0.536428] ixgbevf: Copyright (c) 2009 - 2012 Intel Corporation.
> > [    0.536721] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
> > [    0.536729] ehci_hcd: block sizes: qh 104 qtd 96 itd 192 sitd 96
> > [    0.536770] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
> > [    0.536776] ohci_hcd: block sizes: ed 80 td 96
> > [    0.536826] uhci_hcd: USB Universal Host Controller Interface driver
> > [    0.536929] usbcore: registered new interface driver usblp
> > [    0.536977] usbcore: registered new interface driver libusual
> > [    0.537164] i8042: PNP: No PS/2 controller found. Probing ports directly.
> > [    0.538013] i8042: No controller found
> > [    0.538103] mousedev: PS/2 mouse device common for all mice
> > [    0.598349] rtc_cmos rtc_cmos: rtc core: registered rtc_cmos as rtc0
> > [    0.598404] rtc_cmos: probe of rtc_cmos failed with error -38
> > [    0.598559] EFI Variables Facility v0.08 2004-May-17
> > [    0.598701] Netfilter messages via NETLINK v0.30.
> > [    0.598719] nf_conntrack version 0.5.0 (16384 buckets, 65536 max)
> > [    0.598790] ctnetlink v0.93: registering with nfnetlink.
> > [    0.598971] ip_tables: (C) 2000-2006 Netfilter Core Team
> > [    0.599007] TCP: cubic registered
> > [    0.599014] Initializing XFRM netlink socket
> > [    0.599043] NET: Registered protocol family 10
> > [    0.599238] ip6_tables: (C) 2000-2006 Netfilter Core Team
> > [    0.599374] sit: IPv6 over IPv4 tunneling driver
> > [    0.599589] NET: Registered protocol family 17
> > [    0.599618] Key type dns_resolver registered
> > [    0.599808] PM: Hibernation image not present or could not be loaded.
> > [    0.599824] registered taskstats version 1
> > [    0.599848] XENBUS: Device with no driver: device/vkbd/0
> > [    0.599852] XENBUS: Device with no driver: device/vfb/0
> > [    0.599856] XENBUS: Device with no driver: device/vbd/51712
> > [    0.599860] XENBUS: Device with no driver: device/vif/0
> > [    0.599886]   Magic number: 1:252:3141
> > [    0.600271] Freeing unused kernel memory: 704k freed
> > [    0.600500] Write protecting the kernel read-only data: 8192k
> > [    0.602437] Freeing unused kernel memory: 132k freed
> > [    0.602589] Freeing unused kernel memory: 340k freed
> > init started: BusyBox v1.14.3 (2012-08-01 13:52:44 EDT)
> > [    0.607489] consoletype (1056) used greatest stack depth: 5288 bytes left
> > Mounting directories  [  OK  ]
> > mount: mount point /proc/bus/usb does not exist
> > [    0.781044] modprobe (1085) used greatest stack depth: 5048 bytes left
> > mount: mount point /sys/kernel/config does not exist
> > [    0.785748] core_filesystem (1057) used greatest stack depth: 4968 bytes left
> > [    0.793721] input: Xen Virtual Keyboard as /devices/virtual/input/input0
> > [    0.793892] input: Xen Virtual Pointer as /devices/virtual/input/input1
> > [    1.010121] Initialising Xen virtual ethernet driver.
> > [    1.124604] blkfront: xvda: flush diskcache: enabled
> > [    1.126118]  xvda: xvda1 xvda2 xvda3 xvda4
> > [    1.239316] udevd (1128): /proc/1128/oom_adj is deprecated, please use /proc/1128/oom_score_adj instead.
> > udevd-work[1130]: error opening ATTR{/sys/devices/system/cpu/cpu0/online} for writing: No such file or directory
> > 
> > [    1.395080] ip (1408) used greatest stack depth: 3912 bytes left
> > Waiting for devices [  OK  ]
> > Waiting for init.pre_custom [  OK  ]
> > Waiting for fb [  OK  ]
> > Starting..[/dev/fb0]
> > /dev/fb0: len:0
> > /dev/fb0: bits/pixel32
> > (7f44ddbc2000): Writting .. [800:600]
> > Done!
> > FATAL: Module agpgart_intel not found.
> > [    1.652131] [drm] radeon kernel modesetting enabled.
> > WARNING: Error inserting video (/lib/modules/3.5.0upstream-08854-g444fa66/kernel/drivers/acpi/video.ko): No such device
> > WARNING: Error inserting mxm_wmi (/lib/modules/3.5.0upstream-08854-g444fa66/kernel/drivers/platform/x86/mxm-wmi.ko): No such device
> > WARNING: Error inserting drm_kms_helper (/lib/modules/3.5.0upstream-08854-g444fa66/kernel/drivers/gpu/drm/drm_kms_helper.ko): No such device
> > WARNING: Error inserting ttm (/lib/modules/3.5.0upstream-08854-g444fa66/kernel/drivers/gpu/drm/ttm/ttm.ko): No such device
> > FATAL: Error inserting nouveau (/lib/modules/3.5.0upstream-08854-g444fa66/kernel/drivers/gpu/drm/nouveau/nouveau.ko): No such device
> > [    1.660288] Console: switching to colour frame buffer device 100x37
> > WARNING: Error inserting drm_kms_helper (/lib/modules/3.5.0upstream-08854-g444fa66/kernel/drivers/gpu/drm/drm_kms_helper.ko): No such device
> > FATAL: Error inserting i915 (/lib/modules/3.5.0upstream-08854-g444fa66/kernel/drivers/gpu/drm/i915/i915.ko): No such device
> > Starting..[/dev/fb0]
> > /dev/fb0: len:0
> > /dev/fb0: bits/pixel32
> > (7fb8669cc000): Writting .. [800:600]
> > Done!
> > VGA: 0000:
> > Waiting for network [  OK  ]
> > Bringing up loopback interface:  [  OK  ]
> > Bringing up interface eth0:  [    1.908592] BUG: unable to handle kernel NULL pointer dereference at 0000000000000010
> > [    1.908643] IP: [<ffffffffa0037750>] xennet_poll+0x980/0xec0 [xen_netfront]
> > [    1.908703] PGD ea1df067 PUD e8ada067 PMD 0 
> > [    1.908774] Oops: 0000 [#1] SMP 
> > [    1.908797] Modules linked in: fbcon tileblit font radeon bitblit softcursor ttm drm_kms_helper crc32c_intel xen_blkfront xen_netfront xen_fbfront fb_sys_fops sysimgblt sysfillrect syscopyarea xen_kbdfront xenfs xen_privcmd
> > [    1.908938] CPU 0 
> > [    1.908950] Pid: 2165, comm: ip Not tainted 3.5.0upstream-08854-g444fa66 #1  
> > [    1.908983] RIP: e030:[<ffffffffa0037750>]  [<ffffffffa0037750>] xennet_poll+0x980/0xec0 [xen_netfront]
> > [    1.909029] RSP: e02b:ffff8800ffc03db8  EFLAGS: 00010282
> > [    1.909055] RAX: ffff8800ea010140 RBX: ffff8800f00e86c0 RCX: 000000000000009a
> > [    1.909055] RDX: 0000000000000040 RSI: 000000000000005a RDI: ffff8800fa7dee80
> > [    1.909055] RBP: ffff8800ffc03ee8 R08: ffff8800f00e86d8 R09: ffff8800ea010000
> > [    1.909055] R10: dead000000200200 R11: dead000000100100 R12: ffff8800fa7dee80
> > [    1.909055] R13: 000000000000005a R14: ffff8800fa7dee80 R15: 0000000000000200
> > [    1.909055] FS:  00007fbafc188700(0000) GS:ffff8800ffc00000(0000) knlGS:0000000000000000
> > [    1.909055] CS:  e033 DS: 0000 ES: 0000 CR0: 000000008005003b
> > [    1.909055] CR2: 0000000000000010 CR3: 00000000ea108000 CR4: 0000000000002660
> > [    1.909055] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> > [    1.909055] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
> > [    1.909055] Process ip (pid: 2165, threadinfo ffff8800ea0f2000, task ffff8800fa783040)
> > [    1.909055] Stack:
> > [    1.909055]  ffff8800e27e5040 ffff8800ffc03e88 ffff8800ffc03e68 ffff8800ffc03e48
> > [    1.909055]  7fffffffffffffff ffff8800ffc03e00 ffff8800e27e5040 ffff8800f00e86d8
> > [    1.909055]  ffff8800ffc03eb0 00000040ffffffff ffff8800f00e8000 00000000ffc03e30
> > [    1.909055] Call Trace:
> > [    1.909055]  <IRQ> 
> > [    1.909055]  [<ffffffff81066028>] ? pvclock_clocksource_read+0x58/0xd0
> > [    1.909055]  [<ffffffff81486352>] net_rx_action+0x112/0x240
> > [    1.909055]  [<ffffffff8107f319>] __do_softirq+0xb9/0x190
> > [    1.909055]  [<ffffffff815d8d7c>] call_softirq+0x1c/0x30
> > [    1.909055]  <EOI> 
> > [    1.909055]  [<ffffffff8103a435>] do_softirq+0x65/0xa0
> > [    1.909055]  [<ffffffff8107f834>] local_bh_enable_ip+0x94/0xa0
> > [    1.909055]  [<ffffffff815d11a4>] _raw_spin_unlock_bh+0x24/0x30
> > [    1.909055]  [<ffffffffa0036d44>] xennet_open+0x54/0xe0 [xen_netfront]
> > [    1.909055]  [<ffffffff81481dcf>] __dev_open+0xbf/0x120
> > [    1.909055]  [<ffffffff8148022c>] __dev_change_flags+0x9c/0x180
> > [    1.909055]  [<ffffffff81481cc3>] dev_change_flags+0x23/0x70
> > [    1.909055]  [<ffffffff81491062>] do_setlink+0x1c2/0xa10
> > [    1.909055]  [<ffffffff812b5d74>] ? nla_parse+0x34/0x110
> > [    1.909055]  [<ffffffff81494005>] rtnl_newlink+0x3a5/0x5c0
> > [    1.909055]  [<ffffffff812541c4>] ? selinux_capable+0x34/0x50
> > [    1.909055]  [<ffffffff81250223>] ? security_capable+0x13/0x20
> > [    1.909055]  [<ffffffff81491e07>] rtnetlink_rcv_msg+0x2c7/0x330
> > [    1.909055]  [<ffffffff810a18b7>] ? __might_sleep+0xe7/0x100
> > [    1.909055]  [<ffffffff81149a52>] ? kmem_cache_alloc_node+0x82/0x1d0
> > [    1.909055]  [<ffffffff8147a00c>] ? __skb_recv_datagram+0x11c/0x2f0
> > [    1.909055]  [<ffffffff81491b40>] ? rtnetlink_rcv+0x30/0x30
> > [    1.909055]  [<ffffffff814a9c89>] netlink_rcv_skb+0x99/0xc0
> > [    1.909055]  [<ffffffff81491b30>] rtnetlink_rcv+0x20/0x30
> > [    1.909055]  [<ffffffff814a9998>] netlink_unicast+0x1a8/0x220
> > [    1.909055]  [<ffffffff814aa535>] netlink_sendmsg+0x205/0x300
> > [    1.909055]  [<ffffffff8146ce19>] sock_sendmsg+0xb9/0xf0
> > [    1.909055]  [<ffffffff8146c51e>] ? copy_from_user+0x3e/0x50
> > [    1.909055]  [<ffffffff8146c576>] ? move_addr_to_kernel+0x46/0x80
> > [    1.909055]  [<ffffffff810a18b7>] ? __might_sleep+0xe7/0x100
> > [    1.909055]  [<ffffffff8146dd2d>] __sys_sendmsg+0x3dd/0x400
> > [    1.909055]  [<ffffffff8112c751>] ? handle_mm_fault+0x261/0x380
> > [    1.909055]  [<ffffffff815d4cd0>] ? do_page_fault+0x250/0x4f0
> > [    1.909055]  [<ffffffff8114a587>] ? kmem_cache_alloc+0x1a7/0x1f0
> > [    1.909055]  [<ffffffff811311a4>] ? do_brk+0x1b4/0x350
> > [    1.909055]  [<ffffffff8146df04>] sys_sendmsg+0x44/0x80
> > [    1.909055]  [<ffffffff815d7bf9>] system_call_fastpath+0x16/0x1b
> > [    1.909055] Code: 44 00 00 41 80 8c 24 a8 00 00 00 04 e9 2f ff ff ff 66 2e 0f 1f 84 00 00 00 00 00 41 8b 84 24 c8 00 00 00 49 03 84 24 d0 00 00 00 <80> 3c 25 10 00 00 00 00 48 8d 50 30 74 0f 48 83 3c 25 08 00 00 
> > [    1.909055] RIP  [<ffffffffa0037750>] xennet_poll+0x980/0xec0 [xen_netfront]
> > [    1.909055]  RSP <ffff8800ffc03db8>
> > [    1.909055] CR2: 0000000000000010
> > [    1.947298] ---[ end trace 3f4ba742dffbe90d ]---
> > [    1.947824] Kernel panic - not syncing: Fatal exception in interrupt
> > 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Aug 04 11:13:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Aug 2012 11:13:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxcHa-0007ym-GH; Sat, 04 Aug 2012 11:12:34 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <darnok@68k.org>) id 1SxcHZ-0007yd-B7
	for xen-devel@lists.xen.org; Sat, 04 Aug 2012 11:12:33 +0000
Received: from [85.158.139.83:10534] by server-5.bemta-5.messagelabs.com id
	85/D1-02722-0A30D105; Sat, 04 Aug 2012 11:12:32 +0000
X-Env-Sender: darnok@68k.org
X-Msg-Ref: server-5.tower-182.messagelabs.com!1344078750!30280740!1
X-Originating-IP: [206.212.254.10]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3757 invoked from network); 4 Aug 2012 11:12:31 -0000
Received: from andromeda.dapyr.net (HELO andromeda.dapyr.net) (206.212.254.10)
	by server-5.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 4 Aug 2012 11:12:31 -0000
Received: from andromeda.dapyr.net (darnok@localhost [127.0.0.1])
	by andromeda.dapyr.net (8.13.4/8.13.4/Debian-3sarge3) with ESMTP id
	q74BCTHK018717
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NOT);
	Sat, 4 Aug 2012 07:12:29 -0400
Received: (from darnok@localhost)
	by andromeda.dapyr.net (8.13.4/8.13.4/Submit) id q74BCTBb018715;
	Sat, 4 Aug 2012 07:12:29 -0400
Date: Sat, 4 Aug 2012 07:12:29 -0400
From: Konrad Rzeszutek Wilk <konrad@darnok.org>
To: "Fan, Huaxiang" <hufan@websense.com>
Message-ID: <20120804111229.GB17640@andromeda.dapyr.net>
References: <E71FC5D6F96C3C4B93FC8FF942D924C675ADD043@SBJEXCH1A.websense.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <E71FC5D6F96C3C4B93FC8FF942D924C675ADD043@SBJEXCH1A.websense.com>
User-Agent: Mutt/1.5.9i
Cc: "xen-users@lists.xen.org" <xen-users@lists.xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"Konrad Rzeszutek Wilk \(konrad.wilk@oracle.com\)" <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] PCI passthrough for domU allocated with more than
	4G memory
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Aug 01, 2012 at 05:57:13AM +0000, Fan, Huaxiang wrote:
> Hi,
> 
> I have encountered some strange problems while trying to PCI-passthrough Broadcom 5709/5716 NICs to domUs allocated more than 4G of memory. Please see below for details.
> 
> My environment is:
> Hardware Platform: DELL R210 with 2 Broadcom 5709 NICs and 2 Broadcom 5716 NICs
> Xen: xen 4.2 unstable (64bits for hypervisor and 32bit for tools)
> Kernel for both dom0 and domUs: xenified kernel 2.6.32.57 (32bit)
> OS: CentOS 6.2 (32bit)
> 
> The general info regarding Xen can be obtained via the command below:
> 
> # xl info
> 
> host                   : 7.8
> 
> release                : 2.6.32.57
> 
> version                : #1 SMP Fri Jul 6 18:44:16 CST 2012
> 
> machine                : i686
> 
> nr_cpus                : 8
> 
> max_cpu_id             : 31
> 
> nr_nodes               : 1
> 
> cores_per_socket       : 4
> 
> threads_per_core       : 2
> 
> cpu_mhz                : 2660
> 
> hw_caps                : bfebfbff:28100800:00000000:00003b40:0098e3fd:00000000:00000001:00000000
> 
> virt_caps              : hvm hvm_directio
> 
> total_memory           : 8182
> 
> free_memory            : 7046
> 
> sharing_freed_memory   : 0
> 
> sharing_used_memory    : 0
> 
> free_cpus              : 0
> 
> xen_major              : 4
> 
> xen_minor              : 2
> 
> xen_extra              : -unstable
> 
> xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
> 
> xen_scheduler          : credit
> 
> xen_pagesize           : 4096
> 
> platform_params        : virt_start=0xff400000
> 
> xen_changeset          : unavailable
> 
> xen_commandline        : dom0_mem=1024M dom0_max_vcpus=2 dom0_vcpus_pin
> 
> cc_compiler            : gcc version 4.4.6 20110731 (Red Hat 4.4.6-3) (GCC)
> 
> cc_compile_by          : root
> 
> cc_compile_domain      :
> 
> cc_compile_date        : Thu Jul 12 11:20:56 CST 2012
> 
> xend_config_format     : 4
> 
> case 1> I specified PV-domU config as below
> 
> memory=3072
> 
> maxmem=6144
> 
> name="bs"
> 
> vif=['ip=169.254.254.1,script=vif-nat',]
> 
> disk=['file:/root/bs.img,xvda1,w']
> 
> kernel='/root/vmlinuz'
> 
> extra="iommu=soft console=hvc0"
> 
> ramdisk='/root/initrd.img'
> 
> root="/dev/xvda1 ro"
> 
> pci=['01:00.0','01:00.1']
> 
> and then started the domU. After that I executed the command below:
> 
> # xl list
> 
> Name                                        ID   Mem VCPUs      State   Time(s)
> 
> Domain-0                                     0  1024     2     r-----      88.7
> 
> bs                                           2  3072     1     -b----       1.1
> 
> It seemed normal to me. But when I logged on to the bs domU and executed the command below:
> 
> # cat /proc/meminfo | head
> 
> MemTotal:        6158940 kB
> 
> MemFree:         2944776 kB

So that is the right amount. It has around 3GB free.

> 
> Buffers:            5108 kB
> 
> Cached:            32292 kB
> 
> SwapCached:            0 kB
> 
> Active:            21456 kB
> 
> Inactive:          22936 kB
> 
> Active(anon):       7000 kB
> 
> Inactive(anon):      108 kB
> 
> Active(file):      14456 kB
> 
> It indicated the total memory was 6G. Why?

I wish you had also included the DirectMap numbers. Regardless of
what /proc/meminfo says, what does your dmesg say in the 'Memory:' section?

> When I got back to dom0, I executed the command below:
> 
> # xl mem-set bs 6144
> 
> # xl list
> 
> Name                                        ID   Mem VCPUs      State   Time(s)
> 
> Domain-0                                     0  1024     2     r-----      93.5
> 
> bs                                           2  6144     1     -b----      10.5
> It seemed normal to me. But when I logged on to the bs domU again and executed the command below:
> 
> # cat /proc/meminfo | head
> 
> MemTotal:        9304668 kB
> 
> MemFree:         6087540 kB

So that is right. 6GB of free space.
> 
> Buffers:            5168 kB
> 
> Cached:            32464 kB
> 
> SwapCached:            0 kB
> 
> Active:            22300 kB
> 
> Inactive:          22408 kB
> 
> Active(anon):       7080 kB
> 
> Inactive(anon):      108 kB
> 
> Active(file):      15220 kB
> 
> 
> 
> It indicated the total memory was 9G (6G + 3G). That was weird. Any idea about this?

Presumably because '3G' of it is E820_UNUSABLE or the big gap in the
E820. But regardless of that - do you have 6GB of RAM in your guest?

> Case 2> I specified PV-domU config as below
> 
> memory=6144
> 
> maxmem=6144
> 
> name="bs"
> 
> vif=['ip=169.254.254.1,script=vif-nat',]
> 
> disk=['file:/root/bs.img,xvda1,w']
> 
> kernel='/root/vmlinuz'
> 
> extra="iommu=soft console=hvc0"
> 
> ramdisk='/root/initrd.img'
> 
> root="/dev/xvda1 ro"
> 
> pci=['01:00.0','01:00.1']
> 
> and then started the domU. After that I executed the command below:
> 
> # xl list
> 
> Name                                        ID   Mem VCPUs      State   Time(s)
> 
> Domain-0                                     0   648     2     r-----     120.5
> 
> bs                                           3  3360     1     -b----       7.0
> 
> The output was very confusing. Why had dom0's memory shrunk to 648M, with only 3360M assigned to the bs domU?

I think you are seeing a bug in xl's autoballooning. Somebody
mentioned this on xen-devel.
> 
> My own analysis:
> I extracted the bios e820 memory map on bs domU as below
> 
> [    0.000000] BIOS-provided physical RAM map:
> 
> [    0.000000]  Xen: 0000000000000000 - 00000000000a0000 (usable)
> 
> [    0.000000]  Xen: 00000000000a0000 - 0000000000100000 (reserved)
> 
> [    0.000000]  Xen: 0000000000100000 - 00000000bf699000 (usable)
> 
> [    0.000000]  Xen: 00000000bf699000 - 00000000bf6af000 (reserved)
> 
> [    0.000000]  Xen: 00000000bf6af000 - 00000000bf6ce000 (ACPI data)
> 
> [    0.000000]  Xen: 00000000bf6ce000 - 00000000c0000000 (reserved)
> 
> [    0.000000]  Xen: 00000000e0000000 - 00000000f0000000 (reserved)
> 
> [    0.000000]  Xen: 00000000fe000000 - 0000000100000000 (reserved)
> 
> [    0.000000]  Xen: 0000000180000000 - 00000001c33ec000 (usable)
> 
> I think the root cause might be related to the holes between c0000000 and e0000000, between f0000000 and fe000000, and between 100000000 and 180000000. And I think the e820_host option should be set, according to my tracking.

Not sure what you mean by 'my tracking'.

The holes are there to give the PCI subsystem the space to stick the
MMIO BARs of your PCI device. The memory that was "taken" out of those
areas is then appended to the next E820_RAM region. It should have
gone to 0x100000000, not to 0x180000000, though.
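
As a quick sanity check on that map, the three usable regions quoted in the dmesg above can be summed with shell arithmetic (the addresses are copied verbatim from the guest's E820; the rest is just subtraction):

```shell
# Sum the (usable) regions from the domU's E820 map quoted above.
a=$(( 0x00000000000a0000 - 0x0000000000000000 ))
b=$(( 0x00000000bf699000 - 0x0000000000100000 ))
c=$(( 0x00000001c33ec000 - 0x0000000180000000 ))
total=$(( a + b + c ))
echo "usable RAM: $total bytes ($(( total / 1048576 )) MiB)"
# → usable RAM: 4339159040 bytes (4138 MiB)
```

So the map as quoted only exposes about 4GiB of usable RAM; the remainder of the guest's address space is the reserved regions and the holes described above.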

Ohh, you are using the ancient 2.6.32 domU! That kernel probably has
bugs of its own. Have you considered using something more recent?
> 
> Thanks in advance
> HUAXIANG FAN
> Software Engineer II
> 
> WEBSENSE NETWORK SECURITY TECHNOLOGY R&D (BEIJING) CO. LTD.
> ph: +8610.5884.4327
> fax: +8610.5884.4727
> www.websense.cn<http://www.websense.cn>
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Aug 04 11:13:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Aug 2012 11:13:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxcHa-0007ym-GH; Sat, 04 Aug 2012 11:12:34 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <darnok@68k.org>) id 1SxcHZ-0007yd-B7
	for xen-devel@lists.xen.org; Sat, 04 Aug 2012 11:12:33 +0000
Received: from [85.158.139.83:10534] by server-5.bemta-5.messagelabs.com id
	85/D1-02722-0A30D105; Sat, 04 Aug 2012 11:12:32 +0000
X-Env-Sender: darnok@68k.org
X-Msg-Ref: server-5.tower-182.messagelabs.com!1344078750!30280740!1
X-Originating-IP: [206.212.254.10]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3757 invoked from network); 4 Aug 2012 11:12:31 -0000
Received: from andromeda.dapyr.net (HELO andromeda.dapyr.net) (206.212.254.10)
	by server-5.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 4 Aug 2012 11:12:31 -0000
Received: from andromeda.dapyr.net (darnok@localhost [127.0.0.1])
	by andromeda.dapyr.net (8.13.4/8.13.4/Debian-3sarge3) with ESMTP id
	q74BCTHK018717
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NOT);
	Sat, 4 Aug 2012 07:12:29 -0400
Received: (from darnok@localhost)
	by andromeda.dapyr.net (8.13.4/8.13.4/Submit) id q74BCTBb018715;
	Sat, 4 Aug 2012 07:12:29 -0400
Date: Sat, 4 Aug 2012 07:12:29 -0400
From: Konrad Rzeszutek Wilk <konrad@darnok.org>
To: "Fan, Huaxiang" <hufan@websense.com>
Message-ID: <20120804111229.GB17640@andromeda.dapyr.net>
References: <E71FC5D6F96C3C4B93FC8FF942D924C675ADD043@SBJEXCH1A.websense.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <E71FC5D6F96C3C4B93FC8FF942D924C675ADD043@SBJEXCH1A.websense.com>
User-Agent: Mutt/1.5.9i
Cc: "xen-users@lists.xen.org" <xen-users@lists.xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"Konrad Rzeszutek Wilk \(konrad.wilk@oracle.com\)" <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] PCI passthrough for domU allocated with more than
	4G memory
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Aug 01, 2012 at 05:57:13AM +0000, Fan, Huaxiang wrote:
> Hi,
> 
> I have encountered some strange problems while trying to pass Broadcom 5709/5716 NICs through to domUs allocated more than 4G of memory. Please see below for details.
> 
> My environment is:
> Hardware platform: Dell R210 with 2 Broadcom 5709 NICs and 2 Broadcom 5716 NICs
> Xen: Xen 4.2-unstable (64-bit hypervisor, 32-bit tools)
> Kernel for both dom0 and domUs: xenified kernel 2.6.32.57 (32-bit)
> OS: CentOS 6.2 (32-bit)
> 
> The general info regarding Xen can be obtained via the command below:
> 
> # xl info
> host                   : 7.8
> release                : 2.6.32.57
> version                : #1 SMP Fri Jul 6 18:44:16 CST 2012
> machine                : i686
> nr_cpus                : 8
> max_cpu_id             : 31
> nr_nodes               : 1
> cores_per_socket       : 4
> threads_per_core       : 2
> cpu_mhz                : 2660
> hw_caps                : bfebfbff:28100800:00000000:00003b40:0098e3fd:00000000:00000001:00000000
> virt_caps              : hvm hvm_directio
> total_memory           : 8182
> free_memory            : 7046
> sharing_freed_memory   : 0
> sharing_used_memory    : 0
> free_cpus              : 0
> xen_major              : 4
> xen_minor              : 2
> xen_extra              : -unstable
> xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
> xen_scheduler          : credit
> xen_pagesize           : 4096
> platform_params        : virt_start=0xff400000
> xen_changeset          : unavailable
> xen_commandline        : dom0_mem=1024M dom0_max_vcpus=2 dom0_vcpus_pin
> cc_compiler            : gcc version 4.4.6 20110731 (Red Hat 4.4.6-3) (GCC)
> cc_compile_by          : root
> cc_compile_domain      :
> cc_compile_date        : Thu Jul 12 11:20:56 CST 2012
> xend_config_format     : 4
> 
> Case 1> I specified the PV-domU config as below:
> 
> memory=3072
> maxmem=6144
> name="bs"
> vif=['ip=169.254.254.1,script=vif-nat',]
> disk=['file:/root/bs.img,xvda1,w']
> kernel='/root/vmlinuz'
> extra="iommu=soft console=hvc0"
> ramdisk='/root/initrd.img'
> root="/dev/xvda1 ro"
> pci=['01:00.0','01:00.1']
> 
> and then started the domU. After that I executed the command below:
> 
> # xl list
> Name                                        ID   Mem VCPUs      State   Time(s)
> Domain-0                                     0  1024     2     r-----      88.7
> bs                                           2  3072     1     -b----       1.1
> 
> It seemed normal to me. But when I logged on to the bs domU and executed the command below:
> 
> # cat /proc/meminfo | head
> MemTotal:        6158940 kB
> MemFree:         2944776 kB

So that is the right amount; it has around 3GB of memory free.

> 
> Buffers:            5108 kB
> Cached:            32292 kB
> SwapCached:            0 kB
> Active:            21456 kB
> Inactive:          22936 kB
> Active(anon):       7000 kB
> Inactive(anon):      108 kB
> Active(file):      14456 kB
> 
> It indicated the total memory was 6G, why?

I wish you had also included the DirectMap numbers. Regardless of
what /proc/meminfo says, what does your dmesg say in the 'Memory' section?

> When I went back to dom0, I executed the commands below:
> 
> # xl mem-set bs 6144
> # xl list
> Name                                        ID   Mem VCPUs      State   Time(s)
> Domain-0                                     0  1024     2     r-----      93.5
> bs                                           2  6144     1     -b----      10.5
> 
> It seemed normal to me. But when I logged on to the bs domU again and executed the command below:
> 
> # cat /proc/meminfo | head
> MemTotal:        9304668 kB
> MemFree:         6087540 kB

So that is right. 6GB of free space.
> 
> Buffers:            5168 kB
> Cached:            32464 kB
> SwapCached:            0 kB
> Active:            22300 kB
> Inactive:          22408 kB
> Active(anon):       7080 kB
> Inactive(anon):      108 kB
> Active(file):      15220 kB
> 
> It indicated the total memory was 9G (6G + 3G). It was weird. Any idea about this?

Presumably because '3G' of it is E820_UNUSABLE or the big gap in the
E820. But regardless of that - do you have 6GB of RAM in your guest?
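For what it's worth, the quoted numbers do line up with a pure ballooning change: MemTotal grew by exactly the 3G that `xl mem-set bs 6144` added on top of the initial 3072M allocation. A quick arithmetic check (a sketch added for this archive, not part of the original thread):

```python
# Sanity check of the /proc/meminfo values quoted in this thread:
# after `xl mem-set bs 6144`, MemTotal grew by exactly the ballooning
# delta (6144M - 3072M = 3072M = 3G).
KIB_PER_MIB = 1024

memtotal_before_kib = 6158940  # quoted MemTotal with memory=3072
memtotal_after_kib = 9304668   # quoted MemTotal after `xl mem-set bs 6144`

delta_mib = (memtotal_after_kib - memtotal_before_kib) // KIB_PER_MIB
print(delta_mib)  # prints 3072, i.e. exactly the 3G added by mem-set
```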

> Case 2> I specified the PV-domU config as below:
> 
> memory=6144
> maxmem=6144
> name="bs"
> vif=['ip=169.254.254.1,script=vif-nat',]
> disk=['file:/root/bs.img,xvda1,w']
> kernel='/root/vmlinuz'
> extra="iommu=soft console=hvc0"
> ramdisk='/root/initrd.img'
> root="/dev/xvda1 ro"
> pci=['01:00.0','01:00.1']
> 
> and then started the domU. After that I executed the command below:
> 
> # xl list
> Name                                        ID   Mem VCPUs      State   Time(s)
> Domain-0                                     0   648     2     r-----     120.5
> bs                                           3  3360     1     -b----       7.0
> 
> The output was very confusing. Why had dom0's memory shrunk to 648M, and why was only 3360M assigned to the bs domU?

I think you are seeing a bug in xl with the autoballooning. Somebody
mentioned this on xen-devel.
> 
> My own analysis:
> I extracted the BIOS e820 memory map on the bs domU as below:
> 
> [    0.000000] BIOS-provided physical RAM map:
> [    0.000000]  Xen: 0000000000000000 - 00000000000a0000 (usable)
> [    0.000000]  Xen: 00000000000a0000 - 0000000000100000 (reserved)
> [    0.000000]  Xen: 0000000000100000 - 00000000bf699000 (usable)
> [    0.000000]  Xen: 00000000bf699000 - 00000000bf6af000 (reserved)
> [    0.000000]  Xen: 00000000bf6af000 - 00000000bf6ce000 (ACPI data)
> [    0.000000]  Xen: 00000000bf6ce000 - 00000000c0000000 (reserved)
> [    0.000000]  Xen: 00000000e0000000 - 00000000f0000000 (reserved)
> [    0.000000]  Xen: 00000000fe000000 - 0000000100000000 (reserved)
> [    0.000000]  Xen: 0000000180000000 - 00000001c33ec000 (usable)
> 
> I think the root cause might be related to the holes between c0000000 and e0000000, between f0000000 and fe000000, and between 100000000 and 180000000. And I think the e820_host option set according to my tracking.

Not sure what you mean by 'my tracking'.

The holes are there to give the PCI subsystem the space to stick the
MMIO BARs of your PCI device in. The memory that was "taken" out of those
areas is then appended to the next E820_RAM region. It should have
gone to 0x100000000, not to 0x180000000, though.

Ooh, you are using the ancient 2.6.32 domU! That kernel is old and
probably has bugs. Have you considered using something more
recent?
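To make the hole arithmetic concrete, here is a small sketch (added for this archive, not from the original thread) that sums the usable spans of the e820 map quoted above. Note this map may have been captured under a different memory configuration than the meminfo output earlier in the thread:

```python
# Sum the "usable" spans of the quoted Xen e820 map. The gaps
# (c0000000-e0000000, f0000000-fe000000, 100000000-180000000) are the
# holes left for PCI MMIO BARs and are not counted as RAM.
E820_USABLE = [
    (0x0000000000000000, 0x00000000000a0000),
    (0x0000000000100000, 0x00000000bf699000),
    (0x0000000180000000, 0x00000001c33ec000),
]

total_bytes = sum(end - start for start, end in E820_USABLE)
print(f"usable RAM in this map: {total_bytes / 2**30:.2f} GiB")  # 4.04 GiB
```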
> 
> Thanks in advance
> HUAXIANG FAN
> Software Engineer II
> 
> WEBSENSE NETWORK SECURITY TECHNOLOGY R&D (BEIJING) CO. LTD.
> ph: +8610.5884.4327
> fax: +8610.5884.4727
> www.websense.cn

> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Aug 04 13:25:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Aug 2012 13:25:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxeLZ-0000V7-Qy; Sat, 04 Aug 2012 13:24:49 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jean.guyader@gmail.com>) id 1SxeLY-0000V2-RD
	for xen-devel@lists.xen.org; Sat, 04 Aug 2012 13:24:49 +0000
Received: from [85.158.139.83:64613] by server-2.bemta-5.messagelabs.com id
	B9/38-04598-0A22D105; Sat, 04 Aug 2012 13:24:48 +0000
X-Env-Sender: jean.guyader@gmail.com
X-Msg-Ref: server-8.tower-182.messagelabs.com!1344086686!18973034!1
X-Originating-IP: [209.85.212.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24067 invoked from network); 4 Aug 2012 13:24:47 -0000
Received: from mail-vb0-f45.google.com (HELO mail-vb0-f45.google.com)
	(209.85.212.45)
	by server-8.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Aug 2012 13:24:47 -0000
Received: by vbip1 with SMTP id p1so1835195vbi.32
	for <xen-devel@lists.xen.org>; Sat, 04 Aug 2012 06:24:45 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type;
	bh=ELm3oKsiFv6EGkZRjS+nIPdy/nWPgoE28g/YO82radY=;
	b=Lxg9FwJOElr4XPyH9iYE1JFqPgBQUjdMnz0RN0VKWHi4o5Ohl0G1xHF+NNi0sZtVt0
	AMnCeJ/DpEal2Wn/GLSsInJ7JcgtKrU35bfwUCINJaW7Hipn3sE1u77GGyP9OPSMVZ3d
	LWL3cOL8Ly6uKX42k6wuZno1B5XxEGX2Oa9r7plSDRzTpRSicTksJmVyEn/NRS0+d8Qk
	xPkLZGX7QHmGjJJrYoa3o9DOud6PfVL+15pPcA6DgQmRtWb9bQr9dPJAnEuSQyO3Fltd
	3mHeSvEho4JrUR+n+bZAWS9dRjEVFUl+r9Va0zpW2LQYsaj/zblZz+OJYbh8MJaUcDW+
	0Ttw==
Received: by 10.220.240.5 with SMTP id ky5mr3831191vcb.57.1344086685726; Sat,
	04 Aug 2012 06:24:45 -0700 (PDT)
MIME-Version: 1.0
Received: by 10.58.181.101 with HTTP; Sat, 4 Aug 2012 06:24:25 -0700 (PDT)
In-Reply-To: <1344023454-31425-1-git-send-email-jean.guyader@citrix.com>
References: <1344023454-31425-1-git-send-email-jean.guyader@citrix.com>
From: Jean Guyader <jean.guyader@gmail.com>
Date: Sat, 4 Aug 2012 14:24:25 +0100
Message-ID: <CAEBdQ93hQPTiV3mEvXzi1U+PG04oA9ZSUQHK8GPTNF8mQNNyLg@mail.gmail.com>
To: Jean Guyader <jean.guyader@citrix.com>
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 0/5] RFC: V4V (v3)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 3 August 2012 20:50, Jean Guyader <jean.guyader@citrix.com> wrote:
> v3 changes:
>         - Switch to an event channel
>                 - Allocate an unbound event channel
>                   per domain.
>                 - Add a new v4v call to share the
>                   event channel port.
>         - Public headers with actual type definitions
>         - Align all the v4v types to 64 bits
>         - Modify the v4v MAGIC numbers because we won't
>           be backward compatible anymore
>         - Merge insert and insertv
>         - Merge send and sendv
>         - Turn all the lock prerequisites from comments
>           into ASSERT()s
>         - Use write_atomic instead of volatile pointers
>         - Merge v4v_memcpy_to_guest_ring and
>           v4v_memcpy_to_guest_ring_from_guest
>                 - Introduce copy_from_guest_maybe that can take
>                   a void * and a handle as src address.
>         - Replace 6-argument hypercalls with 5-argument hypercalls.

Jean

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Aug 04 13:27:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Aug 2012 13:27:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxeNr-0000Zu-C5; Sat, 04 Aug 2012 13:27:11 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SxeNp-0000Zn-Qr
	for xen-devel@lists.xensource.com; Sat, 04 Aug 2012 13:27:10 +0000
Received: from [85.158.138.51:64992] by server-10.bemta-3.messagelabs.com id
	01/1C-21993-C232D105; Sat, 04 Aug 2012 13:27:08 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-13.tower-174.messagelabs.com!1344086828!8897766!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgxNDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22831 invoked from network); 4 Aug 2012 13:27:08 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-13.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Aug 2012 13:27:08 -0000
X-IronPort-AV: E=Sophos;i="4.77,711,1336348800"; d="scan'208";a="13850695"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Aug 2012 13:27:07 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Sat, 4 Aug 2012 14:27:07 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1SxeNn-0003oZ-Bk;
	Sat, 04 Aug 2012 13:27:07 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1SxeNn-0001HZ-8z;
	Sat, 04 Aug 2012 14:27:07 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13547-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 4 Aug 2012 14:27:07 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13547: trouble: blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13547 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13547/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 13536
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 13536
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 13536
 build-amd64                   2 host-install(2)         broken REGR. vs. 13536
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 13536
 build-i386                    2 host-install(2)         broken REGR. vs. 13536

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  6ccad16b50b6
baseline version:
 xen                  3d17148e465c

------------------------------------------------------------
People who touched revisions under test:
  Christoph Egger <Christoph.Egger@amd.com>
  Dario Faggioli <dario.faggioli@citrix.com>
  David Vrabel <david.vrabel@citrix.com>
  Fabio Fantoni <fabio.fantoni@heliman.it>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Campbell <ian.campbell@eu.citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Olaf Hering <olaf@aepfle.de>
  Roger Pau Monne <roger.pau@citrix.com>
  Santosh Jodh <santosh.jodh@citrix.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 653 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  6ccad16b50b6
baseline version:
 xen                  3d17148e465c

------------------------------------------------------------
People who touched revisions under test:
  Christoph Egger <Christoph.Egger@amd.com>
  Dario Faggioli <dario.faggioli@citrix.com>
  David Vrabel <david.vrabel@citrix.com>
  Fabio Fantoni <fabio.fantoni@heliman.it>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Campbell <ian.campbell@eu.citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Olaf Hering <olaf@aepfle.de>
  Roger Pau Monne <roger.pau@citrix.com>
  Santosh Jodh <santosh.jodh@citrix.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 653 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Aug 04 13:35:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Aug 2012 13:35:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxeVx-0000px-Ii; Sat, 04 Aug 2012 13:35:33 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lmingcsce@gmail.com>) id 1SxeVv-0000ps-Vv
	for xen-devel@lists.xen.org; Sat, 04 Aug 2012 13:35:32 +0000
Received: from [85.158.143.35:65098] by server-1.bemta-4.messagelabs.com id
	38/AD-24392-3252D105; Sat, 04 Aug 2012 13:35:31 +0000
X-Env-Sender: lmingcsce@gmail.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1344087328!5355020!1
X-Originating-IP: [209.85.213.45]
X-SpamReason: No, hits=0.6 required=7.0 tests=HTML_60_70,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1448 invoked from network); 4 Aug 2012 13:35:30 -0000
Received: from mail-yw0-f45.google.com (HELO mail-yw0-f45.google.com)
	(209.85.213.45)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Aug 2012 13:35:30 -0000
Received: by yhpp34 with SMTP id p34so1809016yhp.32
	for <xen-devel@lists.xen.org>; Sat, 04 Aug 2012 06:35:28 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=subject:mime-version:content-type:from:in-reply-to:date:cc
	:message-id:references:to:x-mailer;
	bh=4bz4hhHJCyrTSS+AdK5SaWz5Sx3VHr0GMi472xJicps=;
	b=zzsjL35sYnV2tTAPDFmmsZvU0NeyAJLj05iV9r67A7H4ajuJ1yLwwm/LcdEEfzszRn
	Csi7DhAJj16LL7XL79+hA6iyo3WOecNZdhgnqdfxRvZ9WbaYC1RjRc8oFGd1F6RLakfd
	f15tzJt0Z6tgYM6SZ+8Ud9W5BE00CGSykzvYbEDNpvNtF0E8CRLKX27PnoeV+ooHL418
	w8U9uGIzyr/J9fNjmzRuZxsvMkLDoaoCHrsdRHFnUF87EQlDo2eCbt69DbT92aM4BUuH
	YRi18g51GmqUJy2jKrP9QY2gALL5Z9owZDNw6h6llM2PCiNqiwuKIsKIJRJk9yqLBoqD
	N1qQ==
Received: by 10.236.72.103 with SMTP id s67mr5109868yhd.78.1344087328603;
	Sat, 04 Aug 2012 06:35:28 -0700 (PDT)
Received: from [10.136.57.147] (n128-227-4-33.xlate.ufl.edu. [128.227.4.33])
	by mx.google.com with ESMTPS id v3sm10794465anm.16.2012.08.04.06.35.27
	(version=TLSv1/SSLv3 cipher=OTHER);
	Sat, 04 Aug 2012 06:35:27 -0700 (PDT)
Mime-Version: 1.0 (Apple Message framework v1278)
From: lmingcsce <lmingcsce@gmail.com>
In-Reply-To: <20120802104741.GB11437@ocelot.phlegethon.org>
Date: Sat, 4 Aug 2012 09:35:26 -0400
Message-Id: <033B5345-93F0-412C-822A-5F952694BD30@gmail.com>
References: <7E3078CC-DDFB-49C4-98A9-CC14395A41ED@gmail.com>
	<20120802104741.GB11437@ocelot.phlegethon.org>
To: Tim Deegan <tim@xen.org>
X-Mailer: Apple Mail (2.1278)
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] About revoke write access of all the shadows
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2821765996202968484=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--===============2821765996202968484==
Content-Type: multipart/alternative; boundary="Apple-Mail=_CC2561A3-98C2-4B91-8099-89C6BC9AC31A"


--Apple-Mail=_CC2561A3-98C2-4B91-8099-89C6BC9AC31A
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain;
	charset=iso-8859-1

Thanks. From the shadow_blow_tables function of the log-dirty mode
mechanism, I find it uses this approach. However, while debugging
foreach_pinned_shadow(d, sp, t), I find that all the pages I get are
L2_pae_shadow or L2h_page_shadow; there is no L1 page type.
Can you help explain why this happens? And how can I get all the L1
page types of one domain? What I want to do is to set all the shadow
tables as read-only.

Best,


On Aug 2, 2012, at 6:47 AM, Tim Deegan wrote:

>> void sh_revoke_write_access_all(struct domain *d)
>


--Apple-Mail=_CC2561A3-98C2-4B91-8099-89C6BC9AC31A--


--===============2821765996202968484==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2821765996202968484==--


From xen-devel-bounces@lists.xen.org Sat Aug 04 13:38:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Aug 2012 13:38:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxeYY-0000vO-4l; Sat, 04 Aug 2012 13:38:14 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mgorman@suse.de>) id 1SxeRl-0000l7-OA
	for xen-devel@lists.xensource.com; Sat, 04 Aug 2012 13:31:13 +0000
Received: from [85.158.143.35:55488] by server-2.bemta-4.messagelabs.com id
	4E/57-17938-1242D105; Sat, 04 Aug 2012 13:31:13 +0000
X-Env-Sender: mgorman@suse.de
X-Msg-Ref: server-14.tower-21.messagelabs.com!1344087071!16771303!1
X-Originating-IP: [195.135.220.15]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29545 invoked from network); 4 Aug 2012 13:31:12 -0000
Received: from cantor2.suse.de (HELO mx2.suse.de) (195.135.220.15)
	by server-14.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Aug 2012 13:31:12 -0000
Received: from relay2.suse.de (unknown [195.135.220.254])
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(No client certificate requested)
	by mx2.suse.de (Postfix) with ESMTP id 45AA0A30B9;
	Sat,  4 Aug 2012 15:31:10 +0200 (CEST)
Date: Sat, 4 Aug 2012 14:31:05 +0100
From: Mel Gorman <mgorman@suse.de>
To: Konrad Rzeszutek Wilk <konrad@darnok.org>
Message-ID: <20120804133105.GE29814@suse.de>
References: <20120801190227.GA13272@phenom.dumpdata.com>
	<20120803120414.GA10670@andromeda.dapyr.net>
	<20120804110355.GA17640@andromeda.dapyr.net>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20120804110355.GA17640@andromeda.dapyr.net>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Mailman-Approved-At: Sat, 04 Aug 2012 13:38:13 +0000
Cc: xen-devel@lists.xensource.com,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	linux-kernel@vger.kernel.org, Ian Campbell <Ian.Campbell@eu.citrix.com>,
	akpm@linux-foundation.org, davem@davemloft.net
Subject: Re: [Xen-devel] Regression in xen-netfront on v3.6 (git commit
 c48a11c7ad2623b99bbd6859b0b4234e7f11176f,
 netvm: propagate page->pfmemalloc to skb)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sat, Aug 04, 2012 at 07:03:55AM -0400, Konrad Rzeszutek Wilk wrote:
> On Fri, Aug 03, 2012 at 08:04:14AM -0400, Konrad Rzeszutek Wilk wrote:
> > On Wed, Aug 01, 2012 at 03:02:27PM -0400, Konrad Rzeszutek Wilk wrote:
> > > So I hadn't done a git bisection yet. But if I choose git commit:
> > > 4b24ff71108164e047cf2c95990b77651163e315
> > >     Merge tag 'for-v3.6' of git://git.infradead.org/battery-2.6
> > > 
> > >     Pull battery updates from Anton Vorontsov:
> > > 
> > > 
> > > everything works nicely. Anything past that, so these merges:
> > > 
> > > konrad@phenom:~/ssd/linux$ git log --oneline --merges 4b24ff71108164e047cf2c95990b77651163e315..linus/master
> > > 2d53492 Merge tag 'irqdomain-for-linus' of git://git.secretlab.ca/git/linux-2.6
> > ===> ac694db Merge branch 'akpm' (Andrew's patch-bomb)
> > 
> > Somewhere in there is the culprit. Hadn't done yet the full bisection
> > (was just checking out in each merge to see when it stopped working)
> 
> Mel, your:
> commit c48a11c7ad2623b99bbd6859b0b4234e7f11176f
> Author: Mel Gorman <mgorman@suse.de>
> Date:   Tue Jul 31 16:44:23 2012 -0700
> 
>     netvm: propagate page->pfmemalloc to skb
> 
> is the culprit per git bisect. Any ideas - do the drivers need to do
> some extra processing? Here is the git bisect log
> 

The problem appears to be at drivers/net/xen-netfront.c#973, where it
calls __skb_fill_page_desc(skb, 0, NULL, 0, 0). The driver does not
have to do extra processing as such, but I did not expect NULL to be
passed in like this. Can you check whether this fixes the bug, please?

---8<---
netvm: check for page == NULL when propagating the skb->pfmemalloc flag

Commit [c48a11c7: netvm: propagate page->pfmemalloc to skb] is responsible
for the following bug, triggered by the xen network driver:

[    1.908592] BUG: unable to handle kernel NULL pointer dereference at 0000000000000010
[    1.908643] IP: [<ffffffffa0037750>] xennet_poll+0x980/0xec0 [xen_netfront]
[    1.908703] PGD ea1df067 PUD e8ada067 PMD 0
[    1.908774] Oops: 0000 [#1] SMP
[    1.908797] Modules linked in: fbcon tileblit font radeon bitblit softcursor ttm drm_kms_helper crc32c_intel xen_blkfront xen_netfront xen_fbfront fb_sys_fops sysimgblt sysfillrect syscopyarea +xen_kbdfront xenfs xen_privcmd
[    1.908938] CPU 0
[    1.908950] Pid: 2165, comm: ip Not tainted 3.5.0upstream-08854-g444fa66 #1
[    1.908983] RIP: e030:[<ffffffffa0037750>]  [<ffffffffa0037750>] xennet_poll+0x980/0xec0 [xen_netfront]
[    1.909029] RSP: e02b:ffff8800ffc03db8  EFLAGS: 00010282
[    1.909055] RAX: ffff8800ea010140 RBX: ffff8800f00e86c0 RCX: 000000000000009a
[    1.909055] RDX: 0000000000000040 RSI: 000000000000005a RDI: ffff8800fa7dee80
[    1.909055] RBP: ffff8800ffc03ee8 R08: ffff8800f00e86d8 R09: ffff8800ea010000
[    1.909055] R10: dead000000200200 R11: dead000000100100 R12: ffff8800fa7dee80
[    1.909055] R13: 000000000000005a R14: ffff8800fa7dee80 R15: 0000000000000200
[    1.909055] FS:  00007fbafc188700(0000) GS:ffff8800ffc00000(0000) knlGS:0000000000000000
[    1.909055] CS:  e033 DS: 0000 ES: 0000 CR0: 000000008005003b
[    1.909055] CR2: 0000000000000010 CR3: 00000000ea108000 CR4: 0000000000002660
[    1.909055] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[    1.909055] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[    1.909055] Process ip (pid: 2165, threadinfo ffff8800ea0f2000, task ffff8800fa783040)
[    1.909055] Stack:
[    1.909055]  ffff8800e27e5040 ffff8800ffc03e88 ffff8800ffc03e68 ffff8800ffc03e48
[    1.909055]  7fffffffffffffff ffff8800ffc03e00 ffff8800e27e5040 ffff8800f00e86d8
[    1.909055]  ffff8800ffc03eb0 00000040ffffffff ffff8800f00e8000 00000000ffc03e30
[    1.909055] Call Trace:
[    1.909055]  <IRQ>
[    1.909055]  [<ffffffff81066028>] ?  pvclock_clocksource_read+0x58/0xd0
[    1.909055]  [<ffffffff81486352>] net_rx_action+0x112/0x240
[    1.909055]  [<ffffffff8107f319>] __do_softirq+0xb9/0x190
[    1.909055]  [<ffffffff815d8d7c>] call_softirq+0x1c/0x30

The problem is that the xenfront driver is passing a NULL page to
__skb_fill_page_desc() which was unexpected. This patch checks that
there is a page before dereferencing.

Signed-off-by: Mel Gorman <mgorman@suse.de>
---
 include/linux/skbuff.h |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 7632c87..8857669 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -1256,7 +1256,7 @@ static inline void __skb_fill_page_desc(struct sk_buff *skb, int i,
 	 * do not lose pfmemalloc information as the pages would not be
 	 * allocated using __GFP_MEMALLOC.
 	 */
-	if (page->pfmemalloc && !page->mapping)
+	if (page && page->pfmemalloc && !page->mapping)
 		skb->pfmemalloc	= true;
 	frag->page.p		  = page;
 	frag->page_offset	  = off;

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Aug 04 13:38:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Aug 2012 13:38:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxeYY-0000vO-4l; Sat, 04 Aug 2012 13:38:14 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mgorman@suse.de>) id 1SxeRl-0000l7-OA
	for xen-devel@lists.xensource.com; Sat, 04 Aug 2012 13:31:13 +0000
Received: from [85.158.143.35:55488] by server-2.bemta-4.messagelabs.com id
	4E/57-17938-1242D105; Sat, 04 Aug 2012 13:31:13 +0000
X-Env-Sender: mgorman@suse.de
X-Msg-Ref: server-14.tower-21.messagelabs.com!1344087071!16771303!1
X-Originating-IP: [195.135.220.15]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29545 invoked from network); 4 Aug 2012 13:31:12 -0000
Received: from cantor2.suse.de (HELO mx2.suse.de) (195.135.220.15)
	by server-14.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Aug 2012 13:31:12 -0000
Received: from relay2.suse.de (unknown [195.135.220.254])
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(No client certificate requested)
	by mx2.suse.de (Postfix) with ESMTP id 45AA0A30B9;
	Sat,  4 Aug 2012 15:31:10 +0200 (CEST)
Date: Sat, 4 Aug 2012 14:31:05 +0100
From: Mel Gorman <mgorman@suse.de>
To: Konrad Rzeszutek Wilk <konrad@darnok.org>
Message-ID: <20120804133105.GE29814@suse.de>
References: <20120801190227.GA13272@phenom.dumpdata.com>
	<20120803120414.GA10670@andromeda.dapyr.net>
	<20120804110355.GA17640@andromeda.dapyr.net>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20120804110355.GA17640@andromeda.dapyr.net>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Mailman-Approved-At: Sat, 04 Aug 2012 13:38:13 +0000
Cc: xen-devel@lists.xensource.com,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	linux-kernel@vger.kernel.org, Ian Campbell <Ian.Campbell@eu.citrix.com>,
	akpm@linux-foundation.org, davem@davemloft.net
Subject: Re: [Xen-devel] Regression in xen-netfront on v3.6 (git commit
 c48a11c7ad2623b99bbd6859b0b4234e7f11176f,
 netvm: propagate page->pfmemalloc to skb)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sat, Aug 04, 2012 at 07:03:55AM -0400, Konrad Rzeszutek Wilk wrote:
> On Fri, Aug 03, 2012 at 08:04:14AM -0400, Konrad Rzeszutek Wilk wrote:
> > On Wed, Aug 01, 2012 at 03:02:27PM -0400, Konrad Rzeszutek Wilk wrote:
> > > So I hadn't done a git bisection yet. But if I choose git commit:
> > > 4b24ff71108164e047cf2c95990b77651163e315
> > >     Merge tag 'for-v3.6' of git://git.infradead.org/battery-2.6
> > > 
> > >     Pull battery updates from Anton Vorontsov:
> > > 
> > > 
> > > everything works nicely. Anything past that, so these merges:
> > > 
> > > konrad@phenom:~/ssd/linux$ git log --oneline --merges 4b24ff71108164e047cf2c95990b77651163e315..linus/master
> > > 2d53492 Merge tag 'irqdomain-for-linus' of git://git.secretlab.ca/git/linux-2.6
> > ===> ac694db Merge branch 'akpm' (Andrew's patch-bomb)
> > 
> > Somewhere in there is the culprit. Hadn't done yet the full bisection
> > (was just checking out in each merge to see when it stopped working)
> 
> Mel, your:
> commit c48a11c7ad2623b99bbd6859b0b4234e7f11176f
> Author: Mel Gorman <mgorman@suse.de>
> Date:   Tue Jul 31 16:44:23 2012 -0700
> 
>     netvm: propagate page->pfmemalloc to skb
> 
> is the culprit per git bisect. Any ideas - do the drivers need to do
> some extra processing? Here is the git bisect log
> 

The problem appears to be at drivers/net/xen-netfront.c#973, where it
calls __skb_fill_page_desc(skb, 0, NULL, 0, 0). The driver does not
have to do extra processing as such, but I did not expect NULL to be
passed in like this. Can you check whether this fixes the bug, please?

---8<---
netvm: check for page == NULL when propagating the skb->pfmemalloc flag

Commit [c48a11c7: netvm: propagate page->pfmemalloc to skb] is responsible
for the following bug, triggered by the xen network frontend driver:

[    1.908592] BUG: unable to handle kernel NULL pointer dereference at 0000000000000010
[    1.908643] IP: [<ffffffffa0037750>] xennet_poll+0x980/0xec0 [xen_netfront]
[    1.908703] PGD ea1df067 PUD e8ada067 PMD 0
[    1.908774] Oops: 0000 [#1] SMP
[    1.908797] Modules linked in: fbcon tileblit font radeon bitblit softcursor ttm drm_kms_helper crc32c_intel xen_blkfront xen_netfront xen_fbfront fb_sys_fops sysimgblt sysfillrect syscopyarea +xen_kbdfront xenfs xen_privcmd
[    1.908938] CPU 0
[    1.908950] Pid: 2165, comm: ip Not tainted 3.5.0upstream-08854-g444fa66 #1
[    1.908983] RIP: e030:[<ffffffffa0037750>]  [<ffffffffa0037750>] xennet_poll+0x980/0xec0 [xen_netfront]
[    1.909029] RSP: e02b:ffff8800ffc03db8  EFLAGS: 00010282
[    1.909055] RAX: ffff8800ea010140 RBX: ffff8800f00e86c0 RCX: 000000000000009a
[    1.909055] RDX: 0000000000000040 RSI: 000000000000005a RDI: ffff8800fa7dee80
[    1.909055] RBP: ffff8800ffc03ee8 R08: ffff8800f00e86d8 R09: ffff8800ea010000
[    1.909055] R10: dead000000200200 R11: dead000000100100 R12: ffff8800fa7dee80
[    1.909055] R13: 000000000000005a R14: ffff8800fa7dee80 R15: 0000000000000200
[    1.909055] FS:  00007fbafc188700(0000) GS:ffff8800ffc00000(0000) knlGS:0000000000000000
[    1.909055] CS:  e033 DS: 0000 ES: 0000 CR0: 000000008005003b
[    1.909055] CR2: 0000000000000010 CR3: 00000000ea108000 CR4: 0000000000002660
[    1.909055] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[    1.909055] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[    1.909055] Process ip (pid: 2165, threadinfo ffff8800ea0f2000, task ffff8800fa783040)
[    1.909055] Stack:
[    1.909055]  ffff8800e27e5040 ffff8800ffc03e88 ffff8800ffc03e68 ffff8800ffc03e48
[    1.909055]  7fffffffffffffff ffff8800ffc03e00 ffff8800e27e5040 ffff8800f00e86d8
[    1.909055]  ffff8800ffc03eb0 00000040ffffffff ffff8800f00e8000 00000000ffc03e30
[    1.909055] Call Trace:
[    1.909055]  <IRQ>
[    1.909055]  [<ffffffff81066028>] ?  pvclock_clocksource_read+0x58/0xd0
[    1.909055]  [<ffffffff81486352>] net_rx_action+0x112/0x240
[    1.909055]  [<ffffffff8107f319>] __do_softirq+0xb9/0x190
[    1.909055]  [<ffffffff815d8d7c>] call_softirq+0x1c/0x30

The problem is that the xen-netfront driver passes a NULL page to
__skb_fill_page_desc(), which was unexpected. This patch checks that a
page is present before dereferencing it.

Signed-off-by: Mel Gorman <mgorman@suse.de>
---
 include/linux/skbuff.h |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 7632c87..8857669 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -1256,7 +1256,7 @@ static inline void __skb_fill_page_desc(struct sk_buff *skb, int i,
 	 * do not lose pfmemalloc information as the pages would not be
 	 * allocated using __GFP_MEMALLOC.
 	 */
-	if (page->pfmemalloc && !page->mapping)
+	if (page && page->pfmemalloc && !page->mapping)
 		skb->pfmemalloc	= true;
 	frag->page.p		  = page;
 	frag->page_offset	  = off;

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Aug 04 18:00:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Aug 2012 18:00:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxidU-0002Zo-Ef; Sat, 04 Aug 2012 17:59:36 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SxidT-0002Zj-Ae
	for xen-devel@lists.xensource.com; Sat, 04 Aug 2012 17:59:35 +0000
Received: from [85.158.143.99:58761] by server-2.bemta-4.messagelabs.com id
	7F/C4-17938-6036D105; Sat, 04 Aug 2012 17:59:34 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-11.tower-216.messagelabs.com!1344103173!22738360!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgxNDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20890 invoked from network); 4 Aug 2012 17:59:33 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-11.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Aug 2012 17:59:33 -0000
X-IronPort-AV: E=Sophos;i="4.77,713,1336348800"; d="scan'208";a="13851624"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Aug 2012 17:59:32 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Sat, 4 Aug 2012 18:59:32 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1SxidQ-0005Fn-G1;
	Sat, 04 Aug 2012 17:59:32 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1SxidQ-0003GK-Ad;
	Sat, 04 Aug 2012 18:59:32 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13548-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 4 Aug 2012 18:59:32 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 13548: trouble: blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13548 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13548/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386                    2 host-install(2)         broken REGR. vs. 13525
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 13525
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 13525
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 13525
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 13525
 build-amd64                   2 host-install(2)         broken REGR. vs. 13525

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  f8f8912b3de0
baseline version:
 xen                  fa34499e8f6c

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Joe Jin <joe.jin@oracle.com>
  Keir Fraser <keir@xen.org>
  Santosh Jodh <santosh.jodh@citrix.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23331:f8f8912b3de0
tag:         tip
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Fri Aug 03 10:43:24 2012 +0100
    
    nestedhvm: fix nested page fault build error on 32-bit
    
        cc1: warnings being treated as errors
        hvm.c: In function 'hvm_hap_nested_page_fault':
        hvm.c:1282: error: passing argument 2 of
        'nestedhvm_hap_nested_page_fault' from incompatible pointer type
        /local/scratch/ianc/devel/xen-unstable.hg/xen/include/asm/hvm/nestedhvm.h:55:
        note: expected 'paddr_t *' but argument is of type 'long unsigned
        int *'
    
    hvm_hap_nested_page_fault takes an unsigned long gpa and passes &gpa
    to nestedhvm_hap_nested_page_fault which takes a paddr_t *. Since both
    of the callers of hvm_hap_nested_page_fault (svm_do_nested_pgfault and
    ept_handle_violation) actually have the gpa which they pass to
    hvm_hap_nested_page_fault as a paddr_t I think it makes sense to
    change the argument to hvm_hap_nested_page_fault.
    
    The other user of gpa in hvm_hap_nested_page_fault is a call to
    p2m_mem_access_check, which currently also takes a paddr_t gpa but I
    think a paddr_t is appropriate there too.
    
    Jan points out that this is also an issue for >4GB guests on the 32
    bit hypervisor.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Committed-by: Ian Campbell <ian.campbell@citrix.com>
    xen-unstable changeset:   25724:612898732e66
    xen-unstable date:        Fri Aug 03 09:54:17 2012 +0100
    Backported-by: Keir Fraser <keir@xen.org>
    
    
changeset:   23330:5a65d6a1aab7
user:        Santosh Jodh <santosh.jodh@citrix.com>
date:        Fri Aug 03 10:39:13 2012 +0100
    
    Intel VT-d: Dump IOMMU supported page sizes
    
    Signed-off-by: Santosh Jodh <santosh.jodh@citrix.com>
    xen-unstable changeset:   25725:9ad379939b78
    xen-unstable date:        Fri Aug 03 10:38:04 2012 +0100
    
    
changeset:   23329:fa34499e8f6c
user:        Jan Beulich <jbeulich@suse.com>
date:        Mon Jul 30 13:38:58 2012 +0100
    
    x86: fix off-by-one in nr_irqs_gsi calculation
    
    highest_gsi() returns the last valid GSI, not a count.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Joe Jin <joe.jin@oracle.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset:   25688:e6266fc76d08
    xen-unstable date:        Fri Jul 27 12:22:13 2012 +0200
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Aug 04 20:15:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Aug 2012 20:15:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxkkS-0003Y6-G5; Sat, 04 Aug 2012 20:14:56 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <listmail@triad.rr.com>) id 1SxkkQ-0003Y1-95
	for xen-devel@lists.xen.org; Sat, 04 Aug 2012 20:14:54 +0000
Received: from [85.158.138.51:3925] by server-3.bemta-3.messagelabs.com id
	CA/E2-08301-DB28D105; Sat, 04 Aug 2012 20:14:53 +0000
X-Env-Sender: listmail@triad.rr.com
X-Msg-Ref: server-2.tower-174.messagelabs.com!1344111291!30427218!1
X-Originating-IP: [71.74.56.122]
X-SpamReason: No, hits=0.5 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA3MS43NC41Ni4xMjIgPT4gMzg2NDY4\n,sa_preprocessor: 
	QmFkIElQOiA3MS43NC41Ni4xMjIgPT4gMzg2NDY4\n,BODY_RANDOM_LONG,
	UPPERCASE_25_50
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18153 invoked from network); 4 Aug 2012 20:14:51 -0000
Received: from hrndva-omtalb.mail.rr.com (HELO hrndva-omtalb.mail.rr.com)
	(71.74.56.122) by server-2.tower-174.messagelabs.com with SMTP;
	4 Aug 2012 20:14:51 -0000
X-Authority-Analysis: v=2.0 cv=StQSGYy0 c=1 sm=0 a=R14c1kN7475LMi+rpQwGWw==:17
	a=Kt2980LFN-gA:10 a=kfTud4QeKxsA:10 a=TYiCTjJO1OAA:10
	a=05ChyHeVI94A:10 a=ayC55rCoAAAA:8 a=eVUyA3bpAAAA:8
	a=hzDrfXCMBYWcBzvj7jwA:9 a=pILNOxqGKmIA:10
	a=99PfMDc3TXVWNJnz4OsA:9 a=OXpUkxasB13gQJS8:21
	a=TjlKATjJw-Rc3W-2:21 a=tPZYhJ7K0OI8bZJAEaIA:9
	a=R14c1kN7475LMi+rpQwGWw==:117
X-Cloudmark-Score: 0
X-Originating-IP: 65.190.252.167
Received: from [65.190.252.167] ([65.190.252.167:56295] helo=corenix.localnet)
	by hrndva-oedge03.mail.rr.com (envelope-from <listmail@triad.rr.com>)
	(ecelerity 2.2.3.46 r()) with ESMTP
	id B2/35-00797-AB28D105; Sat, 04 Aug 2012 20:14:50 +0000
Received: by corenix.localnet (Postfix, from userid 1003)
	id 6FE1885384; Sat,  4 Aug 2012 16:14:50 -0400 (EDT)
X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on corenix.localnet
X-Spam-Level: 
X-Spam-Status: No, score=-1.0 required=4.5 tests=ALL_TRUSTED autolearn=ham
	version=3.3.1
Received: from [192.168.0.128] (gamehub.localnet [192.168.0.128])
	by corenix.localnet (Postfix) with ESMTPS id 5112E84B81
	for <xen-devel@lists.xen.org>; Sat,  4 Aug 2012 16:14:44 -0400 (EDT)
Message-ID: <501D82A9.8010805@triad.rr.com>
Date: Sat, 04 Aug 2012 16:14:33 -0400
From: Richie <listmail@triad.rr.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:14.0) Gecko/20120713 Thunderbird/14.0
MIME-Version: 1.0
To: xen-devel@lists.xen.org
Content-Type: multipart/mixed; boundary="------------020003030801030106020408"
Subject: [Xen-devel] wheezy VT-d passthrough test: DMAR:[fault reason 06h]
 PTE Read access is not set
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.
--------------020003030801030106020408
Content-Type: text/plain; charset=windows-1252; format=flowed
Content-Transfer-Encoding: 7bit

lspci -vvv and lspci -t output attached.
Serial console output: http://pastebin.ca/2177395

The main error appears to be:
     (XEN) [VT-D]iommu.c:858: iommu_fault_status: Primary Pending Fault
     (XEN) [VT-D]iommu.c:833: DMAR:[DMA Read] Request device [00:1e.0] fault addr df8e5000, iommu reg = ffff82c3fff57000
     (XEN) DMAR:[fault reason 06h] PTE Read access is not set

The onboard NIC on the same bus goes down as well, as shown in the console log.

Non-VT-d passthrough to a PV guest appears to work fine (the device shows up in the domU's lspci; I did not run a hardware test).
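For reference, the device was handed to pciback through the usual sysfs interface; a rough sketch of the dom0-side steps (the BDF 07:01.0 is the capture card from the attached lspci output, which already shows "Kernel driver in use: pciback" — the unbind step is only needed if another dom0 driver had claimed it first):

```shell
# Detach the device from whatever dom0 driver currently owns it (no-op if none):
echo 0000:07:01.0 > /sys/bus/pci/devices/0000:07:01.0/driver/unbind

# Tell pciback to accept this slot, then bind it:
echo 0000:07:01.0 > /sys/bus/pci/drivers/pciback/new_slot
echo 0000:07:01.0 > /sys/bus/pci/drivers/pciback/bind
```

The domU config then assigns it with `pci = [ '07:01.0' ]`.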





--------------020003030801030106020408
Content-Type: text/plain; charset=windows-1252;
 name="lspci.vvv.txt"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename="lspci.vvv.txt"

00:00.0 Host bridge: Intel Corporation Core Processor DMI (rev 11)
	Subsystem: ASUSTeK Computer Inc. Device 8383
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Capabilities: <access denied>

00:03.0 PCI bridge: Intel Corporation Core Processor PCI Express Root Port 1 (rev 11) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 32 bytes
	Bus: primary=00, secondary=01, subordinate=01, sec-latency=0
	I/O behind bridge: 0000b000-0000bfff
	Memory behind bridge: f3c00000-f3cfffff
	Prefetchable memory behind bridge: 00000000e0000000-00000000efffffff
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA+ MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: <access denied>
	Kernel driver in use: pcieport

00:08.0 System peripheral: Intel Corporation Core Processor System Management Registers (rev 11)
	Subsystem: Device 0043:0083
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Capabilities: <access denied>

00:08.1 System peripheral: Intel Corporation Core Processor Semaphore and Scratchpad Registers (rev 11)
	Subsystem: Device 0043:0083
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Capabilities: <access denied>

00:08.2 System peripheral: Intel Corporation Core Processor System Control and Status Registers (rev 11)
	Subsystem: Device 0043:0083
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Capabilities: <access denied>

00:08.3 System peripheral: Intel Corporation Core Processor Miscellaneous Registers (rev 11)
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-

00:10.0 System peripheral: Intel Corporation Core Processor QPI Link (rev 11)
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-

00:10.1 System peripheral: Intel Corporation Core Processor QPI Routing and Protocol Registers (rev 11)
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-

00:1a.0 USB controller: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller (rev 05) (prog-if 20 [EHCI])
	Subsystem: ASUSTeK Computer Inc. Device 8383
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0
	Interrupt: pin A routed to IRQ 16
	Region 0: Memory at f3bfe000 (32-bit, non-prefetchable) [size=1K]
	Capabilities: <access denied>
	Kernel driver in use: ehci_hcd

00:1c.0 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 1 (rev 05) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 32 bytes
	Bus: primary=00, secondary=06, subordinate=06, sec-latency=0
	I/O behind bridge: 00003000-00003fff
	Memory behind bridge: f0a00000-f0bfffff
	Prefetchable memory behind bridge: 00000000f0c00000-00000000f0dfffff
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: <access denied>
	Kernel driver in use: pcieport

00:1c.4 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 5 (rev 05) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 32 bytes
	Bus: primary=00, secondary=05, subordinate=05, sec-latency=0
	I/O behind bridge: 00002000-00002fff
	Memory behind bridge: f0600000-f07fffff
	Prefetchable memory behind bridge: 00000000f0800000-00000000f09fffff
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: <access denied>
	Kernel driver in use: pcieport

00:1c.5 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 6 (rev 05) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 32 bytes
	Bus: primary=00, secondary=04, subordinate=04, sec-latency=0
	I/O behind bridge: 00001000-00001fff
	Memory behind bridge: f0200000-f03fffff
	Prefetchable memory behind bridge: 00000000f0400000-00000000f05fffff
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: <access denied>
	Kernel driver in use: pcieport

00:1c.6 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 7 (rev 05) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 32 bytes
	Bus: primary=00, secondary=03, subordinate=03, sec-latency=0
	I/O behind bridge: 0000d000-0000dfff
	Memory behind bridge: f3e00000-f3efffff
	Prefetchable memory behind bridge: 00000000f0000000-00000000f01fffff
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: <access denied>
	Kernel driver in use: pcieport

00:1c.7 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 8 (rev 05) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 32 bytes
	Bus: primary=00, secondary=02, subordinate=02, sec-latency=0
	I/O behind bridge: 0000c000-0000cfff
	Memory behind bridge: f3d00000-f3dfffff
	Prefetchable memory behind bridge: 00000000f2f00000-00000000f2ffffff
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: <access denied>
	Kernel driver in use: pcieport

00:1d.0 USB controller: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller (rev 05) (prog-if 20 [EHCI])
	Subsystem: ASUSTeK Computer Inc. Device 8383
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0
	Interrupt: pin A routed to IRQ 23
	Region 0: Memory at f3bfd000 (32-bit, non-prefetchable) [size=1K]
	Capabilities: <access denied>
	Kernel driver in use: ehci_hcd

00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev a5) (prog-if 01 [Subtractive decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0
	Bus: primary=00, secondary=07, subordinate=07, sec-latency=32
	I/O behind bridge: 0000e000-0000efff
	Memory behind bridge: f3f00000-f7ffffff
	Prefetchable memory behind bridge: 00000000fff00000-00000000000fffff
	Secondary status: 66MHz- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort+ <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: <access denied>

00:1f.0 ISA bridge: Intel Corporation 5 Series Chipset LPC Interface Controller (rev 05)
	Subsystem: ASUSTeK Computer Inc. Device 8383
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0
	Capabilities: <access denied>

00:1f.2 SATA controller: Intel Corporation 5 Series/3400 Series Chipset 6 port SATA AHCI Controller (rev 05) (prog-if 01 [AHCI 1.0])
	Subsystem: ASUSTeK Computer Inc. Device 8383
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz+ UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0
	Interrupt: pin D routed to IRQ 315
	Region 0: I/O ports at a880 [size=8]
	Region 1: I/O ports at a800 [size=4]
	Region 2: I/O ports at a480 [size=8]
	Region 3: I/O ports at a400 [size=4]
	Region 4: I/O ports at a080 [size=32]
	Region 5: Memory at f3bfb000 (32-bit, non-prefetchable) [size=2K]
	Capabilities: <access denied>
	Kernel driver in use: ahci

00:1f.3 SMBus: Intel Corporation 5 Series/3400 Series Chipset SMBus Controller (rev 05)
	Subsystem: ASUSTeK Computer Inc. Device 8383
	Control: I/O+ Mem+ BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Interrupt: pin C routed to IRQ 18
	Region 0: Memory at f3bfc000 (64-bit, non-prefetchable) [size=256]
	Region 4: I/O ports at ffe0 [size=32]
	Kernel driver in use: i801_smbus

01:00.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI R580 [Radeon X1900 XT] (Primary) (prog-if 00 [VGA controller])
	Subsystem: Advanced Micro Devices [AMD] nee ATI Device 0b12
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 32 bytes
	Interrupt: pin A routed to IRQ 316
	Region 0: Memory at e0000000 (64-bit, prefetchable) [size=256M]
	Region 2: Memory at f3ce0000 (64-bit, non-prefetchable) [size=64K]
	Region 4: I/O ports at b000 [size=256]
	Expansion ROM at f3cc0000 [disabled] [size=128K]
	Capabilities: <access denied>
	Kernel driver in use: radeon

01:00.1 Display controller: Advanced Micro Devices [AMD] nee ATI R580 [Radeon X1900 XT] (Secondary)
	Subsystem: Advanced Micro Devices [AMD] nee ATI Device 0b13
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 32 bytes
	Region 0: Memory at f3cf0000 (64-bit, non-prefetchable) [size=64K]
	Capabilities: <access denied>

02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 03)
	Subsystem: ASUSTeK Computer Inc. M4A785TD Motherboard
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 32 bytes
	Interrupt: pin A routed to IRQ 314
	Region 0: I/O ports at c800 [size=256]
	Region 2: Memory at f2fff000 (64-bit, prefetchable) [size=4K]
	Region 4: Memory at f2ff8000 (64-bit, prefetchable) [size=16K]
	Expansion ROM at f3df0000 [disabled] [size=64K]
	Capabilities: <access denied>
	Kernel driver in use: r8169

03:00.0 SATA controller: JMicron Technology Corp. JMB363 SATA/IDE Controller (rev 03) (prog-if 01 [AHCI 1.0])
	Subsystem: ASUSTeK Computer Inc. Device 824f
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 32 bytes
	Interrupt: pin A routed to IRQ 18
	Region 5: Memory at f3efe000 (32-bit, non-prefetchable) [size=8K]
	Capabilities: <access denied>
	Kernel driver in use: ahci

03:00.1 IDE interface: JMicron Technology Corp. JMB363 SATA/IDE Controller (rev 03) (prog-if 85 [Master SecO PriO])
	Subsystem: ASUSTeK Computer Inc. Device 824f
	Control: I/O+ Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0
	Interrupt: pin B routed to IRQ 19
	Region 0: I/O ports at dc00 [size=8]
	Region 1: I/O ports at d880 [size=4]
	Region 2: I/O ports at d800 [size=8]
	Region 3: I/O ports at d480 [size=4]
	Region 4: I/O ports at d400 [size=16]
	Capabilities: <access denied>
	Kernel driver in use: pata_jmicron

07:01.0 Multimedia video controller: Conexant Systems, Inc. CX23418 Single-Chip MPEG-2 Encoder with Integrated Analog Video/Broadcast Audio Decoder
	Subsystem: Hauppauge computer works Inc. WinTV HVR-1600
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Interrupt: pin A routed to IRQ 16
	Region 0: Memory at f4000000 (32-bit, non-prefetchable) [disabled] [size=64M]
	Capabilities: <access denied>
	Kernel driver in use: pciback

07:02.0 Serial controller: NetMos Technology PCI 9865 Multi-I/O Controller (prog-if 02 [16550])
	Subsystem: Device a000:1000
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 64, Cache Line Size: 32 bytes
	Interrupt: pin A routed to IRQ 17
	Region 0: I/O ports at e480 [size=8]
	Region 1: Memory at f3ffb000 (32-bit, non-prefetchable) [size=4K]
	Region 4: Memory at f3ffa000 (32-bit, non-prefetchable) [size=4K]
	Capabilities: <access denied>
	Kernel driver in use: serial

07:02.1 Serial controller: NetMos Technology PCI 9865 Multi-I/O Controller (prog-if 02 [16550])
	Subsystem: Device a000:1000
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 64, Cache Line Size: 32 bytes
	Interrupt: pin B routed to IRQ 18
	Region 0: I/O ports at e800 [size=8]
	Region 1: Memory at f3ffd000 (32-bit, non-prefetchable) [size=4K]
	Region 4: Memory at f3ffc000 (32-bit, non-prefetchable) [size=4K]
	Capabilities: <access denied>
	Kernel driver in use: serial

07:02.2 Parallel controller: Illegal Vendor ID Device 9865 (prog-if 03 [IEEE1284])
	Subsystem: Device a000:2000
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 64, Cache Line Size: 32 bytes
	Interrupt: pin C routed to IRQ 11
	Region 0: I/O ports at ec00 [size=8]
	Region 1: I/O ports at e880 [size=8]
	Region 2: Memory at f3fff000 (32-bit, non-prefetchable) [size=4K]
	Region 4: Memory at f3ffe000 (32-bit, non-prefetchable) [size=4K]
	Capabilities: <access denied>

07:04.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8110SC/8169SC Gigabit Ethernet (rev 10)
	Subsystem: ASUSTeK Computer Inc. Device 820d
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV+ VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx-
	Status: Cap+ 66MHz+ UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 64 (8000ns min, 16000ns max), Cache Line Size: 32 bytes
	Interrupt: pin A routed to IRQ 19
	Region 0: I/O ports at e000 [size=256]
	Region 1: Memory at f3ff9000 (32-bit, non-prefetchable) [size=256]
	Expansion ROM at f3fc0000 [disabled] [size=128K]
	Capabilities: <access denied>
	Kernel driver in use: r8169

3f:00.0 Host bridge: Intel Corporation Core Processor QuickPath Architecture Generic Non-Core Registers (rev 04)
	Subsystem: Intel Corporation Device 8086
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0

3f:00.1 Host bridge: Intel Corporation Core Processor QuickPath Architecture System Address Decoder (rev 04)
	Subsystem: Intel Corporation Device 8086
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0

3f:02.0 Host bridge: Intel Corporation Core Processor QPI Link 0 (rev 04)
	Subsystem: Intel Corporation Device 8086
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0
	Kernel driver in use: i7core_edac

3f:02.1 Host bridge: Intel Corporation Core Processor QPI Physical 0 (rev 04)
	Subsystem: Intel Corporation Device 8086
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0

3f:03.0 Host bridge: Intel Corporation Core Processor Integrated Memory Controller (rev 04)
	Subsystem: Intel Corporation Device 8086
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0

3f:03.1 Host bridge: Intel Corporation Core Processor Integrated Memory Controller Target Address Decoder (rev 04)
	Subsystem: Intel Corporation Device 8086
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0

3f:03.4 Host bridge: Intel Corporation Core Processor Integrated Memory Controller Test Registers (rev 04)
	Subsystem: Intel Corporation Device 8086
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0

3f:04.0 Host bridge: Intel Corporation Core Processor Integrated Memory Controller Channel 0 Control Registers (rev 04)
	Subsystem: Intel Corporation Device 8086
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0

3f:04.1 Host bridge: Intel Corporation Core Processor Integrated Memory Controller Channel 0 Address Registers (rev 04)
	Subsystem: Intel Corporation Device 8086
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0

3f:04.2 Host bridge: Intel Corporation Core Processor Integrated Memory Controller Channel 0 Rank Registers (rev 04)
	Subsystem: Intel Corporation Device 8086
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0

3f:04.3 Host bridge: Intel Corporation Core Processor Integrated Memory Controller Channel 0 Thermal Control Registers (rev 04)
	Subsystem: Intel Corporation Device 8086
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0

3f:05.0 Host bridge: Intel Corporation Core Processor Integrated Memory Controller Channel 1 Control Registers (rev 04)
	Subsystem: Intel Corporation Device 8086
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0

3f:05.1 Host bridge: Intel Corporation Core Processor Integrated Memory Controller Channel 1 Address Registers (rev 04)
	Subsystem: Intel Corporation Device 8086
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0

3f:05.2 Host bridge: Intel Corporation Core Processor Integrated Memory Controller Channel 1 Rank Registers (rev 04)
	Subsystem: Intel Corporation Device 8086
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0

3f:05.3 Host bridge: Intel Corporation Core Processor Integrated Memory Controller Channel 1 Thermal Control Registers (rev 04)
	Subsystem: Intel Corporation Device 8086
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0



--------------020003030801030106020408
Content-Type: text/plain; charset=windows-1252;
 name="lspci.t.txt"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename="lspci.t.txt"

-+-[0000:3f]-+-00.0
 |           +-00.1
 |           +-02.0
 |           +-02.1
 |           +-03.0
 |           +-03.1
 |           +-03.4
 |           +-04.0
 |           +-04.1
 |           +-04.2
 |           +-04.3
 |           +-05.0
 |           +-05.1
 |           +-05.2
 |           \-05.3
 \-[0000:00]-+-00.0
             +-03.0-[01]--+-00.0
             |            \-00.1
             +-08.0
             +-08.1
             +-08.2
             +-08.3
             +-10.0
             +-10.1
             +-1a.0
             +-1c.0-[06]--
             +-1c.4-[05]--
             +-1c.5-[04]--
             +-1c.6-[03]--+-00.0
             |            \-00.1
             +-1c.7-[02]----00.0
             +-1d.0
             +-1e.0-[07]--+-01.0
             |            +-02.0
             |            +-02.1
             |            +-02.2
             |            \-04.0
             +-1f.0
             +-1f.2
             \-1f.3

--------------020003030801030106020408
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--------------020003030801030106020408--


	Capabilities: <access denied>

00:08.3 System peripheral: Intel Corporation Core Processor Miscellaneous Registers (rev 11)
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-

00:10.0 System peripheral: Intel Corporation Core Processor QPI Link (rev 11)
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-

00:10.1 System peripheral: Intel Corporation Core Processor QPI Routing and Protocol Registers (rev 11)
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-

00:1a.0 USB controller: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller (rev 05) (prog-if 20 [EHCI])
	Subsystem: ASUSTeK Computer Inc. Device 8383
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0
	Interrupt: pin A routed to IRQ 16
	Region 0: Memory at f3bfe000 (32-bit, non-prefetchable) [size=1K]
	Capabilities: <access denied>
	Kernel driver in use: ehci_hcd

00:1c.0 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 1 (rev 05) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 32 bytes
	Bus: primary=00, secondary=06, subordinate=06, sec-latency=0
	I/O behind bridge: 00003000-00003fff
	Memory behind bridge: f0a00000-f0bfffff
	Prefetchable memory behind bridge: 00000000f0c00000-00000000f0dfffff
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: <access denied>
	Kernel driver in use: pcieport

00:1c.4 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 5 (rev 05) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 32 bytes
	Bus: primary=00, secondary=05, subordinate=05, sec-latency=0
	I/O behind bridge: 00002000-00002fff
	Memory behind bridge: f0600000-f07fffff
	Prefetchable memory behind bridge: 00000000f0800000-00000000f09fffff
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: <access denied>
	Kernel driver in use: pcieport

00:1c.5 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 6 (rev 05) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 32 bytes
	Bus: primary=00, secondary=04, subordinate=04, sec-latency=0
	I/O behind bridge: 00001000-00001fff
	Memory behind bridge: f0200000-f03fffff
	Prefetchable memory behind bridge: 00000000f0400000-00000000f05fffff
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: <access denied>
	Kernel driver in use: pcieport

00:1c.6 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 7 (rev 05) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 32 bytes
	Bus: primary=00, secondary=03, subordinate=03, sec-latency=0
	I/O behind bridge: 0000d000-0000dfff
	Memory behind bridge: f3e00000-f3efffff
	Prefetchable memory behind bridge: 00000000f0000000-00000000f01fffff
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: <access denied>
	Kernel driver in use: pcieport

00:1c.7 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 8 (rev 05) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 32 bytes
	Bus: primary=00, secondary=02, subordinate=02, sec-latency=0
	I/O behind bridge: 0000c000-0000cfff
	Memory behind bridge: f3d00000-f3dfffff
	Prefetchable memory behind bridge: 00000000f2f00000-00000000f2ffffff
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: <access denied>
	Kernel driver in use: pcieport

00:1d.0 USB controller: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller (rev 05) (prog-if 20 [EHCI])
	Subsystem: ASUSTeK Computer Inc. Device 8383
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0
	Interrupt: pin A routed to IRQ 23
	Region 0: Memory at f3bfd000 (32-bit, non-prefetchable) [size=1K]
	Capabilities: <access denied>
	Kernel driver in use: ehci_hcd

00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev a5) (prog-if 01 [Subtractive decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0
	Bus: primary=00, secondary=07, subordinate=07, sec-latency=32
	I/O behind bridge: 0000e000-0000efff
	Memory behind bridge: f3f00000-f7ffffff
	Prefetchable memory behind bridge: 00000000fff00000-00000000000fffff
	Secondary status: 66MHz- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort+ <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: <access denied>

00:1f.0 ISA bridge: Intel Corporation 5 Series Chipset LPC Interface Controller (rev 05)
	Subsystem: ASUSTeK Computer Inc. Device 8383
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0
	Capabilities: <access denied>

00:1f.2 SATA controller: Intel Corporation 5 Series/3400 Series Chipset 6 port SATA AHCI Controller (rev 05) (prog-if 01 [AHCI 1.0])
	Subsystem: ASUSTeK Computer Inc. Device 8383
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz+ UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0
	Interrupt: pin D routed to IRQ 315
	Region 0: I/O ports at a880 [size=8]
	Region 1: I/O ports at a800 [size=4]
	Region 2: I/O ports at a480 [size=8]
	Region 3: I/O ports at a400 [size=4]
	Region 4: I/O ports at a080 [size=32]
	Region 5: Memory at f3bfb000 (32-bit, non-prefetchable) [size=2K]
	Capabilities: <access denied>
	Kernel driver in use: ahci

00:1f.3 SMBus: Intel Corporation 5 Series/3400 Series Chipset SMBus Controller (rev 05)
	Subsystem: ASUSTeK Computer Inc. Device 8383
	Control: I/O+ Mem+ BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Interrupt: pin C routed to IRQ 18
	Region 0: Memory at f3bfc000 (64-bit, non-prefetchable) [size=256]
	Region 4: I/O ports at ffe0 [size=32]
	Kernel driver in use: i801_smbus

01:00.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI R580 [Radeon X1900 XT] (Primary) (prog-if 00 [VGA controller])
	Subsystem: Advanced Micro Devices [AMD] nee ATI Device 0b12
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 32 bytes
	Interrupt: pin A routed to IRQ 316
	Region 0: Memory at e0000000 (64-bit, prefetchable) [size=256M]
	Region 2: Memory at f3ce0000 (64-bit, non-prefetchable) [size=64K]
	Region 4: I/O ports at b000 [size=256]
	Expansion ROM at f3cc0000 [disabled] [size=128K]
	Capabilities: <access denied>
	Kernel driver in use: radeon

01:00.1 Display controller: Advanced Micro Devices [AMD] nee ATI R580 [Radeon X1900 XT] (Secondary)
	Subsystem: Advanced Micro Devices [AMD] nee ATI Device 0b13
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 32 bytes
	Region 0: Memory at f3cf0000 (64-bit, non-prefetchable) [size=64K]
	Capabilities: <access denied>

02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 03)
	Subsystem: ASUSTeK Computer Inc. M4A785TD Motherboard
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 32 bytes
	Interrupt: pin A routed to IRQ 314
	Region 0: I/O ports at c800 [size=256]
	Region 2: Memory at f2fff000 (64-bit, prefetchable) [size=4K]
	Region 4: Memory at f2ff8000 (64-bit, prefetchable) [size=16K]
	Expansion ROM at f3df0000 [disabled] [size=64K]
	Capabilities: <access denied>
	Kernel driver in use: r8169

03:00.0 SATA controller: JMicron Technology Corp. JMB363 SATA/IDE Controller (rev 03) (prog-if 01 [AHCI 1.0])
	Subsystem: ASUSTeK Computer Inc. Device 824f
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 32 bytes
	Interrupt: pin A routed to IRQ 18
	Region 5: Memory at f3efe000 (32-bit, non-prefetchable) [size=8K]
	Capabilities: <access denied>
	Kernel driver in use: ahci

03:00.1 IDE interface: JMicron Technology Corp. JMB363 SATA/IDE Controller (rev 03) (prog-if 85 [Master SecO PriO])
	Subsystem: ASUSTeK Computer Inc. Device 824f
	Control: I/O+ Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0
	Interrupt: pin B routed to IRQ 19
	Region 0: I/O ports at dc00 [size=8]
	Region 1: I/O ports at d880 [size=4]
	Region 2: I/O ports at d800 [size=8]
	Region 3: I/O ports at d480 [size=4]
	Region 4: I/O ports at d400 [size=16]
	Capabilities: <access denied>
	Kernel driver in use: pata_jmicron

07:01.0 Multimedia video controller: Conexant Systems, Inc. CX23418 Single-Chip MPEG-2 Encoder with Integrated Analog Video/Broadcast Audio Decoder
	Subsystem: Hauppauge computer works Inc. WinTV HVR-1600
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Interrupt: pin A routed to IRQ 16
	Region 0: Memory at f4000000 (32-bit, non-prefetchable) [disabled] [size=64M]
	Capabilities: <access denied>
	Kernel driver in use: pciback

07:02.0 Serial controller: NetMos Technology PCI 9865 Multi-I/O Controller (prog-if 02 [16550])
	Subsystem: Device a000:1000
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 64, Cache Line Size: 32 bytes
	Interrupt: pin A routed to IRQ 17
	Region 0: I/O ports at e480 [size=8]
	Region 1: Memory at f3ffb000 (32-bit, non-prefetchable) [size=4K]
	Region 4: Memory at f3ffa000 (32-bit, non-prefetchable) [size=4K]
	Capabilities: <access denied>
	Kernel driver in use: serial

07:02.1 Serial controller: NetMos Technology PCI 9865 Multi-I/O Controller (prog-if 02 [16550])
	Subsystem: Device a000:1000
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 64, Cache Line Size: 32 bytes
	Interrupt: pin B routed to IRQ 18
	Region 0: I/O ports at e800 [size=8]
	Region 1: Memory at f3ffd000 (32-bit, non-prefetchable) [size=4K]
	Region 4: Memory at f3ffc000 (32-bit, non-prefetchable) [size=4K]
	Capabilities: <access denied>
	Kernel driver in use: serial

07:02.2 Parallel controller: Illegal Vendor ID Device 9865 (prog-if 03 [IEEE1284])
	Subsystem: Device a000:2000
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 64, Cache Line Size: 32 bytes
	Interrupt: pin C routed to IRQ 11
	Region 0: I/O ports at ec00 [size=8]
	Region 1: I/O ports at e880 [size=8]
	Region 2: Memory at f3fff000 (32-bit, non-prefetchable) [size=4K]
	Region 4: Memory at f3ffe000 (32-bit, non-prefetchable) [size=4K]
	Capabilities: <access denied>

07:04.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8110SC/8169SC Gigabit Ethernet (rev 10)
	Subsystem: ASUSTeK Computer Inc. Device 820d
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV+ VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx-
	Status: Cap+ 66MHz+ UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 64 (8000ns min, 16000ns max), Cache Line Size: 32 bytes
	Interrupt: pin A routed to IRQ 19
	Region 0: I/O ports at e000 [size=256]
	Region 1: Memory at f3ff9000 (32-bit, non-prefetchable) [size=256]
	Expansion ROM at f3fc0000 [disabled] [size=128K]
	Capabilities: <access denied>
	Kernel driver in use: r8169

3f:00.0 Host bridge: Intel Corporation Core Processor QuickPath Architecture Generic Non-Core Registers (rev 04)
	Subsystem: Intel Corporation Device 8086
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0

3f:00.1 Host bridge: Intel Corporation Core Processor QuickPath Architecture System Address Decoder (rev 04)
	Subsystem: Intel Corporation Device 8086
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0

3f:02.0 Host bridge: Intel Corporation Core Processor QPI Link 0 (rev 04)
	Subsystem: Intel Corporation Device 8086
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0
	Kernel driver in use: i7core_edac

3f:02.1 Host bridge: Intel Corporation Core Processor QPI Physical 0 (rev 04)
	Subsystem: Intel Corporation Device 8086
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0

3f:03.0 Host bridge: Intel Corporation Core Processor Integrated Memory Controller (rev 04)
	Subsystem: Intel Corporation Device 8086
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0

3f:03.1 Host bridge: Intel Corporation Core Processor Integrated Memory Controller Target Address Decoder (rev 04)
	Subsystem: Intel Corporation Device 8086
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0

3f:03.4 Host bridge: Intel Corporation Core Processor Integrated Memory Controller Test Registers (rev 04)
	Subsystem: Intel Corporation Device 8086
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0

3f:04.0 Host bridge: Intel Corporation Core Processor Integrated Memory Controller Channel 0 Control Registers (rev 04)
	Subsystem: Intel Corporation Device 8086
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0

3f:04.1 Host bridge: Intel Corporation Core Processor Integrated Memory Controller Channel 0 Address Registers (rev 04)
	Subsystem: Intel Corporation Device 8086
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0

3f:04.2 Host bridge: Intel Corporation Core Processor Integrated Memory Controller Channel 0 Rank Registers (rev 04)
	Subsystem: Intel Corporation Device 8086
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0

3f:04.3 Host bridge: Intel Corporation Core Processor Integrated Memory Controller Channel 0 Thermal Control Registers (rev 04)
	Subsystem: Intel Corporation Device 8086
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0

3f:05.0 Host bridge: Intel Corporation Core Processor Integrated Memory Controller Channel 1 Control Registers (rev 04)
	Subsystem: Intel Corporation Device 8086
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0

3f:05.1 Host bridge: Intel Corporation Core Processor Integrated Memory Controller Channel 1 Address Registers (rev 04)
	Subsystem: Intel Corporation Device 8086
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0

3f:05.2 Host bridge: Intel Corporation Core Processor Integrated Memory Controller Channel 1 Rank Registers (rev 04)
	Subsystem: Intel Corporation Device 8086
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0

3f:05.3 Host bridge: Intel Corporation Core Processor Integrated Memory Controller Channel 1 Thermal Control Registers (rev 04)
	Subsystem: Intel Corporation Device 8086
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0



--------------020003030801030106020408
Content-Type: text/plain; charset=windows-1252;
 name="lspci.t.txt"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename="lspci.t.txt"

-+-[0000:3f]-+-00.0
 |           +-00.1
 |           +-02.0
 |           +-02.1
 |           +-03.0
 |           +-03.1
 |           +-03.4
 |           +-04.0
 |           +-04.1
 |           +-04.2
 |           +-04.3
 |           +-05.0
 |           +-05.1
 |           +-05.2
 |           \-05.3
 \-[0000:00]-+-00.0
             +-03.0-[01]--+-00.0
             |            \-00.1
             +-08.0
             +-08.1
             +-08.2
             +-08.3
             +-10.0
             +-10.1
             +-1a.0
             +-1c.0-[06]--
             +-1c.4-[05]--
             +-1c.5-[04]--
             +-1c.6-[03]--+-00.0
             |            \-00.1
             +-1c.7-[02]----00.0
             +-1d.0
             +-1e.0-[07]--+-01.0
             |            +-02.0
             |            +-02.1
             |            +-02.2
             |            \-04.0
             +-1f.0
             +-1f.2
             \-1f.3

--------------020003030801030106020408
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--------------020003030801030106020408--


From xen-devel-bounces@lists.xen.org Sat Aug 04 20:38:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Aug 2012 20:38:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sxl6v-0003jS-Im; Sat, 04 Aug 2012 20:38:09 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <listmail@triad.rr.com>) id 1Sxl6t-0003jM-5U
	for xen-devel@lists.xen.org; Sat, 04 Aug 2012 20:38:07 +0000
Received: from [85.158.143.35:22080] by server-2.bemta-4.messagelabs.com id
	E2/32-17938-E288D105; Sat, 04 Aug 2012 20:38:06 +0000
X-Env-Sender: listmail@triad.rr.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1344112683!18674592!1
X-Originating-IP: [71.74.56.122]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA3MS43NC41Ni4xMjIgPT4gMzg2NDY4\n,sa_preprocessor: 
	QmFkIElQOiA3MS43NC41Ni4xMjIgPT4gMzg2NDY4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27130 invoked from network); 4 Aug 2012 20:38:04 -0000
Received: from hrndva-omtalb.mail.rr.com (HELO hrndva-omtalb.mail.rr.com)
	(71.74.56.122) by server-6.tower-21.messagelabs.com with SMTP;
	4 Aug 2012 20:38:04 -0000
X-Authority-Analysis: v=2.0 cv=ZuBv2qHG c=1 sm=0 a=R14c1kN7475LMi+rpQwGWw==:17
	a=Kt2980LFN-gA:10 a=kfTud4QeKxsA:10 a=mIU5gKuZvJwA:10
	a=05ChyHeVI94A:10 a=ayC55rCoAAAA:8 a=W2ASNwKqUNy46prrTFoA:9
	a=wPNLvfGTeEIA:10 a=AlD6xeB1mtd1I4UeoXAA:9 a=pILNOxqGKmIA:10
	a=oYszMzJgK7c-UBrM:21 a=5OftHOs5r1-k-HS1:21
	a=sYPjotMp4jqWD35iaBoA:9 a=o6KnOFfG3bstj1pE:21
	a=wlazF4Mc5GnPQm6M:21 a=IxzCleGfHC1At5Wr9AAA:9
	a=mzt647ctkWIfzJwtBMMA:9 a=R14c1kN7475LMi+rpQwGWw==:117
X-Cloudmark-Score: 0
X-Originating-IP: 65.190.252.167
Received: from [65.190.252.167] ([65.190.252.167:57898] helo=corenix.localnet)
	by hrndva-oedge04.mail.rr.com (envelope-from <listmail@triad.rr.com>)
	(ecelerity 2.2.3.46 r()) with ESMTP
	id C6/C3-21135-B288D105; Sat, 04 Aug 2012 20:38:03 +0000
Received: by corenix.localnet (Postfix, from userid 1003)
	id 0E9B985384; Sat,  4 Aug 2012 16:38:03 -0400 (EDT)
X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on corenix.localnet
X-Spam-Level: 
X-Spam-Status: No, score=-1.0 required=4.5 tests=ALL_TRUSTED autolearn=ham
	version=3.3.1
Received: from [192.168.0.128] (gamehub.localnet [192.168.0.128])
	by corenix.localnet (Postfix) with ESMTPS id 0362184B81
	for <xen-devel@lists.xen.org>; Sat,  4 Aug 2012 16:38:01 -0400 (EDT)
Message-ID: <501D881E.2020402@triad.rr.com>
Date: Sat, 04 Aug 2012 16:37:50 -0400
From: Richie <listmail@triad.rr.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:14.0) Gecko/20120713 Thunderbird/14.0
MIME-Version: 1.0
To: xen-devel@lists.xen.org
References: <501D82A9.8010805@triad.rr.com>
In-Reply-To: <501D82A9.8010805@triad.rr.com>
Content-Type: multipart/mixed; boundary="------------020603040908070100040103"
Subject: Re: [Xen-devel] wheezy VT-d passthrough test: DMAR:[fault reason
 06h] PTE Read access is not set
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.
--------------020603040908070100040103
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit

Additional information:

I just checked the HVM guest and saw that the passthrough actually succeeds: 
Windows wants to install a driver for the tuner.  So the IOMMU fault and 
onboard NIC issues remain, but the serial card, which is on the same bus, 
seems unaffected.

More logs are attached.
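For what it's worth, the "same bus" relationships are easy to check mechanically from the slot IDs in the attached lspci output. A small sketch (the slot list is copied from the attachment; the grouping helper is my own):

```python
from collections import defaultdict

# Slot IDs as they appear in the attached lspci output; bus 07 is the
# legacy PCI bus behind the 00:1e.0 bridge that reports the DMAR fault.
slots = ["07:01.0", "07:02.0", "07:02.1", "07:02.2", "07:04.0",
         "02:00.0", "03:00.0", "03:00.1"]

def by_bus(slots):
    """Group BDF slot strings ("bus:device.function") by their bus number."""
    buses = defaultdict(list)
    for s in slots:
        bus, _, _ = s.partition(":")
        buses[bus].append(s)
    return dict(buses)

# Every function sharing the bus with the 07:01.0 tuner: the serial/parallel
# card (07:02.x) and the onboard RTL8169 NIC (07:04.0).
print(by_bus(slots)["07"])
```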

--------------020603040908070100040103
Content-Type: text/plain; charset=windows-1252;
 name="qemu-dm-winxpbravo.log"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename="qemu-dm-winxpbravo.log"

domid: 1
-c config qemu network with xen bridge for 
vif1.0-emu br0
Using file /dev/mainvg/winxpbravo in read-write mode
Using file /DATA/xpua.iso in read-only mode
Watching /local/domain/0/device-model/1/logdirty/cmd
Watching /local/domain/0/device-model/1/command
Watching /local/domain/1/cpu
char device redirected to /dev/pts/1
qemu_map_cache_init nr_buckets = 10000 size 4194304
shared page at pfn feffd
buffered io page at pfn feffb
Guest uuid = 24648f2d-a378-8d17-0aa5-2e494641dea2
Time offset set 0
populating video RAM at ff000000
mapping video RAM from ff000000
Register xen platform.
Done register platform.
platform_fixed_ioport: changed ro/rw state of ROM memory area. now is rw state.
xs_read(/local/domain/0/device-model/1/xen_extended_power_mgmt): read error
xs_read(): vncpasswd get error. /vm/24648f2d-a378-8d17-0aa5-2e494641dea2/vncpasswd.
medium change watch on `hdc' (index: 1): /DATA/xpua.iso
I/O request not ready: 0, ptr: 0, port: 0, data: 0, count: 0, size: 0
Log-dirty: no command yet.
vcpu-set: watch node error.
xs_read(/local/domain/1/log-throttling): read error
qemu: ignoring not-understood drive `/local/domain/1/log-throttling'
medium change watch on `/local/domain/1/log-throttling' - unknown device, ignored
dm-command: hot insert pass-through pci dev 
register_real_device: Assigning real physical device 07:01.0 ...
register_real_device: Enable MSI translation via per device option
register_real_device: Disable power management
pt_iomul_init: Error: pt_iomul_init can't open file /dev/xen/pci_iomul: No such file or directory: 0x7:0x1.0x0
pt_register_regions: IO region registered (size=0x04000000 base_addr=0xf4000000)
pci_intx: intx=1
register_real_device: Real physical device 07:01.0 registered successfuly!
IRQ type = INTx
char device redirected to /dev/pts/2
xen be: console-0: xen be: console-0: initialise() failed
initialise() failed
xen be: console-0: xen be: console-0: initialise() failed
initialise() failed
xen be: console-0: xen be: console-0: initialise() failed
initialise() failed
xen be: console-0: xen be: console-0: initialise() failed
initialise() failed
xen be: console-0: xen be: console-0: initialise() failed
initialise() failed
xen be: console-0: xen be: console-0: initialise() failed
initialise() failed
xen be: console-0: xen be: console-0: initialise() failed
initialise() failed
xen be: console-0: xen be: console-0: initialise() failed
initialise() failed
xen be: console-0: xen be: console-0: initialise() failed
initialise() failed
xen be: console-0: xen be: console-0: initialise() failed
initialise() failed
xen be: console-0: xen be: console-0: initialise() failed
initialise() failed
xen be: console-0: xen be: console-0: initialise() failed
initialise() failed
pt_iomem_map: e_phys=f0000000 maddr=f4000000 type=0 len=67108864 index=0 first_map=1
cirrus vga map change while on lfb mode
mapping vram to f4000000 - f4400000
platform_fixed_ioport: changed ro/rw state of ROM memory area. now is rw state.
platform_fixed_ioport: changed ro/rw state of ROM memory area. now is ro state.
pt_iomem_map: e_phys=ffffffff maddr=f4000000 type=0 len=67108864 index=0 first_map=0
pt_pci_write_config: Warning: Guest attempt to set address to unused Base Address Register. [00:05.0][Offset:30h][Length:4]
pt_iomem_map: e_phys=f0000000 maddr=f4000000 type=0 len=67108864 index=0 first_map=0
shutdown requested in cpu_handle_ioreq
Issued domain 1 poweroff

--------------020603040908070100040103
Content-Type: text/plain; charset=windows-1252;
 name="xend.log"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename="xend.log"

[2012-08-04 12:22:09 4661] INFO (SrvDaemon:332) Xend Daemon started
[2012-08-04 12:22:09 4661] INFO (SrvDaemon:336) Xend changeset: unavailable.
[2012-08-04 12:22:09 4661] DEBUG (XendNode:332) pscsi record count: 16
[2012-08-04 12:22:09 4661] DEBUG (XendCPUPool:747) recreate_active_pools
[2012-08-04 12:22:09 4661] DEBUG (XendDomainInfo:151) XendDomainInfo.recreate({'max_vcpu_id': 7, 'cpu_time': 137078680947L, 'ssidref': 0, 'hvm': 0, 'shutdown_reason': 255, 'dying': 0, 'online_vcpus': 8, 'domid': 0, 'paused': 0, 'crashed': 0, 'running': 1, 'maxmem_kb': 17179869180L, 'shutdown': 0, 'mem_kb': 2096776L, 'blocked': 0, 'handle': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'cpupool': 0, 'name': 'Domain-0'})
[2012-08-04 12:22:09 4661] INFO (XendDomainInfo:169) Recreating domain 0, UUID 00000000-0000-0000-0000-000000000000. at /local/domain/0
[2012-08-04 12:22:09 4661] DEBUG (XendDomain:476) Adding Domain: 0
[2012-08-04 12:22:09 4661] DEBUG (XendDomainInfo:1881) XendDomainInfo.handleShutdownWatch
[2012-08-04 12:22:09 4661] DEBUG (XendDomain:410) number of vcpus to use is 0
[2012-08-04 12:22:10 4661] WARNING (XendAPI:708) API call: VBD.set_device not found
[2012-08-04 12:22:10 4661] WARNING (XendAPI:708) API call: VBD.set_type not found
[2012-08-04 12:22:10 4661] WARNING (XendAPI:708) API call: session.get_all_records not found
[2012-08-04 12:22:10 4661] WARNING (XendAPI:708) API call: event.get_record not found
[2012-08-04 12:22:10 4661] WARNING (XendAPI:708) API call: event.get_all not found
[2012-08-04 12:22:10 4661] WARNING (XendAPI:708) API call: VIF.set_device not found
[2012-08-04 12:22:10 4661] WARNING (XendAPI:708) API call: VIF.set_MAC not found
[2012-08-04 12:22:10 4661] WARNING (XendAPI:708) API call: VIF.set_MTU not found
[2012-08-04 12:22:10 4661] WARNING (XendAPI:708) API call: debug.get_all not found
[2012-08-04 12:22:10 4661] INFO (XMLRPCServer:161) Opening Unix domain socket XML-RPC server on /var/run/xend/xen-api.sock; authentication has been disabled for this server.
[2012-08-04 12:22:10 4661] INFO (XMLRPCServer:161) Opening Unix domain socket XML-RPC server on /var/run/xend/xmlrpc.sock.
[2012-08-04 12:22:19 4661] DEBUG (XendDomainInfo:103) XendDomainInfo.create(['vm', ['name', 'winxpbravo'], ['memory', 1024], ['shadow_memory', 8], ['on_xend_start', 'ignore'], ['on_xend_stop', 'ignore'], ['vcpus', 1], ['oos', 1], ['image', ['hvm', ['kernel', '/usr/lib/xen-4.1/boot/hvmloader'], ['videoram', 4], ['serial', 'pty'], ['acpi', 1], ['apic', 1], ['boot', 'dc'], ['cpuid', []], ['cpuid_check', []], ['display', ':0'], ['fda', ''], ['fdb', ''], ['guest_os_type', 'default'], ['hap', 1], ['hpet', 0], ['isa', 0], ['keymap', ''], ['localtime', 0], ['nographic', 0], ['oos', 1], ['pae', 1], ['pci', [['0x0000', '0x07', '0x01', '0x0', '0x100', [], '07:01.0']]], ['pci_msitranslate', 1], ['pci_power_mgmt', 0], ['rtc_timeoffset', 0], ['sdl', 0], ['soundhw', ''], ['stdvga', 0], ['timer_mode', 1], ['usb', 0], ['usbdevice', 'tablet'], ['vcpus', 1], ['vnc', 1], ['vncconsole', 1], ['vncunused', 1], ['viridian', 0], ['vpt_align', 1], ['xauthority', '/home/tuxuser/.Xauthority'], ['xen_platform_pci', 1], ['memory_sharing', 0], ['device_model', '/usr/lib/xen-4.1/bin/qemu-dm'], ['vncpasswd', 'XXXXXXXX'], ['tsc_mode', 0], ['nomigrate', 0]]], ['s3_integrity', 1], ['device', ['vbd', ['uname', 'phy:/dev/mainvg/winxpbravo'], ['dev', 'hda'], ['mode', 'w']]], ['device', ['vbd', ['uname', 'file:/DATA/xpua.iso'], ['dev', 'hdc:cdrom'], ['mode', 'r']]], ['device', ['pci', ['dev', ['slot', '0x01'], ['domain', '0x0000'], ['key', '07:01.0'], ['bus', '0x07'], ['vdevfn', '0x100'], ['func', '0x0']]]], ['device', ['vif', ['bridge', 'br0']]]])
[2012-08-04 12:22:19 4661] DEBUG (XendDomainInfo:2498) XendDomainInfo.constructDomain
[2012-08-04 12:22:19 4661] DEBUG (balloon:187) Balloon: 6173540 KiB free; need 16384; done.
[2012-08-04 12:22:19 4661] DEBUG (XendDomain:476) Adding Domain: 1
[2012-08-04 12:22:19 4661] DEBUG (XendDomainInfo:2836) XendDomainInfo.initDomain: 1 256
[2012-08-04 12:22:19 4661] DEBUG (image:339) No VNC passwd configured for vfb access
[2012-08-04 12:22:19 4661] DEBUG (image:891) args: boot, val: dc
[2012-08-04 12:22:19 4661] DEBUG (image:891) args: fda, val: None
[2012-08-04 12:22:19 4661] DEBUG (image:891) args: fdb, val: None
[2012-08-04 12:22:19 4661] DEBUG (image:891) args: soundhw, val: None
[2012-08-04 12:22:19 4661] DEBUG (image:891) args: localtime, val: 0
[2012-08-04 12:22:19 4661] DEBUG (image:891) args: serial, val: ['pty']
[2012-08-04 12:22:19 4661] DEBUG (image:891) args: std-vga, val: 0
[2012-08-04 12:22:19 4661] DEBUG (image:891) args: isa, val: 0
[2012-08-04 12:22:19 4661] DEBUG (image:891) args: acpi, val: 1
[2012-08-04 12:22:19 4661] DEBUG (image:891) args: usb, val: 0
[2012-08-04 12:22:19 4661] DEBUG (image:891) args: usbdevice, val: tablet
[2012-08-04 12:22:19 4661] DEBUG (image:891) args: gfx_passthru, val: None
[2012-08-04 12:22:19 4661] INFO (image:822) Need to create platform device.[domid:1]
[2012-08-04 12:22:19 4661] DEBUG (XendDomainInfo:2863) _initDomain:shadow_memory=0x8, memory_static_max=0x40000000, memory_static_min=0x0.
[2012-08-04 12:22:19 4661] INFO (image:182) buildDomain os=hvm dom=1 vcpus=1
[2012-08-04 12:22:19 4661] DEBUG (image:945) domid          = 1
[2012-08-04 12:22:19 4661] DEBUG (image:946) image          = /usr/lib/xen-4.1/boot/hvmloader
[2012-08-04 12:22:19 4661] DEBUG (image:947) store_evtchn   = 2
[2012-08-04 12:22:19 4661] DEBUG (image:948) memsize        = 1024
[2012-08-04 12:22:19 4661] DEBUG (image:949) target         = 1024
[2012-08-04 12:22:19 4661] DEBUG (image:950) vcpus          = 1
[2012-08-04 12:22:19 4661] DEBUG (image:951) vcpu_avail     = 1
[2012-08-04 12:22:19 4661] DEBUG (image:952) acpi           = 1
[2012-08-04 12:22:19 4661] DEBUG (image:953) apic           = 1
[2012-08-04 12:22:19 4661] INFO (XendDomainInfo:2357) createDevice: vfb : {'vncunused': 1, 'other_config': {'vncunused': 1, 'vnc': '1'}, 'vnc': '1', 'uuid': '9798fd97-65e8-6289-2173-1e89c6aaafa4'}
[2012-08-04 12:22:19 4661] DEBUG (DevController:95) DevController: writing {'state': '1', 'backend-id': '0', 'backend': '/local/domain/0/backend/vfb/1/0'} to /local/domain/1/device/vfb/0.
[2012-08-04 12:22:19 4661] DEBUG (DevController:97) DevController: writing {'vncunused': '1', 'domain': 'winxpbravo', 'frontend': '/local/domain/1/device/vfb/0', 'uuid': '9798fd97-65e8-6289-2173-1e89c6aaafa4', 'frontend-id': '1', 'state': '1', 'online': '1', 'vnc': '1'} to /local/domain/0/backend/vfb/1/0.
[2012-08-04 12:22:19 4661] INFO (XendDomainInfo:2357) createDevice: vbd : {'uuid': '5cff27e4-f2cd-0dba-e31a-926711434782', 'bootable': 1, 'driver': 'paravirtualised', 'dev': 'hda', 'uname': 'phy:/dev/mainvg/winxpbravo', 'mode': 'w'}
[2012-08-04 12:22:19 4661] DEBUG (DevController:95) DevController: writing {'backend-id': '0', 'virtual-device': '768', 'device-type': 'disk', 'state': '1', 'backend': '/local/domain/0/backend/vbd/1/768'} to /local/domain/1/device/vbd/768.
[2012-08-04 12:22:19 4661] DEBUG (DevController:97) DevController: writing {'domain': 'winxpbravo', 'frontend': '/local/domain/1/device/vbd/768', 'uuid': '5cff27e4-f2cd-0dba-e31a-926711434782', 'bootable': '1', 'dev': 'hda', 'state': '1', 'params': '/dev/mainvg/winxpbravo', 'mode': 'w', 'online': '1', 'frontend-id': '1', 'type': 'phy'} to /local/domain/0/backend/vbd/1/768.
[2012-08-04 12:22:19 4661] INFO (XendDomainInfo:2357) createDevice: vbd : {'uuid': 'a1600d60-b870-034d-47c2-74901af9e26b', 'bootable': 0, 'driver': 'paravirtualised', 'dev': 'hdc:cdrom', 'uname': 'file:/DATA/xpua.iso', 'mode': 'r'}
[2012-08-04 12:22:19 4661] DEBUG (DevController:95) DevController: writing {'backend-id': '0', 'virtual-device': '5632', 'device-type': 'cdrom', 'state': '1', 'backend': '/local/domain/0/backend/vbd/1/5632'} to /local/domain/1/device/vbd/5632.
[2012-08-04 12:22:19 4661] DEBUG (DevController:97) DevController: writing {'domain': 'winxpbravo', 'frontend': '/local/domain/1/device/vbd/5632', 'uuid': 'a1600d60-b870-034d-47c2-74901af9e26b', 'bootable': '0', 'dev': 'hdc', 'state': '1', 'params': '/DATA/xpua.iso', 'mode': 'r', 'online': '1', 'frontend-id': '1', 'type': 'file'} to /local/domain/0/backend/vbd/1/5632.
[2012-08-04 12:22:19 4661] INFO (XendDomainInfo:2357) createDevice: vif : {'bridge': 'br0', 'mac': '00:16:3e:57:f8:b7', 'uuid': 'd544bee2-524a-aeb2-2c6b-00aec7c3567f'}
[2012-08-04 12:22:19 4661] DEBUG (DevController:95) DevController: writing {'backend-id': '0', 'mac': '00:16:3e:57:f8:b7', 'handle': '0', 'state': '1', 'backend': '/local/domain/0/backend/vif/1/0'} to /local/domain/1/device/vif/0.
[2012-08-04 12:22:19 4661] DEBUG (DevController:97) DevController: writing {'bridge': 'br0', 'domain': 'winxpbravo', 'handle': '0', 'uuid': 'd544bee2-524a-aeb2-2c6b-00aec7c3567f', 'script': '/etc/xen/scripts/vif-bridge', 'mac': '00:16:3e:57:f8:b7', 'frontend-id': '1', 'state': '1', 'online': '1', 'frontend': '/local/domain/1/device/vif/0'} to /local/domain/0/backend/vif/1/0.
[2012-08-04 12:22:19 4661] INFO (XendDomainInfo:2357) createDevice: pci : {'devs': [{'slot': '0x01', 'domain': '0x0000', 'key': '07:01.0', 'bus': '0x07', 'vdevfn': '0x100', 'func': '0x0', 'uuid': 'ccd63064-bcb9-03cc-438b-7162e508642e'}], 'uuid': '0b6a4af4-5a4a-9c5a-8fdb-7a2fc6161eec'}
[2012-08-04 12:22:20 4661] INFO (image:418) spawning device models: /usr/lib/xen-4.1/bin/qemu-dm ['/usr/lib/xen-4.1/bin/qemu-dm', '-d', '1', '-domain-name', 'winxpbravo', '-videoram', '4', '-vnc', '127.0.0.1:0', '-vncunused', '-vcpus', '1', '-vcpu_avail', '0x1', '-boot', 'dc', '-serial', 'pty', '-acpi', '-usbdevice', 'tablet', '-net', 'nic,vlan=1,macaddr=00:16:3e:57:f8:b7,model=rtl8139', '-net', 'tap,vlan=1,ifname=vif1.0-emu,bridge=br0', '-M', 'xenfv']
[2012-08-04 12:22:20 4661] INFO (image:467) device model pid: 4999
[2012-08-04 12:22:20 4661] INFO (image:590) waiting for sentinel_fifo
[2012-08-04 12:22:20 4661] DEBUG (XendDomainInfo:893) XendDomainInfo.pci_device_configure: ['pci', ['dev', ['slot', '0x01'], ['domain', '0x0000'], ['key', '07:01.0'], ['bus', '0x07'], ['vdevfn', '0x100'], ['func', '0x0'], ['uuid', 'ccd63064-bcb9-03cc-438b-7162e508642e']], ['state', 'Initialising'], ['sub_state', 'Booting']]
[2012-08-04 12:22:20 4661] DEBUG (XendDomainInfo:779) XendDomainInfo.hvm_pci_device_insert: {'devs': [{'slot': '0x01', 'domain': '0x0000', 'key': '07:01.0', 'bus': '0x07', 'vdevfn': '0x100', 'func': '0x0', 'uuid': 'ccd63064-bcb9-03cc-438b-7162e508642e'}], 'states': ['Initialising']}
[2012-08-04 12:22:20 4661] DEBUG (XendDomainInfo:790) XendDomainInfo.hvm_pci_device_insert_dev: {'slot': '0x01', 'domain': '0x0000', 'key': '07:01.0', 'bus': '0x07', 'vdevfn': '0x100', 'func': '0x0', 'uuid': 'ccd63064-bcb9-03cc-438b-7162e508642e'}
[2012-08-04 12:22:20 4661] DEBUG (XendDomainInfo:811) XendDomainInfo.hvm_pci_device_insert_dev: 0000:07:01.0@100,msitranslate=1,power_mgmt=0
[2012-08-04 12:22:20 4661] DEBUG (XendDomainInfo:815) pci: assign device 0000:07:01.0@100,msitranslate=1,power_mgmt=0
[2012-08-04 12:22:20 4661] DEBUG (image:508) signalDeviceModel: orig_state is None, retrying
[2012-08-04 12:22:20 4661] DEBUG (image:508) signalDeviceModel: orig_state is None, retrying
[2012-08-04 12:22:20 4661] INFO (image:538) signalDeviceModel:restore dm state to running
[2012-08-04 12:22:20 4661] INFO (pciquirk:92) NO quirks found for PCI device [14f1:5b7a:0070:7444]
[2012-08-04 12:22:20 4661] DEBUG (pciquirk:135) Permissive mode NOT enabled for PCI device [14f1:5b7a:0070:7444]
[2012-08-04 12:22:20 4661] DEBUG (pciif:334) pci: enabling iomem 0xf4000000/0x4000000 pfn 0xf4000/0x4000
[2012-08-04 12:22:20 4661] DEBUG (pciif:351) pci: enabling irq 16
[2012-08-04 12:22:20 4661] DEBUG (pciif:456) pci: register aer watch /local/domain/0/backend/pci/1/0/aerState
[2012-08-04 12:22:20 4661] DEBUG (DevController:95) DevController: writing {'state': '1', 'backend-id': '0', 'backend': '/local/domain/0/backend/pci/1/0'} to /local/domain/1/device/pci/0.
[2012-08-04 12:22:20 4661] DEBUG (DevController:97) DevController: writing {'domain': 'winxpbravo', 'key-0': '07:01.0', 'vdevfn-0': '100', 'uuid': '0b6a4af4-5a4a-9c5a-8fdb-7a2fc6161eec', 'frontend-id': '1', 'dev-0': '0000:07:01.0', 'state': '1', 'online': '1', 'frontend': '/local/domain/1/device/pci/0', 'num_devs': '1', 'uuid-0': 'ccd63064-bcb9-03cc-438b-7162e508642e', 'opts-0': 'msitranslate=1,power_mgmt=0'} to /local/domain/0/backend/pci/1/0.
[2012-08-04 12:22:20 4661] DEBUG (pciif:169) Reconfiguring PCI device 0000:07:01.0.
[2012-08-04 12:22:20 4661] INFO (pciquirk:92) NO quirks found for PCI device [14f1:5b7a:0070:7444]
[2012-08-04 12:22:20 4661] DEBUG (pciquirk:135) Permissive mode NOT enabled for PCI device [14f1:5b7a:0070:7444]
[2012-08-04 12:22:20 4661] DEBUG (pciif:334) pci: enabling iomem 0xf4000000/0x4000000 pfn 0xf4000/0x4000
[2012-08-04 12:22:20 4661] DEBUG (pciif:351) pci: enabling irq 16
[2012-08-04 12:22:20 4661] DEBUG (XendDomainInfo:3420) Storing VM details: {'on_xend_stop': 'ignore', 'pool_name': 'Pool-0', 'shadow_memory': '9', 'uuid': '24648f2d-a378-8d17-0aa5-2e494641dea2', 'on_reboot': 'restart', 'start_time': '1344097340.61', 'on_poweroff': 'destroy', 'bootloader_args': '', 'on_xend_start': 'ignore', 'on_crash': 'restart', 'xend/restart_count': '0', 'vcpus': '1', 'vcpu_avail': '1', 'bootloader': '', 'image': "(hvm (kernel '') (superpages 0) (videoram 4) (hpet 0) (stdvga 0) (loader /usr/lib/xen-4.1/boot/hvmloader) (xen_platform_pci 1) (rtc_timeoffset 0) (pci ((0x0000 0x07 0x01 0x0 0x100 ()))) (hap 1) (localtime 0) (timer_mode 1) (pci_msitranslate 1) (oos 1) (apic 1) (sdl 0) (usbdevice tablet) (display :0) (vpt_align 1) (vncconsole 1) (serial pty) (vncunused 1) (boot dc) (pae 1) (viridian 0) (acpi 1) (vnc 1) (nographic 0) (nomigrate 0) (usb 0) (tsc_mode 0) (guest_os_type default) (device_model /usr/lib/xen-4.1/bin/qemu-dm) (pci_power_mgmt 0) (xauthority /home/tuxuser/.Xauthority) (isa 0) (notes (SUSPEND_CANCEL 1)))", 'name': 'winxpbravo'}
[2012-08-04 12:22:20 4661] DEBUG (XendDomainInfo:1794) Storing domain details: {'console/port': '3', 'description': '', 'console/limit': '1048576', 'store/port': '2', 'vm': '/vm/24648f2d-a378-8d17-0aa5-2e494641dea2', 'domid': '1', 'image/suspend-cancel': '1', 'cpu/0/availability': 'online', 'memory/target': '1048576', 'control/platform-feature-multiprocessor-suspend': '1', 'store/ring-ref': '1044476', 'console/type': 'ioemu', 'name': 'winxpbravo'}
[2012-08-04 12:22:20 4661] DEBUG (DevController:95) DevController: writing {'state': '1', 'backend-id': '0', 'backend': '/local/domain/0/backend/console/1/0'} to /local/domain/1/device/console/0.
[2012-08-04 12:22:20 4661] DEBUG (DevController:97) DevController: writing {'domain': 'winxpbravo', 'frontend': '/local/domain/1/device/console/0', 'uuid': 'b3e682ec-b1c5-f42f-da57-c81d9f8a1d75', 'frontend-id': '1', 'state': '1', 'location': '3', 'online': '1', 'protocol': 'vt100'} to /local/domain/0/backend/console/1/0.
[2012-08-04 12:22:20 4661] DEBUG (pciif:460) XendDomainInfo.handleAerStateWatch
[2012-08-04 12:22:20 4661] DEBUG (XendDomainInfo:1881) XendDomainInfo.handleShutdownWatch
[2012-08-04 12:22:20 4661] DEBUG (DevController:139) Waiting for devices tap2.
[2012-08-04 12:22:20 4661] DEBUG (DevController:139) Waiting for devices vif.
[2012-08-04 12:22:20 4661] DEBUG (DevController:144) Waiting for 0.
[2012-08-04 12:22:20 4661] DEBUG (DevController:628) hotplugStatusCallback /local/domain/0/backend/vif/1/0/hotplug-status.
[2012-08-04 12:22:20 4661] DEBUG (DevController:642) hotplugStatusCallback 1.
[2012-08-04 12:22:20 4661] DEBUG (DevController:139) Waiting for devices vkbd.
[2012-08-04 12:22:20 4661] DEBUG (DevController:139) Waiting for devices ioports.
[2012-08-04 12:22:20 4661] DEBUG (DevController:139) Waiting for devices tap.
[2012-08-04 12:22:20 4661] DEBUG (DevController:139) Waiting for devices vif2.
[2012-08-04 12:22:20 4661] DEBUG (DevController:139) Waiting for devices console.
[2012-08-04 12:22:20 4661] DEBUG (DevController:144) Waiting for 0.
[2012-08-04 12:22:20 4661] DEBUG (DevController:139) Waiting for devices vscsi.
[2012-08-04 12:22:20 4661] DEBUG (DevController:139) Waiting for devices vbd.
[2012-08-04 12:22:20 4661] DEBUG (DevController:144) Waiting for 768.
[2012-08-04 12:22:20 4661] DEBUG (DevController:628) hotplugStatusCallback /local/domain/0/backend/vbd/1/768/hotplug-status.
[2012-08-04 12:22:20 4661] DEBUG (DevController:642) hotplugStatusCallback 1.
[2012-08-04 12:22:20 4661] DEBUG (DevController:144) Waiting for 5632.
[2012-08-04 12:22:20 4661] DEBUG (DevController:628) hotplugStatusCallback /local/domain/0/backend/vbd/1/5632/hotplug-status.
[2012-08-04 12:22:20 4661] DEBUG (DevController:642) hotplugStatusCallback 1.
[2012-08-04 12:22:20 4661] DEBUG (DevController:139) Waiting for devices irq.
[2012-08-04 12:22:20 4661] DEBUG (DevController:139) Waiting for devices vfb.
[2012-08-04 12:22:20 4661] DEBUG (DevController:139) Waiting for devices pci.
[2012-08-04 12:22:20 4661] DEBUG (DevController:144) Waiting for 0.
[2012-08-04 12:22:20 4661] DEBUG (DevController:139) Waiting for devices vusb.
[2012-08-04 12:22:20 4661] DEBUG (DevController:139) Waiting for devices vtpm.
[2012-08-04 12:22:20 4661] INFO (XendDomain:1225) Domain winxpbravo (1) unpaused.
[2012-08-04 12:26:42 4661] INFO (XendDomainInfo:2078) Domain has shutdown: name=winxpbravo id=1 reason=poweroff.
[2012-08-04 12:26:42 4661] DEBUG (XendDomainInfo:3071) XendDomainInfo.destroy: domid=1
[2012-08-04 12:26:43 4661] DEBUG (XendDomainInfo:2401) Destroying device model
[2012-08-04 12:26:43 4661] INFO (image:615) winxpbravo device model terminated
[2012-08-04 12:26:43 4661] DEBUG (XendDomainInfo:2408) Releasing devices
[2012-08-04 12:26:43 4661] DEBUG (XendDomainInfo:2414) Removing vif/0
[2012-08-04 12:26:43 4661] DEBUG (XendDomainInfo:1276) XendDomainInfo.destroyDevice: deviceClass = vif, device = vif/0
[2012-08-04 12:26:43 4661] DEBUG (XendDomainInfo:2414) Removing console/0
[2012-08-04 12:26:43 4661] DEBUG (XendDomainInfo:1276) XendDomainInfo.destroyDevice: deviceClass = console, device = console/0
[2012-08-04 12:26:43 4661] DEBUG (XendDomainInfo:2414) Removing vbd/768
[2012-08-04 12:26:43 4661] DEBUG (XendDomainInfo:1276) XendDomainInfo.destroyDevice: deviceClass = vbd, device = vbd/768
[2012-08-04 12:26:43 4661] DEBUG (XendDomainInfo:2414) Removing vbd/5632
[2012-08-04 12:26:43 4661] DEBUG (XendDomainInfo:1276) XendDomainInfo.destroyDevice: deviceClass = vbd, device = vbd/5632
[2012-08-04 12:26:43 4661] DEBUG (XendDomainInfo:2414) Removing vfb/0
[2012-08-04 12:26:43 4661] DEBUG (XendDomainInfo:1276) XendDomainInfo.destroyDevice: deviceClass = vfb, device = vfb/0
[2012-08-04 12:26:43 4661] DEBUG (XendDomainInfo:2414) Removing pci/0
[2012-08-04 12:26:43 4661] DEBUG (XendDomainInfo:1276) XendDomainInfo.destroyDevice: deviceClass = pci, device = pci/0
[2012-08-04 12:26:43 4661] DEBUG (pciif:578) pci: unregister aer watch
[2012-08-04 12:26:43 4661] DEBUG (XendDomainInfo:2406) No device model
[2012-08-04 12:26:43 4661] DEBUG (XendDomainInfo:2408) Releasing devices
[2012-08-04 12:26:43 4661] DEBUG (XendDomainInfo:2414) Removing vif/0
[2012-08-04 12:26:43 4661] DEBUG (XendDomainInfo:1276) XendDomainInfo.destroyDevice: deviceClass = vif, device = vif/0
[2012-08-04 12:26:43 4661] DEBUG (XendDomainInfo:2414) Removing vbd/768
[2012-08-04 12:26:43 4661] DEBUG (XendDomainInfo:1276) XendDomainInfo.destroyDevice: deviceClass = vbd, device = vbd/768
[2012-08-04 12:26:43 4661] DEBUG (XendDomainInfo:2414) Removing vbd/5632
[2012-08-04 12:26:43 4661] DEBUG (XendDomainInfo:1276) XendDomainInfo.destroyDevice: deviceClass = vbd, device = vbd/5632

--------------020603040908070100040103
Content-Type: text/plain; charset=windows-1252;
 name="xend-debug.log"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename="xend-debug.log"

Xend started at Sat Aug  4 12:22:09 2012.
cat: /sys/bus/scsi/devices/host0/vendor: No such file or directory
cat: /sys/bus/scsi/devices/host0/model: No such file or directory
cat: /sys/bus/scsi/devices/host0/type: No such file or directory
cat: /sys/bus/scsi/devices/host0/rev: No such file or directory
cat: /sys/bus/scsi/devices/host0/scsi_level: No such file or directory
cat: /sys/bus/scsi/devices/host1/vendor: No such file or directory
cat: /sys/bus/scsi/devices/host1/model: No such file or directory
cat: /sys/bus/scsi/devices/host1/type: No such file or directory
cat: /sys/bus/scsi/devices/host1/rev: No such file or directory
cat: /sys/bus/scsi/devices/host1/scsi_level: No such file or directory
cat: /sys/bus/scsi/devices/host2/vendor: No such file or directory
cat: /sys/bus/scsi/devices/host2/model: No such file or directory
cat: /sys/bus/scsi/devices/host2/type: No such file or directory
cat: /sys/bus/scsi/devices/host2/rev: No such file or directory
cat: /sys/bus/scsi/devices/host2/scsi_level: No such file or directory
cat: /sys/bus/scsi/devices/host3/vendor: No such file or directory
cat: /sys/bus/scsi/devices/host3/model: No such file or directory
cat: /sys/bus/scsi/devices/host3/type: No such file or directory
cat: /sys/bus/scsi/devices/host3/rev: No such file or directory
cat: /sys/bus/scsi/devices/host3/scsi_level: No such file or directory
cat: /sys/bus/scsi/devices/host4/vendor: No such file or directory
cat: /sys/bus/scsi/devices/host4/model: No such file or directory
cat: /sys/bus/scsi/devices/host4/type: No such file or directory
cat: /sys/bus/scsi/devices/host4/rev: No such file or directory
cat: /sys/bus/scsi/devices/host4/scsi_level: No such file or directory
cat: /sys/bus/scsi/devices/host5/vendor: No such file or directory
cat: /sys/bus/scsi/devices/host5/model: No such file or directory
cat: /sys/bus/scsi/devices/host5/type: No such file or directory
cat: /sys/bus/scsi/devices/host5/rev: No such file or directory
cat: /sys/bus/scsi/devices/host5/scsi_level: No such file or directory
cat: /sys/bus/scsi/devices/host6/vendor: No such file or directory
cat: /sys/bus/scsi/devices/host6/model: No such file or directory
cat: /sys/bus/scsi/devices/host6/type: No such file or directory
cat: /sys/bus/scsi/devices/host6/rev: No such file or directory
cat: /sys/bus/scsi/devices/host6/scsi_level: No such file or directory
cat: /sys/bus/scsi/devices/host7/vendor: No such file or directory
cat: /sys/bus/scsi/devices/host7/model: No such file or directory
cat: /sys/bus/scsi/devices/host7/type: No such file or directory
cat: /sys/bus/scsi/devices/host7/rev: No such file or directory
cat: /sys/bus/scsi/devices/host7/scsi_level: No such file or directory
cat: /sys/bus/scsi/devices/host8/vendor: No such file or directory
cat: /sys/bus/scsi/devices/host8/model: No such file or directory
cat: /sys/bus/scsi/devices/host8/type: No such file or directory
cat: /sys/bus/scsi/devices/host8/rev: No such file or directory
cat: /sys/bus/scsi/devices/host8/scsi_level: No such file or directory
cat: /sys/bus/scsi/devices/host9/vendor: No such file or directory
cat: /sys/bus/scsi/devices/host9/model: No such file or directory
cat: /sys/bus/scsi/devices/host9/type: No such file or directory
cat: /sys/bus/scsi/devices/host9/rev: No such file or directory
cat: /sys/bus/scsi/devices/host9/scsi_level: No such file or directory
cat: /sys/bus/scsi/devices/target2:0:0/vendor: No such file or directory
cat: /sys/bus/scsi/devices/target2:0:0/model: No such file or directory
cat: /sys/bus/scsi/devices/target2:0:0/type: No such file or directory
cat: /sys/bus/scsi/devices/target2:0:0/rev: No such file or directory
cat: /sys/bus/scsi/devices/target2:0:0/scsi_level: No such file or directory
cat: /sys/bus/scsi/devices/target3:0:0/vendor: No such file or directory
cat: /sys/bus/scsi/devices/target3:0:0/model: No such file or directory
cat: /sys/bus/scsi/devices/target3:0:0/type: No such file or directory
cat: /sys/bus/scsi/devices/target3:0:0/rev: No such file or directory
cat: /sys/bus/scsi/devices/target3:0:0/scsi_level: No such file or directory
cat: /sys/bus/scsi/devices/target8:0:0/vendor: No such file or directory
cat: /sys/bus/scsi/devices/target8:0:0/model: No such file or directory
cat: /sys/bus/scsi/devices/target8:0:0/type: No such file or directory
cat: /sys/bus/scsi/devices/target8:0:0/rev: No such file or directory
cat: /sys/bus/scsi/devices/target8:0:0/scsi_level: No such file or directory
xc: info: VIRTUAL MEMORY ARRANGEMENT:
  Loader:        0000000000100000->0000000000174130
  TOTAL:         0000000000000000->0000000040000000
  ENTRY ADDRESS: 0000000000101520
xc: info: PHYSICAL MEMORY ALLOCATION:
  4KB PAGES: 0x0000000000000200
  2MB PAGES: 0x00000000000001ff
  1GB PAGES: 0x0000000000000000

--------------020603040908070100040103
Content-Type: text/plain; charset=windows-1252;
 name="xen-hotplug.log"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename="xen-hotplug.log"

RTNETLINK answers: Operation not supported

--------------020603040908070100040103
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--------------020603040908070100040103--


From xen-devel-bounces@lists.xen.org Sat Aug 04 20:38:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Aug 2012 20:38:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sxl6v-0003jS-Im; Sat, 04 Aug 2012 20:38:09 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <listmail@triad.rr.com>) id 1Sxl6t-0003jM-5U
	for xen-devel@lists.xen.org; Sat, 04 Aug 2012 20:38:07 +0000
Received: from [85.158.143.35:22080] by server-2.bemta-4.messagelabs.com id
	E2/32-17938-E288D105; Sat, 04 Aug 2012 20:38:06 +0000
X-Env-Sender: listmail@triad.rr.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1344112683!18674592!1
X-Originating-IP: [71.74.56.122]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA3MS43NC41Ni4xMjIgPT4gMzg2NDY4\n,sa_preprocessor: 
	QmFkIElQOiA3MS43NC41Ni4xMjIgPT4gMzg2NDY4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27130 invoked from network); 4 Aug 2012 20:38:04 -0000
Received: from hrndva-omtalb.mail.rr.com (HELO hrndva-omtalb.mail.rr.com)
	(71.74.56.122) by server-6.tower-21.messagelabs.com with SMTP;
	4 Aug 2012 20:38:04 -0000
X-Authority-Analysis: v=2.0 cv=ZuBv2qHG c=1 sm=0 a=R14c1kN7475LMi+rpQwGWw==:17
	a=Kt2980LFN-gA:10 a=kfTud4QeKxsA:10 a=mIU5gKuZvJwA:10
	a=05ChyHeVI94A:10 a=ayC55rCoAAAA:8 a=W2ASNwKqUNy46prrTFoA:9
	a=wPNLvfGTeEIA:10 a=AlD6xeB1mtd1I4UeoXAA:9 a=pILNOxqGKmIA:10
	a=oYszMzJgK7c-UBrM:21 a=5OftHOs5r1-k-HS1:21
	a=sYPjotMp4jqWD35iaBoA:9 a=o6KnOFfG3bstj1pE:21
	a=wlazF4Mc5GnPQm6M:21 a=IxzCleGfHC1At5Wr9AAA:9
	a=mzt647ctkWIfzJwtBMMA:9 a=R14c1kN7475LMi+rpQwGWw==:117
X-Cloudmark-Score: 0
X-Originating-IP: 65.190.252.167
Received: from [65.190.252.167] ([65.190.252.167:57898] helo=corenix.localnet)
	by hrndva-oedge04.mail.rr.com (envelope-from <listmail@triad.rr.com>)
	(ecelerity 2.2.3.46 r()) with ESMTP
	id C6/C3-21135-B288D105; Sat, 04 Aug 2012 20:38:03 +0000
Received: by corenix.localnet (Postfix, from userid 1003)
	id 0E9B985384; Sat,  4 Aug 2012 16:38:03 -0400 (EDT)
X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on corenix.localnet
X-Spam-Level: 
X-Spam-Status: No, score=-1.0 required=4.5 tests=ALL_TRUSTED autolearn=ham
	version=3.3.1
Received: from [192.168.0.128] (gamehub.localnet [192.168.0.128])
	by corenix.localnet (Postfix) with ESMTPS id 0362184B81
	for <xen-devel@lists.xen.org>; Sat,  4 Aug 2012 16:38:01 -0400 (EDT)
Message-ID: <501D881E.2020402@triad.rr.com>
Date: Sat, 04 Aug 2012 16:37:50 -0400
From: Richie <listmail@triad.rr.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:14.0) Gecko/20120713 Thunderbird/14.0
MIME-Version: 1.0
To: xen-devel@lists.xen.org
References: <501D82A9.8010805@triad.rr.com>
In-Reply-To: <501D82A9.8010805@triad.rr.com>
Content-Type: multipart/mixed; boundary="------------020603040908070100040103"
Subject: Re: [Xen-devel] wheezy VT-d passthrough test: DMAR:[fault reason
 06h] PTE Read access is not set
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.
--------------020603040908070100040103
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit

Additional information:

I just checked the HVM guest and saw that the passthrough actually succeeds: 
Windows wants to install a driver for the tuner.  So the IOMMU fault and 
the onboard NIC issue remain.  The serial card is on the same bus, but 
seems unaffected.

More logs are attached.

--------------020603040908070100040103
Content-Type: text/plain; charset=windows-1252;
 name="qemu-dm-winxpbravo.log"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename="qemu-dm-winxpbravo.log"

domid: 1
-c config qemu network with xen bridge for 
vif1.0-emu br0
Using file /dev/mainvg/winxpbravo in read-write mode
Using file /DATA/xpua.iso in read-only mode
Watching /local/domain/0/device-model/1/logdirty/cmd
Watching /local/domain/0/device-model/1/command
Watching /local/domain/1/cpu
char device redirected to /dev/pts/1
qemu_map_cache_init nr_buckets = 10000 size 4194304
shared page at pfn feffd
buffered io page at pfn feffb
Guest uuid = 24648f2d-a378-8d17-0aa5-2e494641dea2
Time offset set 0
populating video RAM at ff000000
mapping video RAM from ff000000
Register xen platform.
Done register platform.
platform_fixed_ioport: changed ro/rw state of ROM memory area. now is rw state.
xs_read(/local/domain/0/device-model/1/xen_extended_power_mgmt): read error
xs_read(): vncpasswd get error. /vm/24648f2d-a378-8d17-0aa5-2e494641dea2/vncpasswd.
medium change watch on `hdc' (index: 1): /DATA/xpua.iso
I/O request not ready: 0, ptr: 0, port: 0, data: 0, count: 0, size: 0
Log-dirty: no command yet.
vcpu-set: watch node error.
xs_read(/local/domain/1/log-throttling): read error
qemu: ignoring not-understood drive `/local/domain/1/log-throttling'
medium change watch on `/local/domain/1/log-throttling' - unknown device, ignored
dm-command: hot insert pass-through pci dev 
register_real_device: Assigning real physical device 07:01.0 ...
register_real_device: Enable MSI translation via per device option
register_real_device: Disable power management
pt_iomul_init: Error: pt_iomul_init can't open file /dev/xen/pci_iomul: No such file or directory: 0x7:0x1.0x0
pt_register_regions: IO region registered (size=0x04000000 base_addr=0xf4000000)
pci_intx: intx=1
register_real_device: Real physical device 07:01.0 registered successfuly!
IRQ type = INTx
char device redirected to /dev/pts/2
xen be: console-0: xen be: console-0: initialise() failed
initialise() failed
xen be: console-0: xen be: console-0: initialise() failed
initialise() failed
xen be: console-0: xen be: console-0: initialise() failed
initialise() failed
xen be: console-0: xen be: console-0: initialise() failed
initialise() failed
xen be: console-0: xen be: console-0: initialise() failed
initialise() failed
xen be: console-0: xen be: console-0: initialise() failed
initialise() failed
xen be: console-0: xen be: console-0: initialise() failed
initialise() failed
xen be: console-0: xen be: console-0: initialise() failed
initialise() failed
xen be: console-0: xen be: console-0: initialise() failed
initialise() failed
xen be: console-0: xen be: console-0: initialise() failed
initialise() failed
xen be: console-0: xen be: console-0: initialise() failed
initialise() failed
xen be: console-0: xen be: console-0: initialise() failed
initialise() failed
pt_iomem_map: e_phys=f0000000 maddr=f4000000 type=0 len=67108864 index=0 first_map=1
cirrus vga map change while on lfb mode
mapping vram to f4000000 - f4400000
platform_fixed_ioport: changed ro/rw state of ROM memory area. now is rw state.
platform_fixed_ioport: changed ro/rw state of ROM memory area. now is ro state.
pt_iomem_map: e_phys=ffffffff maddr=f4000000 type=0 len=67108864 index=0 first_map=0
pt_pci_write_config: Warning: Guest attempt to set address to unused Base Address Register. [00:05.0][Offset:30h][Length:4]
pt_iomem_map: e_phys=f0000000 maddr=f4000000 type=0 len=67108864 index=0 first_map=0
shutdown requested in cpu_handle_ioreq
Issued domain 1 poweroff

--------------020603040908070100040103
Content-Type: text/plain; charset=windows-1252;
 name="xend.log"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename="xend.log"

[2012-08-04 12:22:09 4661] INFO (SrvDaemon:332) Xend Daemon started
[2012-08-04 12:22:09 4661] INFO (SrvDaemon:336) Xend changeset: unavailable.
[2012-08-04 12:22:09 4661] DEBUG (XendNode:332) pscsi record count: 16
[2012-08-04 12:22:09 4661] DEBUG (XendCPUPool:747) recreate_active_pools
[2012-08-04 12:22:09 4661] DEBUG (XendDomainInfo:151) XendDomainInfo.recreate({'max_vcpu_id': 7, 'cpu_time': 137078680947L, 'ssidref': 0, 'hvm': 0, 'shutdown_reason': 255, 'dying': 0, 'online_vcpus': 8, 'domid': 0, 'paused': 0, 'crashed': 0, 'running': 1, 'maxmem_kb': 17179869180L, 'shutdown': 0, 'mem_kb': 2096776L, 'blocked': 0, 'handle': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'cpupool': 0, 'name': 'Domain-0'})
[2012-08-04 12:22:09 4661] INFO (XendDomainInfo:169) Recreating domain 0, UUID 00000000-0000-0000-0000-000000000000. at /local/domain/0
[2012-08-04 12:22:09 4661] DEBUG (XendDomain:476) Adding Domain: 0
[2012-08-04 12:22:09 4661] DEBUG (XendDomainInfo:1881) XendDomainInfo.handleShutdownWatch
[2012-08-04 12:22:09 4661] DEBUG (XendDomain:410) number of vcpus to use is 0
[2012-08-04 12:22:10 4661] WARNING (XendAPI:708) API call: VBD.set_device not found
[2012-08-04 12:22:10 4661] WARNING (XendAPI:708) API call: VBD.set_type not found
[2012-08-04 12:22:10 4661] WARNING (XendAPI:708) API call: session.get_all_records not found
[2012-08-04 12:22:10 4661] WARNING (XendAPI:708) API call: event.get_record not found
[2012-08-04 12:22:10 4661] WARNING (XendAPI:708) API call: event.get_all not found
[2012-08-04 12:22:10 4661] WARNING (XendAPI:708) API call: VIF.set_device not found
[2012-08-04 12:22:10 4661] WARNING (XendAPI:708) API call: VIF.set_MAC not found
[2012-08-04 12:22:10 4661] WARNING (XendAPI:708) API call: VIF.set_MTU not found
[2012-08-04 12:22:10 4661] WARNING (XendAPI:708) API call: debug.get_all not found
[2012-08-04 12:22:10 4661] INFO (XMLRPCServer:161) Opening Unix domain socket XML-RPC server on /var/run/xend/xen-api.sock; authentication has been disabled for this server.
[2012-08-04 12:22:10 4661] INFO (XMLRPCServer:161) Opening Unix domain socket XML-RPC server on /var/run/xend/xmlrpc.sock.
[2012-08-04 12:22:19 4661] DEBUG (XendDomainInfo:103) XendDomainInfo.create(['vm', ['name', 'winxpbravo'], ['memory', 1024], ['shadow_memory', 8], ['on_xend_start', 'ignore'], ['on_xend_stop', 'ignore'], ['vcpus', 1], ['oos', 1], ['image', ['hvm', ['kernel', '/usr/lib/xen-4.1/boot/hvmloader'], ['videoram', 4], ['serial', 'pty'], ['acpi', 1], ['apic', 1], ['boot', 'dc'], ['cpuid', []], ['cpuid_check', []], ['display', ':0'], ['fda', ''], ['fdb', ''], ['guest_os_type', 'default'], ['hap', 1], ['hpet', 0], ['isa', 0], ['keymap', ''], ['localtime', 0], ['nographic', 0], ['oos', 1], ['pae', 1], ['pci', [['0x0000', '0x07', '0x01', '0x0', '0x100', [], '07:01.0']]], ['pci_msitranslate', 1], ['pci_power_mgmt', 0], ['rtc_timeoffset', 0], ['sdl', 0], ['soundhw', ''], ['stdvga', 0], ['timer_mode', 1], ['usb', 0], ['usbdevice', 'tablet'], ['vcpus', 1], ['vnc', 1], ['vncconsole', 1], ['vncunused', 1], ['viridian', 0], ['vpt_align', 1], ['xauthority', '/home/tuxuser/.Xauthority'], ['xen_platform_pci', 1], ['memory_sharing', 0], ['device_model', '/usr/lib/xen-4.1/bin/qemu-dm'], ['vncpasswd', 'XXXXXXXX'], ['tsc_mode', 0], ['nomigrate', 0]]], ['s3_integrity', 1], ['device', ['vbd', ['uname', 'phy:/dev/mainvg/winxpbravo'], ['dev', 'hda'], ['mode', 'w']]], ['device', ['vbd', ['uname', 'file:/DATA/xpua.iso'], ['dev', 'hdc:cdrom'], ['mode', 'r']]], ['device', ['pci', ['dev', ['slot', '0x01'], ['domain', '0x0000'], ['key', '07:01.0'], ['bus', '0x07'], ['vdevfn', '0x100'], ['func', '0x0']]]], ['device', ['vif', ['bridge', 'br0']]]])
[2012-08-04 12:22:19 4661] DEBUG (XendDomainInfo:2498) XendDomainInfo.constructDomain
[2012-08-04 12:22:19 4661] DEBUG (balloon:187) Balloon: 6173540 KiB free; need 16384; done.
[2012-08-04 12:22:19 4661] DEBUG (XendDomain:476) Adding Domain: 1
[2012-08-04 12:22:19 4661] DEBUG (XendDomainInfo:2836) XendDomainInfo.initDomain: 1 256
[2012-08-04 12:22:19 4661] DEBUG (image:339) No VNC passwd configured for vfb access
[2012-08-04 12:22:19 4661] DEBUG (image:891) args: boot, val: dc
[2012-08-04 12:22:19 4661] DEBUG (image:891) args: fda, val: None
[2012-08-04 12:22:19 4661] DEBUG (image:891) args: fdb, val: None
[2012-08-04 12:22:19 4661] DEBUG (image:891) args: soundhw, val: None
[2012-08-04 12:22:19 4661] DEBUG (image:891) args: localtime, val: 0
[2012-08-04 12:22:19 4661] DEBUG (image:891) args: serial, val: ['pty']
[2012-08-04 12:22:19 4661] DEBUG (image:891) args: std-vga, val: 0
[2012-08-04 12:22:19 4661] DEBUG (image:891) args: isa, val: 0
[2012-08-04 12:22:19 4661] DEBUG (image:891) args: acpi, val: 1
[2012-08-04 12:22:19 4661] DEBUG (image:891) args: usb, val: 0
[2012-08-04 12:22:19 4661] DEBUG (image:891) args: usbdevice, val: tablet
[2012-08-04 12:22:19 4661] DEBUG (image:891) args: gfx_passthru, val: None
[2012-08-04 12:22:19 4661] INFO (image:822) Need to create platform device.[domid:1]
[2012-08-04 12:22:19 4661] DEBUG (XendDomainInfo:2863) _initDomain:shadow_memory=0x8, memory_static_max=0x40000000, memory_static_min=0x0.
[2012-08-04 12:22:19 4661] INFO (image:182) buildDomain os=hvm dom=1 vcpus=1
[2012-08-04 12:22:19 4661] DEBUG (image:945) domid          = 1
[2012-08-04 12:22:19 4661] DEBUG (image:946) image          = /usr/lib/xen-4.1/boot/hvmloader
[2012-08-04 12:22:19 4661] DEBUG (image:947) store_evtchn   = 2
[2012-08-04 12:22:19 4661] DEBUG (image:948) memsize        = 1024
[2012-08-04 12:22:19 4661] DEBUG (image:949) target         = 1024
[2012-08-04 12:22:19 4661] DEBUG (image:950) vcpus          = 1
[2012-08-04 12:22:19 4661] DEBUG (image:951) vcpu_avail     = 1
[2012-08-04 12:22:19 4661] DEBUG (image:952) acpi           = 1
[2012-08-04 12:22:19 4661] DEBUG (image:953) apic           = 1
[2012-08-04 12:22:19 4661] INFO (XendDomainInfo:2357) createDevice: vfb : {'vncunused': 1, 'other_config': {'vncunused': 1, 'vnc': '1'}, 'vnc': '1', 'uuid': '9798fd97-65e8-6289-2173-1e89c6aaafa4'}
[2012-08-04 12:22:19 4661] DEBUG (DevController:95) DevController: writing {'state': '1', 'backend-id': '0', 'backend': '/local/domain/0/backend/vfb/1/0'} to /local/domain/1/device/vfb/0.
[2012-08-04 12:22:19 4661] DEBUG (DevController:97) DevController: writing {'vncunused': '1', 'domain': 'winxpbravo', 'frontend': '/local/domain/1/device/vfb/0', 'uuid': '9798fd97-65e8-6289-2173-1e89c6aaafa4', 'frontend-id': '1', 'state': '1', 'online': '1', 'vnc': '1'} to /local/domain/0/backend/vfb/1/0.
[2012-08-04 12:22:19 4661] INFO (XendDomainInfo:2357) createDevice: vbd : {'uuid': '5cff27e4-f2cd-0dba-e31a-926711434782', 'bootable': 1, 'driver': 'paravirtualised', 'dev': 'hda', 'uname': 'phy:/dev/mainvg/winxpbravo', 'mode': 'w'}
[2012-08-04 12:22:19 4661] DEBUG (DevController:95) DevController: writing {'backend-id': '0', 'virtual-device': '768', 'device-type': 'disk', 'state': '1', 'backend': '/local/domain/0/backend/vbd/1/768'} to /local/domain/1/device/vbd/768.
[2012-08-04 12:22:19 4661] DEBUG (DevController:97) DevController: writing {'domain': 'winxpbravo', 'frontend': '/local/domain/1/device/vbd/768', 'uuid': '5cff27e4-f2cd-0dba-e31a-926711434782', 'bootable': '1', 'dev': 'hda', 'state': '1', 'params': '/dev/mainvg/winxpbravo', 'mode': 'w', 'online': '1', 'frontend-id': '1', 'type': 'phy'} to /local/domain/0/backend/vbd/1/768.
[2012-08-04 12:22:19 4661] INFO (XendDomainInfo:2357) createDevice: vbd : {'uuid': 'a1600d60-b870-034d-47c2-74901af9e26b', 'bootable': 0, 'driver': 'paravirtualised', 'dev': 'hdc:cdrom', 'uname': 'file:/DATA/xpua.iso', 'mode': 'r'}
[2012-08-04 12:22:19 4661] DEBUG (DevController:95) DevController: writing {'backend-id': '0', 'virtual-device': '5632', 'device-type': 'cdrom', 'state': '1', 'backend': '/local/domain/0/backend/vbd/1/5632'} to /local/domain/1/device/vbd/5632.
[2012-08-04 12:22:19 4661] DEBUG (DevController:97) DevController: writing {'domain': 'winxpbravo', 'frontend': '/local/domain/1/device/vbd/5632', 'uuid': 'a1600d60-b870-034d-47c2-74901af9e26b', 'bootable': '0', 'dev': 'hdc', 'state': '1', 'params': '/DATA/xpua.iso', 'mode': 'r', 'online': '1', 'frontend-id': '1', 'type': 'file'} to /local/domain/0/backend/vbd/1/5632.
[2012-08-04 12:22:19 4661] INFO (XendDomainInfo:2357) createDevice: vif : {'bridge': 'br0', 'mac': '00:16:3e:57:f8:b7', 'uuid': 'd544bee2-524a-aeb2-2c6b-00aec7c3567f'}
[2012-08-04 12:22:19 4661] DEBUG (DevController:95) DevController: writing {'backend-id': '0', 'mac': '00:16:3e:57:f8:b7', 'handle': '0', 'state': '1', 'backend': '/local/domain/0/backend/vif/1/0'} to /local/domain/1/device/vif/0.
[2012-08-04 12:22:19 4661] DEBUG (DevController:97) DevController: writing {'bridge': 'br0', 'domain': 'winxpbravo', 'handle': '0', 'uuid': 'd544bee2-524a-aeb2-2c6b-00aec7c3567f', 'script': '/etc/xen/scripts/vif-bridge', 'mac': '00:16:3e:57:f8:b7', 'frontend-id': '1', 'state': '1', 'online': '1', 'frontend': '/local/domain/1/device/vif/0'} to /local/domain/0/backend/vif/1/0.
[2012-08-04 12:22:19 4661] INFO (XendDomainInfo:2357) createDevice: pci : {'devs': [{'slot': '0x01', 'domain': '0x0000', 'key': '07:01.0', 'bus': '0x07', 'vdevfn': '0x100', 'func': '0x0', 'uuid': 'ccd63064-bcb9-03cc-438b-7162e508642e'}], 'uuid': '0b6a4af4-5a4a-9c5a-8fdb-7a2fc6161eec'}
[2012-08-04 12:22:20 4661] INFO (image:418) spawning device models: /usr/lib/xen-4.1/bin/qemu-dm ['/usr/lib/xen-4.1/bin/qemu-dm', '-d', '1', '-domain-name', 'winxpbravo', '-videoram', '4', '-vnc', '127.0.0.1:0', '-vncunused', '-vcpus', '1', '-vcpu_avail', '0x1', '-boot', 'dc', '-serial', 'pty', '-acpi', '-usbdevice', 'tablet', '-net', 'nic,vlan=1,macaddr=00:16:3e:57:f8:b7,model=rtl8139', '-net', 'tap,vlan=1,ifname=vif1.0-emu,bridge=br0', '-M', 'xenfv']
[2012-08-04 12:22:20 4661] INFO (image:467) device model pid: 4999
[2012-08-04 12:22:20 4661] INFO (image:590) waiting for sentinel_fifo
[2012-08-04 12:22:20 4661] DEBUG (XendDomainInfo:893) XendDomainInfo.pci_device_configure: ['pci', ['dev', ['slot', '0x01'], ['domain', '0x0000'], ['key', '07:01.0'], ['bus', '0x07'], ['vdevfn', '0x100'], ['func', '0x0'], ['uuid', 'ccd63064-bcb9-03cc-438b-7162e508642e']], ['state', 'Initialising'], ['sub_state', 'Booting']]
[2012-08-04 12:22:20 4661] DEBUG (XendDomainInfo:779) XendDomainInfo.hvm_pci_device_insert: {'devs': [{'slot': '0x01', 'domain': '0x0000', 'key': '07:01.0', 'bus': '0x07', 'vdevfn': '0x100', 'func': '0x0', 'uuid': 'ccd63064-bcb9-03cc-438b-7162e508642e'}], 'states': ['Initialising']}
[2012-08-04 12:22:20 4661] DEBUG (XendDomainInfo:790) XendDomainInfo.hvm_pci_device_insert_dev: {'slot': '0x01', 'domain': '0x0000', 'key': '07:01.0', 'bus': '0x07', 'vdevfn': '0x100', 'func': '0x0', 'uuid': 'ccd63064-bcb9-03cc-438b-7162e508642e'}
[2012-08-04 12:22:20 4661] DEBUG (XendDomainInfo:811) XendDomainInfo.hvm_pci_device_insert_dev: 0000:07:01.0@100,msitranslate=1,power_mgmt=0
[2012-08-04 12:22:20 4661] DEBUG (XendDomainInfo:815) pci: assign device 0000:07:01.0@100,msitranslate=1,power_mgmt=0
[2012-08-04 12:22:20 4661] DEBUG (image:508) signalDeviceModel: orig_state is None, retrying
[2012-08-04 12:22:20 4661] DEBUG (image:508) signalDeviceModel: orig_state is None, retrying
[2012-08-04 12:22:20 4661] INFO (image:538) signalDeviceModel:restore dm state to running
[2012-08-04 12:22:20 4661] INFO (pciquirk:92) NO quirks found for PCI device [14f1:5b7a:0070:7444]
[2012-08-04 12:22:20 4661] DEBUG (pciquirk:135) Permissive mode NOT enabled for PCI device [14f1:5b7a:0070:7444]
[2012-08-04 12:22:20 4661] DEBUG (pciif:334) pci: enabling iomem 0xf4000000/0x4000000 pfn 0xf4000/0x4000
[2012-08-04 12:22:20 4661] DEBUG (pciif:351) pci: enabling irq 16
[2012-08-04 12:22:20 4661] DEBUG (pciif:456) pci: register aer watch /local/domain/0/backend/pci/1/0/aerState
[2012-08-04 12:22:20 4661] DEBUG (DevController:95) DevController: writing {'state': '1', 'backend-id': '0', 'backend': '/local/domain/0/backend/pci/1/0'} to /local/domain/1/device/pci/0.
[2012-08-04 12:22:20 4661] DEBUG (DevController:97) DevController: writing {'domain': 'winxpbravo', 'key-0': '07:01.0', 'vdevfn-0': '100', 'uuid': '0b6a4af4-5a4a-9c5a-8fdb-7a2fc6161eec', 'frontend-id': '1', 'dev-0': '0000:07:01.0', 'state': '1', 'online': '1', 'frontend': '/local/domain/1/device/pci/0', 'num_devs': '1', 'uuid-0': 'ccd63064-bcb9-03cc-438b-7162e508642e', 'opts-0': 'msitranslate=1,power_mgmt=0'} to /local/domain/0/backend/pci/1/0.
[2012-08-04 12:22:20 4661] DEBUG (pciif:169) Reconfiguring PCI device 0000:07:01.0.
[2012-08-04 12:22:20 4661] INFO (pciquirk:92) NO quirks found for PCI device [14f1:5b7a:0070:7444]
[2012-08-04 12:22:20 4661] DEBUG (pciquirk:135) Permissive mode NOT enabled for PCI device [14f1:5b7a:0070:7444]
[2012-08-04 12:22:20 4661] DEBUG (pciif:334) pci: enabling iomem 0xf4000000/0x4000000 pfn 0xf4000/0x4000
[2012-08-04 12:22:20 4661] DEBUG (pciif:351) pci: enabling irq 16
[2012-08-04 12:22:20 4661] DEBUG (XendDomainInfo:3420) Storing VM details: {'on_xend_stop': 'ignore', 'pool_name': 'Pool-0', 'shadow_memory': '9', 'uuid': '24648f2d-a378-8d17-0aa5-2e494641dea2', 'on_reboot': 'restart', 'start_time': '1344097340.61', 'on_poweroff': 'destroy', 'bootloader_args': '', 'on_xend_start': 'ignore', 'on_crash': 'restart', 'xend/restart_count': '0', 'vcpus': '1', 'vcpu_avail': '1', 'bootloader': '', 'image': "(hvm (kernel '') (superpages 0) (videoram 4) (hpet 0) (stdvga 0) (loader /usr/lib/xen-4.1/boot/hvmloader) (xen_platform_pci 1) (rtc_timeoffset 0) (pci ((0x0000 0x07 0x01 0x0 0x100 ()))) (hap 1) (localtime 0) (timer_mode 1) (pci_msitranslate 1) (oos 1) (apic 1) (sdl 0) (usbdevice tablet) (display :0) (vpt_align 1) (vncconsole 1) (serial pty) (vncunused 1) (boot dc) (pae 1) (viridian 0) (acpi 1) (vnc 1) (nographic 0) (nomigrate 0) (usb 0) (tsc_mode 0) (guest_os_type default) (device_model /usr/lib/xen-4.1/bin/qemu-dm) (pci_power_mgmt 0) (xauthority /home/tuxuser/.Xauthority) (isa 0) (notes (SUSPEND_CANCEL 1)))", 'name': 'winxpbravo'}
[2012-08-04 12:22:20 4661] DEBUG (XendDomainInfo:1794) Storing domain details: {'console/port': '3', 'description': '', 'console/limit': '1048576', 'store/port': '2', 'vm': '/vm/24648f2d-a378-8d17-0aa5-2e494641dea2', 'domid': '1', 'image/suspend-cancel': '1', 'cpu/0/availability': 'online', 'memory/target': '1048576', 'control/platform-feature-multiprocessor-suspend': '1', 'store/ring-ref': '1044476', 'console/type': 'ioemu', 'name': 'winxpbravo'}
[2012-08-04 12:22:20 4661] DEBUG (DevController:95) DevController: writing {'state': '1', 'backend-id': '0', 'backend': '/local/domain/0/backend/console/1/0'} to /local/domain/1/device/console/0.
[2012-08-04 12:22:20 4661] DEBUG (DevController:97) DevController: writing {'domain': 'winxpbravo', 'frontend': '/local/domain/1/device/console/0', 'uuid': 'b3e682ec-b1c5-f42f-da57-c81d9f8a1d75', 'frontend-id': '1', 'state': '1', 'location': '3', 'online': '1', 'protocol': 'vt100'} to /local/domain/0/backend/console/1/0.
[2012-08-04 12:22:20 4661] DEBUG (pciif:460) XendDomainInfo.handleAerStateWatch
[2012-08-04 12:22:20 4661] DEBUG (XendDomainInfo:1881) XendDomainInfo.handleShutdownWatch
[2012-08-04 12:22:20 4661] DEBUG (DevController:139) Waiting for devices tap2.
[2012-08-04 12:22:20 4661] DEBUG (DevController:139) Waiting for devices vif.
[2012-08-04 12:22:20 4661] DEBUG (DevController:144) Waiting for 0.
[2012-08-04 12:22:20 4661] DEBUG (DevController:628) hotplugStatusCallback /local/domain/0/backend/vif/1/0/hotplug-status.
[2012-08-04 12:22:20 4661] DEBUG (DevController:642) hotplugStatusCallback 1.
[2012-08-04 12:22:20 4661] DEBUG (DevController:139) Waiting for devices vkbd.
[2012-08-04 12:22:20 4661] DEBUG (DevController:139) Waiting for devices ioports.
[2012-08-04 12:22:20 4661] DEBUG (DevController:139) Waiting for devices tap.
[2012-08-04 12:22:20 4661] DEBUG (DevController:139) Waiting for devices vif2.
[2012-08-04 12:22:20 4661] DEBUG (DevController:139) Waiting for devices console.
[2012-08-04 12:22:20 4661] DEBUG (DevController:144) Waiting for 0.
[2012-08-04 12:22:20 4661] DEBUG (DevController:139) Waiting for devices vscsi.
[2012-08-04 12:22:20 4661] DEBUG (DevController:139) Waiting for devices vbd.
[2012-08-04 12:22:20 4661] DEBUG (DevController:144) Waiting for 768.
[2012-08-04 12:22:20 4661] DEBUG (DevController:628) hotplugStatusCallback /local/domain/0/backend/vbd/1/768/hotplug-status.
[2012-08-04 12:22:20 4661] DEBUG (DevController:642) hotplugStatusCallback 1.
[2012-08-04 12:22:20 4661] DEBUG (DevController:144) Waiting for 5632.
[2012-08-04 12:22:20 4661] DEBUG (DevController:628) hotplugStatusCallback /local/domain/0/backend/vbd/1/5632/hotplug-status.
[2012-08-04 12:22:20 4661] DEBUG (DevController:642) hotplugStatusCallback 1.
[2012-08-04 12:22:20 4661] DEBUG (DevController:139) Waiting for devices irq.
[2012-08-04 12:22:20 4661] DEBUG (DevController:139) Waiting for devices vfb.
[2012-08-04 12:22:20 4661] DEBUG (DevController:139) Waiting for devices pci.
[2012-08-04 12:22:20 4661] DEBUG (DevController:144) Waiting for 0.
[2012-08-04 12:22:20 4661] DEBUG (DevController:139) Waiting for devices vusb.
[2012-08-04 12:22:20 4661] DEBUG (DevController:139) Waiting for devices vtpm.
[2012-08-04 12:22:20 4661] INFO (XendDomain:1225) Domain winxpbravo (1) unpaused.
[2012-08-04 12:26:42 4661] INFO (XendDomainInfo:2078) Domain has shutdown: name=winxpbravo id=1 reason=poweroff.
[2012-08-04 12:26:42 4661] DEBUG (XendDomainInfo:3071) XendDomainInfo.destroy: domid=1
[2012-08-04 12:26:43 4661] DEBUG (XendDomainInfo:2401) Destroying device model
[2012-08-04 12:26:43 4661] INFO (image:615) winxpbravo device model terminated
[2012-08-04 12:26:43 4661] DEBUG (XendDomainInfo:2408) Releasing devices
[2012-08-04 12:26:43 4661] DEBUG (XendDomainInfo:2414) Removing vif/0
[2012-08-04 12:26:43 4661] DEBUG (XendDomainInfo:1276) XendDomainInfo.destroyDevice: deviceClass = vif, device = vif/0
[2012-08-04 12:26:43 4661] DEBUG (XendDomainInfo:2414) Removing console/0
[2012-08-04 12:26:43 4661] DEBUG (XendDomainInfo:1276) XendDomainInfo.destroyDevice: deviceClass = console, device = console/0
[2012-08-04 12:26:43 4661] DEBUG (XendDomainInfo:2414) Removing vbd/768
[2012-08-04 12:26:43 4661] DEBUG (XendDomainInfo:1276) XendDomainInfo.destroyDevice: deviceClass = vbd, device = vbd/768
[2012-08-04 12:26:43 4661] DEBUG (XendDomainInfo:2414) Removing vbd/5632
[2012-08-04 12:26:43 4661] DEBUG (XendDomainInfo:1276) XendDomainInfo.destroyDevice: deviceClass = vbd, device = vbd/5632
[2012-08-04 12:26:43 4661] DEBUG (XendDomainInfo:2414) Removing vfb/0
[2012-08-04 12:26:43 4661] DEBUG (XendDomainInfo:1276) XendDomainInfo.destroyDevice: deviceClass = vfb, device = vfb/0
[2012-08-04 12:26:43 4661] DEBUG (XendDomainInfo:2414) Removing pci/0
[2012-08-04 12:26:43 4661] DEBUG (XendDomainInfo:1276) XendDomainInfo.destroyDevice: deviceClass = pci, device = pci/0
[2012-08-04 12:26:43 4661] DEBUG (pciif:578) pci: unregister aer watch
[2012-08-04 12:26:43 4661] DEBUG (XendDomainInfo:2406) No device model
[2012-08-04 12:26:43 4661] DEBUG (XendDomainInfo:2408) Releasing devices
[2012-08-04 12:26:43 4661] DEBUG (XendDomainInfo:2414) Removing vif/0
[2012-08-04 12:26:43 4661] DEBUG (XendDomainInfo:1276) XendDomainInfo.destroyDevice: deviceClass = vif, device = vif/0
[2012-08-04 12:26:43 4661] DEBUG (XendDomainInfo:2414) Removing vbd/768
[2012-08-04 12:26:43 4661] DEBUG (XendDomainInfo:1276) XendDomainInfo.destroyDevice: deviceClass = vbd, device = vbd/768
[2012-08-04 12:26:43 4661] DEBUG (XendDomainInfo:2414) Removing vbd/5632
[2012-08-04 12:26:43 4661] DEBUG (XendDomainInfo:1276) XendDomainInfo.destroyDevice: deviceClass = vbd, device = vbd/5632

--------------020603040908070100040103
Content-Type: text/plain; charset=windows-1252;
 name="xend-debug.log"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename="xend-debug.log"

Xend started at Sat Aug  4 12:22:09 2012.
cat: /sys/bus/scsi/devices/host0/vendor: No such file or directory
cat: /sys/bus/scsi/devices/host0/model: No such file or directory
cat: /sys/bus/scsi/devices/host0/type: No such file or directory
cat: /sys/bus/scsi/devices/host0/rev: No such file or directory
cat: /sys/bus/scsi/devices/host0/scsi_level: No such file or directory
cat: /sys/bus/scsi/devices/host1/vendor: No such file or directory
cat: /sys/bus/scsi/devices/host1/model: No such file or directory
cat: /sys/bus/scsi/devices/host1/type: No such file or directory
cat: /sys/bus/scsi/devices/host1/rev: No such file or directory
cat: /sys/bus/scsi/devices/host1/scsi_level: No such file or directory
cat: /sys/bus/scsi/devices/host2/vendor: No such file or directory
cat: /sys/bus/scsi/devices/host2/model: No such file or directory
cat: /sys/bus/scsi/devices/host2/type: No such file or directory
cat: /sys/bus/scsi/devices/host2/rev: No such file or directory
cat: /sys/bus/scsi/devices/host2/scsi_level: No such file or directory
cat: /sys/bus/scsi/devices/host3/vendor: No such file or directory
cat: /sys/bus/scsi/devices/host3/model: No such file or directory
cat: /sys/bus/scsi/devices/host3/type: No such file or directory
cat: /sys/bus/scsi/devices/host3/rev: No such file or directory
cat: /sys/bus/scsi/devices/host3/scsi_level: No such file or directory
cat: /sys/bus/scsi/devices/host4/vendor: No such file or directory
cat: /sys/bus/scsi/devices/host4/model: No such file or directory
cat: /sys/bus/scsi/devices/host4/type: No such file or directory
cat: /sys/bus/scsi/devices/host4/rev: No such file or directory
cat: /sys/bus/scsi/devices/host4/scsi_level: No such file or directory
cat: /sys/bus/scsi/devices/host5/vendor: No such file or directory
cat: /sys/bus/scsi/devices/host5/model: No such file or directory
cat: /sys/bus/scsi/devices/host5/type: No such file or directory
cat: /sys/bus/scsi/devices/host5/rev: No such file or directory
cat: /sys/bus/scsi/devices/host5/scsi_level: No such file or directory
cat: /sys/bus/scsi/devices/host6/vendor: No such file or directory
cat: /sys/bus/scsi/devices/host6/model: No such file or directory
cat: /sys/bus/scsi/devices/host6/type: No such file or directory
cat: /sys/bus/scsi/devices/host6/rev: No such file or directory
cat: /sys/bus/scsi/devices/host6/scsi_level: No such file or directory
cat: /sys/bus/scsi/devices/host7/vendor: No such file or directory
cat: /sys/bus/scsi/devices/host7/model: No such file or directory
cat: /sys/bus/scsi/devices/host7/type: No such file or directory
cat: /sys/bus/scsi/devices/host7/rev: No such file or directory
cat: /sys/bus/scsi/devices/host7/scsi_level: No such file or directory
cat: /sys/bus/scsi/devices/host8/vendor: No such file or directory
cat: /sys/bus/scsi/devices/host8/model: No such file or directory
cat: /sys/bus/scsi/devices/host8/type: No such file or directory
cat: /sys/bus/scsi/devices/host8/rev: No such file or directory
cat: /sys/bus/scsi/devices/host8/scsi_level: No such file or directory
cat: /sys/bus/scsi/devices/host9/vendor: No such file or directory
cat: /sys/bus/scsi/devices/host9/model: No such file or directory
cat: /sys/bus/scsi/devices/host9/type: No such file or directory
cat: /sys/bus/scsi/devices/host9/rev: No such file or directory
cat: /sys/bus/scsi/devices/host9/scsi_level: No such file or directory
cat: /sys/bus/scsi/devices/target2:0:0/vendor: No such file or directory
cat: /sys/bus/scsi/devices/target2:0:0/model: No such file or directory
cat: /sys/bus/scsi/devices/target2:0:0/type: No such file or directory
cat: /sys/bus/scsi/devices/target2:0:0/rev: No such file or directory
cat: /sys/bus/scsi/devices/target2:0:0/scsi_level: No such file or directory
cat: /sys/bus/scsi/devices/target3:0:0/vendor: No such file or directory
cat: /sys/bus/scsi/devices/target3:0:0/model: No such file or directory
cat: /sys/bus/scsi/devices/target3:0:0/type: No such file or directory
cat: /sys/bus/scsi/devices/target3:0:0/rev: No such file or directory
cat: /sys/bus/scsi/devices/target3:0:0/scsi_level: No such file or directory
cat: /sys/bus/scsi/devices/target8:0:0/vendor: No such file or directory
cat: /sys/bus/scsi/devices/target8:0:0/model: No such file or directory
cat: /sys/bus/scsi/devices/target8:0:0/type: No such file or directory
cat: /sys/bus/scsi/devices/target8:0:0/rev: No such file or directory
cat: /sys/bus/scsi/devices/target8:0:0/scsi_level: No such file or directory
xc: info: VIRTUAL MEMORY ARRANGEMENT:
  Loader:        0000000000100000->0000000000174130
  TOTAL:         0000000000000000->0000000040000000
  ENTRY ADDRESS: 0000000000101520
xc: info: PHYSICAL MEMORY ALLOCATION:
  4KB PAGES: 0x0000000000000200
  2MB PAGES: 0x00000000000001ff
  1GB PAGES: 0x0000000000000000

--------------020603040908070100040103
Content-Type: text/plain; charset=windows-1252;
 name="xen-hotplug.log"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename="xen-hotplug.log"

RTNETLINK answers: Operation not supported

--------------020603040908070100040103
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--------------020603040908070100040103--


From xen-devel-bounces@lists.xen.org Sat Aug 04 22:17:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Aug 2012 22:17:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sxmf0-0004DF-Ge; Sat, 04 Aug 2012 22:17:26 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Sxmez-0004DA-4A
	for xen-devel@lists.xensource.com; Sat, 04 Aug 2012 22:17:25 +0000
Received: from [85.158.139.83:42939] by server-12.bemta-5.messagelabs.com id
	AC/99-26304-47F9D105; Sat, 04 Aug 2012 22:17:24 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-11.tower-182.messagelabs.com!1344118643!23061242!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgxNDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17843 invoked from network); 4 Aug 2012 22:17:23 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-11.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Aug 2012 22:17:23 -0000
X-IronPort-AV: E=Sophos;i="4.77,714,1336348800"; d="scan'208";a="13852146"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	04 Aug 2012 22:17:22 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Sat, 4 Aug 2012 23:17:22 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Sxmew-0006cR-JW;
	Sat, 04 Aug 2012 22:17:22 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Sxmeu-0006OM-Ns;
	Sat, 04 Aug 2012 23:17:22 +0100
To: xen-devel@lists.xensource.com
Message-ID: <osstest-13549-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 4 Aug 2012 23:17:20 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13549: trouble: blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13549 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13549/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 13536
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 13536
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 13536
 build-amd64                   2 host-install(2)         broken REGR. vs. 13536
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 13536
 build-i386                    2 host-install(2)         broken REGR. vs. 13536

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  6ccad16b50b6
baseline version:
 xen                  3d17148e465c

------------------------------------------------------------
People who touched revisions under test:
  Christoph Egger <Christoph.Egger@amd.com>
  Dario Faggioli <dario.faggioli@citrix.com>
  David Vrabel <david.vrabel@citrix.com>
  Fabio Fantoni <fabio.fantoni@heliman.it>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Campbell <ian.campbell@eu.citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Olaf Hering <olaf@aepfle.de>
  Roger Pau Monne <roger.pau@citrix.com>
  Santosh Jodh <santosh.jodh@citrix.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 653 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Aug 05 02:51:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Aug 2012 02:51:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sxqvj-0001C3-R7; Sun, 05 Aug 2012 02:50:59 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Sxqvi-0001By-B0
	for xen-devel@lists.xensource.com; Sun, 05 Aug 2012 02:50:58 +0000
Received: from [85.158.143.99:2274] by server-3.bemta-4.messagelabs.com id
	7A/CE-01511-19FDD105; Sun, 05 Aug 2012 02:50:57 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-11.tower-216.messagelabs.com!1344135056!22766845!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgxNTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2617 invoked from network); 5 Aug 2012 02:50:56 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-11.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Aug 2012 02:50:56 -0000
X-IronPort-AV: E=Sophos;i="4.77,714,1336348800"; d="scan'208";a="13852671"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	05 Aug 2012 02:50:20 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Sun, 5 Aug 2012 03:50:19 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Sxqv5-0008PR-Ho;
	Sun, 05 Aug 2012 02:50:19 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Sxqv5-0005lQ-HM;
	Sun, 05 Aug 2012 03:50:19 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13550-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 5 Aug 2012 03:50:19 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 13550: trouble: blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13550 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13550/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386                    2 host-install(2)         broken REGR. vs. 13525
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 13525
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 13525
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 13525
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 13525
 build-amd64                   2 host-install(2)         broken REGR. vs. 13525

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  f8f8912b3de0
baseline version:
 xen                  fa34499e8f6c

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Joe Jin <joe.jin@oracle.com>
  Keir Fraser <keir@xen.org>
  Santosh Jodh <santosh.jodh@citrix.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23331:f8f8912b3de0
tag:         tip
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Fri Aug 03 10:43:24 2012 +0100
    
    nestedhvm: fix nested page fault build error on 32-bit
    
        cc1: warnings being treated as errors
        hvm.c: In function 'hvm_hap_nested_page_fault':
        hvm.c:1282: error: passing argument 2 of
        'nestedhvm_hap_nested_page_fault' from incompatible pointer type
        /local/scratch/ianc/devel/xen-unstable.hg/xen/include/asm/hvm/nestedhvm.h:55:
        note: expected 'paddr_t *' but argument is of type 'long unsigned
        int *'
    
    hvm_hap_nested_page_fault takes an unsigned long gpa and passes &gpa
    to nestedhvm_hap_nested_page_fault which takes a paddr_t *. Since both
    of the callers of hvm_hap_nested_page_fault (svm_do_nested_pgfault and
    ept_handle_violation) actually have the gpa which they pass to
    hvm_hap_nested_page_fault as a paddr_t I think it makes sense to
    change the argument to hvm_hap_nested_page_fault.
    
    The other user of gpa in hvm_hap_nested_page_fault is a call to
    p2m_mem_access_check, which currently also takes a paddr_t gpa but I
    think a paddr_t is appropriate there too.
    
    Jan points out that this is also an issue for >4GB guests on the 32
    bit hypervisor.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Committed-by: Ian Campbell <ian.campbell@citrix.com>
    xen-unstable changeset:   25724:612898732e66
    xen-unstable date:        Fri Aug 03 09:54:17 2012 +0100
    Backported-by: Keir Fraser <keir@xen.org>
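    [Editorial note: the type mismatch described above can be reduced to a
    small standalone sketch. The names `handle_fault` and `nested_fault`
    are hypothetical stand-ins for `hvm_hap_nested_page_fault` and
    `nestedhvm_hap_nested_page_fault`, and `paddr_t` is modeled as
    `uint64_t`; this is not the actual Xen code, only an illustration of
    why passing `&gpa` of type `unsigned long *` where `paddr_t *` is
    expected is both a build error and a truncation bug for >4GB guests
    on a 32-bit hypervisor.]

    ```c
    #include <assert.h>
    #include <stdint.h>

    /* On a 32-bit hypervisor, unsigned long is 32 bits but guest
     * physical addresses can exceed 4GB, so Xen uses a wider paddr_t. */
    typedef uint64_t paddr_t;

    /* Callee expects a pointer to a full-width physical address
     * (models nestedhvm_hap_nested_page_fault's paddr_t * argument). */
    static int nested_fault(paddr_t *gpa)
    {
        return (*gpa >> 32) != 0;   /* are the high bits preserved? */
    }

    /* After the fix: the caller takes paddr_t (not unsigned long),
     * so &gpa has the type the callee expects and no bits are lost. */
    static int handle_fault(paddr_t gpa)
    {
        return nested_fault(&gpa);
    }

    int main(void)
    {
        /* An address above 4GB survives intact with paddr_t. */
        assert(handle_fault((paddr_t)1 << 32) == 1);
        assert(handle_fault(0x1000) == 0);
        return 0;
    }
    ```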
    
    
changeset:   23330:5a65d6a1aab7
user:        Santosh Jodh <santosh.jodh@citrix.com>
date:        Fri Aug 03 10:39:13 2012 +0100
    
    Intel VT-d: Dump IOMMU supported page sizes
    
    Signed-off-by: Santosh Jodh <santosh.jodh@citrix.com>
    xen-unstable changeset:   25725:9ad379939b78
    xen-unstable date:        Fri Aug 03 10:38:04 2012 +0100
    
    
changeset:   23329:fa34499e8f6c
user:        Jan Beulich <jbeulich@suse.com>
date:        Mon Jul 30 13:38:58 2012 +0100
    
    x86: fix off-by-one in nr_irqs_gsi calculation
    
    highest_gsi() returns the last valid GSI, not a count.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Joe Jin <joe.jin@oracle.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset:   25688:e6266fc76d08
    xen-unstable date:        Fri Jul 27 12:22:13 2012 +0200
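    [Editorial note: the off-by-one above can be sketched in a few lines.
    `highest_gsi()` here is a hypothetical stand-in returning a fixed
    value, not the real Xen function; the point is only that it returns
    the last valid GSI index, so the count of GSIs is that value plus
    one.]

    ```c
    #include <assert.h>

    /* Stand-in for Xen's highest_gsi(): returns the last valid GSI
     * index, e.g. 23 for a single 24-pin IO-APIC covering GSIs 0..23. */
    static unsigned int highest_gsi(void)
    {
        return 23;
    }

    int main(void)
    {
        unsigned int nr_irqs_gsi;

        /* Buggy: treats the last index as a count, losing one GSI. */
        nr_irqs_gsi = highest_gsi();
        assert(nr_irqs_gsi == 23);

        /* Fixed, as in the changeset: convert index to count. */
        nr_irqs_gsi = highest_gsi() + 1;
        assert(nr_irqs_gsi == 24);
        return 0;
    }
    ```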
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  f8f8912b3de0
baseline version:
 xen                  fa34499e8f6c

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Joe Jin <joe.jin@oracle.com>
  Keir Fraser <keir@xen.org>
  Santosh Jodh <santosh.jodh@citrix.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23331:f8f8912b3de0
tag:         tip
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Fri Aug 03 10:43:24 2012 +0100
    
    nestedhvm: fix nested page fault build error on 32-bit
    
        cc1: warnings being treated as errors
        hvm.c: In function 'hvm_hap_nested_page_fault':
        hvm.c:1282: error: passing argument 2 of
        'nestedhvm_hap_nested_page_fault' from incompatible pointer type
        /local/scratch/ianc/devel/xen-unstable.hg/xen/include/asm/hvm/nestedhvm.h:55:
        note: expected 'paddr_t *' but argument is of type 'long unsigned
        int *'
    
    hvm_hap_nested_page_fault takes an unsigned long gpa and passes &gpa
    to nestedhvm_hap_nested_page_fault which takes a paddr_t *. Since both
    of the callers of hvm_hap_nested_page_fault (svm_do_nested_pgfault and
    ept_handle_violation) actually have the gpa which they pass to
    hvm_hap_nested_page_fault as a paddr_t I think it makes sense to
    change the argument to hvm_hap_nested_page_fault.
    
    The other user of gpa in hvm_hap_nested_page_fault is a call to
    p2m_mem_access_check, which currently also takes a paddr_t gpa but I
    think a paddr_t is appropriate there too.
    
    Jan points out that this is also an issue for >4GB guests on the 32
    bit hypervisor.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Committed-by: Ian Campbell <ian.campbell@citrix.com>
    xen-unstable changeset:   25724:612898732e66
    xen-unstable date:        Fri Aug 03 09:54:17 2012 +0100
    Backported-by: Keir Fraser <keir@xen.org>
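    The type mismatch above can be illustrated with a minimal sketch (the
    names below are illustrative stand-ins, not Xen's actual code): on a
    32-bit PAE build, paddr_t is 64 bits wide while unsigned long is only
    32, so routing a guest physical address through an unsigned long loses
    the bits above 4GB.

    ```c
    #include <assert.h>
    #include <stdint.h>

    /* Stand-in for Xen's paddr_t on a 32-bit PAE hypervisor build,
     * where physical addresses are wider than unsigned long. */
    typedef uint64_t paddr_t;

    /* Stand-in callee: writes back a (possibly >4GB) guest physical
     * address through the pointer, as nestedhvm_hap_nested_page_fault
     * does with its paddr_t * argument. */
    static void nested_fault_sketch(paddr_t *gpa)
    {
        *gpa = (paddr_t)5 << 32; /* an address above 4GB */
    }

    int main(void)
    {
        /* Post-fix shape: the caller keeps gpa as paddr_t throughout,
         * so addresses above 4GB survive the round trip. */
        paddr_t gpa = 0;
        nested_fault_sketch(&gpa);
        assert(gpa == (paddr_t)5 << 32);

        /* Pre-fix hazard, simulated: squeezing the same address into a
         * 32-bit integer (the width of unsigned long on 32-bit x86)
         * silently drops the high bits. */
        uint32_t narrow = (uint32_t)((paddr_t)5 << 32);
        assert(narrow == 0); /* high bits lost */
        return 0;
    }
    ```

    This is why the fix changes the hvm_hap_nested_page_fault parameter
    type rather than casting the pointer at the call site: a cast would
    silence the warning while leaving the truncation in place.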
    
    
changeset:   23330:5a65d6a1aab7
user:        Santosh Jodh <santosh.jodh@citrix.com>
date:        Fri Aug 03 10:39:13 2012 +0100
    
    Intel VT-d: Dump IOMMU supported page sizes
    
    Signed-off-by: Santosh Jodh <santosh.jodh@citrix.com>
    xen-unstable changeset:   25725:9ad379939b78
    xen-unstable date:        Fri Aug 03 10:38:04 2012 +0100
    
    
changeset:   23329:fa34499e8f6c
user:        Jan Beulich <jbeulich@suse.com>
date:        Mon Jul 30 13:38:58 2012 +0100
    
    x86: fix off-by-one in nr_irqs_gsi calculation
    
    highest_gsi() returns the last valid GSI, not a count.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Joe Jin <joe.jin@oracle.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset:   25688:e6266fc76d08
    xen-unstable date:        Fri Jul 27 12:22:13 2012 +0200
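    The off-by-one above is the classic last-valid-index vs count
    confusion; a minimal sketch (highest_gsi() here is a hypothetical
    stand-in returning a fixed value, not Xen's real implementation):

    ```c
    #include <assert.h>

    /* Stand-in: returns the LAST VALID GSI number, not a count.
     * E.g. with GSIs 0..23 present it returns 23. */
    static unsigned int highest_gsi(void)
    {
        return 23;
    }

    int main(void)
    {
        /* Buggy shape: using the return value directly as nr_irqs_gsi
         * makes the array one slot too small, so GSI 23 itself is
         * treated as out of range. */
        unsigned int buggy_nr = highest_gsi();
        assert(buggy_nr == 23);

        /* Fixed shape: convert last-valid-index to a count. */
        unsigned int nr_irqs_gsi = highest_gsi() + 1;
        assert(nr_irqs_gsi == 24);
        return 0;
    }
    ```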
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Aug 05 07:10:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Aug 2012 07:10:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sxuxp-0002eR-Lq; Sun, 05 Aug 2012 07:09:25 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Sxuxo-0002eI-67
	for xen-devel@lists.xensource.com; Sun, 05 Aug 2012 07:09:24 +0000
Received: from [85.158.138.51:54588] by server-11.bemta-3.messagelabs.com id
	A3/B7-00679-32C1E105; Sun, 05 Aug 2012 07:09:23 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-3.tower-174.messagelabs.com!1344150562!22363467!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgxNTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28319 invoked from network); 5 Aug 2012 07:09:22 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-3.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Aug 2012 07:09:22 -0000
X-IronPort-AV: E=Sophos;i="4.77,715,1336348800"; d="scan'208";a="13853249"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	05 Aug 2012 07:09:22 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Sun, 5 Aug 2012 08:09:22 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Sxuxl-0001QK-Tb;
	Sun, 05 Aug 2012 07:09:21 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Sxuxl-0001hr-T4;
	Sun, 05 Aug 2012 08:09:21 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13551-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 5 Aug 2012 08:09:21 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13551: trouble: blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13551 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13551/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 13536
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 13536
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 13536
 build-amd64                   2 host-install(2)         broken REGR. vs. 13536
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 13536
 build-i386                    2 host-install(2)         broken REGR. vs. 13536

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  6ccad16b50b6
baseline version:
 xen                  3d17148e465c

------------------------------------------------------------
People who touched revisions under test:
  Christoph Egger <Christoph.Egger@amd.com>
  Dario Faggioli <dario.faggioli@citrix.com>
  David Vrabel <david.vrabel@citrix.com>
  Fabio Fantoni <fabio.fantoni@heliman.it>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Campbell <ian.campbell@eu.citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Olaf Hering <olaf@aepfle.de>
  Roger Pau Monne <roger.pau@citrix.com>
  Santosh Jodh <santosh.jodh@citrix.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 653 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Aug 05 10:25:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Aug 2012 10:25:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sxy0s-0004MQ-J9; Sun, 05 Aug 2012 10:24:46 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jinsong.liu@intel.com>) id 1Sxy0q-0004ML-Eu
	for xen-devel@lists.xensource.com; Sun, 05 Aug 2012 10:24:44 +0000
Received: from [85.158.143.99:36397] by server-3.bemta-4.messagelabs.com id
	DA/0F-01511-BE94E105; Sun, 05 Aug 2012 10:24:43 +0000
X-Env-Sender: jinsong.liu@intel.com
X-Msg-Ref: server-10.tower-216.messagelabs.com!1344162282!23917774!1
X-Originating-IP: [143.182.124.37]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMzcgPT4gMjY2NjU4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11897 invoked from network); 5 Aug 2012 10:24:43 -0000
Received: from mga14.intel.com (HELO mga14.intel.com) (143.182.124.37)
	by server-10.tower-216.messagelabs.com with SMTP;
	5 Aug 2012 10:24:43 -0000
Received: from azsmga001.ch.intel.com ([10.2.17.19])
	by azsmga102.ch.intel.com with ESMTP; 05 Aug 2012 03:24:41 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.71,315,1320652800"; d="scan'208";a="177088802"
Received: from fmsmsx105.amr.corp.intel.com ([10.19.9.36])
	by azsmga001.ch.intel.com with ESMTP; 05 Aug 2012 03:24:41 -0700
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	FMSMSX105.amr.corp.intel.com (10.19.9.36) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Sun, 5 Aug 2012 03:24:40 -0700
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.82]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.232]) with mapi id
	14.01.0355.002; Sun, 5 Aug 2012 18:24:39 +0800
From: "Liu, Jinsong" <jinsong.liu@intel.com>
To: Jan Beulich <JBeulich@suse.com>
Thread-Topic: [Patch 2/6] Xen/MCE: remove mcg_ctl and other adjustment for
	future vMCE
Thread-Index: AQHNblY5CWSuIDJ4QUuNAy7SHei7jpdLAMcg
Date: Sun, 5 Aug 2012 10:24:38 +0000
Message-ID: <DE8DF0795D48FD4CA783C40EC82923352D5959@SHSMSX101.ccr.corp.intel.com>
References: <DE8DF0795D48FD4CA783C40EC82923352C40A1@SHSMSX101.ccr.corp.intel.com>
	<5016A69A0200007800091435@nat28.tlf.novell.com>
In-Reply-To: <5016A69A0200007800091435@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: Christoph Egger <Christoph.Egger@amd.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"Keir \(Xen.org\)" <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [Patch 2/6] Xen/MCE: remove mcg_ctl and other
 adjustment for future vMCE
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jan Beulich wrote:
>>>> On 23.07.12 at 11:39, "Liu, Jinsong" <jinsong.liu@intel.com> wrote:
>> @@ -175,11 +179,16 @@
>>                     *val);
>>          break;
>>      case MSR_IA32_MCG_CTL:
>> -        /* Always 0 if no CTL support */
>>          if ( cur->arch.mcg_cap & MCG_CTL_P )
>> -            *val = vmce->mcg_ctl & h_mcg_ctl;
>> -        mce_printk(MCE_VERBOSE, "MCE: rdmsr MCG_CTL 0x%"PRIx64"\n",
>> -                   *val);
>> +        {
>> +            *val = ~0UL;
>> +            mce_printk(MCE_VERBOSE, "MCE: rdmsr MCG_CTL 0x%"PRIx64"\n", *val);
>> +        }
>> +        else
>> +        {
>> +            mce_printk(MCE_QUIET, "MCE: no MCG_CTL\n");
>> +            ret = -1;
> 
> Is there a particular reason to make this access fault here, when
> it didn't before? I.e. was there anything wrong with the previous
> approach of returning zero on reads and ignoring writes when
> !MCG_CTL_P?
> 

Semantically this code is better than the previous approach: !MCG_CTL_P means MCG_CTL is unimplemented, so accessing it should generate #GP, just as on real hardware.

Thanks,
Jinsong

>> +        }
>>          break;
>>      default:
>>          ret = mce_bank_msr(cur, msr) ? bank_mce_rdmsr(cur, msr, val) : 0;
>> @@ -287,15 +296,16 @@
>>      struct domain_mca_msrs *vmce = dom_vmce(cur->domain);
>>      int ret = 1;
>> 
>> -    if ( !g_mcg_cap )
>> -        return 0;
>> -
>>      spin_lock(&vmce->lock);
>> 
>>      switch ( msr )
>>      {
>>      case MSR_IA32_MCG_CTL:
>> -        vmce->mcg_ctl = val;
>> +        if ( !(cur->arch.mcg_cap & MCG_CTL_P) )
>> +        {
>> +            mce_printk(MCE_QUIET, "MCE: no MCG_CTL\n");
>> +            ret = -1;
>> +        }
>>          break;
>>      case MSR_IA32_MCG_STATUS:
>>          vmce->mcg_status = val;
> 
> Other than that, the patch looks fine to me.
> 
> Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Aug 05 10:58:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Aug 2012 10:58:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxyXB-0004a1-Hf; Sun, 05 Aug 2012 10:58:09 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jinsong.liu@intel.com>) id 1SxyXA-0004Zw-G2
	for xen-devel@lists.xensource.com; Sun, 05 Aug 2012 10:58:08 +0000
Received: from [85.158.143.99:40653] by server-3.bemta-4.messagelabs.com id
	59/C8-01511-FB15E105; Sun, 05 Aug 2012 10:58:07 +0000
X-Env-Sender: jinsong.liu@intel.com
X-Msg-Ref: server-8.tower-216.messagelabs.com!1344164285!20554397!1
X-Originating-IP: [143.182.124.37]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMzcgPT4gMjY2NjU4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13281 invoked from network); 5 Aug 2012 10:58:06 -0000
Received: from mga14.intel.com (HELO mga14.intel.com) (143.182.124.37)
	by server-8.tower-216.messagelabs.com with SMTP;
	5 Aug 2012 10:58:06 -0000
Received: from azsmga002.ch.intel.com ([10.2.17.35])
	by azsmga102.ch.intel.com with ESMTP; 05 Aug 2012 03:58:04 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.71,315,1320652800"; d="scan'208";a="130446757"
Received: from fmsmsx106.amr.corp.intel.com ([10.19.9.37])
	by AZSMGA002.ch.intel.com with ESMTP; 05 Aug 2012 03:58:04 -0700
Received: from shsmsx102.ccr.corp.intel.com (10.239.4.154) by
	FMSMSX106.amr.corp.intel.com (10.19.9.37) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Sun, 5 Aug 2012 03:58:04 -0700
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.82]) by
	SHSMSX102.ccr.corp.intel.com ([169.254.2.196]) with mapi id
	14.01.0355.002; Sun, 5 Aug 2012 18:58:02 +0800
From: "Liu, Jinsong" <jinsong.liu@intel.com>
To: Jan Beulich <JBeulich@suse.com>
Thread-Topic: [Patch 3/6] Xen/MCE: vMCE emulation
Thread-Index: AQHNblnLg1E9oqFnbkePc5fHVAoEsJdLELew
Date: Sun, 5 Aug 2012 10:58:01 +0000
Message-ID: <DE8DF0795D48FD4CA783C40EC82923352D597E@SHSMSX101.ccr.corp.intel.com>
References: <DE8DF0795D48FD4CA783C40EC82923352C40B1@SHSMSX101.ccr.corp.intel.com>
	<5016AC920200007800091446@nat28.tlf.novell.com>
In-Reply-To: <5016AC920200007800091446@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: Christoph Egger <Christoph.Egger@amd.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"Keir \(Xen.org\)" <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [Patch 3/6] Xen/MCE: vMCE emulation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jan Beulich wrote:
>>>> On 23.07.12 at 11:40, "Liu, Jinsong" <jinsong.liu@intel.com> wrote:
> 
> From here on, I think this is to go in only after 4.2. Hence
> eventual resubmission wouldn't be necessary until after 4.2
> went out.
> 

OK, I agree with all the comments; I will update accordingly and resubmit after that.

Thanks,
Jinsong

>> -int intel_mce_rdmsr(const struct vcpu *, uint32_t msr, uint64_t *val);
>> -int intel_mce_wrmsr(struct vcpu *, uint32_t msr, uint64_t val);
>> +void intel_vmce_mci_ctl2_rdmsr(const struct vcpu *, uint32_t msr, uint64_t *val);
>> +void intel_vmce_mci_ctl2_wrmsr(struct vcpu *, uint32_t msr, uint64_t val);
> 
> I don't see a need for renaming those - they could well serve to
> deal with eventual other Intel specific additions to the MSR space
> in the future.
> 
>> -int intel_mce_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
>> +void intel_vmce_mci_ctl2_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
>>  {
>> -    int ret = 0;
>> +    int bank = msr - MSR_IA32_MC0_CTL2;
> 
> 'unsigned int' here allows ...
> 
>> 
>> -    if ( msr >= MSR_IA32_MC0_CTL2 &&
>> -         msr < MSR_IA32_MCx_CTL2(v->arch.mcg_cap & MCG_CAP_COUNT) )
>> +    if ( (bank >= 0) && (bank < GUEST_BANK_NUM) )
> 
> ... the first check here to be dropped (and is more natural as
> well as in line with other code in this file).
> 
>>  void vmce_init_vcpu(struct vcpu *v)
>>  {
>> -    v->arch.mcg_cap = GUEST_MCG_CAP;
>> +    int i;
>> +
>> +    /* global MCA MSRs init */
>> +    v->arch.vmce.mcg_cap = GUEST_MCG_CAP;
>> +    v->arch.vmce.mcg_status = 0;
>> +
>> +    /* per-bank MCA MSRs init */
>> +    for ( i = 0; i < GUEST_BANK_NUM; i++ )
>> +    {
>> +        v->arch.vmce.bank[i].mci_status = 0;
>> +        v->arch.vmce.bank[i].mci_addr = 0;
>> +        v->arch.vmce.bank[i].mci_misc = 0;
>> +        v->arch.vmce.bank[i].mci_ctl2 = 0;
>> +    }
> 
> memset()?
> 
>> @@ -3,28 +3,46 @@
>>  #ifndef _XEN_X86_MCE_H
>>  #define _XEN_X86_MCE_H
>> 
>> -/* This entry is for recording bank nodes for the impacted domain,
>> - * put into impact_header list. */
>> -struct bank_entry {
>> -    struct list_head list;
>> -    uint16_t bank;
>> +/*
>> + * Emulate 2 banks for guest
>> + * Bank0: reserved for the 'bank0 quirk' occurring on some very old processors:
>> + *   1). Intel cpu whose family-model value < 06-1A;
>> + *   2). AMD K7
>> + * Bank1: used to transfer error info to guest
>> + */
>> +#define BANK0 0
>> +#define BANK1 1
> 
> These two look superfluous.
> 
>> +#define GUEST_BANK_NUM 2
> 
> This one (plus the BANK* ones if you strongly feel they should be
> kept) should get MC... added somewhere in their names, as this is
> a header that's not private to the MCE code.
> 
>> +
>> +/*
>> + * MCG_SER_P:  software error recovery supported
>> + * MCG_TES_P:  to avoid MCi_status bit56:53 model specific
>> + * MCG_CMCI_P: expose CMCI capability but never really inject it to guest,
>> + *             for sake of performance since guest not polling periodically
>> + */
>> +#define GUEST_MCG_CAP (MCG_SER_P | MCG_TES_P | MCG_CMCI_P | GUEST_BANK_NUM)
> 
> Didn't we settle on not enabling CMCI_P and TES_P for AMD CPUs?
> 
>> +
>> +/* Filter MSCOD model specific error code to guest */
>> +#define MCi_STATUS_MSCOD_MASK (~(0x0ffffUL << 16))
> 
> Is that really correct, especially for both 32- and 64-bit?
> 
> Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Aug 05 11:42:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Aug 2012 11:42:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxzDH-0004sI-78; Sun, 05 Aug 2012 11:41:39 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SxzDF-0004sD-G4
	for xen-devel@lists.xensource.com; Sun, 05 Aug 2012 11:41:37 +0000
Received: from [85.158.143.35:43004] by server-3.bemta-4.messagelabs.com id
	01/B6-01511-0FB5E105; Sun, 05 Aug 2012 11:41:36 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1344166895!16851437!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgxNTc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2657 invoked from network); 5 Aug 2012 11:41:35 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Aug 2012 11:41:35 -0000
X-IronPort-AV: E=Sophos;i="4.77,715,1336348800"; d="scan'208";a="13854163"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	05 Aug 2012 11:41:35 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Sun, 5 Aug 2012 12:41:35 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1SxzDC-00030p-Sr;
	Sun, 05 Aug 2012 11:41:34 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1SxzDC-0001gk-Rn;
	Sun, 05 Aug 2012 12:41:34 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13552-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 5 Aug 2012 12:41:34 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 13552: trouble: blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13552 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13552/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386                    2 host-install(2)         broken REGR. vs. 13525
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 13525
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 13525
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 13525
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 13525
 build-amd64                   2 host-install(2)         broken REGR. vs. 13525

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  f8f8912b3de0
baseline version:
 xen                  fa34499e8f6c

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Joe Jin <joe.jin@oracle.com>
  Keir Fraser <keir@xen.org>
  Santosh Jodh <santosh.jodh@citrix.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23331:f8f8912b3de0
tag:         tip
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Fri Aug 03 10:43:24 2012 +0100
    
    nestedhvm: fix nested page fault build error on 32-bit
    
        cc1: warnings being treated as errors
        hvm.c: In function 'hvm_hap_nested_page_fault':
        hvm.c:1282: error: passing argument 2 of
        'nestedhvm_hap_nested_page_fault' from incompatible pointer type
        /local/scratch/ianc/devel/xen-unstable.hg/xen/include/asm/hvm/nestedhvm.h:55:
        note: expected 'paddr_t *' but argument is of type 'long unsigned
        int *'
    
    hvm_hap_nested_page_fault takes an unsigned long gpa and passes &gpa
    to nestedhvm_hap_nested_page_fault which takes a paddr_t *. Since both
    of the callers of hvm_hap_nested_page_fault (svm_do_nested_pgfault and
    ept_handle_violation) actually have the gpa which they pass to
    hvm_hap_nested_page_fault as a paddr_t I think it makes sense to
    change the argument to hvm_hap_nested_page_fault.
    
    The other user of gpa in hvm_hap_nested_page_fault is a call to
    p2m_mem_access_check, which currently also takes a paddr_t gpa but I
    think a paddr_t is appropriate there too.
    
    Jan points out that this is also an issue for >4GB guests on the 32
    bit hypervisor.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Committed-by: Ian Campbell <ian.campbell@citrix.com>
    xen-unstable changeset:   25724:612898732e66
    xen-unstable date:        Fri Aug 03 09:54:17 2012 +0100
    Backported-by: Keir Fraser <keir@xen.org>
    
    
changeset:   23330:5a65d6a1aab7
user:        Santosh Jodh <santosh.jodh@citrix.com>
date:        Fri Aug 03 10:39:13 2012 +0100
    
    Intel VT-d: Dump IOMMU supported page sizes
    
    Signed-off-by: Santosh Jodh <santosh.jodh@citrix.com>
    xen-unstable changeset:   25725:9ad379939b78
    xen-unstable date:        Fri Aug 03 10:38:04 2012 +0100
    
    
changeset:   23329:fa34499e8f6c
user:        Jan Beulich <jbeulich@suse.com>
date:        Mon Jul 30 13:38:58 2012 +0100
    
    x86: fix off-by-one in nr_irqs_gsi calculation
    
    highest_gsi() returns the last valid GSI, not a count.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Joe Jin <joe.jin@oracle.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset:   25688:e6266fc76d08
    xen-unstable date:        Fri Jul 27 12:22:13 2012 +0200
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Aug 05 11:42:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Aug 2012 11:42:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SxzDH-0004sI-78; Sun, 05 Aug 2012 11:41:39 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SxzDF-0004sD-G4
	for xen-devel@lists.xensource.com; Sun, 05 Aug 2012 11:41:37 +0000
Received: from [85.158.143.35:43004] by server-3.bemta-4.messagelabs.com id
	01/B6-01511-0FB5E105; Sun, 05 Aug 2012 11:41:36 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1344166895!16851437!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgxNTc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2657 invoked from network); 5 Aug 2012 11:41:35 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Aug 2012 11:41:35 -0000
X-IronPort-AV: E=Sophos;i="4.77,715,1336348800"; d="scan'208";a="13854163"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	05 Aug 2012 11:41:35 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Sun, 5 Aug 2012 12:41:35 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1SxzDC-00030p-Sr;
	Sun, 05 Aug 2012 11:41:34 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1SxzDC-0001gk-Rn;
	Sun, 05 Aug 2012 12:41:34 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13552-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 5 Aug 2012 12:41:34 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 13552: trouble: blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13552 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13552/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386                    2 host-install(2)         broken REGR. vs. 13525
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 13525
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 13525
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 13525
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 13525
 build-amd64                   2 host-install(2)         broken REGR. vs. 13525

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  f8f8912b3de0
baseline version:
 xen                  fa34499e8f6c

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Joe Jin <joe.jin@oracle.com>
  Keir Fraser <keir@xen.org>
  Santosh Jodh <santosh.jodh@citrix.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23331:f8f8912b3de0
tag:         tip
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Fri Aug 03 10:43:24 2012 +0100
    
    nestedhvm: fix nested page fault build error on 32-bit
    
        cc1: warnings being treated as errors
        hvm.c: In function 'hvm_hap_nested_page_fault':
        hvm.c:1282: error: passing argument 2 of
        'nestedhvm_hap_nested_page_fault' from incompatible pointer type
        /local/scratch/ianc/devel/xen-unstable.hg/xen/include/asm/hvm/nestedhvm.h:55:
        note: expected 'paddr_t *' but argument is of type 'long unsigned
        int *'
    
    hvm_hap_nested_page_fault takes an unsigned long gpa and passes &gpa
    to nestedhvm_hap_nested_page_fault which takes a paddr_t *. Since both
    of the callers of hvm_hap_nested_page_fault (svm_do_nested_pgfault and
    ept_handle_violation) actually have the gpa which they pass to
    hvm_hap_nested_page_fault as a paddr_t I think it makes sense to
    change the argument to hvm_hap_nested_page_fault.
    
    The other user of gpa in hvm_hap_nested_page_fault is a call to
    p2m_mem_access_check, which currently also takes a paddr_t gpa but I
    think a paddr_t is appropriate there too.
    
    Jan points out that this is also an issue for >4GB guests on the 32
    bit hypervisor.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Committed-by: Ian Campbell <ian.campbell@citrix.com>
    xen-unstable changeset:   25724:612898732e66
    xen-unstable date:        Fri Aug 03 09:54:17 2012 +0100
    Backported-by: Keir Fraser <keir@xen.org>
    
    
changeset:   23330:5a65d6a1aab7
user:        Santosh Jodh <santosh.jodh@citrix.com>
date:        Fri Aug 03 10:39:13 2012 +0100
    
    Intel VT-d: Dump IOMMU supported page sizes
    
    Signed-off-by: Santosh Jodh <santosh.jodh@citrix.com>
    xen-unstable changeset:   25725:9ad379939b78
    xen-unstable date:        Fri Aug 03 10:38:04 2012 +0100
    
    
changeset:   23329:fa34499e8f6c
user:        Jan Beulich <jbeulich@suse.com>
date:        Mon Jul 30 13:38:58 2012 +0100
    
    x86: fix off-by-one in nr_irqs_gsi calculation
    
    highest_gsi() returns the last valid GSI, not a count.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Joe Jin <joe.jin@oracle.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset:   25688:e6266fc76d08
    xen-unstable date:        Fri Jul 27 12:22:13 2012 +0200
    
    
(qemu changes not included)


From xen-devel-bounces@lists.xen.org Sun Aug 05 14:03:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Aug 2012 14:03:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sy1Pj-0005ar-NT; Sun, 05 Aug 2012 14:02:39 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mpf30@cam.ac.uk>) id 1Sy0Ez-0005K4-A5
	for xen-devel@lists.xen.org; Sun, 05 Aug 2012 12:47:29 +0000
Received: from [85.158.143.99:53054] by server-3.bemta-4.messagelabs.com id
	20/9A-01511-06B6E105; Sun, 05 Aug 2012 12:47:28 +0000
X-Env-Sender: mpf30@cam.ac.uk
X-Msg-Ref: server-15.tower-216.messagelabs.com!1344170847!29722448!1
X-Originating-IP: [131.111.8.141]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMxLjExMS44LjE0MSA9PiAxMjc3Njc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 767 invoked from network); 5 Aug 2012 12:47:28 -0000
Received: from ppsw-41.csi.cam.ac.uk (HELO ppsw-41.csi.cam.ac.uk)
	(131.111.8.141) by server-15.tower-216.messagelabs.com with SMTP;
	5 Aug 2012 12:47:28 -0000
X-Cam-AntiVirus: no malware found
X-Cam-SpamDetails: not scanned
X-Cam-ScannerInfo: http://www.cam.ac.uk/cs/email/scanner/
Received: from mpf30.sp.phy.cam.ac.uk ([131.111.73.200]:50515 helo=FexBox)
	by ppsw-41.csi.cam.ac.uk (smtp.hermes.cam.ac.uk [131.111.8.156]:465)
	with esmtpsa (LOGIN:mpf30) (TLSv1:AES128-SHA:128)
	id 1Sy0Ex-00014l-S6 (Exim 4.72) for xen-devel@lists.xen.org
	(return-path <mpf30@cam.ac.uk>); Sun, 05 Aug 2012 13:47:27 +0100
From: "M. Fletcher" <mpf30@cam.ac.uk>
To: <xen-devel@lists.xen.org>
Date: Sun, 5 Aug 2012 13:47:30 +0100
Message-ID: <002901cd7308$79d55890$6d8009b0$@cam.ac.uk>
MIME-Version: 1.0
X-Mailer: Microsoft Outlook 14.0
Thread-Index: Ac1zBuiTgEeBFzEqQd+cDqeQ+ZjgWg==
Content-Language: en-gb
X-Mailman-Approved-At: Sun, 05 Aug 2012 14:02:38 +0000
Subject: [Xen-devel] Unable to configure GRUB to boot xen 4.2 Kernel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1916301179634493384=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multipart message in MIME format.

--===============1916301179634493384==
Content-Type: multipart/alternative;
	boundary="----=_NextPart_000_002A_01CD7310.DB9D1BF0"
Content-Language: en-gb

This is a multipart message in MIME format.

------=_NextPart_000_002A_01CD7310.DB9D1BF0
Content-Type: text/plain;
	charset="us-ascii"
Content-Transfer-Encoding: 7bit

I've just built and installed Xen 4.2-rc1 (72160635df2c) (from
/usr/local/xen-rc-4.2/), which seemed to go fine, along with all the
dependencies listed in the README (on a clean Debian Squeeze base). After a
few days of head-bashing I still can't configure GRUB to boot into the Xen
kernel.

 

Initially I was getting a whole load of Python errors, because Xen installed
into /usr/lib/python2.6/site-packages while sys.path only referenced
/usr/bin/python2.6/dist-packages. I worked around that by adding a
xenPaths.pth file pointing at site-packages.
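
For what it's worth, the .pth mechanism behind that workaround can be
demonstrated in isolation (the directory names below are temporary examples,
not this system's real layout, and the demo uses python3 rather than the
python2.6 involved here):

```shell
# site.addsitedir() is the same machinery Python applies to its site
# directories at startup: it reads each *.pth file in the directory and
# appends each listed path to sys.path if that path exists.
tmp=$(mktemp -d)
mkdir -p "$tmp/site" "$tmp/extra"
echo "$tmp/extra" > "$tmp/site/xenPaths.pth"
python3 -c "import site, sys; site.addsitedir('$tmp/site'); print('$tmp/extra' in sys.path)"
# prints True
```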

 

  xm list now produces the following (which I assume is because I'm not
running the Xen kernel):

 

   Error: Unable to connect to xend: No such file or directory. Is xend running?

 

I have the following entries in the boot directory:

 

    ls /boot

    config-2.6.32-5-amd64      initrd.img-xen-4.2        System.map-2.6.32-5-amd64  xen-4.2.gz  xen-syms-4.2.0-rc1
    grub                       initrd.img-xen-4.2.0      vmlinuz-2.6.32-5-amd64     xen-4.gz
    initrd.img-2.6.32-5-amd64  initrd.img-xen-4.2.0-rc1  xen-4.2.0-rc1.gz           xen.gz

 

ls /etc/grub.d shows:

 

    ls /etc/grub.d

    00_header  05_debian_theme  08_linux_xen  10_linux  30_os-prober  40_custom  41_custom  README

 

When I run update-grub as root I get the following output:

 

    root@mcd40:/etc/grub.d# update-grub

    Generating grub.cfg ...
    Found background image: /usr/share/images/desktop-base/desktop-grub.png
    dpkg: version '/boot/xen.gz' has bad syntax: invalid character in version number
    dpkg: version '/boot/xen.gz' has bad syntax: invalid character in version number
    dpkg: version '/boot/xen.gz' has bad syntax: invalid character in version number
    dpkg: version '/boot/xen.gz' has bad syntax: invalid character in version number
    Found linux image: /boot/vmlinuz-2.6.32-5-amd64
    Found initrd image: /boot/initrd.img-2.6.32-5-amd64
    done
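
The dpkg lines in that output are dpkg being handed a file path where it
expects a Debian version string ('/' is not a valid character in a version
number). A hedged reproduction follows; presumably this is what the
08_linux_xen helper trips over when unversioned names like xen.gz and
xen-4.gz sit in /boot, though the exact command update-grub runs is an
assumption here:

```shell
# Asking dpkg to compare a path as if it were a version reproduces the
# "bad syntax: invalid character in version number" complaint; '|| true'
# just keeps the demo from aborting on the nonzero exit status.
dpkg --compare-versions '/boot/xen.gz' gt '4.2' 2>&1 || true
```

If that is the cause, one commonly reported workaround is to move the
unversioned xen.gz / xen-4.gz symlinks out of /boot before rerunning
update-grub, so only the fully versioned images are considered.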

 

Someone else seems to have a similar problem here
(http://lists.xen.org/archives/html/xen-users/2011-12/msg00074.html),
but I don't quite understand what the proposed solution suggests.

 

If anyone could point me in the right direction that would be appreciated.

 

Marc

SP Group, Dept of Physics,

Cambridge University 


------=_NextPart_000_002A_01CD7310.DB9D1BF0--



--===============1916301179634493384==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline


--===============1916301179634493384==--



From xen-devel-bounces@lists.xen.org Sun Aug 05 14:03:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Aug 2012 14:03:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sy1Pj-0005ar-NT; Sun, 05 Aug 2012 14:02:39 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mpf30@cam.ac.uk>) id 1Sy0Ez-0005K4-A5
	for xen-devel@lists.xen.org; Sun, 05 Aug 2012 12:47:29 +0000
Received: from [85.158.143.99:53054] by server-3.bemta-4.messagelabs.com id
	20/9A-01511-06B6E105; Sun, 05 Aug 2012 12:47:28 +0000
X-Env-Sender: mpf30@cam.ac.uk
X-Msg-Ref: server-15.tower-216.messagelabs.com!1344170847!29722448!1
X-Originating-IP: [131.111.8.141]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMxLjExMS44LjE0MSA9PiAxMjc3Njc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 767 invoked from network); 5 Aug 2012 12:47:28 -0000
Received: from ppsw-41.csi.cam.ac.uk (HELO ppsw-41.csi.cam.ac.uk)
	(131.111.8.141) by server-15.tower-216.messagelabs.com with SMTP;
	5 Aug 2012 12:47:28 -0000
X-Cam-AntiVirus: no malware found
X-Cam-SpamDetails: not scanned
X-Cam-ScannerInfo: http://www.cam.ac.uk/cs/email/scanner/
Received: from mpf30.sp.phy.cam.ac.uk ([131.111.73.200]:50515 helo=FexBox)
	by ppsw-41.csi.cam.ac.uk (smtp.hermes.cam.ac.uk [131.111.8.156]:465)
	with esmtpsa (LOGIN:mpf30) (TLSv1:AES128-SHA:128)
	id 1Sy0Ex-00014l-S6 (Exim 4.72) for xen-devel@lists.xen.org
	(return-path <mpf30@cam.ac.uk>); Sun, 05 Aug 2012 13:47:27 +0100
From: "M. Fletcher" <mpf30@cam.ac.uk>
To: <xen-devel@lists.xen.org>
Date: Sun, 5 Aug 2012 13:47:30 +0100
Message-ID: <002901cd7308$79d55890$6d8009b0$@cam.ac.uk>
MIME-Version: 1.0
X-Mailer: Microsoft Outlook 14.0
Thread-Index: Ac1zBuiTgEeBFzEqQd+cDqeQ+ZjgWg==
Content-Language: en-gb
X-Mailman-Approved-At: Sun, 05 Aug 2012 14:02:38 +0000
Subject: [Xen-devel] Unable to configure GRUB to boot xen 4.2 Kernel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1916301179634493384=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multipart message in MIME format.

--===============1916301179634493384==
Content-Type: multipart/alternative;
	boundary="----=_NextPart_000_002A_01CD7310.DB9D1BF0"
Content-Language: en-gb

This is a multipart message in MIME format.

------=_NextPart_000_002A_01CD7310.DB9D1BF0
Content-Type: text/plain;
	charset="us-ascii"
Content-Transfer-Encoding: 7bit

I've just built and installed Xen 4.2-rc1 (72160635df2c) (from
/usr/local/xen-rc-4.2/), which seemed to go fine, along with all the
dependencies listed in the README (on a clean Debian Squeeze base). After a
few days of head-bashing I still can't configure GRUB to boot into the Xen
kernel.

 

Initially I was getting a whole load of Python errors, as Xen installed
into /usr/lib/python2.6/site-packages while sys.path only referenced
/usr/bin/python2.6/dist-packages. I corrected that by adding a xenPaths.pth
file pointing at site-packages.
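
For reference, a .pth file is simply a newline-separated list of extra
directories that Python's site module appends to sys.path when it scans a
site directory. A minimal illustration, using temporary directories as
stand-ins for the real dist-packages and site-packages paths:

```python
# Demonstration that a .pth file dropped into a scanned site directory
# adds the directories it lists to sys.path. The directory names here
# are temporary stand-ins, not the real Debian paths.
import os
import site
import sys
import tempfile

dist_dir = tempfile.mkdtemp()   # stand-in for .../dist-packages
site_dir = tempfile.mkdtemp()   # stand-in for .../site-packages

with open(os.path.join(dist_dir, "xenPaths.pth"), "w") as f:
    f.write(site_dir + "\n")

# site.addsitedir() is what the interpreter does for real site dirs:
# it reads every *.pth file and appends each listed path to sys.path.
site.addsitedir(dist_dir)
print(site_dir in sys.path)     # -> True
```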

 

  xm list now produces the following (which I assume is because I'm not
running the Xen kernel):

 

   Error: Unable to connect to xend: No such file or directory. Is xend
running?

 

I have the following entries in the boot directory:

 

    ls /boot

    config-2.6.32-5-amd64      initrd.img-xen-4.2        System.map-2.6.32-5-amd64  xen-4.2.gz  xen-syms-4.2.0-rc1
    grub                       initrd.img-xen-4.2.0      vmlinuz-2.6.32-5-amd64     xen-4.gz
    initrd.img-2.6.32-5-amd64  initrd.img-xen-4.2.0-rc1  xen-4.2.0-rc1.gz           xen.gz

 

ls /etc/grub.d shows:

 

    ls /etc/grub.d

    00_header  05_debian_theme  08_linux_xen  10_linux  30_os-prober
    40_custom  41_custom  README

 

When I run update-grub as root I get the following output:

 

    root@mcd40:/etc/grub.d# update-grub

    Generating grub.cfg ...

    Found background image: /usr/share/images/desktop-base/desktop-grub.png

    dpkg: version '/boot/xen.gz' has bad syntax: invalid character in version number

    dpkg: version '/boot/xen.gz' has bad syntax: invalid character in version number

    dpkg: version '/boot/xen.gz' has bad syntax: invalid character in version number

    dpkg: version '/boot/xen.gz' has bad syntax: invalid character in version number

    Found linux image: /boot/vmlinuz-2.6.32-5-amd64

    Found initrd image: /boot/initrd.img-2.6.32-5-amd64

    done

 

Someone else seems to have a similar problem
(http://lists.xen.org/archives/html/xen-users/2011-12/msg00074.html),
but I don't quite understand what the proposed solution suggests.
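
For reference, these dpkg errors typically mean the grub.d Xen stanza is
deriving a version string from each /boot hypervisor image name and handing
it to `dpkg --compare-versions` for sorting; the unversioned xen.gz symlink
produces a string dpkg rejects. A hypothetical sketch of that extraction
(the real 20_linux_xen script uses a sed expression; this just models the
effect):

```python
# Hypothetical model of the version extraction the grub.d Xen stanza
# performs on each /boot hypervisor image before sorting the candidates
# with `dpkg --compare-versions`.
def xen_version(image):
    ver = image[len("xen"):]        # drop the leading "xen"
    ver = ver.lstrip("-")           # drop the separating dash, if any
    if ver.endswith(".gz"):
        ver = ver[:-len(".gz")]     # drop the compression suffix
    return ver

for img in ["xen-4.2.0-rc1.gz", "xen-4.gz", "xen.gz"]:
    print(f"{img} -> version {xen_version(img)!r}")
# xen.gz yields an empty version string, which dpkg then rejects with
# "bad syntax: invalid character in version number".
```

A commonly suggested workaround is to move the unversioned symlinks
(xen.gz, xen-4.gz) out of /boot before running update-grub, so that only
the fully versioned xen-4.2.0-rc1.gz is considered.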

 

If anyone could point me in the right direction that would be appreciated.

 

Marc

SP Group, Dept of Physics,

Cambridge University 


------=_NextPart_000_002A_01CD7310.DB9D1BF0--



--===============1916301179634493384==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1916301179634493384==--



From xen-devel-bounces@lists.xen.org Sun Aug 05 14:41:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Aug 2012 14:41:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sy20r-0005q1-RO; Sun, 05 Aug 2012 14:41:01 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <greg@wind.enjellic.com>) id 1Sy20p-0005pw-IW
	for xen-devel@lists.xen.org; Sun, 05 Aug 2012 14:40:59 +0000
Received: from [85.158.138.51:4789] by server-4.bemta-3.messagelabs.com id
	BC/67-29069-AF58E105; Sun, 05 Aug 2012 14:40:58 +0000
X-Env-Sender: greg@wind.enjellic.com
X-Msg-Ref: server-11.tower-174.messagelabs.com!1344177656!30475874!1
X-Originating-IP: [76.10.64.91]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15604 invoked from network); 5 Aug 2012 14:40:57 -0000
Received: from wind.enjellic.com (HELO wind.enjellic.com) (76.10.64.91)
	by server-11.tower-174.messagelabs.com with SMTP;
	5 Aug 2012 14:40:57 -0000
Received: from wind.enjellic.com (localhost [127.0.0.1])
	by wind.enjellic.com (8.14.3/8.14.3) with ESMTP id q75Eeo9I008502;
	Sun, 5 Aug 2012 09:40:50 -0500
Received: (from greg@localhost)
	by wind.enjellic.com (8.14.3/8.14.3/Submit) id q75Een7F008501;
	Sun, 5 Aug 2012 09:40:49 -0500
Date: Sun, 5 Aug 2012 09:40:49 -0500
From: "Dr. Greg Wettstein" <greg@wind.enjellic.com>
Message-Id: <201208051440.q75Een7F008501@wind.enjellic.com>
X-Mailer: Mail User's Shell (7.2.6-ESD1.0 03/31/2012)
To: xen-devel@lists.xen.org
X-Greylist: Sender passed SPF test, not delayed by milter-greylist-4.2.3
	(wind.enjellic.com [0.0.0.0]);
	Sun, 05 Aug 2012 09:40:50 -0500 (CDT)
Cc: ian.jackson@eu.citrix.com, keir@xen.org
Subject: [Xen-devel] Xen 4.1.x backport request
	<e4781aedf817c5ab36f6f3077e44c43c566a2812>
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: greg@enjellic.com
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Backport of the following patch from development:

# User Ian Campbell <[hidden email]>
# Date 1309968705 -3600
# Node ID e4781aedf817c5ab36f6f3077e44c43c566a2812
# Parent  05700ef33648e0777fb48ba965bf723264d56a31
libxl: attempt to cleanup tapdisk processes on disk backend destroy.

This patch properly terminates the tapdisk2 process(es) started
to service a virtual block device.

Without this patch 4.1.2 leaves a tapdisk2 process running for each
tap device created.

Signed-off-by: Greg Wettstein <greg@enjellic.com>

diff -r fa34499e8f6c tools/blktap2/control/tap-ctl-list.c
--- a/tools/blktap2/control/tap-ctl-list.c	Mon Jul 30 13:38:58 2012 +0100
+++ b/tools/blktap2/control/tap-ctl-list.c	Sun Aug 05 09:26:56 2012 -0500
@@ -506,17 +506,15 @@ out:
 }
 
 int
-tap_ctl_find_minor(const char *type, const char *path)
+tap_ctl_find(const char *type, const char *path, tap_list_t *tap)
 {
 	tap_list_t **list, **_entry;
-	int minor, err;
+	int ret = -ENOENT, err;
 
 	err = tap_ctl_list(&list);
 	if (err)
 		return err;
 
-	minor = -1;
-
 	for (_entry = list; *_entry != NULL; ++_entry) {
 		tap_list_t *entry  = *_entry;
 
@@ -526,11 +524,13 @@ tap_ctl_find_minor(const char *type, con
 		if (path && (!entry->path || strcmp(entry->path, path)))
 			continue;
 
-		minor = entry->minor;
+		*tap = *entry;
+		tap->type = tap->path = NULL;
+		ret = 0;
 		break;
 	}
 
 	tap_ctl_free_list(list);
 
-	return minor >= 0 ? minor : -ENOENT;
+	return ret;
 }
diff -r fa34499e8f6c tools/blktap2/control/tap-ctl.h
--- a/tools/blktap2/control/tap-ctl.h	Mon Jul 30 13:38:58 2012 +0100
+++ b/tools/blktap2/control/tap-ctl.h	Sun Aug 05 09:26:56 2012 -0500
@@ -76,7 +76,7 @@ int tap_ctl_get_driver_id(const char *ha
 
 int tap_ctl_list(tap_list_t ***list);
 void tap_ctl_free_list(tap_list_t **list);
-int tap_ctl_find_minor(const char *type, const char *path);
+int tap_ctl_find(const char *type, const char *path, tap_list_t *tap);
 
 int tap_ctl_allocate(int *minor, char **devname);
 int tap_ctl_free(const int minor);
diff -r fa34499e8f6c tools/libxl/libxl_blktap2.c
--- a/tools/libxl/libxl_blktap2.c	Mon Jul 30 13:38:58 2012 +0100
+++ b/tools/libxl/libxl_blktap2.c	Sun Aug 05 09:26:56 2012 -0500
@@ -18,6 +18,8 @@
 
 #include "tap-ctl.h"
 
+#include <string.h>
+
 int libxl__blktap_enabled(libxl__gc *gc)
 {
     const char *msg;
@@ -30,12 +32,13 @@ const char *libxl__blktap_devpath(libxl_
 {
     const char *type;
     char *params, *devname = NULL;
-    int minor, err;
+    tap_list_t tap;
+    int err;
 
     type = libxl__device_disk_string_of_format(format);
-    minor = tap_ctl_find_minor(type, disk);
-    if (minor >= 0) {
-        devname = libxl__sprintf(gc, "/dev/xen/blktap-2/tapdev%d", minor);
+    err = tap_ctl_find(type, disk, &tap);
+    if (err == 0) {
+        devname = libxl__sprintf(gc, "/dev/xen/blktap-2/tapdev%d", tap.minor);
         if (devname)
             return devname;
     }
@@ -49,3 +52,28 @@ const char *libxl__blktap_devpath(libxl_
 
     return NULL;
 }
+
+
+void libxl__device_destroy_tapdisk(libxl__gc *gc, char *be_path)
+{
+    char *path, *params, *type, *disk;
+    int err;
+    tap_list_t tap;
+
+    path = libxl__sprintf(gc, "%s/tapdisk-params", be_path);
+    if (!path) return;
+
+    params = libxl__xs_read(gc, XBT_NULL, path);
+    if (!params) return;
+
+    type = params;
+    disk = strchr(params, ':');
+    if (!disk) return;
+
+    *disk++ = '\0';
+
+    err = tap_ctl_find(type, disk, &tap);
+    if (err < 0) return;
+
+    tap_ctl_destroy(tap.id, tap.minor);
+}
diff -r fa34499e8f6c tools/libxl/libxl_device.c
--- a/tools/libxl/libxl_device.c	Mon Jul 30 13:38:58 2012 +0100
+++ b/tools/libxl/libxl_device.c	Sun Aug 05 09:26:56 2012 -0500
@@ -250,6 +250,7 @@ int libxl__device_destroy(libxl_ctx *ctx
     if (!state)
         goto out;
     if (atoi(state) != 4) {
+        libxl__device_destroy_tapdisk(&gc, be_path);
         xs_rm(ctx->xsh, XBT_NULL, be_path);
         goto out;
     }
@@ -368,6 +369,7 @@ int libxl__devices_destroy(libxl_ctx *ct
             }
         }
     }
+    libxl__device_destroy_tapdisk(&gc, be_path);
 out:
     libxl__free_all(&gc);
     return 0;
diff -r fa34499e8f6c tools/libxl/libxl_internal.h
--- a/tools/libxl/libxl_internal.h	Mon Jul 30 13:38:58 2012 +0100
+++ b/tools/libxl/libxl_internal.h	Sun Aug 05 09:26:56 2012 -0500
@@ -314,6 +314,12 @@ _hidden const char *libxl__blktap_devpat
                                  const char *disk,
                                  libxl_disk_format format);
 
+/* libxl__device_destroy_tapdisk:
+ *   Destroys any tapdisk process associated with the backend represented
+ *   by be_path.
+ */
+_hidden void libxl__device_destroy_tapdisk(libxl__gc *gc, char *be_path);
+
 _hidden char *libxl__uuid2string(libxl__gc *gc, const libxl_uuid uuid);
 
 struct libxl__xen_console_reader {
diff -r fa34499e8f6c tools/libxl/libxl_noblktap2.c
--- a/tools/libxl/libxl_noblktap2.c	Mon Jul 30 13:38:58 2012 +0100
+++ b/tools/libxl/libxl_noblktap2.c	Sun Aug 05 09:26:56 2012 -0500
@@ -27,3 +27,7 @@ const char *libxl__blktap_devpath(libxl_
 {
     return NULL;
 }
+
+void libxl__device_destroy_tapdisk(libxl__gc *gc, char *be_path)
+{
+}

As always,
Dr. G.W. Wettstein, Ph.D.   Enjellic Systems Development, LLC.
4206 N. 19th Ave.           Specializing in information infra-structure
Fargo, ND  58102            development.
PH: 701-281-1686
FAX: 701-281-3949           EMAIL: greg@enjellic.com
------------------------------------------------------------------------------
"My thoughts on the composition and effectiveness of the advisory
 committee?

 I think they are destined to accomplish about the same thing as what
 you would get from locking 9 chimpanzees in a room with an armed
 thermonuclear weapon and a can opener with orders to disarm it."
                                -- Dr. Greg Wettstein
                                   Resurrection

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Aug 05 14:49:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Aug 2012 14:49:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sy28e-0005z3-Ph; Sun, 05 Aug 2012 14:49:04 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <greg@wind.enjellic.com>) id 1Sy28d-0005yx-KA
	for xen-devel@lists.xen.org; Sun, 05 Aug 2012 14:49:03 +0000
Received: from [85.158.143.35:60741] by server-2.bemta-4.messagelabs.com id
	6C/CA-17938-ED78E105; Sun, 05 Aug 2012 14:49:02 +0000
X-Env-Sender: greg@wind.enjellic.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1344178140!4315429!1
X-Originating-IP: [76.10.64.91]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12737 invoked from network); 5 Aug 2012 14:49:00 -0000
Received: from wind.enjellic.com (HELO wind.enjellic.com) (76.10.64.91)
	by server-5.tower-21.messagelabs.com with SMTP;
	5 Aug 2012 14:49:00 -0000
Received: from wind.enjellic.com (localhost [127.0.0.1])
	by wind.enjellic.com (8.14.3/8.14.3) with ESMTP id q75Emwvb008574;
	Sun, 5 Aug 2012 09:48:58 -0500
Received: (from greg@localhost)
	by wind.enjellic.com (8.14.3/8.14.3/Submit) id q75EmvG5008573;
	Sun, 5 Aug 2012 09:48:57 -0500
Date: Sun, 5 Aug 2012 09:48:57 -0500
From: "Dr. Greg Wettstein" <greg@wind.enjellic.com>
Message-Id: <201208051448.q75EmvG5008573@wind.enjellic.com>
X-Mailer: Mail User's Shell (7.2.6-ESD1.0 03/31/2012)
To: xen-devel@lists.xen.org
X-Greylist: Sender passed SPF test, not delayed by milter-greylist-4.2.3
	(wind.enjellic.com [0.0.0.0]);
	Sun, 05 Aug 2012 09:48:58 -0500 (CDT)
Cc: ian.jackson@eu.citrix.com, keir@xen.org
Subject: [Xen-devel] [Patch 1 of 1] [xen-4.1.2] libxl: Seal tapdisk minor
	leak.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: greg@enjellic.com
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

libxl: Seal tapdisk minor leak.

This patch needs to be applied on top of the following patch
to establish correct cleanup of a blktap2-based virtual disk:

libxl: attempt to cleanup tapdisk processes on disk backend destroy.

Together, the two patches implement correct cleanup of blktap devices
in Xen 4.1.2.

This patch releases the backend device before requesting destruction
of the userspace component of the tapdisk device.

Without this patch the kernel xen-blkback driver deadlocks against the
tapdisk userspace control plane until the IPC channel is terminated by
the timeout on the select() call.  This causes a noticeable delay in
the termination of the guest and orphans the blktap minor number which
had been allocated for the device.

Signed-off-by: Greg Wettstein <greg@enjellic.com>

diff -r b2b7a7a49af5 tools/libxl/libxl_blktap2.c
--- a/tools/libxl/libxl_blktap2.c	Sat Aug 04 16:17:08 2012 -0500
+++ b/tools/libxl/libxl_blktap2.c	Sun Aug 05 09:22:35 2012 -0500
@@ -59,6 +59,7 @@ void libxl__device_destroy_tapdisk(libxl
     char *path, *params, *type, *disk;
     int err;
     tap_list_t tap;
+    libxl_ctx *ctx = libxl__gc_owner(gc);
 
     path = libxl__sprintf(gc, "%s/tapdisk-params", be_path);
     if (!path) return;
@@ -75,5 +76,11 @@ void libxl__device_destroy_tapdisk(libxl
     err = tap_ctl_find(type, disk, &tap);
     if (err < 0) return;
 
+    /*
+     * Remove the instance of the backend device to avoid a deadlock with the
+     * removal of the tap device.
+     */
+    xs_rm(ctx->xsh, XBT_NULL, be_path);
+
     tap_ctl_destroy(tap.id, tap.minor);
 }
diff -r b2b7a7a49af5 tools/libxl/libxl_device.c
--- a/tools/libxl/libxl_device.c	Sat Aug 04 16:17:08 2012 -0500
+++ b/tools/libxl/libxl_device.c	Sun Aug 05 09:22:35 2012 -0500
@@ -250,8 +250,7 @@ int libxl__device_destroy(libxl_ctx *ctx
     if (!state)
         goto out;
     if (atoi(state) != 4) {
-        libxl__device_destroy_tapdisk(&gc, be_path);
-        xs_rm(ctx->xsh, XBT_NULL, be_path);
+        libxl__device_destroy_tapdisk(&gc, be_path);
         goto out;
     }
 
As always,
Dr. G.W. Wettstein, Ph.D.   Enjellic Systems Development, LLC.
4206 N. 19th Ave.           Specializing in information infra-structure
Fargo, ND  58102            development.
PH: 701-281-1686
FAX: 701-281-3949           EMAIL: greg@enjellic.com
------------------------------------------------------------------------------
"More people are killed every year by pigs than by sharks, which shows
 you how good we are at evaluating risk."
                                -- Bruce Schneier
                                   Beyond Fear

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Aug 05 15:25:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Aug 2012 15:25:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sy2h5-0006Ru-BR; Sun, 05 Aug 2012 15:24:39 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Sy2h3-0006Rn-H1
	for xen-devel@lists.xensource.com; Sun, 05 Aug 2012 15:24:37 +0000
Received: from [85.158.139.83:47611] by server-12.bemta-5.messagelabs.com id
	CA/61-26304-4309E105; Sun, 05 Aug 2012 15:24:36 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-12.tower-182.messagelabs.com!1344180275!30354646!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgxNTc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 963 invoked from network); 5 Aug 2012 15:24:36 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-12.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Aug 2012 15:24:36 -0000
X-IronPort-AV: E=Sophos;i="4.77,715,1336348800"; d="scan'208";a="13855090"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	05 Aug 2012 15:24:35 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Sun, 5 Aug 2012 16:24:35 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Sy2h1-0004VU-3A;
	Sun, 05 Aug 2012 15:24:35 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Sy2h0-0005Ck-O4;
	Sun, 05 Aug 2012 16:24:34 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13553-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 5 Aug 2012 16:24:34 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13553: trouble: blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13553 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13553/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 13536
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 13536
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 13536
 build-amd64                   2 host-install(2)         broken REGR. vs. 13536
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 13536
 build-i386                    2 host-install(2)         broken REGR. vs. 13536

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  6ccad16b50b6
baseline version:
 xen                  3d17148e465c

------------------------------------------------------------
People who touched revisions under test:
  Christoph Egger <Christoph.Egger@amd.com>
  Dario Faggioli <dario.faggioli@citrix.com>
  David Vrabel <david.vrabel@citrix.com>
  Fabio Fantoni <fabio.fantoni@heliman.it>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Campbell <ian.campbell@eu.citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Olaf Hering <olaf@aepfle.de>
  Roger Pau Monne <roger.pau@citrix.com>
  Santosh Jodh <santosh.jodh@citrix.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 653 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Aug 05 19:46:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Aug 2012 19:46:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sy6mL-0007yb-Hd; Sun, 05 Aug 2012 19:46:21 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Sy6mJ-0007yT-Uv
	for xen-devel@lists.xensource.com; Sun, 05 Aug 2012 19:46:20 +0000
Received: from [85.158.139.83:22299] by server-4.bemta-5.messagelabs.com id
	2D/3B-27831-B8DCE105; Sun, 05 Aug 2012 19:46:19 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-9.tower-182.messagelabs.com!1344195977!29656718!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgxNTc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20560 invoked from network); 5 Aug 2012 19:46:18 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-9.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Aug 2012 19:46:18 -0000
X-IronPort-AV: E=Sophos;i="4.77,715,1336348800"; d="scan'208";a="13856093"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	05 Aug 2012 19:46:13 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Sun, 5 Aug 2012 20:46:13 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Sy6mC-0005zb-UL;
	Sun, 05 Aug 2012 19:46:12 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Sy6mC-00076i-Rx;
	Sun, 05 Aug 2012 20:46:12 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13554-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 5 Aug 2012 20:46:12 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 13554: trouble: blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13554 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13554/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386                    2 host-install(2)         broken REGR. vs. 13525
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 13525
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 13525
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 13525
 build-amd64                   2 host-install(2)         broken REGR. vs. 13525
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 13525

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  f8f8912b3de0
baseline version:
 xen                  fa34499e8f6c

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Joe Jin <joe.jin@oracle.com>
  Keir Fraser <keir@xen.org>
  Santosh Jodh <santosh.jodh@citrix.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23331:f8f8912b3de0
tag:         tip
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Fri Aug 03 10:43:24 2012 +0100
    
    nestedhvm: fix nested page fault build error on 32-bit
    
        cc1: warnings being treated as errors
        hvm.c: In function 'hvm_hap_nested_page_fault':
        hvm.c:1282: error: passing argument 2 of
        'nestedhvm_hap_nested_page_fault' from incompatible pointer type
        /local/scratch/ianc/devel/xen-unstable.hg/xen/include/asm/hvm/nestedhvm.h:55:
        note: expected 'paddr_t *' but argument is of type 'long unsigned
        int *'
    
    hvm_hap_nested_page_fault takes an unsigned long gpa and passes &gpa
    to nestedhvm_hap_nested_page_fault which takes a paddr_t *. Since both
    of the callers of hvm_hap_nested_page_fault (svm_do_nested_pgfault and
    ept_handle_violation) actually have the gpa which they pass to
    hvm_hap_nested_page_fault as a paddr_t I think it makes sense to
    change the argument to hvm_hap_nested_page_fault.
    
    The other user of gpa in hvm_hap_nested_page_fault is a call to
    p2m_mem_access_check, which currently also takes a paddr_t gpa but I
    think a paddr_t is appropriate there too.
    
    Jan points out that this is also an issue for >4GB guests on the 32
    bit hypervisor.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Committed-by: Ian Campbell <ian.campbell@citrix.com>
    xen-unstable changeset:   25724:612898732e66
    xen-unstable date:        Fri Aug 03 09:54:17 2012 +0100
    Backported-by: Keir Fraser <keir@xen.org>
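    [Editorial note: a minimal C sketch of the hazard the commit describes.
    The function and variable names below are illustrative stand-ins, not
    the actual Xen code. On 32-bit x86 with PAE, paddr_t is 64-bit while
    unsigned long is 32-bit, so passing &gpa as unsigned long * where
    paddr_t * is expected is both the build error quoted above and a
    truncation bug for guests with memory above 4GB.]

    ```c
    #include <assert.h>
    #include <stdint.h>

    /* Stand-in for Xen's paddr_t: physical addresses can exceed 32 bits
     * even when the hypervisor itself is 32-bit (PAE). */
    typedef uint64_t paddr_t;

    /* Hypothetical callee: expects a pointer to a full-width physical
     * address, as nestedhvm_hap_nested_page_fault does. */
    static int nested_fault(paddr_t *gpa)
    {
        return *gpa > 0xFFFFFFFFULL;  /* fault address above 4GB? */
    }

    int main(void)
    {
        /* The fix: keep gpa as paddr_t end-to-end, so the high bits
         * survive.  Had gpa been a 32-bit unsigned long, the value
         * below would have been truncated to 0. */
        paddr_t gpa = 0x100000000ULL;  /* 4GB, needs 33 bits */
        assert(nested_fault(&gpa) == 1);
        return 0;
    }
    ```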
    
    
changeset:   23330:5a65d6a1aab7
user:        Santosh Jodh <santosh.jodh@citrix.com>
date:        Fri Aug 03 10:39:13 2012 +0100
    
    Intel VT-d: Dump IOMMU supported page sizes
    
    Signed-off-by: Santosh Jodh <santosh.jodh@citrix.com>
    xen-unstable changeset:   25725:9ad379939b78
    xen-unstable date:        Fri Aug 03 10:38:04 2012 +0100
    
    
changeset:   23329:fa34499e8f6c
user:        Jan Beulich <jbeulich@suse.com>
date:        Mon Jul 30 13:38:58 2012 +0100
    
    x86: fix off-by-one in nr_irqs_gsi calculation
    
    highest_gsi() returns the last valid GSI, not a count.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Joe Jin <joe.jin@oracle.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset:   25688:e6266fc76d08
    xen-unstable date:        Fri Jul 27 12:22:13 2012 +0200
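    [Editorial note: a minimal C sketch of the off-by-one this commit
    fixes. highest_gsi() here is a stand-in returning a fixed value for
    illustration; the real function walks the IOAPICs. The point is that
    it returns the largest valid GSI number, so deriving a count from it
    requires adding one.]

    ```c
    #include <assert.h>

    /* Stand-in: on a single 24-pin IOAPIC the valid GSIs are 0..23,
     * so the highest valid GSI is 23 -- an index, not a count. */
    static unsigned int highest_gsi(void) { return 23; }

    int main(void)
    {
        /* Buggy: using the last valid GSI as the count loses one IRQ. */
        unsigned int nr_irqs_gsi_buggy = highest_gsi();
        /* Fixed: a count is the last valid index plus one. */
        unsigned int nr_irqs_gsi = highest_gsi() + 1;

        assert(nr_irqs_gsi_buggy == 23);
        assert(nr_irqs_gsi == 24);
        return 0;
    }
    ```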
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 00:09:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 00:09:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyAs3-0000wA-QK; Mon, 06 Aug 2012 00:08:31 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SyAs1-0000w5-Sb
	for xen-devel@lists.xensource.com; Mon, 06 Aug 2012 00:08:30 +0000
Received: from [85.158.139.83:19357] by server-11.bemta-5.messagelabs.com id
	77/C0-20400-DFA0F105; Mon, 06 Aug 2012 00:08:29 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-7.tower-182.messagelabs.com!1344211708!24685554!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgxNTc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18998 invoked from network); 6 Aug 2012 00:08:28 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-7.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 00:08:28 -0000
X-IronPort-AV: E=Sophos;i="4.77,717,1336348800"; d="scan'208";a="13856684"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 00:08:27 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Mon, 6 Aug 2012 01:08:27 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1SyArz-0007Ns-C1;
	Mon, 06 Aug 2012 00:08:27 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1SyArz-0001lr-Ar;
	Mon, 06 Aug 2012 01:08:27 +0100
To: xen-devel@lists.xensource.com
Message-ID: <osstest-13555-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 6 Aug 2012 01:08:27 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13555: trouble: blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13555 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13555/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 13536
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 13536
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 13536
 build-amd64                   2 host-install(2)         broken REGR. vs. 13536
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 13536
 build-i386                    2 host-install(2)         broken REGR. vs. 13536

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  6ccad16b50b6
baseline version:
 xen                  3d17148e465c

------------------------------------------------------------
People who touched revisions under test:
  Christoph Egger <Christoph.Egger@amd.com>
  Dario Faggioli <dario.faggioli@citrix.com>
  David Vrabel <david.vrabel@citrix.com>
  Fabio Fantoni <fabio.fantoni@heliman.it>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Campbell <ian.campbell@eu.citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Olaf Hering <olaf@aepfle.de>
  Roger Pau Monne <roger.pau@citrix.com>
  Santosh Jodh <santosh.jodh@citrix.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 653 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 00:09:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 00:09:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyAs3-0000wA-QK; Mon, 06 Aug 2012 00:08:31 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SyAs1-0000w5-Sb
	for xen-devel@lists.xensource.com; Mon, 06 Aug 2012 00:08:30 +0000
Received: from [85.158.139.83:19357] by server-11.bemta-5.messagelabs.com id
	77/C0-20400-DFA0F105; Mon, 06 Aug 2012 00:08:29 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-7.tower-182.messagelabs.com!1344211708!24685554!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgxNTc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18998 invoked from network); 6 Aug 2012 00:08:28 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-7.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 00:08:28 -0000
X-IronPort-AV: E=Sophos;i="4.77,717,1336348800"; d="scan'208";a="13856684"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 00:08:27 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Mon, 6 Aug 2012 01:08:27 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1SyArz-0007Ns-C1;
	Mon, 06 Aug 2012 00:08:27 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1SyArz-0001lr-Ar;
	Mon, 06 Aug 2012 01:08:27 +0100
To: xen-devel@lists.xensource.com
Message-ID: <osstest-13555-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 6 Aug 2012 01:08:27 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13555: trouble: blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13555 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13555/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 13536
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 13536
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 13536
 build-amd64                   2 host-install(2)         broken REGR. vs. 13536
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 13536
 build-i386                    2 host-install(2)         broken REGR. vs. 13536

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  6ccad16b50b6
baseline version:
 xen                  3d17148e465c

------------------------------------------------------------
People who touched revisions under test:
  Christoph Egger <Christoph.Egger@amd.com>
  Dario Faggioli <dario.faggioli@citrix.com>
  David Vrabel <david.vrabel@citrix.com>
  Fabio Fantoni <fabio.fantoni@heliman.it>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Campbell <ian.campbell@eu.citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Olaf Hering <olaf@aepfle.de>
  Roger Pau Monne <roger.pau@citrix.com>
  Santosh Jodh <santosh.jodh@citrix.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 653 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 03:56:22 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 03:56:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyEPz-0006Lt-6E; Mon, 06 Aug 2012 03:55:47 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andy@strugglers.net>) id 1SyEPx-0006Lo-Dl
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 03:55:45 +0000
Received: from [85.158.143.99:37160] by server-3.bemta-4.messagelabs.com id
	D1/B1-01511-0404F105; Mon, 06 Aug 2012 03:55:44 +0000
X-Env-Sender: andy@strugglers.net
X-Msg-Ref: server-14.tower-216.messagelabs.com!1344225344!20754065!1
X-Originating-IP: [85.119.80.223]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16451 invoked from network); 6 Aug 2012 03:55:44 -0000
Received: from bitfolk.com (HELO mail.bitfolk.com) (85.119.80.223)
	by server-14.tower-216.messagelabs.com with AES256-SHA encrypted SMTP;
	6 Aug 2012 03:55:44 -0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=bitfolk.com;
	s=alpha; 
	h=Subject:In-Reply-To:Content-Type:MIME-Version:References:Message-ID:Cc:To:From:Date;
	bh=hwEECg8S7no5c2SVzDaIY/br6p+a0hWwqC0Bded8LnE=; 
	b=2i6Zjd5giG2NfIEFTM+iEqy2KD77Edkm0b30SE4VJ3BDaXyLrr4mYe9kZWEAJqsysqfbSAoIA5biLllsQLrw7ZvqbM+dXQpzLEYE3X+06ozlT5vA3ulwyahONiY7pJua;
Received: from andy by mail.bitfolk.com with local (Exim 4.72)
	(envelope-from <andy@strugglers.net>)
	id 1SyEPs-0008V5-Gy; Mon, 06 Aug 2012 03:55:41 +0000
Date: Mon, 6 Aug 2012 03:55:40 +0000
From: Andy Smith <andy@strugglers.net>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20120806035540.GC11695@bitfolk.com>
References: <20120727161545.GR11695@bitfolk.com>
	<50165AAB0200007800091380@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50165AAB0200007800091380@nat28.tlf.novell.com>
OpenPGP: id=BF15490B; url=http://strugglers.net/~andy/pubkey.asc
X-URL: http://strugglers.net/wiki/User:Andy
User-Agent: Mutt/1.5.20 (2009-06-14)
X-Virus-Scanner: Scanned by ClamAV on mail.bitfolk.com at Mon,
	06 Aug 2012 03:55:40 +0000
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: andy@strugglers.net
X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on
	spamd2.lon.bitfolk.com
X-Spam-Level: 
X-Spam-ASN: 
X-Spam-Status: No, score=-0.0 required=5.0 tests=NO_RELAYS shortcircuit=no
	autolearn=disabled version=3.3.1
X-Spam-Report: * -0.0 NO_RELAYS Informational: message was not relayed via SMTP
X-SA-Exim-Version: 4.2.1 (built Mon, 22 Mar 2010 06:51:10 +0000)
X-SA-Exim-Scanned: Yes (on mail.bitfolk.com)
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Failure to boot,
 Debian squeeze with 4.0.1 hypervisor, timer problems?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Jan,

Thanks for getting back to me.

On Mon, Jul 30, 2012 at 08:58:03AM +0100, Jan Beulich wrote:
> >>> On 27.07.12 at 18:15, Andy Smith <andy@strugglers.net> wrote:
> > I upgraded it to Debian squeeze, with xen-hypervisor-4.0-amd64
> > (4.0.1-4) / linux-image-2.6.32-5-xen-amd64 (2.6.32-45) and now it
> > does not complete a boot of the dom0 kernel.
> 
> The minimum would be to check against 4.0.3 (ideally tip of
> 4.0-testing), to exclude a problem already fixed even on that
> branch.

I checked out http://xenbits.xen.org/hg/xen-4.0-testing.hg and
compiled the hypervisor. It seems to boot okay, though I have not
yet done any exhaustive stress tests.

"xm info" now says:

    host                   : sol
    release                : 2.6.32-5-xen-amd64
    version                : #1 SMP Sun May 6 08:57:29 UTC 2012
    machine                : x86_64
    nr_cpus                : 4
    nr_nodes               : 1
    cores_per_socket       : 4
    threads_per_core       : 1
    cpu_mhz                : 2133
    hw_caps                : bfebfbff:28100800:00000000:00001b40:009ce3bd:00000000:00000001:00000000
    virt_caps              : hvm
    total_memory           : 24567
    free_memory            : 23280
    node_to_cpu            : node0:0-3
    node_to_memory         : node0:23280
    node_to_dma32_mem      : node0:2996
    max_node_id            : 0
    xen_major              : 4
    xen_minor              : 0
    xen_extra              : .4-rc4-pre
    xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64 
    xen_scheduler          : credit
    xen_pagesize           : 4096
    platform_params        : virt_start=0xffff800000000000
    xen_changeset          : Mon Jul 30 13:39:47 2012 +0100 21607:6d7ae840463c
    xen_commandline        : placeholder dom0_mem=1024M loglvl=all guest_loglvl=all com1=9600,8n1 console=com1,vga serial_tx_buffer=64k
    cc_compiler            : gcc version 4.4.5 (Debian 4.4.5-8) 
    cc_compile_by          : andy
    cc_compile_domain      : strugglers.net
    cc_compile_date        : Mon Aug  6 02:40:47 UTC 2012
    xend_config_format     : 4

(A keen observer will note that my last set of logs showed 96G RAM,
whereas it now has only 24G. I had to take those 6x16G DIMMs out and
use them for something else in the meantime, and I replaced them
with 6x4G ones. I did confirm that the problem was still evident
before carrying on.)

Is there any point at this stage in trying to find out which commit
appears to have fixed the problem I was seeing? If so, any pointers
on the best ways to do that would be appreciated as I'm not that
familiar with hg (or git).
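[Archive editor's note: the bisection workflow Andy asks about can be
sketched as below. This is a hypothetical, self-contained demonstration
using git (Andy mentions both hg and git); the repository, commit
messages, and the "state" file are invented for illustration and are not
from this thread. With Mercurial the analogous commands are
"hg bisect --bad / --good". To find the commit that *fixed* a bug, the
labels are inverted: a "fixed" revision is marked bisect-bad, so the
"first bad commit" git reports is the first fixed one.]

```shell
set -e
workdir=$(mktemp -d)
cd "$workdir"
git init -q demo
cd demo
git config user.email demo@example.com
git config user.name demo
# Simulate history: commits 1..10, with the hypothetical fix landing at 7.
for i in 1 2 3 4 5 6 7 8 9 10; do
  if [ "$i" -ge 7 ]; then echo "fixed $i" > state; else echo "broken $i" > state; fi
  git add state
  git commit -qm "commit $i"
done
root=$(git rev-list --max-parents=0 HEAD)
# Inverted labels: "fixed" counts as bisect-bad, "broken" as bisect-good,
# so bisect converges on the first commit where the bug is gone.
git bisect start HEAD "$root" >/dev/null
first=""
while [ -z "$first" ]; do
  if grep -q fixed state; then res=bad; else res=good; fi
  out=$(git bisect "$res")
  first=$(printf '%s\n' "$out" | sed -n 's/^\([0-9a-f]\{40\}\) is the first bad commit$/\1/p')
done
git log -1 --format=%s "$first"   # subject of the changeset that introduced the fix
```

On a real tree the manual test in the loop body would be replaced by
building the hypervisor and booting it, marking each revision by hand.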

I am not ready to use 4.1 in production at the moment so I have to
stick with 4.0.x. I'm assuming that if I wish to continue using
4.0.x on this host I would be better off compiling the released
4.0.3 and using that (if it works for me), as opposed to the tip of
4.0-testing that I have here?

Thanks,
Andy

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 04:32:07 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 04:32:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyEyc-0006cH-78; Mon, 06 Aug 2012 04:31:34 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SyEya-0006cC-4C
	for xen-devel@lists.xensource.com; Mon, 06 Aug 2012 04:31:32 +0000
Received: from [85.158.139.83:52453] by server-1.bemta-5.messagelabs.com id
	EC/50-29759-3A84F105; Mon, 06 Aug 2012 04:31:31 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-11.tower-182.messagelabs.com!1344227490!23172258!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgxNjE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2116 invoked from network); 6 Aug 2012 04:31:30 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-11.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 04:31:30 -0000
X-IronPort-AV: E=Sophos;i="4.77,717,1336348800"; d="scan'208";a="13857422"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 04:31:30 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Mon, 6 Aug 2012 05:31:29 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1SyEyX-0000MC-Ms;
	Mon, 06 Aug 2012 04:31:29 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1SyEyX-0003d1-Kb;
	Mon, 06 Aug 2012 05:31:29 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13556-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 6 Aug 2012 05:31:29 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 13556: trouble: blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13556 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13556/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386                    2 host-install(2)         broken REGR. vs. 13525
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 13525
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 13525
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 13525
 build-amd64                   2 host-install(2)         broken REGR. vs. 13525
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 13525

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  f8f8912b3de0
baseline version:
 xen                  fa34499e8f6c

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Joe Jin <joe.jin@oracle.com>
  Keir Fraser <keir@xen.org>
  Santosh Jodh <santosh.jodh@citrix.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23331:f8f8912b3de0
tag:         tip
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Fri Aug 03 10:43:24 2012 +0100
    
    nestedhvm: fix nested page fault build error on 32-bit
    
        cc1: warnings being treated as errors
        hvm.c: In function 'hvm_hap_nested_page_fault':
        hvm.c:1282: error: passing argument 2 of
        'nestedhvm_hap_nested_page_fault' from incompatible pointer type
        /local/scratch/ianc/devel/xen-unstable.hg/xen/include/asm/hvm/nestedhvm.h:55:
        note: expected 'paddr_t *' but argument is of type 'long unsigned
        int *'
    
    hvm_hap_nested_page_fault takes an unsigned long gpa and passes &gpa
    to nestedhvm_hap_nested_page_fault which takes a paddr_t *. Since both
    of the callers of hvm_hap_nested_page_fault (svm_do_nested_pgfault and
    ept_handle_violation) actually have the gpa which they pass to
    hvm_hap_nested_page_fault as a paddr_t I think it makes sense to
    change the argument to hvm_hap_nested_page_fault.
    
    The other user of gpa in hvm_hap_nested_page_fault is a call to
    p2m_mem_access_check, which currently also takes a paddr_t gpa but I
    think a paddr_t is appropriate there too.
    
    Jan points out that this is also an issue for >4GB guests on the
    32-bit hypervisor.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Committed-by: Ian Campbell <ian.campbell@citrix.com>
    xen-unstable changeset:   25724:612898732e66
    xen-unstable date:        Fri Aug 03 09:54:17 2012 +0100
    Backported-by: Keir Fraser <keir@xen.org>
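    The root cause of the build error above can be sketched in a few lines.
    The typedef and function below are hypothetical stand-ins, not Xen's
    actual definitions: on a 32-bit hypervisor, paddr_t is 64 bits wide
    while unsigned long is only 32, so the two pointer types are genuinely
    incompatible and narrowing the address would lose the high bits of a
    >4GB guest-physical address.

    ```c
    #include <assert.h>
    #include <stdint.h>

    /* Hypothetical stand-in for Xen's paddr_t: guest-physical addresses can
     * exceed 4GB even on a 32-bit hypervisor, so the type must be 64-bit. */
    typedef uint64_t paddr_t;

    /* Sketch of a callee taking paddr_t *, as nestedhvm_hap_nested_page_fault
     * does; it may legitimately read or write all 64 bits through the pointer. */
    static void nested_fault(paddr_t *gpa)
    {
        *gpa += 0;
    }

    int main(void)
    {
        paddr_t gpa = 0x123456789ULL;              /* a >4GB address: fine in paddr_t */
        unsigned long narrow = (unsigned long)gpa; /* truncates where long is 32-bit */

        /* Passing &narrow where paddr_t * is expected is exactly the
         * incompatible-pointer error gcc reported; the fix keeps the value
         * as paddr_t from the callers all the way down. */
        nested_fault(&gpa);
        assert(gpa == 0x123456789ULL);

        if (sizeof(unsigned long) == 4)
            assert((paddr_t)narrow != gpa);        /* high bits are lost */
        return 0;
    }
    ```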
    
    
changeset:   23330:5a65d6a1aab7
user:        Santosh Jodh <santosh.jodh@citrix.com>
date:        Fri Aug 03 10:39:13 2012 +0100
    
    Intel VT-d: Dump IOMMU supported page sizes
    
    Signed-off-by: Santosh Jodh <santosh.jodh@citrix.com>
    xen-unstable changeset:   25725:9ad379939b78
    xen-unstable date:        Fri Aug 03 10:38:04 2012 +0100
    
    
changeset:   23329:fa34499e8f6c
user:        Jan Beulich <jbeulich@suse.com>
date:        Mon Jul 30 13:38:58 2012 +0100
    
    x86: fix off-by-one in nr_irqs_gsi calculation
    
    highest_gsi() returns the last valid GSI, not a count.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Joe Jin <joe.jin@oracle.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset:   25688:e6266fc76d08
    xen-unstable date:        Fri Jul 27 12:22:13 2012 +0200
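    The off-by-one fixed above reduces to one line. The highest_gsi() stub
    below is a hypothetical stand-in (returning 23, as a single 24-pin
    IOAPIC would): since it reports the last valid GSI number rather than a
    count, the count of GSIs must be that value plus one.

    ```c
    #include <assert.h>

    /* Hypothetical stand-in: reports the largest valid GSI number,
     * e.g. 23 for a 24-pin IOAPIC, NOT the number of GSIs. */
    static unsigned int highest_gsi(void)
    {
        return 23;
    }

    int main(void)
    {
        /* Buggy version: nr_irqs_gsi = highest_gsi();  -> 23, one too few. */
        unsigned int nr_irqs_gsi = highest_gsi() + 1;  /* corrected count */
        assert(nr_irqs_gsi == 24);
        return 0;
    }
    ```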
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 04:32:07 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 04:32:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyEyc-0006cH-78; Mon, 06 Aug 2012 04:31:34 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SyEya-0006cC-4C
	for xen-devel@lists.xensource.com; Mon, 06 Aug 2012 04:31:32 +0000
Received: from [85.158.139.83:52453] by server-1.bemta-5.messagelabs.com id
	EC/50-29759-3A84F105; Mon, 06 Aug 2012 04:31:31 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-11.tower-182.messagelabs.com!1344227490!23172258!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgxNjE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2116 invoked from network); 6 Aug 2012 04:31:30 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-11.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 04:31:30 -0000
X-IronPort-AV: E=Sophos;i="4.77,717,1336348800"; d="scan'208";a="13857422"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 04:31:30 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Mon, 6 Aug 2012 05:31:29 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1SyEyX-0000MC-Ms;
	Mon, 06 Aug 2012 04:31:29 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1SyEyX-0003d1-Kb;
	Mon, 06 Aug 2012 05:31:29 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13556-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 6 Aug 2012 05:31:29 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 13556: trouble: blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13556 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13556/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386                    2 host-install(2)         broken REGR. vs. 13525
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 13525
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 13525
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 13525
 build-amd64                   2 host-install(2)         broken REGR. vs. 13525
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 13525

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  f8f8912b3de0
baseline version:
 xen                  fa34499e8f6c

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Joe Jin <joe.jin@oracle.com>
  Keir Fraser <keir@xen.org>
  Santosh Jodh <santosh.jodh@citrix.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23331:f8f8912b3de0
tag:         tip
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Fri Aug 03 10:43:24 2012 +0100
    
    nestedhvm: fix nested page fault build error on 32-bit
    
        cc1: warnings being treated as errors
        hvm.c: In function 'hvm_hap_nested_page_fault':
        hvm.c:1282: error: passing argument 2 of
        'nestedhvm_hap_nested_page_fault' from incompatible pointer type
        /local/scratch/ianc/devel/xen-unstable.hg/xen/include/asm/hvm/nestedhvm.h:55:
        note: expected 'paddr_t *' but argument is of type 'long unsigned
        int *'
    
    hvm_hap_nested_page_fault takes an unsigned long gpa and passes &gpa
    to nestedhvm_hap_nested_page_fault which takes a paddr_t *. Since both
    of the callers of hvm_hap_nested_page_fault (svm_do_nested_pgfault and
    ept_handle_violation) actually have the gpa which they pass to
    hvm_hap_nested_page_fault as a paddr_t I think it makes sense to
    change the argument to hvm_hap_nested_page_fault.
    
    The other user of gpa in hvm_hap_nested_page_fault is a call to
    p2m_mem_access_check, which currently also takes a paddr_t gpa but I
    think a paddr_t is appropriate there too.
    
    Jan points out that this is also an issue for >4GB guests on the
    32-bit hypervisor.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Committed-by: Ian Campbell <ian.campbell@citrix.com>
    xen-unstable changeset:   25724:612898732e66
    xen-unstable date:        Fri Aug 03 09:54:17 2012 +0100
    Backported-by: Keir Fraser <keir@xen.org>
    
    
changeset:   23330:5a65d6a1aab7
user:        Santosh Jodh <santosh.jodh@citrix.com>
date:        Fri Aug 03 10:39:13 2012 +0100
    
    Intel VT-d: Dump IOMMU supported page sizes
    
    Signed-off-by: Santosh Jodh <santosh.jodh@citrix.com>
    xen-unstable changeset:   25725:9ad379939b78
    xen-unstable date:        Fri Aug 03 10:38:04 2012 +0100
    
    
changeset:   23329:fa34499e8f6c
user:        Jan Beulich <jbeulich@suse.com>
date:        Mon Jul 30 13:38:58 2012 +0100
    
    x86: fix off-by-one in nr_irqs_gsi calculation
    
    highest_gsi() returns the last valid GSI, not a count.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Joe Jin <joe.jin@oracle.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset:   25688:e6266fc76d08
    xen-unstable date:        Fri Jul 27 12:22:13 2012 +0200
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 06:37:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 06:37:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyGvb-0007ua-3R; Mon, 06 Aug 2012 06:36:35 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1SyGvZ-0007uV-5A
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 06:36:33 +0000
Received: from [85.158.138.51:26485] by server-8.bemta-3.messagelabs.com id
	BD/F8-25919-0F56F105; Mon, 06 Aug 2012 06:36:32 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-2.tower-174.messagelabs.com!1344234991!30559987!1
X-Originating-IP: [192.89.123.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjg5LjEyMy4yNSA9PiA0NzQ0NjE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17475 invoked from network); 6 Aug 2012 06:36:31 -0000
Received: from smtp.tele.fi (HELO smtp.tele.fi) (192.89.123.25)
	by server-2.tower-174.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Aug 2012 06:36:31 -0000
X-Originating-Ip: [194.89.68.22]
Received: from ydin.reaktio.net (reaktio.net [194.89.68.22])
	by smtp.tele.fi (Postfix) with ESMTP id 39E3424F1;
	Mon,  6 Aug 2012 09:36:30 +0300 (EEST)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id D8A942005D; Mon,  6 Aug 2012 09:36:29 +0300 (EEST)
Date: Mon, 6 Aug 2012 09:36:29 +0300
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: Andy Smith <andy@strugglers.net>
Message-ID: <20120806063629.GG19851@reaktio.net>
References: <20120727161545.GR11695@bitfolk.com>
	<50165AAB0200007800091380@nat28.tlf.novell.com>
	<20120806035540.GC11695@bitfolk.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20120806035540.GC11695@bitfolk.com>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: Jan Beulich <JBeulich@suse.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Failure to boot,
 Debian squeeze with 4.0.1 hypervisor, timer problems?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Aug 06, 2012 at 03:55:40AM +0000, Andy Smith wrote:
> 
> I am not ready to use 4.1 in production at the moment so I have to
> stick with 4.0.x. I'm assuming that if I wish to continue using
> 4.0.x on this host I would be better off compiling the released
> 4.0.3 and using that (if it works for me), as opposed to the tip of
> 4.0-testing that I have here?
> 

Well... if you read the changelog at http://xenbits.xen.org/xen-4.0-testing.hg
there are only 3 changesets after the 4.0.3 release.

So judge for yourself :) they look like pretty straightforward bugfixes.
Also, one of them is a security fix!

-- Pasi


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 06:46:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 06:46:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyH4k-000841-64; Mon, 06 Aug 2012 06:46:02 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andy@strugglers.net>) id 1SyH4i-00083w-RK
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 06:46:01 +0000
Received: from [85.158.143.35:12266] by server-2.bemta-4.messagelabs.com id
	CB/D3-17938-8286F105; Mon, 06 Aug 2012 06:46:00 +0000
X-Env-Sender: andy@strugglers.net
X-Msg-Ref: server-11.tower-21.messagelabs.com!1344235559!13441382!1
X-Originating-IP: [85.119.80.223]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22354 invoked from network); 6 Aug 2012 06:45:59 -0000
Received: from bitfolk.com (HELO mail.bitfolk.com) (85.119.80.223)
	by server-11.tower-21.messagelabs.com with AES256-SHA encrypted SMTP;
	6 Aug 2012 06:45:59 -0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=bitfolk.com;
	s=alpha; 
	h=Subject:In-Reply-To:Content-Transfer-Encoding:Content-Type:MIME-Version:References:Message-ID:To:From:Date;
	bh=xPC9Yxz7A0Sj9lNbzAQiG7rQnpWT/TkNxyEwKaBwok0=; 
	b=rR3mfBNDq0OBK6iHn7telclRgm2CVo2j/X2LFKfiaj2OOSubrsO/R/ZoVNqyxxmC2wdTT5afEiXfSAnHYHTvG4CGhy0LEBxaEir78OJuxWTo8c/hVkRiogf5d8JkuFao;
Received: from andy by mail.bitfolk.com with local (Exim 4.72)
	(envelope-from <andy@strugglers.net>) id 1SyH4g-0005AS-MT
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 06:45:58 +0000
Date: Mon, 6 Aug 2012 06:45:58 +0000
From: Andy Smith <andy@strugglers.net>
To: xen-devel@lists.xen.org
Message-ID: <20120806064558.GD11695@bitfolk.com>
References: <20120727161545.GR11695@bitfolk.com>
	<50165AAB0200007800091380@nat28.tlf.novell.com>
	<20120806035540.GC11695@bitfolk.com>
	<20120806063629.GG19851@reaktio.net>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20120806063629.GG19851@reaktio.net>
OpenPGP: id=BF15490B; url=http://strugglers.net/~andy/pubkey.asc
X-URL: http://strugglers.net/wiki/User:Andy
User-Agent: Mutt/1.5.20 (2009-06-14)
X-Virus-Scanner: Scanned by ClamAV on mail.bitfolk.com at Mon,
	06 Aug 2012 06:45:58 +0000
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: andy@strugglers.net
X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on
	spamd0.lon.bitfolk.com
X-Spam-Level: 
X-Spam-ASN: 
X-Spam-Status: No, score=-0.0 required=5.0 tests=NO_RELAYS shortcircuit=no
	autolearn=disabled version=3.3.1
X-Spam-Report: * -0.0 NO_RELAYS Informational: message was not relayed via SMTP
X-SA-Exim-Version: 4.2.1 (built Mon, 22 Mar 2010 06:51:10 +0000)
X-SA-Exim-Scanned: Yes (on mail.bitfolk.com)
Subject: Re: [Xen-devel] Failure to boot,
 Debian squeeze with 4.0.1 hypervisor, timer problems?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Pasi,

On Mon, Aug 06, 2012 at 09:36:29AM +0300, Pasi Kärkkäinen wrote:
> So judge for yourself :) they look like pretty straightforward bugfixes.
> Also, one of them is a security fix!

I see. Yes, going without that one is not going to be an option.

Thanks,
Andy

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 06:50:24 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 06:50:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyH8W-0008At-S8; Mon, 06 Aug 2012 06:49:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SyH8V-0008Ai-It
	for xen-devel@lists.xensource.com; Mon, 06 Aug 2012 06:49:55 +0000
Received: from [85.158.143.99:49685] by server-3.bemta-4.messagelabs.com id
	4C/AB-01511-2196F105; Mon, 06 Aug 2012 06:49:54 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-4.tower-216.messagelabs.com!1344235794!25124579!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM4NjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24024 invoked from network); 6 Aug 2012 06:49:54 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-4.tower-216.messagelabs.com with SMTP;
	6 Aug 2012 06:49:54 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 06 Aug 2012 07:50:09 +0100
Message-Id: <501F852E0200007800092C24@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Mon, 06 Aug 2012 07:49:50 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Jinsong Liu" <jinsong.liu@intel.com>
References: <DE8DF0795D48FD4CA783C40EC82923352C40A1@SHSMSX101.ccr.corp.intel.com>
	<5016A69A0200007800091435@nat28.tlf.novell.com>
	<DE8DF0795D48FD4CA783C40EC82923352D5959@SHSMSX101.ccr.corp.intel.com>
In-Reply-To: <DE8DF0795D48FD4CA783C40EC82923352D5959@SHSMSX101.ccr.corp.intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Christoph Egger <Christoph.Egger@amd.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"Keir \(Xen.org\)" <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [Patch 2/6] Xen/MCE: remove mcg_ctl and other
 adjustment for future vMCE
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 05.08.12 at 12:24, "Liu, Jinsong" <jinsong.liu@intel.com> wrote:
> Jan Beulich wrote:
>>>>> On 23.07.12 at 11:39, "Liu, Jinsong" <jinsong.liu@intel.com> wrote:
>>> @@ -175,11 +179,16 @@
>>>                     *val);
>>>          break;
>>>      case MSR_IA32_MCG_CTL:
>>> -        /* Always 0 if no CTL support */
>>>          if ( cur->arch.mcg_cap & MCG_CTL_P )
>>> -            *val = vmce->mcg_ctl & h_mcg_ctl;
>>> -        mce_printk(MCE_VERBOSE, "MCE: rdmsr MCG_CTL 0x%"PRIx64"\n",
>>> -                   *val);
>>> +        {
>>> +            *val = ~0UL;
>>> +            mce_printk(MCE_VERBOSE, "MCE: rdmsr MCG_CTL 0x%"PRIx64"\n",
>>> +                       *val);
>>> +        }
>>> +        else
>>> +        {
>>> +            mce_printk(MCE_QUIET, "MCE: no MCG_CTL\n");
>>> +            ret = -1;
>> 
>> Is there a particular reason to make this access fault here, when
>> it didn't before? I.e. was there anything wrong with the previous
>> approach of returning zero on reads and ignoring writes when
>> !MCG_CTL_P?
>> 
> 
> Semantically this code is better than the previous approach, since !MCG_CTL_P 
> means MCG_CTL is unimplemented, so accessing it would generate #GP.

Agreed. But nevertheless I'd like to be a little more conservative
here. After all, "knowing" that this won't break Windows or Linux
doesn't cover all possible HVM guests (and the quotes are there
to indicate that (a) unless you have access to the Windows sources
you can't really know, you have at best empirical data suggesting
so, and (b) it makes you/us dependent on all the older
Windows/Linux versions you didn't try out or look at behaving
correctly here too).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 06:55:42 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 06:55:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyHDf-0008L7-KA; Mon, 06 Aug 2012 06:55:15 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SyHDe-0008L1-9J
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 06:55:14 +0000
Received: from [85.158.143.99:34321] by server-3.bemta-4.messagelabs.com id
	3C/21-01511-15A6F105; Mon, 06 Aug 2012 06:55:13 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-13.tower-216.messagelabs.com!1344236111!29494191!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM4NjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12664 invoked from network); 6 Aug 2012 06:55:12 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-13.tower-216.messagelabs.com with SMTP;
	6 Aug 2012 06:55:12 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 06 Aug 2012 07:55:34 +0100
Message-Id: <501F866B0200007800092C2D@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Mon, 06 Aug 2012 07:55:07 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "George Dunlap" <George.Dunlap@eu.citrix.com>
References: <CAFLBxZZKoVfyKuN7_kW_+mJ69PmXwUvukXBK3rpD+we8c-wvXA@mail.gmail.com>
	<CAFLBxZbCamJqx5GYGEjrkvVVbcRW7Y4-rS1zqHKmkjDqrk7UFQ@mail.gmail.com>
In-Reply-To: <CAFLBxZbCamJqx5GYGEjrkvVVbcRW7Y4-rS1zqHKmkjDqrk7UFQ@mail.gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Lars Kurth <lars.kurth@xen.org>, Matt Wilson <msw@amazon.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] Security discussion: Summary of proposals and
 criteria (was Re: Security vulnerability process, and CVE-2012-0217)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 03.08.12 at 19:31, George Dunlap <George.Dunlap@eu.citrix.com> wrote:
> Secondly, my original discussion had assumed that the risk during
> "public vulnerability" for all users was the same.  Unfortunately, I
> don't think that's true.  Some targets may be more valuable than
> others.  In particular, the value of attacking a hosting provider may
> be correlated to the value to an attacker of the aggregate of all of
> their customers.  Thus it is simply more likely for a large provider
> to be the target of an attack than a small provider.

Not necessarily - if the same attack works universally (or can
be made to work with very little additional effort), using it against
many smaller ones may be as worthwhile to the attacker.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 07:15:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 07:15:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyHXD-0000Gp-OL; Mon, 06 Aug 2012 07:15:27 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SyHXB-0000Gk-Gh
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 07:15:25 +0000
Received: from [85.158.138.51:45287] by server-2.bemta-3.messagelabs.com id
	33/E9-29239-C0F6F105; Mon, 06 Aug 2012 07:15:24 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-5.tower-174.messagelabs.com!1344237323!30641244!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM4NjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25157 invoked from network); 6 Aug 2012 07:15:23 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-5.tower-174.messagelabs.com with SMTP;
	6 Aug 2012 07:15:23 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 06 Aug 2012 08:17:22 +0100
Message-Id: <501F8B250200007800092C62@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Mon, 06 Aug 2012 08:15:17 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Dario Faggioli" <raistlin@linux.it>,
	"Dan Magenheimer" <dan.magenheimer@oracle.com>
References: <1343837796.4958.32.camel@Solace>
	<501A67C502000078000921FF@nat28.tlf.novell.com>
	<6843caa4-9ef7-4e9d-97e5-9ebee55ec6e4@default>
In-Reply-To: <6843caa4-9ef7-4e9d-97e5-9ebee55ec6e4@default>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Andre Przywara <andre.przywara@amd.com>,
	Anil Madhavapeddy <anil@recoil.org>, George Dunlap <dunlapg@gmail.com>,
	xen-devel <xen-devel@lists.xen.org>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Yang ZZhang <yang.z.zhang@intel.com>
Subject: Re: [Xen-devel] NUMA TODO-list for xen-devel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 04.08.12 at 00:34, Dan Magenheimer <dan.magenheimer@oracle.com> wrote:
>>  From: Jan Beulich [mailto:JBeulich@suse.com]
>> The question is whether trading functionality for performance
>> is an acceptable choice.
> 
> If there were a lwn.net equivalent for Xen, I'd be pushing to get
> quoted on the following:
> 
> "Virtualization: You can have flexibility or you can have performance.
> Pick one."
> 
> A couple of years ago when NUMA was first being extensively discussed
> for Xen, I suggested that this should really be a "top level" flag
> that a sysadmin should be able to select: Either optimize for
> performance or optimize for flexibility.  Then Xen and the Xen tools
> should "do the right thing" depending on the selection.
> 
> I still think this is a good way to surface the tradeoffs for
> a very complex problem to the vast majority of users/admins.
> Clearly they will want "both" but forcing the choice will
> provoke more thought about their use model, as well as provide
> important guidance to the underlying implementations.

I would expect a good many of them to pick performance, and then
go whine about something not working in an emergency. On
xen-devel one could respond with this-is-what-you-get, but
you can't necessarily do so to paying customers...

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 07:32:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 07:32:49 +0000
From xen-devel-bounces@lists.xen.org Mon Aug 06 07:32:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 07:32:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyHng-0000RN-C3; Mon, 06 Aug 2012 07:32:28 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SyHnf-0000RI-LJ
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 07:32:27 +0000
Received: from [85.158.143.35:52198] by server-3.bemta-4.messagelabs.com id
	56/00-01511-A037F105; Mon, 06 Aug 2012 07:32:26 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1344238345!5665750!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM4NjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19170 invoked from network); 6 Aug 2012 07:32:26 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-4.tower-21.messagelabs.com with SMTP;
	6 Aug 2012 07:32:26 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 06 Aug 2012 08:34:25 +0100
Message-Id: <501F8F240200007800092C84@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Mon, 06 Aug 2012 08:32:20 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andy Smith" <andy@strugglers.net>
References: <20120727161545.GR11695@bitfolk.com>
	<50165AAB0200007800091380@nat28.tlf.novell.com>
	<20120806035540.GC11695@bitfolk.com>
In-Reply-To: <20120806035540.GC11695@bitfolk.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Failure to boot,
 Debian squeeze with 4.0.1 hypervisor, timer problems?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 06.08.12 at 05:55, Andy Smith <andy@strugglers.net> wrote:
> I checked out http://xenbits.xen.org/hg/xen-4.0-testing.hg and
> compiled the hypervisor. It seems to boot okay, though I have not
> yet done any exhaustive stress tests.

Good.

> Is there any point at this stage in trying to find out which commit
> appears to have fixed the problem I was seeing?

That's a question you have to ask yourself (or the Debian folks
if you want them to deliver a fixed package). From our
(developers') perspective, knowing the problem is fixed is
sufficient.

> If so, any pointers
> on the best ways to do that would be appreciated as I'm not that
> familiar with hg (or git).

Bisection would probably be the best approach, unless you can
turn up likely candidates right away by going through the diffs
or the individual changeset descriptions.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 07:55:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 07:55:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyI9E-0000ck-Bs; Mon, 06 Aug 2012 07:54:44 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SyI9C-0000cf-Hj
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 07:54:42 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1344239667!4041017!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM4NjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28120 invoked from network); 6 Aug 2012 07:54:27 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-10.tower-27.messagelabs.com with SMTP;
	6 Aug 2012 07:54:27 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 06 Aug 2012 08:54:26 +0100
Message-Id: <501F944E0200007800092C97@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Mon, 06 Aug 2012 08:54:22 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Frediano Ziglio" <frediano.ziglio@citrix.com>
References: <7CE799CC0E4DE04B88D5FDF226E18AC2CDFFB08D0D@LONPMAILBOX01.citrite.net>
	<501C08F20200007800092920@nat28.tlf.novell.com>
	<7CE799CC0E4DE04B88D5FDF226E18AC2CDFFB08D0E@LONPMAILBOX01.citrite.net>
In-Reply-To: <7CE799CC0E4DE04B88D5FDF226E18AC2CDFFB08D0E@LONPMAILBOX01.citrite.net>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Increment buffer used to read first boot
 sector in order to accommodate space for 4k sector
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 03.08.12 at 18:43, Frediano Ziglio <frediano.ziglio@citrix.com> wrote:
> On Fri, 2012-08-03 at 16:22 +0100, Jan Beulich wrote:
>> >>> On 03.08.12 at 16:50, Frediano Ziglio <frediano.ziglio@citrix.com> wrote:
>> > If a 4k-sector disk is used as the first BIOS disk, the loader corrupts itself.
>> 
>> If such is really permitted by the specification (which I doubt it
>> is for the standard, old-style INT13 functions - it's a different
>> story for functions 42 and 43, where the caller can be expected
>> to call function 48 first).
>> 
> 
> I don't know. The specs always speak about sector size, and with int13/5 for
> floppies you can specify different sector sizes (up to 1024).

This is the format operation (and you can't do the same for
read/write without adjusting some memory variables). Plus,
as you say, this is a floppy-specific thing. I'm unaware of
the old INT13 interface allowing other than 512-byte sectors.
Did you check with the vendor of the machine/BIOS?

>> > This patch increases the sector buffer in order to avoid this overflow.
>> 
>> And if we indeed need to adjust for this, then let's fix this properly:
>> Don't just increase the buffer size, but also check that the sector
>> size reported actually fits. That may require calling Fn48 first,
>> before doing the actual read.
>> 
> 
> Or read to a memory location where we are sure there is enough space
> (something like 2000:0000).

No, please let's not start using fixed addresses again. If
anything, you need to consult the memory map to see
what area of memory is safe to use.

>> > --- a/xen/arch/x86/boot/edd.S
>> > +++ b/xen/arch/x86/boot/edd.S
>> > @@ -154,4 +154,4 @@ boot_mbr_signature_nr:
>> >  boot_mbr_signature:
>> >          .fill   EDD_MBR_SIG_MAX*8,1,0
>> >  boot_edd_info:
>> > -        .fill   512,1,0                         # big enough for a disc sector
>> > +        .fill   4096,1,0                        # big enough for a disc sector
>> 
>> Also I wonder whether it wouldn't be more smart to re-use the
>> wakeup stack (which is already 4k in size), and shrink this buffer
>> to the maximum size ever used without reading sectors into it
>> (EDD_INFO_MAX*(EDDEXTSIZE+EDDPARMSIZE)).
>> 
> 
> Yes, reusing this buffer could be useful. It could also be useful to put
> it at the end of the trampoline code to try to avoid future
> problems if the sector size grows.

Putting it at the end doesn't help in any way - you'd then risk
corrupting the EBDA or other BIOS/firmware data.

>> > --- a/xen/arch/x86/boot/trampoline.S
>> > +++ b/xen/arch/x86/boot/trampoline.S
>> > @@ -224,6 +224,6 @@ skip_realmode:
>> >  rm_idt: .word   256*4-1, 0, 0
>> >  
>> >  #include "mem.S"
>> > -#include "edd.S"
>> >  #include "video.S"
>> >  #include "wakeup.S"
>> > +#include "edd.S"
>> 
>> Finally, you should also explain why this change is needed.
>> 
> 
> This is to move the buffer to the end and avoid overflowing into other
> code.

As said - this should be clarified in the changeset comment, and
it doesn't really help by itself. The only way to stay safe going
forward is to
- determine the sector size
- either skip the I/O when it's too large, or dynamically determine
  a safe buffer location.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 08:08:39 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 08:08:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyIMK-0001Q5-BG; Mon, 06 Aug 2012 08:08:16 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SyIMI-0001Pw-ON
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 08:08:14 +0000
Received: from [85.158.139.83:47075] by server-12.bemta-5.messagelabs.com id
	A6/DC-26304-D6B7F105; Mon, 06 Aug 2012 08:08:13 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-7.tower-182.messagelabs.com!1344240493!24730848!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM4NjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6353 invoked from network); 6 Aug 2012 08:08:13 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-7.tower-182.messagelabs.com with SMTP;
	6 Aug 2012 08:08:13 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 06 Aug 2012 09:08:12 +0100
Message-Id: <501F97890200007800092CA9@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Mon, 06 Aug 2012 09:08:09 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Jean Guyader" <jean.guyader@citrix.com>
References: <1344023454-31425-1-git-send-email-jean.guyader@citrix.com>
	<1344023454-31425-2-git-send-email-jean.guyader@citrix.com>
In-Reply-To: <1344023454-31425-2-git-send-email-jean.guyader@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 1/5] xen: add ssize_t
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 03.08.12 at 21:50, Jean Guyader <jean.guyader@citrix.com> wrote:

Without finally explaining why you need this type in the first place,
I'll continue to NAK this patch. (This is made even worse by the fact
that the two inline functions in patch 5 that make use of the type
appear to be unused.)

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 08:10:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 08:10:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyIO8-0001Vd-Rc; Mon, 06 Aug 2012 08:10:08 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SyIO7-0001VS-Er
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 08:10:07 +0000
Received: from [85.158.139.83:63875] by server-7.bemta-5.messagelabs.com id
	38/74-28276-EDB7F105; Mon, 06 Aug 2012 08:10:06 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-9.tower-182.messagelabs.com!1344240605!29717767!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM4NjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27911 invoked from network); 6 Aug 2012 08:10:06 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-9.tower-182.messagelabs.com with SMTP;
	6 Aug 2012 08:10:06 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 06 Aug 2012 09:10:05 +0100
Message-Id: <501F97F90200007800092CAC@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Mon, 06 Aug 2012 09:10:01 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Jean Guyader" <jean.guyader@citrix.com>
References: <1344023454-31425-1-git-send-email-jean.guyader@citrix.com>
	<1344023454-31425-4-git-send-email-jean.guyader@citrix.com>
In-Reply-To: <1344023454-31425-4-git-send-email-jean.guyader@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 3/5] xen: virq, remove VIRQ_XC_RESERVED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 03.08.12 at 21:50, Jean Guyader <jean.guyader@citrix.com> wrote:
> VIRQ_XC_RESERVED was reserved for V4V, but we have switched
> to event channels, so this placeholder is no longer required.

I'm fine with this change, but is a future re-use of the value indeed
not going to cause problems on XenServer (or wherever else this
patch set is coming from)?

Jan

> Signed-off-by: Jean Guyader <jean.guyader@citrix.com>
> ---
>  xen/include/public/xen.h |    1 -
>  1 file changed, 1 deletion(-)




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 08:19:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 08:19:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyIXM-0001r5-25; Mon, 06 Aug 2012 08:19:40 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SyIXL-0001qv-5G
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 08:19:39 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1344241173!11673790!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM4NjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2523 invoked from network); 6 Aug 2012 08:19:33 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-27.messagelabs.com with SMTP;
	6 Aug 2012 08:19:33 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 06 Aug 2012 09:19:32 +0100
Message-Id: <501F9A300200007800092CBF@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Mon, 06 Aug 2012 09:19:28 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Jean Guyader" <jean.guyader@citrix.com>
References: <1344023454-31425-1-git-send-email-jean.guyader@citrix.com>
	<1344023454-31425-5-git-send-email-jean.guyader@citrix.com>
In-Reply-To: <1344023454-31425-5-git-send-email-jean.guyader@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 4/5] xen: events,
 exposes evtchn_alloc_unbound_domain
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 03.08.12 at 21:50, Jean Guyader <jean.guyader@citrix.com> wrote:
>--- a/xen/common/event_channel.c
>+++ b/xen/common/event_channel.c
>@@ -51,6 +51,8 @@
> 
> #define consumer_is_xen(e) (!!(e)->xen_consumer)
> 
>+static long __evtchn_close(struct domain *d, int port);

What is this needed for?

>+
> /*
>  * The function alloc_unbound_xen_event_channel() allows an arbitrary
>  * notifier function to be specified. However, very few unique functions
>@@ -161,18 +163,18 @@ static long evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc)
> {
>     struct evtchn *chn;
>     struct domain *d;
>-    int            port;
>+    evtchn_port_t  port;
>     domid_t        dom = alloc->dom;
>-    long           rc;
>+    int            rc;
> 
>     rc = rcu_lock_target_domain_by_id(dom, &d);
>     if ( rc )
>         return rc;
> 
>-    spin_lock(&d->event_lock);
>+    rc = evtchn_alloc_unbound_domain(d, &port);
>+    if ( rc )
>+        ERROR_EXIT_DOM(rc, d);
> 
>-    if ( (port = get_free_port(d)) < 0 )
>-        ERROR_EXIT_DOM(port, d);
>     chn = evtchn_from_port(d, port);
> 
>     rc = xsm_evtchn_unbound(d, chn, alloc->remote_dom);
>@@ -186,12 +188,31 @@ static long evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc)
>     alloc->port = port;
> 
>  out:
>-    spin_unlock(&d->event_lock);
>     rcu_unlock_domain(d);
> 
>     return rc;
> }
> 
>+int evtchn_alloc_unbound_domain(struct domain *d, evtchn_port_t *port)
>+{
>+    struct evtchn *chn;
>+    int            free_port = 0;
>+
>+    spin_lock(&d->event_lock);
>+
>+    if ( (free_port = get_free_port(d)) < 0 )
>+        goto out;
>+    chn = evtchn_from_port(d, free_port);

The code below this is not really a plain breakout from the
function above:

>+    chn->state = ECS_UNBOUND;

The equivalent to this ought to be removed from the original
function as being redundant.

>+    chn->u.unbound.remote_domid = DOMID_INVALID;

The single caller here will immediately overwrite this value. It
would seem more clean to simply pass in the intended value,
and eliminate the corresponding code from the caller too.

Jan

>+    *port = free_port;
>+    /* Everything is fine, returns 0 */
>+    free_port = 0;
>+
>+ out:
>+    spin_unlock(&d->event_lock);
>+    return free_port;
>+}
> 
> static long evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind)
> {



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 08:27:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 08:27:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyIeo-00021C-Vp; Mon, 06 Aug 2012 08:27:22 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <horms@verge.net.au>) id 1SyIeo-000217-3C
	for xen-devel@lists.xensource.com; Mon, 06 Aug 2012 08:27:22 +0000
Received: from [85.158.143.99:46984] by server-3.bemta-4.messagelabs.com id
	09/29-01511-9EF7F105; Mon, 06 Aug 2012 08:27:21 +0000
X-Env-Sender: horms@verge.net.au
X-Msg-Ref: server-4.tower-216.messagelabs.com!1344241639!25141141!1
X-Originating-IP: [202.4.237.240]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3317 invoked from network); 6 Aug 2012 08:27:20 -0000
Received: from kirsty.vergenet.net (HELO kirsty.vergenet.net) (202.4.237.240)
	by server-4.tower-216.messagelabs.com with SMTP;
	6 Aug 2012 08:27:20 -0000
Received: from ayumi.akashicho.tokyo.vergenet.net
	(p6117-ipbfp1901kobeminato.hyogo.ocn.ne.jp [114.172.117.117])
	by kirsty.vergenet.net (Postfix) with ESMTP id A2EF6266CEE;
	Mon,  6 Aug 2012 18:27:18 +1000 (EST)
Received: by ayumi.akashicho.tokyo.vergenet.net (Postfix, from userid 7100)
	id 4CAA7EDE8B3; Mon,  6 Aug 2012 17:27:17 +0900 (JST)
Date: Mon, 6 Aug 2012 17:27:17 +0900
From: Simon Horman <horms@verge.net.au>
To: Daniel Kiper <daniel.kiper@oracle.com>
Message-ID: <20120806082717.GT18095@verge.net.au>
References: <1d901f35-be2d-418b-bc80-b8c0420f3c0d@default>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1d901f35-be2d-418b-bc80-b8c0420f3c0d@default>
Organisation: Horms Solutions Ltd.
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: kexec@lists.infradead.org, olaf@aepfle.de, xen-devel@lists.xensource.com,
	ptesarik@suse.cz, konrad.wilk@oracle.com
Subject: Re: [Xen-devel] [PATCH] kexec-tools: Read always one vmcoreinfo file
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Aug 02, 2012 at 06:27:24AM -0700, Daniel Kiper wrote:
> Hi,
> 
> > > > If you spot any error in any logfile which in your opinion
> > > > is relevant to our tests please send it to me.
> > >
> > > Hi,
> > >
> > > is there any consensus on what to do here?
> >
> > As I know Petr was going to do some tests.
> > I have not received any reply from him till now.
> > Maybe he is busy or on vacation.
> 
> According to automatic reply he is on vacation until 13/08/2012.

Thanks. I guess we should wait then.

Incidentally, I will be on vacation for a week from the 13th.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 08:34:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 08:34:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyIls-0002BD-T2; Mon, 06 Aug 2012 08:34:40 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SyIlr-0002B8-Is
	for xen-devel@lists.xensource.com; Mon, 06 Aug 2012 08:34:39 +0000
Received: from [85.158.139.83:25941] by server-12.bemta-5.messagelabs.com id
	37/E3-26304-E918F105; Mon, 06 Aug 2012 08:34:38 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-182.messagelabs.com!1344242078!25686718!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgxNjE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19882 invoked from network); 6 Aug 2012 08:34:38 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-14.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 08:34:38 -0000
X-IronPort-AV: E=Sophos;i="4.77,718,1336348800"; d="scan'208";a="13860372"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 08:34:38 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Mon, 6 Aug 2012
	09:34:38 +0100
Message-ID: <1344242076.11339.1.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 6 Aug 2012 09:34:36 +0100
In-Reply-To: <osstest-13547-mainreport@xen.org>
References: <osstest-13547-mainreport@xen.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [xen-unstable test] 13547: trouble: blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sat, 2012-08-04 at 14:27 +0100, xen.org wrote:
> flight 13547 xen-unstable real [real]
> http://www.chiark.greenend.org.uk/~xensrcts/logs/13547/
> 
> Failures and problems with tests :-(
> 
> Tests which did not succeed and are blocking,
> including tests which could not be run:
>  build-i386-pvops              2 host-install(2)         broken REGR. vs. 13536
>  build-i386-oldkern            2 host-install(2)         broken REGR. vs. 13536
>  build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 13536
>  build-amd64                   2 host-install(2)         broken REGR. vs. 13536
>  build-amd64-pvops             2 host-install(2)         broken REGR. vs. 13536
>  build-i386                    2 host-install(2)         broken REGR. vs. 13536

Looks like our local debian mirror/proxy is sick:

        Aug  4 09:50:06.976520 Configuring the network with DHCP  ..100%
        Aug  4 09:50:15.324538 Checking the Debian archive mirror  ..25%..50%..75%..100%
        Aug  4 09:50:16.140553 Choose a mirror of the Debian archive
        Aug  4 09:50:16.140585 -------------------------------------
        Aug  4 09:50:16.140616 
        Aug  4 09:50:16.140634 !! ERROR: Bad archive mirror
        Aug  4 09:50:16.140660 
        Aug  4 09:50:16.140678 An error has been detected while trying to use the specified Debian archive 
        Aug  4 09:50:16.152548 mirror.
        Aug  4 09:50:16.152571 
        Aug  4 09:50:16.152595 Possible reasons for the error are: incorrect mirror specified; mirror is not 
        Aug  4 09:50:16.160549 available (possibly due to an unreliable network connection); mirror is broken 
        Aug  4 09:50:16.172635 (for example because an invalid Release file was found); mirror does not 
        Aug  4 09:50:16.172672 support the correct Debian version.
        Aug  4 09:50:16.180539 
        Aug  4 09:50:16.180559 Additional details may be available in /var/log/syslog or on virtual console 4.
        Aug  4 09:50:16.180605 
        Aug  4 09:50:16.180649 Please check the specified mirror or try a different one.
        Aug  4 09:50:16.188548 [Press enter to continue] 




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 08:42:24 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 08:42:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyIt9-0002OC-OG; Mon, 06 Aug 2012 08:42:11 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SyIt9-0002Ns-2z
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 08:42:11 +0000
Received: from [85.158.143.35:21923] by server-1.bemta-4.messagelabs.com id
	DA/DC-24392-2638F105; Mon, 06 Aug 2012 08:42:10 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-21.messagelabs.com!1344242529!11031698!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgxNjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24057 invoked from network); 6 Aug 2012 08:42:09 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-10.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 08:42:09 -0000
X-IronPort-AV: E=Sophos;i="4.77,718,1336348800"; d="scan'208";a="13860521"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 08:41:49 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Mon, 6 Aug 2012
	09:41:49 +0100
Message-ID: <1344242508.11339.3.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: James Harper <james.harper@bendigoit.com.au>
Date: Mon, 6 Aug 2012 09:41:48 +0100
In-Reply-To: <6035A0D088A63A46850C3988ED045A4B299E603F@BITCOM1.int.sbss.com.au>
References: <6035A0D088A63A46850C3988ED045A4B299DF56E@BITCOM1.int.sbss.com.au>
	<20120804012645.GF19851@reaktio.net>
	<6035A0D088A63A46850C3988ED045A4B299E603F@BITCOM1.int.sbss.com.au>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] vscsi
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sat, 2012-08-04 at 07:35 +0100, James Harper wrote:
> I guess the main thing is that afaik the vscsi stuff isn't in xl...
> can anyone estimate the work required to port it? (if it hasn't been
> done already)

I've not heard of anyone doing it.

You'd need to reverse engineer the actual xm/xend syntax and figure out
how this translates into xenstore (I suspect this will actually be the
hardest bit). Then it's just a case of writing a parser in xl or libxlu
and defining/implementing libxl_device_scsi in libxl. The libxl side is
fairly standard and any of the existing devices should provide a
reasonable template to follow.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 08:46:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 08:46:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyIwn-0002ej-C8; Mon, 06 Aug 2012 08:45:57 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SyIwl-0002ed-JB
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 08:45:55 +0000
Received: from [85.158.143.99:20858] by server-1.bemta-4.messagelabs.com id
	48/53-24392-2448F105; Mon, 06 Aug 2012 08:45:54 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-4.tower-216.messagelabs.com!1344242753!25144862!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM4NjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9066 invoked from network); 6 Aug 2012 08:45:53 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-4.tower-216.messagelabs.com with SMTP;
	6 Aug 2012 08:45:53 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 06 Aug 2012 09:45:52 +0100
Message-Id: <501FA05C0200007800092CD7@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Mon, 06 Aug 2012 09:45:48 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Jean Guyader" <jean.guyader@citrix.com>
References: <1344023454-31425-1-git-send-email-jean.guyader@citrix.com>
	<1344023454-31425-6-git-send-email-jean.guyader@citrix.com>
In-Reply-To: <1344023454-31425-6-git-send-email-jean.guyader@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 5/5] xen: Add V4V implementation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 03.08.12 at 21:50, Jean Guyader <jean.guyader@citrix.com> wrote:
>--- /dev/null
>+++ b/xen/include/public/v4v.h
>...
>+#define V4V_DOMID_ANY           0x7fffU


I think I asked this before - why not use one of the pre-existing
DOMID values? And if there is a good reason, it should be stated
here in a comment, to avoid the same question being asked
again later.

>...
>+typedef uint64_t v4v_pfn_t;

We already have xen_pfn_t, so why do you need yet another
flavor?

>...
>+struct v4v_info
>+{
>+    uint64_t ring_magic;
>+    uint64_t data_magic;
>+    evtchn_port_t evtchn;

Missing padding at the end?

>+};
>+typedef struct v4v_info v4v_info_t;
>+
>+#define V4V_ROUNDUP(a) (((a) +0xf ) & ~0xf)

Doesn't seem to belong here. Or is the subsequent comment
actually related to this (in which case it should be moved ahead
of the definition and made to match it)?

>+/*
>+ * Messages on the ring are padded to 128 bits
>+ * Len here refers to the exact length of the data not including the
>+ * 128 bit header. The message uses
>+ * ((len +0xf) & ~0xf) + sizeof(v4v_ring_message_header) bytes
>+ */
>...
>+/*
>+ * HYPERCALLS
>+ */
>...

In the block below here, please get the naming (do_v4v_op()
vs v4v_hypercall()) and the use of newlines (either always one
or always two between individual hypercall descriptions)
consistent. Hmm, even the descriptions don't seem to always
match the definitions (not really obvious because apparently
again the descriptions follow the definitions, whereas the
opposite is the usual way to arrange things).

>--- /dev/null
>+++ b/xen/include/xen/v4v_utils.h
>...
>+/* Compiler specific hacks */
>+#if defined(__GNUC__)
>+# define V4V_UNUSED __attribute__ ((unused))
>+# ifndef __STRICT_ANSI__
>+#  define V4V_INLINE inline
>+# else
>+#  define V4V_INLINE
>+# endif
>+#else /* !__GNUC__ */
>+# define V4V_UNUSED
>+# define V4V_INLINE
>+#endif

This suggests the header is really intended to be public?

>...
>+static V4V_INLINE uint32_t
>+v4v_ring_bytes_to_read (volatile struct v4v_ring *r)

No space between function name and opening parenthesis
(throughout this file).

>...
>+V4V_UNUSED static V4V_INLINE ssize_t

V4V_UNUSED? Doesn't make sense in conjunction with
V4V_INLINE, at least as long as you're using GNU extensions
anyway (see above as to the disposition of the header).

>+v4v_copy_out (struct v4v_ring *r, struct v4v_addr *from, uint32_t * protocol,
>+              void *_buf, size_t t, int consume)

Dead functions shouldn't be placed here.

>...
>+static ssize_t
>+v4v_copy_out_offset (struct v4v_ring *r, struct v4v_addr *from,
>+                     uint32_t * protocol, void *_buf, size_t t, int consume,
>+                     size_t skip) V4V_UNUSED;
>+
>+V4V_INLINE static ssize_t
>+v4v_copy_out_offset (struct v4v_ring *r, struct v4v_addr *from,
>+                     uint32_t * protocol, void *_buf, size_t t, int consume,
>+                     size_t skip)
>+{

What's the point of having a declaration followed immediately by
a definition? And this function is dead too.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 08:53:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 08:53:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyJ3h-0002rY-8G; Mon, 06 Aug 2012 08:53:05 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SyJ3f-0002rT-OX
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 08:53:03 +0000
Received: from [85.158.143.99:32491] by server-3.bemta-4.messagelabs.com id
	D1/38-01511-FE58F105; Mon, 06 Aug 2012 08:53:03 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-216.messagelabs.com!1344243182!25163633!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgxNjE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31790 invoked from network); 6 Aug 2012 08:53:02 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-12.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 08:53:02 -0000
X-IronPort-AV: E=Sophos;i="4.77,718,1336348800"; d="scan'208";a="13860786"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 08:53:02 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Mon, 6 Aug 2012
	09:53:02 +0100
Message-ID: <1344243180.11339.9.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "M. Fletcher" <mpf30@cam.ac.uk>
Date: Mon, 6 Aug 2012 09:53:00 +0100
In-Reply-To: <002901cd7308$79d55890$6d8009b0$@cam.ac.uk>
References: <002901cd7308$79d55890$6d8009b0$@cam.ac.uk>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Unable to configure GRUB to boot xen 4.2 Kernel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, 2012-08-05 at 13:47 +0100, M. Fletcher wrote:
>     dpkg: version '/boot/xen.gz' has bad syntax: invalid character in
> version number

update-grub wants to parse the filename to figure out the version (to
use in the menu entry), and this entry obviously confuses it. I think
this is a grub bug; it would be useful if you could report it upstream.

As a workaround you should be able to just remove /boot/xen.gz; it is
normally a symlink to a name containing a more fully qualified version.
In fact, you could probably remove all the symlinks.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 08:54:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 08:54:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyJ4b-0002uz-MU; Mon, 06 Aug 2012 08:54:01 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SyJ4Z-0002um-B6
	for xen-devel@lists.xensource.com; Mon, 06 Aug 2012 08:53:59 +0000
Received: from [85.158.139.83:43755] by server-8.bemta-5.messagelabs.com id
	6D/65-10278-6268F105; Mon, 06 Aug 2012 08:53:58 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-11.tower-182.messagelabs.com!1344243237!23209699!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgxNjE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16566 invoked from network); 6 Aug 2012 08:53:58 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-11.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 08:53:58 -0000
X-IronPort-AV: E=Sophos;i="4.77,718,1336348800"; d="scan'208";a="13860814"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 08:53:57 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Mon, 6 Aug 2012 09:53:57 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1SyJ4X-0001rf-Ci;
	Mon, 06 Aug 2012 08:53:57 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1SyJ4X-0007DO-9Z;
	Mon, 06 Aug 2012 09:53:57 +0100
To: xen-devel@lists.xensource.com
Message-ID: <osstest-13557-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 6 Aug 2012 09:53:57 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13557: trouble: blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13557 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13557/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 13536
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 13536
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 13536
 build-amd64                   2 host-install(2)         broken REGR. vs. 13536
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 13536
 build-i386                    2 host-install(2)         broken REGR. vs. 13536

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  6ccad16b50b6
baseline version:
 xen                  3d17148e465c

------------------------------------------------------------
People who touched revisions under test:
  Christoph Egger <Christoph.Egger@amd.com>
  Dario Faggioli <dario.faggioli@citrix.com>
  David Vrabel <david.vrabel@citrix.com>
  Fabio Fantoni <fabio.fantoni@heliman.it>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Campbell <ian.campbell@eu.citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Olaf Hering <olaf@aepfle.de>
  Roger Pau Monne <roger.pau@citrix.com>
  Santosh Jodh <santosh.jodh@citrix.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 653 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 13536
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 13536
 build-amd64                   2 host-install(2)         broken REGR. vs. 13536
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 13536
 build-i386                    2 host-install(2)         broken REGR. vs. 13536

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  6ccad16b50b6
baseline version:
 xen                  3d17148e465c

------------------------------------------------------------
People who touched revisions under test:
  Christoph Egger <Christoph.Egger@amd.com>
  Dario Faggioli <dario.faggioli@citrix.com>
  David Vrabel <david.vrabel@citrix.com>
  Fabio Fantoni <fabio.fantoni@heliman.it>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Campbell <ian.campbell@eu.citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Olaf Hering <olaf@aepfle.de>
  Roger Pau Monne <roger.pau@citrix.com>
  Santosh Jodh <santosh.jodh@citrix.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 653 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 08:58:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 08:58:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyJ8h-00039C-HF; Mon, 06 Aug 2012 08:58:15 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SyJ8g-000394-FE
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 08:58:14 +0000
Received: from [85.158.139.83:65155] by server-4.bemta-5.messagelabs.com id
	30/5A-27831-5278F105; Mon, 06 Aug 2012 08:58:13 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-182.messagelabs.com!1344243492!29728419!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgxNjE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23446 invoked from network); 6 Aug 2012 08:58:13 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-9.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 08:58:13 -0000
X-IronPort-AV: E=Sophos;i="4.77,718,1336348800"; d="scan'208";a="13860913"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 08:58:12 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Mon, 6 Aug 2012
	09:58:12 +0100
Message-ID: <1344243491.11339.11.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: xen-devel <xen-devel@lists.xen.org>
Date: Mon, 6 Aug 2012 09:58:11 +0100
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Keir Fraser <keir@xen.org>
Subject: [Xen-devel] 4.2 TODO / Release Plan
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Plan for a 4.2 release:
http://lists.xen.org/archives/html/xen-devel/2012-03/msg00793.html

The timeline is as follows:

19 March        -- TODO list locked down
2 April         -- Feature Freeze
30 July         -- First release candidate      << DONE
Weekly          -- RCN+1 until release          << WE ARE HERE

We released RC1 last week. Many of the known issues are now fixed in
Mercurial and I expect rc2 will follow once a test push has
happened (an infrastructure failure prevented this from happening over
the weekend).

The updated TODO list follows.

hypervisor, blockers:

    * None

tools, blockers:

    * libxl stable API -- we would like 4.2 to define a stable API
      which downstreams can start to rely on not changing. Aspects of
      this are:

        * Interfaces which may need to be async:

            * libxl_device_pci_add (and remove). (Ian C, DONE)

    * xl compatibility with xm:

        * No known issues

    * [CHECK] More formally deprecate xm/xend. Manpage patches already
      in tree. Needs release noting and communication around -rc1 to
      remind people to test xl.

    * calling hotplug scripts from xl (Linux and NetBSD) (Roger Pau
      Monné, DONE)

    * Block script support (Ian C, DONE)

    * [CHECK] Confirm that migration from Xen 4.1 -> 4.2 works.

    * [BUG] libxl__devices_destroy has a race against
      plugging/unplugging devices to the domain which can result in
      over- or under-flowing the aodevs array (Roger Pau Monné, Ian
      Jackson, DONE)

    * Bump library SONAMES as necessary.
      <20502.39440.969619.824976@mariner.uk.xensource.com>

hypervisor, nice to have:

    * vMCE save/restore changes, to simplify migration 4.2->4.3 with
      new vMCE in 4.3. (Jinsong Liu, Jan Beulich)

tools, nice to have:

    * xl compatibility with xm:

        * None

    * libxl stable API

        * libxl_wait_for_free_memory/libxl_wait_for_memory_target.
          Interface needs an overhaul, related to
          locking/serialization over domain create. IanJ to add a note
          about this interface being substandard but otherwise defer
          to 4.3. (DONE)

    * xl.cfg(5) documentation patch for qemu-upstream
      videoram/videomem support:
      http://lists.xen.org/archives/html/xen-devel/2012-05/msg00250.html
      qemu-upstream doesn't support specifying videomem size for the
      HVM guest cirrus/stdvga (but this works with
      qemu-xen-traditional). (Pasi Kärkkäinen)

    * [BUG] long stop during the guest boot process with qcow image,
      reported by Intel: http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1821

    * [BUG] vcpu-set doesn't take effect on guest, reported by Intel:
      http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1822

    * Load blktap driver from xencommons initscript if available, thread at:
      <db614e92faf743e20b3f.1337096977@kodo2>. To be fixed more
      properly in 4.3. (Patch posted, discussion, plan to take the simple
      xencommons patch for 4.2 and revisit for 4.3. Ping sent)

    * [BUG] xl allows the same PCI device to be assigned to multiple
      guests. http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1826
      (<E4558C0C96688748837EB1B05BEED75A0FD5574A@SHSMSX102.ccr.corp.intel.com>)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 09:14:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 09:14:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyJNk-0003Ty-1A; Mon, 06 Aug 2012 09:13:48 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SyJNh-0003Tt-Pr
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 09:13:46 +0000
Received: from [85.158.138.51:60507] by server-10.bemta-3.messagelabs.com id
	08/D9-07905-9CA8F105; Mon, 06 Aug 2012 09:13:45 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-174.messagelabs.com!1344244424!9083073!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgxNjE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 677 invoked from network); 6 Aug 2012 09:13:44 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-13.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 09:13:44 -0000
X-IronPort-AV: E=Sophos;i="4.77,718,1336348800"; d="scan'208,217";a="13861270"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 09:13:44 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Mon, 6 Aug 2012
	10:13:44 +0100
Message-ID: <1344244422.11339.17.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Christoph Egger <Christoph.Egger@amd.com>
Date: Mon, 6 Aug 2012 10:13:42 +0100
In-Reply-To: <501BEAD8.3040300@amd.com>
References: <501BDF23.50409@amd.com>
	<1344005133.21372.54.camel@zakaz.uk.xensource.com>
	<501BEAD8.3040300@amd.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] xl segfault when starting a guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


> > A dry run with your syntax seems to work for me e.g. :
> >         xl -N block-attach 0 'file:/hvm-guest/win2008.img,ioemu:hda,w'
> 
> 
> This also crashes for me but works for me with c/s 25727:a8d708fcb347
> reverted.

I can somehow reproduce this too this morning.

It looks like some bits of my original patch were missed when
25727:a8d708fcb347 was applied, specifically the changes to the
iscsi/nbd/enbd prefix handling rule.

Putting that back fixes the issue, although I can't see exactly why, so
I'm suspicious of IanJ's flex rerun (as noted in the commit log). We
both typically use Debian Squeeze, and my patch included the regenerated
files, so I would have expected no change.
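
As background, the intended behaviour of the deprecated prefix rule can be
sketched outside the lexer. This is a hypothetical Python illustration, not
the actual flex-generated code in tools/libxl/libxlu_disk_l.c; the prefix
names follow the commit message's "iscsi or drdb or e?nbd" wording, and the
resolve_deprecated_prefix helper is invented for the example.

```python
import re

# Deprecated disk-spec prefixes as described in the commit message below;
# the exact set and spelling here are an assumption for illustration only.
DEPR_PREFIX = re.compile(r"^(iscsi|drdb|e?nbd):(.*)$")

def resolve_deprecated_prefix(spec):
    """Map e.g. 'nbd:HOST:PORT' to hotplug script 'block-nbd' plus target."""
    m = DEPR_PREFIX.match(spec)
    if m is None:
        return None  # not one of the deprecated prefixes
    prefix, target = m.group(1), m.group(2)
    # The bug being fixed: without prepending "block-", the constructed
    # script name was just the bare prefix rather than the real script.
    return {"script": "block-" + prefix, "target": target}
```

So "nbd:localhost:1234" should resolve to script "block-nbd" with target
"localhost:1234"; without the "block-" prefixing step the hotplug machinery
would go looking for a script named plainly "nbd".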

8<------------------------------------------------------------

# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1344244364 -3600
# Node ID 0dd09d4825a685f1f02560e6201533220ecbfa1f
# Parent  923a7fd08d5e6647a1ed91fcfdeecd9f59aa54fc
libxl: re-apply missing bits of 25739:1781892b19f8 (block script support)

The parts of this patch relating to the following changes were
somehow missed during application:

   This highlighted two bugs in the libxlu disk parser handling of the
   deprecated "<script>:" prefix:

   - It was failing to prefix with "block-" to construct the actual
     script name

   - The regex for matching iscsi or drbd or e?nbd was incorrect

For some reason this seems to have caused xl to segfault although I
can't see why this would be the case.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

diff -r 923a7fd08d5e -r 0dd09d4825a6 tools/libxl/libxlu_disk_l.c
--- a/tools/libxl/libxlu_disk_l.c	Mon Aug 06 10:04:59 2012 +0100
+++ b/tools/libxl/libxlu_disk_l.c	Mon Aug 06 10:12:44 2012 +0100
@@ -370,7 +370,7 @@ struct yy_trans_info
 	flex_int32_t yy_verify;
 	flex_int32_t yy_nxt;
 	};
-static yyconst flex_int16_t yy_acclist[456] =
+static yyconst flex_int16_t yy_acclist[447] =
     {   0,
        24,   24,   26,   22,   23,   25, 8193,   22,   23,   25,
     16385, 8193,   22,   25,16385,   22,   23,   25,   23,   25,
@@ -1188,14 +1188,14 @@ yy_match:
 			while ( yy_chk[yy_base[yy_current_state] + yy_c] != yy_current_state )
 				{
 				yy_current_state = (int) yy_def[yy_current_state];
-				if ( yy_current_state >= 256 )
+				if ( yy_current_state >= 251 )
 					yy_c = yy_meta[(unsigned int) yy_c];
 				}
 			yy_current_state = yy_nxt[yy_base[yy_current_state] + (unsigned int) yy_c];
 			*yyg->yy_state_ptr++ = yy_current_state;
 			++yy_cp;
 			}
-		while ( yy_current_state != 255 );
+		while ( yy_current_state != 250 );
 
 yy_find_action:
 		yy_current_state = *--yyg->yy_state_ptr;
@@ -1245,72 +1245,72 @@ do_action:	/* This label is used only to
 case 1:
 /* rule 1 can match eol */
 YY_RULE_SETUP
-#line 155 "libxlu_disk_l.l"
+#line 159 "libxlu_disk_l.l"
 { /* ignore whitespace before parameters */ }
 	YY_BREAK
 /* ordinary parameters setting enums or strings */
 case 2:
 /* rule 2 can match eol */
 YY_RULE_SETUP
-#line 159 "libxlu_disk_l.l"
+#line 163 "libxlu_disk_l.l"
 { STRIP(','); setformat(DPC, FROMEQUALS); }
 	YY_BREAK
 case 3:
 YY_RULE_SETUP
-#line 161 "libxlu_disk_l.l"
+#line 165 "libxlu_disk_l.l"
 { DPC->disk->is_cdrom = 1; }
 	YY_BREAK
 case 4:
 YY_RULE_SETUP
-#line 162 "libxlu_disk_l.l"
+#line 166 "libxlu_disk_l.l"
 { DPC->disk->is_cdrom = 1; }
 	YY_BREAK
 case 5:
 YY_RULE_SETUP
-#line 163 "libxlu_disk_l.l"
+#line 167 "libxlu_disk_l.l"
 { DPC->disk->is_cdrom = 0; }
 	YY_BREAK
 case 6:
 /* rule 6 can match eol */
 YY_RULE_SETUP
-#line 164 "libxlu_disk_l.l"
+#line 168 "libxlu_disk_l.l"
 { xlu__disk_err(DPC,yytext,"unknown value for type"); }
 	YY_BREAK
 case 7:
 /* rule 7 can match eol */
 YY_RULE_SETUP
-#line 166 "libxlu_disk_l.l"
+#line 170 "libxlu_disk_l.l"
 { STRIP(','); setaccess(DPC, FROMEQUALS); }
 	YY_BREAK
 case 8:
 /* rule 8 can match eol */
 YY_RULE_SETUP
-#line 167 "libxlu_disk_l.l"
+#line 171 "libxlu_disk_l.l"
 { STRIP(','); setbackendtype(DPC,FROMEQUALS); }
 	YY_BREAK
 case 9:
 /* rule 9 can match eol */
 YY_RULE_SETUP
-#line 169 "libxlu_disk_l.l"
+#line 173 "libxlu_disk_l.l"
 { STRIP(','); SAVESTRING("vdev", vdev, FROMEQUALS); }
 	YY_BREAK
 case 10:
 /* rule 10 can match eol */
 YY_RULE_SETUP
-#line 170 "libxlu_disk_l.l"
+#line 174 "libxlu_disk_l.l"
 { STRIP(','); SAVESTRING("script", script, FROMEQUALS); }
 	YY_BREAK
 /* the target magic parameter, eats the rest of the string */
 case 11:
 YY_RULE_SETUP
-#line 174 "libxlu_disk_l.l"
+#line 178 "libxlu_disk_l.l"
 { STRIP(','); SAVESTRING("target", pdev_path, FROMEQUALS); }
 	YY_BREAK
 /* unknown parameters */
 case 12:
 /* rule 12 can match eol */
 YY_RULE_SETUP
-#line 178 "libxlu_disk_l.l"
+#line 182 "libxlu_disk_l.l"
 { xlu__disk_err(DPC,yytext,"unknown parameter"); }
 	YY_BREAK
 /* deprecated prefixes */
@@ -1327,24 +1327,31 @@ YY_RULE_SETUP
 	YY_BREAK
 case 14:
 YY_RULE_SETUP
-#line 191 "libxlu_disk_l.l"
+#line 195 "libxlu_disk_l.l"
 {
-		    STRIP(':');
+                    char *newscript;
+                    STRIP(':');
                     DPC->had_depr_prefix=1; DEPRECATE("use `script=...'");
-		    SAVESTRING("script", script, yytext);
-		}
+                    if (asprintf(&newscript, "block-%s", yytext) < 0) {
+                            xlu__disk_err(DPC,yytext,"unable to format script");
+                            return 0;
+                    }
+                    savestring(DPC, "script respecified",
+                               &DPC->disk->script, newscript);
+                    free(newscript);
+                }
 	YY_BREAK
 case 15:
 *yy_cp = yyg->yy_hold_char; /* undo effects of setting up yytext */
 yyg->yy_c_buf_p = yy_cp = yy_bp + 8;
 YY_DO_BEFORE_ACTION; /* set up yytext again */
 YY_RULE_SETUP
-#line 197 "libxlu_disk_l.l"
+#line 208 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
 case 16:
 YY_RULE_SETUP
-#line 198 "libxlu_disk_l.l"
+#line 209 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
 case 17:
@@ -1376,13 +1383,13 @@ case 20:
 yyg->yy_c_buf_p = yy_cp = yy_bp + 4;
 YY_DO_BEFORE_ACTION; /* set up yytext again */
 YY_RULE_SETUP
-#line 202 "libxlu_disk_l.l"
+#line 213 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
 case 21:
 /* rule 21 can match eol */
 YY_RULE_SETUP
-#line 204 "libxlu_disk_l.l"
+#line 215 "libxlu_disk_l.l"
 {
 		  xlu__disk_err(DPC,yytext,"unknown deprecated disk prefix");
 		  return 0;
@@ -1429,17 +1436,17 @@ YY_RULE_SETUP
 	YY_BREAK
 case 24:
 YY_RULE_SETUP
-#line 241 "libxlu_disk_l.l"
+#line 252 "libxlu_disk_l.l"
 {
     xlu__disk_err(DPC,yytext,"bad disk syntax"); return 0;
 }
 	YY_BREAK
 case 25:
 YY_RULE_SETUP
-#line 244 "libxlu_disk_l.l"
+#line 255 "libxlu_disk_l.l"
 YY_FATAL_ERROR( "flex scanner jammed" );
 	YY_BREAK
-#line 1445 "libxlu_disk_l.c"
+#line 1450 "libxlu_disk_l.c"
 			case YY_STATE_EOF(INITIAL):
 			case YY_STATE_EOF(LEXERR):
 				yyterminate();
@@ -1703,7 +1710,7 @@ static int yy_get_next_buffer (yyscan_t 
 		while ( yy_chk[yy_base[yy_current_state] + yy_c] != yy_current_state )
 			{
 			yy_current_state = (int) yy_def[yy_current_state];
-			if ( yy_current_state >= 256 )
+			if ( yy_current_state >= 251 )
 				yy_c = yy_meta[(unsigned int) yy_c];
 			}
 		yy_current_state = yy_nxt[yy_base[yy_current_state] + (unsigned int) yy_c];
@@ -1727,11 +1734,11 @@ static int yy_get_next_buffer (yyscan_t 
 	while ( yy_chk[yy_base[yy_current_state] + yy_c] != yy_current_state )
 		{
 		yy_current_state = (int) yy_def[yy_current_state];
-		if ( yy_current_state >= 256 )
+		if ( yy_current_state >= 251 )
 			yy_c = yy_meta[(unsigned int) yy_c];
 		}
 	yy_current_state = yy_nxt[yy_base[yy_current_state] + (unsigned int) yy_c];
-	yy_is_jam = (yy_current_state == 255);
+	yy_is_jam = (yy_current_state == 250);
 	if ( ! yy_is_jam )
 		*yyg->yy_state_ptr++ = yy_current_state;
 
diff -r 923a7fd08d5e -r 0dd09d4825a6 tools/libxl/libxlu_disk_l.l
--- a/tools/libxl/libxlu_disk_l.l	Mon Aug 06 10:04:59 2012 +0100
+++ b/tools/libxl/libxlu_disk_l.l	Mon Aug 06 10:12:44 2012 +0100
@@ -192,11 +192,18 @@ target=.*	{ STRIP(','); SAVESTRING("targ
                     setformat(DPC, yytext);
                  }
 
-iscsi:|e?nbd:drbd:/.* {
-		    STRIP(':');
+(iscsi|e?nbd|drbd):/.* {
+                    char *newscript;
+                    STRIP(':');
                     DPC->had_depr_prefix=1; DEPRECATE("use `script=...'");
-		    SAVESTRING("script", script, yytext);
-		}
+                    if (asprintf(&newscript, "block-%s", yytext) < 0) {
+                            xlu__disk_err(DPC,yytext,"unable to format script");
+                            return 0;
+                    }
+                    savestring(DPC, "script respecified",
+                               &DPC->disk->script, newscript);
+                    free(newscript);
+                }
 
 tapdisk:/.*	{ DPC->had_depr_prefix=1; DEPRECATE(0); }
 tap2?:/.*	{ DPC->had_depr_prefix=1; DEPRECATE(0); }



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 09:14:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 09:14:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyJNk-0003Ty-1A; Mon, 06 Aug 2012 09:13:48 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SyJNh-0003Tt-Pr
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 09:13:46 +0000
Received: from [85.158.138.51:60507] by server-10.bemta-3.messagelabs.com id
	08/D9-07905-9CA8F105; Mon, 06 Aug 2012 09:13:45 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-174.messagelabs.com!1344244424!9083073!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgxNjE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 677 invoked from network); 6 Aug 2012 09:13:44 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-13.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 09:13:44 -0000
X-IronPort-AV: E=Sophos;i="4.77,718,1336348800"; d="scan'208,217";a="13861270"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 09:13:44 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Mon, 6 Aug 2012
	10:13:44 +0100
Message-ID: <1344244422.11339.17.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Christoph Egger <Christoph.Egger@amd.com>
Date: Mon, 6 Aug 2012 10:13:42 +0100
In-Reply-To: <501BEAD8.3040300@amd.com>
References: <501BDF23.50409@amd.com>
	<1344005133.21372.54.camel@zakaz.uk.xensource.com>
	<501BEAD8.3040300@amd.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] xl segfault when starting a guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


> > A dry run with your syntax seems to work for me e.g. :
> >         xl -N block-attach 0 'file:/hvm-guest/win2008.img,ioemu:hda,w'
> 
> 
> This also crashes for me but works for me with c/s 25727:a8d708fcb347
> reverted.

Somehow I can reproduce it this morning too.

It looks like some bits of my original patch got missed during the
application of 25727:a8d708fcb347, specifically the changes to the
iscsi/nbd/enbd prefix handling rule.

Putting that back fixes the issue, although I can't see exactly why, so
I'm suspicious of IanJ's flex rerun (as noted in the commit log). We
both typically use Debian Squeeze, and my patch included the regenerated
files, so I would have expected no change.

8<------------------------------------------------------------

# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1344244364 -3600
# Node ID 0dd09d4825a685f1f02560e6201533220ecbfa1f
# Parent  923a7fd08d5e6647a1ed91fcfdeecd9f59aa54fc
libxl: re-apply missing bits of 25739:1781892b19f8 (block script support)

The parts of this patch relating to the following changes were
somehow missed during application:

   This highlighted two bugs in the libxlu disk parser handling of the
   deprecated "<script>:" prefix:

   - It was failing to prefix with "block-" to construct the actual
     script name

   - The regex for matching iscsi or drbd or e?nbd was incorrect

For some reason this seems to have caused xl to segfault although I
can't see why this would be the case.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

diff -r 923a7fd08d5e -r 0dd09d4825a6 tools/libxl/libxlu_disk_l.c
--- a/tools/libxl/libxlu_disk_l.c	Mon Aug 06 10:04:59 2012 +0100
+++ b/tools/libxl/libxlu_disk_l.c	Mon Aug 06 10:12:44 2012 +0100
@@ -370,7 +370,7 @@ struct yy_trans_info
 	flex_int32_t yy_verify;
 	flex_int32_t yy_nxt;
 	};
-static yyconst flex_int16_t yy_acclist[456] =
+static yyconst flex_int16_t yy_acclist[447] =
     {   0,
        24,   24,   26,   22,   23,   25, 8193,   22,   23,   25,
     16385, 8193,   22,   25,16385,   22,   23,   25,   23,   25,
@@ -1188,14 +1188,14 @@ yy_match:
 			while ( yy_chk[yy_base[yy_current_state] + yy_c] != yy_current_state )
 				{
 				yy_current_state = (int) yy_def[yy_current_state];
-				if ( yy_current_state >= 256 )
+				if ( yy_current_state >= 251 )
 					yy_c = yy_meta[(unsigned int) yy_c];
 				}
 			yy_current_state = yy_nxt[yy_base[yy_current_state] + (unsigned int) yy_c];
 			*yyg->yy_state_ptr++ = yy_current_state;
 			++yy_cp;
 			}
-		while ( yy_current_state != 255 );
+		while ( yy_current_state != 250 );
 
 yy_find_action:
 		yy_current_state = *--yyg->yy_state_ptr;
@@ -1245,72 +1245,72 @@ do_action:	/* This label is used only to
 case 1:
 /* rule 1 can match eol */
 YY_RULE_SETUP
-#line 155 "libxlu_disk_l.l"
+#line 159 "libxlu_disk_l.l"
 { /* ignore whitespace before parameters */ }
 	YY_BREAK
 /* ordinary parameters setting enums or strings */
 case 2:
 /* rule 2 can match eol */
 YY_RULE_SETUP
-#line 159 "libxlu_disk_l.l"
+#line 163 "libxlu_disk_l.l"
 { STRIP(','); setformat(DPC, FROMEQUALS); }
 	YY_BREAK
 case 3:
 YY_RULE_SETUP
-#line 161 "libxlu_disk_l.l"
+#line 165 "libxlu_disk_l.l"
 { DPC->disk->is_cdrom = 1; }
 	YY_BREAK
 case 4:
 YY_RULE_SETUP
-#line 162 "libxlu_disk_l.l"
+#line 166 "libxlu_disk_l.l"
 { DPC->disk->is_cdrom = 1; }
 	YY_BREAK
 case 5:
 YY_RULE_SETUP
-#line 163 "libxlu_disk_l.l"
+#line 167 "libxlu_disk_l.l"
 { DPC->disk->is_cdrom = 0; }
 	YY_BREAK
 case 6:
 /* rule 6 can match eol */
 YY_RULE_SETUP
-#line 164 "libxlu_disk_l.l"
+#line 168 "libxlu_disk_l.l"
 { xlu__disk_err(DPC,yytext,"unknown value for type"); }
 	YY_BREAK
 case 7:
 /* rule 7 can match eol */
 YY_RULE_SETUP
-#line 166 "libxlu_disk_l.l"
+#line 170 "libxlu_disk_l.l"
 { STRIP(','); setaccess(DPC, FROMEQUALS); }
 	YY_BREAK
 case 8:
 /* rule 8 can match eol */
 YY_RULE_SETUP
-#line 167 "libxlu_disk_l.l"
+#line 171 "libxlu_disk_l.l"
 { STRIP(','); setbackendtype(DPC,FROMEQUALS); }
 	YY_BREAK
 case 9:
 /* rule 9 can match eol */
 YY_RULE_SETUP
-#line 169 "libxlu_disk_l.l"
+#line 173 "libxlu_disk_l.l"
 { STRIP(','); SAVESTRING("vdev", vdev, FROMEQUALS); }
 	YY_BREAK
 case 10:
 /* rule 10 can match eol */
 YY_RULE_SETUP
-#line 170 "libxlu_disk_l.l"
+#line 174 "libxlu_disk_l.l"
 { STRIP(','); SAVESTRING("script", script, FROMEQUALS); }
 	YY_BREAK
 /* the target magic parameter, eats the rest of the string */
 case 11:
 YY_RULE_SETUP
-#line 174 "libxlu_disk_l.l"
+#line 178 "libxlu_disk_l.l"
 { STRIP(','); SAVESTRING("target", pdev_path, FROMEQUALS); }
 	YY_BREAK
 /* unknown parameters */
 case 12:
 /* rule 12 can match eol */
 YY_RULE_SETUP
-#line 178 "libxlu_disk_l.l"
+#line 182 "libxlu_disk_l.l"
 { xlu__disk_err(DPC,yytext,"unknown parameter"); }
 	YY_BREAK
 /* deprecated prefixes */
@@ -1327,24 +1327,31 @@ YY_RULE_SETUP
 	YY_BREAK
 case 14:
 YY_RULE_SETUP
-#line 191 "libxlu_disk_l.l"
+#line 195 "libxlu_disk_l.l"
 {
-		    STRIP(':');
+                    char *newscript;
+                    STRIP(':');
                     DPC->had_depr_prefix=1; DEPRECATE("use `script=...'");
-		    SAVESTRING("script", script, yytext);
-		}
+                    if (asprintf(&newscript, "block-%s", yytext) < 0) {
+                            xlu__disk_err(DPC,yytext,"unable to format script");
+                            return 0;
+                    }
+                    savestring(DPC, "script respecified",
+                               &DPC->disk->script, newscript);
+                    free(newscript);
+                }
 	YY_BREAK
 case 15:
 *yy_cp = yyg->yy_hold_char; /* undo effects of setting up yytext */
 yyg->yy_c_buf_p = yy_cp = yy_bp + 8;
 YY_DO_BEFORE_ACTION; /* set up yytext again */
 YY_RULE_SETUP
-#line 197 "libxlu_disk_l.l"
+#line 208 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
 case 16:
 YY_RULE_SETUP
-#line 198 "libxlu_disk_l.l"
+#line 209 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
 case 17:
@@ -1376,13 +1383,13 @@ case 20:
 yyg->yy_c_buf_p = yy_cp = yy_bp + 4;
 YY_DO_BEFORE_ACTION; /* set up yytext again */
 YY_RULE_SETUP
-#line 202 "libxlu_disk_l.l"
+#line 213 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
 case 21:
 /* rule 21 can match eol */
 YY_RULE_SETUP
-#line 204 "libxlu_disk_l.l"
+#line 215 "libxlu_disk_l.l"
 {
 		  xlu__disk_err(DPC,yytext,"unknown deprecated disk prefix");
 		  return 0;
@@ -1429,17 +1436,17 @@ YY_RULE_SETUP
 	YY_BREAK
 case 24:
 YY_RULE_SETUP
-#line 241 "libxlu_disk_l.l"
+#line 252 "libxlu_disk_l.l"
 {
     xlu__disk_err(DPC,yytext,"bad disk syntax"); return 0;
 }
 	YY_BREAK
 case 25:
 YY_RULE_SETUP
-#line 244 "libxlu_disk_l.l"
+#line 255 "libxlu_disk_l.l"
 YY_FATAL_ERROR( "flex scanner jammed" );
 	YY_BREAK
-#line 1445 "libxlu_disk_l.c"
+#line 1450 "libxlu_disk_l.c"
 			case YY_STATE_EOF(INITIAL):
 			case YY_STATE_EOF(LEXERR):
 				yyterminate();
@@ -1703,7 +1710,7 @@ static int yy_get_next_buffer (yyscan_t 
 		while ( yy_chk[yy_base[yy_current_state] + yy_c] != yy_current_state )
 			{
 			yy_current_state = (int) yy_def[yy_current_state];
-			if ( yy_current_state >= 256 )
+			if ( yy_current_state >= 251 )
 				yy_c = yy_meta[(unsigned int) yy_c];
 			}
 		yy_current_state = yy_nxt[yy_base[yy_current_state] + (unsigned int) yy_c];
@@ -1727,11 +1734,11 @@ static int yy_get_next_buffer (yyscan_t 
 	while ( yy_chk[yy_base[yy_current_state] + yy_c] != yy_current_state )
 		{
 		yy_current_state = (int) yy_def[yy_current_state];
-		if ( yy_current_state >= 256 )
+		if ( yy_current_state >= 251 )
 			yy_c = yy_meta[(unsigned int) yy_c];
 		}
 	yy_current_state = yy_nxt[yy_base[yy_current_state] + (unsigned int) yy_c];
-	yy_is_jam = (yy_current_state == 255);
+	yy_is_jam = (yy_current_state == 250);
 	if ( ! yy_is_jam )
 		*yyg->yy_state_ptr++ = yy_current_state;
 
diff -r 923a7fd08d5e -r 0dd09d4825a6 tools/libxl/libxlu_disk_l.l
--- a/tools/libxl/libxlu_disk_l.l	Mon Aug 06 10:04:59 2012 +0100
+++ b/tools/libxl/libxlu_disk_l.l	Mon Aug 06 10:12:44 2012 +0100
@@ -192,11 +192,18 @@ target=.*	{ STRIP(','); SAVESTRING("targ
                     setformat(DPC, yytext);
                  }
 
-iscsi:|e?nbd:drbd:/.* {
-		    STRIP(':');
+(iscsi|e?nbd|drbd):/.* {
+                    char *newscript;
+                    STRIP(':');
                     DPC->had_depr_prefix=1; DEPRECATE("use `script=...'");
-		    SAVESTRING("script", script, yytext);
-		}
+                    if (asprintf(&newscript, "block-%s", yytext) < 0) {
+                            xlu__disk_err(DPC,yytext,"unable to format script");
+                            return 0;
+                    }
+                    savestring(DPC, "script respecified",
+                               &DPC->disk->script, newscript);
+                    free(newscript);
+                }
 
 tapdisk:/.*	{ DPC->had_depr_prefix=1; DEPRECATE(0); }
 tap2?:/.*	{ DPC->had_depr_prefix=1; DEPRECATE(0); }



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 09:33:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 09:33:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyJg7-0003hl-2I; Mon, 06 Aug 2012 09:32:47 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <daniel.kiper@oracle.com>) id 1SyJg5-0003hg-BW
	for xen-devel@lists.xensource.com; Mon, 06 Aug 2012 09:32:45 +0000
Received: from [85.158.139.83:14817] by server-6.bemta-5.messagelabs.com id
	DC/20-11348-C3F8F105; Mon, 06 Aug 2012 09:32:44 +0000
X-Env-Sender: daniel.kiper@oracle.com
X-Msg-Ref: server-7.tower-182.messagelabs.com!1344245562!24749082!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjcyOTUw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6528 invoked from network); 6 Aug 2012 09:32:43 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-7.tower-182.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Aug 2012 09:32:43 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q769WaIf006839
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 6 Aug 2012 09:32:36 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q769WZdD014552
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 6 Aug 2012 09:32:35 GMT
Received: from abhmt114.oracle.com (abhmt114.oracle.com [141.146.116.66])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q769WY03024005; Mon, 6 Aug 2012 04:32:34 -0500
MIME-Version: 1.0
Message-ID: <4be0efaf-096f-44d7-9429-9c946b06bd73@default>
Date: Mon, 6 Aug 2012 02:32:34 -0700 (PDT)
From: Daniel Kiper <daniel.kiper@oracle.com>
To: <horms@verge.net.au>
X-Mailer: Zimbra on Oracle Beehive
Content-Disposition: inline
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: ptesarik@suse.cz, olaf@aepfle.de, xen-devel@lists.xensource.com,
	kexec@lists.infradead.org, konrad.wilk@oracle.com
Subject: Re: [Xen-devel] [PATCH] kexec-tools: Read always one vmcoreinfo file
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi,

[...]

> > > As I know Petr was going to do some tests.
> > > I have not received any reply from him till now.
> > > Maybe he is busy or on vacation.
> >
> > According to automatic reply he is on vacation until 13/08/2012.
>
> Thanks. I guess we should wait then.
>
> Incidentally, I will be on vacation for a week from the 13th.

I am too but for three weeks. However, I will be reading
email from time to time.

Daniel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 10:13:19 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 10:13:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyKIw-00048C-1o; Mon, 06 Aug 2012 10:12:54 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <peter.maloney@brockmann-consult.de>)
	id 1SyKIv-000487-19
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 10:12:53 +0000
Received: from [85.158.139.83:53288] by server-9.bemta-5.messagelabs.com id
	7A/70-01069-4A89F105; Mon, 06 Aug 2012 10:12:52 +0000
X-Env-Sender: peter.maloney@brockmann-consult.de
X-Msg-Ref: server-8.tower-182.messagelabs.com!1344247971!19169490!1
X-Originating-IP: [212.227.126.171]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiAyMTIuMjI3LjEyNi4xNzEgPT4gNTcwNDU=\n,sa_preprocessor: 
	QmFkIElQOiAyMTIuMjI3LjEyNi4xNzEgPT4gNTcwNDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11953 invoked from network); 6 Aug 2012 10:12:51 -0000
Received: from moutng.kundenserver.de (HELO moutng.kundenserver.de)
	(212.227.126.171) by server-8.tower-182.messagelabs.com with SMTP;
	6 Aug 2012 10:12:51 -0000
Received: from [10.3.0.26] ([141.4.215.32])
	by mrelayeu.kundenserver.de (node=mrbap3) with ESMTP (Nemesis)
	id 0MEF9E-1T0XE23yI4-00FQvd; Mon, 06 Aug 2012 12:12:51 +0200
Message-ID: <501F98A1.4070806@brockmann-consult.de>
Date: Mon, 06 Aug 2012 12:12:49 +0200
From: Peter Maloney <peter.maloney@brockmann-consult.de>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:12.0) Gecko/20120421 Thunderbird/12.0
MIME-Version: 1.0
To: xen-devel@lists.xen.org
X-Enigmail-Version: 1.4.2
X-Provags-ID: V02:K0:E0nmMhXj0wgingjIN2ZT99A8L7xSuzPnod1tD5gl3EL
	mUboVJDrk5ZAYzMLmKxgVHO73lq+XLftQsh5QHwI8HUX2s/bQB
	+kl1Cn49hc/twRMT3b8Mppo6vRgoJr7zii/9jGBStpNhCV7Tcy
	3I+g85P1XaPglDuYfsTQKfO8H97cRk56/AwuJeFUdb27ypg/AR
	E1u+nLrVoSnSkNU3tLL4OHLiVNRLAwXcdzgNcz6ZzoZEhCNHpc
	MtEz6nzSVnzkZRLI/f1M4c9QRxynt3+aRqBaUTbvcPVWCAES6z
	CSomutAD6k+fMGFAam0V/xJaPrcfRfalEmJOQZTPU7xcK9UBaw
	Vq7QltHafD8chnaQUNizJAPqxQj33C9I7w61MJrxl
Subject: [Xen-devel] 4.1.2 very slow without upstream patches,
 but fast with them, also 4.2 very slow
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

My AMD FX-8150 system with vanilla source code is super slow, both the
dom0 and domUs. However, after I merge the upstream patches I found in
the openSUSE rpm, it runs normally.

I tried 4.2-unstable and it was the same; there was no rc1 when I tested
it about 1.5 weeks ago. 4.2 has the same horrible performance, and
obviously those patches won't apply any more since the 4.2 code looks
completely reorganized, so I'm stuck with 4.1.2.

Here is the rpm I was using at the time:
http://download.opensuse.org/update/12.1/src/xen-4.1.2_16-1.7.1.src.rpm

To see the list of the patches and what order to apply them, see the
spec file.

Please make sure this performance issue is fixed for the 4.2 release.
And I would be happy to test whatever files you send me.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 10:24:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 10:24:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyKTn-0004Hd-6o; Mon, 06 Aug 2012 10:24:07 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SyKTl-0004HY-I1
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 10:24:06 +0000
Received: from [85.158.139.83:45333] by server-4.bemta-5.messagelabs.com id
	94/90-27831-44B9F105; Mon, 06 Aug 2012 10:24:04 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-2.tower-182.messagelabs.com!1344248642!30420406!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgxNjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31777 invoked from network); 6 Aug 2012 10:24:02 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-2.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 10:24:02 -0000
X-IronPort-AV: E=Sophos;i="4.77,718,1336348800"; d="scan'208";a="13862927"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 10:23:53 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Mon, 6 Aug 2012 11:23:53 +0100
Date: Mon, 6 Aug 2012 11:23:31 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: David Erickson <halcyon1981@gmail.com>
In-Reply-To: <CANKx4w-ofYb+2nLzazuh7J3XcFx1361pitGQqBy7FOTbLN2kFg@mail.gmail.com>
Message-ID: <alpine.DEB.2.02.1208061106050.4645@kaball.uk.xensource.com>
References: <CACA08Dwza8TjixUYz1o6PWzfaPwfz1jMiGVNyZE9KcJZ_Ln2oA@mail.gmail.com>
	<CANKx4w9O0bW=GJknP=hnYW5nUAas8AyKhDVjXGMmgeBkRBon5w@mail.gmail.com>
	<CAHyyzzTMX4wcg+DNEL91dWmo0R-6oGJLNH5O50bSUeHkmTWAwQ@mail.gmail.com>
	<CANKx4w9BVBrmQwaCtJr6CsZ6OK=jV+pb35QtNLHp+Jp6=739aA@mail.gmail.com>
	<1342682536.18848.50.camel@dagon.hellion.org.uk>
	<alpine.DEB.2.02.1207191258580.23783@kaball.uk.xensource.com>
	<CANKx4w-UKwt9H45N9Kbox708OBaQEqG3niELp593h2P7oc+pjw@mail.gmail.com>
	<CANKx4w9=X7iXvQBrORgh6aNHEQ--+4QeFwXDBG0HV-DYk7HpxQ@mail.gmail.com>
	<alpine.DEB.2.02.1207311119500.4645@kaball.uk.xensource.com>
	<CANKx4w_9sX9hsgSkwV6Vb8xSVQ_4n+_sFjfXSDEPHQ1seH=9=g@mail.gmail.com>
	<alpine.DEB.2.02.1208011150330.4645@kaball.uk.xensource.com>
	<CANKx4w9AUuAUWtnW59NL7az8VJ-jLJn984hMqvGVwTEfVAvs1A@mail.gmail.com>
	<CANKx4w8Gddh=GAz=mL4Xccykc9_kqi7dS2ER0=-=NxxwWo6VHA@mail.gmail.com>
	<CANKx4w-ofYb+2nLzazuh7J3XcFx1361pitGQqBy7FOTbLN2kFg@mail.gmail.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Anthony Perard <anthony.perard@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	jacek burghardt <jaceksburghardt@gmail.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] LSI SAS2008 Option Rom Failure
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 3 Aug 2012, David Erickson wrote:
> On Thu, Aug 2, 2012 at 10:38 AM, David Erickson <halcyon1981@gmail.com> wrote:
> > On Wed, Aug 1, 2012 at 10:52 AM, David Erickson <halcyon1981@gmail.com> wrote:
> >> On Wed, Aug 1, 2012 at 4:13 AM, Stefano Stabellini
> >> <stefano.stabellini@eu.citrix.com> wrote:
> >>> On Tue, 31 Jul 2012, David Erickson wrote:
> >>>> On Tue, Jul 31, 2012 at 4:39 AM, Stefano Stabellini
> >>>> <stefano.stabellini@eu.citrix.com> wrote:
> >>>> > On Tue, 31 Jul 2012, David Erickson wrote:
> >>>> >> Just got back in town, following up on the prior discussion.  I
> >>>> >> successfully compiled the latest code (25688 and qemu upstream
> >>>> >> 5e3bc7144edd6e4fa2824944e5eb16c28197dd5a), but am still having
> >>>> >> problems during initialization of the card in the guest, in particular
> >>>> >> the unsupported delivery mode 3 which seems to cause interrupt related
> >>>> >> problems during init.  I've again attached the qemu-dm-log, and xl
> >>>> >> dmesg log files, and additionally screenshots of the guest dmesg and
> >>>> >> also for comparison starting the same livecd natively on the box.
> >>>> >
> >>>> > "unsupported delivery mode 3" means that the Linux guest is trying to
> >>>> > remap the MSI onto an event channel but Xen is still trying to deliver
> >>>> > the MSI using the emulated code path anyway.
> >>>> >
> >>>> > Adding
> >>>> >
> >>>> > #define XEN_PT_LOGGING_ENABLED 1
> >>>> >
> >>>> > at the top of hw/xen_pt.h and posting the additional QEMU logs could
> >>>> > be helpful.
> >>>> >
> >>>> > The full Xen logs might also be useful. I would add some more tracing to
> >>>> > the hypervisor too:
> >>>> >
> >>>> > diff --git a/xen/drivers/passthrough/io.c b/xen/drivers/passthrough/io.c
> >>>> > index b5975d1..08f4ab7 100644
> >>>> > --- a/xen/drivers/passthrough/io.c
> >>>> > +++ b/xen/drivers/passthrough/io.c
> >>>> > @@ -474,6 +474,11 @@ static void hvm_pci_msi_assert(
> >>>> >  {
> >>>> >      struct pirq *pirq = dpci_pirq(pirq_dpci);
> >>>> >
> >>>> > +    printk("DEBUG %s pirq=%d hvm_domain_use_pirq=%d emuirq=%d\n", __func__,
> >>>> > +            pirq->pirq,
> >>>> > +            hvm_domain_use_pirq(d, pirq),
> >>>> > +            pirq->arch.hvm.emuirq);
> >>>> > +
> >>>> >      if ( hvm_domain_use_pirq(d, pirq) )
> >>>> >          send_guest_pirq(d, pirq);
> >>>> >      else
> >>>>
> >>>> Hi Stefano-
> >>>> I made the modifications (it looks like that DEFINE hasn't been used
> >>>> in awhile, caused a few compilation issues, I had to prefix most of
> >>>> the logged variables with s->hostaddr.), and am attaching the
> >>>> qemu-dm-ubuntu.log and dmesg from xl.  You referred to full Xen logs,
> >>>> where do I find those at?
> >>>
> >>> Thanks for the logs!
> >>> You can get the full Xen logs from the serial console but you can also
> >>> grab the last few lines with "xl dmesg", like you did and it seems to be
> >>> enough in this case.
> >>>
> >>>
> >>> The initial MSI remapping has been done:
> >>>
> >>> [00:05.0] msi_msix_setup: requested pirq 4 for MSI-X (vec: 0, entry: 0)
> >>> [00:05.0] msi_msix_update: Updating MSI-X with pirq 4 gvec 0 gflags 0x3037 (entry: 0)
> >>>
> >>> But the guest is not issuing the EVTCHNOP_bind_pirq hypercall that is
> >>> necessary to be able to receive event notifications (emuirq=-1 in the
> >>> Xen logs).
> >>>
> >>> Now we need to figure out why: we still need more logs, this time on the
> >>> guest side.
> >>> What is the kernel version that you are using in the guest?
> >>> Could you please add "debug loglevel=9" to the guest kernel command line
> >>> and then post the guest dmesg again?
> >>> It would be great if you could use the emulated serial to get the logs
> >>> rather than a picture. You can do that by adding serial='pty' to the VM
> >>> config file and console=ttyS0 to the guest command line.
> >>> This additional Xen change could also tell us if the EVTCHNOP_bind_pirq
> >>> has been done:
> >>>
> >>>
> >>> diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
> >>> index 53777f8..d65a97a 100644
> >>> --- a/xen/common/event_channel.c
> >>> +++ b/xen/common/event_channel.c
> >>> @@ -405,6 +405,8 @@ static long evtchn_bind_pirq(evtchn_bind_pirq_t *bind)
> >>>  #ifdef CONFIG_X86
> >>>      if ( is_hvm_domain(d) && domain_pirq_to_irq(d, pirq) > 0 )
> >>>          map_domain_emuirq_pirq(d, pirq, IRQ_PT);
> >>> +    printk("DEBUG %s %d pirq=%d irq=%d emuirq=%d\n", __func__, __LINE__,
> >>> +            pirq, domain_pirq_to_irq(d, pirq), domain_pirq_to_emuirq(d, pirq));
> >>>  #endif
> >>>
> >>>   out:
> >>
> >> The guest is an Ubuntu 11.10 livecd, kernel version 3.0.0-12-generic.
> >> I've also attached all the logs, thanks for the tip on the serial
> >> console, very useful.
> >>
> >> Additionally I've attached logs for booting a solaris livecd (my
> >> ultimate goal is to use this HBA card in Solaris), with the serial
> >> console tip I was able to capture its kernel boot as well.
> >
> > I'm attaching another log from Solaris' kernel debugger, I'm not sure
> > how helpful it is but I found it interesting that it didn't detect an
> > Intel IOMMU/ACPI table and unloaded it, then tried AMD - and I'm new
> > to Solaris but comparing this log to one without PCI Passthrough, the
> > npe module never gets loaded without PCI passthrough, so I assume it
> > failed while setting up the AMD IOMMU module then loaded the npe
> > module to report the error.

Unfortunately I don't know much about Solaris, so it doesn't help me
very much.


> Just following up, is there anything I can do to further help debug
> and figure out what is causing the problem here?  I'm assuming since
> it isn't working properly in PV or HVM guests there may be multiple
> bugs.  Is there an easy way to run Ubuntu as HVM only (more friendly
> than Solaris) to try and isolate if that is a separate bug from what
> is being seen with the PV Ubuntu VM?

I didn't get any specific output on the PV MSI setup, probably because
it is only printed when CONFIG_DEBUG is enabled.
Would you be able to rebuild your Ubuntu kernel with the appended patch?


diff --git a/arch/x86/pci/xen.c b/arch/x86/pci/xen.c
index 6e96e65..039f29c 100644
--- a/arch/x86/pci/xen.c
+++ b/arch/x86/pci/xen.c
@@ -90,6 +90,7 @@ static int xen_hvm_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
 	struct msi_desc *msidesc;
 	struct msi_msg msg;
 
+	printk("DEBUG %s %d   %s %s nvec=%d\n",__func__,__LINE__,dev_driver_string(&dev->dev), dev_name(&dev->dev),nvec);
 	list_for_each_entry(msidesc, &dev->msi_list, list) {
 		__read_msi_msg(msidesc, &msg);
 		pirq = MSI_ADDR_EXT_DEST_ID(msg.address_hi) |
@@ -102,7 +103,9 @@ static int xen_hvm_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
 			xen_msi_compose_msg(dev, pirq, &msg);
 			__write_msi_msg(msidesc, &msg);
 			dev_dbg(&dev->dev, "xen: msi bound to pirq=%d\n", pirq);
+			printk("DEBUG %s %d   %s %s pirq=%d\n",__func__,__LINE__,dev_driver_string(&dev->dev), dev_name(&dev->dev),pirq);
 		} else {
+			printk("DEBUG %s %d   %s %s pirq=%d already bound\n",__func__,__LINE__,dev_driver_string(&dev->dev), dev_name(&dev->dev),pirq);
 			dev_dbg(&dev->dev,
 				"xen: msi already bound to pirq=%d\n", pirq);
 		}
@@ -115,9 +118,11 @@ static int xen_hvm_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
 		dev_dbg(&dev->dev,
 			"xen: msi --> pirq=%d --> irq=%d\n", pirq, irq);
 	}
+	printk("DEBUG %s %d   %s %s pirq=%d irq=%d\n",__func__,__LINE__,dev_driver_string(&dev->dev), dev_name(&dev->dev),pirq,irq);
 	return 0;
 
 error:
+	printk("DEBUG %s %d   %s %s error\n",__func__,__LINE__,dev_driver_string(&dev->dev), dev_name(&dev->dev));
 	dev_err(&dev->dev,
 		"Xen PCI frontend has not registered MSI/MSI-X support!\n");
 	return -ENODEV;
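As an aside, the guest-side setup suggested earlier in the thread (emulated serial plus a verbose guest command line) amounts to two lines in the VM config. The snippet below just prints that fragment; `serial` and `extra` are the usual xl/xm domain-config keys, not anything verified against this particular guest:

```shell
# Sketch only: the guest config additions discussed earlier in the thread.
# 'serial' enables the emulated serial port; 'extra' appends to the PV
# guest's kernel command line.
CFG="serial='pty'
extra='debug loglevel=9 console=ttyS0'"
printf '%s\n' "$CFG"
```

With serial='pty', the guest dmesg can then be captured as text over `xl console` (or the pty that xl reports) instead of as a screenshot.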

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 10:25:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 10:25:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyKUV-0004Kx-PD; Mon, 06 Aug 2012 10:24:51 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1SyKUU-0004Kp-Ov
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 10:24:50 +0000
Received: from [85.158.139.83:43112] by server-11.bemta-5.messagelabs.com id
	CA/5F-20400-27B9F105; Mon, 06 Aug 2012 10:24:50 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-15.tower-182.messagelabs.com!1344248686!30370580!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzAwNzE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28984 invoked from network); 6 Aug 2012 10:24:48 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 10:24:48 -0000
X-IronPort-AV: E=Sophos;i="4.77,718,1336363200"; d="scan'208";a="204254481"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 06:24:46 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Mon, 6 Aug 2012 06:24:46 -0400
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1SyKUP-000757-VW;
	Mon, 06 Aug 2012 11:24:45 +0100
Message-ID: <501F9B6D.10706@citrix.com>
Date: Mon, 6 Aug 2012 11:24:45 +0100
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120714 Thunderbird/14.0
MIME-Version: 1.0
To: Peter Maloney <peter.maloney@brockmann-consult.de>
References: <501F98A1.4070806@brockmann-consult.de>
In-Reply-To: <501F98A1.4070806@brockmann-consult.de>
X-Enigmail-Version: 1.4.3
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] 4.1.2 very slow without upstream patches,
 but fast with them, also 4.2 very slow
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 06/08/12 11:12, Peter Maloney wrote:
> my AMD FX-8150 system with vanilla source code is super slow, both the
> dom0 and domUs. However, after I merge the upstream patches I found in
> the openSUSE rpm, it runs normally.
>
> I tried 4.2-unstable and it was the same. There was no rc1 when I tested
> it about 1.5 weeks ago. And 4.2 has the same horrible performance, and
> obviously those patches won't work any more since the 4.2 code looks
> completely reorganized, so I'm stuck with 4.1.2
>
> Here is the rpm I was using at the time:
> http://download.opensuse.org/update/12.1/src/xen-4.1.2_16-1.7.1.src.rpm
>
> To see the list of the patches and what order to apply them, see the
> spec file.
>
> Please make sure this performance issue is fixed for the 4.2 release.
> And I would be happy to test whatever files you send me.
>

Without identifying which patch or patches make a difference for you,
there is very little we can do.  There are 406 patches in that spec
file.  Furthermore, from the file names, I would say that most of the
patches have been backported from unstable.
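Narrowing it down need not mean 406 individual tests: assuming a single patch flips the behavior, a bisection over the ordered series from the spec file gets there in about nine probes. The sketch below is illustrative only — the patch names and the "is it fast with the first N patches applied?" predicate are stand-ins, not taken from the actual spec file:

```shell
# Illustrative bisection over an ordered patch series.
# PATCHES stands in for the ordered list from the spec file;
# is_fast_with_first stands in for "rebuild with the first N patches
# applied and benchmark".  Here we pretend p006 is the fix.
PATCHES="p001 p002 p003 p004 p005 p006 p007 p008"

is_fast_with_first() {
    n=$1; i=0
    for p in $PATCHES; do
        i=$((i + 1))
        [ "$i" -gt "$n" ] && break      # only the first n patches count
        [ "$p" = "p006" ] && return 0   # the (pretend) fix is included
    done
    return 1
}

lo=0      # known bad: no patches applied
hi=8      # known good: all patches applied
while [ $((hi - lo)) -gt 1 ]; do
    mid=$(((lo + hi) / 2))
    if is_fast_with_first "$mid"; then hi=$mid; else lo=$mid; fi
done
echo "first patch that restores performance: number $hi"
```

Each probe is a rebuild and reboot, so for 406 patches this is still a day or two of work, but it is what would let someone point at the specific backport that matters.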

>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

-- 
Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
T: +44 (0)1223 225 900, http://www.citrix.com


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 10:31:22 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 10:31:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyKaa-0004a7-JZ; Mon, 06 Aug 2012 10:31:08 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SyKaZ-0004a2-KK
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 10:31:07 +0000
Received: from [85.158.138.51:59653] by server-5.bemta-3.messagelabs.com id
	7B/62-27557-AEC9F105; Mon, 06 Aug 2012 10:31:06 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-3.tower-174.messagelabs.com!1344249066!22504919!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM4NjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22235 invoked from network); 6 Aug 2012 10:31:06 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-3.tower-174.messagelabs.com with SMTP;
	6 Aug 2012 10:31:06 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 06 Aug 2012 11:31:05 +0100
Message-Id: <501FB9060200007800092D4E@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Mon, 06 Aug 2012 11:31:02 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Peter Maloney" <peter.maloney@brockmann-consult.de>
References: <501F98A1.4070806@brockmann-consult.de>
In-Reply-To: <501F98A1.4070806@brockmann-consult.de>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] 4.1.2 very slow without upstream patches,
 but fast with them, also 4.2 very slow
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 06.08.12 at 12:12, Peter Maloney <peter.maloney@brockmann-consult.de> wrote:
> my AMD FX-8150 system with vanilla source code is super slow, both the
> dom0 and domUs. However, after I merge the upstream patches I found in
> the openSUSE rpm, it runs normally.

I'd be very surprised if you really just took the upstream patches,
and the result was better than 4.2-rc1. After all, what upstream
means is that they were taken from -unstable.

> I tried 4.2-unstable and it was the same. There was no rc1 when I tested
> it about 1.5 weeks ago. And 4.2 has the same horrible performance, and
> obviously those patches won't work any more since the 4.2 code looks
> completely reorganized, so I'm stuck with 4.1.2

Obviously the upstream patches can't be applied to something
that already has all those changes. Other patches, of which we
unfortunately have quite a few, would be a different story.

> Here is the rpm I was using at the time:
> http://download.opensuse.org/update/12.1/src/xen-4.1.2_16-1.7.1.src.rpm 
> 
> To see the list of the patches and what order to apply them, see the
> spec file.

That still won't tell us which patches you did apply.

> Please make sure this performance issue is fixed for the 4.2 release.
> And I would be happy to test whatever files you send me.

The sort of report you're doing isn't that helpful. What would
help is if you could narrow down which patch(es) it is that
make things so much better. Giving 4.1.3-rc a try might also
be worthwhile, though I would hope we don't have a regression
in 4.2.0-rc compared to 4.1.3-rc...
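
For illustration, the narrowing-down suggested above amounts to a binary
search over the patch series: apply patches 1..mid, rebuild, measure, and
keep whichever half still shows the behaviour. A minimal C sketch of the
bookkeeping (the count of 406 comes from the spec file mentioned in this
thread; the helper names are hypothetical):

```c
#include <assert.h>

/* Midpoint of the inclusive suspect range [lo, hi]: patches 1..mid get
 * applied in the next round, the rest are left off. */
static int mid(int lo, int hi)
{
	return lo + (hi - lo) / 2;
}

/* Number of build-and-measure rounds needed to isolate one patch out of
 * n, i.e. ceil(log2(n)). */
static int rounds(int n)
{
	int r = 0;

	while (n > 1) {
		n = (n + 1) / 2;
		r++;
	}
	return r;
}
```

For a 406-patch series this works out to rounds(406) == 9, so roughly nine
rebuilds pinpoint a single responsible patch (assuming there is just one).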

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 10:31:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 10:31:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyKav-0004bn-Vt; Mon, 06 Aug 2012 10:31:29 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SyKau-0004ba-DT
	for xen-devel@lists.xensource.com; Mon, 06 Aug 2012 10:31:28 +0000
Received: from [85.158.143.99:45713] by server-2.bemta-4.messagelabs.com id
	9E/D8-17938-FFC9F105; Mon, 06 Aug 2012 10:31:27 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-2.tower-216.messagelabs.com!1344249086!25131372!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgxNjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4225 invoked from network); 6 Aug 2012 10:31:27 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-2.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 10:31:27 -0000
X-IronPort-AV: E=Sophos;i="4.77,718,1336348800"; d="scan'208";a="13863086"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 10:31:26 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Mon, 6 Aug 2012 11:31:26 +0100
Date: Mon, 6 Aug 2012 11:31:04 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <20120801144418.GM7227@phenom.dumpdata.com>
Message-ID: <alpine.DEB.2.02.1208061127440.4645@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1207251741470.26163@kaball.uk.xensource.com>
	<1343316846-25860-15-git-send-email-stefano.stabellini@eu.citrix.com>
	<20120801144418.GM7227@phenom.dumpdata.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linaro-dev@lists.linaro.org" <linaro-dev@lists.linaro.org>,
	Ian Campbell <Ian.Campbell@citrix.com>, "arnd@arndb.de" <arnd@arndb.de>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"catalin.marinas@arm.com" <catalin.marinas@arm.com>,
	"Tim \(Xen.org\)" <tim@xen.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [PATCH 15/24] xen/arm: receive Xen events on ARM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 1 Aug 2012, Konrad Rzeszutek Wilk wrote:
> On Thu, Jul 26, 2012 at 04:33:57PM +0100, Stefano Stabellini wrote:
> > Compile events.c on ARM.
> > Parse, map and enable the IRQ to get event notifications from the device
> > tree (node "/xen").
> > 
> > On ARM Linux irqs are not enabled by default:
> > 
> > - call enable_percpu_irq for xen_events_irq (drivers are supposed
> > to call enable_irq after request_irq);
> > 
> > - reset the IRQ_NOAUTOEN and IRQ_NOREQUEST flags that are enabled by
> > default on ARM. If IRQ_NOAUTOEN is set, __setup_irq doesn't call
> > irq_startup, that is responsible for calling irq_unmask at startup time.
> > As a result event channels remain masked.
> > 
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > ---
> >  arch/arm/xen/enlighten.c |   33 +++++++++++++++++++++++++++++++++
> >  arch/x86/xen/enlighten.c |    1 +
> >  arch/x86/xen/irq.c       |    1 +
> >  arch/x86/xen/xen-ops.h   |    1 -
> >  drivers/xen/events.c     |   18 +++++++++++++++---
> >  include/xen/events.h     |    2 ++
> >  6 files changed, 52 insertions(+), 4 deletions(-)
> > 
> > diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
> > index 854af1e..60d6d36 100644
> > --- a/arch/arm/xen/enlighten.c
> > +++ b/arch/arm/xen/enlighten.c
> > @@ -7,8 +7,11 @@
> >  #include <xen/grant_table.h>
> >  #include <xen/hvm.h>
> >  #include <xen/xenbus.h>
> > +#include <xen/events.h>
> >  #include <asm/xen/hypervisor.h>
> >  #include <asm/xen/hypercall.h>
> > +#include <linux/interrupt.h>
> > +#include <linux/irqreturn.h>
> >  #include <linux/module.h>
> >  #include <linux/of.h>
> >  #include <linux/of_irq.h>
> > @@ -33,6 +36,8 @@ EXPORT_SYMBOL_GPL(xen_have_vector_callback);
> >  int xen_platform_pci_unplug = XEN_UNPLUG_ALL;
> >  EXPORT_SYMBOL_GPL(xen_platform_pci_unplug);
> >  
> > +static __read_mostly int xen_events_irq = -1;
> > +
> >  int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
> >  			       unsigned long addr,
> >  			       unsigned long mfn, int nr,
> > @@ -65,6 +70,9 @@ int __init xen_guest_init(void)
> >  	if (of_address_to_resource(node, 0, &res))
> >  		return -EINVAL;
> >  	xen_hvm_resume_frames = res.start >> PAGE_SHIFT;
> > +	xen_events_irq = irq_of_parse_and_map(node, 0);
> > +	pr_info("Xen support found, events_irq=%d gnttab_frame_pfn=%lx\n",
> > +			xen_events_irq, xen_hvm_resume_frames);
> >  	xen_domain_type = XEN_HVM_DOMAIN;
> >  
> >  	xen_setup_features();
> > @@ -114,3 +122,28 @@ int __init xen_guest_init(void)
> >  }
> >  EXPORT_SYMBOL_GPL(xen_guest_init);
> >  core_initcall(xen_guest_init);
> > +
> > +static irqreturn_t xen_arm_callback(int irq, void *arg)
> > +{
> > +	xen_hvm_evtchn_do_upcall();
> > +	return 0;
> 
> Um, IRQ_HANDLED?

Yep
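
For reference, the agreed fix is to return IRQ_HANDLED rather than 0
(which equals IRQ_NONE, so the genirq core would count the interrupt as
unhandled). A self-contained sketch of the corrected handler, using
stand-in types in place of the kernel headers:

```c
#include <assert.h>

/* Stand-in for the kernel's irqreturn_t; the values match
 * include/linux/irqreturn.h (IRQ_NONE == 0, which is why a bare
 * "return 0" reads as "not handled"). */
typedef enum irqreturn { IRQ_NONE = 0, IRQ_HANDLED = 1 } irqreturn_t;

static int upcalls;

/* Stub standing in for the real event-channel upcall. */
static void xen_hvm_evtchn_do_upcall(void)
{
	upcalls++;
}

/* Corrected handler: do the upcall, then report the IRQ as handled. */
static irqreturn_t xen_arm_callback(int irq, void *arg)
{
	(void)irq;
	(void)arg;
	xen_hvm_evtchn_do_upcall();
	return IRQ_HANDLED;
}
```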


> > +}
> > +
> > +static int __init xen_init_events(void)
> > +{
> > +	if (!xen_domain() || xen_events_irq < 0)
> > +		return -ENODEV;
> > +
> > +	xen_init_IRQ();
> > +
> > +	if (request_percpu_irq(xen_events_irq, xen_arm_callback,
> > +			"events", xen_vcpu)) {
> > +		pr_err("Error requesting IRQ %d\n", xen_events_irq);
> > +		return -EINVAL;
> > +	}
> > +
> > +	enable_percpu_irq(xen_events_irq, 0);
> > +
> > +	return 0;
> > +}
> > +postcore_initcall(xen_init_events);
> > diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
> > index 6131d43..5a30502 100644
> > --- a/arch/x86/xen/enlighten.c
> > +++ b/arch/x86/xen/enlighten.c
> > @@ -33,6 +33,7 @@
> >  #include <linux/memblock.h>
> >  
> >  #include <xen/xen.h>
> > +#include <xen/events.h>
> >  #include <xen/interface/xen.h>
> >  #include <xen/interface/version.h>
> >  #include <xen/interface/physdev.h>
> > diff --git a/arch/x86/xen/irq.c b/arch/x86/xen/irq.c
> > index 1573376..01a4dc0 100644
> > --- a/arch/x86/xen/irq.c
> > +++ b/arch/x86/xen/irq.c
> > @@ -5,6 +5,7 @@
> >  #include <xen/interface/xen.h>
> >  #include <xen/interface/sched.h>
> >  #include <xen/interface/vcpu.h>
> > +#include <xen/events.h>
> >  
> >  #include <asm/xen/hypercall.h>
> >  #include <asm/xen/hypervisor.h>
> > diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h
> > index 202d4c1..2368295 100644
> > --- a/arch/x86/xen/xen-ops.h
> > +++ b/arch/x86/xen/xen-ops.h
> > @@ -35,7 +35,6 @@ void xen_set_pat(u64);
> >  
> >  char * __init xen_memory_setup(void);
> >  void __init xen_arch_setup(void);
> > -void __init xen_init_IRQ(void);
> >  void xen_enable_sysenter(void);
> >  void xen_enable_syscall(void);
> >  void xen_vcpu_restore(void);
> > diff --git a/drivers/xen/events.c b/drivers/xen/events.c
> > index 7da65d3..9b506b2 100644
> > --- a/drivers/xen/events.c
> > +++ b/drivers/xen/events.c
> > @@ -31,14 +31,16 @@
> >  #include <linux/irqnr.h>
> >  #include <linux/pci.h>
> >  
> > +#ifdef CONFIG_X86
> >  #include <asm/desc.h>
> >  #include <asm/ptrace.h>
> >  #include <asm/irq.h>
> >  #include <asm/idle.h>
> >  #include <asm/io_apic.h>
> > -#include <asm/sync_bitops.h>
> >  #include <asm/xen/page.h>
> >  #include <asm/xen/pci.h>
> > +#endif
> > +#include <asm/sync_bitops.h>
> >  #include <asm/xen/hypercall.h>
> >  #include <asm/xen/hypervisor.h>
> >  
> > @@ -50,6 +52,9 @@
> >  #include <xen/interface/event_channel.h>
> >  #include <xen/interface/hvm/hvm_op.h>
> >  #include <xen/interface/hvm/params.h>
> > +#include <xen/interface/physdev.h>
> > +#include <xen/interface/sched.h>
> > +#include <asm/hw_irq.h>
> >  
> >  /*
> >   * This lock protects updates to the following mapping and reference-count
> > @@ -834,6 +839,7 @@ int bind_evtchn_to_irq(unsigned int evtchn)
> >  		struct irq_info *info = info_for_irq(irq);
> >  		WARN_ON(info == NULL || info->type != IRQT_EVTCHN);
> >  	}
> > +	irq_clear_status_flags(irq, IRQ_NOREQUEST|IRQ_NOAUTOEN);
> 
> I feel that this should be its own commit by itself. I am not certain
> of the implication of this on x86 and I think it deserves some explanation.

OK. It shouldn't have any effects on x86, considering that both
IRQ_NOREQUEST and IRQ_NOAUTOEN are not set there.
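
That reasoning can be illustrated directly: irq_clear_status_flags()
clears the given bits in the descriptor's status word, and clearing bits
that were never set leaves the word unchanged. A sketch with stand-in
flag values (the real definitions live in include/linux/irq.h and need
not have these exact values):

```c
#include <assert.h>

/* Stand-in flag values for illustration only. */
enum { IRQ_NOREQUEST = 1u << 0, IRQ_NOAUTOEN = 1u << 1 };

/* Model of what irq_clear_status_flags() does to the status word:
 * clear the requested bits, leave everything else alone. */
static unsigned int clear_status_flags(unsigned int status, unsigned int clr)
{
	return status & ~clr;
}
```

On ARM both flags start out set, so the call makes the IRQ requestable
and auto-enabled; on x86 neither is set, so the word is unchanged and the
call is a no-op.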


> >  
> >  out:
> >  	mutex_unlock(&irq_mapping_update_lock);
> > @@ -1377,7 +1383,9 @@ void xen_evtchn_do_upcall(struct pt_regs *regs)
> >  {
> >  	struct pt_regs *old_regs = set_irq_regs(regs);
> >  
> > +#ifdef CONFIG_X86
> >  	exit_idle();
> > +#endif
> 
> Doesn't exist? Or is that it does not need it?

It does not exist.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 10:46:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 10:46:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyKpc-0004v4-ER; Mon, 06 Aug 2012 10:46:40 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SyKpa-0004uz-Sd
	for xen-devel@lists.xensource.com; Mon, 06 Aug 2012 10:46:39 +0000
Received: from [85.158.139.83:54280] by server-12.bemta-5.messagelabs.com id
	F1/7A-26304-E80AF105; Mon, 06 Aug 2012 10:46:38 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-5.tower-182.messagelabs.com!1344249995!30493266!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgxNjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25980 invoked from network); 6 Aug 2012 10:46:36 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-5.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 10:46:36 -0000
X-IronPort-AV: E=Sophos;i="4.77,718,1336348800"; d="scan'208";a="13863580"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 10:46:35 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Mon, 6 Aug 2012 11:46:35 +0100
Date: Mon, 6 Aug 2012 11:46:14 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <20120802141341.GE16749@phenom.dumpdata.com>
Message-ID: <alpine.DEB.2.02.1208061142070.4645@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1207251741470.26163@kaball.uk.xensource.com>
	<1343316846-25860-1-git-send-email-stefano.stabellini@eu.citrix.com>
	<50197527.3070007@gmail.com>
	<1343892951.7571.50.camel@dagon.hellion.org.uk>
	<20120802141341.GE16749@phenom.dumpdata.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linaro-dev@lists.linaro.org" <linaro-dev@lists.linaro.org>,
	Ian Campbell <Ian.Campbell@citrix.com>, "arnd@arndb.de" <arnd@arndb.de>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"catalin.marinas@arm.com" <catalin.marinas@arm.com>,
	"Tim \(Xen.org\)" <tim@xen.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Rob Herring <robherring2@gmail.com>, "linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [PATCH 01/24] arm: initial Xen support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2 Aug 2012, Konrad Rzeszutek Wilk wrote:
> On Thu, Aug 02, 2012 at 08:35:51AM +0100, Ian Campbell wrote:
> > On Wed, 2012-08-01 at 19:27 +0100, Rob Herring wrote:
> > > On 07/26/2012 10:33 AM, Stefano Stabellini wrote:
> > > > - Basic hypervisor.h and interface.h definitions.
> > > > - Skeleton enlighten.c, set xen_start_info to an empty struct.
> > > > - Do not limit xen_initial_domain to PV guests.
> > > > 
> > > > The new code only compiles when CONFIG_XEN is set, that is going to be
> > > > added to arch/arm/Kconfig in a later patch.
> > > > 
> > > > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > > > ---
> > > >  arch/arm/Makefile                     |    1 +
> > > >  arch/arm/include/asm/hypervisor.h     |    6 +++
> > > >  arch/arm/include/asm/xen/hypervisor.h |   19 ++++++++++
> > > >  arch/arm/include/asm/xen/interface.h  |   64 +++++++++++++++++++++++++++++++++
> > > 
> > > These headers don't seem particularly ARM specific. Could they be moved
> > > to asm-generic or include/linux?
> > 
> > Or perhaps include/xen.
> > 
> > A bunch of it also looks like x86 specific stuff which has crept in.
> > e.g. PARAVIRT_LAZY_FOO and paravirt_get_lazy_mode() are arch/x86
> > specific and shouldn't be called from common code (and aren't, AFAICT).
> 
> They could be moved out..
> 

They are called from grant-table.c; sigh, I was the one to add them there :-(

interface.h is ARM specific, except for the pvclock structs, which are in
fact marked "XXX".

hypervisor.h is almost empty, but I guess I could move out the following two
lines:

extern struct shared_info *HYPERVISOR_shared_info;
extern struct start_info *xen_start_info;

Considering that each arch is free to map them (or not) the way it
wants, I don't think it is a good idea.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 10:56:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 10:56:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyKyh-00054Q-FW; Mon, 06 Aug 2012 10:56:03 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SyKyg-00054L-1X
	for xen-devel@lists.xensource.com; Mon, 06 Aug 2012 10:56:02 +0000
Received: from [85.158.143.99:44551] by server-2.bemta-4.messagelabs.com id
	8F/01-17938-1C2AF105; Mon, 06 Aug 2012 10:56:01 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-16.tower-216.messagelabs.com!1344250560!18765197!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgxNjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11057 invoked from network); 6 Aug 2012 10:56:00 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 10:56:00 -0000
X-IronPort-AV: E=Sophos;i="4.77,718,1336348800"; d="scan'208";a="13863791"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 10:56:00 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Mon, 6 Aug 2012 11:56:00 +0100
Date: Mon, 6 Aug 2012 11:55:38 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <20120801104237.GB7227@phenom.dumpdata.com>
Message-ID: <alpine.DEB.2.02.1208061146480.4645@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1207251741470.26163@kaball.uk.xensource.com>
	<1343316846-25860-1-git-send-email-stefano.stabellini@eu.citrix.com>
	<20120726163020.GB9222@phenom.dumpdata.com>
	<alpine.DEB.2.02.1207271246080.26163@kaball.uk.xensource.com>
	<20120801104237.GB7227@phenom.dumpdata.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linaro-dev@lists.linaro.org" <linaro-dev@lists.linaro.org>,
	Ian Campbell <Ian.Campbell@citrix.com>, "arnd@arndb.de" <arnd@arndb.de>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"catalin.marinas@arm.com" <catalin.marinas@arm.com>,
	"Tim \(Xen.org\)" <tim@xen.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [PATCH 01/24] arm: initial Xen support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 1 Aug 2012, Konrad Rzeszutek Wilk wrote:
> > > > +struct pvclock_wall_clock {
> > > > +	u32   version;
> > > > +	u32   sec;
> > > > +	u32   nsec;
> > > > +} __attribute__((__packed__));
> > > 
> > > That is weird. It is 4+4+4 = 12 bytes? Don't you want it to be 16 bytes?
> > 
> > I agree that 16 bytes would be a better choice, but it needs to match
> > the struct in Xen that is defined as follows:
> > 
> >     uint32_t wc_version;      /* Version counter: see vcpu_time_info_t. */
> >     uint32_t wc_sec;          /* Secs  00:00:00 UTC, Jan 1, 1970.  */
> >     uint32_t wc_nsec;         /* Nsecs 00:00:00 UTC, Jan 1, 1970.  */
> 
> Would it make sense to add some padding then at least? In both
> cases? Or is it too late for this?

I can see why adding some padding would be useful if the structs were
not packed and we wanted to enforce 32/64-bit compatibility on x86.
However, on ARM the field alignments for integer values are the same on
32 and 64 bit, so the padding wouldn't make a difference.
In any case both structs are packed, so the compiler forces the layout
to be the same.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 11:18:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 11:18:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyLJz-0005Im-Ek; Mon, 06 Aug 2012 11:18:03 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SyLJy-0005Ih-31
	for xen-devel@lists.xensource.com; Mon, 06 Aug 2012 11:18:02 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1344251868!4015807!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgxNjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10543 invoked from network); 6 Aug 2012 11:17:49 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 11:17:49 -0000
X-IronPort-AV: E=Sophos;i="4.77,718,1336348800"; d="scan'208";a="13864255"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 11:17:32 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Mon, 6 Aug 2012 12:17:32 +0100
Date: Mon, 6 Aug 2012 12:17:10 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <20120801142840.GG7227@phenom.dumpdata.com>
Message-ID: <alpine.DEB.2.02.1208061200130.4645@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1207251741470.26163@kaball.uk.xensource.com>
	<1343316846-25860-9-git-send-email-stefano.stabellini@eu.citrix.com>
	<20120801142840.GG7227@phenom.dumpdata.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linaro-dev@lists.linaro.org" <linaro-dev@lists.linaro.org>,
	Ian Campbell <Ian.Campbell@citrix.com>, "arnd@arndb.de" <arnd@arndb.de>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"catalin.marinas@arm.com" <catalin.marinas@arm.com>,
	"Tim \(Xen.org\)" <tim@xen.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [PATCH 09/24] xen/arm: compile and run xenbus
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 1 Aug 2012, Konrad Rzeszutek Wilk wrote:
> On Thu, Jul 26, 2012 at 04:33:51PM +0100, Stefano Stabellini wrote:
> > bind_evtchn_to_irqhandler can legitimately return 0 (irq 0): it is not
> > an error.
> > 
> > If Linux is running as an HVM domain and is running as Dom0, use
> > xenstored_local_init to initialize the xenstore page and event channel.
> > 
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > ---
> >  drivers/xen/xenbus/xenbus_comms.c |    2 +-
> >  drivers/xen/xenbus/xenbus_probe.c |   27 +++++++++++++++++----------
> >  drivers/xen/xenbus/xenbus_xs.c    |    1 +
> >  3 files changed, 19 insertions(+), 11 deletions(-)
> > 
> > diff --git a/drivers/xen/xenbus/xenbus_comms.c b/drivers/xen/xenbus/xenbus_comms.c
> > index 52fe7ad..c5aa55c 100644
> > --- a/drivers/xen/xenbus/xenbus_comms.c
> > +++ b/drivers/xen/xenbus/xenbus_comms.c
> > @@ -224,7 +224,7 @@ int xb_init_comms(void)
> >  		int err;
> >  		err = bind_evtchn_to_irqhandler(xen_store_evtchn, wake_waiting,
> >  						0, "xenbus", &xb_waitq);
> > -		if (err <= 0) {
> > +		if (err < 0) {
> >  			printk(KERN_ERR "XENBUS request irq failed %i\n", err);
> >  			return err;
> >  		}
> > diff --git a/drivers/xen/xenbus/xenbus_probe.c b/drivers/xen/xenbus/xenbus_probe.c
> > index b793723..3ae47c2 100644
> > --- a/drivers/xen/xenbus/xenbus_probe.c
> > +++ b/drivers/xen/xenbus/xenbus_probe.c
> > @@ -729,16 +729,23 @@ static int __init xenbus_init(void)
> >  	xenbus_ring_ops_init();
> >  
> >  	if (xen_hvm_domain()) {
> > -		uint64_t v = 0;
> > -		err = hvm_get_parameter(HVM_PARAM_STORE_EVTCHN, &v);
> > -		if (err)
> > -			goto out_error;
> > -		xen_store_evtchn = (int)v;
> > -		err = hvm_get_parameter(HVM_PARAM_STORE_PFN, &v);
> > -		if (err)
> > -			goto out_error;
> > -		xen_store_mfn = (unsigned long)v;
> > -		xen_store_interface = ioremap(xen_store_mfn << PAGE_SHIFT, PAGE_SIZE);
> > +		if (xen_initial_domain()) {
> > +			err = xenstored_local_init();
> > +			xen_store_interface =
> > +				phys_to_virt(xen_store_mfn << PAGE_SHIFT);
> > +		} else {
> > +			uint64_t v = 0;
> > +			err = hvm_get_parameter(HVM_PARAM_STORE_EVTCHN, &v);
> > +			if (err)
> > +				goto out_error;
> > +			xen_store_evtchn = (int)v;
> > +			err = hvm_get_parameter(HVM_PARAM_STORE_PFN, &v);
> > +			if (err)
> > +				goto out_error;
> > +			xen_store_mfn = (unsigned long)v;
> > +			xen_store_interface =
> > +				ioremap(xen_store_mfn << PAGE_SHIFT, PAGE_SIZE);
> > +		}
> 
> This, along with the Hybrid PV dom0 (not yet posted, but it was doing
> similar manipulation here), is getting to be more and more of a rat's nest.
> 
> 
> Any chance we can just abstract the three different XenStore access
> ways and just have something like this:
> 
> 	enum {
> 		USE_UNKNOWN,
> 		USE_HVM,
> 		USE_PV,
> 		USE_LOCAL,
> 		USE_ALREADY_INIT
> 	};
> 	int usage = USE_UNKNOWN;
> 	if (xen_pv_domain())
> 		usage = USE_PV;
> 	if (xen_hvm_domain())
> 		usage = USE_HVM;
> 	if (xen_initial_domain())
> 		usage = USE_LOCAL;
> 
> 	if (xen_start_info->store_evtchn)
> 		usage = USE_ALREADY_INIT;
> 	
> 	.. other overwrites..
> 
> 	switch (usage) {
> 		.. blah blah.
> 	}

I'll give it a try.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 11:34:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 11:34:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyLa0-0005nL-UR; Mon, 06 Aug 2012 11:34:36 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SyLZz-0005nF-CR
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 11:34:35 +0000
Received: from [85.158.143.99:41481] by server-3.bemta-4.messagelabs.com id
	19/01-01511-ACBAF105; Mon, 06 Aug 2012 11:34:34 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-216.messagelabs.com!1344252874!20824694!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgxNjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21873 invoked from network); 6 Aug 2012 11:34:34 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-14.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 11:34:34 -0000
X-IronPort-AV: E=Sophos;i="4.77,718,1336348800"; d="scan'208";a="13864542"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 11:34:33 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Mon, 6 Aug 2012
	12:34:33 +0100
Message-ID: <1344252872.11339.27.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: xen-devel <xen-devel@lists.xen.org>
Date: Mon, 6 Aug 2012 12:34:32 +0100
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Stefano Stabellini <stefano.stabellini@citrix.com>,
	Tim Deegan <tim@xen.org>
Subject: [Xen-devel] [PATCH 0/4] arm: SMP interrupt handling fixes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

With the following series I now successfully boot to a dom0 prompt on a
4 way FastModel.

The most important one is the change to stop disabling the GICD on all
CPUs, although I suspect the per-CPU irq_desc change is pretty important
too. The other two are just incidental things I happened to find while
investigating.

        arm: disable distributor delivery on boot CPU only
        arm: don't bother setting up vtimer, vgic etc on idle CPUs
        arm/vtimer: convert result to ticks when reading CNTPCT register
        arm: Use per-CPU irq_desc for PPIs and SGIs

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 11:35:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 11:35:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyLaU-0005pe-Nn; Mon, 06 Aug 2012 11:35:06 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SyLaT-0005pC-N7
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 11:35:05 +0000
Received: from [85.158.143.99:46013] by server-3.bemta-4.messagelabs.com id
	88/62-01511-9EBAF105; Mon, 06 Aug 2012 11:35:05 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1344252901!29870842!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzAwNzE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31257 invoked from network); 6 Aug 2012 11:35:04 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 11:35:04 -0000
X-IronPort-AV: E=Sophos;i="4.77,718,1336363200"; d="scan'208";a="204258737"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 07:35:01 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Mon, 6 Aug 2012 07:35:01 -0400
Received: from [10.80.246.74] (helo=army.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1SyLaO-0008E8-NY;
	Mon, 06 Aug 2012 12:35:00 +0100
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Mon, 6 Aug 2012 11:34:59 +0000
Message-ID: <1344252900-26148-3-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.9.1
In-Reply-To: <1344252872.11339.27.camel@zakaz.uk.xensource.com>
References: <1344252872.11339.27.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Cc: stefano.stabellini@citrix.com, tim@xen.org,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH 3/4] arm/vtimer: convert result to ticks when
	reading CNTPCT register
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/arch/arm/vtimer.c |    6 ++++--
 1 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/vtimer.c b/xen/arch/arm/vtimer.c
index 6b1152e..92c385c 100644
--- a/xen/arch/arm/vtimer.c
+++ b/xen/arch/arm/vtimer.c
@@ -103,6 +103,7 @@ static int vtimer_emulate_64(struct cpu_user_regs *regs, union hsr hsr)
     struct hsr_cp64 cp64 = hsr.cp64;
     uint32_t *r1 = &regs->r0 + cp64.reg1;
     uint32_t *r2 = &regs->r0 + cp64.reg2;
+    uint64_t ticks;
     s_time_t now;
 
     switch ( hsr.bits & HSR_CP64_REGS_MASK )
@@ -111,8 +112,9 @@ static int vtimer_emulate_64(struct cpu_user_regs *regs, union hsr hsr)
         if ( cp64.read )
         {
             now = NOW() - v->arch.vtimer.offset;
-            *r1 = (uint32_t)(now & 0xffffffff);
-            *r2 = (uint32_t)(now >> 32);
+            ticks = ns_to_ticks(now);
+            *r1 = (uint32_t)(ticks & 0xffffffff);
+            *r2 = (uint32_t)(ticks >> 32);
             return 1;
         }
         else
-- 
1.7.9.1
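
[ For illustration, the fix splits ns_to_ticks(now) rather than the raw
nanosecond value across r1/r2. A minimal model of the two steps,
assuming a hypothetical fixed 24 MHz counter frequency (real hardware
reports the rate in CNTFRQ, and ns_to_ticks() below is a naive stand-in
for Xen's helper): ]

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical fixed counter frequency; real code would derive the
 * rate from the CNTFRQ register. */
#define TIMER_HZ 24000000ULL

/* Stand-in for Xen's ns_to_ticks(): nanoseconds -> counter ticks.
 * (Naive form; the multiply overflows for very large ns values.) */
static uint64_t ns_to_ticks(uint64_t ns)
{
    return ns * TIMER_HZ / 1000000000ULL;
}

/* The emulated CNTPCT read returns a 64-bit value through two 32-bit
 * registers: r1 gets the low word, r2 the high word, exactly as the
 * hunk above does. */
static void split64(uint64_t ticks, uint32_t *r1, uint32_t *r2)
{
    *r1 = (uint32_t)(ticks & 0xffffffff);
    *r2 = (uint32_t)(ticks >> 32);
}
```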


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 11:35:22 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 11:35:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyLaU-0005pX-At; Mon, 06 Aug 2012 11:35:06 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SyLaS-0005pC-QI
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 11:35:04 +0000
Received: from [85.158.143.99:45915] by server-3.bemta-4.messagelabs.com id
	A0/62-01511-8EBAF105; Mon, 06 Aug 2012 11:35:04 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1344252901!29870842!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzAwNzE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31185 invoked from network); 6 Aug 2012 11:35:03 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 11:35:03 -0000
X-IronPort-AV: E=Sophos;i="4.77,718,1336363200"; d="scan'208";a="204258735"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 07:35:01 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Mon, 6 Aug 2012 07:35:01 -0400
Received: from [10.80.246.74] (helo=army.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1SyLaO-0008E8-Ko;
	Mon, 06 Aug 2012 12:35:00 +0100
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Mon, 6 Aug 2012 11:34:58 +0000
Message-ID: <1344252900-26148-2-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.9.1
In-Reply-To: <1344252872.11339.27.camel@zakaz.uk.xensource.com>
References: <1344252872.11339.27.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Cc: stefano.stabellini@citrix.com, tim@xen.org,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH 2/4] arm: don't bother setting up vtimer,
	vgic etc on idle CPUs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/arch/arm/domain.c |    4 ++++
 1 files changed, 4 insertions(+), 0 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index ee58d68..f47db4f 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -386,6 +386,10 @@ int vcpu_initialise(struct vcpu *v)
     v->arch.saved_context.sp = (uint32_t)v->arch.cpu_info;
     v->arch.saved_context.pc = (uint32_t)continue_new_vcpu;
 
+    /* Idle VCPUs don't need the rest of this setup */
+    if ( is_idle_vcpu(v) )
+        return rc;
+
     if ( (rc = vcpu_vgic_init(v)) != 0 )
         return rc;
 
-- 
1.7.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 11:35:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 11:35:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyLaw-0005uh-Gh; Mon, 06 Aug 2012 11:35:34 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SyLav-0005tF-4y
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 11:35:33 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1344252921!11063054!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjI4NDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3279 invoked from network); 6 Aug 2012 11:35:23 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 11:35:23 -0000
X-IronPort-AV: E=Sophos;i="4.77,718,1336363200"; d="scan'208";a="33679939"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 07:35:01 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Mon, 6 Aug 2012 07:35:01 -0400
Received: from [10.80.246.74] (helo=army.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1SyLaO-0008E8-P7;
	Mon, 06 Aug 2012 12:35:00 +0100
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Mon, 6 Aug 2012 11:35:00 +0000
Message-ID: <1344252900-26148-4-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.9.1
In-Reply-To: <1344252872.11339.27.camel@zakaz.uk.xensource.com>
References: <1344252872.11339.27.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Cc: stefano.stabellini@citrix.com, tim@xen.org,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH 4/4] arm: Use per-CPU irq_desc for PPIs and SGIs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The first 32 interrupts on a GIC are the Private Peripheral Interrupts
and Software-Generated Interrupts and are local to each processor.

The irq_desc cannot be shared since we use irq_desc->status to track
whether the IRQ is in-progress etc. Therefore give each processor its
own local irq_desc for each of these interrupts.

We must also route them on each CPU, so do so.

This feels like a bit of a layering violation (since the core ARM
irq.c now knows about things which are really gic.c business).
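
[ A toy model of the lookup split this patch introduces, with plain
arrays standing in for the irq_desc table and DEFINE_PER_CPU; the
NR_* sizes mirror the patch, while the toy_* identifiers are
hypothetical: ]

```c
#include <assert.h>

#define NR_LOCAL_IRQS 32
#define NR_IRQS       1024
#define NR_CPUS       4

struct toy_desc { int irq; };

/* One shared table for SPIs, plus a banked table per CPU for the
 * first 32 IRQs (PPIs and SGIs). */
static struct toy_desc shared_desc[NR_IRQS - NR_LOCAL_IRQS];
static struct toy_desc local_desc[NR_CPUS][NR_LOCAL_IRQS];

/* Mirrors __irq_to_desc(): local IRQs resolve to the per-CPU bank,
 * everything else falls through to the shared array with the
 * NR_LOCAL_IRQS offset applied. */
static struct toy_desc *toy_irq_to_desc(int cpu, int irq)
{
    if (irq < NR_LOCAL_IRQS)
        return &local_desc[cpu][irq];
    return &shared_desc[irq - NR_LOCAL_IRQS];
}
```

[ Two CPUs asking for the same PPI get distinct descriptors, so
irq_desc->status can track in-progress state independently, while an
SPI still resolves to one shared descriptor for all CPUs. ]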

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/arch/arm/gic.c          |   23 ++++++++++++++++++-----
 xen/arch/arm/gic.h          |    3 ++-
 xen/arch/arm/irq.c          |   23 ++++++++++++++++++++++-
 xen/arch/arm/setup.c        |    3 ++-
 xen/arch/arm/smpboot.c      |    9 ++++++++-
 xen/include/asm-arm/irq.h   |   13 +++++++++++++
 xen/include/asm-arm/setup.h |    2 --
 xen/include/xen/irq.h       |   10 ++--------
 8 files changed, 67 insertions(+), 19 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 6f5b0e1..f674111 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -50,9 +50,17 @@ static struct {
     uint64_t lr_mask;
 } gic;
 
-irq_desc_t irq_desc[NR_IRQS];
+static irq_desc_t irq_desc[NR_IRQS];
+static DEFINE_PER_CPU(irq_desc_t[NR_LOCAL_IRQS], local_irq_desc);
+
 unsigned nr_lrs;
 
+irq_desc_t *__irq_to_desc(int irq)
+{
+    if (irq < NR_LOCAL_IRQS) return &this_cpu(local_irq_desc)[irq];
+    return &irq_desc[irq-NR_LOCAL_IRQS];
+}
+
 void gic_save_state(struct vcpu *v)
 {
     int i;
@@ -256,8 +264,8 @@ static void __cpuinit gic_cpu_init(void)
 {
     int i;
 
-    /* The first 32 interrupts (PPI and SGI) are banked per-cpu, so 
-     * even though they are controlled with GICD registers, they must 
+    /* The first 32 interrupts (PPI and SGI) are banked per-cpu, so
+     * even though they are controlled with GICD registers, they must
      * be set up here with the other per-cpu state. */
     GICD[GICD_ICENABLER] = 0xffff0000; /* Disable all PPI */
     GICD[GICD_ISENABLER] = 0x0000ffff; /* Enable all SGI */
@@ -338,7 +346,7 @@ void gic_disable_cpu(void)
     spin_unlock_irq(&gic.lock);
 }
 
-void gic_route_irqs(void)
+void gic_route_ppis(void)
 {
     /* XXX should get these from DT */
     /* GIC maintenance */
@@ -347,6 +355,11 @@ void gic_route_irqs(void)
     gic_route_irq(26, 1, 1u << smp_processor_id(), 0xa0);
     /* Timer */
     gic_route_irq(30, 1, 1u << smp_processor_id(), 0xa0);
+}
+
+void gic_route_spis(void)
+{
+    /* XXX should get these from DT */
     /* UART */
     gic_route_irq(37, 0, 1u << smp_processor_id(), 0xa0);
 }
@@ -404,7 +417,7 @@ int __init setup_irq(unsigned int irq, struct irqaction *new)
 
     rc = __setup_irq(desc, irq, new);
 
-    spin_unlock_irqrestore(&desc->lock,flags);
+    spin_unlock_irqrestore(&desc->lock, flags);
 
     return rc;
 }
diff --git a/xen/arch/arm/gic.h b/xen/arch/arm/gic.h
index fa2cf06..b8f9f201 100644
--- a/xen/arch/arm/gic.h
+++ b/xen/arch/arm/gic.h
@@ -132,7 +132,8 @@ extern int vcpu_vgic_init(struct vcpu *v);
 extern void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int irq,int virtual);
 extern struct pending_irq *irq_to_pending(struct vcpu *v, unsigned int irq);
 
-extern void gic_route_irqs(void);
+extern void gic_route_ppis(void);
+extern void gic_route_spis(void);
 
 extern void gic_inject(void);
 
diff --git a/xen/arch/arm/irq.c b/xen/arch/arm/irq.c
index f9d663b..72e83e6 100644
--- a/xen/arch/arm/irq.c
+++ b/xen/arch/arm/irq.c
@@ -58,20 +58,41 @@ static int __init init_irq_data(void)
 {
     int irq;
 
-    for (irq = 0; irq < NR_IRQS; irq++) {
+    for (irq = NR_LOCAL_IRQS; irq < NR_IRQS; irq++) {
         struct irq_desc *desc = irq_to_desc(irq);
         init_one_irq_desc(desc);
         desc->irq = irq;
         desc->action  = NULL;
     }
+
+    return 0;
+}
+
+static int __cpuinit init_local_irq_data(void)
+{
+    int irq;
+
+    for (irq = 0; irq < NR_LOCAL_IRQS; irq++) {
+        struct irq_desc *desc = irq_to_desc(irq);
+        init_one_irq_desc(desc);
+        desc->irq = irq;
+        desc->action  = NULL;
+    }
+
     return 0;
 }
 
 void __init init_IRQ(void)
 {
+    BUG_ON(init_local_irq_data() < 0);
     BUG_ON(init_irq_data() < 0);
 }
 
+void __cpuinit init_secondary_IRQ(void)
+{
+    BUG_ON(init_local_irq_data() < 0);
+}
+
 int __init request_irq(unsigned int irq,
         void (*handler)(int, void *, struct cpu_user_regs *),
         unsigned long irqflags, const char * devname, void *dev_id)
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index fd70553..c4ca270 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -199,7 +199,8 @@ void __init start_xen(unsigned long boot_phys_offset,
 
     init_IRQ();
 
-    gic_route_irqs();
+    gic_route_ppis();
+    gic_route_spis();
 
     init_maintenance_interrupt();
     init_timer_interrupt();
diff --git a/xen/arch/arm/smpboot.c b/xen/arch/arm/smpboot.c
index 6463a8d..c0750c0 100644
--- a/xen/arch/arm/smpboot.c
+++ b/xen/arch/arm/smpboot.c
@@ -26,6 +26,8 @@
 #include <xen/sched.h>
 #include <xen/smp.h>
 #include <xen/softirq.h>
+#include <xen/timer.h>
+#include <xen/irq.h>
 #include <asm/vfp.h>
 #include "gic.h"
 
@@ -129,8 +131,13 @@ void __cpuinit start_secondary(unsigned long boot_phys_offset,
     enable_vfp();
 
     gic_init_secondary_cpu();
+
+    init_secondary_IRQ();
+
+    gic_route_ppis();
+
+    init_maintenance_interrupt();
     init_timer_interrupt();
-    gic_route_irqs();
 
     set_current(idle_vcpu[cpuid]);
 
diff --git a/xen/include/asm-arm/irq.h b/xen/include/asm-arm/irq.h
index 21e0b85..abde839 100644
--- a/xen/include/asm-arm/irq.h
+++ b/xen/include/asm-arm/irq.h
@@ -17,10 +17,23 @@ struct irq_cfg {
 #define arch_irq_desc irq_cfg
 };
 
+#define NR_LOCAL_IRQS	32
+#define NR_IRQS		1024
+#define nr_irqs NR_IRQS
+
+struct irq_desc;
+
+struct irq_desc *__irq_to_desc(int irq);
+
+#define irq_to_desc(irq)    __irq_to_desc(irq)
+
 void do_IRQ(struct cpu_user_regs *regs, unsigned int irq, int is_fiq);
 
 #define domain_pirq_to_irq(d, pirq) (pirq)
 
+void init_IRQ(void);
+void init_secondary_IRQ(void);
+
 #endif /* _ASM_HW_IRQ_H */
 /*
  * Local variables:
diff --git a/xen/include/asm-arm/setup.h b/xen/include/asm-arm/setup.h
index 6433b4e..8769f66 100644
--- a/xen/include/asm-arm/setup.h
+++ b/xen/include/asm-arm/setup.h
@@ -9,8 +9,6 @@ void arch_get_xen_caps(xen_capabilities_info_t *info);
 
 int construct_dom0(struct domain *d);
 
-void init_IRQ(void);
-
 #endif
 /*
  * Local variables:
diff --git a/xen/include/xen/irq.h b/xen/include/xen/irq.h
index cbe1dbc..5973cce 100644
--- a/xen/include/xen/irq.h
+++ b/xen/include/xen/irq.h
@@ -88,21 +88,15 @@ typedef struct irq_desc {
     struct list_head rl_link;
 } __cacheline_aligned irq_desc_t;
 
+#ifndef irq_to_desc
 #define irq_to_desc(irq)    (&irq_desc[irq])
+#endif
 
 int init_one_irq_desc(struct irq_desc *);
 int arch_init_one_irq_desc(struct irq_desc *);
 
 #define irq_desc_initialized(desc) ((desc)->handler != NULL)
 
-#if defined(__arm__)
-
-#define NR_IRQS		1024
-#define nr_irqs NR_IRQS
-extern irq_desc_t irq_desc[NR_IRQS];
-
-#endif
-
 extern int setup_irq(unsigned int irq, struct irqaction *);
 extern void release_irq(unsigned int irq);
 extern int request_irq(unsigned int irq,
-- 
1.7.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 11:35:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 11:35:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyLaw-0005uh-Gh; Mon, 06 Aug 2012 11:35:34 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SyLav-0005tF-4y
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 11:35:33 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1344252921!11063054!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjI4NDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3279 invoked from network); 6 Aug 2012 11:35:23 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 11:35:23 -0000
X-IronPort-AV: E=Sophos;i="4.77,718,1336363200"; d="scan'208";a="33679939"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 07:35:01 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Mon, 6 Aug 2012 07:35:01 -0400
Received: from [10.80.246.74] (helo=army.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1SyLaO-0008E8-P7;
	Mon, 06 Aug 2012 12:35:00 +0100
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Mon, 6 Aug 2012 11:35:00 +0000
Message-ID: <1344252900-26148-4-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.9.1
In-Reply-To: <1344252872.11339.27.camel@zakaz.uk.xensource.com>
References: <1344252872.11339.27.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Cc: stefano.stabellini@citrix.com, tim@xen.org,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH 4/4] arm: Use per-CPU irq_desc for PPIs and SGIs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The first 32 interrupts on a GIC are the Private Peripheral Interrupts
and Software Generated Interrupts and are local to each processor.

The irq_desc cannot be shared since we use irq_desc->status to track
whether the IRQ is in-progress etc. Therefore give each processor its
own local irq_desc for each of these interrupts.

We must also route them on each CPU, so do so.

This feels like a bit of a layering violation (since the core ARM
irq.c now knows about things which are really gic.c business).
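
The banked/shared split this patch introduces can be illustrated with a small, self-contained C model. The per-CPU area is simulated here with a plain 2-D array plus a fake current-CPU selector; NR_CPUS and that scaffolding are assumptions for illustration only, not Xen code (Xen uses DEFINE_PER_CPU and this_cpu(), as in the gic.c hunk below):

```c
#include <assert.h>

#define NR_LOCAL_IRQS 32   /* SGIs (0-15) + PPIs (16-31), banked per CPU */
#define NR_IRQS       1024
#define NR_CPUS       4    /* illustrative only */

struct irq_desc { int irq; };

/* Shared descriptors for SPIs; the patch keeps these in a single
 * static array, indexed by irq - NR_LOCAL_IRQS. */
static struct irq_desc shared_irq_desc[NR_IRQS - NR_LOCAL_IRQS];

/* Banked descriptors for SGIs/PPIs: one private copy per CPU, so
 * desc->status can track per-CPU in-progress state safely. */
static struct irq_desc local_irq_desc[NR_CPUS][NR_LOCAL_IRQS];

/* Stand-in for this_cpu(): a global selector in this model. */
static int current_cpu;

static struct irq_desc *__irq_to_desc(int irq)
{
    if (irq < NR_LOCAL_IRQS)
        return &local_irq_desc[current_cpu][irq];
    return &shared_irq_desc[irq - NR_LOCAL_IRQS];
}
```

Two CPUs looking up the timer PPI (IRQ 30) get distinct descriptors, while an SPI such as the UART (IRQ 37) resolves to the same descriptor everywhere.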

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/arch/arm/gic.c          |   23 ++++++++++++++++++-----
 xen/arch/arm/gic.h          |    3 ++-
 xen/arch/arm/irq.c          |   23 ++++++++++++++++++++++-
 xen/arch/arm/setup.c        |    3 ++-
 xen/arch/arm/smpboot.c      |    9 ++++++++-
 xen/include/asm-arm/irq.h   |   13 +++++++++++++
 xen/include/asm-arm/setup.h |    2 --
 xen/include/xen/irq.h       |   10 ++--------
 8 files changed, 67 insertions(+), 19 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 6f5b0e1..f674111 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -50,9 +50,17 @@ static struct {
     uint64_t lr_mask;
 } gic;
 
-irq_desc_t irq_desc[NR_IRQS];
+static irq_desc_t irq_desc[NR_IRQS];
+static DEFINE_PER_CPU(irq_desc_t[NR_LOCAL_IRQS], local_irq_desc);
+
 unsigned nr_lrs;
 
+irq_desc_t *__irq_to_desc(int irq)
+{
+    if (irq < NR_LOCAL_IRQS) return &this_cpu(local_irq_desc)[irq];
+    return &irq_desc[irq-NR_LOCAL_IRQS];
+}
+
 void gic_save_state(struct vcpu *v)
 {
     int i;
@@ -256,8 +264,8 @@ static void __cpuinit gic_cpu_init(void)
 {
     int i;
 
-    /* The first 32 interrupts (PPI and SGI) are banked per-cpu, so 
-     * even though they are controlled with GICD registers, they must 
+    /* The first 32 interrupts (PPI and SGI) are banked per-cpu, so
+     * even though they are controlled with GICD registers, they must
      * be set up here with the other per-cpu state. */
     GICD[GICD_ICENABLER] = 0xffff0000; /* Disable all PPI */
     GICD[GICD_ISENABLER] = 0x0000ffff; /* Enable all SGI */
@@ -338,7 +346,7 @@ void gic_disable_cpu(void)
     spin_unlock_irq(&gic.lock);
 }
 
-void gic_route_irqs(void)
+void gic_route_ppis(void)
 {
     /* XXX should get these from DT */
     /* GIC maintenance */
@@ -347,6 +355,11 @@ void gic_route_irqs(void)
     gic_route_irq(26, 1, 1u << smp_processor_id(), 0xa0);
     /* Timer */
     gic_route_irq(30, 1, 1u << smp_processor_id(), 0xa0);
+}
+
+void gic_route_spis(void)
+{
+    /* XXX should get these from DT */
     /* UART */
     gic_route_irq(37, 0, 1u << smp_processor_id(), 0xa0);
 }
@@ -404,7 +417,7 @@ int __init setup_irq(unsigned int irq, struct irqaction *new)
 
     rc = __setup_irq(desc, irq, new);
 
-    spin_unlock_irqrestore(&desc->lock,flags);
+    spin_unlock_irqrestore(&desc->lock, flags);
 
     return rc;
 }
diff --git a/xen/arch/arm/gic.h b/xen/arch/arm/gic.h
index fa2cf06..b8f9f201 100644
--- a/xen/arch/arm/gic.h
+++ b/xen/arch/arm/gic.h
@@ -132,7 +132,8 @@ extern int vcpu_vgic_init(struct vcpu *v);
 extern void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int irq,int virtual);
 extern struct pending_irq *irq_to_pending(struct vcpu *v, unsigned int irq);
 
-extern void gic_route_irqs(void);
+extern void gic_route_ppis(void);
+extern void gic_route_spis(void);
 
 extern void gic_inject(void);
 
diff --git a/xen/arch/arm/irq.c b/xen/arch/arm/irq.c
index f9d663b..72e83e6 100644
--- a/xen/arch/arm/irq.c
+++ b/xen/arch/arm/irq.c
@@ -58,20 +58,41 @@ static int __init init_irq_data(void)
 {
     int irq;
 
-    for (irq = 0; irq < NR_IRQS; irq++) {
+    for (irq = NR_LOCAL_IRQS; irq < NR_IRQS; irq++) {
         struct irq_desc *desc = irq_to_desc(irq);
         init_one_irq_desc(desc);
         desc->irq = irq;
         desc->action  = NULL;
     }
+
+    return 0;
+}
+
+static int __cpuinit init_local_irq_data(void)
+{
+    int irq;
+
+    for (irq = 0; irq < NR_LOCAL_IRQS; irq++) {
+        struct irq_desc *desc = irq_to_desc(irq);
+        init_one_irq_desc(desc);
+        desc->irq = irq;
+        desc->action  = NULL;
+    }
+
     return 0;
 }
 
 void __init init_IRQ(void)
 {
+    BUG_ON(init_local_irq_data() < 0);
     BUG_ON(init_irq_data() < 0);
 }
 
+void __cpuinit init_secondary_IRQ(void)
+{
+    BUG_ON(init_local_irq_data() < 0);
+}
+
 int __init request_irq(unsigned int irq,
         void (*handler)(int, void *, struct cpu_user_regs *),
         unsigned long irqflags, const char * devname, void *dev_id)
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index fd70553..c4ca270 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -199,7 +199,8 @@ void __init start_xen(unsigned long boot_phys_offset,
 
     init_IRQ();
 
-    gic_route_irqs();
+    gic_route_ppis();
+    gic_route_spis();
 
     init_maintenance_interrupt();
     init_timer_interrupt();
diff --git a/xen/arch/arm/smpboot.c b/xen/arch/arm/smpboot.c
index 6463a8d..c0750c0 100644
--- a/xen/arch/arm/smpboot.c
+++ b/xen/arch/arm/smpboot.c
@@ -26,6 +26,8 @@
 #include <xen/sched.h>
 #include <xen/smp.h>
 #include <xen/softirq.h>
+#include <xen/timer.h>
+#include <xen/irq.h>
 #include <asm/vfp.h>
 #include "gic.h"
 
@@ -129,8 +131,13 @@ void __cpuinit start_secondary(unsigned long boot_phys_offset,
     enable_vfp();
 
     gic_init_secondary_cpu();
+
+    init_secondary_IRQ();
+
+    gic_route_ppis();
+
+    init_maintenance_interrupt();
     init_timer_interrupt();
-    gic_route_irqs();
 
     set_current(idle_vcpu[cpuid]);
 
diff --git a/xen/include/asm-arm/irq.h b/xen/include/asm-arm/irq.h
index 21e0b85..abde839 100644
--- a/xen/include/asm-arm/irq.h
+++ b/xen/include/asm-arm/irq.h
@@ -17,10 +17,23 @@ struct irq_cfg {
 #define arch_irq_desc irq_cfg
 };
 
+#define NR_LOCAL_IRQS	32
+#define NR_IRQS		1024
+#define nr_irqs NR_IRQS
+
+struct irq_desc;
+
+struct irq_desc *__irq_to_desc(int irq);
+
+#define irq_to_desc(irq)    __irq_to_desc(irq)
+
 void do_IRQ(struct cpu_user_regs *regs, unsigned int irq, int is_fiq);
 
 #define domain_pirq_to_irq(d, pirq) (pirq)
 
+void init_IRQ(void);
+void init_secondary_IRQ(void);
+
 #endif /* _ASM_HW_IRQ_H */
 /*
  * Local variables:
diff --git a/xen/include/asm-arm/setup.h b/xen/include/asm-arm/setup.h
index 6433b4e..8769f66 100644
--- a/xen/include/asm-arm/setup.h
+++ b/xen/include/asm-arm/setup.h
@@ -9,8 +9,6 @@ void arch_get_xen_caps(xen_capabilities_info_t *info);
 
 int construct_dom0(struct domain *d);
 
-void init_IRQ(void);
-
 #endif
 /*
  * Local variables:
diff --git a/xen/include/xen/irq.h b/xen/include/xen/irq.h
index cbe1dbc..5973cce 100644
--- a/xen/include/xen/irq.h
+++ b/xen/include/xen/irq.h
@@ -88,21 +88,15 @@ typedef struct irq_desc {
     struct list_head rl_link;
 } __cacheline_aligned irq_desc_t;
 
+#ifndef irq_to_desc
 #define irq_to_desc(irq)    (&irq_desc[irq])
+#endif
 
 int init_one_irq_desc(struct irq_desc *);
 int arch_init_one_irq_desc(struct irq_desc *);
 
 #define irq_desc_initialized(desc) ((desc)->handler != NULL)
 
-#if defined(__arm__)
-
-#define NR_IRQS		1024
-#define nr_irqs NR_IRQS
-extern irq_desc_t irq_desc[NR_IRQS];
-
-#endif
-
 extern int setup_irq(unsigned int irq, struct irqaction *);
 extern void release_irq(unsigned int irq);
 extern int request_irq(unsigned int irq,
-- 
1.7.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 11:35:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 11:35:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyLaw-0005uV-53; Mon, 06 Aug 2012 11:35:34 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SyLav-0005tE-2b
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 11:35:33 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1344252921!11063054!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjI4NDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3222 invoked from network); 6 Aug 2012 11:35:23 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 11:35:23 -0000
X-IronPort-AV: E=Sophos;i="4.77,718,1336363200"; d="scan'208";a="33679938"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 07:35:01 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Mon, 6 Aug 2012 07:35:01 -0400
Received: from [10.80.246.74] (helo=army.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1SyLaO-0008E8-Hz;
	Mon, 06 Aug 2012 12:35:00 +0100
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Mon, 6 Aug 2012 11:34:57 +0000
Message-ID: <1344252900-26148-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.9.1
In-Reply-To: <1344252872.11339.27.camel@zakaz.uk.xensource.com>
References: <1344252872.11339.27.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Cc: stefano.stabellini@citrix.com, tim@xen.org,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH 1/4] arm: disable distributor delivery on boot
	CPU only
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The secondary processors do not call enter_hyp_mode until the boot CPU
has brought most of the system up, including enabling delivery via the
distributor. This means that bringing up secondary CPUs unexpectedly
disables the GICD again, meaning we get no further interrupts on any
CPU.

It's not clear that the distributor actually needs to be disabled to
modify the group registers, but it seems reasonable that the bringup
code should make sure the GICD is disabled even if not doing the
transition to hyp mode, so move this to the main flow of head.S and
only do it on the boot processor.

For completeness also disable the GICC (CPU interface) on all CPUs
too.
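
The group-register writes in the mode_switch.S hunk below follow the standard GICD_IGROUPRn layout: one group bit per interrupt, 32 interrupts per 32-bit register, starting at distributor offset 0x80. A minimal sketch of that arithmetic (the helper names are mine, not from the Xen tree):

```c
#include <assert.h>
#include <stdint.h>

/* GICD_IGROUPR0 sits at offset 0x80 from the distributor base; each
 * subsequent 32-bit register covers the next 32 interrupt IDs. */
#define GICD_IGROUPR0 0x80u

/* Byte offset of the IGROUPR register holding this IRQ's group bit. */
static uint32_t igroupr_offset(unsigned int irq)
{
    return GICD_IGROUPR0 + 4u * (irq / 32u);
}

/* Mask selecting this IRQ's bit within that register. */
static uint32_t igroupr_mask(unsigned int irq)
{
    return 1u << (irq % 32u);
}
```

So the three `str` instructions in the patch write offsets 0x80, 0x84 and 0x88, covering interrupts 0-95; only the first word (the banked SGI/PPI one) needs rewriting on every CPU.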

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/arch/arm/head.S        |   14 ++++++++++++++
 xen/arch/arm/mode_switch.S |   14 +++++++++-----
 2 files changed, 23 insertions(+), 5 deletions(-)

diff --git a/xen/arch/arm/head.S b/xen/arch/arm/head.S
index cdbe011..a69bf72 100644
--- a/xen/arch/arm/head.S
+++ b/xen/arch/arm/head.S
@@ -67,6 +67,12 @@ start:
 	add   r8, r10                /* r8 := paddr(DTB) */
 #endif
 
+	/* Disable interrupt delivery at the GIC's CPU interface */
+	mov   r0, #GIC_BASE_ADDRESS
+	add   r0, r0, #GIC_CR_OFFSET
+	mov   r1, #0
+	str   r1, [r0]
+
 	/* Are we the boot CPU? */
 	mov   r12, #0                /* r12 := CPU ID */
 	mrc   CP32(r0, MPIDR)
@@ -85,8 +91,16 @@ start:
 	ldr   r1, [r0]               /* Which CPU is being booted? */
 	teq   r1, r12                /* Is it us? */
 	bne   1b
+	b     secondary_cpu
 
 boot_cpu:
+	/* Setup which only needs to be done once, on the boot CPU */
+	mov   r0, #GIC_BASE_ADDRESS
+	add   r0, r0, #GIC_DR_OFFSET
+	mov   r1, #0
+	str   r1, [r0]               /* Disable delivery in the distributor */
+
+secondary_cpu:
 #ifdef EARLY_UART_ADDRESS
 	ldr   r11, =EARLY_UART_ADDRESS  /* r11 := UART base address */
 	teq   r12, #0                   /* CPU 0 sets up the UART too */
diff --git a/xen/arch/arm/mode_switch.S b/xen/arch/arm/mode_switch.S
index f5549d7..9211d26 100644
--- a/xen/arch/arm/mode_switch.S
+++ b/xen/arch/arm/mode_switch.S
@@ -49,13 +49,17 @@ enter_hyp_mode:
 	/* Continuing ugliness: Set up the GIC so NS state owns interrupts */
 	mov   r0, #GIC_BASE_ADDRESS
 	add   r0, r0, #GIC_DR_OFFSET
-	mov   r1, #0
-	str   r1, [r0]               /* Disable delivery in the distributor */
 	add   r0, r0, #0x80          /* GICD_IGROUP0 */
-	mov   r2, #0xffffffff        /* All interrupts to group 1 */
+	mov   r2, #0xffffffff        /* Interrupts 0-31 (SGI&PPI) to group 1 */
+	/* The remaining interrupts are Shared Peripheral Interrupts and so
+	 * need reconfiguring only once, on the boot CPU */
 	str   r2, [r0]
-	str   r2, [r0, #4]
-	str   r2, [r0, #8]
+	teq   r12, #0
+	bne   skip_spi
+	str   r2, [r0, #4]           /* Interrupts 32-63 (SPI) to group 1 */
+	str   r2, [r0, #8]           /* Interrupts 64-95 (SPI) to group 1 */
+skip_spi:
+	
 	/* Must drop priority mask below 0x80 before entering NS state */
 	mov   r0, #GIC_BASE_ADDRESS
 	add   r0, r0, #GIC_CR_OFFSET
-- 
1.7.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 11:37:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 11:37:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyLcA-0006Co-0j; Mon, 06 Aug 2012 11:36:50 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SyLc8-0006CN-Sm
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 11:36:48 +0000
Received: from [85.158.143.35:31464] by server-1.bemta-4.messagelabs.com id
	EE/69-24392-05CAF105; Mon, 06 Aug 2012 11:36:48 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1344253004!16984277!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgxNjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13194 invoked from network); 6 Aug 2012 11:36:44 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 11:36:44 -0000
X-IronPort-AV: E=Sophos;i="4.77,718,1336348800"; d="scan'208";a="13864594"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 11:36:28 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Mon, 6 Aug 2012
	12:36:28 +0100
Message-ID: <1344252987.11339.28.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Christoph Egger <Christoph.Egger@amd.com>
Date: Mon, 6 Aug 2012 12:36:27 +0100
In-Reply-To: <1344244422.11339.17.camel@zakaz.uk.xensource.com>
References: <501BDF23.50409@amd.com>
	<1344005133.21372.54.camel@zakaz.uk.xensource.com>
	<501BEAD8.3040300@amd.com>
	<1344244422.11339.17.camel@zakaz.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] xl segfault when starting a guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2012-08-06 at 10:13 +0100, Ian Campbell wrote:

> It looks like in 25727:a8d708fcb347 some bits of my original patch
> got missed during application, specifically the changes to the
> iscsi/nbd/enbd prefix handling rule.

Ian J has reverted 25727:a8d708fcb347 and re-committed the entire patch.
I hope this fixes the issue for you, it does for me.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 11:38:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 11:38:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyLdz-0006V2-HD; Mon, 06 Aug 2012 11:38:43 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SyLdx-0006Um-VH
	for xen-devel@lists.xensource.com; Mon, 06 Aug 2012 11:38:42 +0000
Received: from [85.158.143.99:11651] by server-1.bemta-4.messagelabs.com id
	62/4C-24392-1CCAF105; Mon, 06 Aug 2012 11:38:41 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-4.tower-216.messagelabs.com!1344253120!25180160!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgxNjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26943 invoked from network); 6 Aug 2012 11:38:40 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-4.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 11:38:40 -0000
X-IronPort-AV: E=Sophos;i="4.77,718,1336348800"; d="scan'208";a="13864635"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 11:38:40 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Mon, 6 Aug 2012 12:38:40 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1SyLdw-00038k-1B;
	Mon, 06 Aug 2012 11:38:40 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1SyLdw-0007jO-0X;
	Mon, 06 Aug 2012 12:38:40 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13563-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 6 Aug 2012 12:38:40 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13563: trouble: preparing/queued
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13563 xen-unstable running [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13563/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-pcipt-intel    <none executed>              queued
 test-amd64-amd64-pv             <none executed>              queued
 test-amd64-i386-pv              <none executed>              queued
 test-amd64-i386-rhel6hvm-amd    <none executed>              queued
 test-amd64-amd64-xl             <none executed>              queued
 test-amd64-i386-qemuu-rhel6hvm-amd    <none executed>              queued
 test-i386-i386-xl               <none executed>              queued
 test-amd64-i386-xl-multivcpu    <none executed>              queued
 test-amd64-i386-xl-credit2      <none executed>              queued
 test-amd64-amd64-xl-sedf-pin    <none executed>              queued
 test-amd64-i386-xl              <none executed>              queued
 test-amd64-i386-qemuu-rhel6hvm-intel    <none executed>              queued
 test-i386-i386-pair             <none executed>              queued
 test-amd64-amd64-xl-qemuu-winxpsp3    <none executed>              queued
 test-amd64-i386-pair            <none executed>              queued
 test-i386-i386-xl-qemuu-winxpsp3    <none executed>              queued
 test-amd64-i386-xend-winxpsp3    <none executed>              queued
 test-amd64-amd64-xl-qemuu-win7-amd64    <none executed>              queued
 test-amd64-i386-rhel6hvm-intel    <none executed>              queued
 test-amd64-amd64-win            <none executed>              queued
 test-i386-i386-pv               <none executed>              queued
 test-amd64-amd64-xl-sedf        <none executed>              queued
 test-amd64-i386-win-vcpus1      <none executed>              queued
 test-amd64-amd64-xl-winxpsp3    <none executed>              queued
 test-amd64-i386-xl-win7-amd64    <none executed>              queued
 test-i386-i386-xl-winxpsp3      <none executed>              queued
 test-amd64-amd64-xl-win7-amd64    <none executed>              queued
 build-i386-pvops              1 hosts-allocate           running [st=running!]
 build-i386-oldkern            1 hosts-allocate           running [st=running!]
 build-amd64-oldkern           1 hosts-allocate           running [st=running!]
 build-amd64                   1 hosts-allocate           running [st=running!]
 test-amd64-i386-xl-winxpsp3-vcpus1    <none executed>              queued
 test-amd64-i386-win             <none executed>              queued
 build-amd64-pvops             1 hosts-allocate           running [st=running!]
 build-i386                    1 hosts-allocate           running [st=running!]
 test-amd64-amd64-pair           <none executed>              queued
 test-i386-i386-win              <none executed>              queued
 test-i386-i386-xl-win           <none executed>              queued
 test-amd64-i386-xl-win-vcpus1    <none executed>              queued
 test-amd64-amd64-xl-win         <none executed>              queued

version targeted for testing:
 xen                  353bc0801b11
baseline version:
 xen                  3d17148e465c

------------------------------------------------------------
People who touched revisions under test:
  Christoph Egger <Christoph.Egger@amd.com>
  Dario Faggioli <dario.faggioli@citrix.com>
  David Vrabel <david.vrabel@citrix.com>
  Fabio Fantoni <fabio.fantoni@heliman.it>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Campbell <ian.campbell@eu.citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Olaf Hering <olaf@aepfle.de>
  Roger Pau Monne <roger.pau@citrix.com>
  Santosh Jodh <santosh.jodh@citrix.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  preparing
 build-i386                                                   preparing
 build-amd64-oldkern                                          preparing
 build-i386-oldkern                                           preparing
 build-amd64-pvops                                            preparing
 build-i386-pvops                                             preparing
 test-amd64-amd64-xl                                          queued  
 test-amd64-i386-xl                                           queued  
 test-i386-i386-xl                                            queued  
 test-amd64-i386-rhel6hvm-amd                                 queued  
 test-amd64-i386-qemuu-rhel6hvm-amd                           queued  
 test-amd64-amd64-xl-qemuu-win7-amd64                         queued  
 test-amd64-amd64-xl-win7-amd64                               queued  
 test-amd64-i386-xl-win7-amd64                                queued  
 test-amd64-i386-xl-credit2                                   queued  
 test-amd64-amd64-xl-pcipt-intel                              queued  
 test-amd64-i386-rhel6hvm-intel                               queued  
 test-amd64-i386-qemuu-rhel6hvm-intel                         queued  
 test-amd64-i386-xl-multivcpu                                 queued  
 test-amd64-amd64-pair                                        queued  
 test-amd64-i386-pair                                         queued  
 test-i386-i386-pair                                          queued  
 test-amd64-amd64-xl-sedf-pin                                 queued  
 test-amd64-amd64-pv                                          queued  
 test-amd64-i386-pv                                           queued  
 test-i386-i386-pv                                            queued  
 test-amd64-amd64-xl-sedf                                     queued  
 test-amd64-i386-win-vcpus1                                   queued  
 test-amd64-i386-xl-win-vcpus1                                queued  
 test-amd64-i386-xl-winxpsp3-vcpus1                           queued  
 test-amd64-amd64-win                                         queued  
 test-amd64-i386-win                                          queued  
 test-i386-i386-win                                           queued  
 test-amd64-amd64-xl-win                                      queued  
 test-i386-i386-xl-win                                        queued  
 test-amd64-amd64-xl-qemuu-winxpsp3                           queued  
 test-i386-i386-xl-qemuu-winxpsp3                             queued  
 test-amd64-i386-xend-winxpsp3                                queued  
 test-amd64-amd64-xl-winxpsp3                                 queued  
 test-i386-i386-xl-winxpsp3                                   queued  


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 835 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 11:55:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 11:55:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyLu7-0006rb-8Z; Mon, 06 Aug 2012 11:55:23 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1SyLu5-0006rW-9X
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 11:55:21 +0000
Received: from [85.158.143.35:24197] by server-3.bemta-4.messagelabs.com id
	71/15-01511-8A0BF105; Mon, 06 Aug 2012 11:55:20 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-14.tower-21.messagelabs.com!1344254118!16987195!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16234 invoked from network); 6 Aug 2012 11:55:19 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-14.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Aug 2012 11:55:19 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1SyLu0-000IL0-LP; Mon, 06 Aug 2012 11:55:16 +0000
Date: Mon, 6 Aug 2012 12:55:16 +0100
From: Tim Deegan <tim@xen.org>
To: Ian Campbell <ian.campbell@citrix.com>
Message-ID: <20120806115516.GA68290@ocelot.phlegethon.org>
References: <1344252872.11339.27.camel@zakaz.uk.xensource.com>
	<1344252900-26148-1-git-send-email-ian.campbell@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1344252900-26148-1-git-send-email-ian.campbell@citrix.com>
User-Agent: Mutt/1.4.2.1i
Cc: stefano.stabellini@citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 1/4] arm: disable distributor delivery on
	boot CPU only
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 11:34 +0000 on 06 Aug (1344252897), Ian Campbell wrote:
> The secondary processors do not call enter_hyp_mode until the boot CPU
> has brought most of the system up, including enabling delivery via the
> distributor. This means that bringing up secondary CPUs unexpectedly
> disables the GICD again, meaning we get no further interrupts on any
> CPU.
> 
> It's not clear that the distributor actually needs to be disabled to
> modify the group registers but it seems reasonable that the bringup
> code should make sure the GICD is disabled even if not doing the
> transition to hyp mode, so move this to the main flow of head.S and
> only do it on the boot processor.
> 
> For completeness also disable the GICC (CPU interface) on all CPUs
> too.

I think that having interrupts disabled is something we can rely on the
bootloader/firmware handling for us, so this should all stay in
mode_switch.S for now, and avoid leaking GIC_* magic constants into
head.S.  (Unless you fancy writing a DT parser in assembler :))

Tim.

> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> ---
>  xen/arch/arm/head.S        |   14 ++++++++++++++
>  xen/arch/arm/mode_switch.S |   14 +++++++++-----
>  2 files changed, 23 insertions(+), 5 deletions(-)
> 
> diff --git a/xen/arch/arm/head.S b/xen/arch/arm/head.S
> index cdbe011..a69bf72 100644
> --- a/xen/arch/arm/head.S
> +++ b/xen/arch/arm/head.S
> @@ -67,6 +67,12 @@ start:
>  	add   r8, r10                /* r8 := paddr(DTB) */
>  #endif
>  
> +	/* Disable interrupt delivery at the GIC's CPU interface */
> +	mov   r0, #GIC_BASE_ADDRESS
> +	add   r0, r0, #GIC_CR_OFFSET
> +	mov   r1, #0
> +	str   r1, [r0]
> +
>  	/* Are we the boot CPU? */
>  	mov   r12, #0                /* r12 := CPU ID */
>  	mrc   CP32(r0, MPIDR)
> @@ -85,8 +91,16 @@ start:
>  	ldr   r1, [r0]               /* Which CPU is being booted? */
>  	teq   r1, r12                /* Is it us? */
>  	bne   1b
> +	b     secondary_cpu
>  
>  boot_cpu:
> +	/* Setup which only needs to be done once, on the boot CPU */
> +	mov   r0, #GIC_BASE_ADDRESS
> +	add   r0, r0, #GIC_DR_OFFSET
> +	mov   r1, #0
> +	str   r1, [r0]               /* Disable delivery in the distributor */
> +
> +secondary_cpu:
>  #ifdef EARLY_UART_ADDRESS
>  	ldr   r11, =EARLY_UART_ADDRESS  /* r11 := UART base address */
>  	teq   r12, #0                   /* CPU 0 sets up the UART too */
> diff --git a/xen/arch/arm/mode_switch.S b/xen/arch/arm/mode_switch.S
> index f5549d7..9211d26 100644
> --- a/xen/arch/arm/mode_switch.S
> +++ b/xen/arch/arm/mode_switch.S
> @@ -49,13 +49,17 @@ enter_hyp_mode:
>  	/* Continuing ugliness: Set up the GIC so NS state owns interrupts */
>  	mov   r0, #GIC_BASE_ADDRESS
>  	add   r0, r0, #GIC_DR_OFFSET
> -	mov   r1, #0
> -	str   r1, [r0]               /* Disable delivery in the distributor */
>  	add   r0, r0, #0x80          /* GICD_IGROUP0 */
> -	mov   r2, #0xffffffff        /* All interrupts to group 1 */
> +	mov   r2, #0xffffffff        /* Interrupts 0-31 (SGI&PPI) to group 1 */
> +	/* The remaining interrupts are Shared Peripheral Interrupts and so
> +	 * need reconfiguring only once, on the boot CPU */
>  	str   r2, [r0]
> -	str   r2, [r0, #4]
> -	str   r2, [r0, #8]
> +	teq   r12, #0
> +	bne   skip_spi
> +	str   r2, [r0, #4]           /* Interrupts 32-63 (SPI) to group 1 */
> +	str   r2, [r0, #8]           /* Interrupts 64-95 (SPI) to group 1 */
> +skip_spi:
> +	
>  	/* Must drop priority mask below 0x80 before entering NS state */
>  	mov   r0, #GIC_BASE_ADDRESS
>  	add   r0, r0, #GIC_CR_OFFSET
> -- 
> 1.7.9.1
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 11:55:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 11:55:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyLu7-0006rb-8Z; Mon, 06 Aug 2012 11:55:23 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1SyLu5-0006rW-9X
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 11:55:21 +0000
Received: from [85.158.143.35:24197] by server-3.bemta-4.messagelabs.com id
	71/15-01511-8A0BF105; Mon, 06 Aug 2012 11:55:20 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-14.tower-21.messagelabs.com!1344254118!16987195!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16234 invoked from network); 6 Aug 2012 11:55:19 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-14.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Aug 2012 11:55:19 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1SyLu0-000IL0-LP; Mon, 06 Aug 2012 11:55:16 +0000
Date: Mon, 6 Aug 2012 12:55:16 +0100
From: Tim Deegan <tim@xen.org>
To: Ian Campbell <ian.campbell@citrix.com>
Message-ID: <20120806115516.GA68290@ocelot.phlegethon.org>
References: <1344252872.11339.27.camel@zakaz.uk.xensource.com>
	<1344252900-26148-1-git-send-email-ian.campbell@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1344252900-26148-1-git-send-email-ian.campbell@citrix.com>
User-Agent: Mutt/1.4.2.1i
Cc: stefano.stabellini@citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 1/4] arm: disable distributor delivery on
	boot CPU only
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 11:34 +0000 on 06 Aug (1344252897), Ian Campbell wrote:
> The secondary processors do not call enter_hyp_mode until the boot CPU
> has brought most of the system up, including enabling delivery via the
> distributor. This means that bringing up secondary CPUs unexpectedly
> disables the GICD again, meaning we get no further interrupts on any
> CPU.
> 
> It's not clear that the distributor actually needs to be disabled to
> modify the group registers but it seems reasonable that the bringup
> code should make sure the GICD is disabled even if not doing the
> transition to hyp mode, so move this to the main flow of head.S and
> only do it on the boot processor.
> 
> For completeness also disable the GICC (CPU interface) on all CPUs
> too.

I think that having interrupts disabled is something we can rely on the
bootloader/firmware handling for us, so this should all stay in
mode_switch.S for now, and avoid leaking GIC_* magic constants into
head.S.  (Unless you fancy writing a DT parser in assembler :))

Tim.

> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> ---
>  xen/arch/arm/head.S        |   14 ++++++++++++++
>  xen/arch/arm/mode_switch.S |   14 +++++++++-----
>  2 files changed, 23 insertions(+), 5 deletions(-)
> 
> diff --git a/xen/arch/arm/head.S b/xen/arch/arm/head.S
> index cdbe011..a69bf72 100644
> --- a/xen/arch/arm/head.S
> +++ b/xen/arch/arm/head.S
> @@ -67,6 +67,12 @@ start:
>  	add   r8, r10                /* r8 := paddr(DTB) */
>  #endif
>  
> +	/* Disable interrupt delivery at the GIC's CPU interface */
> +	mov   r0, #GIC_BASE_ADDRESS
> +	add   r0, r0, #GIC_CR_OFFSET
> +	mov   r1, #0
> +	str   r1, [r0]
> +
>  	/* Are we the boot CPU? */
>  	mov   r12, #0                /* r12 := CPU ID */
>  	mrc   CP32(r0, MPIDR)
> @@ -85,8 +91,16 @@ start:
>  	ldr   r1, [r0]               /* Which CPU is being booted? */
>  	teq   r1, r12                /* Is it us? */
>  	bne   1b
> +	b     secondary_cpu
>  
>  boot_cpu:
> +	/* Setup which only needs to be done once, on the boot CPU */
> +	mov   r0, #GIC_BASE_ADDRESS
> +	add   r0, r0, #GIC_DR_OFFSET
> +	mov   r1, #0
> +	str   r1, [r0]               /* Disable delivery in the distributor */
> +
> +secondary_cpu:
>  #ifdef EARLY_UART_ADDRESS
>  	ldr   r11, =EARLY_UART_ADDRESS  /* r11 := UART base address */
>  	teq   r12, #0                   /* CPU 0 sets up the UART too */
> diff --git a/xen/arch/arm/mode_switch.S b/xen/arch/arm/mode_switch.S
> index f5549d7..9211d26 100644
> --- a/xen/arch/arm/mode_switch.S
> +++ b/xen/arch/arm/mode_switch.S
> @@ -49,13 +49,17 @@ enter_hyp_mode:
>  	/* Continuing ugliness: Set up the GIC so NS state owns interrupts */
>  	mov   r0, #GIC_BASE_ADDRESS
>  	add   r0, r0, #GIC_DR_OFFSET
> -	mov   r1, #0
> -	str   r1, [r0]               /* Disable delivery in the distributor */
>  	add   r0, r0, #0x80          /* GICD_IGROUP0 */
> -	mov   r2, #0xffffffff        /* All interrupts to group 1 */
> +	mov   r2, #0xffffffff        /* Interrupts 0-31 (SGI&PPI) to group 1 */
> +	/* The remaining interrupts are Shared Peripheral Interrupts and so
> +	 * need reconfiguring only once, on the boot CPU */
>  	str   r2, [r0]
> -	str   r2, [r0, #4]
> -	str   r2, [r0, #8]
> +	teq   r12, #0
> +	bne   skip_spi
> +	str   r2, [r0, #4]           /* Interrupts 32-63 (SPI) to group 1 */
> +	str   r2, [r0, #8]           /* Interrupts 64-95 (SPI) to group 1 */
> +skip_spi:
> +	
>  	/* Must drop priority mask below 0x80 before entering NS state */
>  	mov   r0, #GIC_BASE_ADDRESS
>  	add   r0, r0, #GIC_CR_OFFSET
> -- 
> 1.7.9.1
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 11:57:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 11:57:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyLvg-0006w7-OP; Mon, 06 Aug 2012 11:57:00 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SyLvf-0006w0-ST
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 11:57:00 +0000
Received: from [85.158.139.83:63620] by server-12.bemta-5.messagelabs.com id
	58/99-26304-B01BF105; Mon, 06 Aug 2012 11:56:59 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-182.messagelabs.com!1344254218!25730012!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgxNjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10833 invoked from network); 6 Aug 2012 11:56:58 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-14.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 11:56:58 -0000
X-IronPort-AV: E=Sophos;i="4.77,718,1336348800"; d="scan'208";a="13865085"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 11:56:21 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Mon, 6 Aug 2012
	12:56:21 +0100
Message-ID: <1344254179.11339.29.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Tim Deegan <tim@xen.org>
Date: Mon, 6 Aug 2012 12:56:19 +0100
In-Reply-To: <20120806115516.GA68290@ocelot.phlegethon.org>
References: <1344252872.11339.27.camel@zakaz.uk.xensource.com>
	<1344252900-26148-1-git-send-email-ian.campbell@citrix.com>
	<20120806115516.GA68290@ocelot.phlegethon.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 1/4] arm: disable distributor delivery on
	boot CPU only
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2012-08-06 at 12:55 +0100, Tim Deegan wrote:
> At 11:34 +0000 on 06 Aug (1344252897), Ian Campbell wrote:
> > The secondary processors do not call enter_hyp_mode until the boot CPU
> > has brought most of the system up, including enabling delivery via the
> > distributor. This means that bringing up secondary CPUs unexpectedly
> > disables the GICD again, meaning we get no further interrupts on any
> > CPU.
> > 
> > It's not clear that the distributor actually needs to be disabled to
> > modify the group registers but it seems reasonable that the bringup
> > code should make sure the GICD is disabled even if not doing the
> > transition to hyp mode, so move this to the main flow of head.S and
> > only do it on the boot processor.
> > 
> > For completeness also disable the GICC (CPU interface) on all CPUs
> > too.
> 
> I think that having interrupts disabled is something we can rely on the
> bootloader/firmware handling for us, so this should all stay in
> mode_switch.S for now, and avoid leaking GIC_* magic constants into
> head.S.  (Unless you fancy writing a DT parser in assembler :))

Not really ;-) I'll move this back then.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 12:41:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 12:41:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyMcZ-0007Sv-0D; Mon, 06 Aug 2012 12:41:19 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <malcolm.crossley@citrix.com>) id 1SyMcW-0007Sq-KL
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 12:41:16 +0000
X-Env-Sender: malcolm.crossley@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1344256869!11076461!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzAwNzE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5498 invoked from network); 6 Aug 2012 12:41:10 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 12:41:10 -0000
X-IronPort-AV: E=Sophos;i="4.77,718,1336363200"; d="scan'208";a="204263439"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 08:41:08 -0400
Received: from [10.80.3.206] (10.80.3.206) by FTLPMAILMX02.citrite.net
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0; Mon, 6 Aug 2012
	08:41:08 -0400
Message-ID: <501FBB63.3050309@citrix.com>
Date: Mon, 6 Aug 2012 13:41:07 +0100
From: Malcolm Crossley <malcolm.crossley@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120714 Thunderbird/14.0
MIME-Version: 1.0
To: <xen-devel@lists.xen.org>
References: <501F98A1.4070806@brockmann-consult.de>
In-Reply-To: <501F98A1.4070806@brockmann-consult.de>
Subject: Re: [Xen-devel] 4.1.2 very slow without upstream patches,
 but fast with them, also 4.2 very slow
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 06/08/12 11:12, Peter Maloney wrote:
> my AMD FX-8150 system with vanilla source code is super slow, both the
> dom0 and domUs. However, after I merge the upstream patches I found in
> the openSUSE rpm, it runs normally.
>
> I tried 4.2-unstable and it was the same. There was no rc1 when I tested
> it about 1.5 weeks ago. And 4.2 has the same horrible performance, and
> obviously those patches won't work any more since the 4.2 code looks
> completely reorganized, so I'm stuck with 4.1.2
>
> Here is the rpm I was using at the time:
> http://download.opensuse.org/update/12.1/src/xen-4.1.2_16-1.7.1.src.rpm
>
> To see the list of the patches and what order to apply them, see the
> spec file.
>
> Please make sure this performance issue is fixed for the 4.2 release.
> And I would be happy to test whatever files you send me.
>
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
I suspect you may need the following patch to improve your 4.1.2 
performance:

http://xenbits.xen.org/hg/xen-4.1-testing.hg/rev/435493696053

The cache flush on every C2 transition is very expensive and causes a 
large slowdown.

4.1.3-rc3 already includes that patch so it would be worth testing that 
version.

Malcolm





_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 13:02:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 13:02:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyMwo-0007fi-0v; Mon, 06 Aug 2012 13:02:14 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wangzhenguo@huawei.com>) id 1SyMwm-0007fd-VE
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 13:02:13 +0000
Received: from [85.158.139.83:36426] by server-3.bemta-5.messagelabs.com id
	7A/D9-03367-450CF105; Mon, 06 Aug 2012 13:02:12 +0000
X-Env-Sender: wangzhenguo@huawei.com
X-Msg-Ref: server-16.tower-182.messagelabs.com!1344258129!23139392!1
X-Originating-IP: [119.145.14.64]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTE5LjE0NS4xNC42NCA9PiA0MDExOA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5293 invoked from network); 6 Aug 2012 13:02:10 -0000
Received: from szxga01-in.huawei.com (HELO szxga01-in.huawei.com)
	(119.145.14.64) by server-16.tower-182.messagelabs.com with SMTP;
	6 Aug 2012 13:02:10 -0000
Received: from 172.24.2.119 (EHLO szxeml210-edg.china.huawei.com)
	([172.24.2.119])
	by szxrg01-dlp.huawei.com (MOS 4.3.4-GA FastPath queued)
	with ESMTP id AMS79774; Mon, 06 Aug 2012 21:02:07 +0800 (CST)
Received: from SZXEML409-HUB.china.huawei.com (10.82.67.136) by
	szxeml210-edg.china.huawei.com (172.24.2.183) with Microsoft SMTP
	Server (TLS) id 14.1.323.3; Mon, 6 Aug 2012 21:01:28 +0800
Received: from SZXEML528-MBX.china.huawei.com ([169.254.4.120]) by
	szxeml409-hub.china.huawei.com ([10.82.67.136]) with mapi id
	14.01.0323.003; Mon, 6 Aug 2012 21:01:18 +0800
From: Wangzhenguo <wangzhenguo@huawei.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Thread-Topic: [Xen-devel] The hypercall will fail and return EFAULT when the
	page becomes COW by forking process in linux
Thread-Index: Ac1YHX79yLSELb+4TLqeIKwBXZDIFf//q9kA//3DDTCAJQongP/9/iEAgAOV5ICADfI9AP/6ldlw
Date: Mon, 6 Aug 2012 13:01:17 +0000
Message-ID: <B44CA5218606DC4FA941D19CCEB27B532CF76ACE@szxeml528-mbx.china.huawei.com>
References: <B44CA5218606DC4FA941D19CCEB27B5323BEF12C@szxeml528-mbs.china.huawei.com>
	<1341221926.4625.12.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B5323BEF99A@szxeml528-mbs.china.huawei.com>
	<1343135163.18971.15.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B532CF769F2@szxeml528-mbx.china.huawei.com>
	<1343221925.18971.93.camel@zakaz.uk.xensource.com>
	<1343988628.21372.46.camel@zakaz.uk.xensource.com>
In-Reply-To: <1343988628.21372.46.camel@zakaz.uk.xensource.com>
Accept-Language: zh-CN, en-US
Content-Language: zh-CN
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.135.65.30]
MIME-Version: 1.0
X-CFilter-Loop: Reflected
Cc: Yangxiaowei <xiaowei.yang@huawei.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Hongtao <bobby.hong@huawei.com>, Yechuan <yechuan@huawei.com>
Subject: Re: [Xen-devel] The hypercall will fail and return EFAULT when the
 page becomes COW by forking process in linux
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Ian Campbell [mailto:Ian.Campbell@citrix.com]
> Sent: Friday, August 03, 2012 6:10 PM
> 
> BTW, I think this would be a good fix to have for 4.2.0 if you are able
> to produce a patch.
> 
Hi, Ian
Here is a patch that makes the following changes:
1. Use madvise(MADV_DONTFORK) after allocating the pages to declare
   that the VMA must not be copied on fork(), and use
   madvise(MADV_DOFORK) to restore the VMA flags before freeing the
   pages.
2. Use mmap/munmap instead of malloc/free to allocate/free the memory,
   bypassing libc.
   free() in libc may not actually release the memory, just return
   control of it to libc. If the memory is marked not to be copied on
   fork(), then after forking:
   a. If the child process calls free() on the memory and then
      malloc(), libc may hand back the same memory; accessing it then
      causes a segmentation fault.
   b. If the child process does not call free(), the bookkeeping libc
      uses to manage malloc'd memory may be leaked.
   mmap/munmap do not have these problems.
3. In the same thread, do not call fork() between
   xc__hypercall_buffer_alloc_pages and
   xc__hypercall_buffer_free_pages, otherwise accessing the hypercall
   buffer in the child process will cause a segmentation fault.
   In the normal case we allocate a hypercall buffer, make the
   hypercall, and free the buffer (or return it to the cache). Nothing
   calls fork() between allocating and freeing hypercall buffers, so I
   don't think this is a problem.

We have tested the patch and it works in both multi-threaded and
multi-process contexts.

Thanks to Ian and xiaowei for the good ideas.

diff -r 3d17148e465c tools/libxc/xc_hcall_buf.c
--- a/tools/libxc/xc_hcall_buf.c	Thu Aug 02 11:49:37 2012 +0200
+++ b/tools/libxc/xc_hcall_buf.c	Mon Aug 06 19:45:00 2012 +0800
@@ -19,6 +19,7 @@
 #include <stdlib.h>
 #include <string.h>
 #include <pthread.h>
+#include <sys/mman.h>
 
 #include "xc_private.h"
 #include "xg_private.h"
@@ -135,6 +136,9 @@
 
     b->hbuf = p;
 
+    /* Do not copy this VMA to the child process on fork(), to avoid
+     * the pages becoming COW while a hypercall is in flight */
+    madvise(p, nr_pages * PAGE_SIZE, MADV_DONTFORK);
+    
     memset(p, 0, nr_pages * PAGE_SIZE);
 
     return b->hbuf;
@@ -145,6 +149,8 @@
     if ( b->hbuf == NULL )
         return;
 
+    /* Restore the VMA flags so the VMA may be copied on fork() again */
+    madvise(b->hbuf, nr_pages * PAGE_SIZE, MADV_DOFORK);
     if ( !hypercall_buffer_cache_free(xch, b->hbuf, nr_pages) )
         xch->ops->u.privcmd.free_hypercall_buffer(xch, xch->ops_handle, b->hbuf, nr_pages);
 }
diff -r 3d17148e465c tools/libxc/xc_linux_osdep.c
--- a/tools/libxc/xc_linux_osdep.c	Thu Aug 02 11:49:37 2012 +0200
+++ b/tools/libxc/xc_linux_osdep.c	Mon Aug 06 19:45:00 2012 +0800
@@ -93,22 +93,14 @@
     size_t size = npages * XC_PAGE_SIZE;
     void *p;
 
-    p = xc_memalign(xch, XC_PAGE_SIZE, size);
-    if (!p)
-        return NULL;
+    p = mmap(NULL, size, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_LOCKED, -1, 0);
 
-    if ( mlock(p, size) < 0 )
-    {
-        free(p);
-        return NULL;
-    }
     return p;
 }
 
 static void linux_privcmd_free_hypercall_buffer(xc_interface *xch, xc_osdep_handle h, void *ptr, int npages)
 {
-    munlock(ptr, npages * XC_PAGE_SIZE);
-    free(ptr);
+    munmap(ptr, npages * XC_PAGE_SIZE);
 }
 
 static int linux_privcmd_hypercall(xc_interface *xch, xc_osdep_handle h, privcmd_hypercall_t *hypercall)
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 13:02:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 13:02:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyMwo-0007fi-0v; Mon, 06 Aug 2012 13:02:14 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wangzhenguo@huawei.com>) id 1SyMwm-0007fd-VE
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 13:02:13 +0000
Received: from [85.158.139.83:36426] by server-3.bemta-5.messagelabs.com id
	7A/D9-03367-450CF105; Mon, 06 Aug 2012 13:02:12 +0000
X-Env-Sender: wangzhenguo@huawei.com
X-Msg-Ref: server-16.tower-182.messagelabs.com!1344258129!23139392!1
X-Originating-IP: [119.145.14.64]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTE5LjE0NS4xNC42NCA9PiA0MDExOA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5293 invoked from network); 6 Aug 2012 13:02:10 -0000
Received: from szxga01-in.huawei.com (HELO szxga01-in.huawei.com)
	(119.145.14.64) by server-16.tower-182.messagelabs.com with SMTP;
	6 Aug 2012 13:02:10 -0000
Received: from 172.24.2.119 (EHLO szxeml210-edg.china.huawei.com)
	([172.24.2.119])
	by szxrg01-dlp.huawei.com (MOS 4.3.4-GA FastPath queued)
	with ESMTP id AMS79774; Mon, 06 Aug 2012 21:02:07 +0800 (CST)
Received: from SZXEML409-HUB.china.huawei.com (10.82.67.136) by
	szxeml210-edg.china.huawei.com (172.24.2.183) with Microsoft SMTP
	Server (TLS) id 14.1.323.3; Mon, 6 Aug 2012 21:01:28 +0800
Received: from SZXEML528-MBX.china.huawei.com ([169.254.4.120]) by
	szxeml409-hub.china.huawei.com ([10.82.67.136]) with mapi id
	14.01.0323.003; Mon, 6 Aug 2012 21:01:18 +0800
From: Wangzhenguo <wangzhenguo@huawei.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Thread-Topic: [Xen-devel] The hypercall will fail and return EFAULT when the
	page becomes COW by forking process in linux
Thread-Index: Ac1YHX79yLSELb+4TLqeIKwBXZDIFf//q9kA//3DDTCAJQongP/9/iEAgAOV5ICADfI9AP/6ldlw
Date: Mon, 6 Aug 2012 13:01:17 +0000
Message-ID: <B44CA5218606DC4FA941D19CCEB27B532CF76ACE@szxeml528-mbx.china.huawei.com>
References: <B44CA5218606DC4FA941D19CCEB27B5323BEF12C@szxeml528-mbs.china.huawei.com>
	<1341221926.4625.12.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B5323BEF99A@szxeml528-mbs.china.huawei.com>
	<1343135163.18971.15.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B532CF769F2@szxeml528-mbx.china.huawei.com>
	<1343221925.18971.93.camel@zakaz.uk.xensource.com>
	<1343988628.21372.46.camel@zakaz.uk.xensource.com>
In-Reply-To: <1343988628.21372.46.camel@zakaz.uk.xensource.com>
Accept-Language: zh-CN, en-US
Content-Language: zh-CN
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.135.65.30]
MIME-Version: 1.0
X-CFilter-Loop: Reflected
Cc: Yangxiaowei <xiaowei.yang@huawei.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Hongtao <bobby.hong@huawei.com>, Yechuan <yechuan@huawei.com>
Subject: Re: [Xen-devel] The hypercall will fail and return EFAULT when the
 page becomes COW by forking process in linux
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Ian Campbell [mailto:Ian.Campbell@citrix.com]
> Sent: Friday, August 03, 2012 6:10 PM
> 
> BTW, I think this would be a good fix to have for 4.2.0 if you are able
> to produce a patch.
> 
Hi, Ian
There is a patch that changes:
1. use madvise(MADV_DONTFORK) decleare that don't copy the vma when fork 
   after allocating pages, and usr madvise(MADV_DOFORK) restore the flags 
   of vma before freeing the pages.
2. use mmap/nunmap to alloc/free memory instead of malloc/free for passing 
   through libc.
   The free interface in libc may not really free memory, just returns the 
   control to libc. If the memeory set not copy when call fork(), after forking:
   a, In child process, you call free() to the memory, then malloc(), 
      the libc maybe return the same memory, if you access the memeory, 
      and it causes segment fault.
   b, If you not call the free() in child process, it maybe leak the memory 
      which manages the malloc's memory in libc.
   mmap/munmap don't those problems.
3. In the same thread, do not call fork() syscall between xc__hypercall_buffer_alloc_pages 
   and xc__hypercall_buffer_free_pages,otherwise it will cause segment fault 
   when access the hypercall buffer in child process.
  In normal context, we call alloc hypercall buffer, then call hypercall, 
and free the hypercall buffer (or free to the cache). No one call fork() 
between alloc and free hypercall buffers, so, I don't think it's a problem.

We test the patch and it's OK on multi-threads and multi-processes context.

Thanks Ian and xiaowei for giving good ideas.

diff -r 3d17148e465c tools/libxc/xc_hcall_buf.c
--- a/tools/libxc/xc_hcall_buf.c	Thu Aug 02 11:49:37 2012 +0200
+++ b/tools/libxc/xc_hcall_buf.c	Mon Aug 06 19:45:00 2012 +0800
@@ -19,6 +19,7 @@
 #include <stdlib.h>
 #include <string.h>
 #include <pthread.h>
+#include <sys/mman.h>
 
 #include "xc_private.h"
 #include "xg_private.h"
@@ -135,6 +136,9 @@
 
     b->hbuf = p;
 
+    /* Do not copy the VMA to the child on fork(): avoid the pages going COW during a hypercall */
+    madvise(p, nr_pages * PAGE_SIZE, MADV_DONTFORK);
+    
     memset(p, 0, nr_pages * PAGE_SIZE);
 
     return b->hbuf;
@@ -145,6 +149,8 @@
     if ( b->hbuf == NULL )
         return;
 
+    /* Restore the VMA flags so the VMA is copied again on fork() */
+    madvise(b->hbuf, nr_pages * PAGE_SIZE, MADV_DOFORK);
     if ( !hypercall_buffer_cache_free(xch, b->hbuf, nr_pages) )
         xch->ops->u.privcmd.free_hypercall_buffer(xch, xch->ops_handle, b->hbuf, nr_pages);
 }
diff -r 3d17148e465c tools/libxc/xc_linux_osdep.c
--- a/tools/libxc/xc_linux_osdep.c	Thu Aug 02 11:49:37 2012 +0200
+++ b/tools/libxc/xc_linux_osdep.c	Mon Aug 06 19:45:00 2012 +0800
@@ -93,22 +93,14 @@
     size_t size = npages * XC_PAGE_SIZE;
     void *p;
 
-    p = xc_memalign(xch, XC_PAGE_SIZE, size);
-    if (!p)
-        return NULL;
+    p = mmap(NULL, size, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_LOCKED, -1, 0);
 
-    if ( mlock(p, size) < 0 )
-    {
-        free(p);
-        return NULL;
-    }
     return p;
 }
 
 static void linux_privcmd_free_hypercall_buffer(xc_interface *xch, xc_osdep_handle h, void *ptr, int npages)
 {
-    munlock(ptr, npages * XC_PAGE_SIZE);
-    free(ptr);
+    munmap(ptr, npages * XC_PAGE_SIZE);
 }
 
 static int linux_privcmd_hypercall(xc_interface *xch, xc_osdep_handle h, privcmd_hypercall_t *hypercall)
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 13:28:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 13:28:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyNMJ-0007tA-AF; Mon, 06 Aug 2012 13:28:35 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SyNMI-0007t5-1N
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 13:28:34 +0000
Received: from [85.158.143.35:4432] by server-2.bemta-4.messagelabs.com id
	7C/24-17938-186CF105; Mon, 06 Aug 2012 13:28:33 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1344259711!15709196!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgxNjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12844 invoked from network); 6 Aug 2012 13:28:31 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 13:28:31 -0000
X-IronPort-AV: E=Sophos;i="4.77,718,1336348800"; d="scan'208";a="13867097"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 13:28:31 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Mon, 6 Aug 2012
	14:28:31 +0100
Message-ID: <1344259710.11339.39.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Wangzhenguo <wangzhenguo@huawei.com>
Date: Mon, 6 Aug 2012 14:28:30 +0100
In-Reply-To: <B44CA5218606DC4FA941D19CCEB27B532CF76ACE@szxeml528-mbx.china.huawei.com>
References: <B44CA5218606DC4FA941D19CCEB27B5323BEF12C@szxeml528-mbs.china.huawei.com>
	<1341221926.4625.12.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B5323BEF99A@szxeml528-mbs.china.huawei.com>
	<1343135163.18971.15.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B532CF769F2@szxeml528-mbx.china.huawei.com>
	<1343221925.18971.93.camel@zakaz.uk.xensource.com>
	<1343988628.21372.46.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B532CF76ACE@szxeml528-mbx.china.huawei.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Yangxiaowei <xiaowei.yang@huawei.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Hongtao <bobby.hong@huawei.com>, Yechuan <yechuan@huawei.com>
Subject: Re: [Xen-devel] The hypercall will fail and return EFAULT when the
 page becomes COW by forking process in linux
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2012-08-06 at 14:01 +0100, Wangzhenguo wrote:
> > -----Original Message-----
> > From: Ian Campbell [mailto:Ian.Campbell@citrix.com]
> > Sent: Friday, August 03, 2012 6:10 PM
> > 
> > BTW, I think this would be a good fix to have for 4.2.0 if you are able
> > to produce a patch.
> > 
> Hi Ian,
> Here is a patch that makes the following changes:
> 1. After allocating pages, use madvise(MADV_DONTFORK) to declare that the
>    VMA should not be copied on fork(); before freeing the pages, use
>    madvise(MADV_DOFORK) to restore the VMA flags.
> 2. Use mmap/munmap instead of malloc/free to allocate/free the memory,
>    bypassing libc.
>    libc's free() may not actually release memory to the kernel; it just
>    returns it to libc's allocator. If the memory is marked not to be
>    copied on fork(), then after forking:
>    a. If the child process calls free() on that memory and later malloc(),
>       libc may hand back the same memory, and accessing it causes a
>       segmentation fault.
>    b. If the child does not call free(), the allocator metadata that
>       manages malloc'd memory in libc may be leaked.
>    mmap/munmap have neither problem.
> 3. Within a single thread, do not call fork() between
>    xc__hypercall_buffer_alloc_pages and xc__hypercall_buffer_free_pages;
>    otherwise accessing the hypercall buffer in the child process causes a
>    segmentation fault.
>    In the normal flow we allocate a hypercall buffer, make the hypercall,
>    and then free the buffer (or return it to the cache). Nobody calls
>    fork() between allocating and freeing hypercall buffers, so I don't
>    think this is a problem.

Another thread in the process might fork though, wasn't that the main
observation you made when you first posted?

I think perhaps you mean it is forbidden to fork and then access a
hypercall buffer allocated before the fork, which sounds ok, since no
thread which allocates a hypercall buffer should fork with it still
allocated.

> 
> We tested the patch and it works in multi-threaded and multi-process contexts.
> 
> Thanks to Ian and Xiaowei for the good ideas.

Thanks, this will need a Signed-off-by and a commit message as described
in: http://wiki.xen.org/wiki/Submitting_Xen_Patches

> diff -r 3d17148e465c tools/libxc/xc_hcall_buf.c
> --- a/tools/libxc/xc_hcall_buf.c	Thu Aug 02 11:49:37 2012 +0200
> +++ b/tools/libxc/xc_hcall_buf.c	Mon Aug 06 19:45:00 2012 +0800
> @@ -19,6 +19,7 @@

Please can you add this to your ~/.hgrc:
        [diff]
        showfunc = True

That will make "hg diff" and similar commands show the name of the
changed function here, which is very useful for reviewers.

> @@ -135,6 +136,9 @@
>  
>      b->hbuf = p;
>  
> +    /* Do not copy the VMA to the child on fork(): avoid the pages going COW during a hypercall */
> +    madvise(p, nr_pages * PAGE_SIZE, MADV_DONTFORK);

madvise(2) tells me that MADV_{DO,DONT}FORK are Linux specific, so I
think this belongs in the Linux specific alloc_hypercall_buffer hook.

>      memset(p, 0, nr_pages * PAGE_SIZE);
>  
>      return b->hbuf;
> @@ -145,6 +149,8 @@
>      if ( b->hbuf == NULL )
>          return;
>  
> +    /* Restore the VMA flags so the VMA is copied again on fork() */
> +    madvise(b->hbuf, nr_pages * PAGE_SIZE, MADV_DOFORK);

Likewise I think this belongs in the free_hypercall_buffer hook.

>      if ( !hypercall_buffer_cache_free(xch, b->hbuf, nr_pages) )
>          xch->ops->u.privcmd.free_hypercall_buffer(xch, xch->ops_handle, b->hbuf, nr_pages);
>  }
> diff -r 3d17148e465c tools/libxc/xc_linux_osdep.c
> --- a/tools/libxc/xc_linux_osdep.c	Thu Aug 02 11:49:37 2012 +0200
> +++ b/tools/libxc/xc_linux_osdep.c	Mon Aug 06 19:45:00 2012 +0800
> @@ -93,22 +93,14 @@
>      size_t size = npages * XC_PAGE_SIZE;
>      void *p;
>  
> -    p = xc_memalign(xch, XC_PAGE_SIZE, size);
> -    if (!p)
> -        return NULL;
> +    p = mmap(NULL, size, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_LOCKED, -1, 0);

I suppose this must necessarily return a page aligned result?

>  
> -    if ( mlock(p, size) < 0 )
> -    {
> -        free(p);
> -        return NULL;
> -    }
>      return p;
>  }
>  
>  static void linux_privcmd_free_hypercall_buffer(xc_interface *xch, xc_osdep_handle h, void *ptr, int npages)
>  {
> -    munlock(ptr, npages * XC_PAGE_SIZE);
> -    free(ptr);
> +    munmap(ptr, npages * XC_PAGE_SIZE);
>  }
>  
>  static int linux_privcmd_hypercall(xc_interface *xch, xc_osdep_handle h, privcmd_hypercall_t *hypercall)



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 13:46:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 13:46:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyNdE-00086x-3g; Mon, 06 Aug 2012 13:46:04 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SyNdC-00086s-K7
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 13:46:02 +0000
Received: from [85.158.138.51:46470] by server-5.bemta-3.messagelabs.com id
	24/9C-27557-99ACF105; Mon, 06 Aug 2012 13:46:01 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-174.messagelabs.com!1344260760!30607801!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgxNjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15571 invoked from network); 6 Aug 2012 13:46:00 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-4.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 13:46:00 -0000
X-IronPort-AV: E=Sophos;i="4.77,718,1336348800"; d="scan'208";a="13867445"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 13:45:57 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Mon, 6 Aug 2012
	14:45:57 +0100
Message-ID: <1344260756.11339.44.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "Tim (Xen.org)" <tim@xen.org>
Date: Mon, 6 Aug 2012 14:45:56 +0100
In-Reply-To: <1344254179.11339.29.camel@zakaz.uk.xensource.com>
References: <1344252872.11339.27.camel@zakaz.uk.xensource.com>
	<1344252900-26148-1-git-send-email-ian.campbell@citrix.com>
	<20120806115516.GA68290@ocelot.phlegethon.org>
	<1344254179.11339.29.camel@zakaz.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 1/4] arm: disable distributor delivery on
 boot CPU only
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2012-08-06 at 12:56 +0100, Ian Campbell wrote:
> On Mon, 2012-08-06 at 12:55 +0100, Tim Deegan wrote:
> > At 11:34 +0000 on 06 Aug (1344252897), Ian Campbell wrote:
> > > The secondary processors do not call enter_hyp_mode until the boot CPU
> > > has brought most of the system up, including enabling delivery via the
> > > distributor. This means that bringing up secondary CPUs unexpectedly
> > > disables the GICD again, meaning we get no further interrupts on any
> > > CPU.
> > > 
> > > It's not clear that the distributor actually needs to be disabled to
> > > modify the group registers but it seems reasonable that the bringup
> > > code should make sure the GICD is disabled even if not doing the
> > > transition to hyp mode, so move this to the main flow of head.S and
> > > only do it on the boot processor.
> > > 
> > > For completeness also disable the GICC (CPU interface) on all CPUs
> > > too.
> > 
> > I think that having interrupts disabled is something we can rely on the
> > bootloader/firmware handling for us, so this should all stay in
> > mode_switch.S for now, and avoid leaking GIC_* magic constants into
> > head.S.  (Unless you fancy writing a DT parser in assembler :))
> 
> Not really ;-) I'll move this back then.

8<---------------------------------------------------------------

>From 6440d1868cb03573ebacf5eb3cfc69f4f6abdf15 Mon Sep 17 00:00:00 2001
From: Ian Campbell <ian.campbell@citrix.com>
Date: Mon, 6 Aug 2012 09:40:59 +0000
Subject: [PATCH] arm: disable distributor delivery on boot CPU only

The secondary processors do not call enter_hyp_mode until the boot CPU
has brought most of the system up, including enabling delivery via the
distributor. This means that bringing up secondary CPUs unexpectedly
disables the GICD again, meaning we get no further interrupts on any
CPU.

For completeness also disable the GICC (CPU interface) on all CPUs
too.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/arch/arm/mode_switch.S |   23 +++++++++++++++++------
 1 files changed, 17 insertions(+), 6 deletions(-)

diff --git a/xen/arch/arm/mode_switch.S b/xen/arch/arm/mode_switch.S
index f5549d7..acbd523 100644
--- a/xen/arch/arm/mode_switch.S
+++ b/xen/arch/arm/mode_switch.S
@@ -23,6 +23,8 @@
 
 /* Get up a CPU into Hyp mode.  Clobbers r0-r3.
  *
+ * Expects r12 == CPU number
+ *
  * This code is specific to the VE model, and not intended to be used
  * on production systems.  As such it's a bit hackier than the main
  * boot code in head.S.  In future it will be replaced by better
@@ -46,19 +48,28 @@ enter_hyp_mode:
 	mcr   CP32(r0, CNTFRQ)
 	ldr   r0, =0x40c00           /* SMP, c11, c10 in non-secure mode */
 	mcr   CP32(r0, NSACR)
-	/* Continuing ugliness: Set up the GIC so NS state owns interrupts */
 	mov   r0, #GIC_BASE_ADDRESS
 	add   r0, r0, #GIC_DR_OFFSET
+	/* Disable the GIC distributor, on the boot CPU only */
 	mov   r1, #0
-	str   r1, [r0]               /* Disable delivery in the distributor */
+	teq   r12, #0                /* Is this the boot CPU? */
+	streq r1, [r0]
+	/* Continuing ugliness: Set up the GIC so NS state owns interrupts.
+	 * The first 32 interrupts (SGIs & PPIs) must be configured on all
+	 * CPUs while the remainder are SPIs and only need to be done once,
+	 * on the boot CPU. */
 	add   r0, r0, #0x80          /* GICD_IGROUP0 */
 	mov   r2, #0xffffffff        /* All interrupts to group 1 */
-	str   r2, [r0]
-	str   r2, [r0, #4]
-	str   r2, [r0, #8]
-	/* Must drop priority mask below 0x80 before entering NS state */
+	teq   r12, #0                /* Boot CPU? */
+	str   r2, [r0]               /* Interrupts  0-31 (SGI & PPI) */
+	streq r2, [r0, #4]           /* Interrupts 32-63 (SPI) */
+	streq r2, [r0, #8]           /* Interrupts 64-95 (SPI) */
+	/* Disable the GIC CPU interface on all processors */
 	mov   r0, #GIC_BASE_ADDRESS
 	add   r0, r0, #GIC_CR_OFFSET
+	mov   r1, #0
+	str   r1, [r0]		     
+	/* Must drop priority mask below 0x80 before entering NS state */
 	ldr   r1, =0xff
 	str   r1, [r0, #0x4]         /* -> GICC_PMR */
 	/* Reset a few config registers */
-- 
1.7.9.1




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 13:46:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 13:46:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyNdE-00086x-3g; Mon, 06 Aug 2012 13:46:04 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SyNdC-00086s-K7
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 13:46:02 +0000
Received: from [85.158.138.51:46470] by server-5.bemta-3.messagelabs.com id
	24/9C-27557-99ACF105; Mon, 06 Aug 2012 13:46:01 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-174.messagelabs.com!1344260760!30607801!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgxNjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15571 invoked from network); 6 Aug 2012 13:46:00 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-4.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 13:46:00 -0000
X-IronPort-AV: E=Sophos;i="4.77,718,1336348800"; d="scan'208";a="13867445"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 13:45:57 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Mon, 6 Aug 2012
	14:45:57 +0100
Message-ID: <1344260756.11339.44.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "Tim (Xen.org)" <tim@xen.org>
Date: Mon, 6 Aug 2012 14:45:56 +0100
In-Reply-To: <1344254179.11339.29.camel@zakaz.uk.xensource.com>
References: <1344252872.11339.27.camel@zakaz.uk.xensource.com>
	<1344252900-26148-1-git-send-email-ian.campbell@citrix.com>
	<20120806115516.GA68290@ocelot.phlegethon.org>
	<1344254179.11339.29.camel@zakaz.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 1/4] arm: disable distributor delivery on
 boot CPU only
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2012-08-06 at 12:56 +0100, Ian Campbell wrote:
> On Mon, 2012-08-06 at 12:55 +0100, Tim Deegan wrote:
> > At 11:34 +0000 on 06 Aug (1344252897), Ian Campbell wrote:
> > > The secondary processors do not call enter_hyp_mode until the boot CPU
> > > has brought most of the system up, including enabling delivery via the
> > > distributor. This means that bringing up secondary CPUs unexpectedly
> > > disables the GICD again, meaning we get no further interrupts on any
> > > CPU.
> > > 
> > > It's not clear that the distributor actually needs to be disabled to
> > > modify the group registers but it seems reasonable that the bringup
> > > code should make sure the GICD is disabled even if not doing the
> > > transition to hyp mode, so move this to the main flow of head.S and
> > > only do it on the boot processor.
> > > 
> > > For completeness also disable the GICC (CPU interface) on all CPUs
> > > too.
> > 
> > I think that having interrupts disabled is something we can rely on the
> > bootloader/firmware handling for us, so this should all stay in
> > mode_switch.S for now, and avoid leaking GIC_* magic constants into
> > head.S.  (Unless you fancy writing a DT parser in assembler :))
> 
> Not really ;-) I'll move this back then.

8<---------------------------------------------------------------

>From 6440d1868cb03573ebacf5eb3cfc69f4f6abdf15 Mon Sep 17 00:00:00 2001
From: Ian Campbell <ian.campbell@citrix.com>
Date: Mon, 6 Aug 2012 09:40:59 +0000
Subject: [PATCH] arm: disable distributor delivery on boot CPU only

The secondary processors do not call enter_hyp_mode until the boot CPU
has brought most of the system up, including enabling delivery via the
distributor. This means that bringing up secondary CPUs unexpectedly
disables the GICD again, meaning we get no further interrupts on any
CPU.

For completeness also disable the GICC (CPU interface) on all CPUs
too.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/arch/arm/mode_switch.S |   23 +++++++++++++++++------
 1 files changed, 17 insertions(+), 6 deletions(-)

diff --git a/xen/arch/arm/mode_switch.S b/xen/arch/arm/mode_switch.S
index f5549d7..acbd523 100644
--- a/xen/arch/arm/mode_switch.S
+++ b/xen/arch/arm/mode_switch.S
@@ -23,6 +23,8 @@
 
 /* Get up a CPU into Hyp mode.  Clobbers r0-r3.
  *
+ * Expects r12 == CPU number
+ *
  * This code is specific to the VE model, and not intended to be used
  * on production systems.  As such it's a bit hackier than the main
  * boot code in head.S.  In future it will be replaced by better
@@ -46,19 +48,28 @@ enter_hyp_mode:
 	mcr   CP32(r0, CNTFRQ)
 	ldr   r0, =0x40c00           /* SMP, c11, c10 in non-secure mode */
 	mcr   CP32(r0, NSACR)
-	/* Continuing ugliness: Set up the GIC so NS state owns interrupts */
 	mov   r0, #GIC_BASE_ADDRESS
 	add   r0, r0, #GIC_DR_OFFSET
+	/* Disable the GIC distributor, on the boot CPU only */
 	mov   r1, #0
-	str   r1, [r0]               /* Disable delivery in the distributor */
+	teq   r12, #0                /* Is this the boot CPU? */
+	streq r1, [r0]
+	/* Continuing ugliness: Set up the GIC so NS state owns interrupts,
+	 * The first 32 interrupts (SGIs & PPIs) must be configured on all
+	 * CPUs while the remainder are SPIs and only need to be done one, on
+	 * the boot CPU. */
 	add   r0, r0, #0x80          /* GICD_IGROUP0 */
 	mov   r2, #0xffffffff        /* All interrupts to group 1 */
-	str   r2, [r0]
-	str   r2, [r0, #4]
-	str   r2, [r0, #8]
-	/* Must drop priority mask below 0x80 before entering NS state */
+	teq   r12, #0                /* Boot CPU? */
+	str   r2, [r0]               /* Interrupts  0-31 (SGI & PPI) */
+	streq r2, [r0, #4]           /* Interrupts 32-63 (SPI) */
+	streq r2, [r0, #8]           /* Interrupts 64-95 (SPI) */
+	/* Disable the GIC CPU interface on all processors */
 	mov   r0, #GIC_BASE_ADDRESS
 	add   r0, r0, #GIC_CR_OFFSET
+	mov   r1, #0
+	str   r1, [r0]
+	/* Must drop priority mask below 0x80 before entering NS state */
 	ldr   r1, =0xff
 	str   r1, [r0, #0x4]         /* -> GICC_PMR */
 	/* Reset a few config registers */
-- 
1.7.9.1
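For readers less familiar with GIC bring-up, the per-CPU gating the assembly above implements can be sketched in C. This is an illustrative sketch only: the register names follow the patch, but the offsets-as-array-indices interface is invented here and is not Xen's actual code.

```c
#include <stdint.h>

/* Illustrative register offsets, as used in the patch above. */
#define GICD_CTLR     0x000  /* distributor control */
#define GICD_IGROUPR0 0x080  /* interrupt group registers */
#define GICC_CTLR     0x000  /* CPU interface control */
#define GICC_PMR      0x004  /* priority mask */

static void gic_secure_init(volatile uint32_t *gicd,
                            volatile uint32_t *gicc,
                            unsigned int cpu)
{
    if (cpu == 0)                        /* boot CPU only */
        gicd[GICD_CTLR / 4] = 0;         /* disable delivery */

    /* SGIs & PPIs (0-31) are banked per CPU: configure on every CPU. */
    gicd[GICD_IGROUPR0 / 4] = 0xffffffff;

    if (cpu == 0) {                      /* SPIs: boot CPU only */
        gicd[GICD_IGROUPR0 / 4 + 1] = 0xffffffff;
        gicd[GICD_IGROUPR0 / 4 + 2] = 0xffffffff;
    }

    gicc[GICC_CTLR / 4] = 0;             /* disable CPU interface */
    gicc[GICC_PMR / 4]  = 0xff;          /* open priority mask for NS */
}
```

The key point, as in the patch, is that only the distributor-wide state (GICD_CTLR, SPI group registers) is restricted to CPU 0; the banked per-CPU registers are written unconditionally.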




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 13:46:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 13:46:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyNdZ-00087s-GM; Mon, 06 Aug 2012 13:46:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SyNdX-00087i-NU
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 13:46:23 +0000
Received: from [85.158.143.35:42236] by server-3.bemta-4.messagelabs.com id
	7F/7A-01511-FAACF105; Mon, 06 Aug 2012 13:46:23 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1344260778!17639004!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgxNjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27953 invoked from network); 6 Aug 2012 13:46:19 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 13:46:19 -0000
X-IronPort-AV: E=Sophos;i="4.77,718,1336348800"; d="scan'208";a="13867455"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 13:46:18 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Mon, 6 Aug 2012
	14:46:18 +0100
Message-ID: <1344260776.11339.45.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Date: Mon, 6 Aug 2012 14:46:16 +0100
In-Reply-To: <1344252900-26148-3-git-send-email-ian.campbell@citrix.com>
References: <1344252872.11339.27.camel@zakaz.uk.xensource.com>
	<1344252900-26148-3-git-send-email-ian.campbell@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "Tim \(Xen.org\)" <tim@xen.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 3/4] arm/vtimer: convert result to ticks
 when reading CNTPCT register
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2012-08-06 at 12:34 +0100, Ian Campbell wrote:
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> ---
>  xen/arch/arm/vtimer.c |    6 ++++--
>  1 files changed, 4 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/arch/arm/vtimer.c b/xen/arch/arm/vtimer.c
> index 6b1152e..92c385c 100644
> --- a/xen/arch/arm/vtimer.c
> +++ b/xen/arch/arm/vtimer.c
> @@ -103,6 +103,7 @@ static int vtimer_emulate_64(struct cpu_user_regs *regs, union hsr hsr)
>      struct hsr_cp64 cp64 = hsr.cp64;
>      uint32_t *r1 = &regs->r0 + cp64.reg1;
>      uint32_t *r2 = &regs->r0 + cp64.reg2;
> +    uint64_t ticks

Ahem, I need to remember to build the final version of my patches before
sending...

8<---------------------------------------------------------

>From fd78d6059ae46ab46b0f3ade0e696dcf288a8b99 Mon Sep 17 00:00:00 2001
From: Ian Campbell <ian.campbell@citrix.com>
Date: Mon, 6 Aug 2012 11:08:12 +0000
Subject: [PATCH] arm/vtimer: convert result to ticks when reading CNTPCT
 register

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/arch/arm/vtimer.c |    6 ++++--
 1 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/vtimer.c b/xen/arch/arm/vtimer.c
index 6b1152e..490b021 100644
--- a/xen/arch/arm/vtimer.c
+++ b/xen/arch/arm/vtimer.c
@@ -103,6 +103,7 @@ static int vtimer_emulate_64(struct cpu_user_regs *regs, union hsr hsr)
     struct hsr_cp64 cp64 = hsr.cp64;
     uint32_t *r1 = &regs->r0 + cp64.reg1;
     uint32_t *r2 = &regs->r0 + cp64.reg2;
+    uint64_t ticks;
     s_time_t now;
 
     switch ( hsr.bits & HSR_CP64_REGS_MASK )
@@ -111,8 +112,9 @@ static int vtimer_emulate_64(struct cpu_user_regs *regs, union hsr hsr)
         if ( cp64.read )
         {
             now = NOW() - v->arch.vtimer.offset;
-            *r1 = (uint32_t)(now & 0xffffffff);
-            *r2 = (uint32_t)(now >> 32);
+            ticks = ns_to_ticks(now);
+            *r1 = (uint32_t)(ticks & 0xffffffff);
+            *r2 = (uint32_t)(ticks >> 32);
             return 1;
         }
         else
-- 
1.7.9.1
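The fix returns timer ticks rather than raw nanoseconds when the guest reads CNTPCT. A minimal sketch of what the conversion and register split do, assuming ns_to_ticks() simply divides by the nanoseconds-per-tick period (the 62.5 MHz frequency below is an example value, not necessarily what Xen uses):

```c
#include <stdint.h>

#define CNTFRQ_HZ    62500000ULL      /* assumed example counter frequency */
#define NSEC_PER_SEC 1000000000ULL

/* Sketch of the helper the patch calls; 16 ns per tick at 62.5 MHz. */
static uint64_t ns_to_ticks(uint64_t ns)
{
    return ns / (NSEC_PER_SEC / CNTFRQ_HZ);
}

/* The 64-bit tick count is then split across two 32-bit guest
 * registers, exactly as in the patch's *r1/*r2 assignments. */
static void split64(uint64_t ticks, uint32_t *r1, uint32_t *r2)
{
    *r1 = (uint32_t)(ticks & 0xffffffff);  /* low word */
    *r2 = (uint32_t)(ticks >> 32);         /* high word */
}
```

The bug being fixed is visible from the diff: the old code split `now` (nanoseconds) directly, so the guest saw a counter running at the wrong rate.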




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 13:51:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 13:51:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyNiS-0008O9-D4; Mon, 06 Aug 2012 13:51:28 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jinsong.liu@intel.com>) id 1SyNiQ-0008O1-Sk
	for xen-devel@lists.xensource.com; Mon, 06 Aug 2012 13:51:27 +0000
Received: from [85.158.143.35:11584] by server-3.bemta-4.messagelabs.com id
	7C/63-01511-EDBCF105; Mon, 06 Aug 2012 13:51:26 +0000
X-Env-Sender: jinsong.liu@intel.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1344261080!5737725!1
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDMzMzY0Ng==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27861 invoked from network); 6 Aug 2012 13:51:21 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-4.tower-21.messagelabs.com with SMTP;
	6 Aug 2012 13:51:21 -0000
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
	by fmsmga102.fm.intel.com with ESMTP; 06 Aug 2012 06:51:19 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.71,315,1320652800"; d="scan'208";a="203149121"
Received: from fmsmsx108.amr.corp.intel.com ([10.19.9.228])
	by fmsmga002.fm.intel.com with ESMTP; 06 Aug 2012 06:51:19 -0700
Received: from fmsmsx151.amr.corp.intel.com (10.19.17.220) by
	FMSMSX108.amr.corp.intel.com (10.19.9.228) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Mon, 6 Aug 2012 06:51:19 -0700
Received: from shsmsx102.ccr.corp.intel.com (10.239.4.154) by
	FMSMSX151.amr.corp.intel.com (10.19.17.220) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Mon, 6 Aug 2012 06:51:19 -0700
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.82]) by
	SHSMSX102.ccr.corp.intel.com ([169.254.2.196]) with mapi id
	14.01.0355.002; Mon, 6 Aug 2012 21:51:16 +0800
From: "Liu, Jinsong" <jinsong.liu@intel.com>
To: Jan Beulich <JBeulich@suse.com>
Thread-Topic: [Patch 2/6] Xen/MCE: remove mcg_ctl and other adjustment for
	future vMCE
Thread-Index: AQHNc5+sCWSuIDJ4QUuNAy7SHei7jpdMxclg
Date: Mon, 6 Aug 2012 13:51:17 +0000
Message-ID: <DE8DF0795D48FD4CA783C40EC82923352D616D@SHSMSX101.ccr.corp.intel.com>
References: <DE8DF0795D48FD4CA783C40EC82923352C40A1@SHSMSX101.ccr.corp.intel.com>
	<5016A69A0200007800091435@nat28.tlf.novell.com>
	<DE8DF0795D48FD4CA783C40EC82923352D5959@SHSMSX101.ccr.corp.intel.com>
	<501F852E0200007800092C24@nat28.tlf.novell.com>
In-Reply-To: <501F852E0200007800092C24@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: yes
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
Content-Type: multipart/mixed;
	boundary="_002_DE8DF0795D48FD4CA783C40EC82923352D616DSHSMSX101ccrcorpi_"
MIME-Version: 1.0
Cc: Christoph Egger <Christoph.Egger@amd.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"Keir \(Xen.org\)" <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [Patch 2/6] Xen/MCE: remove mcg_ctl and other
 adjustment for future vMCE
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--_002_DE8DF0795D48FD4CA783C40EC82923352D616DSHSMSX101ccrcorpi_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable

>>>
>>> Is there a particular reason to make this access fault here, when
>>> it didn't before? I.e. was there anything wrong with the previous
>>> approach of returning zero on reads and ignoring writes when
>>> !MCG_CTL_P?
>>>
>>
>> Semantically this code is better than previous approach, since
>> !MCG_CTL_P means unimplemented MCG_CTL so access it would generate
>> GP#.
>
> Agreed. But nevertheless I'd like to be a little more conservative
> here. After all, "knowing" that this won't break Windows or Linux
> isn't covering all possible HVM guests (and the quotes are there
> to indicate that (a) unless you have access to Windows sources,
> you can't really know, you may at best have empirical data
> suggesting so, and (b) makes you/us dependent on all older
> Windows/Linux versions you didn't try out/look at behave
> correctly here too).

OK, fine to me to use previous approach, updated as attached.

Thanks,
Jinsong

===============================


Xen/MCE: remove mcg_ctl and other adjustment for future vMCE

This patch is an intermediate step, preparing for the future new vMCE
model. It removes mcg_ctl, disables MCG_CTL_P, and sets the bank number
to 2.

Signed-off-by: Liu, Jinsong <jinsong.liu@intel.com>

diff -r 8067891037a6 xen/arch/x86/cpu/mcheck/mce.c
--- a/xen/arch/x86/cpu/mcheck/mce.c	Thu Jul 19 20:48:25 2012 +0800
+++ b/xen/arch/x86/cpu/mcheck/mce.c	Tue Aug 07 04:26:56 2012 +0800
@@ -843,8 +843,6 @@
 
     mctelem_init(sizeof(struct mc_info));
 
-    vmce_init(c);
-
     /* Turn on MCE now */
     set_in_cr4(X86_CR4_MCE);
 
diff -r 8067891037a6 xen/arch/x86/cpu/mcheck/mce.h
--- a/xen/arch/x86/cpu/mcheck/mce.h	Thu Jul 19 20:48:25 2012 +0800
+++ b/xen/arch/x86/cpu/mcheck/mce.h	Tue Aug 07 04:26:56 2012 +0800
@@ -170,8 +170,6 @@
 int inject_vmce(struct domain *d);
 int vmce_domain_inject(struct mcinfo_bank *bank, struct domain *d, struct mcinfo_global *global);
 
-extern int vmce_init(struct cpuinfo_x86 *c);
-
 static inline int mce_vendor_bank_msr(const struct vcpu *v, uint32_t msr)
 {
     if ( boot_cpu_data.x86_vendor == X86_VENDOR_INTEL &&
diff -r 8067891037a6 xen/arch/x86/cpu/mcheck/vmce.c
--- a/xen/arch/x86/cpu/mcheck/vmce.c	Thu Jul 19 20:48:25 2012 +0800
+++ b/xen/arch/x86/cpu/mcheck/vmce.c	Tue Aug 07 04:26:56 2012 +0800
@@ -19,13 +19,18 @@
 #include "mce.h"
 #include "x86_mca.h"
 
+/*
+ * Emulate 2 banks for guest
+ * Bank0: reserved for the 'bank0 quirk' occurring on some very old
+ * processors:
+ *   1). Intel cpu whose family-model value < 06-1A;
+ *   2). AMD K7
+ * Bank1: used to transfer error info to guest
+ */
+#define GUEST_BANK_NUM 2
+#define GUEST_MCG_CAP (MCG_TES_P | MCG_SER_P | GUEST_BANK_NUM)
+
 #define dom_vmce(x)   ((x)->arch.vmca_msrs)
 
-static uint64_t __read_mostly g_mcg_cap;
-
-/* Real value in physical CTL MSR */
-static uint64_t __read_mostly h_mcg_ctl;
-
 int vmce_init_msr(struct domain *d)
 {
     dom_vmce(d) = xmalloc(struct domain_mca_msrs);
@@ -33,7 +38,6 @@
         return -ENOMEM;
 
     dom_vmce(d)->mcg_status = 0x0;
-    dom_vmce(d)->mcg_ctl = ~(uint64_t)0x0;
     dom_vmce(d)->nr_injection = 0;
 
     INIT_LIST_HEAD(&dom_vmce(d)->impact_header);
@@ -52,17 +56,17 @@
 
 void vmce_init_vcpu(struct vcpu *v)
 {
-    v->arch.mcg_cap = g_mcg_cap;
+    v->arch.mcg_cap = GUEST_MCG_CAP;
 }
 
 int vmce_restore_vcpu(struct vcpu *v, uint64_t caps)
 {
-    if ( caps & ~g_mcg_cap & ~MCG_CAP_COUNT & ~MCG_CTL_P )
+    if ( caps & ~GUEST_MCG_CAP & ~MCG_CAP_COUNT & ~MCG_CTL_P )
     {
         dprintk(XENLOG_G_ERR, "%s restore: unsupported MCA capabilities"
                 " %#" PRIx64 " for d%d:v%u (supported: %#Lx)\n",
                 is_hvm_vcpu(v) ? "HVM" : "PV", caps, v->domain->domain_id,
-                v->vcpu_id, g_mcg_cap & ~MCG_CAP_COUNT);
+                v->vcpu_id, GUEST_MCG_CAP & ~MCG_CAP_COUNT);
         return -EPERM;
     }
 
@@ -175,11 +179,10 @@
                    *val);
         break;
     case MSR_IA32_MCG_CTL:
-        /* Always 0 if no CTL support */
+        /* All 1's when CTL is supported, 0 when it is not */
         if ( cur->arch.mcg_cap & MCG_CTL_P )
-            *val = vmce->mcg_ctl & h_mcg_ctl;
-        mce_printk(MCE_VERBOSE, "MCE: rdmsr MCG_CTL 0x%"PRIx64"\n",
-                   *val);
+            *val = ~0ULL;
+        mce_printk(MCE_VERBOSE, "MCE: rdmsr MCG_CTL 0x%"PRIx64"\n", *val);
         break;
     default:
         ret = mce_bank_msr(cur, msr) ? bank_mce_rdmsr(cur, msr, val) : 0;
@@ -287,15 +290,11 @@
     struct domain_mca_msrs *vmce = dom_vmce(cur->domain);
     int ret = 1;
 
-    if ( !g_mcg_cap )
-        return 0;
-
     spin_lock(&vmce->lock);
 
     switch ( msr )
     {
     case MSR_IA32_MCG_CTL:
-        vmce->mcg_ctl = val;
         break;
     case MSR_IA32_MCG_STATUS:
         vmce->mcg_status = val;
@@ -510,31 +509,6 @@
 }
 #endif
 
-int vmce_init(struct cpuinfo_x86 *c)
-{
-    u64 value;
-
-    rdmsrl(MSR_IA32_MCG_CAP, value);
-    /* For Guest vMCE usage */
-    g_mcg_cap = value & (MCG_CAP_COUNT | MCG_CTL_P | MCG_TES_P | MCG_SER_P);
-    if (value & MCG_CTL_P)
-        rdmsrl(MSR_IA32_MCG_CTL, h_mcg_ctl);
-
-    return 0;
-}
-
-static int mca_ctl_conflict(struct mcinfo_bank *bank, struct domain *d)
-{
-    if ( !bank || !d )
-        return 1;
-
-    /* Will MCE happen in host if If host mcg_ctl is 0? */
-    if ( ~d->arch.vmca_msrs->mcg_ctl & h_mcg_ctl )
-        return 1;
-
-    return 0;
-}
-
 static int is_hvm_vmce_ready(struct mcinfo_bank *bank, struct domain *d)
 {
     struct vcpu *v;
@@ -588,14 +562,6 @@
     if (no_vmce)
         return 0;
 
-    /* Guest has different MCE ctl value setting */
-    if (mca_ctl_conflict(bank, d))
-    {
-        dprintk(XENLOG_WARNING,
-          "No vmce, guest has different mca control setting\n");
-        return 0;
-    }
-
     return 1;
 }
 
diff -r 8067891037a6 xen/include/asm-x86/mce.h
--- a/xen/include/asm-x86/mce.h	Thu Jul 19 20:48:25 2012 +0800
+++ b/xen/include/asm-x86/mce.h	Tue Aug 07 04:26:56 2012 +0800
@@ -16,7 +16,6 @@
 struct domain_mca_msrs
 {
     /* Guest should not change below values after DOM boot up */
-    uint64_t mcg_ctl;
    uint64_t mcg_status;
    uint16_t nr_injection;
    struct list_head impact_header;

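The agreed-upon MCG_CTL read semantics above (all 1's when MCG_CTL_P is advertised, zero otherwise, never faulting) can be isolated in a small sketch. This is a simplification of the patch for illustration; the real code operates on vcpu state and MSR emulation plumbing:

```c
#include <stdint.h>

#define MCG_CTL_P (1ULL << 8)   /* IA32_MCG_CAP bit: MCG_CTL present */

/* Sketch of the conservative MCG_CTL read behaviour: report all
 * controls enabled when MCG_CTL_P is set in the guest's MCG_CAP,
 * and return zero (rather than injecting #GP) when it is not. */
static uint64_t emulate_mcg_ctl_read(uint64_t mcg_cap)
{
    uint64_t val = 0;            /* reads return zero when !MCG_CTL_P */

    if (mcg_cap & MCG_CTL_P)
        val = ~0ULL;             /* CTL present: all 1's */

    return val;
}
```

This matches Jan's preference in the quoted discussion: unknown guests that read MCG_CTL without checking MCG_CTL_P keep working, at the cost of slightly less faithful architectural behaviour.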
--_002_DE8DF0795D48FD4CA783C40EC82923352D616DSHSMSX101ccrcorpi_
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--_002_DE8DF0795D48FD4CA783C40EC82923352D616DSHSMSX101ccrcorpi_--


From xen-devel-bounces@lists.xen.org Mon Aug 06 13:51:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 13:51:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyNiS-0008O9-D4; Mon, 06 Aug 2012 13:51:28 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jinsong.liu@intel.com>) id 1SyNiQ-0008O1-Sk
	for xen-devel@lists.xensource.com; Mon, 06 Aug 2012 13:51:27 +0000
Received: from [85.158.143.35:11584] by server-3.bemta-4.messagelabs.com id
	7C/63-01511-EDBCF105; Mon, 06 Aug 2012 13:51:26 +0000
X-Env-Sender: jinsong.liu@intel.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1344261080!5737725!1
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDMzMzY0Ng==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27861 invoked from network); 6 Aug 2012 13:51:21 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-4.tower-21.messagelabs.com with SMTP;
	6 Aug 2012 13:51:21 -0000
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
	by fmsmga102.fm.intel.com with ESMTP; 06 Aug 2012 06:51:19 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.71,315,1320652800"; d="scan'208";a="203149121"
Received: from fmsmsx108.amr.corp.intel.com ([10.19.9.228])
	by fmsmga002.fm.intel.com with ESMTP; 06 Aug 2012 06:51:19 -0700
Received: from fmsmsx151.amr.corp.intel.com (10.19.17.220) by
	FMSMSX108.amr.corp.intel.com (10.19.9.228) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Mon, 6 Aug 2012 06:51:19 -0700
Received: from shsmsx102.ccr.corp.intel.com (10.239.4.154) by
	FMSMSX151.amr.corp.intel.com (10.19.17.220) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Mon, 6 Aug 2012 06:51:19 -0700
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.82]) by
	SHSMSX102.ccr.corp.intel.com ([169.254.2.196]) with mapi id
	14.01.0355.002; Mon, 6 Aug 2012 21:51:16 +0800
From: "Liu, Jinsong" <jinsong.liu@intel.com>
To: Jan Beulich <JBeulich@suse.com>
Thread-Topic: [Patch 2/6] Xen/MCE: remove mcg_ctl and other adjustment for
	future vMCE
Thread-Index: AQHNc5+sCWSuIDJ4QUuNAy7SHei7jpdMxclg
Date: Mon, 6 Aug 2012 13:51:17 +0000
Message-ID: <DE8DF0795D48FD4CA783C40EC82923352D616D@SHSMSX101.ccr.corp.intel.com>
References: <DE8DF0795D48FD4CA783C40EC82923352C40A1@SHSMSX101.ccr.corp.intel.com>
	<5016A69A0200007800091435@nat28.tlf.novell.com>
	<DE8DF0795D48FD4CA783C40EC82923352D5959@SHSMSX101.ccr.corp.intel.com>
	<501F852E0200007800092C24@nat28.tlf.novell.com>
In-Reply-To: <501F852E0200007800092C24@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: yes
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
Content-Type: multipart/mixed;
	boundary="_002_DE8DF0795D48FD4CA783C40EC82923352D616DSHSMSX101ccrcorpi_"
MIME-Version: 1.0
Cc: Christoph Egger <Christoph.Egger@amd.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"Keir \(Xen.org\)" <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [Patch 2/6] Xen/MCE: remove mcg_ctl and other
 adjustment for future vMCE
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--_002_DE8DF0795D48FD4CA783C40EC82923352D616DSHSMSX101ccrcorpi_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable

>>>=20
>>> Is there a particular reason to make this access fault here, when
>>> it didn't before? I.e. was there anything wrong with the previous
>>> approach of returning zero on reads and ignoring writes when
>>> !MCG_CTL_P?=20
>>>=20
>>=20
>> Semantically this code is better than the previous approach, since
>> !MCG_CTL_P means MCG_CTL is unimplemented, so accessing it would
>> generate #GP.
>=20
> Agreed. But nevertheless I'd like to be a little more conservative
> here. After all, "knowing" that this won't break Windows or Linux
> doesn't cover all possible HVM guests (and the quotes are there
> to indicate that (a) unless you have access to Windows sources,
> you can't really know, you at best have empirical data
> suggesting so, and (b) it makes you/us dependent on all older
> Windows/Linux versions you didn't try out/look at behaving
> correctly here too).

OK, fine with me to use the previous approach; updated as below.

Thanks,
Jinsong

================================


Xen/MCE: remove mcg_ctl and other adjustment for future vMCE

This patch is an intermediate patch, preparing for the future new vMCE model.
It removes mcg_ctl, disables MCG_CTL_P, and sets the bank number to 2.

Signed-off-by: Liu, Jinsong <jinsong.liu@intel.com>

diff -r 8067891037a6 xen/arch/x86/cpu/mcheck/mce.c
--- a/xen/arch/x86/cpu/mcheck/mce.c	Thu Jul 19 20:48:25 2012 +0800
+++ b/xen/arch/x86/cpu/mcheck/mce.c	Tue Aug 07 04:26:56 2012 +0800
@@ -843,8 +843,6 @@
=20
     mctelem_init(sizeof(struct mc_info));
=20
-    vmce_init(c);
-
     /* Turn on MCE now */
     set_in_cr4(X86_CR4_MCE);
=20
diff -r 8067891037a6 xen/arch/x86/cpu/mcheck/mce.h
--- a/xen/arch/x86/cpu/mcheck/mce.h	Thu Jul 19 20:48:25 2012 +0800
+++ b/xen/arch/x86/cpu/mcheck/mce.h	Tue Aug 07 04:26:56 2012 +0800
@@ -170,8 +170,6 @@
 int inject_vmce(struct domain *d);
 int vmce_domain_inject(struct mcinfo_bank *bank, struct domain *d, struct mcinfo_global *global);
=20
-extern int vmce_init(struct cpuinfo_x86 *c);
-
 static inline int mce_vendor_bank_msr(const struct vcpu *v, uint32_t msr)
 {
     if ( boot_cpu_data.x86_vendor =3D=3D X86_VENDOR_INTEL &&
diff -r 8067891037a6 xen/arch/x86/cpu/mcheck/vmce.c
--- a/xen/arch/x86/cpu/mcheck/vmce.c	Thu Jul 19 20:48:25 2012 +0800
+++ b/xen/arch/x86/cpu/mcheck/vmce.c	Tue Aug 07 04:26:56 2012 +0800
@@ -19,13 +19,18 @@
 #include "mce.h"
 #include "x86_mca.h"
=20
+/*
+ * Emulate 2 banks for the guest:
+ * Bank0: reserved for the 'bank0 quirk' present on some very old processors:
+ *   1) Intel CPUs whose family-model value is < 06-1A;
+ *   2) AMD K7
+ * Bank1: used to transfer error info to the guest
+ */
+#define GUEST_BANK_NUM 2
+#define GUEST_MCG_CAP (MCG_TES_P | MCG_SER_P | GUEST_BANK_NUM)
+
 #define dom_vmce(x)   ((x)->arch.vmca_msrs)
=20
-static uint64_t __read_mostly g_mcg_cap;
-
-/* Real value in physical CTL MSR */
-static uint64_t __read_mostly h_mcg_ctl;
-
 int vmce_init_msr(struct domain *d)
 {
     dom_vmce(d) =3D xmalloc(struct domain_mca_msrs);
@@ -33,7 +38,6 @@
         return -ENOMEM;
=20
     dom_vmce(d)->mcg_status =3D 0x0;
-    dom_vmce(d)->mcg_ctl =3D ~(uint64_t)0x0;
     dom_vmce(d)->nr_injection =3D 0;
=20
     INIT_LIST_HEAD(&dom_vmce(d)->impact_header);
@@ -52,17 +56,17 @@
=20
 void vmce_init_vcpu(struct vcpu *v)
 {
-    v->arch.mcg_cap =3D g_mcg_cap;
+    v->arch.mcg_cap =3D GUEST_MCG_CAP;
 }
=20
 int vmce_restore_vcpu(struct vcpu *v, uint64_t caps)
 {
-    if ( caps & ~g_mcg_cap & ~MCG_CAP_COUNT & ~MCG_CTL_P )
+    if ( caps & ~GUEST_MCG_CAP & ~MCG_CAP_COUNT & ~MCG_CTL_P )
     {
         dprintk(XENLOG_G_ERR, "%s restore: unsupported MCA capabilities"
                 " %#" PRIx64 " for d%d:v%u (supported: %#Lx)\n",
                 is_hvm_vcpu(v) ? "HVM" : "PV", caps, v->domain->domain_id,
-                v->vcpu_id, g_mcg_cap & ~MCG_CAP_COUNT);
+                v->vcpu_id, GUEST_MCG_CAP & ~MCG_CAP_COUNT);
         return -EPERM;
     }
=20
@@ -175,11 +179,10 @@
                    *val);
         break;
     case MSR_IA32_MCG_CTL:
-        /* Always 0 if no CTL support */
+        /* All 1's when CTL is supported, 0 when it is not */
         if ( cur->arch.mcg_cap & MCG_CTL_P )
-            *val =3D vmce->mcg_ctl & h_mcg_ctl;
-        mce_printk(MCE_VERBOSE, "MCE: rdmsr MCG_CTL 0x%"PRIx64"\n",
-                   *val);
+            *val =3D ~0ULL;
+        mce_printk(MCE_VERBOSE, "MCE: rdmsr MCG_CTL 0x%"PRIx64"\n", *val);
         break;
     default:
        ret =3D mce_bank_msr(cur, msr) ? bank_mce_rdmsr(cur, msr, val) : 0;
@@ -287,15 +290,11 @@
     struct domain_mca_msrs *vmce =3D dom_vmce(cur->domain);
     int ret =3D 1;
=20
-    if ( !g_mcg_cap )
-        return 0;
-
     spin_lock(&vmce->lock);
=20
     switch ( msr )
     {
     case MSR_IA32_MCG_CTL:
-        vmce->mcg_ctl =3D val;
         break;
     case MSR_IA32_MCG_STATUS:
         vmce->mcg_status =3D val;
@@ -510,31 +509,6 @@
 }
 #endif
=20
-int vmce_init(struct cpuinfo_x86 *c)
-{
-    u64 value;
-
-    rdmsrl(MSR_IA32_MCG_CAP, value);
-    /* For Guest vMCE usage */
-    g_mcg_cap =3D value & (MCG_CAP_COUNT | MCG_CTL_P | MCG_TES_P | MCG_SER_P);
-    if (value & MCG_CTL_P)
-        rdmsrl(MSR_IA32_MCG_CTL, h_mcg_ctl);
-
-    return 0;
-}
-
-static int mca_ctl_conflict(struct mcinfo_bank *bank, struct domain *d)
-{
-    if ( !bank || !d )
-        return 1;
-
-    /* Will MCE happen in host if If host mcg_ctl is 0? */
-    if ( ~d->arch.vmca_msrs->mcg_ctl & h_mcg_ctl )
-        return 1;
-
-    return 0;
-}
-
 static int is_hvm_vmce_ready(struct mcinfo_bank *bank, struct domain *d)
 {
     struct vcpu *v;
@@ -588,14 +562,6 @@
     if (no_vmce)
         return 0;
=20
-    /* Guest has different MCE ctl value setting */
-    if (mca_ctl_conflict(bank, d))
-    {
-        dprintk(XENLOG_WARNING,
-          "No vmce, guest has different mca control setting\n");
-        return 0;
-    }
-
     return 1;
 }
=20
diff -r 8067891037a6 xen/include/asm-x86/mce.h
--- a/xen/include/asm-x86/mce.h	Thu Jul 19 20:48:25 2012 +0800
+++ b/xen/include/asm-x86/mce.h	Tue Aug 07 04:26:56 2012 +0800
@@ -16,7 +16,6 @@
 struct domain_mca_msrs
 {
     /* Guest should not change below values after DOM boot up */
-    uint64_t mcg_ctl;
     uint64_t mcg_status;
     uint16_t nr_injection;
    struct list_head impact_header;

--_002_DE8DF0795D48FD4CA783C40EC82923352D616DSHSMSX101ccrcorpi_
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--_002_DE8DF0795D48FD4CA783C40EC82923352D616DSHSMSX101ccrcorpi_--


From xen-devel-bounces@lists.xen.org Mon Aug 06 13:58:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 13:58:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyNpN-0000Lg-TR; Mon, 06 Aug 2012 13:58:37 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SyNpM-0000LW-BM
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 13:58:36 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1344261458!4117711!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM4NjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1755 invoked from network); 6 Aug 2012 13:57:38 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-10.tower-27.messagelabs.com with SMTP;
	6 Aug 2012 13:57:38 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 06 Aug 2012 14:57:37 +0100
Message-Id: <501FE96E0200007800092E25@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Mon, 06 Aug 2012 14:57:34 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <ian.campbell@citrix.com>,
	"George Dunlap" <george.dunlap@eu.citrix.com>,
	"Ian Jackson" <Ian.Jackson@eu.citrix.com>
References: <501173540200007800090A04@nat28.tlf.novell.com>
	<50116CD9.6000503@eu.citrix.com>
In-Reply-To: <50116CD9.6000503@eu.citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] PoD code killing domain before it really gets
	started
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 26.07.12 at 18:14, George Dunlap <george.dunlap@eu.citrix.com> wrote:
> Yes, this is a very strange circumstance: because p2m_demand_populate() 
> shouldn't happen until at least one PoD entry has been created; and that 
> shouldn't happen until after c0...7ff have been populated with 4k pages.

Meanwhile I was told that this is very likely caused by an access
originating in Dom0. Just a few minutes ago I also got hold of call
stacks (as already seen with the original messages, it produces two
instances per bad access):

...
(XEN) gpmpod(1, 8000, 9) -> 0 [dom0]
(XEN) gpmpod(1, 8200, 9) -> 0 [dom0]

[coming from

printk("gpmpod(%d, %lx, %u) -> %d [dom%d]\n", d->domain_id, gfn, order, rc, current->domain->domain_id);

at the end of guest_physmap_mark_populate_on_demand()]

(XEN) p2m_pod_demand_populate: Dom1 out of PoD memory! (tot=1e0 ents=8200 dom0)

[altered message at the failure point in p2m_pod_demand_populate():

-    printk("%s: Out of populate-on-demand memory! tot_pages %" PRIu32 " pod_entries %" PRIi32 "\n",
-           __func__, d->tot_pages, p2md->pod.entry_count);
+    printk("%s: Dom%d out of PoD memory! (tot=%"PRIx32" ents=%"PRIx32" dom%d)\n",
+           __func__, d->domain_id, d->tot_pages, p2md->pod.entry_count, current->domain->domain_id);
+WARN_ON(1);

]

(XEN) Xen WARN at p2m.c:1155
(XEN) ----[ Xen-4.0.3_21548_04a-0.9.1  x86_64  debug=n  Tainted:    C ]----
(XEN) CPU:    2
(XEN) RIP:    e008:[<ffff82c4801cbf86>] p2m_pod_demand_populate+0x836/0xab0
...
(XEN) Xen call trace:
(XEN)    [<ffff82c4801cbf86>] p2m_pod_demand_populate+0x836/0xab0
(XEN)    [<ffff82c4801676b1>] get_page_and_type_from_pagenr+0x91/0x100
(XEN)    [<ffff82c4801f02d4>] ept_pod_check_and_populate+0x104/0x1a0
(XEN)    [<ffff82c4801f0482>] ept_get_entry+0x112/0x230
(XEN)    [<ffff82c48016be98>] do_mmu_update+0x16d8/0x1930
(XEN)    [<ffff82c4801f8c51>] do_iret+0xc1/0x1a0
(XEN)    [<ffff82c4801f4189>] syscall_enter+0xa9/0xae
(XEN)
(XEN) domain_crash called from p2m.c:1156
(XEN) Domain 1 reported crashed by domain 0 on cpu#2:
(XEN) p2m_pod_demand_populate: Dom1 out of PoD memory! (tot=1e0 ents=8200 dom0)
(XEN) Xen WARN at p2m.c:1155
(XEN) ----[ Xen-4.0.3_21548_04a-0.9.1  x86_64  debug=n  Tainted:    C ]----
(XEN) CPU:    2
(XEN) RIP:    e008:[<ffff82c4801cbf86>] p2m_pod_demand_populate+0x836/0xab0
...
(XEN) Xen call trace:
(XEN)    [<ffff82c4801cbf86>] p2m_pod_demand_populate+0x836/0xab0
(XEN)    [<ffff82c480108733>] send_guest_global_virq+0x93/0xe0
(XEN)    [<ffff82c4801cbfb2>] p2m_pod_demand_populate+0x862/0xab0
(XEN)    [<ffff82c4801f02d4>] ept_pod_check_and_populate+0x104/0x1a0
(XEN)    [<ffff82c4801f0482>] ept_get_entry+0x112/0x230
(XEN)    [<ffff82c48016890b>] mod_l1_entry+0x47b/0x650
(XEN)    [<ffff82c4801f0482>] ept_get_entry+0x112/0x230
(XEN)    [<ffff82c48016b21a>] do_mmu_update+0xa5a/0x1930
(XEN)    [<ffff82c4801f8c51>] do_iret+0xc1/0x1a0
(XEN)    [<ffff82c4801f4189>] syscall_enter+0xa9/0xae
(XEN)
(XEN) domain_crash called from p2m.c:1156

This at least clarifies why there are two events (and, despite
the code having changed quite a bit, this appears to still be the
case for -unstable): the MMU_NORMAL_PT_UPDATE case, sub-case
PGT_l1_page_table, calls (in -unstable terms) get_page_from_gfn()
but ignores the return value (which ought to be NULL here) and
only partially inspects the returned type. As the type matches
none of the ones looked for, it happily proceeds into
mod_l1_entry(), which then calls get_page_from_gfn() again.

> Although, it does look as though when populating 4k pages, the code 
> doesn't actually look to see if the allocation succeeded or not... oh 
> wait, no, it actually checks rc as a condition of the while() loop -- 
> but that is then clobbered by the xc_domain_set_pod_target() call.  But 
> surely if the 4k allocation failed, the set_target() call should fail as 
> well?  And in any case, there shouldn't yet be any PoD entries to cause 
> a demand-populate.
> 
> We probably should change "if(pod_mode)" to "if(rc == 0 && pod_mode)" or 
> something like that, just to be sure.  I'll spin up a patch.

I had also included this adjustment in the debugging patch, but
this clearly isn't related to the problem.

The domain indeed has 0x1e0 pages allocated, and a huge (still
growing) number of PoD entries. And apparently this fails so
rarely because it's pretty unlikely that there isn't a single clear
page that the PoD code can select as victim; plus the Dom0
space code likely also only infrequently happens to kick in at
the wrong time.

So in the end it presumably boils down to deciding whether such
an out-of-band Dom0 access is valid (and I think it is). If it
is, then xc_hvm_build_x86.c:setup_guest() should make sure any
actually allocated pages (those coming from the calls to
xc_domain_populate_physmap_exact()) get cleared when pod_mode
is set.

Otoh, as pointed out in a yet unanswered mail (see
http://lists.xen.org/archives/html/xen-devel/2012-07/msg01331.html),
these allocations could/should - when pod_mode is set -
similarly be done with XENMEMF_populate_on_demand set.
In such a case, _any_ Dom0 access to guest memory prior
to the call to xc_domain_set_pod_target() would kill the
domain, as there is not even a single page to be looked at
as a possible victim. Consequently, I would think that the
guest shouldn't be killed unconditionally when a PoD
operation didn't succeed - in particular not when the
access was from a foreign (i.e. the management) domain.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 14:12:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:12:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyO27-0000tV-Qr; Mon, 06 Aug 2012 14:11:47 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SyO26-0000tL-AP
	for xen-devel@lists.xensource.com; Mon, 06 Aug 2012 14:11:46 +0000
Received: from [85.158.139.83:20491] by server-12.bemta-5.messagelabs.com id
	07/6B-26304-1A0DF105; Mon, 06 Aug 2012 14:11:45 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-12.tower-182.messagelabs.com!1344262304!30504476!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgxNjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24260 invoked from network); 6 Aug 2012 14:11:45 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-12.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 14:11:45 -0000
X-IronPort-AV: E=Sophos;i="4.77,720,1336348800"; d="scan'208";a="13868072"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 14:11:40 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Mon, 6 Aug 2012 15:11:41 +0100
Date: Mon, 6 Aug 2012 15:11:29 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: <xen-devel@lists.xensource.com>
Message-ID: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Tim Deegan <Tim.Deegan@citrix.com>, Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH 0/5] ARM hypercall ABI: 64 bit ready
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi all,
this patch series makes the necessary changes to make sure that the
current ARM hypercall ABI can be used as-is on 64 bit ARM platforms:

- it defines xen_ulong_t as uint64_t on ARM;
- it introduces a new macro to handle guest pointers, called
XEN_GUEST_HANDLE_PARAM (which is 4 bytes on aarch32 and will be
8 bytes on aarch64);
- it replaces all occurrences of XEN_GUEST_HANDLE in hypercall
parameters with XEN_GUEST_HANDLE_PARAM.


On x86 and ia64 things should stay exactly the same.

On ARM, all the unsigned longs and guest pointers that are members of
a struct become 8 bytes in size (on both aarch32 and aarch64).
However, guest pointers that are passed as hypercall arguments in
registers are 4 bytes on aarch32 and 8 bytes on aarch64.



It is based on Ian's arm-for-4.3 branch. 


Stefano Stabellini (5):
      xen: improve changes to xen_add_to_physmap
      xen/arm: introduce __lshrdi3 and __aeabi_llsr
      xen: few more xen_ulong_t substitutions
      xen: introduce XEN_GUEST_HANDLE_PARAM
      xen: replace XEN_GUEST_HANDLE with XEN_GUEST_HANDLE_PARAM when appropriate


 xen/arch/arm/domain.c              |    2 +-
 xen/arch/arm/domctl.c              |    2 +-
 xen/arch/arm/hvm.c                 |    2 +-
 xen/arch/arm/lib/Makefile          |    2 +-
 xen/arch/arm/lib/lshrdi3.S         |   54 ++++++++++++++++++++++++++++++++++++
 xen/arch/arm/mm.c                  |    2 +-
 xen/arch/arm/physdev.c             |    2 +-
 xen/arch/arm/sysctl.c              |    2 +-
 xen/arch/x86/cpu/mcheck/mce.c      |    2 +-
 xen/arch/x86/domain.c              |    2 +-
 xen/arch/x86/domctl.c              |    2 +-
 xen/arch/x86/efi/runtime.c         |    2 +-
 xen/arch/x86/hvm/hvm.c             |   26 ++++++++--------
 xen/arch/x86/microcode.c           |    2 +-
 xen/arch/x86/mm.c                  |   14 ++++----
 xen/arch/x86/mm/hap/hap.c          |    2 +-
 xen/arch/x86/mm/mem_event.c        |    2 +-
 xen/arch/x86/mm/paging.c           |    2 +-
 xen/arch/x86/mm/shadow/common.c    |    2 +-
 xen/arch/x86/physdev.c             |    2 +-
 xen/arch/x86/platform_hypercall.c  |    2 +-
 xen/arch/x86/sysctl.c              |    2 +-
 xen/arch/x86/traps.c               |    2 +-
 xen/arch/x86/x86_32/mm.c           |    2 +-
 xen/arch/x86/x86_32/traps.c        |    2 +-
 xen/arch/x86/x86_64/compat/mm.c    |    8 ++--
 xen/arch/x86/x86_64/domain.c       |    2 +-
 xen/arch/x86/x86_64/mm.c           |    2 +-
 xen/arch/x86/x86_64/traps.c        |    2 +-
 xen/common/compat/domain.c         |    2 +-
 xen/common/compat/grant_table.c    |    2 +-
 xen/common/compat/memory.c         |    2 +-
 xen/common/domain.c                |    2 +-
 xen/common/domctl.c                |    2 +-
 xen/common/event_channel.c         |    2 +-
 xen/common/grant_table.c           |   36 ++++++++++++------------
 xen/common/kernel.c                |    4 +-
 xen/common/kexec.c                 |   16 +++++-----
 xen/common/memory.c                |    4 +-
 xen/common/multicall.c             |    2 +-
 xen/common/schedule.c              |    2 +-
 xen/common/sysctl.c                |    2 +-
 xen/common/xenoprof.c              |    8 ++--
 xen/drivers/acpi/pmstat.c          |    2 +-
 xen/drivers/char/console.c         |    6 ++--
 xen/drivers/passthrough/iommu.c    |    2 +-
 xen/include/asm-arm/guest_access.h |    2 +-
 xen/include/asm-arm/hypercall.h    |    2 +-
 xen/include/asm-arm/mm.h           |    2 +-
 xen/include/asm-x86/hap.h          |    2 +-
 xen/include/asm-x86/hypercall.h    |   24 ++++++++--------
 xen/include/asm-x86/mem_event.h    |    2 +-
 xen/include/asm-x86/mm.h           |    8 ++--
 xen/include/asm-x86/paging.h       |    2 +-
 xen/include/asm-x86/processor.h    |    2 +-
 xen/include/asm-x86/shadow.h       |    2 +-
 xen/include/asm-x86/xenoprof.h     |    6 ++--
 xen/include/public/arch-arm.h      |   21 ++++++++++----
 xen/include/public/arch-ia64.h     |    1 +
 xen/include/public/arch-x86/xen.h  |    1 +
 xen/include/public/memory.h        |   11 ++++--
 xen/include/public/physdev.h       |    2 +-
 xen/include/public/version.h       |    2 +-
 xen/include/public/xen.h           |    4 +-
 xen/include/xen/acpi.h             |    4 +-
 xen/include/xen/hypercall.h        |   52 +++++++++++++++++-----------------
 xen/include/xen/iommu.h            |    2 +-
 xen/include/xen/tmem_xen.h         |    2 +-
 xen/include/xsm/xsm.h              |    4 +-
 xen/xsm/dummy.c                    |    2 +-
 xen/xsm/flask/flask_op.c           |    4 +-
 xen/xsm/flask/hooks.c              |    2 +-
 xen/xsm/xsm_core.c                 |    2 +-
 73 files changed, 243 insertions(+), 175 deletions(-)


Cheers,

Stefano

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 14:12:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:12:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyO2b-0000wj-Cr; Mon, 06 Aug 2012 14:12:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SyO2a-0000wT-Ce
	for xen-devel@lists.xensource.com; Mon, 06 Aug 2012 14:12:16 +0000
Received: from [85.158.143.99:26192] by server-3.bemta-4.messagelabs.com id
	00/AA-01511-FB0DF105; Mon, 06 Aug 2012 14:12:15 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-5.tower-216.messagelabs.com!1344262333!30147524!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjI4NDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4232 invoked from network); 6 Aug 2012 14:12:15 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 14:12:15 -0000
X-IronPort-AV: E=Sophos;i="4.77,720,1336363200"; d="scan'208";a="33696044"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 10:12:13 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Mon, 6 Aug 2012 10:12:13 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1SyO2W-0002fX-SH;
	Mon, 06 Aug 2012 15:12:12 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: xen-devel@lists.xensource.com
Date: Mon, 6 Aug 2012 15:12:02 +0100
Message-ID: <1344262325-26598-2-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: Tim.Deegan@citrix.com, Ian.Campbell@citrix.com,
	stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH 2/5] xen/arm: introduce __lshrdi3 and
	__aeabi_llsr
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Taken from Linux.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/arch/arm/lib/Makefile  |    2 +-
 xen/arch/arm/lib/lshrdi3.S |   54 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 55 insertions(+), 1 deletions(-)
 create mode 100644 xen/arch/arm/lib/lshrdi3.S

diff --git a/xen/arch/arm/lib/Makefile b/xen/arch/arm/lib/Makefile
index cbbed68..4cf41f4 100644
--- a/xen/arch/arm/lib/Makefile
+++ b/xen/arch/arm/lib/Makefile
@@ -2,4 +2,4 @@ obj-y += memcpy.o memmove.o memset.o memzero.o
 obj-y += findbit.o setbit.o
 obj-y += setbit.o clearbit.o changebit.o
 obj-y += testsetbit.o testclearbit.o testchangebit.o
-obj-y += lib1funcs.o div64.o
+obj-y += lib1funcs.o lshrdi3.o div64.o
diff --git a/xen/arch/arm/lib/lshrdi3.S b/xen/arch/arm/lib/lshrdi3.S
new file mode 100644
index 0000000..3e8887e
--- /dev/null
+++ b/xen/arch/arm/lib/lshrdi3.S
@@ -0,0 +1,54 @@
+/* Copyright 1995, 1996, 1998, 1999, 2000, 2003, 2004, 2005
+   Free Software Foundation, Inc.
+
+This file is free software; you can redistribute it and/or modify it
+under the terms of the GNU General Public License as published by the
+Free Software Foundation; either version 2, or (at your option) any
+later version.
+
+In addition to the permissions in the GNU General Public License, the
+Free Software Foundation gives you unlimited permission to link the
+compiled version of this file into combinations with other programs,
+and to distribute those combinations without any restriction coming
+from the use of this file.  (The General Public License restrictions
+do apply in other respects; for example, they cover modification of
+the file, and distribution when not linked into a combine
+executable.)
+
+This file is distributed in the hope that it will be useful, but
+WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with this program; see the file COPYING.  If not, write to
+the Free Software Foundation, 51 Franklin Street, Fifth Floor,
+Boston, MA 02110-1301, USA.  */
+
+
+#include <xen/config.h>
+#include "assembler.h"
+
+#ifdef __ARMEB__
+#define al r1
+#define ah r0
+#else
+#define al r0
+#define ah r1
+#endif
+
+ENTRY(__lshrdi3)
+ENTRY(__aeabi_llsr)
+
+	subs	r3, r2, #32
+	rsb	ip, r2, #32
+	movmi	al, al, lsr r2
+	movpl	al, ah, lsr r3
+ ARM(	orrmi	al, al, ah, lsl ip	)
+ THUMB(	lslmi	r3, ah, ip		)
+ THUMB(	orrmi	al, al, r3		)
+	mov	ah, ah, lsr r2
+	mov	pc, lr
+
+ENDPROC(__lshrdi3)
+ENDPROC(__aeabi_llsr)
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 14:12:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:12:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyO2e-0000xk-KM; Mon, 06 Aug 2012 14:12:20 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1SyO2d-0000wN-SL
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 14:12:20 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1344262325!4055163!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjcyOTUw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10885 invoked from network); 6 Aug 2012 14:12:07 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-7.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Aug 2012 14:12:07 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q76EBqFM010907
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 6 Aug 2012 14:11:52 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q76EBpUT020710
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 6 Aug 2012 14:11:51 GMT
Received: from abhmt108.oracle.com (abhmt108.oracle.com [141.146.116.60])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q76EBpWk000784; Mon, 6 Aug 2012 09:11:51 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 06 Aug 2012 07:11:50 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id ED07141F13; Mon,  6 Aug 2012 10:02:26 -0400 (EDT)
Date: Mon, 6 Aug 2012 10:02:26 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Malcolm Crossley <malcolm.crossley@citrix.com>, m.a.young@durham.ac.uk,
	mike@flyn.org, xen@lists.fedoraproject.org
Message-ID: <20120806140226.GB3093@phenom.dumpdata.com>
References: <501F98A1.4070806@brockmann-consult.de>
	<501FBB63.3050309@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <501FBB63.3050309@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] 4.1.2 very slow without upstream patches,
 but fast with them, also 4.2 very slow
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Aug 06, 2012 at 01:41:07PM +0100, Malcolm Crossley wrote:
> On 06/08/12 11:12, Peter Maloney wrote:
> >my AMD FX-8150 system with vanilla source code is super slow, both the
> >dom0 and domUs. However, after I merge the upstream patches I found in
> >the openSUSE rpm, it runs normally.
> >
> >I tried 4.2-unstable and it was the same. There was no rc1 when I tested
> >it about 1.5 weeks ago. And 4.2 has the same horrible performance, and
> >obviously those patches won't work any more since the 4.2 code looks
> >completely reorganized, so I'm stuck with 4.1.2
> >
> >Here is the rpm I was using at the time:
> >http://download.opensuse.org/update/12.1/src/xen-4.1.2_16-1.7.1.src.rpm
> >
> >To see the list of the patches and what order to apply them, see the
> >spec file.
> >
> >Please make sure this performance issue is fixed for the 4.2 release.
> >And I would be happy to test whatever files you send me.
> >
> >
> >
> >_______________________________________________
> >Xen-devel mailing list
> >Xen-devel@lists.xen.org
> >http://lists.xen.org/xen-devel
> I suspect you may need the following patch to improve your 4.1.2
> performance:
> 
> http://xenbits.xen.org/hg/xen-4.1-testing.hg/rev/435493696053
> 
> The cache flush on every C2 transition is very expensive and causes
> a large slowdown.
> 
> 4.1.3-rc3 already includes that patch so it would be worth testing
> that version.

M A Young, could this be back-ported to the F17 and F16 packages? I believe
Michael Petullo set up a bug for that.

> 
> Malcolm
> 
> 
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 14:12:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:12:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyO2b-0000wx-Pr; Mon, 06 Aug 2012 14:12:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SyO2a-0000wX-SN
	for xen-devel@lists.xensource.com; Mon, 06 Aug 2012 14:12:17 +0000
Received: from [85.158.143.99:26232] by server-1.bemta-4.messagelabs.com id
	BC/65-24392-0C0DF105; Mon, 06 Aug 2012 14:12:16 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-5.tower-216.messagelabs.com!1344262333!30147524!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjI4NDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4274 invoked from network); 6 Aug 2012 14:12:15 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 14:12:15 -0000
X-IronPort-AV: E=Sophos;i="4.77,720,1336363200"; d="scan'208";a="33696045"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 10:12:13 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Mon, 6 Aug 2012 10:12:13 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1SyO2W-0002fX-Rm;
	Mon, 06 Aug 2012 15:12:12 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: xen-devel@lists.xensource.com
Date: Mon, 6 Aug 2012 15:12:01 +0100
Message-ID: <1344262325-26598-1-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: Tim.Deegan@citrix.com, Ian.Campbell@citrix.com,
	stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH 1/5] xen: improve changes to xen_add_to_physmap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is an incremental patch on top of
c0bc926083b5987a3e9944eec2c12ad0580100e2: in order to retain binary
compatibility, it is better to introduce foreign_domid as part of a
union containing both size and foreign_domid.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/include/public/memory.h |   11 +++++++----
 1 files changed, 7 insertions(+), 4 deletions(-)

diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
index b2adfbe..b0af2fd 100644
--- a/xen/include/public/memory.h
+++ b/xen/include/public/memory.h
@@ -208,8 +208,12 @@ struct xen_add_to_physmap {
     /* Which domain to change the mapping for. */
     domid_t domid;
 
-    /* Number of pages to go through for gmfn_range */
-    uint16_t    size;
+    union {
+        /* Number of pages to go through for gmfn_range */
+        uint16_t    size;
+        /* IFF gmfn_foreign */
+        domid_t foreign_domid;
+    };
 
     /* Source mapping space. */
 #define XENMAPSPACE_shared_info  0 /* shared info page */
@@ -217,8 +221,7 @@ struct xen_add_to_physmap {
 #define XENMAPSPACE_gmfn         2 /* GMFN */
 #define XENMAPSPACE_gmfn_range   3 /* GMFN range */
 #define XENMAPSPACE_gmfn_foreign 4 /* GMFN from another guest */
-    uint16_t space;
-    domid_t foreign_domid; /* IFF gmfn_foreign */
+    unsigned int space;
 
 #define XENMAPIDX_grant_table_status 0x80000000
 
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 14:12:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:12:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyO2f-0000xy-1Y; Mon, 06 Aug 2012 14:12:21 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SyO2d-0000xC-Pb
	for xen-devel@lists.xensource.com; Mon, 06 Aug 2012 14:12:20 +0000
Received: from [85.158.138.51:34693] by server-11.bemta-3.messagelabs.com id
	20/92-10722-2C0DF105; Mon, 06 Aug 2012 14:12:18 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-9.tower-174.messagelabs.com!1344262335!30693378!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjI4NDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10543 invoked from network); 6 Aug 2012 14:12:17 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 14:12:17 -0000
X-IronPort-AV: E=Sophos;i="4.77,720,1336363200"; d="scan'208";a="33696047"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 10:12:14 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Mon, 6 Aug 2012 10:12:13 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1SyO2W-0002fX-W4;
	Mon, 06 Aug 2012 15:12:13 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: xen-devel@lists.xensource.com
Date: Mon, 6 Aug 2012 15:12:05 +0100
Message-ID: <1344262325-26598-5-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: Tim.Deegan@citrix.com, Ian.Campbell@citrix.com,
	stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH 5/5] xen: replace XEN_GUEST_HANDLE with
	XEN_GUEST_HANDLE_PARAM when appropriate
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Note: these changes don't make any difference on x86 and ia64.


Replace XEN_GUEST_HANDLE with XEN_GUEST_HANDLE_PARAM when it is used as
a hypercall argument.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/arch/arm/domain.c              |    2 +-
 xen/arch/arm/domctl.c              |    2 +-
 xen/arch/arm/hvm.c                 |    2 +-
 xen/arch/arm/mm.c                  |    2 +-
 xen/arch/arm/physdev.c             |    2 +-
 xen/arch/arm/sysctl.c              |    2 +-
 xen/arch/x86/cpu/mcheck/mce.c      |    2 +-
 xen/arch/x86/domain.c              |    2 +-
 xen/arch/x86/domctl.c              |    2 +-
 xen/arch/x86/efi/runtime.c         |    2 +-
 xen/arch/x86/hvm/hvm.c             |   26 +++++++++---------
 xen/arch/x86/microcode.c           |    2 +-
 xen/arch/x86/mm.c                  |   14 +++++-----
 xen/arch/x86/mm/hap/hap.c          |    2 +-
 xen/arch/x86/mm/mem_event.c        |    2 +-
 xen/arch/x86/mm/paging.c           |    2 +-
 xen/arch/x86/mm/shadow/common.c    |    2 +-
 xen/arch/x86/physdev.c             |    2 +-
 xen/arch/x86/platform_hypercall.c  |    2 +-
 xen/arch/x86/sysctl.c              |    2 +-
 xen/arch/x86/traps.c               |    2 +-
 xen/arch/x86/x86_32/mm.c           |    2 +-
 xen/arch/x86/x86_32/traps.c        |    2 +-
 xen/arch/x86/x86_64/compat/mm.c    |    8 +++---
 xen/arch/x86/x86_64/domain.c       |    2 +-
 xen/arch/x86/x86_64/mm.c           |    2 +-
 xen/arch/x86/x86_64/traps.c        |    2 +-
 xen/common/compat/domain.c         |    2 +-
 xen/common/compat/grant_table.c    |    2 +-
 xen/common/compat/memory.c         |    2 +-
 xen/common/domain.c                |    2 +-
 xen/common/domctl.c                |    2 +-
 xen/common/event_channel.c         |    2 +-
 xen/common/grant_table.c           |   36 ++++++++++++------------
 xen/common/kernel.c                |    4 +-
 xen/common/kexec.c                 |   16 +++++-----
 xen/common/memory.c                |    4 +-
 xen/common/multicall.c             |    2 +-
 xen/common/schedule.c              |    2 +-
 xen/common/sysctl.c                |    2 +-
 xen/common/xenoprof.c              |    8 +++---
 xen/drivers/acpi/pmstat.c          |    2 +-
 xen/drivers/char/console.c         |    6 ++--
 xen/drivers/passthrough/iommu.c    |    2 +-
 xen/include/asm-arm/guest_access.h |    2 +-
 xen/include/asm-arm/hypercall.h    |    2 +-
 xen/include/asm-arm/mm.h           |    2 +-
 xen/include/asm-x86/hap.h          |    2 +-
 xen/include/asm-x86/hypercall.h    |   24 ++++++++--------
 xen/include/asm-x86/mem_event.h    |    2 +-
 xen/include/asm-x86/mm.h           |    8 +++---
 xen/include/asm-x86/paging.h       |    2 +-
 xen/include/asm-x86/processor.h    |    2 +-
 xen/include/asm-x86/shadow.h       |    2 +-
 xen/include/asm-x86/xenoprof.h     |    6 ++--
 xen/include/xen/acpi.h             |    4 +-
 xen/include/xen/hypercall.h        |   52 ++++++++++++++++++------------------
 xen/include/xen/iommu.h            |    2 +-
 xen/include/xen/tmem_xen.h         |    2 +-
 xen/include/xsm/xsm.h              |    4 +-
 xen/xsm/dummy.c                    |    2 +-
 xen/xsm/flask/flask_op.c           |    4 +-
 xen/xsm/flask/hooks.c              |    2 +-
 xen/xsm/xsm_core.c                 |    2 +-
 64 files changed, 160 insertions(+), 160 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index ee58d68..07b50e2 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -515,7 +515,7 @@ void arch_dump_domain_info(struct domain *d)
 {
 }
 
-long arch_do_vcpu_op(int cmd, struct vcpu *v, XEN_GUEST_HANDLE(void) arg)
+long arch_do_vcpu_op(int cmd, struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     return -ENOSYS;
 }
diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
index 1a5f79f..cf16791 100644
--- a/xen/arch/arm/domctl.c
+++ b/xen/arch/arm/domctl.c
@@ -11,7 +11,7 @@
 #include <public/domctl.h>
 
 long arch_do_domctl(struct xen_domctl *domctl,
-                    XEN_GUEST_HANDLE(xen_domctl_t) u_domctl)
+                    XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
     return -ENOSYS;
 }
diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
index c11378d..40f519e 100644
--- a/xen/arch/arm/hvm.c
+++ b/xen/arch/arm/hvm.c
@@ -11,7 +11,7 @@
 
 #include <asm/hypercall.h>
 
-long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE(void) arg)
+long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
 
 {
     long rc = 0;
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 08bc55b..c9cc59f 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -541,7 +541,7 @@ static int xenmem_add_to_physmap(struct domain *d,
     return xenmem_add_to_physmap_once(d, xatp);
 }
 
-long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
+long arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     int rc;
 
diff --git a/xen/arch/arm/physdev.c b/xen/arch/arm/physdev.c
index bcf4337..0801e8c 100644
--- a/xen/arch/arm/physdev.c
+++ b/xen/arch/arm/physdev.c
@@ -11,7 +11,7 @@
 #include <asm/hypercall.h>
 
 
-int do_physdev_op(int cmd, XEN_GUEST_HANDLE(void) arg)
+int do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     printk("%s %d cmd=%d: not implemented yet\n", __func__, __LINE__, cmd);
     return -ENOSYS;
diff --git a/xen/arch/arm/sysctl.c b/xen/arch/arm/sysctl.c
index e8e1c0d..a286abe 100644
--- a/xen/arch/arm/sysctl.c
+++ b/xen/arch/arm/sysctl.c
@@ -13,7 +13,7 @@
 #include <public/sysctl.h>
 
 long arch_do_sysctl(struct xen_sysctl *sysctl,
-                    XEN_GUEST_HANDLE(xen_sysctl_t) u_sysctl)
+                    XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
 {
     return -ENOSYS;
 }
diff --git a/xen/arch/x86/cpu/mcheck/mce.c b/xen/arch/x86/cpu/mcheck/mce.c
index ed76131..4b2e0c7 100644
--- a/xen/arch/x86/cpu/mcheck/mce.c
+++ b/xen/arch/x86/cpu/mcheck/mce.c
@@ -1359,7 +1359,7 @@ CHECK_mcinfo_recovery;
 #endif
 
 /* Machine Check Architecture Hypercall */
-long do_mca(XEN_GUEST_HANDLE(xen_mc_t) u_xen_mc)
+long do_mca(XEN_GUEST_HANDLE_PARAM(xen_mc_t) u_xen_mc)
 {
     long ret = 0;
     struct xen_mc curop, *op = &curop;
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 5bba4b9..13ff776 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1138,7 +1138,7 @@ map_vcpu_info(struct vcpu *v, unsigned long gfn, unsigned offset)
 
 long
 arch_do_vcpu_op(
-    int cmd, struct vcpu *v, XEN_GUEST_HANDLE(void) arg)
+    int cmd, struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     long rc = 0;
 
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 135ea6e..663bfe4 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -48,7 +48,7 @@ static int gdbsx_guest_mem_io(
 
 long arch_do_domctl(
     struct xen_domctl *domctl,
-    XEN_GUEST_HANDLE(xen_domctl_t) u_domctl)
+    XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
     long ret = 0;
 
diff --git a/xen/arch/x86/efi/runtime.c b/xen/arch/x86/efi/runtime.c
index 1dbe2db..b2ff495 100644
--- a/xen/arch/x86/efi/runtime.c
+++ b/xen/arch/x86/efi/runtime.c
@@ -184,7 +184,7 @@ int efi_get_info(uint32_t idx, union xenpf_efi_info *info)
     return 0;
 }
 
-static long gwstrlen(XEN_GUEST_HANDLE(CHAR16) str)
+static long gwstrlen(XEN_GUEST_HANDLE_PARAM(CHAR16) str)
 {
     unsigned long len;
 
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 22c136b..bf97aea 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -3041,14 +3041,14 @@ static int grant_table_op_is_allowed(unsigned int cmd)
 }
 
 static long hvm_grant_table_op(
-    unsigned int cmd, XEN_GUEST_HANDLE(void) uop, unsigned int count)
+    unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) uop, unsigned int count)
 {
     if ( !grant_table_op_is_allowed(cmd) )
         return -ENOSYS; /* all other commands need auditing */
     return do_grant_table_op(cmd, uop, count);
 }
 
-static long hvm_memory_op(int cmd, XEN_GUEST_HANDLE(void) arg)
+static long hvm_memory_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     long rc;
 
@@ -3066,7 +3066,7 @@ static long hvm_memory_op(int cmd, XEN_GUEST_HANDLE(void) arg)
     return do_memory_op(cmd, arg);
 }
 
-static long hvm_physdev_op(int cmd, XEN_GUEST_HANDLE(void) arg)
+static long hvm_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     switch ( cmd )
     {
@@ -3082,7 +3082,7 @@ static long hvm_physdev_op(int cmd, XEN_GUEST_HANDLE(void) arg)
 }
 
 static long hvm_vcpu_op(
-    int cmd, int vcpuid, XEN_GUEST_HANDLE(void) arg)
+    int cmd, int vcpuid, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     long rc;
 
@@ -3131,7 +3131,7 @@ static hvm_hypercall_t *hvm_hypercall32_table[NR_hypercalls] = {
 #else /* defined(__x86_64__) */
 
 static long hvm_grant_table_op_compat32(unsigned int cmd,
-                                        XEN_GUEST_HANDLE(void) uop,
+                                        XEN_GUEST_HANDLE_PARAM(void) uop,
                                         unsigned int count)
 {
     if ( !grant_table_op_is_allowed(cmd) )
@@ -3139,7 +3139,7 @@ static long hvm_grant_table_op_compat32(unsigned int cmd,
     return compat_grant_table_op(cmd, uop, count);
 }
 
-static long hvm_memory_op_compat32(int cmd, XEN_GUEST_HANDLE(void) arg)
+static long hvm_memory_op_compat32(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     int rc;
 
@@ -3158,7 +3158,7 @@ static long hvm_memory_op_compat32(int cmd, XEN_GUEST_HANDLE(void) arg)
 }
 
 static long hvm_vcpu_op_compat32(
-    int cmd, int vcpuid, XEN_GUEST_HANDLE(void) arg)
+    int cmd, int vcpuid, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     long rc;
 
@@ -3182,7 +3182,7 @@ static long hvm_vcpu_op_compat32(
 }
 
 static long hvm_physdev_op_compat32(
-    int cmd, XEN_GUEST_HANDLE(void) arg)
+    int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     switch ( cmd )
     {
@@ -3354,7 +3354,7 @@ void hvm_hypercall_page_initialise(struct domain *d,
 }
 
 static int hvmop_set_pci_intx_level(
-    XEN_GUEST_HANDLE(xen_hvm_set_pci_intx_level_t) uop)
+    XEN_GUEST_HANDLE_PARAM(xen_hvm_set_pci_intx_level_t) uop)
 {
     struct xen_hvm_set_pci_intx_level op;
     struct domain *d;
@@ -3519,7 +3519,7 @@ static void hvm_s3_resume(struct domain *d)
 }
 
 static int hvmop_set_isa_irq_level(
-    XEN_GUEST_HANDLE(xen_hvm_set_isa_irq_level_t) uop)
+    XEN_GUEST_HANDLE_PARAM(xen_hvm_set_isa_irq_level_t) uop)
 {
     struct xen_hvm_set_isa_irq_level op;
     struct domain *d;
@@ -3563,7 +3563,7 @@ static int hvmop_set_isa_irq_level(
 }
 
 static int hvmop_set_pci_link_route(
-    XEN_GUEST_HANDLE(xen_hvm_set_pci_link_route_t) uop)
+    XEN_GUEST_HANDLE_PARAM(xen_hvm_set_pci_link_route_t) uop)
 {
     struct xen_hvm_set_pci_link_route op;
     struct domain *d;
@@ -3596,7 +3596,7 @@ static int hvmop_set_pci_link_route(
 }
 
 static int hvmop_inject_msi(
-    XEN_GUEST_HANDLE(xen_hvm_inject_msi_t) uop)
+    XEN_GUEST_HANDLE_PARAM(xen_hvm_inject_msi_t) uop)
 {
     struct xen_hvm_inject_msi op;
     struct domain *d;
@@ -3680,7 +3680,7 @@ static int hvm_replace_event_channel(struct vcpu *v, domid_t remote_domid,
     return 0;
 }
 
-long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE(void) arg)
+long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
 
 {
     struct domain *curr_d = current->domain;
diff --git a/xen/arch/x86/microcode.c b/xen/arch/x86/microcode.c
index bdda3f5..1477481 100644
--- a/xen/arch/x86/microcode.c
+++ b/xen/arch/x86/microcode.c
@@ -192,7 +192,7 @@ static long do_microcode_update(void *_info)
     return error;
 }
 
-int microcode_update(XEN_GUEST_HANDLE(const_void) buf, unsigned long len)
+int microcode_update(XEN_GUEST_HANDLE_PARAM(const_void) buf, unsigned long len)
 {
     int ret;
     struct microcode_info *info;
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 9f63974..fd1c890 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -2914,7 +2914,7 @@ static void put_pg_owner(struct domain *pg_owner)
 }
 
 static inline int vcpumask_to_pcpumask(
-    struct domain *d, XEN_GUEST_HANDLE(const_void) bmap, cpumask_t *pmask)
+    struct domain *d, XEN_GUEST_HANDLE_PARAM(const_void) bmap, cpumask_t *pmask)
 {
     unsigned int vcpu_id, vcpu_bias, offs;
     unsigned long vmask;
@@ -2974,9 +2974,9 @@ static inline void fixunmap_domain_page(const void *ptr)
 #endif
 
 int do_mmuext_op(
-    XEN_GUEST_HANDLE(mmuext_op_t) uops,
+    XEN_GUEST_HANDLE_PARAM(mmuext_op_t) uops,
     unsigned int count,
-    XEN_GUEST_HANDLE(uint) pdone,
+    XEN_GUEST_HANDLE_PARAM(uint) pdone,
     unsigned int foreigndom)
 {
     struct mmuext_op op;
@@ -3438,9 +3438,9 @@ int do_mmuext_op(
 }
 
 int do_mmu_update(
-    XEN_GUEST_HANDLE(mmu_update_t) ureqs,
+    XEN_GUEST_HANDLE_PARAM(mmu_update_t) ureqs,
     unsigned int count,
-    XEN_GUEST_HANDLE(uint) pdone,
+    XEN_GUEST_HANDLE_PARAM(uint) pdone,
     unsigned int foreigndom)
 {
     struct mmu_update req;
@@ -4387,7 +4387,7 @@ long set_gdt(struct vcpu *v,
 }
 
 
-long do_set_gdt(XEN_GUEST_HANDLE(ulong) frame_list, unsigned int entries)
+long do_set_gdt(XEN_GUEST_HANDLE_PARAM(ulong) frame_list, unsigned int entries)
 {
     int nr_pages = (entries + 511) / 512;
     unsigned long frames[16];
@@ -4661,7 +4661,7 @@ static int xenmem_add_to_physmap(struct domain *d,
     return xenmem_add_to_physmap_once(d, xatp);
 }
 
-long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
+long arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     int rc;
 
diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 13b4be2..67e48a3 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -690,7 +690,7 @@ void hap_teardown(struct domain *d)
 }
 
 int hap_domctl(struct domain *d, xen_domctl_shadow_op_t *sc,
-               XEN_GUEST_HANDLE(void) u_domctl)
+               XEN_GUEST_HANDLE_PARAM(void) u_domctl)
 {
     int rc, preempted = 0;
 
diff --git a/xen/arch/x86/mm/mem_event.c b/xen/arch/x86/mm/mem_event.c
index d728889..d3dac14 100644
--- a/xen/arch/x86/mm/mem_event.c
+++ b/xen/arch/x86/mm/mem_event.c
@@ -512,7 +512,7 @@ void mem_event_cleanup(struct domain *d)
 }
 
 int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
-                     XEN_GUEST_HANDLE(void) u_domctl)
+                     XEN_GUEST_HANDLE_PARAM(void) u_domctl)
 {
     int rc;
 
diff --git a/xen/arch/x86/mm/paging.c b/xen/arch/x86/mm/paging.c
index ca879f9..ea44e39 100644
--- a/xen/arch/x86/mm/paging.c
+++ b/xen/arch/x86/mm/paging.c
@@ -654,7 +654,7 @@ void paging_vcpu_init(struct vcpu *v)
 
 
 int paging_domctl(struct domain *d, xen_domctl_shadow_op_t *sc,
-                  XEN_GUEST_HANDLE(void) u_domctl)
+                  XEN_GUEST_HANDLE_PARAM(void) u_domctl)
 {
     int rc;
 
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index dc245be..bd47f03 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -3786,7 +3786,7 @@ out:
 
 int shadow_domctl(struct domain *d, 
                   xen_domctl_shadow_op_t *sc,
-                  XEN_GUEST_HANDLE(void) u_domctl)
+                  XEN_GUEST_HANDLE_PARAM(void) u_domctl)
 {
     int rc, preempted = 0;
 
diff --git a/xen/arch/x86/physdev.c b/xen/arch/x86/physdev.c
index b0458fd..b6474ef 100644
--- a/xen/arch/x86/physdev.c
+++ b/xen/arch/x86/physdev.c
@@ -255,7 +255,7 @@ int physdev_unmap_pirq(domid_t domid, int pirq)
 }
 #endif /* COMPAT */
 
-ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE(void) arg)
+ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     int irq;
     ret_t ret;
diff --git a/xen/arch/x86/platform_hypercall.c b/xen/arch/x86/platform_hypercall.c
index 88880b0..a32e0a2 100644
--- a/xen/arch/x86/platform_hypercall.c
+++ b/xen/arch/x86/platform_hypercall.c
@@ -60,7 +60,7 @@ long cpu_down_helper(void *data);
 long core_parking_helper(void *data);
 uint32_t get_cur_idle_nums(void);
 
-ret_t do_platform_op(XEN_GUEST_HANDLE(xen_platform_op_t) u_xenpf_op)
+ret_t do_platform_op(XEN_GUEST_HANDLE_PARAM(xen_platform_op_t) u_xenpf_op)
 {
     ret_t ret = 0;
     struct xen_platform_op curop, *op = &curop;
diff --git a/xen/arch/x86/sysctl.c b/xen/arch/x86/sysctl.c
index 379f071..b84dd34 100644
--- a/xen/arch/x86/sysctl.c
+++ b/xen/arch/x86/sysctl.c
@@ -58,7 +58,7 @@ long cpu_down_helper(void *data)
 }
 
 long arch_do_sysctl(
-    struct xen_sysctl *sysctl, XEN_GUEST_HANDLE(xen_sysctl_t) u_sysctl)
+    struct xen_sysctl *sysctl, XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
 {
     long ret = 0;
 
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 767be86..281d9e7 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -3700,7 +3700,7 @@ int send_guest_trap(struct domain *d, uint16_t vcpuid, unsigned int trap_nr)
 }
 
 
-long do_set_trap_table(XEN_GUEST_HANDLE(const_trap_info_t) traps)
+long do_set_trap_table(XEN_GUEST_HANDLE_PARAM(const_trap_info_t) traps)
 {
     struct trap_info cur;
     struct vcpu *curr = current;
diff --git a/xen/arch/x86/x86_32/mm.c b/xen/arch/x86/x86_32/mm.c
index 37efa3c..f6448fb 100644
--- a/xen/arch/x86/x86_32/mm.c
+++ b/xen/arch/x86/x86_32/mm.c
@@ -203,7 +203,7 @@ void __init subarch_init_memory(void)
     }
 }
 
-long subarch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
+long subarch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct xen_machphys_mfn_list xmml;
     unsigned long mfn, last_mfn;
diff --git a/xen/arch/x86/x86_32/traps.c b/xen/arch/x86/x86_32/traps.c
index 8f68808..0c7c860 100644
--- a/xen/arch/x86/x86_32/traps.c
+++ b/xen/arch/x86/x86_32/traps.c
@@ -492,7 +492,7 @@ static long unregister_guest_callback(struct callback_unregister *unreg)
 }
 
 
-long do_callback_op(int cmd, XEN_GUEST_HANDLE(const_void) arg)
+long do_callback_op(int cmd, XEN_GUEST_HANDLE_PARAM(const_void) arg)
 {
     long ret;
 
diff --git a/xen/arch/x86/x86_64/compat/mm.c b/xen/arch/x86/x86_64/compat/mm.c
index f497503..88a07e8 100644
--- a/xen/arch/x86/x86_64/compat/mm.c
+++ b/xen/arch/x86/x86_64/compat/mm.c
@@ -5,7 +5,7 @@
 #include <asm/mem_event.h>
 #include <asm/mem_sharing.h>
 
-int compat_set_gdt(XEN_GUEST_HANDLE(uint) frame_list, unsigned int entries)
+int compat_set_gdt(XEN_GUEST_HANDLE_PARAM(uint) frame_list, unsigned int entries)
 {
     unsigned int i, nr_pages = (entries + 511) / 512;
     unsigned long frames[16];
@@ -44,7 +44,7 @@ int compat_update_descriptor(u32 pa_lo, u32 pa_hi, u32 desc_lo, u32 desc_hi)
                                 desc_lo | ((u64)desc_hi << 32));
 }
 
-int compat_arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
+int compat_arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct compat_machphys_mfn_list xmml;
     l2_pgentry_t l2e;
@@ -260,9 +260,9 @@ int compat_update_va_mapping_otherdomain(unsigned long va, u32 lo, u32 hi,
 
 DEFINE_XEN_GUEST_HANDLE(mmuext_op_compat_t);
 
-int compat_mmuext_op(XEN_GUEST_HANDLE(mmuext_op_compat_t) cmp_uops,
+int compat_mmuext_op(XEN_GUEST_HANDLE_PARAM(mmuext_op_compat_t) cmp_uops,
                      unsigned int count,
-                     XEN_GUEST_HANDLE(uint) pdone,
+                     XEN_GUEST_HANDLE_PARAM(uint) pdone,
                      unsigned int foreigndom)
 {
     unsigned int i, preempt_mask;
diff --git a/xen/arch/x86/x86_64/domain.c b/xen/arch/x86/x86_64/domain.c
index e746c89..144ca2d 100644
--- a/xen/arch/x86/x86_64/domain.c
+++ b/xen/arch/x86/x86_64/domain.c
@@ -23,7 +23,7 @@ CHECK_vcpu_get_physid;
 
 int
 arch_compat_vcpu_op(
-    int cmd, struct vcpu *v, XEN_GUEST_HANDLE(void) arg)
+    int cmd, struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     int rc = -ENOSYS;
 
diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index 635a499..17c46a1 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -1043,7 +1043,7 @@ void __init subarch_init_memory(void)
     }
 }
 
-long subarch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
+long subarch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct xen_machphys_mfn_list xmml;
     l3_pgentry_t l3e;
diff --git a/xen/arch/x86/x86_64/traps.c b/xen/arch/x86/x86_64/traps.c
index 806cf2e..6ead813 100644
--- a/xen/arch/x86/x86_64/traps.c
+++ b/xen/arch/x86/x86_64/traps.c
@@ -518,7 +518,7 @@ static long unregister_guest_callback(struct callback_unregister *unreg)
 }
 
 
-long do_callback_op(int cmd, XEN_GUEST_HANDLE(const_void) arg)
+long do_callback_op(int cmd, XEN_GUEST_HANDLE_PARAM(const_void) arg)
 {
     long ret;
 
diff --git a/xen/common/compat/domain.c b/xen/common/compat/domain.c
index 40a0287..e4c8ceb 100644
--- a/xen/common/compat/domain.c
+++ b/xen/common/compat/domain.c
@@ -15,7 +15,7 @@
 CHECK_vcpu_set_periodic_timer;
 #undef xen_vcpu_set_periodic_timer
 
-int compat_vcpu_op(int cmd, int vcpuid, XEN_GUEST_HANDLE(void) arg)
+int compat_vcpu_op(int cmd, int vcpuid, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct domain *d = current->domain;
     struct vcpu *v;
diff --git a/xen/common/compat/grant_table.c b/xen/common/compat/grant_table.c
index edd20c6..74a4733 100644
--- a/xen/common/compat/grant_table.c
+++ b/xen/common/compat/grant_table.c
@@ -52,7 +52,7 @@ CHECK_gnttab_swap_grant_ref;
 #undef xen_gnttab_swap_grant_ref
 
 int compat_grant_table_op(unsigned int cmd,
-                          XEN_GUEST_HANDLE(void) cmp_uop,
+                          XEN_GUEST_HANDLE_PARAM(void) cmp_uop,
                           unsigned int count)
 {
     int rc = 0;
diff --git a/xen/common/compat/memory.c b/xen/common/compat/memory.c
index e7257cc..8e311ff 100644
--- a/xen/common/compat/memory.c
+++ b/xen/common/compat/memory.c
@@ -13,7 +13,7 @@ CHECK_TYPE(domid);
 #undef compat_domid_t
 #undef xen_domid_t
 
-int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE(void) compat)
+int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) compat)
 {
     int rc, split, op = cmd & MEMOP_CMD_MASK;
     unsigned int start_extent = cmd >> MEMOP_EXTENT_SHIFT;
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 4c5d241..d7cd135 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -804,7 +804,7 @@ void vcpu_reset(struct vcpu *v)
 }
 
 
-long do_vcpu_op(int cmd, int vcpuid, XEN_GUEST_HANDLE(void) arg)
+long do_vcpu_op(int cmd, int vcpuid, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct domain *d = current->domain;
     struct vcpu *v;
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index 7ca6b08..527c5ad 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -238,7 +238,7 @@ void domctl_lock_release(void)
     spin_unlock(&current->domain->hypercall_deadlock_mutex);
 }
 
-long do_domctl(XEN_GUEST_HANDLE(xen_domctl_t) u_domctl)
+long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
     long ret = 0;
     struct xen_domctl curop, *op = &curop;
diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index 53777f8..a80a0d1 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -970,7 +970,7 @@ out:
 }
 
 
-long do_event_channel_op(int cmd, XEN_GUEST_HANDLE(void) arg)
+long do_event_channel_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     long rc;
 
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 9961e83..d780dc6 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -771,7 +771,7 @@ __gnttab_map_grant_ref(
 
 static long
 gnttab_map_grant_ref(
-    XEN_GUEST_HANDLE(gnttab_map_grant_ref_t) uop, unsigned int count)
+    XEN_GUEST_HANDLE_PARAM(gnttab_map_grant_ref_t) uop, unsigned int count)
 {
     int i;
     struct gnttab_map_grant_ref op;
@@ -1040,7 +1040,7 @@ __gnttab_unmap_grant_ref(
 
 static long
 gnttab_unmap_grant_ref(
-    XEN_GUEST_HANDLE(gnttab_unmap_grant_ref_t) uop, unsigned int count)
+    XEN_GUEST_HANDLE_PARAM(gnttab_unmap_grant_ref_t) uop, unsigned int count)
 {
     int i, c, partial_done, done = 0;
     struct gnttab_unmap_grant_ref op;
@@ -1102,7 +1102,7 @@ __gnttab_unmap_and_replace(
 
 static long
 gnttab_unmap_and_replace(
-    XEN_GUEST_HANDLE(gnttab_unmap_and_replace_t) uop, unsigned int count)
+    XEN_GUEST_HANDLE_PARAM(gnttab_unmap_and_replace_t) uop, unsigned int count)
 {
     int i, c, partial_done, done = 0;
     struct gnttab_unmap_and_replace op;
@@ -1254,7 +1254,7 @@ active_alloc_failed:
 
 static long 
 gnttab_setup_table(
-    XEN_GUEST_HANDLE(gnttab_setup_table_t) uop, unsigned int count)
+    XEN_GUEST_HANDLE_PARAM(gnttab_setup_table_t) uop, unsigned int count)
 {
     struct gnttab_setup_table op;
     struct domain *d;
@@ -1348,7 +1348,7 @@ gnttab_setup_table(
 
 static long 
 gnttab_query_size(
-    XEN_GUEST_HANDLE(gnttab_query_size_t) uop, unsigned int count)
+    XEN_GUEST_HANDLE_PARAM(gnttab_query_size_t) uop, unsigned int count)
 {
     struct gnttab_query_size op;
     struct domain *d;
@@ -1485,7 +1485,7 @@ gnttab_prepare_for_transfer(
 
 static long
 gnttab_transfer(
-    XEN_GUEST_HANDLE(gnttab_transfer_t) uop, unsigned int count)
+    XEN_GUEST_HANDLE_PARAM(gnttab_transfer_t) uop, unsigned int count)
 {
     struct domain *d = current->domain;
     struct domain *e;
@@ -2082,7 +2082,7 @@ __gnttab_copy(
 
 static long
 gnttab_copy(
-    XEN_GUEST_HANDLE(gnttab_copy_t) uop, unsigned int count)
+    XEN_GUEST_HANDLE_PARAM(gnttab_copy_t) uop, unsigned int count)
 {
     int i;
     struct gnttab_copy op;
@@ -2101,7 +2101,7 @@ gnttab_copy(
 }
 
 static long
-gnttab_set_version(XEN_GUEST_HANDLE(gnttab_set_version_t uop))
+gnttab_set_version(XEN_GUEST_HANDLE_PARAM(gnttab_set_version_t uop))
 {
     gnttab_set_version_t op;
     struct domain *d = current->domain;
@@ -2220,7 +2220,7 @@ out:
 }
 
 static long
-gnttab_get_status_frames(XEN_GUEST_HANDLE(gnttab_get_status_frames_t) uop,
+gnttab_get_status_frames(XEN_GUEST_HANDLE_PARAM(gnttab_get_status_frames_t) uop,
                          int count)
 {
     gnttab_get_status_frames_t op;
@@ -2289,7 +2289,7 @@ out1:
 }
 
 static long
-gnttab_get_version(XEN_GUEST_HANDLE(gnttab_get_version_t uop))
+gnttab_get_version(XEN_GUEST_HANDLE_PARAM(gnttab_get_version_t uop))
 {
     gnttab_get_version_t op;
     struct domain *d;
@@ -2368,7 +2368,7 @@ out:
 }
 
 static long
-gnttab_swap_grant_ref(XEN_GUEST_HANDLE(gnttab_swap_grant_ref_t uop),
+gnttab_swap_grant_ref(XEN_GUEST_HANDLE_PARAM(gnttab_swap_grant_ref_t uop),
                       unsigned int count)
 {
     int i;
@@ -2389,7 +2389,7 @@ gnttab_swap_grant_ref(XEN_GUEST_HANDLE(gnttab_swap_grant_ref_t uop),
 
 long
 do_grant_table_op(
-    unsigned int cmd, XEN_GUEST_HANDLE(void) uop, unsigned int count)
+    unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) uop, unsigned int count)
 {
     long rc;
     
@@ -2401,7 +2401,7 @@ do_grant_table_op(
     {
     case GNTTABOP_map_grant_ref:
     {
-        XEN_GUEST_HANDLE(gnttab_map_grant_ref_t) map =
+        XEN_GUEST_HANDLE_PARAM(gnttab_map_grant_ref_t) map =
             guest_handle_cast(uop, gnttab_map_grant_ref_t);
         if ( unlikely(!guest_handle_okay(map, count)) )
             goto out;
@@ -2415,7 +2415,7 @@ do_grant_table_op(
     }
     case GNTTABOP_unmap_grant_ref:
     {
-        XEN_GUEST_HANDLE(gnttab_unmap_grant_ref_t) unmap =
+        XEN_GUEST_HANDLE_PARAM(gnttab_unmap_grant_ref_t) unmap =
             guest_handle_cast(uop, gnttab_unmap_grant_ref_t);
         if ( unlikely(!guest_handle_okay(unmap, count)) )
             goto out;
@@ -2429,7 +2429,7 @@ do_grant_table_op(
     }
     case GNTTABOP_unmap_and_replace:
     {
-        XEN_GUEST_HANDLE(gnttab_unmap_and_replace_t) unmap =
+        XEN_GUEST_HANDLE_PARAM(gnttab_unmap_and_replace_t) unmap =
             guest_handle_cast(uop, gnttab_unmap_and_replace_t);
         if ( unlikely(!guest_handle_okay(unmap, count)) )
             goto out;
@@ -2453,7 +2453,7 @@ do_grant_table_op(
     }
     case GNTTABOP_transfer:
     {
-        XEN_GUEST_HANDLE(gnttab_transfer_t) transfer =
+        XEN_GUEST_HANDLE_PARAM(gnttab_transfer_t) transfer =
             guest_handle_cast(uop, gnttab_transfer_t);
         if ( unlikely(!guest_handle_okay(transfer, count)) )
             goto out;
@@ -2467,7 +2467,7 @@ do_grant_table_op(
     }
     case GNTTABOP_copy:
     {
-        XEN_GUEST_HANDLE(gnttab_copy_t) copy =
+        XEN_GUEST_HANDLE_PARAM(gnttab_copy_t) copy =
             guest_handle_cast(uop, gnttab_copy_t);
         if ( unlikely(!guest_handle_okay(copy, count)) )
             goto out;
@@ -2504,7 +2504,7 @@ do_grant_table_op(
     }
     case GNTTABOP_swap_grant_ref:
     {
-        XEN_GUEST_HANDLE(gnttab_swap_grant_ref_t) swap =
+        XEN_GUEST_HANDLE_PARAM(gnttab_swap_grant_ref_t) swap =
             guest_handle_cast(uop, gnttab_swap_grant_ref_t);
         if ( unlikely(!guest_handle_okay(swap, count)) )
             goto out;
diff --git a/xen/common/kernel.c b/xen/common/kernel.c
index c915bbc..55caff6 100644
--- a/xen/common/kernel.c
+++ b/xen/common/kernel.c
@@ -204,7 +204,7 @@ void __init do_initcalls(void)
  * Simple hypercalls.
  */
 
-DO(xen_version)(int cmd, XEN_GUEST_HANDLE(void) arg)
+DO(xen_version)(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     switch ( cmd )
     {
@@ -332,7 +332,7 @@ DO(xen_version)(int cmd, XEN_GUEST_HANDLE(void) arg)
     return -ENOSYS;
 }
 
-DO(nmi_op)(unsigned int cmd, XEN_GUEST_HANDLE(void) arg)
+DO(nmi_op)(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct xennmi_callback cb;
     long rc = 0;
diff --git a/xen/common/kexec.c b/xen/common/kexec.c
index 09a5624..03389eb 100644
--- a/xen/common/kexec.c
+++ b/xen/common/kexec.c
@@ -613,7 +613,7 @@ static int kexec_get_range_internal(xen_kexec_range_t *range)
     return ret;
 }
 
-static int kexec_get_range(XEN_GUEST_HANDLE(void) uarg)
+static int kexec_get_range(XEN_GUEST_HANDLE_PARAM(void) uarg)
 {
     xen_kexec_range_t range;
     int ret = -EINVAL;
@@ -629,7 +629,7 @@ static int kexec_get_range(XEN_GUEST_HANDLE(void) uarg)
     return ret;
 }
 
-static int kexec_get_range_compat(XEN_GUEST_HANDLE(void) uarg)
+static int kexec_get_range_compat(XEN_GUEST_HANDLE_PARAM(void) uarg)
 {
 #ifdef CONFIG_COMPAT
     xen_kexec_range_t range;
@@ -777,7 +777,7 @@ static int kexec_load_unload_internal(unsigned long op, xen_kexec_load_t *load)
     return ret;
 }
 
-static int kexec_load_unload(unsigned long op, XEN_GUEST_HANDLE(void) uarg)
+static int kexec_load_unload(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) uarg)
 {
     xen_kexec_load_t load;
 
@@ -788,7 +788,7 @@ static int kexec_load_unload(unsigned long op, XEN_GUEST_HANDLE(void) uarg)
 }
 
 static int kexec_load_unload_compat(unsigned long op,
-                                    XEN_GUEST_HANDLE(void) uarg)
+                                    XEN_GUEST_HANDLE_PARAM(void) uarg)
 {
 #ifdef CONFIG_COMPAT
     compat_kexec_load_t compat_load;
@@ -813,7 +813,7 @@ static int kexec_load_unload_compat(unsigned long op,
 #endif /* CONFIG_COMPAT */
 }
 
-static int kexec_exec(XEN_GUEST_HANDLE(void) uarg)
+static int kexec_exec(XEN_GUEST_HANDLE_PARAM(void) uarg)
 {
     xen_kexec_exec_t exec;
     xen_kexec_image_t *image;
@@ -845,7 +845,7 @@ static int kexec_exec(XEN_GUEST_HANDLE(void) uarg)
     return -EINVAL; /* never reached */
 }
 
-int do_kexec_op_internal(unsigned long op, XEN_GUEST_HANDLE(void) uarg,
+int do_kexec_op_internal(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) uarg,
                            int compat)
 {
     unsigned long flags;
@@ -886,13 +886,13 @@ int do_kexec_op_internal(unsigned long op, XEN_GUEST_HANDLE(void) uarg,
     return ret;
 }
 
-long do_kexec_op(unsigned long op, XEN_GUEST_HANDLE(void) uarg)
+long do_kexec_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) uarg)
 {
     return do_kexec_op_internal(op, uarg, 0);
 }
 
 #ifdef CONFIG_COMPAT
-int compat_kexec_op(unsigned long op, XEN_GUEST_HANDLE(void) uarg)
+int compat_kexec_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) uarg)
 {
     return do_kexec_op_internal(op, uarg, 1);
 }
diff --git a/xen/common/memory.c b/xen/common/memory.c
index 5d64cb6..a126188 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -277,7 +277,7 @@ static void decrease_reservation(struct memop_args *a)
     a->nr_done = i;
 }
 
-static long memory_exchange(XEN_GUEST_HANDLE(xen_memory_exchange_t) arg)
+static long memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t) arg)
 {
     struct xen_memory_exchange exch;
     PAGE_LIST_HEAD(in_chunk_list);
@@ -530,7 +530,7 @@ static long memory_exchange(XEN_GUEST_HANDLE(xen_memory_exchange_t) arg)
     return rc;
 }
 
-long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE(void) arg)
+long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct domain *d;
     int rc, op;
diff --git a/xen/common/multicall.c b/xen/common/multicall.c
index 6c1a9d7..5de5f8d 100644
--- a/xen/common/multicall.c
+++ b/xen/common/multicall.c
@@ -21,7 +21,7 @@ typedef long ret_t;
 
 ret_t
 do_multicall(
-    XEN_GUEST_HANDLE(multicall_entry_t) call_list, unsigned int nr_calls)
+    XEN_GUEST_HANDLE_PARAM(multicall_entry_t) call_list, unsigned int nr_calls)
 {
     struct mc_state *mcs = &current->mc_state;
     unsigned int     i;
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index 0854f55..c26eac4 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -836,7 +836,7 @@ typedef long ret_t;
 
 #endif /* !COMPAT */
 
-ret_t do_sched_op(int cmd, XEN_GUEST_HANDLE(void) arg)
+ret_t do_sched_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     ret_t ret = 0;
 
diff --git a/xen/common/sysctl.c b/xen/common/sysctl.c
index ea68278..47142f4 100644
--- a/xen/common/sysctl.c
+++ b/xen/common/sysctl.c
@@ -27,7 +27,7 @@
 #include <xsm/xsm.h>
 #include <xen/pmstat.h>
 
-long do_sysctl(XEN_GUEST_HANDLE(xen_sysctl_t) u_sysctl)
+long do_sysctl(XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
 {
     long ret = 0;
     struct xen_sysctl curop, *op = &curop;
diff --git a/xen/common/xenoprof.c b/xen/common/xenoprof.c
index e571fea..c001b38 100644
--- a/xen/common/xenoprof.c
+++ b/xen/common/xenoprof.c
@@ -404,7 +404,7 @@ static int add_active_list(domid_t domid)
     return 0;
 }
 
-static int add_passive_list(XEN_GUEST_HANDLE(void) arg)
+static int add_passive_list(XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct xenoprof_passive passive;
     struct domain *d;
@@ -585,7 +585,7 @@ void xenoprof_log_event(struct vcpu *vcpu, const struct cpu_user_regs *regs,
 
 
 
-static int xenoprof_op_init(XEN_GUEST_HANDLE(void) arg)
+static int xenoprof_op_init(XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct domain *d = current->domain;
     struct xenoprof_init xenoprof_init;
@@ -609,7 +609,7 @@ static int xenoprof_op_init(XEN_GUEST_HANDLE(void) arg)
 
 #endif /* !COMPAT */
 
-static int xenoprof_op_get_buffer(XEN_GUEST_HANDLE(void) arg)
+static int xenoprof_op_get_buffer(XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct xenoprof_get_buffer xenoprof_get_buffer;
     struct domain *d = current->domain;
@@ -660,7 +660,7 @@ static int xenoprof_op_get_buffer(XEN_GUEST_HANDLE(void) arg)
                       || (op == XENOPROF_disable_virq)  \
                       || (op == XENOPROF_get_buffer))
  
-int do_xenoprof_op(int op, XEN_GUEST_HANDLE(void) arg)
+int do_xenoprof_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     int ret = 0;
     
diff --git a/xen/drivers/acpi/pmstat.c b/xen/drivers/acpi/pmstat.c
index 8788f01..2be1764 100644
--- a/xen/drivers/acpi/pmstat.c
+++ b/xen/drivers/acpi/pmstat.c
@@ -513,7 +513,7 @@ int do_pm_op(struct xen_sysctl_pm_op *op)
     return ret;
 }
 
-int acpi_set_pdc_bits(u32 acpi_id, XEN_GUEST_HANDLE(uint32) pdc)
+int acpi_set_pdc_bits(u32 acpi_id, XEN_GUEST_HANDLE_PARAM(uint32) pdc)
 {
     u32 bits[3];
     int ret;
diff --git a/xen/drivers/char/console.c b/xen/drivers/char/console.c
index e10bed5..b0f2334 100644
--- a/xen/drivers/char/console.c
+++ b/xen/drivers/char/console.c
@@ -182,7 +182,7 @@ static void putchar_console_ring(int c)
 
 long read_console_ring(struct xen_sysctl_readconsole *op)
 {
-    XEN_GUEST_HANDLE(char) str;
+    XEN_GUEST_HANDLE_PARAM(char) str;
     uint32_t idx, len, max, sofar, c;
 
     str   = guest_handle_cast(op->buffer, char),
@@ -320,7 +320,7 @@ static void notify_dom0_con_ring(unsigned long unused)
 static DECLARE_SOFTIRQ_TASKLET(notify_dom0_con_ring_tasklet,
                                notify_dom0_con_ring, 0);
 
-static long guest_console_write(XEN_GUEST_HANDLE(char) buffer, int count)
+static long guest_console_write(XEN_GUEST_HANDLE_PARAM(char) buffer, int count)
 {
     char kbuf[128], *kptr;
     int kcount;
@@ -358,7 +358,7 @@ static long guest_console_write(XEN_GUEST_HANDLE(char) buffer, int count)
     return 0;
 }
 
-long do_console_io(int cmd, int count, XEN_GUEST_HANDLE(char) buffer)
+long do_console_io(int cmd, int count, XEN_GUEST_HANDLE_PARAM(char) buffer)
 {
     long rc;
     unsigned int idx, len;
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 64f5fd1..396461f 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -518,7 +518,7 @@ void iommu_crash_shutdown(void)
 
 int iommu_do_domctl(
     struct xen_domctl *domctl,
-    XEN_GUEST_HANDLE(xen_domctl_t) u_domctl)
+    XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
     struct domain *d;
     u16 seg;
diff --git a/xen/include/asm-arm/guest_access.h b/xen/include/asm-arm/guest_access.h
index 7a955cb..bf5005b 100644
--- a/xen/include/asm-arm/guest_access.h
+++ b/xen/include/asm-arm/guest_access.h
@@ -30,7 +30,7 @@ unsigned long raw_clear_guest(void *to, unsigned len);
 /* Cast a guest handle to the specified type of handle. */
 #define guest_handle_cast(hnd, type) ({         \
     type *_x = (hnd).p;                         \
-    (XEN_GUEST_HANDLE(type)) { {_x } };            \
+    (XEN_GUEST_HANDLE_PARAM(type)) { _x };            \
 })
 
 #define guest_handle_from_ptr(ptr, type)        \
diff --git a/xen/include/asm-arm/hypercall.h b/xen/include/asm-arm/hypercall.h
index 454f02e..090e620 100644
--- a/xen/include/asm-arm/hypercall.h
+++ b/xen/include/asm-arm/hypercall.h
@@ -2,7 +2,7 @@
 #define __ASM_ARM_HYPERCALL_H__
 
 #include <public/domctl.h> /* for arch_do_domctl */
-int do_physdev_op(int cmd, XEN_GUEST_HANDLE(void) arg);
+int do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg);
 
 #endif /* __ASM_ARM_HYPERCALL_H__ */
 /*
diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
index b37bd35..8bf45ba 100644
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -267,7 +267,7 @@ static inline int relinquish_shared_pages(struct domain *d)
 
 
 /* Arch-specific portion of memory_op hypercall. */
-long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg);
+long arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg);
 
 int steal_page(
     struct domain *d, struct page_info *page, unsigned int memflags);
diff --git a/xen/include/asm-x86/hap.h b/xen/include/asm-x86/hap.h
index a2532a4..916a35b 100644
--- a/xen/include/asm-x86/hap.h
+++ b/xen/include/asm-x86/hap.h
@@ -51,7 +51,7 @@ hap_unmap_domain_page(void *p)
 /************************************************/
 void  hap_domain_init(struct domain *d);
 int   hap_domctl(struct domain *d, xen_domctl_shadow_op_t *sc,
-                 XEN_GUEST_HANDLE(void) u_domctl);
+                 XEN_GUEST_HANDLE_PARAM(void) u_domctl);
 int   hap_enable(struct domain *d, u32 mode);
 void  hap_final_teardown(struct domain *d);
 void  hap_teardown(struct domain *d);
diff --git a/xen/include/asm-x86/hypercall.h b/xen/include/asm-x86/hypercall.h
index 9e136c3..55b5ca2 100644
--- a/xen/include/asm-x86/hypercall.h
+++ b/xen/include/asm-x86/hypercall.h
@@ -18,22 +18,22 @@
 
 extern long
 do_event_channel_op_compat(
-    XEN_GUEST_HANDLE(evtchn_op_t) uop);
+    XEN_GUEST_HANDLE_PARAM(evtchn_op_t) uop);
 
 extern long
 do_set_trap_table(
-    XEN_GUEST_HANDLE(const_trap_info_t) traps);
+    XEN_GUEST_HANDLE_PARAM(const_trap_info_t) traps);
 
 extern int
 do_mmu_update(
-    XEN_GUEST_HANDLE(mmu_update_t) ureqs,
+    XEN_GUEST_HANDLE_PARAM(mmu_update_t) ureqs,
     unsigned int count,
-    XEN_GUEST_HANDLE(uint) pdone,
+    XEN_GUEST_HANDLE_PARAM(uint) pdone,
     unsigned int foreigndom);
 
 extern long
 do_set_gdt(
-    XEN_GUEST_HANDLE(ulong) frame_list,
+    XEN_GUEST_HANDLE_PARAM(ulong) frame_list,
     unsigned int entries);
 
 extern long
@@ -60,7 +60,7 @@ do_update_descriptor(
     u64 desc);
 
 extern long
-do_mca(XEN_GUEST_HANDLE(xen_mc_t) u_xen_mc);
+do_mca(XEN_GUEST_HANDLE_PARAM(xen_mc_t) u_xen_mc);
 
 extern int
 do_update_va_mapping(
@@ -70,7 +70,7 @@ do_update_va_mapping(
 
 extern long
 do_physdev_op(
-    int cmd, XEN_GUEST_HANDLE(void) arg);
+    int cmd, XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern int
 do_update_va_mapping_otherdomain(
@@ -81,9 +81,9 @@ do_update_va_mapping_otherdomain(
 
 extern int
 do_mmuext_op(
-    XEN_GUEST_HANDLE(mmuext_op_t) uops,
+    XEN_GUEST_HANDLE_PARAM(mmuext_op_t) uops,
     unsigned int count,
-    XEN_GUEST_HANDLE(uint) pdone,
+    XEN_GUEST_HANDLE_PARAM(uint) pdone,
     unsigned int foreigndom);
 
 extern unsigned long
@@ -92,7 +92,7 @@ do_iret(
 
 extern int
 do_kexec(
-    unsigned long op, unsigned arg1, XEN_GUEST_HANDLE(void) uarg);
+    unsigned long op, unsigned arg1, XEN_GUEST_HANDLE_PARAM(void) uarg);
 
 #ifdef __x86_64__
 
@@ -110,11 +110,11 @@ do_set_segment_base(
 extern int
 compat_physdev_op(
     int cmd,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern int
 arch_compat_vcpu_op(
-    int cmd, struct vcpu *v, XEN_GUEST_HANDLE(void) arg);
+    int cmd, struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg);
 
 #else
 
diff --git a/xen/include/asm-x86/mem_event.h b/xen/include/asm-x86/mem_event.h
index 23d71c1..e17f36b 100644
--- a/xen/include/asm-x86/mem_event.h
+++ b/xen/include/asm-x86/mem_event.h
@@ -65,7 +65,7 @@ int mem_event_get_response(struct domain *d, struct mem_event_domain *med,
 struct domain *get_mem_event_op_target(uint32_t domain, int *rc);
 int do_mem_event_op(int op, uint32_t domain, void *arg);
 int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
-                     XEN_GUEST_HANDLE(void) u_domctl);
+                     XEN_GUEST_HANDLE_PARAM(void) u_domctl);
 
 #endif /* __MEM_EVENT_H__ */
 
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index 4cba276..6373b3b 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -604,10 +604,10 @@ void *do_page_walk(struct vcpu *v, unsigned long addr);
 int __sync_local_execstate(void);
 
 /* Arch-specific portion of memory_op hypercall. */
-long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg);
-long subarch_memory_op(int op, XEN_GUEST_HANDLE(void) arg);
-int compat_arch_memory_op(int op, XEN_GUEST_HANDLE(void));
-int compat_subarch_memory_op(int op, XEN_GUEST_HANDLE(void));
+long arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg);
+long subarch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg);
+int compat_arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void));
+int compat_subarch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void));
 
 int steal_page(
     struct domain *d, struct page_info *page, unsigned int memflags);
diff --git a/xen/include/asm-x86/paging.h b/xen/include/asm-x86/paging.h
index c432a97..1cd0e3f 100644
--- a/xen/include/asm-x86/paging.h
+++ b/xen/include/asm-x86/paging.h
@@ -215,7 +215,7 @@ int paging_domain_init(struct domain *d, unsigned int domcr_flags);
  * and disable ephemeral shadow modes (test mode and log-dirty mode) and
  * manipulate the log-dirty bitmap. */
 int paging_domctl(struct domain *d, xen_domctl_shadow_op_t *sc,
-                  XEN_GUEST_HANDLE(void) u_domctl);
+                  XEN_GUEST_HANDLE_PARAM(void) u_domctl);
 
 /* Call when destroying a domain */
 void paging_teardown(struct domain *d);
diff --git a/xen/include/asm-x86/processor.h b/xen/include/asm-x86/processor.h
index 7164a50..efdbddd 100644
--- a/xen/include/asm-x86/processor.h
+++ b/xen/include/asm-x86/processor.h
@@ -598,7 +598,7 @@ int rdmsr_hypervisor_regs(uint32_t idx, uint64_t *val);
 int wrmsr_hypervisor_regs(uint32_t idx, uint64_t val);
 
 void microcode_set_module(unsigned int);
-int microcode_update(XEN_GUEST_HANDLE(const_void), unsigned long len);
+int microcode_update(XEN_GUEST_HANDLE_PARAM(const_void), unsigned long len);
 int microcode_resume_cpu(int cpu);
 
 unsigned long *get_x86_gpr(struct cpu_user_regs *regs, unsigned int modrm_reg);
diff --git a/xen/include/asm-x86/shadow.h b/xen/include/asm-x86/shadow.h
index 88a8cd2..2eb6efc 100644
--- a/xen/include/asm-x86/shadow.h
+++ b/xen/include/asm-x86/shadow.h
@@ -73,7 +73,7 @@ int shadow_track_dirty_vram(struct domain *d,
  * manipulate the log-dirty bitmap. */
 int shadow_domctl(struct domain *d, 
                   xen_domctl_shadow_op_t *sc,
-                  XEN_GUEST_HANDLE(void) u_domctl);
+                  XEN_GUEST_HANDLE_PARAM(void) u_domctl);
 
 /* Call when destroying a domain */
 void shadow_teardown(struct domain *d);
diff --git a/xen/include/asm-x86/xenoprof.h b/xen/include/asm-x86/xenoprof.h
index c03f8c8..3f5ea15 100644
--- a/xen/include/asm-x86/xenoprof.h
+++ b/xen/include/asm-x86/xenoprof.h
@@ -40,9 +40,9 @@ int xenoprof_arch_init(int *num_events, char *cpu_type);
 #define xenoprof_arch_disable_virq()            nmi_disable_virq()
 #define xenoprof_arch_release_counters()        nmi_release_counters()
 
-int xenoprof_arch_counter(XEN_GUEST_HANDLE(void) arg);
-int compat_oprof_arch_counter(XEN_GUEST_HANDLE(void) arg);
-int xenoprof_arch_ibs_counter(XEN_GUEST_HANDLE(void) arg);
+int xenoprof_arch_counter(XEN_GUEST_HANDLE_PARAM(void) arg);
+int compat_oprof_arch_counter(XEN_GUEST_HANDLE_PARAM(void) arg);
+int xenoprof_arch_ibs_counter(XEN_GUEST_HANDLE_PARAM(void) arg);
 
 struct vcpu;
 struct cpu_user_regs;
diff --git a/xen/include/xen/acpi.h b/xen/include/xen/acpi.h
index d7e2f94..8f3cdca 100644
--- a/xen/include/xen/acpi.h
+++ b/xen/include/xen/acpi.h
@@ -145,8 +145,8 @@ static inline unsigned int acpi_get_cstate_limit(void) { return 0; }
 static inline void acpi_set_cstate_limit(unsigned int new_limit) { return; }
 #endif
 
-#ifdef XEN_GUEST_HANDLE
-int acpi_set_pdc_bits(u32 acpi_id, XEN_GUEST_HANDLE(uint32));
+#ifdef XEN_GUEST_HANDLE_PARAM
+int acpi_set_pdc_bits(u32 acpi_id, XEN_GUEST_HANDLE_PARAM(uint32));
 #endif
 int arch_acpi_set_pdc_bits(u32 acpi_id, u32 *, u32 mask);
 
diff --git a/xen/include/xen/hypercall.h b/xen/include/xen/hypercall.h
index 73b1598..e335037 100644
--- a/xen/include/xen/hypercall.h
+++ b/xen/include/xen/hypercall.h
@@ -29,29 +29,29 @@ do_sched_op_compat(
 extern long
 do_sched_op(
     int cmd,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern long
 do_domctl(
-    XEN_GUEST_HANDLE(xen_domctl_t) u_domctl);
+    XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl);
 
 extern long
 arch_do_domctl(
     struct xen_domctl *domctl,
-    XEN_GUEST_HANDLE(xen_domctl_t) u_domctl);
+    XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl);
 
 extern long
 do_sysctl(
-    XEN_GUEST_HANDLE(xen_sysctl_t) u_sysctl);
+    XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl);
 
 extern long
 arch_do_sysctl(
     struct xen_sysctl *sysctl,
-    XEN_GUEST_HANDLE(xen_sysctl_t) u_sysctl);
+    XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl);
 
 extern long
 do_platform_op(
-    XEN_GUEST_HANDLE(xen_platform_op_t) u_xenpf_op);
+    XEN_GUEST_HANDLE_PARAM(xen_platform_op_t) u_xenpf_op);
 
 /*
  * To allow safe resume of do_memory_op() after preemption, we need to know
@@ -64,11 +64,11 @@ do_platform_op(
 extern long
 do_memory_op(
     unsigned long cmd,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern long
 do_multicall(
-    XEN_GUEST_HANDLE(multicall_entry_t) call_list,
+    XEN_GUEST_HANDLE_PARAM(multicall_entry_t) call_list,
     unsigned int nr_calls);
 
 extern long
@@ -77,23 +77,23 @@ do_set_timer_op(
 
 extern long
 do_event_channel_op(
-    int cmd, XEN_GUEST_HANDLE(void) arg);
+    int cmd, XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern long
 do_xen_version(
     int cmd,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern long
 do_console_io(
     int cmd,
     int count,
-    XEN_GUEST_HANDLE(char) buffer);
+    XEN_GUEST_HANDLE_PARAM(char) buffer);
 
 extern long
 do_grant_table_op(
     unsigned int cmd,
-    XEN_GUEST_HANDLE(void) uop,
+    XEN_GUEST_HANDLE_PARAM(void) uop,
     unsigned int count);
 
 extern long
@@ -105,72 +105,72 @@ extern long
 do_vcpu_op(
     int cmd,
     int vcpuid,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 struct vcpu;
 extern long
 arch_do_vcpu_op(int cmd,
     struct vcpu *v,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern long
 do_nmi_op(
     unsigned int cmd,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern long
 do_hvm_op(
     unsigned long op,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern long
 do_kexec_op(
     unsigned long op,
     int arg1,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern long
 do_xsm_op(
-    XEN_GUEST_HANDLE(xsm_op_t) u_xsm_op);
+    XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_xsm_op);
 
 extern long
 do_tmem_op(
-    XEN_GUEST_HANDLE(tmem_op_t) uops);
+    XEN_GUEST_HANDLE_PARAM(tmem_op_t) uops);
 
 extern int
-do_xenoprof_op(int op, XEN_GUEST_HANDLE(void) arg);
+do_xenoprof_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg);
 
 #ifdef CONFIG_COMPAT
 
 extern int
 compat_memory_op(
     unsigned int cmd,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern int
 compat_grant_table_op(
     unsigned int cmd,
-    XEN_GUEST_HANDLE(void) uop,
+    XEN_GUEST_HANDLE_PARAM(void) uop,
     unsigned int count);
 
 extern int
 compat_vcpu_op(
     int cmd,
     int vcpuid,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern int
-compat_xenoprof_op(int op, XEN_GUEST_HANDLE(void) arg);
+compat_xenoprof_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern int
 compat_xen_version(
     int cmd,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern int
 compat_sched_op(
     int cmd,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern int
 compat_set_timer_op(
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 6f7fbf7..bd19e23 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -155,7 +155,7 @@ void iommu_crash_shutdown(void);
 void iommu_set_dom0_mapping(struct domain *d);
 void iommu_share_p2m_table(struct domain *d);
 
-int iommu_do_domctl(struct xen_domctl *, XEN_GUEST_HANDLE(xen_domctl_t));
+int iommu_do_domctl(struct xen_domctl *, XEN_GUEST_HANDLE_PARAM(xen_domctl_t));
 
 void iommu_iotlb_flush(struct domain *d, unsigned long gfn, unsigned int page_count);
 void iommu_iotlb_flush_all(struct domain *d);
diff --git a/xen/include/xen/tmem_xen.h b/xen/include/xen/tmem_xen.h
index 4a35760..2e7199a 100644
--- a/xen/include/xen/tmem_xen.h
+++ b/xen/include/xen/tmem_xen.h
@@ -448,7 +448,7 @@ static inline void tmh_tze_copy_from_pfp(void *tva, pfp_t *pfp, pagesize_t len)
 typedef XEN_GUEST_HANDLE(void) cli_mfn_t;
 typedef XEN_GUEST_HANDLE(char) cli_va_t;
 */
-typedef XEN_GUEST_HANDLE(tmem_op_t) tmem_cli_op_t;
+typedef XEN_GUEST_HANDLE_PARAM(tmem_op_t) tmem_cli_op_t;
 
 static inline int tmh_get_tmemop_from_client(tmem_op_t *op, tmem_cli_op_t uops)
 {
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index bef79df..3e4a47f 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -139,7 +139,7 @@ struct xsm_operations {
     int (*cpupool_op)(void);
     int (*sched_op)(void);
 
-    long (*__do_xsm_op) (XEN_GUEST_HANDLE(xsm_op_t) op);
+    long (*__do_xsm_op) (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op);
 
 #ifdef CONFIG_X86
     int (*shadow_control) (struct domain *d, uint32_t op);
@@ -585,7 +585,7 @@ static inline int xsm_sched_op(void)
     return xsm_call(sched_op());
 }
 
-static inline long __do_xsm_op (XEN_GUEST_HANDLE(xsm_op_t) op)
+static inline long __do_xsm_op (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op)
 {
 #ifdef XSM_ENABLE
     return xsm_ops->__do_xsm_op(op);
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index 7027ee7..5ef6529 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -365,7 +365,7 @@ static int dummy_sched_op (void)
     return 0;
 }
 
-static long dummy___do_xsm_op(XEN_GUEST_HANDLE(xsm_op_t) op)
+static long dummy___do_xsm_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) op)
 {
     return -ENOSYS;
 }
diff --git a/xen/xsm/flask/flask_op.c b/xen/xsm/flask/flask_op.c
index bd4db37..23e7d34 100644
--- a/xen/xsm/flask/flask_op.c
+++ b/xen/xsm/flask/flask_op.c
@@ -71,7 +71,7 @@ static int domain_has_security(struct domain *d, u32 perms)
                         perms, NULL);
 }
 
-static int flask_copyin_string(XEN_GUEST_HANDLE(char) u_buf, char **buf, uint32_t size)
+static int flask_copyin_string(XEN_GUEST_HANDLE_PARAM(char) u_buf, char **buf, uint32_t size)
 {
     char *tmp = xmalloc_bytes(size + 1);
     if ( !tmp )
@@ -573,7 +573,7 @@ static int flask_get_peer_sid(struct xen_flask_peersid *arg)
     return rv;
 }
 
-long do_flask_op(XEN_GUEST_HANDLE(xsm_op_t) u_flask_op)
+long do_flask_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_flask_op)
 {
     xen_flask_op_t op;
     int rv;
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 23b84f3..0fc299c 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -1553,7 +1553,7 @@ static int flask_vcpuextstate (struct domain *d, uint32_t cmd)
 }
 #endif
 
-long do_flask_op(XEN_GUEST_HANDLE(xsm_op_t) u_flask_op);
+long do_flask_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_flask_op);
 
 static struct xsm_operations flask_ops = {
     .security_domaininfo = flask_security_domaininfo,
diff --git a/xen/xsm/xsm_core.c b/xen/xsm/xsm_core.c
index 96c8669..46287cb 100644
--- a/xen/xsm/xsm_core.c
+++ b/xen/xsm/xsm_core.c
@@ -111,7 +111,7 @@ int unregister_xsm(struct xsm_operations *ops)
 
 #endif
 
-long do_xsm_op (XEN_GUEST_HANDLE(xsm_op_t) op)
+long do_xsm_op (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op)
 {
     return __do_xsm_op(op);
 }
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 14:12:30 2012
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: xen-devel@lists.xensource.com
Date: Mon, 6 Aug 2012 15:12:05 +0100
Message-ID: <1344262325-26598-5-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: Tim.Deegan@citrix.com, Ian.Campbell@citrix.com,
	stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH 5/5] xen: replace XEN_GUEST_HANDLE with
	XEN_GUEST_HANDLE_PARAM when appropriate
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Replace XEN_GUEST_HANDLE with XEN_GUEST_HANDLE_PARAM where the handle is
used as a hypercall argument.

Note: these changes make no difference on x86 and ia64.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/arch/arm/domain.c              |    2 +-
 xen/arch/arm/domctl.c              |    2 +-
 xen/arch/arm/hvm.c                 |    2 +-
 xen/arch/arm/mm.c                  |    2 +-
 xen/arch/arm/physdev.c             |    2 +-
 xen/arch/arm/sysctl.c              |    2 +-
 xen/arch/x86/cpu/mcheck/mce.c      |    2 +-
 xen/arch/x86/domain.c              |    2 +-
 xen/arch/x86/domctl.c              |    2 +-
 xen/arch/x86/efi/runtime.c         |    2 +-
 xen/arch/x86/hvm/hvm.c             |   26 +++++++++---------
 xen/arch/x86/microcode.c           |    2 +-
 xen/arch/x86/mm.c                  |   14 +++++-----
 xen/arch/x86/mm/hap/hap.c          |    2 +-
 xen/arch/x86/mm/mem_event.c        |    2 +-
 xen/arch/x86/mm/paging.c           |    2 +-
 xen/arch/x86/mm/shadow/common.c    |    2 +-
 xen/arch/x86/physdev.c             |    2 +-
 xen/arch/x86/platform_hypercall.c  |    2 +-
 xen/arch/x86/sysctl.c              |    2 +-
 xen/arch/x86/traps.c               |    2 +-
 xen/arch/x86/x86_32/mm.c           |    2 +-
 xen/arch/x86/x86_32/traps.c        |    2 +-
 xen/arch/x86/x86_64/compat/mm.c    |    8 +++---
 xen/arch/x86/x86_64/domain.c       |    2 +-
 xen/arch/x86/x86_64/mm.c           |    2 +-
 xen/arch/x86/x86_64/traps.c        |    2 +-
 xen/common/compat/domain.c         |    2 +-
 xen/common/compat/grant_table.c    |    2 +-
 xen/common/compat/memory.c         |    2 +-
 xen/common/domain.c                |    2 +-
 xen/common/domctl.c                |    2 +-
 xen/common/event_channel.c         |    2 +-
 xen/common/grant_table.c           |   36 ++++++++++++------------
 xen/common/kernel.c                |    4 +-
 xen/common/kexec.c                 |   16 +++++-----
 xen/common/memory.c                |    4 +-
 xen/common/multicall.c             |    2 +-
 xen/common/schedule.c              |    2 +-
 xen/common/sysctl.c                |    2 +-
 xen/common/xenoprof.c              |    8 +++---
 xen/drivers/acpi/pmstat.c          |    2 +-
 xen/drivers/char/console.c         |    6 ++--
 xen/drivers/passthrough/iommu.c    |    2 +-
 xen/include/asm-arm/guest_access.h |    2 +-
 xen/include/asm-arm/hypercall.h    |    2 +-
 xen/include/asm-arm/mm.h           |    2 +-
 xen/include/asm-x86/hap.h          |    2 +-
 xen/include/asm-x86/hypercall.h    |   24 ++++++++--------
 xen/include/asm-x86/mem_event.h    |    2 +-
 xen/include/asm-x86/mm.h           |    8 +++---
 xen/include/asm-x86/paging.h       |    2 +-
 xen/include/asm-x86/processor.h    |    2 +-
 xen/include/asm-x86/shadow.h       |    2 +-
 xen/include/asm-x86/xenoprof.h     |    6 ++--
 xen/include/xen/acpi.h             |    4 +-
 xen/include/xen/hypercall.h        |   52 ++++++++++++++++++------------------
 xen/include/xen/iommu.h            |    2 +-
 xen/include/xen/tmem_xen.h         |    2 +-
 xen/include/xsm/xsm.h              |    4 +-
 xen/xsm/dummy.c                    |    2 +-
 xen/xsm/flask/flask_op.c           |    4 +-
 xen/xsm/flask/hooks.c              |    2 +-
 xen/xsm/xsm_core.c                 |    2 +-
 64 files changed, 160 insertions(+), 160 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index ee58d68..07b50e2 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -515,7 +515,7 @@ void arch_dump_domain_info(struct domain *d)
 {
 }
 
-long arch_do_vcpu_op(int cmd, struct vcpu *v, XEN_GUEST_HANDLE(void) arg)
+long arch_do_vcpu_op(int cmd, struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     return -ENOSYS;
 }
diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
index 1a5f79f..cf16791 100644
--- a/xen/arch/arm/domctl.c
+++ b/xen/arch/arm/domctl.c
@@ -11,7 +11,7 @@
 #include <public/domctl.h>
 
 long arch_do_domctl(struct xen_domctl *domctl,
-                    XEN_GUEST_HANDLE(xen_domctl_t) u_domctl)
+                    XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
     return -ENOSYS;
 }
diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
index c11378d..40f519e 100644
--- a/xen/arch/arm/hvm.c
+++ b/xen/arch/arm/hvm.c
@@ -11,7 +11,7 @@
 
 #include <asm/hypercall.h>
 
-long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE(void) arg)
+long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
 
 {
     long rc = 0;
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 08bc55b..c9cc59f 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -541,7 +541,7 @@ static int xenmem_add_to_physmap(struct domain *d,
     return xenmem_add_to_physmap_once(d, xatp);
 }
 
-long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
+long arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     int rc;
 
diff --git a/xen/arch/arm/physdev.c b/xen/arch/arm/physdev.c
index bcf4337..0801e8c 100644
--- a/xen/arch/arm/physdev.c
+++ b/xen/arch/arm/physdev.c
@@ -11,7 +11,7 @@
 #include <asm/hypercall.h>
 
 
-int do_physdev_op(int cmd, XEN_GUEST_HANDLE(void) arg)
+int do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     printk("%s %d cmd=%d: not implemented yet\n", __func__, __LINE__, cmd);
     return -ENOSYS;
diff --git a/xen/arch/arm/sysctl.c b/xen/arch/arm/sysctl.c
index e8e1c0d..a286abe 100644
--- a/xen/arch/arm/sysctl.c
+++ b/xen/arch/arm/sysctl.c
@@ -13,7 +13,7 @@
 #include <public/sysctl.h>
 
 long arch_do_sysctl(struct xen_sysctl *sysctl,
-                    XEN_GUEST_HANDLE(xen_sysctl_t) u_sysctl)
+                    XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
 {
     return -ENOSYS;
 }
diff --git a/xen/arch/x86/cpu/mcheck/mce.c b/xen/arch/x86/cpu/mcheck/mce.c
index ed76131..4b2e0c7 100644
--- a/xen/arch/x86/cpu/mcheck/mce.c
+++ b/xen/arch/x86/cpu/mcheck/mce.c
@@ -1359,7 +1359,7 @@ CHECK_mcinfo_recovery;
 #endif
 
 /* Machine Check Architecture Hypercall */
-long do_mca(XEN_GUEST_HANDLE(xen_mc_t) u_xen_mc)
+long do_mca(XEN_GUEST_HANDLE_PARAM(xen_mc_t) u_xen_mc)
 {
     long ret = 0;
     struct xen_mc curop, *op = &curop;
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 5bba4b9..13ff776 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1138,7 +1138,7 @@ map_vcpu_info(struct vcpu *v, unsigned long gfn, unsigned offset)
 
 long
 arch_do_vcpu_op(
-    int cmd, struct vcpu *v, XEN_GUEST_HANDLE(void) arg)
+    int cmd, struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     long rc = 0;
 
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 135ea6e..663bfe4 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -48,7 +48,7 @@ static int gdbsx_guest_mem_io(
 
 long arch_do_domctl(
     struct xen_domctl *domctl,
-    XEN_GUEST_HANDLE(xen_domctl_t) u_domctl)
+    XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
     long ret = 0;
 
diff --git a/xen/arch/x86/efi/runtime.c b/xen/arch/x86/efi/runtime.c
index 1dbe2db..b2ff495 100644
--- a/xen/arch/x86/efi/runtime.c
+++ b/xen/arch/x86/efi/runtime.c
@@ -184,7 +184,7 @@ int efi_get_info(uint32_t idx, union xenpf_efi_info *info)
     return 0;
 }
 
-static long gwstrlen(XEN_GUEST_HANDLE(CHAR16) str)
+static long gwstrlen(XEN_GUEST_HANDLE_PARAM(CHAR16) str)
 {
     unsigned long len;
 
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 22c136b..bf97aea 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -3041,14 +3041,14 @@ static int grant_table_op_is_allowed(unsigned int cmd)
 }
 
 static long hvm_grant_table_op(
-    unsigned int cmd, XEN_GUEST_HANDLE(void) uop, unsigned int count)
+    unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) uop, unsigned int count)
 {
     if ( !grant_table_op_is_allowed(cmd) )
         return -ENOSYS; /* all other commands need auditing */
     return do_grant_table_op(cmd, uop, count);
 }
 
-static long hvm_memory_op(int cmd, XEN_GUEST_HANDLE(void) arg)
+static long hvm_memory_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     long rc;
 
@@ -3066,7 +3066,7 @@ static long hvm_memory_op(int cmd, XEN_GUEST_HANDLE(void) arg)
     return do_memory_op(cmd, arg);
 }
 
-static long hvm_physdev_op(int cmd, XEN_GUEST_HANDLE(void) arg)
+static long hvm_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     switch ( cmd )
     {
@@ -3082,7 +3082,7 @@ static long hvm_physdev_op(int cmd, XEN_GUEST_HANDLE(void) arg)
 }
 
 static long hvm_vcpu_op(
-    int cmd, int vcpuid, XEN_GUEST_HANDLE(void) arg)
+    int cmd, int vcpuid, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     long rc;
 
@@ -3131,7 +3131,7 @@ static hvm_hypercall_t *hvm_hypercall32_table[NR_hypercalls] = {
 #else /* defined(__x86_64__) */
 
 static long hvm_grant_table_op_compat32(unsigned int cmd,
-                                        XEN_GUEST_HANDLE(void) uop,
+                                        XEN_GUEST_HANDLE_PARAM(void) uop,
                                         unsigned int count)
 {
     if ( !grant_table_op_is_allowed(cmd) )
@@ -3139,7 +3139,7 @@ static long hvm_grant_table_op_compat32(unsigned int cmd,
     return compat_grant_table_op(cmd, uop, count);
 }
 
-static long hvm_memory_op_compat32(int cmd, XEN_GUEST_HANDLE(void) arg)
+static long hvm_memory_op_compat32(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     int rc;
 
@@ -3158,7 +3158,7 @@ static long hvm_memory_op_compat32(int cmd, XEN_GUEST_HANDLE(void) arg)
 }
 
 static long hvm_vcpu_op_compat32(
-    int cmd, int vcpuid, XEN_GUEST_HANDLE(void) arg)
+    int cmd, int vcpuid, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     long rc;
 
@@ -3182,7 +3182,7 @@ static long hvm_vcpu_op_compat32(
 }
 
 static long hvm_physdev_op_compat32(
-    int cmd, XEN_GUEST_HANDLE(void) arg)
+    int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     switch ( cmd )
     {
@@ -3354,7 +3354,7 @@ void hvm_hypercall_page_initialise(struct domain *d,
 }
 
 static int hvmop_set_pci_intx_level(
-    XEN_GUEST_HANDLE(xen_hvm_set_pci_intx_level_t) uop)
+    XEN_GUEST_HANDLE_PARAM(xen_hvm_set_pci_intx_level_t) uop)
 {
     struct xen_hvm_set_pci_intx_level op;
     struct domain *d;
@@ -3519,7 +3519,7 @@ static void hvm_s3_resume(struct domain *d)
 }
 
 static int hvmop_set_isa_irq_level(
-    XEN_GUEST_HANDLE(xen_hvm_set_isa_irq_level_t) uop)
+    XEN_GUEST_HANDLE_PARAM(xen_hvm_set_isa_irq_level_t) uop)
 {
     struct xen_hvm_set_isa_irq_level op;
     struct domain *d;
@@ -3563,7 +3563,7 @@ static int hvmop_set_isa_irq_level(
 }
 
 static int hvmop_set_pci_link_route(
-    XEN_GUEST_HANDLE(xen_hvm_set_pci_link_route_t) uop)
+    XEN_GUEST_HANDLE_PARAM(xen_hvm_set_pci_link_route_t) uop)
 {
     struct xen_hvm_set_pci_link_route op;
     struct domain *d;
@@ -3596,7 +3596,7 @@ static int hvmop_set_pci_link_route(
 }
 
 static int hvmop_inject_msi(
-    XEN_GUEST_HANDLE(xen_hvm_inject_msi_t) uop)
+    XEN_GUEST_HANDLE_PARAM(xen_hvm_inject_msi_t) uop)
 {
     struct xen_hvm_inject_msi op;
     struct domain *d;
@@ -3680,7 +3680,7 @@ static int hvm_replace_event_channel(struct vcpu *v, domid_t remote_domid,
     return 0;
 }
 
-long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE(void) arg)
+long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
 
 {
     struct domain *curr_d = current->domain;
diff --git a/xen/arch/x86/microcode.c b/xen/arch/x86/microcode.c
index bdda3f5..1477481 100644
--- a/xen/arch/x86/microcode.c
+++ b/xen/arch/x86/microcode.c
@@ -192,7 +192,7 @@ static long do_microcode_update(void *_info)
     return error;
 }
 
-int microcode_update(XEN_GUEST_HANDLE(const_void) buf, unsigned long len)
+int microcode_update(XEN_GUEST_HANDLE_PARAM(const_void) buf, unsigned long len)
 {
     int ret;
     struct microcode_info *info;
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 9f63974..fd1c890 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -2914,7 +2914,7 @@ static void put_pg_owner(struct domain *pg_owner)
 }
 
 static inline int vcpumask_to_pcpumask(
-    struct domain *d, XEN_GUEST_HANDLE(const_void) bmap, cpumask_t *pmask)
+    struct domain *d, XEN_GUEST_HANDLE_PARAM(const_void) bmap, cpumask_t *pmask)
 {
     unsigned int vcpu_id, vcpu_bias, offs;
     unsigned long vmask;
@@ -2974,9 +2974,9 @@ static inline void fixunmap_domain_page(const void *ptr)
 #endif
 
 int do_mmuext_op(
-    XEN_GUEST_HANDLE(mmuext_op_t) uops,
+    XEN_GUEST_HANDLE_PARAM(mmuext_op_t) uops,
     unsigned int count,
-    XEN_GUEST_HANDLE(uint) pdone,
+    XEN_GUEST_HANDLE_PARAM(uint) pdone,
     unsigned int foreigndom)
 {
     struct mmuext_op op;
@@ -3438,9 +3438,9 @@ int do_mmuext_op(
 }
 
 int do_mmu_update(
-    XEN_GUEST_HANDLE(mmu_update_t) ureqs,
+    XEN_GUEST_HANDLE_PARAM(mmu_update_t) ureqs,
     unsigned int count,
-    XEN_GUEST_HANDLE(uint) pdone,
+    XEN_GUEST_HANDLE_PARAM(uint) pdone,
     unsigned int foreigndom)
 {
     struct mmu_update req;
@@ -4387,7 +4387,7 @@ long set_gdt(struct vcpu *v,
 }
 
 
-long do_set_gdt(XEN_GUEST_HANDLE(ulong) frame_list, unsigned int entries)
+long do_set_gdt(XEN_GUEST_HANDLE_PARAM(ulong) frame_list, unsigned int entries)
 {
     int nr_pages = (entries + 511) / 512;
     unsigned long frames[16];
@@ -4661,7 +4661,7 @@ static int xenmem_add_to_physmap(struct domain *d,
     return xenmem_add_to_physmap_once(d, xatp);
 }
 
-long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
+long arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     int rc;
 
diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 13b4be2..67e48a3 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -690,7 +690,7 @@ void hap_teardown(struct domain *d)
 }
 
 int hap_domctl(struct domain *d, xen_domctl_shadow_op_t *sc,
-               XEN_GUEST_HANDLE(void) u_domctl)
+               XEN_GUEST_HANDLE_PARAM(void) u_domctl)
 {
     int rc, preempted = 0;
 
diff --git a/xen/arch/x86/mm/mem_event.c b/xen/arch/x86/mm/mem_event.c
index d728889..d3dac14 100644
--- a/xen/arch/x86/mm/mem_event.c
+++ b/xen/arch/x86/mm/mem_event.c
@@ -512,7 +512,7 @@ void mem_event_cleanup(struct domain *d)
 }
 
 int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
-                     XEN_GUEST_HANDLE(void) u_domctl)
+                     XEN_GUEST_HANDLE_PARAM(void) u_domctl)
 {
     int rc;
 
diff --git a/xen/arch/x86/mm/paging.c b/xen/arch/x86/mm/paging.c
index ca879f9..ea44e39 100644
--- a/xen/arch/x86/mm/paging.c
+++ b/xen/arch/x86/mm/paging.c
@@ -654,7 +654,7 @@ void paging_vcpu_init(struct vcpu *v)
 
 
 int paging_domctl(struct domain *d, xen_domctl_shadow_op_t *sc,
-                  XEN_GUEST_HANDLE(void) u_domctl)
+                  XEN_GUEST_HANDLE_PARAM(void) u_domctl)
 {
     int rc;
 
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index dc245be..bd47f03 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -3786,7 +3786,7 @@ out:
 
 int shadow_domctl(struct domain *d, 
                   xen_domctl_shadow_op_t *sc,
-                  XEN_GUEST_HANDLE(void) u_domctl)
+                  XEN_GUEST_HANDLE_PARAM(void) u_domctl)
 {
     int rc, preempted = 0;
 
diff --git a/xen/arch/x86/physdev.c b/xen/arch/x86/physdev.c
index b0458fd..b6474ef 100644
--- a/xen/arch/x86/physdev.c
+++ b/xen/arch/x86/physdev.c
@@ -255,7 +255,7 @@ int physdev_unmap_pirq(domid_t domid, int pirq)
 }
 #endif /* COMPAT */
 
-ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE(void) arg)
+ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     int irq;
     ret_t ret;
diff --git a/xen/arch/x86/platform_hypercall.c b/xen/arch/x86/platform_hypercall.c
index 88880b0..a32e0a2 100644
--- a/xen/arch/x86/platform_hypercall.c
+++ b/xen/arch/x86/platform_hypercall.c
@@ -60,7 +60,7 @@ long cpu_down_helper(void *data);
 long core_parking_helper(void *data);
 uint32_t get_cur_idle_nums(void);
 
-ret_t do_platform_op(XEN_GUEST_HANDLE(xen_platform_op_t) u_xenpf_op)
+ret_t do_platform_op(XEN_GUEST_HANDLE_PARAM(xen_platform_op_t) u_xenpf_op)
 {
     ret_t ret = 0;
     struct xen_platform_op curop, *op = &curop;
diff --git a/xen/arch/x86/sysctl.c b/xen/arch/x86/sysctl.c
index 379f071..b84dd34 100644
--- a/xen/arch/x86/sysctl.c
+++ b/xen/arch/x86/sysctl.c
@@ -58,7 +58,7 @@ long cpu_down_helper(void *data)
 }
 
 long arch_do_sysctl(
-    struct xen_sysctl *sysctl, XEN_GUEST_HANDLE(xen_sysctl_t) u_sysctl)
+    struct xen_sysctl *sysctl, XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
 {
     long ret = 0;
 
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 767be86..281d9e7 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -3700,7 +3700,7 @@ int send_guest_trap(struct domain *d, uint16_t vcpuid, unsigned int trap_nr)
 }
 
 
-long do_set_trap_table(XEN_GUEST_HANDLE(const_trap_info_t) traps)
+long do_set_trap_table(XEN_GUEST_HANDLE_PARAM(const_trap_info_t) traps)
 {
     struct trap_info cur;
     struct vcpu *curr = current;
diff --git a/xen/arch/x86/x86_32/mm.c b/xen/arch/x86/x86_32/mm.c
index 37efa3c..f6448fb 100644
--- a/xen/arch/x86/x86_32/mm.c
+++ b/xen/arch/x86/x86_32/mm.c
@@ -203,7 +203,7 @@ void __init subarch_init_memory(void)
     }
 }
 
-long subarch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
+long subarch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct xen_machphys_mfn_list xmml;
     unsigned long mfn, last_mfn;
diff --git a/xen/arch/x86/x86_32/traps.c b/xen/arch/x86/x86_32/traps.c
index 8f68808..0c7c860 100644
--- a/xen/arch/x86/x86_32/traps.c
+++ b/xen/arch/x86/x86_32/traps.c
@@ -492,7 +492,7 @@ static long unregister_guest_callback(struct callback_unregister *unreg)
 }
 
 
-long do_callback_op(int cmd, XEN_GUEST_HANDLE(const_void) arg)
+long do_callback_op(int cmd, XEN_GUEST_HANDLE_PARAM(const_void) arg)
 {
     long ret;
 
diff --git a/xen/arch/x86/x86_64/compat/mm.c b/xen/arch/x86/x86_64/compat/mm.c
index f497503..88a07e8 100644
--- a/xen/arch/x86/x86_64/compat/mm.c
+++ b/xen/arch/x86/x86_64/compat/mm.c
@@ -5,7 +5,7 @@
 #include <asm/mem_event.h>
 #include <asm/mem_sharing.h>
 
-int compat_set_gdt(XEN_GUEST_HANDLE(uint) frame_list, unsigned int entries)
+int compat_set_gdt(XEN_GUEST_HANDLE_PARAM(uint) frame_list, unsigned int entries)
 {
     unsigned int i, nr_pages = (entries + 511) / 512;
     unsigned long frames[16];
@@ -44,7 +44,7 @@ int compat_update_descriptor(u32 pa_lo, u32 pa_hi, u32 desc_lo, u32 desc_hi)
                                 desc_lo | ((u64)desc_hi << 32));
 }
 
-int compat_arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
+int compat_arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct compat_machphys_mfn_list xmml;
     l2_pgentry_t l2e;
@@ -260,9 +260,9 @@ int compat_update_va_mapping_otherdomain(unsigned long va, u32 lo, u32 hi,
 
 DEFINE_XEN_GUEST_HANDLE(mmuext_op_compat_t);
 
-int compat_mmuext_op(XEN_GUEST_HANDLE(mmuext_op_compat_t) cmp_uops,
+int compat_mmuext_op(XEN_GUEST_HANDLE_PARAM(mmuext_op_compat_t) cmp_uops,
                      unsigned int count,
-                     XEN_GUEST_HANDLE(uint) pdone,
+                     XEN_GUEST_HANDLE_PARAM(uint) pdone,
                      unsigned int foreigndom)
 {
     unsigned int i, preempt_mask;
diff --git a/xen/arch/x86/x86_64/domain.c b/xen/arch/x86/x86_64/domain.c
index e746c89..144ca2d 100644
--- a/xen/arch/x86/x86_64/domain.c
+++ b/xen/arch/x86/x86_64/domain.c
@@ -23,7 +23,7 @@ CHECK_vcpu_get_physid;
 
 int
 arch_compat_vcpu_op(
-    int cmd, struct vcpu *v, XEN_GUEST_HANDLE(void) arg)
+    int cmd, struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     int rc = -ENOSYS;
 
diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index 635a499..17c46a1 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -1043,7 +1043,7 @@ void __init subarch_init_memory(void)
     }
 }
 
-long subarch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
+long subarch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct xen_machphys_mfn_list xmml;
     l3_pgentry_t l3e;
diff --git a/xen/arch/x86/x86_64/traps.c b/xen/arch/x86/x86_64/traps.c
index 806cf2e..6ead813 100644
--- a/xen/arch/x86/x86_64/traps.c
+++ b/xen/arch/x86/x86_64/traps.c
@@ -518,7 +518,7 @@ static long unregister_guest_callback(struct callback_unregister *unreg)
 }
 
 
-long do_callback_op(int cmd, XEN_GUEST_HANDLE(const_void) arg)
+long do_callback_op(int cmd, XEN_GUEST_HANDLE_PARAM(const_void) arg)
 {
     long ret;
 
diff --git a/xen/common/compat/domain.c b/xen/common/compat/domain.c
index 40a0287..e4c8ceb 100644
--- a/xen/common/compat/domain.c
+++ b/xen/common/compat/domain.c
@@ -15,7 +15,7 @@
 CHECK_vcpu_set_periodic_timer;
 #undef xen_vcpu_set_periodic_timer
 
-int compat_vcpu_op(int cmd, int vcpuid, XEN_GUEST_HANDLE(void) arg)
+int compat_vcpu_op(int cmd, int vcpuid, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct domain *d = current->domain;
     struct vcpu *v;
diff --git a/xen/common/compat/grant_table.c b/xen/common/compat/grant_table.c
index edd20c6..74a4733 100644
--- a/xen/common/compat/grant_table.c
+++ b/xen/common/compat/grant_table.c
@@ -52,7 +52,7 @@ CHECK_gnttab_swap_grant_ref;
 #undef xen_gnttab_swap_grant_ref
 
 int compat_grant_table_op(unsigned int cmd,
-                          XEN_GUEST_HANDLE(void) cmp_uop,
+                          XEN_GUEST_HANDLE_PARAM(void) cmp_uop,
                           unsigned int count)
 {
     int rc = 0;
diff --git a/xen/common/compat/memory.c b/xen/common/compat/memory.c
index e7257cc..8e311ff 100644
--- a/xen/common/compat/memory.c
+++ b/xen/common/compat/memory.c
@@ -13,7 +13,7 @@ CHECK_TYPE(domid);
 #undef compat_domid_t
 #undef xen_domid_t
 
-int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE(void) compat)
+int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) compat)
 {
     int rc, split, op = cmd & MEMOP_CMD_MASK;
     unsigned int start_extent = cmd >> MEMOP_EXTENT_SHIFT;
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 4c5d241..d7cd135 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -804,7 +804,7 @@ void vcpu_reset(struct vcpu *v)
 }
 
 
-long do_vcpu_op(int cmd, int vcpuid, XEN_GUEST_HANDLE(void) arg)
+long do_vcpu_op(int cmd, int vcpuid, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct domain *d = current->domain;
     struct vcpu *v;
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index 7ca6b08..527c5ad 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -238,7 +238,7 @@ void domctl_lock_release(void)
     spin_unlock(&current->domain->hypercall_deadlock_mutex);
 }
 
-long do_domctl(XEN_GUEST_HANDLE(xen_domctl_t) u_domctl)
+long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
     long ret = 0;
     struct xen_domctl curop, *op = &curop;
diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index 53777f8..a80a0d1 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -970,7 +970,7 @@ out:
 }
 
 
-long do_event_channel_op(int cmd, XEN_GUEST_HANDLE(void) arg)
+long do_event_channel_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     long rc;
 
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 9961e83..d780dc6 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -771,7 +771,7 @@ __gnttab_map_grant_ref(
 
 static long
 gnttab_map_grant_ref(
-    XEN_GUEST_HANDLE(gnttab_map_grant_ref_t) uop, unsigned int count)
+    XEN_GUEST_HANDLE_PARAM(gnttab_map_grant_ref_t) uop, unsigned int count)
 {
     int i;
     struct gnttab_map_grant_ref op;
@@ -1040,7 +1040,7 @@ __gnttab_unmap_grant_ref(
 
 static long
 gnttab_unmap_grant_ref(
-    XEN_GUEST_HANDLE(gnttab_unmap_grant_ref_t) uop, unsigned int count)
+    XEN_GUEST_HANDLE_PARAM(gnttab_unmap_grant_ref_t) uop, unsigned int count)
 {
     int i, c, partial_done, done = 0;
     struct gnttab_unmap_grant_ref op;
@@ -1102,7 +1102,7 @@ __gnttab_unmap_and_replace(
 
 static long
 gnttab_unmap_and_replace(
-    XEN_GUEST_HANDLE(gnttab_unmap_and_replace_t) uop, unsigned int count)
+    XEN_GUEST_HANDLE_PARAM(gnttab_unmap_and_replace_t) uop, unsigned int count)
 {
     int i, c, partial_done, done = 0;
     struct gnttab_unmap_and_replace op;
@@ -1254,7 +1254,7 @@ active_alloc_failed:
 
 static long 
 gnttab_setup_table(
-    XEN_GUEST_HANDLE(gnttab_setup_table_t) uop, unsigned int count)
+    XEN_GUEST_HANDLE_PARAM(gnttab_setup_table_t) uop, unsigned int count)
 {
     struct gnttab_setup_table op;
     struct domain *d;
@@ -1348,7 +1348,7 @@ gnttab_setup_table(
 
 static long 
 gnttab_query_size(
-    XEN_GUEST_HANDLE(gnttab_query_size_t) uop, unsigned int count)
+    XEN_GUEST_HANDLE_PARAM(gnttab_query_size_t) uop, unsigned int count)
 {
     struct gnttab_query_size op;
     struct domain *d;
@@ -1485,7 +1485,7 @@ gnttab_prepare_for_transfer(
 
 static long
 gnttab_transfer(
-    XEN_GUEST_HANDLE(gnttab_transfer_t) uop, unsigned int count)
+    XEN_GUEST_HANDLE_PARAM(gnttab_transfer_t) uop, unsigned int count)
 {
     struct domain *d = current->domain;
     struct domain *e;
@@ -2082,7 +2082,7 @@ __gnttab_copy(
 
 static long
 gnttab_copy(
-    XEN_GUEST_HANDLE(gnttab_copy_t) uop, unsigned int count)
+    XEN_GUEST_HANDLE_PARAM(gnttab_copy_t) uop, unsigned int count)
 {
     int i;
     struct gnttab_copy op;
@@ -2101,7 +2101,7 @@ gnttab_copy(
 }
 
 static long
-gnttab_set_version(XEN_GUEST_HANDLE(gnttab_set_version_t uop))
+gnttab_set_version(XEN_GUEST_HANDLE_PARAM(gnttab_set_version_t uop))
 {
     gnttab_set_version_t op;
     struct domain *d = current->domain;
@@ -2220,7 +2220,7 @@ out:
 }
 
 static long
-gnttab_get_status_frames(XEN_GUEST_HANDLE(gnttab_get_status_frames_t) uop,
+gnttab_get_status_frames(XEN_GUEST_HANDLE_PARAM(gnttab_get_status_frames_t) uop,
                          int count)
 {
     gnttab_get_status_frames_t op;
@@ -2289,7 +2289,7 @@ out1:
 }
 
 static long
-gnttab_get_version(XEN_GUEST_HANDLE(gnttab_get_version_t uop))
+gnttab_get_version(XEN_GUEST_HANDLE_PARAM(gnttab_get_version_t uop))
 {
     gnttab_get_version_t op;
     struct domain *d;
@@ -2368,7 +2368,7 @@ out:
 }
 
 static long
-gnttab_swap_grant_ref(XEN_GUEST_HANDLE(gnttab_swap_grant_ref_t uop),
+gnttab_swap_grant_ref(XEN_GUEST_HANDLE_PARAM(gnttab_swap_grant_ref_t uop),
                       unsigned int count)
 {
     int i;
@@ -2389,7 +2389,7 @@ gnttab_swap_grant_ref(XEN_GUEST_HANDLE(gnttab_swap_grant_ref_t uop),
 
 long
 do_grant_table_op(
-    unsigned int cmd, XEN_GUEST_HANDLE(void) uop, unsigned int count)
+    unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) uop, unsigned int count)
 {
     long rc;
     
@@ -2401,7 +2401,7 @@ do_grant_table_op(
     {
     case GNTTABOP_map_grant_ref:
     {
-        XEN_GUEST_HANDLE(gnttab_map_grant_ref_t) map =
+        XEN_GUEST_HANDLE_PARAM(gnttab_map_grant_ref_t) map =
             guest_handle_cast(uop, gnttab_map_grant_ref_t);
         if ( unlikely(!guest_handle_okay(map, count)) )
             goto out;
@@ -2415,7 +2415,7 @@ do_grant_table_op(
     }
     case GNTTABOP_unmap_grant_ref:
     {
-        XEN_GUEST_HANDLE(gnttab_unmap_grant_ref_t) unmap =
+        XEN_GUEST_HANDLE_PARAM(gnttab_unmap_grant_ref_t) unmap =
             guest_handle_cast(uop, gnttab_unmap_grant_ref_t);
         if ( unlikely(!guest_handle_okay(unmap, count)) )
             goto out;
@@ -2429,7 +2429,7 @@ do_grant_table_op(
     }
     case GNTTABOP_unmap_and_replace:
     {
-        XEN_GUEST_HANDLE(gnttab_unmap_and_replace_t) unmap =
+        XEN_GUEST_HANDLE_PARAM(gnttab_unmap_and_replace_t) unmap =
             guest_handle_cast(uop, gnttab_unmap_and_replace_t);
         if ( unlikely(!guest_handle_okay(unmap, count)) )
             goto out;
@@ -2453,7 +2453,7 @@ do_grant_table_op(
     }
     case GNTTABOP_transfer:
     {
-        XEN_GUEST_HANDLE(gnttab_transfer_t) transfer =
+        XEN_GUEST_HANDLE_PARAM(gnttab_transfer_t) transfer =
             guest_handle_cast(uop, gnttab_transfer_t);
         if ( unlikely(!guest_handle_okay(transfer, count)) )
             goto out;
@@ -2467,7 +2467,7 @@ do_grant_table_op(
     }
     case GNTTABOP_copy:
     {
-        XEN_GUEST_HANDLE(gnttab_copy_t) copy =
+        XEN_GUEST_HANDLE_PARAM(gnttab_copy_t) copy =
             guest_handle_cast(uop, gnttab_copy_t);
         if ( unlikely(!guest_handle_okay(copy, count)) )
             goto out;
@@ -2504,7 +2504,7 @@ do_grant_table_op(
     }
     case GNTTABOP_swap_grant_ref:
     {
-        XEN_GUEST_HANDLE(gnttab_swap_grant_ref_t) swap =
+        XEN_GUEST_HANDLE_PARAM(gnttab_swap_grant_ref_t) swap =
             guest_handle_cast(uop, gnttab_swap_grant_ref_t);
         if ( unlikely(!guest_handle_okay(swap, count)) )
             goto out;
diff --git a/xen/common/kernel.c b/xen/common/kernel.c
index c915bbc..55caff6 100644
--- a/xen/common/kernel.c
+++ b/xen/common/kernel.c
@@ -204,7 +204,7 @@ void __init do_initcalls(void)
  * Simple hypercalls.
  */
 
-DO(xen_version)(int cmd, XEN_GUEST_HANDLE(void) arg)
+DO(xen_version)(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     switch ( cmd )
     {
@@ -332,7 +332,7 @@ DO(xen_version)(int cmd, XEN_GUEST_HANDLE(void) arg)
     return -ENOSYS;
 }
 
-DO(nmi_op)(unsigned int cmd, XEN_GUEST_HANDLE(void) arg)
+DO(nmi_op)(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct xennmi_callback cb;
     long rc = 0;
diff --git a/xen/common/kexec.c b/xen/common/kexec.c
index 09a5624..03389eb 100644
--- a/xen/common/kexec.c
+++ b/xen/common/kexec.c
@@ -613,7 +613,7 @@ static int kexec_get_range_internal(xen_kexec_range_t *range)
     return ret;
 }
 
-static int kexec_get_range(XEN_GUEST_HANDLE(void) uarg)
+static int kexec_get_range(XEN_GUEST_HANDLE_PARAM(void) uarg)
 {
     xen_kexec_range_t range;
     int ret = -EINVAL;
@@ -629,7 +629,7 @@ static int kexec_get_range(XEN_GUEST_HANDLE(void) uarg)
     return ret;
 }
 
-static int kexec_get_range_compat(XEN_GUEST_HANDLE(void) uarg)
+static int kexec_get_range_compat(XEN_GUEST_HANDLE_PARAM(void) uarg)
 {
 #ifdef CONFIG_COMPAT
     xen_kexec_range_t range;
@@ -777,7 +777,7 @@ static int kexec_load_unload_internal(unsigned long op, xen_kexec_load_t *load)
     return ret;
 }
 
-static int kexec_load_unload(unsigned long op, XEN_GUEST_HANDLE(void) uarg)
+static int kexec_load_unload(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) uarg)
 {
     xen_kexec_load_t load;
 
@@ -788,7 +788,7 @@ static int kexec_load_unload(unsigned long op, XEN_GUEST_HANDLE(void) uarg)
 }
 
 static int kexec_load_unload_compat(unsigned long op,
-                                    XEN_GUEST_HANDLE(void) uarg)
+                                    XEN_GUEST_HANDLE_PARAM(void) uarg)
 {
 #ifdef CONFIG_COMPAT
     compat_kexec_load_t compat_load;
@@ -813,7 +813,7 @@ static int kexec_load_unload_compat(unsigned long op,
 #endif /* CONFIG_COMPAT */
 }
 
-static int kexec_exec(XEN_GUEST_HANDLE(void) uarg)
+static int kexec_exec(XEN_GUEST_HANDLE_PARAM(void) uarg)
 {
     xen_kexec_exec_t exec;
     xen_kexec_image_t *image;
@@ -845,7 +845,7 @@ static int kexec_exec(XEN_GUEST_HANDLE(void) uarg)
     return -EINVAL; /* never reached */
 }
 
-int do_kexec_op_internal(unsigned long op, XEN_GUEST_HANDLE(void) uarg,
+int do_kexec_op_internal(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) uarg,
                            int compat)
 {
     unsigned long flags;
@@ -886,13 +886,13 @@ int do_kexec_op_internal(unsigned long op, XEN_GUEST_HANDLE(void) uarg,
     return ret;
 }
 
-long do_kexec_op(unsigned long op, XEN_GUEST_HANDLE(void) uarg)
+long do_kexec_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) uarg)
 {
     return do_kexec_op_internal(op, uarg, 0);
 }
 
 #ifdef CONFIG_COMPAT
-int compat_kexec_op(unsigned long op, XEN_GUEST_HANDLE(void) uarg)
+int compat_kexec_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) uarg)
 {
     return do_kexec_op_internal(op, uarg, 1);
 }
diff --git a/xen/common/memory.c b/xen/common/memory.c
index 5d64cb6..a126188 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -277,7 +277,7 @@ static void decrease_reservation(struct memop_args *a)
     a->nr_done = i;
 }
 
-static long memory_exchange(XEN_GUEST_HANDLE(xen_memory_exchange_t) arg)
+static long memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t) arg)
 {
     struct xen_memory_exchange exch;
     PAGE_LIST_HEAD(in_chunk_list);
@@ -530,7 +530,7 @@ static long memory_exchange(XEN_GUEST_HANDLE(xen_memory_exchange_t) arg)
     return rc;
 }
 
-long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE(void) arg)
+long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct domain *d;
     int rc, op;
diff --git a/xen/common/multicall.c b/xen/common/multicall.c
index 6c1a9d7..5de5f8d 100644
--- a/xen/common/multicall.c
+++ b/xen/common/multicall.c
@@ -21,7 +21,7 @@ typedef long ret_t;
 
 ret_t
 do_multicall(
-    XEN_GUEST_HANDLE(multicall_entry_t) call_list, unsigned int nr_calls)
+    XEN_GUEST_HANDLE_PARAM(multicall_entry_t) call_list, unsigned int nr_calls)
 {
     struct mc_state *mcs = &current->mc_state;
     unsigned int     i;
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index 0854f55..c26eac4 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -836,7 +836,7 @@ typedef long ret_t;
 
 #endif /* !COMPAT */
 
-ret_t do_sched_op(int cmd, XEN_GUEST_HANDLE(void) arg)
+ret_t do_sched_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     ret_t ret = 0;
 
diff --git a/xen/common/sysctl.c b/xen/common/sysctl.c
index ea68278..47142f4 100644
--- a/xen/common/sysctl.c
+++ b/xen/common/sysctl.c
@@ -27,7 +27,7 @@
 #include <xsm/xsm.h>
 #include <xen/pmstat.h>
 
-long do_sysctl(XEN_GUEST_HANDLE(xen_sysctl_t) u_sysctl)
+long do_sysctl(XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
 {
     long ret = 0;
     struct xen_sysctl curop, *op = &curop;
diff --git a/xen/common/xenoprof.c b/xen/common/xenoprof.c
index e571fea..c001b38 100644
--- a/xen/common/xenoprof.c
+++ b/xen/common/xenoprof.c
@@ -404,7 +404,7 @@ static int add_active_list(domid_t domid)
     return 0;
 }
 
-static int add_passive_list(XEN_GUEST_HANDLE(void) arg)
+static int add_passive_list(XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct xenoprof_passive passive;
     struct domain *d;
@@ -585,7 +585,7 @@ void xenoprof_log_event(struct vcpu *vcpu, const struct cpu_user_regs *regs,
 
 
 
-static int xenoprof_op_init(XEN_GUEST_HANDLE(void) arg)
+static int xenoprof_op_init(XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct domain *d = current->domain;
     struct xenoprof_init xenoprof_init;
@@ -609,7 +609,7 @@ static int xenoprof_op_init(XEN_GUEST_HANDLE(void) arg)
 
 #endif /* !COMPAT */
 
-static int xenoprof_op_get_buffer(XEN_GUEST_HANDLE(void) arg)
+static int xenoprof_op_get_buffer(XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct xenoprof_get_buffer xenoprof_get_buffer;
     struct domain *d = current->domain;
@@ -660,7 +660,7 @@ static int xenoprof_op_get_buffer(XEN_GUEST_HANDLE(void) arg)
                       || (op == XENOPROF_disable_virq)  \
                       || (op == XENOPROF_get_buffer))
  
-int do_xenoprof_op(int op, XEN_GUEST_HANDLE(void) arg)
+int do_xenoprof_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     int ret = 0;
     
diff --git a/xen/drivers/acpi/pmstat.c b/xen/drivers/acpi/pmstat.c
index 8788f01..2be1764 100644
--- a/xen/drivers/acpi/pmstat.c
+++ b/xen/drivers/acpi/pmstat.c
@@ -513,7 +513,7 @@ int do_pm_op(struct xen_sysctl_pm_op *op)
     return ret;
 }
 
-int acpi_set_pdc_bits(u32 acpi_id, XEN_GUEST_HANDLE(uint32) pdc)
+int acpi_set_pdc_bits(u32 acpi_id, XEN_GUEST_HANDLE_PARAM(uint32) pdc)
 {
     u32 bits[3];
     int ret;
diff --git a/xen/drivers/char/console.c b/xen/drivers/char/console.c
index e10bed5..b0f2334 100644
--- a/xen/drivers/char/console.c
+++ b/xen/drivers/char/console.c
@@ -182,7 +182,7 @@ static void putchar_console_ring(int c)
 
 long read_console_ring(struct xen_sysctl_readconsole *op)
 {
-    XEN_GUEST_HANDLE(char) str;
+    XEN_GUEST_HANDLE_PARAM(char) str;
     uint32_t idx, len, max, sofar, c;
 
     str   = guest_handle_cast(op->buffer, char),
@@ -320,7 +320,7 @@ static void notify_dom0_con_ring(unsigned long unused)
 static DECLARE_SOFTIRQ_TASKLET(notify_dom0_con_ring_tasklet,
                                notify_dom0_con_ring, 0);
 
-static long guest_console_write(XEN_GUEST_HANDLE(char) buffer, int count)
+static long guest_console_write(XEN_GUEST_HANDLE_PARAM(char) buffer, int count)
 {
     char kbuf[128], *kptr;
     int kcount;
@@ -358,7 +358,7 @@ static long guest_console_write(XEN_GUEST_HANDLE(char) buffer, int count)
     return 0;
 }
 
-long do_console_io(int cmd, int count, XEN_GUEST_HANDLE(char) buffer)
+long do_console_io(int cmd, int count, XEN_GUEST_HANDLE_PARAM(char) buffer)
 {
     long rc;
     unsigned int idx, len;
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 64f5fd1..396461f 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -518,7 +518,7 @@ void iommu_crash_shutdown(void)
 
 int iommu_do_domctl(
     struct xen_domctl *domctl,
-    XEN_GUEST_HANDLE(xen_domctl_t) u_domctl)
+    XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
     struct domain *d;
     u16 seg;
diff --git a/xen/include/asm-arm/guest_access.h b/xen/include/asm-arm/guest_access.h
index 7a955cb..bf5005b 100644
--- a/xen/include/asm-arm/guest_access.h
+++ b/xen/include/asm-arm/guest_access.h
@@ -30,7 +30,7 @@ unsigned long raw_clear_guest(void *to, unsigned len);
 /* Cast a guest handle to the specified type of handle. */
 #define guest_handle_cast(hnd, type) ({         \
     type *_x = (hnd).p;                         \
-    (XEN_GUEST_HANDLE(type)) { {_x } };            \
+    (XEN_GUEST_HANDLE_PARAM(type)) { _x };            \
 })
 
 #define guest_handle_from_ptr(ptr, type)        \
diff --git a/xen/include/asm-arm/hypercall.h b/xen/include/asm-arm/hypercall.h
index 454f02e..090e620 100644
--- a/xen/include/asm-arm/hypercall.h
+++ b/xen/include/asm-arm/hypercall.h
@@ -2,7 +2,7 @@
 #define __ASM_ARM_HYPERCALL_H__
 
 #include <public/domctl.h> /* for arch_do_domctl */
-int do_physdev_op(int cmd, XEN_GUEST_HANDLE(void) arg);
+int do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg);
 
 #endif /* __ASM_ARM_HYPERCALL_H__ */
 /*
diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
index b37bd35..8bf45ba 100644
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -267,7 +267,7 @@ static inline int relinquish_shared_pages(struct domain *d)
 
 
 /* Arch-specific portion of memory_op hypercall. */
-long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg);
+long arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg);
 
 int steal_page(
     struct domain *d, struct page_info *page, unsigned int memflags);
diff --git a/xen/include/asm-x86/hap.h b/xen/include/asm-x86/hap.h
index a2532a4..916a35b 100644
--- a/xen/include/asm-x86/hap.h
+++ b/xen/include/asm-x86/hap.h
@@ -51,7 +51,7 @@ hap_unmap_domain_page(void *p)
 /************************************************/
 void  hap_domain_init(struct domain *d);
 int   hap_domctl(struct domain *d, xen_domctl_shadow_op_t *sc,
-                 XEN_GUEST_HANDLE(void) u_domctl);
+                 XEN_GUEST_HANDLE_PARAM(void) u_domctl);
 int   hap_enable(struct domain *d, u32 mode);
 void  hap_final_teardown(struct domain *d);
 void  hap_teardown(struct domain *d);
diff --git a/xen/include/asm-x86/hypercall.h b/xen/include/asm-x86/hypercall.h
index 9e136c3..55b5ca2 100644
--- a/xen/include/asm-x86/hypercall.h
+++ b/xen/include/asm-x86/hypercall.h
@@ -18,22 +18,22 @@
 
 extern long
 do_event_channel_op_compat(
-    XEN_GUEST_HANDLE(evtchn_op_t) uop);
+    XEN_GUEST_HANDLE_PARAM(evtchn_op_t) uop);
 
 extern long
 do_set_trap_table(
-    XEN_GUEST_HANDLE(const_trap_info_t) traps);
+    XEN_GUEST_HANDLE_PARAM(const_trap_info_t) traps);
 
 extern int
 do_mmu_update(
-    XEN_GUEST_HANDLE(mmu_update_t) ureqs,
+    XEN_GUEST_HANDLE_PARAM(mmu_update_t) ureqs,
     unsigned int count,
-    XEN_GUEST_HANDLE(uint) pdone,
+    XEN_GUEST_HANDLE_PARAM(uint) pdone,
     unsigned int foreigndom);
 
 extern long
 do_set_gdt(
-    XEN_GUEST_HANDLE(ulong) frame_list,
+    XEN_GUEST_HANDLE_PARAM(ulong) frame_list,
     unsigned int entries);
 
 extern long
@@ -60,7 +60,7 @@ do_update_descriptor(
     u64 desc);
 
 extern long
-do_mca(XEN_GUEST_HANDLE(xen_mc_t) u_xen_mc);
+do_mca(XEN_GUEST_HANDLE_PARAM(xen_mc_t) u_xen_mc);
 
 extern int
 do_update_va_mapping(
@@ -70,7 +70,7 @@ do_update_va_mapping(
 
 extern long
 do_physdev_op(
-    int cmd, XEN_GUEST_HANDLE(void) arg);
+    int cmd, XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern int
 do_update_va_mapping_otherdomain(
@@ -81,9 +81,9 @@ do_update_va_mapping_otherdomain(
 
 extern int
 do_mmuext_op(
-    XEN_GUEST_HANDLE(mmuext_op_t) uops,
+    XEN_GUEST_HANDLE_PARAM(mmuext_op_t) uops,
     unsigned int count,
-    XEN_GUEST_HANDLE(uint) pdone,
+    XEN_GUEST_HANDLE_PARAM(uint) pdone,
     unsigned int foreigndom);
 
 extern unsigned long
@@ -92,7 +92,7 @@ do_iret(
 
 extern int
 do_kexec(
-    unsigned long op, unsigned arg1, XEN_GUEST_HANDLE(void) uarg);
+    unsigned long op, unsigned arg1, XEN_GUEST_HANDLE_PARAM(void) uarg);
 
 #ifdef __x86_64__
 
@@ -110,11 +110,11 @@ do_set_segment_base(
 extern int
 compat_physdev_op(
     int cmd,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern int
 arch_compat_vcpu_op(
-    int cmd, struct vcpu *v, XEN_GUEST_HANDLE(void) arg);
+    int cmd, struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg);
 
 #else
 
diff --git a/xen/include/asm-x86/mem_event.h b/xen/include/asm-x86/mem_event.h
index 23d71c1..e17f36b 100644
--- a/xen/include/asm-x86/mem_event.h
+++ b/xen/include/asm-x86/mem_event.h
@@ -65,7 +65,7 @@ int mem_event_get_response(struct domain *d, struct mem_event_domain *med,
 struct domain *get_mem_event_op_target(uint32_t domain, int *rc);
 int do_mem_event_op(int op, uint32_t domain, void *arg);
 int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
-                     XEN_GUEST_HANDLE(void) u_domctl);
+                     XEN_GUEST_HANDLE_PARAM(void) u_domctl);
 
 #endif /* __MEM_EVENT_H__ */
 
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index 4cba276..6373b3b 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -604,10 +604,10 @@ void *do_page_walk(struct vcpu *v, unsigned long addr);
 int __sync_local_execstate(void);
 
 /* Arch-specific portion of memory_op hypercall. */
-long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg);
-long subarch_memory_op(int op, XEN_GUEST_HANDLE(void) arg);
-int compat_arch_memory_op(int op, XEN_GUEST_HANDLE(void));
-int compat_subarch_memory_op(int op, XEN_GUEST_HANDLE(void));
+long arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg);
+long subarch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg);
+int compat_arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void));
+int compat_subarch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void));
 
 int steal_page(
     struct domain *d, struct page_info *page, unsigned int memflags);
diff --git a/xen/include/asm-x86/paging.h b/xen/include/asm-x86/paging.h
index c432a97..1cd0e3f 100644
--- a/xen/include/asm-x86/paging.h
+++ b/xen/include/asm-x86/paging.h
@@ -215,7 +215,7 @@ int paging_domain_init(struct domain *d, unsigned int domcr_flags);
  * and disable ephemeral shadow modes (test mode and log-dirty mode) and
  * manipulate the log-dirty bitmap. */
 int paging_domctl(struct domain *d, xen_domctl_shadow_op_t *sc,
-                  XEN_GUEST_HANDLE(void) u_domctl);
+                  XEN_GUEST_HANDLE_PARAM(void) u_domctl);
 
 /* Call when destroying a domain */
 void paging_teardown(struct domain *d);
diff --git a/xen/include/asm-x86/processor.h b/xen/include/asm-x86/processor.h
index 7164a50..efdbddd 100644
--- a/xen/include/asm-x86/processor.h
+++ b/xen/include/asm-x86/processor.h
@@ -598,7 +598,7 @@ int rdmsr_hypervisor_regs(uint32_t idx, uint64_t *val);
 int wrmsr_hypervisor_regs(uint32_t idx, uint64_t val);
 
 void microcode_set_module(unsigned int);
-int microcode_update(XEN_GUEST_HANDLE(const_void), unsigned long len);
+int microcode_update(XEN_GUEST_HANDLE_PARAM(const_void), unsigned long len);
 int microcode_resume_cpu(int cpu);
 
 unsigned long *get_x86_gpr(struct cpu_user_regs *regs, unsigned int modrm_reg);
diff --git a/xen/include/asm-x86/shadow.h b/xen/include/asm-x86/shadow.h
index 88a8cd2..2eb6efc 100644
--- a/xen/include/asm-x86/shadow.h
+++ b/xen/include/asm-x86/shadow.h
@@ -73,7 +73,7 @@ int shadow_track_dirty_vram(struct domain *d,
  * manipulate the log-dirty bitmap. */
 int shadow_domctl(struct domain *d, 
                   xen_domctl_shadow_op_t *sc,
-                  XEN_GUEST_HANDLE(void) u_domctl);
+                  XEN_GUEST_HANDLE_PARAM(void) u_domctl);
 
 /* Call when destroying a domain */
 void shadow_teardown(struct domain *d);
diff --git a/xen/include/asm-x86/xenoprof.h b/xen/include/asm-x86/xenoprof.h
index c03f8c8..3f5ea15 100644
--- a/xen/include/asm-x86/xenoprof.h
+++ b/xen/include/asm-x86/xenoprof.h
@@ -40,9 +40,9 @@ int xenoprof_arch_init(int *num_events, char *cpu_type);
 #define xenoprof_arch_disable_virq()            nmi_disable_virq()
 #define xenoprof_arch_release_counters()        nmi_release_counters()
 
-int xenoprof_arch_counter(XEN_GUEST_HANDLE(void) arg);
-int compat_oprof_arch_counter(XEN_GUEST_HANDLE(void) arg);
-int xenoprof_arch_ibs_counter(XEN_GUEST_HANDLE(void) arg);
+int xenoprof_arch_counter(XEN_GUEST_HANDLE_PARAM(void) arg);
+int compat_oprof_arch_counter(XEN_GUEST_HANDLE_PARAM(void) arg);
+int xenoprof_arch_ibs_counter(XEN_GUEST_HANDLE_PARAM(void) arg);
 
 struct vcpu;
 struct cpu_user_regs;
diff --git a/xen/include/xen/acpi.h b/xen/include/xen/acpi.h
index d7e2f94..8f3cdca 100644
--- a/xen/include/xen/acpi.h
+++ b/xen/include/xen/acpi.h
@@ -145,8 +145,8 @@ static inline unsigned int acpi_get_cstate_limit(void) { return 0; }
 static inline void acpi_set_cstate_limit(unsigned int new_limit) { return; }
 #endif
 
-#ifdef XEN_GUEST_HANDLE
-int acpi_set_pdc_bits(u32 acpi_id, XEN_GUEST_HANDLE(uint32));
+#ifdef XEN_GUEST_HANDLE_PARAM
+int acpi_set_pdc_bits(u32 acpi_id, XEN_GUEST_HANDLE_PARAM(uint32));
 #endif
 int arch_acpi_set_pdc_bits(u32 acpi_id, u32 *, u32 mask);
 
diff --git a/xen/include/xen/hypercall.h b/xen/include/xen/hypercall.h
index 73b1598..e335037 100644
--- a/xen/include/xen/hypercall.h
+++ b/xen/include/xen/hypercall.h
@@ -29,29 +29,29 @@ do_sched_op_compat(
 extern long
 do_sched_op(
     int cmd,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern long
 do_domctl(
-    XEN_GUEST_HANDLE(xen_domctl_t) u_domctl);
+    XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl);
 
 extern long
 arch_do_domctl(
     struct xen_domctl *domctl,
-    XEN_GUEST_HANDLE(xen_domctl_t) u_domctl);
+    XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl);
 
 extern long
 do_sysctl(
-    XEN_GUEST_HANDLE(xen_sysctl_t) u_sysctl);
+    XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl);
 
 extern long
 arch_do_sysctl(
     struct xen_sysctl *sysctl,
-    XEN_GUEST_HANDLE(xen_sysctl_t) u_sysctl);
+    XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl);
 
 extern long
 do_platform_op(
-    XEN_GUEST_HANDLE(xen_platform_op_t) u_xenpf_op);
+    XEN_GUEST_HANDLE_PARAM(xen_platform_op_t) u_xenpf_op);
 
 /*
  * To allow safe resume of do_memory_op() after preemption, we need to know
@@ -64,11 +64,11 @@ do_platform_op(
 extern long
 do_memory_op(
     unsigned long cmd,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern long
 do_multicall(
-    XEN_GUEST_HANDLE(multicall_entry_t) call_list,
+    XEN_GUEST_HANDLE_PARAM(multicall_entry_t) call_list,
     unsigned int nr_calls);
 
 extern long
@@ -77,23 +77,23 @@ do_set_timer_op(
 
 extern long
 do_event_channel_op(
-    int cmd, XEN_GUEST_HANDLE(void) arg);
+    int cmd, XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern long
 do_xen_version(
     int cmd,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern long
 do_console_io(
     int cmd,
     int count,
-    XEN_GUEST_HANDLE(char) buffer);
+    XEN_GUEST_HANDLE_PARAM(char) buffer);
 
 extern long
 do_grant_table_op(
     unsigned int cmd,
-    XEN_GUEST_HANDLE(void) uop,
+    XEN_GUEST_HANDLE_PARAM(void) uop,
     unsigned int count);
 
 extern long
@@ -105,72 +105,72 @@ extern long
 do_vcpu_op(
     int cmd,
     int vcpuid,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 struct vcpu;
 extern long
 arch_do_vcpu_op(int cmd,
     struct vcpu *v,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern long
 do_nmi_op(
     unsigned int cmd,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern long
 do_hvm_op(
     unsigned long op,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern long
 do_kexec_op(
     unsigned long op,
     int arg1,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern long
 do_xsm_op(
-    XEN_GUEST_HANDLE(xsm_op_t) u_xsm_op);
+    XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_xsm_op);
 
 extern long
 do_tmem_op(
-    XEN_GUEST_HANDLE(tmem_op_t) uops);
+    XEN_GUEST_HANDLE_PARAM(tmem_op_t) uops);
 
 extern int
-do_xenoprof_op(int op, XEN_GUEST_HANDLE(void) arg);
+do_xenoprof_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg);
 
 #ifdef CONFIG_COMPAT
 
 extern int
 compat_memory_op(
     unsigned int cmd,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern int
 compat_grant_table_op(
     unsigned int cmd,
-    XEN_GUEST_HANDLE(void) uop,
+    XEN_GUEST_HANDLE_PARAM(void) uop,
     unsigned int count);
 
 extern int
 compat_vcpu_op(
     int cmd,
     int vcpuid,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern int
-compat_xenoprof_op(int op, XEN_GUEST_HANDLE(void) arg);
+compat_xenoprof_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern int
 compat_xen_version(
     int cmd,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern int
 compat_sched_op(
     int cmd,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern int
 compat_set_timer_op(
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 6f7fbf7..bd19e23 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -155,7 +155,7 @@ void iommu_crash_shutdown(void);
 void iommu_set_dom0_mapping(struct domain *d);
 void iommu_share_p2m_table(struct domain *d);
 
-int iommu_do_domctl(struct xen_domctl *, XEN_GUEST_HANDLE(xen_domctl_t));
+int iommu_do_domctl(struct xen_domctl *, XEN_GUEST_HANDLE_PARAM(xen_domctl_t));
 
 void iommu_iotlb_flush(struct domain *d, unsigned long gfn, unsigned int page_count);
 void iommu_iotlb_flush_all(struct domain *d);
diff --git a/xen/include/xen/tmem_xen.h b/xen/include/xen/tmem_xen.h
index 4a35760..2e7199a 100644
--- a/xen/include/xen/tmem_xen.h
+++ b/xen/include/xen/tmem_xen.h
@@ -448,7 +448,7 @@ static inline void tmh_tze_copy_from_pfp(void *tva, pfp_t *pfp, pagesize_t len)
 typedef XEN_GUEST_HANDLE(void) cli_mfn_t;
 typedef XEN_GUEST_HANDLE(char) cli_va_t;
 */
-typedef XEN_GUEST_HANDLE(tmem_op_t) tmem_cli_op_t;
+typedef XEN_GUEST_HANDLE_PARAM(tmem_op_t) tmem_cli_op_t;
 
 static inline int tmh_get_tmemop_from_client(tmem_op_t *op, tmem_cli_op_t uops)
 {
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index bef79df..3e4a47f 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -139,7 +139,7 @@ struct xsm_operations {
     int (*cpupool_op)(void);
     int (*sched_op)(void);
 
-    long (*__do_xsm_op) (XEN_GUEST_HANDLE(xsm_op_t) op);
+    long (*__do_xsm_op) (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op);
 
 #ifdef CONFIG_X86
     int (*shadow_control) (struct domain *d, uint32_t op);
@@ -585,7 +585,7 @@ static inline int xsm_sched_op(void)
     return xsm_call(sched_op());
 }
 
-static inline long __do_xsm_op (XEN_GUEST_HANDLE(xsm_op_t) op)
+static inline long __do_xsm_op (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op)
 {
 #ifdef XSM_ENABLE
     return xsm_ops->__do_xsm_op(op);
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index 7027ee7..5ef6529 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -365,7 +365,7 @@ static int dummy_sched_op (void)
     return 0;
 }
 
-static long dummy___do_xsm_op(XEN_GUEST_HANDLE(xsm_op_t) op)
+static long dummy___do_xsm_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) op)
 {
     return -ENOSYS;
 }
diff --git a/xen/xsm/flask/flask_op.c b/xen/xsm/flask/flask_op.c
index bd4db37..23e7d34 100644
--- a/xen/xsm/flask/flask_op.c
+++ b/xen/xsm/flask/flask_op.c
@@ -71,7 +71,7 @@ static int domain_has_security(struct domain *d, u32 perms)
                         perms, NULL);
 }
 
-static int flask_copyin_string(XEN_GUEST_HANDLE(char) u_buf, char **buf, uint32_t size)
+static int flask_copyin_string(XEN_GUEST_HANDLE_PARAM(char) u_buf, char **buf, uint32_t size)
 {
     char *tmp = xmalloc_bytes(size + 1);
     if ( !tmp )
@@ -573,7 +573,7 @@ static int flask_get_peer_sid(struct xen_flask_peersid *arg)
     return rv;
 }
 
-long do_flask_op(XEN_GUEST_HANDLE(xsm_op_t) u_flask_op)
+long do_flask_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_flask_op)
 {
     xen_flask_op_t op;
     int rv;
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 23b84f3..0fc299c 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -1553,7 +1553,7 @@ static int flask_vcpuextstate (struct domain *d, uint32_t cmd)
 }
 #endif
 
-long do_flask_op(XEN_GUEST_HANDLE(xsm_op_t) u_flask_op);
+long do_flask_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_flask_op);
 
 static struct xsm_operations flask_ops = {
     .security_domaininfo = flask_security_domaininfo,
diff --git a/xen/xsm/xsm_core.c b/xen/xsm/xsm_core.c
index 96c8669..46287cb 100644
--- a/xen/xsm/xsm_core.c
+++ b/xen/xsm/xsm_core.c
@@ -111,7 +111,7 @@ int unregister_xsm(struct xsm_operations *ops)
 
 #endif
 
-long do_xsm_op (XEN_GUEST_HANDLE(xsm_op_t) op)
+long do_xsm_op (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op)
 {
     return __do_xsm_op(op);
 }
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 14:12:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:12:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyO2d-0000xD-72; Mon, 06 Aug 2012 14:12:19 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SyO2b-0000wT-5p
	for xen-devel@lists.xensource.com; Mon, 06 Aug 2012 14:12:17 +0000
Received: from [85.158.143.99:55383] by server-3.bemta-4.messagelabs.com id
	2A/AA-01511-0C0DF105; Mon, 06 Aug 2012 14:12:16 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-5.tower-216.messagelabs.com!1344262333!30147524!3
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjI4NDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4345 invoked from network); 6 Aug 2012 14:12:16 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 14:12:16 -0000
X-IronPort-AV: E=Sophos;i="4.77,720,1336363200"; d="scan'208";a="33696046"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 10:12:13 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Mon, 6 Aug 2012 10:12:13 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1SyO2W-0002fX-Tt;
	Mon, 06 Aug 2012 15:12:12 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: xen-devel@lists.xensource.com
Date: Mon, 6 Aug 2012 15:12:03 +0100
Message-ID: <1344262325-26598-3-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: Tim.Deegan@citrix.com, Ian.Campbell@citrix.com,
	stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH 3/5] xen: few more xen_ulong_t substitutions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

There are still a few unsigned longs in the Xen public interface: replace
them with xen_ulong_t.

Also typedef xen_ulong_t to uint64_t on ARM.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/include/public/arch-arm.h |    4 ++--
 xen/include/public/physdev.h  |    2 +-
 xen/include/public/version.h  |    2 +-
 xen/include/public/xen.h      |    4 ++--
 4 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
index 14ad0ab..2ae6548 100644
--- a/xen/include/public/arch-arm.h
+++ b/xen/include/public/arch-arm.h
@@ -122,8 +122,8 @@ typedef uint64_t xen_pfn_t;
 /* Only one. All other VCPUS must use VCPUOP_register_vcpu_info */
 #define XEN_LEGACY_MAX_VCPUS 1
 
-typedef uint32_t xen_ulong_t;
-#define PRI_xen_ulong PRIx32
+typedef uint64_t xen_ulong_t;
+#define PRI_xen_ulong PRIx64
 
 struct vcpu_guest_context {
 #define _VGCF_online                   0
diff --git a/xen/include/public/physdev.h b/xen/include/public/physdev.h
index b78eeba..a4cf6eb 100644
--- a/xen/include/public/physdev.h
+++ b/xen/include/public/physdev.h
@@ -124,7 +124,7 @@ DEFINE_XEN_GUEST_HANDLE(physdev_set_iobitmap_t);
 #define PHYSDEVOP_apic_write             9
 struct physdev_apic {
     /* IN */
-    unsigned long apic_physbase;
+    xen_ulong_t apic_physbase;
     uint32_t reg;
     /* IN or OUT */
     uint32_t value;
diff --git a/xen/include/public/version.h b/xen/include/public/version.h
index 8742c2b..eb83eba 100644
--- a/xen/include/public/version.h
+++ b/xen/include/public/version.h
@@ -58,7 +58,7 @@ typedef char xen_changeset_info_t[64];
 
 #define XENVER_platform_parameters 5
 struct xen_platform_parameters {
-    unsigned long virt_start;
+    xen_ulong_t virt_start;
 };
 typedef struct xen_platform_parameters xen_platform_parameters_t;
 
diff --git a/xen/include/public/xen.h b/xen/include/public/xen.h
index b2f6c50..d635bbf 100644
--- a/xen/include/public/xen.h
+++ b/xen/include/public/xen.h
@@ -518,8 +518,8 @@ DEFINE_XEN_GUEST_HANDLE(mmu_update_t);
  * NB. The fields are natural register size for this architecture.
  */
 struct multicall_entry {
-    unsigned long op, result;
-    unsigned long args[6];
+    xen_ulong_t op, result;
+    xen_ulong_t args[6];
 };
 typedef struct multicall_entry multicall_entry_t;
 DEFINE_XEN_GUEST_HANDLE(multicall_entry_t);
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 14:12:42 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:12:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyO2s-00014H-N3; Mon, 06 Aug 2012 14:12:34 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SyO2r-00013N-9w
	for xen-devel@lists.xensource.com; Mon, 06 Aug 2012 14:12:33 +0000
Received: from [85.158.143.99:27243] by server-1.bemta-4.messagelabs.com id
	CE/F5-24392-0D0DF105; Mon, 06 Aug 2012 14:12:32 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-5.tower-216.messagelabs.com!1344262350!30147595!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzAwNzE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5503 invoked from network); 6 Aug 2012 14:12:32 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 14:12:32 -0000
X-IronPort-AV: E=Sophos;i="4.77,720,1336363200"; d="scan'208";a="204273843"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 10:12:13 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Mon, 6 Aug 2012 10:12:13 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1SyO2W-0002fX-Vb;
	Mon, 06 Aug 2012 15:12:12 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: xen-devel@lists.xensource.com
Date: Mon, 6 Aug 2012 15:12:04 +0100
Message-ID: <1344262325-26598-4-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: Tim.Deegan@citrix.com, Ian.Campbell@citrix.com,
	stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH 4/5] xen: introduce XEN_GUEST_HANDLE_PARAM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Note: this change does not make any difference on x86 and ia64.


XEN_GUEST_HANDLE_PARAM is going to be used to distinguish guest pointers
stored in memory from guest pointers as hypercall parameters.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/include/asm-arm/guest_access.h |    2 +-
 xen/include/public/arch-arm.h      |   17 +++++++++++++----
 xen/include/public/arch-ia64.h     |    1 +
 xen/include/public/arch-x86/xen.h  |    1 +
 4 files changed, 16 insertions(+), 5 deletions(-)

diff --git a/xen/include/asm-arm/guest_access.h b/xen/include/asm-arm/guest_access.h
index 0fceae6..7a955cb 100644
--- a/xen/include/asm-arm/guest_access.h
+++ b/xen/include/asm-arm/guest_access.h
@@ -30,7 +30,7 @@ unsigned long raw_clear_guest(void *to, unsigned len);
 /* Cast a guest handle to the specified type of handle. */
 #define guest_handle_cast(hnd, type) ({         \
     type *_x = (hnd).p;                         \
-    (XEN_GUEST_HANDLE(type)) { _x };            \
+    (XEN_GUEST_HANDLE(type)) { {_x } };            \
 })
 
 #define guest_handle_from_ptr(ptr, type)        \
diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
index 2ae6548..d17d645 100644
--- a/xen/include/public/arch-arm.h
+++ b/xen/include/public/arch-arm.h
@@ -51,18 +51,27 @@
 
 #define XEN_HYPERCALL_TAG   0XEA1
 
+#define uint64_aligned_t uint64_t __attribute__((aligned(8)))
 
 #ifndef __ASSEMBLY__
-#define ___DEFINE_XEN_GUEST_HANDLE(name, type) \
-    typedef struct { type *p; } __guest_handle_ ## name
+#define ___DEFINE_XEN_GUEST_HANDLE(name, type)                  \
+    typedef struct { type *p; }                                 \
+        __guest_handle_ ## name;                                \
+    typedef struct { union { type *p; uint64_aligned_t q; }; }  \
+        __guest_handle_64_ ## name;
 
 #define __DEFINE_XEN_GUEST_HANDLE(name, type) \
     ___DEFINE_XEN_GUEST_HANDLE(name, type);   \
     ___DEFINE_XEN_GUEST_HANDLE(const_##name, const type)
 #define DEFINE_XEN_GUEST_HANDLE(name)   __DEFINE_XEN_GUEST_HANDLE(name, name)
-#define __XEN_GUEST_HANDLE(name)        __guest_handle_ ## name
+#define __XEN_GUEST_HANDLE(name)        __guest_handle_64_ ## name
 #define XEN_GUEST_HANDLE(name)          __XEN_GUEST_HANDLE(name)
-#define set_xen_guest_handle_raw(hnd, val)  do { (hnd).p = val; } while (0)
+/* this is going to be changed on 64 bit */
+#define XEN_GUEST_HANDLE_PARAM(name)    __guest_handle_ ## name
+#define set_xen_guest_handle_raw(hnd, val)                  \
+    do { if ( sizeof(hnd) == 8 ) *(uint64_t *)&(hnd) = 0;   \
+         (hnd).p = val;                                     \
+    } while ( 0 )
 #ifdef __XEN_TOOLS__
 #define get_xen_guest_handle(val, hnd)  do { val = (hnd).p; } while (0)
 #endif
diff --git a/xen/include/public/arch-ia64.h b/xen/include/public/arch-ia64.h
index c9da5d4..97583ea 100644
--- a/xen/include/public/arch-ia64.h
+++ b/xen/include/public/arch-ia64.h
@@ -47,6 +47,7 @@
 
 #define DEFINE_XEN_GUEST_HANDLE(name)   __DEFINE_XEN_GUEST_HANDLE(name, name)
 #define XEN_GUEST_HANDLE(name)          __guest_handle_ ## name
+#define XEN_GUEST_HANDLE_PARAM(name)    XEN_GUEST_HANDLE(name)
 #define XEN_GUEST_HANDLE_64(name)       XEN_GUEST_HANDLE(name)
 #define uint64_aligned_t                uint64_t
 #define set_xen_guest_handle_raw(hnd, val)  do { (hnd).p = val; } while (0)
diff --git a/xen/include/public/arch-x86/xen.h b/xen/include/public/arch-x86/xen.h
index 1c186d7..8ee5437 100644
--- a/xen/include/public/arch-x86/xen.h
+++ b/xen/include/public/arch-x86/xen.h
@@ -44,6 +44,7 @@
 #define DEFINE_XEN_GUEST_HANDLE(name)   __DEFINE_XEN_GUEST_HANDLE(name, name)
 #define __XEN_GUEST_HANDLE(name)        __guest_handle_ ## name
 #define XEN_GUEST_HANDLE(name)          __XEN_GUEST_HANDLE(name)
+#define XEN_GUEST_HANDLE_PARAM(name)    XEN_GUEST_HANDLE(name)
 #define set_xen_guest_handle_raw(hnd, val)  do { (hnd).p = val; } while (0)
 #ifdef __XEN_TOOLS__
 #define get_xen_guest_handle(val, hnd)  do { val = (hnd).p; } while (0)
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 14:13:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:13:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyO3F-0001Do-3f; Mon, 06 Aug 2012 14:12:57 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SyO3E-0001DS-OL
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 14:12:56 +0000
Received: from [85.158.138.51:40205] by server-2.bemta-3.messagelabs.com id
	1E/C9-29239-7E0DF105; Mon, 06 Aug 2012 14:12:55 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-16.tower-174.messagelabs.com!1344262375!30536439!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM4NjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12702 invoked from network); 6 Aug 2012 14:12:55 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-16.tower-174.messagelabs.com with SMTP;
	6 Aug 2012 14:12:55 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 06 Aug 2012 15:12:54 +0100
Message-Id: <501FED030200007800092E35@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Mon, 06 Aug 2012 15:12:51 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <ian.campbell@citrix.com>,
	"George Dunlap" <george.dunlap@eu.citrix.com>,
	"Ian Jackson" <Ian.Jackson@eu.citrix.com>
References: <501173540200007800090A04@nat28.tlf.novell.com>
	<50116CD9.6000503@eu.citrix.com>
	<501FE96E0200007800092E25@nat28.tlf.novell.com>
In-Reply-To: <501FE96E0200007800092E25@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] PoD code killing domain before it really gets
 started
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 06.08.12 at 15:57, "Jan Beulich" <JBeulich@suse.com> wrote:
> The domain indeed has 0x1e0 pages allocated, and a huge (still
> growing number) of PoD entries. And apparently this fails so
> rarely because it's pretty unlikely that there's not a single clear
> page that the PoD code can select as victim, plus the Dom0
> space code likely also only infrequently happens to kick in at
> the wrong time.

Just realized that of course it's also suspicious that there
shouldn't be any clear page among those 480 - Dom0 scrubs
its pages as it balloons them out (but I think ballooning isn't even
in use there), Xen scrubs the free pages on boot, yet this
has reportedly happened even for the very first domain
created after boot. Or does the PoD code not touch the low
2MB for some reason?

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 14:18:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:18:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyO8J-00023W-Si; Mon, 06 Aug 2012 14:18:11 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1SyO8J-00023H-2d
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 14:18:11 +0000
Received: from [85.158.143.99:20688] by server-2.bemta-4.messagelabs.com id
	B9/99-17938-222DF105; Mon, 06 Aug 2012 14:18:10 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-7.tower-216.messagelabs.com!1344262689!27116678!1
X-Originating-IP: [74.125.83.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31774 invoked from network); 6 Aug 2012 14:18:09 -0000
Received: from mail-ee0-f45.google.com (HELO mail-ee0-f45.google.com)
	(74.125.83.45)
	by server-7.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 14:18:09 -0000
Received: by eeke53 with SMTP id e53so856892eek.32
	for <multiple recipients>; Mon, 06 Aug 2012 07:18:09 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:date:x-google-sender-auth:message-id:subject
	:from:to:cc:content-type;
	bh=BigfR+S2+mCrFbzV23nzjK+sKZ9qqEW53IHIQ8Zk/uc=;
	b=kOsjdFCJ9yetzAuLy7InAI3+0jQ7GvsIcTuxG6OrNVyoMtry0R/0P4rMvZ+pg6wsIs
	0hT8y3UWh0ImFvyYWatlA3eLpp/CorXqNQLHdWq+5CHmesC6KXqD9wePeHjV9DEDLSKI
	aYqxAut0PaArPoGhmFFg4KdsWJwQJ7VHUKnaaDAngOv6XmWexpZbho79sodBLvJnxRWa
	afvGqjoCi3IvU47BX1lXKF9SdAy1v4IJGPBT2RCqmLfJiqnlyLDrfXeEVgJkSzgQxZST
	re0W2xvA6bcbB/odIn8/A3Zht6qRR4qhm5apTxu08cyYe7cMJwRT7X/c2IH3Yc+yEqgz
	0IXA==
MIME-Version: 1.0
Received: by 10.14.218.5 with SMTP id j5mr13228221eep.16.1344262689452; Mon,
	06 Aug 2012 07:18:09 -0700 (PDT)
Received: by 10.14.206.135 with HTTP; Mon, 6 Aug 2012 07:18:09 -0700 (PDT)
Date: Mon, 6 Aug 2012 15:18:09 +0100
X-Google-Sender-Auth: yFbdp94CGUL0bXUmH-0CpGdAEE4
Message-ID: <CAFLBxZaM+NrphF2eQ5v8+DVQue5F7CgQh_Yi-byv5dpQva1TMw@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: xen-devel@lists.xen.org, xen-users@lists.xen.org
Cc: Ian Jackson <ian.jackson@xen.org>, Jan Beulich <JBeulich@suse.com>,
	Joanna Rutkowska <joanna@invisiblethingslab.com>,
	John Haxby <john.haxby@oracle.com>,
	Matthew Allen <matthew.allen@citrix.com>,
	Sander Eikelenboom <linux@eikelenboom.it>,
	Thomas Goirand <thomas@goirand.fr>, Matt Wilson <msw@amazon.com>,
	John Creol <iamcreo@yahoo.com>, Alan Cox <alan@lxorguk.ukuu.org.uk>
Subject: [Xen-devel] Security discussion poll
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

As promised, here is the poll for the security discussion.  As a
reminder, the purpose of this poll is mainly to see where people's
attitudes are with respect to the various options, so that we can move
the discussion forward towards a conclusion.  If you have any
interest at all in the outcome, please make your voice heard.  I
have CC'd everyone who has participated in the discussion so far.

The poll will not be secret.  You may fill out the poll anonymously,
but if you do, your vote will be given less weight (to avoid ballot
stuffing).  We don't necessarily plan on publishing the individual
poll responses, but we may do so if we think it would be helpful.

Because of the summer holidays, we will keep the poll open for two
weeks; we will tabulate the results Monday, August 20.

The poll can be found here:

http://xen.org/polls/xen_dev_2012_security_process.html

Thank you for your time.

 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 14:25:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:25:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOFY-0002J7-VS; Mon, 06 Aug 2012 14:25:40 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SyOFX-0002J2-6J
	for xen-devel@lists.xensource.com; Mon, 06 Aug 2012 14:25:39 +0000
Received: from [85.158.139.83:58025] by server-2.bemta-5.messagelabs.com id
	96/41-04598-2E3DF105; Mon, 06 Aug 2012 14:25:38 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-16.tower-182.messagelabs.com!1344263137!23155847!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgxNjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3750 invoked from network); 6 Aug 2012 14:25:37 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 14:25:37 -0000
X-IronPort-AV: E=Sophos;i="4.77,720,1336348800"; 
	d="dts'?dtsi'?scan'208";a="13868387"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 14:25:30 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Mon, 6 Aug 2012 15:25:30 +0100
Date: Mon, 6 Aug 2012 15:25:08 +0100
From: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Message-ID: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Content-Type: multipart/mixed;
	boundary="1342847746-1331867984-1344262762=:4645"
Content-ID: <alpine.DEB.2.02.1208061520000.4645@kaball.uk.xensource.com>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linaro-dev@lists.linaro.org" <linaro-dev@lists.linaro.org>,
	Ian Campbell <Ian.Campbell@citrix.com>, "arnd@arndb.de" <arnd@arndb.de>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"catalin.marinas@arm.com" <catalin.marinas@arm.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>, "Tim
	\(Xen.org\)" <tim@xen.org>, "linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: [Xen-devel] [PATCH v2 00/23] Introduce Xen support on ARM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--1342847746-1331867984-1344262762=:4645
Content-Type: text/plain; charset="US-ASCII"
Content-ID: <alpine.DEB.2.02.1208061520001.4645@kaball.uk.xensource.com>

Hi all,
this patch series implements Xen support for ARMv7 with virtualization
extensions.  It allows a Linux guest to boot as dom0 and
as domU on Xen on ARM. PV console, disk and network frontends and
backends are all working correctly.

It has been tested on a Versatile Express Cortex A15 emulator, using the
latest Xen ARM development branch
(git://xenbits.xen.org/people/ianc/xen-unstable.git arm-for-4.3) plus
the "ARM hypercall ABI: 64 bit ready" patch series
(http://marc.info/?l=xen-devel&m=134426267205408), and a simple ad-hoc
tool to build guest domains (marc.info/?l=xen-devel&m=134089788016546).

The patch marked with [HACK] shouldn't be applied and is part of the
series only because it is needed to create domUs.

I am also attaching to this email the dts'es that I am currently using
for dom0 and domU: vexpress-v2p-ca15-tc1.dts (that includes
vexpress-v2m-rs1-rtsm.dtsi) is the dts used for dom0 and it is passed to
Linux by Xen, while vexpress-virt.dts is the dts used for other domUs
and it is appended in binary form to the guest kernel image. I am not
sure where they are supposed to live yet, so I am just attaching them
here so that people can actually try out this series if they want to.

Comments are very welcome!


Changes in v2:
- fix up many comments and commit messages;
- remove the early_printk patches: rely on the emulated serial for now;
- remove the xen_guest_init patch: without any PV early_printk, we don't
  need any early call to xen_guest_init, we can rely on core_initcall
  alone;
- define a HYPERCALL macro for 5-argument hypercall wrappers, even if
  it is unused at the moment;
- use ldm instead of pop in the hypercall wrappers;
- return -ENOSYS rather than -1 from the unimplemented grant_table
  functions;
- remove the pvclock ifdef in the Xen headers;
- remove include linux/types.h from xen/interface/xen.h;
- replace pr_info with pr_debug in xen_guest_init;
- add a new patch to introduce xen_ulong_t and use it to replace all
  the occurrences of unsigned long in the public Xen interface;
- explicitly size all the pointers to 64 bit on ARM, so that the
  hypercall ABI is "64 bit ready";
- clean up xenbus_init;
- make pci.o depend on CONFIG_PCI and acpi.o depend on CONFIG_ACPI;
- mark Xen guest support on ARM as EXPERIMENTAL;
- introduce GRANT_TABLE_PHYSADDR;
- remove unneeded initialization of boot_max_nr_grant_frames;
- add a new patch to clear IRQ_NOAUTOEN and IRQ_NOREQUEST in events.c;
- return -EINVAL from xen_remap_domain_mfn_range if
  auto_translated_physmap;
- retain binary compatibility in xen_add_to_physmap: use a union to
  introduce foreign_domid.



Ian Campbell (1):
      [HACK] xen/arm: implement xen_remap_domain_mfn_range

Stefano Stabellini (22):
      arm: initial Xen support
      xen/arm: hypercalls
      xen/arm: page.h definitions
      xen/arm: sync_bitops
      xen/arm: empty implementation of grant_table arch specific functions
      xen: missing includes
      xen/arm: Xen detection and shared_info page mapping
      xen/arm: Introduce xen_pfn_t for pfn and mfn types
      xen/arm: Introduce xen_ulong_t for unsigned long
      xen/arm: compile and run xenbus
      xen: do not compile manage, balloon, pci, acpi and cpu_hotplug on ARM
      xen/arm: introduce CONFIG_XEN on ARM
      xen/arm: get privilege status
      xen/arm: initialize grant_table on ARM
      xen/arm: receive Xen events on ARM
      xen: clear IRQ_NOAUTOEN and IRQ_NOREQUEST
      xen/arm: implement alloc/free_xenballooned_pages with alloc_pages/kfree
      xen: allow privcmd for HVM guests
      xen/arm: compile blkfront and blkback
      xen/arm: compile netback
      xen: update xen_add_to_physmap interface
      arm/v2m: initialize arch_timers even if v2m_timer is not present

 arch/arm/Kconfig                           |   10 ++
 arch/arm/Makefile                          |    1 +
 arch/arm/include/asm/hypervisor.h          |    6 +
 arch/arm/include/asm/sync_bitops.h         |   27 +++
 arch/arm/include/asm/xen/events.h          |   18 ++
 arch/arm/include/asm/xen/hypercall.h       |   69 ++++++++
 arch/arm/include/asm/xen/hypervisor.h      |   19 +++
 arch/arm/include/asm/xen/interface.h       |   72 +++++++++
 arch/arm/include/asm/xen/page.h            |   79 +++++++++
 arch/arm/mach-vexpress/v2m.c               |   11 +-
 arch/arm/xen/Makefile                      |    1 +
 arch/arm/xen/enlighten.c                   |  237 ++++++++++++++++++++++++++++
 arch/arm/xen/grant-table.c                 |   53 ++++++
 arch/arm/xen/hypercall.S                   |  106 +++++++++++++
 arch/ia64/include/asm/xen/interface.h      |    6 +-
 arch/x86/include/asm/xen/interface.h       |    8 +
 arch/x86/xen/enlighten.c                   |    1 +
 arch/x86/xen/irq.c                         |    1 +
 arch/x86/xen/mmu.c                         |    3 +
 arch/x86/xen/xen-ops.h                     |    1 -
 drivers/block/xen-blkback/blkback.c        |    1 +
 drivers/net/xen-netback/netback.c          |    1 +
 drivers/net/xen-netfront.c                 |    1 +
 drivers/tty/hvc/hvc_xen.c                  |    2 +
 drivers/xen/Makefile                       |   11 +-
 drivers/xen/events.c                       |   18 ++-
 drivers/xen/grant-table.c                  |    1 +
 drivers/xen/privcmd.c                      |   20 +--
 drivers/xen/xenbus/xenbus_comms.c          |    2 +-
 drivers/xen/xenbus/xenbus_probe.c          |   62 +++++---
 drivers/xen/xenbus/xenbus_probe_frontend.c |    1 +
 drivers/xen/xenbus/xenbus_xs.c             |    1 +
 drivers/xen/xenfs/super.c                  |    7 +
 include/xen/events.h                       |    2 +
 include/xen/interface/features.h           |    3 +
 include/xen/interface/grant_table.h        |    4 +-
 include/xen/interface/io/protocols.h       |    3 +
 include/xen/interface/memory.h             |   32 +++--
 include/xen/interface/physdev.h            |    4 +-
 include/xen/interface/platform.h           |    4 +-
 include/xen/interface/version.h            |    2 +-
 include/xen/interface/xen.h                |   13 +-
 include/xen/privcmd.h                      |    3 +-
 include/xen/xen.h                          |    2 +-
 44 files changed, 857 insertions(+), 72 deletions(-)



A branch based on 3.5-rc7 is available here:

git://xenbits.xen.org/people/sstabellini/linux-pvhvm.git 3.5-rc7-arm-2

Cheers,

Stefano
--1342847746-1331867984-1344262762=:4645
Content-Type: text/plain; charset="US-ASCII"; name="vexpress-virt.dts"
Content-Transfer-Encoding: BASE64
Content-ID: <alpine.DEB.2.02.1208061519220.4645@kaball.uk.xensource.com>
Content-Description: 
Content-Disposition: attachment; filename="vexpress-virt.dts"

LyoNCiAqIEFSTSBMdGQuIFZlcnNhdGlsZSBFeHByZXNzDQogKg0KICogQVJN
IEVudmVsb3BlIE1vZGVsIHY3QSAoc2luZ2xlIENQVSkuDQogKi8NCg0KL2R0
cy12MS87DQoNCi9pbmNsdWRlLyAic2tlbGV0b24uZHRzaSINCg0KLyB7DQoJ
bW9kZWwgPSAiVjJQLUFFTXY3QSI7DQoJY29tcGF0aWJsZSA9ICJhcm0sdmV4
cHJlc3MsdjJwLWFlbSx2N2EiLCAiYXJtLHZleHByZXNzLHYycC1hZW0iLCAi
YXJtLHZleHByZXNzIjsNCglpbnRlcnJ1cHQtcGFyZW50ID0gPCZnaWM+Ow0K
DQogICAgICAgIGNob3NlbiB7DQogICAgICAgICAgICAgICAgYm9vdGFyZ3Mg
PSAiZWFybHlwcmludGsgZGVidWcgbG9nbGV2ZWw9OSBjb25zb2xlPWh2YzAg
cm9vdD0vZGV2L3h2ZGEgaW5pdD0vc2Jpbi9pbml0IjsNCiAgICAgICAgfTsN
Cg0KCWNwdXMgew0KCQkjYWRkcmVzcy1jZWxscyA9IDwxPjsNCgkJI3NpemUt
Y2VsbHMgPSA8MD47DQoNCgkJY3B1QDAgew0KCQkJZGV2aWNlX3R5cGUgPSAi
Y3B1IjsNCgkJCWNvbXBhdGlibGUgPSAiYXJtLGNvcnRleC1hMTUiOw0KCQkJ
cmVnID0gPDA+Ow0KCQl9Ow0KCX07DQoNCgltZW1vcnkgew0KCQlkZXZpY2Vf
dHlwZSA9ICJtZW1vcnkiOw0KCQlyZWcgPSA8MHg4MDAwMDAwMCAweDA4MDAw
MDAwPjsNCgl9Ow0KDQoJZ2ljOiBpbnRlcnJ1cHQtY29udHJvbGxlckAyYzAw
MTAwMCB7DQoJCWNvbXBhdGlibGUgPSAiYXJtLGNvcnRleC1hOS1naWMiOw0K
CQkjaW50ZXJydXB0LWNlbGxzID0gPDM+Ow0KCQkjYWRkcmVzcy1jZWxscyA9
IDwwPjsNCgkJaW50ZXJydXB0LWNvbnRyb2xsZXI7DQoJCXJlZyA9IDwweDJj
MDAxMDAwIDB4MTAwMD4sDQoJCSAgICAgIDwweDJjMDAyMDAwIDB4MTAwPjsN
Cgl9Ow0KDQoJdGltZXIgew0KCQljb21wYXRpYmxlID0gImFybSxhcm12Ny10
aW1lciI7DQoJCWludGVycnVwdHMgPSA8MSAxMyAweGYwOD4sDQoJCQkgICAg
IDwxIDE0IDB4ZjA4PiwNCgkJCSAgICAgPDEgMTEgMHhmMDg+LA0KCQkJICAg
ICA8MSAxMCAweGYwOD47DQoJfTsNCg0KCXhlbiB7DQoJCWNvbXBhdGlibGUg
PSAiYXJtLHhlbiI7DQoJCXJlZyA9IDwweGIwMDAwMDAwIDB4MjAwMDA+Ow0K
CQlpbnRlcnJ1cHRzID0gPDEgMTUgMHhmMDg+Ow0KCX07DQoNCgltb3RoZXJi
b2FyZCB7DQoJCWFybSx2Mm0tbWVtb3J5LW1hcCA9ICJyczEiOw0KCQlyYW5n
ZXMgPSA8MCAwIDB4MDgwMDAwMDAgMHgwNDAwMDAwMD4sDQoJCQkgPDEgMCAw
eDE0MDAwMDAwIDB4MDQwMDAwMDA+LA0KCQkJIDwyIDAgMHgxODAwMDAwMCAw
eDA0MDAwMDAwPiwNCgkJCSA8MyAwIDB4MWMwMDAwMDAgMHgwNDAwMDAwMD4s
DQoJCQkgPDQgMCAweDBjMDAwMDAwIDB4MDQwMDAwMDA+LA0KCQkJIDw1IDAg
MHgxMDAwMDAwMCAweDA0MDAwMDAwPjsNCg0KCQlpbnRlcnJ1cHQtbWFwLW1h
c2sgPSA8MCAwIDYzPjsNCgkJaW50ZXJydXB0LW1hcCA9IDwwIDAgIDAgJmdp
YyAwICAwIDQ+LA0KCQkJCTwwIDAgIDEgJmdpYyAwICAxIDQ+LA0KCQkJCTww
IDAgIDIgJmdpYyAwICAyIDQ+LA0KCQkJCTwwIDAgIDMgJmdpYyAwICAzIDQ+
LA0KCQkJCTwwIDAgIDQgJmdpYyAwICA0IDQ+LA0KCQkJCTwwIDAgIDUgJmdp
YyAwICA1IDQ+LA0KCQkJCTwwIDAgIDYgJmdpYyAwICA2IDQ+LA0KCQkJCTww
IDAgIDcgJmdpYyAwICA3IDQ+LA0KCQkJCTwwIDAgIDggJmdpYyAwICA4IDQ+
LA0KCQkJCTwwIDAgIDkgJmdpYyAwICA5IDQ+LA0KCQkJCTwwIDAgMTAgJmdp
YyAwIDEwIDQ+LA0KCQkJCTwwIDAgMTEgJmdpYyAwIDExIDQ+LA0KCQkJCTww
IDAgMTIgJmdpYyAwIDEyIDQ+LA0KCQkJCTwwIDAgMTMgJmdpYyAwIDEzIDQ+
LA0KCQkJCTwwIDAgMTQgJmdpYyAwIDE0IDQ+LA0KCQkJCTwwIDAgMTUgJmdp
YyAwIDE1IDQ+LA0KCQkJCTwwIDAgMTYgJmdpYyAwIDE2IDQ+LA0KCQkJCTww
IDAgMTcgJmdpYyAwIDE3IDQ+LA0KCQkJCTwwIDAgMTggJmdpYyAwIDE4IDQ+
LA0KCQkJCTwwIDAgMTkgJmdpYyAwIDE5IDQ+LA0KCQkJCTwwIDAgMjAgJmdp
YyAwIDIwIDQ+LA0KCQkJCTwwIDAgMjEgJmdpYyAwIDIxIDQ+LA0KCQkJCTww
IDAgMjIgJmdpYyAwIDIyIDQ+LA0KCQkJCTwwIDAgMjMgJmdpYyAwIDIzIDQ+
LA0KCQkJCTwwIDAgMjQgJmdpYyAwIDI0IDQ+LA0KCQkJCTwwIDAgMjUgJmdp
YyAwIDI1IDQ+LA0KCQkJCTwwIDAgMjYgJmdpYyAwIDI2IDQ+LA0KCQkJCTww
IDAgMjcgJmdpYyAwIDI3IDQ+LA0KCQkJCTwwIDAgMjggJmdpYyAwIDI4IDQ+
LA0KCQkJCTwwIDAgMjkgJmdpYyAwIDI5IDQ+LA0KCQkJCTwwIDAgMzAgJmdp
YyAwIDMwIDQ+LA0KCQkJCTwwIDAgMzEgJmdpYyAwIDMxIDQ+LA0KCQkJCTww
IDAgMzIgJmdpYyAwIDMyIDQ+LA0KCQkJCTwwIDAgMzMgJmdpYyAwIDMzIDQ+
LA0KCQkJCTwwIDAgMzQgJmdpYyAwIDM0IDQ+LA0KCQkJCTwwIDAgMzUgJmdp
YyAwIDM1IDQ+LA0KCQkJCTwwIDAgMzYgJmdpYyAwIDM2IDQ+LA0KCQkJCTww
IDAgMzcgJmdpYyAwIDM3IDQ+LA0KCQkJCTwwIDAgMzggJmdpYyAwIDM4IDQ+
LA0KCQkJCTwwIDAgMzkgJmdpYyAwIDM5IDQ+LA0KCQkJCTwwIDAgNDAgJmdp
YyAwIDQwIDQ+LA0KCQkJCTwwIDAgNDEgJmdpYyAwIDQxIDQ+LA0KCQkJCTww
IDAgNDIgJmdpYyAwIDQyIDQ+Ow0KCX07DQp9Ow0K

--1342847746-1331867984-1344262762=:4645
Content-Type: text/plain; charset="US-ASCII"; name="vexpress-v2p-ca15-tc1.dts"
Content-Transfer-Encoding: BASE64
Content-ID: <alpine.DEB.2.02.1208061519221.4645@kaball.uk.xensource.com>
Content-Description: 
Content-Disposition: attachment; filename="vexpress-v2p-ca15-tc1.dts"

LyoNCiAqIEFSTSBMdGQuIFZlcnNhdGlsZSBFeHByZXNzDQogKg0KICogQ29y
ZVRpbGUgRXhwcmVzcyBBMTV4MiAodmVyc2lvbiB3aXRoIFRlc3QgQ2hpcCAx
KQ0KICogQ29ydGV4LUExNSBNUENvcmUgKFYyUC1DQTE1KQ0KICoNCiAqIEhC
SS0wMjM3QQ0KICovDQoNCi9kdHMtdjEvOw0KDQovIHsNCgltb2RlbCA9ICJW
MlAtQ0ExNSI7DQoJYXJtLGhiaSA9IDwweDIzNz47DQoJY29tcGF0aWJsZSA9
ICJhcm0sdmV4cHJlc3MsdjJwLWNhMTUsdGMxIiwgImFybSx2ZXhwcmVzcyx2
MnAtY2ExNSIsICJhcm0sdmV4cHJlc3MiOw0KDQoJI2FkZHJlc3MtY2VsbHMg
PSA8MT47DQoJI3NpemUtY2VsbHMgPSA8MT47DQoNCglpbnRlcnJ1cHQtcGFy
ZW50ID0gPCZnaWM+Ow0KDQoJY2hvc2VuIHsNCiAgICAgICAgICAgICAgICAg
Ym9vdGFyZ3MgPSAiZG9tMF9tZW09MTI4TSI7DQogICAgICAgICAgICAgICAg
IHhlbixkb20wLWJvb3RhcmdzID0gImVhcmx5cHJpbnRrIGNvbnNvbGU9dHR5
QU1BMSByb290PS9kZXYvbW1jYmxrMCBkZWJ1ZyBydyI7DQoJfTsNCg0KDQoJ
YWxpYXNlcyB7DQoJCXNlcmlhbDAgPSAmdjJtX3NlcmlhbDA7DQoJCXNlcmlh
bDEgPSAmdjJtX3NlcmlhbDE7DQoJCXNlcmlhbDIgPSAmdjJtX3NlcmlhbDI7
DQoJCXNlcmlhbDMgPSAmdjJtX3NlcmlhbDM7DQoJCWkyYzAgPSAmdjJtX2ky
Y19kdmk7DQoJCWkyYzEgPSAmdjJtX2kyY19wY2llOw0KCX07DQoNCgljcHVz
IHsNCgkJI2FkZHJlc3MtY2VsbHMgPSA8MT47DQoJCSNzaXplLWNlbGxzID0g
PDA+Ow0KDQoJCWNwdUAwIHsNCgkJCWRldmljZV90eXBlID0gImNwdSI7DQoJ
CQljb21wYXRpYmxlID0gImFybSxjb3J0ZXgtYTE1IjsNCgkJCXJlZyA9IDww
PjsNCgkJfTsNCgl9Ow0KDQoJbWVtb3J5IHsNCgkJZGV2aWNlX3R5cGUgPSAi
bWVtb3J5IjsNCgkJcmVnID0gPDB4ODAwMDAwMDAgMHg4MDAwMDAwMD47DQoJ
fTsNCg0KCWdpYzogaW50ZXJydXB0LWNvbnRyb2xsZXJAMmMwMDEwMDAgew0K
CQljb21wYXRpYmxlID0gImFybSxjb3J0ZXgtYTktZ2ljIjsNCgkJI2ludGVy
cnVwdC1jZWxscyA9IDwzPjsNCgkJI2FkZHJlc3MtY2VsbHMgPSA8MD47DQoJ
CWludGVycnVwdC1jb250cm9sbGVyOw0KCQlyZWcgPSA8MHgyYzAwMTAwMCAw
eDEwMDA+LA0KCQkgICAgICA8MHgyYzAwMjAwMCAweDEwMD47DQoJfTsNCg0K
CXBtdSB7DQoJCWNvbXBhdGlibGUgPSAiYXJtLGNvcnRleC1hOS1wbXUiOw0K
CQlpbnRlcnJ1cHRzID0gPDAgNjggND4sDQoJCQkgICAgIDwwIDY5IDQ+Ow0K
CX07DQoNCgl4ZW4gew0KCQljb21wYXRpYmxlID0gImFybSx4ZW4iOw0KCQly
ZWcgPSA8MHhiMDAwMDAwMCAweDIwMDAwPjsNCgkJaW50ZXJydXB0cyA9IDwx
IDE1IDB4ZjA4PjsNCgl9Ow0KDQoJbW90aGVyYm9hcmQgew0KCQlyYW5nZXMg
PSA8MCAwIDB4MDgwMDAwMDAgMHgwNDAwMDAwMD4sDQoJCQkgPDEgMCAweDE0
MDAwMDAwIDB4MDQwMDAwMDA+LA0KCQkJIDwyIDAgMHgxODAwMDAwMCAweDA0
MDAwMDAwPiwNCgkJCSA8MyAwIDB4MWMwMDAwMDAgMHgwNDAwMDAwMD4sDQoJ
CQkgPDQgMCAweDBjMDAwMDAwIDB4MDQwMDAwMDA+LA0KCQkJIDw1IDAgMHgx
MDAwMDAwMCAweDA0MDAwMDAwPjsNCg0KCQlpbnRlcnJ1cHQtbWFwLW1hc2sg
PSA8MCAwIDYzPjsNCgkJaW50ZXJydXB0LW1hcCA9IDwwIDAgIDAgJmdpYyAw
ICAwIDQ+LA0KCQkJCTwwIDAgIDEgJmdpYyAwICAxIDQ+LA0KCQkJCTwwIDAg
IDIgJmdpYyAwICAyIDQ+LA0KCQkJCTwwIDAgIDMgJmdpYyAwICAzIDQ+LA0K
CQkJCTwwIDAgIDQgJmdpYyAwICA0IDQ+LA0KCQkJCTwwIDAgIDUgJmdpYyAw
ICA1IDQ+LA0KCQkJCTwwIDAgIDYgJmdpYyAwICA2IDQ+LA0KCQkJCTwwIDAg
IDcgJmdpYyAwICA3IDQ+LA0KCQkJCTwwIDAgIDggJmdpYyAwICA4IDQ+LA0K
CQkJCTwwIDAgIDkgJmdpYyAwICA5IDQ+LA0KCQkJCTwwIDAgMTAgJmdpYyAw
IDEwIDQ+LA0KCQkJCTwwIDAgMTEgJmdpYyAwIDExIDQ+LA0KCQkJCTwwIDAg
MTIgJmdpYyAwIDEyIDQ+LA0KCQkJCTwwIDAgMTMgJmdpYyAwIDEzIDQ+LA0K
CQkJCTwwIDAgMTQgJmdpYyAwIDE0IDQ+LA0KCQkJCTwwIDAgMTUgJmdpYyAw
IDE1IDQ+LA0KCQkJCTwwIDAgMTYgJmdpYyAwIDE2IDQ+LA0KCQkJCTwwIDAg
MTcgJmdpYyAwIDE3IDQ+LA0KCQkJCTwwIDAgMTggJmdpYyAwIDE4IDQ+LA0K
CQkJCTwwIDAgMTkgJmdpYyAwIDE5IDQ+LA0KCQkJCTwwIDAgMjAgJmdpYyAw
IDIwIDQ+LA0KCQkJCTwwIDAgMjEgJmdpYyAwIDIxIDQ+LA0KCQkJCTwwIDAg
MjIgJmdpYyAwIDIyIDQ+LA0KCQkJCTwwIDAgMjMgJmdpYyAwIDIzIDQ+LA0K
CQkJCTwwIDAgMjQgJmdpYyAwIDI0IDQ+LA0KCQkJCTwwIDAgMjUgJmdpYyAw
IDI1IDQ+LA0KCQkJCTwwIDAgMjYgJmdpYyAwIDI2IDQ+LA0KCQkJCTwwIDAg
MjcgJmdpYyAwIDI3IDQ+LA0KCQkJCTwwIDAgMjggJmdpYyAwIDI4IDQ+LA0K
CQkJCTwwIDAgMjkgJmdpYyAwIDI5IDQ+LA0KCQkJCTwwIDAgMzAgJmdpYyAw
IDMwIDQ+LA0KCQkJCTwwIDAgMzEgJmdpYyAwIDMxIDQ+LA0KCQkJCTwwIDAg
MzIgJmdpYyAwIDMyIDQ+LA0KCQkJCTwwIDAgMzMgJmdpYyAwIDMzIDQ+LA0K
CQkJCTwwIDAgMzQgJmdpYyAwIDM0IDQ+LA0KCQkJCTwwIDAgMzUgJmdpYyAw
IDM1IDQ+LA0KCQkJCTwwIDAgMzYgJmdpYyAwIDM2IDQ+LA0KCQkJCTwwIDAg
MzcgJmdpYyAwIDM3IDQ+LA0KCQkJCTwwIDAgMzggJmdpYyAwIDM4IDQ+LA0K
CQkJCTwwIDAgMzkgJmdpYyAwIDM5IDQ+LA0KCQkJCTwwIDAgNDAgJmdpYyAw
IDQwIDQ+LA0KCQkJCTwwIDAgNDEgJmdpYyAwIDQxIDQ+LA0KCQkJCTwwIDAg
NDIgJmdpYyAwIDQyIDQ+Ow0KCX07DQp9Ow0KDQovaW5jbHVkZS8gInZleHBy
ZXNzLXYybS1yczEtcnRzbS5kdHNpIg0K

--1342847746-1331867984-1344262762=:4645
Content-Type: text/plain; charset="US-ASCII";
	name="vexpress-v2m-rs1-rtsm.dtsi"
Content-Transfer-Encoding: BASE64
Content-ID: <alpine.DEB.2.02.1208061519222.4645@kaball.uk.xensource.com>
Content-Description: 
Content-Disposition: attachment; filename="vexpress-v2m-rs1-rtsm.dtsi"

LyoNCiAqIEFSTSBMdGQuIFZlcnNhdGlsZSBFeHByZXNzDQogKg0KICogTW90
aGVyYm9hcmQgRXhwcmVzcyB1QVRYDQogKiBWMk0tUDENCiAqDQogKiBIQkkt
MDE5MEQNCiAqDQogKiBSUzEgbWVtb3J5IG1hcCAoIkFSTSBDb3J0ZXgtQSBT
ZXJpZXMgbWVtb3J5IG1hcCIgaW4gdGhlIGJvYXJkJ3MNCiAqIFRlY2huaWNh
bCBSZWZlcmVuY2UgTWFudWFsKQ0KICoNCiAqIFdBUk5JTkchIFRoZSBoYXJk
d2FyZSBkZXNjcmliZWQgaW4gdGhpcyBmaWxlIGlzIGluZGVwZW5kZW50IGZy
b20gdGhlDQogKiBvcmlnaW5hbCB2YXJpYW50ICh2ZXhwcmVzcy12Mm0uZHRz
aSksIGJ1dCB0aGVyZSBpcyBhIHN0cm9uZw0KICogY29ycmVzcG9uZGVuY2Ug
YmV0d2VlbiB0aGUgdHdvIGNvbmZpZ3VyYXRpb25zLg0KICoNCiAqIFRBS0Ug
Q0FSRSBXSEVOIE1BSU5UQUlOSU5HIFRISVMgRklMRSBUTyBQUk9QQUdBVEUg
QU5ZIFJFTEVWQU5UDQogKiBDSEFOR0VTIFRPIHZleHByZXNzLXYybS5kdHNp
IQ0KICovDQoNCi8gew0KCWFsaWFzZXMgew0KCQlhcm0sdjJtX3RpbWVyID0g
JnYybV90aW1lcjAxOw0KCX07DQoNCgltb3RoZXJib2FyZCB7DQoJCWNvbXBh
dGlibGUgPSAic2ltcGxlLWJ1cyI7DQoJCWFybSx2Mm0tbWVtb3J5LW1hcCA9
ICJyczEiOw0KCQkjYWRkcmVzcy1jZWxscyA9IDwyPjsgLyogU01CIGNoaXBz
ZWxlY3QgbnVtYmVyIGFuZCBvZmZzZXQgKi8NCgkJI3NpemUtY2VsbHMgPSA8
MT47DQoJCSNpbnRlcnJ1cHQtY2VsbHMgPSA8MT47DQoNCgkJZmxhc2hAMCww
MDAwMDAwMCB7DQoJCQljb21wYXRpYmxlID0gImFybSx2ZXhwcmVzcy1mbGFz
aCIsICJjZmktZmxhc2giOw0KCQkJcmVnID0gPDAgMHgwMDAwMDAwMCAweDA0
MDAwMDAwPiwNCgkJCSAgICAgIDw0IDB4MDAwMDAwMDAgMHgwNDAwMDAwMD47
DQoJCQliYW5rLXdpZHRoID0gPDQ+Ow0KCQl9Ow0KDQoJCXBzcmFtQDEsMDAw
MDAwMDAgew0KCQkJY29tcGF0aWJsZSA9ICJhcm0sdmV4cHJlc3MtcHNyYW0i
LCAibXRkLXJhbSI7DQoJCQlyZWcgPSA8MSAweDAwMDAwMDAwIDB4MDIwMDAw
MDA+Ow0KCQkJYmFuay13aWR0aCA9IDw0PjsNCgkJfTsNCg0KCQl2cmFtQDIs
MDAwMDAwMDAgew0KCQkJY29tcGF0aWJsZSA9ICJhcm0sdmV4cHJlc3MtdnJh
bSI7DQoJCQlyZWcgPSA8MiAweDAwMDAwMDAwIDB4MDA4MDAwMDA+Ow0KCQl9
Ow0KDQoJCWV0aGVybmV0QDIsMDIwMDAwMDAgew0KCQkJY29tcGF0aWJsZSA9
ICJzbXNjLGxhbjkxYzExMSI7DQoJCQlyZWcgPSA8MiAweDAyMDAwMDAwIDB4
MTAwMDA+Ow0KCQkJaW50ZXJydXB0cyA9IDwxNT47DQoJCX07DQoJDQoJCXVz
YkAyLDAzMDAwMDAwIHsNCgkJCWNvbXBhdGlibGUgPSAibnhwLHVzYi1pc3Ax
NzYxIjsNCgkJCXJlZyA9IDwyIDB4MDMwMDAwMDAgMHgyMDAwMD47DQoJCQlp
bnRlcnJ1cHRzID0gPDE2PjsNCgkJCXBvcnQxLW90ZzsNCgkJfTsNCg0KCQlp
b2ZwZ2FAMywwMDAwMDAwMCB7DQoJCQljb21wYXRpYmxlID0gImFybSxhbWJh
LWJ1cyIsICJzaW1wbGUtYnVzIjsNCgkJCSNhZGRyZXNzLWNlbGxzID0gPDE+
Ow0KCQkJI3NpemUtY2VsbHMgPSA8MT47DQoJCQlyYW5nZXMgPSA8MCAzIDAg
MHgyMDAwMDA+Ow0KDQoJCQlzeXNyZWdAMDEwMDAwIHsNCgkJCQljb21wYXRp
YmxlID0gImFybSx2ZXhwcmVzcy1zeXNyZWciOw0KCQkJCXJlZyA9IDwweDAx
MDAwMCAweDEwMDA+Ow0KCQkJfTsNCg0KCQkJc3lzY3RsQDAyMDAwMCB7DQoJ
CQkJY29tcGF0aWJsZSA9ICJhcm0sc3A4MTAiLCAiYXJtLHByaW1lY2VsbCI7
DQoJCQkJcmVnID0gPDB4MDIwMDAwIDB4MTAwMD47DQoJCQl9Ow0KDQoJCQkv
KiBQQ0ktRSBJMkMgYnVzICovDQoJCQl2Mm1faTJjX3BjaWU6IGkyY0AwMzAw
MDAgew0KCQkJCWNvbXBhdGlibGUgPSAiYXJtLHZlcnNhdGlsZS1pMmMiOw0K
CQkJCXJlZyA9IDwweDAzMDAwMCAweDEwMDA+Ow0KDQoJCQkJI2FkZHJlc3Mt
Y2VsbHMgPSA8MT47DQoJCQkJI3NpemUtY2VsbHMgPSA8MD47DQoNCgkJCQlw
Y2llLXN3aXRjaEA2MCB7DQoJCQkJCWNvbXBhdGlibGUgPSAiaWR0LDg5aHBl
czMyaDgiOw0KCQkJCQlyZWcgPSA8MHg2MD47DQoJCQkJfTsNCgkJCX07DQoN
CgkJCWFhY2lAMDQwMDAwIHsNCgkJCQljb21wYXRpYmxlID0gImFybSxwbDA0
MSIsICJhcm0scHJpbWVjZWxsIjsNCgkJCQlyZWcgPSA8MHgwNDAwMDAgMHgx
MDAwPjsNCgkJCQlpbnRlcnJ1cHRzID0gPDExPjsNCgkJCX07DQoNCgkJCW1t
Y2lAMDUwMDAwIHsNCgkJCQljb21wYXRpYmxlID0gImFybSxwbDE4MCIsICJh
cm0scHJpbWVjZWxsIjsNCgkJCQlyZWcgPSA8MHgwNTAwMDAgMHgxMDAwPjsN
CgkJCQlpbnRlcnJ1cHRzID0gPDkgMTA+Ow0KCQkJfTsNCg0KCQkJa21pQDA2
MDAwMCB7DQoJCQkJY29tcGF0aWJsZSA9ICJhcm0scGwwNTAiLCAiYXJtLHBy
aW1lY2VsbCI7DQoJCQkJcmVnID0gPDB4MDYwMDAwIDB4MTAwMD47DQoJCQkJ
aW50ZXJydXB0cyA9IDwxMj47DQoJCQl9Ow0KDQoJCQlrbWlAMDcwMDAwIHsN
CgkJCQljb21wYXRpYmxlID0gImFybSxwbDA1MCIsICJhcm0scHJpbWVjZWxs
IjsNCgkJCQlyZWcgPSA8MHgwNzAwMDAgMHgxMDAwPjsNCgkJCQlpbnRlcnJ1
cHRzID0gPDEzPjsNCgkJCX07DQoNCgkJCXYybV9zZXJpYWwwOiB1YXJ0QDA5
MDAwMCB7DQoJCQkJY29tcGF0aWJsZSA9ICJhcm0scGwwMTEiLCAiYXJtLHBy
aW1lY2VsbCI7DQoJCQkJcmVnID0gPDB4MDkwMDAwIDB4MTAwMD47DQoJCQkJ
aW50ZXJydXB0cyA9IDw1PjsNCgkJCX07DQoNCgkJCXYybV9zZXJpYWwxOiB1
YXJ0QDBhMDAwMCB7DQoJCQkJY29tcGF0aWJsZSA9ICJhcm0scGwwMTEiLCAi
YXJtLHByaW1lY2VsbCI7DQoJCQkJcmVnID0gPDB4MGEwMDAwIDB4MTAwMD47
DQoJCQkJaW50ZXJydXB0cyA9IDw2PjsNCgkJCX07DQoNCgkJCXYybV9zZXJp
YWwyOiB1YXJ0QDBiMDAwMCB7DQoJCQkJY29tcGF0aWJsZSA9ICJhcm0scGww
MTEiLCAiYXJtLHByaW1lY2VsbCI7DQoJCQkJcmVnID0gPDB4MGIwMDAwIDB4
MTAwMD47DQoJCQkJaW50ZXJydXB0cyA9IDw3PjsNCgkJCX07DQoNCgkJCXYy
bV9zZXJpYWwzOiB1YXJ0QDBjMDAwMCB7DQoJCQkJY29tcGF0aWJsZSA9ICJh
cm0scGwwMTEiLCAiYXJtLHByaW1lY2VsbCI7DQoJCQkJcmVnID0gPDB4MGMw
MDAwIDB4MTAwMD47DQoJCQkJaW50ZXJydXB0cyA9IDw4PjsNCgkJCX07DQoN
CgkJCXdkdEAwZjAwMDAgew0KCQkJCWNvbXBhdGlibGUgPSAiYXJtLHNwODA1
IiwgImFybSxwcmltZWNlbGwiOw0KCQkJCXJlZyA9IDwweDBmMDAwMCAweDEw
MDA+Ow0KCQkJCWludGVycnVwdHMgPSA8MD47DQoJCQl9Ow0KDQoJCQl2Mm1f
dGltZXIwMTogdGltZXJAMTEwMDAwIHsNCgkJCQljb21wYXRpYmxlID0gImFy
bSxzcDgwNCIsICJhcm0scHJpbWVjZWxsIjsNCgkJCQlyZWcgPSA8MHgxMTAw
MDAgMHgxMDAwPjsNCgkJCQlpbnRlcnJ1cHRzID0gPDI+Ow0KCQkJfTsNCg0K
CQkJdjJtX3RpbWVyMjM6IHRpbWVyQDEyMDAwMCB7DQoJCQkJY29tcGF0aWJs
ZSA9ICJhcm0sc3A4MDQiLCAiYXJtLHByaW1lY2VsbCI7DQoJCQkJcmVnID0g
PDB4MTIwMDAwIDB4MTAwMD47DQoJCQl9Ow0KDQoJCQkvKiBEVkkgSTJDIGJ1
cyAqLw0KCQkJdjJtX2kyY19kdmk6IGkyY0AxNjAwMDAgew0KCQkJCWNvbXBh
dGlibGUgPSAiYXJtLHZlcnNhdGlsZS1pMmMiOw0KCQkJCXJlZyA9IDwweDE2
MDAwMCAweDEwMDA+Ow0KDQoJCQkJI2FkZHJlc3MtY2VsbHMgPSA8MT47DQoJ
CQkJI3NpemUtY2VsbHMgPSA8MD47DQoNCgkJCQlkdmktdHJhbnNtaXR0ZXJA
Mzkgew0KCQkJCQljb21wYXRpYmxlID0gInNpbCxzaWk5MDIyLXRwaSIsICJz
aWwsc2lpOTAyMiI7DQoJCQkJCXJlZyA9IDwweDM5PjsNCgkJCQl9Ow0KDQoJ
CQkJZHZpLXRyYW5zbWl0dGVyQDYwIHsNCgkJCQkJY29tcGF0aWJsZSA9ICJz
aWwsc2lpOTAyMi1jcGkiLCAic2lsLHNpaTkwMjIiOw0KCQkJCQlyZWcgPSA8
MHg2MD47DQoJCQkJfTsNCgkJCX07DQoNCgkJCXJ0Y0AxNzAwMDAgew0KCQkJ
CWNvbXBhdGlibGUgPSAiYXJtLHBsMDMxIiwgImFybSxwcmltZWNlbGwiOw0K
CQkJCXJlZyA9IDwweDE3MDAwMCAweDEwMDA+Ow0KCQkJCWludGVycnVwdHMg
PSA8ND47DQoJCQl9Ow0KDQoJCQljb21wYWN0LWZsYXNoQDFhMDAwMCB7DQoJ
CQkJY29tcGF0aWJsZSA9ICJhcm0sdmV4cHJlc3MtY2YiLCAiYXRhLWdlbmVy
aWMiOw0KCQkJCXJlZyA9IDwweDFhMDAwMCAweDEwMA0KCQkJCSAgICAgICAw
eDFhMDEwMCAweGYwMD47DQoJCQkJcmVnLXNoaWZ0ID0gPDI+Ow0KCQkJfTsN
Cg0KCQkJY2xjZEAxZjAwMDAgew0KCQkJCWNvbXBhdGlibGUgPSAiYXJtLHBs
MTExIiwgImFybSxwcmltZWNlbGwiOw0KCQkJCXJlZyA9IDwweDFmMDAwMCAw
eDEwMDA+Ow0KCQkJCWludGVycnVwdHMgPSA8MTQ+Ow0KCQkJfTsNCgkJfTsN
Cgl9Ow0KfTsNCg==

--1342847746-1331867984-1344262762=:4645
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--1342847746-1331867984-1344262762=:4645--


From xen-devel-bounces@lists.xen.org Mon Aug 06 14:25:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:25:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOFY-0002J7-VS; Mon, 06 Aug 2012 14:25:40 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SyOFX-0002J2-6J
	for xen-devel@lists.xensource.com; Mon, 06 Aug 2012 14:25:39 +0000
Received: from [85.158.139.83:58025] by server-2.bemta-5.messagelabs.com id
	96/41-04598-2E3DF105; Mon, 06 Aug 2012 14:25:38 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-16.tower-182.messagelabs.com!1344263137!23155847!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgxNjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3750 invoked from network); 6 Aug 2012 14:25:37 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 14:25:37 -0000
X-IronPort-AV: E=Sophos;i="4.77,720,1336348800"; 
	d="dts'?dtsi'?scan'208";a="13868387"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 14:25:30 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Mon, 6 Aug 2012 15:25:30 +0100
Date: Mon, 6 Aug 2012 15:25:08 +0100
From: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Message-ID: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Content-Type: multipart/mixed;
	boundary="1342847746-1331867984-1344262762=:4645"
Content-ID: <alpine.DEB.2.02.1208061520000.4645@kaball.uk.xensource.com>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linaro-dev@lists.linaro.org" <linaro-dev@lists.linaro.org>,
	Ian Campbell <Ian.Campbell@citrix.com>, "arnd@arndb.de" <arnd@arndb.de>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"catalin.marinas@arm.com" <catalin.marinas@arm.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>, "Tim
	\(Xen.org\)" <tim@xen.org>, "linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: [Xen-devel] [PATCH v2 00/23] Introduce Xen support on ARM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--1342847746-1331867984-1344262762=:4645
Content-Type: text/plain; charset="US-ASCII"
Content-ID: <alpine.DEB.2.02.1208061520001.4645@kaball.uk.xensource.com>

Hi all,
this patch series implements Xen support for ARMv7 with virtualization
extensions.  It allows a Linux guest to boot as dom0 and
as domU on Xen on ARM. PV console, disk and network frontends and
backends are all working correctly.

It has been tested on a Versatile Express Cortex A15 emulator, using the
latest Xen ARM development branch
(git://xenbits.xen.org/people/ianc/xen-unstable.git arm-for-4.3) plus
the "ARM hypercall ABI: 64 bit ready" patch series
(http://marc.info/?l=xen-devel&m=134426267205408), and a simple ad-hoc
tool to build guest domains (http://marc.info/?l=xen-devel&m=134089788016546).

The patch marked with [HACK] shouldn't be applied and is part of the
series only because it is needed to create domUs.

I am also attaching to this email the dts files that I am currently
using for dom0 and domU: vexpress-v2p-ca15-tc1.dts (which includes
vexpress-v2m-rs1-rtsm.dtsi) is the dts used for dom0 and is passed to
Linux by Xen, while vexpress-virt.dts is the dts used for other domUs
and is appended in binary form to the guest kernel image. I am not
sure where they are supposed to live yet, so I am just attaching them
here so that people can actually try out this series if they want to.

Comments are very welcome!


Changes in v2:
- fix up many comments and commit messages;
- remove the early_printk patches: rely on the emulated serial for now;
- remove the xen_guest_init patch: without any PV early_printk, we don't
  need any early call to xen_guest_init, so we can rely on core_initcall
  alone;
- define a HYPERCALL macro for 5-argument hypercall wrappers, even if it
  is unused at the moment;
- use ldm instead of pop in the hypercall wrappers;
- return -ENOSYS rather than -1 from the unimplemented grant_table
  functions;
- remove the pvclock ifdef in the Xen headers;
- remove include linux/types.h from xen/interface/xen.h;
- replace pr_info with pr_debug in xen_guest_init;
- add a new patch to introduce xen_ulong_t and use it to replace all
  the occurrences of unsigned long in the public Xen interface;
- explicitly size all the pointers to 64 bit on ARM, so that the
  hypercall ABI is "64 bit ready";
- clean up xenbus_init;
- make pci.o depend on CONFIG_PCI and acpi.o depend on CONFIG_ACPI;
- mark Xen guest support on ARM as EXPERIMENTAL;
- introduce GRANT_TABLE_PHYSADDR;
- remove unneeded initialization of boot_max_nr_grant_frames;
- add a new patch to clear IRQ_NOAUTOEN and IRQ_NOREQUEST in events.c;
- return -EINVAL from xen_remap_domain_mfn_range if
  auto_translated_physmap;
- retain binary compatibility in xen_add_to_physmap: use a union to
  introduce foreign_domid.



Ian Campbell (1):
      [HACK] xen/arm: implement xen_remap_domain_mfn_range

Stefano Stabellini (22):
      arm: initial Xen support
      xen/arm: hypercalls
      xen/arm: page.h definitions
      xen/arm: sync_bitops
      xen/arm: empty implementation of grant_table arch specific functions
      xen: missing includes
      xen/arm: Xen detection and shared_info page mapping
      xen/arm: Introduce xen_pfn_t for pfn and mfn types
      xen/arm: Introduce xen_ulong_t for unsigned long
      xen/arm: compile and run xenbus
      xen: do not compile manage, balloon, pci, acpi and cpu_hotplug on ARM
      xen/arm: introduce CONFIG_XEN on ARM
      xen/arm: get privilege status
      xen/arm: initialize grant_table on ARM
      xen/arm: receive Xen events on ARM
      xen: clear IRQ_NOAUTOEN and IRQ_NOREQUEST
      xen/arm: implement alloc/free_xenballooned_pages with alloc_pages/kfree
      xen: allow privcmd for HVM guests
      xen/arm: compile blkfront and blkback
      xen/arm: compile netback
      xen: update xen_add_to_physmap interface
      arm/v2m: initialize arch_timers even if v2m_timer is not present

 arch/arm/Kconfig                           |   10 ++
 arch/arm/Makefile                          |    1 +
 arch/arm/include/asm/hypervisor.h          |    6 +
 arch/arm/include/asm/sync_bitops.h         |   27 +++
 arch/arm/include/asm/xen/events.h          |   18 ++
 arch/arm/include/asm/xen/hypercall.h       |   69 ++++++++
 arch/arm/include/asm/xen/hypervisor.h      |   19 +++
 arch/arm/include/asm/xen/interface.h       |   72 +++++++++
 arch/arm/include/asm/xen/page.h            |   79 +++++++++
 arch/arm/mach-vexpress/v2m.c               |   11 +-
 arch/arm/xen/Makefile                      |    1 +
 arch/arm/xen/enlighten.c                   |  237 ++++++++++++++++++++++++++++
 arch/arm/xen/grant-table.c                 |   53 ++++++
 arch/arm/xen/hypercall.S                   |  106 +++++++++++++
 arch/ia64/include/asm/xen/interface.h      |    6 +-
 arch/x86/include/asm/xen/interface.h       |    8 +
 arch/x86/xen/enlighten.c                   |    1 +
 arch/x86/xen/irq.c                         |    1 +
 arch/x86/xen/mmu.c                         |    3 +
 arch/x86/xen/xen-ops.h                     |    1 -
 drivers/block/xen-blkback/blkback.c        |    1 +
 drivers/net/xen-netback/netback.c          |    1 +
 drivers/net/xen-netfront.c                 |    1 +
 drivers/tty/hvc/hvc_xen.c                  |    2 +
 drivers/xen/Makefile                       |   11 +-
 drivers/xen/events.c                       |   18 ++-
 drivers/xen/grant-table.c                  |    1 +
 drivers/xen/privcmd.c                      |   20 +--
 drivers/xen/xenbus/xenbus_comms.c          |    2 +-
 drivers/xen/xenbus/xenbus_probe.c          |   62 +++++---
 drivers/xen/xenbus/xenbus_probe_frontend.c |    1 +
 drivers/xen/xenbus/xenbus_xs.c             |    1 +
 drivers/xen/xenfs/super.c                  |    7 +
 include/xen/events.h                       |    2 +
 include/xen/interface/features.h           |    3 +
 include/xen/interface/grant_table.h        |    4 +-
 include/xen/interface/io/protocols.h       |    3 +
 include/xen/interface/memory.h             |   32 +++--
 include/xen/interface/physdev.h            |    4 +-
 include/xen/interface/platform.h           |    4 +-
 include/xen/interface/version.h            |    2 +-
 include/xen/interface/xen.h                |   13 +-
 include/xen/privcmd.h                      |    3 +-
 include/xen/xen.h                          |    2 +-
 44 files changed, 857 insertions(+), 72 deletions(-)



A branch based on 3.5-rc7 is available here:

git://xenbits.xen.org/people/sstabellini/linux-pvhvm.git 3.5-rc7-arm-2

Cheers,

Stefano
--1342847746-1331867984-1344262762=:4645
Content-Type: text/plain; charset="US-ASCII"; name="vexpress-virt.dts"
Content-Transfer-Encoding: BASE64
Content-ID: <alpine.DEB.2.02.1208061519220.4645@kaball.uk.xensource.com>
Content-Description: 
Content-Disposition: attachment; filename="vexpress-virt.dts"

LyoNCiAqIEFSTSBMdGQuIFZlcnNhdGlsZSBFeHByZXNzDQogKg0KICogQVJN
IEVudmVsb3BlIE1vZGVsIHY3QSAoc2luZ2xlIENQVSkuDQogKi8NCg0KL2R0
cy12MS87DQoNCi9pbmNsdWRlLyAic2tlbGV0b24uZHRzaSINCg0KLyB7DQoJ
bW9kZWwgPSAiVjJQLUFFTXY3QSI7DQoJY29tcGF0aWJsZSA9ICJhcm0sdmV4
cHJlc3MsdjJwLWFlbSx2N2EiLCAiYXJtLHZleHByZXNzLHYycC1hZW0iLCAi
YXJtLHZleHByZXNzIjsNCglpbnRlcnJ1cHQtcGFyZW50ID0gPCZnaWM+Ow0K
DQogICAgICAgIGNob3NlbiB7DQogICAgICAgICAgICAgICAgYm9vdGFyZ3Mg
PSAiZWFybHlwcmludGsgZGVidWcgbG9nbGV2ZWw9OSBjb25zb2xlPWh2YzAg
cm9vdD0vZGV2L3h2ZGEgaW5pdD0vc2Jpbi9pbml0IjsNCiAgICAgICAgfTsN
Cg0KCWNwdXMgew0KCQkjYWRkcmVzcy1jZWxscyA9IDwxPjsNCgkJI3NpemUt
Y2VsbHMgPSA8MD47DQoNCgkJY3B1QDAgew0KCQkJZGV2aWNlX3R5cGUgPSAi
Y3B1IjsNCgkJCWNvbXBhdGlibGUgPSAiYXJtLGNvcnRleC1hMTUiOw0KCQkJ
cmVnID0gPDA+Ow0KCQl9Ow0KCX07DQoNCgltZW1vcnkgew0KCQlkZXZpY2Vf
dHlwZSA9ICJtZW1vcnkiOw0KCQlyZWcgPSA8MHg4MDAwMDAwMCAweDA4MDAw
MDAwPjsNCgl9Ow0KDQoJZ2ljOiBpbnRlcnJ1cHQtY29udHJvbGxlckAyYzAw
MTAwMCB7DQoJCWNvbXBhdGlibGUgPSAiYXJtLGNvcnRleC1hOS1naWMiOw0K
CQkjaW50ZXJydXB0LWNlbGxzID0gPDM+Ow0KCQkjYWRkcmVzcy1jZWxscyA9
IDwwPjsNCgkJaW50ZXJydXB0LWNvbnRyb2xsZXI7DQoJCXJlZyA9IDwweDJj
MDAxMDAwIDB4MTAwMD4sDQoJCSAgICAgIDwweDJjMDAyMDAwIDB4MTAwPjsN
Cgl9Ow0KDQoJdGltZXIgew0KCQljb21wYXRpYmxlID0gImFybSxhcm12Ny10
aW1lciI7DQoJCWludGVycnVwdHMgPSA8MSAxMyAweGYwOD4sDQoJCQkgICAg
IDwxIDE0IDB4ZjA4PiwNCgkJCSAgICAgPDEgMTEgMHhmMDg+LA0KCQkJICAg
ICA8MSAxMCAweGYwOD47DQoJfTsNCg0KCXhlbiB7DQoJCWNvbXBhdGlibGUg
PSAiYXJtLHhlbiI7DQoJCXJlZyA9IDwweGIwMDAwMDAwIDB4MjAwMDA+Ow0K
CQlpbnRlcnJ1cHRzID0gPDEgMTUgMHhmMDg+Ow0KCX07DQoNCgltb3RoZXJi
b2FyZCB7DQoJCWFybSx2Mm0tbWVtb3J5LW1hcCA9ICJyczEiOw0KCQlyYW5n
ZXMgPSA8MCAwIDB4MDgwMDAwMDAgMHgwNDAwMDAwMD4sDQoJCQkgPDEgMCAw
eDE0MDAwMDAwIDB4MDQwMDAwMDA+LA0KCQkJIDwyIDAgMHgxODAwMDAwMCAw
eDA0MDAwMDAwPiwNCgkJCSA8MyAwIDB4MWMwMDAwMDAgMHgwNDAwMDAwMD4s
DQoJCQkgPDQgMCAweDBjMDAwMDAwIDB4MDQwMDAwMDA+LA0KCQkJIDw1IDAg
MHgxMDAwMDAwMCAweDA0MDAwMDAwPjsNCg0KCQlpbnRlcnJ1cHQtbWFwLW1h
c2sgPSA8MCAwIDYzPjsNCgkJaW50ZXJydXB0LW1hcCA9IDwwIDAgIDAgJmdp
YyAwICAwIDQ+LA0KCQkJCTwwIDAgIDEgJmdpYyAwICAxIDQ+LA0KCQkJCTww
IDAgIDIgJmdpYyAwICAyIDQ+LA0KCQkJCTwwIDAgIDMgJmdpYyAwICAzIDQ+
LA0KCQkJCTwwIDAgIDQgJmdpYyAwICA0IDQ+LA0KCQkJCTwwIDAgIDUgJmdp
YyAwICA1IDQ+LA0KCQkJCTwwIDAgIDYgJmdpYyAwICA2IDQ+LA0KCQkJCTww
IDAgIDcgJmdpYyAwICA3IDQ+LA0KCQkJCTwwIDAgIDggJmdpYyAwICA4IDQ+
LA0KCQkJCTwwIDAgIDkgJmdpYyAwICA5IDQ+LA0KCQkJCTwwIDAgMTAgJmdp
YyAwIDEwIDQ+LA0KCQkJCTwwIDAgMTEgJmdpYyAwIDExIDQ+LA0KCQkJCTww
IDAgMTIgJmdpYyAwIDEyIDQ+LA0KCQkJCTwwIDAgMTMgJmdpYyAwIDEzIDQ+
LA0KCQkJCTwwIDAgMTQgJmdpYyAwIDE0IDQ+LA0KCQkJCTwwIDAgMTUgJmdp
YyAwIDE1IDQ+LA0KCQkJCTwwIDAgMTYgJmdpYyAwIDE2IDQ+LA0KCQkJCTww
IDAgMTcgJmdpYyAwIDE3IDQ+LA0KCQkJCTwwIDAgMTggJmdpYyAwIDE4IDQ+
LA0KCQkJCTwwIDAgMTkgJmdpYyAwIDE5IDQ+LA0KCQkJCTwwIDAgMjAgJmdp
YyAwIDIwIDQ+LA0KCQkJCTwwIDAgMjEgJmdpYyAwIDIxIDQ+LA0KCQkJCTww
IDAgMjIgJmdpYyAwIDIyIDQ+LA0KCQkJCTwwIDAgMjMgJmdpYyAwIDIzIDQ+
LA0KCQkJCTwwIDAgMjQgJmdpYyAwIDI0IDQ+LA0KCQkJCTwwIDAgMjUgJmdp
YyAwIDI1IDQ+LA0KCQkJCTwwIDAgMjYgJmdpYyAwIDI2IDQ+LA0KCQkJCTww
IDAgMjcgJmdpYyAwIDI3IDQ+LA0KCQkJCTwwIDAgMjggJmdpYyAwIDI4IDQ+
LA0KCQkJCTwwIDAgMjkgJmdpYyAwIDI5IDQ+LA0KCQkJCTwwIDAgMzAgJmdp
YyAwIDMwIDQ+LA0KCQkJCTwwIDAgMzEgJmdpYyAwIDMxIDQ+LA0KCQkJCTww
IDAgMzIgJmdpYyAwIDMyIDQ+LA0KCQkJCTwwIDAgMzMgJmdpYyAwIDMzIDQ+
LA0KCQkJCTwwIDAgMzQgJmdpYyAwIDM0IDQ+LA0KCQkJCTwwIDAgMzUgJmdp
YyAwIDM1IDQ+LA0KCQkJCTwwIDAgMzYgJmdpYyAwIDM2IDQ+LA0KCQkJCTww
IDAgMzcgJmdpYyAwIDM3IDQ+LA0KCQkJCTwwIDAgMzggJmdpYyAwIDM4IDQ+
LA0KCQkJCTwwIDAgMzkgJmdpYyAwIDM5IDQ+LA0KCQkJCTwwIDAgNDAgJmdp
YyAwIDQwIDQ+LA0KCQkJCTwwIDAgNDEgJmdpYyAwIDQxIDQ+LA0KCQkJCTww
IDAgNDIgJmdpYyAwIDQyIDQ+Ow0KCX07DQp9Ow0K

--1342847746-1331867984-1344262762=:4645
Content-Type: text/plain; charset="US-ASCII"; name="vexpress-v2p-ca15-tc1.dts"
Content-Transfer-Encoding: BASE64
Content-ID: <alpine.DEB.2.02.1208061519221.4645@kaball.uk.xensource.com>
Content-Description: 
Content-Disposition: attachment; filename="vexpress-v2p-ca15-tc1.dts"

LyoNCiAqIEFSTSBMdGQuIFZlcnNhdGlsZSBFeHByZXNzDQogKg0KICogQ29y
ZVRpbGUgRXhwcmVzcyBBMTV4MiAodmVyc2lvbiB3aXRoIFRlc3QgQ2hpcCAx
KQ0KICogQ29ydGV4LUExNSBNUENvcmUgKFYyUC1DQTE1KQ0KICoNCiAqIEhC
SS0wMjM3QQ0KICovDQoNCi9kdHMtdjEvOw0KDQovIHsNCgltb2RlbCA9ICJW
MlAtQ0ExNSI7DQoJYXJtLGhiaSA9IDwweDIzNz47DQoJY29tcGF0aWJsZSA9
ICJhcm0sdmV4cHJlc3MsdjJwLWNhMTUsdGMxIiwgImFybSx2ZXhwcmVzcyx2
MnAtY2ExNSIsICJhcm0sdmV4cHJlc3MiOw0KDQoJI2FkZHJlc3MtY2VsbHMg
PSA8MT47DQoJI3NpemUtY2VsbHMgPSA8MT47DQoNCglpbnRlcnJ1cHQtcGFy
ZW50ID0gPCZnaWM+Ow0KDQoJY2hvc2VuIHsNCiAgICAgICAgICAgICAgICAg
Ym9vdGFyZ3MgPSAiZG9tMF9tZW09MTI4TSI7DQogICAgICAgICAgICAgICAg
IHhlbixkb20wLWJvb3RhcmdzID0gImVhcmx5cHJpbnRrIGNvbnNvbGU9dHR5
QU1BMSByb290PS9kZXYvbW1jYmxrMCBkZWJ1ZyBydyI7DQoJfTsNCg0KDQoJ
YWxpYXNlcyB7DQoJCXNlcmlhbDAgPSAmdjJtX3NlcmlhbDA7DQoJCXNlcmlh
bDEgPSAmdjJtX3NlcmlhbDE7DQoJCXNlcmlhbDIgPSAmdjJtX3NlcmlhbDI7
DQoJCXNlcmlhbDMgPSAmdjJtX3NlcmlhbDM7DQoJCWkyYzAgPSAmdjJtX2ky
Y19kdmk7DQoJCWkyYzEgPSAmdjJtX2kyY19wY2llOw0KCX07DQoNCgljcHVz
IHsNCgkJI2FkZHJlc3MtY2VsbHMgPSA8MT47DQoJCSNzaXplLWNlbGxzID0g
PDA+Ow0KDQoJCWNwdUAwIHsNCgkJCWRldmljZV90eXBlID0gImNwdSI7DQoJ
CQljb21wYXRpYmxlID0gImFybSxjb3J0ZXgtYTE1IjsNCgkJCXJlZyA9IDww
PjsNCgkJfTsNCgl9Ow0KDQoJbWVtb3J5IHsNCgkJZGV2aWNlX3R5cGUgPSAi
bWVtb3J5IjsNCgkJcmVnID0gPDB4ODAwMDAwMDAgMHg4MDAwMDAwMD47DQoJ
fTsNCg0KCWdpYzogaW50ZXJydXB0LWNvbnRyb2xsZXJAMmMwMDEwMDAgew0K
CQljb21wYXRpYmxlID0gImFybSxjb3J0ZXgtYTktZ2ljIjsNCgkJI2ludGVy
cnVwdC1jZWxscyA9IDwzPjsNCgkJI2FkZHJlc3MtY2VsbHMgPSA8MD47DQoJ
CWludGVycnVwdC1jb250cm9sbGVyOw0KCQlyZWcgPSA8MHgyYzAwMTAwMCAw
eDEwMDA+LA0KCQkgICAgICA8MHgyYzAwMjAwMCAweDEwMD47DQoJfTsNCg0K
CXBtdSB7DQoJCWNvbXBhdGlibGUgPSAiYXJtLGNvcnRleC1hOS1wbXUiOw0K
CQlpbnRlcnJ1cHRzID0gPDAgNjggND4sDQoJCQkgICAgIDwwIDY5IDQ+Ow0K
CX07DQoNCgl4ZW4gew0KCQljb21wYXRpYmxlID0gImFybSx4ZW4iOw0KCQly
ZWcgPSA8MHhiMDAwMDAwMCAweDIwMDAwPjsNCgkJaW50ZXJydXB0cyA9IDwx
IDE1IDB4ZjA4PjsNCgl9Ow0KDQoJbW90aGVyYm9hcmQgew0KCQlyYW5nZXMg
PSA8MCAwIDB4MDgwMDAwMDAgMHgwNDAwMDAwMD4sDQoJCQkgPDEgMCAweDE0
MDAwMDAwIDB4MDQwMDAwMDA+LA0KCQkJIDwyIDAgMHgxODAwMDAwMCAweDA0
MDAwMDAwPiwNCgkJCSA8MyAwIDB4MWMwMDAwMDAgMHgwNDAwMDAwMD4sDQoJ
CQkgPDQgMCAweDBjMDAwMDAwIDB4MDQwMDAwMDA+LA0KCQkJIDw1IDAgMHgx
MDAwMDAwMCAweDA0MDAwMDAwPjsNCg0KCQlpbnRlcnJ1cHQtbWFwLW1hc2sg
PSA8MCAwIDYzPjsNCgkJaW50ZXJydXB0LW1hcCA9IDwwIDAgIDAgJmdpYyAw
ICAwIDQ+LA0KCQkJCTwwIDAgIDEgJmdpYyAwICAxIDQ+LA0KCQkJCTwwIDAg
IDIgJmdpYyAwICAyIDQ+LA0KCQkJCTwwIDAgIDMgJmdpYyAwICAzIDQ+LA0K
CQkJCTwwIDAgIDQgJmdpYyAwICA0IDQ+LA0KCQkJCTwwIDAgIDUgJmdpYyAw
ICA1IDQ+LA0KCQkJCTwwIDAgIDYgJmdpYyAwICA2IDQ+LA0KCQkJCTwwIDAg
IDcgJmdpYyAwICA3IDQ+LA0KCQkJCTwwIDAgIDggJmdpYyAwICA4IDQ+LA0K
CQkJCTwwIDAgIDkgJmdpYyAwICA5IDQ+LA0KCQkJCTwwIDAgMTAgJmdpYyAw
IDEwIDQ+LA0KCQkJCTwwIDAgMTEgJmdpYyAwIDExIDQ+LA0KCQkJCTwwIDAg
MTIgJmdpYyAwIDEyIDQ+LA0KCQkJCTwwIDAgMTMgJmdpYyAwIDEzIDQ+LA0K
CQkJCTwwIDAgMTQgJmdpYyAwIDE0IDQ+LA0KCQkJCTwwIDAgMTUgJmdpYyAw
IDE1IDQ+LA0KCQkJCTwwIDAgMTYgJmdpYyAwIDE2IDQ+LA0KCQkJCTwwIDAg
MTcgJmdpYyAwIDE3IDQ+LA0KCQkJCTwwIDAgMTggJmdpYyAwIDE4IDQ+LA0K
CQkJCTwwIDAgMTkgJmdpYyAwIDE5IDQ+LA0KCQkJCTwwIDAgMjAgJmdpYyAw
IDIwIDQ+LA0KCQkJCTwwIDAgMjEgJmdpYyAwIDIxIDQ+LA0KCQkJCTwwIDAg
MjIgJmdpYyAwIDIyIDQ+LA0KCQkJCTwwIDAgMjMgJmdpYyAwIDIzIDQ+LA0K
CQkJCTwwIDAgMjQgJmdpYyAwIDI0IDQ+LA0KCQkJCTwwIDAgMjUgJmdpYyAw
IDI1IDQ+LA0KCQkJCTwwIDAgMjYgJmdpYyAwIDI2IDQ+LA0KCQkJCTwwIDAg
MjcgJmdpYyAwIDI3IDQ+LA0KCQkJCTwwIDAgMjggJmdpYyAwIDI4IDQ+LA0K
CQkJCTwwIDAgMjkgJmdpYyAwIDI5IDQ+LA0KCQkJCTwwIDAgMzAgJmdpYyAw
IDMwIDQ+LA0KCQkJCTwwIDAgMzEgJmdpYyAwIDMxIDQ+LA0KCQkJCTwwIDAg
MzIgJmdpYyAwIDMyIDQ+LA0KCQkJCTwwIDAgMzMgJmdpYyAwIDMzIDQ+LA0K
CQkJCTwwIDAgMzQgJmdpYyAwIDM0IDQ+LA0KCQkJCTwwIDAgMzUgJmdpYyAw
IDM1IDQ+LA0KCQkJCTwwIDAgMzYgJmdpYyAwIDM2IDQ+LA0KCQkJCTwwIDAg
MzcgJmdpYyAwIDM3IDQ+LA0KCQkJCTwwIDAgMzggJmdpYyAwIDM4IDQ+LA0K
CQkJCTwwIDAgMzkgJmdpYyAwIDM5IDQ+LA0KCQkJCTwwIDAgNDAgJmdpYyAw
IDQwIDQ+LA0KCQkJCTwwIDAgNDEgJmdpYyAwIDQxIDQ+LA0KCQkJCTwwIDAg
NDIgJmdpYyAwIDQyIDQ+Ow0KCX07DQp9Ow0KDQovaW5jbHVkZS8gInZleHBy
ZXNzLXYybS1yczEtcnRzbS5kdHNpIg0K

--1342847746-1331867984-1344262762=:4645
Content-Type: text/plain; charset="US-ASCII";
	name="vexpress-v2m-rs1-rtsm.dtsi"
Content-Transfer-Encoding: BASE64
Content-ID: <alpine.DEB.2.02.1208061519222.4645@kaball.uk.xensource.com>
Content-Description: 
Content-Disposition: attachment; filename="vexpress-v2m-rs1-rtsm.dtsi"

LyoNCiAqIEFSTSBMdGQuIFZlcnNhdGlsZSBFeHByZXNzDQogKg0KICogTW90
aGVyYm9hcmQgRXhwcmVzcyB1QVRYDQogKiBWMk0tUDENCiAqDQogKiBIQkkt
MDE5MEQNCiAqDQogKiBSUzEgbWVtb3J5IG1hcCAoIkFSTSBDb3J0ZXgtQSBT
ZXJpZXMgbWVtb3J5IG1hcCIgaW4gdGhlIGJvYXJkJ3MNCiAqIFRlY2huaWNh
bCBSZWZlcmVuY2UgTWFudWFsKQ0KICoNCiAqIFdBUk5JTkchIFRoZSBoYXJk
d2FyZSBkZXNjcmliZWQgaW4gdGhpcyBmaWxlIGlzIGluZGVwZW5kZW50IGZy
b20gdGhlDQogKiBvcmlnaW5hbCB2YXJpYW50ICh2ZXhwcmVzcy12Mm0uZHRz
aSksIGJ1dCB0aGVyZSBpcyBhIHN0cm9uZw0KICogY29ycmVzcG9uZGVuY2Ug
YmV0d2VlbiB0aGUgdHdvIGNvbmZpZ3VyYXRpb25zLg0KICoNCiAqIFRBS0Ug
Q0FSRSBXSEVOIE1BSU5UQUlOSU5HIFRISVMgRklMRSBUTyBQUk9QQUdBVEUg
QU5ZIFJFTEVWQU5UDQogKiBDSEFOR0VTIFRPIHZleHByZXNzLXYybS5kdHNp
IQ0KICovDQoNCi8gew0KCWFsaWFzZXMgew0KCQlhcm0sdjJtX3RpbWVyID0g
JnYybV90aW1lcjAxOw0KCX07DQoNCgltb3RoZXJib2FyZCB7DQoJCWNvbXBh
dGlibGUgPSAic2ltcGxlLWJ1cyI7DQoJCWFybSx2Mm0tbWVtb3J5LW1hcCA9
ICJyczEiOw0KCQkjYWRkcmVzcy1jZWxscyA9IDwyPjsgLyogU01CIGNoaXBz
ZWxlY3QgbnVtYmVyIGFuZCBvZmZzZXQgKi8NCgkJI3NpemUtY2VsbHMgPSA8
MT47DQoJCSNpbnRlcnJ1cHQtY2VsbHMgPSA8MT47DQoNCgkJZmxhc2hAMCww
MDAwMDAwMCB7DQoJCQljb21wYXRpYmxlID0gImFybSx2ZXhwcmVzcy1mbGFz
aCIsICJjZmktZmxhc2giOw0KCQkJcmVnID0gPDAgMHgwMDAwMDAwMCAweDA0
MDAwMDAwPiwNCgkJCSAgICAgIDw0IDB4MDAwMDAwMDAgMHgwNDAwMDAwMD47
DQoJCQliYW5rLXdpZHRoID0gPDQ+Ow0KCQl9Ow0KDQoJCXBzcmFtQDEsMDAw
MDAwMDAgew0KCQkJY29tcGF0aWJsZSA9ICJhcm0sdmV4cHJlc3MtcHNyYW0i
LCAibXRkLXJhbSI7DQoJCQlyZWcgPSA8MSAweDAwMDAwMDAwIDB4MDIwMDAw
MDA+Ow0KCQkJYmFuay13aWR0aCA9IDw0PjsNCgkJfTsNCg0KCQl2cmFtQDIs
MDAwMDAwMDAgew0KCQkJY29tcGF0aWJsZSA9ICJhcm0sdmV4cHJlc3MtdnJh
bSI7DQoJCQlyZWcgPSA8MiAweDAwMDAwMDAwIDB4MDA4MDAwMDA+Ow0KCQl9
Ow0KDQoJCWV0aGVybmV0QDIsMDIwMDAwMDAgew0KCQkJY29tcGF0aWJsZSA9
ICJzbXNjLGxhbjkxYzExMSI7DQoJCQlyZWcgPSA8MiAweDAyMDAwMDAwIDB4
MTAwMDA+Ow0KCQkJaW50ZXJydXB0cyA9IDwxNT47DQoJCX07DQoJDQoJCXVz
YkAyLDAzMDAwMDAwIHsNCgkJCWNvbXBhdGlibGUgPSAibnhwLHVzYi1pc3Ax
NzYxIjsNCgkJCXJlZyA9IDwyIDB4MDMwMDAwMDAgMHgyMDAwMD47DQoJCQlp
bnRlcnJ1cHRzID0gPDE2PjsNCgkJCXBvcnQxLW90ZzsNCgkJfTsNCg0KCQlp
b2ZwZ2FAMywwMDAwMDAwMCB7DQoJCQljb21wYXRpYmxlID0gImFybSxhbWJh
LWJ1cyIsICJzaW1wbGUtYnVzIjsNCgkJCSNhZGRyZXNzLWNlbGxzID0gPDE+
Ow0KCQkJI3NpemUtY2VsbHMgPSA8MT47DQoJCQlyYW5nZXMgPSA8MCAzIDAg
MHgyMDAwMDA+Ow0KDQoJCQlzeXNyZWdAMDEwMDAwIHsNCgkJCQljb21wYXRp
YmxlID0gImFybSx2ZXhwcmVzcy1zeXNyZWciOw0KCQkJCXJlZyA9IDwweDAx
MDAwMCAweDEwMDA+Ow0KCQkJfTsNCg0KCQkJc3lzY3RsQDAyMDAwMCB7DQoJ
CQkJY29tcGF0aWJsZSA9ICJhcm0sc3A4MTAiLCAiYXJtLHByaW1lY2VsbCI7
DQoJCQkJcmVnID0gPDB4MDIwMDAwIDB4MTAwMD47DQoJCQl9Ow0KDQoJCQkv
KiBQQ0ktRSBJMkMgYnVzICovDQoJCQl2Mm1faTJjX3BjaWU6IGkyY0AwMzAw
MDAgew0KCQkJCWNvbXBhdGlibGUgPSAiYXJtLHZlcnNhdGlsZS1pMmMiOw0K
CQkJCXJlZyA9IDwweDAzMDAwMCAweDEwMDA+Ow0KDQoJCQkJI2FkZHJlc3Mt
Y2VsbHMgPSA8MT47DQoJCQkJI3NpemUtY2VsbHMgPSA8MD47DQoNCgkJCQlw
Y2llLXN3aXRjaEA2MCB7DQoJCQkJCWNvbXBhdGlibGUgPSAiaWR0LDg5aHBl
czMyaDgiOw0KCQkJCQlyZWcgPSA8MHg2MD47DQoJCQkJfTsNCgkJCX07DQoN
CgkJCWFhY2lAMDQwMDAwIHsNCgkJCQljb21wYXRpYmxlID0gImFybSxwbDA0
MSIsICJhcm0scHJpbWVjZWxsIjsNCgkJCQlyZWcgPSA8MHgwNDAwMDAgMHgx
MDAwPjsNCgkJCQlpbnRlcnJ1cHRzID0gPDExPjsNCgkJCX07DQoNCgkJCW1t
Y2lAMDUwMDAwIHsNCgkJCQljb21wYXRpYmxlID0gImFybSxwbDE4MCIsICJh
cm0scHJpbWVjZWxsIjsNCgkJCQlyZWcgPSA8MHgwNTAwMDAgMHgxMDAwPjsN
CgkJCQlpbnRlcnJ1cHRzID0gPDkgMTA+Ow0KCQkJfTsNCg0KCQkJa21pQDA2
MDAwMCB7DQoJCQkJY29tcGF0aWJsZSA9ICJhcm0scGwwNTAiLCAiYXJtLHBy
aW1lY2VsbCI7DQoJCQkJcmVnID0gPDB4MDYwMDAwIDB4MTAwMD47DQoJCQkJ
aW50ZXJydXB0cyA9IDwxMj47DQoJCQl9Ow0KDQoJCQlrbWlAMDcwMDAwIHsN
CgkJCQljb21wYXRpYmxlID0gImFybSxwbDA1MCIsICJhcm0scHJpbWVjZWxs
IjsNCgkJCQlyZWcgPSA8MHgwNzAwMDAgMHgxMDAwPjsNCgkJCQlpbnRlcnJ1
cHRzID0gPDEzPjsNCgkJCX07DQoNCgkJCXYybV9zZXJpYWwwOiB1YXJ0QDA5
MDAwMCB7DQoJCQkJY29tcGF0aWJsZSA9ICJhcm0scGwwMTEiLCAiYXJtLHBy
aW1lY2VsbCI7DQoJCQkJcmVnID0gPDB4MDkwMDAwIDB4MTAwMD47DQoJCQkJ
aW50ZXJydXB0cyA9IDw1PjsNCgkJCX07DQoNCgkJCXYybV9zZXJpYWwxOiB1
YXJ0QDBhMDAwMCB7DQoJCQkJY29tcGF0aWJsZSA9ICJhcm0scGwwMTEiLCAi
YXJtLHByaW1lY2VsbCI7DQoJCQkJcmVnID0gPDB4MGEwMDAwIDB4MTAwMD47
DQoJCQkJaW50ZXJydXB0cyA9IDw2PjsNCgkJCX07DQoNCgkJCXYybV9zZXJp
YWwyOiB1YXJ0QDBiMDAwMCB7DQoJCQkJY29tcGF0aWJsZSA9ICJhcm0scGww
MTEiLCAiYXJtLHByaW1lY2VsbCI7DQoJCQkJcmVnID0gPDB4MGIwMDAwIDB4
MTAwMD47DQoJCQkJaW50ZXJydXB0cyA9IDw3PjsNCgkJCX07DQoNCgkJCXYy
bV9zZXJpYWwzOiB1YXJ0QDBjMDAwMCB7DQoJCQkJY29tcGF0aWJsZSA9ICJh
cm0scGwwMTEiLCAiYXJtLHByaW1lY2VsbCI7DQoJCQkJcmVnID0gPDB4MGMw
MDAwIDB4MTAwMD47DQoJCQkJaW50ZXJydXB0cyA9IDw4PjsNCgkJCX07DQoN
CgkJCXdkdEAwZjAwMDAgew0KCQkJCWNvbXBhdGlibGUgPSAiYXJtLHNwODA1
IiwgImFybSxwcmltZWNlbGwiOw0KCQkJCXJlZyA9IDwweDBmMDAwMCAweDEw
MDA+Ow0KCQkJCWludGVycnVwdHMgPSA8MD47DQoJCQl9Ow0KDQoJCQl2Mm1f
dGltZXIwMTogdGltZXJAMTEwMDAwIHsNCgkJCQljb21wYXRpYmxlID0gImFy
bSxzcDgwNCIsICJhcm0scHJpbWVjZWxsIjsNCgkJCQlyZWcgPSA8MHgxMTAw
MDAgMHgxMDAwPjsNCgkJCQlpbnRlcnJ1cHRzID0gPDI+Ow0KCQkJfTsNCg0K
CQkJdjJtX3RpbWVyMjM6IHRpbWVyQDEyMDAwMCB7DQoJCQkJY29tcGF0aWJs
ZSA9ICJhcm0sc3A4MDQiLCAiYXJtLHByaW1lY2VsbCI7DQoJCQkJcmVnID0g
PDB4MTIwMDAwIDB4MTAwMD47DQoJCQl9Ow0KDQoJCQkvKiBEVkkgSTJDIGJ1
cyAqLw0KCQkJdjJtX2kyY19kdmk6IGkyY0AxNjAwMDAgew0KCQkJCWNvbXBh
dGlibGUgPSAiYXJtLHZlcnNhdGlsZS1pMmMiOw0KCQkJCXJlZyA9IDwweDE2
MDAwMCAweDEwMDA+Ow0KDQoJCQkJI2FkZHJlc3MtY2VsbHMgPSA8MT47DQoJ
CQkJI3NpemUtY2VsbHMgPSA8MD47DQoNCgkJCQlkdmktdHJhbnNtaXR0ZXJA
Mzkgew0KCQkJCQljb21wYXRpYmxlID0gInNpbCxzaWk5MDIyLXRwaSIsICJz
aWwsc2lpOTAyMiI7DQoJCQkJCXJlZyA9IDwweDM5PjsNCgkJCQl9Ow0KDQoJ
CQkJZHZpLXRyYW5zbWl0dGVyQDYwIHsNCgkJCQkJY29tcGF0aWJsZSA9ICJz
aWwsc2lpOTAyMi1jcGkiLCAic2lsLHNpaTkwMjIiOw0KCQkJCQlyZWcgPSA8
MHg2MD47DQoJCQkJfTsNCgkJCX07DQoNCgkJCXJ0Y0AxNzAwMDAgew0KCQkJ
CWNvbXBhdGlibGUgPSAiYXJtLHBsMDMxIiwgImFybSxwcmltZWNlbGwiOw0K
CQkJCXJlZyA9IDwweDE3MDAwMCAweDEwMDA+Ow0KCQkJCWludGVycnVwdHMg
PSA8ND47DQoJCQl9Ow0KDQoJCQljb21wYWN0LWZsYXNoQDFhMDAwMCB7DQoJ
CQkJY29tcGF0aWJsZSA9ICJhcm0sdmV4cHJlc3MtY2YiLCAiYXRhLWdlbmVy
aWMiOw0KCQkJCXJlZyA9IDwweDFhMDAwMCAweDEwMA0KCQkJCSAgICAgICAw
eDFhMDEwMCAweGYwMD47DQoJCQkJcmVnLXNoaWZ0ID0gPDI+Ow0KCQkJfTsN
Cg0KCQkJY2xjZEAxZjAwMDAgew0KCQkJCWNvbXBhdGlibGUgPSAiYXJtLHBs
MTExIiwgImFybSxwcmltZWNlbGwiOw0KCQkJCXJlZyA9IDwweDFmMDAwMCAw
eDEwMDA+Ow0KCQkJCWludGVycnVwdHMgPSA8MTQ+Ow0KCQkJfTsNCgkJfTsN
Cgl9Ow0KfTsNCg==

--1342847746-1331867984-1344262762=:4645
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--1342847746-1331867984-1344262762=:4645--


From xen-devel-bounces@lists.xen.org Mon Aug 06 14:27:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:27:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOH7-0002Pn-L8; Mon, 06 Aug 2012 14:27:17 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SyOH5-0002PD-SC
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 14:27:16 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1344263229!5758065!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgxNjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16809 invoked from network); 6 Aug 2012 14:27:09 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 14:27:09 -0000
X-IronPort-AV: E=Sophos;i="4.77,720,1336348800"; d="scan'208";a="13868424"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 14:27:09 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Mon, 6 Aug 2012 15:27:09 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1SyOGz-0004ZR-5r; Mon, 06 Aug 2012 14:27:09 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1SyOGz-0001xZ-1y;
	Mon, 06 Aug 2012 15:27:09 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20511.54333.44714.694390@mariner.uk.xensource.com>
Date: Mon, 6 Aug 2012 15:27:09 +0100
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1344244422.11339.17.camel@zakaz.uk.xensource.com>
References: <501BDF23.50409@amd.com>
	<1344005133.21372.54.camel@zakaz.uk.xensource.com>
	<501BEAD8.3040300@amd.com>
	<1344244422.11339.17.camel@zakaz.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: Christoph Egger <Christoph.Egger@amd.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] xl segfault when starting a guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [Xen-devel] xl segfault when starting a guest"):
> It looks like some bits of my original patch got missed during the
> application of 25727:a8d708fcb347, specifically the changes to the
> iscsi/nbd/enbd prefix handling rule.

This was because:
 - I mistakenly used the copy of the patch that Ian C had CC'd to me,
   rather than the copy I got via the mailing list.  The former went
   via the Citrix corporate email system which mangles things, which
   is why this is a bad idea.  The latter does not.
 - When I tried to apply it, it produced a bunch of rejects in the
   autogenerated files.  However buried in those messages was a reject
   in the .l source file, which I didn't spot.
 - I therefore tried to regenerate the flex source (perhaps not with
   complete success) and committed the result. 
Sorry for messing this up.

I have backed out 25727:a8d708fcb347 and reapplied what I think is a
non-mangled version as 25733:353bc0801b11.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 14:31:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:31:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOLJ-0002dS-Ap; Mon, 06 Aug 2012 14:31:37 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SyOLI-0002dK-LQ
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 14:31:36 +0000
Received: from [85.158.143.99:32167] by server-3.bemta-4.messagelabs.com id
	F1/7B-01511-745DF105; Mon, 06 Aug 2012 14:31:35 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-15.tower-216.messagelabs.com!1344263493!29880393!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgxNjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 716 invoked from network); 6 Aug 2012 14:31:34 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 14:31:34 -0000
X-IronPort-AV: E=Sophos;i="4.77,720,1336348800"; d="scan'208";a="13868540"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 14:31:33 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Mon, 6 Aug 2012 15:31:33 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1SyOLE-0004ee-Ow; Mon, 06 Aug 2012 14:31:32 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1SyOLE-0001xu-O3;
	Mon, 06 Aug 2012 15:31:32 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20511.54596.544654.176170@mariner.uk.xensource.com>
Date: Mon, 6 Aug 2012 15:31:32 +0100
To: Matt Wilson <msw@amazon.com>
In-Reply-To: <20120803205146.GA6268@US-SEA-R8XVZTX>
References: <20120802211157.GG8228@US-SEA-R8XVZTX>
	<20507.58318.416753.917851@mariner.uk.xensource.com>
	<20120803205146.GA6268@US-SEA-R8XVZTX>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	Lars Kurth <lars.kurth@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] lists.xen.org Mailman configuration and DKIM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Matt Wilson writes ("Re: [Xen-devel] lists.xen.org Mailman configuration and DKIM"):
> On Fri, Aug 03, 2012 at 07:44:30AM -0700, Ian Jackson wrote:
> > That would be better than asking lists.xen.org to start violating the
> > specified protocol.  Now of course a SHOULD is not an absolute
> > requirement.  Perhaps mailing lists are a special case somehow; but if
> > so I would expect this to be addressed in the relevant standards
> > documents.  I don't see any particular reason to think that
> > lists.xen.org is somehow unusual.
> 
> Ultimately I think that Mailman should verify DKIM signatures, provide
> a new signature for the modified message (or have the outbound MTA do
> the signing), and retain the original DKIM signature as a trace. I
> believe that this is in line with the recommendations for intermediary
> email handlers like Mailman in RFC 5863 [4]. Of course, I don't know
> if Gmail will rework their implementation to ignore the invalid
> signature. At least one Mailman user reported success simply adding a
> new signature and not stripping any header [5].

The solution to the broken DKIM implementations, or broken spec, must
not be allowed to become "install more DKIM".  That is making the
problem worse, not better.

> Personally, I think that stripping DKIM headers as a short term
> workaround is less objectionable.

So bottom line is you think that Gmail is violating a SHOULD NOT.
And you are suggesting that the right fix for this is for us to also
violate a SHOULD NOT.  That can't be right.

> If a test of removing DKIM headers to see if it helps with delivery to
> Gmail is off the table, then perhaps configuring Mailman in a way that
> doesn't break DKIM signatures would be an option? Amazon's signed
> headers include date, from, to, cc, subject, message-id and
> mime-version. If the subject manipulation of adding [Xen-devel] was
> removed, the signature would likely still be valid.

I don't think that would be popular and I don't think this is a good
reason to do it.

Personally I think these subject line prefixes are annoying and if it
were my list it wouldn't have had them to start with.  But if you want
us to turn that off I think you need to get consensus for that.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 14:32:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:32:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOMM-0002iu-Sa; Mon, 06 Aug 2012 14:32:42 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1SyOML-0002iD-FJ
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 14:32:41 +0000
Received: from [85.158.138.51:57345] by server-1.bemta-3.messagelabs.com id
	6B/1C-29224-885DF105; Mon, 06 Aug 2012 14:32:40 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-7.tower-174.messagelabs.com!1344263558!21732849!1
X-Originating-IP: [63.239.67.10]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10795 invoked from network); 6 Aug 2012 14:32:39 -0000
Received: from emvm-gh1-uea09.nsa.gov (HELO nsa.gov) (63.239.67.10)
	by server-7.tower-174.messagelabs.com with SMTP;
	6 Aug 2012 14:32:39 -0000
X-TM-IMSS-Message-ID: <7b071d190002c548@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov
	([63.239.67.10]) with ESMTP (TREND IMSS SMTP Service 7.1) id
	7b071d190002c548 ; Mon, 6 Aug 2012 10:33:08 -0400
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id q76EWZ9Q011112; 
	Mon, 6 Aug 2012 10:32:37 -0400
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: xen-devel@lists.xen.org
Date: Mon,  6 Aug 2012 10:32:18 -0400
Message-Id: <1344263550-3941-7-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.2
In-Reply-To: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
References: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Subject: [Xen-devel] [PATCH 06/18] xsm,
	arch/x86: add distinct XSM hooks for map/unmap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The xsm_iomem_permission and xsm_ioport_permission hooks are intended to
be called by the domain builder, while the calls in arch/x86/domctl.c
which control mapping are also performed by the device model. Because of
this, they should not use the same XSM hooks.

This also adds a missing XSM hook in the unbind IRQ domctl.

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 xen/arch/x86/domctl.c  |  8 ++++++--
 xen/arch/x86/physdev.c |  2 +-
 xen/include/xsm/xsm.h  | 25 ++++++++++++++++++++++---
 xen/xsm/dummy.c        | 20 +++++++++++++++++++-
 xen/xsm/flask/hooks.c  | 42 ++++++++++++++++++++----------------------
 5 files changed, 68 insertions(+), 29 deletions(-)

diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 135ea6e..3cb4d97 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -800,6 +800,10 @@ long arch_do_domctl(
              !irq_access_permitted(current->domain, bind->machine_irq) )
             goto unbind_out;
 
+        ret = xsm_unbind_pt_irq(d, bind);
+        if ( ret )
+            goto unbind_out;
+
         if ( iommu_enabled )
         {
             spin_lock(&pcidevs_lock);
@@ -837,7 +841,7 @@ long arch_do_domctl(
         if ( unlikely((d = rcu_lock_domain_by_id(domctl->domain)) == NULL) )
             break;
 
-        ret = xsm_iomem_permission(d, mfn, mfn + nr_mfns - 1, add);
+        ret = xsm_iomem_mapping(d, mfn, mfn + nr_mfns - 1, add);
         if ( ret ) {
             rcu_unlock_domain(d);
             break;
@@ -899,7 +903,7 @@ long arch_do_domctl(
         if ( unlikely((d = rcu_lock_domain_by_id(domctl->domain)) == NULL) )
             break;
 
-        ret = xsm_ioport_permission(d, fmp, fmp + np - 1, add);
+        ret = xsm_ioport_mapping(d, fmp, fmp + np - 1, add);
         if ( ret ) {
             rcu_unlock_domain(d);
             break;
diff --git a/xen/arch/x86/physdev.c b/xen/arch/x86/physdev.c
index b0458fd..e434ff4 100644
--- a/xen/arch/x86/physdev.c
+++ b/xen/arch/x86/physdev.c
@@ -239,7 +239,7 @@ int physdev_unmap_pirq(domid_t domid, int pirq)
     if ( !IS_PRIV_FOR(current->domain, d) )
         goto free_domain;
 
-    ret = xsm_irq_permission(d, domain_pirq_to_irq(d, pirq), 0);
+    ret = xsm_unmap_domain_pirq(d, domain_pirq_to_irq(d, pirq));
     if ( ret )
         goto free_domain;
 
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index bef79df..0434c05 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -117,8 +117,10 @@ struct xsm_operations {
 
     char *(*show_irq_sid) (int irq);
     int (*map_domain_pirq) (struct domain *d, int irq, void *data);
+    int (*unmap_domain_pirq) (struct domain *d, int irq);
     int (*irq_permission) (struct domain *d, int pirq, uint8_t allow);
     int (*iomem_permission) (struct domain *d, uint64_t s, uint64_t e, uint8_t allow);
+    int (*iomem_mapping) (struct domain *d, uint64_t s, uint64_t e, uint8_t allow);
     int (*pci_config_permission) (struct domain *d, uint32_t machine_bdf, uint16_t start, uint16_t end, uint8_t access);
 
     int (*get_device_group) (uint32_t machine_bdf);
@@ -177,11 +179,12 @@ struct xsm_operations {
     int (*add_to_physmap) (struct domain *d1, struct domain *d2);
     int (*sendtrigger) (struct domain *d);
     int (*bind_pt_irq) (struct domain *d, struct xen_domctl_bind_pt_irq *bind);
-    int (*unbind_pt_irq) (struct domain *d);
+    int (*unbind_pt_irq) (struct domain *d, struct xen_domctl_bind_pt_irq *bind);
     int (*pin_mem_cacheattr) (struct domain *d);
     int (*ext_vcpucontext) (struct domain *d, uint32_t cmd);
     int (*vcpuextstate) (struct domain *d, uint32_t cmd);
     int (*ioport_permission) (struct domain *d, uint32_t s, uint32_t e, uint8_t allow);
+    int (*ioport_mapping) (struct domain *d, uint32_t s, uint32_t e, uint8_t allow);
 #endif
 };
 
@@ -495,6 +498,11 @@ static inline int xsm_map_domain_pirq (struct domain *d, int irq, void *data)
     return xsm_call(map_domain_pirq(d, irq, data));
 }
 
+static inline int xsm_unmap_domain_pirq (struct domain *d, int irq)
+{
+    return xsm_call(unmap_domain_pirq(d, irq));
+}
+
 static inline int xsm_irq_permission (struct domain *d, int pirq, uint8_t allow)
 {
     return xsm_call(irq_permission(d, pirq, allow));
@@ -505,6 +513,11 @@ static inline int xsm_iomem_permission (struct domain *d, uint64_t s, uint64_t e
     return xsm_call(iomem_permission(d, s, e, allow));
 }
 
+static inline int xsm_iomem_mapping (struct domain *d, uint64_t s, uint64_t e, uint8_t allow)
+{
+    return xsm_call(iomem_mapping(d, s, e, allow));
+}
+
 static inline int xsm_pci_config_permission (struct domain *d, uint32_t machine_bdf, uint16_t start, uint16_t end, uint8_t access)
 {
     return xsm_call(pci_config_permission(d, machine_bdf, start, end, access));
@@ -780,9 +793,10 @@ static inline int xsm_bind_pt_irq(struct domain *d,
     return xsm_call(bind_pt_irq(d, bind));
 }
 
-static inline int xsm_unbind_pt_irq(struct domain *d)
+static inline int xsm_unbind_pt_irq(struct domain *d,
+                                                struct xen_domctl_bind_pt_irq *bind)
 {
-    return xsm_call(unbind_pt_irq(d));
+    return xsm_call(unbind_pt_irq(d, bind));
 }
 
 static inline int xsm_pin_mem_cacheattr(struct domain *d)
@@ -803,6 +817,11 @@ static inline int xsm_ioport_permission (struct domain *d, uint32_t s, uint32_t
 {
     return xsm_call(ioport_permission(d, s, e, allow));
 }
+
+static inline int xsm_ioport_mapping (struct domain *d, uint32_t s, uint32_t e, uint8_t allow)
+{
+    return xsm_call(ioport_mapping(d, s, e, allow));
+}
 #endif /* CONFIG_X86 */
 
 extern struct xsm_operations dummy_xsm_ops;
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index 5d35342..fd075c7 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -395,6 +395,11 @@ static int dummy_map_domain_pirq (struct domain *d, int irq, void *data)
     return 0;
 }
 
+static int dummy_unmap_domain_pirq (struct domain *d, int irq)
+{
+    return 0;
+}
+
 static int dummy_irq_permission (struct domain *d, int pirq, uint8_t allow)
 {
     return 0;
@@ -405,6 +410,11 @@ static int dummy_iomem_permission (struct domain *d, uint64_t s, uint64_t e, uin
     return 0;
 }
 
+static int dummy_iomem_mapping (struct domain *d, uint64_t s, uint64_t e, uint8_t allow)
+{
+    return 0;
+}
+
 static int dummy_pci_config_permission (struct domain *d, uint32_t machine_bdf,
                                         uint16_t start, uint16_t end,
                                         uint8_t access)
@@ -585,7 +595,7 @@ static int dummy_bind_pt_irq (struct domain *d, struct xen_domctl_bind_pt_irq *b
     return 0;
 }
 
-static int dummy_unbind_pt_irq (struct domain *d)
+static int dummy_unbind_pt_irq (struct domain *d, struct xen_domctl_bind_pt_irq *bind)
 {
     return 0;
 }
@@ -609,6 +619,11 @@ static int dummy_ioport_permission (struct domain *d, uint32_t s, uint32_t e, ui
 {
     return 0;
 }
+
+static int dummy_ioport_mapping (struct domain *d, uint32_t s, uint32_t e, uint8_t allow)
+{
+    return 0;
+}
 #endif
 
 struct xsm_operations dummy_xsm_ops;
@@ -693,8 +708,10 @@ void xsm_fixup_ops (struct xsm_operations *ops)
 
     set_to_dummy_if_null(ops, show_irq_sid);
     set_to_dummy_if_null(ops, map_domain_pirq);
+    set_to_dummy_if_null(ops, unmap_domain_pirq);
     set_to_dummy_if_null(ops, irq_permission);
     set_to_dummy_if_null(ops, iomem_permission);
+    set_to_dummy_if_null(ops, iomem_mapping);
     set_to_dummy_if_null(ops, pci_config_permission);
 
     set_to_dummy_if_null(ops, get_device_group);
@@ -757,5 +774,6 @@ void xsm_fixup_ops (struct xsm_operations *ops)
     set_to_dummy_if_null(ops, ext_vcpucontext);
     set_to_dummy_if_null(ops, vcpuextstate);
     set_to_dummy_if_null(ops, ioport_permission);
+    set_to_dummy_if_null(ops, ioport_mapping);
 #endif
 }
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 9262d34..5923710 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -781,43 +781,40 @@ static int flask_map_domain_pirq (struct domain *d, int irq, void *data)
     return rc;
 }
 
-static int flask_irq_permission (struct domain *d, int irq, uint8_t access)
+static int flask_unmap_domain_pirq (struct domain *d, int irq)
 {
-    u32 perm;
-    u32 rsid;
+    u32 sid;
     int rc = -EPERM;
 
-    struct domain_security_struct *ssec, *tsec;
+    struct domain_security_struct *ssec;
     struct avc_audit_data ad;
 
-    rc = domain_has_perm(current->domain, d, SECCLASS_RESOURCE,
-                         resource_to_perm(access));
-
+    rc = domain_has_perm(current->domain, d, SECCLASS_RESOURCE, RESOURCE__REMOVE);
     if ( rc )
         return rc;
 
-    if ( access )
-        perm = RESOURCE__ADD_IRQ;
-    else
-        perm = RESOURCE__REMOVE_IRQ;
-
     ssec = current->domain->ssid;
-    tsec = d->ssid;
 
-    rc = get_irq_sid(irq, &rsid, &ad);
-    if ( rc )
-        return rc;
-
-    rc = avc_has_perm(ssec->sid, rsid, SECCLASS_RESOURCE, perm, &ad);
+    if ( irq >= nr_irqs_gsi ) {
+        /* TODO support for MSI here */
+        return 0;
+    } else {
+        rc = get_irq_sid(irq, &sid, &ad);
+    }
     if ( rc )
         return rc;
 
-    if ( access )
-        rc = avc_has_perm(tsec->sid, rsid, SECCLASS_RESOURCE, 
-                            RESOURCE__USE, &ad);
+    rc = avc_has_perm(ssec->sid, sid, SECCLASS_RESOURCE, RESOURCE__REMOVE_IRQ, &ad);
     return rc;
 }
 
+static int flask_irq_permission (struct domain *d, int pirq, uint8_t access)
+{
+    /* the PIRQ number is not useful; real IRQ is checked during mapping */
+    return domain_has_perm(current->domain, d, SECCLASS_RESOURCE,
+                           resource_to_perm(access));
+}
+
 struct iomem_has_perm_data {
     struct domain_security_struct *ssec, *tsec;
     u32 perm;
@@ -1507,7 +1504,7 @@ static int flask_bind_pt_irq (struct domain *d, struct xen_domctl_bind_pt_irq *b
     return avc_has_perm(tsec->sid, rsid, SECCLASS_RESOURCE, RESOURCE__USE, &ad);
 }
 
-static int flask_unbind_pt_irq (struct domain *d)
+static int flask_unbind_pt_irq (struct domain *d, struct xen_domctl_bind_pt_irq *bind)
 {
     return domain_has_perm(current->domain, d, SECCLASS_RESOURCE, RESOURCE__REMOVE);
 }
@@ -1627,6 +1624,7 @@ static struct xsm_operations flask_ops = {
     .show_irq_sid = flask_show_irq_sid,
 
     .map_domain_pirq = flask_map_domain_pirq,
+    .unmap_domain_pirq = flask_unmap_domain_pirq,
     .irq_permission = flask_irq_permission,
     .iomem_permission = flask_iomem_permission,
     .pci_config_permission = flask_pci_config_permission,
-- 
1.7.11.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 14:32:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:32:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOMM-0002iu-Sa; Mon, 06 Aug 2012 14:32:42 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1SyOML-0002iD-FJ
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 14:32:41 +0000
Received: from [85.158.138.51:57345] by server-1.bemta-3.messagelabs.com id
	6B/1C-29224-885DF105; Mon, 06 Aug 2012 14:32:40 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-7.tower-174.messagelabs.com!1344263558!21732849!1
X-Originating-IP: [63.239.67.10]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10795 invoked from network); 6 Aug 2012 14:32:39 -0000
Received: from emvm-gh1-uea09.nsa.gov (HELO nsa.gov) (63.239.67.10)
	by server-7.tower-174.messagelabs.com with SMTP;
	6 Aug 2012 14:32:39 -0000
X-TM-IMSS-Message-ID: <7b071d190002c548@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov
	([63.239.67.10]) with ESMTP (TREND IMSS SMTP Service 7.1) id
	7b071d190002c548 ; Mon, 6 Aug 2012 10:33:08 -0400
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id q76EWZ9Q011112; 
	Mon, 6 Aug 2012 10:32:37 -0400
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: xen-devel@lists.xen.org
Date: Mon,  6 Aug 2012 10:32:18 -0400
Message-Id: <1344263550-3941-7-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.2
In-Reply-To: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
References: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Subject: [Xen-devel] [PATCH 06/18] xsm,
	arch/x86: add distinct XSM hooks for map/unmap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The xsm_iomem_permission and xsm_ioport_permission hooks are intended to
be called by the domain builder, while the mapping calls in
arch/x86/domctl.c are also made by the device model. Because of this,
the two paths should not share the same XSM hooks.

This also adds a missing XSM hook in the unbind IRQ domctl.
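
The new hooks follow the existing XSM ops-table convention: each slot is
optional, and any hook a security module leaves NULL is filled in with a
permissive dummy. A minimal standalone sketch of that pattern (the names
below are simplified stand-ins, not Xen's actual definitions):

```c
/* Sketch of the XSM ops-table fixup pattern used by this patch.
 * Slots left NULL by a module are pointed at permissive dummies,
 * mirroring xen/xsm/dummy.c. Types and names here are illustrative. */
#include <stddef.h>

struct dom;                              /* opaque stand-in for struct domain */

struct xsm_ops_sketch {
    int (*unmap_domain_pirq)(struct dom *d, int irq);
    int (*iomem_mapping)(struct dom *d, unsigned long s, unsigned long e, int allow);
};

/* Permissive defaults: every check succeeds (returns 0). */
static int dummy_unmap_domain_pirq(struct dom *d, int irq)
{ (void)d; (void)irq; return 0; }

static int dummy_iomem_mapping(struct dom *d, unsigned long s, unsigned long e, int allow)
{ (void)d; (void)s; (void)e; (void)allow; return 0; }

#define set_to_dummy_if_null(ops, fn) \
    do { if ( !(ops)->fn ) (ops)->fn = dummy_##fn; } while (0)

static void fixup_ops_sketch(struct xsm_ops_sketch *ops)
{
    set_to_dummy_if_null(ops, unmap_domain_pirq);
    set_to_dummy_if_null(ops, iomem_mapping);
}

/* With no module loaded, all slots are dummies and checks pass. */
int demo(void)
{
    struct xsm_ops_sketch ops = { 0 };
    fixup_ops_sketch(&ops);
    return ops.unmap_domain_pirq(NULL, 3)
         | ops.iomem_mapping(NULL, 0, 0xfff, 1);
}
```

This is why each new hook in the patch comes in three pieces: the
callable wrapper in xsm.h, the dummy in dummy.c, and the FLASK
implementation wired into flask_ops.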

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 xen/arch/x86/domctl.c  |  8 ++++++--
 xen/arch/x86/physdev.c |  2 +-
 xen/include/xsm/xsm.h  | 25 ++++++++++++++++++++++---
 xen/xsm/dummy.c        | 20 +++++++++++++++++++-
 xen/xsm/flask/hooks.c  | 42 ++++++++++++++++++++----------------------
 5 files changed, 68 insertions(+), 29 deletions(-)

diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 135ea6e..3cb4d97 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -800,6 +800,10 @@ long arch_do_domctl(
              !irq_access_permitted(current->domain, bind->machine_irq) )
             goto unbind_out;
 
+        ret = xsm_unbind_pt_irq(d, bind);
+        if ( ret )
+            goto unbind_out;
+
         if ( iommu_enabled )
         {
             spin_lock(&pcidevs_lock);
@@ -837,7 +841,7 @@ long arch_do_domctl(
         if ( unlikely((d = rcu_lock_domain_by_id(domctl->domain)) == NULL) )
             break;
 
-        ret = xsm_iomem_permission(d, mfn, mfn + nr_mfns - 1, add);
+        ret = xsm_iomem_mapping(d, mfn, mfn + nr_mfns - 1, add);
         if ( ret ) {
             rcu_unlock_domain(d);
             break;
@@ -899,7 +903,7 @@ long arch_do_domctl(
         if ( unlikely((d = rcu_lock_domain_by_id(domctl->domain)) == NULL) )
             break;
 
-        ret = xsm_ioport_permission(d, fmp, fmp + np - 1, add);
+        ret = xsm_ioport_mapping(d, fmp, fmp + np - 1, add);
         if ( ret ) {
             rcu_unlock_domain(d);
             break;
diff --git a/xen/arch/x86/physdev.c b/xen/arch/x86/physdev.c
index b0458fd..e434ff4 100644
--- a/xen/arch/x86/physdev.c
+++ b/xen/arch/x86/physdev.c
@@ -239,7 +239,7 @@ int physdev_unmap_pirq(domid_t domid, int pirq)
     if ( !IS_PRIV_FOR(current->domain, d) )
         goto free_domain;
 
-    ret = xsm_irq_permission(d, domain_pirq_to_irq(d, pirq), 0);
+    ret = xsm_unmap_domain_pirq(d, domain_pirq_to_irq(d, pirq));
     if ( ret )
         goto free_domain;
 
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index bef79df..0434c05 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -117,8 +117,10 @@ struct xsm_operations {
 
     char *(*show_irq_sid) (int irq);
     int (*map_domain_pirq) (struct domain *d, int irq, void *data);
+    int (*unmap_domain_pirq) (struct domain *d, int irq);
     int (*irq_permission) (struct domain *d, int pirq, uint8_t allow);
     int (*iomem_permission) (struct domain *d, uint64_t s, uint64_t e, uint8_t allow);
+    int (*iomem_mapping) (struct domain *d, uint64_t s, uint64_t e, uint8_t allow);
     int (*pci_config_permission) (struct domain *d, uint32_t machine_bdf, uint16_t start, uint16_t end, uint8_t access);
 
     int (*get_device_group) (uint32_t machine_bdf);
@@ -177,11 +179,12 @@ struct xsm_operations {
     int (*add_to_physmap) (struct domain *d1, struct domain *d2);
     int (*sendtrigger) (struct domain *d);
     int (*bind_pt_irq) (struct domain *d, struct xen_domctl_bind_pt_irq *bind);
-    int (*unbind_pt_irq) (struct domain *d);
+    int (*unbind_pt_irq) (struct domain *d, struct xen_domctl_bind_pt_irq *bind);
     int (*pin_mem_cacheattr) (struct domain *d);
     int (*ext_vcpucontext) (struct domain *d, uint32_t cmd);
     int (*vcpuextstate) (struct domain *d, uint32_t cmd);
     int (*ioport_permission) (struct domain *d, uint32_t s, uint32_t e, uint8_t allow);
+    int (*ioport_mapping) (struct domain *d, uint32_t s, uint32_t e, uint8_t allow);
 #endif
 };
 
@@ -495,6 +498,11 @@ static inline int xsm_map_domain_pirq (struct domain *d, int irq, void *data)
     return xsm_call(map_domain_pirq(d, irq, data));
 }
 
+static inline int xsm_unmap_domain_pirq (struct domain *d, int irq)
+{
+    return xsm_call(unmap_domain_pirq(d, irq));
+}
+
 static inline int xsm_irq_permission (struct domain *d, int pirq, uint8_t allow)
 {
     return xsm_call(irq_permission(d, pirq, allow));
@@ -505,6 +513,11 @@ static inline int xsm_iomem_permission (struct domain *d, uint64_t s, uint64_t e
     return xsm_call(iomem_permission(d, s, e, allow));
 }
 
+static inline int xsm_iomem_mapping (struct domain *d, uint64_t s, uint64_t e, uint8_t allow)
+{
+    return xsm_call(iomem_mapping(d, s, e, allow));
+}
+
 static inline int xsm_pci_config_permission (struct domain *d, uint32_t machine_bdf, uint16_t start, uint16_t end, uint8_t access)
 {
     return xsm_call(pci_config_permission(d, machine_bdf, start, end, access));
@@ -780,9 +793,10 @@ static inline int xsm_bind_pt_irq(struct domain *d,
     return xsm_call(bind_pt_irq(d, bind));
 }
 
-static inline int xsm_unbind_pt_irq(struct domain *d)
+static inline int xsm_unbind_pt_irq(struct domain *d,
+                                                struct xen_domctl_bind_pt_irq *bind)
 {
-    return xsm_call(unbind_pt_irq(d));
+    return xsm_call(unbind_pt_irq(d, bind));
 }
 
 static inline int xsm_pin_mem_cacheattr(struct domain *d)
@@ -803,6 +817,11 @@ static inline int xsm_ioport_permission (struct domain *d, uint32_t s, uint32_t
 {
     return xsm_call(ioport_permission(d, s, e, allow));
 }
+
+static inline int xsm_ioport_mapping (struct domain *d, uint32_t s, uint32_t e, uint8_t allow)
+{
+    return xsm_call(ioport_mapping(d, s, e, allow));
+}
 #endif /* CONFIG_X86 */
 
 extern struct xsm_operations dummy_xsm_ops;
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index 5d35342..fd075c7 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -395,6 +395,11 @@ static int dummy_map_domain_pirq (struct domain *d, int irq, void *data)
     return 0;
 }
 
+static int dummy_unmap_domain_pirq (struct domain *d, int irq)
+{
+    return 0;
+}
+
 static int dummy_irq_permission (struct domain *d, int pirq, uint8_t allow)
 {
     return 0;
@@ -405,6 +410,11 @@ static int dummy_iomem_permission (struct domain *d, uint64_t s, uint64_t e, uin
     return 0;
 }
 
+static int dummy_iomem_mapping (struct domain *d, uint64_t s, uint64_t e, uint8_t allow)
+{
+    return 0;
+}
+
 static int dummy_pci_config_permission (struct domain *d, uint32_t machine_bdf,
                                         uint16_t start, uint16_t end,
                                         uint8_t access)
@@ -585,7 +595,7 @@ static int dummy_bind_pt_irq (struct domain *d, struct xen_domctl_bind_pt_irq *b
     return 0;
 }
 
-static int dummy_unbind_pt_irq (struct domain *d)
+static int dummy_unbind_pt_irq (struct domain *d, struct xen_domctl_bind_pt_irq *bind)
 {
     return 0;
 }
@@ -609,6 +619,11 @@ static int dummy_ioport_permission (struct domain *d, uint32_t s, uint32_t e, ui
 {
     return 0;
 }
+
+static int dummy_ioport_mapping (struct domain *d, uint32_t s, uint32_t e, uint8_t allow)
+{
+    return 0;
+}
 #endif
 
 struct xsm_operations dummy_xsm_ops;
@@ -693,8 +708,10 @@ void xsm_fixup_ops (struct xsm_operations *ops)
 
     set_to_dummy_if_null(ops, show_irq_sid);
     set_to_dummy_if_null(ops, map_domain_pirq);
+    set_to_dummy_if_null(ops, unmap_domain_pirq);
     set_to_dummy_if_null(ops, irq_permission);
     set_to_dummy_if_null(ops, iomem_permission);
+    set_to_dummy_if_null(ops, iomem_mapping);
     set_to_dummy_if_null(ops, pci_config_permission);
 
     set_to_dummy_if_null(ops, get_device_group);
@@ -757,5 +774,6 @@ void xsm_fixup_ops (struct xsm_operations *ops)
     set_to_dummy_if_null(ops, ext_vcpucontext);
     set_to_dummy_if_null(ops, vcpuextstate);
     set_to_dummy_if_null(ops, ioport_permission);
+    set_to_dummy_if_null(ops, ioport_mapping);
 #endif
 }
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 9262d34..5923710 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -781,43 +781,40 @@ static int flask_map_domain_pirq (struct domain *d, int irq, void *data)
     return rc;
 }
 
-static int flask_irq_permission (struct domain *d, int irq, uint8_t access)
+static int flask_unmap_domain_pirq (struct domain *d, int irq)
 {
-    u32 perm;
-    u32 rsid;
+    u32 sid;
     int rc = -EPERM;
 
-    struct domain_security_struct *ssec, *tsec;
+    struct domain_security_struct *ssec;
     struct avc_audit_data ad;
 
-    rc = domain_has_perm(current->domain, d, SECCLASS_RESOURCE,
-                         resource_to_perm(access));
-
+    rc = domain_has_perm(current->domain, d, SECCLASS_RESOURCE, RESOURCE__REMOVE);
     if ( rc )
         return rc;
 
-    if ( access )
-        perm = RESOURCE__ADD_IRQ;
-    else
-        perm = RESOURCE__REMOVE_IRQ;
-
     ssec = current->domain->ssid;
-    tsec = d->ssid;
 
-    rc = get_irq_sid(irq, &rsid, &ad);
-    if ( rc )
-        return rc;
-
-    rc = avc_has_perm(ssec->sid, rsid, SECCLASS_RESOURCE, perm, &ad);
+    if ( irq >= nr_irqs_gsi ) {
+        /* TODO support for MSI here */
+        return 0;
+    } else {
+        rc = get_irq_sid(irq, &sid, &ad);
+    }
     if ( rc )
         return rc;
 
-    if ( access )
-        rc = avc_has_perm(tsec->sid, rsid, SECCLASS_RESOURCE, 
-                            RESOURCE__USE, &ad);
+    rc = avc_has_perm(ssec->sid, sid, SECCLASS_RESOURCE, RESOURCE__REMOVE_IRQ, &ad);
     return rc;
 }
 
+static int flask_irq_permission (struct domain *d, int pirq, uint8_t access)
+{
+    /* the PIRQ number is not useful; real IRQ is checked during mapping */
+    return domain_has_perm(current->domain, d, SECCLASS_RESOURCE,
+                           resource_to_perm(access));
+}
+
 struct iomem_has_perm_data {
     struct domain_security_struct *ssec, *tsec;
     u32 perm;
@@ -1507,7 +1504,7 @@ static int flask_bind_pt_irq (struct domain *d, struct xen_domctl_bind_pt_irq *b
     return avc_has_perm(tsec->sid, rsid, SECCLASS_RESOURCE, RESOURCE__USE, &ad);
 }
 
-static int flask_unbind_pt_irq (struct domain *d)
+static int flask_unbind_pt_irq (struct domain *d, struct xen_domctl_bind_pt_irq *bind)
 {
     return domain_has_perm(current->domain, d, SECCLASS_RESOURCE, RESOURCE__REMOVE);
 }
@@ -1627,6 +1624,7 @@ static struct xsm_operations flask_ops = {
     .show_irq_sid = flask_show_irq_sid,
 
     .map_domain_pirq = flask_map_domain_pirq,
+    .unmap_domain_pirq = flask_unmap_domain_pirq,
     .irq_permission = flask_irq_permission,
     .iomem_permission = flask_iomem_permission,
     .pci_config_permission = flask_pci_config_permission,
-- 
1.7.11.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 14:32:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:32:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOMM-0002ia-4V; Mon, 06 Aug 2012 14:32:42 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1SyOMK-0002hw-QP
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 14:32:41 +0000
Received: from [85.158.138.51:25443] by server-9.bemta-3.messagelabs.com id
	9E/10-14615-885DF105; Mon, 06 Aug 2012 14:32:40 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-9.tower-174.messagelabs.com!1344263558!30697575!1
X-Originating-IP: [63.239.67.10]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25763 invoked from network); 6 Aug 2012 14:32:39 -0000
Received: from emvm-gh1-uea09.nsa.gov (HELO nsa.gov) (63.239.67.10)
	by server-9.tower-174.messagelabs.com with SMTP;
	6 Aug 2012 14:32:39 -0000
X-TM-IMSS-Message-ID: <7b071b740002c547@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov
	([63.239.67.10]) with ESMTP (TREND IMSS SMTP Service 7.1) id
	7b071b740002c547 ; Mon, 6 Aug 2012 10:33:07 -0400
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id q76EWZ9P011112; 
	Mon, 6 Aug 2012 10:32:37 -0400
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: xen-devel@lists.xen.org
Date: Mon,  6 Aug 2012 10:32:17 -0400
Message-Id: <1344263550-3941-6-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.2
In-Reply-To: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
References: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Subject: [Xen-devel] [PATCH 05/18] flask/policy: Add domain relabel example
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This adds the nomigrate_t type to the example FLASK policy, allowing
creation of domains that dom0 cannot access once they are built.

Example domain configuration snippet:
  seclabel='customer_1:vm_r:nomigrate_t'
  init_seclabel='customer_1:vm_r:nomigrate_t_building'

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 docs/misc/xsm-flask.txt                      |  2 +
 tools/flask/policy/policy/modules/xen/xen.if | 56 +++++++++++++++++++++-------
 tools/flask/policy/policy/modules/xen/xen.te | 10 +++++
 3 files changed, 55 insertions(+), 13 deletions(-)

diff --git a/docs/misc/xsm-flask.txt b/docs/misc/xsm-flask.txt
index 6b0d327..0778a28 100644
--- a/docs/misc/xsm-flask.txt
+++ b/docs/misc/xsm-flask.txt
@@ -60,6 +60,8 @@ that can be used without dom0 disaggregation. The main types for domUs are:
  - domU_t is a domain that can communicate with any other domU_t
  - isolated_domU_t can only communicate with dom0
  - prot_domU_t is a domain type whose creation can be disabled with a boolean
+ - nomigrate_t is a domain that must be created via the nomigrate_t_building
+   type, and whose memory cannot be read by dom0 once created
 
 HVM domains with stubdomain device models use two types (one per domain):
  - domHVM_t is an HVM domain that uses a stubdomain device model
diff --git a/tools/flask/policy/policy/modules/xen/xen.if b/tools/flask/policy/policy/modules/xen/xen.if
index 87ef165..4de99c8 100644
--- a/tools/flask/policy/policy/modules/xen/xen.if
+++ b/tools/flask/policy/policy/modules/xen/xen.if
@@ -9,24 +9,47 @@
 #   Declare a type as a domain type, and allow basic domain setup
 define(`declare_domain', `
 	type $1, domain_type`'ifelse(`$#', `1', `', `,shift($@)');
+	type $1_channel, event_type;
+	type_transition $1 domain_type:event $1_channel;
 	allow $1 $1:grant { query setup };
 	allow $1 $1:mmu { adjust physmap map_read map_write stat pinpage };
 	allow $1 $1:hvm { getparam setparam };
 ')
 
-# create_domain(priv, target)
-#   Allow a domain to be created
-define(`create_domain', `
+# declare_build_label(type)
+#   Declare a paired _building type for the given domain type
+define(`declare_build_label', `
+	type $1_building, domain_type;
+	type_transition $1_building domain_type:event $1_channel;
+	allow $1_building $1 : domain transition;
+')
+
+define(`create_domain_common', `
 	allow $1 $2:domain { create max_vcpus setdomainmaxmem setaddrsize
-			getdomaininfo hypercall setvcpucontext scheduler
-			unpause getvcpuinfo getvcpuextstate getaddrsize
-			getvcpuaffinity };
+			getdomaininfo hypercall setvcpucontext setextvcpucontext
+			scheduler getvcpuinfo getvcpuextstate getaddrsize
+			getvcpuaffinity setvcpuaffinity };
 	allow $1 $2:security check_context;
 	allow $1 $2:shadow enable;
 	allow $1 $2:mmu {map_read map_write adjust memorymap physmap pinpage};
 	allow $1 $2:grant setup;
-	allow $1 $2:hvm { cacheattr getparam hvmctl irqlevel pciroute setparam pcilevel trackdirtyvram };
-	allow $1 $2_$1_channel:event create;
+	allow $1 $2:hvm { cacheattr getparam hvmctl irqlevel pciroute sethvmc setparam pcilevel trackdirtyvram };
+')
+
+# create_domain(priv, target)
+#   Allow a domain to be created directly
+define(`create_domain', `
+	create_domain_common($1, $2)
+	allow $1 $2_channel:event create;
+')
+
+# create_domain_build_label(priv, target)
+#   Allow a domain to be created via its domain build label
+define(`create_domain_build_label', `
+	create_domain_common($1, $2_building)
+	allow $1 $2_channel:event create;
+	allow $1 $2_building:domain2 relabelfrom;
+	allow $1 $2:domain2 relabelto;
 ')
 
 # manage_domain(priv, target)
@@ -37,6 +60,15 @@ define(`manage_domain', `
 			setvcpuaffinity setdomainmaxmem };
 ')
 
+# migrate_domain_out(priv, target)
+#   Allow creation of a snapshot or migration image from a domain
+#   (inbound migration is the same as domain creation)
+define(`migrate_domain_out', `
+	allow $1 $2:hvm { gethvmc getparam irqlevel };
+	allow $1 $2:mmu { stat pageinfo map_read };
+	allow $1 $2:domain { getaddrsize getvcpucontext getextvcpucontext getvcpuextstate pause destroy };
+')
+
 ################################################################################
 #
 # Inter-domain communication
@@ -47,8 +79,6 @@ define(`manage_domain', `
 #   This allows an event channel to be created from domains with labels
 #   <source> to <dest> and will label it <chan-label>
 define(`create_channel', `
-	type $3, event_type;
-	type_transition $1 $2:event $3;
 	allow $1 $3:event { create send status };
 	allow $3 $2:event { bind };
 ')
@@ -56,8 +86,8 @@ define(`create_channel', `
 # domain_event_comms(dom1, dom2)
 #   Allow two domain types to communicate using event channels
 define(`domain_event_comms', `
-	create_channel($1, $2, $1_$2_channel)
-	create_channel($2, $1, $2_$1_channel)
+	create_channel($1, $2, $1_channel)
+	create_channel($2, $1, $2_channel)
 ')
 
 # domain_comms(dom1, dom2)
@@ -72,7 +102,7 @@ define(`domain_comms', `
 #   Allow a domain types to communicate with others of its type using grants
 #   and event channels (this includes event channels to DOMID_SELF)
 define(`domain_self_comms', `
-	create_channel($1, $1, $1_self_channel)
+	create_channel($1, $1, $1_channel)
 	allow $1 $1:grant { map_read map_write copy unmap };
 ')
 
diff --git a/tools/flask/policy/policy/modules/xen/xen.te b/tools/flask/policy/policy/modules/xen/xen.te
index f9bc061..40c4c0a 100644
--- a/tools/flask/policy/policy/modules/xen/xen.te
+++ b/tools/flask/policy/policy/modules/xen/xen.te
@@ -90,6 +90,7 @@ create_domain(dom0_t, isolated_domU_t)
 manage_domain(dom0_t, isolated_domU_t)
 domain_comms(dom0_t, isolated_domU_t)
 
+# Declare a boolean that denies creation of prot_domU_t domains
 gen_bool(prot_doms_locked, false)
 declare_domain(prot_domU_t)
 if (!prot_doms_locked) {
@@ -111,6 +112,15 @@ manage_domain(dom0_t, dm_dom_t)
 domain_comms(dom0_t, dm_dom_t)
 device_model(dm_dom_t, domHVM_t)
 
+# nomigrate_t must be built via the nomigrate_t_building label; once built,
+# dom0 cannot read its memory.
+declare_domain(nomigrate_t)
+declare_build_label(nomigrate_t)
+create_domain_build_label(dom0_t, nomigrate_t)
+manage_domain(dom0_t, nomigrate_t)
+domain_comms(dom0_t, nomigrate_t)
+domain_self_comms(nomigrate_t)
+
 ###############################################################################
 #
 # Device delegation
-- 
1.7.11.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 14:32:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:32:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOMM-0002ia-4V; Mon, 06 Aug 2012 14:32:42 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1SyOMK-0002hw-QP
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 14:32:41 +0000
Received: from [85.158.138.51:25443] by server-9.bemta-3.messagelabs.com id
	9E/10-14615-885DF105; Mon, 06 Aug 2012 14:32:40 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-9.tower-174.messagelabs.com!1344263558!30697575!1
X-Originating-IP: [63.239.67.10]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25763 invoked from network); 6 Aug 2012 14:32:39 -0000
Received: from emvm-gh1-uea09.nsa.gov (HELO nsa.gov) (63.239.67.10)
	by server-9.tower-174.messagelabs.com with SMTP;
	6 Aug 2012 14:32:39 -0000
X-TM-IMSS-Message-ID: <7b071b740002c547@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov
	([63.239.67.10]) with ESMTP (TREND IMSS SMTP Service 7.1) id
	7b071b740002c547 ; Mon, 6 Aug 2012 10:33:07 -0400
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id q76EWZ9P011112; 
	Mon, 6 Aug 2012 10:32:37 -0400
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: xen-devel@lists.xen.org
Date: Mon,  6 Aug 2012 10:32:17 -0400
Message-Id: <1344263550-3941-6-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.2
In-Reply-To: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
References: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Subject: [Xen-devel] [PATCH 05/18] flask/policy: Add domain relabel example
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This adds the nomigrate_t type to the example FLASK policy which allows
domains to be created that dom0 cannot access after building.

Example domain configuration snippet:
  seclabel='customer_1:vm_r:nomigrate_t'
  init_seclabel='customer_1:vm_r:nomigrate_t_building'

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 docs/misc/xsm-flask.txt                      |  2 +
 tools/flask/policy/policy/modules/xen/xen.if | 56 +++++++++++++++++++++-------
 tools/flask/policy/policy/modules/xen/xen.te | 10 +++++
 3 files changed, 55 insertions(+), 13 deletions(-)

diff --git a/docs/misc/xsm-flask.txt b/docs/misc/xsm-flask.txt
index 6b0d327..0778a28 100644
--- a/docs/misc/xsm-flask.txt
+++ b/docs/misc/xsm-flask.txt
@@ -60,6 +60,8 @@ that can be used without dom0 disaggregation. The main types for domUs are:
  - domU_t is a domain that can communicate with any other domU_t
  - isolated_domU_t can only communicate with dom0
  - prot_domU_t is a domain type whose creation can be disabled with a boolean
+ - nomigrate_t is a domain that must be created via the nomigrate_t_building
+   type, and whose memory cannot be read by dom0 once created
 
 HVM domains with stubdomain device models use two types (one per domain):
  - domHVM_t is an HVM domain that uses a stubdomain device model
diff --git a/tools/flask/policy/policy/modules/xen/xen.if b/tools/flask/policy/policy/modules/xen/xen.if
index 87ef165..4de99c8 100644
--- a/tools/flask/policy/policy/modules/xen/xen.if
+++ b/tools/flask/policy/policy/modules/xen/xen.if
@@ -9,24 +9,47 @@
 #   Declare a type as a domain type, and allow basic domain setup
 define(`declare_domain', `
 	type $1, domain_type`'ifelse(`$#', `1', `', `,shift($@)');
+	type $1_channel, event_type;
+	type_transition $1 domain_type:event $1_channel;
 	allow $1 $1:grant { query setup };
 	allow $1 $1:mmu { adjust physmap map_read map_write stat pinpage };
 	allow $1 $1:hvm { getparam setparam };
 ')
 
-# create_domain(priv, target)
-#   Allow a domain to be created
-define(`create_domain', `
+# declare_build_label(type)
+#   Declare a paired _building type for the given domain type
+define(`declare_build_label', `
+	type $1_building, domain_type;
+	type_transition $1_building domain_type:event $1_channel;
+	allow $1_building $1 : domain transition;
+')
+
+define(`create_domain_common', `
 	allow $1 $2:domain { create max_vcpus setdomainmaxmem setaddrsize
-			getdomaininfo hypercall setvcpucontext scheduler
-			unpause getvcpuinfo getvcpuextstate getaddrsize
-			getvcpuaffinity };
+			getdomaininfo hypercall setvcpucontext setextvcpucontext
+			scheduler getvcpuinfo getvcpuextstate getaddrsize
+			getvcpuaffinity setvcpuaffinity };
 	allow $1 $2:security check_context;
 	allow $1 $2:shadow enable;
 	allow $1 $2:mmu {map_read map_write adjust memorymap physmap pinpage};
 	allow $1 $2:grant setup;
-	allow $1 $2:hvm { cacheattr getparam hvmctl irqlevel pciroute setparam pcilevel trackdirtyvram };
-	allow $1 $2_$1_channel:event create;
+	allow $1 $2:hvm { cacheattr getparam hvmctl irqlevel pciroute sethvmc setparam pcilevel trackdirtyvram };
+')
+
+# create_domain(priv, target)
+#   Allow a domain to be created directly
+define(`create_domain', `
+	create_domain_common($1, $2)
+	allow $1 $2_channel:event create;
+')
+
+# create_domain_build_label(priv, target)
+#   Allow a domain to be created via its domain build label
+define(`create_domain_build_label', `
+	create_domain_common($1, $2_building)
+	allow $1 $2_channel:event create;
+	allow $1 $2_building:domain2 relabelfrom;
+	allow $1 $2:domain2 relabelto;
 ')
 
 # manage_domain(priv, target)
@@ -37,6 +60,15 @@ define(`manage_domain', `
 			setvcpuaffinity setdomainmaxmem };
 ')
 
+# migrate_domain_out(priv, target)
+#   Allow creation of a snapshot or migration image from a domain
+#   (inbound migration is the same as domain creation)
+define(`migrate_domain_out', `
+	allow $1 $2:hvm { gethvmc getparam irqlevel };
+	allow $1 $2:mmu { stat pageinfo map_read };
+	allow $1 $2:domain { getaddrsize getvcpucontext getextvcpucontext getvcpuextstate pause destroy };
+')
+
 ################################################################################
 #
 # Inter-domain communication
@@ -47,8 +79,6 @@ define(`manage_domain', `
 #   This allows an event channel to be created from domains with labels
 #   <source> to <dest> and will label it <chan-label>
 define(`create_channel', `
-	type $3, event_type;
-	type_transition $1 $2:event $3;
 	allow $1 $3:event { create send status };
 	allow $3 $2:event { bind };
 ')
@@ -56,8 +86,8 @@ define(`create_channel', `
 # domain_event_comms(dom1, dom2)
 #   Allow two domain types to communicate using event channels
 define(`domain_event_comms', `
-	create_channel($1, $2, $1_$2_channel)
-	create_channel($2, $1, $2_$1_channel)
+	create_channel($1, $2, $1_channel)
+	create_channel($2, $1, $2_channel)
 ')
 
 # domain_comms(dom1, dom2)
@@ -72,7 +102,7 @@ define(`domain_comms', `
 #   Allow a domain types to communicate with others of its type using grants
 #   and event channels (this includes event channels to DOMID_SELF)
 define(`domain_self_comms', `
-	create_channel($1, $1, $1_self_channel)
+	create_channel($1, $1, $1_channel)
 	allow $1 $1:grant { map_read map_write copy unmap };
 ')
 
diff --git a/tools/flask/policy/policy/modules/xen/xen.te b/tools/flask/policy/policy/modules/xen/xen.te
index f9bc061..40c4c0a 100644
--- a/tools/flask/policy/policy/modules/xen/xen.te
+++ b/tools/flask/policy/policy/modules/xen/xen.te
@@ -90,6 +90,7 @@ create_domain(dom0_t, isolated_domU_t)
 manage_domain(dom0_t, isolated_domU_t)
 domain_comms(dom0_t, isolated_domU_t)
 
+# Declare a boolean that, when enabled, denies creation of prot_domU_t domains
 gen_bool(prot_doms_locked, false)
 declare_domain(prot_domU_t)
 if (!prot_doms_locked) {
@@ -111,6 +112,15 @@ manage_domain(dom0_t, dm_dom_t)
 domain_comms(dom0_t, dm_dom_t)
 device_model(dm_dom_t, domHVM_t)
 
+# nomigrate_t must be built via the nomigrate_t_building label; once built,
+# dom0 cannot read its memory.
+declare_domain(nomigrate_t)
+declare_build_label(nomigrate_t)
+create_domain_build_label(dom0_t, nomigrate_t)
+manage_domain(dom0_t, nomigrate_t)
+domain_comms(dom0_t, nomigrate_t)
+domain_self_comms(nomigrate_t)
+
 ###############################################################################
 #
 # Device delegation
-- 
1.7.11.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 14:32:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:32:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOMR-0002kr-Bx; Mon, 06 Aug 2012 14:32:47 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1SyOMP-0002jc-E7
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 14:32:45 +0000
Received: from [85.158.138.51:27845] by server-10.bemta-3.messagelabs.com id
	DF/CE-07905-C85DF105; Mon, 06 Aug 2012 14:32:44 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-7.tower-174.messagelabs.com!1344263562!21732863!1
X-Originating-IP: [63.239.67.10]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11253 invoked from network); 6 Aug 2012 14:32:43 -0000
Received: from emvm-gh1-uea09.nsa.gov (HELO nsa.gov) (63.239.67.10)
	by server-7.tower-174.messagelabs.com with SMTP;
	6 Aug 2012 14:32:43 -0000
X-TM-IMSS-Message-ID: <7b0729970002c550@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov
	([63.239.67.10]) with ESMTP (TREND IMSS SMTP Service 7.1) id
	7b0729970002c550 ; Mon, 6 Aug 2012 10:33:11 -0400
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id q76EWZ9a011112; 
	Mon, 6 Aug 2012 10:32:40 -0400
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: xen-devel@lists.xen.org
Date: Mon,  6 Aug 2012 10:32:28 -0400
Message-Id: <1344263550-3941-17-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.2
In-Reply-To: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
References: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Subject: [Xen-devel] [PATCH 16/18] arch/x86: use XSM hooks for get_pg_owner
	access checks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This requires introducing a new XSM hook for do_mmuext_op to validate
access to the remote (foreign) domain within that hypercall.
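The default (dummy) policy for the new hook preserves the old IS_PRIV_FOR()
behaviour. As a rough sketch of that check (simplified stand-in types, not
the real Xen structures):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical sketch of the dummy (XSM-disabled) policy that the new
 * mmuext_op hook defaults to; struct domain and is_priv_for() here are
 * simplified stand-ins, not the real Xen definitions. */
struct domain {
    int domain_id;
    bool is_privileged;   /* models IS_PRIV_FOR(): a dom0-like control domain */
};

static bool is_priv_for(const struct domain *d, const struct domain *target)
{
    (void)target;
    return d->is_privileged;
}

/* Mirrors the default check: a domain may always operate on itself, but
 * naming a foreign page owner requires privilege; otherwise the hypercall
 * fails before any MMU extended op is processed. */
static int xsm_mmuext_op_sketch(const struct domain *d, const struct domain *f)
{
    if (d != f && !is_priv_for(d, f))
        return -1;   /* stands in for -EPERM */
    return 0;
}
```

With FLASK enabled, this default is replaced by an avc check against the new
MMU__MMUEXT_OP permission, so policy can grant the access per label pair.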

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 tools/flask/policy/policy/flask/access_vectors |  1 +
 tools/flask/policy/policy/modules/xen/xen.if   |  4 ++--
 xen/arch/x86/mm.c                              | 25 ++++++++-----------------
 xen/include/xsm/dummy.h                        | 20 ++++++++++++++++++--
 xen/include/xsm/xsm.h                          | 14 +++++++++++---
 xen/xsm/dummy.c                                |  1 +
 xen/xsm/flask/hooks.c                          |  9 ++++++++-
 xen/xsm/flask/include/av_perm_to_string.h      |  1 +
 xen/xsm/flask/include/av_permissions.h         |  1 +
 9 files changed, 51 insertions(+), 25 deletions(-)

diff --git a/tools/flask/policy/policy/flask/access_vectors b/tools/flask/policy/policy/flask/access_vectors
index 2986b40..5e897e2 100644
--- a/tools/flask/policy/policy/flask/access_vectors
+++ b/tools/flask/policy/policy/flask/access_vectors
@@ -141,6 +141,7 @@ class mmu
     mfnlist
     memorymap
     remote_remap
+    mmuext_op
 }
 
 class shadow
diff --git a/tools/flask/policy/policy/modules/xen/xen.if b/tools/flask/policy/policy/modules/xen/xen.if
index 796698b..78083c3 100644
--- a/tools/flask/policy/policy/modules/xen/xen.if
+++ b/tools/flask/policy/policy/modules/xen/xen.if
@@ -7,7 +7,7 @@
 ################################################################################
 define(`declare_domain_common', `
 	allow $1 $2:grant { query setup };
-	allow $1 $2:mmu { adjust physmap map_read map_write stat pinpage updatemp };
+	allow $1 $2:mmu { adjust physmap map_read map_write stat pinpage updatemp mmuext_op };
 	allow $1 $2:hvm { getparam setparam };
 ')
 
@@ -51,7 +51,7 @@ define(`create_domain_common', `
 	allow $1 $2:domain2 { set_cpuid settsc };
 	allow $1 $2:security check_context;
 	allow $1 $2:shadow enable;
-	allow $1 $2:mmu {map_read map_write adjust memorymap physmap pinpage};
+	allow $1 $2:mmu { map_read map_write adjust memorymap physmap pinpage mmuext_op };
 	allow $1 $2:grant setup;
 	allow $1 $2:hvm { cacheattr getparam hvmctl irqlevel pciroute sethvmc setparam pcilevel trackdirtyvram };
 ')
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 1b352df..4bc3ab5 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -2882,11 +2882,6 @@ static struct domain *get_pg_owner(domid_t domid)
         pg_owner = rcu_lock_domain(dom_io);
         break;
     case DOMID_XEN:
-        if ( !IS_PRIV(curr) )
-        {
-            MEM_LOG("Cannot set foreign dom");
-            break;
-        }
         pg_owner = rcu_lock_domain(dom_xen);
         break;
     default:
@@ -2895,12 +2890,6 @@ static struct domain *get_pg_owner(domid_t domid)
             MEM_LOG("Unknown domain '%u'", domid);
             break;
         }
-        if ( !IS_PRIV_FOR(curr, pg_owner) )
-        {
-            MEM_LOG("Cannot set foreign dom");
-            rcu_unlock_domain(pg_owner);
-            pg_owner = NULL;
-        }
         break;
     }
 
@@ -3008,6 +2997,13 @@ int do_mmuext_op(
         goto out;
     }
 
+    rc = xsm_mmuext_op(d, pg_owner);
+    if ( rc )
+    {
+        rcu_unlock_domain(pg_owner);
+        goto out;
+    }
+
     for ( i = 0; i < count; i++ )
     {
         if ( hypercall_preempt_check() )
@@ -3483,11 +3479,6 @@ int do_mmu_update(
             rc = -EINVAL;
             goto out;
         }
-        if ( !IS_PRIV_FOR(d, pt_owner) )
-        {
-            rc = -ESRCH;
-            goto out;
-        }
     }
 
     if ( (pg_owner = get_pg_owner((uint16_t)foreigndom)) == NULL )
@@ -3643,7 +3634,7 @@ int do_mmu_update(
             mfn = req.ptr >> PAGE_SHIFT;
             gpfn = req.val;
 
-            rc = xsm_mmu_machphys_update(d, mfn);
+            rc = xsm_mmu_machphys_update(d, pg_owner, mfn);
             if ( rc )
                 break;
 
diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
index d796a33..28e1d2b 100644
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -803,19 +803,35 @@ static XSM_DEFAULT(int, domain_memory_map) (struct domain *d)
 }
 
 static XSM_DEFAULT(int, mmu_normal_update) (struct domain *d, struct domain *t,
-                                    struct domain *f, intpte_t fpte)
+                                            struct domain *f, intpte_t fpte)
 {
+    if ( d != t && !IS_PRIV_FOR(d, t) )
+        return -EPERM;
+    if ( d != f && !IS_PRIV_FOR(d, f) )
+        return -EPERM;
     return 0;
 }
 
-static XSM_DEFAULT(int, mmu_machphys_update) (struct domain *d, unsigned long mfn)
+static XSM_DEFAULT(int, mmu_machphys_update) (struct domain *d, struct domain *f,
+                                              unsigned long mfn)
 {
+    if ( d != f && !IS_PRIV_FOR(d, f) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, mmuext_op) (struct domain *d, struct domain *f)
+{
+    if ( d != f && !IS_PRIV_FOR(d, f) )
+        return -EPERM;
     return 0;
 }
 
 static XSM_DEFAULT(int, update_va_mapping) (struct domain *d, struct domain *f, 
                                                             l1_pgentry_t pte)
 {
+    if ( d != f && !IS_PRIV_FOR(d, f) )
+        return -EPERM;
     return 0;
 }
 
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index fa9f50e..4134877 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -177,7 +177,9 @@ struct xsm_operations {
     int (*domain_memory_map) (struct domain *d);
     int (*mmu_normal_update) (struct domain *d, struct domain *t,
                               struct domain *f, intpte_t fpte);
-    int (*mmu_machphys_update) (struct domain *d, unsigned long mfn);
+    int (*mmu_machphys_update) (struct domain *d, struct domain *f,
+                                unsigned long mfn);
+    int (*mmuext_op) (struct domain *d, struct domain *f);
     int (*update_va_mapping) (struct domain *d, struct domain *f, 
                                                             l1_pgentry_t pte);
     int (*add_to_physmap) (struct domain *d1, struct domain *d2);
@@ -797,9 +799,15 @@ static inline int xsm_mmu_normal_update (struct domain *d, struct domain *t,
     return xsm_ops->mmu_normal_update(d, t, f, fpte);
 }
 
-static inline int xsm_mmu_machphys_update (struct domain *d, unsigned long mfn)
+static inline int xsm_mmu_machphys_update (struct domain *d, struct domain *f,
+                                           unsigned long mfn)
 {
-    return xsm_ops->mmu_machphys_update(d, mfn);
+    return xsm_ops->mmu_machphys_update(d, f, mfn);
+}
+
+static inline int xsm_mmuext_op (struct domain *d, struct domain *f)
+{
+    return xsm_ops->mmuext_op(d, f);
 }
 
 static inline int xsm_update_va_mapping(struct domain *d, struct domain *f, 
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index aebe333..1bf9de9 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -161,6 +161,7 @@ void xsm_fixup_ops (struct xsm_operations *ops)
     set_to_dummy_if_null(ops, domain_memory_map);
     set_to_dummy_if_null(ops, mmu_normal_update);
     set_to_dummy_if_null(ops, mmu_machphys_update);
+    set_to_dummy_if_null(ops, mmuext_op);
     set_to_dummy_if_null(ops, update_va_mapping);
     set_to_dummy_if_null(ops, add_to_physmap);
     set_to_dummy_if_null(ops, remove_from_physmap);
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index dae587c..f743be1 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -1385,7 +1385,8 @@ static int flask_mmu_normal_update(struct domain *d, struct domain *t,
     return avc_has_perm(dsid, fsid, SECCLASS_MMU, map_perms, &ad);
 }
 
-static int flask_mmu_machphys_update(struct domain *d, unsigned long mfn)
+static int flask_mmu_machphys_update(struct domain *d, struct domain *f,
+                                     unsigned long mfn)
 {
     int rc = 0;
     u32 dsid, psid;
@@ -1398,6 +1399,11 @@ static int flask_mmu_machphys_update(struct domain *d, unsigned long mfn)
     return avc_has_perm(dsid, psid, SECCLASS_MMU, MMU__UPDATEMP, NULL);
 }
 
+static int flask_mmuext_op(struct domain *d, struct domain *f)
+{
+    return domain_has_perm(d, f, SECCLASS_MMU, MMU__MMUEXT_OP);
+}
+
 static int flask_update_va_mapping(struct domain *d, struct domain *f,
                                    l1_pgentry_t pte)
 {
@@ -1707,6 +1713,7 @@ static struct xsm_operations flask_ops = {
     .domain_memory_map = flask_domain_memory_map,
     .mmu_normal_update = flask_mmu_normal_update,
     .mmu_machphys_update = flask_mmu_machphys_update,
+    .mmuext_op = flask_mmuext_op,
     .update_va_mapping = flask_update_va_mapping,
     .add_to_physmap = flask_add_to_physmap,
     .remove_from_physmap = flask_remove_from_physmap,
diff --git a/xen/xsm/flask/include/av_perm_to_string.h b/xen/xsm/flask/include/av_perm_to_string.h
index 5d5a45a..5d4f316 100644
--- a/xen/xsm/flask/include/av_perm_to_string.h
+++ b/xen/xsm/flask/include/av_perm_to_string.h
@@ -111,6 +111,7 @@
    S_(SECCLASS_MMU, MMU__MFNLIST, "mfnlist")
    S_(SECCLASS_MMU, MMU__MEMORYMAP, "memorymap")
    S_(SECCLASS_MMU, MMU__REMOTE_REMAP, "remote_remap")
+   S_(SECCLASS_MMU, MMU__MMUEXT_OP, "mmuext_op")
    S_(SECCLASS_SHADOW, SHADOW__DISABLE, "disable")
    S_(SECCLASS_SHADOW, SHADOW__ENABLE, "enable")
    S_(SECCLASS_SHADOW, SHADOW__LOGDIRTY, "logdirty")
diff --git a/xen/xsm/flask/include/av_permissions.h b/xen/xsm/flask/include/av_permissions.h
index e6d6a6d..f970b50 100644
--- a/xen/xsm/flask/include/av_permissions.h
+++ b/xen/xsm/flask/include/av_permissions.h
@@ -117,6 +117,7 @@
 #define MMU__MFNLIST                              0x00000400UL
 #define MMU__MEMORYMAP                            0x00000800UL
 #define MMU__REMOTE_REMAP                         0x00001000UL
+#define MMU__MMUEXT_OP                            0x00002000UL
 
 #define SHADOW__DISABLE                           0x00000001UL
 #define SHADOW__ENABLE                            0x00000002UL
-- 
1.7.11.2



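With this split, a guest intended for a restricted runtime label can name
both labels in its xl configuration. A hypothetical fragment (the exact
security context strings depend on the local FLASK policy; these match the
nomigrate_t example added to xen.te earlier in this series):

```
# Hypothetical example: the domain is created and built under the
# init_seclabel context, then relabeled to seclabel before it runs.
name = "protected-guest"
init_seclabel = "system_u:system_r:nomigrate_t_building"
seclabel      = "system_u:system_r:nomigrate_t"
```

If only seclabel is given, behaviour is unchanged: the domain is created
and run under that single label, as before.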
+#define MMU__MMUEXT_OP                            0x00002000UL
 
 #define SHADOW__DISABLE                           0x00000001UL
 #define SHADOW__ENABLE                            0x00000002UL
-- 
1.7.11.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 14:32:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:32:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOMM-0002ik-G0; Mon, 06 Aug 2012 14:32:42 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1SyOML-0002hz-0A
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 14:32:41 +0000
Received: from [85.158.138.51:25435] by server-12.bemta-3.messagelabs.com id
	96/4D-21301-885DF105; Mon, 06 Aug 2012 14:32:40 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-7.tower-174.messagelabs.com!1344263558!21732848!1
X-Originating-IP: [63.239.67.10]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10794 invoked from network); 6 Aug 2012 14:32:39 -0000
Received: from emvm-gh1-uea09.nsa.gov (HELO nsa.gov) (63.239.67.10)
	by server-7.tower-174.messagelabs.com with SMTP;
	6 Aug 2012 14:32:39 -0000
X-TM-IMSS-Message-ID: <7b071aa90002c546@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov
	([63.239.67.10]) with ESMTP (TREND IMSS SMTP Service 7.1) id
	7b071aa90002c546 ; Mon, 6 Aug 2012 10:33:07 -0400
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id q76EWZ9O011112; 
	Mon, 6 Aug 2012 10:32:37 -0400
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: xen-devel@lists.xen.org
Date: Mon,  6 Aug 2012 10:32:16 -0400
Message-Id: <1344263550-3941-5-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.2
In-Reply-To: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
References: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Subject: [Xen-devel] [PATCH 04/18] libxl: introduce XSM relabel on build
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Allow a domain to be built under one security label and run using a
different label. This can be used to prevent the domain builder or
control domain from being able to access a guest domain's memory via
map_foreign_range, except during the build process where such access
is required.

Note: this does not provide complete protection from a malicious dom0;
mappings created during the build process may persist after the relabel,
and could be used to indirectly access the guest's memory.
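
With these changes, the two config keys parsed in xl_cmdimpl.c below can
be combined in a guest config roughly as follows (a sketch only; the
FLASK context strings here are illustrative and depend entirely on the
loaded policy):

```
# Label applied at creation, while the domain builder constructs the domain
init_seclabel = "system_u:system_r:domU_t_building"
# Label the domain is relabeled to (via exec_ssidref) once building completes
seclabel = "system_u:system_r:domU_t"
```

If only seclabel is given, it is used as the creation label and no
relabel occurs, matching the existing behaviour.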

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 tools/libxc/xc_flask.c      | 10 ++++++++++
 tools/libxc/xenctrl.h       |  1 +
 tools/libxl/libxl_create.c  |  4 ++++
 tools/libxl/libxl_types.idl |  1 +
 tools/libxl/xl_cmdimpl.c    | 20 +++++++++++++++++++-
 5 files changed, 35 insertions(+), 1 deletion(-)

diff --git a/tools/libxc/xc_flask.c b/tools/libxc/xc_flask.c
index 80c5a2d..face1e0 100644
--- a/tools/libxc/xc_flask.c
+++ b/tools/libxc/xc_flask.c
@@ -422,6 +422,16 @@ int xc_flask_setavc_threshold(xc_interface *xch, int threshold)
     return xc_flask_op(xch, &op);
 }
 
+int xc_flask_relabel_domain(xc_interface *xch, int domid, uint32_t sid)
+{
+    DECLARE_FLASK_OP;
+    op.cmd = FLASK_RELABEL_DOMAIN;
+    op.u.relabel.domid = domid;
+    op.u.relabel.sid = sid;
+
+    return xc_flask_op(xch, &op);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/tools/libxc/xenctrl.h b/tools/libxc/xenctrl.h
index 91fbb02..2ac74d9 100644
--- a/tools/libxc/xenctrl.h
+++ b/tools/libxc/xenctrl.h
@@ -2152,6 +2152,7 @@ int xc_flask_policyvers(xc_interface *xc_handle);
 int xc_flask_avc_hashstats(xc_interface *xc_handle, char *buf, int size);
 int xc_flask_getavc_threshold(xc_interface *xc_handle);
 int xc_flask_setavc_threshold(xc_interface *xc_handle, int threshold);
+int xc_flask_relabel_domain(xc_interface *xch, int domid, uint32_t sid);
 
 struct elf_binary;
 void xc_elf_set_logfile(xc_interface *xch, struct elf_binary *elf,
diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index aafacd8..3c227e6 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -1100,6 +1100,10 @@ static void domcreate_complete(libxl__egc *egc,
                                int rc)
 {
     STATE_AO_GC(dcs->ao);
+    libxl_domain_config *const d_config = dcs->guest_config;
+
+    if (!rc && d_config->b_info.exec_ssidref)
+        rc = xc_flask_relabel_domain(CTX->xch, dcs->guest_domid, d_config->b_info.exec_ssidref);
 
     if (rc) {
         if (dcs->guest_domid) {
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index daa8c79..eb2668f 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -257,6 +257,7 @@ libxl_domain_build_info = Struct("domain_build_info",[
     ("video_memkb",     MemKB),
     ("shadow_memkb",    MemKB),
     ("rtc_timeoffset",  uint32),
+    ("exec_ssidref",    uint32),
     ("localtime",       libxl_defbool),
     ("disable_migrate", libxl_defbool),
     ("cpuid",           libxl_cpuid_policy_list),
diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index a7dc340..a63ef57 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -582,16 +582,34 @@ static void parse_config_data(const char *config_source,
         exit(1);
     }
 
-    if (!xlu_cfg_get_string (config, "seclabel", &buf, 0)) {
+    if (!xlu_cfg_get_string (config, "init_seclabel", &buf, 0)) {
         e = libxl_flask_context_to_sid(ctx, (char *)buf, strlen(buf),
                                     &c_info->ssidref);
         if (e) {
             if (errno == ENOSYS) {
+                fprintf(stderr, "XSM Disabled: init_seclabel not supported\n");
+            } else {
+                fprintf(stderr, "Invalid init_seclabel: %s\n", buf);
+                exit(1);
+            }
+        }
+    }
+
+    if (!xlu_cfg_get_string (config, "seclabel", &buf, 0)) {
+        uint32_t ssidref;
+        e = libxl_flask_context_to_sid(ctx, (char *)buf, strlen(buf),
+                                    &ssidref);
+        if (e) {
+            if (errno == ENOSYS) {
                 fprintf(stderr, "XSM Disabled: seclabel not supported\n");
             } else {
                 fprintf(stderr, "Invalid seclabel: %s\n", buf);
                 exit(1);
             }
+        } else if (c_info->ssidref) {
+            b_info->exec_ssidref = ssidref;
+        } else {
+            c_info->ssidref = ssidref;
         }
     }
 
-- 
1.7.11.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 14:32:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:32:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOMQ-0002kJ-HP; Mon, 06 Aug 2012 14:32:46 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1SyOMP-0002hu-2z
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 14:32:45 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-16.tower-27.messagelabs.com!1344263558!8790354!1
X-Originating-IP: [63.239.67.9]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13512 invoked from network); 6 Aug 2012 14:32:38 -0000
Received: from emvm-gh1-uea08.nsa.gov (HELO nsa.gov) (63.239.67.9)
	by server-16.tower-27.messagelabs.com with SMTP;
	6 Aug 2012 14:32:38 -0000
X-TM-IMSS-Message-ID: <7b074d7b0002b221@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov ([63.239.67.9])
	with ESMTP (TREND IMSS SMTP Service 7.1) id 7b074d7b0002b221 ;
	Mon, 6 Aug 2012 10:32:48 -0400
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id q76EWZ9N011112; 
	Mon, 6 Aug 2012 10:32:36 -0400
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: xen-devel@lists.xen.org
Date: Mon,  6 Aug 2012 10:32:15 -0400
Message-Id: <1344263550-3941-4-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.2
In-Reply-To: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
References: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Subject: [Xen-devel] [PATCH 03/18] xsm/flask: add domain relabel support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This adds the ability to change a domain's XSM label after creation.
The new label will be used for all future access checks; however,
existing event channels and memory mappings will remain valid even if
their creation would be denied by the new label.

With appropriate security policy and hooks in the domain builder, this
can be used to create domains that the domain builder does not have
access to after building. It can also be used to allow a domain to drop
privileges - for example, prior to launching a user-supplied kernel
loaded by a pv-grub stubdom.
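
The permission sequence enforced by flask_relabel_domain in the patch
below can be summarized with a small Python model (a schematic sketch
only, not Xen code; the avc_has_perm calls are modeled as membership in
an allow-set, and the class/permission names mirror the patch):

```python
# Schematic model of the flask_relabel_domain check order (PATCH 03/18).
DOMID_SELF = 0x7FF0  # Xen's self-referencing domain id

def relabel_checks(domid, caller_sid, target_sid, new_sid, allowed):
    """Evaluate the ordered checks the hypervisor makes; True iff all pass."""
    checks = []
    if domid == DOMID_SELF:
        # A domain relabeling itself needs domain2:relabelself to the new label
        checks.append((target_sid, new_sid, "domain2.relabelself"))
    else:
        # A third party needs relabelfrom on the old label, relabelto on the new
        checks.append((caller_sid, target_sid, "domain2.relabelfrom"))
        checks.append((caller_sid, new_sid, "domain2.relabelto"))
    # In all cases the old label must be allowed to transition to the new one
    checks.append((target_sid, new_sid, "domain.transition"))
    return all(c in allowed for c in checks)
```

Denying any one of the three permissions (relabelfrom/relabelto or the
transition) vetoes the relabel, which is why the xen.te change extends
the neverallow on domain creation to cover transition as well.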

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 tools/flask/policy/policy/flask/access_vectors   |  7 ++++
 tools/flask/policy/policy/flask/security_classes |  1 +
 tools/flask/policy/policy/modules/xen/xen.te     |  2 +-
 xen/include/public/xsm/flask_op.h                |  8 ++++
 xen/xsm/flask/flask_op.c                         | 49 ++++++++++++++++++++++++
 xen/xsm/flask/include/av_perm_to_string.h        |  3 ++
 xen/xsm/flask/include/av_permissions.h           |  4 ++
 xen/xsm/flask/include/class_to_string.h          |  1 +
 xen/xsm/flask/include/flask.h                    | 15 ++++----
 9 files changed, 82 insertions(+), 8 deletions(-)

diff --git a/tools/flask/policy/policy/flask/access_vectors b/tools/flask/policy/policy/flask/access_vectors
index a884312..c7e29ab 100644
--- a/tools/flask/policy/policy/flask/access_vectors
+++ b/tools/flask/policy/policy/flask/access_vectors
@@ -73,6 +73,13 @@ class domain
 	set_virq_handler
 }
 
+class domain2
+{
+	relabelfrom
+	relabelto
+	relabelself
+}
+
 class hvm
 {
     sethvmc
diff --git a/tools/flask/policy/policy/flask/security_classes b/tools/flask/policy/policy/flask/security_classes
index 2ca35d2..ef134a7 100644
--- a/tools/flask/policy/policy/flask/security_classes
+++ b/tools/flask/policy/policy/flask/security_classes
@@ -9,6 +9,7 @@
 
 class xen
 class domain
+class domain2
 class hvm
 class mmu
 class resource
diff --git a/tools/flask/policy/policy/modules/xen/xen.te b/tools/flask/policy/policy/modules/xen/xen.te
index 3d2e351..f9bc061 100644
--- a/tools/flask/policy/policy/modules/xen/xen.te
+++ b/tools/flask/policy/policy/modules/xen/xen.te
@@ -169,7 +169,7 @@ delegate_devices(dom0_t, domU_t)
 ################################################################################
 
 # Domains must be declared using domain_type
-neverallow * ~domain_type:domain create;
+neverallow * ~domain_type:domain { create transition };
 
 # Resources must be declared using resource_type
 neverallow * ~resource_type:resource use;
diff --git a/xen/include/public/xsm/flask_op.h b/xen/include/public/xsm/flask_op.h
index 1a251c9..233de81 100644
--- a/xen/include/public/xsm/flask_op.h
+++ b/xen/include/public/xsm/flask_op.h
@@ -142,6 +142,12 @@ struct xen_flask_peersid {
     uint32_t sid;
 };
 
+struct xen_flask_relabel {
+    /* IN */
+    uint32_t domid;
+    uint32_t sid;
+};
+
 struct xen_flask_op {
     uint32_t cmd;
 #define FLASK_LOAD              1
@@ -167,6 +173,7 @@ struct xen_flask_op {
 #define FLASK_ADD_OCONTEXT      21
 #define FLASK_DEL_OCONTEXT      22
 #define FLASK_GET_PEER_SID      23
+#define FLASK_RELABEL_DOMAIN    24
     uint32_t interface_version; /* XEN_FLASK_INTERFACE_VERSION */
     union {
         struct xen_flask_load load;
@@ -185,6 +192,7 @@ struct xen_flask_op {
         /* FLASK_ADD_OCONTEXT, FLASK_DEL_OCONTEXT */
         struct xen_flask_ocontext ocontext;
         struct xen_flask_peersid peersid;
+        struct xen_flask_relabel relabel;
     } u;
 };
 typedef struct xen_flask_op xen_flask_op_t;
diff --git a/xen/xsm/flask/flask_op.c b/xen/xsm/flask/flask_op.c
index bd4db37..9c0a087 100644
--- a/xen/xsm/flask/flask_op.c
+++ b/xen/xsm/flask/flask_op.c
@@ -573,6 +573,51 @@ static int flask_get_peer_sid(struct xen_flask_peersid *arg)
     return rv;
 }
 
+static int flask_relabel_domain(struct xen_flask_relabel *arg)
+{
+    int rc;
+    struct domain *d;
+    struct domain_security_struct *csec = current->domain->ssid;
+    struct domain_security_struct *dsec;
+    struct avc_audit_data ad;
+    AVC_AUDIT_DATA_INIT(&ad, NONE);
+
+    d = rcu_lock_domain_by_id(arg->domid);
+    if ( d == NULL )
+        return -ESRCH;
+
+    ad.sdom = current->domain;
+    ad.tdom = d;
+    dsec = d->ssid;
+
+    if ( arg->domid == DOMID_SELF )
+    {
+        rc = avc_has_perm(dsec->sid, arg->sid, SECCLASS_DOMAIN2, DOMAIN2__RELABELSELF, &ad);
+        if ( rc )
+            goto out;
+    }
+    else
+    {
+        rc = avc_has_perm(csec->sid, dsec->sid, SECCLASS_DOMAIN2, DOMAIN2__RELABELFROM, &ad);
+        if ( rc )
+            goto out;
+
+        rc = avc_has_perm(csec->sid, arg->sid, SECCLASS_DOMAIN2, DOMAIN2__RELABELTO, &ad);
+        if ( rc )
+            goto out;
+    }
+
+    rc = avc_has_perm(dsec->sid, arg->sid, SECCLASS_DOMAIN, DOMAIN__TRANSITION, &ad);
+    if ( rc )
+        goto out;
+
+    dsec->sid = arg->sid;
+
+ out:
+    rcu_unlock_domain(d);
+    return rc;
+}
+
 long do_flask_op(XEN_GUEST_HANDLE(xsm_op_t) u_flask_op)
 {
     xen_flask_op_t op;
@@ -680,6 +725,10 @@ long do_flask_op(XEN_GUEST_HANDLE(xsm_op_t) u_flask_op)
         rv = flask_get_peer_sid(&op.u.peersid);
         break;
 
+    case FLASK_RELABEL_DOMAIN:
+        rv = flask_relabel_domain(&op.u.relabel);
+        break;
+
     default:
         rv = -ENOSYS;
     }
diff --git a/xen/xsm/flask/include/av_perm_to_string.h b/xen/xsm/flask/include/av_perm_to_string.h
index 17a1c36..e7e2058 100644
--- a/xen/xsm/flask/include/av_perm_to_string.h
+++ b/xen/xsm/flask/include/av_perm_to_string.h
@@ -61,6 +61,9 @@
    S_(SECCLASS_DOMAIN, DOMAIN__SETPODTARGET, "setpodtarget")
    S_(SECCLASS_DOMAIN, DOMAIN__SET_MISC_INFO, "set_misc_info")
    S_(SECCLASS_DOMAIN, DOMAIN__SET_VIRQ_HANDLER, "set_virq_handler")
+   S_(SECCLASS_DOMAIN2, DOMAIN2__RELABELFROM, "relabelfrom")
+   S_(SECCLASS_DOMAIN2, DOMAIN2__RELABELTO, "relabelto")
+   S_(SECCLASS_DOMAIN2, DOMAIN2__RELABELSELF, "relabelself")
    S_(SECCLASS_HVM, HVM__SETHVMC, "sethvmc")
    S_(SECCLASS_HVM, HVM__GETHVMC, "gethvmc")
    S_(SECCLASS_HVM, HVM__SETPARAM, "setparam")
diff --git a/xen/xsm/flask/include/av_permissions.h b/xen/xsm/flask/include/av_permissions.h
index 42eaf81..cb1c5dc 100644
--- a/xen/xsm/flask/include/av_permissions.h
+++ b/xen/xsm/flask/include/av_permissions.h
@@ -63,6 +63,10 @@
 #define DOMAIN__SET_MISC_INFO                     0x40000000UL
 #define DOMAIN__SET_VIRQ_HANDLER                  0x80000000UL
 
+#define DOMAIN2__RELABELFROM                      0x00000001UL
+#define DOMAIN2__RELABELTO                        0x00000002UL
+#define DOMAIN2__RELABELSELF                      0x00000004UL
+
 #define HVM__SETHVMC                              0x00000001UL
 #define HVM__GETHVMC                              0x00000002UL
 #define HVM__SETPARAM                             0x00000004UL
diff --git a/xen/xsm/flask/include/class_to_string.h b/xen/xsm/flask/include/class_to_string.h
index ab55700..7716645 100644
--- a/xen/xsm/flask/include/class_to_string.h
+++ b/xen/xsm/flask/include/class_to_string.h
@@ -5,6 +5,7 @@
     S_("null")
     S_("xen")
     S_("domain")
+    S_("domain2")
     S_("hvm")
     S_("mmu")
     S_("resource")
diff --git a/xen/xsm/flask/include/flask.h b/xen/xsm/flask/include/flask.h
index 6d29c5a..3bff998 100644
--- a/xen/xsm/flask/include/flask.h
+++ b/xen/xsm/flask/include/flask.h
@@ -7,13 +7,14 @@
  */
 #define SECCLASS_XEN                                     1
 #define SECCLASS_DOMAIN                                  2
-#define SECCLASS_HVM                                     3
-#define SECCLASS_MMU                                     4
-#define SECCLASS_RESOURCE                                5
-#define SECCLASS_SHADOW                                  6
-#define SECCLASS_EVENT                                   7
-#define SECCLASS_GRANT                                   8
-#define SECCLASS_SECURITY                                9
+#define SECCLASS_DOMAIN2                                 3
+#define SECCLASS_HVM                                     4
+#define SECCLASS_MMU                                     5
+#define SECCLASS_RESOURCE                                6
+#define SECCLASS_SHADOW                                  7
+#define SECCLASS_EVENT                                   8
+#define SECCLASS_GRANT                                   9
+#define SECCLASS_SECURITY                                10
 
 /*
  * Security identifier indices for initial entities
-- 
1.7.11.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 14:32:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:32:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOMQ-0002kV-Ty; Mon, 06 Aug 2012 14:32:46 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1SyOMP-0002hv-3s
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 14:32:45 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-10.tower-27.messagelabs.com!1344263557!4124597!1
X-Originating-IP: [63.239.67.9]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28499 invoked from network); 6 Aug 2012 14:32:37 -0000
Received: from emvm-gh1-uea08.nsa.gov (HELO nsa.gov) (63.239.67.9)
	by server-10.tower-27.messagelabs.com with SMTP;
	6 Aug 2012 14:32:37 -0000
X-TM-IMSS-Message-ID: <7b0748f90002b220@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov ([63.239.67.9])
	with ESMTP (TREND IMSS SMTP Service 7.1) id 7b0748f90002b220 ;
	Mon, 6 Aug 2012 10:32:47 -0400
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id q76EWZ9L011112; 
	Mon, 6 Aug 2012 10:32:36 -0400
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: xen-devel@lists.xen.org
Date: Mon,  6 Aug 2012 10:32:13 -0400
Message-Id: <1344263550-3941-2-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.2
In-Reply-To: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
References: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Subject: [Xen-devel] [PATCH 01/18] xsm/flask: remove inherited class
	attributes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The ability to declare common permission blocks shared across multiple
classes is not used in Xen. Support for this feature is currently broken
in the header generation scripts, and it is not expected to be used in
the future, so remove the dead code.
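
For reference, the grammar being removed allowed a shared permission
block to be declared once and pulled into several classes via an
"inherits" clause. The example below is purely illustrative (the names
are SELinux-style; Xen's access_vectors never defined a common block):

```
# Removed grammar: a shared block, inherited by a class
common file { read write }
class blk_file inherits file { ioctl }

# Remaining grammar after this patch: each class lists its own permissions
class blk_file { read write ioctl }
```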

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 tools/flask/policy/policy/flask/Makefile           |  2 +-
 tools/flask/policy/policy/flask/access_vectors     | 17 +----
 tools/flask/policy/policy/flask/mkaccess_vector.sh | 89 ----------------------
 xen/xsm/flask/avc.c                                | 27 -------
 xen/xsm/flask/include/av_inherit.h                 |  1 -
 xen/xsm/flask/include/avc_ss.h                     |  8 --
 xen/xsm/flask/include/common_perm_to_string.h      |  1 -
 xen/xsm/flask/ss/services.c                        | 54 +------------
 8 files changed, 4 insertions(+), 195 deletions(-)
 delete mode 100644 xen/xsm/flask/include/av_inherit.h
 delete mode 100644 xen/xsm/flask/include/common_perm_to_string.h

diff --git a/tools/flask/policy/policy/flask/Makefile b/tools/flask/policy/policy/flask/Makefile
index 970b9fe..5f57e88 100644
--- a/tools/flask/policy/policy/flask/Makefile
+++ b/tools/flask/policy/policy/flask/Makefile
@@ -14,7 +14,7 @@ FLASK_H_DEPEND = security_classes initial_sids
 AV_H_DEPEND = access_vectors
 
 FLASK_H_FILES = class_to_string.h flask.h initial_sid_to_string.h
-AV_H_FILES = av_inherit.h common_perm_to_string.h av_perm_to_string.h av_permissions.h
+AV_H_FILES = av_perm_to_string.h av_permissions.h
 ALL_H_FILES = $(FLASK_H_FILES) $(AV_H_FILES)
 
 all:  $(ALL_H_FILES)
diff --git a/tools/flask/policy/policy/flask/access_vectors b/tools/flask/policy/policy/flask/access_vectors
index 5901911..a884312 100644
--- a/tools/flask/policy/policy/flask/access_vectors
+++ b/tools/flask/policy/policy/flask/access_vectors
@@ -1,22 +1,7 @@
 #
-# Define common prefixes for access vectors
-#
-# common common_name { permission_name ... }
-
-#
-# Define a common prefix for file access vectors.
-#
-
-
-#
 # Define the access vectors.
 #
-# class class_name [ inherits common_name ] { permission_name ... }
-
-
-#
-# Define the access vector interpretation for file-related objects.
-#
+# class class_name { permission_name ... }
 
 class xen
 {
diff --git a/tools/flask/policy/policy/flask/mkaccess_vector.sh b/tools/flask/policy/policy/flask/mkaccess_vector.sh
index b5da734..43a60a7 100644
--- a/tools/flask/policy/policy/flask/mkaccess_vector.sh
+++ b/tools/flask/policy/policy/flask/mkaccess_vector.sh
@@ -10,50 +10,21 @@ shift
 
 # output files
 av_permissions="av_permissions.h"
-av_inherit="av_inherit.h"
-common_perm_to_string="common_perm_to_string.h"
 av_perm_to_string="av_perm_to_string.h"
 
 cat $* | $awk "
 BEGIN	{
 		outfile = \"$av_permissions\"
-		inheritfile = \"$av_inherit\"
-		cpermfile = \"$common_perm_to_string\"
 		avpermfile = \"$av_perm_to_string\"
 		"'
 		nextstate = "COMMON_OR_AV";
 		printf("/* This file is automatically generated.  Do not edit. */\n") > outfile;
-		printf("/* This file is automatically generated.  Do not edit. */\n") > inheritfile;
-		printf("/* This file is automatically generated.  Do not edit. */\n") > cpermfile;
 		printf("/* This file is automatically generated.  Do not edit. */\n") > avpermfile;
 ;
 	}
 /^[ \t]*#/	{ 
 			next;
 		}
-$1 == "common"	{ 
-			if (nextstate != "COMMON_OR_AV")
-			{
-				printf("Parse error:  Unexpected COMMON definition on line %d\n", NR);
-				next;	
-			}
-
-			if ($2 in common_defined)
-			{
-				printf("Duplicate COMMON definition for %s on line %d.\n", $2, NR);
-				next;
-			}	
-			common_defined[$2] = 1;
-
-			tclass = $2;
-			common_name = $2; 
-			permission = 1;
-
-			printf("TB_(common_%s_perm_to_string)\n", $2) > cpermfile;
-
-			nextstate = "COMMON-OPENBRACKET";
-			next;
-		}
 $1 == "class"	{
 			if (nextstate != "COMMON_OR_AV" &&
 			    nextstate != "CLASS_OR_CLASS-OPENBRACKET")
@@ -71,62 +42,11 @@ $1 == "class"	{
 			} 
 			av_defined[tclass] = 1;
 
-			inherits = "";
 			permission = 1;
 
 			nextstate = "INHERITS_OR_CLASS-OPENBRACKET";
 			next;
 		}
-$1 == "inherits" {			
-			if (nextstate != "INHERITS_OR_CLASS-OPENBRACKET")
-			{
-				printf("Parse error:  Unexpected INHERITS definition on line %d\n", NR);
-				next;	
-			}
-
-			if (!($2 in common_defined))
-			{
-				printf("COMMON %s is not defined (line %d).\n", $2, NR);
-				next;
-			}
-
-			inherits = $2;
-			permission = common_base[$2];
-
-			for (combined in common_perms)
-			{
-				split(combined,separate, SUBSEP);
-				if (separate[1] == inherits)
-				{
-					inherited_perms[common_perms[combined]] = separate[2];
-				}
-			}
-
-                        j = 1;
-                        for (i in inherited_perms) {
-                            ind[j] = i + 0;
-                            j++;
-                        }
-                        n = asort(ind);
-			for (i = 1; i <= n; i++) {
-				perm = inherited_perms[ind[i]];
-				printf("#define %s__%s", toupper(tclass), toupper(perm)) > outfile; 
-				spaces = 40 - (length(perm) + length(tclass));
-				if (spaces < 1)
-				      spaces = 1;
-				for (j = 0; j < spaces; j++) 
-					printf(" ") > outfile; 
-				printf("0x%08xUL\n", ind[i]) > outfile; 
-			}
-			printf("\n") > outfile;
-                        for (i in ind) delete ind[i];
-                        for (i in inherited_perms) delete inherited_perms[i];
-
-			printf("   S_(SECCLASS_%s, %s, 0x%08xUL)\n", toupper(tclass), inherits, permission) > inheritfile; 
-
-			nextstate = "CLASS_OR_CLASS-OPENBRACKET";
-			next;
-		}
 $1 == "{"	{ 
 			if (nextstate != "INHERITS_OR_CLASS-OPENBRACKET" &&
 			    nextstate != "CLASS_OR_CLASS-OPENBRACKET" &&
@@ -177,15 +97,6 @@ $1 == "{"	{
 
 				av_perms[tclass,$1] = permission;
 		
-				if (inherits != "")
-				{
-					if ((inherits,$1) in common_perms)
-					{
-						printf("Permission %s in %s on line %d conflicts with common permission.\n", $1, tclass, inherits, NR);
-						next;
-					}
-				}
-
 				printf("#define %s__%s", toupper(tclass), toupper($1)) > outfile; 
 
 				printf("   S_(SECCLASS_%s, %s__%s, \"%s\")\n", toupper(tclass), toupper(tclass), toupper($1), $1) > avpermfile; 
diff --git a/xen/xsm/flask/avc.c b/xen/xsm/flask/avc.c
index 44240a9..1bfeef2 100644
--- a/xen/xsm/flask/avc.c
+++ b/xen/xsm/flask/avc.c
@@ -45,28 +45,11 @@ static const char *class_to_string[] = {
 #undef S_
 };
 
-#define TB_(s) static const char * s [] = {
-#define TE_(s) };
-#define S_(s) s,
-#include "common_perm_to_string.h"
-#undef TB_
-#undef TE_
-#undef S_
-
-static const struct av_inherit av_inherit[] = {
-#define S_(c, i, b) { .tclass = c, .common_pts = common_##i##_perm_to_string, \
-                      .common_base = b },
-#include "av_inherit.h"
-#undef S_
-};
-
 const struct selinux_class_perm selinux_class_perm = {
     .av_perm_to_string = av_perm_to_string,
     .av_pts_len = ARRAY_SIZE(av_perm_to_string),
     .class_to_string = class_to_string,
     .cts_len = ARRAY_SIZE(class_to_string),
-    .av_inherit = av_inherit,
-    .av_inherit_len = ARRAY_SIZE(av_inherit)
 };
 
 #define AVC_CACHE_SLOTS            512
@@ -191,16 +174,6 @@ static void avc_dump_av(struct avc_dump_buf *buf, u16 tclass, u32 av)
         return;
     }
 
-    for ( i = 0; i < ARRAY_SIZE(av_inherit); i++ )
-    {
-        if (av_inherit[i].tclass == tclass)
-        {
-            common_pts = av_inherit[i].common_pts;
-            common_base = av_inherit[i].common_base;
-            break;
-        }
-    }
-
     avc_printk(buf, " {");
     i = 0;
     perm = 1;
diff --git a/xen/xsm/flask/include/av_inherit.h b/xen/xsm/flask/include/av_inherit.h
deleted file mode 100644
index 321ffe7..0000000
--- a/xen/xsm/flask/include/av_inherit.h
+++ /dev/null
@@ -1 +0,0 @@
-/* This file is automatically generated.  Do not edit. */
diff --git a/xen/xsm/flask/include/avc_ss.h b/xen/xsm/flask/include/avc_ss.h
index ea4e98c..a3d7d1e 100644
--- a/xen/xsm/flask/include/avc_ss.h
+++ b/xen/xsm/flask/include/avc_ss.h
@@ -16,19 +16,11 @@ struct av_perm_to_string {
     const char *name;
 };
 
-struct av_inherit {
-    const char **common_pts;
-    u32 common_base;
-    u16 tclass;
-};
-
 struct selinux_class_perm {
     const struct av_perm_to_string *av_perm_to_string;
     u32 av_pts_len;
     u32 cts_len;
     const char **class_to_string;
-    const struct av_inherit *av_inherit;
-    u32 av_inherit_len;
 };
 
 extern const struct selinux_class_perm selinux_class_perm;
diff --git a/xen/xsm/flask/include/common_perm_to_string.h b/xen/xsm/flask/include/common_perm_to_string.h
deleted file mode 100644
index 321ffe7..0000000
--- a/xen/xsm/flask/include/common_perm_to_string.h
+++ /dev/null
@@ -1 +0,0 @@
-/* This file is automatically generated.  Do not edit. */
diff --git a/xen/xsm/flask/ss/services.c b/xen/xsm/flask/ss/services.c
index 363f586..1bf3b0c 100644
--- a/xen/xsm/flask/ss/services.c
+++ b/xen/xsm/flask/ss/services.c
@@ -1167,10 +1167,10 @@ int security_change_sid(u32 ssid, u32 tsid, u16 tclass, u32 *out_sid)
  */
 static int validate_classes(struct policydb *p)
 {
-    int i, j;
+    int i;
     struct class_datum *cladatum;
     struct perm_datum *perdatum;
-    u32 nprim, tmp, common_pts_len, perm_val, pol_val;
+    u32 nprim, perm_val, pol_val;
     u16 class_val;
     const struct selinux_class_perm *kdefs = &selinux_class_perm;
     const char *def_class, *def_perm, *pol_class;
@@ -1233,56 +1233,6 @@ static int validate_classes(struct policydb *p)
             return -EINVAL;
         }
     }
-    for ( i = 0; i < kdefs->av_inherit_len; i++ )
-    {
-        class_val = kdefs->av_inherit[i].tclass;
-        if ( class_val > p->p_classes.nprim )
-            continue;
-        pol_class = p->p_class_val_to_name[class_val-1];
-        cladatum = hashtab_search(p->p_classes.table, pol_class);
-        BUG_ON( !cladatum );
-        if ( !cladatum->comdatum )
-        {
-            printk(KERN_ERR
-            "Flask:  class %s should have an inherits clause but does not\n",
-                   pol_class);
-            return -EINVAL;
-        }
-        tmp = kdefs->av_inherit[i].common_base;
-        common_pts_len = 0;
-        while ( !(tmp & 0x01) )
-        {
-            common_pts_len++;
-            tmp >>= 1;
-        }
-        perms = &cladatum->comdatum->permissions;
-        for ( j = 0; j < common_pts_len; j++ )
-        {
-            def_perm = kdefs->av_inherit[i].common_pts[j];
-            if ( j >= perms->nprim )
-            {
-                printk(KERN_INFO
-                "Flask:  permission %s in class %s not defined in policy\n",
-                       def_perm, pol_class);
-                return -EINVAL;
-            }
-            perdatum = hashtab_search(perms->table, def_perm);
-            if ( perdatum == NULL )
-            {
-                printk(KERN_ERR
-                       "Flask:  permission %s in class %s not found in policy\n",
-                       def_perm, pol_class);
-                return -EINVAL;
-            }
-            if ( perdatum->value != j + 1 )
-            {
-                printk(KERN_ERR
-                      "Flask:  permission %s in class %s has incorrect value\n",
-                       def_perm, pol_class);
-                return -EINVAL;
-            }
-        }
-    }
     return 0;
 }
 
-- 
1.7.11.2



From xen-devel-bounces@lists.xen.org Mon Aug 06 14:32:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:32:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOML-0002iO-PG; Mon, 06 Aug 2012 14:32:41 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1SyOMK-0002hw-8o
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 14:32:40 +0000
Received: from [85.158.138.51:57269] by server-9.bemta-3.messagelabs.com id
	75/10-14615-785DF105; Mon, 06 Aug 2012 14:32:39 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-6.tower-174.messagelabs.com!1344263558!22609274!1
X-Originating-IP: [63.239.67.10]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8976 invoked from network); 6 Aug 2012 14:32:38 -0000
Received: from emvm-gh1-uea09.nsa.gov (HELO nsa.gov) (63.239.67.10)
	by server-6.tower-174.messagelabs.com with SMTP;
	6 Aug 2012 14:32:38 -0000
X-TM-IMSS-Message-ID: <7b0719130002c545@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov
	([63.239.67.10]) with ESMTP (TREND IMSS SMTP Service 7.1) id
	7b0719130002c545 ; Mon, 6 Aug 2012 10:33:07 -0400
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id q76EWZ9M011112; 
	Mon, 6 Aug 2012 10:32:36 -0400
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: xen-devel@lists.xen.org
Date: Mon,  6 Aug 2012 10:32:14 -0400
Message-Id: <1344263550-3941-3-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.2
In-Reply-To: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
References: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Subject: [Xen-devel] [PATCH 02/18] xsm/flask: remove unneeded create_sid
	field
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This field was only used to populate the ssid of dom0, which can be
handled explicitly in the domain creation hook. This also removes the
unnecessary permission check on the creation of dom0.
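
The resulting flow can be sketched as a small standalone model: the
first domain created by the idle domain (which only happens for dom0,
built by the hypervisor at boot) is labeled SECINITSID_DOM0 directly
with no permission check, while all later domains take the
caller-supplied ssidref. The names and values here are illustrative;
the real code is the flask_domain_create() hook in the diff below.

```c
#include <stdbool.h>

/* Illustrative stand-ins for the real initial-SID constants. */
#define SECINITSID_DOM0      1u

static bool dom0_created = false;

/* creator_is_idle: true when the hypervisor itself (the idle domain)
 * is constructing the domain, as happens only for dom0 at boot. */
unsigned int label_new_domain(bool creator_is_idle, unsigned int ssidref)
{
    if (creator_is_idle && !dom0_created) {
        dom0_created = true;
        return SECINITSID_DOM0;   /* dom0: labeled explicitly, no check */
    }
    /* All other domains: the normal path would call avc_has_perm()
     * against ssidref before using it; here we just return it. */
    return ssidref;
}
```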

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 tools/flask/policy/policy/modules/xen/xen.te |  2 --
 xen/xsm/flask/hooks.c                        | 23 ++++++++++-------------
 xen/xsm/flask/include/objsec.h               |  1 -
 3 files changed, 10 insertions(+), 16 deletions(-)

diff --git a/tools/flask/policy/policy/modules/xen/xen.te b/tools/flask/policy/policy/modules/xen/xen.te
index 29885c4..3d2e351 100644
--- a/tools/flask/policy/policy/modules/xen/xen.te
+++ b/tools/flask/policy/policy/modules/xen/xen.te
@@ -52,8 +52,6 @@ type device_t, resource_type;
 # Rules required to boot the hypervisor and dom0
 #
 ################################################################################
-allow xen_t dom0_t:domain { create };
-
 allow dom0_t xen_t:xen { kexec readapic writeapic mtrr_read mtrr_add mtrr_del
 	scheduler physinfo heap quirk readconsole writeconsole settime
 	microcode cpupool_op sched_op };
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 62771bf..9262d34 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -111,12 +111,10 @@ static int flask_domain_alloc_security(struct domain *d)
     if ( is_idle_domain(d) )
     {
         dsec->sid = SECINITSID_XEN;
-        dsec->create_sid = SECINITSID_DOM0;
     }
     else
     {
         dsec->sid = SECINITSID_UNLABELED;
-        dsec->create_sid = SECSID_NULL;
     }
 
     d->ssid = dsec;
@@ -549,25 +547,24 @@ static int flask_domain_create(struct domain *d, u32 ssidref)
     int rc;
     struct domain_security_struct *dsec1;
     struct domain_security_struct *dsec2;
+    static int dom0_created = 0;
 
     dsec1 = current->domain->ssid;
+    dsec2 = d->ssid;
 
-    if ( dsec1->create_sid == SECSID_NULL ) 
-        dsec1->create_sid = ssidref;
+    if ( is_idle_domain(current->domain) && !dom0_created )
+    {
+        dsec2->sid = SECINITSID_DOM0;
+        dom0_created = 1;
+        return 0;
+    }
 
-    rc = avc_has_perm(dsec1->sid, dsec1->create_sid, SECCLASS_DOMAIN, 
+    rc = avc_has_perm(dsec1->sid, ssidref, SECCLASS_DOMAIN,
                       DOMAIN__CREATE, NULL);
     if ( rc )
-    {
-        dsec1->create_sid = SECSID_NULL;
         return rc;
-    }
-
-    dsec2 = d->ssid;
-    dsec2->sid = dsec1->create_sid;
 
-    dsec1->create_sid = SECSID_NULL;
-    dsec2->create_sid = SECSID_NULL;
+    dsec2->sid = ssidref;
 
     return rc;
 }
diff --git a/xen/xsm/flask/include/objsec.h b/xen/xsm/flask/include/objsec.h
index df5baee..4ff52be 100644
--- a/xen/xsm/flask/include/objsec.h
+++ b/xen/xsm/flask/include/objsec.h
@@ -19,7 +19,6 @@
 
 struct domain_security_struct {
     u32 sid;               /* current SID */
-    u32 create_sid;
 };
 
 struct evtchn_security_struct {
-- 
1.7.11.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 14:32:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:32:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOMS-0002lX-3F; Mon, 06 Aug 2012 14:32:48 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1SyOMP-0002jd-P2
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 14:32:46 +0000
Received: from [85.158.138.51:61648] by server-6.bemta-3.messagelabs.com id
	C5/27-02321-C85DF105; Mon, 06 Aug 2012 14:32:44 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-2.tower-174.messagelabs.com!1344263561!30652866!1
X-Originating-IP: [63.239.67.10]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10656 invoked from network); 6 Aug 2012 14:32:42 -0000
Received: from emvm-gh1-uea09.nsa.gov (HELO nsa.gov) (63.239.67.10)
	by server-2.tower-174.messagelabs.com with SMTP;
	6 Aug 2012 14:32:42 -0000
X-TM-IMSS-Message-ID: <7b0725340002c54b@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov
	([63.239.67.10]) with ESMTP (TREND IMSS SMTP Service 7.1) id
	7b0725340002c54b ; Mon, 6 Aug 2012 10:33:10 -0400
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id q76EWZ9Y011112; 
	Mon, 6 Aug 2012 10:32:40 -0400
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: xen-devel@lists.xen.org
Date: Mon,  6 Aug 2012 10:32:26 -0400
Message-Id: <1344263550-3941-15-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.2
In-Reply-To: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
References: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Subject: [Xen-devel] [PATCH 14/18] xsm: remove unneeded xsm_call macro
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 xen/include/xsm/xsm.h | 260 +++++++++++++++++++++++++-------------------------
 1 file changed, 129 insertions(+), 131 deletions(-)

diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index ee613a7..fa9f50e 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -21,8 +21,6 @@
 typedef void xsm_op_t;
 DEFINE_XEN_GUEST_HANDLE(xsm_op_t);
 
-#define xsm_call(fn) xsm_ops->fn
-
 /* policy magic number (defined by XSM_MAGIC) */
 typedef u32 xsm_magic_t;
 #ifndef XSM_MAGIC
@@ -199,417 +197,417 @@ extern struct xsm_operations *xsm_ops;
 static inline void xsm_security_domaininfo (struct domain *d,
                                         struct xen_domctl_getdomaininfo *info)
 {
-    (void)xsm_call(security_domaininfo(d, info));
+    xsm_ops->security_domaininfo(d, info);
 }
 
 static inline int xsm_setvcpucontext(struct domain *d)
 {
-    return xsm_call(setvcpucontext(d));
+    return xsm_ops->setvcpucontext(d);
 }
 
 static inline int xsm_pausedomain (struct domain *d)
 {
-    return xsm_call(pausedomain(d));
+    return xsm_ops->pausedomain(d);
 }
 
 static inline int xsm_unpausedomain (struct domain *d)
 {
-    return xsm_call(unpausedomain(d));
+    return xsm_ops->unpausedomain(d);
 }
 
 static inline int xsm_resumedomain (struct domain *d)
 {
-    return xsm_call(resumedomain(d));
+    return xsm_ops->resumedomain(d);
 }
 
 static inline int xsm_domain_create (struct domain *d, u32 ssidref)
 {
-    return xsm_call(domain_create(d, ssidref));
+    return xsm_ops->domain_create(d, ssidref);
 }
 
 static inline int xsm_max_vcpus(struct domain *d)
 {
-    return xsm_call(max_vcpus(d));
+    return xsm_ops->max_vcpus(d);
 }
 
 static inline int xsm_destroydomain (struct domain *d)
 {
-    return xsm_call(destroydomain(d));
+    return xsm_ops->destroydomain(d);
 }
 
 static inline int xsm_vcpuaffinity (int cmd, struct domain *d)
 {
-    return xsm_call(vcpuaffinity(cmd, d));
+    return xsm_ops->vcpuaffinity(cmd, d);
 }
 
 static inline int xsm_scheduler (struct domain *d)
 {
-    return xsm_call(scheduler(d));
+    return xsm_ops->scheduler(d);
 }
 
 static inline int xsm_getdomaininfo (struct domain *d)
 {
-    return xsm_call(getdomaininfo(d));
+    return xsm_ops->getdomaininfo(d);
 }
 
 static inline int xsm_getvcpucontext (struct domain *d)
 {
-    return xsm_call(getvcpucontext(d));
+    return xsm_ops->getvcpucontext(d);
 }
 
 static inline int xsm_getvcpuinfo (struct domain *d)
 {
-    return xsm_call(getvcpuinfo(d));
+    return xsm_ops->getvcpuinfo(d);
 }
 
 static inline int xsm_domain_settime (struct domain *d)
 {
-    return xsm_call(domain_settime(d));
+    return xsm_ops->domain_settime(d);
 }
 
 static inline int xsm_set_target (struct domain *d, struct domain *e)
 {
-    return xsm_call(set_target(d, e));
+    return xsm_ops->set_target(d, e);
 }
 
 static inline int xsm_domctl (struct domain *d, int cmd)
 {
-    return xsm_call(domctl(d, cmd));
+    return xsm_ops->domctl(d, cmd);
 }
 
 static inline int xsm_set_virq_handler (struct domain *d, uint32_t virq)
 {
-    return xsm_call(set_virq_handler(d, virq));
+    return xsm_ops->set_virq_handler(d, virq);
 }
 
 static inline int xsm_tbufcontrol (void)
 {
-    return xsm_call(tbufcontrol());
+    return xsm_ops->tbufcontrol();
 }
 
 static inline int xsm_readconsole (uint32_t clear)
 {
-    return xsm_call(readconsole(clear));
+    return xsm_ops->readconsole(clear);
 }
 
 static inline int xsm_sched_id (void)
 {
-    return xsm_call(sched_id());
+    return xsm_ops->sched_id();
 }
 
 static inline int xsm_setdomainmaxmem (struct domain *d)
 {
-    return xsm_call(setdomainmaxmem(d));
+    return xsm_ops->setdomainmaxmem(d);
 }
 
 static inline int xsm_setdomainhandle (struct domain *d)
 {
-    return xsm_call(setdomainhandle(d));
+    return xsm_ops->setdomainhandle(d);
 }
 
 static inline int xsm_setdebugging (struct domain *d)
 {
-    return xsm_call(setdebugging(d));
+    return xsm_ops->setdebugging(d);
 }
 
 static inline int xsm_debug_op (struct domain *d)
 {
-    return xsm_call(debug_op(d));
+    return xsm_ops->debug_op(d);
 }
 
 static inline int xsm_perfcontrol (void)
 {
-    return xsm_call(perfcontrol());
+    return xsm_ops->perfcontrol();
 }
 
 static inline int xsm_debug_keys (void)
 {
-    return xsm_call(debug_keys());
+    return xsm_ops->debug_keys();
 }
 
 static inline int xsm_availheap (void)
 {
-    return xsm_call(availheap());
+    return xsm_ops->availheap();
 }
 
 static inline int xsm_getcpuinfo (void)
 {
-    return xsm_call(getcpuinfo());
+    return xsm_ops->getcpuinfo();
 }
 
 static inline int xsm_get_pmstat(void)
 {
-    return xsm_call(get_pmstat());
+    return xsm_ops->get_pmstat();
 }
 
 static inline int xsm_setpminfo(void)
 {
-    return xsm_call(setpminfo());
+    return xsm_ops->setpminfo();
 }
 
 static inline int xsm_pm_op(void)
 {
-    return xsm_call(pm_op());
+    return xsm_ops->pm_op();
 }
 
 static inline int xsm_do_mca(void)
 {
-    return xsm_call(do_mca());
+    return xsm_ops->do_mca();
 }
 
 static inline int xsm_evtchn_unbound (struct domain *d1, struct evtchn *chn,
                                                                     domid_t id2)
 {
-    return xsm_call(evtchn_unbound(d1, chn, id2));
+    return xsm_ops->evtchn_unbound(d1, chn, id2);
 }
 
 static inline int xsm_evtchn_interdomain (struct domain *d1, 
                 struct evtchn *chan1, struct domain *d2, struct evtchn *chan2)
 {
-    return xsm_call(evtchn_interdomain(d1, chan1, d2, chan2));
+    return xsm_ops->evtchn_interdomain(d1, chan1, d2, chan2);
 }
 
 static inline void xsm_evtchn_close_post (struct evtchn *chn)
 {
-    (void)xsm_call(evtchn_close_post(chn));
+    xsm_ops->evtchn_close_post(chn);
 }
 
 static inline int xsm_evtchn_send (struct domain *d, struct evtchn *chn)
 {
-    return xsm_call(evtchn_send(d, chn));
+    return xsm_ops->evtchn_send(d, chn);
 }
 
 static inline int xsm_evtchn_status (struct domain *d, struct evtchn *chn)
 {
-    return xsm_call(evtchn_status(d, chn));
+    return xsm_ops->evtchn_status(d, chn);
 }
 
 static inline int xsm_evtchn_reset (struct domain *d1, struct domain *d2)
 {
-    return xsm_call(evtchn_reset(d1, d2));
+    return xsm_ops->evtchn_reset(d1, d2);
 }
 
 static inline int xsm_grant_mapref (struct domain *d1, struct domain *d2,
                                                                 uint32_t flags)
 {
-    return xsm_call(grant_mapref(d1, d2, flags));
+    return xsm_ops->grant_mapref(d1, d2, flags);
 }
 
 static inline int xsm_grant_unmapref (struct domain *d1, struct domain *d2)
 {
-    return xsm_call(grant_unmapref(d1, d2));
+    return xsm_ops->grant_unmapref(d1, d2);
 }
 
 static inline int xsm_grant_setup (struct domain *d1, struct domain *d2)
 {
-    return xsm_call(grant_setup(d1, d2));
+    return xsm_ops->grant_setup(d1, d2);
 }
 
 static inline int xsm_grant_transfer (struct domain *d1, struct domain *d2)
 {
-    return xsm_call(grant_transfer(d1, d2));
+    return xsm_ops->grant_transfer(d1, d2);
 }
 
 static inline int xsm_grant_copy (struct domain *d1, struct domain *d2)
 {
-    return xsm_call(grant_copy(d1, d2));
+    return xsm_ops->grant_copy(d1, d2);
 }
 
 static inline int xsm_grant_query_size (struct domain *d1, struct domain *d2)
 {
-    return xsm_call(grant_query_size(d1, d2));
+    return xsm_ops->grant_query_size(d1, d2);
 }
 
 static inline int xsm_alloc_security_domain (struct domain *d)
 {
-    return xsm_call(alloc_security_domain(d));
+    return xsm_ops->alloc_security_domain(d);
 }
 
 static inline void xsm_free_security_domain (struct domain *d)
 {
-    (void)xsm_call(free_security_domain(d));
+    xsm_ops->free_security_domain(d);
 }
 
 static inline int xsm_alloc_security_evtchn (struct evtchn *chn)
 {
-    return xsm_call(alloc_security_evtchn(chn));
+    return xsm_ops->alloc_security_evtchn(chn);
 }
 
 static inline void xsm_free_security_evtchn (struct evtchn *chn)
 {
-    (void)xsm_call(free_security_evtchn(chn));
+    xsm_ops->free_security_evtchn(chn);
 }
 
 static inline char *xsm_show_security_evtchn (struct domain *d, const struct evtchn *chn)
 {
-    return xsm_call(show_security_evtchn(d, chn));
+    return xsm_ops->show_security_evtchn(d, chn);
 }
 
 static inline int xsm_get_pod_target (struct domain *d)
 {
-    return xsm_call(get_pod_target(d));
+    return xsm_ops->get_pod_target(d);
 }
 
 static inline int xsm_set_pod_target (struct domain *d)
 {
-    return xsm_call(set_pod_target(d));
+    return xsm_ops->set_pod_target(d);
 }
 
 static inline int xsm_memory_adjust_reservation (struct domain *d1, struct
                                                                     domain *d2)
 {
-    return xsm_call(memory_adjust_reservation(d1, d2));
+    return xsm_ops->memory_adjust_reservation(d1, d2);
 }
 
 static inline int xsm_memory_stat_reservation (struct domain *d1,
                                                             struct domain *d2)
 {
-    return xsm_call(memory_stat_reservation(d1, d2));
+    return xsm_ops->memory_stat_reservation(d1, d2);
 }
 
 static inline int xsm_memory_pin_page(struct domain *d, struct page_info *page)
 {
-    return xsm_call(memory_pin_page(d, page));
+    return xsm_ops->memory_pin_page(d, page);
 }
 
 static inline int xsm_remove_from_physmap(struct domain *d1, struct domain *d2)
 {
-    return xsm_call(remove_from_physmap(d1, d2));
+    return xsm_ops->remove_from_physmap(d1, d2);
 }
 
 static inline int xsm_console_io (struct domain *d, int cmd)
 {
-    return xsm_call(console_io(d, cmd));
+    return xsm_ops->console_io(d, cmd);
 }
 
 static inline int xsm_profile (struct domain *d, int op)
 {
-    return xsm_call(profile(d, op));
+    return xsm_ops->profile(d, op);
 }
 
 static inline int xsm_kexec (void)
 {
-    return xsm_call(kexec());
+    return xsm_ops->kexec();
 }
 
 static inline int xsm_schedop_shutdown (struct domain *d1, struct domain *d2)
 {
-    return xsm_call(schedop_shutdown(d1, d2));
+    return xsm_ops->schedop_shutdown(d1, d2);
 }
 
 static inline char *xsm_show_irq_sid (int irq)
 {
-    return xsm_call(show_irq_sid(irq));
+    return xsm_ops->show_irq_sid(irq);
 }
 
 static inline int xsm_map_domain_pirq (struct domain *d, int irq, void *data)
 {
-    return xsm_call(map_domain_pirq(d, irq, data));
+    return xsm_ops->map_domain_pirq(d, irq, data);
 }
 
 static inline int xsm_unmap_domain_pirq (struct domain *d, int irq)
 {
-    return xsm_call(unmap_domain_pirq(d, irq));
+    return xsm_ops->unmap_domain_pirq(d, irq);
 }
 
 static inline int xsm_irq_permission (struct domain *d, int pirq, uint8_t allow)
 {
-    return xsm_call(irq_permission(d, pirq, allow));
+    return xsm_ops->irq_permission(d, pirq, allow);
 }
 
 static inline int xsm_iomem_permission (struct domain *d, uint64_t s, uint64_t e, uint8_t allow)
 {
-    return xsm_call(iomem_permission(d, s, e, allow));
+    return xsm_ops->iomem_permission(d, s, e, allow);
 }
 
 static inline int xsm_iomem_mapping (struct domain *d, uint64_t s, uint64_t e, uint8_t allow)
 {
-    return xsm_call(iomem_mapping(d, s, e, allow));
+    return xsm_ops->iomem_mapping(d, s, e, allow);
 }
 
 static inline int xsm_pci_config_permission (struct domain *d, uint32_t machine_bdf, uint16_t start, uint16_t end, uint8_t access)
 {
-    return xsm_call(pci_config_permission(d, machine_bdf, start, end, access));
+    return xsm_ops->pci_config_permission(d, machine_bdf, start, end, access);
 }
 
 static inline int xsm_get_device_group(uint32_t machine_bdf)
 {
-    return xsm_call(get_device_group(machine_bdf));
+    return xsm_ops->get_device_group(machine_bdf);
 }
 
 static inline int xsm_test_assign_device(uint32_t machine_bdf)
 {
-    return xsm_call(test_assign_device(machine_bdf));
+    return xsm_ops->test_assign_device(machine_bdf);
 }
 
 static inline int xsm_assign_device(struct domain *d, uint32_t machine_bdf)
 {
-    return xsm_call(assign_device(d, machine_bdf));
+    return xsm_ops->assign_device(d, machine_bdf);
 }
 
 static inline int xsm_deassign_device(struct domain *d, uint32_t machine_bdf)
 {
-    return xsm_call(deassign_device(d, machine_bdf));
+    return xsm_ops->deassign_device(d, machine_bdf);
 }
 
 static inline int xsm_resource_plug_pci (uint32_t machine_bdf)
 {
-    return xsm_call(resource_plug_pci(machine_bdf));
+    return xsm_ops->resource_plug_pci(machine_bdf);
 }
 
 static inline int xsm_resource_unplug_pci (uint32_t machine_bdf)
 {
-    return xsm_call(resource_unplug_pci(machine_bdf));
+    return xsm_ops->resource_unplug_pci(machine_bdf);
 }
 
 static inline int xsm_resource_plug_core (void)
 {
-    return xsm_call(resource_plug_core());
+    return xsm_ops->resource_plug_core();
 }
 
 static inline int xsm_resource_unplug_core (void)
 {
-    return xsm_call(resource_unplug_core());
+    return xsm_ops->resource_unplug_core();
 }
 
 static inline int xsm_resource_setup_pci (uint32_t machine_bdf)
 {
-    return xsm_call(resource_setup_pci(machine_bdf));
+    return xsm_ops->resource_setup_pci(machine_bdf);
 }
 
 static inline int xsm_resource_setup_gsi (int gsi)
 {
-    return xsm_call(resource_setup_gsi(gsi));
+    return xsm_ops->resource_setup_gsi(gsi);
 }
 
 static inline int xsm_resource_setup_misc (void)
 {
-    return xsm_call(resource_setup_misc());
+    return xsm_ops->resource_setup_misc();
 }
 
 static inline int xsm_page_offline(uint32_t cmd)
 {
-    return xsm_call(page_offline(cmd));
+    return xsm_ops->page_offline(cmd);
 }
 
 static inline int xsm_lockprof(void)
 {
-    return xsm_call(lockprof());
+    return xsm_ops->lockprof();
 }
 
 static inline int xsm_cpupool_op(void)
 {
-    return xsm_call(cpupool_op());
+    return xsm_ops->cpupool_op();
 }
 
 static inline int xsm_sched_op(void)
 {
-    return xsm_call(sched_op());
+    return xsm_ops->sched_op();
 }
 
 static inline int xsm_tmem_control(uint32_t subop)
 {
-    return xsm_call(tmem_control(subop));
+    return xsm_ops->tmem_control(subop);
 }
 
 static inline long xsm___do_xsm_op (XEN_GUEST_HANDLE(xsm_op_t) op)
@@ -620,240 +618,240 @@ static inline long xsm___do_xsm_op (XEN_GUEST_HANDLE(xsm_op_t) op)
 #ifdef CONFIG_X86
 static inline int xsm_shadow_control (struct domain *d, uint32_t op)
 {
-    return xsm_call(shadow_control(d, op));
+    return xsm_ops->shadow_control(d, op);
 }
 
 static inline int xsm_getpageframeinfo (struct page_info *page)
 {
-    return xsm_call(getpageframeinfo(page));
+    return xsm_ops->getpageframeinfo(page);
 }
 
 static inline int xsm_getpageframeinfo_domain (struct domain *d)
 {
-    return xsm_call(getpageframeinfo_domain(d));
+    return xsm_ops->getpageframeinfo_domain(d);
 }
 
 static inline int xsm_set_cpuid (struct domain *d)
 {
-    return xsm_call(set_cpuid(d));
+    return xsm_ops->set_cpuid(d);
 }
 
 static inline int xsm_gettscinfo (struct domain *d)
 {
-    return xsm_call(gettscinfo(d));
+    return xsm_ops->gettscinfo(d);
 }
 
 static inline int xsm_settscinfo (struct domain *d)
 {
-    return xsm_call(settscinfo(d));
+    return xsm_ops->settscinfo(d);
 }
 
 static inline int xsm_getmemlist (struct domain *d)
 {
-    return xsm_call(getmemlist(d));
+    return xsm_ops->getmemlist(d);
 }
 
 static inline int xsm_hypercall_init (struct domain *d)
 {
-    return xsm_call(hypercall_init(d));
+    return xsm_ops->hypercall_init(d);
 }
 
 static inline int xsm_hvmcontext (struct domain *d, uint32_t cmd)
 {
-    return xsm_call(hvmcontext(d, cmd));
+    return xsm_ops->hvmcontext(d, cmd);
 }
 
 static inline int xsm_address_size (struct domain *d, uint32_t cmd)
 {
-    return xsm_call(address_size(d, cmd));
+    return xsm_ops->address_size(d, cmd);
 }
 
 static inline int xsm_machine_address_size (struct domain *d, uint32_t cmd)
 {
-    return xsm_call(machine_address_size(d, cmd));
+    return xsm_ops->machine_address_size(d, cmd);
 }
 
 static inline int xsm_hvm_param (struct domain *d, unsigned long op)
 {
-    return xsm_call(hvm_param(d, op));
+    return xsm_ops->hvm_param(d, op);
 }
 
 static inline int xsm_hvm_set_pci_intx_level (struct domain *d)
 {
-    return xsm_call(hvm_set_pci_intx_level(d));
+    return xsm_ops->hvm_set_pci_intx_level(d);
 }
 
 static inline int xsm_hvm_set_isa_irq_level (struct domain *d)
 {
-    return xsm_call(hvm_set_isa_irq_level(d));
+    return xsm_ops->hvm_set_isa_irq_level(d);
 }
 
 static inline int xsm_hvm_set_pci_link_route (struct domain *d)
 {
-    return xsm_call(hvm_set_pci_link_route(d));
+    return xsm_ops->hvm_set_pci_link_route(d);
 }
 
 static inline int xsm_hvm_inject_msi (struct domain *d)
 {
-    return xsm_call(hvm_inject_msi(d));
+    return xsm_ops->hvm_inject_msi(d);
 }
 
 static inline int xsm_mem_event_setup (struct domain *d)
 {
-    return xsm_call(mem_event_setup(d));
+    return xsm_ops->mem_event_setup(d);
 }
 
 static inline int xsm_mem_event_control (struct domain *d, int mode, int op)
 {
-    return xsm_call(mem_event_control(d, mode, op));
+    return xsm_ops->mem_event_control(d, mode, op);
 }
 
 static inline int xsm_mem_event_op (struct domain *d, int op)
 {
-    return xsm_call(mem_event_op(d, op));
+    return xsm_ops->mem_event_op(d, op);
 }
 
 static inline int xsm_mem_sharing (struct domain *d)
 {
-    return xsm_call(mem_sharing(d));
+    return xsm_ops->mem_sharing(d);
 }
 
 static inline int xsm_mem_sharing_op (struct domain *d, struct domain *cd, int op)
 {
-    return xsm_call(mem_sharing_op(d, cd, op));
+    return xsm_ops->mem_sharing_op(d, cd, op);
 }
 
 static inline int xsm_audit_p2m (struct domain *d)
 {
-    return xsm_call(audit_p2m(d));
+    return xsm_ops->audit_p2m(d);
 }
 
 static inline int xsm_apic (struct domain *d, int cmd)
 {
-    return xsm_call(apic(d, cmd));
+    return xsm_ops->apic(d, cmd);
 }
 
 static inline int xsm_xen_settime (void)
 {
-    return xsm_call(xen_settime());
+    return xsm_ops->xen_settime();
 }
 
 static inline int xsm_memtype (uint32_t access)
 {
-    return xsm_call(memtype(access));
+    return xsm_ops->memtype(access);
 }
 
 static inline int xsm_microcode (void)
 {
-    return xsm_call(microcode());
+    return xsm_ops->microcode();
 }
 
 static inline int xsm_physinfo (void)
 {
-    return xsm_call(physinfo());
+    return xsm_ops->physinfo();
 }
 
 static inline int xsm_platform_quirk (uint32_t quirk)
 {
-    return xsm_call(platform_quirk(quirk));
+    return xsm_ops->platform_quirk(quirk);
 }
 
 static inline int xsm_firmware_info (void)
 {
-    return xsm_call(firmware_info());
+    return xsm_ops->firmware_info();
 }
 
 static inline int xsm_efi_call (void)
 {
-    return xsm_call(efi_call());
+    return xsm_ops->efi_call();
 }
 
 static inline int xsm_acpi_sleep (void)
 {
-    return xsm_call(acpi_sleep());
+    return xsm_ops->acpi_sleep();
 }
 
 static inline int xsm_change_freq (void)
 {
-    return xsm_call(change_freq());
+    return xsm_ops->change_freq();
 }
 
 static inline int xsm_getidletime (void)
 {
-    return xsm_call(getidletime());
+    return xsm_ops->getidletime();
 }
 
 static inline int xsm_machine_memory_map(void)
 {
-    return xsm_call(machine_memory_map());
+    return xsm_ops->machine_memory_map();
 }
 
 static inline int xsm_domain_memory_map(struct domain *d)
 {
-    return xsm_call(domain_memory_map(d));
+    return xsm_ops->domain_memory_map(d);
 }
 
 static inline int xsm_mmu_normal_update (struct domain *d, struct domain *t,
                                          struct domain *f, intpte_t fpte)
 {
-    return xsm_call(mmu_normal_update(d, t, f, fpte));
+    return xsm_ops->mmu_normal_update(d, t, f, fpte);
 }
 
 static inline int xsm_mmu_machphys_update (struct domain *d, unsigned long mfn)
 {
-    return xsm_call(mmu_machphys_update(d, mfn));
+    return xsm_ops->mmu_machphys_update(d, mfn);
 }
 
 static inline int xsm_update_va_mapping(struct domain *d, struct domain *f, 
                                                             l1_pgentry_t pte)
 {
-    return xsm_call(update_va_mapping(d, f, pte));
+    return xsm_ops->update_va_mapping(d, f, pte);
 }
 
 static inline int xsm_add_to_physmap(struct domain *d1, struct domain *d2)
 {
-    return xsm_call(add_to_physmap(d1, d2));
+    return xsm_ops->add_to_physmap(d1, d2);
 }
 
 static inline int xsm_sendtrigger(struct domain *d)
 {
-    return xsm_call(sendtrigger(d));
+    return xsm_ops->sendtrigger(d);
 }
 
 static inline int xsm_bind_pt_irq(struct domain *d, 
                                                 struct xen_domctl_bind_pt_irq *bind)
 {
-    return xsm_call(bind_pt_irq(d, bind));
+    return xsm_ops->bind_pt_irq(d, bind);
 }
 
 static inline int xsm_unbind_pt_irq(struct domain *d,
                                                 struct xen_domctl_bind_pt_irq *bind)
 {
-    return xsm_call(unbind_pt_irq(d, bind));
+    return xsm_ops->unbind_pt_irq(d, bind);
 }
 
 static inline int xsm_pin_mem_cacheattr(struct domain *d)
 {
-    return xsm_call(pin_mem_cacheattr(d));
+    return xsm_ops->pin_mem_cacheattr(d);
 }
 
 static inline int xsm_ext_vcpucontext(struct domain *d, uint32_t cmd)
 {
-    return xsm_call(ext_vcpucontext(d, cmd));
+    return xsm_ops->ext_vcpucontext(d, cmd);
 }
 static inline int xsm_vcpuextstate(struct domain *d, uint32_t cmd)
 {
-    return xsm_call(vcpuextstate(d, cmd));
+    return xsm_ops->vcpuextstate(d, cmd);
 }
 
 static inline int xsm_ioport_permission (struct domain *d, uint32_t s, uint32_t e, uint8_t allow)
 {
-    return xsm_call(ioport_permission(d, s, e, allow));
+    return xsm_ops->ioport_permission(d, s, e, allow);
 }
 
 static inline int xsm_ioport_mapping (struct domain *d, uint32_t s, uint32_t e, uint8_t allow)
 {
-    return xsm_call(ioport_mapping(d, s, e, allow));
+    return xsm_ops->ioport_mapping(d, s, e, allow);
 }
 #endif /* CONFIG_X86 */
 
-- 
1.7.11.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 14:32:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:32:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOMT-0002mY-8Z; Mon, 06 Aug 2012 14:32:49 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1SyOMR-0002ig-Qs
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 14:32:48 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-8.tower-27.messagelabs.com!1344263561!11031784!1
X-Originating-IP: [63.239.67.9]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29337 invoked from network); 6 Aug 2012 14:32:41 -0000
Received: from emvm-gh1-uea08.nsa.gov (HELO nsa.gov) (63.239.67.9)
	by server-8.tower-27.messagelabs.com with SMTP;
	6 Aug 2012 14:32:41 -0000
X-TM-IMSS-Message-ID: <7b0757c80002b22f@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov ([63.239.67.9])
	with ESMTP (TREND IMSS SMTP Service 7.1) id 7b0757c80002b22f ;
	Mon, 6 Aug 2012 10:32:50 -0400
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id q76EWZ9X011112; 
	Mon, 6 Aug 2012 10:32:39 -0400
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: xen-devel@lists.xen.org
Date: Mon,  6 Aug 2012 10:32:25 -0400
Message-Id: <1344263550-3941-14-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.2
In-Reply-To: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
References: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Subject: [Xen-devel] [PATCH 13/18] tmem: Add access control check
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Replace the disabled IS_PRIV check in do_tmem_control with an XSM hook,
tmem_control, and add a matching tmem_op access vector to the FLASK
policy so that access to tmem control operations is governed by the
security policy rather than an open-coded privilege check.

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 tools/flask/policy/policy/flask/access_vectors |  1 +
 xen/common/tmem.c                              | 10 +++++-----
 xen/include/xen/tmem_xen.h                     |  5 -----
 xen/include/xsm/dummy.h                        |  7 +++++++
 xen/include/xsm/xsm.h                          |  6 ++++++
 xen/xsm/dummy.c                                |  1 +
 xen/xsm/flask/hooks.c                          |  6 ++++++
 xen/xsm/flask/include/av_perm_to_string.h      |  1 +
 xen/xsm/flask/include/av_permissions.h         |  1 +
 9 files changed, 28 insertions(+), 10 deletions(-)

diff --git a/tools/flask/policy/policy/flask/access_vectors b/tools/flask/policy/policy/flask/access_vectors
index 28b8ada..2986b40 100644
--- a/tools/flask/policy/policy/flask/access_vectors
+++ b/tools/flask/policy/policy/flask/access_vectors
@@ -35,6 +35,7 @@ class xen
 	lockprof
 	cpupool_op
 	sched_op
+	tmem_op
 }
 
 class domain
diff --git a/xen/common/tmem.c b/xen/common/tmem.c
index dd276df..164098f 100644
--- a/xen/common/tmem.c
+++ b/xen/common/tmem.c
@@ -23,6 +23,7 @@
 #include <xen/radix-tree.h>
 #include <xen/list.h>
 #include <xen/init.h>
+#include <xsm/xsm.h>
 
 #define EXPORT /* indicates code other modules are dependent upon */
 #define FORWARD
@@ -2539,11 +2540,10 @@ static NOINLINE int do_tmem_control(struct tmem_op *op)
     uint32_t subop = op->u.ctrl.subop;
     OID *oidp = (OID *)(&op->u.ctrl.oid[0]);
 
-    if (!tmh_current_is_privileged())
-    {
-        /* don't fail... mystery: sometimes dom0 fails here */
-        /* return -EPERM; */
-    }
+    ret = xsm_tmem_control(subop);
+    if ( ret )
+        return ret;
+
     switch(subop)
     {
     case TMEMC_THAW:
diff --git a/xen/include/xen/tmem_xen.h b/xen/include/xen/tmem_xen.h
index 4a35760..f248128 100644
--- a/xen/include/xen/tmem_xen.h
+++ b/xen/include/xen/tmem_xen.h
@@ -344,11 +344,6 @@ static inline bool_t tmh_set_client_from_id(
     return rc;
 }
 
-static inline bool_t tmh_current_is_privileged(void)
-{
-    return IS_PRIV(current->domain);
-}
-
 static inline uint8_t tmh_get_first_byte(pfp_t *pfp)
 {
     void *p = __map_domain_page(pfp);
diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
index c71c08b..d796a33 100644
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -495,6 +495,13 @@ static XSM_DEFAULT(int, sched_op) (void)
     return 0;
 }
 
+static XSM_DEFAULT(int, tmem_control) (uint32_t subop)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
 static XSM_DEFAULT(long, __do_xsm_op)(XEN_GUEST_HANDLE(xsm_op_t) op)
 {
     return -ENOSYS;
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index b473b54..ee613a7 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -137,6 +137,7 @@ struct xsm_operations {
     int (*lockprof)(void);
     int (*cpupool_op)(void);
     int (*sched_op)(void);
+    int (*tmem_control)(uint32_t subop);
 
     long (*__do_xsm_op) (XEN_GUEST_HANDLE(xsm_op_t) op);
 
@@ -606,6 +607,11 @@ static inline int xsm_sched_op(void)
     return xsm_call(sched_op());
 }
 
+static inline int xsm_tmem_control(uint32_t subop)
+{
+    return xsm_call(tmem_control(subop));
+}
+
 static inline long xsm___do_xsm_op (XEN_GUEST_HANDLE(xsm_op_t) op)
 {
     return xsm_ops->__do_xsm_op(op);
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index 09935d8..aebe333 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -119,6 +119,7 @@ void xsm_fixup_ops (struct xsm_operations *ops)
     set_to_dummy_if_null(ops, lockprof);
     set_to_dummy_if_null(ops, cpupool_op);
     set_to_dummy_if_null(ops, sched_op);
+    set_to_dummy_if_null(ops, tmem_control);
 
     set_to_dummy_if_null(ops, __do_xsm_op);
 
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 4f71604..be5c3ad 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -1022,6 +1022,11 @@ static inline int flask_sched_op(void)
     return domain_has_xen(current->domain, XEN__SCHED_OP);
 }
 
+static inline int flask_tmem_control(uint32_t subop)
+{
+    return domain_has_xen(current->domain, XEN__TMEM_OP);
+}
+
 static int flask_perfcontrol(void)
 {
     return domain_has_xen(current->domain, XEN__PERFCONTROL);
@@ -1698,6 +1703,7 @@ static struct xsm_operations flask_ops = {
     .lockprof = flask_lockprof,
     .cpupool_op = flask_cpupool_op,
     .sched_op = flask_sched_op,
+    .tmem_control = flask_tmem_control,
 
     .__do_xsm_op = do_flask_op,
 
diff --git a/xen/xsm/flask/include/av_perm_to_string.h b/xen/xsm/flask/include/av_perm_to_string.h
index 997f098..5d5a45a 100644
--- a/xen/xsm/flask/include/av_perm_to_string.h
+++ b/xen/xsm/flask/include/av_perm_to_string.h
@@ -29,6 +29,7 @@
    S_(SECCLASS_XEN, XEN__LOCKPROF, "lockprof")
    S_(SECCLASS_XEN, XEN__CPUPOOL_OP, "cpupool_op")
    S_(SECCLASS_XEN, XEN__SCHED_OP, "sched_op")
+   S_(SECCLASS_XEN, XEN__TMEM_OP, "tmem_op")
    S_(SECCLASS_DOMAIN, DOMAIN__SETVCPUCONTEXT, "setvcpucontext")
    S_(SECCLASS_DOMAIN, DOMAIN__PAUSE, "pause")
    S_(SECCLASS_DOMAIN, DOMAIN__UNPAUSE, "unpause")
diff --git a/xen/xsm/flask/include/av_permissions.h b/xen/xsm/flask/include/av_permissions.h
index 8596a55..e6d6a6d 100644
--- a/xen/xsm/flask/include/av_permissions.h
+++ b/xen/xsm/flask/include/av_permissions.h
@@ -29,6 +29,7 @@
 #define XEN__LOCKPROF                             0x08000000UL
 #define XEN__CPUPOOL_OP                           0x10000000UL
 #define XEN__SCHED_OP                             0x20000000UL
+#define XEN__TMEM_OP                              0x40000000UL
 
 #define DOMAIN__SETVCPUCONTEXT                    0x00000001UL
 #define DOMAIN__PAUSE                             0x00000002UL
-- 
1.7.11.2



From xen-devel-bounces@lists.xen.org Mon Aug 06 14:32:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:32:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOMS-0002lX-3F; Mon, 06 Aug 2012 14:32:48 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1SyOMP-0002jd-P2
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 14:32:46 +0000
Received: from [85.158.138.51:61648] by server-6.bemta-3.messagelabs.com id
	C5/27-02321-C85DF105; Mon, 06 Aug 2012 14:32:44 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-2.tower-174.messagelabs.com!1344263561!30652866!1
X-Originating-IP: [63.239.67.10]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10656 invoked from network); 6 Aug 2012 14:32:42 -0000
Received: from emvm-gh1-uea09.nsa.gov (HELO nsa.gov) (63.239.67.10)
	by server-2.tower-174.messagelabs.com with SMTP;
	6 Aug 2012 14:32:42 -0000
X-TM-IMSS-Message-ID: <7b0725340002c54b@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov
	([63.239.67.10]) with ESMTP (TREND IMSS SMTP Service 7.1) id
	7b0725340002c54b ; Mon, 6 Aug 2012 10:33:10 -0400
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id q76EWZ9Y011112; 
	Mon, 6 Aug 2012 10:32:40 -0400
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: xen-devel@lists.xen.org
Date: Mon,  6 Aug 2012 10:32:26 -0400
Message-Id: <1344263550-3941-15-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.2
In-Reply-To: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
References: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Subject: [Xen-devel] [PATCH 14/18] xsm: remove unneeded xsm_call macro
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The xsm_call macro is a trivial wrapper that expands xsm_call(fn) to
xsm_ops->fn; remove it and call through xsm_ops directly.

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 xen/include/xsm/xsm.h | 260 +++++++++++++++++++++++++-------------------------
 1 file changed, 129 insertions(+), 131 deletions(-)

diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index ee613a7..fa9f50e 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -21,8 +21,6 @@
 typedef void xsm_op_t;
 DEFINE_XEN_GUEST_HANDLE(xsm_op_t);
 
-#define xsm_call(fn) xsm_ops->fn
-
 /* policy magic number (defined by XSM_MAGIC) */
 typedef u32 xsm_magic_t;
 #ifndef XSM_MAGIC
@@ -199,417 +197,417 @@ extern struct xsm_operations *xsm_ops;
 static inline void xsm_security_domaininfo (struct domain *d,
                                         struct xen_domctl_getdomaininfo *info)
 {
-    (void)xsm_call(security_domaininfo(d, info));
+    xsm_ops->security_domaininfo(d, info);
 }
 
 static inline int xsm_setvcpucontext(struct domain *d)
 {
-    return xsm_call(setvcpucontext(d));
+    return xsm_ops->setvcpucontext(d);
 }
 
 static inline int xsm_pausedomain (struct domain *d)
 {
-    return xsm_call(pausedomain(d));
+    return xsm_ops->pausedomain(d);
 }
 
 static inline int xsm_unpausedomain (struct domain *d)
 {
-    return xsm_call(unpausedomain(d));
+    return xsm_ops->unpausedomain(d);
 }
 
 static inline int xsm_resumedomain (struct domain *d)
 {
-    return xsm_call(resumedomain(d));
+    return xsm_ops->resumedomain(d);
 }
 
 static inline int xsm_domain_create (struct domain *d, u32 ssidref)
 {
-    return xsm_call(domain_create(d, ssidref));
+    return xsm_ops->domain_create(d, ssidref);
 }
 
 static inline int xsm_max_vcpus(struct domain *d)
 {
-    return xsm_call(max_vcpus(d));
+    return xsm_ops->max_vcpus(d);
 }
 
 static inline int xsm_destroydomain (struct domain *d)
 {
-    return xsm_call(destroydomain(d));
+    return xsm_ops->destroydomain(d);
 }
 
 static inline int xsm_vcpuaffinity (int cmd, struct domain *d)
 {
-    return xsm_call(vcpuaffinity(cmd, d));
+    return xsm_ops->vcpuaffinity(cmd, d);
 }
 
 static inline int xsm_scheduler (struct domain *d)
 {
-    return xsm_call(scheduler(d));
+    return xsm_ops->scheduler(d);
 }
 
 static inline int xsm_getdomaininfo (struct domain *d)
 {
-    return xsm_call(getdomaininfo(d));
+    return xsm_ops->getdomaininfo(d);
 }
 
 static inline int xsm_getvcpucontext (struct domain *d)
 {
-    return xsm_call(getvcpucontext(d));
+    return xsm_ops->getvcpucontext(d);
 }
 
 static inline int xsm_getvcpuinfo (struct domain *d)
 {
-    return xsm_call(getvcpuinfo(d));
+    return xsm_ops->getvcpuinfo(d);
 }
 
 static inline int xsm_domain_settime (struct domain *d)
 {
-    return xsm_call(domain_settime(d));
+    return xsm_ops->domain_settime(d);
 }
 
 static inline int xsm_set_target (struct domain *d, struct domain *e)
 {
-    return xsm_call(set_target(d, e));
+    return xsm_ops->set_target(d, e);
 }
 
 static inline int xsm_domctl (struct domain *d, int cmd)
 {
-    return xsm_call(domctl(d, cmd));
+    return xsm_ops->domctl(d, cmd);
 }
 
 static inline int xsm_set_virq_handler (struct domain *d, uint32_t virq)
 {
-    return xsm_call(set_virq_handler(d, virq));
+    return xsm_ops->set_virq_handler(d, virq);
 }
 
 static inline int xsm_tbufcontrol (void)
 {
-    return xsm_call(tbufcontrol());
+    return xsm_ops->tbufcontrol();
 }
 
 static inline int xsm_readconsole (uint32_t clear)
 {
-    return xsm_call(readconsole(clear));
+    return xsm_ops->readconsole(clear);
 }
 
 static inline int xsm_sched_id (void)
 {
-    return xsm_call(sched_id());
+    return xsm_ops->sched_id();
 }
 
 static inline int xsm_setdomainmaxmem (struct domain *d)
 {
-    return xsm_call(setdomainmaxmem(d));
+    return xsm_ops->setdomainmaxmem(d);
 }
 
 static inline int xsm_setdomainhandle (struct domain *d)
 {
-    return xsm_call(setdomainhandle(d));
+    return xsm_ops->setdomainhandle(d);
 }
 
 static inline int xsm_setdebugging (struct domain *d)
 {
-    return xsm_call(setdebugging(d));
+    return xsm_ops->setdebugging(d);
 }
 
 static inline int xsm_debug_op (struct domain *d)
 {
-    return xsm_call(debug_op(d));
+    return xsm_ops->debug_op(d);
 }
 
 static inline int xsm_perfcontrol (void)
 {
-    return xsm_call(perfcontrol());
+    return xsm_ops->perfcontrol();
 }
 
 static inline int xsm_debug_keys (void)
 {
-    return xsm_call(debug_keys());
+    return xsm_ops->debug_keys();
 }
 
 static inline int xsm_availheap (void)
 {
-    return xsm_call(availheap());
+    return xsm_ops->availheap();
 }
 
 static inline int xsm_getcpuinfo (void)
 {
-    return xsm_call(getcpuinfo());
+    return xsm_ops->getcpuinfo();
 }
 
 static inline int xsm_get_pmstat(void)
 {
-    return xsm_call(get_pmstat());
+    return xsm_ops->get_pmstat();
 }
 
 static inline int xsm_setpminfo(void)
 {
-    return xsm_call(setpminfo());
+    return xsm_ops->setpminfo();
 }
 
 static inline int xsm_pm_op(void)
 {
-    return xsm_call(pm_op());
+    return xsm_ops->pm_op();
 }
 
 static inline int xsm_do_mca(void)
 {
-    return xsm_call(do_mca());
+    return xsm_ops->do_mca();
 }
 
 static inline int xsm_evtchn_unbound (struct domain *d1, struct evtchn *chn,
                                                                     domid_t id2)
 {
-    return xsm_call(evtchn_unbound(d1, chn, id2));
+    return xsm_ops->evtchn_unbound(d1, chn, id2);
 }
 
 static inline int xsm_evtchn_interdomain (struct domain *d1, 
                 struct evtchn *chan1, struct domain *d2, struct evtchn *chan2)
 {
-    return xsm_call(evtchn_interdomain(d1, chan1, d2, chan2));
+    return xsm_ops->evtchn_interdomain(d1, chan1, d2, chan2);
 }
 
 static inline void xsm_evtchn_close_post (struct evtchn *chn)
 {
-    (void)xsm_call(evtchn_close_post(chn));
+    xsm_ops->evtchn_close_post(chn);
 }
 
 static inline int xsm_evtchn_send (struct domain *d, struct evtchn *chn)
 {
-    return xsm_call(evtchn_send(d, chn));
+    return xsm_ops->evtchn_send(d, chn);
 }
 
 static inline int xsm_evtchn_status (struct domain *d, struct evtchn *chn)
 {
-    return xsm_call(evtchn_status(d, chn));
+    return xsm_ops->evtchn_status(d, chn);
 }
 
 static inline int xsm_evtchn_reset (struct domain *d1, struct domain *d2)
 {
-    return xsm_call(evtchn_reset(d1, d2));
+    return xsm_ops->evtchn_reset(d1, d2);
 }
 
 static inline int xsm_grant_mapref (struct domain *d1, struct domain *d2,
                                                                 uint32_t flags)
 {
-    return xsm_call(grant_mapref(d1, d2, flags));
+    return xsm_ops->grant_mapref(d1, d2, flags);
 }
 
 static inline int xsm_grant_unmapref (struct domain *d1, struct domain *d2)
 {
-    return xsm_call(grant_unmapref(d1, d2));
+    return xsm_ops->grant_unmapref(d1, d2);
 }
 
 static inline int xsm_grant_setup (struct domain *d1, struct domain *d2)
 {
-    return xsm_call(grant_setup(d1, d2));
+    return xsm_ops->grant_setup(d1, d2);
 }
 
 static inline int xsm_grant_transfer (struct domain *d1, struct domain *d2)
 {
-    return xsm_call(grant_transfer(d1, d2));
+    return xsm_ops->grant_transfer(d1, d2);
 }
 
 static inline int xsm_grant_copy (struct domain *d1, struct domain *d2)
 {
-    return xsm_call(grant_copy(d1, d2));
+    return xsm_ops->grant_copy(d1, d2);
 }
 
 static inline int xsm_grant_query_size (struct domain *d1, struct domain *d2)
 {
-    return xsm_call(grant_query_size(d1, d2));
+    return xsm_ops->grant_query_size(d1, d2);
 }
 
 static inline int xsm_alloc_security_domain (struct domain *d)
 {
-    return xsm_call(alloc_security_domain(d));
+    return xsm_ops->alloc_security_domain(d);
 }
 
 static inline void xsm_free_security_domain (struct domain *d)
 {
-    (void)xsm_call(free_security_domain(d));
+    xsm_ops->free_security_domain(d);
 }
 
 static inline int xsm_alloc_security_evtchn (struct evtchn *chn)
 {
-    return xsm_call(alloc_security_evtchn(chn));
+    return xsm_ops->alloc_security_evtchn(chn);
 }
 
 static inline void xsm_free_security_evtchn (struct evtchn *chn)
 {
-    (void)xsm_call(free_security_evtchn(chn));
+    xsm_ops->free_security_evtchn(chn);
 }
 
 static inline char *xsm_show_security_evtchn (struct domain *d, const struct evtchn *chn)
 {
-    return xsm_call(show_security_evtchn(d, chn));
+    return xsm_ops->show_security_evtchn(d, chn);
 }
 
 static inline int xsm_get_pod_target (struct domain *d)
 {
-    return xsm_call(get_pod_target(d));
+    return xsm_ops->get_pod_target(d);
 }
 
 static inline int xsm_set_pod_target (struct domain *d)
 {
-    return xsm_call(set_pod_target(d));
+    return xsm_ops->set_pod_target(d);
 }
 
 static inline int xsm_memory_adjust_reservation (struct domain *d1, struct
                                                                     domain *d2)
 {
-    return xsm_call(memory_adjust_reservation(d1, d2));
+    return xsm_ops->memory_adjust_reservation(d1, d2);
 }
 
 static inline int xsm_memory_stat_reservation (struct domain *d1,
                                                             struct domain *d2)
 {
-    return xsm_call(memory_stat_reservation(d1, d2));
+    return xsm_ops->memory_stat_reservation(d1, d2);
 }
 
 static inline int xsm_memory_pin_page(struct domain *d, struct page_info *page)
 {
-    return xsm_call(memory_pin_page(d, page));
+    return xsm_ops->memory_pin_page(d, page);
 }
 
 static inline int xsm_remove_from_physmap(struct domain *d1, struct domain *d2)
 {
-    return xsm_call(remove_from_physmap(d1, d2));
+    return xsm_ops->remove_from_physmap(d1, d2);
 }
 
 static inline int xsm_console_io (struct domain *d, int cmd)
 {
-    return xsm_call(console_io(d, cmd));
+    return xsm_ops->console_io(d, cmd);
 }
 
 static inline int xsm_profile (struct domain *d, int op)
 {
-    return xsm_call(profile(d, op));
+    return xsm_ops->profile(d, op);
 }
 
 static inline int xsm_kexec (void)
 {
-    return xsm_call(kexec());
+    return xsm_ops->kexec();
 }
 
 static inline int xsm_schedop_shutdown (struct domain *d1, struct domain *d2)
 {
-    return xsm_call(schedop_shutdown(d1, d2));
+    return xsm_ops->schedop_shutdown(d1, d2);
 }
 
 static inline char *xsm_show_irq_sid (int irq)
 {
-    return xsm_call(show_irq_sid(irq));
+    return xsm_ops->show_irq_sid(irq);
 }
 
 static inline int xsm_map_domain_pirq (struct domain *d, int irq, void *data)
 {
-    return xsm_call(map_domain_pirq(d, irq, data));
+    return xsm_ops->map_domain_pirq(d, irq, data);
 }
 
 static inline int xsm_unmap_domain_pirq (struct domain *d, int irq)
 {
-    return xsm_call(unmap_domain_pirq(d, irq));
+    return xsm_ops->unmap_domain_pirq(d, irq);
 }
 
 static inline int xsm_irq_permission (struct domain *d, int pirq, uint8_t allow)
 {
-    return xsm_call(irq_permission(d, pirq, allow));
+    return xsm_ops->irq_permission(d, pirq, allow);
 }
 
 static inline int xsm_iomem_permission (struct domain *d, uint64_t s, uint64_t e, uint8_t allow)
 {
-    return xsm_call(iomem_permission(d, s, e, allow));
+    return xsm_ops->iomem_permission(d, s, e, allow);
 }
 
 static inline int xsm_iomem_mapping (struct domain *d, uint64_t s, uint64_t e, uint8_t allow)
 {
-    return xsm_call(iomem_mapping(d, s, e, allow));
+    return xsm_ops->iomem_mapping(d, s, e, allow);
 }
 
 static inline int xsm_pci_config_permission (struct domain *d, uint32_t machine_bdf, uint16_t start, uint16_t end, uint8_t access)
 {
-    return xsm_call(pci_config_permission(d, machine_bdf, start, end, access));
+    return xsm_ops->pci_config_permission(d, machine_bdf, start, end, access);
 }
 
 static inline int xsm_get_device_group(uint32_t machine_bdf)
 {
-    return xsm_call(get_device_group(machine_bdf));
+    return xsm_ops->get_device_group(machine_bdf);
 }
 
 static inline int xsm_test_assign_device(uint32_t machine_bdf)
 {
-    return xsm_call(test_assign_device(machine_bdf));
+    return xsm_ops->test_assign_device(machine_bdf);
 }
 
 static inline int xsm_assign_device(struct domain *d, uint32_t machine_bdf)
 {
-    return xsm_call(assign_device(d, machine_bdf));
+    return xsm_ops->assign_device(d, machine_bdf);
 }
 
 static inline int xsm_deassign_device(struct domain *d, uint32_t machine_bdf)
 {
-    return xsm_call(deassign_device(d, machine_bdf));
+    return xsm_ops->deassign_device(d, machine_bdf);
 }
 
 static inline int xsm_resource_plug_pci (uint32_t machine_bdf)
 {
-    return xsm_call(resource_plug_pci(machine_bdf));
+    return xsm_ops->resource_plug_pci(machine_bdf);
 }
 
 static inline int xsm_resource_unplug_pci (uint32_t machine_bdf)
 {
-    return xsm_call(resource_unplug_pci(machine_bdf));
+    return xsm_ops->resource_unplug_pci(machine_bdf);
 }
 
 static inline int xsm_resource_plug_core (void)
 {
-    return xsm_call(resource_plug_core());
+    return xsm_ops->resource_plug_core();
 }
 
 static inline int xsm_resource_unplug_core (void)
 {
-    return xsm_call(resource_unplug_core());
+    return xsm_ops->resource_unplug_core();
 }
 
 static inline int xsm_resource_setup_pci (uint32_t machine_bdf)
 {
-    return xsm_call(resource_setup_pci(machine_bdf));
+    return xsm_ops->resource_setup_pci(machine_bdf);
 }
 
 static inline int xsm_resource_setup_gsi (int gsi)
 {
-    return xsm_call(resource_setup_gsi(gsi));
+    return xsm_ops->resource_setup_gsi(gsi);
 }
 
 static inline int xsm_resource_setup_misc (void)
 {
-    return xsm_call(resource_setup_misc());
+    return xsm_ops->resource_setup_misc();
 }
 
 static inline int xsm_page_offline(uint32_t cmd)
 {
-    return xsm_call(page_offline(cmd));
+    return xsm_ops->page_offline(cmd);
 }
 
 static inline int xsm_lockprof(void)
 {
-    return xsm_call(lockprof());
+    return xsm_ops->lockprof();
 }
 
 static inline int xsm_cpupool_op(void)
 {
-    return xsm_call(cpupool_op());
+    return xsm_ops->cpupool_op();
 }
 
 static inline int xsm_sched_op(void)
 {
-    return xsm_call(sched_op());
+    return xsm_ops->sched_op();
 }
 
 static inline int xsm_tmem_control(uint32_t subop)
 {
-    return xsm_call(tmem_control(subop));
+    return xsm_ops->tmem_control(subop);
 }
 
 static inline long xsm___do_xsm_op (XEN_GUEST_HANDLE(xsm_op_t) op)
@@ -620,240 +618,240 @@ static inline long xsm___do_xsm_op (XEN_GUEST_HANDLE(xsm_op_t) op)
 #ifdef CONFIG_X86
 static inline int xsm_shadow_control (struct domain *d, uint32_t op)
 {
-    return xsm_call(shadow_control(d, op));
+    return xsm_ops->shadow_control(d, op);
 }
 
 static inline int xsm_getpageframeinfo (struct page_info *page)
 {
-    return xsm_call(getpageframeinfo(page));
+    return xsm_ops->getpageframeinfo(page);
 }
 
 static inline int xsm_getpageframeinfo_domain (struct domain *d)
 {
-    return xsm_call(getpageframeinfo_domain(d));
+    return xsm_ops->getpageframeinfo_domain(d);
 }
 
 static inline int xsm_set_cpuid (struct domain *d)
 {
-    return xsm_call(set_cpuid(d));
+    return xsm_ops->set_cpuid(d);
 }
 
 static inline int xsm_gettscinfo (struct domain *d)
 {
-    return xsm_call(gettscinfo(d));
+    return xsm_ops->gettscinfo(d);
 }
 
 static inline int xsm_settscinfo (struct domain *d)
 {
-    return xsm_call(settscinfo(d));
+    return xsm_ops->settscinfo(d);
 }
 
 static inline int xsm_getmemlist (struct domain *d)
 {
-    return xsm_call(getmemlist(d));
+    return xsm_ops->getmemlist(d);
 }
 
 static inline int xsm_hypercall_init (struct domain *d)
 {
-    return xsm_call(hypercall_init(d));
+    return xsm_ops->hypercall_init(d);
 }
 
 static inline int xsm_hvmcontext (struct domain *d, uint32_t cmd)
 {
-    return xsm_call(hvmcontext(d, cmd));
+    return xsm_ops->hvmcontext(d, cmd);
 }
 
 static inline int xsm_address_size (struct domain *d, uint32_t cmd)
 {
-    return xsm_call(address_size(d, cmd));
+    return xsm_ops->address_size(d, cmd);
 }
 
 static inline int xsm_machine_address_size (struct domain *d, uint32_t cmd)
 {
-    return xsm_call(machine_address_size(d, cmd));
+    return xsm_ops->machine_address_size(d, cmd);
 }
 
 static inline int xsm_hvm_param (struct domain *d, unsigned long op)
 {
-    return xsm_call(hvm_param(d, op));
+    return xsm_ops->hvm_param(d, op);
 }
 
 static inline int xsm_hvm_set_pci_intx_level (struct domain *d)
 {
-    return xsm_call(hvm_set_pci_intx_level(d));
+    return xsm_ops->hvm_set_pci_intx_level(d);
 }
 
 static inline int xsm_hvm_set_isa_irq_level (struct domain *d)
 {
-    return xsm_call(hvm_set_isa_irq_level(d));
+    return xsm_ops->hvm_set_isa_irq_level(d);
 }
 
 static inline int xsm_hvm_set_pci_link_route (struct domain *d)
 {
-    return xsm_call(hvm_set_pci_link_route(d));
+    return xsm_ops->hvm_set_pci_link_route(d);
 }
 
 static inline int xsm_hvm_inject_msi (struct domain *d)
 {
-    return xsm_call(hvm_inject_msi(d));
+    return xsm_ops->hvm_inject_msi(d);
 }
 
 static inline int xsm_mem_event_setup (struct domain *d)
 {
-    return xsm_call(mem_event_setup(d));
+    return xsm_ops->mem_event_setup(d);
 }
 
 static inline int xsm_mem_event_control (struct domain *d, int mode, int op)
 {
-    return xsm_call(mem_event_control(d, mode, op));
+    return xsm_ops->mem_event_control(d, mode, op);
 }
 
 static inline int xsm_mem_event_op (struct domain *d, int op)
 {
-    return xsm_call(mem_event_op(d, op));
+    return xsm_ops->mem_event_op(d, op);
 }
 
 static inline int xsm_mem_sharing (struct domain *d)
 {
-    return xsm_call(mem_sharing(d));
+    return xsm_ops->mem_sharing(d);
 }
 
 static inline int xsm_mem_sharing_op (struct domain *d, struct domain *cd, int op)
 {
-    return xsm_call(mem_sharing_op(d, cd, op));
+    return xsm_ops->mem_sharing_op(d, cd, op);
 }
 
 static inline int xsm_audit_p2m (struct domain *d)
 {
-    return xsm_call(audit_p2m(d));
+    return xsm_ops->audit_p2m(d);
 }
 
 static inline int xsm_apic (struct domain *d, int cmd)
 {
-    return xsm_call(apic(d, cmd));
+    return xsm_ops->apic(d, cmd);
 }
 
 static inline int xsm_xen_settime (void)
 {
-    return xsm_call(xen_settime());
+    return xsm_ops->xen_settime();
 }
 
 static inline int xsm_memtype (uint32_t access)
 {
-    return xsm_call(memtype(access));
+    return xsm_ops->memtype(access);
 }
 
 static inline int xsm_microcode (void)
 {
-    return xsm_call(microcode());
+    return xsm_ops->microcode();
 }
 
 static inline int xsm_physinfo (void)
 {
-    return xsm_call(physinfo());
+    return xsm_ops->physinfo();
 }
 
 static inline int xsm_platform_quirk (uint32_t quirk)
 {
-    return xsm_call(platform_quirk(quirk));
+    return xsm_ops->platform_quirk(quirk);
 }
 
 static inline int xsm_firmware_info (void)
 {
-    return xsm_call(firmware_info());
+    return xsm_ops->firmware_info();
 }
 
 static inline int xsm_efi_call (void)
 {
-    return xsm_call(efi_call());
+    return xsm_ops->efi_call();
 }
 
 static inline int xsm_acpi_sleep (void)
 {
-    return xsm_call(acpi_sleep());
+    return xsm_ops->acpi_sleep();
 }
 
 static inline int xsm_change_freq (void)
 {
-    return xsm_call(change_freq());
+    return xsm_ops->change_freq();
 }
 
 static inline int xsm_getidletime (void)
 {
-    return xsm_call(getidletime());
+    return xsm_ops->getidletime();
 }
 
 static inline int xsm_machine_memory_map(void)
 {
-    return xsm_call(machine_memory_map());
+    return xsm_ops->machine_memory_map();
 }
 
 static inline int xsm_domain_memory_map(struct domain *d)
 {
-    return xsm_call(domain_memory_map(d));
+    return xsm_ops->domain_memory_map(d);
 }
 
 static inline int xsm_mmu_normal_update (struct domain *d, struct domain *t,
                                          struct domain *f, intpte_t fpte)
 {
-    return xsm_call(mmu_normal_update(d, t, f, fpte));
+    return xsm_ops->mmu_normal_update(d, t, f, fpte);
 }
 
 static inline int xsm_mmu_machphys_update (struct domain *d, unsigned long mfn)
 {
-    return xsm_call(mmu_machphys_update(d, mfn));
+    return xsm_ops->mmu_machphys_update(d, mfn);
 }
 
 static inline int xsm_update_va_mapping(struct domain *d, struct domain *f, 
                                                             l1_pgentry_t pte)
 {
-    return xsm_call(update_va_mapping(d, f, pte));
+    return xsm_ops->update_va_mapping(d, f, pte);
 }
 
 static inline int xsm_add_to_physmap(struct domain *d1, struct domain *d2)
 {
-    return xsm_call(add_to_physmap(d1, d2));
+    return xsm_ops->add_to_physmap(d1, d2);
 }
 
 static inline int xsm_sendtrigger(struct domain *d)
 {
-    return xsm_call(sendtrigger(d));
+    return xsm_ops->sendtrigger(d);
 }
 
 static inline int xsm_bind_pt_irq(struct domain *d, 
                                                 struct xen_domctl_bind_pt_irq *bind)
 {
-    return xsm_call(bind_pt_irq(d, bind));
+    return xsm_ops->bind_pt_irq(d, bind);
 }
 
 static inline int xsm_unbind_pt_irq(struct domain *d,
                                                 struct xen_domctl_bind_pt_irq *bind)
 {
-    return xsm_call(unbind_pt_irq(d, bind));
+    return xsm_ops->unbind_pt_irq(d, bind);
 }
 
 static inline int xsm_pin_mem_cacheattr(struct domain *d)
 {
-    return xsm_call(pin_mem_cacheattr(d));
+    return xsm_ops->pin_mem_cacheattr(d);
 }
 
 static inline int xsm_ext_vcpucontext(struct domain *d, uint32_t cmd)
 {
-    return xsm_call(ext_vcpucontext(d, cmd));
+    return xsm_ops->ext_vcpucontext(d, cmd);
 }
 static inline int xsm_vcpuextstate(struct domain *d, uint32_t cmd)
 {
-    return xsm_call(vcpuextstate(d, cmd));
+    return xsm_ops->vcpuextstate(d, cmd);
 }
 
 static inline int xsm_ioport_permission (struct domain *d, uint32_t s, uint32_t e, uint8_t allow)
 {
-    return xsm_call(ioport_permission(d, s, e, allow));
+    return xsm_ops->ioport_permission(d, s, e, allow);
 }
 
 static inline int xsm_ioport_mapping (struct domain *d, uint32_t s, uint32_t e, uint8_t allow)
 {
-    return xsm_call(ioport_mapping(d, s, e, allow));
+    return xsm_ops->ioport_mapping(d, s, e, allow);
 }
 #endif /* CONFIG_X86 */
 
-- 
1.7.11.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 14:32:51 2012
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: xen-devel@lists.xen.org
Date: Mon,  6 Aug 2012 10:32:27 -0400
Message-Id: <1344263550-3941-16-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.2
In-Reply-To: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
References: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Subject: [Xen-devel] [PATCH 15/18] xsm/flask: add distinct SIDs for
	self/target access

Because the FLASK XSM module no longer checks IS_PRIV for remote domain
accesses covered by XSM permissions, domains now have the ability to
perform memory management and other functions on all domains that have
the same type. While it is possible to prevent this by only creating one
domain per type, this solution significantly limits the flexibility of
the type system.

This patch introduces a domain type transition to represent a domain
operating on itself. In the example policy, this is demonstrated by
creating a type with _self appended whenever a domain type is declared;
the _self type is used for reflexive operations. AVCs for a domain of
type domU_t will look like the following:

scontext=system_u:system_r:domU_t tcontext=system_u:system_r:domU_t_self
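For illustration (hand-expanding the declare_domain macro from the example
policy, so the exact rule set is only a sketch), declaring domU_t produces
rules along these lines:

    type domU_t, domain_type;
    type domU_t_self, domain_type, domain_self_type;
    type_transition domU_t domU_t:domain domU_t_self;
    type domU_t_channel, event_type;
    type_transition domU_t domain_type:event domU_t_channel;

The type_transition on the domain class is what relabels reflexive accesses
to domU_t_self, producing the tcontext shown above.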

This change also allows policy to distinguish between event channels a
domain creates to itself and event channels created between domains of
the same type.

The IS_PRIV_FOR check used for device model domains is likewise no longer
applied by FLASK; a similar type transition is performed when the target
is set, and the resulting SID is used when the device model accesses its
target domain.
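As a sketch of the target handling (hand-expanding the device_model macro
from the example policy with dm_dom_t as the device model and domHVM_t as
the HVM guest), the relevant rules look roughly like:

    type domHVM_t_target, domain_type, domain_target_type;
    type_transition domHVM_t dm_dom_t:domain domHVM_t_target;
    allow dm_dom_t domHVM_t:domain set_target;
    allow dm_dom_t domHVM_t_target:domain shutdown;

Once set_target succeeds, FLASK computes the _target SID with
security_transition_sid and uses it for subsequent device model accesses.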

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 tools/flask/policy/policy/modules/xen/xen.if |  64 +++-
 tools/flask/policy/policy/modules/xen/xen.te |  13 +-
 xen/xsm/flask/flask_op.c                     |   9 +
 xen/xsm/flask/hooks.c                        | 470 +++++++++++++--------------
 xen/xsm/flask/include/objsec.h               |   2 +
 5 files changed, 289 insertions(+), 269 deletions(-)

diff --git a/tools/flask/policy/policy/modules/xen/xen.if b/tools/flask/policy/policy/modules/xen/xen.if
index f9bd757..796698b 100644
--- a/tools/flask/policy/policy/modules/xen/xen.if
+++ b/tools/flask/policy/policy/modules/xen/xen.if
@@ -5,15 +5,34 @@
 # Domain creation and setup
 #
 ################################################################################
+define(`declare_domain_common', `
+	allow $1 $2:grant { query setup };
+	allow $1 $2:mmu { adjust physmap map_read map_write stat pinpage updatemp };
+	allow $1 $2:hvm { getparam setparam };
+')
+
 # declare_domain(type, attrs...)
-#   Declare a type as a domain type, and allow basic domain setup
+#   Declare a domain type, along with associated _self and _channel types
+#   Allow the domain to perform basic operations on itself
 define(`declare_domain', `
 	type $1, domain_type`'ifelse(`$#', `1', `', `,shift($@)');
+	type $1_self, domain_type, domain_self_type;
+	type_transition $1 $1:domain $1_self;
+	type $1_channel, event_type;
+	type_transition $1 domain_type:event $1_channel;
+	declare_domain_common($1, $1_self)
+')
+
+# declare_singleton_domain(type, attrs...)
+#   Declare a domain type and associated _channel types.
+#   Note: Because the domain can perform basic operations on itself and any
+#   other domain of the same type, this constructor should be used for types
+#   containing at most one domain. This is not enforced by policy.
+define(`declare_singleton_domain', `
+	type $1, domain_type`'ifelse(`$#', `1', `', `,shift($@)');
 	type $1_channel, event_type;
 	type_transition $1 domain_type:event $1_channel;
-	allow $1 $1:grant { query setup };
-	allow $1 $1:mmu { adjust physmap map_read map_write stat pinpage };
-	allow $1 $1:hvm { getparam setparam };
+	declare_domain_common($1, $1)
 ')
 
 # declare_build_label(type)
@@ -51,6 +70,7 @@ define(`create_domain_build_label', `
 	allow $1 $2_channel:event create;
 	allow $1 $2_building:domain2 relabelfrom;
 	allow $1 $2:domain2 relabelto;
+	allow $2_building $2:domain transition;
 ')
 
 # manage_domain(priv, target)
@@ -101,20 +121,36 @@ define(`domain_comms', `
 ')
 
 # domain_self_comms(domain)
-#   Allow a domain types to communicate with others of its type using grants
-#   and event channels (this includes event channels to DOMID_SELF)
+#   Allow a non-singleton domain type to communicate with itself using grants
+#   and event channels
 define(`domain_self_comms', `
-	create_channel($1, $1, $1_channel)
-	allow $1 $1:grant { map_read map_write copy unmap };
+	create_channel($1, $1_self, $1_channel)
+	allow $1 $1_self:grant { map_read map_write copy unmap };
 ')
 
 # device_model(dm_dom, hvm_dom)
 #   Define how a device model domain interacts with its target
 define(`device_model', `
-	domain_comms($1, $2)
-	allow $1 $2:domain { set_target shutdown };
-	allow $1 $2:mmu { map_read map_write adjust physmap };
-	allow $1 $2:hvm { getparam setparam trackdirtyvram hvmctl irqlevel pciroute };
+	type $2_target, domain_type, domain_target_type;
+	type_transition $2 $1:domain $2_target;
+	allow $1 $2:domain set_target;
+
+	type_transition $2_target domain_type:event $2_channel;
+	create_channel($1, $2_target, $1_channel)
+	create_channel($2, $1, $2_channel)
+	allow $1 $2_channel:event create;
+
+	allow $1 $2_target:domain shutdown;
+	allow $1 $2_target:mmu { map_read map_write adjust physmap };
+	allow $1 $2_target:hvm { getparam setparam trackdirtyvram hvmctl irqlevel pciroute cacheattr };
+')
+
+# make_device_model(priv, dm_dom, hvm_dom)
+#   Allow creation of a device model and HVM domain pair
+define(`make_device_model', `
+	device_model($2, $3)
+	allow $1 $2:domain2 make_priv_for;
+	allow $1 $3:domain2 set_as_target;
 ')
 ################################################################################
 #
@@ -132,7 +168,9 @@ define(`use_device', `
 # admin_device(domain, device)
 #   Allow a device to be used and delegated by a domain
 define(`admin_device', `
-    allow $1 $2:resource { setup stat_device add_device add_irq add_iomem add_ioport remove_device remove_irq remove_iomem remove_ioport };
+    allow $1 $2:resource { setup stat_device add_device add_irq add_iomem add_ioport
+	                       remove_device remove_irq remove_iomem remove_ioport
+						   plug unplug };
     allow $1 $2:hvm bind_irq;
     use_device($1, $2)
 ')
diff --git a/tools/flask/policy/policy/modules/xen/xen.te b/tools/flask/policy/policy/modules/xen/xen.te
index 1162153..8d33285 100644
--- a/tools/flask/policy/policy/modules/xen/xen.te
+++ b/tools/flask/policy/policy/modules/xen/xen.te
@@ -8,6 +8,8 @@
 ################################################################################
 attribute xen_type;
 attribute domain_type;
+attribute domain_self_type;
+attribute domain_target_type;
 attribute resource_type;
 attribute event_type;
 attribute mls_priv;
@@ -25,7 +27,7 @@ attribute mls_priv;
 type xen_t, xen_type, mls_priv;
 
 # Domain 0
-declare_domain(dom0_t, mls_priv);
+declare_singleton_domain(dom0_t, mls_priv);
 
 # Untracked I/O memory (pseudo-domain)
 type domio_t, xen_type;
@@ -69,7 +71,7 @@ admin_device(dom0_t, ioport_t)
 admin_device(dom0_t, iomem_t)
 allow dom0_t domio_t:mmu { map_read map_write };
 
-domain_self_comms(dom0_t)
+domain_comms(dom0_t, dom0_t)
 
 auditallow dom0_t security_t:security { load_policy setenforce setbool };
 
@@ -84,11 +86,14 @@ domain_self_comms(domU_t)
 create_domain(dom0_t, domU_t)
 manage_domain(dom0_t, domU_t)
 domain_comms(dom0_t, domU_t)
+domain_comms(domU_t, domU_t)
+domain_self_comms(domU_t)
 
 declare_domain(isolated_domU_t)
 create_domain(dom0_t, isolated_domU_t)
 manage_domain(dom0_t, isolated_domU_t)
 domain_comms(dom0_t, isolated_domU_t)
+domain_self_comms(isolated_domU_t)
 
 # Declare a boolean that denies creation of prot_domU_t domains
 gen_bool(prot_doms_locked, false)
@@ -98,6 +103,8 @@ if (!prot_doms_locked) {
 }
 domain_comms(dom0_t, prot_domU_t)
 domain_comms(domU_t, prot_domU_t)
+domain_comms(prot_domU_t, prot_domU_t)
+domain_self_comms(prot_domU_t)
 
 # domHVM_t is meant to be paired with a qemu-dm stub domain of type dm_dom_t
 declare_domain(domHVM_t)
@@ -110,7 +117,7 @@ declare_domain(dm_dom_t)
 create_domain(dom0_t, dm_dom_t)
 manage_domain(dom0_t, dm_dom_t)
 domain_comms(dom0_t, dm_dom_t)
-device_model(dm_dom_t, domHVM_t)
+make_device_model(dom0_t, dm_dom_t, domHVM_t)
 
 # nomigrate_t must be built via the nomigrate_t_building label; once built,
 # dom0 cannot read its memory.
diff --git a/xen/xsm/flask/flask_op.c b/xen/xsm/flask/flask_op.c
index 9c0a087..28f6f5e 100644
--- a/xen/xsm/flask/flask_op.c
+++ b/xen/xsm/flask/flask_op.c
@@ -612,6 +612,15 @@ static int flask_relabel_domain(struct xen_flask_relabel *arg)
         goto out;
 
     dsec->sid = arg->sid;
+    dsec->self_sid = arg->sid;
+    security_transition_sid(dsec->sid, dsec->sid, SECCLASS_DOMAIN,
+                            &dsec->self_sid);
+    if ( d->target )
+    {
+        struct domain_security_struct *tsec = d->target->ssid;
+        security_transition_sid(tsec->sid, dsec->sid, SECCLASS_DOMAIN,
+                                &dsec->target_sid);
+    }
 
  out:
     rcu_unlock_domain(d);
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index be5c3ad..dae587c 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -33,38 +33,69 @@
 
 struct xsm_operations *original_ops = NULL;
 
+static u32 domain_sid(struct domain *dom)
+{
+    struct domain_security_struct *dsec = dom->ssid;
+    return dsec->sid;
+}
+
+static u32 domain_target_sid(struct domain *src, struct domain *dst)
+{
+    struct domain_security_struct *ssec = src->ssid;
+    struct domain_security_struct *dsec = dst->ssid;
+    if (src == dst)
+        return ssec->self_sid;
+    if (src->target == dst)
+        return ssec->target_sid;
+    return dsec->sid;
+}
+
+static u32 evtchn_sid(const struct evtchn *chn)
+{
+    struct evtchn_security_struct *esec = chn->ssid;
+    return esec->sid;
+}
+
 static int domain_has_perm(struct domain *dom1, struct domain *dom2, 
                            u16 class, u32 perms)
 {
-    struct domain_security_struct *dsec1, *dsec2;
+    u32 ssid, tsid;
     struct avc_audit_data ad;
     AVC_AUDIT_DATA_INIT(&ad, NONE);
     ad.sdom = dom1;
     ad.tdom = dom2;
 
-    dsec1 = dom1->ssid;
-    dsec2 = dom2->ssid;
+    ssid = domain_sid(dom1);
+    tsid = domain_target_sid(dom1, dom2);
 
-    return avc_has_perm(dsec1->sid, dsec2->sid, class, perms, &ad);
+    return avc_has_perm(ssid, tsid, class, perms, &ad);
 }
 
-static int domain_has_evtchn(struct domain *d, struct evtchn *chn, u32 perms)
+static int avc_current_has_perm(u32 tsid, u16 class, u32 perm,
+                                struct avc_audit_data *ad)
 {
-    struct domain_security_struct *dsec;
-    struct evtchn_security_struct *esec;
+    u32 csid = domain_sid(current->domain);
+    return avc_has_perm(csid, tsid, class, perm, ad);
+}
 
-    dsec = d->ssid;
-    esec = chn->ssid;
+static int current_has_perm(struct domain *d, u16 class, u32 perms)
+{
+    return domain_has_perm(current->domain, d, class, perms);
+}
+
+static int domain_has_evtchn(struct domain *d, struct evtchn *chn, u32 perms)
+{
+    u32 dsid = domain_sid(d);
+    u32 esid = evtchn_sid(chn);
 
-    return avc_has_perm(dsec->sid, esec->sid, SECCLASS_EVENT, perms, NULL);
+    return avc_has_perm(dsid, esid, SECCLASS_EVENT, perms, NULL);
 }
 
 static int domain_has_xen(struct domain *d, u32 perms)
 {
-    struct domain_security_struct *dsec;
-    dsec = d->ssid;
+    u32 dsid = domain_sid(d);
 
-    return avc_has_perm(dsec->sid, SECINITSID_XEN, SECCLASS_XEN, perms, NULL);
+    return avc_has_perm(dsid, SECINITSID_XEN, SECCLASS_XEN, perms, NULL);
 }
 
 static int get_irq_sid(int irq, u32 *sid, struct avc_audit_data *ad)
@@ -109,14 +140,11 @@ static int flask_domain_alloc_security(struct domain *d)
     memset(dsec, 0, sizeof(struct domain_security_struct));
 
     if ( is_idle_domain(d) )
-    {
         dsec->sid = SECINITSID_XEN;
-    }
     else
-    {
         dsec->sid = SECINITSID_UNLABELED;
-    }
 
+    dsec->self_sid = dsec->sid;
     d->ssid = dsec;
 
     return 0;
@@ -136,68 +164,55 @@ static void flask_domain_free_security(struct domain *d)
 static int flask_evtchn_unbound(struct domain *d1, struct evtchn *chn, 
                                 domid_t id2)
 {
-    u32 newsid;
+    u32 sid1, sid2, newsid;
     int rc;
-    domid_t id;
     struct domain *d2;
-    struct domain_security_struct *dsec, *dsec1, *dsec2;
     struct evtchn_security_struct *esec;
 
-    dsec = current->domain->ssid;
-    dsec1 = d1->ssid;
-    esec = chn->ssid;
-
-    if ( id2 == DOMID_SELF )
-        id = current->domain->domain_id;
-    else
-        id = id2;
-
-    d2 = get_domain_by_id(id);
+    d2 = rcu_lock_domain_by_id(id2);
     if ( d2 == NULL )
         return -EPERM;
 
-    dsec2 = d2->ssid;
-    rc = security_transition_sid(dsec1->sid, dsec2->sid, SECCLASS_EVENT, 
-                                 &newsid);
+    sid1 = domain_sid(d1);
+    sid2 = domain_target_sid(d1, d2);
+    esec = chn->ssid;
+
+    rc = security_transition_sid(sid1, sid2, SECCLASS_EVENT, &newsid);
     if ( rc )
         goto out;
 
-    rc = avc_has_perm(dsec->sid, newsid, SECCLASS_EVENT, EVENT__CREATE, NULL);
+    rc = avc_current_has_perm(newsid, SECCLASS_EVENT, EVENT__CREATE, NULL);
     if ( rc )
         goto out;
 
-    rc = avc_has_perm(newsid, dsec2->sid, SECCLASS_EVENT, EVENT__BIND, NULL);
+    rc = avc_has_perm(newsid, sid2, SECCLASS_EVENT, EVENT__BIND, NULL);
     if ( rc )
         goto out;
-    else
-        esec->sid = newsid;
+
+    esec->sid = newsid;
 
  out:
-    put_domain(d2);
+    rcu_unlock_domain(d2);
     return rc;
 }
 
 static int flask_evtchn_interdomain(struct domain *d1, struct evtchn *chn1, 
                                     struct domain *d2, struct evtchn *chn2)
 {
-    u32 newsid;
+    u32 sid1, sid2, newsid, reverse_sid;
     int rc;
-    struct domain_security_struct *dsec, *dsec1, *dsec2;
-    struct evtchn_security_struct *esec1, *esec2;
+    struct evtchn_security_struct *esec1;
     struct avc_audit_data ad;
     AVC_AUDIT_DATA_INIT(&ad, NONE);
     ad.sdom = d1;
     ad.tdom = d2;
 
-    dsec = current->domain->ssid;
-    dsec1 = d1->ssid;
-    dsec2 = d2->ssid;
+    sid1 = domain_sid(d1);
+    sid2 = domain_target_sid(d1, d2);
 
     esec1 = chn1->ssid;
-    esec2 = chn2->ssid;
 
-    rc = security_transition_sid(dsec1->sid, dsec2->sid, 
-                                 SECCLASS_EVENT, &newsid);
+    rc = security_transition_sid(sid1, sid2, SECCLASS_EVENT, &newsid);
     if ( rc )
     {
         printk("%s: security_transition_sid failed, rc=%d (domain=%d)\n",
@@ -205,15 +220,20 @@ static int flask_evtchn_interdomain(struct domain *d1, struct evtchn *chn1,
         return rc;
     }
 
-    rc = avc_has_perm(dsec->sid, newsid, SECCLASS_EVENT, EVENT__CREATE, &ad);
+    rc = avc_current_has_perm(newsid, SECCLASS_EVENT, EVENT__CREATE, &ad);
     if ( rc )
         return rc;
 
-    rc = avc_has_perm(newsid, dsec2->sid, SECCLASS_EVENT, EVENT__BIND, &ad);
+    rc = avc_has_perm(newsid, sid2, SECCLASS_EVENT, EVENT__BIND, &ad);
     if ( rc )
         return rc;
 
-    rc = avc_has_perm(esec2->sid, dsec1->sid, SECCLASS_EVENT, EVENT__BIND, &ad);
+    /* It's possible the target domain has changed (relabel or destroy/create)
+     * since the unbound part was created; re-validate this binding now.
+     */
+    reverse_sid = evtchn_sid(chn2);
+    sid1 = domain_target_sid(d2, d1);
+    rc = avc_has_perm(reverse_sid, sid1, SECCLASS_EVENT, EVENT__BIND, &ad);
     if ( rc )
         return rc;
 
@@ -296,7 +316,6 @@ static void flask_free_security_evtchn(struct evtchn *chn)
 
 static char *flask_show_security_evtchn(struct domain *d, const struct evtchn *chn)
 {
-    struct evtchn_security_struct *esec;
     int irq;
     u32 sid = 0;
     char *ctx;
@@ -306,9 +325,7 @@ static char *flask_show_security_evtchn(struct domain *d, const struct evtchn *c
     {
     case ECS_UNBOUND:
     case ECS_INTERDOMAIN:
-        esec = chn->ssid;
-        if ( esec )
-            sid = esec->sid;
+        sid = evtchn_sid(chn);
         break;
     case ECS_PIRQ:
         irq = domain_pirq_to_irq(d, chn->u.pirq.irq);
@@ -359,11 +376,10 @@ static int flask_grant_query_size(struct domain *d1, struct domain *d2)
     return domain_has_perm(d1, d2, SECCLASS_GRANT, GRANT__QUERY);
 }
 
-static int get_page_sid(struct page_info *page, u32 *sid)
+static int get_page_sid(struct domain *who, struct page_info *page, u32 *sid)
 {
     int rc = 0;
     struct domain *d;
-    struct domain_security_struct *dsec;
     unsigned long mfn;
 
     d = page_get_owner(page);
@@ -389,15 +405,14 @@ static int get_page_sid(struct page_info *page, u32 *sid)
 
     default:
         /*Pages are implicitly labeled by domain ownership!*/
-        dsec = d->ssid;
-        *sid = dsec ? dsec->sid : SECINITSID_UNLABELED;
+        *sid = domain_target_sid(who, d);
         break;
     }
 
     return rc;
 }
 
-static int get_mfn_sid(unsigned long mfn, u32 *sid)
+static int get_mfn_sid(struct domain *who, unsigned long mfn, u32 *sid)
 {
     int rc = 0;
     struct page_info *page;
@@ -406,7 +421,7 @@ static int get_mfn_sid(unsigned long mfn, u32 *sid)
     {
         /*mfn is valid if this is a page that Xen is tracking!*/
         page = mfn_to_page(mfn);
-        rc = get_page_sid(page, sid);
+        rc = get_page_sid(who, page, sid);
     }
     else
     {
@@ -419,12 +434,12 @@ static int get_mfn_sid(unsigned long mfn, u32 *sid)
 
 static int flask_get_pod_target(struct domain *d)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN, DOMAIN__GETPODTARGET);
+    return current_has_perm(d, SECCLASS_DOMAIN, DOMAIN__GETPODTARGET);
 }
 
 static int flask_set_pod_target(struct domain *d)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN, DOMAIN__SETPODTARGET);
+    return current_has_perm(d, SECCLASS_DOMAIN, DOMAIN__SETPODTARGET);
 }
 
 static int flask_memory_adjust_reservation(struct domain *d1, struct domain *d2)
@@ -440,15 +455,14 @@ static int flask_memory_stat_reservation(struct domain *d1, struct domain *d2)
 static int flask_memory_pin_page(struct domain *d, struct page_info *page)
 {
     int rc = 0;
-    u32 sid;
-    struct domain_security_struct *dsec;
-    dsec = d->ssid;
+    u32 dsid, psid;
+    dsid = domain_sid(d);
 
-    rc = get_page_sid(page, &sid);
+    rc = get_page_sid(d, page, &psid);
     if ( rc )
         return rc;
 
-    return avc_has_perm(dsec->sid, sid, SECCLASS_MMU, MMU__PINPAGE, NULL);
+    return avc_has_perm(dsid, psid, SECCLASS_MMU, MMU__PINPAGE, NULL);
 }
 
 static int flask_console_io(struct domain *d, int cmd)
@@ -515,70 +529,65 @@ static int flask_schedop_shutdown(struct domain *d1, struct domain *d2)
 static void flask_security_domaininfo(struct domain *d, 
                                       struct xen_domctl_getdomaininfo *info)
 {
-    struct domain_security_struct *dsec;
-
-    dsec = d->ssid;
-    info->ssidref = dsec->sid;
+    info->ssidref = domain_sid(d);
 }
 
 static int flask_setvcpucontext(struct domain *d)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN, 
-                           DOMAIN__SETVCPUCONTEXT);
+    return current_has_perm(d, SECCLASS_DOMAIN, DOMAIN__SETVCPUCONTEXT);
 }
 
 static int flask_pausedomain(struct domain *d)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN, DOMAIN__PAUSE);
+    return current_has_perm(d, SECCLASS_DOMAIN, DOMAIN__PAUSE);
 }
 
 static int flask_unpausedomain(struct domain *d)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN, DOMAIN__UNPAUSE);
+    return current_has_perm(d, SECCLASS_DOMAIN, DOMAIN__UNPAUSE);
 }
 
 static int flask_resumedomain(struct domain *d)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN, DOMAIN__RESUME);
+    return current_has_perm(d, SECCLASS_DOMAIN, DOMAIN__RESUME);
 }
 
 static int flask_domain_create(struct domain *d, u32 ssidref)
 {
     int rc;
-    struct domain_security_struct *dsec1;
-    struct domain_security_struct *dsec2;
+    struct domain_security_struct *dsec = d->ssid;
     static int dom0_created = 0;
 
-    dsec1 = current->domain->ssid;
-    dsec2 = d->ssid;
-
     if ( is_idle_domain(current->domain) && !dom0_created )
     {
-        dsec2->sid = SECINITSID_DOM0;
+        dsec->sid = SECINITSID_DOM0;
         dom0_created = 1;
-        return 0;
     }
+    else
+    {
+        rc = avc_current_has_perm(ssidref, SECCLASS_DOMAIN,
+                          DOMAIN__CREATE, NULL);
+        if ( rc )
+            return rc;
 
-    rc = avc_has_perm(dsec1->sid, ssidref, SECCLASS_DOMAIN,
-                      DOMAIN__CREATE, NULL);
-    if ( rc )
-        return rc;
+        dsec->sid = ssidref;
+    }
+    dsec->self_sid = dsec->sid;
 
-    dsec2->sid = ssidref;
+    rc = security_transition_sid(dsec->sid, dsec->sid, SECCLASS_DOMAIN,
+                                 &dsec->self_sid);
 
     return rc;
 }
 
 static int flask_max_vcpus(struct domain *d)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN, 
-                           DOMAIN__MAX_VCPUS);
+    return current_has_perm(d, SECCLASS_DOMAIN, DOMAIN__MAX_VCPUS);
 }
 
 static int flask_destroydomain(struct domain *d)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN, 
-                           DOMAIN__DESTROY);
+    return current_has_perm(d, SECCLASS_DOMAIN, DOMAIN__DESTROY);
 }
 
 static int flask_vcpuaffinity(int cmd, struct domain *d)
@@ -597,7 +606,7 @@ static int flask_vcpuaffinity(int cmd, struct domain *d)
         return -EPERM;
     }
 
-    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN, perm );
+    return current_has_perm(d, SECCLASS_DOMAIN, perm );
 }
 
 static int flask_scheduler(struct domain *d)
@@ -608,53 +617,61 @@ static int flask_scheduler(struct domain *d)
     if ( rc )
         return rc;
 
-    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN, 
-                           DOMAIN__SCHEDULER);
+    return current_has_perm(d, SECCLASS_DOMAIN, DOMAIN__SCHEDULER);
 }
 
 static int flask_getdomaininfo(struct domain *d)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN,
-                           DOMAIN__GETDOMAININFO);
+    return current_has_perm(d, SECCLASS_DOMAIN, DOMAIN__GETDOMAININFO);
 }
 
 static int flask_getvcpucontext(struct domain *d)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN, 
-                           DOMAIN__GETVCPUCONTEXT);
+    return current_has_perm(d, SECCLASS_DOMAIN, DOMAIN__GETVCPUCONTEXT);
 }
 
 static int flask_getvcpuinfo(struct domain *d)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN,
-                           DOMAIN__GETVCPUINFO);
+    return current_has_perm(d, SECCLASS_DOMAIN, DOMAIN__GETVCPUINFO);
 }
 
 static int flask_domain_settime(struct domain *d)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN, DOMAIN__SETTIME);
+    return current_has_perm(d, SECCLASS_DOMAIN, DOMAIN__SETTIME);
 }
 
-static int flask_set_target(struct domain *d, struct domain *e)
+static int flask_set_target(struct domain *d, struct domain *t)
 {
     int rc;
-    rc = domain_has_perm(current->domain, d, SECCLASS_DOMAIN2, DOMAIN2__MAKE_PRIV_FOR);
+    struct domain_security_struct *dsec, *tsec;
+    dsec = d->ssid;
+    tsec = t->ssid;
+
+    rc = current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__MAKE_PRIV_FOR);
     if ( rc )
         return rc;
-    rc = domain_has_perm(current->domain, e, SECCLASS_DOMAIN2, DOMAIN2__SET_AS_TARGET);
+    rc = current_has_perm(t, SECCLASS_DOMAIN2, DOMAIN2__SET_AS_TARGET);
     if ( rc )
         return rc;
-    return domain_has_perm(d, e, SECCLASS_DOMAIN, DOMAIN__SET_TARGET);
+    /* Use avc_has_perm to avoid resolving target/current SID */
+    rc = avc_has_perm(dsec->sid, tsec->sid, SECCLASS_DOMAIN, DOMAIN__SET_TARGET, NULL);
+    if ( rc )
+        return rc;
+
+    /* (tsec, dsec) defaults the label to tsec, as it should here */
+    rc = security_transition_sid(tsec->sid, dsec->sid, SECCLASS_DOMAIN,
+                                 &dsec->target_sid);
+    return rc;
 }
 
 static int flask_domctl(struct domain *d, int cmd)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN, DOMAIN__SET_MISC_INFO);
+    return current_has_perm(d, SECCLASS_DOMAIN, DOMAIN__SET_MISC_INFO);
 }
 
 static int flask_set_virq_handler(struct domain *d, uint32_t virq)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN, DOMAIN__SET_VIRQ_HANDLER);
+    return current_has_perm(d, SECCLASS_DOMAIN, DOMAIN__SET_VIRQ_HANDLER);
 }
 
 static int flask_tbufcontrol(void)
@@ -679,26 +696,22 @@ static int flask_sched_id(void)
 
 static int flask_setdomainmaxmem(struct domain *d)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN,
-                           DOMAIN__SETDOMAINMAXMEM);
+    return current_has_perm(d, SECCLASS_DOMAIN, DOMAIN__SETDOMAINMAXMEM);
 }
 
 static int flask_setdomainhandle(struct domain *d)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN,
-                           DOMAIN__SETDOMAINHANDLE);
+    return current_has_perm(d, SECCLASS_DOMAIN, DOMAIN__SETDOMAINHANDLE);
 }
 
 static int flask_setdebugging(struct domain *d)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN,
-                           DOMAIN__SETDEBUGGING);
+    return current_has_perm(d, SECCLASS_DOMAIN, DOMAIN__SETDEBUGGING);
 }
 
 static int flask_debug_op(struct domain *d)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN,
-                           DOMAIN__SETDEBUGGING);
+    return current_has_perm(d, SECCLASS_DOMAIN, DOMAIN__SETDEBUGGING);
 }
 
 static int flask_debug_keys(void)
@@ -760,14 +773,12 @@ static char *flask_show_irq_sid (int irq)
 
 static int flask_map_domain_pirq (struct domain *d, int irq, void *data)
 {
-    u32 sid;
+    u32 sid, dsid;
     int rc = -EPERM;
     struct msi_info *msi = data;
-
-    struct domain_security_struct *ssec, *tsec;
     struct avc_audit_data ad;
 
-    rc = domain_has_perm(current->domain, d, SECCLASS_RESOURCE, RESOURCE__ADD);
+    rc = current_has_perm(d, SECCLASS_RESOURCE, RESOURCE__ADD);
 
     if ( rc )
         return rc;
@@ -783,14 +794,13 @@ static int flask_map_domain_pirq (struct domain *d, int irq, void *data)
     if ( rc )
         return rc;
 
-    ssec = current->domain->ssid;
-    tsec = d->ssid;
+    dsid = domain_sid(d);
 
-    rc = avc_has_perm(ssec->sid, sid, SECCLASS_RESOURCE, RESOURCE__ADD_IRQ, &ad);
+    rc = avc_current_has_perm(sid, SECCLASS_RESOURCE, RESOURCE__ADD_IRQ, &ad);
     if ( rc )
         return rc;
 
-    rc = avc_has_perm(tsec->sid, sid, SECCLASS_RESOURCE, RESOURCE__USE, &ad);
+    rc = avc_has_perm(dsid, sid, SECCLASS_RESOURCE, RESOURCE__USE, &ad);
     return rc;
 }
 
@@ -798,16 +808,12 @@ static int flask_unmap_domain_pirq (struct domain *d, int irq)
 {
     u32 sid;
     int rc = -EPERM;
-
-    struct domain_security_struct *ssec;
     struct avc_audit_data ad;
 
-    rc = domain_has_perm(current->domain, d, SECCLASS_RESOURCE, RESOURCE__REMOVE);
+    rc = current_has_perm(d, SECCLASS_RESOURCE, RESOURCE__REMOVE);
     if ( rc )
         return rc;
 
-    ssec = current->domain->ssid;
-
     if ( irq >= nr_irqs_gsi ) {
         /* TODO support for MSI here */
         return 0;
@@ -817,19 +823,19 @@ static int flask_unmap_domain_pirq (struct domain *d, int irq)
     if ( rc )
         return rc;
 
-    rc = avc_has_perm(ssec->sid, sid, SECCLASS_RESOURCE, RESOURCE__REMOVE_IRQ, &ad);
+    rc = avc_current_has_perm(sid, SECCLASS_RESOURCE, RESOURCE__REMOVE_IRQ, &ad);
     return rc;
 }
 
 static int flask_irq_permission (struct domain *d, int pirq, uint8_t access)
 {
     /* the PIRQ number is not useful; real IRQ is checked during mapping */
-    return domain_has_perm(current->domain, d, SECCLASS_RESOURCE,
-                           resource_to_perm(access));
+    return current_has_perm(d, SECCLASS_RESOURCE, resource_to_perm(access));
 }
 
 struct iomem_has_perm_data {
-    struct domain_security_struct *ssec, *tsec;
+    u32 ssid;
+    u32 dsid;
     u32 perm;
 };
 
@@ -843,12 +849,12 @@ static int _iomem_has_perm(void *v, u32 sid, unsigned long start, unsigned long
     ad.range.start = start;
     ad.range.end = end;
 
-    rc = avc_has_perm(data->ssec->sid, sid, SECCLASS_RESOURCE, data->perm, &ad);
+    rc = avc_has_perm(data->ssid, sid, SECCLASS_RESOURCE, data->perm, &ad);
 
     if ( rc )
         return rc;
 
-    return avc_has_perm(data->tsec->sid, sid, SECCLASS_RESOURCE, RESOURCE__USE, &ad);
+    return avc_has_perm(data->dsid, sid, SECCLASS_RESOURCE, RESOURCE__USE, &ad);
 }
 
 static int flask_iomem_permission(struct domain *d, uint64_t start, uint64_t end, uint8_t access)
@@ -856,7 +862,7 @@ static int flask_iomem_permission(struct domain *d, uint64_t start, uint64_t end
     struct iomem_has_perm_data data;
     int rc;
 
-    rc = domain_has_perm(current->domain, d, SECCLASS_RESOURCE,
+    rc = current_has_perm(d, SECCLASS_RESOURCE,
                          resource_to_perm(access));
     if ( rc )
         return rc;
@@ -866,18 +872,17 @@ static int flask_iomem_permission(struct domain *d, uint64_t start, uint64_t end
     else
         data.perm = RESOURCE__REMOVE_IOMEM;
 
-    data.ssec = current->domain->ssid;
-    data.tsec = d->ssid;
+    data.ssid = domain_sid(current->domain);
+    data.dsid = domain_sid(d);
 
     return security_iterate_iomem_sids(start, end, _iomem_has_perm, &data);
 }
 
 static int flask_pci_config_permission(struct domain *d, uint32_t machine_bdf, uint16_t start, uint16_t end, uint8_t access)
 {
-    u32 rsid;
+    u32 dsid, rsid;
     int rc = -EPERM;
     struct avc_audit_data ad;
-    struct domain_security_struct *ssec;
     u32 perm = RESOURCE__USE;
 
     rc = security_device_sid(machine_bdf, &rsid);
@@ -890,33 +895,24 @@ static int flask_pci_config_permission(struct domain *d, uint32_t machine_bdf, u
 
     AVC_AUDIT_DATA_INIT(&ad, DEV);
     ad.device = (unsigned long) machine_bdf;
-    ssec = d->ssid;
-    return avc_has_perm(ssec->sid, rsid, SECCLASS_RESOURCE, perm, &ad);
+    dsid = domain_sid(d);
+    return avc_has_perm(dsid, rsid, SECCLASS_RESOURCE, perm, &ad);
 
 }
 
 static int flask_resource_plug_core(void)
 {
-    struct domain_security_struct *ssec;
-
-    ssec = current->domain->ssid;
-    return avc_has_perm(ssec->sid, SECINITSID_DOMXEN, SECCLASS_RESOURCE, RESOURCE__PLUG, NULL);
+    return avc_current_has_perm(SECINITSID_DOMXEN, SECCLASS_RESOURCE, RESOURCE__PLUG, NULL);
 }
 
 static int flask_resource_unplug_core(void)
 {
-    struct domain_security_struct *ssec;
-
-    ssec = current->domain->ssid;
-    return avc_has_perm(ssec->sid, SECINITSID_DOMXEN, SECCLASS_RESOURCE, RESOURCE__UNPLUG, NULL);
+    return avc_current_has_perm(SECINITSID_DOMXEN, SECCLASS_RESOURCE, RESOURCE__UNPLUG, NULL);
 }
 
 static int flask_resource_use_core(void)
 {
-    struct domain_security_struct *ssec;
-
-    ssec = current->domain->ssid;
-    return avc_has_perm(ssec->sid, SECINITSID_DOMXEN, SECCLASS_RESOURCE, RESOURCE__USE, NULL);
+    return avc_current_has_perm(SECINITSID_DOMXEN, SECCLASS_RESOURCE, RESOURCE__USE, NULL);
 }
 
 static int flask_resource_plug_pci(uint32_t machine_bdf)
@@ -924,7 +920,6 @@ static int flask_resource_plug_pci(uint32_t machine_bdf)
     u32 rsid;
     int rc = -EPERM;
     struct avc_audit_data ad;
-    struct domain_security_struct *ssec;
 
     rc = security_device_sid(machine_bdf, &rsid);
     if ( rc )
@@ -932,8 +927,7 @@ static int flask_resource_plug_pci(uint32_t machine_bdf)
 
     AVC_AUDIT_DATA_INIT(&ad, DEV);
     ad.device = (unsigned long) machine_bdf;
-    ssec = current->domain->ssid;
-    return avc_has_perm(ssec->sid, rsid, SECCLASS_RESOURCE, RESOURCE__PLUG, &ad);
+    return avc_current_has_perm(rsid, SECCLASS_RESOURCE, RESOURCE__PLUG, &ad);
 }
 
 static int flask_resource_unplug_pci(uint32_t machine_bdf)
@@ -941,7 +935,6 @@ static int flask_resource_unplug_pci(uint32_t machine_bdf)
     u32 rsid;
     int rc = -EPERM;
     struct avc_audit_data ad;
-    struct domain_security_struct *ssec;
 
     rc = security_device_sid(machine_bdf, &rsid);
     if ( rc )
@@ -949,8 +942,7 @@ static int flask_resource_unplug_pci(uint32_t machine_bdf)
 
     AVC_AUDIT_DATA_INIT(&ad, DEV);
     ad.device = (unsigned long) machine_bdf;
-    ssec = current->domain->ssid;
-    return avc_has_perm(ssec->sid, rsid, SECCLASS_RESOURCE, RESOURCE__UNPLUG, &ad);
+    return avc_current_has_perm(rsid, SECCLASS_RESOURCE, RESOURCE__UNPLUG, &ad);
 }
 
 static int flask_resource_setup_pci(uint32_t machine_bdf)
@@ -958,7 +950,6 @@ static int flask_resource_setup_pci(uint32_t machine_bdf)
     u32 rsid;
     int rc = -EPERM;
     struct avc_audit_data ad;
-    struct domain_security_struct *ssec;
 
     rc = security_device_sid(machine_bdf, &rsid);
     if ( rc )
@@ -966,8 +957,7 @@ static int flask_resource_setup_pci(uint32_t machine_bdf)
 
     AVC_AUDIT_DATA_INIT(&ad, DEV);
     ad.device = (unsigned long) machine_bdf;
-    ssec = current->domain->ssid;
-    return avc_has_perm(ssec->sid, rsid, SECCLASS_RESOURCE, RESOURCE__SETUP, &ad);
+    return avc_current_has_perm(rsid, SECCLASS_RESOURCE, RESOURCE__SETUP, &ad);
 }
 
 static int flask_resource_setup_gsi(int gsi)
@@ -975,22 +965,17 @@ static int flask_resource_setup_gsi(int gsi)
     u32 rsid;
     int rc = -EPERM;
     struct avc_audit_data ad;
-    struct domain_security_struct *ssec;
 
     rc = get_irq_sid(gsi, &rsid, &ad);
     if ( rc )
         return rc;
 
-    ssec = current->domain->ssid;
-    return avc_has_perm(ssec->sid, rsid, SECCLASS_RESOURCE, RESOURCE__SETUP, &ad);
+    return avc_current_has_perm(rsid, SECCLASS_RESOURCE, RESOURCE__SETUP, &ad);
 }
 
 static int flask_resource_setup_misc(void)
 {
-    struct domain_security_struct *ssec;
-
-    ssec = current->domain->ssid;
-    return avc_has_perm(ssec->sid, SECINITSID_XEN, SECCLASS_RESOURCE, RESOURCE__SETUP, NULL);
+    return avc_current_has_perm(SECINITSID_XEN, SECCLASS_RESOURCE, RESOURCE__SETUP, NULL);
 }
 
 static inline int flask_page_offline(uint32_t cmd)
@@ -1058,11 +1043,12 @@ static int flask_shadow_control(struct domain *d, uint32_t op)
         return -EPERM;
     }
 
-    return domain_has_perm(current->domain, d, SECCLASS_SHADOW, perm);
+    return current_has_perm(d, SECCLASS_SHADOW, perm);
 }
 
 struct ioport_has_perm_data {
-    struct domain_security_struct *ssec, *tsec;
+    u32 ssid;
+    u32 dsid;
     u32 perm;
 };
 
@@ -1076,12 +1062,12 @@ static int _ioport_has_perm(void *v, u32 sid, unsigned long start, unsigned long
     ad.range.start = start;
     ad.range.end = end;
 
-    rc = avc_has_perm(data->ssec->sid, sid, SECCLASS_RESOURCE, data->perm, &ad);
+    rc = avc_has_perm(data->ssid, sid, SECCLASS_RESOURCE, data->perm, &ad);
 
     if ( rc )
         return rc;
 
-    return avc_has_perm(data->tsec->sid, sid, SECCLASS_RESOURCE, RESOURCE__USE, &ad);
+    return avc_has_perm(data->dsid, sid, SECCLASS_RESOURCE, RESOURCE__USE, &ad);
 }
 
 
@@ -1090,7 +1076,7 @@ static int flask_ioport_permission(struct domain *d, uint32_t start, uint32_t en
     int rc;
     struct ioport_has_perm_data data;
 
-    rc = domain_has_perm(current->domain, d, SECCLASS_RESOURCE,
+    rc = current_has_perm(d, SECCLASS_RESOURCE,
                          resource_to_perm(access));
 
     if ( rc )
@@ -1101,8 +1087,8 @@ static int flask_ioport_permission(struct domain *d, uint32_t start, uint32_t en
     else
         data.perm = RESOURCE__REMOVE_IOPORT;
 
-    data.ssec = current->domain->ssid;
-    data.tsec = d->ssid;
+    data.ssid = domain_sid(current->domain);
+    data.dsid = domain_sid(d);
 
     return security_iterate_ioport_sids(start, end, _ioport_has_perm, &data);
 }
@@ -1111,46 +1097,42 @@ static int flask_getpageframeinfo(struct page_info *page)
 {
     int rc = 0;
     u32 tsid;
-    struct domain_security_struct *dsec;
-
-    dsec = current->domain->ssid;
 
-    rc = get_page_sid(page, &tsid);
+    rc = get_page_sid(current->domain, page, &tsid);
     if ( rc )
         return rc;
 
-    return avc_has_perm(dsec->sid, tsid, SECCLASS_MMU, MMU__PAGEINFO, NULL);    
+    return avc_current_has_perm(tsid, SECCLASS_MMU, MMU__PAGEINFO, NULL);
 }
 
 static int flask_getpageframeinfo_domain(struct domain *d)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_MMU, MMU__PAGEINFO);
+    return current_has_perm(d, SECCLASS_MMU, MMU__PAGEINFO);
 }
 
 static int flask_set_cpuid(struct domain *d)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN2, DOMAIN2__SET_CPUID);
+    return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__SET_CPUID);
 }
 
 static int flask_gettscinfo(struct domain *d)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN2, DOMAIN2__GETTSC);
+    return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__GETTSC);
 }
 
 static int flask_settscinfo(struct domain *d)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN2, DOMAIN2__SETTSC);
+    return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__SETTSC);
 }
 
 static int flask_getmemlist(struct domain *d)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_MMU, MMU__PAGELIST);
+    return current_has_perm(d, SECCLASS_MMU, MMU__PAGELIST);
 }
 
 static int flask_hypercall_init(struct domain *d)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN,
-                           DOMAIN__HYPERCALL);
+    return current_has_perm(d, SECCLASS_DOMAIN, DOMAIN__HYPERCALL);
 }
 
 static int flask_hvmcontext(struct domain *d, uint32_t cmd)
@@ -1173,7 +1155,7 @@ static int flask_hvmcontext(struct domain *d, uint32_t cmd)
         return -EPERM;
     }
 
-    return domain_has_perm(current->domain, d, SECCLASS_HVM, perm);
+    return current_has_perm(d, SECCLASS_HVM, perm);
 }
 
 static int flask_address_size(struct domain *d, uint32_t cmd)
@@ -1192,7 +1174,7 @@ static int flask_address_size(struct domain *d, uint32_t cmd)
         return -EPERM;
     }
 
-    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN, perm);
+    return current_has_perm(d, SECCLASS_DOMAIN, perm);
 }
 
 static int flask_hvm_param(struct domain *d, unsigned long op)
@@ -1214,47 +1196,47 @@ static int flask_hvm_param(struct domain *d, unsigned long op)
         perm = HVM__HVMCTL;
     }
 
-    return domain_has_perm(current->domain, d, SECCLASS_HVM, perm);
+    return current_has_perm(d, SECCLASS_HVM, perm);
 }
 
 static int flask_hvm_set_pci_intx_level(struct domain *d)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_HVM, HVM__PCILEVEL);
+    return current_has_perm(d, SECCLASS_HVM, HVM__PCILEVEL);
 }
 
 static int flask_hvm_set_isa_irq_level(struct domain *d)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_HVM, HVM__IRQLEVEL);
+    return current_has_perm(d, SECCLASS_HVM, HVM__IRQLEVEL);
 }
 
 static int flask_hvm_set_pci_link_route(struct domain *d)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_HVM, HVM__PCIROUTE);
+    return current_has_perm(d, SECCLASS_HVM, HVM__PCIROUTE);
 }
 
 static int flask_mem_event_setup(struct domain *d)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_HVM, HVM__MEM_EVENT);
+    return current_has_perm(d, SECCLASS_HVM, HVM__MEM_EVENT);
 }
 
 static int flask_mem_event_control(struct domain *d, int mode, int op)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_HVM, HVM__MEM_EVENT);
+    return current_has_perm(d, SECCLASS_HVM, HVM__MEM_EVENT);
 }
 
 static int flask_mem_event_op(struct domain *d, int op)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_HVM, HVM__MEM_EVENT);
+    return current_has_perm(d, SECCLASS_HVM, HVM__MEM_EVENT);
 }
 
 static int flask_mem_sharing(struct domain *d)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_HVM, HVM__MEM_SHARING);
+    return current_has_perm(d, SECCLASS_HVM, HVM__MEM_SHARING);
 }
 
 static int flask_mem_sharing_op(struct domain *d, struct domain *cd, int op)
 {
-    int rc = domain_has_perm(current->domain, cd, SECCLASS_HVM, HVM__MEM_SHARING);
+    int rc = current_has_perm(cd, SECCLASS_HVM, HVM__MEM_SHARING);
     if ( rc )
         return rc;
     return domain_has_perm(d, cd, SECCLASS_HVM, HVM__SHARE_MEM);
@@ -1262,7 +1244,7 @@ static int flask_mem_sharing_op(struct domain *d, struct domain *cd, int op)
 
 static int flask_audit_p2m(struct domain *d)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_HVM, HVM__AUDIT_P2M);
+    return current_has_perm(d, SECCLASS_HVM, HVM__AUDIT_P2M);
 }
 
 static int flask_apic(struct domain *d, int cmd)
@@ -1323,11 +1305,7 @@ static int flask_physinfo(void)
 
 static int flask_platform_quirk(uint32_t quirk)
 {
-    struct domain_security_struct *dsec;
-    dsec = current->domain->ssid;
-
-    return avc_has_perm(dsec->sid, SECINITSID_XEN, SECCLASS_XEN, 
-                        XEN__QUIRK, NULL);
+    return avc_current_has_perm(SECINITSID_XEN, SECCLASS_XEN, XEN__QUIRK, NULL);
 }
 
 static int flask_firmware_info(void)
@@ -1357,16 +1335,12 @@ static int flask_getidletime(void)
 
 static int flask_machine_memory_map(void)
 {
-    struct domain_security_struct *dsec;
-    dsec = current->domain->ssid;
-
-    return avc_has_perm(dsec->sid, SECINITSID_XEN, SECCLASS_MMU, 
-                        MMU__MEMORYMAP, NULL);
+    return avc_current_has_perm(SECINITSID_XEN, SECCLASS_MMU, MMU__MEMORYMAP, NULL);
 }
 
 static int flask_domain_memory_map(struct domain *d)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_MMU, MMU__MEMORYMAP);
+    return current_has_perm(d, SECCLASS_MMU, MMU__MEMORYMAP);
 }
 
 static int flask_mmu_normal_update(struct domain *d, struct domain *t,
@@ -1375,8 +1349,7 @@ static int flask_mmu_normal_update(struct domain *d, struct domain *t,
     int rc = 0;
     u32 map_perms = MMU__MAP_READ;
     unsigned long fgfn, fmfn;
-    struct domain_security_struct *dsec;
-    u32 fsid;
+    u32 dsid, fsid;
     struct avc_audit_data ad;
     p2m_type_t p2mt;
 
@@ -1388,7 +1361,7 @@ static int flask_mmu_normal_update(struct domain *d, struct domain *t,
     if ( !(l1e_get_flags(l1e_from_intpte(fpte)) & _PAGE_PRESENT) )
         return 0;
 
-    dsec = d->ssid;
+    dsid = domain_sid(d);
 
     if ( l1e_get_flags(l1e_from_intpte(fpte)) & _PAGE_RW )
         map_perms |= MMU__MAP_WRITE;
@@ -1402,37 +1375,35 @@ static int flask_mmu_normal_update(struct domain *d, struct domain *t,
     ad.memory.pte = fpte;
     ad.memory.mfn = fmfn;
 
-    rc = get_mfn_sid(fmfn, &fsid);
+    rc = get_mfn_sid(d, fmfn, &fsid);
 
     put_gfn(f, fgfn);
 
     if ( rc )
         return rc;
 
-    return avc_has_perm(dsec->sid, fsid, SECCLASS_MMU, map_perms, &ad);
+    return avc_has_perm(dsid, fsid, SECCLASS_MMU, map_perms, &ad);
 }
 
 static int flask_mmu_machphys_update(struct domain *d, unsigned long mfn)
 {
     int rc = 0;
-    u32 psid;
-    struct domain_security_struct *dsec;
-    dsec = d->ssid;
+    u32 dsid, psid;
+    dsid = domain_sid(d);
 
-    rc = get_mfn_sid(mfn, &psid);
+    rc = get_mfn_sid(d, mfn, &psid);
     if ( rc )
         return rc;
 
-    return avc_has_perm(dsec->sid, psid, SECCLASS_MMU, MMU__UPDATEMP, NULL);
+    return avc_has_perm(dsid, psid, SECCLASS_MMU, MMU__UPDATEMP, NULL);
 }
 
 static int flask_update_va_mapping(struct domain *d, struct domain *f,
                                    l1_pgentry_t pte)
 {
     int rc = 0;
-    u32 psid;
+    u32 dsid, psid;
     u32 map_perms = MMU__MAP_READ;
-    struct domain_security_struct *dsec;
     unsigned long fgfn, fmfn;
     p2m_type_t p2mt;
 
@@ -1442,16 +1413,16 @@ static int flask_update_va_mapping(struct domain *d, struct domain *f,
     if ( l1e_get_flags(pte) & _PAGE_RW )
         map_perms |= MMU__MAP_WRITE;
 
-    dsec = d->ssid;
+    dsid = domain_sid(d);
     fgfn = l1e_get_pfn(pte);
     fmfn = mfn_x(get_gfn_query(f, fgfn, &p2mt));
-    rc = get_mfn_sid(fmfn, &psid);
+    rc = get_mfn_sid(d, fmfn, &psid);
     put_gfn(f, fgfn);
 
     if ( rc )
         return rc;
 
-    return avc_has_perm(dsec->sid, psid, SECCLASS_MMU, map_perms, NULL);
+    return avc_has_perm(dsid, psid, SECCLASS_MMU, map_perms, NULL);
 }
 
 static int flask_add_to_physmap(struct domain *d1, struct domain *d2)
@@ -1466,43 +1437,40 @@ static int flask_remove_from_physmap(struct domain *d1, struct domain *d2)
 
 static int flask_sendtrigger(struct domain *d)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN, DOMAIN__TRIGGER);
+    return current_has_perm(d, SECCLASS_DOMAIN, DOMAIN__TRIGGER);
 }
 
 static int flask_get_device_group(uint32_t machine_bdf)
 {
     u32 rsid;
     int rc = -EPERM;
-    struct domain_security_struct *ssec = current->domain->ssid;
 
     rc = security_device_sid(machine_bdf, &rsid);
     if ( rc )
         return rc;
 
-    return avc_has_perm(ssec->sid, rsid, SECCLASS_RESOURCE, RESOURCE__STAT_DEVICE, NULL);
+    return avc_current_has_perm(rsid, SECCLASS_RESOURCE, RESOURCE__STAT_DEVICE, NULL);
 }
 
 static int flask_test_assign_device(uint32_t machine_bdf)
 {
     u32 rsid;
     int rc = -EPERM;
-    struct domain_security_struct *ssec = current->domain->ssid;
 
     rc = security_device_sid(machine_bdf, &rsid);
     if ( rc )
         return rc;
 
-    return rc = avc_has_perm(ssec->sid, rsid, SECCLASS_RESOURCE, RESOURCE__STAT_DEVICE, NULL);
+    return avc_current_has_perm(rsid, SECCLASS_RESOURCE, RESOURCE__STAT_DEVICE, NULL);
 }
 
 static int flask_assign_device(struct domain *d, uint32_t machine_bdf)
 {
-    u32 rsid;
+    u32 dsid, rsid;
     int rc = -EPERM;
-    struct domain_security_struct *ssec, *tsec;
     struct avc_audit_data ad;
 
-    rc = domain_has_perm(current->domain, d, SECCLASS_RESOURCE, RESOURCE__ADD);
+    rc = current_has_perm(d, SECCLASS_RESOURCE, RESOURCE__ADD);
     if ( rc )
         return rc;
 
@@ -1512,22 +1480,20 @@ static int flask_assign_device(struct domain *d, uint32_t machine_bdf)
 
     AVC_AUDIT_DATA_INIT(&ad, DEV);
     ad.device = (unsigned long) machine_bdf;
-    ssec = current->domain->ssid;
-    rc = avc_has_perm(ssec->sid, rsid, SECCLASS_RESOURCE, RESOURCE__ADD_DEVICE, &ad);
+    rc = avc_current_has_perm(rsid, SECCLASS_RESOURCE, RESOURCE__ADD_DEVICE, &ad);
     if ( rc )
         return rc;
 
-    tsec = d->ssid;
-    return avc_has_perm(tsec->sid, rsid, SECCLASS_RESOURCE, RESOURCE__USE, &ad);
+    dsid = domain_sid(d);
+    return avc_has_perm(dsid, rsid, SECCLASS_RESOURCE, RESOURCE__USE, &ad);
 }
 
 static int flask_deassign_device(struct domain *d, uint32_t machine_bdf)
 {
     u32 rsid;
     int rc = -EPERM;
-    struct domain_security_struct *ssec = current->domain->ssid;
 
-    rc = domain_has_perm(current->domain, d, SECCLASS_RESOURCE, RESOURCE__REMOVE);
+    rc = current_has_perm(d, SECCLASS_RESOURCE, RESOURCE__REMOVE);
     if ( rc )
         return rc;
 
@@ -1535,18 +1501,17 @@ static int flask_deassign_device(struct domain *d, uint32_t machine_bdf)
     if ( rc )
         return rc;
 
-    return rc = avc_has_perm(ssec->sid, rsid, SECCLASS_RESOURCE, RESOURCE__REMOVE_DEVICE, NULL);
+    return avc_current_has_perm(rsid, SECCLASS_RESOURCE, RESOURCE__REMOVE_DEVICE, NULL);
 }
 
 static int flask_bind_pt_irq (struct domain *d, struct xen_domctl_bind_pt_irq *bind)
 {
-    u32 rsid;
+    u32 dsid, rsid;
     int rc = -EPERM;
     int irq;
-    struct domain_security_struct *ssec, *tsec;
     struct avc_audit_data ad;
 
-    rc = domain_has_perm(current->domain, d, SECCLASS_RESOURCE, RESOURCE__ADD);
+    rc = current_has_perm(d, SECCLASS_RESOURCE, RESOURCE__ADD);
     if ( rc )
         return rc;
 
@@ -1556,23 +1521,22 @@ static int flask_bind_pt_irq (struct domain *d, struct xen_domctl_bind_pt_irq *b
     if ( rc )
         return rc;
 
-    ssec = current->domain->ssid;
-    rc = avc_has_perm(ssec->sid, rsid, SECCLASS_HVM, HVM__BIND_IRQ, &ad);
+    rc = avc_current_has_perm(rsid, SECCLASS_HVM, HVM__BIND_IRQ, &ad);
     if ( rc )
         return rc;
 
-    tsec = d->ssid;
-    return avc_has_perm(tsec->sid, rsid, SECCLASS_RESOURCE, RESOURCE__USE, &ad);
+    dsid = domain_sid(d);
+    return avc_has_perm(dsid, rsid, SECCLASS_RESOURCE, RESOURCE__USE, &ad);
 }
 
 static int flask_unbind_pt_irq (struct domain *d, struct xen_domctl_bind_pt_irq *bind)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_RESOURCE, RESOURCE__REMOVE);
+    return current_has_perm(d, SECCLASS_RESOURCE, RESOURCE__REMOVE);
 }
 
 static int flask_pin_mem_cacheattr (struct domain *d)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_HVM, HVM__CACHEATTR);
+    return current_has_perm(d, SECCLASS_HVM, HVM__CACHEATTR);
 }
 
 static int flask_ext_vcpucontext (struct domain *d, uint32_t cmd)
@@ -1591,7 +1555,7 @@ static int flask_ext_vcpucontext (struct domain *d, uint32_t cmd)
         return -EPERM;
     }
 
-    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN, perm);
+    return current_has_perm(d, SECCLASS_DOMAIN, perm);
 }
 
 static int flask_vcpuextstate (struct domain *d, uint32_t cmd)
@@ -1610,7 +1574,7 @@ static int flask_vcpuextstate (struct domain *d, uint32_t cmd)
             return -EPERM;
     }
 
-    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN, perm);
+    return current_has_perm(d, SECCLASS_DOMAIN, perm);
 }
 #endif
 
diff --git a/xen/xsm/flask/include/objsec.h b/xen/xsm/flask/include/objsec.h
index 4ff52be..6595dc3 100644
--- a/xen/xsm/flask/include/objsec.h
+++ b/xen/xsm/flask/include/objsec.h
@@ -19,6 +19,8 @@
 
 struct domain_security_struct {
     u32 sid;               /* current SID */
+    u32 self_sid;          /* SID for target when operating on DOMID_SELF */
+    u32 target_sid;        /* SID for device model target domain */
 };
 
 struct evtchn_security_struct {
-- 
1.7.11.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 14:32:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:32:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOMT-0002mY-8Z; Mon, 06 Aug 2012 14:32:49 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1SyOMR-0002ig-Qs
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 14:32:48 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-8.tower-27.messagelabs.com!1344263561!11031784!1
X-Originating-IP: [63.239.67.9]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29337 invoked from network); 6 Aug 2012 14:32:41 -0000
Received: from emvm-gh1-uea08.nsa.gov (HELO nsa.gov) (63.239.67.9)
	by server-8.tower-27.messagelabs.com with SMTP;
	6 Aug 2012 14:32:41 -0000
X-TM-IMSS-Message-ID: <7b0757c80002b22f@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov ([63.239.67.9])
	with ESMTP (TREND IMSS SMTP Service 7.1) id 7b0757c80002b22f ;
	Mon, 6 Aug 2012 10:32:50 -0400
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id q76EWZ9X011112; 
	Mon, 6 Aug 2012 10:32:39 -0400
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: xen-devel@lists.xen.org
Date: Mon,  6 Aug 2012 10:32:25 -0400
Message-Id: <1344263550-3941-14-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.2
In-Reply-To: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
References: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Subject: [Xen-devel] [PATCH 13/18] tmem: Add access control check
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 tools/flask/policy/policy/flask/access_vectors |  1 +
 xen/common/tmem.c                              | 10 +++++-----
 xen/include/xen/tmem_xen.h                     |  5 -----
 xen/include/xsm/dummy.h                        |  7 +++++++
 xen/include/xsm/xsm.h                          |  6 ++++++
 xen/xsm/dummy.c                                |  1 +
 xen/xsm/flask/hooks.c                          |  6 ++++++
 xen/xsm/flask/include/av_perm_to_string.h      |  1 +
 xen/xsm/flask/include/av_permissions.h         |  1 +
 9 files changed, 28 insertions(+), 10 deletions(-)

diff --git a/tools/flask/policy/policy/flask/access_vectors b/tools/flask/policy/policy/flask/access_vectors
index 28b8ada..2986b40 100644
--- a/tools/flask/policy/policy/flask/access_vectors
+++ b/tools/flask/policy/policy/flask/access_vectors
@@ -35,6 +35,7 @@ class xen
 	lockprof
 	cpupool_op
 	sched_op
+	tmem_op
 }
 
 class domain
diff --git a/xen/common/tmem.c b/xen/common/tmem.c
index dd276df..164098f 100644
--- a/xen/common/tmem.c
+++ b/xen/common/tmem.c
@@ -23,6 +23,7 @@
 #include <xen/radix-tree.h>
 #include <xen/list.h>
 #include <xen/init.h>
+#include <xsm/xsm.h>
 
 #define EXPORT /* indicates code other modules are dependent upon */
 #define FORWARD
@@ -2539,11 +2540,10 @@ static NOINLINE int do_tmem_control(struct tmem_op *op)
     uint32_t subop = op->u.ctrl.subop;
     OID *oidp = (OID *)(&op->u.ctrl.oid[0]);
 
-    if (!tmh_current_is_privileged())
-    {
-        /* don't fail... mystery: sometimes dom0 fails here */
-        /* return -EPERM; */
-    }
+    ret = xsm_tmem_control(subop);
+    if ( ret )
+        return ret;
+
     switch(subop)
     {
     case TMEMC_THAW:
diff --git a/xen/include/xen/tmem_xen.h b/xen/include/xen/tmem_xen.h
index 4a35760..f248128 100644
--- a/xen/include/xen/tmem_xen.h
+++ b/xen/include/xen/tmem_xen.h
@@ -344,11 +344,6 @@ static inline bool_t tmh_set_client_from_id(
     return rc;
 }
 
-static inline bool_t tmh_current_is_privileged(void)
-{
-    return IS_PRIV(current->domain);
-}
-
 static inline uint8_t tmh_get_first_byte(pfp_t *pfp)
 {
     void *p = __map_domain_page(pfp);
diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
index c71c08b..d796a33 100644
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -495,6 +495,13 @@ static XSM_DEFAULT(int, sched_op) (void)
     return 0;
 }
 
+static XSM_DEFAULT(int, tmem_control) (uint32_t subcmd)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
 static XSM_DEFAULT(long, __do_xsm_op)(XEN_GUEST_HANDLE(xsm_op_t) op)
 {
     return -ENOSYS;
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index b473b54..ee613a7 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -137,6 +137,7 @@ struct xsm_operations {
     int (*lockprof)(void);
     int (*cpupool_op)(void);
     int (*sched_op)(void);
+    int (*tmem_control)(uint32_t subop);
 
     long (*__do_xsm_op) (XEN_GUEST_HANDLE(xsm_op_t) op);
 
@@ -606,6 +607,11 @@ static inline int xsm_sched_op(void)
     return xsm_call(sched_op());
 }
 
+static inline int xsm_tmem_control(uint32_t subop)
+{
+    return xsm_call(tmem_control(subop));
+}
+
 static inline long xsm___do_xsm_op (XEN_GUEST_HANDLE(xsm_op_t) op)
 {
     return xsm_ops->__do_xsm_op(op);
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index 09935d8..aebe333 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -119,6 +119,7 @@ void xsm_fixup_ops (struct xsm_operations *ops)
     set_to_dummy_if_null(ops, lockprof);
     set_to_dummy_if_null(ops, cpupool_op);
     set_to_dummy_if_null(ops, sched_op);
+    set_to_dummy_if_null(ops, tmem_control);
 
     set_to_dummy_if_null(ops, __do_xsm_op);
 
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 4f71604..be5c3ad 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -1022,6 +1022,11 @@ static inline int flask_sched_op(void)
     return domain_has_xen(current->domain, XEN__SCHED_OP);
 }
 
+static inline int flask_tmem_control(uint32_t subcmd)
+{
+    return domain_has_xen(current->domain, XEN__TMEM_OP);
+}
+
 static int flask_perfcontrol(void)
 {
     return domain_has_xen(current->domain, XEN__PERFCONTROL);
@@ -1698,6 +1703,7 @@ static struct xsm_operations flask_ops = {
     .lockprof = flask_lockprof,
     .cpupool_op = flask_cpupool_op,
     .sched_op = flask_sched_op,
+    .tmem_control = flask_tmem_control,
 
     .__do_xsm_op = do_flask_op,
 
diff --git a/xen/xsm/flask/include/av_perm_to_string.h b/xen/xsm/flask/include/av_perm_to_string.h
index 997f098..5d5a45a 100644
--- a/xen/xsm/flask/include/av_perm_to_string.h
+++ b/xen/xsm/flask/include/av_perm_to_string.h
@@ -29,6 +29,7 @@
    S_(SECCLASS_XEN, XEN__LOCKPROF, "lockprof")
    S_(SECCLASS_XEN, XEN__CPUPOOL_OP, "cpupool_op")
    S_(SECCLASS_XEN, XEN__SCHED_OP, "sched_op")
+   S_(SECCLASS_XEN, XEN__TMEM_OP, "tmem_op")
    S_(SECCLASS_DOMAIN, DOMAIN__SETVCPUCONTEXT, "setvcpucontext")
    S_(SECCLASS_DOMAIN, DOMAIN__PAUSE, "pause")
    S_(SECCLASS_DOMAIN, DOMAIN__UNPAUSE, "unpause")
diff --git a/xen/xsm/flask/include/av_permissions.h b/xen/xsm/flask/include/av_permissions.h
index 8596a55..e6d6a6d 100644
--- a/xen/xsm/flask/include/av_permissions.h
+++ b/xen/xsm/flask/include/av_permissions.h
@@ -29,6 +29,7 @@
 #define XEN__LOCKPROF                             0x08000000UL
 #define XEN__CPUPOOL_OP                           0x10000000UL
 #define XEN__SCHED_OP                             0x20000000UL
+#define XEN__TMEM_OP                              0x40000000UL
 
 #define DOMAIN__SETVCPUCONTEXT                    0x00000001UL
 #define DOMAIN__PAUSE                             0x00000002UL
-- 
1.7.11.2



From xen-devel-bounces@lists.xen.org Mon Aug 06 14:32:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:32:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOMS-0002lu-Gw; Mon, 06 Aug 2012 14:32:48 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1SyOMQ-0002jz-Gt
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 14:32:47 +0000
Received: from [85.158.138.51:27949] by server-5.bemta-3.messagelabs.com id
	9D/FC-27557-D85DF105; Mon, 06 Aug 2012 14:32:45 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-12.tower-174.messagelabs.com!1344263561!21123984!1
X-Originating-IP: [63.239.67.10]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21359 invoked from network); 6 Aug 2012 14:32:42 -0000
Received: from emvm-gh1-uea09.nsa.gov (HELO nsa.gov) (63.239.67.10)
	by server-12.tower-174.messagelabs.com with SMTP;
	6 Aug 2012 14:32:42 -0000
X-TM-IMSS-Message-ID: <7b0728eb0002c54f@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov
	([63.239.67.10]) with ESMTP (TREND IMSS SMTP Service 7.1) id
	7b0728eb0002c54f ; Mon, 6 Aug 2012 10:33:11 -0400
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id q76EWZ9Z011112; 
	Mon, 6 Aug 2012 10:32:40 -0400
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: xen-devel@lists.xen.org
Date: Mon,  6 Aug 2012 10:32:27 -0400
Message-Id: <1344263550-3941-16-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.2
In-Reply-To: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
References: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Subject: [Xen-devel] [PATCH 15/18] xsm/flask: add distinct SIDs for
	self/target access
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Because the FLASK XSM module no longer checks IS_PRIV for remote domain
accesses covered by XSM permissions, domains can now perform memory
management and other operations on any domain of the same type. While
this could be prevented by creating only one domain per type, that
restriction would significantly limit the flexibility of the type
system.

This patch introduces a domain type transition to represent a domain
operating on itself. The example policy demonstrates this by appending
_self to a domain type's name when it is declared, producing the type
used for reflexive operations. AVCs for a domain of type domU_t will
look like the following:

scontext=system_u:system_r:domU_t tcontext=system_u:system_r:domU_t_self

This change also allows policy to distinguish between event channels a
domain creates to itself and event channels created between domains of
the same type.
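As a concrete illustration, the transition this patch adds to xen.if is declared along these lines (shown here expanded for domU_t; declare_domain() generates the equivalent rules for each domain type):

```
# Expansion of declare_domain(domU_t): accesses by domU_t to itself
# are relabeled to domU_t_self before the permission check.
type domU_t_self, domain_type, domain_self_type;
type_transition domU_t domU_t:domain domU_t_self;
```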

FLASK likewise no longer performs the IS_PRIV_FOR check used for device
model domains; instead, a similar type transition is performed when the
target is set, and the resulting label is used when the device model
accesses its target domain.
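The resulting SID selection (self, device-model target, or plain domain label) can be sketched in isolation as below. This is a minimal mock of the lookup logic added as domain_target_sid() in hooks.c, using simplified stand-in structs and hypothetical SID values rather than the real Xen types:

```c
#include <assert.h>
#include <stddef.h>

typedef unsigned int u32;

/* Mock of the per-domain security label extended by this patch. */
struct domain_security_struct {
    u32 sid;         /* the domain's own label */
    u32 self_sid;    /* label used when the domain acts on itself */
    u32 target_sid;  /* label used when a device model acts on its target */
};

struct domain {
    struct domain *target;               /* device-model target, if any */
    struct domain_security_struct *ssid;
};

/* Same selection logic as domain_target_sid() in the patch. */
static u32 domain_target_sid(struct domain *src, struct domain *dst)
{
    struct domain_security_struct *ssec = src->ssid;
    struct domain_security_struct *dsec = dst->ssid;
    if (src == dst)
        return ssec->self_sid;    /* reflexive: e.g. domU_t -> domU_t_self */
    if (src->target == dst)
        return ssec->target_sid;  /* device model -> its target domain */
    return dsec->sid;             /* everything else: the target's own label */
}
```

With hypothetical SIDs, a domain acting on itself resolves to its _self label, a device model acting on its target resolves to the _target label set up by flask_set_target(), and any other pair falls back to the target domain's plain label.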

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 tools/flask/policy/policy/modules/xen/xen.if |  64 +++-
 tools/flask/policy/policy/modules/xen/xen.te |  13 +-
 xen/xsm/flask/flask_op.c                     |   9 +
 xen/xsm/flask/hooks.c                        | 470 +++++++++++++--------------
 xen/xsm/flask/include/objsec.h               |   2 +
 5 files changed, 289 insertions(+), 269 deletions(-)

diff --git a/tools/flask/policy/policy/modules/xen/xen.if b/tools/flask/policy/policy/modules/xen/xen.if
index f9bd757..796698b 100644
--- a/tools/flask/policy/policy/modules/xen/xen.if
+++ b/tools/flask/policy/policy/modules/xen/xen.if
@@ -5,15 +5,34 @@
 # Domain creation and setup
 #
 ################################################################################
+define(`declare_domain_common', `
+	allow $1 $2:grant { query setup };
+	allow $1 $2:mmu { adjust physmap map_read map_write stat pinpage updatemp };
+	allow $1 $2:hvm { getparam setparam };
+')
+
 # declare_domain(type, attrs...)
-#   Declare a type as a domain type, and allow basic domain setup
+#   Declare a domain type, along with associated _self and _channel types
+#   Allow the domain to perform basic operations on itself
 define(`declare_domain', `
 	type $1, domain_type`'ifelse(`$#', `1', `', `,shift($@)');
+	type $1_self, domain_type, domain_self_type;
+	type_transition $1 $1:domain $1_self;
+	type $1_channel, event_type;
+	type_transition $1 domain_type:event $1_channel;
+	declare_domain_common($1, $1_self)
+')
+
+# declare_singleton_domain(type, attrs...)
+#   Declare a domain type and associated _channel types.
+#   Note: Because the domain can perform basic operations on itself and any
+#   other domain of the same type, this constructor should be used for types
+#   containing at most one domain. This is not enforced by policy.
+define(`declare_singleton_domain', `
+	type $1, domain_type`'ifelse(`$#', `1', `', `,shift($@)');
 	type $1_channel, event_type;
 	type_transition $1 domain_type:event $1_channel;
-	allow $1 $1:grant { query setup };
-	allow $1 $1:mmu { adjust physmap map_read map_write stat pinpage };
-	allow $1 $1:hvm { getparam setparam };
+	declare_domain_common($1, $1)
 ')
 
 # declare_build_label(type)
@@ -51,6 +70,7 @@ define(`create_domain_build_label', `
 	allow $1 $2_channel:event create;
 	allow $1 $2_building:domain2 relabelfrom;
 	allow $1 $2:domain2 relabelto;
+	allow $2_building $2:domain transition;
 ')
 
 # manage_domain(priv, target)
@@ -101,20 +121,36 @@ define(`domain_comms', `
 ')
 
 # domain_self_comms(domain)
-#   Allow a domain types to communicate with others of its type using grants
-#   and event channels (this includes event channels to DOMID_SELF)
+#   Allow a non-singleton domain type to communicate with itself using grants
+#   and event channels
 define(`domain_self_comms', `
-	create_channel($1, $1, $1_channel)
-	allow $1 $1:grant { map_read map_write copy unmap };
+	create_channel($1, $1_self, $1_channel)
+	allow $1 $1_self:grant { map_read map_write copy unmap };
 ')
 
 # device_model(dm_dom, hvm_dom)
 #   Define how a device model domain interacts with its target
 define(`device_model', `
-	domain_comms($1, $2)
-	allow $1 $2:domain { set_target shutdown };
-	allow $1 $2:mmu { map_read map_write adjust physmap };
-	allow $1 $2:hvm { getparam setparam trackdirtyvram hvmctl irqlevel pciroute };
+	type $2_target, domain_type, domain_target_type;
+	type_transition $2 $1:domain $2_target;
+	allow $1 $2:domain set_target;
+
+	type_transition $2_target domain_type:event $2_channel;
+	create_channel($1, $2_target, $1_channel)
+	create_channel($2, $1, $2_channel)
+	allow $1 $2_channel:event create;
+
+	allow $1 $2_target:domain shutdown;
+	allow $1 $2_target:mmu { map_read map_write adjust physmap };
+	allow $1 $2_target:hvm { getparam setparam trackdirtyvram hvmctl irqlevel pciroute cacheattr };
+')
+
+# make_device_model(priv, dm_dom, hvm_dom)
+#   Allow creation of a device model and HVM domain pair
+define(`make_device_model', `
+	device_model($2, $3)
+	allow $1 $2:domain2 make_priv_for;
+	allow $1 $3:domain2 set_as_target;
 ')
 ################################################################################
 #
@@ -132,7 +168,9 @@ define(`use_device', `
 # admin_device(domain, device)
 #   Allow a device to be used and delegated by a domain
 define(`admin_device', `
-    allow $1 $2:resource { setup stat_device add_device add_irq add_iomem add_ioport remove_device remove_irq remove_iomem remove_ioport };
+    allow $1 $2:resource { setup stat_device add_device add_irq add_iomem add_ioport
+	                       remove_device remove_irq remove_iomem remove_ioport
+						   plug unplug };
     allow $1 $2:hvm bind_irq;
     use_device($1, $2)
 ')
diff --git a/tools/flask/policy/policy/modules/xen/xen.te b/tools/flask/policy/policy/modules/xen/xen.te
index 1162153..8d33285 100644
--- a/tools/flask/policy/policy/modules/xen/xen.te
+++ b/tools/flask/policy/policy/modules/xen/xen.te
@@ -8,6 +8,8 @@
 ################################################################################
 attribute xen_type;
 attribute domain_type;
+attribute domain_self_type;
+attribute domain_target_type;
 attribute resource_type;
 attribute event_type;
 attribute mls_priv;
@@ -25,7 +27,7 @@ attribute mls_priv;
 type xen_t, xen_type, mls_priv;
 
 # Domain 0
-declare_domain(dom0_t, mls_priv);
+declare_singleton_domain(dom0_t, mls_priv);
 
 # Untracked I/O memory (pseudo-domain)
 type domio_t, xen_type;
@@ -69,7 +71,7 @@ admin_device(dom0_t, ioport_t)
 admin_device(dom0_t, iomem_t)
 allow dom0_t domio_t:mmu { map_read map_write };
 
-domain_self_comms(dom0_t)
+domain_comms(dom0_t, dom0_t)
 
 auditallow dom0_t security_t:security { load_policy setenforce setbool };
 
@@ -84,11 +86,14 @@ domain_self_comms(domU_t)
 create_domain(dom0_t, domU_t)
 manage_domain(dom0_t, domU_t)
 domain_comms(dom0_t, domU_t)
+domain_comms(domU_t, domU_t)
+domain_self_comms(domU_t)
 
 declare_domain(isolated_domU_t)
 create_domain(dom0_t, isolated_domU_t)
 manage_domain(dom0_t, isolated_domU_t)
 domain_comms(dom0_t, isolated_domU_t)
+domain_self_comms(isolated_domU_t)
 
 # Declare a boolean that denies creation of prot_domU_t domains
 gen_bool(prot_doms_locked, false)
@@ -98,6 +103,8 @@ if (!prot_doms_locked) {
 }
 domain_comms(dom0_t, prot_domU_t)
 domain_comms(domU_t, prot_domU_t)
+domain_comms(prot_domU_t, prot_domU_t)
+domain_self_comms(prot_domU_t)
 
 # domHVM_t is meant to be paired with a qemu-dm stub domain of type dm_dom_t
 declare_domain(domHVM_t)
@@ -110,7 +117,7 @@ declare_domain(dm_dom_t)
 create_domain(dom0_t, dm_dom_t)
 manage_domain(dom0_t, dm_dom_t)
 domain_comms(dom0_t, dm_dom_t)
-device_model(dm_dom_t, domHVM_t)
+make_device_model(dom0_t, dm_dom_t, domHVM_t)
 
 # nomigrate_t must be built via the nomigrate_t_building label; once built,
 # dom0 cannot read its memory.
diff --git a/xen/xsm/flask/flask_op.c b/xen/xsm/flask/flask_op.c
index 9c0a087..28f6f5e 100644
--- a/xen/xsm/flask/flask_op.c
+++ b/xen/xsm/flask/flask_op.c
@@ -612,6 +612,15 @@ static int flask_relabel_domain(struct xen_flask_relabel *arg)
         goto out;
 
     dsec->sid = arg->sid;
+    dsec->self_sid = arg->sid;
+    security_transition_sid(dsec->sid, dsec->sid, SECCLASS_DOMAIN,
+                            &dsec->self_sid);
+    if ( d->target )
+    {
+        struct domain_security_struct *tsec = d->target->ssid;
+        security_transition_sid(tsec->sid, dsec->sid, SECCLASS_DOMAIN,
+                                &dsec->target_sid);
+    }
 
  out:
     rcu_unlock_domain(d);
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index be5c3ad..dae587c 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -33,38 +33,69 @@
 
 struct xsm_operations *original_ops = NULL;
 
+static u32 domain_sid(struct domain *dom)
+{
+    struct domain_security_struct *dsec = dom->ssid;
+    return dsec->sid;
+}
+
+static u32 domain_target_sid(struct domain *src, struct domain *dst)
+{
+    struct domain_security_struct *ssec = src->ssid;
+    struct domain_security_struct *dsec = dst->ssid;
+    if (src == dst)
+        return ssec->self_sid;
+    if (src->target == dst)
+        return ssec->target_sid;
+    return dsec->sid;
+}
+
+static u32 evtchn_sid(const struct evtchn *chn)
+{
+    struct evtchn_security_struct *esec = chn->ssid;
+    return esec->sid;
+}
+
 static int domain_has_perm(struct domain *dom1, struct domain *dom2, 
                            u16 class, u32 perms)
 {
-    struct domain_security_struct *dsec1, *dsec2;
+    u32 ssid, tsid;
     struct avc_audit_data ad;
     AVC_AUDIT_DATA_INIT(&ad, NONE);
     ad.sdom = dom1;
     ad.tdom = dom2;
 
-    dsec1 = dom1->ssid;
-    dsec2 = dom2->ssid;
+    ssid = domain_sid(dom1);
+    tsid = domain_target_sid(dom1, dom2);
 
-    return avc_has_perm(dsec1->sid, dsec2->sid, class, perms, &ad);
+    return avc_has_perm(ssid, tsid, class, perms, &ad);
 }
 
-static int domain_has_evtchn(struct domain *d, struct evtchn *chn, u32 perms)
+static int avc_current_has_perm(u32 tsid, u16 class, u32 perm,
+                                struct avc_audit_data *ad)
 {
-    struct domain_security_struct *dsec;
-    struct evtchn_security_struct *esec;
+    u32 csid = domain_sid(current->domain);
+    return avc_has_perm(csid, tsid, class, perm, ad);
+}
 
-    dsec = d->ssid;
-    esec = chn->ssid;
+static int current_has_perm(struct domain *d, u16 class, u32 perms)
+{
+    return domain_has_perm(current->domain, d, class, perms);
+}
+
+static int domain_has_evtchn(struct domain *d, struct evtchn *chn, u32 perms)
+{
+    u32 dsid = domain_sid(d);
+    u32 esid = evtchn_sid(chn);
 
-    return avc_has_perm(dsec->sid, esec->sid, SECCLASS_EVENT, perms, NULL);
+    return avc_has_perm(dsid, esid, SECCLASS_EVENT, perms, NULL);
 }
 
 static int domain_has_xen(struct domain *d, u32 perms)
 {
-    struct domain_security_struct *dsec;
-    dsec = d->ssid;
+    u32 dsid = domain_sid(d);
 
-    return avc_has_perm(dsec->sid, SECINITSID_XEN, SECCLASS_XEN, perms, NULL);
+    return avc_has_perm(dsid, SECINITSID_XEN, SECCLASS_XEN, perms, NULL);
 }
 
 static int get_irq_sid(int irq, u32 *sid, struct avc_audit_data *ad)
@@ -109,14 +140,11 @@ static int flask_domain_alloc_security(struct domain *d)
     memset(dsec, 0, sizeof(struct domain_security_struct));
 
     if ( is_idle_domain(d) )
-    {
         dsec->sid = SECINITSID_XEN;
-    }
     else
-    {
         dsec->sid = SECINITSID_UNLABELED;
-    }
 
+    dsec->self_sid = dsec->sid;
     d->ssid = dsec;
 
     return 0;
@@ -136,68 +164,55 @@ static void flask_domain_free_security(struct domain *d)
 static int flask_evtchn_unbound(struct domain *d1, struct evtchn *chn, 
                                 domid_t id2)
 {
-    u32 newsid;
+    u32 sid1, sid2, newsid;
     int rc;
-    domid_t id;
     struct domain *d2;
-    struct domain_security_struct *dsec, *dsec1, *dsec2;
     struct evtchn_security_struct *esec;
 
-    dsec = current->domain->ssid;
-    dsec1 = d1->ssid;
-    esec = chn->ssid;
-
-    if ( id2 == DOMID_SELF )
-        id = current->domain->domain_id;
-    else
-        id = id2;
-
-    d2 = get_domain_by_id(id);
+    d2 = rcu_lock_domain_by_id(id2);
     if ( d2 == NULL )
         return -EPERM;
 
-    dsec2 = d2->ssid;
-    rc = security_transition_sid(dsec1->sid, dsec2->sid, SECCLASS_EVENT, 
-                                 &newsid);
+    sid1 = domain_sid(d1);
+    sid2 = domain_target_sid(d1, d2);
+    esec = chn->ssid;
+
+    rc = security_transition_sid(sid1, sid2, SECCLASS_EVENT, &newsid);
     if ( rc )
         goto out;
 
-    rc = avc_has_perm(dsec->sid, newsid, SECCLASS_EVENT, EVENT__CREATE, NULL);
+    rc = avc_current_has_perm(newsid, SECCLASS_EVENT, EVENT__CREATE, NULL);
     if ( rc )
         goto out;
 
-    rc = avc_has_perm(newsid, dsec2->sid, SECCLASS_EVENT, EVENT__BIND, NULL);
+    rc = avc_has_perm(newsid, sid2, SECCLASS_EVENT, EVENT__BIND, NULL);
     if ( rc )
         goto out;
-    else
-        esec->sid = newsid;
+
+    esec->sid = newsid;
 
  out:
-    put_domain(d2);
+    rcu_unlock_domain(d2);
     return rc;
 }
 
 static int flask_evtchn_interdomain(struct domain *d1, struct evtchn *chn1, 
                                     struct domain *d2, struct evtchn *chn2)
 {
-    u32 newsid;
+    u32 sid1, sid2, newsid, reverse_sid;
     int rc;
-    struct domain_security_struct *dsec, *dsec1, *dsec2;
-    struct evtchn_security_struct *esec1, *esec2;
+    struct evtchn_security_struct *esec1;
     struct avc_audit_data ad;
     AVC_AUDIT_DATA_INIT(&ad, NONE);
     ad.sdom = d1;
     ad.tdom = d2;
 
-    dsec = current->domain->ssid;
-    dsec1 = d1->ssid;
-    dsec2 = d2->ssid;
+    sid1 = domain_sid(d1);
+    sid2 = domain_target_sid(d1, d2);
 
     esec1 = chn1->ssid;
-    esec2 = chn2->ssid;
 
-    rc = security_transition_sid(dsec1->sid, dsec2->sid, 
-                                 SECCLASS_EVENT, &newsid);
+    rc = security_transition_sid(sid1, sid2, SECCLASS_EVENT, &newsid);
     if ( rc )
     {
         printk("%s: security_transition_sid failed, rc=%d (domain=%d)\n",
@@ -205,15 +220,20 @@ static int flask_evtchn_interdomain(struct domain *d1, struct evtchn *chn1,
         return rc;
     }
 
-    rc = avc_has_perm(dsec->sid, newsid, SECCLASS_EVENT, EVENT__CREATE, &ad);
+    rc = avc_current_has_perm(newsid, SECCLASS_EVENT, EVENT__CREATE, &ad);
     if ( rc )
         return rc;
 
-    rc = avc_has_perm(newsid, dsec2->sid, SECCLASS_EVENT, EVENT__BIND, &ad);
+    rc = avc_has_perm(newsid, sid2, SECCLASS_EVENT, EVENT__BIND, &ad);
     if ( rc )
         return rc;
 
-    rc = avc_has_perm(esec2->sid, dsec1->sid, SECCLASS_EVENT, EVENT__BIND, &ad);
+    /* It's possible the target domain has changed (relabel or destroy/create)
+     * since the unbound part was created; re-validate this binding now.
+     */
+    reverse_sid = evtchn_sid(chn2);
+    sid1 = domain_target_sid(d2, d1);
+    rc = avc_has_perm(reverse_sid, sid1, SECCLASS_EVENT, EVENT__BIND, &ad);
     if ( rc )
         return rc;
 
@@ -296,7 +316,6 @@ static void flask_free_security_evtchn(struct evtchn *chn)
 
 static char *flask_show_security_evtchn(struct domain *d, const struct evtchn *chn)
 {
-    struct evtchn_security_struct *esec;
     int irq;
     u32 sid = 0;
     char *ctx;
@@ -306,9 +325,7 @@ static char *flask_show_security_evtchn(struct domain *d, const struct evtchn *c
     {
     case ECS_UNBOUND:
     case ECS_INTERDOMAIN:
-        esec = chn->ssid;
-        if ( esec )
-            sid = esec->sid;
+        sid = evtchn_sid(chn);
         break;
     case ECS_PIRQ:
         irq = domain_pirq_to_irq(d, chn->u.pirq.irq);
@@ -359,11 +376,10 @@ static int flask_grant_query_size(struct domain *d1, struct domain *d2)
     return domain_has_perm(d1, d2, SECCLASS_GRANT, GRANT__QUERY);
 }
 
-static int get_page_sid(struct page_info *page, u32 *sid)
+static int get_page_sid(struct domain *who, struct page_info *page, u32 *sid)
 {
     int rc = 0;
     struct domain *d;
-    struct domain_security_struct *dsec;
     unsigned long mfn;
 
     d = page_get_owner(page);
@@ -389,15 +405,14 @@ static int get_page_sid(struct page_info *page, u32 *sid)
 
     default:
         /*Pages are implicitly labeled by domain ownership!*/
-        dsec = d->ssid;
-        *sid = dsec ? dsec->sid : SECINITSID_UNLABELED;
+        *sid = domain_target_sid(who, d);
         break;
     }
 
     return rc;
 }
 
-static int get_mfn_sid(unsigned long mfn, u32 *sid)
+static int get_mfn_sid(struct domain *who, unsigned long mfn, u32 *sid)
 {
     int rc = 0;
     struct page_info *page;
@@ -406,7 +421,7 @@ static int get_mfn_sid(unsigned long mfn, u32 *sid)
     {
         /*mfn is valid if this is a page that Xen is tracking!*/
         page = mfn_to_page(mfn);
-        rc = get_page_sid(page, sid);
+        rc = get_page_sid(who, page, sid);
     }
     else
     {
@@ -419,12 +434,12 @@ static int get_mfn_sid(unsigned long mfn, u32 *sid)
 
 static int flask_get_pod_target(struct domain *d)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN, DOMAIN__GETPODTARGET);
+    return current_has_perm(d, SECCLASS_DOMAIN, DOMAIN__GETPODTARGET);
 }
 
 static int flask_set_pod_target(struct domain *d)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN, DOMAIN__SETPODTARGET);
+    return current_has_perm(d, SECCLASS_DOMAIN, DOMAIN__SETPODTARGET);
 }
 
 static int flask_memory_adjust_reservation(struct domain *d1, struct domain *d2)
@@ -440,15 +455,14 @@ static int flask_memory_stat_reservation(struct domain *d1, struct domain *d2)
 static int flask_memory_pin_page(struct domain *d, struct page_info *page)
 {
     int rc = 0;
-    u32 sid;
-    struct domain_security_struct *dsec;
-    dsec = d->ssid;
+    u32 dsid, psid;
+    dsid = domain_sid(d);
 
-    rc = get_page_sid(page, &sid);
+    rc = get_page_sid(d, page, &psid);
     if ( rc )
         return rc;
 
-    return avc_has_perm(dsec->sid, sid, SECCLASS_MMU, MMU__PINPAGE, NULL);
+    return avc_has_perm(dsid, psid, SECCLASS_MMU, MMU__PINPAGE, NULL);
 }
 
 static int flask_console_io(struct domain *d, int cmd)
@@ -515,70 +529,65 @@ static int flask_schedop_shutdown(struct domain *d1, struct domain *d2)
 static void flask_security_domaininfo(struct domain *d, 
                                       struct xen_domctl_getdomaininfo *info)
 {
-    struct domain_security_struct *dsec;
-
-    dsec = d->ssid;
-    info->ssidref = dsec->sid;
+    info->ssidref = domain_sid(d);
 }
 
 static int flask_setvcpucontext(struct domain *d)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN, 
-                           DOMAIN__SETVCPUCONTEXT);
+    return current_has_perm(d, SECCLASS_DOMAIN, DOMAIN__SETVCPUCONTEXT);
 }
 
 static int flask_pausedomain(struct domain *d)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN, DOMAIN__PAUSE);
+    return current_has_perm(d, SECCLASS_DOMAIN, DOMAIN__PAUSE);
 }
 
 static int flask_unpausedomain(struct domain *d)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN, DOMAIN__UNPAUSE);
+    return current_has_perm(d, SECCLASS_DOMAIN, DOMAIN__UNPAUSE);
 }
 
 static int flask_resumedomain(struct domain *d)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN, DOMAIN__RESUME);
+    return current_has_perm(d, SECCLASS_DOMAIN, DOMAIN__RESUME);
 }
 
 static int flask_domain_create(struct domain *d, u32 ssidref)
 {
     int rc;
-    struct domain_security_struct *dsec1;
-    struct domain_security_struct *dsec2;
+    struct domain_security_struct *dsec = d->ssid;
     static int dom0_created = 0;
 
-    dsec1 = current->domain->ssid;
-    dsec2 = d->ssid;
-
     if ( is_idle_domain(current->domain) && !dom0_created )
     {
-        dsec2->sid = SECINITSID_DOM0;
+        dsec->sid = SECINITSID_DOM0;
         dom0_created = 1;
-        return 0;
     }
+    else
+    {
+        rc = avc_current_has_perm(ssidref, SECCLASS_DOMAIN,
+                          DOMAIN__CREATE, NULL);
+        if ( rc )
+            return rc;
 
-    rc = avc_has_perm(dsec1->sid, ssidref, SECCLASS_DOMAIN,
-                      DOMAIN__CREATE, NULL);
-    if ( rc )
-        return rc;
+        dsec->sid = ssidref;
+    }
+    dsec->self_sid = dsec->sid;
 
-    dsec2->sid = ssidref;
+    rc = security_transition_sid(dsec->sid, dsec->sid, SECCLASS_DOMAIN,
+                                 &dsec->self_sid);
 
     return rc;
 }
 
 static int flask_max_vcpus(struct domain *d)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN, 
-                           DOMAIN__MAX_VCPUS);
+    return current_has_perm(d, SECCLASS_DOMAIN, DOMAIN__MAX_VCPUS);
 }
 
 static int flask_destroydomain(struct domain *d)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN, 
-                           DOMAIN__DESTROY);
+    return current_has_perm(d, SECCLASS_DOMAIN, DOMAIN__DESTROY);
 }
 
 static int flask_vcpuaffinity(int cmd, struct domain *d)
@@ -597,7 +606,7 @@ static int flask_vcpuaffinity(int cmd, struct domain *d)
         return -EPERM;
     }
 
-    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN, perm );
+    return current_has_perm(d, SECCLASS_DOMAIN, perm );
 }
 
 static int flask_scheduler(struct domain *d)
@@ -608,53 +617,61 @@ static int flask_scheduler(struct domain *d)
     if ( rc )
         return rc;
 
-    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN, 
-                           DOMAIN__SCHEDULER);
+    return current_has_perm(d, SECCLASS_DOMAIN, DOMAIN__SCHEDULER);
 }
 
 static int flask_getdomaininfo(struct domain *d)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN,
-                           DOMAIN__GETDOMAININFO);
+    return current_has_perm(d, SECCLASS_DOMAIN, DOMAIN__GETDOMAININFO);
 }
 
 static int flask_getvcpucontext(struct domain *d)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN, 
-                           DOMAIN__GETVCPUCONTEXT);
+    return current_has_perm(d, SECCLASS_DOMAIN, DOMAIN__GETVCPUCONTEXT);
 }
 
 static int flask_getvcpuinfo(struct domain *d)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN,
-                           DOMAIN__GETVCPUINFO);
+    return current_has_perm(d, SECCLASS_DOMAIN, DOMAIN__GETVCPUINFO);
 }
 
 static int flask_domain_settime(struct domain *d)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN, DOMAIN__SETTIME);
+    return current_has_perm(d, SECCLASS_DOMAIN, DOMAIN__SETTIME);
 }
 
-static int flask_set_target(struct domain *d, struct domain *e)
+static int flask_set_target(struct domain *d, struct domain *t)
 {
     int rc;
-    rc = domain_has_perm(current->domain, d, SECCLASS_DOMAIN2, DOMAIN2__MAKE_PRIV_FOR);
+    struct domain_security_struct *dsec, *tsec;
+    dsec = d->ssid;
+    tsec = t->ssid;
+
+    rc = current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__MAKE_PRIV_FOR);
     if ( rc )
         return rc;
-    rc = domain_has_perm(current->domain, e, SECCLASS_DOMAIN2, DOMAIN2__SET_AS_TARGET);
+    rc = current_has_perm(t, SECCLASS_DOMAIN2, DOMAIN2__SET_AS_TARGET);
     if ( rc )
         return rc;
-    return domain_has_perm(d, e, SECCLASS_DOMAIN, DOMAIN__SET_TARGET);
+    /* Use avc_has_perm to avoid resolving target/current SID */
+    rc = avc_has_perm(dsec->sid, tsec->sid, SECCLASS_DOMAIN, DOMAIN__SET_TARGET, NULL);
+    if ( rc )
+        return rc;
+
+    /* (tsec, dsec) defaults the label to tsec, as it should here */
+    rc = security_transition_sid(tsec->sid, dsec->sid, SECCLASS_DOMAIN,
+                                 &dsec->target_sid);
+    return rc;
 }
 
 static int flask_domctl(struct domain *d, int cmd)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN, DOMAIN__SET_MISC_INFO);
+    return current_has_perm(d, SECCLASS_DOMAIN, DOMAIN__SET_MISC_INFO);
 }
 
 static int flask_set_virq_handler(struct domain *d, uint32_t virq)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN, DOMAIN__SET_VIRQ_HANDLER);
+    return current_has_perm(d, SECCLASS_DOMAIN, DOMAIN__SET_VIRQ_HANDLER);
 }
 
 static int flask_tbufcontrol(void)
@@ -679,26 +696,22 @@ static int flask_sched_id(void)
 
 static int flask_setdomainmaxmem(struct domain *d)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN,
-                           DOMAIN__SETDOMAINMAXMEM);
+    return current_has_perm(d, SECCLASS_DOMAIN, DOMAIN__SETDOMAINMAXMEM);
 }
 
 static int flask_setdomainhandle(struct domain *d)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN,
-                           DOMAIN__SETDOMAINHANDLE);
+    return current_has_perm(d, SECCLASS_DOMAIN, DOMAIN__SETDOMAINHANDLE);
 }
 
 static int flask_setdebugging(struct domain *d)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN,
-                           DOMAIN__SETDEBUGGING);
+    return current_has_perm(d, SECCLASS_DOMAIN, DOMAIN__SETDEBUGGING);
 }
 
 static int flask_debug_op(struct domain *d)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN,
-                           DOMAIN__SETDEBUGGING);
+    return current_has_perm(d, SECCLASS_DOMAIN, DOMAIN__SETDEBUGGING);
 }
 
 static int flask_debug_keys(void)
@@ -760,14 +773,12 @@ static char *flask_show_irq_sid (int irq)
 
 static int flask_map_domain_pirq (struct domain *d, int irq, void *data)
 {
-    u32 sid;
+    u32 sid, dsid;
     int rc = -EPERM;
     struct msi_info *msi = data;
-
-    struct domain_security_struct *ssec, *tsec;
     struct avc_audit_data ad;
 
-    rc = domain_has_perm(current->domain, d, SECCLASS_RESOURCE, RESOURCE__ADD);
+    rc = current_has_perm(d, SECCLASS_RESOURCE, RESOURCE__ADD);
 
     if ( rc )
         return rc;
@@ -783,14 +794,13 @@ static int flask_map_domain_pirq (struct domain *d, int irq, void *data)
     if ( rc )
         return rc;
 
-    ssec = current->domain->ssid;
-    tsec = d->ssid;
+    dsid = domain_sid(d);
 
-    rc = avc_has_perm(ssec->sid, sid, SECCLASS_RESOURCE, RESOURCE__ADD_IRQ, &ad);
+    rc = avc_current_has_perm(sid, SECCLASS_RESOURCE, RESOURCE__ADD_IRQ, &ad);
     if ( rc )
         return rc;
 
-    rc = avc_has_perm(tsec->sid, sid, SECCLASS_RESOURCE, RESOURCE__USE, &ad);
+    rc = avc_has_perm(dsid, sid, SECCLASS_RESOURCE, RESOURCE__USE, &ad);
     return rc;
 }
 
@@ -798,16 +808,12 @@ static int flask_unmap_domain_pirq (struct domain *d, int irq)
 {
     u32 sid;
     int rc = -EPERM;
-
-    struct domain_security_struct *ssec;
     struct avc_audit_data ad;
 
-    rc = domain_has_perm(current->domain, d, SECCLASS_RESOURCE, RESOURCE__REMOVE);
+    rc = current_has_perm(d, SECCLASS_RESOURCE, RESOURCE__REMOVE);
     if ( rc )
         return rc;
 
-    ssec = current->domain->ssid;
-
     if ( irq >= nr_irqs_gsi ) {
         /* TODO support for MSI here */
         return 0;
@@ -817,19 +823,19 @@ static int flask_unmap_domain_pirq (struct domain *d, int irq)
     if ( rc )
         return rc;
 
-    rc = avc_has_perm(ssec->sid, sid, SECCLASS_RESOURCE, RESOURCE__REMOVE_IRQ, &ad);
+    rc = avc_current_has_perm(sid, SECCLASS_RESOURCE, RESOURCE__REMOVE_IRQ, &ad);
     return rc;
 }
 
 static int flask_irq_permission (struct domain *d, int pirq, uint8_t access)
 {
     /* the PIRQ number is not useful; real IRQ is checked during mapping */
-    return domain_has_perm(current->domain, d, SECCLASS_RESOURCE,
-                           resource_to_perm(access));
+    return current_has_perm(d, SECCLASS_RESOURCE, resource_to_perm(access));
 }
 
 struct iomem_has_perm_data {
-    struct domain_security_struct *ssec, *tsec;
+    u32 ssid;
+    u32 dsid;
     u32 perm;
 };
 
@@ -843,12 +849,12 @@ static int _iomem_has_perm(void *v, u32 sid, unsigned long start, unsigned long
     ad.range.start = start;
     ad.range.end = end;
 
-    rc = avc_has_perm(data->ssec->sid, sid, SECCLASS_RESOURCE, data->perm, &ad);
+    rc = avc_has_perm(data->ssid, sid, SECCLASS_RESOURCE, data->perm, &ad);
 
     if ( rc )
         return rc;
 
-    return avc_has_perm(data->tsec->sid, sid, SECCLASS_RESOURCE, RESOURCE__USE, &ad);
+    return avc_has_perm(data->dsid, sid, SECCLASS_RESOURCE, RESOURCE__USE, &ad);
 }
 
 static int flask_iomem_permission(struct domain *d, uint64_t start, uint64_t end, uint8_t access)
@@ -856,7 +862,7 @@ static int flask_iomem_permission(struct domain *d, uint64_t start, uint64_t end
     struct iomem_has_perm_data data;
     int rc;
 
-    rc = domain_has_perm(current->domain, d, SECCLASS_RESOURCE,
+    rc = current_has_perm(d, SECCLASS_RESOURCE,
                          resource_to_perm(access));
     if ( rc )
         return rc;
@@ -866,18 +872,17 @@ static int flask_iomem_permission(struct domain *d, uint64_t start, uint64_t end
     else
         data.perm = RESOURCE__REMOVE_IOMEM;
 
-    data.ssec = current->domain->ssid;
-    data.tsec = d->ssid;
+    data.ssid = domain_sid(current->domain);
+    data.dsid = domain_sid(d);
 
     return security_iterate_iomem_sids(start, end, _iomem_has_perm, &data);
 }
 
 static int flask_pci_config_permission(struct domain *d, uint32_t machine_bdf, uint16_t start, uint16_t end, uint8_t access)
 {
-    u32 rsid;
+    u32 dsid, rsid;
     int rc = -EPERM;
     struct avc_audit_data ad;
-    struct domain_security_struct *ssec;
     u32 perm = RESOURCE__USE;
 
     rc = security_device_sid(machine_bdf, &rsid);
@@ -890,33 +895,24 @@ static int flask_pci_config_permission(struct domain *d, uint32_t machine_bdf, u
 
     AVC_AUDIT_DATA_INIT(&ad, DEV);
     ad.device = (unsigned long) machine_bdf;
-    ssec = d->ssid;
-    return avc_has_perm(ssec->sid, rsid, SECCLASS_RESOURCE, perm, &ad);
+    dsid = domain_sid(d);
+    return avc_has_perm(dsid, rsid, SECCLASS_RESOURCE, perm, &ad);
 
 }
 
 static int flask_resource_plug_core(void)
 {
-    struct domain_security_struct *ssec;
-
-    ssec = current->domain->ssid;
-    return avc_has_perm(ssec->sid, SECINITSID_DOMXEN, SECCLASS_RESOURCE, RESOURCE__PLUG, NULL);
+    return avc_current_has_perm(SECINITSID_DOMXEN, SECCLASS_RESOURCE, RESOURCE__PLUG, NULL);
 }
 
 static int flask_resource_unplug_core(void)
 {
-    struct domain_security_struct *ssec;
-
-    ssec = current->domain->ssid;
-    return avc_has_perm(ssec->sid, SECINITSID_DOMXEN, SECCLASS_RESOURCE, RESOURCE__UNPLUG, NULL);
+    return avc_current_has_perm(SECINITSID_DOMXEN, SECCLASS_RESOURCE, RESOURCE__UNPLUG, NULL);
 }
 
 static int flask_resource_use_core(void)
 {
-    struct domain_security_struct *ssec;
-
-    ssec = current->domain->ssid;
-    return avc_has_perm(ssec->sid, SECINITSID_DOMXEN, SECCLASS_RESOURCE, RESOURCE__USE, NULL);
+    return avc_current_has_perm(SECINITSID_DOMXEN, SECCLASS_RESOURCE, RESOURCE__USE, NULL);
 }
 
 static int flask_resource_plug_pci(uint32_t machine_bdf)
@@ -924,7 +920,6 @@ static int flask_resource_plug_pci(uint32_t machine_bdf)
     u32 rsid;
     int rc = -EPERM;
     struct avc_audit_data ad;
-    struct domain_security_struct *ssec;
 
     rc = security_device_sid(machine_bdf, &rsid);
     if ( rc )
@@ -932,8 +927,7 @@ static int flask_resource_plug_pci(uint32_t machine_bdf)
 
     AVC_AUDIT_DATA_INIT(&ad, DEV);
     ad.device = (unsigned long) machine_bdf;
-    ssec = current->domain->ssid;
-    return avc_has_perm(ssec->sid, rsid, SECCLASS_RESOURCE, RESOURCE__PLUG, &ad);
+    return avc_current_has_perm(rsid, SECCLASS_RESOURCE, RESOURCE__PLUG, &ad);
 }
 
 static int flask_resource_unplug_pci(uint32_t machine_bdf)
@@ -941,7 +935,6 @@ static int flask_resource_unplug_pci(uint32_t machine_bdf)
     u32 rsid;
     int rc = -EPERM;
     struct avc_audit_data ad;
-    struct domain_security_struct *ssec;
 
     rc = security_device_sid(machine_bdf, &rsid);
     if ( rc )
@@ -949,8 +942,7 @@ static int flask_resource_unplug_pci(uint32_t machine_bdf)
 
     AVC_AUDIT_DATA_INIT(&ad, DEV);
     ad.device = (unsigned long) machine_bdf;
-    ssec = current->domain->ssid;
-    return avc_has_perm(ssec->sid, rsid, SECCLASS_RESOURCE, RESOURCE__UNPLUG, &ad);
+    return avc_current_has_perm(rsid, SECCLASS_RESOURCE, RESOURCE__UNPLUG, &ad);
 }
 
 static int flask_resource_setup_pci(uint32_t machine_bdf)
@@ -958,7 +950,6 @@ static int flask_resource_setup_pci(uint32_t machine_bdf)
     u32 rsid;
     int rc = -EPERM;
     struct avc_audit_data ad;
-    struct domain_security_struct *ssec;
 
     rc = security_device_sid(machine_bdf, &rsid);
     if ( rc )
@@ -966,8 +957,7 @@ static int flask_resource_setup_pci(uint32_t machine_bdf)
 
     AVC_AUDIT_DATA_INIT(&ad, DEV);
     ad.device = (unsigned long) machine_bdf;
-    ssec = current->domain->ssid;
-    return avc_has_perm(ssec->sid, rsid, SECCLASS_RESOURCE, RESOURCE__SETUP, &ad);
+    return avc_current_has_perm(rsid, SECCLASS_RESOURCE, RESOURCE__SETUP, &ad);
 }
 
 static int flask_resource_setup_gsi(int gsi)
@@ -975,22 +965,17 @@ static int flask_resource_setup_gsi(int gsi)
     u32 rsid;
     int rc = -EPERM;
     struct avc_audit_data ad;
-    struct domain_security_struct *ssec;
 
     rc = get_irq_sid(gsi, &rsid, &ad);
     if ( rc )
         return rc;
 
-    ssec = current->domain->ssid;
-    return avc_has_perm(ssec->sid, rsid, SECCLASS_RESOURCE, RESOURCE__SETUP, &ad);
+    return avc_current_has_perm(rsid, SECCLASS_RESOURCE, RESOURCE__SETUP, &ad);
 }
 
 static int flask_resource_setup_misc(void)
 {
-    struct domain_security_struct *ssec;
-
-    ssec = current->domain->ssid;
-    return avc_has_perm(ssec->sid, SECINITSID_XEN, SECCLASS_RESOURCE, RESOURCE__SETUP, NULL);
+    return avc_current_has_perm(SECINITSID_XEN, SECCLASS_RESOURCE, RESOURCE__SETUP, NULL);
 }
 
 static inline int flask_page_offline(uint32_t cmd)
@@ -1058,11 +1043,12 @@ static int flask_shadow_control(struct domain *d, uint32_t op)
         return -EPERM;
     }
 
-    return domain_has_perm(current->domain, d, SECCLASS_SHADOW, perm);
+    return current_has_perm(d, SECCLASS_SHADOW, perm);
 }
 
 struct ioport_has_perm_data {
-    struct domain_security_struct *ssec, *tsec;
+    u32 ssid;
+    u32 dsid;
     u32 perm;
 };
 
@@ -1076,12 +1062,12 @@ static int _ioport_has_perm(void *v, u32 sid, unsigned long start, unsigned long
     ad.range.start = start;
     ad.range.end = end;
 
-    rc = avc_has_perm(data->ssec->sid, sid, SECCLASS_RESOURCE, data->perm, &ad);
+    rc = avc_has_perm(data->ssid, sid, SECCLASS_RESOURCE, data->perm, &ad);
 
     if ( rc )
         return rc;
 
-    return avc_has_perm(data->tsec->sid, sid, SECCLASS_RESOURCE, RESOURCE__USE, &ad);
+    return avc_has_perm(data->dsid, sid, SECCLASS_RESOURCE, RESOURCE__USE, &ad);
 }
 
 
@@ -1090,7 +1076,7 @@ static int flask_ioport_permission(struct domain *d, uint32_t start, uint32_t en
     int rc;
     struct ioport_has_perm_data data;
 
-    rc = domain_has_perm(current->domain, d, SECCLASS_RESOURCE,
+    rc = current_has_perm(d, SECCLASS_RESOURCE,
                          resource_to_perm(access));
 
     if ( rc )
@@ -1101,8 +1087,8 @@ static int flask_ioport_permission(struct domain *d, uint32_t start, uint32_t en
     else
         data.perm = RESOURCE__REMOVE_IOPORT;
 
-    data.ssec = current->domain->ssid;
-    data.tsec = d->ssid;
+    data.ssid = domain_sid(current->domain);
+    data.dsid = domain_sid(d);
 
     return security_iterate_ioport_sids(start, end, _ioport_has_perm, &data);
 }
@@ -1111,46 +1097,42 @@ static int flask_getpageframeinfo(struct page_info *page)
 {
     int rc = 0;
     u32 tsid;
-    struct domain_security_struct *dsec;
-
-    dsec = current->domain->ssid;
 
-    rc = get_page_sid(page, &tsid);
+    rc = get_page_sid(current->domain, page, &tsid);
     if ( rc )
         return rc;
 
-    return avc_has_perm(dsec->sid, tsid, SECCLASS_MMU, MMU__PAGEINFO, NULL);    
+    return avc_current_has_perm(tsid, SECCLASS_MMU, MMU__PAGEINFO, NULL);
 }
 
 static int flask_getpageframeinfo_domain(struct domain *d)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_MMU, MMU__PAGEINFO);
+    return current_has_perm(d, SECCLASS_MMU, MMU__PAGEINFO);
 }
 
 static int flask_set_cpuid(struct domain *d)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN2, DOMAIN2__SET_CPUID);
+    return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__SET_CPUID);
 }
 
 static int flask_gettscinfo(struct domain *d)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN2, DOMAIN2__GETTSC);
+    return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__GETTSC);
 }
 
 static int flask_settscinfo(struct domain *d)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN2, DOMAIN2__SETTSC);
+    return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__SETTSC);
 }
 
 static int flask_getmemlist(struct domain *d)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_MMU, MMU__PAGELIST);
+    return current_has_perm(d, SECCLASS_MMU, MMU__PAGELIST);
 }
 
 static int flask_hypercall_init(struct domain *d)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN,
-                           DOMAIN__HYPERCALL);
+    return current_has_perm(d, SECCLASS_DOMAIN, DOMAIN__HYPERCALL);
 }
 
 static int flask_hvmcontext(struct domain *d, uint32_t cmd)
@@ -1173,7 +1155,7 @@ static int flask_hvmcontext(struct domain *d, uint32_t cmd)
         return -EPERM;
     }
 
-    return domain_has_perm(current->domain, d, SECCLASS_HVM, perm);
+    return current_has_perm(d, SECCLASS_HVM, perm);
 }
 
 static int flask_address_size(struct domain *d, uint32_t cmd)
@@ -1192,7 +1174,7 @@ static int flask_address_size(struct domain *d, uint32_t cmd)
         return -EPERM;
     }
 
-    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN, perm);
+    return current_has_perm(d, SECCLASS_DOMAIN, perm);
 }
 
 static int flask_hvm_param(struct domain *d, unsigned long op)
@@ -1214,47 +1196,47 @@ static int flask_hvm_param(struct domain *d, unsigned long op)
         perm = HVM__HVMCTL;
     }
 
-    return domain_has_perm(current->domain, d, SECCLASS_HVM, perm);
+    return current_has_perm(d, SECCLASS_HVM, perm);
 }
 
 static int flask_hvm_set_pci_intx_level(struct domain *d)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_HVM, HVM__PCILEVEL);
+    return current_has_perm(d, SECCLASS_HVM, HVM__PCILEVEL);
 }
 
 static int flask_hvm_set_isa_irq_level(struct domain *d)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_HVM, HVM__IRQLEVEL);
+    return current_has_perm(d, SECCLASS_HVM, HVM__IRQLEVEL);
 }
 
 static int flask_hvm_set_pci_link_route(struct domain *d)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_HVM, HVM__PCIROUTE);
+    return current_has_perm(d, SECCLASS_HVM, HVM__PCIROUTE);
 }
 
 static int flask_mem_event_setup(struct domain *d)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_HVM, HVM__MEM_EVENT);
+    return current_has_perm(d, SECCLASS_HVM, HVM__MEM_EVENT);
 }
 
 static int flask_mem_event_control(struct domain *d, int mode, int op)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_HVM, HVM__MEM_EVENT);
+    return current_has_perm(d, SECCLASS_HVM, HVM__MEM_EVENT);
 }
 
 static int flask_mem_event_op(struct domain *d, int op)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_HVM, HVM__MEM_EVENT);
+    return current_has_perm(d, SECCLASS_HVM, HVM__MEM_EVENT);
 }
 
 static int flask_mem_sharing(struct domain *d)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_HVM, HVM__MEM_SHARING);
+    return current_has_perm(d, SECCLASS_HVM, HVM__MEM_SHARING);
 }
 
 static int flask_mem_sharing_op(struct domain *d, struct domain *cd, int op)
 {
-    int rc = domain_has_perm(current->domain, cd, SECCLASS_HVM, HVM__MEM_SHARING);
+    int rc = current_has_perm(cd, SECCLASS_HVM, HVM__MEM_SHARING);
     if ( rc )
         return rc;
     return domain_has_perm(d, cd, SECCLASS_HVM, HVM__SHARE_MEM);
@@ -1262,7 +1244,7 @@ static int flask_mem_sharing_op(struct domain *d, struct domain *cd, int op)
 
 static int flask_audit_p2m(struct domain *d)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_HVM, HVM__AUDIT_P2M);
+    return current_has_perm(d, SECCLASS_HVM, HVM__AUDIT_P2M);
 }
 
 static int flask_apic(struct domain *d, int cmd)
@@ -1323,11 +1305,7 @@ static int flask_physinfo(void)
 
 static int flask_platform_quirk(uint32_t quirk)
 {
-    struct domain_security_struct *dsec;
-    dsec = current->domain->ssid;
-
-    return avc_has_perm(dsec->sid, SECINITSID_XEN, SECCLASS_XEN, 
-                        XEN__QUIRK, NULL);
+    return avc_current_has_perm(SECINITSID_XEN, SECCLASS_XEN, XEN__QUIRK, NULL);
 }
 
 static int flask_firmware_info(void)
@@ -1357,16 +1335,12 @@ static int flask_getidletime(void)
 
 static int flask_machine_memory_map(void)
 {
-    struct domain_security_struct *dsec;
-    dsec = current->domain->ssid;
-
-    return avc_has_perm(dsec->sid, SECINITSID_XEN, SECCLASS_MMU, 
-                        MMU__MEMORYMAP, NULL);
+    return avc_current_has_perm(SECINITSID_XEN, SECCLASS_MMU, MMU__MEMORYMAP, NULL);
 }
 
 static int flask_domain_memory_map(struct domain *d)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_MMU, MMU__MEMORYMAP);
+    return current_has_perm(d, SECCLASS_MMU, MMU__MEMORYMAP);
 }
 
 static int flask_mmu_normal_update(struct domain *d, struct domain *t,
@@ -1375,8 +1349,7 @@ static int flask_mmu_normal_update(struct domain *d, struct domain *t,
     int rc = 0;
     u32 map_perms = MMU__MAP_READ;
     unsigned long fgfn, fmfn;
-    struct domain_security_struct *dsec;
-    u32 fsid;
+    u32 dsid, fsid;
     struct avc_audit_data ad;
     p2m_type_t p2mt;
 
@@ -1388,7 +1361,7 @@ static int flask_mmu_normal_update(struct domain *d, struct domain *t,
     if ( !(l1e_get_flags(l1e_from_intpte(fpte)) & _PAGE_PRESENT) )
         return 0;
 
-    dsec = d->ssid;
+    dsid = domain_sid(d);
 
     if ( l1e_get_flags(l1e_from_intpte(fpte)) & _PAGE_RW )
         map_perms |= MMU__MAP_WRITE;
@@ -1402,37 +1375,35 @@ static int flask_mmu_normal_update(struct domain *d, struct domain *t,
     ad.memory.pte = fpte;
     ad.memory.mfn = fmfn;
 
-    rc = get_mfn_sid(fmfn, &fsid);
+    rc = get_mfn_sid(d, fmfn, &fsid);
 
     put_gfn(f, fgfn);
 
     if ( rc )
         return rc;
 
-    return avc_has_perm(dsec->sid, fsid, SECCLASS_MMU, map_perms, &ad);
+    return avc_has_perm(dsid, fsid, SECCLASS_MMU, map_perms, &ad);
 }
 
 static int flask_mmu_machphys_update(struct domain *d, unsigned long mfn)
 {
     int rc = 0;
-    u32 psid;
-    struct domain_security_struct *dsec;
-    dsec = d->ssid;
+    u32 dsid, psid;
+    dsid = domain_sid(d);
 
-    rc = get_mfn_sid(mfn, &psid);
+    rc = get_mfn_sid(d, mfn, &psid);
     if ( rc )
         return rc;
 
-    return avc_has_perm(dsec->sid, psid, SECCLASS_MMU, MMU__UPDATEMP, NULL);
+    return avc_has_perm(dsid, psid, SECCLASS_MMU, MMU__UPDATEMP, NULL);
 }
 
 static int flask_update_va_mapping(struct domain *d, struct domain *f,
                                    l1_pgentry_t pte)
 {
     int rc = 0;
-    u32 psid;
+    u32 dsid, psid;
     u32 map_perms = MMU__MAP_READ;
-    struct domain_security_struct *dsec;
     unsigned long fgfn, fmfn;
     p2m_type_t p2mt;
 
@@ -1442,16 +1413,16 @@ static int flask_update_va_mapping(struct domain *d, struct domain *f,
     if ( l1e_get_flags(pte) & _PAGE_RW )
         map_perms |= MMU__MAP_WRITE;
 
-    dsec = d->ssid;
+    dsid = domain_sid(d);
     fgfn = l1e_get_pfn(pte);
     fmfn = mfn_x(get_gfn_query(f, fgfn, &p2mt));
-    rc = get_mfn_sid(fmfn, &psid);
+    rc = get_mfn_sid(d, fmfn, &psid);
     put_gfn(f, fgfn);
 
     if ( rc )
         return rc;
 
-    return avc_has_perm(dsec->sid, psid, SECCLASS_MMU, map_perms, NULL);
+    return avc_has_perm(dsid, psid, SECCLASS_MMU, map_perms, NULL);
 }
 
 static int flask_add_to_physmap(struct domain *d1, struct domain *d2)
@@ -1466,43 +1437,40 @@ static int flask_remove_from_physmap(struct domain *d1, struct domain *d2)
 
 static int flask_sendtrigger(struct domain *d)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN, DOMAIN__TRIGGER);
+    return current_has_perm(d, SECCLASS_DOMAIN, DOMAIN__TRIGGER);
 }
 
 static int flask_get_device_group(uint32_t machine_bdf)
 {
     u32 rsid;
     int rc = -EPERM;
-    struct domain_security_struct *ssec = current->domain->ssid;
 
     rc = security_device_sid(machine_bdf, &rsid);
     if ( rc )
         return rc;
 
-    return avc_has_perm(ssec->sid, rsid, SECCLASS_RESOURCE, RESOURCE__STAT_DEVICE, NULL);
+    return avc_current_has_perm(rsid, SECCLASS_RESOURCE, RESOURCE__STAT_DEVICE, NULL);
 }
 
 static int flask_test_assign_device(uint32_t machine_bdf)
 {
     u32 rsid;
     int rc = -EPERM;
-    struct domain_security_struct *ssec = current->domain->ssid;
 
     rc = security_device_sid(machine_bdf, &rsid);
     if ( rc )
         return rc;
 
-    return rc = avc_has_perm(ssec->sid, rsid, SECCLASS_RESOURCE, RESOURCE__STAT_DEVICE, NULL);
+    return avc_current_has_perm(rsid, SECCLASS_RESOURCE, RESOURCE__STAT_DEVICE, NULL);
 }
 
 static int flask_assign_device(struct domain *d, uint32_t machine_bdf)
 {
-    u32 rsid;
+    u32 dsid, rsid;
     int rc = -EPERM;
-    struct domain_security_struct *ssec, *tsec;
     struct avc_audit_data ad;
 
-    rc = domain_has_perm(current->domain, d, SECCLASS_RESOURCE, RESOURCE__ADD);
+    rc = current_has_perm(d, SECCLASS_RESOURCE, RESOURCE__ADD);
     if ( rc )
         return rc;
 
@@ -1512,22 +1480,20 @@ static int flask_assign_device(struct domain *d, uint32_t machine_bdf)
 
     AVC_AUDIT_DATA_INIT(&ad, DEV);
     ad.device = (unsigned long) machine_bdf;
-    ssec = current->domain->ssid;
-    rc = avc_has_perm(ssec->sid, rsid, SECCLASS_RESOURCE, RESOURCE__ADD_DEVICE, &ad);
+    rc = avc_current_has_perm(rsid, SECCLASS_RESOURCE, RESOURCE__ADD_DEVICE, &ad);
     if ( rc )
         return rc;
 
-    tsec = d->ssid;
-    return avc_has_perm(tsec->sid, rsid, SECCLASS_RESOURCE, RESOURCE__USE, &ad);
+    dsid = domain_sid(d);
+    return avc_has_perm(dsid, rsid, SECCLASS_RESOURCE, RESOURCE__USE, &ad);
 }
 
 static int flask_deassign_device(struct domain *d, uint32_t machine_bdf)
 {
     u32 rsid;
     int rc = -EPERM;
-    struct domain_security_struct *ssec = current->domain->ssid;
 
-    rc = domain_has_perm(current->domain, d, SECCLASS_RESOURCE, RESOURCE__REMOVE);
+    rc = current_has_perm(d, SECCLASS_RESOURCE, RESOURCE__REMOVE);
     if ( rc )
         return rc;
 
@@ -1535,18 +1501,17 @@ static int flask_deassign_device(struct domain *d, uint32_t machine_bdf)
     if ( rc )
         return rc;
 
-    return rc = avc_has_perm(ssec->sid, rsid, SECCLASS_RESOURCE, RESOURCE__REMOVE_DEVICE, NULL);
+    return avc_current_has_perm(rsid, SECCLASS_RESOURCE, RESOURCE__REMOVE_DEVICE, NULL);
 }
 
 static int flask_bind_pt_irq (struct domain *d, struct xen_domctl_bind_pt_irq *bind)
 {
-    u32 rsid;
+    u32 dsid, rsid;
     int rc = -EPERM;
     int irq;
-    struct domain_security_struct *ssec, *tsec;
     struct avc_audit_data ad;
 
-    rc = domain_has_perm(current->domain, d, SECCLASS_RESOURCE, RESOURCE__ADD);
+    rc = current_has_perm(d, SECCLASS_RESOURCE, RESOURCE__ADD);
     if ( rc )
         return rc;
 
@@ -1556,23 +1521,22 @@ static int flask_bind_pt_irq (struct domain *d, struct xen_domctl_bind_pt_irq *b
     if ( rc )
         return rc;
 
-    ssec = current->domain->ssid;
-    rc = avc_has_perm(ssec->sid, rsid, SECCLASS_HVM, HVM__BIND_IRQ, &ad);
+    rc = avc_current_has_perm(rsid, SECCLASS_HVM, HVM__BIND_IRQ, &ad);
     if ( rc )
         return rc;
 
-    tsec = d->ssid;
-    return avc_has_perm(tsec->sid, rsid, SECCLASS_RESOURCE, RESOURCE__USE, &ad);
+    dsid = domain_sid(d);
+    return avc_has_perm(dsid, rsid, SECCLASS_RESOURCE, RESOURCE__USE, &ad);
 }
 
 static int flask_unbind_pt_irq (struct domain *d, struct xen_domctl_bind_pt_irq *bind)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_RESOURCE, RESOURCE__REMOVE);
+    return current_has_perm(d, SECCLASS_RESOURCE, RESOURCE__REMOVE);
 }
 
 static int flask_pin_mem_cacheattr (struct domain *d)
 {
-    return domain_has_perm(current->domain, d, SECCLASS_HVM, HVM__CACHEATTR);
+    return current_has_perm(d, SECCLASS_HVM, HVM__CACHEATTR);
 }
 
 static int flask_ext_vcpucontext (struct domain *d, uint32_t cmd)
@@ -1591,7 +1555,7 @@ static int flask_ext_vcpucontext (struct domain *d, uint32_t cmd)
         return -EPERM;
     }
 
-    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN, perm);
+    return current_has_perm(d, SECCLASS_DOMAIN, perm);
 }
 
 static int flask_vcpuextstate (struct domain *d, uint32_t cmd)
@@ -1610,7 +1574,7 @@ static int flask_vcpuextstate (struct domain *d, uint32_t cmd)
             return -EPERM;
     }
 
-    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN, perm);
+    return current_has_perm(d, SECCLASS_DOMAIN, perm);
 }
 #endif
 
diff --git a/xen/xsm/flask/include/objsec.h b/xen/xsm/flask/include/objsec.h
index 4ff52be..6595dc3 100644
--- a/xen/xsm/flask/include/objsec.h
+++ b/xen/xsm/flask/include/objsec.h
@@ -19,6 +19,8 @@
 
 struct domain_security_struct {
     u32 sid;               /* current SID */
+    u32 self_sid;          /* SID for target when operating on DOMID_SELF */
+    u32 target_sid;        /* SID for device model target domain */
 };
 
 struct evtchn_security_struct {
-- 
1.7.11.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 14:32:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:32:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOMS-0002mI-T6; Mon, 06 Aug 2012 14:32:48 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1SyOMR-0002iX-Gb
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 14:32:47 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-4.tower-27.messagelabs.com!1344263559!9376297!1
X-Originating-IP: [63.239.67.9]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21947 invoked from network); 6 Aug 2012 14:32:39 -0000
Received: from emvm-gh1-uea08.nsa.gov (HELO nsa.gov) (63.239.67.9)
	by server-4.tower-27.messagelabs.com with SMTP;
	6 Aug 2012 14:32:39 -0000
X-TM-IMSS-Message-ID: <7b0750f50002b224@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov ([63.239.67.9])
	with ESMTP (TREND IMSS SMTP Service 7.1) id 7b0750f50002b224 ;
	Mon, 6 Aug 2012 10:32:49 -0400
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id q76EWZ9R011112; 
	Mon, 6 Aug 2012 10:32:38 -0400
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: xen-devel@lists.xen.org
Date: Mon,  6 Aug 2012 10:32:19 -0400
Message-Id: <1344263550-3941-8-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.2
In-Reply-To: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
References: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Subject: [Xen-devel] [PATCH 07/18] arch/x86: add missing XSM checks to
	XENPF_ commands
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 tools/flask/policy/policy/modules/xen/xen.te | 4 ++--
 xen/arch/x86/platform_hypercall.c            | 8 ++++++++
 2 files changed, 10 insertions(+), 2 deletions(-)

diff --git a/tools/flask/policy/policy/modules/xen/xen.te b/tools/flask/policy/policy/modules/xen/xen.te
index 40c4c0a..1162153 100644
--- a/tools/flask/policy/policy/modules/xen/xen.te
+++ b/tools/flask/policy/policy/modules/xen/xen.te
@@ -53,8 +53,8 @@ type device_t, resource_type;
 #
 ################################################################################
 allow dom0_t xen_t:xen { kexec readapic writeapic mtrr_read mtrr_add mtrr_del
-	scheduler physinfo heap quirk readconsole writeconsole settime
-	microcode cpupool_op sched_op };
+	scheduler physinfo heap quirk readconsole writeconsole settime getcpuinfo
+	microcode cpupool_op sched_op pm_op };
 allow dom0_t xen_t:mmu { memorymap };
 allow dom0_t security_t:security { check_context compute_av compute_create
 	compute_member load_policy compute_relabel compute_user setenforce
diff --git a/xen/arch/x86/platform_hypercall.c b/xen/arch/x86/platform_hypercall.c
index 88880b0..c049db7 100644
--- a/xen/arch/x86/platform_hypercall.c
+++ b/xen/arch/x86/platform_hypercall.c
@@ -501,6 +501,10 @@ ret_t do_platform_op(XEN_GUEST_HANDLE(xen_platform_op_t) u_xenpf_op)
     {
         struct xenpf_pcpu_version *ver = &op->u.pcpu_version;
 
+        ret = xsm_getcpuinfo();
+        if ( ret )
+            break;
+
         if ( !get_cpu_maps() )
         {
             ret = -EBUSY;
@@ -618,6 +622,10 @@ ret_t do_platform_op(XEN_GUEST_HANDLE(xen_platform_op_t) u_xenpf_op)
     {
         uint32_t idle_nums;
 
+        ret = xsm_pm_op();
+        if ( ret )
+            break;
+
         switch(op->u.core_parking.type)
         {
         case XEN_CORE_PARKING_SET:
-- 
1.7.11.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 14:32:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:32:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOMR-0002lA-NY; Mon, 06 Aug 2012 14:32:47 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1SyOMP-0002i4-VS
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 14:32:46 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-3.tower-27.messagelabs.com!1344263559!11102992!1
X-Originating-IP: [63.239.67.9]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10780 invoked from network); 6 Aug 2012 14:32:39 -0000
Received: from emvm-gh1-uea08.nsa.gov (HELO nsa.gov) (63.239.67.9)
	by server-3.tower-27.messagelabs.com with SMTP;
	6 Aug 2012 14:32:39 -0000
X-TM-IMSS-Message-ID: <7b0751a00002b229@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov ([63.239.67.9])
	with ESMTP (TREND IMSS SMTP Service 7.1) id 7b0751a00002b229 ;
	Mon, 6 Aug 2012 10:32:49 -0400
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id q76EWZ9S011112; 
	Mon, 6 Aug 2012 10:32:38 -0400
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: xen-devel@lists.xen.org
Date: Mon,  6 Aug 2012 10:32:20 -0400
Message-Id: <1344263550-3941-9-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.2
In-Reply-To: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
References: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Subject: [Xen-devel] [PATCH 08/18] xen: Add DOMID_SELF support to
	rcu_lock_domain_by_id
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Callers that want to prevent use of DOMID_SELF already need to ensure
the calling domain does not pass its own domain ID. This removes the
need for callers to translate DOMID_SELF manually, which many of them
already do.

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 xen/common/domain.c        | 3 +++
 xen/common/event_channel.c | 3 ---
 xen/common/grant_table.c   | 2 +-
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/xen/common/domain.c b/xen/common/domain.c
index 4c5d241..dbbc414 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -400,6 +400,9 @@ struct domain *rcu_lock_domain_by_id(domid_t dom)
 {
     struct domain *d = NULL;
 
+    if ( dom == DOMID_SELF )
+        return rcu_lock_current_domain();
+
     rcu_read_lock(&domlist_read_lock);
 
     for ( d = rcu_dereference(domain_hash[DOMAIN_HASH(dom)]);
diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index 53777f8..988d3ce 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -201,9 +201,6 @@ static long evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind)
     domid_t        rdom = bind->remote_dom;
     long           rc;
 
-    if ( rdom == DOMID_SELF )
-        rdom = current->domain->domain_id;
-
     if ( (rd = rcu_lock_domain_by_id(rdom)) == NULL )
         return -ESRCH;
 
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 9961e83..fbea67c 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -715,7 +715,7 @@ __gnttab_map_grant_ref(
     TRACE_1D(TRC_MEM_PAGE_GRANT_MAP, op->dom);
 
     mt = &maptrack_entry(lgt, handle);
-    mt->domid = op->dom;
+    mt->domid = rd->domain_id;
     mt->ref   = op->ref;
     mt->flags = op->flags;
 
-- 
1.7.11.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 14:32:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:32:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOMW-0002sm-TD; Mon, 06 Aug 2012 14:32:52 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1SyOMU-0002jf-Gw
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 14:32:51 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-13.tower-27.messagelabs.com!1344263560!12457313!1
X-Originating-IP: [63.239.67.9]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20897 invoked from network); 6 Aug 2012 14:32:40 -0000
Received: from emvm-gh1-uea08.nsa.gov (HELO nsa.gov) (63.239.67.9)
	by server-13.tower-27.messagelabs.com with SMTP;
	6 Aug 2012 14:32:40 -0000
X-TM-IMSS-Message-ID: <7b0755870002b22c@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov ([63.239.67.9])
	with ESMTP (TREND IMSS SMTP Service 7.1) id 7b0755870002b22c ;
	Mon, 6 Aug 2012 10:32:50 -0400
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id q76EWZ9V011112; 
	Mon, 6 Aug 2012 10:32:39 -0400
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: xen-devel@lists.xen.org
Date: Mon,  6 Aug 2012 10:32:23 -0400
Message-Id: <1344263550-3941-12-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.2
In-Reply-To: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
References: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Subject: [Xen-devel] [PATCH 11/18] xen: use XSM instead of IS_PRIV where
	duplicated
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The Xen hypervisor has two basic access control mechanisms: the IS_PRIV
macro and the xsm_* hook functions. Most privileged operations currently
require both checks to succeed, and the two checks often sit at
different locations in the code. This patch eliminates the explicit and
implicit IS_PRIV checks that are duplicated by XSM hooks. When
XSM_ENABLE is not defined, or when the dummy XSM module is used, this
patch should not change any functionality.

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 xen/arch/x86/acpi/power.c         |  2 +-
 xen/arch/x86/cpu/mcheck/mce.c     |  3 --
 xen/arch/x86/domctl.c             | 25 ++++++++--
 xen/arch/x86/hvm/hvm.c            | 96 +++++++++++++++++++--------------------
 xen/arch/x86/irq.c                |  3 +-
 xen/arch/x86/mm.c                 | 25 ++++------
 xen/arch/x86/physdev.c            | 54 ----------------------
 xen/arch/x86/platform_hypercall.c |  3 --
 xen/common/domctl.c               | 33 ++------------
 xen/common/event_channel.c        | 18 ++++----
 xen/common/grant_table.c          | 70 ++++++++--------------------
 xen/common/kexec.c                |  3 --
 xen/common/memory.c               | 29 ++++--------
 xen/common/schedule.c             |  6 ---
 xen/common/sysctl.c               |  3 --
 15 files changed, 119 insertions(+), 254 deletions(-)

diff --git a/xen/arch/x86/acpi/power.c b/xen/arch/x86/acpi/power.c
index 9e1f989..c7b37ef 100644
--- a/xen/arch/x86/acpi/power.c
+++ b/xen/arch/x86/acpi/power.c
@@ -238,7 +238,7 @@ static long enter_state_helper(void *data)
  */
 int acpi_enter_sleep(struct xenpf_enter_acpi_sleep *sleep)
 {
-    if ( !IS_PRIV(current->domain) || !acpi_sinfo.pm1a_cnt_blk.address )
+    if ( !acpi_sinfo.pm1a_cnt_blk.address )
         return -EPERM;
 
     /* Sanity check */
diff --git a/xen/arch/x86/cpu/mcheck/mce.c b/xen/arch/x86/cpu/mcheck/mce.c
index ed76131..4176bae 100644
--- a/xen/arch/x86/cpu/mcheck/mce.c
+++ b/xen/arch/x86/cpu/mcheck/mce.c
@@ -1381,9 +1381,6 @@ long do_mca(XEN_GUEST_HANDLE(xen_mc_t) u_xen_mc)
     struct xen_mc_msrinject *mc_msrinject;
     struct xen_mc_mceinject *mc_mceinject;
 
-    if (!IS_PRIV(v->domain) )
-        return x86_mcerr(NULL, -EPERM);
-
     ret = xsm_do_mca();
     if ( ret )
         return x86_mcerr(NULL, ret);
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 3cb4d97..bcb5b2d 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -54,6 +54,26 @@ long arch_do_domctl(
 
     switch ( domctl->cmd )
     {
+    /* TODO: the following do not have XSM hooks yet */
+    case XEN_DOMCTL_set_cpuid:
+    case XEN_DOMCTL_suppress_spurious_page_faults:
+    case XEN_DOMCTL_debug_op:
+    case XEN_DOMCTL_gettscinfo:
+    case XEN_DOMCTL_settscinfo:
+    case XEN_DOMCTL_audit_p2m:
+    case XEN_DOMCTL_gdbsx_guestmemio:
+    case XEN_DOMCTL_gdbsx_pausevcpu:
+    case XEN_DOMCTL_gdbsx_unpausevcpu:
+    case XEN_DOMCTL_gdbsx_domstatus:
+    /* getpageframeinfo[23] will leak XEN_DOMCTL_PFINFO_XTAB on target GFNs */
+    case XEN_DOMCTL_getpageframeinfo2:
+    case XEN_DOMCTL_getpageframeinfo3:
+        if ( !IS_PRIV(current->domain) )
+            return -EPERM;
+    }
+
+    switch ( domctl->cmd )
+    {
 
     case XEN_DOMCTL_shadow_op:
     {
@@ -795,11 +815,6 @@ long arch_do_domctl(
             break;
         bind = &(domctl->u.bind_pt_irq);
 
-        ret = -EPERM;
-        if ( !IS_PRIV(current->domain) &&
-             !irq_access_permitted(current->domain, bind->machine_irq) )
-            goto unbind_out;
-
         ret = xsm_unbind_pt_irq(d, bind);
         if ( ret )
             goto unbind_out;
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 22c136b..bec9e57 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -3366,12 +3366,12 @@ static int hvmop_set_pci_intx_level(
     if ( (op.domain > 0) || (op.bus > 0) || (op.device > 31) || (op.intx > 3) )
         return -EINVAL;
 
-    rc = rcu_lock_remote_target_domain_by_id(op.domid, &d);
-    if ( rc != 0 )
-        return rc;
+    d = rcu_lock_domain_by_id(op.domid);
+    if ( d == NULL )
+        return -ESRCH;
 
     rc = -EINVAL;
-    if ( !is_hvm_domain(d) )
+    if ( d == current->domain || !is_hvm_domain(d) )
         goto out;
 
     rc = xsm_hvm_set_pci_intx_level(d);
@@ -3531,12 +3531,12 @@ static int hvmop_set_isa_irq_level(
     if ( op.isa_irq > 15 )
         return -EINVAL;
 
-    rc = rcu_lock_remote_target_domain_by_id(op.domid, &d);
-    if ( rc != 0 )
-        return rc;
+    d = rcu_lock_domain_by_id(op.domid);
+    if ( d == NULL )
+        return -ESRCH;
 
     rc = -EINVAL;
-    if ( !is_hvm_domain(d) )
+    if ( d == current->domain || !is_hvm_domain(d) )
         goto out;
 
     rc = xsm_hvm_set_isa_irq_level(d);
@@ -3575,12 +3575,12 @@ static int hvmop_set_pci_link_route(
     if ( (op.link > 3) || (op.isa_irq > 15) )
         return -EINVAL;
 
-    rc = rcu_lock_remote_target_domain_by_id(op.domid, &d);
-    if ( rc != 0 )
-        return rc;
+    d = rcu_lock_domain_by_id(op.domid);
+    if ( d == NULL )
+        return -ESRCH;
 
     rc = -EINVAL;
-    if ( !is_hvm_domain(d) )
+    if ( d == current->domain || !is_hvm_domain(d) )
         goto out;
 
     rc = xsm_hvm_set_pci_link_route(d);
@@ -3605,9 +3605,9 @@ static int hvmop_inject_msi(
     if ( copy_from_guest(&op, uop, 1) )
         return -EFAULT;
 
-    rc = rcu_lock_remote_target_domain_by_id(op.domid, &d);
-    if ( rc != 0 )
-        return rc;
+    d = rcu_lock_domain_by_id(op.domid);
+    if ( d == NULL )
+        return -ESRCH;
 
     rc = -EINVAL;
     if ( !is_hvm_domain(d) )
@@ -3702,9 +3702,9 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE(void) arg)
         if ( a.index >= HVM_NR_PARAMS )
             return -EINVAL;
 
-        rc = rcu_lock_target_domain_by_id(a.domid, &d);
-        if ( rc != 0 )
-            return rc;
+        d = rcu_lock_domain_by_id(a.domid);
+        if ( d == NULL )
+            return -ESRCH;
 
         rc = -EINVAL;
         if ( !is_hvm_domain(d) )
@@ -3948,12 +3948,12 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE(void) arg)
         if ( copy_from_guest(&a, arg, 1) )
             return -EFAULT;
 
-        rc = rcu_lock_remote_target_domain_by_id(a.domid, &d);
-        if ( rc != 0 )
-            return rc;
+        d = rcu_lock_domain_by_id(a.domid);
+        if ( d == NULL )
+            return -ESRCH;
 
         rc = -EINVAL;
-        if ( !is_hvm_domain(d) )
+        if ( d == current->domain || !is_hvm_domain(d) )
             goto param_fail2;
 
         rc = xsm_hvm_param(d, op);
@@ -3987,12 +3987,12 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE(void) arg)
         if ( copy_from_guest(&a, arg, 1) )
             return -EFAULT;
 
-        rc = rcu_lock_remote_target_domain_by_id(a.domid, &d);
-        if ( rc != 0 )
-            return rc;
+        d = rcu_lock_domain_by_id(a.domid);
+        if ( d == NULL )
+            return -ESRCH;
 
         rc = -EINVAL;
-        if ( !is_hvm_domain(d) )
+        if ( d == current->domain || !is_hvm_domain(d) )
             goto param_fail3;
 
         rc = xsm_hvm_param(d, op);
@@ -4037,9 +4037,9 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE(void) arg)
         if ( copy_from_guest(&a, arg, 1) )
             return -EFAULT;
 
-        rc = rcu_lock_target_domain_by_id(a.domid, &d);
-        if ( rc != 0 )
-            return rc;
+        d = rcu_lock_domain_by_id(a.domid);
+        if ( d == NULL )
+            return -ESRCH;
 
         rc = xsm_hvm_param(d, op);
         if ( rc )
@@ -4084,12 +4084,12 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE(void) arg)
         if ( copy_from_guest(&a, arg, 1) )
             return -EFAULT;
 
-        rc = rcu_lock_remote_target_domain_by_id(a.domid, &d);
-        if ( rc != 0 )
-            return rc;
+        d = rcu_lock_domain_by_id(a.domid);
+        if ( d == NULL )
+            return -ESRCH;
 
         rc = -EINVAL;
-        if ( !is_hvm_domain(d) )
+        if ( d == current->domain || !is_hvm_domain(d) )
             goto param_fail4;
 
         rc = xsm_hvm_param(d, op);
@@ -4163,12 +4163,12 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE(void) arg)
         if ( copy_from_guest(&a, arg, 1) )
             return -EFAULT;
 
-        rc = rcu_lock_remote_target_domain_by_id(a.domid, &d);
-        if ( rc != 0 )
-            return rc;
+        d = rcu_lock_domain_by_id(a.domid);
+        if ( d == NULL )
+            return -ESRCH;
 
         rc = -EINVAL;
-        if ( !is_hvm_domain(d) )
+        if ( d == current->domain || !is_hvm_domain(d) )
             goto param_fail5;
 
         rc = xsm_hvm_param(d, op);
@@ -4198,12 +4198,12 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE(void) arg)
         if ( copy_from_guest(&a, arg, 1) )
             return -EFAULT;
 
-        rc = rcu_lock_remote_target_domain_by_id(a.domid, &d);
-        if ( rc != 0 )
-            return rc;
+        d = rcu_lock_domain_by_id(a.domid);
+        if ( d == NULL )
+            return -ESRCH;
 
         rc = -EINVAL;
-        if ( !is_hvm_domain(d) )
+        if ( d == current->domain || !is_hvm_domain(d) )
             goto param_fail6;
 
         rc = xsm_hvm_param(d, op);
@@ -4234,9 +4234,9 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE(void) arg)
         if ( copy_from_guest(&a, arg, 1) )
             return -EFAULT;
 
-        rc = rcu_lock_target_domain_by_id(a.domid, &d);
-        if ( rc != 0 )
-            return rc;
+        d = rcu_lock_domain_by_id(a.domid);
+        if ( d == NULL )
+            return -ESRCH;
 
         rc = -EINVAL;
         if ( !is_hvm_domain(d) || !paging_mode_shadow(d) )
@@ -4288,12 +4288,12 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE(void) arg)
         if ( copy_from_guest(&tr, arg, 1 ) )
             return -EFAULT;
 
-        rc = rcu_lock_remote_target_domain_by_id(tr.domid, &d);
-        if ( rc != 0 )
-            return rc;
+        d = rcu_lock_domain_by_id(tr.domid);
+        if ( d == NULL )
+            return -ESRCH;
 
         rc = -EINVAL;
-        if ( !is_hvm_domain(d) )
+        if ( d == current->domain || !is_hvm_domain(d) )
             goto param_fail8;
 
         rc = xsm_hvm_param(d, op);
diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index 78a02e3..33ce710 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -1853,8 +1853,7 @@ int map_domain_pirq(
     ASSERT(spin_is_locked(&d->event_lock));
 
     if ( !IS_PRIV(current->domain) &&
-         !(IS_PRIV_FOR(current->domain, d) &&
-           irq_access_permitted(current->domain, pirq)))
+         !irq_access_permitted(current->domain, pirq))
         return -EPERM;
 
     if ( pirq < 0 || pirq >= d->nr_pirqs || irq < 0 || irq >= nr_irqs )
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 9338575..1b352df 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4673,9 +4673,9 @@ long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
         if ( copy_from_guest(&xatp, arg, 1) )
             return -EFAULT;
 
-        rc = rcu_lock_target_domain_by_id(xatp.domid, &d);
-        if ( rc != 0 )
-            return rc;
+        d = rcu_lock_domain_by_id(xatp.domid);
+        if ( d == NULL )
+            return -ESRCH;
 
         if ( xsm_add_to_physmap(current->domain, d) )
         {
@@ -4712,9 +4712,9 @@ long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
         if ( fmap.map.nr_entries > E820MAX )
             return -EINVAL;
 
-        rc = rcu_lock_target_domain_by_id(fmap.domid, &d);
-        if ( rc != 0 )
-            return rc;
+        d = rcu_lock_domain_by_id(fmap.domid);
+        if ( d == NULL )
+            return -ESRCH;
 
         rc = xsm_domain_memory_map(d);
         if ( rc )
@@ -4790,9 +4790,6 @@ long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
         XEN_GUEST_HANDLE(e820entry_t) buffer;
         unsigned int i;
 
-        if ( !IS_PRIV(current->domain) )
-            return -EINVAL;
-
         rc = xsm_machine_memory_map();
         if ( rc )
             return rc;
@@ -4868,16 +4865,12 @@ long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
         struct domain *d;
         struct p2m_domain *p2m;
 
-        /* Support DOMID_SELF? */
-        if ( !IS_PRIV(current->domain) )
-            return -EPERM;
-
         if ( copy_from_guest(&target, arg, 1) )
             return -EFAULT;
 
-        rc = rcu_lock_target_domain_by_id(target.domid, &d);
-        if ( rc != 0 )
-            return rc;
+        d = rcu_lock_domain_by_id(target.domid);
+        if ( d == NULL )
+            return -ESRCH;
 
         if ( op == XENMEM_set_pod_target )
             rc = xsm_set_pod_target(d);
diff --git a/xen/arch/x86/physdev.c b/xen/arch/x86/physdev.c
index e434ff4..0841a7a 100644
--- a/xen/arch/x86/physdev.c
+++ b/xen/arch/x86/physdev.c
@@ -106,12 +106,6 @@ int physdev_map_pirq(domid_t domid, int type, int *index, int *pirq_p,
         goto free_domain;
     }
 
-    if ( !IS_PRIV_FOR(current->domain, d) )
-    {
-        ret = -EPERM;
-        goto free_domain;
-    }
-
     /* Verify or get irq. */
     switch ( type )
     {
@@ -235,10 +229,6 @@ int physdev_unmap_pirq(domid_t domid, int pirq)
             goto free_domain;
     }
 
-    ret = -EPERM;
-    if ( !IS_PRIV_FOR(current->domain, d) )
-        goto free_domain;
-
     ret = xsm_unmap_domain_pirq(d, domain_pirq_to_irq(d, pirq));
     if ( ret )
         goto free_domain;
@@ -430,9 +420,6 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE(void) arg)
         ret = -EFAULT;
         if ( copy_from_guest(&apic, arg, 1) != 0 )
             break;
-        ret = -EPERM;
-        if ( !IS_PRIV(v->domain) )
-            break;
         ret = xsm_apic(v->domain, cmd);
         if ( ret )
             break;
@@ -447,9 +434,6 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE(void) arg)
         ret = -EFAULT;
         if ( copy_from_guest(&apic, arg, 1) != 0 )
             break;
-        ret = -EPERM;
-        if ( !IS_PRIV(v->domain) )
-            break;
         ret = xsm_apic(v->domain, cmd);
         if ( ret )
             break;
@@ -464,10 +448,6 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE(void) arg)
         if ( copy_from_guest(&irq_op, arg, 1) != 0 )
             break;
 
-        ret = -EPERM;
-        if ( !IS_PRIV(v->domain) )
-            break;
-
         /* Vector is only used by hypervisor, and dom0 shouldn't
            touch it in its world, return irq_op.irq as the vecotr,
            and make this hypercall dummy, and also defer the vector 
@@ -514,9 +494,6 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE(void) arg)
 
     case PHYSDEVOP_manage_pci_add: {
         struct physdev_manage_pci manage_pci;
-        ret = -EPERM;
-        if ( !IS_PRIV(v->domain) )
-            break;
         ret = -EFAULT;
         if ( copy_from_guest(&manage_pci, arg, 1) != 0 )
             break;
@@ -527,9 +504,6 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE(void) arg)
 
     case PHYSDEVOP_manage_pci_remove: {
         struct physdev_manage_pci manage_pci;
-        ret = -EPERM;
-        if ( !IS_PRIV(v->domain) )
-            break;
         ret = -EFAULT;
         if ( copy_from_guest(&manage_pci, arg, 1) != 0 )
             break;
@@ -542,10 +516,6 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE(void) arg)
         struct physdev_manage_pci_ext manage_pci_ext;
         struct pci_dev_info pdev_info;
 
-        ret = -EPERM;
-        if ( !IS_PRIV(current->domain) )
-            break;
-
         ret = -EFAULT;
         if ( copy_from_guest(&manage_pci_ext, arg, 1) != 0 )
             break;
@@ -568,10 +538,6 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE(void) arg)
         struct physdev_pci_device_add add;
         struct pci_dev_info pdev_info;
 
-        ret = -EPERM;
-        if ( !IS_PRIV(current->domain) )
-            break;
-
         ret = -EFAULT;
         if ( copy_from_guest(&add, arg, 1) != 0 )
             break;
@@ -592,10 +558,6 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE(void) arg)
     case PHYSDEVOP_pci_device_remove: {
         struct physdev_pci_device dev;
 
-        ret = -EPERM;
-        if ( !IS_PRIV(v->domain) )
-            break;
-
         ret = -EFAULT;
         if ( copy_from_guest(&dev, arg, 1) != 0 )
             break;
@@ -608,10 +570,6 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE(void) arg)
     case PHYSDEVOP_pci_mmcfg_reserved: {
         struct physdev_pci_mmcfg_reserved info;
 
-        ret = -EPERM;
-        if ( !IS_PRIV(current->domain) )
-            break;
-
         ret = xsm_resource_setup_misc();
         if ( ret )
             break;
@@ -630,10 +588,6 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE(void) arg)
         struct physdev_restore_msi restore_msi;
         struct pci_dev *pdev;
 
-        ret = -EPERM;
-        if ( !IS_PRIV(v->domain) )
-            break;
-
         ret = -EFAULT;
         if ( copy_from_guest(&restore_msi, arg, 1) != 0 )
             break;
@@ -649,10 +603,6 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE(void) arg)
         struct physdev_pci_device dev;
         struct pci_dev *pdev;
 
-        ret = -EPERM;
-        if ( !IS_PRIV(v->domain) )
-            break;
-
         ret = -EFAULT;
         if ( copy_from_guest(&dev, arg, 1) != 0 )
             break;
@@ -667,10 +617,6 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE(void) arg)
     case PHYSDEVOP_setup_gsi: {
         struct physdev_setup_gsi setup_gsi;
 
-        ret = -EPERM;
-        if ( !IS_PRIV(v->domain) )
-            break;
-
         ret = -EFAULT;
         if ( copy_from_guest(&setup_gsi, arg, 1) != 0 )
             break;
diff --git a/xen/arch/x86/platform_hypercall.c b/xen/arch/x86/platform_hypercall.c
index c049db7..f3304a2 100644
--- a/xen/arch/x86/platform_hypercall.c
+++ b/xen/arch/x86/platform_hypercall.c
@@ -65,9 +65,6 @@ ret_t do_platform_op(XEN_GUEST_HANDLE(xen_platform_op_t) u_xenpf_op)
     ret_t ret = 0;
     struct xen_platform_op curop, *op = &curop;
 
-    if ( !IS_PRIV(current->domain) )
-        return -EPERM;
-
     if ( copy_from_guest(op, u_xenpf_op, 1) )
         return -EFAULT;
 
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index 7ca6b08..db152b1 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -249,33 +249,6 @@ long do_domctl(XEN_GUEST_HANDLE(xen_domctl_t) u_domctl)
     if ( op->interface_version != XEN_DOMCTL_INTERFACE_VERSION )
         return -EACCES;
 
-    switch ( op->cmd )
-    {
-    case XEN_DOMCTL_ioport_mapping:
-    case XEN_DOMCTL_memory_mapping:
-    case XEN_DOMCTL_bind_pt_irq:
-    case XEN_DOMCTL_unbind_pt_irq: {
-        struct domain *d;
-        bool_t is_priv = IS_PRIV(current->domain);
-        if ( !is_priv && ((d = rcu_lock_domain_by_id(op->domain)) != NULL) )
-        {
-            is_priv = IS_PRIV_FOR(current->domain, d);
-            rcu_unlock_domain(d);
-        }
-        if ( !is_priv )
-            return -EPERM;
-        break;
-    }
-#ifdef XSM_ENABLE
-    case XEN_DOMCTL_getdomaininfo:
-        break;
-#endif
-    default:
-        if ( !IS_PRIV(current->domain) )
-            return -EPERM;
-        break;
-    }
-
     if ( !domctl_lock_acquire() )
         return hypercall_create_continuation(
             __HYPERVISOR_domctl, "h", u_domctl);
@@ -889,10 +862,10 @@ long do_domctl(XEN_GUEST_HANDLE(xen_domctl_t) u_domctl)
         if ( d == NULL )
             break;
 
-        if ( pirq >= d->nr_pirqs )
-            ret = -EINVAL;
-        else if ( xsm_irq_permission(d, pirq, allow) )
+        if ( xsm_irq_permission(d, pirq, allow) )
             ret = -EPERM;
+        else if ( pirq >= d->nr_pirqs )
+            ret = -EINVAL;
         else if ( allow )
             ret = irq_permit_access(d, pirq);
         else
diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index 988d3ce..625748b 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -165,9 +165,9 @@ static long evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc)
     domid_t        dom = alloc->dom;
     long           rc;
 
-    rc = rcu_lock_target_domain_by_id(dom, &d);
-    if ( rc )
-        return rc;
+    d = rcu_lock_domain_by_id(dom);
+    if ( d == NULL )
+        return -ESRCH;
 
     spin_lock(&d->event_lock);
 
@@ -795,9 +795,9 @@ static long evtchn_status(evtchn_status_t *status)
     struct evtchn   *chn;
     long             rc = 0;
 
-    rc = rcu_lock_target_domain_by_id(dom, &d);
-    if ( rc )
-        return rc;
+    d = rcu_lock_domain_by_id(dom);
+    if ( d == NULL )
+        return -ESRCH;
 
     spin_lock(&d->event_lock);
 
@@ -947,9 +947,9 @@ static long evtchn_reset(evtchn_reset_t *r)
     struct domain *d;
     int i, rc;
 
-    rc = rcu_lock_target_domain_by_id(dom, &d);
-    if ( rc )
-        return rc;
+    d = rcu_lock_domain_by_id(dom);
+    if ( d == NULL )
+        return -ESRCH;
 
     rc = xsm_evtchn_reset(current->domain, d);
     if ( rc )
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index fbea67c..5760937 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -1261,7 +1261,6 @@ gnttab_setup_table(
     struct grant_table *gt;
     int            i;
     unsigned long  gmfn;
-    domid_t        dom;
 
     if ( count != 1 )
         return -EINVAL;
@@ -1281,25 +1280,12 @@ gnttab_setup_table(
         goto out1;
     }
 
-    dom = op.dom;
-    if ( dom == DOMID_SELF )
+    d = rcu_lock_domain_by_id(op.dom);
+    if ( d == NULL )
     {
-        d = rcu_lock_current_domain();
-    }
-    else
-    {
-        if ( unlikely((d = rcu_lock_domain_by_id(dom)) == NULL) )
-        {
-            gdprintk(XENLOG_INFO, "Bad domid %d.\n", dom);
-            op.status = GNTST_bad_domain;
-            goto out1;
-        }
-
-        if ( unlikely(!IS_PRIV_FOR(current->domain, d)) )
-        {
-            op.status = GNTST_permission_denied;
-            goto out2;
-        }
+        gdprintk(XENLOG_INFO, "Bad domid %d.\n", op.dom);
+        op.status = GNTST_bad_domain;
+        goto out2;
     }
 
     if ( xsm_grant_setup(current->domain, d) )
@@ -1352,7 +1338,6 @@ gnttab_query_size(
 {
     struct gnttab_query_size op;
     struct domain *d;
-    domid_t        dom;
     int rc;
 
     if ( count != 1 )
@@ -1364,25 +1349,12 @@ gnttab_query_size(
         return -EFAULT;
     }
 
-    dom = op.dom;
-    if ( dom == DOMID_SELF )
-    {
-        d = rcu_lock_current_domain();
-    }
-    else
+    d = rcu_lock_domain_by_id(op.dom);
+    if ( d == NULL )
     {
-        if ( unlikely((d = rcu_lock_domain_by_id(dom)) == NULL) )
-        {
-            gdprintk(XENLOG_INFO, "Bad domid %d.\n", dom);
-            op.status = GNTST_bad_domain;
-            goto query_out;
-        }
-
-        if ( unlikely(!IS_PRIV_FOR(current->domain, d)) )
-        {
-            op.status = GNTST_permission_denied;
-            goto query_out_unlock;
-        }
+        gdprintk(XENLOG_INFO, "Bad domid %d.\n", op.dom);
+        op.status = GNTST_bad_domain;
+        goto query_out;
     }
 
     rc = xsm_grant_query_size(current->domain, d);
@@ -2240,15 +2212,10 @@ gnttab_get_status_frames(XEN_GUEST_HANDLE(gnttab_get_status_frames_t) uop,
         return -EFAULT;
     }
 
-    rc = rcu_lock_target_domain_by_id(op.dom, &d);
-    if ( rc < 0 )
+    d = rcu_lock_domain_by_id(op.dom);
+    if ( d == NULL )
     {
-        if ( rc == -ESRCH )
-            op.status = GNTST_bad_domain;
-        else if ( rc == -EPERM )
-            op.status = GNTST_permission_denied;
-        else
-            op.status = GNTST_general_error;
+        op.status = GNTST_bad_domain;
         goto out1;
     }
     rc = xsm_grant_setup(current->domain, d);
@@ -2298,14 +2265,15 @@ gnttab_get_version(XEN_GUEST_HANDLE(gnttab_get_version_t uop))
     if ( copy_from_guest(&op, uop, 1) )
         return -EFAULT;
 
-    rc = rcu_lock_target_domain_by_id(op.dom, &d);
-    if ( rc < 0 )
-        return rc;
+    d = rcu_lock_domain_by_id(op.dom);
+    if ( d == NULL )
+        return -ESRCH;
 
-    if ( xsm_grant_query_size(current->domain, d) )
+    rc = xsm_grant_query_size(current->domain, d);
+    if ( rc )
     {
         rcu_unlock_domain(d);
-        return -EPERM;
+        return rc;
     }
 
     op.version = d->grant_table->gt_version;
diff --git a/xen/common/kexec.c b/xen/common/kexec.c
index 09a5624..22bca20 100644
--- a/xen/common/kexec.c
+++ b/xen/common/kexec.c
@@ -851,9 +851,6 @@ int do_kexec_op_internal(unsigned long op, XEN_GUEST_HANDLE(void) uarg,
     unsigned long flags;
     int ret = -EINVAL;
 
-    if ( !IS_PRIV(current->domain) )
-        return -EPERM;
-
     ret = xsm_kexec();
     if ( ret )
         return ret;
diff --git a/xen/common/memory.c b/xen/common/memory.c
index 5d64cb6..77969d9 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -583,20 +583,9 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE(void) arg)
              && (reservation.mem_flags & XENMEMF_populate_on_demand) )
             args.memflags |= MEMF_populate_on_demand;
 
-        if ( likely(reservation.domid == DOMID_SELF) )
-        {
-            d = rcu_lock_current_domain();
-        }
-        else
-        {
-            if ( (d = rcu_lock_domain_by_id(reservation.domid)) == NULL )
-                return start_extent;
-            if ( !IS_PRIV_FOR(current->domain, d) )
-            {
-                rcu_unlock_domain(d);
-                return start_extent;
-            }
-        }
+        d = rcu_lock_domain_by_id(reservation.domid);
+        if ( d == NULL )
+            return start_extent;
         args.domain = d;
 
         rc = xsm_memory_adjust_reservation(current->domain, d);
@@ -644,9 +633,9 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE(void) arg)
         if ( copy_from_guest(&domid, arg, 1) )
             return -EFAULT;
 
-        rc = rcu_lock_target_domain_by_id(domid, &d);
-        if ( rc )
-            return rc;
+        d = rcu_lock_domain_by_id(domid);
+        if ( d == NULL )
+            return -ESRCH;
 
         rc = xsm_memory_stat_reservation(current->domain, d);
         if ( rc )
@@ -682,9 +671,9 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE(void) arg)
         if ( copy_from_guest(&xrfp, arg, 1) )
             return -EFAULT;
 
-        rc = rcu_lock_target_domain_by_id(xrfp.domid, &d);
-        if ( rc != 0 )
-            return rc;
+        d = rcu_lock_domain_by_id(xrfp.domid);
+        if ( d == NULL )
+            return -ESRCH;
 
         if ( xsm_remove_from_physmap(current->domain, d) )
         {
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index 0854f55..e38e6e2 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -919,12 +919,6 @@ ret_t do_sched_op(int cmd, XEN_GUEST_HANDLE(void) arg)
         if ( d == NULL )
             break;
 
-        if ( !IS_PRIV_FOR(current->domain, d) )
-        {
-            rcu_unlock_domain(d);
-            return -EPERM;
-        }
-
         ret = xsm_schedop_shutdown(current->domain, d);
         if ( ret )
         {
diff --git a/xen/common/sysctl.c b/xen/common/sysctl.c
index ea68278..2cea0c3 100644
--- a/xen/common/sysctl.c
+++ b/xen/common/sysctl.c
@@ -33,9 +33,6 @@ long do_sysctl(XEN_GUEST_HANDLE(xen_sysctl_t) u_sysctl)
     struct xen_sysctl curop, *op = &curop;
     static DEFINE_SPINLOCK(sysctl_lock);
 
-    if ( !IS_PRIV(current->domain) )
-        return -EPERM;
-
     if ( copy_from_guest(op, u_sysctl, 1) )
         return -EFAULT;
 
-- 
1.7.11.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 14:32:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:32:54 +0000
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: xen-devel@lists.xen.org
Date: Mon,  6 Aug 2012 10:32:23 -0400
Message-Id: <1344263550-3941-12-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.2
In-Reply-To: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
References: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Subject: [Xen-devel] [PATCH 11/18] xen: use XSM instead of IS_PRIV where
	duplicated
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The Xen hypervisor has two basic access-control primitives: the IS_PRIV
macro and the xsm_* hook functions. Most privileged operations currently
require both checks to succeed, and the two checks often sit at
different locations in the code. This patch eliminates the explicit and
implicit IS_PRIV checks that duplicate an existing XSM hook. When
XSM_ENABLE is not defined, or when the dummy XSM module is used, this
patch should not change any functionality.
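
The consolidation has a simple before/after shape. The sketch below is a
minimal standalone model, not Xen code: caller_is_priv, xsm_do_op() and the
do_op_*() handlers are hypothetical stand-ins for IS_PRIV, an xsm_* hook,
and a hypercall handler, and it assumes (as the dummy XSM module arranges)
that the hook itself performs the privilege check.

```c
#include <assert.h>
#include <errno.h>

/* Hypothetical stand-in for IS_PRIV(current->domain). */
static int caller_is_priv;

static int xsm_do_op(void)
{
    /* The dummy module's hook performs the privilege check itself,
     * so a separate IS_PRIV test in the caller is redundant. */
    return caller_is_priv ? 0 : -EPERM;
}

/* Before: the same decision is made twice, in two places. */
static int do_op_before(void)
{
    if ( !caller_is_priv )      /* explicit IS_PRIV check */
        return -EPERM;
    return xsm_do_op();         /* hook repeats the decision */
}

/* After: the XSM hook is the single source of truth. */
static int do_op_after(void)
{
    return xsm_do_op();
}
```

For any caller state the two handlers return the same value, which is why
dropping the open-coded check should not change behaviour when the dummy
module is in use.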

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 xen/arch/x86/acpi/power.c         |  2 +-
 xen/arch/x86/cpu/mcheck/mce.c     |  3 --
 xen/arch/x86/domctl.c             | 25 ++++++++--
 xen/arch/x86/hvm/hvm.c            | 96 +++++++++++++++++++--------------------
 xen/arch/x86/irq.c                |  3 +-
 xen/arch/x86/mm.c                 | 25 ++++------
 xen/arch/x86/physdev.c            | 54 ----------------------
 xen/arch/x86/platform_hypercall.c |  3 --
 xen/common/domctl.c               | 33 ++------------
 xen/common/event_channel.c        | 18 ++++----
 xen/common/grant_table.c          | 70 ++++++++--------------------
 xen/common/kexec.c                |  3 --
 xen/common/memory.c               | 29 ++++--------
 xen/common/schedule.c             |  6 ---
 xen/common/sysctl.c               |  3 --
 15 files changed, 119 insertions(+), 254 deletions(-)
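
Many of the hunks below replace rcu_lock_target_domain_by_id() -- which
fused the domain lookup with an IS_PRIV_FOR test and could fail with either
-ESRCH or -EPERM -- by a plain rcu_lock_domain_by_id() followed by the
per-operation XSM hook. A rough standalone model of the two shapes (the
fixed domain table, current_is_priv_for() and xsm_hook() are illustrative
stand-ins; real Xen uses RCU-protected lookups):

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

struct domain { int id; };

/* Illustrative stand-in: a fixed table instead of Xen's RCU-locked list. */
static struct domain table[4] = { {0}, {1}, {2}, {3} };

static int current_is_priv_for(const struct domain *d)
{
    return d->id <= 1;          /* pretend we control domains 0 and 1 */
}

static struct domain *lock_domain_by_id(int id)
{
    return (id >= 0 && id < 4) ? &table[id] : NULL;
}

/* Old shape: lookup and privilege check fused in one helper. */
static int lock_target_domain_by_id(int id, struct domain **d)
{
    *d = lock_domain_by_id(id);
    if ( *d == NULL )
        return -ESRCH;
    if ( !current_is_priv_for(*d) )
        return -EPERM;
    return 0;
}

/* New shape: plain lookup, then the XSM hook makes the access decision. */
static int xsm_hook(const struct domain *d)
{
    return current_is_priv_for(d) ? 0 : -EPERM;
}

static int op_new_shape(int id)
{
    struct domain *d = lock_domain_by_id(id);
    if ( d == NULL )
        return -ESRCH;
    return xsm_hook(d);
}
```

With the dummy policy the hook mirrors IS_PRIV_FOR, so the visible error
codes line up with the old helper; a real XSM policy can instead make a
finer-grained decision at the hook.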

diff --git a/xen/arch/x86/acpi/power.c b/xen/arch/x86/acpi/power.c
index 9e1f989..c7b37ef 100644
--- a/xen/arch/x86/acpi/power.c
+++ b/xen/arch/x86/acpi/power.c
@@ -238,7 +238,7 @@ static long enter_state_helper(void *data)
  */
 int acpi_enter_sleep(struct xenpf_enter_acpi_sleep *sleep)
 {
-    if ( !IS_PRIV(current->domain) || !acpi_sinfo.pm1a_cnt_blk.address )
+    if ( !acpi_sinfo.pm1a_cnt_blk.address )
         return -EPERM;
 
     /* Sanity check */
diff --git a/xen/arch/x86/cpu/mcheck/mce.c b/xen/arch/x86/cpu/mcheck/mce.c
index ed76131..4176bae 100644
--- a/xen/arch/x86/cpu/mcheck/mce.c
+++ b/xen/arch/x86/cpu/mcheck/mce.c
@@ -1381,9 +1381,6 @@ long do_mca(XEN_GUEST_HANDLE(xen_mc_t) u_xen_mc)
     struct xen_mc_msrinject *mc_msrinject;
     struct xen_mc_mceinject *mc_mceinject;
 
-    if (!IS_PRIV(v->domain) )
-        return x86_mcerr(NULL, -EPERM);
-
     ret = xsm_do_mca();
     if ( ret )
         return x86_mcerr(NULL, ret);
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 3cb4d97..bcb5b2d 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -54,6 +54,26 @@ long arch_do_domctl(
 
     switch ( domctl->cmd )
     {
+    /* TODO: the following do not have XSM hooks yet */
+    case XEN_DOMCTL_set_cpuid:
+    case XEN_DOMCTL_suppress_spurious_page_faults:
+    case XEN_DOMCTL_debug_op:
+    case XEN_DOMCTL_gettscinfo:
+    case XEN_DOMCTL_settscinfo:
+    case XEN_DOMCTL_audit_p2m:
+    case XEN_DOMCTL_gdbsx_guestmemio:
+    case XEN_DOMCTL_gdbsx_pausevcpu:
+    case XEN_DOMCTL_gdbsx_unpausevcpu:
+    case XEN_DOMCTL_gdbsx_domstatus:
+    /* getpageframeinfo[23] will leak XEN_DOMCTL_PFINFO_XTAB on target GFNs */
+    case XEN_DOMCTL_getpageframeinfo2:
+    case XEN_DOMCTL_getpageframeinfo3:
+        if ( !IS_PRIV(current->domain) )
+            return -EPERM;
+    }
+
+    switch ( domctl->cmd )
+    {
 
     case XEN_DOMCTL_shadow_op:
     {
@@ -795,11 +815,6 @@ long arch_do_domctl(
             break;
         bind = &(domctl->u.bind_pt_irq);
 
-        ret = -EPERM;
-        if ( !IS_PRIV(current->domain) &&
-             !irq_access_permitted(current->domain, bind->machine_irq) )
-            goto unbind_out;
-
         ret = xsm_unbind_pt_irq(d, bind);
         if ( ret )
             goto unbind_out;
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 22c136b..bec9e57 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -3366,12 +3366,12 @@ static int hvmop_set_pci_intx_level(
     if ( (op.domain > 0) || (op.bus > 0) || (op.device > 31) || (op.intx > 3) )
         return -EINVAL;
 
-    rc = rcu_lock_remote_target_domain_by_id(op.domid, &d);
-    if ( rc != 0 )
-        return rc;
+    d = rcu_lock_domain_by_id(op.domid);
+    if ( d == NULL )
+        return -ESRCH;
 
     rc = -EINVAL;
-    if ( !is_hvm_domain(d) )
+    if ( d == current->domain || !is_hvm_domain(d) )
         goto out;
 
     rc = xsm_hvm_set_pci_intx_level(d);
@@ -3531,12 +3531,12 @@ static int hvmop_set_isa_irq_level(
     if ( op.isa_irq > 15 )
         return -EINVAL;
 
-    rc = rcu_lock_remote_target_domain_by_id(op.domid, &d);
-    if ( rc != 0 )
-        return rc;
+    d = rcu_lock_domain_by_id(op.domid);
+    if ( d == NULL )
+        return -ESRCH;
 
     rc = -EINVAL;
-    if ( !is_hvm_domain(d) )
+    if ( d == current->domain || !is_hvm_domain(d) )
         goto out;
 
     rc = xsm_hvm_set_isa_irq_level(d);
@@ -3575,12 +3575,12 @@ static int hvmop_set_pci_link_route(
     if ( (op.link > 3) || (op.isa_irq > 15) )
         return -EINVAL;
 
-    rc = rcu_lock_remote_target_domain_by_id(op.domid, &d);
-    if ( rc != 0 )
-        return rc;
+    d = rcu_lock_domain_by_id(op.domid);
+    if ( d == NULL )
+        return -ESRCH;
 
     rc = -EINVAL;
-    if ( !is_hvm_domain(d) )
+    if ( d == current->domain || !is_hvm_domain(d) )
         goto out;
 
     rc = xsm_hvm_set_pci_link_route(d);
@@ -3605,9 +3605,9 @@ static int hvmop_inject_msi(
     if ( copy_from_guest(&op, uop, 1) )
         return -EFAULT;
 
-    rc = rcu_lock_remote_target_domain_by_id(op.domid, &d);
-    if ( rc != 0 )
-        return rc;
+    d = rcu_lock_domain_by_id(op.domid);
+    if ( d == NULL )
+        return -ESRCH;
 
     rc = -EINVAL;
     if ( !is_hvm_domain(d) )
@@ -3702,9 +3702,9 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE(void) arg)
         if ( a.index >= HVM_NR_PARAMS )
             return -EINVAL;
 
-        rc = rcu_lock_target_domain_by_id(a.domid, &d);
-        if ( rc != 0 )
-            return rc;
+        d = rcu_lock_domain_by_id(a.domid);
+        if ( d == NULL )
+            return -ESRCH;
 
         rc = -EINVAL;
         if ( !is_hvm_domain(d) )
@@ -3948,12 +3948,12 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE(void) arg)
         if ( copy_from_guest(&a, arg, 1) )
             return -EFAULT;
 
-        rc = rcu_lock_remote_target_domain_by_id(a.domid, &d);
-        if ( rc != 0 )
-            return rc;
+        d = rcu_lock_domain_by_id(a.domid);
+        if ( d == NULL )
+            return -ESRCH;
 
         rc = -EINVAL;
-        if ( !is_hvm_domain(d) )
+        if ( d == current->domain || !is_hvm_domain(d) )
             goto param_fail2;
 
         rc = xsm_hvm_param(d, op);
@@ -3987,12 +3987,12 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE(void) arg)
         if ( copy_from_guest(&a, arg, 1) )
             return -EFAULT;
 
-        rc = rcu_lock_remote_target_domain_by_id(a.domid, &d);
-        if ( rc != 0 )
-            return rc;
+        d = rcu_lock_domain_by_id(a.domid);
+        if ( d == NULL )
+            return -ESRCH;
 
         rc = -EINVAL;
-        if ( !is_hvm_domain(d) )
+        if ( d == current->domain || !is_hvm_domain(d) )
             goto param_fail3;
 
         rc = xsm_hvm_param(d, op);
@@ -4037,9 +4037,9 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE(void) arg)
         if ( copy_from_guest(&a, arg, 1) )
             return -EFAULT;
 
-        rc = rcu_lock_target_domain_by_id(a.domid, &d);
-        if ( rc != 0 )
-            return rc;
+        d = rcu_lock_domain_by_id(a.domid);
+        if ( d == NULL )
+            return -ESRCH;
 
         rc = xsm_hvm_param(d, op);
         if ( rc )
@@ -4084,12 +4084,12 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE(void) arg)
         if ( copy_from_guest(&a, arg, 1) )
             return -EFAULT;
 
-        rc = rcu_lock_remote_target_domain_by_id(a.domid, &d);
-        if ( rc != 0 )
-            return rc;
+        d = rcu_lock_domain_by_id(a.domid);
+        if ( d == NULL )
+            return -ESRCH;
 
         rc = -EINVAL;
-        if ( !is_hvm_domain(d) )
+        if ( d == current->domain || !is_hvm_domain(d) )
             goto param_fail4;
 
         rc = xsm_hvm_param(d, op);
@@ -4163,12 +4163,12 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE(void) arg)
         if ( copy_from_guest(&a, arg, 1) )
             return -EFAULT;
 
-        rc = rcu_lock_remote_target_domain_by_id(a.domid, &d);
-        if ( rc != 0 )
-            return rc;
+        d = rcu_lock_domain_by_id(a.domid);
+        if ( d == NULL )
+            return -ESRCH;
 
         rc = -EINVAL;
-        if ( !is_hvm_domain(d) )
+        if ( d == current->domain || !is_hvm_domain(d) )
             goto param_fail5;
 
         rc = xsm_hvm_param(d, op);
@@ -4198,12 +4198,12 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE(void) arg)
         if ( copy_from_guest(&a, arg, 1) )
             return -EFAULT;
 
-        rc = rcu_lock_remote_target_domain_by_id(a.domid, &d);
-        if ( rc != 0 )
-            return rc;
+        d = rcu_lock_domain_by_id(a.domid);
+        if ( d == NULL )
+            return -ESRCH;
 
         rc = -EINVAL;
-        if ( !is_hvm_domain(d) )
+        if ( d == current->domain || !is_hvm_domain(d) )
             goto param_fail6;
 
         rc = xsm_hvm_param(d, op);
@@ -4234,9 +4234,9 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE(void) arg)
         if ( copy_from_guest(&a, arg, 1) )
             return -EFAULT;
 
-        rc = rcu_lock_target_domain_by_id(a.domid, &d);
-        if ( rc != 0 )
-            return rc;
+        d = rcu_lock_domain_by_id(a.domid);
+        if ( d == NULL )
+            return -ESRCH;
 
         rc = -EINVAL;
         if ( !is_hvm_domain(d) || !paging_mode_shadow(d) )
@@ -4288,12 +4288,12 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE(void) arg)
         if ( copy_from_guest(&tr, arg, 1 ) )
             return -EFAULT;
 
-        rc = rcu_lock_remote_target_domain_by_id(tr.domid, &d);
-        if ( rc != 0 )
-            return rc;
+        d = rcu_lock_domain_by_id(tr.domid);
+        if ( d == NULL )
+            return -ESRCH;
 
         rc = -EINVAL;
-        if ( !is_hvm_domain(d) )
+        if ( d == current->domain || !is_hvm_domain(d) )
             goto param_fail8;
 
         rc = xsm_hvm_param(d, op);
diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index 78a02e3..33ce710 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -1853,8 +1853,7 @@ int map_domain_pirq(
     ASSERT(spin_is_locked(&d->event_lock));
 
     if ( !IS_PRIV(current->domain) &&
-         !(IS_PRIV_FOR(current->domain, d) &&
-           irq_access_permitted(current->domain, pirq)))
+         !irq_access_permitted(current->domain, pirq))
         return -EPERM;
 
     if ( pirq < 0 || pirq >= d->nr_pirqs || irq < 0 || irq >= nr_irqs )
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 9338575..1b352df 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4673,9 +4673,9 @@ long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
         if ( copy_from_guest(&xatp, arg, 1) )
             return -EFAULT;
 
-        rc = rcu_lock_target_domain_by_id(xatp.domid, &d);
-        if ( rc != 0 )
-            return rc;
+        d = rcu_lock_domain_by_id(xatp.domid);
+        if ( d == NULL )
+            return -ESRCH;
 
         if ( xsm_add_to_physmap(current->domain, d) )
         {
@@ -4712,9 +4712,9 @@ long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
         if ( fmap.map.nr_entries > E820MAX )
             return -EINVAL;
 
-        rc = rcu_lock_target_domain_by_id(fmap.domid, &d);
-        if ( rc != 0 )
-            return rc;
+        d = rcu_lock_domain_by_id(fmap.domid);
+        if ( d == NULL )
+            return -ESRCH;
 
         rc = xsm_domain_memory_map(d);
         if ( rc )
@@ -4790,9 +4790,6 @@ long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
         XEN_GUEST_HANDLE(e820entry_t) buffer;
         unsigned int i;
 
-        if ( !IS_PRIV(current->domain) )
-            return -EINVAL;
-
         rc = xsm_machine_memory_map();
         if ( rc )
             return rc;
@@ -4868,16 +4865,12 @@ long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
         struct domain *d;
         struct p2m_domain *p2m;
 
-        /* Support DOMID_SELF? */
-        if ( !IS_PRIV(current->domain) )
-            return -EPERM;
-
         if ( copy_from_guest(&target, arg, 1) )
             return -EFAULT;
 
-        rc = rcu_lock_target_domain_by_id(target.domid, &d);
-        if ( rc != 0 )
-            return rc;
+        d = rcu_lock_domain_by_id(target.domid);
+        if ( d == NULL )
+            return -ESRCH;
 
         if ( op == XENMEM_set_pod_target )
             rc = xsm_set_pod_target(d);
diff --git a/xen/arch/x86/physdev.c b/xen/arch/x86/physdev.c
index e434ff4..0841a7a 100644
--- a/xen/arch/x86/physdev.c
+++ b/xen/arch/x86/physdev.c
@@ -106,12 +106,6 @@ int physdev_map_pirq(domid_t domid, int type, int *index, int *pirq_p,
         goto free_domain;
     }
 
-    if ( !IS_PRIV_FOR(current->domain, d) )
-    {
-        ret = -EPERM;
-        goto free_domain;
-    }
-
     /* Verify or get irq. */
     switch ( type )
     {
@@ -235,10 +229,6 @@ int physdev_unmap_pirq(domid_t domid, int pirq)
             goto free_domain;
     }
 
-    ret = -EPERM;
-    if ( !IS_PRIV_FOR(current->domain, d) )
-        goto free_domain;
-
     ret = xsm_unmap_domain_pirq(d, domain_pirq_to_irq(d, pirq));
     if ( ret )
         goto free_domain;
@@ -430,9 +420,6 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE(void) arg)
         ret = -EFAULT;
         if ( copy_from_guest(&apic, arg, 1) != 0 )
             break;
-        ret = -EPERM;
-        if ( !IS_PRIV(v->domain) )
-            break;
         ret = xsm_apic(v->domain, cmd);
         if ( ret )
             break;
@@ -447,9 +434,6 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE(void) arg)
         ret = -EFAULT;
         if ( copy_from_guest(&apic, arg, 1) != 0 )
             break;
-        ret = -EPERM;
-        if ( !IS_PRIV(v->domain) )
-            break;
         ret = xsm_apic(v->domain, cmd);
         if ( ret )
             break;
@@ -464,10 +448,6 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE(void) arg)
         if ( copy_from_guest(&irq_op, arg, 1) != 0 )
             break;
 
-        ret = -EPERM;
-        if ( !IS_PRIV(v->domain) )
-            break;
-
         /* Vector is only used by hypervisor, and dom0 shouldn't
            touch it in its world, return irq_op.irq as the vecotr,
            and make this hypercall dummy, and also defer the vector 
@@ -514,9 +494,6 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE(void) arg)
 
     case PHYSDEVOP_manage_pci_add: {
         struct physdev_manage_pci manage_pci;
-        ret = -EPERM;
-        if ( !IS_PRIV(v->domain) )
-            break;
         ret = -EFAULT;
         if ( copy_from_guest(&manage_pci, arg, 1) != 0 )
             break;
@@ -527,9 +504,6 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE(void) arg)
 
     case PHYSDEVOP_manage_pci_remove: {
         struct physdev_manage_pci manage_pci;
-        ret = -EPERM;
-        if ( !IS_PRIV(v->domain) )
-            break;
         ret = -EFAULT;
         if ( copy_from_guest(&manage_pci, arg, 1) != 0 )
             break;
@@ -542,10 +516,6 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE(void) arg)
         struct physdev_manage_pci_ext manage_pci_ext;
         struct pci_dev_info pdev_info;
 
-        ret = -EPERM;
-        if ( !IS_PRIV(current->domain) )
-            break;
-
         ret = -EFAULT;
         if ( copy_from_guest(&manage_pci_ext, arg, 1) != 0 )
             break;
@@ -568,10 +538,6 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE(void) arg)
         struct physdev_pci_device_add add;
         struct pci_dev_info pdev_info;
 
-        ret = -EPERM;
-        if ( !IS_PRIV(current->domain) )
-            break;
-
         ret = -EFAULT;
         if ( copy_from_guest(&add, arg, 1) != 0 )
             break;
@@ -592,10 +558,6 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE(void) arg)
     case PHYSDEVOP_pci_device_remove: {
         struct physdev_pci_device dev;
 
-        ret = -EPERM;
-        if ( !IS_PRIV(v->domain) )
-            break;
-
         ret = -EFAULT;
         if ( copy_from_guest(&dev, arg, 1) != 0 )
             break;
@@ -608,10 +570,6 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE(void) arg)
     case PHYSDEVOP_pci_mmcfg_reserved: {
         struct physdev_pci_mmcfg_reserved info;
 
-        ret = -EPERM;
-        if ( !IS_PRIV(current->domain) )
-            break;
-
         ret = xsm_resource_setup_misc();
         if ( ret )
             break;
@@ -630,10 +588,6 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE(void) arg)
         struct physdev_restore_msi restore_msi;
         struct pci_dev *pdev;
 
-        ret = -EPERM;
-        if ( !IS_PRIV(v->domain) )
-            break;
-
         ret = -EFAULT;
         if ( copy_from_guest(&restore_msi, arg, 1) != 0 )
             break;
@@ -649,10 +603,6 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE(void) arg)
         struct physdev_pci_device dev;
         struct pci_dev *pdev;
 
-        ret = -EPERM;
-        if ( !IS_PRIV(v->domain) )
-            break;
-
         ret = -EFAULT;
         if ( copy_from_guest(&dev, arg, 1) != 0 )
             break;
@@ -667,10 +617,6 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE(void) arg)
     case PHYSDEVOP_setup_gsi: {
         struct physdev_setup_gsi setup_gsi;
 
-        ret = -EPERM;
-        if ( !IS_PRIV(v->domain) )
-            break;
-
         ret = -EFAULT;
         if ( copy_from_guest(&setup_gsi, arg, 1) != 0 )
             break;
diff --git a/xen/arch/x86/platform_hypercall.c b/xen/arch/x86/platform_hypercall.c
index c049db7..f3304a2 100644
--- a/xen/arch/x86/platform_hypercall.c
+++ b/xen/arch/x86/platform_hypercall.c
@@ -65,9 +65,6 @@ ret_t do_platform_op(XEN_GUEST_HANDLE(xen_platform_op_t) u_xenpf_op)
     ret_t ret = 0;
     struct xen_platform_op curop, *op = &curop;
 
-    if ( !IS_PRIV(current->domain) )
-        return -EPERM;
-
     if ( copy_from_guest(op, u_xenpf_op, 1) )
         return -EFAULT;
 
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index 7ca6b08..db152b1 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -249,33 +249,6 @@ long do_domctl(XEN_GUEST_HANDLE(xen_domctl_t) u_domctl)
     if ( op->interface_version != XEN_DOMCTL_INTERFACE_VERSION )
         return -EACCES;
 
-    switch ( op->cmd )
-    {
-    case XEN_DOMCTL_ioport_mapping:
-    case XEN_DOMCTL_memory_mapping:
-    case XEN_DOMCTL_bind_pt_irq:
-    case XEN_DOMCTL_unbind_pt_irq: {
-        struct domain *d;
-        bool_t is_priv = IS_PRIV(current->domain);
-        if ( !is_priv && ((d = rcu_lock_domain_by_id(op->domain)) != NULL) )
-        {
-            is_priv = IS_PRIV_FOR(current->domain, d);
-            rcu_unlock_domain(d);
-        }
-        if ( !is_priv )
-            return -EPERM;
-        break;
-    }
-#ifdef XSM_ENABLE
-    case XEN_DOMCTL_getdomaininfo:
-        break;
-#endif
-    default:
-        if ( !IS_PRIV(current->domain) )
-            return -EPERM;
-        break;
-    }
-
     if ( !domctl_lock_acquire() )
         return hypercall_create_continuation(
             __HYPERVISOR_domctl, "h", u_domctl);
@@ -889,10 +862,10 @@ long do_domctl(XEN_GUEST_HANDLE(xen_domctl_t) u_domctl)
         if ( d == NULL )
             break;
 
-        if ( pirq >= d->nr_pirqs )
-            ret = -EINVAL;
-        else if ( xsm_irq_permission(d, pirq, allow) )
+        if ( xsm_irq_permission(d, pirq, allow) )
             ret = -EPERM;
+        else if ( pirq >= d->nr_pirqs )
+            ret = -EINVAL;
         else if ( allow )
             ret = irq_permit_access(d, pirq);
         else
diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index 988d3ce..625748b 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -165,9 +165,9 @@ static long evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc)
     domid_t        dom = alloc->dom;
     long           rc;
 
-    rc = rcu_lock_target_domain_by_id(dom, &d);
-    if ( rc )
-        return rc;
+    d = rcu_lock_domain_by_id(dom);
+    if ( d == NULL )
+        return -ESRCH;
 
     spin_lock(&d->event_lock);
 
@@ -795,9 +795,9 @@ static long evtchn_status(evtchn_status_t *status)
     struct evtchn   *chn;
     long             rc = 0;
 
-    rc = rcu_lock_target_domain_by_id(dom, &d);
-    if ( rc )
-        return rc;
+    d = rcu_lock_domain_by_id(dom);
+    if ( d == NULL )
+        return -ESRCH;
 
     spin_lock(&d->event_lock);
 
@@ -947,9 +947,9 @@ static long evtchn_reset(evtchn_reset_t *r)
     struct domain *d;
     int i, rc;
 
-    rc = rcu_lock_target_domain_by_id(dom, &d);
-    if ( rc )
-        return rc;
+    d = rcu_lock_domain_by_id(dom);
+    if ( d == NULL )
+        return -ESRCH;
 
     rc = xsm_evtchn_reset(current->domain, d);
     if ( rc )
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index fbea67c..5760937 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -1261,7 +1261,6 @@ gnttab_setup_table(
     struct grant_table *gt;
     int            i;
     unsigned long  gmfn;
-    domid_t        dom;
 
     if ( count != 1 )
         return -EINVAL;
@@ -1281,25 +1280,12 @@ gnttab_setup_table(
         goto out1;
     }
 
-    dom = op.dom;
-    if ( dom == DOMID_SELF )
+    d = rcu_lock_domain_by_id(op.dom);
+    if ( d == NULL )
     {
-        d = rcu_lock_current_domain();
-    }
-    else
-    {
-        if ( unlikely((d = rcu_lock_domain_by_id(dom)) == NULL) )
-        {
-            gdprintk(XENLOG_INFO, "Bad domid %d.\n", dom);
-            op.status = GNTST_bad_domain;
-            goto out1;
-        }
-
-        if ( unlikely(!IS_PRIV_FOR(current->domain, d)) )
-        {
-            op.status = GNTST_permission_denied;
-            goto out2;
-        }
+        gdprintk(XENLOG_INFO, "Bad domid %d.\n", op.dom);
+        op.status = GNTST_bad_domain;
+        goto out2;
     }
 
     if ( xsm_grant_setup(current->domain, d) )
@@ -1352,7 +1338,6 @@ gnttab_query_size(
 {
     struct gnttab_query_size op;
     struct domain *d;
-    domid_t        dom;
     int rc;
 
     if ( count != 1 )
@@ -1364,25 +1349,12 @@ gnttab_query_size(
         return -EFAULT;
     }
 
-    dom = op.dom;
-    if ( dom == DOMID_SELF )
-    {
-        d = rcu_lock_current_domain();
-    }
-    else
+    d = rcu_lock_domain_by_id(op.dom);
+    if ( d == NULL )
     {
-        if ( unlikely((d = rcu_lock_domain_by_id(dom)) == NULL) )
-        {
-            gdprintk(XENLOG_INFO, "Bad domid %d.\n", dom);
-            op.status = GNTST_bad_domain;
-            goto query_out;
-        }
-
-        if ( unlikely(!IS_PRIV_FOR(current->domain, d)) )
-        {
-            op.status = GNTST_permission_denied;
-            goto query_out_unlock;
-        }
+        gdprintk(XENLOG_INFO, "Bad domid %d.\n", op.dom);
+        op.status = GNTST_bad_domain;
+        goto query_out;
     }
 
     rc = xsm_grant_query_size(current->domain, d);
@@ -2240,15 +2212,10 @@ gnttab_get_status_frames(XEN_GUEST_HANDLE(gnttab_get_status_frames_t) uop,
         return -EFAULT;
     }
 
-    rc = rcu_lock_target_domain_by_id(op.dom, &d);
-    if ( rc < 0 )
+    d = rcu_lock_domain_by_id(op.dom);
+    if ( d == NULL )
     {
-        if ( rc == -ESRCH )
-            op.status = GNTST_bad_domain;
-        else if ( rc == -EPERM )
-            op.status = GNTST_permission_denied;
-        else
-            op.status = GNTST_general_error;
+        op.status = GNTST_bad_domain;
         goto out1;
     }
     rc = xsm_grant_setup(current->domain, d);
@@ -2298,14 +2265,15 @@ gnttab_get_version(XEN_GUEST_HANDLE(gnttab_get_version_t uop))
     if ( copy_from_guest(&op, uop, 1) )
         return -EFAULT;
 
-    rc = rcu_lock_target_domain_by_id(op.dom, &d);
-    if ( rc < 0 )
-        return rc;
+    d = rcu_lock_domain_by_id(op.dom);
+    if ( d == NULL )
+        return -ESRCH;
 
-    if ( xsm_grant_query_size(current->domain, d) )
+    rc = xsm_grant_query_size(current->domain, d);
+    if ( rc )
     {
         rcu_unlock_domain(d);
-        return -EPERM;
+        return rc;
     }
 
     op.version = d->grant_table->gt_version;
diff --git a/xen/common/kexec.c b/xen/common/kexec.c
index 09a5624..22bca20 100644
--- a/xen/common/kexec.c
+++ b/xen/common/kexec.c
@@ -851,9 +851,6 @@ int do_kexec_op_internal(unsigned long op, XEN_GUEST_HANDLE(void) uarg,
     unsigned long flags;
     int ret = -EINVAL;
 
-    if ( !IS_PRIV(current->domain) )
-        return -EPERM;
-
     ret = xsm_kexec();
     if ( ret )
         return ret;
diff --git a/xen/common/memory.c b/xen/common/memory.c
index 5d64cb6..77969d9 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -583,20 +583,9 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE(void) arg)
              && (reservation.mem_flags & XENMEMF_populate_on_demand) )
             args.memflags |= MEMF_populate_on_demand;
 
-        if ( likely(reservation.domid == DOMID_SELF) )
-        {
-            d = rcu_lock_current_domain();
-        }
-        else
-        {
-            if ( (d = rcu_lock_domain_by_id(reservation.domid)) == NULL )
-                return start_extent;
-            if ( !IS_PRIV_FOR(current->domain, d) )
-            {
-                rcu_unlock_domain(d);
-                return start_extent;
-            }
-        }
+        d = rcu_lock_domain_by_id(reservation.domid);
+        if ( d == NULL )
+            return start_extent;
         args.domain = d;
 
         rc = xsm_memory_adjust_reservation(current->domain, d);
@@ -644,9 +633,9 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE(void) arg)
         if ( copy_from_guest(&domid, arg, 1) )
             return -EFAULT;
 
-        rc = rcu_lock_target_domain_by_id(domid, &d);
-        if ( rc )
-            return rc;
+        d = rcu_lock_domain_by_id(domid);
+        if ( d == NULL )
+            return -ESRCH;
 
         rc = xsm_memory_stat_reservation(current->domain, d);
         if ( rc )
@@ -682,9 +671,9 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE(void) arg)
         if ( copy_from_guest(&xrfp, arg, 1) )
             return -EFAULT;
 
-        rc = rcu_lock_target_domain_by_id(xrfp.domid, &d);
-        if ( rc != 0 )
-            return rc;
+        d = rcu_lock_domain_by_id(xrfp.domid);
+        if ( d == NULL )
+            return -ESRCH;
 
         if ( xsm_remove_from_physmap(current->domain, d) )
         {
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index 0854f55..e38e6e2 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -919,12 +919,6 @@ ret_t do_sched_op(int cmd, XEN_GUEST_HANDLE(void) arg)
         if ( d == NULL )
             break;
 
-        if ( !IS_PRIV_FOR(current->domain, d) )
-        {
-            rcu_unlock_domain(d);
-            return -EPERM;
-        }
-
         ret = xsm_schedop_shutdown(current->domain, d);
         if ( ret )
         {
diff --git a/xen/common/sysctl.c b/xen/common/sysctl.c
index ea68278..2cea0c3 100644
--- a/xen/common/sysctl.c
+++ b/xen/common/sysctl.c
@@ -33,9 +33,6 @@ long do_sysctl(XEN_GUEST_HANDLE(xen_sysctl_t) u_sysctl)
     struct xen_sysctl curop, *op = &curop;
     static DEFINE_SPINLOCK(sysctl_lock);
 
-    if ( !IS_PRIV(current->domain) )
-        return -EPERM;
-
     if ( copy_from_guest(op, u_sysctl, 1) )
         return -EFAULT;
 
-- 
1.7.11.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
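
The hunks above follow one pattern: a blanket IS_PRIV()/IS_PRIV_FOR() check is removed and the XSM hook's return value is propagated instead of a hard-coded -EPERM (as in the gnttab_get_version hunk). A minimal standalone sketch of that conversion — the struct, the hook body, and the error values here are simplified stand-ins for illustration, not the real Xen definitions:

```c
#include <assert.h>

#define EPERM 1

struct domain { int domid; int is_privileged; };

/* Stand-in for an XSM hook: the security module decides and returns
 * 0 or a -errno value of its own choosing. */
static int xsm_grant_query_size(struct domain *src, struct domain *tgt)
{
    (void)tgt;
    return src->is_privileged ? 0 : -EPERM;
}

/* Old shape: the hypercall hard-codes both the policy and -EPERM. */
static int get_version_old(struct domain *cur, struct domain *d, int *ver)
{
    (void)d;
    if ( !cur->is_privileged )      /* stood in for IS_PRIV_FOR() */
        return -EPERM;
    *ver = 2;
    return 0;
}

/* New shape: ask the hook and propagate whatever it returns. */
static int get_version_new(struct domain *cur, struct domain *d, int *ver)
{
    int rc = xsm_grant_query_size(cur, d);
    if ( rc )
        return rc;
    *ver = 2;
    return 0;
}
```

With the default (dummy) module the observable behaviour matches the old check; the point of the conversion is that a non-default module can now make a different decision per hook.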

From xen-devel-bounces@lists.xen.org Mon Aug 06 14:32:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:32:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOMW-0002rR-3a; Mon, 06 Aug 2012 14:32:52 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1SyOMU-0002jb-A7
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 14:32:50 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-3.tower-27.messagelabs.com!1344263563!11103008!1
X-Originating-IP: [63.239.67.9]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11006 invoked from network); 6 Aug 2012 14:32:43 -0000
Received: from emvm-gh1-uea08.nsa.gov (HELO nsa.gov) (63.239.67.9)
	by server-3.tower-27.messagelabs.com with SMTP;
	6 Aug 2012 14:32:43 -0000
X-TM-IMSS-Message-ID: <7b075f270002b236@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov ([63.239.67.9])
	with ESMTP (TREND IMSS SMTP Service 7.1) id 7b075f270002b236 ;
	Mon, 6 Aug 2012 10:32:52 -0400
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id q76EWZ9c011112; 
	Mon, 6 Aug 2012 10:32:41 -0400
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: xen-devel@lists.xen.org
Date: Mon,  6 Aug 2012 10:32:30 -0400
Message-Id: <1344263550-3941-19-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.2
In-Reply-To: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
References: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Subject: [Xen-devel] [PATCH 18/18] xen: remove rcu_lock_target_domain_by_id
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 xen/arch/x86/physdev.c  | 20 +++++++++++++-------
 xen/common/domain.c     | 34 ----------------------------------
 xen/include/xen/sched.h | 14 --------------
 3 files changed, 13 insertions(+), 55 deletions(-)

diff --git a/xen/arch/x86/physdev.c b/xen/arch/x86/physdev.c
index 0841a7a..37a4a29 100644
--- a/xen/arch/x86/physdev.c
+++ b/xen/arch/x86/physdev.c
@@ -87,9 +87,12 @@ int physdev_map_pirq(domid_t domid, int type, int *index, int *pirq_p,
     int pirq, irq, ret = 0;
     void *map_data = NULL;
 
-    ret = rcu_lock_target_domain_by_id(domid, &d);
-    if ( ret )
-        return ret;
+    d = rcu_lock_domain_by_id(domid);
+    if ( d == NULL )
+        return -ESRCH;
+
+    if ( d != current->domain && !IS_PRIV_FOR(current->domain, d) )
+        return -EPERM;
 
     if ( domid == DOMID_SELF && is_hvm_domain(d) )
     {
@@ -213,11 +216,14 @@ int physdev_map_pirq(domid_t domid, int type, int *index, int *pirq_p,
 int physdev_unmap_pirq(domid_t domid, int pirq)
 {
     struct domain *d;
-    int ret;
+    int ret = 0;
 
-    ret = rcu_lock_target_domain_by_id(domid, &d);
-    if ( ret )
-        return ret;
+    d = rcu_lock_domain_by_id(domid);
+    if ( d == NULL )
+        return -ESRCH;
+
+    if ( d != current->domain && !IS_PRIV_FOR(current->domain, d) )
+        return -EPERM;
 
     if ( is_hvm_domain(d) )
     {
diff --git a/xen/common/domain.c b/xen/common/domain.c
index dbbc414..8989fa6 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -421,40 +421,6 @@ struct domain *rcu_lock_domain_by_id(domid_t dom)
     return d;
 }
 
-int rcu_lock_target_domain_by_id(domid_t dom, struct domain **d)
-{
-    if ( dom == DOMID_SELF )
-    {
-        *d = rcu_lock_current_domain();
-        return 0;
-    }
-
-    if ( (*d = rcu_lock_domain_by_id(dom)) == NULL )
-        return -ESRCH;
-
-    if ( !IS_PRIV_FOR(current->domain, *d) )
-    {
-        rcu_unlock_domain(*d);
-        return -EPERM;
-    }
-
-    return 0;
-}
-
-int rcu_lock_remote_target_domain_by_id(domid_t dom, struct domain **d)
-{
-    if ( (*d = rcu_lock_domain_by_id(dom)) == NULL )
-        return -ESRCH;
-
-    if ( (*d == current->domain) || !IS_PRIV_FOR(current->domain, *d) )
-    {
-        rcu_unlock_domain(*d);
-        return -EPERM;
-    }
-
-    return 0;
-}
-
 int domain_kill(struct domain *d)
 {
     int rc = 0;
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 53804c8..73d82a8 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -446,20 +446,6 @@ struct domain *domain_create(
  */
 struct domain *rcu_lock_domain_by_id(domid_t dom);
 
-/*
- * As above function, but accounts for current domain context:
- *  - Translates target DOMID_SELF into caller's domain id; and
- *  - Checks that caller has permission to act on the target domain.
- */
-int rcu_lock_target_domain_by_id(domid_t dom, struct domain **d);
-
-/*
- * As rcu_lock_target_domain_by_id(), but will fail EPERM rather than resolve
- * to local domain. Successful return always resolves to a remote domain that
- * the local domain is privileged to control.
- */
-int rcu_lock_remote_target_domain_by_id(domid_t dom, struct domain **d);
-
 /* Finish a RCU critical region started by rcu_lock_domain_by_id(). */
 static inline void rcu_unlock_domain(struct domain *d)
 {
-- 
1.7.11.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
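
The helper removed by the patch above gets open-coded at each call site as lookup-then-privilege-check. The pattern can be sketched in isolation like this — the domain table, the "lock", and is_priv_for() are toy stand-ins (in Xen the lock is an RCU read-side critical section), and the sketch drops the lock on the -EPERM path, which any open-coded copy of the pattern must remember to do:

```c
#include <assert.h>
#include <stddef.h>

#define EPERM 1
#define ESRCH 3

struct domain { int domid; int privileged; int lock_count; };

static struct domain dom_table[2] = { { 0, 1, 0 }, { 1, 0, 0 } };
static struct domain *current_domain = &dom_table[0];

/* Toy stand-in for the RCU-protected lookup: take the "lock" on hit. */
static struct domain *rcu_lock_domain_by_id(int domid)
{
    for ( int i = 0; i < 2; i++ )
        if ( dom_table[i].domid == domid )
        {
            dom_table[i].lock_count++;
            return &dom_table[i];
        }
    return NULL;
}

static void rcu_unlock_domain(struct domain *d) { d->lock_count--; }

static int is_priv_for(struct domain *s, struct domain *t)
{
    (void)t;
    return s->privileged;
}

/* Open-coded equivalent of the removed helper: look the domain up,
 * then check privilege, dropping the lock on the failure path. */
static int lock_target_domain(int domid, struct domain **out)
{
    struct domain *d = rcu_lock_domain_by_id(domid);
    if ( d == NULL )
        return -ESRCH;
    if ( d != current_domain && !is_priv_for(current_domain, d) )
    {
        rcu_unlock_domain(d);   /* don't leak the lock on -EPERM */
        return -EPERM;
    }
    *out = d;
    return 0;
}
```

On success the caller owns the lock and must pair it with rcu_unlock_domain(), exactly as with rcu_lock_domain_by_id() itself.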


From xen-devel-bounces@lists.xen.org Mon Aug 06 14:32:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:32:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOMX-0002tm-Sx; Mon, 06 Aug 2012 14:32:53 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1SyOMV-0002k6-F2
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 14:32:51 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-8.tower-27.messagelabs.com!1344263561!11031783!1
X-Originating-IP: [63.239.67.9]
X-SpamReason: No, hits=2.5 required=7.0 tests=BODY_RANDOM_LONG,LONGWORDS
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29336 invoked from network); 6 Aug 2012 14:32:41 -0000
Received: from emvm-gh1-uea08.nsa.gov (HELO nsa.gov) (63.239.67.9)
	by server-8.tower-27.messagelabs.com with SMTP;
	6 Aug 2012 14:32:41 -0000
X-TM-IMSS-Message-ID: <7b0756030002b22d@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov ([63.239.67.9])
	with ESMTP (TREND IMSS SMTP Service 7.1) id 7b0756030002b22d ;
	Mon, 6 Aug 2012 10:32:50 -0400
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id q76EWZ9W011112; 
	Mon, 6 Aug 2012 10:32:39 -0400
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: xen-devel@lists.xen.org
Date: Mon,  6 Aug 2012 10:32:24 -0400
Message-Id: <1344263550-3941-13-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.2
In-Reply-To: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
References: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Subject: [Xen-devel] [PATCH 12/18] xsm: Add missing domctl and mem_sharing
	hooks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch adds new XSM hooks to cover the 12 domctls that were not
previously covered by an XSM hook, and splits up the mem_sharing and
mem_event XSM hooks to better cover what the code is doing.
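
The hook split described above can be illustrated with a standalone sketch. The hook names mirror the patch, but the struct, the policy flags, and the one-argument signatures are invented for illustration (the real xsm_mem_sharing_op also takes the client domain and the op number):

```c
#include <assert.h>

#define EPERM 1

/* Toy policy flags; the real hooks consult the loaded XSM module. */
struct domain { int id; int may_setup; int may_share; };

/* Coarse hook (before): one answer covers every mem_event operation. */
static int xsm_mem_event(struct domain *d)
{
    return (d->may_setup && d->may_share) ? 0 : -EPERM;
}

/* Split hooks (after): policy can distinguish the operations. */
static int xsm_mem_event_setup(struct domain *d)
{
    return d->may_setup ? 0 : -EPERM;
}

static int xsm_mem_sharing_op(struct domain *d)
{
    return d->may_share ? 0 : -EPERM;
}
```

A policy that grants mem_event setup but not page sharing is expressible with the split hooks and not with the coarse one.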

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 tools/flask/policy/policy/flask/access_vectors |   5 +
 tools/flask/policy/policy/modules/xen/xen.if   |   2 +
 xen/arch/x86/domctl.c                          | 125 +++++++++++++++----------
 xen/arch/x86/mm/mem_event.c                    |  45 ++++-----
 xen/arch/x86/mm/mem_sharing.c                  |  23 ++++-
 xen/include/asm-x86/mem_event.h                |   1 -
 xen/include/xsm/dummy.h                        |  65 ++++++++++++-
 xen/include/xsm/xsm.h                          |  62 +++++++++++-
 xen/xsm/dummy.c                                |  11 ++-
 xen/xsm/flask/hooks.c                          |  62 +++++++++++-
 xen/xsm/flask/include/av_perm_to_string.h      |   5 +
 xen/xsm/flask/include/av_permissions.h         |   5 +
 12 files changed, 318 insertions(+), 93 deletions(-)

diff --git a/tools/flask/policy/policy/flask/access_vectors b/tools/flask/policy/policy/flask/access_vectors
index 11d02da..28b8ada 100644
--- a/tools/flask/policy/policy/flask/access_vectors
+++ b/tools/flask/policy/policy/flask/access_vectors
@@ -80,6 +80,9 @@ class domain2
 	relabelself
 	make_priv_for
 	set_as_target
+	set_cpuid
+	gettsc
+	settsc
 }
 
 class hvm
@@ -97,6 +100,8 @@ class hvm
     hvmctl
     mem_event
     mem_sharing
+	share_mem
+	audit_p2m
 }
 
 class event
diff --git a/tools/flask/policy/policy/modules/xen/xen.if b/tools/flask/policy/policy/modules/xen/xen.if
index 4de99c8..f9bd757 100644
--- a/tools/flask/policy/policy/modules/xen/xen.if
+++ b/tools/flask/policy/policy/modules/xen/xen.if
@@ -29,6 +29,7 @@ define(`create_domain_common', `
 			getdomaininfo hypercall setvcpucontext setextvcpucontext
 			scheduler getvcpuinfo getvcpuextstate getaddrsize
 			getvcpuaffinity setvcpuaffinity };
+	allow $1 $2:domain2 { set_cpuid settsc };
 	allow $1 $2:security check_context;
 	allow $1 $2:shadow enable;
 	allow $1 $2:mmu {map_read map_write adjust memorymap physmap pinpage};
@@ -67,6 +68,7 @@ define(`migrate_domain_out', `
 	allow $1 $2:hvm { gethvmc getparam irqlevel };
 	allow $1 $2:mmu { stat pageinfo map_read };
 	allow $1 $2:domain { getaddrsize getvcpucontext getextvcpucontext getvcpuextstate pause destroy };
+	allow $1 $2:domain2 gettsc;
 ')
 
 ################################################################################
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index bcb5b2d..95f34d2 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -54,26 +54,6 @@ long arch_do_domctl(
 
     switch ( domctl->cmd )
     {
-    /* TODO: the following do not have XSM hooks yet */
-    case XEN_DOMCTL_set_cpuid:
-    case XEN_DOMCTL_suppress_spurious_page_faults:
-    case XEN_DOMCTL_debug_op:
-    case XEN_DOMCTL_gettscinfo:
-    case XEN_DOMCTL_settscinfo:
-    case XEN_DOMCTL_audit_p2m:
-    case XEN_DOMCTL_gdbsx_guestmemio:
-    case XEN_DOMCTL_gdbsx_pausevcpu:
-    case XEN_DOMCTL_gdbsx_unpausevcpu:
-    case XEN_DOMCTL_gdbsx_domstatus:
-    /* getpageframeinfo[23] will leak XEN_DOMCTL_PFINFO_XTAB on target GFNs */
-    case XEN_DOMCTL_getpageframeinfo2:
-    case XEN_DOMCTL_getpageframeinfo3:
-        if ( !IS_PRIV(current->domain) )
-            return -EPERM;
-    }
-
-    switch ( domctl->cmd )
-    {
 
     case XEN_DOMCTL_shadow_op:
     {
@@ -190,6 +170,13 @@ long arch_do_domctl(
             if ( unlikely((d = rcu_lock_domain_by_id(dom)) == NULL) )
                 break;
 
+            ret = xsm_getpageframeinfo_domain(d);
+            if ( ret )
+            {
+                rcu_unlock_domain(d);
+                break;
+            }
+
             if ( unlikely(num > 1024) ||
                  unlikely(num != domctl->u.getpageframeinfo3.num) )
             {
@@ -287,6 +274,13 @@ long arch_do_domctl(
         if ( unlikely((d = rcu_lock_domain_by_id(dom)) == NULL) )
             break;
 
+        ret = xsm_getpageframeinfo_domain(d);
+        if ( ret )
+        {
+            rcu_unlock_domain(d);
+            break;
+        }
+
         if ( unlikely(num > 1024) )
         {
             ret = -E2BIG;
@@ -1106,6 +1100,10 @@ long arch_do_domctl(
         if ( d == NULL )
             break;
 
+        ret = xsm_set_cpuid(d);
+        if ( ret )
+            goto set_cpuid_out;
+
         for ( i = 0; i < MAX_CPUID_INPUT; i++ )
         {
             cpuid = &d->arch.cpuids[i];
@@ -1129,6 +1127,7 @@ long arch_do_domctl(
             ret = 0;
         }
 
+    set_cpuid_out:
         rcu_unlock_domain(d);
     }
     break;
@@ -1143,6 +1142,10 @@ long arch_do_domctl(
         if ( d == NULL )
             break;
 
+        ret = xsm_gettscinfo(d);
+        if ( ret )
+            goto gettscinfo_out;
+
         domain_pause(d);
         tsc_get_info(d, &info.tsc_mode,
                         &info.elapsed_nsec,
@@ -1154,6 +1157,7 @@ long arch_do_domctl(
             ret = 0;
         domain_unpause(d);
 
+    gettscinfo_out:
         rcu_unlock_domain(d);
     }
     break;
@@ -1167,15 +1171,20 @@ long arch_do_domctl(
         if ( d == NULL )
             break;
 
+        ret = xsm_settscinfo(d);
+        if ( ret )
+            goto settscinfo_out;
+
         domain_pause(d);
         tsc_set_info(d, domctl->u.tsc_info.info.tsc_mode,
                      domctl->u.tsc_info.info.elapsed_nsec,
                      domctl->u.tsc_info.info.gtsc_khz,
                      domctl->u.tsc_info.info.incarnation);
         domain_unpause(d);
+        ret = 0;
 
+    settscinfo_out:
         rcu_unlock_domain(d);
-        ret = 0;
     }
     break;
 
@@ -1187,9 +1196,10 @@ long arch_do_domctl(
         d = rcu_lock_domain_by_id(domctl->domain);
         if ( d != NULL )
         {
-            d->arch.suppress_spurious_page_faults = 1;
+            ret = xsm_domctl(d, domctl->cmd);
+            if ( !ret )
+                d->arch.suppress_spurious_page_faults = 1;
             rcu_unlock_domain(d);
-            ret = 0;
         }
     }
     break;
@@ -1204,6 +1214,10 @@ long arch_do_domctl(
         if ( d == NULL )
             break;
 
+        ret = xsm_debug_op(d);
+        if ( ret )
+            goto debug_op_out;
+
         ret = -EINVAL;
         if ( (domctl->u.debug_op.vcpu >= d->max_vcpus) ||
              ((v = d->vcpu[domctl->u.debug_op.vcpu]) == NULL) )
@@ -1228,6 +1242,10 @@ long arch_do_domctl(
         if ( (d = rcu_lock_domain_by_id(domctl->domain)) == NULL )
             break;
 
+        ret = xsm_debug_op(d);
+        if ( ret )
+            goto gdbsx_guestmemio_out;
+
         domctl->u.gdbsx_guest_memio.remain =
             domctl->u.gdbsx_guest_memio.len;
 
@@ -1235,6 +1253,7 @@ long arch_do_domctl(
         if ( !ret && copy_to_guest(u_domctl, domctl, 1) )
             ret = -EFAULT;
 
+    gdbsx_guestmemio_out:
         rcu_unlock_domain(d);
     }
     break;
@@ -1248,21 +1267,20 @@ long arch_do_domctl(
         if ( (d = rcu_lock_domain_by_id(domctl->domain)) == NULL )
             break;
 
+        ret = xsm_debug_op(d);
+        if ( ret )
+            goto gdbsx_pausevcpu_out;
+
         ret = -EBUSY;
         if ( !d->is_paused_by_controller )
-        {
-            rcu_unlock_domain(d);
-            break;
-        }
+            goto gdbsx_pausevcpu_out;
         ret = -EINVAL;
         if ( domctl->u.gdbsx_pauseunp_vcpu.vcpu >= MAX_VIRT_CPUS ||
              (v = d->vcpu[domctl->u.gdbsx_pauseunp_vcpu.vcpu]) == NULL )
-        {
-            rcu_unlock_domain(d);
-            break;
-        }
+            goto gdbsx_pausevcpu_out;
         vcpu_pause(v);
         ret = 0;
+    gdbsx_pausevcpu_out:
         rcu_unlock_domain(d);
     }
     break;
@@ -1276,23 +1294,22 @@ long arch_do_domctl(
         if ( (d = rcu_lock_domain_by_id(domctl->domain)) == NULL )
             break;
 
+        ret = xsm_debug_op(d);
+        if ( ret )
+            goto gdbsx_unpausevcpu_out;
+
         ret = -EBUSY;
         if ( !d->is_paused_by_controller )
-        {
-            rcu_unlock_domain(d);
-            break;
-        }
+            goto gdbsx_unpausevcpu_out;
         ret = -EINVAL;
         if ( domctl->u.gdbsx_pauseunp_vcpu.vcpu >= MAX_VIRT_CPUS ||
              (v = d->vcpu[domctl->u.gdbsx_pauseunp_vcpu.vcpu]) == NULL )
-        {
-            rcu_unlock_domain(d);
-            break;
-        }
+            goto gdbsx_unpausevcpu_out;
         if ( !atomic_read(&v->pause_count) )
             printk("WARN: Unpausing vcpu:%d which is not paused\n", v->vcpu_id);
         vcpu_unpause(v);
         ret = 0;
+    gdbsx_unpausevcpu_out:
         rcu_unlock_domain(d);
     }
     break;
@@ -1306,6 +1323,10 @@ long arch_do_domctl(
         if ( (d = rcu_lock_domain_by_id(domctl->domain)) == NULL )
             break;
 
+        ret = xsm_debug_op(d);
+        if ( ret )
+            goto gdbsx_domstatus_out;
+
         domctl->u.gdbsx_domstatus.vcpu_id = -1;
         domctl->u.gdbsx_domstatus.paused = d->is_paused_by_controller;
         if ( domctl->u.gdbsx_domstatus.paused )
@@ -1325,6 +1346,7 @@ long arch_do_domctl(
         ret = 0;
         if ( copy_to_guest(u_domctl, domctl, 1) )
             ret = -EFAULT;
+    gdbsx_domstatus_out:
         rcu_unlock_domain(d);
     }
     break;
@@ -1464,10 +1486,8 @@ long arch_do_domctl(
         d = rcu_lock_domain_by_id(domctl->domain);
         if ( d != NULL )
         {
-            ret = xsm_mem_event(d);
-            if ( !ret )
-                ret = mem_event_domctl(d, &domctl->u.mem_event_op,
-                                       guest_handle_cast(u_domctl, void));
+            ret = mem_event_domctl(d, &domctl->u.mem_event_op,
+                                   guest_handle_cast(u_domctl, void));
             rcu_unlock_domain(d);
             copy_to_guest(u_domctl, domctl, 1);
         } 
@@ -1496,16 +1516,19 @@ long arch_do_domctl(
     {
         struct domain *d;
 
-        ret = rcu_lock_remote_target_domain_by_id(domctl->domain, &d);
-        if ( ret != 0 )
+        d = rcu_lock_domain_by_id(domctl->domain);
+        if ( d == NULL )
             break;
 
-        audit_p2m(d,
-                  &domctl->u.audit_p2m.orphans,
-                  &domctl->u.audit_p2m.m2p_bad,
-                  &domctl->u.audit_p2m.p2m_bad);
+        ret = xsm_audit_p2m(d);
+        if ( !ret )
+            audit_p2m(d,
+                      &domctl->u.audit_p2m.orphans,
+                      &domctl->u.audit_p2m.m2p_bad,
+                      &domctl->u.audit_p2m.p2m_bad);
+
         rcu_unlock_domain(d);
-        if ( copy_to_guest(u_domctl, domctl, 1) ) 
+        if ( !ret && copy_to_guest(u_domctl, domctl, 1) ) 
             ret = -EFAULT;
     }
     break;
@@ -1524,7 +1547,7 @@ long arch_do_domctl(
         d = rcu_lock_domain_by_id(domctl->domain);
         if ( d != NULL )
         {
-            ret = xsm_mem_event(d);
+            ret = xsm_mem_event_setup(d);
             if ( !ret ) {
                 p2m = p2m_get_hostp2m(d);
                 p2m->access_required = domctl->u.access_required.access_required;
diff --git a/xen/arch/x86/mm/mem_event.c b/xen/arch/x86/mm/mem_event.c
index d728889..a5b02d9 100644
--- a/xen/arch/x86/mm/mem_event.c
+++ b/xen/arch/x86/mm/mem_event.c
@@ -29,6 +29,7 @@
 #include <asm/mem_paging.h>
 #include <asm/mem_access.h>
 #include <asm/mem_sharing.h>
+#include <xsm/xsm.h>
 
 /* for public/io/ring.h macros */
 #define xen_mb()   mb()
@@ -439,34 +440,22 @@ static void mem_sharing_notification(struct vcpu *v, unsigned int port)
         mem_sharing_sharing_resume(v->domain);
 }
 
-struct domain *get_mem_event_op_target(uint32_t domain, int *rc)
-{
-    struct domain *d;
-
-    /* Get the target domain */
-    *rc = rcu_lock_remote_target_domain_by_id(domain, &d);
-    if ( *rc != 0 )
-        return NULL;
-
-    /* Not dying? */
-    if ( d->is_dying )
-    {
-        rcu_unlock_domain(d);
-        *rc = -EINVAL;
-        return NULL;
-    }
-    
-    return d;
-}
-
 int do_mem_event_op(int op, uint32_t domain, void *arg)
 {
     int ret;
     struct domain *d;
 
-    d = get_mem_event_op_target(domain, &ret);
+    d = rcu_lock_domain_by_id(domain);
     if ( !d )
-        return ret;
+        return -ESRCH;
+
+    ret = -EINVAL;
+    if ( d->is_dying || d == current->domain )
+        goto out;
+
+    ret = xsm_mem_event_op(d, op);
+    if ( ret )
+        goto out;
 
     switch (op)
     {
@@ -483,6 +472,7 @@ int do_mem_event_op(int op, uint32_t domain, void *arg)
             ret = -ENOSYS;
     }
 
+ out:
     rcu_unlock_domain(d);
     return ret;
 }
@@ -516,6 +506,10 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
 {
     int rc;
 
+    rc = xsm_mem_event_control(d, mec->mode, mec->op);
+    if ( rc )
+        return rc;
+
     if ( unlikely(d == current->domain) )
     {
         gdprintk(XENLOG_INFO, "Tried to do a memory event op on itself.\n");
@@ -537,13 +531,6 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
         return -EINVAL;
     }
 
-    /* TODO: XSM hook */
-#if 0
-    rc = xsm_mem_event_control(d, mec->op);
-    if ( rc )
-        return rc;
-#endif
-
     rc = -ENOSYS;
 
     switch ( mec->mode )
diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index 5103285..a7e6c5c 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -34,6 +34,7 @@
 #include <asm/atomic.h>
 #include <xen/rcupdate.h>
 #include <asm/event.h>
+#include <xsm/xsm.h>
 
 #include "mm-locks.h"
 
@@ -1345,11 +1346,18 @@ int mem_sharing_memop(struct domain *d, xen_mem_sharing_op_t *mec)
             if ( !mem_sharing_enabled(d) )
                 return -EINVAL;
 
-            cd = get_mem_event_op_target(mec->u.share.client_domain, &rc);
+            cd = rcu_lock_domain_by_id(mec->u.share.client_domain);
             if ( !cd )
+                return -ESRCH;
+
+            rc = xsm_mem_sharing_op(d, cd, mec->op);
+            if ( rc )
+            {
+                rcu_unlock_domain(cd);
                 return rc;
+            }
 
-            if ( !mem_sharing_enabled(cd) )
+            if ( cd == current->domain || !mem_sharing_enabled(cd) )
             {
                 rcu_unlock_domain(cd);
                 return -EINVAL;
@@ -1401,11 +1409,18 @@ int mem_sharing_memop(struct domain *d, xen_mem_sharing_op_t *mec)
             if ( !mem_sharing_enabled(d) )
                 return -EINVAL;
 
-            cd = get_mem_event_op_target(mec->u.share.client_domain, &rc);
+            cd = rcu_lock_domain_by_id(mec->u.share.client_domain);
             if ( !cd )
+                return -ESRCH;
+
+            rc = xsm_mem_sharing_op(d, cd, mec->op);
+            if ( rc )
+            {
+                rcu_unlock_domain(cd);
                 return rc;
+            }
 
-            if ( !mem_sharing_enabled(cd) )
+            if ( cd == current->domain || !mem_sharing_enabled(cd) )
             {
                 rcu_unlock_domain(cd);
                 return -EINVAL;
diff --git a/xen/include/asm-x86/mem_event.h b/xen/include/asm-x86/mem_event.h
index 23d71c1..448be4f 100644
--- a/xen/include/asm-x86/mem_event.h
+++ b/xen/include/asm-x86/mem_event.h
@@ -62,7 +62,6 @@ void mem_event_put_request(struct domain *d, struct mem_event_domain *med,
 int mem_event_get_response(struct domain *d, struct mem_event_domain *med,
                            mem_event_response_t *rsp);
 
-struct domain *get_mem_event_op_target(uint32_t domain, int *rc);
 int do_mem_event_op(int op, uint32_t domain, void *arg);
 int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
                      XEN_GUEST_HANDLE(void) u_domctl);
diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
index 0d849cc..c71c08b 100644
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -171,6 +171,13 @@ static XSM_DEFAULT(int, setdebugging) (struct domain *d)
     return 0;
 }
 
+static XSM_DEFAULT(int, debug_op) (struct domain *d)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
 static XSM_DEFAULT(int, perfcontrol) (void)
 {
     if ( !IS_PRIV(current->domain) )
@@ -557,6 +564,34 @@ static XSM_DEFAULT(int, getpageframeinfo) (struct page_info *page)
     return 0;
 }
 
+static XSM_DEFAULT(int, getpageframeinfo_domain) (struct domain *d)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, set_cpuid) (struct domain *d)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, gettscinfo) (struct domain *d)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, settscinfo) (struct domain *d)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
 static XSM_DEFAULT(int, getmemlist) (struct domain *d)
 {
     if ( !IS_PRIV(current->domain) )
@@ -627,13 +662,27 @@ static XSM_DEFAULT(int, hvm_inject_msi) (struct domain *d)
     return 0;
 }
 
-static XSM_DEFAULT(int, mem_event) (struct domain *d)
+static XSM_DEFAULT(int, mem_event_setup) (struct domain *d)
 {
     if ( !IS_PRIV(current->domain) )
         return -EPERM;
     return 0;
 }
 
+static XSM_DEFAULT(int, mem_event_control) (struct domain *d, int mode, int op)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, mem_event_op) (struct domain *d, int op)
+{
+    if ( !IS_PRIV_FOR(current->domain, d) )
+        return -EPERM;
+    return 0;
+}
+
 static XSM_DEFAULT(int, mem_sharing) (struct domain *d)
 {
     if ( !IS_PRIV(current->domain) )
@@ -641,6 +690,20 @@ static XSM_DEFAULT(int, mem_sharing) (struct domain *d)
     return 0;
 }
 
+static XSM_DEFAULT(int, mem_sharing_op) (struct domain *d, struct domain *cd, int op)
+{
+    if ( !IS_PRIV_FOR(current->domain, cd) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, audit_p2m) (struct domain *d)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
 static XSM_DEFAULT(int, apic) (struct domain *d, int cmd)
 {
     if ( !IS_PRIV(current->domain) )
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index 1a9f35b..b473b54 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -67,6 +67,7 @@ struct xsm_operations {
     int (*setdomainmaxmem) (struct domain *d);
     int (*setdomainhandle) (struct domain *d);
     int (*setdebugging) (struct domain *d);
+    int (*debug_op) (struct domain *d);
     int (*perfcontrol) (void);
     int (*debug_keys) (void);
     int (*getcpuinfo) (void);
@@ -142,6 +143,10 @@ struct xsm_operations {
 #ifdef CONFIG_X86
     int (*shadow_control) (struct domain *d, uint32_t op);
     int (*getpageframeinfo) (struct page_info *page);
+    int (*getpageframeinfo_domain) (struct domain *d);
+    int (*set_cpuid) (struct domain *d);
+    int (*gettscinfo) (struct domain *d);
+    int (*settscinfo) (struct domain *d);
     int (*getmemlist) (struct domain *d);
     int (*hypercall_init) (struct domain *d);
     int (*hvmcontext) (struct domain *d, uint32_t op);
@@ -152,8 +157,12 @@ struct xsm_operations {
     int (*hvm_set_isa_irq_level) (struct domain *d);
     int (*hvm_set_pci_link_route) (struct domain *d);
     int (*hvm_inject_msi) (struct domain *d);
-    int (*mem_event) (struct domain *d);
+    int (*mem_event_setup) (struct domain *d);
+    int (*mem_event_control) (struct domain *d, int mode, int op);
+    int (*mem_event_op) (struct domain *d, int op);
     int (*mem_sharing) (struct domain *d);
+    int (*mem_sharing_op) (struct domain *d, struct domain *cd, int op);
+    int (*audit_p2m) (struct domain *d);
     int (*apic) (struct domain *d, int cmd);
     int (*xen_settime) (void);
     int (*memtype) (uint32_t access);
@@ -302,6 +311,11 @@ static inline int xsm_setdebugging (struct domain *d)
     return xsm_call(setdebugging(d));
 }
 
+static inline int xsm_debug_op (struct domain *d)
+{
+    return xsm_call(debug_op(d));
+}
+
 static inline int xsm_perfcontrol (void)
 {
     return xsm_call(perfcontrol());
@@ -329,7 +343,7 @@ static inline int xsm_get_pmstat(void)
 
 static inline int xsm_setpminfo(void)
 {
-	return xsm_call(setpminfo());
+    return xsm_call(setpminfo());
 }
 
 static inline int xsm_pm_op(void)
@@ -608,6 +622,26 @@ static inline int xsm_getpageframeinfo (struct page_info *page)
     return xsm_call(getpageframeinfo(page));
 }
 
+static inline int xsm_getpageframeinfo_domain (struct domain *d)
+{
+    return xsm_call(getpageframeinfo_domain(d));
+}
+
+static inline int xsm_set_cpuid (struct domain *d)
+{
+    return xsm_call(set_cpuid(d));
+}
+
+static inline int xsm_gettscinfo (struct domain *d)
+{
+    return xsm_call(gettscinfo(d));
+}
+
+static inline int xsm_settscinfo (struct domain *d)
+{
+    return xsm_call(settscinfo(d));
+}
+
 static inline int xsm_getmemlist (struct domain *d)
 {
     return xsm_call(getmemlist(d));
@@ -658,9 +692,19 @@ static inline int xsm_hvm_inject_msi (struct domain *d)
     return xsm_call(hvm_inject_msi(d));
 }
 
-static inline int xsm_mem_event (struct domain *d)
+static inline int xsm_mem_event_setup (struct domain *d)
+{
+    return xsm_call(mem_event_setup(d));
+}
+
+static inline int xsm_mem_event_control (struct domain *d, int mode, int op)
+{
+    return xsm_call(mem_event_control(d, mode, op));
+}
+
+static inline int xsm_mem_event_op (struct domain *d, int op)
 {
-    return xsm_call(mem_event(d));
+    return xsm_call(mem_event_op(d, op));
 }
 
 static inline int xsm_mem_sharing (struct domain *d)
@@ -668,6 +712,16 @@ static inline int xsm_mem_sharing (struct domain *d)
     return xsm_call(mem_sharing(d));
 }
 
+static inline int xsm_mem_sharing_op (struct domain *d, struct domain *cd, int op)
+{
+    return xsm_call(mem_sharing_op(d, cd, op));
+}
+
+static inline int xsm_audit_p2m (struct domain *d)
+{
+    return xsm_call(audit_p2m(d));
+}
+
 static inline int xsm_apic (struct domain *d, int cmd)
 {
     return xsm_call(apic(d, cmd));
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index af532b8..09935d8 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -51,6 +51,7 @@ void xsm_fixup_ops (struct xsm_operations *ops)
     set_to_dummy_if_null(ops, setdomainmaxmem);
     set_to_dummy_if_null(ops, setdomainhandle);
     set_to_dummy_if_null(ops, setdebugging);
+    set_to_dummy_if_null(ops, debug_op);
     set_to_dummy_if_null(ops, perfcontrol);
     set_to_dummy_if_null(ops, debug_keys);
     set_to_dummy_if_null(ops, getcpuinfo);
@@ -124,6 +125,10 @@ void xsm_fixup_ops (struct xsm_operations *ops)
 #ifdef CONFIG_X86
     set_to_dummy_if_null(ops, shadow_control);
     set_to_dummy_if_null(ops, getpageframeinfo);
+    set_to_dummy_if_null(ops, getpageframeinfo_domain);
+    set_to_dummy_if_null(ops, set_cpuid);
+    set_to_dummy_if_null(ops, gettscinfo);
+    set_to_dummy_if_null(ops, settscinfo);
     set_to_dummy_if_null(ops, getmemlist);
     set_to_dummy_if_null(ops, hypercall_init);
     set_to_dummy_if_null(ops, hvmcontext);
@@ -134,8 +139,12 @@ void xsm_fixup_ops (struct xsm_operations *ops)
     set_to_dummy_if_null(ops, hvm_set_isa_irq_level);
     set_to_dummy_if_null(ops, hvm_set_pci_link_route);
     set_to_dummy_if_null(ops, hvm_inject_msi);
-    set_to_dummy_if_null(ops, mem_event);
+    set_to_dummy_if_null(ops, mem_event_setup);
+    set_to_dummy_if_null(ops, mem_event_control);
+    set_to_dummy_if_null(ops, mem_event_op);
     set_to_dummy_if_null(ops, mem_sharing);
+    set_to_dummy_if_null(ops, mem_sharing_op);
+    set_to_dummy_if_null(ops, audit_p2m);
     set_to_dummy_if_null(ops, apic);
     set_to_dummy_if_null(ops, xen_settime);
     set_to_dummy_if_null(ops, memtype);
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index f8aff14..4f71604 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -695,6 +695,12 @@ static int flask_setdebugging(struct domain *d)
                            DOMAIN__SETDEBUGGING);
 }
 
+static int flask_debug_op(struct domain *d)
+{
+    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN,
+                           DOMAIN__SETDEBUGGING);
+}
+
 static int flask_debug_keys(void)
 {
     return domain_has_xen(current->domain, XEN__DEBUG);
@@ -1111,6 +1117,26 @@ static int flask_getpageframeinfo(struct page_info *page)
     return avc_has_perm(dsec->sid, tsid, SECCLASS_MMU, MMU__PAGEINFO, NULL);    
 }
 
+static int flask_getpageframeinfo_domain(struct domain *d)
+{
+    return domain_has_perm(current->domain, d, SECCLASS_MMU, MMU__PAGEINFO);
+}
+
+static int flask_set_cpuid(struct domain *d)
+{
+    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN2, DOMAIN2__SET_CPUID);
+}
+
+static int flask_gettscinfo(struct domain *d)
+{
+    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN2, DOMAIN2__GETTSC);
+}
+
+static int flask_settscinfo(struct domain *d)
+{
+    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN2, DOMAIN2__SETTSC);
+}
+
 static int flask_getmemlist(struct domain *d)
 {
     return domain_has_perm(current->domain, d, SECCLASS_MMU, MMU__PAGELIST);
@@ -1201,7 +1227,17 @@ static int flask_hvm_set_pci_link_route(struct domain *d)
     return domain_has_perm(current->domain, d, SECCLASS_HVM, HVM__PCIROUTE);
 }
 
-static int flask_mem_event(struct domain *d)
+static int flask_mem_event_setup(struct domain *d)
+{
+    return domain_has_perm(current->domain, d, SECCLASS_HVM, HVM__MEM_EVENT);
+}
+
+static int flask_mem_event_control(struct domain *d, int mode, int op)
+{
+    return domain_has_perm(current->domain, d, SECCLASS_HVM, HVM__MEM_EVENT);
+}
+
+static int flask_mem_event_op(struct domain *d, int op)
 {
     return domain_has_perm(current->domain, d, SECCLASS_HVM, HVM__MEM_EVENT);
 }
@@ -1211,6 +1247,19 @@ static int flask_mem_sharing(struct domain *d)
     return domain_has_perm(current->domain, d, SECCLASS_HVM, HVM__MEM_SHARING);
 }
 
+static int flask_mem_sharing_op(struct domain *d, struct domain *cd, int op)
+{
+    int rc = domain_has_perm(current->domain, cd, SECCLASS_HVM, HVM__MEM_SHARING);
+    if ( rc )
+        return rc;
+    return domain_has_perm(d, cd, SECCLASS_HVM, HVM__SHARE_MEM);
+}
+
+static int flask_audit_p2m(struct domain *d)
+{
+    return domain_has_perm(current->domain, d, SECCLASS_HVM, HVM__AUDIT_P2M);
+}
+
 static int flask_apic(struct domain *d, int cmd)
 {
     u32 perm;
@@ -1586,6 +1635,7 @@ static struct xsm_operations flask_ops = {
     .setdomainmaxmem = flask_setdomainmaxmem,
     .setdomainhandle = flask_setdomainhandle,
     .setdebugging = flask_setdebugging,
+    .debug_op = flask_debug_op,
     .perfcontrol = flask_perfcontrol,
     .debug_keys = flask_debug_keys,
     .getcpuinfo = flask_getcpuinfo,
@@ -1654,6 +1704,10 @@ static struct xsm_operations flask_ops = {
 #ifdef CONFIG_X86
     .shadow_control = flask_shadow_control,
     .getpageframeinfo = flask_getpageframeinfo,
+    .getpageframeinfo_domain = flask_getpageframeinfo_domain,
+    .set_cpuid = flask_set_cpuid,
+    .gettscinfo = flask_gettscinfo,
+    .settscinfo = flask_settscinfo,
     .getmemlist = flask_getmemlist,
     .hypercall_init = flask_hypercall_init,
     .hvmcontext = flask_hvmcontext,
@@ -1662,8 +1716,12 @@ static struct xsm_operations flask_ops = {
     .hvm_set_pci_intx_level = flask_hvm_set_pci_intx_level,
     .hvm_set_isa_irq_level = flask_hvm_set_isa_irq_level,
     .hvm_set_pci_link_route = flask_hvm_set_pci_link_route,
-    .mem_event = flask_mem_event,
+    .mem_event_setup = flask_mem_event_setup,
+    .mem_event_control = flask_mem_event_control,
+    .mem_event_op = flask_mem_event_op,
     .mem_sharing = flask_mem_sharing,
+    .mem_sharing_op = flask_mem_sharing_op,
+    .audit_p2m = flask_audit_p2m,
     .apic = flask_apic,
     .xen_settime = flask_xen_settime,
     .memtype = flask_memtype,
diff --git a/xen/xsm/flask/include/av_perm_to_string.h b/xen/xsm/flask/include/av_perm_to_string.h
index 10f8e80..997f098 100644
--- a/xen/xsm/flask/include/av_perm_to_string.h
+++ b/xen/xsm/flask/include/av_perm_to_string.h
@@ -66,6 +66,9 @@
    S_(SECCLASS_DOMAIN2, DOMAIN2__RELABELSELF, "relabelself")
    S_(SECCLASS_DOMAIN2, DOMAIN2__MAKE_PRIV_FOR, "make_priv_for")
    S_(SECCLASS_DOMAIN2, DOMAIN2__SET_AS_TARGET, "set_as_target")
+   S_(SECCLASS_DOMAIN2, DOMAIN2__SET_CPUID, "set_cpuid")
+   S_(SECCLASS_DOMAIN2, DOMAIN2__GETTSC, "gettsc")
+   S_(SECCLASS_DOMAIN2, DOMAIN2__SETTSC, "settsc")
    S_(SECCLASS_HVM, HVM__SETHVMC, "sethvmc")
    S_(SECCLASS_HVM, HVM__GETHVMC, "gethvmc")
    S_(SECCLASS_HVM, HVM__SETPARAM, "setparam")
@@ -79,6 +82,8 @@
    S_(SECCLASS_HVM, HVM__HVMCTL, "hvmctl")
    S_(SECCLASS_HVM, HVM__MEM_EVENT, "mem_event")
    S_(SECCLASS_HVM, HVM__MEM_SHARING, "mem_sharing")
+   S_(SECCLASS_HVM, HVM__SHARE_MEM, "share_mem")
+   S_(SECCLASS_HVM, HVM__AUDIT_P2M, "audit_p2m")
    S_(SECCLASS_EVENT, EVENT__BIND, "bind")
    S_(SECCLASS_EVENT, EVENT__SEND, "send")
    S_(SECCLASS_EVENT, EVENT__STATUS, "status")
diff --git a/xen/xsm/flask/include/av_permissions.h b/xen/xsm/flask/include/av_permissions.h
index f7cfee1..8596a55 100644
--- a/xen/xsm/flask/include/av_permissions.h
+++ b/xen/xsm/flask/include/av_permissions.h
@@ -68,6 +68,9 @@
 #define DOMAIN2__RELABELSELF                      0x00000004UL
 #define DOMAIN2__MAKE_PRIV_FOR                    0x00000008UL
 #define DOMAIN2__SET_AS_TARGET                    0x00000010UL
+#define DOMAIN2__SET_CPUID                        0x00000020UL
+#define DOMAIN2__GETTSC                           0x00000040UL
+#define DOMAIN2__SETTSC                           0x00000080UL
 
 #define HVM__SETHVMC                              0x00000001UL
 #define HVM__GETHVMC                              0x00000002UL
@@ -82,6 +85,8 @@
 #define HVM__HVMCTL                               0x00000400UL
 #define HVM__MEM_EVENT                            0x00000800UL
 #define HVM__MEM_SHARING                          0x00001000UL
+#define HVM__SHARE_MEM                            0x00002000UL
+#define HVM__AUDIT_P2M                            0x00004000UL
 
 #define EVENT__BIND                               0x00000001UL
 #define EVENT__SEND                               0x00000002UL
-- 
1.7.11.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

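[Archive note: the do_mem_event_op rework in the patch above replaces a helper with an inline lookup plus a single `out:` exit label. A minimal compilable model of that lock/check/goto-out pattern, using hypothetical stand-ins for Xen's `struct domain` and `rcu_lock_domain_by_id()` (the real ones live in xen/include/xen/sched.h), is:]

```c
#include <errno.h>
#include <stddef.h>

/* Illustrative model only: trivial stand-ins for Xen's RCU domain lookup. */
struct domain { int id; int is_dying; int locked; };

static struct domain domains[2] = { { 0, 0, 0 }, { 1, 1, 0 } };

static struct domain *rcu_lock_domain_by_id(int id)
{
    if ( id < 0 || id >= 2 )
        return NULL;
    domains[id].locked = 1;
    return &domains[id];
}

static void rcu_unlock_domain(struct domain *d)
{
    d->locked = 0;
}

/* The single-exit shape of the reworked do_mem_event_op(): every failure
 * after the lookup jumps to "out", so the lock is dropped exactly once. */
static int do_op(int domid)
{
    struct domain *d = rcu_lock_domain_by_id(domid);
    int ret;

    if ( !d )
        return -ESRCH;

    ret = -EINVAL;
    if ( d->is_dying )
        goto out;

    ret = 0;            /* the real switch (op) dispatch would sit here */
 out:
    rcu_unlock_domain(d);
    return ret;
}
```

The point of the pattern is that adding the new xsm_mem_event_op() check required only one more `goto out`, not another copy of the unlock call on each error path.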
From xen-devel-bounces@lists.xen.org Mon Aug 06 14:32:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:32:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOMX-0002tm-Sx; Mon, 06 Aug 2012 14:32:53 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1SyOMV-0002k6-F2
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 14:32:51 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-8.tower-27.messagelabs.com!1344263561!11031783!1
X-Originating-IP: [63.239.67.9]
X-SpamReason: No, hits=2.5 required=7.0 tests=BODY_RANDOM_LONG,LONGWORDS
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29336 invoked from network); 6 Aug 2012 14:32:41 -0000
Received: from emvm-gh1-uea08.nsa.gov (HELO nsa.gov) (63.239.67.9)
	by server-8.tower-27.messagelabs.com with SMTP;
	6 Aug 2012 14:32:41 -0000
X-TM-IMSS-Message-ID: <7b0756030002b22d@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov ([63.239.67.9])
	with ESMTP (TREND IMSS SMTP Service 7.1) id 7b0756030002b22d ;
	Mon, 6 Aug 2012 10:32:50 -0400
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id q76EWZ9W011112; 
	Mon, 6 Aug 2012 10:32:39 -0400
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: xen-devel@lists.xen.org
Date: Mon,  6 Aug 2012 10:32:24 -0400
Message-Id: <1344263550-3941-13-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.2
In-Reply-To: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
References: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Subject: [Xen-devel] [PATCH 12/18] xsm: Add missing domctl and mem_sharing
	hooks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch adds XSM hooks for the 12 domctls that previously had none, and
splits the mem_sharing and mem_event XSM hooks so that each hook corresponds
more precisely to the operation being checked.
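[The dummy-policy hooks added below all follow the same shape: a privilege
test against the calling domain. A self-contained sketch of that shape, with
hypothetical stand-ins for Xen's `struct domain`, IS_PRIV() and IS_PRIV_FOR()
(the real definitions are in xen/include/xen/sched.h), is:]

```c
#include <errno.h>
#include <stddef.h>

/* Illustrative model only: field names mirror the real struct domain
 * but this is not the Xen definition. */
struct domain {
    int is_privileged;          /* dom0-style full privilege */
    struct domain *target;      /* domain this one is privileged over */
};

#define IS_PRIV(d)        ((d)->is_privileged)
#define IS_PRIV_FOR(d, t) (IS_PRIV(d) || ((d)->target == (t)))

/* Shape of the new dummy mem_event_op hook: the caller must be
 * privileged over the target domain, not just globally privileged. */
static int dummy_mem_event_op(struct domain *curr, struct domain *d, int op)
{
    (void)op;                   /* the dummy policy ignores the sub-op */
    if ( !IS_PRIV_FOR(curr, d) )
        return -EPERM;
    return 0;
}
```

Using IS_PRIV_FOR rather than IS_PRIV is what lets a per-domain device model
(a stub domain with `target` set) drive mem_event for its guest without being
fully privileged.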

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 tools/flask/policy/policy/flask/access_vectors |   5 +
 tools/flask/policy/policy/modules/xen/xen.if   |   2 +
 xen/arch/x86/domctl.c                          | 125 +++++++++++++++----------
 xen/arch/x86/mm/mem_event.c                    |  45 ++++-----
 xen/arch/x86/mm/mem_sharing.c                  |  23 ++++-
 xen/include/asm-x86/mem_event.h                |   1 -
 xen/include/xsm/dummy.h                        |  65 ++++++++++++-
 xen/include/xsm/xsm.h                          |  62 +++++++++++-
 xen/xsm/dummy.c                                |  11 ++-
 xen/xsm/flask/hooks.c                          |  62 +++++++++++-
 xen/xsm/flask/include/av_perm_to_string.h      |   5 +
 xen/xsm/flask/include/av_permissions.h         |   5 +
 12 files changed, 318 insertions(+), 93 deletions(-)

diff --git a/tools/flask/policy/policy/flask/access_vectors b/tools/flask/policy/policy/flask/access_vectors
index 11d02da..28b8ada 100644
--- a/tools/flask/policy/policy/flask/access_vectors
+++ b/tools/flask/policy/policy/flask/access_vectors
@@ -80,6 +80,9 @@ class domain2
 	relabelself
 	make_priv_for
 	set_as_target
+	set_cpuid
+	gettsc
+	settsc
 }
 
 class hvm
@@ -97,6 +100,8 @@ class hvm
     hvmctl
     mem_event
     mem_sharing
+	share_mem
+	audit_p2m
 }
 
 class event
diff --git a/tools/flask/policy/policy/modules/xen/xen.if b/tools/flask/policy/policy/modules/xen/xen.if
index 4de99c8..f9bd757 100644
--- a/tools/flask/policy/policy/modules/xen/xen.if
+++ b/tools/flask/policy/policy/modules/xen/xen.if
@@ -29,6 +29,7 @@ define(`create_domain_common', `
 			getdomaininfo hypercall setvcpucontext setextvcpucontext
 			scheduler getvcpuinfo getvcpuextstate getaddrsize
 			getvcpuaffinity setvcpuaffinity };
+	allow $1 $2:domain2 { set_cpuid settsc };
 	allow $1 $2:security check_context;
 	allow $1 $2:shadow enable;
 	allow $1 $2:mmu {map_read map_write adjust memorymap physmap pinpage};
@@ -67,6 +68,7 @@ define(`migrate_domain_out', `
 	allow $1 $2:hvm { gethvmc getparam irqlevel };
 	allow $1 $2:mmu { stat pageinfo map_read };
 	allow $1 $2:domain { getaddrsize getvcpucontext getextvcpucontext getvcpuextstate pause destroy };
+	allow $1 $2:domain2 gettsc;
 ')
 
 ################################################################################
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index bcb5b2d..95f34d2 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -54,26 +54,6 @@ long arch_do_domctl(
 
     switch ( domctl->cmd )
     {
-    /* TODO: the following do not have XSM hooks yet */
-    case XEN_DOMCTL_set_cpuid:
-    case XEN_DOMCTL_suppress_spurious_page_faults:
-    case XEN_DOMCTL_debug_op:
-    case XEN_DOMCTL_gettscinfo:
-    case XEN_DOMCTL_settscinfo:
-    case XEN_DOMCTL_audit_p2m:
-    case XEN_DOMCTL_gdbsx_guestmemio:
-    case XEN_DOMCTL_gdbsx_pausevcpu:
-    case XEN_DOMCTL_gdbsx_unpausevcpu:
-    case XEN_DOMCTL_gdbsx_domstatus:
-    /* getpageframeinfo[23] will leak XEN_DOMCTL_PFINFO_XTAB on target GFNs */
-    case XEN_DOMCTL_getpageframeinfo2:
-    case XEN_DOMCTL_getpageframeinfo3:
-        if ( !IS_PRIV(current->domain) )
-            return -EPERM;
-    }
-
-    switch ( domctl->cmd )
-    {
 
     case XEN_DOMCTL_shadow_op:
     {
@@ -190,6 +170,13 @@ long arch_do_domctl(
             if ( unlikely((d = rcu_lock_domain_by_id(dom)) == NULL) )
                 break;
 
+            ret = xsm_getpageframeinfo_domain(d);
+            if ( ret )
+            {
+                rcu_unlock_domain(d);
+                break;
+            }
+
             if ( unlikely(num > 1024) ||
                  unlikely(num != domctl->u.getpageframeinfo3.num) )
             {
@@ -287,6 +274,13 @@ long arch_do_domctl(
         if ( unlikely((d = rcu_lock_domain_by_id(dom)) == NULL) )
             break;
 
+        ret = xsm_getpageframeinfo_domain(d);
+        if ( ret )
+        {
+            rcu_unlock_domain(d);
+            break;
+        }
+
         if ( unlikely(num > 1024) )
         {
             ret = -E2BIG;
@@ -1106,6 +1100,10 @@ long arch_do_domctl(
         if ( d == NULL )
             break;
 
+        ret = xsm_set_cpuid(d);
+        if ( ret )
+            goto set_cpuid_out;
+
         for ( i = 0; i < MAX_CPUID_INPUT; i++ )
         {
             cpuid = &d->arch.cpuids[i];
@@ -1129,6 +1127,7 @@ long arch_do_domctl(
             ret = 0;
         }
 
+    set_cpuid_out:
         rcu_unlock_domain(d);
     }
     break;
@@ -1143,6 +1142,10 @@ long arch_do_domctl(
         if ( d == NULL )
             break;
 
+        ret = xsm_gettscinfo(d);
+        if ( ret )
+            goto gettscinfo_out;
+
         domain_pause(d);
         tsc_get_info(d, &info.tsc_mode,
                         &info.elapsed_nsec,
@@ -1154,6 +1157,7 @@ long arch_do_domctl(
             ret = 0;
         domain_unpause(d);
 
+    gettscinfo_out:
         rcu_unlock_domain(d);
     }
     break;
@@ -1167,15 +1171,20 @@ long arch_do_domctl(
         if ( d == NULL )
             break;
 
+        ret = xsm_settscinfo(d);
+        if ( ret )
+            goto settscinfo_out;
+
         domain_pause(d);
         tsc_set_info(d, domctl->u.tsc_info.info.tsc_mode,
                      domctl->u.tsc_info.info.elapsed_nsec,
                      domctl->u.tsc_info.info.gtsc_khz,
                      domctl->u.tsc_info.info.incarnation);
         domain_unpause(d);
+        ret = 0;
 
+    settscinfo_out:
         rcu_unlock_domain(d);
-        ret = 0;
     }
     break;
 
@@ -1187,9 +1196,10 @@ long arch_do_domctl(
         d = rcu_lock_domain_by_id(domctl->domain);
         if ( d != NULL )
         {
-            d->arch.suppress_spurious_page_faults = 1;
+            ret = xsm_domctl(d, domctl->cmd);
+            if ( !ret )
+                d->arch.suppress_spurious_page_faults = 1;
             rcu_unlock_domain(d);
-            ret = 0;
         }
     }
     break;
@@ -1204,6 +1214,10 @@ long arch_do_domctl(
         if ( d == NULL )
             break;
 
+        ret = xsm_debug_op(d);
+        if ( ret )
+            goto debug_op_out;
+
         ret = -EINVAL;
         if ( (domctl->u.debug_op.vcpu >= d->max_vcpus) ||
              ((v = d->vcpu[domctl->u.debug_op.vcpu]) == NULL) )
@@ -1228,6 +1242,10 @@ long arch_do_domctl(
         if ( (d = rcu_lock_domain_by_id(domctl->domain)) == NULL )
             break;
 
+        ret = xsm_debug_op(d);
+        if ( ret )
+            goto gdbsx_guestmemio_out;
+
         domctl->u.gdbsx_guest_memio.remain =
             domctl->u.gdbsx_guest_memio.len;
 
@@ -1235,6 +1253,7 @@ long arch_do_domctl(
         if ( !ret && copy_to_guest(u_domctl, domctl, 1) )
             ret = -EFAULT;
 
+    gdbsx_guestmemio_out:
         rcu_unlock_domain(d);
     }
     break;
@@ -1248,21 +1267,20 @@ long arch_do_domctl(
         if ( (d = rcu_lock_domain_by_id(domctl->domain)) == NULL )
             break;
 
+        ret = xsm_debug_op(d);
+        if ( ret )
+            goto gdbsx_pausevcpu_out;
+
         ret = -EBUSY;
         if ( !d->is_paused_by_controller )
-        {
-            rcu_unlock_domain(d);
-            break;
-        }
+            goto gdbsx_pausevcpu_out;
         ret = -EINVAL;
         if ( domctl->u.gdbsx_pauseunp_vcpu.vcpu >= MAX_VIRT_CPUS ||
              (v = d->vcpu[domctl->u.gdbsx_pauseunp_vcpu.vcpu]) == NULL )
-        {
-            rcu_unlock_domain(d);
-            break;
-        }
+            goto gdbsx_pausevcpu_out;
         vcpu_pause(v);
         ret = 0;
+    gdbsx_pausevcpu_out:
         rcu_unlock_domain(d);
     }
     break;
@@ -1276,23 +1294,22 @@ long arch_do_domctl(
         if ( (d = rcu_lock_domain_by_id(domctl->domain)) == NULL )
             break;
 
+        ret = xsm_debug_op(d);
+        if ( ret )
+            goto gdbsx_unpausevcpu_out;
+
         ret = -EBUSY;
         if ( !d->is_paused_by_controller )
-        {
-            rcu_unlock_domain(d);
-            break;
-        }
+            goto gdbsx_unpausevcpu_out;
         ret = -EINVAL;
         if ( domctl->u.gdbsx_pauseunp_vcpu.vcpu >= MAX_VIRT_CPUS ||
              (v = d->vcpu[domctl->u.gdbsx_pauseunp_vcpu.vcpu]) == NULL )
-        {
-            rcu_unlock_domain(d);
-            break;
-        }
+            goto gdbsx_unpausevcpu_out;
         if ( !atomic_read(&v->pause_count) )
             printk("WARN: Unpausing vcpu:%d which is not paused\n", v->vcpu_id);
         vcpu_unpause(v);
         ret = 0;
+    gdbsx_unpausevcpu_out:
         rcu_unlock_domain(d);
     }
     break;
@@ -1306,6 +1323,10 @@ long arch_do_domctl(
         if ( (d = rcu_lock_domain_by_id(domctl->domain)) == NULL )
             break;
 
+        ret = xsm_debug_op(d);
+        if ( ret )
+            goto gdbsx_domstatus_out;
+
         domctl->u.gdbsx_domstatus.vcpu_id = -1;
         domctl->u.gdbsx_domstatus.paused = d->is_paused_by_controller;
         if ( domctl->u.gdbsx_domstatus.paused )
@@ -1325,6 +1346,7 @@ long arch_do_domctl(
         ret = 0;
         if ( copy_to_guest(u_domctl, domctl, 1) )
             ret = -EFAULT;
+    gdbsx_domstatus_out:
         rcu_unlock_domain(d);
     }
     break;
@@ -1464,10 +1486,8 @@ long arch_do_domctl(
         d = rcu_lock_domain_by_id(domctl->domain);
         if ( d != NULL )
         {
-            ret = xsm_mem_event(d);
-            if ( !ret )
-                ret = mem_event_domctl(d, &domctl->u.mem_event_op,
-                                       guest_handle_cast(u_domctl, void));
+            ret = mem_event_domctl(d, &domctl->u.mem_event_op,
+                                   guest_handle_cast(u_domctl, void));
             rcu_unlock_domain(d);
             copy_to_guest(u_domctl, domctl, 1);
         } 
@@ -1496,16 +1516,19 @@ long arch_do_domctl(
     {
         struct domain *d;
 
-        ret = rcu_lock_remote_target_domain_by_id(domctl->domain, &d);
-        if ( ret != 0 )
+        d = rcu_lock_domain_by_id(domctl->domain);
+        if ( d == NULL )
             break;
 
-        audit_p2m(d,
-                  &domctl->u.audit_p2m.orphans,
-                  &domctl->u.audit_p2m.m2p_bad,
-                  &domctl->u.audit_p2m.p2m_bad);
+        ret = xsm_audit_p2m(d);
+        if ( !ret )
+            audit_p2m(d,
+                      &domctl->u.audit_p2m.orphans,
+                      &domctl->u.audit_p2m.m2p_bad,
+                      &domctl->u.audit_p2m.p2m_bad);
+
         rcu_unlock_domain(d);
-        if ( copy_to_guest(u_domctl, domctl, 1) ) 
+        if ( !ret && copy_to_guest(u_domctl, domctl, 1) ) 
             ret = -EFAULT;
     }
     break;
@@ -1524,7 +1547,7 @@ long arch_do_domctl(
         d = rcu_lock_domain_by_id(domctl->domain);
         if ( d != NULL )
         {
-            ret = xsm_mem_event(d);
+            ret = xsm_mem_event_setup(d);
             if ( !ret ) {
                 p2m = p2m_get_hostp2m(d);
                 p2m->access_required = domctl->u.access_required.access_required;
diff --git a/xen/arch/x86/mm/mem_event.c b/xen/arch/x86/mm/mem_event.c
index d728889..a5b02d9 100644
--- a/xen/arch/x86/mm/mem_event.c
+++ b/xen/arch/x86/mm/mem_event.c
@@ -29,6 +29,7 @@
 #include <asm/mem_paging.h>
 #include <asm/mem_access.h>
 #include <asm/mem_sharing.h>
+#include <xsm/xsm.h>
 
 /* for public/io/ring.h macros */
 #define xen_mb()   mb()
@@ -439,34 +440,22 @@ static void mem_sharing_notification(struct vcpu *v, unsigned int port)
         mem_sharing_sharing_resume(v->domain);
 }
 
-struct domain *get_mem_event_op_target(uint32_t domain, int *rc)
-{
-    struct domain *d;
-
-    /* Get the target domain */
-    *rc = rcu_lock_remote_target_domain_by_id(domain, &d);
-    if ( *rc != 0 )
-        return NULL;
-
-    /* Not dying? */
-    if ( d->is_dying )
-    {
-        rcu_unlock_domain(d);
-        *rc = -EINVAL;
-        return NULL;
-    }
-    
-    return d;
-}
-
 int do_mem_event_op(int op, uint32_t domain, void *arg)
 {
     int ret;
     struct domain *d;
 
-    d = get_mem_event_op_target(domain, &ret);
+    d = rcu_lock_domain_by_id(domain);
     if ( !d )
-        return ret;
+        return -ESRCH;
+
+    ret = -EINVAL;
+    if ( d->is_dying || d == current->domain )
+        goto out;
+
+    ret = xsm_mem_event_op(d, op);
+    if ( ret )
+        goto out;
 
     switch (op)
     {
@@ -483,6 +472,7 @@ int do_mem_event_op(int op, uint32_t domain, void *arg)
             ret = -ENOSYS;
     }
 
+ out:
     rcu_unlock_domain(d);
     return ret;
 }
@@ -516,6 +506,10 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
 {
     int rc;
 
+    rc = xsm_mem_event_control(d, mec->mode, mec->op);
+    if ( rc )
+        return rc;
+
     if ( unlikely(d == current->domain) )
     {
         gdprintk(XENLOG_INFO, "Tried to do a memory event op on itself.\n");
@@ -537,13 +531,6 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
         return -EINVAL;
     }
 
-    /* TODO: XSM hook */
-#if 0
-    rc = xsm_mem_event_control(d, mec->op);
-    if ( rc )
-        return rc;
-#endif
-
     rc = -ENOSYS;
 
     switch ( mec->mode )
diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index 5103285..a7e6c5c 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -34,6 +34,7 @@
 #include <asm/atomic.h>
 #include <xen/rcupdate.h>
 #include <asm/event.h>
+#include <xsm/xsm.h>
 
 #include "mm-locks.h"
 
@@ -1345,11 +1346,18 @@ int mem_sharing_memop(struct domain *d, xen_mem_sharing_op_t *mec)
             if ( !mem_sharing_enabled(d) )
                 return -EINVAL;
 
-            cd = get_mem_event_op_target(mec->u.share.client_domain, &rc);
+            cd = rcu_lock_domain_by_id(mec->u.share.client_domain);
             if ( !cd )
+                return -ESRCH;
+
+            rc = xsm_mem_sharing_op(d, cd, mec->op);
+            if ( rc )
+            {
+                rcu_unlock_domain(cd);
                 return rc;
+            }
 
-            if ( !mem_sharing_enabled(cd) )
+            if ( cd == current->domain || !mem_sharing_enabled(cd) )
             {
                 rcu_unlock_domain(cd);
                 return -EINVAL;
@@ -1401,11 +1409,18 @@ int mem_sharing_memop(struct domain *d, xen_mem_sharing_op_t *mec)
             if ( !mem_sharing_enabled(d) )
                 return -EINVAL;
 
-            cd = get_mem_event_op_target(mec->u.share.client_domain, &rc);
+            cd = rcu_lock_domain_by_id(mec->u.share.client_domain);
             if ( !cd )
+                return -ESRCH;
+
+            rc = xsm_mem_sharing_op(d, cd, mec->op);
+            if ( rc )
+            {
+                rcu_unlock_domain(cd);
                 return rc;
+            }
 
-            if ( !mem_sharing_enabled(cd) )
+            if ( cd == current->domain || !mem_sharing_enabled(cd) )
             {
                 rcu_unlock_domain(cd);
                 return -EINVAL;
diff --git a/xen/include/asm-x86/mem_event.h b/xen/include/asm-x86/mem_event.h
index 23d71c1..448be4f 100644
--- a/xen/include/asm-x86/mem_event.h
+++ b/xen/include/asm-x86/mem_event.h
@@ -62,7 +62,6 @@ void mem_event_put_request(struct domain *d, struct mem_event_domain *med,
 int mem_event_get_response(struct domain *d, struct mem_event_domain *med,
                            mem_event_response_t *rsp);
 
-struct domain *get_mem_event_op_target(uint32_t domain, int *rc);
 int do_mem_event_op(int op, uint32_t domain, void *arg);
 int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
                      XEN_GUEST_HANDLE(void) u_domctl);
diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
index 0d849cc..c71c08b 100644
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -171,6 +171,13 @@ static XSM_DEFAULT(int, setdebugging) (struct domain *d)
     return 0;
 }
 
+static XSM_DEFAULT(int, debug_op) (struct domain *d)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
 static XSM_DEFAULT(int, perfcontrol) (void)
 {
     if ( !IS_PRIV(current->domain) )
@@ -557,6 +564,34 @@ static XSM_DEFAULT(int, getpageframeinfo) (struct page_info *page)
     return 0;
 }
 
+static XSM_DEFAULT(int, getpageframeinfo_domain) (struct domain *d)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, set_cpuid) (struct domain *d)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, gettscinfo) (struct domain *d)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, settscinfo) (struct domain *d)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
 static XSM_DEFAULT(int, getmemlist) (struct domain *d)
 {
     if ( !IS_PRIV(current->domain) )
@@ -627,13 +662,27 @@ static XSM_DEFAULT(int, hvm_inject_msi) (struct domain *d)
     return 0;
 }
 
-static XSM_DEFAULT(int, mem_event) (struct domain *d)
+static XSM_DEFAULT(int, mem_event_setup) (struct domain *d)
 {
     if ( !IS_PRIV(current->domain) )
         return -EPERM;
     return 0;
 }
 
+static XSM_DEFAULT(int, mem_event_control) (struct domain *d, int mode, int op)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, mem_event_op) (struct domain *d, int op)
+{
+    if ( !IS_PRIV_FOR(current->domain, d) )
+        return -EPERM;
+    return 0;
+}
+
 static XSM_DEFAULT(int, mem_sharing) (struct domain *d)
 {
     if ( !IS_PRIV(current->domain) )
@@ -641,6 +690,20 @@ static XSM_DEFAULT(int, mem_sharing) (struct domain *d)
     return 0;
 }
 
+static XSM_DEFAULT(int, mem_sharing_op) (struct domain *d, struct domain *cd, int op)
+{
+    if ( !IS_PRIV_FOR(current->domain, cd) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, audit_p2m) (struct domain *d)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
 static XSM_DEFAULT(int, apic) (struct domain *d, int cmd)
 {
     if ( !IS_PRIV(current->domain) )
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index 1a9f35b..b473b54 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -67,6 +67,7 @@ struct xsm_operations {
     int (*setdomainmaxmem) (struct domain *d);
     int (*setdomainhandle) (struct domain *d);
     int (*setdebugging) (struct domain *d);
+    int (*debug_op) (struct domain *d);
     int (*perfcontrol) (void);
     int (*debug_keys) (void);
     int (*getcpuinfo) (void);
@@ -142,6 +143,10 @@ struct xsm_operations {
 #ifdef CONFIG_X86
     int (*shadow_control) (struct domain *d, uint32_t op);
     int (*getpageframeinfo) (struct page_info *page);
+    int (*getpageframeinfo_domain) (struct domain *d);
+    int (*set_cpuid) (struct domain *d);
+    int (*gettscinfo) (struct domain *d);
+    int (*settscinfo) (struct domain *d);
     int (*getmemlist) (struct domain *d);
     int (*hypercall_init) (struct domain *d);
     int (*hvmcontext) (struct domain *d, uint32_t op);
@@ -152,8 +157,12 @@ struct xsm_operations {
     int (*hvm_set_isa_irq_level) (struct domain *d);
     int (*hvm_set_pci_link_route) (struct domain *d);
     int (*hvm_inject_msi) (struct domain *d);
-    int (*mem_event) (struct domain *d);
+    int (*mem_event_setup) (struct domain *d);
+    int (*mem_event_control) (struct domain *d, int mode, int op);
+    int (*mem_event_op) (struct domain *d, int op);
     int (*mem_sharing) (struct domain *d);
+    int (*mem_sharing_op) (struct domain *d, struct domain *cd, int op);
+    int (*audit_p2m) (struct domain *d);
     int (*apic) (struct domain *d, int cmd);
     int (*xen_settime) (void);
     int (*memtype) (uint32_t access);
@@ -302,6 +311,11 @@ static inline int xsm_setdebugging (struct domain *d)
     return xsm_call(setdebugging(d));
 }
 
+static inline int xsm_debug_op (struct domain *d)
+{
+    return xsm_call(debug_op(d));
+}
+
 static inline int xsm_perfcontrol (void)
 {
     return xsm_call(perfcontrol());
@@ -329,7 +343,7 @@ static inline int xsm_get_pmstat(void)
 
 static inline int xsm_setpminfo(void)
 {
-	return xsm_call(setpminfo());
+    return xsm_call(setpminfo());
 }
 
 static inline int xsm_pm_op(void)
@@ -608,6 +622,26 @@ static inline int xsm_getpageframeinfo (struct page_info *page)
     return xsm_call(getpageframeinfo(page));
 }
 
+static inline int xsm_getpageframeinfo_domain (struct domain *d)
+{
+    return xsm_call(getpageframeinfo_domain(d));
+}
+
+static inline int xsm_set_cpuid (struct domain *d)
+{
+    return xsm_call(set_cpuid(d));
+}
+
+static inline int xsm_gettscinfo (struct domain *d)
+{
+    return xsm_call(gettscinfo(d));
+}
+
+static inline int xsm_settscinfo (struct domain *d)
+{
+    return xsm_call(settscinfo(d));
+}
+
 static inline int xsm_getmemlist (struct domain *d)
 {
     return xsm_call(getmemlist(d));
@@ -658,9 +692,19 @@ static inline int xsm_hvm_inject_msi (struct domain *d)
     return xsm_call(hvm_inject_msi(d));
 }
 
-static inline int xsm_mem_event (struct domain *d)
+static inline int xsm_mem_event_setup (struct domain *d)
+{
+    return xsm_call(mem_event_setup(d));
+}
+
+static inline int xsm_mem_event_control (struct domain *d, int mode, int op)
+{
+    return xsm_call(mem_event_control(d, mode, op));
+}
+
+static inline int xsm_mem_event_op (struct domain *d, int op)
 {
-    return xsm_call(mem_event(d));
+    return xsm_call(mem_event_op(d, op));
 }
 
 static inline int xsm_mem_sharing (struct domain *d)
@@ -668,6 +712,16 @@ static inline int xsm_mem_sharing (struct domain *d)
     return xsm_call(mem_sharing(d));
 }
 
+static inline int xsm_mem_sharing_op (struct domain *d, struct domain *cd, int op)
+{
+    return xsm_call(mem_sharing_op(d, cd, op));
+}
+
+static inline int xsm_audit_p2m (struct domain *d)
+{
+    return xsm_call(audit_p2m(d));
+}
+
 static inline int xsm_apic (struct domain *d, int cmd)
 {
     return xsm_call(apic(d, cmd));
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index af532b8..09935d8 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -51,6 +51,7 @@ void xsm_fixup_ops (struct xsm_operations *ops)
     set_to_dummy_if_null(ops, setdomainmaxmem);
     set_to_dummy_if_null(ops, setdomainhandle);
     set_to_dummy_if_null(ops, setdebugging);
+    set_to_dummy_if_null(ops, debug_op);
     set_to_dummy_if_null(ops, perfcontrol);
     set_to_dummy_if_null(ops, debug_keys);
     set_to_dummy_if_null(ops, getcpuinfo);
@@ -124,6 +125,10 @@ void xsm_fixup_ops (struct xsm_operations *ops)
 #ifdef CONFIG_X86
     set_to_dummy_if_null(ops, shadow_control);
     set_to_dummy_if_null(ops, getpageframeinfo);
+    set_to_dummy_if_null(ops, getpageframeinfo_domain);
+    set_to_dummy_if_null(ops, set_cpuid);
+    set_to_dummy_if_null(ops, gettscinfo);
+    set_to_dummy_if_null(ops, settscinfo);
     set_to_dummy_if_null(ops, getmemlist);
     set_to_dummy_if_null(ops, hypercall_init);
     set_to_dummy_if_null(ops, hvmcontext);
@@ -134,8 +139,12 @@ void xsm_fixup_ops (struct xsm_operations *ops)
     set_to_dummy_if_null(ops, hvm_set_isa_irq_level);
     set_to_dummy_if_null(ops, hvm_set_pci_link_route);
     set_to_dummy_if_null(ops, hvm_inject_msi);
-    set_to_dummy_if_null(ops, mem_event);
+    set_to_dummy_if_null(ops, mem_event_setup);
+    set_to_dummy_if_null(ops, mem_event_control);
+    set_to_dummy_if_null(ops, mem_event_op);
     set_to_dummy_if_null(ops, mem_sharing);
+    set_to_dummy_if_null(ops, mem_sharing_op);
+    set_to_dummy_if_null(ops, audit_p2m);
     set_to_dummy_if_null(ops, apic);
     set_to_dummy_if_null(ops, xen_settime);
     set_to_dummy_if_null(ops, memtype);
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index f8aff14..4f71604 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -695,6 +695,12 @@ static int flask_setdebugging(struct domain *d)
                            DOMAIN__SETDEBUGGING);
 }
 
+static int flask_debug_op(struct domain *d)
+{
+    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN,
+                           DOMAIN__SETDEBUGGING);
+}
+
 static int flask_debug_keys(void)
 {
     return domain_has_xen(current->domain, XEN__DEBUG);
@@ -1111,6 +1117,26 @@ static int flask_getpageframeinfo(struct page_info *page)
     return avc_has_perm(dsec->sid, tsid, SECCLASS_MMU, MMU__PAGEINFO, NULL);    
 }
 
+static int flask_getpageframeinfo_domain(struct domain *d)
+{
+    return domain_has_perm(current->domain, d, SECCLASS_MMU, MMU__PAGEINFO);
+}
+
+static int flask_set_cpuid(struct domain *d)
+{
+    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN2, DOMAIN2__SET_CPUID);
+}
+
+static int flask_gettscinfo(struct domain *d)
+{
+    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN2, DOMAIN2__GETTSC);
+}
+
+static int flask_settscinfo(struct domain *d)
+{
+    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN2, DOMAIN2__SETTSC);
+}
+
 static int flask_getmemlist(struct domain *d)
 {
     return domain_has_perm(current->domain, d, SECCLASS_MMU, MMU__PAGELIST);
@@ -1201,7 +1227,17 @@ static int flask_hvm_set_pci_link_route(struct domain *d)
     return domain_has_perm(current->domain, d, SECCLASS_HVM, HVM__PCIROUTE);
 }
 
-static int flask_mem_event(struct domain *d)
+static int flask_mem_event_setup(struct domain *d)
+{
+    return domain_has_perm(current->domain, d, SECCLASS_HVM, HVM__MEM_EVENT);
+}
+
+static int flask_mem_event_control(struct domain *d, int mode, int op)
+{
+    return domain_has_perm(current->domain, d, SECCLASS_HVM, HVM__MEM_EVENT);
+}
+
+static int flask_mem_event_op(struct domain *d, int op)
 {
     return domain_has_perm(current->domain, d, SECCLASS_HVM, HVM__MEM_EVENT);
 }
@@ -1211,6 +1247,19 @@ static int flask_mem_sharing(struct domain *d)
     return domain_has_perm(current->domain, d, SECCLASS_HVM, HVM__MEM_SHARING);
 }
 
+static int flask_mem_sharing_op(struct domain *d, struct domain *cd, int op)
+{
+    int rc = domain_has_perm(current->domain, cd, SECCLASS_HVM, HVM__MEM_SHARING);
+    if ( rc )
+        return rc;
+    return domain_has_perm(d, cd, SECCLASS_HVM, HVM__SHARE_MEM);
+}
+
+static int flask_audit_p2m(struct domain *d)
+{
+    return domain_has_perm(current->domain, d, SECCLASS_HVM, HVM__AUDIT_P2M);
+}
+
 static int flask_apic(struct domain *d, int cmd)
 {
     u32 perm;
@@ -1586,6 +1635,7 @@ static struct xsm_operations flask_ops = {
     .setdomainmaxmem = flask_setdomainmaxmem,
     .setdomainhandle = flask_setdomainhandle,
     .setdebugging = flask_setdebugging,
+    .debug_op = flask_debug_op,
     .perfcontrol = flask_perfcontrol,
     .debug_keys = flask_debug_keys,
     .getcpuinfo = flask_getcpuinfo,
@@ -1654,6 +1704,10 @@ static struct xsm_operations flask_ops = {
 #ifdef CONFIG_X86
     .shadow_control = flask_shadow_control,
     .getpageframeinfo = flask_getpageframeinfo,
+    .getpageframeinfo_domain = flask_getpageframeinfo_domain,
+    .set_cpuid = flask_set_cpuid,
+    .gettscinfo = flask_gettscinfo,
+    .settscinfo = flask_settscinfo,
     .getmemlist = flask_getmemlist,
     .hypercall_init = flask_hypercall_init,
     .hvmcontext = flask_hvmcontext,
@@ -1662,8 +1716,12 @@ static struct xsm_operations flask_ops = {
     .hvm_set_pci_intx_level = flask_hvm_set_pci_intx_level,
     .hvm_set_isa_irq_level = flask_hvm_set_isa_irq_level,
     .hvm_set_pci_link_route = flask_hvm_set_pci_link_route,
-    .mem_event = flask_mem_event,
+    .mem_event_setup = flask_mem_event_setup,
+    .mem_event_control = flask_mem_event_control,
+    .mem_event_op = flask_mem_event_op,
     .mem_sharing = flask_mem_sharing,
+    .mem_sharing_op = flask_mem_sharing_op,
+    .audit_p2m = flask_audit_p2m,
     .apic = flask_apic,
     .xen_settime = flask_xen_settime,
     .memtype = flask_memtype,
diff --git a/xen/xsm/flask/include/av_perm_to_string.h b/xen/xsm/flask/include/av_perm_to_string.h
index 10f8e80..997f098 100644
--- a/xen/xsm/flask/include/av_perm_to_string.h
+++ b/xen/xsm/flask/include/av_perm_to_string.h
@@ -66,6 +66,9 @@
    S_(SECCLASS_DOMAIN2, DOMAIN2__RELABELSELF, "relabelself")
    S_(SECCLASS_DOMAIN2, DOMAIN2__MAKE_PRIV_FOR, "make_priv_for")
    S_(SECCLASS_DOMAIN2, DOMAIN2__SET_AS_TARGET, "set_as_target")
+   S_(SECCLASS_DOMAIN2, DOMAIN2__SET_CPUID, "set_cpuid")
+   S_(SECCLASS_DOMAIN2, DOMAIN2__GETTSC, "gettsc")
+   S_(SECCLASS_DOMAIN2, DOMAIN2__SETTSC, "settsc")
    S_(SECCLASS_HVM, HVM__SETHVMC, "sethvmc")
    S_(SECCLASS_HVM, HVM__GETHVMC, "gethvmc")
    S_(SECCLASS_HVM, HVM__SETPARAM, "setparam")
@@ -79,6 +82,8 @@
    S_(SECCLASS_HVM, HVM__HVMCTL, "hvmctl")
    S_(SECCLASS_HVM, HVM__MEM_EVENT, "mem_event")
    S_(SECCLASS_HVM, HVM__MEM_SHARING, "mem_sharing")
+   S_(SECCLASS_HVM, HVM__SHARE_MEM, "share_mem")
+   S_(SECCLASS_HVM, HVM__AUDIT_P2M, "audit_p2m")
    S_(SECCLASS_EVENT, EVENT__BIND, "bind")
    S_(SECCLASS_EVENT, EVENT__SEND, "send")
    S_(SECCLASS_EVENT, EVENT__STATUS, "status")
diff --git a/xen/xsm/flask/include/av_permissions.h b/xen/xsm/flask/include/av_permissions.h
index f7cfee1..8596a55 100644
--- a/xen/xsm/flask/include/av_permissions.h
+++ b/xen/xsm/flask/include/av_permissions.h
@@ -68,6 +68,9 @@
 #define DOMAIN2__RELABELSELF                      0x00000004UL
 #define DOMAIN2__MAKE_PRIV_FOR                    0x00000008UL
 #define DOMAIN2__SET_AS_TARGET                    0x00000010UL
+#define DOMAIN2__SET_CPUID                        0x00000020UL
+#define DOMAIN2__GETTSC                           0x00000040UL
+#define DOMAIN2__SETTSC                           0x00000080UL
 
 #define HVM__SETHVMC                              0x00000001UL
 #define HVM__GETHVMC                              0x00000002UL
@@ -82,6 +85,8 @@
 #define HVM__HVMCTL                               0x00000400UL
 #define HVM__MEM_EVENT                            0x00000800UL
 #define HVM__MEM_SHARING                          0x00001000UL
+#define HVM__SHARE_MEM                            0x00002000UL
+#define HVM__AUDIT_P2M                            0x00004000UL
 
 #define EVENT__BIND                               0x00000001UL
 #define EVENT__SEND                               0x00000002UL
-- 
1.7.11.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 14:32:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:32:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOMY-0002uu-FU; Mon, 06 Aug 2012 14:32:54 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1SyOMW-0002kZ-CX
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 14:32:52 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-4.tower-27.messagelabs.com!1344263563!9376315!1
X-Originating-IP: [63.239.67.9]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22202 invoked from network); 6 Aug 2012 14:32:43 -0000
Received: from emvm-gh1-uea08.nsa.gov (HELO nsa.gov) (63.239.67.9)
	by server-4.tower-27.messagelabs.com with SMTP;
	6 Aug 2012 14:32:43 -0000
X-TM-IMSS-Message-ID: <7b075e5d0002b235@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov ([63.239.67.9])
	with ESMTP (TREND IMSS SMTP Service 7.1) id 7b075e5d0002b235 ;
	Mon, 6 Aug 2012 10:32:52 -0400
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id q76EWZ9b011112; 
	Mon, 6 Aug 2012 10:32:41 -0400
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: xen-devel@lists.xen.org
Date: Mon,  6 Aug 2012 10:32:29 -0400
Message-Id: <1344263550-3941-18-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.2
In-Reply-To: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
References: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Subject: [Xen-devel] [PATCH 17/18] xen: Add XSM hook for XENMEM_exchange
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 tools/flask/policy/policy/flask/access_vectors |  1 +
 tools/flask/policy/policy/modules/xen/xen.if   |  2 ++
 xen/common/memory.c                            | 21 +++++++++------------
 xen/include/xsm/dummy.h                        |  7 +++++++
 xen/include/xsm/xsm.h                          |  6 ++++++
 xen/xsm/dummy.c                                |  1 +
 xen/xsm/flask/hooks.c                          |  6 ++++++
 xen/xsm/flask/include/av_perm_to_string.h      |  1 +
 xen/xsm/flask/include/av_permissions.h         |  1 +
 9 files changed, 34 insertions(+), 12 deletions(-)

diff --git a/tools/flask/policy/policy/flask/access_vectors b/tools/flask/policy/policy/flask/access_vectors
index 5e897e2..2736075 100644
--- a/tools/flask/policy/policy/flask/access_vectors
+++ b/tools/flask/policy/policy/flask/access_vectors
@@ -142,6 +142,7 @@ class mmu
     memorymap
     remote_remap
 	mmuext_op
+	exchange
 }
 
 class shadow
diff --git a/tools/flask/policy/policy/modules/xen/xen.if b/tools/flask/policy/policy/modules/xen/xen.if
index 78083c3..ab14d2f 100644
--- a/tools/flask/policy/policy/modules/xen/xen.if
+++ b/tools/flask/policy/policy/modules/xen/xen.if
@@ -30,6 +30,7 @@ define(`declare_domain', `
 #   containing at most one domain. This is not enforced by policy.
 define(`declare_singleton_domain', `
 	type $1, domain_type`'ifelse(`$#', `1', `', `,shift($@)');
+	define(`$1_self', `$1')
 	type $1_channel, event_type;
 	type_transition $1 domain_type:event $1_channel;
 	declare_domain_common($1, $1)
@@ -161,6 +162,7 @@ define(`make_device_model', `
 # use_device(domain, device)
 #   Allow a device to be used by a domain
 define(`use_device', `
+    allow $1 $1_self:mmu exchange;
     allow $1 $2:resource use;
     allow $1 $2:mmu { map_read map_write };
 ')
diff --git a/xen/common/memory.c b/xen/common/memory.c
index 77969d9..b8aaecb 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -329,21 +329,18 @@ static long memory_exchange(XEN_GUEST_HANDLE(xen_memory_exchange_t) arg)
         out_chunk_order = exch.in.extent_order - exch.out.extent_order;
     }
 
-    if ( likely(exch.in.domid == DOMID_SELF) )
+    d = rcu_lock_domain_by_id(exch.in.domid);
+    if ( d == NULL )
     {
-        d = rcu_lock_current_domain();
+        rc = -ESRCH;
+        goto fail_early;
     }
-    else
+    
+    rc = xsm_memory_exchange(d);
+    if ( rc )
     {
-        if ( (d = rcu_lock_domain_by_id(exch.in.domid)) == NULL )
-            goto fail_early;
-
-        if ( !IS_PRIV_FOR(current->domain, d) )
-        {
-            rcu_unlock_domain(d);
-            rc = -EPERM;
-            goto fail_early;
-        }
+        rcu_unlock_domain(d);
+        goto fail_early;
     }
 
     memflags |= MEMF_bits(domain_clamp_alloc_bitsize(
diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
index 28e1d2b..6467928 100644
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -279,6 +279,13 @@ static XSM_DEFAULT(int, grant_query_size) (struct domain *d1, struct domain *d2)
     return 0;
 }
 
+static XSM_DEFAULT(int, memory_exchange) (struct domain *d)
+{
+    if ( d != current->domain && !IS_PRIV_FOR(current->domain, d) )
+        return -EPERM;
+    return 0;
+}
+
 static XSM_DEFAULT(int, memory_adjust_reservation) (struct domain *d1,
                                                             struct domain *d2)
 {
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index 4134877..c5c6202 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -98,6 +98,7 @@ struct xsm_operations {
 
     int (*get_pod_target) (struct domain *d);
     int (*set_pod_target) (struct domain *d);
+    int (*memory_exchange) (struct domain *d);
     int (*memory_adjust_reservation) (struct domain *d1, struct domain *d2);
     int (*memory_stat_reservation) (struct domain *d1, struct domain *d2);
     int (*memory_pin_page) (struct domain *d, struct page_info *page);
@@ -455,6 +456,11 @@ static inline int xsm_set_pod_target (struct domain *d)
     return xsm_ops->set_pod_target(d);
 }
 
+static inline int xsm_memory_exchange (struct domain *d)
+{
+    return xsm_ops->memory_exchange(d);
+}
+
 static inline int xsm_memory_adjust_reservation (struct domain *d1, struct
                                                                     domain *d2)
 {
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index 1bf9de9..5915c5e 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -83,6 +83,7 @@ void xsm_fixup_ops (struct xsm_operations *ops)
     set_to_dummy_if_null(ops, get_pod_target);
     set_to_dummy_if_null(ops, set_pod_target);
 
+    set_to_dummy_if_null(ops, memory_exchange);
     set_to_dummy_if_null(ops, memory_adjust_reservation);
     set_to_dummy_if_null(ops, memory_stat_reservation);
     set_to_dummy_if_null(ops, memory_pin_page);
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index f743be1..ad1c593 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -442,6 +442,11 @@ static int flask_set_pod_target(struct domain *d)
     return current_has_perm(d, SECCLASS_DOMAIN, DOMAIN__SETPODTARGET);
 }
 
+static int flask_memory_exchange(struct domain *d)
+{
+    return current_has_perm(d, SECCLASS_MMU, MMU__EXCHANGE);
+}
+
 static int flask_memory_adjust_reservation(struct domain *d1, struct domain *d2)
 {
     return domain_has_perm(d1, d2, SECCLASS_MMU, MMU__ADJUST);
@@ -1642,6 +1647,7 @@ static struct xsm_operations flask_ops = {
 
     .get_pod_target = flask_get_pod_target,
     .set_pod_target = flask_set_pod_target,
+    .memory_exchange = flask_memory_exchange,
     .memory_adjust_reservation = flask_memory_adjust_reservation,
     .memory_stat_reservation = flask_memory_stat_reservation,
     .memory_pin_page = flask_memory_pin_page,
diff --git a/xen/xsm/flask/include/av_perm_to_string.h b/xen/xsm/flask/include/av_perm_to_string.h
index 5d4f316..b2c77b2 100644
--- a/xen/xsm/flask/include/av_perm_to_string.h
+++ b/xen/xsm/flask/include/av_perm_to_string.h
@@ -112,6 +112,7 @@
    S_(SECCLASS_MMU, MMU__MEMORYMAP, "memorymap")
    S_(SECCLASS_MMU, MMU__REMOTE_REMAP, "remote_remap")
    S_(SECCLASS_MMU, MMU__MMUEXT_OP, "mmuext_op")
+   S_(SECCLASS_MMU, MMU__EXCHANGE, "exchange")
    S_(SECCLASS_SHADOW, SHADOW__DISABLE, "disable")
    S_(SECCLASS_SHADOW, SHADOW__ENABLE, "enable")
    S_(SECCLASS_SHADOW, SHADOW__LOGDIRTY, "logdirty")
diff --git a/xen/xsm/flask/include/av_permissions.h b/xen/xsm/flask/include/av_permissions.h
index f970b50..acb0b1a 100644
--- a/xen/xsm/flask/include/av_permissions.h
+++ b/xen/xsm/flask/include/av_permissions.h
@@ -118,6 +118,7 @@
 #define MMU__MEMORYMAP                            0x00000800UL
 #define MMU__REMOTE_REMAP                         0x00001000UL
 #define MMU__MMUEXT_OP                            0x00002000UL
+#define MMU__EXCHANGE                             0x00004000UL
 
 #define SHADOW__DISABLE                           0x00000001UL
 #define SHADOW__ENABLE                            0x00000002UL
-- 
1.7.11.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

     int (*memory_stat_reservation) (struct domain *d1, struct domain *d2);
     int (*memory_pin_page) (struct domain *d, struct page_info *page);
@@ -455,6 +456,11 @@ static inline int xsm_set_pod_target (struct domain *d)
     return xsm_ops->set_pod_target(d);
 }
 
+static inline int xsm_memory_exchange (struct domain *d)
+{
+    return xsm_ops->memory_exchange(d);
+}
+
 static inline int xsm_memory_adjust_reservation (struct domain *d1, struct
                                                                     domain *d2)
 {
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index 1bf9de9..5915c5e 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -83,6 +83,7 @@ void xsm_fixup_ops (struct xsm_operations *ops)
     set_to_dummy_if_null(ops, get_pod_target);
     set_to_dummy_if_null(ops, set_pod_target);
 
+    set_to_dummy_if_null(ops, memory_exchange);
     set_to_dummy_if_null(ops, memory_adjust_reservation);
     set_to_dummy_if_null(ops, memory_stat_reservation);
     set_to_dummy_if_null(ops, memory_pin_page);
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index f743be1..ad1c593 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -442,6 +442,11 @@ static int flask_set_pod_target(struct domain *d)
     return current_has_perm(d, SECCLASS_DOMAIN, DOMAIN__SETPODTARGET);
 }
 
+static int flask_memory_exchange(struct domain *d)
+{
+    return current_has_perm(d, SECCLASS_MMU, MMU__EXCHANGE);
+}
+
 static int flask_memory_adjust_reservation(struct domain *d1, struct domain *d2)
 {
     return domain_has_perm(d1, d2, SECCLASS_MMU, MMU__ADJUST);
@@ -1642,6 +1647,7 @@ static struct xsm_operations flask_ops = {
 
     .get_pod_target = flask_get_pod_target,
     .set_pod_target = flask_set_pod_target,
+    .memory_exchange = flask_memory_exchange,
     .memory_adjust_reservation = flask_memory_adjust_reservation,
     .memory_stat_reservation = flask_memory_stat_reservation,
     .memory_pin_page = flask_memory_pin_page,
diff --git a/xen/xsm/flask/include/av_perm_to_string.h b/xen/xsm/flask/include/av_perm_to_string.h
index 5d4f316..b2c77b2 100644
--- a/xen/xsm/flask/include/av_perm_to_string.h
+++ b/xen/xsm/flask/include/av_perm_to_string.h
@@ -112,6 +112,7 @@
    S_(SECCLASS_MMU, MMU__MEMORYMAP, "memorymap")
    S_(SECCLASS_MMU, MMU__REMOTE_REMAP, "remote_remap")
    S_(SECCLASS_MMU, MMU__MMUEXT_OP, "mmuext_op")
+   S_(SECCLASS_MMU, MMU__EXCHANGE, "exchange")
    S_(SECCLASS_SHADOW, SHADOW__DISABLE, "disable")
    S_(SECCLASS_SHADOW, SHADOW__ENABLE, "enable")
    S_(SECCLASS_SHADOW, SHADOW__LOGDIRTY, "logdirty")
diff --git a/xen/xsm/flask/include/av_permissions.h b/xen/xsm/flask/include/av_permissions.h
index f970b50..acb0b1a 100644
--- a/xen/xsm/flask/include/av_permissions.h
+++ b/xen/xsm/flask/include/av_permissions.h
@@ -118,6 +118,7 @@
 #define MMU__MEMORYMAP                            0x00000800UL
 #define MMU__REMOTE_REMAP                         0x00001000UL
 #define MMU__MMUEXT_OP                            0x00002000UL
+#define MMU__EXCHANGE                             0x00004000UL
 
 #define SHADOW__DISABLE                           0x00000001UL
 #define SHADOW__ENABLE                            0x00000002UL
-- 
1.7.11.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 14:32:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:32:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOMb-0002zo-Bl; Mon, 06 Aug 2012 14:32:57 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1SyOMZ-0002nK-8x
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 14:32:55 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-12.tower-27.messagelabs.com!1344263560!11750142!1
X-Originating-IP: [63.239.67.9]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4050 invoked from network); 6 Aug 2012 14:32:41 -0000
Received: from emvm-gh1-uea08.nsa.gov (HELO nsa.gov) (63.239.67.9)
	by server-12.tower-27.messagelabs.com with SMTP;
	6 Aug 2012 14:32:41 -0000
X-TM-IMSS-Message-ID: <7b07548d0002b22b@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov ([63.239.67.9])
	with ESMTP (TREND IMSS SMTP Service 7.1) id 7b07548d0002b22b ;
	Mon, 6 Aug 2012 10:32:50 -0400
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id q76EWZ9U011112; 
	Mon, 6 Aug 2012 10:32:38 -0400
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: xen-devel@lists.xen.org
Date: Mon,  6 Aug 2012 10:32:22 -0400
Message-Id: <1344263550-3941-11-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.2
In-Reply-To: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
References: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Subject: [Xen-devel] [PATCH 10/18] xsm: Add IS_PRIV checks to dummy XSM
	module
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch copies the IS_PRIV checks into the dummy XSM hooks and makes
the dummy hooks the default implementation instead of always returning
zero. It should not change any functionality regardless of the state of
XSM_ENABLE; the duplicate checks it introduces are removed in the next
patch.

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 xen/include/xsm/dummy.h | 822 ++++++++++++++++++++++++++++++++++++++++++++++++
 xen/include/xsm/xsm.h   |  52 ++-
 xen/xsm/dummy.c         | 617 +-----------------------------------
 xen/xsm/xsm_core.c      |   2 +-
 4 files changed, 848 insertions(+), 645 deletions(-)
 create mode 100644 xen/include/xsm/dummy.h

diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
new file mode 100644
index 0000000..0d849cc
--- /dev/null
+++ b/xen/include/xsm/dummy.h
@@ -0,0 +1,822 @@
+/*
+ *  Default XSM hooks - IS_PRIV and IS_PRIV_FOR checks
+ *
+ *  Author: Daniel De Graaf <dgdegra@tycho.nsa.gov>
+ *
+ *  This program is free software; you can redistribute it and/or modify
+ *  it under the terms of the GNU General Public License version 2,
+ *  as published by the Free Software Foundation.
+ */
+
+#include <xen/sched.h>
+#include <xsm/xsm.h>
+
+static XSM_DEFAULT(void, security_domaininfo)(struct domain *d,
+                                    struct xen_domctl_getdomaininfo *info)
+{
+    return;
+}
+
+static XSM_DEFAULT(int, setvcpucontext)(struct domain *d)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, pausedomain) (struct domain *d)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, unpausedomain) (struct domain *d)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, resumedomain) (struct domain *d)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, domain_create)(struct domain *d, u32 ssidref)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, max_vcpus)(struct domain *d)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, destroydomain) (struct domain *d)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, vcpuaffinity) (int cmd, struct domain *d)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, scheduler) (struct domain *d)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, getdomaininfo) (struct domain *d)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, getvcpucontext) (struct domain *d)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, getvcpuinfo) (struct domain *d)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, domain_settime) (struct domain *d)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, set_target) (struct domain *d, struct domain *e)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, domctl)(struct domain *d, int cmd)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, set_virq_handler)(struct domain *d, uint32_t virq)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, tbufcontrol) (void)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, readconsole) (uint32_t clear)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, sched_id) (void)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, setdomainmaxmem) (struct domain *d)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, setdomainhandle) (struct domain *d)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, setdebugging) (struct domain *d)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, perfcontrol) (void)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, debug_keys) (void)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, getcpuinfo) (void)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, get_pmstat) (void)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, setpminfo) (void)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, pm_op) (void)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, do_mca) (void)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, availheap) (void)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, alloc_security_domain) (struct domain *d)
+{
+    return 0;
+}
+
+static XSM_DEFAULT(void, free_security_domain) (struct domain *d)
+{
+    return;
+}
+
+static XSM_DEFAULT(int, grant_mapref) (struct domain *d1, struct domain *d2,
+                                                                uint32_t flags)
+{
+    return 0;
+}
+
+static XSM_DEFAULT(int, grant_unmapref) (struct domain *d1, struct domain *d2)
+{
+    return 0;
+}
+
+static XSM_DEFAULT(int, grant_setup) (struct domain *d1, struct domain *d2)
+{
+    if ( d1 != d2 && !IS_PRIV_FOR(d1, d2) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, grant_transfer) (struct domain *d1, struct domain *d2)
+{
+    return 0;
+}
+
+static XSM_DEFAULT(int, grant_copy) (struct domain *d1, struct domain *d2)
+{
+    return 0;
+}
+
+static XSM_DEFAULT(int, grant_query_size) (struct domain *d1, struct domain *d2)
+{
+    if ( d1 != d2 && !IS_PRIV_FOR(d1, d2) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, memory_adjust_reservation) (struct domain *d1,
+                                                            struct domain *d2)
+{
+    if ( d1 != d2 && !IS_PRIV_FOR(d1, d2) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, memory_stat_reservation) (struct domain *d1, struct domain *d2)
+{
+    if ( !IS_PRIV_FOR(d1, d2) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, console_io) (struct domain *d, int cmd)
+{
+    return 0;
+}
+
+static XSM_DEFAULT(int, profile) (struct domain *d, int op)
+{
+    return 0;
+}
+
+static XSM_DEFAULT(int, kexec) (void)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, schedop_shutdown) (struct domain *d1, struct domain *d2)
+{
+    if ( !IS_PRIV_FOR(d1, d2) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, memory_pin_page)(struct domain *d, struct page_info *page)
+{
+    return 0;
+}
+
+static XSM_DEFAULT(int, evtchn_unbound) (struct domain *d, struct evtchn *chn,
+                                                                    domid_t id2)
+{
+    if ( current->domain != d && !IS_PRIV_FOR(current->domain, d) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, evtchn_interdomain) (struct domain *d1, struct evtchn
+                                *chan1, struct domain *d2, struct evtchn *chan2)
+{
+    return 0;
+}
+
+static XSM_DEFAULT(void, evtchn_close_post) (struct evtchn *chn)
+{
+    return;
+}
+
+static XSM_DEFAULT(int, evtchn_send) (struct domain *d, struct evtchn *chn)
+{
+    return 0;
+}
+
+static XSM_DEFAULT(int, evtchn_status) (struct domain *d, struct evtchn *chn)
+{
+    if ( current->domain != d && !IS_PRIV_FOR(current->domain, d) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, evtchn_reset) (struct domain *d1, struct domain *d2)
+{
+    if ( d1 != d2 && !IS_PRIV_FOR(d1, d2) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, alloc_security_evtchn) (struct evtchn *chn)
+{
+    return 0;
+}
+
+static XSM_DEFAULT(void, free_security_evtchn) (struct evtchn *chn)
+{
+    return;
+}
+
+static XSM_DEFAULT(char *, show_security_evtchn) (struct domain *d, const struct evtchn *chn)
+{
+    return NULL;
+}
+
+static XSM_DEFAULT(int, get_pod_target)(struct domain *d)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, set_pod_target)(struct domain *d)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, get_device_group) (uint32_t machine_bdf)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, test_assign_device) (uint32_t machine_bdf)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, assign_device) (struct domain *d, uint32_t machine_bdf)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, deassign_device) (struct domain *d, uint32_t machine_bdf)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, resource_plug_core) (void)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, resource_unplug_core) (void)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, resource_plug_pci) (uint32_t machine_bdf)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, resource_unplug_pci) (uint32_t machine_bdf)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, resource_setup_pci) (uint32_t machine_bdf)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, resource_setup_gsi) (int gsi)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, resource_setup_misc) (void)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, page_offline) (uint32_t cmd)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, lockprof) (void)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, cpupool_op) (void)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, sched_op) (void)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(long, __do_xsm_op)(XEN_GUEST_HANDLE(xsm_op_t) op)
+{
+    return -ENOSYS;
+}
+
+static XSM_DEFAULT(char *, show_irq_sid) (int irq)
+{
+    return NULL;
+}
+
+static XSM_DEFAULT(int, map_domain_pirq) (struct domain *d, int irq, void *data)
+{
+    if ( !IS_PRIV_FOR(current->domain, d) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, unmap_domain_pirq) (struct domain *d, int irq)
+{
+    if ( !IS_PRIV_FOR(current->domain, d) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, irq_permission) (struct domain *d, int pirq, uint8_t allow)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, iomem_permission) (struct domain *d, uint64_t s, uint64_t e, uint8_t allow)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, iomem_mapping) (struct domain *d, uint64_t s, uint64_t e, uint8_t allow)
+{
+    if ( !IS_PRIV_FOR(current->domain, d) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, pci_config_permission) (struct domain *d, uint32_t machine_bdf,
+                                        uint16_t start, uint16_t end,
+                                        uint8_t access)
+{
+    if ( !IS_PRIV(d) )
+        return -EPERM;
+    return 0;
+}
+
+#ifdef CONFIG_X86
+static XSM_DEFAULT(int, shadow_control) (struct domain *d, uint32_t op)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, getpageframeinfo) (struct page_info *page)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, getmemlist) (struct domain *d)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, hypercall_init) (struct domain *d)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, hvmcontext) (struct domain *d, uint32_t cmd)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, address_size) (struct domain *d, uint32_t cmd)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, machine_address_size) (struct domain *d, uint32_t cmd)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, hvm_param) (struct domain *d, unsigned long op)
+{
+    if ( current->domain != d && !IS_PRIV_FOR(current->domain, d) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, hvm_set_pci_intx_level) (struct domain *d)
+{
+    if ( !IS_PRIV_FOR(current->domain, d) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, hvm_set_isa_irq_level) (struct domain *d)
+{
+    if ( !IS_PRIV_FOR(current->domain, d) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, hvm_set_pci_link_route) (struct domain *d)
+{
+    if ( !IS_PRIV_FOR(current->domain, d) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, hvm_inject_msi) (struct domain *d)
+{
+    if ( !IS_PRIV_FOR(current->domain, d) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, mem_event) (struct domain *d)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, mem_sharing) (struct domain *d)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, apic) (struct domain *d, int cmd)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, xen_settime) (void)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, memtype) (uint32_t access)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, microcode) (void)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, physinfo) (void)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, platform_quirk) (uint32_t quirk)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, firmware_info) (void)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, efi_call) (void)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, acpi_sleep) (void)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, change_freq) (void)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, getidletime) (void)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, machine_memory_map) (void)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, domain_memory_map) (struct domain *d)
+{
+    if ( current->domain != d && !IS_PRIV_FOR(current->domain, d) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, mmu_normal_update) (struct domain *d, struct domain *t,
+                                    struct domain *f, intpte_t fpte)
+{
+    return 0;
+}
+
+static XSM_DEFAULT(int, mmu_machphys_update) (struct domain *d, unsigned long mfn)
+{
+    return 0;
+}
+
+static XSM_DEFAULT(int, update_va_mapping) (struct domain *d, struct domain *f, 
+                                                            l1_pgentry_t pte)
+{
+    return 0;
+}
+
+static XSM_DEFAULT(int, add_to_physmap) (struct domain *d1, struct domain *d2)
+{
+    if ( d1 != d2 && !IS_PRIV_FOR(d1, d2) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, remove_from_physmap) (struct domain *d1, struct domain *d2)
+{
+    if ( d1 != d2 && !IS_PRIV_FOR(d1, d2) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, sendtrigger) (struct domain *d)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, bind_pt_irq) (struct domain *d, struct xen_domctl_bind_pt_irq *bind)
+{
+    if ( !IS_PRIV_FOR(current->domain, d) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, unbind_pt_irq) (struct domain *d, struct xen_domctl_bind_pt_irq *bind)
+{
+    if ( !IS_PRIV_FOR(current->domain, d) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, pin_mem_cacheattr) (struct domain *d)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, ext_vcpucontext) (struct domain *d, uint32_t cmd)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, vcpuextstate) (struct domain *d, uint32_t cmd)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, ioport_permission) (struct domain *d, uint32_t s, uint32_t e, uint8_t allow)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, ioport_mapping) (struct domain *d, uint32_t s, uint32_t e, uint8_t allow)
+{
+    if ( !IS_PRIV_FOR(current->domain, d) )
+        return -EPERM;
+    return 0;
+}
+
+#endif
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index 0434c05..1a9f35b 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -21,11 +21,7 @@
 typedef void xsm_op_t;
 DEFINE_XEN_GUEST_HANDLE(xsm_op_t);
 
-#ifdef XSM_ENABLE
-    #define xsm_call(fn) xsm_ops->fn
-#else
-    #define xsm_call(fn) 0
-#endif
+#define xsm_call(fn) xsm_ops->fn
 
 /* policy magic number (defined by XSM_MAGIC) */
 typedef u32 xsm_magic_t;
@@ -188,8 +184,6 @@ struct xsm_operations {
 #endif
 };
 
-#endif
-
 extern struct xsm_operations *xsm_ops;
 
 static inline void xsm_security_domaininfo (struct domain *d,
@@ -598,32 +592,11 @@ static inline int xsm_sched_op(void)
     return xsm_call(sched_op());
 }
 
-static inline long __do_xsm_op (XEN_GUEST_HANDLE(xsm_op_t) op)
+static inline long xsm___do_xsm_op (XEN_GUEST_HANDLE(xsm_op_t) op)
 {
-#ifdef XSM_ENABLE
     return xsm_ops->__do_xsm_op(op);
-#else
-    return -ENOSYS;
-#endif
 }
 
-#ifdef XSM_ENABLE
-extern int xsm_init(unsigned long *module_map, const multiboot_info_t *mbi,
-                    void *(*bootstrap_map)(const module_t *));
-extern int xsm_policy_init(unsigned long *module_map,
-                           const multiboot_info_t *mbi,
-                           void *(*bootstrap_map)(const module_t *));
-extern int register_xsm(struct xsm_operations *ops);
-extern int unregister_xsm(struct xsm_operations *ops);
-#else
-static inline int xsm_init (unsigned long *module_map,
-                            const multiboot_info_t *mbi,
-                            void *(*bootstrap_map)(const module_t *))
-{
-    return 0;
-}
-#endif
-
 #ifdef CONFIG_X86
 static inline int xsm_shadow_control (struct domain *d, uint32_t op)
 {
@@ -824,7 +797,28 @@ static inline int xsm_ioport_mapping (struct domain *d, uint32_t s, uint32_t e,
 }
 #endif /* CONFIG_X86 */
 
+extern int xsm_init(unsigned long *module_map, const multiboot_info_t *mbi,
+                    void *(*bootstrap_map)(const module_t *));
+extern int xsm_policy_init(unsigned long *module_map,
+                           const multiboot_info_t *mbi,
+                           void *(*bootstrap_map)(const module_t *));
+extern int register_xsm(struct xsm_operations *ops);
+extern int unregister_xsm(struct xsm_operations *ops);
+
 extern struct xsm_operations dummy_xsm_ops;
 extern void xsm_fixup_ops(struct xsm_operations *ops);
 
+#else /* XSM_ENABLE */
+
+#define XSM_DEFAULT(type, name) inline type xsm_ ## name
+#include <xsm/dummy.h>
+
+static inline int xsm_init (unsigned long *module_map,
+                            const multiboot_info_t *mbi,
+                            void *(*bootstrap_map)(const module_t *))
+{
+    return 0;
+}
+#endif /* XSM_ENABLE */
+
 #endif /* __XSM_H */
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index fd075c7..af532b8 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -10,621 +10,8 @@
  *  as published by the Free Software Foundation.
  */
 
-#include <xen/sched.h>
-#include <xsm/xsm.h>
-
-static void dummy_security_domaininfo(struct domain *d,
-                                    struct xen_domctl_getdomaininfo *info)
-{
-    return;
-}
-
-static int dummy_setvcpucontext(struct domain *d)
-{
-    return 0;
-}
-
-static int dummy_pausedomain (struct domain *d)
-{
-    return 0;
-}
-
-static int dummy_unpausedomain (struct domain *d)
-{
-    return 0;
-}
-
-static int dummy_resumedomain (struct domain *d)
-{
-    return 0;
-}
-
-static int dummy_domain_create(struct domain *d, u32 ssidref)
-{
-    return 0;
-}
-
-static int dummy_max_vcpus(struct domain *d)
-{
-    return 0;
-}
-
-static int dummy_destroydomain (struct domain *d)
-{
-    return 0;
-}
-
-static int dummy_vcpuaffinity (int cmd, struct domain *d)
-{
-    return 0;
-}
-
-static int dummy_scheduler (struct domain *d)
-{
-    return 0;
-}
-
-static int dummy_getdomaininfo (struct domain *d)
-{
-    if ( !IS_PRIV(current->domain) )
-        return -EPERM;
-    return 0;
-}
-
-static int dummy_getvcpucontext (struct domain *d)
-{
-    return 0;
-}
-
-static int dummy_getvcpuinfo (struct domain *d)
-{
-    return 0;
-}
-
-static int dummy_domain_settime (struct domain *d)
-{
-    return 0;
-}
-
-static int dummy_set_target (struct domain *d, struct domain *e)
-{
-    return 0;
-}
-
-static int dummy_domctl(struct domain *d, int cmd)
-{
-    return 0;
-}
-
-static int dummy_set_virq_handler(struct domain *d, uint32_t virq)
-{
-    return 0;
-}
-
-static int dummy_tbufcontrol (void)
-{
-    return 0;
-}
-
-static int dummy_readconsole (uint32_t clear)
-{
-    return 0;
-}
-
-static int dummy_sched_id (void)
-{
-    return 0;
-}
-
-static int dummy_setdomainmaxmem (struct domain *d)
-{
-    return 0;
-}
-
-static int dummy_setdomainhandle (struct domain *d)
-{
-    return 0;
-}
-
-static int dummy_setdebugging (struct domain *d)
-{
-    return 0;
-}
-
-static int dummy_perfcontrol (void)
-{
-    return 0;
-}
-
-static int dummy_debug_keys (void)
-{
-    return 0;
-}
-
-static int dummy_getcpuinfo (void)
-{
-    return 0;
-}
-
-static int dummy_get_pmstat (void)
-{
-    return 0;
-}
-
-static int dummy_setpminfo (void)
-{
-    return 0;
-}
-
-static int dummy_pm_op (void)
-{
-    return 0;
-}
-
-static int dummy_do_mca (void)
-{
-    return 0;
-}
-
-static int dummy_availheap (void)
-{
-    return 0;
-}
-
-static int dummy_alloc_security_domain (struct domain *d)
-{
-    return 0;
-}
-
-static void dummy_free_security_domain (struct domain *d)
-{
-    return;
-}
-
-static int dummy_grant_mapref (struct domain *d1, struct domain *d2,
-                                                                uint32_t flags)
-{
-    return 0;
-}
-
-static int dummy_grant_unmapref (struct domain *d1, struct domain *d2)
-{
-    return 0;
-}
-
-static int dummy_grant_setup (struct domain *d1, struct domain *d2)
-{
-    return 0;
-}
-
-static int dummy_grant_transfer (struct domain *d1, struct domain *d2)
-{
-    return 0;
-}
-
-static int dummy_grant_copy (struct domain *d1, struct domain *d2)
-{
-    return 0;
-}
-
-static int dummy_grant_query_size (struct domain *d1, struct domain *d2)
-{
-    return 0;
-}
-
-static int dummy_memory_adjust_reservation (struct domain *d1,
-                                                            struct domain *d2)
-{
-    return 0;
-}
-
-static int dummy_memory_stat_reservation (struct domain *d1, struct domain *d2)
-{
-    return 0;
-}
-
-static int dummy_console_io (struct domain *d, int cmd)
-{
-    return 0;
-}
-
-static int dummy_profile (struct domain *d, int op)
-{
-    return 0;
-}
-
-static int dummy_kexec (void)
-{
-    return 0;
-}
-
-static int dummy_schedop_shutdown (struct domain *d1, struct domain *d2)
-{
-    return 0;
-}
-
-static int dummy_memory_pin_page(struct domain *d, struct page_info *page)
-{
-    return 0;
-}
-
-static int dummy_evtchn_unbound (struct domain *d, struct evtchn *chn,
-                                                                    domid_t id2)
-{
-    return 0;
-}
-
-static int dummy_evtchn_interdomain (struct domain *d1, struct evtchn
-                                *chan1, struct domain *d2, struct evtchn *chan2)
-{
-    return 0;
-}
-
-static void dummy_evtchn_close_post (struct evtchn *chn)
-{
-    return;
-}
-
-static int dummy_evtchn_send (struct domain *d, struct evtchn *chn)
-{
-    return 0;
-}
-
-static int dummy_evtchn_status (struct domain *d, struct evtchn *chn)
-{
-    return 0;
-}
-
-static int dummy_evtchn_reset (struct domain *d1, struct domain *d2)
-{
-    return 0;
-}
-
-static int dummy_alloc_security_evtchn (struct evtchn *chn)
-{
-    return 0;
-}
-
-static void dummy_free_security_evtchn (struct evtchn *chn)
-{
-    return;
-}
-
-static char *dummy_show_security_evtchn (struct domain *d, const struct evtchn *chn)
-{
-    return NULL;
-}
-
-static int dummy_get_pod_target(struct domain *d)
-{
-    return 0;
-}
-
-static int dummy_set_pod_target(struct domain *d)
-{
-    return 0;
-}
-
-static int dummy_get_device_group (uint32_t machine_bdf)
-{
-    return 0;
-}
-
-static int dummy_test_assign_device (uint32_t machine_bdf)
-{
-    return 0;
-}
-
-static int dummy_assign_device (struct domain *d, uint32_t machine_bdf)
-{
-    return 0;
-}
-
-static int dummy_deassign_device (struct domain *d, uint32_t machine_bdf)
-{
-    return 0;
-}
-
-static int dummy_resource_plug_core (void)
-{
-    return 0;
-}
-
-static int dummy_resource_unplug_core (void)
-{
-    return 0;
-}
-
-static int dummy_resource_plug_pci (uint32_t machine_bdf)
-{
-    return 0;
-}
-
-static int dummy_resource_unplug_pci (uint32_t machine_bdf)
-{
-    return 0;
-}
-
-static int dummy_resource_setup_pci (uint32_t machine_bdf)
-{
-    return 0;
-}
-
-static int dummy_resource_setup_gsi (int gsi)
-{
-    return 0;
-}
-
-static int dummy_resource_setup_misc (void)
-{
-    return 0;
-}
-
-static int dummy_page_offline (uint32_t cmd)
-{
-    return 0;
-}
-
-static int dummy_lockprof (void)
-{
-    return 0;
-}
-
-static int dummy_cpupool_op (void)
-{
-    return 0;
-}
-
-static int dummy_sched_op (void)
-{
-    return 0;
-}
-
-static long dummy___do_xsm_op(XEN_GUEST_HANDLE(xsm_op_t) op)
-{
-    return -ENOSYS;
-}
-
-static char *dummy_show_irq_sid (int irq)
-{
-    return NULL;
-}
-
-static int dummy_map_domain_pirq (struct domain *d, int irq, void *data)
-{
-    return 0;
-}
-
-static int dummy_unmap_domain_pirq (struct domain *d, int irq)
-{
-    return 0;
-}
-
-static int dummy_irq_permission (struct domain *d, int pirq, uint8_t allow)
-{
-    return 0;
-}
-
-static int dummy_iomem_permission (struct domain *d, uint64_t s, uint64_t e, uint8_t allow)
-{
-    return 0;
-}
-
-static int dummy_iomem_mapping (struct domain *d, uint64_t s, uint64_t e, uint8_t allow)
-{
-    return 0;
-}
-
-static int dummy_pci_config_permission (struct domain *d, uint32_t machine_bdf,
-                                        uint16_t start, uint16_t end,
-                                        uint8_t access)
-{
-    return 0;
-}
-
-#ifdef CONFIG_X86
-static int dummy_shadow_control (struct domain *d, uint32_t op)
-{
-    return 0;
-}
-
-static int dummy_getpageframeinfo (struct page_info *page)
-{
-    return 0;
-}
-
-static int dummy_getmemlist (struct domain *d)
-{
-    return 0;
-}
-
-static int dummy_hypercall_init (struct domain *d)
-{
-    return 0;
-}
-
-static int dummy_hvmcontext (struct domain *d, uint32_t cmd)
-{
-    return 0;
-}
-
-static int dummy_address_size (struct domain *d, uint32_t cmd)
-{
-    return 0;
-}
-
-static int dummy_machine_address_size (struct domain *d, uint32_t cmd)
-{
-    return 0;
-}
-
-static int dummy_hvm_param (struct domain *d, unsigned long op)
-{
-    return 0;
-}
-
-static int dummy_hvm_set_pci_intx_level (struct domain *d)
-{
-    return 0;
-}
-
-static int dummy_hvm_set_isa_irq_level (struct domain *d)
-{
-    return 0;
-}
-
-static int dummy_hvm_set_pci_link_route (struct domain *d)
-{
-    return 0;
-}
-
-static int dummy_hvm_inject_msi (struct domain *d)
-{
-    return 0;
-}
-
-static int dummy_mem_event (struct domain *d)
-{
-    return 0;
-}
-
-static int dummy_mem_sharing (struct domain *d)
-{
-    return 0;
-}
-
-static int dummy_apic (struct domain *d, int cmd)
-{
-    return 0;
-}
-
-static int dummy_xen_settime (void)
-{
-    return 0;
-}
-
-static int dummy_memtype (uint32_t access)
-{
-    return 0;
-}
-
-static int dummy_microcode (void)
-{
-    return 0;
-}
-
-static int dummy_physinfo (void)
-{
-    return 0;
-}
-
-static int dummy_platform_quirk (uint32_t quirk)
-{
-    return 0;
-}
-
-static int dummy_firmware_info (void)
-{
-    return 0;
-}
-
-static int dummy_efi_call(void)
-{
-    return 0;
-}
-
-static int dummy_acpi_sleep (void)
-{
-    return 0;
-}
-
-static int dummy_change_freq (void)
-{
-    return 0;
-}
-
-static int dummy_getidletime (void)
-{
-    return 0;
-}
-
-static int dummy_machine_memory_map (void)
-{
-    return 0;
-}
-
-static int dummy_domain_memory_map (struct domain *d)
-{
-    return 0;
-}
-
-static int dummy_mmu_normal_update (struct domain *d, struct domain *t,
-                                    struct domain *f, intpte_t fpte)
-{
-    return 0;
-}
-
-static int dummy_mmu_machphys_update (struct domain *d, unsigned long mfn)
-{
-    return 0;
-}
-
-static int dummy_update_va_mapping (struct domain *d, struct domain *f, 
-                                                            l1_pgentry_t pte)
-{
-    return 0;
-}
-
-static int dummy_add_to_physmap (struct domain *d1, struct domain *d2)
-{
-    return 0;
-}
-
-static int dummy_remove_from_physmap (struct domain *d1, struct domain *d2)
-{
-    return 0;
-}
-
-static int dummy_sendtrigger (struct domain *d)
-{
-    return 0;
-}
-
-static int dummy_bind_pt_irq (struct domain *d, struct xen_domctl_bind_pt_irq *bind)
-{
-    return 0;
-}
-
-static int dummy_unbind_pt_irq (struct domain *d, struct xen_domctl_bind_pt_irq *bind)
-{
-    return 0;
-}
-
-static int dummy_pin_mem_cacheattr (struct domain *d)
-{
-    return 0;
-}
-
-static int dummy_ext_vcpucontext (struct domain *d, uint32_t cmd)
-{
-    return 0;
-}
-
-static int dummy_vcpuextstate (struct domain *d, uint32_t cmd)
-{
-    return 0;
-}
-
-static int dummy_ioport_permission (struct domain *d, uint32_t s, uint32_t e, uint8_t allow)
-{
-    return 0;
-}
-
-static int dummy_ioport_mapping (struct domain *d, uint32_t s, uint32_t e, uint8_t allow)
-{
-    return 0;
-}
-#endif
+#define XSM_DEFAULT(type, name) type dummy_ ## name
+#include <xsm/dummy.h>
 
 struct xsm_operations dummy_xsm_ops;
 
diff --git a/xen/xsm/xsm_core.c b/xen/xsm/xsm_core.c
index 96c8669..c4c85c0 100644
--- a/xen/xsm/xsm_core.c
+++ b/xen/xsm/xsm_core.c
@@ -113,7 +113,7 @@ int unregister_xsm(struct xsm_operations *ops)
 
 long do_xsm_op (XEN_GUEST_HANDLE(xsm_op_t) op)
 {
-    return __do_xsm_op(op);
+    return xsm___do_xsm_op(op);
 }
 
 
-- 
1.7.11.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 14:32:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:32:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOMb-0002zo-Bl; Mon, 06 Aug 2012 14:32:57 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1SyOMZ-0002nK-8x
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 14:32:55 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-12.tower-27.messagelabs.com!1344263560!11750142!1
X-Originating-IP: [63.239.67.9]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4050 invoked from network); 6 Aug 2012 14:32:41 -0000
Received: from emvm-gh1-uea08.nsa.gov (HELO nsa.gov) (63.239.67.9)
	by server-12.tower-27.messagelabs.com with SMTP;
	6 Aug 2012 14:32:41 -0000
X-TM-IMSS-Message-ID: <7b07548d0002b22b@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov ([63.239.67.9])
	with ESMTP (TREND IMSS SMTP Service 7.1) id 7b07548d0002b22b ;
	Mon, 6 Aug 2012 10:32:50 -0400
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id q76EWZ9U011112; 
	Mon, 6 Aug 2012 10:32:38 -0400
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: xen-devel@lists.xen.org
Date: Mon,  6 Aug 2012 10:32:22 -0400
Message-Id: <1344263550-3941-11-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.2
In-Reply-To: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
References: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Subject: [Xen-devel] [PATCH 10/18] xsm: Add IS_PRIV checks to dummy XSM
	module
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch copies IS_PRIV checks into the dummy XSM hooks and makes the
dummy hooks the default implementation instead of always returning zero.
This patch should not change any functionality regardless of the state
of XSM_ENABLE; these newly introduced duplicate checks will be resolved
in the next patch.

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 xen/include/xsm/dummy.h | 822 ++++++++++++++++++++++++++++++++++++++++++++++++
 xen/include/xsm/xsm.h   |  52 ++-
 xen/xsm/dummy.c         | 617 +-----------------------------------
 xen/xsm/xsm_core.c      |   2 +-
 4 files changed, 848 insertions(+), 645 deletions(-)
 create mode 100644 xen/include/xsm/dummy.h

diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
new file mode 100644
index 0000000..0d849cc
--- /dev/null
+++ b/xen/include/xsm/dummy.h
@@ -0,0 +1,822 @@
+/*
+ *  Default XSM hooks - IS_PRIV and IS_PRIV_FOR checks
+ *
+ *  Author: Daniel De Graaf <dgdegra@tycho.nsa.gov>
+ *
+ *  This program is free software; you can redistribute it and/or modify
+ *  it under the terms of the GNU General Public License version 2,
+ *  as published by the Free Software Foundation.
+ */
+
+#include <xen/sched.h>
+#include <xsm/xsm.h>
+
+static XSM_DEFAULT(void, security_domaininfo)(struct domain *d,
+                                    struct xen_domctl_getdomaininfo *info)
+{
+    return;
+}
+
+static XSM_DEFAULT(int, setvcpucontext)(struct domain *d)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, pausedomain) (struct domain *d)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, unpausedomain) (struct domain *d)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, resumedomain) (struct domain *d)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, domain_create)(struct domain *d, u32 ssidref)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, max_vcpus)(struct domain *d)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, destroydomain) (struct domain *d)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, vcpuaffinity) (int cmd, struct domain *d)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, scheduler) (struct domain *d)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, getdomaininfo) (struct domain *d)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, getvcpucontext) (struct domain *d)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, getvcpuinfo) (struct domain *d)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, domain_settime) (struct domain *d)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, set_target) (struct domain *d, struct domain *e)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, domctl)(struct domain *d, int cmd)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, set_virq_handler)(struct domain *d, uint32_t virq)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, tbufcontrol) (void)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, readconsole) (uint32_t clear)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, sched_id) (void)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, setdomainmaxmem) (struct domain *d)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, setdomainhandle) (struct domain *d)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, setdebugging) (struct domain *d)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, perfcontrol) (void)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, debug_keys) (void)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, getcpuinfo) (void)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, get_pmstat) (void)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, setpminfo) (void)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, pm_op) (void)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, do_mca) (void)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, availheap) (void)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, alloc_security_domain) (struct domain *d)
+{
+    return 0;
+}
+
+static XSM_DEFAULT(void, free_security_domain) (struct domain *d)
+{
+    return;
+}
+
+static XSM_DEFAULT(int, grant_mapref) (struct domain *d1, struct domain *d2,
+                                                                uint32_t flags)
+{
+    return 0;
+}
+
+static XSM_DEFAULT(int, grant_unmapref) (struct domain *d1, struct domain *d2)
+{
+    return 0;
+}
+
+static XSM_DEFAULT(int, grant_setup) (struct domain *d1, struct domain *d2)
+{
+    if ( d1 != d2 && !IS_PRIV_FOR(d1, d2) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, grant_transfer) (struct domain *d1, struct domain *d2)
+{
+    return 0;
+}
+
+static XSM_DEFAULT(int, grant_copy) (struct domain *d1, struct domain *d2)
+{
+    return 0;
+}
+
+static XSM_DEFAULT(int, grant_query_size) (struct domain *d1, struct domain *d2)
+{
+    if ( d1 != d2 && !IS_PRIV_FOR(d1, d2) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, memory_adjust_reservation) (struct domain *d1,
+                                                            struct domain *d2)
+{
+    if ( d1 != d2 && !IS_PRIV_FOR(d1, d2) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, memory_stat_reservation) (struct domain *d1, struct domain *d2)
+{
+    if ( !IS_PRIV_FOR(d1, d2) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, console_io) (struct domain *d, int cmd)
+{
+    return 0;
+}
+
+static XSM_DEFAULT(int, profile) (struct domain *d, int op)
+{
+    return 0;
+}
+
+static XSM_DEFAULT(int, kexec) (void)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, schedop_shutdown) (struct domain *d1, struct domain *d2)
+{
+    if ( !IS_PRIV_FOR(d1, d2) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, memory_pin_page)(struct domain *d, struct page_info *page)
+{
+    return 0;
+}
+
+static XSM_DEFAULT(int, evtchn_unbound) (struct domain *d, struct evtchn *chn,
+                                                                    domid_t id2)
+{
+    if ( current->domain != d && !IS_PRIV_FOR(current->domain, d) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, evtchn_interdomain) (struct domain *d1, struct evtchn
+                                *chan1, struct domain *d2, struct evtchn *chan2)
+{
+    return 0;
+}
+
+static XSM_DEFAULT(void, evtchn_close_post) (struct evtchn *chn)
+{
+    return;
+}
+
+static XSM_DEFAULT(int, evtchn_send) (struct domain *d, struct evtchn *chn)
+{
+    return 0;
+}
+
+static XSM_DEFAULT(int, evtchn_status) (struct domain *d, struct evtchn *chn)
+{
+    if ( current->domain != d && !IS_PRIV_FOR(current->domain, d) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, evtchn_reset) (struct domain *d1, struct domain *d2)
+{
+    if ( d1 != d2 && !IS_PRIV_FOR(d1, d2) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, alloc_security_evtchn) (struct evtchn *chn)
+{
+    return 0;
+}
+
+static XSM_DEFAULT(void, free_security_evtchn) (struct evtchn *chn)
+{
+    return;
+}
+
+static XSM_DEFAULT(char *, show_security_evtchn) (struct domain *d, const struct evtchn *chn)
+{
+    return NULL;
+}
+
+static XSM_DEFAULT(int, get_pod_target)(struct domain *d)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, set_pod_target)(struct domain *d)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, get_device_group) (uint32_t machine_bdf)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, test_assign_device) (uint32_t machine_bdf)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, assign_device) (struct domain *d, uint32_t machine_bdf)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, deassign_device) (struct domain *d, uint32_t machine_bdf)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, resource_plug_core) (void)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, resource_unplug_core) (void)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, resource_plug_pci) (uint32_t machine_bdf)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, resource_unplug_pci) (uint32_t machine_bdf)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, resource_setup_pci) (uint32_t machine_bdf)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, resource_setup_gsi) (int gsi)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, resource_setup_misc) (void)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, page_offline) (uint32_t cmd)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, lockprof) (void)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, cpupool_op) (void)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, sched_op) (void)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(long, __do_xsm_op)(XEN_GUEST_HANDLE(xsm_op_t) op)
+{
+    return -ENOSYS;
+}
+
+static XSM_DEFAULT(char *, show_irq_sid) (int irq)
+{
+    return NULL;
+}
+
+static XSM_DEFAULT(int, map_domain_pirq) (struct domain *d, int irq, void *data)
+{
+    if ( !IS_PRIV_FOR(current->domain, d) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, unmap_domain_pirq) (struct domain *d, int irq)
+{
+    if ( !IS_PRIV_FOR(current->domain, d) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, irq_permission) (struct domain *d, int pirq, uint8_t allow)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, iomem_permission) (struct domain *d, uint64_t s, uint64_t e, uint8_t allow)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, iomem_mapping) (struct domain *d, uint64_t s, uint64_t e, uint8_t allow)
+{
+    if ( !IS_PRIV_FOR(current->domain, d) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, pci_config_permission) (struct domain *d, uint32_t machine_bdf,
+                                        uint16_t start, uint16_t end,
+                                        uint8_t access)
+{
+    if ( !IS_PRIV(d) )
+        return -EPERM;
+    return 0;
+}
+
+#ifdef CONFIG_X86
+static XSM_DEFAULT(int, shadow_control) (struct domain *d, uint32_t op)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, getpageframeinfo) (struct page_info *page)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, getmemlist) (struct domain *d)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, hypercall_init) (struct domain *d)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, hvmcontext) (struct domain *d, uint32_t cmd)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, address_size) (struct domain *d, uint32_t cmd)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, machine_address_size) (struct domain *d, uint32_t cmd)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, hvm_param) (struct domain *d, unsigned long op)
+{
+    if ( current->domain != d && !IS_PRIV_FOR(current->domain, d) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, hvm_set_pci_intx_level) (struct domain *d)
+{
+    if ( !IS_PRIV_FOR(current->domain, d) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, hvm_set_isa_irq_level) (struct domain *d)
+{
+    if ( !IS_PRIV_FOR(current->domain, d) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, hvm_set_pci_link_route) (struct domain *d)
+{
+    if ( !IS_PRIV_FOR(current->domain, d) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, hvm_inject_msi) (struct domain *d)
+{
+    if ( !IS_PRIV_FOR(current->domain, d) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, mem_event) (struct domain *d)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, mem_sharing) (struct domain *d)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, apic) (struct domain *d, int cmd)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, xen_settime) (void)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, memtype) (uint32_t access)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, microcode) (void)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, physinfo) (void)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, platform_quirk) (uint32_t quirk)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, firmware_info) (void)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, efi_call) (void)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, acpi_sleep) (void)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, change_freq) (void)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, getidletime) (void)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, machine_memory_map) (void)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, domain_memory_map) (struct domain *d)
+{
+    if ( current->domain != d && !IS_PRIV_FOR(current->domain, d) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, mmu_normal_update) (struct domain *d, struct domain *t,
+                                    struct domain *f, intpte_t fpte)
+{
+    return 0;
+}
+
+static XSM_DEFAULT(int, mmu_machphys_update) (struct domain *d, unsigned long mfn)
+{
+    return 0;
+}
+
+static XSM_DEFAULT(int, update_va_mapping) (struct domain *d, struct domain *f, 
+                                                            l1_pgentry_t pte)
+{
+    return 0;
+}
+
+static XSM_DEFAULT(int, add_to_physmap) (struct domain *d1, struct domain *d2)
+{
+    if ( d1 != d2 && !IS_PRIV_FOR(d1, d2) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, remove_from_physmap) (struct domain *d1, struct domain *d2)
+{
+    if ( d1 != d2 && !IS_PRIV_FOR(d1, d2) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, sendtrigger) (struct domain *d)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, bind_pt_irq) (struct domain *d, struct xen_domctl_bind_pt_irq *bind)
+{
+    if ( !IS_PRIV_FOR(current->domain, d) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, unbind_pt_irq) (struct domain *d, struct xen_domctl_bind_pt_irq *bind)
+{
+    if ( !IS_PRIV_FOR(current->domain, d) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, pin_mem_cacheattr) (struct domain *d)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, ext_vcpucontext) (struct domain *d, uint32_t cmd)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, vcpuextstate) (struct domain *d, uint32_t cmd)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, ioport_permission) (struct domain *d, uint32_t s, uint32_t e, uint8_t allow)
+{
+    if ( !IS_PRIV(current->domain) )
+        return -EPERM;
+    return 0;
+}
+
+static XSM_DEFAULT(int, ioport_mapping) (struct domain *d, uint32_t s, uint32_t e, uint8_t allow)
+{
+    if ( !IS_PRIV_FOR(current->domain, d) )
+        return -EPERM;
+    return 0;
+}
+
+#endif
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index 0434c05..1a9f35b 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -21,11 +21,7 @@
 typedef void xsm_op_t;
 DEFINE_XEN_GUEST_HANDLE(xsm_op_t);
 
-#ifdef XSM_ENABLE
-    #define xsm_call(fn) xsm_ops->fn
-#else
-    #define xsm_call(fn) 0
-#endif
+#define xsm_call(fn) xsm_ops->fn
 
 /* policy magic number (defined by XSM_MAGIC) */
 typedef u32 xsm_magic_t;
@@ -188,8 +184,6 @@ struct xsm_operations {
 #endif
 };
 
-#endif
-
 extern struct xsm_operations *xsm_ops;
 
 static inline void xsm_security_domaininfo (struct domain *d,
@@ -598,32 +592,11 @@ static inline int xsm_sched_op(void)
     return xsm_call(sched_op());
 }
 
-static inline long __do_xsm_op (XEN_GUEST_HANDLE(xsm_op_t) op)
+static inline long xsm___do_xsm_op (XEN_GUEST_HANDLE(xsm_op_t) op)
 {
-#ifdef XSM_ENABLE
     return xsm_ops->__do_xsm_op(op);
-#else
-    return -ENOSYS;
-#endif
 }
 
-#ifdef XSM_ENABLE
-extern int xsm_init(unsigned long *module_map, const multiboot_info_t *mbi,
-                    void *(*bootstrap_map)(const module_t *));
-extern int xsm_policy_init(unsigned long *module_map,
-                           const multiboot_info_t *mbi,
-                           void *(*bootstrap_map)(const module_t *));
-extern int register_xsm(struct xsm_operations *ops);
-extern int unregister_xsm(struct xsm_operations *ops);
-#else
-static inline int xsm_init (unsigned long *module_map,
-                            const multiboot_info_t *mbi,
-                            void *(*bootstrap_map)(const module_t *))
-{
-    return 0;
-}
-#endif
-
 #ifdef CONFIG_X86
 static inline int xsm_shadow_control (struct domain *d, uint32_t op)
 {
@@ -824,7 +797,28 @@ static inline int xsm_ioport_mapping (struct domain *d, uint32_t s, uint32_t e,
 }
 #endif /* CONFIG_X86 */
 
+extern int xsm_init(unsigned long *module_map, const multiboot_info_t *mbi,
+                    void *(*bootstrap_map)(const module_t *));
+extern int xsm_policy_init(unsigned long *module_map,
+                           const multiboot_info_t *mbi,
+                           void *(*bootstrap_map)(const module_t *));
+extern int register_xsm(struct xsm_operations *ops);
+extern int unregister_xsm(struct xsm_operations *ops);
+
 extern struct xsm_operations dummy_xsm_ops;
 extern void xsm_fixup_ops(struct xsm_operations *ops);
 
+#else /* XSM_ENABLE */
+
+#define XSM_DEFAULT(type, name) inline type xsm_ ## name
+#include <xsm/dummy.h>
+
+static inline int xsm_init (unsigned long *module_map,
+                            const multiboot_info_t *mbi,
+                            void *(*bootstrap_map)(const module_t *))
+{
+    return 0;
+}
+#endif /* XSM_ENABLE */
+
 #endif /* __XSM_H */
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index fd075c7..af532b8 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -10,621 +10,8 @@
  *  as published by the Free Software Foundation.
  */
 
-#include <xen/sched.h>
-#include <xsm/xsm.h>
-
-static void dummy_security_domaininfo(struct domain *d,
-                                    struct xen_domctl_getdomaininfo *info)
-{
-    return;
-}
-
-static int dummy_setvcpucontext(struct domain *d)
-{
-    return 0;
-}
-
-static int dummy_pausedomain (struct domain *d)
-{
-    return 0;
-}
-
-static int dummy_unpausedomain (struct domain *d)
-{
-    return 0;
-}
-
-static int dummy_resumedomain (struct domain *d)
-{
-    return 0;
-}
-
-static int dummy_domain_create(struct domain *d, u32 ssidref)
-{
-    return 0;
-}
-
-static int dummy_max_vcpus(struct domain *d)
-{
-    return 0;
-}
-
-static int dummy_destroydomain (struct domain *d)
-{
-    return 0;
-}
-
-static int dummy_vcpuaffinity (int cmd, struct domain *d)
-{
-    return 0;
-}
-
-static int dummy_scheduler (struct domain *d)
-{
-    return 0;
-}
-
-static int dummy_getdomaininfo (struct domain *d)
-{
-    if ( !IS_PRIV(current->domain) )
-        return -EPERM;
-    return 0;
-}
-
-static int dummy_getvcpucontext (struct domain *d)
-{
-    return 0;
-}
-
-static int dummy_getvcpuinfo (struct domain *d)
-{
-    return 0;
-}
-
-static int dummy_domain_settime (struct domain *d)
-{
-    return 0;
-}
-
-static int dummy_set_target (struct domain *d, struct domain *e)
-{
-    return 0;
-}
-
-static int dummy_domctl(struct domain *d, int cmd)
-{
-    return 0;
-}
-
-static int dummy_set_virq_handler(struct domain *d, uint32_t virq)
-{
-    return 0;
-}
-
-static int dummy_tbufcontrol (void)
-{
-    return 0;
-}
-
-static int dummy_readconsole (uint32_t clear)
-{
-    return 0;
-}
-
-static int dummy_sched_id (void)
-{
-    return 0;
-}
-
-static int dummy_setdomainmaxmem (struct domain *d)
-{
-    return 0;
-}
-
-static int dummy_setdomainhandle (struct domain *d)
-{
-    return 0;
-}
-
-static int dummy_setdebugging (struct domain *d)
-{
-    return 0;
-}
-
-static int dummy_perfcontrol (void)
-{
-    return 0;
-}
-
-static int dummy_debug_keys (void)
-{
-    return 0;
-}
-
-static int dummy_getcpuinfo (void)
-{
-    return 0;
-}
-
-static int dummy_get_pmstat (void)
-{
-    return 0;
-}
-
-static int dummy_setpminfo (void)
-{
-    return 0;
-}
-
-static int dummy_pm_op (void)
-{
-    return 0;
-}
-
-static int dummy_do_mca (void)
-{
-    return 0;
-}
-
-static int dummy_availheap (void)
-{
-    return 0;
-}
-
-static int dummy_alloc_security_domain (struct domain *d)
-{
-    return 0;
-}
-
-static void dummy_free_security_domain (struct domain *d)
-{
-    return;
-}
-
-static int dummy_grant_mapref (struct domain *d1, struct domain *d2,
-                                                                uint32_t flags)
-{
-    return 0;
-}
-
-static int dummy_grant_unmapref (struct domain *d1, struct domain *d2)
-{
-    return 0;
-}
-
-static int dummy_grant_setup (struct domain *d1, struct domain *d2)
-{
-    return 0;
-}
-
-static int dummy_grant_transfer (struct domain *d1, struct domain *d2)
-{
-    return 0;
-}
-
-static int dummy_grant_copy (struct domain *d1, struct domain *d2)
-{
-    return 0;
-}
-
-static int dummy_grant_query_size (struct domain *d1, struct domain *d2)
-{
-    return 0;
-}
-
-static int dummy_memory_adjust_reservation (struct domain *d1,
-                                                            struct domain *d2)
-{
-    return 0;
-}
-
-static int dummy_memory_stat_reservation (struct domain *d1, struct domain *d2)
-{
-    return 0;
-}
-
-static int dummy_console_io (struct domain *d, int cmd)
-{
-    return 0;
-}
-
-static int dummy_profile (struct domain *d, int op)
-{
-    return 0;
-}
-
-static int dummy_kexec (void)
-{
-    return 0;
-}
-
-static int dummy_schedop_shutdown (struct domain *d1, struct domain *d2)
-{
-    return 0;
-}
-
-static int dummy_memory_pin_page(struct domain *d, struct page_info *page)
-{
-    return 0;
-}
-
-static int dummy_evtchn_unbound (struct domain *d, struct evtchn *chn,
-                                                                    domid_t id2)
-{
-    return 0;
-}
-
-static int dummy_evtchn_interdomain (struct domain *d1, struct evtchn
-                                *chan1, struct domain *d2, struct evtchn *chan2)
-{
-    return 0;
-}
-
-static void dummy_evtchn_close_post (struct evtchn *chn)
-{
-    return;
-}
-
-static int dummy_evtchn_send (struct domain *d, struct evtchn *chn)
-{
-    return 0;
-}
-
-static int dummy_evtchn_status (struct domain *d, struct evtchn *chn)
-{
-    return 0;
-}
-
-static int dummy_evtchn_reset (struct domain *d1, struct domain *d2)
-{
-    return 0;
-}
-
-static int dummy_alloc_security_evtchn (struct evtchn *chn)
-{
-    return 0;
-}
-
-static void dummy_free_security_evtchn (struct evtchn *chn)
-{
-    return;
-}
-
-static char *dummy_show_security_evtchn (struct domain *d, const struct evtchn *chn)
-{
-    return NULL;
-}
-
-static int dummy_get_pod_target(struct domain *d)
-{
-    return 0;
-}
-
-static int dummy_set_pod_target(struct domain *d)
-{
-    return 0;
-}
-
-static int dummy_get_device_group (uint32_t machine_bdf)
-{
-    return 0;
-}
-
-static int dummy_test_assign_device (uint32_t machine_bdf)
-{
-    return 0;
-}
-
-static int dummy_assign_device (struct domain *d, uint32_t machine_bdf)
-{
-    return 0;
-}
-
-static int dummy_deassign_device (struct domain *d, uint32_t machine_bdf)
-{
-    return 0;
-}
-
-static int dummy_resource_plug_core (void)
-{
-    return 0;
-}
-
-static int dummy_resource_unplug_core (void)
-{
-    return 0;
-}
-
-static int dummy_resource_plug_pci (uint32_t machine_bdf)
-{
-    return 0;
-}
-
-static int dummy_resource_unplug_pci (uint32_t machine_bdf)
-{
-    return 0;
-}
-
-static int dummy_resource_setup_pci (uint32_t machine_bdf)
-{
-    return 0;
-}
-
-static int dummy_resource_setup_gsi (int gsi)
-{
-    return 0;
-}
-
-static int dummy_resource_setup_misc (void)
-{
-    return 0;
-}
-
-static int dummy_page_offline (uint32_t cmd)
-{
-    return 0;
-}
-
-static int dummy_lockprof (void)
-{
-    return 0;
-}
-
-static int dummy_cpupool_op (void)
-{
-    return 0;
-}
-
-static int dummy_sched_op (void)
-{
-    return 0;
-}
-
-static long dummy___do_xsm_op(XEN_GUEST_HANDLE(xsm_op_t) op)
-{
-    return -ENOSYS;
-}
-
-static char *dummy_show_irq_sid (int irq)
-{
-    return NULL;
-}
-
-static int dummy_map_domain_pirq (struct domain *d, int irq, void *data)
-{
-    return 0;
-}
-
-static int dummy_unmap_domain_pirq (struct domain *d, int irq)
-{
-    return 0;
-}
-
-static int dummy_irq_permission (struct domain *d, int pirq, uint8_t allow)
-{
-    return 0;
-}
-
-static int dummy_iomem_permission (struct domain *d, uint64_t s, uint64_t e, uint8_t allow)
-{
-    return 0;
-}
-
-static int dummy_iomem_mapping (struct domain *d, uint64_t s, uint64_t e, uint8_t allow)
-{
-    return 0;
-}
-
-static int dummy_pci_config_permission (struct domain *d, uint32_t machine_bdf,
-                                        uint16_t start, uint16_t end,
-                                        uint8_t access)
-{
-    return 0;
-}
-
-#ifdef CONFIG_X86
-static int dummy_shadow_control (struct domain *d, uint32_t op)
-{
-    return 0;
-}
-
-static int dummy_getpageframeinfo (struct page_info *page)
-{
-    return 0;
-}
-
-static int dummy_getmemlist (struct domain *d)
-{
-    return 0;
-}
-
-static int dummy_hypercall_init (struct domain *d)
-{
-    return 0;
-}
-
-static int dummy_hvmcontext (struct domain *d, uint32_t cmd)
-{
-    return 0;
-}
-
-static int dummy_address_size (struct domain *d, uint32_t cmd)
-{
-    return 0;
-}
-
-static int dummy_machine_address_size (struct domain *d, uint32_t cmd)
-{
-    return 0;
-}
-
-static int dummy_hvm_param (struct domain *d, unsigned long op)
-{
-    return 0;
-}
-
-static int dummy_hvm_set_pci_intx_level (struct domain *d)
-{
-    return 0;
-}
-
-static int dummy_hvm_set_isa_irq_level (struct domain *d)
-{
-    return 0;
-}
-
-static int dummy_hvm_set_pci_link_route (struct domain *d)
-{
-    return 0;
-}
-
-static int dummy_hvm_inject_msi (struct domain *d)
-{
-    return 0;
-}
-
-static int dummy_mem_event (struct domain *d)
-{
-    return 0;
-}
-
-static int dummy_mem_sharing (struct domain *d)
-{
-    return 0;
-}
-
-static int dummy_apic (struct domain *d, int cmd)
-{
-    return 0;
-}
-
-static int dummy_xen_settime (void)
-{
-    return 0;
-}
-
-static int dummy_memtype (uint32_t access)
-{
-    return 0;
-}
-
-static int dummy_microcode (void)
-{
-    return 0;
-}
-
-static int dummy_physinfo (void)
-{
-    return 0;
-}
-
-static int dummy_platform_quirk (uint32_t quirk)
-{
-    return 0;
-}
-
-static int dummy_firmware_info (void)
-{
-    return 0;
-}
-
-static int dummy_efi_call(void)
-{
-    return 0;
-}
-
-static int dummy_acpi_sleep (void)
-{
-    return 0;
-}
-
-static int dummy_change_freq (void)
-{
-    return 0;
-}
-
-static int dummy_getidletime (void)
-{
-    return 0;
-}
-
-static int dummy_machine_memory_map (void)
-{
-    return 0;
-}
-
-static int dummy_domain_memory_map (struct domain *d)
-{
-    return 0;
-}
-
-static int dummy_mmu_normal_update (struct domain *d, struct domain *t,
-                                    struct domain *f, intpte_t fpte)
-{
-    return 0;
-}
-
-static int dummy_mmu_machphys_update (struct domain *d, unsigned long mfn)
-{
-    return 0;
-}
-
-static int dummy_update_va_mapping (struct domain *d, struct domain *f, 
-                                                            l1_pgentry_t pte)
-{
-    return 0;
-}
-
-static int dummy_add_to_physmap (struct domain *d1, struct domain *d2)
-{
-    return 0;
-}
-
-static int dummy_remove_from_physmap (struct domain *d1, struct domain *d2)
-{
-    return 0;
-}
-
-static int dummy_sendtrigger (struct domain *d)
-{
-    return 0;
-}
-
-static int dummy_bind_pt_irq (struct domain *d, struct xen_domctl_bind_pt_irq *bind)
-{
-    return 0;
-}
-
-static int dummy_unbind_pt_irq (struct domain *d, struct xen_domctl_bind_pt_irq *bind)
-{
-    return 0;
-}
-
-static int dummy_pin_mem_cacheattr (struct domain *d)
-{
-    return 0;
-}
-
-static int dummy_ext_vcpucontext (struct domain *d, uint32_t cmd)
-{
-    return 0;
-}
-
-static int dummy_vcpuextstate (struct domain *d, uint32_t cmd)
-{
-    return 0;
-}
-
-static int dummy_ioport_permission (struct domain *d, uint32_t s, uint32_t e, uint8_t allow)
-{
-    return 0;
-}
-
-static int dummy_ioport_mapping (struct domain *d, uint32_t s, uint32_t e, uint8_t allow)
-{
-    return 0;
-}
-#endif
+#define XSM_DEFAULT(type, name) type dummy_ ## name
+#include <xsm/dummy.h>
 
 struct xsm_operations dummy_xsm_ops;
 
diff --git a/xen/xsm/xsm_core.c b/xen/xsm/xsm_core.c
index 96c8669..c4c85c0 100644
--- a/xen/xsm/xsm_core.c
+++ b/xen/xsm/xsm_core.c
@@ -113,7 +113,7 @@ int unregister_xsm(struct xsm_operations *ops)
 
 long do_xsm_op (XEN_GUEST_HANDLE(xsm_op_t) op)
 {
-    return __do_xsm_op(op);
+    return xsm___do_xsm_op(op);
 }
 
 
-- 
1.7.11.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 14:33:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:33:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOMg-00036D-1Z; Mon, 06 Aug 2012 14:33:02 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1SyOMe-00034F-Q5
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 14:33:01 +0000
Received: from [85.158.138.51:29224] by server-11.bemta-3.messagelabs.com id
	27/FD-10722-C95DF105; Mon, 06 Aug 2012 14:33:00 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-4.tower-174.messagelabs.com!1344263578!30617318!1
X-Originating-IP: [63.239.67.9]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25887 invoked from network); 6 Aug 2012 14:32:59 -0000
Received: from emvm-gh1-uea08.nsa.gov (HELO nsa.gov) (63.239.67.9)
	by server-4.tower-174.messagelabs.com with SMTP;
	6 Aug 2012 14:32:59 -0000
X-TM-IMSS-Message-ID: <7b0746c70002b21e@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov ([63.239.67.9])
	with ESMTP (TREND IMSS SMTP Service 7.1) id 7b0746c70002b21e ;
	Mon, 6 Aug 2012 10:32:46 -0400
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id q76EWZ9K011112
	for <xen-devel@lists.xen.org>; Mon, 6 Aug 2012 10:32:35 -0400
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: xen-devel@lists.xen.org
Date: Mon,  6 Aug 2012 10:32:12 -0400
Message-Id: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.2
Subject: [Xen-devel] [PATCH 00/18] RFC: Merge IS_PRIV checks into XSM hooks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Since this series changes a lot of code accessible to guests, I think
it's better to let people comment on it now, even if it won't go in
until 4.3. Overall, it should not change the behavior of Xen when XSM is
not enabled; however, in some cases, the exact errors that are returned
will be different because security checks have been moved below validity
checks. Also, once applied, newly introduced domctls and sysctls will
not automatically be guarded by IS_PRIV checks - they will need to add
their own permission checking code.

Background:

The Xen hypervisor has two basic access control function calls: IS_PRIV
and the xsm_* functions. Most privileged operations currently require
that both checks succeed, and the two checks are often made at different
locations in the code.
   
When performing dom0 disaggregation, many of the functions normally
protected with IS_PRIV are handled by domains other than dom0. This
requires either making all such disaggregated domains privileged, or
allowing certain operations to be performed without an IS_PRIV check.
Because the privileged bit also short-circuits the IS_PRIV_FOR check,
and some IS_PRIV calls do not currently have an accompanying XSM call,
this series implements the second option.

Once applied, most IS_PRIV checks are isolated in the newly introduced
xen/include/xsm/dummy.h header. The remaining checks cover a few areas
that need further examination or that have a reason to remain:

1. Overriding the IRQ and IO memory access checks (arch/x86/domctl.c).
   These overrides should not be needed, as dom0 should have access
   without needing the override.
2. Allow MAP_PIRQ_TYPE_GSI to ignore domain_pirq_to_irq negative return
3. PIRQ operations by HVM domains (TODO add hooks)
4. The hack for device model framebuffers in get_page_from_l1e
5. Installing maps of non-owned pages in shadow_get_page_from_l1e
6. PCI configuration space (arch/x86/traps.c). Allowing a PV Linux domU
   to access the PCI configuration space is a good way to crash the
   system as it reconfigures PCI devices during boot, so this needs to
   remain to get a working system when FLASK is in permissive mode.
7. Various MSR accesses (arch/x86/traps.c)
8. ARM architecture - not touched at all in these patches.

The rcu_lock_target_domain_by_id and rcu_lock_remote_target_domain_by_id
functions are removed by this series because they act as wrappers around
IS_PRIV_FOR; their callers have been changed to use XSM checks instead.

Miscellaneous updates to FLASK:
    [PATCH 01/18] xsm/flask: remove inherited class attributes
    [PATCH 02/18] xsm/flask: remove unneeded create_sid field
    [PATCH 03/18] xsm/flask: add domain relabel support
    [PATCH 04/18] libxl: introduce XSM relabel on build
    [PATCH 05/18] flask/policy: Add domain relabel example

Preparatory:
    [PATCH 06/18] xsm, arch/x86: add distinct XSM hooks for map/unmap
    [PATCH 07/18] arch/x86: add missing XSM checks to XENPF_ commands
    [PATCH 08/18] xen: Add DOMID_SELF support to rcu_lock_domain_by_id
    [PATCH 09/18] xsm/flask: Add checks on the domain performing set_target

Refactor checks into existing XSM hooks:
    [PATCH 10/18] xsm: Add IS_PRIV checks to dummy XSM module
    [PATCH 11/18] xen: use XSM instead of IS_PRIV where duplicated

Clean up remaining IS_PRIV calls (1):
    [PATCH 12/18] xsm: Add missing domctl and mem_sharing hooks
    [PATCH 13/18] tmem: Add access control check

FLASK updates to allow acting as a proper IS_PRIV replacement:
    [PATCH 14/18] xsm: remove unneeded xsm_call macro
    [PATCH 15/18] xsm/flask: add distinct SIDs for self/target access

Clean up remaining IS_PRIV calls (2):
    [PATCH 16/18] arch/x86: use XSM hooks for get_pg_owner access checks
    [PATCH 17/18] xen: Add XSM hook for XENMEM_exchange
    [PATCH 18/18] xen: remove rcu_lock_target_domain_by_id

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 14:33:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:33:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOMj-0003AJ-FQ; Mon, 06 Aug 2012 14:33:05 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1SyOMh-00036s-BM
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 14:33:03 +0000
Received: from [85.158.138.51:29459] by server-5.bemta-3.messagelabs.com id
	7D/6D-27557-E95DF105; Mon, 06 Aug 2012 14:33:02 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-10.tower-174.messagelabs.com!1344263581!26604261!1
X-Originating-IP: [63.239.67.9]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3540 invoked from network); 6 Aug 2012 14:33:01 -0000
Received: from emvm-gh1-uea08.nsa.gov (HELO nsa.gov) (63.239.67.9)
	by server-10.tower-174.messagelabs.com with SMTP;
	6 Aug 2012 14:33:01 -0000
X-TM-IMSS-Message-ID: <7b0752e80002b22a@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov ([63.239.67.9])
	with ESMTP (TREND IMSS SMTP Service 7.1) id 7b0752e80002b22a ;
	Mon, 6 Aug 2012 10:32:49 -0400
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id q76EWZ9T011112; 
	Mon, 6 Aug 2012 10:32:38 -0400
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: xen-devel@lists.xen.org
Date: Mon,  6 Aug 2012 10:32:21 -0400
Message-Id: <1344263550-3941-10-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.2
In-Reply-To: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
References: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Subject: [Xen-devel] [PATCH 09/18] xsm/flask: Add checks on the domain
	performing the set_target operation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The existing domain__set_target check only verifies that the source and
target domains can be associated. We also need to check that the
privileged domain making this association is allowed to do so.

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 tools/flask/policy/policy/flask/access_vectors | 2 ++
 xen/xsm/flask/hooks.c                          | 7 +++++++
 xen/xsm/flask/include/av_perm_to_string.h      | 2 ++
 xen/xsm/flask/include/av_permissions.h         | 2 ++
 4 files changed, 13 insertions(+)

diff --git a/tools/flask/policy/policy/flask/access_vectors b/tools/flask/policy/policy/flask/access_vectors
index c7e29ab..11d02da 100644
--- a/tools/flask/policy/policy/flask/access_vectors
+++ b/tools/flask/policy/policy/flask/access_vectors
@@ -78,6 +78,8 @@ class domain2
 	relabelfrom
 	relabelto
 	relabelself
+	make_priv_for
+	set_as_target
 }
 
 class hvm
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 5923710..f8aff14 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -637,6 +637,13 @@ static int flask_domain_settime(struct domain *d)
 
 static int flask_set_target(struct domain *d, struct domain *e)
 {
+    int rc;
+    rc = domain_has_perm(current->domain, d, SECCLASS_DOMAIN2, DOMAIN2__MAKE_PRIV_FOR);
+    if ( rc )
+        return rc;
+    rc = domain_has_perm(current->domain, e, SECCLASS_DOMAIN2, DOMAIN2__SET_AS_TARGET);
+    if ( rc )
+        return rc;
     return domain_has_perm(d, e, SECCLASS_DOMAIN, DOMAIN__SET_TARGET);
 }
 
diff --git a/xen/xsm/flask/include/av_perm_to_string.h b/xen/xsm/flask/include/av_perm_to_string.h
index e7e2058..10f8e80 100644
--- a/xen/xsm/flask/include/av_perm_to_string.h
+++ b/xen/xsm/flask/include/av_perm_to_string.h
@@ -64,6 +64,8 @@
    S_(SECCLASS_DOMAIN2, DOMAIN2__RELABELFROM, "relabelfrom")
    S_(SECCLASS_DOMAIN2, DOMAIN2__RELABELTO, "relabelto")
    S_(SECCLASS_DOMAIN2, DOMAIN2__RELABELSELF, "relabelself")
+   S_(SECCLASS_DOMAIN2, DOMAIN2__MAKE_PRIV_FOR, "make_priv_for")
+   S_(SECCLASS_DOMAIN2, DOMAIN2__SET_AS_TARGET, "set_as_target")
    S_(SECCLASS_HVM, HVM__SETHVMC, "sethvmc")
    S_(SECCLASS_HVM, HVM__GETHVMC, "gethvmc")
    S_(SECCLASS_HVM, HVM__SETPARAM, "setparam")
diff --git a/xen/xsm/flask/include/av_permissions.h b/xen/xsm/flask/include/av_permissions.h
index cb1c5dc..f7cfee1 100644
--- a/xen/xsm/flask/include/av_permissions.h
+++ b/xen/xsm/flask/include/av_permissions.h
@@ -66,6 +66,8 @@
 #define DOMAIN2__RELABELFROM                      0x00000001UL
 #define DOMAIN2__RELABELTO                        0x00000002UL
 #define DOMAIN2__RELABELSELF                      0x00000004UL
+#define DOMAIN2__MAKE_PRIV_FOR                    0x00000008UL
+#define DOMAIN2__SET_AS_TARGET                    0x00000010UL
 
 #define HVM__SETHVMC                              0x00000001UL
 #define HVM__GETHVMC                              0x00000002UL
-- 
1.7.11.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The existing domain__set_target check only verifies that the source and
target domains can be associated. We also need to check that the
privileged domain making this association is allowed to do so.

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 tools/flask/policy/policy/flask/access_vectors | 2 ++
 xen/xsm/flask/hooks.c                          | 7 +++++++
 xen/xsm/flask/include/av_perm_to_string.h      | 2 ++
 xen/xsm/flask/include/av_permissions.h         | 2 ++
 4 files changed, 13 insertions(+)

diff --git a/tools/flask/policy/policy/flask/access_vectors b/tools/flask/policy/policy/flask/access_vectors
index c7e29ab..11d02da 100644
--- a/tools/flask/policy/policy/flask/access_vectors
+++ b/tools/flask/policy/policy/flask/access_vectors
@@ -78,6 +78,8 @@ class domain2
 	relabelfrom
 	relabelto
 	relabelself
+	make_priv_for
+	set_as_target
 }
 
 class hvm
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 5923710..f8aff14 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -637,6 +637,13 @@ static int flask_domain_settime(struct domain *d)
 
 static int flask_set_target(struct domain *d, struct domain *e)
 {
+    int rc;
+    rc = domain_has_perm(current->domain, d, SECCLASS_DOMAIN2, DOMAIN2__MAKE_PRIV_FOR);
+    if ( rc )
+        return rc;
+    rc = domain_has_perm(current->domain, e, SECCLASS_DOMAIN2, DOMAIN2__SET_AS_TARGET);
+    if ( rc )
+        return rc;
     return domain_has_perm(d, e, SECCLASS_DOMAIN, DOMAIN__SET_TARGET);
 }
 
diff --git a/xen/xsm/flask/include/av_perm_to_string.h b/xen/xsm/flask/include/av_perm_to_string.h
index e7e2058..10f8e80 100644
--- a/xen/xsm/flask/include/av_perm_to_string.h
+++ b/xen/xsm/flask/include/av_perm_to_string.h
@@ -64,6 +64,8 @@
    S_(SECCLASS_DOMAIN2, DOMAIN2__RELABELFROM, "relabelfrom")
    S_(SECCLASS_DOMAIN2, DOMAIN2__RELABELTO, "relabelto")
    S_(SECCLASS_DOMAIN2, DOMAIN2__RELABELSELF, "relabelself")
+   S_(SECCLASS_DOMAIN2, DOMAIN2__MAKE_PRIV_FOR, "make_priv_for")
+   S_(SECCLASS_DOMAIN2, DOMAIN2__SET_AS_TARGET, "set_as_target")
    S_(SECCLASS_HVM, HVM__SETHVMC, "sethvmc")
    S_(SECCLASS_HVM, HVM__GETHVMC, "gethvmc")
    S_(SECCLASS_HVM, HVM__SETPARAM, "setparam")
diff --git a/xen/xsm/flask/include/av_permissions.h b/xen/xsm/flask/include/av_permissions.h
index cb1c5dc..f7cfee1 100644
--- a/xen/xsm/flask/include/av_permissions.h
+++ b/xen/xsm/flask/include/av_permissions.h
@@ -66,6 +66,8 @@
 #define DOMAIN2__RELABELFROM                      0x00000001UL
 #define DOMAIN2__RELABELTO                        0x00000002UL
 #define DOMAIN2__RELABELSELF                      0x00000004UL
+#define DOMAIN2__MAKE_PRIV_FOR                    0x00000008UL
+#define DOMAIN2__SET_AS_TARGET                    0x00000010UL
 
 #define HVM__SETHVMC                              0x00000001UL
 #define HVM__GETHVMC                              0x00000002UL
-- 
1.7.11.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 14:34:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:34:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyONk-00040f-U9; Mon, 06 Aug 2012 14:34:08 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1SyONk-0003zz-Bw
	for xen-devel@lists.xensource.com; Mon, 06 Aug 2012 14:34:08 +0000
Received: from [85.158.138.51:37449] by server-6.bemta-3.messagelabs.com id
	CA/69-02321-FD5DF105; Mon, 06 Aug 2012 14:34:07 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-15.tower-174.messagelabs.com!1344263645!28887554!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDY4MDU3MA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1925 invoked from network); 6 Aug 2012 14:34:06 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-15.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Aug 2012 14:34:06 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q76EXvxI005516
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 6 Aug 2012 14:33:57 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q76EXuJs000614
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 6 Aug 2012 14:33:57 GMT
Received: from abhmt113.oracle.com (abhmt113.oracle.com [141.146.116.65])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q76EXuvS014461; Mon, 6 Aug 2012 09:33:56 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 06 Aug 2012 07:33:56 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 9EBF841F36; Mon,  6 Aug 2012 10:24:32 -0400 (EDT)
Date: Mon, 6 Aug 2012 10:24:32 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20120806142432.GA2487@phenom.dumpdata.com>
References: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
	<1344262325-26598-1-git-send-email-stefano.stabellini@eu.citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1344262325-26598-1-git-send-email-stefano.stabellini@eu.citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: xen-devel@lists.xensource.com, Tim.Deegan@citrix.com,
	Ian.Campbell@citrix.com
Subject: Re: [Xen-devel] [PATCH 1/5] xen: improve changes to
 xen_add_to_physmap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Aug 06, 2012 at 03:12:01PM +0100, Stefano Stabellini wrote:
> This is an incremental patch on top of
> c0bc926083b5987a3e9944eec2c12ad0580100e2: in order to retain binary
> compatibility, it is better to introduce foreign_domid as part of a
> union containing both size and foreign_domid.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> ---
>  xen/include/public/memory.h |   11 +++++++----
>  1 files changed, 7 insertions(+), 4 deletions(-)
> 
> diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
> index b2adfbe..b0af2fd 100644
> --- a/xen/include/public/memory.h
> +++ b/xen/include/public/memory.h
> @@ -208,8 +208,12 @@ struct xen_add_to_physmap {
>      /* Which domain to change the mapping for. */
>      domid_t domid;
>  
> -    /* Number of pages to go through for gmfn_range */
> -    uint16_t    size;
> +    union {
> +        /* Number of pages to go through for gmfn_range */
> +        uint16_t    size;
> +        /* IFF gmfn_foreign */
> +        domid_t foreign_domid;
> +    };
>  
>      /* Source mapping space. */
>  #define XENMAPSPACE_shared_info  0 /* shared info page */
> @@ -217,8 +221,7 @@ struct xen_add_to_physmap {
>  #define XENMAPSPACE_gmfn         2 /* GMFN */
>  #define XENMAPSPACE_gmfn_range   3 /* GMFN range */
>  #define XENMAPSPACE_gmfn_foreign 4 /* GMFN from another guest */
> -    uint16_t space;
> -    domid_t foreign_domid; /* IFF gmfn_foreign */
> +    unsigned int space;

Should this be of uint32... ?

>  
>  #define XENMAPIDX_grant_table_status 0x80000000
>  
> -- 
> 1.7.2.5
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 14:34:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:34:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOOM-0004Mu-DE; Mon, 06 Aug 2012 14:34:46 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <yongjie.ren@intel.com>) id 1SyOOL-0004Jw-Hf
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 14:34:45 +0000
X-Env-Sender: yongjie.ren@intel.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1344263678!11750569!1
X-Originating-IP: [143.182.124.21]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMjEgPT4gMzExNzIy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13834 invoked from network); 6 Aug 2012 14:34:39 -0000
Received: from mga03.intel.com (HELO mga03.intel.com) (143.182.124.21)
	by server-12.tower-27.messagelabs.com with SMTP;
	6 Aug 2012 14:34:39 -0000
Received: from azsmga002.ch.intel.com ([10.2.17.35])
	by azsmga101.ch.intel.com with ESMTP; 06 Aug 2012 07:34:38 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.71,315,1320652800"; d="scan'208";a="130773976"
Received: from fmsmsx107.amr.corp.intel.com ([10.19.9.54])
	by AZSMGA002.ch.intel.com with ESMTP; 06 Aug 2012 07:34:38 -0700
Received: from shsmsx102.ccr.corp.intel.com (10.239.4.154) by
	FMSMSX107.amr.corp.intel.com (10.19.9.54) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Mon, 6 Aug 2012 07:34:37 -0700
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.82]) by
	SHSMSX102.ccr.corp.intel.com ([169.254.2.196]) with mapi id
	14.01.0355.002; Mon, 6 Aug 2012 22:34:36 +0800
From: "Ren, Yongjie" <yongjie.ren@intel.com>
To: xen-devel <xen-devel@lists.xen.org>
Thread-Topic: Xen4.2-rc1 test result
Thread-Index: Ac1z4JMlowett3slQkWIjSAGX1ARsA==
Date: Mon, 6 Aug 2012 14:34:35 +0000
Message-ID: <1B4B44D9196EFF41AE41FDA404FC0A1013D698@SHSMSX101.ccr.corp.intel.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	Jan Beulich <JBeulich@suse.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] Xen4.2-rc1 test result
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi All,
We did a round of testing for Xen 4.2 RC1 (CS#25692) with Linux 3.4.7 dom0.
We covered VT-d, SR-IOV, Power Management, TXT, and the IVB and HSW new features.
We covered many cases for HVM guests (both Red Hat Linux and MS Windows guests).
We tested on Westmere-EP, SandyBridge-EP, IvyBridge desktop, and Haswell hardware platforms.
We found no new issues, and verified 1 fixed bug.

Fixed bug (1):
1. parameter 'maxvcpus' causes an HVM guest to boot up with the wrong vCPU number
http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1825
-- Fixed by Yang from Intel.

The following are some of the old issues which we believe are important:
1. long pause during the guest boot process (very slow bootup)
  http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1821
2. poor performance when doing guest save/restore and migration with Linux 3.x dom0
  http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1784
3. 'xl vcpu-set' can't decrease the vCPU number of an HVM guest
  http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1822
4. after detaching a VF from a guest, shutting down the guest is very slow
  http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1812
5. Dom0 cannot be shut down before PCI detachment from a guest, and PCI assignments can conflict
  (xl allows the same PCI device to be assigned to multiple guests)
  http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1826
6. guest hangs after resuming from S3
  http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1828


Best Regards,
     Yongjie (Jay)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 14:35:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:35:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOOx-0004hC-Rb; Mon, 06 Aug 2012 14:35:23 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SyOOv-0004fi-Op
	for xen-devel@lists.xensource.com; Mon, 06 Aug 2012 14:35:22 +0000
Received: from [85.158.139.83:38242] by server-2.bemta-5.messagelabs.com id
	03/D3-04598-926DF105; Mon, 06 Aug 2012 14:35:21 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-15.tower-182.messagelabs.com!1344263718!30423043!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjI4NDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5661 invoked from network); 6 Aug 2012 14:35:20 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 14:35:20 -0000
X-IronPort-AV: E=Sophos;i="4.77,720,1336363200"; d="scan'208";a="33698997"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 10:35:18 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Mon, 6 Aug 2012 10:35:18 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1SyOHO-0002zY-9X;
	Mon, 06 Aug 2012 15:27:34 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <linux-kernel@vger.kernel.org>
Date: Mon, 6 Aug 2012 15:27:13 +0100
Message-ID: <1344263246-28036-10-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, konrad.wilk@oracle.com,
	catalin.marinas@arm.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	tim@xen.org, linux-arm-kernel@lists.infradead.org
Subject: [Xen-devel] [PATCH v2 10/23] xen/arm: compile and run xenbus
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

bind_evtchn_to_irqhandler can legitimately return 0 (irq 0): it is not
an error.

If Linux is running as an HVM domain and is running as Dom0, use
xenstored_local_init to initialize the xenstore page and event channel.

Changes in v2:

- refactor xenbus_init.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 drivers/xen/xenbus/xenbus_comms.c |    2 +-
 drivers/xen/xenbus/xenbus_probe.c |   62 +++++++++++++++++++++++++-----------
 drivers/xen/xenbus/xenbus_xs.c    |    1 +
 3 files changed, 45 insertions(+), 20 deletions(-)

diff --git a/drivers/xen/xenbus/xenbus_comms.c b/drivers/xen/xenbus/xenbus_comms.c
index 52fe7ad..c5aa55c 100644
--- a/drivers/xen/xenbus/xenbus_comms.c
+++ b/drivers/xen/xenbus/xenbus_comms.c
@@ -224,7 +224,7 @@ int xb_init_comms(void)
 		int err;
 		err = bind_evtchn_to_irqhandler(xen_store_evtchn, wake_waiting,
 						0, "xenbus", &xb_waitq);
-		if (err <= 0) {
+		if (err < 0) {
 			printk(KERN_ERR "XENBUS request irq failed %i\n", err);
 			return err;
 		}
diff --git a/drivers/xen/xenbus/xenbus_probe.c b/drivers/xen/xenbus/xenbus_probe.c
index b793723..a67ccc0 100644
--- a/drivers/xen/xenbus/xenbus_probe.c
+++ b/drivers/xen/xenbus/xenbus_probe.c
@@ -719,37 +719,61 @@ static int __init xenstored_local_init(void)
 	return err;
 }
 
+enum xenstore_init {
+	UNKNOWN,
+	PV,
+	HVM,
+	LOCAL,
+};
 static int __init xenbus_init(void)
 {
 	int err = 0;
+	enum xenstore_init usage = UNKNOWN;
+	uint64_t v = 0;
 
 	if (!xen_domain())
 		return -ENODEV;
 
 	xenbus_ring_ops_init();
 
-	if (xen_hvm_domain()) {
-		uint64_t v = 0;
-		err = hvm_get_parameter(HVM_PARAM_STORE_EVTCHN, &v);
-		if (err)
-			goto out_error;
-		xen_store_evtchn = (int)v;
-		err = hvm_get_parameter(HVM_PARAM_STORE_PFN, &v);
-		if (err)
-			goto out_error;
-		xen_store_mfn = (unsigned long)v;
-		xen_store_interface = ioremap(xen_store_mfn << PAGE_SHIFT, PAGE_SIZE);
-	} else {
-		xen_store_evtchn = xen_start_info->store_evtchn;
-		xen_store_mfn = xen_start_info->store_mfn;
-		if (xen_store_evtchn)
-			xenstored_ready = 1;
-		else {
+	if (xen_pv_domain())
+		usage = PV;
+	if (xen_hvm_domain())
+		usage = HVM;
+	if (xen_hvm_domain() && xen_initial_domain())
+		usage = LOCAL;
+	if (xen_pv_domain() && !xen_start_info->store_evtchn)
+		usage = LOCAL;
+	if (xen_pv_domain() && xen_start_info->store_evtchn)
+		xenstored_ready = 1;
+
+	switch (usage) {
+		case LOCAL:
 			err = xenstored_local_init();
 			if (err)
 				goto out_error;
-		}
-		xen_store_interface = mfn_to_virt(xen_store_mfn);
+			xen_store_interface = mfn_to_virt(xen_store_mfn);
+			break;
+		case PV:
+			xen_store_evtchn = xen_start_info->store_evtchn;
+			xen_store_mfn = xen_start_info->store_mfn;
+			xen_store_interface = mfn_to_virt(xen_store_mfn);
+			break;
+		case HVM:
+			err = hvm_get_parameter(HVM_PARAM_STORE_EVTCHN, &v);
+			if (err)
+				goto out_error;
+			xen_store_evtchn = (int)v;
+			err = hvm_get_parameter(HVM_PARAM_STORE_PFN, &v);
+			if (err)
+				goto out_error;
+			xen_store_mfn = (unsigned long)v;
+			xen_store_interface =
+				ioremap(xen_store_mfn << PAGE_SHIFT, PAGE_SIZE);
+			break;
+		default:
+			pr_warn("Xenstore state unknown\n");
+			break;
 	}
 
 	/* Initialize the interface to xenstore. */
diff --git a/drivers/xen/xenbus/xenbus_xs.c b/drivers/xen/xenbus/xenbus_xs.c
index d1c217b..f7feb3d 100644
--- a/drivers/xen/xenbus/xenbus_xs.c
+++ b/drivers/xen/xenbus/xenbus_xs.c
@@ -44,6 +44,7 @@
 #include <linux/rwsem.h>
 #include <linux/module.h>
 #include <linux/mutex.h>
+#include <asm/xen/hypervisor.h>
 #include <xen/xenbus.h>
 #include <xen/xen.h>
 #include "xenbus_comms.h"
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 14:35:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:35:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOOx-0004hC-Rb; Mon, 06 Aug 2012 14:35:23 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SyOOv-0004fi-Op
	for xen-devel@lists.xensource.com; Mon, 06 Aug 2012 14:35:22 +0000
Received: from [85.158.139.83:38242] by server-2.bemta-5.messagelabs.com id
	03/D3-04598-926DF105; Mon, 06 Aug 2012 14:35:21 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-15.tower-182.messagelabs.com!1344263718!30423043!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjI4NDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5661 invoked from network); 6 Aug 2012 14:35:20 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 14:35:20 -0000
X-IronPort-AV: E=Sophos;i="4.77,720,1336363200"; d="scan'208";a="33698997"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 10:35:18 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Mon, 6 Aug 2012 10:35:18 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1SyOHO-0002zY-9X;
	Mon, 06 Aug 2012 15:27:34 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <linux-kernel@vger.kernel.org>
Date: Mon, 6 Aug 2012 15:27:13 +0100
Message-ID: <1344263246-28036-10-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, konrad.wilk@oracle.com,
	catalin.marinas@arm.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	tim@xen.org, linux-arm-kernel@lists.infradead.org
Subject: [Xen-devel] [PATCH v2 10/23] xen/arm: compile and run xenbus
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

bind_evtchn_to_irqhandler can legitimately return 0 (irq 0): it is not
an error.

If Linux is running as an HVM domain and as Dom0, use
xenstored_local_init to initialize the xenstore page and event channel.

Changes in v2:

- refactor xenbus_init.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 drivers/xen/xenbus/xenbus_comms.c |    2 +-
 drivers/xen/xenbus/xenbus_probe.c |   62 +++++++++++++++++++++++++-----------
 drivers/xen/xenbus/xenbus_xs.c    |    1 +
 3 files changed, 45 insertions(+), 20 deletions(-)

diff --git a/drivers/xen/xenbus/xenbus_comms.c b/drivers/xen/xenbus/xenbus_comms.c
index 52fe7ad..c5aa55c 100644
--- a/drivers/xen/xenbus/xenbus_comms.c
+++ b/drivers/xen/xenbus/xenbus_comms.c
@@ -224,7 +224,7 @@ int xb_init_comms(void)
 		int err;
 		err = bind_evtchn_to_irqhandler(xen_store_evtchn, wake_waiting,
 						0, "xenbus", &xb_waitq);
-		if (err <= 0) {
+		if (err < 0) {
 			printk(KERN_ERR "XENBUS request irq failed %i\n", err);
 			return err;
 		}
diff --git a/drivers/xen/xenbus/xenbus_probe.c b/drivers/xen/xenbus/xenbus_probe.c
index b793723..a67ccc0 100644
--- a/drivers/xen/xenbus/xenbus_probe.c
+++ b/drivers/xen/xenbus/xenbus_probe.c
@@ -719,37 +719,61 @@ static int __init xenstored_local_init(void)
 	return err;
 }
 
+enum xenstore_init {
+	UNKNOWN,
+	PV,
+	HVM,
+	LOCAL,
+};
 static int __init xenbus_init(void)
 {
 	int err = 0;
+	enum xenstore_init usage = UNKNOWN;
+	uint64_t v = 0;
 
 	if (!xen_domain())
 		return -ENODEV;
 
 	xenbus_ring_ops_init();
 
-	if (xen_hvm_domain()) {
-		uint64_t v = 0;
-		err = hvm_get_parameter(HVM_PARAM_STORE_EVTCHN, &v);
-		if (err)
-			goto out_error;
-		xen_store_evtchn = (int)v;
-		err = hvm_get_parameter(HVM_PARAM_STORE_PFN, &v);
-		if (err)
-			goto out_error;
-		xen_store_mfn = (unsigned long)v;
-		xen_store_interface = ioremap(xen_store_mfn << PAGE_SHIFT, PAGE_SIZE);
-	} else {
-		xen_store_evtchn = xen_start_info->store_evtchn;
-		xen_store_mfn = xen_start_info->store_mfn;
-		if (xen_store_evtchn)
-			xenstored_ready = 1;
-		else {
+	if (xen_pv_domain())
+		usage = PV;
+	if (xen_hvm_domain())
+		usage = HVM;
+	if (xen_hvm_domain() && xen_initial_domain())
+		usage = LOCAL;
+	if (xen_pv_domain() && !xen_start_info->store_evtchn)
+		usage = LOCAL;
+	if (xen_pv_domain() && xen_start_info->store_evtchn)
+		xenstored_ready = 1;
+
+	switch (usage) {
+		case LOCAL:
 			err = xenstored_local_init();
 			if (err)
 				goto out_error;
-		}
-		xen_store_interface = mfn_to_virt(xen_store_mfn);
+			xen_store_interface = mfn_to_virt(xen_store_mfn);
+			break;
+		case PV:
+			xen_store_evtchn = xen_start_info->store_evtchn;
+			xen_store_mfn = xen_start_info->store_mfn;
+			xen_store_interface = mfn_to_virt(xen_store_mfn);
+			break;
+		case HVM:
+			err = hvm_get_parameter(HVM_PARAM_STORE_EVTCHN, &v);
+			if (err)
+				goto out_error;
+			xen_store_evtchn = (int)v;
+			err = hvm_get_parameter(HVM_PARAM_STORE_PFN, &v);
+			if (err)
+				goto out_error;
+			xen_store_mfn = (unsigned long)v;
+			xen_store_interface =
+				ioremap(xen_store_mfn << PAGE_SHIFT, PAGE_SIZE);
+			break;
+		default:
+			pr_warn("Xenstore state unknown\n");
+			break;
 	}
 
 	/* Initialize the interface to xenstore. */
diff --git a/drivers/xen/xenbus/xenbus_xs.c b/drivers/xen/xenbus/xenbus_xs.c
index d1c217b..f7feb3d 100644
--- a/drivers/xen/xenbus/xenbus_xs.c
+++ b/drivers/xen/xenbus/xenbus_xs.c
@@ -44,6 +44,7 @@
 #include <linux/rwsem.h>
 #include <linux/module.h>
 #include <linux/mutex.h>
+#include <asm/xen/hypervisor.h>
 #include <xen/xenbus.h>
 #include <xen/xen.h>
 #include "xenbus_comms.h"
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 14:35:34 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:35:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOP6-0004oO-Ro; Mon, 06 Aug 2012 14:35:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SyOP5-0004mg-FQ
	for xen-devel@lists.xensource.com; Mon, 06 Aug 2012 14:35:31 +0000
Received: from [85.158.143.99:5215] by server-2.bemta-4.messagelabs.com id
	14/46-17938-236DF105; Mon, 06 Aug 2012 14:35:30 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1344263726!23447798!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzAwNzE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27902 invoked from network); 6 Aug 2012 14:35:29 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 14:35:29 -0000
X-IronPort-AV: E=Sophos;i="4.77,720,1336363200"; d="scan'208";a="204276920"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 10:35:25 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Mon, 6 Aug 2012 10:35:26 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1SyOHO-0002zY-2T;
	Mon, 06 Aug 2012 15:27:34 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <linux-kernel@vger.kernel.org>
Date: Mon, 6 Aug 2012 15:27:08 +0100
Message-ID: <1344263246-28036-5-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, konrad.wilk@oracle.com,
	catalin.marinas@arm.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	tim@xen.org, linux-arm-kernel@lists.infradead.org
Subject: [Xen-devel] [PATCH v2 05/23] xen/arm: empty implementation of
	grant_table arch specific functions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Changes in v2:

- return -ENOSYS rather than -1.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 arch/arm/xen/Makefile      |    2 +-
 arch/arm/xen/grant-table.c |   53 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 54 insertions(+), 1 deletions(-)
 create mode 100644 arch/arm/xen/grant-table.c

diff --git a/arch/arm/xen/Makefile b/arch/arm/xen/Makefile
index b9d6acc..4384103 100644
--- a/arch/arm/xen/Makefile
+++ b/arch/arm/xen/Makefile
@@ -1 +1 @@
-obj-y		:= enlighten.o hypercall.o
+obj-y		:= enlighten.o hypercall.o grant-table.o
diff --git a/arch/arm/xen/grant-table.c b/arch/arm/xen/grant-table.c
new file mode 100644
index 0000000..dbd1330
--- /dev/null
+++ b/arch/arm/xen/grant-table.c
@@ -0,0 +1,53 @@
+/******************************************************************************
+ * grant_table.c
+ * ARM specific part
+ *
+ * Granting foreign access to our memory reservation.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2
+ * as published by the Free Software Foundation; or, when distributed
+ * separately from the Linux kernel or incorporated into other
+ * software packages, subject to the following license:
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this source file (the "Software"), to deal in the Software without
+ * restriction, including without limitation the rights to use, copy, modify,
+ * merge, publish, distribute, sublicense, and/or sell copies of the Software,
+ * and to permit persons to whom the Software is furnished to do so, subject to
+ * the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ */
+
+#include <xen/interface/xen.h>
+#include <xen/page.h>
+#include <xen/grant_table.h>
+
+int arch_gnttab_map_shared(unsigned long *frames, unsigned long nr_gframes,
+			   unsigned long max_nr_gframes,
+			   void **__shared)
+{
+	return -ENOSYS;
+}
+
+void arch_gnttab_unmap(void *shared, unsigned long nr_gframes)
+{
+	return;
+}
+
+int arch_gnttab_map_status(uint64_t *frames, unsigned long nr_gframes,
+			   unsigned long max_nr_gframes,
+			   grant_status_t **__shared)
+{
+	return -ENOSYS;
+}
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 14:35:34 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:35:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOP6-0004nZ-8d; Mon, 06 Aug 2012 14:35:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SyOP4-0004lS-1q
	for xen-devel@lists.xensource.com; Mon, 06 Aug 2012 14:35:30 +0000
Received: from [85.158.143.99:16553] by server-1.bemta-4.messagelabs.com id
	43/8B-24392-136DF105; Mon, 06 Aug 2012 14:35:29 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1344263726!23447798!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzAwNzE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27836 invoked from network); 6 Aug 2012 14:35:28 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 14:35:28 -0000
X-IronPort-AV: E=Sophos;i="4.77,720,1336363200"; d="scan'208";a="204276919"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 10:35:25 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Mon, 6 Aug 2012 10:35:26 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1SyOHN-0002zY-W7;
	Mon, 06 Aug 2012 15:27:34 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <linux-kernel@vger.kernel.org>
Date: Mon, 6 Aug 2012 15:27:06 +0100
Message-ID: <1344263246-28036-3-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, konrad.wilk@oracle.com,
	catalin.marinas@arm.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	tim@xen.org, linux-arm-kernel@lists.infradead.org
Subject: [Xen-devel] [PATCH v2 03/23] xen/arm: page.h definitions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

ARM Xen guests always use paging in hardware, like PV on HVM guests in
the x86 world.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 arch/arm/include/asm/xen/page.h |   79 +++++++++++++++++++++++++++++++++++++++
 1 files changed, 79 insertions(+), 0 deletions(-)
 create mode 100644 arch/arm/include/asm/xen/page.h

diff --git a/arch/arm/include/asm/xen/page.h b/arch/arm/include/asm/xen/page.h
new file mode 100644
index 0000000..fe78331
--- /dev/null
+++ b/arch/arm/include/asm/xen/page.h
@@ -0,0 +1,79 @@
+#ifndef _ASM_ARM_XEN_PAGE_H
+#define _ASM_ARM_XEN_PAGE_H
+
+#include <asm/page.h>
+#include <asm/pgtable.h>
+
+#include <linux/pfn.h>
+#include <linux/types.h>
+
+#include <xen/interface/grant_table.h>
+
+#define pfn_to_mfn(pfn)			(pfn)
+#define phys_to_machine_mapping_valid	(1)
+#define mfn_to_pfn(mfn)			(mfn)
+#define mfn_to_virt(m)			(__va(mfn_to_pfn(m) << PAGE_SHIFT))
+
+#define pte_mfn	    pte_pfn
+#define mfn_pte	    pfn_pte
+
+/* Xen machine address */
+typedef struct xmaddr {
+	phys_addr_t maddr;
+} xmaddr_t;
+
+/* Xen pseudo-physical address */
+typedef struct xpaddr {
+	phys_addr_t paddr;
+} xpaddr_t;
+
+#define XMADDR(x)	((xmaddr_t) { .maddr = (x) })
+#define XPADDR(x)	((xpaddr_t) { .paddr = (x) })
+
+static inline xmaddr_t phys_to_machine(xpaddr_t phys)
+{
+	unsigned offset = phys.paddr & ~PAGE_MASK;
+	return XMADDR(PFN_PHYS(pfn_to_mfn(PFN_DOWN(phys.paddr))) | offset);
+}
+
+static inline xpaddr_t machine_to_phys(xmaddr_t machine)
+{
+	unsigned offset = machine.maddr & ~PAGE_MASK;
+	return XPADDR(PFN_PHYS(mfn_to_pfn(PFN_DOWN(machine.maddr))) | offset);
+}
+/* VIRT <-> MACHINE conversion */
+#define virt_to_machine(v)	(phys_to_machine(XPADDR(__pa(v))))
+#define virt_to_pfn(v)          (PFN_DOWN(__pa(v)))
+#define virt_to_mfn(v)		(pfn_to_mfn(virt_to_pfn(v)))
+#define mfn_to_virt(m)		(__va(mfn_to_pfn(m) << PAGE_SHIFT))
+
+static inline xmaddr_t arbitrary_virt_to_machine(void *vaddr)
+{
+	/* XXX: assuming it is mapped in the kernel 1:1 */
+	return virt_to_machine(vaddr);
+}
+
+/* XXX: this shouldn't be here */
+static inline pte_t *lookup_address(unsigned long address, unsigned int *level)
+{
+	BUG();
+	return NULL;
+}
+
+static inline int m2p_add_override(unsigned long mfn, struct page *page,
+		struct gnttab_map_grant_ref *kmap_op)
+{
+	return 0;
+}
+
+static inline int m2p_remove_override(struct page *page, bool clear_pte)
+{
+	return 0;
+}
+
+static inline bool set_phys_to_machine(unsigned long pfn, unsigned long mfn)
+{
+	BUG();
+	return false;
+}
+#endif /* _ASM_ARM_XEN_PAGE_H */
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 14:35:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:35:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOP8-0004qS-KS; Mon, 06 Aug 2012 14:35:34 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SyOP7-0004nC-28
	for xen-devel@lists.xensource.com; Mon, 06 Aug 2012 14:35:33 +0000
Received: from [85.158.139.83:22140] by server-7.bemta-5.messagelabs.com id
	DD/8D-28276-436DF105; Mon, 06 Aug 2012 14:35:32 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-14.tower-182.messagelabs.com!1344263729!25761559!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzAwNzE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1358 invoked from network); 6 Aug 2012 14:35:31 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 14:35:31 -0000
X-IronPort-AV: E=Sophos;i="4.77,720,1336363200"; d="scan'208";a="204276922"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 10:35:25 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Mon, 6 Aug 2012 10:35:26 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1SyOHO-0002zY-4X;
	Mon, 06 Aug 2012 15:27:34 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <linux-kernel@vger.kernel.org>
Date: Mon, 6 Aug 2012 15:27:09 +0100
Message-ID: <1344263246-28036-6-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, konrad.wilk@oracle.com,
	catalin.marinas@arm.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	tim@xen.org, linux-arm-kernel@lists.infradead.org
Subject: [Xen-devel] [PATCH v2 06/23] xen: missing includes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add a number of missing includes that these files rely on, instead of
depending on indirect header inclusion.

Changes in v2:
- remove the pvclock hack;
- remove the include of linux/types.h from xen/interface/xen.h.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 arch/x86/include/asm/xen/interface.h       |    2 ++
 drivers/tty/hvc/hvc_xen.c                  |    2 ++
 drivers/xen/grant-table.c                  |    1 +
 drivers/xen/xenbus/xenbus_probe_frontend.c |    1 +
 include/xen/interface/xen.h                |    1 -
 include/xen/privcmd.h                      |    1 +
 6 files changed, 7 insertions(+), 1 deletions(-)

diff --git a/arch/x86/include/asm/xen/interface.h b/arch/x86/include/asm/xen/interface.h
index cbf0c9d..a93db16 100644
--- a/arch/x86/include/asm/xen/interface.h
+++ b/arch/x86/include/asm/xen/interface.h
@@ -121,6 +121,8 @@ struct arch_shared_info {
 #include "interface_64.h"
 #endif
 
+#include <asm/pvclock-abi.h>
+
 #ifndef __ASSEMBLY__
 /*
  * The following is all CPU context. Note that the fpu_ctxt block is filled
diff --git a/drivers/tty/hvc/hvc_xen.c b/drivers/tty/hvc/hvc_xen.c
index 944eaeb..dc07f56 100644
--- a/drivers/tty/hvc/hvc_xen.c
+++ b/drivers/tty/hvc/hvc_xen.c
@@ -21,6 +21,7 @@
 #include <linux/console.h>
 #include <linux/delay.h>
 #include <linux/err.h>
+#include <linux/irq.h>
 #include <linux/init.h>
 #include <linux/types.h>
 #include <linux/list.h>
@@ -35,6 +36,7 @@
 #include <xen/page.h>
 #include <xen/events.h>
 #include <xen/interface/io/console.h>
+#include <xen/interface/sched.h>
 #include <xen/hvc-console.h>
 #include <xen/xenbus.h>
 
diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index 0bfc1ef..1d0d95e 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -47,6 +47,7 @@
 #include <xen/interface/memory.h>
 #include <xen/hvc-console.h>
 #include <asm/xen/hypercall.h>
+#include <asm/xen/interface.h>
 
 #include <asm/pgtable.h>
 #include <asm/sync_bitops.h>
diff --git a/drivers/xen/xenbus/xenbus_probe_frontend.c b/drivers/xen/xenbus/xenbus_probe_frontend.c
index a31b54d..3159a37 100644
--- a/drivers/xen/xenbus/xenbus_probe_frontend.c
+++ b/drivers/xen/xenbus/xenbus_probe_frontend.c
@@ -21,6 +21,7 @@
 #include <xen/xenbus.h>
 #include <xen/events.h>
 #include <xen/page.h>
+#include <xen/xen.h>
 
 #include <xen/platform_pci.h>
 
diff --git a/include/xen/interface/xen.h b/include/xen/interface/xen.h
index a890804..3871e47 100644
--- a/include/xen/interface/xen.h
+++ b/include/xen/interface/xen.h
@@ -10,7 +10,6 @@
 #define __XEN_PUBLIC_XEN_H__
 
 #include <asm/xen/interface.h>
-#include <asm/pvclock-abi.h>
 
 /*
  * XEN "SYSTEM CALLS" (a.k.a. HYPERCALLS).
diff --git a/include/xen/privcmd.h b/include/xen/privcmd.h
index 17857fb..4d58881 100644
--- a/include/xen/privcmd.h
+++ b/include/xen/privcmd.h
@@ -35,6 +35,7 @@
 
 #include <linux/types.h>
 #include <linux/compiler.h>
+#include <xen/interface/xen.h>
 
 typedef unsigned long xen_pfn_t;
 
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 14:35:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:35:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOP9-0004rM-4c; Mon, 06 Aug 2012 14:35:35 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SyOP7-0004nz-2i
	for xen-devel@lists.xensource.com; Mon, 06 Aug 2012 14:35:33 +0000
Received: from [85.158.143.99:16742] by server-3.bemta-4.messagelabs.com id
	4F/02-01511-436DF105; Mon, 06 Aug 2012 14:35:32 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1344263726!23447798!3
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzAwNzE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27959 invoked from network); 6 Aug 2012 14:35:31 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 14:35:31 -0000
X-IronPort-AV: E=Sophos;i="4.77,720,1336363200"; d="scan'208";a="204276921"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 10:35:25 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Mon, 6 Aug 2012 10:35:26 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1SyOHO-0002zY-1s;
	Mon, 06 Aug 2012 15:27:34 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <linux-kernel@vger.kernel.org>
Date: Mon, 6 Aug 2012 15:27:07 +0100
Message-ID: <1344263246-28036-4-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, konrad.wilk@oracle.com,
	catalin.marinas@arm.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	tim@xen.org, linux-arm-kernel@lists.infradead.org
Subject: [Xen-devel] [PATCH v2 04/23] xen/arm: sync_bitops
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

sync_bitops functions are equivalent to the SMP implementation of the
original functions, independently of whether CONFIG_SMP is defined.

We need them because _set_bit etc. are not SMP-safe if !CONFIG_SMP. But
under Xen you might be communicating with a completely external entity
that might be on another CPU (e.g. two uniprocessor guests communicating
via event channels and grant tables). So we need a variant of the bit
ops that is SMP-safe even on a UP kernel.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 arch/arm/include/asm/sync_bitops.h |   27 +++++++++++++++++++++++++++
 1 files changed, 27 insertions(+), 0 deletions(-)
 create mode 100644 arch/arm/include/asm/sync_bitops.h

diff --git a/arch/arm/include/asm/sync_bitops.h b/arch/arm/include/asm/sync_bitops.h
new file mode 100644
index 0000000..63479ee
--- /dev/null
+++ b/arch/arm/include/asm/sync_bitops.h
@@ -0,0 +1,27 @@
+#ifndef __ASM_SYNC_BITOPS_H__
+#define __ASM_SYNC_BITOPS_H__
+
+#include <asm/bitops.h>
+#include <asm/system.h>
+
+/* sync_bitops functions are equivalent to the SMP implementation of
+ * the original functions, independently of whether CONFIG_SMP is defined.
+ *
+ * We need them because _set_bit etc. are not SMP-safe if !CONFIG_SMP. But
+ * under Xen you might be communicating with a completely external entity
+ * that might be on another CPU (e.g. two uniprocessor guests communicating
+ * via event channels and grant tables). So we need a variant of the bit
+ * ops that is SMP-safe even on a UP kernel.
+ */
+
+#define sync_set_bit(nr, p)		_set_bit(nr, p)
+#define sync_clear_bit(nr, p)		_clear_bit(nr, p)
+#define sync_change_bit(nr, p)		_change_bit(nr, p)
+#define sync_test_and_set_bit(nr, p)	_test_and_set_bit(nr, p)
+#define sync_test_and_clear_bit(nr, p)	_test_and_clear_bit(nr, p)
+#define sync_test_and_change_bit(nr, p)	_test_and_change_bit(nr, p)
+#define sync_test_bit(nr, addr)		test_bit(nr, addr)
+#define sync_cmpxchg			cmpxchg
+
+
+#endif
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 14:35:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:35:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOP8-0004q1-8t; Mon, 06 Aug 2012 14:35:34 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SyOP6-0004nC-D9
	for xen-devel@lists.xensource.com; Mon, 06 Aug 2012 14:35:32 +0000
Received: from [85.158.139.83:22078] by server-7.bemta-5.messagelabs.com id
	77/8D-28276-336DF105; Mon, 06 Aug 2012 14:35:31 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-14.tower-182.messagelabs.com!1344263729!25761559!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzAwNzE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1327 invoked from network); 6 Aug 2012 14:35:30 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 14:35:30 -0000
X-IronPort-AV: E=Sophos;i="4.77,720,1336363200"; d="scan'208";a="204276918"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 10:35:25 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Mon, 6 Aug 2012 10:35:26 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1SyOHO-0002zY-6K;
	Mon, 06 Aug 2012 15:27:34 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <linux-kernel@vger.kernel.org>
Date: Mon, 6 Aug 2012 15:27:10 +0100
Message-ID: <1344263246-28036-7-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, konrad.wilk@oracle.com,
	catalin.marinas@arm.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	tim@xen.org, linux-arm-kernel@lists.infradead.org
Subject: [Xen-devel] [PATCH v2 07/23] xen/arm: Xen detection and shared_info
	page mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Check for a "/xen" node in the device tree, if it is present set
xen_domain_type to XEN_HVM_DOMAIN and continue initialization.

Map the real shared info page using XENMEM_add_to_physmap with
XENMAPSPACE_shared_info.
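
The binding described above might look like this in a board's device tree
source; the interrupt and memory region values below are placeholders for
illustration, not taken from this patch:

```dts
xen {
	compatible = "arm,xen";
	/* event channel notification interrupt (placeholder value) */
	interrupts = <1 15 0xf08>;
	/* memory region to map the grant table (placeholder value) */
	reg = <0xb0000000 0x20000>;
};
```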

Changes in v2:

- replace pr_info with pr_debug.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 arch/arm/xen/enlighten.c |   52 ++++++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 52 insertions(+), 0 deletions(-)

diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
index d27c2a6..102d823 100644
--- a/arch/arm/xen/enlighten.c
+++ b/arch/arm/xen/enlighten.c
@@ -5,6 +5,9 @@
 #include <asm/xen/hypervisor.h>
 #include <asm/xen/hypercall.h>
 #include <linux/module.h>
+#include <linux/of.h>
+#include <linux/of_irq.h>
+#include <linux/of_address.h>
 
 struct start_info _xen_start_info;
 struct start_info *xen_start_info = &_xen_start_info;
@@ -33,3 +36,52 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
 	return -ENOSYS;
 }
 EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_range);
+
+/*
+ * == Xen Device Tree format ==
+ * - /xen node;
+ * - compatible "arm,xen";
+ * - one interrupt for Xen event notifications;
+ * - one memory region to map the grant_table.
+ */
+static int __init xen_guest_init(void)
+{
+	struct xen_add_to_physmap xatp;
+	static struct shared_info *shared_info_page = NULL;
+	struct device_node *node;
+
+	node = of_find_compatible_node(NULL, NULL, "arm,xen");
+	if (!node) {
+		pr_debug("No Xen support\n");
+		return 0;
+	}
+	xen_domain_type = XEN_HVM_DOMAIN;
+
+	if (!shared_info_page)
+		shared_info_page = (struct shared_info *)
+			get_zeroed_page(GFP_KERNEL);
+	if (!shared_info_page) {
+		pr_err("not enough memory\n");
+		return -ENOMEM;
+	}
+	xatp.domid = DOMID_SELF;
+	xatp.idx = 0;
+	xatp.space = XENMAPSPACE_shared_info;
+	xatp.gpfn = __pa(shared_info_page) >> PAGE_SHIFT;
+	if (HYPERVISOR_memory_op(XENMEM_add_to_physmap, &xatp))
+		BUG();
+
+	HYPERVISOR_shared_info = (struct shared_info *)shared_info_page;
+
+	/* xen_vcpu is a pointer to the vcpu_info struct in the shared_info
+	 * page, we use it in the event channel upcall and in some pvclock
+	 * related functions. We don't need the vcpu_info placement
+	 * optimizations because we don't use any pv_mmu or pv_irq op on
+	 * HVM.
+	 * The shared info contains exactly 1 CPU (the boot CPU). The guest
+	 * is required to use VCPUOP_register_vcpu_info to place vcpu info
+	 * for secondary CPUs as they are brought up. */
+	per_cpu(xen_vcpu, 0) = &HYPERVISOR_shared_info->vcpu_info[0];
+	return 0;
+}
+core_initcall(xen_guest_init);
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Check for a "/xen" node in the device tree; if it is present, set
xen_domain_type to XEN_HVM_DOMAIN and continue initialization.

Map the real shared info page using XENMEM_add_to_physmap with
XENMAPSPACE_shared_info.
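
For reference, the device-tree format the detection code looks for might be
written as the fragment below. This is a hypothetical example: the node name
and the "arm,xen" compatible string come from the patch itself, but the
interrupt specifier and the grant-table region address/size are invented
placeholders.

```dts
/* Hypothetical example only: "arm,xen" is the compatible string the
 * patch probes for; the interrupt and the reg values below are
 * made-up placeholders, not real platform numbers. */
xen: xen {
	compatible = "arm,xen";
	/* one interrupt for Xen event notifications */
	interrupts = <1 15 0xf08>;
	/* one memory region to map the grant_table */
	reg = <0xb0000000 0x20000>;
};
```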

Changes in v2:

- replace pr_info with pr_debug.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 arch/arm/xen/enlighten.c |   52 ++++++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 52 insertions(+), 0 deletions(-)

diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
index d27c2a6..102d823 100644
--- a/arch/arm/xen/enlighten.c
+++ b/arch/arm/xen/enlighten.c
@@ -5,6 +5,9 @@
 #include <asm/xen/hypervisor.h>
 #include <asm/xen/hypercall.h>
 #include <linux/module.h>
+#include <linux/of.h>
+#include <linux/of_irq.h>
+#include <linux/of_address.h>
 
 struct start_info _xen_start_info;
 struct start_info *xen_start_info = &_xen_start_info;
@@ -33,3 +36,52 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
 	return -ENOSYS;
 }
 EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_range);
+
+/*
+ * == Xen Device Tree format ==
+ * - /xen node;
+ * - compatible "arm,xen";
+ * - one interrupt for Xen event notifications;
+ * - one memory region to map the grant_table.
+ */
+static int __init xen_guest_init(void)
+{
+	struct xen_add_to_physmap xatp;
+	static struct shared_info *shared_info_page = 0;
+	struct device_node *node;
+
+	node = of_find_compatible_node(NULL, NULL, "arm,xen");
+	if (!node) {
+		pr_debug("No Xen support\n");
+		return 0;
+	}
+	xen_domain_type = XEN_HVM_DOMAIN;
+
+	if (!shared_info_page)
+		shared_info_page = (struct shared_info *)
+			get_zeroed_page(GFP_KERNEL);
+	if (!shared_info_page) {
+		pr_err("not enough memory\n");
+		return -ENOMEM;
+	}
+	xatp.domid = DOMID_SELF;
+	xatp.idx = 0;
+	xatp.space = XENMAPSPACE_shared_info;
+	xatp.gpfn = __pa(shared_info_page) >> PAGE_SHIFT;
+	if (HYPERVISOR_memory_op(XENMEM_add_to_physmap, &xatp))
+		BUG();
+
+	HYPERVISOR_shared_info = (struct shared_info *)shared_info_page;
+
+	/* xen_vcpu is a pointer to the vcpu_info struct in the shared_info
+	 * page, we use it in the event channel upcall and in some pvclock
+	 * related functions. We don't need the vcpu_info placement
+	 * optimizations because we don't use any pv_mmu or pv_irq op on
+	 * HVM.
+	 * The shared info contains exactly 1 CPU (the boot CPU). The guest
+	 * is required to use VCPUOP_register_vcpu_info to place vcpu info
+	 * for secondary CPUs as they are brought up. */
+	per_cpu(xen_vcpu, 0) = &HYPERVISOR_shared_info->vcpu_info[0];
+	return 0;
+}
+core_initcall(xen_guest_init);
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 14:35:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:35:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOPA-0004t4-5M; Mon, 06 Aug 2012 14:35:36 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SyOP9-0004le-4z
	for xen-devel@lists.xensource.com; Mon, 06 Aug 2012 14:35:35 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1344263726!2400405!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjI4NDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22575 invoked from network); 6 Aug 2012 14:35:28 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 14:35:28 -0000
X-IronPort-AV: E=Sophos;i="4.77,720,1336363200"; d="scan'208";a="33699027"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 10:35:26 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Mon, 6 Aug 2012 10:35:26 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1SyOHO-0002zY-8G;
	Mon, 06 Aug 2012 15:27:34 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <linux-kernel@vger.kernel.org>
Date: Mon, 6 Aug 2012 15:27:11 +0100
Message-ID: <1344263246-28036-8-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, konrad.wilk@oracle.com,
	catalin.marinas@arm.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	tim@xen.org, linux-arm-kernel@lists.infradead.org
Subject: [Xen-devel] [PATCH v2 08/23] xen/arm: Introduce xen_pfn_t for pfn
	and mfn types
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

All the original Xen headers use xen_pfn_t as the mfn and pfn type; however,
when they were imported into Linux, xen_pfn_t was replaced with
unsigned long. That might work for x86 and ia64, but it does not for ARM.
Bring back xen_pfn_t and let each architecture define it as it sees fit.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 arch/arm/include/asm/xen/interface.h  |    4 ++++
 arch/ia64/include/asm/xen/interface.h |    5 ++++-
 arch/x86/include/asm/xen/interface.h  |    5 +++++
 include/xen/interface/grant_table.h   |    4 ++--
 include/xen/interface/memory.h        |    6 +++---
 include/xen/interface/platform.h      |    4 ++--
 include/xen/interface/xen.h           |    6 +++---
 include/xen/privcmd.h                 |    2 --
 8 files changed, 23 insertions(+), 13 deletions(-)

diff --git a/arch/arm/include/asm/xen/interface.h b/arch/arm/include/asm/xen/interface.h
index ab99270..f904dcc 100644
--- a/arch/arm/include/asm/xen/interface.h
+++ b/arch/arm/include/asm/xen/interface.h
@@ -25,6 +25,9 @@
 	} while (0)
 
 #ifndef __ASSEMBLY__
+/* Explicitly size integers that represent pfns in the interface with
+ * Xen so that we can have one ABI that works for 32 and 64 bit guests. */
+typedef uint64_t xen_pfn_t;
 /* Guest handles for primitive C types. */
 __DEFINE_GUEST_HANDLE(uchar, unsigned char);
 __DEFINE_GUEST_HANDLE(uint,  unsigned int);
@@ -35,6 +38,7 @@ DEFINE_GUEST_HANDLE(long);
 DEFINE_GUEST_HANDLE(void);
 DEFINE_GUEST_HANDLE(uint64_t);
 DEFINE_GUEST_HANDLE(uint32_t);
+DEFINE_GUEST_HANDLE(xen_pfn_t);
 
 /* Maximum number of virtual CPUs in multi-processor guests. */
 #define MAX_VIRT_CPUS 1
diff --git a/arch/ia64/include/asm/xen/interface.h b/arch/ia64/include/asm/xen/interface.h
index 09d5f7f..686464e 100644
--- a/arch/ia64/include/asm/xen/interface.h
+++ b/arch/ia64/include/asm/xen/interface.h
@@ -67,6 +67,10 @@
 #define set_xen_guest_handle(hnd, val)	do { (hnd).p = val; } while (0)
 
 #ifndef __ASSEMBLY__
+/* Explicitly size integers that represent pfns in the public interface
+ * with Xen so that we could have one ABI that works for 32 and 64 bit
+ * guests. */
+typedef unsigned long xen_pfn_t;
 /* Guest handles for primitive C types. */
 __DEFINE_GUEST_HANDLE(uchar, unsigned char);
 __DEFINE_GUEST_HANDLE(uint, unsigned int);
@@ -79,7 +83,6 @@ DEFINE_GUEST_HANDLE(void);
 DEFINE_GUEST_HANDLE(uint64_t);
 DEFINE_GUEST_HANDLE(uint32_t);
 
-typedef unsigned long xen_pfn_t;
 DEFINE_GUEST_HANDLE(xen_pfn_t);
 #define PRI_xen_pfn	"lx"
 #endif
diff --git a/arch/x86/include/asm/xen/interface.h b/arch/x86/include/asm/xen/interface.h
index a93db16..555f94d 100644
--- a/arch/x86/include/asm/xen/interface.h
+++ b/arch/x86/include/asm/xen/interface.h
@@ -47,6 +47,10 @@
 #endif
 
 #ifndef __ASSEMBLY__
+/* Explicitly size integers that represent pfns in the public interface
+ * with Xen so that on ARM we can have one ABI that works for 32 and 64
+ * bit guests. */
+typedef unsigned long xen_pfn_t;
 /* Guest handles for primitive C types. */
 __DEFINE_GUEST_HANDLE(uchar, unsigned char);
 __DEFINE_GUEST_HANDLE(uint,  unsigned int);
@@ -57,6 +61,7 @@ DEFINE_GUEST_HANDLE(long);
 DEFINE_GUEST_HANDLE(void);
 DEFINE_GUEST_HANDLE(uint64_t);
 DEFINE_GUEST_HANDLE(uint32_t);
+DEFINE_GUEST_HANDLE(xen_pfn_t);
 #endif
 
 #ifndef HYPERVISOR_VIRT_START
diff --git a/include/xen/interface/grant_table.h b/include/xen/interface/grant_table.h
index a17d844..7da811b 100644
--- a/include/xen/interface/grant_table.h
+++ b/include/xen/interface/grant_table.h
@@ -338,7 +338,7 @@ DEFINE_GUEST_HANDLE_STRUCT(gnttab_dump_table);
 #define GNTTABOP_transfer                4
 struct gnttab_transfer {
     /* IN parameters. */
-    unsigned long mfn;
+    xen_pfn_t mfn;
     domid_t       domid;
     grant_ref_t   ref;
     /* OUT parameters. */
@@ -375,7 +375,7 @@ struct gnttab_copy {
 	struct {
 		union {
 			grant_ref_t ref;
-			unsigned long   gmfn;
+			xen_pfn_t   gmfn;
 		} u;
 		domid_t  domid;
 		uint16_t offset;
diff --git a/include/xen/interface/memory.h b/include/xen/interface/memory.h
index eac3ce1..abbbff0 100644
--- a/include/xen/interface/memory.h
+++ b/include/xen/interface/memory.h
@@ -31,7 +31,7 @@ struct xen_memory_reservation {
      *   OUT: GMFN bases of extents that were allocated
      *   (NB. This command also updates the mach_to_phys translation table)
      */
-    GUEST_HANDLE(ulong) extent_start;
+    GUEST_HANDLE(xen_pfn_t) extent_start;
 
     /* Number of extents, and size/alignment of each (2^extent_order pages). */
     unsigned long  nr_extents;
@@ -130,7 +130,7 @@ struct xen_machphys_mfn_list {
      * any large discontiguities in the machine address space, 2MB gaps in
      * the machphys table will be represented by an MFN base of zero.
      */
-    GUEST_HANDLE(ulong) extent_start;
+    GUEST_HANDLE(xen_pfn_t) extent_start;
 
     /*
      * Number of extents written to the above array. This will be smaller
@@ -172,7 +172,7 @@ struct xen_add_to_physmap {
     unsigned long idx;
 
     /* GPFN where the source mapping page should appear. */
-    unsigned long gpfn;
+    xen_pfn_t gpfn;
 };
 DEFINE_GUEST_HANDLE_STRUCT(xen_add_to_physmap);
 
diff --git a/include/xen/interface/platform.h b/include/xen/interface/platform.h
index 486653f..0bea470 100644
--- a/include/xen/interface/platform.h
+++ b/include/xen/interface/platform.h
@@ -54,7 +54,7 @@ DEFINE_GUEST_HANDLE_STRUCT(xenpf_settime_t);
 #define XENPF_add_memtype         31
 struct xenpf_add_memtype {
 	/* IN variables. */
-	unsigned long mfn;
+	xen_pfn_t mfn;
 	uint64_t nr_mfns;
 	uint32_t type;
 	/* OUT variables. */
@@ -84,7 +84,7 @@ struct xenpf_read_memtype {
 	/* IN variables. */
 	uint32_t reg;
 	/* OUT variables. */
-	unsigned long mfn;
+	xen_pfn_t mfn;
 	uint64_t nr_mfns;
 	uint32_t type;
 };
diff --git a/include/xen/interface/xen.h b/include/xen/interface/xen.h
index 3871e47..42834a3 100644
--- a/include/xen/interface/xen.h
+++ b/include/xen/interface/xen.h
@@ -188,7 +188,7 @@ struct mmuext_op {
 	unsigned int cmd;
 	union {
 		/* [UN]PIN_TABLE, NEW_BASEPTR, NEW_USER_BASEPTR */
-		unsigned long mfn;
+		xen_pfn_t mfn;
 		/* INVLPG_LOCAL, INVLPG_ALL, SET_LDT */
 		unsigned long linear_addr;
 	} arg1;
@@ -428,11 +428,11 @@ struct start_info {
 	unsigned long nr_pages;     /* Total pages allocated to this domain.  */
 	unsigned long shared_info;  /* MACHINE address of shared info struct. */
 	uint32_t flags;             /* SIF_xxx flags.                         */
-	unsigned long store_mfn;    /* MACHINE page number of shared page.    */
+	xen_pfn_t store_mfn;        /* MACHINE page number of shared page.    */
 	uint32_t store_evtchn;      /* Event channel for store communication. */
 	union {
 		struct {
-			unsigned long mfn;  /* MACHINE page number of console page.   */
+			xen_pfn_t mfn;      /* MACHINE page number of console page.   */
 			uint32_t  evtchn;   /* Event channel for console page.        */
 		} domU;
 		struct {
diff --git a/include/xen/privcmd.h b/include/xen/privcmd.h
index 4d58881..45c1aa1 100644
--- a/include/xen/privcmd.h
+++ b/include/xen/privcmd.h
@@ -37,8 +37,6 @@
 #include <linux/compiler.h>
 #include <xen/interface/xen.h>
 
-typedef unsigned long xen_pfn_t;
-
 struct privcmd_hypercall {
 	__u64 op;
 	__u64 arg[5];
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 14:35:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:35:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOP9-0004s0-GS; Mon, 06 Aug 2012 14:35:35 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SyOP7-0004lS-Eq
	for xen-devel@lists.xensource.com; Mon, 06 Aug 2012 14:35:33 +0000
Received: from [85.158.143.99:16782] by server-1.bemta-4.messagelabs.com id
	A2/AB-24392-536DF105; Mon, 06 Aug 2012 14:35:33 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-8.tower-216.messagelabs.com!1344263724!20721767!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzAwNzE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30290 invoked from network); 6 Aug 2012 14:35:31 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 14:35:31 -0000
X-IronPort-AV: E=Sophos;i="4.77,720,1336363200"; d="scan'208";a="204276904"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 10:35:23 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Mon, 6 Aug 2012 10:35:23 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1SyOHO-0002zY-8t;
	Mon, 06 Aug 2012 15:27:34 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <linux-kernel@vger.kernel.org>
Date: Mon, 6 Aug 2012 15:27:12 +0100
Message-ID: <1344263246-28036-9-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, konrad.wilk@oracle.com,
	catalin.marinas@arm.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	tim@xen.org, linux-arm-kernel@lists.infradead.org
Subject: [Xen-devel] [PATCH v2 09/23] xen/arm: Introduce xen_ulong_t for
	unsigned long
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

All the original Xen headers use xen_ulong_t as the unsigned long type;
however, when they were imported into Linux, xen_ulong_t was replaced with
unsigned long. That might work for x86 and ia64, but it does not for ARM.
Bring back xen_ulong_t and let each architecture define it as it sees fit.

Also explicitly size pointers (__DEFINE_GUEST_HANDLE) to 64 bits.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 arch/arm/include/asm/xen/interface.h  |    8 ++++++--
 arch/ia64/include/asm/xen/interface.h |    1 +
 arch/x86/include/asm/xen/interface.h  |    1 +
 include/xen/interface/memory.h        |   12 ++++++------
 include/xen/interface/physdev.h       |    4 ++--
 include/xen/interface/version.h       |    2 +-
 include/xen/interface/xen.h           |    6 +++---
 7 files changed, 20 insertions(+), 14 deletions(-)

diff --git a/arch/arm/include/asm/xen/interface.h b/arch/arm/include/asm/xen/interface.h
index f904dcc..1d3030c 100644
--- a/arch/arm/include/asm/xen/interface.h
+++ b/arch/arm/include/asm/xen/interface.h
@@ -9,8 +9,11 @@
 
 #include <linux/types.h>
 
+#define uint64_aligned_t uint64_t __attribute__((aligned(8)))
+
 #define __DEFINE_GUEST_HANDLE(name, type) \
-	typedef type * __guest_handle_ ## name
+	typedef struct { union { type *p; uint64_aligned_t q; }; }  \
+        __guest_handle_ ## name
 
 #define DEFINE_GUEST_HANDLE_STRUCT(name) \
 	__DEFINE_GUEST_HANDLE(name, struct name)
@@ -21,13 +24,14 @@
 	do {						\
 		if (sizeof(hnd) == 8)			\
 			*(uint64_t *)&(hnd) = 0;	\
-		(hnd) = val;				\
+		(hnd).p = val;				\
 	} while (0)
 
 #ifndef __ASSEMBLY__
 /* Explicitly size integers that represent pfns in the interface with
  * Xen so that we can have one ABI that works for 32 and 64 bit guests. */
 typedef uint64_t xen_pfn_t;
+typedef uint64_t xen_ulong_t;
 /* Guest handles for primitive C types. */
 __DEFINE_GUEST_HANDLE(uchar, unsigned char);
 __DEFINE_GUEST_HANDLE(uint,  unsigned int);
diff --git a/arch/ia64/include/asm/xen/interface.h b/arch/ia64/include/asm/xen/interface.h
index 686464e..7c83445 100644
--- a/arch/ia64/include/asm/xen/interface.h
+++ b/arch/ia64/include/asm/xen/interface.h
@@ -71,6 +71,7 @@
  * with Xen so that we could have one ABI that works for 32 and 64 bit
  * guests. */
 typedef unsigned long xen_pfn_t;
+typedef unsigned long xen_ulong_t;
 /* Guest handles for primitive C types. */
 __DEFINE_GUEST_HANDLE(uchar, unsigned char);
 __DEFINE_GUEST_HANDLE(uint, unsigned int);
diff --git a/arch/x86/include/asm/xen/interface.h b/arch/x86/include/asm/xen/interface.h
index 555f94d..28fc621 100644
--- a/arch/x86/include/asm/xen/interface.h
+++ b/arch/x86/include/asm/xen/interface.h
@@ -51,6 +51,7 @@
  * with Xen so that on ARM we can have one ABI that works for 32 and 64
  * bit guests. */
 typedef unsigned long xen_pfn_t;
+typedef unsigned long xen_ulong_t;
 /* Guest handles for primitive C types. */
 __DEFINE_GUEST_HANDLE(uchar, unsigned char);
 __DEFINE_GUEST_HANDLE(uint,  unsigned int);
diff --git a/include/xen/interface/memory.h b/include/xen/interface/memory.h
index abbbff0..b5c3098 100644
--- a/include/xen/interface/memory.h
+++ b/include/xen/interface/memory.h
@@ -34,7 +34,7 @@ struct xen_memory_reservation {
     GUEST_HANDLE(xen_pfn_t) extent_start;
 
     /* Number of extents, and size/alignment of each (2^extent_order pages). */
-    unsigned long  nr_extents;
+    xen_ulong_t  nr_extents;
     unsigned int   extent_order;
 
     /*
@@ -92,7 +92,7 @@ struct xen_memory_exchange {
      *     command will be non-zero.
      *  5. THIS FIELD MUST BE INITIALISED TO ZERO BY THE CALLER!
      */
-    unsigned long nr_exchanged;
+    xen_ulong_t nr_exchanged;
 };
 
 DEFINE_GUEST_HANDLE_STRUCT(xen_memory_exchange);
@@ -148,8 +148,8 @@ DEFINE_GUEST_HANDLE_STRUCT(xen_machphys_mfn_list);
  */
 #define XENMEM_machphys_mapping     12
 struct xen_machphys_mapping {
-    unsigned long v_start, v_end; /* Start and end virtual addresses.   */
-    unsigned long max_mfn;        /* Maximum MFN that can be looked up. */
+    xen_ulong_t v_start, v_end; /* Start and end virtual addresses.   */
+    xen_ulong_t max_mfn;        /* Maximum MFN that can be looked up. */
 };
 DEFINE_GUEST_HANDLE_STRUCT(xen_machphys_mapping_t);
 
@@ -169,7 +169,7 @@ struct xen_add_to_physmap {
     unsigned int space;
 
     /* Index into source mapping space. */
-    unsigned long idx;
+    xen_ulong_t idx;
 
     /* GPFN where the source mapping page should appear. */
     xen_pfn_t gpfn;
@@ -186,7 +186,7 @@ struct xen_translate_gpfn_list {
     domid_t domid;
 
     /* Length of list. */
-    unsigned long nr_gpfns;
+    xen_ulong_t nr_gpfns;
 
     /* List of GPFNs to translate. */
     GUEST_HANDLE(ulong) gpfn_list;
diff --git a/include/xen/interface/physdev.h b/include/xen/interface/physdev.h
index 9ce788d..bc3ae14 100644
--- a/include/xen/interface/physdev.h
+++ b/include/xen/interface/physdev.h
@@ -56,7 +56,7 @@ struct physdev_eoi {
 #define PHYSDEVOP_pirq_eoi_gmfn_v2       28
 struct physdev_pirq_eoi_gmfn {
     /* IN */
-    unsigned long gmfn;
+    xen_ulong_t gmfn;
 };
 
 /*
@@ -108,7 +108,7 @@ struct physdev_set_iobitmap {
 #define PHYSDEVOP_apic_write		 9
 struct physdev_apic {
 	/* IN */
-	unsigned long apic_physbase;
+	xen_ulong_t apic_physbase;
 	uint32_t reg;
 	/* IN or OUT */
 	uint32_t value;
diff --git a/include/xen/interface/version.h b/include/xen/interface/version.h
index e8b6519..30280c9 100644
--- a/include/xen/interface/version.h
+++ b/include/xen/interface/version.h
@@ -45,7 +45,7 @@ struct xen_changeset_info {
 
 #define XENVER_platform_parameters 5
 struct xen_platform_parameters {
-    unsigned long virt_start;
+    xen_ulong_t virt_start;
 };
 
 #define XENVER_get_features 6
diff --git a/include/xen/interface/xen.h b/include/xen/interface/xen.h
index 42834a3..ec32115 100644
--- a/include/xen/interface/xen.h
+++ b/include/xen/interface/xen.h
@@ -274,9 +274,9 @@ DEFINE_GUEST_HANDLE_STRUCT(mmu_update);
  * NB. The fields are natural register size for this architecture.
  */
 struct multicall_entry {
-    unsigned long op;
-    long result;
-    unsigned long args[6];
+    xen_ulong_t op;
+    xen_ulong_t result;
+    xen_ulong_t args[6];
 };
 DEFINE_GUEST_HANDLE_STRUCT(multicall_entry);
 
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 14:35:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:35:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOP9-0004s0-GS; Mon, 06 Aug 2012 14:35:35 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SyOP7-0004lS-Eq
	for xen-devel@lists.xensource.com; Mon, 06 Aug 2012 14:35:33 +0000
Received: from [85.158.143.99:16782] by server-1.bemta-4.messagelabs.com id
	A2/AB-24392-536DF105; Mon, 06 Aug 2012 14:35:33 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-8.tower-216.messagelabs.com!1344263724!20721767!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzAwNzE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30290 invoked from network); 6 Aug 2012 14:35:31 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 14:35:31 -0000
X-IronPort-AV: E=Sophos;i="4.77,720,1336363200"; d="scan'208";a="204276904"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 10:35:23 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Mon, 6 Aug 2012 10:35:23 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1SyOHO-0002zY-8t;
	Mon, 06 Aug 2012 15:27:34 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <linux-kernel@vger.kernel.org>
Date: Mon, 6 Aug 2012 15:27:12 +0100
Message-ID: <1344263246-28036-9-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, konrad.wilk@oracle.com,
	catalin.marinas@arm.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	tim@xen.org, linux-arm-kernel@lists.infradead.org
Subject: [Xen-devel] [PATCH v2 09/23] xen/arm: Introduce xen_ulong_t for
	unsigned long
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

All the original Xen headers define xen_ulong_t as unsigned long; however,
when they were imported into Linux, xen_ulong_t was replaced with plain
unsigned long. That might work for x86 and ia64, but it does not for arm.
Bring back xen_ulong_t and let each architecture define it as it sees fit.

Also explicitly size pointers (__DEFINE_GUEST_HANDLE) to 64 bits.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 arch/arm/include/asm/xen/interface.h  |    8 ++++++--
 arch/ia64/include/asm/xen/interface.h |    1 +
 arch/x86/include/asm/xen/interface.h  |    1 +
 include/xen/interface/memory.h        |   12 ++++++------
 include/xen/interface/physdev.h       |    4 ++--
 include/xen/interface/version.h       |    2 +-
 include/xen/interface/xen.h           |    6 +++---
 7 files changed, 20 insertions(+), 14 deletions(-)

diff --git a/arch/arm/include/asm/xen/interface.h b/arch/arm/include/asm/xen/interface.h
index f904dcc..1d3030c 100644
--- a/arch/arm/include/asm/xen/interface.h
+++ b/arch/arm/include/asm/xen/interface.h
@@ -9,8 +9,11 @@
 
 #include <linux/types.h>
 
+#define uint64_aligned_t uint64_t __attribute__((aligned(8)))
+
 #define __DEFINE_GUEST_HANDLE(name, type) \
-	typedef type * __guest_handle_ ## name
+	typedef struct { union { type *p; uint64_aligned_t q; }; }  \
+        __guest_handle_ ## name
 
 #define DEFINE_GUEST_HANDLE_STRUCT(name) \
 	__DEFINE_GUEST_HANDLE(name, struct name)
@@ -21,13 +24,14 @@
 	do {						\
 		if (sizeof(hnd) == 8)			\
 			*(uint64_t *)&(hnd) = 0;	\
-		(hnd) = val;				\
+		(hnd).p = val;				\
 	} while (0)
 
 #ifndef __ASSEMBLY__
 /* Explicitly size integers that represent pfns in the interface with
  * Xen so that we can have one ABI that works for 32 and 64 bit guests. */
 typedef uint64_t xen_pfn_t;
+typedef uint64_t xen_ulong_t;
 /* Guest handles for primitive C types. */
 __DEFINE_GUEST_HANDLE(uchar, unsigned char);
 __DEFINE_GUEST_HANDLE(uint,  unsigned int);
diff --git a/arch/ia64/include/asm/xen/interface.h b/arch/ia64/include/asm/xen/interface.h
index 686464e..7c83445 100644
--- a/arch/ia64/include/asm/xen/interface.h
+++ b/arch/ia64/include/asm/xen/interface.h
@@ -71,6 +71,7 @@
  * with Xen so that we could have one ABI that works for 32 and 64 bit
  * guests. */
 typedef unsigned long xen_pfn_t;
+typedef unsigned long xen_ulong_t;
 /* Guest handles for primitive C types. */
 __DEFINE_GUEST_HANDLE(uchar, unsigned char);
 __DEFINE_GUEST_HANDLE(uint, unsigned int);
diff --git a/arch/x86/include/asm/xen/interface.h b/arch/x86/include/asm/xen/interface.h
index 555f94d..28fc621 100644
--- a/arch/x86/include/asm/xen/interface.h
+++ b/arch/x86/include/asm/xen/interface.h
@@ -51,6 +51,7 @@
  * with Xen so that on ARM we can have one ABI that works for 32 and 64
  * bit guests. */
 typedef unsigned long xen_pfn_t;
+typedef unsigned long xen_ulong_t;
 /* Guest handles for primitive C types. */
 __DEFINE_GUEST_HANDLE(uchar, unsigned char);
 __DEFINE_GUEST_HANDLE(uint,  unsigned int);
diff --git a/include/xen/interface/memory.h b/include/xen/interface/memory.h
index abbbff0..b5c3098 100644
--- a/include/xen/interface/memory.h
+++ b/include/xen/interface/memory.h
@@ -34,7 +34,7 @@ struct xen_memory_reservation {
     GUEST_HANDLE(xen_pfn_t) extent_start;
 
     /* Number of extents, and size/alignment of each (2^extent_order pages). */
-    unsigned long  nr_extents;
+    xen_ulong_t  nr_extents;
     unsigned int   extent_order;
 
     /*
@@ -92,7 +92,7 @@ struct xen_memory_exchange {
      *     command will be non-zero.
      *  5. THIS FIELD MUST BE INITIALISED TO ZERO BY THE CALLER!
      */
-    unsigned long nr_exchanged;
+    xen_ulong_t nr_exchanged;
 };
 
 DEFINE_GUEST_HANDLE_STRUCT(xen_memory_exchange);
@@ -148,8 +148,8 @@ DEFINE_GUEST_HANDLE_STRUCT(xen_machphys_mfn_list);
  */
 #define XENMEM_machphys_mapping     12
 struct xen_machphys_mapping {
-    unsigned long v_start, v_end; /* Start and end virtual addresses.   */
-    unsigned long max_mfn;        /* Maximum MFN that can be looked up. */
+    xen_ulong_t v_start, v_end; /* Start and end virtual addresses.   */
+    xen_ulong_t max_mfn;        /* Maximum MFN that can be looked up. */
 };
 DEFINE_GUEST_HANDLE_STRUCT(xen_machphys_mapping_t);
 
@@ -169,7 +169,7 @@ struct xen_add_to_physmap {
     unsigned int space;
 
     /* Index into source mapping space. */
-    unsigned long idx;
+    xen_ulong_t idx;
 
     /* GPFN where the source mapping page should appear. */
     xen_pfn_t gpfn;
@@ -186,7 +186,7 @@ struct xen_translate_gpfn_list {
     domid_t domid;
 
     /* Length of list. */
-    unsigned long nr_gpfns;
+    xen_ulong_t nr_gpfns;
 
     /* List of GPFNs to translate. */
     GUEST_HANDLE(ulong) gpfn_list;
diff --git a/include/xen/interface/physdev.h b/include/xen/interface/physdev.h
index 9ce788d..bc3ae14 100644
--- a/include/xen/interface/physdev.h
+++ b/include/xen/interface/physdev.h
@@ -56,7 +56,7 @@ struct physdev_eoi {
 #define PHYSDEVOP_pirq_eoi_gmfn_v2       28
 struct physdev_pirq_eoi_gmfn {
     /* IN */
-    unsigned long gmfn;
+    xen_ulong_t gmfn;
 };
 
 /*
@@ -108,7 +108,7 @@ struct physdev_set_iobitmap {
 #define PHYSDEVOP_apic_write		 9
 struct physdev_apic {
 	/* IN */
-	unsigned long apic_physbase;
+	xen_ulong_t apic_physbase;
 	uint32_t reg;
 	/* IN or OUT */
 	uint32_t value;
diff --git a/include/xen/interface/version.h b/include/xen/interface/version.h
index e8b6519..30280c9 100644
--- a/include/xen/interface/version.h
+++ b/include/xen/interface/version.h
@@ -45,7 +45,7 @@ struct xen_changeset_info {
 
 #define XENVER_platform_parameters 5
 struct xen_platform_parameters {
-    unsigned long virt_start;
+    xen_ulong_t virt_start;
 };
 
 #define XENVER_get_features 6
diff --git a/include/xen/interface/xen.h b/include/xen/interface/xen.h
index 42834a3..ec32115 100644
--- a/include/xen/interface/xen.h
+++ b/include/xen/interface/xen.h
@@ -274,9 +274,9 @@ DEFINE_GUEST_HANDLE_STRUCT(mmu_update);
  * NB. The fields are natural register size for this architecture.
  */
 struct multicall_entry {
-    unsigned long op;
-    long result;
-    unsigned long args[6];
+    xen_ulong_t op;
+    xen_ulong_t result;
+    xen_ulong_t args[6];
 };
 DEFINE_GUEST_HANDLE_STRUCT(multicall_entry);
 
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 14:35:39 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:35:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOPB-0004vR-I5; Mon, 06 Aug 2012 14:35:37 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SyOPA-0004mP-5N
	for xen-devel@lists.xensource.com; Mon, 06 Aug 2012 14:35:36 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1344263726!2400405!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjI4NDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22639 invoked from network); 6 Aug 2012 14:35:29 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 14:35:29 -0000
X-IronPort-AV: E=Sophos;i="4.77,720,1336363200"; d="scan'208";a="33699028"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 10:35:26 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Mon, 6 Aug 2012 10:35:26 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1SyOHN-0002zY-VS;
	Mon, 06 Aug 2012 15:27:33 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <linux-kernel@vger.kernel.org>
Date: Mon, 6 Aug 2012 15:27:05 +0100
Message-ID: <1344263246-28036-2-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, konrad.wilk@oracle.com,
	catalin.marinas@arm.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	tim@xen.org, linux-arm-kernel@lists.infradead.org
Subject: [Xen-devel] [PATCH v2 02/23] xen/arm: hypercalls
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Use r12 to pass the hypercall number to the hypervisor.

We need a register to pass the hypercall number because we might not
know it at compile time and HVC only takes an immediate argument.

Among the available registers, r12 seems to be the best choice because it
is defined as the "intra-procedure-call scratch register".

Use the ISS to pass a hypervisor-specific tag.

Changes in v2:
- define a HYPERCALL macro for 5-argument hypercall wrappers, even though
it is unused at the moment;
- use ldm instead of pop;
- fix up comments.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 arch/arm/include/asm/xen/hypercall.h |   50 ++++++++++++++++
 arch/arm/xen/Makefile                |    2 +-
 arch/arm/xen/hypercall.S             |  106 ++++++++++++++++++++++++++++++++++
 3 files changed, 157 insertions(+), 1 deletions(-)
 create mode 100644 arch/arm/include/asm/xen/hypercall.h
 create mode 100644 arch/arm/xen/hypercall.S

diff --git a/arch/arm/include/asm/xen/hypercall.h b/arch/arm/include/asm/xen/hypercall.h
new file mode 100644
index 0000000..4ac0624
--- /dev/null
+++ b/arch/arm/include/asm/xen/hypercall.h
@@ -0,0 +1,50 @@
+/******************************************************************************
+ * hypercall.h
+ *
+ * Linux-specific hypervisor handling.
+ *
+ * Stefano Stabellini <stefano.stabellini@eu.citrix.com>, Citrix, 2012
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2
+ * as published by the Free Software Foundation; or, when distributed
+ * separately from the Linux kernel or incorporated into other
+ * software packages, subject to the following license:
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this source file (the "Software"), to deal in the Software without
+ * restriction, including without limitation the rights to use, copy, modify,
+ * merge, publish, distribute, sublicense, and/or sell copies of the Software,
+ * and to permit persons to whom the Software is furnished to do so, subject to
+ * the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ */
+
+#ifndef _ASM_ARM_XEN_HYPERCALL_H
+#define _ASM_ARM_XEN_HYPERCALL_H
+
+#include <xen/interface/xen.h>
+
+long privcmd_call(unsigned call, unsigned long a1,
+		unsigned long a2, unsigned long a3,
+		unsigned long a4, unsigned long a5);
+int HYPERVISOR_xen_version(int cmd, void *arg);
+int HYPERVISOR_console_io(int cmd, int count, char *str);
+int HYPERVISOR_grant_table_op(unsigned int cmd, void *uop, unsigned int count);
+int HYPERVISOR_sched_op(int cmd, void *arg);
+int HYPERVISOR_event_channel_op(int cmd, void *arg);
+unsigned long HYPERVISOR_hvm_op(int op, void *arg);
+int HYPERVISOR_memory_op(unsigned int cmd, void *arg);
+int HYPERVISOR_physdev_op(int cmd, void *arg);
+
+#endif /* _ASM_ARM_XEN_HYPERCALL_H */
diff --git a/arch/arm/xen/Makefile b/arch/arm/xen/Makefile
index 0bad594..b9d6acc 100644
--- a/arch/arm/xen/Makefile
+++ b/arch/arm/xen/Makefile
@@ -1 +1 @@
-obj-y		:= enlighten.o
+obj-y		:= enlighten.o hypercall.o
diff --git a/arch/arm/xen/hypercall.S b/arch/arm/xen/hypercall.S
new file mode 100644
index 0000000..074f5ed
--- /dev/null
+++ b/arch/arm/xen/hypercall.S
@@ -0,0 +1,106 @@
+/******************************************************************************
+ * hypercall.S
+ *
+ * Xen hypercall wrappers
+ *
+ * Stefano Stabellini <stefano.stabellini@eu.citrix.com>, Citrix, 2012
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2
+ * as published by the Free Software Foundation; or, when distributed
+ * separately from the Linux kernel or incorporated into other
+ * software packages, subject to the following license:
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this source file (the "Software"), to deal in the Software without
+ * restriction, including without limitation the rights to use, copy, modify,
+ * merge, publish, distribute, sublicense, and/or sell copies of the Software,
+ * and to permit persons to whom the Software is furnished to do so, subject to
+ * the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ */
+
+/*
+ * The Xen hypercall calling convention is very similar to the ARM
+ * procedure calling convention: the first parameter is passed in r0, the
+ * second in r1, the third in r2 and the fourth in r3. Considering that
+ * Xen hypercalls have 5 arguments at most, the fifth parameter is passed
+ * in r4, differently from the procedure calling convention of using the
+ * stack for that case.
+ *
+ * The hypercall number is passed in r12.
+ *
+ * The return value is in r0.
+ *
+ * The hvc ISS is required to be 0xEA1, that is the Xen specific ARM
+ * hypercall tag.
+ */
+
+#include <linux/linkage.h>
+#include <asm/assembler.h>
+#include <xen/interface/xen.h>
+
+
+/* HVC 0xEA1 */
+#ifdef CONFIG_THUMB2_KERNEL
+#define xen_hvc .word 0xf7e08ea1
+#else
+#define xen_hvc .word 0xe140ea71
+#endif
+
+#define HYPERCALL_SIMPLE(hypercall)		\
+ENTRY(HYPERVISOR_##hypercall)			\
+	mov r12, #__HYPERVISOR_##hypercall;	\
+	xen_hvc;							\
+	mov pc, lr;							\
+ENDPROC(HYPERVISOR_##hypercall)
+
+#define HYPERCALL0 HYPERCALL_SIMPLE
+#define HYPERCALL1 HYPERCALL_SIMPLE
+#define HYPERCALL2 HYPERCALL_SIMPLE
+#define HYPERCALL3 HYPERCALL_SIMPLE
+#define HYPERCALL4 HYPERCALL_SIMPLE
+
+#define HYPERCALL5(hypercall)			\
+ENTRY(HYPERVISOR_##hypercall)			\
+	stmdb sp!, {r4}						\
+	ldr r4, [sp, #4]					\
+	mov r12, #__HYPERVISOR_##hypercall;	\
+	xen_hvc								\
+	ldm sp!, {r4}						\
+	mov pc, lr							\
+ENDPROC(HYPERVISOR_##hypercall)
+
+                .text
+
+HYPERCALL2(xen_version);
+HYPERCALL3(console_io);
+HYPERCALL3(grant_table_op);
+HYPERCALL2(sched_op);
+HYPERCALL2(event_channel_op);
+HYPERCALL2(hvm_op);
+HYPERCALL2(memory_op);
+HYPERCALL2(physdev_op);
+
+ENTRY(privcmd_call)
+	stmdb sp!, {r4}
+	mov r12, r0
+	mov r0, r1
+	mov r1, r2
+	mov r2, r3
+	ldr r3, [sp, #8]
+	ldr r4, [sp, #4]
+	xen_hvc
+	ldm sp!, {r4}
+	mov pc, lr
+ENDPROC(privcmd_call);
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 14:35:39 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:35:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOPB-0004vR-I5; Mon, 06 Aug 2012 14:35:37 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SyOPA-0004mP-5N
	for xen-devel@lists.xensource.com; Mon, 06 Aug 2012 14:35:36 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1344263726!2400405!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjI4NDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22639 invoked from network); 6 Aug 2012 14:35:29 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 14:35:29 -0000
X-IronPort-AV: E=Sophos;i="4.77,720,1336363200"; d="scan'208";a="33699028"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 10:35:26 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Mon, 6 Aug 2012 10:35:26 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1SyOHN-0002zY-VS;
	Mon, 06 Aug 2012 15:27:33 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <linux-kernel@vger.kernel.org>
Date: Mon, 6 Aug 2012 15:27:05 +0100
Message-ID: <1344263246-28036-2-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, konrad.wilk@oracle.com,
	catalin.marinas@arm.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	tim@xen.org, linux-arm-kernel@lists.infradead.org
Subject: [Xen-devel] [PATCH v2 02/23] xen/arm: hypercalls
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Use r12 to pass the hypercall number to the hypervisor.

We need a register to pass the hypercall number because we might not
know it at compile time and HVC only takes an immediate argument.

Among the available registers r12 seems to be the best choice because it
is defined as "intra-procedure call scratch register".

Use the ISS to pass an hypervisor specific tag.

Changes in v2:
- define an HYPERCALL macro for 5 arguments hypercall wrappers, even if
at the moment is unused;
- use ldm instead of pop;
- fix up comments.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 arch/arm/include/asm/xen/hypercall.h |   50 ++++++++++++++++
 arch/arm/xen/Makefile                |    2 +-
 arch/arm/xen/hypercall.S             |  106 ++++++++++++++++++++++++++++++++++
 3 files changed, 157 insertions(+), 1 deletions(-)
 create mode 100644 arch/arm/include/asm/xen/hypercall.h
 create mode 100644 arch/arm/xen/hypercall.S

diff --git a/arch/arm/include/asm/xen/hypercall.h b/arch/arm/include/asm/xen/hypercall.h
new file mode 100644
index 0000000..4ac0624
--- /dev/null
+++ b/arch/arm/include/asm/xen/hypercall.h
@@ -0,0 +1,50 @@
+/******************************************************************************
+ * hypercall.h
+ *
+ * Linux-specific hypervisor handling.
+ *
+ * Stefano Stabellini <stefano.stabellini@eu.citrix.com>, Citrix, 2012
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2
+ * as published by the Free Software Foundation; or, when distributed
+ * separately from the Linux kernel or incorporated into other
+ * software packages, subject to the following license:
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this source file (the "Software"), to deal in the Software without
+ * restriction, including without limitation the rights to use, copy, modify,
+ * merge, publish, distribute, sublicense, and/or sell copies of the Software,
+ * and to permit persons to whom the Software is furnished to do so, subject to
+ * the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ */
+
+#ifndef _ASM_ARM_XEN_HYPERCALL_H
+#define _ASM_ARM_XEN_HYPERCALL_H
+
+#include <xen/interface/xen.h>
+
+long privcmd_call(unsigned call, unsigned long a1,
+		unsigned long a2, unsigned long a3,
+		unsigned long a4, unsigned long a5);
+int HYPERVISOR_xen_version(int cmd, void *arg);
+int HYPERVISOR_console_io(int cmd, int count, char *str);
+int HYPERVISOR_grant_table_op(unsigned int cmd, void *uop, unsigned int count);
+int HYPERVISOR_sched_op(int cmd, void *arg);
+int HYPERVISOR_event_channel_op(int cmd, void *arg);
+unsigned long HYPERVISOR_hvm_op(int op, void *arg);
+int HYPERVISOR_memory_op(unsigned int cmd, void *arg);
+int HYPERVISOR_physdev_op(int cmd, void *arg);
+
+#endif /* _ASM_ARM_XEN_HYPERCALL_H */
diff --git a/arch/arm/xen/Makefile b/arch/arm/xen/Makefile
index 0bad594..b9d6acc 100644
--- a/arch/arm/xen/Makefile
+++ b/arch/arm/xen/Makefile
@@ -1 +1 @@
-obj-y		:= enlighten.o
+obj-y		:= enlighten.o hypercall.o
diff --git a/arch/arm/xen/hypercall.S b/arch/arm/xen/hypercall.S
new file mode 100644
index 0000000..074f5ed
--- /dev/null
+++ b/arch/arm/xen/hypercall.S
@@ -0,0 +1,106 @@
+/******************************************************************************
+ * hypercall.S
+ *
+ * Xen hypercall wrappers
+ *
+ * Stefano Stabellini <stefano.stabellini@eu.citrix.com>, Citrix, 2012
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2
+ * as published by the Free Software Foundation; or, when distributed
+ * separately from the Linux kernel or incorporated into other
+ * software packages, subject to the following license:
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this source file (the "Software"), to deal in the Software without
+ * restriction, including without limitation the rights to use, copy, modify,
+ * merge, publish, distribute, sublicense, and/or sell copies of the Software,
+ * and to permit persons to whom the Software is furnished to do so, subject to
+ * the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ */
+
+/*
+ * The Xen hypercall calling convention is very similar to the ARM
+ * procedure calling convention: the first parameter is passed in r0, the
+ * second in r1, the third in r2 and the fourth in r3. Since Xen
+ * hypercalls take at most 5 arguments, the fifth parameter is passed in
+ * r4, unlike the procedure calling convention, which would use the
+ * stack in that case.
+ *
+ * The hypercall number is passed in r12.
+ *
+ * The return value is in r0.
+ *
+ * The hvc ISS must be 0xEA1, the Xen-specific ARM hypercall tag.
+ */
+
+#include <linux/linkage.h>
+#include <asm/assembler.h>
+#include <xen/interface/xen.h>
+
+
+/* HVC 0xEA1 */
+#ifdef CONFIG_THUMB2_KERNEL
+#define xen_hvc .word 0xf7e08ea1
+#else
+#define xen_hvc .word 0xe140ea71
+#endif
+
+#define HYPERCALL_SIMPLE(hypercall)		\
+ENTRY(HYPERVISOR_##hypercall)			\
+	mov r12, #__HYPERVISOR_##hypercall;	\
+	xen_hvc;							\
+	mov pc, lr;							\
+ENDPROC(HYPERVISOR_##hypercall)
+
+#define HYPERCALL0 HYPERCALL_SIMPLE
+#define HYPERCALL1 HYPERCALL_SIMPLE
+#define HYPERCALL2 HYPERCALL_SIMPLE
+#define HYPERCALL3 HYPERCALL_SIMPLE
+#define HYPERCALL4 HYPERCALL_SIMPLE
+
+#define HYPERCALL5(hypercall)			\
+ENTRY(HYPERVISOR_##hypercall)			\
+	stmdb sp!, {r4};					\
+	ldr r4, [sp, #4];					\
+	mov r12, #__HYPERVISOR_##hypercall;	\
+	xen_hvc;							\
+	ldm sp!, {r4};						\
+	mov pc, lr;							\
+ENDPROC(HYPERVISOR_##hypercall)
+
+                .text
+
+HYPERCALL2(xen_version);
+HYPERCALL3(console_io);
+HYPERCALL3(grant_table_op);
+HYPERCALL2(sched_op);
+HYPERCALL2(event_channel_op);
+HYPERCALL2(hvm_op);
+HYPERCALL2(memory_op);
+HYPERCALL2(physdev_op);
+
+ENTRY(privcmd_call)
+	stmdb sp!, {r4}
+	mov r12, r0
+	mov r0, r1
+	mov r1, r2
+	mov r2, r3
+	ldr r3, [sp, #8]
+	ldr r4, [sp, #4]
+	xen_hvc
+	ldm sp!, {r4}
+	mov pc, lr
+ENDPROC(privcmd_call);
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 14:35:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:35:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOPD-0004xx-1Y; Mon, 06 Aug 2012 14:35:39 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SyOPB-0004nS-Ip
	for xen-devel@lists.xensource.com; Mon, 06 Aug 2012 14:35:37 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1344263726!2400405!3
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjI4NDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22795 invoked from network); 6 Aug 2012 14:35:30 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 14:35:30 -0000
X-IronPort-AV: E=Sophos;i="4.77,720,1336363200"; d="scan'208";a="33699029"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 10:35:26 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Mon, 6 Aug 2012 10:35:26 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1SyOHN-0002zY-Un;
	Mon, 06 Aug 2012 15:27:33 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <linux-kernel@vger.kernel.org>
Date: Mon, 6 Aug 2012 15:27:04 +0100
Message-ID: <1344263246-28036-1-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, konrad.wilk@oracle.com,
	catalin.marinas@arm.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	tim@xen.org, linux-arm-kernel@lists.infradead.org
Subject: [Xen-devel] [PATCH v2 01/23] arm: initial Xen support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

- Basic hypervisor.h and interface.h definitions.
- Skeleton enlighten.c, set xen_start_info to an empty struct.
- Make xen_initial_domain dependent on the SIF_PRIVILEGED bit.

The new code only compiles when CONFIG_XEN is set; that option is going to
be added to arch/arm/Kconfig in patch #11, "xen/arm: introduce CONFIG_XEN
on ARM".

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 arch/arm/Makefile                     |    1 +
 arch/arm/include/asm/hypervisor.h     |    6 +++
 arch/arm/include/asm/xen/hypervisor.h |   19 ++++++++++
 arch/arm/include/asm/xen/interface.h  |   64 +++++++++++++++++++++++++++++++++
 arch/arm/xen/Makefile                 |    1 +
 arch/arm/xen/enlighten.c              |   35 ++++++++++++++++++
 include/xen/xen.h                     |    2 +-
 7 files changed, 127 insertions(+), 1 deletions(-)
 create mode 100644 arch/arm/include/asm/hypervisor.h
 create mode 100644 arch/arm/include/asm/xen/hypervisor.h
 create mode 100644 arch/arm/include/asm/xen/interface.h
 create mode 100644 arch/arm/xen/Makefile
 create mode 100644 arch/arm/xen/enlighten.c

diff --git a/arch/arm/Makefile b/arch/arm/Makefile
index 0298b00..70aaa82 100644
--- a/arch/arm/Makefile
+++ b/arch/arm/Makefile
@@ -246,6 +246,7 @@ endif
 core-$(CONFIG_FPE_NWFPE)	+= arch/arm/nwfpe/
 core-$(CONFIG_FPE_FASTFPE)	+= $(FASTFPE_OBJ)
 core-$(CONFIG_VFP)		+= arch/arm/vfp/
+core-$(CONFIG_XEN)		+= arch/arm/xen/
 
 # If we have a machine-specific directory, then include it in the build.
 core-y				+= arch/arm/kernel/ arch/arm/mm/ arch/arm/common/
diff --git a/arch/arm/include/asm/hypervisor.h b/arch/arm/include/asm/hypervisor.h
new file mode 100644
index 0000000..b90d9e5
--- /dev/null
+++ b/arch/arm/include/asm/hypervisor.h
@@ -0,0 +1,6 @@
+#ifndef _ASM_ARM_HYPERVISOR_H
+#define _ASM_ARM_HYPERVISOR_H
+
+#include <asm/xen/hypervisor.h>
+
+#endif
diff --git a/arch/arm/include/asm/xen/hypervisor.h b/arch/arm/include/asm/xen/hypervisor.h
new file mode 100644
index 0000000..d7ab99a
--- /dev/null
+++ b/arch/arm/include/asm/xen/hypervisor.h
@@ -0,0 +1,19 @@
+#ifndef _ASM_ARM_XEN_HYPERVISOR_H
+#define _ASM_ARM_XEN_HYPERVISOR_H
+
+extern struct shared_info *HYPERVISOR_shared_info;
+extern struct start_info *xen_start_info;
+
+/* Lazy mode for batching updates / context switch */
+enum paravirt_lazy_mode {
+	PARAVIRT_LAZY_NONE,
+	PARAVIRT_LAZY_MMU,
+	PARAVIRT_LAZY_CPU,
+};
+
+static inline enum paravirt_lazy_mode paravirt_get_lazy_mode(void)
+{
+	return PARAVIRT_LAZY_NONE;
+}
+
+#endif /* _ASM_ARM_XEN_HYPERVISOR_H */
diff --git a/arch/arm/include/asm/xen/interface.h b/arch/arm/include/asm/xen/interface.h
new file mode 100644
index 0000000..ab99270
--- /dev/null
+++ b/arch/arm/include/asm/xen/interface.h
@@ -0,0 +1,64 @@
+/******************************************************************************
+ * Guest OS interface to ARM Xen.
+ *
+ * Stefano Stabellini <stefano.stabellini@eu.citrix.com>, Citrix, 2012
+ */
+
+#ifndef _ASM_ARM_XEN_INTERFACE_H
+#define _ASM_ARM_XEN_INTERFACE_H
+
+#include <linux/types.h>
+
+#define __DEFINE_GUEST_HANDLE(name, type) \
+	typedef type * __guest_handle_ ## name
+
+#define DEFINE_GUEST_HANDLE_STRUCT(name) \
+	__DEFINE_GUEST_HANDLE(name, struct name)
+#define DEFINE_GUEST_HANDLE(name) __DEFINE_GUEST_HANDLE(name, name)
+#define GUEST_HANDLE(name)        __guest_handle_ ## name
+
+#define set_xen_guest_handle(hnd, val)			\
+	do {						\
+		if (sizeof(hnd) == 8)			\
+			*(uint64_t *)&(hnd) = 0;	\
+		(hnd) = val;				\
+	} while (0)
+
+#ifndef __ASSEMBLY__
+/* Guest handles for primitive C types. */
+__DEFINE_GUEST_HANDLE(uchar, unsigned char);
+__DEFINE_GUEST_HANDLE(uint,  unsigned int);
+__DEFINE_GUEST_HANDLE(ulong, unsigned long);
+DEFINE_GUEST_HANDLE(char);
+DEFINE_GUEST_HANDLE(int);
+DEFINE_GUEST_HANDLE(long);
+DEFINE_GUEST_HANDLE(void);
+DEFINE_GUEST_HANDLE(uint64_t);
+DEFINE_GUEST_HANDLE(uint32_t);
+
+/* Maximum number of virtual CPUs in multi-processor guests. */
+#define MAX_VIRT_CPUS 1
+
+struct arch_vcpu_info { };
+struct arch_shared_info { };
+
+/* XXX: Move pvclock definitions some place arch independent */
+struct pvclock_vcpu_time_info {
+	u32   version;
+	u32   pad0;
+	u64   tsc_timestamp;
+	u64   system_time;
+	u32   tsc_to_system_mul;
+	s8    tsc_shift;
+	u8    flags;
+	u8    pad[2];
+} __attribute__((__packed__)); /* 32 bytes */
+
+struct pvclock_wall_clock {
+	u32   version;
+	u32   sec;
+	u32   nsec;
+} __attribute__((__packed__));
+#endif
+
+#endif /* _ASM_ARM_XEN_INTERFACE_H */
diff --git a/arch/arm/xen/Makefile b/arch/arm/xen/Makefile
new file mode 100644
index 0000000..0bad594
--- /dev/null
+++ b/arch/arm/xen/Makefile
@@ -0,0 +1 @@
+obj-y		:= enlighten.o
diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
new file mode 100644
index 0000000..d27c2a6
--- /dev/null
+++ b/arch/arm/xen/enlighten.c
@@ -0,0 +1,35 @@
+#include <xen/xen.h>
+#include <xen/interface/xen.h>
+#include <xen/interface/memory.h>
+#include <xen/platform_pci.h>
+#include <asm/xen/hypervisor.h>
+#include <asm/xen/hypercall.h>
+#include <linux/module.h>
+
+struct start_info _xen_start_info;
+struct start_info *xen_start_info = &_xen_start_info;
+EXPORT_SYMBOL_GPL(xen_start_info);
+
+enum xen_domain_type xen_domain_type = XEN_NATIVE;
+EXPORT_SYMBOL_GPL(xen_domain_type);
+
+struct shared_info xen_dummy_shared_info;
+struct shared_info *HYPERVISOR_shared_info = (void *)&xen_dummy_shared_info;
+
+DEFINE_PER_CPU(struct vcpu_info *, xen_vcpu);
+
+/* XXX: to be removed */
+__read_mostly int xen_have_vector_callback;
+EXPORT_SYMBOL_GPL(xen_have_vector_callback);
+
+int xen_platform_pci_unplug = XEN_UNPLUG_ALL;
+EXPORT_SYMBOL_GPL(xen_platform_pci_unplug);
+
+int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
+			       unsigned long addr,
+			       unsigned long mfn, int nr,
+			       pgprot_t prot, unsigned domid)
+{
+	return -ENOSYS;
+}
+EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_range);
diff --git a/include/xen/xen.h b/include/xen/xen.h
index a164024..2c0d3a5 100644
--- a/include/xen/xen.h
+++ b/include/xen/xen.h
@@ -23,7 +23,7 @@ extern enum xen_domain_type xen_domain_type;
 #include <xen/interface/xen.h>
 #include <asm/xen/hypervisor.h>
 
-#define xen_initial_domain()	(xen_pv_domain() && \
+#define xen_initial_domain()	(xen_domain() && \
 				 xen_start_info->flags & SIF_INITDOMAIN)
 #else  /* !CONFIG_XEN_DOM0 */
 #define xen_initial_domain()	(0)
-- 
1.7.2.5



From xen-devel-bounces@lists.xen.org Mon Aug 06 14:38:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:38:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyORy-0006zp-NW; Mon, 06 Aug 2012 14:38:30 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SyORx-0006zN-Lj
	for xen-devel@lists.xensource.com; Mon, 06 Aug 2012 14:38:29 +0000
Received: from [85.158.139.83:14741] by server-11.bemta-5.messagelabs.com id
	9D/50-20400-4E6DF105; Mon, 06 Aug 2012 14:38:28 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-13.tower-182.messagelabs.com!1344263907!29927498!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgxNjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22884 invoked from network); 6 Aug 2012 14:38:28 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-13.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 14:38:28 -0000
X-IronPort-AV: E=Sophos;i="4.77,720,1336348800"; d="scan'208";a="13868789"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 14:38:27 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Mon, 6 Aug 2012 15:38:27 +0100
Date: Mon, 6 Aug 2012 15:38:05 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <20120806142432.GA2487@phenom.dumpdata.com>
Message-ID: <alpine.DEB.2.02.1208061536390.4645@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
	<1344262325-26598-1-git-send-email-stefano.stabellini@eu.citrix.com>
	<20120806142432.GA2487@phenom.dumpdata.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"Tim Deegan \(3P\)" <Tim.Deegan@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 1/5] xen: improve changes to
 xen_add_to_physmap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 6 Aug 2012, Konrad Rzeszutek Wilk wrote:
> On Mon, Aug 06, 2012 at 03:12:01PM +0100, Stefano Stabellini wrote:
> > This is an incremental patch on top of
> > c0bc926083b5987a3e9944eec2c12ad0580100e2: in order to retain binary
> > compatibility, it is better to introduce foreign_domid as part of a
> > union containing both size and foreign_domid.
> > 
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > ---
> >  xen/include/public/memory.h |   11 +++++++----
> >  1 files changed, 7 insertions(+), 4 deletions(-)
> > 
> > diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
> > index b2adfbe..b0af2fd 100644
> > --- a/xen/include/public/memory.h
> > +++ b/xen/include/public/memory.h
> > @@ -208,8 +208,12 @@ struct xen_add_to_physmap {
> >      /* Which domain to change the mapping for. */
> >      domid_t domid;
> >  
> > -    /* Number of pages to go through for gmfn_range */
> > -    uint16_t    size;
> > +    union {
> > +        /* Number of pages to go through for gmfn_range */
> > +        uint16_t    size;
> > +        /* IFF gmfn_foreign */
> > +        domid_t foreign_domid;
> > +    };
> >  
> >      /* Source mapping space. */
> >  #define XENMAPSPACE_shared_info  0 /* shared info page */
> > @@ -217,8 +221,7 @@ struct xen_add_to_physmap {
> >  #define XENMAPSPACE_gmfn         2 /* GMFN */
> >  #define XENMAPSPACE_gmfn_range   3 /* GMFN range */
> >  #define XENMAPSPACE_gmfn_foreign 4 /* GMFN from another guest */
> > -    uint16_t space;
> > -    domid_t foreign_domid; /* IFF gmfn_foreign */
> > +    unsigned int space;
> 
> Should this be of uint32... ?

Nope: this patch is a fix for:

diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
index 86d02c8..b2adfbe 100644
--- a/xen/include/public/memory.h
+++ b/xen/include/public/memory.h
@@ -212,11 +212,13 @@ struct xen_add_to_physmap {
     uint16_t    size;
 
     /* Source mapping space. */
-#define XENMAPSPACE_shared_info 0 /* shared info page */
-#define XENMAPSPACE_grant_table 1 /* grant table page */
-#define XENMAPSPACE_gmfn        2 /* GMFN */
-#define XENMAPSPACE_gmfn_range  3 /* GMFN range */
-    unsigned int space;
+#define XENMAPSPACE_shared_info  0 /* shared info page */
+#define XENMAPSPACE_grant_table  1 /* grant table page */
+#define XENMAPSPACE_gmfn         2 /* GMFN */
+#define XENMAPSPACE_gmfn_range   3 /* GMFN range */
+#define XENMAPSPACE_gmfn_foreign 4 /* GMFN from another guest */
+    uint16_t space;
+    domid_t foreign_domid; /* IFF gmfn_foreign */
 
 #define XENMAPIDX_grant_table_status 0x80000000

---

This is not upstream yet, so the new patch simply brings space back to its
original type.


> > @@ -217,8 +221,7 @@ struct xen_add_to_physmap {
> >  #define XENMAPSPACE_gmfn         2 /* GMFN */
> >  #define XENMAPSPACE_gmfn_range   3 /* GMFN range */
> >  #define XENMAPSPACE_gmfn_foreign 4 /* GMFN from another guest */
> > -    uint16_t space;
> > -    domid_t foreign_domid; /* IFF gmfn_foreign */
> > +    unsigned int space;
> 
> Should this be of uint32... ?

Nope: this patch is a fix for:

diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
index 86d02c8..b2adfbe 100644
--- a/xen/include/public/memory.h
+++ b/xen/include/public/memory.h
@@ -212,11 +212,13 @@ struct xen_add_to_physmap {
     uint16_t    size;
 
     /* Source mapping space. */
-#define XENMAPSPACE_shared_info 0 /* shared info page */
-#define XENMAPSPACE_grant_table 1 /* grant table page */
-#define XENMAPSPACE_gmfn        2 /* GMFN */
-#define XENMAPSPACE_gmfn_range  3 /* GMFN range */
-    unsigned int space;
+#define XENMAPSPACE_shared_info  0 /* shared info page */
+#define XENMAPSPACE_grant_table  1 /* grant table page */
+#define XENMAPSPACE_gmfn         2 /* GMFN */
+#define XENMAPSPACE_gmfn_range   3 /* GMFN range */
+#define XENMAPSPACE_gmfn_foreign 4 /* GMFN from another guest */
+    uint16_t space;
+    domid_t foreign_domid; /* IFF gmfn_foreign */
 
 #define XENMAPIDX_grant_table_status 0x80000000

---

This is not upstream yet, so this patch just brings space back to its
original type.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 14:39:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:39:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOT6-0007Id-CZ; Mon, 06 Aug 2012 14:39:40 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1SyOT4-0007I9-WB
	for xen-devel@lists.xensource.com; Mon, 06 Aug 2012 14:39:39 +0000
Received: from [85.158.143.99:40686] by server-1.bemta-4.messagelabs.com id
	0C/92-24392-A27DF105; Mon, 06 Aug 2012 14:39:38 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-4.tower-216.messagelabs.com!1344263976!25215626!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzAwNzE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12797 invoked from network); 6 Aug 2012 14:39:37 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 14:39:37 -0000
X-IronPort-AV: E=Sophos;i="4.77,720,1336363200"; d="scan'208";a="204277584"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 10:39:35 -0400
Received: from [10.80.2.76] (10.80.2.76) by FTLPMAILMX02.citrite.net
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0; Mon, 6 Aug 2012
	10:39:35 -0400
Message-ID: <501FD726.90806@citrix.com>
Date: Mon, 6 Aug 2012 15:39:34 +0100
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20120428 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
Cc: xen-devel@lists.xensource.com, Tim Deegan <Tim.Deegan@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH 0/5] ARM hypercall ABI: 64 bit ready
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 06/08/12 15:11, Stefano Stabellini wrote:
> Hi all,
> this patch series makes the necessary changes to make sure that the
> current ARM hypercall ABI can be used as-is on 64 bit ARM platforms:
> 
> - it defines xen_ulong_t as uint64_t on ARM;
> - it introduces a new macro to handle guest pointers, called
> XEN_GUEST_HANDLE_PARAM (that has size 4 bytes on aarch and is going to
> have size 8 bytes on aarch64);
> - it replaces all the occurrences of XEN_GUEST_HANDLE in hypercall
> parameters with XEN_GUEST_HANDLE_PARAM.

This is a subtle (and undocumented!) distinction. I can see people
adding/modifying hypercalls etc. getting this wrong and no one noticing
for a while (since it doesn't affect x86).

The xen_ulong_t parameters (when used for pointers) from an aarch guest
point of view are a uint32_t guest pointer and uint32_t of padding.  So
the guest handles will be the same size in hypercall parameters and
structure members.

David

> On x86 and ia64 things should stay exactly the same.
> 
> On ARM all the unsigned longs and the guest pointers that are members
> of a struct become 8 bytes in size (on both aarch and aarch64).
> However guest pointers that are passed as hypercall arguments in
> registers are going to be 4 bytes on aarch and 8 bytes on aarch64.
> 
> It is based on Ian's arm-for-4.3 branch. 
> 
> 
> Stefano Stabellini (5):
>       xen: improve changes to xen_add_to_physmap
>       xen/arm: introduce __lshrdi3 and __aeabi_llsr
>       xen: few more xen_ulong_t substitutions
>       xen: introduce XEN_GUEST_HANDLE_PARAM
>       xen: replace XEN_GUEST_HANDLE with XEN_GUEST_HANDLE_PARAM when appropriate
> 
> 
>  xen/arch/arm/domain.c              |    2 +-
>  xen/arch/arm/domctl.c              |    2 +-
>  xen/arch/arm/hvm.c                 |    2 +-
>  xen/arch/arm/lib/Makefile          |    2 +-
>  xen/arch/arm/lib/lshrdi3.S         |   54 ++++++++++++++++++++++++++++++++++++
>  xen/arch/arm/mm.c                  |    2 +-
>  xen/arch/arm/physdev.c             |    2 +-
>  xen/arch/arm/sysctl.c              |    2 +-
>  xen/arch/x86/cpu/mcheck/mce.c      |    2 +-
>  xen/arch/x86/domain.c              |    2 +-
>  xen/arch/x86/domctl.c              |    2 +-
>  xen/arch/x86/efi/runtime.c         |    2 +-
>  xen/arch/x86/hvm/hvm.c             |   26 ++++++++--------
>  xen/arch/x86/microcode.c           |    2 +-
>  xen/arch/x86/mm.c                  |   14 ++++----
>  xen/arch/x86/mm/hap/hap.c          |    2 +-
>  xen/arch/x86/mm/mem_event.c        |    2 +-
>  xen/arch/x86/mm/paging.c           |    2 +-
>  xen/arch/x86/mm/shadow/common.c    |    2 +-
>  xen/arch/x86/physdev.c             |    2 +-
>  xen/arch/x86/platform_hypercall.c  |    2 +-
>  xen/arch/x86/sysctl.c              |    2 +-
>  xen/arch/x86/traps.c               |    2 +-
>  xen/arch/x86/x86_32/mm.c           |    2 +-
>  xen/arch/x86/x86_32/traps.c        |    2 +-
>  xen/arch/x86/x86_64/compat/mm.c    |    8 ++--
>  xen/arch/x86/x86_64/domain.c       |    2 +-
>  xen/arch/x86/x86_64/mm.c           |    2 +-
>  xen/arch/x86/x86_64/traps.c        |    2 +-
>  xen/common/compat/domain.c         |    2 +-
>  xen/common/compat/grant_table.c    |    2 +-
>  xen/common/compat/memory.c         |    2 +-
>  xen/common/domain.c                |    2 +-
>  xen/common/domctl.c                |    2 +-
>  xen/common/event_channel.c         |    2 +-
>  xen/common/grant_table.c           |   36 ++++++++++++------------
>  xen/common/kernel.c                |    4 +-
>  xen/common/kexec.c                 |   16 +++++-----
>  xen/common/memory.c                |    4 +-
>  xen/common/multicall.c             |    2 +-
>  xen/common/schedule.c              |    2 +-
>  xen/common/sysctl.c                |    2 +-
>  xen/common/xenoprof.c              |    8 ++--
>  xen/drivers/acpi/pmstat.c          |    2 +-
>  xen/drivers/char/console.c         |    6 ++--
>  xen/drivers/passthrough/iommu.c    |    2 +-
>  xen/include/asm-arm/guest_access.h |    2 +-
>  xen/include/asm-arm/hypercall.h    |    2 +-
>  xen/include/asm-arm/mm.h           |    2 +-
>  xen/include/asm-x86/hap.h          |    2 +-
>  xen/include/asm-x86/hypercall.h    |   24 ++++++++--------
>  xen/include/asm-x86/mem_event.h    |    2 +-
>  xen/include/asm-x86/mm.h           |    8 ++--
>  xen/include/asm-x86/paging.h       |    2 +-
>  xen/include/asm-x86/processor.h    |    2 +-
>  xen/include/asm-x86/shadow.h       |    2 +-
>  xen/include/asm-x86/xenoprof.h     |    6 ++--
>  xen/include/public/arch-arm.h      |   21 ++++++++++----
>  xen/include/public/arch-ia64.h     |    1 +
>  xen/include/public/arch-x86/xen.h  |    1 +
>  xen/include/public/memory.h        |   11 ++++--
>  xen/include/public/physdev.h       |    2 +-
>  xen/include/public/version.h       |    2 +-
>  xen/include/public/xen.h           |    4 +-
>  xen/include/xen/acpi.h             |    4 +-
>  xen/include/xen/hypercall.h        |   52 +++++++++++++++++-----------------
>  xen/include/xen/iommu.h            |    2 +-
>  xen/include/xen/tmem_xen.h         |    2 +-
>  xen/include/xsm/xsm.h              |    4 +-
>  xen/xsm/dummy.c                    |    2 +-
>  xen/xsm/flask/flask_op.c           |    4 +-
>  xen/xsm/flask/hooks.c              |    2 +-
>  xen/xsm/xsm_core.c                 |    2 +-
>  73 files changed, 243 insertions(+), 175 deletions(-)
> 
> 
> Cheers,
> 
> Stefano
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 14:44:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:44:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOXm-0007fB-8p; Mon, 06 Aug 2012 14:44:30 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SyOXk-0007f0-Gj
	for xen-devel@lists.xensource.com; Mon, 06 Aug 2012 14:44:28 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1344264262!11105614!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgxNjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27174 invoked from network); 6 Aug 2012 14:44:22 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 14:44:22 -0000
X-IronPort-AV: E=Sophos;i="4.77,720,1336348800"; d="scan'208";a="13868925"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 14:44:22 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Mon, 6 Aug 2012 15:44:22 +0100
Date: Mon, 6 Aug 2012 15:44:10 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: David Vrabel <david.vrabel@citrix.com>
In-Reply-To: <501FD726.90806@citrix.com>
Message-ID: <alpine.DEB.2.02.1208061542200.4645@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
	<501FD726.90806@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"Tim Deegan \(3P\)" <Tim.Deegan@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 0/5] ARM hypercall ABI: 64 bit ready
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 6 Aug 2012, David Vrabel wrote:
> On 06/08/12 15:11, Stefano Stabellini wrote:
> > Hi all,
> > this patch series makes the necessary changes to make sure that the
> > current ARM hypercall ABI can be used as-is on 64 bit ARM platforms:
> > 
> > - it defines xen_ulong_t as uint64_t on ARM;
> > - it introduces a new macro to handle guest pointers, called
> > XEN_GUEST_HANDLE_PARAM (that has size 4 bytes on aarch and is going to
> > have size 8 bytes on aarch64);
> > - it replaces all the occurrences of XEN_GUEST_HANDLE in hypercall
> > parameters with XEN_GUEST_HANDLE_PARAM.
> 
> This is a subtle (and undocumented!) distinction. I can see people
> adding/modifying hypercalls etc. getting this wrong and no one noticing
> for a while (since it doesn't affect x86).

Where should I document this? I wrote it in the commit message, but
maybe a doc under docs/ is better.


> The xen_ulong_t parameters (when used for pointers) from an aarch guest
> point of view are a uint32_t guest pointer and uint32_t of padding.  So
> the guest handles will be the same size in hypercall parameters and
> structure members.

I changed xen_ulong_t to be 64 bits on ARM.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 14:46:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:46:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOZr-0007lW-Pj; Mon, 06 Aug 2012 14:46:39 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jean.guyader@gmail.com>) id 1SyOZq-0007lN-6T
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 14:46:38 +0000
X-Env-Sender: jean.guyader@gmail.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1344264390!4062717!1
X-Originating-IP: [209.85.216.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32748 invoked from network); 6 Aug 2012 14:46:31 -0000
Received: from mail-qa0-f45.google.com (HELO mail-qa0-f45.google.com)
	(209.85.216.45)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 14:46:31 -0000
Received: by qadc10 with SMTP id c10so531321qad.11
	for <xen-devel@lists.xen.org>; Mon, 06 Aug 2012 07:46:30 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type;
	bh=c6+lIqY0WVKGb7pqCarYjqGTVqAYKkWJFf9l2GiMY1I=;
	b=VzYeKS8e+a7WY1W+xkaGcXfQoDqbsmcIqKS54d13J8nUxr+qC9dcEMlSx25d5KXroP
	DxoEOifE8XsLBGS2n5wXQf0cKjMCAAkz+gu9Lf31O4IFEwn9cbqlR/Pt4BDF0W018QA5
	PDJqEQOuQYHRGsBYLXFTbiDWyIzOt5MwGAvhw8uhWZHVxYelqeAjjREwjn1WGsHgP1/s
	MwyWMdJpRpSWqMn3TDsKM2sBi4jjwF5OkoTDAcFucgi8HC87AqGoVIE9PWKQvf81vItB
	dVgdFYcRo0v8R0X5o91pFyWIbKv+t30AWzp2piEMv+LYbBnWQytse8NGYD7yXf3r3OOZ
	wRfQ==
Received: by 10.59.1.193 with SMTP id bi1mr898188ved.57.1344264390448; Mon, 06
	Aug 2012 07:46:30 -0700 (PDT)
MIME-Version: 1.0
Received: by 10.58.181.101 with HTTP; Mon, 6 Aug 2012 07:46:10 -0700 (PDT)
In-Reply-To: <501F97F90200007800092CAC@nat28.tlf.novell.com>
References: <1344023454-31425-1-git-send-email-jean.guyader@citrix.com>
	<1344023454-31425-4-git-send-email-jean.guyader@citrix.com>
	<501F97F90200007800092CAC@nat28.tlf.novell.com>
From: Jean Guyader <jean.guyader@gmail.com>
Date: Mon, 6 Aug 2012 15:46:10 +0100
Message-ID: <CAEBdQ920zPHQuLng=ncPonFHoSnvgFb3rz4wo+MZP5y88GAAxA@mail.gmail.com>
To: Jan Beulich <JBeulich@suse.com>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ross Philipson <Ross.Philipson@citrix.com>,
	Jean Guyader <jean.guyader@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 3/5] xen: virq, remove VIRQ_XC_RESERVED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 6 August 2012 09:10, Jan Beulich <JBeulich@suse.com> wrote:
>>>> On 03.08.12 at 21:50, Jean Guyader <jean.guyader@citrix.com> wrote:
>> VIRQ_XC_RESERVED was reserved for V4V, but we have switched
>> to event channels, so this placeholder is no longer required.
>
> I'm fine with this change, but is a future re-use of the value indeed
> not going to cause problems on XenServer (or wherever else this
> patch set is coming from)?
>

That may need to be confirmed, but I don't think XenServer is using v4v
yet; their plan is to pick it up from upstream. So removing this VIRQ
should be fine.

Jean
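[Editor's note: the point about re-use can be pictured with a toy VIRQ table. The names and numbers below are hypothetical stand-ins for illustration, not Xen's actual public/xen.h definitions.]

```c
#include <assert.h>

/* Illustrative numbering only -- NOT Xen's real VIRQ values. The patch
 * under discussion deletes a VIRQ slot that was parked for V4V; with
 * V4V signalling moved to event channels, the slot goes away and its
 * number becomes free for future assignment, which is exactly the
 * re-use concern raised above: an out-of-tree consumer still built
 * against the old header would disagree about what that number means. */
enum virq_sketch {
    VIRQ_TIMER_SKETCH   = 0,
    VIRQ_CONSOLE_SKETCH = 2,
    /* VIRQ_XC_RESERVED_SKETCH = 11,   <-- placeholder removed */
    VIRQ_FREE_SLOT_SKETCH = 11         /* value now available for re-use */
};
```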

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 14:48:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:48:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyObQ-0007sr-97; Mon, 06 Aug 2012 14:48:16 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jean.guyader@gmail.com>) id 1SyObP-0007sl-Bp
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 14:48:15 +0000
Received: from [85.158.139.83:14810] by server-6.bemta-5.messagelabs.com id
	36/17-11348-E29DF105; Mon, 06 Aug 2012 14:48:14 +0000
X-Env-Sender: jean.guyader@gmail.com
X-Msg-Ref: server-8.tower-182.messagelabs.com!1344264479!19226486!1
X-Originating-IP: [209.85.216.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21071 invoked from network); 6 Aug 2012 14:48:00 -0000
Received: from mail-qc0-f173.google.com (HELO mail-qc0-f173.google.com)
	(209.85.216.173)
	by server-8.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 14:48:00 -0000
Received: by qcab12 with SMTP id b12so2073456qca.32
	for <xen-devel@lists.xen.org>; Mon, 06 Aug 2012 07:47:59 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type;
	bh=OjNlg8M2njPiMl3zuTK9aiAfPSpwVy6WqtIs7OBy4TI=;
	b=lS0hDvBuC5d8GNde+joqDNfVsUnozhOH6afMX/HyZbAUU8z67JBCnwQXexPFDkty9Y
	7D5HDoeD0ZqXp0ODG9tFEi8AsKcZtlk4hht+JxI+apcwIYLsAVTMHv1sgLEY8pMIiNat
	Y1fLFEUX8D45JL0SwGdkoWDeVm9m09U6L4r94i7T16GnyTjWnBWyCD+sAXel3dLm33ga
	t0R8EUUm6n0agwHMNJKtEq4DU78VI3q+OYPDf2JNdIAOvblK/4PLNeTxZYQp4TsKJSvu
	EdagEN8awTjRaWIiLJuqWW61JqX04Na94kjQQOGxz12FirhgkKNP+d6O/y62ngID+iXU
	Aeog==
Received: by 10.59.1.193 with SMTP id bi1mr902406ved.57.1344264479661; Mon, 06
	Aug 2012 07:47:59 -0700 (PDT)
MIME-Version: 1.0
Received: by 10.58.181.101 with HTTP; Mon, 6 Aug 2012 07:47:39 -0700 (PDT)
In-Reply-To: <501F97890200007800092CA9@nat28.tlf.novell.com>
References: <1344023454-31425-1-git-send-email-jean.guyader@citrix.com>
	<1344023454-31425-2-git-send-email-jean.guyader@citrix.com>
	<501F97890200007800092CA9@nat28.tlf.novell.com>
From: Jean Guyader <jean.guyader@gmail.com>
Date: Mon, 6 Aug 2012 15:47:39 +0100
Message-ID: <CAEBdQ92yh41qEhN=etNpkZDaQargVw2iJiCGwkG_hAmGe9aZdw@mail.gmail.com>
To: Jan Beulich <JBeulich@suse.com>
Cc: Jean Guyader <jean.guyader@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 1/5] xen: add ssize_t
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 6 August 2012 09:08, Jan Beulich <JBeulich@suse.com> wrote:
>>>> On 03.08.12 at 21:50, Jean Guyader <jean.guyader@citrix.com> wrote:
>
> Without finally explaining why you need this type in the first place,
> I'll continue to NAK this patch. (This is made even worse by the fact
> that the two inline functions in patch 5 that make use of the type
> appear to be unused.)
>

Understood. I'll switch to using long instead of ssize_t in my
forthcoming patch series.

Jean
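[Editor's note: for context on the type switch, the reason a signed size type appears in such interfaces is to carry either a byte count or a negative errno-style value in one return, and plain long does that without dragging the POSIX-only ssize_t into Xen's public headers. A minimal sketch; the function name and the hard-coded errno value are made up for illustration.]

```c
#include <assert.h>
#include <limits.h>

/* Hypothetical helper, not a real Xen interface: returns a byte count
 * on success or a negative errno-style value on failure, typed as long
 * rather than ssize_t so no POSIX header is required. */
static long sendv_sketch(unsigned long len)
{
    if (len > (unsigned long)LONG_MAX)
        return -22;            /* -EINVAL: count would not fit */
    return (long)len;          /* success: bytes handled */
}
```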

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 14:49:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:49:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOcO-0007zQ-RB; Mon, 06 Aug 2012 14:49:16 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SyOcM-0007z4-Qw
	for xen-devel@lists.xensource.com; Mon, 06 Aug 2012 14:49:15 +0000
Received: from [85.158.138.51:37221] by server-12.bemta-3.messagelabs.com id
	D1/CF-21301-969DF105; Mon, 06 Aug 2012 14:49:13 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-9.tower-174.messagelabs.com!1344264552!30701072!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgxNjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23018 invoked from network); 6 Aug 2012 14:49:13 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-9.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 14:49:13 -0000
X-IronPort-AV: E=Sophos;i="4.77,720,1336348800"; d="scan'208";a="13869040"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 14:49:12 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Mon, 6 Aug 2012 15:49:12 +0100
Date: Mon, 6 Aug 2012 15:49:00 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
In-Reply-To: <alpine.DEB.2.02.1208061542200.4645@kaball.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1208061545000.4645@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
	<501FD726.90806@citrix.com>
	<alpine.DEB.2.02.1208061542200.4645@kaball.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	David Vrabel <david.vrabel@citrix.com>,
	"Tim Deegan \(3P\)" <Tim.Deegan@citrix.com>
Subject: Re: [Xen-devel] [PATCH 0/5] ARM hypercall ABI: 64 bit ready
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 6 Aug 2012, Stefano Stabellini wrote:
> On Mon, 6 Aug 2012, David Vrabel wrote:
> > On 06/08/12 15:11, Stefano Stabellini wrote:
> > > Hi all,
> > > this patch series makes the necessary changes to make sure that the
> > > current ARM hypercall ABI can be used as-is on 64 bit ARM platforms:
> > > 
> > > - it defines xen_ulong_t as uint64_t on ARM;
> > > - it introduces a new macro to handle guest pointers, called
> > > XEN_GUEST_HANDLE_PARAM (that has size 4 bytes on 32-bit ARM and is going to
> > > have size 8 bytes on aarch64);
> > > - it replaces all the occurrences of XEN_GUEST_HANDLE in hypercall
> > > parameters with XEN_GUEST_HANDLE_PARAM.
> > 
> > This is a subtle (and undocumented!) distinction. I can see people
> > adding/modifying hypercall etc. getting this wrong and no one noticing
> > for a while (since it doesn't affect x86).
> 
> Where should I document this? I wrote it into the commit message but
> maybe a doc under docs is better.
> 
> 
> > The xen_ulong_t parameters (when used for pointers) from a 32-bit ARM guest
> > point of view are a uint32_t guest pointer and uint32_t of padding.  So
> > the guest handles will be the same size in hypercall parameters and
> > structure members.
> 
> I changed xen_ulong_t to be 64 bit on ARM
> 

Sorry, pressed "send" too soon.

I don't think there are any xen_ulong_t used for pointers in hypercall
parameters, at least there are none in the hypercalls we are using so
far.
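[Editor's note: the distinction discussed in this thread can be sketched with two stand-in types. These are simplified illustrations of the XEN_GUEST_HANDLE / XEN_GUEST_HANDLE_PARAM split, not the real macro definitions: a handle embedded in a hypercall argument structure stays a fixed 64 bits so the layout matches between 32- and 64-bit guests, while a handle passed directly as a hypercall parameter is native pointer width.]

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-ins for the two guest-handle flavours; the real
 * XEN_GUEST_HANDLE* macros are considerably more involved. */

/* Embedded in a hypercall argument structure: fixed 64-bit field, so
 * a 32-bit guest stores a 32-bit pointer plus 32 bits of padding and
 * the structure layout is identical for 32- and 64-bit guests. */
typedef struct { uint64_t p; } guest_handle_sketch_t;

/* Passed directly as a hypercall parameter (in a register): native
 * pointer width, i.e. 4 bytes on 32-bit ARM and 8 bytes on aarch64. */
typedef struct { void *p; } guest_handle_param_sketch_t;
```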

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 14:49:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:49:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOco-00084Y-9a; Mon, 06 Aug 2012 14:49:42 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1SyOcl-00083S-VJ
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 14:49:40 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1344264572!8794079!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzAwNzE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25466 invoked from network); 6 Aug 2012 14:49:33 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 14:49:33 -0000
X-IronPort-AV: E=Sophos;i="4.77,720,1336363200"; d="scan'208";a="204279038"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 10:49:31 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Mon, 6 Aug 2012 10:49:32 -0400
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1SyOcd-0003SF-7t;
	Mon, 06 Aug 2012 15:49:31 +0100
Message-ID: <501FD97B.4030906@citrix.com>
Date: Mon, 6 Aug 2012 15:49:31 +0100
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120714 Thunderbird/14.0
MIME-Version: 1.0
To: Jean Guyader <jean.guyader@gmail.com>
References: <1344023454-31425-1-git-send-email-jean.guyader@citrix.com>
	<1344023454-31425-4-git-send-email-jean.guyader@citrix.com>
	<501F97F90200007800092CAC@nat28.tlf.novell.com>
	<CAEBdQ920zPHQuLng=ncPonFHoSnvgFb3rz4wo+MZP5y88GAAxA@mail.gmail.com>
In-Reply-To: <CAEBdQ920zPHQuLng=ncPonFHoSnvgFb3rz4wo+MZP5y88GAAxA@mail.gmail.com>
X-Enigmail-Version: 1.4.3
Cc: "Jean Guyader \(3P\)" <jean.guyader@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Ross Philipson <Ross.Philipson@citrix.com>,
	Jan Beulich <JBeulich@suse.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 3/5] xen: virq, remove VIRQ_XC_RESERVED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 06/08/12 15:46, Jean Guyader wrote:
> On 6 August 2012 09:10, Jan Beulich <JBeulich@suse.com> wrote:
>>>>> On 03.08.12 at 21:50, Jean Guyader <jean.guyader@citrix.com> wrote:
>>> VIRQ_XC_RESERVED was reserved for V4V, but we have switched
>>> to event channels, so this placeholder is no longer required.
>> I'm fine with this change, but is a future re-use of the value indeed
>> not going to cause problems on XenServer (or wherever else this
>> patch set is coming from)?
>>
> That may need to be confirmed, but I don't think XenServer is using v4v
> yet; their plan is to pick it up from upstream. So removing this VIRQ
> should be fine.
>
> Jean

Yes - we don't use it, but we will be interested once XenClient
successfully upstreams it.

~Andrew

>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

-- 
Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
T: +44 (0)1223 225 900, http://www.citrix.com


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 14:49:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:49:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOco-00084Y-9a; Mon, 06 Aug 2012 14:49:42 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1SyOcl-00083S-VJ
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 14:49:40 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1344264572!8794079!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzAwNzE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25466 invoked from network); 6 Aug 2012 14:49:33 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 14:49:33 -0000
X-IronPort-AV: E=Sophos;i="4.77,720,1336363200"; d="scan'208";a="204279038"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 10:49:31 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Mon, 6 Aug 2012 10:49:32 -0400
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1SyOcd-0003SF-7t;
	Mon, 06 Aug 2012 15:49:31 +0100
Message-ID: <501FD97B.4030906@citrix.com>
Date: Mon, 6 Aug 2012 15:49:31 +0100
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120714 Thunderbird/14.0
MIME-Version: 1.0
To: Jean Guyader <jean.guyader@gmail.com>
References: <1344023454-31425-1-git-send-email-jean.guyader@citrix.com>
	<1344023454-31425-4-git-send-email-jean.guyader@citrix.com>
	<501F97F90200007800092CAC@nat28.tlf.novell.com>
	<CAEBdQ920zPHQuLng=ncPonFHoSnvgFb3rz4wo+MZP5y88GAAxA@mail.gmail.com>
In-Reply-To: <CAEBdQ920zPHQuLng=ncPonFHoSnvgFb3rz4wo+MZP5y88GAAxA@mail.gmail.com>
X-Enigmail-Version: 1.4.3
Cc: "Jean Guyader \(3P\)" <jean.guyader@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Ross Philipson <Ross.Philipson@citrix.com>,
	Jan Beulich <JBeulich@suse.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 3/5] xen: virq, remove VIRQ_XC_RESERVED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 06/08/12 15:46, Jean Guyader wrote:
> On 6 August 2012 09:10, Jan Beulich <JBeulich@suse.com> wrote:
>>>>> On 03.08.12 at 21:50, Jean Guyader <jean.guyader@citrix.com> wrote:
>>> VIRQ_XC_RESERVED was reserved for V4V, but we have switched
>>> to event channels, so this placeholder is no longer required.
>> I'm fine with this change, but is a future re-use of the value indeed
>> not going to cause problems on XenServer (or wherever else this
>> patch set is coming from)?
>>
> That may need to be confirmed, but I don't think XenServer is using v4v
> yet; their plan is to pick it up from upstream. So removing this VIRQ
> should be fine.
>
> Jean

Yes - we don't use it, but we are interested in seeing XenClient
successfully upstream it.

~Andrew

>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

-- 
Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
T: +44 (0)1223 225 900, http://www.citrix.com


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 14:56:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:56:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOis-0008RF-3x; Mon, 06 Aug 2012 14:55:58 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SyOir-0008R8-9Q
	for xen-devel@lists.xensource.com; Mon, 06 Aug 2012 14:55:57 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1344264946!11516352!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzAwNzE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30769 invoked from network); 6 Aug 2012 14:55:47 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 14:55:47 -0000
X-IronPort-AV: E=Sophos;i="4.77,720,1336363200"; d="scan'208";a="204279788"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 10:55:45 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Mon, 6 Aug 2012 10:55:46 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1SyOHO-0002zY-FV;
	Mon, 06 Aug 2012 15:27:34 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <linux-kernel@vger.kernel.org>
Date: Mon, 6 Aug 2012 15:27:20 +0100
Message-ID: <1344263246-28036-17-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, konrad.wilk@oracle.com,
	catalin.marinas@arm.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	tim@xen.org, linux-arm-kernel@lists.infradead.org
Subject: [Xen-devel] [PATCH v2 17/23] xen/arm: implement
	alloc/free_xenballooned_pages with alloc_pages/kfree
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Only until we get the balloon driver to work.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 arch/arm/xen/enlighten.c |   18 ++++++++++++++++++
 1 files changed, 18 insertions(+), 0 deletions(-)

diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
index 87b17f0..c244583 100644
--- a/arch/arm/xen/enlighten.c
+++ b/arch/arm/xen/enlighten.c
@@ -140,3 +140,21 @@ static int __init xen_init_events(void)
 	return 0;
 }
 postcore_initcall(xen_init_events);
+
+/* XXX: only until balloon is properly working */
+int alloc_xenballooned_pages(int nr_pages, struct page **pages, bool highmem)
+{
+	*pages = alloc_pages(highmem ? GFP_HIGHUSER : GFP_KERNEL,
+			get_order(nr_pages));
+	if (*pages == NULL)
+		return -ENOMEM;
+	return 0;
+}
+EXPORT_SYMBOL_GPL(alloc_xenballooned_pages);
+
+void free_xenballooned_pages(int nr_pages, struct page **pages)
+{
+	kfree(*pages);
+	*pages = NULL;
+}
+EXPORT_SYMBOL_GPL(free_xenballooned_pages);
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 14:56:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:56:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOix-0008SH-1V; Mon, 06 Aug 2012 14:56:03 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SyOiv-0008RA-61
	for xen-devel@lists.xensource.com; Mon, 06 Aug 2012 14:56:01 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1344264946!11516352!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzAwNzE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31041 invoked from network); 6 Aug 2012 14:55:52 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 14:55:52 -0000
X-IronPort-AV: E=Sophos;i="4.77,720,1336363200"; d="scan'208";a="204279800"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 10:55:51 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Mon, 6 Aug 2012 10:55:52 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1SyOHO-0002zY-Gl;
	Mon, 06 Aug 2012 15:27:34 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <linux-kernel@vger.kernel.org>
Date: Mon, 6 Aug 2012 15:27:22 +0100
Message-ID: <1344263246-28036-19-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, konrad.wilk@oracle.com,
	catalin.marinas@arm.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	tim@xen.org, linux-arm-kernel@lists.infradead.org
Subject: [Xen-devel] [PATCH v2 19/23] xen/arm: compile blkfront and blkback
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 drivers/block/xen-blkback/blkback.c  |    1 +
 include/xen/interface/io/protocols.h |    3 +++
 2 files changed, 4 insertions(+), 0 deletions(-)

diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
index 73f196c..63dd5b9 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -42,6 +42,7 @@
 
 #include <xen/events.h>
 #include <xen/page.h>
+#include <xen/xen.h>
 #include <asm/xen/hypervisor.h>
 #include <asm/xen/hypercall.h>
 #include "common.h"
diff --git a/include/xen/interface/io/protocols.h b/include/xen/interface/io/protocols.h
index 01fc8ae..0eafaf2 100644
--- a/include/xen/interface/io/protocols.h
+++ b/include/xen/interface/io/protocols.h
@@ -5,6 +5,7 @@
 #define XEN_IO_PROTO_ABI_X86_64     "x86_64-abi"
 #define XEN_IO_PROTO_ABI_IA64       "ia64-abi"
 #define XEN_IO_PROTO_ABI_POWERPC64  "powerpc64-abi"
+#define XEN_IO_PROTO_ABI_ARM        "arm-abi"
 
 #if defined(__i386__)
 # define XEN_IO_PROTO_ABI_NATIVE XEN_IO_PROTO_ABI_X86_32
@@ -14,6 +15,8 @@
 # define XEN_IO_PROTO_ABI_NATIVE XEN_IO_PROTO_ABI_IA64
 #elif defined(__powerpc64__)
 # define XEN_IO_PROTO_ABI_NATIVE XEN_IO_PROTO_ABI_POWERPC64
+#elif defined(__arm__)
+# define XEN_IO_PROTO_ABI_NATIVE XEN_IO_PROTO_ABI_ARM
 #else
 # error arch fixup needed here
 #endif
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 14:56:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:56:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOiw-0008S5-Kv; Mon, 06 Aug 2012 14:56:02 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SyOiv-0008Rb-0Y
	for xen-devel@lists.xensource.com; Mon, 06 Aug 2012 14:56:01 +0000
Received: from [85.158.139.83:59198] by server-5.bemta-5.messagelabs.com id
	60/20-02722-00BDF105; Mon, 06 Aug 2012 14:56:00 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-13.tower-182.messagelabs.com!1344264958!29930953!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjI4NDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28444 invoked from network); 6 Aug 2012 14:55:59 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 14:55:59 -0000
X-IronPort-AV: E=Sophos;i="4.77,720,1336363200"; d="scan'208";a="33702043"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 10:55:57 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Mon, 6 Aug 2012 10:55:57 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1SyOHO-0002zY-If;
	Mon, 06 Aug 2012 15:27:34 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <linux-kernel@vger.kernel.org>
Date: Mon, 6 Aug 2012 15:27:25 +0100
Message-ID: <1344263246-28036-22-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, konrad.wilk@oracle.com,
	catalin.marinas@arm.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	tim@xen.org, Russell King <linux@arm.linux.org.uk>,
	linux-arm-kernel@lists.infradead.org
Subject: [Xen-devel] [PATCH v2 22/23] arm/v2m: initialize arch_timers even
	if v2m_timer is not present
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
CC: Russell King <linux@arm.linux.org.uk>
---
 arch/arm/mach-vexpress/v2m.c |   11 ++++++-----
 1 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/arch/arm/mach-vexpress/v2m.c b/arch/arm/mach-vexpress/v2m.c
index fde26ad..dee1451 100644
--- a/arch/arm/mach-vexpress/v2m.c
+++ b/arch/arm/mach-vexpress/v2m.c
@@ -637,16 +637,17 @@ static void __init v2m_dt_timer_init(void)
 	node = of_find_compatible_node(NULL, NULL, "arm,sp810");
 	v2m_sysctl_init(of_iomap(node, 0));
 
-	err = of_property_read_string(of_aliases, "arm,v2m_timer", &path);
-	if (WARN_ON(err))
-		return;
-	node = of_find_node_by_path(path);
-	v2m_sp804_init(of_iomap(node, 0), irq_of_parse_and_map(node, 0));
 	if (arch_timer_of_register() != 0)
 		twd_local_timer_of_register();
 
 	if (arch_timer_sched_clock_init() != 0)
 		versatile_sched_clock_init(v2m_sysreg_base + V2M_SYS_24MHZ, 24000000);
+
+	err = of_property_read_string(of_aliases, "arm,v2m_timer", &path);
+	if (WARN_ON(err))
+		return;
+	node = of_find_node_by_path(path);
+	v2m_sp804_init(of_iomap(node, 0), irq_of_parse_and_map(node, 0));
 }
 
 static struct sys_timer v2m_dt_timer = {
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 14:56:26 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:56:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOjD-00005Z-FG; Mon, 06 Aug 2012 14:56:19 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SyOjC-0008WD-Ct
	for xen-devel@lists.xensource.com; Mon, 06 Aug 2012 14:56:18 +0000
Received: from [85.158.138.51:44304] by server-6.bemta-3.messagelabs.com id
	67/6C-02321-11BDF105; Mon, 06 Aug 2012 14:56:17 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-8.tower-174.messagelabs.com!1344264973!30611592!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzAwNzE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
From xen-devel-bounces@lists.xen.org Mon Aug 06 14:56:26 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:56:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOjD-00005Z-FG; Mon, 06 Aug 2012 14:56:19 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SyOjC-0008WD-Ct
	for xen-devel@lists.xensource.com; Mon, 06 Aug 2012 14:56:18 +0000
Received: from [85.158.138.51:44304] by server-6.bemta-3.messagelabs.com id
	67/6C-02321-11BDF105; Mon, 06 Aug 2012 14:56:17 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-8.tower-174.messagelabs.com!1344264973!30611592!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzAwNzE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10690 invoked from network); 6 Aug 2012 14:56:17 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 14:56:17 -0000
X-IronPort-AV: E=Sophos;i="4.77,720,1336363200"; d="scan'208";a="204279844"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 10:56:10 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Mon, 6 Aug 2012 10:56:09 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1SyOHO-0002zY-B0;
	Mon, 06 Aug 2012 15:27:34 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <linux-kernel@vger.kernel.org>
Date: Mon, 6 Aug 2012 15:27:15 +0100
Message-ID: <1344263246-28036-12-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, konrad.wilk@oracle.com,
	catalin.marinas@arm.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	tim@xen.org, linux-arm-kernel@lists.infradead.org
Subject: [Xen-devel] [PATCH v2 12/23] xen/arm: introduce CONFIG_XEN on ARM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Changes in v2:

- mark Xen guest support on ARM as EXPERIMENTAL.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 arch/arm/Kconfig |   10 ++++++++++
 1 files changed, 10 insertions(+), 0 deletions(-)

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index a91009c..f14664b 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -1855,6 +1855,16 @@ config DEPRECATED_PARAM_STRUCT
 	  This was deprecated in 2001 and announced to live on for 5 years.
 	  Some old boot loaders still use this way.
 
+config XEN_DOM0
+	def_bool y
+
+config XEN
+	bool "Xen guest support on ARM (EXPERIMENTAL)"
+	depends on EXPERIMENTAL && ARM && OF
+	select XEN_DOM0
+	help
+	  Say Y if you want to run Linux in a Virtual Machine on Xen on ARM.
+
 endmenu
 
 menu "Boot options"
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 14:56:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:56:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOjG-00007t-T9; Mon, 06 Aug 2012 14:56:22 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SyOjF-00006Q-5c
	for xen-devel@lists.xensource.com; Mon, 06 Aug 2012 14:56:21 +0000
Received: from [85.158.139.83:60508] by server-10.bemta-5.messagelabs.com id
	A7/A0-02190-11BDF105; Mon, 06 Aug 2012 14:56:17 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-7.tower-182.messagelabs.com!1344264975!24815939!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjI4NDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27866 invoked from network); 6 Aug 2012 14:56:17 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 14:56:17 -0000
X-IronPort-AV: E=Sophos;i="4.77,720,1336363200"; d="scan'208";a="33702087"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 10:56:15 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Mon, 6 Aug 2012 10:56:15 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1SyOHO-0002zY-CX;
	Mon, 06 Aug 2012 15:27:34 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <linux-kernel@vger.kernel.org>
Date: Mon, 6 Aug 2012 15:27:17 +0100
Message-ID: <1344263246-28036-14-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, konrad.wilk@oracle.com,
	catalin.marinas@arm.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	tim@xen.org, linux-arm-kernel@lists.infradead.org
Subject: [Xen-devel] [PATCH v2 14/23] xen/arm: initialize grant_table on ARM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Initialize the grant table mapping at the address specified at index 0
in the DT under the /xen node.
After the grant table is initialized, call xenbus_probe (if not dom0).

Changes in v2:

- introduce GRANT_TABLE_PHYSADDR;
- remove unneeded initialization of boot_max_nr_grant_frames.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 arch/arm/xen/enlighten.c |   15 +++++++++++++++
 1 files changed, 15 insertions(+), 0 deletions(-)

diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
index ac3a2d6..e5e92d5 100644
--- a/arch/arm/xen/enlighten.c
+++ b/arch/arm/xen/enlighten.c
@@ -1,8 +1,12 @@
 #include <xen/xen.h>
+#include <xen/grant_table.h>
+#include <xen/hvm.h>
 #include <xen/interface/xen.h>
 #include <xen/interface/memory.h>
+#include <xen/interface/hvm/params.h>
 #include <xen/features.h>
 #include <xen/platform_pci.h>
+#include <xen/xenbus.h>
 #include <asm/xen/hypervisor.h>
 #include <asm/xen/hypercall.h>
 #include <linux/module.h>
@@ -45,17 +49,23 @@ EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_range);
  * - one interrupt for Xen event notifications;
  * - one memory region to map the grant_table.
  */
+
+#define GRANT_TABLE_PHYSADDR 0
 static int __init xen_guest_init(void)
 {
 	struct xen_add_to_physmap xatp;
 	static struct shared_info *shared_info_page = 0;
 	struct device_node *node;
+	struct resource res;
 
 	node = of_find_compatible_node(NULL, NULL, "arm,xen");
 	if (!node) {
 		pr_debug("No Xen support\n");
 		return 0;
 	}
+	if (of_address_to_resource(node, GRANT_TABLE_PHYSADDR, &res))
+		return 0;
+	xen_hvm_resume_frames = res.start >> PAGE_SHIFT;
 	xen_domain_type = XEN_HVM_DOMAIN;
 
 	xen_setup_features();
@@ -89,6 +99,11 @@ static int __init xen_guest_init(void)
 	 * is required to use VCPUOP_register_vcpu_info to place vcpu info
 	 * for secondary CPUs as they are brought up. */
 	per_cpu(xen_vcpu, 0) = &HYPERVISOR_shared_info->vcpu_info[0];
+
+	gnttab_init();
+	if (!xen_initial_domain())
+		xenbus_probe(NULL);
+
 	return 0;
 }
 core_initcall(xen_guest_init);
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 14:56:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:56:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOjJ-00008g-AY; Mon, 06 Aug 2012 14:56:25 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SyOjH-00006Q-0t
	for xen-devel@lists.xensource.com; Mon, 06 Aug 2012 14:56:23 +0000
Received: from [85.158.139.83:60891] by server-10.bemta-5.messagelabs.com id
	3F/B0-02190-61BDF105; Mon, 06 Aug 2012 14:56:22 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-7.tower-182.messagelabs.com!1344264975!24815939!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjI4NDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28169 invoked from network); 6 Aug 2012 14:56:21 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 14:56:21 -0000
X-IronPort-AV: E=Sophos;i="4.77,720,1336363200"; d="scan'208";a="33702106"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 10:56:21 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Mon, 6 Aug 2012 10:56:21 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1SyOHO-0002zY-EK;
	Mon, 06 Aug 2012 15:27:34 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <linux-kernel@vger.kernel.org>
Date: Mon, 6 Aug 2012 15:27:18 +0100
Message-ID: <1344263246-28036-15-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, konrad.wilk@oracle.com,
	catalin.marinas@arm.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	tim@xen.org, linux-arm-kernel@lists.infradead.org
Subject: [Xen-devel] [PATCH v2 15/23] xen/arm: receive Xen events on ARM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Compile events.c on ARM.
Parse, map and enable the IRQ described in the device tree (node "/xen")
that is used to receive event notifications.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 arch/arm/include/asm/xen/events.h |   18 ++++++++++++++++++
 arch/arm/xen/enlighten.c          |   33 +++++++++++++++++++++++++++++++++
 arch/x86/xen/enlighten.c          |    1 +
 arch/x86/xen/irq.c                |    1 +
 arch/x86/xen/xen-ops.h            |    1 -
 drivers/xen/events.c              |   17 ++++++++++++++---
 include/xen/events.h              |    2 ++
 7 files changed, 69 insertions(+), 4 deletions(-)
 create mode 100644 arch/arm/include/asm/xen/events.h

diff --git a/arch/arm/include/asm/xen/events.h b/arch/arm/include/asm/xen/events.h
new file mode 100644
index 0000000..94b4e90
--- /dev/null
+++ b/arch/arm/include/asm/xen/events.h
@@ -0,0 +1,18 @@
+#ifndef _ASM_ARM_XEN_EVENTS_H
+#define _ASM_ARM_XEN_EVENTS_H
+
+#include <asm/ptrace.h>
+
+enum ipi_vector {
+	XEN_PLACEHOLDER_VECTOR,
+
+	/* Xen IPIs go here */
+	XEN_NR_IPIS,
+};
+
+static inline int xen_irqs_disabled(struct pt_regs *regs)
+{
+	return raw_irqs_disabled_flags(regs->ARM_cpsr);
+}
+
+#endif /* _ASM_ARM_XEN_EVENTS_H */
diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
index e5e92d5..87b17f0 100644
--- a/arch/arm/xen/enlighten.c
+++ b/arch/arm/xen/enlighten.c
@@ -1,4 +1,5 @@
 #include <xen/xen.h>
+#include <xen/events.h>
 #include <xen/grant_table.h>
 #include <xen/hvm.h>
 #include <xen/interface/xen.h>
@@ -9,6 +10,8 @@
 #include <xen/xenbus.h>
 #include <asm/xen/hypervisor.h>
 #include <asm/xen/hypercall.h>
+#include <linux/interrupt.h>
+#include <linux/irqreturn.h>
 #include <linux/module.h>
 #include <linux/of.h>
 #include <linux/of_irq.h>
@@ -33,6 +36,8 @@ EXPORT_SYMBOL_GPL(xen_have_vector_callback);
 int xen_platform_pci_unplug = XEN_UNPLUG_ALL;
 EXPORT_SYMBOL_GPL(xen_platform_pci_unplug);
 
+static __read_mostly int xen_events_irq = -1;
+
 int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
 			       unsigned long addr,
 			       unsigned long mfn, int nr,
@@ -66,6 +71,9 @@ static int __init xen_guest_init(void)
 	if (of_address_to_resource(node, GRANT_TABLE_PHYSADDR, &res))
 		return 0;
 	xen_hvm_resume_frames = res.start >> PAGE_SHIFT;
+	xen_events_irq = irq_of_parse_and_map(node, 0);
+	pr_info("Xen support found, events_irq=%d gnttab_frame_pfn=%lx\n",
+			xen_events_irq, xen_hvm_resume_frames);
 	xen_domain_type = XEN_HVM_DOMAIN;
 
 	xen_setup_features();
@@ -107,3 +115,28 @@ static int __init xen_guest_init(void)
 	return 0;
 }
 core_initcall(xen_guest_init);
+
+static irqreturn_t xen_arm_callback(int irq, void *arg)
+{
+	xen_hvm_evtchn_do_upcall();
+	return IRQ_HANDLED;
+}
+
+static int __init xen_init_events(void)
+{
+	if (!xen_domain() || xen_events_irq < 0)
+		return -ENODEV;
+
+	xen_init_IRQ();
+
+	if (request_percpu_irq(xen_events_irq, xen_arm_callback,
+			"events", xen_vcpu)) {
+		pr_err("Error requesting IRQ %d\n", xen_events_irq);
+		return -EINVAL;
+	}
+
+	enable_percpu_irq(xen_events_irq, 0);
+
+	return 0;
+}
+postcore_initcall(xen_init_events);
diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index ff962d4..9f8b0ef 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -33,6 +33,7 @@
 #include <linux/memblock.h>
 
 #include <xen/xen.h>
+#include <xen/events.h>
 #include <xen/interface/xen.h>
 #include <xen/interface/version.h>
 #include <xen/interface/physdev.h>
diff --git a/arch/x86/xen/irq.c b/arch/x86/xen/irq.c
index 1573376..01a4dc0 100644
--- a/arch/x86/xen/irq.c
+++ b/arch/x86/xen/irq.c
@@ -5,6 +5,7 @@
 #include <xen/interface/xen.h>
 #include <xen/interface/sched.h>
 #include <xen/interface/vcpu.h>
+#include <xen/events.h>
 
 #include <asm/xen/hypercall.h>
 #include <asm/xen/hypervisor.h>
diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h
index 202d4c1..2368295 100644
--- a/arch/x86/xen/xen-ops.h
+++ b/arch/x86/xen/xen-ops.h
@@ -35,7 +35,6 @@ void xen_set_pat(u64);
 
 char * __init xen_memory_setup(void);
 void __init xen_arch_setup(void);
-void __init xen_init_IRQ(void);
 void xen_enable_sysenter(void);
 void xen_enable_syscall(void);
 void xen_vcpu_restore(void);
diff --git a/drivers/xen/events.c b/drivers/xen/events.c
index 7595581..5ecb596 100644
--- a/drivers/xen/events.c
+++ b/drivers/xen/events.c
@@ -31,14 +31,16 @@
 #include <linux/irqnr.h>
 #include <linux/pci.h>
 
+#ifdef CONFIG_X86
 #include <asm/desc.h>
 #include <asm/ptrace.h>
 #include <asm/irq.h>
 #include <asm/idle.h>
 #include <asm/io_apic.h>
-#include <asm/sync_bitops.h>
 #include <asm/xen/page.h>
 #include <asm/xen/pci.h>
+#endif
+#include <asm/sync_bitops.h>
 #include <asm/xen/hypercall.h>
 #include <asm/xen/hypervisor.h>
 
@@ -50,6 +52,9 @@
 #include <xen/interface/event_channel.h>
 #include <xen/interface/hvm/hvm_op.h>
 #include <xen/interface/hvm/params.h>
+#include <xen/interface/physdev.h>
+#include <xen/interface/sched.h>
+#include <asm/hw_irq.h>
 
 /*
  * This lock protects updates to the following mapping and reference-count
@@ -1374,7 +1379,9 @@ void xen_evtchn_do_upcall(struct pt_regs *regs)
 {
 	struct pt_regs *old_regs = set_irq_regs(regs);
 
+#ifdef CONFIG_X86
 	exit_idle();
+#endif
 	irq_enter();
 
 	__xen_evtchn_do_upcall();
@@ -1783,9 +1790,9 @@ void xen_callback_vector(void)
 void xen_callback_vector(void) {}
 #endif
 
-void __init xen_init_IRQ(void)
+void xen_init_IRQ(void)
 {
-	int i, rc;
+	int i;
 
 	evtchn_to_irq = kcalloc(NR_EVENT_CHANNELS, sizeof(*evtchn_to_irq),
 				    GFP_KERNEL);
@@ -1801,6 +1808,7 @@ void __init xen_init_IRQ(void)
 
 	pirq_needs_eoi = pirq_needs_eoi_flag;
 
+#ifdef CONFIG_X86
 	if (xen_hvm_domain()) {
 		xen_callback_vector();
 		native_init_IRQ();
@@ -1808,6 +1816,7 @@ void __init xen_init_IRQ(void)
 		 * __acpi_register_gsi can point at the right function */
 		pci_xen_hvm_init();
 	} else {
+		int rc;
 		struct physdev_pirq_eoi_gmfn eoi_gmfn;
 
 		irq_ctx_init(smp_processor_id());
@@ -1823,4 +1832,6 @@ void __init xen_init_IRQ(void)
 		} else
 			pirq_needs_eoi = pirq_check_eoi_map;
 	}
+#endif
 }
+EXPORT_SYMBOL_GPL(xen_init_IRQ);
diff --git a/include/xen/events.h b/include/xen/events.h
index 04399b2..c6bfe01 100644
--- a/include/xen/events.h
+++ b/include/xen/events.h
@@ -109,4 +109,6 @@ int xen_irq_from_gsi(unsigned gsi);
 /* Determine whether to ignore this IRQ if it is passed to a guest. */
 int xen_test_irq_shared(int irq);
 
+/* initialize Xen IRQ subsystem */
+void xen_init_IRQ(void);
 #endif	/* _XEN_EVENTS_H */
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

@@ -35,7 +35,6 @@ void xen_set_pat(u64);
 
 char * __init xen_memory_setup(void);
 void __init xen_arch_setup(void);
-void __init xen_init_IRQ(void);
 void xen_enable_sysenter(void);
 void xen_enable_syscall(void);
 void xen_vcpu_restore(void);
diff --git a/drivers/xen/events.c b/drivers/xen/events.c
index 7595581..5ecb596 100644
--- a/drivers/xen/events.c
+++ b/drivers/xen/events.c
@@ -31,14 +31,16 @@
 #include <linux/irqnr.h>
 #include <linux/pci.h>
 
+#ifdef CONFIG_X86
 #include <asm/desc.h>
 #include <asm/ptrace.h>
 #include <asm/irq.h>
 #include <asm/idle.h>
 #include <asm/io_apic.h>
-#include <asm/sync_bitops.h>
 #include <asm/xen/page.h>
 #include <asm/xen/pci.h>
+#endif
+#include <asm/sync_bitops.h>
 #include <asm/xen/hypercall.h>
 #include <asm/xen/hypervisor.h>
 
@@ -50,6 +52,9 @@
 #include <xen/interface/event_channel.h>
 #include <xen/interface/hvm/hvm_op.h>
 #include <xen/interface/hvm/params.h>
+#include <xen/interface/physdev.h>
+#include <xen/interface/sched.h>
+#include <asm/hw_irq.h>
 
 /*
  * This lock protects updates to the following mapping and reference-count
@@ -1374,7 +1379,9 @@ void xen_evtchn_do_upcall(struct pt_regs *regs)
 {
 	struct pt_regs *old_regs = set_irq_regs(regs);
 
+#ifdef CONFIG_X86
 	exit_idle();
+#endif
 	irq_enter();
 
 	__xen_evtchn_do_upcall();
@@ -1783,9 +1790,9 @@ void xen_callback_vector(void)
 void xen_callback_vector(void) {}
 #endif
 
-void __init xen_init_IRQ(void)
+void xen_init_IRQ(void)
 {
-	int i, rc;
+	int i;
 
 	evtchn_to_irq = kcalloc(NR_EVENT_CHANNELS, sizeof(*evtchn_to_irq),
 				    GFP_KERNEL);
@@ -1801,6 +1808,7 @@ void __init xen_init_IRQ(void)
 
 	pirq_needs_eoi = pirq_needs_eoi_flag;
 
+#ifdef CONFIG_X86
 	if (xen_hvm_domain()) {
 		xen_callback_vector();
 		native_init_IRQ();
@@ -1808,6 +1816,7 @@ void __init xen_init_IRQ(void)
 		 * __acpi_register_gsi can point at the right function */
 		pci_xen_hvm_init();
 	} else {
+		int rc;
 		struct physdev_pirq_eoi_gmfn eoi_gmfn;
 
 		irq_ctx_init(smp_processor_id());
@@ -1823,4 +1832,6 @@ void __init xen_init_IRQ(void)
 		} else
 			pirq_needs_eoi = pirq_check_eoi_map;
 	}
+#endif
 }
+EXPORT_SYMBOL_GPL(xen_init_IRQ);
diff --git a/include/xen/events.h b/include/xen/events.h
index 04399b2..c6bfe01 100644
--- a/include/xen/events.h
+++ b/include/xen/events.h
@@ -109,4 +109,6 @@ int xen_irq_from_gsi(unsigned gsi);
 /* Determine whether to ignore this IRQ if it is passed to a guest. */
 int xen_test_irq_shared(int irq);
 
+/* initialize Xen IRQ subsystem */
+void xen_init_IRQ(void);
 #endif	/* _XEN_EVENTS_H */
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 14:56:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:56:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOjP-0000C3-Nj; Mon, 06 Aug 2012 14:56:31 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SyOjO-0000B6-SL
	for xen-devel@lists.xensource.com; Mon, 06 Aug 2012 14:56:31 +0000
Received: from [85.158.143.99:9808] by server-1.bemta-4.messagelabs.com id
	E5/FC-24392-E1BDF105; Mon, 06 Aug 2012 14:56:30 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-13.tower-216.messagelabs.com!1344264987!29584441!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjI4NDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30636 invoked from network); 6 Aug 2012 14:56:29 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 14:56:29 -0000
X-IronPort-AV: E=Sophos;i="4.77,720,1336363200"; d="scan'208";a="33702119"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 10:56:27 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Mon, 6 Aug 2012 10:56:27 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1SyOHO-0002zY-Bi;
	Mon, 06 Aug 2012 15:27:34 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <linux-kernel@vger.kernel.org>
Date: Mon, 6 Aug 2012 15:27:16 +0100
Message-ID: <1344263246-28036-13-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, konrad.wilk@oracle.com,
	catalin.marinas@arm.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	tim@xen.org, linux-arm-kernel@lists.infradead.org
Subject: [Xen-devel] [PATCH v2 13/23] xen/arm: get privilege status
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Use Xen features to determine whether the domain is privileged.

XENFEAT_dom0 was introduced by changeset 23735 in xen-unstable.hg.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 arch/arm/xen/enlighten.c         |    7 +++++++
 include/xen/interface/features.h |    3 +++
 2 files changed, 10 insertions(+), 0 deletions(-)

diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
index 102d823..ac3a2d6 100644
--- a/arch/arm/xen/enlighten.c
+++ b/arch/arm/xen/enlighten.c
@@ -1,6 +1,7 @@
 #include <xen/xen.h>
 #include <xen/interface/xen.h>
 #include <xen/interface/memory.h>
+#include <xen/features.h>
 #include <xen/platform_pci.h>
 #include <asm/xen/hypervisor.h>
 #include <asm/xen/hypercall.h>
@@ -57,6 +58,12 @@ static int __init xen_guest_init(void)
 	}
 	xen_domain_type = XEN_HVM_DOMAIN;
 
+	xen_setup_features();
+	if (xen_feature(XENFEAT_dom0))
+		xen_start_info->flags |= SIF_INITDOMAIN|SIF_PRIVILEGED;
+	else
+		xen_start_info->flags &= ~(SIF_INITDOMAIN|SIF_PRIVILEGED);
+
 	if (!shared_info_page)
 		shared_info_page = (struct shared_info *)
 			get_zeroed_page(GFP_KERNEL);
diff --git a/include/xen/interface/features.h b/include/xen/interface/features.h
index b6ca39a..131a6cc 100644
--- a/include/xen/interface/features.h
+++ b/include/xen/interface/features.h
@@ -50,6 +50,9 @@
 /* x86: pirq can be used by HVM guests */
 #define XENFEAT_hvm_pirqs           10
 
+/* operation as Dom0 is supported */
+#define XENFEAT_dom0                      11
+
 #define XENFEAT_NR_SUBMAPS 1
 
 #endif /* __XEN_PUBLIC_FEATURES_H__ */
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 14:56:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:56:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOjS-0000E5-Aa; Mon, 06 Aug 2012 14:56:34 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SyOjR-0000Cr-39
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 14:56:33 +0000
Received: from [85.158.139.83:57136] by server-9.bemta-5.messagelabs.com id
	59/AC-01069-02BDF105; Mon, 06 Aug 2012 14:56:32 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-182.messagelabs.com!1344264991!27789872!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgxNjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25388 invoked from network); 6 Aug 2012 14:56:32 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-4.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 14:56:32 -0000
X-IronPort-AV: E=Sophos;i="4.77,720,1336348800"; d="scan'208";a="13869177"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 14:56:31 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Mon, 6 Aug 2012
	15:56:31 +0100
Message-ID: <1344264989.11339.47.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jean Guyader <jean.guyader@gmail.com>
Date: Mon, 6 Aug 2012 15:56:29 +0100
In-Reply-To: <CAEBdQ920zPHQuLng=ncPonFHoSnvgFb3rz4wo+MZP5y88GAAxA@mail.gmail.com>
References: <1344023454-31425-1-git-send-email-jean.guyader@citrix.com>
	<1344023454-31425-4-git-send-email-jean.guyader@citrix.com>
	<501F97F90200007800092CAC@nat28.tlf.novell.com>
	<CAEBdQ920zPHQuLng=ncPonFHoSnvgFb3rz4wo+MZP5y88GAAxA@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "Jean Guyader \(3P\)" <jean.guyader@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Ross Philipson <Ross.Philipson@citrix.com>,
	Jan Beulich <JBeulich@suse.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 3/5] xen: virq, remove VIRQ_XC_RESERVED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2012-08-06 at 15:46 +0100, Jean Guyader wrote:
> On 6 August 2012 09:10, Jan Beulich <JBeulich@suse.com> wrote:
> >>>> On 03.08.12 at 21:50, Jean Guyader <jean.guyader@citrix.com> wrote:
> >> VIRQ_XC_RESERVED was reserved for V4V but we have switched
> >> to event channels so this placeholder is no longer required.
> >
> > I'm fine with this change, but is a future re-use of the value indeed
> > not going to cause problems on XenServer (or wherever else this
> > patch set is coming from)?
> >
> 
> That may need to be confirmed but I don't think XenServer is using v4v
> yet.

I think Jan probably meant XenClient (i.e. that being the place where
v4v is already deployed).

There's no harm in keeping this number reserved indefinitely, with a
suitable comment, I think? The only reason not to would be if this
number space was limited, but I don't think that is the case with VIRQs.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 14:56:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:56:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOjY-0000IS-Oy; Mon, 06 Aug 2012 14:56:40 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SyOjW-0000Gj-FT
	for xen-devel@lists.xensource.com; Mon, 06 Aug 2012 14:56:38 +0000
Received: from [85.158.143.99:14192] by server-3.bemta-4.messagelabs.com id
	5A/15-01511-52BDF105; Mon, 06 Aug 2012 14:56:37 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1344264994!23451673!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjI4NDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12752 invoked from network); 6 Aug 2012 14:56:35 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 14:56:35 -0000
X-IronPort-AV: E=Sophos;i="4.77,720,1336363200"; d="scan'208";a="33702147"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 10:56:33 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Mon, 6 Aug 2012 10:56:33 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1SyOHO-0002zY-HJ;
	Mon, 06 Aug 2012 15:27:34 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <linux-kernel@vger.kernel.org>
Date: Mon, 6 Aug 2012 15:27:23 +0100
Message-ID: <1344263246-28036-20-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, konrad.wilk@oracle.com,
	catalin.marinas@arm.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	tim@xen.org, linux-arm-kernel@lists.infradead.org
Subject: [Xen-devel] [PATCH v2 20/23] xen/arm: compile netback
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 arch/arm/include/asm/xen/hypercall.h |   19 +++++++++++++++++++
 drivers/net/xen-netback/netback.c    |    1 +
 drivers/net/xen-netfront.c           |    1 +
 3 files changed, 21 insertions(+), 0 deletions(-)

diff --git a/arch/arm/include/asm/xen/hypercall.h b/arch/arm/include/asm/xen/hypercall.h
index 4ac0624..8a82325 100644
--- a/arch/arm/include/asm/xen/hypercall.h
+++ b/arch/arm/include/asm/xen/hypercall.h
@@ -47,4 +47,23 @@ unsigned long HYPERVISOR_hvm_op(int op, void *arg);
 int HYPERVISOR_memory_op(unsigned int cmd, void *arg);
 int HYPERVISOR_physdev_op(int cmd, void *arg);
 
+static inline void
+MULTI_update_va_mapping(struct multicall_entry *mcl, unsigned long va,
+			unsigned int new_val, unsigned long flags)
+{
+	BUG();
+}
+
+static inline void
+MULTI_mmu_update(struct multicall_entry *mcl, struct mmu_update *req,
+		 int count, int *success_count, domid_t domid)
+{
+	BUG();
+}
+
+static inline int
+HYPERVISOR_multicall(void *call_list, int nr_calls)
+{
+	BUG();
+}
 #endif /* _ASM_ARM_XEN_HYPERCALL_H */
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index f4a6fca..ab4f81c 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -40,6 +40,7 @@
 
 #include <net/tcp.h>
 
+#include <xen/xen.h>
 #include <xen/events.h>
 #include <xen/interface/memory.h>
 
diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index 3089990..bf4ba2b 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -43,6 +43,7 @@
 #include <linux/slab.h>
 #include <net/ip.h>
 
+#include <asm/xen/page.h>
 #include <xen/xen.h>
 #include <xen/xenbus.h>
 #include <xen/events.h>
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 14:56:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:56:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOjh-0000Nt-6J; Mon, 06 Aug 2012 14:56:49 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SyOjg-0000N0-HJ
	for xen-devel@lists.xensource.com; Mon, 06 Aug 2012 14:56:48 +0000
Received: from [85.158.143.99:13390] by server-1.bemta-4.messagelabs.com id
	4A/5D-24392-F2BDF105; Mon, 06 Aug 2012 14:56:47 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-9.tower-216.messagelabs.com!1344265006!30516872!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjI4NDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17227 invoked from network); 6 Aug 2012 14:56:47 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 14:56:47 -0000
X-IronPort-AV: E=Sophos;i="4.77,720,1336363200"; d="scan'208";a="33702183"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 10:56:45 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Mon, 6 Aug 2012 10:56:45 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1SyOHO-0002zY-I4;
	Mon, 06 Aug 2012 15:27:34 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <linux-kernel@vger.kernel.org>
Date: Mon, 6 Aug 2012 15:27:24 +0100
Message-ID: <1344263246-28036-21-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, konrad.wilk@oracle.com,
	catalin.marinas@arm.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	tim@xen.org, linux-arm-kernel@lists.infradead.org
Subject: [Xen-devel] [PATCH v2 21/23] xen: update xen_add_to_physmap
	interface
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Update struct xen_add_to_physmap to be in sync with Xen's version of the
structure.
The size field was introduced by:

changeset:   24164:707d27fe03e7
user:        Jean Guyader <jean.guyader@eu.citrix.com>
date:        Fri Nov 18 13:42:08 2011 +0000
summary:     mm: New XENMEM space, XENMAPSPACE_gmfn_range

According to the comment:

"This new field .size is located in the 16 bits padding between .domid
and .space in struct xen_add_to_physmap to stay compatible with older
versions."

Changes in v2:

- remove erroneous comment in the commit message.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 include/xen/interface/memory.h |    3 +++
 1 files changed, 3 insertions(+), 0 deletions(-)

diff --git a/include/xen/interface/memory.h b/include/xen/interface/memory.h
index b5c3098..b66d04c 100644
--- a/include/xen/interface/memory.h
+++ b/include/xen/interface/memory.h
@@ -163,6 +163,9 @@ struct xen_add_to_physmap {
     /* Which domain to change the mapping for. */
     domid_t domid;
 
+    /* Number of pages to go through for gmfn_range */
+    uint16_t    size;
+
     /* Source mapping space. */
 #define XENMAPSPACE_shared_info 0 /* shared info page */
 #define XENMAPSPACE_grant_table 1 /* grant table page */
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 14:56:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:56:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOjl-0000Qg-KK; Mon, 06 Aug 2012 14:56:53 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SyOjk-0000N0-Hx
	for xen-devel@lists.xensource.com; Mon, 06 Aug 2012 14:56:52 +0000
Received: from [85.158.143.99:13596] by server-1.bemta-4.messagelabs.com id
	70/7D-24392-43BDF105; Mon, 06 Aug 2012 14:56:52 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-9.tower-216.messagelabs.com!1344265006!30516872!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjI4NDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17413 invoked from network); 6 Aug 2012 14:56:51 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 14:56:51 -0000
X-IronPort-AV: E=Sophos;i="4.77,720,1336363200"; d="scan'208";a="33702198"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 10:56:51 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Mon, 6 Aug 2012 10:56:51 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1SyOHO-0002zY-JF;
	Mon, 06 Aug 2012 15:27:34 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <linux-kernel@vger.kernel.org>
Date: Mon, 6 Aug 2012 15:27:26 +0100
Message-ID: <1344263246-28036-23-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, konrad.wilk@oracle.com,
	catalin.marinas@arm.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	tim@xen.org, linux-arm-kernel@lists.infradead.org
Subject: [Xen-devel] [PATCH v2 23/23] [HACK] xen/arm: implement
	xen_remap_domain_mfn_range
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Ian Campbell <Ian.Campbell@citrix.com>

Do not apply!

This is a simple, hacky implementation of xen_remap_domain_mfn_range,
using XENMAPSPACE_gmfn_foreign.

It should use the same interface as hybrid x86.

Changes in v2:

- retain binary compatibility in xen_add_to_physmap: use a union.

Signed-off-by: Ian Campbell <Ian.Campbell@citrix.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 arch/arm/xen/enlighten.c       |   79 +++++++++++++++++++++++++++++++++++++++-
 drivers/xen/privcmd.c          |   16 +++++----
 drivers/xen/xenfs/super.c      |    7 ++++
 include/xen/interface/memory.h |   15 ++++++--
 4 files changed, 105 insertions(+), 12 deletions(-)

diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
index c244583..20ca1e4 100644
--- a/arch/arm/xen/enlighten.c
+++ b/arch/arm/xen/enlighten.c
@@ -16,6 +16,10 @@
 #include <linux/of.h>
 #include <linux/of_irq.h>
 #include <linux/of_address.h>
+#include <linux/mm.h>
+#include <linux/ioport.h>
+
+#include <asm/pgtable.h>
 
 struct start_info _xen_start_info;
 struct start_info *xen_start_info = &_xen_start_info;
@@ -38,12 +42,85 @@ EXPORT_SYMBOL_GPL(xen_platform_pci_unplug);
 
 static __read_mostly int xen_events_irq = -1;
 
+#define FOREIGN_MAP_BUFFER 0x90000000UL
+#define FOREIGN_MAP_BUFFER_SIZE 0x10000000UL
+struct resource foreign_map_resource = {
+	.start = FOREIGN_MAP_BUFFER,
+	.end = FOREIGN_MAP_BUFFER + FOREIGN_MAP_BUFFER_SIZE,
+	.name = "Xen foreign map buffer",
+	.flags = 0,
+};
+
+static unsigned long foreign_map_buffer_pfn = FOREIGN_MAP_BUFFER >> PAGE_SHIFT;
+
+struct remap_data {
+	struct mm_struct *mm;
+	unsigned long mfn;
+	pgprot_t prot;
+};
+
+static int remap_area_mfn_pte_fn(pte_t *ptep, pgtable_t token,
+				 unsigned long addr, void *data)
+{
+	struct remap_data *rmd = data;
+	pte_t pte = pfn_pte(rmd->mfn, rmd->prot);
+
+	if (rmd->mfn < 0x90010)
+		pr_crit("%s: ptep %p addr %#lx => %#x / %#lx\n",
+		       __func__, ptep, addr, pte_val(pte), rmd->mfn);
+
+	set_pte_at(rmd->mm, addr, ptep, pte);
+
+	rmd->mfn++;
+	return 0;
+}
+
 int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
 			       unsigned long addr,
 			       unsigned long mfn, int nr,
 			       pgprot_t prot, unsigned domid)
 {
-	return -ENOSYS;
+	int i, rc = 0;
+	struct remap_data rmd = {
+		.mm = vma->vm_mm,
+		.prot = prot,
+	};
+	struct xen_add_to_physmap xatp = {
+		.domid = DOMID_SELF,
+		.space = XENMAPSPACE_gmfn_foreign,
+
+		.foreign_domid = domid,
+	};
+
+	if (foreign_map_buffer_pfn + nr > ((FOREIGN_MAP_BUFFER +
+					FOREIGN_MAP_BUFFER_SIZE)>>PAGE_SHIFT)) {
+		pr_crit("Ran out of foreign map buffer space\n");
+		return -EBUSY;
+	}
+
+	for (i = 0; i < nr; i++) {
+		xatp.idx = mfn + i;
+		xatp.gpfn = foreign_map_buffer_pfn + i;
+		rc = HYPERVISOR_memory_op(XENMEM_add_to_physmap, &xatp);
+		if (rc != 0) {
+			pr_crit("foreign map add_to_physmap failed, err=%d\n", rc);
+			goto out;
+		}
+	}
+
+	rmd.mfn = foreign_map_buffer_pfn;
+	rc = apply_to_page_range(vma->vm_mm,
+				 addr,
+				 (unsigned long)nr << PAGE_SHIFT,
+				 remap_area_mfn_pte_fn, &rmd);
+	if (rc != 0) {
+		pr_crit("apply_to_page_range failed rc=%d\n", rc);
+		goto out;
+	}
+
+	foreign_map_buffer_pfn += nr;
+out:
+	return rc;
 }
 EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_range);
 
diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
index 85226cb..3e15c22 100644
--- a/drivers/xen/privcmd.c
+++ b/drivers/xen/privcmd.c
@@ -20,6 +20,8 @@
 #include <linux/pagemap.h>
 #include <linux/seq_file.h>
 #include <linux/miscdevice.h>
+#include <linux/resource.h>
+#include <linux/ioport.h>
 
 #include <asm/pgalloc.h>
 #include <asm/pgtable.h>
@@ -196,9 +198,6 @@ static long privcmd_ioctl_mmap(void __user *udata)
 	LIST_HEAD(pagelist);
 	struct mmap_mfn_state state;
 
-	if (!xen_initial_domain())
-		return -EPERM;
-
 	if (copy_from_user(&mmapcmd, udata, sizeof(mmapcmd)))
 		return -EFAULT;
 
@@ -286,9 +285,6 @@ static long privcmd_ioctl_mmap_batch(void __user *udata)
 	LIST_HEAD(pagelist);
 	struct mmap_batch_state state;
 
-	if (!xen_initial_domain())
-		return -EPERM;
-
 	if (copy_from_user(&m, udata, sizeof(m)))
 		return -EFAULT;
 
@@ -365,6 +361,11 @@ static long privcmd_ioctl(struct file *file,
 	return ret;
 }
 
+static void privcmd_close(struct vm_area_struct *vma)
+{
+	/* TODO: unmap VMA */
+}
+
 static int privcmd_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
 {
 	printk(KERN_DEBUG "privcmd_fault: vma=%p %lx-%lx, pgoff=%lx, uv=%p\n",
@@ -375,7 +376,8 @@ static int privcmd_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
 }
 
 static struct vm_operations_struct privcmd_vm_ops = {
-	.fault = privcmd_fault
+	.fault = privcmd_fault,
+	.close = privcmd_close,
 };
 
 static int privcmd_mmap(struct file *file, struct vm_area_struct *vma)
diff --git a/drivers/xen/xenfs/super.c b/drivers/xen/xenfs/super.c
index a84b53c..edbe22f 100644
--- a/drivers/xen/xenfs/super.c
+++ b/drivers/xen/xenfs/super.c
@@ -12,6 +12,7 @@
 #include <linux/module.h>
 #include <linux/fs.h>
 #include <linux/magic.h>
+#include <linux/ioport.h>
 
 #include <xen/xen.h>
 
@@ -80,6 +81,8 @@ static const struct file_operations capabilities_file_ops = {
 	.llseek = default_llseek,
 };
 
+extern struct resource foreign_map_resource;
+
 static int xenfs_fill_super(struct super_block *sb, void *data, int silent)
 {
 	static struct tree_descr xenfs_files[] = {
@@ -100,6 +103,10 @@ static int xenfs_fill_super(struct super_block *sb, void *data, int silent)
 				  &xsd_kva_file_ops, NULL, S_IRUSR|S_IWUSR);
 		xenfs_create_file(sb, sb->s_root, "xsd_port",
 				  &xsd_port_file_ops, NULL, S_IRUSR|S_IWUSR);
+		rc = request_resource(&iomem_resource, &foreign_map_resource);
+		if (rc < 0)
+			pr_crit("failed to register foreign map resource\n");
+		rc = 0; /* ignore */
 	}
 
 	return rc;
diff --git a/include/xen/interface/memory.h b/include/xen/interface/memory.h
index b66d04c..dd2ffe0 100644
--- a/include/xen/interface/memory.h
+++ b/include/xen/interface/memory.h
@@ -163,12 +163,19 @@ struct xen_add_to_physmap {
     /* Which domain to change the mapping for. */
     domid_t domid;
 
-    /* Number of pages to go through for gmfn_range */
-    uint16_t    size;
+    union {
+        /* Number of pages to go through for gmfn_range */
+        uint16_t    size;
+        /* IFF gmfn_foreign */
+        domid_t foreign_domid;
+    };
 
     /* Source mapping space. */
-#define XENMAPSPACE_shared_info 0 /* shared info page */
-#define XENMAPSPACE_grant_table 1 /* grant table page */
+#define XENMAPSPACE_shared_info  0 /* shared info page */
+#define XENMAPSPACE_grant_table  1 /* grant table page */
+#define XENMAPSPACE_gmfn         2 /* GMFN */
+#define XENMAPSPACE_gmfn_range   3 /* GMFN range */
+#define XENMAPSPACE_gmfn_foreign 4 /* GMFN from another guest */
     unsigned int space;
 
     /* Index into source mapping space. */
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 14:57:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:57:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOju-0000W6-Gt; Mon, 06 Aug 2012 14:57:02 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SyOjr-0000Uk-VB
	for xen-devel@lists.xensource.com; Mon, 06 Aug 2012 14:57:00 +0000
Received: from [85.158.138.51:51234] by server-12.bemta-3.messagelabs.com id
	B3/1F-21301-B3BDF105; Mon, 06 Aug 2012 14:56:59 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-5.tower-174.messagelabs.com!1344265017!30732906!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjI4NDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9504 invoked from network); 6 Aug 2012 14:56:58 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 14:56:58 -0000
X-IronPort-AV: E=Sophos;i="4.77,720,1336363200"; d="scan'208";a="33702215"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 10:56:57 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Mon, 6 Aug 2012 10:56:57 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1SyOHO-0002zY-AM;
	Mon, 06 Aug 2012 15:27:34 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <linux-kernel@vger.kernel.org>
Date: Mon, 6 Aug 2012 15:27:14 +0100
Message-ID: <1344263246-28036-11-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, konrad.wilk@oracle.com,
	catalin.marinas@arm.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	tim@xen.org, linux-arm-kernel@lists.infradead.org
Subject: [Xen-devel] [PATCH v2 11/23] xen: do not compile manage, balloon,
	pci, acpi and cpu_hotplug on ARM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Changes in v2:

- make pci.o depend on CONFIG_PCI and acpi.o depend on CONFIG_ACPI.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 drivers/xen/Makefile |   11 ++++++++---
 1 files changed, 8 insertions(+), 3 deletions(-)

diff --git a/drivers/xen/Makefile b/drivers/xen/Makefile
index fc34886..bee02b2 100644
--- a/drivers/xen/Makefile
+++ b/drivers/xen/Makefile
@@ -1,11 +1,17 @@
-obj-y	+= grant-table.o features.o events.o manage.o balloon.o
+ifneq ($(CONFIG_ARM),y)
+obj-y	+= manage.o balloon.o
+obj-$(CONFIG_HOTPLUG_CPU)		+= cpu_hotplug.o
+endif
+obj-y	+= grant-table.o features.o events.o
 obj-y	+= xenbus/
 
 nostackp := $(call cc-option, -fno-stack-protector)
 CFLAGS_features.o			:= $(nostackp)
 
+obj-$(CONFIG_XEN_DOM0)			+= $(dom0-y)
+dom0-$(CONFIG_PCI) := pci.o
+dom0-$(CONFIG_ACPI) := acpi.o
 obj-$(CONFIG_BLOCK)			+= biomerge.o
-obj-$(CONFIG_HOTPLUG_CPU)		+= cpu_hotplug.o
 obj-$(CONFIG_XEN_XENCOMM)		+= xencomm.o
 obj-$(CONFIG_XEN_BALLOON)		+= xen-balloon.o
 obj-$(CONFIG_XEN_SELFBALLOONING)	+= xen-selfballoon.o
@@ -17,7 +23,6 @@ obj-$(CONFIG_XEN_SYS_HYPERVISOR)	+= sys-hypervisor.o
 obj-$(CONFIG_XEN_PVHVM)			+= platform-pci.o
 obj-$(CONFIG_XEN_TMEM)			+= tmem.o
 obj-$(CONFIG_SWIOTLB_XEN)		+= swiotlb-xen.o
-obj-$(CONFIG_XEN_DOM0)			+= pci.o acpi.o
 obj-$(CONFIG_XEN_PCIDEV_BACKEND)	+= xen-pciback/
 obj-$(CONFIG_XEN_PRIVCMD)		+= xen-privcmd.o
 obj-$(CONFIG_XEN_ACPI_PROCESSOR)	+= xen-acpi-processor.o
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 14:57:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:57:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOkG-0000k6-UQ; Mon, 06 Aug 2012 14:57:24 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SyOkF-0000gY-I6
	for xen-devel@lists.xensource.com; Mon, 06 Aug 2012 14:57:23 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1344264999!8795503!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzAwNzE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21272 invoked from network); 6 Aug 2012 14:56:40 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 14:56:40 -0000
X-IronPort-AV: E=Sophos;i="4.77,720,1336363200"; d="scan'208";a="204279898"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 10:56:39 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Mon, 6 Aug 2012 10:56:39 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1SyOHO-0002zY-GB;
	Mon, 06 Aug 2012 15:27:34 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <linux-kernel@vger.kernel.org>
Date: Mon, 6 Aug 2012 15:27:21 +0100
Message-ID: <1344263246-28036-18-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, konrad.wilk@oracle.com,
	catalin.marinas@arm.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	tim@xen.org, linux-arm-kernel@lists.infradead.org
Subject: [Xen-devel] [PATCH v2 18/23] xen: allow privcmd for HVM guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch removes the "return -ENOSYS" for auto_translated_physmap
guests from privcmd_mmap, thereby allowing ARM guests to issue privcmd
mmap calls. However, privcmd mmap calls will still fail for HVM and
hybrid guests on x86 because the xen_remap_domain_mfn_range
implementation is currently PV-only.

Changes in v2:

- better commit message;
- return -EINVAL from xen_remap_domain_mfn_range if
  auto_translated_physmap.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 arch/x86/xen/mmu.c    |    3 +++
 drivers/xen/privcmd.c |    4 ----
 2 files changed, 3 insertions(+), 4 deletions(-)

diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index 3a73785..885a223 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -2310,6 +2310,9 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
 	unsigned long range;
 	int err = 0;
 
+	if (xen_feature(XENFEAT_auto_translated_physmap))
+		return -EINVAL;
+
 	prot = __pgprot(pgprot_val(prot) | _PAGE_IOMAP);
 
 	BUG_ON(!((vma->vm_flags & (VM_PFNMAP | VM_RESERVED | VM_IO)) ==
diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
index ccee0f1..85226cb 100644
--- a/drivers/xen/privcmd.c
+++ b/drivers/xen/privcmd.c
@@ -380,10 +380,6 @@ static struct vm_operations_struct privcmd_vm_ops = {
 
 static int privcmd_mmap(struct file *file, struct vm_area_struct *vma)
 {
-	/* Unsupported for auto-translate guests. */
-	if (xen_feature(XENFEAT_auto_translated_physmap))
-		return -ENOSYS;
-
 	/* DONTCOPY is essential for Xen because copy_page_range doesn't know
 	 * how to recreate these mappings */
 	vma->vm_flags |= VM_RESERVED | VM_IO | VM_DONTCOPY | VM_PFNMAP;
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 14:57:34 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:57:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOkM-0000ns-HJ; Mon, 06 Aug 2012 14:57:30 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SyOkL-0000mo-8z
	for xen-devel@lists.xensource.com; Mon, 06 Aug 2012 14:57:29 +0000
Received: from [85.158.143.35:31150] by server-2.bemta-4.messagelabs.com id
	48/0A-17938-85BDF105; Mon, 06 Aug 2012 14:57:28 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1344264964!17066289!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzAwNzE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2193 invoked from network); 6 Aug 2012 14:56:09 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 14:56:09 -0000
X-IronPort-AV: E=Sophos;i="4.77,720,1336363200"; d="scan'208";a="204279817"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 10:56:03 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Mon, 6 Aug 2012 10:56:03 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1SyOHO-0002zY-Ev;
	Mon, 06 Aug 2012 15:27:34 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <linux-kernel@vger.kernel.org>
Date: Mon, 6 Aug 2012 15:27:19 +0100
Message-ID: <1344263246-28036-16-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, konrad.wilk@oracle.com,
	catalin.marinas@arm.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	tim@xen.org, linux-arm-kernel@lists.infradead.org
Subject: [Xen-devel] [PATCH v2 16/23] xen: clear IRQ_NOAUTOEN and
	IRQ_NOREQUEST
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Clear the IRQ_NOAUTOEN and IRQ_NOREQUEST flags, which are set by
default on ARM. If IRQ_NOAUTOEN is set, __setup_irq doesn't call
irq_startup, which is responsible for calling irq_unmask at startup
time. As a result, event channels remain masked.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 drivers/xen/events.c |    1 +
 1 files changed, 1 insertions(+), 0 deletions(-)

diff --git a/drivers/xen/events.c b/drivers/xen/events.c
index 5ecb596..8ffb7b7 100644
--- a/drivers/xen/events.c
+++ b/drivers/xen/events.c
@@ -836,6 +836,7 @@ int bind_evtchn_to_irq(unsigned int evtchn)
 		struct irq_info *info = info_for_irq(irq);
 		WARN_ON(info == NULL || info->type != IRQT_EVTCHN);
 	}
+	irq_clear_status_flags(irq, IRQ_NOREQUEST|IRQ_NOAUTOEN);
 
 out:
 	mutex_unlock(&irq_mapping_update_lock);
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 14:58:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:58:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOkn-00014N-Vl; Mon, 06 Aug 2012 14:57:57 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SyOkm-00013A-Md
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 14:57:56 +0000
Received: from [85.158.139.83:5941] by server-6.bemta-5.messagelabs.com id
	8C/98-11348-37BDF105; Mon, 06 Aug 2012 14:57:55 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-7.tower-182.messagelabs.com!1344265075!24816239!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM4NjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9479 invoked from network); 6 Aug 2012 14:57:55 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-7.tower-182.messagelabs.com with SMTP;
	6 Aug 2012 14:57:55 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 06 Aug 2012 15:57:54 +0100
Message-Id: <501FF78D0200007800092F24@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Mon, 06 Aug 2012 15:57:49 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Daniel De Graaf" <dgdegra@tycho.nsa.gov>
References: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
	<1344263550-3941-8-git-send-email-dgdegra@tycho.nsa.gov>
In-Reply-To: <1344263550-3941-8-git-send-email-dgdegra@tycho.nsa.gov>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 07/18] arch/x86: add missing XSM checks to
 XENPF_ commands
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 06.08.12 at 16:32, Daniel De Graaf <dgdegra@tycho.nsa.gov> wrote:

What's the point of doing XSM checks for Dom0-only interfaces
anyway? I don't see how these can be subject to disaggregation...

Jan

> Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
> ---
>  tools/flask/policy/policy/modules/xen/xen.te | 4 ++--
>  xen/arch/x86/platform_hypercall.c            | 8 ++++++++
>  2 files changed, 10 insertions(+), 2 deletions(-)
> 
> diff --git a/tools/flask/policy/policy/modules/xen/xen.te 
> b/tools/flask/policy/policy/modules/xen/xen.te
> index 40c4c0a..1162153 100644
> --- a/tools/flask/policy/policy/modules/xen/xen.te
> +++ b/tools/flask/policy/policy/modules/xen/xen.te
> @@ -53,8 +53,8 @@ type device_t, resource_type;
>  #
>  
> #############################################################################
> ###
>  allow dom0_t xen_t:xen { kexec readapic writeapic mtrr_read mtrr_add 
> mtrr_del
> -	scheduler physinfo heap quirk readconsole writeconsole settime
> -	microcode cpupool_op sched_op };
> +	scheduler physinfo heap quirk readconsole writeconsole settime getcpuinfo
> +	microcode cpupool_op sched_op pm_op };
>  allow dom0_t xen_t:mmu { memorymap };
>  allow dom0_t security_t:security { check_context compute_av compute_create
>  	compute_member load_policy compute_relabel compute_user setenforce
> diff --git a/xen/arch/x86/platform_hypercall.c 
> b/xen/arch/x86/platform_hypercall.c
> index 88880b0..c049db7 100644
> --- a/xen/arch/x86/platform_hypercall.c
> +++ b/xen/arch/x86/platform_hypercall.c
> @@ -501,6 +501,10 @@ ret_t do_platform_op(XEN_GUEST_HANDLE(xen_platform_op_t) 
> u_xenpf_op)
>      {
>          struct xenpf_pcpu_version *ver = &op->u.pcpu_version;
>  
> +        ret = xsm_getcpuinfo();
> +        if ( ret )
> +            break;
> +
>          if ( !get_cpu_maps() )
>          {
>              ret = -EBUSY;
> @@ -618,6 +622,10 @@ ret_t do_platform_op(XEN_GUEST_HANDLE(xen_platform_op_t) 
> u_xenpf_op)
>      {
>          uint32_t idle_nums;
>  
> +        ret = xsm_pm_op();
> +        if ( ret )
> +            break;
> +
>          switch(op->u.core_parking.type)
>          {
>          case XEN_CORE_PARKING_SET:
> -- 
> 1.7.11.2
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org 
> http://lists.xen.org/xen-devel 




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Mon Aug 06 14:59:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 14:59:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOm4-0001fx-Ev; Mon, 06 Aug 2012 14:59:16 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1SyOm2-0001fC-LR
	for xen-devel@lists.xensource.com; Mon, 06 Aug 2012 14:59:14 +0000
Received: from [85.158.143.35:41322] by server-1.bemta-4.messagelabs.com id
	9D/D0-24392-1CBDF105; Mon, 06 Aug 2012 14:59:13 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1344265151!16372090!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzAwNzE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16481 invoked from network); 6 Aug 2012 14:59:13 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 14:59:13 -0000
X-IronPort-AV: E=Sophos;i="4.77,720,1336363200"; d="scan'208";a="204280308"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 10:59:10 -0400
Received: from [10.80.2.76] (10.80.2.76) by FTLPMAILMX02.citrite.net
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0; Mon, 6 Aug 2012
	10:59:11 -0400
Message-ID: <501FDBBD.7010502@citrix.com>
Date: Mon, 6 Aug 2012 15:59:09 +0100
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20120428 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>	<501FD726.90806@citrix.com>
	<alpine.DEB.2.02.1208061542200.4645@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1208061542200.4645@kaball.uk.xensource.com>
Cc: "Tim Deegan \(3P\)" <Tim.Deegan@citrix.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	David Vrabel <david.vrabel@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH 0/5] ARM hypercall ABI: 64 bit ready
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 06/08/12 15:44, Stefano Stabellini wrote:
> On Mon, 6 Aug 2012, David Vrabel wrote:
>> On 06/08/12 15:11, Stefano Stabellini wrote:
>>> Hi all,
>>> this patch series makes the necessary changes to make sure that the
>>> current ARM hypercall ABI can be used as-is on 64 bit ARM platforms:
>>>
>>> - it defines xen_ulong_t as uint64_t on ARM;
>>> - it introduces a new macro to handle guest pointers, called
>>> XEN_GUEST_HANDLE_PARAM (that has size 4 bytes on aarch and is going to
>>> have size 8 bytes on aarch64);
>>> - it replaces all the occurrences of XEN_GUEST_HANDLE in hypercall
>>> parameters with XEN_GUEST_HANDLE_PARAM.
>>
>> This is a subtle (and undocumented!) distinction. I can see people
>> adding/modifying hypercall etc. getting this wrong and no one noticing
>> for a while (since it doesn't affect x86).
> 
> Where should I document this? I wrote it into the commit message but
> maybe a doc under docs is better.

A comment next to the #define of the two macros?

>> The xen_ulong_t parameters (when used for pointers) from an aarch guest
>> point of view are a uint32_t guest pointer and uint32_t of padding.  So
>> the guest handles will be the same size in hypercall parameters and
>> structure members.

Ignore this.  Dunno what I was thinking.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 15:01:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 15:01:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOoX-0002IH-0m; Mon, 06 Aug 2012 15:01:49 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jean.guyader@gmail.com>) id 1SyOoV-0002Hr-S7
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 15:01:48 +0000
Received: from [85.158.139.83:45939] by server-4.bemta-5.messagelabs.com id
	93/32-27831-B5CDF105; Mon, 06 Aug 2012 15:01:47 +0000
X-Env-Sender: jean.guyader@gmail.com
X-Msg-Ref: server-9.tower-182.messagelabs.com!1344265304!29803180!1
X-Originating-IP: [209.85.213.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25228 invoked from network); 6 Aug 2012 15:01:46 -0000
Received: from mail-yx0-f173.google.com (HELO mail-yx0-f173.google.com)
	(209.85.213.173)
	by server-9.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 15:01:46 -0000
Received: by yenm4 with SMTP id m4so188595yen.32
	for <xen-devel@lists.xen.org>; Mon, 06 Aug 2012 08:01:43 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type;
	bh=QWUjmX0Otwgp7Rgb5H42JIHYryQ1uLv9Azx/XimfTHk=;
	b=vk/IVdp4QICsv94gB0V1VvSv5QfxOiBFI44PmYkOBuIbpo7NNlDswAU2xbaW25jh1T
	vv1h/+UxVSAYhdzEnvTvSbX49EeHXzDyBjmAFBwgGs2CToX7AuQYUoa8jaNqQpbvOCu9
	Wb0ba0Mt4EBR8pcNAE58b1+8pGh9sMxHkiFOHqfIPrKsAnW6qsDz4B8EoIomuPGAFIgI
	FiC5LOV/zkgZjuHQUSFrMU81o0i4gTOwPpJMshFvDBbCfAcXZn+NGPSzR8vHM7Qp+OfD
	ynwjjVZwYZIeS4RYo63mnB1NaU1v7LdyNe7bYuvR6IQXMiT89TocPJEy73H8MmkoQZH3
	9XZA==
Received: by 10.52.91.7 with SMTP id ca7mr3546025vdb.2.1344265303685; Mon, 06
	Aug 2012 08:01:43 -0700 (PDT)
MIME-Version: 1.0
Received: by 10.58.181.101 with HTTP; Mon, 6 Aug 2012 08:01:23 -0700 (PDT)
In-Reply-To: <1344264989.11339.47.camel@zakaz.uk.xensource.com>
References: <1344023454-31425-1-git-send-email-jean.guyader@citrix.com>
	<1344023454-31425-4-git-send-email-jean.guyader@citrix.com>
	<501F97F90200007800092CAC@nat28.tlf.novell.com>
	<CAEBdQ920zPHQuLng=ncPonFHoSnvgFb3rz4wo+MZP5y88GAAxA@mail.gmail.com>
	<1344264989.11339.47.camel@zakaz.uk.xensource.com>
From: Jean Guyader <jean.guyader@gmail.com>
Date: Mon, 6 Aug 2012 16:01:23 +0100
Message-ID: <CAEBdQ90V0ZRTnn+njqmduhcRO0rD0xkR+0gjWbaLjMGsRNzYQA@mail.gmail.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: "Jean Guyader \(3P\)" <jean.guyader@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Ross Philipson <Ross.Philipson@citrix.com>,
	Jan Beulich <JBeulich@suse.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 3/5] xen: virq, remove VIRQ_XC_RESERVED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 6 August 2012 15:56, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Mon, 2012-08-06 at 15:46 +0100, Jean Guyader wrote:
>> On 6 August 2012 09:10, Jan Beulich <JBeulich@suse.com> wrote:
>> >>>> On 03.08.12 at 21:50, Jean Guyader <jean.guyader@citrix.com> wrote:
>> >> VIRQ_XC_RESERVED was reserved for V4V but we have switched
>> >> to event channels so this place holder is no longer required.
>> >
>> > I'm fine with this change, but is a future re-use of the value indeed
>> > not going to cause problems on XenServer (or wherever else this
>> > is patch set coming from)?
>> >
>>
>> That may need to be confirmed but I don't think XenServer is using v4v
>> yet
>
> I think Jan probably meant XenClient (i.e. that being the place where
> v4v is already deployed).
>
> There's no harm in keeping this # reserved indefinitely, with a suitable
> comment, I think? The only reason not to would be if this address space
> was limited, but I don't think that is the case with VIRQs
>
>

I think if XenClient rebases to a new version of Xen, we will probably use
the version of v4v that comes with it, and we will not try to rebase the
old code onto the newer Xen.

But if you think we should keep it, I don't mind.

Jean

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 15:04:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 15:04:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOrH-0002VN-JS; Mon, 06 Aug 2012 15:04:39 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1SyOrG-0002V4-5N
	for xen-devel@lists.xensource.com; Mon, 06 Aug 2012 15:04:38 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1344265319!12496808!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDY4MDU3MA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28893 invoked from network); 6 Aug 2012 15:02:01 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-2.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Aug 2012 15:02:01 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q76F12bx007447
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 6 Aug 2012 15:01:03 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q76F11dS007332
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 6 Aug 2012 15:01:01 GMT
Received: from abhmt102.oracle.com (abhmt102.oracle.com [141.146.116.54])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q76F10hh009143; Mon, 6 Aug 2012 10:01:00 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 06 Aug 2012 08:01:00 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id CFD6541F34; Mon,  6 Aug 2012 10:31:58 -0400 (EDT)
Date: Mon, 6 Aug 2012 10:31:58 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Mel Gorman <mgorman@suse.de>
Message-ID: <20120806143158.GB2487@phenom.dumpdata.com>
References: <20120801190227.GA13272@phenom.dumpdata.com>
	<20120803120414.GA10670@andromeda.dapyr.net>
	<20120804110355.GA17640@andromeda.dapyr.net>
	<20120804133105.GE29814@suse.de>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20120804133105.GE29814@suse.de>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: xen-devel@lists.xensource.com, linux-kernel@vger.kernel.org,
	Ian Campbell <Ian.Campbell@eu.citrix.com>,
	Konrad Rzeszutek Wilk <konrad@darnok.org>,
	akpm@linux-foundation.org, davem@davemloft.net
Subject: Re: [Xen-devel] Regression in xen-netfront on v3.6 (git commit
 c48a11c7ad2623b99bbd6859b0b4234e7f11176f,
 netvm: propagate page->pfmemalloc to skb)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sat, Aug 04, 2012 at 02:31:05PM +0100, Mel Gorman wrote:
> On Sat, Aug 04, 2012 at 07:03:55AM -0400, Konrad Rzeszutek Wilk wrote:
> > On Fri, Aug 03, 2012 at 08:04:14AM -0400, Konrad Rzeszutek Wilk wrote:
> > > On Wed, Aug 01, 2012 at 03:02:27PM -0400, Konrad Rzeszutek Wilk wrote:
> > > > So I hadn't done a git bisection yet. But if I choose git commit:
> > > > 4b24ff71108164e047cf2c95990b77651163e315
> > > >     Merge tag 'for-v3.6' of git://git.infradead.org/battery-2.6
> > > > 
> > > >     Pull battery updates from Anton Vorontsov:
> > > > 
> > > > 
> > > > everything works nicely. Anything past that, so these merges:
> > > > 
> > > > konrad@phenom:~/ssd/linux$ git log --oneline --merges 4b24ff71108164e047cf2c95990b77651163e315..linus/master
> > > > 2d53492 Merge tag 'irqdomain-for-linus' of git://git.secretlab.ca/git/linux-2.6
> > > ===> ac694db Merge branch 'akpm' (Andrew's patch-bomb)
> > > 
> > > Somewhere in there is the culprit. Hadn't done yet the full bisection
> > > (was just checking out in each merge to see when it stopped working)
> > 
> > Mel, your:
> > commit c48a11c7ad2623b99bbd6859b0b4234e7f11176f
> > Author: Mel Gorman <mgorman@suse.de>
> > Date:   Tue Jul 31 16:44:23 2012 -0700
> > 
> >     netvm: propagate page->pfmemalloc to skb
> > 
> > is the culprit per git bisect. Any ideas - do the drivers need to do
> > some extra processing? Here is the git bisect log
> > 
> 
> The problem appears to be at drivers/net/xen-netfront.c#973 where it
> calls __skb_fill_page_desc(skb, 0, NULL, 0, 0). The driver does not
> have to do extra processing as such, but I did not expect NULL to be
> passed in like this. Can you check if this fixes the bug please?

That does it!
.. snip..
> 
> Signed-off-by: Mel Gorman <mgorman@suse.de>

Reported-and-Tested-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

> ---
>  include/linux/skbuff.h |    2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
> index 7632c87..8857669 100644
> --- a/include/linux/skbuff.h
> +++ b/include/linux/skbuff.h
> @@ -1256,7 +1256,7 @@ static inline void __skb_fill_page_desc(struct sk_buff *skb, int i,
>  	 * do not lose pfmemalloc information as the pages would not be
>  	 * allocated using __GFP_MEMALLOC.
>  	 */
> -	if (page->pfmemalloc && !page->mapping)
> +	if (page && page->pfmemalloc && !page->mapping)
>  		skb->pfmemalloc	= true;
>  	frag->page.p		  = page;
>  	frag->page_offset	  = off;

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 15:07:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 15:07:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOuA-0002lH-6L; Mon, 06 Aug 2012 15:07:38 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SyOu8-0002kw-Ve
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 15:07:37 +0000
Received: from [85.158.139.83:36402] by server-2.bemta-5.messagelabs.com id
	92/31-04598-8BDDF105; Mon, 06 Aug 2012 15:07:36 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-15.tower-182.messagelabs.com!1344265655!30429337!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM4NjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17943 invoked from network); 6 Aug 2012 15:07:35 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-15.tower-182.messagelabs.com with SMTP;
	6 Aug 2012 15:07:35 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 06 Aug 2012 16:07:35 +0100
Message-Id: <501FF9D50200007800092F76@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Mon, 06 Aug 2012 16:07:33 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Daniel De Graaf" <dgdegra@tycho.nsa.gov>
References: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
	<1344263550-3941-9-git-send-email-dgdegra@tycho.nsa.gov>
In-Reply-To: <1344263550-3941-9-git-send-email-dgdegra@tycho.nsa.gov>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 08/18] xen: Add DOMID_SELF support to
 rcu_lock_domain_by_id
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 06.08.12 at 16:32, Daniel De Graaf <dgdegra@tycho.nsa.gov> wrote:
> Callers that want to prevent use of DOMID_SELF already need to ensure
> the calling domain does not pass its own domain ID. This removes the
> need for the caller to manually support DOMID_SELF, which many already
> do.

I'm not really sure this is correct. At the very least it changes the
return value of rcu_lock_remote_target_domain_by_id() when
called with DOMID_SELF (from -ESRCH to -EPERM).

I'm also not convinced that a distinction between a domain knowing
its ID and one passing DOMID_SELF isn't/can't be useful. That of
course depends on whether the ID can be fully hidden from a guest
(obviously pure HVM guests would never know their ID, but then
again they also would never pass DOMID_SELF anywhere; it might
be, however, that they could get the latter passed on their behalf
e.g. from some emulation function).

Jan

> Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
> ---
>  xen/common/domain.c        | 3 +++
>  xen/common/event_channel.c | 3 ---
>  xen/common/grant_table.c   | 2 +-
>  3 files changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/xen/common/domain.c b/xen/common/domain.c
> index 4c5d241..dbbc414 100644
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -400,6 +400,9 @@ struct domain *rcu_lock_domain_by_id(domid_t dom)
>  {
>      struct domain *d = NULL;
>  
> +    if ( dom == DOMID_SELF )
> +        return rcu_lock_current_domain();
> +
>      rcu_read_lock(&domlist_read_lock);
>  
>      for ( d = rcu_dereference(domain_hash[DOMAIN_HASH(dom)]);
> diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
> index 53777f8..988d3ce 100644
> --- a/xen/common/event_channel.c
> +++ b/xen/common/event_channel.c
> @@ -201,9 +201,6 @@ static long 
> evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind)
>      domid_t        rdom = bind->remote_dom;
>      long           rc;
>  
> -    if ( rdom == DOMID_SELF )
> -        rdom = current->domain->domain_id;
> -
>      if ( (rd = rcu_lock_domain_by_id(rdom)) == NULL )
>          return -ESRCH;
>  
> diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
> index 9961e83..fbea67c 100644
> --- a/xen/common/grant_table.c
> +++ b/xen/common/grant_table.c
> @@ -715,7 +715,7 @@ __gnttab_map_grant_ref(
>      TRACE_1D(TRC_MEM_PAGE_GRANT_MAP, op->dom);
>  
>      mt = &maptrack_entry(lgt, handle);
> -    mt->domid = op->dom;
> +    mt->domid = rd->domain_id;
>      mt->ref   = op->ref;
>      mt->flags = op->flags;
>  
> -- 
> 1.7.11.2
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org 
> http://lists.xen.org/xen-devel 




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 15:07:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 15:07:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOuO-0002nB-JF; Mon, 06 Aug 2012 15:07:52 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1SyOuM-0002mA-3L
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 15:07:50 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-4.tower-27.messagelabs.com!1344265621!9383343!1
X-Originating-IP: [63.239.67.9]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8095 invoked from network); 6 Aug 2012 15:07:02 -0000
Received: from emvm-gh1-uea08.nsa.gov (HELO nsa.gov) (63.239.67.9)
	by server-4.tower-27.messagelabs.com with SMTP;
	6 Aug 2012 15:07:02 -0000
X-TM-IMSS-Message-ID: <7b26baa00002c0b9@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov ([63.239.67.9])
	with ESMTP (TREND IMSS SMTP Service 7.1) id 7b26baa00002c0b9 ;
	Mon, 6 Aug 2012 11:07:07 -0400
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id q76F6vE8014259; 
	Mon, 6 Aug 2012 11:06:57 -0400
Message-ID: <501FDD91.8000100@tycho.nsa.gov>
Date: Mon, 06 Aug 2012 11:06:57 -0400
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Organization: National Security Agency
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120717 Thunderbird/14.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
	<1344263550-3941-8-git-send-email-dgdegra@tycho.nsa.gov>
	<501FF78D0200007800092F24@nat28.tlf.novell.com>
In-Reply-To: <501FF78D0200007800092F24@nat28.tlf.novell.com>
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 07/18] arch/x86: add missing XSM checks to
 XENPF_ commands
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/06/2012 10:57 AM, Jan Beulich wrote:
>>>> On 06.08.12 at 16:32, Daniel De Graaf <dgdegra@tycho.nsa.gov> wrote:
> 
> What's the point of doing XSM checks for Dom0-only interfaces
> anyway? I don't see how these can be subject to disaggregation...
> 
> Jan
> 

When splitting up the domain builder and hardware access domains, the
domain builder still needs to be privileged but should not have access
to the functions that manage the hardware. Similarly, the hardware
domain has no need to use dom0 functions for accessing remote domains.

This also allows exposing read-only interfaces like getcpuinfo to a
domain containing something like OpenStack, instead of needing to proxy
all such calls through dom0.

-- 
Daniel De Graaf
National Security Agency

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 15:13:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 15:13:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOzS-0003Db-RJ; Mon, 06 Aug 2012 15:13:06 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@eu.citrix.com>) id 1SyOzR-0003DN-D3
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 15:13:05 +0000
Received: from [85.158.138.51:7980] by server-3.bemta-3.messagelabs.com id
	41/DE-13122-00FDF105; Mon, 06 Aug 2012 15:13:04 +0000
X-Env-Sender: George.Dunlap@eu.citrix.com
X-Msg-Ref: server-7.tower-174.messagelabs.com!1344265982!21740654!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjI4NDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21542 invoked from network); 6 Aug 2012 15:13:03 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 15:13:03 -0000
X-IronPort-AV: E=Sophos;i="4.77,720,1336363200"; d="scan'208";a="33704893"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 11:13:01 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Mon, 6 Aug 2012 11:13:02 -0400
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1SyOzN-0003s7-EI;
	Mon, 06 Aug 2012 16:13:01 +0100
Message-ID: <501FDE3B.20801@eu.citrix.com>
Date: Mon, 6 Aug 2012 16:09:47 +0100
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120714 Thunderbird/14.0
MIME-Version: 1.0
To: Matt Wilson <msw@amazon.com>
References: <20120802211157.GG8228@US-SEA-R8XVZTX>
	<20507.58318.416753.917851@mariner.uk.xensource.com>
	<20120803205146.GA6268@US-SEA-R8XVZTX>
In-Reply-To: <20120803205146.GA6268@US-SEA-R8XVZTX>
Cc: Lars Kurth <lars.kurth@xen.org>, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] lists.xen.org Mailman configuration and DKIM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 03/08/12 21:51, Matt Wilson wrote:
>>    (b)  Get your correspondents to use a non-broken email host;
> Lars, George - is that an option?
Gmail is the best interface I've seen so far for dealing with a list 
like xen-devel.  Giving it up would be a major downgrade.  I've 
instructed gmail to white-list your e-mail address, so I (hopefully) 
shouldn't be missing any more of your e-mails (although I may miss some 
from your colleagues).

  -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 15:13:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 15:13:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyOzR-0003DO-Ey; Mon, 06 Aug 2012 15:13:05 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SyOzQ-0003DH-7X
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 15:13:04 +0000
Received: from [85.158.143.99:38082] by server-1.bemta-4.messagelabs.com id
	F0/09-24392-FFEDF105; Mon, 06 Aug 2012 15:13:03 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-216.messagelabs.com!1344265981!20866311!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgxNjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8958 invoked from network); 6 Aug 2012 15:13:01 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-14.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 15:13:01 -0000
X-IronPort-AV: E=Sophos;i="4.77,720,1336348800"; d="scan'208";a="13869556"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 15:13:01 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Mon, 6 Aug 2012
	16:13:01 +0100
Message-ID: <1344265980.11339.48.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jean Guyader <jean.guyader@gmail.com>
Date: Mon, 6 Aug 2012 16:13:00 +0100
In-Reply-To: <CAEBdQ90V0ZRTnn+njqmduhcRO0rD0xkR+0gjWbaLjMGsRNzYQA@mail.gmail.com>
References: <1344023454-31425-1-git-send-email-jean.guyader@citrix.com>
	<1344023454-31425-4-git-send-email-jean.guyader@citrix.com>
	<501F97F90200007800092CAC@nat28.tlf.novell.com>
	<CAEBdQ920zPHQuLng=ncPonFHoSnvgFb3rz4wo+MZP5y88GAAxA@mail.gmail.com>
	<1344264989.11339.47.camel@zakaz.uk.xensource.com>
	<CAEBdQ90V0ZRTnn+njqmduhcRO0rD0xkR+0gjWbaLjMGsRNzYQA@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "Jean Guyader \(3P\)" <jean.guyader@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Ross Philipson <Ross.Philipson@citrix.com>,
	Jan Beulich <JBeulich@suse.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 3/5] xen: virq, remove VIRQ_XC_RESERVED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2012-08-06 at 16:01 +0100, Jean Guyader wrote:
> On 6 August 2012 15:56, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > On Mon, 2012-08-06 at 15:46 +0100, Jean Guyader wrote:
> >> On 6 August 2012 09:10, Jan Beulich <JBeulich@suse.com> wrote:
> >> >>>> On 03.08.12 at 21:50, Jean Guyader <jean.guyader@citrix.com> wrote:
> >> >> VIRQ_XC_RESERVED was reserved for V4V but we have switched
> >> >> to event channels so this place holder is no longer required.
> >> >
> >> > I'm fine with this change, but is a future re-use of the value indeed
> >> > not going to cause problems on XenServer (or wherever else this
> >> > is patch set coming from)?
> >> >
> >>
> >> That may need to be confirmed but I don't think XenServer is using v4v
> >> yet
> >
> > I think Jan probably meant XenClient (i.e. that being the place where
> > v4v is already deployed).
> >
> > There's no harm in keeping this # reserved indefinitely, with a suitable
> > comment, I think? The only reason not to would be if this address space
> > was limited, but I don't think that is the case with VIRQs
> >
> >
> 
> I think if XenClient rebases to a new version of Xen, we will
> probably use the version of v4v that comes with it rather than try
> to rebase the old code onto the newer Xen.

I think Jan's concern was that if a current client runs on some future
version of Xen which has reused that VIRQ for something else, some sort
of weirdness would probably ensue.

Probably not as bad for a VIRQ as reusing a hypercall number...


> 
> But if you think we should keep it I don't mind.
> 
> Jean



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 15:18:39 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 15:18:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyP4e-0003cX-JZ; Mon, 06 Aug 2012 15:18:28 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SyP4c-0003c7-SN
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 15:18:27 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1344266299!2409874!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM4NjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26862 invoked from network); 6 Aug 2012 15:18:19 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-15.tower-27.messagelabs.com with SMTP;
	6 Aug 2012 15:18:19 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 06 Aug 2012 16:18:19 +0100
Message-Id: <501FFC580200007800092FA4@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Mon, 06 Aug 2012 16:18:16 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: <xen-devel@lists.xen.org>,
 "Daniel De Graaf" <dgdegra@tycho.nsa.gov>
References: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
	<1344263550-3941-12-git-send-email-dgdegra@tycho.nsa.gov>
In-Reply-To: <1344263550-3941-12-git-send-email-dgdegra@tycho.nsa.gov>
Mime-Version: 1.0
Content-Disposition: inline
Subject: Re: [Xen-devel] [PATCH 11/18] xen: use XSM instead of IS_PRIV where
 duplicated
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 06.08.12 at 16:32, Daniel De Graaf <dgdegra@tycho.nsa.gov> wrote:
> --- a/xen/arch/x86/domctl.c
> +++ b/xen/arch/x86/domctl.c
> @@ -54,6 +54,26 @@ long arch_do_domctl(
>  
>      switch ( domctl->cmd )
>      {
> +    /* TODO: the following do not have XSM hooks yet */
> +    case XEN_DOMCTL_set_cpuid:
> +    case XEN_DOMCTL_suppress_spurious_page_faults:
> +    case XEN_DOMCTL_debug_op:
> +    case XEN_DOMCTL_gettscinfo:
> +    case XEN_DOMCTL_settscinfo:
> +    case XEN_DOMCTL_audit_p2m:
> +    case XEN_DOMCTL_gdbsx_guestmemio:
> +    case XEN_DOMCTL_gdbsx_pausevcpu:
> +    case XEN_DOMCTL_gdbsx_unpausevcpu:
> +    case XEN_DOMCTL_gdbsx_domstatus:
> +    /* getpageframeinfo[23] will leak XEN_DOMCTL_PFINFO_XTAB on target GFNs */

Is that to state that the patch introduces a leak here? Or are you
trying to carefully tell us you spotted a problem in the existing
code?

> +    case XEN_DOMCTL_getpageframeinfo2:
> +    case XEN_DOMCTL_getpageframeinfo3:
> +        if ( !IS_PRIV(current->domain) )
> +            return -EPERM;
> +    }
> +
> +    switch ( domctl->cmd )
> +    {
>  
>      case XEN_DOMCTL_shadow_op:
>      {
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -3366,12 +3366,12 @@ static int hvmop_set_pci_intx_level(
>      if ( (op.domain > 0) || (op.bus > 0) || (op.device > 31) || (op.intx > 3) )
>          return -EINVAL;
>  
> -    rc = rcu_lock_remote_target_domain_by_id(op.domid, &d);
> -    if ( rc != 0 )
> -        return rc;
> +    d = rcu_lock_domain_by_id(op.domid);
> +    if ( d == NULL )
> +        return -ESRCH;
>  
>      rc = -EINVAL;
> -    if ( !is_hvm_domain(d) )
> +    if ( d == current->domain || !is_hvm_domain(d) )

What's wrong with rcu_lock_remote_target_domain_by_id() here
and in other places below? I think this would minimally deserve
a comment in the patch description, all the more so since a patch
this huge is already hard enough to review.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 15:20:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 15:20:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyP6V-0003m9-3K; Mon, 06 Aug 2012 15:20:23 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1SyP6T-0003lq-VU
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 15:20:22 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-8.tower-27.messagelabs.com!1344266353!11042121!1
X-Originating-IP: [63.239.67.9]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19773 invoked from network); 6 Aug 2012 15:19:13 -0000
Received: from emvm-gh1-uea08.nsa.gov (HELO nsa.gov) (63.239.67.9)
	by server-8.tower-27.messagelabs.com with SMTP;
	6 Aug 2012 15:19:13 -0000
X-TM-IMSS-Message-ID: <7b31d61c0002c704@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov ([63.239.67.9])
	with ESMTP (TREND IMSS SMTP Service 7.1) id 7b31d61c0002c704 ;
	Mon, 6 Aug 2012 11:19:15 -0400
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id q76FJ52W015079; 
	Mon, 6 Aug 2012 11:19:05 -0400
Message-ID: <501FE068.1090404@tycho.nsa.gov>
Date: Mon, 06 Aug 2012 11:19:04 -0400
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Organization: National Security Agency
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120717 Thunderbird/14.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
	<1344263550-3941-9-git-send-email-dgdegra@tycho.nsa.gov>
	<501FF9D50200007800092F76@nat28.tlf.novell.com>
In-Reply-To: <501FF9D50200007800092F76@nat28.tlf.novell.com>
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 08/18] xen: Add DOMID_SELF support to
	rcu_lock_domain_by_id
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/06/2012 11:07 AM, Jan Beulich wrote:
>>>> On 06.08.12 at 16:32, Daniel De Graaf <dgdegra@tycho.nsa.gov> wrote:
>> Callers that want to prevent use of DOMID_SELF already need to ensure
>> the calling domain does not pass its own domain ID. This removes the
>> need for the caller to manually support DOMID_SELF, which many already
>> do.
> 
> I'm not really sure this is correct. At the very least it changes the
> return value of rcu_lock_remote_target_domain_by_id() when
> called with DOMID_SELF (from -ESRCH to -EPERM).

This series ends up eliminating that function in patch #18, so that
part is taken care of.
 
> I'm also not convinced that a distinction between a domain knowing
> its ID and one passing DOMID_SELF isn't/can't be useful. That of
> course depends on whether the ID can be fully hidden from a guest
> (obviously pure HVM guests would never know their ID, but then
> again they also would never pass DOMID_SELF anywhere; it might
> be, however, that they could get the latter passed on their behalf
> e.g. from some emulation function).
> 
> Jan

I don't think we can (or want to) make it impossible for a guest to find
out its own domain ID. I agree that the distinction between DOMID_SELF
and my_own_domid can be useful in some cases. Most of those cases in Xen
that I have seen already handle this at the caller.

Another solution here is to create a function rcu_lock_domain_by_any_id that
is identical to rcu_lock_domain_by_id except for handling DOMID_SELF.

-- 
Daniel De Graaf
National Security Agency

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 15:26:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 15:26:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyPBn-000464-RG; Mon, 06 Aug 2012 15:25:51 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1SyPBl-00045z-SG
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 15:25:50 +0000
Received: from [85.158.139.83:55011] by server-2.bemta-5.messagelabs.com id
	39/64-04598-DF1EF105; Mon, 06 Aug 2012 15:25:49 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-5.tower-182.messagelabs.com!1344266747!30548305!1
X-Originating-IP: [63.239.67.10]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4448 invoked from network); 6 Aug 2012 15:25:47 -0000
Received: from emvm-gh1-uea09.nsa.gov (HELO nsa.gov) (63.239.67.10)
	by server-5.tower-182.messagelabs.com with SMTP;
	6 Aug 2012 15:25:47 -0000
X-TM-IMSS-Message-ID: <7b37b9c60002db3e@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov
	([63.239.67.10]) with ESMTP (TREND IMSS SMTP Service 7.1) id
	7b37b9c60002db3e ; Mon, 6 Aug 2012 11:26:13 -0400
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id q76FPiDH015522; 
	Mon, 6 Aug 2012 11:25:44 -0400
Message-ID: <501FE1F8.1040100@tycho.nsa.gov>
Date: Mon, 06 Aug 2012 11:25:44 -0400
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Organization: National Security Agency
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120717 Thunderbird/14.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
	<1344263550-3941-12-git-send-email-dgdegra@tycho.nsa.gov>
	<501FFC580200007800092FA4@nat28.tlf.novell.com>
In-Reply-To: <501FFC580200007800092FA4@nat28.tlf.novell.com>
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 11/18] xen: use XSM instead of IS_PRIV where
 duplicated
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/06/2012 11:18 AM, Jan Beulich wrote:
>>>> On 06.08.12 at 16:32, Daniel De Graaf <dgdegra@tycho.nsa.gov> wrote:
>> --- a/xen/arch/x86/domctl.c
>> +++ b/xen/arch/x86/domctl.c
>> @@ -54,6 +54,26 @@ long arch_do_domctl(
>>  
>>      switch ( domctl->cmd )
>>      {
>> +    /* TODO: the following do not have XSM hooks yet */
>> +    case XEN_DOMCTL_set_cpuid:
>> +    case XEN_DOMCTL_suppress_spurious_page_faults:
>> +    case XEN_DOMCTL_debug_op:
>> +    case XEN_DOMCTL_gettscinfo:
>> +    case XEN_DOMCTL_settscinfo:
>> +    case XEN_DOMCTL_audit_p2m:
>> +    case XEN_DOMCTL_gdbsx_guestmemio:
>> +    case XEN_DOMCTL_gdbsx_pausevcpu:
>> +    case XEN_DOMCTL_gdbsx_unpausevcpu:
>> +    case XEN_DOMCTL_gdbsx_domstatus:
>> +    /* getpageframeinfo[23] will leak XEN_DOMCTL_PFINFO_XTAB on target GFNs */
> 
> Is that to state that the patch introduces a leak here? Or are you
> trying to carefully tell us you spotted a problem in the existing
> code?

This is an information leak, not a memory leak. It's a (minor) problem with
the placement of the existing XSM hooks, which allows a domain to query
information about remote domains. A later patch fixes this by adding an XSM
hook covering the entire query operation.

>> +    case XEN_DOMCTL_getpageframeinfo2:
>> +    case XEN_DOMCTL_getpageframeinfo3:
>> +        if ( !IS_PRIV(current->domain) )
>> +            return -EPERM;
>> +    }
>> +
>> +    switch ( domctl->cmd )
>> +    {
>>  
>>      case XEN_DOMCTL_shadow_op:
>>      {
>> --- a/xen/arch/x86/hvm/hvm.c
>> +++ b/xen/arch/x86/hvm/hvm.c
>> @@ -3366,12 +3366,12 @@ static int hvmop_set_pci_intx_level(
>>      if ( (op.domain > 0) || (op.bus > 0) || (op.device > 31) || (op.intx > 3) )
>>          return -EINVAL;
>>  
>> -    rc = rcu_lock_remote_target_domain_by_id(op.domid, &d);
>> -    if ( rc != 0 )
>> -        return rc;
>> +    d = rcu_lock_domain_by_id(op.domid);
>> +    if ( d == NULL )
>> +        return -ESRCH;
>>  
>>      rc = -EINVAL;
>> -    if ( !is_hvm_domain(d) )
>> +    if ( d == current->domain || !is_hvm_domain(d) )
> 
> What's wrong with rcu_lock_remote_target_domain_by_id() here
> and in other places below? I think this minimally would deserve
> a comment in the patch description, the more that this huge a
> patch is already bad enough to look at.
> 
> Jan
> 

The main reason for this change is that rcu_lock_remote_target_domain_by_id
calls IS_PRIV, and this patch is attempting to remove the duplicated calls.
Would you prefer making another rcu_lock_* function that only checks against
current->domain and doesn't include the IS_PRIV_FOR check?

-- 
Daniel De Graaf
National Security Agency

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 15:27:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 15:27:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyPCs-0004A5-9P; Mon, 06 Aug 2012 15:26:58 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SyPCq-00049g-LK
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 15:26:56 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1344266810!10067109!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM4NjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9618 invoked from network); 6 Aug 2012 15:26:50 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-5.tower-27.messagelabs.com with SMTP;
	6 Aug 2012 15:26:50 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 06 Aug 2012 16:26:50 +0100
Message-Id: <501FFE560200007800092FBA@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Mon, 06 Aug 2012 16:26:46 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Daniel De Graaf" <dgdegra@tycho.nsa.gov>
References: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
	<1344263550-3941-17-git-send-email-dgdegra@tycho.nsa.gov>
In-Reply-To: <1344263550-3941-17-git-send-email-dgdegra@tycho.nsa.gov>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 16/18] arch/x86: use XSM hooks for
 get_pg_owner access checks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 06.08.12 at 16:32, Daniel De Graaf <dgdegra@tycho.nsa.gov> wrote:
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -2882,11 +2882,6 @@ static struct domain *get_pg_owner(domid_t domid)
>          pg_owner = rcu_lock_domain(dom_io);
>          break;
>      case DOMID_XEN:
> -        if ( !IS_PRIV(curr) )
> -        {
> -            MEM_LOG("Cannot set foreign dom");
> -            break;
> -        }
>          pg_owner = rcu_lock_domain(dom_xen);
>          break;
>      default:
> @@ -2895,12 +2890,6 @@ static struct domain *get_pg_owner(domid_t domid)
>              MEM_LOG("Unknown domain '%u'", domid);
>              break;
>          }
> -        if ( !IS_PRIV_FOR(curr, pg_owner) )
> -        {
> -            MEM_LOG("Cannot set foreign dom");
> -            rcu_unlock_domain(pg_owner);
> -            pg_owner = NULL;
> -        }
>          break;
>      }
>  
> @@ -3008,6 +2997,13 @@ int do_mmuext_op(
>          goto out;
>      }
>  
> +    rc = xsm_mmuext_op(d, pg_owner);

Given the above, this could be

xsm_mmuext_op(dom0, DOMID_{IO,XEN});

yet ...

> +    if ( rc )
> +    {
> +        rcu_unlock_domain(pg_owner);
> +        goto out;
> +    }
> +
>      for ( i = 0; i < count; i++ )
>      {
>          if ( hypercall_preempt_check() )
> @@ -3483,11 +3479,6 @@ int do_mmu_update(
>              rc = -EINVAL;
>              goto out;
>          }
> -        if ( !IS_PRIV_FOR(d, pt_owner) )
> -        {
> -            rc = -ESRCH;
> -            goto out;
> -        }
>      }
>  
>      if ( (pg_owner = get_pg_owner((uint16_t)foreigndom)) == NULL )
> @@ -3643,7 +3634,7 @@ int do_mmu_update(
>              mfn = req.ptr >> PAGE_SHIFT;
>              gpfn = req.val;
>  
> -            rc = xsm_mmu_machphys_update(d, mfn);
> +            rc = xsm_mmu_machphys_update(d, pg_owner, mfn);
>              if ( rc )
>                  break;
>  
> --- a/xen/include/xsm/dummy.h
> +++ b/xen/include/xsm/dummy.h
> @@ -803,19 +803,35 @@ static XSM_DEFAULT(int, domain_memory_map) (struct domain *d)
>  }
>  
>  static XSM_DEFAULT(int, mmu_normal_update) (struct domain *d, struct domain *t,
> -                                    struct domain *f, intpte_t fpte)
> +                                            struct domain *f, intpte_t fpte)
>  {
> +    if ( d != t && !IS_PRIV_FOR(d, t) )
> +        return -EPERM;
> +    if ( d != f && !IS_PRIV_FOR(d, f) )
> +        return -EPERM;
>      return 0;
>  }
>  
> -static XSM_DEFAULT(int, mmu_machphys_update) (struct domain *d, unsigned long mfn)
> +static XSM_DEFAULT(int, mmu_machphys_update) (struct domain *d, struct domain *f,
> +                                              unsigned long mfn)
>  {
> +    if ( d != f && !IS_PRIV_FOR(d, f) )
> +        return -EPERM;
> +    return 0;
> +}
> +
> +static XSM_DEFAULT(int, mmuext_op) (struct domain *d, struct domain *f)
> +{
> +    if ( d != f && !IS_PRIV_FOR(d, f) )
> +        return -EPERM;

... Dom0 is neither privileged for DOM_IO nor for DOM_XEN.

>      return 0;
>  }
>  
>  static XSM_DEFAULT(int, update_va_mapping) (struct domain *d, struct domain *f, 
>                                                              l1_pgentry_t pte)
>  {
> +    if ( d != f && !IS_PRIV_FOR(d, f) )
> +        return -EPERM;
>      return 0;
>  }
>  

Didn't check the other cases in any detail.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 15:32:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 15:32:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyPIS-0004Ro-33; Mon, 06 Aug 2012 15:32:44 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SyPIQ-0004Rj-K8
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 15:32:42 +0000
Received: from [85.158.138.51:55714] by server-1.bemta-3.messagelabs.com id
	4A/CD-29224-993EF105; Mon, 06 Aug 2012 15:32:41 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-3.tower-174.messagelabs.com!1344267161!22564082!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM4NjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25944 invoked from network); 6 Aug 2012 15:32:41 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-3.tower-174.messagelabs.com with SMTP;
	6 Aug 2012 15:32:41 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 06 Aug 2012 16:32:40 +0100
Message-Id: <501FFFB50200007800092FC7@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Mon, 06 Aug 2012 16:32:37 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: <stefano.stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
	<1344262325-26598-1-git-send-email-stefano.stabellini@eu.citrix.com>
In-Reply-To: <1344262325-26598-1-git-send-email-stefano.stabellini@eu.citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xen.org>, Ian.Campbell@citrix.com,
	Tim.Deegan@citrix.com
Subject: Re: [Xen-devel] [PATCH 1/5] xen: improve changes to
 xen_add_to_physmap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 06.08.12 at 16:12, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
wrote:
> This is an incremental patch on top of
> c0bc926083b5987a3e9944eec2c12ad0580100e2: in order to retain binary
> compatibility, it is better to introduce foreign_domid as part of a
> union containing both size and foreign_domid.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> ---
>  xen/include/public/memory.h |   11 +++++++----
>  1 files changed, 7 insertions(+), 4 deletions(-)
> 
> diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
> index b2adfbe..b0af2fd 100644
> --- a/xen/include/public/memory.h
> +++ b/xen/include/public/memory.h
> @@ -208,8 +208,12 @@ struct xen_add_to_physmap {
>      /* Which domain to change the mapping for. */
>      domid_t domid;
>  
> -    /* Number of pages to go through for gmfn_range */
> -    uint16_t    size;
> +    union {
> +        /* Number of pages to go through for gmfn_range */
> +        uint16_t    size;
> +        /* IFF gmfn_foreign */
> +        domid_t foreign_domid;
> +    };

But you're clear that this isn't standard C, and hence can't go
in this way?

Jan

>      /* Source mapping space. */
>  #define XENMAPSPACE_shared_info  0 /* shared info page */
> @@ -217,8 +221,7 @@ struct xen_add_to_physmap {
>  #define XENMAPSPACE_gmfn         2 /* GMFN */
>  #define XENMAPSPACE_gmfn_range   3 /* GMFN range */
>  #define XENMAPSPACE_gmfn_foreign 4 /* GMFN from another guest */
> -    uint16_t space;
> -    domid_t foreign_domid; /* IFF gmfn_foreign */
> +    unsigned int space;
>  
>  #define XENMAPIDX_grant_table_status 0x80000000
>  
> -- 
> 1.7.2.5
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org 
> http://lists.xen.org/xen-devel 




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 15:38:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 15:38:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyPNO-0004hd-0x; Mon, 06 Aug 2012 15:37:50 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1SyPNL-0004hX-VE
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 15:37:48 +0000
Received: from [85.158.143.99:56953] by server-3.bemta-4.messagelabs.com id
	D1/47-01511-BC4EF105; Mon, 06 Aug 2012 15:37:47 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-7.tower-216.messagelabs.com!1344267463!27129826!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjcyOTUw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4574 invoked from network); 6 Aug 2012 15:37:44 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-7.tower-216.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Aug 2012 15:37:44 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q76FberF015612
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 6 Aug 2012 15:37:41 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q76FbdmT024638
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 6 Aug 2012 15:37:40 GMT
Received: from abhmt103.oracle.com (abhmt103.oracle.com [141.146.116.55])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q76FbdVE028688; Mon, 6 Aug 2012 10:37:39 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 06 Aug 2012 08:37:39 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 75AE241F35; Mon,  6 Aug 2012 11:28:15 -0400 (EDT)
Date: Mon, 6 Aug 2012 11:28:15 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Jean Guyader <jean.guyader@citrix.com>
Message-ID: <20120806152815.GE8967@phenom.dumpdata.com>
References: <1344032660-1251-1-git-send-email-jean.guyader@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1344032660-1251-1-git-send-email-jean.guyader@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] RFC: V4V Linux Driver
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Aug 03, 2012 at 11:24:20PM +0100, Jean Guyader wrote:
> This is a Linux driver for the V4V inter VM communication system.
> 
> I've posted the V4V Xen patches for comments; to find more info about
> V4V you can check out this link:
> http://osdir.com/ml/general/2012-08/msg05904.html
> 
> This Linux driver exposes two char devices, one for TCP and one for UDP.
> The interface exposed to userspace is made of IOCTLs, one per
> network operation (listen, bind, accept, send, recv, ...).

I haven't had a chance to take a look at this and won't until next
week. But just a couple of quick questions:

 - Is there a test application for this? If so, where can I get it?
 - Is there any code in the Xen repository that uses it?
 - Who are the users?
 - Why .. TCP and UDP? Does that mean it masquerades as an Ethernet
   device? Why the choice of a char device?

Thx.
> 
> Signed-off-by: Jean Guyader <jean.guyader@citrix.com>
> ---
>  drivers/xen/Kconfig         |    4 +
>  drivers/xen/Makefile        |    1 +
>  drivers/xen/v4v.c           | 2639 +++++++++++++++++++++++++++++++++++++++++++
>  drivers/xen/v4v_utils.h     |  278 +++++
>  include/xen/interface/v4v.h |  299 +++++
>  include/xen/interface/xen.h |    1 +
>  include/xen/v4vdev.h        |   34 +
>  7 files changed, 3256 insertions(+)
>  create mode 100644 drivers/xen/v4v.c
>  create mode 100644 drivers/xen/v4v_utils.h
>  create mode 100644 include/xen/interface/v4v.h
>  create mode 100644 include/xen/v4vdev.h
> 

> diff --git a/drivers/xen/Kconfig b/drivers/xen/Kconfig
> index 8d2501e..db500cc 100644
> --- a/drivers/xen/Kconfig
> +++ b/drivers/xen/Kconfig
> @@ -196,4 +196,8 @@ config XEN_ACPI_PROCESSOR
>  	  called xen_acpi_processor  If you do not know what to choose, select
>  	  M here. If the CPUFREQ drivers are built in, select Y here.
>  
> +config XEN_V4V
> +	tristate "Xen V4V driver"
> +        default m
> +
>  endmenu
> diff --git a/drivers/xen/Makefile b/drivers/xen/Makefile
> index fc34886..a3d3014 100644
> --- a/drivers/xen/Makefile
> +++ b/drivers/xen/Makefile
> @@ -21,6 +21,7 @@ obj-$(CONFIG_XEN_DOM0)			+= pci.o acpi.o
>  obj-$(CONFIG_XEN_PCIDEV_BACKEND)	+= xen-pciback/
>  obj-$(CONFIG_XEN_PRIVCMD)		+= xen-privcmd.o
>  obj-$(CONFIG_XEN_ACPI_PROCESSOR)	+= xen-acpi-processor.o
> +obj-$(CONFIG_XEN_V4V)			+= v4v.o
>  xen-evtchn-y				:= evtchn.o
>  xen-gntdev-y				:= gntdev.o
>  xen-gntalloc-y				:= gntalloc.o
> diff --git a/drivers/xen/v4v.c b/drivers/xen/v4v.c
> new file mode 100644
> index 0000000..141be66
> --- /dev/null
> +++ b/drivers/xen/v4v.c
> @@ -0,0 +1,2639 @@
> +/******************************************************************************
> + * drivers/xen/v4v/v4v.c
> + *
> + * V4V interdomain communication driver.
> + *
> + * Copyright (c) 2012 Jean Guyader
> + * Copyright (c) 2009 Ross Philipson
> + * Copyright (c) 2009 James McKenzie
> + * Copyright (c) 2009 Citrix Systems, Inc.
> + *
> + * This program is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU General Public License version 2
> + * as published by the Free Software Foundation; or, when distributed
> + * separately from the Linux kernel or incorporated into other
> + * software packages, subject to the following license:
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a copy
> + * of this source file (the "Software"), to deal in the Software without
> + * restriction, including without limitation the rights to use, copy, modify,
> + * merge, publish, distribute, sublicense, and/or sell copies of the Software,
> + * and to permit persons to whom the Software is furnished to do so, subject to
> + * the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be included in
> + * all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
> + * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
> + * IN THE SOFTWARE.
> + */
> +
> +#include <linux/mm.h>
> +#include <linux/init.h>
> +#include <linux/module.h>
> +#include <linux/vmalloc.h>
> +#include <linux/interrupt.h>
> +#include <linux/spinlock.h>
> +#include <linux/list.h>
> +#include <linux/socket.h>
> +#include <linux/sched.h>
> +#include <xen/events.h>
> +#include <xen/evtchn.h>
> +#include <xen/page.h>
> +#include <xen/xen.h>
> +#include <linux/fs.h>
> +#include <linux/platform_device.h>
> +#include <linux/miscdevice.h>
> +#include <linux/major.h>
> +#include <linux/proc_fs.h>
> +#include <linux/poll.h>
> +#include <linux/random.h>
> +#include <linux/wait.h>
> +#include <linux/file.h>
> +#include <linux/mount.h>
> +
> +#include <xen/interface/v4v.h>
> +#include <xen/v4vdev.h>
> +#include "v4v_utils.h"
> +
> +#define DEFAULT_RING_SIZE \
> +    (V4V_ROUNDUP((((PAGE_SIZE)*32) - sizeof(v4v_ring_t)-V4V_ROUNDUP(1))))
> +
> +/* The type of a ring */
> +typedef enum {
> +        V4V_RTYPE_IDLE = 0,
> +        V4V_RTYPE_DGRAM,
> +        V4V_RTYPE_LISTENER,
> +        V4V_RTYPE_CONNECTOR,
> +} v4v_rtype;
> +
> +/* The state of a v4v_private */
> +typedef enum {
> +        V4V_STATE_IDLE = 0,
> +        V4V_STATE_BOUND,
> +        V4V_STATE_LISTENING,
> +        V4V_STATE_ACCEPTED,
> +        V4V_STATE_CONNECTING,
> +        V4V_STATE_CONNECTED,
> +        V4V_STATE_DISCONNECTED
> +} v4v_state;
> +
> +typedef enum {
> +        V4V_PTYPE_DGRAM = 1,
> +        V4V_PTYPE_STREAM,
> +} v4v_ptype;
> +
> +static rwlock_t list_lock;
> +static struct list_head ring_list;
> +
> +struct v4v_private;
> +
> +/*
> + * The ring pointer itself is protected by the refcnt; the lists it is in
> + * are protected by list_lock.
> + *
> + * It is permissible to decrement the refcnt whilst holding the read lock,
> + * and then clean up refcnt==0 rings later.
> + *
> + * If a ring has refcnt != 0 we expect ->ring to be non-NULL, and the ring
> + * to be registered with Xen.
> + */
> +
> +struct ring {
> +        struct list_head node;
> +        atomic_t refcnt;
> +
> +        spinlock_t lock;        /* Protects the data in the v4v_ring_t also privates and sponsor */
> +
> +        struct list_head privates;      /* Protected by lock */
> +        struct v4v_private *sponsor;    /* Protected by lock */
> +
> +        v4v_rtype type;
> +
> +        /* Ring */
> +        v4v_ring_t *ring;
> +        v4v_pfn_t *pfn_list;
> +        size_t pfn_list_npages;
> +        int order;
> +};
> +
> +struct v4v_private {
> +        struct list_head node;
> +        v4v_state state;
> +        v4v_ptype ptype;
> +        uint32_t desired_ring_size;
> +        struct ring *r;
> +        wait_queue_head_t readq;
> +        wait_queue_head_t writeq;
> +        v4v_addr_t peer;
> +        uint32_t conid;
> +        spinlock_t pending_recv_lock;   /* Protects pending messages, and pending_error */
> +        struct list_head pending_recv_list;     /* For LISTENER contains only ... */
> +        atomic_t pending_recv_count;
> +        int pending_error;
> +        int full;
> +        int send_blocked;
> +        int rx;
> +};
> +
> +struct pending_recv {
> +        struct list_head node;
> +        v4v_addr_t from;
> +        size_t data_len, data_ptr;
> +        struct v4v_stream_header sh;
> +        uint8_t data[0];
> +} V4V_PACKED;
> +
> +static spinlock_t interrupt_lock;
> +static spinlock_t pending_xmit_lock;
> +static struct list_head pending_xmit_list;
> +static atomic_t pending_xmit_count;
> +
> +enum v4v_pending_xmit_type {
> +        V4V_PENDING_XMIT_INLINE = 1,    /* Send the inline xmit */
> +        V4V_PENDING_XMIT_WAITQ_MATCH_SPONSOR,   /* Wake up writeq of sponsor of the ringid from */
> +        V4V_PENDING_XMIT_WAITQ_MATCH_PRIVATES,  /* Wake up writeq of a private of ringid from with conid */
> +};
> +
> +struct pending_xmit {
> +        struct list_head node;
> +        enum v4v_pending_xmit_type type;
> +        uint32_t conid;
> +        struct v4v_ring_id from;
> +        v4v_addr_t to;
> +        size_t len;
> +        uint32_t protocol;
> +        uint8_t data[0];
> +};
> +
> +#define MAX_PENDING_RECVS        16
> +
> +/* Hypercalls */
> +
> +static inline int __must_check
> +HYPERVISOR_v4v_op(int cmd, void *arg1, void *arg2,
> +                  uint32_t arg3, uint32_t arg4)
> +{
> +        return _hypercall5(int, v4v_op, cmd, arg1, arg2, arg3, arg4);
> +}
> +
> +static int v4v_info(v4v_info_t *info)
> +{
> +        (void)(*(volatile int*)info);
> +        return HYPERVISOR_v4v_op (V4VOP_info, info, NULL, 0, 0);
> +}
> +
> +static int H_v4v_register_ring(v4v_ring_t * r, v4v_pfn_t * l, size_t npages)
> +{
> +        (void)(*(volatile int *)r);
> +        return HYPERVISOR_v4v_op(V4VOP_register_ring, r, l, npages, 0);
> +}
> +
> +static int H_v4v_unregister_ring(v4v_ring_t * r)
> +{
> +        (void)(*(volatile int *)r);
> +        return HYPERVISOR_v4v_op(V4VOP_unregister_ring, r, NULL, 0, 0);
> +}
> +
> +static int
> +H_v4v_send(v4v_addr_t * s, v4v_addr_t * d, const void *buf, uint32_t len,
> +           uint32_t protocol)
> +{
> +        v4v_send_addr_t addr;
> +        addr.src = *s;
> +        addr.dst = *d;
> +        return HYPERVISOR_v4v_op(V4VOP_send, &addr, (void *)buf, len, protocol);
> +}
> +
> +static int
> +H_v4v_sendv(v4v_addr_t * s, v4v_addr_t * d, const v4v_iov_t * iovs,
> +            uint32_t niov, uint32_t protocol)
> +{
> +        v4v_send_addr_t addr;
> +        addr.src = *s;
> +        addr.dst = *d;
> +        return HYPERVISOR_v4v_op(V4VOP_sendv, &addr, (void *)iovs, niov,
> +                                 protocol);
> +}
> +
> +static int H_v4v_notify(v4v_ring_data_t * rd)
> +{
> +        return HYPERVISOR_v4v_op(V4VOP_notify, rd, NULL, 0, 0);
> +}
> +
> +static int H_v4v_viptables_add(v4v_viptables_rule_t * rule, int position)
> +{
> +        return HYPERVISOR_v4v_op(V4VOP_viptables_add, rule, NULL,
> +                                 position, 0);
> +}
> +
> +static int H_v4v_viptables_del(v4v_viptables_rule_t * rule, int position)
> +{
> +        return HYPERVISOR_v4v_op(V4VOP_viptables_del, rule, NULL,
> +                                 position, 0);
> +}
> +
> +static int H_v4v_viptables_list(struct v4v_viptables_list *list)
> +{
> +        return HYPERVISOR_v4v_op(V4VOP_viptables_list, list, NULL, 0, 0);
> +}
> +
> +/* Port/Ring uniqueness */
> +
> +/* Need to hold write lock for all of these */
> +
> +static int v4v_id_in_use(struct v4v_ring_id *id)
> +{
> +        struct ring *r;
> +
> +        list_for_each_entry(r, &ring_list, node) {
> +                if ((r->ring->id.addr.port == id->addr.port)
> +                    && (r->ring->id.partner == id->partner))
> +                        return 1;
> +        }
> +
> +        return 0;
> +}
> +
> +static int v4v_port_in_use(uint32_t port, uint32_t * max)
> +{
> +        uint32_t ret = 0;
> +        struct ring *r;
> +
> +        list_for_each_entry(r, &ring_list, node) {
> +                if (r->ring->id.addr.port == port)
> +                        ret++;
> +                if (max && (r->ring->id.addr.port > *max))
> +                        *max = r->ring->id.addr.port;
> +        }
> +
> +        return ret;
> +}
> +
> +static uint32_t v4v_random_port(void)
> +{
> +        uint32_t port;
> +
> +        port = random32();
> +        port |= 0x80000000U;
> +        if (port > 0xf0000000U) {
> +                port -= 0x10000000;
> +        }
> +
> +        return port;
> +}
> +
> +/* Caller needs to hold lock */
> +static uint32_t v4v_find_spare_port_number(void)
> +{
> +        uint32_t port, max = 0x80000000U;
> +
> +        port = v4v_random_port();
> +        if (!v4v_port_in_use(port, &max)) {
> +                return port;
> +        } else {
> +                port = max + 1;
> +        }
> +
> +        return port;
> +}
> +
> +/* Ring Goo */
> +
> +static int register_ring(struct ring *r)
> +{
> +        return H_v4v_register_ring((void *)r->ring,
> +                                   r->pfn_list,
> +                                   r->pfn_list_npages);
> +}
> +
> +static int unregister_ring(struct ring *r)
> +{
> +        return H_v4v_unregister_ring((void *)r->ring);
> +}
> +
> +static void refresh_pfn_list(struct ring *r)
> +{
> +        uint8_t *b = (void *)r->ring;
> +        int i;
> +
> +        for (i = 0; i < r->pfn_list_npages; ++i) {
> +                r->pfn_list[i] = pfn_to_mfn(vmalloc_to_pfn(b));
> +                b += PAGE_SIZE;
> +        }
> +}
> +
> +static void allocate_pfn_list(struct ring *r)
> +{
> +        int n = (r->ring->len + PAGE_SIZE - 1) >> PAGE_SHIFT;
> +        int len = sizeof(v4v_pfn_t) * n;
> +
> +        r->pfn_list = kmalloc(len, GFP_KERNEL);
> +        if (!r->pfn_list)
> +                return;
> +        r->pfn_list_npages = n;
> +
> +        refresh_pfn_list(r);
> +}
> +
> +static int allocate_ring(struct ring *r, int ring_len)
> +{
> +        int len = ring_len + sizeof(v4v_ring_t);
> +        int ret = 0;
> +
> +        if (ring_len != V4V_ROUNDUP(ring_len)) {
> +                ret = -EINVAL;
> +                goto fail;
> +        }
> +
> +        r->ring = NULL;
> +        r->pfn_list = NULL;
> +        r->order = 0;
> +
> +        r->order = get_order(len);
> +
> +        r->ring = vmalloc(len);
> +
> +        if (!r->ring) {
> +                ret = -ENOMEM;
> +                goto fail;
> +        }
> +
> +        memset((void *)r->ring, 0, len);
> +
> +        r->ring->magic = V4V_RING_MAGIC;
> +        r->ring->len = ring_len;
> +        r->ring->rx_ptr = r->ring->tx_ptr = 0;
> +
> +        memset((void *)r->ring->ring, 0x5a, ring_len);
> +
> +        allocate_pfn_list(r);
> +        if (!r->pfn_list) {
> +
> +                ret = -ENOMEM;
> +                goto fail;
> +        }
> +
> +        return 0;
> + fail:
> +        if (r->ring)
> +                vfree(r->ring);
> +        if (r->pfn_list)
> +                kfree(r->pfn_list);
> +
> +        r->ring = NULL;
> +        r->pfn_list = NULL;
> +
> +        return ret;
> +}
> +
> +/* Caller must hold lock */
> +static void recover_ring(struct ring *r)
> +{
> +        /* It's all gone horribly wrong */
> +        r->ring->rx_ptr = r->ring->tx_ptr;
> +        /* Xen updates tx_ptr atomically to always be pointing somewhere sensible */
> +}
> +
> +/* Caller must hold no locks, ring is allocated with a refcnt of 1 */
> +static int new_ring(struct v4v_private *sponsor, struct v4v_ring_id *pid)
> +{
> +        struct v4v_ring_id id = *pid;
> +        struct ring *r;
> +        int ret;
> +        unsigned long flags;
> +
> +        if (id.addr.domain != V4V_DOMID_NONE)
> +                return -EINVAL;
> +
> +        r = kmalloc(sizeof(struct ring), GFP_KERNEL);
> +        memset(r, 0, sizeof(struct ring));
> +
> +        ret = allocate_ring(r, sponsor->desired_ring_size);
> +        if (ret) {
> +                kfree(r);
> +                return ret;
> +        }
> +
> +        INIT_LIST_HEAD(&r->privates);
> +        spin_lock_init(&r->lock);
> +        atomic_set(&r->refcnt, 1);
> +
> +        write_lock_irqsave(&list_lock, flags);
> +        if (sponsor->state != V4V_STATE_IDLE) {
> +                ret = -EINVAL;
> +                goto fail;
> +        }
> +
> +        if (!id.addr.port) {
> +                id.addr.port = v4v_find_spare_port_number();
> +        } else if (v4v_id_in_use(&id)) {
> +                ret = -EADDRINUSE;
> +                goto fail;
> +        }
> +
> +        r->ring->id = id;
> +        r->sponsor = sponsor;
> +        sponsor->r = r;
> +        sponsor->state = V4V_STATE_BOUND;
> +
> +        ret = register_ring(r);
> +        if (ret)
> +                goto fail;
> +
> +        list_add(&r->node, &ring_list);
> +        write_unlock_irqrestore(&list_lock, flags);
> +        return 0;
> +
> + fail:
> +        write_unlock_irqrestore(&list_lock, flags);
> +
> +        vfree(r->ring);
> +        kfree(r->pfn_list);
> +        kfree(r);
> +
> +        sponsor->r = NULL;
> +        sponsor->state = V4V_STATE_IDLE;
> +
> +        return ret;
> +}
> +
> +/* Cleans up old rings */
> +static void delete_ring(struct ring *r)
> +{
> +        int ret;
> +
> +        list_del(&r->node);
> +
> +        if ((ret = unregister_ring(r))) {
> +                printk(KERN_ERR
> +                       "unregister_ring hypercall failed: %d. Leaking ring.\n",
> +                       ret);
> +        } else {
> +                vfree(r->ring);
> +        }
> +
> +        kfree(r->pfn_list);
> +        kfree(r);
> +}
> +
> +/* Returns !0 if you successfully got a reference to the ring */
> +static int get_ring(struct ring *r)
> +{
> +        return atomic_add_unless(&r->refcnt, 1, 0);
> +}
> +
> +/* Must be called with DEBUG_WRITELOCK; v4v_write_lock */
> +static void put_ring(struct ring *r)
> +{
> +        if (!r)
> +                return;
> +
> +        if (atomic_dec_and_test(&r->refcnt)) {
> +                delete_ring(r);
> +        }
> +}
> +
> +/* Caller must hold ring_lock */
> +static struct ring *find_ring_by_id(struct v4v_ring_id *id)
> +{
> +        struct ring *r;
> +
> +        list_for_each_entry(r, &ring_list, node) {
> +                if (!memcmp
> +                    ((void *)&r->ring->id, id, sizeof(struct v4v_ring_id)))
> +                        return r;
> +        }
> +        return NULL;
> +}
> +
> +/* Caller must hold ring_lock */
> +struct ring *find_ring_by_id_type(struct v4v_ring_id *id, v4v_rtype t)
> +{
> +        struct ring *r;
> +
> +        list_for_each_entry(r, &ring_list, node) {
> +                if (r->type != t)
> +                        continue;
> +                if (!memcmp
> +                    ((void *)&r->ring->id, id, sizeof(struct v4v_ring_id)))
> +                        return r;
> +        }
> +
> +        return NULL;
> +}
> +
> +/* Pending xmits */
> +
> +/* Caller must hold pending_xmit_lock */
> +
> +static void
> +xmit_queue_wakeup_private(struct v4v_ring_id *from,
> +                          uint32_t conid, v4v_addr_t * to, int len, int delete)
> +{
> +        struct pending_xmit *p;
> +
> +        list_for_each_entry(p, &pending_xmit_list, node) {
> +                if (p->type != V4V_PENDING_XMIT_WAITQ_MATCH_PRIVATES)
> +                        continue;
> +                if (p->conid != conid)
> +                        continue;
> +
> +                if ((!memcmp(from, &p->from, sizeof(struct v4v_ring_id)))
> +                    && (!memcmp(to, &p->to, sizeof(v4v_addr_t)))) {
> +                        if (delete) {
> +                                atomic_dec(&pending_xmit_count);
> +                                list_del(&p->node);
> +                        } else {
> +                                p->len = len;
> +                        }
> +                        return;
> +                }
> +        }
> +
> +        if (delete)
> +                return;
> +
> +        p = kmalloc(sizeof(struct pending_xmit), GFP_ATOMIC);
> +        if (!p) {
> +                printk(KERN_ERR
> +                       "Out of memory trying to queue an xmit sponsor wakeup\n");
> +                return;
> +        }
> +        p->type = V4V_PENDING_XMIT_WAITQ_MATCH_PRIVATES;
> +        p->conid = conid;
> +        p->from = *from;
> +        p->to = *to;
> +        p->len = len;
> +
> +        atomic_inc(&pending_xmit_count);
> +        list_add_tail(&p->node, &pending_xmit_list);
> +}
> +
> +/* Caller must hold pending_xmit_lock */
> +static void
> +xmit_queue_wakeup_sponsor(struct v4v_ring_id *from, v4v_addr_t * to,
> +                          int len, int delete)
> +{
> +        struct pending_xmit *p;
> +
> +        list_for_each_entry(p, &pending_xmit_list, node) {
> +                if (p->type != V4V_PENDING_XMIT_WAITQ_MATCH_SPONSOR)
> +                        continue;
> +                if ((!memcmp(from, &p->from, sizeof(struct v4v_ring_id)))
> +                    && (!memcmp(to, &p->to, sizeof(v4v_addr_t)))) {
> +                        if (delete) {
> +                                atomic_dec(&pending_xmit_count);
> +                                list_del(&p->node);
> +                        } else {
> +                                p->len = len;
> +                        }
> +                        return;
> +                }
> +        }
> +
> +        if (delete)
> +                return;
> +
> +        p = kmalloc(sizeof(struct pending_xmit), GFP_ATOMIC);
> +        if (!p) {
> +                printk(KERN_ERR
> +                       "Out of memory trying to queue an xmit sponsor wakeup\n");
> +                return;
> +        }
> +        p->type = V4V_PENDING_XMIT_WAITQ_MATCH_SPONSOR;
> +        p->from = *from;
> +        p->to = *to;
> +        p->len = len;
> +        atomic_inc(&pending_xmit_count);
> +        list_add_tail(&p->node, &pending_xmit_list);
> +}
> +
> +static int
> +xmit_queue_inline(struct v4v_ring_id *from, v4v_addr_t * to,
> +                  void *buf, size_t len, uint32_t protocol)
> +{
> +        ssize_t ret;
> +        unsigned long flags;
> +        struct pending_xmit *p;
> +
> +        spin_lock_irqsave(&pending_xmit_lock, flags);
> +
> +        ret = H_v4v_send(&from->addr, to, buf, len, protocol);
> +        if (ret != -EAGAIN) {
> +                spin_unlock_irqrestore(&pending_xmit_lock, flags);
> +                return ret;
> +        }
> +
> +        p = kmalloc(sizeof(struct pending_xmit) + len, GFP_ATOMIC);
> +        if (!p) {
> +                spin_unlock_irqrestore(&pending_xmit_lock, flags);
> +                printk(KERN_ERR
> +                       "Out of memory trying to queue an xmit of %zu bytes\n",
> +                       len);
> +
> +                return -ENOMEM;
> +        }
> +
> +        p->type = V4V_PENDING_XMIT_INLINE;
> +        p->from = *from;
> +        p->to = *to;
> +        p->len = len;
> +        p->protocol = protocol;
> +
> +        if (len)
> +                memcpy(p->data, buf, len);
> +
> +        list_add_tail(&p->node, &pending_xmit_list);
> +        atomic_inc(&pending_xmit_count);
> +        spin_unlock_irqrestore(&pending_xmit_lock, flags);
> +
> +        return len;
> +}
> +
> +static void
> +xmit_queue_rst_to(struct v4v_ring_id *from, uint32_t conid, v4v_addr_t * to)
> +{
> +        struct v4v_stream_header sh;
> +
> +        if (!to)
> +                return;
> +
> +        sh.conid = conid;
> +        sh.flags = V4V_SHF_RST;
> +        xmit_queue_inline(from, to, &sh, sizeof(sh), V4V_PROTO_STREAM);
> +}
> +
> +/* RX */
> +
> +static int
> +copy_into_pending_recv(struct ring *r, int len, struct v4v_private *p)
> +{
> +        struct pending_recv *pending;
> +        int k;
> +
> +        /* Too much queued? Let the ring take the strain */
> +        if (atomic_read(&p->pending_recv_count) > MAX_PENDING_RECVS) {
> +                spin_lock(&p->pending_recv_lock);
> +                p->full = 1;
> +                spin_unlock(&p->pending_recv_lock);
> +
> +                return -1;
> +        }
> +
> +        pending =
> +            kmalloc(sizeof(struct pending_recv) -
> +                    sizeof(struct v4v_stream_header) + len, GFP_ATOMIC);
> +
> +        if (!pending)
> +                return -1;
> +
> +        pending->data_ptr = 0;
> +        pending->data_len = len - sizeof(struct v4v_stream_header);
> +
> +        k = v4v_copy_out(r->ring, &pending->from, NULL, &pending->sh, len, 1);
> +
> +        spin_lock(&p->pending_recv_lock);
> +        list_add_tail(&pending->node, &p->pending_recv_list);
> +        atomic_inc(&p->pending_recv_count);
> +        p->full = 0;
> +        spin_unlock(&p->pending_recv_lock);
> +
> +        return 0;
> +}
> +
> +/* Notify */
> +
> +/* Caller must hold list_lock */
> +static void
> +wakeup_privates(struct v4v_ring_id *id, v4v_addr_t * peer, uint32_t conid)
> +{
> +        struct ring *r = find_ring_by_id_type(id, V4V_RTYPE_LISTENER);
> +        struct v4v_private *p;
> +
> +        if (!r)
> +                return;
> +
> +        list_for_each_entry(p, &r->privates, node) {
> +                if ((p->conid == conid)
> +                    && !memcmp(peer, &p->peer, sizeof(v4v_addr_t))) {
> +                        p->send_blocked = 0;
> +                        wake_up_interruptible_all(&p->writeq);
> +                        return;
> +                }
> +        }
> +}
> +
> +/* Caller must hold list_lock */
> +static void wakeup_sponsor(struct v4v_ring_id *id)
> +{
> +        struct ring *r = find_ring_by_id(id);
> +
> +        if (!r)
> +                return;
> +
> +        if (!r->sponsor)
> +                return;
> +
> +        r->sponsor->send_blocked = 0;
> +        wake_up_interruptible_all(&r->sponsor->writeq);
> +}
> +
> +static void v4v_null_notify(void)
> +{
> +        H_v4v_notify(NULL);
> +}
> +
> +/* Caller must hold list_lock */
> +static void v4v_notify(void)
> +{
> +        unsigned long flags;
> +        int ret;
> +        int nent;
> +        struct pending_xmit *p, *n;
> +        v4v_ring_data_t *d;
> +        int i = 0;
> +
> +        spin_lock_irqsave(&pending_xmit_lock, flags);
> +
> +        nent = atomic_read(&pending_xmit_count);
> +        d = kmalloc(sizeof(v4v_ring_data_t) +
> +                    nent * sizeof(v4v_ring_data_ent_t), GFP_ATOMIC);
> +        if (!d) {
> +                spin_unlock_irqrestore(&pending_xmit_lock, flags);
> +                return;
> +        }
> +        memset(d, 0, sizeof(v4v_ring_data_t));
> +
> +        d->magic = V4V_RING_DATA_MAGIC;
> +
> +        list_for_each_entry(p, &pending_xmit_list, node) {
> +                if (i != nent) {
> +                        d->data[i].ring = p->to;
> +                        d->data[i].space_required = p->len;
> +                        i++;
> +                }
> +        }
> +        d->nent = i;
> +
> +        if (H_v4v_notify(d)) {
> +                kfree(d);
> +                spin_unlock_irqrestore(&pending_xmit_lock, flags);
> +                //MOAN;
> +                return;
> +        }
> +
> +        i = 0;
> +        list_for_each_entry_safe(p, n, &pending_xmit_list, node) {
> +                int processed = 1;
> +
> +                if (i == nent)
> +                        continue;
> +
> +                if (d->data[i].flags & V4V_RING_DATA_F_EXISTS) {
> +                        switch (p->type) {
> +                        case V4V_PENDING_XMIT_INLINE:
> +                                if (!
> +                                    (d->data[i].flags &
> +                                     V4V_RING_DATA_F_SUFFICIENT)) {
> +                                        processed = 0;
> +                                        break;
> +                                }
> +                                ret =
> +                                    H_v4v_send(&p->from.addr, &p->to, p->data,
> +                                               p->len, p->protocol);
> +                                if (ret == -EAGAIN)
> +                                        processed = 0;
> +                                break;
> +                        case V4V_PENDING_XMIT_WAITQ_MATCH_SPONSOR:
> +                                if (d->
> +                                    data[i].flags & V4V_RING_DATA_F_SUFFICIENT)
> +                                {
> +                                        wakeup_sponsor(&p->from);
> +                                } else {
> +                                        processed = 0;
> +                                }
> +                                break;
> +                        case V4V_PENDING_XMIT_WAITQ_MATCH_PRIVATES:
> +                                if (d->
> +                                    data[i].flags & V4V_RING_DATA_F_SUFFICIENT)
> +                                {
> +                                        wakeup_privates(&p->from, &p->to,
> +                                                        p->conid);
> +                                } else {
> +                                        processed = 0;
> +                                }
> +                                break;
> +                        }
> +                }
> +                if (processed) {
> +                        list_del(&p->node);     /* No one to talk to */
> +                        atomic_dec(&pending_xmit_count);
> +                        kfree(p);
> +                }
> +                i++;
> +        }
> +
> +        spin_unlock_irqrestore(&pending_xmit_lock, flags);
> +        kfree(d);
> +}
> +
> +/* VIPtables */
> +static void
> +v4v_viptables_add(struct v4v_private *p, struct v4v_viptables_rule *rule,
> +                  int position)
> +{
> +        H_v4v_viptables_add(rule, position);
> +}
> +
> +static void
> +v4v_viptables_del(struct v4v_private *p, struct v4v_viptables_rule *rule,
> +                  int position)
> +{
> +        H_v4v_viptables_del(rule, position);
> +}
> +
> +static int v4v_viptables_list(struct v4v_private *p, struct v4v_viptables_list *list)
> +{
> +        return H_v4v_viptables_list(list);
> +}
> +
> +/* State Machines */
> +static int
> +connector_state_machine(struct v4v_private *p, struct v4v_stream_header *sh)
> +{
> +        if (sh->flags & V4V_SHF_ACK) {
> +                switch (p->state) {
> +                case V4V_STATE_CONNECTING:
> +                        p->state = V4V_STATE_CONNECTED;
> +
> +                        spin_lock(&p->pending_recv_lock);
> +                        p->pending_error = 0;
> +                        spin_unlock(&p->pending_recv_lock);
> +
> +                        wake_up_interruptible_all(&p->writeq);
> +                        return 0;
> +                case V4V_STATE_CONNECTED:
> +                case V4V_STATE_DISCONNECTED:
> +                        p->state = V4V_STATE_DISCONNECTED;
> +
> +                        wake_up_interruptible_all(&p->readq);
> +                        wake_up_interruptible_all(&p->writeq);
> +                        return 1;       /* Send RST */
> +                default:
> +                        break;
> +                }
> +        }
> +
> +        if (sh->flags & V4V_SHF_RST) {
> +                switch (p->state) {
> +                case V4V_STATE_CONNECTING:
> +                        spin_lock(&p->pending_recv_lock);
> +                        p->pending_error = -ECONNREFUSED;
> +                        spin_unlock(&p->pending_recv_lock);
> +                        /* Fall through */
> +                case V4V_STATE_CONNECTED:
> +                        p->state = V4V_STATE_DISCONNECTED;
> +                        wake_up_interruptible_all(&p->readq);
> +                        wake_up_interruptible_all(&p->writeq);
> +                        return 0;
> +                default:
> +                        break;
> +                }
> +        }
> +
> +        return 0;
> +}
> +
> +static void
> +acceptor_state_machine(struct v4v_private *p, struct v4v_stream_header *sh)
> +{
> +        if ((sh->flags & V4V_SHF_RST)
> +            && (p->state == V4V_STATE_ACCEPTED)) {
> +                p->state = V4V_STATE_DISCONNECTED;
> +                wake_up_interruptible_all(&p->readq);
> +                wake_up_interruptible_all(&p->writeq);
> +        }
> +}
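The connector/acceptor transitions above are easier to review as a table. Below is a userspace model of just the connector side (hypothetical names; the real driver additionally takes `pending_recv_lock` and wakes `readq`/`writeq`), using -111 as a stand-in for `-ECONNREFUSED`:

```c
#include <assert.h>

enum state { CONNECTING, CONNECTED, DISCONNECTED };
#define SHF_ACK 1
#define SHF_RST 2

/* Returns 1 when the caller should answer with an RST,
 * mirroring connector_state_machine() above. */
static int connector_step(enum state *s, unsigned flags, int *err)
{
        if (flags & SHF_ACK) {
                if (*s == CONNECTING) {
                        *s = CONNECTED;
                        *err = 0;
                        return 0;
                }
                if (*s == CONNECTED || *s == DISCONNECTED) {
                        *s = DISCONNECTED;
                        return 1;       /* unexpected ACK: send RST */
                }
        }
        if (flags & SHF_RST) {
                if (*s == CONNECTING)
                        *err = -111;    /* -ECONNREFUSED */
                if (*s == CONNECTING || *s == CONNECTED)
                        *s = DISCONNECTED;
        }
        return 0;
}
```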
> +
> +/* Interrupt handler */
> +
> +static int connector_interrupt(struct ring *r)
> +{
> +        ssize_t msg_len;
> +        uint32_t protocol;
> +        struct v4v_stream_header sh;
> +        v4v_addr_t from;
> +        int ret = 0;
> +
> +        if (!r->sponsor) {
> +                //MOAN;
> +                return -1;
> +        }
> +
> +        msg_len = v4v_copy_out(r->ring, &from, &protocol, &sh, sizeof(sh), 0);  /* Peek the header */
> +        if (msg_len == -1) {
> +                recover_ring(r);
> +                return ret;
> +        }
> +
> +        if ((protocol != V4V_PROTO_STREAM) || (msg_len < sizeof(sh))) {
> +                /* Wrong protocol; bin it */
> +                v4v_copy_out(r->ring, NULL, NULL, NULL, 0, 1);
> +                return ret;
> +        }
> +
> +        if (sh.flags & V4V_SHF_SYN) {   /* This is a connector; no one should send us SYN, so send RST back */
> +                msg_len =
> +                    v4v_copy_out(r->ring, &from, &protocol, &sh, sizeof(sh), 1);
> +                if (msg_len == sizeof(sh))
> +                        xmit_queue_rst_to(&r->ring->id, sh.conid, &from);
> +                return ret;
> +        }
> +
> +        /* Right connexion? */
> +        if (sh.conid != r->sponsor->conid) {
> +                msg_len =
> +                    v4v_copy_out(r->ring, &from, &protocol, &sh, sizeof(sh), 1);
> +                xmit_queue_rst_to(&r->ring->id, sh.conid, &from);
> +                return ret;
> +        }
> +
> +        /* Any messages to eat? */
> +        if (sh.flags & (V4V_SHF_ACK | V4V_SHF_RST)) {
> +                msg_len =
> +                    v4v_copy_out(r->ring, &from, &protocol, &sh, sizeof(sh), 1);
> +                if (msg_len == sizeof(sh)) {
> +                        if (connector_state_machine(r->sponsor, &sh))
> +                                xmit_queue_rst_to(&r->ring->id, sh.conid,
> +                                                  &from);
> +                }
> +                return ret;
> +        }
> +        //FIXME set a flag to say wake up the userland process next time, and do that rather than copy
> +        ret = copy_into_pending_recv(r, msg_len, r->sponsor);
> +        wake_up_interruptible_all(&r->sponsor->readq);
> +
> +        return ret;
> +}
> +
> +static int
> +acceptor_interrupt(struct v4v_private *p, struct ring *r,
> +                   struct v4v_stream_header *sh, ssize_t msg_len)
> +{
> +        v4v_addr_t from;
> +        int ret = 0;
> +
> +        if (sh->flags & (V4V_SHF_SYN | V4V_SHF_ACK)) {  /* This is an acceptor; no one should send us SYN or ACK, so send RST back */
> +                msg_len =
> +                    v4v_copy_out(r->ring, &from, NULL, sh, sizeof(*sh), 1);
> +                if (msg_len == sizeof(*sh))
> +                        xmit_queue_rst_to(&r->ring->id, sh->conid, &from);
> +                return ret;
> +        }
> +
> +        /* Is it all over? */
> +        if (sh->flags & V4V_SHF_RST) {
> +                /* Consume the RST */
> +                msg_len =
> +                    v4v_copy_out(r->ring, &from, NULL, sh, sizeof(*sh), 1);
> +                if (msg_len == sizeof(*sh))
> +                        acceptor_state_machine(p, sh);
> +                return ret;
> +        }
> +
> +        /* Copy the message out */
> +        ret = copy_into_pending_recv(r, msg_len, p);
> +        wake_up_interruptible_all(&p->readq);
> +
> +        return ret;
> +}
> +
> +static int listener_interrupt(struct ring *r)
> +{
> +        int ret = 0;
> +        ssize_t msg_len;
> +        uint32_t protocol;
> +        struct v4v_stream_header sh;
> +        struct v4v_private *p;
> +        v4v_addr_t from;
> +
> +        msg_len = v4v_copy_out(r->ring, &from, &protocol, &sh, sizeof(sh), 0);  /* Peek the header */
> +        if (msg_len == -1) {
> +                recover_ring(r);
> +                return ret;
> +        }
> +
> +        if ((protocol != V4V_PROTO_STREAM) || (msg_len < sizeof(sh))) {
> +                /* Wrong protocol; bin it */
> +                v4v_copy_out(r->ring, NULL, NULL, NULL, 0, 1);
> +                return ret;
> +        }
> +
> +        list_for_each_entry(p, &r->privates, node) {
> +                if ((p->conid == sh.conid)
> +                    && (!memcmp(&p->peer, &from, sizeof(v4v_addr_t)))) {
> +                        ret = acceptor_interrupt(p, r, &sh, msg_len);
> +                        return ret;
> +                }
> +        }
> +
> +        /* Consume it */
> +        if (r->sponsor && (sh.flags & V4V_SHF_RST)) {
> +                /*
> +                 * If we previously received a SYN which has not been pulled by
> +                 * v4v_accept() from the pending queue yet, the RST will be dropped here
> +                 * and the connection will never be closed.
> +                 * Hence we must make sure to evict the SYN header from the pending queue
> +                 * before it gets picked up by v4v_accept().
> +                 */
> +                struct pending_recv *pending, *t;
> +
> +                spin_lock(&r->sponsor->pending_recv_lock);
> +                list_for_each_entry_safe(pending, t,
> +                                         &r->sponsor->pending_recv_list, node) {
> +                        if (pending->sh.flags & V4V_SHF_SYN
> +                            && pending->sh.conid == sh.conid) {
> +                                list_del(&pending->node);
> +                                atomic_dec(&r->sponsor->pending_recv_count);
> +                                kfree(pending);
> +                                break;
> +                        }
> +                }
> +                spin_unlock(&r->sponsor->pending_recv_lock);
> +
> +                /* An RST to a listener should have been picked up above for its connexion; drop it */
> +                v4v_copy_out(r->ring, NULL, NULL, NULL, sizeof(sh), 1);
> +                return ret;
> +        }
> +
> +        if (sh.flags & V4V_SHF_SYN) {
> +                /* Syn to new connexion */
> +                if ((!r->sponsor) || (msg_len != sizeof(sh))) {
> +                        v4v_copy_out(r->ring, NULL, NULL, NULL,
> +                                           sizeof(sh), 1);
> +                        return ret;
> +                }
> +                ret = copy_into_pending_recv(r, msg_len, r->sponsor);
> +                wake_up_interruptible_all(&r->sponsor->readq);
> +                return ret;
> +        }
> +
> +        v4v_copy_out(r->ring, NULL, NULL, NULL, sizeof(sh), 1);
> +        /* Data for unknown destination, RST them */
> +        xmit_queue_rst_to(&r->ring->id, sh.conid, &from);
> +
> +        return ret;
> +}
> +
> +static void v4v_interrupt_rx(void)
> +{
> +        struct ring *r;
> +
> +        read_lock(&list_lock);
> +
> +        /* Wake up anyone pending */
> +        list_for_each_entry(r, &ring_list, node) {
> +                if (r->ring->tx_ptr == r->ring->rx_ptr)
> +                        continue;
> +
> +                switch (r->type) {
> +                case V4V_RTYPE_IDLE:
> +                        v4v_copy_out(r->ring, NULL, NULL, NULL, 1, 1);
> +                        break;
> +                case V4V_RTYPE_DGRAM:  /* For datagrams we just wake up the reader */
> +                        if (r->sponsor)
> +                                wake_up_interruptible_all(&r->sponsor->readq);
> +                        break;
> +                case V4V_RTYPE_CONNECTOR:
> +                        spin_lock(&r->lock);
> +                        while ((r->ring->tx_ptr != r->ring->rx_ptr)
> +                               && !connector_interrupt(r)) ;
> +                        spin_unlock(&r->lock);
> +                        break;
> +                case V4V_RTYPE_LISTENER:
> +                        spin_lock(&r->lock);
> +                        while ((r->ring->tx_ptr != r->ring->rx_ptr)
> +                               && !listener_interrupt(r)) ;
> +                        spin_unlock(&r->lock);
> +                        break;
> +                default:       /* enum warning */
> +                        break;
> +                }
> +        }
> +        read_unlock(&list_lock);
> +}
> +
> +static irqreturn_t v4v_interrupt(int irq, void *dev_id)
> +{
> +        unsigned long flags;
> +
> +        spin_lock_irqsave(&interrupt_lock, flags);
> +        v4v_interrupt_rx();
> +        v4v_notify();
> +        spin_unlock_irqrestore(&interrupt_lock, flags);
> +
> +        return IRQ_HANDLED;
> +}
> +
> +static void v4v_fake_irq(void)
> +{
> +        unsigned long flags;
> +
> +        spin_lock_irqsave(&interrupt_lock, flags);
> +        v4v_interrupt_rx();
> +        v4v_null_notify();
> +        spin_unlock_irqrestore(&interrupt_lock, flags);
> +}
> +
> +/* Filesystem gunge */
> +
> +#define V4VFS_MAGIC 0x56345644  /* "V4VD" */
> +
> +static struct vfsmount *v4v_mnt = NULL;
> +static const struct file_operations v4v_fops_stream;
> +
> +static struct dentry *v4vfs_mount_pseudo(struct file_system_type *fs_type,
> +                                         int flags, const char *dev_name,
> +                                         void *data)
> +{
> +        return mount_pseudo(fs_type, "v4v:", NULL, NULL, V4VFS_MAGIC);
> +}
> +
> +static struct file_system_type v4v_fs = {
> +        /* No owner field so module can be unloaded */
> +        .name = "v4vfs",
> +        .mount = v4vfs_mount_pseudo,
> +        .kill_sb = kill_litter_super
> +};
> +
> +static int setup_fs(void)
> +{
> +        int ret;
> +
> +        ret = register_filesystem(&v4v_fs);
> +        if (ret) {
> +                printk(KERN_ERR
> +                       "v4v: couldn't register tedious filesystem thingy\n");
> +                return ret;
> +        }
> +
> +        v4v_mnt = kern_mount(&v4v_fs);
> +        if (IS_ERR(v4v_mnt)) {
> +                unregister_filesystem(&v4v_fs);
> +                ret = PTR_ERR(v4v_mnt);
> +                printk(KERN_ERR
> +                       "v4v: couldn't mount tedious filesystem thingy\n");
> +                return ret;
> +        }
> +
> +        return 0;
> +}
> +
> +static void unsetup_fs(void)
> +{
> +        mntput(v4v_mnt);
> +        unregister_filesystem(&v4v_fs);
> +}
> +
> +/* Methods */
> +
> +static int stream_connected(struct v4v_private *p)
> +{
> +        switch (p->state) {
> +        case V4V_STATE_ACCEPTED:
> +        case V4V_STATE_CONNECTED:
> +                return 1;
> +        default:
> +                return 0;
> +        }
> +}
> +
> +static ssize_t
> +v4v_try_send_sponsor(struct v4v_private *p,
> +                     v4v_addr_t * dest,
> +                     const void *buf, size_t len, uint32_t protocol)
> +{
> +        ssize_t ret;
> +        unsigned long flags;
> +
> +        ret = H_v4v_send(&p->r->ring->id.addr, dest, buf, len, protocol);
> +        spin_lock_irqsave(&pending_xmit_lock, flags);
> +        if (ret == -EAGAIN) {
> +                /* Add pending xmit */
> +                xmit_queue_wakeup_sponsor(&p->r->ring->id, dest, len, 0);
> +                p->send_blocked++;
> +
> +        } else {
> +                /* Remove pending xmit */
> +                xmit_queue_wakeup_sponsor(&p->r->ring->id, dest, len, 1);
> +                p->send_blocked = 0;
> +        }
> +
> +        spin_unlock_irqrestore(&pending_xmit_lock, flags);
> +
> +        return ret;
> +}
> +
> +static ssize_t
> +v4v_try_sendv_sponsor(struct v4v_private *p,
> +                      v4v_addr_t * dest,
> +                      const v4v_iov_t * iovs, size_t niov, size_t len,
> +                      uint32_t protocol)
> +{
> +        ssize_t ret;
> +        unsigned long flags;
> +
> +        ret = H_v4v_sendv(&p->r->ring->id.addr, dest, iovs, niov, protocol);
> +
> +        spin_lock_irqsave(&pending_xmit_lock, flags);
> +        if (ret == -EAGAIN) {
> +                /* Add pending xmit */
> +                xmit_queue_wakeup_sponsor(&p->r->ring->id, dest, len, 0);
> +                p->send_blocked++;
> +
> +        } else {
> +                /* Remove pending xmit */
> +                xmit_queue_wakeup_sponsor(&p->r->ring->id, dest, len, 1);
> +                p->send_blocked = 0;
> +        }
> +        spin_unlock_irqrestore(&pending_xmit_lock, flags);
> +
> +        return ret;
> +}
> +
> +/*
> + * Try to send from one of the ring's privates (not its sponsor),
> + * and queue a writeq wakeup if we fail
> + */
> +static ssize_t
> +v4v_try_sendv_privates(struct v4v_private *p,
> +                       v4v_addr_t * dest,
> +                       const v4v_iov_t * iovs, size_t niov, size_t len,
> +                       uint32_t protocol)
> +{
> +        ssize_t ret;
> +        unsigned long flags;
> +
> +        ret = H_v4v_sendv(&p->r->ring->id.addr, dest, iovs, niov, protocol);
> +
> +        spin_lock_irqsave(&pending_xmit_lock, flags);
> +        if (ret == -EAGAIN) {
> +                /* Add pending xmit */
> +                xmit_queue_wakeup_private(&p->r->ring->id, p->conid, dest, len,
> +                                          0);
> +                p->send_blocked++;
> +        } else {
> +                /* Remove pending xmit */
> +                xmit_queue_wakeup_private(&p->r->ring->id, p->conid, dest, len,
> +                                          1);
> +                p->send_blocked = 0;
> +        }
> +        spin_unlock_irqrestore(&pending_xmit_lock, flags);
> +
> +        return ret;
> +}
> +
> +static ssize_t
> +v4v_sendto_from_sponsor(struct v4v_private *p,
> +                        const void *buf, size_t len,
> +                        int nonblock, v4v_addr_t * dest, uint32_t protocol)
> +{
> +        ssize_t ret = 0, ts_ret;
> +
> +        switch (p->state) {
> +        case V4V_STATE_CONNECTING:
> +                ret = -ENOTCONN;
> +                break;
> +        case V4V_STATE_DISCONNECTED:
> +                ret = -EPIPE;
> +                break;
> +        case V4V_STATE_BOUND:
> +        case V4V_STATE_CONNECTED:
> +                break;
> +        default:
> +                ret = -EINVAL;
> +        }
> +
> +        if (len > (p->r->ring->len - sizeof(struct v4v_ring_message_header)))
> +                return -EMSGSIZE;
> +
> +        if (ret)
> +                return ret;
> +
> +        if (nonblock) {
> +                return H_v4v_send(&p->r->ring->id.addr, dest, buf, len,
> +                                  protocol);
> +        }
> +        /*
> +         * I happen to know that wait_event_interruptible will never
> +         * evaluate the 2nd argument once it has returned true, but
> +         * I shouldn't rely on that.
> +         *
> +         * The EAGAIN will cause xen to send an interrupt, which will
> +         * wake us up via the pending_xmit_list and writeq.
> +         */
> +        ret = wait_event_interruptible(p->writeq,
> +                                       ((ts_ret =
> +                                         v4v_try_send_sponsor
> +                                         (p, dest,
> +                                          buf, len, protocol)) != -EAGAIN));
> +        if (ret == 0)
> +                ret = ts_ret;
> +
> +        return ret;
> +}
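The blocking path in the three send helpers is the same pattern: attempt the hypercall, and if it returns -EAGAIN, sleep until the remote end drains the ring. A minimal userspace sketch of that retry contract (a plain loop stands in for the writeq sleep/wakeup; `fake_send` and `attempts_left` are inventions for illustration):

```c
#include <assert.h>
#include <errno.h>

/* Pretends the destination ring has room after a few tries. */
static int attempts_left;

static int fake_send(void)
{
        return (--attempts_left > 0) ? -EAGAIN : 42;    /* bytes sent */
}

/* The driver sleeps on p->writeq instead of spinning; the V4V
 * interrupt walks pending_xmit_list and wakes the sleeper. */
static int send_blocking(void)
{
        int ret;

        do {
                ret = fake_send();
        } while (ret == -EAGAIN);
        return ret;
}
```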
> +
> +static ssize_t
> +v4v_stream_sendvto_from_sponsor(struct v4v_private *p,
> +                                const v4v_iov_t * iovs, size_t niov,
> +                                size_t len, int nonblock,
> +                                v4v_addr_t * dest, uint32_t protocol)
> +{
> +        ssize_t ret = 0, ts_ret;
> +
> +        switch (p->state) {
> +        case V4V_STATE_CONNECTING:
> +                return -ENOTCONN;
> +        case V4V_STATE_DISCONNECTED:
> +                return -EPIPE;
> +        case V4V_STATE_BOUND:
> +        case V4V_STATE_CONNECTED:
> +                break;
> +        default:
> +                return -EINVAL;
> +        }
> +
> +        if (len > (p->r->ring->len - sizeof(struct v4v_ring_message_header)))
> +                return -EMSGSIZE;
> +
> +        if (ret)
> +                return ret;
> +
> +        if (nonblock) {
> +                return H_v4v_sendv(&p->r->ring->id.addr, dest, iovs, niov,
> +                                   protocol);
> +        }
> +        /*
> +         * I happen to know that wait_event_interruptible will never
> +         * evaluate the 2nd argument once it has returned true, but
> +         * I shouldn't rely on that.
> +         *
> +         * The EAGAIN will cause xen to send an interrupt, which will
> +         * wake us up via the pending_xmit_list and writeq.
> +         */
> +        ret = wait_event_interruptible(p->writeq,
> +                                       ((ts_ret =
> +                                         v4v_try_sendv_sponsor
> +                                         (p, dest,
> +                                          iovs, niov, len,
> +                                          protocol)) != -EAGAIN)
> +                                       || !stream_connected(p));
> +        if (ret == 0)
> +                ret = ts_ret;
> +
> +        return ret;
> +}
> +static ssize_t
> +v4v_stream_sendvto_from_private(struct v4v_private *p,
> +                                const v4v_iov_t * iovs, size_t niov,
> +                                size_t len, int nonblock,
> +                                v4v_addr_t * dest, uint32_t protocol)
> +{
> +        ssize_t ret = 0, ts_ret;
> +
> +        switch (p->state) {
> +        case V4V_STATE_DISCONNECTED:
> +                return -EPIPE;
> +        case V4V_STATE_ACCEPTED:
> +                break;
> +        default:
> +                return -EINVAL;
> +        }
> +
> +        if (len > (p->r->ring->len - sizeof(struct v4v_ring_message_header)))
> +                return -EMSGSIZE;
> +
> +        if (ret)
> +                return ret;
> +
> +        if (nonblock) {
> +                return H_v4v_sendv(&p->r->ring->id.addr, dest, iovs, niov,
> +                                   protocol);
> +        }
> +        /*
> +         * I happen to know that wait_event_interruptible will never
> +         * evaluate the 2nd argument once it has returned true, but
> +         * I shouldn't rely on that.
> +         *
> +         * The EAGAIN will cause xen to send an interrupt, which will
> +         * wake us up via the pending_xmit_list and writeq.
> +         */
> +        ret = wait_event_interruptible(p->writeq,
> +                                       ((ts_ret =
> +                                         v4v_try_sendv_privates
> +                                         (p, dest,
> +                                          iovs, niov, len,
> +                                          protocol)) != -EAGAIN)
> +                                       || !stream_connected(p));
> +        if (ret == 0)
> +                ret = ts_ret;
> +
> +        return ret;
> +}
> +
> +static int v4v_get_sock_name(struct v4v_private *p, struct v4v_ring_id *id)
> +{
> +        int rc = 0;
> +
> +        read_lock(&list_lock);
> +        if ((p->r) && (p->r->ring)) {
> +                *id = p->r->ring->id;
> +        } else {
> +                rc = -EINVAL;
> +        }
> +        read_unlock(&list_lock);
> +
> +        return rc;
> +}
> +
> +static int v4v_get_peer_name(struct v4v_private *p, v4v_addr_t * id)
> +{
> +        int rc = 0;
> +        read_lock(&list_lock);
> +
> +        switch (p->state) {
> +        case V4V_STATE_CONNECTING:
> +        case V4V_STATE_CONNECTED:
> +        case V4V_STATE_ACCEPTED:
> +                *id = p->peer;
> +                break;
> +        default:
> +                rc = -ENOTCONN;
> +        }
> +
> +        read_unlock(&list_lock);
> +        return rc;
> +}
> +
> +static int v4v_set_ring_size(struct v4v_private *p, uint32_t ring_size)
> +{
> +
> +        if (ring_size <
> +            (sizeof(struct v4v_ring_message_header) + V4V_ROUNDUP(1)))
> +                return -EINVAL;
> +        if (ring_size != V4V_ROUNDUP(ring_size))
> +                return -EINVAL;
> +
> +        read_lock(&list_lock);
> +        if (p->state != V4V_STATE_IDLE) {
> +                read_unlock(&list_lock);
> +                return -EINVAL;
> +        }
> +
> +        p->desired_ring_size = ring_size;
> +        read_unlock(&list_lock);
> +
> +        return 0;
> +}
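The two checks in v4v_set_ring_size() amount to "big enough for one message, and a whole number of ring units". Sketched standalone below; the real alignment comes from V4V_ROUNDUP in the V4V headers, so the 16-byte unit and header size here are assumptions, not the driver's definitions:

```c
#include <assert.h>
#include <stdint.h>

#define MSG_HDR_SIZE 16         /* sizeof(struct v4v_ring_message_header), assumed */
#define ROUNDUP(x)   (((x) + 15) & ~15UL)       /* assumed 16-byte units */

static int ring_size_ok(uint32_t sz)
{
        /* Must hold at least a header plus one rounded-up byte of payload */
        if (sz < MSG_HDR_SIZE + ROUNDUP(1))
                return 0;
        /* Must be an exact multiple of the ring unit */
        if (sz != ROUNDUP(sz))
                return 0;
        return 1;
}
```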
> +
> +static ssize_t
> +v4v_recvfrom_dgram(struct v4v_private *p, void *buf, size_t len,
> +                   int nonblock, int peek, v4v_addr_t * src)
> +{
> +        ssize_t ret;
> +        uint32_t protocol;
> +        v4v_addr_t lsrc;
> +
> +        if (!src)
> +                src = &lsrc;
> +
> +retry:
> +        if (!nonblock) {
> +                ret = wait_event_interruptible(p->readq,
> +                                               (p->r->ring->rx_ptr !=
> +                                                p->r->ring->tx_ptr));
> +                if (ret)
> +                        return ret;
> +        }
> +
> +        read_lock(&list_lock);
> +
> +        /*
> +         * For datagrams, we know the interrupt handler will never use
> +         * the ring, so leave irqs on
> +         */
> +        spin_lock(&p->r->lock);
> +        if (p->r->ring->rx_ptr == p->r->ring->tx_ptr) {
> +                spin_unlock(&p->r->lock);
> +                if (nonblock) {
> +                        ret = -EAGAIN;
> +                        goto unlock;
> +                }
> +                read_unlock(&list_lock);
> +                goto retry;
> +        }
> +        ret = v4v_copy_out(p->r->ring, src, &protocol, buf, len, !peek);
> +        if (ret < 0) {
> +                recover_ring(p->r);
> +                spin_unlock(&p->r->lock);
> +                read_unlock(&list_lock);
> +                goto retry;
> +        }
> +        spin_unlock(&p->r->lock);
> +
> +        if (!peek)
> +                v4v_null_notify();
> +
> +        if (protocol != V4V_PROTO_DGRAM) {
> +                /* If peeking, consume the rubbish */
> +                if (peek)
> +                        v4v_copy_out(p->r->ring, NULL, NULL, NULL, 1, 1);
> +                read_unlock(&list_lock);
> +                goto retry;
> +        }
> +
> +        if ((p->state == V4V_STATE_CONNECTED) &&
> +            memcmp(src, &p->peer, sizeof(v4v_addr_t))) {
> +                /* Wrong source - bin it */
> +                if (peek)
> +                        v4v_copy_out(p->r->ring, NULL, NULL, NULL, 1, 1);
> +                read_unlock(&list_lock);
> +                goto retry;
> +        }
> +
> +unlock:
> +        read_unlock(&list_lock);
> +
> +        return ret;
> +}
> +
> +static ssize_t
> +v4v_recv_stream(struct v4v_private *p, void *_buf, int len, int recv_flags,
> +                int nonblock)
> +{
> +        size_t count = 0;
> +        int ret = 0;
> +        unsigned long flags;
> +        int schedule_irq = 0;
> +        uint8_t *buf = (void *)_buf;
> +
> +        read_lock(&list_lock);
> +
> +        switch (p->state) {
> +        case V4V_STATE_DISCONNECTED:
> +                ret = -EPIPE;
> +                goto unlock;
> +        case V4V_STATE_CONNECTING:
> +                ret = -ENOTCONN;
> +                goto unlock;
> +        case V4V_STATE_CONNECTED:
> +        case V4V_STATE_ACCEPTED:
> +                break;
> +        default:
> +                ret = -EINVAL;
> +                goto unlock;
> +        }
> +
> +        do {
> +                if (!nonblock) {
> +                        ret = wait_event_interruptible(p->readq,
> +                                                       (!list_empty(&p->pending_recv_list)
> +                                                        || !stream_connected(p)));
> +
> +                        if (ret)
> +                                break;
> +                }
> +
> +                spin_lock_irqsave(&p->pending_recv_lock, flags);
> +
> +                while (!list_empty(&p->pending_recv_list) && len) {
> +                        size_t to_copy;
> +                        struct pending_recv *pending;
> +                        int unlink = 0;
> +
> +                        pending = list_first_entry(&p->pending_recv_list,
> +                                                   struct pending_recv, node);
> +
> +                        if ((pending->data_len - pending->data_ptr) > len) {
> +                                to_copy = len;
> +                        } else {
> +                                unlink = 1;
> +                                to_copy = pending->data_len - pending->data_ptr;
> +                        }
> +
> +                        if (!access_ok(VERIFY_WRITE, buf, to_copy)) {
> +                                printk(KERN_ERR
> +                                       "V4V - ERROR: buf invalid _buf=%p buf=%p len=%d to_copy=%zu count=%zu\n",
> +                                       _buf, buf, len, to_copy, count);
> +                                spin_unlock_irqrestore(&p->pending_recv_lock, flags);
> +                                read_unlock(&list_lock);
> +                                return -EFAULT;
> +                        }
> +
> +                        if (copy_to_user(buf, pending->data + pending->data_ptr, to_copy))
> +                        {
> +                                spin_unlock_irqrestore(&p->pending_recv_lock, flags);
> +                                read_unlock(&list_lock);
> +                                return -EFAULT;
> +                        }
> +
> +                        if (unlink) {
> +                                list_del(&pending->node);
> +                                kfree(pending);
> +                                atomic_dec(&p->pending_recv_count);
> +                                if (p->full)
> +                                        schedule_irq = 1;
> +                        } else
> +                                pending->data_ptr += to_copy;
> +
> +                        buf += to_copy;
> +                        count += to_copy;
> +                        len -= to_copy;
> +                }
> +
> +                spin_unlock_irqrestore(&p->pending_recv_lock, flags);
> +
> +                if (p->state == V4V_STATE_DISCONNECTED) {
> +                        ret = -EPIPE;
> +                        break;
> +                }
> +
> +                if (nonblock)
> +                        ret = -EAGAIN;
> +
> +        } while ((recv_flags & MSG_WAITALL) && len);
> +
> +unlock:
> +        read_unlock(&list_lock);
> +
> +        if (schedule_irq)
> +                v4v_fake_irq();
> +
> +        return count ? count : ret;
> +}
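The inner loop of v4v_recv_stream() keeps a read cursor (`data_ptr`) per queued chunk and only unlinks a chunk once it is fully drained, so a short read picks up where it left off. A userspace model of that cursor logic (fixed array instead of a kernel list, `memcpy` instead of `copy_to_user`; names are illustrative):

```c
#include <assert.h>
#include <string.h>

struct chunk {
        const char *data;
        size_t len;
        size_t ptr;     /* read cursor, like pending_recv.data_ptr */
};

static size_t drain(struct chunk *q, size_t nq, char *buf, size_t len)
{
        size_t count = 0, i = 0;

        while (i < nq && len) {
                struct chunk *c = &q[i];
                size_t avail = c->len - c->ptr;
                size_t to_copy = avail > len ? len : avail;

                memcpy(buf + count, c->data + c->ptr, to_copy);
                c->ptr += to_copy;
                count += to_copy;
                len -= to_copy;
                if (c->ptr == c->len)
                        i++;    /* chunk drained: the driver kfree()s it here */
        }
        return count;
}
```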
> +
> +static ssize_t
> +v4v_send_stream(struct v4v_private *p, const void *_buf, int len, int nonblock)
> +{
> +        int write_lump;
> +        const uint8_t *buf = _buf;
> +        size_t count = 0;
> +        ssize_t ret;
> +        int to_send;
> +
> +        write_lump = DEFAULT_RING_SIZE >> 2;
> +
> +        switch (p->state) {
> +        case V4V_STATE_DISCONNECTED:
> +                return -EPIPE;
> +        case V4V_STATE_CONNECTING:
> +                return -ENOTCONN;
> +        case V4V_STATE_CONNECTED:
> +        case V4V_STATE_ACCEPTED:
> +                break;
> +        default:
> +                return -EINVAL;
> +        }
> +
> +        while (len) {
> +                struct v4v_stream_header sh;
> +                v4v_iov_t iovs[2];
> +
> +                to_send = len > write_lump ? write_lump : len;
> +                sh.flags = 0;
> +                sh.conid = p->conid;
> +
> +                iovs[0].iov_base = (uintptr_t)&sh;
> +                iovs[0].iov_len = sizeof (sh);
> +
> +                iovs[1].iov_base = (uintptr_t)buf;
> +                iovs[1].iov_len = to_send;
> +
> +                if (p->state == V4V_STATE_CONNECTED)
> +                    ret = v4v_stream_sendvto_from_sponsor(
> +                                p, iovs, 2,
> +                                to_send + sizeof(struct v4v_stream_header),
> +                                nonblock, &p->peer, V4V_PROTO_STREAM);
> +                else
> +                    ret = v4v_stream_sendvto_from_private(
> +                                p, iovs, 2,
> +                                to_send + sizeof(struct v4v_stream_header),
> +                                nonblock, &p->peer, V4V_PROTO_STREAM);
> +
> +                if (ret < 0) {
> +                        return count ? count : ret;
> +                }
> +
> +                len -= to_send;
> +                buf += to_send;
> +                count += to_send;
> +
> +                if (nonblock)
> +                        return count;
> +        }
> +
> +        return count;
> +}
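v4v_send_stream() cuts the user buffer into `write_lump`-sized pieces (a quarter of the ring), each carrying its own stream header. The chunk-count arithmetic can be checked in isolation; DEFAULT_RING_SIZE is assumed to be 4096 here for the sake of the sketch:

```c
#include <assert.h>
#include <stddef.h>

#define DEFAULT_RING_SIZE 4096                  /* assumed */
#define WRITE_LUMP (DEFAULT_RING_SIZE >> 2)     /* 1024, as in the driver */

/* How many sendv calls the loop above issues for a buffer of 'len' */
static size_t count_lumps(size_t len)
{
        size_t n = 0;

        while (len) {
                size_t to_send = len > WRITE_LUMP ? WRITE_LUMP : len;

                len -= to_send;
                n++;
        }
        return n;
}
```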
> +
> +static int v4v_bind(struct v4v_private *p, struct v4v_ring_id *ring_id)
> +{
> +        int ret = 0;
> +
> +        if (ring_id->addr.domain != V4V_DOMID_NONE) {
> +                return -EINVAL;
> +        }
> +
> +        switch (p->ptype) {
> +        case V4V_PTYPE_DGRAM:
> +                ret = new_ring(p, ring_id);
> +                if (!ret)
> +                        p->r->type = V4V_RTYPE_DGRAM;
> +                break;
> +        case V4V_PTYPE_STREAM:
> +                ret = new_ring(p, ring_id);
> +                break;
> +        }
> +
> +        return ret;
> +}
> +
> +static int v4v_listen(struct v4v_private *p)
> +{
> +        if (p->ptype != V4V_PTYPE_STREAM)
> +                return -EINVAL;
> +
> +        if (p->state != V4V_STATE_BOUND) {
> +                return -EINVAL;
> +        }
> +
> +        p->r->type = V4V_RTYPE_LISTENER;
> +        p->state = V4V_STATE_LISTENING;
> +
> +        return 0;
> +}
> +
> +static int v4v_connect(struct v4v_private *p, v4v_addr_t * peer, int nonblock)
> +{
> +        struct v4v_stream_header sh;
> +        int ret = -EINVAL;
> +
> +        if (p->ptype == V4V_PTYPE_DGRAM) {
> +                switch (p->state) {
> +                case V4V_STATE_BOUND:
> +                case V4V_STATE_CONNECTED:
> +                        if (peer) {
> +                                p->state = V4V_STATE_CONNECTED;
> +                                memcpy(&p->peer, peer, sizeof(v4v_addr_t));
> +                        } else {
> +                                p->state = V4V_STATE_BOUND;
> +                        }
> +                        return 0;
> +                default:
> +                        return -EINVAL;
> +                }
> +        }
> +        if (p->ptype != V4V_PTYPE_STREAM) {
> +                return -EINVAL;
> +        }
> +
> +        /* Irritatingly, we need to be restartable */
> +        switch (p->state) {
> +        case V4V_STATE_BOUND:
> +                p->r->type = V4V_RTYPE_CONNECTOR;
> +                p->state = V4V_STATE_CONNECTING;
> +                p->conid = random32();
> +                p->peer = *peer;
> +
> +                sh.flags = V4V_SHF_SYN;
> +                sh.conid = p->conid;
> +
> +                ret =
> +                    xmit_queue_inline(&p->r->ring->id, &p->peer, &sh,
> +                                      sizeof(sh), V4V_PROTO_STREAM);
> +                if (ret == sizeof(sh))
> +                        ret = 0;
> +
> +                if (ret && (ret != -EAGAIN)) {
> +                        p->state = V4V_STATE_BOUND;
> +                        p->r->type = V4V_RTYPE_DGRAM;
> +                        return ret;
> +                }
> +
> +                break;
> +        case V4V_STATE_CONNECTED:
> +                if (memcmp(peer, &p->peer, sizeof(v4v_addr_t))) {
> +                        return -EINVAL;
> +                } else {
> +                        return 0;
> +                }
> +        case V4V_STATE_CONNECTING:
> +                if (memcmp(peer, &p->peer, sizeof(v4v_addr_t))) {
> +                        return -EINVAL;
> +                }
> +                break;
> +        default:
> +                return -EINVAL;
> +        }
> +
> +        if (nonblock) {
> +                return -EINPROGRESS;
> +        }
> +
> +        while (p->state != V4V_STATE_CONNECTED) {
> +                ret =
> +                    wait_event_interruptible(p->writeq,
> +                                             (p->state !=
> +                                              V4V_STATE_CONNECTING));
> +                if (ret)
> +                        return ret;
> +
> +                if (p->state == V4V_STATE_DISCONNECTED) {
> +                        p->state = V4V_STATE_BOUND;
> +                        p->r->type = V4V_RTYPE_DGRAM;
> +                        ret = -ECONNREFUSED;
> +                        break;
> +                }
> +        }
> +
> +        return ret;
> +}
> +
> +static int allocate_fd_with_private(void *private)
> +{
> +        int fd;
> +        struct file *f;
> +        struct qstr name = {.name = "" };
> +        struct path path;
> +        struct inode *ind;
> +
> +        fd = get_unused_fd();
> +        if (fd < 0)
> +                return fd;
> +
> +        path.dentry = d_alloc_pseudo(v4v_mnt->mnt_sb, &name);
> +        if (unlikely(!path.dentry)) {
> +                put_unused_fd(fd);
> +                return -ENOMEM;
> +        }
> +        ind = new_inode(v4v_mnt->mnt_sb);
> +        if (unlikely(!ind)) {
> +                dput(path.dentry);
> +                put_unused_fd(fd);
> +                return -ENOMEM;
> +        }
> +        ind->i_ino = get_next_ino();
> +        ind->i_fop = v4v_mnt->mnt_root->d_inode->i_fop;
> +        ind->i_state = v4v_mnt->mnt_root->d_inode->i_state;
> +        ind->i_mode = v4v_mnt->mnt_root->d_inode->i_mode;
> +        ind->i_uid = current_fsuid();
> +        ind->i_gid = current_fsgid();
> +        d_instantiate(path.dentry, ind);
> +
> +        path.mnt = mntget(v4v_mnt);
> +
> +        f = alloc_file(&path, FMODE_READ | FMODE_WRITE, &v4v_fops_stream);
> +        if (!f) {
> +                /* Unwind the fd, dentry and mnt references taken above */
> +                path_put(&path);
> +                put_unused_fd(fd);
> +                return -ENFILE;
> +        }
> +
> +        f->private_data = private;
> +        fd_install(fd, f);
> +
> +        return fd;
> +}
> +
> +static int
> +v4v_accept(struct v4v_private *p, struct v4v_addr *peer, int nonblock)
> +{
> +        int fd;
> +        int ret = 0;
> +        struct v4v_private *a = NULL;
> +        struct pending_recv *r = NULL;
> +        unsigned long flags;
> +        struct v4v_stream_header sh;
> +
> +        if (p->ptype != V4V_PTYPE_STREAM)
> +                return -ENOTTY;
> +
> +        if (p->state != V4V_STATE_LISTENING) {
> +                return -EINVAL;
> +        }
> +
> +        /* FIXME: leak! */
> +        for (;;) {
> +                ret =
> +                    wait_event_interruptible(p->readq,
> +                                             (!list_empty
> +                                              (&p->pending_recv_list))
> +                                             || nonblock);
> +                if (ret)
> +                        return ret;
> +
> +                /* Write lock implicitly has pending_recv_lock */
> +                write_lock_irqsave(&list_lock, flags);
> +
> +                if (!list_empty(&p->pending_recv_list)) {
> +                        r = list_first_entry(&p->pending_recv_list,
> +                                             struct pending_recv, node);
> +
> +                        list_del(&r->node);
> +                        atomic_dec(&p->pending_recv_count);
> +
> +                        if ((!r->data_len) && (r->sh.flags & V4V_SHF_SYN))
> +                                break;
> +
> +                        kfree(r);
> +                }
> +
> +                write_unlock_irqrestore(&list_lock, flags);
> +                if (nonblock)
> +                        return -EAGAIN;
> +        }
> +        write_unlock_irqrestore(&list_lock, flags);
> +
> +        a = kmalloc(sizeof(struct v4v_private), GFP_KERNEL);
> +        if (!a) {
> +                ret = -ENOMEM;
> +                goto release;
> +        }
> +
> +        memset(a, 0, sizeof(struct v4v_private));
> +        a->state = V4V_STATE_ACCEPTED;
> +        a->ptype = V4V_PTYPE_STREAM;
> +        a->r = p->r;
> +        if (!get_ring(a->r)) {
> +                a->r = NULL;
> +                ret = -EINVAL;
> +                goto release;
> +        }
> +
> +        init_waitqueue_head(&a->readq);
> +        init_waitqueue_head(&a->writeq);
> +        spin_lock_init(&a->pending_recv_lock);
> +        INIT_LIST_HEAD(&a->pending_recv_list);
> +        atomic_set(&a->pending_recv_count, 0);
> +
> +        a->send_blocked = 0;
> +        a->peer = r->from;
> +        a->conid = r->sh.conid;
> +
> +        if (peer)
> +                *peer = r->from;
> +
> +        fd = allocate_fd_with_private(a);
> +        if (fd < 0) {
> +                ret = fd;
> +                goto release;
> +        }
> +
> +        write_lock_irqsave(&list_lock, flags);
> +        list_add(&a->node, &a->r->privates);
> +        write_unlock_irqrestore(&list_lock, flags);
> +
> +        /* Ship the ACK */
> +        sh.conid = a->conid;
> +        sh.flags = V4V_SHF_ACK;
> +
> +        xmit_queue_inline(&a->r->ring->id, &a->peer, &sh,
> +                          sizeof(sh), V4V_PROTO_STREAM);
> +        kfree(r);
> +
> +        return fd;
> +
> + release:
> +        kfree(r);
> +        if (a) {
> +                write_lock_irqsave(&list_lock, flags);
> +                if (a->r)
> +                        put_ring(a->r);
> +                write_unlock_irqrestore(&list_lock, flags);
> +                kfree(a);
> +        }
> +        return ret;
> +}
> +
> +ssize_t
> +v4v_sendto(struct v4v_private * p, const void *buf, size_t len, int flags,
> +           v4v_addr_t * addr, int nonblock)
> +{
> +        ssize_t rc;
> +
> +        if (!access_ok(VERIFY_READ, buf, len))
> +                return -EFAULT;
> +        if (addr && !access_ok(VERIFY_READ, addr, sizeof(v4v_addr_t)))
> +                return -EFAULT;
> +
> +        if (flags & MSG_DONTWAIT)
> +                nonblock++;
> +
> +        switch (p->ptype) {
> +        case V4V_PTYPE_DGRAM:
> +                switch (p->state) {
> +                case V4V_STATE_BOUND:
> +                        if (!addr)
> +                                return -ENOTCONN;
> +                        rc = v4v_sendto_from_sponsor(p, buf, len, nonblock,
> +                                                     addr, V4V_PROTO_DGRAM);
> +                        break;
> +
> +                case V4V_STATE_CONNECTED:
> +                        if (addr)
> +                                return -EISCONN;
> +
> +                        rc = v4v_sendto_from_sponsor(p, buf, len, nonblock,
> +                                                     &p->peer, V4V_PROTO_DGRAM);
> +                        break;
> +
> +                default:
> +                        return -EINVAL;
> +                }
> +                break;
> +        case V4V_PTYPE_STREAM:
> +                if (addr)
> +                        return -EISCONN;
> +                switch (p->state) {
> +                case V4V_STATE_CONNECTING:
> +                case V4V_STATE_BOUND:
> +                        return -ENOTCONN;
> +                case V4V_STATE_CONNECTED:
> +                case V4V_STATE_ACCEPTED:
> +                        rc = v4v_send_stream(p, buf, len, nonblock);
> +                        break;
> +                case V4V_STATE_DISCONNECTED:
> +
> +                        rc = -EPIPE;
> +                        break;
> +                default:
> +
> +                        return -EINVAL;
> +                }
> +                break;
> +        default:
> +
> +                return -ENOTTY;
> +        }
> +
> +        if ((rc == -EPIPE) && !(flags & MSG_NOSIGNAL))
> +                send_sig(SIGPIPE, current, 0);
> +
> +        return rc;
> +}
> +
> +ssize_t
> +v4v_recvfrom(struct v4v_private * p, void *buf, size_t len, int flags,
> +             v4v_addr_t * addr, int nonblock)
> +{
> +        int peek = 0;
> +        ssize_t rc = 0;
> +
> +        if (!access_ok(VERIFY_WRITE, buf, len))
> +                return -EFAULT;
> +        if ((addr) && (!access_ok(VERIFY_WRITE, addr, sizeof(v4v_addr_t))))
> +                return -EFAULT;
> +
> +        if (flags & MSG_DONTWAIT)
> +                nonblock++;
> +        if (flags & MSG_PEEK)
> +                peek++;
> +
> +        switch (p->ptype) {
> +        case V4V_PTYPE_DGRAM:
> +                rc = v4v_recvfrom_dgram(p, buf, len, nonblock, peek, addr);
> +                break;
> +        case V4V_PTYPE_STREAM:
> +                if (peek)
> +                        return -EINVAL;
> +
> +                switch (p->state) {
> +                case V4V_STATE_BOUND:
> +                        return -ENOTCONN;
> +                case V4V_STATE_CONNECTED:
> +                case V4V_STATE_ACCEPTED:
> +                        if (addr)
> +                                *addr = p->peer;
> +                        rc = v4v_recv_stream(p, buf, len, flags, nonblock);
> +                        break;
> +                case V4V_STATE_DISCONNECTED:
> +                        rc = 0;
> +                        break;
> +                default:
> +                        rc = -EINVAL;
> +                }
> +        }
> +
> +        if ((rc > (ssize_t) len) && !(flags & MSG_TRUNC))
> +                rc = len;
> +
> +        return rc;
> +}
> +
> +/* fops */
> +
> +static int v4v_open_dgram(struct inode *inode, struct file *f)
> +{
> +        struct v4v_private *p;
> +
> +        p = kmalloc(sizeof(struct v4v_private), GFP_KERNEL);
> +        if (!p)
> +                return -ENOMEM;
> +
> +        memset(p, 0, sizeof(struct v4v_private));
> +        p->state = V4V_STATE_IDLE;
> +        p->desired_ring_size = DEFAULT_RING_SIZE;
> +        p->r = NULL;
> +        p->ptype = V4V_PTYPE_DGRAM;
> +        p->send_blocked = 0;
> +
> +        init_waitqueue_head(&p->readq);
> +        init_waitqueue_head(&p->writeq);
> +
> +        spin_lock_init(&p->pending_recv_lock);
> +        INIT_LIST_HEAD(&p->pending_recv_list);
> +        atomic_set(&p->pending_recv_count, 0);
> +
> +        f->private_data = p;
> +        return 0;
> +}
> +
> +static int v4v_open_stream(struct inode *inode, struct file *f)
> +{
> +        struct v4v_private *p;
> +
> +        p = kmalloc(sizeof(struct v4v_private), GFP_KERNEL);
> +        if (!p)
> +                return -ENOMEM;
> +
> +        memset(p, 0, sizeof(struct v4v_private));
> +        p->state = V4V_STATE_IDLE;
> +        p->desired_ring_size = DEFAULT_RING_SIZE;
> +        p->r = NULL;
> +        p->ptype = V4V_PTYPE_STREAM;
> +        p->send_blocked = 0;
> +
> +        init_waitqueue_head(&p->readq);
> +        init_waitqueue_head(&p->writeq);
> +
> +        spin_lock_init(&p->pending_recv_lock);
> +        INIT_LIST_HEAD(&p->pending_recv_list);
> +        atomic_set(&p->pending_recv_count, 0);
> +
> +        f->private_data = p;
> +        return 0;
> +}
> +
> +static int v4v_release(struct inode *inode, struct file *f)
> +{
> +        struct v4v_private *p = (struct v4v_private *)f->private_data;
> +        unsigned long flags;
> +        struct pending_recv *pending;
> +
> +        if (p->ptype == V4V_PTYPE_STREAM) {
> +                switch (p->state) {
> +                case V4V_STATE_CONNECTED:
> +                case V4V_STATE_CONNECTING:
> +                case V4V_STATE_ACCEPTED:
> +                        xmit_queue_rst_to(&p->r->ring->id, p->conid, &p->peer);
> +                        break;
> +                default:
> +                        break;
> +                }
> +        }
> +
> +        write_lock_irqsave(&list_lock, flags);
> +        if (!p->r) {
> +                write_unlock_irqrestore(&list_lock, flags);
> +                goto release;
> +        }
> +
> +        if (p != p->r->sponsor) {
> +                put_ring(p->r);
> +                list_del(&p->node);
> +                write_unlock_irqrestore(&list_lock, flags);
> +                goto release;
> +        }
> +
> +        p->r->sponsor = NULL;
> +        put_ring(p->r);
> +        write_unlock_irqrestore(&list_lock, flags);
> +
> +        while (!list_empty(&p->pending_recv_list)) {
> +                pending =
> +                    list_first_entry(&p->pending_recv_list,
> +                                     struct pending_recv, node);
> +
> +                list_del(&pending->node);
> +                kfree(pending);
> +                atomic_dec(&p->pending_recv_count);
> +        }
> +
> + release:
> +        kfree(p);
> +
> +        return 0;
> +}
> +
> +static ssize_t
> +v4v_write(struct file *f, const char __user * buf, size_t count, loff_t * ppos)
> +{
> +        struct v4v_private *p = f->private_data;
> +        int nonblock = f->f_flags & O_NONBLOCK;
> +
> +        return v4v_sendto(p, buf, count, 0, NULL, nonblock);
> +}
> +
> +static ssize_t
> +v4v_read(struct file *f, char __user * buf, size_t count, loff_t * ppos)
> +{
> +        struct v4v_private *p = f->private_data;
> +        int nonblock = f->f_flags & O_NONBLOCK;
> +
> +        return v4v_recvfrom(p, (void *)buf, count, 0, NULL, nonblock);
> +}
> +
> +static long v4v_ioctl(struct file *f, unsigned int cmd, unsigned long arg)
> +{
> +        int rc = -ENOTTY;
> +
> +        int nonblock = f->f_flags & O_NONBLOCK;
> +        struct v4v_private *p = f->private_data;
> +
> +        if (_IOC_TYPE(cmd) != V4V_TYPE)
> +                return rc;
> +
> +        switch (cmd) {
> +        case V4VIOCSETRINGSIZE:
> +                if (!access_ok(VERIFY_READ, arg, sizeof(uint32_t)))
> +                        return -EFAULT;
> +                rc = v4v_set_ring_size(p, *(uint32_t *) arg);
> +                break;
> +        case V4VIOCBIND:
> +                if (!access_ok(VERIFY_READ, arg, sizeof(struct v4v_ring_id)))
> +                        return -EFAULT;
> +                rc = v4v_bind(p, (struct v4v_ring_id *)arg);
> +                break;
> +        case V4VIOCGETSOCKNAME:
> +                if (!access_ok(VERIFY_WRITE, arg, sizeof(struct v4v_ring_id)))
> +                        return -EFAULT;
> +                rc = v4v_get_sock_name(p, (struct v4v_ring_id *)arg);
> +                break;
> +        case V4VIOCGETPEERNAME:
> +                if (!access_ok(VERIFY_WRITE, arg, sizeof(v4v_addr_t)))
> +                        return -EFAULT;
> +                rc = v4v_get_peer_name(p, (v4v_addr_t *) arg);
> +                break;
> +        case V4VIOCCONNECT:
> +                if (!access_ok(VERIFY_READ, arg, sizeof(v4v_addr_t)))
> +                        return -EFAULT;
> +                /* Bind if not done */
> +                if (p->state == V4V_STATE_IDLE) {
> +                        struct v4v_ring_id id;
> +                        memset(&id, 0, sizeof(id));
> +                        id.partner = V4V_DOMID_NONE;
> +                        id.addr.domain = V4V_DOMID_NONE;
> +                        id.addr.port = 0;
> +                        rc = v4v_bind(p, &id);
> +                        if (rc)
> +                                break;
> +                }
> +                rc = v4v_connect(p, (v4v_addr_t *) arg, nonblock);
> +                break;
> +        case V4VIOCGETCONNECTERR:
> +                {
> +                        unsigned long flags;
> +                        if (!access_ok(VERIFY_WRITE, arg, sizeof(int)))
> +                                return -EFAULT;
> +
> +                        spin_lock_irqsave(&p->pending_recv_lock, flags);
> +                        *(int *)arg = p->pending_error;
> +                        p->pending_error = 0;
> +                        spin_unlock_irqrestore(&p->pending_recv_lock, flags);
> +                        rc = 0;
> +                }
> +                break;
> +        case V4VIOCLISTEN:
> +                rc = v4v_listen(p);
> +                break;
> +        case V4VIOCACCEPT:
> +                if (!access_ok(VERIFY_WRITE, arg, sizeof(v4v_addr_t)))
> +                        return -EFAULT;
> +                rc = v4v_accept(p, (v4v_addr_t *) arg, nonblock);
> +                break;
> +        case V4VIOCSEND:
> +                if (!access_ok(VERIFY_READ, arg, sizeof(struct v4v_dev)))
> +                        return -EFAULT;
> +                {
> +                        struct v4v_dev a = *(struct v4v_dev *)arg;
> +
> +                        rc = v4v_sendto(p, a.buf, a.len, a.flags, a.addr,
> +                                        nonblock);
> +                }
> +                break;
> +        case V4VIOCRECV:
> +                if (!access_ok(VERIFY_READ, arg, sizeof(struct v4v_dev)))
> +                        return -EFAULT;
> +                {
> +                        struct v4v_dev a = *(struct v4v_dev *)arg;
> +                        rc = v4v_recvfrom(p, a.buf, a.len, a.flags, a.addr,
> +                                          nonblock);
> +                }
> +                break;
> +        case V4VIOCVIPTABLESADD:
> +                if (!access_ok
> +                    (VERIFY_READ, arg, sizeof(struct v4v_viptables_rule_pos)))
> +                        return -EFAULT;
> +                {
> +                        struct v4v_viptables_rule_pos *rule =
> +                            (struct v4v_viptables_rule_pos *)arg;
> +                        v4v_viptables_add(p, rule->rule, rule->position);
> +                        rc = 0;
> +                }
> +                break;
> +        case V4VIOCVIPTABLESDEL:
> +                if (!access_ok
> +                    (VERIFY_READ, arg, sizeof(struct v4v_viptables_rule_pos)))
> +                        return -EFAULT;
> +                {
> +                        struct v4v_viptables_rule_pos *rule =
> +                            (struct v4v_viptables_rule_pos *)arg;
> +                        v4v_viptables_del(p, rule->rule, rule->position);
> +                        rc = 0;
> +                }
> +                break;
> +        case V4VIOCVIPTABLESLIST:
> +                if (!access_ok
> +                    (VERIFY_READ, arg, sizeof(struct v4v_viptables_list)))
> +                        return -EFAULT;
> +                {
> +                        struct v4v_viptables_list *list =
> +                            (struct v4v_viptables_list *)arg;
> +                        rc = v4v_viptables_list(p, list);
> +                }
> +                break;
> +        default:
> +                printk(KERN_ERR "v4v: unknown ioctl, cmd:0x%x nr:%d size:0x%x\n",
> +                       cmd, _IOC_NR(cmd), _IOC_SIZE(cmd));
> +        }
> +
> +        return rc;
> +}
> +
> +static unsigned int v4v_poll(struct file *f, poll_table * pt)
> +{
> +        unsigned int mask = 0;
> +        struct v4v_private *p = f->private_data;
> +
> +        read_lock(&list_lock);
> +
> +        switch (p->ptype) {
> +        case V4V_PTYPE_DGRAM:
> +                switch (p->state) {
> +                case V4V_STATE_CONNECTED:
> +                case V4V_STATE_BOUND:
> +                        poll_wait(f, &p->readq, pt);
> +                        mask |= POLLOUT | POLLWRNORM;
> +                        if (p->r->ring->tx_ptr != p->r->ring->rx_ptr)
> +                                mask |= POLLIN | POLLRDNORM;
> +                        break;
> +                default:
> +                        break;
> +                }
> +                break;
> +        case V4V_PTYPE_STREAM:
> +                switch (p->state) {
> +                case V4V_STATE_BOUND:
> +                        break;
> +                case V4V_STATE_LISTENING:
> +                        poll_wait(f, &p->readq, pt);
> +                        if (!list_empty(&p->pending_recv_list))
> +                                mask |= POLLIN | POLLRDNORM;
> +                        break;
> +                case V4V_STATE_ACCEPTED:
> +                case V4V_STATE_CONNECTED:
> +                        poll_wait(f, &p->readq, pt);
> +                        poll_wait(f, &p->writeq, pt);
> +                        if (!p->send_blocked)
> +                                mask |= POLLOUT | POLLWRNORM;
> +                        if (!list_empty(&p->pending_recv_list))
> +                                mask |= POLLIN | POLLRDNORM;
> +                        break;
> +                case V4V_STATE_CONNECTING:
> +                        poll_wait(f, &p->writeq, pt);
> +                        break;
> +                case V4V_STATE_DISCONNECTED:
> +                        mask |= POLLOUT | POLLWRNORM;
> +                        mask |= POLLIN | POLLRDNORM;
> +                        break;
> +                case V4V_STATE_IDLE:
> +                        break;
> +                }
> +                break;
> +        }
> +
> +        read_unlock(&list_lock);
> +        return mask;
> +}
> +
> +static const struct file_operations v4v_fops_stream = {
> +        .owner = THIS_MODULE,
> +        .write = v4v_write,
> +        .read = v4v_read,
> +        .unlocked_ioctl = v4v_ioctl,
> +        .open = v4v_open_stream,
> +        .release = v4v_release,
> +        .poll = v4v_poll,
> +};
> +
> +static const struct file_operations v4v_fops_dgram = {
> +        .owner = THIS_MODULE,
> +        .write = v4v_write,
> +        .read = v4v_read,
> +        .unlocked_ioctl = v4v_ioctl,
> +        .open = v4v_open_dgram,
> +        .release = v4v_release,
> +        .poll = v4v_poll,
> +};
> +
> +/* Xen VIRQ */
> +static int v4v_irq = -1;
> +
> +static void unbind_virq(void)
> +{
> +        unbind_from_irqhandler (v4v_irq, NULL);
> +        v4v_irq = -1;
> +}
> +
> +static int bind_evtchn(void)
> +{
> +        v4v_info_t info;
> +        int result;
> +
> +        v4v_info(&info);
> +        if (info.ring_magic != V4V_RING_MAGIC)
> +                return 1;
> +
> +        result =
> +                bind_interdomain_evtchn_to_irqhandler(
> +                        0, info.evtchn,
> +                        v4v_interrupt, IRQF_SAMPLE_RANDOM, "v4v", NULL);
> +
> +        if (result < 0)
> +                return result;
> +
> +        v4v_irq = result;
> +
> +        return 0;
> +}
> +
> +/* V4V Device */
> +
> +static struct miscdevice v4v_miscdev_dgram = {
> +        .minor = MISC_DYNAMIC_MINOR,
> +        .name = "v4v_dgram",
> +        .fops = &v4v_fops_dgram,
> +};
> +
> +static struct miscdevice v4v_miscdev_stream = {
> +        .minor = MISC_DYNAMIC_MINOR,
> +        .name = "v4v_stream",
> +        .fops = &v4v_fops_stream,
> +};
> +
> +static int v4v_suspend(struct platform_device *dev, pm_message_t state)
> +{
> +        unbind_virq();
> +        return 0;
> +}
> +
> +static int v4v_resume(struct platform_device *dev)
> +{
> +        struct ring *r;
> +
> +        read_lock(&list_lock);
> +        list_for_each_entry(r, &ring_list, node) {
> +                refresh_pfn_list(r);
> +                if (register_ring(r)) {
> +                        printk(KERN_ERR
> +                               "Failed to re-register a v4v ring on resume, port=0x%08x\n",
> +                               r->ring->id.addr.port);
> +                }
> +        }
> +        read_unlock(&list_lock);
> +
> +        if (bind_evtchn()) {
> +                printk(KERN_ERR "v4v_resume: failed to bind v4v evtchn\n");
> +                return -ENODEV;
> +        }
> +
> +        return 0;
> +}
> +
> +static void v4v_shutdown(struct platform_device *dev)
> +{
> +}
> +
> +static int __devinit v4v_probe(struct platform_device *dev)
> +{
> +        int err = 0;
> +        int ret;
> +
> +        ret = setup_fs();
> +        if (ret)
> +                return ret;
> +
> +        INIT_LIST_HEAD(&ring_list);
> +        rwlock_init(&list_lock);
> +        INIT_LIST_HEAD(&pending_xmit_list);
> +        spin_lock_init(&pending_xmit_lock);
> +        spin_lock_init(&interrupt_lock);
> +        atomic_set(&pending_xmit_count, 0);
> +
> +        if (bind_evtchn()) {
> +                printk(KERN_ERR "failed to bind v4v evtchn\n");
> +                unsetup_fs();
> +                return -ENODEV;
> +        }
> +
> +        err = misc_register(&v4v_miscdev_dgram);
> +        if (err != 0) {
> +                printk(KERN_ERR "Could not register /dev/v4v_dgram\n");
> +                unbind_virq();
> +                unsetup_fs();
> +                return err;
> +        }
> +
> +        err = misc_register(&v4v_miscdev_stream);
> +        if (err != 0) {
> +                printk(KERN_ERR "Could not register /dev/v4v_stream\n");
> +                misc_deregister(&v4v_miscdev_dgram);
> +                unbind_virq();
> +                unsetup_fs();
> +                return err;
> +        }
> +
> +        printk(KERN_INFO "Xen V4V device installed.\n");
> +        return 0;
> +}
> +
> +/* Platform Gunge */
> +
> +static int __devexit v4v_remove(struct platform_device *dev)
> +{
> +        unbind_virq();
> +        misc_deregister(&v4v_miscdev_dgram);
> +        misc_deregister(&v4v_miscdev_stream);
> +        unsetup_fs();
> +        return 0;
> +}
> +
> +static struct platform_driver v4v_driver = {
> +        .driver = {
> +                   .name = "v4v",
> +                   .owner = THIS_MODULE,
> +                   },
> +        .probe = v4v_probe,
> +        .remove = __devexit_p(v4v_remove),
> +        .shutdown = v4v_shutdown,
> +        .suspend = v4v_suspend,
> +        .resume = v4v_resume,
> +};
> +
> +static struct platform_device *v4v_platform_device;
> +
> +static int __init v4v_init(void)
> +{
> +        int error;
> +
> +        if (!xen_domain())
> +        {
> +                printk(KERN_ERR "v4v only works under Xen\n");
> +                return -ENODEV;
> +        }
> +
> +        error = platform_driver_register(&v4v_driver);
> +        if (error)
> +                return error;
> +
> +        v4v_platform_device = platform_device_alloc("v4v", -1);
> +        if (!v4v_platform_device) {
> +                platform_driver_unregister(&v4v_driver);
> +                return -ENOMEM;
> +        }
> +
> +        error = platform_device_add(v4v_platform_device);
> +        if (error) {
> +                platform_device_put(v4v_platform_device);
> +                platform_driver_unregister(&v4v_driver);
> +                return error;
> +        }
> +
> +        return 0;
> +}
> +
> +static void __exit v4v_cleanup(void)
> +{
> +        platform_device_unregister(v4v_platform_device);
> +        platform_driver_unregister(&v4v_driver);
> +}
> +
> +module_init(v4v_init);
> +module_exit(v4v_cleanup);
> +MODULE_LICENSE("GPL");
> diff --git a/drivers/xen/v4v_utils.h b/drivers/xen/v4v_utils.h
> new file mode 100644
> index 0000000..91c00b6
> --- /dev/null
> +++ b/drivers/xen/v4v_utils.h
> @@ -0,0 +1,278 @@
> +/******************************************************************************
> + * V4V
> + *
> + * Version 2 of v2v (Virtual-to-Virtual)
> + *
> + * Copyright (c) 2010, Citrix Systems
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program; if not, write to the Free Software
> + * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
> + */
> +
> +#ifndef __V4V_UTILS_H__
> +# define __V4V_UTILS_H__
> +
> +/* Compiler specific hacks */
> +#if defined(__GNUC__)
> +# define V4V_UNUSED __attribute__ ((unused))
> +# ifndef __STRICT_ANSI__
> +#  define V4V_INLINE inline
> +# else
> +#  define V4V_INLINE
> +# endif
> +#else /* !__GNUC__ */
> +# define V4V_UNUSED
> +# define V4V_INLINE
> +#endif
> +
> +
> +/*
> + * Utility functions
> + */
> +static V4V_INLINE uint32_t
> +v4v_ring_bytes_to_read (volatile struct v4v_ring *r)
> +{
> +        int32_t ret;
> +        ret = r->tx_ptr - r->rx_ptr;
> +        if (ret >= 0)
> +                return ret;
> +        return (uint32_t) (r->len + ret);
> +}
> +
> +
> +/*
> + * Copy at most t bytes of the next message in the ring, into the buffer
> + * at _buf, setting from and protocol if they are not NULL, returns
> + * the actual length of the message, or -1 if there is nothing to read
> + */
> +V4V_UNUSED static V4V_INLINE ssize_t
> +v4v_copy_out (struct v4v_ring *r, struct v4v_addr *from, uint32_t * protocol,
> +              void *_buf, size_t t, int consume)
> +{
> +        volatile struct v4v_ring_message_header *mh;
> +        /* cast from void *: unnecessary in C, but required by the MSVC compiler */
> +        uint8_t *buf = (uint8_t *) _buf;
> +        uint32_t btr = v4v_ring_bytes_to_read (r);
> +        uint32_t rxp = r->rx_ptr;
> +        uint32_t bte;
> +        uint32_t len;
> +        ssize_t ret;
> +
> +
> +        if (btr < sizeof (*mh))
> +                return -1;
> +
> +        /*
> +         * Because the message header is 128 bits long and the ring is 128-bit
> +         * aligned, we're guaranteed never to wrap
> +         */
> +        mh = (volatile struct v4v_ring_message_header *) &r->ring[r->rx_ptr];
> +
> +        len = mh->len;
> +
> +        if (btr < len)
> +        {
> +                return -1;
> +        }
> +
> +#if defined(__GNUC__)
> +        if (from)
> +                *from = mh->source;
> +#else
> +        /* MSVC can't do the above */
> +        if (from)
> +                memcpy((void *) from, (void *) &(mh->source), sizeof(struct v4v_addr));
> +#endif
> +
> +        if (protocol)
> +                *protocol = mh->protocol;
> +
> +        rxp += sizeof (*mh);
> +        if (rxp == r->len)
> +                rxp = 0;
> +        len -= sizeof (*mh);
> +        ret = len;
> +
> +        bte = r->len - rxp;
> +
> +        if (bte < len)
> +        {
> +                if (t < bte)
> +                {
> +                        if (buf)
> +                        {
> +                                memcpy (buf, (void *) &r->ring[rxp], t);
> +                                buf += t;
> +                        }
> +
> +                        rxp = 0;
> +                        len -= bte;
> +                        t = 0;
> +                }
> +                else
> +                {
> +                        if (buf)
> +                        {
> +                                memcpy (buf, (void *) &r->ring[rxp], bte);
> +                                buf += bte;
> +                        }
> +                        rxp = 0;
> +                        len -= bte;
> +                        t -= bte;
> +                }
> +        }
> +
> +        if (buf && t)
> +                memcpy (buf, (void *) &r->ring[rxp], (t < len) ? t : len);
> +
> +
> +        rxp += V4V_ROUNDUP (len);
> +        if (rxp == r->len)
> +                rxp = 0;
> +
> +        mb ();
> +
> +        if (consume)
> +                r->rx_ptr = rxp;
> +
> +        return ret;
> +}
> +
> +static V4V_INLINE void
> +v4v_memcpy_skip (void *_dst, const void *_src, size_t len, size_t *skip)
> +{
> +        const uint8_t *src =  (const uint8_t *) _src;
> +        uint8_t *dst = (uint8_t *) _dst;
> +
> +        if (!*skip)
> +        {
> +                memcpy (dst, src, len);
> +                return;
> +        }
> +
> +        if (*skip >= len)
> +        {
> +                *skip -= len;
> +                return;
> +        }
> +
> +        src += *skip;
> +        dst += *skip;
> +        len -= *skip;
> +        *skip = 0;
> +
> +        memcpy (dst, src, len);
> +}
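A stand-alone sketch of the same skip semantics (hypothetical replica, kernel types replaced with their stdint equivalents): the first `*skip` bytes offered across successive calls are discarded, and both pointers advance together because the driver pre-biases the destination with `buf -= skip`.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical replica of v4v_memcpy_skip: consume *skip bytes of input
 * before copying; destination advances in lockstep with the source. */
static void memcpy_skip(void *_dst, const void *_src, size_t len, size_t *skip)
{
        const uint8_t *src = _src;
        uint8_t *dst = _dst;

        if (!*skip) {
                memcpy(dst, src, len);
                return;
        }
        if (*skip >= len) {            /* this whole chunk is skipped */
                *skip -= len;
                return;
        }
        src += *skip;                  /* partial skip: copy only the tail */
        dst += *skip;
        len -= *skip;
        *skip = 0;
        memcpy(dst, src, len);
}
```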
> +
> +/*
> + * Copy at most t bytes of the next message in the ring into the buffer
> + * at _buf, skipping the first skip bytes, setting from and protocol if
> + * they are not NULL. Returns the actual length of the message, or -1
> + * if there is nothing to read.
> + */
> +static ssize_t
> +v4v_copy_out_offset(struct v4v_ring *r, struct v4v_addr *from,
> +                    uint32_t * protocol, void *_buf, size_t t, int consume,
> +                    size_t skip) V4V_UNUSED;
> +
> +V4V_INLINE static ssize_t
> +v4v_copy_out_offset(struct v4v_ring *r, struct v4v_addr *from,
> +                    uint32_t * protocol, void *_buf, size_t t, int consume,
> +                    size_t skip)
> +{
> +        volatile struct v4v_ring_message_header *mh;
> +        /* unnecessary cast from void * required by MSVC compiler */
> +        uint8_t *buf = (uint8_t *) _buf;
> +        uint32_t btr = v4v_ring_bytes_to_read (r);
> +        uint32_t rxp = r->rx_ptr;
> +        uint32_t bte;
> +        uint32_t len;
> +        ssize_t ret;
> +
> +        buf -= skip;
> +
> +        if (btr < sizeof (*mh))
> +                return -1;
> +
> +        /*
> +         * Because the message_header is 128 bits long and the ring is 128-bit
> +         * aligned, we're guaranteed never to wrap
> +         */
> +        mh = (volatile struct v4v_ring_message_header *)&r->ring[r->rx_ptr];
> +
> +        len = mh->len;
> +        if (btr < len)
> +                return -1;
> +
> +#if defined(__GNUC__)
> +        if (from)
> +                *from = mh->source;
> +#else
> +        /* MSVC can't do the above */
> +        if (from)
> +                memcpy((void *)from, (void *)&(mh->source), sizeof(struct v4v_addr));
> +#endif
> +
> +        if (protocol)
> +                *protocol = mh->protocol;
> +
> +        rxp += sizeof (*mh);
> +        if (rxp == r->len)
> +                rxp = 0;
> +        len -= sizeof (*mh);
> +        ret = len;
> +
> +        bte = r->len - rxp;
> +
> +        if (bte < len)
> +        {
> +                if (t < bte)
> +                {
> +                        if (buf)
> +                        {
> +                                v4v_memcpy_skip (buf, (void *) &r->ring[rxp], t, &skip);
> +                                buf += t;
> +                        }
> +
> +                        rxp = 0;
> +                        len -= bte;
> +                        t = 0;
> +                }
> +                else
> +                {
> +                        if (buf)
> +                        {
> +                                v4v_memcpy_skip (buf, (void *) &r->ring[rxp], bte,
> +                                                &skip);
> +                                buf += bte;
> +                        }
> +                        rxp = 0;
> +                        len -= bte;
> +                        t -= bte;
> +                }
> +        }
> +
> +        if (buf && t)
> +                v4v_memcpy_skip (buf, (void *) &r->ring[rxp], (t < len) ? t : len,
> +                                &skip);
> +
> +
> +        rxp += V4V_ROUNDUP (len);
> +        if (rxp == r->len)
> +                rxp = 0;
> +
> +        mb ();
> +
> +        if (consume)
> +                r->rx_ptr = rxp;
> +
> +        return ret;
> +}
> +
> +#endif /* !__V4V_UTILS_H__ */
> diff --git a/include/xen/interface/v4v.h b/include/xen/interface/v4v.h
> new file mode 100644
> index 0000000..36ff95c
> --- /dev/null
> +++ b/include/xen/interface/v4v.h
> @@ -0,0 +1,299 @@
> +/******************************************************************************
> + * V4V
> + *
> + * Version 2 of v2v (Virtual-to-Virtual)
> + *
> + * Copyright (c) 2010, Citrix Systems
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program; if not, write to the Free Software
> + * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
> + */
> +
> +#ifndef __XEN_PUBLIC_V4V_H__
> +#define __XEN_PUBLIC_V4V_H__
> +
> +/*
> + * Structure definitions
> + */
> +
> +#define V4V_RING_MAGIC          0xA822F72BB0B9D8CC
> +#define V4V_RING_DATA_MAGIC	0x45FE852220B801E4
> +
> +#define V4V_PROTO_DGRAM		0x3c2c1db8
> +#define V4V_PROTO_STREAM 	0x70f6a8e5
> +
> +#define V4V_DOMID_INVALID       (0x7FFFU)
> +#define V4V_DOMID_NONE          V4V_DOMID_INVALID
> +#define V4V_DOMID_ANY           V4V_DOMID_INVALID
> +#define V4V_PORT_NONE           0
> +
> +typedef struct v4v_iov
> +{
> +    uint64_t iov_base;
> +    uint64_t iov_len;
> +} v4v_iov_t;
> +
> +typedef struct v4v_addr
> +{
> +    uint32_t port;
> +    domid_t domain;
> +    uint16_t pad;
> +} v4v_addr_t;
> +
> +typedef struct v4v_ring_id
> +{
> +    v4v_addr_t addr;
> +    domid_t partner;
> +    uint16_t pad;
> +} v4v_ring_id_t;
> +
> +typedef uint64_t v4v_pfn_t;
> +
> +typedef struct
> +{
> +    v4v_addr_t src;
> +    v4v_addr_t dst;
> +} v4v_send_addr_t;
> +
> +/*
> + * v4v_ring
> + * id:
> + * xen only looks at this during register/unregister
> + * and will fill in id.addr.domain
> + *
> + * rx_ptr: rx pointer, modified by domain
> + * tx_ptr: tx pointer, modified by xen
> + *
> + */
> +struct v4v_ring
> +{
> +    uint64_t magic;
> +    v4v_ring_id_t id;
> +    uint32_t len;
> +    uint32_t rx_ptr;
> +    uint32_t tx_ptr;
> +    uint8_t reserved[32];
> +    uint8_t ring[0];
> +};
> +typedef struct v4v_ring v4v_ring_t;
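A hedged stand-alone replica of the structures above, useful only for checking the header's size and layout; `domid_t` is assumed here to be `uint16_t`, as in Xen's public headers:

```c
#include <stdint.h>
#include <stddef.h>

typedef uint16_t domid_t;   /* assumption: matches Xen's public typedef */

typedef struct v4v_addr {
        uint32_t port;
        domid_t domain;
        uint16_t pad;
} v4v_addr_t;

typedef struct v4v_ring_id {
        v4v_addr_t addr;
        domid_t partner;
        uint16_t pad;
} v4v_ring_id_t;

/* Replica of struct v4v_ring from the patch; ring[] is the payload area. */
struct v4v_ring {
        uint64_t magic;
        v4v_ring_id_t id;
        uint32_t len;
        uint32_t rx_ptr;
        uint32_t tx_ptr;
        uint8_t reserved[32];
        uint8_t ring[0];
};

/* Fixed header in front of the ring data area. */
static size_t ring_header_size(void)
{
        return sizeof(struct v4v_ring);
}
```

On common ABIs this works out to a 64-byte header before the payload area.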
> +
> +#define V4V_RING_DATA_F_EMPTY       (1U << 0) /* Ring is empty */
> +#define V4V_RING_DATA_F_EXISTS      (1U << 1) /* Ring exists */
> +#define V4V_RING_DATA_F_PENDING     (1U << 2) /* Pending interrupt exists - do not
> +                                               * rely on this field - for
> +                                               * profiling only */
> +#define V4V_RING_DATA_F_SUFFICIENT  (1U << 3) /* Sufficient space to queue
> +                                               * space_required bytes exists */
> +
> +#if defined(__GNUC__)
> +# define V4V_RING_DATA_ENT_FULLRING
> +# define V4V_RING_DATA_ENT_FULL
> +#else
> +# define V4V_RING_DATA_ENT_FULLRING fullring
> +# define V4V_RING_DATA_ENT_FULL full
> +#endif
> +typedef struct v4v_ring_data_ent
> +{
> +    v4v_addr_t ring;
> +    uint16_t flags;
> +    uint16_t pad;
> +    uint32_t space_required;
> +    uint32_t max_message_size;
> +} v4v_ring_data_ent_t;
> +
> +typedef struct v4v_ring_data
> +{
> +    uint64_t magic;
> +    uint32_t nent;
> +    uint32_t pad;
> +    uint64_t reserved[4];
> +    v4v_ring_data_ent_t data[0];
> +} v4v_ring_data_t;
> +
> +struct v4v_info
> +{
> +    uint64_t ring_magic;
> +    uint64_t data_magic;
> +    evtchn_port_t evtchn;
> +};
> +typedef struct v4v_info v4v_info_t;
> +
> +#define V4V_ROUNDUP(a) (((a) + 0xf) & ~0xf)
> +/*
> + * Messages on the ring are padded to 128 bits.
> + * len here refers to the exact length of the data, not including the
> + * 128 bit header. The message uses
> + * ((len + 0xf) & ~0xf) + sizeof(v4v_ring_message_header) bytes.
> + */
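The rounding itself can be sanity-checked stand-alone (illustrative copy of the macro, not part of the patch):

```c
#include <stdint.h>

/* Pad a byte count up to the next 16-byte (128-bit) boundary. */
#define V4V_ROUNDUP(a) (((a) + 0xf) & ~0xf)

static uint32_t padded_len(uint32_t len)
{
        return V4V_ROUNDUP(len);
}
```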
> +
> +#define V4V_SHF_SYN		(1 << 0)
> +#define V4V_SHF_ACK		(1 << 1)
> +#define V4V_SHF_RST		(1 << 2)
> +
> +#define V4V_SHF_PING		(1 << 8)
> +#define V4V_SHF_PONG		(1 << 9)
> +
> +struct v4v_stream_header
> +{
> +    uint32_t flags;
> +    uint32_t conid;
> +};
> +
> +struct v4v_ring_message_header
> +{
> +    uint32_t len;
> +    uint32_t pad0;
> +    v4v_addr_t source;
> +    uint32_t protocol;
> +    uint32_t pad1;
> +    uint8_t data[0];
> +};
> +
> +typedef struct v4v_viptables_rule
> +{
> +    v4v_addr_t src;
> +    v4v_addr_t dst;
> +    uint32_t accept;
> +    uint32_t pad;
> +} v4v_viptables_rule_t;
> +
> +typedef struct v4v_viptables_list
> +{
> +    uint32_t start_rule;
> +    uint32_t nb_rules;
> +    struct v4v_viptables_rule rules[0];
> +} v4v_viptables_list_t;
> +
> +/*
> + * HYPERCALLS
> + */
> +
> +#define V4VOP_register_ring 	1
> +/*
> + * Registers a ring with Xen; if a ring with the same v4v_ring_id exists,
> + * this ring takes its place. Registration will not change tx_ptr
> + * unless it is invalid.
> + *
> + * v4v_hypercall(V4VOP_register_ring,
> + *               v4v_ring, XEN_GUEST_HANDLE(v4v_pfn),
> + *               npage, 0)
> + */
> +
> +
> +#define V4VOP_unregister_ring 	2
> +/*
> + * Unregister a ring.
> + *
> + * v4v_hypercall(V4VOP_unregister_ring, v4v_ring, NULL, 0, 0)
> + */
> +
> +#define V4VOP_send 		3
> +/*
> + * Sends len bytes of buf to dst, giving src as the source address (xen will
> + * ignore src->domain and put your domain in the actual message). Xen
> + * first looks for a ring with id.addr==dst and id.partner==sending_domain;
> + * if that fails it looks for id.addr==dst and id.partner==DOMID_ANY.
> + * protocol is the 32 bit protocol number used for the message,
> + * most likely V4V_PROTO_DGRAM or STREAM. If insufficient space exists
> + * it will return -EAGAIN and xen will trigger the V4V_INTERRUPT when
> + * sufficient space becomes available.
> + *
> + * v4v_hypercall(V4VOP_send,
> + *               v4v_send_addr_t addr,
> + *               void* buf,
> + *               uint32_t len,
> + *               uint32_t protocol)
> + */
> +
> +
> +#define V4VOP_notify 		4
> +/* Asks xen for information about other rings in the system.
> + *
> + * ent->ring is the v4v_addr_t of the ring you want information on;
> + * the same matching rules are used as for V4VOP_send.
> + *
> + * ent->space_required: if this field is not null, xen will check
> + * that there is space in the destination ring for this many bytes
> + * of payload. If there is, it will set V4V_RING_DATA_F_SUFFICIENT
> + * and CANCEL any pending interrupt for that ent->ring; if insufficient
> + * space is available it will schedule an interrupt and the flag will
> + * not be set.
> + *
> + * The flags are set by xen when notify replies
> + * V4V_RING_DATA_F_EMPTY	ring is empty
> + * V4V_RING_DATA_F_PENDING	interrupt is pending - don't rely on this
> + * V4V_RING_DATA_F_SUFFICIENT	sufficient space for space_required is there
> + * V4V_RING_DATA_F_EXISTS	ring exists
> + *
> + * v4v_hypercall(V4VOP_notify,
> + *               XEN_GUEST_HANDLE(v4v_ring_data_ent) ent,
> + *               NULL, nent, 0)
> + */
> +
> +#define V4VOP_sendv		5
> +/*
> + * Identical to V4VOP_send except rather than buf and len it takes
> + * an array of v4v_iov and a length of the array.
> + *
> + * v4v_hypercall(V4VOP_sendv,
> + *               v4v_send_addr_t addr,
> + *               v4v_iov iov,
> + *               uint32_t niov,
> + *               uint32_t protocol)
> + */
> +
> +#define V4VOP_viptables_add     6
> +/*
> + * Insert a filtering rule after a given position.
> + *
> + * v4v_hypercall(V4VOP_viptables_add,
> + *               v4v_viptables_rule_t rule,
> + *               NULL,
> + *               uint32_t position, 0)
> + */
> +
> +#define V4VOP_viptables_del     7
> +/*
> + * Delete the filtering rule at a given position, or the rule
> + * that matches "rule".
> + *
> + * v4v_hypercall(V4VOP_viptables_del,
> + *               v4v_viptables_rule_t rule,
> + *               NULL,
> + *               uint32_t position, 0)
> + */
> +
> +#define V4VOP_viptables_list    8
> +/*
> + * List the filtering rules.
> + *
> + * v4v_hypercall(V4VOP_viptables_list,
> + *               v4v_viptables_list_t list,
> + *               NULL, 0, 0)
> + */
> +
> +#define V4VOP_info              9
> +/*
> + * v4v_hypercall(V4VOP_info,
> + *               XEN_GUEST_HANDLE(v4v_info_t) info,
> + *               NULL, 0, 0)
> + */
> +
> +#endif /* __XEN_PUBLIC_V4V_H__ */
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-set-style: "BSD"
> + * c-basic-offset: 4
> + * tab-width: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/include/xen/interface/xen.h b/include/xen/interface/xen.h
> index a890804..395f6cd 100644
> --- a/include/xen/interface/xen.h
> +++ b/include/xen/interface/xen.h
> @@ -59,6 +59,7 @@
>  #define __HYPERVISOR_physdev_op           33
>  #define __HYPERVISOR_hvm_op               34
>  #define __HYPERVISOR_tmem_op              38
> +#define __HYPERVISOR_v4v_op               39
>  
>  /* Architecture-specific hypercall definitions. */
>  #define __HYPERVISOR_arch_0               48
> diff --git a/include/xen/v4vdev.h b/include/xen/v4vdev.h
> new file mode 100644
> index 0000000..a30b608
> --- /dev/null
> +++ b/include/xen/v4vdev.h
> @@ -0,0 +1,34 @@
> +#ifndef __V4V_DGRAM_H__
> +#define __V4V_DGRAM_H__
> +
> +struct v4v_dev
> +{
> +    void *buf;
> +    size_t len;
> +    int flags;
> +    v4v_addr_t *addr;
> +};
> +
> +struct v4v_viptables_rule_pos
> +{
> +    struct v4v_viptables_rule* rule;
> +    int position;
> +};
> +
> +#define V4V_TYPE 'W'
> +
> +#define V4VIOCSETRINGSIZE 	_IOW (V4V_TYPE,  1, uint32_t)
> +#define V4VIOCBIND		_IOW (V4V_TYPE,  2, v4v_ring_id_t)
> +#define V4VIOCGETSOCKNAME	_IOW (V4V_TYPE,  3, v4v_ring_id_t)
> +#define V4VIOCGETPEERNAME	_IOW (V4V_TYPE,  4, v4v_addr_t)
> +#define V4VIOCCONNECT		_IOW (V4V_TYPE,  5, v4v_addr_t)
> +#define V4VIOCGETCONNECTERR	_IOW (V4V_TYPE,  6, int)
> +#define V4VIOCLISTEN		_IOW (V4V_TYPE,  7, uint32_t) /*unused args */
> +#define V4VIOCACCEPT		_IOW (V4V_TYPE,  8, v4v_addr_t) 
> +#define V4VIOCSEND		_IOW (V4V_TYPE,  9, struct v4v_dev)
> +#define V4VIOCRECV		_IOW (V4V_TYPE, 10, struct v4v_dev)
> +#define V4VIOCVIPTABLESADD	_IOW (V4V_TYPE, 11, struct v4v_viptables_rule_pos)
> +#define V4VIOCVIPTABLESDEL	_IOW (V4V_TYPE, 12, struct v4v_viptables_rule_pos)
> +#define V4VIOCVIPTABLESLIST	_IOW (V4V_TYPE, 13, struct v4v_viptables_list)
> +
> +#endif



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 15:38:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 15:38:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyPNO-0004hd-0x; Mon, 06 Aug 2012 15:37:50 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1SyPNL-0004hX-VE
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 15:37:48 +0000
Received: from [85.158.143.99:56953] by server-3.bemta-4.messagelabs.com id
	D1/47-01511-BC4EF105; Mon, 06 Aug 2012 15:37:47 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-7.tower-216.messagelabs.com!1344267463!27129826!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjcyOTUw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4574 invoked from network); 6 Aug 2012 15:37:44 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-7.tower-216.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Aug 2012 15:37:44 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q76FberF015612
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 6 Aug 2012 15:37:41 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q76FbdmT024638
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 6 Aug 2012 15:37:40 GMT
Received: from abhmt103.oracle.com (abhmt103.oracle.com [141.146.116.55])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q76FbdVE028688; Mon, 6 Aug 2012 10:37:39 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 06 Aug 2012 08:37:39 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 75AE241F35; Mon,  6 Aug 2012 11:28:15 -0400 (EDT)
Date: Mon, 6 Aug 2012 11:28:15 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Jean Guyader <jean.guyader@citrix.com>
Message-ID: <20120806152815.GE8967@phenom.dumpdata.com>
References: <1344032660-1251-1-git-send-email-jean.guyader@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1344032660-1251-1-git-send-email-jean.guyader@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] RFC: V4V Linux Driver
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Aug 03, 2012 at 11:24:20PM +0100, Jean Guyader wrote:
> This is a Linux driver for the V4V inter VM communication system.
> 
> I've posted the V4V Xen patches for comments, to find more info about
> V4V you can check out this link.
> http://osdir.com/ml/general/2012-08/msg05904.html
> 
> This Linux driver exposes two char devices, one for TCP and one for UDP.
> The interface exposed to userspace is made of IOCTLs, one per
> network operation (listen, bind, accept, send, recv, ...).

I haven't had a chance to take a look at this and won't until next
week. But just a couple of quick questions:

 - Is there a test application for this? If so, where can I get it?
 - Is there any code in the Xen repository that uses it?
 - Who are the users?
 - Why .. TCP and UDP? Does that mean it masquerades as an Ethernet
   device? Why the choice of using a char device?

Thx.
> 
> Signed-off-by: Jean Guyader <jean.guyader@citrix.com>
> ---
>  drivers/xen/Kconfig         |    4 +
>  drivers/xen/Makefile        |    1 +
>  drivers/xen/v4v.c           | 2639 +++++++++++++++++++++++++++++++++++++++++++
>  drivers/xen/v4v_utils.h     |  278 +++++
>  include/xen/interface/v4v.h |  299 +++++
>  include/xen/interface/xen.h |    1 +
>  include/xen/v4vdev.h        |   34 +
>  7 files changed, 3256 insertions(+)
>  create mode 100644 drivers/xen/v4v.c
>  create mode 100644 drivers/xen/v4v_utils.h
>  create mode 100644 include/xen/interface/v4v.h
>  create mode 100644 include/xen/v4vdev.h
> 

> diff --git a/drivers/xen/Kconfig b/drivers/xen/Kconfig
> index 8d2501e..db500cc 100644
> --- a/drivers/xen/Kconfig
> +++ b/drivers/xen/Kconfig
> @@ -196,4 +196,8 @@ config XEN_ACPI_PROCESSOR
>  	  called xen_acpi_processor  If you do not know what to choose, select
>  	  M here. If the CPUFREQ drivers are built in, select Y here.
>  
> +config XEN_V4V
> +	tristate "Xen V4V driver"
> +	default m
> +
>  endmenu
> diff --git a/drivers/xen/Makefile b/drivers/xen/Makefile
> index fc34886..a3d3014 100644
> --- a/drivers/xen/Makefile
> +++ b/drivers/xen/Makefile
> @@ -21,6 +21,7 @@ obj-$(CONFIG_XEN_DOM0)			+= pci.o acpi.o
>  obj-$(CONFIG_XEN_PCIDEV_BACKEND)	+= xen-pciback/
>  obj-$(CONFIG_XEN_PRIVCMD)		+= xen-privcmd.o
>  obj-$(CONFIG_XEN_ACPI_PROCESSOR)	+= xen-acpi-processor.o
> +obj-$(CONFIG_XEN_V4V)			+= v4v.o
>  xen-evtchn-y				:= evtchn.o
>  xen-gntdev-y				:= gntdev.o
>  xen-gntalloc-y				:= gntalloc.o
> diff --git a/drivers/xen/v4v.c b/drivers/xen/v4v.c
> new file mode 100644
> index 0000000..141be66
> --- /dev/null
> +++ b/drivers/xen/v4v.c
> @@ -0,0 +1,2639 @@
> +/******************************************************************************
> + * drivers/xen/v4v/v4v.c
> + *
> + * V4V interdomain communication driver.
> + *
> + * Copyright (c) 2012 Jean Guyader
> + * Copyright (c) 2009 Ross Philipson
> + * Copyright (c) 2009 James McKenzie
> + * Copyright (c) 2009 Citrix Systems, Inc.
> + *
> + * This program is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU General Public License version 2
> + * as published by the Free Software Foundation; or, when distributed
> + * separately from the Linux kernel or incorporated into other
> + * software packages, subject to the following license:
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a copy
> + * of this source file (the "Software"), to deal in the Software without
> + * restriction, including without limitation the rights to use, copy, modify,
> + * merge, publish, distribute, sublicense, and/or sell copies of the Software,
> + * and to permit persons to whom the Software is furnished to do so, subject to
> + * the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be included in
> + * all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
> + * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
> + * IN THE SOFTWARE.
> + */
> +
> +#include <linux/mm.h>
> +#include <linux/init.h>
> +#include <linux/module.h>
> +#include <linux/vmalloc.h>
> +#include <linux/interrupt.h>
> +#include <linux/spinlock.h>
> +#include <linux/list.h>
> +#include <linux/socket.h>
> +#include <linux/sched.h>
> +#include <xen/events.h>
> +#include <xen/evtchn.h>
> +#include <xen/page.h>
> +#include <xen/xen.h>
> +#include <linux/fs.h>
> +#include <linux/platform_device.h>
> +#include <linux/miscdevice.h>
> +#include <linux/major.h>
> +#include <linux/proc_fs.h>
> +#include <linux/poll.h>
> +#include <linux/random.h>
> +#include <linux/wait.h>
> +#include <linux/file.h>
> +#include <linux/mount.h>
> +
> +#include <xen/interface/v4v.h>
> +#include <xen/v4vdev.h>
> +#include "v4v_utils.h"
> +
> +#define DEFAULT_RING_SIZE \
> +    (V4V_ROUNDUP((((PAGE_SIZE)*32) - sizeof(v4v_ring_t)-V4V_ROUNDUP(1))))
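As an illustration (hedged: assumes 4 KiB pages and that `sizeof(v4v_ring_t)` is 64 bytes on the target ABI), DEFAULT_RING_SIZE picks the largest 16-byte-aligned payload size such that the payload plus the ring header still fits in 32 pages:

```c
#include <stdint.h>
#include <stddef.h>

#define PAGE_SIZE_ASSUMED 4096U         /* assumption: 4 KiB pages */
#define V4V_RING_HEADER   64U           /* assumption: sizeof(v4v_ring_t) */
#define RINGSZ_ROUNDUP(a) (((a) + 0xf) & ~0xf)

/* Replica of the DEFAULT_RING_SIZE computation from the patch. */
static size_t default_ring_size(void)
{
        return RINGSZ_ROUNDUP((PAGE_SIZE_ASSUMED * 32) - V4V_RING_HEADER
                              - RINGSZ_ROUNDUP(1));
}
```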
> +
> +/* The type of a ring */
> +typedef enum {
> +        V4V_RTYPE_IDLE = 0,
> +        V4V_RTYPE_DGRAM,
> +        V4V_RTYPE_LISTENER,
> +        V4V_RTYPE_CONNECTOR,
> +} v4v_rtype;
> +
> +/* The state of a v4v_private */
> +typedef enum {
> +        V4V_STATE_IDLE = 0,
> +        V4V_STATE_BOUND,
> +        V4V_STATE_LISTENING,
> +        V4V_STATE_ACCEPTED,
> +        V4V_STATE_CONNECTING,
> +        V4V_STATE_CONNECTED,
> +        V4V_STATE_DISCONNECTED
> +} v4v_state;
> +
> +typedef enum {
> +        V4V_PTYPE_DGRAM = 1,
> +        V4V_PTYPE_STREAM,
> +} v4v_ptype;
> +
> +static rwlock_t list_lock;
> +static struct list_head ring_list;
> +
> +struct v4v_private;
> +
> +/*
> + * The ring pointer itself is protected by the refcnt; the lists it is in
> + * are protected by list_lock.
> + *
> + * It's permissible to decrement the refcnt whilst holding the read lock,
> + * and then clean up refcnt==0 rings later.
> + *
> + * If a ring has refcnt!=0 we expect ->ring to be non-NULL, and for the
> + * ring to be registered with Xen.
> + */
> +
> +struct ring {
> +        struct list_head node;
> +        atomic_t refcnt;
> +
> +        spinlock_t lock;        /* Protects the data in the v4v_ring_t, and also privates and sponsor */
> +
> +        struct list_head privates;      /* Protected by lock */
> +        struct v4v_private *sponsor;    /* Protected by lock */
> +
> +        v4v_rtype type;
> +
> +        /* Ring */
> +        v4v_ring_t *ring;
> +        v4v_pfn_t *pfn_list;
> +        size_t pfn_list_npages;
> +        int order;
> +};
> +
> +struct v4v_private {
> +        struct list_head node;
> +        v4v_state state;
> +        v4v_ptype ptype;
> +        uint32_t desired_ring_size;
> +        struct ring *r;
> +        wait_queue_head_t readq;
> +        wait_queue_head_t writeq;
> +        v4v_addr_t peer;
> +        uint32_t conid;
> +        spinlock_t pending_recv_lock;   /* Protects pending messages, and pending_error */
> +        struct list_head pending_recv_list;     /* For LISTENER contains only ... */
> +        atomic_t pending_recv_count;
> +        int pending_error;
> +        int full;
> +        int send_blocked;
> +        int rx;
> +};
> +
> +struct pending_recv {
> +        struct list_head node;
> +        v4v_addr_t from;
> +        size_t data_len, data_ptr;
> +        struct v4v_stream_header sh;
> +        uint8_t data[0];
> +} V4V_PACKED;
> +
> +static spinlock_t interrupt_lock;
> +static spinlock_t pending_xmit_lock;
> +static struct list_head pending_xmit_list;
> +static atomic_t pending_xmit_count;
> +
> +enum v4v_pending_xmit_type {
> +        V4V_PENDING_XMIT_INLINE = 1,    /* Send the inline xmit */
> +        V4V_PENDING_XMIT_WAITQ_MATCH_SPONSOR,   /* Wake up writeq of sponsor of the ringid from */
> +        V4V_PENDING_XMIT_WAITQ_MATCH_PRIVATES,  /* Wake up writeq of a private of ringid from with conid */
> +};
> +
> +struct pending_xmit {
> +        struct list_head node;
> +        enum v4v_pending_xmit_type type;
> +        uint32_t conid;
> +        struct v4v_ring_id from;
> +        v4v_addr_t to;
> +        size_t len;
> +        uint32_t protocol;
> +        uint8_t data[0];
> +};
> +
> +#define MAX_PENDING_RECVS        16
> +
> +/* Hypercalls */
> +
> +static inline int __must_check
> +HYPERVISOR_v4v_op(int cmd, void *arg1, void *arg2,
> +                  uint32_t arg3, uint32_t arg4)
> +{
> +        return _hypercall5(int, v4v_op, cmd, arg1, arg2, arg3, arg4);
> +}
> +
> +static int v4v_info(v4v_info_t *info)
> +{
> +        (void)(*(volatile int*)info);
> +        return HYPERVISOR_v4v_op (V4VOP_info, info, NULL, 0, 0);
> +}
> +
> +static int H_v4v_register_ring(v4v_ring_t * r, v4v_pfn_t * l, size_t npages)
> +{
> +        (void)(*(volatile int *)r);
> +        return HYPERVISOR_v4v_op(V4VOP_register_ring, r, l, npages, 0);
> +}
> +
> +static int H_v4v_unregister_ring(v4v_ring_t * r)
> +{
> +        (void)(*(volatile int *)r);
> +        return HYPERVISOR_v4v_op(V4VOP_unregister_ring, r, NULL, 0, 0);
> +}
> +
> +static int
> +H_v4v_send(v4v_addr_t * s, v4v_addr_t * d, const void *buf, uint32_t len,
> +           uint32_t protocol)
> +{
> +        v4v_send_addr_t addr;
> +        addr.src = *s;
> +        addr.dst = *d;
> +        return HYPERVISOR_v4v_op(V4VOP_send, &addr, (void *)buf, len, protocol);
> +}
> +
> +static int
> +H_v4v_sendv(v4v_addr_t * s, v4v_addr_t * d, const v4v_iov_t * iovs,
> +            uint32_t niov, uint32_t protocol)
> +{
> +        v4v_send_addr_t addr;
> +        addr.src = *s;
> +        addr.dst = *d;
> +        return HYPERVISOR_v4v_op(V4VOP_sendv, &addr, (void *)iovs, niov,
> +                                 protocol);
> +}
> +
> +static int H_v4v_notify(v4v_ring_data_t * rd)
> +{
> +        return HYPERVISOR_v4v_op(V4VOP_notify, rd, NULL, 0, 0);
> +}
> +
> +static int H_v4v_viptables_add(v4v_viptables_rule_t * rule, int position)
> +{
> +        return HYPERVISOR_v4v_op(V4VOP_viptables_add, rule, NULL,
> +                                 position, 0);
> +}
> +
> +static int H_v4v_viptables_del(v4v_viptables_rule_t * rule, int position)
> +{
> +        return HYPERVISOR_v4v_op(V4VOP_viptables_del, rule, NULL,
> +                                 position, 0);
> +}
> +
> +static int H_v4v_viptables_list(struct v4v_viptables_list *list)
> +{
> +        return HYPERVISOR_v4v_op(V4VOP_viptables_list, list, NULL, 0, 0);
> +}
> +
> +/* Port/Ring uniqueness */
> +
> +/* Need to hold write lock for all of these */
> +
> +static int v4v_id_in_use(struct v4v_ring_id *id)
> +{
> +        struct ring *r;
> +
> +        list_for_each_entry(r, &ring_list, node) {
> +                if ((r->ring->id.addr.port == id->addr.port)
> +                    && (r->ring->id.partner == id->partner))
> +                        return 1;
> +        }
> +
> +        return 0;
> +}
> +
> +static int v4v_port_in_use(uint32_t port, uint32_t * max)
> +{
> +        uint32_t ret = 0;
> +        struct ring *r;
> +
> +        list_for_each_entry(r, &ring_list, node) {
> +                if (r->ring->id.addr.port == port)
> +                        ret++;
> +                if (max && (r->ring->id.addr.port > *max))
> +                        *max = r->ring->id.addr.port;
> +        }
> +
> +        return ret;
> +}
> +
> +static uint32_t v4v_random_port(void)
> +{
> +        uint32_t port;
> +
> +        port = random32();
> +        port |= 0x80000000U;
> +        if (port > 0xf0000000U) {
> +                port -= 0x10000000;
> +        }
> +
> +        return port;
> +}
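The port-picking logic above can be sketched stand-alone (hypothetical replica; `rnd` stands in for `random32()`): any 32-bit random value is forced into the high half and then, if above 0xf0000000, pulled back down, so the result always lands in [0x80000000, 0xf0000000].

```c
#include <stdint.h>

/* Hypothetical replica of v4v_random_port with the random source
 * factored out as a parameter, so the range invariant is testable. */
static uint32_t pick_port(uint32_t rnd)
{
        uint32_t port = rnd | 0x80000000U;   /* force into the high half */

        if (port > 0xf0000000U)
                port -= 0x10000000;          /* pull back below the ceiling */
        return port;
}
```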
> +
> +/* Caller needs to hold lock */
> +static uint32_t v4v_find_spare_port_number(void)
> +{
> +        uint32_t port, max = 0x80000000U;
> +
> +        port = v4v_random_port();
> +        if (!v4v_port_in_use(port, &max)) {
> +                return port;
> +        } else {
> +                port = max + 1;
> +        }
> +
> +        return port;
> +}
> +
> +/* Ring Goo */
> +
> +static int register_ring(struct ring *r)
> +{
> +        return H_v4v_register_ring((void *)r->ring,
> +                                   r->pfn_list,
> +                                   r->pfn_list_npages);
> +}
> +
> +static int unregister_ring(struct ring *r)
> +{
> +        return H_v4v_unregister_ring((void *)r->ring);
> +}
> +
> +static void refresh_pfn_list(struct ring *r)
> +{
> +        uint8_t *b = (void *)r->ring;
> +        int i;
> +
> +        for (i = 0; i < r->pfn_list_npages; ++i) {
> +                r->pfn_list[i] = pfn_to_mfn(vmalloc_to_pfn(b));
> +                b += PAGE_SIZE;
> +        }
> +}
> +
> +static void allocate_pfn_list(struct ring *r)
> +{
> +        int n = (r->ring->len + PAGE_SIZE - 1) >> PAGE_SHIFT;
> +        int len = sizeof(v4v_pfn_t) * n;
> +
> +        r->pfn_list = kmalloc(len, GFP_KERNEL);
> +        if (!r->pfn_list)
> +                return;
> +        r->pfn_list_npages = n;
> +
> +        refresh_pfn_list(r);
> +}
> +
> +static int allocate_ring(struct ring *r, int ring_len)
> +{
> +        int len = ring_len + sizeof(v4v_ring_t);
> +        int ret = 0;
> +
> +        if (ring_len != V4V_ROUNDUP(ring_len)) {
> +                ret = -EINVAL;
> +                goto fail;
> +        }
> +
> +        r->ring = NULL;
> +        r->pfn_list = NULL;
> +        r->order = get_order(len);
> +
> +        r->ring = vmalloc(len);
> +
> +        if (!r->ring) {
> +                ret = -ENOMEM;
> +                goto fail;
> +        }
> +
> +        memset((void *)r->ring, 0, len);
> +
> +        r->ring->magic = V4V_RING_MAGIC;
> +        r->ring->len = ring_len;
> +        r->ring->rx_ptr = r->ring->tx_ptr = 0;
> +
> +        memset((void *)r->ring->ring, 0x5a, ring_len);
> +
> +        allocate_pfn_list(r);
> +        if (!r->pfn_list) {
> +                ret = -ENOMEM;
> +                goto fail;
> +        }
> +
> +        return 0;
> + fail:
> +        if (r->ring)
> +                vfree(r->ring);
> +        if (r->pfn_list)
> +                kfree(r->pfn_list);
> +
> +        r->ring = NULL;
> +        r->pfn_list = NULL;
> +
> +        return ret;
> +}
> +
> +/* Caller must hold lock */
> +static void recover_ring(struct ring *r)
> +{
> +        /* It's all gone horribly wrong */
> +        r->ring->rx_ptr = r->ring->tx_ptr;
> +        /* Xen updates tx_ptr atomically to always be pointing somewhere sensible */
> +}
> +
> +/* Caller must hold no locks, ring is allocated with a refcnt of 1 */
> +static int new_ring(struct v4v_private *sponsor, struct v4v_ring_id *pid)
> +{
> +        struct v4v_ring_id id = *pid;
> +        struct ring *r;
> +        int ret;
> +        unsigned long flags;
> +
> +        if (id.addr.domain != V4V_DOMID_NONE)
> +                return -EINVAL;
> +
> +        r = kzalloc(sizeof(struct ring), GFP_KERNEL);
> +        if (!r)
> +                return -ENOMEM;
> +
> +        ret = allocate_ring(r, sponsor->desired_ring_size);
> +        if (ret) {
> +                kfree(r);
> +                return ret;
> +        }
> +
> +        INIT_LIST_HEAD(&r->privates);
> +        spin_lock_init(&r->lock);
> +        atomic_set(&r->refcnt, 1);
> +
> +        write_lock_irqsave(&list_lock, flags);
> +        if (sponsor->state != V4V_STATE_IDLE) {
> +                ret = -EINVAL;
> +                goto fail;
> +        }
> +
> +        if (!id.addr.port) {
> +                id.addr.port = v4v_find_spare_port_number();
> +        } else if (v4v_id_in_use(&id)) {
> +                ret = -EADDRINUSE;
> +                goto fail;
> +        }
> +
> +        r->ring->id = id;
> +        r->sponsor = sponsor;
> +        sponsor->r = r;
> +        sponsor->state = V4V_STATE_BOUND;
> +
> +        ret = register_ring(r);
> +        if (ret)
> +                goto fail;
> +
> +        list_add(&r->node, &ring_list);
> +        write_unlock_irqrestore(&list_lock, flags);
> +        return 0;
> +
> + fail:
> +        if (sponsor->r == r) {
> +                /* Only unwind the sponsor if this call claimed it */
> +                sponsor->r = NULL;
> +                sponsor->state = V4V_STATE_IDLE;
> +        }
> +        write_unlock_irqrestore(&list_lock, flags);
> +
> +        vfree(r->ring);
> +        kfree(r->pfn_list);
> +        kfree(r);
> +
> +        return ret;
> +}
> +
> +/* Cleans up old rings */
> +static void delete_ring(struct ring *r)
> +{
> +        int ret;
> +
> +        list_del(&r->node);
> +
> +        if ((ret = unregister_ring(r))) {
> +                printk(KERN_ERR
> +                       "unregister_ring hypercall failed: %d. Leaking ring.\n",
> +                       ret);
> +        } else {
> +                vfree(r->ring);
> +        }
> +
> +        kfree(r->pfn_list);
> +        kfree(r);
> +}
> +
> +/* Returns !0 if you successfully got a reference to the ring */
> +static int get_ring(struct ring *r)
> +{
> +        return atomic_add_unless(&r->refcnt, 1, 0);
> +}
> +
> +/* Must be called with DEBUG_WRITELOCK; v4v_write_lock */
> +static void put_ring(struct ring *r)
> +{
> +        if (!r)
> +                return;
> +
> +        if (atomic_dec_and_test(&r->refcnt)) {
> +                delete_ring(r);
> +        }
> +}
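get_ring()/put_ring() above are the standard "take a reference only while the count is still non-zero" discipline: atomic_add_unless() refuses once the count has hit zero, and the caller whose atomic_dec_and_test() drops it to zero performs the teardown. A toy, deliberately non-atomic sketch of the rules (names hypothetical, locking omitted):

```c
#include <assert.h>

/* Non-atomic stand-in for the get_ring()/put_ring() refcount rules. */
struct obj {
        int refcnt;
        int freed;      /* stands in for delete_ring() having run */
};

static int obj_get(struct obj *o)
{
        if (o->refcnt == 0)
                return 0;       /* object is being torn down; refuse */
        o->refcnt++;
        return 1;
}

static void obj_put(struct obj *o)
{
        if (--o->refcnt == 0)
                o->freed = 1;   /* last reference frees */
}
```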
> +
> +/* Caller must hold ring_lock */
> +static struct ring *find_ring_by_id(struct v4v_ring_id *id)
> +{
> +        struct ring *r;
> +
> +        list_for_each_entry(r, &ring_list, node) {
> +                if (!memcmp(&r->ring->id, id, sizeof(struct v4v_ring_id)))
> +                        return r;
> +        }
> +        return NULL;
> +}
> +
> +/* Caller must hold ring_lock */
> +struct ring *find_ring_by_id_type(struct v4v_ring_id *id, v4v_rtype t)
> +{
> +        struct ring *r;
> +
> +        list_for_each_entry(r, &ring_list, node) {
> +                if (r->type != t)
> +                        continue;
> +                if (!memcmp(&r->ring->id, id, sizeof(struct v4v_ring_id)))
> +                        return r;
> +        }
> +
> +        return NULL;
> +}
> +
> +/* Pending xmits */
> +
> +/* Caller must hold pending_xmit_lock */
> +
> +static void
> +xmit_queue_wakeup_private(struct v4v_ring_id *from,
> +                          uint32_t conid, v4v_addr_t * to, int len, int delete)
> +{
> +        struct pending_xmit *p;
> +
> +        list_for_each_entry(p, &pending_xmit_list, node) {
> +                if (p->type != V4V_PENDING_XMIT_WAITQ_MATCH_PRIVATES)
> +                        continue;
> +                if (p->conid != conid)
> +                        continue;
> +
> +                if ((!memcmp(from, &p->from, sizeof(struct v4v_ring_id)))
> +                    && (!memcmp(to, &p->to, sizeof(v4v_addr_t)))) {
> +                        if (delete) {
> +                                atomic_dec(&pending_xmit_count);
> +                                list_del(&p->node);
> +                        } else {
> +                                p->len = len;
> +                        }
> +                        return;
> +                }
> +        }
> +
> +        if (delete)
> +                return;
> +
> +        p = kmalloc(sizeof(struct pending_xmit), GFP_ATOMIC);
> +        if (!p) {
> +                printk(KERN_ERR
> +                       "Out of memory trying to queue an xmit private wakeup\n");
> +                return;
> +        }
> +        p->type = V4V_PENDING_XMIT_WAITQ_MATCH_PRIVATES;
> +        p->conid = conid;
> +        p->from = *from;
> +        p->to = *to;
> +        p->len = len;
> +
> +        atomic_inc(&pending_xmit_count);
> +        list_add_tail(&p->node, &pending_xmit_list);
> +}
> +
> +/* Caller must hold pending_xmit_lock */
> +static void
> +xmit_queue_wakeup_sponsor(struct v4v_ring_id *from, v4v_addr_t * to,
> +                          int len, int delete)
> +{
> +        struct pending_xmit *p;
> +
> +        list_for_each_entry(p, &pending_xmit_list, node) {
> +                if (p->type != V4V_PENDING_XMIT_WAITQ_MATCH_SPONSOR)
> +                        continue;
> +                if ((!memcmp(from, &p->from, sizeof(struct v4v_ring_id)))
> +                    && (!memcmp(to, &p->to, sizeof(v4v_addr_t)))) {
> +                        if (delete) {
> +                                atomic_dec(&pending_xmit_count);
> +                                list_del(&p->node);
> +                        } else {
> +                                p->len = len;
> +                        }
> +                        return;
> +                }
> +        }
> +
> +        if (delete)
> +                return;
> +
> +        p = kmalloc(sizeof(struct pending_xmit), GFP_ATOMIC);
> +        if (!p) {
> +                printk(KERN_ERR
> +                       "Out of memory trying to queue an xmit sponsor wakeup\n");
> +                return;
> +        }
> +        p->type = V4V_PENDING_XMIT_WAITQ_MATCH_SPONSOR;
> +        p->from = *from;
> +        p->to = *to;
> +        p->len = len;
> +        atomic_inc(&pending_xmit_count);
> +        list_add_tail(&p->node, &pending_xmit_list);
> +}
> +
> +static int
> +xmit_queue_inline(struct v4v_ring_id *from, v4v_addr_t * to,
> +                  void *buf, size_t len, uint32_t protocol)
> +{
> +        ssize_t ret;
> +        unsigned long flags;
> +        struct pending_xmit *p;
> +
> +        spin_lock_irqsave(&pending_xmit_lock, flags);
> +
> +        ret = H_v4v_send(&from->addr, to, buf, len, protocol);
> +        if (ret != -EAGAIN) {
> +                spin_unlock_irqrestore(&pending_xmit_lock, flags);
> +                return ret;
> +        }
> +
> +        p = kmalloc(sizeof(struct pending_xmit) + len, GFP_ATOMIC);
> +        if (!p) {
> +                spin_unlock_irqrestore(&pending_xmit_lock, flags);
> +                printk(KERN_ERR
> +                       "Out of memory trying to queue an xmit of %zu bytes\n",
> +                       len);
> +
> +                return -ENOMEM;
> +        }
> +
> +        p->type = V4V_PENDING_XMIT_INLINE;
> +        p->from = *from;
> +        p->to = *to;
> +        p->len = len;
> +        p->protocol = protocol;
> +
> +        if (len)
> +                memcpy(p->data, buf, len);
> +
> +        list_add_tail(&p->node, &pending_xmit_list);
> +        atomic_inc(&pending_xmit_count);
> +        spin_unlock_irqrestore(&pending_xmit_lock, flags);
> +
> +        return len;
> +}
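xmit_queue_inline() is the send-or-queue half of the notify protocol: try the hypercall under the lock; any result other than -EAGAIN is final, while -EAGAIN parks a copy of the payload on pending_xmit_list for v4v_notify() to retry once Xen reports space. A reduced model of that decision (all names and the error value are stand-ins):

```c
#include <assert.h>

#define FAKE_EAGAIN (-11)       /* stand-in for -EAGAIN */

static int queued;              /* stands in for pending_xmit_list */

/* Pretend hypercall: succeeds only when the ring has space. */
static int fake_send(int ring_has_space, int len)
{
        return ring_has_space ? len : FAKE_EAGAIN;
}

static int send_or_queue(int ring_has_space, int len)
{
        int ret = fake_send(ring_has_space, len);

        if (ret != FAKE_EAGAIN)
                return ret;     /* delivered (or hard error): done */
        queued++;               /* park the payload for a later retry */
        return len;             /* queued data counts as accepted */
}
```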
> +
> +static void
> +xmit_queue_rst_to(struct v4v_ring_id *from, uint32_t conid, v4v_addr_t * to)
> +{
> +        struct v4v_stream_header sh;
> +
> +        if (!to)
> +                return;
> +
> +        sh.conid = conid;
> +        sh.flags = V4V_SHF_RST;
> +        xmit_queue_inline(from, to, &sh, sizeof(sh), V4V_PROTO_STREAM);
> +}
> +
> +/* RX */
> +
> +static int
> +copy_into_pending_recv(struct ring *r, int len, struct v4v_private *p)
> +{
> +        struct pending_recv *pending;
> +        int k;
> +
> +        /* Too much queued? Let the ring take the strain */
> +        if (atomic_read(&p->pending_recv_count) > MAX_PENDING_RECVS) {
> +                spin_lock(&p->pending_recv_lock);
> +                p->full = 1;
> +                spin_unlock(&p->pending_recv_lock);
> +
> +                return -1;
> +        }
> +
> +        pending = kmalloc(sizeof(struct pending_recv) -
> +                          sizeof(struct v4v_stream_header) + len, GFP_ATOMIC);
> +
> +        if (!pending)
> +                return -1;
> +
> +        pending->data_ptr = 0;
> +        pending->data_len = len - sizeof(struct v4v_stream_header);
> +
> +        k = v4v_copy_out(r->ring, &pending->from, NULL, &pending->sh, len, 1);
> +        if (k < 0) {
> +                kfree(pending);
> +                return -1;
> +        }
> +
> +        spin_lock(&p->pending_recv_lock);
> +        list_add_tail(&pending->node, &p->pending_recv_list);
> +        atomic_inc(&p->pending_recv_count);
> +        p->full = 0;
> +        spin_unlock(&p->pending_recv_lock);
> +
> +        return 0;
> +}
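The allocation size above works because struct pending_recv apparently ends with storage whose first bytes are the stream header itself: one kmalloc of sizeof(node) - sizeof(header) + msg_len covers node plus message, and data_len records the payload left after the header. A sketch of the sizing with simplified stand-in types:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Simplified stand-ins for v4v_stream_header / pending_recv. */
struct hdr { unsigned conid, flags; };
struct node {
        size_t data_len;        /* payload bytes after the header */
        struct hdr sh;          /* header copied here... */
        char data[];            /* ...payload continues in-line */
};

static struct node *alloc_pending(size_t msg_len)
{
        /* msg_len covers header + payload; the header's bytes live in
         * 'sh', so only the payload extends past sizeof(*n). */
        struct node *n = malloc(sizeof(*n) - sizeof(struct hdr) + msg_len);

        if (n)
                n->data_len = msg_len - sizeof(struct hdr);
        return n;
}
```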
> +
> +/* Notify */
> +
> +/* Caller must hold list_lock */
> +static void
> +wakeup_privates(struct v4v_ring_id *id, v4v_addr_t * peer, uint32_t conid)
> +{
> +        struct ring *r = find_ring_by_id_type(id, V4V_RTYPE_LISTENER);
> +        struct v4v_private *p;
> +
> +        if (!r)
> +                return;
> +
> +        list_for_each_entry(p, &r->privates, node) {
> +                if ((p->conid == conid)
> +                    && !memcmp(peer, &p->peer, sizeof(v4v_addr_t))) {
> +                        p->send_blocked = 0;
> +                        wake_up_interruptible_all(&p->writeq);
> +                        return;
> +                }
> +        }
> +}
> +
> +/* Caller must hold list_lock */
> +static void wakeup_sponsor(struct v4v_ring_id *id)
> +{
> +        struct ring *r = find_ring_by_id(id);
> +
> +        if (!r)
> +                return;
> +
> +        if (!r->sponsor)
> +                return;
> +
> +        r->sponsor->send_blocked = 0;
> +        wake_up_interruptible_all(&r->sponsor->writeq);
> +}
> +
> +static void v4v_null_notify(void)
> +{
> +        H_v4v_notify(NULL);
> +}
> +
> +/* Caller must hold list_lock */
> +static void v4v_notify(void)
> +{
> +        unsigned long flags;
> +        int ret;
> +        int nent;
> +        struct pending_xmit *p, *n;
> +        v4v_ring_data_t *d;
> +        int i = 0;
> +
> +        spin_lock_irqsave(&pending_xmit_lock, flags);
> +
> +        nent = atomic_read(&pending_xmit_count);
> +        d = kmalloc(sizeof(v4v_ring_data_t) +
> +                    nent * sizeof(v4v_ring_data_ent_t), GFP_ATOMIC);
> +        if (!d) {
> +                spin_unlock_irqrestore(&pending_xmit_lock, flags);
> +                return;
> +        }
> +        memset(d, 0, sizeof(v4v_ring_data_t));
> +
> +        d->magic = V4V_RING_DATA_MAGIC;
> +
> +        list_for_each_entry(p, &pending_xmit_list, node) {
> +                if (i != nent) {
> +                        d->data[i].ring = p->to;
> +                        d->data[i].space_required = p->len;
> +                        i++;
> +                }
> +        }
> +        d->nent = i;
> +
> +        if (H_v4v_notify(d)) {
> +                kfree(d);
> +                spin_unlock_irqrestore(&pending_xmit_lock, flags);
> +                //MOAN;
> +                return;
> +        }
> +
> +        i = 0;
> +        list_for_each_entry_safe(p, n, &pending_xmit_list, node) {
> +                int processed = 1;
> +
> +                if (i == nent)
> +                        continue;
> +
> +                if (d->data[i].flags & V4V_RING_DATA_F_EXISTS) {
> +                        switch (p->type) {
> +                        case V4V_PENDING_XMIT_INLINE:
> +                                if (!(d->data[i].flags &
> +                                      V4V_RING_DATA_F_SUFFICIENT)) {
> +                                        processed = 0;
> +                                        break;
> +                                }
> +                                ret = H_v4v_send(&p->from.addr, &p->to,
> +                                                 p->data, p->len,
> +                                                 p->protocol);
> +                                if (ret == -EAGAIN)
> +                                        processed = 0;
> +                                break;
> +                        case V4V_PENDING_XMIT_WAITQ_MATCH_SPONSOR:
> +                                if (d->data[i].flags &
> +                                    V4V_RING_DATA_F_SUFFICIENT) {
> +                                        wakeup_sponsor(&p->from);
> +                                } else {
> +                                        processed = 0;
> +                                }
> +                                break;
> +                        case V4V_PENDING_XMIT_WAITQ_MATCH_PRIVATES:
> +                                if (d->data[i].flags &
> +                                    V4V_RING_DATA_F_SUFFICIENT) {
> +                                        wakeup_privates(&p->from, &p->to,
> +                                                        p->conid);
> +                                } else {
> +                                        processed = 0;
> +                                }
> +                                break;
> +                        }
> +                }
> +                if (processed) {
> +                        list_del(&p->node);     /* No one to talk to */
> +                        atomic_dec(&pending_xmit_count);
> +                        kfree(p);
> +                }
> +                i++;
> +        }
> +
> +        spin_unlock_irqrestore(&pending_xmit_lock, flags);
> +        kfree(d);
> +}
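v4v_notify() batches every pending transmit into one round trip: it builds an array of (destination ring, space required), lets the hypervisor mark each entry with EXISTS/SUFFICIENT flags, then walks the pending list resending or waking as the flags allow. A tiny model of the flag-driven decision per entry (flag values here are hypothetical, not the real ABI constants):

```c
#include <assert.h>

#define F_EXISTS      0x1
#define F_SUFFICIENT  0x2

/* What happens to one pending entry given the flags reported for its
 * destination ring: 1 = drop it from the pending list (processed, or
 * no ring to talk to), 0 = keep it queued for the next notify. */
static int entry_processed(unsigned flags, int send_ok)
{
        if (!(flags & F_EXISTS))
                return 1;       /* no one to talk to: drop entry */
        if (!(flags & F_SUFFICIENT))
                return 0;       /* still no space: keep it queued */
        return send_ok;         /* retried the send; requeue on -EAGAIN */
}
```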
> +
> +/* VIPtables */
> +static void
> +v4v_viptables_add(struct v4v_private *p, struct v4v_viptables_rule *rule,
> +                  int position)
> +{
> +        H_v4v_viptables_add(rule, position);
> +}
> +
> +static void
> +v4v_viptables_del(struct v4v_private *p, struct v4v_viptables_rule *rule,
> +                  int position)
> +{
> +        H_v4v_viptables_del(rule, position);
> +}
> +
> +static int v4v_viptables_list(struct v4v_private *p, struct v4v_viptables_list *list)
> +{
> +        return H_v4v_viptables_list(list);
> +}
> +
> +/* State Machines */
> +static int
> +connector_state_machine(struct v4v_private *p, struct v4v_stream_header *sh)
> +{
> +        if (sh->flags & V4V_SHF_ACK) {
> +                switch (p->state) {
> +                case V4V_STATE_CONNECTING:
> +                        p->state = V4V_STATE_CONNECTED;
> +
> +                        spin_lock(&p->pending_recv_lock);
> +                        p->pending_error = 0;
> +                        spin_unlock(&p->pending_recv_lock);
> +
> +                        wake_up_interruptible_all(&p->writeq);
> +                        return 0;
> +                case V4V_STATE_CONNECTED:
> +                case V4V_STATE_DISCONNECTED:
> +                        p->state = V4V_STATE_DISCONNECTED;
> +
> +                        wake_up_interruptible_all(&p->readq);
> +                        wake_up_interruptible_all(&p->writeq);
> +                        return 1;       /* Send RST */
> +                default:
> +                        break;
> +                }
> +        }
> +
> +        if (sh->flags & V4V_SHF_RST) {
> +                switch (p->state) {
> +                case V4V_STATE_CONNECTING:
> +                        spin_lock(&p->pending_recv_lock);
> +                        p->pending_error = -ECONNREFUSED;
> +                        spin_unlock(&p->pending_recv_lock);
> +                        /* Fall through */
> +                case V4V_STATE_CONNECTED:
> +                        p->state = V4V_STATE_DISCONNECTED;
> +                        wake_up_interruptible_all(&p->readq);
> +                        wake_up_interruptible_all(&p->writeq);
> +                        return 0;
> +                default:
> +                        break;
> +                }
> +        }
> +
> +        return 0;
> +}
> +
> +static void
> +acceptor_state_machine(struct v4v_private *p, struct v4v_stream_header *sh)
> +{
> +        if ((sh->flags & V4V_SHF_RST)
> +            && ((p->state == V4V_STATE_ACCEPTED))) {
> +                p->state = V4V_STATE_DISCONNECTED;
> +                wake_up_interruptible_all(&p->readq);
> +                wake_up_interruptible_all(&p->writeq);
> +        }
> +}
> +
> +/* Interrupt handler */
> +
> +static int connector_interrupt(struct ring *r)
> +{
> +        ssize_t msg_len;
> +        uint32_t protocol;
> +        struct v4v_stream_header sh;
> +        v4v_addr_t from;
> +        int ret = 0;
> +
> +        if (!r->sponsor) {
> +                //MOAN;
> +                return -1;
> +        }
> +
> +        msg_len = v4v_copy_out(r->ring, &from, &protocol, &sh, sizeof(sh), 0);  /* Peek the header */
> +        if (msg_len == -1) {
> +                recover_ring(r);
> +                return ret;
> +        }
> +
> +        if ((protocol != V4V_PROTO_STREAM) || (msg_len < sizeof(sh))) {
> +                /* Wrong protocol; bin it */
> +                v4v_copy_out(r->ring, NULL, NULL, NULL, 0, 1);
> +                return ret;
> +        }
> +
> +        if (sh.flags & V4V_SHF_SYN) {   /* A connector should never receive SYN; send RST back */
> +                msg_len =
> +                    v4v_copy_out(r->ring, &from, &protocol, &sh, sizeof(sh), 1);
> +                if (msg_len == sizeof(sh))
> +                        xmit_queue_rst_to(&r->ring->id, sh.conid, &from);
> +                return ret;
> +        }
> +
> +        /* Right connexion? */
> +        if (sh.conid != r->sponsor->conid) {
> +                msg_len =
> +                    v4v_copy_out(r->ring, &from, &protocol, &sh, sizeof(sh), 1);
> +                xmit_queue_rst_to(&r->ring->id, sh.conid, &from);
> +                return ret;
> +        }
> +
> +        /* Any messages to eat? */
> +        if (sh.flags & (V4V_SHF_ACK | V4V_SHF_RST)) {
> +                msg_len =
> +                    v4v_copy_out(r->ring, &from, &protocol, &sh, sizeof(sh), 1);
> +                if (msg_len == sizeof(sh)) {
> +                        if (connector_state_machine(r->sponsor, &sh))
> +                                xmit_queue_rst_to(&r->ring->id, sh.conid,
> +                                                  &from);
> +                }
> +                return ret;
> +        }
> +        /*
> +         * FIXME: set a flag to say wake up the userland process next
> +         * time, and do that rather than copy.
> +         */
> +        ret = copy_into_pending_recv(r, msg_len, r->sponsor);
> +        wake_up_interruptible_all(&r->sponsor->readq);
> +
> +        return ret;
> +}
> +
> +static int
> +acceptor_interrupt(struct v4v_private *p, struct ring *r,
> +                   struct v4v_stream_header *sh, ssize_t msg_len)
> +{
> +        v4v_addr_t from;
> +        int ret = 0;
> +
> +        if (sh->flags & (V4V_SHF_SYN | V4V_SHF_ACK)) {  /* An acceptor should never receive SYN or ACK; send RST back */
> +                msg_len =
> +                    v4v_copy_out(r->ring, &from, NULL, sh, sizeof(*sh), 1);
> +                if (msg_len == sizeof(*sh))
> +                        xmit_queue_rst_to(&r->ring->id, sh->conid, &from);
> +                return ret;
> +        }
> +
> +        /* Is it all over? */
> +        if (sh->flags & V4V_SHF_RST) {
> +                /* Consume the RST */
> +                msg_len =
> +                    v4v_copy_out(r->ring, &from, NULL, sh, sizeof(*sh), 1);
> +                if (msg_len == sizeof(*sh))
> +                        acceptor_state_machine(p, sh);
> +                return ret;
> +        }
> +
> +        /* Copy the message out */
> +        ret = copy_into_pending_recv(r, msg_len, p);
> +        wake_up_interruptible_all(&p->readq);
> +
> +        return ret;
> +}
> +
> +static int listener_interrupt(struct ring *r)
> +{
> +        int ret = 0;
> +        ssize_t msg_len;
> +        uint32_t protocol;
> +        struct v4v_stream_header sh;
> +        struct v4v_private *p;
> +        v4v_addr_t from;
> +
> +        msg_len = v4v_copy_out(r->ring, &from, &protocol, &sh, sizeof(sh), 0);  /* Peek the header */
> +        if (msg_len == -1) {
> +                recover_ring(r);
> +                return ret;
> +        }
> +
> +        if ((protocol != V4V_PROTO_STREAM) || (msg_len < sizeof(sh))) {
> +                /* Wrong protocol; bin it */
> +                v4v_copy_out(r->ring, NULL, NULL, NULL, 0, 1);
> +                return ret;
> +        }
> +
> +        list_for_each_entry(p, &r->privates, node) {
> +                if ((p->conid == sh.conid)
> +                    && (!memcmp(&p->peer, &from, sizeof(v4v_addr_t)))) {
> +                        ret = acceptor_interrupt(p, r, &sh, msg_len);
> +                        return ret;
> +                }
> +        }
> +
> +        /* Consume it */
> +        if (r->sponsor && (sh.flags & V4V_SHF_RST)) {
> +                /*
> +                 * If we previously received a SYN which has not been pulled by
> +                 * v4v_accept() from the pending queue yet, the RST will be dropped here
> +                 * and the connection will never be closed.
> +                 * Hence we must make sure to evict the SYN header from the pending queue
> +                 * before it gets picked up by v4v_accept().
> +                 */
> +                struct pending_recv *pending, *t;
> +
> +                spin_lock(&r->sponsor->pending_recv_lock);
> +                list_for_each_entry_safe(pending, t,
> +                                         &r->sponsor->pending_recv_list, node) {
> +                        if (pending->sh.flags & V4V_SHF_SYN
> +                            && pending->sh.conid == sh.conid) {
> +                                list_del(&pending->node);
> +                                atomic_dec(&r->sponsor->pending_recv_count);
> +                                kfree(pending);
> +                                break;
> +                        }
> +                }
> +                spin_unlock(&r->sponsor->pending_recv_lock);
> +
> +                /* An RST to a listener should have matched a connexion above; drop it */
> +                v4v_copy_out(r->ring, NULL, NULL, NULL, sizeof(sh), 1);
> +                return ret;
> +        }
> +
> +        if (sh.flags & V4V_SHF_SYN) {
> +                /* Syn to new connexion */
> +                if ((!r->sponsor) || (msg_len != sizeof(sh))) {
> +                        v4v_copy_out(r->ring, NULL, NULL, NULL,
> +                                           sizeof(sh), 1);
> +                        return ret;
> +                }
> +                ret = copy_into_pending_recv(r, msg_len, r->sponsor);
> +                wake_up_interruptible_all(&r->sponsor->readq);
> +                return ret;
> +        }
> +
> +        v4v_copy_out(r->ring, NULL, NULL, NULL, sizeof(sh), 1);
> +        /* Data for unknown destination, RST them */
> +        xmit_queue_rst_to(&r->ring->id, sh.conid, &from);
> +
> +        return ret;
> +}
> +
> +static void v4v_interrupt_rx(void)
> +{
> +        struct ring *r;
> +
> +        read_lock(&list_lock);
> +
> +        /* Wake up anyone pending */
> +        list_for_each_entry(r, &ring_list, node) {
> +                if (r->ring->tx_ptr == r->ring->rx_ptr)
> +                        continue;
> +
> +                switch (r->type) {
> +                case V4V_RTYPE_IDLE:
> +                        v4v_copy_out(r->ring, NULL, NULL, NULL, 1, 1);
> +                        break;
> +                case V4V_RTYPE_DGRAM:  /* For datagrams we just wake up the reader */
> +                        if (r->sponsor)
> +                                wake_up_interruptible_all(&r->sponsor->readq);
> +                        break;
> +                case V4V_RTYPE_CONNECTOR:
> +                        spin_lock(&r->lock);
> +                        while ((r->ring->tx_ptr != r->ring->rx_ptr)
> +                               && !connector_interrupt(r)) ;
> +                        spin_unlock(&r->lock);
> +                        break;
> +                case V4V_RTYPE_LISTENER:
> +                        spin_lock(&r->lock);
> +                        while ((r->ring->tx_ptr != r->ring->rx_ptr)
> +                               && !listener_interrupt(r)) ;
> +                        spin_unlock(&r->lock);
> +                        break;
> +                default:       /* enum warning */
> +                        break;
> +                }
> +        }
> +        read_unlock(&list_lock);
> +}
> +
> +static irqreturn_t v4v_interrupt(int irq, void *dev_id)
> +{
> +        unsigned long flags;
> +
> +        spin_lock_irqsave(&interrupt_lock, flags);
> +        v4v_interrupt_rx();
> +        v4v_notify();
> +        spin_unlock_irqrestore(&interrupt_lock, flags);
> +
> +        return IRQ_HANDLED;
> +}
> +
> +static void v4v_fake_irq(void)
> +{
> +        unsigned long flags;
> +
> +        spin_lock_irqsave(&interrupt_lock, flags);
> +        v4v_interrupt_rx();
> +        v4v_null_notify();
> +        spin_unlock_irqrestore(&interrupt_lock, flags);
> +}
> +
> +/* Filesystem gunge */
> +
> +#define V4VFS_MAGIC 0x56345644  /* "V4VD" */
> +
> +static struct vfsmount *v4v_mnt = NULL;
> +static const struct file_operations v4v_fops_stream;
> +
> +static struct dentry *v4vfs_mount_pseudo(struct file_system_type *fs_type,
> +                                         int flags, const char *dev_name,
> +                                         void *data)
> +{
> +        return mount_pseudo(fs_type, "v4v:", NULL, NULL, V4VFS_MAGIC);
> +}
> +
> +static struct file_system_type v4v_fs = {
> +        /* No owner field so module can be unloaded */
> +        .name = "v4vfs",
> +        .mount = v4vfs_mount_pseudo,
> +        .kill_sb = kill_litter_super
> +};
> +
> +static int setup_fs(void)
> +{
> +        int ret;
> +
> +        ret = register_filesystem(&v4v_fs);
> +        if (ret) {
> +                printk(KERN_ERR
> +                       "v4v: couldn't register tedious filesystem thingy\n");
> +                return ret;
> +        }
> +
> +        v4v_mnt = kern_mount(&v4v_fs);
> +        if (IS_ERR(v4v_mnt)) {
> +                unregister_filesystem(&v4v_fs);
> +                ret = PTR_ERR(v4v_mnt);
> +                printk(KERN_ERR
> +                       "v4v: couldn't mount tedious filesystem thingy\n");
> +                return ret;
> +        }
> +
> +        return 0;
> +}
> +
> +static void unsetup_fs(void)
> +{
> +        mntput(v4v_mnt);
> +        unregister_filesystem(&v4v_fs);
> +}
> +
> +/* Methods */
> +
> +static int stream_connected(struct v4v_private *p)
> +{
> +        switch (p->state) {
> +        case V4V_STATE_ACCEPTED:
> +        case V4V_STATE_CONNECTED:
> +                return 1;
> +        default:
> +                return 0;
> +        }
> +}
> +
> +static ssize_t
> +v4v_try_send_sponsor(struct v4v_private *p,
> +                     v4v_addr_t * dest,
> +                     const void *buf, size_t len, uint32_t protocol)
> +{
> +        ssize_t ret;
> +        unsigned long flags;
> +
> +        ret = H_v4v_send(&p->r->ring->id.addr, dest, buf, len, protocol);
> +        spin_lock_irqsave(&pending_xmit_lock, flags);
> +        if (ret == -EAGAIN) {
> +                /* Add pending xmit */
> +                xmit_queue_wakeup_sponsor(&p->r->ring->id, dest, len, 0);
> +                p->send_blocked++;
> +
> +        } else {
> +                /* Remove pending xmit */
> +                xmit_queue_wakeup_sponsor(&p->r->ring->id, dest, len, 1);
> +                p->send_blocked = 0;
> +        }
> +
> +        spin_unlock_irqrestore(&pending_xmit_lock, flags);
> +
> +        return ret;
> +}
> +
> +static ssize_t
> +v4v_try_sendv_sponsor(struct v4v_private *p,
> +                      v4v_addr_t * dest,
> +                      const v4v_iov_t * iovs, size_t niov, size_t len,
> +                      uint32_t protocol)
> +{
> +        ssize_t ret;
> +        unsigned long flags;
> +
> +        ret = H_v4v_sendv(&p->r->ring->id.addr, dest, iovs, niov, protocol);
> +
> +        spin_lock_irqsave(&pending_xmit_lock, flags);
> +        if (ret == -EAGAIN) {
> +                /* Add pending xmit */
> +                xmit_queue_wakeup_sponsor(&p->r->ring->id, dest, len, 0);
> +                p->send_blocked++;
> +
> +        } else {
> +                /* Remove pending xmit */
> +                xmit_queue_wakeup_sponsor(&p->r->ring->id, dest, len, 1);
> +                p->send_blocked = 0;
> +        }
> +        spin_unlock_irqrestore(&pending_xmit_lock, flags);
> +
> +        return ret;
> +}
> +
> +/*
> + * Try to send from one of the ring's privates (not its sponsor),
> + * and queue a writeq wakeup if we fail
> + */
> +static ssize_t
> +v4v_try_sendv_privates(struct v4v_private *p,
> +                       v4v_addr_t * dest,
> +                       const v4v_iov_t * iovs, size_t niov, size_t len,
> +                       uint32_t protocol)
> +{
> +        ssize_t ret;
> +        unsigned long flags;
> +
> +        ret = H_v4v_sendv(&p->r->ring->id.addr, dest, iovs, niov, protocol);
> +
> +        spin_lock_irqsave(&pending_xmit_lock, flags);
> +        if (ret == -EAGAIN) {
> +                /* Add pending xmit */
> +                xmit_queue_wakeup_private(&p->r->ring->id, p->conid, dest, len,
> +                                          0);
> +                p->send_blocked++;
> +        } else {
> +                /* Remove pending xmit */
> +                xmit_queue_wakeup_private(&p->r->ring->id, p->conid, dest, len,
> +                                          1);
> +                p->send_blocked = 0;
> +        }
> +        spin_unlock_irqrestore(&pending_xmit_lock, flags);
> +
> +        return ret;
> +}
> +
> +static ssize_t
> +v4v_sendto_from_sponsor(struct v4v_private *p,
> +                        const void *buf, size_t len,
> +                        int nonblock, v4v_addr_t *dest, uint32_t protocol)
> +{
> +        ssize_t ret = 0, ts_ret;
> +
> +        switch (p->state) {
> +        case V4V_STATE_CONNECTING:
> +                ret = -ENOTCONN;
> +                break;
> +        case V4V_STATE_DISCONNECTED:
> +                ret = -EPIPE;
> +                break;
> +        case V4V_STATE_BOUND:
> +        case V4V_STATE_CONNECTED:
> +                break;
> +        default:
> +                ret = -EINVAL;
> +        }
> +
> +        if (ret)
> +                return ret;
> +
> +        if (len > (p->r->ring->len - sizeof(struct v4v_ring_message_header)))
> +                return -EMSGSIZE;
> +
> +        if (nonblock) {
> +                return H_v4v_send(&p->r->ring->id.addr, dest, buf, len,
> +                                  protocol);
> +        }
> +        /*
> +         * I happen to know that wait_event_interruptible will never
> +         * evaluate the 2nd argument once it has returned true, but
> +         * I shouldn't rely on that.
> +         *
> +         * The -EAGAIN will cause Xen to send an interrupt, which will
> +         * wake us up via the pending_xmit_list and writeq.
> +         */
> +        ret = wait_event_interruptible(p->writeq,
> +                                       ((ts_ret =
> +                                         v4v_try_send_sponsor
> +                                         (p, dest,
> +                                          buf, len, protocol)) != -EAGAIN));
> +        if (ret == 0)
> +                ret = ts_ret;
> +
> +        return ret;
> +}
> +
> +static ssize_t
> +v4v_stream_sendvto_from_sponsor(struct v4v_private *p,
> +                                const v4v_iov_t *iovs, size_t niov,
> +                                size_t len, int nonblock,
> +                                v4v_addr_t *dest, uint32_t protocol)
> +{
> +        ssize_t ret = 0, ts_ret;
> +
> +        switch (p->state) {
> +        case V4V_STATE_CONNECTING:
> +                return -ENOTCONN;
> +        case V4V_STATE_DISCONNECTED:
> +                return -EPIPE;
> +        case V4V_STATE_BOUND:
> +        case V4V_STATE_CONNECTED:
> +                break;
> +        default:
> +                return -EINVAL;
> +        }
> +
> +        if (len > (p->r->ring->len - sizeof(struct v4v_ring_message_header)))
> +                return -EMSGSIZE;
> +
> +        if (nonblock) {
> +                return H_v4v_sendv(&p->r->ring->id.addr, dest, iovs, niov,
> +                                   protocol);
> +        }
> +        /*
> +         * I happen to know that wait_event_interruptible will never
> +         * evaluate the 2nd argument once it has returned true, but
> +         * I shouldn't rely on that.
> +         *
> +         * The -EAGAIN will cause Xen to send an interrupt, which will
> +         * wake us up via the pending_xmit_list and writeq.
> +         */
> +        ret = wait_event_interruptible(p->writeq,
> +                                       ((ts_ret =
> +                                         v4v_try_sendv_sponsor
> +                                         (p, dest,
> +                                          iovs, niov, len,
> +                                          protocol)) != -EAGAIN)
> +                                       || !stream_connected(p));
> +        if (ret == 0)
> +                ret = ts_ret;
> +
> +        return ret;
> +}
> +
> +static ssize_t
> +v4v_stream_sendvto_from_private(struct v4v_private *p,
> +                                const v4v_iov_t *iovs, size_t niov,
> +                                size_t len, int nonblock,
> +                                v4v_addr_t *dest, uint32_t protocol)
> +{
> +        ssize_t ret = 0, ts_ret;
> +
> +        switch (p->state) {
> +        case V4V_STATE_DISCONNECTED:
> +                return -EPIPE;
> +        case V4V_STATE_ACCEPTED:
> +                break;
> +        default:
> +                return -EINVAL;
> +        }
> +
> +        if (len > (p->r->ring->len - sizeof(struct v4v_ring_message_header)))
> +                return -EMSGSIZE;
> +
> +        if (nonblock) {
> +                return H_v4v_sendv(&p->r->ring->id.addr, dest, iovs, niov,
> +                                   protocol);
> +        }
> +        /*
> +         * I happen to know that wait_event_interruptible will never
> +         * evaluate the 2nd argument once it has returned true, but
> +         * I shouldn't rely on that.
> +         *
> +         * The -EAGAIN will cause Xen to send an interrupt, which will
> +         * wake us up via the pending_xmit_list and writeq.
> +         */
> +        ret = wait_event_interruptible(p->writeq,
> +                                       ((ts_ret =
> +                                         v4v_try_sendv_privates
> +                                         (p, dest,
> +                                          iovs, niov, len,
> +                                          protocol)) != -EAGAIN)
> +                                       || !stream_connected(p));
> +        if (ret == 0)
> +                ret = ts_ret;
> +
> +        return ret;
> +}
> +
> +static int v4v_get_sock_name(struct v4v_private *p, struct v4v_ring_id *id)
> +{
> +        int rc = 0;
> +
> +        read_lock(&list_lock);
> +        if ((p->r) && (p->r->ring)) {
> +                *id = p->r->ring->id;
> +        } else {
> +                rc = -EINVAL;
> +        }
> +        read_unlock(&list_lock);
> +
> +        return rc;
> +}
> +
> +static int v4v_get_peer_name(struct v4v_private *p, v4v_addr_t *id)
> +{
> +        int rc = 0;
> +
> +        read_lock(&list_lock);
> +
> +        switch (p->state) {
> +        case V4V_STATE_CONNECTING:
> +        case V4V_STATE_CONNECTED:
> +        case V4V_STATE_ACCEPTED:
> +                *id = p->peer;
> +                break;
> +        default:
> +                rc = -ENOTCONN;
> +        }
> +
> +        read_unlock(&list_lock);
> +        return rc;
> +}
> +
> +static int v4v_set_ring_size(struct v4v_private *p, uint32_t ring_size)
> +{
> +        if (ring_size <
> +            (sizeof(struct v4v_ring_message_header) + V4V_ROUNDUP(1)))
> +                return -EINVAL;
> +        if (ring_size != V4V_ROUNDUP(ring_size))
> +                return -EINVAL;
> +
> +        read_lock(&list_lock);
> +        if (p->state != V4V_STATE_IDLE) {
> +                read_unlock(&list_lock);
> +                return -EINVAL;
> +        }
> +
> +        p->desired_ring_size = ring_size;
> +        read_unlock(&list_lock);
> +
> +        return 0;
> +}
> +
> +static ssize_t
> +v4v_recvfrom_dgram(struct v4v_private *p, void *buf, size_t len,
> +                   int nonblock, int peek, v4v_addr_t *src)
> +{
> +        ssize_t ret;
> +        uint32_t protocol;
> +        v4v_addr_t lsrc;
> +
> +        if (!src)
> +                src = &lsrc;
> +
> +retry:
> +        if (!nonblock) {
> +                ret = wait_event_interruptible(p->readq,
> +                                               (p->r->ring->rx_ptr !=
> +                                                p->r->ring->tx_ptr));
> +                if (ret)
> +                        return ret;
> +        }
> +
> +        read_lock(&list_lock);
> +
> +        /*
> +         * For datagrams, we know the interrupt handler will never use
> +         * the ring, so leave irqs on.
> +         */
> +        spin_lock(&p->r->lock);
> +        if (p->r->ring->rx_ptr == p->r->ring->tx_ptr) {
> +                spin_unlock(&p->r->lock);
> +                if (nonblock) {
> +                        ret = -EAGAIN;
> +                        goto unlock;
> +                }
> +                read_unlock(&list_lock);
> +                goto retry;
> +        }
> +        ret = v4v_copy_out(p->r->ring, src, &protocol, buf, len, !peek);
> +        if (ret < 0) {
> +                recover_ring(p->r);
> +                spin_unlock(&p->r->lock);
> +                read_unlock(&list_lock);
> +                goto retry;
> +        }
> +        spin_unlock(&p->r->lock);
> +
> +        if (!peek)
> +                v4v_null_notify();
> +
> +        if (protocol != V4V_PROTO_DGRAM) {
> +                /* If peeking, consume the rubbish */
> +                if (peek)
> +                        v4v_copy_out(p->r->ring, NULL, NULL, NULL, 1, 1);
> +                read_unlock(&list_lock);
> +                goto retry;
> +        }
> +
> +        if ((p->state == V4V_STATE_CONNECTED) &&
> +            memcmp(src, &p->peer, sizeof(v4v_addr_t))) {
> +                /* Wrong source - bin it */
> +                if (peek)
> +                        v4v_copy_out(p->r->ring, NULL, NULL, NULL, 1, 1);
> +                read_unlock(&list_lock);
> +                goto retry;
> +        }
> +
> +unlock:
> +        read_unlock(&list_lock);
> +
> +        return ret;
> +}
> +
> +static ssize_t
> +v4v_recv_stream(struct v4v_private *p, void *_buf, int len, int recv_flags,
> +                int nonblock)
> +{
> +        size_t count = 0;
> +        int ret = 0;
> +        unsigned long flags;
> +        int schedule_irq = 0;
> +        uint8_t *buf = (void *)_buf;
> +
> +        read_lock(&list_lock);
> +
> +        switch (p->state) {
> +        case V4V_STATE_DISCONNECTED:
> +                ret = -EPIPE;
> +                goto unlock;
> +        case V4V_STATE_CONNECTING:
> +                ret = -ENOTCONN;
> +                goto unlock;
> +        case V4V_STATE_CONNECTED:
> +        case V4V_STATE_ACCEPTED:
> +                break;
> +        default:
> +                ret = -EINVAL;
> +                goto unlock;
> +        }
> +
> +        do {
> +                if (!nonblock) {
> +                        ret = wait_event_interruptible(p->readq,
> +                                                       (!list_empty(&p->pending_recv_list)
> +                                                        || !stream_connected(p)));
> +
> +                        if (ret)
> +                                break;
> +                }
> +
> +                spin_lock_irqsave(&p->pending_recv_lock, flags);
> +
> +                while (!list_empty(&p->pending_recv_list) && len) {
> +                        size_t to_copy;
> +                        struct pending_recv *pending;
> +                        int unlink = 0;
> +
> +                        pending = list_first_entry(&p->pending_recv_list,
> +                                                   struct pending_recv, node);
> +
> +                        if ((pending->data_len - pending->data_ptr) > len) {
> +                                to_copy = len;
> +                        } else {
> +                                unlink = 1;
> +                                to_copy = pending->data_len - pending->data_ptr;
> +                        }
> +
> +                        if (!access_ok(VERIFY_WRITE, buf, to_copy)) {
> +                                printk(KERN_ERR
> +                                       "V4V - ERROR: buf invalid _buf=%p buf=%p len=%d to_copy=%zu count=%zu\n",
> +                                       _buf, buf, len, to_copy, count);
> +                                spin_unlock_irqrestore(&p->pending_recv_lock, flags);
> +                                read_unlock(&list_lock);
> +                                return -EFAULT;
> +                        }
> +
> +                        if (copy_to_user(buf, pending->data + pending->data_ptr, to_copy))
> +                        {
> +                                spin_unlock_irqrestore(&p->pending_recv_lock, flags);
> +                                read_unlock(&list_lock);
> +                                return -EFAULT;
> +                        }
> +
> +                        if (unlink) {
> +                                list_del(&pending->node);
> +                                kfree(pending);
> +                                atomic_dec(&p->pending_recv_count);
> +                                if (p->full)
> +                                        schedule_irq = 1;
> +                        } else
> +                                pending->data_ptr += to_copy;
> +
> +                        buf += to_copy;
> +                        count += to_copy;
> +                        len -= to_copy;
> +                }
> +
> +                spin_unlock_irqrestore(&p->pending_recv_lock, flags);
> +
> +                if (p->state == V4V_STATE_DISCONNECTED) {
> +                        ret = -EPIPE;
> +                        break;
> +                }
> +
> +                if (nonblock)
> +                        ret = -EAGAIN;
> +
> +        } while ((recv_flags & MSG_WAITALL) && len);
> +
> +unlock:
> +        read_unlock(&list_lock);
> +
> +        if (schedule_irq)
> +                v4v_fake_irq();
> +
> +        return count ? count : ret;
> +}
> +
> +static ssize_t
> +v4v_send_stream(struct v4v_private *p, const void *_buf, int len, int nonblock)
> +{
> +        int write_lump;
> +        const uint8_t *buf = _buf;
> +        size_t count = 0;
> +        ssize_t ret;
> +        int to_send;
> +
> +        write_lump = DEFAULT_RING_SIZE >> 2;
> +
> +        switch (p->state) {
> +        case V4V_STATE_DISCONNECTED:
> +                return -EPIPE;
> +        case V4V_STATE_CONNECTING:
> +                return -ENOTCONN;
> +        case V4V_STATE_CONNECTED:
> +        case V4V_STATE_ACCEPTED:
> +                break;
> +        default:
> +                return -EINVAL;
> +        }
> +
> +        while (len) {
> +                struct v4v_stream_header sh;
> +                v4v_iov_t iovs[2];
> +
> +                to_send = len > write_lump ? write_lump : len;
> +                sh.flags = 0;
> +                sh.conid = p->conid;
> +
> +                iovs[0].iov_base = (uintptr_t)&sh;
> +                iovs[0].iov_len = sizeof (sh);
> +
> +                iovs[1].iov_base = (uintptr_t)buf;
> +                iovs[1].iov_len = to_send;
> +
> +                if (p->state == V4V_STATE_CONNECTED)
> +                    ret = v4v_stream_sendvto_from_sponsor(
> +                                p, iovs, 2,
> +                                to_send + sizeof(struct v4v_stream_header),
> +                                nonblock, &p->peer, V4V_PROTO_STREAM);
> +                else
> +                    ret = v4v_stream_sendvto_from_private(
> +                                p, iovs, 2,
> +                                to_send + sizeof(struct v4v_stream_header),
> +                                nonblock, &p->peer, V4V_PROTO_STREAM);
> +
> +                if (ret < 0) {
> +                        return count ? count : ret;
> +                }
> +
> +                len -= to_send;
> +                buf += to_send;
> +                count += to_send;
> +
> +                if (nonblock)
> +                        return count;
> +        }
> +
> +        return count;
> +}
> +
> +static int v4v_bind(struct v4v_private *p, struct v4v_ring_id *ring_id)
> +{
> +        int ret = 0;
> +
> +        if (ring_id->addr.domain != V4V_DOMID_NONE) {
> +                return -EINVAL;
> +        }
> +
> +        switch (p->ptype) {
> +        case V4V_PTYPE_DGRAM:
> +                ret = new_ring(p, ring_id);
> +                if (!ret)
> +                        p->r->type = V4V_RTYPE_DGRAM;
> +                break;
> +        case V4V_PTYPE_STREAM:
> +                ret = new_ring(p, ring_id);
> +                break;
> +        }
> +
> +        return ret;
> +}
> +
> +static int v4v_listen(struct v4v_private *p)
> +{
> +        if (p->ptype != V4V_PTYPE_STREAM)
> +                return -EINVAL;
> +
> +        if (p->state != V4V_STATE_BOUND) {
> +                return -EINVAL;
> +        }
> +
> +        p->r->type = V4V_RTYPE_LISTENER;
> +        p->state = V4V_STATE_LISTENING;
> +
> +        return 0;
> +}
> +
> +static int v4v_connect(struct v4v_private *p, v4v_addr_t *peer, int nonblock)
> +{
> +        struct v4v_stream_header sh;
> +        int ret = -EINVAL;
> +
> +        if (p->ptype == V4V_PTYPE_DGRAM) {
> +                switch (p->state) {
> +                case V4V_STATE_BOUND:
> +                case V4V_STATE_CONNECTED:
> +                        if (peer) {
> +                                p->state = V4V_STATE_CONNECTED;
> +                                memcpy(&p->peer, peer, sizeof(v4v_addr_t));
> +                        } else {
> +                                p->state = V4V_STATE_BOUND;
> +                        }
> +                        return 0;
> +                default:
> +                        return -EINVAL;
> +                }
> +        }
> +        if (p->ptype != V4V_PTYPE_STREAM) {
> +                return -EINVAL;
> +        }
> +
> +        /* Irritatingly, we need to be restartable */
> +        switch (p->state) {
> +        case V4V_STATE_BOUND:
> +                p->r->type = V4V_RTYPE_CONNECTOR;
> +                p->state = V4V_STATE_CONNECTING;
> +                p->conid = random32();
> +                p->peer = *peer;
> +
> +                sh.flags = V4V_SHF_SYN;
> +                sh.conid = p->conid;
> +
> +                ret = xmit_queue_inline(&p->r->ring->id, &p->peer, &sh,
> +                                        sizeof(sh), V4V_PROTO_STREAM);
> +                if (ret == sizeof(sh))
> +                        ret = 0;
> +
> +                if (ret && (ret != -EAGAIN)) {
> +                        p->state = V4V_STATE_BOUND;
> +                        p->r->type = V4V_RTYPE_DGRAM;
> +                        return ret;
> +                }
> +
> +                break;
> +        case V4V_STATE_CONNECTED:
> +                if (memcmp(peer, &p->peer, sizeof(v4v_addr_t))) {
> +                        return -EINVAL;
> +                } else {
> +                        return 0;
> +                }
> +        case V4V_STATE_CONNECTING:
> +                if (memcmp(peer, &p->peer, sizeof(v4v_addr_t))) {
> +                        return -EINVAL;
> +                }
> +                break;
> +        default:
> +                return -EINVAL;
> +        }
> +
> +        if (nonblock) {
> +                return -EINPROGRESS;
> +        }
> +
> +        while (p->state != V4V_STATE_CONNECTED) {
> +                ret = wait_event_interruptible(p->writeq,
> +                                               (p->state !=
> +                                                V4V_STATE_CONNECTING));
> +                if (ret)
> +                        return ret;
> +
> +                if (p->state == V4V_STATE_DISCONNECTED) {
> +                        p->state = V4V_STATE_BOUND;
> +                        p->r->type = V4V_RTYPE_DGRAM;
> +                        ret = -ECONNREFUSED;
> +                        break;
> +                }
> +        }
> +
> +        return ret;
> +}
> +
> +static int allocate_fd_with_private(void *private)
> +{
> +        int fd;
> +        struct file *f;
> +        struct qstr name = {.name = "" };
> +        struct path path;
> +        struct inode *ind;
> +
> +        fd = get_unused_fd();
> +        if (fd < 0)
> +                return fd;
> +
> +        path.dentry = d_alloc_pseudo(v4v_mnt->mnt_sb, &name);
> +        if (unlikely(!path.dentry)) {
> +                put_unused_fd(fd);
> +                return -ENOMEM;
> +        }
> +        ind = new_inode(v4v_mnt->mnt_sb);
> +        ind->i_ino = get_next_ino();
> +        ind->i_fop = v4v_mnt->mnt_root->d_inode->i_fop;
> +        ind->i_state = v4v_mnt->mnt_root->d_inode->i_state;
> +        ind->i_mode = v4v_mnt->mnt_root->d_inode->i_mode;
> +        ind->i_uid = current_fsuid();
> +        ind->i_gid = current_fsgid();
> +        d_instantiate(path.dentry, ind);
> +
> +        path.mnt = mntget(v4v_mnt);
> +
> +        f = alloc_file(&path, FMODE_READ | FMODE_WRITE, &v4v_fops_stream);
> +        if (!f) {
> +                put_unused_fd(fd);
> +                path_put(&path);
> +                return -ENFILE;
> +        }
> +
> +        f->private_data = private;
> +        fd_install(fd, f);
> +
> +        return fd;
> +}
> +
> +static int
> +v4v_accept(struct v4v_private *p, struct v4v_addr *peer, int nonblock)
> +{
> +        int fd;
> +        int ret = 0;
> +        struct v4v_private *a = NULL;
> +        struct pending_recv *r = NULL;
> +        unsigned long flags;
> +        struct v4v_stream_header sh;
> +
> +        if (p->ptype != V4V_PTYPE_STREAM)
> +                return -ENOTTY;
> +
> +        if (p->state != V4V_STATE_LISTENING) {
> +                return -EINVAL;
> +        }
> +
> +        /* FIXME: leak! */
> +        for (;;) {
> +                ret = wait_event_interruptible(p->readq,
> +                                               !list_empty(&p->pending_recv_list)
> +                                               || nonblock);
> +                if (ret)
> +                        return ret;
> +
> +                /* Write lock implicitly has pending_recv_lock */
> +                write_lock_irqsave(&list_lock, flags);
> +
> +                if (!list_empty(&p->pending_recv_list)) {
> +                        r = list_first_entry(&p->pending_recv_list,
> +                                             struct pending_recv, node);
> +
> +                        list_del(&r->node);
> +                        atomic_dec(&p->pending_recv_count);
> +
> +                        if ((!r->data_len) && (r->sh.flags & V4V_SHF_SYN))
> +                                break;
> +
> +                        kfree(r);
> +                }
> +
> +                write_unlock_irqrestore(&list_lock, flags);
> +                if (nonblock)
> +                        return -EAGAIN;
> +        }
> +        write_unlock_irqrestore(&list_lock, flags);
> +
> +        a = kmalloc(sizeof(struct v4v_private), GFP_KERNEL);
> +        if (!a) {
> +                ret = -ENOMEM;
> +                goto release;
> +        }
> +
> +        memset(a, 0, sizeof(struct v4v_private));
> +        a->state = V4V_STATE_ACCEPTED;
> +        a->ptype = V4V_PTYPE_STREAM;
> +        a->r = p->r;
> +        if (!get_ring(a->r)) {
> +                a->r = NULL;
> +                ret = -EINVAL;
> +                goto release;
> +        }
> +
> +        init_waitqueue_head(&a->readq);
> +        init_waitqueue_head(&a->writeq);
> +        spin_lock_init(&a->pending_recv_lock);
> +        INIT_LIST_HEAD(&a->pending_recv_list);
> +        atomic_set(&a->pending_recv_count, 0);
> +
> +        a->send_blocked = 0;
> +        a->peer = r->from;
> +        a->conid = r->sh.conid;
> +
> +        if (peer)
> +                *peer = r->from;
> +
> +        fd = allocate_fd_with_private(a);
> +        if (fd < 0) {
> +                ret = fd;
> +                goto release;
> +        }
> +
> +        write_lock_irqsave(&list_lock, flags);
> +        list_add(&a->node, &a->r->privates);
> +        write_unlock_irqrestore(&list_lock, flags);
> +
> +        /* Ship the ACK */
> +        sh.conid = a->conid;
> +        sh.flags = V4V_SHF_ACK;
> +
> +        xmit_queue_inline(&a->r->ring->id, &a->peer, &sh,
> +                          sizeof(sh), V4V_PROTO_STREAM);
> +        kfree(r);
> +
> +        return fd;
> +
> + release:
> +        kfree(r);
> +        if (a) {
> +                write_lock_irqsave(&list_lock, flags);
> +                if (a->r)
> +                        put_ring(a->r);
> +                write_unlock_irqrestore(&list_lock, flags);
> +                kfree(a);
> +        }
> +        return ret;
> +}
> +
> +ssize_t
> +v4v_sendto(struct v4v_private *p, const void *buf, size_t len, int flags,
> +           v4v_addr_t *addr, int nonblock)
> +{
> +        ssize_t rc;
> +
> +        if (!access_ok(VERIFY_READ, buf, len))
> +                return -EFAULT;
> +        if (addr && !access_ok(VERIFY_READ, addr, sizeof(v4v_addr_t)))
> +                return -EFAULT;
> +
> +        if (flags & MSG_DONTWAIT)
> +                nonblock++;
> +
> +        switch (p->ptype) {
> +        case V4V_PTYPE_DGRAM:
> +                switch (p->state) {
> +                case V4V_STATE_BOUND:
> +                        if (!addr)
> +                                return -ENOTCONN;
> +                        rc = v4v_sendto_from_sponsor(p, buf, len, nonblock,
> +                                                     addr, V4V_PROTO_DGRAM);
> +                        break;
> +
> +                case V4V_STATE_CONNECTED:
> +                        if (addr)
> +                                return -EISCONN;
> +
> +                        rc = v4v_sendto_from_sponsor(p, buf, len, nonblock,
> +                                                     &p->peer, V4V_PROTO_DGRAM);
> +                        break;
> +
> +                default:
> +                        return -EINVAL;
> +                }
> +                break;
> +        case V4V_PTYPE_STREAM:
> +                if (addr)
> +                        return -EISCONN;
> +                switch (p->state) {
> +                case V4V_STATE_CONNECTING:
> +                case V4V_STATE_BOUND:
> +                        return -ENOTCONN;
> +                case V4V_STATE_CONNECTED:
> +                case V4V_STATE_ACCEPTED:
> +                        rc = v4v_send_stream(p, buf, len, nonblock);
> +                        break;
> +                case V4V_STATE_DISCONNECTED:
> +                        rc = -EPIPE;
> +                        break;
> +                default:
> +                        return -EINVAL;
> +                }
> +                break;
> +        default:
> +                return -ENOTTY;
> +        }
> +
> +        if ((rc == -EPIPE) && !(flags & MSG_NOSIGNAL))
> +                send_sig(SIGPIPE, current, 0);
> +
> +        return rc;
> +}
> +
> +ssize_t
> +v4v_recvfrom(struct v4v_private *p, void *buf, size_t len, int flags,
> +             v4v_addr_t *addr, int nonblock)
> +{
> +        int peek = 0;
> +        ssize_t rc = 0;
> +
> +        if (!access_ok(VERIFY_WRITE, buf, len))
> +                return -EFAULT;
> +        if ((addr) && (!access_ok(VERIFY_WRITE, addr, sizeof(v4v_addr_t))))
> +                return -EFAULT;
> +
> +        if (flags & MSG_DONTWAIT)
> +                nonblock++;
> +        if (flags & MSG_PEEK)
> +                peek++;
> +
> +        switch (p->ptype) {
> +        case V4V_PTYPE_DGRAM:
> +                rc = v4v_recvfrom_dgram(p, buf, len, nonblock, peek, addr);
> +                break;
> +        case V4V_PTYPE_STREAM:
> +                if (peek)
> +                        return -EINVAL;
> +
> +                switch (p->state) {
> +                case V4V_STATE_BOUND:
> +                        return -ENOTCONN;
> +                case V4V_STATE_CONNECTED:
> +                case V4V_STATE_ACCEPTED:
> +                        if (addr)
> +                                *addr = p->peer;
> +                        rc = v4v_recv_stream(p, buf, len, flags, nonblock);
> +                        break;
> +                case V4V_STATE_DISCONNECTED:
> +                        rc = 0;
> +                        break;
> +                default:
> +                        rc = -EINVAL;
> +                }
> +        }
> +
> +        if ((rc > (ssize_t) len) && !(flags & MSG_TRUNC))
> +                rc = len;
> +
> +        return rc;
> +}
> +
> +/* fops */
> +
> +static int v4v_open_dgram(struct inode *inode, struct file *f)
> +{
> +        struct v4v_private *p;
> +
> +        p = kmalloc(sizeof(struct v4v_private), GFP_KERNEL);
> +        if (!p)
> +                return -ENOMEM;
> +
> +        memset(p, 0, sizeof(struct v4v_private));
> +        p->state = V4V_STATE_IDLE;
> +        p->desired_ring_size = DEFAULT_RING_SIZE;
> +        p->r = NULL;
> +        p->ptype = V4V_PTYPE_DGRAM;
> +        p->send_blocked = 0;
> +
> +        init_waitqueue_head(&p->readq);
> +        init_waitqueue_head(&p->writeq);
> +
> +        spin_lock_init(&p->pending_recv_lock);
> +        INIT_LIST_HEAD(&p->pending_recv_list);
> +        atomic_set(&p->pending_recv_count, 0);
> +
> +        f->private_data = p;
> +        return 0;
> +}
> +
> +static int v4v_open_stream(struct inode *inode, struct file *f)
> +{
> +        struct v4v_private *p;
> +
> +        p = kmalloc(sizeof(struct v4v_private), GFP_KERNEL);
> +        if (!p)
> +                return -ENOMEM;
> +
> +        memset(p, 0, sizeof(struct v4v_private));
> +        p->state = V4V_STATE_IDLE;
> +        p->desired_ring_size = DEFAULT_RING_SIZE;
> +        p->r = NULL;
> +        p->ptype = V4V_PTYPE_STREAM;
> +        p->send_blocked = 0;
> +
> +        init_waitqueue_head(&p->readq);
> +        init_waitqueue_head(&p->writeq);
> +
> +        spin_lock_init(&p->pending_recv_lock);
> +        INIT_LIST_HEAD(&p->pending_recv_list);
> +        atomic_set(&p->pending_recv_count, 0);
> +
> +        f->private_data = p;
> +        return 0;
> +}
> +
> +static int v4v_release(struct inode *inode, struct file *f)
> +{
> +        struct v4v_private *p = (struct v4v_private *)f->private_data;
> +        unsigned long flags;
> +        struct pending_recv *pending;
> +
> +        if (p->ptype == V4V_PTYPE_STREAM) {
> +                switch (p->state) {
> +                case V4V_STATE_CONNECTED:
> +                case V4V_STATE_CONNECTING:
> +                case V4V_STATE_ACCEPTED:
> +                        xmit_queue_rst_to(&p->r->ring->id, p->conid, &p->peer);
> +                        break;
> +                default:
> +                        break;
> +                }
> +        }
> +
> +        write_lock_irqsave(&list_lock, flags);
> +        if (!p->r) {
> +                write_unlock_irqrestore(&list_lock, flags);
> +                goto release;
> +        }
> +
> +        if (p != p->r->sponsor) {
> +                put_ring(p->r);
> +                list_del(&p->node);
> +                write_unlock_irqrestore(&list_lock, flags);
> +                goto release;
> +        }
> +
> +        p->r->sponsor = NULL;
> +        put_ring(p->r);
> +        write_unlock_irqrestore(&list_lock, flags);
> +
> +        while (!list_empty(&p->pending_recv_list)) {
> +                pending =
> +                    list_first_entry(&p->pending_recv_list,
> +                                     struct pending_recv, node);
> +
> +                list_del(&pending->node);
> +                kfree(pending);
> +                atomic_dec(&p->pending_recv_count);
> +        }
> +
> + release:
> +        kfree(p);
> +
> +        return 0;
> +}
> +
> +static ssize_t
> +v4v_write(struct file *f, const char __user *buf, size_t count, loff_t *ppos)
> +{
> +        struct v4v_private *p = f->private_data;
> +        int nonblock = f->f_flags & O_NONBLOCK;
> +
> +        return v4v_sendto(p, buf, count, 0, NULL, nonblock);
> +}
> +
> +static ssize_t
> +v4v_read(struct file *f, char __user * buf, size_t count, loff_t * ppos)
> +{
> +        struct v4v_private *p = f->private_data;
> +        int nonblock = f->f_flags & O_NONBLOCK;
> +
> +        return v4v_recvfrom(p, (void *)buf, count, 0, NULL, nonblock);
> +}
> +
> +static long v4v_ioctl(struct file *f, unsigned int cmd, unsigned long arg)
> +{
> +        int rc = -ENOTTY;
> +
> +        int nonblock = f->f_flags & O_NONBLOCK;
> +        struct v4v_private *p = f->private_data;
> +
> +        if (_IOC_TYPE(cmd) != V4V_TYPE)
> +                return rc;
> +
> +        switch (cmd) {
> +        case V4VIOCSETRINGSIZE:
> +                if (!access_ok(VERIFY_READ, arg, sizeof(uint32_t)))
> +                        return -EFAULT;
> +                rc = v4v_set_ring_size(p, *(uint32_t *) arg);
> +                break;
> +        case V4VIOCBIND:
> +                if (!access_ok(VERIFY_READ, arg, sizeof(struct v4v_ring_id)))
> +                        return -EFAULT;
> +                rc = v4v_bind(p, (struct v4v_ring_id *)arg);
> +                break;
> +        case V4VIOCGETSOCKNAME:
> +                if (!access_ok(VERIFY_WRITE, arg, sizeof(struct v4v_ring_id)))
> +                        return -EFAULT;
> +                rc = v4v_get_sock_name(p, (struct v4v_ring_id *)arg);
> +                break;
> +        case V4VIOCGETPEERNAME:
> +                if (!access_ok(VERIFY_WRITE, arg, sizeof(v4v_addr_t)))
> +                        return -EFAULT;
> +                rc = v4v_get_peer_name(p, (v4v_addr_t *) arg);
> +                break;
> +        case V4VIOCCONNECT:
> +                if (!access_ok(VERIFY_READ, arg, sizeof(v4v_addr_t)))
> +                        return -EFAULT;
> +                /* Bind if not done */
> +                if (p->state == V4V_STATE_IDLE) {
> +                        struct v4v_ring_id id;
> +                        memset(&id, 0, sizeof(id));
> +                        id.partner = V4V_DOMID_NONE;
> +                        id.addr.domain = V4V_DOMID_NONE;
> +                        id.addr.port = 0;
> +                        rc = v4v_bind(p, &id);
> +                        if (rc)
> +                                break;
> +                }
> +                rc = v4v_connect(p, (v4v_addr_t *) arg, nonblock);
> +                break;
> +        case V4VIOCGETCONNECTERR:
> +                {
> +                        unsigned long flags;
> +                        if (!access_ok(VERIFY_WRITE, arg, sizeof(int)))
> +                                return -EFAULT;
> +
> +                        spin_lock_irqsave(&p->pending_recv_lock, flags);
> +                        *(int *)arg = p->pending_error;
> +                        p->pending_error = 0;
> +                        spin_unlock_irqrestore(&p->pending_recv_lock, flags);
> +                        rc = 0;
> +                }
> +                break;
> +        case V4VIOCLISTEN:
> +                rc = v4v_listen(p);
> +                break;
> +        case V4VIOCACCEPT:
> +                if (!access_ok(VERIFY_WRITE, arg, sizeof(v4v_addr_t)))
> +                        return -EFAULT;
> +                rc = v4v_accept(p, (v4v_addr_t *) arg, nonblock);
> +                break;
> +        case V4VIOCSEND:
> +                if (!access_ok(VERIFY_READ, arg, sizeof(struct v4v_dev)))
> +                        return -EFAULT;
> +                {
> +                        struct v4v_dev a = *(struct v4v_dev *)arg;
> +
> +                        rc = v4v_sendto(p, a.buf, a.len, a.flags, a.addr,
> +                                        nonblock);
> +                }
> +                break;
> +        case V4VIOCRECV:
> +                if (!access_ok(VERIFY_READ, arg, sizeof(struct v4v_dev)))
> +                        return -EFAULT;
> +                {
> +                        struct v4v_dev a = *(struct v4v_dev *)arg;
> +                        rc = v4v_recvfrom(p, a.buf, a.len, a.flags, a.addr,
> +                                          nonblock);
> +                }
> +                break;
> +        case V4VIOCVIPTABLESADD:
> +                if (!access_ok
> +                    (VERIFY_READ, arg, sizeof(struct v4v_viptables_rule_pos)))
> +                        return -EFAULT;
> +                {
> +                        struct v4v_viptables_rule_pos *rule =
> +                            (struct v4v_viptables_rule_pos *)arg;
> +                        v4v_viptables_add(p, rule->rule, rule->position);
> +                        rc = 0;
> +                }
> +                break;
> +        case V4VIOCVIPTABLESDEL:
> +                if (!access_ok
> +                    (VERIFY_READ, arg, sizeof(struct v4v_viptables_rule_pos)))
> +                        return -EFAULT;
> +                {
> +                        struct v4v_viptables_rule_pos *rule =
> +                            (struct v4v_viptables_rule_pos *)arg;
> +                        v4v_viptables_del(p, rule->rule, rule->position);
> +                        rc = 0;
> +                }
> +                break;
> +        case V4VIOCVIPTABLESLIST:
> +                if (!access_ok
> +                    (VERIFY_READ, arg, sizeof(struct v4v_viptables_list)))
> +                        return -EFAULT;
> +                {
> +                        struct v4v_viptables_list *list =
> +                            (struct v4v_viptables_list *)arg;
> +                        rc = v4v_viptables_list(p, list);
> +                }
> +                break;
> +        default:
> +                printk(KERN_ERR "v4v: unknown ioctl, cmd:0x%x nr:%d size:0x%x\n",
> +                       cmd, _IOC_NR(cmd), _IOC_SIZE(cmd));
> +        }
> +
> +        return rc;
> +}
> +
> +static unsigned int v4v_poll(struct file *f, poll_table * pt)
> +{
> +        unsigned int mask = 0;
> +        struct v4v_private *p = f->private_data;
> +
> +        read_lock(&list_lock);
> +
> +        switch (p->ptype) {
> +        case V4V_PTYPE_DGRAM:
> +                switch (p->state) {
> +                case V4V_STATE_CONNECTED:
> +                case V4V_STATE_BOUND:
> +                        poll_wait(f, &p->readq, pt);
> +                        mask |= POLLOUT | POLLWRNORM;
> +                        if (p->r->ring->tx_ptr != p->r->ring->rx_ptr)
> +                                mask |= POLLIN | POLLRDNORM;
> +                        break;
> +                default:
> +                        break;
> +                }
> +                break;
> +        case V4V_PTYPE_STREAM:
> +                switch (p->state) {
> +                case V4V_STATE_BOUND:
> +                        break;
> +                case V4V_STATE_LISTENING:
> +                        poll_wait(f, &p->readq, pt);
> +                        if (!list_empty(&p->pending_recv_list))
> +                                mask |= POLLIN | POLLRDNORM;
> +                        break;
> +                case V4V_STATE_ACCEPTED:
> +                case V4V_STATE_CONNECTED:
> +                        poll_wait(f, &p->readq, pt);
> +                        poll_wait(f, &p->writeq, pt);
> +                        if (!p->send_blocked)
> +                                mask |= POLLOUT | POLLWRNORM;
> +                        if (!list_empty(&p->pending_recv_list))
> +                                mask |= POLLIN | POLLRDNORM;
> +                        break;
> +                case V4V_STATE_CONNECTING:
> +                        poll_wait(f, &p->writeq, pt);
> +                        break;
> +                case V4V_STATE_DISCONNECTED:
> +                        mask |= POLLOUT | POLLWRNORM;
> +                        mask |= POLLIN | POLLRDNORM;
> +                        break;
> +                case V4V_STATE_IDLE:
> +                        break;
> +                }
> +                break;
> +        }
> +
> +        read_unlock(&list_lock);
> +        return mask;
> +}
> +
> +static const struct file_operations v4v_fops_stream = {
> +        .owner = THIS_MODULE,
> +        .write = v4v_write,
> +        .read = v4v_read,
> +        .unlocked_ioctl = v4v_ioctl,
> +        .open = v4v_open_stream,
> +        .release = v4v_release,
> +        .poll = v4v_poll,
> +};
> +
> +static const struct file_operations v4v_fops_dgram = {
> +        .owner = THIS_MODULE,
> +        .write = v4v_write,
> +        .read = v4v_read,
> +        .unlocked_ioctl = v4v_ioctl,
> +        .open = v4v_open_dgram,
> +        .release = v4v_release,
> +        .poll = v4v_poll,
> +};
> +
> +/* Xen VIRQ */
> +static int v4v_irq = -1;
> +
> +static void unbind_virq(void)
> +{
> +        unbind_from_irqhandler (v4v_irq, NULL);
> +        v4v_irq = -1;
> +}
> +
> +static int bind_evtchn(void)
> +{
> +        v4v_info_t info;
> +        int result;
> +        
> +        v4v_info(&info);
> +        if (info.ring_magic != V4V_RING_MAGIC)
> +                return 1;
> +
> +        result =
> +                bind_interdomain_evtchn_to_irqhandler(
> +                        0, info.evtchn,
> +                        v4v_interrupt, IRQF_SAMPLE_RANDOM, "v4v", NULL);
> +
> +        if (result < 0) {
> +                unbind_virq();
> +                return result;
> +        }
> +
> +        v4v_irq = result;
> +
> +        return 0;
> +}
> +
> +/* V4V Device */
> +
> +static struct miscdevice v4v_miscdev_dgram = {
> +        .minor = MISC_DYNAMIC_MINOR,
> +        .name = "v4v_dgram",
> +        .fops = &v4v_fops_dgram,
> +};
> +
> +static struct miscdevice v4v_miscdev_stream = {
> +        .minor = MISC_DYNAMIC_MINOR,
> +        .name = "v4v_stream",
> +        .fops = &v4v_fops_stream,
> +};
> +
> +static int v4v_suspend(struct platform_device *dev, pm_message_t state)
> +{
> +        unbind_virq();
> +        return 0;
> +}
> +
> +static int v4v_resume(struct platform_device *dev)
> +{
> +        struct ring *r;
> +
> +        read_lock(&list_lock);
> +        list_for_each_entry(r, &ring_list, node) {
> +                refresh_pfn_list(r);
> +                if (register_ring(r)) {
> +                        printk(KERN_ERR
> +                               "Failed to re-register a v4v ring on resume, port=0x%08x\n",
> +                               r->ring->id.addr.port);
> +                }
> +        }
> +        read_unlock(&list_lock);
> +
> +        if (bind_evtchn()) {
> +                printk(KERN_ERR "v4v_resume: failed to bind v4v evtchn\n");
> +                return -ENODEV;
> +        }
> +
> +        return 0;
> +}
> +
> +static void v4v_shutdown(struct platform_device *dev)
> +{
> +}
> +
> +static int __devinit v4v_probe(struct platform_device *dev)
> +{
> +        int err = 0;
> +        int ret;
> +
> +        ret = setup_fs();
> +        if (ret)
> +                return ret;
> +
> +        INIT_LIST_HEAD(&ring_list);
> +        rwlock_init(&list_lock);
> +        INIT_LIST_HEAD(&pending_xmit_list);
> +        spin_lock_init(&pending_xmit_lock);
> +        spin_lock_init(&interrupt_lock);
> +        atomic_set(&pending_xmit_count, 0);
> +
> +        if (bind_evtchn()) {
> +                printk(KERN_ERR "failed to bind v4v evtchn\n");
> +                unsetup_fs();
> +                return -ENODEV;
> +        }
> +
> +        err = misc_register(&v4v_miscdev_dgram);
> +        if (err != 0) {
> +                printk(KERN_ERR "Could not register /dev/v4v_dgram\n");
> +                unsetup_fs();
> +                return err;
> +        }
> +
> +        err = misc_register(&v4v_miscdev_stream);
> +        if (err != 0) {
> +                printk(KERN_ERR "Could not register /dev/v4v_stream\n");
> +                unsetup_fs();
> +                return err;
> +        }
> +
> +        printk(KERN_INFO "Xen V4V device installed.\n");
> +        return 0;
> +}
> +
> +/* Platform Gunge */
> +
> +static int __devexit v4v_remove(struct platform_device *dev)
> +{
> +        unbind_virq();
> +        misc_deregister(&v4v_miscdev_dgram);
> +        misc_deregister(&v4v_miscdev_stream);
> +        unsetup_fs();
> +        return 0;
> +}
> +
> +static struct platform_driver v4v_driver = {
> +        .driver = {
> +                   .name = "v4v",
> +                   .owner = THIS_MODULE,
> +                   },
> +        .probe = v4v_probe,
> +        .remove = __devexit_p(v4v_remove),
> +        .shutdown = v4v_shutdown,
> +        .suspend = v4v_suspend,
> +        .resume = v4v_resume,
> +};
> +
> +static struct platform_device *v4v_platform_device;
> +
> +static int __init v4v_init(void)
> +{
> +        int error;
> +
> +        if (!xen_domain())
> +        {
> +                printk(KERN_ERR "v4v only works under Xen\n");
> +                return -ENODEV;
> +        }
> +
> +        error = platform_driver_register(&v4v_driver);
> +        if (error)
> +                return error;
> +
> +        v4v_platform_device = platform_device_alloc("v4v", -1);
> +        if (!v4v_platform_device) {
> +                platform_driver_unregister(&v4v_driver);
> +                return -ENOMEM;
> +        }
> +
> +        error = platform_device_add(v4v_platform_device);
> +        if (error) {
> +                platform_device_put(v4v_platform_device);
> +                platform_driver_unregister(&v4v_driver);
> +                return error;
> +        }
> +
> +        return 0;
> +}
> +
> +static void __exit v4v_cleanup(void)
> +{
> +        platform_device_unregister(v4v_platform_device);
> +        platform_driver_unregister(&v4v_driver);
> +}
> +
> +module_init(v4v_init);
> +module_exit(v4v_cleanup);
> +MODULE_LICENSE("GPL");
> diff --git a/drivers/xen/v4v_utils.h b/drivers/xen/v4v_utils.h
> new file mode 100644
> index 0000000..91c00b6
> --- /dev/null
> +++ b/drivers/xen/v4v_utils.h
> @@ -0,0 +1,278 @@
> +/******************************************************************************
> + * V4V
> + *
> + * Version 2 of v2v (Virtual-to-Virtual)
> + *
> + * Copyright (c) 2010, Citrix Systems
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program; if not, write to the Free Software
> + * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
> + */
> +
> +#ifndef __V4V_UTILS_H__
> +# define __V4V_UTILS_H__
> +
> +/* Compiler specific hacks */
> +#if defined(__GNUC__)
> +# define V4V_UNUSED __attribute__ ((unused))
> +# ifndef __STRICT_ANSI__
> +#  define V4V_INLINE inline
> +# else
> +#  define V4V_INLINE
> +# endif
> +#else /* !__GNUC__ */
> +# define V4V_UNUSED
> +# define V4V_INLINE
> +#endif
> +
> +
> +/*
> + * Utility functions
> + */
> +static V4V_INLINE uint32_t
> +v4v_ring_bytes_to_read (volatile struct v4v_ring *r)
> +{
> +        int32_t ret;
> +        ret = r->tx_ptr - r->rx_ptr;
> +        if (ret >= 0)
> +                return ret;
> +        return (uint32_t) (r->len + ret);
> +}
> +
> +
> +/*
> + * Copy at most t bytes of the next message in the ring, into the buffer
> + * at _buf, setting from and protocol if they are not NULL, returns
> + * the actual length of the message, or -1 if there is nothing to read
> + */
> +V4V_UNUSED static V4V_INLINE ssize_t
> +v4v_copy_out (struct v4v_ring *r, struct v4v_addr *from, uint32_t * protocol,
> +              void *_buf, size_t t, int consume)
> +{
> +        volatile struct v4v_ring_message_header *mh;
> +        /* unnecessary cast from void * required by MSVC compiler */
> +        uint8_t *buf = (uint8_t *) _buf;
> +        uint32_t btr = v4v_ring_bytes_to_read (r);
> +        uint32_t rxp = r->rx_ptr;
> +        uint32_t bte;
> +        uint32_t len;
> +        ssize_t ret;
> +
> +
> +        if (btr < sizeof (*mh))
> +                return -1;
> +
> +        /*
> +         * Because the message header is 128 bits long and the ring is
> +         * 128-bit aligned, we're guaranteed never to wrap.
> +         */
> +        mh = (volatile struct v4v_ring_message_header *) &r->ring[r->rx_ptr];
> +
> +        len = mh->len;
> +
> +        if (btr < len)
> +        {
> +                return -1;
> +        }
> +
> +#if defined(__GNUC__)
> +        if (from)
> +                *from = mh->source;
> +#else
> +        /* MSVC can't do the above */
> +        if (from)
> +                memcpy((void *) from, (void *) &(mh->source), sizeof(struct v4v_addr));
> +#endif
> +
> +        if (protocol)
> +                *protocol = mh->protocol;
> +
> +        rxp += sizeof (*mh);
> +        if (rxp == r->len)
> +                rxp = 0;
> +        len -= sizeof (*mh);
> +        ret = len;
> +
> +        bte = r->len - rxp;
> +
> +        if (bte < len)
> +        {
> +                if (t < bte)
> +                {
> +                        if (buf)
> +                        {
> +                                memcpy (buf, (void *) &r->ring[rxp], t);
> +                                buf += t;
> +                        }
> +
> +                        rxp = 0;
> +                        len -= bte;
> +                        t = 0;
> +                }
> +                else
> +                {
> +                        if (buf)
> +                        {
> +                                memcpy (buf, (void *) &r->ring[rxp], bte);
> +                                buf += bte;
> +                        }
> +                        rxp = 0;
> +                        len -= bte;
> +                        t -= bte;
> +                }
> +        }
> +
> +        if (buf && t)
> +                memcpy (buf, (void *) &r->ring[rxp], (t < len) ? t : len);
> +
> +
> +        rxp += V4V_ROUNDUP (len);
> +        if (rxp == r->len)
> +                rxp = 0;
> +
> +        mb ();
> +
> +        if (consume)
> +                r->rx_ptr = rxp;
> +
> +        return ret;
> +}
> +
> +static V4V_INLINE void
> +v4v_memcpy_skip (void *_dst, const void *_src, size_t len, size_t *skip)
> +{
> +        const uint8_t *src =  (const uint8_t *) _src;
> +        uint8_t *dst = (uint8_t *) _dst;
> +
> +        if (!*skip)
> +        {
> +                memcpy (dst, src, len);
> +                return;
> +        }
> +
> +        if (*skip >= len)
> +        {
> +                *skip -= len;
> +                return;
> +        }
> +
> +        src += *skip;
> +        dst += *skip;
> +        len -= *skip;
> +        *skip = 0;
> +
> +        memcpy (dst, src, len);
> +}
> +
> +/*
> + * Copy at most t bytes of the next message in the ring, into the buffer
> + * at _buf, skipping skip bytes, setting from and protocol if they are not
> + * NULL, returns the actual length of the message, or -1 if there is
> + * nothing to read
> + */
> +static ssize_t
> +v4v_copy_out_offset(struct v4v_ring *r, struct v4v_addr *from,
> +                    uint32_t * protocol, void *_buf, size_t t, int consume,
> +                    size_t skip) V4V_UNUSED;
> +
> +V4V_INLINE static ssize_t
> +v4v_copy_out_offset(struct v4v_ring *r, struct v4v_addr *from,
> +                    uint32_t * protocol, void *_buf, size_t t, int consume,
> +                    size_t skip)
> +{
> +        volatile struct v4v_ring_message_header *mh;
> +        /* unnecessary cast from void * required by MSVC compiler */
> +        uint8_t *buf = (uint8_t *) _buf;
> +        uint32_t btr = v4v_ring_bytes_to_read (r);
> +        uint32_t rxp = r->rx_ptr;
> +        uint32_t bte;
> +        uint32_t len;
> +        ssize_t ret;
> +
> +        buf -= skip;
> +
> +        if (btr < sizeof (*mh))
> +                return -1;
> +
> +        /*
> +         * Because the message header is 128 bits long and the ring is
> +         * 128-bit aligned, we're guaranteed never to wrap.
> +         */
> +        mh = (volatile struct v4v_ring_message_header *)&r->ring[r->rx_ptr];
> +
> +        len = mh->len;
> +        if (btr < len)
> +                return -1;
> +
> +#if defined(__GNUC__)
> +        if (from)
> +                *from = mh->source;
> +#else
> +        /* MSVC can't do the above */
> +        if (from)
> +                memcpy((void *)from, (void *)&(mh->source), sizeof(struct v4v_addr));
> +#endif
> +
> +        if (protocol)
> +                *protocol = mh->protocol;
> +
> +        rxp += sizeof (*mh);
> +        if (rxp == r->len)
> +                rxp = 0;
> +        len -= sizeof (*mh);
> +        ret = len;
> +
> +        bte = r->len - rxp;
> +
> +        if (bte < len)
> +        {
> +                if (t < bte)
> +                {
> +                        if (buf)
> +                        {
> +                                v4v_memcpy_skip (buf, (void *) &r->ring[rxp], t, &skip);
> +                                buf += t;
> +                        }
> +
> +                        rxp = 0;
> +                        len -= bte;
> +                        t = 0;
> +                }
> +                else
> +                {
> +                        if (buf)
> +                        {
> +                                v4v_memcpy_skip (buf, (void *) &r->ring[rxp], bte,
> +                                                &skip);
> +                                buf += bte;
> +                        }
> +                        rxp = 0;
> +                        len -= bte;
> +                        t -= bte;
> +                }
> +        }
> +
> +        if (buf && t)
> +                v4v_memcpy_skip (buf, (void *) &r->ring[rxp], (t < len) ? t : len,
> +                                &skip);
> +
> +
> +        rxp += V4V_ROUNDUP (len);
> +        if (rxp == r->len)
> +                rxp = 0;
> +
> +        mb ();
> +
> +        if (consume)
> +                r->rx_ptr = rxp;
> +
> +        return ret;
> +}
> +
> +#endif /* !__V4V_UTILS_H__ */
> diff --git a/include/xen/interface/v4v.h b/include/xen/interface/v4v.h
> new file mode 100644
> index 0000000..36ff95c
> --- /dev/null
> +++ b/include/xen/interface/v4v.h
> @@ -0,0 +1,299 @@
> +/******************************************************************************
> + * V4V
> + *
> + * Version 2 of v2v (Virtual-to-Virtual)
> + *
> + * Copyright (c) 2010, Citrix Systems
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program; if not, write to the Free Software
> + * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
> + */
> +
> +#ifndef __XEN_PUBLIC_V4V_H__
> +#define __XEN_PUBLIC_V4V_H__
> +
> +/*
> + * Structure definitions
> + */
> +
> +#define V4V_RING_MAGIC          0xA822F72BB0B9D8CC
> +#define V4V_RING_DATA_MAGIC	0x45FE852220B801E4
> +
> +#define V4V_PROTO_DGRAM		0x3c2c1db8
> +#define V4V_PROTO_STREAM 	0x70f6a8e5
> +
> +#define V4V_DOMID_INVALID       (0x7FFFU)
> +#define V4V_DOMID_NONE          V4V_DOMID_INVALID
> +#define V4V_DOMID_ANY           V4V_DOMID_INVALID
> +#define V4V_PORT_NONE           0
> +
> +typedef struct v4v_iov
> +{
> +    uint64_t iov_base;
> +    uint64_t iov_len;
> +} v4v_iov_t;
> +
> +typedef struct v4v_addr
> +{
> +    uint32_t port;
> +    domid_t domain;
> +    uint16_t pad;
> +} v4v_addr_t;
> +
> +typedef struct v4v_ring_id
> +{
> +    v4v_addr_t addr;
> +    domid_t partner;
> +    uint16_t pad;
> +} v4v_ring_id_t;
> +
> +typedef uint64_t v4v_pfn_t;
> +
> +typedef struct
> +{
> +    v4v_addr_t src;
> +    v4v_addr_t dst;
> +} v4v_send_addr_t;
> +
> +/*
> + * v4v_ring
> + * id:
> + * xen only looks at this during register/unregister
> + * and will fill in id.addr.domain
> + *
> + * rx_ptr: rx pointer, modified by domain
> + * tx_ptr: tx pointer, modified by xen
> + *
> + */
> +struct v4v_ring
> +{
> +    uint64_t magic;
> +    v4v_ring_id_t id;
> +    uint32_t len;
> +    uint32_t rx_ptr;
> +    uint32_t tx_ptr;
> +    uint8_t reserved[32];
> +    uint8_t ring[0];
> +};
> +typedef struct v4v_ring v4v_ring_t;
> +
> +#define V4V_RING_DATA_F_EMPTY       (1U << 0) /* Ring is empty */
> +#define V4V_RING_DATA_F_EXISTS      (1U << 1) /* Ring exists */
> +#define V4V_RING_DATA_F_PENDING     (1U << 2) /* Pending interrupt exists - do not
> +                                               * rely on this field - for
> +                                               * profiling only */
> +#define V4V_RING_DATA_F_SUFFICIENT  (1U << 3) /* Sufficient space to queue
> +                                               * space_required bytes exists */
> +
> +#if defined(__GNUC__)
> +# define V4V_RING_DATA_ENT_FULLRING
> +# define V4V_RING_DATA_ENT_FULL
> +#else
> +# define V4V_RING_DATA_ENT_FULLRING fullring
> +# define V4V_RING_DATA_ENT_FULL full
> +#endif
> +typedef struct v4v_ring_data_ent
> +{
> +    v4v_addr_t ring;
> +    uint16_t flags;
> +    uint16_t pad;
> +    uint32_t space_required;
> +    uint32_t max_message_size;
> +} v4v_ring_data_ent_t;
> +
> +typedef struct v4v_ring_data
> +{
> +    uint64_t magic;
> +    uint32_t nent;
> +    uint32_t pad;
> +    uint64_t reserved[4];
> +    v4v_ring_data_ent_t data[0];
> +} v4v_ring_data_t;
> +
> +struct v4v_info
> +{
> +    uint64_t ring_magic;
> +    uint64_t data_magic;
> +    evtchn_port_t evtchn;
> +};
> +typedef struct v4v_info v4v_info_t;
> +
> +#define V4V_ROUNDUP(a) (((a) +0xf ) & ~0xf)
> +/*
> + * Messages on the ring are padded to 128 bits.
> + * len here refers to the exact length of the data, not including the
> + * 128-bit header. A message therefore uses
> + * ((len + 0xf) & ~0xf) + sizeof(v4v_ring_message_header) bytes.
> + */
> +
> +#define V4V_SHF_SYN		(1 << 0)
> +#define V4V_SHF_ACK		(1 << 1)
> +#define V4V_SHF_RST		(1 << 2)
> +
> +#define V4V_SHF_PING		(1 << 8)
> +#define V4V_SHF_PONG		(1 << 9)
> +
> +struct v4v_stream_header
> +{
> +    uint32_t flags;
> +    uint32_t conid;
> +};
> +
> +struct v4v_ring_message_header
> +{
> +    uint32_t len;
> +    uint32_t pad0;
> +    v4v_addr_t source;
> +    uint32_t protocol;
> +    uint32_t pad1;
> +    uint8_t data[0];
> +};
> +
> +typedef struct v4v_viptables_rule
> +{
> +    v4v_addr_t src;
> +    v4v_addr_t dst;
> +    uint32_t accept;
> +    uint32_t pad;
> +} v4v_viptables_rule_t;
> +
> +typedef struct v4v_viptables_list
> +{
> +    uint32_t start_rule;
> +    uint32_t nb_rules;
> +    struct v4v_viptables_rule rules[0];
> +} v4v_viptables_list_t;
> +
> +/*
> + * HYPERCALLS
> + */
> +
> +#define V4VOP_register_ring 	1
> +/*
> + * Registers a ring with Xen. If a ring with the same v4v_ring_id exists,
> + * this ring takes its place; registration will not change tx_ptr
> + * unless it is invalid.
> + *
> + * do_v4v_op(V4VOP_register_ring,
> + *           v4v_ring, XEN_GUEST_HANDLE(v4v_pfn),
> + *           npage, 0)
> + */
> +
> +
> +#define V4VOP_unregister_ring 	2
> +/*
> + * Unregister a ring.
> + *
> + * v4v_hypercall(V4VOP_unregister_ring, v4v_ring, NULL, 0, 0)
> + */
> +
> +#define V4VOP_send 		3
> +/*
> + * Sends len bytes of buf to dst, giving src as the source address (xen
> + * will ignore src->domain and put your domain in the actual message).
> + * Xen first looks for a ring with id.addr == dst and
> + * id.partner == sending_domain; if that fails it looks for
> + * id.addr == dst and id.partner == DOMID_ANY. protocol is the 32-bit
> + * protocol number used for the message, most likely V4V_PROTO_DGRAM or
> + * V4V_PROTO_STREAM. If insufficient space exists the hypercall returns
> + * -EAGAIN, and xen will raise the V4V interrupt when sufficient space
> + * becomes available.
> + *
> + * v4v_hypercall(V4VOP_send,
> + *               v4v_send_addr_t addr,
> + *               void *buf,
> + *               uint32_t len,
> + *               uint32_t protocol)
> + */
> +
> +
> +#define V4VOP_notify 		4
> +/*
> + * Asks xen for information about other rings in the system.
> + *
> + * ent->ring is the v4v_addr_t of the ring you want information on;
> + * the same matching rules are used as for V4VOP_send.
> + *
> + * ent->space_required: if this field is non-zero, xen will check
> + * that there is space in the destination ring for that many bytes
> + * of payload. If there is, it will set V4V_RING_DATA_F_SUFFICIENT
> + * and cancel any pending interrupt for that ent->ring; if insufficient
> + * space is available it will schedule an interrupt and the flag will
> + * not be set.
> + * The flags are set by xen when notify replies
> + * V4V_RING_DATA_F_EMPTY	ring is empty
> + * V4V_RING_DATA_F_PENDING	interrupt is pending - don't rely on this
> + * V4V_RING_DATA_F_SUFFICIENT	sufficient space for space_required is there
> + * V4V_RING_DATA_F_EXISTS	ring exists
> + *
> + * v4v_hypercall(V4VOP_notify,
> + *               XEN_GUEST_HANDLE(v4v_ring_data_ent) ent,
> + *               NULL, nent, 0)
> + */
> +
> +#define V4VOP_sendv		5
> +/*
> + * Identical to V4VOP_send except that, rather than buf and len, it takes
> + * an array of v4v_iov and the length of that array.
> + *
> + * v4v_hypercall(V4VOP_sendv,
> + *               v4v_send_addr_t addr,
> + *               v4v_iov iov,
> + *               uint32_t niov,
> + *               uint32_t protocol)
> + */
> +
> +#define V4VOP_viptables_add     6
> +/*
> + * Insert a filtering rule after a given position.
> + *
> + * v4v_hypercall(V4VOP_viptables_add,
> + *               v4v_viptables_rule_t rule,
> + *               NULL,
> + *               uint32_t position, 0)
> + */
> +
> +#define V4VOP_viptables_del     7
> +/*
> + * Delete the filtering rule at a given position, or the rule
> + * that matches "rule".
> + *
> + * v4v_hypercall(V4VOP_viptables_del,
> + *               v4v_viptables_rule_t rule,
> + *               NULL,
> + *               uint32_t position, 0)
> + */
> +
> +#define V4VOP_viptables_list    8
> +/*
> + * List the current filtering rules.
> + *
> + * v4v_hypercall(V4VOP_viptables_list,
> + *               v4v_viptables_list_t list,
> + *               NULL, 0, 0)
> + */
> +
> +#define V4VOP_info              9
> +/*
> + * v4v_hypercall(V4VOP_info,
> + *               XEN_GUEST_HANDLE(v4v_info_t) info,
> + *               NULL, 0, 0)
> + */
> +
> +#endif /* __XEN_PUBLIC_V4V_H__ */
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-set-style: "BSD"
> + * c-basic-offset: 4
> + * tab-width: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/include/xen/interface/xen.h b/include/xen/interface/xen.h
> index a890804..395f6cd 100644
> --- a/include/xen/interface/xen.h
> +++ b/include/xen/interface/xen.h
> @@ -59,6 +59,7 @@
>  #define __HYPERVISOR_physdev_op           33
>  #define __HYPERVISOR_hvm_op               34
>  #define __HYPERVISOR_tmem_op              38
> +#define __HYPERVISOR_v4v_op               39
>  
>  /* Architecture-specific hypercall definitions. */
>  #define __HYPERVISOR_arch_0               48
> diff --git a/include/xen/v4vdev.h b/include/xen/v4vdev.h
> new file mode 100644
> index 0000000..a30b608
> --- /dev/null
> +++ b/include/xen/v4vdev.h
> @@ -0,0 +1,34 @@
> +#ifndef __V4V_DGRAM_H__
> +#define __V4V_DGRAM_H__
> +
> +struct v4v_dev
> +{
> +    void *buf;
> +    size_t len;
> +    int flags;
> +    v4v_addr_t *addr;
> +};
> +
> +struct v4v_viptables_rule_pos
> +{
> +    struct v4v_viptables_rule* rule;
> +    int position;
> +};
> +
> +#define V4V_TYPE 'W'
> +
> +#define V4VIOCSETRINGSIZE 	_IOW (V4V_TYPE,  1, uint32_t)
> +#define V4VIOCBIND		_IOW (V4V_TYPE,  2, v4v_ring_id_t)
> +#define V4VIOCGETSOCKNAME	_IOW (V4V_TYPE,  3, v4v_ring_id_t)
> +#define V4VIOCGETPEERNAME	_IOW (V4V_TYPE,  4, v4v_addr_t)
> +#define V4VIOCCONNECT		_IOW (V4V_TYPE,  5, v4v_addr_t)
> +#define V4VIOCGETCONNECTERR	_IOW (V4V_TYPE,  6, int)
> +#define V4VIOCLISTEN		_IOW (V4V_TYPE,  7, uint32_t) /*unused args */
> +#define V4VIOCACCEPT		_IOW (V4V_TYPE,  8, v4v_addr_t) 
> +#define V4VIOCSEND		_IOW (V4V_TYPE,  9, struct v4v_dev)
> +#define V4VIOCRECV		_IOW (V4V_TYPE, 10, struct v4v_dev)
> +#define V4VIOCVIPTABLESADD	_IOW (V4V_TYPE, 11, struct v4v_viptables_rule_pos)
> +#define V4VIOCVIPTABLESDEL	_IOW (V4V_TYPE, 12, struct v4v_viptables_rule_pos)
> +#define V4VIOCVIPTABLESLIST	_IOW (V4V_TYPE, 13, struct v4v_viptables_list)
> +
> +#endif

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 15:38:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 15:38:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyPOI-0004oe-OR; Mon, 06 Aug 2012 15:38:46 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SyPOG-0004oU-UN
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 15:38:45 +0000
Received: from [85.158.138.51:42941] by server-5.bemta-3.messagelabs.com id
	78/B8-27557-305EF105; Mon, 06 Aug 2012 15:38:43 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-5.tower-174.messagelabs.com!1344267523!30740005!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM4NjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10179 invoked from network); 6 Aug 2012 15:38:43 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-5.tower-174.messagelabs.com with SMTP;
	6 Aug 2012 15:38:43 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 06 Aug 2012 16:38:42 +0100
Message-Id: <5020011F0200007800092FD7@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Mon, 06 Aug 2012 16:38:39 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: <stefano.stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
	<1344262325-26598-3-git-send-email-stefano.stabellini@eu.citrix.com>
In-Reply-To: <1344262325-26598-3-git-send-email-stefano.stabellini@eu.citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xen.org>, Ian.Campbell@citrix.com,
	Tim.Deegan@citrix.com
Subject: Re: [Xen-devel] [PATCH 3/5] xen: few more xen_ulong_t substitutions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 06.08.12 at 16:12, Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
> There are still a few unsigned longs in the xen public interface: replace
> them with xen_ulong_t.
> 
> Also typedef xen_ulong_t to uint64_t on ARM.

While this change by itself already looks suspicious to me, I don't
follow what the global replacement is going to be good for, in
particular when done in places that ARM should be completely
ignorant of, e.g.

> --- a/xen/include/public/physdev.h
> +++ b/xen/include/public/physdev.h
> @@ -124,7 +124,7 @@ DEFINE_XEN_GUEST_HANDLE(physdev_set_iobitmap_t);
>  #define PHYSDEVOP_apic_write             9
>  struct physdev_apic {
>      /* IN */
> -    unsigned long apic_physbase;
> +    xen_ulong_t apic_physbase;
>      uint32_t reg;
>      /* IN or OUT */
>      uint32_t value;
>...
> --- a/xen/include/public/xen.h
> +++ b/xen/include/public/xen.h
> @@ -518,8 +518,8 @@ DEFINE_XEN_GUEST_HANDLE(mmu_update_t);
>   * NB. The fields are natural register size for this architecture.
>   */
>  struct multicall_entry {
> -    unsigned long op, result;
> -    unsigned long args[6];
> +    xen_ulong_t op, result;
> +    xen_ulong_t args[6];

And here I really start to wonder - what use is it to put all 64-bit
values here on a 32-bit arch? You certainly know a lot more about
ARM than I do, but this looks pretty inefficient, all the more so since
you'll have to deal with checking the full values when converting
to e.g. pointers anyway, in order to avoid behavioral differences
between running on a 32- or 64-bit host. Zero-extending from
32 bits when in a 64-bit hypervisor wouldn't have this problem.

Jan

>  };
>  typedef struct multicall_entry multicall_entry_t;
>  DEFINE_XEN_GUEST_HANDLE(multicall_entry_t);



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 15:41:07 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 15:41:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyPQP-00050o-9D; Mon, 06 Aug 2012 15:40:57 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1SyPQO-00050f-0n
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 15:40:56 +0000
Received: from [85.158.143.35:16814] by server-1.bemta-4.messagelabs.com id
	2F/E0-24392-785EF105; Mon, 06 Aug 2012 15:40:55 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1344267652!13704694!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDY4MDU3MA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6895 invoked from network); 6 Aug 2012 15:40:54 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-7.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Aug 2012 15:40:54 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q76FencZ021337
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 6 Aug 2012 15:40:50 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q76Fend6015560
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 6 Aug 2012 15:40:49 GMT
Received: from abhmt103.oracle.com (abhmt103.oracle.com [141.146.116.55])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q76FemgM031199; Mon, 6 Aug 2012 10:40:49 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 06 Aug 2012 08:40:48 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id E86DA41F35; Mon,  6 Aug 2012 11:31:24 -0400 (EDT)
Date: Mon, 6 Aug 2012 11:31:24 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: "Ren, Yongjie" <yongjie.ren@intel.com>
Message-ID: <20120806153124.GF8967@phenom.dumpdata.com>
References: <1B4B44D9196EFF41AE41FDA404FC0A1013D698@SHSMSX101.ccr.corp.intel.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1B4B44D9196EFF41AE41FDA404FC0A1013D698@SHSMSX101.ccr.corp.intel.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	Jan Beulich <JBeulich@suse.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen4.2-rc1 test result
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Aug 06, 2012 at 02:34:35PM +0000, Ren, Yongjie wrote:
> Hi All,
> We did a round of testing for Xen 4.2 RC1 (CS#25692) with Linux 3.4.7 dom0.
> We covered VT-d, SR-IOV, Power Management, TXT, IVB new features, HSW new features.
> We covered many cases for HVM guests (both Redhat Linux and MS Windows guest).
> We tested on Westmere-EP, SandyBridge-EP, IvyBridge desktop, and Haswell hardware platforms.
> We found no new issues, and verified 1 fixed bug.

Cool!
> 
> Fixed bug (1):
> 1. parameter 'maxvcpus' causes hvm guest boots up with wrong vcpu number
> http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1825
> -- Fixed by Yang from Intel.
> 
> The following are some of the old issues which we guess are something important.
> Some of the old issues:
> 1. long stop during the guest boot process (too slow bootup)
>   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1821
> 2. Poor performance when do guest save/restore and migration with linux 3.x dom0
>   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1784
> 3. 'xl vcpu-set' can't decrease the vCPU number of a HVM guest
>   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1822
> 4. after detaching a VF from a guest, shutdown the guest is very slow
>   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1812
> 5. Dom0 cannot be shutdown before PCI detachment from guest and when pci assignment conflicts
>   (xl allows same PCI device to be assigned to multiple guests)
>   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1826
> 6. Guest hang after resuming from S3
>   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1828


So those that are Linux-related are going to have to wait - I am swamped.
The merge window just closed and it's time to deal with regressions. Is there
a way for Intel engineers to help come up with a patch for some of
these issues?

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 15:44:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 15:44:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyPTU-0005PW-HT; Mon, 06 Aug 2012 15:44:08 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SyPTT-0005OV-Fm
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 15:44:07 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1344267841!2414307!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM4NjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29704 invoked from network); 6 Aug 2012 15:44:01 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-15.tower-27.messagelabs.com with SMTP;
	6 Aug 2012 15:44:01 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 06 Aug 2012 16:44:00 +0100
Message-Id: <5020025C0200007800092FEE@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Mon, 06 Aug 2012 16:43:56 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: <stefano.stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
	<1344262325-26598-4-git-send-email-stefano.stabellini@eu.citrix.com>
In-Reply-To: <1344262325-26598-4-git-send-email-stefano.stabellini@eu.citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xen.org>, Ian.Campbell@citrix.com,
	Tim.Deegan@citrix.com
Subject: Re: [Xen-devel] [PATCH 4/5] xen: introduce XEN_GUEST_HANDLE_PARAM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 06.08.12 at 16:12, Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
> Note: this change does not make any difference on x86 and ia64.
> 
> 
> XEN_GUEST_HANDLE_PARAM is going to be used to distinguish guest pointers
> stored in memory from guest pointers as hypercall parameters.

I have to admit that I really dislike this, to a large part because of
the follow-up patch that clutters the corresponding function
declarations even further. Plus I see no mechanism to convert
between the two, yet I can't see how - long term at least - you
could get away without such conversion.

Is it really a well-thought-through and settled-upon decision to
make guest handles 64 bits wide even on 32-bit ARM? After all,
both x86 and PPC got away without doing so (and I think your
xen_ulong_t swipe would have broken PPC quite badly).

Jan

> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> ---
>  xen/include/asm-arm/guest_access.h |    2 +-
>  xen/include/public/arch-arm.h      |   17 +++++++++++++----
>  xen/include/public/arch-ia64.h     |    1 +
>  xen/include/public/arch-x86/xen.h  |    1 +
>  4 files changed, 16 insertions(+), 5 deletions(-)
> 
> diff --git a/xen/include/asm-arm/guest_access.h 
> b/xen/include/asm-arm/guest_access.h
> index 0fceae6..7a955cb 100644
> --- a/xen/include/asm-arm/guest_access.h
> +++ b/xen/include/asm-arm/guest_access.h
> @@ -30,7 +30,7 @@ unsigned long raw_clear_guest(void *to, unsigned len);
>  /* Cast a guest handle to the specified type of handle. */
>  #define guest_handle_cast(hnd, type) ({         \
>      type *_x = (hnd).p;                         \
> -    (XEN_GUEST_HANDLE(type)) { _x };            \
> +    (XEN_GUEST_HANDLE(type)) { {_x } };            \
>  })
>  
>  #define guest_handle_from_ptr(ptr, type)        \
> diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
> index 2ae6548..d17d645 100644
> --- a/xen/include/public/arch-arm.h
> +++ b/xen/include/public/arch-arm.h
> @@ -51,18 +51,27 @@
>  
>  #define XEN_HYPERCALL_TAG   0XEA1
>  
> +#define uint64_aligned_t uint64_t __attribute__((aligned(8)))
>  
>  #ifndef __ASSEMBLY__
> -#define ___DEFINE_XEN_GUEST_HANDLE(name, type) \
> -    typedef struct { type *p; } __guest_handle_ ## name
> +#define ___DEFINE_XEN_GUEST_HANDLE(name, type)                  \
> +    typedef struct { type *p; }                                 \
> +        __guest_handle_ ## name;                                \
> +    typedef struct { union { type *p; uint64_aligned_t q; }; }  \
> +        __guest_handle_64_ ## name;
>  
>  #define __DEFINE_XEN_GUEST_HANDLE(name, type) \
>      ___DEFINE_XEN_GUEST_HANDLE(name, type);   \
>      ___DEFINE_XEN_GUEST_HANDLE(const_##name, const type)
>  #define DEFINE_XEN_GUEST_HANDLE(name)   __DEFINE_XEN_GUEST_HANDLE(name, 
> name)
> -#define __XEN_GUEST_HANDLE(name)        __guest_handle_ ## name
> +#define __XEN_GUEST_HANDLE(name)        __guest_handle_64_ ## name
>  #define XEN_GUEST_HANDLE(name)          __XEN_GUEST_HANDLE(name)
> -#define set_xen_guest_handle_raw(hnd, val)  do { (hnd).p = val; } while (0)
> +/* this is going to be changed on 64-bit */
> +#define XEN_GUEST_HANDLE_PARAM(name)    __guest_handle_ ## name
> +#define set_xen_guest_handle_raw(hnd, val)                  \
> +    do { if ( sizeof(hnd) == 8 ) *(uint64_t *)&(hnd) = 0;   \
> +         (hnd).p = val;                                     \
> +    } while ( 0 )
>  #ifdef __XEN_TOOLS__
>  #define get_xen_guest_handle(val, hnd)  do { val = (hnd).p; } while (0)
>  #endif
> diff --git a/xen/include/public/arch-ia64.h b/xen/include/public/arch-ia64.h
> index c9da5d4..97583ea 100644
> --- a/xen/include/public/arch-ia64.h
> +++ b/xen/include/public/arch-ia64.h
> @@ -47,6 +47,7 @@
>  
>  #define DEFINE_XEN_GUEST_HANDLE(name)   __DEFINE_XEN_GUEST_HANDLE(name, 
> name)
>  #define XEN_GUEST_HANDLE(name)          __guest_handle_ ## name
> +#define XEN_GUEST_HANDLE_PARAM(name)    XEN_GUEST_HANDLE(name)
>  #define XEN_GUEST_HANDLE_64(name)       XEN_GUEST_HANDLE(name)
>  #define uint64_aligned_t                uint64_t
>  #define set_xen_guest_handle_raw(hnd, val)  do { (hnd).p = val; } while (0)
> diff --git a/xen/include/public/arch-x86/xen.h 
> b/xen/include/public/arch-x86/xen.h
> index 1c186d7..8ee5437 100644
> --- a/xen/include/public/arch-x86/xen.h
> +++ b/xen/include/public/arch-x86/xen.h
> @@ -44,6 +44,7 @@
>  #define DEFINE_XEN_GUEST_HANDLE(name)   __DEFINE_XEN_GUEST_HANDLE(name, 
> name)
>  #define __XEN_GUEST_HANDLE(name)        __guest_handle_ ## name
>  #define XEN_GUEST_HANDLE(name)          __XEN_GUEST_HANDLE(name)
> +#define XEN_GUEST_HANDLE_PARAM(name)    XEN_GUEST_HANDLE(name)
>  #define set_xen_guest_handle_raw(hnd, val)  do { (hnd).p = val; } while (0)
>  #ifdef __XEN_TOOLS__
>  #define get_xen_guest_handle(val, hnd)  do { val = (hnd).p; } while (0)
> -- 
> 1.7.2.5
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org 
> http://lists.xen.org/xen-devel 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 15:44:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 15:44:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyPTU-0005PW-HT; Mon, 06 Aug 2012 15:44:08 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SyPTT-0005OV-Fm
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 15:44:07 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1344267841!2414307!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM4NjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29704 invoked from network); 6 Aug 2012 15:44:01 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-15.tower-27.messagelabs.com with SMTP;
	6 Aug 2012 15:44:01 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 06 Aug 2012 16:44:00 +0100
Message-Id: <5020025C0200007800092FEE@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Mon, 06 Aug 2012 16:43:56 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: <stefano.stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
	<1344262325-26598-4-git-send-email-stefano.stabellini@eu.citrix.com>
In-Reply-To: <1344262325-26598-4-git-send-email-stefano.stabellini@eu.citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xen.org>, Ian.Campbell@citrix.com,
	Tim.Deegan@citrix.com
Subject: Re: [Xen-devel] [PATCH 4/5] xen: introduce XEN_GUEST_HANDLE_PARAM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 06.08.12 at 16:12, Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
> Note: this change does not make any difference on x86 and ia64.
> 
> 
> XEN_GUEST_HANDLE_PARAM is going to be used to distinguish guest pointers
> stored in memory from guest pointers as hypercall parameters.

I have to admit that I really dislike this, in large part because of
the follow-up patch that clutters the corresponding function
declarations even further. Plus I see no mechanism to convert
between the two, yet I can't see how - long term at least - you
could get away without such conversion.

Is it really a well-thought-through and settled-upon decision to
make guest handles 64 bits wide even on 32-bit ARM? After all,
both x86 and PPC got away without doing so (and I think your
xen_ulong_t swipe would have broken PPC quite badly).

Jan
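[For illustration, the two handle flavours and the kind of conversion helper Jan is asking about might look like this. All names here are hypothetical, a sketch of the idea only, not the actual Xen API; `int` stands in for an arbitrary handle element type.]

```c
#include <stdint.h>

/* "Param" handle: a bare pointer, i.e. natural register width
 * (what XEN_GUEST_HANDLE_PARAM would expand to on 32-bit ARM). */
typedef struct { int *p; } guest_handle_param_int;

/* In-memory handle: padded to 64 bits so 32- and 64-bit guests share
 * one layout (mirrors the patch's __guest_handle_64_ union). */
typedef struct { union { int *p; uint64_t q; }; } guest_handle_int;

/* Hypothetical conversion helper of the kind Jan notes is missing:
 * zero all 64 bits first, then store the pointer, so the upper half
 * is well defined when the pointer is only 32 bits wide. */
static inline guest_handle_int
guest_handle_from_param(guest_handle_param_int h)
{
    guest_handle_int out;
    out.q = 0;      /* clear the full 64-bit slot */
    out.p = h.p;    /* then install the pointer */
    return out;
}
```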

> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> ---
>  xen/include/asm-arm/guest_access.h |    2 +-
>  xen/include/public/arch-arm.h      |   17 +++++++++++++----
>  xen/include/public/arch-ia64.h     |    1 +
>  xen/include/public/arch-x86/xen.h  |    1 +
>  4 files changed, 16 insertions(+), 5 deletions(-)
> 
> diff --git a/xen/include/asm-arm/guest_access.h 
> b/xen/include/asm-arm/guest_access.h
> index 0fceae6..7a955cb 100644
> --- a/xen/include/asm-arm/guest_access.h
> +++ b/xen/include/asm-arm/guest_access.h
> @@ -30,7 +30,7 @@ unsigned long raw_clear_guest(void *to, unsigned len);
>  /* Cast a guest handle to the specified type of handle. */
>  #define guest_handle_cast(hnd, type) ({         \
>      type *_x = (hnd).p;                         \
> -    (XEN_GUEST_HANDLE(type)) { _x };            \
> +    (XEN_GUEST_HANDLE(type)) { {_x } };            \
>  })
>  
>  #define guest_handle_from_ptr(ptr, type)        \
> diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
> index 2ae6548..d17d645 100644
> --- a/xen/include/public/arch-arm.h
> +++ b/xen/include/public/arch-arm.h
> @@ -51,18 +51,27 @@
>  
>  #define XEN_HYPERCALL_TAG   0XEA1
>  
> +#define uint64_aligned_t uint64_t __attribute__((aligned(8)))
>  
>  #ifndef __ASSEMBLY__
> -#define ___DEFINE_XEN_GUEST_HANDLE(name, type) \
> -    typedef struct { type *p; } __guest_handle_ ## name
> +#define ___DEFINE_XEN_GUEST_HANDLE(name, type)                  \
> +    typedef struct { type *p; }                                 \
> +        __guest_handle_ ## name;                                \
> +    typedef struct { union { type *p; uint64_aligned_t q; }; }  \
> +        __guest_handle_64_ ## name;
>  
>  #define __DEFINE_XEN_GUEST_HANDLE(name, type) \
>      ___DEFINE_XEN_GUEST_HANDLE(name, type);   \
>      ___DEFINE_XEN_GUEST_HANDLE(const_##name, const type)
>  #define DEFINE_XEN_GUEST_HANDLE(name)   __DEFINE_XEN_GUEST_HANDLE(name, 
> name)
> -#define __XEN_GUEST_HANDLE(name)        __guest_handle_ ## name
> +#define __XEN_GUEST_HANDLE(name)        __guest_handle_64_ ## name
>  #define XEN_GUEST_HANDLE(name)          __XEN_GUEST_HANDLE(name)
> -#define set_xen_guest_handle_raw(hnd, val)  do { (hnd).p = val; } while (0)
> +/* this is going to be changed on 64 bit */
> +#define XEN_GUEST_HANDLE_PARAM(name)    __guest_handle_ ## name
> +#define set_xen_guest_handle_raw(hnd, val)                  \
> +    do { if ( sizeof(hnd) == 8 ) *(uint64_t *)&(hnd) = 0;   \
> +         (hnd).p = val;                                     \
> +    } while ( 0 )
>  #ifdef __XEN_TOOLS__
>  #define get_xen_guest_handle(val, hnd)  do { val = (hnd).p; } while (0)
>  #endif
> diff --git a/xen/include/public/arch-ia64.h b/xen/include/public/arch-ia64.h
> index c9da5d4..97583ea 100644
> --- a/xen/include/public/arch-ia64.h
> +++ b/xen/include/public/arch-ia64.h
> @@ -47,6 +47,7 @@
>  
>  #define DEFINE_XEN_GUEST_HANDLE(name)   __DEFINE_XEN_GUEST_HANDLE(name, 
> name)
>  #define XEN_GUEST_HANDLE(name)          __guest_handle_ ## name
> +#define XEN_GUEST_HANDLE_PARAM(name)    XEN_GUEST_HANDLE(name)
>  #define XEN_GUEST_HANDLE_64(name)       XEN_GUEST_HANDLE(name)
>  #define uint64_aligned_t                uint64_t
>  #define set_xen_guest_handle_raw(hnd, val)  do { (hnd).p = val; } while (0)
> diff --git a/xen/include/public/arch-x86/xen.h 
> b/xen/include/public/arch-x86/xen.h
> index 1c186d7..8ee5437 100644
> --- a/xen/include/public/arch-x86/xen.h
> +++ b/xen/include/public/arch-x86/xen.h
> @@ -44,6 +44,7 @@
>  #define DEFINE_XEN_GUEST_HANDLE(name)   __DEFINE_XEN_GUEST_HANDLE(name, 
> name)
>  #define __XEN_GUEST_HANDLE(name)        __guest_handle_ ## name
>  #define XEN_GUEST_HANDLE(name)          __XEN_GUEST_HANDLE(name)
> +#define XEN_GUEST_HANDLE_PARAM(name)    XEN_GUEST_HANDLE(name)
>  #define set_xen_guest_handle_raw(hnd, val)  do { (hnd).p = val; } while (0)
>  #ifdef __XEN_TOOLS__
>  #define get_xen_guest_handle(val, hnd)  do { val = (hnd).p; } while (0)
> -- 
> 1.7.2.5
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org 
> http://lists.xen.org/xen-devel 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 15:44:26 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 15:44:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyPTd-0005RS-Uc; Mon, 06 Aug 2012 15:44:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SyPTc-0005QU-Mw
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 15:44:16 +0000
Received: from [85.158.143.99:39320] by server-2.bemta-4.messagelabs.com id
	1E/0F-17938-F46EF105; Mon, 06 Aug 2012 15:44:15 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-7.tower-216.messagelabs.com!1344267855!27130884!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgxNjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23591 invoked from network); 6 Aug 2012 15:44:15 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-7.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 15:44:15 -0000
X-IronPort-AV: E=Sophos;i="4.77,720,1336348800"; d="scan'208";a="13870279"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 15:44:15 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Mon, 6 Aug 2012 16:44:15 +0100
Date: Mon, 6 Aug 2012 16:43:53 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Jan Beulich <JBeulich@suse.com>
In-Reply-To: <501FFFB50200007800092FC7@nat28.tlf.novell.com>
Message-ID: <alpine.DEB.2.02.1208061640270.4645@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
	<1344262325-26598-1-git-send-email-stefano.stabellini@eu.citrix.com>
	<501FFFB50200007800092FC7@nat28.tlf.novell.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "Tim Deegan \(3P\)" <Tim.Deegan@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 1/5] xen: improve changes to
 xen_add_to_physmap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 6 Aug 2012, Jan Beulich wrote:
> >>> On 06.08.12 at 16:12, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> wrote:
> > This is an incremental patch on top of
> > c0bc926083b5987a3e9944eec2c12ad0580100e2: in order to retain binary
> > compatibility, it is better to introduce foreign_domid as part of a
> > union containing both size and foreign_domid.
> > 
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > ---
> >  xen/include/public/memory.h |   11 +++++++----
> >  1 files changed, 7 insertions(+), 4 deletions(-)
> > 
> > diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
> > index b2adfbe..b0af2fd 100644
> > --- a/xen/include/public/memory.h
> > +++ b/xen/include/public/memory.h
> > @@ -208,8 +208,12 @@ struct xen_add_to_physmap {
> >      /* Which domain to change the mapping for. */
> >      domid_t domid;
> >  
> > -    /* Number of pages to go through for gmfn_range */
> > -    uint16_t    size;
> > +    union {
> > +        /* Number of pages to go through for gmfn_range */
> > +        uint16_t    size;
> > +        /* IFF gmfn_foreign */
> > +        domid_t foreign_domid;
> > +    };
> 
> But you're clear that this isn't standard C, and hence can't go
> in this way?
> 

Why? It is C11, if I am not mistaken.
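[For reference: the construct in question is an anonymous union member, standardized in C11 (6.7.2.1) and long supported by GCC as an extension. A minimal sketch of the patched struct, assuming domid_t is uint16_t as in the Xen ABI:]

```c
#include <stdint.h>
#include <stddef.h>   /* offsetof, for layout checks */

typedef uint16_t domid_t;   /* 16 bits in the Xen ABI */

/* The anonymous union overlays the two 16-bit fields, so the struct's
 * size and every field offset stay exactly as they were before the
 * patch -- which is what preserves binary compatibility. */
struct xen_add_to_physmap_sketch {
    domid_t domid;
    union {
        uint16_t size;           /* number of pages, for gmfn_range */
        domid_t  foreign_domid;  /* only if gmfn_foreign */
    };
};
```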

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 15:47:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 15:47:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyPWN-0005re-HD; Mon, 06 Aug 2012 15:47:07 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SyPWL-0005qs-ON
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 15:47:05 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1344268019!11116340!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM4NjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12456 invoked from network); 6 Aug 2012 15:46:59 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-3.tower-27.messagelabs.com with SMTP;
	6 Aug 2012 15:46:59 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 06 Aug 2012 16:46:59 +0100
Message-Id: <5020030F0200007800092FF1@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Mon, 06 Aug 2012 16:46:55 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>,
	"Jean Guyader" <jean.guyader@gmail.com>
References: <1344023454-31425-1-git-send-email-jean.guyader@citrix.com>
	<1344023454-31425-4-git-send-email-jean.guyader@citrix.com>
	<501F97F90200007800092CAC@nat28.tlf.novell.com>
	<CAEBdQ920zPHQuLng=ncPonFHoSnvgFb3rz4wo+MZP5y88GAAxA@mail.gmail.com>
	<1344264989.11339.47.camel@zakaz.uk.xensource.com>
	<CAEBdQ90V0ZRTnn+njqmduhcRO0rD0xkR+0gjWbaLjMGsRNzYQA@mail.gmail.com>
	<1344265980.11339.48.camel@zakaz.uk.xensource.com>
In-Reply-To: <1344265980.11339.48.camel@zakaz.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Ross Philipson <Ross.Philipson@citrix.com>,
	"Jean Guyader \(3P\)" <jean.guyader@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 3/5] xen: virq, remove VIRQ_XC_RESERVED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 06.08.12 at 17:13, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Mon, 2012-08-06 at 16:01 +0100, Jean Guyader wrote:
>> On 6 August 2012 15:56, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>> > On Mon, 2012-08-06 at 15:46 +0100, Jean Guyader wrote:
>> >> On 6 August 2012 09:10, Jan Beulich <JBeulich@suse.com> wrote:
>> >> >>>> On 03.08.12 at 21:50, Jean Guyader <jean.guyader@citrix.com> wrote:
>> >> >> VIRQ_XC_RESERVED was reserved for V4V but we have switched
>> >> >> to event channels so this place holder is no longer required.
>> >> >
>> >> > I'm fine with this change, but is a future re-use of the value indeed
>> >> > not going to cause problems on XenServer (or wherever else this
>> >> > is patch set coming from)?
>> >> >
>> >>
>> >> That may need to be confirmed but I don't think XenServer is using v4v
>> >> yet
>> >
>> > I think Jan probably meant XenClient (i.e. that being the place where
>> > v4v is already deployed).
>> >
>> > There's no harm in keeping this # reserved indefinitely, with a suitable
>> > comment, I think? The only reason not to would be if this address space
>> > was limited, but I don't think that is the case with VIRQs
>> >
>> >
>> 
>> I think if XenClient rebases to a new version of Xen, we will
>> probably use the version of v4v that comes with it, and we will not
>> try to rebase the old code onto the newer Xen.
> 
> I think Jan's concern was that if a current client runs on some future
> version of Xen which has reused that VIRQ for something else, some sort
> of weirdness would probably ensue?

That was exactly the point of my inquiry.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 15:48:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 15:48:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyPXS-0005xm-VH; Mon, 06 Aug 2012 15:48:14 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SyPXR-0005xf-Eg
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 15:48:13 +0000
Received: from [85.158.139.83:61631] by server-5.bemta-5.messagelabs.com id
	26/CF-02722-C37EF105; Mon, 06 Aug 2012 15:48:12 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-182.messagelabs.com!1344268071!30551865!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgxNjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30398 invoked from network); 6 Aug 2012 15:47:51 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-5.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 15:47:51 -0000
X-IronPort-AV: E=Sophos;i="4.77,720,1336348800"; d="scan'208";a="13870370"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 15:47:51 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Mon, 6 Aug 2012
	16:47:51 +0100
Message-ID: <1344268070.11339.53.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Mon, 6 Aug 2012 16:47:50 +0100
In-Reply-To: <5020025C0200007800092FEE@nat28.tlf.novell.com>
References: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
	<1344262325-26598-4-git-send-email-stefano.stabellini@eu.citrix.com>
	<5020025C0200007800092FEE@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: xen-devel <xen-devel@lists.xen.org>,
	"Tim Deegan \(3P\)" <Tim.Deegan@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 4/5] xen: introduce XEN_GUEST_HANDLE_PARAM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2012-08-06 at 16:43 +0100, Jan Beulich wrote:
> >>> On 06.08.12 at 16:12, Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
> > Note: this change does not make any difference on x86 and ia64.
> > 
> > 
> > XEN_GUEST_HANDLE_PARAM is going to be used to distinguish guest pointers
> > stored in memory from guest pointers as hypercall parameters.
> 
> I have to admit that I really dislike this, in large part because of
> the follow-up patch that clutters the corresponding function
> declarations even further. Plus I see no mechanism to convert
> between the two, yet I can't see how - long term at least - you
> could get away without such conversion.
> 
> Is it really a well-thought-through and settled-upon decision to
> make guest handles 64 bits wide even on 32-bit ARM? After all,
> both x86 and PPC got away without doing so

Well, on x86 we have the compat XLAT layer, which is a pretty complex
piece of code, so "got away without" is a bit strong... We'd really
rather not need a non-trivial compat layer on ARM too, hence making
the struct layouts the same on 32- and 64-bit.

Ian.
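[A rough illustration of Ian's point, using a made-up struct rather than any real Xen interface: with a bare-pointer handle, a structure embedding it has different layouts for 32- and 64-bit guests, which is what x86's XLAT layer exists to translate; padding the handle to an aligned 64-bit slot makes both layouts identical.]

```c
#include <stdint.h>

/* The handle as it would appear in each guest's own view:
 * a 32-bit guest's pointer is 4 bytes, a 64-bit guest's is 8. */
struct req32 { uint16_t domid; uint32_t handle; };  /* 32-bit guest */
struct req64 { uint16_t domid; uint64_t handle; };  /* 64-bit guest */

/* With the handle forced into a 64-bit, 8-byte-aligned slot on both,
 * the two views coincide and no translation layer is needed. */
struct req_both {
    uint16_t domid;
    uint64_t handle __attribute__((aligned(8)));
};
```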



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 15:48:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 15:48:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyPXS-0005xm-VH; Mon, 06 Aug 2012 15:48:14 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SyPXR-0005xf-Eg
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 15:48:13 +0000
Received: from [85.158.139.83:61631] by server-5.bemta-5.messagelabs.com id
	26/CF-02722-C37EF105; Mon, 06 Aug 2012 15:48:12 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-182.messagelabs.com!1344268071!30551865!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgxNjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30398 invoked from network); 6 Aug 2012 15:47:51 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-5.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 15:47:51 -0000
X-IronPort-AV: E=Sophos;i="4.77,720,1336348800"; d="scan'208";a="13870370"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 15:47:51 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Mon, 6 Aug 2012
	16:47:51 +0100
Message-ID: <1344268070.11339.53.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Mon, 6 Aug 2012 16:47:50 +0100
In-Reply-To: <5020025C0200007800092FEE@nat28.tlf.novell.com>
References: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
	<1344262325-26598-4-git-send-email-stefano.stabellini@eu.citrix.com>
	<5020025C0200007800092FEE@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: xen-devel <xen-devel@lists.xen.org>,
	"Tim Deegan \(3P\)" <Tim.Deegan@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 4/5] xen: introduce XEN_GUEST_HANDLE_PARAM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2012-08-06 at 16:43 +0100, Jan Beulich wrote:
> >>> On 06.08.12 at 16:12, Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
> > Note: this change does not make any difference on x86 and ia64.
> > 
> > 
> > XEN_GUEST_HANDLE_PARAM is going to be used to distinguish guest pointers
> > stored in memory from guest pointers as hypercall parameters.
> 
> I have to admit that I really dislike this, in large part because of
> the follow-up patch that clutters the corresponding function
> declarations even further. Plus I see no mechanism to convert
> between the two, yet I can't see how - long term at least - you
> could get away without such conversion.
> 
> Is it really a well-thought-through and settled-upon decision to
> make guest handles 64 bits wide even on 32-bit ARM? After all,
> both x86 and PPC got away without doing so.

Well, on x86 we have the compat XLAT layer, which is a pretty complex
piece of code, so "got away without" is a bit strong... We'd really
rather not have a non-trivial compat layer on ARM too, which we avoid by
keeping the struct layouts the same on 32- and 64-bit.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 15:50:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 15:50:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyPZc-0006Ai-GX; Mon, 06 Aug 2012 15:50:28 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SyPZb-0006Aa-Rg
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 15:50:28 +0000
Received: from [85.158.143.35:55366] by server-3.bemta-4.messagelabs.com id
	94/C7-01511-3C7EF105; Mon, 06 Aug 2012 15:50:27 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1344268225!4472568!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM4NjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11963 invoked from network); 6 Aug 2012 15:50:26 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-5.tower-21.messagelabs.com with SMTP;
	6 Aug 2012 15:50:26 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 06 Aug 2012 16:50:25 +0100
Message-Id: <502003DD0200007800093011@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Mon, 06 Aug 2012 16:50:21 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Daniel De Graaf" <dgdegra@tycho.nsa.gov>
References: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
	<1344263550-3941-9-git-send-email-dgdegra@tycho.nsa.gov>
	<501FF9D50200007800092F76@nat28.tlf.novell.com>
	<501FE068.1090404@tycho.nsa.gov>
In-Reply-To: <501FE068.1090404@tycho.nsa.gov>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 08/18] xen: Add DOMID_SELF support to
 rcu_lock_domain_by_id
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 06.08.12 at 17:19, Daniel De Graaf <dgdegra@tycho.nsa.gov> wrote:
> On 08/06/2012 11:07 AM, Jan Beulich wrote:
>>>>> On 06.08.12 at 16:32, Daniel De Graaf <dgdegra@tycho.nsa.gov> wrote:
>>> Callers that want to prevent use of DOMID_SELF already need to ensure
>>> the calling domain does not pass its own domain ID. This removes the
>>> need for the caller to manually support DOMID_SELF, which many already
>>> do.
>> 
>> I'm not really sure this is correct. At the very least it changes the
>> return value of rcu_lock_remote_target_domain_by_id() when
>> called with DOMID_SELF (from -ESRCH to -EPERM).
> 
> This series ends up eliminating that function in patch #18, so that
> part is taken care of.

But, in case of problems, the series may then not be fully bisectable.

>> I'm also not convinced that a distinction between a domain knowing
>> its ID and one passing DOMID_SELF isn't/can't be useful. That of
>> course depends on whether the ID can be fully hidden from a guest
>> (obviously pure HVM guests would never know their ID, but then
>> again they also would never pass DOMID_SELF anywhere; it might
>> be, however, that they could get the latter passed on their behalf
>> e.g. from some emulation function).
> 
> I don't think we can (or want to) make it impossible for a guest to find
> out its own domain ID. I agree that the distinction between DOMID_SELF and
> my_own_domid can be useful in some cases. Most of those cases
> in Xen that I have seen already handle this at the caller.
> 
> Another solution here is to create a function rcu_lock_domain_by_any_id that
> is identical to rcu_lock_domain_by_id except for handling DOMID_SELF.

Maybe. I'd like to see Keir's position regarding all of the details
here anyway.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 15:51:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 15:51:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyPaD-0006GT-3k; Mon, 06 Aug 2012 15:51:05 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Aurelien.MILLIAT@clarte.asso.fr>) id 1SyPaB-0006GA-RN
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 15:51:04 +0000
Received: from [85.158.143.35:62143] by server-3.bemta-4.messagelabs.com id
	F1/98-01511-7E7EF105; Mon, 06 Aug 2012 15:51:03 +0000
X-Env-Sender: Aurelien.MILLIAT@clarte.asso.fr
X-Msg-Ref: server-7.tower-21.messagelabs.com!1344268260!13706303!1
X-Originating-IP: [194.3.193.29]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12468 invoked from network); 6 Aug 2012 15:51:01 -0000
Received: from ns3.ingenierium.com (HELO Dulac.clarte.asso.fr) (194.3.193.29)
	by server-7.tower-21.messagelabs.com with AES128-SHA encrypted SMTP;
	6 Aug 2012 15:51:01 -0000
Received: from dulac.ingenierium.com ([192.168.59.1]) by dulac
	([192.168.59.1]) with mapi id 14.01.0289.001;
	Mon, 6 Aug 2012 17:50:59 +0200
From: =?iso-8859-1?Q?Aur=E9lien_MILLIAT?= <Aurelien.MILLIAT@clarte.asso.fr>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Thread-Topic: [Xen-devel] ATI VGA passthrough and S400 Synchronization module
Thread-Index: Ac1ps8zkCPzyeRZ3RWeWIDwxqaix/gFrW5YAASHqQkI=
Date: Mon, 6 Aug 2012 15:50:58 +0000
Message-ID: <36774CA35642C143BCDE93BA0C68DC5702C0B71A@dulac>
References: <36774CA35642C143BCDE93BA0C68DC5702C0AE46@dulac>,
	<20120731231246.GD32698@phenom.dumpdata.com>
In-Reply-To: <20120731231246.GD32698@phenom.dumpdata.com>
Accept-Language: fr-FR, en-US
Content-Language: fr-FR
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [90.49.216.20]
x-avast-antispam: clean, score=0
x-original-content-type: application/ms-tnef
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: [Xen-devel]  ATI VGA passthrough and S400 Synchronization module
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



>On Tue, Jul 24, 2012 at 04:05:20PM +0000, Aurélien MILLIAT wrote:
>> Hi,
>>
>> I'm currently trying to use Xen on a graphical cluster.
>> VGA passthrough works fine and I'm able to use quad-buffer for active stereo.
>> The last thing to do is to synchronize all GPUs in the cluster.
>> For this purpose I use the ATI FirePro S400, and it doesn't work.
>>
>> I've seen two behaviors:
>> - When I run the lspci command on Dom0 I get:
>> 0f:00.0 VGA compatible controller: ATI Technologies Inc Device 6888
>> 0f:00.1 Audio device: ATI Technologies Inc Cypress HDMI Audio [Radeon HD 5800 Series]
>> And sometimes just:
>> 0f:00.0 VGA compatible controller: ATI Technologies Inc Device 6888
>> - When a DomU (Windows 7) is running, it's very slow (I can't do anything) and it crashes after several minutes.
>> I've tried with the unstable version and I've seen the same lspci behavior.
>>
>> I have a couple of questions:
>> Is it possible to pass through a VGA card with this extension?
>
>I never tried. I just pass in the VGA card and don't try to pass in the HDMI driver.
>

I've tried this; usually the computer reboots without any information in the logs. Sometimes Xen raises an error because the VGA card can't be passed through.

>> As it's a particular use of VGA passthrough, is it planned to be able to use the synchronization module?
>
>So what is the synchronization module for you? Is that the HDMI part?
>

No; as it's an extension that plugs directly onto the VGA card, it seems that Xen isn't aware of it.

>> Is it easy to add this feature (time cost)?
>>
>> Computers: HP Z800 workstation
>> GPU: ATI FirePro V8800
>> CPU: Intel Xeon E5640
>> MB: Intel 5520 chipset
>>
>> Xen:
>> Version 4.1.2
>> With ATI patch from http://old-list-archives.xen.org/archives/html/xen-users/2011-05/msg00048.html
>> Thanks,
>> Aurélien Milliat
>>
>>
>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 15:53:24 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 15:53:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyPcL-0006Ud-Ll; Mon, 06 Aug 2012 15:53:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SyPcJ-0006UV-Bj
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 15:53:15 +0000
Received: from [85.158.143.35:8374] by server-2.bemta-4.messagelabs.com id
	C6/CA-17938-A68EF105; Mon, 06 Aug 2012 15:53:14 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1344268394!5760017!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM4NjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16882 invoked from network); 6 Aug 2012 15:53:14 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-4.tower-21.messagelabs.com with SMTP;
	6 Aug 2012 15:53:14 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 06 Aug 2012 16:53:14 +0100
Message-Id: <502004870200007800093014@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Mon, 06 Aug 2012 16:53:11 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Daniel De Graaf" <dgdegra@tycho.nsa.gov>
References: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
	<1344263550-3941-12-git-send-email-dgdegra@tycho.nsa.gov>
	<501FFC580200007800092FA4@nat28.tlf.novell.com>
	<501FE1F8.1040100@tycho.nsa.gov>
In-Reply-To: <501FE1F8.1040100@tycho.nsa.gov>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 11/18] xen: use XSM instead of IS_PRIV where
 duplicated
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 06.08.12 at 17:25, Daniel De Graaf <dgdegra@tycho.nsa.gov> wrote:
> On 08/06/2012 11:18 AM, Jan Beulich wrote:
>>>>> On 06.08.12 at 16:32, Daniel De Graaf <dgdegra@tycho.nsa.gov> wrote:
>>> --- a/xen/arch/x86/hvm/hvm.c
>>> +++ b/xen/arch/x86/hvm/hvm.c
>>> @@ -3366,12 +3366,12 @@ static int hvmop_set_pci_intx_level(
>>>      if ( (op.domain > 0) || (op.bus > 0) || (op.device > 31) || (op.intx > 3) )
>>>          return -EINVAL;
>>>  
>>> -    rc = rcu_lock_remote_target_domain_by_id(op.domid, &d);
>>> -    if ( rc != 0 )
>>> -        return rc;
>>> +    d = rcu_lock_domain_by_id(op.domid);
>>> +    if ( d == NULL )
>>> +        return -ESRCH;
>>>  
>>>      rc = -EINVAL;
>>> -    if ( !is_hvm_domain(d) )
>>> +    if ( d == current->domain || !is_hvm_domain(d) )
>> 
>> What's wrong with rcu_lock_remote_target_domain_by_id() here
>> and in other places below? I think this would minimally deserve
>> a comment in the patch description, all the more since this huge a
>> patch is already bad enough to look at.
> 
> The main reason for this change is that rcu_lock_remote_target_domain_by_id
> calls IS_PRIV, and this patch is attempting to remove the duplicated calls.
> Would you prefer making another rcu_lock_* function that only checks against
> current->domain and doesn't include the IS_PRIV_FOR check?

Yes, I think the restructuring should be done so that no new
"d == current->domain" checks or the like get introduced (or at
least not as many of them as this patch introduced).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 15:54:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 15:54:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyPdJ-0006c4-3r; Mon, 06 Aug 2012 15:54:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SyPdH-0006bv-M7
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 15:54:15 +0000
Received: from [85.158.143.35:12060] by server-3.bemta-4.messagelabs.com id
	41/AC-01511-7A8EF105; Mon, 06 Aug 2012 15:54:15 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1344268449!17031817!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM4NjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18663 invoked from network); 6 Aug 2012 15:54:12 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-14.tower-21.messagelabs.com with SMTP;
	6 Aug 2012 15:54:12 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 06 Aug 2012 16:54:09 +0100
Message-Id: <502004BE0200007800093028@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Mon, 06 Aug 2012 16:54:06 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Stefano Stabellini" <stefano.stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
	<1344262325-26598-1-git-send-email-stefano.stabellini@eu.citrix.com>
	<501FFFB50200007800092FC7@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208061640270.4645@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1208061640270.4645@kaball.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	"Tim Deegan \(3P\)" <Tim.Deegan@citrix.com>
Subject: Re: [Xen-devel] [PATCH 1/5] xen: improve changes to
 xen_add_to_physmap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 06.08.12 at 17:43, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
wrote:
> On Mon, 6 Aug 2012, Jan Beulich wrote:
>> >>> On 06.08.12 at 16:12, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
>> wrote:
>> > This is an incremental patch on top of
>> > c0bc926083b5987a3e9944eec2c12ad0580100e2: in order to retain binary
>> > compatibility, it is better to introduce foreign_domid as part of a
>> > union containing both size and foreign_domid.
>> > 
>> > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
>> > ---
>> >  xen/include/public/memory.h |   11 +++++++----
>> >  1 files changed, 7 insertions(+), 4 deletions(-)
>> > 
>> > diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
>> > index b2adfbe..b0af2fd 100644
>> > --- a/xen/include/public/memory.h
>> > +++ b/xen/include/public/memory.h
>> > @@ -208,8 +208,12 @@ struct xen_add_to_physmap {
>> >      /* Which domain to change the mapping for. */
>> >      domid_t domid;
>> >  
>> > -    /* Number of pages to go through for gmfn_range */
>> > -    uint16_t    size;
>> > +    union {
>> > +        /* Number of pages to go through for gmfn_range */
>> > +        uint16_t    size;
>> > +        /* IFF gmfn_foreign */
>> > +        domid_t foreign_domid;
>> > +    };
>> 
>> But you're clear that this isn't standard C, and hence can't go
>> in this way?
>> 
> 
> Why? It is C11 if I am not mistaken.

Yes. But the common baseline is C89.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 15:58:26 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 15:58:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyPhB-0006t2-Qa; Mon, 06 Aug 2012 15:58:17 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SyPhA-0006ss-A4
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 15:58:16 +0000
Received: from [85.158.138.51:48896] by server-8.bemta-3.messagelabs.com id
	1B/CC-25919-799EF105; Mon, 06 Aug 2012 15:58:15 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-174.messagelabs.com!1344268694!30621844!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM4NjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27804 invoked from network); 6 Aug 2012 15:58:14 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-174.messagelabs.com with SMTP;
	6 Aug 2012 15:58:14 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 06 Aug 2012 16:58:14 +0100
Message-Id: <502005B2020000780009302B@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Mon, 06 Aug 2012 16:58:10 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>
References: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
	<1344262325-26598-4-git-send-email-stefano.stabellini@eu.citrix.com>
	<5020025C0200007800092FEE@nat28.tlf.novell.com>
	<1344268070.11339.53.camel@zakaz.uk.xensource.com>
In-Reply-To: <1344268070.11339.53.camel@zakaz.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xen.org>,
	"Tim Deegan \(3P\)" <Tim.Deegan@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 4/5] xen: introduce XEN_GUEST_HANDLE_PARAM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 06.08.12 at 17:47, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Mon, 2012-08-06 at 16:43 +0100, Jan Beulich wrote:
>> >>> On 06.08.12 at 16:12, Stefano Stabellini <stefano.stabellini@eu.citrix.com> 
> wrote:
>> > Note: this change does not make any difference on x86 and ia64.
>> > 
>> > 
>> > XEN_GUEST_HANDLE_PARAM is going to be used to distinguish guest pointers
>> > stored in memory from guest pointers as hypercall parameters.
>> 
>> I have to admit that I really dislike this, to a large part because of
>> the follow up patch that clutters the corresponding function
>> declarations even further. Plus I see no mechanism to convert
>> between the two, yet I can't see how - long term at least - you
>> could get away without such conversion.
>> 
>> Is it really a well thought through and settled upon decision to
>> make guest handles 64 bits wide even on 32-bit ARM? After all,
>> both x86 and PPC got away without doing so
> 
> Well, on x86 we have the compat XLAT layer, which is a pretty complex
> piece of code, so "got away without" is a bit strong...

Hmm, yes, that's a valid correction.

> We'd really
> rather not have to have a non-trivial compat layer on arm too by having
> the struct layouts be the same on 32/64.

And is paying a penalty like this in the 32-bit half (if what is likely
to remain the much bigger portion for the next couple of years
can validly be called "half") worth it? All the more so now that the
compat layer is reasonably mature (and should hence be easily
re-usable for ARM)?

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 16:03:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 16:03:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyPmN-0007SX-Ie; Mon, 06 Aug 2012 16:03:39 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1SyPmL-0007SP-Hk
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 16:03:37 +0000
Received: from [85.158.139.83:60445] by server-8.bemta-5.messagelabs.com id
	83/EA-10278-6DAEF105; Mon, 06 Aug 2012 16:03:34 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-10.tower-182.messagelabs.com!1344269014!30788055!1
X-Originating-IP: [209.85.215.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13816 invoked from network); 6 Aug 2012 16:03:34 -0000
Received: from mail-ey0-f173.google.com (HELO mail-ey0-f173.google.com)
	(209.85.215.173)
	by server-10.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 16:03:34 -0000
Received: by eaac13 with SMTP id c13so275676eaa.32
	for <xen-devel@lists.xen.org>; Mon, 06 Aug 2012 09:03:34 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=cSXuIrByNbmbxfqlf89J249HGjjkGGOvD80+fQckWxE=;
	b=boOJwEdRNiK1EWvd4p9SReLguqwW5JCfTMGJ5Px2mKRgUvNJYZhwNMbu2KpPXkcjNC
	tSAjSM6QHhV28DrpYP0/aMNSZZhHgN7evBaFtWwLKSh8U6Wt0DS+psMyBHwIoHRJIMe4
	4tjg1RhO2p1Uo+Coz4KGnbRFWylVlAQBSJ9gFxgIrhWFmu3vI/wd3f/t0g4X7s8BDxHz
	J97Yb8vO7swYYx4rY1CI8KPLwxWXaL46sxDMD/LxUgoIy5wnfaglzZGCu/oe5dkZBXhV
	m8aHUJbY1i0S3rEFUMcCjUUNtdR/oFX+gT9aMc0eD0I/hI5/mzCEoymaoUmcBgRrRW2f
	XbDQ==
MIME-Version: 1.0
Received: by 10.14.214.197 with SMTP id c45mr13645822eep.37.1344269013955;
	Mon, 06 Aug 2012 09:03:33 -0700 (PDT)
Received: by 10.14.206.135 with HTTP; Mon, 6 Aug 2012 09:03:33 -0700 (PDT)
In-Reply-To: <501FED030200007800092E35@nat28.tlf.novell.com>
References: <501173540200007800090A04@nat28.tlf.novell.com>
	<50116CD9.6000503@eu.citrix.com>
	<501FE96E0200007800092E25@nat28.tlf.novell.com>
	<501FED030200007800092E35@nat28.tlf.novell.com>
Date: Mon, 6 Aug 2012 17:03:33 +0100
X-Google-Sender-Auth: HaxyWpW9SD5k4NkIWo2gNV-8gCI
Message-ID: <CAFLBxZZ5ifar7ou3NaeKne3b0rrw=wkjN6E1bEZ8+M=2w2bz2A@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] PoD code killing domain before it really gets
	started
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Aug 6, 2012 at 3:12 PM, Jan Beulich <JBeulich@suse.com> wrote:
>>>> On 06.08.12 at 15:57, "Jan Beulich" <JBeulich@suse.com> wrote:
>> The domain indeed has 0x1e0 pages allocated, and a huge (still
>> growing number) of PoD entries. And apparently this fails so
>> rarely because it's pretty unlikely that there's not a single clear
>> page that the PoD code can select as victim, plus the Dom0
>> space code likely also only infrequently happens to kick in at
>> the wrong time.
>
> Just realized that of course it's also suspicious that there
> shouldn't be any clear page among those 480 - Dom0 scrubs
> its pages as it balloons them out (but I think ballooning isn't even
> in use there), Xen scrubs the free pages on boot, yet this
> reportedly has happened also for the very first domain ever
> created after boot. Or does the PoD code not touch the low
> 2Mb for some reason?

Hmm -- the sweep code has some fairly complicated heuristics.  Ah -- I
bet this is it: The algorithm implicitly assumes that the first sweep
will happen after the first demand-fault.  It's designed to start at
the last demand-faulted gpfn (tracked by p2m->pod.max_guest) and go
downwards.  When it reaches 0, it stops its sweep (?!), and resets to
max_guest on the next entry.  But if max_guest is 0, this means it
will basically never sweep at all.

I guess there are two problems with that:
* As you've seen, apparently dom0 may access these pages before any
faults happen.
* If it happens that reclaim_single is below the only zeroed page, the
guest will crash even when there is reclaim-able memory available.

A few ways we could fix this:
1. Remove dom0 accesses (what on earth could be looking at a
not-yet-created VM?)
2. Allocate the PoD cache before populating the p2m table
3. Make it so that some accesses fail w/o crashing the guest?  I don't
see how that's really practical.
4. Change the sweep routine so that the lower 2MiB gets swept

#2 would require us to use all PoD entries when building the p2m
table, thus addressing the mail you mentioned from 25 July*.  Given
that you don't want #1, it seems like #2 is the best option.

No matter what we do, the sweep routine for 4.2 should be re-written
to search all of memory at least once (maybe with a timeout for
watchdogs), since it's only called in an actual emergency.

Let me take a look...

 -George

* Sorry for not responding to that one; I must have missed it in my
return-from-travelling e-mail sweep.  If you CC me next time I'll be
sure to get it.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 16:03:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 16:03:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyPmU-0007T6-VF; Mon, 06 Aug 2012 16:03:46 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SyPmT-0007Sf-HJ
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 16:03:45 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1344268976!4140232!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgxNjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22385 invoked from network); 6 Aug 2012 16:02:57 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 16:02:57 -0000
X-IronPort-AV: E=Sophos;i="4.77,720,1336348800"; d="scan'208";a="13870631"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 16:02:26 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Mon, 6 Aug 2012 17:02:26 +0100
Date: Mon, 6 Aug 2012 17:02:04 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Jan Beulich <JBeulich@suse.com>
In-Reply-To: <502005B2020000780009302B@nat28.tlf.novell.com>
Message-ID: <alpine.DEB.2.02.1208061659320.4645@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
	<1344262325-26598-4-git-send-email-stefano.stabellini@eu.citrix.com>
	<5020025C0200007800092FEE@nat28.tlf.novell.com>
	<1344268070.11339.53.camel@zakaz.uk.xensource.com>
	<502005B2020000780009302B@nat28.tlf.novell.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	xen-devel <xen-devel@lists.xen.org>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	"Tim Deegan \(3P\)" <Tim.Deegan@citrix.com>
Subject: Re: [Xen-devel] [PATCH 4/5] xen: introduce XEN_GUEST_HANDLE_PARAM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 6 Aug 2012, Jan Beulich wrote:
> >>> On 06.08.12 at 17:47, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > On Mon, 2012-08-06 at 16:43 +0100, Jan Beulich wrote:
> >> >>> On 06.08.12 at 16:12, Stefano Stabellini <stefano.stabellini@eu.citrix.com> 
> > wrote:
> >> > Note: this change does not make any difference on x86 and ia64.
> >> > 
> >> > 
> >> > XEN_GUEST_HANDLE_PARAM is going to be used to distinguish guest pointers
> >> > stored in memory from guest pointers as hypercall parameters.
> >> 
> >> I have to admit that really dislike this, to a large part because of
> >> the follow up patch that clutters the corresponding function
> >> declarations even further. Plus I see no mechanism to convert
> >> between the two, yet I can't see how - long term at least - you
> >> could get away without such conversion.
> >> 
> >> Is it really a well thought through and settled upon decision to
> >> make guest handles 64 bits wide even on 32-bit ARM? After all,
> >> both x86 and PPC got away without doing so
> > 
> > Well, on x86 we have the compat XLAT layer, which is a pretty complex
> > piece of code, so "got away without" is a bit strong...
> 
> Hmm, yes, that's a valid correction.
> 
> > We'd really
> > rather not have to have a non-trivial compat layer on arm too by having
> > the struct layouts be the same on 32/64.
> 
> And is paying a penalty like this in the 32-bit half (if what is likely
> to remain the much bigger portion for the next couple of years
> can validly be called "half") worth it? All the more so given that
> the compat layer is now reasonably mature (and should hence be
> easily re-usable for ARM)?

What penalty? The only penalty is the wasted space in the structs in
memory.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 16:03:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 16:03:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyPmU-0007T6-VF; Mon, 06 Aug 2012 16:03:46 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SyPmT-0007Sf-HJ
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 16:03:45 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1344268976!4140232!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgxNjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22385 invoked from network); 6 Aug 2012 16:02:57 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 16:02:57 -0000
X-IronPort-AV: E=Sophos;i="4.77,720,1336348800"; d="scan'208";a="13870631"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 16:02:26 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Mon, 6 Aug 2012 17:02:26 +0100
Date: Mon, 6 Aug 2012 17:02:04 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Jan Beulich <JBeulich@suse.com>
In-Reply-To: <502005B2020000780009302B@nat28.tlf.novell.com>
Message-ID: <alpine.DEB.2.02.1208061659320.4645@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
	<1344262325-26598-4-git-send-email-stefano.stabellini@eu.citrix.com>
	<5020025C0200007800092FEE@nat28.tlf.novell.com>
	<1344268070.11339.53.camel@zakaz.uk.xensource.com>
	<502005B2020000780009302B@nat28.tlf.novell.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	xen-devel <xen-devel@lists.xen.org>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	"Tim Deegan \(3P\)" <Tim.Deegan@citrix.com>
Subject: Re: [Xen-devel] [PATCH 4/5] xen: introduce XEN_GUEST_HANDLE_PARAM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 6 Aug 2012, Jan Beulich wrote:
> >>> On 06.08.12 at 17:47, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > On Mon, 2012-08-06 at 16:43 +0100, Jan Beulich wrote:
> >> >>> On 06.08.12 at 16:12, Stefano Stabellini <stefano.stabellini@eu.citrix.com> 
> > wrote:
> >> > Note: this change does not make any difference on x86 and ia64.
> >> > 
> >> > 
> >> > XEN_GUEST_HANDLE_PARAM is going to be used to distinguish guest pointers
> >> > stored in memory from guest pointers as hypercall parameters.
> >> 
> >> I have to admit that I really dislike this, in large part because of
> >> the follow-up patch that clutters the corresponding function
> >> declarations even further. Plus I see no mechanism to convert
> >> between the two, yet I can't see how - long term at least - you
> >> could get away without such conversion.
> >> 
> >> Is it really a well thought through and settled upon decision to
> >> make guest handles 64 bits wide even on 32-bit ARM? After all,
> >> both x86 and PPC got away without doing so
> > 
> > Well, on x86 we have the compat XLAT layer, which is a pretty complex
> > piece of code, so "got away without" is a bit strong...
> 
> Hmm, yes, that's a valid correction.
> 
> > We'd really
> > rather not have to have a non-trivial compat layer on arm too by having
> > the struct layouts be the same on 32/64.
> 
> And is paying a penalty like this in the 32-bit half (if what is likely
> to remain the much bigger portion for the next couple of years
> can validly be called "half") worth it? All the more so given that
> the compat layer is now reasonably mature (and should hence be
> easily re-usable for ARM)?

What penalty? The only penalty is the wasted space in the structs in
memory.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 16:09:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 16:09:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyPsH-0007p7-Q5; Mon, 06 Aug 2012 16:09:45 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SyPsG-0007p2-3r
	for xen-devel@lists.xensource.com; Mon, 06 Aug 2012 16:09:44 +0000
Received: from [85.158.138.51:13408] by server-6.bemta-3.messagelabs.com id
	D9/5E-02321-74CEF105; Mon, 06 Aug 2012 16:09:43 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-9.tower-174.messagelabs.com!1344269382!30714559!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgxNjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9194 invoked from network); 6 Aug 2012 16:09:42 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-9.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 16:09:42 -0000
X-IronPort-AV: E=Sophos;i="4.77,720,1336348800"; d="scan'208";a="13870745"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 16:09:42 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Mon, 6 Aug 2012 17:09:42 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1SyPsD-0005xD-P7;
	Mon, 06 Aug 2012 16:09:41 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1SyPsD-00032g-Bo;
	Mon, 06 Aug 2012 17:09:41 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13558-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 6 Aug 2012 17:09:41 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 13558: trouble: blocked/broken
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13558 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13558/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386                    2 host-install(2)         broken REGR. vs. 13525
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 13525
 build-i386-pvops              2 host-install(2)         broken REGR. vs. 13525
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 13525
 build-amd64                   2 host-install(2)         broken REGR. vs. 13525
 build-amd64-pvops             2 host-install(2)         broken REGR. vs. 13525

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  f8f8912b3de0
baseline version:
 xen                  fa34499e8f6c

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Joe Jin <joe.jin@oracle.com>
  Keir Fraser <keir@xen.org>
  Santosh Jodh <santosh.jodh@citrix.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23331:f8f8912b3de0
tag:         tip
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Fri Aug 03 10:43:24 2012 +0100
    
    nestedhvm: fix nested page fault build error on 32-bit
    
        cc1: warnings being treated as errors
        hvm.c: In function 'hvm_hap_nested_page_fault':
        hvm.c:1282: error: passing argument 2 of
        'nestedhvm_hap_nested_page_fault' from incompatible pointer type
        /local/scratch/ianc/devel/xen-unstable.hg/xen/include/asm/hvm/nestedhvm.h:55:
        note: expected 'paddr_t *' but argument is of type 'long unsigned
        int *'
    
    hvm_hap_nested_page_fault takes an unsigned long gpa and passes &gpa
    to nestedhvm_hap_nested_page_fault which takes a paddr_t *. Since both
    of the callers of hvm_hap_nested_page_fault (svm_do_nested_pgfault and
    ept_handle_violation) actually have the gpa which they pass to
    hvm_hap_nested_page_fault as a paddr_t I think it makes sense to
    change the argument to hvm_hap_nested_page_fault.
    
    The other user of gpa in hvm_hap_nested_page_fault is a call to
    p2m_mem_access_check, which currently also takes a paddr_t gpa but I
    think a paddr_t is appropriate there too.
    
    Jan points out that this is also an issue for >4GB guests on the 32
    bit hypervisor.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Committed-by: Ian Campbell <ian.campbell@citrix.com>
    xen-unstable changeset:   25724:612898732e66
    xen-unstable date:        Fri Aug 03 09:54:17 2012 +0100
    Backported-by: Keir Fraser <keir@xen.org>
    
    
changeset:   23330:5a65d6a1aab7
user:        Santosh Jodh <santosh.jodh@citrix.com>
date:        Fri Aug 03 10:39:13 2012 +0100
    
    Intel VT-d: Dump IOMMU supported page sizes
    
    Signed-off-by: Santosh Jodh <santosh.jodh@citrix.com>
    xen-unstable changeset:   25725:9ad379939b78
    xen-unstable date:        Fri Aug 03 10:38:04 2012 +0100
    
    
changeset:   23329:fa34499e8f6c
user:        Jan Beulich <jbeulich@suse.com>
date:        Mon Jul 30 13:38:58 2012 +0100
    
    x86: fix off-by-one in nr_irqs_gsi calculation
    
    highest_gsi() returns the last valid GSI, not a count.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Joe Jin <joe.jin@oracle.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset:   25688:e6266fc76d08
    xen-unstable date:        Fri Jul 27 12:22:13 2012 +0200
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 16:18:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 16:18:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyQ08-0008TB-Id; Mon, 06 Aug 2012 16:17:52 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1SyQ07-0008T4-J7
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 16:17:51 +0000
Received: from [85.158.143.35:52237] by server-2.bemta-4.messagelabs.com id
	52/18-17938-E2EEF105; Mon, 06 Aug 2012 16:17:50 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1344269869!5763579!1
X-Originating-IP: [209.85.215.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5935 invoked from network); 6 Aug 2012 16:17:50 -0000
Received: from mail-ey0-f173.google.com (HELO mail-ey0-f173.google.com)
	(209.85.215.173)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 16:17:50 -0000
Received: by eaac13 with SMTP id c13so280595eaa.32
	for <xen-devel@lists.xen.org>; Mon, 06 Aug 2012 09:17:49 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=YGjeDi/Egf6uv0LsJ3wVRUiLxW5pNazt+kR7NNxhzN0=;
	b=cD76ZQ3Ofnhle8RCQmCFb2GqRgRot9W5Pf61GmKOKr1OKOVi3NcFTl0NumMLxhnq3u
	ylE+Hc26fkkKPItit+JRhQByMg0QZtEAH/b/ppgaa+wQs6NOYrXmqeXdNuajtg61e7S+
	H4m0JEbaVjybR97Fborlplf4wCDyM4Uf8JrjJAlfdZsEgsWrzl715Z60l4hrhCJ2RoId
	K8PEiYdRj83wpN1eyIiF0vQzsH0LJBcB8og1CdhQTJiBScLahvwNKUW0XgxvdQoRWS7w
	jRHMx1ktxSuB64sVpqVFOHDv83+6E6pQt2B9JBmdo0sC15sYaKWJQq0sBDzRxjPXlATZ
	6kIA==
MIME-Version: 1.0
Received: by 10.14.216.198 with SMTP id g46mr13800557eep.32.1344269869381;
	Mon, 06 Aug 2012 09:17:49 -0700 (PDT)
Received: by 10.14.206.135 with HTTP; Mon, 6 Aug 2012 09:17:49 -0700 (PDT)
In-Reply-To: <1344243491.11339.11.camel@zakaz.uk.xensource.com>
References: <1344243491.11339.11.camel@zakaz.uk.xensource.com>
Date: Mon, 6 Aug 2012 17:17:49 +0100
X-Google-Sender-Auth: arIUoK0GNblOBLyWxiTfLwSkgBw
Message-ID: <CAFLBxZbgskkGEPDdFr-2tZ0JCzXvu=PeRLidc4_tRx+9T7TtVw@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: Keir Fraser <keir@xen.org>, Jan Beulich <JBeulich@suse.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] 4.2 TODO / Release Plan
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Aug 6, 2012 at 9:58 AM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> hypervisor, nice to have:

* [BUG(?)] Under certain conditions, the p2m_pod_sweep code will stop
halfway through searching, causing a guest to crash even if there was
zeroed memory available.  This is NOT a regression from 4.1, and is a
very rare case, so probably shouldn't be a blocker.  (In fact, I'd be
open to the idea that it should wait until after the release to get
more testing.)

I can take this one.

> tools, nice to have:

* [BUG(?)] If domain 0 attempts to access a guest's memory before it
is finished being built, and it is being built in PoD mode, this may
cause the guest to crash.  Again, this is NOT a regression from 4.1.
Furthermore, it's only been reported (AIUI) by a customer of OpenSuSE;
so it shouldn't be a blocker.  (Again, I'd be open to the idea that it
should wait until after the release to get more testing.)

Jan, since you've got access to the system that reproduces it, do you
want to take this one?  I think it should just be a matter of moving
xc_domain_set_target() just before the while() loop in the domain
builder (but after xc_domain_populate_physmap_exact, I think), and
changing the loop to never allocate real memory in PoD mode.  I can do
it, but it will be longer before we can get it tested.

 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 16:18:47 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 16:18:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyQ0t-0008WF-0S; Mon, 06 Aug 2012 16:18:39 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1SyQ0r-0008Vm-0t
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 16:18:37 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1344269910!4078513!1
X-Originating-IP: [74.125.83.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 439 invoked from network); 6 Aug 2012 16:18:31 -0000
Received: from mail-ee0-f45.google.com (HELO mail-ee0-f45.google.com)
	(74.125.83.45)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 16:18:31 -0000
Received: by eeke53 with SMTP id e53so912577eek.32
	for <xen-devel@lists.xen.org>; Mon, 06 Aug 2012 09:18:30 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=3vXtIyXU1V54HpqoAhHHOdLe/rKW5JHAUmyB6i5pEEs=;
	b=qLD8bCUcrdEXDJmQ5aDehbBY9wLF3BakVkftR0BreuyMOS34KHLHPA5l/tP59w051o
	8XW7W5//Mv7PT3KZvOTRlVKwzFba/uCkObIAcsFaq8VSMtUnlChq6aNHh5r5Zcb1Akur
	D6hiAqcrWlwn+EX3XsCRKHjCH0az6Bswf3AmG93mW0Jqi3O/Aiddkx8RAGex8RnQzQTA
	ErkqdgZiLLFnDlTtUp0Yv5FcHdHE7t4WxJP7M4n8c2w2cYUJ6bcXimG+93OC0jwoV3jY
	uBMnsIaxHX1juu1z5+ylznOKlBo+jRRkIzj4prgR5Qo9hwWIHG9bvWlxw/7BtO3KZWQj
	p3gw==
MIME-Version: 1.0
Received: by 10.14.216.198 with SMTP id g46mr13803585eep.32.1344269910770;
	Mon, 06 Aug 2012 09:18:30 -0700 (PDT)
Received: by 10.14.206.135 with HTTP; Mon, 6 Aug 2012 09:18:30 -0700 (PDT)
In-Reply-To: <CAFLBxZbgskkGEPDdFr-2tZ0JCzXvu=PeRLidc4_tRx+9T7TtVw@mail.gmail.com>
References: <1344243491.11339.11.camel@zakaz.uk.xensource.com>
	<CAFLBxZbgskkGEPDdFr-2tZ0JCzXvu=PeRLidc4_tRx+9T7TtVw@mail.gmail.com>
Date: Mon, 6 Aug 2012 17:18:30 +0100
X-Google-Sender-Auth: N_2aDxhWQc0smMyqQPmmb3OcFXs
Message-ID: <CAFLBxZZT=ZKcXNiCoXYMEza5GOvgmJ_fHB=1hq2KAWo7aEJQaQ@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: Keir Fraser <keir@xen.org>, Jan Beulich <JBeulich@suse.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] 4.2 TODO / Release Plan
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Aug 6, 2012 at 5:17 PM, George Dunlap
<George.Dunlap@eu.citrix.com> wrote:
> * [BUG(?)] If domain 0 attempts to access a guest's memory before it
> is finished being built, and it is being built in PoD mode, this may
> cause the guest to crash.  Again, this is NOT a regression from 4.1.
> Furthermore, it's only been reported (AIUI) by a customer of OpenSuSE;

Sorry, SuSE, not OpenSuSE; fingers on auto-pilot...

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 16:23:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 16:23:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyQ5E-0000M8-PD; Mon, 06 Aug 2012 16:23:08 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1SyQ5D-0000Lx-7K
	for xen-devel@lists.xensource.com; Mon, 06 Aug 2012 16:23:07 +0000
Received: from [85.158.138.51:37630] by server-10.bemta-3.messagelabs.com id
	FF/FA-07905-A6FEF105; Mon, 06 Aug 2012 16:23:06 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-6.tower-174.messagelabs.com!1344270184!22627924!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjI4NDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22215 invoked from network); 6 Aug 2012 16:23:05 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 16:23:05 -0000
X-IronPort-AV: E=Sophos;i="4.77,720,1336363200"; d="scan'208";a="33714898"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 12:23:03 -0400
Received: from [10.80.2.76] (10.80.2.76) by FTLPMAILMX02.citrite.net
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0; Mon, 6 Aug 2012
	12:23:04 -0400
Message-ID: <501FEF65.1000304@citrix.com>
Date: Mon, 6 Aug 2012 17:23:01 +0100
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20120428 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
	<1344263246-28036-7-git-send-email-stefano.stabellini@eu.citrix.com>
In-Reply-To: <1344263246-28036-7-git-send-email-stefano.stabellini@eu.citrix.com>
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, konrad.wilk@oracle.com,
	catalin.marinas@arm.com, tim@xen.org, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH v2 07/23] xen/arm: Xen detection and
 shared_info page mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 06/08/12 15:27, Stefano Stabellini wrote:
> Check for a "/xen" node in the device tree, if it is present set
> xen_domain_type to XEN_HVM_DOMAIN and continue initialization.
> 
> Map the real shared info page using XENMEM_add_to_physmap with
> XENMAPSPACE_shared_info.
> 
> Changes in v2:
> 
> - replace pr_info with pr_debug.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> ---
>  arch/arm/xen/enlighten.c |   52 ++++++++++++++++++++++++++++++++++++++++++++++
>  1 files changed, 52 insertions(+), 0 deletions(-)
> 
> diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
> index d27c2a6..102d823 100644
> --- a/arch/arm/xen/enlighten.c
> +++ b/arch/arm/xen/enlighten.c
> @@ -5,6 +5,9 @@
>  #include <asm/xen/hypervisor.h>
>  #include <asm/xen/hypercall.h>
>  #include <linux/module.h>
> +#include <linux/of.h>
> +#include <linux/of_irq.h>
> +#include <linux/of_address.h>
>  
>  struct start_info _xen_start_info;
>  struct start_info *xen_start_info = &_xen_start_info;
> @@ -33,3 +36,52 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
>  	return -ENOSYS;
>  }
>  EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_range);
> +
> +/*
> + * == Xen Device Tree format ==
> + * - /xen node;
> + * - compatible "arm,xen";
> + * - one interrupt for Xen event notifications;
> + * - one memory region to map the grant_table.
> + */

These need to be documented in Documentation/devicetree/bindings/ and
should be sent to the devicetree-discuss mailing list for review.

The node should be called 'hypervisor' I think.

The first word of the compatible string is the vendor/organization that
defined the binding, so it should be "xen" here.  This does give an
odd-looking "xen,xen", but we'll have to live with that.

I'd suggest that the DT provided by the hypervisor or tools give the
hypercall ABI version in the compatible string as well.  e.g.,

hypervisor {
    compatible = "xen,xen-4.3", "xen,xen"
};
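Pulling these suggestions together with the items listed in the patch's
comment block (one interrupt for event notifications, one memory region
for the grant table), the full binding might look roughly like this.
The interrupt specifier and register values below are purely
hypothetical placeholders, not a defined binding:

```dts
hypervisor {
	compatible = "xen,xen-4.3", "xen,xen";
	/* hypothetical example values; the real binding would need to
	 * be defined in Documentation/devicetree/bindings/ */
	interrupts = <1 15 0xf08>;	/* Xen event-channel upcall */
	reg = <0xb0000000 0x20000>;	/* grant table region */
};
```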

I missed the Xen patch that adds this node for dom0.  Can you point me
to it?

David

> +static int __init xen_guest_init(void)
> +{
> +	struct xen_add_to_physmap xatp;
> +	static struct shared_info *shared_info_page = 0;
> +	struct device_node *node;
> +
> +	node = of_find_compatible_node(NULL, NULL, "arm,xen");
> +	if (!node) {
> +		pr_debug("No Xen support\n");
> +		return 0;
> +	}
> +	xen_domain_type = XEN_HVM_DOMAIN;
> +
> +	if (!shared_info_page)
> +		shared_info_page = (struct shared_info *)
> +			get_zeroed_page(GFP_KERNEL);
> +	if (!shared_info_page) {
> +		pr_err("not enough memory\n");
> +		return -ENOMEM;
> +	}
> +	xatp.domid = DOMID_SELF;
> +	xatp.idx = 0;
> +	xatp.space = XENMAPSPACE_shared_info;
> +	xatp.gpfn = __pa(shared_info_page) >> PAGE_SHIFT;
> +	if (HYPERVISOR_memory_op(XENMEM_add_to_physmap, &xatp))
> +		BUG();
> +
> +	HYPERVISOR_shared_info = (struct shared_info *)shared_info_page;
> +
> +	/* xen_vcpu is a pointer to the vcpu_info struct in the shared_info
> +	 * page, we use it in the event channel upcall and in some pvclock
> +	 * related functions. We don't need the vcpu_info placement
> +	 * optimizations because we don't use any pv_mmu or pv_irq op on
> +	 * HVM.
> +	 * The shared info contains exactly 1 CPU (the boot CPU). The guest
> +	 * is required to use VCPUOP_register_vcpu_info to place vcpu info
> +	 * for secondary CPUs as they are brought up. */
> +	per_cpu(xen_vcpu, 0) = &HYPERVISOR_shared_info->vcpu_info[0];
> +	return 0;
> +}
> +core_initcall(xen_guest_init);


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 16:26:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 16:26:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyQ8D-0000VA-CL; Mon, 06 Aug 2012 16:26:13 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1SyQ8B-0000Uz-Rq
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 16:26:12 +0000
Received: from [85.158.138.51:55442] by server-5.bemta-3.messagelabs.com id
	40/49-27557-320FF105; Mon, 06 Aug 2012 16:26:11 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-8.tower-174.messagelabs.com!1344270360!30626587!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjcyOTUw\n,
	ML_RADAR_SPEW_LINKS_8, spamassassin: ,
	async_handler: YXN5bmNfZGVsYXk6IDcwNDM4NDEgKHRpbWVvdXQp\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22394 invoked from network); 6 Aug 2012 16:26:01 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-8.tower-174.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Aug 2012 16:26:01 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q76GPwNC002830
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 6 Aug 2012 16:25:59 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q76GPvIm000513
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 6 Aug 2012 16:25:57 GMT
Received: from abhmt113.oracle.com (abhmt113.oracle.com [141.146.116.65])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q76GPujh000818; Mon, 6 Aug 2012 11:25:57 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 06 Aug 2012 09:25:56 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 150D241F34; Mon,  6 Aug 2012 12:16:33 -0400 (EDT)
Date: Mon, 6 Aug 2012 12:16:33 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Tobias Geiger <tobias.geiger@vido.info>
Message-ID: <20120806161632.GA11007@phenom.dumpdata.com>
References: <d19a37de141b07add2cd49f3d1f63f2b@vido.info>
	<20120725134357.GA8959@phenom.dumpdata.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20120725134357.GA8959@phenom.dumpdata.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Regression in kernel 3.5 as Dom0 regarding PCI
 Passthrough?!
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jul 25, 2012 at 09:43:57AM -0400, Konrad Rzeszutek Wilk wrote:
> On Wed, Jul 25, 2012 at 02:30:00PM +0200, Tobias Geiger wrote:
> > Hi!
> > 
> > i notice a serious regression with 3.5 as Dom0 kernel (3.4 was rock
> > stable):
> > 
> > 1st: only the GPU PCI Passthrough works, the PCI USB Controller is
> > not recognized within the DomU (HVM Win7 64)
> > Dom0 cmdline is:
> > ro root=LABEL=dom0root xen-pciback.hide=(08:00.0)(08:00.1)(00:1d.0)(00:1d.1)(00:1d.2)(00:1d.7)
> > security=apparmor noirqdebug nouveau.msi=1
> > 
> > Only 8:00.0 and 8:00.1 get passed through without problems, all the
> > USB Controller IDs are not correctly passed through and get a
> > exclamation mark within the win7 device manager ("could not be
> > started").
> 
> Ok, but they do get passed in though? As in, QEMU sees them.
> If you boot a Live Ubuntu/Fedora CD within the guest with the PCI
> passed in devices do you see them? Meaning lspci shows them?
> 
> 
> Is the lspci -vvv output in dom0 different from 3.4 vs 3.5?
> 
> > 
> > 
> > 2nd: After DomU shutdown, Dom0 panics (100% reproducible) - sorry
> > that i have no full stacktrace, all i have is a "screenshot" which i
> > uploaded here:
> > http://imageshack.us/photo/my-images/52/img20120724235921.jpg/
> 
> Ugh, that looks like somebody removed a large chunk of a pagetable.
> 
> Hmm. Are you using dom0_mem=max parameter? If not, can you try
> that and also disable ballooning in the xm/xl config file pls?
> 
> > 
> > 
> > With 3.4 both issues were not there - everything worked perfectly.
> > Tell me which debugging info you need, i may be able to re-install
> > my netconsole to get the full stacktrace (but i had not much luck
> > with netconsole regarding kernel panics - rarely this info gets sent
> > before the "panic"...)

So I am able to reproduce this with a Windows 7 with an ATI 4870 and
an Intel 82574L NIC. The video card still works, but the NIC stopped
working. Same version of hypervisor/toolstack/etc, only change is the
kernel (v3.4.6->v3.5).

Time to get my hands greasy with this..
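[Editorial note] The workaround suggested earlier in this thread — pin dom0's memory with `dom0_mem` and disable ballooning for the guest — would look roughly like the fragment below. The sizes are placeholders, not recommendations; adjust for the actual machine:

```
# Xen hypervisor command line (in the bootloader entry):
# give dom0 a fixed allocation and cap it at the same value
dom0_mem=2048M,max:2048M

# xm/xl guest config: setting memory equal to maxmem leaves the
# balloon driver nothing to reclaim, effectively disabling ballooning
memory = 4096
maxmem = 4096
```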

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 16:29:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 16:29:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyQAw-0000e0-4k; Mon, 06 Aug 2012 16:29:02 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dan.magenheimer@oracle.com>) id 1SyQAu-0000dq-59
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 16:29:00 +0000
Received: from [85.158.143.99:43733] by server-1.bemta-4.messagelabs.com id
	79/79-24392-BC0FF105; Mon, 06 Aug 2012 16:28:59 +0000
X-Env-Sender: dan.magenheimer@oracle.com
X-Msg-Ref: server-10.tower-216.messagelabs.com!1344270536!24098523!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjcyOTUw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1296 invoked from network); 6 Aug 2012 16:28:58 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-10.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Aug 2012 16:28:58 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q76GSjrC005816
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 6 Aug 2012 16:28:46 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q76GSiHn009621
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 6 Aug 2012 16:28:45 GMT
Received: from abhmt101.oracle.com (abhmt101.oracle.com [141.146.116.53])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q76GSiHX002823; Mon, 6 Aug 2012 11:28:44 -0500
MIME-Version: 1.0
Message-ID: <566f060d-e29e-4b49-9746-1154873062bf@default>
Date: Mon, 6 Aug 2012 09:28:25 -0700 (PDT)
From: Dan Magenheimer <dan.magenheimer@oracle.com>
To: Jan Beulich <JBeulich@suse.com>, Dario Faggioli <raistlin@linux.it>
References: <1343837796.4958.32.camel@Solace>
	<501A67C502000078000921FF@nat28.tlf.novell.com>
	<6843caa4-9ef7-4e9d-97e5-9ebee55ec6e4@default>
	<501F8B250200007800092C62@nat28.tlf.novell.com>
In-Reply-To: <501F8B250200007800092C62@nat28.tlf.novell.com>
X-Priority: 3
X-Mailer: Oracle Beehive Extensions for Outlook 2.0.1.7  (607090) [OL
	12.0.6661.5003 (x86)]
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Andre Przywara <andre.przywara@amd.com>,
	Anil Madhavapeddy <anil@recoil.org>, George Dunlap <dunlapg@gmail.com>,
	xen-devel <xen-devel@lists.xen.org>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Yang ZZhang <yang.z.zhang@intel.com>
Subject: Re: [Xen-devel] NUMA TODO-list for xen-devel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> From: Jan Beulich [mailto:JBeulich@suse.com]
> Subject: RE: [Xen-devel] NUMA TODO-list for xen-devel
> 
> >>> On 04.08.12 at 00:34, Dan Magenheimer <dan.magenheimer@oracle.com> wrote:
> >>  From: Jan Beulich [mailto:JBeulich@suse.com]
> >> The question is whether trading functionality for performance
> >> is an acceptable choice.
> >
> > If there were a lwn.net equivalent for Xen, I'd be pushing to get
> > quoted on the following:
> >
> > "Virtualization: You can have flexibility or you can have performance.
> > Pick one."
> >
> > A couple of years ago when NUMA was first being extensively discussed
> > for Xen, I suggested that this should really be a "top level" flag
> > that a sysadmin should be able to select: Either optimize for
> > performance or optimize for flexibility.  Then Xen and the Xen tools
> > should "do the right thing" depending on the selection.
> >
> > I still think this is a good way to surface the tradeoffs for
> > a very complex problem to the vast majority of users/admins.
> > Clearly they will want "both" but forcing the choice will
> > provoke more thought about their use model, as well as provide
> > important guidance to the underlying implementations.
> 
> I would expect a good part to pick performance, and then
> go whine about something not working in an emergency. On
> xen-devel one could respond with this-is-what-you-get, but
> you can't necessarily do so to paying customers...

Well, you can, but you have to first convince marketing that
virtualization doesn't solve all problems for all users all the
time. :-)

The two options would have to be clearly documented as:

"flexibility-is-my-highest-priority-and-performance-is-second-priority"

and

"performance-is-my-highest-priority-and-flexibility-is-second-priority"

and when a user selects the latter, they should be prompted with

"Are you really sure you want to use virtualization instead of bare metal?"

Sigh. We can only wish.
Dan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 16:29:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 16:29:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyQBa-0000iH-IV; Mon, 06 Aug 2012 16:29:42 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1SyQBZ-0000hl-Ih
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 16:29:41 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-3.tower-27.messagelabs.com!1344270575!11122877!1
X-Originating-IP: [63.239.67.10]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23392 invoked from network); 6 Aug 2012 16:29:35 -0000
Received: from emvm-gh1-uea09.nsa.gov (HELO nsa.gov) (63.239.67.10)
	by server-3.tower-27.messagelabs.com with SMTP;
	6 Aug 2012 16:29:35 -0000
X-TM-IMSS-Message-ID: <7b7220370002ee6e@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov
	([63.239.67.10]) with ESMTP (TREND IMSS SMTP Service 7.1) id
	7b7220370002ee6e ; Mon, 6 Aug 2012 12:30:01 -0400
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id q76GTVVB019231; 
	Mon, 6 Aug 2012 12:29:31 -0400
Message-ID: <501FF0EB.1000900@tycho.nsa.gov>
Date: Mon, 06 Aug 2012 12:29:31 -0400
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Organization: National Security Agency
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120717 Thunderbird/14.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
	<1344263550-3941-17-git-send-email-dgdegra@tycho.nsa.gov>
	<501FFE560200007800092FBA@nat28.tlf.novell.com>
In-Reply-To: <501FFE560200007800092FBA@nat28.tlf.novell.com>
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 16/18] arch/x86: use XSM hooks for
 get_pg_owner access checks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/06/2012 11:26 AM, Jan Beulich wrote:
>>>> On 06.08.12 at 16:32, Daniel De Graaf <dgdegra@tycho.nsa.gov> wrote:
>> --- a/xen/arch/x86/mm.c
>> +++ b/xen/arch/x86/mm.c
>> @@ -2882,11 +2882,6 @@ static struct domain *get_pg_owner(domid_t domid)
>>          pg_owner = rcu_lock_domain(dom_io);
>>          break;
>>      case DOMID_XEN:
>> -        if ( !IS_PRIV(curr) )
>> -        {
>> -            MEM_LOG("Cannot set foreign dom");
>> -            break;
>> -        }
>>          pg_owner = rcu_lock_domain(dom_xen);
>>          break;
>>      default:
>> @@ -2895,12 +2890,6 @@ static struct domain *get_pg_owner(domid_t domid)
>>              MEM_LOG("Unknown domain '%u'", domid);
>>              break;
>>          }
>> -        if ( !IS_PRIV_FOR(curr, pg_owner) )
>> -        {
>> -            MEM_LOG("Cannot set foreign dom");
>> -            rcu_unlock_domain(pg_owner);
>> -            pg_owner = NULL;
>> -        }
>>          break;
>>      }
>>  
>> @@ -3008,6 +2997,13 @@ int do_mmuext_op(
>>          goto out;
>>      }
>>  
>> +    rc = xsm_mmuext_op(d, pg_owner);
> 
> Given the above, this could be
> 
> xsm_mmuext_op(dom0, DOMID_{IO,XEN});
> 
> yet ...
> 
>> +    if ( rc )
>> +    {
>> +        rcu_unlock_domain(pg_owner);
>> +        goto out;
>> +    }
>> +
>>      for ( i = 0; i < count; i++ )
>>      {
>>          if ( hypercall_preempt_check() )
>> @@ -3483,11 +3479,6 @@ int do_mmu_update(
>>              rc = -EINVAL;
>>              goto out;
>>          }
>> -        if ( !IS_PRIV_FOR(d, pt_owner) )
>> -        {
>> -            rc = -ESRCH;
>> -            goto out;
>> -        }
>>      }
>>  
>>      if ( (pg_owner = get_pg_owner((uint16_t)foreigndom)) == NULL )
>> @@ -3643,7 +3634,7 @@ int do_mmu_update(
>>              mfn = req.ptr >> PAGE_SHIFT;
>>              gpfn = req.val;
>>  
>> -            rc = xsm_mmu_machphys_update(d, mfn);
>> +            rc = xsm_mmu_machphys_update(d, pg_owner, mfn);
>>              if ( rc )
>>                  break;
>>  
>> --- a/xen/include/xsm/dummy.h
>> +++ b/xen/include/xsm/dummy.h
>> @@ -803,19 +803,35 @@ static XSM_DEFAULT(int, domain_memory_map) (struct domain *d)
>>  }
>>  
>>  static XSM_DEFAULT(int, mmu_normal_update) (struct domain *d, struct domain *t,
>> -                                    struct domain *f, intpte_t fpte)
>> +                                            struct domain *f, intpte_t fpte)
>>  {
>> +    if ( d != t && !IS_PRIV_FOR(d, t) )
>> +        return -EPERM;
>> +    if ( d != f && !IS_PRIV_FOR(d, f) )
>> +        return -EPERM;
>>      return 0;
>>  }
>>  
>> -static XSM_DEFAULT(int, mmu_machphys_update) (struct domain *d, unsigned long mfn)
>> +static XSM_DEFAULT(int, mmu_machphys_update) (struct domain *d, struct domain *f,
>> +                                              unsigned long mfn)
>>  {
>> +    if ( d != f && !IS_PRIV_FOR(d, f) )
>> +        return -EPERM;
>> +    return 0;
>> +}
>> +
>> +static XSM_DEFAULT(int, mmuext_op) (struct domain *d, struct domain *f)
>> +{
>> +    if ( d != f && !IS_PRIV_FOR(d, f) )
>> +        return -EPERM;
> 
> ... Dom0 is neither privileged for DOM_IO nor for DOM_XEN.

Actually, it is. IS_PRIV_FOR returns true for any domain when called from an
IS_PRIV domain.

> 
>>      return 0;
>>  }
>>  
>>  static XSM_DEFAULT(int, update_va_mapping) (struct domain *d, struct domain *f, 
>>                                                              l1_pgentry_t pte)
>>  {
>> +    if ( d != f && !IS_PRIV_FOR(d, f) )
>> +        return -EPERM;
>>      return 0;
>>  }
>>  
> 
> Didn't check the other cases in any detail.
> 
> Jan
> 


-- 
Daniel De Graaf
National Security Agency

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

>> +        return -EPERM;
>> +    if ( d != f && !IS_PRIV_FOR(d, f) )
>> +        return -EPERM;
>>      return 0;
>>  }
>>  
>> -static XSM_DEFAULT(int, mmu_machphys_update) (struct domain *d, unsigned long mfn)
>> +static XSM_DEFAULT(int, mmu_machphys_update) (struct domain *d, struct domain *f,
>> +                                              unsigned long mfn)
>>  {
>> +    if ( d != f && !IS_PRIV_FOR(d, f) )
>> +        return -EPERM;
>> +    return 0;
>> +}
>> +
>> +static XSM_DEFAULT(int, mmuext_op) (struct domain *d, struct domain *f)
>> +{
>> +    if ( d != f && !IS_PRIV_FOR(d, f) )
>> +        return -EPERM;
> 
> ... Dom0 is neither privileged for DOM_IO nor for DOM_XEN.

Actually, it is. IS_PRIV_FOR returns true for any domain when called from an
IS_PRIV domain.

> 
>>      return 0;
>>  }
>>  
>>  static XSM_DEFAULT(int, update_va_mapping) (struct domain *d, struct domain *f, 
>>                                                              l1_pgentry_t pte)
>>  {
>> +    if ( d != f && !IS_PRIV_FOR(d, f) )
>> +        return -EPERM;
>>      return 0;
>>  }
>>  
> 
> Didn't check the other cases in any detail.
> 
> Jan
> 


-- 
Daniel De Graaf
National Security Agency

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 16:38:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 16:38:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyQKB-00018a-P2; Mon, 06 Aug 2012 16:38:35 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1SyQKA-00018T-5u
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 16:38:34 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-8.tower-27.messagelabs.com!1344271107!11054651!1
X-Originating-IP: [63.239.67.9]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5966 invoked from network); 6 Aug 2012 16:38:27 -0000
Received: from emvm-gh1-uea08.nsa.gov (HELO nsa.gov) (63.239.67.9)
	by server-8.tower-27.messagelabs.com with SMTP;
	6 Aug 2012 16:38:27 -0000
X-TM-IMSS-Message-ID: <7b7a6ffb0002deb7@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov ([63.239.67.9])
	with ESMTP (TREND IMSS SMTP Service 7.1) id 7b7a6ffb0002deb7 ;
	Mon, 6 Aug 2012 12:38:33 -0400
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id q76GcMuG019718; 
	Mon, 6 Aug 2012 12:38:23 -0400
Message-ID: <501FF2FE.8040603@tycho.nsa.gov>
Date: Mon, 06 Aug 2012 12:38:22 -0400
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Organization: National Security Agency
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120717 Thunderbird/14.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
	<1344263550-3941-9-git-send-email-dgdegra@tycho.nsa.gov>
	<501FF9D50200007800092F76@nat28.tlf.novell.com>
	<501FE068.1090404@tycho.nsa.gov>
	<502003DD0200007800093011@nat28.tlf.novell.com>
In-Reply-To: <502003DD0200007800093011@nat28.tlf.novell.com>
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 08/18] xen: Add DOMID_SELF support to
	rcu_lock_domain_by_id
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/06/2012 11:50 AM, Jan Beulich wrote:
>>>> On 06.08.12 at 17:19, Daniel De Graaf <dgdegra@tycho.nsa.gov> wrote:
>> On 08/06/2012 11:07 AM, Jan Beulich wrote:
>>>>>> On 06.08.12 at 16:32, Daniel De Graaf <dgdegra@tycho.nsa.gov> wrote:
>>>> Callers that want to prevent use of DOMID_SELF already need to ensure
>>>> the calling domain does not pass its own domain ID. This removes the
>>>> need for the caller to manually support DOMID_SELF, which many already
>>>> do.
>>>
>>> I'm not really sure this is correct. At the very least it changes the
>>> return value of rcu_lock_remote_target_domain_by_id() when
>>> called with DOMID_SELF (from -ESRCH to -EPERM).
>>
>> This series ends up eliminating that function in patch #18, so that
>> part is taken care of.
> 
> But may, in case of problems, then not be fully bi-sectable.

Do we depend on the exact error return codes in non-error code paths?
I would think most attempts to bisect would work just fine as the
function will still be returning an error. I'm not sure ESRCH is
really the best error to return here anyway: the problem is not that
a domain with my_own_domid or DOMID_SELF couldn't be found, it's that
you can't specify that domain for this operation.
 
>>> I'm also not convinced that a distinction between a domain knowing
>>> its ID and one passing DOMID_SELF isn't/can't be useful. That of
>>> course depends on whether the ID can be fully hidden from a guest
>>> (obviously pure HVM guests would never know their ID, but then
>>> again they also would never pass DOMID_SELF anywhere; it might
>>> be, however, that they could get the latter passed on their behalf
>>> e.g. from some emulation function).
>>
>> I don't think we can (or want to) make it impossible for a guest to find
>> out its own domain ID. I agree that the distinction between DOMID_SELF and
>> my_own_domid can be a useful distinction in some cases. Most of those cases
>> in Xen that I have seen already handle this at the caller.
>>
>> Another solution here is to create a function rcu_lock_domain_by_any_id that
>> is identical to rcu_lock_domain_by_id except for handling DOMID_SELF.
> 
> Maybe. I'd like to see Keir's position regarding all of the details
> here anyway.
> 
> Jan
> 
> 
> 


-- 
Daniel De Graaf
National Security Agency

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 17:06:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 17:06:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyQkq-0001iQ-GQ; Mon, 06 Aug 2012 17:06:08 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jinsong.liu@intel.com>) id 1SyQkp-0001iL-6I
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 17:06:07 +0000
Received: from [85.158.139.83:45967] by server-11.bemta-5.messagelabs.com id
	E9/7B-20400-E79FF105; Mon, 06 Aug 2012 17:06:06 +0000
X-Env-Sender: jinsong.liu@intel.com
X-Msg-Ref: server-14.tower-182.messagelabs.com!1344272765!25785753!1
X-Originating-IP: [192.55.52.88]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjg4ID0+IDMwNjMwNQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18170 invoked from network); 6 Aug 2012 17:06:05 -0000
Received: from mga01.intel.com (HELO mga01.intel.com) (192.55.52.88)
	by server-14.tower-182.messagelabs.com with SMTP;
	6 Aug 2012 17:06:05 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga101.fm.intel.com with ESMTP; 06 Aug 2012 10:06:04 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.71,315,1320652800"; d="scan'208";a="193824015"
Received: from fmsmsx107.amr.corp.intel.com ([10.19.9.54])
	by fmsmga001.fm.intel.com with ESMTP; 06 Aug 2012 10:06:04 -0700
Received: from fmsmsx102.amr.corp.intel.com (10.19.9.53) by
	FMSMSX107.amr.corp.intel.com (10.19.9.54) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Mon, 6 Aug 2012 10:06:03 -0700
Received: from shsmsx102.ccr.corp.intel.com (10.239.4.154) by
	FMSMSX102.amr.corp.intel.com (10.19.9.53) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Mon, 6 Aug 2012 10:06:03 -0700
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.82]) by
	SHSMSX102.ccr.corp.intel.com ([169.254.2.196]) with mapi id
	14.01.0355.002; Tue, 7 Aug 2012 01:06:01 +0800
From: "Liu, Jinsong" <jinsong.liu@intel.com>
To: Jan Beulich <JBeulich@suse.com>, Ian Campbell <Ian.Campbell@citrix.com>,
	Keir Fraser <keir.xen@gmail.com>
Thread-Topic: [Xen-devel] Xen 4.2 TODO / Release Plan
Thread-Index: AQHNcWTm/VoC38+Rs0qPrUzcitcAR5dNAUnQ
Date: Mon, 6 Aug 2012 17:06:01 +0000
Message-ID: <DE8DF0795D48FD4CA783C40EC82923352D6748@SHSMSX101.ccr.corp.intel.com>
References: <1343988573.21372.45.camel@zakaz.uk.xensource.com>
	<CC416661.3A655%keir.xen@gmail.com>
	<501BC799020000780009278B@nat28.tlf.novell.com>
In-Reply-To: <501BC799020000780009278B@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.2 TODO / Release Plan
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jan Beulich wrote:
>>>> On 03.08.12 at 12:28, Keir Fraser <keir.xen@gmail.com> wrote:
>> On 03/08/2012 11:09, "Ian Campbell" <Ian.Campbell@citrix.com> wrote:
>> 
>>> On Mon, 2012-07-30 at 09:30 +0100, Ian Campbell wrote:
>>>>     * vMCE save/restore changes, to simplify migration 4.2->4.3
>>>>      with new vMCE in 4.3. (Jinsong Liu, Jan Beulich)
>>> 
>>> Where are we with this?
>>> 
>>> Is it still a viable candidate for 4.2, now that we have reached rc1
>>> (almost 2)?
>> 
>> Didn't we already take the trivial patch that will ease the
>> transition to 
>> 4.3?
> 
> We took one necessary patch, but I think at least the second
> one of the recently posted series would also be needed. And
> the really important patch for migration forward compatibility
> was patch 5 in that series, yet I wouldn't want to take patches
> 3 and 4 for 4.2.
> 
> In any case, the series is in need of resubmission anyway.
> Perhaps (if that's possible, I didn't check in too much detail)
> reordering patch 5 could be done at once.
> 

Patch 2 has been updated according to Jan's comments.

As for patch 5, it cannot be reordered without patch 3 being checked in (patch 5 saves/restores MCi_CTL2, an MSR newly added by patch 3). In fact, we could drop patch 5 entirely and not add MCi_CTL2: this MSR has nothing to do with the vmce logic itself, and the only reason we add it in the new vmce is a (very small) performance benefit, so it is fine to omit it and remove patch 5. Another benefit of not adding MCi_CTL2 is that it avoids a difference between the Intel and AMD code. Hence I think keeping the current vmce (without implementing MCi_CTL2) is an acceptable approach. Your opinion?

Thanks,
Jinsong
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 17:17:18 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 17:17:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyQvM-00028y-QN; Mon, 06 Aug 2012 17:17:00 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SyQvK-00028t-Tz
	for xen-devel@lists.xensource.com; Mon, 06 Aug 2012 17:16:59 +0000
Received: from [85.158.139.83:44511] by server-3.bemta-5.messagelabs.com id
	BB/E9-03367-80CFF105; Mon, 06 Aug 2012 17:16:56 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-16.tower-182.messagelabs.com!1344273415!23182395!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgxNjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10589 invoked from network); 6 Aug 2012 17:16:55 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 17:16:55 -0000
X-IronPort-AV: E=Sophos;i="4.77,720,1336348800"; d="scan'208";a="13871441"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 17:16:54 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Mon, 6 Aug 2012 18:16:54 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1SyQvG-0006iv-Bd;
	Mon, 06 Aug 2012 17:16:54 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1SyQvG-00022F-74;
	Mon, 06 Aug 2012 18:16:54 +0100
To: xen-devel@lists.xensource.com
Message-ID: <osstest-13564-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 6 Aug 2012 18:16:54 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13564: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13564 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13564/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl          11 guest-localmigrate        fail REGR. vs. 13536
 test-i386-i386-xl            11 guest-localmigrate        fail REGR. vs. 13536
 test-amd64-i386-xl-multivcpu 11 guest-localmigrate        fail REGR. vs. 13536
 test-amd64-i386-xl-credit2   11 guest-localmigrate        fail REGR. vs. 13536
 test-amd64-i386-xl           11 guest-localmigrate        fail REGR. vs. 13536
 test-amd64-amd64-xl-winxpsp3  9 guest-localmigrate        fail REGR. vs. 13536
 test-amd64-i386-xl-win7-amd64  9 guest-localmigrate       fail REGR. vs. 13536
 test-i386-i386-xl-winxpsp3    9 guest-localmigrate        fail REGR. vs. 13536
 test-amd64-amd64-xl-win7-amd64  9 guest-localmigrate      fail REGR. vs. 13536
 test-amd64-i386-xl-winxpsp3-vcpus1  9 guest-localmigrate  fail REGR. vs. 13536
 test-i386-i386-xl-win         9 guest-localmigrate        fail REGR. vs. 13536
 test-amd64-i386-xl-win-vcpus1  9 guest-localmigrate       fail REGR. vs. 13536
 test-amd64-amd64-xl-win       9 guest-localmigrate        fail REGR. vs. 13536

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13536
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13536
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13536
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13536
 test-amd64-amd64-xl-sedf     11 guest-localmigrate        fail REGR. vs. 13536

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass

version targeted for testing:
 xen                  353bc0801b11
baseline version:
 xen                  3d17148e465c

------------------------------------------------------------
People who touched revisions under test:
  Christoph Egger <Christoph.Egger@amd.com>
  Dario Faggioli <dario.faggioli@citrix.com>
  David Vrabel <david.vrabel@citrix.com>
  Fabio Fantoni <fabio.fantoni@heliman.it>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Campbell <ian.campbell@eu.citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Olaf Hering <olaf@aepfle.de>
  Roger Pau Monne <roger.pau@citrix.com>
  Santosh Jodh <santosh.jodh@citrix.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-i386-i386-xl                                            fail    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   fail    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 835 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 17:17:18 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 17:17:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyQvM-00028y-QN; Mon, 06 Aug 2012 17:17:00 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SyQvK-00028t-Tz
	for xen-devel@lists.xensource.com; Mon, 06 Aug 2012 17:16:59 +0000
Received: from [85.158.139.83:44511] by server-3.bemta-5.messagelabs.com id
	BB/E9-03367-80CFF105; Mon, 06 Aug 2012 17:16:56 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-16.tower-182.messagelabs.com!1344273415!23182395!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgxNjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10589 invoked from network); 6 Aug 2012 17:16:55 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 17:16:55 -0000
X-IronPort-AV: E=Sophos;i="4.77,720,1336348800"; d="scan'208";a="13871441"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 17:16:54 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Mon, 6 Aug 2012 18:16:54 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1SyQvG-0006iv-Bd;
	Mon, 06 Aug 2012 17:16:54 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1SyQvG-00022F-74;
	Mon, 06 Aug 2012 18:16:54 +0100
To: xen-devel@lists.xensource.com
Message-ID: <osstest-13564-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 6 Aug 2012 18:16:54 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13564: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13564 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13564/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl          11 guest-localmigrate        fail REGR. vs. 13536
 test-i386-i386-xl            11 guest-localmigrate        fail REGR. vs. 13536
 test-amd64-i386-xl-multivcpu 11 guest-localmigrate        fail REGR. vs. 13536
 test-amd64-i386-xl-credit2   11 guest-localmigrate        fail REGR. vs. 13536
 test-amd64-i386-xl           11 guest-localmigrate        fail REGR. vs. 13536
 test-amd64-amd64-xl-winxpsp3  9 guest-localmigrate        fail REGR. vs. 13536
 test-amd64-i386-xl-win7-amd64  9 guest-localmigrate       fail REGR. vs. 13536
 test-i386-i386-xl-winxpsp3    9 guest-localmigrate        fail REGR. vs. 13536
 test-amd64-amd64-xl-win7-amd64  9 guest-localmigrate      fail REGR. vs. 13536
 test-amd64-i386-xl-winxpsp3-vcpus1  9 guest-localmigrate  fail REGR. vs. 13536
 test-i386-i386-xl-win         9 guest-localmigrate        fail REGR. vs. 13536
 test-amd64-i386-xl-win-vcpus1  9 guest-localmigrate       fail REGR. vs. 13536
 test-amd64-amd64-xl-win       9 guest-localmigrate        fail REGR. vs. 13536

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13536
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13536
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13536
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13536
 test-amd64-amd64-xl-sedf     11 guest-localmigrate        fail REGR. vs. 13536

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass

version targeted for testing:
 xen                  353bc0801b11
baseline version:
 xen                  3d17148e465c

------------------------------------------------------------
People who touched revisions under test:
  Christoph Egger <Christoph.Egger@amd.com>
  Dario Faggioli <dario.faggioli@citrix.com>
  David Vrabel <david.vrabel@citrix.com>
  Fabio Fantoni <fabio.fantoni@heliman.it>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Campbell <ian.campbell@eu.citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Olaf Hering <olaf@aepfle.de>
  Roger Pau Monne <roger.pau@citrix.com>
  Santosh Jodh <santosh.jodh@citrix.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-i386-i386-xl                                            fail    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   fail    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 835 lines long.)


From xen-devel-bounces@lists.xen.org Mon Aug 06 17:39:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 17:39:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyRGw-0002S5-Q6; Mon, 06 Aug 2012 17:39:18 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1SyRGu-0002Rr-9k
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 17:39:16 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-14.tower-27.messagelabs.com!1344274747!5788572!1
X-Originating-IP: [81.169.146.162]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MiA9PiAzNTgxNjc=\n,sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MiA9PiAzNTgxNjc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4959 invoked from network); 6 Aug 2012 17:39:07 -0000
Received: from mo-p00-ob.rzone.de (HELO mo-p00-ob.rzone.de) (81.169.146.162)
	by server-14.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Aug 2012 17:39:07 -0000
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+zrwiavkK6tmQaLfmwtM48/ll3M7pEQbP
X-RZG-CLASS-ID: mo00
Received: from probook.site
	(dslb-084-057-085-232.pools.arcor-ip.net [84.57.85.232])
	by smtp.strato.de (jorabe mo79) (RZmta 30.7 DYNA|AUTH)
	with (DHE-RSA-AES256-SHA encrypted) ESMTPA id m076b7o76EfoLl
	for <xen-devel@lists.xen.org>; Mon, 6 Aug 2012 19:39:06 +0200 (CEST)
Received: by probook.site (Postfix, from userid 1000)
	id 11A7418639; Mon,  6 Aug 2012 19:39:05 +0200 (CEST)
Date: Mon, 6 Aug 2012 19:39:05 +0200
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xen.org
Message-ID: <20120806173905.GA26336@aepfle.de>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="lrZ03NoBR/3+SXJZ"
Content-Disposition: inline
User-Agent: Mutt/1.5.21.rev5555 (2012-07-20)
Subject: [Xen-devel] vif backend configuration times out
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--lrZ03NoBR/3+SXJZ
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline


With current xen-unstable 25733:353bc0801b11 the attached hvm.cfg no longer
starts with a SLES11SP2 dom0 kernel, but it does start if I run a 3.5 pvops
dom0 kernel. I have no modifications other than the stubdom -j patch.

The output from this command is attached:
xl -vvvv create -d -f /root/xenpaging/sles11sp2_full_xenpaging_local.cfg 2>&1 | tee xl-create-`uname -r`.txt &

Any ideas how to fix this timeout error?

Olaf

...
libxl: debug: libxl_event.c:457:watchfd_callback: watch w=0x62bf88 wpath=/local/domain/0/backend/vif/2/0/state token=3/1: event epath=/local/domain/0/backend/vif/2/0/state
libxl: debug: libxl_event.c:600:devstate_watch_callback: backend /local/domain/0/backend/vif/2/0/state wanted state 2 still waiting state 1
libxl: debug: libxl_event.c:614:devstate_timeout: backend /local/domain/0/backend/vif/2/0/state wanted state 2  timed out
libxl: debug: libxl_event.c:549:libxl__ev_xswatch_deregister: watch w=0x62bf88 wpath=/local/domain/0/backend/vif/2/0/state token=3/1: deregister slotnum=3
libxl: debug: libxl_event.c:561:libxl__ev_xswatch_deregister: watch w=0x62bf88: deregister unregistered
libxl: error: libxl_device.c:858:device_backend_callback: unable to disconnect device with path /local/domain/0/backend/vif/2/0
libxl: error: libxl_create.c:1070:domcreate_attach_pci: unable to add nic devices
...
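
For reference, the numeric states in these libxl messages follow the
XenbusState enumeration from xen's public io/xenbus.h header. A minimal
decoder (a hypothetical helper, not part of libxl, assuming the standard
numbering) makes the two timeouts readable: on connect the backend never
leaves Initialising (1) to reach InitWait (2), and on teardown it sticks in
Closing (5) instead of reaching Closed (6):

```python
# Hypothetical helper: decode the numeric XenbusState values that appear in
# libxl's devstate messages, assuming the standard enumeration from
# xen/include/public/io/xenbus.h.
XENBUS_STATES = {
    0: "Unknown",
    1: "Initialising",
    2: "InitWait",
    3: "Initialised",
    4: "Connected",
    5: "Closing",
    6: "Closed",
}

def describe_devstate(wanted, current):
    """Render a libxl 'wanted state X still waiting state Y' pair readably."""
    return ("backend is in %s (%d), libxl is waiting for %s (%d)"
            % (XENBUS_STATES.get(current, "?"), current,
               XENBUS_STATES.get(wanted, "?"), wanted))

# The two timeouts quoted above:
print(describe_devstate(2, 1))  # connect: backend never left Initialising
print(describe_devstate(6, 5))  # teardown: backend stuck in Closing
```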

--lrZ03NoBR/3+SXJZ
Content-Type: text/plain; charset=utf-8
Content-Disposition: attachment; filename="xl-create-3.0.34-0.7-xen.txt"

WARNING: ignoring "kernel" directive for HVM guest. Use "firmware_override" instead if you really want a non-default firmware
WARNING: ignoring device_model directive.
WARNING: Use "device_model_override" instead if you really want a non-default device_model
Parsing config from /root/xenpaging/sles11sp2_full_xenpaging_local.cfg
{
    "domid": null,
    "config": {
        "c_info": {
            "type": "hvm",
            "hap": "<default>",
            "oos": "<default>",
            "ssidref": 0,
            "name": "sles11sp2_full_xenpaging_local",
            "uuid": "f0311246-c8e4-4685-8549-321cd34c8f14",
            "xsdata": {

            },
            "platformdata": {

            },
            "poolid": 0,
            "run_hotplug_scripts": "True"
        },
        "b_info": {
            "max_vcpus": 4,
            "avail_vcpus": [
                0,
                1,
                2,
                3
            ],
            "cpumap": [

            ],
            "numa_placement": "<default>",
            "tsc_mode": "default",
            "max_memkb": 1048576,
            "target_memkb": 1048576,
            "video_memkb": -1,
            "shadow_memkb": 12288,
            "rtc_timeoffset": 0,
            "localtime": "False",
            "disable_migrate": "<default>",
            "cpuid": [

            ],
            "blkdev_start": null,
            "device_model_version": null,
            "device_model_stubdomain": "<default>",
            "device_model": null,
            "device_model_ssidref": 0,
            "extra": [

            ],
            "extra_pv": [

            ],
            "extra_hvm": [

            ],
            "sched_params": {
                "sched": "unknown",
                "weight": -1,
                "cap": -1,
                "period": -1,
                "slice": -1,
                "latency": -1,
                "extratime": -1
            },
            "u": {
                "firmware": null,
                "bios": null,
                "pae": "True",
                "apic": "<default>",
                "acpi": "True",
                "acpi_s3": "<default>",
                "acpi_s4": "<default>",
                "nx": "<default>",
                "viridian": "<default>",
                "timeoffset": null,
                "hpet": "<default>",
                "vpt_align": "<default>",
                "timer_mode": null,
                "nested_hvm": "<default>",
                "incr_generationid": "False",
                "nographic": "<default>",
                "vga": {
                    "kind": "cirrus"
                },
                "vnc": {
                    "enable": "True",
                    "listen": null,
                    "passwd": null,
                    "display": 0,
                    "findunused": "True"
                },
                "keymap": "de",
                "sdl": {
                    "enable": "<default>",
                    "opengl": "<default>",
                    "display": null,
                    "xauthority": null
                },
                "spice": {
                    "enable": "<default>",
                    "port": 0,
                    "tls_port": 0,
                    "host": null,
                    "disable_ticketing": "<default>",
                    "passwd": null,
                    "agent_mouse": "<default>"
                },
                "gfx_passthru": "<default>",
                "serial": "pty",
                "boot": "d",
                "usb": "<default>",
                "usbdevice": null,
                "soundhw": null,
                "xen_platform_pci": "<default>"
            }
        },
        "disks": [
            {
                "backend_domid": 0,
                "pdev_path": "/vmimages/vdisk-sles11sp2_full_xenpaging_local-disk0",
                "vdev": "hda",
                "backend": "unknown",
                "format": "raw",
                "script": null,
                "removable": 0,
                "readwrite": 1,
                "is_cdrom": 0
            },
            {
                "backend_domid": 0,
                "pdev_path": "/bax.arch.suse.de-olaf_xenimages/olaf/bax-olaf_xenimages/olaf_xenimages/SLES-11-SP2-DVD-x86_64-GMC-DVD1.iso",
                "vdev": "hdc",
                "backend": "unknown",
                "format": "raw",
                "script": null,
                "removable": 1,
                "readwrite": 0,
                "is_cdrom": 1
            }
        ],
        "nics": [
            {
                "backend_domid": 0,
                "devid": 0,
                "mtu": 0,
                "model": null,
                "mac": "00:00:00:00:00:00",
                "ip": null,
                "bridge": "br0",
                "ifname": null,
                "script": null,
                "nictype": null,
                "rate_bytes_per_interval": 0,
                "rate_interval_usecs": 0
            }
        ],
        "pcidevs": [

        ],
        "vfbs": [

        ],
        "vkbs": [

        ],
        "on_poweroff": "destroy",
        "on_reboot": "restart",
        "on_watchdog": "destroy",
        "on_crash": "destroy"
    }
}

libxl: debug: libxl_create.c:1147:do_domain_create: ao 0x623820: create: how=(nil) callback=(nil) poller=0x629010
libxl: debug: libxl_device.c:229:libxl__device_disk_set_backend: Disk vdev=hda spec.backend=unknown
libxl: debug: libxl_device.c:175:disk_try_backend: Disk vdev=hda, backend phy unsuitable as phys path not a block device
libxl: debug: libxl_device.c:184:disk_try_backend: Disk vdev=hda, backend tap unsuitable because blktap not available
libxl: debug: libxl_device.c:265:libxl__device_disk_set_backend: Disk vdev=hda, using backend qdisk
libxl: debug: libxl_device.c:229:libxl__device_disk_set_backend: Disk vdev=hdc spec.backend=unknown
libxl: debug: libxl_device.c:175:disk_try_backend: Disk vdev=hdc, backend phy unsuitable as phys path not a block device
libxl: debug: libxl_device.c:184:disk_try_backend: Disk vdev=hdc, backend tap unsuitable because blktap not available
libxl: debug: libxl_device.c:265:libxl__device_disk_set_backend: Disk vdev=hdc, using backend qdisk
libxl: debug: libxl_create.c:678:initiate_domain_create: running bootloader
libxl: debug: libxl_bootloader.c:321:libxl__bootloader_run: not a PV domain, skipping bootloader
libxl: debug: libxl_event.c:561:libxl__ev_xswatch_deregister: watch w=0x627330: deregister unregistered
libxl: debug: libxl_numa.c:435:libxl__get_numa_candidate: New best NUMA placement candidate found: nr_nodes=2, nr_cpus=24, nr_vcpus=28, free_memkb=1522
libxl: detail: libxl_dom.c:192:numa_place_domain: NUMA placement candidate with 2 nodes, 24 cpus and 1522 KB free selected
xc: detail: elf_parse_binary: phdr: paddr=0x100000 memsz=0x9c324
xc: detail: elf_parse_binary: memory: 0x100000 -> 0x19c324
xc: info: VIRTUAL MEMORY ARRANGEMENT:
  Loader:        0000000000100000->000000000019c324
  TOTAL:         0000000000000000->000000003f800000
  ENTRY ADDRESS: 0000000000100000
xc: info: PHYSICAL MEMORY ALLOCATION:
  4KB PAGES: 0x0000000000000200
  2MB PAGES: 0x00000000000001fb
  1GB PAGES: 0x0000000000000000
xc: detail: elf_load_binary: phdr 0 at 0x0x7fa70f484000 -> 0x0x7fa70f517194
libxl: debug: libxl_device.c:229:libxl__device_disk_set_backend: Disk vdev=hda spec.backend=qdisk
libxl: debug: libxl_device.c:229:libxl__device_disk_set_backend: Disk vdev=hdc spec.backend=qdisk
libxl: debug: libxl_dm.c:1142:libxl__spawn_local_dm: Spawning device-model /usr/lib/xen/bin/qemu-dm with arguments:
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   /usr/lib/xen/bin/qemu-dm
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -d
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   2
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -domain-name
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   sles11sp2_full_xenpaging_local
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -vnc
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   127.0.0.1:0
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -vncunused
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -k
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   de
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -serial
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   pty
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -videoram
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   8
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -boot
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   d
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -acpi
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -vcpus
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   4
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -vcpu_avail
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   0x0f
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -net
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   nic,vlan=0,macaddr=00:16:3e:10:dd:52,model=rtl8139
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -net
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   tap,vlan=0,ifname=vif2.0-emu,bridge=br0,script=no,downscript=no
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -M
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   xenfv
libxl: debug: libxl_event.c:512:libxl__ev_xswatch_register: watch w=0x627568 wpath=/local/domain/0/device-model/2/state token=3/0: register slotnum=3
libxl: debug: libxl_create.c:1160:do_domain_create: ao 0x623820: inprogress: poller=0x629010, flags=i
libxl: debug: libxl_event.c:457:watchfd_callback: watch w=0x627568 wpath=/local/domain/0/device-model/2/state token=3/0: event epath=/local/domain/0/device-model/2/state
libxl: debug: libxl_event.c:457:watchfd_callback: watch w=0x627568 wpath=/local/domain/0/device-model/2/state token=3/0: event epath=/local/domain/0/device-model/2/state
libxl: debug: libxl_event.c:549:libxl__ev_xswatch_deregister: watch w=0x627568 wpath=/local/domain/0/device-model/2/state token=3/0: deregister slotnum=3
libxl: debug: libxl_event.c:561:libxl__ev_xswatch_deregister: watch w=0x627568: deregister unregistered
libxl: debug: libxl_event.c:512:libxl__ev_xswatch_register: watch w=0x62bf88 wpath=/local/domain/0/backend/vif/2/0/state token=3/1: register slotnum=3
libxl: debug: libxl_event.c:457:watchfd_callback: watch w=0x62bf88 wpath=/local/domain/0/backend/vif/2/0/state token=3/1: event epath=/local/domain/0/backend/vif/2/0/state
libxl: debug: libxl_event.c:600:devstate_watch_callback: backend /local/domain/0/backend/vif/2/0/state wanted state 2 still waiting state 1
libxl: debug: libxl_event.c:614:devstate_timeout: backend /local/domain/0/backend/vif/2/0/state wanted state 2  timed out
libxl: debug: libxl_event.c:549:libxl__ev_xswatch_deregister: watch w=0x62bf88 wpath=/local/domain/0/backend/vif/2/0/state token=3/1: deregister slotnum=3
libxl: debug: libxl_event.c:561:libxl__ev_xswatch_deregister: watch w=0x62bf88: deregister unregistered
libxl: error: libxl_device.c:858:device_backend_callback: unable to disconnect device with path /local/domain/0/backend/vif/2/0
libxl: error: libxl_create.c:1070:domcreate_attach_pci: unable to add nic devices
libxl: debug: libxl_dm.c:1248:libxl__destroy_device_model: Device Model signaled
libxl: debug: libxl_event.c:512:libxl__ev_xswatch_register: watch w=0x62d218 wpath=/local/domain/0/backend/vif/2/0/state token=3/2: register slotnum=3
libxl: debug: libxl_event.c:457:watchfd_callback: watch w=0x62d218 wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event epath=/local/domain/0/backend/vif/2/0/state
libxl: debug: libxl_event.c:600:devstate_watch_callback: backend /local/domain/0/backend/vif/2/0/state wanted state 6 still waiting state 5
libxl: debug: libxl_event.c:614:devstate_timeout: backend /local/domain/0/backend/vif/2/0/state wanted state 6  timed out
libxl: debug: libxl_event.c:549:libxl__ev_xswatch_deregister: watch w=0x62d218 wpath=/local/domain/0/backend/vif/2/0/state token=3/2: deregister slotnum=3
libxl: debug: libxl_event.c:561:libxl__ev_xswatch_deregister: watch w=0x62d218: deregister unregistered
libxl: error: libxl_device.c:858:device_backend_callback: unable to disconnect device with path /local/domain/0/backend/vif/2/0
libxl: error: libxl.c:1463:devices_destroy_cb: libxl__devices_destroy failed for 2
libxl: debug: libxl_event.c:1497:libxl__ao_complete: ao 0x623820: complete, rc=-3
libxl: debug: libxl_event.c:1469:libxl__ao__destroy: ao 0x623820: destroy
xc: debug: hypercall buffer: total allocations:680 total releases:680
xc: debug: hypercall buffer: current allocations:0 maximum allocations:4
xc: debug: hypercall buffer: cache current size:4
xc: debug: hypercall buffer: cache hits:672 misses:4 toobig:4

--lrZ03NoBR/3+SXJZ
Content-Type: text/plain; charset=utf-8
Content-Disposition: attachment; filename="xl-create-3.5.0-3.home_olh_kernel_sles11sp1.1-kernel-linux-3_5.txt"

WARNING: ignoring "kernel" directive for HVM guest. Use "firmware_override" instead if you really want a non-default firmware
WARNING: ignoring device_model directive.
WARNING: Use "device_model_override" instead if you really want a non-default device_model
Parsing config from /root/xenpaging/sles11sp2_full_xenpaging_local.cfg
{
    "domid": null,
    "config": {
        "c_info": {
            "type": "hvm",
            "hap": "<default>",
            "oos": "<default>",
            "ssidref": 0,
            "name": "sles11sp2_full_xenpaging_local",
            "uuid": "f0311246-c8e4-4685-8549-321cd34c8f14",
            "xsdata": {

            },
            "platformdata": {

            },
            "poolid": 0,
            "run_hotplug_scripts": "True"
        },
        "b_info": {
            "max_vcpus": 4,
            "avail_vcpus": [
                0,
                1,
                2,
                3
            ],
            "cpumap": [

            ],
            "numa_placement": "<default>",
            "tsc_mode": "default",
            "max_memkb": 1048576,
            "target_memkb": 1048576,
            "video_memkb": -1,
            "shadow_memkb": 12288,
            "rtc_timeoffset": 0,
            "localtime": "False",
            "disable_migrate": "<default>",
            "cpuid": [

            ],
            "blkdev_start": null,
            "device_model_version": null,
            "device_model_stubdomain": "<default>",
            "device_model": null,
            "device_model_ssidref": 0,
            "extra": [

            ],
            "extra_pv": [

            ],
            "extra_hvm": [

            ],
            "sched_params": {
                "sched": "unknown",
                "weight": -1,
                "cap": -1,
                "period": -1,
                "slice": -1,
                "latency": -1,
                "extratime": -1
            },
            "u": {
                "firmware": null,
                "bios": null,
                "pae": "True",
                "apic": "<default>",
                "acpi": "True",
                "acpi_s3": "<default>",
                "acpi_s4": "<default>",
                "nx": "<default>",
                "viridian": "<default>",
                "timeoffset": null,
                "hpet": "<default>",
                "vpt_align": "<default>",
                "timer_mode": null,
                "nested_hvm": "<default>",
                "incr_generationid": "False",
                "nographic": "<default>",
                "vga": {
                    "kind": "cirrus"
                },
                "vnc": {
                    "enable": "True",
                    "listen": null,
                    "passwd": null,
                    "display": 0,
                    "findunused": "True"
                },
                "keymap": "de",
                "sdl": {
                    "enable": "<default>",
                    "opengl": "<default>",
                    "display": null,
                    "xauthority": null
                },
                "spice": {
                    "enable": "<default>",
                    "port": 0,
                    "tls_port": 0,
                    "host": null,
                    "disable_ticketing": "<default>",
                    "passwd": null,
                    "agent_mouse": "<default>"
                },
                "gfx_passthru": "<default>",
                "serial": "pty",
                "boot": "d",
                "usb": "<default>",
                "usbdevice": null,
                "soundhw": null,
                "xen_platform_pci": "<default>"
            }
        },
        "disks": [
            {
                "backend_domid": 0,
                "pdev_path": "/vmimages/vdisk-sles11sp2_full_xenpaging_local-disk0",
                "vdev": "hda",
                "backend": "unknown",
                "format": "raw",
                "script": null,
                "removable": 0,
                "readwrite": 1,
                "is_cdrom": 0
            },
            {
                "backend_domid": 0,
                "pdev_path": "/bax.arch.suse.de-olaf_xenimages/olaf/bax-olaf_xenimages/olaf_xenimages/SLES-11-SP2-DVD-x86_64-GMC-DVD1.iso",
                "vdev": "hdc",
                "backend": "unknown",
                "format": "raw",
                "script": null,
                "removable": 1,
                "readwrite": 0,
                "is_cdrom": 1
            }
        ],
        "nics": [
            {
                "backend_domid": 0,
                "devid": 0,
                "mtu": 0,
                "model": null,
                "mac": "00:00:00:00:00:00",
                "ip": null,
                "bridge": "br0",
                "ifname": null,
                "script": null,
                "nictype": null,
                "rate_bytes_per_interval": 0,
                "rate_interval_usecs": 0
            }
        ],
        "pcidevs": [

        ],
        "vfbs": [

        ],
        "vkbs": [

        ],
        "on_poweroff": "destroy",
        "on_reboot": "restart",
        "on_watchdog": "destroy",
        "on_crash": "destroy"
    }
}

libxl: debug: libxl_create.c:1147:do_domain_create: ao 0x623820: create: how=(nil) callback=(nil) poller=0x629010
libxl: debug: libxl_device.c:229:libxl__device_disk_set_backend: Disk vdev=hda spec.backend=unknown
libxl: debug: libxl_device.c:175:disk_try_backend: Disk vdev=hda, backend phy unsuitable as phys path not a block device
libxl: debug: libxl_device.c:184:disk_try_backend: Disk vdev=hda, backend tap unsuitable because blktap not available
libxl: debug: libxl_device.c:265:libxl__device_disk_set_backend: Disk vdev=hda, using backend qdisk
libxl: debug: libxl_device.c:229:libxl__device_disk_set_backend: Disk vdev=hdc spec.backend=unknown
libxl: debug: libxl_device.c:175:disk_try_backend: Disk vdev=hdc, backend phy unsuitable as phys path not a block device
libxl: debug: libxl_device.c:184:disk_try_backend: Disk vdev=hdc, backend tap unsuitable because blktap not available
libxl: debug: libxl_device.c:265:libxl__device_disk_set_backend: Disk vdev=hdc, using backend qdisk
libxl: debug: libxl_create.c:678:initiate_domain_create: running bootloader
libxl: debug: libxl_bootloader.c:321:libxl__bootloader_run: not a PV domain, skipping bootloader
libxl: debug: libxl_event.c:561:libxl__ev_xswatch_deregister: watch w=0x627330: deregister unregistered
libxl: debug: libxl_numa.c:435:libxl__get_numa_candidate: New best NUMA placement candidate found: nr_nodes=2, nr_cpus=24, nr_vcpus=28, free_memkb=1490
libxl: detail: libxl_dom.c:192:numa_place_domain: NUMA placement candidate with 2 nodes, 24 cpus and 1490 KB free selected
xc: detail: elf_parse_binary: phdr: paddr=0x100000 memsz=0x9c324
xc: detail: elf_parse_binary: memory: 0x100000 -> 0x19c324
xc: info: VIRTUAL MEMORY ARRANGEMENT:
  Loader:        0000000000100000->000000000019c324
  TOTAL:         0000000000000000->000000003f800000
  ENTRY ADDRESS: 0000000000100000
xc: info: PHYSICAL MEMORY ALLOCATION:
  4KB PAGES: 0x0000000000000200
  2MB PAGES: 0x00000000000001fb
  1GB PAGES: 0x0000000000000000
xc: detail: elf_load_binary: phdr 0 at 0x0x7f739b580000 -> 0x0x7f739b613194
libxl: debug: libxl_device.c:229:libxl__device_disk_set_backend: Disk vdev=hda spec.backend=qdisk
libxl: debug: libxl_device.c:229:libxl__device_disk_set_backend: Disk vdev=hdc spec.backend=qdisk
libxl: debug: libxl_dm.c:1142:libxl__spawn_local_dm: Spawning device-model /usr/lib/xen/bin/qemu-dm with arguments:
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   /usr/lib/xen/bin/qemu-dm
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -d
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   2
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -domain-name
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   sles11sp2_full_xenpaging_local
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -vnc
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   127.0.0.1:0
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -vncunused
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -k
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   de
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -serial
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   pty
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -videoram
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   8
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -boot
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   d
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -acpi
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -vcpus
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   4
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -vcpu_avail
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   0x0f
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -net
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   nic,vlan=0,macaddr=00:16:3e:0f:f7:4c,model=rtl8139
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -net
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   tap,vlan=0,ifname=vif2.0-emu,bridge=br0,script=no,downscript=no
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -M
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   xenfv
libxl: debug: libxl_event.c:512:libxl__ev_xswatch_register: watch w=0x627568 wpath=/local/domain/0/device-model/2/state token=3/0: register slotnum=3
libxl: debug: libxl_create.c:1160:do_domain_create: ao 0x623820: inprogress: poller=0x629010, flags=i
libxl: debug: libxl_event.c:457:watchfd_callback: watch w=0x627568 wpath=/local/domain/0/device-model/2/state token=3/0: event epath=/local/domain/0/device-model/2/state
libxl: debug: libxl_event.c:457:watchfd_callback: watch w=0x627568 wpath=/local/domain/0/device-model/2/state token=3/0: event epath=/local/domain/0/device-model/2/state
libxl: debug: libxl_event.c:549:libxl__ev_xswatch_deregister: watch w=0x627568 wpath=/local/domain/0/device-model/2/state token=3/0: deregister slotnum=3
libxl: debug: libxl_event.c:561:libxl__ev_xswatch_deregister: watch w=0x627568: deregister unregistered
libxl: debug: libxl_event.c:512:libxl__ev_xswatch_register: watch w=0x62c658 wpath=/local/domain/0/backend/vif/2/0/state token=3/1: register slotnum=3
libxl: debug: libxl_event.c:457:watchfd_callback: watch w=0x62c658 wpath=/local/domain/0/backend/vif/2/0/state token=3/1: event epath=/local/domain/0/backend/vif/2/0/state
libxl: debug: libxl_event.c:600:devstate_watch_callback: backend /local/domain/0/backend/vif/2/0/state wanted state 2 still waiting state 1
libxl: debug: libxl_event.c:457:watchfd_callback: watch w=0x62c658 wpath=/local/domain/0/backend/vif/2/0/state token=3/1: event epath=/local/domain/0/backend/vif/2/0/state
libxl: debug: libxl_event.c:596:devstate_watch_callback: backend /local/domain/0/backend/vif/2/0/state wanted state 2 ok
libxl: debug: libxl_event.c:549:libxl__ev_xswatch_deregister: watch w=0x62c658 wpath=/local/domain/0/backend/vif/2/0/state token=3/1: deregister slotnum=3
libxl: debug: libxl_event.c:561:libxl__ev_xswatch_deregister: watch w=0x62c658: deregister unregistered
libxl: debug: libxl_device.c:916:device_hotplug: calling hotplug script: /etc/xen/scripts/vif-bridge online
libxl: debug: libxl_device.c:916:device_hotplug: calling hotplug script: /etc/xen/scripts/vif-bridge add
libxl: debug: libxl_event.c:1667:libxl__ao_progress_report: ao 0x623820: progress report: ignored
libxl: debug: libxl_event.c:1497:libxl__ao_complete: ao 0x623820: complete, rc=0
libxl: debug: libxl_event.c:1469:libxl__ao__destroy: ao 0x623820: destroy
xc: debug: hypercall buffer: total allocations:582 total releases:582
xc: debug: hypercall buffer: current allocations:0 maximum allocations:4
xc: debug: hypercall buffer: cache current size:4
xc: debug: hypercall buffer: cache hits:574 misses:4 toobig:4
Daemon running with PID 5091

--lrZ03NoBR/3+SXJZ
Content-Type: text/plain; charset=utf-8
Content-Disposition: attachment; filename="sles11sp2_full_xenpaging_local.cfg"

name="sles11sp2_full_xenpaging_local"
description="None"
uuid="f0311246-c8e4-4685-8549-321cd34c8f14"
memory=1024
vcpus=4
actmem=234
on_poweroff="destroy"
on_reboot="restart"
on_crash="destroy"
localtime=0
keymap="de"

vif=[ 'bridge=br0' ]

# HVM
builder="hvm"
device_model="/usr/lib/xen/bin/qemu-dm"
kernel="/usr/lib/xen/boot/hvmloader"
boot="d"
disk=[ 'file:/vmimages/vdisk-sles11sp2_full_xenpaging_local-disk0,hda,w', 'file:/bax.arch.suse.de-olaf_xenimages/olaf/bax-olaf_xenimages/olaf_xenimages/SLES-11-SP2-DVD-x86_64-GMC-DVD1.iso,hdc:cdrom,r', ]
stdvga=0
vnc=1
vncunused=1
extid=0
acpi=1
pae=1
serial="pty"
xenpaging_extra=[ '-f', '/dev/shm/pagefile-sles11sp2_full_xenpaging_local', '-v' ]


--lrZ03NoBR/3+SXJZ
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--lrZ03NoBR/3+SXJZ--


From xen-devel-bounces@lists.xen.org Mon Aug 06 17:39:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 17:39:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyRGw-0002S5-Q6; Mon, 06 Aug 2012 17:39:18 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1SyRGu-0002Rr-9k
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 17:39:16 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-14.tower-27.messagelabs.com!1344274747!5788572!1
X-Originating-IP: [81.169.146.162]
Received: (qmail 4959 invoked from network); 6 Aug 2012 17:39:07 -0000
Received: from mo-p00-ob.rzone.de (HELO mo-p00-ob.rzone.de) (81.169.146.162)
	by server-14.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Aug 2012 17:39:07 -0000
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+zrwiavkK6tmQaLfmwtM48/ll3M7pEQbP
X-RZG-CLASS-ID: mo00
Received: from probook.site
	(dslb-084-057-085-232.pools.arcor-ip.net [84.57.85.232])
	by smtp.strato.de (jorabe mo79) (RZmta 30.7 DYNA|AUTH)
	with (DHE-RSA-AES256-SHA encrypted) ESMTPA id m076b7o76EfoLl
	for <xen-devel@lists.xen.org>; Mon, 6 Aug 2012 19:39:06 +0200 (CEST)
Received: by probook.site (Postfix, from userid 1000)
	id 11A7418639; Mon,  6 Aug 2012 19:39:05 +0200 (CEST)
Date: Mon, 6 Aug 2012 19:39:05 +0200
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xen.org
Message-ID: <20120806173905.GA26336@aepfle.de>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="lrZ03NoBR/3+SXJZ"
Content-Disposition: inline
User-Agent: Mutt/1.5.21.rev5555 (2012-07-20)
Subject: [Xen-devel] vif backend configuration times out
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--lrZ03NoBR/3+SXJZ
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline


With current xen-unstable (25733:353bc0801b11) the attached hvm.cfg no
longer starts with a SLES11SP2 dom0 kernel, but it does start if I run a
3.5 pvops dom0 kernel. I have no modifications other than the stubdom -j
patch.

The output from this command is attached:
xl -vvvv create -d -f /root/xenpaging/sles11sp2_full_xenpaging_local.cfg 2>&1 | tee xl-create-`uname -r`.txt &

Any ideas on how to fix this timeout error?

Olaf

...
libxl: debug: libxl_event.c:457:watchfd_callback: watch w=0x62bf88 wpath=/local/domain/0/backend/vif/2/0/state token=3/1: event epath=/local/domain/0/backend/vif/2/0/state
libxl: debug: libxl_event.c:600:devstate_watch_callback: backend /local/domain/0/backend/vif/2/0/state wanted state 2 still waiting state 1
libxl: debug: libxl_event.c:614:devstate_timeout: backend /local/domain/0/backend/vif/2/0/state wanted state 2  timed out
libxl: debug: libxl_event.c:549:libxl__ev_xswatch_deregister: watch w=0x62bf88 wpath=/local/domain/0/backend/vif/2/0/state token=3/1: deregister slotnum=3
libxl: debug: libxl_event.c:561:libxl__ev_xswatch_deregister: watch w=0x62bf88: deregister unregistered
libxl: error: libxl_device.c:858:device_backend_callback: unable to disconnect device with path /local/domain/0/backend/vif/2/0
libxl: error: libxl_create.c:1070:domcreate_attach_pci: unable to add nic devices
...

--lrZ03NoBR/3+SXJZ
Content-Type: text/plain; charset=utf-8
Content-Disposition: attachment; filename="xl-create-3.0.34-0.7-xen.txt"

WARNING: ignoring "kernel" directive for HVM guest. Use "firmware_override" instead if you really want a non-default firmware
WARNING: ignoring device_model directive.
WARNING: Use "device_model_override" instead if you really want a non-default device_model
Parsing config from /root/xenpaging/sles11sp2_full_xenpaging_local.cfg
{
    "domid": null,
    "config": {
        "c_info": {
            "type": "hvm",
            "hap": "<default>",
            "oos": "<default>",
            "ssidref": 0,
            "name": "sles11sp2_full_xenpaging_local",
            "uuid": "f0311246-c8e4-4685-8549-321cd34c8f14",
            "xsdata": {

            },
            "platformdata": {

            },
            "poolid": 0,
            "run_hotplug_scripts": "True"
        },
        "b_info": {
            "max_vcpus": 4,
            "avail_vcpus": [
                0,
                1,
                2,
                3
            ],
            "cpumap": [

            ],
            "numa_placement": "<default>",
            "tsc_mode": "default",
            "max_memkb": 1048576,
            "target_memkb": 1048576,
            "video_memkb": -1,
            "shadow_memkb": 12288,
            "rtc_timeoffset": 0,
            "localtime": "False",
            "disable_migrate": "<default>",
            "cpuid": [

            ],
            "blkdev_start": null,
            "device_model_version": null,
            "device_model_stubdomain": "<default>",
            "device_model": null,
            "device_model_ssidref": 0,
            "extra": [

            ],
            "extra_pv": [

            ],
            "extra_hvm": [

            ],
            "sched_params": {
                "sched": "unknown",
                "weight": -1,
                "cap": -1,
                "period": -1,
                "slice": -1,
                "latency": -1,
                "extratime": -1
            },
            "u": {
                "firmware": null,
                "bios": null,
                "pae": "True",
                "apic": "<default>",
                "acpi": "True",
                "acpi_s3": "<default>",
                "acpi_s4": "<default>",
                "nx": "<default>",
                "viridian": "<default>",
                "timeoffset": null,
                "hpet": "<default>",
                "vpt_align": "<default>",
                "timer_mode": null,
                "nested_hvm": "<default>",
                "incr_generationid": "False",
                "nographic": "<default>",
                "vga": {
                    "kind": "cirrus"
                },
                "vnc": {
                    "enable": "True",
                    "listen": null,
                    "passwd": null,
                    "display": 0,
                    "findunused": "True"
                },
                "keymap": "de",
                "sdl": {
                    "enable": "<default>",
                    "opengl": "<default>",
                    "display": null,
                    "xauthority": null
                },
                "spice": {
                    "enable": "<default>",
                    "port": 0,
                    "tls_port": 0,
                    "host": null,
                    "disable_ticketing": "<default>",
                    "passwd": null,
                    "agent_mouse": "<default>"
                },
                "gfx_passthru": "<default>",
                "serial": "pty",
                "boot": "d",
                "usb": "<default>",
                "usbdevice": null,
                "soundhw": null,
                "xen_platform_pci": "<default>"
            }
        },
        "disks": [
            {
                "backend_domid": 0,
                "pdev_path": "/vmimages/vdisk-sles11sp2_full_xenpaging_local-disk0",
                "vdev": "hda",
                "backend": "unknown",
                "format": "raw",
                "script": null,
                "removable": 0,
                "readwrite": 1,
                "is_cdrom": 0
            },
            {
                "backend_domid": 0,
                "pdev_path": "/bax.arch.suse.de-olaf_xenimages/olaf/bax-olaf_xenimages/olaf_xenimages/SLES-11-SP2-DVD-x86_64-GMC-DVD1.iso",
                "vdev": "hdc",
                "backend": "unknown",
                "format": "raw",
                "script": null,
                "removable": 1,
                "readwrite": 0,
                "is_cdrom": 1
            }
        ],
        "nics": [
            {
                "backend_domid": 0,
                "devid": 0,
                "mtu": 0,
                "model": null,
                "mac": "00:00:00:00:00:00",
                "ip": null,
                "bridge": "br0",
                "ifname": null,
                "script": null,
                "nictype": null,
                "rate_bytes_per_interval": 0,
                "rate_interval_usecs": 0
            }
        ],
        "pcidevs": [

        ],
        "vfbs": [

        ],
        "vkbs": [

        ],
        "on_poweroff": "destroy",
        "on_reboot": "restart",
        "on_watchdog": "destroy",
        "on_crash": "destroy"
    }
}

libxl: debug: libxl_create.c:1147:do_domain_create: ao 0x623820: create: how=(nil) callback=(nil) poller=0x629010
libxl: debug: libxl_device.c:229:libxl__device_disk_set_backend: Disk vdev=hda spec.backend=unknown
libxl: debug: libxl_device.c:175:disk_try_backend: Disk vdev=hda, backend phy unsuitable as phys path not a block device
libxl: debug: libxl_device.c:184:disk_try_backend: Disk vdev=hda, backend tap unsuitable because blktap not available
libxl: debug: libxl_device.c:265:libxl__device_disk_set_backend: Disk vdev=hda, using backend qdisk
libxl: debug: libxl_device.c:229:libxl__device_disk_set_backend: Disk vdev=hdc spec.backend=unknown
libxl: debug: libxl_device.c:175:disk_try_backend: Disk vdev=hdc, backend phy unsuitable as phys path not a block device
libxl: debug: libxl_device.c:184:disk_try_backend: Disk vdev=hdc, backend tap unsuitable because blktap not available
libxl: debug: libxl_device.c:265:libxl__device_disk_set_backend: Disk vdev=hdc, using backend qdisk
libxl: debug: libxl_create.c:678:initiate_domain_create: running bootloader
libxl: debug: libxl_bootloader.c:321:libxl__bootloader_run: not a PV domain, skipping bootloader
libxl: debug: libxl_event.c:561:libxl__ev_xswatch_deregister: watch w=0x627330: deregister unregistered
libxl: debug: libxl_numa.c:435:libxl__get_numa_candidate: New best NUMA placement candidate found: nr_nodes=2, nr_cpus=24, nr_vcpus=28, free_memkb=1522
libxl: detail: libxl_dom.c:192:numa_place_domain: NUMA placement candidate with 2 nodes, 24 cpus and 1522 KB free selected
xc: detail: elf_parse_binary: phdr: paddr=0x100000 memsz=0x9c324
xc: detail: elf_parse_binary: memory: 0x100000 -> 0x19c324
xc: info: VIRTUAL MEMORY ARRANGEMENT:
  Loader:        0000000000100000->000000000019c324
  TOTAL:         0000000000000000->000000003f800000
  ENTRY ADDRESS: 0000000000100000
xc: info: PHYSICAL MEMORY ALLOCATION:
  4KB PAGES: 0x0000000000000200
  2MB PAGES: 0x00000000000001fb
  1GB PAGES: 0x0000000000000000
xc: detail: elf_load_binary: phdr 0 at 0x0x7fa70f484000 -> 0x0x7fa70f517194
libxl: debug: libxl_device.c:229:libxl__device_disk_set_backend: Disk vdev=hda spec.backend=qdisk
libxl: debug: libxl_device.c:229:libxl__device_disk_set_backend: Disk vdev=hdc spec.backend=qdisk
libxl: debug: libxl_dm.c:1142:libxl__spawn_local_dm: Spawning device-model /usr/lib/xen/bin/qemu-dm with arguments:
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   /usr/lib/xen/bin/qemu-dm
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -d
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   2
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -domain-name
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   sles11sp2_full_xenpaging_local
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -vnc
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   127.0.0.1:0
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -vncunused
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -k
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   de
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -serial
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   pty
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -videoram
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   8
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -boot
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   d
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -acpi
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -vcpus
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   4
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -vcpu_avail
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   0x0f
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -net
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   nic,vlan=0,macaddr=00:16:3e:10:dd:52,model=rtl8139
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -net
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   tap,vlan=0,ifname=vif2.0-emu,bridge=br0,script=no,downscript=no
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -M
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   xenfv
libxl: debug: libxl_event.c:512:libxl__ev_xswatch_register: watch w=0x627568 wpath=/local/domain/0/device-model/2/state token=3/0: register slotnum=3
libxl: debug: libxl_create.c:1160:do_domain_create: ao 0x623820: inprogress: poller=0x629010, flags=i
libxl: debug: libxl_event.c:457:watchfd_callback: watch w=0x627568 wpath=/local/domain/0/device-model/2/state token=3/0: event epath=/local/domain/0/device-model/2/state
libxl: debug: libxl_event.c:457:watchfd_callback: watch w=0x627568 wpath=/local/domain/0/device-model/2/state token=3/0: event epath=/local/domain/0/device-model/2/state
libxl: debug: libxl_event.c:549:libxl__ev_xswatch_deregister: watch w=0x627568 wpath=/local/domain/0/device-model/2/state token=3/0: deregister slotnum=3
libxl: debug: libxl_event.c:561:libxl__ev_xswatch_deregister: watch w=0x627568: deregister unregistered
libxl: debug: libxl_event.c:512:libxl__ev_xswatch_register: watch w=0x62bf88 wpath=/local/domain/0/backend/vif/2/0/state token=3/1: register slotnum=3
libxl: debug: libxl_event.c:457:watchfd_callback: watch w=0x62bf88 wpath=/local/domain/0/backend/vif/2/0/state token=3/1: event epath=/local/domain/0/backend/vif/2/0/state
libxl: debug: libxl_event.c:600:devstate_watch_callback: backend /local/domain/0/backend/vif/2/0/state wanted state 2 still waiting state 1
libxl: debug: libxl_event.c:614:devstate_timeout: backend /local/domain/0/backend/vif/2/0/state wanted state 2  timed out
libxl: debug: libxl_event.c:549:libxl__ev_xswatch_deregister: watch w=0x62bf88 wpath=/local/domain/0/backend/vif/2/0/state token=3/1: deregister slotnum=3
libxl: debug: libxl_event.c:561:libxl__ev_xswatch_deregister: watch w=0x62bf88: deregister unregistered
libxl: error: libxl_device.c:858:device_backend_callback: unable to disconnect device with path /local/domain/0/backend/vif/2/0
libxl: error: libxl_create.c:1070:domcreate_attach_pci: unable to add nic devices
libxl: debug: libxl_dm.c:1248:libxl__destroy_device_model: Device Model signaled
libxl: debug: libxl_event.c:512:libxl__ev_xswatch_register: watch w=0x62d218 wpath=/local/domain/0/backend/vif/2/0/state token=3/2: register slotnum=3
libxl: debug: libxl_event.c:457:watchfd_callback: watch w=0x62d218 wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event epath=/local/domain/0/backend/vif/2/0/state
libxl: debug: libxl_event.c:600:devstate_watch_callback: backend /local/domain/0/backend/vif/2/0/state wanted state 6 still waiting state 5
libxl: debug: libxl_event.c:614:devstate_timeout: backend /local/domain/0/backend/vif/2/0/state wanted state 6  timed out
libxl: debug: libxl_event.c:549:libxl__ev_xswatch_deregister: watch w=0x62d218 wpath=/local/domain/0/backend/vif/2/0/state token=3/2: deregister slotnum=3
libxl: debug: libxl_event.c:561:libxl__ev_xswatch_deregister: watch w=0x62d218: deregister unregistered
libxl: error: libxl_device.c:858:device_backend_callback: unable to disconnect device with path /local/domain/0/backend/vif/2/0
libxl: error: libxl.c:1463:devices_destroy_cb: libxl__devices_destroy failed for 2
libxl: debug: libxl_event.c:1497:libxl__ao_complete: ao 0x623820: complete, rc=-3
libxl: debug: libxl_event.c:1469:libxl__ao__destroy: ao 0x623820: destroy
xc: debug: hypercall buffer: total allocations:680 total releases:680
xc: debug: hypercall buffer: current allocations:0 maximum allocations:4
xc: debug: hypercall buffer: cache current size:4
xc: debug: hypercall buffer: cache hits:672 misses:4 toobig:4

--lrZ03NoBR/3+SXJZ
Content-Type: text/plain; charset=utf-8
Content-Disposition: attachment; filename="xl-create-3.5.0-3.home_olh_kernel_sles11sp1.1-kernel-linux-3_5.txt"

WARNING: ignoring "kernel" directive for HVM guest. Use "firmware_override" instead if you really want a non-default firmware
WARNING: ignoring device_model directive.
WARNING: Use "device_model_override" instead if you really want a non-default device_model
Parsing config from /root/xenpaging/sles11sp2_full_xenpaging_local.cfg
{
    "domid": null,
    "config": {
        "c_info": {
            "type": "hvm",
            "hap": "<default>",
            "oos": "<default>",
            "ssidref": 0,
            "name": "sles11sp2_full_xenpaging_local",
            "uuid": "f0311246-c8e4-4685-8549-321cd34c8f14",
            "xsdata": {

            },
            "platformdata": {

            },
            "poolid": 0,
            "run_hotplug_scripts": "True"
        },
        "b_info": {
            "max_vcpus": 4,
            "avail_vcpus": [
                0,
                1,
                2,
                3
            ],
            "cpumap": [

            ],
            "numa_placement": "<default>",
            "tsc_mode": "default",
            "max_memkb": 1048576,
            "target_memkb": 1048576,
            "video_memkb": -1,
            "shadow_memkb": 12288,
            "rtc_timeoffset": 0,
            "localtime": "False",
            "disable_migrate": "<default>",
            "cpuid": [

            ],
            "blkdev_start": null,
            "device_model_version": null,
            "device_model_stubdomain": "<default>",
            "device_model": null,
            "device_model_ssidref": 0,
            "extra": [

            ],
            "extra_pv": [

            ],
            "extra_hvm": [

            ],
            "sched_params": {
                "sched": "unknown",
                "weight": -1,
                "cap": -1,
                "period": -1,
                "slice": -1,
                "latency": -1,
                "extratime": -1
            },
            "u": {
                "firmware": null,
                "bios": null,
                "pae": "True",
                "apic": "<default>",
                "acpi": "True",
                "acpi_s3": "<default>",
                "acpi_s4": "<default>",
                "nx": "<default>",
                "viridian": "<default>",
                "timeoffset": null,
                "hpet": "<default>",
                "vpt_align": "<default>",
                "timer_mode": null,
                "nested_hvm": "<default>",
                "incr_generationid": "False",
                "nographic": "<default>",
                "vga": {
                    "kind": "cirrus"
                },
                "vnc": {
                    "enable": "True",
                    "listen": null,
                    "passwd": null,
                    "display": 0,
                    "findunused": "True"
                },
                "keymap": "de",
                "sdl": {
                    "enable": "<default>",
                    "opengl": "<default>",
                    "display": null,
                    "xauthority": null
                },
                "spice": {
                    "enable": "<default>",
                    "port": 0,
                    "tls_port": 0,
                    "host": null,
                    "disable_ticketing": "<default>",
                    "passwd": null,
                    "agent_mouse": "<default>"
                },
                "gfx_passthru": "<default>",
                "serial": "pty",
                "boot": "d",
                "usb": "<default>",
                "usbdevice": null,
                "soundhw": null,
                "xen_platform_pci": "<default>"
            }
        },
        "disks": [
            {
                "backend_domid": 0,
                "pdev_path": "/vmimages/vdisk-sles11sp2_full_xenpaging_local-disk0",
                "vdev": "hda",
                "backend": "unknown",
                "format": "raw",
                "script": null,
                "removable": 0,
                "readwrite": 1,
                "is_cdrom": 0
            },
            {
                "backend_domid": 0,
                "pdev_path": "/bax.arch.suse.de-olaf_xenimages/olaf/bax-olaf_xenimages/olaf_xenimages/SLES-11-SP2-DVD-x86_64-GMC-DVD1.iso",
                "vdev": "hdc",
                "backend": "unknown",
                "format": "raw",
                "script": null,
                "removable": 1,
                "readwrite": 0,
                "is_cdrom": 1
            }
        ],
        "nics": [
            {
                "backend_domid": 0,
                "devid": 0,
                "mtu": 0,
                "model": null,
                "mac": "00:00:00:00:00:00",
                "ip": null,
                "bridge": "br0",
                "ifname": null,
                "script": null,
                "nictype": null,
                "rate_bytes_per_interval": 0,
                "rate_interval_usecs": 0
            }
        ],
        "pcidevs": [

        ],
        "vfbs": [

        ],
        "vkbs": [

        ],
        "on_poweroff": "destroy",
        "on_reboot": "restart",
        "on_watchdog": "destroy",
        "on_crash": "destroy"
    }
}

libxl: debug: libxl_create.c:1147:do_domain_create: ao 0x623820: create: how=(nil) callback=(nil) poller=0x629010
libxl: debug: libxl_device.c:229:libxl__device_disk_set_backend: Disk vdev=hda spec.backend=unknown
libxl: debug: libxl_device.c:175:disk_try_backend: Disk vdev=hda, backend phy unsuitable as phys path not a block device
libxl: debug: libxl_device.c:184:disk_try_backend: Disk vdev=hda, backend tap unsuitable because blktap not available
libxl: debug: libxl_device.c:265:libxl__device_disk_set_backend: Disk vdev=hda, using backend qdisk
libxl: debug: libxl_device.c:229:libxl__device_disk_set_backend: Disk vdev=hdc spec.backend=unknown
libxl: debug: libxl_device.c:175:disk_try_backend: Disk vdev=hdc, backend phy unsuitable as phys path not a block device
libxl: debug: libxl_device.c:184:disk_try_backend: Disk vdev=hdc, backend tap unsuitable because blktap not available
libxl: debug: libxl_device.c:265:libxl__device_disk_set_backend: Disk vdev=hdc, using backend qdisk
libxl: debug: libxl_create.c:678:initiate_domain_create: running bootloader
libxl: debug: libxl_bootloader.c:321:libxl__bootloader_run: not a PV domain, skipping bootloader
libxl: debug: libxl_event.c:561:libxl__ev_xswatch_deregister: watch w=0x627330: deregister unregistered
libxl: debug: libxl_numa.c:435:libxl__get_numa_candidate: New best NUMA placement candidate found: nr_nodes=2, nr_cpus=24, nr_vcpus=28, free_memkb=1490
libxl: detail: libxl_dom.c:192:numa_place_domain: NUMA placement candidate with 2 nodes, 24 cpus and 1490 KB free selected
xc: detail: elf_parse_binary: phdr: paddr=0x100000 memsz=0x9c324
xc: detail: elf_parse_binary: memory: 0x100000 -> 0x19c324
xc: info: VIRTUAL MEMORY ARRANGEMENT:
  Loader:        0000000000100000->000000000019c324
  TOTAL:         0000000000000000->000000003f800000
  ENTRY ADDRESS: 0000000000100000
xc: info: PHYSICAL MEMORY ALLOCATION:
  4KB PAGES: 0x0000000000000200
  2MB PAGES: 0x00000000000001fb
  1GB PAGES: 0x0000000000000000
xc: detail: elf_load_binary: phdr 0 at 0x0x7f739b580000 -> 0x0x7f739b613194
libxl: debug: libxl_device.c:229:libxl__device_disk_set_backend: Disk vdev=hda spec.backend=qdisk
libxl: debug: libxl_device.c:229:libxl__device_disk_set_backend: Disk vdev=hdc spec.backend=qdisk
libxl: debug: libxl_dm.c:1142:libxl__spawn_local_dm: Spawning device-model /usr/lib/xen/bin/qemu-dm with arguments:
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   /usr/lib/xen/bin/qemu-dm
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -d
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   2
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -domain-name
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   sles11sp2_full_xenpaging_local
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -vnc
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   127.0.0.1:0
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -vncunused
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -k
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   de
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -serial
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   pty
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -videoram
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   8
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -boot
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   d
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -acpi
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -vcpus
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   4
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -vcpu_avail
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   0x0f
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -net
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   nic,vlan=0,macaddr=00:16:3e:0f:f7:4c,model=rtl8139
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -net
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   tap,vlan=0,ifname=vif2.0-emu,bridge=br0,script=no,downscript=no
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -M
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   xenfv
libxl: debug: libxl_event.c:512:libxl__ev_xswatch_register: watch w=0x627568 wpath=/local/domain/0/device-model/2/state token=3/0: register slotnum=3
libxl: debug: libxl_create.c:1160:do_domain_create: ao 0x623820: inprogress: poller=0x629010, flags=i
libxl: debug: libxl_event.c:457:watchfd_callback: watch w=0x627568 wpath=/local/domain/0/device-model/2/state token=3/0: event epath=/local/domain/0/device-model/2/state
libxl: debug: libxl_event.c:457:watchfd_callback: watch w=0x627568 wpath=/local/domain/0/device-model/2/state token=3/0: event epath=/local/domain/0/device-model/2/state
libxl: debug: libxl_event.c:549:libxl__ev_xswatch_deregister: watch w=0x627568 wpath=/local/domain/0/device-model/2/state token=3/0: deregister slotnum=3
libxl: debug: libxl_event.c:561:libxl__ev_xswatch_deregister: watch w=0x627568: deregister unregistered
libxl: debug: libxl_event.c:512:libxl__ev_xswatch_register: watch w=0x62c658 wpath=/local/domain/0/backend/vif/2/0/state token=3/1: register slotnum=3
libxl: debug: libxl_event.c:457:watchfd_callback: watch w=0x62c658 wpath=/local/domain/0/backend/vif/2/0/state token=3/1: event epath=/local/domain/0/backend/vif/2/0/state
libxl: debug: libxl_event.c:600:devstate_watch_callback: backend /local/domain/0/backend/vif/2/0/state wanted state 2 still waiting state 1
libxl: debug: libxl_event.c:457:watchfd_callback: watch w=0x62c658 wpath=/local/domain/0/backend/vif/2/0/state token=3/1: event epath=/local/domain/0/backend/vif/2/0/state
libxl: debug: libxl_event.c:596:devstate_watch_callback: backend /local/domain/0/backend/vif/2/0/state wanted state 2 ok
libxl: debug: libxl_event.c:549:libxl__ev_xswatch_deregister: watch w=0x62c658 wpath=/local/domain/0/backend/vif/2/0/state token=3/1: deregister slotnum=3
libxl: debug: libxl_event.c:561:libxl__ev_xswatch_deregister: watch w=0x62c658: deregister unregistered
libxl: debug: libxl_device.c:916:device_hotplug: calling hotplug script: /etc/xen/scripts/vif-bridge online
libxl: debug: libxl_device.c:916:device_hotplug: calling hotplug script: /etc/xen/scripts/vif-bridge add
libxl: debug: libxl_event.c:1667:libxl__ao_progress_report: ao 0x623820: progress report: ignored
libxl: debug: libxl_event.c:1497:libxl__ao_complete: ao 0x623820: complete, rc=0
libxl: debug: libxl_event.c:1469:libxl__ao__destroy: ao 0x623820: destroy
xc: debug: hypercall buffer: total allocations:582 total releases:582
xc: debug: hypercall buffer: current allocations:0 maximum allocations:4
xc: debug: hypercall buffer: cache current size:4
xc: debug: hypercall buffer: cache hits:574 misses:4 toobig:4
Daemon running with PID 5091

--lrZ03NoBR/3+SXJZ
Content-Type: text/plain; charset=utf-8
Content-Disposition: attachment; filename="sles11sp2_full_xenpaging_local.cfg"

name="sles11sp2_full_xenpaging_local"
description="None"
uuid="f0311246-c8e4-4685-8549-321cd34c8f14"
memory=1024
vcpus=4
actmem=234
on_poweroff="destroy"
on_reboot="restart"
on_crash="destroy"
localtime=0
keymap="de"

vif=[ 'bridge=br0' ]

# HVM
builder="hvm"
device_model="/usr/lib/xen/bin/qemu-dm"
kernel="/usr/lib/xen/boot/hvmloader"
boot="d"
disk=[ 'file:/vmimages/vdisk-sles11sp2_full_xenpaging_local-disk0,hda,w', 'file:/bax.arch.suse.de-olaf_xenimages/olaf/bax-olaf_xenimages/olaf_xenimages/SLES-11-SP2-DVD-x86_64-GMC-DVD1.iso,hdc:cdrom,r', ]
stdvga=0
vnc=1
vncunused=1
extid=0
acpi=1
pae=1
serial="pty"
xenpaging_extra=[ '-f', '/dev/shm/pagefile-sles11sp2_full_xenpaging_local', '-v' ]


--lrZ03NoBR/3+SXJZ
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--lrZ03NoBR/3+SXJZ--


From xen-devel-bounces@lists.xen.org Mon Aug 06 18:54:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 18:54:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SySRO-0003dg-Ey; Mon, 06 Aug 2012 18:54:10 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1SySRN-0003dY-0Z
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 18:54:09 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1344279239!11548064!1
X-Originating-IP: [74.125.82.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2839 invoked from network); 6 Aug 2012 18:54:00 -0000
Received: from mail-we0-f173.google.com (HELO mail-we0-f173.google.com)
	(74.125.82.173)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 18:54:00 -0000
Received: by weyz53 with SMTP id z53so2572222wey.32
	for <xen-devel@lists.xen.org>; Mon, 06 Aug 2012 11:53:59 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=ObHPrE5c96rJzzT0MrQKNtIrAn+L0iqH6pw/fIJWZKw=;
	b=0UMRby49S7+ojcGeGYikGwpwT3pkO/cqkJDFWHHgsytYnD09o/gqL8g3KuTWwDDf+U
	yvmvjivKK3EX3MRnNWfg9FqqLmZMLuziARyM49ViN/eHgELf22Dv8Q2IpL/URagMiBu+
	s+KaNaM0Nlgku2tRstoiBOcAUvKaQuaKs4DfGP/Y2a47//cIWdpwVJm7V4lnV75r1dpb
	WZMUaWnxltLmB20w6MNJapa/EOXfLquBtjX7VBzhDavKf/rMRtdAlwxwBMW2uq+7X43g
	OUMMeMULm5lgbZ0k0Q5JxdXK/+KVifavCUR4wZWsuGWUzXhWKMrK3ROhkGLQAYxtXJ+r
	AaWw==
Received: by 10.180.83.106 with SMTP id p10mr20649682wiy.21.1344279239639;
	Mon, 06 Aug 2012 11:53:59 -0700 (PDT)
Received: from [192.168.1.68] (host86-178-46-238.range86-178.btcentralplus.com.
	[86.178.46.238])
	by mx.google.com with ESMTPS id k20sm17007602wiv.11.2012.08.06.11.53.53
	(version=SSLv3 cipher=OTHER); Mon, 06 Aug 2012 11:53:58 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.32.0.111121
Date: Mon, 06 Aug 2012 19:53:49 +0100
From: Keir Fraser <keir.xen@gmail.com>
To: Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	<xen-devel@lists.xen.org>
Message-ID: <CC45D14D.3A904%keir.xen@gmail.com>
Thread-Topic: [Xen-devel] [PATCH 12/18] xsm: Add missing domctl and
	mem_sharing hooks
Thread-Index: Ac10BNB6BoEaqC3UcEuDHKmXzGt/fA==
In-Reply-To: <1344263550-3941-13-git-send-email-dgdegra@tycho.nsa.gov>
Mime-version: 1.0
Subject: Re: [Xen-devel] [PATCH 12/18] xsm: Add missing domctl and
 mem_sharing hooks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

When someone wants to add a new domctl/sysctl, how many places will they
have to add things to ensure that XSM does the right thing for a basic setup,
allowing only dom0 access to the new op? How big is the risk that we end up
with new ops that have no access control?


On 06/08/2012 15:32, "Daniel De Graaf" <dgdegra@tycho.nsa.gov> wrote:

> This patch adds new XSM hooks to cover the 12 domctls that were not
> previously covered by an XSM hook, and splits up the mem_sharing and
> mem_event XSM hooks to better cover what the code is doing.
> 
> Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
> ---
>  tools/flask/policy/policy/flask/access_vectors |   5 +
>  tools/flask/policy/policy/modules/xen/xen.if   |   2 +
>  xen/arch/x86/domctl.c                          | 125 +++++++++++++++----------
>  xen/arch/x86/mm/mem_event.c                    |  45 ++++-----
>  xen/arch/x86/mm/mem_sharing.c                  |  23 ++++-
>  xen/include/asm-x86/mem_event.h                |   1 -
>  xen/include/xsm/dummy.h                        |  65 ++++++++++++-
>  xen/include/xsm/xsm.h                          |  62 +++++++++++-
>  xen/xsm/dummy.c                                |  11 ++-
>  xen/xsm/flask/hooks.c                          |  62 +++++++++++-
>  xen/xsm/flask/include/av_perm_to_string.h      |   5 +
>  xen/xsm/flask/include/av_permissions.h         |   5 +
>  12 files changed, 318 insertions(+), 93 deletions(-)
> 
> diff --git a/tools/flask/policy/policy/flask/access_vectors b/tools/flask/policy/policy/flask/access_vectors
> index 11d02da..28b8ada 100644
> --- a/tools/flask/policy/policy/flask/access_vectors
> +++ b/tools/flask/policy/policy/flask/access_vectors
> @@ -80,6 +80,9 @@ class domain2
> relabelself
> make_priv_for
> set_as_target
> + set_cpuid
> + gettsc
> + settsc
>  }
>  
>  class hvm
> @@ -97,6 +100,8 @@ class hvm
>      hvmctl
>      mem_event
>      mem_sharing
> + share_mem
> + audit_p2m
>  }
>  
>  class event
> diff --git a/tools/flask/policy/policy/modules/xen/xen.if b/tools/flask/policy/policy/modules/xen/xen.if
> index 4de99c8..f9bd757 100644
> --- a/tools/flask/policy/policy/modules/xen/xen.if
> +++ b/tools/flask/policy/policy/modules/xen/xen.if
> @@ -29,6 +29,7 @@ define(`create_domain_common', `
> getdomaininfo hypercall setvcpucontext setextvcpucontext
> scheduler getvcpuinfo getvcpuextstate getaddrsize
> getvcpuaffinity setvcpuaffinity };
> + allow $1 $2:domain2 { set_cpuid settsc };
> allow $1 $2:security check_context;
> allow $1 $2:shadow enable;
> allow $1 $2:mmu {map_read map_write adjust memorymap physmap pinpage};
> @@ -67,6 +68,7 @@ define(`migrate_domain_out', `
> allow $1 $2:hvm { gethvmc getparam irqlevel };
> allow $1 $2:mmu { stat pageinfo map_read };
> allow $1 $2:domain { getaddrsize getvcpucontext getextvcpucontext
> getvcpuextstate pause destroy };
> + allow $1 $2:domain2 gettsc;
>  ')
>  
>  
> ################################################################################
> diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
> index bcb5b2d..95f34d2 100644
> --- a/xen/arch/x86/domctl.c
> +++ b/xen/arch/x86/domctl.c
> @@ -54,26 +54,6 @@ long arch_do_domctl(
>  
>      switch ( domctl->cmd )
>      {
> -    /* TODO: the following do not have XSM hooks yet */
> -    case XEN_DOMCTL_set_cpuid:
> -    case XEN_DOMCTL_suppress_spurious_page_faults:
> -    case XEN_DOMCTL_debug_op:
> -    case XEN_DOMCTL_gettscinfo:
> -    case XEN_DOMCTL_settscinfo:
> -    case XEN_DOMCTL_audit_p2m:
> -    case XEN_DOMCTL_gdbsx_guestmemio:
> -    case XEN_DOMCTL_gdbsx_pausevcpu:
> -    case XEN_DOMCTL_gdbsx_unpausevcpu:
> -    case XEN_DOMCTL_gdbsx_domstatus:
> -    /* getpageframeinfo[23] will leak XEN_DOMCTL_PFINFO_XTAB on target GFNs */
> -    case XEN_DOMCTL_getpageframeinfo2:
> -    case XEN_DOMCTL_getpageframeinfo3:
> -        if ( !IS_PRIV(current->domain) )
> -            return -EPERM;
> -    }
> -
> -    switch ( domctl->cmd )
> -    {
>  
>      case XEN_DOMCTL_shadow_op:
>      {
> @@ -190,6 +170,13 @@ long arch_do_domctl(
>              if ( unlikely((d = rcu_lock_domain_by_id(dom)) == NULL) )
>                  break;
>  
> +            ret = xsm_getpageframeinfo_domain(d);
> +            if ( ret )
> +            {
> +                rcu_unlock_domain(d);
> +                break;
> +            }
> +
>              if ( unlikely(num > 1024) ||
>                   unlikely(num != domctl->u.getpageframeinfo3.num) )
>              {
> @@ -287,6 +274,13 @@ long arch_do_domctl(
>          if ( unlikely((d = rcu_lock_domain_by_id(dom)) == NULL) )
>              break;
>  
> +        ret = xsm_getpageframeinfo_domain(d);
> +        if ( ret )
> +        {
> +            rcu_unlock_domain(d);
> +            break;
> +        }
> +
>          if ( unlikely(num > 1024) )
>          {
>              ret = -E2BIG;
> @@ -1106,6 +1100,10 @@ long arch_do_domctl(
>          if ( d == NULL )
>              break;
>  
> +        ret = xsm_set_cpuid(d);
> +        if ( ret )
> +            goto set_cpuid_out;
> +
>          for ( i = 0; i < MAX_CPUID_INPUT; i++ )
>          {
>              cpuid = &d->arch.cpuids[i];
> @@ -1129,6 +1127,7 @@ long arch_do_domctl(
>              ret = 0;
>          }
>  
> +    set_cpuid_out:
>          rcu_unlock_domain(d);
>      }
>      break;
> @@ -1143,6 +1142,10 @@ long arch_do_domctl(
>          if ( d == NULL )
>              break;
>  
> +        ret = xsm_gettscinfo(d);
> +        if ( ret )
> +            goto gettscinfo_out;
> +
>          domain_pause(d);
>          tsc_get_info(d, &info.tsc_mode,
>                          &info.elapsed_nsec,
> @@ -1154,6 +1157,7 @@ long arch_do_domctl(
>              ret = 0;
>          domain_unpause(d);
>  
> +    gettscinfo_out:
>          rcu_unlock_domain(d);
>      }
>      break;
> @@ -1167,15 +1171,20 @@ long arch_do_domctl(
>          if ( d == NULL )
>              break;
>  
> +        ret = xsm_settscinfo(d);
> +        if ( ret )
> +            goto settscinfo_out;
> +
>          domain_pause(d);
>          tsc_set_info(d, domctl->u.tsc_info.info.tsc_mode,
>                       domctl->u.tsc_info.info.elapsed_nsec,
>                       domctl->u.tsc_info.info.gtsc_khz,
>                       domctl->u.tsc_info.info.incarnation);
>          domain_unpause(d);
> +        ret = 0;
>  
> +    settscinfo_out:
>          rcu_unlock_domain(d);
> -        ret = 0;
>      }
>      break;
>  
> @@ -1187,9 +1196,10 @@ long arch_do_domctl(
>          d = rcu_lock_domain_by_id(domctl->domain);
>          if ( d != NULL )
>          {
> -            d->arch.suppress_spurious_page_faults = 1;
> +            ret = xsm_domctl(d, domctl->cmd);
> +            if ( !ret )
> +                d->arch.suppress_spurious_page_faults = 1;
>              rcu_unlock_domain(d);
> -            ret = 0;
>          }
>      }
>      break;
> @@ -1204,6 +1214,10 @@ long arch_do_domctl(
>          if ( d == NULL )
>              break;
>  
> +        ret = xsm_debug_op(d);
> +        if ( ret )
> +            goto debug_op_out;
> +
>          ret = -EINVAL;
>          if ( (domctl->u.debug_op.vcpu >= d->max_vcpus) ||
>               ((v = d->vcpu[domctl->u.debug_op.vcpu]) == NULL) )
> @@ -1228,6 +1242,10 @@ long arch_do_domctl(
>          if ( (d = rcu_lock_domain_by_id(domctl->domain)) == NULL )
>              break;
>  
> +        ret = xsm_debug_op(d);
> +        if ( ret )
> +            goto gdbsx_guestmemio_out;
> +
>          domctl->u.gdbsx_guest_memio.remain =
>              domctl->u.gdbsx_guest_memio.len;
>  
> @@ -1235,6 +1253,7 @@ long arch_do_domctl(
>          if ( !ret && copy_to_guest(u_domctl, domctl, 1) )
>              ret = -EFAULT;
>  
> +    gdbsx_guestmemio_out:
>          rcu_unlock_domain(d);
>      }
>      break;
> @@ -1248,21 +1267,20 @@ long arch_do_domctl(
>          if ( (d = rcu_lock_domain_by_id(domctl->domain)) == NULL )
>              break;
>  
> +        ret = xsm_debug_op(d);
> +        if ( ret )
> +            goto gdbsx_pausevcpu_out;
> +
>          ret = -EBUSY;
>          if ( !d->is_paused_by_controller )
> -        {
> -            rcu_unlock_domain(d);
> -            break;
> -        }
> +            goto gdbsx_pausevcpu_out;
>          ret = -EINVAL;
>          if ( domctl->u.gdbsx_pauseunp_vcpu.vcpu >= MAX_VIRT_CPUS ||
>               (v = d->vcpu[domctl->u.gdbsx_pauseunp_vcpu.vcpu]) == NULL )
> -        {
> -            rcu_unlock_domain(d);
> -            break;
> -        }
> +            goto gdbsx_pausevcpu_out;
>          vcpu_pause(v);
>          ret = 0;
> +    gdbsx_pausevcpu_out:
>          rcu_unlock_domain(d);
>      }
>      break;
> @@ -1276,23 +1294,22 @@ long arch_do_domctl(
>          if ( (d = rcu_lock_domain_by_id(domctl->domain)) == NULL )
>              break;
>  
> +        ret = xsm_debug_op(d);
> +        if ( ret )
> +            goto gdbsx_unpausevcpu_out;
> +
>          ret = -EBUSY;
>          if ( !d->is_paused_by_controller )
> -        {
> -            rcu_unlock_domain(d);
> -            break;
> -        }
> +            goto gdbsx_unpausevcpu_out;
>          ret = -EINVAL;
>          if ( domctl->u.gdbsx_pauseunp_vcpu.vcpu >= MAX_VIRT_CPUS ||
>               (v = d->vcpu[domctl->u.gdbsx_pauseunp_vcpu.vcpu]) == NULL )
> -        {
> -            rcu_unlock_domain(d);
> -            break;
> -        }
> +            goto gdbsx_unpausevcpu_out;
>          if ( !atomic_read(&v->pause_count) )
>              printk("WARN: Unpausing vcpu:%d which is not paused\n", v->vcpu_id);
>          vcpu_unpause(v);
>          ret = 0;
> +    gdbsx_unpausevcpu_out:
>          rcu_unlock_domain(d);
>      }
>      break;
> @@ -1306,6 +1323,10 @@ long arch_do_domctl(
>          if ( (d = rcu_lock_domain_by_id(domctl->domain)) == NULL )
>              break;
>  
> +        ret = xsm_debug_op(d);
> +        if ( ret )
> +            goto gdbsx_domstatus_out;
> +
>          domctl->u.gdbsx_domstatus.vcpu_id = -1;
>          domctl->u.gdbsx_domstatus.paused = d->is_paused_by_controller;
>          if ( domctl->u.gdbsx_domstatus.paused )
> @@ -1325,6 +1346,7 @@ long arch_do_domctl(
>          ret = 0;
>          if ( copy_to_guest(u_domctl, domctl, 1) )
>              ret = -EFAULT;
> +    gdbsx_domstatus_out:
>          rcu_unlock_domain(d);
>      }
>      break;
> @@ -1464,10 +1486,8 @@ long arch_do_domctl(
>          d = rcu_lock_domain_by_id(domctl->domain);
>          if ( d != NULL )
>          {
> -            ret = xsm_mem_event(d);
> -            if ( !ret )
> -                ret = mem_event_domctl(d, &domctl->u.mem_event_op,
> -                                       guest_handle_cast(u_domctl, void));
> +            ret = mem_event_domctl(d, &domctl->u.mem_event_op,
> +                                   guest_handle_cast(u_domctl, void));
>              rcu_unlock_domain(d);
>              copy_to_guest(u_domctl, domctl, 1);
>          } 
> @@ -1496,16 +1516,19 @@ long arch_do_domctl(
>      {
>          struct domain *d;
>  
> -        ret = rcu_lock_remote_target_domain_by_id(domctl->domain, &d);
> -        if ( ret != 0 )
> +        d = rcu_lock_domain_by_id(domctl->domain);
> +        if ( d == NULL )
>              break;
>  
> -        audit_p2m(d,
> -                  &domctl->u.audit_p2m.orphans,
> -                  &domctl->u.audit_p2m.m2p_bad,
> -                  &domctl->u.audit_p2m.p2m_bad);
> +        ret = xsm_audit_p2m(d);
> +        if ( !ret )
> +            audit_p2m(d,
> +                      &domctl->u.audit_p2m.orphans,
> +                      &domctl->u.audit_p2m.m2p_bad,
> +                      &domctl->u.audit_p2m.p2m_bad);
> +
>          rcu_unlock_domain(d);
> -        if ( copy_to_guest(u_domctl, domctl, 1) )
> +        if ( !ret && copy_to_guest(u_domctl, domctl, 1) )
>              ret = -EFAULT;
>      }
>      break;
> @@ -1524,7 +1547,7 @@ long arch_do_domctl(
>          d = rcu_lock_domain_by_id(domctl->domain);
>          if ( d != NULL )
>          {
> -            ret = xsm_mem_event(d);
> +            ret = xsm_mem_event_setup(d);
>              if ( !ret ) {
>                  p2m = p2m_get_hostp2m(d);
>                  p2m->access_required =
> domctl->u.access_required.access_required;
> diff --git a/xen/arch/x86/mm/mem_event.c b/xen/arch/x86/mm/mem_event.c
> index d728889..a5b02d9 100644
> --- a/xen/arch/x86/mm/mem_event.c
> +++ b/xen/arch/x86/mm/mem_event.c
> @@ -29,6 +29,7 @@
>  #include <asm/mem_paging.h>
>  #include <asm/mem_access.h>
>  #include <asm/mem_sharing.h>
> +#include <xsm/xsm.h>
>  
>  /* for public/io/ring.h macros */
>  #define xen_mb()   mb()
> @@ -439,34 +440,22 @@ static void mem_sharing_notification(struct vcpu *v, unsigned int port)
>          mem_sharing_sharing_resume(v->domain);
>  }
>  
> -struct domain *get_mem_event_op_target(uint32_t domain, int *rc)
> -{
> -    struct domain *d;
> -
> -    /* Get the target domain */
> -    *rc = rcu_lock_remote_target_domain_by_id(domain, &d);
> -    if ( *rc != 0 )
> -        return NULL;
> -
> -    /* Not dying? */
> -    if ( d->is_dying )
> -    {
> -        rcu_unlock_domain(d);
> -        *rc = -EINVAL;
> -        return NULL;
> -    }
> -    
> -    return d;
> -}
> -
>  int do_mem_event_op(int op, uint32_t domain, void *arg)
>  {
>      int ret;
>      struct domain *d;
>  
> -    d = get_mem_event_op_target(domain, &ret);
> +    d = rcu_lock_domain_by_id(domain);
>      if ( !d )
> -        return ret;
> +        return -ESRCH;
> +
> +    ret = -EINVAL;
> +    if ( d->is_dying || d == current->domain )
> +        goto out;
> +
> +    ret = xsm_mem_event_op(d, op);
> +    if ( ret )
> +        goto out;
>  
>      switch (op)
>      {
> @@ -483,6 +472,7 @@ int do_mem_event_op(int op, uint32_t domain, void *arg)
>              ret = -ENOSYS;
>      }
>  
> + out:
>      rcu_unlock_domain(d);
>      return ret;
>  }
> @@ -516,6 +506,10 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
>  {
>      int rc;
>  
> +    rc = xsm_mem_event_control(d, mec->mode, mec->op);
> +    if ( rc )
> +        return rc;
> +
>      if ( unlikely(d == current->domain) )
>      {
>          gdprintk(XENLOG_INFO, "Tried to do a memory event op on itself.\n");
> @@ -537,13 +531,6 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
>          return -EINVAL;
>      }
>  
> -    /* TODO: XSM hook */
> -#if 0
> -    rc = xsm_mem_event_control(d, mec->op);
> -    if ( rc )
> -        return rc;
> -#endif
> -
>      rc = -ENOSYS;
>  
>      switch ( mec->mode )
> diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
> index 5103285..a7e6c5c 100644
> --- a/xen/arch/x86/mm/mem_sharing.c
> +++ b/xen/arch/x86/mm/mem_sharing.c
> @@ -34,6 +34,7 @@
>  #include <asm/atomic.h>
>  #include <xen/rcupdate.h>
>  #include <asm/event.h>
> +#include <xsm/xsm.h>
>  
>  #include "mm-locks.h"
>  
> @@ -1345,11 +1346,18 @@ int mem_sharing_memop(struct domain *d, xen_mem_sharing_op_t *mec)
>              if ( !mem_sharing_enabled(d) )
>                  return -EINVAL;
>  
> -            cd = get_mem_event_op_target(mec->u.share.client_domain, &rc);
> +            cd = rcu_lock_domain_by_id(mec->u.share.client_domain);
>              if ( !cd )
> +                return -ESRCH;
> +
> +            rc = xsm_mem_sharing_op(d, cd, mec->op);
> +            if ( rc )
> +            {
> +                rcu_unlock_domain(cd);
>                  return rc;
> +            }
>  
> -            if ( !mem_sharing_enabled(cd) )
> +            if ( cd == current->domain || !mem_sharing_enabled(cd) )
>              {
>                  rcu_unlock_domain(cd);
>                  return -EINVAL;
> @@ -1401,11 +1409,18 @@ int mem_sharing_memop(struct domain *d, xen_mem_sharing_op_t *mec)
>              if ( !mem_sharing_enabled(d) )
>                  return -EINVAL;
>  
> -            cd = get_mem_event_op_target(mec->u.share.client_domain, &rc);
> +            cd = rcu_lock_domain_by_id(mec->u.share.client_domain);
>              if ( !cd )
> +                return -ESRCH;
> +
> +            rc = xsm_mem_sharing_op(d, cd, mec->op);
> +            if ( rc )
> +            {
> +                rcu_unlock_domain(cd);
>                  return rc;
> +            }
>  
> -            if ( !mem_sharing_enabled(cd) )
> +            if ( cd == current->domain || !mem_sharing_enabled(cd) )
>              {
>                  rcu_unlock_domain(cd);
>                  return -EINVAL;
> diff --git a/xen/include/asm-x86/mem_event.h b/xen/include/asm-x86/mem_event.h
> index 23d71c1..448be4f 100644
> --- a/xen/include/asm-x86/mem_event.h
> +++ b/xen/include/asm-x86/mem_event.h
> @@ -62,7 +62,6 @@ void mem_event_put_request(struct domain *d, struct mem_event_domain *med,
>  int mem_event_get_response(struct domain *d, struct mem_event_domain *med,
>                             mem_event_response_t *rsp);
>  
> -struct domain *get_mem_event_op_target(uint32_t domain, int *rc);
>  int do_mem_event_op(int op, uint32_t domain, void *arg);
>  int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
>                       XEN_GUEST_HANDLE(void) u_domctl);
> diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
> index 0d849cc..c71c08b 100644
> --- a/xen/include/xsm/dummy.h
> +++ b/xen/include/xsm/dummy.h
> @@ -171,6 +171,13 @@ static XSM_DEFAULT(int, setdebugging) (struct domain *d)
>      return 0;
>  }
>  
> +static XSM_DEFAULT(int, debug_op) (struct domain *d)
> +{
> +    if ( !IS_PRIV(current->domain) )
> +        return -EPERM;
> +    return 0;
> +}
> +
>  static XSM_DEFAULT(int, perfcontrol) (void)
>  {
>      if ( !IS_PRIV(current->domain) )
> @@ -557,6 +564,34 @@ static XSM_DEFAULT(int, getpageframeinfo) (struct page_info *page)
>      return 0;
>  }
>  
> +static XSM_DEFAULT(int, getpageframeinfo_domain) (struct domain *d)
> +{
> +    if ( !IS_PRIV(current->domain) )
> +        return -EPERM;
> +    return 0;
> +}
> +
> +static XSM_DEFAULT(int, set_cpuid) (struct domain *d)
> +{
> +    if ( !IS_PRIV(current->domain) )
> +        return -EPERM;
> +    return 0;
> +}
> +
> +static XSM_DEFAULT(int, gettscinfo) (struct domain *d)
> +{
> +    if ( !IS_PRIV(current->domain) )
> +        return -EPERM;
> +    return 0;
> +}
> +
> +static XSM_DEFAULT(int, settscinfo) (struct domain *d)
> +{
> +    if ( !IS_PRIV(current->domain) )
> +        return -EPERM;
> +    return 0;
> +}
> +
>  static XSM_DEFAULT(int, getmemlist) (struct domain *d)
>  {
>      if ( !IS_PRIV(current->domain) )
> @@ -627,13 +662,27 @@ static XSM_DEFAULT(int, hvm_inject_msi) (struct domain *d)
>      return 0;
>  }
>  
> -static XSM_DEFAULT(int, mem_event) (struct domain *d)
> +static XSM_DEFAULT(int, mem_event_setup) (struct domain *d)
>  {
>      if ( !IS_PRIV(current->domain) )
>          return -EPERM;
>      return 0;
>  }
>  
> +static XSM_DEFAULT(int, mem_event_control) (struct domain *d, int mode, int op)
> +{
> +    if ( !IS_PRIV(current->domain) )
> +        return -EPERM;
> +    return 0;
> +}
> +
> +static XSM_DEFAULT(int, mem_event_op) (struct domain *d, int op)
> +{
> +    if ( !IS_PRIV_FOR(current->domain, d) )
> +        return -EPERM;
> +    return 0;
> +}
> +
>  static XSM_DEFAULT(int, mem_sharing) (struct domain *d)
>  {
>      if ( !IS_PRIV(current->domain) )
> @@ -641,6 +690,20 @@ static XSM_DEFAULT(int, mem_sharing) (struct domain *d)
>      return 0;
>  }
>  
> +static XSM_DEFAULT(int, mem_sharing_op) (struct domain *d, struct domain *cd, int op)
> +{
> +    if ( !IS_PRIV_FOR(current->domain, cd) )
> +        return -EPERM;
> +    return 0;
> +}
> +
> +static XSM_DEFAULT(int, audit_p2m) (struct domain *d)
> +{
> +    if ( !IS_PRIV(current->domain) )
> +        return -EPERM;
> +    return 0;
> +}
> +
>  static XSM_DEFAULT(int, apic) (struct domain *d, int cmd)
>  {
>      if ( !IS_PRIV(current->domain) )
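
The dummy defaults above use two different privilege tests: most of the new hooks use IS_PRIV, which accepts only the privileged domain (dom0), while mem_event_op and mem_sharing_op use IS_PRIV_FOR, which additionally accepts a domain that holds privilege over the specific target (e.g. a device-model stub domain). A toy model of the distinction — the struct and fields here are simplified stand-ins, not Xen's real `struct domain`:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-in for Xen's struct domain: a dom0 flag plus an optional
 * pointer to the one domain this domain is privileged for. */
struct domain {
    int is_privileged;     /* dom0 flag */
    struct domain *target; /* domain this one is privileged for, or NULL */
};

/* IS_PRIV(d): is d the privileged domain (dom0)? */
int is_priv(const struct domain *d)
{
    return d->is_privileged;
}

/* IS_PRIV_FOR(d, t): dom0, or privileged specifically for target t. */
int is_priv_for(const struct domain *d, const struct domain *t)
{
    return d->is_privileged || d->target == t;
}

/* Shape of the new dom0-only defaults (debug_op, audit_p2m, ...). */
int dummy_audit_p2m(const struct domain *curr)
{
    return is_priv(curr) ? 0 : -1; /* -EPERM */
}

/* Shape of the new target-aware defaults (mem_event_op, mem_sharing_op). */
int dummy_mem_event_op(const struct domain *curr, const struct domain *d)
{
    return is_priv_for(curr, d) ? 0 : -1; /* -EPERM */
}
```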
> diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
> index 1a9f35b..b473b54 100644
> --- a/xen/include/xsm/xsm.h
> +++ b/xen/include/xsm/xsm.h
> @@ -67,6 +67,7 @@ struct xsm_operations {
>      int (*setdomainmaxmem) (struct domain *d);
>      int (*setdomainhandle) (struct domain *d);
>      int (*setdebugging) (struct domain *d);
> +    int (*debug_op) (struct domain *d);
>      int (*perfcontrol) (void);
>      int (*debug_keys) (void);
>      int (*getcpuinfo) (void);
> @@ -142,6 +143,10 @@ struct xsm_operations {
>  #ifdef CONFIG_X86
>      int (*shadow_control) (struct domain *d, uint32_t op);
>      int (*getpageframeinfo) (struct page_info *page);
> +    int (*getpageframeinfo_domain) (struct domain *d);
> +    int (*set_cpuid) (struct domain *d);
> +    int (*gettscinfo) (struct domain *d);
> +    int (*settscinfo) (struct domain *d);
>      int (*getmemlist) (struct domain *d);
>      int (*hypercall_init) (struct domain *d);
>      int (*hvmcontext) (struct domain *d, uint32_t op);
> @@ -152,8 +157,12 @@ struct xsm_operations {
>      int (*hvm_set_isa_irq_level) (struct domain *d);
>      int (*hvm_set_pci_link_route) (struct domain *d);
>      int (*hvm_inject_msi) (struct domain *d);
> -    int (*mem_event) (struct domain *d);
> +    int (*mem_event_setup) (struct domain *d);
> +    int (*mem_event_control) (struct domain *d, int mode, int op);
> +    int (*mem_event_op) (struct domain *d, int op);
>      int (*mem_sharing) (struct domain *d);
> +    int (*mem_sharing_op) (struct domain *d, struct domain *cd, int op);
> +    int (*audit_p2m) (struct domain *d);
>      int (*apic) (struct domain *d, int cmd);
>      int (*xen_settime) (void);
>      int (*memtype) (uint32_t access);
> @@ -302,6 +311,11 @@ static inline int xsm_setdebugging (struct domain *d)
>      return xsm_call(setdebugging(d));
>  }
>  
> +static inline int xsm_debug_op (struct domain *d)
> +{
> +    return xsm_call(debug_op(d));
> +}
> +
>  static inline int xsm_perfcontrol (void)
>  {
>      return xsm_call(perfcontrol());
> @@ -329,7 +343,7 @@ static inline int xsm_get_pmstat(void)
>  
>  static inline int xsm_setpminfo(void)
>  {
> - return xsm_call(setpminfo());
> +    return xsm_call(setpminfo());
>  }
>  
>  static inline int xsm_pm_op(void)
> @@ -608,6 +622,26 @@ static inline int xsm_getpageframeinfo (struct page_info *page)
>      return xsm_call(getpageframeinfo(page));
>  }
>  
> +static inline int xsm_getpageframeinfo_domain (struct domain *d)
> +{
> +    return xsm_call(getpageframeinfo_domain(d));
> +}
> +
> +static inline int xsm_set_cpuid (struct domain *d)
> +{
> +    return xsm_call(set_cpuid(d));
> +}
> +
> +static inline int xsm_gettscinfo (struct domain *d)
> +{
> +    return xsm_call(gettscinfo(d));
> +}
> +
> +static inline int xsm_settscinfo (struct domain *d)
> +{
> +    return xsm_call(settscinfo(d));
> +}
> +
>  static inline int xsm_getmemlist (struct domain *d)
>  {
>      return xsm_call(getmemlist(d));
> @@ -658,9 +692,19 @@ static inline int xsm_hvm_inject_msi (struct domain *d)
>      return xsm_call(hvm_inject_msi(d));
>  }
>  
> -static inline int xsm_mem_event (struct domain *d)
> +static inline int xsm_mem_event_setup (struct domain *d)
> +{
> +    return xsm_call(mem_event_setup(d));
> +}
> +
> +static inline int xsm_mem_event_control (struct domain *d, int mode, int op)
> +{
> +    return xsm_call(mem_event_control(d, mode, op));
> +}
> +
> +static inline int xsm_mem_event_op (struct domain *d, int op)
>  {
> -    return xsm_call(mem_event(d));
> +    return xsm_call(mem_event_op(d, op));
>  }
>  
>  static inline int xsm_mem_sharing (struct domain *d)
> @@ -668,6 +712,16 @@ static inline int xsm_mem_sharing (struct domain *d)
>      return xsm_call(mem_sharing(d));
>  }
>  
> +static inline int xsm_mem_sharing_op (struct domain *d, struct domain *cd, int op)
> +{
> +    return xsm_call(mem_sharing_op(d, cd, op));
> +}
> +
> +static inline int xsm_audit_p2m (struct domain *d)
> +{
> +    return xsm_call(audit_p2m(d));
> +}
> +
>  static inline int xsm_apic (struct domain *d, int cmd)
>  {
>      return xsm_call(apic(d, cmd));
> diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
> index af532b8..09935d8 100644
> --- a/xen/xsm/dummy.c
> +++ b/xen/xsm/dummy.c
> @@ -51,6 +51,7 @@ void xsm_fixup_ops (struct xsm_operations *ops)
>      set_to_dummy_if_null(ops, setdomainmaxmem);
>      set_to_dummy_if_null(ops, setdomainhandle);
>      set_to_dummy_if_null(ops, setdebugging);
> +    set_to_dummy_if_null(ops, debug_op);
>      set_to_dummy_if_null(ops, perfcontrol);
>      set_to_dummy_if_null(ops, debug_keys);
>      set_to_dummy_if_null(ops, getcpuinfo);
> @@ -124,6 +125,10 @@ void xsm_fixup_ops (struct xsm_operations *ops)
>  #ifdef CONFIG_X86
>      set_to_dummy_if_null(ops, shadow_control);
>      set_to_dummy_if_null(ops, getpageframeinfo);
> +    set_to_dummy_if_null(ops, getpageframeinfo_domain);
> +    set_to_dummy_if_null(ops, set_cpuid);
> +    set_to_dummy_if_null(ops, gettscinfo);
> +    set_to_dummy_if_null(ops, settscinfo);
>      set_to_dummy_if_null(ops, getmemlist);
>      set_to_dummy_if_null(ops, hypercall_init);
>      set_to_dummy_if_null(ops, hvmcontext);
> @@ -134,8 +139,12 @@ void xsm_fixup_ops (struct xsm_operations *ops)
>      set_to_dummy_if_null(ops, hvm_set_isa_irq_level);
>      set_to_dummy_if_null(ops, hvm_set_pci_link_route);
>      set_to_dummy_if_null(ops, hvm_inject_msi);
> -    set_to_dummy_if_null(ops, mem_event);
> +    set_to_dummy_if_null(ops, mem_event_setup);
> +    set_to_dummy_if_null(ops, mem_event_control);
> +    set_to_dummy_if_null(ops, mem_event_op);
>      set_to_dummy_if_null(ops, mem_sharing);
> +    set_to_dummy_if_null(ops, mem_sharing_op);
> +    set_to_dummy_if_null(ops, audit_p2m);
>      set_to_dummy_if_null(ops, apic);
>      set_to_dummy_if_null(ops, xen_settime);
>      set_to_dummy_if_null(ops, memtype);
> diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
> index f8aff14..4f71604 100644
> --- a/xen/xsm/flask/hooks.c
> +++ b/xen/xsm/flask/hooks.c
> @@ -695,6 +695,12 @@ static int flask_setdebugging(struct domain *d)
>                             DOMAIN__SETDEBUGGING);
>  }
>  
> +static int flask_debug_op(struct domain *d)
> +{
> +    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN,
> +                           DOMAIN__SETDEBUGGING);
> +}
> +
>  static int flask_debug_keys(void)
>  {
>      return domain_has_xen(current->domain, XEN__DEBUG);
> @@ -1111,6 +1117,26 @@ static int flask_getpageframeinfo(struct page_info *page)
>      return avc_has_perm(dsec->sid, tsid, SECCLASS_MMU, MMU__PAGEINFO, NULL);
>  }
>  
> +static int flask_getpageframeinfo_domain(struct domain *d)
> +{
> +    return domain_has_perm(current->domain, d, SECCLASS_MMU, MMU__PAGEINFO);
> +}
> +
> +static int flask_set_cpuid(struct domain *d)
> +{
> +    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN2, DOMAIN2__SET_CPUID);
> +}
> +
> +static int flask_gettscinfo(struct domain *d)
> +{
> +    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN2, DOMAIN2__GETTSC);
> +}
> +
> +static int flask_settscinfo(struct domain *d)
> +{
> +    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN2, DOMAIN2__SETTSC);
> +}
> +
>  static int flask_getmemlist(struct domain *d)
>  {
>      return domain_has_perm(current->domain, d, SECCLASS_MMU, MMU__PAGELIST);
> @@ -1201,7 +1227,17 @@ static int flask_hvm_set_pci_link_route(struct domain *d)
>      return domain_has_perm(current->domain, d, SECCLASS_HVM, HVM__PCIROUTE);
>  }
>  
> -static int flask_mem_event(struct domain *d)
> +static int flask_mem_event_setup(struct domain *d)
> +{
> +    return domain_has_perm(current->domain, d, SECCLASS_HVM, HVM__MEM_EVENT);
> +}
> +
> +static int flask_mem_event_control(struct domain *d, int mode, int op)
> +{
> +    return domain_has_perm(current->domain, d, SECCLASS_HVM, HVM__MEM_EVENT);
> +}
> +
> +static int flask_mem_event_op(struct domain *d, int op)
>  {
>      return domain_has_perm(current->domain, d, SECCLASS_HVM, HVM__MEM_EVENT);
>  }
> @@ -1211,6 +1247,19 @@ static int flask_mem_sharing(struct domain *d)
>      return domain_has_perm(current->domain, d, SECCLASS_HVM, HVM__MEM_SHARING);
>  }
>  
> +static int flask_mem_sharing_op(struct domain *d, struct domain *cd, int op)
> +{
> +    int rc = domain_has_perm(current->domain, cd, SECCLASS_HVM, HVM__MEM_SHARING);
> +    if ( rc )
> +        return rc;
> +    return domain_has_perm(d, cd, SECCLASS_HVM, HVM__SHARE_MEM);
> +}
> +
> +static int flask_audit_p2m(struct domain *d)
> +{
> +    return domain_has_perm(current->domain, d, SECCLASS_HVM, HVM__AUDIT_P2M);
> +}
> +
>  static int flask_apic(struct domain *d, int cmd)
>  {
>      u32 perm;
> @@ -1586,6 +1635,7 @@ static struct xsm_operations flask_ops = {
>      .setdomainmaxmem = flask_setdomainmaxmem,
>      .setdomainhandle = flask_setdomainhandle,
>      .setdebugging = flask_setdebugging,
> +    .debug_op = flask_debug_op,
>      .perfcontrol = flask_perfcontrol,
>      .debug_keys = flask_debug_keys,
>      .getcpuinfo = flask_getcpuinfo,
> @@ -1654,6 +1704,10 @@ static struct xsm_operations flask_ops = {
>  #ifdef CONFIG_X86
>      .shadow_control = flask_shadow_control,
>      .getpageframeinfo = flask_getpageframeinfo,
> +    .getpageframeinfo_domain = flask_getpageframeinfo_domain,
> +    .set_cpuid = flask_set_cpuid,
> +    .gettscinfo = flask_gettscinfo,
> +    .settscinfo = flask_settscinfo,
>      .getmemlist = flask_getmemlist,
>      .hypercall_init = flask_hypercall_init,
>      .hvmcontext = flask_hvmcontext,
> @@ -1662,8 +1716,12 @@ static struct xsm_operations flask_ops = {
>      .hvm_set_pci_intx_level = flask_hvm_set_pci_intx_level,
>      .hvm_set_isa_irq_level = flask_hvm_set_isa_irq_level,
>      .hvm_set_pci_link_route = flask_hvm_set_pci_link_route,
> -    .mem_event = flask_mem_event,
> +    .mem_event_setup = flask_mem_event_setup,
> +    .mem_event_control = flask_mem_event_control,
> +    .mem_event_op = flask_mem_event_op,
>      .mem_sharing = flask_mem_sharing,
> +    .mem_sharing_op = flask_mem_sharing_op,
> +    .audit_p2m = flask_audit_p2m,
>      .apic = flask_apic,
>      .xen_settime = flask_xen_settime,
>      .memtype = flask_memtype,
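
Among the new FLASK hooks above, flask_mem_sharing_op is the one with a two-step check: the hypercall caller needs mem_sharing over the client domain, and the source domain additionally needs share_mem over it, with the first denial short-circuiting. A sketch of that shape using plain bitmask checks — illustrative only, not Xen's real domain_has_perm/avc_has_perm:

```c
#include <assert.h>

/* FLASK permission bits, as defined in av_permissions.h by this patch. */
#define HVM__MEM_SHARING 0x00001000u
#define HVM__SHARE_MEM   0x00002000u

/* Illustrative check helper: 0 if every requested bit is granted, else a
 * -EPERM-style failure. */
int has_perm(unsigned int granted, unsigned int requested)
{
    return (granted & requested) == requested ? 0 : -1;
}

/* Mirrors the shape of flask_mem_sharing_op: the caller must hold
 * mem_sharing over the client domain, and the source domain must hold
 * share_mem over it. The arguments are the access vectors the policy
 * grants for each (source, target) pair. */
int mem_sharing_op_check(unsigned int caller_over_cd, unsigned int d_over_cd)
{
    int rc = has_perm(caller_over_cd, HVM__MEM_SHARING);
    if ( rc )
        return rc;
    return has_perm(d_over_cd, HVM__SHARE_MEM);
}
```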
> diff --git a/xen/xsm/flask/include/av_perm_to_string.h b/xen/xsm/flask/include/av_perm_to_string.h
> index 10f8e80..997f098 100644
> --- a/xen/xsm/flask/include/av_perm_to_string.h
> +++ b/xen/xsm/flask/include/av_perm_to_string.h
> @@ -66,6 +66,9 @@
>     S_(SECCLASS_DOMAIN2, DOMAIN2__RELABELSELF, "relabelself")
>     S_(SECCLASS_DOMAIN2, DOMAIN2__MAKE_PRIV_FOR, "make_priv_for")
>     S_(SECCLASS_DOMAIN2, DOMAIN2__SET_AS_TARGET, "set_as_target")
> +   S_(SECCLASS_DOMAIN2, DOMAIN2__SET_CPUID, "set_cpuid")
> +   S_(SECCLASS_DOMAIN2, DOMAIN2__GETTSC, "gettsc")
> +   S_(SECCLASS_DOMAIN2, DOMAIN2__SETTSC, "settsc")
>     S_(SECCLASS_HVM, HVM__SETHVMC, "sethvmc")
>     S_(SECCLASS_HVM, HVM__GETHVMC, "gethvmc")
>     S_(SECCLASS_HVM, HVM__SETPARAM, "setparam")
> @@ -79,6 +82,8 @@
>     S_(SECCLASS_HVM, HVM__HVMCTL, "hvmctl")
>     S_(SECCLASS_HVM, HVM__MEM_EVENT, "mem_event")
>     S_(SECCLASS_HVM, HVM__MEM_SHARING, "mem_sharing")
> +   S_(SECCLASS_HVM, HVM__SHARE_MEM, "share_mem")
> +   S_(SECCLASS_HVM, HVM__AUDIT_P2M, "audit_p2m")
>     S_(SECCLASS_EVENT, EVENT__BIND, "bind")
>     S_(SECCLASS_EVENT, EVENT__SEND, "send")
>     S_(SECCLASS_EVENT, EVENT__STATUS, "status")
> diff --git a/xen/xsm/flask/include/av_permissions.h b/xen/xsm/flask/include/av_permissions.h
> index f7cfee1..8596a55 100644
> --- a/xen/xsm/flask/include/av_permissions.h
> +++ b/xen/xsm/flask/include/av_permissions.h
> @@ -68,6 +68,9 @@
>  #define DOMAIN2__RELABELSELF                      0x00000004UL
>  #define DOMAIN2__MAKE_PRIV_FOR                    0x00000008UL
>  #define DOMAIN2__SET_AS_TARGET                    0x00000010UL
> +#define DOMAIN2__SET_CPUID                        0x00000020UL
> +#define DOMAIN2__GETTSC                           0x00000040UL
> +#define DOMAIN2__SETTSC                           0x00000080UL
>  
>  #define HVM__SETHVMC                              0x00000001UL
>  #define HVM__GETHVMC                              0x00000002UL
> @@ -82,6 +85,8 @@
>  #define HVM__HVMCTL                               0x00000400UL
>  #define HVM__MEM_EVENT                            0x00000800UL
>  #define HVM__MEM_SHARING                          0x00001000UL
> +#define HVM__SHARE_MEM                            0x00002000UL
> +#define HVM__AUDIT_P2M                            0x00004000UL
>  
>  #define EVENT__BIND                               0x00000001UL
>  #define EVENT__SEND                               0x00000002UL
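
Each permission name in av_permissions.h maps to a distinct power-of-two bit within its class, so the new share_mem and audit_p2m entries simply take the next free bits after mem_sharing, and any subset of permissions can be OR'd into a single 32-bit access vector. A small sketch of the encoding — bit values copied from the hunk above, helper illustrative:

```c
#include <assert.h>

/* HVM-class permission bits from av_permissions.h, including the two this
 * patch appends. */
#define HVM__MEM_EVENT   0x00000800u
#define HVM__MEM_SHARING 0x00001000u
#define HVM__SHARE_MEM   0x00002000u /* new in this patch */
#define HVM__AUDIT_P2M   0x00004000u /* new in this patch */

/* An access vector granting several permissions at once. */
unsigned int example_vector(void)
{
    return HVM__MEM_EVENT | HVM__MEM_SHARING | HVM__SHARE_MEM;
}
```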



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 18:54:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 18:54:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SySRO-0003dg-Ey; Mon, 06 Aug 2012 18:54:10 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1SySRN-0003dY-0Z
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 18:54:09 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1344279239!11548064!1
X-Originating-IP: [74.125.82.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2839 invoked from network); 6 Aug 2012 18:54:00 -0000
Received: from mail-we0-f173.google.com (HELO mail-we0-f173.google.com)
	(74.125.82.173)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 18:54:00 -0000
Received: by weyz53 with SMTP id z53so2572222wey.32
	for <xen-devel@lists.xen.org>; Mon, 06 Aug 2012 11:53:59 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=ObHPrE5c96rJzzT0MrQKNtIrAn+L0iqH6pw/fIJWZKw=;
	b=0UMRby49S7+ojcGeGYikGwpwT3pkO/cqkJDFWHHgsytYnD09o/gqL8g3KuTWwDDf+U
	yvmvjivKK3EX3MRnNWfg9FqqLmZMLuziARyM49ViN/eHgELf22Dv8Q2IpL/URagMiBu+
	s+KaNaM0Nlgku2tRstoiBOcAUvKaQuaKs4DfGP/Y2a47//cIWdpwVJm7V4lnV75r1dpb
	WZMUaWnxltLmB20w6MNJapa/EOXfLquBtjX7VBzhDavKf/rMRtdAlwxwBMW2uq+7X43g
	OUMMeMULm5lgbZ0k0Q5JxdXK/+KVifavCUR4wZWsuGWUzXhWKMrK3ROhkGLQAYxtXJ+r
	AaWw==
Received: by 10.180.83.106 with SMTP id p10mr20649682wiy.21.1344279239639;
	Mon, 06 Aug 2012 11:53:59 -0700 (PDT)
Received: from [192.168.1.68] (host86-178-46-238.range86-178.btcentralplus.com.
	[86.178.46.238])
	by mx.google.com with ESMTPS id k20sm17007602wiv.11.2012.08.06.11.53.53
	(version=SSLv3 cipher=OTHER); Mon, 06 Aug 2012 11:53:58 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.32.0.111121
Date: Mon, 06 Aug 2012 19:53:49 +0100
From: Keir Fraser <keir.xen@gmail.com>
To: Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	<xen-devel@lists.xen.org>
Message-ID: <CC45D14D.3A904%keir.xen@gmail.com>
Thread-Topic: [Xen-devel] [PATCH 12/18] xsm: Add missing domctl and
	mem_sharing hooks
Thread-Index: Ac10BNB6BoEaqC3UcEuDHKmXzGt/fA==
In-Reply-To: <1344263550-3941-13-git-send-email-dgdegra@tycho.nsa.gov>
Mime-version: 1.0
Subject: Re: [Xen-devel] [PATCH 12/18] xsm: Add missing domctl and
 mem_sharing hooks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

When someone wants to add a new domctl/sysctl, how many places will they
have to add things to ensure that XSM does the right thing for a basic setup,
allowing only dom0 access to the new op? And how big is the risk that we end
up with new ops that have no access control at all?
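
For context, with the structure this patch follows, one new hook touches about half a dozen places: the xsm_operations struct and its inline wrapper in xsm.h, the default hook in dummy.h, the fixup list in dummy.c, the FLASK implementation and ops-table entry in hooks.c, and the access-vector definitions plus policy. The fixup step in dummy.c bounds part of the risk: a hook that exists but is left NULL by a module falls back to the dummy (dom0-only) default, though an op that never calls any hook remains unchecked. A minimal sketch of that mechanism, with simplified stand-in hook signatures rather than Xen's real ones:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified model of xsm_fixup_ops: any hook a security module leaves NULL
 * is replaced by the dom0-only dummy default, so a forgotten FLASK hook
 * degrades to "privileged domains only" rather than "no check at all". */
struct xsm_operations {
    int (*debug_op)(int caller_is_priv);
    int (*audit_p2m)(int caller_is_priv);
};

static int dummy_hook(int caller_is_priv)
{
    return caller_is_priv ? 0 : -1; /* -EPERM */
}

#define set_to_dummy_if_null(ops, fn) \
    do { if ( !(ops)->fn ) (ops)->fn = dummy_hook; } while (0)

void xsm_fixup_ops(struct xsm_operations *ops)
{
    set_to_dummy_if_null(ops, debug_op);
    set_to_dummy_if_null(ops, audit_p2m);
}
```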


On 06/08/2012 15:32, "Daniel De Graaf" <dgdegra@tycho.nsa.gov> wrote:

> This patch adds new XSM hooks to cover the 12 domctls that were not
> previously covered by an XSM hook, and splits up the mem_sharing and
> mem_event XSM hooks to better cover what the code is doing.
> 
> Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
> ---
>  tools/flask/policy/policy/flask/access_vectors |   5 +
>  tools/flask/policy/policy/modules/xen/xen.if   |   2 +
>  xen/arch/x86/domctl.c                          | 125 +++++++++++++----
>  xen/arch/x86/mm/mem_event.c                    |  45 ++++-----
>  xen/arch/x86/mm/mem_sharing.c                  |  23 ++++-
>  xen/include/asm-x86/mem_event.h                |   1 -
>  xen/include/xsm/dummy.h                        |  65 ++++++++++++-
>  xen/include/xsm/xsm.h                          |  62 +++++++++++-
>  xen/xsm/dummy.c                                |  11 ++-
>  xen/xsm/flask/hooks.c                          |  62 +++++++++++-
>  xen/xsm/flask/include/av_perm_to_string.h      |   5 +
>  xen/xsm/flask/include/av_permissions.h         |   5 +
>  12 files changed, 318 insertions(+), 93 deletions(-)
> 
> diff --git a/tools/flask/policy/policy/flask/access_vectors b/tools/flask/policy/policy/flask/access_vectors
> index 11d02da..28b8ada 100644
> --- a/tools/flask/policy/policy/flask/access_vectors
> +++ b/tools/flask/policy/policy/flask/access_vectors
> @@ -80,6 +80,9 @@ class domain2
> relabelself
> make_priv_for
> set_as_target
> + set_cpuid
> + gettsc
> + settsc
>  }
>  
>  class hvm
> @@ -97,6 +100,8 @@ class hvm
>      hvmctl
>      mem_event
>      mem_sharing
> + share_mem
> + audit_p2m
>  }
>  
>  class event
> diff --git a/tools/flask/policy/policy/modules/xen/xen.if b/tools/flask/policy/policy/modules/xen/xen.if
> index 4de99c8..f9bd757 100644
> --- a/tools/flask/policy/policy/modules/xen/xen.if
> +++ b/tools/flask/policy/policy/modules/xen/xen.if
> @@ -29,6 +29,7 @@ define(`create_domain_common', `
> getdomaininfo hypercall setvcpucontext setextvcpucontext
> scheduler getvcpuinfo getvcpuextstate getaddrsize
> getvcpuaffinity setvcpuaffinity };
> + allow $1 $2:domain2 { set_cpuid settsc };
> allow $1 $2:security check_context;
> allow $1 $2:shadow enable;
> allow $1 $2:mmu {map_read map_write adjust memorymap physmap pinpage};
> @@ -67,6 +68,7 @@ define(`migrate_domain_out', `
> allow $1 $2:hvm { gethvmc getparam irqlevel };
> allow $1 $2:mmu { stat pageinfo map_read };
> allow $1 $2:domain { getaddrsize getvcpucontext getextvcpucontext
> getvcpuextstate pause destroy };
> + allow $1 $2:domain2 gettsc;
>  ')
>  
>  
> ################################################################################
> diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
> index bcb5b2d..95f34d2 100644
> --- a/xen/arch/x86/domctl.c
> +++ b/xen/arch/x86/domctl.c
> @@ -54,26 +54,6 @@ long arch_do_domctl(
>  
>      switch ( domctl->cmd )
>      {
> -    /* TODO: the following do not have XSM hooks yet */
> -    case XEN_DOMCTL_set_cpuid:
> -    case XEN_DOMCTL_suppress_spurious_page_faults:
> -    case XEN_DOMCTL_debug_op:
> -    case XEN_DOMCTL_gettscinfo:
> -    case XEN_DOMCTL_settscinfo:
> -    case XEN_DOMCTL_audit_p2m:
> -    case XEN_DOMCTL_gdbsx_guestmemio:
> -    case XEN_DOMCTL_gdbsx_pausevcpu:
> -    case XEN_DOMCTL_gdbsx_unpausevcpu:
> -    case XEN_DOMCTL_gdbsx_domstatus:
> -    /* getpageframeinfo[23] will leak XEN_DOMCTL_PFINFO_XTAB on target GFNs */
> -    case XEN_DOMCTL_getpageframeinfo2:
> -    case XEN_DOMCTL_getpageframeinfo3:
> -        if ( !IS_PRIV(current->domain) )
> -            return -EPERM;
> -    }
> -
> -    switch ( domctl->cmd )
> -    {
>  
>      case XEN_DOMCTL_shadow_op:
>      {
> @@ -190,6 +170,13 @@ long arch_do_domctl(
>              if ( unlikely((d = rcu_lock_domain_by_id(dom)) == NULL) )
>                  break;
>  
> +            ret = xsm_getpageframeinfo_domain(d);
> +            if ( ret )
> +            {
> +                rcu_unlock_domain(d);
> +                break;
> +            }
> +
>              if ( unlikely(num > 1024) ||
>                   unlikely(num != domctl->u.getpageframeinfo3.num) )
>              {
> @@ -287,6 +274,13 @@ long arch_do_domctl(
>          if ( unlikely((d = rcu_lock_domain_by_id(dom)) == NULL) )
>              break;
>  
> +        ret = xsm_getpageframeinfo_domain(d);
> +        if ( ret )
> +        {
> +            rcu_unlock_domain(d);
> +            break;
> +        }
> +
>          if ( unlikely(num > 1024) )
>          {
>              ret = -E2BIG;
> @@ -1106,6 +1100,10 @@ long arch_do_domctl(
>          if ( d == NULL )
>              break;
>  
> +        ret = xsm_set_cpuid(d);
> +        if ( ret )
> +            goto set_cpuid_out;
> +
>          for ( i = 0; i < MAX_CPUID_INPUT; i++ )
>          {
>              cpuid = &d->arch.cpuids[i];
> @@ -1129,6 +1127,7 @@ long arch_do_domctl(
>              ret = 0;
>          }
>  
> +    set_cpuid_out:
>          rcu_unlock_domain(d);
>      }
>      break;
> @@ -1143,6 +1142,10 @@ long arch_do_domctl(
>          if ( d == NULL )
>              break;
>  
> +        ret = xsm_gettscinfo(d);
> +        if ( ret )
> +            goto gettscinfo_out;
> +
>          domain_pause(d);
>          tsc_get_info(d, &info.tsc_mode,
>                          &info.elapsed_nsec,
> @@ -1154,6 +1157,7 @@ long arch_do_domctl(
>              ret = 0;
>          domain_unpause(d);
>  
> +    gettscinfo_out:
>          rcu_unlock_domain(d);
>      }
>      break;
> @@ -1167,15 +1171,20 @@ long arch_do_domctl(
>          if ( d == NULL )
>              break;
>  
> +        ret = xsm_settscinfo(d);
> +        if ( ret )
> +            goto settscinfo_out;
> +
>          domain_pause(d);
>          tsc_set_info(d, domctl->u.tsc_info.info.tsc_mode,
>                       domctl->u.tsc_info.info.elapsed_nsec,
>                       domctl->u.tsc_info.info.gtsc_khz,
>                       domctl->u.tsc_info.info.incarnation);
>          domain_unpause(d);
> +        ret = 0;
>  
> +    settscinfo_out:
>          rcu_unlock_domain(d);
> -        ret = 0;
>      }
>      break;
>  
> @@ -1187,9 +1196,10 @@ long arch_do_domctl(
>          d = rcu_lock_domain_by_id(domctl->domain);
>          if ( d != NULL )
>          {
> -            d->arch.suppress_spurious_page_faults = 1;
> +            ret = xsm_domctl(d, domctl->cmd);
> +            if ( !ret )
> +                d->arch.suppress_spurious_page_faults = 1;
>              rcu_unlock_domain(d);
> -            ret = 0;
>          }
>      }
>      break;
> @@ -1204,6 +1214,10 @@ long arch_do_domctl(
>          if ( d == NULL )
>              break;
>  
> +        ret = xsm_debug_op(d);
> +        if ( ret )
> +            goto debug_op_out;
> +
>          ret = -EINVAL;
>          if ( (domctl->u.debug_op.vcpu >= d->max_vcpus) ||
>               ((v = d->vcpu[domctl->u.debug_op.vcpu]) == NULL) )
> @@ -1228,6 +1242,10 @@ long arch_do_domctl(
>          if ( (d = rcu_lock_domain_by_id(domctl->domain)) == NULL )
>              break;
>  
> +        ret = xsm_debug_op(d);
> +        if ( ret )
> +            goto gdbsx_guestmemio_out;
> +
>          domctl->u.gdbsx_guest_memio.remain =
>              domctl->u.gdbsx_guest_memio.len;
>  
> @@ -1235,6 +1253,7 @@ long arch_do_domctl(
>          if ( !ret && copy_to_guest(u_domctl, domctl, 1) )
>              ret = -EFAULT;
>  
> +    gdbsx_guestmemio_out:
>          rcu_unlock_domain(d);
>      }
>      break;
> @@ -1248,21 +1267,20 @@ long arch_do_domctl(
>          if ( (d = rcu_lock_domain_by_id(domctl->domain)) == NULL )
>              break;
>  
> +        ret = xsm_debug_op(d);
> +        if ( ret )
> +            goto gdbsx_pausevcpu_out;
> +
>          ret = -EBUSY;
>          if ( !d->is_paused_by_controller )
> -        {
> -            rcu_unlock_domain(d);
> -            break;
> -        }
> +            goto gdbsx_pausevcpu_out;
>          ret = -EINVAL;
>          if ( domctl->u.gdbsx_pauseunp_vcpu.vcpu >= MAX_VIRT_CPUS ||
>               (v = d->vcpu[domctl->u.gdbsx_pauseunp_vcpu.vcpu]) == NULL )
> -        {
> -            rcu_unlock_domain(d);
> -            break;
> -        }
> +            goto gdbsx_pausevcpu_out;
>          vcpu_pause(v);
>          ret = 0;
> +    gdbsx_pausevcpu_out:
>          rcu_unlock_domain(d);
>      }
>      break;
> @@ -1276,23 +1294,22 @@ long arch_do_domctl(
>          if ( (d = rcu_lock_domain_by_id(domctl->domain)) == NULL )
>              break;
>  
> +        ret = xsm_debug_op(d);
> +        if ( ret )
> +            goto gdbsx_unpausevcpu_out;
> +
>          ret = -EBUSY;
>          if ( !d->is_paused_by_controller )
> -        {
> -            rcu_unlock_domain(d);
> -            break;
> -        }
> +            goto gdbsx_unpausevcpu_out;
>          ret = -EINVAL;
>          if ( domctl->u.gdbsx_pauseunp_vcpu.vcpu >= MAX_VIRT_CPUS ||
>               (v = d->vcpu[domctl->u.gdbsx_pauseunp_vcpu.vcpu]) == NULL )
> -        {
> -            rcu_unlock_domain(d);
> -            break;
> -        }
> +            goto gdbsx_unpausevcpu_out;
>          if ( !atomic_read(&v->pause_count) )
>              printk("WARN: Unpausing vcpu:%d which is not paused\n", v->vcpu_id);
>          vcpu_unpause(v);
>          ret = 0;
> +    gdbsx_unpausevcpu_out:
>          rcu_unlock_domain(d);
>      }
>      break;
> @@ -1306,6 +1323,10 @@ long arch_do_domctl(
>          if ( (d = rcu_lock_domain_by_id(domctl->domain)) == NULL )
>              break;
>  
> +        ret = xsm_debug_op(d);
> +        if ( ret )
> +            goto gdbsx_domstatus_out;
> +
>          domctl->u.gdbsx_domstatus.vcpu_id = -1;
>          domctl->u.gdbsx_domstatus.paused = d->is_paused_by_controller;
>          if ( domctl->u.gdbsx_domstatus.paused )
> @@ -1325,6 +1346,7 @@ long arch_do_domctl(
>          ret = 0;
>          if ( copy_to_guest(u_domctl, domctl, 1) )
>              ret = -EFAULT;
> +    gdbsx_domstatus_out:
>          rcu_unlock_domain(d);
>      }
>      break;
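
The gdbsx changes above replace per-branch rcu_unlock_domain calls with a single exit label, since the new XSM check runs only after the domain reference is taken and every failure path must release it. A minimal sketch of that lock/check/unwind shape — the names and error codes are simplified stand-ins, not Xen's real API:

```c
#include <assert.h>

/* Stand-ins for rcu_lock_domain_by_id/rcu_unlock_domain; the counter lets
 * us verify the reference is always released. */
static int lock_count;

static void lock_domain(void)   { lock_count++; }
static void unlock_domain(void) { lock_count--; }

int do_debug_op(int xsm_allows, int vcpu_valid)
{
    int ret;

    lock_domain();

    ret = xsm_allows ? 0 : -1;  /* stand-in for xsm_debug_op(d) */
    if ( ret )
        goto out;

    ret = vcpu_valid ? 0 : -22; /* -EINVAL from the vcpu range check */

 out:
    unlock_domain();            /* single unwind point, as in the patch */
    return ret;
}
```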
> @@ -1464,10 +1486,8 @@ long arch_do_domctl(
>          d = rcu_lock_domain_by_id(domctl->domain);
>          if ( d != NULL )
>          {
> -            ret = xsm_mem_event(d);
> -            if ( !ret )
> -                ret = mem_event_domctl(d, &domctl->u.mem_event_op,
> -                                       guest_handle_cast(u_domctl, void));
> +            ret = mem_event_domctl(d, &domctl->u.mem_event_op,
> +                                   guest_handle_cast(u_domctl, void));
>              rcu_unlock_domain(d);
>              copy_to_guest(u_domctl, domctl, 1);
>          } 
> @@ -1496,16 +1516,19 @@ long arch_do_domctl(
>      {
>          struct domain *d;
>  
> -        ret = rcu_lock_remote_target_domain_by_id(domctl->domain, &d);
> -        if ( ret != 0 )
> +        d = rcu_lock_domain_by_id(domctl->domain);
> +        if ( d == NULL )
>              break;
>  
> -        audit_p2m(d,
> -                  &domctl->u.audit_p2m.orphans,
> -                  &domctl->u.audit_p2m.m2p_bad,
> -                  &domctl->u.audit_p2m.p2m_bad);
> +        ret = xsm_audit_p2m(d);
> +        if ( !ret )
> +            audit_p2m(d,
> +                      &domctl->u.audit_p2m.orphans,
> +                      &domctl->u.audit_p2m.m2p_bad,
> +                      &domctl->u.audit_p2m.p2m_bad);
> +
>          rcu_unlock_domain(d);
> -        if ( copy_to_guest(u_domctl, domctl, 1) )
> +        if ( !ret && copy_to_guest(u_domctl, domctl, 1) )
>              ret = -EFAULT;
>      }
>      break;
> @@ -1524,7 +1547,7 @@ long arch_do_domctl(
>          d = rcu_lock_domain_by_id(domctl->domain);
>          if ( d != NULL )
>          {
> -            ret = xsm_mem_event(d);
> +            ret = xsm_mem_event_setup(d);
>              if ( !ret ) {
>                  p2m = p2m_get_hostp2m(d);
> +                p2m->access_required = domctl->u.access_required.access_required;
> diff --git a/xen/arch/x86/mm/mem_event.c b/xen/arch/x86/mm/mem_event.c
> index d728889..a5b02d9 100644
> --- a/xen/arch/x86/mm/mem_event.c
> +++ b/xen/arch/x86/mm/mem_event.c
> @@ -29,6 +29,7 @@
>  #include <asm/mem_paging.h>
>  #include <asm/mem_access.h>
>  #include <asm/mem_sharing.h>
> +#include <xsm/xsm.h>
>  
>  /* for public/io/ring.h macros */
>  #define xen_mb()   mb()
> @@ -439,34 +440,22 @@ static void mem_sharing_notification(struct vcpu *v, unsigned int port)
>          mem_sharing_sharing_resume(v->domain);
>  }
>  
> -struct domain *get_mem_event_op_target(uint32_t domain, int *rc)
> -{
> -    struct domain *d;
> -
> -    /* Get the target domain */
> -    *rc = rcu_lock_remote_target_domain_by_id(domain, &d);
> -    if ( *rc != 0 )
> -        return NULL;
> -
> -    /* Not dying? */
> -    if ( d->is_dying )
> -    {
> -        rcu_unlock_domain(d);
> -        *rc = -EINVAL;
> -        return NULL;
> -    }
> -    
> -    return d;
> -}
> -
>  int do_mem_event_op(int op, uint32_t domain, void *arg)
>  {
>      int ret;
>      struct domain *d;
>  
> -    d = get_mem_event_op_target(domain, &ret);
> +    d = rcu_lock_domain_by_id(domain);
>      if ( !d )
> -        return ret;
> +        return -ESRCH;
> +
> +    ret = -EINVAL;
> +    if ( d->is_dying || d == current->domain )
> +        goto out;
> +
> +    ret = xsm_mem_event_op(d, op);
> +    if ( ret )
> +        goto out;
>  
>      switch (op)
>      {
> @@ -483,6 +472,7 @@ int do_mem_event_op(int op, uint32_t domain, void *arg)
>              ret = -ENOSYS;
>      }
>  
> + out:
>      rcu_unlock_domain(d);
>      return ret;
>  }
> @@ -516,6 +506,10 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
>  {
>      int rc;
>  
> +    rc = xsm_mem_event_control(d, mec->mode, mec->op);
> +    if ( rc )
> +        return rc;
> +
>      if ( unlikely(d == current->domain) )
>      {
>          gdprintk(XENLOG_INFO, "Tried to do a memory event op on itself.\n");
> @@ -537,13 +531,6 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
>          return -EINVAL;
>      }
>  
> -    /* TODO: XSM hook */
> -#if 0
> -    rc = xsm_mem_event_control(d, mec->op);
> -    if ( rc )
> -        return rc;
> -#endif
> -
>      rc = -ENOSYS;
>  
>      switch ( mec->mode )
> diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
> index 5103285..a7e6c5c 100644
> --- a/xen/arch/x86/mm/mem_sharing.c
> +++ b/xen/arch/x86/mm/mem_sharing.c
> @@ -34,6 +34,7 @@
>  #include <asm/atomic.h>
>  #include <xen/rcupdate.h>
>  #include <asm/event.h>
> +#include <xsm/xsm.h>
>  
>  #include "mm-locks.h"
>  
> @@ -1345,11 +1346,18 @@ int mem_sharing_memop(struct domain *d, xen_mem_sharing_op_t *mec)
>              if ( !mem_sharing_enabled(d) )
>                  return -EINVAL;
>  
> -            cd = get_mem_event_op_target(mec->u.share.client_domain, &rc);
> +            cd = rcu_lock_domain_by_id(mec->u.share.client_domain);
>              if ( !cd )
> +                return -ESRCH;
> +
> +            rc = xsm_mem_sharing_op(d, cd, mec->op);
> +            if ( rc )
> +            {
> +                rcu_unlock_domain(cd);
>                  return rc;
> +            }
>  
> -            if ( !mem_sharing_enabled(cd) )
> +            if ( cd == current->domain || !mem_sharing_enabled(cd) )
>              {
>                  rcu_unlock_domain(cd);
>                  return -EINVAL;
> @@ -1401,11 +1409,18 @@ int mem_sharing_memop(struct domain *d, xen_mem_sharing_op_t *mec)
>              if ( !mem_sharing_enabled(d) )
>                  return -EINVAL;
>  
> -            cd = get_mem_event_op_target(mec->u.share.client_domain, &rc);
> +            cd = rcu_lock_domain_by_id(mec->u.share.client_domain);
>              if ( !cd )
> +                return -ESRCH;
> +
> +            rc = xsm_mem_sharing_op(d, cd, mec->op);
> +            if ( rc )
> +            {
> +                rcu_unlock_domain(cd);
>                  return rc;
> +            }
>  
> -            if ( !mem_sharing_enabled(cd) )
> +            if ( cd == current->domain || !mem_sharing_enabled(cd) )
>              {
>                  rcu_unlock_domain(cd);
>                  return -EINVAL;
> diff --git a/xen/include/asm-x86/mem_event.h b/xen/include/asm-x86/mem_event.h
> index 23d71c1..448be4f 100644
> --- a/xen/include/asm-x86/mem_event.h
> +++ b/xen/include/asm-x86/mem_event.h
> @@ -62,7 +62,6 @@ void mem_event_put_request(struct domain *d, struct mem_event_domain *med,
>  int mem_event_get_response(struct domain *d, struct mem_event_domain *med,
>                             mem_event_response_t *rsp);
>  
> -struct domain *get_mem_event_op_target(uint32_t domain, int *rc);
>  int do_mem_event_op(int op, uint32_t domain, void *arg);
>  int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
>                       XEN_GUEST_HANDLE(void) u_domctl);
> diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
> index 0d849cc..c71c08b 100644
> --- a/xen/include/xsm/dummy.h
> +++ b/xen/include/xsm/dummy.h
> @@ -171,6 +171,13 @@ static XSM_DEFAULT(int, setdebugging) (struct domain *d)
>      return 0;
>  }
>  
> +static XSM_DEFAULT(int, debug_op) (struct domain *d)
> +{
> +    if ( !IS_PRIV(current->domain) )
> +        return -EPERM;
> +    return 0;
> +}
> +
>  static XSM_DEFAULT(int, perfcontrol) (void)
>  {
>      if ( !IS_PRIV(current->domain) )
> @@ -557,6 +564,34 @@ static XSM_DEFAULT(int, getpageframeinfo) (struct page_info *page)
>      return 0;
>  }
>  
> +static XSM_DEFAULT(int, getpageframeinfo_domain) (struct domain *d)
> +{
> +    if ( !IS_PRIV(current->domain) )
> +        return -EPERM;
> +    return 0;
> +}
> +
> +static XSM_DEFAULT(int, set_cpuid) (struct domain *d)
> +{
> +    if ( !IS_PRIV(current->domain) )
> +        return -EPERM;
> +    return 0;
> +}
> +
> +static XSM_DEFAULT(int, gettscinfo) (struct domain *d)
> +{
> +    if ( !IS_PRIV(current->domain) )
> +        return -EPERM;
> +    return 0;
> +}
> +
> +static XSM_DEFAULT(int, settscinfo) (struct domain *d)
> +{
> +    if ( !IS_PRIV(current->domain) )
> +        return -EPERM;
> +    return 0;
> +}
> +
>  static XSM_DEFAULT(int, getmemlist) (struct domain *d)
>  {
>      if ( !IS_PRIV(current->domain) )
> @@ -627,13 +662,27 @@ static XSM_DEFAULT(int, hvm_inject_msi) (struct domain *d)
>      return 0;
>  }
>  
> -static XSM_DEFAULT(int, mem_event) (struct domain *d)
> +static XSM_DEFAULT(int, mem_event_setup) (struct domain *d)
>  {
>      if ( !IS_PRIV(current->domain) )
>          return -EPERM;
>      return 0;
>  }
>  
> +static XSM_DEFAULT(int, mem_event_control) (struct domain *d, int mode, int op)
> +{
> +    if ( !IS_PRIV(current->domain) )
> +        return -EPERM;
> +    return 0;
> +}
> +
> +static XSM_DEFAULT(int, mem_event_op) (struct domain *d, int op)
> +{
> +    if ( !IS_PRIV_FOR(current->domain, d) )
> +        return -EPERM;
> +    return 0;
> +}
> +
>  static XSM_DEFAULT(int, mem_sharing) (struct domain *d)
>  {
>      if ( !IS_PRIV(current->domain) )
> @@ -641,6 +690,20 @@ static XSM_DEFAULT(int, mem_sharing) (struct domain *d)
>      return 0;
>  }
>  
> +static XSM_DEFAULT(int, mem_sharing_op) (struct domain *d, struct domain *cd, int op)
> +{
> +    if ( !IS_PRIV_FOR(current->domain, cd) )
> +        return -EPERM;
> +    return 0;
> +}
> +
> +static XSM_DEFAULT(int, audit_p2m) (struct domain *d)
> +{
> +    if ( !IS_PRIV(current->domain) )
> +        return -EPERM;
> +    return 0;
> +}
> +
>  static XSM_DEFAULT(int, apic) (struct domain *d, int cmd)
>  {
>      if ( !IS_PRIV(current->domain) )
> diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
> index 1a9f35b..b473b54 100644
> --- a/xen/include/xsm/xsm.h
> +++ b/xen/include/xsm/xsm.h
> @@ -67,6 +67,7 @@ struct xsm_operations {
>      int (*setdomainmaxmem) (struct domain *d);
>      int (*setdomainhandle) (struct domain *d);
>      int (*setdebugging) (struct domain *d);
> +    int (*debug_op) (struct domain *d);
>      int (*perfcontrol) (void);
>      int (*debug_keys) (void);
>      int (*getcpuinfo) (void);
> @@ -142,6 +143,10 @@ struct xsm_operations {
>  #ifdef CONFIG_X86
>      int (*shadow_control) (struct domain *d, uint32_t op);
>      int (*getpageframeinfo) (struct page_info *page);
> +    int (*getpageframeinfo_domain) (struct domain *d);
> +    int (*set_cpuid) (struct domain *d);
> +    int (*gettscinfo) (struct domain *d);
> +    int (*settscinfo) (struct domain *d);
>      int (*getmemlist) (struct domain *d);
>      int (*hypercall_init) (struct domain *d);
>      int (*hvmcontext) (struct domain *d, uint32_t op);
> @@ -152,8 +157,12 @@ struct xsm_operations {
>      int (*hvm_set_isa_irq_level) (struct domain *d);
>      int (*hvm_set_pci_link_route) (struct domain *d);
>      int (*hvm_inject_msi) (struct domain *d);
> -    int (*mem_event) (struct domain *d);
> +    int (*mem_event_setup) (struct domain *d);
> +    int (*mem_event_control) (struct domain *d, int mode, int op);
> +    int (*mem_event_op) (struct domain *d, int op);
>      int (*mem_sharing) (struct domain *d);
> +    int (*mem_sharing_op) (struct domain *d, struct domain *cd, int op);
> +    int (*audit_p2m) (struct domain *d);
>      int (*apic) (struct domain *d, int cmd);
>      int (*xen_settime) (void);
>      int (*memtype) (uint32_t access);
> @@ -302,6 +311,11 @@ static inline int xsm_setdebugging (struct domain *d)
>      return xsm_call(setdebugging(d));
>  }
>  
> +static inline int xsm_debug_op (struct domain *d)
> +{
> +    return xsm_call(debug_op(d));
> +}
> +
>  static inline int xsm_perfcontrol (void)
>  {
>      return xsm_call(perfcontrol());
> @@ -329,7 +343,7 @@ static inline int xsm_get_pmstat(void)
>  
>  static inline int xsm_setpminfo(void)
>  {
> - return xsm_call(setpminfo());
> +    return xsm_call(setpminfo());
>  }
>  
>  static inline int xsm_pm_op(void)
> @@ -608,6 +622,26 @@ static inline int xsm_getpageframeinfo (struct page_info *page)
>      return xsm_call(getpageframeinfo(page));
>  }
>  
> +static inline int xsm_getpageframeinfo_domain (struct domain *d)
> +{
> +    return xsm_call(getpageframeinfo_domain(d));
> +}
> +
> +static inline int xsm_set_cpuid (struct domain *d)
> +{
> +    return xsm_call(set_cpuid(d));
> +}
> +
> +static inline int xsm_gettscinfo (struct domain *d)
> +{
> +    return xsm_call(gettscinfo(d));
> +}
> +
> +static inline int xsm_settscinfo (struct domain *d)
> +{
> +    return xsm_call(settscinfo(d));
> +}
> +
>  static inline int xsm_getmemlist (struct domain *d)
>  {
>      return xsm_call(getmemlist(d));
> @@ -658,9 +692,19 @@ static inline int xsm_hvm_inject_msi (struct domain *d)
>      return xsm_call(hvm_inject_msi(d));
>  }
>  
> -static inline int xsm_mem_event (struct domain *d)
> +static inline int xsm_mem_event_setup (struct domain *d)
> +{
> +    return xsm_call(mem_event_setup(d));
> +}
> +
> +static inline int xsm_mem_event_control (struct domain *d, int mode, int op)
> +{
> +    return xsm_call(mem_event_control(d, mode, op));
> +}
> +
> +static inline int xsm_mem_event_op (struct domain *d, int op)
>  {
> -    return xsm_call(mem_event(d));
> +    return xsm_call(mem_event_op(d, op));
>  }
>  
>  static inline int xsm_mem_sharing (struct domain *d)
> @@ -668,6 +712,16 @@ static inline int xsm_mem_sharing (struct domain *d)
>      return xsm_call(mem_sharing(d));
>  }
>  
> +static inline int xsm_mem_sharing_op (struct domain *d, struct domain *cd, int op)
> +{
> +    return xsm_call(mem_sharing_op(d, cd, op));
> +}
> +
> +static inline int xsm_audit_p2m (struct domain *d)
> +{
> +    return xsm_call(audit_p2m(d));
> +}
> +
>  static inline int xsm_apic (struct domain *d, int cmd)
>  {
>      return xsm_call(apic(d, cmd));
> diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
> index af532b8..09935d8 100644
> --- a/xen/xsm/dummy.c
> +++ b/xen/xsm/dummy.c
> @@ -51,6 +51,7 @@ void xsm_fixup_ops (struct xsm_operations *ops)
>      set_to_dummy_if_null(ops, setdomainmaxmem);
>      set_to_dummy_if_null(ops, setdomainhandle);
>      set_to_dummy_if_null(ops, setdebugging);
> +    set_to_dummy_if_null(ops, debug_op);
>      set_to_dummy_if_null(ops, perfcontrol);
>      set_to_dummy_if_null(ops, debug_keys);
>      set_to_dummy_if_null(ops, getcpuinfo);
> @@ -124,6 +125,10 @@ void xsm_fixup_ops (struct xsm_operations *ops)
>  #ifdef CONFIG_X86
>      set_to_dummy_if_null(ops, shadow_control);
>      set_to_dummy_if_null(ops, getpageframeinfo);
> +    set_to_dummy_if_null(ops, getpageframeinfo_domain);
> +    set_to_dummy_if_null(ops, set_cpuid);
> +    set_to_dummy_if_null(ops, gettscinfo);
> +    set_to_dummy_if_null(ops, settscinfo);
>      set_to_dummy_if_null(ops, getmemlist);
>      set_to_dummy_if_null(ops, hypercall_init);
>      set_to_dummy_if_null(ops, hvmcontext);
> @@ -134,8 +139,12 @@ void xsm_fixup_ops (struct xsm_operations *ops)
>      set_to_dummy_if_null(ops, hvm_set_isa_irq_level);
>      set_to_dummy_if_null(ops, hvm_set_pci_link_route);
>      set_to_dummy_if_null(ops, hvm_inject_msi);
> -    set_to_dummy_if_null(ops, mem_event);
> +    set_to_dummy_if_null(ops, mem_event_setup);
> +    set_to_dummy_if_null(ops, mem_event_control);
> +    set_to_dummy_if_null(ops, mem_event_op);
>      set_to_dummy_if_null(ops, mem_sharing);
> +    set_to_dummy_if_null(ops, mem_sharing_op);
> +    set_to_dummy_if_null(ops, audit_p2m);
>      set_to_dummy_if_null(ops, apic);
>      set_to_dummy_if_null(ops, xen_settime);
>      set_to_dummy_if_null(ops, memtype);
> diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
> index f8aff14..4f71604 100644
> --- a/xen/xsm/flask/hooks.c
> +++ b/xen/xsm/flask/hooks.c
> @@ -695,6 +695,12 @@ static int flask_setdebugging(struct domain *d)
>                             DOMAIN__SETDEBUGGING);
>  }
>  
> +static int flask_debug_op(struct domain *d)
> +{
> +    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN,
> +                           DOMAIN__SETDEBUGGING);
> +}
> +
>  static int flask_debug_keys(void)
>  {
>      return domain_has_xen(current->domain, XEN__DEBUG);
> @@ -1111,6 +1117,26 @@ static int flask_getpageframeinfo(struct page_info *page)
>      return avc_has_perm(dsec->sid, tsid, SECCLASS_MMU, MMU__PAGEINFO, NULL);
>  }
>  
> +static int flask_getpageframeinfo_domain(struct domain *d)
> +{
> +    return domain_has_perm(current->domain, d, SECCLASS_MMU, MMU__PAGEINFO);
> +}
> +
> +static int flask_set_cpuid(struct domain *d)
> +{
> +    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN2, DOMAIN2__SET_CPUID);
> +}
> +
> +static int flask_gettscinfo(struct domain *d)
> +{
> +    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN2, DOMAIN2__GETTSC);
> +}
> +
> +static int flask_settscinfo(struct domain *d)
> +{
> +    return domain_has_perm(current->domain, d, SECCLASS_DOMAIN2, DOMAIN2__SETTSC);
> +}
> +
>  static int flask_getmemlist(struct domain *d)
>  {
>      return domain_has_perm(current->domain, d, SECCLASS_MMU, MMU__PAGELIST);
> @@ -1201,7 +1227,17 @@ static int flask_hvm_set_pci_link_route(struct domain *d)
>      return domain_has_perm(current->domain, d, SECCLASS_HVM, HVM__PCIROUTE);
>  }
>  
> -static int flask_mem_event(struct domain *d)
> +static int flask_mem_event_setup(struct domain *d)
> +{
> +    return domain_has_perm(current->domain, d, SECCLASS_HVM, HVM__MEM_EVENT);
> +}
> +
> +static int flask_mem_event_control(struct domain *d, int mode, int op)
> +{
> +    return domain_has_perm(current->domain, d, SECCLASS_HVM, HVM__MEM_EVENT);
> +}
> +
> +static int flask_mem_event_op(struct domain *d, int op)
>  {
>      return domain_has_perm(current->domain, d, SECCLASS_HVM, HVM__MEM_EVENT);
>  }
> @@ -1211,6 +1247,19 @@ static int flask_mem_sharing(struct domain *d)
>      return domain_has_perm(current->domain, d, SECCLASS_HVM, HVM__MEM_SHARING);
>  }
>  
> +static int flask_mem_sharing_op(struct domain *d, struct domain *cd, int op)
> +{
> +    int rc = domain_has_perm(current->domain, cd, SECCLASS_HVM, HVM__MEM_SHARING);
> +    if ( rc )
> +        return rc;
> +    return domain_has_perm(d, cd, SECCLASS_HVM, HVM__SHARE_MEM);
> +}
> +
> +static int flask_audit_p2m(struct domain *d)
> +{
> +    return domain_has_perm(current->domain, d, SECCLASS_HVM, HVM__AUDIT_P2M);
> +}
> +
>  static int flask_apic(struct domain *d, int cmd)
>  {
>      u32 perm;
> @@ -1586,6 +1635,7 @@ static struct xsm_operations flask_ops = {
>      .setdomainmaxmem = flask_setdomainmaxmem,
>      .setdomainhandle = flask_setdomainhandle,
>      .setdebugging = flask_setdebugging,
> +    .debug_op = flask_debug_op,
>      .perfcontrol = flask_perfcontrol,
>      .debug_keys = flask_debug_keys,
>      .getcpuinfo = flask_getcpuinfo,
> @@ -1654,6 +1704,10 @@ static struct xsm_operations flask_ops = {
>  #ifdef CONFIG_X86
>      .shadow_control = flask_shadow_control,
>      .getpageframeinfo = flask_getpageframeinfo,
> +    .getpageframeinfo_domain = flask_getpageframeinfo_domain,
> +    .set_cpuid = flask_set_cpuid,
> +    .gettscinfo = flask_gettscinfo,
> +    .settscinfo = flask_settscinfo,
>      .getmemlist = flask_getmemlist,
>      .hypercall_init = flask_hypercall_init,
>      .hvmcontext = flask_hvmcontext,
> @@ -1662,8 +1716,12 @@ static struct xsm_operations flask_ops = {
>      .hvm_set_pci_intx_level = flask_hvm_set_pci_intx_level,
>      .hvm_set_isa_irq_level = flask_hvm_set_isa_irq_level,
>      .hvm_set_pci_link_route = flask_hvm_set_pci_link_route,
> -    .mem_event = flask_mem_event,
> +    .mem_event_setup = flask_mem_event_setup,
> +    .mem_event_control = flask_mem_event_control,
> +    .mem_event_op = flask_mem_event_op,
>      .mem_sharing = flask_mem_sharing,
> +    .mem_sharing_op = flask_mem_sharing_op,
> +    .audit_p2m = flask_audit_p2m,
>      .apic = flask_apic,
>      .xen_settime = flask_xen_settime,
>      .memtype = flask_memtype,
> diff --git a/xen/xsm/flask/include/av_perm_to_string.h b/xen/xsm/flask/include/av_perm_to_string.h
> index 10f8e80..997f098 100644
> --- a/xen/xsm/flask/include/av_perm_to_string.h
> +++ b/xen/xsm/flask/include/av_perm_to_string.h
> @@ -66,6 +66,9 @@
>     S_(SECCLASS_DOMAIN2, DOMAIN2__RELABELSELF, "relabelself")
>     S_(SECCLASS_DOMAIN2, DOMAIN2__MAKE_PRIV_FOR, "make_priv_for")
>     S_(SECCLASS_DOMAIN2, DOMAIN2__SET_AS_TARGET, "set_as_target")
> +   S_(SECCLASS_DOMAIN2, DOMAIN2__SET_CPUID, "set_cpuid")
> +   S_(SECCLASS_DOMAIN2, DOMAIN2__GETTSC, "gettsc")
> +   S_(SECCLASS_DOMAIN2, DOMAIN2__SETTSC, "settsc")
>     S_(SECCLASS_HVM, HVM__SETHVMC, "sethvmc")
>     S_(SECCLASS_HVM, HVM__GETHVMC, "gethvmc")
>     S_(SECCLASS_HVM, HVM__SETPARAM, "setparam")
> @@ -79,6 +82,8 @@
>     S_(SECCLASS_HVM, HVM__HVMCTL, "hvmctl")
>     S_(SECCLASS_HVM, HVM__MEM_EVENT, "mem_event")
>     S_(SECCLASS_HVM, HVM__MEM_SHARING, "mem_sharing")
> +   S_(SECCLASS_HVM, HVM__SHARE_MEM, "share_mem")
> +   S_(SECCLASS_HVM, HVM__AUDIT_P2M, "audit_p2m")
>     S_(SECCLASS_EVENT, EVENT__BIND, "bind")
>     S_(SECCLASS_EVENT, EVENT__SEND, "send")
>     S_(SECCLASS_EVENT, EVENT__STATUS, "status")
> diff --git a/xen/xsm/flask/include/av_permissions.h b/xen/xsm/flask/include/av_permissions.h
> index f7cfee1..8596a55 100644
> --- a/xen/xsm/flask/include/av_permissions.h
> +++ b/xen/xsm/flask/include/av_permissions.h
> @@ -68,6 +68,9 @@
>  #define DOMAIN2__RELABELSELF                      0x00000004UL
>  #define DOMAIN2__MAKE_PRIV_FOR                    0x00000008UL
>  #define DOMAIN2__SET_AS_TARGET                    0x00000010UL
> +#define DOMAIN2__SET_CPUID                        0x00000020UL
> +#define DOMAIN2__GETTSC                           0x00000040UL
> +#define DOMAIN2__SETTSC                           0x00000080UL
>  
>  #define HVM__SETHVMC                              0x00000001UL
>  #define HVM__GETHVMC                              0x00000002UL
> @@ -82,6 +85,8 @@
>  #define HVM__HVMCTL                               0x00000400UL
>  #define HVM__MEM_EVENT                            0x00000800UL
>  #define HVM__MEM_SHARING                          0x00001000UL
> +#define HVM__SHARE_MEM                            0x00002000UL
> +#define HVM__AUDIT_P2M                            0x00004000UL
>  
>  #define EVENT__BIND                               0x00000001UL
>  #define EVENT__SEND                               0x00000002UL
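
The restructured do_mem_event_op() above follows the take-reference-then-single-exit shape: nothing is held before the lookup, so failures there return directly, while every failure after the lookup funnels through one "out:" label that drops the reference. A self-contained sketch of that pattern (lookup, unlock and xsm_check here are invented stand-ins, not actual Xen APIs):

```c
#include <stddef.h>
#include <errno.h>

/* Sketch of the error-handling shape the patch gives do_mem_event_op():
 * acquire the domain reference first, then route every later failure
 * through one "out:" label so the reference is always dropped.
 * lookup(), unlock() and xsm_check() are invented stand-ins. */

struct domain { int id; int is_dying; };

static struct domain dom_target = { 1, 0 };
static struct domain dom_dying  = { 2, 1 };
static struct domain dom_self   = { 3, 0 };
static struct domain *current_domain = &dom_self;

static int locked;  /* counts outstanding references, for checking */

static struct domain *lookup(int id)
{
    struct domain *d =
        (id == 1) ? &dom_target :
        (id == 2) ? &dom_dying  :
        (id == 3) ? &dom_self   : NULL;
    if ( d )
        locked++;
    return d;
}

static void unlock(struct domain *d) { (void)d; locked--; }

static int xsm_check(struct domain *d) { (void)d; return 0; }

int do_op(int op, int domid)
{
    int ret;
    struct domain *d = lookup(domid);

    (void)op;
    if ( !d )
        return -ESRCH;          /* nothing held yet: plain return is safe */

    ret = -EINVAL;
    if ( d->is_dying || d == current_domain )
        goto out;               /* reference held: must exit via out */

    ret = xsm_check(d);
    if ( ret )
        goto out;

    ret = 0;                    /* ... the operation itself ... */

 out:
    unlock(d);
    return ret;
}
```

The point of the single exit is visible in the reference counter: whatever path is taken after a successful lookup, the count returns to zero.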



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 19:31:22 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 19:31:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyT12-00045v-No; Mon, 06 Aug 2012 19:31:00 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1SyT10-00045q-RW
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 19:30:59 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-8.tower-27.messagelabs.com!1344281445!11074189!1
X-Originating-IP: [63.239.67.9]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18399 invoked from network); 6 Aug 2012 19:30:46 -0000
Received: from emvm-gh1-uea08.nsa.gov (HELO nsa.gov) (63.239.67.9)
	by server-8.tower-27.messagelabs.com with SMTP;
	6 Aug 2012 19:30:46 -0000
X-TM-IMSS-Message-ID: <7c1838b9000304a2@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov ([63.239.67.9])
	with ESMTP (TREND IMSS SMTP Service 7.1) id 7c1838b9000304a2 ;
	Mon, 6 Aug 2012 15:30:54 -0400
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id q76JUhpg029232; 
	Mon, 6 Aug 2012 15:30:43 -0400
Message-ID: <50201B63.20408@tycho.nsa.gov>
Date: Mon, 06 Aug 2012 15:30:43 -0400
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Organization: National Security Agency
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120717 Thunderbird/14.0
MIME-Version: 1.0
To: Keir Fraser <keir.xen@gmail.com>
References: <CC45D14D.3A904%keir.xen@gmail.com>
In-Reply-To: <CC45D14D.3A904%keir.xen@gmail.com>
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 12/18] xsm: Add missing domctl and
 mem_sharing hooks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/06/2012 02:53 PM, Keir Fraser wrote:
> When someone wants to add a new domctl/sysctl, how many places will they
> have to add things to ensure that xsm dtrt for a basic setup, allowing only
> dom0 access to the new op? How big is the risk that we end up with new ops
> that have no access control?

Short answer: 3 files (xsm.h, dummy.h, dummy.c); 13 lines including whitespace.

Long answer: there are a couple of ways to add access controls:

1. Add an explicit IS_PRIV check. That's pretty much what occurs before this
   series, only the IS_PRIV is at the top of the hypercall for domctl and sysctl.
   This is the least preferred, but is trivially correct for the new patch and
   fairly easy to wire up as an XSM hook in the future.
2. Reuse an existing XSM hook. This requires no changes except at the
   caller, but a suitable hook must already exist. There are generic
   hooks like xsm_domctl(), but it's best not to just create dumping grounds
   for permissions if we ever want to allow subsets of them to different domains.
   This is probably best for incremental modifications or trivial features.
3. Add a new XSM hook. This requires adding a hook function in xsm.h and a
   default implementation in dummy.h/dummy.c. The changes made in this patch
   to FLASK would not be required, as XSM will fall back to the dummy
   implementation when the FLASK module doesn't provide its own hook.

Patch #13 (tmem) is a good example of adding a single hook; all changes with
/flask/ could be done in a later patch implementing new FLASK permissions.

If you're adding a new function, the only way to compile both with and without
XSM enabled is to add functions in dummy.h, dummy.c, and xsm.h; incomplete
implementations will yield a compilation error in one of those cases.
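
The three-file pattern in option 3 can be sketched in miniature as follows. This is a self-contained model of the wiring, not hypervisor code: the hook name `frob`, the cut-down `struct domain`, and the privilege flag are all invented stand-ins for the real `xsm.h`/`dummy.h`/`dummy.c` pieces:

```c
#include <stddef.h>
#include <errno.h>

/* Miniature model of the XSM hook pattern: an ops table (xsm.h), a
 * dummy default enforcing the IS_PRIV-style dom0-only check (dummy.h),
 * and a fixup pass that fills in hooks a module left NULL (dummy.c).
 * The hook name "frob" and this struct domain are invented. */

struct domain { int is_privileged; };

static struct domain dom0 = { 1 };
static struct domain domU = { 0 };
static struct domain *current_domain = &dom0;  /* stand-in for current->domain */

struct xsm_operations {
    int (*frob)(struct domain *d);             /* the new hook */
};

/* dummy.h: default implementation, dom0-only */
static int dummy_frob(struct domain *d)
{
    (void)d;
    if ( !current_domain->is_privileged )
        return -EPERM;
    return 0;
}

#define set_to_dummy_if_null(ops, function)              \
    do {                                                 \
        if ( (ops)->function == NULL )                   \
            (ops)->function = dummy_##function;          \
    } while (0)

/* dummy.c: modules that omit the hook fall back to the default */
void xsm_fixup_ops(struct xsm_operations *ops)
{
    set_to_dummy_if_null(ops, frob);
}

static struct xsm_operations xsm_ops;          /* module supplied no hook */

/* xsm.h: inline wrapper used at the call site */
int xsm_frob(struct domain *d)
{
    return xsm_ops.frob(d);
}
```

With this wiring, a FLASK-style module only fills in `.frob` when it wants finer-grained control; otherwise the fixup pass preserves the old dom0-only behaviour, which is why the FLASK changes can land in a separate patch.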

One patch I haven't included in this series is adding automatic generation of
the xen/xsm/flask/include/av_*.h files from tools/flask/policy/policy/flask/*;
this simplifies adding the FLASK part of the XSM hook. The auto-generation
is in tools/flask/policy/policy/flask/Makefile, just not wired into the Xen
build.

-- 
Daniel De Graaf
National Security Agency

From xen-devel-bounces@lists.xen.org Mon Aug 06 20:24:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 20:24:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyTqJ-0004c8-Ub; Mon, 06 Aug 2012 20:23:59 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jun.nakajima@intel.com>) id 1SyTqI-0004c3-UL
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 20:23:59 +0000
Received: from [85.158.143.99:5366] by server-2.bemta-4.messagelabs.com id
	89/93-17938-ED720205; Mon, 06 Aug 2012 20:23:58 +0000
X-Env-Sender: jun.nakajima@intel.com
X-Msg-Ref: server-12.tower-216.messagelabs.com!1344284636!25271576!1
X-Originating-IP: [143.182.124.22]
X-SpamReason: No, hits=0.9 required=7.0 tests=HTML_40_50,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28977 invoked from network); 6 Aug 2012 20:23:56 -0000
Received: from mga07.intel.com (HELO azsmga101.ch.intel.com) (143.182.124.22)
	by server-12.tower-216.messagelabs.com with SMTP;
	6 Aug 2012 20:23:56 -0000
Received: from mail-qc0-f180.google.com ([209.85.216.180])
	by mga03.intel.com with ESMTP/TLS/RC4-SHA; 06 Aug 2012 13:23:55 -0700
Received: by qcmv28 with SMTP id v28so1851151qcm.25
	for <xen-devel@lists.xen.org>; Mon, 06 Aug 2012 13:23:54 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type:x-gm-message-state;
	bh=HNdnrTUc1EnuMIDyEIh/DOQLOu7RO0tBW57nCbt2gFA=;
	b=YOrZx+vJpnEPE6scxyczX7JO5SpymvfWaYo74/Z8vmM1myrfWFCp+vBDLYKu6T079d
	P/mTIJljhV0t2GGDNcSW+4aEOMQFBvTxufDoZgkM7qUY3sXlXiHxWWXhV5wwA+WUcT10
	YugmIwjOvYqIHhrEW6GL9gxbTtl5W5ywruryIKOdLGh8TABl9usXwpfjmF3RjjP6yFfj
	e8uwMAsuw02hCL5coTJDIyRvJyZOYqZYu61ggdggp3+pNFDgcuDvKczreSg4UavOL+RP
	2eYhS54jt0FYCVZrODzjQzhtO4GfRY+zgo3baitgweEYJhiuIypcpfeEpj85KLRR7YL/
	l7nw==
MIME-Version: 1.0
Received: by 10.224.207.2 with SMTP id fw2mr19864551qab.34.1344284633893; Mon,
	06 Aug 2012 13:23:53 -0700 (PDT)
Received: by 10.229.35.8 with HTTP; Mon, 6 Aug 2012 13:23:53 -0700 (PDT)
In-Reply-To: <50164C5F0200007800091325@nat28.tlf.novell.com>
References: <bf922651da96e045cd8f.1343503160@kaos-source-31003.sea31.amazon.com>
	<50164C5F0200007800091325@nat28.tlf.novell.com>
Date: Mon, 6 Aug 2012 13:23:53 -0700
Message-ID: <CAL54oT0HAgS-+sQVmWJt9kMiTGdoBmR5Mj-TqEwBoQzzEiUwDg@mail.gmail.com>
From: "Nakajima, Jun" <jun.nakajima@intel.com>
To: Jan Beulich <JBeulich@suse.com>
X-Gm-Message-State: ALoCoQkK8LhTOYqy5pK767299QCTf4CXvm4i0SqLq8KX4IQBUQnGjCSk7cGLFJ5d/8uJx99mm37F
Cc: Jinsong Liu <jinsong.liu@intel.com>, Keir Fraser <keir@xen.org>,
	Donald D Dugger <donald.d.dugger@intel.com>,
	Matt Wilson <msw@amazon.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen/x86: Add support for cpuid masking on
 Intel Xeon Processor E5 Family
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5277557473932175330=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============5277557473932175330==
Content-Type: multipart/alternative; boundary=20cf300fb2f9148b1804c69ea83d

--20cf300fb2f9148b1804c69ea83d
Content-Type: text/plain; charset=ISO-8859-1

On Sun, Jul 29, 2012 at 11:57 PM, Jan Beulich <JBeulich@suse.com> wrote:

> >>> On 28.07.12 at 21:19, Matt Wilson <msw@amazon.com> wrote:
> > Although the "Intel Virtualization Technology FlexMigration
> > Application Note" (http://www.intel.com/Assets/PDF/manual/323850.pdf)
> > does not document support for extended model 2H model DH (Intel Xeon
> > Processor E5 Family), empirical evidence shows that the same MSR
> > addresses can be used for cpuid masking as extended model 2H model AH
> > (Intel Xeon Processor E3-1200 Family).
>
> Empirical evidence isn't really enough - let's have someone at Intel
> confirm this - Jun, Don?
>

Thanks for the patch. The patch looks good, and it should go in.
We'll update the document.



>
> Jan
>
> > Signed-off-by: Matt Wilson <msw@amazon.com>
> >
> > diff -r e6266fc76d08 -r bf922651da96 xen/arch/x86/cpu/intel.c
> > --- a/xen/arch/x86/cpu/intel.c        Fri Jul 27 12:22:13 2012 +0200
> > +++ b/xen/arch/x86/cpu/intel.c        Sat Jul 28 17:27:30 2012 +0000
> > @@ -104,7 +104,7 @@ static void __devinit set_cpuidmask(cons
> >                       return;
> >               extra = "xsave ";
> >               break;
> > -     case 0x2a:
> > +     case 0x2a: case 0x2d:
> >               wrmsr(MSR_INTEL_CPUID1_FEATURE_MASK_V2,
> >                     opt_cpuid_mask_ecx,
> >                     opt_cpuid_mask_edx);
>
>
>
>


-- 
Jun
Intel Open Source Technology Center

--20cf300fb2f9148b1804c69ea83d--


--===============5277557473932175330==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5277557473932175330==--



From xen-devel-bounces@lists.xen.org Mon Aug 06 21:52:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 21:52:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyVDf-0005WZ-KD; Mon, 06 Aug 2012 21:52:11 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1SyVDd-0005WR-Qn
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 21:52:10 +0000
Received: from [85.158.143.99:28163] by server-3.bemta-4.messagelabs.com id
	81/01-01511-98C30205; Mon, 06 Aug 2012 21:52:09 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-9.tower-216.messagelabs.com!1344289924!30562512!1
X-Originating-IP: [63.239.67.10]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12868 invoked from network); 6 Aug 2012 21:52:04 -0000
Received: from emvm-gh1-uea09.nsa.gov (HELO nsa.gov) (63.239.67.10)
	by server-9.tower-216.messagelabs.com with SMTP;
	6 Aug 2012 21:52:04 -0000
X-TM-IMSS-Message-ID: <7c995485000339e5@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov
	([63.239.67.10]) with ESMTP (TREND IMSS SMTP Service 7.1) id
	7c995485000339e5 ; Mon, 6 Aug 2012 17:52:27 -0400
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id q76LpvwS004740; 
	Mon, 6 Aug 2012 17:51:58 -0400
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: Ian Campbell <ian.campbell@citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>
Date: Mon,  6 Aug 2012 17:51:56 -0400
Message-Id: <1344289916-11518-1-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.2
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH RFC/for-4.2?] libxl: Support backend domain ID
	for disks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Allow specification of backend domains for disks, either in the config
file or via xl block-attach.

A version of this patch was submitted in October 2011, but it was not
suitable at the time because libxl did not yet support the "script="
option for disks. Now that the option exists, a backend domain can be
specified without duplicating, in the domain running libxl, the device
tree of the domain providing the disk: just specify script=/bin/true
(or any more useful script) to prevent the block script from running in
the libxl domain.

In order to support named backend domains, as network-attach does, the
prototype of xlu_disk_parse in libxlutil.h needs a libxl_ctx parameter.
Without it, only numeric domain IDs could be supported in the block
device specification.

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>

---

This patch does not include the changes to tools/libxl/libxlu_disk_l.c
and tools/libxl/libxlu_disk_l.h because the diffs contain unrelated
changes due to different generator versions.

 tools/libxl/libxlu_disk.c   |   3 +-
 tools/libxl/libxlu_disk_i.h |   3 +-
 tools/libxl/libxlu_disk_l.c | 581 ++++++++++++++++++++++----------------------
 tools/libxl/libxlu_disk_l.h |  24 +-
 tools/libxl/libxlu_disk_l.l |   8 +
 tools/libxl/libxlutil.h     |   2 +-
 tools/libxl/xl_cmdimpl.c    |   6 +-
 7 files changed, 319 insertions(+), 308 deletions(-)

diff --git a/tools/libxl/libxlu_disk.c b/tools/libxl/libxlu_disk.c
index 18fe386..1e6caca 100644
--- a/tools/libxl/libxlu_disk.c
+++ b/tools/libxl/libxlu_disk.c
@@ -48,7 +48,7 @@ static void dpc_dispose(DiskParseContext *dpc) {
 
 int xlu_disk_parse(XLU_Config *cfg,
                    int nspecs, const char *const *specs,
-                   libxl_device_disk *disk) {
+                   libxl_device_disk *disk, libxl_ctx *ctx) {
     DiskParseContext dpc;
     int i, e;
 
@@ -56,6 +56,7 @@ int xlu_disk_parse(XLU_Config *cfg,
     dpc.cfg = cfg;
     dpc.scanner = 0;
     dpc.disk = disk;
+    dpc.ctx = ctx;
 
     disk->readwrite = 1;
 
diff --git a/tools/libxl/libxlu_disk_i.h b/tools/libxl/libxlu_disk_i.h
index 4fccd4a..c220bcf 100644
--- a/tools/libxl/libxlu_disk_i.h
+++ b/tools/libxl/libxlu_disk_i.h
@@ -2,7 +2,7 @@
 #define LIBXLU_DISK_I_H
 
 #include "libxlu_internal.h"
-
+#include "libxl_utils.h"
 
 typedef struct {
     XLU_Config *cfg;
@@ -12,6 +12,7 @@ typedef struct {
     libxl_device_disk *disk;
     int access_set, had_depr_prefix;
     const char *spec;
+    libxl_ctx *ctx;
 } DiskParseContext;
 
 void xlu__disk_err(DiskParseContext *dpc, const char *erroneous,
diff --git a/tools/libxl/libxlu_disk_l.c b/tools/libxl/libxlu_disk_l.c
index 4c68034..4e17f7c 100644
--- a/tools/libxl/libxlu_disk_l.c
+++ b/tools/libxl/libxlu_disk_l.c
@@ -58,6 +58,7 @@ typedef int flex_int32_t;
 typedef unsigned char flex_uint8_t; 
 typedef unsigned short int flex_uint16_t;
 typedef unsigned int flex_uint32_t;
+#endif /* ! C99 */
 
 /* Limits of integral types. */
 #ifndef INT8_MIN
@@ -88,8 +89,6 @@ typedef unsigned int flex_uint32_t;
 #define UINT32_MAX             (4294967295U)
 #endif
 
-#endif /* ! C99 */
-
 #endif /* ! FLEXINT_H */
 
 #ifdef __cplusplus
@@ -163,15 +162,7 @@ typedef void* yyscan_t;
 
 /* Size of default input buffer. */
 #ifndef YY_BUF_SIZE
-#ifdef __ia64__
-/* On IA-64, the buffer size is 16k, not 8k.
- * Moreover, YY_BUF_SIZE is 2*YY_READ_BUF_SIZE in the general case.
- * Ditto for the __ia64__ case accordingly.
- */
-#define YY_BUF_SIZE 32768
-#else
 #define YY_BUF_SIZE 16384
-#endif /* __ia64__ */
 #endif
 
 /* The state buf must be large enough to hold one state per character in the main buffer.
@@ -361,8 +352,8 @@ static void yy_fatal_error (yyconst char msg[] ,yyscan_t yyscanner );
 	*yy_cp = '\0'; \
 	yyg->yy_c_buf_p = yy_cp;
 
-#define YY_NUM_RULES 25
-#define YY_END_OF_BUFFER 26
+#define YY_NUM_RULES 26
+#define YY_END_OF_BUFFER 27
 /* This struct is not used in this scanner,
    but its presence is necessary. */
 struct yy_trans_info
@@ -370,60 +361,61 @@ struct yy_trans_info
 	flex_int32_t yy_verify;
 	flex_int32_t yy_nxt;
 	};
-static yyconst flex_int16_t yy_acclist[447] =
+static yyconst flex_int16_t yy_acclist[460] =
     {   0,
-       24,   24,   26,   22,   23,   25, 8193,   22,   23,   25,
-    16385, 8193,   22,   25,16385,   22,   23,   25,   23,   25,
-       22,   23,   25,   22,   23,   25,   22,   23,   25,   22,
-       23,   25,   22,   23,   25,   22,   23,   25,   22,   23,
-       25,   22,   23,   25,   22,   23,   25,   22,   23,   25,
-       22,   23,   25,   22,   23,   25,   22,   23,   25,   22,
-       23,   25,   22,   23,   25,   24,   25,   25,   22,   22,
-     8193,   22, 8193,   22,16385, 8193,   22, 8193,   22,   22,
-     8213,   22,16405,   22,   22,   22,   22,   22,   22,   22,
-       22,   22,   22,   22,   22,   22,   22,   22,   22,   22,
-
-       22,   22,   24, 8193,   22, 8193,   22, 8193, 8213,   22,
-     8213,   22, 8213,   12,   22,   22,   22,   22,   22,   22,
-       22,   22,   22,   22,   22,   22,   22,   22,   22,   22,
-       22,   22, 8213,   22, 8213,   22, 8213,   12,   22,   17,
-     8213,   22,16405,   22,   22,   22,   22,   22,   22,   22,
-     8206, 8213,   22,16398,16405,   20, 8213,   22,16405,   22,
-     8205, 8213,   22,16397,16405,   22,   22, 8208, 8213,   22,
-    16400,16405,   22,   22,   22,   22,   17, 8213,   22,   17,
-     8213,   22,   17,   22,   17, 8213,   22,    3,   22,   22,
-       19, 8213,   22,16405,   22,   22, 8206, 8213,   22, 8206,
-
-     8213,   22, 8206,   22, 8206, 8213,   20, 8213,   22,   20,
-     8213,   22,   20,   22,   20, 8213, 8205, 8213,   22, 8205,
-     8213,   22, 8205,   22, 8205, 8213,   22, 8208, 8213,   22,
-     8208, 8213,   22, 8208,   22, 8208, 8213,   22,   22,    9,
-       22,   17, 8213,   22,   17, 8213,   22,   17, 8213,   17,
-       22,   17,   22,    3,   22,   22,   19, 8213,   22,   19,
-     8213,   22,   19,   22,   19, 8213,   22,   18, 8213,   22,
-    16405, 8206, 8213,   22, 8206, 8213,   22, 8206, 8213, 8206,
-       22, 8206,   20, 8213,   22,   20, 8213,   22,   20, 8213,
-       20,   22,   20, 8205, 8213,   22, 8205, 8213,   22, 8205,
-
-     8213, 8205,   22, 8205,   22, 8208, 8213,   22, 8208, 8213,
-       22, 8208, 8213, 8208,   22, 8208,   22,   22,    9,   12,
-        9,    7,   22,   22,   19, 8213,   22,   19, 8213,   22,
-       19, 8213,   19,   22,   19,    2,   18, 8213,   22,   18,
-     8213,   22,   18,   22,   18, 8213,   10,   22,   11,    9,
-        9,   12,    7,   12,    7,   22,    6,    2,   12,    2,
-       18, 8213,   22,   18, 8213,   22,   18, 8213,   18,   22,
-       18,   10,   12,   10,   15, 8213,   22,16405,   11,   12,
-       11,    7,    7,   12,   22,    6,   12,    6,    6,   12,
-        6,   12,    2,    2,   12,   10,   10,   12,   15, 8213,
-
-       22,   15, 8213,   22,   15,   22,   15, 8213,   11,   12,
-       22,    6,    6,   12,    6,    6,   15, 8213,   22,   15,
-     8213,   22,   15, 8213,   15,   22,   15,   22,    6,    6,
-        8,    6,    5,    6,    8,   12,    8,    4,    6,    5,
-        6,    8,    8,   12,    4,    6
+       25,   25,   27,   23,   24,   26, 8193,   23,   24,   26,
+    16385, 8193,   23,   26,16385,   23,   24,   26,   24,   26,
+       23,   24,   26,   23,   24,   26,   23,   24,   26,   23,
+       24,   26,   23,   24,   26,   23,   24,   26,   23,   24,
+       26,   23,   24,   26,   23,   24,   26,   23,   24,   26,
+       23,   24,   26,   23,   24,   26,   23,   24,   26,   23,
+       24,   26,   23,   24,   26,   25,   26,   26,   23,   23,
+     8193,   23, 8193,   23,16385, 8193,   23, 8193,   23,   23,
+     8214,   23,16406,   23,   23,   23,   23,   23,   23,   23,
+       23,   23,   23,   23,   23,   23,   23,   23,   23,   23,
+
+       23,   23,   25, 8193,   23, 8193,   23, 8193, 8214,   23,
+     8214,   23, 8214,   13,   23,   23,   23,   23,   23,   23,
+       23,   23,   23,   23,   23,   23,   23,   23,   23,   23,
+       23,   23, 8214,   23, 8214,   23, 8214,   13,   23,   18,
+     8214,   23,16406,   23,   23,   23,   23,   23,   23,   23,
+     8207, 8214,   23,16399,16406,   21, 8214,   23,16406,   23,
+     8206, 8214,   23,16398,16406,   23,   23, 8209, 8214,   23,
+    16401,16406,   23,   23,   23,   23,   18, 8214,   23,   18,
+     8214,   23,   18,   23,   18, 8214,   23,    3,   23,   23,
+       20, 8214,   23,16406,   23,   23, 8207, 8214,   23, 8207,
+
+     8214,   23, 8207,   23, 8207, 8214,   21, 8214,   23,   21,
+     8214,   23,   21,   23,   21, 8214, 8206, 8214,   23, 8206,
+     8214,   23, 8206,   23, 8206, 8214,   23, 8209, 8214,   23,
+     8209, 8214,   23, 8209,   23, 8209, 8214,   23,   23,   10,
+       23,   18, 8214,   23,   18, 8214,   23,   18, 8214,   18,
+       23,   18,   23,    3,   23,   23,   20, 8214,   23,   20,
+     8214,   23,   20,   23,   20, 8214,   23,   19, 8214,   23,
+    16406, 8207, 8214,   23, 8207, 8214,   23, 8207, 8214, 8207,
+       23, 8207,   21, 8214,   23,   21, 8214,   23,   21, 8214,
+       21,   23,   21, 8206, 8214,   23, 8206, 8214,   23, 8206,
+
+     8214, 8206,   23, 8206,   23, 8209, 8214,   23, 8209, 8214,
+       23, 8209, 8214, 8209,   23, 8209,   23,   23,   10,   13,
+       10,    7,   23,   23,   20, 8214,   23,   20, 8214,   23,
+       20, 8214,   20,   23,   20,    2,   19, 8214,   23,   19,
+     8214,   23,   19,   23,   19, 8214,   11,   23,   12,   10,
+       10,   13,    7,   13,    7,   23,   23,    6,    2,   13,
+        2,   19, 8214,   23,   19, 8214,   23,   19, 8214,   19,
+       23,   19,   11,   13,   11,   16, 8214,   23,16406,   12,
+       13,   12,    7,    7,   13,   23,   23,    6,   13,    6,
+        6,   13,    6,   13,    2,    2,   13,   11,   11,   13,
+
+       16, 8214,   23,   16, 8214,   23,   16,   23,   16, 8214,
+       12,   13,   23,   23,    6,    6,   13,    6,    6,   16,
+     8214,   23,   16, 8214,   23,   16, 8214,   16,   23,   16,
+       23,   23,    6,    6,   23,    8,    6,    5,    6,   23,
+        8,   13,    8,    4,    6,    5,    6,    9,    8,    8,
+       13,    4,    6,    9,   13,    9,    9,    9,   13
     } ;
 
-static yyconst flex_int16_t yy_accept[252] =
+static yyconst flex_int16_t yy_accept[263] =
     {   0,
         1,    1,    1,    2,    3,    4,    7,   12,   16,   19,
        21,   24,   27,   30,   33,   36,   39,   42,   45,   48,
@@ -445,14 +437,15 @@ static yyconst flex_int16_t yy_accept[252] =
       293,  294,  297,  300,  302,  304,  305,  306,  309,  312,
       314,  316,  317,  318,  319,  321,  322,  323,  324,  325,
       328,  331,  333,  335,  336,  337,  340,  343,  345,  347,
-      348,  349,  350,  351,  353,  355,  356,  357,  358,  360,
-
-      361,  364,  367,  369,  371,  372,  374,  375,  379,  381,
-      382,  383,  385,  386,  388,  389,  391,  393,  394,  396,
-      397,  399,  402,  405,  407,  409,  411,  412,  413,  415,
-      416,  417,  420,  423,  425,  427,  428,  429,  430,  431,
-      432,  433,  435,  437,  438,  440,  442,  443,  445,  447,
-      447
+      348,  349,  350,  351,  353,  355,  356,  357,  358,  359,
+
+      361,  362,  365,  368,  370,  372,  373,  375,  376,  380,
+      382,  383,  384,  386,  387,  388,  390,  391,  393,  395,
+      396,  398,  399,  401,  404,  407,  409,  411,  413,  414,
+      415,  416,  418,  419,  420,  423,  426,  428,  430,  431,
+      432,  433,  434,  435,  436,  437,  438,  440,  441,  443,
+      444,  446,  448,  449,  450,  452,  454,  456,  457,  458,
+      460,  460
     } ;
 
 static yyconst flex_int32_t yy_ec[256] =
@@ -495,83 +488,85 @@ static yyconst flex_int32_t yy_meta[34] =
         1,    1,    1
     } ;
 
-static yyconst flex_int16_t yy_base[308] =
+static yyconst flex_int16_t yy_base[321] =
     {   0,
-        0,    0,  546,  538,  533,  521,   32,   35,  656,  656,
-       44,   62,   30,   41,   50,   51,  507,   64,   47,   66,
-       67,  499,   68,  487,   72,    0,  656,  465,  656,   87,
-       91,    0,    0,  100,  452,  109,    0,   74,   95,   87,
+        0,    0,  644,  632,  623,  595,   32,   35,  670,  670,
+       44,   62,   30,   41,   50,   51,  577,   64,   47,   66,
+       67,  565,   68,  561,   72,    0,  670,  563,  670,   87,
+       91,    0,    0,  100,  553,  109,    0,   74,   95,   87,
        32,   96,  105,  110,   77,   97,   40,  113,  116,  112,
       118,  120,  121,  122,  123,  125,    0,  137,    0,    0,
-      147,    0,    0,  449,  129,  126,  134,  143,  145,  147,
+      147,    0,    0,  551,  129,  126,  134,  143,  145,  147,
       148,  149,  151,  153,  156,  160,  155,  167,  162,  175,
-      168,  159,  188,    0,    0,  656,  166,  197,  179,  185,
-      176,  200,  435,  186,  193,  216,  225,  205,  234,  221,
+      168,  159,  188,    0,    0,  670,  166,  197,  179,  185,
+      176,  200,  537,  186,  193,  216,  225,  205,  234,  221,
 
       237,  247,  204,  230,  244,  213,  254,    0,  256,    0,
       251,  258,  254,  279,  256,  259,  267,    0,  269,    0,
       286,    0,  288,    0,  290,    0,  297,    0,  267,  299,
-        0,  301,    0,  288,  297,  421,  302,  310,    0,    0,
-        0,    0,  305,  656,  307,  319,    0,  321,    0,  322,
+        0,  301,    0,  288,  297,  535,  302,  310,    0,    0,
+        0,    0,  305,  670,  307,  319,    0,  321,    0,  322,
       332,  335,    0,    0,    0,    0,  339,    0,    0,    0,
         0,  342,    0,    0,    0,    0,  340,  349,    0,    0,
-        0,    0,  337,  345,  420,  656,  419,  346,  350,  358,
-        0,    0,    0,    0,  418,  360,    0,  362,    0,  417,
-      319,  369,  416,  656,  415,  656,  276,  364,  414,  656,
-
-      375,    0,    0,    0,    0,  413,  656,  384,  412,    0,
-      410,  656,  370,  409,  656,  370,  378,  408,  656,  366,
-      656,  394,    0,  396,    0,    0,  380,  316,  656,  377,
-      387,  398,    0,    0,    0,    0,  399,  402,  407,  271,
-      406,  228,  200,  656,  175,  656,   77,  656,  656,  656,
-      428,  432,  435,  439,  443,  447,  451,  455,  459,  463,
-      467,  471,  475,  479,  483,  487,  491,  495,  499,  503,
-      507,  511,  515,  519,  523,  527,  531,  535,  539,  543,
-      547,  551,  555,  559,  563,  567,  571,  575,  579,  583,
-      587,  591,  595,  599,  603,  607,  611,  615,  619,  623,
-
-      627,  631,  635,  639,  643,  647,  651
+        0,    0,  337,  345,  527,  670,  519,  346,  351,  359,
+        0,    0,    0,    0,  511,  361,    0,  363,    0,  499,
+      319,  370,  471,  670,  464,  670,  359,  276,  367,  455,
+
+      670,  373,    0,    0,    0,    0,  447,  670,  383,  429,
+        0,  428,  670,  368,  371,  425,  670,  385,  389,  422,
+      670,  421,  670,  391,    0,  399,    0,    0,  414,  387,
+      419,  670,  395,  400,  402,    0,    0,    0,    0,  399,
+      403,  406,  411,  404,  417,  412,  416,  409,  316,  670,
+      271,  670,  228,  200,  670,  670,  175,  670,   77,  670,
+      670,  434,  438,  441,  445,  449,  453,  457,  461,  465,
+      469,  473,  477,  481,  485,  489,  493,  497,  501,  505,
+      509,  513,  517,  521,  525,  529,  533,  537,  541,  545,
+      549,  553,  557,  561,  565,  569,  573,  577,  581,  585,
+
+      589,  593,  597,  601,  605,  609,  613,  617,  621,  625,
+      629,  633,  637,  641,  645,  649,  653,  657,  661,  665
     } ;
 
-static yyconst flex_int16_t yy_def[308] =
+static yyconst flex_int16_t yy_def[321] =
     {   0,
-      250,    1,  251,  251,  250,  252,  253,  253,  250,  250,
-      254,  254,   12,   12,   12,   12,   12,   12,   12,   12,
-       12,   12,   12,   12,   12,  255,  250,  252,  250,  256,
-      253,  257,  257,  258,   12,  252,  259,   12,   12,   12,
+      261,    1,  262,  262,  261,  263,  264,  264,  261,  261,
+      265,  265,   12,   12,   12,   12,   12,   12,   12,   12,
+       12,   12,   12,   12,   12,  266,  261,  263,  261,  267,
+      264,  268,  268,  269,   12,  263,  270,   12,   12,   12,
        12,   12,   12,   12,   12,   12,   12,   12,   12,   12,
-       12,   12,   12,   12,   12,   12,  255,  256,  257,  257,
-      260,  261,  261,  250,   12,   12,   12,   12,   12,   12,
+       12,   12,   12,   12,   12,   12,  266,  267,  268,  268,
+      271,  272,  272,  261,   12,   12,   12,   12,   12,   12,
        12,   12,   12,   12,   12,   12,   12,   12,   12,   12,
-       12,   12,  260,  261,  261,  250,   12,  262,   12,   12,
-       12,   12,   12,   12,   12,  263,  264,   12,  265,   12,
-
-       12,  266,   12,   12,   12,   12,  267,  268,  262,  268,
-       12,   12,   12,  269,   12,   12,  270,  271,  263,  271,
-      272,  273,  264,  273,  274,  275,  265,  275,   12,  276,
-      277,  266,  277,   12,   12,  278,   12,  267,  268,  268,
-      279,  279,   12,  250,   12,  280,  281,  269,  281,   12,
-      282,  270,  271,  271,  283,  283,  272,  273,  273,  284,
-      284,  274,  275,  275,  285,  285,   12,  276,  277,  277,
-      286,  286,   12,   12,  287,  250,  288,   12,   12,  280,
-      281,  281,  289,  289,  290,  291,  292,  282,  292,  293,
-       12,  294,  287,  250,  295,  250,   12,  296,  297,  250,
-
-      291,  292,  292,  298,  298,  299,  250,  300,  301,  301,
-      295,  250,   12,  302,  250,  302,  302,  297,  250,  299,
-      250,  303,  304,  300,  304,  301,   12,  302,  250,  302,
-      302,  303,  304,  304,  305,  305,   12,  302,  302,  306,
-      302,  302,  307,  250,  302,  250,  307,  250,  250,    0,
-      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
-      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
-      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
-      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
-      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
-
-      250,  250,  250,  250,  250,  250,  250
+       12,   12,  271,  272,  272,  261,   12,  273,   12,   12,
+       12,   12,   12,   12,   12,  274,  275,   12,  276,   12,
+
+       12,  277,   12,   12,   12,   12,  278,  279,  273,  279,
+       12,   12,   12,  280,   12,   12,  281,  282,  274,  282,
+      283,  284,  275,  284,  285,  286,  276,  286,   12,  287,
+      288,  277,  288,   12,   12,  289,   12,  278,  279,  279,
+      290,  290,   12,  261,   12,  291,  292,  280,  292,   12,
+      293,  281,  282,  282,  294,  294,  283,  284,  284,  295,
+      295,  285,  286,  286,  296,  296,   12,  287,  288,  288,
+      297,  297,   12,   12,  298,  261,  299,   12,   12,  291,
+      292,  292,  300,  300,  301,  302,  303,  293,  303,  304,
+       12,  305,  298,  261,  306,  261,   12,   12,  307,  308,
+
+      261,  302,  303,  303,  309,  309,  310,  261,  311,  312,
+      312,  306,  261,   12,   12,  313,  261,  313,  313,  308,
+      261,  310,  261,  314,  315,  311,  315,  312,   12,   12,
+      313,  261,  313,  313,  314,  315,  315,  316,  316,   12,
+       12,  313,  313,   12,  317,  313,  313,   12,  318,  261,
+      313,  261,  319,  318,  261,  261,  320,  261,  320,  261,
+        0,  261,  261,  261,  261,  261,  261,  261,  261,  261,
+      261,  261,  261,  261,  261,  261,  261,  261,  261,  261,
+      261,  261,  261,  261,  261,  261,  261,  261,  261,  261,
+      261,  261,  261,  261,  261,  261,  261,  261,  261,  261,
+
+      261,  261,  261,  261,  261,  261,  261,  261,  261,  261,
+      261,  261,  261,  261,  261,  261,  261,  261,  261,  261
     } ;
 
-static yyconst flex_int16_t yy_nxt[690] =
+static yyconst flex_int16_t yy_nxt[704] =
     {   0,
         6,    7,    8,    9,    6,    6,    6,    6,   10,   11,
        12,   13,   14,   15,   16,   17,   17,   18,   17,   17,
@@ -581,7 +576,7 @@ static yyconst flex_int16_t yy_nxt[690] =
        35,   36,   37,   73,   42,   38,   35,   49,   68,   35,
        35,   39,   28,   28,   28,   29,   34,   43,   45,   36,
        37,   40,   44,   35,   46,   35,   35,   35,   51,   53,
-      244,   35,   50,   35,   55,   65,   35,   47,   56,   28,
+      258,   35,   50,   35,   55,   65,   35,   47,   56,   28,
        59,   48,   31,   31,   32,   60,   35,   71,   67,   33,
 
        28,   28,   28,   29,   35,   35,   35,   28,   37,   61,
@@ -591,66 +586,69 @@ static yyconst flex_int16_t yy_nxt[690] =
        59,   77,   87,   35,   76,   60,   80,   79,   81,   28,
        84,   78,   35,   89,   35,   85,   35,   35,   35,   75,
        35,   92,   35,   96,   35,   35,   90,   97,   35,   35,
-       93,   35,   94,   91,   99,   35,   35,   35,  249,  100,
+       93,   35,   94,   91,   99,   35,   35,   35,  260,  100,
        95,  101,  102,  104,   35,   35,   98,  103,   35,  105,
        28,   84,  111,  106,   35,   35,   85,  107,  107,   61,
 
-      108,  107,   35,  248,  107,  110,  112,  114,  113,   35,
+      108,  107,   35,  250,  107,  110,  112,  114,  113,   35,
        75,   78,   99,   35,   35,  116,  117,  117,   61,  118,
       117,  134,   35,  117,  120,  121,  121,   61,  122,  121,
-       35,  246,  121,  124,  125,  125,   61,  126,  125,   35,
+       35,  258,  121,  124,  125,  125,   61,  126,  125,   35,
       137,  125,  128,  135,  102,  129,   35,  130,  130,   61,
       131,  130,  136,   35,  130,  133,   28,  139,   28,  141,
        35,  144,  140,   35,  142,   35,  151,   35,   35,   28,
-      153,   28,  155,  143,  244,  154,   35,  156,  145,  146,
+      153,   28,  155,  143,  256,  154,   35,  156,  145,  146,
       146,   61,  147,  146,  150,   35,  146,  149,   28,  158,
        28,  160,   28,  163,  159,  167,  161,   35,  164,   28,
 
-      165,   28,  169,   28,  171,  166,   35,  170,  213,  172,
-      177,   35,   28,  139,   35,  173,   35,  178,  140,  215,
-      179,   28,  181,   28,  183,  174,  208,  182,   35,  184,
+      165,   28,  169,   28,  171,  166,   35,  170,  215,  172,
+      177,   35,   28,  139,   35,  173,   35,  178,  140,  255,
+      179,   28,  181,   28,  183,  174,  209,  182,   35,  184,
       185,   35,  186,  186,   61,  187,  186,   28,  153,  186,
       189,   28,  158,  154,   28,  163,   35,  159,  190,   35,
-      164,   28,  169,  192,   35,   35,  191,  170,  198,   35,
-       28,  181,   28,  202,   28,  204,  182,  215,  203,  207,
-      205,   64,  210,  229,  197,  216,  217,   28,  202,   35,
-      215,  229,  230,  203,  222,  222,   61,  223,  222,   35,
-      215,  222,  225,  237,  227,  231,   28,  233,   28,  235,
-
-       28,  233,  234,  238,  236,  215,  234,  240,   35,  215,
-      215,  200,  229,  196,  239,  226,  221,  219,  212,  176,
-      207,  200,  196,  194,  176,  241,  242,  245,   26,   26,
-       26,   26,   28,   28,   28,   30,   30,   30,   30,   35,
-       35,   35,   35,   57,  115,   57,   57,   58,   58,   58,
-       58,   60,   86,   60,   60,   34,   34,   34,   34,   64,
-       64,   35,   64,   83,   83,   83,   83,   85,   29,   85,
-       85,  109,  109,  109,  109,  119,  119,  119,  119,  123,
-      123,  123,  123,  127,  127,  127,  127,  132,  132,  132,
-      132,  138,  138,  138,  138,  140,   54,  140,  140,  148,
-
-      148,  148,  148,  152,  152,  152,  152,  154,   52,  154,
-      154,  157,  157,  157,  157,  159,   35,  159,  159,  162,
-      162,  162,  162,  164,   29,  164,  164,  168,  168,  168,
-      168,  170,  250,  170,  170,  175,  175,  175,  175,  142,
-       27,  142,  142,  180,  180,  180,  180,  182,   27,  182,
-      182,  188,  188,  188,  188,  156,  250,  156,  156,  161,
-      250,  161,  161,  166,  250,  166,  166,  172,  250,  172,
-      172,  193,  193,  193,  193,  195,  195,  195,  195,  184,
-      250,  184,  184,  199,  199,  199,  199,  201,  201,  201,
-      201,  203,  250,  203,  203,  206,  206,  206,  206,  209,
-
-      209,  209,  209,  211,  211,  211,  211,  214,  214,  214,
-      214,  218,  218,  218,  218,  205,  250,  205,  205,  220,
-      220,  220,  220,  224,  224,  224,  224,  210,  250,  210,
-      210,  228,  228,  228,  228,  232,  232,  232,  232,  234,
-      250,  234,  234,  236,  250,  236,  236,  243,  243,  243,
-      243,  247,  247,  247,  247,    5,  250,  250,  250,  250,
-      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
-      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
-      250,  250,  250,  250,  250,  250,  250,  250,  250
+      164,   28,  169,  192,   35,   35,  191,  170,  197,  199,
+       35,   28,  181,   28,  203,   28,  205,  182,   35,  204,
+      217,  206,   64,  211,  198,   28,  203,   35,  218,  219,
+       35,  204,  214,  224,  224,   61,  225,  224,  232,  229,
+      224,  227,  232,   28,  236,  230,   35,  233,  217,  237,
+
+      241,   28,  238,  217,   28,  236,  234,  239,   35,  217,
+      237,  245,   35,   35,  217,  217,  244,  253,   35,  252,
+      250,  242,  217,  240,  208,  201,  248,  243,  232,  246,
+      247,  196,  228,  251,   26,   26,   26,   26,   28,   28,
+       28,   30,   30,   30,   30,   35,   35,   35,   35,   57,
+      223,   57,   57,   58,   58,   58,   58,   60,  221,   60,
+       60,   34,   34,   34,   34,   64,   64,  213,   64,   83,
+       83,   83,   83,   85,  176,   85,   85,  109,  109,  109,
+      109,  119,  119,  119,  119,  123,  123,  123,  123,  127,
+      127,  127,  127,  132,  132,  132,  132,  138,  138,  138,
+
+      138,  140,  208,  140,  140,  148,  148,  148,  148,  152,
+      152,  152,  152,  154,  201,  154,  154,  157,  157,  157,
+      157,  159,  196,  159,  159,  162,  162,  162,  162,  164,
+      194,  164,  164,  168,  168,  168,  168,  170,  176,  170,
+      170,  175,  175,  175,  175,  142,  115,  142,  142,  180,
+      180,  180,  180,  182,   86,  182,  182,  188,  188,  188,
+      188,  156,   35,  156,  156,  161,   29,  161,  161,  166,
+       54,  166,  166,  172,   52,  172,  172,  193,  193,  193,
+      193,  195,  195,  195,  195,  184,   35,  184,  184,  200,
+      200,  200,  200,  202,  202,  202,  202,  204,   29,  204,
+
+      204,  207,  207,  207,  207,  210,  210,  210,  210,  212,
+      212,  212,  212,  216,  216,  216,  216,  220,  220,  220,
+      220,  206,  261,  206,  206,  222,  222,  222,  222,  226,
+      226,  226,  226,  211,   27,  211,  211,  231,  231,  231,
+      231,  235,  235,  235,  235,  237,   27,  237,  237,  239,
+      261,  239,  239,  249,  249,  249,  249,  254,  254,  254,
+      254,  257,  257,  257,  257,  259,  259,  259,  259,    5,
+      261,  261,  261,  261,  261,  261,  261,  261,  261,  261,
+      261,  261,  261,  261,  261,  261,  261,  261,  261,  261,
+      261,  261,  261,  261,  261,  261,  261,  261,  261,  261,
+
+      261,  261,  261
     } ;
 
-static yyconst flex_int16_t yy_chk[690] =
+static yyconst flex_int16_t yy_chk[704] =
     {   0,
         1,    1,    1,    1,    1,    1,    1,    1,    1,    1,
         1,    1,    1,    1,    1,    1,    1,    1,    1,    1,
@@ -660,7 +658,7 @@ static yyconst flex_int16_t yy_chk[690] =
        14,   11,   11,   47,   14,   11,   19,   19,   41,   15,
        16,   11,   12,   12,   12,   12,   12,   14,   16,   12,
        12,   12,   15,   18,   16,   20,   21,   23,   21,   23,
-      247,   25,   20,   38,   25,   38,   45,   18,   25,   30,
+      259,   25,   20,   38,   25,   38,   45,   18,   25,   30,
        30,   18,   31,   31,   31,   30,   40,   45,   40,   31,
 
        34,   34,   34,   34,   39,   42,   46,   34,   34,   36,
@@ -670,63 +668,66 @@ static yyconst flex_int16_t yy_chk[690] =
        58,   51,   65,   67,   50,   58,   54,   53,   54,   61,
        61,   52,   68,   67,   69,   61,   70,   71,   72,   70,
        73,   71,   74,   75,   77,   75,   68,   76,   82,   76,
-       72,   79,   73,   69,   78,   87,   78,   81,  245,   79,
+       72,   79,   73,   69,   78,   87,   78,   81,  257,   79,
        74,   80,   80,   81,   80,   91,   77,   80,   89,   82,
        83,   83,   89,   87,   90,   94,   83,   88,   88,   88,
 
-       88,   88,   95,  243,   88,   88,   90,   92,   91,   92,
+       88,   88,   95,  254,   88,   88,   90,   92,   91,   92,
        95,   98,   98,  103,   98,   94,   96,   96,   96,   96,
        96,  103,  106,   96,   96,   97,   97,   97,   97,   97,
-      100,  242,   97,   97,   99,   99,   99,   99,   99,  104,
+      100,  253,   97,   97,   99,   99,   99,   99,   99,  104,
       106,   99,   99,  104,  101,  100,  101,  102,  102,  102,
       102,  102,  105,  105,  102,  102,  107,  107,  109,  109,
       111,  112,  107,  113,  109,  115,  116,  112,  116,  117,
-      117,  119,  119,  111,  240,  117,  129,  119,  113,  114,
-      114,  114,  114,  114,  115,  197,  114,  114,  121,  121,
+      117,  119,  119,  111,  251,  117,  129,  119,  113,  114,
+      114,  114,  114,  114,  115,  198,  114,  114,  121,  121,
       123,  123,  125,  125,  121,  129,  123,  134,  125,  127,
 
-      127,  130,  130,  132,  132,  127,  135,  130,  197,  132,
-      137,  137,  138,  138,  143,  134,  145,  143,  138,  228,
+      127,  130,  130,  132,  132,  127,  135,  130,  198,  132,
+      137,  137,  138,  138,  143,  134,  145,  143,  138,  249,
       145,  146,  146,  148,  148,  135,  191,  146,  191,  148,
       150,  150,  151,  151,  151,  151,  151,  152,  152,  151,
       151,  157,  157,  152,  162,  162,  173,  157,  167,  167,
-      162,  168,  168,  174,  174,  178,  173,  168,  179,  179,
-      180,  180,  186,  186,  188,  188,  180,  198,  186,  220,
-      188,  192,  192,  216,  178,  198,  198,  201,  201,  213,
-      230,  217,  216,  201,  208,  208,  208,  208,  208,  227,
-      231,  208,  208,  227,  213,  217,  222,  222,  224,  224,
-
-      232,  232,  222,  230,  224,  238,  232,  237,  237,  241,
-      239,  218,  214,  211,  231,  209,  206,  199,  195,  193,
-      190,  185,  177,  175,  136,  238,  239,  241,  251,  251,
-      251,  251,  252,  252,  252,  253,  253,  253,  253,  254,
-      254,  254,  254,  255,   93,  255,  255,  256,  256,  256,
-      256,  257,   64,  257,  257,  258,  258,  258,  258,  259,
-      259,   35,  259,  260,  260,  260,  260,  261,   28,  261,
-      261,  262,  262,  262,  262,  263,  263,  263,  263,  264,
-      264,  264,  264,  265,  265,  265,  265,  266,  266,  266,
-      266,  267,  267,  267,  267,  268,   24,  268,  268,  269,
-
-      269,  269,  269,  270,  270,  270,  270,  271,   22,  271,
-      271,  272,  272,  272,  272,  273,   17,  273,  273,  274,
-      274,  274,  274,  275,    6,  275,  275,  276,  276,  276,
-      276,  277,    5,  277,  277,  278,  278,  278,  278,  279,
-        4,  279,  279,  280,  280,  280,  280,  281,    3,  281,
-      281,  282,  282,  282,  282,  283,    0,  283,  283,  284,
-        0,  284,  284,  285,    0,  285,  285,  286,    0,  286,
-      286,  287,  287,  287,  287,  288,  288,  288,  288,  289,
-        0,  289,  289,  290,  290,  290,  290,  291,  291,  291,
-      291,  292,    0,  292,  292,  293,  293,  293,  293,  294,
-
-      294,  294,  294,  295,  295,  295,  295,  296,  296,  296,
-      296,  297,  297,  297,  297,  298,    0,  298,  298,  299,
-      299,  299,  299,  300,  300,  300,  300,  301,    0,  301,
-      301,  302,  302,  302,  302,  303,  303,  303,  303,  304,
-        0,  304,  304,  305,    0,  305,  305,  306,  306,  306,
-      306,  307,  307,  307,  307,  250,  250,  250,  250,  250,
-      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
-      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
-      250,  250,  250,  250,  250,  250,  250,  250,  250
+      162,  168,  168,  174,  174,  178,  173,  168,  178,  179,
+      179,  180,  180,  186,  186,  188,  188,  180,  197,  186,
+      199,  188,  192,  192,  178,  202,  202,  214,  199,  199,
+      215,  202,  197,  209,  209,  209,  209,  209,  218,  214,
+      209,  209,  219,  224,  224,  215,  230,  218,  233,  224,
+
+      230,  226,  226,  234,  235,  235,  219,  226,  240,  242,
+      235,  241,  241,  244,  243,  246,  240,  248,  248,  247,
+      245,  233,  231,  229,  222,  220,  244,  234,  216,  242,
+      243,  212,  210,  246,  262,  262,  262,  262,  263,  263,
+      263,  264,  264,  264,  264,  265,  265,  265,  265,  266,
+      207,  266,  266,  267,  267,  267,  267,  268,  200,  268,
+      268,  269,  269,  269,  269,  270,  270,  195,  270,  271,
+      271,  271,  271,  272,  193,  272,  272,  273,  273,  273,
+      273,  274,  274,  274,  274,  275,  275,  275,  275,  276,
+      276,  276,  276,  277,  277,  277,  277,  278,  278,  278,
+
+      278,  279,  190,  279,  279,  280,  280,  280,  280,  281,
+      281,  281,  281,  282,  185,  282,  282,  283,  283,  283,
+      283,  284,  177,  284,  284,  285,  285,  285,  285,  286,
+      175,  286,  286,  287,  287,  287,  287,  288,  136,  288,
+      288,  289,  289,  289,  289,  290,   93,  290,  290,  291,
+      291,  291,  291,  292,   64,  292,  292,  293,  293,  293,
+      293,  294,   35,  294,  294,  295,   28,  295,  295,  296,
+       24,  296,  296,  297,   22,  297,  297,  298,  298,  298,
+      298,  299,  299,  299,  299,  300,   17,  300,  300,  301,
+      301,  301,  301,  302,  302,  302,  302,  303,    6,  303,
+
+      303,  304,  304,  304,  304,  305,  305,  305,  305,  306,
+      306,  306,  306,  307,  307,  307,  307,  308,  308,  308,
+      308,  309,    5,  309,  309,  310,  310,  310,  310,  311,
+      311,  311,  311,  312,    4,  312,  312,  313,  313,  313,
+      313,  314,  314,  314,  314,  315,    3,  315,  315,  316,
+        0,  316,  316,  317,  317,  317,  317,  318,  318,  318,
+      318,  319,  319,  319,  319,  320,  320,  320,  320,  261,
+      261,  261,  261,  261,  261,  261,  261,  261,  261,  261,
+      261,  261,  261,  261,  261,  261,  261,  261,  261,  261,
+      261,  261,  261,  261,  261,  261,  261,  261,  261,  261,
+
+      261,  261,  261
     } ;
 
 #define YY_TRAILING_MASK 0x2000
@@ -856,6 +857,13 @@ static void setbackendtype(DiskParseContext *dpc, const char *str) {
     else xlu__disk_err(dpc,str,"unknown value for backendtype");
 }
 
+/* Sets ->backend_domid from the string. */
+static void setbackend(DiskParseContext *dpc, const char *str) {
+    if (libxl_name_to_domid(dpc->ctx, str, &dpc->disk->backend_domid)) {
+        xlu__disk_err(dpc,str,"unknown domain for backend");
+    }
+}
+
 #define DEPRECATE(usewhatinstead) /* not currently reported */
 
 /* Handles a vdev positional parameter which includes a devtype. */
@@ -883,7 +891,7 @@ static int vdev_and_devtype(DiskParseContext *dpc, char *str) {
 #define DPC ((DiskParseContext*)yyextra)
 
 
-#line 887 "libxlu_disk_l.c"
+#line 895 "libxlu_disk_l.c"
 
 #define INITIAL 0
 #define LEXERR 1
@@ -980,6 +988,10 @@ int xlu__disk_yyget_lineno (yyscan_t yyscanner );
 
 void xlu__disk_yyset_lineno (int line_number ,yyscan_t yyscanner );
 
+int xlu__disk_yyget_column  (yyscan_t yyscanner );
+
+void xlu__disk_yyset_column (int column_no ,yyscan_t yyscanner );
+
 /* Macros after this point can all be overridden by user definitions in
  * section 1.
  */
@@ -1012,12 +1024,7 @@ static int input (yyscan_t yyscanner );
 
 /* Amount of stuff to slurp up with each read. */
 #ifndef YY_READ_BUF_SIZE
-#ifdef __ia64__
-/* On IA-64, the buffer size is 16k, not 8k */
-#define YY_READ_BUF_SIZE 16384
-#else
 #define YY_READ_BUF_SIZE 8192
-#endif /* __ia64__ */
 #endif
 
 /* Copy whatever the last rule matched to the standard output. */
@@ -1036,7 +1043,7 @@ static int input (yyscan_t yyscanner );
 	if ( YY_CURRENT_BUFFER_LVALUE->yy_is_interactive ) \
 		{ \
 		int c = '*'; \
-		size_t n; \
+		unsigned n; \
 		for ( n = 0; n < max_size && \
 			     (c = getc( yyin )) != EOF && c != '\n'; ++n ) \
 			buf[n] = (char) c; \
@@ -1119,12 +1126,12 @@ YY_DECL
 	register int yy_act;
     struct yyguts_t * yyg = (struct yyguts_t*)yyscanner;
 
-#line 155 "libxlu_disk_l.l"
+#line 162 "libxlu_disk_l.l"
 
 
  /*----- the scanner rules which do the parsing -----*/
 
-#line 1128 "libxlu_disk_l.c"
+#line 1135 "libxlu_disk_l.c"
 
 	if ( !yyg->yy_init )
 		{
@@ -1188,14 +1195,14 @@ yy_match:
 			while ( yy_chk[yy_base[yy_current_state] + yy_c] != yy_current_state )
 				{
 				yy_current_state = (int) yy_def[yy_current_state];
-				if ( yy_current_state >= 251 )
+				if ( yy_current_state >= 262 )
 					yy_c = yy_meta[(unsigned int) yy_c];
 				}
 			yy_current_state = yy_nxt[yy_base[yy_current_state] + (unsigned int) yy_c];
 			*yyg->yy_state_ptr++ = yy_current_state;
 			++yy_cp;
 			}
-		while ( yy_current_state != 250 );
+		while ( yy_current_state != 261 );
 
 yy_find_action:
 		yy_current_state = *--yyg->yy_state_ptr;
@@ -1245,89 +1252,95 @@ do_action:	/* This label is used only to access EOF actions. */
 case 1:
 /* rule 1 can match eol */
 YY_RULE_SETUP
-#line 159 "libxlu_disk_l.l"
+#line 166 "libxlu_disk_l.l"
 { /* ignore whitespace before parameters */ }
 	YY_BREAK
 /* ordinary parameters setting enums or strings */
 case 2:
 /* rule 2 can match eol */
 YY_RULE_SETUP
-#line 163 "libxlu_disk_l.l"
+#line 170 "libxlu_disk_l.l"
 { STRIP(','); setformat(DPC, FROMEQUALS); }
 	YY_BREAK
 case 3:
 YY_RULE_SETUP
-#line 165 "libxlu_disk_l.l"
+#line 172 "libxlu_disk_l.l"
 { DPC->disk->is_cdrom = 1; }
 	YY_BREAK
 case 4:
 YY_RULE_SETUP
-#line 166 "libxlu_disk_l.l"
+#line 173 "libxlu_disk_l.l"
 { DPC->disk->is_cdrom = 1; }
 	YY_BREAK
 case 5:
 YY_RULE_SETUP
-#line 167 "libxlu_disk_l.l"
+#line 174 "libxlu_disk_l.l"
 { DPC->disk->is_cdrom = 0; }
 	YY_BREAK
 case 6:
 /* rule 6 can match eol */
 YY_RULE_SETUP
-#line 168 "libxlu_disk_l.l"
+#line 175 "libxlu_disk_l.l"
 { xlu__disk_err(DPC,yytext,"unknown value for type"); }
 	YY_BREAK
 case 7:
 /* rule 7 can match eol */
 YY_RULE_SETUP
-#line 170 "libxlu_disk_l.l"
+#line 177 "libxlu_disk_l.l"
 { STRIP(','); setaccess(DPC, FROMEQUALS); }
 	YY_BREAK
 case 8:
 /* rule 8 can match eol */
 YY_RULE_SETUP
-#line 171 "libxlu_disk_l.l"
+#line 178 "libxlu_disk_l.l"
 { STRIP(','); setbackendtype(DPC,FROMEQUALS); }
 	YY_BREAK
 case 9:
 /* rule 9 can match eol */
 YY_RULE_SETUP
-#line 173 "libxlu_disk_l.l"
-{ STRIP(','); SAVESTRING("vdev", vdev, FROMEQUALS); }
+#line 179 "libxlu_disk_l.l"
+{ STRIP(','); setbackend(DPC,FROMEQUALS); }
 	YY_BREAK
 case 10:
 /* rule 10 can match eol */
 YY_RULE_SETUP
-#line 174 "libxlu_disk_l.l"
+#line 181 "libxlu_disk_l.l"
+{ STRIP(','); SAVESTRING("vdev", vdev, FROMEQUALS); }
+	YY_BREAK
+case 11:
+/* rule 11 can match eol */
+YY_RULE_SETUP
+#line 182 "libxlu_disk_l.l"
 { STRIP(','); SAVESTRING("script", script, FROMEQUALS); }
 	YY_BREAK
 /* the target magic parameter, eats the rest of the string */
-case 11:
+case 12:
 YY_RULE_SETUP
-#line 178 "libxlu_disk_l.l"
+#line 186 "libxlu_disk_l.l"
 { STRIP(','); SAVESTRING("target", pdev_path, FROMEQUALS); }
 	YY_BREAK
 /* unknown parameters */
-case 12:
-/* rule 12 can match eol */
+case 13:
+/* rule 13 can match eol */
 YY_RULE_SETUP
-#line 182 "libxlu_disk_l.l"
+#line 190 "libxlu_disk_l.l"
 { xlu__disk_err(DPC,yytext,"unknown parameter"); }
 	YY_BREAK
 /* deprecated prefixes */
 /* the "/.*" in these patterns ensures that they count as if they
    * matched the whole string, so these patterns take precedence */
-case 13:
+case 14:
 YY_RULE_SETUP
-#line 189 "libxlu_disk_l.l"
+#line 197 "libxlu_disk_l.l"
 {
                     STRIP(':');
                     DPC->had_depr_prefix=1; DEPRECATE("use `[format=]...,'");
                     setformat(DPC, yytext);
                  }
 	YY_BREAK
-case 14:
+case 15:
 YY_RULE_SETUP
-#line 195 "libxlu_disk_l.l"
+#line 203 "libxlu_disk_l.l"
 {
                     char *newscript;
                     STRIP(':');
@@ -1341,65 +1354,65 @@ YY_RULE_SETUP
                     free(newscript);
                 }
 	YY_BREAK
-case 15:
+case 16:
 *yy_cp = yyg->yy_hold_char; /* undo effects of setting up yytext */
 yyg->yy_c_buf_p = yy_cp = yy_bp + 8;
 YY_DO_BEFORE_ACTION; /* set up yytext again */
 YY_RULE_SETUP
-#line 208 "libxlu_disk_l.l"
+#line 216 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
-case 16:
+case 17:
 YY_RULE_SETUP
-#line 209 "libxlu_disk_l.l"
+#line 217 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
-case 17:
+case 18:
 *yy_cp = yyg->yy_hold_char; /* undo effects of setting up yytext */
 yyg->yy_c_buf_p = yy_cp = yy_bp + 4;
 YY_DO_BEFORE_ACTION; /* set up yytext again */
 YY_RULE_SETUP
-#line 210 "libxlu_disk_l.l"
+#line 218 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
-case 18:
+case 19:
 *yy_cp = yyg->yy_hold_char; /* undo effects of setting up yytext */
 yyg->yy_c_buf_p = yy_cp = yy_bp + 6;
 YY_DO_BEFORE_ACTION; /* set up yytext again */
 YY_RULE_SETUP
-#line 211 "libxlu_disk_l.l"
+#line 219 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
-case 19:
+case 20:
 *yy_cp = yyg->yy_hold_char; /* undo effects of setting up yytext */
 yyg->yy_c_buf_p = yy_cp = yy_bp + 5;
 YY_DO_BEFORE_ACTION; /* set up yytext again */
 YY_RULE_SETUP
-#line 212 "libxlu_disk_l.l"
+#line 220 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
-case 20:
+case 21:
 *yy_cp = yyg->yy_hold_char; /* undo effects of setting up yytext */
 yyg->yy_c_buf_p = yy_cp = yy_bp + 4;
 YY_DO_BEFORE_ACTION; /* set up yytext again */
 YY_RULE_SETUP
-#line 213 "libxlu_disk_l.l"
+#line 221 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
-case 21:
-/* rule 21 can match eol */
+case 22:
+/* rule 22 can match eol */
 YY_RULE_SETUP
-#line 215 "libxlu_disk_l.l"
+#line 223 "libxlu_disk_l.l"
 {
 		  xlu__disk_err(DPC,yytext,"unknown deprecated disk prefix");
 		  return 0;
 		}
 	YY_BREAK
 /* positional parameters */
-case 22:
-/* rule 22 can match eol */
+case 23:
+/* rule 23 can match eol */
 YY_RULE_SETUP
-#line 222 "libxlu_disk_l.l"
+#line 230 "libxlu_disk_l.l"
 {
     STRIP(',');
 
@@ -1426,27 +1439,27 @@ YY_RULE_SETUP
     }
 }
 	YY_BREAK
-case 23:
+case 24:
 YY_RULE_SETUP
-#line 248 "libxlu_disk_l.l"
+#line 256 "libxlu_disk_l.l"
 {
     BEGIN(LEXERR);
     yymore();
 }
 	YY_BREAK
-case 24:
+case 25:
 YY_RULE_SETUP
-#line 252 "libxlu_disk_l.l"
+#line 260 "libxlu_disk_l.l"
 {
     xlu__disk_err(DPC,yytext,"bad disk syntax"); return 0;
 }
 	YY_BREAK
-case 25:
+case 26:
 YY_RULE_SETUP
-#line 255 "libxlu_disk_l.l"
+#line 263 "libxlu_disk_l.l"
 YY_FATAL_ERROR( "flex scanner jammed" );
 	YY_BREAK
-#line 1450 "libxlu_disk_l.c"
+#line 1463 "libxlu_disk_l.c"
 			case YY_STATE_EOF(INITIAL):
 			case YY_STATE_EOF(LEXERR):
 				yyterminate();
@@ -1710,7 +1723,7 @@ static int yy_get_next_buffer (yyscan_t yyscanner)
 		while ( yy_chk[yy_base[yy_current_state] + yy_c] != yy_current_state )
 			{
 			yy_current_state = (int) yy_def[yy_current_state];
-			if ( yy_current_state >= 251 )
+			if ( yy_current_state >= 262 )
 				yy_c = yy_meta[(unsigned int) yy_c];
 			}
 		yy_current_state = yy_nxt[yy_base[yy_current_state] + (unsigned int) yy_c];
@@ -1734,11 +1747,11 @@ static int yy_get_next_buffer (yyscan_t yyscanner)
 	while ( yy_chk[yy_base[yy_current_state] + yy_c] != yy_current_state )
 		{
 		yy_current_state = (int) yy_def[yy_current_state];
-		if ( yy_current_state >= 251 )
+		if ( yy_current_state >= 262 )
 			yy_c = yy_meta[(unsigned int) yy_c];
 		}
 	yy_current_state = yy_nxt[yy_base[yy_current_state] + (unsigned int) yy_c];
-	yy_is_jam = (yy_current_state == 250);
+	yy_is_jam = (yy_current_state == 261);
 	if ( ! yy_is_jam )
 		*yyg->yy_state_ptr++ = yy_current_state;
 
@@ -2147,8 +2160,8 @@ YY_BUFFER_STATE xlu__disk_yy_scan_string (yyconst char * yystr , yyscan_t yyscan
 
 /** Setup the input buffer state to scan the given bytes. The next call to xlu__disk_yylex() will
  * scan from a @e copy of @a bytes.
- * @param yybytes the byte buffer to scan
- * @param _yybytes_len the number of bytes in the buffer pointed to by @a bytes.
+ * @param bytes the byte buffer to scan
+ * @param len the number of bytes in the buffer pointed to by @a bytes.
  * @param yyscanner The scanner object.
  * @return the newly allocated buffer state object.
  */
@@ -2538,4 +2551,4 @@ void xlu__disk_yyfree (void * ptr , yyscan_t yyscanner)
 
 #define YYTABLES_NAME "yytables"
 
-#line 255 "libxlu_disk_l.l"
+#line 263 "libxlu_disk_l.l"
diff --git a/tools/libxl/libxlu_disk_l.h b/tools/libxl/libxlu_disk_l.h
index de03908..247a0d7 100644
--- a/tools/libxl/libxlu_disk_l.h
+++ b/tools/libxl/libxlu_disk_l.h
@@ -62,6 +62,7 @@ typedef int flex_int32_t;
 typedef unsigned char flex_uint8_t; 
 typedef unsigned short int flex_uint16_t;
 typedef unsigned int flex_uint32_t;
+#endif /* ! C99 */
 
 /* Limits of integral types. */
 #ifndef INT8_MIN
@@ -92,8 +93,6 @@ typedef unsigned int flex_uint32_t;
 #define UINT32_MAX             (4294967295U)
 #endif
 
-#endif /* ! C99 */
-
 #endif /* ! FLEXINT_H */
 
 #ifdef __cplusplus
@@ -136,15 +135,7 @@ typedef void* yyscan_t;
 
 /* Size of default input buffer. */
 #ifndef YY_BUF_SIZE
-#ifdef __ia64__
-/* On IA-64, the buffer size is 16k, not 8k.
- * Moreover, YY_BUF_SIZE is 2*YY_READ_BUF_SIZE in the general case.
- * Ditto for the __ia64__ case accordingly.
- */
-#define YY_BUF_SIZE 32768
-#else
 #define YY_BUF_SIZE 16384
-#endif /* __ia64__ */
 #endif
 
 #ifndef YY_TYPEDEF_YY_BUFFER_STATE
@@ -280,6 +271,10 @@ int xlu__disk_yyget_lineno (yyscan_t yyscanner );
 
 void xlu__disk_yyset_lineno (int line_number ,yyscan_t yyscanner );
 
+int xlu__disk_yyget_column  (yyscan_t yyscanner );
+
+void xlu__disk_yyset_column (int column_no ,yyscan_t yyscanner );
+
 /* Macros after this point can all be overridden by user definitions in
  * section 1.
  */
@@ -306,12 +301,7 @@ static int yy_flex_strlen (yyconst char * ,yyscan_t yyscanner);
 
 /* Amount of stuff to slurp up with each read. */
 #ifndef YY_READ_BUF_SIZE
-#ifdef __ia64__
-/* On IA-64, the buffer size is 16k, not 8k */
-#define YY_READ_BUF_SIZE 16384
-#else
 #define YY_READ_BUF_SIZE 8192
-#endif /* __ia64__ */
 #endif
 
 /* Number of entries by which start-condition stack grows. */
@@ -344,8 +334,8 @@ extern int xlu__disk_yylex (yyscan_t yyscanner);
 #undef YY_DECL
 #endif
 
-#line 255 "libxlu_disk_l.l"
+#line 263 "libxlu_disk_l.l"
 
-#line 350 "libxlu_disk_l.h"
+#line 340 "libxlu_disk_l.h"
 #undef xlu__disk_yyIN_HEADER
 #endif /* xlu__disk_yyHEADER_H */
diff --git a/tools/libxl/libxlu_disk_l.l b/tools/libxl/libxlu_disk_l.l
index bee16a1..6bd48e8 100644
--- a/tools/libxl/libxlu_disk_l.l
+++ b/tools/libxl/libxlu_disk_l.l
@@ -113,6 +113,13 @@ static void setbackendtype(DiskParseContext *dpc, const char *str) {
     else xlu__disk_err(dpc,str,"unknown value for backendtype");
 }
 
+/* Sets ->backend_domid from the string. */
+static void setbackend(DiskParseContext *dpc, const char *str) {
+    if (libxl_name_to_domid(dpc->ctx, str, &dpc->disk->backend_domid)) {
+        xlu__disk_err(dpc,str,"unknown domain for backend");
+    }
+}
+
 #define DEPRECATE(usewhatinstead) /* not currently reported */
 
 /* Handles a vdev positional parameter which includes a devtype. */
@@ -169,6 +176,7 @@ devtype=[^,]*,?	{ xlu__disk_err(DPC,yytext,"unknown value for type"); }
 
 access=[^,]*,?	{ STRIP(','); setaccess(DPC, FROMEQUALS); }
 backendtype=[^,]*,? { STRIP(','); setbackendtype(DPC,FROMEQUALS); }
+backenddomain=[^,]*,? { STRIP(','); setbackend(DPC,FROMEQUALS); }
 
 vdev=[^,]*,?	{ STRIP(','); SAVESTRING("vdev", vdev, FROMEQUALS); }
 script=[^,]*,?	{ STRIP(','); SAVESTRING("script", script, FROMEQUALS); }
diff --git a/tools/libxl/libxlutil.h b/tools/libxl/libxlutil.h
index 0333e55..87eb399 100644
--- a/tools/libxl/libxlutil.h
+++ b/tools/libxl/libxlutil.h
@@ -72,7 +72,7 @@ const char *xlu_cfg_get_listitem(const XLU_ConfigList*, int entry);
  */
 
 int xlu_disk_parse(XLU_Config *cfg, int nspecs, const char *const *specs,
-                   libxl_device_disk *disk);
+                   libxl_device_disk *disk, libxl_ctx *ctx);
   /* disk must have been initialised.
    *
    * On error, returns errno value.  Bad strings cause EINVAL and
diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index 138cd72..fd00d61 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -420,7 +420,7 @@ static void parse_disk_config_multistring(XLU_Config **config,
         if (!*config) { perror("xlu_cfg_init"); exit(-1); }
     }
 
-    e = xlu_disk_parse(*config, nspecs, specs, disk);
+    e = xlu_disk_parse(*config, nspecs, specs, disk, ctx);
     if (e == EINVAL) exit(-1);
     if (e) {
         fprintf(stderr,"xlu_disk_parse failed: %s\n",strerror(errno));
@@ -5335,7 +5335,7 @@ int main_networkdetach(int argc, char **argv)
 int main_blockattach(int argc, char **argv)
 {
     int opt;
-    uint32_t fe_domid, be_domid = 0;
+    uint32_t fe_domid;
     libxl_device_disk disk = { 0 };
     XLU_Config *config = 0;
 
@@ -5351,8 +5351,6 @@ int main_blockattach(int argc, char **argv)
     parse_disk_config_multistring
         (&config, argc-optind, (const char* const*)argv + optind, &disk);
 
-    disk.backend_domid = be_domid;
-
     if (dryrun_only) {
         char *json = libxl_device_disk_to_json(ctx, &disk);
         printf("disk: %s\n", json);
-- 
1.7.11.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 21:52:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 21:52:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyVDf-0005WZ-KD; Mon, 06 Aug 2012 21:52:11 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1SyVDd-0005WR-Qn
	for xen-devel@lists.xen.org; Mon, 06 Aug 2012 21:52:10 +0000
Received: from [85.158.143.99:28163] by server-3.bemta-4.messagelabs.com id
	81/01-01511-98C30205; Mon, 06 Aug 2012 21:52:09 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-9.tower-216.messagelabs.com!1344289924!30562512!1
X-Originating-IP: [63.239.67.10]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12868 invoked from network); 6 Aug 2012 21:52:04 -0000
Received: from emvm-gh1-uea09.nsa.gov (HELO nsa.gov) (63.239.67.10)
	by server-9.tower-216.messagelabs.com with SMTP;
	6 Aug 2012 21:52:04 -0000
X-TM-IMSS-Message-ID: <7c995485000339e5@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov
	([63.239.67.10]) with ESMTP (TREND IMSS SMTP Service 7.1) id
	7c995485000339e5 ; Mon, 6 Aug 2012 17:52:27 -0400
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id q76LpvwS004740; 
	Mon, 6 Aug 2012 17:51:58 -0400
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: Ian Campbell <ian.campbell@citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>
Date: Mon,  6 Aug 2012 17:51:56 -0400
Message-Id: <1344289916-11518-1-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.2
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH RFC/for-4.2?] libxl: Support backend domain ID
	for disks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Allow specification of backend domains for disks, either in the config
file or via xl block-attach.

A version of this patch was submitted in October 2011 but was not
suitable at the time because libxl did not yet support the "script="
option for disks. Now that this option exists, it is possible to
specify a backend domain without needing to duplicate the device tree of
the domain providing the disk in the domain using libxl; just specify
script=/bin/true (or any more useful script) to prevent the block script
from running in the domain using libxl.
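
For illustration, a disk specification using the new backenddomain= key
together with script= might look like this in a guest config (the domain
name "storage" and the target path are made-up examples, not taken from
the patch):

```
disk = [ 'backenddomain=storage,vdev=xvda,format=raw,script=/bin/true,target=/dev/vg0/guest-disk' ]
```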

In order to support named backend domains, as network-attach does, the
prototype of xlu_disk_parse in libxlutil.h needs a libxl_ctx. Without
this parameter, it would only be possible to support numeric domain
IDs in the block device specification.

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>

---

This patch does not include the changes to tools/libxl/libxlu_disk_l.c
and tools/libxl/libxlu_disk_l.h because the diffs contain unrelated
changes due to different generator versions.

 tools/libxl/libxlu_disk.c   |   3 +-
 tools/libxl/libxlu_disk_i.h |   3 +-
 tools/libxl/libxlu_disk_l.c | 581 ++++++++++++++++++++++----------------------
 tools/libxl/libxlu_disk_l.h |  24 +-
 tools/libxl/libxlu_disk_l.l |   8 +
 tools/libxl/libxlutil.h     |   2 +-
 tools/libxl/xl_cmdimpl.c    |   6 +-
 7 files changed, 319 insertions(+), 308 deletions(-)

diff --git a/tools/libxl/libxlu_disk.c b/tools/libxl/libxlu_disk.c
index 18fe386..1e6caca 100644
--- a/tools/libxl/libxlu_disk.c
+++ b/tools/libxl/libxlu_disk.c
@@ -48,7 +48,7 @@ static void dpc_dispose(DiskParseContext *dpc) {
 
 int xlu_disk_parse(XLU_Config *cfg,
                    int nspecs, const char *const *specs,
-                   libxl_device_disk *disk) {
+                   libxl_device_disk *disk, libxl_ctx *ctx) {
     DiskParseContext dpc;
     int i, e;
 
@@ -56,6 +56,7 @@ int xlu_disk_parse(XLU_Config *cfg,
     dpc.cfg = cfg;
     dpc.scanner = 0;
     dpc.disk = disk;
+    dpc.ctx = ctx;
 
     disk->readwrite = 1;
 
diff --git a/tools/libxl/libxlu_disk_i.h b/tools/libxl/libxlu_disk_i.h
index 4fccd4a..c220bcf 100644
--- a/tools/libxl/libxlu_disk_i.h
+++ b/tools/libxl/libxlu_disk_i.h
@@ -2,7 +2,7 @@
 #define LIBXLU_DISK_I_H
 
 #include "libxlu_internal.h"
-
+#include "libxl_utils.h"
 
 typedef struct {
     XLU_Config *cfg;
@@ -12,6 +12,7 @@ typedef struct {
     libxl_device_disk *disk;
     int access_set, had_depr_prefix;
     const char *spec;
+    libxl_ctx *ctx;
 } DiskParseContext;
 
 void xlu__disk_err(DiskParseContext *dpc, const char *erroneous,
diff --git a/tools/libxl/libxlu_disk_l.c b/tools/libxl/libxlu_disk_l.c
index 4c68034..4e17f7c 100644
--- a/tools/libxl/libxlu_disk_l.c
+++ b/tools/libxl/libxlu_disk_l.c
@@ -58,6 +58,7 @@ typedef int flex_int32_t;
 typedef unsigned char flex_uint8_t; 
 typedef unsigned short int flex_uint16_t;
 typedef unsigned int flex_uint32_t;
+#endif /* ! C99 */
 
 /* Limits of integral types. */
 #ifndef INT8_MIN
@@ -88,8 +89,6 @@ typedef unsigned int flex_uint32_t;
 #define UINT32_MAX             (4294967295U)
 #endif
 
-#endif /* ! C99 */
-
 #endif /* ! FLEXINT_H */
 
 #ifdef __cplusplus
@@ -163,15 +162,7 @@ typedef void* yyscan_t;
 
 /* Size of default input buffer. */
 #ifndef YY_BUF_SIZE
-#ifdef __ia64__
-/* On IA-64, the buffer size is 16k, not 8k.
- * Moreover, YY_BUF_SIZE is 2*YY_READ_BUF_SIZE in the general case.
- * Ditto for the __ia64__ case accordingly.
- */
-#define YY_BUF_SIZE 32768
-#else
 #define YY_BUF_SIZE 16384
-#endif /* __ia64__ */
 #endif
 
 /* The state buf must be large enough to hold one state per character in the main buffer.
@@ -361,8 +352,8 @@ static void yy_fatal_error (yyconst char msg[] ,yyscan_t yyscanner );
 	*yy_cp = '\0'; \
 	yyg->yy_c_buf_p = yy_cp;
 
-#define YY_NUM_RULES 25
-#define YY_END_OF_BUFFER 26
+#define YY_NUM_RULES 26
+#define YY_END_OF_BUFFER 27
 /* This struct is not used in this scanner,
    but its presence is necessary. */
 struct yy_trans_info
@@ -370,60 +361,61 @@ struct yy_trans_info
 	flex_int32_t yy_verify;
 	flex_int32_t yy_nxt;
 	};
-static yyconst flex_int16_t yy_acclist[447] =
+static yyconst flex_int16_t yy_acclist[460] =
     {   0,
-       24,   24,   26,   22,   23,   25, 8193,   22,   23,   25,
-    16385, 8193,   22,   25,16385,   22,   23,   25,   23,   25,
-       22,   23,   25,   22,   23,   25,   22,   23,   25,   22,
-       23,   25,   22,   23,   25,   22,   23,   25,   22,   23,
-       25,   22,   23,   25,   22,   23,   25,   22,   23,   25,
-       22,   23,   25,   22,   23,   25,   22,   23,   25,   22,
-       23,   25,   22,   23,   25,   24,   25,   25,   22,   22,
-     8193,   22, 8193,   22,16385, 8193,   22, 8193,   22,   22,
-     8213,   22,16405,   22,   22,   22,   22,   22,   22,   22,
-       22,   22,   22,   22,   22,   22,   22,   22,   22,   22,
-
-       22,   22,   24, 8193,   22, 8193,   22, 8193, 8213,   22,
-     8213,   22, 8213,   12,   22,   22,   22,   22,   22,   22,
-       22,   22,   22,   22,   22,   22,   22,   22,   22,   22,
-       22,   22, 8213,   22, 8213,   22, 8213,   12,   22,   17,
-     8213,   22,16405,   22,   22,   22,   22,   22,   22,   22,
-     8206, 8213,   22,16398,16405,   20, 8213,   22,16405,   22,
-     8205, 8213,   22,16397,16405,   22,   22, 8208, 8213,   22,
-    16400,16405,   22,   22,   22,   22,   17, 8213,   22,   17,
-     8213,   22,   17,   22,   17, 8213,   22,    3,   22,   22,
-       19, 8213,   22,16405,   22,   22, 8206, 8213,   22, 8206,
-
-     8213,   22, 8206,   22, 8206, 8213,   20, 8213,   22,   20,
-     8213,   22,   20,   22,   20, 8213, 8205, 8213,   22, 8205,
-     8213,   22, 8205,   22, 8205, 8213,   22, 8208, 8213,   22,
-     8208, 8213,   22, 8208,   22, 8208, 8213,   22,   22,    9,
-       22,   17, 8213,   22,   17, 8213,   22,   17, 8213,   17,
-       22,   17,   22,    3,   22,   22,   19, 8213,   22,   19,
-     8213,   22,   19,   22,   19, 8213,   22,   18, 8213,   22,
-    16405, 8206, 8213,   22, 8206, 8213,   22, 8206, 8213, 8206,
-       22, 8206,   20, 8213,   22,   20, 8213,   22,   20, 8213,
-       20,   22,   20, 8205, 8213,   22, 8205, 8213,   22, 8205,
-
-     8213, 8205,   22, 8205,   22, 8208, 8213,   22, 8208, 8213,
-       22, 8208, 8213, 8208,   22, 8208,   22,   22,    9,   12,
-        9,    7,   22,   22,   19, 8213,   22,   19, 8213,   22,
-       19, 8213,   19,   22,   19,    2,   18, 8213,   22,   18,
-     8213,   22,   18,   22,   18, 8213,   10,   22,   11,    9,
-        9,   12,    7,   12,    7,   22,    6,    2,   12,    2,
-       18, 8213,   22,   18, 8213,   22,   18, 8213,   18,   22,
-       18,   10,   12,   10,   15, 8213,   22,16405,   11,   12,
-       11,    7,    7,   12,   22,    6,   12,    6,    6,   12,
-        6,   12,    2,    2,   12,   10,   10,   12,   15, 8213,
-
-       22,   15, 8213,   22,   15,   22,   15, 8213,   11,   12,
-       22,    6,    6,   12,    6,    6,   15, 8213,   22,   15,
-     8213,   22,   15, 8213,   15,   22,   15,   22,    6,    6,
-        8,    6,    5,    6,    8,   12,    8,    4,    6,    5,
-        6,    8,    8,   12,    4,    6
+       25,   25,   27,   23,   24,   26, 8193,   23,   24,   26,
+    16385, 8193,   23,   26,16385,   23,   24,   26,   24,   26,
+       23,   24,   26,   23,   24,   26,   23,   24,   26,   23,
+       24,   26,   23,   24,   26,   23,   24,   26,   23,   24,
+       26,   23,   24,   26,   23,   24,   26,   23,   24,   26,
+       23,   24,   26,   23,   24,   26,   23,   24,   26,   23,
+       24,   26,   23,   24,   26,   25,   26,   26,   23,   23,
+     8193,   23, 8193,   23,16385, 8193,   23, 8193,   23,   23,
+     8214,   23,16406,   23,   23,   23,   23,   23,   23,   23,
+       23,   23,   23,   23,   23,   23,   23,   23,   23,   23,
+
+       23,   23,   25, 8193,   23, 8193,   23, 8193, 8214,   23,
+     8214,   23, 8214,   13,   23,   23,   23,   23,   23,   23,
+       23,   23,   23,   23,   23,   23,   23,   23,   23,   23,
+       23,   23, 8214,   23, 8214,   23, 8214,   13,   23,   18,
+     8214,   23,16406,   23,   23,   23,   23,   23,   23,   23,
+     8207, 8214,   23,16399,16406,   21, 8214,   23,16406,   23,
+     8206, 8214,   23,16398,16406,   23,   23, 8209, 8214,   23,
+    16401,16406,   23,   23,   23,   23,   18, 8214,   23,   18,
+     8214,   23,   18,   23,   18, 8214,   23,    3,   23,   23,
+       20, 8214,   23,16406,   23,   23, 8207, 8214,   23, 8207,
+
+     8214,   23, 8207,   23, 8207, 8214,   21, 8214,   23,   21,
+     8214,   23,   21,   23,   21, 8214, 8206, 8214,   23, 8206,
+     8214,   23, 8206,   23, 8206, 8214,   23, 8209, 8214,   23,
+     8209, 8214,   23, 8209,   23, 8209, 8214,   23,   23,   10,
+       23,   18, 8214,   23,   18, 8214,   23,   18, 8214,   18,
+       23,   18,   23,    3,   23,   23,   20, 8214,   23,   20,
+     8214,   23,   20,   23,   20, 8214,   23,   19, 8214,   23,
+    16406, 8207, 8214,   23, 8207, 8214,   23, 8207, 8214, 8207,
+       23, 8207,   21, 8214,   23,   21, 8214,   23,   21, 8214,
+       21,   23,   21, 8206, 8214,   23, 8206, 8214,   23, 8206,
+
+     8214, 8206,   23, 8206,   23, 8209, 8214,   23, 8209, 8214,
+       23, 8209, 8214, 8209,   23, 8209,   23,   23,   10,   13,
+       10,    7,   23,   23,   20, 8214,   23,   20, 8214,   23,
+       20, 8214,   20,   23,   20,    2,   19, 8214,   23,   19,
+     8214,   23,   19,   23,   19, 8214,   11,   23,   12,   10,
+       10,   13,    7,   13,    7,   23,   23,    6,    2,   13,
+        2,   19, 8214,   23,   19, 8214,   23,   19, 8214,   19,
+       23,   19,   11,   13,   11,   16, 8214,   23,16406,   12,
+       13,   12,    7,    7,   13,   23,   23,    6,   13,    6,
+        6,   13,    6,   13,    2,    2,   13,   11,   11,   13,
+
+       16, 8214,   23,   16, 8214,   23,   16,   23,   16, 8214,
+       12,   13,   23,   23,    6,    6,   13,    6,    6,   16,
+     8214,   23,   16, 8214,   23,   16, 8214,   16,   23,   16,
+       23,   23,    6,    6,   23,    8,    6,    5,    6,   23,
+        8,   13,    8,    4,    6,    5,    6,    9,    8,    8,
+       13,    4,    6,    9,   13,    9,    9,    9,   13
     } ;
 
-static yyconst flex_int16_t yy_accept[252] =
+static yyconst flex_int16_t yy_accept[263] =
     {   0,
         1,    1,    1,    2,    3,    4,    7,   12,   16,   19,
        21,   24,   27,   30,   33,   36,   39,   42,   45,   48,
@@ -445,14 +437,15 @@ static yyconst flex_int16_t yy_accept[252] =
       293,  294,  297,  300,  302,  304,  305,  306,  309,  312,
       314,  316,  317,  318,  319,  321,  322,  323,  324,  325,
       328,  331,  333,  335,  336,  337,  340,  343,  345,  347,
-      348,  349,  350,  351,  353,  355,  356,  357,  358,  360,
-
-      361,  364,  367,  369,  371,  372,  374,  375,  379,  381,
-      382,  383,  385,  386,  388,  389,  391,  393,  394,  396,
-      397,  399,  402,  405,  407,  409,  411,  412,  413,  415,
-      416,  417,  420,  423,  425,  427,  428,  429,  430,  431,
-      432,  433,  435,  437,  438,  440,  442,  443,  445,  447,
-      447
+      348,  349,  350,  351,  353,  355,  356,  357,  358,  359,
+
+      361,  362,  365,  368,  370,  372,  373,  375,  376,  380,
+      382,  383,  384,  386,  387,  388,  390,  391,  393,  395,
+      396,  398,  399,  401,  404,  407,  409,  411,  413,  414,
+      415,  416,  418,  419,  420,  423,  426,  428,  430,  431,
+      432,  433,  434,  435,  436,  437,  438,  440,  441,  443,
+      444,  446,  448,  449,  450,  452,  454,  456,  457,  458,
+      460,  460
     } ;
 
 static yyconst flex_int32_t yy_ec[256] =
@@ -495,83 +488,85 @@ static yyconst flex_int32_t yy_meta[34] =
         1,    1,    1
     } ;
 
-static yyconst flex_int16_t yy_base[308] =
+static yyconst flex_int16_t yy_base[321] =
     {   0,
-        0,    0,  546,  538,  533,  521,   32,   35,  656,  656,
-       44,   62,   30,   41,   50,   51,  507,   64,   47,   66,
-       67,  499,   68,  487,   72,    0,  656,  465,  656,   87,
-       91,    0,    0,  100,  452,  109,    0,   74,   95,   87,
+        0,    0,  644,  632,  623,  595,   32,   35,  670,  670,
+       44,   62,   30,   41,   50,   51,  577,   64,   47,   66,
+       67,  565,   68,  561,   72,    0,  670,  563,  670,   87,
+       91,    0,    0,  100,  553,  109,    0,   74,   95,   87,
        32,   96,  105,  110,   77,   97,   40,  113,  116,  112,
       118,  120,  121,  122,  123,  125,    0,  137,    0,    0,
-      147,    0,    0,  449,  129,  126,  134,  143,  145,  147,
+      147,    0,    0,  551,  129,  126,  134,  143,  145,  147,
       148,  149,  151,  153,  156,  160,  155,  167,  162,  175,
-      168,  159,  188,    0,    0,  656,  166,  197,  179,  185,
-      176,  200,  435,  186,  193,  216,  225,  205,  234,  221,
+      168,  159,  188,    0,    0,  670,  166,  197,  179,  185,
+      176,  200,  537,  186,  193,  216,  225,  205,  234,  221,
 
       237,  247,  204,  230,  244,  213,  254,    0,  256,    0,
       251,  258,  254,  279,  256,  259,  267,    0,  269,    0,
       286,    0,  288,    0,  290,    0,  297,    0,  267,  299,
-        0,  301,    0,  288,  297,  421,  302,  310,    0,    0,
-        0,    0,  305,  656,  307,  319,    0,  321,    0,  322,
+        0,  301,    0,  288,  297,  535,  302,  310,    0,    0,
+        0,    0,  305,  670,  307,  319,    0,  321,    0,  322,
       332,  335,    0,    0,    0,    0,  339,    0,    0,    0,
         0,  342,    0,    0,    0,    0,  340,  349,    0,    0,
-        0,    0,  337,  345,  420,  656,  419,  346,  350,  358,
-        0,    0,    0,    0,  418,  360,    0,  362,    0,  417,
-      319,  369,  416,  656,  415,  656,  276,  364,  414,  656,
-
-      375,    0,    0,    0,    0,  413,  656,  384,  412,    0,
-      410,  656,  370,  409,  656,  370,  378,  408,  656,  366,
-      656,  394,    0,  396,    0,    0,  380,  316,  656,  377,
-      387,  398,    0,    0,    0,    0,  399,  402,  407,  271,
-      406,  228,  200,  656,  175,  656,   77,  656,  656,  656,
-      428,  432,  435,  439,  443,  447,  451,  455,  459,  463,
-      467,  471,  475,  479,  483,  487,  491,  495,  499,  503,
-      507,  511,  515,  519,  523,  527,  531,  535,  539,  543,
-      547,  551,  555,  559,  563,  567,  571,  575,  579,  583,
-      587,  591,  595,  599,  603,  607,  611,  615,  619,  623,
-
-      627,  631,  635,  639,  643,  647,  651
+        0,    0,  337,  345,  527,  670,  519,  346,  351,  359,
+        0,    0,    0,    0,  511,  361,    0,  363,    0,  499,
+      319,  370,  471,  670,  464,  670,  359,  276,  367,  455,
+
+      670,  373,    0,    0,    0,    0,  447,  670,  383,  429,
+        0,  428,  670,  368,  371,  425,  670,  385,  389,  422,
+      670,  421,  670,  391,    0,  399,    0,    0,  414,  387,
+      419,  670,  395,  400,  402,    0,    0,    0,    0,  399,
+      403,  406,  411,  404,  417,  412,  416,  409,  316,  670,
+      271,  670,  228,  200,  670,  670,  175,  670,   77,  670,
+      670,  434,  438,  441,  445,  449,  453,  457,  461,  465,
+      469,  473,  477,  481,  485,  489,  493,  497,  501,  505,
+      509,  513,  517,  521,  525,  529,  533,  537,  541,  545,
+      549,  553,  557,  561,  565,  569,  573,  577,  581,  585,
+
+      589,  593,  597,  601,  605,  609,  613,  617,  621,  625,
+      629,  633,  637,  641,  645,  649,  653,  657,  661,  665
     } ;
 
-static yyconst flex_int16_t yy_def[308] =
+static yyconst flex_int16_t yy_def[321] =
     {   0,
-      250,    1,  251,  251,  250,  252,  253,  253,  250,  250,
-      254,  254,   12,   12,   12,   12,   12,   12,   12,   12,
-       12,   12,   12,   12,   12,  255,  250,  252,  250,  256,
-      253,  257,  257,  258,   12,  252,  259,   12,   12,   12,
+      261,    1,  262,  262,  261,  263,  264,  264,  261,  261,
+      265,  265,   12,   12,   12,   12,   12,   12,   12,   12,
+       12,   12,   12,   12,   12,  266,  261,  263,  261,  267,
+      264,  268,  268,  269,   12,  263,  270,   12,   12,   12,
        12,   12,   12,   12,   12,   12,   12,   12,   12,   12,
-       12,   12,   12,   12,   12,   12,  255,  256,  257,  257,
-      260,  261,  261,  250,   12,   12,   12,   12,   12,   12,
+       12,   12,   12,   12,   12,   12,  266,  267,  268,  268,
+      271,  272,  272,  261,   12,   12,   12,   12,   12,   12,
        12,   12,   12,   12,   12,   12,   12,   12,   12,   12,
-       12,   12,  260,  261,  261,  250,   12,  262,   12,   12,
-       12,   12,   12,   12,   12,  263,  264,   12,  265,   12,
-
-       12,  266,   12,   12,   12,   12,  267,  268,  262,  268,
-       12,   12,   12,  269,   12,   12,  270,  271,  263,  271,
-      272,  273,  264,  273,  274,  275,  265,  275,   12,  276,
-      277,  266,  277,   12,   12,  278,   12,  267,  268,  268,
-      279,  279,   12,  250,   12,  280,  281,  269,  281,   12,
-      282,  270,  271,  271,  283,  283,  272,  273,  273,  284,
-      284,  274,  275,  275,  285,  285,   12,  276,  277,  277,
-      286,  286,   12,   12,  287,  250,  288,   12,   12,  280,
-      281,  281,  289,  289,  290,  291,  292,  282,  292,  293,
-       12,  294,  287,  250,  295,  250,   12,  296,  297,  250,
-
-      291,  292,  292,  298,  298,  299,  250,  300,  301,  301,
-      295,  250,   12,  302,  250,  302,  302,  297,  250,  299,
-      250,  303,  304,  300,  304,  301,   12,  302,  250,  302,
-      302,  303,  304,  304,  305,  305,   12,  302,  302,  306,
-      302,  302,  307,  250,  302,  250,  307,  250,  250,    0,
-      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
-      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
-      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
-      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
-      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
-
-      250,  250,  250,  250,  250,  250,  250
+       12,   12,  271,  272,  272,  261,   12,  273,   12,   12,
+       12,   12,   12,   12,   12,  274,  275,   12,  276,   12,
+
+       12,  277,   12,   12,   12,   12,  278,  279,  273,  279,
+       12,   12,   12,  280,   12,   12,  281,  282,  274,  282,
+      283,  284,  275,  284,  285,  286,  276,  286,   12,  287,
+      288,  277,  288,   12,   12,  289,   12,  278,  279,  279,
+      290,  290,   12,  261,   12,  291,  292,  280,  292,   12,
+      293,  281,  282,  282,  294,  294,  283,  284,  284,  295,
+      295,  285,  286,  286,  296,  296,   12,  287,  288,  288,
+      297,  297,   12,   12,  298,  261,  299,   12,   12,  291,
+      292,  292,  300,  300,  301,  302,  303,  293,  303,  304,
+       12,  305,  298,  261,  306,  261,   12,   12,  307,  308,
+
+      261,  302,  303,  303,  309,  309,  310,  261,  311,  312,
+      312,  306,  261,   12,   12,  313,  261,  313,  313,  308,
+      261,  310,  261,  314,  315,  311,  315,  312,   12,   12,
+      313,  261,  313,  313,  314,  315,  315,  316,  316,   12,
+       12,  313,  313,   12,  317,  313,  313,   12,  318,  261,
+      313,  261,  319,  318,  261,  261,  320,  261,  320,  261,
+        0,  261,  261,  261,  261,  261,  261,  261,  261,  261,
+      261,  261,  261,  261,  261,  261,  261,  261,  261,  261,
+      261,  261,  261,  261,  261,  261,  261,  261,  261,  261,
+      261,  261,  261,  261,  261,  261,  261,  261,  261,  261,
+
+      261,  261,  261,  261,  261,  261,  261,  261,  261,  261,
+      261,  261,  261,  261,  261,  261,  261,  261,  261,  261
     } ;
 
-static yyconst flex_int16_t yy_nxt[690] =
+static yyconst flex_int16_t yy_nxt[704] =
     {   0,
         6,    7,    8,    9,    6,    6,    6,    6,   10,   11,
        12,   13,   14,   15,   16,   17,   17,   18,   17,   17,
@@ -581,7 +576,7 @@ static yyconst flex_int16_t yy_nxt[690] =
        35,   36,   37,   73,   42,   38,   35,   49,   68,   35,
        35,   39,   28,   28,   28,   29,   34,   43,   45,   36,
        37,   40,   44,   35,   46,   35,   35,   35,   51,   53,
-      244,   35,   50,   35,   55,   65,   35,   47,   56,   28,
+      258,   35,   50,   35,   55,   65,   35,   47,   56,   28,
        59,   48,   31,   31,   32,   60,   35,   71,   67,   33,
 
        28,   28,   28,   29,   35,   35,   35,   28,   37,   61,
@@ -591,66 +586,69 @@ static yyconst flex_int16_t yy_nxt[690] =
        59,   77,   87,   35,   76,   60,   80,   79,   81,   28,
        84,   78,   35,   89,   35,   85,   35,   35,   35,   75,
        35,   92,   35,   96,   35,   35,   90,   97,   35,   35,
-       93,   35,   94,   91,   99,   35,   35,   35,  249,  100,
+       93,   35,   94,   91,   99,   35,   35,   35,  260,  100,
        95,  101,  102,  104,   35,   35,   98,  103,   35,  105,
        28,   84,  111,  106,   35,   35,   85,  107,  107,   61,
 
-      108,  107,   35,  248,  107,  110,  112,  114,  113,   35,
+      108,  107,   35,  250,  107,  110,  112,  114,  113,   35,
        75,   78,   99,   35,   35,  116,  117,  117,   61,  118,
       117,  134,   35,  117,  120,  121,  121,   61,  122,  121,
-       35,  246,  121,  124,  125,  125,   61,  126,  125,   35,
+       35,  258,  121,  124,  125,  125,   61,  126,  125,   35,
       137,  125,  128,  135,  102,  129,   35,  130,  130,   61,
       131,  130,  136,   35,  130,  133,   28,  139,   28,  141,
        35,  144,  140,   35,  142,   35,  151,   35,   35,   28,
-      153,   28,  155,  143,  244,  154,   35,  156,  145,  146,
+      153,   28,  155,  143,  256,  154,   35,  156,  145,  146,
       146,   61,  147,  146,  150,   35,  146,  149,   28,  158,
        28,  160,   28,  163,  159,  167,  161,   35,  164,   28,
 
-      165,   28,  169,   28,  171,  166,   35,  170,  213,  172,
-      177,   35,   28,  139,   35,  173,   35,  178,  140,  215,
-      179,   28,  181,   28,  183,  174,  208,  182,   35,  184,
+      165,   28,  169,   28,  171,  166,   35,  170,  215,  172,
+      177,   35,   28,  139,   35,  173,   35,  178,  140,  255,
+      179,   28,  181,   28,  183,  174,  209,  182,   35,  184,
       185,   35,  186,  186,   61,  187,  186,   28,  153,  186,
       189,   28,  158,  154,   28,  163,   35,  159,  190,   35,
-      164,   28,  169,  192,   35,   35,  191,  170,  198,   35,
-       28,  181,   28,  202,   28,  204,  182,  215,  203,  207,
-      205,   64,  210,  229,  197,  216,  217,   28,  202,   35,
-      215,  229,  230,  203,  222,  222,   61,  223,  222,   35,
-      215,  222,  225,  237,  227,  231,   28,  233,   28,  235,
-
-       28,  233,  234,  238,  236,  215,  234,  240,   35,  215,
-      215,  200,  229,  196,  239,  226,  221,  219,  212,  176,
-      207,  200,  196,  194,  176,  241,  242,  245,   26,   26,
-       26,   26,   28,   28,   28,   30,   30,   30,   30,   35,
-       35,   35,   35,   57,  115,   57,   57,   58,   58,   58,
-       58,   60,   86,   60,   60,   34,   34,   34,   34,   64,
-       64,   35,   64,   83,   83,   83,   83,   85,   29,   85,
-       85,  109,  109,  109,  109,  119,  119,  119,  119,  123,
-      123,  123,  123,  127,  127,  127,  127,  132,  132,  132,
-      132,  138,  138,  138,  138,  140,   54,  140,  140,  148,
-
-      148,  148,  148,  152,  152,  152,  152,  154,   52,  154,
-      154,  157,  157,  157,  157,  159,   35,  159,  159,  162,
-      162,  162,  162,  164,   29,  164,  164,  168,  168,  168,
-      168,  170,  250,  170,  170,  175,  175,  175,  175,  142,
-       27,  142,  142,  180,  180,  180,  180,  182,   27,  182,
-      182,  188,  188,  188,  188,  156,  250,  156,  156,  161,
-      250,  161,  161,  166,  250,  166,  166,  172,  250,  172,
-      172,  193,  193,  193,  193,  195,  195,  195,  195,  184,
-      250,  184,  184,  199,  199,  199,  199,  201,  201,  201,
-      201,  203,  250,  203,  203,  206,  206,  206,  206,  209,
-
-      209,  209,  209,  211,  211,  211,  211,  214,  214,  214,
-      214,  218,  218,  218,  218,  205,  250,  205,  205,  220,
-      220,  220,  220,  224,  224,  224,  224,  210,  250,  210,
-      210,  228,  228,  228,  228,  232,  232,  232,  232,  234,
-      250,  234,  234,  236,  250,  236,  236,  243,  243,  243,
-      243,  247,  247,  247,  247,    5,  250,  250,  250,  250,
-      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
-      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
-      250,  250,  250,  250,  250,  250,  250,  250,  250
+      164,   28,  169,  192,   35,   35,  191,  170,  197,  199,
+       35,   28,  181,   28,  203,   28,  205,  182,   35,  204,
+      217,  206,   64,  211,  198,   28,  203,   35,  218,  219,
+       35,  204,  214,  224,  224,   61,  225,  224,  232,  229,
+      224,  227,  232,   28,  236,  230,   35,  233,  217,  237,
+
+      241,   28,  238,  217,   28,  236,  234,  239,   35,  217,
+      237,  245,   35,   35,  217,  217,  244,  253,   35,  252,
+      250,  242,  217,  240,  208,  201,  248,  243,  232,  246,
+      247,  196,  228,  251,   26,   26,   26,   26,   28,   28,
+       28,   30,   30,   30,   30,   35,   35,   35,   35,   57,
+      223,   57,   57,   58,   58,   58,   58,   60,  221,   60,
+       60,   34,   34,   34,   34,   64,   64,  213,   64,   83,
+       83,   83,   83,   85,  176,   85,   85,  109,  109,  109,
+      109,  119,  119,  119,  119,  123,  123,  123,  123,  127,
+      127,  127,  127,  132,  132,  132,  132,  138,  138,  138,
+
+      138,  140,  208,  140,  140,  148,  148,  148,  148,  152,
+      152,  152,  152,  154,  201,  154,  154,  157,  157,  157,
+      157,  159,  196,  159,  159,  162,  162,  162,  162,  164,
+      194,  164,  164,  168,  168,  168,  168,  170,  176,  170,
+      170,  175,  175,  175,  175,  142,  115,  142,  142,  180,
+      180,  180,  180,  182,   86,  182,  182,  188,  188,  188,
+      188,  156,   35,  156,  156,  161,   29,  161,  161,  166,
+       54,  166,  166,  172,   52,  172,  172,  193,  193,  193,
+      193,  195,  195,  195,  195,  184,   35,  184,  184,  200,
+      200,  200,  200,  202,  202,  202,  202,  204,   29,  204,
+
+      204,  207,  207,  207,  207,  210,  210,  210,  210,  212,
+      212,  212,  212,  216,  216,  216,  216,  220,  220,  220,
+      220,  206,  261,  206,  206,  222,  222,  222,  222,  226,
+      226,  226,  226,  211,   27,  211,  211,  231,  231,  231,
+      231,  235,  235,  235,  235,  237,   27,  237,  237,  239,
+      261,  239,  239,  249,  249,  249,  249,  254,  254,  254,
+      254,  257,  257,  257,  257,  259,  259,  259,  259,    5,
+      261,  261,  261,  261,  261,  261,  261,  261,  261,  261,
+      261,  261,  261,  261,  261,  261,  261,  261,  261,  261,
+      261,  261,  261,  261,  261,  261,  261,  261,  261,  261,
+
+      261,  261,  261
     } ;
 
-static yyconst flex_int16_t yy_chk[690] =
+static yyconst flex_int16_t yy_chk[704] =
     {   0,
         1,    1,    1,    1,    1,    1,    1,    1,    1,    1,
         1,    1,    1,    1,    1,    1,    1,    1,    1,    1,
@@ -660,7 +658,7 @@ static yyconst flex_int16_t yy_chk[690] =
        14,   11,   11,   47,   14,   11,   19,   19,   41,   15,
        16,   11,   12,   12,   12,   12,   12,   14,   16,   12,
        12,   12,   15,   18,   16,   20,   21,   23,   21,   23,
-      247,   25,   20,   38,   25,   38,   45,   18,   25,   30,
+      259,   25,   20,   38,   25,   38,   45,   18,   25,   30,
        30,   18,   31,   31,   31,   30,   40,   45,   40,   31,
 
        34,   34,   34,   34,   39,   42,   46,   34,   34,   36,
@@ -670,63 +668,66 @@ static yyconst flex_int16_t yy_chk[690] =
        58,   51,   65,   67,   50,   58,   54,   53,   54,   61,
        61,   52,   68,   67,   69,   61,   70,   71,   72,   70,
        73,   71,   74,   75,   77,   75,   68,   76,   82,   76,
-       72,   79,   73,   69,   78,   87,   78,   81,  245,   79,
+       72,   79,   73,   69,   78,   87,   78,   81,  257,   79,
        74,   80,   80,   81,   80,   91,   77,   80,   89,   82,
        83,   83,   89,   87,   90,   94,   83,   88,   88,   88,
 
-       88,   88,   95,  243,   88,   88,   90,   92,   91,   92,
+       88,   88,   95,  254,   88,   88,   90,   92,   91,   92,
        95,   98,   98,  103,   98,   94,   96,   96,   96,   96,
        96,  103,  106,   96,   96,   97,   97,   97,   97,   97,
-      100,  242,   97,   97,   99,   99,   99,   99,   99,  104,
+      100,  253,   97,   97,   99,   99,   99,   99,   99,  104,
       106,   99,   99,  104,  101,  100,  101,  102,  102,  102,
       102,  102,  105,  105,  102,  102,  107,  107,  109,  109,
       111,  112,  107,  113,  109,  115,  116,  112,  116,  117,
-      117,  119,  119,  111,  240,  117,  129,  119,  113,  114,
-      114,  114,  114,  114,  115,  197,  114,  114,  121,  121,
+      117,  119,  119,  111,  251,  117,  129,  119,  113,  114,
+      114,  114,  114,  114,  115,  198,  114,  114,  121,  121,
       123,  123,  125,  125,  121,  129,  123,  134,  125,  127,
 
-      127,  130,  130,  132,  132,  127,  135,  130,  197,  132,
-      137,  137,  138,  138,  143,  134,  145,  143,  138,  228,
+      127,  130,  130,  132,  132,  127,  135,  130,  198,  132,
+      137,  137,  138,  138,  143,  134,  145,  143,  138,  249,
       145,  146,  146,  148,  148,  135,  191,  146,  191,  148,
       150,  150,  151,  151,  151,  151,  151,  152,  152,  151,
       151,  157,  157,  152,  162,  162,  173,  157,  167,  167,
-      162,  168,  168,  174,  174,  178,  173,  168,  179,  179,
-      180,  180,  186,  186,  188,  188,  180,  198,  186,  220,
-      188,  192,  192,  216,  178,  198,  198,  201,  201,  213,
-      230,  217,  216,  201,  208,  208,  208,  208,  208,  227,
-      231,  208,  208,  227,  213,  217,  222,  222,  224,  224,
-
-      232,  232,  222,  230,  224,  238,  232,  237,  237,  241,
-      239,  218,  214,  211,  231,  209,  206,  199,  195,  193,
-      190,  185,  177,  175,  136,  238,  239,  241,  251,  251,
-      251,  251,  252,  252,  252,  253,  253,  253,  253,  254,
-      254,  254,  254,  255,   93,  255,  255,  256,  256,  256,
-      256,  257,   64,  257,  257,  258,  258,  258,  258,  259,
-      259,   35,  259,  260,  260,  260,  260,  261,   28,  261,
-      261,  262,  262,  262,  262,  263,  263,  263,  263,  264,
-      264,  264,  264,  265,  265,  265,  265,  266,  266,  266,
-      266,  267,  267,  267,  267,  268,   24,  268,  268,  269,
-
-      269,  269,  269,  270,  270,  270,  270,  271,   22,  271,
-      271,  272,  272,  272,  272,  273,   17,  273,  273,  274,
-      274,  274,  274,  275,    6,  275,  275,  276,  276,  276,
-      276,  277,    5,  277,  277,  278,  278,  278,  278,  279,
-        4,  279,  279,  280,  280,  280,  280,  281,    3,  281,
-      281,  282,  282,  282,  282,  283,    0,  283,  283,  284,
-        0,  284,  284,  285,    0,  285,  285,  286,    0,  286,
-      286,  287,  287,  287,  287,  288,  288,  288,  288,  289,
-        0,  289,  289,  290,  290,  290,  290,  291,  291,  291,
-      291,  292,    0,  292,  292,  293,  293,  293,  293,  294,
-
-      294,  294,  294,  295,  295,  295,  295,  296,  296,  296,
-      296,  297,  297,  297,  297,  298,    0,  298,  298,  299,
-      299,  299,  299,  300,  300,  300,  300,  301,    0,  301,
-      301,  302,  302,  302,  302,  303,  303,  303,  303,  304,
-        0,  304,  304,  305,    0,  305,  305,  306,  306,  306,
-      306,  307,  307,  307,  307,  250,  250,  250,  250,  250,
-      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
-      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
-      250,  250,  250,  250,  250,  250,  250,  250,  250
+      162,  168,  168,  174,  174,  178,  173,  168,  178,  179,
+      179,  180,  180,  186,  186,  188,  188,  180,  197,  186,
+      199,  188,  192,  192,  178,  202,  202,  214,  199,  199,
+      215,  202,  197,  209,  209,  209,  209,  209,  218,  214,
+      209,  209,  219,  224,  224,  215,  230,  218,  233,  224,
+
+      230,  226,  226,  234,  235,  235,  219,  226,  240,  242,
+      235,  241,  241,  244,  243,  246,  240,  248,  248,  247,
+      245,  233,  231,  229,  222,  220,  244,  234,  216,  242,
+      243,  212,  210,  246,  262,  262,  262,  262,  263,  263,
+      263,  264,  264,  264,  264,  265,  265,  265,  265,  266,
+      207,  266,  266,  267,  267,  267,  267,  268,  200,  268,
+      268,  269,  269,  269,  269,  270,  270,  195,  270,  271,
+      271,  271,  271,  272,  193,  272,  272,  273,  273,  273,
+      273,  274,  274,  274,  274,  275,  275,  275,  275,  276,
+      276,  276,  276,  277,  277,  277,  277,  278,  278,  278,
+
+      278,  279,  190,  279,  279,  280,  280,  280,  280,  281,
+      281,  281,  281,  282,  185,  282,  282,  283,  283,  283,
+      283,  284,  177,  284,  284,  285,  285,  285,  285,  286,
+      175,  286,  286,  287,  287,  287,  287,  288,  136,  288,
+      288,  289,  289,  289,  289,  290,   93,  290,  290,  291,
+      291,  291,  291,  292,   64,  292,  292,  293,  293,  293,
+      293,  294,   35,  294,  294,  295,   28,  295,  295,  296,
+       24,  296,  296,  297,   22,  297,  297,  298,  298,  298,
+      298,  299,  299,  299,  299,  300,   17,  300,  300,  301,
+      301,  301,  301,  302,  302,  302,  302,  303,    6,  303,
+
+      303,  304,  304,  304,  304,  305,  305,  305,  305,  306,
+      306,  306,  306,  307,  307,  307,  307,  308,  308,  308,
+      308,  309,    5,  309,  309,  310,  310,  310,  310,  311,
+      311,  311,  311,  312,    4,  312,  312,  313,  313,  313,
+      313,  314,  314,  314,  314,  315,    3,  315,  315,  316,
+        0,  316,  316,  317,  317,  317,  317,  318,  318,  318,
+      318,  319,  319,  319,  319,  320,  320,  320,  320,  261,
+      261,  261,  261,  261,  261,  261,  261,  261,  261,  261,
+      261,  261,  261,  261,  261,  261,  261,  261,  261,  261,
+      261,  261,  261,  261,  261,  261,  261,  261,  261,  261,
+
+      261,  261,  261
     } ;
 
 #define YY_TRAILING_MASK 0x2000
@@ -856,6 +857,13 @@ static void setbackendtype(DiskParseContext *dpc, const char *str) {
     else xlu__disk_err(dpc,str,"unknown value for backendtype");
 }
 
+/* Sets ->backend_domid from the string. */
+static void setbackend(DiskParseContext *dpc, const char *str) {
+    if (libxl_name_to_domid(dpc->ctx, str, &dpc->disk->backend_domid)) {
+        xlu__disk_err(dpc,str,"unknown domain for backend");
+    }
+}
+
 #define DEPRECATE(usewhatinstead) /* not currently reported */
 
 /* Handles a vdev positional parameter which includes a devtype. */
@@ -883,7 +891,7 @@ static int vdev_and_devtype(DiskParseContext *dpc, char *str) {
 #define DPC ((DiskParseContext*)yyextra)
 
 
-#line 887 "libxlu_disk_l.c"
+#line 895 "libxlu_disk_l.c"
 
 #define INITIAL 0
 #define LEXERR 1
@@ -980,6 +988,10 @@ int xlu__disk_yyget_lineno (yyscan_t yyscanner );
 
 void xlu__disk_yyset_lineno (int line_number ,yyscan_t yyscanner );
 
+int xlu__disk_yyget_column  (yyscan_t yyscanner );
+
+void xlu__disk_yyset_column (int column_no ,yyscan_t yyscanner );
+
 /* Macros after this point can all be overridden by user definitions in
  * section 1.
  */
@@ -1012,12 +1024,7 @@ static int input (yyscan_t yyscanner );
 
 /* Amount of stuff to slurp up with each read. */
 #ifndef YY_READ_BUF_SIZE
-#ifdef __ia64__
-/* On IA-64, the buffer size is 16k, not 8k */
-#define YY_READ_BUF_SIZE 16384
-#else
 #define YY_READ_BUF_SIZE 8192
-#endif /* __ia64__ */
 #endif
 
 /* Copy whatever the last rule matched to the standard output. */
@@ -1036,7 +1043,7 @@ static int input (yyscan_t yyscanner );
 	if ( YY_CURRENT_BUFFER_LVALUE->yy_is_interactive ) \
 		{ \
 		int c = '*'; \
-		size_t n; \
+		unsigned n; \
 		for ( n = 0; n < max_size && \
 			     (c = getc( yyin )) != EOF && c != '\n'; ++n ) \
 			buf[n] = (char) c; \
@@ -1119,12 +1126,12 @@ YY_DECL
 	register int yy_act;
     struct yyguts_t * yyg = (struct yyguts_t*)yyscanner;
 
-#line 155 "libxlu_disk_l.l"
+#line 162 "libxlu_disk_l.l"
 
 
  /*----- the scanner rules which do the parsing -----*/
 
-#line 1128 "libxlu_disk_l.c"
+#line 1135 "libxlu_disk_l.c"
 
 	if ( !yyg->yy_init )
 		{
@@ -1188,14 +1195,14 @@ yy_match:
 			while ( yy_chk[yy_base[yy_current_state] + yy_c] != yy_current_state )
 				{
 				yy_current_state = (int) yy_def[yy_current_state];
-				if ( yy_current_state >= 251 )
+				if ( yy_current_state >= 262 )
 					yy_c = yy_meta[(unsigned int) yy_c];
 				}
 			yy_current_state = yy_nxt[yy_base[yy_current_state] + (unsigned int) yy_c];
 			*yyg->yy_state_ptr++ = yy_current_state;
 			++yy_cp;
 			}
-		while ( yy_current_state != 250 );
+		while ( yy_current_state != 261 );
 
 yy_find_action:
 		yy_current_state = *--yyg->yy_state_ptr;
@@ -1245,89 +1252,95 @@ do_action:	/* This label is used only to access EOF actions. */
 case 1:
 /* rule 1 can match eol */
 YY_RULE_SETUP
-#line 159 "libxlu_disk_l.l"
+#line 166 "libxlu_disk_l.l"
 { /* ignore whitespace before parameters */ }
 	YY_BREAK
 /* ordinary parameters setting enums or strings */
 case 2:
 /* rule 2 can match eol */
 YY_RULE_SETUP
-#line 163 "libxlu_disk_l.l"
+#line 170 "libxlu_disk_l.l"
 { STRIP(','); setformat(DPC, FROMEQUALS); }
 	YY_BREAK
 case 3:
 YY_RULE_SETUP
-#line 165 "libxlu_disk_l.l"
+#line 172 "libxlu_disk_l.l"
 { DPC->disk->is_cdrom = 1; }
 	YY_BREAK
 case 4:
 YY_RULE_SETUP
-#line 166 "libxlu_disk_l.l"
+#line 173 "libxlu_disk_l.l"
 { DPC->disk->is_cdrom = 1; }
 	YY_BREAK
 case 5:
 YY_RULE_SETUP
-#line 167 "libxlu_disk_l.l"
+#line 174 "libxlu_disk_l.l"
 { DPC->disk->is_cdrom = 0; }
 	YY_BREAK
 case 6:
 /* rule 6 can match eol */
 YY_RULE_SETUP
-#line 168 "libxlu_disk_l.l"
+#line 175 "libxlu_disk_l.l"
 { xlu__disk_err(DPC,yytext,"unknown value for type"); }
 	YY_BREAK
 case 7:
 /* rule 7 can match eol */
 YY_RULE_SETUP
-#line 170 "libxlu_disk_l.l"
+#line 177 "libxlu_disk_l.l"
 { STRIP(','); setaccess(DPC, FROMEQUALS); }
 	YY_BREAK
 case 8:
 /* rule 8 can match eol */
 YY_RULE_SETUP
-#line 171 "libxlu_disk_l.l"
+#line 178 "libxlu_disk_l.l"
 { STRIP(','); setbackendtype(DPC,FROMEQUALS); }
 	YY_BREAK
 case 9:
 /* rule 9 can match eol */
 YY_RULE_SETUP
-#line 173 "libxlu_disk_l.l"
-{ STRIP(','); SAVESTRING("vdev", vdev, FROMEQUALS); }
+#line 179 "libxlu_disk_l.l"
+{ STRIP(','); setbackend(DPC,FROMEQUALS); }
 	YY_BREAK
 case 10:
 /* rule 10 can match eol */
 YY_RULE_SETUP
-#line 174 "libxlu_disk_l.l"
+#line 181 "libxlu_disk_l.l"
+{ STRIP(','); SAVESTRING("vdev", vdev, FROMEQUALS); }
+	YY_BREAK
+case 11:
+/* rule 11 can match eol */
+YY_RULE_SETUP
+#line 182 "libxlu_disk_l.l"
 { STRIP(','); SAVESTRING("script", script, FROMEQUALS); }
 	YY_BREAK
 /* the target magic parameter, eats the rest of the string */
-case 11:
+case 12:
 YY_RULE_SETUP
-#line 178 "libxlu_disk_l.l"
+#line 186 "libxlu_disk_l.l"
 { STRIP(','); SAVESTRING("target", pdev_path, FROMEQUALS); }
 	YY_BREAK
 /* unknown parameters */
-case 12:
-/* rule 12 can match eol */
+case 13:
+/* rule 13 can match eol */
 YY_RULE_SETUP
-#line 182 "libxlu_disk_l.l"
+#line 190 "libxlu_disk_l.l"
 { xlu__disk_err(DPC,yytext,"unknown parameter"); }
 	YY_BREAK
 /* deprecated prefixes */
 /* the "/.*" in these patterns ensures that they count as if they
    * matched the whole string, so these patterns take precedence */
-case 13:
+case 14:
 YY_RULE_SETUP
-#line 189 "libxlu_disk_l.l"
+#line 197 "libxlu_disk_l.l"
 {
                     STRIP(':');
                     DPC->had_depr_prefix=1; DEPRECATE("use `[format=]...,'");
                     setformat(DPC, yytext);
                  }
 	YY_BREAK
-case 14:
+case 15:
 YY_RULE_SETUP
-#line 195 "libxlu_disk_l.l"
+#line 203 "libxlu_disk_l.l"
 {
                     char *newscript;
                     STRIP(':');
@@ -1341,65 +1354,65 @@ YY_RULE_SETUP
                     free(newscript);
                 }
 	YY_BREAK
-case 15:
+case 16:
 *yy_cp = yyg->yy_hold_char; /* undo effects of setting up yytext */
 yyg->yy_c_buf_p = yy_cp = yy_bp + 8;
 YY_DO_BEFORE_ACTION; /* set up yytext again */
 YY_RULE_SETUP
-#line 208 "libxlu_disk_l.l"
+#line 216 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
-case 16:
+case 17:
 YY_RULE_SETUP
-#line 209 "libxlu_disk_l.l"
+#line 217 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
-case 17:
+case 18:
 *yy_cp = yyg->yy_hold_char; /* undo effects of setting up yytext */
 yyg->yy_c_buf_p = yy_cp = yy_bp + 4;
 YY_DO_BEFORE_ACTION; /* set up yytext again */
 YY_RULE_SETUP
-#line 210 "libxlu_disk_l.l"
+#line 218 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
-case 18:
+case 19:
 *yy_cp = yyg->yy_hold_char; /* undo effects of setting up yytext */
 yyg->yy_c_buf_p = yy_cp = yy_bp + 6;
 YY_DO_BEFORE_ACTION; /* set up yytext again */
 YY_RULE_SETUP
-#line 211 "libxlu_disk_l.l"
+#line 219 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
-case 19:
+case 20:
 *yy_cp = yyg->yy_hold_char; /* undo effects of setting up yytext */
 yyg->yy_c_buf_p = yy_cp = yy_bp + 5;
 YY_DO_BEFORE_ACTION; /* set up yytext again */
 YY_RULE_SETUP
-#line 212 "libxlu_disk_l.l"
+#line 220 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
-case 20:
+case 21:
 *yy_cp = yyg->yy_hold_char; /* undo effects of setting up yytext */
 yyg->yy_c_buf_p = yy_cp = yy_bp + 4;
 YY_DO_BEFORE_ACTION; /* set up yytext again */
 YY_RULE_SETUP
-#line 213 "libxlu_disk_l.l"
+#line 221 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
-case 21:
-/* rule 21 can match eol */
+case 22:
+/* rule 22 can match eol */
 YY_RULE_SETUP
-#line 215 "libxlu_disk_l.l"
+#line 223 "libxlu_disk_l.l"
 {
 		  xlu__disk_err(DPC,yytext,"unknown deprecated disk prefix");
 		  return 0;
 		}
 	YY_BREAK
 /* positional parameters */
-case 22:
-/* rule 22 can match eol */
+case 23:
+/* rule 23 can match eol */
 YY_RULE_SETUP
-#line 222 "libxlu_disk_l.l"
+#line 230 "libxlu_disk_l.l"
 {
     STRIP(',');
 
@@ -1426,27 +1439,27 @@ YY_RULE_SETUP
     }
 }
 	YY_BREAK
-case 23:
+case 24:
 YY_RULE_SETUP
-#line 248 "libxlu_disk_l.l"
+#line 256 "libxlu_disk_l.l"
 {
     BEGIN(LEXERR);
     yymore();
 }
 	YY_BREAK
-case 24:
+case 25:
 YY_RULE_SETUP
-#line 252 "libxlu_disk_l.l"
+#line 260 "libxlu_disk_l.l"
 {
     xlu__disk_err(DPC,yytext,"bad disk syntax"); return 0;
 }
 	YY_BREAK
-case 25:
+case 26:
 YY_RULE_SETUP
-#line 255 "libxlu_disk_l.l"
+#line 263 "libxlu_disk_l.l"
 YY_FATAL_ERROR( "flex scanner jammed" );
 	YY_BREAK
-#line 1450 "libxlu_disk_l.c"
+#line 1463 "libxlu_disk_l.c"
 			case YY_STATE_EOF(INITIAL):
 			case YY_STATE_EOF(LEXERR):
 				yyterminate();
@@ -1710,7 +1723,7 @@ static int yy_get_next_buffer (yyscan_t yyscanner)
 		while ( yy_chk[yy_base[yy_current_state] + yy_c] != yy_current_state )
 			{
 			yy_current_state = (int) yy_def[yy_current_state];
-			if ( yy_current_state >= 251 )
+			if ( yy_current_state >= 262 )
 				yy_c = yy_meta[(unsigned int) yy_c];
 			}
 		yy_current_state = yy_nxt[yy_base[yy_current_state] + (unsigned int) yy_c];
@@ -1734,11 +1747,11 @@ static int yy_get_next_buffer (yyscan_t yyscanner)
 	while ( yy_chk[yy_base[yy_current_state] + yy_c] != yy_current_state )
 		{
 		yy_current_state = (int) yy_def[yy_current_state];
-		if ( yy_current_state >= 251 )
+		if ( yy_current_state >= 262 )
 			yy_c = yy_meta[(unsigned int) yy_c];
 		}
 	yy_current_state = yy_nxt[yy_base[yy_current_state] + (unsigned int) yy_c];
-	yy_is_jam = (yy_current_state == 250);
+	yy_is_jam = (yy_current_state == 261);
 	if ( ! yy_is_jam )
 		*yyg->yy_state_ptr++ = yy_current_state;
 
@@ -2147,8 +2160,8 @@ YY_BUFFER_STATE xlu__disk_yy_scan_string (yyconst char * yystr , yyscan_t yyscan
 
 /** Setup the input buffer state to scan the given bytes. The next call to xlu__disk_yylex() will
  * scan from a @e copy of @a bytes.
- * @param yybytes the byte buffer to scan
- * @param _yybytes_len the number of bytes in the buffer pointed to by @a bytes.
+ * @param bytes the byte buffer to scan
+ * @param len the number of bytes in the buffer pointed to by @a bytes.
  * @param yyscanner The scanner object.
  * @return the newly allocated buffer state object.
  */
@@ -2538,4 +2551,4 @@ void xlu__disk_yyfree (void * ptr , yyscan_t yyscanner)
 
 #define YYTABLES_NAME "yytables"
 
-#line 255 "libxlu_disk_l.l"
+#line 263 "libxlu_disk_l.l"
diff --git a/tools/libxl/libxlu_disk_l.h b/tools/libxl/libxlu_disk_l.h
index de03908..247a0d7 100644
--- a/tools/libxl/libxlu_disk_l.h
+++ b/tools/libxl/libxlu_disk_l.h
@@ -62,6 +62,7 @@ typedef int flex_int32_t;
 typedef unsigned char flex_uint8_t; 
 typedef unsigned short int flex_uint16_t;
 typedef unsigned int flex_uint32_t;
+#endif /* ! C99 */
 
 /* Limits of integral types. */
 #ifndef INT8_MIN
@@ -92,8 +93,6 @@ typedef unsigned int flex_uint32_t;
 #define UINT32_MAX             (4294967295U)
 #endif
 
-#endif /* ! C99 */
-
 #endif /* ! FLEXINT_H */
 
 #ifdef __cplusplus
@@ -136,15 +135,7 @@ typedef void* yyscan_t;
 
 /* Size of default input buffer. */
 #ifndef YY_BUF_SIZE
-#ifdef __ia64__
-/* On IA-64, the buffer size is 16k, not 8k.
- * Moreover, YY_BUF_SIZE is 2*YY_READ_BUF_SIZE in the general case.
- * Ditto for the __ia64__ case accordingly.
- */
-#define YY_BUF_SIZE 32768
-#else
 #define YY_BUF_SIZE 16384
-#endif /* __ia64__ */
 #endif
 
 #ifndef YY_TYPEDEF_YY_BUFFER_STATE
@@ -280,6 +271,10 @@ int xlu__disk_yyget_lineno (yyscan_t yyscanner );
 
 void xlu__disk_yyset_lineno (int line_number ,yyscan_t yyscanner );
 
+int xlu__disk_yyget_column  (yyscan_t yyscanner );
+
+void xlu__disk_yyset_column (int column_no ,yyscan_t yyscanner );
+
 /* Macros after this point can all be overridden by user definitions in
  * section 1.
  */
@@ -306,12 +301,7 @@ static int yy_flex_strlen (yyconst char * ,yyscan_t yyscanner);
 
 /* Amount of stuff to slurp up with each read. */
 #ifndef YY_READ_BUF_SIZE
-#ifdef __ia64__
-/* On IA-64, the buffer size is 16k, not 8k */
-#define YY_READ_BUF_SIZE 16384
-#else
 #define YY_READ_BUF_SIZE 8192
-#endif /* __ia64__ */
 #endif
 
 /* Number of entries by which start-condition stack grows. */
@@ -344,8 +334,8 @@ extern int xlu__disk_yylex (yyscan_t yyscanner);
 #undef YY_DECL
 #endif
 
-#line 255 "libxlu_disk_l.l"
+#line 263 "libxlu_disk_l.l"
 
-#line 350 "libxlu_disk_l.h"
+#line 340 "libxlu_disk_l.h"
 #undef xlu__disk_yyIN_HEADER
 #endif /* xlu__disk_yyHEADER_H */
diff --git a/tools/libxl/libxlu_disk_l.l b/tools/libxl/libxlu_disk_l.l
index bee16a1..6bd48e8 100644
--- a/tools/libxl/libxlu_disk_l.l
+++ b/tools/libxl/libxlu_disk_l.l
@@ -113,6 +113,13 @@ static void setbackendtype(DiskParseContext *dpc, const char *str) {
     else xlu__disk_err(dpc,str,"unknown value for backendtype");
 }
 
+/* Sets ->backend_domid from the string. */
+static void setbackend(DiskParseContext *dpc, const char *str) {
+    if (libxl_name_to_domid(dpc->ctx, str, &dpc->disk->backend_domid)) {
+        xlu__disk_err(dpc,str,"unknown domain for backend");
+    }
+}
+
 #define DEPRECATE(usewhatinstead) /* not currently reported */
 
 /* Handles a vdev positional parameter which includes a devtype. */
@@ -169,6 +176,7 @@ devtype=[^,]*,?	{ xlu__disk_err(DPC,yytext,"unknown value for type"); }
 
 access=[^,]*,?	{ STRIP(','); setaccess(DPC, FROMEQUALS); }
 backendtype=[^,]*,? { STRIP(','); setbackendtype(DPC,FROMEQUALS); }
+backenddomain=[^,]*,? { STRIP(','); setbackend(DPC,FROMEQUALS); }
 
 vdev=[^,]*,?	{ STRIP(','); SAVESTRING("vdev", vdev, FROMEQUALS); }
 script=[^,]*,?	{ STRIP(','); SAVESTRING("script", script, FROMEQUALS); }
diff --git a/tools/libxl/libxlutil.h b/tools/libxl/libxlutil.h
index 0333e55..87eb399 100644
--- a/tools/libxl/libxlutil.h
+++ b/tools/libxl/libxlutil.h
@@ -72,7 +72,7 @@ const char *xlu_cfg_get_listitem(const XLU_ConfigList*, int entry);
  */
 
 int xlu_disk_parse(XLU_Config *cfg, int nspecs, const char *const *specs,
-                   libxl_device_disk *disk);
+                   libxl_device_disk *disk, libxl_ctx *ctx);
   /* disk must have been initialised.
    *
    * On error, returns errno value.  Bad strings cause EINVAL and
diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index 138cd72..fd00d61 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -420,7 +420,7 @@ static void parse_disk_config_multistring(XLU_Config **config,
         if (!*config) { perror("xlu_cfg_init"); exit(-1); }
     }
 
-    e = xlu_disk_parse(*config, nspecs, specs, disk);
+    e = xlu_disk_parse(*config, nspecs, specs, disk, ctx);
     if (e == EINVAL) exit(-1);
     if (e) {
         fprintf(stderr,"xlu_disk_parse failed: %s\n",strerror(errno));
@@ -5335,7 +5335,7 @@ int main_networkdetach(int argc, char **argv)
 int main_blockattach(int argc, char **argv)
 {
     int opt;
-    uint32_t fe_domid, be_domid = 0;
+    uint32_t fe_domid;
     libxl_device_disk disk = { 0 };
     XLU_Config *config = 0;
 
@@ -5351,8 +5351,6 @@ int main_blockattach(int argc, char **argv)
     parse_disk_config_multistring
         (&config, argc-optind, (const char* const*)argv + optind, &disk);
 
-    disk.backend_domid = be_domid;
-
     if (dryrun_only) {
         char *json = libxl_device_disk_to_json(ctx, &disk);
         printf("disk: %s\n", json);
-- 
1.7.11.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 06 23:01:34 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Aug 2012 23:01:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyWIL-000624-7w; Mon, 06 Aug 2012 23:01:05 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SyWIJ-00061z-6F
	for xen-devel@lists.xensource.com; Mon, 06 Aug 2012 23:01:03 +0000
Received: from [85.158.139.83:19319] by server-2.bemta-5.messagelabs.com id
	38/4D-04598-EAC40205; Mon, 06 Aug 2012 23:01:02 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-3.tower-182.messagelabs.com!1344294061!28622853!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgxNjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2984 invoked from network); 6 Aug 2012 23:01:01 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-3.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Aug 2012 23:01:01 -0000
X-IronPort-AV: E=Sophos;i="4.77,722,1336348800"; d="scan'208";a="13873999"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	06 Aug 2012 23:01:00 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 7 Aug 2012 00:01:00 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1SyWIG-0000Zo-Gq;
	Mon, 06 Aug 2012 23:01:00 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1SyWIG-0001g3-3F;
	Tue, 07 Aug 2012 00:01:00 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13566-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 7 Aug 2012 00:01:00 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13566: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13566 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13566/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl          11 guest-localmigrate        fail REGR. vs. 13536
 test-i386-i386-xl            11 guest-localmigrate        fail REGR. vs. 13536
 test-amd64-i386-xl-multivcpu 11 guest-localmigrate        fail REGR. vs. 13536
 test-amd64-i386-xl-credit2   11 guest-localmigrate        fail REGR. vs. 13536
 test-amd64-i386-xl           11 guest-localmigrate        fail REGR. vs. 13536
 test-amd64-amd64-xl-winxpsp3  9 guest-localmigrate        fail REGR. vs. 13536
 test-amd64-i386-xl-win7-amd64  9 guest-localmigrate       fail REGR. vs. 13536
 test-i386-i386-xl-winxpsp3    9 guest-localmigrate        fail REGR. vs. 13536
 test-amd64-amd64-xl-win7-amd64  9 guest-localmigrate      fail REGR. vs. 13536
 test-amd64-i386-xl-winxpsp3-vcpus1  9 guest-localmigrate  fail REGR. vs. 13536
 test-i386-i386-xl-win         9 guest-localmigrate        fail REGR. vs. 13536
 test-amd64-i386-xl-win-vcpus1  9 guest-localmigrate       fail REGR. vs. 13536
 test-amd64-amd64-xl-win       9 guest-localmigrate        fail REGR. vs. 13536

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13536
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13536
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13536
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13536
 test-amd64-amd64-xl-sedf     11 guest-localmigrate        fail REGR. vs. 13536

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass

version targeted for testing:
 xen                  353bc0801b11
baseline version:
 xen                  3d17148e465c

------------------------------------------------------------
People who touched revisions under test:
  Christoph Egger <Christoph.Egger@amd.com>
  Dario Faggioli <dario.faggioli@citrix.com>
  David Vrabel <david.vrabel@citrix.com>
  Fabio Fantoni <fabio.fantoni@heliman.it>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Campbell <ian.campbell@eu.citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Olaf Hering <olaf@aepfle.de>
  Roger Pau Monne <roger.pau@citrix.com>
  Santosh Jodh <santosh.jodh@citrix.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-i386-i386-xl                                            fail    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   fail    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 835 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-i386-i386-xl-winxpsp3    9 guest-localmigrate        fail REGR. vs. 13536
 test-amd64-amd64-xl-win7-amd64  9 guest-localmigrate      fail REGR. vs. 13536
 test-amd64-i386-xl-winxpsp3-vcpus1  9 guest-localmigrate  fail REGR. vs. 13536
 test-i386-i386-xl-win         9 guest-localmigrate        fail REGR. vs. 13536
 test-amd64-i386-xl-win-vcpus1  9 guest-localmigrate       fail REGR. vs. 13536
 test-amd64-amd64-xl-win       9 guest-localmigrate        fail REGR. vs. 13536

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13536
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13536
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13536
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13536
 test-amd64-amd64-xl-sedf     11 guest-localmigrate        fail REGR. vs. 13536

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass

version targeted for testing:
 xen                  353bc0801b11
baseline version:
 xen                  3d17148e465c

------------------------------------------------------------
People who touched revisions under test:
  Christoph Egger <Christoph.Egger@amd.com>
  Dario Faggioli <dario.faggioli@citrix.com>
  David Vrabel <david.vrabel@citrix.com>
  Fabio Fantoni <fabio.fantoni@heliman.it>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Campbell <ian.campbell@eu.citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Olaf Hering <olaf@aepfle.de>
  Roger Pau Monne <roger.pau@citrix.com>
  Santosh Jodh <santosh.jodh@citrix.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-i386-i386-xl                                            fail    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   fail    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 835 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 03:23:34 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 03:23:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyaNs-0003BR-Hr; Tue, 07 Aug 2012 03:23:04 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <msw@amazon.com>) id 1SyaNr-0003BM-2I
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 03:23:03 +0000
X-Env-Sender: msw@amazon.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1344309775!8870540!1
X-Originating-IP: [72.21.198.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNzIuMjEuMTk4LjI1ID0+IDE1ODcwNA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5919 invoked from network); 7 Aug 2012 03:22:56 -0000
Received: from smtp-fw-4101.amazon.com (HELO smtp-fw-4101.amazon.com)
	(72.21.198.25)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 03:22:56 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=amazon.com; i=msw@amazon.com; q=dns/txt; s=rte02;
	t=1344309776; x=1375845776;
	h=date:from:to:cc:subject:message-id:references:
	mime-version:in-reply-to;
	bh=0uhm+K4Ww3oMXvqtyRcSytXwC9ciARW1MTxA950Yaaw=;
	b=VkPCqPUr1oxlfgGjlAuh2uN6OmgcIm+rTr1Owbjqo7Ogv9KqNkRD1YOW
	efgIeiGceRYQD2X6JwX43Ef8kE0ogw==;
X-IronPort-AV: E=Sophos;i="4.77,724,1336348800"; d="scan'208";a="776582382"
Received: from smtp-in-6001.iad6.amazon.com ([10.195.76.178])
	by smtp-border-fw-out-4101.iad4.amazon.com with
	ESMTP/TLS/DHE-RSA-AES256-SHA; 07 Aug 2012 03:22:55 +0000
Received: from ex10-hub-31005.ant.amazon.com (ex10-hub-31005.sea31.amazon.com
	[10.185.176.12])
	by smtp-in-6001.iad6.amazon.com (8.13.8/8.13.8) with ESMTP id
	q773MrSi001747
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=FAIL);
	Tue, 7 Aug 2012 03:22:54 GMT
Received: from US-SEA-R8XVZTX (10.224.80.38) by ex10-hub-31005.ant.amazon.com
	(10.185.176.12) with Microsoft SMTP Server id 14.2.247.3;
	Mon, 6 Aug 2012 20:22:47 -0700
Received: by US-SEA-R8XVZTX (sSMTP sendmail emulation); Mon, 06 Aug 2012
	20:22:46 -0700
Date: Mon, 6 Aug 2012 20:22:46 -0700
From: Matt Wilson <msw@amazon.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20120807032246.GA4324@US-SEA-R8XVZTX>
References: <ef1271aef866effe07aa.1343676828@kaos-source-31003.sea31.amazon.com>
	<1343723343.15432.63.camel@zakaz.uk.xensource.com>
	<20120731153459.GD8228@US-SEA-R8XVZTX>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20120731153459.GD8228@US-SEA-R8XVZTX>
User-Agent: Mutt/1.5.20 (2009-12-10)
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH DOCDAY] use lynx to produce better formatted
 text documentation from markdown
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jul 31, 2012 at 08:34:59AM -0700, Matt Wilson wrote:
> On Tue, Jul 31, 2012 at 01:29:03AM -0700, Ian Campbell wrote:
> > On Mon, 2012-07-30 at 20:33 +0100, Matt Wilson wrote:
> > > Markdown, while easy to read and write, isn't the most consumable
> > > format for users reading documentation on a terminal. This patch uses
> > > lynx to format markdown produced HTML into text files.
> > 
> > The markdown syntax is supposed to be readable as plain text too, if
> > there are particular instances where this is not the case perhaps we can
> > tidy them up with that in mind?
> 
> I'm not sure how much the markdown can be tidied for constructs like:
> 
> ### apic
> > `= summit | bigsmp | default`
> 
> Override Xen's logic for choosing the APIC driver.  By default, if
> there are more than 8 CPUs, Xen will switch to `bigsmp` over
> `default`.
> 
> ### allow\_unsafe
> > `= <boolean>`
> 
> > Default: `false`
> 
> Force boot on potentially unsafe systems. By default Xen will refuse
> to boot on systems with the following errata:
> 
> * AMD Erratum 121. Processors with this erratum are subject to a guest
>   triggerable Denial of Service. Override only if you trust all of
>   your PV guests.
> 
> When processed as I propose, it looks like:
> 
>   apic
> 
>      = summit | bigsmp | default
> 
>    Override Xen's logic for choosing the APIC driver. By default, if there
>    are more than 8 CPUs, Xen will switch to bigsmp over default.
> 
>   allow_unsafe
> 
>      = <boolean>
> 
>      Default: false
> 
>    Force boot on potentially unsafe systems. By default Xen will refuse to
>    boot on systems with the following errata:
>      * AMD Erratum 121. Processors with this erratum are subject to a
>        guest triggerable Denial of Service. Override only if you trust all
>        of your PV guests.
> 
> 
> > Why wouldn't you just run lynx on the generated .html instead of less on
> > the generated .txt if you wanted something a bit better formatted?
> 
> I generally don't have lynx installed on my production machines.

Ian,

Any further concerns?

Matt

> > > Signed-off-by: Matt Wilson <msw@amazon.com>
> > > 
> > > diff -r c3a6e679bdfa -r ef1271aef866 docs/Docs.mk
> > > --- a/docs/Docs.mk	Mon Jul 30 19:04:59 2012 +0000
> > > +++ b/docs/Docs.mk	Mon Jul 30 19:33:41 2012 +0000
> > > @@ -10,3 +10,4 @@ POD2TEXT	:= pod2text
> > >  DOT		:= dot
> > >  NEATO		:= neato
> > >  MARKDOWN	:= markdown
> > > +LYNX		:= lynx
> > > diff -r c3a6e679bdfa -r ef1271aef866 docs/Makefile
> > > --- a/docs/Makefile	Mon Jul 30 19:04:59 2012 +0000
> > > +++ b/docs/Makefile	Mon Jul 30 19:33:41 2012 +0000
> > > @@ -103,7 +103,16 @@ html/%.html: %.markdown
> > >  
> > >  html/%.txt: %.txt
> > >  	@$(INSTALL_DIR) $(@D)
> > > -	cp $< $@
> > > +	@set -e ; \
> > > +	if which $(MARKDOWN) >/dev/null 2>&1 && \
> > > +		which $(LYNX) >/dev/null 2>&1 ; then \
> > > +		echo "Running markdown to generate $*.txt ... "; \
> > > +		$(MARKDOWN) $< | lynx -dump -stdin > $@.tmp ; \
> > > +		$(call move-if-changed,$@.tmp,$@) ; \
> > > +	else \
> > > +		echo "markdown or lynx not installed; just copying $<."; \
> > > +		cp $< $@; \
> > > +	fi
> > 
> > Does formatting a non-markdown .txt file like this produce reasonable
> > results for all the random ASCII formatting used under misc?
> 
> Oops, sorry. This is bogus. I'll resubmit with it removed.
> 
> Matt
> 
> > >  html/man/%.1.html: man/%.pod.1 Makefile
> > >  	$(INSTALL_DIR) $(@D)
> > > @@ -131,9 +140,17 @@ txt/%.txt: %.txt
> > >  	$(call move-if-changed,$@.tmp,$@)
> > >  
> > >  txt/%.txt: %.markdown
> > > -	$(INSTALL_DIR) $(@D)
> > > -	cp $< $@.tmp
> > > -	$(call move-if-changed,$@.tmp,$@)
> > > +	@$(INSTALL_DIR) $(@D)
> > > +	@set -e ; \
> > > +	if which $(MARKDOWN) >/dev/null 2>&1 && \
> > > +		which $(LYNX) >/dev/null 2>&1 ; then \
> > > +		echo "Running markdown to generate $*.txt ... "; \
> > > +		$(MARKDOWN) $< | lynx -dump -stdin > $@.tmp ; \
> > > +		$(call move-if-changed,$@.tmp,$@) ; \
> > > +	else \
> > > +		echo "markdown or lynx not installed; just copying $<."; \
> > > +		cp $< $@; \
> > > +	fi
> > >  
> > >  txt/man/%.1.txt: man/%.pod.1 Makefile
> > >  	$(INSTALL_DIR) $(@D)
> > > 
> > > _______________________________________________
> > > Xen-devel mailing list
> > > Xen-devel@lists.xen.org
> > > http://lists.xen.org/xen-devel
> > 
> > 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
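[Editorial sketch of the recipe discussed above. The `md2txt` wrapper name is hypothetical; `MARKDOWN` and `LYNX` mirror the Makefile variables from the patch, and the cmp/mv step approximates the Makefile's `move-if-changed` helper. This is an illustration of the proposed pipeline, not code from the patch itself.]

```shell
#!/bin/sh
# md2txt SRC DST: render markdown SRC to plain text DST via markdown|lynx
# when both tools are installed; otherwise fall back to a plain copy,
# mirroring the if/else in the proposed Makefile rule.
md2txt() {
    src="$1"; dst="$2"
    MARKDOWN="${MARKDOWN:-markdown}"
    LYNX="${LYNX:-lynx}"
    if which "$MARKDOWN" >/dev/null 2>&1 && \
       which "$LYNX" >/dev/null 2>&1; then
        echo "Running markdown to generate $dst ..."
        "$MARKDOWN" "$src" | "$LYNX" -dump -stdin > "$dst.tmp"
        # move-if-changed: replace the target only when content differs,
        # so make does not see a spurious timestamp update
        if cmp -s "$dst.tmp" "$dst" 2>/dev/null; then
            rm -f "$dst.tmp"
        else
            mv "$dst.tmp" "$dst"
        fi
    else
        echo "markdown or lynx not installed; just copying $src."
        cp "$src" "$dst"
    fi
}
```

Forcing `MARKDOWN` to a nonexistent command exercises the fallback branch, which is the behavior machines without lynx (as mentioned above) would see.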

From xen-devel-bounces@lists.xen.org Tue Aug 07 03:25:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 03:25:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyaPr-0003Fe-2t; Tue, 07 Aug 2012 03:25:07 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <msw@amazon.com>) id 1SyaPp-0003FX-LZ
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 03:25:06 +0000
Received: from [85.158.138.51:28810] by server-11.bemta-3.messagelabs.com id
	89/3D-10722-09A80205; Tue, 07 Aug 2012 03:25:04 +0000
X-Env-Sender: msw@amazon.com
X-Msg-Ref: server-6.tower-174.messagelabs.com!1344309902!22688571!1
X-Originating-IP: [72.21.196.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNzIuMjEuMTk2LjI1ID0+IDEwNDk1Nw==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20217 invoked from network); 7 Aug 2012 03:25:03 -0000
Received: from smtp-fw-2101.amazon.com (HELO smtp-fw-2101.amazon.com)
	(72.21.196.25)
	by server-6.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 03:25:03 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=amazon.com; i=msw@amazon.com; q=dns/txt; s=rte02;
	t=1344309903; x=1375845903;
	h=date:from:to:cc:subject:message-id:references:
	mime-version:in-reply-to;
	bh=Ckzw4A4uQ4Vw3HAPMI7Mw0iKbFzmx2i7CKJ0TwJKlfA=;
	b=eSp+y9Y8T/DYLH3VvvOhuc1JUYc/KukdLsrRciJrtBVoCrRcoXoS/+i9
	PZbgdV5zN4YG35JXl5aCoTfS8TfRnw==;
X-IronPort-AV: E=Sophos;i="4.77,724,1336348800"; d="scan'208";a="419814561"
Received: from smtp-in-6002.iad6.amazon.com ([10.195.76.108])
	by smtp-border-fw-out-2101.iad2.amazon.com with
	ESMTP/TLS/DHE-RSA-AES256-SHA; 07 Aug 2012 03:25:01 +0000
Received: from ex10-hub-9003.ant.amazon.com (ex10-hub-9003.ant.amazon.com
	[10.185.137.132])
	by smtp-in-6002.iad6.amazon.com (8.13.8/8.13.8) with ESMTP id
	q773P0q9002785
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=FAIL);
	Tue, 7 Aug 2012 03:25:01 GMT
Received: from US-SEA-R8XVZTX (10.224.80.34) by ex10-hub-9003.ant.amazon.com
	(10.185.137.132) with Microsoft SMTP Server id 14.2.247.3;
	Mon, 6 Aug 2012 20:24:53 -0700
Received: by US-SEA-R8XVZTX (sSMTP sendmail emulation); Mon, 06 Aug 2012
	20:24:53 -0700
Date: Mon, 6 Aug 2012 20:24:53 -0700
From: Matt Wilson <msw@amazon.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Message-ID: <20120807032453.GB4324@US-SEA-R8XVZTX>
References: <1809175cdc9b3a606d75.1343749000@kaos-source-31003.sea31.amazon.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1809175cdc9b3a606d75.1343749000@kaos-source-31003.sea31.amazon.com>
User-Agent: Mutt/1.5.20 (2009-12-10)
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [Xen-devel] [PATCH DOCDAY v3] docs: improve documentation of
 Xen command line parameters
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jul 31, 2012 at 08:36:40AM -0700, Matt Wilson wrote:
> This change improves documentation for several Xen command line
> parameters. Some of the Itanium-specific options are now removed. A
> more thorough check should be performed to remove any other remnants.
> 
> I've reformatted some of the entries to fit in 80 column terminals.
> 
> Options that are yet undocumented but accept standard boolean /
> integer values are now annotated as such.
> 
> The size suffixes have been corrected to use the binary prefixes
> instead of decimal prefixes.
> 
> Changes since v2:
>  * Change *bi prefixes to GiB, MiB, KiB
> 
> Signed-off-by: Matt Wilson <msw@amazon.com>
> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

George's concerns were addressed in this version, and Andrew gave an
Ack. Anything else keeping this from landing in staging?

Matt

> diff -r bf922651da96 -r 1809175cdc9b docs/misc/xen-command-line.markdown
> --- a/docs/misc/xen-command-line.markdown       Sat Jul 28 17:27:30 2012 +0000
> +++ b/docs/misc/xen-command-line.markdown       Mon Jul 30 19:04:59 2012 +0000
> @@ -46,9 +46,9 @@ if a leading `0` is present.
> 
>  A size parameter may be any integer, with a size suffix
> 
> -* `G` or `g`: Giga (2^30)
> -* `M` or `m`: Mega (2^20)
> -* `K` or `k`: Kilo (2^10)
> +* `G` or `g`: GiB (2^30)
> +* `M` or `m`: MiB (2^20)
> +* `K` or `k`: KiB (2^10)
>  * `B` or `b`: Bytes
> 
>  Without a size suffix, the default will be kilo.
> @@ -107,8 +107,10 @@ Specify which ACPI MADT table to parse f
>  than one is present.
> 
>  ### acpi\_pstate\_strict
> +> `= <integer>`
> 
>  ### acpi\_skip\_timer\_override
> +> `= <boolean>`
> 
>  Instruct Xen to ignore timer-interrupt override.
> 
> @@ -117,6 +119,8 @@ the domain 0 kernel this option is autom
>  domain 0 command line
> 
>  ### acpi\_sleep
> +> `= s3_bios | s3_mode`
> +
>  ### allowsuperpage
>  > `= <boolean>`
> 
> @@ -136,12 +140,12 @@ there are more than 8 CPUs, Xen will swi
> 
>  > Default: `false`
> 
> -Force boot on potentially unsafe systems. By default Xen will refuse to boot on
> -systems with the following errata:
> +Force boot on potentially unsafe systems. By default Xen will refuse
> +to boot on systems with the following errata:
> 
>  * AMD Erratum 121. Processors with this erratum are subject to a guest
> -  triggerable Denial of Service. Override only if you trust all of your PV
> -  guests.
> +  triggerable Denial of Service. Override only if you trust all of
> +  your PV guests.
> 
>  ### apic\_verbosity
>  > `= verbose | debug`
> @@ -153,15 +157,16 @@ Increase the verbosity of the APIC code
> 
>  > Default: `true`
> 
> -Permits Xen to set up and use PCI Address Translation Services, which is required
> -for PCI Passthrough.
> +Permits Xen to set up and use PCI Address Translation Services, which
> +is required for PCI Passthrough.
> 
>  ### availmem
>  > `= <size>`
> 
>  > Default: `0` (no limit)
> 
> -Specify a maximum amount of available memory, to which Xen will clamp the e820 table.
> +Specify a maximum amount of available memory, to which Xen will clamp
> +the e820 table.
> 
>  ### badpage
>  > `= List of [ <integer> | <integer>-<integer> ]`
> @@ -176,8 +181,9 @@ Xen's command line.
> 
>  > Default: `true`
> 
> -Scrub free RAM during boot.  This is a safety feature to prevent accidentally leaking
> -sensitive VM data into other VMs if Xen crashes and reboots.
> +Scrub free RAM during boot.  This is a safety feature to prevent
> +accidentally leaking sensitive VM data into other VMs if Xen crashes
> +and reboots.
> 
>  ### cachesize
>  > `= <size>`
> @@ -227,7 +233,6 @@ Both option `com1` and `com2` follow the
> 
>  A typical setup for most situations might be `com1=115200,8n1`
> 
> -
>  ### conring\_size
>  > `= <size>`
> 
> @@ -300,25 +305,30 @@ Indicate where the responsibility for dr
>  ### cpuid\_mask\_cpu (AMD only)
>  > `= fam_0f_rev_c | fam_0f_rev_d | fam_0f_rev_e | fam_0f_rev_f | fam_0f_rev_g | fam_10_rev_b | fam_10_rev_c | fam_11_rev_b`
> 
> -If the other **cpuid\_mask\_{,ext\_}e{c,d}x** options are fully set (unspecified
> -on the command line), specify a pre-canned cpuid mask to mask the current
> -processor down to appear as the specified processor.  It is important to ensure
> -that all hosts in a pool appear the same to guests to allow successful live
> -migration.
> +If the other **cpuid\_mask\_{,ext\_}e{c,d}x** options are fully set
> +(unspecified on the command line), specify a pre-canned cpuid mask to
> +mask the current processor down to appear as the specified processor.
> +It is important to ensure that all hosts in a pool appear the same to
> +guests to allow successful live migration.
> 
>  ### cpuid\_mask\_ ecx,edx,ext\_ecx,ext\_edx,xsave_eax
>  > `= <integer>`
> 
>  > Default: `~0` (all bits set)
> 
> -These five command line parameters are used to specify cpuid masks to help with
> -cpuid levelling across a pool of hosts.  Setting a bit in the mask indicates that
> -the feature should be enabled, while clearing a bit in the mask indicates that
> -the feature should be disabled.  It is important to ensure that all hosts in a
> -pool appear the same to guests to allow successful live migration.
> +These five command line parameters are used to specify cpuid masks to
> +help with cpuid levelling across a pool of hosts.  Setting a bit in
> +the mask indicates that the feature should be enabled, while clearing
> +a bit in the mask indicates that the feature should be disabled.  It
> +is important to ensure that all hosts in a pool appear the same to
> +guests to allow successful live migration.
> 
>  ### cpuidle
> +> `= <boolean>`
> +
>  ### cpuinfo
> +> `= <boolean>`
> +
>  ### crashinfo_maxaddr
>  > `= <size>`
> 
> @@ -328,17 +338,42 @@ Specify the maximum address to allocate
>  combination with the `low_crashinfo` command line option.
> 
>  ### crashkernel
> +> `= <ramsize-range>:<size>[,...][@<offset>]`
> +
>  ### credit2\_balance\_over
> +> `= <integer>`
> +
>  ### credit2\_balance\_under
> +> `= <integer>`
> +
>  ### credit2\_load\_window\_shift
> +> `= <integer>`
> +
>  ### debug\_stack\_lines
> +> `= <integer>`
> +
> +> Default: `20`
> +
> +Limits the number of lines printed in Xen stack traces.
> +
>  ### debugtrace
> +> `= <integer>`
> +
> +> Default: `128`
> +
> +Specify the size of the console debug trace buffer in KiB. The debug
> +trace feature is only enabled in debugging builds of Xen.
> +
>  ### dma\_bits
>  > `= <integer>`
> 
>  Specify the bit width of the DMA heap.
> 
>  ### dom0\_ioports\_disable
> +> `= List of <hex>-<hex>`
> +
> +Specify a list of IO ports to be excluded from dom0 access.
> +
>  ### dom0\_max\_vcpus
>  > `= <integer>`
> 
> @@ -372,6 +407,8 @@ For example, to set dom0's initial memor
>  allow it to balloon up as far as 1GB use `dom0_mem=512M,max:1G`
> 
>  ### dom0\_shadow
> +> `= <boolean>`
> +
>  ### dom0\_vcpus\_pin
>  > `= <boolean>`
> 
> @@ -379,10 +416,21 @@ allow it to balloon up as far as 1GB use
> 
>  Pin dom0 vcpus to their respective pcpus
> 
> -### dom0\_vhpt\_size\_log2
> -### dom\_rid\_bits
>  ### e820-mtrr-clip
> +> `= <boolean>`
> +
> +> Default: `true` on Intel CPUs, otherwise `false`
> +
> +Flag that specifies if RAM should be clipped to the highest cacheable
> +MTRR.
> +
>  ### e820-verbose
> +> `= <boolean>`
> +
> +> Default: `false`
> +
> +Flag that enables verbose output when processing e820 information and
> +applying clipping.
> 
>  ### edd (x86)
>  > `= off | on | skipmbr`
> @@ -397,17 +445,32 @@ Either force retrieval of monitor EDID i
>  disable it (edid=no). This option should not normally be required
>  except for debugging purposes.
> 
> -### efi\_print
>  ### extra\_guest\_irqs
>  > `= <number>`
> 
>  Increase the number of PIRQs available for the guest. The default is 32.
> 
>  ### flask\_enabled
> +> `= <integer>`
> +
>  ### flask\_enforcing
> +> `= <integer>`
> +
>  ### font
> +> `= <height>` where height is `8x8 | 8x14 | 8x16`
> +
> +Specify the font size when using the VESA console driver.
> +
>  ### gdb
> +> `= <baud>[/<clock_hz>][,DPS[,<io-base>[,<irq>[,<port-bdf>[,<bridge-bdf>]]]] | pci | amt ] `
> +
> +Specify the serial parameters for the GDB stub.
> +
>  ### gnttab\_max\_nr\_frames
> +> `= <integer>`
> +
> +Specify the maximum number of frames per grant table operation.
> +
>  ### guest\_loglvl
>  > `= <level>[/<rate-limited level>]` where level is `none | error | warning | info | debug | all`
> 
> @@ -420,15 +483,41 @@ The optional `<rate-limited level>` opti
>  should be rate limited.
> 
>  ### hap\_1gb
> +> `= <boolean>`
> +
> +> Default: `true`
> +
> +Flag to enable 1 GB host page table support for Hardware Assisted
> +Paging (HAP).
> +
>  ### hap\_2mb
> +> `= <boolean>`
> +
> +> Default: `true`
> +
> +Flag to enable 2 MB host page table support for Hardware Assisted
> +Paging (HAP).
> +
>  ### hpetbroadcast
> +> `= <boolean>`
> +
>  ### hvm\_debug
> +> `= <integer>`
> +
>  ### hvm\_port80
> +> `= <boolean>`
> +
>  ### idle\_latency\_factor
> +> `= <integer>`
> +
>  ### ioapic\_ack
>  ### iommu
>  ### iommu\_inclusive\_mapping
> +> `= <boolean>`
> +
>  ### irq\_ratelimit
> +> `= <integer>`
> +
>  ### irq\_vector\_map
>  ### lapic
> 
> @@ -437,7 +526,11 @@ if left disabled by the BIOS.  This opti
>  all.
> 
>  ### lapic\_timer\_c2\_ok
> +> `= <boolean>`
> +
>  ### ler
> +> `= <boolean>`
> +
>  ### loglvl
>  > `= <level>[/<rate-limited level>]` where level is `none | error | warning | info | debug | all`
> 
> @@ -461,18 +554,38 @@ so the crash kernel may find find them.
>  with **crashinfo_maxaddr**.
> 
>  ### max\_cstate
> +> `= <integer>`
> +
>  ### max\_gsi\_irqs
> +> `= <integer>`
> +
>  ### maxcpus
> +> `= <integer>`
> +
>  ### mce
> +> `= <integer>`
> +
>  ### mce\_fb
> +> `= <integer>`
> +
>  ### mce\_verbosity
> +> `= verbose`
> +
> +Specify verbose machine check output.
> +
>  ### mem
>  > `= <size>`
> 
> -Specifies the maximum address of physical RAM.  Any RAM beyond this
> +Specify the maximum address of physical RAM.  Any RAM beyond this
>  limit is ignored by Xen.
> 
>  ### mmcfg
> +> `= <boolean>[,amd-fam10]`
> +
> +> Default: `1`
> +
> +Specify if the MMConfig space should be enabled.
> +
>  ### nmi
>  > `= ignore | dom0 | fatal`
> 
> @@ -493,6 +606,8 @@ domain 0 kernel this option is automatic
>  0 command line.
> 
>  ### nofxsr
> +> `= <boolean>`
> +
>  ### noirqbalance
>  > `= <boolean>`
> 
> @@ -501,11 +616,15 @@ systems such as Dell 1850/2850 that have
>  IRQ routing issues.
> 
>  ### nolapic
> +> `= <boolean>`
> +
> +> Default: `false`
> 
>  Ignore the local APIC on a uniprocessor system, even if enabled by the
>  BIOS.  This option will accept a value.
> 
>  ### no-real-mode (x86)
> +> `= <boolean>`
> 
>  Do not execute real-mode bootstrap code when booting Xen. This option
>  should not be used except for debugging. It will effectively disable
> @@ -519,6 +638,10 @@ catching debug output.  Defaults to auto
>  seconds.
> 
>  ### noserialnumber
> +> `= <boolean>`
> +
> +Disable CPU serial number reporting.
> +
>  ### nosmp
>  > `= <boolean>`
> 
> @@ -526,11 +649,39 @@ Disable SMP support.  No secondary proce
>  Defaults to booting secondary processors.
> 
>  ### nr\_irqs
> +> `= <integer>`
> +
>  ### numa
> -### pervcpu\_vhpt
> +> `= on | off | fake=<integer> | noacpi`
> +
> +> Default: `on`
> +
>  ### ple\_gap
> +> `= <integer>`
> +
>  ### ple\_window
> +> `= <integer>`
> +
>  ### reboot
> +> `= b[ios] | t[riple] | k[bd] | n[o] [, [w]arm | [c]old]`
> +
> +> Default: `0`
> +
> +Specify the host reboot method.
> +
> +`warm` instructs Xen to not set the cold reboot flag.
> +
> +`cold` instructs Xen to set the cold reboot flag.
> +
> +`bios` instructs Xen to reboot the host by jumping to BIOS. This is
> +only available on 32-bit x86 platforms.
> +
> +`triple` instructs Xen to reboot the host by causing a triple fault.
> +
> +`kbd` instructs Xen to reboot the host via the keyboard controller.
> +
> +`acpi` instructs Xen to reboot the host using RESET_REG in the ACPI FADT.
> +
>  ### sched
>  > `= credit | credit2 | sedf | arinc653`
> 
> @@ -539,10 +690,20 @@ Defaults to booting secondary processors
>  Choose the default scheduler.
> 
>  ### sched\_credit2\_migrate\_resist
> +> `= <integer>`
> +
>  ### sched\_credit\_default\_yield
> +> `= <boolean>`
> +
>  ### sched\_credit\_tslice\_ms
> +> `= <integer>`
> +
>  ### sched\_ratelimit\_us
> +> `= <integer>`
> +
>  ### sched\_smt\_power\_savings
> +> `= <boolean>`
> +
>  ### serial\_tx\_buffer
>  > `= <size>`
> 
> @@ -551,7 +712,15 @@ Choose the default scheduler.
>  Set the serial transmit buffer size.
> 
>  ### smep
> +> `= <boolean>`
> +
> +> Default: `true`
> +
> +Flag to enable Supervisor Mode Execution Protection.
> +
>  ### snb\_igd\_quirk
> +> `= <boolean>`
> +
>  ### sync\_console
>  > `= <boolean>`
> 
> @@ -561,28 +730,80 @@ Flag to force synchronous console output
>  not suitable for production environments due to incurred overhead.
> 
>  ### tboot
> +> `= 0x<phys_addr>`
> +
> +Specify the physical address of the trusted boot shared page.
> +
>  ### tbuf\_size
>  > `= <integer>`
> 
>  Specify the per-cpu trace buffer size in pages.
> 
>  ### tdt
> +> `= <boolean>`
> +
> +> Default: `true`
> +
> +Flag to enable TSC deadline as the APIC timer mode.
> +
>  ### tevt\_mask
> +> `= <integer>`
> +
> +Specify a mask for Xen event tracing. This allows Xen tracing to be
> +enabled at boot. Refer to the xentrace(8) documentation for a list of
> +valid event mask values. In order to enable tracing, a buffer size (in
> +pages) must also be specified via the tbuf\_size parameter.
> +
>  ### tickle\_one\_idle\_cpu
> +> `= <boolean>`
> +
>  ### timer\_slop
> +> `= <integer>`
> +
>  ### tmem
> +> `= <boolean>`
> +
>  ### tmem\_compress
> +> `= <boolean>`
> +
>  ### tmem\_dedup
> +> `= <boolean>`
> +
>  ### tmem\_lock
> +> `= <integer>`
> +
>  ### tmem\_shared\_auth
> +> `= <boolean>`
> +
>  ### tmem\_tze
> +> `= <integer>`
> +
>  ### tsc
> +> `= unstable | skewed`
> +
>  ### ucode
>  ### unrestricted\_guest
> +> `= <boolean>`
> +
>  ### vcpu\_migration\_delay
> +> `= <integer>`
> +
> +> Default: `0`
> +
> +Specify a delay, in microseconds, between migrations of a VCPU between
> +PCPUs when using the credit1 scheduler. This prevents rapid fluttering
> +of a VCPU between CPUs, and reduces the implicit overheads such as
> +cache-warming. 1ms (1000) has been measured as a good value.
> +
>  ### vesa-map
> +> `= <integer>`
> +
>  ### vesa-mtrr
> +> `= <integer>`
> +
>  ### vesa-ram
> +> `= <integer>`
> +
>  ### vga
>  > `= ( ask | current | text-80x<rows> | gfx-<width>x<height>x<depth> | mode-<mode> )[,keep]`
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 03:25:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 03:25:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyaPr-0003Fe-2t; Tue, 07 Aug 2012 03:25:07 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <msw@amazon.com>) id 1SyaPp-0003FX-LZ
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 03:25:06 +0000
Received: from [85.158.138.51:28810] by server-11.bemta-3.messagelabs.com id
	89/3D-10722-09A80205; Tue, 07 Aug 2012 03:25:04 +0000
X-Env-Sender: msw@amazon.com
X-Msg-Ref: server-6.tower-174.messagelabs.com!1344309902!22688571!1
X-Originating-IP: [72.21.196.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNzIuMjEuMTk2LjI1ID0+IDEwNDk1Nw==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20217 invoked from network); 7 Aug 2012 03:25:03 -0000
Received: from smtp-fw-2101.amazon.com (HELO smtp-fw-2101.amazon.com)
	(72.21.196.25)
	by server-6.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 03:25:03 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=amazon.com; i=msw@amazon.com; q=dns/txt; s=rte02;
	t=1344309903; x=1375845903;
	h=date:from:to:cc:subject:message-id:references:
	mime-version:in-reply-to;
	bh=Ckzw4A4uQ4Vw3HAPMI7Mw0iKbFzmx2i7CKJ0TwJKlfA=;
	b=eSp+y9Y8T/DYLH3VvvOhuc1JUYc/KukdLsrRciJrtBVoCrRcoXoS/+i9
	PZbgdV5zN4YG35JXl5aCoTfS8TfRnw==;
X-IronPort-AV: E=Sophos;i="4.77,724,1336348800"; d="scan'208";a="419814561"
Received: from smtp-in-6002.iad6.amazon.com ([10.195.76.108])
	by smtp-border-fw-out-2101.iad2.amazon.com with
	ESMTP/TLS/DHE-RSA-AES256-SHA; 07 Aug 2012 03:25:01 +0000
Received: from ex10-hub-9003.ant.amazon.com (ex10-hub-9003.ant.amazon.com
	[10.185.137.132])
	by smtp-in-6002.iad6.amazon.com (8.13.8/8.13.8) with ESMTP id
	q773P0q9002785
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=FAIL);
	Tue, 7 Aug 2012 03:25:01 GMT
Received: from US-SEA-R8XVZTX (10.224.80.34) by ex10-hub-9003.ant.amazon.com
	(10.185.137.132) with Microsoft SMTP Server id 14.2.247.3;
	Mon, 6 Aug 2012 20:24:53 -0700
Received: by US-SEA-R8XVZTX (sSMTP sendmail emulation); Mon, 06 Aug 2012
	20:24:53 -0700
Date: Mon, 6 Aug 2012 20:24:53 -0700
From: Matt Wilson <msw@amazon.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Message-ID: <20120807032453.GB4324@US-SEA-R8XVZTX>
References: <1809175cdc9b3a606d75.1343749000@kaos-source-31003.sea31.amazon.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1809175cdc9b3a606d75.1343749000@kaos-source-31003.sea31.amazon.com>
User-Agent: Mutt/1.5.20 (2009-12-10)
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [Xen-devel] [PATCH DOCDAY v3] docs: improve documentation of
 Xen command line parameters
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jul 31, 2012 at 08:36:40AM -0700, Matt Wilson wrote:
> This change improves documentation for several Xen command line
> parameters. Some of the Itanium-specific options are now removed. A
> more thorough check should be performed to remove any other remnants.
> 
> I've reformatted some of the entries to fit in 80 column terminals.
> 
> Options that are yet undocumented but accept standard boolean /
> integer values are now annotated as such.
> 
> The size suffixes have been corrected to use the binary prefixes
> instead of decimal prefixes.
> 
> Changes since v2:
>  * Change *bi prefixes to GiB, MiB, KiB
> 
> Signed-off-by: Matt Wilson <msw@amazon.com>
> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

George's concerns were addressed in this version, and Andrew gave an
Ack. Anything else keeping this from landing in staging?

Matt

> diff -r bf922651da96 -r 1809175cdc9b docs/misc/xen-command-line.markdown
> --- a/docs/misc/xen-command-line.markdown       Sat Jul 28 17:27:30 2012 +0000
> +++ b/docs/misc/xen-command-line.markdown       Mon Jul 30 19:04:59 2012 +0000
> @@ -46,9 +46,9 @@ if a leading `0` is present.
> 
>  A size parameter may be any integer, with a size suffix
> 
> -* `G` or `g`: Giga (2^30)
> -* `M` or `m`: Mega (2^20)
> -* `K` or `k`: Kilo (2^10)
> +* `G` or `g`: GiB (2^30)
> +* `M` or `m`: MiB (2^20)
> +* `K` or `k`: KiB (2^10)
>  * `B` or `b`: Bytes
> 
>  Without a size suffix, the default will be kilo.
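The suffix rules above can be modeled in a few lines. This is only an illustrative sketch of the documented behavior (function name and all, it is not Xen's actual parser, which is written in C):

```python
# Illustrative model of the documented size-suffix rules; not Xen's parser.
def parse_size(s):
    """Parse an integer with an optional G/M/K/B suffix (case-insensitive).

    Without a suffix, the value is interpreted as kilo (KiB), per the
    documentation above.
    """
    suffixes = {"g": 2**30, "m": 2**20, "k": 2**10, "b": 1}
    s = s.strip()
    if s and s[-1].lower() in suffixes:
        return int(s[:-1]) * suffixes[s[-1].lower()]
    return int(s) * 2**10  # no suffix: default is kilo (KiB)

# Examples:
# parse_size("512M") == 512 * 2**20
# parse_size("1G")   == 2**30
# parse_size("64")   == 64 * 2**10   (no suffix defaults to KiB)
```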
> @@ -107,8 +107,10 @@ Specify which ACPI MADT table to parse f
>  than one is present.
> 
>  ### acpi\_pstate\_strict
> +> `= <integer>`
> 
>  ### acpi\_skip\_timer\_override
> +> `= <boolean>`
> 
>  Instruct Xen to ignore timer-interrupt override.
> 
> @@ -117,6 +119,8 @@ the domain 0 kernel this option is autom
>  domain 0 command line
> 
>  ### acpi\_sleep
> +> `= s3_bios | s3_mode`
> +
>  ### allowsuperpage
>  > `= <boolean>`
> 
> @@ -136,12 +140,12 @@ there are more than 8 CPUs, Xen will swi
> 
>  > Default: `false`
> 
> -Force boot on potentially unsafe systems. By default Xen will refuse to boot on
> -systems with the following errata:
> +Force boot on potentially unsafe systems. By default Xen will refuse
> +to boot on systems with the following errata:
> 
>  * AMD Erratum 121. Processors with this erratum are subject to a guest
> -  triggerable Denial of Service. Override only if you trust all of your PV
> -  guests.
> +  triggerable Denial of Service. Override only if you trust all of
> +  your PV guests.
> 
>  ### apic\_verbosity
>  > `= verbose | debug`
> @@ -153,15 +157,16 @@ Increase the verbosity of the APIC code
> 
>  > Default: `true`
> 
> -Permits Xen to set up and use PCI Address Translation Services, which is required
> -for PCI Passthrough.
> +Permits Xen to set up and use PCI Address Translation Services, which
> +is required for PCI Passthrough.
> 
>  ### availmem
>  > `= <size>`
> 
>  > Default: `0` (no limit)
> 
> -Specify a maximum amount of available memory, to which Xen will clamp the e820 table.
> +Specify a maximum amount of available memory, to which Xen will clamp
> +the e820 table.
> 
>  ### badpage
>  > `= List of [ <integer> | <integer>-<integer> ]`
> @@ -176,8 +181,9 @@ Xen's command line.
> 
>  > Default: `true`
> 
> -Scrub free RAM during boot.  This is a safety feature to prevent accidentally leaking
> -sensitive VM data into other VMs if Xen crashes and reboots.
> +Scrub free RAM during boot.  This is a safety feature to prevent
> +accidentally leaking sensitive VM data into other VMs if Xen crashes
> +and reboots.
> 
>  ### cachesize
>  > `= <size>`
> @@ -227,7 +233,6 @@ Both option `com1` and `com2` follow the
> 
>  A typical setup for most situations might be `com1=115200,8n1`
> 
> -
>  ### conring\_size
>  > `= <size>`
> 
> @@ -300,25 +305,30 @@ Indicate where the responsibility for dr
>  ### cpuid\_mask\_cpu (AMD only)
>  > `= fam_0f_rev_c | fam_0f_rev_d | fam_0f_rev_e | fam_0f_rev_f | fam_0f_rev_g | fam_10_rev_b | fam_10_rev_c | fam_11_rev_b`
> 
> -If the other **cpuid\_mask\_{,ext\_}e{c,d}x** options are fully set (unspecified
> -on the command line), specify a pre-canned cpuid mask to mask the current
> -processor down to appear as the specified processor.  It is important to ensure
> -that all hosts in a pool appear the same to guests to allow successful live
> -migration.
> +If the other **cpuid\_mask\_{,ext\_}e{c,d}x** options are fully set
> +(unspecified on the command line), specify a pre-canned cpuid mask to
> +mask the current processor down to appear as the specified processor.
> +It is important to ensure that all hosts in a pool appear the same to
> +guests to allow successful live migration.
> 
>  ### cpuid\_mask\_ ecx,edx,ext\_ecx,ext\_edx,xsave_eax
>  > `= <integer>`
> 
>  > Default: `~0` (all bits set)
> 
> -These five command line parameters are used to specify cpuid masks to help with
> -cpuid levelling across a pool of hosts.  Setting a bit in the mask indicates that
> -the feature should be enabled, while clearing a bit in the mask indicates that
> -the feature should be disabled.  It is important to ensure that all hosts in a
> -pool appear the same to guests to allow successful live migration.
> +These five command line parameters are used to specify cpuid masks to
> +help with cpuid levelling across a pool of hosts.  Setting a bit in
> +the mask indicates that the feature should be enabled, while clearing
> +a bit in the mask indicates that the feature should be disabled.  It
> +is important to ensure that all hosts in a pool appear the same to
> +guests to allow successful live migration.
> 
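The levelling model described above amounts to a bitwise AND: the hardware's reported feature word is combined with the configured mask, so a cleared mask bit hides that feature from guests. A hypothetical illustration (not Xen's implementation; the names and bit values are invented for the example):

```python
# Hypothetical sketch of cpuid feature masking: the host's feature word
# is ANDed with the configured mask. A set mask bit leaves the feature
# as the hardware reports it; a cleared bit forces it off.
def levelled_features(host_features, mask=~0):
    return host_features & mask

HOST_ECX = 0b1011   # features this host's CPUID reports (invented value)
POOL_MASK = 0b0011  # lowest common denominator across the pool (invented)

# With the default mask (~0, all bits set) the features are unchanged;
# with a pool mask, features absent on other hosts are hidden.
```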
>  ### cpuidle
> +> `= <boolean>`
> +
>  ### cpuinfo
> +> `= <boolean>`
> +
>  ### crashinfo_maxaddr
>  > `= <size>`
> 
> @@ -328,17 +338,42 @@ Specify the maximum address to allocate
>  combination with the `low_crashinfo` command line option.
> 
>  ### crashkernel
> +> `= <ramsize-range>:<size>[,...][@<offset>]`
> +
>  ### credit2\_balance\_over
> +> `= <integer>`
> +
>  ### credit2\_balance\_under
> +> `= <integer>`
> +
>  ### credit2\_load\_window\_shift
> +> `= <integer>`
> +
>  ### debug\_stack\_lines
> +> `= <integer>`
> +
> +> Default: `20`
> +
> +Limits the number of lines printed in Xen stack traces.
> +
>  ### debugtrace
> +> `= <integer>`
> +
> +> Default: `128`
> +
> +Specify the size of the console debug trace buffer in KiB. The debug
> +trace feature is only enabled in debugging builds of Xen.
> +
>  ### dma\_bits
>  > `= <integer>`
> 
>  Specify the bit width of the DMA heap.
> 
>  ### dom0\_ioports\_disable
> +> `= List of <hex>-<hex>`
> +
> +Specify a list of IO ports to be excluded from dom0 access.
> +
>  ### dom0\_max\_vcpus
>  > `= <integer>`
> 
> @@ -372,6 +407,8 @@ For example, to set dom0's initial memor
>  allow it to balloon up as far as 1GB use `dom0_mem=512M,max:1G`
> 
>  ### dom0\_shadow
> +> `= <boolean>`
> +
>  ### dom0\_vcpus\_pin
>  > `= <boolean>`
> 
> @@ -379,10 +416,21 @@ allow it to balloon up as far as 1GB use
> 
>  Pin dom0 vcpus to their respective pcpus
> 
> -### dom0\_vhpt\_size\_log2
> -### dom\_rid\_bits
>  ### e820-mtrr-clip
> +> `= <boolean>`
> +
> +> Default: `true` on Intel CPUs, otherwise `false`
> +
> +Flag that specifies if RAM should be clipped to the highest cacheable
> +MTRR.
> +
>  ### e820-verbose
> +> `= <boolean>`
> +
> +> Default: `false`
> +
> +Flag that enables verbose output when processing e820 information and
> +applying clipping.
> 
>  ### edd (x86)
>  > `= off | on | skipmbr`
> @@ -397,17 +445,32 @@ Either force retrieval of monitor EDID i
>  disable it (edid=no). This option should not normally be required
>  except for debugging purposes.
> 
> -### efi\_print
>  ### extra\_guest\_irqs
>  > `= <number>`
> 
>  Increase the number of PIRQs available for the guest. The default is 32.
> 
>  ### flask\_enabled
> +> `= <integer>`
> +
>  ### flask\_enforcing
> +> `= <integer>`
> +
>  ### font
> +> `= <height>` where height is `8x8 | 8x14 | 8x16`
> +
> +Specify the font size when using the VESA console driver.
> +
>  ### gdb
> +> `= <baud>[/<clock_hz>][,DPS[,<io-base>[,<irq>[,<port-bdf>[,<bridge-bdf>]]]] | pci | amt ] `
> +
> +Specify the serial parameters for the GDB stub.
> +
>  ### gnttab\_max\_nr\_frames
> +> `= <integer>`
> +
> +Specify the maximum number of frames per grant table operation.
> +
>  ### guest\_loglvl
>  > `= <level>[/<rate-limited level>]` where level is `none | error | warning | info | debug | all`
> 
> @@ -420,15 +483,41 @@ The optional `<rate-limited level>` opti
>  should be rate limited.
> 
>  ### hap\_1gb
> +> `= <boolean>`
> +
> +> Default: `true`
> +
> +Flag to enable 1 GB host page table support for Hardware Assisted
> +Paging (HAP).
> +
>  ### hap\_2mb
> +> `= <boolean>`
> +
> +> Default: `true`
> +
> +Flag to enable 2 MB host page table support for Hardware Assisted
> +Paging (HAP).
> +
>  ### hpetbroadcast
> +> `= <boolean>`
> +
>  ### hvm\_debug
> +> `= <integer>`
> +
>  ### hvm\_port80
> +> `= <boolean>`
> +
>  ### idle\_latency\_factor
> +> `= <integer>`
> +
>  ### ioapic\_ack
>  ### iommu
>  ### iommu\_inclusive\_mapping
> +> `= <boolean>`
> +
>  ### irq\_ratelimit
> +> `= <integer>`
> +
>  ### irq\_vector\_map
>  ### lapic
> 
> @@ -437,7 +526,11 @@ if left disabled by the BIOS.  This opti
>  all.
> 
>  ### lapic\_timer\_c2\_ok
> +> `= <boolean>`
> +
>  ### ler
> +> `= <boolean>`
> +
>  ### loglvl
>  > `= <level>[/<rate-limited level>]` where level is `none | error | warning | info | debug | all`
> 
> @@ -461,18 +554,38 @@ so the crash kernel may find find them.
>  with **crashinfo_maxaddr**.
> 
>  ### max\_cstate
> +> `= <integer>`
> +
>  ### max\_gsi\_irqs
> +> `= <integer>`
> +
>  ### maxcpus
> +> `= <integer>`
> +
>  ### mce
> +> `= <integer>`
> +
>  ### mce\_fb
> +> `= <integer>`
> +
>  ### mce\_verbosity
> +> `= verbose`
> +
> +Specify verbose machine check output.
> +
>  ### mem
>  > `= <size>`
> 
> -Specifies the maximum address of physical RAM.  Any RAM beyond this
> +Specify the maximum address of physical RAM.  Any RAM beyond this
>  limit is ignored by Xen.
> 
>  ### mmcfg
> +> `= <boolean>[,amd-fam10]`
> +
> +> Default: `1`
> +
> +Specify if the MMConfig space should be enabled.
> +
>  ### nmi
>  > `= ignore | dom0 | fatal`
> 
> @@ -493,6 +606,8 @@ domain 0 kernel this option is automatic
>  0 command line.
> 
>  ### nofxsr
> +> `= <boolean>`
> +
>  ### noirqbalance
>  > `= <boolean>`
> 
> @@ -501,11 +616,15 @@ systems such as Dell 1850/2850 that have
>  IRQ routing issues.
> 
>  ### nolapic
> +> `= <boolean>`
> +
> +> Default: `false`
> 
>  Ignore the local APIC on a uniprocessor system, even if enabled by the
>  BIOS.  This option will accept a value.
> 
>  ### no-real-mode (x86)
> +> `= <boolean>`
> 
>  Do not execute real-mode bootstrap code when booting Xen. This option
>  should not be used except for debugging. It will effectively disable
> @@ -519,6 +638,10 @@ catching debug output.  Defaults to auto
>  seconds.
> 
>  ### noserialnumber
> +> `= <boolean>`
> +
> +Disable CPU serial number reporting.
> +
>  ### nosmp
>  > `= <boolean>`
> 
> @@ -526,11 +649,39 @@ Disable SMP support.  No secondary proce
>  Defaults to booting secondary processors.
> 
>  ### nr\_irqs
> +> `= <integer>`
> +
>  ### numa
> -### pervcpu\_vhpt
> +> `= on | off | fake=<integer> | noacpi`
> +
> +> Default: `on`
> +
>  ### ple\_gap
> +> `= <integer>`
> +
>  ### ple\_window
> +> `= <integer>`
> +
>  ### reboot
> +> `= b[ios] | t[riple] | k[bd] | n[o] [, [w]arm | [c]old]`
> +
> +> Default: `0`
> +
> +Specify the host reboot method.
> +
> +`warm` instructs Xen to not set the cold reboot flag.
> +
> +`cold` instructs Xen to set the cold reboot flag.
> +
> +`bios` instructs Xen to reboot the host by jumping to BIOS. This is
> +only available on 32-bit x86 platforms.
> +
> +`triple` instructs Xen to reboot the host by causing a triple fault.
> +
> +`kbd` instructs Xen to reboot the host via the keyboard controller.
> +
> +`acpi` instructs Xen to reboot the host using RESET_REG in the ACPI FADT.
> +
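The `b[ios]`-style grammar above means each value may be abbreviated to any prefix that includes the part outside the brackets. A small sketch of that prefix matching, modeling only the grammar in this entry (not Xen's option parser; `acpi` is included because the description mentions it):

```python
# Sketch of the abbreviated reboot-method values: "b[ios]" means any
# nonempty prefix of "bios" is accepted. Models the documented grammar,
# not Xen's actual option parsing.
METHODS = ["bios", "triple", "kbd", "no", "acpi"]

def match_reboot(value):
    """Return the reboot method the given value abbreviates, or None."""
    for m in METHODS:
        if value and m.startswith(value):
            return m
    return None

# match_reboot("k")      -> "kbd"
# match_reboot("triple") -> "triple"
```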
>  ### sched
>  > `= credit | credit2 | sedf | arinc653`
> 
> @@ -539,10 +690,20 @@ Defaults to booting secondary processors
>  Choose the default scheduler.
> 
>  ### sched\_credit2\_migrate\_resist
> +> `= <integer>`
> +
>  ### sched\_credit\_default\_yield
> +> `= <boolean>`
> +
>  ### sched\_credit\_tslice\_ms
> +> `= <integer>`
> +
>  ### sched\_ratelimit\_us
> +> `= <integer>`
> +
>  ### sched\_smt\_power\_savings
> +> `= <boolean>`
> +
>  ### serial\_tx\_buffer
>  > `= <size>`
> 
> @@ -551,7 +712,15 @@ Choose the default scheduler.
>  Set the serial transmit buffer size.
> 
>  ### smep
> +> `= <boolean>`
> +
> +> Default: `true`
> +
> +Flag to enable Supervisor Mode Execution Protection.
> +
>  ### snb\_igd\_quirk
> +> `= <boolean>`
> +
>  ### sync\_console
>  > `= <boolean>`
> 
> @@ -561,28 +730,80 @@ Flag to force synchronous console output
>  not suitable for production environments due to incurred overhead.
> 
>  ### tboot
> +> `= 0x<phys_addr>`
> +
> +Specify the physical address of the trusted boot shared page.
> +
>  ### tbuf\_size
>  > `= <integer>`
> 
>  Specify the per-cpu trace buffer size in pages.
> 
>  ### tdt
> +> `= <boolean>`
> +
> +> Default: `true`
> +
> +Flag to enable TSC deadline as the APIC timer mode.
> +
>  ### tevt\_mask
> +> `= <integer>`
> +
> +Specify a mask for Xen event tracing. This allows Xen tracing to be
> +enabled at boot. Refer to the xentrace(8) documentation for a list of
> +valid event mask values. In order to enable tracing, a buffer size (in
> +pages) must also be specified via the tbuf\_size parameter.
> +
>  ### tickle\_one\_idle\_cpu
> +> `= <boolean>`
> +
>  ### timer\_slop
> +> `= <integer>`
> +
>  ### tmem
> +> `= <boolean>`
> +
>  ### tmem\_compress
> +> `= <boolean>`
> +
>  ### tmem\_dedup
> +> `= <boolean>`
> +
>  ### tmem\_lock
> +> `= <integer>`
> +
>  ### tmem\_shared\_auth
> +> `= <boolean>`
> +
>  ### tmem\_tze
> +> `= <integer>`
> +
>  ### tsc
> +> `= unstable | skewed`
> +
>  ### ucode
>  ### unrestricted\_guest
> +> `= <boolean>`
> +
>  ### vcpu\_migration\_delay
> +> `= <integer>`
> +
> +> Default: `0`
> +
> +Specify a delay, in microseconds, between migrations of a VCPU between
> +PCPUs when using the credit1 scheduler. This prevents rapid fluttering
> +of a VCPU between CPUs, and reduces the implicit overheads such as
> +cache-warming. 1ms (1000) has been measured as a good value.
> +
>  ### vesa-map
> +> `= <integer>`
> +
>  ### vesa-mtrr
> +> `= <integer>`
> +
>  ### vesa-ram
> +> `= <integer>`
> +
>  ### vga
>  > `= ( ask | current | text-80x<rows> | gfx-<width>x<height>x<depth> | mode-<mode> )[,keep]`
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 03:38:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 03:38:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Syabv-0003Xe-QQ; Tue, 07 Aug 2012 03:37:35 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <listmail@triad.rr.com>) id 1Syabu-0003XZ-L5
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 03:37:34 +0000
Received: from [85.158.138.51:59345] by server-1.bemta-3.messagelabs.com id
	6C/1E-29224-D7D80205; Tue, 07 Aug 2012 03:37:33 +0000
X-Env-Sender: listmail@triad.rr.com
X-Msg-Ref: server-9.tower-174.messagelabs.com!1344310652!30780188!1
X-Originating-IP: [71.74.56.122]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA3MS43NC41Ni4xMjIgPT4gMzg1Mzcw\n,sa_preprocessor: 
	QmFkIElQOiA3MS43NC41Ni4xMjIgPT4gMzg1Mzcw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24544 invoked from network); 7 Aug 2012 03:37:33 -0000
Received: from hrndva-omtalb.mail.rr.com (HELO hrndva-omtalb.mail.rr.com)
	(71.74.56.122) by server-9.tower-174.messagelabs.com with SMTP;
	7 Aug 2012 03:37:33 -0000
X-Authority-Analysis: v=2.0 cv=Vb91zSV9 c=1 sm=0 a=R14c1kN7475LMi+rpQwGWw==:17
	a=Kt2980LFN-gA:10 a=mIU5gKuZvJwA:10 a=05ChyHeVI94A:10
	a=IkcTkHD0fZMA:10 a=ayC55rCoAAAA:8 a=n8i27M1mAAAA:8
	a=NAM8ycOw7El-FyqjS3oA:9 a=QEXdDO2ut3YA:10
	a=R14c1kN7475LMi+rpQwGWw==:117
X-Cloudmark-Score: 0
X-Originating-IP: 65.190.252.167
Received: from [65.190.252.167] ([65.190.252.167:48278] helo=corenix.localnet)
	by hrndva-oedge03.mail.rr.com (envelope-from <listmail@triad.rr.com>)
	(ecelerity 2.2.3.46 r()) with ESMTP
	id E9/28-17584-C7D80205; Tue, 07 Aug 2012 03:37:32 +0000
Received: by corenix.localnet (Postfix, from userid 1003)
	id ED3AE853DC; Mon,  6 Aug 2012 23:37:31 -0400 (EDT)
X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on corenix.localnet
X-Spam-Level: 
X-Spam-Status: No, score=-1.0 required=4.5 tests=ALL_TRUSTED autolearn=ham
	version=3.3.1
Received: from [192.168.0.129] (a200.localnet [192.168.0.129])
	by corenix.localnet (Postfix) with ESMTP id A7CE084FDB
	for <xen-devel@lists.xen.org>; Mon,  6 Aug 2012 23:37:31 -0400 (EDT)
Date: Mon, 06 Aug 2012 23:37:30 -0400
Message-ID: <w93ksj5lh53kpwlup2mpkda9.1344310650969@email.android.com>
From: Richie <listmail@triad.rr.com>
To: xen-devel@lists.xen.org
MIME-Version: 1.0
Subject: Re: [Xen-devel] wheezy VT-d passthrough test: DMAR:[fault reason
 06h] PTE Read access is not set
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Are there any settings I can try or more information I should provide?

Thanks.
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 05:04:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 05:04:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SybxA-0004Ah-2j; Tue, 07 Aug 2012 05:03:36 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Sybx8-0004Ac-Pv
	for xen-devel@lists.xensource.com; Tue, 07 Aug 2012 05:03:35 +0000
Received: from [85.158.143.35:54715] by server-2.bemta-4.messagelabs.com id
	87/CE-17938-6A1A0205; Tue, 07 Aug 2012 05:03:34 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1344315811!17102492!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgzMTM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29151 invoked from network); 7 Aug 2012 05:03:31 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 05:03:31 -0000
X-IronPort-AV: E=Sophos;i="4.77,724,1336348800"; d="scan'208";a="13875605"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Aug 2012 05:03:30 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 7 Aug 2012 06:03:30 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Sybx4-00036U-IT;
	Tue, 07 Aug 2012 05:03:30 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Sybx4-0004IQ-5W;
	Tue, 07 Aug 2012 06:03:30 +0100
To: xen-devel@lists.xensource.com
Message-ID: <osstest-13567-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 7 Aug 2012 06:03:30 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13567: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13567 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13567/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl          11 guest-localmigrate        fail REGR. vs. 13536
 test-amd64-i386-xl-multivcpu 11 guest-localmigrate        fail REGR. vs. 13536
 test-amd64-i386-xl-credit2   11 guest-localmigrate        fail REGR. vs. 13536
 test-amd64-i386-xl           11 guest-localmigrate        fail REGR. vs. 13536
 test-i386-i386-xl            11 guest-localmigrate        fail REGR. vs. 13536
 test-amd64-amd64-xl-winxpsp3  9 guest-localmigrate        fail REGR. vs. 13536
 test-amd64-i386-xl-win7-amd64  9 guest-localmigrate       fail REGR. vs. 13536
 test-i386-i386-xl-winxpsp3    9 guest-localmigrate        fail REGR. vs. 13536
 test-amd64-amd64-xl-win7-amd64  9 guest-localmigrate      fail REGR. vs. 13536
 test-amd64-i386-xl-winxpsp3-vcpus1  9 guest-localmigrate  fail REGR. vs. 13536
 test-i386-i386-xl-win         9 guest-localmigrate        fail REGR. vs. 13536
 test-amd64-i386-xl-win-vcpus1  9 guest-localmigrate       fail REGR. vs. 13536
 test-amd64-amd64-xl-win       9 guest-localmigrate        fail REGR. vs. 13536

Tests which are failing intermittently (not blocking):
 test-amd64-i386-rhel6hvm-amd  7 redhat-install              fail pass in 13566

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13536
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13536
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13536
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13536
 test-amd64-amd64-xl-sedf     11 guest-localmigrate        fail REGR. vs. 13536

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass

version targeted for testing:
 xen                  353bc0801b11
baseline version:
 xen                  3d17148e465c

------------------------------------------------------------
People who touched revisions under test:
  Christoph Egger <Christoph.Egger@amd.com>
  Dario Faggioli <dario.faggioli@citrix.com>
  David Vrabel <david.vrabel@citrix.com>
  Fabio Fantoni <fabio.fantoni@heliman.it>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Campbell <ian.campbell@eu.citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Olaf Hering <olaf@aepfle.de>
  Roger Pau Monne <roger.pau@citrix.com>
  Santosh Jodh <santosh.jodh@citrix.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-i386-i386-xl                                            fail    
 test-amd64-i386-rhel6hvm-amd                                 fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   fail    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 835 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 05:13:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 05:13:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Syc5w-0004KC-3Q; Tue, 07 Aug 2012 05:12:40 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <shakeel.butt@gmail.com>) id 1Syc5u-0004K7-Eb
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 05:12:38 +0000
Received: from [85.158.139.83:23665] by server-10.bemta-5.messagelabs.com id
	49/27-02190-5C3A0205; Tue, 07 Aug 2012 05:12:37 +0000
X-Env-Sender: shakeel.butt@gmail.com
X-Msg-Ref: server-6.tower-182.messagelabs.com!1344316357!26717519!1
X-Originating-IP: [74.125.83.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13680 invoked from network); 7 Aug 2012 05:12:37 -0000
Received: from mail-ee0-f45.google.com (HELO mail-ee0-f45.google.com)
	(74.125.83.45)
	by server-6.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 05:12:37 -0000
Received: by eeke53 with SMTP id e53so1049795eek.32
	for <xen-devel@lists.xen.org>; Mon, 06 Aug 2012 22:12:37 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=DM2eJ4ucOqPqFri4q6EF+Z1Fs3oEvEaNcQpe1rxy09c=;
	b=pd9q8HG34Ib+VZbVi/5XWEiWl77zKCUAg/odazx/3Gp8PoeW/rDjw+ClxW+MfZSJ/x
	Fpe1aqMYPJKbTsk9kRue6YNyCpKmvSPNoXEjKarzvYgGHow7LzS8P78DJXyGVvUcpiBz
	/xeJAcN0LpI5exKPrjySlv/wpJk0G+Z1CyR2C0oQY8WgJVAUI3PyU3fa79dcPQ24L+eZ
	K++auwDZM/tII2llsw67MvnEHOZgbfnZZx6V/NwZH0knhKxx12oq5W2it16cKs7cL8WM
	ah/a/J1dydWQMCNOipM5HjljVxJ0r/M99bczK3J0JQMVKCuz1yIBSR5TkVxl90h7yyrS
	pfpA==
MIME-Version: 1.0
Received: by 10.14.211.3 with SMTP id v3mr4749589eeo.43.1344316357045; Mon, 06
	Aug 2012 22:12:37 -0700 (PDT)
Received: by 10.14.94.200 with HTTP; Mon, 6 Aug 2012 22:12:36 -0700 (PDT)
In-Reply-To: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
References: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
Date: Tue, 7 Aug 2012 01:12:36 -0400
Message-ID: <CAGj-7pXZ6Z+f0p9NSVZ=_M9i8LF49myNNyyjM6rs_T0AqMDSSA@mail.gmail.com>
From: Shakeel Butt <shakeel.butt@gmail.com>
To: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 00/18] RFC: Merge IS_PRIV checks into XSM
	hooks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I have just two comments:

1. Although the most apparent benefit of this patch series is dom0
disaggregation [VEE'08, SOSP'11], complete XSM hook coverage will also
facilitate the implementation of recently proposed systems like
CloudVisor [SOSP'11] and Self-service Cloud [CCS'12], and can be used
to further explore access control and flexibility in different
scenarios.

2. This patch series realizes the hypervisor part of the dom0
disaggregation idea. I think the next step should be applying similar
ideas to the Xen tools and the Linux kernel. For example, in the Linux
kernel is_initial_domain() is the equivalent of IS_PRIV; what should
the XSM-equivalent solution be there? Other parts which need some
discussion or thought are xenbus, xenstored, privcmd (and others).

Shakeel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 05:42:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 05:42:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SycYh-0004XZ-KP; Tue, 07 Aug 2012 05:42:23 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SycYf-0004XU-N4
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 05:42:21 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1344318134!10149230!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgzMTM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22454 invoked from network); 7 Aug 2012 05:42:15 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 05:42:15 -0000
X-IronPort-AV: E=Sophos;i="4.77,725,1336348800"; d="scan'208";a="13875991"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Aug 2012 05:42:13 +0000
Received: from [127.0.0.1] (10.80.16.67) by smtprelay.citrix.com
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Tue, 7 Aug 2012
	06:42:13 +0100
Message-ID: <1344318133.24794.16.camel@dagon.hellion.org.uk>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Olaf Hering <olaf@aepfle.de>
Date: Tue, 7 Aug 2012 06:42:13 +0100
In-Reply-To: <20120806173905.GA26336@aepfle.de>
References: <20120806173905.GA26336@aepfle.de>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] vif backend configuration times out
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2012-08-06 at 18:39 +0100, Olaf Hering wrote:
> With current xen-unstable 25733:353bc0801b11 the attached hvm.cfg does
> not start anymore with a SLES11SP2 dom0 kernel, but it starts if I run a
> 3.5 pvops dom0 kernel. I have no modifications other than the stubdom -j
> patch.
> 
> The output from this command is attached:
> xl -vvvv create -d -f /root/xenpaging/sles11sp2_full_xenpaging_local.cfg 2>&1 | tee xl-create-`uname -r`.txt &
> 
> Any ideas how to fix this timeout error?

The tools are waiting for the backend to move from state 1
(XenbusStateInitialising) to state 2 (XenbusStateInitWait). A backend
driver typically makes that transition at the end of its probe function
-- what is the SLES11SP2 netback waiting for? Or is it failing to init,
in which case perhaps there is an error node in XS?

> 
> Olaf
> 
> ...
> libxl: debug: libxl_event.c:457:watchfd_callback: watch w=0x62bf88 wpath=/local/domain/0/backend/vif/2/0/state token=3/1: event epath=/local/domain/0/backend/vif/2/0/state
> libxl: debug: libxl_event.c:600:devstate_watch_callback: backend /local/domain/0/backend/vif/2/0/state wanted state 2 still waiting state 1
> libxl: debug: libxl_event.c:614:devstate_timeout: backend /local/domain/0/backend/vif/2/0/state wanted state 2  timed out
> libxl: debug: libxl_event.c:549:libxl__ev_xswatch_deregister: watch w=0x62bf88 wpath=/local/domain/0/backend/vif/2/0/state token=3/1: deregister slotnum=3
> libxl: debug: libxl_event.c:561:libxl__ev_xswatch_deregister: watch w=0x62bf88: deregister unregistered
> libxl: error: libxl_device.c:858:device_backend_callback: unable to disconnect device with path /local/domain/0/backend/vif/2/0
> libxl: error: libxl_create.c:1070:domcreate_attach_pci: unable to add nic devices
> ...



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 06:22:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 06:22:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SydAj-00059X-LC; Tue, 07 Aug 2012 06:21:41 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SydAi-00059S-Hg
	for xen-devel@lists.xensource.com; Tue, 07 Aug 2012 06:21:40 +0000
Received: from [85.158.143.35:46144] by server-1.bemta-4.messagelabs.com id
	D7/B9-24392-3F3B0205; Tue, 07 Aug 2012 06:21:39 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1344320498!16462499!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM4Nzg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1640 invoked from network); 7 Aug 2012 06:21:38 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-3.tower-21.messagelabs.com with SMTP;
	7 Aug 2012 06:21:38 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 07 Aug 2012 07:21:37 +0100
Message-Id: <5020D0100200007800093174@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Tue, 07 Aug 2012 07:21:36 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Jinsong Liu" <jinsong.liu@intel.com>
References: <DE8DF0795D48FD4CA783C40EC82923352C40A1@SHSMSX101.ccr.corp.intel.com>
	<5016A69A0200007800091435@nat28.tlf.novell.com>
	<DE8DF0795D48FD4CA783C40EC82923352D5959@SHSMSX101.ccr.corp.intel.com>
	<501F852E0200007800092C24@nat28.tlf.novell.com>
	<DE8DF0795D48FD4CA783C40EC82923352D616D@SHSMSX101.ccr.corp.intel.com>
In-Reply-To: <DE8DF0795D48FD4CA783C40EC82923352D616D@SHSMSX101.ccr.corp.intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Christoph Egger <Christoph.Egger@amd.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"Keir \(Xen.org\)" <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [Patch 2/6] Xen/MCE: remove mcg_ctl and other
 adjustment for future vMCE
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 06.08.12 at 15:51, "Liu, Jinsong" <jinsong.liu@intel.com> wrote:
>>>> 
>>>> Is there a particular reason to make this access fault here, when
>>>> it didn't before? I.e. was there anything wrong with the previous
>>>> approach of returning zero on reads and ignoring writes when
>>>> !MCG_CTL_P? 
>>>> 
>>> 
>>> Semantically this code is better than previous approach, since
>>> !MCG_CTL_P means unimplemented MCG_CTL so access it would generate
>>> GP#. 
>> 
>> Agreed. But nevertheless I'd like to be a little more conservative
>> here. After all, "knowing" that this won't break Windows or Linux
>> doesn't cover all possible HVM guests (and the quotes are there
>> to indicate that (a) unless you have access to the Windows sources,
>> you can't really know, you may at best have empirical data
>> suggesting so, and (b) it makes you/us dependent on all older
>> Windows/Linux versions you didn't try out/look at behaving
>> correctly here too).
> 
> OK, fine with me to use the previous approach; updated as attached.

Thanks, committed. Now what's your take on pulling the previous
patch #5 ahead (at once addressing the compatibility issue I
pointed out), as that's really the meat of the save/restore forward
compatibility? The other four patches could then be postponed
until after 4.2 goes out, afaict.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 06:25:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 06:25:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SydDg-0005JM-Eh; Tue, 07 Aug 2012 06:24:44 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SydDe-0005JF-NX
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 06:24:42 +0000
Received: from [85.158.143.35:26753] by server-3.bemta-4.messagelabs.com id
	BD/72-01511-AA4B0205; Tue, 07 Aug 2012 06:24:42 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1344320675!17155442!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM4Nzg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11380 invoked from network); 7 Aug 2012 06:24:41 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-15.tower-21.messagelabs.com with SMTP;
	7 Aug 2012 06:24:41 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 07 Aug 2012 07:24:27 +0100
Message-Id: <5020D0B90200007800093181@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Tue, 07 Aug 2012 07:24:25 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Stefano Stabellini" <stefano.stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
	<1344262325-26598-4-git-send-email-stefano.stabellini@eu.citrix.com>
	<5020025C0200007800092FEE@nat28.tlf.novell.com>
	<1344268070.11339.53.camel@zakaz.uk.xensource.com>
	<502005B2020000780009302B@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208061659320.4645@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1208061659320.4645@kaball.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	"Tim Deegan \(3P\)" <Tim.Deegan@citrix.com>
Subject: Re: [Xen-devel] [PATCH 4/5] xen: introduce XEN_GUEST_HANDLE_PARAM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 06.08.12 at 18:02, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
wrote:
> On Mon, 6 Aug 2012, Jan Beulich wrote:
>> >>> On 06.08.12 at 17:47, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>> > On Mon, 2012-08-06 at 16:43 +0100, Jan Beulich wrote:
>> >> >>> On 06.08.12 at 16:12, Stefano Stabellini <stefano.stabellini@eu.citrix.com> 
>> > wrote:
>> >> > Note: this change does not make any difference on x86 and ia64.
>> >> > 
>> >> > 
>> >> > XEN_GUEST_HANDLE_PARAM is going to be used to distinguish guest pointers
>> >> > stored in memory from guest pointers as hypercall parameters.
>> >> 
>> >> I have to admit that I really dislike this, in large part because of
>> >> the follow up patch that clutters the corresponding function
>> >> declarations even further. Plus I see no mechanism to convert
>> >> between the two, yet I can't see how - long term at least - you
>> >> could get away without such conversion.
>> >> 
>> >> Is it really a well thought through and settled upon decision to
>> >> make guest handles 64 bits wide even on 32-bit ARM? After all,
>> >> both x86 and PPC got away without doing so
>> > 
>> > Well, on x86 we have the compat XLAT layer, which is a pretty complex
>> > piece of code, so "got away without" is a bit strong...
>> 
>> Hmm, yes, that's a valid correction.
>> 
>> > We'd really
>> > rather not have to have a non-trivial compat layer on arm too by having
>> > the struct layouts be the same on 32/64.
>> 
>> And paying a penalty like this in the 32-bit half (if what is likely
>> to remain the much bigger portion for the next couple of years
>> can validly be called "half") is worth it? All the more so as the
>> compat layer is now reasonably mature (and should hence be easily
>> re-usable for ARM)?
> 
> What penalty? The only penalty is the wasted space in the structs in
> memory.

No - the caller has to zero-initialize those extra 32 bits, and the
hypervisor has to check for them to be zero (the latter may be
implicit in the 64-bit one, but certainly needs to be explicit on the
32-bit side).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 06:39:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 06:39:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SydRS-0005YY-Pr; Tue, 07 Aug 2012 06:38:58 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SydRR-0005YT-VT
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 06:38:58 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1344321531!12555834!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM4Nzg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17413 invoked from network); 7 Aug 2012 06:38:52 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-13.tower-27.messagelabs.com with SMTP;
	7 Aug 2012 06:38:52 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 07 Aug 2012 07:38:50 +0100
Message-Id: <5020D418020000780009318C@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Tue, 07 Aug 2012 07:38:48 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>,
	"Keir Fraser" <keir.xen@gmail.com>, "Jinsong Liu" <jinsong.liu@intel.com>
References: <1343988573.21372.45.camel@zakaz.uk.xensource.com>
	<CC416661.3A655%keir.xen@gmail.com>
	<501BC799020000780009278B@nat28.tlf.novell.com>
	<DE8DF0795D48FD4CA783C40EC82923352D6748@SHSMSX101.ccr.corp.intel.com>
In-Reply-To: <DE8DF0795D48FD4CA783C40EC82923352D6748@SHSMSX101.ccr.corp.intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.2 TODO / Release Plan
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 06.08.12 at 19:06, "Liu, Jinsong" <jinsong.liu@intel.com> wrote:
> As for patch 5, it cannot be reordered w/o patch 3 checked in (patch 5 is 
> for save/restore of MCi_CTL2, an MSR newly added in patch 3). In fact we could 
> remove patch 5 totally, and not add MCi_CTL2 (this MSR has nothing to do 
> with the vmce logic itself; the only reason we add it in the new vmce is to get 
> a performance benefit (but a very trivial one), so it's OK not to add it and remove 
> patch 5).

But I thought you were pretty keen on getting in this performance
improvement?

> Another benefit of not add MCi_CTL2 is, to avoid difference between 
> Intel and AMD code. Hence I think it's an acceptable approach to keep current 
> vmce (not implement MCi_CTL2). Your opinion?

Current code already handles MCi_CTL2, but incompletely (always
returning zero for reads, and dropping writes). This is valid because
nothing gets announced that should make a guest think it can
access those MSRs in the first place. I do think, however, that
getting this right (and at once getting the guest side polling
disabled) is beneficial, the question just is whether we want to set
the ground for this now, or deal with it after 4.2 went out.

I'm favoring doing it now, and I don't see the strict relationship
to patch #3 - with what we currently implement it would be
sufficient to save zeros and fail non-zero restores (which ought
to fail earlier already anyway, since the implication of the MSRs
here having non-zero values would be for MCG_CAP to have an
unsupported value to be restored too).

Otoh, restoring from saved state that only includes MCG_CAP (but
no MCi_CTL2-s) needs to be handled anyway (forcing MCi_CTL2
to be zero, which would be trivial as that's the startup state, i.e.
the only complication here is the variable size save record), so
pushing this to post-4.2 as well is a reasonable alternative.

Keir, Ian?

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 06:39:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 06:39:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SydRS-0005YY-Pr; Tue, 07 Aug 2012 06:38:58 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SydRR-0005YT-VT
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 06:38:58 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1344321531!12555834!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM4Nzg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17413 invoked from network); 7 Aug 2012 06:38:52 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-13.tower-27.messagelabs.com with SMTP;
	7 Aug 2012 06:38:52 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 07 Aug 2012 07:38:50 +0100
Message-Id: <5020D418020000780009318C@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Tue, 07 Aug 2012 07:38:48 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>,
	"Keir Fraser" <keir.xen@gmail.com>, "Jinsong Liu" <jinsong.liu@intel.com>
References: <1343988573.21372.45.camel@zakaz.uk.xensource.com>
	<CC416661.3A655%keir.xen@gmail.com>
	<501BC799020000780009278B@nat28.tlf.novell.com>
	<DE8DF0795D48FD4CA783C40EC82923352D6748@SHSMSX101.ccr.corp.intel.com>
In-Reply-To: <DE8DF0795D48FD4CA783C40EC82923352D6748@SHSMSX101.ccr.corp.intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.2 TODO / Release Plan
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 06.08.12 at 19:06, "Liu, Jinsong" <jinsong.liu@intel.com> wrote:
> As for patch 5, it cannot be reordered w/o patch 3 checked in (patch 5 is 
> for save/restore MCi_CTL2, a newly added MSR at patch 3). In fact we could 
> remove patch 5 totally, and don't add MCi_CTL2 (this MSR has nothing to do 
> with vmce logic itself; the only reason why we add it in new vmce is to get 
> a performance benefit (but very trivial), so it's OK not to add it and remove 
> patch 5).

But I thought you were pretty keen on getting in this performance
improvement?

> Another benefit of not adding MCi_CTL2 is to avoid differences between 
> Intel and AMD code. Hence I think it's an acceptable approach to keep the current 
> vmce (not implementing MCi_CTL2). Your opinion?

Current code already handles MCi_CTL2, but incompletely (always
returning zero for reads, and dropping writes). This is valid because
nothing gets announced that should make a guest think it can
access those MSRs in the first place. I do think, however, that
getting this right (and at once getting the guest side polling
disabled) is beneficial, the question just is whether we want to set
the ground for this now, or deal with it after 4.2 went out.

I'm favoring doing it now, and I don't see the strict relationship
to patch #3 - with what we currently implement it would be
sufficient to save zeros and fail non-zero restores (which ought
to fail earlier already anyway, since the implication of the MSRs
here having non-zero values would be for MCG_CAP to have an
unsupported value to be restored too).

Otoh, restoring from saved state that only includes MCG_CAP (but
no MCi_CTL2-s) needs to be handled anyway (forcing MCi_CTL2
to be zero, which would be trivial as that's the startup state, i.e.
the only complication here is the variable size save record), so
pushing this to post-4.2 as well is a reasonable alternative.

Keir, Ian?

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 06:49:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 06:49:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SydbS-0005jy-2M; Tue, 07 Aug 2012 06:49:18 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SydbQ-0005jt-4O
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 06:49:16 +0000
Received: from [85.158.143.35:37568] by server-3.bemta-4.messagelabs.com id
	8F/EB-01511-B6AB0205; Tue, 07 Aug 2012 06:49:15 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1344322154!17114383!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM4Nzg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24349 invoked from network); 7 Aug 2012 06:49:14 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-14.tower-21.messagelabs.com with SMTP;
	7 Aug 2012 06:49:14 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 07 Aug 2012 07:49:14 +0100
Message-Id: <5020D6880200007800093198@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Tue, 07 Aug 2012 07:49:12 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Jun Nakajima" <jun.nakajima@intel.com>
References: <bf922651da96e045cd8f.1343503160@kaos-source-31003.sea31.amazon.com>
	<50164C5F0200007800091325@nat28.tlf.novell.com>
	<CAL54oT0HAgS-+sQVmWJt9kMiTGdoBmR5Mj-TqEwBoQzzEiUwDg@mail.gmail.com>
In-Reply-To: <CAL54oT0HAgS-+sQVmWJt9kMiTGdoBmR5Mj-TqEwBoQzzEiUwDg@mail.gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Jinsong Liu <jinsong.liu@intel.com>, Keir Fraser <keir@xen.org>,
	Donald D Dugger <donald.d.dugger@intel.com>,
	Matt Wilson <msw@amazon.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen/x86: Add support for cpuid masking on
 Intel Xeon Processor E5 Family
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 06.08.12 at 22:23, "Nakajima, Jun" <jun.nakajima@intel.com> wrote:
> On Sun, Jul 29, 2012 at 11:57 PM, Jan Beulich <JBeulich@suse.com> wrote:
> 
>> >>> On 28.07.12 at 21:19, Matt Wilson <msw@amazon.com> wrote:
>> > Although the "Intel Virtualization Technology FlexMigration
>> > Application Note" (http://www.intel.com/Assets/PDF/manual/323850.pdf)
>> > does not document support for extended model 2H model DH (Intel Xeon
>> > Processor E5 Family), empirical evidence shows that the same MSR
>> > addresses can be used for cpuid masking as exdended model 2H model AH
>> > (Intel Xen Processor E3-1200 Family).
>>
>> Empirical evidence isn't really enough - let's have someone at Intel
>> confirm this - Jun, Don?
>>
> 
> Thanks for the patch. The patch looks good, and it should be in.
> We'll update the document.

I take this as an ack then, and will commit it that way.

Jan

>> > Signed-off-by: Matt Wilson <msw@amazon.com>
>> >
>> > diff -r e6266fc76d08 -r bf922651da96 xen/arch/x86/cpu/intel.c
>> > --- a/xen/arch/x86/cpu/intel.c        Fri Jul 27 12:22:13 2012 +0200
>> > +++ b/xen/arch/x86/cpu/intel.c        Sat Jul 28 17:27:30 2012 +0000
>> > @@ -104,7 +104,7 @@ static void __devinit set_cpuidmask(cons
>> >                       return;
>> >               extra = "xsave ";
>> >               break;
>> > -     case 0x2a:
>> > +     case 0x2a: case 0x2d:
>> >               wrmsr(MSR_INTEL_CPUID1_FEATURE_MASK_V2,
>> >                     opt_cpuid_mask_ecx,
>> >                     opt_cpuid_mask_edx);
>>
>>
>>
>>
> 
> 
> -- 
> Jun
> Intel Open Source Technology Center




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 06:55:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 06:55:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SydhZ-0005uI-Ta; Tue, 07 Aug 2012 06:55:37 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SydhY-0005uC-8u
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 06:55:36 +0000
Received: from [85.158.139.83:25376] by server-6.bemta-5.messagelabs.com id
	62/5E-27759-7EBB0205; Tue, 07 Aug 2012 06:55:35 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-11.tower-182.messagelabs.com!1344322534!23383211!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM4Nzg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16246 invoked from network); 7 Aug 2012 06:55:34 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-11.tower-182.messagelabs.com with SMTP;
	7 Aug 2012 06:55:34 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 07 Aug 2012 07:55:34 +0100
Message-Id: <5020D80402000078000931A4@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Tue, 07 Aug 2012 07:55:32 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Daniel De Graaf" <dgdegra@tycho.nsa.gov>
References: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
	<1344263550-3941-17-git-send-email-dgdegra@tycho.nsa.gov>
	<501FFE560200007800092FBA@nat28.tlf.novell.com>
	<501FF0EB.1000900@tycho.nsa.gov>
In-Reply-To: <501FF0EB.1000900@tycho.nsa.gov>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 16/18] arch/x86: use XSM hooks for
 get_pg_owner access checks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 06.08.12 at 18:29, Daniel De Graaf <dgdegra@tycho.nsa.gov> wrote:
> On 08/06/2012 11:26 AM, Jan Beulich wrote:
>>>>> On 06.08.12 at 16:32, Daniel De Graaf <dgdegra@tycho.nsa.gov> wrote:
>>> +static XSM_DEFAULT(int, mmuext_op) (struct domain *d, struct domain *f)
>>> +{
>>> +    if ( d != f && !IS_PRIV_FOR(d, f) )
>>> +        return -EPERM;
>> 
>> ... Dom0 is neither privileged for DOM_IO nor for DOM_XEN.
> 
> Actually, it is. IS_PRIV_FOR returns true for any domain when called from an
> IS_PRIV domain.

That's a side effect of the current way of handling this, not
something that is either logical or designed to be that way (it
certainly is bogus even now for DOM_XEN, and with
disaggregation - afaiu your plans - it'll also become bogus for
DOM_IO, where right now one could consider it half-way
correct).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 07:01:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 07:01:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sydmj-00064d-LI; Tue, 07 Aug 2012 07:00:57 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Sydmi-00064N-4n
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 07:00:56 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1344322849!11135791!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM4Nzg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4718 invoked from network); 7 Aug 2012 07:00:49 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-27.messagelabs.com with SMTP;
	7 Aug 2012 07:00:49 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 07 Aug 2012 08:00:48 +0100
Message-Id: <5020D93E02000078000931AE@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Tue, 07 Aug 2012 08:00:46 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Daniel De Graaf" <dgdegra@tycho.nsa.gov>
References: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
	<1344263550-3941-9-git-send-email-dgdegra@tycho.nsa.gov>
	<501FF9D50200007800092F76@nat28.tlf.novell.com>
	<501FE068.1090404@tycho.nsa.gov>
	<502003DD0200007800093011@nat28.tlf.novell.com>
	<501FF2FE.8040603@tycho.nsa.gov>
In-Reply-To: <501FF2FE.8040603@tycho.nsa.gov>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 08/18] xen: Add DOMID_SELF support to
 rcu_lock_domain_by_id
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 06.08.12 at 18:38, Daniel De Graaf <dgdegra@tycho.nsa.gov> wrote:
> On 08/06/2012 11:50 AM, Jan Beulich wrote:
>>>>> On 06.08.12 at 17:19, Daniel De Graaf <dgdegra@tycho.nsa.gov> wrote:
>>> On 08/06/2012 11:07 AM, Jan Beulich wrote:
>>>>>>> On 06.08.12 at 16:32, Daniel De Graaf <dgdegra@tycho.nsa.gov> wrote:
>>>>> Callers that want to prevent use of DOMID_SELF already need to ensure
>>>>> the calling domain does not pass its own domain ID. This removes the
>>>>> need for the caller to manually support DOMID_SELF, which many already
>>>>> do.
>>>>
>>>> I'm not really sure this is correct. At the very least it changes the
>>>> return value of rcu_lock_remote_target_domain_by_id() when
>>>> called with DOMID_SELF (from -ESRCH to -EPERM).
>>>
>>> This series ends up eliminating that function in patch #18, so that
>>> part is taken care of.
>> 
>> But may, in case of problems, then not be fully bi-sectable.
> 
> Do we depend on the exact error return codes in non-error code paths?
> I would think most attempts to bisect would work just fine as the
> function will still be returning an error. I'm not sure ESRCH is
> really the best error to return here anyway: the problem is not that
> a domain with my_own_domid or DOMID_SELF couldn't be found, it's that
> you can't specify that domain for this operation.

I'm not aware of any such dependency, but I also cannot exclude
them (some testsuites tend to check for specific error codes for
example). These adjustments generally claim that they don't
actually change any behavior, yet here they clearly do. Hence
what I'm asking is that, if that behavioral change is unavoidable,
it should be mentioned in the description, so that whoever runs
across this can easily be aware of this not immediately obvious
effect of the patch without having to analyze it.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 07:01:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 07:01:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sydmj-00064d-LI; Tue, 07 Aug 2012 07:00:57 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Sydmi-00064N-4n
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 07:00:56 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1344322849!11135791!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM4Nzg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4718 invoked from network); 7 Aug 2012 07:00:49 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-27.messagelabs.com with SMTP;
	7 Aug 2012 07:00:49 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 07 Aug 2012 08:00:48 +0100
Message-Id: <5020D93E02000078000931AE@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Tue, 07 Aug 2012 08:00:46 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Daniel De Graaf" <dgdegra@tycho.nsa.gov>
References: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
	<1344263550-3941-9-git-send-email-dgdegra@tycho.nsa.gov>
	<501FF9D50200007800092F76@nat28.tlf.novell.com>
	<501FE068.1090404@tycho.nsa.gov>
	<502003DD0200007800093011@nat28.tlf.novell.com>
	<501FF2FE.8040603@tycho.nsa.gov>
In-Reply-To: <501FF2FE.8040603@tycho.nsa.gov>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 08/18] xen: Add DOMID_SELF support to
 rcu_lock_domain_by_id
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 06.08.12 at 18:38, Daniel De Graaf <dgdegra@tycho.nsa.gov> wrote:
> On 08/06/2012 11:50 AM, Jan Beulich wrote:
>>>>> On 06.08.12 at 17:19, Daniel De Graaf <dgdegra@tycho.nsa.gov> wrote:
>>> On 08/06/2012 11:07 AM, Jan Beulich wrote:
>>>>>>> On 06.08.12 at 16:32, Daniel De Graaf <dgdegra@tycho.nsa.gov> wrote:
>>>>> Callers that want to prevent use of DOMID_SELF already need to ensure
>>>>> the calling domain does not pass its own domain ID. This removes the
>>>>> need for the caller to manually support DOMID_SELF, which many already
>>>>> do.
>>>>
>>>> I'm not really sure this is correct. At the very least it changes the
>>>> return value of rcu_lock_remote_target_domain_by_id() when
>>>> called with DOMID_SELF (from -ESRCH to -EPERM).
>>>
>>> This series ends up eliminating that function in patch #18, so that
>>> part is taken care of.
>> 
>> But may, in case of problems, then not be fully bi-sectable.
> 
> Do we depend on the exact error return codes in non-error code paths?
> I would think most attempts to bisect would work just fine as the
> function will still be returning an error. I'm not sure ESRCH is
> really the best error to return here anyway: the problem is not that
> a domain with my_own_domid or DOMID_SELF couldn't be found, it's that
> you can't specify that domain for this operation.

I'm not aware of any such dependency, but I also cannot exclude
them (some test suites tend to check for specific error codes, for
example). These adjustments generally claim that they don't
actually change any behavior, yet here they clearly do. Hence
what I'm asking is that, if that behavioral change is unavoidable,
it should be mentioned in the description, so that whoever runs
across it can easily be aware of this not immediately obvious
effect of the patch without having to analyze it.
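For context, the change under discussion boils down to resolving DOMID_SELF inside the lookup helper itself. Below is a simplified, self-contained sketch (the domain table and helper names are stand-ins, not the actual Xen source; only DOMID_SELF's value matches the public ABI); it also illustrates why the remote-target variant's return value for DOMID_SELF shifts from -ESRCH to -EPERM once the plain lookup accepts it:

```c
#include <assert.h>
#include <stddef.h>

#define DOMID_SELF 0x7FF0  /* reserved pseudo domain ID from Xen's public ABI */
#define ESRCH 3
#define EPERM 1

/* Minimal stand-ins for hypervisor types (illustrative only). */
struct domain { unsigned int domain_id; };

static struct domain dom_table[4] = { {0}, {1}, {2}, {3} };
static struct domain *current_dom = &dom_table[1];  /* pretend "current" is domain 1 */

/* Sketch of a lookup that resolves DOMID_SELF to the caller's own domain,
 * as the patch proposes, instead of failing the table lookup. */
static struct domain *lock_domain_by_id(unsigned int domid)
{
    if (domid == DOMID_SELF)
        domid = current_dom->domain_id;
    for (size_t i = 0; i < 4; i++)
        if (dom_table[i].domain_id == domid)
            return &dom_table[i];
    return NULL;
}

/* The "remote target" variant rejects the caller's own domain. Note that
 * once the plain lookup above succeeds for DOMID_SELF, this function
 * returns -EPERM for it rather than the previous -ESRCH. */
static int lock_remote_target_domain_by_id(unsigned int domid, struct domain **d)
{
    struct domain *dom = lock_domain_by_id(domid);
    if (dom == NULL)
        return -ESRCH;          /* no such domain */
    if (dom == current_dom) {
        *d = NULL;
        return -EPERM;          /* may not target yourself */
    }
    *d = dom;
    return 0;
}
```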

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 07:22:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 07:22:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sye7P-0006Sg-Af; Tue, 07 Aug 2012 07:22:19 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zhenzhong.duan@oracle.com>) id 1Sye7N-0006Sb-Mo
	for xen-devel@lists.xensource.com; Tue, 07 Aug 2012 07:22:17 +0000
Received: from [85.158.143.99:48525] by server-3.bemta-4.messagelabs.com id
	6B/EA-01511-822C0205; Tue, 07 Aug 2012 07:22:16 +0000
X-Env-Sender: zhenzhong.duan@oracle.com
X-Msg-Ref: server-2.tower-216.messagelabs.com!1344324135!25280716!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjc0MTUz\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4973 invoked from network); 7 Aug 2012 07:22:16 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-2.tower-216.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Aug 2012 07:22:16 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q777MDiI012180
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK)
	for <xen-devel@lists.xensource.com>; Tue, 7 Aug 2012 07:22:14 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q777MCWD017793
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO)
	for <xen-devel@lists.xensource.com>; Tue, 7 Aug 2012 07:22:13 GMT
Received: from abhmt119.oracle.com (abhmt119.oracle.com [141.146.116.71])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q777MCls022681
	for <xen-devel@lists.xensource.com>; Tue, 7 Aug 2012 02:22:12 -0500
Received: from [10.191.14.119] (/10.191.14.119)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 07 Aug 2012 00:22:12 -0700
Message-ID: <5020C24A.3060604@oracle.com>
Date: Tue, 07 Aug 2012 15:22:50 +0800
From: "zhenzhong.duan" <zhenzhong.duan@oracle.com>
Organization: oracle
User-Agent: Mozilla/5.0 (Windows NT 5.1;
	rv:12.0) Gecko/20120428 Thunderbird/12.0.1
MIME-Version: 1.0
To: xen-devel@lists.xensource.com,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Feng Jin <joe.jin@oracle.com>
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Subject: [Xen-devel] kernel bootup slow issue on ovm3.1.1
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: zhenzhong.duan@oracle.com
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4630480789698147875=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.
--===============4630480789698147875==
Content-Type: multipart/alternative;
 boundary="------------060807030108080608060303"

This is a multi-part message in MIME format.
--------------060807030108080608060303
Content-Type: text/plain; charset=GB2312
Content-Transfer-Encoding: 7bit

Hi maintainers,

We are seeing a slow UEK2 kernel bootup issue on our OVM product
(OVM 3.0.3 and OVM 3.1.1).

The system is an Exalogic node with 24 hardware threads + 100 GB memory
(2 sockets, 6 cores per socket, 2 HT threads per core).
After booting this node with all cores enabled,
we start a PVHVM guest with 12 vcpus (or 24) + 90 GB + a PCI
passthrough device, and it takes 30+ minutes to boot.
If we remove the passthrough device from vm.cfg, bootup takes about 2 minutes.
If we use less memory (e.g. 10 GB + 24 vcpus), bootup takes about 3 minutes.
So large memory + a passthrough device is the worst case.

If we boot this node with HT disabled in the BIOS, only 12 cores are
available.
OVM on the same node, with the same 12 vcpus + 90 GB config, boots in
1.5 minutes!

After some debugging, we found it is the kernel's MTRR init that causes
this delay:

mtrr_aps_init()
 \-> set_mtrr()
     \-> mtrr_work_handler()

The kernel spins in mtrr_work_handler().

But we don't know what is going on inside the hypervisor, or why large
memory + a passthrough device makes the worst case.
Is this already fixed in Xen upstream?
Any comments are welcome; I'll upload whatever data you need.

thanks
zduan
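One way to see why large memory and many vCPUs multiply here (our speculation, not a confirmed analysis of the hypervisor side): each AP's MTRR MSR write during mtrr_aps_init() traps into the hypervisor, and each trap may trigger work proportional to the guest's page count (e.g. revisiting memory-type mappings), so total work grows roughly like vcpus x MSR writes x guest pages. A toy cost model:

```c
#include <assert.h>

/* Toy cost model (an assumption for illustration, not measured data):
 * every MTRR MSR write performed by an AP traps to the hypervisor, and
 * each trap may cost work proportional to the guest's page count. */
static unsigned long long mtrr_init_cost(unsigned int vcpus,
                                         unsigned int msr_writes_per_cpu,
                                         unsigned long long guest_pages)
{
    return (unsigned long long)vcpus * msr_writes_per_cpu * guest_pages;
}
```

Under such a model, halving the vCPU count (HT off) and shrinking guest memory each cut the cost linearly, which at least qualitatively matches the 30+ minute vs 1.5-3 minute observations above.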

--------------060807030108080608060303--


--===============4630480789698147875==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4630480789698147875==--


From xen-devel-bounces@lists.xen.org Tue Aug 07 07:35:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 07:35:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyeJh-0006eR-Kp; Tue, 07 Aug 2012 07:35:01 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SyeJg-0006eM-LQ
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 07:35:00 +0000
Received: from [85.158.138.51:49868] by server-2.bemta-3.messagelabs.com id
	62/0A-29239-325C0205; Tue, 07 Aug 2012 07:34:59 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-15.tower-174.messagelabs.com!1344324898!28997726!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM4Nzg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6773 invoked from network); 7 Aug 2012 07:34:59 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-15.tower-174.messagelabs.com with SMTP;
	7 Aug 2012 07:34:59 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 07 Aug 2012 08:34:58 +0100
Message-Id: <5020E13F02000078000931DA@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Tue, 07 Aug 2012 08:34:55 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "George Dunlap" <George.Dunlap@eu.citrix.com>, "Tim Deegan" <tim@xen.org>
References: <501173540200007800090A04@nat28.tlf.novell.com>
	<50116CD9.6000503@eu.citrix.com>
	<501FE96E0200007800092E25@nat28.tlf.novell.com>
	<501FED030200007800092E35@nat28.tlf.novell.com>
	<CAFLBxZZ5ifar7ou3NaeKne3b0rrw=wkjN6E1bEZ8+M=2w2bz2A@mail.gmail.com>
In-Reply-To: <CAFLBxZZ5ifar7ou3NaeKne3b0rrw=wkjN6E1bEZ8+M=2w2bz2A@mail.gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] PoD code killing domain before it really gets
 started
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 06.08.12 at 18:03, George Dunlap <George.Dunlap@eu.citrix.com> wrote:
> I guess there are two problems with that:
> * As you've seen, apparently dom0 may access these pages before any
> faults happen.
> * If it happens that reclaim_single is below the only zeroed page, the
> guest will crash even when there is reclaim-able memory available.
> 
> A few ways we could fix this:
> 1. Remove dom0 accesses (what on earth could be looking at a
> not-yet-created VM?)

I'm told it's a monitoring daemon, and yes, they are intending to
adjust it to first query the GFN's type (and not do the access
when it's not populated yet). But wait, I didn't check the code
when I recommended this - XEN_DOMCTL_getpageframeinfo{2,3}
also calls get_page_from_gfn() with P2M_ALLOC, so would also
trigger the PoD code (in -unstable at least) - Tim, was that really
a correct adjustment in 25355:974ad81bb68b? It looks to be a
1:1 translation, but is that really necessary? If one wanted to
find out whether a page is PoD to avoid getting it populated,
how would that be done from outside the hypervisor? Would
we need XEN_DOMCTL_getpageframeinfo4 for this?
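The query-before-access pattern the daemon is being adjusted to follow might look like the sketch below. Here `query_gfn_is_populated()` is a hypothetical placeholder for whatever interface ends up reporting population state without itself populating the page (exactly the open question above), and the population table is fake data:

```c
#include <assert.h>
#include <stdbool.h>

/* Fake population state for 8 guest frames (illustration only). */
static bool gfn_populated[8] = { true, false, true, false,
                                 true, false, true, false };

/* Hypothetical stand-in for an interface that reports whether a GFN is
 * populated WITHOUT triggering PoD allocation as a side effect. */
static bool query_gfn_is_populated(unsigned long gfn)
{
    return gfn < 8 && gfn_populated[gfn];
}

/* Query-first scan: skip unpopulated GFNs instead of mapping them, which
 * would populate them and drain the guest's PoD cache. An unpopulated
 * GFN can be treated as all-zero from the guest's point of view. */
static unsigned long scan_guest_pages(unsigned long nr_gfns)
{
    unsigned long touched = 0;
    for (unsigned long gfn = 0; gfn < nr_gfns; gfn++) {
        if (!query_gfn_is_populated(gfn))
            continue;           /* report as zero page, do not map */
        touched++;              /* a real daemon would map and inspect here */
    }
    return touched;
}
```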

> 2. Allocate the PoD cache before populating the p2m table
> 3. Make it so that some accesses fail w/o crashing the guest?  I don't
> see how that's really practical.

What's wrong with telling the control tools that a certain page is
unpopulated (from which they will be able to infer that it's all
clear from the guest's pov)? Even outside of the current problem,
I would think that's more efficient than allocating the page. Of
course, the control tools need to be able to cope with that. And
it may also be necessary to distinguish between read and
read/write mappings being established (and for r/w ones the
option of populating at access time rather than at creation time
would need to be explored).

> 4. Change the sweep routine so that the lower 2MiB gets swept
> 
> #2 would require us to use all PoD entries when building the p2m
> table, thus addressing the mail you mentioned from 25 July*.  Given
> that you don't want #1, it seems like #2 is the best option.
> 
> No matter what we do, the sweep routine for 4.2 should be re-written
> to search all of memory at least once (maybe with a timeout for
> watchdogs), since it's only called in an actual emergency.
> 
> Let me take a look...

Thanks!

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 07:51:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 07:51:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyeZ9-0006on-AG; Tue, 07 Aug 2012 07:50:59 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1SyeZ7-0006od-NV
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 07:50:57 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1344325848!1752569!1
X-Originating-IP: [74.125.83.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30723 invoked from network); 7 Aug 2012 07:50:48 -0000
Received: from mail-ee0-f45.google.com (HELO mail-ee0-f45.google.com)
	(74.125.83.45)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 07:50:48 -0000
Received: by eeke53 with SMTP id e53so1083469eek.32
	for <xen-devel@lists.xen.org>; Tue, 07 Aug 2012 00:50:48 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:cc:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=/Pv4e7Xbiq70OgMT+X9004q6YXhBYxLR0Vqrpr5UF7c=;
	b=w8tU0axMDNn5yDuFlqdTiOVk1F87Lcz/PA4BhdvVxPbzf+8Ktproce9l1gMUtSDuEr
	qBpAleEAcJGoTrHAxhsOWd+YwPTQyYOeUrK6kGmqCVBSD48OXZffFH9UvGVcR7zQwn0I
	1FwC3ZgpZ6oJ5PhvZbHci/3rKuBBRCaLjR/4rZhfStt+25cFo6funw8MyZWhW4lSwCQ+
	JkRmhSNU5OEDgkLL24lVJ66gyD/RCjPTTqrdIybSsPDGzsK9xwBQsoZGD9HPO+90BUFX
	hbBP4zETloPBdwX4drkYPAfBhDBPNAHIgYE/IifHjCa6EYtSZQoi4tbycmb4+5q2Ole+
	d6/A==
Received: by 10.14.198.200 with SMTP id v48mr16764457een.3.1344325848063;
	Tue, 07 Aug 2012 00:50:48 -0700 (PDT)
Received: from [192.168.1.68] (host86-178-46-238.range86-178.btcentralplus.com.
	[86.178.46.238])
	by mx.google.com with ESMTPS id k41sm53711767eep.13.2012.08.07.00.50.46
	(version=SSLv3 cipher=OTHER); Tue, 07 Aug 2012 00:50:47 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.32.0.111121
Date: Tue, 07 Aug 2012 08:50:42 +0100
From: Keir Fraser <keir.xen@gmail.com>
To: Jan Beulich <JBeulich@suse.com>, Ian Campbell <Ian.Campbell@citrix.com>,
	Jinsong Liu <jinsong.liu@intel.com>
Message-ID: <CC468762.3AE90%keir.xen@gmail.com>
Thread-Topic: [Xen-devel] Xen 4.2 TODO / Release Plan
Thread-Index: Ac10cVf9uOr3luEyl0qIrkdlB1F2zA==
In-Reply-To: <5020D418020000780009318C@nat28.tlf.novell.com>
Mime-version: 1.0
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.2 TODO / Release Plan
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 07/08/2012 07:38, "Jan Beulich" <JBeulich@suse.com> wrote:

> Otoh, restoring from saved state that only includes MCG_CAP (but
> no MCi_CTL2-s) needs to be handled anyway (forcing MCi_CTL2
> to be zero, which would be trivial as that's the startup state, i.e.
> the only complication here is the variable size save record), so
> pushing this to post-4.2 as well is a reasonable alternative.
> 
> Keir, Ian?

I think we should leave it and handle the variable-sized save record in 4.3.
Using hvm_load_entry_zeroextend() to read in save records, with zero padding
for older shorter records, should be straightforward enough.
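A minimal sketch of the zero-extension idea (an illustrative stand-in for Xen's hvm_load_entry_zeroextend(), not the actual macro, and the record layout is invented): copy however many bytes the saved record actually carries and leave the tail of the in-memory structure zeroed, so fields added in later versions read as their startup value:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Invented example record: an older producer saved only mcg_cap, a newer
 * one also saves the mci_ctl2 array. */
struct vmce_save_rec {
    unsigned long long mcg_cap;
    unsigned long long mci_ctl2[4];  /* newer field, absent in old records */
};

/* Copy up to rec_len bytes from the saved image into dst; any fields the
 * older record didn't carry stay zero (their architectural startup
 * value). Records longer than we understand are rejected. */
static int load_entry_zeroextend(struct vmce_save_rec *dst,
                                 const void *src, size_t rec_len)
{
    if (rec_len > sizeof(*dst))
        return -1;              /* record from a newer producer: reject */
    memset(dst, 0, sizeof(*dst));
    memcpy(dst, src, rec_len);
    return 0;
}
```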

 -- Keir



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> pushing this to post-4.2 as well is a reasonable alternative.
> 
> Keir, Ian?

I think we should leave it and handle the variable-sized save record in 4.3.
Using hvm_load_entry_zeroextend() to read in save records, with zero padding
for older shorter records, should be straightforward enough.

 -- Keir



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 08:05:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 08:05:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Syen0-0007VT-2w; Tue, 07 Aug 2012 08:05:18 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Syemy-0007VE-K9
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 08:05:16 +0000
Received: from [85.158.139.83:26837] by server-10.bemta-5.messagelabs.com id
	40/4E-24472-B3CC0205; Tue, 07 Aug 2012 08:05:15 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-15.tower-182.messagelabs.com!1344326711!30535825!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM4Nzg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19378 invoked from network); 7 Aug 2012 08:05:11 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-15.tower-182.messagelabs.com with SMTP;
	7 Aug 2012 08:05:11 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 07 Aug 2012 09:05:10 +0100
Message-Id: <5020E85202000078000931F6@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Tue, 07 Aug 2012 09:05:06 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>,
	"Keir Fraser" <keir.xen@gmail.com>
References: <5020D418020000780009318C@nat28.tlf.novell.com>
	<CC468762.3AE90%keir.xen@gmail.com>
In-Reply-To: <CC468762.3AE90%keir.xen@gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Jinsong Liu <jinsong.liu@intel.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.2 TODO / Release Plan
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 07.08.12 at 09:50, Keir Fraser <keir.xen@gmail.com> wrote:
> On 07/08/2012 07:38, "Jan Beulich" <JBeulich@suse.com> wrote:
> 
>> Otoh, restoring from saved state that only includes MCG_CAP (but
>> no MCi_CTL2-s) needs to be handled anyway (forcing MCi_CTL2
>> to be zero, which would be trivial as that's the startup state, i.e.
>> the only complication here is the variable size save record), so
>> pushing this to post-4.2 as well is a reasonable alternative.
>> 
>> Keir, Ian?
> 
> I think we should leave it and handle the variable-sized save record in 4.3.
> Using hvm_load_entry_zeroextend() to read in save records, with zero padding
> for older shorter records, should be straightforward enough.

Okay. So Ian, you could then take the corresponding item
off the list. Or do you do that only once patches make it
through the regression tester?

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 08:07:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 08:07:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyepH-0007iY-8m; Tue, 07 Aug 2012 08:07:39 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <peter.maloney@brockmann-consult.de>)
	id 1SyepF-0007iI-Rc
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 08:07:38 +0000
Received: from [85.158.143.35:12507] by server-3.bemta-4.messagelabs.com id
	36/CA-01511-9CCC0205; Tue, 07 Aug 2012 08:07:37 +0000
X-Env-Sender: peter.maloney@brockmann-consult.de
X-Msg-Ref: server-5.tower-21.messagelabs.com!1344324320!4562461!1
X-Originating-IP: [212.227.17.9]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjEyLjIyNy4xNy45ID0+IDc4MzIz\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11661 invoked from network); 7 Aug 2012 07:25:21 -0000
Received: from moutng.kundenserver.de (HELO moutng.kundenserver.de)
	(212.227.17.9) by server-5.tower-21.messagelabs.com with SMTP;
	7 Aug 2012 07:25:21 -0000
Received: from [10.3.0.26] ([141.4.215.32])
	by mrelayeu.kundenserver.de (node=mreu0) with ESMTP (Nemesis)
	id 0LkUkZ-1TajsJ1JLn-00cO1L; Tue, 07 Aug 2012 09:25:20 +0200
Message-ID: <5020C2DE.304@brockmann-consult.de>
Date: Tue, 07 Aug 2012 09:25:18 +0200
From: Peter Maloney <peter.maloney@brockmann-consult.de>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:12.0) Gecko/20120421 Thunderbird/12.0
MIME-Version: 1.0
To: xen-devel@lists.xen.org
References: <501F98A1.4070806@brockmann-consult.de>
	<501FB9060200007800092D4E@nat28.tlf.novell.com>
In-Reply-To: <501FB9060200007800092D4E@nat28.tlf.novell.com>
X-Enigmail-Version: 1.4.2
X-Provags-ID: V02:K0:MWsxNwQjkXt+S6PH1yQxTkFDBQ075KTc2A/KGWOXxCI
	qsxpjiAqwC7oGHMPYwXoo9HgHna95Ue8OT85nMofLAY8JM8Kfl
	XFvmwTmKIqh0qKbZGwVEkcNkWRcncS4FEfUOBXaG0z0OzX9u68
	pQb+afthpNWyLMHN1A5M2nUB5rOyW/g53ud5fd1QmU96ACq+Yx
	7gsHMAY3KNG2/k52bJ7egi9jsXPvXp/qJe1BUsna8IOd39p2Rp
	ylXjPBNnDdnQq24oo4t7KR04Q/t6iFkWsBqGJGaFhA3PGqy4WK
	y7VpigTVCk4JBx5VGjkTjEIgRc/fa8/nTRKYh002Nt1myXsESN
	kwq2exVAIfV4oj2lWFrFZHJbjBQHo+QU0PE0p85/R
Subject: Re: [Xen-devel] 4.1.2 very slow without upstream patches,
 but fast with them, also 4.2 very slow
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> That still won't tell us which patches you did apply.
I tested with no patches applied, and the result was slow. Then I applied all the patches, and it was fast. I didn't try to figure out which one made the difference.


So I guess I'll try:
- the latest unstable 4.2
- the 4.1.3-rc (which includes the patch Malcolm suggested)
- and my rpm source with half the patches, then 3/4 of them, etc., binary-search
style, to see which patch(es) changed the performance. This means I won't
be able to narrow it down to a single patch, only to the point in the long
list where the most dramatic change happens, possibly depending on many
previous patches.

Thanks so far, guys.


On 08/06/2012 12:31 PM, Jan Beulich wrote:
>>>> On 06.08.12 at 12:12, Peter Maloney <peter.maloney@brockmann-consult.de> wrote:
>> my AMD FX-8150 system with vanilla source code is super slow, both the
>> dom0 and domUs. However, after I merge the upstream patches I found in
>> the openSUSE rpm, it runs normally.
> I'd be very surprised if you really just took the upstream patches,
> and the result was better than 4.2-rc1. After all, what upstream
> means is that they were taken from -unstable.
>
>> I tried 4.2-unstable and it was the same. There was no rc1 when I tested
>> it about 1.5 weeks ago. And 4.2 has the same horrible performance, and
>> obviously those patches won't work any more since the 4.2 code looks
>> completely reorganized, so I'm stuck with 4.1.2
> Obviously the upstream patches can't be applied to something
> that already has all those changes. Other patches, of which we
> unfortunately have quite a few, would be a different story.
>
>> Here is the rpm I was using at the time:
>> http://download.opensuse.org/update/12.1/src/xen-4.1.2_16-1.7.1.src.rpm 
>>
>> To see the list of the patches and what order to apply them, see the
>> spec file.
> That still won't tell us which patches you did apply.
>
>> Please make sure this performance issue is fixed for the 4.2 release.
>> And I would be happy to test whatever files you send me.
> The sort of report you're doing isn't that helpful. What would
> help is if you could narrow down which patch(es) it is that
> make things so much better. Giving 4.1.3-rc a try might also
> be worthwhile, albeit I would hope we don't have a regression
> in 4.2.0-rc compared to 4.1.3-rc...
>
> Jan
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


-- 

--------------------------------------------
Peter Maloney
Brockmann Consult
Max-Planck-Str. 2
21502 Geesthacht
Germany
Tel: +49 4152 889 300
Fax: +49 4152 889 333
E-mail: peter.maloney@brockmann-consult.de
Internet: http://www.brockmann-consult.de
--------------------------------------------


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 08:11:24 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 08:11:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Syesf-00088C-4V; Tue, 07 Aug 2012 08:11:09 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Syesd-00087b-1K
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 08:11:07 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1344327040!9494505!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgzMTM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29783 invoked from network); 7 Aug 2012 08:10:41 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 08:10:41 -0000
X-IronPort-AV: E=Sophos;i="4.77,725,1336348800"; d="scan'208";a="13877934"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Aug 2012 08:10:40 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Tue, 7 Aug 2012
	09:10:40 +0100
Message-ID: <1344327038.11339.58.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Tue, 7 Aug 2012 09:10:38 +0100
In-Reply-To: <5020E85202000078000931F6@nat28.tlf.novell.com>
References: <5020D418020000780009318C@nat28.tlf.novell.com>
	<CC468762.3AE90%keir.xen@gmail.com>
	<5020E85202000078000931F6@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Jinsong Liu <jinsong.liu@intel.com>, Keir Fraser <keir.xen@gmail.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.2 TODO / Release Plan
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-08-07 at 09:05 +0100, Jan Beulich wrote:
> >>> On 07.08.12 at 09:50, Keir Fraser <keir.xen@gmail.com> wrote:
> > On 07/08/2012 07:38, "Jan Beulich" <JBeulich@suse.com> wrote:
> > 
> >> Otoh, restoring from saved state that only includes MCG_CAP (but
> >> no MCi_CTL2-s) needs to be handled anyway (forcing MCi_CTL2
> >> to be zero, which would be trivial as that's the startup state, i.e.
> >> the only complication here is the variable size save record), so
> >> pushing this to post-4.2 as well is a reasonable alternative.
> >> 
> >> Keir, Ian?
> > 
> > I think we should leave it and handle the variable-sized save record in 4.3.
> > Using hvm_load_entry_zeroextend() to read in save records, with zero padding
> > for older shorter records, should be straightforward enough.
> 
> Okay. So Ian, you could then take the corresponding item
> off the list. Or do you do that only once patches make it
> through the regression tester?

I'll mark it as done for 4.2.

> 
> Jan
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 08:14:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 08:14:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Syevx-0008ME-OU; Tue, 07 Aug 2012 08:14:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <aware.why@gmail.com>) id 1Syevv-0008M4-UP
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 08:14:32 +0000
Received: from [85.158.139.83:56373] by server-10.bemta-5.messagelabs.com id
	7F/0A-24472-76EC0205; Tue, 07 Aug 2012 08:14:31 +0000
X-Env-Sender: aware.why@gmail.com
X-Msg-Ref: server-15.tower-182.messagelabs.com!1344327270!30538406!1
X-Originating-IP: [74.125.83.45]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8175 invoked from network); 7 Aug 2012 08:14:30 -0000
Received: from mail-ee0-f45.google.com (HELO mail-ee0-f45.google.com)
	(74.125.83.45)
	by server-15.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 08:14:30 -0000
Received: by eeke53 with SMTP id e53so1091714eek.32
	for <xen-devel@lists.xen.org>; Tue, 07 Aug 2012 01:14:30 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=vpvjkmRQqMnk5QLH6YSpMfhrAXawGXXEwJ7sYnJKbX4=;
	b=hfawDqa1o1qkYBuXqt9GkYY7nWhjjFRNILceTIKRz5WYv1TYAiePInw/O7Q6NlG6Zu
	7+ix4UqVimJy/FeVUe3Fqsgl+6rCxO3wS/jTi7QoPDk15pyHyaJr1LHZ2GFro5A2cDsd
	2Dw2tydRnwLudzbaKKbXZU0aqmlTdiEnQum1sEcGF4nYCOsEdXJvF+PG/lBtxMnC/lmD
	POCHFujvP8LS1xdVn5VbNZaRjuhhvvDWTX1sDjOAx8J+cmCmXyKWBQsvGf9PSCFzN1Co
	VsTx6iukNX8/IgwM7KT9Z+JXkaM1yxGpnIW3qlX0yR364++4fcKWWPHUJrqXBLUigrtR
	nEjA==
MIME-Version: 1.0
Received: by 10.14.223.9 with SMTP id u9mr16612350eep.10.1344327270348; Tue,
	07 Aug 2012 01:14:30 -0700 (PDT)
Received: by 10.14.218.200 with HTTP; Tue, 7 Aug 2012 01:14:30 -0700 (PDT)
In-Reply-To: <1343901358.27221.110.camel@zakaz.uk.xensource.com>
References: <CA+ePHTBWQFGFFb=nrLjGf4xLBqMuiQKmc0OYyJZpBjLX4F9rYQ@mail.gmail.com>
	<1343901358.27221.110.camel@zakaz.uk.xensource.com>
Date: Tue, 7 Aug 2012 16:14:30 +0800
Message-ID: <CA+ePHTBSozOAZ6Jdf8JZ1XxrGbUT+iCSK2TLEYrtA9kFMx5otg@mail.gmail.com>
From: =?UTF-8?B?6ams56OK?= <aware.why@gmail.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [problem in `xl_cmdimpl.c`] Why pid return by
 fork() in parent process is not the same with pid returned by getpid()?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6081839433306982109=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============6081839433306982109==
Content-Type: multipart/alternative; boundary=047d7b670a9d69391e04c6a89509

--047d7b670a9d69391e04c6a89509
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On Thu, Aug 2, 2012 at 5:55 PM, Ian Campbell <Ian.Campbell@citrix.com> wrote:

> On Thu, 2012-08-02 at 10:47 +0100, 马磊 wrote:
>
> >     In xen-4.1.2/tools/libxl/xl_cmdimpl.c : create_domain(), it starts
>
> If you are doing development then target xen-unstable would be
> preferable, especially if you are looking at (lib)xl. The xl stuff in
> 4.1 is effectively a preview and it has changed substantially internally
> since 4.1.
>
> > After entering `xl create xp-101.hvm`, you get `Daemon running with PID
> > 26622` on the command line; but the xl log file at
> > /var/log/xen/xl-xp-101.log contains the line `Waiting for domain
> > xp-101 (domid 1) to die [pid 26624]`.
> > Why 26622 != 26624?  And 26622 cannot be found with `ps -ef`.
> > What happened to that pid?!
>
> There's a call to dameon in there, which will involve a double fork
> IIRC.
>
> Ian.
>
I only see `libxl_fork` in create_domain(), which invokes fork() once;
where is the second fork()?
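[Editorial note] The mismatch Ian describes can be reproduced outside xl. In a classic daemonize sequence (fork, setsid, fork again), the original parent only ever learns the first fork()'s return value, which names an intermediate process that exits immediately; the surviving grandchild's getpid() is different. A hedged, stand-alone sketch assuming only POSIX fork/setsid (`double_fork_pids` is an illustrative helper, not anything in libxl):

```c
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/*
 * Minimal sketch (NOT the xl code) of a daemon(3)-style double fork.
 * pids[0] receives the PID returned by the first fork() -- the number
 * the original parent can print -- and pids[1] receives getpid() as
 * seen by the process that survives, i.e. the PID the daemon itself
 * would log.  Returns 0 on success, -1 on error.
 */
static int double_fork_pids(pid_t pids[2])
{
    int fds[2];

    if (pipe(fds) < 0)
        return -1;

    pid_t first = fork();
    if (first < 0)
        return -1;

    if (first == 0) {
        /* First child: start a new session, then fork again. */
        setsid();
        if (fork() != 0)
            _exit(0);          /* first child exits straight away */
        /* Grandchild (the "daemon"): report its own PID and quit. */
        pid_t me = getpid();
        write(fds[1], &me, sizeof(me));
        _exit(0);
    }

    /* Original parent: reap the short-lived first child, then read
     * the grandchild's PID from the pipe. */
    close(fds[1]);
    waitpid(first, NULL, 0);
    pids[0] = first;
    if (read(fds[0], &pids[1], sizeof(pids[1])) != (ssize_t)sizeof(pids[1]))
        return -1;
    close(fds[0]);
    return 0;
}
```

Here pids[0] plays the role of the "26622" xl prints and pids[1] the "26624" in the log; because the intermediate child exits at once, pids[0] never shows up in `ps -ef`.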

--047d7b670a9d69391e04c6a89509--


--===============6081839433306982109==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6081839433306982109==--


From xen-devel-bounces@lists.xen.org Tue Aug 07 08:18:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 08:18:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyezA-00006d-GZ; Tue, 07 Aug 2012 08:17:52 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Syez9-00006T-Nc
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 08:17:51 +0000
Received: from [85.158.138.51:36265] by server-2.bemta-3.messagelabs.com id
	19/62-29239-E2FC0205; Tue, 07 Aug 2012 08:17:50 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-5.tower-174.messagelabs.com!1344327470!30846865!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM4Nzg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27354 invoked from network); 7 Aug 2012 08:17:50 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-5.tower-174.messagelabs.com with SMTP;
	7 Aug 2012 08:17:50 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 07 Aug 2012 09:17:49 +0100
Message-Id: <5020EB4D020000780009320E@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Tue, 07 Aug 2012 09:17:48 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "George Dunlap" <George.Dunlap@eu.citrix.com>
References: <1344243491.11339.11.camel@zakaz.uk.xensource.com>
	<CAFLBxZbgskkGEPDdFr-2tZ0JCzXvu=PeRLidc4_tRx+9T7TtVw@mail.gmail.com>
In-Reply-To: <CAFLBxZbgskkGEPDdFr-2tZ0JCzXvu=PeRLidc4_tRx+9T7TtVw@mail.gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] 4.2 TODO / Release Plan
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 06.08.12 at 18:17, George Dunlap <George.Dunlap@eu.citrix.com> wrote:
> On Mon, Aug 6, 2012 at 9:58 AM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>> hypervisor, nice to have:
> 
> * [BUG(?)] Under certain conditions, the p2m_pod_sweep code will stop
> halfway through searching, causing a guest to crash even if there was
> zeroed memory available.  This is NOT a regression from 4.1, and is a
> very rare case, so probably shouldn't be a blocker.  (In fact, I'd be
> open to the idea that it should wait until after the release to get
> more testing.)
> 
> I can take this one.
> 
>> tools, nice to have:
> 
> * [BUG(?)] If domain 0 attempts to access a guest's memory before it
> is finished being built, and it is being built in PoD mode, this may
> cause the guest to crash.  Again, this is NOT a regression from 4.1.
> Furthermore, it's only been reported (AIUI) by a customer of OpenSuSE;

It's a SLES customer really, and hence is quite a bit more important
to us than if it was an openSUSE one. But we'd have to backport
an eventual fix anyway (as this was reported against 4.0.x), so
as long as a fix becomes available we'd be fine with backporting it
no matter whether it makes it into 4.2 (though of course, given
that all versions of the PoD code are affected, getting this fixed
in 4.0.4 and 4.1.3 would seem desirable).

> so it shouldn't be a blocker.  (Again, I'd be open to the idea that it
> should wait until after the release to get more testing.)
> 
> Jan, since you've got access to the system that reproduces it, do you
> want to take this one?  I think it should just be a matter of moving
> xc_domain_set_target() just before the while() loop in the domain
> builder (but after xc_domain_populate_physmap_exact, I think), and
> changing the loop to never allocate real memory in PoD mode.  I can do
> it, but it will be longer before we can get it tested.

If that's really expected to work, then yes, I can have them test
such a change (as indicated, on 4.0.x) and post the patch once
validated. But it wasn't really clear to me whether the non-PoD
allocation for at least the low 2Mb wasn't on purpose (as that's
where the BIOS image and hvmloader will end up sitting).
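[Editorial note] The change George outlines, as rough pseudocode (the two libxc calls are real; everything else is a paraphrase, not the actual 4.2 domain-builder code):

```
/* pseudocode sketch, not the real domain builder */

/* populate the low region with real pages first
   (Jan's concern: the BIOS image and hvmloader sit down there) */
xc_domain_populate_physmap_exact(xch, domid, ...);

/* George's suggestion: move this call up, to just before the loop,
   so the PoD target is set before dom0 can touch guest memory */
xc_domain_set_target(xch, domid, target);

while (pages remain to be populated)
    /* ...and change the loop to never allocate real memory
       when building in PoD mode */
```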

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 08:22:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 08:22:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Syf3V-0000Ml-8F; Tue, 07 Aug 2012 08:22:21 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Syf3T-0000MO-VD
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 08:22:20 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1344327733!4245033!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgzMTM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21468 invoked from network); 7 Aug 2012 08:22:14 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 08:22:14 -0000
X-IronPort-AV: E=Sophos;i="4.77,725,1336348800"; d="scan'208";a="13878202"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Aug 2012 08:22:13 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Tue, 7 Aug 2012
	09:22:13 +0100
Message-ID: <1344327732.11339.64.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: =?UTF-8?Q?=E9=A9=AC=E7=A3=8A?= <aware.why@gmail.com>
Date: Tue, 7 Aug 2012 09:22:12 +0100
In-Reply-To: <CA+ePHTBSozOAZ6Jdf8JZ1XxrGbUT+iCSK2TLEYrtA9kFMx5otg@mail.gmail.com>
References: <CA+ePHTBWQFGFFb=nrLjGf4xLBqMuiQKmc0OYyJZpBjLX4F9rYQ@mail.gmail.com>
	<1343901358.27221.110.camel@zakaz.uk.xensource.com>
	<CA+ePHTBSozOAZ6Jdf8JZ1XxrGbUT+iCSK2TLEYrtA9kFMx5otg@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [problem in `xl_cmdimpl.c`] Why pid return by
 fork() in parent process is not the same with pid returned by getpid()?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-08-07 at 09:14 +0100, 马磊 wrote:

>         There's a call to dameon in there, which will involve a double
>         fork
>         IIRC.
>
>         Ian.
>
> I only see `libxl_fork` in create_domain(), which invokes fork()
> once; where is the second fork()?

As I said (but misspelt, sorry), there's a call to daemon() there. Have
you read the manpage for that function?

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 08:23:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 08:23:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Syf4F-0000Qb-M4; Tue, 07 Aug 2012 08:23:07 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Syf4D-0000QJ-LS
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 08:23:05 +0000
Received: from [85.158.139.83:3216] by server-5.bemta-5.messagelabs.com id
	44/94-03096-860D0205; Tue, 07 Aug 2012 08:23:04 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-182.messagelabs.com!1344327784!30047529!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgzMTM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3143 invoked from network); 7 Aug 2012 08:23:04 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-13.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 08:23:04 -0000
X-IronPort-AV: E=Sophos;i="4.77,725,1336348800"; d="scan'208";a="13878212"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Aug 2012 08:23:03 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Tue, 7 Aug 2012
	09:23:03 +0100
Message-ID: <1344327782.11339.65.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Matt Wilson <msw@amazon.com>
Date: Tue, 7 Aug 2012 09:23:02 +0100
In-Reply-To: <20120807032246.GA4324@US-SEA-R8XVZTX>
References: <ef1271aef866effe07aa.1343676828@kaos-source-31003.sea31.amazon.com>
	<1343723343.15432.63.camel@zakaz.uk.xensource.com>
	<20120731153459.GD8228@US-SEA-R8XVZTX>
	<20120807032246.GA4324@US-SEA-R8XVZTX>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH DOCDAY] use lynx to produce better formatted
 text documentation from markdown
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-08-07 at 04:22 +0100, Matt Wilson wrote:
> On Tue, Jul 31, 2012 at 08:34:59AM -0700, Matt Wilson wrote:
> > > Why wouldn't you just run lynx on the generated .html instead of less on
> > > the generated .txt if you wanted something a bit better formatted?
> > 
> > I generally don't have lynx installed on my production machines.

You can read it on your workstation or on line at
http://xenbits.xen.org/docs/unstable/misc/xen-command-line.html

> Any further concerns?

I'm afraid I just don't see the point. The plain markdown docs are
serviceable enough, if not great, and there are plenty of ways to view
the html docs both on and off line if you want something prettier,
without adding a web browser to our build dependencies.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 08:25:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 08:25:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Syf6P-0000cX-Qz; Tue, 07 Aug 2012 08:25:21 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Syf6O-0000cG-Fc
	for xen-devel@lists.xensource.com; Tue, 07 Aug 2012 08:25:20 +0000
Received: from [85.158.143.35:20822] by server-2.bemta-4.messagelabs.com id
	6D/AB-17938-FE0D0205; Tue, 07 Aug 2012 08:25:19 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1344327917!13811110!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgzMTM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30044 invoked from network); 7 Aug 2012 08:25:18 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 08:25:18 -0000
X-IronPort-AV: E=Sophos;i="4.77,725,1336348800"; d="scan'208";a="13878272"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Aug 2012 08:25:17 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Tue, 7 Aug 2012
	09:25:17 +0100
Message-ID: <1344327916.11339.67.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 7 Aug 2012 09:25:16 +0100
In-Reply-To: <osstest-13567-mainreport@xen.org>
References: <osstest-13567-mainreport@xen.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [xen-unstable test] 13567: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-08-07 at 06:03 +0100, xen.org wrote:
> flight 13567 xen-unstable real [real]
> http://www.chiark.greenend.org.uk/~xensrcts/logs/13567/
> 
> Regressions :-(
> 
> Tests which did not succeed and are blocking,
> including tests which could not be run:
>  test-amd64-amd64-xl          11 guest-localmigrate        fail REGR. vs. 13536

libxl: error: libxl_exec.c:118:libxl_report_child_exitstatus: /etc/xen/scripts/block add [14735] exited with error status 1
libxl: error: libxl_device.c:978:device_hotplug_child_death_cb: script: Device /dev/dm-3 is mounted in a guest domain,
and so cannot be mounted now.

This is a consequence of 25733:353bc0801b11 "libxl: support custom block
hotplug scripts", which had the side effect of re-enabling the hotplug
script's device sharing checks, which in turn has exposed the fact that
during migration xl currently has both devices in existence (but
thankfully not active).

One option might be to nobble those checks again but I'm going to
investigate if I can make xl sequence the device teardowns in the
migration so as to make it work.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 08:38:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 08:38:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyfIf-00018v-4c; Tue, 07 Aug 2012 08:38:01 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SyfIc-00018h-3u
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 08:37:59 +0000
Received: from [85.158.143.35:29670] by server-2.bemta-4.messagelabs.com id
	5E/65-17938-5E3D0205; Tue, 07 Aug 2012 08:37:57 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1344328676!5865480!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM4Nzg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31235 invoked from network); 7 Aug 2012 08:37:57 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-4.tower-21.messagelabs.com with SMTP;
	7 Aug 2012 08:37:57 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 07 Aug 2012 09:37:56 +0100
Message-Id: <5020F003020000780009322C@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Tue, 07 Aug 2012 09:37:55 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: <zhenzhong.duan@oracle.com>
References: <5020C24A.3060604@oracle.com>
In-Reply-To: <5020C24A.3060604@oracle.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Feng Jin <joe.jin@oracle.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] kernel bootup slow issue on ovm3.1.1
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 07.08.12 at 09:22, "zhenzhong.duan" <zhenzhong.duan@oracle.com> wrote:
> After some debug, we found it's in kernel mtrr init that make this delay.
> 
> mtrr_aps_init() 
>  \-> set_mtrr() 
>      \-> mtrr_work_handler() 
> 
> kernel spin in mtrr_work_handler.
> 
> But we don't know the scene hide in the hypervisor. Why big mem +
> passthrough made the worst case.
> Is this already fixed in xen upstream?

First of all it would have been useful to indicate the kernel version,
since mtrr_work_handler() disappeared after 3.0. Obviously worth
checking whether that change by itself already addresses your
problem.

Next, if you already spotted where the spinning occurs, you
should also be able to tell what's going on at the other side, i.e.
why the event that is being waited for isn't occurring for this
long a time. Since there's a number of open coded spin loops
here, knowing exactly which one each CPU is sitting in (and
which ones might not be in any) is the fundamental information
needed.
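
The failure mode described here, with every CPU spinning in an open-coded
loop waiting for a rendezvous that never completes, can be sketched as
follows (a simplified illustration using C11 atomics; the `rendezvous()`
helper is hypothetical, not the actual kernel code):

```c
#include <stdatomic.h>

/* Simplified sketch of an open-coded rendezvous spin loop of the kind
 * mtrr_work_handler() relied on: each CPU checks in, then spins until
 * all the others have arrived.  If one CPU never arrives (because it is
 * stuck in some other loop, or never gets scheduled), everyone else
 * spins indefinitely; that is why knowing exactly which loop each CPU
 * is sitting in is the key piece of information. */
static atomic_int cpus_in;

static void rendezvous(int nr_cpus)
{
    atomic_fetch_add(&cpus_in, 1);
    while (atomic_load(&cpus_in) < nr_cpus)
        ;  /* busy-wait: a stall here is what shows up as "kernel spins" */
}
```

With nr_cpus set to the number of participating CPUs, any single straggler
holds up the whole set, which matches the observed bootup delay.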

>From what you're telling us so far, I'd rather suspect a kernel
problem, not a hypervisor one here.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 08:56:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 08:56:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyfaE-0001Mn-QL; Tue, 07 Aug 2012 08:56:10 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mgorman@suse.de>) id 1SyfaD-0001Mf-3G
	for xen-devel@lists.xensource.com; Tue, 07 Aug 2012 08:56:09 +0000
Received: from [85.158.143.99:42797] by server-2.bemta-4.messagelabs.com id
	3F/98-17938-828D0205; Tue, 07 Aug 2012 08:56:08 +0000
X-Env-Sender: mgorman@suse.de
X-Msg-Ref: server-15.tower-216.messagelabs.com!1344329764!30003112!1
X-Originating-IP: [195.135.220.15]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12368 invoked from network); 7 Aug 2012 08:56:04 -0000
Received: from cantor2.suse.de (HELO mx2.suse.de) (195.135.220.15)
	by server-15.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Aug 2012 08:56:04 -0000
Received: from relay1.suse.de (unknown [195.135.220.254])
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(No client certificate requested)
	by mx2.suse.de (Postfix) with ESMTP id 170D3A3A78;
	Tue,  7 Aug 2012 10:55:58 +0200 (CEST)
Date: Tue, 7 Aug 2012 09:55:55 +0100
From: Mel Gorman <mgorman@suse.de>
To: David Miller <davem@davemloft.net>
Message-ID: <20120807085554.GF29814@suse.de>
MIME-Version: 1.0
Content-Disposition: inline
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: Xen-devel <xen-devel@lists.xensource.com>,
	Linux-Netdev <netdev@vger.kernel.org>, LKML <linux-kernel@vger.kernel.org>,
	Ian Campbell <Ian.Campbell@eu.citrix.com>, Linux-MM <linux-mm@kvack.org>,
	Konrad Rzeszutek Wilk <konrad@darnok.org>,
	Andrew Morton <akpm@linux-foundation.org>
Subject: [Xen-devel] [PATCH] netvm: check for page == NULL when propogating
 the skb->pfmemalloc flag
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Commit [c48a11c7: netvm: propagate page->pfmemalloc to skb] is responsible
for the following bug triggered by a xen network driver

[    1.908592] BUG: unable to handle kernel NULL pointer dereference at 0000000000000010
[    1.908643] IP: [<ffffffffa0037750>] xennet_poll+0x980/0xec0 [xen_netfront]
[    1.908703] PGD ea1df067 PUD e8ada067 PMD 0
[    1.908774] Oops: 0000 [#1] SMP
[    1.908797] Modules linked in: fbcon tileblit font radeon bitblit softcursor ttm drm_kms_helper crc32c_intel xen_blkfront xen_netfront xen_fbfront fb_sys_fops sysimgblt sysfillrect syscopyarea +xen_kbdfront xenfs xen_privcmd
[    1.908938] CPU 0
[    1.908950] Pid: 2165, comm: ip Not tainted 3.5.0upstream-08854-g444fa66 #1
[    1.908983] RIP: e030:[<ffffffffa0037750>]  [<ffffffffa0037750>] xennet_poll+0x980/0xec0 [xen_netfront]
[    1.909029] RSP: e02b:ffff8800ffc03db8  EFLAGS: 00010282
[    1.909055] RAX: ffff8800ea010140 RBX: ffff8800f00e86c0 RCX: 000000000000009a
[    1.909055] RDX: 0000000000000040 RSI: 000000000000005a RDI: ffff8800fa7dee80
[    1.909055] RBP: ffff8800ffc03ee8 R08: ffff8800f00e86d8 R09: ffff8800ea010000
[    1.909055] R10: dead000000200200 R11: dead000000100100 R12: ffff8800fa7dee80
[    1.909055] R13: 000000000000005a R14: ffff8800fa7dee80 R15: 0000000000000200
[    1.909055] FS:  00007fbafc188700(0000) GS:ffff8800ffc00000(0000) knlGS:0000000000000000
[    1.909055] CS:  e033 DS: 0000 ES: 0000 CR0: 000000008005003b
[    1.909055] CR2: 0000000000000010 CR3: 00000000ea108000 CR4: 0000000000002660
[    1.909055] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[    1.909055] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[    1.909055] Process ip (pid: 2165, threadinfo ffff8800ea0f2000, task ffff8800fa783040)
[    1.909055] Stack:
[    1.909055]  ffff8800e27e5040 ffff8800ffc03e88 ffff8800ffc03e68 ffff8800ffc03e48
[    1.909055]  7fffffffffffffff ffff8800ffc03e00 ffff8800e27e5040 ffff8800f00e86d8
[    1.909055]  ffff8800ffc03eb0 00000040ffffffff ffff8800f00e8000 00000000ffc03e30
[    1.909055] Call Trace:
[    1.909055]  <IRQ>
[    1.909055]  [<ffffffff81066028>] ?  pvclock_clocksource_read+0x58/0xd0
[    1.909055]  [<ffffffff81486352>] net_rx_action+0x112/0x240
[    1.909055]  [<ffffffff8107f319>] __do_softirq+0xb9/0x190
[    1.909055]  [<ffffffff815d8d7c>] call_softirq+0x1c/0x30

The problem is that the xenfront driver is passing a NULL page to
__skb_fill_page_desc() which was unexpected. This patch checks that
there is a page before dereferencing.

Reported-and-Tested-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Mel Gorman <mgorman@suse.de>
---
 include/linux/skbuff.h |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 7632c87..8857669 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -1256,7 +1256,7 @@ static inline void __skb_fill_page_desc(struct sk_buff *skb, int i,
 	 * do not lose pfmemalloc information as the pages would not be
 	 * allocated using __GFP_MEMALLOC.
 	 */
-	if (page->pfmemalloc && !page->mapping)
+	if (page && page->pfmemalloc && !page->mapping)
 		skb->pfmemalloc	= true;
 	frag->page.p		  = page;
 	frag->page_offset	  = off;
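
As a standalone illustration of the one-line guard above, the corrected
propagation logic reads like this (with hypothetical stand-in structs, not
the kernel's real struct page / struct sk_buff):

```c
#include <stdbool.h>
#include <stddef.h>

/* Stand-in types for illustration only; the real kernel structures are
 * far larger.  Only the fields the check touches are modelled. */
struct page { bool pfmemalloc; void *mapping; };
struct sk_buff { bool pfmemalloc; };

/* Propagate pfmemalloc only when a page is actually present; with the
 * guard in place, dereferencing a NULL page (the oops fixed above) can
 * no longer happen. */
static void propagate_pfmemalloc(struct sk_buff *skb, const struct page *page)
{
    if (page && page->pfmemalloc && !page->mapping)
        skb->pfmemalloc = true;
}
```

Called with a NULL page, the function is now a no-op instead of an oops.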

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 08:56:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 08:56:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyfaE-0001Mn-QL; Tue, 07 Aug 2012 08:56:10 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mgorman@suse.de>) id 1SyfaD-0001Mf-3G
	for xen-devel@lists.xensource.com; Tue, 07 Aug 2012 08:56:09 +0000
Received: from [85.158.143.99:42797] by server-2.bemta-4.messagelabs.com id
	3F/98-17938-828D0205; Tue, 07 Aug 2012 08:56:08 +0000
X-Env-Sender: mgorman@suse.de
X-Msg-Ref: server-15.tower-216.messagelabs.com!1344329764!30003112!1
X-Originating-IP: [195.135.220.15]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12368 invoked from network); 7 Aug 2012 08:56:04 -0000
Received: from cantor2.suse.de (HELO mx2.suse.de) (195.135.220.15)
	by server-15.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Aug 2012 08:56:04 -0000
Received: from relay1.suse.de (unknown [195.135.220.254])
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(No client certificate requested)
	by mx2.suse.de (Postfix) with ESMTP id 170D3A3A78;
	Tue,  7 Aug 2012 10:55:58 +0200 (CEST)
Date: Tue, 7 Aug 2012 09:55:55 +0100
From: Mel Gorman <mgorman@suse.de>
To: David Miller <davem@davemloft.net>
Message-ID: <20120807085554.GF29814@suse.de>
MIME-Version: 1.0
Content-Disposition: inline
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: Xen-devel <xen-devel@lists.xensource.com>,
	Linux-Netdev <netdev@vger.kernel.org>, LKML <linux-kernel@vger.kernel.org>,
	Ian Campbell <Ian.Campbell@eu.citrix.com>, Linux-MM <linux-mm@kvack.org>,
	Konrad Rzeszutek Wilk <konrad@darnok.org>,
	Andrew Morton <akpm@linux-foundation.org>
Subject: [Xen-devel] [PATCH] netvm: check for page == NULL when propogating
 the skb->pfmemalloc flag
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Commit [c48a11c7: netvm: propagate page->pfmemalloc to skb] is responsible
for the following bug triggered by a xen network driver

[    1.908592] BUG: unable to handle kernel NULL pointer dereference at 0000000000000010
[    1.908643] IP: [<ffffffffa0037750>] xennet_poll+0x980/0xec0 [xen_netfront]
[    1.908703] PGD ea1df067 PUD e8ada067 PMD 0
[    1.908774] Oops: 0000 [#1] SMP
[    1.908797] Modules linked in: fbcon tileblit font radeon bitblit softcursor ttm drm_kms_helper crc32c_intel xen_blkfront xen_netfront xen_fbfront fb_sys_fops sysimgblt sysfillrect syscopyarea +xen_kbdfront xenfs xen_privcmd
[    1.908938] CPU 0
[    1.908950] Pid: 2165, comm: ip Not tainted 3.5.0upstream-08854-g444fa66 #1
[    1.908983] RIP: e030:[<ffffffffa0037750>]  [<ffffffffa0037750>] xennet_poll+0x980/0xec0 [xen_netfront]
[    1.909029] RSP: e02b:ffff8800ffc03db8  EFLAGS: 00010282
[    1.909055] RAX: ffff8800ea010140 RBX: ffff8800f00e86c0 RCX: 000000000000009a
[    1.909055] RDX: 0000000000000040 RSI: 000000000000005a RDI: ffff8800fa7dee80
[    1.909055] RBP: ffff8800ffc03ee8 R08: ffff8800f00e86d8 R09: ffff8800ea010000
[    1.909055] R10: dead000000200200 R11: dead000000100100 R12: ffff8800fa7dee80
[    1.909055] R13: 000000000000005a R14: ffff8800fa7dee80 R15: 0000000000000200
[    1.909055] FS:  00007fbafc188700(0000) GS:ffff8800ffc00000(0000) knlGS:0000000000000000
[    1.909055] CS:  e033 DS: 0000 ES: 0000 CR0: 000000008005003b
[    1.909055] CR2: 0000000000000010 CR3: 00000000ea108000 CR4: 0000000000002660
[    1.909055] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[    1.909055] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[    1.909055] Process ip (pid: 2165, threadinfo ffff8800ea0f2000, task ffff8800fa783040)
[    1.909055] Stack:
[    1.909055]  ffff8800e27e5040 ffff8800ffc03e88 ffff8800ffc03e68 ffff8800ffc03e48
[    1.909055]  7fffffffffffffff ffff8800ffc03e00 ffff8800e27e5040 ffff8800f00e86d8
[    1.909055]  ffff8800ffc03eb0 00000040ffffffff ffff8800f00e8000 00000000ffc03e30
[    1.909055] Call Trace:
[    1.909055]  <IRQ>
[    1.909055]  [<ffffffff81066028>] ?  pvclock_clocksource_read+0x58/0xd0
[    1.909055]  [<ffffffff81486352>] net_rx_action+0x112/0x240
[    1.909055]  [<ffffffff8107f319>] __do_softirq+0xb9/0x190
[    1.909055]  [<ffffffff815d8d7c>] call_softirq+0x1c/0x30

The problem is that the xen-netfront driver is passing a NULL page to
__skb_fill_page_desc(), which was unexpected. This patch checks that
the page is non-NULL before dereferencing it.

Reported-and-Tested-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Mel Gorman <mgorman@suse.de>
---
 include/linux/skbuff.h |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 7632c87..8857669 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -1256,7 +1256,7 @@ static inline void __skb_fill_page_desc(struct sk_buff *skb, int i,
 	 * do not lose pfmemalloc information as the pages would not be
 	 * allocated using __GFP_MEMALLOC.
 	 */
-	if (page->pfmemalloc && !page->mapping)
+	if (page && page->pfmemalloc && !page->mapping)
 		skb->pfmemalloc	= true;
 	frag->page.p		  = page;
 	frag->page_offset	  = off;

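The shape of the fix can be modelled in isolation. The sketch below is not kernel code: `struct page` here is a two-field stand-in for the real definition, and the function simply reproduces the fixed condition so the NULL case can be exercised.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Stand-in for the two fields of struct page that the check reads;
 * not the real kernel definition. */
struct page {
    bool pfmemalloc;
    void *mapping;
};

/* Mirrors the fixed condition in __skb_fill_page_desc(): the old code
 * read page->pfmemalloc unconditionally and oopsed when netfront
 * passed page == NULL. */
static bool skb_marks_pfmemalloc(const struct page *page)
{
    return page && page->pfmemalloc && !page->mapping;
}
```

With the extra `page &&` term the NULL case short-circuits instead of faulting.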
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 09:09:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 09:09:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Syfn4-0001pU-Qf; Tue, 07 Aug 2012 09:09:26 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Syfn3-0001p7-N9
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 09:09:25 +0000
Received: from [85.158.143.35:57497] by server-3.bemta-4.messagelabs.com id
	94/3C-01511-44BD0205; Tue, 07 Aug 2012 09:09:24 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1344330561!15850612!3
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzAzMTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24906 invoked from network); 7 Aug 2012 09:09:23 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 09:09:23 -0000
X-IronPort-AV: E=Sophos;i="4.77,726,1336363200"; d="scan'208";a="204369661"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Aug 2012 05:08:54 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Tue, 7 Aug 2012 05:08:54 -0400
Received: from cosworth.uk.xensource.com ([10.80.16.52] ident=ianc)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1SyfmY-00055J-3b;
	Tue, 07 Aug 2012 10:08:54 +0100
MIME-Version: 1.0
X-Mercurial-Node: 0a67d3147a174a26dda2e268ca801e56a9c5b711
Message-ID: <0a67d3147a174a26dda2.1344330534@cosworth.uk.xensource.com>
In-Reply-To: <patchbomb.1344330533@cosworth.uk.xensource.com>
References: <patchbomb.1344330533@cosworth.uk.xensource.com>
User-Agent: Mercurial-patchbomb/1.6.4
Date: Tue, 7 Aug 2012 10:08:54 +0100
From: Ian Campbell <ian.campbell@citrix.com>
To: xen-devel@lists.xen.org
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: [Xen-devel] [PATCH 1 of 2] libxl: add libxl_device_<type>_list_free
	helpers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1344326374 -3600
# Node ID 0a67d3147a174a26dda2e268ca801e56a9c5b711
# Parent  a7ad22e5525831dd491d7ee1fe538b7543404ac7
libxl: add libxl_device_<type>_list_free helpers

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

diff -r a7ad22e55258 -r 0a67d3147a17 tools/libxl/libxl.c
--- a/tools/libxl/libxl.c	Mon Aug 06 12:31:02 2012 +0100
+++ b/tools/libxl/libxl.c	Tue Aug 07 08:59:34 2012 +0100
@@ -2090,6 +2090,14 @@ out_err:
     return NULL;
 }
 
+void libxl_device_disk_list_free(libxl_device_disk *list, int nr)
+{
+    int i;
+    for (i = 0; i < nr; i++)
+        libxl_device_disk_dispose(&list[i]);
+    free(list);
+}
+
 int libxl_device_disk_getinfo(libxl_ctx *ctx, uint32_t domid,
                               libxl_device_disk *disk, libxl_diskinfo *diskinfo)
 {
@@ -2759,6 +2767,14 @@ out_err:
     return NULL;
 }
 
+void libxl_device_nic_list_free(libxl_device_nic *list, int nr)
+{
+    int i;
+    for (i = 0; i < nr; i++)
+        libxl_device_nic_dispose(&list[i]);
+    free(list);
+}
+
 int libxl_device_nic_getinfo(libxl_ctx *ctx, uint32_t domid,
                               libxl_device_nic *nic, libxl_nicinfo *nicinfo)
 {
diff -r a7ad22e55258 -r 0a67d3147a17 tools/libxl/libxl.h
--- a/tools/libxl/libxl.h	Mon Aug 06 12:31:02 2012 +0100
+++ b/tools/libxl/libxl.h	Tue Aug 07 08:59:34 2012 +0100
@@ -668,6 +668,11 @@ void libxl_vcpuinfo_list_free(libxl_vcpu
  *   Initialises info with details of the given device which must be
  *   attached to the specified domain.
  *
+ * libxl_device_<type>_list_free(*devs, int nr_dev):
+ *
+ *   Dispose nr_dev elements of the array devs and then free the array
+ *   itself.
+ *
  * Creation / Control
  * ------------------
  *
@@ -714,6 +719,8 @@ int libxl_device_disk_destroy(libxl_ctx 
                               LIBXL_EXTERNAL_CALLERS_ONLY;
 
 libxl_device_disk *libxl_device_disk_list(libxl_ctx *ctx, uint32_t domid, int *num);
+void libxl_device_disk_list_free(libxl_device_disk *disks, int nr_disks);
+
 int libxl_device_disk_getinfo(libxl_ctx *ctx, uint32_t domid,
                               libxl_device_disk *disk, libxl_diskinfo *diskinfo);
 
@@ -739,6 +746,7 @@ int libxl_device_nic_destroy(libxl_ctx *
                              LIBXL_EXTERNAL_CALLERS_ONLY;
 
 libxl_device_nic *libxl_device_nic_list(libxl_ctx *ctx, uint32_t domid, int *num);
+void libxl_device_nic_list_free(libxl_device_nic *nics, int nr_nics);
 int libxl_device_nic_getinfo(libxl_ctx *ctx, uint32_t domid,
                               libxl_device_nic *nic, libxl_nicinfo *nicinfo);
 
@@ -784,6 +792,7 @@ int libxl_device_pci_destroy(libxl_ctx *
 
 libxl_device_pci *libxl_device_pci_list(libxl_ctx *ctx, uint32_t domid,
                                         int *num);
+void libxl_device_pci_list_free(libxl_device_pci *pcis, int nr_pcis);
 
 /*
  * Functions related to making devices assignable -- that is, bound to
diff -r a7ad22e55258 -r 0a67d3147a17 tools/libxl/libxl_pci.c
--- a/tools/libxl/libxl_pci.c	Mon Aug 06 12:31:02 2012 +0100
+++ b/tools/libxl/libxl_pci.c	Tue Aug 07 08:59:34 2012 +0100
@@ -1406,6 +1406,14 @@ out:
     return pcidevs;
 }
 
+void libxl_device_pci_list_free(libxl_device_pci *list, int nr)
+{
+    int i;
+    for (i = 0; i < nr; i++)
+        libxl_device_pci_dispose(&list[i]);
+    free(list);
+}
+
 int libxl__device_pci_destroy_all(libxl__gc *gc, uint32_t domid)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 09:09:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 09:09:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Syfn3-0001pD-DA; Tue, 07 Aug 2012 09:09:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Syfn2-0001ot-2S
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 09:09:24 +0000
Received: from [85.158.143.35:49541] by server-2.bemta-4.messagelabs.com id
	EA/C6-17938-34BD0205; Tue, 07 Aug 2012 09:09:23 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1344330561!15850612!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzAzMTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24860 invoked from network); 7 Aug 2012 09:09:23 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 09:09:23 -0000
X-IronPort-AV: E=Sophos;i="4.77,726,1336363200"; d="scan'208";a="204369660"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Aug 2012 05:08:54 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Tue, 7 Aug 2012 05:08:54 -0400
Received: from cosworth.uk.xensource.com ([10.80.16.52] ident=ianc)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1SyfmY-00055J-42;
	Tue, 07 Aug 2012 10:08:54 +0100
MIME-Version: 1.0
X-Mercurial-Node: 773de9b98cd183ee2b595f1856fb9768709e20f7
Message-ID: <773de9b98cd183ee2b59.1344330535@cosworth.uk.xensource.com>
In-Reply-To: <patchbomb.1344330533@cosworth.uk.xensource.com>
References: <patchbomb.1344330533@cosworth.uk.xensource.com>
User-Agent: Mercurial-patchbomb/1.6.4
Date: Tue, 7 Aug 2012 10:08:55 +0100
From: Ian Campbell <ian.campbell@citrix.com>
To: xen-devel@lists.xen.org
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: [Xen-devel] [PATCH 2 of 2] xl: migrate: destroy disks on source
 before giving destination the "go"
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1344330518 -3600
# Node ID 773de9b98cd183ee2b595f1856fb9768709e20f7
# Parent  0a67d3147a174a26dda2e268ca801e56a9c5b711
xl: migrate: destroy disks on source before giving destination the "go"

25733:353bc0801b11 "libxl: support custom block hotplug scripts" had
the (intended) side effect of re-enabling the hotplug script's
device sharing checks, which in turn has exposed the fact that during
migration xl currently has both devices in existence (but thankfully
not active).

Fix this by destroying the disk backends before sending the GO message
to the destination end (and recreating them on failure).

This is a bit of an ad-hoc solution for 4.2; we should revisit the
sequencing of the operations during migration for 4.3.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

diff -r 0a67d3147a17 -r 773de9b98cd1 tools/libxl/xl_cmdimpl.c
--- a/tools/libxl/xl_cmdimpl.c	Tue Aug 07 08:59:34 2012 +0100
+++ b/tools/libxl/xl_cmdimpl.c	Tue Aug 07 10:08:38 2012 +0100
@@ -3021,6 +3021,8 @@ static void migrate_domain(const char *d
     char rc_buf;
     uint8_t *config_data;
     int config_len;
+    libxl_device_disk *disks;
+    int i, nr_disks = 0;
 
     save_domain_core_begin(domain_spec, override_config_file,
                            &config_data, &config_len);
@@ -3072,6 +3074,18 @@ static void migrate_domain(const char *d
         if (rc) goto failed_resume;
     }
 
+    /* In order to avoid having the same disk active twice
+     * simultaneously on a migration, which will trigger the hotplug
+     * script sharing detection, we must first remove the disks from
+     * the source domain. */
+    disks = libxl_device_disk_list(ctx, domid, &nr_disks);
+    if (!disks) goto failed_badly;
+
+    for (i = 0 ; i < nr_disks ; i++) {
+        rc = libxl_device_disk_destroy(ctx, domid, &disks[i], NULL);
+        if (rc) goto failed_badly;
+    }
+
     /* point of no return - as soon as we have tried to say
      * "go" to the receiver, it's not safe to carry on.  We leave
      * the domain renamed to %s--migratedaway in case that's helpful.
@@ -3107,6 +3121,13 @@ static void migrate_domain(const char *d
 
         fprintf(stderr, "migration sender: Trying to resume at our end.\n");
 
+        /* Re-add disks removed above. */
+        for (i = 0 ; i < nr_disks ; i++) {
+            rc = libxl_device_disk_add(ctx, domid, &disks[i], NULL);
+            if (rc) goto failed_badly;
+        }
+        libxl_device_disk_list_free(disks, nr_disks);
+
         if (common_domname) {
             libxl_domain_rename(ctx, domid, away_domname, common_domname);
         }
@@ -3120,6 +3141,8 @@ static void migrate_domain(const char *d
     fprintf(stderr, "migration sender: Target reports successful startup.\n");
     libxl_domain_destroy(ctx, domid, 0); /* bang! */
     fprintf(stderr, "Migration successful.\n");
+
+    libxl_device_disk_list_free(disks, nr_disks);
     exit(0);
 
  failed_suspend:
@@ -3132,6 +3155,7 @@ static void migrate_domain(const char *d
     close(send_fd);
     migration_child_report(recv_fd);
     fprintf(stderr, "Migration failed, resuming at sender.\n");
+    assert(!nr_disks); /* This failure path is always before disk shutdown */
     libxl_domain_resume(ctx, domid, 0);
     exit(-ERROR_FAIL);
 
@@ -3146,6 +3170,7 @@ static void migrate_domain(const char *d
 
     close(send_fd);
     migration_child_report(recv_fd);
+    libxl_device_disk_list_free(disks, nr_disks);
     exit(-ERROR_BADFAIL);
 }
 

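The control flow the patch adds can be reduced to a toy model: remove the disks from the source before sending "go", and re-add them only on the resume-at-sender failure path. This is a sketch with stub state, not the real libxl calls.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the disk sequencing in migrate_domain(); the two
 * parameters stand in for the result of libxl_device_disk_list() and
 * the receiver's success report. Returns how many disks remain
 * attached at the sender afterwards. */
static int migrate_disks(int nr_disks, bool dest_reports_success)
{
    int attached = nr_disks;

    /* Destroy the source disks first, so the hotplug script's sharing
     * check never sees the same disk active on both hosts. */
    attached = 0;

    /* ... send "go", transfer the domain ... */

    if (!dest_reports_success) {
        /* Failure after the point of no return: re-add the disks so
         * the domain can resume at the sender. */
        attached = nr_disks;
    }
    return attached;
}
```

On success the source ends with no disks attached (the domain there is destroyed); on failure the original set is restored before resuming.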
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 09:09:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 09:09:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Syfn3-0001p5-0T; Tue, 07 Aug 2012 09:09:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Syfn1-0001ot-Ml
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 09:09:23 +0000
Received: from [85.158.143.35:49472] by server-2.bemta-4.messagelabs.com id
	7D/B6-17938-24BD0205; Tue, 07 Aug 2012 09:09:22 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1344330561!15850612!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzAzMTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24800 invoked from network); 7 Aug 2012 09:09:22 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 09:09:22 -0000
X-IronPort-AV: E=Sophos;i="4.77,726,1336363200"; d="scan'208";a="204369659"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Aug 2012 05:08:54 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Tue, 7 Aug 2012 05:08:54 -0400
Received: from cosworth.uk.xensource.com ([10.80.16.52] ident=ianc)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1SyfmY-00055J-35;
	Tue, 07 Aug 2012 10:08:54 +0100
MIME-Version: 1.0
Message-ID: <patchbomb.1344330533@cosworth.uk.xensource.com>
In-Reply-To: <1344327916.11339.67.camel@zakaz.uk.xensource.com>
References: <1344327916.11339.67.camel@zakaz.uk.xensource.com>
User-Agent: Mercurial-patchbomb/1.6.4
Date: Tue, 7 Aug 2012 10:08:53 +0100
From: Ian Campbell <ian.campbell@citrix.com>
To: xen-devel@lists.xen.org
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: [Xen-devel] [PATCH 0 of 2] xl: fix localhost migration after
	25733:353bc0801b11
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-08-07 at 09:25 +0100, Ian Campbell wrote:
> I'm going to investigate if I can make xl sequence the device
> teardowns in the migration so as to make it work. 

Here is a short series which does this. 

It's a bit ad hoc; we probably want to revisit this for 4.3 (if not
now).

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 09:37:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 09:37:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SygEH-0002nR-4d; Tue, 07 Aug 2012 09:37:33 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SygEF-0002nM-7a
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 09:37:31 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1344332204!11171663!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgzNjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6936 invoked from network); 7 Aug 2012 09:36:45 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 09:36:45 -0000
X-IronPort-AV: E=Sophos;i="4.77,726,1336348800"; d="scan'208";a="13880193"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Aug 2012 09:36:44 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Tue, 7 Aug 2012
	10:36:44 +0100
Message-ID: <1344332203.11339.79.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Date: Tue, 7 Aug 2012 10:36:43 +0100
In-Reply-To: <1344289916-11518-1-git-send-email-dgdegra@tycho.nsa.gov>
References: <1344289916-11518-1-git-send-email-dgdegra@tycho.nsa.gov>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH RFC/for-4.2?] libxl: Support backend domain
	ID for disks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2012-08-06 at 22:51 +0100, Daniel De Graaf wrote:
> Allow specification of backend domains for disks, either in the config
> file or via xl block-attach.
> 
> A version of this patch was submitted in October 2011 but was not
> suitable at the time because libxl did not support the "script=" option
> for disks in libxl. Now that this option exists, it is possible to
> specify a backend domain without needing to duplicate the device tree of
> the domain providing the disk in the domain using libxl; just specify
> script=/bin/true (or any more useful script) to prevent the block script
> from running in the domain using libxl.

You can also set run_hotplug_scripts=0 in /etc/xen/xl.conf, which causes
the scripts to be run from udev again -- which means they should run in
the appropriate domain automatically. (Although I confess I don't
understand the reliance on script= here, so perhaps I've got the totally
wrong end of the stick.)
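For reference, that is a single line in the config file (the comment is
mine, just describing the effect mentioned above):

        # /etc/xen/xl.conf -- hand hotplug script execution back to udev
        run_hotplug_scripts=0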

Given that this patch was originally posted so long ago, that the
script= stuff just went in, that driver domains were on the TODO at one
point (I think), and given the relative simplicity of this patch, I'm
leaning towards taking this in 4.2.

> In order to support named backend domains like network-attach, the
> prototype of xlu_disk_parse in libxlutil.h needs a libxl_ctx. Without
> this parameter, it would only be possible to support numeric domain
> IDs in the block device specification.

I'm not sure if using libxl in libxlu is a layering violation or not
(perhaps Ian J has an opinion), but I suppose it is acceptable (the
alternative is a twisty maze of callbacks).

If we are going to expose libxl down to libxlu maybe we should go all
the way and add the ctx to the XLU_Config?

> Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
> 
> ---
> 
> This patch does not include the changes to tools/libxl/libxlu_disk_l.c
> and tools/libxl/libxlu_disk_l.h because the diffs contain unrelated
> changes due to different generator versions.

I'm afraid it did, but I've ignored them.

>  tools/libxl/libxlu_disk.c   |   3 +-
>  tools/libxl/libxlu_disk_i.h |   3 +-
>  tools/libxl/libxlu_disk_l.c | 581 ++++++++++++++++++++++----------------------
>  tools/libxl/libxlu_disk_l.h |  24 +-
>  tools/libxl/libxlu_disk_l.l |   8 +
>  tools/libxl/libxlutil.h     |   2 +-
>  tools/libxl/xl_cmdimpl.c    |   6 +-

This patch should also touch docs/misc/xl-disk-configuration.txt.

[...]
> @@ -169,6 +176,7 @@ devtype=[^,]*,?     { xlu__disk_err(DPC,yytext,"unknown value for type"); }
> 
>  access=[^,]*,? { STRIP(','); setaccess(DPC, FROMEQUALS); }
>  backendtype=[^,]*,? { STRIP(','); setbackendtype(DPC,FROMEQUALS); }
> +backenddomain=[^,]*,? { STRIP(','); setbackend(DPC,FROMEQUALS); }

Is this compatible with xend? Grep doesn't show the string
"backenddomain" at all. Maybe xend simply didn't have this
functionality?

xl seems to use just backend for network devices. They probably ought to
match.
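For concreteness, the key this hunk adds would be used in a disk spec
along these lines (the domain name and paths are invented for
illustration; script=/bin/true is the trick from the patch description):

        disk = [ 'vdev=xvda,target=/dev/vg/guest,backenddomain=storagedom,script=/bin/true' ]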

>  vdev=[^,]*,?   { STRIP(','); SAVESTRING("vdev", vdev, FROMEQUALS); }
>  script=[^,]*,? { STRIP(','); SAVESTRING("script", script, FROMEQUALS); }
> diff --git a/tools/libxl/libxlutil.h b/tools/libxl/libxlutil.h
> index 0333e55..87eb399 100644
> --- a/tools/libxl/libxlutil.h
> +++ b/tools/libxl/libxlutil.h
> @@ -72,7 +72,7 @@ const char *xlu_cfg_get_listitem(const XLU_ConfigList*, int entry);
>   */
> 
>  int xlu_disk_parse(XLU_Config *cfg, int nspecs, const char *const *specs,
> -                   libxl_device_disk *disk);
> +                   libxl_device_disk *disk, libxl_ctx *ctx);
>    /* disk must have been initialised.
>     *
>     * On error, returns errno value.  Bad strings cause EINVAL and
> diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
> index 138cd72..fd00d61 100644
> --- a/tools/libxl/xl_cmdimpl.c
> +++ b/tools/libxl/xl_cmdimpl.c
> @@ -420,7 +420,7 @@ static void parse_disk_config_multistring(XLU_Config **config,
>          if (!*config) { perror("xlu_cfg_init"); exit(-1); }
>      }
> 
> -    e = xlu_disk_parse(*config, nspecs, specs, disk);
> +    e = xlu_disk_parse(*config, nspecs, specs, disk, ctx);
>      if (e == EINVAL) exit(-1);
>      if (e) {
>          fprintf(stderr,"xlu_disk_parse failed: %s\n",strerror(errno));
> @@ -5335,7 +5335,7 @@ int main_networkdetach(int argc, char **argv)
>  int main_blockattach(int argc, char **argv)
>  {
>      int opt;
> -    uint32_t fe_domid, be_domid = 0;
> +    uint32_t fe_domid;
>      libxl_device_disk disk = { 0 };
>      XLU_Config *config = 0;
> 
> @@ -5351,8 +5351,6 @@ int main_blockattach(int argc, char **argv)
>      parse_disk_config_multistring
>          (&config, argc-optind, (const char* const*)argv + optind, &disk);
> 
> -    disk.backend_domid = be_domid;
> -

xm block-attach's syntax was (allegedly): 
        Usage: xm block-attach <Domain> <BackDev> <FrontDev> <Mode> [BackDomain]

i.e. it accepted backdomain on the command line. Do we want/need to
support that? I'd be happy enough not to.
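With this patch the backend domain would presumably be given in the disk
spec itself rather than positionally, e.g. (names invented for
illustration):

        xl block-attach guestdom 'vdev=xvdb,target=/dev/vg/guest,backenddomain=storagedom,script=/bin/true'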

>      if (dryrun_only) {
>          char *json = libxl_device_disk_to_json(ctx, &disk);
>          printf("disk: %s\n", json);
> --
> 1.7.11.2
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 09:41:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 09:41:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SygHP-0002wS-OI; Tue, 07 Aug 2012 09:40:47 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1SygHO-0002wL-5Y
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 09:40:46 +0000
Received: from [85.158.143.99:42740] by server-2.bemta-4.messagelabs.com id
	E9/2D-17938-D92E0205; Tue, 07 Aug 2012 09:40:45 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-10.tower-216.messagelabs.com!1344332443!24214436!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7658 invoked from network); 7 Aug 2012 09:40:44 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-10.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Aug 2012 09:40:44 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1SygHK-000LvY-4f; Tue, 07 Aug 2012 09:40:42 +0000
Date: Tue, 7 Aug 2012 10:40:42 +0100
From: Tim Deegan <tim@xen.org>
To: lmingcsce <lmingcsce@gmail.com>
Message-ID: <20120807094042.GA84051@ocelot.phlegethon.org>
References: <7E3078CC-DDFB-49C4-98A9-CC14395A41ED@gmail.com>
	<20120802104741.GB11437@ocelot.phlegethon.org>
	<033B5345-93F0-412C-822A-5F952694BD30@gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <033B5345-93F0-412C-822A-5F952694BD30@gmail.com>
User-Agent: Mutt/1.4.2.1i
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] About revoke write access of all the shadows
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 09:35 -0400 on 04 Aug (1344072926), lmingcsce wrote:
> From the shadow_blow_tables() function of the log-dirty mode mechanism,
> I see that it uses this approach. However, when debugging
> foreach_pinned_shadow(d, sp, t), I find that all the pages I get are
> L2_pae_shadow or L2h_pae_shadow; there is no L1 page type.
> Can you help explain why this happens?

shadow_blow_tables() only touches the topmost tables (i.e. on PAE, L2,
and on 64-bit, L4).  What it does is drop the reference count on the
tables (or clear their entries), and lets the reference-counting
mechanism take care of clearing and freeing the lower-level tables that
they point to.

> If so, how can I get all the L1 page types of one domain? What I want
> to do is to set all the shadow tables as read-only.

To get at all the L1 entries, you should use hash_foreach(), with a 
mask and callbacks that contain all the L1 types.  You can copy that
from sh_remove_write_access() or sh_remove_all_mappings(), but you'll
need to make a new callback function (in multi.c) to handle each L1
page.

Cheers,

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 09:43:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 09:43:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SygJN-00033K-7Y; Tue, 07 Aug 2012 09:42:49 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1SygJM-000338-Et
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 09:42:48 +0000
Received: from [85.158.143.99:57617] by server-1.bemta-4.messagelabs.com id
	19/74-24392-713E0205; Tue, 07 Aug 2012 09:42:47 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-6.tower-216.messagelabs.com!1344332566!23581050!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30332 invoked from network); 7 Aug 2012 09:42:47 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-6.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Aug 2012 09:42:47 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1SygJH-000LxI-VJ; Tue, 07 Aug 2012 09:42:43 +0000
Date: Tue, 7 Aug 2012 10:42:43 +0100
From: Tim Deegan <tim@xen.org>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20120807094243.GB84051@ocelot.phlegethon.org>
References: <1344252872.11339.27.camel@zakaz.uk.xensource.com>
	<1344252900-26148-1-git-send-email-ian.campbell@citrix.com>
	<20120806115516.GA68290@ocelot.phlegethon.org>
	<1344254179.11339.29.camel@zakaz.uk.xensource.com>
	<1344260756.11339.44.camel@zakaz.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1344260756.11339.44.camel@zakaz.uk.xensource.com>
User-Agent: Mutt/1.4.2.1i
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 1/4] arm: disable distributor delivery on
	boot CPU only
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 14:45 +0100 on 06 Aug (1344264356), Ian Campbell wrote:
> From 6440d1868cb03573ebacf5eb3cfc69f4f6abdf15 Mon Sep 17 00:00:00 2001
> From: Ian Campbell <ian.campbell@citrix.com>
> Date: Mon, 6 Aug 2012 09:40:59 +0000
> Subject: [PATCH] arm: disable distributor delivery on boot CPU only
> 
> The secondary processors do not call enter_hyp_mode until the boot CPU
> has brought most of the system up, including enabling delivery via the
> distributor. This means that bringing up secondary CPUs unexpectedly
> disables the GICD again, meaning we get no further interrupts on any
> CPU.
> 
> For completeness also disable the GICC (CPU interface) on all CPUs
> too.
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

Acked-by: Tim Deegan <tim@xen.org>

> ---
>  xen/arch/arm/mode_switch.S |   23 +++++++++++++++++------
>  1 files changed, 17 insertions(+), 6 deletions(-)
> 
> diff --git a/xen/arch/arm/mode_switch.S b/xen/arch/arm/mode_switch.S
> index f5549d7..acbd523 100644
> --- a/xen/arch/arm/mode_switch.S
> +++ b/xen/arch/arm/mode_switch.S
> @@ -23,6 +23,8 @@
>  
>  /* Get up a CPU into Hyp mode.  Clobbers r0-r3.
>   *
> + * Expects r12 == CPU number
> + *
>   * This code is specific to the VE model, and not intended to be used
>   * on production systems.  As such it's a bit hackier than the main
>   * boot code in head.S.  In future it will be replaced by better
> @@ -46,19 +48,28 @@ enter_hyp_mode:
>  	mcr   CP32(r0, CNTFRQ)
>  	ldr   r0, =0x40c00           /* SMP, c11, c10 in non-secure mode */
>  	mcr   CP32(r0, NSACR)
> -	/* Continuing ugliness: Set up the GIC so NS state owns interrupts */
>  	mov   r0, #GIC_BASE_ADDRESS
>  	add   r0, r0, #GIC_DR_OFFSET
> +	/* Disable the GIC distributor, on the boot CPU only */
>  	mov   r1, #0
> -	str   r1, [r0]               /* Disable delivery in the distributor */
> +	teq   r12, #0                /* Is this the boot CPU? */
> +	streq r1, [r0]
> +	/* Continuing ugliness: Set up the GIC so NS state owns interrupts.
> +	 * The first 32 interrupts (SGIs & PPIs) must be configured on all
> +	 * CPUs, while the remainder are SPIs and only need to be done once,
> +	 * on the boot CPU. */
>  	add   r0, r0, #0x80          /* GICD_IGROUP0 */
>  	mov   r2, #0xffffffff        /* All interrupts to group 1 */
> -	str   r2, [r0]
> -	str   r2, [r0, #4]
> -	str   r2, [r0, #8]
> -	/* Must drop priority mask below 0x80 before entering NS state */
> +	teq   r12, #0                /* Boot CPU? */
> +	str   r2, [r0]               /* Interrupts  0-31 (SGI & PPI) */
> +	streq r2, [r0, #4]           /* Interrupts 32-63 (SPI) */
> +	streq r2, [r0, #8]           /* Interrupts 64-95 (SPI) */
> +	/* Disable the GIC CPU interface on all processors */
>  	mov   r0, #GIC_BASE_ADDRESS
>  	add   r0, r0, #GIC_CR_OFFSET
> +	mov   r1, #0
> +	str   r1, [r0]
> +	/* Must drop priority mask below 0x80 before entering NS state */
>  	ldr   r1, =0xff
>  	str   r1, [r0, #0x4]         /* -> GICC_PMR */
>  	/* Reset a few config registers */
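
For readers tracing the logic, the per-CPU behaviour of the patch above can be modelled in C. This is a toy sketch against a fake register file, not Xen code: the struct and function names are hypothetical, only the registers the patch touches are modelled, and hardware banking of the per-CPU registers is ignored.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Toy model of the GICv2 registers touched by the patch (names hypothetical). */
struct fake_gic {
    uint32_t gicd_ctlr;   /* distributor control (GICD_CTLR) */
    uint32_t igroupr[3];  /* interrupt group registers: IRQs 0-31, 32-63, 64-95 */
    uint32_t gicc_ctlr;   /* CPU interface control (GICC_CTLR) */
    uint32_t gicc_pmr;    /* CPU interface priority mask (GICC_PMR) */
};

/* What enter_hyp_mode now does per CPU, with r12 == cpu number. */
static void enter_hyp_gic_setup(struct fake_gic *gic, unsigned int cpu)
{
    if (cpu == 0)
        gic->gicd_ctlr = 0;            /* disable delivery: boot CPU only */

    gic->igroupr[0] = 0xffffffffu;     /* SGIs & PPIs: configured on every CPU */
    if (cpu == 0) {
        gic->igroupr[1] = 0xffffffffu; /* SPIs are shared: boot CPU only */
        gic->igroupr[2] = 0xffffffffu;
    }

    gic->gicc_ctlr = 0;                /* disable CPU interface: all CPUs */
    gic->gicc_pmr = 0xff;              /* open priority mask before NS entry */
}
```

Each CPU runs this once from enter_hyp_mode; only CPU 0 touches the shared distributor state, so a late-booting secondary can no longer switch delivery back off.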

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 09:58:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 09:58:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SygY3-0003Jg-SG; Tue, 07 Aug 2012 09:57:59 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wangzhenguo@huawei.com>) id 1SygY2-0003Jb-Lc
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 09:57:59 +0000
Received: from [85.158.138.51:19507] by server-10.bemta-3.messagelabs.com id
	B4/A6-07905-4A6E0205; Tue, 07 Aug 2012 09:57:56 +0000
X-Env-Sender: wangzhenguo@huawei.com
X-Msg-Ref: server-8.tower-174.messagelabs.com!1344333473!30751315!1
X-Originating-IP: [119.145.14.65]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTE5LjE0NS4xNC42NSA9PiAzMDcxNw==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24743 invoked from network); 7 Aug 2012 09:57:54 -0000
Received: from szxga02-in.huawei.com (HELO szxga02-in.huawei.com)
	(119.145.14.65) by server-8.tower-174.messagelabs.com with SMTP;
	7 Aug 2012 09:57:54 -0000
Received: from 172.24.2.119 (EHLO szxeml209-edg.china.huawei.com)
	([172.24.2.119])
	by szxrg02-dlp.huawei.com (MOS 4.3.4-GA FastPath queued)
	with ESMTP id ANC15639; Tue, 07 Aug 2012 17:55:26 +0800 (CST)
Received: from SZXEML405-HUB.china.huawei.com (10.82.67.60) by
	szxeml209-edg.china.huawei.com (172.24.2.184) with Microsoft SMTP
	Server (TLS) id 14.1.323.3; Tue, 7 Aug 2012 17:53:40 +0800
Received: from SZXEML528-MBX.china.huawei.com ([169.254.4.120]) by
	szxeml405-hub.china.huawei.com ([::1]) with mapi id 14.01.0323.003;
	Tue, 7 Aug 2012 17:53:30 +0800
From: Wangzhenguo <wangzhenguo@huawei.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Thread-Topic: [Xen-devel] The hypercall will fail and return EFAULT when the
	page becomes COW by forking process in linux
Thread-Index: Ac1YHX79yLSELb+4TLqeIKwBXZDIFf//q9kA//3DDTCAJQongP/9/iEAgAOV5ICADfI9AP/6ldlwAUsPRwD//lly8A==
Date: Tue, 7 Aug 2012 09:53:28 +0000
Message-ID: <B44CA5218606DC4FA941D19CCEB27B532CF76B32@szxeml528-mbx.china.huawei.com>
References: <B44CA5218606DC4FA941D19CCEB27B5323BEF12C@szxeml528-mbs.china.huawei.com>
	<1341221926.4625.12.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B5323BEF99A@szxeml528-mbs.china.huawei.com>
	<1343135163.18971.15.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B532CF769F2@szxeml528-mbx.china.huawei.com>
	<1343221925.18971.93.camel@zakaz.uk.xensource.com>
	<1343988628.21372.46.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B532CF76ACE@szxeml528-mbx.china.huawei.com>
	<1344259710.11339.39.camel@zakaz.uk.xensource.com>
In-Reply-To: <1344259710.11339.39.camel@zakaz.uk.xensource.com>
Accept-Language: zh-CN, en-US
Content-Language: zh-CN
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-cr-hashedpuzzle: ADQR AoqK CvXx GaVs Gpgp H3u9 JTT0 LPSA MDY4 MRNP Mqba
	NX36 Pbc2 Q2oc R1Ch Sjq7; 2;
	aQBhAG4ALgBjAGEAbQBwAGIAZQBsAGwAQABjAGkAdAByAGkAeAAuAGMAbwBtADsAeABlAG4ALQBkAGUAdgBlAGwAQABsAGkAcwB0AHMALgB4AGUAbgAuAG8AcgBnAA==;
	Sosha1_v1; 7; {C8DD6866-80B5-4D74-92F6-6DB538398B42};
	dwBhAG4AZwB6AGgAZQBuAGcAdQBvAEAAaAB1AGEAdwBlAGkALgBjAG8AbQA=;
	Tue, 07 Aug 2012 09:53:23 GMT;
	UgBFADoAIABbAFgAZQBuAC0AZABlAHYAZQBsAF0AIABUAGgAZQAgAGgAeQBwAGUAcgBjAGEAbABsACAAdwBpAGwAbAAgAGYAYQBpAGwAIABhAG4AZAAgAHIAZQB0AHUAcgBuACAARQBGAEEAVQBMAFQAIAB3AGgAZQBuACAAdABoAGUAIABwAGEAZwBlACAAYgBlAGMAbwBtAGUAcwAgAEMATwBXACAAYgB5ACAAZgBvAHIAawBpAG4AZwAgAHAAcgBvAGMAZQBzAHMAIABpAG4AIABsAGkAbgB1AHgA
x-cr-puzzleid: {C8DD6866-80B5-4D74-92F6-6DB538398B42}
x-originating-ip: [10.135.65.30]
MIME-Version: 1.0
X-CFilter-Loop: Reflected
Cc: Yangxiaowei <xiaowei.yang@huawei.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Hongtao <bobby.hong@huawei.com>, Yechuan <yechuan@huawei.com>
Subject: Re: [Xen-devel] The hypercall will fail and return EFAULT when the
 page becomes COW by forking process in linux
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi, Ian

  Thanks for your reply.
  I added two hooks to xc_osdep_ops.u.privcmd so that the different OSes
can act after allocating and before freeing the hypercall buffer. On
Linux, the first hook prevents the buffer from becoming COW after it is
allocated, and the second restores it to normal before it is freed.

> Another thread in the process might fork though, wasn't that the main
> observation you made when you first posted?
> 
> I think perhaps you mean it is forbidden to fork and then access a
> hypercall buffer allocated before the fork, which sounds ok, since no
> thread which allocates a hypercall buffer should fork with it still
> allocated.

Yes.

> 
> That will make "hg diff" and similar functions show the name of the
> changed function here which is very useful for reviewers.

OK.

> the page being COW when hyper calling*/
> > +    madvise(p, nr_pages * PAGE_SIZE, MADV_DONTFORK);
> 
> madvise(2) tells me that MADV_{DO,DONT}FORK are Linux specific, so I
> think this belongs in the Linux specific alloc_hypercall_buffer hook.

I don't think so. We only need madvise(MADV_DONTFORK) before the hypercall
and madvise(MADV_DOFORK) after it; the pages of the hypercall buffer do not
need to stay protected the rest of the time. So two extra hooks are added to
xc_osdep_ops.u.privcmd, and the Linux version is implemented.

> > +    p = mmap(NULL, size, PROT_READ|PROT_WRITE,
> MAP_PRIVATE|MAP_ANONYMOUS|MAP_LOCKED, -1, 0);
> 
> I suppose this must necessarily return a page aligned result?
The address returned by mmap() is always page-aligned on Linux.

The patch is below:

# HG changeset patch
# Parent bd244b9bc84bc74b6c6182c34369a31d1c5c869c
libxc: Set the VM_DONTCOPY flag on the VMA of the hypercall buffer, to avoid the hypercall buffer becoming COW on hypercall.

In a multi-threaded, multi-process environment a process may, for example,
have two threads: thread A issues a hypercall while thread B calls fork() to
create a child process. After the fork, all pages of the process, including
the hypercall buffers, are COW. When the hypervisor then calls copy_to_user
during the hypercall in thread A's context, the write hits a write-protected
page and the hypercall returns EFAULT.

Fix:
1. Before the hypercall: use madvise(MADV_DONTFORK) so the hypercall buffer is
   not copied into the child process on fork.
2. After the hypercall: undo the effect of MADV_DONTFORK on the hypercall
   buffer with madvise(MADV_DOFORK).
3. Use mmap/munmap instead of malloc/free to allocate and free the buffer,
   bypassing libc.

Signed-off-by: Zhenguo Wang <wangzhenguo@huawei.com>
Signed-off-by: Xiaowei Yang <xiaowei.yang@huawei.com>
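
For illustration, the allocate/prepare and unprepare/free steps described above can be sketched as follows. This is a minimal Linux-only sketch with hypothetical helper names, not the libxc API itself; the fallback path without MAP_LOCKED is an addition for robustness under low RLIMIT_MEMLOCK, not part of the patch.

```c
#define _GNU_SOURCE
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Allocate a page-aligned buffer and keep it out of fork() children:
 * with MADV_DONTFORK set, a concurrent fork() in another thread cannot
 * make these pages copy-on-write underneath a running hypercall. */
static void *hcall_buf_alloc(size_t npages)
{
    size_t size = npages * (size_t)sysconf(_SC_PAGESIZE);
    void *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_LOCKED, -1, 0);
    if (p == MAP_FAILED)  /* e.g. RLIMIT_MEMLOCK too low: retry unlocked */
        p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED)
        return NULL;
    madvise(p, size, MADV_DONTFORK);  /* do not copy into children on fork */
    return p;
}

static void hcall_buf_free(void *p, size_t npages)
{
    size_t size = npages * (size_t)sysconf(_SC_PAGESIZE);
    madvise(p, size, MADV_DOFORK);    /* restore default fork behaviour */
    munmap(p, size);
}
```

In the patch itself, these two madvise steps are behind prepare/unprepare hooks on xc_osdep_ops.u.privcmd, so that non-Linux ports can supply their own implementations or none at all.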

diff -r bd244b9bc84b tools/libxc/xc_hcall_buf.c
--- a/tools/libxc/xc_hcall_buf.c	Tue Aug 07 16:44:20 2012 +0800
+++ b/tools/libxc/xc_hcall_buf.c	Tue Aug 07 16:46:56 2012 +0800
@@ -135,6 +135,9 @@ void *xc__hypercall_buffer_alloc_pages(x
 
     b->hbuf = p;
 
+    if (xch->ops->u.privcmd.prepare_hypercall_buffer)
+        xch->ops->u.privcmd.prepare_hypercall_buffer(xch, xch->ops_handle, p, nr_pages);
+
     memset(p, 0, nr_pages * PAGE_SIZE);
 
     return b->hbuf;
@@ -145,6 +148,9 @@ void xc__hypercall_buffer_free_pages(xc_
     if ( b->hbuf == NULL )
         return;
 
+    if (xch->ops->u.privcmd.unprepare_hypercall_buffer)
+        xch->ops->u.privcmd.unprepare_hypercall_buffer(xch, xch->ops_handle, b->hbuf, nr_pages);
+
     if ( !hypercall_buffer_cache_free(xch, b->hbuf, nr_pages) )
         xch->ops->u.privcmd.free_hypercall_buffer(xch, xch->ops_handle, b->hbuf, nr_pages);
 }
diff -r bd244b9bc84b tools/libxc/xc_linux_osdep.c
--- a/tools/libxc/xc_linux_osdep.c	Tue Aug 07 16:44:20 2012 +0800
+++ b/tools/libxc/xc_linux_osdep.c	Tue Aug 07 16:46:56 2012 +0800
@@ -93,22 +93,28 @@ static void *linux_privcmd_alloc_hyperca
     size_t size = npages * XC_PAGE_SIZE;
     void *p;
 
-    p = xc_memalign(xch, XC_PAGE_SIZE, size);
-    if (!p)
-        return NULL;
-
-    if ( mlock(p, size) < 0 )
-    {
-        free(p);
-        return NULL;
-    }
+    /* Address returned by mmap is page aligned. */
+    p = mmap(NULL, size, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_LOCKED, -1, 0);
+    if ( p == MAP_FAILED )
+        return NULL;
     return p;
 }
 
 static void linux_privcmd_free_hypercall_buffer(xc_interface *xch, xc_osdep_handle h, void *ptr, int npages)
 {
-    munlock(ptr, npages * XC_PAGE_SIZE);
-    free(ptr);
+    munmap(ptr, npages * XC_PAGE_SIZE);
+}
+
+static void linux_privcmd_prepare_hypercall_buffer(xc_interface *xch, xc_osdep_handle h, void *ptr, int npages)
+{
+    /* Do not copy the VMA to child process on fork. Avoid the page being COW on hypercall. */
+    madvise(ptr, npages * XC_PAGE_SIZE, MADV_DONTFORK);
+}
+
+static void linux_privcmd_unprepare_hypercall_buffer(xc_interface *xch, xc_osdep_handle h, void *ptr, int npages)
+{
+    /* Recover the VMA flags. */
+    madvise(ptr, npages * XC_PAGE_SIZE, MADV_DOFORK);
 }
 
 static int linux_privcmd_hypercall(xc_interface *xch, xc_osdep_handle h, privcmd_hypercall_t *hypercall)
@@ -420,6 +424,8 @@ static struct xc_osdep_ops linux_privcmd
     .u.privcmd = {
         .alloc_hypercall_buffer = &linux_privcmd_alloc_hypercall_buffer,
         .free_hypercall_buffer = &linux_privcmd_free_hypercall_buffer,
+        .prepare_hypercall_buffer = &linux_privcmd_prepare_hypercall_buffer,
+        .unprepare_hypercall_buffer = &linux_privcmd_unprepare_hypercall_buffer,
 
         .hypercall = &linux_privcmd_hypercall,
 
diff -r bd244b9bc84b tools/libxc/xenctrlosdep.h
--- a/tools/libxc/xenctrlosdep.h	Tue Aug 07 16:44:20 2012 +0800
+++ b/tools/libxc/xenctrlosdep.h	Tue Aug 07 16:46:56 2012 +0800
@@ -78,6 +78,9 @@ struct xc_osdep_ops
             void *(*alloc_hypercall_buffer)(xc_interface *xch, xc_osdep_handle h, int npages);
             void (*free_hypercall_buffer)(xc_interface *xch, xc_osdep_handle h, void *ptr, int npages);
 
+            void (*prepare_hypercall_buffer)(xc_interface *xch, xc_osdep_handle h, void *ptr, int npages);
+            void (*unprepare_hypercall_buffer)(xc_interface *xch, xc_osdep_handle h, void *ptr, int npages);
+
             int (*hypercall)(xc_interface *xch, xc_osdep_handle h, privcmd_hypercall_t *hypercall);
 
             void *(*map_foreign_batch)(xc_interface *xch, xc_osdep_handle h, uint32_t dom, int prot,
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 10:00:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 10:00:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sygad-0003St-EF; Tue, 07 Aug 2012 10:00:39 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1Sygac-0003So-5G
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 10:00:38 +0000
Received: from [85.158.143.99:63120] by server-2.bemta-4.messagelabs.com id
	CD/94-17938-547E0205; Tue, 07 Aug 2012 10:00:37 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-16.tower-216.messagelabs.com!1344333633!18941669!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27724 invoked from network); 7 Aug 2012 10:00:34 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-16.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Aug 2012 10:00:34 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1SygaT-000M0N-BL; Tue, 07 Aug 2012 10:00:29 +0000
Date: Tue, 7 Aug 2012 11:00:29 +0100
From: Tim Deegan <tim@xen.org>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20120807100029.GC84051@ocelot.phlegethon.org>
References: <501173540200007800090A04@nat28.tlf.novell.com>
	<50116CD9.6000503@eu.citrix.com>
	<501FE96E0200007800092E25@nat28.tlf.novell.com>
	<501FED030200007800092E35@nat28.tlf.novell.com>
	<CAFLBxZZ5ifar7ou3NaeKne3b0rrw=wkjN6E1bEZ8+M=2w2bz2A@mail.gmail.com>
	<5020E13F02000078000931DA@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <5020E13F02000078000931DA@nat28.tlf.novell.com>
User-Agent: Mutt/1.4.2.1i
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] PoD code killing domain before it really gets
	started
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi,

At 08:34 +0100 on 07 Aug (1344328495), Jan Beulich wrote:
> >>> On 06.08.12 at 18:03, George Dunlap <George.Dunlap@eu.citrix.com> wrote:
> > I guess there are two problems with that:
> > * As you've seen, apparently dom0 may access these pages before any
> > faults happen.
> > * If it happens that reclaim_single is below the only zeroed page, the
> > guest will crash even when there is reclaim-able memory available.
> > 
> > Two ways we could fix this:
> > 1. Remove dom0 accesses (what on earth could be looking at a
> > not-yet-created VM?)
> 
> I'm told it's a monitoring daemon, and yes, they are intending to
> adjust it to first query the GFN's type (and don't do the access
> when it's not populated, yet). But wait, I didn't check the code
> when I recommended this - XEN_DOMCTL_getpageframeinfo{2,3}
> also call get_page_from_gfn() with P2M_ALLOC, so would also
> trigger the PoD code (in -unstable at least) - Tim, was that really
> a correct adjustment in 25355:974ad81bb68b? It looks to be a
> 1:1 translation, but is that really necessary?

AFAICT 25355:974ad81bb68b doesn't change anything.  Back in 4.1-testing
the lookup was done with gmfn_to_mfn(), which boils down to a lookup
with p2m_alloc.

> If one wanted to find out whether a page is PoD to avoid getting it
> populated, how would that be done from outside the hypervisor? Would
> we need XEN_DOMCTL_getpageframeinfo4 for this?

We'd certainly need _some_ change to the hypercall interface, as there's
no XEN_DOMCTL_PFINFO_ rune for 'PoD', and presumably you'd want to know
the difference between PoD and not-present.

> > 2. Allocate the PoD cache before populating the p2m table

Any reason not to do this?

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 10:03:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 10:03:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SygdO-0003cQ-0H; Tue, 07 Aug 2012 10:03:30 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SygdM-0003cG-KS
	for xen-devel@lists.xensource.com; Tue, 07 Aug 2012 10:03:28 +0000
Received: from [85.158.138.51:21958] by server-7.bemta-3.messagelabs.com id
	AA/39-04660-FE7E0205; Tue, 07 Aug 2012 10:03:27 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-15.tower-174.messagelabs.com!1344333806!29033446!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgzNjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20031 invoked from network); 7 Aug 2012 10:03:27 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 10:03:27 -0000
X-IronPort-AV: E=Sophos;i="4.77,726,1336348800"; d="scan'208";a="13880947"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Aug 2012 10:03:26 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 7 Aug 2012 11:03:26 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1SygdJ-00067L-Ti;
	Tue, 07 Aug 2012 10:03:25 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1SygdI-0000sT-7i;
	Tue, 07 Aug 2012 11:03:25 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13565-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 7 Aug 2012 11:03:24 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 13565: trouble: broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13565 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13565/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemuu-rhel6hvm-intel  3 host-install(3) broken REGR. vs. 13525

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-pcipt-intel  8 debian-fixup                fail like 13525
 test-amd64-amd64-xl-sedf      5 xen-boot                     fail   like 13525

Tests which did not succeed, but are not blocking:
 test-amd64-i386-rhel6hvm-amd 11 leak-check/check             fail   never pass
 test-amd64-i386-qemuu-rhel6hvm-amd 11 leak-check/check         fail never pass
 test-amd64-i386-rhel6hvm-intel 11 leak-check/check             fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  f8f8912b3de0
baseline version:
 xen                  fa34499e8f6c

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Joe Jin <joe.jin@oracle.com>
  Keir Fraser <keir@xen.org>
  Santosh Jodh <santosh.jodh@citrix.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         broken  
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23331:f8f8912b3de0
tag:         tip
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Fri Aug 03 10:43:24 2012 +0100
    
    nestedhvm: fix nested page fault build error on 32-bit
    
        cc1: warnings being treated as errors
        hvm.c: In function 'hvm_hap_nested_page_fault':
        hvm.c:1282: error: passing argument 2 of
        'nestedhvm_hap_nested_page_fault' from incompatible pointer type
        /local/scratch/ianc/devel/xen-unstable.hg/xen/include/asm/hvm/nestedhvm.h:55:
        note: expected 'paddr_t *' but argument is of type 'long unsigned
        int *'
    
    hvm_hap_nested_page_fault takes an unsigned long gpa and passes &gpa
    to nestedhvm_hap_nested_page_fault which takes a paddr_t *. Since both
    of the callers of hvm_hap_nested_page_fault (svm_do_nested_pgfault and
    ept_handle_violation) actually have the gpa which they pass to
    hvm_hap_nested_page_fault as a paddr_t I think it makes sense to
    change the argument to hvm_hap_nested_page_fault.
    
    The other user of gpa in hvm_hap_nested_page_fault is a call to
    p2m_mem_access_check, which currently also takes a paddr_t gpa but I
    think a paddr_t is appropriate there too.
    
    Jan points out that this is also an issue for >4GB guests on the 32
    bit hypervisor.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Committed-by: Ian Campbell <ian.campbell@citrix.com>
    xen-unstable changeset:   25724:612898732e66
    xen-unstable date:        Fri Aug 03 09:54:17 2012 +0100
    Backported-by: Keir Fraser <keir@xen.org>
    
    
changeset:   23330:5a65d6a1aab7
user:        Santosh Jodh <santosh.jodh@citrix.com>
date:        Fri Aug 03 10:39:13 2012 +0100
    
    Intel VT-d: Dump IOMMU supported page sizes
    
    Signed-off-by: Santosh Jodh <santosh.jodh@citrix.com>
    xen-unstable changeset:   25725:9ad379939b78
    xen-unstable date:        Fri Aug 03 10:38:04 2012 +0100
    
    
changeset:   23329:fa34499e8f6c
user:        Jan Beulich <jbeulich@suse.com>
date:        Mon Jul 30 13:38:58 2012 +0100
    
    x86: fix off-by-one in nr_irqs_gsi calculation
    
    highest_gsi() returns the last valid GSI, not a count.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Joe Jin <joe.jin@oracle.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset:   25688:e6266fc76d08
    xen-unstable date:        Fri Jul 27 12:22:13 2012 +0200
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 10:08:24 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 10:08:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Syghv-0003pc-Ob; Tue, 07 Aug 2012 10:08:11 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Syghu-0003pS-KD
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 10:08:10 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1344334045!12634959!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgzNjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3663 invoked from network); 7 Aug 2012 10:07:25 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 10:07:25 -0000
X-IronPort-AV: E=Sophos;i="4.77,726,1336348800"; d="scan'208";a="13881081"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Aug 2012 10:07:25 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Tue, 7 Aug 2012
	11:07:25 +0100
Message-ID: <1344334043.11339.85.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Wangzhenguo <wangzhenguo@huawei.com>
Date: Tue, 7 Aug 2012 11:07:23 +0100
In-Reply-To: <B44CA5218606DC4FA941D19CCEB27B532CF76B32@szxeml528-mbx.china.huawei.com>
References: <B44CA5218606DC4FA941D19CCEB27B5323BEF12C@szxeml528-mbs.china.huawei.com>
	<1341221926.4625.12.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B5323BEF99A@szxeml528-mbs.china.huawei.com>
	<1343135163.18971.15.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B532CF769F2@szxeml528-mbx.china.huawei.com>
	<1343221925.18971.93.camel@zakaz.uk.xensource.com>
	<1343988628.21372.46.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B532CF76ACE@szxeml528-mbx.china.huawei.com>
	<1344259710.11339.39.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B532CF76B32@szxeml528-mbx.china.huawei.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Yangxiaowei <xiaowei.yang@huawei.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Hongtao <bobby.hong@huawei.com>, Yechuan <yechuan@huawei.com>
Subject: Re: [Xen-devel] The hypercall will fail and return EFAULT when the
 page becomes COW by forking process in linux
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-08-07 at 10:53 +0100, Wangzhenguo wrote:
> > the page being COW when hyper calling*/
> > > +    madvise(p, nr_pages * PAGE_SIZE, MADV_DONTFORK);
> > 
> > madvise(2) tells me that MADV_{DO,DONT}FORK are Linux specific, so I
> > think this belongs in the Linux specific alloc_hypercall_buffer hook.
> 
> I don't think so. We only need madvise(MADV_DONTFORK) before hypercall, 
> and madvise(MADV_DOFORK) after hypercall. The pages in the hypercall buffer 
> need not be protected.

The entire point of the hypercall buffer is that it needs to be safe for
use as a hypercall argument, therefore it does need to be protected.

>  So two extra hooks are added in xc_osdep_ops.u.privcmd. 

I don't understand why these new hooks are needed: you call the first
immediately after (near enough) alloc_hypercall_buffer and the second
immediately before free_hypercall_buffer. The semantics of both of those
existing calls are already that they must provide and release memory
suitable for use as a hypercall argument, so I don't think having a
separate prepare call which takes their result and does "really make
this memory suitable" makes sense.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 10:21:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 10:21:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sygu5-00043B-5k; Tue, 07 Aug 2012 10:20:45 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1Sygu3-000436-Cn
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 10:20:43 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1344334835!11252658!1
X-Originating-IP: [74.125.83.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4794 invoked from network); 7 Aug 2012 10:20:35 -0000
Received: from mail-ee0-f45.google.com (HELO mail-ee0-f45.google.com)
	(74.125.83.45)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 10:20:35 -0000
Received: by eeke53 with SMTP id e53so1138980eek.32
	for <xen-devel@lists.xen.org>; Tue, 07 Aug 2012 03:20:35 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=t8AbIj2xARpGmAOfw0+IIw9ox/AIwksCDEH8lMNtMe4=;
	b=V8zSFrwnveUKkZeLiuDNRZC5OsVOll+lG9o22xtDAk8LBwqvl0pFohi726fvRy3W4h
	pFRzOxHWG4F0l83LMB8mcCmCpHXu71V8OKU5ldCCt3HOn1Hpob6Jajs+NXbW2doX2veO
	8xmMZ3kzz3HxcLYagi4B+doXG/oIrj39yQr8jkh7xTJ3H4WmRI08bQdNfhbqjMCJq2Dc
	hIc3XGUMUrGIAszyCBQ29qclyQHd+2/Boi/TBeBP0LP8fkuQorZJH3N/HKfuVHlIWR2u
	4u3AdBaAcWluHgPt+4SmAOb9v0tHR3nwadgUVGEJDNGtjmw0mM0yLL3yMB7OHf9ajD60
	ehJg==
MIME-Version: 1.0
Received: by 10.14.216.198 with SMTP id g46mr17145989eep.32.1344334835382;
	Tue, 07 Aug 2012 03:20:35 -0700 (PDT)
Received: by 10.14.206.135 with HTTP; Tue, 7 Aug 2012 03:20:35 -0700 (PDT)
In-Reply-To: <5020E13F02000078000931DA@nat28.tlf.novell.com>
References: <501173540200007800090A04@nat28.tlf.novell.com>
	<50116CD9.6000503@eu.citrix.com>
	<501FE96E0200007800092E25@nat28.tlf.novell.com>
	<501FED030200007800092E35@nat28.tlf.novell.com>
	<CAFLBxZZ5ifar7ou3NaeKne3b0rrw=wkjN6E1bEZ8+M=2w2bz2A@mail.gmail.com>
	<5020E13F02000078000931DA@nat28.tlf.novell.com>
Date: Tue, 7 Aug 2012 11:20:35 +0100
X-Google-Sender-Auth: 8tKe6nZ3uiJxUaJ5QOPl2_Bs4Qc
Message-ID: <CAFLBxZZgzqy1moLjuTDJEeswwoh9SPBXmNPDkvB0OdMBdLN92Q@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>, Tim Deegan <tim@xen.org>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] PoD code killing domain before it really gets
	started
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Aug 7, 2012 at 8:34 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>>> On 06.08.12 at 18:03, George Dunlap <George.Dunlap@eu.citrix.com> wrote:
>> I guess there are two problems with that:
>> * As you've seen, apparently dom0 may access these pages before any
>> faults happen.
>> * If it happens that reclaim_single is below the only zeroed page, the
>> guest will crash even when there is reclaim-able memory available.
>>
>> Two ways we could fix this:
>> 1. Remove dom0 accesses (what on earth could be looking at a
>> not-yet-created VM?)
>
> I'm told it's a monitoring daemon, and yes, they are intending to
> adjust it to first query the GFN's type (and don't do the access
> when it's not populated, yet). But wait, I didn't check the code
> when I recommended this - XEN_DOMCTL_getpageframeinfo{2,3}
> also call get_page_from_gfn() with P2M_ALLOC, so would also
> trigger the PoD code (in -unstable at least) - Tim, was that really
> a correct adjustment in 25355:974ad81bb68b? It looks to be a
> 1:1 translation, but is that really necessary? If one wanted to
> find out whether a page is PoD to avoid getting it populated,
> how would that be done from outside the hypervisor? Would
> we need XEN_DOMCTL_getpageframeinfo4 for this?
>
>> 2. Allocate the PoD cache before populating the p2m table
>> 3. Make it so that some accesses fail w/o crashing the guest?  I don't
>> see how that's really practical.
>
> What's wrong with telling control tools that a certain page is
> unpopulated (from which they will be able to imply that it's all
> clear from the guest's pov)?

Because in the general case it's wrong.  The only time crashing the
guest is *not* the right thing to do is in the case we have at hand,
where PoD pages are accessed before the PoD memory is allocated.

Probably the quickest fix would be if there was a simple way for the
monitoring daemon to filter out domains that aren't completely built
yet -- maybe by looking at something in xenstore?

But the current state of things does seem unnecessarily fragile; I
think if it can be done, allocating PoD memory before writing PoD
entries is probably a good thing to do anyway.

 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 10:24:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 10:24:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sygx3-000496-P3; Tue, 07 Aug 2012 10:23:49 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <frediano.ziglio@citrix.com>) id 1Sygx2-000491-Sk
	for xen-devel@lists.xensource.com; Tue, 07 Aug 2012 10:23:49 +0000
Received: from [85.158.138.51:38453] by server-11.bemta-3.messagelabs.com id
	24/96-10722-4BCE0205; Tue, 07 Aug 2012 10:23:48 +0000
X-Env-Sender: frediano.ziglio@citrix.com
X-Msg-Ref: server-15.tower-174.messagelabs.com!1344335027!29038499!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgzNjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24847 invoked from network); 7 Aug 2012 10:23:47 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 10:23:47 -0000
X-IronPort-AV: E=Sophos;i="4.77,726,1336348800"; d="scan'208";a="13881453"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Aug 2012 10:23:47 +0000
Received: from LONPMAILBOX01.citrite.net ([10.30.224.160]) by
	LONPMAILMX01.citrite.net ([10.30.203.162]) with mapi; Tue, 7 Aug 2012
	11:23:47 +0100
From: Frediano Ziglio <frediano.ziglio@citrix.com>
To: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Date: Tue, 7 Aug 2012 11:23:45 +0100
Thread-Topic: [PATCH] Fix invalidate if memory requested was not bucket aligned
Thread-Index: Ac10hro0wbxoitOlRTy0ykMX+48tig==
Message-ID: <7CE799CC0E4DE04B88D5FDF226E18AC2CDFFB08D17@LONPMAILBOX01.citrite.net>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
MIME-Version: 1.0
Cc: Anthony Perard <anthony.perard@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH] Fix invalidate if memory requested was not
	bucket aligned
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

When memory is mapped in qemu_map_cache with lock != 0, a reverse mapping
is created pointing to the virtual address of the location requested.
The cached mapped entry is saved in last_address_vaddr with the base
virtual address (without the bucket offset).
However, when this entry is invalidated, the virtual address saved in the
reverse mapping is used. As a result, the mapping is freed but
last_address_vaddr is not reset.

Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
---
 hw/xen_machine_fv.c |    7 ++++---
 1 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/hw/xen_machine_fv.c b/hw/xen_machine_fv.c
index fdad42a..b385d6a 100644
--- a/hw/xen_machine_fv.c
+++ b/hw/xen_machine_fv.c
@@ -181,9 +181,6 @@ void qemu_invalidate_entry(uint8_t *buffer)
     unsigned long paddr_index;
     int found = 0;
     
-    if (last_address_vaddr == buffer)
-        last_address_page =  ~0UL;
-
     TAILQ_FOREACH(reventry, &locked_entries, next) {
         if (reventry->vaddr_req == buffer) {
             paddr_index = reventry->paddr_index;
@@ -201,6 +198,10 @@ void qemu_invalidate_entry(uint8_t *buffer)
     TAILQ_REMOVE(&locked_entries, reventry, next);
     qemu_free(reventry);
 
+    if (last_address_page >> (MCACHE_BUCKET_SHIFT - XC_PAGE_SHIFT) == paddr_index) {
+        last_address_page =  ~0UL;
+    }
+
     entry = &mapcache_entry[paddr_index % nr_buckets];
     while (entry && entry->paddr_index != paddr_index) {
         pentry = entry;
-- 
1.7.5.4

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 10:32:53 2012
From xen-devel-bounces@lists.xen.org Tue Aug 07 10:32:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 10:32:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Syh5a-0004M0-To; Tue, 07 Aug 2012 10:32:38 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1Syh5Z-0004Lv-IC
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 10:32:37 +0000
Received: from [85.158.138.51:48211] by server-4.bemta-3.messagelabs.com id
	06/8C-06379-4CEE0205; Tue, 07 Aug 2012 10:32:36 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-2.tower-174.messagelabs.com!1344335556!30805785!1
X-Originating-IP: [209.85.215.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3506 invoked from network); 7 Aug 2012 10:32:36 -0000
Received: from mail-ey0-f173.google.com (HELO mail-ey0-f173.google.com)
	(209.85.215.173)
	by server-2.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 10:32:36 -0000
Received: by eaac13 with SMTP id c13so509298eaa.32
	for <xen-devel@lists.xen.org>; Tue, 07 Aug 2012 03:32:35 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=4eFTR0UU4CXe7C8nvsfojbSA7DcVhHKvNKz3sg5uFyA=;
	b=IYcDOuKBJGYQSux83V+pPhyrKMdYCatI4j0xdijcVxBjP09mAz/tIUOidmPbo70/2G
	VLVrRw4I5aYXrag7JSFOgivGhfI057gO+qjKflMW1GLZ8KhmLR2JJrupzjXPAolVTvVD
	JiP/KnjjCEuj+WZ3gUAbOCdyK2wMwyqWru7aEtPF/cR4X4sbNe6QLkBNFF5AsUZyXxH7
	6xAGAxaWpHBnX5eE+/Vof8g4L68XOYAi0b3LIazJ+QX5LvI682Xkqc4nD2AcPCk8tKlh
	b3Hy4bLCED5at8f3hAwoRVlAyX70q1DFw2rvf94JEF7/yrl0Al/ys7cPor/iVwGXJ+l9
	fMyQ==
MIME-Version: 1.0
Received: by 10.14.216.198 with SMTP id g46mr17186074eep.32.1344335555696;
	Tue, 07 Aug 2012 03:32:35 -0700 (PDT)
Received: by 10.14.206.135 with HTTP; Tue, 7 Aug 2012 03:32:35 -0700 (PDT)
In-Reply-To: <20120807100029.GC84051@ocelot.phlegethon.org>
References: <501173540200007800090A04@nat28.tlf.novell.com>
	<50116CD9.6000503@eu.citrix.com>
	<501FE96E0200007800092E25@nat28.tlf.novell.com>
	<501FED030200007800092E35@nat28.tlf.novell.com>
	<CAFLBxZZ5ifar7ou3NaeKne3b0rrw=wkjN6E1bEZ8+M=2w2bz2A@mail.gmail.com>
	<5020E13F02000078000931DA@nat28.tlf.novell.com>
	<20120807100029.GC84051@ocelot.phlegethon.org>
Date: Tue, 7 Aug 2012 11:32:35 +0100
X-Google-Sender-Auth: 8eVZV6o4VuvokBZkaJUsiwcUoJ4
Message-ID: <CAFLBxZbeHVUHmrDL9=Y2LzH9v4WMXELqK9xf8whkVrN9oKOnvw@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: Tim Deegan <tim@xen.org>
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, Jan Beulich <JBeulich@suse.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] PoD code killing domain before it really gets
	started
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Aug 7, 2012 at 11:00 AM, Tim Deegan <tim@xen.org> wrote:
>> > 2. Allocate the PoD cache before populating the p2m table
>
> Any reason not to do this?

The only reason I can think of is that it makes it more fiddly if we
*didn't* want some entries to be PoD.  For example, if it is the case
that the first 2MiB shouldn't be PoD, we'd have to subtract 2MiB from
the target when doing the allocation.  Doing it afterwards means not
having to worry about it.

I can't think of any reason why the first 2MiB couldn't be PoD,
however; so I think this is probably the best route to take.

 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 10:33:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 10:33:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Syh6K-0004PF-B8; Tue, 07 Aug 2012 10:33:24 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Syh6I-0004Oz-J8
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 10:33:22 +0000
Received: from [85.158.138.51:54804] by server-12.bemta-3.messagelabs.com id
	76/50-21301-1FEE0205; Tue, 07 Aug 2012 10:33:21 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-16.tower-174.messagelabs.com!1344335601!30690072!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgzNjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29440 invoked from network); 7 Aug 2012 10:33:21 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 10:33:21 -0000
X-IronPort-AV: E=Sophos;i="4.77,726,1336348800"; d="scan'208";a="13881662"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Aug 2012 10:33:21 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 7 Aug 2012 11:33:21 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Syh6G-0006Op-Qe; Tue, 07 Aug 2012 10:33:20 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Syh6G-0001HJ-KU;
	Tue, 07 Aug 2012 11:33:20 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20512.61168.228964.615080@mariner.uk.xensource.com>
Date: Tue, 7 Aug 2012 11:33:20 +0100
To: Ian Campbell <ian.campbell@citrix.com>
In-Reply-To: <patchbomb.1344330533@cosworth.uk.xensource.com>
References: <1344327916.11339.67.camel@zakaz.uk.xensource.com>
	<patchbomb.1344330533@cosworth.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 0 of 2] xl: fix localhost migration after
	25733:353bc0801b11
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("[PATCH 0 of 2] xl: fix localhost migration after 25733:353bc0801b11"):
> On Tue, 2012-08-07 at 09:25 +0100, Ian Campbell wrote:
> > I'm going to investigate if I can make xl sequence the device
> > teardowns in the migration so as to make it work. 
> 
> Here is a short series which does this. 
> 
> It's a bit ad-hoc, we probably want to revisit this for 4.3 (if not
> now).

I think the right answer at this stage of the release is to back out
the part of the recent change which is causing the problem.

In this case that means abolishing the execution of
/etc/xen/scripts/block, which "libxl: support custom block hotplug
scripts" had as an intended side-effect.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 10:37:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 10:37:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyhAY-0004dD-0t; Tue, 07 Aug 2012 10:37:46 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SyhAW-0004ct-Fp
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 10:37:44 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1344335785!4276657!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgzNjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10335 invoked from network); 7 Aug 2012 10:36:26 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 10:36:26 -0000
X-IronPort-AV: E=Sophos;i="4.77,726,1336348800"; d="scan'208";a="13881749"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Aug 2012 10:36:25 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Tue, 7 Aug 2012
	11:36:25 +0100
Message-ID: <1344335783.11339.87.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Tue, 7 Aug 2012 11:36:23 +0100
In-Reply-To: <20512.61168.228964.615080@mariner.uk.xensource.com>
References: <1344327916.11339.67.camel@zakaz.uk.xensource.com>
	<patchbomb.1344330533@cosworth.uk.xensource.com>
	<20512.61168.228964.615080@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 0 of 2] xl: fix localhost migration after
 25733:353bc0801b11
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-08-07 at 11:33 +0100, Ian Jackson wrote:
> Ian Campbell writes ("[PATCH 0 of 2] xl: fix localhost migration after 25733:353bc0801b11"):
> > On Tue, 2012-08-07 at 09:25 +0100, Ian Campbell wrote:
> > > I'm going to investigate if I can make xl sequence the device
> > > teardowns in the migration so as to make it work. 
> > 
> > Here is a short series which does this. 
> > 
> > It's a bit ad-hoc, we probably want to revisit this for 4.3 (if not
> > now).
> 
> I think the right answer at this stage of the release is to back out
> the part of the recent change which is causing the problem.
> 
> In this case that means abolishing the execution of
> /etc/xen/scripts/block, which "libxl: support custom block hotplug
> scripts" had as an intended side-effect.

OK, I'll look into this.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 10:39:19 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 10:39:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyhBr-0004l2-GK; Tue, 07 Aug 2012 10:39:07 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SyhBq-0004kd-Du
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 10:39:06 +0000
Received: from [85.158.139.83:11695] by server-6.bemta-5.messagelabs.com id
	35/D0-27759-940F0205; Tue, 07 Aug 2012 10:39:05 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-5.tower-182.messagelabs.com!1344335944!30690588!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgzNjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30210 invoked from network); 7 Aug 2012 10:39:05 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-5.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 10:39:05 -0000
X-IronPort-AV: E=Sophos;i="4.77,726,1336348800"; d="scan'208";a="13881867"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Aug 2012 10:39:04 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 7 Aug 2012 11:39:04 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1SyhBo-0006TF-EC; Tue, 07 Aug 2012 10:39:04 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1SyhBo-0001Hm-8s;
	Tue, 07 Aug 2012 11:39:04 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20512.61507.837455.115256@mariner.uk.xensource.com>
Date: Tue, 7 Aug 2012 11:38:59 +0100
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1344332203.11339.79.camel@zakaz.uk.xensource.com>
References: <1344289916-11518-1-git-send-email-dgdegra@tycho.nsa.gov>
	<1344332203.11339.79.camel@zakaz.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH RFC/for-4.2?] libxl: Support backend domain
	ID for disks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [PATCH RFC/for-4.2?] libxl: Support backend domain ID for disks"):
> On Mon, 2012-08-06 at 22:51 +0100, Daniel De Graaf wrote:
> > Allow specification of backend domains for disks, either in the config
> > file or via xl block-attach.
> > 
> > A version of this patch was submitted in October 2011 but was not
> > suitable at the time because libxl did not support the "script=" option
> > for disks in libxl. Now that this option exists, it is possible to
> > specify a backend domain without needing to duplicate the device tree of
> > domain providing the disk in the domain using libxl; just specify
> > script=/bin/true (or any more useful script) to prevent the block script
> > from running in the domain using libxl.

Thanks, Daniel!

> Given that this patch was originally posted so long ago, that the
> script= stuff just went in, that driver domains were on the TODO at one
> point (I think) and the relative simplicity of this patch I'm leaning
> towards taking this in 4.2.

The patch looks good to me and it improves a regression compared to
xend, the fixing of which is one of our goals for 4.2.  So I think
this is fine.

> I'm not sure if using libxl in libxlu is a layering violation or not
> (perhaps Ian J has an opinion), but I suppose it is acceptable (the
> alternative is a twisty maze of callbacks).

No, it's not a layering violation.

The only question in my mind was whether the libxl domain config
struct (in the IDL) should contain (perhaps optionally) the name, so
that the lookup is done at a slightly better time, but the result
would not be pretty at all and I think it's better this way.

> If we are going to expose libxl down to libxlu maybe we should go all
> the way and add the ctx to the XLU_Config?

Yes, I think that would be better.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
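[Archive note: as an illustration of the disk specification discussed in the
patch above, an xl domain config line of this kind could be used.  This is an
editor's sketch, not part of the original mail; the driver-domain name
"storage" and the target path are hypothetical.]

```
# Illustrative xl disk entry using a backend driver domain.
# backend=  names the domain that serves the disk (here "storage",
#           a hypothetical driver domain).
# script=/bin/true prevents the local block hotplug script from
#           running, as described in the patch discussion above.
disk = [ 'backend=storage,vdev=xvda,format=raw,target=/dev/vg/guest,script=/bin/true' ]
```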

From xen-devel-bounces@lists.xen.org Tue Aug 07 10:39:19 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 10:39:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyhBr-0004l2-GK; Tue, 07 Aug 2012 10:39:07 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SyhBq-0004kd-Du
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 10:39:06 +0000
Received: from [85.158.139.83:11695] by server-6.bemta-5.messagelabs.com id
	35/D0-27759-940F0205; Tue, 07 Aug 2012 10:39:05 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-5.tower-182.messagelabs.com!1344335944!30690588!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgzNjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30210 invoked from network); 7 Aug 2012 10:39:05 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-5.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 10:39:05 -0000
X-IronPort-AV: E=Sophos;i="4.77,726,1336348800"; d="scan'208";a="13881867"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Aug 2012 10:39:04 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 7 Aug 2012 11:39:04 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1SyhBo-0006TF-EC; Tue, 07 Aug 2012 10:39:04 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1SyhBo-0001Hm-8s;
	Tue, 07 Aug 2012 11:39:04 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20512.61507.837455.115256@mariner.uk.xensource.com>
Date: Tue, 7 Aug 2012 11:38:59 +0100
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1344332203.11339.79.camel@zakaz.uk.xensource.com>
References: <1344289916-11518-1-git-send-email-dgdegra@tycho.nsa.gov>
	<1344332203.11339.79.camel@zakaz.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH RFC/for-4.2?] libxl: Support backend domain
	ID for disks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [PATCH RFC/for-4.2?] libxl: Support backend domain ID for disks"):
> On Mon, 2012-08-06 at 22:51 +0100, Daniel De Graaf wrote:
> > Allow specification of backend domains for disks, either in the config
> > file or via xl block-attach.
> > 
> > A version of this patch was submitted in October 2011 but was not
> > suitable at the time because libxl did not support the "script=" option
> > for disks in libxl. Now that this option exists, it is possible to
> > specify a backend domain without needing to duplicate the device tree of
> > the domain providing the disk in the domain using libxl; just specify
> > script=/bin/true (or any more useful script) to prevent the block script
> > from running in the domain using libxl.

Thanks, Daniel!

> Given that this patch was originally posted so long ago, that the
> script= stuff just went in, that driver domains were on the TODO at one
> point (I think) and the relative simplicity of this patch I'm leaning
> towards taking this in 4.2.

The patch looks good to me, and it fixes a regression relative to
xend; fixing such regressions is one of our goals for 4.2.  So I
think this is fine.

> I'm not sure if using libxl in libxlu is a layering violation or not
> (perhaps Ian J has an opinion), but I suppose it is acceptable (the
> alternative is a twisty maze of callbacks).

No, it's not a layering violation.

The only question in my mind was whether the libxl domain config
struct (in the IDL) should contain (perhaps optionally) the name, so
that the lookup is done at a slightly better time, but the result
would not be pretty at all and I think it's better this way.

> If we are going to expose libxl down to libxlu maybe we should go all
> the way and add the ctx to the XLU_Config?

Yes, I think that would be better.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 11:03:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 11:03:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyhZa-00053u-Nj; Tue, 07 Aug 2012 11:03:38 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SyhZZ-00053p-2y
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 11:03:37 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1344337387!8951882!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM3MzA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22727 invoked from network); 7 Aug 2012 11:03:08 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-16.tower-27.messagelabs.com with SMTP;
	7 Aug 2012 11:03:08 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 07 Aug 2012 12:03:07 +0100
Message-Id: <502112080200007800093306@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Tue, 07 Aug 2012 12:03:04 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Tim Deegan" <tim@xen.org>
References: <501173540200007800090A04@nat28.tlf.novell.com>
	<50116CD9.6000503@eu.citrix.com>
	<501FE96E0200007800092E25@nat28.tlf.novell.com>
	<501FED030200007800092E35@nat28.tlf.novell.com>
	<CAFLBxZZ5ifar7ou3NaeKne3b0rrw=wkjN6E1bEZ8+M=2w2bz2A@mail.gmail.com>
	<5020E13F02000078000931DA@nat28.tlf.novell.com>
	<20120807100029.GC84051@ocelot.phlegethon.org>
In-Reply-To: <20120807100029.GC84051@ocelot.phlegethon.org>
Mime-Version: 1.0
Content-Disposition: inline
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] PoD code killing domain before it really gets
 started
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 07.08.12 at 12:00, Tim Deegan <tim@xen.org> wrote:
>> > 2. Allocate the PoD cache before populating the p2m table
> 
> Any reason not to do this?

Don't know. I was (maybe wrongly) assuming that the order things
got done in was chosen for a reason. That's why I had added the
tools folks to Cc.

In any case I have asked the reporter to test a corresponding change.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 11:04:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 11:04:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyhZw-00055A-4f; Tue, 07 Aug 2012 11:04:00 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SyhZu-00054W-0V
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 11:03:58 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1344337361!1799791!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgzNjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9283 invoked from network); 7 Aug 2012 11:02:42 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 11:02:42 -0000
X-IronPort-AV: E=Sophos;i="4.77,726,1336348800"; d="scan'208";a="13882437"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Aug 2012 11:02:10 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Tue, 7 Aug 2012
	12:02:10 +0100
Message-ID: <1344337328.11339.92.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Tue, 7 Aug 2012 12:02:08 +0100
In-Reply-To: <20512.61168.228964.615080@mariner.uk.xensource.com>
References: <1344327916.11339.67.camel@zakaz.uk.xensource.com>
	<patchbomb.1344330533@cosworth.uk.xensource.com>
	<20512.61168.228964.615080@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 0 of 2] xl: fix localhost migration after
 25733:353bc0801b11
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-08-07 at 11:33 +0100, Ian Jackson wrote:
> Ian Campbell writes ("[PATCH 0 of 2] xl: fix localhost migration after 25733:353bc0801b11"):
> > On Tue, 2012-08-07 at 09:25 +0100, Ian Campbell wrote:
> > > I'm going to investigate if I can make xl sequence the device
> > > teardowns in the migration so as to make it work. 
> > 
> > Here is a short series which does this. 
> > 
> > It's a bit ad-hoc, we probably want to revisit this for 4.3 (if not
> > now).
> 
> I think the right answer at this stage of the release is to back out
> the part of the recent change which is causing the problem.
> 
> In this case that means abolishing the execution of
> /etc/xen/scripts/block, which "libxl: support custom block hotplug
> scripts" had as an intended side-effect.

Here it is. This approach has the side effect that block hotplug
scripts must now properly nest/reference-count for localhost
migration to work: e.g. the effect of add->add->remove calls to the
script should be that the device remains connected until a second
remove. This may be more or less trivial depending on the device.

I suppose we can live with that limitation since localhost migration is
mostly for test purposes.

I'm slightly concerned that this may also affect non-localhost
migration, e.g. an iSCSI target which only permits a single login.
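For illustration only, the nesting behaviour described above could look
something like the following POSIX shell sketch (this is not Xen's actual
/etc/xen/scripts/block; the function name, state directory and messages
are all invented for the example):

```shell
#!/bin/sh
# Hypothetical reference-counting block hotplug helper: "add" only
# activates the device on the first call, and "remove" only deactivates
# it once the count drops back to zero, so add->add->remove leaves the
# device connected until the second remove.
STATE_DIR="${STATE_DIR:-/tmp/xen-block-refs}"

block_hotplug() {
    op=$1; dev=$2
    mkdir -p "$STATE_DIR"
    # One counter file per device; '/' is not valid in a file name.
    count_file="$STATE_DIR/$(echo "$dev" | tr / _)"
    n=0
    [ -f "$count_file" ] && n=$(cat "$count_file")
    case "$op" in
    add)
        n=$((n + 1))
        echo "$n" > "$count_file"
        if [ "$n" -eq 1 ]; then
            # First add: actually connect (e.g. iSCSI login, losetup).
            echo "activate $dev"
        fi
        ;;
    remove)
        n=$((n - 1))
        if [ "$n" -le 0 ]; then
            rm -f "$count_file"
            # Last remove: actually disconnect.
            echo "deactivate $dev"
        else
            echo "$n" > "$count_file"
        fi
        ;;
    esac
}
```

With such a script, the overlapping add from the incoming localhost
domain keeps the device alive across the outgoing domain's remove.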

8<------------------------------------------------------------

# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1344337292 -3600
# Node ID c5f673d8b330d6195e2aa3bbf63bb594b4bc99ee
# Parent  a7ad22e5525831dd491d7ee1fe538b7543404ac7
libxl: write physical-device node if user did not supply a block script

This reverts one of the intentional changes from 25733:353bc0801b11.
That change exposed an issue with the xl migration protocol, which
although safe triggers the hotplug scripts device sharing logic.

For 4.2 we disable this logic by writing the physical-device xenstore
node ourselves if a user did not supply a script. If the user did
supply a script then we continue to rely on it to write the
physical-device node (not least because the script may create the
device and therefore it is not available before we run the script).

This means that to support localhost migration a block hotplug script
needs to be robust against adding a device twice and should not
deactivate the device until it has been removed twice.

This should be revisited for 4.3.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

diff -r a7ad22e55258 -r c5f673d8b330 tools/libxl/libxl.c
--- a/tools/libxl/libxl.c	Mon Aug 06 12:31:02 2012 +0100
+++ b/tools/libxl/libxl.c	Tue Aug 07 12:01:32 2012 +0100
@@ -1845,18 +1845,31 @@ static void device_disk_add(libxl__egc *
             case LIBXL_DISK_BACKEND_PHY:
                 dev = disk->pdev_path;
 
-                script = libxl__abs_path(gc, disk->script ?: "block",
-                                         libxl__xen_script_dir_path());
-
         do_backend_phy:
                 flexarray_append(back, "params");
                 flexarray_append(back, dev);
 
-                assert(script);
+                script = libxl__abs_path(gc, disk->script?: "block",
+                                         libxl__xen_script_dir_path());
                 flexarray_append_pair(back, "script", script);
 
+                /* If the user did not supply a block script then we
+                 * write the physical-device node ourselves.
+                 *
+                 * If the user did supply a script then that script is
+                 * responsible for this since the block device may not
+                 * exist yet.
+                 */
+                if (!disk->script) {
+                    int major, minor;
+                    libxl__device_physdisk_major_minor(dev, &major, &minor);
+                    flexarray_append_pair(back, "physical-device",
+                            libxl__sprintf(gc, "%x:%x", major, minor));
+                }
+
                 assert(device->backend_kind == LIBXL__DEVICE_KIND_VBD);
                 break;
+
             case LIBXL_DISK_BACKEND_TAP:
                 dev = libxl__blktap_devpath(gc, disk->pdev_path, disk->format);
                 if (!dev) {
@@ -1870,12 +1883,9 @@ static void device_disk_add(libxl__egc *
                     libxl__device_disk_string_of_format(disk->format),
                     disk->pdev_path));
 
-                /*
-                 * tap devices do not support custom block scripts and
-                 * always use the plain block script.
-                 */
-                script = libxl__abs_path(gc, "block",
-                                         libxl__xen_script_dir_path());
+                /* tap backends with scripts are rejected by
+                 * libxl__device_disk_set_backend */
+                assert(!disk->script);
 
                 /* now create a phy device to export the device to the guest */
                 goto do_backend_phy;
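
As a rough illustration of the value the patch writes: the xenstore
physical-device node is the backend device's major:minor pair formatted
in hex ("%x:%x"). A minimal Python sketch of that formatting (the helper
name here is invented, not part of libxl):

```python
import os

def physical_device_node(path_or_rdev):
    """Return the hex 'major:minor' string for a block device,
    mirroring libxl's "%x:%x" formatting of the device numbers.

    Accepts either a device path (stat'd for st_rdev) or a raw
    st_rdev integer, the latter mainly for easy testing.
    """
    if isinstance(path_or_rdev, int):
        rdev = path_or_rdev
    else:
        rdev = os.stat(path_or_rdev).st_rdev
    return "%x:%x" % (os.major(rdev), os.minor(rdev))
```

E.g. a device with major 202 (xvd) and minor 1 yields "ca:1".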



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 11:05:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 11:05:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyhbI-0005DA-Oj; Tue, 07 Aug 2012 11:05:24 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SyhbH-0005Cc-QB
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 11:05:24 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1344337517!8952434!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM3MzA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5099 invoked from network); 7 Aug 2012 11:05:17 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-16.tower-27.messagelabs.com with SMTP;
	7 Aug 2012 11:05:17 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 07 Aug 2012 12:05:16 +0100
Message-Id: <5021128B0200007800093309@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Tue, 07 Aug 2012 12:05:15 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "George Dunlap" <George.Dunlap@eu.citrix.com>
References: <501173540200007800090A04@nat28.tlf.novell.com>
	<50116CD9.6000503@eu.citrix.com>
	<501FE96E0200007800092E25@nat28.tlf.novell.com>
	<501FED030200007800092E35@nat28.tlf.novell.com>
	<CAFLBxZZ5ifar7ou3NaeKne3b0rrw=wkjN6E1bEZ8+M=2w2bz2A@mail.gmail.com>
	<5020E13F02000078000931DA@nat28.tlf.novell.com>
	<CAFLBxZZgzqy1moLjuTDJEeswwoh9SPBXmNPDkvB0OdMBdLN92Q@mail.gmail.com>
In-Reply-To: <CAFLBxZZgzqy1moLjuTDJEeswwoh9SPBXmNPDkvB0OdMBdLN92Q@mail.gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Tim Deegan <tim@xen.org>, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] PoD code killing domain before it really gets
 started
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 07.08.12 at 12:20, George Dunlap <George.Dunlap@eu.citrix.com> wrote:
> Probably the quickest fix would be if there was a simple way for the
> monitoring daemon to filter out domains that aren't completely built
> yet -- maybe by looking at something in xenstore?

Given that the getpageframeinfo thing that I thought could be
used for this doesn't work, I'd be curious what (xenstore or
other) indicators we could suggest they look at. I personally
have no idea...

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 11:08:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 11:08:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Syhdt-0005QF-Ae; Tue, 07 Aug 2012 11:08:05 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <wangzhenguo@huawei.com>) id 1Syhdr-0005Pm-Rj
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 11:08:04 +0000
X-Env-Sender: wangzhenguo@huawei.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1344337611!11190713!1
X-Originating-IP: [119.145.14.65]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTE5LjE0NS4xNC42NSA9PiAzMDcxNw==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12805 invoked from network); 7 Aug 2012 11:06:54 -0000
Received: from szxga02-in.huawei.com (HELO szxga02-in.huawei.com)
	(119.145.14.65) by server-8.tower-27.messagelabs.com with SMTP;
	7 Aug 2012 11:06:54 -0000
Received: from 172.24.2.119 (EHLO szxeml211-edg.china.huawei.com)
	([172.24.2.119])
	by szxrg02-dlp.huawei.com (MOS 4.3.4-GA FastPath queued)
	with ESMTP id ANC24573; Tue, 07 Aug 2012 19:06:51 +0800 (CST)
Received: from SZXEML423-HUB.china.huawei.com (10.82.67.162) by
	szxeml211-edg.china.huawei.com (172.24.2.182) with Microsoft SMTP
	Server (TLS) id 14.1.323.3; Tue, 7 Aug 2012 19:05:39 +0800
Received: from SZXEML528-MBX.china.huawei.com ([169.254.4.120]) by
	szxeml423-hub.china.huawei.com ([10.82.67.162]) with mapi id
	14.01.0323.003; Tue, 7 Aug 2012 19:05:30 +0800
From: Wangzhenguo <wangzhenguo@huawei.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Thread-Topic: [Xen-devel] The hypercall will fail and return EFAULT when the
	page becomes COW by forking process in linux
Thread-Index: Ac1YHX79yLSELb+4TLqeIKwBXZDIFf//q9kA//3DDTCAJQongP/9/iEAgAOV5ICADfI9AP/6ldlwAUsPRwD//lly8P/8/06A//lxU9A=
Date: Tue, 7 Aug 2012 11:05:29 +0000
Message-ID: <B44CA5218606DC4FA941D19CCEB27B532CF76B4F@szxeml528-mbx.china.huawei.com>
References: <B44CA5218606DC4FA941D19CCEB27B5323BEF12C@szxeml528-mbs.china.huawei.com>
	<1341221926.4625.12.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B5323BEF99A@szxeml528-mbs.china.huawei.com>
	<1343135163.18971.15.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B532CF769F2@szxeml528-mbx.china.huawei.com>
	<1343221925.18971.93.camel@zakaz.uk.xensource.com>
	<1343988628.21372.46.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B532CF76ACE@szxeml528-mbx.china.huawei.com>
	<1344259710.11339.39.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B532CF76B32@szxeml528-mbx.china.huawei.com>
	<1344334043.11339.85.camel@zakaz.uk.xensource.com>
In-Reply-To: <1344334043.11339.85.camel@zakaz.uk.xensource.com>
Accept-Language: zh-CN, en-US
Content-Language: zh-CN
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.135.65.30]
MIME-Version: 1.0
X-CFilter-Loop: Reflected
Cc: Yangxiaowei <xiaowei.yang@huawei.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Hongtao <bobby.hong@huawei.com>, Yechuan <yechuan@huawei.com>
Subject: Re: [Xen-devel] The hypercall will fail and return EFAULT when the
 page becomes COW by forking process in linux
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Ian Campbell [mailto:Ian.Campbell@citrix.com]
> Sent: Tuesday, August 07, 2012 6:07 PM

> The entire point of the hypercall buffer is that it needs to be safe
> for use as a hypercall argument, therefore it does need to be protected.

We can't protect pages in the hypercall buffer cache. The cache belongs to the xc_interface: the parent process calls xc_interface_open to obtain an xc_interface handle, and after fork() the child inherits that handle, and with it the hypercall buffer cache. Passing pages from that cache to a hypercall can then cause a segmentation fault, because those pages are no longer valid in the child process. So we need to restore the state of the pages before putting them back into the hypercall buffer cache, and prevent the pages from becoming COW once they have been allocated into the hypercall buffer cache.

> >  So two extra hooks are added in xc_osdep_ops.u.pricmd.
> 
> I don't understand why these new hooks are needed, you call the first
> immediately after (near enough) alloc_hypercall_buffer and the second
> immediately before free_hypercall_buffer. The semantics of both those
> existing calls are already that they must provide and release memory
> suitable for use as a hypercall argument, so I don't think having a
> separate prepare call which takes their result and does "really make
> this memory suitable" makes sense.
> 
> Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 11:39:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 11:39:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Syi8D-0005oe-Vl; Tue, 07 Aug 2012 11:39:25 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Syi8C-0005oZ-4N
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 11:39:24 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1344339558!11272782!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM3MzA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3728 invoked from network); 7 Aug 2012 11:39:18 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-3.tower-27.messagelabs.com with SMTP;
	7 Aug 2012 11:39:18 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 07 Aug 2012 12:39:18 +0100
Message-Id: <50211A84020000780009332E@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Tue, 07 Aug 2012 12:39:16 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
Mime-Version: 1.0
Content-Disposition: inline
Subject: [Xen-devel] [PATCH] mark 8x14 font data const
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Jan Beulich <jbeulich@suse.com>

---
Given that this is trivially correct (only getting this font in line
with the other two), I'll commit this right away, without waiting for
an ack.

--- a/xen/drivers/video/font_8x14.c
+++ b/xen/drivers/video/font_8x14.c
@@ -9,7 +9,7 @@
 
 #define FONTDATAMAX (256*14)
 
-static unsigned char fontdata_8x14[FONTDATAMAX] = {
+static const unsigned char fontdata_8x14[FONTDATAMAX] = {
 
     /* 0 0x00 '^@' */
     0x00, /* 00000000 */




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 11:41:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 11:41:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyiAE-0005u2-Jp; Tue, 07 Aug 2012 11:41:30 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SyiAD-0005tb-Ij
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 11:41:29 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1344339665!4230202!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM3MzA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7270 invoked from network); 7 Aug 2012 11:41:05 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-7.tower-27.messagelabs.com with SMTP;
	7 Aug 2012 11:41:05 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 07 Aug 2012 12:41:05 +0100
Message-Id: <50211AEF0200007800093332@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Tue, 07 Aug 2012 12:41:03 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part390844DF.0__="
Subject: [Xen-devel] [PATCH] eliminate lock profile pointer from spinlock
 structure when !LOCK_PROFILE
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part390844DF.0__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

This pointer is never used for anything, and needlessly increases the
memory footprint of various pieces of data.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/include/xen/spinlock.h
+++ b/xen/include/xen/spinlock.h
@@ -115,11 +115,10 @@ extern void spinlock_profile_reset(unsig
=20
 #else
=20
-struct lock_profile { };
 struct lock_profile_qhead { };
=20
 #define SPIN_LOCK_UNLOCKED                                                =
    \
-    { _RAW_SPIN_LOCK_UNLOCKED, 0xfffu, 0, _LOCK_DEBUG, NULL }
+    { _RAW_SPIN_LOCK_UNLOCKED, 0xfffu, 0, _LOCK_DEBUG }
 #define DEFINE_SPINLOCK(l) spinlock_t l =3D SPIN_LOCK_UNLOCKED
=20
 #define spin_lock_init_prof(s, l) spin_lock_init(&((s)->l))
@@ -133,7 +132,9 @@ typedef struct spinlock {
     u16 recurse_cpu:12;
     u16 recurse_cnt:4;
     struct lock_debug debug;
+#ifdef LOCK_PROFILE
     struct lock_profile *profile;
+#endif
 } spinlock_t;
=20
=20




--=__Part390844DF.0__=
Content-Type: text/plain; name="lock-profile-no-overhead.patch"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment; filename="lock-profile-no-overhead.patch"

eliminate lock profile pointer from spinlock structure when !LOCK_PROFILE=
=0A=0AThis pointer is never used for anything, and needlessly increases =
the=0Amemory footprint of various pieces of data.=0A=0ASigned-off-by: Jan =
Beulich <jbeulich@suse.com>=0A=0A--- a/xen/include/xen/spinlock.h=0A+++ =
b/xen/include/xen/spinlock.h=0A@@ -115,11 +115,10 @@ extern void spinlock_p=
rofile_reset(unsig=0A =0A #else=0A =0A-struct lock_profile { };=0A struct =
lock_profile_qhead { };=0A =0A #define SPIN_LOCK_UNLOCKED                  =
                                  \=0A-    { _RAW_SPIN_LOCK_UNLOCKED, =
0xfffu, 0, _LOCK_DEBUG, NULL }=0A+    { _RAW_SPIN_LOCK_UNLOCKED, 0xfffu, =
0, _LOCK_DEBUG }=0A #define DEFINE_SPINLOCK(l) spinlock_t l =3D SPIN_LOCK_U=
NLOCKED=0A =0A #define spin_lock_init_prof(s, l) spin_lock_init(&((s)->l))=
=0A@@ -133,7 +132,9 @@ typedef struct spinlock {=0A     u16 recurse_cpu:12;=
=0A     u16 recurse_cnt:4;=0A     struct lock_debug debug;=0A+#ifdef =
LOCK_PROFILE=0A     struct lock_profile *profile;=0A+#endif=0A } spinlock_t=
;=0A =0A =0A
--=__Part390844DF.0__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part390844DF.0__=--


From xen-devel-bounces@lists.xen.org Tue Aug 07 11:57:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 11:57:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyiPI-00069l-50; Tue, 07 Aug 2012 11:57:04 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SyiPH-00069g-1f
	for xen-devel@lists.xensource.com; Tue, 07 Aug 2012 11:57:03 +0000
Received: from [85.158.143.35:8277] by server-3.bemta-4.messagelabs.com id
	79/04-01511-E8201205; Tue, 07 Aug 2012 11:57:02 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1344340581!19060452!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgzNjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31145 invoked from network); 7 Aug 2012 11:56:22 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 11:56:22 -0000
X-IronPort-AV: E=Sophos;i="4.77,726,1336348800"; d="scan'208";a="13883730"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Aug 2012 11:56:21 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 7 Aug 2012 12:56:21 +0100
Date: Tue, 7 Aug 2012 12:56:09 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Frediano Ziglio <frediano.ziglio@citrix.com>
In-Reply-To: <7CE799CC0E4DE04B88D5FDF226E18AC2CDFFB08D17@LONPMAILBOX01.citrite.net>
Message-ID: <alpine.DEB.2.02.1208071253090.4645@kaball.uk.xensource.com>
References: <7CE799CC0E4DE04B88D5FDF226E18AC2CDFFB08D17@LONPMAILBOX01.citrite.net>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Anthony Perard <anthony.perard@citrix.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH] Fix invalidate if memory requested was not
 bucket aligned
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 7 Aug 2012, Frediano Ziglio wrote:
> When memory is mapped in qemu_map_cache with lock != 0 a reverse mapping
> is created pointing to the virtual address of location requested.
> The cached mapped entry is saved in last_address_vaddr with the memory
> location of the base virtual address (without bucket offset).
> However, when this entry is invalidated, the virtual address saved in the
> reverse mapping is used. This causes the mapping to be freed while
> last_address_vaddr is not reset.
> 
> Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>

Could you please send a patch for upstream QEMU too?


> ---
>  hw/xen_machine_fv.c |    7 ++++---
>  1 files changed, 4 insertions(+), 3 deletions(-)
> 
> diff --git a/hw/xen_machine_fv.c b/hw/xen_machine_fv.c
> index fdad42a..b385d6a 100644
> --- a/hw/xen_machine_fv.c
> +++ b/hw/xen_machine_fv.c
> @@ -181,9 +181,6 @@ void qemu_invalidate_entry(uint8_t *buffer)
>      unsigned long paddr_index;
>      int found = 0;
>      
> -    if (last_address_vaddr == buffer)
> -        last_address_page =  ~0UL;
> -
>      TAILQ_FOREACH(reventry, &locked_entries, next) {
>          if (reventry->vaddr_req == buffer) {
>              paddr_index = reventry->paddr_index;
> @@ -201,6 +198,10 @@ void qemu_invalidate_entry(uint8_t *buffer)
>      TAILQ_REMOVE(&locked_entries, reventry, next);
>      qemu_free(reventry);
>  
> +    if (last_address_page >> (MCACHE_BUCKET_SHIFT - XC_PAGE_SHIFT) == paddr_index) {
> +        last_address_page =  ~0UL;
> +    }

code style: I wouldn't mind a pair of parentheses around
(last_address_page >> (MCACHE_BUCKET_SHIFT - XC_PAGE_SHIFT)); also keep to a
maximum of 80 columns.
Other than that it is OK.


>      entry = &mapcache_entry[paddr_index % nr_buckets];
>      while (entry && entry->paddr_index != paddr_index) {
>          pentry = entry;



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 12:08:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 12:08:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyiaX-0006XW-6J; Tue, 07 Aug 2012 12:08:41 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SyiaV-0006XR-Qm
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 12:08:39 +0000
Received: from [85.158.139.83:43080] by server-4.bemta-5.messagelabs.com id
	F0/99-32474-74501205; Tue, 07 Aug 2012 12:08:39 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-182.messagelabs.com!1344341318!19393034!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM3MzA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2857 invoked from network); 7 Aug 2012 12:08:38 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-182.messagelabs.com with SMTP;
	7 Aug 2012 12:08:38 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 07 Aug 2012 13:08:37 +0100
Message-Id: <502121640200007800093359@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Tue, 07 Aug 2012 13:08:36 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Frediano Ziglio" <frediano.ziglio@citrix.com>
References: <7CE799CC0E4DE04B88D5FDF226E18AC2CDFFB08D0D@LONPMAILBOX01.citrite.net>
	<501C08F20200007800092920@nat28.tlf.novell.com>
In-Reply-To: <501C08F20200007800092920@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Increment buffer used to read first boot
 sector in order to accomodate space for 4k sector
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 03.08.12 at 17:22, "Jan Beulich" <JBeulich@suse.com> wrote:
>>>> On 03.08.12 at 16:50, Frediano Ziglio <frediano.ziglio@citrix.com> wrote:
>> This patch increases the sector buffer in order to avoid this overflow
> 
> And if we indeed need to adjust for this, then let's fix this properly:
> Don't just increase the buffer size, but also check that the sector
> size reported actually fits. That may require calling Fn48 first,
> before doing the actual read.

This, btw, is how current Linux does it. But I don't think they're
really protected from corruption either when sufficiently large
sector sizes are encountered...

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 12:09:34 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 12:09:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Syib8-0006ar-JY; Tue, 07 Aug 2012 12:09:18 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Syib7-0006ai-1M
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 12:09:17 +0000
Received: from [85.158.143.35:21145] by server-1.bemta-4.messagelabs.com id
	9D/FA-24392-C6501205; Tue, 07 Aug 2012 12:09:16 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1344341355!19063618!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgzNjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3353 invoked from network); 7 Aug 2012 12:09:15 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 12:09:15 -0000
X-IronPort-AV: E=Sophos;i="4.77,726,1336348800"; d="scan'208";a="13884240"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Aug 2012 12:09:15 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 7 Aug 2012 13:09:15 +0100
Date: Tue, 7 Aug 2012 13:08:53 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Jan Beulich <JBeulich@suse.com>
In-Reply-To: <5020011F0200007800092FD7@nat28.tlf.novell.com>
Message-ID: <alpine.DEB.2.02.1208071257330.4645@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
	<1344262325-26598-3-git-send-email-stefano.stabellini@eu.citrix.com>
	<5020011F0200007800092FD7@nat28.tlf.novell.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "Tim Deegan \(3P\)" <Tim.Deegan@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 3/5] xen: few more xen_ulong_t substitutions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 6 Aug 2012, Jan Beulich wrote:
> >>> On 06.08.12 at 16:12, Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
> > There are still a few unsigned long fields in the xen public interface:
> > replace them with xen_ulong_t.
> > 
> > Also typedef xen_ulong_t to uint64_t on ARM.
> 
> While this change by itself already looks suspicious to me, I don't
> follow what the global replacement is going to be good for, in
> particular when done in places that ARM should be completely
> ignorant of, e.g.

See below


> > --- a/xen/include/public/physdev.h
> > +++ b/xen/include/public/physdev.h
> > @@ -124,7 +124,7 @@ DEFINE_XEN_GUEST_HANDLE(physdev_set_iobitmap_t);
> >  #define PHYSDEVOP_apic_write             9
> >  struct physdev_apic {
> >      /* IN */
> > -    unsigned long apic_physbase;
> > +    xen_ulong_t apic_physbase;
> >      uint32_t reg;
> >      /* IN or OUT */
> >      uint32_t value;
> >...

This change is actually not required, considering that ARM doesn't have
an APIC. I changed apic_physbase to xen_ulong_t only for consistency,
but it wouldn't make any difference for ARM (or x86).
If you think that it is better to keep it unsigned long, I'll remove
this chunk from the patch.


> > --- a/xen/include/public/xen.h
> > +++ b/xen/include/public/xen.h
> > @@ -518,8 +518,8 @@ DEFINE_XEN_GUEST_HANDLE(mmu_update_t);
> >   * NB. The fields are natural register size for this architecture.
> >   */
> >  struct multicall_entry {
> > -    unsigned long op, result;
> > -    unsigned long args[6];
> > +    xen_ulong_t op, result;
> > +    xen_ulong_t args[6];
> 
> And here I really start to wonder - what use is it to put all 64-bit
> values here on a 32-bit arch? You certainly know a lot more about
> ARM than me, but this looks pretty inefficient, the more that
> you'll have to deal with checking the full values when converting
> to e.g. pointers anyway, in order to avoid behavioral differences
> between running on a 32- or 64-bit host. Zero-extending from
> 32-bits when in a 64-bit hypervisor wouldn't have this problem.

Actually the multicall_entry change is wrong, thanks for pointing it out.

The idea is that pointers are always 8 bytes in size and 8-byte aligned,
except when they are passed as hypercall arguments, in which case a 32-bit
guest would use 32-bit pointers and a 64-bit guest would use 64-bit
pointers.

Considering that each field of a multicall_entry is usually passed as a
hypercall parameter, they should all remain unsigned long.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 12:15:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 12:15:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyihE-0006qC-D1; Tue, 07 Aug 2012 12:15:36 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1SyihC-0006q7-L7
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 12:15:34 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1344341687!11677613!1
X-Originating-IP: [209.85.215.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12739 invoked from network); 7 Aug 2012 12:14:48 -0000
Received: from mail-ey0-f173.google.com (HELO mail-ey0-f173.google.com)
	(209.85.215.173)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 12:14:48 -0000
Received: by eaac13 with SMTP id c13so547348eaa.32
	for <xen-devel@lists.xen.org>; Tue, 07 Aug 2012 05:14:47 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=syaIwb8FNuKlHRZ7qx/t78k8C8LEYfYHRMF8zW0PeKw=;
	b=0xgPj6Ex8nsOnS7rOxReGnsUvcOWrRyJNiHxSAI82mn2PyJBPu69349K1HHl0BF3Mn
	PolMi3kTi6TR8t+VHplLGG6x/rCzXbxG25HgG5wjt98QtgxLXJaFQ3rchRSlrG5pncuv
	No1/zsVgeHlN7kGPDHkIjqdgl7pWI+tUOYPDBR3mVoV03YEigvF0y/eJt6Ty3dLvstwj
	gLIbRh6IsP1iIlugpAXRr+gMIEtx8+Z5e1Gk/A7QYDJ+r4DIlKyPKEvDwYTya7Blc580
	kNISGHRQpq+quqCQmoYLH6m+XCy6LeLzDX0iU8sEvlsPxXiYGLCc1L9c1lbCs31iXNm8
	S+VA==
Received: by 10.14.0.198 with SMTP id 46mr11295501eeb.30.1344341687318;
	Tue, 07 Aug 2012 05:14:47 -0700 (PDT)
Received: from [192.168.1.68] (host86-178-46-238.range86-178.btcentralplus.com.
	[86.178.46.238])
	by mx.google.com with ESMTPS id k41sm55651736eep.13.2012.08.07.05.14.46
	(version=SSLv3 cipher=OTHER); Tue, 07 Aug 2012 05:14:46 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.32.0.111121
Date: Tue, 07 Aug 2012 13:14:44 +0100
From: Keir Fraser <keir.xen@gmail.com>
To: Jan Beulich <JBeulich@suse.com>,
	xen-devel <xen-devel@lists.xen.org>
Message-ID: <CC46C544.3AEE7%keir.xen@gmail.com>
Thread-Topic: [Xen-devel] [PATCH] eliminate lock profile pointer from spinlock
	structure when !LOCK_PROFILE
Thread-Index: Ac10ljqPC049I4wb40GV4vWb5z3rnQ==
In-Reply-To: <50211AEF0200007800093332@nat28.tlf.novell.com>
Mime-version: 1.0
Subject: Re: [Xen-devel] [PATCH] eliminate lock profile pointer from
 spinlock structure when !LOCK_PROFILE
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 07/08/2012 12:41, "Jan Beulich" <JBeulich@suse.com> wrote:

> This pointer is never used for anything, and needlessly increases the
> memory footprint of various pieces of data.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Good catch.

Acked-by: Keir Fraser <keir@xen.org>

> --- a/xen/include/xen/spinlock.h
> +++ b/xen/include/xen/spinlock.h
> @@ -115,11 +115,10 @@ extern void spinlock_profile_reset(unsig
>  
>  #else
>  
> -struct lock_profile { };
>  struct lock_profile_qhead { };
>  
>  #define SPIN_LOCK_UNLOCKED
> \
> -    { _RAW_SPIN_LOCK_UNLOCKED, 0xfffu, 0, _LOCK_DEBUG, NULL }
> +    { _RAW_SPIN_LOCK_UNLOCKED, 0xfffu, 0, _LOCK_DEBUG }
>  #define DEFINE_SPINLOCK(l) spinlock_t l = SPIN_LOCK_UNLOCKED
>  
>  #define spin_lock_init_prof(s, l) spin_lock_init(&((s)->l))
> @@ -133,7 +132,9 @@ typedef struct spinlock {
>      u16 recurse_cpu:12;
>      u16 recurse_cnt:4;
>      struct lock_debug debug;
> +#ifdef LOCK_PROFILE
>      struct lock_profile *profile;
> +#endif
>  } spinlock_t;
>  
>  
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 12:17:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 12:17:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Syiin-0006uX-34; Tue, 07 Aug 2012 12:17:13 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Syiim-0006uR-0R
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 12:17:12 +0000
Received: from [85.158.139.83:59058] by server-10.bemta-5.messagelabs.com id
	5B/CB-24472-74701205; Tue, 07 Aug 2012 12:17:11 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-182.messagelabs.com!1344341829!30641894!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM3MzA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18458 invoked from network); 7 Aug 2012 12:17:09 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-182.messagelabs.com with SMTP;
	7 Aug 2012 12:17:09 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 07 Aug 2012 13:17:08 +0100
Message-Id: <502123620200007800093377@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Tue, 07 Aug 2012 13:17:06 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "George Dunlap" <George.Dunlap@eu.citrix.com>
References: <501173540200007800090A04@nat28.tlf.novell.com>
	<50116CD9.6000503@eu.citrix.com>
	<501FE96E0200007800092E25@nat28.tlf.novell.com>
	<501FED030200007800092E35@nat28.tlf.novell.com>
	<CAFLBxZZ5ifar7ou3NaeKne3b0rrw=wkjN6E1bEZ8+M=2w2bz2A@mail.gmail.com>
In-Reply-To: <CAFLBxZZ5ifar7ou3NaeKne3b0rrw=wkjN6E1bEZ8+M=2w2bz2A@mail.gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] PoD code killing domain before it really gets
 started
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 06.08.12 at 18:03, George Dunlap <George.Dunlap@eu.citrix.com> wrote:
> 2. Allocate the PoD cache before populating the p2m table

So this doesn't work: the call simply has no effect (and never
reaches p2m_pod_set_cache_target()), apparently because of

    /* P == B: Nothing to do. */
    if ( p2md->pod.entry_count == 0 )
        goto out;

in p2m_pod_set_mem_target(). Now I'm not sure about the
proper adjustment here: Entirely dropping the conditional is
certainly wrong. Would

    if ( p2md->pod.entry_count == 0 && d->tot_pages > 0 )
        goto out;

be okay?

But then later in that function we also have

    /* B < T': Set the cache size equal to # of outstanding entries,
     * let the balloon driver fill in the rest. */
    if ( pod_target > p2md->pod.entry_count )
        pod_target = p2md->pod.entry_count;

which in the case at hand would set pod_target to 0, and the
whole operation would again not have any effect afaict. So
maybe this was the reason to do this operation _after_ the
normal address space population?

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 12:23:24 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 12:23:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyioY-00078A-W6; Tue, 07 Aug 2012 12:23:10 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <frediano.ziglio@citrix.com>) id 1SyioX-000785-9L
	for xen-devel@lists.xensource.com; Tue, 07 Aug 2012 12:23:09 +0000
Received: from [85.158.143.35:12094] by server-3.bemta-4.messagelabs.com id
	2F/04-01511-CA801205; Tue, 07 Aug 2012 12:23:08 +0000
X-Env-Sender: frediano.ziglio@citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1344342187!13682274!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgzNjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19701 invoked from network); 7 Aug 2012 12:23:08 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 12:23:08 -0000
X-IronPort-AV: E=Sophos;i="4.77,726,1336348800"; d="scan'208";a="13884584"
Received: from lonpmailmx02.citrite.net ([10.30.203.163])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Aug 2012 12:23:07 +0000
Received: from LONPMAILBOX01.citrite.net ([10.30.224.160]) by
	LONPMAILMX02.citrite.net ([10.30.203.163]) with mapi; Tue, 7 Aug 2012
	13:23:07 +0100
From: Frediano Ziglio <frediano.ziglio@citrix.com>
To: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Date: Tue, 7 Aug 2012 13:23:06 +0100
Thread-Topic: Fix invalidate if memory requested was not bucket aligned
Thread-Index: Ac10l2YXbSBGaEnWSIepIX0CWw+1RQ==
Message-ID: <7CE799CC0E4DE04B88D5FDF226E18AC2CDFFB08D19@LONPMAILBOX01.citrite.net>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>
Subject: [Xen-devel] Fix invalidate if memory requested was not bucket
	aligned
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


When memory is mapped in qemu_map_cache with lock != 0, a reverse mapping
is created pointing to the virtual address of the location requested.
The cached mapped entry is saved in last_address_vaddr with the base
virtual address (without the bucket offset).
However, when this entry is invalidated, the virtual address saved in the
reverse mapping is used. As a result, the mapping is freed but
last_address_vaddr is not reset.

Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
---
 xen-mapcache.c |    9 +++++----
 1 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/xen-mapcache.c b/xen-mapcache.c
index 59ba085..9cd6db3 100644
--- a/xen-mapcache.c
+++ b/xen-mapcache.c
@@ -320,10 +320,6 @@ void xen_invalidate_map_cache_entry(uint8_t *buffer)
     target_phys_addr_t size;
     int found = 0;
 
-    if (mapcache->last_address_vaddr == buffer) {
-        mapcache->last_address_index = -1;
-    }
-
     QTAILQ_FOREACH(reventry, &mapcache->locked_entries, next) {
         if (reventry->vaddr_req == buffer) {
             paddr_index = reventry->paddr_index;
@@ -342,6 +338,11 @@ void xen_invalidate_map_cache_entry(uint8_t *buffer)
     QTAILQ_REMOVE(&mapcache->locked_entries, reventry, next);
     g_free(reventry);
 
+    if (mapcache->last_address_index == paddr_index) {
+        mapcache->last_address_index = -1;
+        mapcache->last_address_vaddr = NULL;
+    }
+
     entry = &mapcache->entry[paddr_index % mapcache->nr_buckets];
     while (entry && (entry->paddr_index != paddr_index || entry->size != size)) {
         pentry = entry;
-- 
1.7.5.4

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 12:28:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 12:28:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Syit2-0007Gc-M6; Tue, 07 Aug 2012 12:27:48 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Syit0-0007GS-VT
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 12:27:47 +0000
Received: from [85.158.143.99:49263] by server-3.bemta-4.messagelabs.com id
	D8/AC-01511-1C901205; Tue, 07 Aug 2012 12:27:45 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-12.tower-216.messagelabs.com!1344342464!25393050!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgzNjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32472 invoked from network); 7 Aug 2012 12:27:44 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-12.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 12:27:44 -0000
X-IronPort-AV: E=Sophos;i="4.77,726,1336348800"; d="scan'208";a="13884679"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Aug 2012 12:27:44 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 7 Aug 2012 13:27:44 +0100
Date: Tue, 7 Aug 2012 13:27:21 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Jan Beulich <JBeulich@suse.com>
In-Reply-To: <502004BE0200007800093028@nat28.tlf.novell.com>
Message-ID: <alpine.DEB.2.02.1208071318430.4645@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
	<1344262325-26598-1-git-send-email-stefano.stabellini@eu.citrix.com>
	<501FFFB50200007800092FC7@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208061640270.4645@kaball.uk.xensource.com>
	<502004BE0200007800093028@nat28.tlf.novell.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "Tim Deegan \(3P\)" <Tim.Deegan@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>,
	Ian Campbell <Ian.Campbell@citrix.com>, Jean.Guyader@citrix.com,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 1/5] xen: improve changes to
 xen_add_to_physmap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 6 Aug 2012, Jan Beulich wrote:
> >>> On 06.08.12 at 17:43, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> wrote:
> > On Mon, 6 Aug 2012, Jan Beulich wrote:
> >> >>> On 06.08.12 at 16:12, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> >> wrote:
> >> > This is an incremental patch on top of
> >> > c0bc926083b5987a3e9944eec2c12ad0580100e2: in order to retain binary
> >> > compatibility, it is better to introduce foreign_domid as part of a
> >> > union containing both size and foreign_domid.
> >> > 
> >> > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> >> > ---
> >> >  xen/include/public/memory.h |   11 +++++++----
> >> >  1 files changed, 7 insertions(+), 4 deletions(-)
> >> > 
> >> > diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
> >> > index b2adfbe..b0af2fd 100644
> >> > --- a/xen/include/public/memory.h
> >> > +++ b/xen/include/public/memory.h
> >> > @@ -208,8 +208,12 @@ struct xen_add_to_physmap {
> >> >      /* Which domain to change the mapping for. */
> >> >      domid_t domid;
> >> >  
> >> > -    /* Number of pages to go through for gmfn_range */
> >> > -    uint16_t    size;
> >> > +    union {
> >> > +        /* Number of pages to go through for gmfn_range */
> >> > +        uint16_t    size;
> >> > +        /* IFF gmfn_foreign */
> >> > +        domid_t foreign_domid;
> >> > +    };
> >> 
> >> But you're clear that this isn't standard C, and hence can't go
> >> in this way?
> >> 
> > 
> > Why? It is c11 if I am not mistaken.
> 
> Yes. But the common baseline is C89.

Do we need to keep it C89?
If I am not mistaken anonymous unions have been in GCC for more than 10
years now.

If we do want to keep it C89, considering that size was introduced only
recently, do you think that it would be OK for me to change the
interface and just add size to a union?
Like this:

union {
    /* Number of pages to go through for gmfn_range */
    uint16_t    size;
    /* IFF gmfn_foreign */
    domid_t foreign_domid;
} u;

Also CC'ing Jean.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 12:28:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 12:28:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Syith-0007JJ-3U; Tue, 07 Aug 2012 12:28:29 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <frediano.ziglio@citrix.com>) id 1Syitg-0007JC-5y
	for xen-devel@lists.xensource.com; Tue, 07 Aug 2012 12:28:28 +0000
Received: from [85.158.143.35:25845] by server-1.bemta-4.messagelabs.com id
	3D/5B-24392-BE901205; Tue, 07 Aug 2012 12:28:27 +0000
X-Env-Sender: frediano.ziglio@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1344342506!6782758!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgzNjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28676 invoked from network); 7 Aug 2012 12:28:26 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 12:28:26 -0000
X-IronPort-AV: E=Sophos;i="4.77,726,1336348800"; d="scan'208";a="13884697"
Received: from lonpmailmx02.citrite.net ([10.30.203.163])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Aug 2012 12:28:26 +0000
Received: from LONPMAILBOX01.citrite.net ([10.30.224.160]) by
	LONPMAILMX02.citrite.net ([10.30.203.163]) with mapi; Tue, 7 Aug 2012
	13:28:26 +0100
From: Frediano Ziglio <frediano.ziglio@citrix.com>
To: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Date: Tue, 7 Aug 2012 13:28:25 +0100
Thread-Topic: [PATCH v2] Fix invalidate if memory requested was not bucket
	aligned
Thread-Index: Ac10mCQ8xRF05MYXQsWu//YRIcFFbA==
Message-ID: <7CE799CC0E4DE04B88D5FDF226E18AC2CDFFB08D1A@LONPMAILBOX01.citrite.net>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
MIME-Version: 1.0
Cc: Anthony Perard <anthony.perard@citrix.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: [Xen-devel] [PATCH v2] Fix invalidate if memory requested was not
	bucket aligned
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


When memory is mapped in qemu_map_cache with lock != 0, a reverse
mapping is created pointing to the virtual address of the location
requested.
The cached mapped entry is saved in last_address_vaddr with the memory
location of the base virtual address (without the bucket offset).
However, when this entry is invalidated, the virtual address saved in
the reverse mapping is used. As a result the mapping is freed but
last_address_vaddr is not reset.

Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
---
 hw/xen_machine_fv.c |    8 +++++---
 1 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/hw/xen_machine_fv.c b/hw/xen_machine_fv.c
index fdad42a..6bdb8f4 100644
--- a/hw/xen_machine_fv.c
+++ b/hw/xen_machine_fv.c
@@ -181,9 +181,6 @@ void qemu_invalidate_entry(uint8_t *buffer)
     unsigned long paddr_index;
     int found = 0;
     
-    if (last_address_vaddr == buffer)
-        last_address_page =  ~0UL;
-
     TAILQ_FOREACH(reventry, &locked_entries, next) {
         if (reventry->vaddr_req == buffer) {
             paddr_index = reventry->paddr_index;
@@ -201,6 +198,11 @@ void qemu_invalidate_entry(uint8_t *buffer)
     TAILQ_REMOVE(&locked_entries, reventry, next);
     qemu_free(reventry);
 
+    if ((last_address_page >> (MCACHE_BUCKET_SHIFT - XC_PAGE_SHIFT))
+        == paddr_index) {
+        last_address_page =  ~0UL;
+    }
+
     entry = &mapcache_entry[paddr_index % nr_buckets];
     while (entry && entry->paddr_index != paddr_index) {
         pentry = entry;
-- 
1.7.5.4

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 12:35:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 12:35:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Syj0l-0007cf-65; Tue, 07 Aug 2012 12:35:47 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Syj0j-0007ca-SQ
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 12:35:46 +0000
Received: from [85.158.139.83:38325] by server-3.bemta-5.messagelabs.com id
	B5/A2-31899-1AB01205; Tue, 07 Aug 2012 12:35:45 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-5.tower-182.messagelabs.com!1344342944!30714764!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgzNjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19295 invoked from network); 7 Aug 2012 12:35:44 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-5.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 12:35:44 -0000
X-IronPort-AV: E=Sophos;i="4.77,726,1336348800"; d="scan'208";a="13884897"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Aug 2012 12:35:42 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 7 Aug 2012 13:35:42 +0100
Date: Tue, 7 Aug 2012 13:35:19 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Jan Beulich <JBeulich@suse.com>
In-Reply-To: <5020D0B90200007800093181@nat28.tlf.novell.com>
Message-ID: <alpine.DEB.2.02.1208071331030.4645@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
	<1344262325-26598-4-git-send-email-stefano.stabellini@eu.citrix.com>
	<5020025C0200007800092FEE@nat28.tlf.novell.com>
	<1344268070.11339.53.camel@zakaz.uk.xensource.com>
	<502005B2020000780009302B@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208061659320.4645@kaball.uk.xensource.com>
	<5020D0B90200007800093181@nat28.tlf.novell.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "Tim Deegan \(3P\)" <Tim.Deegan@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 4/5] xen: introduce XEN_GUEST_HANDLE_PARAM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 7 Aug 2012, Jan Beulich wrote:
> >>> On 06.08.12 at 18:02, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> wrote:
> > On Mon, 6 Aug 2012, Jan Beulich wrote:
> >> >>> On 06.08.12 at 17:47, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> >> > On Mon, 2012-08-06 at 16:43 +0100, Jan Beulich wrote:
> >> >> >>> On 06.08.12 at 16:12, Stefano Stabellini <stefano.stabellini@eu.citrix.com> 
> >> > wrote:
> >> >> > Note: this change does not make any difference on x86 and ia64.
> >> >> > 
> >> >> > 
> >> >> > XEN_GUEST_HANDLE_PARAM is going to be used to distinguish guest pointers
> >> >> > stored in memory from guest pointers as hypercall parameters.
> >> >> 
> >> >> I have to admit that I really dislike this, to a large part because of
> >> >> the follow up patch that clutters the corresponding function
> >> >> declarations even further. Plus I see no mechanism to convert
> >> >> between the two, yet I can't see how - long term at least - you
> >> >> could get away without such conversion.
> >> >> 
> >> >> Is it really a well thought through and settled upon decision to
> >> >> make guest handles 64 bits wide even on 32-bit ARM? After all,
> >> >> both x86 and PPC got away without doing so
> >> > 
> >> > Well, on x86 we have the compat XLAT layer, which is a pretty complex
> >> > piece of code, so "got away without" is a bit strong...
> >> 
> >> Hmm, yes, that's a valid correction.
> >> 
> >> > We'd really
> >> > rather not have to have a non-trivial compat layer on arm too by having
> >> > the struct layouts be the same on 32/64.
> >> 
> >> And paying a penalty like this in the 32-bit half (if what is likely
> >> to remain the much bigger portion for the next couple of years
> >> can validly be called "half") is worth it? The more that the compat
> >> layer is now reasonably mature (and should hence be easily
> >> re-usable for ARM)?
> > 
> > What penalty? The only penalty is the wasted space in the structs in
> > memory.
> 
> No - the caller has to zero-initialize those extra 32 bits, and the
> hypervisor has to check for them to be zero (the latter may be
> implicit in the 64-bit one, but certainly needs to be explicit on the
> 32-bit side).

You are saying that on a 32-bit hypervisor we should check that the
padding is zero? Why should we care about the value of the padding?

In any case, fortunately, accesses to guest_handles already go via
macros, so it should be easy to arrange if it comes down to it.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 12:37:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 12:37:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Syj25-0007gz-LY; Tue, 07 Aug 2012 12:37:09 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Syj23-0007gr-UJ
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 12:37:08 +0000
Received: from [85.158.143.99:48008] by server-1.bemta-4.messagelabs.com id
	1F/0A-24392-3FB01205; Tue, 07 Aug 2012 12:37:07 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-216.messagelabs.com!1344343026!18972139!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgzNjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25248 invoked from network); 7 Aug 2012 12:37:06 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 12:37:06 -0000
X-IronPort-AV: E=Sophos;i="4.77,726,1336348800"; d="scan'208";a="13884917"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Aug 2012 12:36:49 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Tue, 7 Aug 2012
	13:36:49 +0100
Message-ID: <1344343007.11339.93.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Date: Tue, 7 Aug 2012 13:36:47 +0100
In-Reply-To: <alpine.DEB.2.02.1208071257330.4645@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
	<1344262325-26598-3-git-send-email-stefano.stabellini@eu.citrix.com>
	<5020011F0200007800092FD7@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208071257330.4645@kaball.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "Tim Deegan \(3P\)" <Tim.Deegan@citrix.com>,
	Jan Beulich <JBeulich@suse.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 3/5] xen: few more xen_ulong_t substitutions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-08-07 at 13:08 +0100, Stefano Stabellini wrote:
> On Mon, 6 Aug 2012, Jan Beulich wrote:
> > >>> On 06.08.12 at 16:12, Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
> > > There are still a few unsigned longs in the xen public interface: replace
> > > them with xen_ulong_t.
> > > 
> > > Also typedef xen_ulong_t to uint64_t on ARM.
> > 
> > While this change by itself already looks suspicious to me, I don't
> > follow what the global replacement is going to be good for, in
> > particular when done in places that ARM should be completely
> > ignorant of, e.g.
> 
> See below
> 
> 
> > > --- a/xen/include/public/physdev.h
> > > +++ b/xen/include/public/physdev.h
> > > @@ -124,7 +124,7 @@ DEFINE_XEN_GUEST_HANDLE(physdev_set_iobitmap_t);
> > >  #define PHYSDEVOP_apic_write             9
> > >  struct physdev_apic {
> > >      /* IN */
> > > -    unsigned long apic_physbase;
> > > +    xen_ulong_t apic_physbase;
> > >      uint32_t reg;
> > >      /* IN or OUT */
> > >      uint32_t value;
> > >...
> 
> This change is actually not required, considering that ARM doesn't have
> an APIC. I changed apic_physbase to xen_ulong_t only for consistency,
> but it wouldn't make any difference for ARM (or x86).
> If you think that it is better to keep it unsigned long, I'll remove
> this chunk from the patch.
> 
> 
> > > --- a/xen/include/public/xen.h
> > > +++ b/xen/include/public/xen.h
> > > @@ -518,8 +518,8 @@ DEFINE_XEN_GUEST_HANDLE(mmu_update_t);
> > >   * NB. The fields are natural register size for this architecture.
> > >   */
> > >  struct multicall_entry {
> > > -    unsigned long op, result;
> > > -    unsigned long args[6];
> > > +    xen_ulong_t op, result;
> > > +    xen_ulong_t args[6];
> > 
> > And here I really start to wonder - what use is it to put all 64-bit
> > values here on a 32-bit arch? You certainly know a lot more about
> > ARM than me, but this looks pretty inefficient, the more that
> > you'll have to deal with checking the full values when converting
> > to e.g. pointers anyway, in order to avoid behavioral differences
> > between running on a 32- or 64-bit host. Zero-extending from
> > 32-bits when in a 64-bit hypervisor wouldn't have this problem.
> 
> Actually the multicall_entry change is wrong, thanks for pointing it out.
> 
> The idea is that pointers are always 8 bytes sized and 8 bytes aligned,
> except when they are passed as hypercall arguments, in which case a 32
> bit guest would use 32 bit pointers and a 64 bit guest would use 64 bit
> pointers.
> 
> Considering that each field of a multicall_entry is usually passed as a
> hypercall parameter, they should all remain unsigned long.

If possible, please make them an explicitly sized type, even if it is
now 32 bits.

Ian.




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 12:37:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 12:37:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Syj25-0007gz-LY; Tue, 07 Aug 2012 12:37:09 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Syj23-0007gr-UJ
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 12:37:08 +0000
Received: from [85.158.143.99:48008] by server-1.bemta-4.messagelabs.com id
	1F/0A-24392-3FB01205; Tue, 07 Aug 2012 12:37:07 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-216.messagelabs.com!1344343026!18972139!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgzNjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25248 invoked from network); 7 Aug 2012 12:37:06 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 12:37:06 -0000
X-IronPort-AV: E=Sophos;i="4.77,726,1336348800"; d="scan'208";a="13884917"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Aug 2012 12:36:49 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Tue, 7 Aug 2012
	13:36:49 +0100
Message-ID: <1344343007.11339.93.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Date: Tue, 7 Aug 2012 13:36:47 +0100
In-Reply-To: <alpine.DEB.2.02.1208071257330.4645@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
	<1344262325-26598-3-git-send-email-stefano.stabellini@eu.citrix.com>
	<5020011F0200007800092FD7@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208071257330.4645@kaball.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "Tim Deegan \(3P\)" <Tim.Deegan@citrix.com>,
	Jan Beulich <JBeulich@suse.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 3/5] xen: few more xen_ulong_t substitutions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-08-07 at 13:08 +0100, Stefano Stabellini wrote:
> On Mon, 6 Aug 2012, Jan Beulich wrote:
> > >>> On 06.08.12 at 16:12, Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
> > > There are still few unsigned long in the xen public interface: replace
> > > them with xen_ulong_t.
> > > 
> > > Also typedef xen_ulong_t to uint64_t on ARM.
> > 
> > While this change by itself already looks suspicious to me, I don't
> > follow what the global replacement is going to be good for, in
> > particular when done in places that ARM should be completely
> > ignorant of, e.g.
> 
> See below
> 
> 
> > > --- a/xen/include/public/physdev.h
> > > +++ b/xen/include/public/physdev.h
> > > @@ -124,7 +124,7 @@ DEFINE_XEN_GUEST_HANDLE(physdev_set_iobitmap_t);
> > >  #define PHYSDEVOP_apic_write             9
> > >  struct physdev_apic {
> > >      /* IN */
> > > -    unsigned long apic_physbase;
> > > +    xen_ulong_t apic_physbase;
> > >      uint32_t reg;
> > >      /* IN or OUT */
> > >      uint32_t value;
> > >...
> 
> This change is actually not required, considering that ARM doesn't have
> an APIC. I changed apic_physbase to xen_ulong_t only for consistency,
> but it wouldn't make any difference for ARM (or x86).
> If you think that it is better to keep it unsigned long, I'll remove
> this chunk for the patch.
> 
> 
> > > --- a/xen/include/public/xen.h
> > > +++ b/xen/include/public/xen.h
> > > @@ -518,8 +518,8 @@ DEFINE_XEN_GUEST_HANDLE(mmu_update_t);
> > >   * NB. The fields are natural register size for this architecture.
> > >   */
> > >  struct multicall_entry {
> > > -    unsigned long op, result;
> > > -    unsigned long args[6];
> > > +    xen_ulong_t op, result;
> > > +    xen_ulong_t args[6];
> > 
> > And here I really start to wonder - what use is it to put all 64-bit
> > values here on a 32-bit arch? You certainly know a lot more about
> > ARM than me, but this looks pretty inefficient, the more that
> > you'll have to deal with checking the full values when converting
> > to e.g. pointers anyway, in order to avoid behavioral differences
> > between running on a 32- or 64-bit host. Zero-extending from
> > 32-bits when in a 64-bit hypervisor wouldn't have this problem.
> 
> Actually the multicall_entry change is wrong, thanks for pointing it out.
> 
> The idea is that pointers are always 8 bytes sized and 8 bytes aligned,
> except when they are passed as hypercall arguments, in which case a 32
> bit guest would use 32 bit pointers and a 64 bit guest would use 64 bit
> pointers.
> 
> Considering that each field of a multicall_entry is usually passed as a
> hypercall parameter, they should all remain unsigned long.

If possible, please make them an explicitly sized type, even if it is
now 32 bits.
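A minimal sketch of what an explicitly sized type could look like (the macro guards, the `xen_reg_t` name, and the widths here are purely illustrative assumptions, not the actual Xen headers):

```c
/* Illustrative only: an explicitly sized "register width" type chosen per
 * architecture at compile time, rather than relying on unsigned long.
 * The guards and the xen_reg_t name are assumptions for this sketch. */
#include <stdint.h>

#if defined(__aarch64__) || defined(__x86_64__)
typedef uint64_t xen_reg_t;   /* 64-bit registers */
#else
typedef uint32_t xen_reg_t;   /* 32-bit registers, width stated explicitly */
#endif
```

The field width is then fixed by the ABI definition itself, rather than by whatever the compiler happens to pick for unsigned long.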

Ian.




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 12:39:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 12:39:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Syj3y-0007pg-63; Tue, 07 Aug 2012 12:39:06 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Syj3w-0007p6-QL
	for xen-devel@lists.xensource.com; Tue, 07 Aug 2012 12:39:05 +0000
Received: from [85.158.138.51:52077] by server-1.bemta-3.messagelabs.com id
	D9/12-29224-76C01205; Tue, 07 Aug 2012 12:39:03 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-13.tower-174.messagelabs.com!1344343141!9324639!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgzNjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1869 invoked from network); 7 Aug 2012 12:39:02 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-13.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 12:39:02 -0000
X-IronPort-AV: E=Sophos;i="4.77,726,1336348800"; d="scan'208";a="13884964"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Aug 2012 12:39:01 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 7 Aug 2012 13:39:01 +0100
Date: Tue, 7 Aug 2012 13:38:38 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Frediano Ziglio <frediano.ziglio@citrix.com>
In-Reply-To: <7CE799CC0E4DE04B88D5FDF226E18AC2CDFFB08D19@LONPMAILBOX01.citrite.net>
Message-ID: <alpine.DEB.2.02.1208071337350.4645@kaball.uk.xensource.com>
References: <7CE799CC0E4DE04B88D5FDF226E18AC2CDFFB08D19@LONPMAILBOX01.citrite.net>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] Fix invalidate if memory requested was not bucket
	aligned
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 7 Aug 2012, Frediano Ziglio wrote:
> When memory is mapped in qemu_map_cache with lock != 0, a reverse mapping
> is created pointing to the virtual address of the location requested.
> The cached mapped entry is saved in last_address_vaddr with the memory
> location of the base virtual address (without the bucket offset).
> However, when this entry is invalidated, the virtual address saved in the
> reverse mapping is used. As a result, the mapping is freed but
> last_address_vaddr is not reset.
> 
> Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>

Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

I'll add it to my queue.


>  xen-mapcache.c |    9 +++++----
>  1 files changed, 5 insertions(+), 4 deletions(-)
> 
> diff --git a/xen-mapcache.c b/xen-mapcache.c
> index 59ba085..9cd6db3 100644
> --- a/xen-mapcache.c
> +++ b/xen-mapcache.c
> @@ -320,10 +320,6 @@ void xen_invalidate_map_cache_entry(uint8_t *buffer)
>      target_phys_addr_t size;
>      int found = 0;
>  
> -    if (mapcache->last_address_vaddr == buffer) {
> -        mapcache->last_address_index = -1;
> -    }
> -
>      QTAILQ_FOREACH(reventry, &mapcache->locked_entries, next) {
>          if (reventry->vaddr_req == buffer) {
>              paddr_index = reventry->paddr_index;
> @@ -342,6 +338,11 @@ void xen_invalidate_map_cache_entry(uint8_t *buffer)
>      QTAILQ_REMOVE(&mapcache->locked_entries, reventry, next);
>      g_free(reventry);
>  
> +    if (mapcache->last_address_index == paddr_index) {
> +        mapcache->last_address_index = -1;
> +        mapcache->last_address_vaddr = NULL;
> +    }
> +
>      entry = &mapcache->entry[paddr_index % mapcache->nr_buckets];
>      while (entry && (entry->paddr_index != paddr_index || entry->size != size)) {
>          pentry = entry;
> -- 
> 1.7.5.4
> 
> 
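The failure mode the patch addresses can be sketched in isolation (simplified, with hypothetical names and an illustrative bucket size; this is not the QEMU code itself):

```c
/* Sketch of the invalidate bug: the reverse mapping stores the
 * bucket-aligned base address, so comparing it against a raw (possibly
 * unaligned) buffer pointer never matches, and the last-access cache is
 * left stale. Keying the reset on paddr_index avoids the alignment issue. */
#include <stdint.h>

#define DEMO_BUCKET_SHIFT 20UL              /* 1 MiB buckets, illustrative */
#define DEMO_BUCKET_SIZE  (1UL << DEMO_BUCKET_SHIFT)

static uintptr_t demo_bucket_base(uintptr_t vaddr)
{
    return vaddr & ~(DEMO_BUCKET_SIZE - 1); /* strip the bucket offset */
}

/* Old check: raw pointer vs stored base; only fires when the request
 * happened to be bucket aligned. */
static int demo_old_check(uintptr_t last_vaddr, uintptr_t buffer)
{
    return last_vaddr == buffer;
}

/* New check: compare bucket indices found via the reverse mapping,
 * which is independent of the buffer's alignment. */
static int demo_new_check(uintptr_t last_index, uintptr_t paddr_index)
{
    return last_index == paddr_index;
}
```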

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 12:39:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 12:39:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Syj4Q-0007u0-Jp; Tue, 07 Aug 2012 12:39:34 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Syj4O-0007tk-Pa
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 12:39:32 +0000
Received: from [85.158.143.99:63599] by server-2.bemta-4.messagelabs.com id
	6C/AC-17938-48C01205; Tue, 07 Aug 2012 12:39:32 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1344343171!23617426!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgzNjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29535 invoked from network); 7 Aug 2012 12:39:31 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-6.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 12:39:31 -0000
X-IronPort-AV: E=Sophos;i="4.77,726,1336348800"; d="scan'208";a="13884972"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Aug 2012 12:39:31 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Tue, 7 Aug 2012
	13:39:31 +0100
Message-ID: <1344343169.11339.95.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Date: Tue, 7 Aug 2012 13:39:29 +0100
In-Reply-To: <alpine.DEB.2.02.1208071331030.4645@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
	<1344262325-26598-4-git-send-email-stefano.stabellini@eu.citrix.com>
	<5020025C0200007800092FEE@nat28.tlf.novell.com>
	<1344268070.11339.53.camel@zakaz.uk.xensource.com>
	<502005B2020000780009302B@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208061659320.4645@kaball.uk.xensource.com>
	<5020D0B90200007800093181@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208071331030.4645@kaball.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "Tim Deegan \(3P\)" <Tim.Deegan@citrix.com>,
	Jan Beulich <JBeulich@suse.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 4/5] xen: introduce XEN_GUEST_HANDLE_PARAM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-08-07 at 13:35 +0100, Stefano Stabellini wrote:
> On Tue, 7 Aug 2012, Jan Beulich wrote:
> > >>> On 06.08.12 at 18:02, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > wrote:
> > > On Mon, 6 Aug 2012, Jan Beulich wrote:
> > >> >>> On 06.08.12 at 17:47, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > >> > On Mon, 2012-08-06 at 16:43 +0100, Jan Beulich wrote:
> > >> >> >>> On 06.08.12 at 16:12, Stefano Stabellini <stefano.stabellini@eu.citrix.com> 
> > >> > wrote:
> > >> >> > Note: this change does not make any difference on x86 and ia64.
> > >> >> > 
> > >> >> > 
> > >> >> > XEN_GUEST_HANDLE_PARAM is going to be used to distinguish guest pointers
> > >> >> > stored in memory from guest pointers as hypercall parameters.
> > >> >> 
> > >> >> I have to admit that I really dislike this, to a large part because of
> > >> >> the follow up patch that clutters the corresponding function
> > >> >> declarations even further. Plus I see no mechanism to convert
> > >> >> between the two, yet I can't see how - long term at least - you
> > >> >> could get away without such conversion.
> > >> >> 
> > >> >> Is it really a well thought through and settled upon decision to
> > >> >> make guest handles 64 bits wide even on 32-bit ARM? After all,
> > >> >> both x86 and PPC got away without doing so
> > >> > 
> > >> > Well, on x86 we have the compat XLAT layer, which is a pretty complex
> > >> > piece of code, so "got away without" is a bit strong...
> > >> 
> > >> Hmm, yes, that's a valid correction.
> > >> 
> > >> > We'd really
> > >> > rather not have to have a non-trivial compat layer on arm too by having
> > >> > the struct layouts be the same on 32/64.
> > >> 
> > >> And paying a penalty like this in the 32-bit half (if what is likely
> > >> to remain the much bigger portion for the next couple of years
> > >> can validly be called "half") is worth it? The more that the compat
> > >> layer is now reasonably mature (and should hence be easily
> > >> re-usable for ARM)?
> > > 
> > > What penalty? The only penalty is the wasted space in the structs in
> > > memory.
> > 
> > No - the caller has to zero-initialize those extra 32 bits, and the
> > hypervisor has to check for them to be zero (the latter may be
> > implicit in the 64-bit one, but certainly needs to be explicit on the
> > 32-bit side).
> 
> You are saying that on a 32 bit hypervisor we should check that the
> padding is zero? Why should we care about the value of the padding?

The point is so that we can treat them as 64 bit values in a 64 bit
hypervisor, otherwise we would need a compat layer to translate (which
is what we want to avoid).

So the 32 bit guest definitely does need to zero them, and a debug build
of the 32 bit hypervisor probably ought to reject non-zero upper halves,
otherwise the chances of the guest doing it consistently are pretty
small.
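The zero-extension scheme described above can be sketched as follows (the names and struct layout are hypothetical, not the real XEN_GUEST_HANDLE definitions):

```c
/* Hypothetical 64-bit guest handle: the guest stores its native pointer
 * zero-extended, and a (debug) hypervisor can verify the upper half. */
#include <stdint.h>

typedef struct { uint64_t p; } demo_guest_handle_64;

/* Guest side: on a 32-bit guest, uintptr_t is 32 bits wide, so the
 * assignment below zero-extends into the 64-bit field automatically. */
static demo_guest_handle_64 demo_set_handle(void *ptr)
{
    demo_guest_handle_64 h;
    h.p = (uintptr_t)ptr;
    return h;
}

/* 32-bit hypervisor (debug build): reject handles with a dirty upper
 * half, so guests cannot get away with leaving it uninitialized. */
static int demo_handle_ok(demo_guest_handle_64 h)
{
    return (h.p >> 32) == 0;
}
```

A 64-bit hypervisor can then consume the same 64-bit field directly, with no compat translation layer in between.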


> In any case fortunately accesses to guest_handles already go via macros,
> so it should be easy to arrange if it comes down to it.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 12:40:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 12:40:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Syj56-00080o-1R; Tue, 07 Aug 2012 12:40:16 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Syj53-00080O-VO
	for xen-devel@lists.xensource.com; Tue, 07 Aug 2012 12:40:14 +0000
Received: from [85.158.138.51:38550] by server-2.bemta-3.messagelabs.com id
	CE/5F-29239-DAC01205; Tue, 07 Aug 2012 12:40:13 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-6.tower-174.messagelabs.com!1344343212!22783705!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgzNjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24388 invoked from network); 7 Aug 2012 12:40:12 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-6.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 12:40:12 -0000
X-IronPort-AV: E=Sophos;i="4.77,726,1336348800"; d="scan'208";a="13884989"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Aug 2012 12:40:12 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 7 Aug 2012 13:40:12 +0100
Date: Tue, 7 Aug 2012 13:39:59 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Frediano Ziglio <frediano.ziglio@citrix.com>
In-Reply-To: <7CE799CC0E4DE04B88D5FDF226E18AC2CDFFB08D1A@LONPMAILBOX01.citrite.net>
Message-ID: <alpine.DEB.2.02.1208071339500.4645@kaball.uk.xensource.com>
References: <7CE799CC0E4DE04B88D5FDF226E18AC2CDFFB08D1A@LONPMAILBOX01.citrite.net>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Anthony Perard <anthony.perard@citrix.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH v2] Fix invalidate if memory requested was
 not bucket aligned
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 7 Aug 2012, Frediano Ziglio wrote:
> When memory is mapped in qemu_map_cache with lock != 0, a reverse mapping
> is created pointing to the virtual address of the location requested.
> The cached mapped entry is saved in last_address_vaddr with the memory
> location of the base virtual address (without the bucket offset).
> However, when this entry is invalidated, the virtual address saved in the
> reverse mapping is used. As a result, the mapping is freed but
> last_address_vaddr is not reset.
> 
> Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>


Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

>  hw/xen_machine_fv.c |    8 +++++---
>  1 files changed, 5 insertions(+), 3 deletions(-)
> 
> diff --git a/hw/xen_machine_fv.c b/hw/xen_machine_fv.c
> index fdad42a..6bdb8f4 100644
> --- a/hw/xen_machine_fv.c
> +++ b/hw/xen_machine_fv.c
> @@ -181,9 +181,6 @@ void qemu_invalidate_entry(uint8_t *buffer)
>      unsigned long paddr_index;
>      int found = 0;
>      
> -    if (last_address_vaddr == buffer)
> -        last_address_page =  ~0UL;
> -
>      TAILQ_FOREACH(reventry, &locked_entries, next) {
>          if (reventry->vaddr_req == buffer) {
>              paddr_index = reventry->paddr_index;
> @@ -201,6 +198,11 @@ void qemu_invalidate_entry(uint8_t *buffer)
>      TAILQ_REMOVE(&locked_entries, reventry, next);
>      qemu_free(reventry);
>  
> +    if ((last_address_page >> (MCACHE_BUCKET_SHIFT - XC_PAGE_SHIFT))
> +        == paddr_index) {
> +        last_address_page =  ~0UL;
> +    }
> +
>      entry = &mapcache_entry[paddr_index % nr_buckets];
>      while (entry && entry->paddr_index != paddr_index) {
>          pentry = entry;
> -- 
> 1.7.5.4
> 
> 
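The shift in the v2 check converts between two index spaces, roughly as follows (the shift values below are typical, illustrative choices, not taken from the real headers):

```c
/* Sketch: last_address_page holds a page number, while the reverse
 * mapping holds a bucket index, so the page number must be scaled down
 * before the comparison. Shift values are illustrative assumptions. */
#define DEMO_XC_PAGE_SHIFT       12UL   /* 4 KiB pages */
#define DEMO_MCACHE_BUCKET_SHIFT 20UL   /* 1 MiB buckets */

static unsigned long demo_page_to_bucket(unsigned long page)
{
    /* Each bucket covers 2^(BUCKET_SHIFT - PAGE_SHIFT) = 256 pages. */
    return page >> (DEMO_MCACHE_BUCKET_SHIFT - DEMO_XC_PAGE_SHIFT);
}
```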

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 12:40:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 12:40:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Syj56-00080o-1R; Tue, 07 Aug 2012 12:40:16 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Syj53-00080O-VO
	for xen-devel@lists.xensource.com; Tue, 07 Aug 2012 12:40:14 +0000
Received: from [85.158.138.51:38550] by server-2.bemta-3.messagelabs.com id
	CE/5F-29239-DAC01205; Tue, 07 Aug 2012 12:40:13 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-6.tower-174.messagelabs.com!1344343212!22783705!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgzNjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24388 invoked from network); 7 Aug 2012 12:40:12 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-6.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 12:40:12 -0000
X-IronPort-AV: E=Sophos;i="4.77,726,1336348800"; d="scan'208";a="13884989"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Aug 2012 12:40:12 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 7 Aug 2012 13:40:12 +0100
Date: Tue, 7 Aug 2012 13:39:59 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Frediano Ziglio <frediano.ziglio@citrix.com>
In-Reply-To: <7CE799CC0E4DE04B88D5FDF226E18AC2CDFFB08D1A@LONPMAILBOX01.citrite.net>
Message-ID: <alpine.DEB.2.02.1208071339500.4645@kaball.uk.xensource.com>
References: <7CE799CC0E4DE04B88D5FDF226E18AC2CDFFB08D1A@LONPMAILBOX01.citrite.net>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Anthony Perard <anthony.perard@citrix.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH v2] Fix invalidate if memory requested was
 not bucket aligned
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 7 Aug 2012, Frediano Ziglio wrote:
> When memory is mapped in qemu_map_cache with lock != 0, a reverse mapping
> is created pointing to the virtual address of the location requested.
> The cached mapped entry is saved in last_address_vaddr as the base virtual
> address (without the bucket offset).
> However, when this entry is invalidated, the virtual address saved in the
> reverse mapping is used instead. As a result the mapping is freed but
> last_address_vaddr is not reset.
> 
> Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>


Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

>  hw/xen_machine_fv.c |    8 +++++---
>  1 files changed, 5 insertions(+), 3 deletions(-)
> 
> diff --git a/hw/xen_machine_fv.c b/hw/xen_machine_fv.c
> index fdad42a..6bdb8f4 100644
> --- a/hw/xen_machine_fv.c
> +++ b/hw/xen_machine_fv.c
> @@ -181,9 +181,6 @@ void qemu_invalidate_entry(uint8_t *buffer)
>      unsigned long paddr_index;
>      int found = 0;
>      
> -    if (last_address_vaddr == buffer)
> -        last_address_page =  ~0UL;
> -
>      TAILQ_FOREACH(reventry, &locked_entries, next) {
>          if (reventry->vaddr_req == buffer) {
>              paddr_index = reventry->paddr_index;
> @@ -201,6 +198,11 @@ void qemu_invalidate_entry(uint8_t *buffer)
>      TAILQ_REMOVE(&locked_entries, reventry, next);
>      qemu_free(reventry);
>  
> +    if ((last_address_page >> (MCACHE_BUCKET_SHIFT - XC_PAGE_SHIFT))
> +        == paddr_index) {
> +        last_address_page =  ~0UL;
> +    }
> +
>      entry = &mapcache_entry[paddr_index % nr_buckets];
>      while (entry && entry->paddr_index != paddr_index) {
>          pentry = entry;
> -- 
> 1.7.5.4
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 12:41:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 12:41:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Syj63-00089t-Gx; Tue, 07 Aug 2012 12:41:15 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jean.guyader@gmail.com>) id 1Syj62-00089g-JD
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 12:41:14 +0000
Received: from [85.158.139.83:23555] by server-4.bemta-5.messagelabs.com id
	B2/37-32474-9EC01205; Tue, 07 Aug 2012 12:41:13 +0000
X-Env-Sender: jean.guyader@gmail.com
X-Msg-Ref: server-8.tower-182.messagelabs.com!1344343271!19399615!1
X-Originating-IP: [209.85.212.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22330 invoked from network); 7 Aug 2012 12:41:12 -0000
Received: from mail-vb0-f45.google.com (HELO mail-vb0-f45.google.com)
	(209.85.212.45)
	by server-8.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 12:41:12 -0000
Received: by vbip1 with SMTP id p1so4160222vbi.32
	for <xen-devel@lists.xen.org>; Tue, 07 Aug 2012 05:41:11 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type;
	bh=oAWvWMHT4WjWOSsp3PFmey9NguHI75t6ZpR/Dje0U2g=;
	b=b0cIXUD3IAEbgcEAVHuS6pBbYabZBeYcRN1XzbgJVsDhtgW3s9nxgVRI1QZroI/GgB
	hnePhVLBbTMFYFTJH5lKslXpcT/qIxIZHB9x//qTsYeBii0cYzd+mVSGaL+MJPZOzmwJ
	avBgVC/UopyRVps1Dwt18tNbJOKznA3vukIgqRuFqQnj10fmrp5Ne8Ob5+Q1hvg3ytOM
	QO27V9IM0UDqS3BnPIOpcPg2nUArkckLMO5TSv0GnaQ4lypwOmBS5r5onow/75svhuE/
	xJWaZThampbSZcPDB5CiqWvqF8fWtZUNNBhEYkydUKJ2nBx6FkgqzhwFFQSe84fWQeDn
	4ZmA==
Received: by 10.52.91.7 with SMTP id ca7mr5714838vdb.2.1344343271065; Tue, 07
	Aug 2012 05:41:11 -0700 (PDT)
MIME-Version: 1.0
Received: by 10.58.181.101 with HTTP; Tue, 7 Aug 2012 05:40:50 -0700 (PDT)
In-Reply-To: <alpine.DEB.2.02.1208071318430.4645@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
	<1344262325-26598-1-git-send-email-stefano.stabellini@eu.citrix.com>
	<501FFFB50200007800092FC7@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208061640270.4645@kaball.uk.xensource.com>
	<502004BE0200007800093028@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208071318430.4645@kaball.uk.xensource.com>
From: Jean Guyader <jean.guyader@gmail.com>
Date: Tue, 7 Aug 2012 13:40:50 +0100
Message-ID: <CAEBdQ921bVoRss3iuf1oLDrv9JBrhcvx=S16XsjrYWA2eg1Y8g@mail.gmail.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>, Jean.Guyader@citrix.com,
	"Tim Deegan \(3P\)" <Tim.Deegan@citrix.com>,
	Jan Beulich <JBeulich@suse.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 1/5] xen: improve changes to
	xen_add_to_physmap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 7 August 2012 13:27, Stefano Stabellini
<stefano.stabellini@eu.citrix.com> wrote:
> On Mon, 6 Aug 2012, Jan Beulich wrote:
>> >>> On 06.08.12 at 17:43, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
>> wrote:
>> > On Mon, 6 Aug 2012, Jan Beulich wrote:
>> >> >>> On 06.08.12 at 16:12, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
>> >> wrote:
>> >> > This is an incremental patch on top of
>> >> > c0bc926083b5987a3e9944eec2c12ad0580100e2: in order to retain binary
>> >> > compatibility, it is better to introduce foreign_domid as part of a
>> >> > union containing both size and foreign_domid.
>> >> >
>> >> > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
>> >> > ---
>> >> >  xen/include/public/memory.h |   11 +++++++----
>> >> >  1 files changed, 7 insertions(+), 4 deletions(-)
>> >> >
>> >> > diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
>> >> > index b2adfbe..b0af2fd 100644
>> >> > --- a/xen/include/public/memory.h
>> >> > +++ b/xen/include/public/memory.h
>> >> > @@ -208,8 +208,12 @@ struct xen_add_to_physmap {
>> >> >      /* Which domain to change the mapping for. */
>> >> >      domid_t domid;
>> >> >
>> >> > -    /* Number of pages to go through for gmfn_range */
>> >> > -    uint16_t    size;
>> >> > +    union {
>> >> > +        /* Number of pages to go through for gmfn_range */
>> >> > +        uint16_t    size;
>> >> > +        /* IFF gmfn_foreign */
>> >> > +        domid_t foreign_domid;
>> >> > +    };
>> >>
>> >> But you're clear that this isn't standard C, and hence can't go
>> >> in this way?
>> >>
>> >
>> > Why? It is C11, if I am not mistaken.
>>
>> Yes. But the common baseline is C89.
>
> Do we need to keep it C89?
> If I am not mistaken anonymous unions have been in GCC for more than 10
> years now.
>

It's a public header so you can't assume that the only consumer will be GCC.

> If we do want to keep it C89, considering that size was introduced only
> recently, do you think that it would be OK for me to change the
> interface and just add size to a union?
> Like this:
>
> union {
>     /* Number of pages to go through for gmfn_range */
>     uint16_t    size;
>     /* IFF gmfn_foreign */
>     domid_t foreign_domid;
> } u;
>

You could probably do something like

#ifdef __GNUC__
# define UNION_NAME
#else
# define UNION_NAME u
#endif
union {
    /* Number of pages to go through for gmfn_range */
    uint16_t    size;
    /* IFF gmfn_foreign */
    domid_t foreign_domid;
} UNION_NAME;

It's not ideal, but this way you keep binary compatibility and the code
stays compatible with GCC.

Jean

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 12:42:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 12:42:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Syj7H-0008ME-4w; Tue, 07 Aug 2012 12:42:31 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Syj7F-0008LC-NG
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 12:42:29 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1344343343!11683076!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgzNjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12728 invoked from network); 7 Aug 2012 12:42:23 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 12:42:23 -0000
X-IronPort-AV: E=Sophos;i="4.77,726,1336348800"; d="scan'208";a="13885058"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Aug 2012 12:42:23 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Tue, 7 Aug 2012
	13:42:23 +0100
Message-ID: <1344343342.11339.96.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Wangzhenguo <wangzhenguo@huawei.com>
Date: Tue, 7 Aug 2012 13:42:22 +0100
In-Reply-To: <B44CA5218606DC4FA941D19CCEB27B532CF76B4F@szxeml528-mbx.china.huawei.com>
References: <B44CA5218606DC4FA941D19CCEB27B5323BEF12C@szxeml528-mbs.china.huawei.com>
	<1341221926.4625.12.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B5323BEF99A@szxeml528-mbs.china.huawei.com>
	<1343135163.18971.15.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B532CF769F2@szxeml528-mbx.china.huawei.com>
	<1343221925.18971.93.camel@zakaz.uk.xensource.com>
	<1343988628.21372.46.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B532CF76ACE@szxeml528-mbx.china.huawei.com>
	<1344259710.11339.39.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B532CF76B32@szxeml528-mbx.china.huawei.com>
	<1344334043.11339.85.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B532CF76B4F@szxeml528-mbx.china.huawei.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Yangxiaowei <xiaowei.yang@huawei.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Hongtao <bobby.hong@huawei.com>, Yechuan <yechuan@huawei.com>
Subject: Re: [Xen-devel] The hypercall will fail and return EFAULT when the
 page becomes COW by forking process in linux
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-08-07 at 12:05 +0100, Wangzhenguo wrote:
> > -----Original Message-----
> > From: Ian Campbell [mailto:Ian.Campbell@citrix.com]
> > Sent: Tuesday, August 07, 2012 6:07 PM
> 
> > The entire point of the hypercall buffer is that it needs to be safe
> > for use as a hypercall argument, therefore it does need to be protected.
> 
> We don't protect pages in the hypercall buffer cache. The cache belongs
> to the xc_interface: the parent process calls xc_interface_open to get an
> xc_interface handle, and after a fork the child process inherits that
> handle, together with the hypercall buffer cache that belongs to it.
> Accessing pages from the hypercall buffer cache when they are passed to a
> hypercall then causes a segmentation fault, because they are not valid in
> the child process. So we need to restore the status of pages before
> returning them to the hypercall buffer cache, and prevent pages from
> becoming COW after they have been allocated in the hypercall buffer cache.

I don't think we should expect it to be valid to keep an xc interface
handle open after a fork. The child should open a fresh handle if it
wants to keep interacting with xc.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 12:54:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 12:54:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyjJ9-0000Eo-CS; Tue, 07 Aug 2012 12:54:47 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SyjJ7-0000Eg-El
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 12:54:45 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1344344079!1823233!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM3MzA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21651 invoked from network); 7 Aug 2012 12:54:39 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-11.tower-27.messagelabs.com with SMTP;
	7 Aug 2012 12:54:39 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 07 Aug 2012 13:54:39 +0100
Message-Id: <50212C2B02000078000933CE@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Tue, 07 Aug 2012 13:54:35 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Stefano Stabellini" <stefano.stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
	<1344262325-26598-3-git-send-email-stefano.stabellini@eu.citrix.com>
	<5020011F0200007800092FD7@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208071257330.4645@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1208071257330.4645@kaball.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	"Tim Deegan \(3P\)" <Tim.Deegan@citrix.com>
Subject: Re: [Xen-devel] [PATCH 3/5] xen: few more xen_ulong_t substitutions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 07.08.12 at 14:08, Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
> On Mon, 6 Aug 2012, Jan Beulich wrote:
>> >>> On 06.08.12 at 16:12, Stefano Stabellini <stefano.stabellini@eu.citrix.com>  wrote:
>> > --- a/xen/include/public/physdev.h
>> > +++ b/xen/include/public/physdev.h
>> > @@ -124,7 +124,7 @@ DEFINE_XEN_GUEST_HANDLE(physdev_set_iobitmap_t);
>> >  #define PHYSDEVOP_apic_write             9
>> >  struct physdev_apic {
>> >      /* IN */
>> > -    unsigned long apic_physbase;
>> > +    xen_ulong_t apic_physbase;
>> >      uint32_t reg;
>> >      /* IN or OUT */
>> >      uint32_t value;
>> >...
> 
> This change is actually not required, considering that ARM doesn't have
> an APIC. I changed apic_physbase to xen_ulong_t only for consistency,
> but it wouldn't make any difference for ARM (or x86).
> If you think that it is better to keep it unsigned long, I'll remove
> this chunk for the patch.

I'd prefer that. Also any others that you may not actually need
to be converted.

Also, seems I was misremembering how PPC dealt with this - they
indeed had xen_ulong_t defined to be a 64-bit value too.

> Considering that each field of a multicall_entry is usually passed as an
> hypercall parameter, they should all remain unsigned long.

That'll give you subtle bugs I'm afraid: do_memory_op()'s
encoding of a continuation start extent (into the 'cmd' value),
for example, depends on being able to store the full value into
the command field of the multicall structure. The limit checking
of the permitted number of extents therefore is different
between native (ULONG_MAX >> MEMOP_EXTENT_SHIFT) and
compat (UINT_MAX >> MEMOP_EXTENT_SHIFT). I would
neither find it very appealing to have do_memory_op() adjusted
for dealing with this new special case, nor am I sure that's the
only place your approach would cause problems if you excluded
the multicall structure from the model change.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> If you think that it is better to keep it unsigned long, I'll remove
> this chunk from the patch.

I'd prefer that. The same goes for any others that don't actually
need to be converted.

Also, seems I was misremembering how PPC dealt with this - they
indeed had xen_ulong_t defined to be a 64-bit value too.

> Considering that each field of a multicall_entry is usually passed as a
> hypercall parameter, they should all remain unsigned long.

That'll give you subtle bugs, I'm afraid: do_memory_op()'s
encoding of a continuation start extent (into the 'cmd' value),
for example, depends on being able to store the full value into
the command field of the multicall structure. The limit check on
the permitted number of extents therefore differs between
native (ULONG_MAX >> MEMOP_EXTENT_SHIFT) and
compat (UINT_MAX >> MEMOP_EXTENT_SHIFT). I would
not find it very appealing to have do_memory_op() adjusted
to deal with this new special case, nor am I sure that's the
only place your approach would cause problems if you excluded
the multicall structure from the model change.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 13:03:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 13:03:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyjRJ-0000VP-G1; Tue, 07 Aug 2012 13:03:13 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SyjRH-0000Uy-P2
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 13:03:11 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1344344578!10246704!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM3MzA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23659 invoked from network); 7 Aug 2012 13:02:58 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-5.tower-27.messagelabs.com with SMTP;
	7 Aug 2012 13:02:58 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 07 Aug 2012 14:02:58 +0100
Message-Id: <50212E2002000078000933D9@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Tue, 07 Aug 2012 14:02:56 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Stefano Stabellini" <stefano.stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
	<1344262325-26598-1-git-send-email-stefano.stabellini@eu.citrix.com>
	<501FFFB50200007800092FC7@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208061640270.4645@kaball.uk.xensource.com>
	<502004BE0200007800093028@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208071318430.4645@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1208071318430.4645@kaball.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	Jean.Guyader@citrix.com, "Tim Deegan \(3P\)" <Tim.Deegan@citrix.com>
Subject: Re: [Xen-devel] [PATCH 1/5] xen: improve changes to
 xen_add_to_physmap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 07.08.12 at 14:27, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
wrote:
> On Mon, 6 Aug 2012, Jan Beulich wrote:
>> >>> On 06.08.12 at 17:43, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
>> wrote:
>> > On Mon, 6 Aug 2012, Jan Beulich wrote:
>> >> >>> On 06.08.12 at 16:12, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
>> >> wrote:
>> >> > This is an incremental patch on top of
>> >> > c0bc926083b5987a3e9944eec2c12ad0580100e2: in order to retain binary
>> >> > compatibility, it is better to introduce foreign_domid as part of a
>> >> > union containing both size and foreign_domid.
>> >> > 
>> >> > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
>> >> > ---
>> >> >  xen/include/public/memory.h |   11 +++++++----
>> >> >  1 files changed, 7 insertions(+), 4 deletions(-)
>> >> > 
>> >> > diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
>> >> > index b2adfbe..b0af2fd 100644
>> >> > --- a/xen/include/public/memory.h
>> >> > +++ b/xen/include/public/memory.h
>> >> > @@ -208,8 +208,12 @@ struct xen_add_to_physmap {
>> >> >      /* Which domain to change the mapping for. */
>> >> >      domid_t domid;
>> >> >  
>> >> > -    /* Number of pages to go through for gmfn_range */
>> >> > -    uint16_t    size;
>> >> > +    union {
>> >> > +        /* Number of pages to go through for gmfn_range */
>> >> > +        uint16_t    size;
>> >> > +        /* IFF gmfn_foreign */
>> >> > +        domid_t foreign_domid;
>> >> > +    };
>> >> 
>> >> But you're clear that this isn't standard C, and hence can't go
>> >> in this way?
>> >> 
>> > 
>> > Why? It is C11 if I am not mistaken.
>> 
>> Yes. But the common baseline is C89.
> 
> Do we need to keep it C89?

Yes, I think so. That's what we indeed can (and already do) expect
of compilers used on these headers. Even C99 would be too much
already.

> If I am not mistaken anonymous unions have been in GCC for more than 10
> years now.

With a few quirks here and there, yes. But gcc is not the measure
here anyway; we need to settle on the lowest common standard
that's largely accepted and implemented industry-wide.

> If we do want to keep it C89, considering that size was introduced only
> recently, do you think that it would be OK for me to change the
> interface and just add size to a union?
> Like this:
> 
> union {
>     /* Number of pages to go through for gmfn_range */
>     uint16_t    size;
>     /* IFF gmfn_foreign */
>     domid_t foreign_domid;
> } u;
> 
> Also CC'ing Jean.

If that's okay with Jean (and, by implication, any other users of the
interface - I hope he would know), then yes; the addition isn't in
anything that we have released so far.

The only other reasonable alternative I see would be to bump
__XEN_LATEST_INTERFACE_VERSION__ again, and have this
coded as (beware, ugly!)

#if __XEN_LATEST_INTERFACE_VERSION__ > 0x040200
union {
#endif
    /* Number of pages to go through for gmfn_range */
    uint16_t    size;
#if __XEN_LATEST_INTERFACE_VERSION__ > 0x040200
    /* IFF gmfn_foreign */
    domid_t foreign_domid;
} u;
#endif

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 13:08:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 13:08:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyjWU-0000hT-8E; Tue, 07 Aug 2012 13:08:34 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SyjWS-0000hM-Oy
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 13:08:32 +0000
Received: from [85.158.143.99:37750] by server-3.bemta-4.messagelabs.com id
	FC/1A-01511-05311205; Tue, 07 Aug 2012 13:08:32 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-14.tower-216.messagelabs.com!1344344910!21037045!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM3MzA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26290 invoked from network); 7 Aug 2012 13:08:30 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-14.tower-216.messagelabs.com with SMTP;
	7 Aug 2012 13:08:30 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 07 Aug 2012 14:08:29 +0100
Message-Id: <50212F6902000078000933E4@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Tue, 07 Aug 2012 14:08:25 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Stefano Stabellini" <stefano.stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
	<1344262325-26598-4-git-send-email-stefano.stabellini@eu.citrix.com>
	<5020025C0200007800092FEE@nat28.tlf.novell.com>
	<1344268070.11339.53.camel@zakaz.uk.xensource.com>
	<502005B2020000780009302B@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208061659320.4645@kaball.uk.xensource.com>
	<5020D0B90200007800093181@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208071331030.4645@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1208071331030.4645@kaball.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	"Tim Deegan \(3P\)" <Tim.Deegan@citrix.com>
Subject: Re: [Xen-devel] [PATCH 4/5] xen: introduce XEN_GUEST_HANDLE_PARAM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 07.08.12 at 14:35, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
wrote:
> On Tue, 7 Aug 2012, Jan Beulich wrote:
>> >>> On 06.08.12 at 18:02, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
>> wrote:
>> > On Mon, 6 Aug 2012, Jan Beulich wrote:
>> >> >>> On 06.08.12 at 17:47, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>> >> > On Mon, 2012-08-06 at 16:43 +0100, Jan Beulich wrote:
>> >> >> >>> On 06.08.12 at 16:12, Stefano Stabellini <stefano.stabellini@eu.citrix.com> 
> 
>> >> > wrote:
>> >> >> > Note: this change does not make any difference on x86 and ia64.
>> >> >> > 
>> >> >> > 
>> >> >> > XEN_GUEST_HANDLE_PARAM is going to be used to distinguish guest pointers
>> >> >> > stored in memory from guest pointers as hypercall parameters.
>> >> >> 
>> >> >> I have to admit that I really dislike this, to a large part because of
>> >> >> the follow up patch that clutters the corresponding function
>> >> >> declarations even further. Plus I see no mechanism to convert
>> >> >> between the two, yet I can't see how - long term at least - you
>> >> >> could get away without such conversion.
>> >> >> 
>> >> >> Is it really a well thought through and settled upon decision to
>> >> >> make guest handles 64 bits wide even on 32-bit ARM? After all,
>> >> >> both x86 and PPC got away without doing so
>> >> > 
>> >> > Well, on x86 we have the compat XLAT layer, which is a pretty complex
>> >> > piece of code, so "got away without" is a bit strong...
>> >> 
>> >> Hmm, yes, that's a valid correction.
>> >> 
>> >> > We'd really
>> >> > rather not have to have a non-trivial compat layer on arm too by having
>> >> > the struct layouts be the same on 32/64.
>> >> 
>> >> And paying a penalty like this in the 32-bit half (if what is likely
>> >> to remain the much bigger portion for the next couple of years
>> >> can validly be called "half") is worth it? All the more so now that
>> >> the compat layer is reasonably mature (and should hence be easily
>> >> re-usable for ARM)?
>> > 
>> > What penalty? The only penalty is the wasted space in the structs in
>> > memory.
>> 
>> No - the caller has to zero-initialize those extra 32 bits, and the
>> hypervisor has to check for them to be zero (the latter may be
>> implicit in the 64-bit one, but certainly needs to be explicit on the
>> 32-bit side).
> 
> You are saying that on a 32 bit hypervisor we should check that the
> padding is zero? Why should we care about the value of the padding?

Because otherwise the same 32-bit guest kernel's behavior on
32- and 64-bit hypervisor may unexpectedly differ. And even if
you didn't care to do the check, the guest would still be required
to put zeros there in order to run on a 64-bit hypervisor. (And
of course this costs you cache and TLB bandwidth. See how
x86-64 just recently got the ILP32 model [aka x32] added for
this very reason.)

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 13:13:26 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 13:13:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Syjay-0000qu-VV; Tue, 07 Aug 2012 13:13:12 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1Syjax-0000qe-4a
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 13:13:11 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1344345183!5947736!1
X-Originating-IP: [74.125.83.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11928 invoked from network); 7 Aug 2012 13:13:04 -0000
Received: from mail-ee0-f45.google.com (HELO mail-ee0-f45.google.com)
	(74.125.83.45)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 13:13:04 -0000
Received: by eeke53 with SMTP id e53so1203624eek.32
	for <xen-devel@lists.xen.org>; Tue, 07 Aug 2012 06:13:03 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=NYAfmFKgpHSMpBO2A/JlKKW45mhjl50oZYI5iEY0xFM=;
	b=wyBTPWOUcHTEvCqqCZeRQkF8Wz5TcdIIsSD9ofSXMl/MVoUZDAbtRPjn+wk+SMmDJN
	B29mVSGKJa6jE3ll8ufIKQlY1yPc2gJSAC2bJZyqsJsOs/imaX25+Z9+5b0lebiZAs/u
	WqZsIgWXd9T3Rd78o6DC544YWYjs3ODkDH/LRqxwkEY5YpD2c+cDo2Ck8e+13eJRBcIL
	7YvQd+uMDEiXDJe7LMKi7YSxQB1JrkjlCd2GXuMggVtQ6oY66faXwnV+bOXa9Xv1653o
	XY0eYfZMZNk0uyNqDULzi9auIGMjcnYBuUkmE8eEqJ6WO5AqrnpwUExRQfPMfOOacQs9
	DnIA==
MIME-Version: 1.0
Received: by 10.14.209.129 with SMTP id s1mr7931003eeo.24.1344345183573; Tue,
	07 Aug 2012 06:13:03 -0700 (PDT)
Received: by 10.14.206.135 with HTTP; Tue, 7 Aug 2012 06:13:03 -0700 (PDT)
In-Reply-To: <502123620200007800093377@nat28.tlf.novell.com>
References: <501173540200007800090A04@nat28.tlf.novell.com>
	<50116CD9.6000503@eu.citrix.com>
	<501FE96E0200007800092E25@nat28.tlf.novell.com>
	<501FED030200007800092E35@nat28.tlf.novell.com>
	<CAFLBxZZ5ifar7ou3NaeKne3b0rrw=wkjN6E1bEZ8+M=2w2bz2A@mail.gmail.com>
	<502123620200007800093377@nat28.tlf.novell.com>
Date: Tue, 7 Aug 2012 14:13:03 +0100
X-Google-Sender-Auth: boEe6h3NO3VK4Po-XWDfcxvVigg
Message-ID: <CAFLBxZbq1mevbGwVMTq+M6VAs=EvAUt_G_acPyOvwbH7_GdL6Q@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] PoD code killing domain before it really gets
	started
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Aug 7, 2012 at 1:17 PM, Jan Beulich <JBeulich@suse.com> wrote:
>>>> On 06.08.12 at 18:03, George Dunlap <George.Dunlap@eu.citrix.com> wrote:
>> 2. Allocate the PoD cache before populating the p2m table
>
> So this doesn't work, the call simply has no effect (and never
> reaches p2m_pod_set_cache_target()). Apparently because
> of
>
>     /* P == B: Nothing to do. */
>     if ( p2md->pod.entry_count == 0 )
>         goto out;
>
> in p2m_pod_set_mem_target(). Now I'm not sure about the
> proper adjustment here: Entirely dropping the conditional is
> certainly wrong. Would
>
>     if ( p2md->pod.entry_count == 0 && d->tot_pages > 0 )
>         goto out;
>
> be okay?
>
> But then later in that function we also have
>
>     /* B < T': Set the cache size equal to # of outstanding entries,
>      * let the balloon driver fill in the rest. */
>     if ( pod_target > p2md->pod.entry_count )
>         pod_target = p2md->pod.entry_count;
>
> which in the case at hand would set pod_target to 0, and the
> whole operation would again not have any effect afaict. So
> maybe this was the reason to do this operation _after_ the
> normal address space population?

Snap -- forgot about that.

The main thing is for set_mem_target() to be simple for the toolstack
-- it's just supposed to say how much memory it wants the guest to
use, and Xen is supposed to figure out how much memory the PoD cache
needs.  The interface is that the toolstack is just supposed to call
set_mem_target() after each time it changes the balloon target.  The
idea was to be robust against the user setting arbitrary new targets
before the balloon driver had reached the old target.  So the problem
with allowing (pod_target > entry_count) is that that's the condition
that happens when you are ballooning up.
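
[Editor's note: the two code paths quoted above can be sketched as a standalone model. Names (`pod_model`, `pod_set_mem_target_model`) are illustrative stand-ins, not the real hypervisor code, which lives in p2m_pod_set_mem_target().]

```c
#include <stdint.h>

/* Simplified model of the p2m_pod_set_mem_target() paths quoted in
 * this thread; field names mirror the discussion only. */
struct pod_model {
    long entry_count;   /* outstanding PoD entries */
    long tot_pages;     /* pages already allocated to the domain */
};

/* Returns the effective cache target, or -1 when the call is a no-op. */
static long pod_set_mem_target_model(const struct pod_model *p, long pod_target)
{
    /* P == B: nothing to do.  For a freshly built domain this fires
     * with entry_count == 0 and swallows the initial allocation call. */
    if (p->entry_count == 0)
        return -1;

    /* B < T': clamp to the outstanding entries and let the balloon
     * driver fill in the rest -- i.e. the ballooning-up case. */
    if (pod_target > p->entry_count)
        pod_target = p->entry_count;

    return pod_target;
}
```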

Maybe the best thing to do is to introduce a specific call to
initialize the PoD cache that would ignore entry_count?

 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 13:14:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 13:14:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Syjbd-0000tj-D8; Tue, 07 Aug 2012 13:13:53 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Syjbb-0000ta-ST
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 13:13:52 +0000
Received: from [85.158.143.99:35867] by server-3.bemta-4.messagelabs.com id
	85/37-01511-F8411205; Tue, 07 Aug 2012 13:13:51 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1344345230!30078076!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM3MzA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6461 invoked from network); 7 Aug 2012 13:13:50 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-3.tower-216.messagelabs.com with SMTP;
	7 Aug 2012 13:13:50 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 07 Aug 2012 14:13:50 +0100
Message-Id: <502130AD02000078000933EF@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Tue, 07 Aug 2012 14:13:49 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>
References: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
	<1344262325-26598-3-git-send-email-stefano.stabellini@eu.citrix.com>
	<5020011F0200007800092FD7@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208071257330.4645@kaball.uk.xensource.com>
	<1344343007.11339.93.camel@zakaz.uk.xensource.com>
In-Reply-To: <1344343007.11339.93.camel@zakaz.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xen.org>,
	"Tim Deegan \(3P\)" <Tim.Deegan@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 3/5] xen: few more xen_ulong_t substitutions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 07.08.12 at 14:36, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Tue, 2012-08-07 at 13:08 +0100, Stefano Stabellini wrote:
>> On Mon, 6 Aug 2012, Jan Beulich wrote:
>> > >>> On 06.08.12 at 16:12, Stefano Stabellini <stefano.stabellini@eu.citrix.com> 
> wrote:
>> > > There are still a few unsigned longs in the xen public interface: replace
>> > > them with xen_ulong_t.
>> > > 
>> > > Also typedef xen_ulong_t to uint64_t on ARM.
>> > 
>> > While this change by itself already looks suspicious to me, I don't
>> > follow what the global replacement is going to be good for, in
>> > particular when done in places that ARM should be completely
>> > ignorant of, e.g.
>> 
>> See below
>> 
>> 
>> > > --- a/xen/include/public/physdev.h
>> > > +++ b/xen/include/public/physdev.h
>> > > @@ -124,7 +124,7 @@ DEFINE_XEN_GUEST_HANDLE(physdev_set_iobitmap_t);
>> > >  #define PHYSDEVOP_apic_write             9
>> > >  struct physdev_apic {
>> > >      /* IN */
>> > > -    unsigned long apic_physbase;
>> > > +    xen_ulong_t apic_physbase;
>> > >      uint32_t reg;
>> > >      /* IN or OUT */
>> > >      uint32_t value;
>> > >...
>> 
>> This change is actually not required, considering that ARM doesn't have
>> an APIC. I changed apic_physbase to xen_ulong_t only for consistency,
>> but it wouldn't make any difference for ARM (or x86).
>> If you think that it is better to keep it unsigned long, I'll remove
>> this chunk for the patch.
>> 
>> 
>> > > --- a/xen/include/public/xen.h
>> > > +++ b/xen/include/public/xen.h
>> > > @@ -518,8 +518,8 @@ DEFINE_XEN_GUEST_HANDLE(mmu_update_t);
>> > >   * NB. The fields are natural register size for this architecture.
>> > >   */
>> > >  struct multicall_entry {
>> > > -    unsigned long op, result;
>> > > -    unsigned long args[6];
>> > > +    xen_ulong_t op, result;
>> > > +    xen_ulong_t args[6];
>> > 
>> > And here I really start to wonder - what use is it to put all 64-bit
>> > values here on a 32-bit arch? You certainly know a lot more about
>> > ARM than me, but this looks pretty inefficient, the more that
>> > you'll have to deal with checking the full values when converting
>> > to e.g. pointers anyway, in order to avoid behavioral differences
>> > between running on a 32- or 64-bit host. Zero-extending from
>> > 32-bits when in a 64-bit hypervisor wouldn't have this problem.
>> 
>> Actually the multicall_entry change is wrong, thanks for pointing it out.
>> 
>> The idea is that pointers are always 8 bytes sized and 8 bytes aligned,
>> except when they are passed as hypercall arguments, in which case a 32
>> bit guest would use 32 bit pointers and a 64 bit guest would use 64 bit
>> pointers.
>> 
>> Considering that each field of a multicall_entry is usually passed as a
>> hypercall parameter, they should all remain unsigned long.
> 
> If possible, please make them an explicitly sized type, even if it is
> now 32 bits.

??? This structure is already shared between 32-bit and 64-bit
implementations, and the fields are intentionally fitting both by
using "unsigned long". Are you suggesting to add #ifdef-ery to
public/xen.h?
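
[Editor's note: the ABI point can be illustrated with a compilable sketch. `multicall_entry_fixed` is a hypothetical variant for comparison; only the "natural" layout matches public/xen.h.]

```c
#include <stdint.h>

/* As in xen/include/public/xen.h: natural register size, so a 32-bit
 * guest sees 8 x 4-byte fields and a 64-bit guest 8 x 8-byte fields;
 * the hypervisor's compat layer translates between the two layouts. */
struct multicall_entry_natural {
    unsigned long op, result;
    unsigned long args[6];
};

/* Hypothetical explicitly sized variant: identical layout on every
 * architecture, but adopting it would change the existing 32-bit ABI. */
struct multicall_entry_fixed {
    uint64_t op, result;
    uint64_t args[6];
};
```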

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 13:19:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 13:19:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Syjge-0001B9-9W; Tue, 07 Aug 2012 13:19:04 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Syjgc-0001B2-5a
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 13:19:02 +0000
Received: from [85.158.143.99:21439] by server-2.bemta-4.messagelabs.com id
	FA/27-17938-5C511205; Tue, 07 Aug 2012 13:19:01 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-216.messagelabs.com!1344345540!20896912!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM3MzA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15774 invoked from network); 7 Aug 2012 13:19:00 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-216.messagelabs.com with SMTP;
	7 Aug 2012 13:19:00 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 07 Aug 2012 14:18:59 +0100
Message-Id: <502131E202000078000933F9@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Tue, 07 Aug 2012 14:18:58 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Stefano Stabellini" <stefano.stabellini@eu.citrix.com>,
	"Jean Guyader" <jean.guyader@gmail.com>
References: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
	<1344262325-26598-1-git-send-email-stefano.stabellini@eu.citrix.com>
	<501FFFB50200007800092FC7@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208061640270.4645@kaball.uk.xensource.com>
	<502004BE0200007800093028@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208071318430.4645@kaball.uk.xensource.com>
	<CAEBdQ921bVoRss3iuf1oLDrv9JBrhcvx=S16XsjrYWA2eg1Y8g@mail.gmail.com>
In-Reply-To: <CAEBdQ921bVoRss3iuf1oLDrv9JBrhcvx=S16XsjrYWA2eg1Y8g@mail.gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	Jean.Guyader@citrix.com, "Tim Deegan \(3P\)" <Tim.Deegan@citrix.com>
Subject: Re: [Xen-devel] [PATCH 1/5] xen: improve changes to
 xen_add_to_physmap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 07.08.12 at 14:40, Jean Guyader <jean.guyader@gmail.com> wrote:
> You could probably do something like
> 
> #ifdef __GNUC__
> # define UNION_NAME
> #else
> # define UNION_NAME u
> #endif
> union {
>     /* Number of pages to go through for gmfn_range */
>     uint16_t    size;
>     /* IFF gmfn_foreign */
>     domid_t foreign_domid;
> } UNION_NAME;
> 
> It's not ideal, but this way you keep binary compatibility and the
> code stays compatible with GCC.

Ah, yes, something along those lines might be okay; we already
have similar things e.g. for __DECL_REG() (note the additional
use of __STRICT_ANSI__ there, though). The most problematic
thing here is the name space cluttering - UNION_NAME is certainly
not good, but __DECL_REG really is too unspecific too (yet we
got away with it so far).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 13:29:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 13:29:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyjqT-0001LV-Co; Tue, 07 Aug 2012 13:29:13 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SyjqR-0001LN-EC
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 13:29:11 +0000
Received: from [85.158.143.99:16509] by server-1.bemta-4.messagelabs.com id
	9D/FE-24392-62811205; Tue, 07 Aug 2012 13:29:10 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-16.tower-216.messagelabs.com!1344346150!18982915!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM3MzA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3665 invoked from network); 7 Aug 2012 13:29:10 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-16.tower-216.messagelabs.com with SMTP;
	7 Aug 2012 13:29:10 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 07 Aug 2012 14:29:09 +0100
Message-Id: <502134440200007800093416@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Tue, 07 Aug 2012 14:29:08 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "George Dunlap" <George.Dunlap@eu.citrix.com>
References: <501173540200007800090A04@nat28.tlf.novell.com>
	<50116CD9.6000503@eu.citrix.com>
	<501FE96E0200007800092E25@nat28.tlf.novell.com>
	<501FED030200007800092E35@nat28.tlf.novell.com>
	<CAFLBxZZ5ifar7ou3NaeKne3b0rrw=wkjN6E1bEZ8+M=2w2bz2A@mail.gmail.com>
	<502123620200007800093377@nat28.tlf.novell.com>
	<CAFLBxZbq1mevbGwVMTq+M6VAs=EvAUt_G_acPyOvwbH7_GdL6Q@mail.gmail.com>
In-Reply-To: <CAFLBxZbq1mevbGwVMTq+M6VAs=EvAUt_G_acPyOvwbH7_GdL6Q@mail.gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] PoD code killing domain before it really gets
 started
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 07.08.12 at 15:13, George Dunlap <George.Dunlap@eu.citrix.com> wrote:
> On Tue, Aug 7, 2012 at 1:17 PM, Jan Beulich <JBeulich@suse.com> wrote:
>>>>> On 06.08.12 at 18:03, George Dunlap <George.Dunlap@eu.citrix.com> wrote:
>>> 2. Allocate the PoD cache before populating the p2m table
>>
>> So this doesn't work, the call simply has no effect (and never
>> reaches p2m_pod_set_cache_target()). Apparently because
>> of
>>
>>     /* P == B: Nothing to do. */
>>     if ( p2md->pod.entry_count == 0 )
>>         goto out;
>>
>> in p2m_pod_set_mem_target(). Now I'm not sure about the
>> proper adjustment here: Entirely dropping the conditional is
>> certainly wrong. Would
>>
>>     if ( p2md->pod.entry_count == 0 && d->tot_pages > 0 )
>>         goto out;
>>
>> be okay?
>>
>> But then later in that function we also have
>>
>>     /* B < T': Set the cache size equal to # of outstanding entries,
>>      * let the balloon driver fill in the rest. */
>>     if ( pod_target > p2md->pod.entry_count )
>>         pod_target = p2md->pod.entry_count;
>>
>> which in the case at hand would set pod_target to 0, and the
>> whole operation would again not have any effect afaict. So
>> maybe this was the reason to do this operation _after_ the
>> normal address space population?
> 
> Snap -- forgot about that.
> 
> The main thing is for set_mem_target() to be simple for the toolstack
> -- it's just supposed to say how much memory it wants the guest to
> use, and Xen is supposed to figure out how much memory the PoD cache
> needs.  The interface is that the toolstack is just supposed to call
> set_mem_target() after each time it changes the balloon target.  The
> idea was to be robust against the user setting arbitrary new targets
> before the balloon driver had reached the old target.  So the problem
> with allowing (pod_target > entry_count) is that that's the condition
> that happens when you are ballooning up.
> 
> Maybe the best thing to do is to introduce a specific call to
> initialize the PoD cache that would ignore entry_count?

Hmm, that would look more like a hack to me.

How about doing the initial check as suggested earlier

    if ( p2md->pod.entry_count == 0 && d->tot_pages > 0 )
        goto out;

and the latter check in a similar way

    if ( pod_target > p2md->pod.entry_count && d->tot_pages > 0 )
        pod_target = p2md->pod.entry_count;

(which would still take care of any ballooning activity)? Or are
there any other traps to fall into?

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 13:30:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 13:30:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Syjry-0001Q0-S0; Tue, 07 Aug 2012 13:30:46 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Syjry-0001Pb-4i
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 13:30:46 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1344346233!11222537!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgzNjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3504 invoked from network); 7 Aug 2012 13:30:34 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 13:30:34 -0000
X-IronPort-AV: E=Sophos;i="4.77,727,1336348800"; d="scan'208";a="13886432"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Aug 2012 13:30:33 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Tue, 7 Aug 2012
	14:30:33 +0100
Message-ID: <1344346231.11339.100.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Tue, 7 Aug 2012 14:30:31 +0100
In-Reply-To: <502130AD02000078000933EF@nat28.tlf.novell.com>
References: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
	<1344262325-26598-3-git-send-email-stefano.stabellini@eu.citrix.com>
	<5020011F0200007800092FD7@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208071257330.4645@kaball.uk.xensource.com>
	<1344343007.11339.93.camel@zakaz.uk.xensource.com>
	<502130AD02000078000933EF@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: xen-devel <xen-devel@lists.xen.org>,
	"Tim Deegan \(3P\)" <Tim.Deegan@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 3/5] xen: few more xen_ulong_t substitutions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-08-07 at 14:13 +0100, Jan Beulich wrote:
> >>> On 07.08.12 at 14:36, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > On Tue, 2012-08-07 at 13:08 +0100, Stefano Stabellini wrote:
> >> On Mon, 6 Aug 2012, Jan Beulich wrote:
> >> > >>> On 06.08.12 at 16:12, Stefano Stabellini <stefano.stabellini@eu.citrix.com> 
> > wrote:
> >> > > There are still a few unsigned longs in the xen public interface: replace
> >> > > them with xen_ulong_t.
> >> > > 
> >> > > Also typedef xen_ulong_t to uint64_t on ARM.
> >> > 
> >> > While this change by itself already looks suspicious to me, I don't
> >> > follow what the global replacement is going to be good for, in
> >> > particular when done in places that ARM should be completely
> >> > ignorant of, e.g.
> >> 
> >> See below
> >> 
> >> 
> >> > > --- a/xen/include/public/physdev.h
> >> > > +++ b/xen/include/public/physdev.h
> >> > > @@ -124,7 +124,7 @@ DEFINE_XEN_GUEST_HANDLE(physdev_set_iobitmap_t);
> >> > >  #define PHYSDEVOP_apic_write             9
> >> > >  struct physdev_apic {
> >> > >      /* IN */
> >> > > -    unsigned long apic_physbase;
> >> > > +    xen_ulong_t apic_physbase;
> >> > >      uint32_t reg;
> >> > >      /* IN or OUT */
> >> > >      uint32_t value;
> >> > >...
> >> 
> >> This change is actually not required, considering that ARM doesn't have
> >> an APIC. I changed apic_physbase to xen_ulong_t only for consistency,
> >> but it wouldn't make any difference for ARM (or x86).
> >> If you think that it is better to keep it unsigned long, I'll remove
> >> this chunk for the patch.
> >> 
> >> 
> >> > > --- a/xen/include/public/xen.h
> >> > > +++ b/xen/include/public/xen.h
> >> > > @@ -518,8 +518,8 @@ DEFINE_XEN_GUEST_HANDLE(mmu_update_t);
> >> > >   * NB. The fields are natural register size for this architecture.
> >> > >   */
> >> > >  struct multicall_entry {
> >> > > -    unsigned long op, result;
> >> > > -    unsigned long args[6];
> >> > > +    xen_ulong_t op, result;
> >> > > +    xen_ulong_t args[6];
> >> > 
> >> > And here I really start to wonder - what use is it to put all 64-bit
> >> > values here on a 32-bit arch? You certainly know a lot more about
> >> > ARM than me, but this looks pretty inefficient, the more that
> >> > you'll have to deal with checking the full values when converting
> >> > to e.g. pointers anyway, in order to avoid behavioral differences
> >> > between running on a 32- or 64-bit host. Zero-extending from
> >> > 32-bits when in a 64-bit hypervisor wouldn't have this problem.
> >> 
> >> Actually the multicall_entry change is wrong, thanks for pointing it out.
> >> 
> >> The idea is that pointers are always 8 bytes sized and 8 bytes aligned,
> >> except when they are passed as hypercall arguments, in which case a 32
> >> bit guest would use 32 bit pointers and a 64 bit guest would use 64 bit
> >> pointers.
> >> 
> >> Considering that each field of a multicall_entry is usually passed as a
> >> hypercall parameter, they should all remain unsigned long.
> > 
> > If possible, please make them an explicitly sized type, even if it is
> > now 32 bits.
> 
> ??? This structure is already shared between 32-bit and 64-bit
> implementations, and the fields are intentionally fitting both by
> using "unsigned long". Are you suggesting to add #ifdef-ery to
> public/xen.h?

Oh, I thought multicall_entry was in an arch header.

In any case I would have expected a typedef xen_multicall_arg_t or
something.

> 
> Jan
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 13:39:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 13:39:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Syjzk-0001hW-UU; Tue, 07 Aug 2012 13:38:48 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Syjzj-0001hO-0Q
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 13:38:47 +0000
Received: from [85.158.143.99:51448] by server-1.bemta-4.messagelabs.com id
	B4/C2-24392-66A11205; Tue, 07 Aug 2012 13:38:46 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-216.messagelabs.com!1344346725!27301062!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgzNjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16076 invoked from network); 7 Aug 2012 13:38:45 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-7.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 13:38:45 -0000
X-IronPort-AV: E=Sophos;i="4.77,727,1336348800"; d="scan'208";a="13886626"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Aug 2012 13:38:45 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Tue, 7 Aug 2012
	14:38:45 +0100
Message-ID: <1344346723.11339.101.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Tue, 7 Aug 2012 14:38:43 +0100
In-Reply-To: <1344337328.11339.92.camel@zakaz.uk.xensource.com>
References: <1344327916.11339.67.camel@zakaz.uk.xensource.com>
	<patchbomb.1344330533@cosworth.uk.xensource.com>
	<20512.61168.228964.615080@mariner.uk.xensource.com>
	<1344337328.11339.92.camel@zakaz.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 0 of 2] xl: fix localhost migration after
 25733:353bc0801b11
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-08-07 at 12:02 +0100, Ian Campbell wrote:

> # HG changeset patch
> # User Ian Campbell <ian.campbell@citrix.com>
> # Date 1344337292 -3600
> # Node ID c5f673d8b330d6195e2aa3bbf63bb594b4bc99ee
> # Parent  a7ad22e5525831dd491d7ee1fe538b7543404ac7
> libxl: write physical-device node if user did not supply a block script
> 
> This reverts one of the intentional changes from 25733:353bc0801b11.
> That change exposed an issue with the xl migration protocol, which
> although safe triggers the hotplug scripts device sharing logic.
> 
> For 4.2 we disable this logic by writing the physical-device xenstore
> node ourselves if a user did not supply a script. If the user did
> supply a script then we continue to rely on it to write the
> physical-device node (not least because the script may create the
> device and therefore it is not available before we run the script).
> 
> This means that to support localhost migration a block hotplug script
> needs to be robust against adding a device twice and should not
> deactivate the device until it has been removed twice.
> 
> This should be revisted for 4.3.
                     ^i
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

Ian J acked this IRL so I have committed.

Our reasoning is that at this stage in the release it is better to go
back to the previous behaviour, which, although strictly speaking wrong,
is harmless and has no API implications, rather than try to invent new
correct APIs on short notice etc.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 13:44:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 13:44:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Syk4k-0001q1-Mo; Tue, 07 Aug 2012 13:43:58 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Syk4j-0001pv-SH
	for xen-devel@lists.xensource.com; Tue, 07 Aug 2012 13:43:58 +0000
Received: from [85.158.143.35:29905] by server-1.bemta-4.messagelabs.com id
	E6/9D-24392-D9B11205; Tue, 07 Aug 2012 13:43:57 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1344347036!5935883!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM3MzA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15469 invoked from network); 7 Aug 2012 13:43:56 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-4.tower-21.messagelabs.com with SMTP;
	7 Aug 2012 13:43:56 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 07 Aug 2012 14:43:55 +0100
Message-Id: <502137BA0200007800093442@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Tue, 07 Aug 2012 14:43:54 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>,
	"Ian Jackson" <Ian.Jackson@eu.citrix.com>
References: <4FC766F7.2030802@tiscali.it>
	<20434.1848.747678.259199@mariner.uk.xensource.com>
	<1343824081.27221.84.camel@zakaz.uk.xensource.com>
	<20507.45344.552468.930223@mariner.uk.xensource.com>
In-Reply-To: <20507.45344.552468.930223@mariner.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xensource.com>,
	"fantonifabio@tiscali.it" <fantonifabio@tiscali.it>
Subject: Re: [Xen-devel] [PATCH] tools/hotplug/Linux/init.d/: added other
 xen kernel modules on xencommons start
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 03.08.12 at 13:08, Ian Jackson <Ian.Jackson@eu.citrix.com> wrote:
> Ian Campbell writes ("Re: [Xen-devel] [PATCH] tools/hotplug/Linux/init.d/: 
> added other xen kernel modules on xencommons start"):
>> On Fri, 2012-06-08 at 15:07 +0100, Ian Jackson wrote:
>> > Fabio Fantoni writes ("[Xen-devel] [PATCH] tools/hotplug/Linux/init.d/: 
> added other xen kernel modules on xencommons start"):
>> > > tools/hotplug/Linux/init.d/: added other xen kernel modules on 
>> > > xencommons start
>> > 
>> > This looks at least harmless to me.
>> > 
>> > I'm surprised, however, that these things aren't loaded automatically.
>> > For example, shouldn't the xenbus driver's enumeration automatically
>> > load blkback too ?
>> 
>> Yes it should; there is autoloading stuff for all the backends.
>> 
>> Not sure about gntalloc. I suspect not.
>> 
>> > Having said that, I'm inclined to apply this unless someone 
>> > explains that it's a bad idea.
> 
> I have applied it.
> 
> But we should still try to fix the upstream kernels not to need it.

And just as with the respective pending blktap item, we should
- make sure this gets backed out again after 4.2, so that the
  loading happens either automatically or only when the module is
  in fact needed, and
- not try only the pv-ops kernel's module names.
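To illustrate the second point, a xencommons-style loader could probe several candidate names per backend rather than assuming the pv-ops ones. This is a hedged sketch, not the actual init script, and the module names in the usage comment are examples only:

```shell
# Sketch only: try a list of candidate module names (pv-ops and classic
# Xen kernels name some backends differently) and report the first one
# that modprobe accepts. Returns non-zero if none of them load.
load_first_module() {
    for mod in "$@"; do
        if modprobe "$mod" 2>/dev/null; then
            echo "$mod"
            return 0
        fi
    done
    return 1
}

# Example (names illustrative): load_first_module xen-blkback blkbk
```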

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 13:44:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 13:44:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Syk5B-0001rq-46; Tue, 07 Aug 2012 13:44:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1Syk59-0001rf-WC
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 13:44:24 +0000
Received: from [85.158.143.99:54671] by server-1.bemta-4.messagelabs.com id
	F3/7E-24392-7BB11205; Tue, 07 Aug 2012 13:44:23 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-11.tower-216.messagelabs.com!1344347060!23147384!1
X-Originating-IP: [63.239.67.10]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13427 invoked from network); 7 Aug 2012 13:44:21 -0000
Received: from emvm-gh1-uea09.nsa.gov (HELO nsa.gov) (63.239.67.10)
	by server-11.tower-216.messagelabs.com with SMTP;
	7 Aug 2012 13:44:21 -0000
X-TM-IMSS-Message-ID: <800102b000039d67@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov
	([63.239.67.10]) with ESMTP (TREND IMSS SMTP Service 7.1) id
	800102b000039d67 ; Tue, 7 Aug 2012 09:44:34 -0400
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id q77Di6Za015344; 
	Tue, 7 Aug 2012 09:44:06 -0400
Message-ID: <50211BA6.1040607@tycho.nsa.gov>
Date: Tue, 07 Aug 2012 09:44:06 -0400
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Organization: National Security Agency
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120717 Thunderbird/14.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
	<1344263550-3941-17-git-send-email-dgdegra@tycho.nsa.gov>
	<501FFE560200007800092FBA@nat28.tlf.novell.com>
	<501FF0EB.1000900@tycho.nsa.gov>
	<5020D80402000078000931A4@nat28.tlf.novell.com>
In-Reply-To: <5020D80402000078000931A4@nat28.tlf.novell.com>
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 16/18] arch/x86: use XSM hooks for
 get_pg_owner access checks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/07/2012 02:55 AM, Jan Beulich wrote:
>>>> On 06.08.12 at 18:29, Daniel De Graaf <dgdegra@tycho.nsa.gov> wrote:
>> On 08/06/2012 11:26 AM, Jan Beulich wrote:
>>>>>> On 06.08.12 at 16:32, Daniel De Graaf <dgdegra@tycho.nsa.gov> wrote:
>>>> +static XSM_DEFAULT(int, mmuext_op) (struct domain *d, struct domain *f)
>>>> +{
>>>> +    if ( d != f && !IS_PRIV_FOR(d, f) )
>>>> +        return -EPERM;
>>>
>>> ... Dom0 is neither privileged for DOM_IO nor for DOM_XEN.
>>
>> Actually, it is. IS_PRIV_FOR returns true for any domain when called from an
>> IS_PRIV domain.
> 
> That's a side effect of the current way of handling this, not
> something that is either logical or designed to be that way (it
> certainly is bogus even now for DOM_XEN, and with
> disaggregation - afaiu your plans - it'll also become bogus for
> DOM_IO, where right now one could consider it half-way
> correct).
> 
> Jan
> 

In that case, I think it would make sense to modify these XSM hooks
when IS_PRIV_FOR is changed to not short-circuit on DOM_IO/DOM_XEN.
If you're suggesting changing the condition to something like
  ( d != f && !(IS_PRIV_FOR(d, f) || IS_PRIV(d)) )
I could do that, but I think that type of change would be best done
in another patch actually making IS_PRIV_FOR(dom0, DOM_XEN) == false.
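[For readers following along, here is a minimal, self-contained sketch — not Xen source; the domain structure and privilege predicates are stubbed-out models — of how the proposed condition differs from the current one once IS_PRIV_FOR no longer short-circuits for the special DOM_XEN/DOM_IO domains:]

```c
#include <stdbool.h>
#include <stddef.h>

/* Stub model only: real Xen domains and privilege checks look different. */
struct domain {
    bool is_priv;           /* fully privileged (e.g. dom0) */
    bool is_special;        /* models DOM_XEN / DOM_IO */
    struct domain *target;  /* domain this one is privileged for */
};

static bool IS_PRIV(const struct domain *d)
{
    return d->is_priv;
}

/* Hypothetical future IS_PRIV_FOR: no blanket pass for special domains. */
static bool IS_PRIV_FOR(const struct domain *d, const struct domain *t)
{
    if (t->is_special)
        return d->target == t;
    return IS_PRIV(d) || d->target == t;
}

/* Current hook condition: deny unless operating on self or privileged-for. */
static bool denied_current(struct domain *d, struct domain *f)
{
    return d != f && !IS_PRIV_FOR(d, f);
}

/* Proposed variant: additionally fall back to a plain IS_PRIV check, so a
 * fully privileged domain keeps access to DOM_XEN/DOM_IO after the change. */
static bool denied_proposed(struct domain *d, struct domain *f)
{
    return d != f && !(IS_PRIV_FOR(d, f) || IS_PRIV(d));
}
```

[Under this stub model, denied_current() rejects dom0's access to the special domain while denied_proposed() still permits it — the behavioural gap being deferred to a separate patch.]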

-- 
Daniel De Graaf
National Security Agency

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 13:56:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 13:56:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SykGv-00028x-D5; Tue, 07 Aug 2012 13:56:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SykGu-00028s-5y
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 13:56:32 +0000
Received: from [85.158.139.83:52196] by server-2.bemta-5.messagelabs.com id
	AB/C6-25118-F8E11205; Tue, 07 Aug 2012 13:56:31 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-5.tower-182.messagelabs.com!1344347790!30732871!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM3MzA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4042 invoked from network); 7 Aug 2012 13:56:30 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-5.tower-182.messagelabs.com with SMTP;
	7 Aug 2012 13:56:30 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 07 Aug 2012 14:56:30 +0100
Message-Id: <50213AAD020000780009347E@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Tue, 07 Aug 2012 14:56:29 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Daniel De Graaf" <dgdegra@tycho.nsa.gov>
References: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
	<1344263550-3941-17-git-send-email-dgdegra@tycho.nsa.gov>
	<501FFE560200007800092FBA@nat28.tlf.novell.com>
	<501FF0EB.1000900@tycho.nsa.gov>
	<5020D80402000078000931A4@nat28.tlf.novell.com>
	<50211BA6.1040607@tycho.nsa.gov>
In-Reply-To: <50211BA6.1040607@tycho.nsa.gov>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 16/18] arch/x86: use XSM hooks for
 get_pg_owner access checks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 07.08.12 at 15:44, Daniel De Graaf <dgdegra@tycho.nsa.gov> wrote:
> On 08/07/2012 02:55 AM, Jan Beulich wrote:
>>>>> On 06.08.12 at 18:29, Daniel De Graaf <dgdegra@tycho.nsa.gov> wrote:
>>> On 08/06/2012 11:26 AM, Jan Beulich wrote:
>>>>>>> On 06.08.12 at 16:32, Daniel De Graaf <dgdegra@tycho.nsa.gov> wrote:
>>>>> +static XSM_DEFAULT(int, mmuext_op) (struct domain *d, struct domain *f)
>>>>> +{
>>>>> +    if ( d != f && !IS_PRIV_FOR(d, f) )
>>>>> +        return -EPERM;
>>>>
>>>> ... Dom0 is neither privileged for DOM_IO nor for DOM_XEN.
>>>
>>> Actually, it is. IS_PRIV_FOR returns true for any domain when called from an
>>> IS_PRIV domain.
>> 
>> That's a side effect of the current way of handling this, not
>> something that is either logical or designed to be that way (it
>> certainly is bogus even now for DOM_XEN, and with
>> disaggregation - afaiu your plans - it'll also become bogus for
>> DOM_IO, where right now one could consider it half-way
>> correct).
> 
> In that case, I think it would make sense to modify these XSM hooks
> when IS_PRIV_FOR is changed to not short-circuit on DOM_IO/DOM_XEN.
> If you're suggesting changing the condition to something like
>   ( d != f && !(IS_PRIV_FOR(d, f) || IS_PRIV(d)) )
> I could do that, but I think that type of change would be best done
> in another patch actually making IS_PRIV_FOR(dom0, DOM_XEN) == false.

From a standpoint of wanting to keep logically distinct things
separate, I agree. I'm merely afraid that this will be overlooked
later, causing otherwise unnecessary loss of hair until the
now-hidden construct is found and adjusted.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 13:57:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 13:57:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SykHF-0002Au-TK; Tue, 07 Aug 2012 13:56:53 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1SykHF-0002Ak-5Q
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 13:56:53 +0000
Received: from [85.158.143.99:12262] by server-3.bemta-4.messagelabs.com id
	2E/3C-01511-4AE11205; Tue, 07 Aug 2012 13:56:52 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-2.tower-216.messagelabs.com!1344347811!25366147!1
X-Originating-IP: [63.239.67.10]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27288 invoked from network); 7 Aug 2012 13:56:51 -0000
Received: from emvm-gh1-uea09.nsa.gov (HELO nsa.gov) (63.239.67.10)
	by server-2.tower-216.messagelabs.com with SMTP;
	7 Aug 2012 13:56:51 -0000
X-TM-IMSS-Message-ID: <800c9d880003a05d@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov
	([63.239.67.10]) with ESMTP (TREND IMSS SMTP Service 7.1) id
	800c9d880003a05d ; Tue, 7 Aug 2012 09:57:14 -0400
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id q77DukJc016098; 
	Tue, 7 Aug 2012 09:56:46 -0400
Message-ID: <50211E9E.6090906@tycho.nsa.gov>
Date: Tue, 07 Aug 2012 09:56:46 -0400
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Organization: National Security Agency
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120717 Thunderbird/14.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1344289916-11518-1-git-send-email-dgdegra@tycho.nsa.gov>
	<1344332203.11339.79.camel@zakaz.uk.xensource.com>
In-Reply-To: <1344332203.11339.79.camel@zakaz.uk.xensource.com>
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH RFC/for-4.2?] libxl: Support backend domain
	ID for disks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/07/2012 05:36 AM, Ian Campbell wrote:
> On Mon, 2012-08-06 at 22:51 +0100, Daniel De Graaf wrote:
>> Allow specification of backend domains for disks, either in the config
>> file or via xl block-attach.
>>
>> A version of this patch was submitted in October 2011 but was not
>> suitable at the time because libxl did not support the "script=" option
>> for disks. Now that this option exists, it is possible to specify a
>> backend domain without needing to duplicate the device tree of the
>> domain providing the disk in the domain using libxl; just specify
>> script=/bin/true (or any more useful script) to prevent the block script
>> from running in the domain using libxl.
> 
> You can also set run_hotplug_scripts=0 in /etc/xen/xl.conf which causes
> the scripts to be run from udev again -- which means they should run in
> the appropriate domain automatically. (although I confess I don't
> understand the reliance on script= here, so perhaps I've got the totally
> wrong end of the stick)

No, I just missed that setting that option would also do this.
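[For context, the two approaches being compared can be sketched as config fragments; the domain name and device paths below are made up for illustration:]

```shell
# Alternative 1 (this patch plus script=): name a backend domain in the
# guest's disk spec and suppress the local block script. All values here
# are illustrative, not from the patch itself.
# disk = [ 'format=raw,vdev=xvda,access=rw,backenddomain=storagedom,script=/bin/true,target=/dev/vg/guest' ]

# Alternative 2 (Ian's suggestion): in /etc/xen/xl.conf, hand hotplug
# script execution back to udev, so the scripts run in whichever domain
# hosts the backend.
# run_hotplug_scripts=0
```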

> Given that this patch was originally posted so long ago, that the
> script= stuff just went in, that driver domains were on the TODO at one
> point (I think) and the relative simplicity of this patch I'm leaning
> towards taking this in 4.2.
> 
>> In order to support named backend domains like network-attach, the
>> prototype of xlu_disk_parse in libxlutil.h needs a libxl_ctx. Without
>> this parameter, it would only be possible to support numeric domain
>> IDs in the block device specification.
> 
> I'm not sure if using libxl in libxlu is a layering violation or not
> (perhaps Ian J has an opinion), but I suppose it is acceptable (the
> alternative is a twisty maze of callbacks).
> 
> If we are going to expose libxl down to libxlu maybe we should go all
> the way and add the ctx to the XLU_Config?
> 
>> Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
>>
>> ---
>>
From xen-devel-bounces@lists.xen.org Tue Aug 07 13:57:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 13:57:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SykHF-0002Au-TK; Tue, 07 Aug 2012 13:56:53 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1SykHF-0002Ak-5Q
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 13:56:53 +0000
Received: from [85.158.143.99:12262] by server-3.bemta-4.messagelabs.com id
	2E/3C-01511-4AE11205; Tue, 07 Aug 2012 13:56:52 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-2.tower-216.messagelabs.com!1344347811!25366147!1
X-Originating-IP: [63.239.67.10]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27288 invoked from network); 7 Aug 2012 13:56:51 -0000
Received: from emvm-gh1-uea09.nsa.gov (HELO nsa.gov) (63.239.67.10)
	by server-2.tower-216.messagelabs.com with SMTP;
	7 Aug 2012 13:56:51 -0000
X-TM-IMSS-Message-ID: <800c9d880003a05d@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov
	([63.239.67.10]) with ESMTP (TREND IMSS SMTP Service 7.1) id
	800c9d880003a05d ; Tue, 7 Aug 2012 09:57:14 -0400
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id q77DukJc016098; 
	Tue, 7 Aug 2012 09:56:46 -0400
Message-ID: <50211E9E.6090906@tycho.nsa.gov>
Date: Tue, 07 Aug 2012 09:56:46 -0400
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Organization: National Security Agency
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120717 Thunderbird/14.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1344289916-11518-1-git-send-email-dgdegra@tycho.nsa.gov>
	<1344332203.11339.79.camel@zakaz.uk.xensource.com>
In-Reply-To: <1344332203.11339.79.camel@zakaz.uk.xensource.com>
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH RFC/for-4.2?] libxl: Support backend domain
	ID for disks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/07/2012 05:36 AM, Ian Campbell wrote:
> On Mon, 2012-08-06 at 22:51 +0100, Daniel De Graaf wrote:
>> Allow specification of backend domains for disks, either in the config
>> file or via xl block-attach.
>>
>> A version of this patch was submitted in October 2011 but was not
>> suitable at the time because libxl did not support the "script=" option
>> for disks. Now that this option exists, it is possible to specify a
>> backend domain without needing to duplicate the device tree of the
>> domain providing the disk in the domain using libxl; just specify
>> script=/bin/true (or any more useful script) to prevent the block script
>> from running in the domain using libxl.
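For illustration, a guest config disk stanza combining these options might look like the following; the driver-domain name ("storagedom") and the device path are hypothetical, not taken from the patch:

```
# Disk served from a storage driver domain.  script=/bin/true makes the
# hotplug step a no-op in the domain running libxl, since the real block
# setup happens in the backend domain.
disk = [ '/dev/vg/guest-disk,raw,xvda,rw,backenddomain=storagedom,script=/bin/true' ]
```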
> 
> You can also set run_hotplug_scripts=0 in /etc/xen/xl.conf which causes
> the scripts to be run from udev again -- which means they should run in
> the appropriate domain automatically. (although I confess I don't
> understand the reliance on script= here, so perhaps I've got the totally
> wrong end of the stick)
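Concretely, the alternative described above is a one-line xl.conf setting; a sketch:

```
# /etc/xen/xl.conf: let udev trigger the hotplug scripts instead of xl,
# so they run in whichever domain hosts the backend.
run_hotplug_scripts=0
```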

No, I just missed that setting that option would also do this.

> Given that this patch was originally posted so long ago, that the
> script= stuff just went in, that driver domains were on the TODO at one
> point (I think) and the relative simplicity of this patch I'm leaning
> towards taking this in 4.2.
> 
>> In order to support named backend domains like network-attach, the
>> prototype of xlu_disk_parse in libxlutil.h needs a libxl_ctx. Without
>> this parameter, it would only be possible to support numeric domain
>> IDs in the block device specification.
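The numeric-versus-named distinction can be sketched as follows. This is an illustrative stand-in, not the patch's setbackend(): the real code would resolve names through the libxl_ctx (e.g. libxl_name_to_domid), and the lookup table here is a stub.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Stub name lookup, for illustration only; real code would ask libxl. */
int lookup_domid_by_name(const char *name, unsigned long *domid)
{
    if (strcmp(name, "storagedom") == 0) { *domid = 7; return 0; }
    return -1;
}

/* Sketch of how a backenddomain= value might be interpreted: a value
 * that is all digits is taken as a literal domid; anything else needs
 * a name lookup, which is why the parser needs a libxl_ctx. */
int parse_backend_domid(const char *value, unsigned long *domid)
{
    char *end;
    unsigned long v = strtoul(value, &end, 10);
    if (end != value && *end == '\0') {  /* entirely numeric */
        *domid = v;
        return 0;
    }
    return lookup_domid_by_name(value, domid);
}
```

A purely numeric spec never touches the lookup, so only named backends depend on the ctx being available.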
> 
> I'm not sure if using libxl in libxlu is a layering violation or not
> (perhaps Ian J has an opinion), but I suppose it is acceptable (the
> alternative is a twisty maze of callbacks).
> 
> If we are going to expose libxl down to libxlu maybe we should go all
> the way and add the ctx to the XLU_Config?
> 
>> Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
>>
>> ---
>>
>> This patch does not include the changes to tools/libxl/libxlu_disk_l.c
>> and tools/libxl/libxlu_disk_l.h because the diffs contain unrelated
>> changes due to different generator versions.
> 
> I'm afraid it did, but I've ignored them.
> 
>>  tools/libxl/libxlu_disk.c   |   3 +-
>>  tools/libxl/libxlu_disk_i.h |   3 +-
>>  tools/libxl/libxlu_disk_l.c | 581 ++++++++++++++++++++++----------------------
>>  tools/libxl/libxlu_disk_l.h |  24 +-
>>  tools/libxl/libxlu_disk_l.l |   8 +
>>  tools/libxl/libxlutil.h     |   2 +-
>>  tools/libxl/xl_cmdimpl.c    |   6 +-
> 
> This patch should also touch docs/misc/xl-disk-configuration.txt.

I'll add that in v2.

> [...]
>> @@ -169,6 +176,7 @@ devtype=[^,]*,?     { xlu__disk_err(DPC,yytext,"unknown value for type"); }
>>
>>  access=[^,]*,? { STRIP(','); setaccess(DPC, FROMEQUALS); }
>>  backendtype=[^,]*,? { STRIP(','); setbackendtype(DPC,FROMEQUALS); }
>> +backenddomain=[^,]*,? { STRIP(','); setbackend(DPC,FROMEQUALS); }
> 
> Is this compatible with xend? Grep doesn't show the string
> "backenddomain" at all. Maybe xend simply didn't have this
> functionality?

The implementation I have for xend was in the comma-separated syntax only,
but I think that may have been due to non-upstreamed patches.
 
> xl seems to use just backend for network devices. They probably ought to
> match.

Originally I had the patch using "backend" but received comments that it could
be confused with backendtype. Matching the network device specification is a
good idea, and I'm fine with either name.
 
>>  vdev=[^,]*,?   { STRIP(','); SAVESTRING("vdev", vdev, FROMEQUALS); }
>>  script=[^,]*,? { STRIP(','); SAVESTRING("script", script, FROMEQUALS); }
>> diff --git a/tools/libxl/libxlutil.h b/tools/libxl/libxlutil.h
>> index 0333e55..87eb399 100644
>> --- a/tools/libxl/libxlutil.h
>> +++ b/tools/libxl/libxlutil.h
>> @@ -72,7 +72,7 @@ const char *xlu_cfg_get_listitem(const XLU_ConfigList*, int entry);
>>   */
>>
>>  int xlu_disk_parse(XLU_Config *cfg, int nspecs, const char *const *specs,
>> -                   libxl_device_disk *disk);
>> +                   libxl_device_disk *disk, libxl_ctx *ctx);
>>    /* disk must have been initialised.
>>     *
>>     * On error, returns errno value.  Bad strings cause EINVAL and
>> diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
>> index 138cd72..fd00d61 100644
>> --- a/tools/libxl/xl_cmdimpl.c
>> +++ b/tools/libxl/xl_cmdimpl.c
>> @@ -420,7 +420,7 @@ static void parse_disk_config_multistring(XLU_Config **config,
>>          if (!*config) { perror("xlu_cfg_init"); exit(-1); }
>>      }
>>
>> -    e = xlu_disk_parse(*config, nspecs, specs, disk);
>> +    e = xlu_disk_parse(*config, nspecs, specs, disk, ctx);
>>      if (e == EINVAL) exit(-1);
>>      if (e) {
>>          fprintf(stderr,"xlu_disk_parse failed: %s\n",strerror(errno));
>> @@ -5335,7 +5335,7 @@ int main_networkdetach(int argc, char **argv)
>>  int main_blockattach(int argc, char **argv)
>>  {
>>      int opt;
>> -    uint32_t fe_domid, be_domid = 0;
>> +    uint32_t fe_domid;
>>      libxl_device_disk disk = { 0 };
>>      XLU_Config *config = 0;
>>
>> @@ -5351,8 +5351,6 @@ int main_blockattach(int argc, char **argv)
>>      parse_disk_config_multistring
>>          (&config, argc-optind, (const char* const*)argv + optind, &disk);
>>
>> -    disk.backend_domid = be_domid;
>> -
> 
> xm block-attach's syntax was (allegedly): 
>         Usage: xm block-attach <Domain> <BackDev> <FrontDev> <Mode> [BackDomain]
> 
> i.e. it accepted backdomain on the command line. Do we want/need to
> support that? I'd be happy enough not to.
> 
>>      if (dryrun_only) {
>>          char *json = libxl_device_disk_to_json(ctx, &disk);
>>          printf("disk: %s\n", json);
>> --
>> 1.7.11.2
>>
> 
> 
> 


-- 
Daniel De Graaf
National Security Agency

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 13:57:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 13:57:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SykHQ-0002CI-9V; Tue, 07 Aug 2012 13:57:04 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SykHO-0002C2-S6
	for xen-devel@lists.xensource.com; Tue, 07 Aug 2012 13:57:03 +0000
Received: from [85.158.138.51:43035] by server-9.bemta-3.messagelabs.com id
	19/D7-14615-EAE11205; Tue, 07 Aug 2012 13:57:02 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-4.tower-174.messagelabs.com!1344347821!30810696!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgzNjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31642 invoked from network); 7 Aug 2012 13:57:01 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-4.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 13:57:01 -0000
X-IronPort-AV: E=Sophos;i="4.77,727,1336348800"; d="scan'208";a="13887142"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Aug 2012 13:57:01 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 7 Aug 2012 14:57:01 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1SykHM-0008JB-VR;
	Tue, 07 Aug 2012 13:57:01 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1SykHM-0002C3-IA;
	Tue, 07 Aug 2012 14:57:00 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13568-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 7 Aug 2012 14:57:00 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13568: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13568 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13568/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl          11 guest-localmigrate        fail REGR. vs. 13536
 test-amd64-i386-xl-multivcpu 11 guest-localmigrate        fail REGR. vs. 13536
 test-amd64-i386-xl-credit2   11 guest-localmigrate        fail REGR. vs. 13536
 test-amd64-i386-xl           11 guest-localmigrate        fail REGR. vs. 13536
 test-i386-i386-xl            11 guest-localmigrate        fail REGR. vs. 13536
 test-amd64-amd64-xl-winxpsp3  9 guest-localmigrate        fail REGR. vs. 13536
 test-i386-i386-xl-winxpsp3    9 guest-localmigrate        fail REGR. vs. 13536
 test-amd64-i386-xl-win7-amd64  9 guest-localmigrate       fail REGR. vs. 13536
 test-amd64-amd64-xl-win7-amd64  9 guest-localmigrate      fail REGR. vs. 13536
 test-amd64-i386-xl-winxpsp3-vcpus1  9 guest-localmigrate  fail REGR. vs. 13536
 test-i386-i386-xl-win         9 guest-localmigrate        fail REGR. vs. 13536
 test-amd64-i386-xl-win-vcpus1  9 guest-localmigrate       fail REGR. vs. 13536
 test-amd64-amd64-xl-win       9 guest-localmigrate        fail REGR. vs. 13536

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13536
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13536
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13536
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13536
 test-amd64-amd64-xl-sedf     11 guest-localmigrate        fail REGR. vs. 13536

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass

version targeted for testing:
 xen                  353bc0801b11
baseline version:
 xen                  3d17148e465c

------------------------------------------------------------
People who touched revisions under test:
  Christoph Egger <Christoph.Egger@amd.com>
  Dario Faggioli <dario.faggioli@citrix.com>
  David Vrabel <david.vrabel@citrix.com>
  Fabio Fantoni <fabio.fantoni@heliman.it>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Campbell <ian.campbell@eu.citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Olaf Hering <olaf@aepfle.de>
  Roger Pau Monne <roger.pau@citrix.com>
  Santosh Jodh <santosh.jodh@citrix.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-i386-i386-xl                                            fail    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   fail    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 835 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 14:40:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 14:40:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SykxR-00037w-Ie; Tue, 07 Aug 2012 14:40:29 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andres@lagarcavilla.org>) id 1SykxQ-00037q-5z
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 14:40:28 +0000
Received: from [85.158.143.35:24869] by server-2.bemta-4.messagelabs.com id
	CC/83-17938-BD821205; Tue, 07 Aug 2012 14:40:27 +0000
X-Env-Sender: andres@lagarcavilla.org
X-Msg-Ref: server-13.tower-21.messagelabs.com!1344350410!17850431!1
X-Originating-IP: [208.97.132.119]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA4Ljk3LjEzMi4xMTkgPT4gMzAwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23272 invoked from network); 7 Aug 2012 14:40:13 -0000
Received: from caiajhbdcbbj.dreamhost.com (HELO homiemail-a75.g.dreamhost.com)
	(208.97.132.119) by server-13.tower-21.messagelabs.com with SMTP;
	7 Aug 2012 14:40:13 -0000
Received: from homiemail-a75.g.dreamhost.com (localhost [127.0.0.1])
	by homiemail-a75.g.dreamhost.com (Postfix) with ESMTP id 0B9D85EC080;
	Tue,  7 Aug 2012 07:40:10 -0700 (PDT)
DomainKey-Signature: a=rsa-sha1; c=nofws; d=lagarcavilla.org; h=message-id
	:in-reply-to:references:date:subject:from:to:cc:reply-to
	:mime-version:content-type:content-transfer-encoding; q=dns; s=
	lagarcavilla.org; b=kG0UCB5Zbozz/QTWOYp0L3P1SmIJqpK7FOydHGsHYVMJ
	s4G/YbzZxfCZjAu6cjTQ669QzQ2HwqbFMGDx8OEx1o3o5h8sLeuvclyLCY9o1BEh
	hnKno9SjiicAmy4E6ZRJJ+Urut34FNuzGMq3ZThyA87iNs8oIQ7q8pwtsVwgan4=
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed; d=lagarcavilla.org; h=
	message-id:in-reply-to:references:date:subject:from:to:cc
	:reply-to:mime-version:content-type:content-transfer-encoding;
	s=lagarcavilla.org; bh=VdHXbGSdBOZBg9jS3el6+Eujs70=; b=lQYdKh0J
	MqZtm+wI9cEDo/fKOHS8pmhxqvnS0Pcj57mfYdzCxvifQ6uuszlsD2DzBckRE7HX
	MugsezroA7nsuT9pfsF1YBukwMXb4MChW25BMLZZsBdHP6FV8S40n6Nr9rLn9nXl
	CJSr8qYnL2OLl3fVKCLhknzAZIlT+0cACMU=
Received: from webmail.lagarcavilla.org (caiajhbihbdd.dreamhost.com
	[208.97.187.133]) (Authenticated sender: andres@lagarcavilla.com)
	by homiemail-a75.g.dreamhost.com (Postfix) with ESMTPA id 843015EC07E; 
	Tue,  7 Aug 2012 07:40:09 -0700 (PDT)
Received: from 206.223.182.18 (proxying for 206.223.182.18)
	(SquirrelMail authenticated user andres@lagarcavilla.com)
	by webmail.lagarcavilla.org with HTTP; Tue, 7 Aug 2012 07:40:09 -0700
Message-ID: <d24761e13f3727f9131899f7af7f2a28.squirrel@webmail.lagarcavilla.org>
In-Reply-To: <mailman.10292.1344326858.1399.xen-devel@lists.xen.org>
References: <mailman.10292.1344326858.1399.xen-devel@lists.xen.org>
Date: Tue, 7 Aug 2012 07:40:09 -0700
From: "Andres Lagar-Cavilla" <andres@lagarcavilla.org>
To: xen-devel@lists.xen.org
User-Agent: SquirrelMail/1.4.21
MIME-Version: 1.0
Cc: george.dunlap@eu.citrix.com, Ian.Jackson@eu.citrix.com, tim@xen.org,
	JBeulich@suse.com
Subject: Re: [Xen-devel] PoD code killing domain before it really gets
	started
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: andres@lagarcavilla.org
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>>> On 06.08.12 at 18:03, George Dunlap <George.Dunlap@eu.citrix.com>
>>>> wrote:
>> I guess there are two problems with that:
>> * As you've seen, apparently dom0 may access these pages before any
>> faults happen.
>> * If it happens that reclaim_single is below the only zeroed page, the
>> guest will crash even when there is reclaim-able memory available.
>>
>> Two ways we could fix this:
>> 1. Remove dom0 accesses (what on earth could be looking at a
>> not-yet-created VM?)
>
> I'm told it's a monitoring daemon, and yes, they are intending to
> adjust it to first query the GFN's type (and skip the access
> when it's not yet populated). But wait, I didn't check the code
> when I recommended this - XEN_DOMCTL_getpageframeinfo{2,3}
> also call get_page_from_gfn() with P2M_ALLOC, so would also
> trigger the PoD code (in -unstable at least) - Tim, was that really
> a correct adjustment in 25355:974ad81bb68b? It looks to be a
> 1:1 translation, but is that really necessary? If one wanted to
> find out whether a page is PoD to avoid getting it populated,
> how would that be done from outside the hypervisor? Would
> we need XEN_DOMCTL_getpageframeinfo4 for this?
>
>> 2. Allocate the PoD cache before populating the p2m table
>> 3. Make it so that some accesses fail w/o crashing the guest?  I don't
>> see how that's really practical.
>
> What's wrong with telling control tools that a certain page is
> unpopulated (from which they will be able to imply that's it's all
> clear from the guest's pov)? Even outside of the current problem,
> I would think that's more efficient than allocating the page. Of
> course, the control tools need to be able to cope with that. And
> it may also be necessary to distinguish between read and
> read/write mappings being established (and for r/w ones the
> option of populating at access time rather than at creation time
> would need to be explored).

I wouldn't be opposed to some form of getpageframeinfo4. It's not just PoD
we are talking about here. Is the page paged out? Is the page shared?

Right now we have global per-domain queries (domaininfo), or individual
per-gfn debug memctls. A batched interface with richer information would
be a blessing for debugging and diagnosis.

The first order of business is exposing the type. Do we really want to
expose the whole range of p2m_* types or just "really useful" ones like
is_shared, is_pod, is_paged, is_normal? An argument for the former is that
the mem event interface already pumps the p2m_* type up the stack.

The other useful bit of information I can think of is exposing the shared
ref count.

My two cents
Andres

>
>> 4. Change the sweep routine so that the lower 2MiB gets swept
>>
>> #2 would require us to use all PoD entries when building the p2m
>> table, thus addressing the mail you mentioned from 25 July*.  Given
>> that you don't want #1, it seems like #2 is the best option.
>>
>> No matter what we do, the sweep routine for 4.2 should be re-written
>> to search all of memory at least once (maybe with a timeout for
>> watchdogs), since it's only called in an actual emergency.
>>
>> Let me take a look...
>
> Thanks!
>
> Jan



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 14:49:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 14:49:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Syl6A-0003MJ-KQ; Tue, 07 Aug 2012 14:49:30 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Santosh.Jodh@citrix.com>) id 1Syl68-0003ME-PT
	for xen-devel@lists.xensource.com; Tue, 07 Aug 2012 14:49:29 +0000
X-Env-Sender: Santosh.Jodh@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1344350961!9004994!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjMwNzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20756 invoked from network); 7 Aug 2012 14:49:22 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 14:49:22 -0000
X-IronPort-AV: E=Sophos;i="4.77,727,1336363200"; d="scan'208";a="33833999"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Aug 2012 10:49:19 -0400
Received: from REDBLD-XS.ad.xensource.com (10.232.6.25) by
	smtprelay.citrix.com (10.13.107.66) with Microsoft SMTP Server id
	8.3.213.0; Tue, 7 Aug 2012 10:49:19 -0400
MIME-Version: 1.0
X-Mercurial-Node: 8b634f1786825a98ce700b2ea77385e0f5b84575
Message-ID: <8b634f1786825a98ce70.1344350958@REDBLD-XS.ad.xensource.com>
User-Agent: Mercurial-patchbomb/1.6.4
Date: Tue, 7 Aug 2012 07:49:18 -0700
From: Santosh Jodh <santosh.jodh@citrix.com>
To: <xen-devel@lists.xensource.com>
Cc: wei.wang2@amd.com, tim@xen.org, xiantao.zhang@intel.com
Subject: [Xen-devel] [PATCH] dump_p2m_table: For IOMMU
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add a new key handler 'I' to dump the IOMMU p2m table for each domain.
Dumping is skipped for domain0.
Intel and AMD provide their own iommu_ops handlers for dumping the p2m table.

Signed-off-by: Santosh Jodh <santosh.jodh@citrix.com>

diff -r 3d17148e465c -r 8b634f178682 xen/drivers/passthrough/amd/pci_amd_iommu.c
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c	Thu Aug 02 11:49:37 2012 +0200
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c	Tue Aug 07 07:46:14 2012 -0700
@@ -22,6 +22,7 @@
 #include <xen/pci.h>
 #include <xen/pci_regs.h>
 #include <xen/paging.h>
+#include <xen/softirq.h>
 #include <asm/hvm/iommu.h>
 #include <asm/amd-iommu.h>
 #include <asm/hvm/svm/amd-iommu-proto.h>
@@ -512,6 +513,69 @@ static int amd_iommu_group_id(u16 seg, u
 
 #include <asm/io_apic.h>
 
+static void amd_dump_p2m_table_level(struct page_info* pg, int level, u64 gpa)
+{
+    u64 address;
+    void *table_vaddr, *pde;
+    u64 next_table_maddr;
+    int index, next_level, present;
+    u32 *entry;
+
+    table_vaddr = __map_domain_page(pg);
+    if ( table_vaddr == NULL )
+    {
+        printk("Failed to map IOMMU domain page %-16lx\n", page_to_maddr(pg));
+        return;
+    }
+
+    if ( level > 1 )
+    {
+        for ( index = 0; index < PTE_PER_TABLE_SIZE; index++ )
+        {
+            if ( !(index % 2) )
+                process_pending_softirqs();
+
+            pde = table_vaddr + (index * IOMMU_PAGE_TABLE_ENTRY_SIZE);
+            next_table_maddr = amd_iommu_get_next_table_from_pte(pde);
+            entry = (u32*)pde;
+
+            next_level = get_field_from_reg_u32(entry[0],
+                                                IOMMU_PDE_NEXT_LEVEL_MASK,
+                                                IOMMU_PDE_NEXT_LEVEL_SHIFT);
+
+            present = get_field_from_reg_u32(entry[0],
+                                             IOMMU_PDE_PRESENT_MASK,
+                                             IOMMU_PDE_PRESENT_SHIFT);
+
+            address = gpa + amd_offset_level_address(index, level);
+            if ( (next_table_maddr != 0) && (next_level != 0)
+                && present )
+            {
+                amd_dump_p2m_table_level(
+                    maddr_to_page(next_table_maddr), level - 1, address);
+            }
+
+            if ( present )
+            {
+                printk("gfn: %-16lx  mfn: %-16lx\n",
+                       address, next_table_maddr);
+            }
+        }
+    }
+
+    unmap_domain_page(table_vaddr);
+}
+
+static void amd_dump_p2m_table(struct domain *d)
+{
+    struct hvm_iommu *hd  = domain_hvm_iommu(d);
+
+    if ( !hd->root_table ) 
+        return;
+
+    amd_dump_p2m_table_level(hd->root_table, hd->paging_mode, 0);
+}
+
 const struct iommu_ops amd_iommu_ops = {
     .init = amd_iommu_domain_init,
     .dom0_init = amd_iommu_dom0_init,
@@ -531,4 +595,5 @@ const struct iommu_ops amd_iommu_ops = {
     .resume = amd_iommu_resume,
     .share_p2m = amd_iommu_share_p2m,
     .crash_shutdown = amd_iommu_suspend,
+    .dump_p2m_table = amd_dump_p2m_table,
 };
diff -r 3d17148e465c -r 8b634f178682 xen/drivers/passthrough/iommu.c
--- a/xen/drivers/passthrough/iommu.c	Thu Aug 02 11:49:37 2012 +0200
+++ b/xen/drivers/passthrough/iommu.c	Tue Aug 07 07:46:14 2012 -0700
@@ -18,6 +18,7 @@
 #include <asm/hvm/iommu.h>
 #include <xen/paging.h>
 #include <xen/guest_access.h>
+#include <xen/keyhandler.h>
 #include <xen/softirq.h>
 #include <xsm/xsm.h>
 
@@ -54,6 +55,8 @@ bool_t __read_mostly amd_iommu_perdev_in
 
 DEFINE_PER_CPU(bool_t, iommu_dont_flush_iotlb);
 
+void setup_iommu_dump(void);
+
 static void __init parse_iommu_param(char *s)
 {
     char *ss;
@@ -119,6 +122,7 @@ void __init iommu_dom0_init(struct domai
     if ( !iommu_enabled )
         return;
 
+    setup_iommu_dump();
     d->need_iommu = !!iommu_dom0_strict;
     if ( need_iommu(d) )
     {
@@ -654,6 +658,46 @@ int iommu_do_domctl(
     return ret;
 }
 
+static void iommu_dump_p2m_table(unsigned char key)
+{
+    struct domain *d;
+    const struct iommu_ops *ops;
+
+    if ( !iommu_enabled )
+    {
+        printk("IOMMU not enabled!\n");
+        return;
+    }
+
+    ops = iommu_get_ops();
+    for_each_domain(d)
+    {
+        if ( !d->domain_id )
+            continue;
+
+        if ( iommu_use_hap_pt(d) )
+        {
+            printk("\ndomain%d IOMMU p2m table shared with MMU: \n", d->domain_id);
+            continue;
+        }
+
+        printk("\ndomain%d IOMMU p2m table: \n", d->domain_id);
+        ops->dump_p2m_table(d);
+    }
+}
+
+static struct keyhandler iommu_p2m_table = {
+    .diagnostic = 0,
+    .u.fn = iommu_dump_p2m_table,
+    .desc = "dump iommu p2m table"
+};
+
+void __init setup_iommu_dump(void)
+{
+    register_keyhandler('I', &iommu_p2m_table);
+}
+
+
 /*
  * Local variables:
  * mode: C
diff -r 3d17148e465c -r 8b634f178682 xen/drivers/passthrough/vtd/iommu.c
--- a/xen/drivers/passthrough/vtd/iommu.c	Thu Aug 02 11:49:37 2012 +0200
+++ b/xen/drivers/passthrough/vtd/iommu.c	Tue Aug 07 07:46:14 2012 -0700
@@ -31,6 +31,7 @@
 #include <xen/pci.h>
 #include <xen/pci_regs.h>
 #include <xen/keyhandler.h>
+#include <xen/softirq.h>
 #include <asm/msi.h>
 #include <asm/irq.h>
 #if defined(__i386__) || defined(__x86_64__)
@@ -2356,6 +2357,56 @@ static void vtd_resume(void)
     }
 }
 
+static void vtd_dump_p2m_table_level(u64 pt_maddr, int level, u64 gpa)
+{
+    u64 address;
+    int i;
+    struct dma_pte *pt_vaddr, *pte;
+    int next_level;
+
+    if ( pt_maddr == 0 )
+        return;
+
+    pt_vaddr = (struct dma_pte *)map_vtd_domain_page(pt_maddr);
+    if ( pt_vaddr == NULL )
+    {
+        printk("Failed to map VT-D domain page %-16lx\n", pt_maddr);
+        return;
+    }
+
+    next_level = level - 1;
+    for ( i = 0; i < PTE_NUM; i++ )
+    {
+        if ( !(i % 2) )
+            process_pending_softirqs();
+
+        pte = &pt_vaddr[i];
+        if ( !dma_pte_present(*pte) )
+            continue;
+        
+        address = gpa + offset_level_address(i, level);
+        if ( next_level >= 1 )
+            vtd_dump_p2m_table_level(dma_pte_addr(*pte), next_level, address);
+
+        if ( level == 1 )
+            printk("gfn: %-16lx mfn: %-16lx superpage=%d\n", 
+                    address >> PAGE_SHIFT_4K, pte->val >> PAGE_SHIFT_4K, dma_pte_superpage(*pte)? 1 : 0);
+    }
+
+    unmap_vtd_domain_page(pt_vaddr);
+}
+
+static void vtd_dump_p2m_table(struct domain *d)
+{
+    struct hvm_iommu *hd;
+
+    if ( list_empty(&acpi_drhd_units) )
+        return;
+
+    hd = domain_hvm_iommu(d);
+    vtd_dump_p2m_table_level(hd->pgd_maddr, agaw_to_level(hd->agaw), 0);
+}
+
 const struct iommu_ops intel_iommu_ops = {
     .init = intel_iommu_domain_init,
     .dom0_init = intel_iommu_dom0_init,
@@ -2378,6 +2429,7 @@ const struct iommu_ops intel_iommu_ops =
     .crash_shutdown = vtd_crash_shutdown,
     .iotlb_flush = intel_iommu_iotlb_flush,
     .iotlb_flush_all = intel_iommu_iotlb_flush_all,
+    .dump_p2m_table = vtd_dump_p2m_table,
 };
 
 /*
diff -r 3d17148e465c -r 8b634f178682 xen/drivers/passthrough/vtd/iommu.h
--- a/xen/drivers/passthrough/vtd/iommu.h	Thu Aug 02 11:49:37 2012 +0200
+++ b/xen/drivers/passthrough/vtd/iommu.h	Tue Aug 07 07:46:14 2012 -0700
@@ -248,6 +248,8 @@ struct context_entry {
 #define level_to_offset_bits(l) (12 + (l - 1) * LEVEL_STRIDE)
 #define address_level_offset(addr, level) \
             ((addr >> level_to_offset_bits(level)) & LEVEL_MASK)
+#define offset_level_address(offset, level) \
+            ((u64)(offset) << level_to_offset_bits(level))
 #define level_mask(l) (((u64)(-1)) << level_to_offset_bits(l))
 #define level_size(l) (1 << level_to_offset_bits(l))
 #define align_to_level(addr, l) ((addr + level_size(l) - 1) & level_mask(l))
@@ -277,6 +279,7 @@ struct dma_pte {
 #define dma_set_pte_addr(p, addr) do {\
             (p).val |= ((addr) & PAGE_MASK_4K); } while (0)
 #define dma_pte_present(p) (((p).val & 3) != 0)
+#define dma_pte_superpage(p) (((p).val & (1<<7)) != 0)
 
 /* interrupt remap entry */
 struct iremap_entry {
diff -r 3d17148e465c -r 8b634f178682 xen/include/asm-x86/hvm/svm/amd-iommu-defs.h
--- a/xen/include/asm-x86/hvm/svm/amd-iommu-defs.h	Thu Aug 02 11:49:37 2012 +0200
+++ b/xen/include/asm-x86/hvm/svm/amd-iommu-defs.h	Tue Aug 07 07:46:14 2012 -0700
@@ -38,6 +38,10 @@
 #define PTE_PER_TABLE_ALLOC(entries)	\
 	PAGE_SIZE * (PTE_PER_TABLE_ALIGN(entries) >> PTE_PER_TABLE_SHIFT)
 
+#define amd_offset_level_address(offset, level) \
+      	((u64)(offset) << ((PTE_PER_TABLE_SHIFT * \
+                             (level - IOMMU_PAGING_MODE_LEVEL_1))))
+
 #define PCI_MIN_CAP_OFFSET	0x40
 #define PCI_MAX_CAP_BLOCKS	48
 #define PCI_CAP_PTR_MASK	0xFC
diff -r 3d17148e465c -r 8b634f178682 xen/include/xen/iommu.h
--- a/xen/include/xen/iommu.h	Thu Aug 02 11:49:37 2012 +0200
+++ b/xen/include/xen/iommu.h	Tue Aug 07 07:46:14 2012 -0700
@@ -141,6 +141,7 @@ struct iommu_ops {
     void (*crash_shutdown)(void);
     void (*iotlb_flush)(struct domain *d, unsigned long gfn, unsigned int page_count);
     void (*iotlb_flush_all)(struct domain *d);
+    void (*dump_p2m_table)(struct domain *d);
 };
 
 void iommu_update_ire_from_apic(unsigned int apic, unsigned int reg, unsigned int value);

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

     .crash_shutdown = vtd_crash_shutdown,
     .iotlb_flush = intel_iommu_iotlb_flush,
     .iotlb_flush_all = intel_iommu_iotlb_flush_all,
+    .dump_p2m_table = vtd_dump_p2m_table,
 };
 
 /*
diff -r 3d17148e465c -r 8b634f178682 xen/drivers/passthrough/vtd/iommu.h
--- a/xen/drivers/passthrough/vtd/iommu.h	Thu Aug 02 11:49:37 2012 +0200
+++ b/xen/drivers/passthrough/vtd/iommu.h	Tue Aug 07 07:46:14 2012 -0700
@@ -248,6 +248,8 @@ struct context_entry {
 #define level_to_offset_bits(l) (12 + (l - 1) * LEVEL_STRIDE)
 #define address_level_offset(addr, level) \
             ((addr >> level_to_offset_bits(level)) & LEVEL_MASK)
+#define offset_level_address(offset, level) \
+            ((u64)(offset) << level_to_offset_bits(level))
 #define level_mask(l) (((u64)(-1)) << level_to_offset_bits(l))
 #define level_size(l) (1 << level_to_offset_bits(l))
 #define align_to_level(addr, l) ((addr + level_size(l) - 1) & level_mask(l))
@@ -277,6 +279,7 @@ struct dma_pte {
 #define dma_set_pte_addr(p, addr) do {\
             (p).val |= ((addr) & PAGE_MASK_4K); } while (0)
 #define dma_pte_present(p) (((p).val & 3) != 0)
+#define dma_pte_superpage(p) (((p).val & (1<<7)) != 0)
 
 /* interrupt remap entry */
 struct iremap_entry {
diff -r 3d17148e465c -r 8b634f178682 xen/include/asm-x86/hvm/svm/amd-iommu-defs.h
--- a/xen/include/asm-x86/hvm/svm/amd-iommu-defs.h	Thu Aug 02 11:49:37 2012 +0200
+++ b/xen/include/asm-x86/hvm/svm/amd-iommu-defs.h	Tue Aug 07 07:46:14 2012 -0700
@@ -38,6 +38,10 @@
 #define PTE_PER_TABLE_ALLOC(entries)	\
 	PAGE_SIZE * (PTE_PER_TABLE_ALIGN(entries) >> PTE_PER_TABLE_SHIFT)
 
+#define amd_offset_level_address(offset, level) \
+      	((u64)(offset) << ((PTE_PER_TABLE_SHIFT * \
+                             (level - IOMMU_PAGING_MODE_LEVEL_1))))
+
 #define PCI_MIN_CAP_OFFSET	0x40
 #define PCI_MAX_CAP_BLOCKS	48
 #define PCI_CAP_PTR_MASK	0xFC
diff -r 3d17148e465c -r 8b634f178682 xen/include/xen/iommu.h
--- a/xen/include/xen/iommu.h	Thu Aug 02 11:49:37 2012 +0200
+++ b/xen/include/xen/iommu.h	Tue Aug 07 07:46:14 2012 -0700
@@ -141,6 +141,7 @@ struct iommu_ops {
     void (*crash_shutdown)(void);
     void (*iotlb_flush)(struct domain *d, unsigned long gfn, unsigned int page_count);
     void (*iotlb_flush_all)(struct domain *d);
+    void (*dump_p2m_table)(struct domain *d);
 };
 
 void iommu_update_ire_from_apic(unsigned int apic, unsigned int reg, unsigned int value);

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 15:01:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 15:01:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SylH2-0003c4-72; Tue, 07 Aug 2012 15:00:44 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SylH0-0003by-P9
	for xen-devel@lists.xensource.com; Tue, 07 Aug 2012 15:00:43 +0000
Received: from [85.158.143.99:21429] by server-3.bemta-4.messagelabs.com id
	00/F7-01511-A9D21205; Tue, 07 Aug 2012 15:00:42 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-8.tower-216.messagelabs.com!1344351640!20917236!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgzNjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23137 invoked from network); 7 Aug 2012 15:00:41 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 15:00:41 -0000
X-IronPort-AV: E=Sophos;i="4.77,727,1336348800"; d="scan'208";a="13888770"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Aug 2012 15:00:40 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 7 Aug 2012 16:00:40 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1SylGy-0000LF-Bv;
	Tue, 07 Aug 2012 15:00:40 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1SylGy-0005es-8m;
	Tue, 07 Aug 2012 16:00:40 +0100
To: xen-devel@lists.xensource.com
Message-ID: <osstest-13569-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 7 Aug 2012 16:00:40 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 13569: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13569 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13569/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-i386-i386-xl-winxpsp3    7 windows-install             fail pass in 13565
 test-amd64-i386-qemuu-rhel6hvm-intel 3 host-install(3) broken in 13565 pass in 13569

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-pcipt-intel  8 debian-fixup                fail like 13525
 test-amd64-amd64-xl-sedf      5 xen-boot                     fail   like 13525

Tests which did not succeed, but are not blocking:
 test-amd64-i386-rhel6hvm-amd 11 leak-check/check             fail   never pass
 test-amd64-i386-qemuu-rhel6hvm-amd 11 leak-check/check         fail never pass
 test-amd64-i386-rhel6hvm-intel 11 leak-check/check             fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-i386-qemuu-rhel6hvm-intel 11 leak-check/check       fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop            fail in 13565 never pass

version targeted for testing:
 xen                  f8f8912b3de0
baseline version:
 xen                  fa34499e8f6c

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Joe Jin <joe.jin@oracle.com>
  Keir Fraser <keir@xen.org>
  Santosh Jodh <santosh.jodh@citrix.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-4.1-testing
+ revision=f8f8912b3de0
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-4.1-testing f8f8912b3de0
+ branch=xen-4.1-testing
+ revision=f8f8912b3de0
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-4.1-testing
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/staging/xen-4.1-testing.hg
++ : git://xenbits.xen.org/staging/qemu-xen-4.1-testing.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-4.1-testing
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-4.1-testing.git
++ : daily-cron.xen-4.1-testing
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-4.1-testing.git
+ info_linux_tree xen-4.1-testing
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-4.1-testing.hg
+ hg push -r f8f8912b3de0 ssh://xen@xenbits.xensource.com/HG/xen-4.1-testing.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-4.1-testing.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 2 changesets with 5 changes to 5 files

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 15:04:42 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 15:04:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SylKa-0003rG-WF; Tue, 07 Aug 2012 15:04:25 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ben.guthro@gmail.com>) id 1SylKa-0003rB-0k
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 15:04:24 +0000
Received: from [85.158.139.83:23881] by server-2.bemta-5.messagelabs.com id
	A3/55-25118-77E21205; Tue, 07 Aug 2012 15:04:23 +0000
X-Env-Sender: ben.guthro@gmail.com
X-Msg-Ref: server-16.tower-182.messagelabs.com!1344351861!23361928!1
X-Originating-IP: [209.85.213.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28886 invoked from network); 7 Aug 2012 15:04:22 -0000
Received: from mail-yw0-f45.google.com (HELO mail-yw0-f45.google.com)
	(209.85.213.45)
	by server-16.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 15:04:22 -0000
Received: by yhpp34 with SMTP id p34so4255080yhp.32
	for <xen-devel@lists.xen.org>; Tue, 07 Aug 2012 08:04:20 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:date:x-google-sender-auth:message-id:subject
	:from:to:content-type;
	bh=cqbkHJ07HGghxDgZcNUioiNMd8Ylxlx+aeR+1K1puuo=;
	b=N7BaAELtDj7viWXbYySUVw8YWMTysk4tKGOJlT5EAB8D4WQxQA1GHFFKXjHs+d6b82
	WTrp/MkZUsxx03hO0pkeSYV5uzuJkEFWqHT5xsNJxFm3nZ1XHqf8y5C39hk4XqJaMpXx
	XHnXSAc4Z9GsSBbTIYG2TN/tww44sCmiBj4ONjUcyNgrI13pv7iuELum7j5OB7QUTTYD
	f4LwASCcupNZw7Jh675olhlbQmu44YlA5FsePc5jeaXBg7USkDNdwW2CPIDP5NFXz+3C
	0fgp5l5URIXyEPEwGUbsMi2h7FCcSm/LnpSauEeL8c+q7Q8+jMJMjaRJcYmzBZstMUOz
	715Q==
MIME-Version: 1.0
Received: by 10.50.170.73 with SMTP id ak9mr8923222igc.38.1344351860635; Tue,
	07 Aug 2012 08:04:20 -0700 (PDT)
Received: by 10.64.6.4 with HTTP; Tue, 7 Aug 2012 08:04:20 -0700 (PDT)
Date: Tue, 7 Aug 2012 11:04:20 -0400
X-Google-Sender-Auth: vd5sKACIfhiZy1t23PxOO09Y-hk
Message-ID: <CAOvdn6WwgBD-2yf1uu3m9-16=Tb5KUrybXf2KibtnxKbP7YtwA@mail.gmail.com>
From: Ben Guthro <ben@guthro.net>
To: xen-devel <xen-devel@lists.xen.org>
Subject: [Xen-devel] Xen4.2 S3 regression?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I have been experimenting with upgrading the Xen version for a future
release of XenClient Enterprise, and I have run into a regression that
I am wondering whether anyone else has seen.

dom0 suspend/resume (S3) does not seem to be working for me.

When swapping out components of the system, the common factor in the
failures seems to be Xen-4.2 (upgraded from Xen-4.0.3).

The first suspend seems to mostly work, but subsequent ones always
resume improperly. By "improperly" I mean I see I/O failures and
stalls of many processes.

Below is a log excerpt covering two S3 attempts.


Has anyone else seen these failures?

- Ben


(XEN) Preparing system for ACPI S3 state.
(XEN) Disabling non-boot CPUs ...
(XEN) Breaking vcpu affinity for domain 0 vcpu 1
(XEN) Breaking vcpu affinity for domain 0 vcpu 2
(XEN) Breaking vcpu affinity for domain 0 vcpu 3
(XEN) Entering ACPI S3 state.
(XEN) mce_intel.c:1239: MCA Capability: BCAST 1 SER 0 CMCI 1 firstbank
0 extended MCE MSR 0
(XEN) CPU0 CMCI LVT vector (0xf1) already installed
(XEN) Finishing wakeup from ACPI S3 state.
(XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
(XEN) Enabling non-boot CPUs  ...
(XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
(XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
(XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
(XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
(XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
(XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
(XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
[   36.440696] [drm:pch_irq_handler] *ERROR* PCH poison interrupt
(XEN) Preparing system for ACPI S3 state.
(XEN) Disabling non-boot CPUs ...
(XEN) Entering ACPI S3 state.
(XEN) mce_intel.c:1239: MCA Capability: BCAST 1 SER 0 CMCI 1 firstbank
0 extended MCE MSR 0
(XEN) CPU0 CMCI LVT vector (0xf1) already installed
(XEN) Finishing wakeup from ACPI S3 state.
(XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
(XEN) Enabling non-boot CPUs  ...
(XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
(XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
(XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
(XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
(XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
(XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
(XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
[   65.893235] [drm:pch_irq_handler] *ERROR* PCH poison interrupt
[   66.508829] ata3.00: revalidation failed (errno=-5)
[   66.508861] ata1.00: revalidation failed (errno=-5)
[   76.858815] ata3.00: revalidation failed (errno=-5)
[   76.898807] ata1.00: revalidation failed (errno=-5)
[  107.208817] ata3.00: revalidation failed (errno=-5)
[  107.288807] ata1.00: revalidation failed (errno=-5)
[  107.718866] pm_op(): scsi_bus_resume_common+0x0/0x60 returns 262144
[  107.718877] PM: Device 0:0:0:0 failed to resume async: error 262144
[  107.718913] end_request: I/O error, dev sda, sector 35193296
[  107.718919] Buffer I/O error on device dm-5, logical block 7690
[  107.718947] end_request: I/O error, dev sda, sector 35657184
[  107.718965] end_request: I/O error, dev sda, sector 246202760
[  107.718968] Buffer I/O error on device dm-6, logical block 26252801
[  107.718995] end_request: I/O error, dev sda, sector 254548368
[  107.719009] Aborting journal on device dm-6-8.
[  107.719021] end_request: I/O error, dev sda, sector 35164192
[  107.719023] Buffer I/O error on device dm-5, logical block 4052
[  107.719063] Aborting journal on device dm-5-8.
[  107.719085] end_request: I/O error, dev sda, sector 254546304
[  107.719097] Buffer I/O error on device dm-6, logical block 27295744
[  107.719129] JBD2: I/O error detected when updating journal
superblock for dm-6-8.
[  107.719141] end_request: I/O error, dev sda, sector 35656064
[  107.719146] Buffer I/O error on device dm-5, logical block 65536
[  107.719168] JBD2: I/O error detected when updating journal
superblock for dm-5-8.
[  107.870082] end_request: I/O error, dev sda, sector 35131776
[  107.875825] Buffer I/O error on device dm-5, logical block 0
[  107.881805] end_request: I/O error, dev sda, sector 35131776
[  107.887637] Buffer I/O error on device dm-5, logical block 0
[  107.893573] EXT4-fs error (device dm-5): ext4_journal_start_sb:327:
[  107.893579] EXT4-fs (dm-5): I/O error while writing superblock
[  107.893582] EXT4-fs error (device dm-5): ext4_journal_start_sb:327:
Detected aborted journal
[  107.893584] EXT4-fs (dm-5): Remounting filesystem read-only
[  107.893617] end_request: I/O error, dev sda, sector 35131776
[  107.893620] Buffer I/O error on device dm-5, logical block 0
[  107.893749] end_request: I/O error, dev sda, sector 36180352
[  107.893752] Buffer I/O error on device dm-6, logical block 0
[  107.893762] EXT4-fs error (device dm-6): ext4_journal_start_sb:327:
Detected aborted journal
[  107.893765] EXT4-fs (dm-6): Remounting filesystem read-only
[  107.893766] EXT4-fs (dm-6): previous I/O error to superblock detected
[  107.893784] end_request: I/O error, dev sda, sector 36180352
[  107.893787] Buffer I/O error on device dm-6, logical block 0
[  107.894467] EXT4-fs error (device dm-5): ext4_journal_start_sb:327:
Detected aborted journal
[  108.669763] end_request: I/O error, dev sda, sector 25957784
[  108.675555] Aborting journal on device dm-3-8.
[  108.680246] end_request: I/O error, dev sda, sector 25956736
[  108.686099] JBD2: I/O error detected when updating journal
superblock for dm-3-8.
[  108.693908] journal commit I/O error
[  108.755829] end_request: I/O error, dev sda, sector 17305984
[  108.761600] EXT4-fs error (device dm-3): ext4_journal_start_sb:327:
Detected aborted journal
[  108.770340] EXT4-fs (dm-3): Remounting filesystem read-only
[  108.776159] EXT4-fs (dm-3): previous I/O error to superblock detected
[  108.782904] end_request: I/O error, dev sda, sector 17305984
[  109.660011] end_request: I/O error, dev sda, sector 358788
[  109.665572] Buffer I/O error on device dm-1, logical block 46082
[  109.682479] end_request: I/O error, dev sda, sector 18832256
[  109.688246] end_request: I/O error, dev sda, sector 18832256
[  109.709559] end_request: I/O error, dev sda, sector 357762
[  109.715120] Buffer I/O error on device dm-1, logical block 45569
[  109.721506] end_request: I/O error, dev sda, sector 358790
[  109.727114] Buffer I/O error on device dm-1, logical block 46083
[  109.743714] end_request: I/O error, dev sda, sector 18832256
[  109.755555] end_request: I/O error, dev sda, sector 18832256
[  109.886187] end_request: I/O error, dev sda, sector 357764
[  109.891756] Buffer I/O error on device dm-1, logical block 45570
[  109.908344] end_request: I/O error, dev sda, sector 18832256
[  109.928369] end_request: I/O error, dev sda, sector 349574
[  109.933938] Buffer I/O error on device dm-1, logical block 41475
[  109.950336] end_request: I/O error, dev sda, sector 18832256
[  115.378875] end_request: I/O error, dev sda, sector 365000
[  115.384445] Aborting journal on device dm-1-8.
[  115.389120] end_request: I/O error, dev sda, sector 364930
[  115.394798] Buffer I/O error on device dm-1, logical block 49153
[  115.401101] JBD2: I/O error detected when updating journal
superblock for dm-1-8.
[  207.207426] end_request: I/O error, dev sda, sector 246192376
[  207.213313] end_request: I/O error, dev sda, sector 246192376
[  207.903181] end_request: I/O error, dev sda, sector 246192376
[  209.234399] end_request: I/O error, dev sda, sector 18518400
[  209.240221] end_request: I/O error, dev sda, sector 18518400

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 15:04:42 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 15:04:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SylKa-0003rG-WF; Tue, 07 Aug 2012 15:04:25 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ben.guthro@gmail.com>) id 1SylKa-0003rB-0k
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 15:04:24 +0000
Received: from [85.158.139.83:23881] by server-2.bemta-5.messagelabs.com id
	A3/55-25118-77E21205; Tue, 07 Aug 2012 15:04:23 +0000
X-Env-Sender: ben.guthro@gmail.com
X-Msg-Ref: server-16.tower-182.messagelabs.com!1344351861!23361928!1
X-Originating-IP: [209.85.213.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28886 invoked from network); 7 Aug 2012 15:04:22 -0000
Received: from mail-yw0-f45.google.com (HELO mail-yw0-f45.google.com)
	(209.85.213.45)
	by server-16.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 15:04:22 -0000
Received: by yhpp34 with SMTP id p34so4255080yhp.32
	for <xen-devel@lists.xen.org>; Tue, 07 Aug 2012 08:04:20 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:date:x-google-sender-auth:message-id:subject
	:from:to:content-type;
	bh=cqbkHJ07HGghxDgZcNUioiNMd8Ylxlx+aeR+1K1puuo=;
	b=N7BaAELtDj7viWXbYySUVw8YWMTysk4tKGOJlT5EAB8D4WQxQA1GHFFKXjHs+d6b82
	WTrp/MkZUsxx03hO0pkeSYV5uzuJkEFWqHT5xsNJxFm3nZ1XHqf8y5C39hk4XqJaMpXx
	XHnXSAc4Z9GsSBbTIYG2TN/tww44sCmiBj4ONjUcyNgrI13pv7iuELum7j5OB7QUTTYD
	f4LwASCcupNZw7Jh675olhlbQmu44YlA5FsePc5jeaXBg7USkDNdwW2CPIDP5NFXz+3C
	0fgp5l5URIXyEPEwGUbsMi2h7FCcSm/LnpSauEeL8c+q7Q8+jMJMjaRJcYmzBZstMUOz
	715Q==
MIME-Version: 1.0
Received: by 10.50.170.73 with SMTP id ak9mr8923222igc.38.1344351860635; Tue,
	07 Aug 2012 08:04:20 -0700 (PDT)
Received: by 10.64.6.4 with HTTP; Tue, 7 Aug 2012 08:04:20 -0700 (PDT)
Date: Tue, 7 Aug 2012 11:04:20 -0400
X-Google-Sender-Auth: vd5sKACIfhiZy1t23PxOO09Y-hk
Message-ID: <CAOvdn6WwgBD-2yf1uu3m9-16=Tb5KUrybXf2KibtnxKbP7YtwA@mail.gmail.com>
From: Ben Guthro <ben@guthro.net>
To: xen-devel <xen-devel@lists.xen.org>
Subject: [Xen-devel] Xen4.2 S3 regression?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I have been experimenting with upgrading the Xen version for a
future release of XenClient Enterprise, and I've run into a
regression that I'm wondering whether anyone else has seen.

dom0 suspend/resume (S3) does not seem to be working for me.

In swapping out components of the system one at a time, the common
failure point seems to be Xen-4.2 (upgraded from Xen-4.0.3).

The first suspend seems to mostly work, but subsequent ones always
resume improperly.
By "improperly" I mean I see I/O failures and stalls of many processes.

Below is a log excerpt of two S3 attempts.


Has anyone else seen these failures?

- Ben


(XEN) Preparing system for ACPI S3 state.
(XEN) Disabling non-boot CPUs ...
(XEN) Breaking vcpu affinity for domain 0 vcpu 1
(XEN) Breaking vcpu affinity for domain 0 vcpu 2
(XEN) Breaking vcpu affinity for domain 0 vcpu 3
(XEN) Entering ACPI S3 state.
(XEN) mce_intel.c:1239: MCA Capability: BCAST 1 SER 0 CMCI 1 firstbank
0 extended MCE MSR 0
(XEN) CPU0 CMCI LVT vector (0xf1) already installed
(XEN) Finishing wakeup from ACPI S3 state.
(XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
(XEN) Enabling non-boot CPUs  ...
(XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
(XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
(XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
(XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
(XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
(XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
(XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
[   36.440696] [drm:pch_irq_handler] *ERROR* PCH poison interrupt
(XEN) Preparing system for ACPI S3 state.
(XEN) Disabling non-boot CPUs ...
(XEN) Entering ACPI S3 state.
(XEN) mce_intel.c:1239: MCA Capability: BCAST 1 SER 0 CMCI 1 firstbank
0 extended MCE MSR 0
(XEN) CPU0 CMCI LVT vector (0xf1) already installed
(XEN) Finishing wakeup from ACPI S3 state.
(XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
(XEN) Enabling non-boot CPUs  ...
(XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
(XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
(XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
(XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
(XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
(XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
(XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
[   65.893235] [drm:pch_irq_handler] *ERROR* PCH poison interrupt
[   66.508829] ata3.00: revalidation failed (errno=-5)
[   66.508861] ata1.00: revalidation failed (errno=-5)
[   76.858815] ata3.00: revalidation failed (errno=-5)
[   76.898807] ata1.00: revalidation failed (errno=-5)
[  107.208817] ata3.00: revalidation failed (errno=-5)
[  107.288807] ata1.00: revalidation failed (errno=-5)
[  107.718866] pm_op(): scsi_bus_resume_common+0x0/0x60 returns 262144
[  107.718877] PM: Device 0:0:0:0 failed to resume async: error 262144
[  107.718913] end_request: I/O error, dev sda, sector 35193296
[  107.718919] Buffer I/O error on device dm-5, logical block 7690
[  107.718947] end_request: I/O error, dev sda, sector 35657184
[  107.718965] end_request: I/O error, dev sda, sector 246202760
[  107.718968] Buffer I/O error on device dm-6, logical block 26252801
[  107.718995] end_request: I/O error, dev sda, sector 254548368
[  107.719009] Aborting journal on device dm-6-8.
[  107.719021] end_request: I/O error, dev sda, sector 35164192
[  107.719023] Buffer I/O error on device dm-5, logical block 4052
[  107.719063] Aborting journal on device dm-5-8.
[  107.719085] end_request: I/O error, dev sda, sector 254546304
[  107.719097] Buffer I/O error on device dm-6, logical block 27295744
[  107.719129] JBD2: I/O error detected when updating journal
superblock for dm-6-8.
[  107.719141] end_request: I/O error, dev sda, sector 35656064
[  107.719146] Buffer I/O error on device dm-5, logical block 65536
[  107.719168] JBD2: I/O error detected when updating journal
superblock for dm-5-8.
[  107.870082] end_request: I/O error, dev sda, sector 35131776
[  107.875825] Buffer I/O error on device dm-5, logical block 0
[  107.881805] end_request: I/O error, dev sda, sector 35131776
[  107.887637] Buffer I/O error on device dm-5, logical block 0
[  107.893573] EXT4-fs error (device dm-5): ext4_journal_start_sb:327:
[  107.893579] EXT4-fs (dm-5): I/O error while writing superblock
[  107.893582] EXT4-fs error (device dm-5): ext4_journal_start_sb:327:
Detected aborted journal
[  107.893584] EXT4-fs (dm-5): Remounting filesystem read-only
[  107.893617] end_request: I/O error, dev sda, sector 35131776
[  107.893620] Buffer I/O error on device dm-5, logical block 0
[  107.893749] end_request: I/O error, dev sda, sector 36180352
[  107.893752] Buffer I/O error on device dm-6, logical block 0
[  107.893762] EXT4-fs error (device dm-6): ext4_journal_start_sb:327:
Detected aborted journal
[  107.893765] EXT4-fs (dm-6): Remounting filesystem read-only
[  107.893766] EXT4-fs (dm-6): previous I/O error to superblock detected
[  107.893784] end_request: I/O error, dev sda, sector 36180352
[  107.893787] Buffer I/O error on device dm-6, logical block 0
[  107.894467] EXT4-fs error (device dm-5): ext4_journal_start_sb:327:
Detected aborted journal
[  108.669763] end_request: I/O error, dev sda, sector 25957784
[  108.675555] Aborting journal on device dm-3-8.
[  108.680246] end_request: I/O error, dev sda, sector 25956736
[  108.686099] JBD2: I/O error detected when updating journal
superblock for dm-3-8.
[  108.693908] journal commit I/O error
[  108.755829] end_request: I/O error, dev sda, sector 17305984
[  108.761600] EXT4-fs error (device dm-3): ext4_journal_start_sb:327:
Detected aborted journal
[  108.770340] EXT4-fs (dm-3): Remounting filesystem read-only
[  108.776159] EXT4-fs (dm-3): previous I/O error to superblock detected
[  108.782904] end_request: I/O error, dev sda, sector 17305984
[  109.660011] end_request: I/O error, dev sda, sector 358788
[  109.665572] Buffer I/O error on device dm-1, logical block 46082
[  109.682479] end_request: I/O error, dev sda, sector 18832256
[  109.688246] end_request: I/O error, dev sda, sector 18832256
[  109.709559] end_request: I/O error, dev sda, sector 357762
[  109.715120] Buffer I/O error on device dm-1, logical block 45569
[  109.721506] end_request: I/O error, dev sda, sector 358790
[  109.727114] Buffer I/O error on device dm-1, logical block 46083
[  109.743714] end_request: I/O error, dev sda, sector 18832256
[  109.755555] end_request: I/O error, dev sda, sector 18832256
[  109.886187] end_request: I/O error, dev sda, sector 357764
[  109.891756] Buffer I/O error on device dm-1, logical block 45570
[  109.908344] end_request: I/O error, dev sda, sector 18832256
[  109.928369] end_request: I/O error, dev sda, sector 349574
[  109.933938] Buffer I/O error on device dm-1, logical block 41475
[  109.950336] end_request: I/O error, dev sda, sector 18832256
[  115.378875] end_request: I/O error, dev sda, sector 365000
[  115.384445] Aborting journal on device dm-1-8.
[  115.389120] end_request: I/O error, dev sda, sector 364930
[  115.394798] Buffer I/O error on device dm-1, logical block 49153
[  115.401101] JBD2: I/O error detected when updating journal
superblock for dm-1-8.
[  207.207426] end_request: I/O error, dev sda, sector 246192376
[  207.213313] end_request: I/O error, dev sda, sector 246192376
[  207.903181] end_request: I/O error, dev sda, sector 246192376
[  209.234399] end_request: I/O error, dev sda, sector 18518400
[  209.240221] end_request: I/O error, dev sda, sector 18518400

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 15:08:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 15:08:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SylOf-00041L-Lg; Tue, 07 Aug 2012 15:08:37 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@eu.citrix.com>) id 1SylOe-00041D-0H
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 15:08:36 +0000
Received: from [85.158.139.83:60308] by server-12.bemta-5.messagelabs.com id
	4E/20-26304-37F21205; Tue, 07 Aug 2012 15:08:35 +0000
X-Env-Sender: George.Dunlap@eu.citrix.com
X-Msg-Ref: server-9.tower-182.messagelabs.com!1344352112!30006503!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzAzMTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25139 invoked from network); 7 Aug 2012 15:08:34 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 15:08:34 -0000
X-IronPort-AV: E=Sophos;i="4.77,727,1336363200"; d="scan'208";a="204407850"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Aug 2012 11:08:10 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Tue, 7 Aug 2012 11:08:10 -0400
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1SylOD-0003mr-P1;
	Tue, 07 Aug 2012 16:08:09 +0100
Message-ID: <50212E96.7010803@eu.citrix.com>
Date: Tue, 7 Aug 2012 16:04:54 +0100
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120714 Thunderbird/14.0
MIME-Version: 1.0
To: "andres@lagarcavilla.org" <andres@lagarcavilla.org>
References: <mailman.10292.1344326858.1399.xen-devel@lists.xen.org>
	<d24761e13f3727f9131899f7af7f2a28.squirrel@webmail.lagarcavilla.org>
In-Reply-To: <d24761e13f3727f9131899f7af7f2a28.squirrel@webmail.lagarcavilla.org>
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>, "Tim \(Xen.org\)" <tim@xen.org>,
	"JBeulich@suse.com" <JBeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] PoD code killing domain before it really gets
	started
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 07/08/12 15:40, Andres Lagar-Cavilla wrote:
>>>>> On 06.08.12 at 18:03, George Dunlap <George.Dunlap@eu.citrix.com>
>>>>> wrote:
>>> I guess there are two problems with that:
>>> * As you've seen, apparently dom0 may access these pages before any
>>> faults happen.
>>> * If it happens that reclaim_single is below the only zeroed page, the
>>> guest will crash even when there is reclaim-able memory available.
>>>
>>> Three ways we could fix this:
>>> 1. Remove dom0 accesses (what on earth could be looking at a
>>> not-yet-created VM?)
>> I'm told it's a monitoring daemon, and yes, they are intending to
>> adjust it to first query the GFN's type (and don't do the access
>> when it's not populated, yet). But wait, I didn't check the code
>> when I recommended this - XEN_DOMCTL_getpageframeinfo{2,3}
>> also call get_page_from_gfn() with P2M_ALLOC, so would also
>> trigger the PoD code (in -unstable at least) - Tim, was that really
>> a correct adjustment in 25355:974ad81bb68b? It looks to be a
>> 1:1 translation, but is that really necessary? If one wanted to
>> find out whether a page is PoD to avoid getting it populated,
>> how would that be done from outside the hypervisor? Would
>> we need XEN_DOMCTL_getpageframeinfo4 for this?
>>
>>> 2. Allocate the PoD cache before populating the p2m table
>>> 3. Make it so that some accesses fail w/o crashing the guest?  I don't
>>> see how that's really practical.
>> What's wrong with telling control tools that a certain page is
>> unpopulated (from which they will be able to imply that it's all
>> clear from the guest's pov)? Even outside of the current problem,
>> I would think that's more efficient than allocating the page. Of
>> course, the control tools need to be able to cope with that. And
>> it may also be necessary to distinguish between read and
>> read/write mappings being established (and for r/w ones the
>> option of populating at access time rather than at creation time
>> would need to be explored).
> I wouldn't be opposed to some form of getpageframeinfo4. It's not just PoD
> we are talking about here. Is the page paged out? Is the page shared?
>
> Right now we have global per-domain queries (domaininfo). Or individual
> gfn debug memctl's. A batched interface with richer information would be a
> blessing for debugging or diagnosis purposes.
>
> The first order of business is exposing the type. Do we really want to
> expose the whole range of p2m_* types or just "really useful" ones like
> is_shared, is_pod, is_paged, is_normal? An argument for the former is that
> the mem event interface already pumps the p2m_* type up the stack.
>
> The other useful bit of information I can think of is exposing the shared
> ref count.
I think that, just like the gfn_to_mfn() interface, we need an "I care
about the details" interface and an "I don't care about the details"
interface.  If a page isn't present, or needs to be un-shared, or is PoD
and not currently available, then maybe dom0 callers trying to map that
page should get something like -EAGAIN?  Just something that indicates,
"This page isn't here at the moment, but may be here soon."  What do you
think?

  -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 15:11:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 15:11:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SylRY-00048c-88; Tue, 07 Aug 2012 15:11:36 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@eu.citrix.com>) id 1SylRW-00048W-1R
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 15:11:34 +0000
Received: from [85.158.138.51:54459] by server-12.bemta-3.messagelabs.com id
	F2/36-21301-52031205; Tue, 07 Aug 2012 15:11:33 +0000
X-Env-Sender: George.Dunlap@eu.citrix.com
X-Msg-Ref: server-15.tower-174.messagelabs.com!1344352291!29100092!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzAzMTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24265 invoked from network); 7 Aug 2012 15:11:32 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 15:11:32 -0000
X-IronPort-AV: E=Sophos;i="4.77,727,1336363200"; d="scan'208";a="204408297"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Aug 2012 11:11:30 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Tue, 7 Aug 2012 11:11:30 -0400
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1SylRS-0003q3-7h;
	Tue, 07 Aug 2012 16:11:30 +0100
Message-ID: <50212F5F.3090405@eu.citrix.com>
Date: Tue, 7 Aug 2012 16:08:15 +0100
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120714 Thunderbird/14.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <501173540200007800090A04@nat28.tlf.novell.com>
	<50116CD9.6000503@eu.citrix.com>
	<501FE96E0200007800092E25@nat28.tlf.novell.com>
	<501FED030200007800092E35@nat28.tlf.novell.com>
	<CAFLBxZZ5ifar7ou3NaeKne3b0rrw=wkjN6E1bEZ8+M=2w2bz2A@mail.gmail.com>
	<502123620200007800093377@nat28.tlf.novell.com>
	<CAFLBxZbq1mevbGwVMTq+M6VAs=EvAUt_G_acPyOvwbH7_GdL6Q@mail.gmail.com>
	<502134440200007800093416@nat28.tlf.novell.com>
In-Reply-To: <502134440200007800093416@nat28.tlf.novell.com>
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] PoD code killing domain before it really gets
	started
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 07/08/12 14:29, Jan Beulich wrote:
>>>> On 07.08.12 at 15:13, George Dunlap <George.Dunlap@eu.citrix.com> wrote:
>> On Tue, Aug 7, 2012 at 1:17 PM, Jan Beulich <JBeulich@suse.com> wrote:
>>>>>> On 06.08.12 at 18:03, George Dunlap <George.Dunlap@eu.citrix.com> wrote:
>>>> 2. Allocate the PoD cache before populating the p2m table
>>> So this doesn't work, the call simply has no effect (and never
>>> reaches p2m_pod_set_cache_target()). Apparently because
>>> of
>>>
>>>      /* P == B: Nothing to do. */
>>>      if ( p2md->pod.entry_count == 0 )
>>>          goto out;
>>>
>>> in p2m_pod_set_mem_target(). Now I'm not sure about the
>>> proper adjustment here: Entirely dropping the conditional is
>>> certainly wrong. Would
>>>
>>>      if ( p2md->pod.entry_count == 0 && d->tot_pages > 0 )
>>>          goto out;
>>>
>>> be okay?
>>>
>>> But then later in that function we also have
>>>
>>>      /* B < T': Set the cache size equal to # of outstanding entries,
>>>       * let the balloon driver fill in the rest. */
>>>      if ( pod_target > p2md->pod.entry_count )
>>>          pod_target = p2md->pod.entry_count;
>>>
>>> which in the case at hand would set pod_target to 0, and the
>>> whole operation would again not have any effect afaict. So
>>> maybe this was the reason to do this operation _after_ the
>>> normal address space population?
>> Snap -- forgot about that.
>>
>> The main thing is for set_mem_target() to be simple for the toolstack
>> -- it's just supposed to say how much memory it wants the guest to
>> use, and Xen is supposed to figure out how much memory the PoD cache
>> needs.  The interface is that the toolstack is just supposed to call
>> set_mem_target() after each time it changes the balloon target.  The
>> idea was to be robust against the user setting arbitrary new targets
>> before the balloon driver had reached the old target.  So the problem
>> with allowing (pod_target > entry_count) is that that's the condition
>> that happens when you are ballooning up.
>>
>> Maybe the best thing to do is to introduce a specific call to
>> initialize the PoD cache that would ignore entry_count?
> Hmm, that would look more like a hack to me.
>
> How about doing the initial check as suggested earlier
>
>      if ( p2md->pod.entry_count == 0 && d->tot_pages > 0 )
>          goto out;
>
> and the latter check in a similar way
>
>      if ( pod_target > p2md->pod.entry_count && d->tot_pages > 0 )
>          pod_target = p2md->pod.entry_count;
>
> (which would still take care of any ballooning activity)? Or are
> there any other traps to fall into?
The "d->tot_pages > 0" seems more like a hack to me. :-) What's hackish 
about having an interface like this?
* allocate_pod_mem()
* for() { populate_pod_mem() }
* [Boot VM]
* set_pod_target()

Right now set_pod_target() is used both for initial allocation and for 
adjustments.  But it seems like there's good reason to make a distinction.
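The distinction being argued for can be sketched in a few lines of C. This is an illustrative model only (the struct and function names are invented, and the accounting is simplified): the build-time allocation call sizes the cache unconditionally, while the post-boot target call keeps the two guards quoted earlier in the thread, which is exactly why it is a no-op before any PoD entries exist.

```c
/* Illustrative model of the proposed split interface; not Xen code. */
struct pod_sketch {
    long entry_count;  /* outstanding PoD entries (the "B" accounting) */
    long cache_pages;  /* pages held in the PoD cache */
    long tot_pages;    /* pages currently owned by the domain */
};

/* Build-time call: sizes the cache regardless of entry_count. */
static void allocate_pod_mem(struct pod_sketch *p, long target)
{
    p->cache_pages = target;
}

/* Post-boot call: mirrors the two checks discussed in the thread,
 * so it deliberately does nothing before any PoD entries exist. */
static void set_pod_target(struct pod_sketch *p, long target)
{
    /* P == B: nothing to do. */
    if (p->entry_count == 0)
        return;
    /* B < T': cap at outstanding entries; the balloon driver
     * fills in the rest. */
    if (target > p->entry_count)
        target = p->entry_count;
    p->cache_pages = target;
}
```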

  -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 15:15:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 15:15:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SylUf-0004Je-1f; Tue, 07 Aug 2012 15:14:49 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SylUd-0004JJ-EX
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 15:14:47 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1344352461!12658798!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgzNjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2479 invoked from network); 7 Aug 2012 15:14:21 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 15:14:21 -0000
X-IronPort-AV: E=Sophos;i="4.77,727,1336348800"; d="scan'208";a="13889077"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Aug 2012 15:14:19 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 7 Aug 2012 16:14:19 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1SylUB-0000QC-IW; Tue, 07 Aug 2012 15:14:19 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1SylUB-0001hE-Er;
	Tue, 07 Aug 2012 16:14:19 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20513.12491.177757.131912@mariner.uk.xensource.com>
Date: Tue, 7 Aug 2012 16:14:19 +0100
To: "greg@enjellic.com" <greg@enjellic.com>
In-Reply-To: <201208051448.q75EmvG5008573@wind.enjellic.com>
References: <201208051448.q75EmvG5008573@wind.enjellic.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: "Keir \(Xen.org\)" <keir@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Patch 1 of 1] [xen-4.1.2] libxl: Seal tapdisk
	minor leak.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Dr. Greg Wettstein writes ("[Patch 1 of 1] [xen-4.1.2] libxl: Seal tapdisk minor leak."):
> libxl: Seal tapdisk minor leak.
...
> To implement correct cleanup of blktap devices in Xen 4.1.2.

Is this patch supposed to be against xen-4.1-testing.hg?

> diff -r b2b7a7a49af5 tools/libxl/libxl_blktap2.c
> --- a/tools/libxl/libxl_blktap2.c	Sat Aug 04 16:17:08 2012 -0500
> +++ b/tools/libxl/libxl_blktap2.c	Sun Aug 05 09:22:35 2012 -0500
> @@ -59,6 +59,7 @@ void libxl__device_destroy_tapdisk(libxl

This function doesn't seem to exist in my copy of xen-4.1-testing.hg
tip nor in 23172:3eca5bf65e6c.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 15:25:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 15:25:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sylei-0004VP-8D; Tue, 07 Aug 2012 15:25:12 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1Syleg-0004VH-Gj
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 15:25:10 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-10.tower-27.messagelabs.com!1344353104!4338484!1
X-Originating-IP: [81.169.146.160]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MCA9PiAzODQ3MDY=\n,sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MCA9PiAzODQ3MDY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24988 invoked from network); 7 Aug 2012 15:25:04 -0000
Received: from mo-p00-ob.rzone.de (HELO mo-p00-ob.rzone.de) (81.169.146.160)
	by server-10.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Aug 2012 15:25:04 -0000
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+zrwiavkK6tmQaLfmwtM48/lq2M7pF+zh
X-RZG-CLASS-ID: mo00
Received: from probook.site
	(dslb-084-057-071-255.pools.arcor-ip.net [84.57.71.255])
	by smtp.strato.de (josoe mo42) (RZmta 30.7 DYNA|AUTH)
	with (DHE-RSA-AES256-SHA encrypted) ESMTPA id J04934o77F9SWe ;
	Tue, 7 Aug 2012 17:25:04 +0200 (CEST)
Received: by probook.site (Postfix, from userid 1000)
	id 36BF818639; Tue,  7 Aug 2012 17:25:03 +0200 (CEST)
Date: Tue, 7 Aug 2012 17:25:03 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20120807152502.GA24503@aepfle.de>
References: <20120806173905.GA26336@aepfle.de>
	<1344318133.24794.16.camel@dagon.hellion.org.uk>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1344318133.24794.16.camel@dagon.hellion.org.uk>
User-Agent: Mutt/1.5.21.rev5555 (2012-07-20)
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] vif backend configuration times out
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Aug 07, Ian Campbell wrote:

> On Mon, 2012-08-06 at 18:39 +0100, Olaf Hering wrote:
> > With current xen-unstable 25733:353bc0801b11 the attached hvm.cfg does
> > not start anymore with a SLES11SP2 dom0 kernel, but it starts if I run a
> > 3.5 pvops dom0 kernel. I have no modifications other than the stubdom -j
> > patch.
> > 
> > The output from this command is attached:
> > xl -vvvv create -d -f /root/xenpaging/sles11sp2_full_xenpaging_local.cfg 2>&1 | tee xl-create-`uname -r`.txt &
> > 
> > Any ideas how to fix this timeout error?
> 
> The tools are waiting for the backend to move from state 1
> (XenbusStateInitialising) to state 2 (XenbusStateInitWait). A backend
> driver typically makes that transition at the end of its probe function
> -- what is the SLES11SP2 netback waiting for? Or is it failing to init,
> in which case perhaps there is an error node in XS?

I think there is a difference between the two kernels. The pvops kernel
goes into state 2 right away (I can't tell from repeated xenstore-ls runs
whether it also passed through state 1).
The sles11 kernel remains in state 1. Did the expectations of libxl
change recently? xl create used to work not too long ago.

xm does not work either, so the change is most likely in the scripts.
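For reference, the numeric states being discussed are the XenbusState values from Xen's public io/xenbus.h header; the toolstack's timeout here is waiting for the backend's xenstore "state" node to move from 1 to 2. The enum below reproduces those values (under an illustrative name to avoid clashing with the real header):

```c
/* XenbusState values as defined in Xen's public io/xenbus.h;
 * the toolstack waits for the backend to move from 1 to 2. */
enum xenbus_state_sketch {
    XenbusStateUnknown       = 0,
    XenbusStateInitialising  = 1,  /* backend probe not yet finished */
    XenbusStateInitWait      = 2,  /* backend ready, waiting for frontend */
    XenbusStateInitialised   = 3,
    XenbusStateConnected     = 4,
    XenbusStateClosing       = 5,
    XenbusStateClosed        = 6,
};
```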

Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 15:30:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 15:30:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Syljg-0004eN-Vp; Tue, 07 Aug 2012 15:30:20 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Syljf-0004eF-Dk
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 15:30:19 +0000
Received: from [85.158.138.51:44920] by server-12.bemta-3.messagelabs.com id
	06/1C-21301-A8431205; Tue, 07 Aug 2012 15:30:18 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-2.tower-174.messagelabs.com!1344353413!30869235!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgzNjM=\n
X-StarScan-Received: 
From xen-devel-bounces@lists.xen.org Tue Aug 07 15:30:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 15:30:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Syljg-0004eN-Vp; Tue, 07 Aug 2012 15:30:20 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Syljf-0004eF-Dk
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 15:30:19 +0000
Received: from [85.158.138.51:44920] by server-12.bemta-3.messagelabs.com id
	06/1C-21301-A8431205; Tue, 07 Aug 2012 15:30:18 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-2.tower-174.messagelabs.com!1344353413!30869235!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgzNjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27577 invoked from network); 7 Aug 2012 15:30:13 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-2.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 15:30:13 -0000
X-IronPort-AV: E=Sophos;i="4.77,727,1336348800"; d="scan'208";a="13889411"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Aug 2012 15:30:12 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 7 Aug 2012 16:30:12 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Syldx-0000XD-DP; Tue, 07 Aug 2012 15:24:25 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Syldx-0001ic-9h;
	Tue, 07 Aug 2012 16:24:25 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20513.13097.199198.397475@mariner.uk.xensource.com>
Date: Tue, 7 Aug 2012 16:24:25 +0100
To: "Jan Beulich" <JBeulich@suse.com>
Newsgroups: chiark.mail.xen.devel
In-Reply-To: <50212E2002000078000933D9@nat28.tlf.novell.com>
References: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
	<1344262325-26598-1-git-send-email-stefano.stabellini@eu.citrix.com>
	<501FFFB50200007800092FC7@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208061640270.4645@kaball.uk.xensource.com>
	<502004BE0200007800093028@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208071318430.4645@kaball.uk.xensource.com>
	<50212E2002000078000933D9@nat28.tlf.novell.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: "Tim Deegan \(3P\)" <Tim.Deegan@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>,
	Ian Campbell <Ian.Campbell@citrix.com>, Jean.Guyader@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 1/5] xen: improve changes to
 xen_add_to_physmap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jan Beulich writes ("Re: [Xen-devel] [PATCH 1/5] xen: improve changes to xen_add_to_physmap"):
> #if __XEN_LATEST_INTERFACE_VERSION__ > 0x040200
> union {
> #endif
>     /* Number of pages to go through for gmfn_range */
>     uint16_t    size;
> #if __XEN_LATEST_INTERFACE_VERSION__ > 0x040200
>     /* IFF gmfn_foreign */
>     domid_t foreign_domid;
> } u;
> #endif

Has someone checked that on all supported platforms the size and
alignment of this union is identical to that of the original
uint16_t ?

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 15:33:26 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 15:33:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SylmS-0004lx-IV; Tue, 07 Aug 2012 15:33:12 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SylmR-0004lU-6e
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 15:33:11 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1344353583!5979680!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgzNjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24853 invoked from network); 7 Aug 2012 15:33:03 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 15:33:03 -0000
X-IronPort-AV: E=Sophos;i="4.77,727,1336348800"; d="scan'208";a="13889523"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Aug 2012 15:33:03 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Tue, 7 Aug 2012
	16:33:03 +0100
Message-ID: <1344353581.11339.105.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Olaf Hering <olaf@aepfle.de>
Date: Tue, 7 Aug 2012 16:33:01 +0100
In-Reply-To: <20120807152502.GA24503@aepfle.de>
References: <20120806173905.GA26336@aepfle.de>
	<1344318133.24794.16.camel@dagon.hellion.org.uk>
	<20120807152502.GA24503@aepfle.de>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] vif backend configuration times out
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-08-07 at 16:25 +0100, Olaf Hering wrote:
> On Tue, Aug 07, Ian Campbell wrote:
> 
> > On Mon, 2012-08-06 at 18:39 +0100, Olaf Hering wrote:
> > > With current xen-unstable 25733:353bc0801b11 the attached hvm.cfg does
> > > not start anymore with a SLES11SP2 dom0 kernel, but it starts if I run a
> > > 3.5 pvops dom0 kernel. I have no modifications other than the stubdom -j
> > > patch.
> > > 
> > > The output from this command is attached:
> > > xl -vvvv create -d -f /root/xenpaging/sles11sp2_full_xenpaging_local.cfg 2>&1 | tee xl-create-`uname -r`.txt &
> > > 
> > > Any ideas how to fix this timeout error?
> > 
> > The tools are waiting for the backend to move from state 1
> > (XenbusStateInitialising) to state 2 (XenbusStateInitWait). A backend
> > driver typically makes that transition at the end of its probe function
> > -- what is the SLES11SP2 netback waiting for? Or is it failing to init,
> > in which case perhaps there is an error node in XS?
> 
> I think there is a difference between the two kernels. The pvops kernel
> goes into state 2 right away (I can't tell from repeated xenstore-ls runs
> if it had also state 1).
> The sles11 kernel remains in state 1.

What is it waiting for?

>  Did the expectations of libxl
> change recently? xl create used to work not too long ago.

I don't think the expectation has changed but the implementation is
probably more picky since Roger's hotplug patches.

> xm does not work either, so the change is most likely in the scripts.

If you are switching from xl to xm then you should either reboot or
remove libxl/disable_udev in xenstore manually.

Other than that, not much has changed in the scripts either. Are you sure
it isn't the kernel which has changed?

Ian.

> 
> Olaf



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 15:34:39 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 15:34:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sylni-0004tq-1x; Tue, 07 Aug 2012 15:34:30 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Sylnh-0004tk-5G
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 15:34:29 +0000
Received: from [85.158.139.83:29019] by server-8.bemta-5.messagelabs.com id
	E0/05-05939-48531205; Tue, 07 Aug 2012 15:34:28 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-182.messagelabs.com!1344353667!28780567!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgzNjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29861 invoked from network); 7 Aug 2012 15:34:27 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-3.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 15:34:27 -0000
X-IronPort-AV: E=Sophos;i="4.77,727,1336348800"; d="scan'208";a="13889540"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Aug 2012 15:34:06 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Tue, 7 Aug 2012
	16:34:06 +0100
Message-ID: <1344353645.11339.106.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Tue, 7 Aug 2012 16:34:05 +0100
In-Reply-To: <20513.12491.177757.131912@mariner.uk.xensource.com>
References: <201208051448.q75EmvG5008573@wind.enjellic.com>
	<20513.12491.177757.131912@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "greg@enjellic.com" <greg@enjellic.com>, "Keir \(Xen.org\)" <keir@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Patch 1 of 1] [xen-4.1.2] libxl: Seal tapdisk
 minor leak.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-08-07 at 16:14 +0100, Ian Jackson wrote:
> Dr. Greg Wettstein writes ("[Patch 1 of 1] [xen-4.1.2] libxl: Seal tapdisk minor leak."):
> > libxl: Seal tapdisk minor leak.
> ...
> > To implement correct cleanup of blktap devices in Xen 4.1.2.
> 
> Is this patch supposed to be against xen-4.1-testing.hg ?
> 
> > diff -r b2b7a7a49af5 tools/libxl/libxl_blktap2.c
> > --- a/tools/libxl/libxl_blktap2.c	Sat Aug 04 16:17:08 2012 -0500
> > +++ b/tools/libxl/libxl_blktap2.c	Sun Aug 05 09:22:35 2012 -0500
> > @@ -59,6 +59,7 @@ void libxl__device_destroy_tapdisk(libxl
> 
> This function doesn't seem to exist in my copy of xen-4.1-testing.hg
> tip nor in 23172:3eca5bf65e6c.

It comes from <201208051440.q75Een7F008501@wind.enjellic.com> which is a
backport request that introduces this function.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 15:36:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 15:36:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SylpU-00053B-II; Tue, 07 Aug 2012 15:36:20 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SylpS-00052x-Fr
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 15:36:18 +0000
Received: from [85.158.143.99:40205] by server-1.bemta-4.messagelabs.com id
	DF/62-24392-1F531205; Tue, 07 Aug 2012 15:36:17 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-216.messagelabs.com!1344353777!25431085!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM3MzA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18231 invoked from network); 7 Aug 2012 15:36:17 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-216.messagelabs.com with SMTP;
	7 Aug 2012 15:36:17 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 07 Aug 2012 16:36:16 +0100
Message-Id: <5021520F02000078000934FD@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Tue, 07 Aug 2012 16:36:15 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "George Dunlap" <george.dunlap@eu.citrix.com>
References: <501173540200007800090A04@nat28.tlf.novell.com>
	<50116CD9.6000503@eu.citrix.com>
	<501FE96E0200007800092E25@nat28.tlf.novell.com>
	<501FED030200007800092E35@nat28.tlf.novell.com>
	<CAFLBxZZ5ifar7ou3NaeKne3b0rrw=wkjN6E1bEZ8+M=2w2bz2A@mail.gmail.com>
	<502123620200007800093377@nat28.tlf.novell.com>
	<CAFLBxZbq1mevbGwVMTq+M6VAs=EvAUt_G_acPyOvwbH7_GdL6Q@mail.gmail.com>
	<502134440200007800093416@nat28.tlf.novell.com>
	<50212F5F.3090405@eu.citrix.com>
In-Reply-To: <50212F5F.3090405@eu.citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] PoD code killing domain before it really gets
 started
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 07.08.12 at 17:08, George Dunlap <george.dunlap@eu.citrix.com> wrote:
> On 07/08/12 14:29, Jan Beulich wrote:
>>>>> On 07.08.12 at 15:13, George Dunlap <George.Dunlap@eu.citrix.com> wrote:
>>> On Tue, Aug 7, 2012 at 1:17 PM, Jan Beulich <JBeulich@suse.com> wrote:
>>>>>>> On 06.08.12 at 18:03, George Dunlap <George.Dunlap@eu.citrix.com> wrote:
>>>>> 2. Allocate the PoD cache before populating the p2m table
>>>> So this doesn't work, the call simply has no effect (and never
>>>> reaches p2m_pod_set_cache_target()). Apparently because
>>>> of
>>>>
>>>>      /* P == B: Nothing to do. */
>>>>      if ( p2md->pod.entry_count == 0 )
>>>>          goto out;
>>>>
>>>> in p2m_pod_set_mem_target(). Now I'm not sure about the
>>>> proper adjustment here: Entirely dropping the conditional is
>>>> certainly wrong. Would
>>>>
>>>>      if ( p2md->pod.entry_count == 0 && d->tot_pages > 0 )
>>>>          goto out;
>>>>
>>>> be okay?
>>>>
>>>> But then later in that function we also have
>>>>
>>>>      /* B < T': Set the cache size equal to # of outstanding entries,
>>>>       * let the balloon driver fill in the rest. */
>>>>      if ( pod_target > p2md->pod.entry_count )
>>>>          pod_target = p2md->pod.entry_count;
>>>>
>>>> which in the case at hand would set pod_target to 0, and the
>>>> whole operation would again not have any effect afaict. So
>>>> maybe this was the reason to do this operation _after_ the
>>>> normal address space population?
>>> Snap -- forgot about that.
>>>
>>> The main thing is for set_mem_target() to be simple for the toolstack
>>> -- it's just supposed to say how much memory it wants the guest to
>>> use, and Xen is supposed to figure out how much memory the PoD cache
>>> needs.  The interface is that the toolstack is just supposed to call
>>> set_mem_target() after each time it changes the balloon target.  The
>>> idea was to be robust against the user setting arbitrary new targets
>>> before the balloon driver had reached the old target.  So the problem
>>> with allowing (pod_target > entry_count) is that that's the condition
>>> that happens when you are ballooning up.
>>>
>>> Maybe the best thing to do is to introduce a specific call to
>>> initialize the PoD cache that would ignore entry_count?
>> Hmm, that would look more like a hack to me.
>>
>> How about doing the initial check as suggested earlier
>>
>>      if ( p2md->pod.entry_count == 0 && d->tot_pages > 0 )
>>          goto out;
>>
>> and the latter check in a similar way
>>
>>      if ( pod_target > p2md->pod.entry_count && d->tot_pages > 0 )
>>          pod_target = p2md->pod.entry_count;
>>
>> (which would still take care of any ballooning activity)? Or are
>> there any other traps to fall into?
> The "d->tot_pages > 0" seems more like a hack to me. :-) What's hackish 
> about having an interface like this?
> * allocate_pod_mem()
> * for() { populate_pod_mem() }
> * [Boot VM]
> * set_pod_target()

Mostly the fact that it's an almost identical interface to what
we already have. After all, the two !alloc checks would go at
exactly the place where I had put the d->tot_pages > 0.
Plus the new interface likely ought to check that d->tot_pages
is zero. In the end you're proposing a full new interface for
something that can be done with two small adjustments.

> Right now set_pod_mem() is used both for initial allocation and for 
> adjustments.  But it seems like there's good reason to make a distinction.

Yes. But that distinction can well be implicit.

In any case, as far as testing this out goes, my approach would be far
easier (even if it looks like a hack to you); is there anything
wrong with it (i.e. is my assumption about where the !alloc checks
would have to go wrong)? If not, I'd prefer to have that
simpler thing tested, and then we can still settle on the more
involved change you would like to see.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

>>>> reaches p2m_pod_set_cache_target()). Apparently because
>>>> of
>>>>
>>>>      /* P == B: Nothing to do. */
>>>>      if ( p2md->pod.entry_count == 0 )
>>>>          goto out;
>>>>
>>>> in p2m_pod_set_mem_target(). Now I'm not sure about the
>>>> proper adjustment here: Entirely dropping the conditional is
>>>> certainly wrong. Would
>>>>
>>>>      if ( p2md->pod.entry_count == 0 && d->tot_pages > 0 )
>>>>          goto out;
>>>>
>>>> be okay?
>>>>
>>>> But then later in that function we also have
>>>>
>>>>      /* B < T': Set the cache size equal to # of outstanding entries,
>>>>       * let the balloon driver fill in the rest. */
>>>>      if ( pod_target > p2md->pod.entry_count )
>>>>          pod_target = p2md->pod.entry_count;
>>>>
>>>> which in the case at hand would set pod_target to 0, and the
>>>> whole operation would again not have any effect afaict. So
>>>> maybe this was the reason to do this operation _after_ the
>>>> normal address space population?
>>> Snap -- forgot about that.
>>>
>>> The main thing is for set_mem_target() to be simple for the toolstack
>>> -- it's just supposed to say how much memory it wants the guest to
>>> use, and Xen is supposed to figure out how much memory the PoD cache
>>> needs.  The interface is that the toolstack is just supposed to call
>>> set_mem_target() after each time it changes the balloon target.  The
>>> idea was to be robust against the user setting arbitrary new targets
>>> before the balloon driver had reached the old target.  So the problem
>>> with allowing (pod_target > entry_count) is that that's the condition
>>> that happens when you are ballooning up.
>>>
>>> Maybe the best thing to do is to introduce a specific call to
>>> initialize the PoD cache that would ignore entry_count?
>> Hmm, that would look more like a hack to me.
>>
>> How about doing the initial check as suggested earlier
>>
>>      if ( p2md->pod.entry_count == 0 && d->tot_pages > 0 )
>>          goto out;
>>
>> and the latter check in a similar way
>>
>>      if ( pod_target > p2md->pod.entry_count && d->tot_pages > 0 )
>>          pod_target = p2md->pod.entry_count;
>>
>> (which would still take care of any ballooning activity)? Or are
>> there any other traps to fall into?
> The "d->tot_pages > 0" seems more like a hack to me. :-) What's hackish 
> about having an interface like this?
> * allocate_pod_mem()
> * for() { populate_pod_mem() }
> * [Boot VM]
> * set_pod_target()

Mostly the fact that it's an almost identical interface to what
we already have. After all, the two !alloc checks would go at
exactly the place where I had put the d->tot_pages > 0.
Plus the new interface likely ought to check that d->tot_pages
is zero. In the end you're proposing a full new interface for
something that can be done with two small adjustments.
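The two adjustments discussed above can be sketched as follows. This is an illustrative simplification, not the actual Xen code: the struct and field names are stand-ins for the real struct domain / p2m fields, and the function condenses p2m_pod_set_mem_target() down to just the two checks in question.

```c
#include <stdio.h>

/* Stand-in for the relevant parts of struct domain / p2m state. */
struct dom {
    long tot_pages;          /* pages currently allocated to the domain */
    long pod_entry_count;    /* outstanding PoD entries
                                (p2md->pod.entry_count in the real code) */
};

static long set_mem_target(struct dom *d, long pod_target)
{
    /* P == B: nothing to do -- but only once the domain actually has
     * memory; during initial construction (tot_pages == 0) fall
     * through so the cache gets its first allocation. */
    if (d->pod_entry_count == 0 && d->tot_pages > 0)
        return 0;

    /* B < T': cap the cache at the outstanding entries and let the
     * balloon driver fill in the rest -- again, skip the cap while
     * the domain is still empty so the initial target is honoured. */
    if (pod_target > d->pod_entry_count && d->tot_pages > 0)
        pod_target = d->pod_entry_count;

    return pod_target;   /* size the PoD cache would be set to */
}
```

With tot_pages == 0 the initial target passes through untouched; once the domain has pages, the existing ballooning behaviour is unchanged.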

> Right now set_pod_mem() is used both for initial allocation and for 
> adjustments.  But it seems like there's good reason to make a distinction.

Yes. But that distinction can well be implicit.

In any case, as for testing this out my approach would be far
easier (even if it looks like a hack to you), is there anything
wrong with it (i.e. is my assumption about where the !alloc
checks would have to go wrong)? If not, I'd prefer to have that
simpler thing tested, and then we can still settle on the more
involved change you would like to see.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 15:37:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 15:37:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sylpw-00056a-VA; Tue, 07 Aug 2012 15:36:48 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andres@lagarcavilla.org>) id 1Sylpw-00056M-28
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 15:36:48 +0000
Received: from [85.158.139.83:43906] by server-1.bemta-5.messagelabs.com id
	C6/00-14385-F0631205; Tue, 07 Aug 2012 15:36:47 +0000
X-Env-Sender: andres@lagarcavilla.org
X-Msg-Ref: server-15.tower-182.messagelabs.com!1344353806!30633550!1
X-Originating-IP: [208.97.132.83]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA4Ljk3LjEzMi44MyA9PiAyODY4OQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18773 invoked from network); 7 Aug 2012 15:36:46 -0000
Received: from caiajhbdcaid.dreamhost.com (HELO homiemail-a19.g.dreamhost.com)
	(208.97.132.83) by server-15.tower-182.messagelabs.com with SMTP;
	7 Aug 2012 15:36:46 -0000
Received: from homiemail-a19.g.dreamhost.com (localhost [127.0.0.1])
	by homiemail-a19.g.dreamhost.com (Postfix) with ESMTP id 3B0FE604078;
	Tue,  7 Aug 2012 08:36:45 -0700 (PDT)
DomainKey-Signature: a=rsa-sha1; c=nofws; d=lagarcavilla.org; h=message-id
	:in-reply-to:references:date:subject:from:to:cc:reply-to
	:mime-version:content-type:content-transfer-encoding; q=dns; s=
	lagarcavilla.org; b=AF56vIKI+2TRNBflZuXXPvbrMxCHxwELtwC3Ef4DNTa3
	cBDQh+G2hwlyNAARL6a7x6HTE/dirmCkurMqe1iBgWhXcERqwrHtIumpT0j+hwXs
	5pJzwsEI/8Xrqm739U8c9C9cJjJ9NmQ0DGbViD3ktDi+5k6xWNoV/BWxeOQHV0k=
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed; d=lagarcavilla.org; h=
	message-id:in-reply-to:references:date:subject:from:to:cc
	:reply-to:mime-version:content-type:content-transfer-encoding;
	s=lagarcavilla.org; bh=Kp+PIRQssXSm4CVLa5IFCfA4rJY=; b=UOTYapE1
	ErezXIBL3fmjc8nUyIIVwNaUjRC9ynoSEGX2PjGiyk1WXVEB2Oqv9Q0podpIwSk3
	iWKl4v6w0dTKIliPjhqjCMMY2Vtf2+v1244ZsRqYMYAHBLkH71FAtLa41NoWwmHC
	c7lQb7OTHVxdi15ZhkqILR2s0ipyzwJC7rs=
Received: from webmail.lagarcavilla.org (caiajhbihbdd.dreamhost.com
	[208.97.187.133]) (Authenticated sender: andres@lagarcavilla.com)
	by homiemail-a19.g.dreamhost.com (Postfix) with ESMTPA id E91B3604076; 
	Tue,  7 Aug 2012 08:36:44 -0700 (PDT)
Received: from 206.223.182.18 (proxying for 206.223.182.18)
	(SquirrelMail authenticated user andres@lagarcavilla.com)
	by webmail.lagarcavilla.org with HTTP; Tue, 7 Aug 2012 08:36:40 -0700
Message-ID: <2a80a8ab65fb48530e38e5c7f2a831ff.squirrel@webmail.lagarcavilla.org>
In-Reply-To: <50212E96.7010803@eu.citrix.com>
References: <mailman.10292.1344326858.1399.xen-devel@lists.xen.org>
	<d24761e13f3727f9131899f7af7f2a28.squirrel@webmail.lagarcavilla.org>
	<50212E96.7010803@eu.citrix.com>
Date: Tue, 7 Aug 2012 08:36:40 -0700
From: "Andres Lagar-Cavilla" <andres@lagarcavilla.org>
To: "George Dunlap" <george.dunlap@eu.citrix.com>
User-Agent: SquirrelMail/1.4.21
MIME-Version: 1.0
Cc: Ian Jackson <ian.jackson@eu.citrix.com>, "Tim \(Xen.org\)" <tim@xen.org>,
	"JBeulich@suse.com" <jbeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] PoD code killing domain before it really gets
	started
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: andres@lagarcavilla.org
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> On 07/08/12 15:40, Andres Lagar-Cavilla wrote:
>>>>>> On 06.08.12 at 18:03, George Dunlap <George.Dunlap@eu.citrix.com>
>>>>>> wrote:
>>>> I guess there are two problems with that:
>>>> * As you've seen, apparently dom0 may access these pages before any
>>>> faults happen.
>>>> * If it happens that reclaim_single is below the only zeroed page, the
>>>> guest will crash even when there is reclaim-able memory available.
>>>>
>>>> Two ways we could fix this:
>>>> 1. Remove dom0 accesses (what on earth could be looking at a
>>>> not-yet-created VM?)
>>> I'm told it's a monitoring daemon, and yes, they are intending to
>>> adjust it to first query the GFN's type (and don't do the access
>>> when it's not populated, yet). But wait, I didn't check the code
>>> when I recommended this - XEN_DOMCTL_getpageframeinfo{2,3}
>>> also call get_page_from_gfn() with P2M_ALLOC, so would also
>>> trigger the PoD code (in -unstable at least) - Tim, was that really
>>> a correct adjustment in 25355:974ad81bb68b? It looks to be a
>>> 1:1 translation, but is that really necessary? If one wanted to
>>> find out whether a page is PoD to avoid getting it populated,
>>> how would that be done from outside the hypervisor? Would
>>> we need XEN_DOMCTL_getpageframeinfo4 for this?
>>>
>>>> 2. Allocate the PoD cache before populating the p2m table
>>>> 3. Make it so that some accesses fail w/o crashing the guest?  I don't
>>>> see how that's really practical.
>>> What's wrong with telling control tools that a certain page is
>>> unpopulated (from which they will be able to imply that it's all
>>> clear from the guest's pov)? Even outside of the current problem,
>>> I would think that's more efficient than allocating the page. Of
>>> course, the control tools need to be able to cope with that. And
>>> it may also be necessary to distinguish between read and
>>> read/write mappings being established (and for r/w ones the
>>> option of populating at access time rather than at creation time
>>> would need to be explored).
>> I wouldn't be opposed to some form of getpageframeinfo4. It's not just
>> PoD
>> we are talking about here. Is the page paged out? Is the page shared?
>>
>> Right now we have global per-domain queries (domaininfo). Or individual
>> gfn debug memctl's. A batched interface with richer information would be
>> a
>> blessing for debugging or diagnosis purposes.
>>
>> The first order of business is exposing the type. Do we really want to
>> expose the whole range of p2m_* types or just "really useful" ones like
>> is_shared, is_pod, is_paged, is_normal? An argument for the former is
>> that
>> the mem event interface already pumps the p2m_* type up the stack.
>>
>> The other useful bit of information I can think of is exposing the
>> shared
>> ref count.
> I think just like the gfn_to_mfn() interface, we need a "I care about
> the details" interface and an "I don't care about the details"
> interface.  If a page isn't present, or needs to be un-shared, or is PoD
> and not currently available, then maybe dom0 callers trying to map that
> page should get something like -EAGAIN?  Just something that indicates,
> "This page isn't here at the moment, but may be here soon."  What do you
> think?

Sure.

Right now you get -ENOENT if dom0 tries to mmap a foreign frame that is
paged out. Kernel-level backends get the same with grant mappings. As a
side-effect, the hypervisor has triggered the page in, so one of the next
retries should succeed.

My opinion is that you should expand the use of -ENOENT for this
newly-minted "delayed PoD" case. Iiuc, PoD would just succeed or crash the
guest prior to this conversation.

That way, no new code is needed in the upper-layers (neither libxc nor
kernel backends) to deal with delayed PoD allocations. The retry loops
that paging needs are already in place, and PoD leverages them.

Sharing can get ENOMEM when breaking shares. Much like paging, the
hypervisor will have kicked the appropriate helper, so a retry with an
expectation of success could be put in place. (Nevermind that there are no
in-tree helpers atm: what to do, balloon dom0 to release more mem?). I can
look into making it uniform with the above cases.
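The dom0-side retry pattern being described might look like the sketch below. All names here are invented for illustration (map_foreign_page() is a stand-in, not a real libxc call); the point is just that -ENOENT means "not here yet, the hypervisor has already kicked the pager (or, under this proposal, the delayed PoD fill), so retry".

```c
#include <errno.h>

/* Simulated backend: pretend the page becomes available after a few
 * attempts, as a stand-in for a real foreign-mapping call that
 * returns -ENOENT while page-in / delayed PoD fill is in flight. */
static int attempts_until_ready = 3;

static int map_foreign_page(unsigned long gfn)
{
    (void)gfn;
    if (attempts_until_ready-- > 0)
        return -ENOENT;   /* pager / PoD fill still in flight */
    return 0;             /* mapping succeeded */
}

static int map_with_retry(unsigned long gfn)
{
    for (int i = 0; i < 16; i++) {
        int rc = map_foreign_page(gfn);
        if (rc != -ENOENT)
            return rc;    /* success, or a hard error */
        /* -ENOENT: the hypervisor already triggered the page-in;
         * loop and try again */
    }
    return -ETIMEDOUT;    /* gave up after too many retries */
}
```

Reusing -ENOENT this way is what lets the existing paging retry loops in libxc and kernel backends cover delayed PoD without new upper-layer code.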

As for the detailed interface, getpageframeinfo4 sounds like a good idea.
Otherwise, you need a new privcmd mmap interface in the kernel ... snore
;)
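One hypothetical shape for such a batched query entry (every name and flag bit here is invented for illustration; a real getpageframeinfo4 would define its own ABI):

```c
#include <stdint.h>

/* Invented flag bits for the "really useful" page states discussed
 * above -- not an actual Xen domctl definition. */
#define PFINFO4_NORMAL  (0u)
#define PFINFO4_POD     (1u << 0)   /* populate-on-demand, not yet filled */
#define PFINFO4_SHARED  (1u << 1)   /* shared; info holds the ref count */
#define PFINFO4_PAGED   (1u << 2)   /* paged out to the pager */

struct pfinfo4_entry {
    uint64_t gfn;     /* in:  guest frame to query */
    uint32_t flags;   /* out: PFINFO4_* bits */
    uint32_t info;    /* out: e.g. shared ref count when PFINFO4_SHARED */
};
```

The toolstack would pass an array of these in one domctl, getting type and ref-count information back without populating or unsharing anything.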

Andres
>
>   -George
>



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 15:37:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 15:37:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sylqv-0005F2-KT; Tue, 07 Aug 2012 15:37:49 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Sylqu-0005El-Ic
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 15:37:48 +0000
Received: from [85.158.143.99:47197] by server-1.bemta-4.messagelabs.com id
	38/84-24392-B4631205; Tue, 07 Aug 2012 15:37:47 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-216.messagelabs.com!1344353867!25386350!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM3MzA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18328 invoked from network); 7 Aug 2012 15:37:47 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-216.messagelabs.com with SMTP;
	7 Aug 2012 15:37:47 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 07 Aug 2012 16:37:47 +0100
Message-Id: <502152680200007800093500@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Tue, 07 Aug 2012 16:37:44 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Jackson" <Ian.Jackson@eu.citrix.com>
References: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
	<1344262325-26598-1-git-send-email-stefano.stabellini@eu.citrix.com>
	<501FFFB50200007800092FC7@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208061640270.4645@kaball.uk.xensource.com>
	<502004BE0200007800093028@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208071318430.4645@kaball.uk.xensource.com>
	<50212E2002000078000933D9@nat28.tlf.novell.com>
	<20513.13097.199198.397475@mariner.uk.xensource.com>
In-Reply-To: <20513.13097.199198.397475@mariner.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	xen-devel <xen-devel@lists.xen.org>,
	Ian Campbell <Ian.Campbell@citrix.com>, Jean.Guyader@citrix.com,
	"Tim Deegan \(3P\)" <Tim.Deegan@citrix.com>
Subject: Re: [Xen-devel] [PATCH 1/5] xen: improve changes to
 xen_add_to_physmap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 07.08.12 at 17:24, Ian Jackson <Ian.Jackson@eu.citrix.com> wrote:
> Jan Beulich writes ("Re: [Xen-devel] [PATCH 1/5] xen: improve changes to 
> xen_add_to_physmap"):
>> #if __XEN_LATEST_INTERFACE_VERSION__ > 0x040200
>> union {
>> #endif
>>     /* Number of pages to go through for gmfn_range */
>>     uint16_t    size;
>> #if __XEN_LATEST_INTERFACE_VERSION__ > 0x040200
>>     /* IFF gmfn_foreign */
>>     domid_t foreign_domid;
>> } u;
>> #endif
> 
> Has someone checked that on all supported platforms the size and
> alignment of this union is identical to that of the original
> uint16_t ?

Since domid_t is uniformly a typedef of uint16_t, I don't think
there's any doubt in that.
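The claim can be checked at compile time. A minimal sketch (the union layout follows the patch fragment quoted above; the typedef mirrors domid_t's definition as uint16_t in the public headers):

```c
#include <stdint.h>

typedef uint16_t domid_t;    /* as in the Xen public headers */

union size_or_domid {
    uint16_t size;           /* number of pages for gmfn_range */
    domid_t  foreign_domid;  /* iff gmfn_foreign */
};

/* Both members are uint16_t, so the union's size and alignment must
 * match those of the original bare field on any platform. */
_Static_assert(sizeof(union size_or_domid) == sizeof(uint16_t),
               "union changes struct size");
_Static_assert(_Alignof(union size_or_domid) == _Alignof(uint16_t),
               "union changes alignment");
```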

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 15:45:12 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 15:45:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sylxl-0005b2-HY; Tue, 07 Aug 2012 15:44:53 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <tupeng212@gmail.com>) id 1Sylxj-0005ax-TE
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 15:44:52 +0000
Received: from [85.158.138.51:11910] by server-2.bemta-3.messagelabs.com id
	48/01-29239-3F731205; Tue, 07 Aug 2012 15:44:51 +0000
X-Env-Sender: tupeng212@gmail.com
X-Msg-Ref: server-11.tower-174.messagelabs.com!1344354283!30849711!1
X-Originating-IP: [209.85.161.173]
X-SpamReason: No, hits=1.1 required=7.0 tests=HTML_60_70,HTML_MESSAGE,
	MIME_BASE64_TEXT, MIME_BOUND_NEXTPART, ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP, spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11853 invoked from network); 7 Aug 2012 15:44:44 -0000
Received: from mail-gg0-f173.google.com (HELO mail-gg0-f173.google.com)
	(209.85.161.173)
	by server-11.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 15:44:44 -0000
Received: by ggna5 with SMTP id a5so547276ggn.32
	for <xen-devel@lists.xen.org>; Tue, 07 Aug 2012 08:44:42 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=date:from:to:reply-to:subject:x-priority:x-guid:x-has-attach
	:x-mailer:mime-version:message-id:content-type;
	bh=TbNkOp/ovqNvhYuwW2h1SiKCYPyISirE8+k6nB0QhuE=;
	b=z8qpDfR6L/AFjHRzdaemlicE6Wzi66GiYd4v5WE4TlpNijkBbKWrlUXXED6niByQI5
	qEYIbH+vi/31AisD2R1I3rkeb1cBDvZ6zHEfXTZCudxdVyx0gioSCTgEbtbsmfQBf1zu
	U68pH4vUZlX6aN9/PGdh3KvnskiRT1ETrh3J+l1oOVfTO+TRtO81cQ41//TISBj13Vzj
	qGv2QjKEarIcCM5V3tDqXdqrqs8Ld1AjZhxN9NeDZq8xBHm09anrRzJ0qp6pLViwwkCR
	zbWSsAdmOilFdVH8/JfqRUF4P77/WhH7g3UeA9wSNjWoQo7MUNKnMdiB4xhJX5+nrA7C
	TniQ==
Received: by 10.66.77.168 with SMTP id t8mr26942201paw.28.1344354282162;
	Tue, 07 Aug 2012 08:44:42 -0700 (PDT)
Received: from root ([115.204.231.222])
	by mx.google.com with ESMTPS id hx9sm11385213pbc.68.2012.08.07.08.44.40
	(version=SSLv3 cipher=OTHER); Tue, 07 Aug 2012 08:44:41 -0700 (PDT)
Date: Tue, 7 Aug 2012 23:44:45 +0800
From: tupeng212 <tupeng212@gmail.com>
To: xen-devel <xen-devel@lists.xen.org>
X-Priority: 3
X-GUID: 4CCF6910-D5EA-4E57-987E-088212FED4CD
X-Has-Attach: no
X-Mailer: Foxmail 7.0.1.87[cn]
Mime-Version: 1.0
Message-ID: <201208070018394210381@gmail.com>
Subject: [Xen-devel] Big Bug: Time in VM running on Xen goes slower
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: tupeng212 <tupeng212@gmail.com>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1615201137309538416=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.

--===============1615201137309538416==
Content-Type: multipart/alternative;
	boundary="----=_001_NextPart421446275566_=----"

This is a multi-part message in MIME format.

------=_001_NextPart421446275566_=----
Content-Type: text/plain;
	charset="us-ascii"
Content-Transfer-Encoding: 7bit

Dear all:
I have found a serious bug in Xen concerning time virtualization. Let me
walk you through the whole process:

1 Phenomenon
When I run a JVM-based program in the IE browser inside my virtual
machine, the clock in the bottom-right corner of the VM clearly runs
slower and slower.
I studied the bug in depth and found the following.

2 Xen
vmx_vmexit_handler --> ......... --> handle_rtc_io --> rtc_ioport_write
--> rtc_timer_update --> set RTC's REG_A to a high rate -->
create_periodic_time (disable the former timer and init a new one)
Win7 is installed in the VM. This call path is executed so frequently
that REG_A may be set hundreds of times every second, yet always with
the same rate (976.562us, i.e. 1024Hz). Such behaviour looks abnormal
to me.

3 OS
I tried to find out why Win7 sets the RTC's REG_A so frequently, and
finally got the result: it all comes from one function:
NtSetTimerResolution --> ports 0x70,0x71.
When I attached windbg to the guest OS I also found the origin: the
calls all come from a higher-level system call issued by the JVM (Java
Virtual Machine).

4 JVM
I don't know why the JVM calls NtSetTimerResolution so frequently to
set the RTC to the same rate (976.562us, 1024Hz).
But I found something useful: in the Java source code there are many
places using time.scheduleAtFixedRate(), and information from the
Internet told me that scheduleAtFixedRate demands a higher timer
resolution. So I guess the whole process may be this:
Java wants a higher-resolution timer, so it changes the RTC's rate from
15.625ms (64Hz) to 976.562us (1024Hz). After that, it reconfirms
whether the time is as accurate as expected, notices it still is not,
and so sets the RTC's rate from 15.625ms (64Hz) to 976.562us (1024Hz)
again and again, which in the end results in a slow system clock in the
VM.
Another frequently called function that caught my eye is
QueryPerformanceCounter.

My questions are:
1 Why does the JVM write the same RTC rate (976.562us) to the CMOS
ports again and again? What did it find to be abnormal?
2 Even if an abnormal user program calls create_periodic_time to set
the rate again and again, how do we avoid the influence on the time in
the VM? How do we compensate for the elapsed time at the switching
point, especially when the switch is so frequent?

Can some experts give me some advice on this?
Thanks!

tupeng212


------=_001_NextPart421446275566_=------



--===============1615201137309538416==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1615201137309538416==--





From xen-devel-bounces@lists.xen.org Tue Aug 07 15:49:12 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 15:49:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sym1m-0005id-6v; Tue, 07 Aug 2012 15:49:02 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Sym1l-0005iU-Jq
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 15:49:01 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1344354530!11327633!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgzNjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5369 invoked from network); 7 Aug 2012 15:48:51 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 15:48:51 -0000
X-IronPort-AV: E=Sophos;i="4.77,727,1336348800"; d="scan'208";a="13889956"
From xen-devel-bounces@lists.xen.org Tue Aug 07 15:49:12 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 15:49:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sym1m-0005id-6v; Tue, 07 Aug 2012 15:49:02 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Sym1l-0005iU-Jq
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 15:49:01 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1344354530!11327633!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgzNjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5369 invoked from network); 7 Aug 2012 15:48:51 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 15:48:51 -0000
X-IronPort-AV: E=Sophos;i="4.77,727,1336348800"; d="scan'208";a="13889956"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Aug 2012 15:45:45 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 7 Aug 2012 16:45:45 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Sylya-0000er-SH; Tue, 07 Aug 2012 15:45:44 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Sylya-0001m4-OO;
	Tue, 07 Aug 2012 16:45:44 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20513.14376.661494.238112@mariner.uk.xensource.com>
Date: Tue, 7 Aug 2012 16:45:44 +0100
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1344353645.11339.106.camel@zakaz.uk.xensource.com>
References: <201208051448.q75EmvG5008573@wind.enjellic.com>
	<20513.12491.177757.131912@mariner.uk.xensource.com>
	<1344353645.11339.106.camel@zakaz.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: "greg@enjellic.com" <greg@enjellic.com>, "Keir \(Xen.org\)" <keir@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Patch 1 of 1] [xen-4.1.2] libxl: Seal tapdisk
 minor leak.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [Xen-devel] [Patch 1 of 1] [xen-4.1.2] libxl: Seal tapdisk minor leak."):
> On Tue, 2012-08-07 at 16:14 +0100, Ian Jackson wrote:
> > Dr. Greg Wettstein writes ("[Patch 1 of 1] [xen-4.1.2] libxl: Seal tapdisk minor leak."):
> > > libxl: Seal tapdisk minor leak.
> > ...
> > > To implement correct cleanup of blktap devices in Xen 4.1.2.
> > 
> > Is this patch supposed to be against xen-4.1-testing.hg ?
> > 
> > > diff -r b2b7a7a49af5 tools/libxl/libxl_blktap2.c
> > > --- a/tools/libxl/libxl_blktap2.c	Sat Aug 04 16:17:08 2012 -0500
> > > +++ b/tools/libxl/libxl_blktap2.c	Sun Aug 05 09:22:35 2012 -0500
> > > @@ -59,6 +59,7 @@ void libxl__device_destroy_tapdisk(libxl
> > 
> > This function doesn't seem to exist in my copy of xen-4.1-testing.hg
> > tip nor in 23172:3eca5bf65e6c.
> 
> It comes from <201208051440.q75Een7F008501@wind.enjellic.com> which is a
> backport request that introduces this function.

Oh I see, these are two interdependent patches.  Let me read them
again.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 15:52:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 15:52:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sym58-0005rM-RT; Tue, 07 Aug 2012 15:52:30 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Sym56-0005rD-Vn
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 15:52:29 +0000
Received: from [85.158.143.35:34843] by server-2.bemta-4.messagelabs.com id
	F2/53-17938-CB931205; Tue, 07 Aug 2012 15:52:28 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1344354747!17235491!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM3MzA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1043 invoked from network); 7 Aug 2012 15:52:27 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-14.tower-21.messagelabs.com with SMTP;
	7 Aug 2012 15:52:27 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 07 Aug 2012 16:52:26 +0100
Message-Id: <502155D80200007800093525@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Tue, 07 Aug 2012 16:52:24 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Santosh Jodh" <santosh.jodh@citrix.com>
References: <8b634f1786825a98ce70.1344350958@REDBLD-XS.ad.xensource.com>
In-Reply-To: <8b634f1786825a98ce70.1344350958@REDBLD-XS.ad.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: wei.wang2@amd.com, tim@xen.org, xiantao.zhang@intel.com,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] dump_p2m_table: For IOMMU
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 07.08.12 at 16:49, Santosh Jodh <santosh.jodh@citrix.com> wrote:
> --- a/xen/drivers/passthrough/amd/pci_amd_iommu.c	Thu Aug 02 11:49:37 2012 +0200
> +++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c	Tue Aug 07 07:46:14 2012 -0700
> @@ -22,6 +22,7 @@
>  #include <xen/pci.h>
>  #include <xen/pci_regs.h>
>  #include <xen/paging.h>
> +#include <xen/softirq.h>
>  #include <asm/hvm/iommu.h>
>  #include <asm/amd-iommu.h>
>  #include <asm/hvm/svm/amd-iommu-proto.h>
> @@ -512,6 +513,69 @@ static int amd_iommu_group_id(u16 seg, u
>  
>  #include <asm/io_apic.h>
>  
> +static void amd_dump_p2m_table_level(struct page_info* pg, int level, u64 gpa)
> +{
> +    u64 address;
> +    void *table_vaddr, *pde;
> +    u64 next_table_maddr;
> +    int index, next_level, present;
> +    u32 *entry;
> +
> +    table_vaddr = __map_domain_page(pg);
> +    if ( table_vaddr == NULL )
> +    {
> +        printk("Failed to map IOMMU domain page %-16lx\n", page_to_maddr(pg));
> +        return;
> +    }
> +
> +    if ( level > 1 )

As long as the top level call below can never pass <= 1 here and
the recursive call gets gated accordingly, I don't see why you do
it differently here than for VT-d, resulting in both unnecessarily
deep indentation and a pointless map/unmap pair around the
conditional.

Jan

> +    {
> +        for ( index = 0; index < PTE_PER_TABLE_SIZE; index++ )
> +        {
> +            if ( !(index % 2) )
> +                process_pending_softirqs();
> +
> +            pde = table_vaddr + (index * IOMMU_PAGE_TABLE_ENTRY_SIZE);
> +            next_table_maddr = amd_iommu_get_next_table_from_pte(pde);
> +            entry = (u32*)pde;
> +
> +            next_level = get_field_from_reg_u32(entry[0],
> +                                                IOMMU_PDE_NEXT_LEVEL_MASK,
> +                                                IOMMU_PDE_NEXT_LEVEL_SHIFT);
> +
> +            present = get_field_from_reg_u32(entry[0],
> +                                             IOMMU_PDE_PRESENT_MASK,
> +                                             IOMMU_PDE_PRESENT_SHIFT);
> +
> +            address = gpa + amd_offset_level_address(index, level);
> +            if ( (next_table_maddr != 0) && (next_level != 0)
> +                && present )
> +            {
> +                amd_dump_p2m_table_level(
> +                    maddr_to_page(next_table_maddr), level - 1, address);
> +            }
> +
> +            if ( present )
> +            {
> +                printk("gfn: %-16lx  mfn: %-16lx\n",
> +                       address, next_table_maddr);
> +            }
> +        }
> +    }
> +
> +    unmap_domain_page(table_vaddr);
> +}
> +
> +static void amd_dump_p2m_table(struct domain *d)
> +{
> +    struct hvm_iommu *hd  = domain_hvm_iommu(d);
> +
> +    if ( !hd->root_table ) 
> +        return;
> +
> +    amd_dump_p2m_table_level(hd->root_table, hd->paging_mode, 0);
> +}
> +
>  const struct iommu_ops amd_iommu_ops = {
>      .init = amd_iommu_domain_init,
>      .dom0_init = amd_iommu_dom0_init,



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 16:19:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 16:19:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SymUd-0006a4-5h; Tue, 07 Aug 2012 16:18:51 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Santosh.Jodh@citrix.com>) id 1SymUb-0006Zz-MC
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 16:18:49 +0000
Received: from [85.158.143.99:6692] by server-2.bemta-4.messagelabs.com id
	F0/B5-17938-9EF31205; Tue, 07 Aug 2012 16:18:49 +0000
X-Env-Sender: Santosh.Jodh@citrix.com
X-Msg-Ref: server-11.tower-216.messagelabs.com!1344356327!23176048!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjMwNzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29241 invoked from network); 7 Aug 2012 16:18:48 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 16:18:48 -0000
X-IronPort-AV: E=Sophos;i="4.77,728,1336363200"; d="scan'208";a="33849256"
Received: from sjcpmailmx01.citrite.net ([10.216.14.74])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Aug 2012 12:17:15 -0400
Received: from SJCPMAILBOX01.citrite.net ([10.216.4.72]) by
	SJCPMAILMX01.citrite.net ([10.216.14.74]) with mapi;
	Tue, 7 Aug 2012 09:17:14 -0700
From: Santosh Jodh <Santosh.Jodh@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Tue, 7 Aug 2012 09:16:59 -0700
Thread-Topic: [Xen-devel] [PATCH] dump_p2m_table: For IOMMU
Thread-Index: Ac10tPzyqe0cgnoMQhyC3ZNHWhIXqwAAXTxQ
Message-ID: <7914B38A4445B34AA16EB9F1352942F1012F0E1E8FC8@SJCPMAILBOX01.citrite.net>
References: <8b634f1786825a98ce70.1344350958@REDBLD-XS.ad.xensource.com>
	<502155D80200007800093525@nat28.tlf.novell.com>
In-Reply-To: <502155D80200007800093525@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
MIME-Version: 1.0
Cc: "wei.wang2@amd.com" <wei.wang2@amd.com>, "Tim \(Xen.org\)" <tim@xen.org>,
	"xiantao.zhang@intel.com" <xiantao.zhang@intel.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] dump_p2m_table: For IOMMU
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Honestly, I don't have an AMD machine to test my code - I just wrote it for completeness' sake. I based my code on deallocate_next_page_table() in the same file.

I agree that the map/unmap can be easily avoided.

Someone more familiar with AMD IOMMU might be able to comment more.

Thanks,
Santosh

-----Original Message-----
From: Jan Beulich [mailto:JBeulich@suse.com] 
Sent: Tuesday, August 07, 2012 8:52 AM
To: Santosh Jodh
Cc: wei.wang2@amd.com; xiantao.zhang@intel.com; xen-devel; Tim (Xen.org)
Subject: Re: [Xen-devel] [PATCH] dump_p2m_table: For IOMMU

>>> On 07.08.12 at 16:49, Santosh Jodh <santosh.jodh@citrix.com> wrote:
> --- a/xen/drivers/passthrough/amd/pci_amd_iommu.c	Thu Aug 02 11:49:37 2012 +0200
> +++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c	Tue Aug 07 07:46:14 2012 -0700
> @@ -22,6 +22,7 @@
>  #include <xen/pci.h>
>  #include <xen/pci_regs.h>
>  #include <xen/paging.h>
> +#include <xen/softirq.h>
>  #include <asm/hvm/iommu.h>
>  #include <asm/amd-iommu.h>
>  #include <asm/hvm/svm/amd-iommu-proto.h>
> @@ -512,6 +513,69 @@ static int amd_iommu_group_id(u16 seg, u
>  
>  #include <asm/io_apic.h>
>  
> +static void amd_dump_p2m_table_level(struct page_info* pg, int level, u64 gpa)
> +{
> +    u64 address;
> +    void *table_vaddr, *pde;
> +    u64 next_table_maddr;
> +    int index, next_level, present;
> +    u32 *entry;
> +
> +    table_vaddr = __map_domain_page(pg);
> +    if ( table_vaddr == NULL )
> +    {
> +        printk("Failed to map IOMMU domain page %-16lx\n", page_to_maddr(pg));
> +        return;
> +    }
> +
> +    if ( level > 1 )

As long as the top level call below can never pass <= 1 here and the recursive call gets gated accordingly, I don't see why you do it differently here than for VT-d, resulting in both unnecessarily deep indentation and a pointless map/unmap pair around the conditional.

Jan

> +    {
> +        for ( index = 0; index < PTE_PER_TABLE_SIZE; index++ )
> +        {
> +            if ( !(index % 2) )
> +                process_pending_softirqs();
> +
> +            pde = table_vaddr + (index * IOMMU_PAGE_TABLE_ENTRY_SIZE);
> +            next_table_maddr = amd_iommu_get_next_table_from_pte(pde);
> +            entry = (u32*)pde;
> +
> +            next_level = get_field_from_reg_u32(entry[0],
> +                                                IOMMU_PDE_NEXT_LEVEL_MASK,
> +                                                IOMMU_PDE_NEXT_LEVEL_SHIFT);
> +
> +            present = get_field_from_reg_u32(entry[0],
> +                                             IOMMU_PDE_PRESENT_MASK,
> +                                             IOMMU_PDE_PRESENT_SHIFT);
> +
> +            address = gpa + amd_offset_level_address(index, level);
> +            if ( (next_table_maddr != 0) && (next_level != 0)
> +                && present )
> +            {
> +                amd_dump_p2m_table_level(
> +                    maddr_to_page(next_table_maddr), level - 1, address);
> +            }
> +
> +            if ( present )
> +            {
> +                printk("gfn: %-16lx  mfn: %-16lx\n",
> +                       address, next_table_maddr);
> +            }
> +        }
> +    }
> +
> +    unmap_domain_page(table_vaddr);
> +}
> +
> +static void amd_dump_p2m_table(struct domain *d)
> +{
> +    struct hvm_iommu *hd  = domain_hvm_iommu(d);
> +
> +    if ( !hd->root_table ) 
> +        return;
> +
> +    amd_dump_p2m_table_level(hd->root_table, hd->paging_mode, 0);
> +}
> +
>  const struct iommu_ops amd_iommu_ops = {
>      .init = amd_iommu_domain_init,
>      .dom0_init = amd_iommu_dom0_init,



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 16:21:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 16:21:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SymX9-0006h2-Tq; Tue, 07 Aug 2012 16:21:27 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ben.guthro@gmail.com>) id 1SymX8-0006gw-2X
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 16:21:26 +0000
Received: from [85.158.143.99:17492] by server-1.bemta-4.messagelabs.com id
	D8/4E-24392-58041205; Tue, 07 Aug 2012 16:21:25 +0000
X-Env-Sender: ben.guthro@gmail.com
X-Msg-Ref: server-5.tower-216.messagelabs.com!1344356483!30348144!1
X-Originating-IP: [209.85.160.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30008 invoked from network); 7 Aug 2012 16:21:24 -0000
Received: from mail-gh0-f173.google.com (HELO mail-gh0-f173.google.com)
	(209.85.160.173)
	by server-5.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 16:21:24 -0000
Received: by ghrr17 with SMTP id r17so2576448ghr.32
	for <xen-devel@lists.xen.org>; Tue, 07 Aug 2012 09:21:23 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:content-type;
	bh=CaPklnJIxH4NS8YRZLv3Pm8sWQqE8ro+rBGgRcO7d7U=;
	b=yjqURumE4BZ655ybapMBoDG8ucTuPmEOg8sH7q0TRs0hMl9imYNczO1jREaV6ylgyP
	XejV2mNnfsEM6jRhoMDslSkV9enJUwhNqJyedeeZjA/VJ3CJTwb5xOSQeOTwtvxgRVlV
	4bSu8MJpqkM1pEjg7YzxrTtjjeUBEUJ9T32Oh/p5KcKu5suOksnxncTriwf3Gy4+b6hd
	ReWLP3iIJ3Vxoqkh6navL3AYK2ltyKnIi+JSsWlsG18U6bs1hTx8001rdDx0uFSYRdp/
	gKaZFcxtzrHH6LUvpqsFSRNv1mQANO4G1mQVmnhbm88EZyCulBHweG7zgnwGlUNAOUl+
	hjFA==
MIME-Version: 1.0
Received: by 10.50.182.229 with SMTP id eh5mr9226325igc.38.1344356482819; Tue,
	07 Aug 2012 09:21:22 -0700 (PDT)
Received: by 10.64.6.4 with HTTP; Tue, 7 Aug 2012 09:21:22 -0700 (PDT)
In-Reply-To: <CAOvdn6WwgBD-2yf1uu3m9-16=Tb5KUrybXf2KibtnxKbP7YtwA@mail.gmail.com>
References: <CAOvdn6WwgBD-2yf1uu3m9-16=Tb5KUrybXf2KibtnxKbP7YtwA@mail.gmail.com>
Date: Tue, 7 Aug 2012 12:21:22 -0400
X-Google-Sender-Auth: SvYJJQ29P1G_CCniqKBgUqCgqAM
Message-ID: <CAOvdn6UBW-yTOMd3YSf5M9JiHLRivyaKL-qzcKG8epszqaKmGg@mail.gmail.com>
From: Ben Guthro <ben@guthro.net>
To: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen4.2 S3 regression?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

It looks like this regression may be related to MSI handling.

"pci=nomsi" on the kernel command line seems to bypass the issue.

Clearly, legacy interrupts are not ideal.


On Tue, Aug 7, 2012 at 11:04 AM, Ben Guthro <ben@guthro.net> wrote:
> I have been doing some experiments in upgrading the Xen version in a
> future version of XenClient Enterprise, and I've been running into a
> regression that I'm wondering if anyone else has seen.
>
> dom0 suspend/resume (S3) does not seem to be working for me.
>
> In swapping out components of the system, the common failure seems to
> be when I use Xen-4.2 (upgraded from Xen-4.0.3).
>
> The first suspend seems to mostly work, but subsequent ones always
> resume improperly.
> By "improperly" I mean I see I/O failures, and stalls of many processes.
>
> Below is a log excerpt of 2 S3 attempts.
>
>
> Has anyone else seen these failures?
>
> - Ben
>
>
> (XEN) Preparing system for ACPI S3 state.
> (XEN) Disabling non-boot CPUs ...
> (XEN) Breaking vcpu affinity for domain 0 vcpu 1
> (XEN) Breaking vcpu affinity for domain 0 vcpu 2
> (XEN) Breaking vcpu affinity for domain 0 vcpu 3
> (XEN) Entering ACPI S3 state.
> (XEN) mce_intel.c:1239: MCA Capability: BCAST 1 SER 0 CMCI 1 firstbank
> 0 extended MCE MSR 0
> (XEN) CPU0 CMCI LVT vector (0xf1) already installed
> (XEN) Finishing wakeup from ACPI S3 state.
> (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
> (XEN) Enabling non-boot CPUs  ...
> (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
> (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
> (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
> (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
> (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
> (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
> (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
> [   36.440696] [drm:pch_irq_handler] *ERROR* PCH poison interrupt
> (XEN) Preparing system for ACPI S3 state.
> (XEN) Disabling non-boot CPUs ...
> (XEN) Entering ACPI S3 state.
> (XEN) mce_intel.c:1239: MCA Capability: BCAST 1 SER 0 CMCI 1 firstbank
> 0 extended MCE MSR 0
> (XEN) CPU0 CMCI LVT vector (0xf1) already installed
> (XEN) Finishing wakeup from ACPI S3 state.
> (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
> (XEN) Enabling non-boot CPUs  ...
> (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
> (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
> (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
> (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
> (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
> (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
> (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
> [   65.893235] [drm:pch_irq_handler] *ERROR* PCH poison interrupt
> [   66.508829] ata3.00: revalidation failed (errno=-5)
> [   66.508861] ata1.00: revalidation failed (errno=-5)
> [   76.858815] ata3.00: revalidation failed (errno=-5)
> [   76.898807] ata1.00: revalidation failed (errno=-5)
> [  107.208817] ata3.00: revalidation failed (errno=-5)
> [  107.288807] ata1.00: revalidation failed (errno=-5)
> [  107.718866] pm_op(): scsi_bus_resume_common+0x0/0x60 returns 262144
> [  107.718877] PM: Device 0:0:0:0 failed to resume async: error 262144
> [  107.718913] end_request: I/O error, dev sda, sector 35193296
> [  107.718919] Buffer I/O error on device dm-5, logical block 7690
> [  107.718947] end_request: I/O error, dev sda, sector 35657184
> [  107.718965] end_request: I/O error, dev sda, sector 246202760
> [  107.718968] Buffer I/O error on device dm-6, logical block 26252801
> [  107.718995] end_request: I/O error, dev sda, sector 254548368
> [  107.719009] Aborting journal on device dm-6-8.
> [  107.719021] end_request: I/O error, dev sda, sector 35164192
> [  107.719023] Buffer I/O error on device dm-5, logical block 4052
> [  107.719063] Aborting journal on device dm-5-8.
> [  107.719085] end_request: I/O error, dev sda, sector 254546304
> [  107.719097] Buffer I/O error on device dm-6, logical block 27295744
> [  107.719129] JBD2: I/O error detected when updating journal
> superblock for dm-6-8.
> [  107.719141] end_request: I/O error, dev sda, sector 35656064
> [  107.719146] Buffer I/O error on device dm-5, logical block 65536
> [  107.719168] JBD2: I/O error detected when updating journal
> superblock for dm-5-8.
> [  107.870082] end_request: I/O error, dev sda, sector 35131776
> [  107.875825] Buffer I/O error on device dm-5, logical block 0
> [  107.881805] end_request: I/O error, dev sda, sector 35131776
> [  107.887637] Buffer I/O error on device dm-5, logical block 0
> [  107.893573] EXT4-fs error (device dm-5): ext4_journal_start_sb:327:
> [  107.893579] EXT4-fs (dm-5): I/O error while writing superblock
> [  107.893582] EXT4-fs error (device dm-5): ext4_journal_start_sb:327:
> Detected aborted journal
> [  107.893584] EXT4-fs (dm-5): Remounting filesystem read-only
> [  107.893617] end_request: I/O error, dev sda, sector 35131776
> [  107.893620] Buffer I/O error on device dm-5, logical block 0
> [  107.893749] end_request: I/O error, dev sda, sector 36180352
> [  107.893752] Buffer I/O error on device dm-6, logical block 0
> [  107.893762] EXT4-fs error (device dm-6): ext4_journal_start_sb:327:
> Detected aborted journal
> [  107.893765] EXT4-fs (dm-6): Remounting filesystem read-only
> [  107.893766] EXT4-fs (dm-6): previous I/O error to superblock detected
> [  107.893784] end_request: I/O error, dev sda, sector 36180352
> [  107.893787] Buffer I/O error on device dm-6, logical block 0
> [  107.894467] EXT4-fs error (device dm-5): ext4_journal_start_sb:327:
> Detected aborted journal
> [  108.669763] end_request: I/O error, dev sda, sector 25957784
> [  108.675555] Aborting journal on device dm-3-8.
> [  108.680246] end_request: I/O error, dev sda, sector 25956736
> [  108.686099] JBD2: I/O error detected when updating journal
> superblock for dm-3-8.
> [  108.693908] journal commit I/O error
> [  108.755829] end_request: I/O error, dev sda, sector 17305984
> [  108.761600] EXT4-fs error (device dm-3): ext4_journal_start_sb:327:
> Detected aborted journal
> [  108.770340] EXT4-fs (dm-3): Remounting filesystem read-only
> [  108.776159] EXT4-fs (dm-3): previous I/O error to superblock detected
> [  108.782904] end_request: I/O error, dev sda, sector 17305984
> [  109.660011] end_request: I/O error, dev sda, sector 358788
> [  109.665572] Buffer I/O error on device dm-1, logical block 46082
> [  109.682479] end_request: I/O error, dev sda, sector 18832256
> [  109.688246] end_request: I/O error, dev sda, sector 18832256
> [  109.709559] end_request: I/O error, dev sda, sector 357762
> [  109.715120] Buffer I/O error on device dm-1, logical block 45569
> [  109.721506] end_request: I/O error, dev sda, sector 358790
> [  109.727114] Buffer I/O error on device dm-1, logical block 46083
> [  109.743714] end_request: I/O error, dev sda, sector 18832256
> [  109.755555] end_request: I/O error, dev sda, sector 18832256
> [  109.886187] end_request: I/O error, dev sda, sector 357764
> [  109.891756] Buffer I/O error on device dm-1, logical block 45570
> [  109.908344] end_request: I/O error, dev sda, sector 18832256
> [  109.928369] end_request: I/O error, dev sda, sector 349574
> [  109.933938] Buffer I/O error on device dm-1, logical block 41475
> [  109.950336] end_request: I/O error, dev sda, sector 18832256
> [  115.378875] end_request: I/O error, dev sda, sector 365000
> [  115.384445] Aborting journal on device dm-1-8.
> [  115.389120] end_request: I/O error, dev sda, sector 364930
> [  115.394798] Buffer I/O error on device dm-1, logical block 49153
> [  115.401101] JBD2: I/O error detected when updating journal
> superblock for dm-1-8.
> [  207.207426] end_request: I/O error, dev sda, sector 246192376
> [  207.213313] end_request: I/O error, dev sda, sector 246192376
> [  207.903181] end_request: I/O error, dev sda, sector 246192376
> [  209.234399] end_request: I/O error, dev sda, sector 18518400
> [  209.240221] end_request: I/O error, dev sda, sector 18518400

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 16:36:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 16:36:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SymlP-0006zx-CE; Tue, 07 Aug 2012 16:36:11 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1SymlN-0006zp-Mw
	for xen-devel@lists.xensource.com; Tue, 07 Aug 2012 16:36:09 +0000
Received: from [85.158.138.51:39966] by server-8.bemta-3.messagelabs.com id
	1C/3D-25919-8F341205; Tue, 07 Aug 2012 16:36:08 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-7.tower-174.messagelabs.com!1344357366!21958900!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDY4MzA2Nw==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7542 invoked from network); 7 Aug 2012 16:36:08 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-7.tower-174.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Aug 2012 16:36:08 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q77Ga4c4007328
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK)
	for <xen-devel@lists.xensource.com>; Tue, 7 Aug 2012 16:36:05 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q77Ga3Mo020413
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO)
	for <xen-devel@lists.xensource.com>; Tue, 7 Aug 2012 16:36:03 GMT
Received: from abhmt115.oracle.com (abhmt115.oracle.com [141.146.116.67])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q77Ga2NF011472
	for <xen-devel@lists.xensource.com>; Tue, 7 Aug 2012 11:36:02 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 07 Aug 2012 09:36:02 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id CCFB441F36; Tue,  7 Aug 2012 12:26:37 -0400 (EDT)
Date: Tue, 7 Aug 2012 12:26:37 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: "zhenzhong.duan" <zhenzhong.duan@oracle.com>
Message-ID: <20120807162637.GB15053@phenom.dumpdata.com>
References: <5020C24A.3060604@oracle.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <5020C24A.3060604@oracle.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: xen-devel@lists.xensource.com, Feng Jin <joe.jin@oracle.com>
Subject: Re: [Xen-devel] kernel bootup slow issue on ovm3.1.1
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Aug 07, 2012 at 03:22:50PM +0800, zhenzhong.duan wrote:
> Hi maintainers,
> 
> We are hitting a uek2 slow-bootup issue on our ovm product (ovm3.0.3 and ovm3.1.1).
> 
> The system env is an exalogic node with 24 cores + 100G mem (2 sockets,
> 6 cores per socket, 2 HT threads per core).
> After booting this node with all cores enabled,
> we boot a pvhvm guest with 12 vCPUs (or 24) + 90 GB + a PCI passthrough
> device, and it takes 30+ mins to boot.
> If we remove the passthrough device from vm.cfg, bootup takes about 2 mins.
> If we use a small memory size (e.g. 10G + 24 vCPUs), bootup takes about 3 mins.
> So big memory plus a passthrough device is the worst case.
> 
> If we boot this node with HT disabled in the BIOS, only 12 cores are
> available.
> OVM on the same node, with the same config of 12 vCPUs + 90GB, boots in 1.5 mins!
> 
> After some debugging, we found it is the kernel MTRR init that causes this delay.
> 
> mtrr_aps_init() 
>  \-> set_mtrr() 
>      \-> mtrr_work_handler() 
> 
> The kernel spins in mtrr_work_handler().
> 
> But we don't know what is going on inside the hypervisor, or why big
> memory plus a passthrough device is the worst case.
> Is this already fixed in xen upstream?
> Any comments are welcome; I'll upload whatever data you need.

What happens if you run with an upstream version of the kernel, say
v3.4.7? Do you see the same issue?
> 
> thanks
> zduan

> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 16:36:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 16:36:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SymlP-0006zx-CE; Tue, 07 Aug 2012 16:36:11 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1SymlN-0006zp-Mw
	for xen-devel@lists.xensource.com; Tue, 07 Aug 2012 16:36:09 +0000
Received: from [85.158.138.51:39966] by server-8.bemta-3.messagelabs.com id
	1C/3D-25919-8F341205; Tue, 07 Aug 2012 16:36:08 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-7.tower-174.messagelabs.com!1344357366!21958900!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDY4MzA2Nw==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7542 invoked from network); 7 Aug 2012 16:36:08 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-7.tower-174.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Aug 2012 16:36:08 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q77Ga4c4007328
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK)
	for <xen-devel@lists.xensource.com>; Tue, 7 Aug 2012 16:36:05 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q77Ga3Mo020413
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO)
	for <xen-devel@lists.xensource.com>; Tue, 7 Aug 2012 16:36:03 GMT
Received: from abhmt115.oracle.com (abhmt115.oracle.com [141.146.116.67])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q77Ga2NF011472
	for <xen-devel@lists.xensource.com>; Tue, 7 Aug 2012 11:36:02 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 07 Aug 2012 09:36:02 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id CCFB441F36; Tue,  7 Aug 2012 12:26:37 -0400 (EDT)
Date: Tue, 7 Aug 2012 12:26:37 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: "zhenzhong.duan" <zhenzhong.duan@oracle.com>
Message-ID: <20120807162637.GB15053@phenom.dumpdata.com>
References: <5020C24A.3060604@oracle.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <5020C24A.3060604@oracle.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: xen-devel@lists.xensource.com, Feng Jin <joe.jin@oracle.com>
Subject: Re: [Xen-devel] kernel bootup slow issue on ovm3.1.1
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Aug 07, 2012 at 03:22:50PM +0800, zhenzhong.duan wrote:
> Hi maintainers,
> 
> We meet a uek2 bootup slow issue on our ovm product(ovm3.0.3 and ovm3.1.1).
> 
> The system environment is an Exalogic node with 24 cores + 100G mem
> (2 sockets, 6 cores per socket, 2 HT threads per core).
> After booting this node with all cores enabled,
> we boot a PVHVM guest with 12 vCPUs (or 24) + 90 GB + a PCI passthrough device;
> it takes 30+ minutes to boot.
> If we remove the passthrough device from vm.cfg, bootup takes about 2 minutes.
> If we use a small amount of memory (e.g. 10G + 24 vCPUs), bootup takes about 3 minutes.
> So big memory + a passthrough device is the worst case.
> 
> If we boot this node with HT disabled in the BIOS, only 12 cores are
> available.
> OVM on the same node, with the same 12 vCPUs + 90GB config, boots in 1.5 minutes!
> 
> After some debugging, we found it is the kernel MTRR init that causes this delay.
> 
> mtrr_aps_init() 
>  \-> set_mtrr() 
>      \-> mtrr_work_handler() 
> 
> The kernel spins in mtrr_work_handler.
> 
> But we don't know what is going on inside the hypervisor, or why big
> memory + passthrough makes the worst case.
> Is this already fixed in xen upstream?
> Any comments are welcome; I'll upload whatever data you need.

What happens if you run with an upstream version of the kernel, say v3.4.7?
Do you see the same issue?
> 
> thanks
> zduan

> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 16:43:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 16:43:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Symru-000793-7c; Tue, 07 Aug 2012 16:42:54 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1Symrs-00078y-5t
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 16:42:52 +0000
Received: from [85.158.139.83:48419] by server-2.bemta-5.messagelabs.com id
	CE/EC-25118-B8541205; Tue, 07 Aug 2012 16:42:51 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-7.tower-182.messagelabs.com!1344357769!25036694!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjc1NDQ5\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27077 invoked from network); 7 Aug 2012 16:42:50 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-7.tower-182.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Aug 2012 16:42:50 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q77Ggk4G005124
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 7 Aug 2012 16:42:47 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q77GgjIb018023
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 7 Aug 2012 16:42:46 GMT
Received: from abhmt117.oracle.com (abhmt117.oracle.com [141.146.116.69])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q77GgjV8027546; Tue, 7 Aug 2012 11:42:45 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 07 Aug 2012 09:42:45 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 9A22B41F35; Tue,  7 Aug 2012 12:33:20 -0400 (EDT)
Date: Tue, 7 Aug 2012 12:33:20 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Ben Guthro <ben@guthro.net>
Message-ID: <20120807163320.GC15053@phenom.dumpdata.com>
References: <CAOvdn6WwgBD-2yf1uu3m9-16=Tb5KUrybXf2KibtnxKbP7YtwA@mail.gmail.com>
	<CAOvdn6UBW-yTOMd3YSf5M9JiHLRivyaKL-qzcKG8epszqaKmGg@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAOvdn6UBW-yTOMd3YSf5M9JiHLRivyaKL-qzcKG8epszqaKmGg@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen4.2 S3 regression?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Aug 07, 2012 at 12:21:22PM -0400, Ben Guthro wrote:
> It looks like this regression may be related to MSI handling.
> 
> "pci=nomsi" on the kernel command line seems to bypass the issue.
> 
> Clearly, legacy interrupts are not ideal.
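> 
> For reference, a hedged example of where such a workaround is typically
> set; the path and variable assume a GRUB2-based dom0 (adjust for your
> bootloader), and this only masks the MSI problem rather than fixing it:
> 
>   # /etc/default/grub (GRUB2; illustrative)
>   GRUB_CMDLINE_LINUX_DEFAULT="quiet pci=nomsi"
>   # then regenerate the config, e.g.:
>   #   update-grub                               # Debian/Ubuntu
>   #   grub2-mkconfig -o /boot/grub2/grub.cfg    # Fedora/RHEL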

This is with the v3.5 kernel, right? With the earlier one you did not have
this issue?
> 
> 
> On Tue, Aug 7, 2012 at 11:04 AM, Ben Guthro <ben@guthro.net> wrote:
> > I have been doing some experiments in upgrading the Xen version in a
> > future version of XenClient Enterprise, and I've been running into a
> > regression that I'm wondering if anyone else has seen.
> >
> > dom0 suspend/resume (S3) does not seem to be working for me.
> >
> > In swapping out components of the system, the common failure seems to
> > be when I use Xen-4.2 (upgraded from Xen-4.0.3)
> >
> > The first suspend seems to mostly work...but subsequent ones always
> > resume improperly.
> > By "improperly" - I see I/O failures, and stalls of many processes.
> >
> > Below is a log excerpt of 2 S3 attempts.
> >
> >
> > Has anyone else seen these failures?
> >
> > - Ben
> >
> >
> > (XEN) Preparing system for ACPI S3 state.
> > (XEN) Disabling non-boot CPUs ...
> > (XEN) Breaking vcpu affinity for domain 0 vcpu 1
> > (XEN) Breaking vcpu affinity for domain 0 vcpu 2
> > (XEN) Breaking vcpu affinity for domain 0 vcpu 3
> > (XEN) Entering ACPI S3 state.
> > (XEN) mce_intel.c:1239: MCA Capability: BCAST 1 SER 0 CMCI 1 firstbank
> > 0 extended MCE MSR 0
> > (XEN) CPU0 CMCI LVT vector (0xf1) already installed
> > (XEN) Finishing wakeup from ACPI S3 state.
> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
> > (XEN) Enabling non-boot CPUs  ...
> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
> > [   36.440696] [drm:pch_irq_handler] *ERROR* PCH poison interrupt
> > (XEN) Preparing system for ACPI S3 state.
> > (XEN) Disabling non-boot CPUs ...
> > (XEN) Entering ACPI S3 state.
> > (XEN) mce_intel.c:1239: MCA Capability: BCAST 1 SER 0 CMCI 1 firstbank
> > 0 extended MCE MSR 0
> > (XEN) CPU0 CMCI LVT vector (0xf1) already installed
> > (XEN) Finishing wakeup from ACPI S3 state.
> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
> > (XEN) Enabling non-boot CPUs  ...
> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
> > [   65.893235] [drm:pch_irq_handler] *ERROR* PCH poison interrupt
> > [   66.508829] ata3.00: revalidation failed (errno=-5)
> > [   66.508861] ata1.00: revalidation failed (errno=-5)
> > [   76.858815] ata3.00: revalidation failed (errno=-5)
> > [   76.898807] ata1.00: revalidation failed (errno=-5)
> > [  107.208817] ata3.00: revalidation failed (errno=-5)
> > [  107.288807] ata1.00: revalidation failed (errno=-5)
> > [  107.718866] pm_op(): scsi_bus_resume_common+0x0/0x60 returns 262144
> > [  107.718877] PM: Device 0:0:0:0 failed to resume async: error 262144
> > [  107.718913] end_request: I/O error, dev sda, sector 35193296
> > [  107.718919] Buffer I/O error on device dm-5, logical block 7690
> > [  107.718947] end_request: I/O error, dev sda, sector 35657184
> > [  107.718965] end_request: I/O error, dev sda, sector 246202760
> > [  107.718968] Buffer I/O error on device dm-6, logical block 26252801
> > [  107.718995] end_request: I/O error, dev sda, sector 254548368
> > [  107.719009] Aborting journal on device dm-6-8.
> > [  107.719021] end_request: I/O error, dev sda, sector 35164192
> > [  107.719023] Buffer I/O error on device dm-5, logical block 4052
> > [  107.719063] Aborting journal on device dm-5-8.
> > [  107.719085] end_request: I/O error, dev sda, sector 254546304
> > [  107.719097] Buffer I/O error on device dm-6, logical block 27295744
> > [  107.719129] JBD2: I/O error detected when updating journal
> > superblock for dm-6-8.
> > [  107.719141] end_request: I/O error, dev sda, sector 35656064
> > [  107.719146] Buffer I/O error on device dm-5, logical block 65536
> > [  107.719168] JBD2: I/O error detected when updating journal
> > superblock for dm-5-8.
> > [  107.870082] end_request: I/O error, dev sda, sector 35131776
> > [  107.875825] Buffer I/O error on device dm-5, logical block 0
> > [  107.881805] end_request: I/O error, dev sda, sector 35131776
> > [  107.887637] Buffer I/O error on device dm-5, logical block 0
> > [  107.893573] EXT4-fs error (device dm-5): ext4_journal_start_sb:327:
> > [  107.893579] EXT4-fs (dm-5): I/O error while writing superblock
> > [  107.893582] EXT4-fs error (device dm-5): ext4_journal_start_sb:327:
> > Detected aborted journal
> > [  107.893584] EXT4-fs (dm-5): Remounting filesystem read-only
> > [  107.893617] end_request: I/O error, dev sda, sector 35131776
> > [  107.893620] Buffer I/O error on device dm-5, logical block 0
> > [  107.893749] end_request: I/O error, dev sda, sector 36180352
> > [  107.893752] Buffer I/O error on device dm-6, logical block 0
> > [  107.893762] EXT4-fs error (device dm-6): ext4_journal_start_sb:327:
> > Detected aborted journal
> > [  107.893765] EXT4-fs (dm-6): Remounting filesystem read-only
> > [  107.893766] EXT4-fs (dm-6): previous I/O error to superblock detected
> > [  107.893784] end_request: I/O error, dev sda, sector 36180352
> > [  107.893787] Buffer I/O error on device dm-6, logical block 0
> > [  107.894467] EXT4-fs error (device dm-5): ext4_journal_start_sb:327:
> > Detected aborted journal
> > [  108.669763] end_request: I/O error, dev sda, sector 25957784
> > [  108.675555] Aborting journal on device dm-3-8.
> > [  108.680246] end_request: I/O error, dev sda, sector 25956736
> > [  108.686099] JBD2: I/O error detected when updating journal
> > superblock for dm-3-8.
> > [  108.693908] journal commit I/O error
> > [  108.755829] end_request: I/O error, dev sda, sector 17305984
> > [  108.761600] EXT4-fs error (device dm-3): ext4_journal_start_sb:327:
> > Detected aborted journal
> > [  108.770340] EXT4-fs (dm-3): Remounting filesystem read-only
> > [  108.776159] EXT4-fs (dm-3): previous I/O error to superblock detected
> > [  108.782904] end_request: I/O error, dev sda, sector 17305984
> > [  109.660011] end_request: I/O error, dev sda, sector 358788
> > [  109.665572] Buffer I/O error on device dm-1, logical block 46082
> > [  109.682479] end_request: I/O error, dev sda, sector 18832256
> > [  109.688246] end_request: I/O error, dev sda, sector 18832256
> > [  109.709559] end_request: I/O error, dev sda, sector 357762
> > [  109.715120] Buffer I/O error on device dm-1, logical block 45569
> > [  109.721506] end_request: I/O error, dev sda, sector 358790
> > [  109.727114] Buffer I/O error on device dm-1, logical block 46083
> > [  109.743714] end_request: I/O error, dev sda, sector 18832256
> > [  109.755555] end_request: I/O error, dev sda, sector 18832256
> > [  109.886187] end_request: I/O error, dev sda, sector 357764
> > [  109.891756] Buffer I/O error on device dm-1, logical block 45570
> > [  109.908344] end_request: I/O error, dev sda, sector 18832256
> > [  109.928369] end_request: I/O error, dev sda, sector 349574
> > [  109.933938] Buffer I/O error on device dm-1, logical block 41475
> > [  109.950336] end_request: I/O error, dev sda, sector 18832256
> > [  115.378875] end_request: I/O error, dev sda, sector 365000
> > [  115.384445] Aborting journal on device dm-1-8.
> > [  115.389120] end_request: I/O error, dev sda, sector 364930
> > [  115.394798] Buffer I/O error on device dm-1, logical block 49153
> > [  115.401101] JBD2: I/O error detected when updating journal
> > superblock for dm-1-8.
> > [  207.207426] end_request: I/O error, dev sda, sector 246192376
> > [  207.213313] end_request: I/O error, dev sda, sector 246192376
> > [  207.903181] end_request: I/O error, dev sda, sector 246192376
> > [  209.234399] end_request: I/O error, dev sda, sector 18518400
> > [  209.240221] end_request: I/O error, dev sda, sector 18518400
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 16:48:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 16:48:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SymxR-0007Hj-0g; Tue, 07 Aug 2012 16:48:37 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ben.guthro@gmail.com>) id 1SymxP-0007Hd-Ca
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 16:48:35 +0000
Received: from [85.158.143.35:42019] by server-2.bemta-4.messagelabs.com id
	C6/A6-17938-2E641205; Tue, 07 Aug 2012 16:48:34 +0000
X-Env-Sender: ben.guthro@gmail.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1344358111!16590457!1
X-Originating-IP: [209.85.213.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31152 invoked from network); 7 Aug 2012 16:48:32 -0000
Received: from mail-yw0-f45.google.com (HELO mail-yw0-f45.google.com)
	(209.85.213.45)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 16:48:32 -0000
Received: by yhpp34 with SMTP id p34so4394959yhp.32
	for <xen-devel@lists.xen.org>; Tue, 07 Aug 2012 09:48:15 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=Hmtg9qIKis4/+9a7j4NsoY1NR0ClNG9WTeJ9L/uZ/2A=;
	b=gt1j0aAUGp73DINgxxlOUd1u2AwLGIJvXkRyF6hULWK09cqCN4aWjzeggk6rOnNFuO
	B5Fzeu3OG0Ye+JJW2/UEIFoHtqYtTUIRMevYVHtDWhbsJM6Y2hszu/X/kjmVOZSzHeS7
	hxxQIQObIupNpopV/VVvjSI/xQr0b3Jk7TIis/eA2NRQPrG2ZgGLABk3B6quVWVyRYCS
	/Hpbd8mK9fBoF2NcdKUBgRNTtyYzTtQSwXFSeeI04x1C4CzZCrxmIdHoek2GimJYd0iN
	3E93UbrazmkRO3DtuGVEr8RUfbTC4pZKYMB0oWLt7KWZIQ0YrONzVRhQWdwnVKbdj3zF
	/u7Q==
MIME-Version: 1.0
Received: by 10.50.170.73 with SMTP id ak9mr9282727igc.38.1344358095060; Tue,
	07 Aug 2012 09:48:15 -0700 (PDT)
Received: by 10.64.6.4 with HTTP; Tue, 7 Aug 2012 09:48:15 -0700 (PDT)
In-Reply-To: <20120807163320.GC15053@phenom.dumpdata.com>
References: <CAOvdn6WwgBD-2yf1uu3m9-16=Tb5KUrybXf2KibtnxKbP7YtwA@mail.gmail.com>
	<CAOvdn6UBW-yTOMd3YSf5M9JiHLRivyaKL-qzcKG8epszqaKmGg@mail.gmail.com>
	<20120807163320.GC15053@phenom.dumpdata.com>
Date: Tue, 7 Aug 2012 12:48:15 -0400
X-Google-Sender-Auth: V32JbecK5pqfp5Qgkkfp3DjOJGc
Message-ID: <CAOvdn6XcRGKtJxGwJO8J+ZBJr89wfTLc1woW5rhtMM540H9h1w@mail.gmail.com>
From: Ben Guthro <ben@guthro.net>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen4.2 S3 regression?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

No - the issue seems to follow xen-4.2

My test matrix looks like this:

Xen      Linux                S3 result
4.0.3    3.2.23               OK
4.0.3    3.5                  OK
4.2      3.2.23               FAIL
4.2      3.5                  FAIL
4.2      3.2.23 pci=nomsi     OK
4.2      3.5 pci=nomsi        (untested)




On Tue, Aug 7, 2012 at 12:33 PM, Konrad Rzeszutek Wilk
<konrad.wilk@oracle.com> wrote:
> On Tue, Aug 07, 2012 at 12:21:22PM -0400, Ben Guthro wrote:
>> It looks like this regression may be related to MSI handling.
>>
>> "pci=nomsi" on the kernel command line seems to bypass the issue.
>>
>> Clearly, legacy interrupts are not ideal.
>
> This is with v3.5 kernel right? With the earlier one you did not have
> this issue?
>>
>>
>> On Tue, Aug 7, 2012 at 11:04 AM, Ben Guthro <ben@guthro.net> wrote:
>> > I have been doing some experiments in upgrading the Xen version in a
>> > future version of XenClient Enterprise, and I've been running into a
>> > regression that I'm wondering if anyone else has seen.
>> >
>> > dom0 suspend/resume (S3) does not seem to be working for me.
>> >
>> > In swapping out components of the system, the common failure seems to
>> > be when I use Xen-4.2 (upgraded from Xen-4.0.3)
>> >
>> > The first suspend seems to mostly work...but subsequent ones always
>> > resume improperly.
>> > By "improperly" - I see I/O failures, and stalls of many processes.
>> >
>> > Below is a log excerpt of 2 S3 attempts.
>> >
>> >
>> > Has anyone else seen these failures?
>> >
>> > - Ben
>> >
>> >
>> > (XEN) Preparing system for ACPI S3 state.
>> > (XEN) Disabling non-boot CPUs ...
>> > (XEN) Breaking vcpu affinity for domain 0 vcpu 1
>> > (XEN) Breaking vcpu affinity for domain 0 vcpu 2
>> > (XEN) Breaking vcpu affinity for domain 0 vcpu 3
>> > (XEN) Entering ACPI S3 state.
>> > (XEN) mce_intel.c:1239: MCA Capability: BCAST 1 SER 0 CMCI 1 firstbank
>> > 0 extended MCE MSR 0
>> > (XEN) CPU0 CMCI LVT vector (0xf1) already installed
>> > (XEN) Finishing wakeup from ACPI S3 state.
>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>> > (XEN) Enabling non-boot CPUs  ...
>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>> > [   36.440696] [drm:pch_irq_handler] *ERROR* PCH poison interrupt
>> > (XEN) Preparing system for ACPI S3 state.
>> > (XEN) Disabling non-boot CPUs ...
>> > (XEN) Entering ACPI S3 state.
>> > (XEN) mce_intel.c:1239: MCA Capability: BCAST 1 SER 0 CMCI 1 firstbank
>> > 0 extended MCE MSR 0
>> > (XEN) CPU0 CMCI LVT vector (0xf1) already installed
>> > (XEN) Finishing wakeup from ACPI S3 state.
>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>> > (XEN) Enabling non-boot CPUs  ...
>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>> > [   65.893235] [drm:pch_irq_handler] *ERROR* PCH poison interrupt
>> > [   66.508829] ata3.00: revalidation failed (errno=-5)
>> > [   66.508861] ata1.00: revalidation failed (errno=-5)
>> > [   76.858815] ata3.00: revalidation failed (errno=-5)
>> > [   76.898807] ata1.00: revalidation failed (errno=-5)
>> > [  107.208817] ata3.00: revalidation failed (errno=-5)
>> > [  107.288807] ata1.00: revalidation failed (errno=-5)
>> > [  107.718866] pm_op(): scsi_bus_resume_common+0x0/0x60 returns 262144
>> > [  107.718877] PM: Device 0:0:0:0 failed to resume async: error 262144
>> > [  107.718913] end_request: I/O error, dev sda, sector 35193296
>> > [  107.718919] Buffer I/O error on device dm-5, logical block 7690
>> > [  107.718947] end_request: I/O error, dev sda, sector 35657184
>> > [  107.718965] end_request: I/O error, dev sda, sector 246202760
>> > [  107.718968] Buffer I/O error on device dm-6, logical block 26252801
>> > [  107.718995] end_request: I/O error, dev sda, sector 254548368
>> > [  107.719009] Aborting journal on device dm-6-8.
>> > [  107.719021] end_request: I/O error, dev sda, sector 35164192
>> > [  107.719023] Buffer I/O error on device dm-5, logical block 4052
>> > [  107.719063] Aborting journal on device dm-5-8.
>> > [  107.719085] end_request: I/O error, dev sda, sector 254546304
>> > [  107.719097] Buffer I/O error on device dm-6, logical block 27295744
>> > [  107.719129] JBD2: I/O error detected when updating journal
>> > superblock for dm-6-8.
>> > [  107.719141] end_request: I/O error, dev sda, sector 35656064
>> > [  107.719146] Buffer I/O error on device dm-5, logical block 65536
>> > [  107.719168] JBD2: I/O error detected when updating journal
>> > superblock for dm-5-8.
>> > [  107.870082] end_request: I/O error, dev sda, sector 35131776
>> > [  107.875825] Buffer I/O error on device dm-5, logical block 0
>> > [  107.881805] end_request: I/O error, dev sda, sector 35131776
>> > [  107.887637] Buffer I/O error on device dm-5, logical block 0
>> > [  107.893573] EXT4-fs error (device dm-5): ext4_journal_start_sb:327:
>> > [  107.893579] EXT4-fs (dm-5): I/O error while writing superblock
>> > [  107.893582] EXT4-fs error (device dm-5): ext4_journal_start_sb:327:
>> > Detected aborted journal
>> > [  107.893584] EXT4-fs (dm-5): Remounting filesystem read-only
>> > [  107.893617] end_request: I/O error, dev sda, sector 35131776
>> > [  107.893620] Buffer I/O error on device dm-5, logical block 0
>> > [  107.893749] end_request: I/O error, dev sda, sector 36180352
>> > [  107.893752] Buffer I/O error on device dm-6, logical block 0
>> > [  107.893762] EXT4-fs error (device dm-6): ext4_journal_start_sb:327:
>> > Detected aborted journal
>> > [  107.893765] EXT4-fs (dm-6): Remounting filesystem read-only
>> > [  107.893766] EXT4-fs (dm-6): previous I/O error to superblock detected
>> > [  107.893784] end_request: I/O error, dev sda, sector 36180352
>> > [  107.893787] Buffer I/O error on device dm-6, logical block 0
>> > [  107.894467] EXT4-fs error (device dm-5): ext4_journal_start_sb:327:
>> > Detected aborted journal
>> > [  108.669763] end_request: I/O error, dev sda, sector 25957784
>> > [  108.675555] Aborting journal on device dm-3-8.
>> > [  108.680246] end_request: I/O error, dev sda, sector 25956736
>> > [  108.686099] JBD2: I/O error detected when updating journal
>> > superblock for dm-3-8.
>> > [  108.693908] journal commit I/O error
>> > [  108.755829] end_request: I/O error, dev sda, sector 17305984
>> > [  108.761600] EXT4-fs error (device dm-3): ext4_journal_start_sb:327:
>> > Detected aborted journal
>> > [  108.770340] EXT4-fs (dm-3): Remounting filesystem read-only
>> > [  108.776159] EXT4-fs (dm-3): previous I/O error to superblock detected
>> > [  108.782904] end_request: I/O error, dev sda, sector 17305984
>> > [  109.660011] end_request: I/O error, dev sda, sector 358788
>> > [  109.665572] Buffer I/O error on device dm-1, logical block 46082
>> > [  109.682479] end_request: I/O error, dev sda, sector 18832256
>> > [  109.688246] end_request: I/O error, dev sda, sector 18832256
>> > [  109.709559] end_request: I/O error, dev sda, sector 357762
>> > [  109.715120] Buffer I/O error on device dm-1, logical block 45569
>> > [  109.721506] end_request: I/O error, dev sda, sector 358790
>> > [  109.727114] Buffer I/O error on device dm-1, logical block 46083
>> > [  109.743714] end_request: I/O error, dev sda, sector 18832256
>> > [  109.755555] end_request: I/O error, dev sda, sector 18832256
>> > [  109.886187] end_request: I/O error, dev sda, sector 357764
>> > [  109.891756] Buffer I/O error on device dm-1, logical block 45570
>> > [  109.908344] end_request: I/O error, dev sda, sector 18832256
>> > [  109.928369] end_request: I/O error, dev sda, sector 349574
>> > [  109.933938] Buffer I/O error on device dm-1, logical block 41475
>> > [  109.950336] end_request: I/O error, dev sda, sector 18832256
>> > [  115.378875] end_request: I/O error, dev sda, sector 365000
>> > [  115.384445] Aborting journal on device dm-1-8.
>> > [  115.389120] end_request: I/O error, dev sda, sector 364930
>> > [  115.394798] Buffer I/O error on device dm-1, logical block 49153
>> > [  115.401101] JBD2: I/O error detected when updating journal
>> > superblock for dm-1-8.
>> > [  207.207426] end_request: I/O error, dev sda, sector 246192376
>> > [  207.213313] end_request: I/O error, dev sda, sector 246192376
>> > [  207.903181] end_request: I/O error, dev sda, sector 246192376
>> > [  209.234399] end_request: I/O error, dev sda, sector 18518400
>> > [  209.240221] end_request: I/O error, dev sda, sector 18518400
>>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 17:08:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 17:08:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SynFz-0007bP-UE; Tue, 07 Aug 2012 17:07:47 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SynFy-0007bI-BD
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 17:07:46 +0000
Received: from [85.158.139.83:54446] by server-1.bemta-5.messagelabs.com id
	9A/4A-14385-16B41205; Tue, 07 Aug 2012 17:07:45 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-7.tower-182.messagelabs.com!1344359264!25040165!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgzNjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9458 invoked from network); 7 Aug 2012 17:07:44 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-7.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 17:07:44 -0000
X-IronPort-AV: E=Sophos;i="4.77,728,1336348800"; d="scan'208";a="13891468"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Aug 2012 17:07:44 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 7 Aug 2012 18:07:44 +0100
Date: Tue, 7 Aug 2012 18:07:21 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Jan Beulich <JBeulich@suse.com>
In-Reply-To: <502131E202000078000933F9@nat28.tlf.novell.com>
Message-ID: <alpine.DEB.2.02.1208071613400.4645@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
	<1344262325-26598-1-git-send-email-stefano.stabellini@eu.citrix.com>
	<501FFFB50200007800092FC7@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208061640270.4645@kaball.uk.xensource.com>
	<502004BE0200007800093028@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208071318430.4645@kaball.uk.xensource.com>
	<CAEBdQ921bVoRss3iuf1oLDrv9JBrhcvx=S16XsjrYWA2eg1Y8g@mail.gmail.com>
	<502131E202000078000933F9@nat28.tlf.novell.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "Tim Deegan \(3P\)" <Tim.Deegan@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>, "Jean
	Guyader \(3P\)" <jean.guyader@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>, Jean Guyader <jean.guyader@gmail.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH 1/5] xen: improve changes to
 xen_add_to_physmap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 7 Aug 2012, Jan Beulich wrote:
> >>> On 07.08.12 at 14:40, Jean Guyader <jean.guyader@gmail.com> wrote:
> > You could probably do something like
> > 
> > #ifdef __GNUC__
> > # define UNION_NAME
> > #else
> > # define UNION_NAME u
> > #endif
> > union {
> >     /* Number of pages to go through for gmfn_range */
> >     uint16_t    size;
> >     /* IFF gmfn_foreign */
> >     domid_t foreign_domid;
> > } UNION_NAME;
> > 
> > It's not ideal, but this way you keep binary compatibility and the
> > code stays compatible with GCC.
> 
> Ah, yes, something along those lines might be okay; we already
> have similar things e.g. for __DECL_REG() (note the additional
> use of __STRICT_ANSI__ there, though).

But it is strict ANSI.


> The most problematic
> thing here is the name space cluttering - UNION_NAME is certainly
> not good, but __DECL_REG really is too unspecific too (yet we
> got away with it so far).

OK, I'll go for this strategy.

Regarding the name, maybe it should be XEN_ADD_TO_PHYSMAP_FIELD?


#if (defined(__GNUC__) && !defined(__STRICT_ANSI__)) || (__STDC_VERSION__ >= 201112L)
# define XEN_ADD_TO_PHYSMAP_ARG
#else
# define XEN_ADD_TO_PHYSMAP_ARG u
#endif


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 17:18:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 17:18:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SynQQ-0007l1-38; Tue, 07 Aug 2012 17:18:34 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SynQO-0007kw-4u
	for xen-devel@lists.xensource.com; Tue, 07 Aug 2012 17:18:32 +0000
Received: from [85.158.143.35:29933] by server-2.bemta-4.messagelabs.com id
	8E/60-17938-7ED41205; Tue, 07 Aug 2012 17:18:31 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1344359910!16595140!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgzNjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23111 invoked from network); 7 Aug 2012 17:18:30 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 17:18:30 -0000
X-IronPort-AV: E=Sophos;i="4.77,728,1336348800"; d="scan'208";a="13891614"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Aug 2012 17:18:29 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 7 Aug 2012 18:18:29 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1SynQL-0001Fp-0f; Tue, 07 Aug 2012 17:18:29 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1SynQK-0002Vy-SZ;
	Tue, 07 Aug 2012 18:18:28 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20513.19940.781946.155910@mariner.uk.xensource.com>
Date: Tue, 7 Aug 2012 18:18:28 +0100
To: Frediano Ziglio <frediano.ziglio@citrix.com>
Newsgroups: chiark.mail.xen.devel
In-Reply-To: <7CE799CC0E4DE04B88D5FDF226E18AC2CDFFB08D17@LONPMAILBOX01.citrite.net>
References: <7CE799CC0E4DE04B88D5FDF226E18AC2CDFFB08D17@LONPMAILBOX01.citrite.net>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: Anthony Perard <anthony.perard@citrix.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH] Fix invalidate if memory requested was
	not	bucket aligned
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Frediano Ziglio writes ("[Xen-devel] [PATCH] Fix invalidate if memory requested was not bucket aligned"):
> When memory is mapped in qemu_map_cache with lock != 0, a reverse mapping
> is created pointing to the virtual address of the location requested.
> The cached mapped entry is saved in last_address_vaddr with the
> base virtual address (without the bucket offset).
> However, when this entry is invalidated, the virtual address saved in the
> reverse mapping is used. This causes the mapping to be freed while
> last_address_vaddr is not reset.
> 
> Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>

Thanks for this!

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>

I think that this is a good candidate for a backport, after it has
been given a workout in -unstable.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 17:23:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 17:23:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SynUS-0007sX-P0; Tue, 07 Aug 2012 17:22:44 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SynUR-0007sP-Ar
	for xen-devel@lists.xensource.com; Tue, 07 Aug 2012 17:22:43 +0000
Received: from [85.158.138.51:27288] by server-1.bemta-3.messagelabs.com id
	01/86-29224-2EE41205; Tue, 07 Aug 2012 17:22:42 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-3.tower-174.messagelabs.com!1344360161!22785301!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgzNjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18559 invoked from network); 7 Aug 2012 17:22:42 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-3.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 17:22:42 -0000
X-IronPort-AV: E=Sophos;i="4.77,728,1336348800"; d="scan'208";a="13891674"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Aug 2012 17:22:17 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 7 Aug 2012 18:22:17 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1SynU1-0001H7-7T; Tue, 07 Aug 2012 17:22:17 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1SynU1-0002WJ-3Q;
	Tue, 07 Aug 2012 18:22:17 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20513.20168.993350.590876@mariner.uk.xensource.com>
Date: Tue, 7 Aug 2012 18:22:16 +0100
To: Jan Beulich <JBeulich@suse.com>
Newsgroups: chiark.mail.xen.devel
In-Reply-To: <502137BA0200007800093442@nat28.tlf.novell.com>
References: <4FC766F7.2030802@tiscali.it>
	<20434.1848.747678.259199@mariner.uk.xensource.com>
	<1343824081.27221.84.camel@zakaz.uk.xensource.com>
	<20507.45344.552468.930223@mariner.uk.xensource.com>
	<502137BA0200007800093442@nat28.tlf.novell.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: xen-devel <xen-devel@lists.xensource.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	"fantonifabio@tiscali.it" <fantonifabio@tiscali.it>
Subject: Re: [Xen-devel] [PATCH] tools/hotplug/Linux/init.d/: added other
 xen kernel modules on xencommons start
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jan Beulich writes ("Re: [Xen-devel] [PATCH] tools/hotplug/Linux/init.d/: added other xen kernel modules on xencommons start"):
> On 03.08.12 at 13:08, Ian Jackson <Ian.Jackson@eu.citrix.com> wrote:
> > But we should still try to fix the upstream kernels not to need it.
> 
> And just like for the respective pending blktap item, it should
> - be made sure this gets backed out again after 4.2, making sure
>   the loading happens either automatically or when the module is
>   in fact needed, and

Please do remind us :-).

> - not exclusively try the pv-ops kernel's module names.

Do you mean that 4.2 should try loading some bigger set of module
names ?  If so then do you have a list ? :-)

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 17:46:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 17:46:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SynrD-0008CT-0Q; Tue, 07 Aug 2012 17:46:15 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1SynrB-0008CO-6y
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 17:46:13 +0000
Received: from [85.158.143.99:2348] by server-2.bemta-4.messagelabs.com id
	CD/84-17938-46451205; Tue, 07 Aug 2012 17:46:12 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-12.tower-216.messagelabs.com!1344361571!25449614!1
X-Originating-IP: [63.239.67.9]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8121 invoked from network); 7 Aug 2012 17:46:11 -0000
Received: from emvm-gh1-uea08.nsa.gov (HELO nsa.gov) (63.239.67.9)
	by server-12.tower-216.messagelabs.com with SMTP;
	7 Aug 2012 17:46:11 -0000
X-TM-IMSS-Message-ID: <80ded2330003cace@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov ([63.239.67.9])
	with ESMTP (TREND IMSS SMTP Service 7.1) id 80ded2330003cace ;
	Tue, 7 Aug 2012 13:46:18 -0400
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id q77Hk9b8029917; 
	Tue, 7 Aug 2012 13:46:09 -0400
Message-ID: <50215461.4030901@tycho.nsa.gov>
Date: Tue, 07 Aug 2012 13:46:09 -0400
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Organization: National Security Agency
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120717 Thunderbird/14.0
MIME-Version: 1.0
To: Shakeel Butt <shakeel.butt@gmail.com>
References: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
	<CAGj-7pXZ6Z+f0p9NSVZ=_M9i8LF49myNNyyjM6rs_T0AqMDSSA@mail.gmail.com>
In-Reply-To: <CAGj-7pXZ6Z+f0p9NSVZ=_M9i8LF49myNNyyjM6rs_T0AqMDSSA@mail.gmail.com>
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 00/18] RFC: Merge IS_PRIV checks into XSM
 hooks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/07/2012 01:12 AM, Shakeel Butt wrote:
> I have just two comments:
> 
> 1. Although the apparent benefit of this patch series is dom0
> disaggregation [VEE'08, SOSP'11], (completely covered) XSM hooks
> will also facilitate the implementation of recently proposed systems
> like CloudVisor [SOSP'11] and Self-service Cloud [CCS'12], and can be
> used to further explore access control and flexibility in different
> scenarios.

I wasn't intending to exclude the other uses of XSM that this series will
benefit; dom0 disaggregation is just the most obvious case that requires
the larger changes like removing IS_PRIV checks.
 
> 2. This patch series realizes the hypervisor part of the dom0
> disaggregation idea. I think the next step should be applying similar
> ideas to the Xen tools and the Linux kernel. For example, in the Linux
> kernel xen_initial_domain() is the equivalent of IS_PRIV; what should
> the XSM-equivalent solution be there? Other parts which need some
> discussion or thought are xenbus, xenstored, and privcmd (among others).
> 
> Shakeel

Linux should not be doing any access control for the hypervisor based on
xen_initial_domain; this is the hypervisor's job, and duplicating access
checks based on this bit will just make it more likely to be inconsistent.

The actual equivalent for XSM in Linux is SELinux; a method for mapping
between the XSM/FLASK labels in the hypervisor and SELinux labels in a
domain will be needed to make security policy extend from the hypervisor
down to processes. Currently, Xen interfaces are labeled as a whole, so
a process with access to these interfaces has access to everything that
the domain it is running in has access to. This is often sufficient,
especially if stub domains (Linux or minios) are used to limit the access
that any given domain requires.

The xen_initial_domain() access checks are mostly confined to controlling
if PV Linux domains attempt direct access to hardware: things like ACPI
support, IRQ configuration, direct PCI access, etc. It should be possible
to use the rest of the Xen toolstack from a domU, once this series is
applied.

Xenstore can already be split into its own stub domain (or domains, as in
the Xoar paper). The permissions model in Xenstore has a privileged bit
similar to IS_PRIV; extending XSM controls into Xenstore similar to how
SELinux controls were extended into DBus will address this.

-- 
Daniel De Graaf
National Security Agency

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 17:47:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 17:47:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Synsb-0008GM-Fu; Tue, 07 Aug 2012 17:47:41 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <msw@amazon.com>) id 1Synsa-0008GF-8D
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 17:47:40 +0000
Received: from [85.158.143.35:64465] by server-3.bemta-4.messagelabs.com id
	D1/A2-01511-BB451205; Tue, 07 Aug 2012 17:47:39 +0000
X-Env-Sender: msw@amazon.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1344361657!5834747!1
X-Originating-IP: [72.21.196.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNzIuMjEuMTk2LjI1ID0+IDEwNTU2OQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16613 invoked from network); 7 Aug 2012 17:47:38 -0000
Received: from smtp-fw-2101.amazon.com (HELO smtp-fw-2101.amazon.com)
	(72.21.196.25)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 17:47:38 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=amazon.com; i=msw@amazon.com; q=dns/txt; s=rte02;
	t=1344361658; x=1375897658;
	h=date:from:to:cc:subject:message-id:references:
	mime-version:in-reply-to;
	bh=Xd8chtoLialXPg7Mk8horPYrWC/0V5yboWZ7ATfN/LI=;
	b=jWI9YWGYUWdLhWqyHZAHmP5nVnCgon7Mu6yKYrMrQellYxca1EvPDVyu
	sKLHe7Oat/QfLZCTddDx5iyvpMOaoQ==;
X-IronPort-AV: E=Sophos;i="4.77,728,1336348800"; d="scan'208";a="420116884"
Received: from smtp-in-0191.sea3.amazon.com ([10.224.12.28])
	by smtp-border-fw-out-2101.iad2.amazon.com with
	ESMTP/TLS/DHE-RSA-AES256-SHA; 07 Aug 2012 17:47:36 +0000
Received: from ex10-hub-31006.ant.amazon.com (ex10-hub-31006.sea31.amazon.com
	[10.185.176.13])
	by smtp-in-0191.sea3.amazon.com (8.13.8/8.13.8) with ESMTP id
	q77HlYDO011315
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=FAIL);
	Tue, 7 Aug 2012 17:47:34 GMT
Received: from US-SEA-R8XVZTX (10.224.80.42) by ex10-hub-31006.ant.amazon.com
	(10.185.176.13) with Microsoft SMTP Server id 14.2.247.3;
	Tue, 7 Aug 2012 10:47:33 -0700
Received: by US-SEA-R8XVZTX (sSMTP sendmail emulation); Tue, 07 Aug 2012
	10:47:33 -0700
Date: Tue, 7 Aug 2012 10:47:33 -0700
From: Matt Wilson <msw@amazon.com>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20120807174733.GA5592@US-SEA-R8XVZTX>
References: <bf922651da96e045cd8f.1343503160@kaos-source-31003.sea31.amazon.com>
	<50164C5F0200007800091325@nat28.tlf.novell.com>
	<CAL54oT0HAgS-+sQVmWJt9kMiTGdoBmR5Mj-TqEwBoQzzEiUwDg@mail.gmail.com>
	<5020D6880200007800093198@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <5020D6880200007800093198@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.20 (2009-12-10)
Cc: Jinsong Liu <jinsong.liu@intel.com>, Keir Fraser <keir@xen.org>,
	Donald D Dugger <donald.d.dugger@intel.com>,
	Jun Nakajima <jun.nakajima@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] xen/x86: Add support for cpuid masking on
 Intel Xeon Processor E5 Family
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Aug 06, 2012 at 11:49:12PM -0700, Jan Beulich wrote:
> >>> On 06.08.12 at 22:23, "Nakajima, Jun" <jun.nakajima@intel.com> wrote:
> > On Sun, Jul 29, 2012 at 11:57 PM, Jan Beulich <JBeulich@suse.com> wrote:
> > 
> >> >>> On 28.07.12 at 21:19, Matt Wilson <msw@amazon.com> wrote:
> >> > Although the "Intel Virtualization Technology FlexMigration
> >> > Application Note" (http://www.intel.com/Assets/PDF/manual/323850.pdf)
> >> > does not document support for extended model 2H model DH (Intel Xeon
> >> > Processor E5 Family), empirical evidence shows that the same MSR
> >> > addresses can be used for cpuid masking as extended model 2H model AH
> >> > (Intel Xeon Processor E3-1200 Family).
> >>
> >> Empirical evidence isn't really enough - let's have someone at Intel
> >> confirm this - Jun, Don?
> >>
> > 
> > Thanks for the patch. The patch looks good, and it should be in.
> > We'll update the document.
> 
> I take this as an ack then, and will commit it that way.

Thanks for committing, Jan.

For what it's worth, I think that the first line of the commit log got
dropped, which makes for a strange short log message of:

  Although the "Intel Virtualization Technology FlexMigration

Matt

> >> > Signed-off-by: Matt Wilson <msw@amazon.com>
> >> >
> >> > diff -r e6266fc76d08 -r bf922651da96 xen/arch/x86/cpu/intel.c
> >> > --- a/xen/arch/x86/cpu/intel.c        Fri Jul 27 12:22:13 2012 +0200
> >> > +++ b/xen/arch/x86/cpu/intel.c        Sat Jul 28 17:27:30 2012 +0000
> >> > @@ -104,7 +104,7 @@ static void __devinit set_cpuidmask(cons
> >> >                       return;
> >> >               extra = "xsave ";
> >> >               break;
> >> > -     case 0x2a:
> >> > +     case 0x2a: case 0x2d:
> >> >               wrmsr(MSR_INTEL_CPUID1_FEATURE_MASK_V2,
> >> >                     opt_cpuid_mask_ecx,
> >> >                     opt_cpuid_mask_edx);
> >>
> >>
> >>
> >>
> > 
> > 
> 
> 
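[Archive editor's note: the "extended model 2H model DH" naming in the quoted commit log corresponds to the effective model 0x2d matched by the new case label. The helper below is a simplified illustration of the standard CPUID model computation, not the actual Xen code; the example signature values are assumptions for illustration.]

```c
#include <stdint.h>

/* Compute the effective x86 model number from a CPUID leaf-1 EAX
 * signature.  For family 6 (and 15), the 4-bit extended-model field
 * is folded into the base model, so "extended model 2H, model DH"
 * yields (0x2 << 4) | 0xd == 0x2d. */
static unsigned int x86_model(uint32_t eax)
{
    unsigned int family = (eax >> 8) & 0xf;
    unsigned int model  = (eax >> 4) & 0xf;

    if (family == 0x6 || family == 0xf)
        model |= ((eax >> 16) & 0xf) << 4;   /* fold in extended model */

    return model;
}
```

For example, a Xeon E5 signature such as 0x206d7 decodes to model 0x2d, and an E3-1200 signature such as 0x206a7 decodes to 0x2a: the two cases that now share the V2 feature-mask MSR path in the patch above.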

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 18:00:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 18:00:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Syo4L-0008V0-O4; Tue, 07 Aug 2012 17:59:49 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <msw@amazon.com>) id 1Syo4K-0008Uv-U6
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 17:59:49 +0000
Received: from [85.158.143.99:16617] by server-1.bemta-4.messagelabs.com id
	8E/63-24392-49751205; Tue, 07 Aug 2012 17:59:48 +0000
X-Env-Sender: msw@amazon.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1344362386!23672577!1
X-Originating-IP: [72.21.198.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNzIuMjEuMTk4LjI1ID0+IDE1OTEzNQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3839 invoked from network); 7 Aug 2012 17:59:47 -0000
Received: from smtp-fw-4101.amazon.com (HELO smtp-fw-4101.amazon.com)
	(72.21.198.25)
	by server-6.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 17:59:47 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=amazon.com; i=msw@amazon.com; q=dns/txt; s=rte02;
	t=1344362387; x=1375898387;
	h=date:from:to:cc:subject:message-id:references:
	mime-version:in-reply-to;
	bh=LjqOYniq2CnSlEex5+lZboAOMxZt0VRvQRMP9qF6Q3o=;
	b=AOTISuPHTuHEBsvgW3ntxKNl2JtfV7NOANNuD7r9jalM4qqqGSZ8OC7j
	veQgWvj1+GBuyqmqtD8xJHl0UseUFg==;
X-IronPort-AV: E=Sophos;i="4.77,728,1336348800"; d="scan'208";a="776881325"
Received: from smtp-in-1104.vdc.amazon.com ([10.140.10.25])
	by smtp-border-fw-out-4101.iad4.amazon.com with
	ESMTP/TLS/DHE-RSA-AES256-SHA; 07 Aug 2012 17:59:45 +0000
Received: from ex10-hub-31006.ant.amazon.com (ex10-hub-31006.sea31.amazon.com
	[10.185.176.13])
	by smtp-in-1104.vdc.amazon.com (8.13.8/8.13.8) with ESMTP id
	q77HxiDg013591
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=FAIL);
	Tue, 7 Aug 2012 17:59:45 GMT
Received: from US-SEA-R8XVZTX (10.224.80.43) by ex10-hub-31006.ant.amazon.com
	(10.185.176.13) with Microsoft SMTP Server id 14.2.247.3;
	Tue, 7 Aug 2012 10:59:38 -0700
Received: by US-SEA-R8XVZTX (sSMTP sendmail emulation); Tue, 07 Aug 2012
	10:59:38 -0700
Date: Tue, 7 Aug 2012 10:59:38 -0700
From: Matt Wilson <msw@amazon.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20120807175938.GB5592@US-SEA-R8XVZTX>
References: <ef1271aef866effe07aa.1343676828@kaos-source-31003.sea31.amazon.com>
	<1343723343.15432.63.camel@zakaz.uk.xensource.com>
	<20120731153459.GD8228@US-SEA-R8XVZTX>
	<20120807032246.GA4324@US-SEA-R8XVZTX>
	<1344327782.11339.65.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1344327782.11339.65.camel@zakaz.uk.xensource.com>
User-Agent: Mutt/1.5.20 (2009-12-10)
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH DOCDAY] use lynx to produce better formatted
 text documentation from markdown
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Aug 07, 2012 at 01:23:02AM -0700, Ian Campbell wrote:
> On Tue, 2012-08-07 at 04:22 +0100, Matt Wilson wrote:
> > On Tue, Jul 31, 2012 at 08:34:59AM -0700, Matt Wilson wrote:
> > > > Why wouldn't you just run lynx on the generated .html instead of less on
> > > > the generated .txt if you wanted something a bit better formatted?
> > > 
> > > I generally don't have lynx installed on my production machines.
> 
> You can read it on your workstation or on line at
> http://xenbits.xen.org/docs/unstable/misc/xen-command-line.html
> 
> > Any further concerns?
> 
> I'm afraid I just don't see the point. The plain markdown docs are
> serviceable enough, if not great, and there are plenty of ways to view
> the html docs both on and off line if you want something prettier,
> without adding a web browser to our build dependencies.

I'm certainly not suggesting that we add a web browser to the build
dependencies. If lynx isn't installed, the current behavior of copying
the markdown file is maintained.

If a packager wishes to produce prettier text documentation, they can
elect to add lynx to their build dependencies. Today doing this
post-build from build control files is a bit tricky, since we drop the
semantic information conveyed by the .markdown suffix by calling the
final file .txt.

4.2 will be the first release with markdown documentation. I think
that making it as well formatted as the previous .txt documentation
will be a better experience for the user.

Matt

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 18:08:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 18:08:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyoCE-0000Lm-Mh; Tue, 07 Aug 2012 18:07:58 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <shakeel.butt@gmail.com>) id 1SyoCD-0000Lh-J1
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 18:07:57 +0000
Received: from [85.158.139.83:18034] by server-3.bemta-5.messagelabs.com id
	65/D2-31899-C7951205; Tue, 07 Aug 2012 18:07:56 +0000
X-Env-Sender: shakeel.butt@gmail.com
X-Msg-Ref: server-5.tower-182.messagelabs.com!1344362875!30774916!1
X-Originating-IP: [209.85.215.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23094 invoked from network); 7 Aug 2012 18:07:56 -0000
Received: from mail-ey0-f173.google.com (HELO mail-ey0-f173.google.com)
	(209.85.215.173)
	by server-5.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 18:07:56 -0000
Received: by eaac13 with SMTP id c13so676319eaa.32
	for <xen-devel@lists.xen.org>; Tue, 07 Aug 2012 11:07:55 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=o98OqaulPGCKa5YPqX0aGGU9XZwE8r8KcSbWtVQumHU=;
	b=N8dcQu6wor48plZZg6mNHW7Gu3GZYu/yexVQ9mOHuVz9Dusp6IELJ5fb0YdxgSpTgf
	ULP1yjK0In7favGDtVeYFgkNP74leQGCWbWBa07nT/4HRjSNLI8xK05WEz3hpqzc4288
	V6xBYg1oUl2v8IRvS638RKRE21ORF18j2lzotC9x7hJY5UuOBqiDwkbK2+8ioiYVPqYn
	fZ8U0hV/temwr/dwv63c/fYLH0mL8gZQP8Eru6/Je4k2+ceeZAAPR4SjlelYH8hHJKgi
	yUE1+Z3MIFkLZ5QHRbZysfPGdHY2hyg5dlQcvOpcNLfbDV4CdIZxXcBuSVfg/04WMFR2
	2k5g==
MIME-Version: 1.0
Received: by 10.14.172.129 with SMTP id t1mr10878568eel.34.1344362875653; Tue,
	07 Aug 2012 11:07:55 -0700 (PDT)
Received: by 10.14.94.200 with HTTP; Tue, 7 Aug 2012 11:07:55 -0700 (PDT)
In-Reply-To: <50215461.4030901@tycho.nsa.gov>
References: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
	<CAGj-7pXZ6Z+f0p9NSVZ=_M9i8LF49myNNyyjM6rs_T0AqMDSSA@mail.gmail.com>
	<50215461.4030901@tycho.nsa.gov>
Date: Tue, 7 Aug 2012 14:07:55 -0400
Message-ID: <CAGj-7pWTrXYUM-pdM1zsFTy98krtzHw4ANGFKyTyxAiJ_pJn+A@mail.gmail.com>
From: Shakeel Butt <shakeel.butt@gmail.com>
To: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 00/18] RFC: Merge IS_PRIV checks into XSM
	hooks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> I wasn't intending to exclude the other uses of XSM that this series will
> benefit; dom0 disaggregation is just the most obvious case that requires
> the larger changes like removing IS_PRIV checks.
I was just saying that this patch series is more beneficial than claimed.

> Xenstore can already be split into its own stub domain (or domains, as in
> the Xoar paper). The permissions model in Xenstore has a privileged bit
> similar to IS_PRIV; extending XSM controls into Xenstore similar to how
> SELinux controls were extended into DBus will address this.

My real concern here was the use of is_initial_domain() in the xenbus driver
code. For example, if I am running all Linux PV domains and one of them is the
XenStore domain, the xenbus driver needs to do something different from
is_initial_domain(), maybe something like is_xenstore_domain() [not saying
this is the right way to do it]. Please correct me if I am wrong.
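[Archive editor's note: to make the idea concrete, here is a purely hypothetical sketch. is_xenstore_domain(), self_domid, and xenstore_domid are invented names for illustration only; no such predicate is claimed to exist in the xenbus driver, and how a guest would learn the XenStore domain's ID is left as an assumption.]

```c
#include <stdbool.h>

/* Hypothetical: this domain's ID and the ID of the domain hosting
 * the XenStore service, assumed to be learned at start-of-day
 * (e.g. from hypervisor-provided start-of-day information). */
static unsigned int self_domid;
static unsigned int xenstore_domid;

/* "Is the initial domain" tests for dom0 by convention... */
static bool is_initial_domain(void)
{
    return self_domid == 0;
}

/* ...which is a different question from "does this domain run the
 * XenStore service", once XenStore lives in its own stub domain. */
static bool is_xenstore_domain(void)
{
    return self_domid == xenstore_domid;
}
```

The point of the sketch is only that the two predicates diverge as soon as XenStore is disaggregated from dom0.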

thanks,
Shakeel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 18:09:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 18:09:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyoDv-0000SP-Ap; Tue, 07 Aug 2012 18:09:43 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SyoDt-0000SI-CQ
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 18:09:41 +0000
Received: from [85.158.143.99:48077] by server-1.bemta-4.messagelabs.com id
	00/CA-24392-4E951205; Tue, 07 Aug 2012 18:09:40 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-10.tower-216.messagelabs.com!1344362979!24306228!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgzNjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24248 invoked from network); 7 Aug 2012 18:09:40 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-10.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 18:09:40 -0000
X-IronPort-AV: E=Sophos;i="4.77,728,1336348800"; d="scan'208";a="13892373"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Aug 2012 18:09:39 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 7 Aug 2012 19:09:39 +0100
Date: Tue, 7 Aug 2012 19:09:16 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Jan Beulich <JBeulich@suse.com>
In-Reply-To: <50212F6902000078000933E4@nat28.tlf.novell.com>
Message-ID: <alpine.DEB.2.02.1208071857550.4645@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
	<1344262325-26598-4-git-send-email-stefano.stabellini@eu.citrix.com>
	<5020025C0200007800092FEE@nat28.tlf.novell.com>
	<1344268070.11339.53.camel@zakaz.uk.xensource.com>
	<502005B2020000780009302B@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208061659320.4645@kaball.uk.xensource.com>
	<5020D0B90200007800093181@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208071331030.4645@kaball.uk.xensource.com>
	<50212F6902000078000933E4@nat28.tlf.novell.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "Tim Deegan \(3P\)" <Tim.Deegan@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 4/5] xen: introduce XEN_GUEST_HANDLE_PARAM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 7 Aug 2012, Jan Beulich wrote:
> >>> On 07.08.12 at 14:35, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> wrote:
> > On Tue, 7 Aug 2012, Jan Beulich wrote:
> >> >>> On 06.08.12 at 18:02, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> >> wrote:
> >> > On Mon, 6 Aug 2012, Jan Beulich wrote:
> >> >> >>> On 06.08.12 at 17:47, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> >> >> > On Mon, 2012-08-06 at 16:43 +0100, Jan Beulich wrote:
> >> >> >> >>> On 06.08.12 at 16:12, Stefano Stabellini <stefano.stabellini@eu.citrix.com> 
> > 
> >> >> > wrote:
> >> >> >> > Note: this change does not make any difference on x86 and ia64.
> >> >> >> > 
> >> >> >> > 
> >> >> >> > XEN_GUEST_HANDLE_PARAM is going to be used to distinguish guest pointers
> >> >> >> > stored in memory from guest pointers as hypercall parameters.
> >> >> >> 
> >> >> >> I have to admit that I really dislike this, in large part because of
> >> >> >> the follow up patch that clutters the corresponding function
> >> >> >> declarations even further. Plus I see no mechanism to convert
> >> >> >> between the two, yet I can't see how - long term at least - you
> >> >> >> could get away without such conversion.
> >> >> >> 
> >> >> >> Is it really a well thought through and settled upon decision to
> >> >> >> make guest handles 64 bits wide even on 32-bit ARM? After all,
> >> >> >> both x86 and PPC got away without doing so
> >> >> > 
> >> >> > Well, on x86 we have the compat XLAT layer, which is a pretty complex
> >> >> > piece of code, so "got away without" is a bit strong...
> >> >> 
> >> >> Hmm, yes, that's a valid correction.
> >> >> 
> >> >> > We'd really
> >> >> > rather not have to have a non-trivial compat layer on arm too by having
> >> >> > the struct layouts be the same on 32/64.
> >> >> 
> >> >> And paying a penalty like this in the 32-bit half (if what is likely
> >> >> to remain the much bigger portion for the next couple of years
> >> >> can validly be called "half") is worth it? The more that the compat
> >> >> layer is now reasonably mature (and should hence be easily
> >> >> re-usable for ARM)?
> >> > 
> >> > What penalty? The only penalty is the wasted space in the structs in
> >> > memory.
> >> 
> >> No - the caller has to zero-initialize those extra 32 bits, and the
> >> hypervisor has to check for them to be zero (the latter may be
> >> implicit in the 64-bit one, but certainly needs to be explicit on the
> >> 32-bit side).
> > 
> > You are saying that on a 32 bit hypervisor we should check that the
> > padding is zero? Why should we care about the value of the padding?
> 
> Because otherwise the same 32-bit guest kernel's behavior on
> 32- and 64-bit hypervisor may unexpectedly differ. And even if
> you didn't care to do the check, the guest would still be required
> to put zeros there in order to run on a 64-bit hypervisor. (And
> of course this costs you cache and TLB bandwidth. See how
> x86-64 just recently got the ILP32 model [aka x32] added for
> this very reason.)

Considering that no MMU hypercalls are required on ARM and that none of
the grant table and event channel ops on the hot path have any guest
handles, I think we are going to be fine.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Tue Aug 07 18:13:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 18:13:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyoHP-0000ea-VB; Tue, 07 Aug 2012 18:13:19 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jinsong.liu@intel.com>) id 1SyoHO-0000eQ-Hi
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 18:13:18 +0000
Received: from [85.158.138.51:18705] by server-2.bemta-3.messagelabs.com id
	73/D9-29239-DBA51205; Tue, 07 Aug 2012 18:13:17 +0000
X-Env-Sender: jinsong.liu@intel.com
X-Msg-Ref: server-9.tower-174.messagelabs.com!1344363196!30938370!1
X-Originating-IP: [192.55.52.88]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjg4ID0+IDMwNjc2Mg==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12369 invoked from network); 7 Aug 2012 18:13:17 -0000
Received: from mga01.intel.com (HELO mga01.intel.com) (192.55.52.88)
	by server-9.tower-174.messagelabs.com with SMTP;
	7 Aug 2012 18:13:17 -0000
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
	by fmsmga101.fm.intel.com with ESMTP; 07 Aug 2012 11:13:16 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.71,315,1320652800"; d="scan'208";a="204261629"
Received: from fmsmsx104.amr.corp.intel.com ([10.19.9.35])
	by fmsmga002.fm.intel.com with ESMTP; 07 Aug 2012 11:13:16 -0700
Received: from shsmsx102.ccr.corp.intel.com (10.239.4.154) by
	FMSMSX104.amr.corp.intel.com (10.19.9.35) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Tue, 7 Aug 2012 11:13:15 -0700
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.82]) by
	SHSMSX102.ccr.corp.intel.com ([169.254.2.196]) with mapi id
	14.01.0355.002; Wed, 8 Aug 2012 02:13:13 +0800
From: "Liu, Jinsong" <jinsong.liu@intel.com>
To: Jan Beulich <JBeulich@suse.com>, Ian Campbell <Ian.Campbell@citrix.com>,
	Keir Fraser <keir.xen@gmail.com>
Thread-Topic: [Xen-devel] Xen 4.2 TODO / Release Plan
Thread-Index: AQHNdHNa/VoC38+Rs0qPrUzcitcAR5dOpk1A
Date: Tue, 7 Aug 2012 18:13:13 +0000
Message-ID: <DE8DF0795D48FD4CA783C40EC82923352D7ABC@SHSMSX101.ccr.corp.intel.com>
References: <5020D418020000780009318C@nat28.tlf.novell.com>
	<CC468762.3AE90%keir.xen@gmail.com>
	<5020E85202000078000931F6@nat28.tlf.novell.com>
In-Reply-To: <5020E85202000078000931F6@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.2 TODO / Release Plan
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jan Beulich wrote:
>>>> On 07.08.12 at 09:50, Keir Fraser <keir.xen@gmail.com> wrote:
>> On 07/08/2012 07:38, "Jan Beulich" <JBeulich@suse.com> wrote:
>> 
>>> Otoh, restoring from saved state that only includes MCG_CAP (but
>>> no MCi_CTL2-s) needs to be handled anyway (forcing MCi_CTL2
>>> to be zero, which would be trivial as that's the startup state, i.e.
>>> the only complication here is the variable size save record), so
>>> pushing this to post-4.2 as well is a reasonable alternative.
>>> 
>>> Keir, Ian?
>> 
>> I think we should leave it and handle the variable-sized save record
>> in 4.3. Using hvm_load_entry_zeroextend() to read in save records,
>> with zero padding for older shorter records, should be
>> straightforward enough. 
> 
> Okay. So Ian, you could then take the corresponding item
> off the list. Or do you do that only once patches make it
> through the regression tester?
> 

So we will leave it and handle it after 4.2, right?
I will take one week of vacation starting tomorrow, so Jan, if there is any misunderstanding please let me know as soon as possible.

Thanks,
Jinsong
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 18:16:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 18:16:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyoJu-0000lo-HH; Tue, 07 Aug 2012 18:15:54 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1SyoJt-0000lj-JV
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 18:15:53 +0000
Received: from [85.158.138.51:29649] by server-6.bemta-3.messagelabs.com id
	FD/33-02321-85B51205; Tue, 07 Aug 2012 18:15:52 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-4.tower-174.messagelabs.com!1344363350!30852262!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjc1NDQ5\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24757 invoked from network); 7 Aug 2012 18:15:52 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-4.tower-174.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Aug 2012 18:15:52 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q77IFlmO012576
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 7 Aug 2012 18:15:48 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q77IFkj0029802
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 7 Aug 2012 18:15:47 GMT
Received: from abhmt116.oracle.com (abhmt116.oracle.com [141.146.116.68])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q77IFkt4018034; Tue, 7 Aug 2012 13:15:46 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 07 Aug 2012 11:15:46 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id C835A41F36; Tue,  7 Aug 2012 14:06:21 -0400 (EDT)
Date: Tue, 7 Aug 2012 14:06:21 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Shakeel Butt <shakeel.butt@gmail.com>
Message-ID: <20120807180621.GE15053@phenom.dumpdata.com>
References: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
	<CAGj-7pXZ6Z+f0p9NSVZ=_M9i8LF49myNNyyjM6rs_T0AqMDSSA@mail.gmail.com>
	<50215461.4030901@tycho.nsa.gov>
	<CAGj-7pWTrXYUM-pdM1zsFTy98krtzHw4ANGFKyTyxAiJ_pJn+A@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAGj-7pWTrXYUM-pdM1zsFTy98krtzHw4ANGFKyTyxAiJ_pJn+A@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 00/18] RFC: Merge IS_PRIV checks into XSM
 hooks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Aug 07, 2012 at 02:07:55PM -0400, Shakeel Butt wrote:
> > I wasn't intending to exclude the other uses of XSM that this series will
> > benefit; dom0 disaggregation is just the most obvious case that requires
> > the larger changes like removing IS_PRIV checks.
> I was just saying that this patch series is more beneficial than claimed.
> 
> > Xenstore can already be split into its own stub domain (or domains, as in
> > the Xoar paper). The permissions model in Xenstore has a privileged bit
> > similar to IS_PRIV; extending XSM controls into Xenstore similar to how
> > SELinux controls were extended into DBus will address this.
> 
> My real concern here was the use of is_initial_domain() in the xenbus driver
> code. For example, if I am running all Linux PV domains and one of them is the
> XenStore domain, the xenbus driver needs to do something different from
> is_initial_domain(),

Stefano and Daniel are already making the Linux XenBus driver more intelligent,
so that it can tell when it is the initial domain but is not running XenBus.

> maybe something like is_xenstore_domain() [not saying this is right
> way to do it].
> Please correct me if I am wrong.
> 
> thanks,
> Shakeel
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 18:20:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 18:20:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyoO9-0000xD-7G; Tue, 07 Aug 2012 18:20:17 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1SyoO7-0000x0-IG
	for xen-devel@lists.xensource.com; Tue, 07 Aug 2012 18:20:15 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1344363607!11275570!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDY4MzA2Nw==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2842 invoked from network); 7 Aug 2012 18:20:08 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-8.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Aug 2012 18:20:08 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q77IJiK9028041
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 7 Aug 2012 18:19:46 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q77IJhGC026153
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 7 Aug 2012 18:19:43 GMT
Received: from abhmt107.oracle.com (abhmt107.oracle.com [141.146.116.59])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q77IJgV1032231; Tue, 7 Aug 2012 13:19:42 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 07 Aug 2012 11:19:42 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 26ED441F37; Tue,  7 Aug 2012 14:10:17 -0400 (EDT)
Date: Tue, 7 Aug 2012 14:10:17 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20120807181017.GF15053@phenom.dumpdata.com>
References: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
	<1344263246-28036-1-git-send-email-stefano.stabellini@eu.citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1344263246-28036-1-git-send-email-stefano.stabellini@eu.citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, catalin.marinas@arm.com,
	tim@xen.org, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH v2 01/23] arm: initial Xen support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Aug 06, 2012 at 03:27:04PM +0100, Stefano Stabellini wrote:
> - Basic hypervisor.h and interface.h definitions.
> - Skeleton enlighten.c, set xen_start_info to an empty struct.
> - Make xen_initial_domain dependent on the SIF_PRIVILEGED_BIT.
> 
> The new code only compiles when CONFIG_XEN is set, that is going to be
> added to arch/arm/Kconfig in patch #11 "xen/arm: introduce CONFIG_XEN on
> ARM".

You can add my Ack, but do one change pls:

> +/* XXX: Move pvclock definitions some place arch independent */

Just use 'TODO'

> +struct pvclock_vcpu_time_info {
> +	u32   version;
> +	u32   pad0;
> +	u64   tsc_timestamp;
> +	u64   system_time;
> +	u32   tsc_to_system_mul;
> +	s8    tsc_shift;
> +	u8    flags;
> +	u8    pad[2];
> +} __attribute__((__packed__)); /* 32 bytes */
> +
> +struct pvclock_wall_clock {
> +	u32   version;
> +	u32   sec;
> +	u32   nsec;
> +} __attribute__((__packed__));

Mention the size and why it is OK to have it be a weird
size while the one above is nicely padded.

> +#endif
> +
> +#endif /* _ASM_ARM_XEN_INTERFACE_H */
> diff --git a/arch/arm/xen/Makefile b/arch/arm/xen/Makefile
> new file mode 100644
> index 0000000..0bad594
> --- /dev/null
> +++ b/arch/arm/xen/Makefile
> @@ -0,0 +1 @@
> +obj-y		:= enlighten.o
> diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
> new file mode 100644
> index 0000000..d27c2a6
> --- /dev/null
> +++ b/arch/arm/xen/enlighten.c
> @@ -0,0 +1,35 @@
> +#include <xen/xen.h>
> +#include <xen/interface/xen.h>
> +#include <xen/interface/memory.h>
> +#include <xen/platform_pci.h>
> +#include <asm/xen/hypervisor.h>
> +#include <asm/xen/hypercall.h>
> +#include <linux/module.h>
> +
> +struct start_info _xen_start_info;
> +struct start_info *xen_start_info = &_xen_start_info;
> +EXPORT_SYMBOL_GPL(xen_start_info);
> +
> +enum xen_domain_type xen_domain_type = XEN_NATIVE;
> +EXPORT_SYMBOL_GPL(xen_domain_type);
> +
> +struct shared_info xen_dummy_shared_info;
> +struct shared_info *HYPERVISOR_shared_info = (void *)&xen_dummy_shared_info;
> +
> +DEFINE_PER_CPU(struct vcpu_info *, xen_vcpu);
> +
> +/* XXX: to be removed */

s/XXX/TODO/ here, and mention pls why it needs to be removed.

> +__read_mostly int xen_have_vector_callback;
> +EXPORT_SYMBOL_GPL(xen_have_vector_callback);
> +
> +int xen_platform_pci_unplug = XEN_UNPLUG_ALL;
> +EXPORT_SYMBOL_GPL(xen_platform_pci_unplug);

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 18:21:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 18:21:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyoOf-0000zb-K1; Tue, 07 Aug 2012 18:20:49 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1SyoOe-0000zR-Kg
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 18:20:48 +0000
Received: from [85.158.143.35:51308] by server-3.bemta-4.messagelabs.com id
	9D/79-01511-08C51205; Tue, 07 Aug 2012 18:20:48 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-3.tower-21.messagelabs.com!1344363645!16603414!1
X-Originating-IP: [63.239.67.9]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18881 invoked from network); 7 Aug 2012 18:20:45 -0000
Received: from emvm-gh1-uea08.nsa.gov (HELO nsa.gov) (63.239.67.9)
	by server-3.tower-21.messagelabs.com with SMTP;
	7 Aug 2012 18:20:45 -0000
X-TM-IMSS-Message-ID: <80fe769d0003d5ba@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov ([63.239.67.9])
	with ESMTP (TREND IMSS SMTP Service 7.1) id 80fe769d0003d5ba ;
	Tue, 7 Aug 2012 14:20:52 -0400
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id q77IKhU6032339; 
	Tue, 7 Aug 2012 14:20:43 -0400
Message-ID: <50215C7B.1060509@tycho.nsa.gov>
Date: Tue, 07 Aug 2012 14:20:43 -0400
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Organization: National Security Agency
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120717 Thunderbird/14.0
MIME-Version: 1.0
To: Shakeel Butt <shakeel.butt@gmail.com>
References: <1344263550-3941-1-git-send-email-dgdegra@tycho.nsa.gov>
	<CAGj-7pXZ6Z+f0p9NSVZ=_M9i8LF49myNNyyjM6rs_T0AqMDSSA@mail.gmail.com>
	<50215461.4030901@tycho.nsa.gov>
	<CAGj-7pWTrXYUM-pdM1zsFTy98krtzHw4ANGFKyTyxAiJ_pJn+A@mail.gmail.com>
In-Reply-To: <CAGj-7pWTrXYUM-pdM1zsFTy98krtzHw4ANGFKyTyxAiJ_pJn+A@mail.gmail.com>
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 00/18] RFC: Merge IS_PRIV checks into XSM
 hooks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/07/2012 02:07 PM, Shakeel Butt wrote:
>> I wasn't intending to exclude the other uses of XSM that this series will
>> benefit; dom0 disaggregation is just the most obvious case that requires
>> the larger changes like removing IS_PRIV checks.
> I was just saying that this patch series is more beneficial than claimed.
> 
>> Xenstore can already be split into its own stub domain (or domains, as in
>> the Xoar paper). The permissions model in Xenstore has a privileged bit
>> similar to IS_PRIV; extending XSM controls into Xenstore similar to how
>> SELinux controls were extended into DBus will address this.
> 
> My real concern here was the use of is_initial_domain() in the xenbus driver
> code. For example I am running all Linux PV and one of them is XenStore
> domain, the xenbus driver needs to do something different than
> is_initial_domain(),
> maybe something like is_xenstore_domain() [not saying this is right
> way to do it].
> Please correct me if I am wrong.
> 
> thanks,
> Shakeel
> 

The method in upstream Linux is more complete than this: if the domain
is started with xenstore information in the shared page, it will use it
(which happens when a domain builder is used to launch dom0 and xenstore
stub domains at the same time); otherwise, there is an ioctl that can
be used in dom0 to tell it about a newly launched xenstore stub domain.
The combination of these eliminates any need for an is_xenstore_domain()
function.

-- 
Daniel De Graaf
National Security Agency

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 18:23:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 18:23:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyoR1-000196-Ge; Tue, 07 Aug 2012 18:23:15 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1SyoR0-00018m-7P
	for xen-devel@lists.xensource.com; Tue, 07 Aug 2012 18:23:14 +0000
Received: from [85.158.138.51:58404] by server-6.bemta-3.messagelabs.com id
	F6/69-02321-11D51205; Tue, 07 Aug 2012 18:23:13 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-12.tower-174.messagelabs.com!1344363789!21362449!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjc1NDQ5\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23449 invoked from network); 7 Aug 2012 18:23:11 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-12.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Aug 2012 18:23:11 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q77IMs0m022138
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 7 Aug 2012 18:22:55 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q77IMsCL029798
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 7 Aug 2012 18:22:54 GMT
Received: from abhmt109.oracle.com (abhmt109.oracle.com [141.146.116.61])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q77IMs7m001974; Tue, 7 Aug 2012 13:22:54 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 07 Aug 2012 11:22:54 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 2E20141F38; Tue,  7 Aug 2012 14:13:29 -0400 (EDT)
Date: Tue, 7 Aug 2012 14:13:29 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20120807181329.GH15053@phenom.dumpdata.com>
References: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
	<1344263246-28036-4-git-send-email-stefano.stabellini@eu.citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1344263246-28036-4-git-send-email-stefano.stabellini@eu.citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, catalin.marinas@arm.com,
	tim@xen.org, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH v2 04/23] xen/arm: sync_bitops
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Aug 06, 2012 at 03:27:07PM +0100, Stefano Stabellini wrote:
> sync_bitops functions are equivalent to the SMP implementation of the
> original functions, independently from CONFIG_SMP being defined.
> 
> We need them because _set_bit etc are not SMP safe if !CONFIG_SMP. But
> under Xen you might be communicating with a completely external entity
> who might be on another CPU (e.g. two uniprocessor guests communicating
> via event channels and grant tables). So we need a variant of the bit
> ops which are SMP safe even on a UP kernel.

Ack from me.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> ---
>  arch/arm/include/asm/sync_bitops.h |   27 +++++++++++++++++++++++++++
>  1 files changed, 27 insertions(+), 0 deletions(-)
>  create mode 100644 arch/arm/include/asm/sync_bitops.h
> 
> diff --git a/arch/arm/include/asm/sync_bitops.h b/arch/arm/include/asm/sync_bitops.h
> new file mode 100644
> index 0000000..63479ee
> --- /dev/null
> +++ b/arch/arm/include/asm/sync_bitops.h
> @@ -0,0 +1,27 @@
> +#ifndef __ASM_SYNC_BITOPS_H__
> +#define __ASM_SYNC_BITOPS_H__
> +
> +#include <asm/bitops.h>
> +#include <asm/system.h>
> +
> +/* sync_bitops functions are equivalent to the SMP implementation of the
> + * original functions, independently from CONFIG_SMP being defined.
> + *
> + * We need them because _set_bit etc are not SMP safe if !CONFIG_SMP. But
> + * under Xen you might be communicating with a completely external entity
> + * who might be on another CPU (e.g. two uniprocessor guests communicating
> + * via event channels and grant tables). So we need a variant of the bit
> + * ops which are SMP safe even on a UP kernel.
> + */
> +
> +#define sync_set_bit(nr, p)		_set_bit(nr, p)
> +#define sync_clear_bit(nr, p)		_clear_bit(nr, p)
> +#define sync_change_bit(nr, p)		_change_bit(nr, p)
> +#define sync_test_and_set_bit(nr, p)	_test_and_set_bit(nr, p)
> +#define sync_test_and_clear_bit(nr, p)	_test_and_clear_bit(nr, p)
> +#define sync_test_and_change_bit(nr, p)	_test_and_change_bit(nr, p)
> +#define sync_test_bit(nr, addr)		test_bit(nr, addr)
> +#define sync_cmpxchg			cmpxchg
> +
> +
> +#endif
> -- 
> 1.7.2.5

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 18:23:26 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 18:23:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyoR1-00018y-55; Tue, 07 Aug 2012 18:23:15 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1SyoQz-00018c-Fs
	for xen-devel@lists.xensource.com; Tue, 07 Aug 2012 18:23:13 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1344363784!10297331!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDY4MzA2Nw==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4225 invoked from network); 7 Aug 2012 18:23:05 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-5.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Aug 2012 18:23:05 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q77IMoFr031406
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 7 Aug 2012 18:22:50 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q77IMmUl006220
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 7 Aug 2012 18:22:48 GMT
Received: from abhmt102.oracle.com (abhmt102.oracle.com [141.146.116.54])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q77IMldx004076; Tue, 7 Aug 2012 13:22:47 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 07 Aug 2012 11:22:47 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id AD77541F38; Tue,  7 Aug 2012 14:13:22 -0400 (EDT)
Date: Tue, 7 Aug 2012 14:13:22 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20120807181322.GG15053@phenom.dumpdata.com>
References: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
	<1344263246-28036-3-git-send-email-stefano.stabellini@eu.citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1344263246-28036-3-git-send-email-stefano.stabellini@eu.citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, catalin.marinas@arm.com,
	tim@xen.org, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH v2 03/23] xen/arm: page.h definitions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Aug 06, 2012 at 03:27:06PM +0100, Stefano Stabellini wrote:
> ARM Xen guests always use paging in hardware, like PV on HVM guests in
> the X86 world.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

Ack.. with one nitpick

> +/* XXX: this shouldn't be here */

.. but it's here because the frontend drivers are using it (it's pulled in
via headers), even though we won't hit the code path. So for right now,
just punt with this.

> +static inline pte_t *lookup_address(unsigned long address, unsigned int *level)
> +{
> +	BUG();
> +	return NULL;
> +}
> +
> +static inline int m2p_add_override(unsigned long mfn, struct page *page,
> +		struct gnttab_map_grant_ref *kmap_op)
> +{
> +	return 0;
> +}
> +
> +static inline int m2p_remove_override(struct page *page, bool clear_pte)
> +{
> +	return 0;
> +}
> +
> +static inline bool set_phys_to_machine(unsigned long pfn, unsigned long mfn)
> +{
> +	BUG();
> +	return false;
> +}
> +#endif /* _ASM_ARM_XEN_PAGE_H */
> -- 
> 1.7.2.5

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 18:23:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 18:23:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyoR6-0001AF-3A; Tue, 07 Aug 2012 18:23:20 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1SyoR4-00019h-Dh
	for xen-devel@lists.xensource.com; Tue, 07 Aug 2012 18:23:18 +0000
Received: from [85.158.143.35:48855] by server-1.bemta-4.messagelabs.com id
	A5/C2-24392-51D51205; Tue, 07 Aug 2012 18:23:17 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1344363796!13927266!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjc1NDQ5\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11265 invoked from network); 7 Aug 2012 18:23:17 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-7.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Aug 2012 18:23:17 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q77IN4nS022519
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 7 Aug 2012 18:23:04 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q77IN3Qf000083
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 7 Aug 2012 18:23:03 GMT
Received: from abhmt116.oracle.com (abhmt116.oracle.com [141.146.116.68])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q77IN3mq023196; Tue, 7 Aug 2012 13:23:03 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 07 Aug 2012 11:23:03 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id E1D6C41F38; Tue,  7 Aug 2012 14:13:37 -0400 (EDT)
Date: Tue, 7 Aug 2012 14:13:37 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20120807181337.GI15053@phenom.dumpdata.com>
References: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
	<1344263246-28036-5-git-send-email-stefano.stabellini@eu.citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1344263246-28036-5-git-send-email-stefano.stabellini@eu.citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, catalin.marinas@arm.com,
	tim@xen.org, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH v2 05/23] xen/arm: empty implementation of
 grant_table arch specific functions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Aug 06, 2012 at 03:27:08PM +0100, Stefano Stabellini wrote:
> Changes in v2:
> 
> - return -ENOSYS rather than -1.

Ack.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> ---
>  arch/arm/xen/Makefile      |    2 +-
>  arch/arm/xen/grant-table.c |   53 ++++++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 54 insertions(+), 1 deletions(-)
>  create mode 100644 arch/arm/xen/grant-table.c
> 
> diff --git a/arch/arm/xen/Makefile b/arch/arm/xen/Makefile
> index b9d6acc..4384103 100644
> --- a/arch/arm/xen/Makefile
> +++ b/arch/arm/xen/Makefile
> @@ -1 +1 @@
> -obj-y		:= enlighten.o hypercall.o
> +obj-y		:= enlighten.o hypercall.o grant-table.o
> diff --git a/arch/arm/xen/grant-table.c b/arch/arm/xen/grant-table.c
> new file mode 100644
> index 0000000..dbd1330
> --- /dev/null
> +++ b/arch/arm/xen/grant-table.c
> @@ -0,0 +1,53 @@
> +/******************************************************************************
> + * grant_table.c
> + * ARM specific part
> + *
> + * Granting foreign access to our memory reservation.
> + *
> + * This program is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU General Public License version 2
> + * as published by the Free Software Foundation; or, when distributed
> + * separately from the Linux kernel or incorporated into other
> + * software packages, subject to the following license:
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a copy
> + * of this source file (the "Software"), to deal in the Software without
> + * restriction, including without limitation the rights to use, copy, modify,
> + * merge, publish, distribute, sublicense, and/or sell copies of the Software,
> + * and to permit persons to whom the Software is furnished to do so, subject to
> + * the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be included in
> + * all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
> + * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
> + * IN THE SOFTWARE.
> + */
> +
> +#include <xen/interface/xen.h>
> +#include <xen/page.h>
> +#include <xen/grant_table.h>
> +
> +int arch_gnttab_map_shared(unsigned long *frames, unsigned long nr_gframes,
> +			   unsigned long max_nr_gframes,
> +			   void **__shared)
> +{
> +	return -ENOSYS;
> +}
> +
> +void arch_gnttab_unmap(void *shared, unsigned long nr_gframes)
> +{
> +	return;
> +}
> +
> +int arch_gnttab_map_status(uint64_t *frames, unsigned long nr_gframes,
> +			   unsigned long max_nr_gframes,
> +			   grant_status_t **__shared)
> +{
> +	return -ENOSYS;
> +}
> -- 
> 1.7.2.5

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 18:24:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 18:24:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyoSQ-0001RE-Ju; Tue, 07 Aug 2012 18:24:42 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1SyoSP-0001R3-EL
	for xen-devel@lists.xensource.com; Tue, 07 Aug 2012 18:24:41 +0000
Received: from [85.158.139.83:21717] by server-4.bemta-5.messagelabs.com id
	83/9B-32474-86D51205; Tue, 07 Aug 2012 18:24:40 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-9.tower-182.messagelabs.com!1344363878!30034916!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDY4MzA2Nw==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21532 invoked from network); 7 Aug 2012 18:24:40 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-9.tower-182.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Aug 2012 18:24:40 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q77IONRI000561
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 7 Aug 2012 18:24:24 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q77IONTj015181
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 7 Aug 2012 18:24:23 GMT
Received: from abhmt102.oracle.com (abhmt102.oracle.com [141.146.116.54])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q77IONqB024031; Tue, 7 Aug 2012 13:24:23 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 07 Aug 2012 11:24:22 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id D6EE441F38; Tue,  7 Aug 2012 14:14:57 -0400 (EDT)
Date: Tue, 7 Aug 2012 14:14:57 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20120807181457.GJ15053@phenom.dumpdata.com>
References: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
	<1344263246-28036-6-git-send-email-stefano.stabellini@eu.citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1344263246-28036-6-git-send-email-stefano.stabellini@eu.citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, catalin.marinas@arm.com,
	tim@xen.org, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH v2 06/23] xen: missing includes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Aug 06, 2012 at 03:27:09PM +0100, Stefano Stabellini wrote:
> Changes in v2:
> - remove pvclock hack;
> - remove include linux/types.h from xen/interface/xen.h.

I think I can take this into my tree by itself now, right? Or do
you want to keep it in your patchqueue? If so, Ack from me.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> ---
>  arch/x86/include/asm/xen/interface.h       |    2 ++
>  drivers/tty/hvc/hvc_xen.c                  |    2 ++
>  drivers/xen/grant-table.c                  |    1 +
>  drivers/xen/xenbus/xenbus_probe_frontend.c |    1 +
>  include/xen/interface/xen.h                |    1 -
>  include/xen/privcmd.h                      |    1 +
>  6 files changed, 7 insertions(+), 1 deletions(-)
> 
> diff --git a/arch/x86/include/asm/xen/interface.h b/arch/x86/include/asm/xen/interface.h
> index cbf0c9d..a93db16 100644
> --- a/arch/x86/include/asm/xen/interface.h
> +++ b/arch/x86/include/asm/xen/interface.h
> @@ -121,6 +121,8 @@ struct arch_shared_info {
>  #include "interface_64.h"
>  #endif
>  
> +#include <asm/pvclock-abi.h>
> +
>  #ifndef __ASSEMBLY__
>  /*
>   * The following is all CPU context. Note that the fpu_ctxt block is filled
> diff --git a/drivers/tty/hvc/hvc_xen.c b/drivers/tty/hvc/hvc_xen.c
> index 944eaeb..dc07f56 100644
> --- a/drivers/tty/hvc/hvc_xen.c
> +++ b/drivers/tty/hvc/hvc_xen.c
> @@ -21,6 +21,7 @@
>  #include <linux/console.h>
>  #include <linux/delay.h>
>  #include <linux/err.h>
> +#include <linux/irq.h>
>  #include <linux/init.h>
>  #include <linux/types.h>
>  #include <linux/list.h>
> @@ -35,6 +36,7 @@
>  #include <xen/page.h>
>  #include <xen/events.h>
>  #include <xen/interface/io/console.h>
> +#include <xen/interface/sched.h>
>  #include <xen/hvc-console.h>
>  #include <xen/xenbus.h>
>  
> diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
> index 0bfc1ef..1d0d95e 100644
> --- a/drivers/xen/grant-table.c
> +++ b/drivers/xen/grant-table.c
> @@ -47,6 +47,7 @@
>  #include <xen/interface/memory.h>
>  #include <xen/hvc-console.h>
>  #include <asm/xen/hypercall.h>
> +#include <asm/xen/interface.h>
>  
>  #include <asm/pgtable.h>
>  #include <asm/sync_bitops.h>
> diff --git a/drivers/xen/xenbus/xenbus_probe_frontend.c b/drivers/xen/xenbus/xenbus_probe_frontend.c
> index a31b54d..3159a37 100644
> --- a/drivers/xen/xenbus/xenbus_probe_frontend.c
> +++ b/drivers/xen/xenbus/xenbus_probe_frontend.c
> @@ -21,6 +21,7 @@
>  #include <xen/xenbus.h>
>  #include <xen/events.h>
>  #include <xen/page.h>
> +#include <xen/xen.h>
>  
>  #include <xen/platform_pci.h>
>  
> diff --git a/include/xen/interface/xen.h b/include/xen/interface/xen.h
> index a890804..3871e47 100644
> --- a/include/xen/interface/xen.h
> +++ b/include/xen/interface/xen.h
> @@ -10,7 +10,6 @@
>  #define __XEN_PUBLIC_XEN_H__
>  
>  #include <asm/xen/interface.h>
> -#include <asm/pvclock-abi.h>
>  
>  /*
>   * XEN "SYSTEM CALLS" (a.k.a. HYPERCALLS).
> diff --git a/include/xen/privcmd.h b/include/xen/privcmd.h
> index 17857fb..4d58881 100644
> --- a/include/xen/privcmd.h
> +++ b/include/xen/privcmd.h
> @@ -35,6 +35,7 @@
>  
>  #include <linux/types.h>
>  #include <linux/compiler.h>
> +#include <xen/interface/xen.h>
>  
>  typedef unsigned long xen_pfn_t;
>  
> -- 
> 1.7.2.5

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	Ian.Campbell@citrix.com, arnd@arndb.de, catalin.marinas@arm.com,
	tim@xen.org, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH v2 06/23] xen: missing includes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Aug 06, 2012 at 03:27:09PM +0100, Stefano Stabellini wrote:
> Changes in v2:
> - remove pvclock hack;
> - remove include linux/types.h from xen/interface/xen.h.

I think I can take this into my tree now by itself, right? Or do
you want to keep it in your patchqueue? If so, Ack from me.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> ---
>  arch/x86/include/asm/xen/interface.h       |    2 ++
>  drivers/tty/hvc/hvc_xen.c                  |    2 ++
>  drivers/xen/grant-table.c                  |    1 +
>  drivers/xen/xenbus/xenbus_probe_frontend.c |    1 +
>  include/xen/interface/xen.h                |    1 -
>  include/xen/privcmd.h                      |    1 +
>  6 files changed, 7 insertions(+), 1 deletions(-)
> 
> diff --git a/arch/x86/include/asm/xen/interface.h b/arch/x86/include/asm/xen/interface.h
> index cbf0c9d..a93db16 100644
> --- a/arch/x86/include/asm/xen/interface.h
> +++ b/arch/x86/include/asm/xen/interface.h
> @@ -121,6 +121,8 @@ struct arch_shared_info {
>  #include "interface_64.h"
>  #endif
>  
> +#include <asm/pvclock-abi.h>
> +
>  #ifndef __ASSEMBLY__
>  /*
>   * The following is all CPU context. Note that the fpu_ctxt block is filled
> diff --git a/drivers/tty/hvc/hvc_xen.c b/drivers/tty/hvc/hvc_xen.c
> index 944eaeb..dc07f56 100644
> --- a/drivers/tty/hvc/hvc_xen.c
> +++ b/drivers/tty/hvc/hvc_xen.c
> @@ -21,6 +21,7 @@
>  #include <linux/console.h>
>  #include <linux/delay.h>
>  #include <linux/err.h>
> +#include <linux/irq.h>
>  #include <linux/init.h>
>  #include <linux/types.h>
>  #include <linux/list.h>
> @@ -35,6 +36,7 @@
>  #include <xen/page.h>
>  #include <xen/events.h>
>  #include <xen/interface/io/console.h>
> +#include <xen/interface/sched.h>
>  #include <xen/hvc-console.h>
>  #include <xen/xenbus.h>
>  
> diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
> index 0bfc1ef..1d0d95e 100644
> --- a/drivers/xen/grant-table.c
> +++ b/drivers/xen/grant-table.c
> @@ -47,6 +47,7 @@
>  #include <xen/interface/memory.h>
>  #include <xen/hvc-console.h>
>  #include <asm/xen/hypercall.h>
> +#include <asm/xen/interface.h>
>  
>  #include <asm/pgtable.h>
>  #include <asm/sync_bitops.h>
> diff --git a/drivers/xen/xenbus/xenbus_probe_frontend.c b/drivers/xen/xenbus/xenbus_probe_frontend.c
> index a31b54d..3159a37 100644
> --- a/drivers/xen/xenbus/xenbus_probe_frontend.c
> +++ b/drivers/xen/xenbus/xenbus_probe_frontend.c
> @@ -21,6 +21,7 @@
>  #include <xen/xenbus.h>
>  #include <xen/events.h>
>  #include <xen/page.h>
> +#include <xen/xen.h>
>  
>  #include <xen/platform_pci.h>
>  
> diff --git a/include/xen/interface/xen.h b/include/xen/interface/xen.h
> index a890804..3871e47 100644
> --- a/include/xen/interface/xen.h
> +++ b/include/xen/interface/xen.h
> @@ -10,7 +10,6 @@
>  #define __XEN_PUBLIC_XEN_H__
>  
>  #include <asm/xen/interface.h>
> -#include <asm/pvclock-abi.h>
>  
>  /*
>   * XEN "SYSTEM CALLS" (a.k.a. HYPERCALLS).
> diff --git a/include/xen/privcmd.h b/include/xen/privcmd.h
> index 17857fb..4d58881 100644
> --- a/include/xen/privcmd.h
> +++ b/include/xen/privcmd.h
> @@ -35,6 +35,7 @@
>  
>  #include <linux/types.h>
>  #include <linux/compiler.h>
> +#include <xen/interface/xen.h>
>  
>  typedef unsigned long xen_pfn_t;
>  
> -- 
> 1.7.2.5

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 18:27:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 18:27:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyoVF-0001k1-6U; Tue, 07 Aug 2012 18:27:37 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SyoVD-0001jo-H9
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 18:27:35 +0000
Received: from [85.158.143.35:15623] by server-3.bemta-4.messagelabs.com id
	8A/8D-01511-61E51205; Tue, 07 Aug 2012 18:27:34 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1344364054!17887804!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgzNjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25766 invoked from network); 7 Aug 2012 18:27:34 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 18:27:34 -0000
X-IronPort-AV: E=Sophos;i="4.77,728,1336348800"; d="scan'208";a="13892684"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Aug 2012 18:27:33 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 7 Aug 2012 19:27:33 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1SyoVB-0001iD-Hj; Tue, 07 Aug 2012 18:27:33 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1SyoVB-0000qx-Dh;
	Tue, 07 Aug 2012 19:27:33 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20513.24085.195729.702722@mariner.uk.xensource.com>
Date: Tue, 7 Aug 2012 19:27:33 +0100
To: Andrew Cooper <andrew.cooper3@citrix.com>
Newsgroups: chiark.mail.xen.devel
In-Reply-To: <501973F5.4030804@citrix.com>
References: <501973F5.4030804@citrix.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: Ian Campbell <Ian.Campbell@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] tools/makefile: Add build target
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Andrew Cooper writes ("[Xen-devel] tools/makefile: Add build target"):
> The alternative is to change the root makefile to call "$(MAKE) -C
> tools/ all" instead, but this way feels neater to me.

Yes.

> tools/makefile: Add build target
> 
> The root Makefile has a build target which calls "$(MAKE) build" in each of xen/
> tools/ stubdom/ and docs/, which fails because of the tools/ Makefile.

That this didn't work is clearly a bug.

But when I tried it my build did this:

  CC    i386-stubdom/eeprom93xx.o
/u/iwj/work/xen-unstable-tools.hg/stubdom/mini-os-x86_32-c/test.o: In function `app_main':
/u/iwj/work/xen-unstable-tools.hg/extras/mini-os/test.c:441: multiple definition of `app_main'
/u/iwj/work/xen-unstable-tools.hg/stubdom/mini-os-x86_32-c/main.o:/u/iwj/work/xen-unstable-tools.hg/extras/mini-os/main.c:187: first defined here
  CC    i386-stubdom/eepro100.o
make[2]: *** [/u/iwj/work/xen-unstable-tools.hg/stubdom/mini-os-x86_32-c/mini-os] Error 1
make[2]: Leaving directory `/u/iwj/work/xen-unstable-tools.hg/extras/mini-os'
make[1]: *** [c-stubdom] Error 2
make[1]: *** Waiting for unfinished jobs....

That's the result of:

 (make -j4 build && echo ok.) 2>&1 | tee ../log

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 18:27:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 18:27:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyoVG-0001kH-JA; Tue, 07 Aug 2012 18:27:38 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1SyoVF-0001jx-3k
	for xen-devel@lists.xensource.com; Tue, 07 Aug 2012 18:27:37 +0000
Received: from [85.158.138.51:13779] by server-7.bemta-3.messagelabs.com id
	24/18-04660-81E51205; Tue, 07 Aug 2012 18:27:36 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-4.tower-174.messagelabs.com!1344364053!30853598!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDY4MzA2Nw==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12612 invoked from network); 7 Aug 2012 18:27:35 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-4.tower-174.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Aug 2012 18:27:35 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q77IRI5S003647
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 7 Aug 2012 18:27:19 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q77IRHQW014382
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 7 Aug 2012 18:27:18 GMT
Received: from abhmt118.oracle.com (abhmt118.oracle.com [141.146.116.70])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q77IRHbe025960; Tue, 7 Aug 2012 13:27:17 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 07 Aug 2012 11:27:17 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 72B7141F37; Tue,  7 Aug 2012 14:17:52 -0400 (EDT)
Date: Tue, 7 Aug 2012 14:17:52 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20120807181752.GK15053@phenom.dumpdata.com>
References: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
	<1344263246-28036-7-git-send-email-stefano.stabellini@eu.citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1344263246-28036-7-git-send-email-stefano.stabellini@eu.citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, catalin.marinas@arm.com,
	tim@xen.org, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH v2 07/23] xen/arm: Xen detection and
 shared_info page mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Aug 06, 2012 at 03:27:10PM +0100, Stefano Stabellini wrote:
> Check for a "/xen" node in the device tree, if it is present set
> xen_domain_type to XEN_HVM_DOMAIN and continue initialization.
> 
> Map the real shared info page using XENMEM_add_to_physmap with
> XENMAPSPACE_shared_info.
> 
> Changes in v2:
> 
> - replace pr_info with pr_debug.

I second what David mentioned. The other thing is that you are going to
need to rebase this on top of v3.5-rc1, as Olaf's patches have changed
the shared_info_page a bit.

> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> ---
>  arch/arm/xen/enlighten.c |   52 ++++++++++++++++++++++++++++++++++++++++++++++
>  1 files changed, 52 insertions(+), 0 deletions(-)
> 
> diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
> index d27c2a6..102d823 100644
> --- a/arch/arm/xen/enlighten.c
> +++ b/arch/arm/xen/enlighten.c
> @@ -5,6 +5,9 @@
>  #include <asm/xen/hypervisor.h>
>  #include <asm/xen/hypercall.h>
>  #include <linux/module.h>
> +#include <linux/of.h>
> +#include <linux/of_irq.h>
> +#include <linux/of_address.h>
>  
>  struct start_info _xen_start_info;
>  struct start_info *xen_start_info = &_xen_start_info;
> @@ -33,3 +36,52 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
>  	return -ENOSYS;
>  }
>  EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_range);
> +
> +/*
> + * == Xen Device Tree format ==
> + * - /xen node;
> + * - compatible "arm,xen";
> + * - one interrupt for Xen event notifications;
> + * - one memory region to map the grant_table.
> + */
> +static int __init xen_guest_init(void)
> +{
> +	struct xen_add_to_physmap xatp;
> +	static struct shared_info *shared_info_page = 0;
> +	struct device_node *node;
> +
> +	node = of_find_compatible_node(NULL, NULL, "arm,xen");
> +	if (!node) {
> +		pr_debug("No Xen support\n");
> +		return 0;
> +	}
> +	xen_domain_type = XEN_HVM_DOMAIN;
> +
> +	if (!shared_info_page)
> +		shared_info_page = (struct shared_info *)
> +			get_zeroed_page(GFP_KERNEL);
> +	if (!shared_info_page) {
> +		pr_err("not enough memory\n");
> +		return -ENOMEM;
> +	}
> +	xatp.domid = DOMID_SELF;
> +	xatp.idx = 0;
> +	xatp.space = XENMAPSPACE_shared_info;
> +	xatp.gpfn = __pa(shared_info_page) >> PAGE_SHIFT;
> +	if (HYPERVISOR_memory_op(XENMEM_add_to_physmap, &xatp))
> +		BUG();
> +
> +	HYPERVISOR_shared_info = (struct shared_info *)shared_info_page;
> +
> +	/* xen_vcpu is a pointer to the vcpu_info struct in the shared_info
> +	 * page, we use it in the event channel upcall and in some pvclock
> +	 * related functions. We don't need the vcpu_info placement
> +	 * optimizations because we don't use any pv_mmu or pv_irq op on
> +	 * HVM.
> +	 * The shared info contains exactly 1 CPU (the boot CPU). The guest
> +	 * is required to use VCPUOP_register_vcpu_info to place vcpu info
> +	 * for secondary CPUs as they are brought up. */
> +	per_cpu(xen_vcpu, 0) = &HYPERVISOR_shared_info->vcpu_info[0];
> +	return 0;
> +}
> +core_initcall(xen_guest_init);
> -- 
> 1.7.2.5

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 18:28:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 18:28:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyoVi-0001qz-15; Tue, 07 Aug 2012 18:28:06 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1SyoVh-0001qY-5V
	for xen-devel@lists.xensource.com; Tue, 07 Aug 2012 18:28:05 +0000
Received: from [85.158.143.35:61094] by server-2.bemta-4.messagelabs.com id
	9C/BF-17938-43E51205; Tue, 07 Aug 2012 18:28:04 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1344364081!13747125!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDY4MzA2Nw==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4220 invoked from network); 7 Aug 2012 18:28:02 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-11.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Aug 2012 18:28:02 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q77IRmCi004045
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 7 Aug 2012 18:27:49 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q77IRkMf015412
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 7 Aug 2012 18:27:47 GMT
Received: from abhmt106.oracle.com (abhmt106.oracle.com [141.146.116.58])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q77IRk2g005288; Tue, 7 Aug 2012 13:27:46 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 07 Aug 2012 11:27:46 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 2BC8B41F38; Tue,  7 Aug 2012 14:18:21 -0400 (EDT)
Date: Tue, 7 Aug 2012 14:18:21 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20120807181821.GL15053@phenom.dumpdata.com>
References: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
	<1344263246-28036-8-git-send-email-stefano.stabellini@eu.citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1344263246-28036-8-git-send-email-stefano.stabellini@eu.citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, catalin.marinas@arm.com,
	tim@xen.org, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH v2 08/23] xen/arm: Introduce xen_pfn_t for
 pfn and mfn types
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Aug 06, 2012 at 03:27:11PM +0100, Stefano Stabellini wrote:
> All the original Xen headers have xen_pfn_t as mfn and pfn type, however
> when they have been imported in Linux, xen_pfn_t has been replaced with
> unsigned long. That might work for x86 and ia64 but it does not for arm.
> Bring back xen_pfn_t and let each architecture define xen_pfn_t as they
> see fit.

Ack.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> ---
>  arch/arm/include/asm/xen/interface.h  |    4 ++++
>  arch/ia64/include/asm/xen/interface.h |    5 ++++-
>  arch/x86/include/asm/xen/interface.h  |    5 +++++
>  include/xen/interface/grant_table.h   |    4 ++--
>  include/xen/interface/memory.h        |    6 +++---
>  include/xen/interface/platform.h      |    4 ++--
>  include/xen/interface/xen.h           |    6 +++---
>  include/xen/privcmd.h                 |    2 --
>  8 files changed, 23 insertions(+), 13 deletions(-)
> 
> diff --git a/arch/arm/include/asm/xen/interface.h b/arch/arm/include/asm/xen/interface.h
> index ab99270..f904dcc 100644
> --- a/arch/arm/include/asm/xen/interface.h
> +++ b/arch/arm/include/asm/xen/interface.h
> @@ -25,6 +25,9 @@
>  	} while (0)
>  
>  #ifndef __ASSEMBLY__
> +/* Explicitly size integers that represent pfns in the interface with
> + * Xen so that we can have one ABI that works for 32 and 64 bit guests. */
> +typedef uint64_t xen_pfn_t;
>  /* Guest handles for primitive C types. */
>  __DEFINE_GUEST_HANDLE(uchar, unsigned char);
>  __DEFINE_GUEST_HANDLE(uint,  unsigned int);
> @@ -35,6 +38,7 @@ DEFINE_GUEST_HANDLE(long);
>  DEFINE_GUEST_HANDLE(void);
>  DEFINE_GUEST_HANDLE(uint64_t);
>  DEFINE_GUEST_HANDLE(uint32_t);
> +DEFINE_GUEST_HANDLE(xen_pfn_t);
>  
>  /* Maximum number of virtual CPUs in multi-processor guests. */
>  #define MAX_VIRT_CPUS 1
> diff --git a/arch/ia64/include/asm/xen/interface.h b/arch/ia64/include/asm/xen/interface.h
> index 09d5f7f..686464e 100644
> --- a/arch/ia64/include/asm/xen/interface.h
> +++ b/arch/ia64/include/asm/xen/interface.h
> @@ -67,6 +67,10 @@
>  #define set_xen_guest_handle(hnd, val)	do { (hnd).p = val; } while (0)
>  
>  #ifndef __ASSEMBLY__
> +/* Explicitly size integers that represent pfns in the public interface
> + * with Xen so that we could have one ABI that works for 32 and 64 bit
> + * guests. */
> +typedef unsigned long xen_pfn_t;
>  /* Guest handles for primitive C types. */
>  __DEFINE_GUEST_HANDLE(uchar, unsigned char);
>  __DEFINE_GUEST_HANDLE(uint, unsigned int);
> @@ -79,7 +83,6 @@ DEFINE_GUEST_HANDLE(void);
>  DEFINE_GUEST_HANDLE(uint64_t);
>  DEFINE_GUEST_HANDLE(uint32_t);
>  
> -typedef unsigned long xen_pfn_t;
>  DEFINE_GUEST_HANDLE(xen_pfn_t);
>  #define PRI_xen_pfn	"lx"
>  #endif
> diff --git a/arch/x86/include/asm/xen/interface.h b/arch/x86/include/asm/xen/interface.h
> index a93db16..555f94d 100644
> --- a/arch/x86/include/asm/xen/interface.h
> +++ b/arch/x86/include/asm/xen/interface.h
> @@ -47,6 +47,10 @@
>  #endif
>  
>  #ifndef __ASSEMBLY__
> +/* Explicitly size integers that represent pfns in the public interface
> + * with Xen so that on ARM we can have one ABI that works for 32 and 64
> + * bit guests. */
> +typedef unsigned long xen_pfn_t;
>  /* Guest handles for primitive C types. */
>  __DEFINE_GUEST_HANDLE(uchar, unsigned char);
>  __DEFINE_GUEST_HANDLE(uint,  unsigned int);
> @@ -57,6 +61,7 @@ DEFINE_GUEST_HANDLE(long);
>  DEFINE_GUEST_HANDLE(void);
>  DEFINE_GUEST_HANDLE(uint64_t);
>  DEFINE_GUEST_HANDLE(uint32_t);
> +DEFINE_GUEST_HANDLE(xen_pfn_t);
>  #endif
>  
>  #ifndef HYPERVISOR_VIRT_START
> diff --git a/include/xen/interface/grant_table.h b/include/xen/interface/grant_table.h
> index a17d844..7da811b 100644
> --- a/include/xen/interface/grant_table.h
> +++ b/include/xen/interface/grant_table.h
> @@ -338,7 +338,7 @@ DEFINE_GUEST_HANDLE_STRUCT(gnttab_dump_table);
>  #define GNTTABOP_transfer                4
>  struct gnttab_transfer {
>      /* IN parameters. */
> -    unsigned long mfn;
> +    xen_pfn_t mfn;
>      domid_t       domid;
>      grant_ref_t   ref;
>      /* OUT parameters. */
> @@ -375,7 +375,7 @@ struct gnttab_copy {
>  	struct {
>  		union {
>  			grant_ref_t ref;
> -			unsigned long   gmfn;
> +			xen_pfn_t   gmfn;
>  		} u;
>  		domid_t  domid;
>  		uint16_t offset;
> diff --git a/include/xen/interface/memory.h b/include/xen/interface/memory.h
> index eac3ce1..abbbff0 100644
> --- a/include/xen/interface/memory.h
> +++ b/include/xen/interface/memory.h
> @@ -31,7 +31,7 @@ struct xen_memory_reservation {
>       *   OUT: GMFN bases of extents that were allocated
>       *   (NB. This command also updates the mach_to_phys translation table)
>       */
> -    GUEST_HANDLE(ulong) extent_start;
> +    GUEST_HANDLE(xen_pfn_t) extent_start;
>  
>      /* Number of extents, and size/alignment of each (2^extent_order pages). */
>      unsigned long  nr_extents;
> @@ -130,7 +130,7 @@ struct xen_machphys_mfn_list {
>       * any large discontiguities in the machine address space, 2MB gaps in
>       * the machphys table will be represented by an MFN base of zero.
>       */
> -    GUEST_HANDLE(ulong) extent_start;
> +    GUEST_HANDLE(xen_pfn_t) extent_start;
>  
>      /*
>       * Number of extents written to the above array. This will be smaller
> @@ -172,7 +172,7 @@ struct xen_add_to_physmap {
>      unsigned long idx;
>  
>      /* GPFN where the source mapping page should appear. */
> -    unsigned long gpfn;
> +    xen_pfn_t gpfn;
>  };
>  DEFINE_GUEST_HANDLE_STRUCT(xen_add_to_physmap);
>  
> diff --git a/include/xen/interface/platform.h b/include/xen/interface/platform.h
> index 486653f..0bea470 100644
> --- a/include/xen/interface/platform.h
> +++ b/include/xen/interface/platform.h
> @@ -54,7 +54,7 @@ DEFINE_GUEST_HANDLE_STRUCT(xenpf_settime_t);
>  #define XENPF_add_memtype         31
>  struct xenpf_add_memtype {
>  	/* IN variables. */
> -	unsigned long mfn;
> +	xen_pfn_t mfn;
>  	uint64_t nr_mfns;
>  	uint32_t type;
>  	/* OUT variables. */
> @@ -84,7 +84,7 @@ struct xenpf_read_memtype {
>  	/* IN variables. */
>  	uint32_t reg;
>  	/* OUT variables. */
> -	unsigned long mfn;
> +	xen_pfn_t mfn;
>  	uint64_t nr_mfns;
>  	uint32_t type;
>  };
> diff --git a/include/xen/interface/xen.h b/include/xen/interface/xen.h
> index 3871e47..42834a3 100644
> --- a/include/xen/interface/xen.h
> +++ b/include/xen/interface/xen.h
> @@ -188,7 +188,7 @@ struct mmuext_op {
>  	unsigned int cmd;
>  	union {
>  		/* [UN]PIN_TABLE, NEW_BASEPTR, NEW_USER_BASEPTR */
> -		unsigned long mfn;
> +		xen_pfn_t mfn;
>  		/* INVLPG_LOCAL, INVLPG_ALL, SET_LDT */
>  		unsigned long linear_addr;
>  	} arg1;
> @@ -428,11 +428,11 @@ struct start_info {
>  	unsigned long nr_pages;     /* Total pages allocated to this domain.  */
>  	unsigned long shared_info;  /* MACHINE address of shared info struct. */
>  	uint32_t flags;             /* SIF_xxx flags.                         */
> -	unsigned long store_mfn;    /* MACHINE page number of shared page.    */
> +	xen_pfn_t store_mfn;        /* MACHINE page number of shared page.    */
>  	uint32_t store_evtchn;      /* Event channel for store communication. */
>  	union {
>  		struct {
> -			unsigned long mfn;  /* MACHINE page number of console page.   */
> +			xen_pfn_t mfn;      /* MACHINE page number of console page.   */
>  			uint32_t  evtchn;   /* Event channel for console page.        */
>  		} domU;
>  		struct {
> diff --git a/include/xen/privcmd.h b/include/xen/privcmd.h
> index 4d58881..45c1aa1 100644
> --- a/include/xen/privcmd.h
> +++ b/include/xen/privcmd.h
> @@ -37,8 +37,6 @@
>  #include <linux/compiler.h>
>  #include <xen/interface/xen.h>
>  
> -typedef unsigned long xen_pfn_t;
> -
>  struct privcmd_hypercall {
>  	__u64 op;
>  	__u64 arg[5];
> -- 
> 1.7.2.5

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 18:28:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 18:28:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyoW1-0001vq-LG; Tue, 07 Aug 2012 18:28:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1SyoVz-0001vP-Hb
	for xen-devel@lists.xensource.com; Tue, 07 Aug 2012 18:28:23 +0000
Received: from [85.158.143.35:63694] by server-2.bemta-4.messagelabs.com id
	63/FF-17938-64E51205; Tue, 07 Aug 2012 18:28:22 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1344364100!19133652!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDY4MzA2Nw==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6564 invoked from network); 7 Aug 2012 18:28:22 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-6.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Aug 2012 18:28:22 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q77IS61G004526
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 7 Aug 2012 18:28:07 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q77IS5JL016304
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 7 Aug 2012 18:28:06 GMT
Received: from abhmt115.oracle.com (abhmt115.oracle.com [141.146.116.67])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q77IS5E6005616; Tue, 7 Aug 2012 13:28:05 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 07 Aug 2012 11:28:05 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 049E441F38; Tue,  7 Aug 2012 14:18:40 -0400 (EDT)
Date: Tue, 7 Aug 2012 14:18:39 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20120807181839.GM15053@phenom.dumpdata.com>
References: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
	<1344263246-28036-9-git-send-email-stefano.stabellini@eu.citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1344263246-28036-9-git-send-email-stefano.stabellini@eu.citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, catalin.marinas@arm.com,
	tim@xen.org, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH v2 09/23] xen/arm: Introduce xen_ulong_t for
	unsigned long
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Aug 06, 2012 at 03:27:12PM +0100, Stefano Stabellini wrote:
> All the original Xen headers have xen_ulong_t as unsigned long type, however
> when they have been imported in Linux, xen_ulong_t has been replaced with
> unsigned long. That might work for x86 and ia64 but it does not for arm.
> Bring back xen_ulong_t and let each architecture define xen_ulong_t as they
> see fit.
> 
> Also explicitly size pointers (__DEFINE_GUEST_HANDLE) to 64 bit.

Looks ok to me.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> ---
>  arch/arm/include/asm/xen/interface.h  |    8 ++++++--
>  arch/ia64/include/asm/xen/interface.h |    1 +
>  arch/x86/include/asm/xen/interface.h  |    1 +
>  include/xen/interface/memory.h        |   12 ++++++------
>  include/xen/interface/physdev.h       |    4 ++--
>  include/xen/interface/version.h       |    2 +-
>  include/xen/interface/xen.h           |    6 +++---
>  7 files changed, 20 insertions(+), 14 deletions(-)
> 
> diff --git a/arch/arm/include/asm/xen/interface.h b/arch/arm/include/asm/xen/interface.h
> index f904dcc..1d3030c 100644
> --- a/arch/arm/include/asm/xen/interface.h
> +++ b/arch/arm/include/asm/xen/interface.h
> @@ -9,8 +9,11 @@
>  
>  #include <linux/types.h>
>  
> +#define uint64_aligned_t uint64_t __attribute__((aligned(8)))
> +
>  #define __DEFINE_GUEST_HANDLE(name, type) \
> -	typedef type * __guest_handle_ ## name
> +	typedef struct { union { type *p; uint64_aligned_t q; }; }  \
> +        __guest_handle_ ## name
>  
>  #define DEFINE_GUEST_HANDLE_STRUCT(name) \
>  	__DEFINE_GUEST_HANDLE(name, struct name)
> @@ -21,13 +24,14 @@
>  	do {						\
>  		if (sizeof(hnd) == 8)			\
>  			*(uint64_t *)&(hnd) = 0;	\
> -		(hnd) = val;				\
> +		(hnd).p = val;				\
>  	} while (0)
>  
>  #ifndef __ASSEMBLY__
>  /* Explicitly size integers that represent pfns in the interface with
>   * Xen so that we can have one ABI that works for 32 and 64 bit guests. */
>  typedef uint64_t xen_pfn_t;
> +typedef uint64_t xen_ulong_t;
>  /* Guest handles for primitive C types. */
>  __DEFINE_GUEST_HANDLE(uchar, unsigned char);
>  __DEFINE_GUEST_HANDLE(uint,  unsigned int);
> diff --git a/arch/ia64/include/asm/xen/interface.h b/arch/ia64/include/asm/xen/interface.h
> index 686464e..7c83445 100644
> --- a/arch/ia64/include/asm/xen/interface.h
> +++ b/arch/ia64/include/asm/xen/interface.h
> @@ -71,6 +71,7 @@
>   * with Xen so that we could have one ABI that works for 32 and 64 bit
>   * guests. */
>  typedef unsigned long xen_pfn_t;
> +typedef unsigned long xen_ulong_t;
>  /* Guest handles for primitive C types. */
>  __DEFINE_GUEST_HANDLE(uchar, unsigned char);
>  __DEFINE_GUEST_HANDLE(uint, unsigned int);
> diff --git a/arch/x86/include/asm/xen/interface.h b/arch/x86/include/asm/xen/interface.h
> index 555f94d..28fc621 100644
> --- a/arch/x86/include/asm/xen/interface.h
> +++ b/arch/x86/include/asm/xen/interface.h
> @@ -51,6 +51,7 @@
>   * with Xen so that on ARM we can have one ABI that works for 32 and 64
>   * bit guests. */
>  typedef unsigned long xen_pfn_t;
> +typedef unsigned long xen_ulong_t;
>  /* Guest handles for primitive C types. */
>  __DEFINE_GUEST_HANDLE(uchar, unsigned char);
>  __DEFINE_GUEST_HANDLE(uint,  unsigned int);
> diff --git a/include/xen/interface/memory.h b/include/xen/interface/memory.h
> index abbbff0..b5c3098 100644
> --- a/include/xen/interface/memory.h
> +++ b/include/xen/interface/memory.h
> @@ -34,7 +34,7 @@ struct xen_memory_reservation {
>      GUEST_HANDLE(xen_pfn_t) extent_start;
>  
>      /* Number of extents, and size/alignment of each (2^extent_order pages). */
> -    unsigned long  nr_extents;
> +    xen_ulong_t  nr_extents;
>      unsigned int   extent_order;
>  
>      /*
> @@ -92,7 +92,7 @@ struct xen_memory_exchange {
>       *     command will be non-zero.
>       *  5. THIS FIELD MUST BE INITIALISED TO ZERO BY THE CALLER!
>       */
> -    unsigned long nr_exchanged;
> +    xen_ulong_t nr_exchanged;
>  };
>  
>  DEFINE_GUEST_HANDLE_STRUCT(xen_memory_exchange);
> @@ -148,8 +148,8 @@ DEFINE_GUEST_HANDLE_STRUCT(xen_machphys_mfn_list);
>   */
>  #define XENMEM_machphys_mapping     12
>  struct xen_machphys_mapping {
> -    unsigned long v_start, v_end; /* Start and end virtual addresses.   */
> -    unsigned long max_mfn;        /* Maximum MFN that can be looked up. */
> +    xen_ulong_t v_start, v_end; /* Start and end virtual addresses.   */
> +    xen_ulong_t max_mfn;        /* Maximum MFN that can be looked up. */
>  };
>  DEFINE_GUEST_HANDLE_STRUCT(xen_machphys_mapping_t);
>  
> @@ -169,7 +169,7 @@ struct xen_add_to_physmap {
>      unsigned int space;
>  
>      /* Index into source mapping space. */
> -    unsigned long idx;
> +    xen_ulong_t idx;
>  
>      /* GPFN where the source mapping page should appear. */
>      xen_pfn_t gpfn;
> @@ -186,7 +186,7 @@ struct xen_translate_gpfn_list {
>      domid_t domid;
>  
>      /* Length of list. */
> -    unsigned long nr_gpfns;
> +    xen_ulong_t nr_gpfns;
>  
>      /* List of GPFNs to translate. */
>      GUEST_HANDLE(ulong) gpfn_list;
> diff --git a/include/xen/interface/physdev.h b/include/xen/interface/physdev.h
> index 9ce788d..bc3ae14 100644
> --- a/include/xen/interface/physdev.h
> +++ b/include/xen/interface/physdev.h
> @@ -56,7 +56,7 @@ struct physdev_eoi {
>  #define PHYSDEVOP_pirq_eoi_gmfn_v2       28
>  struct physdev_pirq_eoi_gmfn {
>      /* IN */
> -    unsigned long gmfn;
> +    xen_ulong_t gmfn;
>  };
>  
>  /*
> @@ -108,7 +108,7 @@ struct physdev_set_iobitmap {
>  #define PHYSDEVOP_apic_write		 9
>  struct physdev_apic {
>  	/* IN */
> -	unsigned long apic_physbase;
> +	xen_ulong_t apic_physbase;
>  	uint32_t reg;
>  	/* IN or OUT */
>  	uint32_t value;
> diff --git a/include/xen/interface/version.h b/include/xen/interface/version.h
> index e8b6519..30280c9 100644
> --- a/include/xen/interface/version.h
> +++ b/include/xen/interface/version.h
> @@ -45,7 +45,7 @@ struct xen_changeset_info {
>  
>  #define XENVER_platform_parameters 5
>  struct xen_platform_parameters {
> -    unsigned long virt_start;
> +    xen_ulong_t virt_start;
>  };
>  
>  #define XENVER_get_features 6
> diff --git a/include/xen/interface/xen.h b/include/xen/interface/xen.h
> index 42834a3..ec32115 100644
> --- a/include/xen/interface/xen.h
> +++ b/include/xen/interface/xen.h
> @@ -274,9 +274,9 @@ DEFINE_GUEST_HANDLE_STRUCT(mmu_update);
>   * NB. The fields are natural register size for this architecture.
>   */
>  struct multicall_entry {
> -    unsigned long op;
> -    long result;
> -    unsigned long args[6];
> +    xen_ulong_t op;
> +    xen_ulong_t result;
> +    xen_ulong_t args[6];
>  };
>  DEFINE_GUEST_HANDLE_STRUCT(multicall_entry);
>  
> -- 
> 1.7.2.5

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 18:28:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 18:28:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyoW1-0001vq-LG; Tue, 07 Aug 2012 18:28:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1SyoVz-0001vP-Hb
	for xen-devel@lists.xensource.com; Tue, 07 Aug 2012 18:28:23 +0000
Received: from [85.158.143.35:63694] by server-2.bemta-4.messagelabs.com id
	63/FF-17938-64E51205; Tue, 07 Aug 2012 18:28:22 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1344364100!19133652!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDY4MzA2Nw==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6564 invoked from network); 7 Aug 2012 18:28:22 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-6.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Aug 2012 18:28:22 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q77IS61G004526
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 7 Aug 2012 18:28:07 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q77IS5JL016304
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 7 Aug 2012 18:28:06 GMT
Received: from abhmt115.oracle.com (abhmt115.oracle.com [141.146.116.67])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q77IS5E6005616; Tue, 7 Aug 2012 13:28:05 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 07 Aug 2012 11:28:05 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 049E441F38; Tue,  7 Aug 2012 14:18:40 -0400 (EDT)
Date: Tue, 7 Aug 2012 14:18:39 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20120807181839.GM15053@phenom.dumpdata.com>
References: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
	<1344263246-28036-9-git-send-email-stefano.stabellini@eu.citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1344263246-28036-9-git-send-email-stefano.stabellini@eu.citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, catalin.marinas@arm.com,
	tim@xen.org, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH v2 09/23] xen/arm: Introduce xen_ulong_t for
	unsigned long
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Aug 06, 2012 at 03:27:12PM +0100, Stefano Stabellini wrote:
> All the original Xen headers define xen_ulong_t as an unsigned long type;
> however, when they were imported into Linux, xen_ulong_t was replaced with
> unsigned long. That might work for x86 and ia64, but it does not for ARM.
> Bring back xen_ulong_t and let each architecture define it as it
> sees fit.
> 
> Also explicitly size pointers (__DEFINE_GUEST_HANDLE) to 64 bit.

Looks ok to me.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> ---
>  arch/arm/include/asm/xen/interface.h  |    8 ++++++--
>  arch/ia64/include/asm/xen/interface.h |    1 +
>  arch/x86/include/asm/xen/interface.h  |    1 +
>  include/xen/interface/memory.h        |   12 ++++++------
>  include/xen/interface/physdev.h       |    4 ++--
>  include/xen/interface/version.h       |    2 +-
>  include/xen/interface/xen.h           |    6 +++---
>  7 files changed, 20 insertions(+), 14 deletions(-)
> 
> diff --git a/arch/arm/include/asm/xen/interface.h b/arch/arm/include/asm/xen/interface.h
> index f904dcc..1d3030c 100644
> --- a/arch/arm/include/asm/xen/interface.h
> +++ b/arch/arm/include/asm/xen/interface.h
> @@ -9,8 +9,11 @@
>  
>  #include <linux/types.h>
>  
> +#define uint64_aligned_t uint64_t __attribute__((aligned(8)))
> +
>  #define __DEFINE_GUEST_HANDLE(name, type) \
> -	typedef type * __guest_handle_ ## name
> +	typedef struct { union { type *p; uint64_aligned_t q; }; }  \
> +        __guest_handle_ ## name
>  
>  #define DEFINE_GUEST_HANDLE_STRUCT(name) \
>  	__DEFINE_GUEST_HANDLE(name, struct name)
> @@ -21,13 +24,14 @@
>  	do {						\
>  		if (sizeof(hnd) == 8)			\
>  			*(uint64_t *)&(hnd) = 0;	\
> -		(hnd) = val;				\
> +		(hnd).p = val;				\
>  	} while (0)
>  
>  #ifndef __ASSEMBLY__
>  /* Explicitly size integers that represent pfns in the interface with
>   * Xen so that we can have one ABI that works for 32 and 64 bit guests. */
>  typedef uint64_t xen_pfn_t;
> +typedef uint64_t xen_ulong_t;
>  /* Guest handles for primitive C types. */
>  __DEFINE_GUEST_HANDLE(uchar, unsigned char);
>  __DEFINE_GUEST_HANDLE(uint,  unsigned int);
> diff --git a/arch/ia64/include/asm/xen/interface.h b/arch/ia64/include/asm/xen/interface.h
> index 686464e..7c83445 100644
> --- a/arch/ia64/include/asm/xen/interface.h
> +++ b/arch/ia64/include/asm/xen/interface.h
> @@ -71,6 +71,7 @@
>   * with Xen so that we could have one ABI that works for 32 and 64 bit
>   * guests. */
>  typedef unsigned long xen_pfn_t;
> +typedef unsigned long xen_ulong_t;
>  /* Guest handles for primitive C types. */
>  __DEFINE_GUEST_HANDLE(uchar, unsigned char);
>  __DEFINE_GUEST_HANDLE(uint, unsigned int);
> diff --git a/arch/x86/include/asm/xen/interface.h b/arch/x86/include/asm/xen/interface.h
> index 555f94d..28fc621 100644
> --- a/arch/x86/include/asm/xen/interface.h
> +++ b/arch/x86/include/asm/xen/interface.h
> @@ -51,6 +51,7 @@
>   * with Xen so that on ARM we can have one ABI that works for 32 and 64
>   * bit guests. */
>  typedef unsigned long xen_pfn_t;
> +typedef unsigned long xen_ulong_t;
>  /* Guest handles for primitive C types. */
>  __DEFINE_GUEST_HANDLE(uchar, unsigned char);
>  __DEFINE_GUEST_HANDLE(uint,  unsigned int);
> diff --git a/include/xen/interface/memory.h b/include/xen/interface/memory.h
> index abbbff0..b5c3098 100644
> --- a/include/xen/interface/memory.h
> +++ b/include/xen/interface/memory.h
> @@ -34,7 +34,7 @@ struct xen_memory_reservation {
>      GUEST_HANDLE(xen_pfn_t) extent_start;
>  
>      /* Number of extents, and size/alignment of each (2^extent_order pages). */
> -    unsigned long  nr_extents;
> +    xen_ulong_t  nr_extents;
>      unsigned int   extent_order;
>  
>      /*
> @@ -92,7 +92,7 @@ struct xen_memory_exchange {
>       *     command will be non-zero.
>       *  5. THIS FIELD MUST BE INITIALISED TO ZERO BY THE CALLER!
>       */
> -    unsigned long nr_exchanged;
> +    xen_ulong_t nr_exchanged;
>  };
>  
>  DEFINE_GUEST_HANDLE_STRUCT(xen_memory_exchange);
> @@ -148,8 +148,8 @@ DEFINE_GUEST_HANDLE_STRUCT(xen_machphys_mfn_list);
>   */
>  #define XENMEM_machphys_mapping     12
>  struct xen_machphys_mapping {
> -    unsigned long v_start, v_end; /* Start and end virtual addresses.   */
> -    unsigned long max_mfn;        /* Maximum MFN that can be looked up. */
> +    xen_ulong_t v_start, v_end; /* Start and end virtual addresses.   */
> +    xen_ulong_t max_mfn;        /* Maximum MFN that can be looked up. */
>  };
>  DEFINE_GUEST_HANDLE_STRUCT(xen_machphys_mapping_t);
>  
> @@ -169,7 +169,7 @@ struct xen_add_to_physmap {
>      unsigned int space;
>  
>      /* Index into source mapping space. */
> -    unsigned long idx;
> +    xen_ulong_t idx;
>  
>      /* GPFN where the source mapping page should appear. */
>      xen_pfn_t gpfn;
> @@ -186,7 +186,7 @@ struct xen_translate_gpfn_list {
>      domid_t domid;
>  
>      /* Length of list. */
> -    unsigned long nr_gpfns;
> +    xen_ulong_t nr_gpfns;
>  
>      /* List of GPFNs to translate. */
>      GUEST_HANDLE(ulong) gpfn_list;
> diff --git a/include/xen/interface/physdev.h b/include/xen/interface/physdev.h
> index 9ce788d..bc3ae14 100644
> --- a/include/xen/interface/physdev.h
> +++ b/include/xen/interface/physdev.h
> @@ -56,7 +56,7 @@ struct physdev_eoi {
>  #define PHYSDEVOP_pirq_eoi_gmfn_v2       28
>  struct physdev_pirq_eoi_gmfn {
>      /* IN */
> -    unsigned long gmfn;
> +    xen_ulong_t gmfn;
>  };
>  
>  /*
> @@ -108,7 +108,7 @@ struct physdev_set_iobitmap {
>  #define PHYSDEVOP_apic_write		 9
>  struct physdev_apic {
>  	/* IN */
> -	unsigned long apic_physbase;
> +	xen_ulong_t apic_physbase;
>  	uint32_t reg;
>  	/* IN or OUT */
>  	uint32_t value;
> diff --git a/include/xen/interface/version.h b/include/xen/interface/version.h
> index e8b6519..30280c9 100644
> --- a/include/xen/interface/version.h
> +++ b/include/xen/interface/version.h
> @@ -45,7 +45,7 @@ struct xen_changeset_info {
>  
>  #define XENVER_platform_parameters 5
>  struct xen_platform_parameters {
> -    unsigned long virt_start;
> +    xen_ulong_t virt_start;
>  };
>  
>  #define XENVER_get_features 6
> diff --git a/include/xen/interface/xen.h b/include/xen/interface/xen.h
> index 42834a3..ec32115 100644
> --- a/include/xen/interface/xen.h
> +++ b/include/xen/interface/xen.h
> @@ -274,9 +274,9 @@ DEFINE_GUEST_HANDLE_STRUCT(mmu_update);
>   * NB. The fields are natural register size for this architecture.
>   */
>  struct multicall_entry {
> -    unsigned long op;
> -    long result;
> -    unsigned long args[6];
> +    xen_ulong_t op;
> +    xen_ulong_t result;
> +    xen_ulong_t args[6];
>  };
>  DEFINE_GUEST_HANDLE_STRUCT(multicall_entry);
>  
> -- 
> 1.7.2.5


From xen-devel-bounces@lists.xen.org Tue Aug 07 18:31:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 18:31:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyoZJ-0002KR-8x; Tue, 07 Aug 2012 18:31:49 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1SyoZH-0002Jo-72
	for xen-devel@lists.xensource.com; Tue, 07 Aug 2012 18:31:47 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1344364299!11349945!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjc1NDQ5\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6792 invoked from network); 7 Aug 2012 18:31:40 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-3.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Aug 2012 18:31:40 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q77IVOfN002611
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 7 Aug 2012 18:31:24 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q77IVNFp018635
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 7 Aug 2012 18:31:23 GMT
Received: from abhmt118.oracle.com (abhmt118.oracle.com [141.146.116.70])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q77IVMFn007989; Tue, 7 Aug 2012 13:31:22 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 07 Aug 2012 11:31:22 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 14D9741F38; Tue,  7 Aug 2012 14:21:57 -0400 (EDT)
Date: Tue, 7 Aug 2012 14:21:57 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	dgdegra@tycho.nsa.gov
Message-ID: <20120807182157.GN15053@phenom.dumpdata.com>
References: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
	<1344263246-28036-10-git-send-email-stefano.stabellini@eu.citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1344263246-28036-10-git-send-email-stefano.stabellini@eu.citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, catalin.marinas@arm.com,
	tim@xen.org, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH v2 10/23] xen/arm: compile and run xenbus
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Aug 06, 2012 at 03:27:13PM +0100, Stefano Stabellini wrote:
> bind_evtchn_to_irqhandler can legitimately return 0 (irq 0): it is not
> an error.
> 
> If Linux is running as an HVM domain and as Dom0, use
> xenstored_local_init to initialize the xenstore page and event channel.
> 
> Changes in v2:
> 
> - refactor xenbus_init.

Thank you. Let's also CC our friend at the NSA who has been doing some work
in that area. Daniel, are you OK with this change - will it still make the
PV initial domain work with the MiniOS XenBus driver?

Thanks.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> ---
>  drivers/xen/xenbus/xenbus_comms.c |    2 +-
>  drivers/xen/xenbus/xenbus_probe.c |   62 +++++++++++++++++++++++++-----------
>  drivers/xen/xenbus/xenbus_xs.c    |    1 +
>  3 files changed, 45 insertions(+), 20 deletions(-)
> 
> diff --git a/drivers/xen/xenbus/xenbus_comms.c b/drivers/xen/xenbus/xenbus_comms.c
> index 52fe7ad..c5aa55c 100644
> --- a/drivers/xen/xenbus/xenbus_comms.c
> +++ b/drivers/xen/xenbus/xenbus_comms.c
> @@ -224,7 +224,7 @@ int xb_init_comms(void)
>  		int err;
>  		err = bind_evtchn_to_irqhandler(xen_store_evtchn, wake_waiting,
>  						0, "xenbus", &xb_waitq);
> -		if (err <= 0) {
> +		if (err < 0) {
>  			printk(KERN_ERR "XENBUS request irq failed %i\n", err);
>  			return err;
>  		}

> diff --git a/drivers/xen/xenbus/xenbus_probe.c b/drivers/xen/xenbus/xenbus_probe.c
> index b793723..a67ccc0 100644
> --- a/drivers/xen/xenbus/xenbus_probe.c
> +++ b/drivers/xen/xenbus/xenbus_probe.c
> @@ -719,37 +719,61 @@ static int __init xenstored_local_init(void)
>  	return err;
>  }
>  
> +enum xenstore_init {
> +	UNKNOWN,
> +	PV,
> +	HVM,
> +	LOCAL,
> +};
>  static int __init xenbus_init(void)
>  {
>  	int err = 0;
> +	enum xenstore_init usage = UNKNOWN;
> +	uint64_t v = 0;
>  
>  	if (!xen_domain())
>  		return -ENODEV;
>  
>  	xenbus_ring_ops_init();
>  
> -	if (xen_hvm_domain()) {
> -		uint64_t v = 0;
> -		err = hvm_get_parameter(HVM_PARAM_STORE_EVTCHN, &v);
> -		if (err)
> -			goto out_error;
> -		xen_store_evtchn = (int)v;
> -		err = hvm_get_parameter(HVM_PARAM_STORE_PFN, &v);
> -		if (err)
> -			goto out_error;
> -		xen_store_mfn = (unsigned long)v;
> -		xen_store_interface = ioremap(xen_store_mfn << PAGE_SHIFT, PAGE_SIZE);
> -	} else {
> -		xen_store_evtchn = xen_start_info->store_evtchn;
> -		xen_store_mfn = xen_start_info->store_mfn;
> -		if (xen_store_evtchn)
> -			xenstored_ready = 1;
> -		else {
> +	if (xen_pv_domain())
> +		usage = PV;
> +	if (xen_hvm_domain())
> +		usage = HVM;
> +	if (xen_hvm_domain() && xen_initial_domain())
> +		usage = LOCAL;
> +	if (xen_pv_domain() && !xen_start_info->store_evtchn)
> +		usage = LOCAL;
> +	if (xen_pv_domain() && xen_start_info->store_evtchn)
> +		xenstored_ready = 1;
> +
> +	switch (usage) {
> +		case LOCAL:
>  			err = xenstored_local_init();
>  			if (err)
>  				goto out_error;
> -		}
> -		xen_store_interface = mfn_to_virt(xen_store_mfn);
> +			xen_store_interface = mfn_to_virt(xen_store_mfn);
> +			break;
> +		case PV:
> +			xen_store_evtchn = xen_start_info->store_evtchn;
> +			xen_store_mfn = xen_start_info->store_mfn;
> +			xen_store_interface = mfn_to_virt(xen_store_mfn);
> +			break;
> +		case HVM:
> +			err = hvm_get_parameter(HVM_PARAM_STORE_EVTCHN, &v);
> +			if (err)
> +				goto out_error;
> +			xen_store_evtchn = (int)v;
> +			err = hvm_get_parameter(HVM_PARAM_STORE_PFN, &v);
> +			if (err)
> +				goto out_error;
> +			xen_store_mfn = (unsigned long)v;
> +			xen_store_interface =
> +				ioremap(xen_store_mfn << PAGE_SHIFT, PAGE_SIZE);
> +			break;
> +		default:
> +			pr_warn("Xenstore state unknown\n");
> +			break;
>  	}
>  
>  	/* Initialize the interface to xenstore. */
> diff --git a/drivers/xen/xenbus/xenbus_xs.c b/drivers/xen/xenbus/xenbus_xs.c
> index d1c217b..f7feb3d 100644
> --- a/drivers/xen/xenbus/xenbus_xs.c
> +++ b/drivers/xen/xenbus/xenbus_xs.c
> @@ -44,6 +44,7 @@
>  #include <linux/rwsem.h>
>  #include <linux/module.h>
>  #include <linux/mutex.h>
> +#include <asm/xen/hypervisor.h>
>  #include <xen/xenbus.h>
>  #include <xen/xen.h>
>  #include "xenbus_comms.h"
> -- 
> 1.7.2.5


From xen-devel-bounces@lists.xen.org Tue Aug 07 18:33:19 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 18:33:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Syoad-0002Uz-Oj; Tue, 07 Aug 2012 18:33:11 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1Syoac-0002Uo-BG
	for xen-devel@lists.xensource.com; Tue, 07 Aug 2012 18:33:10 +0000
Received: from [85.158.143.35:45400] by server-2.bemta-4.messagelabs.com id
	D7/B2-17938-46F51205; Tue, 07 Aug 2012 18:33:08 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1344364385!15953877!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDY4MzA2Nw==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24864 invoked from network); 7 Aug 2012 18:33:07 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-8.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Aug 2012 18:33:07 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q77IWpOg013103
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 7 Aug 2012 18:32:52 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q77IWogM000806
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 7 Aug 2012 18:32:51 GMT
Received: from abhmt111.oracle.com (abhmt111.oracle.com [141.146.116.63])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q77IWo6h011136; Tue, 7 Aug 2012 13:32:50 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 07 Aug 2012 11:32:50 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 6885741F39; Tue,  7 Aug 2012 14:23:25 -0400 (EDT)
Date: Tue, 7 Aug 2012 14:23:25 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20120807182325.GO15053@phenom.dumpdata.com>
References: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
	<1344263246-28036-11-git-send-email-stefano.stabellini@eu.citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1344263246-28036-11-git-send-email-stefano.stabellini@eu.citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, catalin.marinas@arm.com,
	tim@xen.org, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH v2 11/23] xen: do not compile manage, balloon,
 pci, acpi and cpu_hotplug on ARM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Aug 06, 2012 at 03:27:14PM +0100, Stefano Stabellini wrote:
> Changes in v2:
> 
> - make pci.o depend on CONFIG_PCI and acpi.o depend on CONFIG_ACPI.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

Looks good.
> ---
>  drivers/xen/Makefile |   11 ++++++++---
>  1 files changed, 8 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/xen/Makefile b/drivers/xen/Makefile
> index fc34886..bee02b2 100644
> --- a/drivers/xen/Makefile
> +++ b/drivers/xen/Makefile
> @@ -1,11 +1,17 @@
> -obj-y	+= grant-table.o features.o events.o manage.o balloon.o
> +ifneq ($(CONFIG_ARM),y)
> +obj-y	+= manage.o balloon.o
> +obj-$(CONFIG_HOTPLUG_CPU)		+= cpu_hotplug.o
> +endif
> +obj-y	+= grant-table.o features.o events.o
>  obj-y	+= xenbus/
>  
>  nostackp := $(call cc-option, -fno-stack-protector)
>  CFLAGS_features.o			:= $(nostackp)
>  
> +obj-$(CONFIG_XEN_DOM0)			+= $(dom0-y)
> +dom0-$(CONFIG_PCI) := pci.o
> +dom0-$(CONFIG_ACPI) := acpi.o
>  obj-$(CONFIG_BLOCK)			+= biomerge.o
> -obj-$(CONFIG_HOTPLUG_CPU)		+= cpu_hotplug.o
>  obj-$(CONFIG_XEN_XENCOMM)		+= xencomm.o
>  obj-$(CONFIG_XEN_BALLOON)		+= xen-balloon.o
>  obj-$(CONFIG_XEN_SELFBALLOONING)	+= xen-selfballoon.o
> @@ -17,7 +23,6 @@ obj-$(CONFIG_XEN_SYS_HYPERVISOR)	+= sys-hypervisor.o
>  obj-$(CONFIG_XEN_PVHVM)			+= platform-pci.o
>  obj-$(CONFIG_XEN_TMEM)			+= tmem.o
>  obj-$(CONFIG_SWIOTLB_XEN)		+= swiotlb-xen.o
> -obj-$(CONFIG_XEN_DOM0)			+= pci.o acpi.o
>  obj-$(CONFIG_XEN_PCIDEV_BACKEND)	+= xen-pciback/
>  obj-$(CONFIG_XEN_PRIVCMD)		+= xen-privcmd.o
>  obj-$(CONFIG_XEN_ACPI_PROCESSOR)	+= xen-acpi-processor.o
> -- 
> 1.7.2.5

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
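
[Editor's note] The Makefile hunk reviewed above uses the standard kbuild idiom of grouping optional objects behind one config symbol. A minimal standalone sketch of the idiom (not the actual drivers/xen/Makefile, just the pattern it relies on):

```make
# Group optional objects behind one config symbol (kbuild idiom).
# Each dom0-$(CONFIG_FOO) line contributes only when CONFIG_FOO=y,
# because the variable name then expands to "dom0-y".
dom0-$(CONFIG_PCI)  := pci.o
dom0-$(CONFIG_ACPI) += acpi.o

# The whole list is built only when CONFIG_XEN_DOM0=y.
obj-$(CONFIG_XEN_DOM0) += $(dom0-y)
```

Two details worth noting. First, the order of these lines does not matter in the patch: `obj-y` is a recursively expanded variable here, so `$(dom0-y)` is only resolved when kbuild finally reads `obj-y`. Second, the hunk as posted assigns with `:=` on both the pci.o and acpi.o lines; when CONFIG_PCI and CONFIG_ACPI are both enabled, the second `:=` overwrites the first and only acpi.o would be built — using `+=` on the second line, as in the sketch, avoids that.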

From xen-devel-bounces@lists.xen.org Tue Aug 07 18:33:47 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 18:33:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Syob6-0002Zm-6E; Tue, 07 Aug 2012 18:33:40 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1Syob5-0002Yx-4z
	for xen-devel@lists.xensource.com; Tue, 07 Aug 2012 18:33:39 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1344364402!11744276!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDY4MzA2Nw==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11035 invoked from network); 7 Aug 2012 18:33:23 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-9.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Aug 2012 18:33:23 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q77IX3tO013423
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 7 Aug 2012 18:33:04 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q77IX2b5019701
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 7 Aug 2012 18:33:03 GMT
Received: from abhmt115.oracle.com (abhmt115.oracle.com [141.146.116.67])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q77IX2CW011375; Tue, 7 Aug 2012 13:33:02 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 07 Aug 2012 11:33:02 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id A7F7B41F38; Tue,  7 Aug 2012 14:23:37 -0400 (EDT)
Date: Tue, 7 Aug 2012 14:23:37 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20120807182337.GP15053@phenom.dumpdata.com>
References: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
	<1344263246-28036-12-git-send-email-stefano.stabellini@eu.citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1344263246-28036-12-git-send-email-stefano.stabellini@eu.citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, catalin.marinas@arm.com,
	tim@xen.org, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH v2 12/23] xen/arm: introduce CONFIG_XEN on
	ARM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Aug 06, 2012 at 03:27:15PM +0100, Stefano Stabellini wrote:
> Changes in v2:
> 
> - mark Xen guest support on ARM as EXPERIMENTAL.

OK. Looks good.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> ---
>  arch/arm/Kconfig |   10 ++++++++++
>  1 files changed, 10 insertions(+), 0 deletions(-)
> 
> diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
> index a91009c..f14664b 100644
> --- a/arch/arm/Kconfig
> +++ b/arch/arm/Kconfig
> @@ -1855,6 +1855,16 @@ config DEPRECATED_PARAM_STRUCT
>  	  This was deprecated in 2001 and announced to live on for 5 years.
>  	  Some old boot loaders still use this way.
>  
> +config XEN_DOM0
> +	def_bool y
> +
> +config XEN
> +	bool "Xen guest support on ARM (EXPERIMENTAL)"
> +	depends on EXPERIMENTAL && ARM && OF
> +	select XEN_DOM0
> +	help
> +	  Say Y if you want to run Linux in a Virtual Machine on Xen on ARM.
> +
>  endmenu
>  
>  menu "Boot options"
> -- 
> 1.7.2.5

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 18:34:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 18:34:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyobM-0002cU-Jx; Tue, 07 Aug 2012 18:33:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1SyobK-0002c5-VP
	for xen-devel@lists.xensource.com; Tue, 07 Aug 2012 18:33:55 +0000
Received: from [85.158.143.99:50864] by server-3.bemta-4.messagelabs.com id
	D8/21-01511-29F51205; Tue, 07 Aug 2012 18:33:54 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-10.tower-216.messagelabs.com!1344364432!24308748!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjc1NDQ5\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18207 invoked from network); 7 Aug 2012 18:33:53 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-10.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Aug 2012 18:33:53 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q77IXebx004910
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 7 Aug 2012 18:33:41 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q77IXdlX022755
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 7 Aug 2012 18:33:40 GMT
Received: from abhmt103.oracle.com (abhmt103.oracle.com [141.146.116.55])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q77IXd5b009546; Tue, 7 Aug 2012 13:33:39 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 07 Aug 2012 11:33:39 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 3A8B041F39; Tue,  7 Aug 2012 14:24:14 -0400 (EDT)
Date: Tue, 7 Aug 2012 14:24:14 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20120807182414.GQ15053@phenom.dumpdata.com>
References: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
	<1344263246-28036-13-git-send-email-stefano.stabellini@eu.citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1344263246-28036-13-git-send-email-stefano.stabellini@eu.citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, catalin.marinas@arm.com,
	tim@xen.org, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH v2 13/23] xen/arm: get privilege status
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Aug 06, 2012 at 03:27:16PM +0100, Stefano Stabellini wrote:
> Use Xen features to figure out if we are privileged.
> 
> XENFEAT_dom0 was introduced by 23735 in xen-unstable.hg.

Looks good.

> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> ---
>  arch/arm/xen/enlighten.c         |    7 +++++++
>  include/xen/interface/features.h |    3 +++
>  2 files changed, 10 insertions(+), 0 deletions(-)
> 
> diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
> index 102d823..ac3a2d6 100644
> --- a/arch/arm/xen/enlighten.c
> +++ b/arch/arm/xen/enlighten.c
> @@ -1,6 +1,7 @@
>  #include <xen/xen.h>
>  #include <xen/interface/xen.h>
>  #include <xen/interface/memory.h>
> +#include <xen/features.h>
>  #include <xen/platform_pci.h>
>  #include <asm/xen/hypervisor.h>
>  #include <asm/xen/hypercall.h>
> @@ -57,6 +58,12 @@ static int __init xen_guest_init(void)
>  	}
>  	xen_domain_type = XEN_HVM_DOMAIN;
>  
> +	xen_setup_features();
> +	if (xen_feature(XENFEAT_dom0))
> +		xen_start_info->flags |= SIF_INITDOMAIN|SIF_PRIVILEGED;
> +	else
> +		xen_start_info->flags &= ~(SIF_INITDOMAIN|SIF_PRIVILEGED);
> +
>  	if (!shared_info_page)
>  		shared_info_page = (struct shared_info *)
>  			get_zeroed_page(GFP_KERNEL);
> diff --git a/include/xen/interface/features.h b/include/xen/interface/features.h
> index b6ca39a..131a6cc 100644
> --- a/include/xen/interface/features.h
> +++ b/include/xen/interface/features.h
> @@ -50,6 +50,9 @@
>  /* x86: pirq can be used by HVM guests */
>  #define XENFEAT_hvm_pirqs           10
>  
> +/* operation as Dom0 is supported */
> +#define XENFEAT_dom0                      11
> +
>  #define XENFEAT_NR_SUBMAPS 1
>  
>  #endif /* __XEN_PUBLIC_FEATURES_H__ */
> -- 
> 1.7.2.5

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 18:40:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 18:40:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Syohq-00034e-MJ; Tue, 07 Aug 2012 18:40:38 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1Syohp-00034Z-7M
	for xen-devel@lists.xensource.com; Tue, 07 Aug 2012 18:40:37 +0000
Received: from [85.158.143.35:36699] by server-3.bemta-4.messagelabs.com id
	C0/35-01511-42161205; Tue, 07 Aug 2012 18:40:36 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1344364834!19134655!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDY4MzA2Nw==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28494 invoked from network); 7 Aug 2012 18:40:35 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-6.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Aug 2012 18:40:35 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q77Ie3LW020744
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 7 Aug 2012 18:40:03 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q77Ie2cQ001646
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 7 Aug 2012 18:40:02 GMT
Received: from abhmt102.oracle.com (abhmt102.oracle.com [141.146.116.54])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q77Ie1Qb013877; Tue, 7 Aug 2012 13:40:02 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 07 Aug 2012 11:40:01 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id A160E41F3C; Tue,  7 Aug 2012 14:30:36 -0400 (EDT)
Date: Tue, 7 Aug 2012 14:30:36 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20120807183036.GR15053@phenom.dumpdata.com>
References: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
	<1344263246-28036-15-git-send-email-stefano.stabellini@eu.citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1344263246-28036-15-git-send-email-stefano.stabellini@eu.citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, catalin.marinas@arm.com,
	tim@xen.org, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH v2 15/23] xen/arm: receive Xen events on ARM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Aug 06, 2012 at 03:27:18PM +0100, Stefano Stabellini wrote:
> Compile events.c on ARM.
> Parse, map and enable the IRQ to get event notifications from the device
> tree (node "/xen").
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> ---
>  arch/arm/include/asm/xen/events.h |   18 ++++++++++++++++++
>  arch/arm/xen/enlighten.c          |   33 +++++++++++++++++++++++++++++++++
>  arch/x86/xen/enlighten.c          |    1 +
>  arch/x86/xen/irq.c                |    1 +
>  arch/x86/xen/xen-ops.h            |    1 -
>  drivers/xen/events.c              |   17 ++++++++++++++---
>  include/xen/events.h              |    2 ++
>  7 files changed, 69 insertions(+), 4 deletions(-)
>  create mode 100644 arch/arm/include/asm/xen/events.h
> 
> diff --git a/arch/arm/include/asm/xen/events.h b/arch/arm/include/asm/xen/events.h
> new file mode 100644
> index 0000000..94b4e90
> --- /dev/null
> +++ b/arch/arm/include/asm/xen/events.h
> @@ -0,0 +1,18 @@
> +#ifndef _ASM_ARM_XEN_EVENTS_H
> +#define _ASM_ARM_XEN_EVENTS_H
> +
> +#include <asm/ptrace.h>
> +
> +enum ipi_vector {
> +	XEN_PLACEHOLDER_VECTOR,
> +
> +	/* Xen IPIs go here */
> +	XEN_NR_IPIS,
> +};
> +
> +static inline int xen_irqs_disabled(struct pt_regs *regs)
> +{
> +	return raw_irqs_disabled_flags(regs->ARM_cpsr);
> +}
> +
> +#endif /* _ASM_ARM_XEN_EVENTS_H */
> diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
> index e5e92d5..87b17f0 100644
> --- a/arch/arm/xen/enlighten.c
> +++ b/arch/arm/xen/enlighten.c
> @@ -1,4 +1,5 @@
>  #include <xen/xen.h>
> +#include <xen/events.h>
>  #include <xen/grant_table.h>
>  #include <xen/hvm.h>
>  #include <xen/interface/xen.h>
> @@ -9,6 +10,8 @@
>  #include <xen/xenbus.h>
>  #include <asm/xen/hypervisor.h>
>  #include <asm/xen/hypercall.h>
> +#include <linux/interrupt.h>
> +#include <linux/irqreturn.h>
>  #include <linux/module.h>
>  #include <linux/of.h>
>  #include <linux/of_irq.h>
> @@ -33,6 +36,8 @@ EXPORT_SYMBOL_GPL(xen_have_vector_callback);
>  int xen_platform_pci_unplug = XEN_UNPLUG_ALL;
>  EXPORT_SYMBOL_GPL(xen_platform_pci_unplug);
>  
> +static __read_mostly int xen_events_irq = -1;
> +

So this is global..
>  int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
>  			       unsigned long addr,
>  			       unsigned long mfn, int nr,
> @@ -66,6 +71,9 @@ static int __init xen_guest_init(void)
>  	if (of_address_to_resource(node, GRANT_TABLE_PHYSADDR, &res))
>  		return 0;
>  	xen_hvm_resume_frames = res.start >> PAGE_SHIFT;
> +	xen_events_irq = irq_of_parse_and_map(node, 0);
> +	pr_info("Xen support found, events_irq=%d gnttab_frame_pfn=%lx\n",
> +			xen_events_irq, xen_hvm_resume_frames);
>  	xen_domain_type = XEN_HVM_DOMAIN;
>  
>  	xen_setup_features();
> @@ -107,3 +115,28 @@ static int __init xen_guest_init(void)
>  	return 0;
>  }
>  core_initcall(xen_guest_init);
> +
> +static irqreturn_t xen_arm_callback(int irq, void *arg)
> +{
> +	xen_hvm_evtchn_do_upcall();
> +	return IRQ_HANDLED;
> +}
> +
> +static int __init xen_init_events(void)
> +{
> +	if (!xen_domain() || xen_events_irq < 0)
> +		return -ENODEV;
> +
> +	xen_init_IRQ();
> +
> +	if (request_percpu_irq(xen_events_irq, xen_arm_callback,
> +			"events", xen_vcpu)) {

But here you are asking for it to be percpu? What if there are other
interrupts on the _other_ CPUs that conflict with it?
> +		pr_err("Error requesting IRQ %d\n", xen_events_irq);
> +		return -EINVAL;
> +	}
> +
> +	enable_percpu_irq(xen_events_irq, 0);

Uh, that is bold. One global to rule them all, eh? Should you make
it at least:
static DEFINE_PER_CPU(int, xen_events_irq);
?
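That is, something along these lines — an untested sketch, reusing the names
from the patch above:

```c
/* Untested sketch: per-CPU storage for the event-channel IRQ,
 * instead of one global shared by every CPU. */
static DEFINE_PER_CPU(int, xen_events_irq) = -1;

/* e.g. when parsing the device tree: */
per_cpu(xen_events_irq, smp_processor_id()) = irq_of_parse_and_map(node, 0);
```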

> +
> +	return 0;
> +}
> +postcore_initcall(xen_init_events);
> diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
> index ff962d4..9f8b0ef 100644
> --- a/arch/x86/xen/enlighten.c
> +++ b/arch/x86/xen/enlighten.c
> @@ -33,6 +33,7 @@
>  #include <linux/memblock.h>
>  
>  #include <xen/xen.h>
> +#include <xen/events.h>
>  #include <xen/interface/xen.h>
>  #include <xen/interface/version.h>
>  #include <xen/interface/physdev.h>
> diff --git a/arch/x86/xen/irq.c b/arch/x86/xen/irq.c
> index 1573376..01a4dc0 100644
> --- a/arch/x86/xen/irq.c
> +++ b/arch/x86/xen/irq.c
> @@ -5,6 +5,7 @@
>  #include <xen/interface/xen.h>
>  #include <xen/interface/sched.h>
>  #include <xen/interface/vcpu.h>
> +#include <xen/events.h>
>  
>  #include <asm/xen/hypercall.h>
>  #include <asm/xen/hypervisor.h>
> diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h
> index 202d4c1..2368295 100644
> --- a/arch/x86/xen/xen-ops.h
> +++ b/arch/x86/xen/xen-ops.h
> @@ -35,7 +35,6 @@ void xen_set_pat(u64);
>  
>  char * __init xen_memory_setup(void);
>  void __init xen_arch_setup(void);
> -void __init xen_init_IRQ(void);
>  void xen_enable_sysenter(void);
>  void xen_enable_syscall(void);
>  void xen_vcpu_restore(void);
> diff --git a/drivers/xen/events.c b/drivers/xen/events.c
> index 7595581..5ecb596 100644
> --- a/drivers/xen/events.c
> +++ b/drivers/xen/events.c
> @@ -31,14 +31,16 @@
>  #include <linux/irqnr.h>
>  #include <linux/pci.h>
>  
> +#ifdef CONFIG_X86
>  #include <asm/desc.h>
>  #include <asm/ptrace.h>
>  #include <asm/irq.h>
>  #include <asm/idle.h>
>  #include <asm/io_apic.h>
> -#include <asm/sync_bitops.h>
>  #include <asm/xen/page.h>
>  #include <asm/xen/pci.h>
> +#endif
> +#include <asm/sync_bitops.h>
>  #include <asm/xen/hypercall.h>
>  #include <asm/xen/hypervisor.h>
>  
> @@ -50,6 +52,9 @@
>  #include <xen/interface/event_channel.h>
>  #include <xen/interface/hvm/hvm_op.h>
>  #include <xen/interface/hvm/params.h>
> +#include <xen/interface/physdev.h>
> +#include <xen/interface/sched.h>
> +#include <asm/hw_irq.h>
>  
>  /*
>   * This lock protects updates to the following mapping and reference-count
> @@ -1374,7 +1379,9 @@ void xen_evtchn_do_upcall(struct pt_regs *regs)
>  {
>  	struct pt_regs *old_regs = set_irq_regs(regs);
>  
> +#ifdef CONFIG_X86

Please explain this with a comment.
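E.g. something like this — assuming the reason really is that exit_idle()
is x86-only:

```c
#ifdef CONFIG_X86 /* exit_idle() only exists on x86 (<asm/idle.h>) */
	exit_idle();
#endif
```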

>  	exit_idle();
> +#endif
>  	irq_enter();
>  
>  	__xen_evtchn_do_upcall();
> @@ -1783,9 +1790,9 @@ void xen_callback_vector(void)
>  void xen_callback_vector(void) {}
>  #endif
>  
> -void __init xen_init_IRQ(void)
> +void xen_init_IRQ(void)
>  {
> -	int i, rc;
> +	int i;
>  
>  	evtchn_to_irq = kcalloc(NR_EVENT_CHANNELS, sizeof(*evtchn_to_irq),
>  				    GFP_KERNEL);
> @@ -1801,6 +1808,7 @@ void __init xen_init_IRQ(void)
>  
>  	pirq_needs_eoi = pirq_needs_eoi_flag;
>  
> +#ifdef CONFIG_X86
>  	if (xen_hvm_domain()) {
>  		xen_callback_vector();
>  		native_init_IRQ();
> @@ -1808,6 +1816,7 @@ void __init xen_init_IRQ(void)
>  		 * __acpi_register_gsi can point at the right function */
>  		pci_xen_hvm_init();
>  	} else {
> +		int rc;
>  		struct physdev_pirq_eoi_gmfn eoi_gmfn;
>  
>  		irq_ctx_init(smp_processor_id());
> @@ -1823,4 +1832,6 @@ void __init xen_init_IRQ(void)
>  		} else
>  			pirq_needs_eoi = pirq_check_eoi_map;
>  	}
> +#endif
>  }
> +EXPORT_SYMBOL_GPL(xen_init_IRQ);
> diff --git a/include/xen/events.h b/include/xen/events.h
> index 04399b2..c6bfe01 100644
> --- a/include/xen/events.h
> +++ b/include/xen/events.h
> @@ -109,4 +109,6 @@ int xen_irq_from_gsi(unsigned gsi);
>  /* Determine whether to ignore this IRQ if it is passed to a guest. */
>  int xen_test_irq_shared(int irq);
>  
> +/* initialize Xen IRQ subsystem */
> +void xen_init_IRQ(void);
>  #endif	/* _XEN_EVENTS_H */
> -- 
> 1.7.2.5

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 18:41:18 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 18:41:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyoiO-00037I-3A; Tue, 07 Aug 2012 18:41:12 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1SyoiM-00036d-Ts
	for xen-devel@lists.xensource.com; Tue, 07 Aug 2012 18:41:11 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1344364860!11993750!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDY4MzA2Nw==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1075 invoked from network); 7 Aug 2012 18:41:02 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-12.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Aug 2012 18:41:02 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q77IeoH9021354
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 7 Aug 2012 18:40:50 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q77Iemhb012514
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 7 Aug 2012 18:40:49 GMT
Received: from abhmt101.oracle.com (abhmt101.oracle.com [141.146.116.53])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q77IemAS014355; Tue, 7 Aug 2012 13:40:48 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 07 Aug 2012 11:40:48 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 700AE41F3C; Tue,  7 Aug 2012 14:31:23 -0400 (EDT)
Date: Tue, 7 Aug 2012 14:31:23 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20120807183123.GS15053@phenom.dumpdata.com>
References: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
	<1344263246-28036-16-git-send-email-stefano.stabellini@eu.citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1344263246-28036-16-git-send-email-stefano.stabellini@eu.citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, catalin.marinas@arm.com,
	tim@xen.org, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH v2 16/23] xen: clear IRQ_NOAUTOEN and
	IRQ_NOREQUEST
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Aug 06, 2012 at 03:27:19PM +0100, Stefano Stabellini wrote:
> Reset the IRQ_NOAUTOEN and IRQ_NOREQUEST flags that are enabled by
> default on ARM. If IRQ_NOAUTOEN is set, __setup_irq doesn't call
> irq_startup, that is responsible for calling irq_unmask at startup time.
> As a result event channels remain masked.

Acked by me.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> ---
>  drivers/xen/events.c |    1 +
>  1 files changed, 1 insertions(+), 0 deletions(-)
> 
> diff --git a/drivers/xen/events.c b/drivers/xen/events.c
> index 5ecb596..8ffb7b7 100644
> --- a/drivers/xen/events.c
> +++ b/drivers/xen/events.c
> @@ -836,6 +836,7 @@ int bind_evtchn_to_irq(unsigned int evtchn)
>  		struct irq_info *info = info_for_irq(irq);
>  		WARN_ON(info == NULL || info->type != IRQT_EVTCHN);
>  	}
> +	irq_clear_status_flags(irq, IRQ_NOREQUEST|IRQ_NOAUTOEN);
>  
>  out:
>  	mutex_unlock(&irq_mapping_update_lock);
> -- 
> 1.7.2.5

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 18:41:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 18:41:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyoiT-000383-GB; Tue, 07 Aug 2012 18:41:17 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1SyoiR-00037f-79
	for xen-devel@lists.xensource.com; Tue, 07 Aug 2012 18:41:15 +0000
Received: from [85.158.139.83:22662] by server-7.bemta-5.messagelabs.com id
	8B/93-00857-A4161205; Tue, 07 Aug 2012 18:41:14 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-2.tower-182.messagelabs.com!1344364872!30708545!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDY4MzA2Nw==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13425 invoked from network); 7 Aug 2012 18:41:13 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-2.tower-182.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Aug 2012 18:41:13 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q77If1SF021543
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 7 Aug 2012 18:41:02 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q77If00l004716
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 7 Aug 2012 18:41:01 GMT
Received: from abhmt114.oracle.com (abhmt114.oracle.com [141.146.116.66])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q77If0qm016657; Tue, 7 Aug 2012 13:41:00 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 07 Aug 2012 11:41:00 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 684F241F3C; Tue,  7 Aug 2012 14:31:35 -0400 (EDT)
Date: Tue, 7 Aug 2012 14:31:35 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20120807183135.GT15053@phenom.dumpdata.com>
References: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
	<1344263246-28036-17-git-send-email-stefano.stabellini@eu.citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1344263246-28036-17-git-send-email-stefano.stabellini@eu.citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, catalin.marinas@arm.com,
	tim@xen.org, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH v2 17/23] xen/arm: implement
 alloc/free_xenballooned_pages with alloc_pages/kfree
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Aug 06, 2012 at 03:27:20PM +0100, Stefano Stabellini wrote:
> Only until we get the balloon driver to work.

OK. Acked-by me.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> ---
>  arch/arm/xen/enlighten.c |   18 ++++++++++++++++++
>  1 files changed, 18 insertions(+), 0 deletions(-)
> 
> diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
> index 87b17f0..c244583 100644
> --- a/arch/arm/xen/enlighten.c
> +++ b/arch/arm/xen/enlighten.c
> @@ -140,3 +140,21 @@ static int __init xen_init_events(void)
>  	return 0;
>  }
>  postcore_initcall(xen_init_events);
> +
> +/* XXX: only until balloon is properly working */
> +int alloc_xenballooned_pages(int nr_pages, struct page **pages, bool highmem)
> +{
> +	*pages = alloc_pages(highmem ? GFP_HIGHUSER : GFP_KERNEL,
> +			get_order(nr_pages));
> +	if (*pages == NULL)
> +		return -ENOMEM;
> +	return 0;
> +}
> +EXPORT_SYMBOL_GPL(alloc_xenballooned_pages);
> +
> +void free_xenballooned_pages(int nr_pages, struct page **pages)
> +{
> +	kfree(*pages);
> +	*pages = NULL;
> +}
> +EXPORT_SYMBOL_GPL(free_xenballooned_pages);
> -- 
> 1.7.2.5

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 18:41:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 18:41:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Syoim-0003CO-Tk; Tue, 07 Aug 2012 18:41:36 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1Syoil-0003Bx-QB
	for xen-devel@lists.xensource.com; Tue, 07 Aug 2012 18:41:35 +0000
Received: from [85.158.143.99:22363] by server-1.bemta-4.messagelabs.com id
	2D/4D-24392-F5161205; Tue, 07 Aug 2012 18:41:35 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-12.tower-216.messagelabs.com!1344364893!25455470!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDY4MzA2Nw==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31702 invoked from network); 7 Aug 2012 18:41:34 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-12.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Aug 2012 18:41:34 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q77IfL1a021911
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 7 Aug 2012 18:41:22 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q77IfLm6014195
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 7 Aug 2012 18:41:21 GMT
Received: from abhmt104.oracle.com (abhmt104.oracle.com [141.146.116.56])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q77IfKhZ016875; Tue, 7 Aug 2012 13:41:20 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 07 Aug 2012 11:41:20 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 946A741F3C; Tue,  7 Aug 2012 14:31:55 -0400 (EDT)
Date: Tue, 7 Aug 2012 14:31:55 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20120807183155.GU15053@phenom.dumpdata.com>
References: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
	<1344263246-28036-18-git-send-email-stefano.stabellini@eu.citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1344263246-28036-18-git-send-email-stefano.stabellini@eu.citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, catalin.marinas@arm.com,
	tim@xen.org, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH v2 18/23] xen: allow privcmd for HVM guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Aug 06, 2012 at 03:27:21PM +0100, Stefano Stabellini wrote:
> This patch removes the "return -ENOSYS" for auto_translated_physmap
> guests from privcmd_mmap, thus it allows ARM guests to issue privcmd
> mmap calls. However privcmd mmap calls are still going to fail for HVM
> and hybrid guests on x86 because the xen_remap_domain_mfn_range
> implementation is currently PV only.

Looks good. Ack from me.
> 
> Changes in v2:
> 
> - better commit message;
> - return -EINVAL from xen_remap_domain_mfn_range if
>   auto_translated_physmap.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> ---
>  arch/x86/xen/mmu.c    |    3 +++
>  drivers/xen/privcmd.c |    4 ----
>  2 files changed, 3 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
> index 3a73785..885a223 100644
> --- a/arch/x86/xen/mmu.c
> +++ b/arch/x86/xen/mmu.c
> @@ -2310,6 +2310,9 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
>  	unsigned long range;
>  	int err = 0;
>  
> +	if (xen_feature(XENFEAT_auto_translated_physmap))
> +		return -EINVAL;
> +
>  	prot = __pgprot(pgprot_val(prot) | _PAGE_IOMAP);
>  
>  	BUG_ON(!((vma->vm_flags & (VM_PFNMAP | VM_RESERVED | VM_IO)) ==
> diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
> index ccee0f1..85226cb 100644
> --- a/drivers/xen/privcmd.c
> +++ b/drivers/xen/privcmd.c
> @@ -380,10 +380,6 @@ static struct vm_operations_struct privcmd_vm_ops = {
>  
>  static int privcmd_mmap(struct file *file, struct vm_area_struct *vma)
>  {
> -	/* Unsupported for auto-translate guests. */
> -	if (xen_feature(XENFEAT_auto_translated_physmap))
> -		return -ENOSYS;
> -
>  	/* DONTCOPY is essential for Xen because copy_page_range doesn't know
>  	 * how to recreate these mappings */
>  	vma->vm_flags |= VM_RESERVED | VM_IO | VM_DONTCOPY | VM_PFNMAP;
> -- 
> 1.7.2.5

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 18:41:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 18:41:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Syoj1-0003G7-Br; Tue, 07 Aug 2012 18:41:51 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1Syoiz-0003Fa-K9
	for xen-devel@lists.xensource.com; Tue, 07 Aug 2012 18:41:49 +0000
Received: from [85.158.138.51:7750] by server-1.bemta-3.messagelabs.com id
	FF/4F-29224-C6161205; Tue, 07 Aug 2012 18:41:48 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-8.tower-174.messagelabs.com!1344364906!30846470!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDY4MzA2Nw==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17424 invoked from network); 7 Aug 2012 18:41:48 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-8.tower-174.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Aug 2012 18:41:48 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q77IfZIN022063
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 7 Aug 2012 18:41:36 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q77IfYNQ014580
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 7 Aug 2012 18:41:35 GMT
Received: from abhmt109.oracle.com (abhmt109.oracle.com [141.146.116.61])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q77IfYYG003433; Tue, 7 Aug 2012 13:41:34 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 07 Aug 2012 11:41:34 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 9963741F3C; Tue,  7 Aug 2012 14:32:09 -0400 (EDT)
Date: Tue, 7 Aug 2012 14:32:09 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20120807183209.GV15053@phenom.dumpdata.com>
References: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
	<1344263246-28036-19-git-send-email-stefano.stabellini@eu.citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1344263246-28036-19-git-send-email-stefano.stabellini@eu.citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, catalin.marinas@arm.com,
	tim@xen.org, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH v2 19/23] xen/arm: compile blkfront and
	blkback
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Aug 06, 2012 at 03:27:22PM +0100, Stefano Stabellini wrote:

OK. Looks good.
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> ---
>  drivers/block/xen-blkback/blkback.c  |    1 +
>  include/xen/interface/io/protocols.h |    3 +++
>  2 files changed, 4 insertions(+), 0 deletions(-)
> 
> diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
> index 73f196c..63dd5b9 100644
> --- a/drivers/block/xen-blkback/blkback.c
> +++ b/drivers/block/xen-blkback/blkback.c
> @@ -42,6 +42,7 @@
>  
>  #include <xen/events.h>
>  #include <xen/page.h>
> +#include <xen/xen.h>
>  #include <asm/xen/hypervisor.h>
>  #include <asm/xen/hypercall.h>
>  #include "common.h"
> diff --git a/include/xen/interface/io/protocols.h b/include/xen/interface/io/protocols.h
> index 01fc8ae..0eafaf2 100644
> --- a/include/xen/interface/io/protocols.h
> +++ b/include/xen/interface/io/protocols.h
> @@ -5,6 +5,7 @@
>  #define XEN_IO_PROTO_ABI_X86_64     "x86_64-abi"
>  #define XEN_IO_PROTO_ABI_IA64       "ia64-abi"
>  #define XEN_IO_PROTO_ABI_POWERPC64  "powerpc64-abi"
> +#define XEN_IO_PROTO_ABI_ARM        "arm-abi"
>  
>  #if defined(__i386__)
>  # define XEN_IO_PROTO_ABI_NATIVE XEN_IO_PROTO_ABI_X86_32
> @@ -14,6 +15,8 @@
>  # define XEN_IO_PROTO_ABI_NATIVE XEN_IO_PROTO_ABI_IA64
>  #elif defined(__powerpc64__)
>  # define XEN_IO_PROTO_ABI_NATIVE XEN_IO_PROTO_ABI_POWERPC64
> +#elif defined(__arm__)
> +# define XEN_IO_PROTO_ABI_NATIVE XEN_IO_PROTO_ABI_ARM
>  #else
>  # error arch fixup needed here
>  #endif
> -- 
> 1.7.2.5

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 18:42:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 18:42:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Syojk-0003Pq-Px; Tue, 07 Aug 2012 18:42:36 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1Syoji-0003PT-Er
	for xen-devel@lists.xensource.com; Tue, 07 Aug 2012 18:42:34 +0000
Received: from [85.158.138.51:11580] by server-4.bemta-3.messagelabs.com id
	C2/41-06379-99161205; Tue, 07 Aug 2012 18:42:33 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-14.tower-174.messagelabs.com!1344364950!24541165!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjc1NDQ5\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7031 invoked from network); 7 Aug 2012 18:42:32 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-14.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Aug 2012 18:42:32 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q77Ig9JU015885
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 7 Aug 2012 18:42:15 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q77Ig0dW006497
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 7 Aug 2012 18:42:01 GMT
Received: from abhmt110.oracle.com (abhmt110.oracle.com [141.146.116.62])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q77Ig0lK017318; Tue, 7 Aug 2012 13:42:00 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 07 Aug 2012 11:42:00 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 5E4C041F3C; Tue,  7 Aug 2012 14:32:35 -0400 (EDT)
Date: Tue, 7 Aug 2012 14:32:35 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20120807183235.GW15053@phenom.dumpdata.com>
References: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
	<1344263246-28036-20-git-send-email-stefano.stabellini@eu.citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1344263246-28036-20-git-send-email-stefano.stabellini@eu.citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, catalin.marinas@arm.com,
	tim@xen.org, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH v2 20/23] xen/arm: compile netback
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Aug 06, 2012 at 03:27:23PM +0100, Stefano Stabellini wrote:
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

OK. Looks good.
> ---
>  arch/arm/include/asm/xen/hypercall.h |   19 +++++++++++++++++++
>  drivers/net/xen-netback/netback.c    |    1 +
>  drivers/net/xen-netfront.c           |    1 +
>  3 files changed, 21 insertions(+), 0 deletions(-)
> 
> diff --git a/arch/arm/include/asm/xen/hypercall.h b/arch/arm/include/asm/xen/hypercall.h
> index 4ac0624..8a82325 100644
> --- a/arch/arm/include/asm/xen/hypercall.h
> +++ b/arch/arm/include/asm/xen/hypercall.h
> @@ -47,4 +47,23 @@ unsigned long HYPERVISOR_hvm_op(int op, void *arg);
>  int HYPERVISOR_memory_op(unsigned int cmd, void *arg);
>  int HYPERVISOR_physdev_op(int cmd, void *arg);
>  
> +static inline void
> +MULTI_update_va_mapping(struct multicall_entry *mcl, unsigned long va,
> +			unsigned int new_val, unsigned long flags)
> +{
> +	BUG();
> +}
> +
> +static inline void
> +MULTI_mmu_update(struct multicall_entry *mcl, struct mmu_update *req,
> +		 int count, int *success_count, domid_t domid)
> +{
> +	BUG();
> +}
> +
> +static inline int
> +HYPERVISOR_multicall(void *call_list, int nr_calls)
> +{
> +	BUG();
> +}
>  #endif /* _ASM_ARM_XEN_HYPERCALL_H */
> diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
> index f4a6fca..ab4f81c 100644
> --- a/drivers/net/xen-netback/netback.c
> +++ b/drivers/net/xen-netback/netback.c
> @@ -40,6 +40,7 @@
>  
>  #include <net/tcp.h>
>  
> +#include <xen/xen.h>
>  #include <xen/events.h>
>  #include <xen/interface/memory.h>
>  
> diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
> index 3089990..bf4ba2b 100644
> --- a/drivers/net/xen-netfront.c
> +++ b/drivers/net/xen-netfront.c
> @@ -43,6 +43,7 @@
>  #include <linux/slab.h>
>  #include <net/ip.h>
>  
> +#include <asm/xen/page.h>
>  #include <xen/xen.h>
>  #include <xen/xenbus.h>
>  #include <xen/events.h>
> -- 
> 1.7.2.5

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 18:43:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 18:43:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyokG-0003XL-DC; Tue, 07 Aug 2012 18:43:08 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1SyokF-0003X2-94
	for xen-devel@lists.xensource.com; Tue, 07 Aug 2012 18:43:07 +0000
Received: from [85.158.143.99:25128] by server-2.bemta-4.messagelabs.com id
	3C/78-17938-AB161205; Tue, 07 Aug 2012 18:43:06 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-4.tower-216.messagelabs.com!1344364984!25443399!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDY4MzA2Nw==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18619 invoked from network); 7 Aug 2012 18:43:06 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-4.tower-216.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Aug 2012 18:43:06 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q77Igosg028303
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 7 Aug 2012 18:42:51 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q77IgVtB019719
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 7 Aug 2012 18:42:32 GMT
Received: from abhmt116.oracle.com (abhmt116.oracle.com [141.146.116.68])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q77IgVqJ015479; Tue, 7 Aug 2012 13:42:31 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 07 Aug 2012 11:42:31 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 2E60D41F3C; Tue,  7 Aug 2012 14:33:06 -0400 (EDT)
Date: Tue, 7 Aug 2012 14:33:06 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20120807183306.GX15053@phenom.dumpdata.com>
References: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
	<1344263246-28036-21-git-send-email-stefano.stabellini@eu.citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1344263246-28036-21-git-send-email-stefano.stabellini@eu.citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, catalin.marinas@arm.com,
	tim@xen.org, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH v2 21/23] xen: update xen_add_to_physmap
	interface
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Aug 06, 2012 at 03:27:24PM +0100, Stefano Stabellini wrote:
> Update struct xen_add_to_physmap to be in sync with Xen's version of the
> structure.
> The size field was introduced by:
> 
> changeset:   24164:707d27fe03e7
> user:        Jean Guyader <jean.guyader@eu.citrix.com>
> date:        Fri Nov 18 13:42:08 2011 +0000
> summary:     mm: New XENMEM space, XENMAPSPACE_gmfn_range
> 
> According to the comment:
> 
> "This new field .size is located in the 16 bits padding between .domid
> and .space in struct xen_add_to_physmap to stay compatible with older
> versions."
> 
> Changes in v2:

Looks good. Let me take this into my tree to prep it for Mukesh's patches.

> 
> - remove erroneous comment in the commit message.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> ---
>  include/xen/interface/memory.h |    3 +++
>  1 files changed, 3 insertions(+), 0 deletions(-)
> 
> diff --git a/include/xen/interface/memory.h b/include/xen/interface/memory.h
> index b5c3098..b66d04c 100644
> --- a/include/xen/interface/memory.h
> +++ b/include/xen/interface/memory.h
> @@ -163,6 +163,9 @@ struct xen_add_to_physmap {
>      /* Which domain to change the mapping for. */
>      domid_t domid;
>  
> +    /* Number of pages to go through for gmfn_range */
> +    uint16_t    size;
> +
>      /* Source mapping space. */
>  #define XENMAPSPACE_shared_info 0 /* shared info page */
>  #define XENMAPSPACE_grant_table 1 /* grant table page */
> -- 
> 1.7.2.5

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 18:45:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 18:45:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyomE-0003tO-Ut; Tue, 07 Aug 2012 18:45:10 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1SyomD-0003t0-0u
	for xen-devel@lists.xensource.com; Tue, 07 Aug 2012 18:45:09 +0000
Received: from [85.158.143.35:48349] by server-2.bemta-4.messagelabs.com id
	A1/99-17938-43261205; Tue, 07 Aug 2012 18:45:08 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-14.tower-21.messagelabs.com!1344365104!17259993!1
X-Originating-IP: [63.239.67.9]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17607 invoked from network); 7 Aug 2012 18:45:05 -0000
Received: from emvm-gh1-uea08.nsa.gov (HELO nsa.gov) (63.239.67.9)
	by server-14.tower-21.messagelabs.com with SMTP;
	7 Aug 2012 18:45:05 -0000
X-TM-IMSS-Message-ID: <8114a3480003dd21@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov ([63.239.67.9])
	with ESMTP (TREND IMSS SMTP Service 7.1) id 8114a3480003dd21 ;
	Tue, 7 Aug 2012 14:45:05 -0400
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id q77IiuKU001551; 
	Tue, 7 Aug 2012 14:44:56 -0400
Message-ID: <50216228.7010407@tycho.nsa.gov>
Date: Tue, 07 Aug 2012 14:44:56 -0400
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Organization: National Security Agency
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120717 Thunderbird/14.0
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
	<1344263246-28036-10-git-send-email-stefano.stabellini@eu.citrix.com>
	<20120807182157.GN15053@phenom.dumpdata.com>
In-Reply-To: <20120807182157.GN15053@phenom.dumpdata.com>
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	catalin.marinas@arm.com, tim@xen.org, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH v2 10/23] xen/arm: compile and run xenbus
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/07/2012 02:21 PM, Konrad Rzeszutek Wilk wrote:
> On Mon, Aug 06, 2012 at 03:27:13PM +0100, Stefano Stabellini wrote:
>> bind_evtchn_to_irqhandler can legitimately return 0 (irq 0): it is not
>> an error.
>>
>> If Linux is running as an HVM domain and is running as Dom0, use
>> xenstored_local_init to initialize the xenstore page and event channel.
>>
>> Changes in v2:
>>
>> - refactor xenbus_init.
> 
> Thank you. Let's also CC our friend at the NSA who has been doing some work
> in that area. Daniel, are you OK with this change - will it still make the
> PV initial domain work with the MiniOS XenBus driver?
> 
> Thanks.

That case will work, but what this will break is launching the initial domain
with a Xenstore stub domain already running (see below).

>>
>> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
>> ---
>>  drivers/xen/xenbus/xenbus_comms.c |    2 +-
>>  drivers/xen/xenbus/xenbus_probe.c |   62 +++++++++++++++++++++++++-----------
>>  drivers/xen/xenbus/xenbus_xs.c    |    1 +
>>  3 files changed, 45 insertions(+), 20 deletions(-)
>>
>> diff --git a/drivers/xen/xenbus/xenbus_comms.c b/drivers/xen/xenbus/xenbus_comms.c
>> index 52fe7ad..c5aa55c 100644
>> --- a/drivers/xen/xenbus/xenbus_comms.c
>> +++ b/drivers/xen/xenbus/xenbus_comms.c
>> @@ -224,7 +224,7 @@ int xb_init_comms(void)
>>  		int err;
>>  		err = bind_evtchn_to_irqhandler(xen_store_evtchn, wake_waiting,
>>  						0, "xenbus", &xb_waitq);
>> -		if (err <= 0) {
>> +		if (err < 0) {
>>  			printk(KERN_ERR "XENBUS request irq failed %i\n", err);
>>  			return err;
>>  		}
> 
>> diff --git a/drivers/xen/xenbus/xenbus_probe.c b/drivers/xen/xenbus/xenbus_probe.c
>> index b793723..a67ccc0 100644
>> --- a/drivers/xen/xenbus/xenbus_probe.c
>> +++ b/drivers/xen/xenbus/xenbus_probe.c
>> @@ -719,37 +719,61 @@ static int __init xenstored_local_init(void)
>>  	return err;
>>  }
>>  
>> +enum xenstore_init {
>> +	UNKNOWN,
>> +	PV,
>> +	HVM,
>> +	LOCAL,
>> +};
>>  static int __init xenbus_init(void)
>>  {
>>  	int err = 0;
>> +	enum xenstore_init usage = UNKNOWN;
>> +	uint64_t v = 0;
>>  
>>  	if (!xen_domain())
>>  		return -ENODEV;
>>  
>>  	xenbus_ring_ops_init();
>>  
>> -	if (xen_hvm_domain()) {
>> -		uint64_t v = 0;
>> -		err = hvm_get_parameter(HVM_PARAM_STORE_EVTCHN, &v);
>> -		if (err)
>> -			goto out_error;
>> -		xen_store_evtchn = (int)v;
>> -		err = hvm_get_parameter(HVM_PARAM_STORE_PFN, &v);
>> -		if (err)
>> -			goto out_error;
>> -		xen_store_mfn = (unsigned long)v;
>> -		xen_store_interface = ioremap(xen_store_mfn << PAGE_SHIFT, PAGE_SIZE);
>> -	} else {
>> -		xen_store_evtchn = xen_start_info->store_evtchn;
>> -		xen_store_mfn = xen_start_info->store_mfn;
>> -		if (xen_store_evtchn)
>> -			xenstored_ready = 1;
>> -		else {
>> +	if (xen_pv_domain())
>> +		usage = PV;
>> +	if (xen_hvm_domain())
>> +		usage = HVM;

The above is correct for domUs, and is overridden for dom0s:

>> +	if (xen_hvm_domain() && xen_initial_domain())
>> +		usage = LOCAL;
>> +	if (xen_pv_domain() && !xen_start_info->store_evtchn)
>> +		usage = LOCAL;

Instead of these checks, I think it should just be:

if (!xen_start_info->store_evtchn)
	usage = LOCAL;

Any domain started after xenstore will have store_evtchn set, so if you don't
have this set, you are either going to be running xenstore locally, or will
use the ioctl to change it later (and so should still set up everything as if
it will be running locally).

>> +	if (xen_pv_domain() && xen_start_info->store_evtchn)
>> +		xenstored_ready = 1;

This part can now just be moved unconditionally into case PV.

>> +
>> +	switch (usage) {
>> +		case LOCAL:
>>  			err = xenstored_local_init();
>>  			if (err)
>>  				goto out_error;
>> -		}
>> -		xen_store_interface = mfn_to_virt(xen_store_mfn);
>> +			xen_store_interface = mfn_to_virt(xen_store_mfn);
>> +			break;
>> +		case PV:
>> +			xen_store_evtchn = xen_start_info->store_evtchn;
>> +			xen_store_mfn = xen_start_info->store_mfn;
>> +			xen_store_interface = mfn_to_virt(xen_store_mfn);
>> +			break;
>> +		case HVM:
>> +			err = hvm_get_parameter(HVM_PARAM_STORE_EVTCHN, &v);
>> +			if (err)
>> +				goto out_error;
>> +			xen_store_evtchn = (int)v;
>> +			err = hvm_get_parameter(HVM_PARAM_STORE_PFN, &v);
>> +			if (err)
>> +				goto out_error;
>> +			xen_store_mfn = (unsigned long)v;
>> +			xen_store_interface =
>> +				ioremap(xen_store_mfn << PAGE_SHIFT, PAGE_SIZE);
>> +			break;
>> +		default:
>> +			pr_warn("Xenstore state unknown\n");
>> +			break;
>>  	}
>>  
>>  	/* Initialize the interface to xenstore. */
>> diff --git a/drivers/xen/xenbus/xenbus_xs.c b/drivers/xen/xenbus/xenbus_xs.c
>> index d1c217b..f7feb3d 100644
>> --- a/drivers/xen/xenbus/xenbus_xs.c
>> +++ b/drivers/xen/xenbus/xenbus_xs.c
>> @@ -44,6 +44,7 @@
>>  #include <linux/rwsem.h>
>>  #include <linux/module.h>
>>  #include <linux/mutex.h>
>> +#include <asm/xen/hypervisor.h>
>>  #include <xen/xenbus.h>
>>  #include <xen/xen.h>
>>  #include "xenbus_comms.h"
>> -- 
>> 1.7.2.5


-- 
Daniel De Graaf
National Security Agency

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

>> -		xen_store_mfn = xen_start_info->store_mfn;
>> -		if (xen_store_evtchn)
>> -			xenstored_ready = 1;
>> -		else {
>> +	if (xen_pv_domain())
>> +		usage = PV;
>> +	if (xen_hvm_domain())
>> +		usage = HVM;

The above is correct for domUs, and is overridden for dom0s:

>> +	if (xen_hvm_domain() && xen_initial_domain())
>> +		usage = LOCAL;
>> +	if (xen_pv_domain() && !xen_start_info->store_evtchn)
>> +		usage = LOCAL;

Instead of these checks, I think it should just be:

if (!xen_start_info->store_evtchn)
	usage = LOCAL;

Any domain started after xenstore will have store_evtchn set, so if you don't
have this set, you are either going to be running xenstore locally, or will
use the ioctl to change it later (and so should still set up everything as if
it will be running locally).

>> +	if (xen_pv_domain() && xen_start_info->store_evtchn)
>> +		xenstored_ready = 1;

This part can now just be moved unconditionally into case PV.

>> +
>> +	switch (usage) {
>> +		case LOCAL:
>>  			err = xenstored_local_init();
>>  			if (err)
>>  				goto out_error;
>> -		}
>> -		xen_store_interface = mfn_to_virt(xen_store_mfn);
>> +			xen_store_interface = mfn_to_virt(xen_store_mfn);
>> +			break;
>> +		case PV:
>> +			xen_store_evtchn = xen_start_info->store_evtchn;
>> +			xen_store_mfn = xen_start_info->store_mfn;
>> +			xen_store_interface = mfn_to_virt(xen_store_mfn);
>> +			break;
>> +		case HVM:
>> +			err = hvm_get_parameter(HVM_PARAM_STORE_EVTCHN, &v);
>> +			if (err)
>> +				goto out_error;
>> +			xen_store_evtchn = (int)v;
>> +			err = hvm_get_parameter(HVM_PARAM_STORE_PFN, &v);
>> +			if (err)
>> +				goto out_error;
>> +			xen_store_mfn = (unsigned long)v;
>> +			xen_store_interface =
>> +				ioremap(xen_store_mfn << PAGE_SHIFT, PAGE_SIZE);
>> +			break;
>> +		default:
>> +			pr_warn("Xenstore state unknown\n");
>> +			break;
>>  	}
>>  
>>  	/* Initialize the interface to xenstore. */
>> diff --git a/drivers/xen/xenbus/xenbus_xs.c b/drivers/xen/xenbus/xenbus_xs.c
>> index d1c217b..f7feb3d 100644
>> --- a/drivers/xen/xenbus/xenbus_xs.c
>> +++ b/drivers/xen/xenbus/xenbus_xs.c
>> @@ -44,6 +44,7 @@
>>  #include <linux/rwsem.h>
>>  #include <linux/module.h>
>>  #include <linux/mutex.h>
>> +#include <asm/xen/hypervisor.h>
>>  #include <xen/xenbus.h>
>>  #include <xen/xen.h>
>>  #include "xenbus_comms.h"
>> -- 
>> 1.7.2.5


-- 
Daniel De Graaf
National Security Agency

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 18:49:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 18:49:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyoqN-0004Bo-PP; Tue, 07 Aug 2012 18:49:27 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1SyoqL-0004Bd-F2
	for xen-devel@lists.xensource.com; Tue, 07 Aug 2012 18:49:25 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1344365358!11278640!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjc1NDQ5\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21938 invoked from network); 7 Aug 2012 18:49:19 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-8.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Aug 2012 18:49:19 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q77Imscf028138
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 7 Aug 2012 18:48:55 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q77Imqvq000840
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 7 Aug 2012 18:48:53 GMT
Received: from abhmt111.oracle.com (abhmt111.oracle.com [141.146.116.63])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q77ImqVv020096; Tue, 7 Aug 2012 13:48:52 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 07 Aug 2012 11:48:52 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id CFD5E41F34; Tue,  7 Aug 2012 14:39:26 -0400 (EDT)
Date: Tue, 7 Aug 2012 14:39:26 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Mukesh Rathor <mukesh.rathor@oracle.com>
Message-ID: <20120807183926.GY15053@phenom.dumpdata.com>
References: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
	<1344263246-28036-23-git-send-email-stefano.stabellini@eu.citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1344263246-28036-23-git-send-email-stefano.stabellini@eu.citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, catalin.marinas@arm.com,
	tim@xen.org, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH v2 23/23] [HACK] xen/arm: implement
 xen_remap_domain_mfn_range
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Aug 06, 2012 at 03:27:26PM +0100, Stefano Stabellini wrote:
> From: Ian Campbell <Ian.Campbell@citrix.com>
> 
> Do not apply!

OK.
> 
> This is a simple, hacky implementation of xen_remap_domain_mfn_range,
> using XENMAPSPACE_gmfn_foreign.
> 
> It should use the same interface as hybrid x86.

Yeah, Mukesh - can you share your patch here please?
> 
> Changes in v2:
> 
> - retain binary compatibility in xen_add_to_physmap: use a union.
> 
> Signed-off-by: Ian Campbell <Ian.Campbell@citrix.com>
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> ---
>  arch/arm/xen/enlighten.c       |   79 +++++++++++++++++++++++++++++++++++++++-
>  drivers/xen/privcmd.c          |   16 +++++----
>  drivers/xen/xenfs/super.c      |    7 ++++
>  include/xen/interface/memory.h |   15 ++++++--
>  4 files changed, 105 insertions(+), 12 deletions(-)
> 
> diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
> index c244583..20ca1e4 100644
> --- a/arch/arm/xen/enlighten.c
> +++ b/arch/arm/xen/enlighten.c
> @@ -16,6 +16,10 @@
>  #include <linux/of.h>
>  #include <linux/of_irq.h>
>  #include <linux/of_address.h>
> +#include <linux/mm.h>
> +#include <linux/ioport.h>
> +
> +#include <asm/pgtable.h>
>  
>  struct start_info _xen_start_info;
>  struct start_info *xen_start_info = &_xen_start_info;
> @@ -38,12 +42,85 @@ EXPORT_SYMBOL_GPL(xen_platform_pci_unplug);
>  
>  static __read_mostly int xen_events_irq = -1;
>  
> +#define FOREIGN_MAP_BUFFER 0x90000000UL
> +#define FOREIGN_MAP_BUFFER_SIZE 0x10000000UL
> +struct resource foreign_map_resource = {
> +	.start = FOREIGN_MAP_BUFFER,
> +	.end = FOREIGN_MAP_BUFFER + FOREIGN_MAP_BUFFER_SIZE,
> +	.name = "Xen foreign map buffer",
> +	.flags = 0,
> +};
> +
> +static unsigned long foreign_map_buffer_pfn = FOREIGN_MAP_BUFFER >> PAGE_SHIFT;
> +
> +struct remap_data {
> +	struct mm_struct *mm;
> +	unsigned long mfn;
> +	pgprot_t prot;
> +};
> +
> +static int remap_area_mfn_pte_fn(pte_t *ptep, pgtable_t token,
> +				 unsigned long addr, void *data)
> +{
> +	struct remap_data *rmd = data;
> +	pte_t pte = pfn_pte(rmd->mfn, rmd->prot);
> +
> +	if (rmd->mfn < 0x90010)
> +		pr_crit("%s: ptep %p addr %#lx => %#x / %#lx\n",
> +		       __func__, ptep, addr, pte_val(pte), rmd->mfn);
> +
> +	set_pte_at(rmd->mm, addr, ptep, pte);
> +
> +	rmd->mfn++;
> +	return 0;
> +}
> +
>  int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
>  			       unsigned long addr,
>  			       unsigned long mfn, int nr,
>  			       pgprot_t prot, unsigned domid)
>  {
> -	return -ENOSYS;
> +	int i, rc = 0;
> +	struct remap_data rmd = {
> +		.mm = vma->vm_mm,
> +		.prot = prot,
> +	};
> +	struct xen_add_to_physmap xatp = {
> +		.domid = DOMID_SELF,
> +		.space = XENMAPSPACE_gmfn_foreign,
> +
> +		.foreign_domid = domid,
> +	};
> +
> +	if (foreign_map_buffer_pfn + nr > ((FOREIGN_MAP_BUFFER +
> +					FOREIGN_MAP_BUFFER_SIZE)>>PAGE_SHIFT)) {
> +		pr_crit("RAM out of foreign map buffers...\n");
> +		return -EBUSY;
> +	}
> +
> +	for (i = 0; i < nr; i++) {
> +		xatp.idx = mfn + i;
> +		xatp.gpfn = foreign_map_buffer_pfn + i;
> +		rc = HYPERVISOR_memory_op(XENMEM_add_to_physmap, &xatp);
> +		if (rc != 0) {
> +			pr_crit("foreign map add_to_physmap failed, err=%d\n", rc);
> +			goto out;
> +		}
> +	}
> +
> +	rmd.mfn = foreign_map_buffer_pfn;
> +	rc = apply_to_page_range(vma->vm_mm,
> +				 addr,
> +				 (unsigned long)nr << PAGE_SHIFT,
> +				 remap_area_mfn_pte_fn, &rmd);
> +	if (rc != 0) {
> +		pr_crit("apply_to_page_range failed rc=%d\n", rc);
> +		goto out;
> +	}
> +
> +	foreign_map_buffer_pfn += nr;
> +out:
> +	return rc;
>  }
>  EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_range);
>  
> diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
> index 85226cb..3e15c22 100644
> --- a/drivers/xen/privcmd.c
> +++ b/drivers/xen/privcmd.c
> @@ -20,6 +20,8 @@
>  #include <linux/pagemap.h>
>  #include <linux/seq_file.h>
>  #include <linux/miscdevice.h>
> +#include <linux/resource.h>
> +#include <linux/ioport.h>
>  
>  #include <asm/pgalloc.h>
>  #include <asm/pgtable.h>
> @@ -196,9 +198,6 @@ static long privcmd_ioctl_mmap(void __user *udata)
>  	LIST_HEAD(pagelist);
>  	struct mmap_mfn_state state;
>  
> -	if (!xen_initial_domain())
> -		return -EPERM;
> -
>  	if (copy_from_user(&mmapcmd, udata, sizeof(mmapcmd)))
>  		return -EFAULT;
>  
> @@ -286,9 +285,6 @@ static long privcmd_ioctl_mmap_batch(void __user *udata)
>  	LIST_HEAD(pagelist);
>  	struct mmap_batch_state state;
>  
> -	if (!xen_initial_domain())
> -		return -EPERM;
> -
>  	if (copy_from_user(&m, udata, sizeof(m)))
>  		return -EFAULT;
>  
> @@ -365,6 +361,11 @@ static long privcmd_ioctl(struct file *file,
>  	return ret;
>  }
>  
> +static void privcmd_close(struct vm_area_struct *vma)
> +{
> +	/* TODO: unmap VMA */
> +}
> +
>  static int privcmd_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
>  {
>  	printk(KERN_DEBUG "privcmd_fault: vma=%p %lx-%lx, pgoff=%lx, uv=%p\n",
> @@ -375,7 +376,8 @@ static int privcmd_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
>  }
>  
>  static struct vm_operations_struct privcmd_vm_ops = {
> -	.fault = privcmd_fault
> +	.fault = privcmd_fault,
> +	.close = privcmd_close,
>  };
>  
>  static int privcmd_mmap(struct file *file, struct vm_area_struct *vma)
> diff --git a/drivers/xen/xenfs/super.c b/drivers/xen/xenfs/super.c
> index a84b53c..edbe22f 100644
> --- a/drivers/xen/xenfs/super.c
> +++ b/drivers/xen/xenfs/super.c
> @@ -12,6 +12,7 @@
>  #include <linux/module.h>
>  #include <linux/fs.h>
>  #include <linux/magic.h>
> +#include <linux/ioport.h>
>  
>  #include <xen/xen.h>
>  
> @@ -80,6 +81,8 @@ static const struct file_operations capabilities_file_ops = {
>  	.llseek = default_llseek,
>  };
>  
> +extern struct resource foreign_map_resource;
> +
>  static int xenfs_fill_super(struct super_block *sb, void *data, int silent)
>  {
>  	static struct tree_descr xenfs_files[] = {
> @@ -100,6 +103,10 @@ static int xenfs_fill_super(struct super_block *sb, void *data, int silent)
>  				  &xsd_kva_file_ops, NULL, S_IRUSR|S_IWUSR);
>  		xenfs_create_file(sb, sb->s_root, "xsd_port",
>  				  &xsd_port_file_ops, NULL, S_IRUSR|S_IWUSR);
> +		rc = request_resource(&iomem_resource, &foreign_map_resource);
> +		if (rc < 0)
> +			pr_crit("failed to register foreign map resource\n");
> +		rc = 0; /* ignore */
>  	}
>  
>  	return rc;
> diff --git a/include/xen/interface/memory.h b/include/xen/interface/memory.h
> index b66d04c..dd2ffe0 100644
> --- a/include/xen/interface/memory.h
> +++ b/include/xen/interface/memory.h
> @@ -163,12 +163,19 @@ struct xen_add_to_physmap {
>      /* Which domain to change the mapping for. */
>      domid_t domid;
>  
> -    /* Number of pages to go through for gmfn_range */
> -    uint16_t    size;
> +    union {
> +        /* Number of pages to go through for gmfn_range */
> +        uint16_t    size;
> +        /* IFF gmfn_foreign */
> +        domid_t foreign_domid;
> +    };
>  
>      /* Source mapping space. */
> -#define XENMAPSPACE_shared_info 0 /* shared info page */
> -#define XENMAPSPACE_grant_table 1 /* grant table page */
> +#define XENMAPSPACE_shared_info  0 /* shared info page */
> +#define XENMAPSPACE_grant_table  1 /* grant table page */
> +#define XENMAPSPACE_gmfn         2 /* GMFN */
> +#define XENMAPSPACE_gmfn_range   3 /* GMFN range */
> +#define XENMAPSPACE_gmfn_foreign 4 /* GMFN from another guest */
>      unsigned int space;
>  
>      /* Index into source mapping space. */
> -- 
> 1.7.2.5

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 20:15:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 20:15:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyqAk-0004zG-Gs; Tue, 07 Aug 2012 20:14:34 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ben.guthro@gmail.com>) id 1SyqAh-0004zB-4n
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 20:14:32 +0000
Received: from [85.158.138.51:48616] by server-6.bemta-3.messagelabs.com id
	15/A5-02321-62771205; Tue, 07 Aug 2012 20:14:30 +0000
X-Env-Sender: ben.guthro@gmail.com
X-Msg-Ref: server-13.tower-174.messagelabs.com!1344370460!9396063!1
X-Originating-IP: [209.85.161.173]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
  RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31008 invoked from network); 7 Aug 2012 20:14:21 -0000
Received: from mail-gg0-f173.google.com (HELO mail-gg0-f173.google.com)
	(209.85.161.173)
	by server-13.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 20:14:21 -0000
Received: by ggna5 with SMTP id a5so20875ggn.32
	for <xen-devel@lists.xen.org>; Tue, 07 Aug 2012 13:14:20 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=4pUaKMITuDLEOHc/ziQyE1criF7BPdF4udsLqgPPJK8=;
	b=m2dmrg9/1KrKu397uzBzoLiuOr1tIqOJ103qY0ijHQYTp65lfh+VSQzsj5VK+GBfKm
	i5vDMPE0+/OCDjqFk2BXyxNKuCf+Dyts+n5C3mAmes04wvGl18ksGl+4gXeqBpOmL43J
	2TtJm1E2IyY7+IiU3A3j3kMM2O+MVO3J+9LSpy1VhMjCW44r23ebXcpzU0uwsVCbZAmU
	ABJsB1pXCQcj62X91q7w2p7zz7WY5Y+ZHQ7t/5MZ7Yw3d3GZoCLuWfikaqbeuQAXMoUH
	PsKff++6nI5rdDrWpmYhIVE5Hrj6qKGWsLZLAv8WikIp/JKsfhOQLhSbQa+VMvvDcBNU
	w3hQ==
MIME-Version: 1.0
Received: by 10.50.15.196 with SMTP id z4mr3874908igc.52.1344370459404; Tue,
	07 Aug 2012 13:14:19 -0700 (PDT)
Received: by 10.64.6.4 with HTTP; Tue, 7 Aug 2012 13:14:19 -0700 (PDT)
In-Reply-To: <CAOvdn6XcRGKtJxGwJO8J+ZBJr89wfTLc1woW5rhtMM540H9h1w@mail.gmail.com>
References: <CAOvdn6WwgBD-2yf1uu3m9-16=Tb5KUrybXf2KibtnxKbP7YtwA@mail.gmail.com>
	<CAOvdn6UBW-yTOMd3YSf5M9JiHLRivyaKL-qzcKG8epszqaKmGg@mail.gmail.com>
	<20120807163320.GC15053@phenom.dumpdata.com>
	<CAOvdn6XcRGKtJxGwJO8J+ZBJr89wfTLc1woW5rhtMM540H9h1w@mail.gmail.com>
Date: Tue, 7 Aug 2012 16:14:19 -0400
X-Google-Sender-Auth: 3HJxXkrYQnLQkUfaw_dzffIw7Oo
Message-ID: <CAOvdn6XUoSeGoo7X=AJ1kpDxZB9HZEv+TJ4v-=FHcyR4=CHK-w@mail.gmail.com>
From: Ben Guthro <ben@guthro.net>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Content-Type: multipart/mixed; boundary=14dae9340a33ade82404c6b2a306
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen4.2 S3 regression?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--14dae9340a33ade82404c6b2a306
Content-Type: text/plain; charset=ISO-8859-1

Any suggestions on how best to chase this down?

The first S3 suspend/resume cycle works, but the second does not.

On the second try, I never get any interrupts delivered to ahci.
(at least according to /proc/interrupts)


syslog traces from the first (good) and the second (bad) are attached,
as well as the output from the "*" debug Ctrl+a handler in both cases.




On Tue, Aug 7, 2012 at 12:48 PM, Ben Guthro <ben@guthro.net> wrote:
> No - the issue seems to follow xen-4.2
>
> my test matrix looks as such:
>
> Xen     Linux               S3 result
> 4.0.3   3.2.23              OK
> 4.0.3   3.5                 OK
> 4.2     3.2.23              FAIL
> 4.2     3.5                 FAIL
> 4.2     3.2.23 pci=nomsi    OK
> 4.2     3.5 pci=nomsi       (untested)
>
>
>
>
> On Tue, Aug 7, 2012 at 12:33 PM, Konrad Rzeszutek Wilk
> <konrad.wilk@oracle.com> wrote:
>> On Tue, Aug 07, 2012 at 12:21:22PM -0400, Ben Guthro wrote:
>>> It looks like this regression may be related to MSI handling.
>>>
>>> "pci=nomsi" on the kernel command line seems to bypass the issue.
>>>
>>> Clearly, legacy interrupts are not ideal.
>>
>> This is with v3.5 kernel right? With the earlier one you did not have
>> this issue?
>>>
>>>
>>> On Tue, Aug 7, 2012 at 11:04 AM, Ben Guthro <ben@guthro.net> wrote:
>>> > I have been doing some experiments in upgrading the Xen version in a
>>> > future version of XenClient Enterprise, and I've run into a
>>> > regression that I wonder whether anyone else has seen.
>>> >
>>> > dom0 suspend/resume (S3) does not seem to be working for me.
>>> >
>>> > In swapping out components of the system, the common failure seems to
>>> > be when I use Xen-4.2 (upgraded from Xen-4.0.3)
>>> >
>>> > The first suspend seems to mostly work, but subsequent ones always
>>> > resume improperly.
>>> > By "improperly" I mean I see I/O failures and stalls of many processes.
>>> >
>>> > Below is a log excerpt of 2 S3 attempts.
>>> >
>>> >
>>> > Has anyone else seen these failures?
>>> >
>>> > - Ben
>>> >
>>> >
>>> > (XEN) Preparing system for ACPI S3 state.
>>> > (XEN) Disabling non-boot CPUs ...
>>> > (XEN) Breaking vcpu affinity for domain 0 vcpu 1
>>> > (XEN) Breaking vcpu affinity for domain 0 vcpu 2
>>> > (XEN) Breaking vcpu affinity for domain 0 vcpu 3
>>> > (XEN) Entering ACPI S3 state.
>>> > (XEN) mce_intel.c:1239: MCA Capability: BCAST 1 SER 0 CMCI 1 firstbank
>>> > 0 extended MCE MSR 0
>>> > (XEN) CPU0 CMCI LVT vector (0xf1) already installed
>>> > (XEN) Finishing wakeup from ACPI S3 state.
>>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>>> > (XEN) Enabling non-boot CPUs  ...
>>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>>> > [   36.440696] [drm:pch_irq_handler] *ERROR* PCH poison interrupt
>>> > (XEN) Preparing system for ACPI S3 state.
>>> > (XEN) Disabling non-boot CPUs ...
>>> > (XEN) Entering ACPI S3 state.
>>> > (XEN) mce_intel.c:1239: MCA Capability: BCAST 1 SER 0 CMCI 1 firstbank
>>> > 0 extended MCE MSR 0
>>> > (XEN) CPU0 CMCI LVT vector (0xf1) already installed
>>> > (XEN) Finishing wakeup from ACPI S3 state.
>>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>>> > (XEN) Enabling non-boot CPUs  ...
>>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>>> > [   65.893235] [drm:pch_irq_handler] *ERROR* PCH poison interrupt
>>> > [   66.508829] ata3.00: revalidation failed (errno=-5)
>>> > [   66.508861] ata1.00: revalidation failed (errno=-5)
>>> > [   76.858815] ata3.00: revalidation failed (errno=-5)
>>> > [   76.898807] ata1.00: revalidation failed (errno=-5)
>>> > [  107.208817] ata3.00: revalidation failed (errno=-5)
>>> > [  107.288807] ata1.00: revalidation failed (errno=-5)
>>> > [  107.718866] pm_op(): scsi_bus_resume_common+0x0/0x60 returns 262144
>>> > [  107.718877] PM: Device 0:0:0:0 failed to resume async: error 262144
>>> > [  107.718913] end_request: I/O error, dev sda, sector 35193296
>>> > [  107.718919] Buffer I/O error on device dm-5, logical block 7690
>>> > [  107.718947] end_request: I/O error, dev sda, sector 35657184
>>> > [  107.718965] end_request: I/O error, dev sda, sector 246202760
>>> > [  107.718968] Buffer I/O error on device dm-6, logical block 26252801
>>> > [  107.718995] end_request: I/O error, dev sda, sector 254548368
>>> > [  107.719009] Aborting journal on device dm-6-8.
>>> > [  107.719021] end_request: I/O error, dev sda, sector 35164192
>>> > [  107.719023] Buffer I/O error on device dm-5, logical block 4052
>>> > [  107.719063] Aborting journal on device dm-5-8.
>>> > [  107.719085] end_request: I/O error, dev sda, sector 254546304
>>> > [  107.719097] Buffer I/O error on device dm-6, logical block 27295744
>>> > [  107.719129] JBD2: I/O error detected when updating journal
>>> > superblock for dm-6-8.
>>> > [  107.719141] end_request: I/O error, dev sda, sector 35656064
>>> > [  107.719146] Buffer I/O error on device dm-5, logical block 65536
>>> > [  107.719168] JBD2: I/O error detected when updating journal
>>> > superblock for dm-5-8.
>>> > [  107.870082] end_request: I/O error, dev sda, sector 35131776
>>> > [  107.875825] Buffer I/O error on device dm-5, logical block 0
>>> > [  107.881805] end_request: I/O error, dev sda, sector 35131776
>>> > [  107.887637] Buffer I/O error on device dm-5, logical block 0
>>> > [  107.893573] EXT4-fs error (device dm-5): ext4_journal_start_sb:327:
>>> > [  107.893579] EXT4-fs (dm-5): I/O error while writing superblock
>>> > [  107.893582] EXT4-fs error (device dm-5): ext4_journal_start_sb:327:
>>> > Detected aborted journal
>>> > [  107.893584] EXT4-fs (dm-5): Remounting filesystem read-only
>>> > [  107.893617] end_request: I/O error, dev sda, sector 35131776
>>> > [  107.893620] Buffer I/O error on device dm-5, logical block 0
>>> > [  107.893749] end_request: I/O error, dev sda, sector 36180352
>>> > [  107.893752] Buffer I/O error on device dm-6, logical block 0
>>> > [  107.893762] EXT4-fs error (device dm-6): ext4_journal_start_sb:327:
>>> > Detected aborted journal
>>> > [  107.893765] EXT4-fs (dm-6): Remounting filesystem read-only
>>> > [  107.893766] EXT4-fs (dm-6): previous I/O error to superblock detected
>>> > [  107.893784] end_request: I/O error, dev sda, sector 36180352
>>> > [  107.893787] Buffer I/O error on device dm-6, logical block 0
>>> > [  107.894467] EXT4-fs error (device dm-5): ext4_journal_start_sb:327:
>>> > Detected aborted journal
>>> > [  108.669763] end_request: I/O error, dev sda, sector 25957784
>>> > [  108.675555] Aborting journal on device dm-3-8.
>>> > [  108.680246] end_request: I/O error, dev sda, sector 25956736
>>> > [  108.686099] JBD2: I/O error detected when updating journal
>>> > superblock for dm-3-8.
>>> > [  108.693908] journal commit I/O error
>>> > [  108.755829] end_request: I/O error, dev sda, sector 17305984
>>> > [  108.761600] EXT4-fs error (device dm-3): ext4_journal_start_sb:327:
>>> > Detected aborted journal
>>> > [  108.770340] EXT4-fs (dm-3): Remounting filesystem read-only
>>> > [  108.776159] EXT4-fs (dm-3): previous I/O error to superblock detected
>>> > [  108.782904] end_request: I/O error, dev sda, sector 17305984
>>> > [  109.660011] end_request: I/O error, dev sda, sector 358788
>>> > [  109.665572] Buffer I/O error on device dm-1, logical block 46082
>>> > [  109.682479] end_request: I/O error, dev sda, sector 18832256
>>> > [  109.688246] end_request: I/O error, dev sda, sector 18832256
>>> > [  109.709559] end_request: I/O error, dev sda, sector 357762
>>> > [  109.715120] Buffer I/O error on device dm-1, logical block 45569
>>> > [  109.721506] end_request: I/O error, dev sda, sector 358790
>>> > [  109.727114] Buffer I/O error on device dm-1, logical block 46083
>>> > [  109.743714] end_request: I/O error, dev sda, sector 18832256
>>> > [  109.755555] end_request: I/O error, dev sda, sector 18832256
>>> > [  109.886187] end_request: I/O error, dev sda, sector 357764
>>> > [  109.891756] Buffer I/O error on device dm-1, logical block 45570
>>> > [  109.908344] end_request: I/O error, dev sda, sector 18832256
>>> > [  109.928369] end_request: I/O error, dev sda, sector 349574
>>> > [  109.933938] Buffer I/O error on device dm-1, logical block 41475
>>> > [  109.950336] end_request: I/O error, dev sda, sector 18832256
>>> > [  115.378875] end_request: I/O error, dev sda, sector 365000
>>> > [  115.384445] Aborting journal on device dm-1-8.
>>> > [  115.389120] end_request: I/O error, dev sda, sector 364930
>>> > [  115.394798] Buffer I/O error on device dm-1, logical block 49153
>>> > [  115.401101] JBD2: I/O error detected when updating journal
>>> > superblock for dm-1-8.
>>> > [  207.207426] end_request: I/O error, dev sda, sector 246192376
>>> > [  207.213313] end_request: I/O error, dev sda, sector 246192376
>>> > [  207.903181] end_request: I/O error, dev sda, sector 246192376
>>> > [  209.234399] end_request: I/O error, dev sda, sector 18518400
>>> > [  209.240221] end_request: I/O error, dev sda, sector 18518400
>>>
>>> _______________________________________________
>>> Xen-devel mailing list
>>> Xen-devel@lists.xen.org
>>> http://lists.xen.org/xen-devel

--14dae9340a33ade82404c6b2a306
Content-Type: text/plain; charset=US-ASCII; name="xen-dump-bad.txt"
Content-Disposition: attachment; filename="xen-dump-bad.txt"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_h5lfcta50

KFhFTikgKioqIFNlcmlhbCBpbnB1dCAtPiBYZW4gKHR5cGUgJ0NUUkwtYScgdGhyZWUgdGltZXMg
dG8gc3dpdGNoIGlucHV0IHRvIERPTTApCihYRU4pICcqJyBwcmVzc2VkIC0+IGZpcmluZyBhbGwg
ZGlhZ25vc3RpYyBrZXloYW5kbGVycwooWEVOKSBbZDogZHVtcCByZWdpc3RlcnNdCihYRU4pICdk
JyBwcmVzc2VkIC0+IGR1bXBpbmcgcmVnaXN0ZXJzCihYRU4pIAooWEVOKSAqKiogRHVtcGluZyBD
UFUwIGhvc3Qgc3RhdGU6ICoqKgooWEVOKSAtLS0tWyBYZW4tNC4yLjAtcmMyLXByZSAgeDg2XzY0
ICBkZWJ1Zz15ICBUYWludGVkOiAgICBDIF0tLS0tCihYRU4pIENQVTogICAgMAooWEVOKSBSSVA6
ICAgIGUwMDg6WzxmZmZmODJjNDgwMTNkNzdlPl0gbnMxNjU1MF9wb2xsKzB4MjcvMHgzMwooWEVO
KSBSRkxBR1M6IDAwMDAwMDAwMDAwMTAyODYgICBDT05URVhUOiBoeXBlcnZpc29yCihYRU4pIHJh
eDogZmZmZjgyYzQ4MDMwMjVhMCAgIHJieDogZmZmZjgyYzQ4MDMwMjQ4MCAgIHJjeDogMDAwMDAw
MDAwMDAwMDAwMwooWEVOKSByZHg6IDAwMDAwMDAwMDAwMDAwMDAgICByc2k6IGZmZmY4MmM0ODAy
ZTI1YzggICByZGk6IGZmZmY4MmM0ODAyNzE4MDAKKFhFTikgcmJwOiBmZmZmODJjNDgwMmI3ZTMw
ICAgcnNwOiBmZmZmODJjNDgwMmI3ZTMwICAgcjg6ICAwMDAwMDAwMDAwMDAwMDAxCihYRU4pIHI5
OiAgZmZmZjgzMDE0ODk5YWVhOCAgIHIxMDogMDAwMDAwNGJmN2Q0MTc4MyAgIHIxMTogMDAwMDAw
MDAwMDAwMDI0NgooWEVOKSByMTI6IGZmZmY4MmM0ODAyNzE4MDAgICByMTM6IGZmZmY4MmM0ODAx
M2Q3NTcgICByMTQ6IDAwMDAwMDRiZjdjNjg5NGMKKFhFTikgcjE1OiBmZmZmODJjNDgwMzAyMzA4
ICAgY3IwOiAwMDAwMDAwMDgwMDUwMDNiICAgY3I0OiAwMDAwMDAwMDAwMTAyNmYwCihYRU4pIGNy
MzogMDAwMDAwMDBhYTJjNTAwMCAgIGNyMjogZmZmZjg4MDAyNmJhYTEwOAooWEVOKSBkczogMDAw
MCAgIGVzOiAwMDAwICAgZnM6IDAwMDAgICBnczogMDAwMCAgIHNzOiBlMDEwICAgY3M6IGUwMDgK
KFhFTikgWGVuIHN0YWNrIHRyYWNlIGZyb20gcnNwPWZmZmY4MmM0ODAyYjdlMzA6CihYRU4pICAg
IGZmZmY4MmM0ODAyYjdlNjAgZmZmZjgyYzQ4MDEyODE3ZiAwMDAwMDAwMDAwMDAwMDAyIGZmZmY4
MmM0ODAyZTI1YzgKKFhFTikgICAgZmZmZjgyYzQ4MDMwMjQ4MCBmZmZmODMwMTQ4OTkyZDQwIGZm
ZmY4MmM0ODAyYjdlYjAgZmZmZjgyYzQ4MDEyODI4MQooWEVOKSAgICBmZmZmODJjNDgwMmI3ZjE4
IDAwMDAwMDAwMDAwMDAyNDYgMDAwMDAwNGJmN2Q0MTc4MyBmZmZmODJjNDgwMmQ4ODgwCihYRU4p
ICAgIGZmZmY4MmM0ODAyZDg4ODAgZmZmZjgyYzQ4MDJiN2YxOCBmZmZmZmZmZmZmZmZmZmZmIGZm
ZmY4MmM0ODAzMDIzMDgKKFhFTikgICAgZmZmZjgyYzQ4MDJiN2VlMCBmZmZmODJjNDgwMTI1NDA1
IGZmZmY4MmM0ODAyYjdmMTggZmZmZjgyYzQ4MDJiN2YxOAooWEVOKSAgICAwMDAwMDAwMGZmZmZm
ZmZmIDAwMDAwMDAwMDAwMDAwMDIgZmZmZjgyYzQ4MDJiN2VmMCBmZmZmODJjNDgwMTI1NDg0CihY
RU4pICAgIGZmZmY4MmM0ODAyYjdmMTAgZmZmZjgyYzQ4MDE1OGMwNSBmZmZmODMwMGFhNTg0MDAw
IGZmZmY4MzAwYWEwZmMwMDAKKFhFTikgICAgZmZmZjgyYzQ4MDJiN2RhOCAwMDAwMDAwMDAwMDAw
MDAwIGZmZmZmZmZmZmZmZmZmZmYgMDAwMDAwMDAwMDAwMDAwMAooWEVOKSAgICBmZmZmZmZmZjgx
YWFmZGEwIGZmZmZmZmZmODFhMDFlZTggZmZmZmZmZmY4MWEwMWZkOCAwMDAwMDAwMDAwMDAwMjQ2
CihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDEgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgZmZmZmZmZmY4MTAwMTNhYSAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwZGVhZGJlZWYgMDAwMDAwMDBkZWFkYmVlZgooWEVOKSAgICAwMDAwMDEw
MDAwMDAwMDAwIGZmZmZmZmZmODEwMDEzYWEgMDAwMDAwMDAwMDAwZTAzMyAwMDAwMDAwMDAwMDAw
MjQ2CihYRU4pICAgIGZmZmZmZmZmODFhMDFlZDAgMDAwMDAwMDAwMDAwZTAyYiAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgZmZmZjgzMDBhYTU4NDAwMAooWEVOKSAgICAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgWGVuIGNhbGwgdHJhY2U6CihYRU4p
ICAgIFs8ZmZmZjgyYzQ4MDEzZDc3ZT5dIG5zMTY1NTBfcG9sbCsweDI3LzB4MzMKKFhFTikgICAg
WzxmZmZmODJjNDgwMTI4MTdmPl0gZXhlY3V0ZV90aW1lcisweDRlLzB4NmMKKFhFTikgICAgWzxm
ZmZmODJjNDgwMTI4MjgxPl0gdGltZXJfc29mdGlycV9hY3Rpb24rMHhlNC8weDIxYQooWEVOKSAg
ICBbPGZmZmY4MmM0ODAxMjU0MDU+XSBfX2RvX3NvZnRpcnErMHg5NS8weGEwCihYRU4pICAgIFs8
ZmZmZjgyYzQ4MDEyNTQ4ND5dIGRvX3NvZnRpcnErMHgyNi8weDI4CihYRU4pICAgIFs8ZmZmZjgy
YzQ4MDE1OGMwNT5dIGlkbGVfbG9vcCsweDZmLzB4NzEKKFhFTikgICAgCihYRU4pICoqKiBEdW1w
aW5nIENQVTEgaG9zdCBzdGF0ZTogKioqCihYRU4pIC0tLS1bIFhlbi00LjIuMC1yYzItcHJlICB4
ODZfNjQgIGRlYnVnPXkgIFRhaW50ZWQ6ICAgIEMgXS0tLS0KKFhFTikgQ1BVOiAgICAxCihYRU4p
IFJJUDogICAgZTAwODpbPGZmZmY4MmM0ODAxNTgzYzQ+XSBkZWZhdWx0X2lkbGUrMHg5OS8weDll
CihYRU4pIFJGTEFHUzogMDAwMDAwMDAwMDAwMDI0NiAgIENPTlRFWFQ6IGh5cGVydmlzb3IKKFhF
TikgcmF4OiBmZmZmODJjNDgwMzAyMzcwICAgcmJ4OiBmZmZmODMwMTNlNjdmZjE4ICAgcmN4OiAw
MDAwMDAwMDAwMDAwMDAxCihYRU4pIHJkeDogMDAwMDAwM2NiZDM2OGQ4MCAgIHJzaTogMDAwMDAw
MDA4Y2U0Y2E3YSAgIHJkaTogMDAwMDAwMDAwMDAwMDAwMQooWEVOKSByYnA6IGZmZmY4MzAxM2U2
N2ZlZjAgICByc3A6IGZmZmY4MzAxM2U2N2ZlZjAgICByODogIDAwMDAwMDUwZWY3ZWY1NzAKKFhF
Tikgcjk6ICBmZmZmODMwMGE4M2ZkMDYwICAgcjEwOiAwMDAwMDAwMGRlYWRiZWVmICAgcjExOiAw
MDAwMDAwMDAwMDAwMjQ2CihYRU4pIHIxMjogZmZmZjgzMDEzZTY3ZmYxOCAgIHIxMzogMDAwMDAw
MDBmZmZmZmZmZiAgIHIxNDogMDAwMDAwMDAwMDAwMDAwMgooWEVOKSByMTU6IGZmZmY4MzAxM2Q2
NmIwODggICBjcjA6IDAwMDAwMDAwODAwNTAwM2IgICBjcjQ6IDAwMDAwMDAwMDAxMDI2ZjAKKFhF
TikgY3IzOiAwMDAwMDAwMTNkOTZjMDAwICAgY3IyOiBmZmZmODgwMDI2YjhlMWU4CihYRU4pIGRz
OiAwMDJiICAgZXM6IDAwMmIgICBmczogMDAwMCAgIGdzOiAwMDAwICAgc3M6IGUwMTAgICBjczog
ZTAwOAooWEVOKSBYZW4gc3RhY2sgdHJhY2UgZnJvbSByc3A9ZmZmZjgzMDEzZTY3ZmVmMDoKKFhF
TikgICAgZmZmZjgzMDEzZTY3ZmYxMCBmZmZmODJjNDgwMTU4YmY4IGZmZmY4MzAwYWEwZmUwMDAg
ZmZmZjgzMDBhODNmZDAwMAooWEVOKSAgICBmZmZmODMwMTNlNjdmZGE4IDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAxCihYRU4pICAgIGZmZmZmZmZmODFh
YWZkYTAgZmZmZjg4MDAyNzg2ZGVlMCBmZmZmODgwMDI3ODZkZmQ4IDAwMDAwMDAwMDAwMDAyNDYK
KFhFTikgICAgMDAwMDAwMDAwMDAwMDAwMSAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMAooWEVOKSAgICBmZmZmZmZmZjgxMDAxM2FhIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDBkZWFkYmVlZiAwMDAwMDAwMGRlYWRiZWVmCihYRU4pICAgIDAwMDAwMTAw
MDAwMDAwMDAgZmZmZmZmZmY4MTAwMTNhYSAwMDAwMDAwMDAwMDBlMDMzIDAwMDAwMDAwMDAwMDAy
NDYKKFhFTikgICAgZmZmZjg4MDAyNzg2ZGVjOCAwMDAwMDAwMDAwMDBlMDJiIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMAooWEVOKSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMSBmZmZmODMwMGFhMGZlMDAwCihYRU4pICAgIDAwMDAw
MDNjYmQzNjhkODAgMDAwMDAwMDAwMDAwMDAwMAooWEVOKSBYZW4gY2FsbCB0cmFjZToKKFhFTikg
ICAgWzxmZmZmODJjNDgwMTU4M2M0Pl0gZGVmYXVsdF9pZGxlKzB4OTkvMHg5ZQooWEVOKSAgICBb
PGZmZmY4MmM0ODAxNThiZjg+XSBpZGxlX2xvb3ArMHg2Mi8weDcxCihYRU4pICAgIAooWEVOKSAq
KiogRHVtcGluZyBDUFUyIGhvc3Qgc3RhdGU6ICoqKgooWEVOKSAtLS0tWyBYZW4tNC4yLjAtcmMy
LXByZSAgeDg2XzY0ICBkZWJ1Zz15ICBUYWludGVkOiAgICBDIF0tLS0tCihYRU4pIENQVTogICAg
MgooWEVOKSBSSVA6ICAgIGUwMDg6WzxmZmZmODJjNDgwMTU4M2M0Pl0gZGVmYXVsdF9pZGxlKzB4
OTkvMHg5ZQooWEVOKSBSRkxBR1M6IDAwMDAwMDAwMDAwMDAyNDYgICBDT05URVhUOiBoeXBlcnZp
c29yCihYRU4pIHJheDogZmZmZjgyYzQ4MDMwMjM3MCAgIHJieDogZmZmZjgzMDE0ODkzZmYxOCAg
IHJjeDogMDAwMDAwMDAwMDAwMDAwMgooWEVOKSByZHg6IDAwMDAwMDNjY2NmYjVkODAgICByc2k6
IDAwMDAwMDAwOGRhYTZiYjAgICByZGk6IDAwMDAwMDAwMDAwMDAwMDIKKFhFTikgcmJwOiBmZmZm
ODMwMTQ4OTNmZWYwICAgcnNwOiBmZmZmODMwMTQ4OTNmZWYwICAgcjg6ICAwMDAwMDA1MTExZGEx
Njk0CihYRU4pIHI5OiAgZmZmZjgzMDBhODNmYzA2MCAgIHIxMDogMDAwMDAwMDBkZWFkYmVlZiAg
IHIxMTogMDAwMDAwMDAwMDAwMDI0NgooWEVOKSByMTI6IGZmZmY4MzAxNDg5M2ZmMTggICByMTM6
IDAwMDAwMDAwZmZmZmZmZmYgICByMTQ6IDAwMDAwMDAwMDAwMDAwMDIKKFhFTikgcjE1OiBmZmZm
ODMwMTRkMmI4MDg4ICAgY3IwOiAwMDAwMDAwMDgwMDUwMDNiICAgY3I0OiAwMDAwMDAwMDAwMTAy
NmYwCihYRU4pIGNyMzogMDAwMDAwMDE0Y2RhYjAwMCAgIGNyMjogZmZmZjg4MDAwMmU2Mzc2OAoo
WEVOKSBkczogMDAyYiAgIGVzOiAwMDJiICAgZnM6IDAwMDAgICBnczogMDAwMCAgIHNzOiBlMDEw
ICAgY3M6IGUwMDgKKFhFTikgWGVuIHN0YWNrIHRyYWNlIGZyb20gcnNwPWZmZmY4MzAxNDg5M2Zl
ZjA6CihYRU4pICAgIGZmZmY4MzAxNDg5M2ZmMTAgZmZmZjgyYzQ4MDE1OGJmOCBmZmZmODMwMGE4
NWM3MDAwIGZmZmY4MzAwYTgzZmMwMDAKKFhFTikgICAgZmZmZjgzMDE0ODkzZmRhOCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMgooWEVOKSAgICBmZmZm
ZmZmZjgxYWFmZGEwIGZmZmY4ODAwMjc4NmZlZTAgZmZmZjg4MDAyNzg2ZmZkOCAwMDAwMDAwMDAw
MDAwMjQ2CihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDEgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgZmZmZmZmZmY4MTAwMTNhYSAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwZGVhZGJlZWYgMDAwMDAwMDBkZWFkYmVlZgooWEVOKSAgICAw
MDAwMDEwMDAwMDAwMDAwIGZmZmZmZmZmODEwMDEzYWEgMDAwMDAwMDAwMDAwZTAzMyAwMDAwMDAw
MDAwMDAwMjQ2CihYRU4pICAgIGZmZmY4ODAwMjc4NmZlYzggMDAwMDAwMDAwMDAwZTAyYiAzZWMy
ZTMyNjgwOTJlMTY3IGI5N2NlYzViOWU2OGRjOTMKKFhFTikgICAgY2M5ODA3OTI0OWZiNzNiNSBl
MWEzNWRlNGZkMTYxYTZmIGUxZTM1ZDY0MDAwMDAwMDIgZmZmZjgzMDBhODVjNzAwMAooWEVOKSAg
ICAwMDAwMDAzY2NjZmI1ZDgwIGUyNGQ1YTM4ZjJhZTA1MWUKKFhFTikgWGVuIGNhbGwgdHJhY2U6
CihYRU4pICAgIFs8ZmZmZjgyYzQ4MDE1ODNjND5dIGRlZmF1bHRfaWRsZSsweDk5LzB4OWUKKFhF
TikgICAgWzxmZmZmODJjNDgwMTU4YmY4Pl0gaWRsZV9sb29wKzB4NjIvMHg3MQooWEVOKSAgICAK
KFhFTikgKioqIER1bXBpbmcgQ1BVMyBob3N0IHN0YXRlOiAqKioKKFhFTikgLS0tLVsgWGVuLTQu
Mi4wLXJjMi1wcmUgIHg4Nl82NCAgZGVidWc9eSAgVGFpbnRlZDogICAgQyBdLS0tLQooWEVOKSBD
UFU6ICAgIDMKKFhFTikgUklQOiAgICBlMDA4Ols8ZmZmZjgyYzQ4MDE1ODNjND5dIGRlZmF1bHRf
aWRsZSsweDk5LzB4OWUKKFhFTikgUkZMQUdTOiAwMDAwMDAwMDAwMDAwMjQ2ICAgQ09OVEVYVDog
aHlwZXJ2aXNvcgooWEVOKSByYXg6IGZmZmY4MmM0ODAzMDIzNzAgICByYng6IGZmZmY4MzAxNDg5
MmZmMTggICByY3g6IDAwMDAwMDAwMDAwMDAwMDMKKFhFTikgcmR4OiAwMDAwMDAzY2NjYzUyZDgw
ICAgcnNpOiAwMDAwMDAwMDhlNzAzMjUwICAgcmRpOiAwMDAwMDAwMDAwMDAwMDAzCihYRU4pIHJi
cDogZmZmZjgzMDE0ODkyZmVmMCAgIHJzcDogZmZmZjgzMDE0ODkyZmVmMCAgIHI4OiAgMDAwMDAw
NTEzNDhmMTc0NAooWEVOKSByOTogIGZmZmY4MzAwYWE1ODMwNjAgICByMTA6IDAwMDAwMDAwZGVh
ZGJlZWYgICByMTE6IDAwMDAwMDAwMDAwMDAyNDYKKFhFTikgcjEyOiBmZmZmODMwMTQ4OTJmZjE4
ICAgcjEzOiAwMDAwMDAwMGZmZmZmZmZmICAgcjE0OiAwMDAwMDAwMDAwMDAwMDAyCihYRU4pIHIx
NTogZmZmZjgzMDE0Y2Y1NTA4OCAgIGNyMDogMDAwMDAwMDA4MDA1MDAzYiAgIGNyNDogMDAwMDAw
MDAwMDEwMjZmMAooWEVOKSBjcjM6IDAwMDAwMDAxNGNhZjQwMDAgICBjcjI6IGZmZmY4ODAwMDMy
MTMxMDgKKFhFTikgZHM6IDAwMmIgICBlczogMDAyYiAgIGZzOiAwMDAwICAgZ3M6IDAwMDAgICBz
czogZTAxMCAgIGNzOiBlMDA4CihYRU4pIFhlbiBzdGFjayB0cmFjZSBmcm9tIHJzcD1mZmZmODMw
MTQ4OTJmZWYwOgooWEVOKSAgICBmZmZmODMwMTQ4OTJmZjEwIGZmZmY4MmM0ODAxNThiZjggZmZm
ZjgzMDBhODNmZTAwMCBmZmZmODMwMGFhNTgzMDAwCihYRU4pICAgIGZmZmY4MzAxNDg5MmZkYTgg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDMKKFhFTikg
ICAgZmZmZmZmZmY4MWFhZmRhMCBmZmZmODgwMDI3ODgxZWUwIGZmZmY4ODAwMjc4ODFmZDggMDAw
MDAwMDAwMDAwMDI0NgooWEVOKSAgICAwMDAwMDAwMDAwMDAwMDAxIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwCihYRU4pICAgIGZmZmZmZmZmODEwMDEz
YWEgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMGRlYWRiZWVmIDAwMDAwMDAwZGVhZGJlZWYKKFhF
TikgICAgMDAwMDAxMDAwMDAwMDAwMCBmZmZmZmZmZjgxMDAxM2FhIDAwMDAwMDAwMDAwMGUwMzMg
MDAwMDAwMDAwMDAwMDI0NgooWEVOKSAgICBmZmZmODgwMDI3ODgxZWM4IDAwMDAwMDAwMDAwMGUw
MmIgM2VjMmUzMjY4MDkyZTE2NyBiOTdjZWM1YjllNjhkYzkzCihYRU4pICAgIGNjOTgwNzkyNDlm
YjczYjUgZTFhMzVkZTRmZDE2MWE2ZiBlMWUzNWQ2NDAwMDAwMDAzIGZmZmY4MzAwYTgzZmUwMDAK
KFhFTikgICAgMDAwMDAwM2NjY2M1MmQ4MCBlMjRkNWEzOGYyYWUwNTFlCihYRU4pIFhlbiBjYWxs
IHRyYWNlOgooWEVOKSAgICBbPGZmZmY4MmM0ODAxNTgzYzQ+XSBkZWZhdWx0X2lkbGUrMHg5OS8w
eDllCihYRU4pICAgIFs8ZmZmZjgyYzQ4MDE1OGJmOD5dIGlkbGVfbG9vcCsweDYyLzB4NzEKKFhF
TikgICAgCihYRU4pIFswOiBkdW1wIERvbTAgcmVnaXN0ZXJzXQooWEVOKSAnMCcgcHJlc3NlZCAt
PiBkdW1waW5nIERvbTAncyByZWdpc3RlcnMKKFhFTikgKioqIER1bXBpbmcgRG9tMCB2Y3B1IzAg
c3RhdGU6ICoqKgooWEVOKSBSSVA6ICAgIGUwMzM6WzxmZmZmZmZmZjgxMDAxM2FhPl0KKFhFTikg
UkZMQUdTOiAwMDAwMDAwMDAwMDAwMjQ2ICAgRU06IDAgICBDT05URVhUOiBwdiBndWVzdAooWEVO
KSByYXg6IDAwMDAwMDAwMDAwMDAwMDAgICByYng6IGZmZmZmZmZmODFhMDFmZDggICByY3g6IGZm
ZmZmZmZmODEwMDEzYWEKKFhFTikgcmR4OiAwMDAwMDAwMDAwMDAwMDAwICAgcnNpOiAwMDAwMDAw
MGRlYWRiZWVmICAgcmRpOiAwMDAwMDAwMGRlYWRiZWVmCihYRU4pIHJicDogZmZmZmZmZmY4MWEw
MWVlOCAgIHJzcDogZmZmZmZmZmY4MWEwMWVkMCAgIHI4OiAgMDAwMDAwMDAwMDAwMDAwMAooWEVO
KSByOTogIDAwMDAwMDAwMDAwMDAwMDAgICByMTA6IDAwMDAwMDAwMDAwMDAwMDEgICByMTE6IDAw
MDAwMDAwMDAwMDAyNDYKKFhFTikgcjEyOiBmZmZmZmZmZjgxYWFmZGEwICAgcjEzOiAwMDAwMDAw
MDAwMDAwMDAwICAgcjE0OiBmZmZmZmZmZmZmZmZmZmZmCihYRU4pIHIxNTogMDAwMDAwMDAwMDAw
MDAwMCAgIGNyMDogMDAwMDAwMDAwMDAwMDAwOCAgIGNyNDogMDAwMDAwMDAwMDAwMjY2MAooWEVO
KSBjcjM6IDAwMDAwMDAxNDFhMDUwMDAgICBjcjI6IDAwMDA3ZmM5Njk2MjFjNjIKKFhFTikgZHM6
IDAwMDAgICBlczogMDAwMCAgIGZzOiAwMDAwICAgZ3M6IDAwMDAgICBzczogZTAyYiAgIGNzOiBl
MDMzCihYRU4pIEd1ZXN0IHN0YWNrIHRyYWNlIGZyb20gcnNwPWZmZmZmZmZmODFhMDFlZDA6CihY
RU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDBmZmZmZmZmZiBmZmZmZmZmZjgxMDBhNWMw
IGZmZmZmZmZmODFhMDFmMTgKKFhFTikgICAgZmZmZmZmZmY4MTAxYzY2MyBmZmZmZmZmZjgxYTAx
ZmQ4IGZmZmZmZmZmODFhYWZkYTAgZmZmZjg4MDAyZGVlMWEwMAooWEVOKSAgICBmZmZmZmZmZmZm
ZmZmZmZmIGZmZmZmZmZmODFhMDFmNDggZmZmZmZmZmY4MTAxMzIzNiBmZmZmZmZmZmZmZmZmZmZm
CihYRU4pICAgIDhlNmExZGU5NjBlNzVkYzMgMDAwMDAwMDAwMDAwMDAwMCBmZmZmZmZmZjgxYjE1
MTYwIGZmZmZmZmZmODFhMDFmNTgKKFhFTikgICAgZmZmZmZmZmY4MTU1NGY1ZSBmZmZmZmZmZjgx
YTAxZjk4IGZmZmZmZmZmODFhY2NiZjUgZmZmZmZmZmY4MWIxNTE2MAooWEVOKSAgICBkMGEwZjA3
NTJhZDlhMDA4IDAwMDAwMDAwMDBjZGYwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgZmZmZmZmZmY4MWEwMWZiOCBmZmZmZmZmZjgx
YWNjMzRiIGZmZmZmZmZmN2ZmZmZmZmYKKFhFTikgICAgZmZmZmZmZmY4NGIyNTAwMCBmZmZmZmZm
ZjgxYTAxZmY4IGZmZmZmZmZmODFhY2ZlY2MgMDAwMDAwMDAwMDAwMDAwMAooWEVOKSAgICAwMDAw
MDAwMTAwMDAwMDAwIDAwMTAwODAwMDAwMzA2YTQgMWZjOThiNzVlM2I4MjI4MyAwMDAwMDAwMDAw
MDAwMDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMAooWEVOKSAq
KiogRHVtcGluZyBEb20wIHZjcHUjMSBzdGF0ZTogKioqCihYRU4pIFJJUDogICAgZTAzMzpbPGZm
ZmZmZmZmODEwMDEzYWE+XQooWEVOKSBSRkxBR1M6IDAwMDAwMDAwMDAwMDAyNDYgICBFTTogMCAg
IENPTlRFWFQ6IHB2IGd1ZXN0CihYRU4pIHJheDogMDAwMDAwMDAwMDAwMDAwMCAgIHJieDogZmZm
Zjg4MDAyNzg2ZGZkOCAgIHJjeDogZmZmZmZmZmY4MTAwMTNhYQooWEVOKSByZHg6IDAwMDAwMDAw
MDAwMDAwMDAgICByc2k6IDAwMDAwMDAwZGVhZGJlZWYgICByZGk6IDAwMDAwMDAwZGVhZGJlZWYK
KFhFTikgcmJwOiBmZmZmODgwMDI3ODZkZWUwICAgcnNwOiBmZmZmODgwMDI3ODZkZWM4ICAgcjg6
ICAwMDAwMDAwMDAwMDAwMDAwCihYRU4pIHI5OiAgMDAwMDAwMDAwMDAwMDAwMCAgIHIxMDogMDAw
MDAwMDAwMDAwMDAwMSAgIHIxMTogMDAwMDAwMDAwMDAwMDI0NgooWEVOKSByMTI6IGZmZmZmZmZm
ODFhYWZkYTAgICByMTM6IDAwMDAwMDAwMDAwMDAwMDEgICByMTQ6IDAwMDAwMDAwMDAwMDAwMDAK
KFhFTikgcjE1OiAwMDAwMDAwMDAwMDAwMDAwICAgY3IwOiAwMDAwMDAwMDgwMDUwMDNiICAgY3I0
OiAwMDAwMDAwMDAwMDAyNjYwCihYRU4pIGNyMzogMDAwMDAwMDEzZDk2YzAwMCAgIGNyMjogMDAw
MDdmY2I2NjQzZDAzOQooWEVOKSBkczogMDAyYiAgIGVzOiAwMDJiICAgZnM6IDAwMDAgICBnczog
MDAwMCAgIHNzOiBlMDJiICAgY3M6IGUwMzMKKFhFTikgR3Vlc3Qgc3RhY2sgdHJhY2UgZnJvbSBy
c3A9ZmZmZjg4MDAyNzg2ZGVjODoKKFhFTikgICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMGZm
ZmZmZmZmIGZmZmZmZmZmODEwMGE1YzAgZmZmZjg4MDAyNzg2ZGYxMAooWEVOKSAgICBmZmZmZmZm
ZjgxMDFjNjYzIGZmZmY4ODAwMjc4NmRmZDggZmZmZmZmZmY4MWFhZmRhMCAwMDAwMDAwMDAwMDAw
MDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgZmZmZjg4MDAyNzg2ZGY0MCBmZmZmZmZmZjgx
MDEzMjM2IGZmZmZmZmZmODEwMGFkZTkKKFhFTikgICAgYTEwMjFlZjczOWZiMjdkMyAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgZmZmZjg4MDAyNzg2ZGY1MAooWEVOKSAgICBmZmZm
ZmZmZjgxNTYzNDM4IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMAooWEVOKSAgICAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAwMDAwMDAwMDAwMDAwMCBm
ZmZmODgwMDI3ODZkZjU4IDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgKioqIER1bXBpbmcgRG9tMCB2
Y3B1IzIgc3RhdGU6ICoqKgooWEVOKSBSSVA6ICAgIGUwMzM6WzxmZmZmZmZmZjgxMDAxM2FhPl0K
KFhFTikgUkZMQUdTOiAwMDAwMDAwMDAwMDAwMjQ2ICAgRU06IDAgICBDT05URVhUOiBwdiBndWVz
dAooWEVOKSByYXg6IDAwMDAwMDAwMDAwMDAwMDAgICByYng6IGZmZmY4ODAwMjc4NmZmZDggICBy
Y3g6IGZmZmZmZmZmODEwMDEzYWEKKFhFTikgcmR4OiAwMDAwMDAwMDAwMDAwMDAwICAgcnNpOiAw
MDAwMDAwMGRlYWRiZWVmICAgcmRpOiAwMDAwMDAwMGRlYWRiZWVmCihYRU4pIHJicDogZmZmZjg4
MDAyNzg2ZmVlMCAgIHJzcDogZmZmZjg4MDAyNzg2ZmVjOCAgIHI4OiAgMDAwMDAwMDAwMDAwMDAw
MAooWEVOKSByOTogIDAwMDAwMDAwMDAwMDAwMDAgICByMTA6IDAwMDAwMDAwMDAwMDAwMDEgICBy
MTE6IDAwMDAwMDAwMDAwMDAyNDYKKFhFTikgcjEyOiBmZmZmZmZmZjgxYWFmZGEwICAgcjEzOiAw
MDAwMDAwMDAwMDAwMDAyICAgcjE0OiAwMDAwMDAwMDAwMDAwMDAwCihYRU4pIHIxNTogMDAwMDAw
MDAwMDAwMDAwMCAgIGNyMDogMDAwMDAwMDA4MDA1MDAzYiAgIGNyNDogMDAwMDAwMDAwMDAwMjY2
MAooWEVOKSBjcjM6IDAwMDAwMDAxNGNkYWIwMDAgICBjcjI6IDAwMDAwMDAwMDE0ZWQxODgKKFhF
TikgZHM6IDAwMmIgICBlczogMDAyYiAgIGZzOiAwMDAwICAgZ3M6IDAwMDAgICBzczogZTAyYiAg
IGNzOiBlMDMzCihYRU4pIEd1ZXN0IHN0YWNrIHRyYWNlIGZyb20gcnNwPWZmZmY4ODAwMjc4NmZl
Yzg6CihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDBmZmZmZmZmZiBmZmZmZmZmZjgx
MDBhNWMwIGZmZmY4ODAwMjc4NmZmMTAKKFhFTikgICAgZmZmZmZmZmY4MTAxYzY2MyBmZmZmODgw
MDI3ODZmZmQ4IGZmZmZmZmZmODFhYWZkYTAgMDAwMDAwMDAwMDAwMDAwMAooWEVOKSAgICAwMDAw
MDAwMDAwMDAwMDAwIGZmZmY4ODAwMjc4NmZmNDAgZmZmZmZmZmY4MTAxMzIzNiBmZmZmZmZmZjgx
MDBhZGU5CihYRU4pICAgIDU3OTAyZTE4ZWUzMjEyZjkgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIGZmZmY4ODAwMjc4NmZmNTAKKFhFTikgICAgZmZmZmZmZmY4MTU2MzQzOCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMAooWEVOKSAgICAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMAooWEVOKSAg
ICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgZmZmZjg4MDAyNzg2ZmY1OCAw
MDAwMDAwMDAwMDAwMDAwCihYRU4pICoqKiBEdW1waW5nIERvbTAgdmNwdSMzIHN0YXRlOiAqKioK
KFhFTikgUklQOiAgICBlMDMzOls8ZmZmZmZmZmY4MTAwMTNhYT5dCihYRU4pIFJGTEFHUzogMDAw
MDAwMDAwMDAwMDI0NiAgIEVNOiAwICAgQ09OVEVYVDogcHYgZ3Vlc3QKKFhFTikgcmF4OiAwMDAw
MDAwMDAwMDAwMDAwICAgcmJ4OiBmZmZmODgwMDI3ODgxZmQ4ICAgcmN4OiBmZmZmZmZmZjgxMDAx
M2FhCihYRU4pIHJkeDogMDAwMDAwMDAwMDAwMDAwMCAgIHJzaTogMDAwMDAwMDBkZWFkYmVlZiAg
IHJkaTogMDAwMDAwMDBkZWFkYmVlZgooWEVOKSByYnA6IGZmZmY4ODAwMjc4ODFlZTAgICByc3A6
IGZmZmY4ODAwMjc4ODFlYzggICByODogIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgcjk6ICAwMDAw
MDAwMDAwMDAwMDAwICAgcjEwOiAwMDAwMDAwMDAwMDAwMDAxICAgcjExOiAwMDAwMDAwMDAwMDAw
MjQ2CihYRU4pIHIxMjogZmZmZmZmZmY4MWFhZmRhMCAgIHIxMzogMDAwMDAwMDAwMDAwMDAwMyAg
IHIxNDogMDAwMDAwMDAwMDAwMDAwMAooWEVOKSByMTU6IDAwMDAwMDAwMDAwMDAwMDAgICBjcjA6
IDAwMDAwMDAwODAwNTAwM2IgICBjcjQ6IDAwMDAwMDAwMDAwMDI2NjAKKFhFTikgY3IzOiAwMDAw
MDAwMTNlNDIxMDAwICAgY3IyOiAwMDAwN2ZjOTY5NjIxYzYyCihYRU4pIGRzOiAwMDJiICAgZXM6
IDAwMmIgICBmczogMDAwMCAgIGdzOiAwMDAwICAgc3M6IGUwMmIgICBjczogZTAzMwooWEVOKSBH
dWVzdCBzdGFjayB0cmFjZSBmcm9tIHJzcD1mZmZmODgwMDI3ODgxZWM4OgooWEVOKSAgICAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwZmZmZmZmZmYgZmZmZmZmZmY4MTAwYTVjMCBmZmZmODgwMDI3
ODgxZjEwCihYRU4pICAgIGZmZmZmZmZmODEwMWM2NjMgZmZmZjg4MDAyNzg4MWZkOCBmZmZmZmZm
ZjgxYWFmZGEwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAwMDAwMDAwMDAwMDAwMCBmZmZm
ODgwMDI3ODgxZjQwIGZmZmZmZmZmODEwMTMyMzYgZmZmZmZmZmY4MTAwYWRlOQooWEVOKSAgICA2
N2IxMDQ0Y2M0YjdjMzkxIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCBmZmZmODgw
MDI3ODgxZjUwCihYRU4pICAgIGZmZmZmZmZmODE1NjM0MzggMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMAooWEVOKSAg
ICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMAooWEVO
KSAgICAwMDAwMDAwMDAwMDAwMDAwIGZmZmY4ODAwMjc4ODFmNTggMDAwMDAwMDAwMDAwMDAwMAoo
WEVOKSBbSDogZHVtcCBoZWFwIGluZm9dCihYRU4pICdIJyBwcmVzc2VkIC0+IGR1bXBpbmcgaGVh
cCBpbmZvIChub3ctMHg0Qzo0MTkyM0I2QikKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MF0gLT4g
MCBwYWdlcwooWEVOKSBoZWFwW25vZGU9MF1bem9uZT0xXSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBb
bm9kZT0wXVt6b25lPTJdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9M10gLT4g
MCBwYWdlcwooWEVOKSBoZWFwW25vZGU9MF1bem9uZT00XSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBb
bm9kZT0wXVt6b25lPTVdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9Nl0gLT4g
MCBwYWdlcwooWEVOKSBoZWFwW25vZGU9MF1bem9uZT03XSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBb
bm9kZT0wXVt6b25lPThdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9OV0gLT4g
MCBwYWdlcwooWEVOKSBoZWFwW25vZGU9MF1bem9uZT0xMF0gLT4gMCBwYWdlcwooWEVOKSBoZWFw
W25vZGU9MF1bem9uZT0xMV0gLT4gMCBwYWdlcwooWEVOKSBoZWFwW25vZGU9MF1bem9uZT0xMl0g
LT4gMCBwYWdlcwooWEVOKSBoZWFwW25vZGU9MF1bem9uZT0xM10gLT4gMCBwYWdlcwooWEVOKSBo
ZWFwW25vZGU9MF1bem9uZT0xNF0gLT4gMTYxMjggcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pv
bmU9MTVdIC0+IDMyNzY4IHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTE2XSAtPiA2NTUz
NiBwYWdlcwooWEVOKSBoZWFwW25vZGU9MF1bem9uZT0xN10gLT4gMTMwNTU5IHBhZ2VzCihYRU4p
IGhlYXBbbm9kZT0wXVt6b25lPTE4XSAtPiAyNjIxNDMgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBd
W3pvbmU9MTldIC0+IDE3Mjc3MiBwYWdlcwooWEVOKSBoZWFwW25vZGU9MF1bem9uZT0yMF0gLT4g
MTM0MjkwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTIxXSAtPiAwIHBhZ2VzCihYRU4p
IGhlYXBbbm9kZT0wXVt6b25lPTIyXSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0wXVt6b25l
PTIzXSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTI0XSAtPiAwIHBhZ2VzCihY
RU4pIGhlYXBbbm9kZT0wXVt6b25lPTI1XSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0wXVt6
b25lPTI2XSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTI3XSAtPiAwIHBhZ2Vz
CihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTI4XSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0w
XVt6b25lPTI5XSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTMwXSAtPiAwIHBh
Z2VzCihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTMxXSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9k
ZT0wXVt6b25lPTMyXSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTMzXSAtPiAw
IHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTM0XSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBb
bm9kZT0wXVt6b25lPTM1XSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTM2XSAt
PiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTM3XSAtPiAwIHBhZ2VzCihYRU4pIGhl
YXBbbm9kZT0wXVt6b25lPTM4XSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTM5
XSAtPiAwIHBhZ2VzCihYRU4pIFtJOiBkdW1wIEhWTSBpcnEgaW5mb10KKFhFTikgJ0knIHByZXNz
ZWQgLT4gZHVtcGluZyBIVk0gaXJxIGluZm8KKFhFTikgW006IGR1bXAgTVNJIHN0YXRlXQooWEVO
KSBQQ0ktTVNJIGludGVycnVwdCBpbmZvcm1hdGlvbjoKKFhFTikgIE1TSSAgICAyNiB2ZWM9YTEg
bG93ZXN0ICBlZGdlICAgYXNzZXJ0ICBsb2cgbG93ZXN0IGRlc3Q9MDAwMDAwMDEgbWFzaz0wLzEv
LTEKKFhFTikgIE1TSSAgICAyNyB2ZWM9MDAgIGZpeGVkICBlZGdlIGRlYXNzZXJ0IHBoeXMgbG93
ZXN0IGRlc3Q9MDAwMDAwMDEgbWFzaz0wLzEvLTEKKFhFTikgIE1TSSAgICAyOCB2ZWM9MjkgbG93
ZXN0ICBlZGdlICAgYXNzZXJ0ICBsb2cgbG93ZXN0IGRlc3Q9MDAwMDAwMDEgbWFzaz0wLzEvLTEK
KFhFTikgIE1TSSAgICAyOSB2ZWM9YTkgbG93ZXN0ICBlZGdlICAgYXNzZXJ0ICBsb2cgbG93ZXN0
IGRlc3Q9MDAwMDAwMDEgbWFzaz0wLzEvLTEKKFhFTikgIE1TSSAgICAzMCB2ZWM9YjEgbG93ZXN0
ICBlZGdlICAgYXNzZXJ0ICBsb2cgbG93ZXN0IGRlc3Q9MDAwMDAwMDEgbWFzaz0wLzEvLTEKKFhF
TikgIE1TSSAgICAzMSB2ZWM9YzkgbG93ZXN0ICBlZGdlICAgYXNzZXJ0ICBsb2cgbG93ZXN0IGRl
c3Q9MDAwMDAwMDEgbWFzaz0wLzEvLTEKKFhFTikgW1E6IGR1bXAgUENJIGRldmljZXNdCihYRU4p
ID09PT0gUENJIGRldmljZXMgPT09PQooWEVOKSA9PT09IHNlZ21lbnQgMDAwMCA9PT09CihYRU4p
IDAwMDA6MDU6MDEuMCAtIGRvbSAwICAgLSBNU0lzIDwgPgooWEVOKSAwMDAwOjA0OjAwLjAgLSBk
b20gMCAgIC0gTVNJcyA8ID4KKFhFTikgMDAwMDowMzowMC4wIC0gZG9tIDAgICAtIE1TSXMgPCA+
CihYRU4pIDAwMDA6MDI6MDAuMCAtIGRvbSAwICAgLSBNU0lzIDwgMzAgPgooWEVOKSAwMDAwOjAw
OjFmLjMgLSBkb20gMCAgIC0gTVNJcyA8ID4KKFhFTikgMDAwMDowMDoxZi4yIC0gZG9tIDAgICAt
IE1TSXMgPCAyNyA+CihYRU4pIDAwMDA6MDA6MWYuMCAtIGRvbSAwICAgLSBNU0lzIDwgPgooWEVO
KSAwMDAwOjAwOjFlLjAgLSBkb20gMCAgIC0gTVNJcyA8ID4KKFhFTikgMDAwMDowMDoxZC4wIC0g
ZG9tIDAgICAtIE1TSXMgPCA+CihYRU4pIDAwMDA6MDA6MWMuNyAtIGRvbSAwICAgLSBNU0lzIDwg
PgooWEVOKSAwMDAwOjAwOjFjLjYgLSBkb20gMCAgIC0gTVNJcyA8ID4KKFhFTikgMDAwMDowMDox
Yy4wIC0gZG9tIDAgICAtIE1TSXMgPCA+CihYRU4pIDAwMDA6MDA6MWIuMCAtIGRvbSAwICAgLSBN
U0lzIDwgMjYgPgooWEVOKSAwMDAwOjAwOjFhLjAgLSBkb20gMCAgIC0gTVNJcyA8ID4KKFhFTikg
MDAwMDowMDoxOS4wIC0gZG9tIDAgICAtIE1TSXMgPCAzMSA+CihYRU4pIDAwMDA6MDA6MTYuMyAt
IGRvbSAwICAgLSBNU0lzIDwgPgooWEVOKSAwMDAwOjAwOjE2LjAgLSBkb20gMCAgIC0gTVNJcyA8
ID4KKFhFTikgMDAwMDowMDoxNC4wIC0gZG9tIDAgICAtIE1TSXMgPCAyOSA+CihYRU4pIDAwMDA6
MDA6MDIuMCAtIGRvbSAwICAgLSBNU0lzIDwgMjggPgooWEVOKSAwMDAwOjAwOjAwLjAgLSBkb20g
MCAgIC0gTVNJcyA8ID4KKFhFTikgW1Y6IGR1bXAgaW9tbXUgaW5mb10KKFhFTikgCihYRU4pIGlv
bW11IDA6IG5yX3B0X2xldmVscyA9IDMuCihYRU4pICAgUXVldWVkIEludmFsaWRhdGlvbjogc3Vw
cG9ydGVkIGFuZCBlbmFibGVkLgooWEVOKSAgIEludGVycnVwdCBSZW1hcHBpbmc6IHN1cHBvcnRl
ZCBhbmQgZW5hYmxlZC4KKFhFTikgICBJbnRlcnJ1cHQgcmVtYXBwaW5nIHRhYmxlIChucl9lbnRy
eT0weDEwMDAwLiBPbmx5IGR1bXAgUD0xIGVudHJpZXMgaGVyZSk6CihYRU4pICAgICAgICBTVlQg
IFNRICAgU0lEICAgICAgRFNUICBWICBBVkwgRExNIFRNIFJIIERNIEZQRCBQCihYRU4pICAgMDAw
MDogIDEgICAwICAwMDEwIDAwMDAwMDAxIDI5ICAgIDAgICAxICAwICAxICAxICAgMCAxCihYRU4p
IAooWEVOKSBpb21tdSAxOiBucl9wdF9sZXZlbHMgPSAzLgooWEVOKSAgIFF1ZXVlZCBJbnZhbGlk
YXRpb246IHN1cHBvcnRlZCBhbmQgZW5hYmxlZC4KKFhFTikgICBJbnRlcnJ1cHQgUmVtYXBwaW5n
OiBzdXBwb3J0ZWQgYW5kIGVuYWJsZWQuCihYRU4pICAgSW50ZXJydXB0IHJlbWFwcGluZyB0YWJs
ZSAobnJfZW50cnk9MHgxMDAwMC4gT25seSBkdW1wIFA9MSBlbnRyaWVzIGhlcmUpOgooWEVOKSAg
ICAgICAgU1ZUICBTUSAgIFNJRCAgICAgIERTVCAgViAgQVZMIERMTSBUTSBSSCBETSBGUEQgUAoo
WEVOKSAgIDAwMDA6ICAxICAgMCAgZjBmOCAwMDAwMDAwMSAzOCAgICAwICAgMSAgMCAgMSAgMSAg
IDAgMQooWEVOKSAgIDAwMDE6ICAxICAgMCAgZjBmOCAwMDAwMDAwMSBmMCAgICAwICAgMSAgMCAg
MSAgMSAgIDAgMQooWEVOKSAgIDAwMDI6ICAxICAgMCAgZjBmOCAwMDAwMDAwMSA0MCAgICAwICAg
MSAgMCAgMSAgMSAgIDAgMQooWEVOKSAgIDAwMDM6ICAxICAgMCAgZjBmOCAwMDAwMDAwMSA0OCAg
ICAwICAgMSAgMCAgMSAgMSAgIDAgMQooWEVOKSAgIDAwMDQ6ICAxICAgMCAgZjBmOCAwMDAwMDAw
MSA1MCAgICAwICAgMSAgMCAgMSAgMSAgIDAgMQooWEVOKSAgIDAwMDU6ICAxICAgMCAgZjBmOCAw
MDAwMDAwMSA1OCAgICAwICAgMSAgMCAgMSAgMSAgIDAgMQooWEVOKSAgIDAwMDY6ICAxICAgMCAg
ZjBmOCAwMDAwMDAwMSA2MCAgICAwICAgMSAgMCAgMSAgMSAgIDAgMQooWEVOKSAgIDAwMDc6ICAx
ICAgMCAgZjBmOCAwMDAwMDAwMSA2OCAgICAwICAgMSAgMCAgMSAgMSAgIDAgMQooWEVOKSAgIDAw
MDg6ICAxICAgMCAgZjBmOCAwMDAwMDAwMSA3MCAgICAwICAgMSAgMSAgMSAgMSAgIDAgMQooWEVO
KSAgIDAwMDk6ICAxICAgMCAgZjBmOCAwMDAwMDAwMSA3OCAgICAwICAgMSAgMCAgMSAgMSAgIDAg
MQooWEVOKSAgIDAwMGE6ICAxICAgMCAgZjBmOCAwMDAwMDAwMSA4OCAgICAwICAgMSAgMCAgMSAg
MSAgIDAgMQooWEVOKSAgIDAwMGI6ICAxICAgMCAgZjBmOCAwMDAwMDAwMSA5MCAgICAwICAgMSAg
MCAgMSAgMSAgIDAgMQooWEVOKSAgIDAwMGM6ICAxICAgMCAgZjBmOCAwMDAwMDAwMSA5OCAgICAw
ICAgMSAgMCAgMSAgMSAgIDAgMQooWEVOKSAgIDAwMGQ6ICAxICAgMCAgZjBmOCAwMDAwMDAwMSBh
MCAgICAwICAgMSAgMCAgMSAgMSAgIDAgMQooWEVOKSAgIDAwMGU6ICAxICAgMCAgZjBmOCAwMDAw
MDAwMSBhOCAgICAwICAgMSAgMCAgMSAgMSAgIDAgMQooWEVOKSAgIDAwMGY6ICAxICAgMCAgZjBm
OCAwMDAwMDAwMSBiMCAgICAwICAgMSAgMSAgMSAgMSAgIDAgMQooWEVOKSAgIDAwMTA6ICAxICAg
MCAgZjBmOCAwMDAwMDAwMSBiOCAgICAwICAgMSAgMSAgMSAgMSAgIDAgMQooWEVOKSAgIDAwMTE6
ICAxICAgMCAgZjBmOCAwMDAwMDAwMSBjMCAgICAwICAgMSAgMSAgMSAgMSAgIDAgMQooWEVOKSAg
IDAwMTI6ICAxICAgMCAgZjBmOCAwMDAwMDAwMSBjOCAgICAwICAgMSAgMSAgMSAgMSAgIDAgMQoo
WEVOKSAgIDAwMTM6ICAxICAgMCAgZjBmOCAwMDAwMDAwMSBkMCAgICAwICAgMSAgMSAgMSAgMSAg
IDAgMQooWEVOKSAgIDAwMTQ6ICAxICAgMCAgMDBkOCAwMDAwMDAwMSBhMSAgICAwICAgMSAgMCAg
MSAgMSAgIDAgMQooWEVOKSAgIDAwMTU6ICAxICAgMCAgMDBmYSAwMDAwMDAwMSAwMCAgICAwICAg
MCAgMCAgMCAgMCAgIDAgMQooWEVOKSAgIDAwMTY6ICAxICAgMCAgZjBmOCAwMDAwMDAwMSAzMSAg
ICAwICAgMSAgMSAgMSAgMSAgIDAgMQooWEVOKSAgIDAwMTc6ICAxICAgMCAgMDBhMCAwMDAwMDAw
MSBhOSAgICAwICAgMSAgMCAgMSAgMSAgIDAgMQooWEVOKSAgIDAwMTg6ICAxICAgMCAgMDIwMCAw
MDAwMDAwMSBiMSAgICAwICAgMSAgMCAgMSAgMSAgIDAgMQooWEVOKSAgIDAwMTk6ICAxICAgMCAg
MDBjOCAwMDAwMDAwMSBjOSAgICAwICAgMSAgMCAgMSAgMSAgIDAgMQooWEVOKSAKKFhFTikgUmVk
aXJlY3Rpb24gdGFibGUgb2YgSU9BUElDIDA6CihYRU4pICAgI2VudHJ5IElEWCBGTVQgTUFTSyBU
UklHIElSUiBQT0wgU1RBVCBERUxJICBWRUNUT1IKKFhFTikgICAgMDE6ICAwMDAwICAgMSAgICAw
ICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAgMzgKKFhFTikgICAgMDI6ICAwMDAxICAgMSAgICAw
ICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAgZjAKKFhFTikgICAgMDM6ICAwMDAyICAgMSAgICAw
ICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAgNDAKKFhFTikgICAgMDQ6ICAwMDAzICAgMSAgICAw
ICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAgNDgKKFhFTikgICAgMDU6ICAwMDA0ICAgMSAgICAw
ICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAgNTAKKFhFTikgICAgMDY6ICAwMDA1ICAgMSAgICAw
ICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAgNTgKKFhFTikgICAgMDc6ICAwMDA2ICAgMSAgICAw
ICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAgNjAKKFhFTikgICAgMDg6ICAwMDA3ICAgMSAgICAw
ICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAgNjgKKFhFTikgICAgMDk6ICAwMDA4ICAgMSAgICAw
ICAgMSAgIDAgICAwICAgIDAgICAgMCAgICAgNzAKKFhFTikgICAgMGE6ICAwMDA5ICAgMSAgICAw
ICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAgNzgKKFhFTikgICAgMGI6ICAwMDBhICAgMSAgICAw
ICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAgODgKKFhFTikgICAgMGM6ICAwMDBiICAgMSAgICAw
ICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAgOTAKKFhFTikgICAgMGQ6ICAwMDBjICAgMSAgICAx
ICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAgOTgKKFhFTikgICAgMGU6ICAwMDBkICAgMSAgICAw
ICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAgYTAKKFhFTikgICAgMGY6ICAwMDBlICAgMSAgICAw
ICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAgYTgKKFhFTikgICAgMTA6ICAwMDBmICAgMSAgICAw
ICAgMSAgIDAgICAxICAgIDAgICAgMCAgICAgYjAKKFhFTikgICAgMTI6ICAwMDEwICAgMSAgICAx
ICAgMSAgIDAgICAxICAgIDAgICAgMCAgICAgYjgKKFhFTikgICAgMTM6ICAwMDExICAgMSAgICAx
ICAgMSAgIDAgICAxICAgIDAgICAgMCAgICAgYzAKKFhFTikgICAgMTQ6ICAwMDE2ICAgMSAgICAx
ICAgMSAgIDAgICAxICAgIDAgICAgMCAgICAgMzEKKFhFTikgICAgMTY6ICAwMDEzICAgMSAgICAx
ICAgMSAgIDAgICAxICAgIDAgICAgMCAgICAgZDAKKFhFTikgICAgMTc6ICAwMDEyICAgMSAgICAw
ICAgMSAgIDAgICAxICAgIDAgICAgMCAgICAgYzgKKFhFTikgW2E6IGR1bXAgdGltZXIgcXVldWVz
XQooWEVOKSBEdW1waW5nIHRpbWVyIHF1ZXVlczoKKFhFTikgQ1BVMDA6CihYRU4pICAgZXg9ICAg
LTE2ODF1cyB0aW1lcj1mZmZmODJjNDgwMmUyNWM4IGNiPWZmZmY4MmM0ODAxM2Q3NTcoZmZmZjgy
YzQ4MDI3MTgwMCkgbnMxNjU1MF9wb2xsKzB4MC8weDMzCihYRU4pICAgZXg9ICAgIDczMTh1cyB0
aW1lcj1mZmZmODMwMTQ4OTlhMWI4IGNiPWZmZmY4MmM0ODAxMTlkNzIoZmZmZjgzMDE0ODk5YTE5
MCkgY3NjaGVkX2FjY3QrMHgwLzB4NDJhCihYRU4pICAgZXg9ICA0NTcwMDF1cyB0aW1lcj1mZmZm
ODJjNDgwMzAwNTgwIGNiPWZmZmY4MmM0ODAxYTg4NTAoMDAwMDAwMDAwMDAwMDAwMCkgbWNlX3dv
cmtfZm4rMHgwLzB4YTkKKFhFTikgICBleD0xMjIzMTU3NTl1cyB0aW1lcj1mZmZmODJjNDgwMmZl
MjgwIGNiPWZmZmY4MmM0ODAxODA3YzIoMDAwMDAwMDAwMDAwMDAwMCkgcGx0X292ZXJmbG93KzB4
MC8weDEzMQooWEVOKSAgIGV4PSAgICA3MzE4dXMgdGltZXI9ZmZmZjgzMDE0ODk5YWVhOCBjYj1m
ZmZmODJjNDgwMTFhYWYwKDAwMDAwMDAwMDAwMDAwMDApIGNzY2hlZF90aWNrKzB4MC8weDMxNAoo
WEVOKSBDUFUwMToKKFhFTikgICBleD0gICA2MjYxMXVzIHRpbWVyPWZmZmY4MzAxNGM5NjY1Zjgg
Y2I9ZmZmZjgyYzQ4MDExYWFmMCgwMDAwMDAwMDAwMDAwMDAxKSBjc2NoZWRfdGljaysweDAvMHgz
MTQKKFhFTikgICBleD0gIDI2MjA5M3VzIHRpbWVyPWZmZmY4MzAwYTgzZmQwNjAgY2I9ZmZmZjgy
YzQ4MDEyMWM2YihmZmZmODMwMGE4M2ZkMDAwKSB2Y3B1X3NpbmdsZXNob3RfdGltZXJfZm4rMHgw
LzB4YgooWEVOKSBDUFUwMjoKKFhFTikgICBleD0gICA4Mjk1M3VzIHRpbWVyPWZmZmY4MzAxMzZh
NDlhMjggY2I9ZmZmZjgyYzQ4MDExYWFmMCgwMDAwMDAwMDAwMDAwMDAyKSBjc2NoZWRfdGljaysw
eDAvMHgzMTQKKFhFTikgICBleD0gIDIzODA5OXVzIHRpbWVyPWZmZmY4MzAwYTgzZmMwNjAgY2I9
ZmZmZjgyYzQ4MDEyMWM2YihmZmZmODMwMGE4M2ZjMDAwKSB2Y3B1X3NpbmdsZXNob3RfdGltZXJf
Zm4rMHgwLzB4YgooWEVOKSBDUFUwMzoKKFhFTikgICBleD0gIDEwMzI2N3VzIHRpbWVyPWZmZmY4
MzAxMWM5ZTk5NDggY2I9ZmZmZjgyYzQ4MDExYWFmMCgwMDAwMDAwMDAwMDAwMDAzKSBjc2NoZWRf
dGljaysweDAvMHgzMTQKKFhFTikgICBleD0gMzkyMDAxOXVzIHRpbWVyPWZmZmY4MzAwYWE1ODMw
NjAgY2I9ZmZmZjgyYzQ4MDEyMWM2YihmZmZmODMwMGFhNTgzMDAwKSB2Y3B1X3NpbmdsZXNob3Rf
dGltZXJfZm4rMHgwLzB4YgooWEVOKSBbYzogZHVtcCBBQ1BJIEN4IHN0cnVjdHVyZXNdCihYRU4p
ICdjJyBwcmVzc2VkIC0+IHByaW50aW5nIEFDUEkgQ3ggc3RydWN0dXJlcwooWEVOKSA9PWNwdTA9
PQooWEVOKSBhY3RpdmUgc3RhdGU6CQlDMjU1CihYRU4pIG1heF9jc3RhdGU6CQlDNwooWEVOKSBz
dGF0ZXM6CihYRU4pICAgICBDMToJdHlwZVtDMV0gbGF0ZW5jeVswMDBdIHVzYWdlWzAwMDAwMDAw
XSBtZXRob2RbIEhBTFRdIGR1cmF0aW9uWzBdCihYRU4pICAgICBDMDoJdXNhZ2VbMDAwMDAwMDBd
IGR1cmF0aW9uWzMyODI3ODA1NDE2Ml0KKFhFTikgUEMyWzBdIFBDM1swXSBQQzZbMF0gUEM3WzBd
CihYRU4pIENDM1swXSBDQzZbMF0gQ0M3WzBdCihYRU4pID09Y3B1MT09CihYRU4pIGFjdGl2ZSBz
dGF0ZToJCUMyNTUKKFhFTikgbWF4X2NzdGF0ZToJCUM3CihYRU4pIHN0YXRlczoKKFhFTikgICAg
IEMxOgl0eXBlW0MxXSBsYXRlbmN5WzAwMF0gdXNhZ2VbMDAwMDAwMDBdIG1ldGhvZFsgSEFMVF0g
ZHVyYXRpb25bMF0KKFhFTikgICAgIEMwOgl1c2FnZVswMDAwMDAwMF0gZHVyYXRpb25bMzI4MzAy
ODQ0NDI5XQooWEVOKSBQQzJbMF0gUEMzWzBdIFBDNlswXSBQQzdbMF0KKFhFTikgQ0MzWzBdIEND
NlswXSBDQzdbMF0KKFhFTikgPT1jcHUyPT0KKFhFTikgYWN0aXZlIHN0YXRlOgkJQzI1NQooWEVO
KSBtYXhfY3N0YXRlOgkJQzcKKFhFTikgc3RhdGVzOgooWEVOKSAgICAgQzE6CXR5cGVbQzFdIGxh
dGVuY3lbMDAwXSB1c2FnZVswMDAwMDAwMF0gbWV0aG9kWyBIQUxUXSBkdXJhdGlvblswXQooWEVO
KSAgICAgQzA6CXVzYWdlWzAwMDAwMDAwXSBkdXJhdGlvblszMjgzMjc2MzQ4MjNdCihYRU4pIFBD
MlswXSBQQzNbMF0gUEM2WzBdIFBDN1swXQooWEVOKSBDQzNbMF0gQ0M2WzBdIENDN1swXQooWEVO
KSA9PWNwdTM9PQooWEVOKSBhY3RpdmUgc3RhdGU6CQlDMjU1CihYRU4pIG1heF9jc3RhdGU6CQlD
NwooWEVOKSBzdGF0ZXM6CihYRU4pICAgICBDMToJdHlwZVtDMV0gbGF0ZW5jeVswMDBdIHVzYWdl
WzAwMDAwMDAwXSBtZXRob2RbIEhBTFRdIGR1cmF0aW9uWzBdCihYRU4pICAgICBDMDoJdXNhZ2Vb
MDAwMDAwMDBdIGR1cmF0aW9uWzMyODM1MjQyNDUyMl0KKFhFTikgUEMyWzBdIFBDM1swXSBQQzZb
MF0gUEM3WzBdCihYRU4pIENDM1swXSBDQzZbMF0gQ0M3WzBdCihYRU4pIFtlOiBkdW1wIGV2dGNo
biBpbmZvXQooWEVOKSAnZScgcHJlc3NlZCAtPiBkdW1waW5nIGV2ZW50LWNoYW5uZWwgaW5mbwoo
WEVOKSBFdmVudCBjaGFubmVsIGluZm9ybWF0aW9uIGZvciBkb21haW4gMDoKKFhFTikgUG9sbGlu
ZyB2Q1BVczoge30KKFhFTikgICAgIHBvcnQgW3AvbV0KKFhFTikgICAgICAgIDEgWzEvMF06IHM9
NSBuPTAgeD0wIHY9MAooWEVOKSAgICAgICAgMiBbMS8xXTogcz02IG49MCB4PTAKKFhFTikgICAg
ICAgIDMgWzEvMF06IHM9NiBuPTAgeD0wCihYRU4pICAgICAgICA0IFswLzBdOiBzPTYgbj0wIHg9
MAooWEVOKSAgICAgICAgNSBbMC8wXTogcz01IG49MCB4PTAgdj0xCihYRU4pICAgICAgICA2IFsw
LzBdOiBzPTYgbj0wIHg9MAooWEVOKSAgICAgICAgNyBbMC8wXTogcz01IG49MSB4PTAgdj0wCihY
RU4pICAgICAgICA4IFsxLzFdOiBzPTYgbj0xIHg9MAooWEVOKSAgICAgICAgOSBbMC8wXTogcz02
IG49MSB4PTAKKFhFTikgICAgICAgMTAgWzAvMF06IHM9NiBuPTEgeD0wCihYRU4pICAgICAgIDEx
IFswLzBdOiBzPTUgbj0xIHg9MCB2PTEKKFhFTikgICAgICAgMTIgWzAvMF06IHM9NiBuPTEgeD0w
CihYRU4pICAgICAgIDEzIFswLzBdOiBzPTUgbj0yIHg9MCB2PTAKKFhFTikgICAgICAgMTQgWzEv
MV06IHM9NiBuPTIgeD0wCihYRU4pICAgICAgIDE1IFswLzBdOiBzPTYgbj0yIHg9MAooWEVOKSAg
ICAgICAxNiBbMC8wXTogcz02IG49MiB4PTAKKFhFTikgICAgICAgMTcgWzAvMF06IHM9NSBuPTIg
eD0wIHY9MQooWEVOKSAgICAgICAxOCBbMC8wXTogcz02IG49MiB4PTAKKFhFTikgICAgICAgMTkg
WzAvMF06IHM9NSBuPTMgeD0wIHY9MAooWEVOKSAgICAgICAyMCBbMS8xXTogcz02IG49MyB4PTAK
KFhFTikgICAgICAgMjEgWzAvMF06IHM9NiBuPTMgeD0wCihYRU4pICAgICAgIDIyIFswLzBdOiBz
PTYgbj0zIHg9MAooWEVOKSAgICAgICAyMyBbMC8wXTogcz01IG49MyB4PTAgdj0xCihYRU4pICAg
ICAgIDI0IFswLzBdOiBzPTYgbj0zIHg9MAooWEVOKSAgICAgICAyNSBbMC8wXTogcz0zIG49MCB4
PTAgZD0wIHA9MzYKKFhFTikgICAgICAgMjYgWzAvMF06IHM9NCBuPTAgeD0wIHA9OSBpPTkKKFhF
TikgICAgICAgMjcgWzEvMV06IHM9NSBuPTAgeD0wIHY9MgooWEVOKSAgICAgICAyOCBbMC8wXTog
cz00IG49MCB4PTAgcD04IGk9OAooWEVOKSAgICAgICAyOSBbMC8wXTogcz00IG49MCB4PTAgcD0y
NzkgaT0yNgooWEVOKSAgICAgICAzMCBbMC8wXTogcz00IG49MCB4PTAgcD0yNzcgaT0yOAooWEVO
KSAgICAgICAzMSBbMC8wXTogcz00IG49MCB4PTAgcD0xNiBpPTE2CihYRU4pICAgICAgIDMyIFsw
LzBdOiBzPTQgbj0wIHg9MCBwPTI3OCBpPTI3CihYRU4pICAgICAgIDMzIFswLzBdOiBzPTQgbj0w
IHg9MCBwPTIzIGk9MjMKKFhFTikgICAgICAgMzQgWzAvMF06IHM9NCBuPTAgeD0wIHA9Mjc2IGk9
MjkKKFhFTikgICAgICAgMzUgWzAvMF06IHM9NCBuPTAgeD0wIHA9Mjc1IGk9MzAKKFhFTikgICAg
ICAgMzYgWzAvMF06IHM9MyBuPTAgeD0wIGQ9MCBwPTI1CihYRU4pICAgICAgIDM3IFswLzBdOiBz
PTUgbj0wIHg9MCB2PTMKKFhFTikgICAgICAgMzggWzEvMF06IHM9NCBuPTAgeD0wIHA9Mjc0IGk9
MzEKKFhFTikgW2c6IHByaW50IGdyYW50IHRhYmxlIHVzYWdlXQooWEVOKSBnbnR0YWJfdXNhZ2Vf
cHJpbnRfYWxsIFsga2V5ICdnJyBwcmVzc2VkCihYRU4pICAgICAgIC0tLS0tLS0tIGFjdGl2ZSAt
LS0tLS0tLSAgICAgICAtLS0tLS0tLSBzaGFyZWQgLS0tLS0tLS0KKFhFTikgW3JlZl0gbG9jYWxk
b20gbWZuICAgICAgcGluICAgICAgICAgIGxvY2FsZG9tIGdtZm4gICAgIGZsYWdzCihYRU4pIGdy
YW50LXRhYmxlIGZvciByZW1vdGUgZG9tYWluOiAgICAwIC4uLiBubyBhY3RpdmUgZ3JhbnQgdGFi
bGUgZW50cmllcwooWEVOKSBnbnR0YWJfdXNhZ2VfcHJpbnRfYWxsIF0gZG9uZQooWEVOKSBbaTog
ZHVtcCBpbnRlcnJ1cHQgYmluZGluZ3NdCihYRU4pIEd1ZXN0IGludGVycnVwdCBpbmZvcm1hdGlv
bjoKKFhFTikgICAgSVJROiAgIDAgYWZmaW5pdHk6MDAwMSB2ZWM6ZjAgdHlwZT1JTy1BUElDLWVk
Z2UgICAgc3RhdHVzPTAwMDAwMDAwIG1hcHBlZCwgdW5ib3VuZAooWEVOKSAgICBJUlE6ICAgMSBh
ZmZpbml0eTowMDAxIHZlYzozOCB0eXBlPUlPLUFQSUMtZWRnZSAgICBzdGF0dXM9MDAwMDAwMDIg
bWFwcGVkLCB1bmJvdW5kCihYRU4pICAgIElSUTogICAyIGFmZmluaXR5OmZmZmYgdmVjOmUyIHR5
cGU9WFQtUElDICAgICAgICAgIHN0YXR1cz0wMDAwMDAwMCBtYXBwZWQsIHVuYm91bmQKKFhFTikg
ICAgSVJROiAgIDMgYWZmaW5pdHk6MDAwMSB2ZWM6NDAgdHlwZT1JTy1BUElDLWVkZ2UgICAgc3Rh
dHVzPTAwMDAwMDAyIG1hcHBlZCwgdW5ib3VuZAooWEVOKSAgICBJUlE6ICAgNCBhZmZpbml0eTow
MDAxIHZlYzo0OCB0eXBlPUlPLUFQSUMtZWRnZSAgICBzdGF0dXM9MDAwMDAwMDIgbWFwcGVkLCB1
bmJvdW5kCihYRU4pICAgIElSUTogICA1IGFmZmluaXR5OjAwMDEgdmVjOjUwIHR5cGU9SU8tQVBJ
Qy1lZGdlICAgIHN0YXR1cz0wMDAwMDAwMiBtYXBwZWQsIHVuYm91bmQKKFhFTikgICAgSVJROiAg
IDYgYWZmaW5pdHk6MDAwMSB2ZWM6NTggdHlwZT1JTy1BUElDLWVkZ2UgICAgc3RhdHVzPTAwMDAw
MDAyIG1hcHBlZCwgdW5ib3VuZAooWEVOKSAgICBJUlE6ICAgNyBhZmZpbml0eTowMDAxIHZlYzo2
MCB0eXBlPUlPLUFQSUMtZWRnZSAgICBzdGF0dXM9MDAwMDAwMDIgbWFwcGVkLCB1bmJvdW5kCihY
RU4pICAgIElSUTogICA4IGFmZmluaXR5OjAwMDEgdmVjOjY4IHR5cGU9SU8tQVBJQy1lZGdlICAg
IHN0YXR1cz0wMDAwMDAxMCBpbi1mbGlnaHQ9MCBkb21haW4tbGlzdD0wOiAgOCgtUy0tKSwKKFhF
TikgICAgSVJROiAgIDkgYWZmaW5pdHk6MDAwMSB2ZWM6NzAgdHlwZT1JTy1BUElDLWxldmVsICAg
c3RhdHVzPTAwMDAwMDEwIGluLWZsaWdodD0wIGRvbWFpbi1saXN0PTA6ICA5KC1TLS0pLAooWEVO
KSAgICBJUlE6ICAxMCBhZmZpbml0eTowMDAxIHZlYzo3OCB0eXBlPUlPLUFQSUMtZWRnZSAgICBz
dGF0dXM9MDAwMDAwMDIgbWFwcGVkLCB1bmJvdW5kCihYRU4pICAgIElSUTogIDExIGFmZmluaXR5
OjAwMDEgdmVjOjg4IHR5cGU9SU8tQVBJQy1lZGdlICAgIHN0YXR1cz0wMDAwMDAwMiBtYXBwZWQs
IHVuYm91bmQKKFhFTikgICAgSVJROiAgMTIgYWZmaW5pdHk6MDAwMSB2ZWM6OTAgdHlwZT1JTy1B
UElDLWVkZ2UgICAgc3RhdHVzPTAwMDAwMDAyIG1hcHBlZCwgdW5ib3VuZAooWEVOKSAgICBJUlE6
ICAxMyBhZmZpbml0eTowMDBmIHZlYzo5OCB0eXBlPUlPLUFQSUMtZWRnZSAgICBzdGF0dXM9MDAw
MDAwMDIgbWFwcGVkLCB1bmJvdW5kCihYRU4pICAgIElSUTogIDE0IGFmZmluaXR5OjAwMDEgdmVj
OmEwIHR5cGU9SU8tQVBJQy1lZGdlICAgIHN0YXR1cz0wMDAwMDAwMiBtYXBwZWQsIHVuYm91bmQK
KFhFTikgICAgSVJROiAgMTUgYWZmaW5pdHk6MDAwMSB2ZWM6YTggdHlwZT1JTy1BUElDLWVkZ2Ug
ICAgc3RhdHVzPTAwMDAwMDAyIG1hcHBlZCwgdW5ib3VuZAooWEVOKSAgICBJUlE6ICAxNiBhZmZp
bml0eTowMDAxIHZlYzpiMCB0eXBlPUlPLUFQSUMtbGV2ZWwgICBzdGF0dXM9MDAwMDAwMTAgaW4t
ZmxpZ2h0PTAgZG9tYWluLWxpc3Q9MDogMTYoLVMtLSksCihYRU4pICAgIElSUTogIDE4IGFmZmlu
aXR5OjAwMGYgdmVjOmI4IHR5cGU9SU8tQVBJQy1sZXZlbCAgIHN0YXR1cz0wMDAwMDAwMiBtYXBw
ZWQsIHVuYm91bmQKKFhFTikgICAgSVJROiAgMTkgYWZmaW5pdHk6MDAwMSB2ZWM6YzAgdHlwZT1J
Ty1BUElDLWxldmVsICAgc3RhdHVzPTAwMDAwMDAyIG1hcHBlZCwgdW5ib3VuZAooWEVOKSAgICBJ
UlE6ICAyMCBhZmZpbml0eTowMDBmIHZlYzozMSB0eXBlPUlPLUFQSUMtbGV2ZWwgICBzdGF0dXM9
MDAwMDAwMDIgbWFwcGVkLCB1bmJvdW5kCihYRU4pICAgIElSUTogIDIyIGFmZmluaXR5OjAwMDEg
dmVjOmQwIHR5cGU9SU8tQVBJQy1sZXZlbCAgIHN0YXR1cz0wMDAwMDAwMiBtYXBwZWQsIHVuYm91
bmQKKFhFTikgICAgSVJROiAgMjMgYWZmaW5pdHk6MDAwMSB2ZWM6YzggdHlwZT1JTy1BUElDLWxl
dmVsICAgc3RhdHVzPTAwMDAwMDEwIGluLWZsaWdodD0wIGRvbWFpbi1saXN0PTA6IDIzKC1TLS0p
LAooWEVOKSAgICBJUlE6ICAyNCBhZmZpbml0eTowMDAxIHZlYzoyOCB0eXBlPURNQV9NU0kgICAg
ICAgICBzdGF0dXM9MDAwMDAwMDAgbWFwcGVkLCB1bmJvdW5kCihYRU4pICAgIElSUTogIDI1IGFm
ZmluaXR5OjAwMDEgdmVjOjMwIHR5cGU9RE1BX01TSSAgICAgICAgIHN0YXR1cz0wMDAwMDAwMCBt
YXBwZWQsIHVuYm91bmQKKFhFTikgICAgSVJROiAgMjYgYWZmaW5pdHk6MDAwMSB2ZWM6YTEgdHlw
ZT1QQ0ktTVNJICAgICAgICAgc3RhdHVzPTAwMDAwMDEwIGluLWZsaWdodD0wIGRvbWFpbi1saXN0
PTA6Mjc5KC1TLS0pLAooWEVOKSAgICBJUlE6ICAyNyBhZmZpbml0eTowMDAxIHZlYzoyMSB0eXBl
PVBDSS1NU0kgICAgICAgICBzdGF0dXM9MDAwMDAwMTAgaW4tZmxpZ2h0PTAgZG9tYWluLWxpc3Q9
MDoyNzgoLVMtLSksCihYRU4pICAgIElSUTogIDI4IGFmZmluaXR5OjAwMDEgdmVjOjI5IHR5cGU9
UENJLU1TSSAgICAgICAgIHN0YXR1cz0wMDAwMDAxMCBpbi1mbGlnaHQ9MCBkb21haW4tbGlzdD0w
OjI3NygtUy0tKSwKKFhFTikgICAgSVJROiAgMjkgYWZmaW5pdHk6MDAwMSB2ZWM6YTkgdHlwZT1Q
Q0ktTVNJICAgICAgICAgc3RhdHVzPTAwMDAwMDEwIGluLWZsaWdodD0wIGRvbWFpbi1saXN0PTA6
Mjc2KC1TLS0pLAooWEVOKSAgICBJUlE6ICAzMCBhZmZpbml0eTowMDAxIHZlYzpiMSB0eXBlPVBD
SS1NU0kgICAgICAgICBzdGF0dXM9MDAwMDAwMTAgaW4tZmxpZ2h0PTAgZG9tYWluLWxpc3Q9MDoy
NzUoLVMtLSksCihYRU4pICAgIElSUTogIDMxIGFmZmluaXR5OjAwMDEgdmVjOmM5IHR5cGU9UENJ
LU1TSSAgICAgICAgIHN0YXR1cz0wMDAwMDAxMCBpbi1mbGlnaHQ9MCBkb21haW4tbGlzdD0wOjI3
NChQUy0tKSwKKFhFTikgSU8tQVBJQyBpbnRlcnJ1cHQgaW5mb3JtYXRpb246CihYRU4pICAgICBJ
UlEgIDAgVmVjMjQwOgooWEVOKSAgICAgICBBcGljIDB4MDAsIFBpbiAgMjogdmVjPWYwIGRlbGl2
ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0dXM9MCBwb2xhcml0eT0wIGlycj0wIHRyaWc9RSBtYXNrPTAg
ZGVzdF9pZDowCihYRU4pICAgICBJUlEgIDEgVmVjIDU2OgooWEVOKSAgICAgICBBcGljIDB4MDAs
IFBpbiAgMTogdmVjPTM4IGRlbGl2ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0dXM9MCBwb2xhcml0eT0w
IGlycj0wIHRyaWc9RSBtYXNrPTAgZGVzdF9pZDowCihYRU4pICAgICBJUlEgIDMgVmVjIDY0Ogoo
WEVOKSAgICAgICBBcGljIDB4MDAsIFBpbiAgMzogdmVjPTQwIGRlbGl2ZXJ5PUxvUHJpIGRlc3Q9
TCBzdGF0dXM9MCBwb2xhcml0eT0wIGlycj0wIHRyaWc9RSBtYXNrPTAgZGVzdF9pZDowCihYRU4p
ICAgICBJUlEgIDQgVmVjIDcyOgooWEVOKSAgICAgICBBcGljIDB4MDAsIFBpbiAgNDogdmVjPTQ4
IGRlbGl2ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0dXM9MCBwb2xhcml0eT0wIGlycj0wIHRyaWc9RSBt
YXNrPTAgZGVzdF9pZDowCihYRU4pICAgICBJUlEgIDUgVmVjIDgwOgooWEVOKSAgICAgICBBcGlj
IDB4MDAsIFBpbiAgNTogdmVjPTUwIGRlbGl2ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0dXM9MCBwb2xh
cml0eT0wIGlycj0wIHRyaWc9RSBtYXNrPTAgZGVzdF9pZDowCihYRU4pICAgICBJUlEgIDYgVmVj
IDg4OgooWEVOKSAgICAgICBBcGljIDB4MDAsIFBpbiAgNjogdmVjPTU4IGRlbGl2ZXJ5PUxvUHJp
IGRlc3Q9TCBzdGF0dXM9MCBwb2xhcml0eT0wIGlycj0wIHRyaWc9RSBtYXNrPTAgZGVzdF9pZDow
CihYRU4pICAgICBJUlEgIDcgVmVjIDk2OgooWEVOKSAgICAgICBBcGljIDB4MDAsIFBpbiAgNzog
dmVjPTYwIGRlbGl2ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0dXM9MCBwb2xhcml0eT0wIGlycj0wIHRy
aWc9RSBtYXNrPTAgZGVzdF9pZDowCihYRU4pICAgICBJUlEgIDggVmVjMTA0OgooWEVOKSAgICAg
ICBBcGljIDB4MDAsIFBpbiAgODogdmVjPTY4IGRlbGl2ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0dXM9
MCBwb2xhcml0eT0wIGlycj0wIHRyaWc9RSBtYXNrPTAgZGVzdF9pZDowCihYRU4pICAgICBJUlEg
IDkgVmVjMTEyOgooWEVOKSAgICAgICBBcGljIDB4MDAsIFBpbiAgOTogdmVjPTcwIGRlbGl2ZXJ5
PUxvUHJpIGRlc3Q9TCBzdGF0dXM9MCBwb2xhcml0eT0wIGlycj0wIHRyaWc9TCBtYXNrPTAgZGVz
dF9pZDowCihYRU4pICAgICBJUlEgMTAgVmVjMTIwOgooWEVOKSAgICAgICBBcGljIDB4MDAsIFBp
biAxMDogdmVjPTc4IGRlbGl2ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0dXM9MCBwb2xhcml0eT0wIGly
cj0wIHRyaWc9RSBtYXNrPTAgZGVzdF9pZDowCihYRU4pICAgICBJUlEgMTEgVmVjMTM2OgooWEVO
KSAgICAgICBBcGljIDB4MDAsIFBpbiAxMTogdmVjPTg4IGRlbGl2ZXJ5PUxvUHJpIGRlc3Q9TCBz
dGF0dXM9MCBwb2xhcml0eT0wIGlycj0wIHRyaWc9RSBtYXNrPTAgZGVzdF9pZDowCihYRU4pICAg
ICBJUlEgMTIgVmVjMTQ0OgooWEVOKSAgICAgICBBcGljIDB4MDAsIFBpbiAxMjogdmVjPTkwIGRl
bGl2ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0dXM9MCBwb2xhcml0eT0wIGlycj0wIHRyaWc9RSBtYXNr
PTAgZGVzdF9pZDowCihYRU4pICAgICBJUlEgMTMgVmVjMTUyOgooWEVOKSAgICAgICBBcGljIDB4
MDAsIFBpbiAxMzogdmVjPTk4IGRlbGl2ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0dXM9MCBwb2xhcml0
eT0wIGlycj0wIHRyaWc9RSBtYXNrPTEgZGVzdF9pZDowCihYRU4pICAgICBJUlEgMTQgVmVjMTYw
OgooWEVOKSAgICAgICBBcGljIDB4MDAsIFBpbiAxNDogdmVjPWEwIGRlbGl2ZXJ5PUxvUHJpIGRl
c3Q9TCBzdGF0dXM9MCBwb2xhcml0eT0wIGlycj0wIHRyaWc9RSBtYXNrPTAgZGVzdF9pZDowCihY
RU4pICAgICBJUlEgMTUgVmVjMTY4OgooWEVOKSAgICAgICBBcGljIDB4MDAsIFBpbiAxNTogdmVj
PWE4IGRlbGl2ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0dXM9MCBwb2xhcml0eT0wIGlycj0wIHRyaWc9
RSBtYXNrPTAgZGVzdF9pZDowCihYRU4pICAgICBJUlEgMTYgVmVjMTc2OgooWEVOKSAgICAgICBB
cGljIDB4MDAsIFBpbiAxNjogdmVjPWIwIGRlbGl2ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0dXM9MCBw
b2xhcml0eT0xIGlycj0wIHRyaWc9TCBtYXNrPTAgZGVzdF9pZDowCihYRU4pICAgICBJUlEgMTgg
VmVjMTg0OgooWEVOKSAgICAgICBBcGljIDB4MDAsIFBpbiAxODogdmVjPWI4IGRlbGl2ZXJ5PUxv
UHJpIGRlc3Q9TCBzdGF0dXM9MCBwb2xhcml0eT0xIGlycj0wIHRyaWc9TCBtYXNrPTEgZGVzdF9p
ZDowCihYRU4pICAgICBJUlEgMTkgVmVjMTkyOgooWEVOKSAgICAgICBBcGljIDB4MDAsIFBpbiAx
OTogdmVjPWMwIGRlbGl2ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0dXM9MCBwb2xhcml0eT0xIGlycj0w
IHRyaWc9TCBtYXNrPTEgZGVzdF9pZDowCihYRU4pICAgICBJUlEgMjAgVmVjIDQ5OgooWEVOKSAg
ICAgICBBcGljIDB4MDAsIFBpbiAyMDogdmVjPTMxIGRlbGl2ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0
dXM9MCBwb2xhcml0eT0xIGlycj0wIHRyaWc9TCBtYXNrPTEgZGVzdF9pZDowCihYRU4pICAgICBJ
UlEgMjIgVmVjMjA4OgooWEVOKSAgICAgICBBcGljIDB4MDAsIFBpbiAyMjogdmVjPWQwIGRlbGl2
ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0dXM9MCBwb2xhcml0eT0xIGlycj0wIHRyaWc9TCBtYXNrPTEg
ZGVzdF9pZDowCihYRU4pICAgICBJUlEgMjMgVmVjMjAwOgooWEVOKSAgICAgICBBcGljIDB4MDAs
IFBpbiAyMzogdmVjPWM4IGRlbGl2ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0dXM9MCBwb2xhcml0eT0x
IGlycj0wIHRyaWc9TCBtYXNrPTAgZGVzdF9pZDowCihYRU4pIFttOiBtZW1vcnkgaW5mb10KKFhF
TikgUGh5c2ljYWwgbWVtb3J5IGluZm9ybWF0aW9uOgooWEVOKSAgICAgWGVuIGhlYXA6IDBrQiBm
cmVlCihYRU4pICAgICBoZWFwWzE0XTogNjQ1MTJrQiBmcmVlCihYRU4pICAgICBoZWFwWzE1XTog
MTMxMDcya0IgZnJlZQooWEVOKSAgICAgaGVhcFsxNl06IDI2MjE0NGtCIGZyZWUKKFhFTikgICAg
IGhlYXBbMTddOiA1MjIyMzZrQiBmcmVlCihYRU4pICAgICBoZWFwWzE4XTogMTA0ODU3MmtCIGZy
ZWUKKFhFTikgICAgIGhlYXBbMTldOiA2OTEwODhrQiBmcmVlCihYRU4pICAgICBoZWFwWzIwXTog
NTM3MTYwa0IgZnJlZQooWEVOKSAgICAgRG9tIGhlYXA6IDMyNTY3ODRrQiBmcmVlCihYRU4pIFtu
OiBOTUkgc3RhdGlzdGljc10KKFhFTikgQ1BVCU5NSQooWEVOKSAgIDAJICAwCihYRU4pICAgMQkg
IDAKKFhFTikgICAyCSAgMAooWEVOKSAgIDMJICAwCihYRU4pIGRvbTAgdmNwdTA6IE5NSSBuZWl0
aGVyIHBlbmRpbmcgbm9yIG1hc2tlZAooWEVOKSBbcTogZHVtcCBkb21haW4gKGFuZCBndWVzdCBk
ZWJ1ZykgaW5mb10KKFhFTikgJ3EnIHByZXNzZWQgLT4gZHVtcGluZyBkb21haW4gaW5mbyAobm93
PTB4NEM6QTE4N0I1RkQpCihYRU4pIEdlbmVyYWwgaW5mb3JtYXRpb24gZm9yIGRvbWFpbiAwOgoo
WEVOKSAgICAgcmVmY250PTMgZHlpbmc9MCBwYXVzZV9jb3VudD0wCihYRU4pICAgICBucl9wYWdl
cz0xODc1MzkgeGVuaGVhcF9wYWdlcz02IHNoYXJlZF9wYWdlcz0wIHBhZ2VkX3BhZ2VzPTAgZGly
dHlfY3B1cz17MS0zfSBtYXhfcGFnZXM9MTg4MTQ3CihYRU4pICAgICBoYW5kbGU9MDAwMDAwMDAt
MDAwMC0wMDAwLTAwMDAtMDAwMDAwMDAwMDAwIHZtX2Fzc2lzdD0wMDAwMDAwZAooWEVOKSBSYW5n
ZXNldHMgYmVsb25naW5nIHRvIGRvbWFpbiAwOgooWEVOKSAgICAgSS9PIFBvcnRzICB7IDAtMWYs
IDIyLTNmLCA0NC02MCwgNjItOWYsIGEyLTQwNywgNDBjLWNmYiwgZDAwLTIwNGYsIDIwNTgtZmZm
ZiB9CihYRU4pICAgICBJbnRlcnJ1cHRzIHsgMC0yNzkgfQooWEVOKSAgICAgSS9PIE1lbW9yeSB7
IDAtZmViZmYsIGZlYzAxLWZlZGZmLCBmZWUwMS1mZmZmZmZmZmZmZmZmZmZmIH0KKFhFTikgTWVt
b3J5IHBhZ2VzIGJlbG9uZ2luZyB0byBkb21haW4gMDoKKFhFTikgICAgIERvbVBhZ2UgbGlzdCB0
b28gbG9uZyB0byBkaXNwbGF5CihYRU4pICAgICBYZW5QYWdlIDAwMDAwMDAwMDAxNDc2ZjY6IGNh
Zj1jMDAwMDAwMDAwMDAwMDAyLCB0YWY9NzQwMDAwMDAwMDAwMDAwMgooWEVOKSAgICAgWGVuUGFn
ZSAwMDAwMDAwMDAwMTQ3NmY1OiBjYWY9YzAwMDAwMDAwMDAwMDAwMSwgdGFmPTc0MDAwMDAwMDAw
MDAwMDEKKFhFTikgICAgIFhlblBhZ2UgMDAwMDAwMDAwMDE0NzZmNDogY2FmPWMwMDAwMDAwMDAw
MDAwMDEsIHRhZj03NDAwMDAwMDAwMDAwMDAxCihYRU4pICAgICBYZW5QYWdlIDAwMDAwMDAwMDAx
NDc2ZjM6IGNhZj1jMDAwMDAwMDAwMDAwMDAxLCB0YWY9NzQwMDAwMDAwMDAwMDAwMQooWEVOKSAg
ICAgWGVuUGFnZSAwMDAwMDAwMDAwMGFhMGZkOiBjYWY9YzAwMDAwMDAwMDAwMDAwMiwgdGFmPTc0
MDAwMDAwMDAwMDAwMDIKKFhFTikgICAgIFhlblBhZ2UgMDAwMDAwMDAwMDExYzllYTogY2FmPWMw
MDAwMDAwMDAwMDAwMDIsIHRhZj03NDAwMDAwMDAwMDAwMDAyCihYRU4pIFZDUFUgaW5mb3JtYXRp
b24gYW5kIGNhbGxiYWNrcyBmb3IgZG9tYWluIDA6CihYRU4pICAgICBWQ1BVMDogQ1BVMCBbaGFz
PUZdIHBvbGw9MCB1cGNhbGxfcGVuZCA9IDAxLCB1cGNhbGxfbWFzayA9IDAwIGRpcnR5X2NwdXM9
e30gY3B1X2FmZmluaXR5PXswfQooWEVOKSAgICAgcGF1c2VfY291bnQ9MCBwYXVzZV9mbGFncz0w
CihYRU4pICAgICBObyBwZXJpb2RpYyB0aW1lcgooWEVOKSAgICAgVkNQVTE6IENQVTEgW2hhcz1G
XSBwb2xsPTAgdXBjYWxsX3BlbmQgPSAwMCwgdXBjYWxsX21hc2sgPSAwMCBkaXJ0eV9jcHVzPXsx
fSBjcHVfYWZmaW5pdHk9ezAtMTV9CihYRU4pICAgICBwYXVzZV9jb3VudD0wIHBhdXNlX2ZsYWdz
PTEKKFhFTikgICAgIE5vIHBlcmlvZGljIHRpbWVyCihYRU4pICAgICBWQ1BVMjogQ1BVMiBbaGFz
PUZdIHBvbGw9MCB1cGNhbGxfcGVuZCA9IDAwLCB1cGNhbGxfbWFzayA9IDAwIGRpcnR5X2NwdXM9
ezJ9IGNwdV9hZmZpbml0eT17MC0xNX0KKFhFTikgICAgIHBhdXNlX2NvdW50PTAgcGF1c2VfZmxh
Z3M9MQooWEVOKSAgICAgTm8gcGVyaW9kaWMgdGltZXIKKFhFTikgICAgIFZDUFUzOiBDUFUzIFto
YXM9Rl0gcG9sbD0wIHVwY2FsbF9wZW5kID0gMDAsIHVwY2FsbF9tYXNrID0gMDAgZGlydHlfY3B1
cz17M30gY3B1X2FmZmluaXR5PXswLTE1fQooWEVOKSAgICAgcGF1c2VfY291bnQ9MCBwYXVzZV9m
bGFncz0xCihYRU4pICAgICBObyBwZXJpb2RpYyB0aW1lcgooWEVOKSBOb3RpZnlpbmcgZ3Vlc3Qg
MDowICh2aXJxIDEsIHBvcnQgNSwgc3RhdCAwLzAvLTEpCihYRU4pIE5vdGlmeWluZyBndWVzdCAw
OjEgKHZpcnEgMSwgcG9ydCAxMSwgc3RhdCAwLzAvMCkKKFhFTikgTm90aWZ5aW5nIGd1ZXN0IDA6
MiAodmlycSAxLCBwb3J0IDE3LCBzdGF0IDAvMC8wKQooWEVOKSBOb3RpZnlpbmcgZ3Vlc3QgMDoz
ICh2aXJxIDEsIHBvcnQgMjMsIHN0YXQgMC8wLzApCgooWEVOKSBTaGFyZWQgZnJhbWVzIDAgLS0g
U2F2ZWQgZnJhbWVzIDAKWyAgMzI5LjMwNTc0N10gdihYRU4pIFtyOiBkdW1wIHJ1biBxdWV1ZXNd
CmNwdSAxCihYRU4pIHNjaGVkX3NtdF9wb3dlcl9zYXZpbmdzOiBkaXNhYmxlZAooWEVOKSBOT1c9
MHgwMDAwMDA0Q0FENEVFQjRGCihYRU4pIElkbGUgY3B1cG9vbDoKKFhFTikgU2NoZWR1bGVyOiBT
TVAgQ3JlZGl0IFNjaGVkdWxlciAoY3JlZGl0KQpbICAzMjkuMzA1NzQ4XSAgKFhFTikgaW5mbzoK
KFhFTikgCW5jcHVzICAgICAgICAgICAgICA9IDQKKFhFTikgCW1hc3RlciAgICAgICAgICAgICA9
IDAKKFhFTikgCWNyZWRpdCAgICAgICAgICAgICA9IDQwMAooWEVOKSAJY3JlZGl0IGJhbGFuY2Ug
ICAgID0gNDcKKFhFTikgCXdlaWdodCAgICAgICAgICAgICA9IDI1NgooWEVOKSAJcnVucV9zb3J0
ICAgICAgICAgID0gMjkyNAooWEVOKSAJZGVmYXVsdC13ZWlnaHQgICAgID0gMjU2CihYRU4pIAl0
c2xpY2UgICAgICAgICAgICAgPSAxMG1zCihYRU4pIAlyYXRlbGltaXQgICAgICAgICAgPSAxMDAw
dXMKKFhFTikgCWNyZWRpdHMgcGVyIG1zZWMgICA9IDEwCihYRU4pIAl0aWNrcyBwZXIgdHNsaWNl
ICAgPSAxCihYRU4pIAltaWdyYXRpb24gZGVsYXkgICAgPSAwdXMKIChYRU4pIGlkbGVyczogMDAw
YwooWEVOKSBhY3RpdmUgdmNwdXM6CihYRU4pIAkgIDE6IDA6IG1hc2tlZD0wIHBlbmRbMC4xXSBw
cmk9LTEgZmxhZ3M9MCBjcHU9MSBjcmVkaXQ9LTU0MCBbdz0yNTZdCmluZz0xIGV2ZW50X3NlbCAo
WEVOKSBDcHVwb29sIDA6CihYRU4pIFNjaGVkdWxlcjogU01QIENyZWRpdCBTY2hlZHVsZXIgKGNy
ZWRpdCkKKFhFTikgaW5mbzoKKFhFTikgCW5jcHVzICAgICAgICAgICAgICA9IDQKKFhFTikgCW1h
c3RlciAgICAgICAgICAgICA9IDAKKFhFTikgCWNyZWRpdCAgICAgICAgICAgICA9IDQwMAooWEVO
KSAJY3JlZGl0IGJhbGFuY2UgICAgID0gNDcKKFhFTikgCXdlaWdodCAgICAgICAgICAgICA9IDI1
NgooWEVOKSAJcnVucV9zb3J0ICAgICAgICAgID0gMjkyNAooWEVOKSAJZGVmYXVsdC13ZWlnaHQg
ICAgID0gMjU2CihYRU4pIAl0c2xpY2UgICAgICAgICAgICAgPSAxMG1zCihYRU4pIAlyYXRlbGlt
aXQgICAgICAgICAgPSAxMDAwdXMKKFhFTikgCWNyZWRpdHMgcGVyIG1zZWMgICA9IDEwCihYRU4p
IAl0aWNrcyBwZXIgdHNsaWNlICAgPSAxCihYRU4pIAltaWdyYXRpb24gZGVsYXkgICAgPSAwdXMK
MDAwMDAwMDAwMDAwMDAwMShYRU4pIGlkbGVyczogMDAwYwooWEVOKSBhY3RpdmUgdmNwdXM6CihY
RU4pIAkgIDE6IFswLjFdIHByaT0tMSBmbGFncz0wIGNwdT0xIGNyZWRpdD0tMTA5NiBbdz0yNTZd
CgooWEVOKSBDUFVbMDBdIFsgIDMyOS4zNzYyNjRdICAgc29ydD0yOTI0LCBzaWJsaW5nPTAwMDEs
ICBjb3JlPTAwMGYKKFhFTikgCXJ1bjogWzMyNzY3LjBdIHByaT0wIGZsYWdzPTAgY3B1PTAKKFhF
TikgCSAgMTogWzAuMF0gcHJpPTAgZmxhZ3M9MCBjcHU9MCBjcmVkaXQ9NzQgW3c9MjU2XQoxOiBt
YXNrZWQ9MCBwZW5kKFhFTikgQ1BVWzAxXSAgc29ydD0yOTI0LCBzaWJsaW5nPTAwMDIsIGNvcmU9
MDAwZgooWEVOKSAJcnVuOiBbMC4xXSBwcmk9LTEgZmxhZ3M9MCBjcHU9MSBjcmVkaXQ9LTEzNTUg
W3c9MjU2XQooWEVOKSAJICAxOiBbMzI3NjcuMV0gcHJpPS02NCBmbGFncz0wIGNwdT0xCihYRU4p
IENQVVswMl0gaW5nPTEgZXZlbnRfc2VsICBzb3J0PTI5MjQsIHNpYmxpbmc9MDAwNCwgMDAwMDAw
MDAwMDAwMDAwMWNvcmU9MDAwZgooWEVOKSAJcnVuOiBbMzI3NjcuMl0gcHJpPS02NCBmbGFncz0w
IGNwdT0yCgooWEVOKSBDUFVbMDNdIFsgIDMyOS40NDYzMzddICAgc29ydD0yOTI0LCBzaWJsaW5n
PTAwMDgsICBjb3JlPTAwMGYKKFhFTikgCXJ1bjogWzMyNzY3LjNdIHByaT0tNjQgZmxhZ3M9MCBj
cHU9MwoyOiBtYXNrZWQ9MSBwZW5kKFhFTikgW3M6IGR1bXAgc29mdHRzYyBzdGF0c10KaW5nPTEg
ZXZlbnRfc2VsIChYRU4pIFRTQyBtYXJrZWQgYXMgcmVsaWFibGUsIHdhcnAgPSAwIChjb3VudD0z
KQowMDAwMDAwMDAwMDAwMDAxKFhFTikgTm8gZG9tYWlucyBoYXZlIGVtdWxhdGVkIFRTQwoKKFhF
TikgW3Q6IGRpc3BsYXkgbXVsdGktY3B1IGNsb2NrIGluZm9dClsgIDMyOS40ODg1NzldICAoWEVO
KSBTeW5jZWQgc3RpbWUgc2tldzogbWF4PTE1NzYxbnMgYXZnPTEyMTI2bnMgc2FtcGxlcz0yIGN1
cnJlbnQ9MTU3NjFucwooWEVOKSBTeW5jZWQgY3ljbGVzIHNrZXc6IG1heD0xNzAgYXZnPTE2NSBz
YW1wbGVzPTIgY3VycmVudD0xNjAKIChYRU4pIFt1OiBkdW1wIG51bWEgaW5mb10KMzogbWFza2Vk
PTEgcGVuZChYRU4pICd1JyBwcmVzc2VkIC0+IGR1bXBpbmcgbnVtYSBpbmZvIChub3ctMHg0QzpC
QUExOUM1NykKaW5nPTAgZXZlbnRfc2VsIChYRU4pIGlkeDAgLT4gTk9ERTAgc3RhcnQtPjAgc2l6
ZS0+MTM2OTYwMCBmcmVlLT44MTQxOTYKMDAwMDAwMDAwMDAwMDAwMChYRU4pIHBoeXNfdG9fbmlk
KDAwMDAwMDAwMDAwMDEwMDApIC0+IDAgc2hvdWxkIGJlIDAKCihYRU4pIENQVTAgLT4gTk9ERTAK
KFhFTikgQ1BVMSAtPiBOT0RFMAooWEVOKSBDUFUyIC0+IE5PREUwCihYRU4pIENQVTMgLT4gTk9E
RTAKKFhFTikgTWVtb3J5IGxvY2F0aW9uIG9mIGVhY2ggZG9tYWluOgooWEVOKSBEb21haW4gMCAo
dG90YWw6IDE4NzUzOSk6ClsgIDMyOS41NDYyNjldICAoWEVOKSAgICAgTm9kZSAwOiAxODc1MzkK
IChYRU4pIFt2OiBkdW1wIEludGVsJ3MgVk1DU10KCihYRU4pICoqKioqKioqKioqIFZNQ1MgQXJl
YXMgKioqKioqKioqKioqKioKKFhFTikgKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioKWyAgMzI5LjU4NjM2Ml0gcChYRU4pIFt6OiBwcmludCBpb2FwaWMgaW5mb10KZW5kaW5n
OgooWEVOKSBudW1iZXIgb2YgTVAgSVJRIHNvdXJjZXM6IDE1LgpbICAzMjkuNTg2MzYzXSAgKFhF
TikgbnVtYmVyIG9mIElPLUFQSUMgIzIgcmVnaXN0ZXJzOiAyNC4KKFhFTikgdGVzdGluZyB0aGUg
SU8gQVBJQy4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uCiAgKFhFTikgSU8gQVBJQyAjMi4uLi4uLgoo
WEVOKSAuLi4uIHJlZ2lzdGVyICMwMDogMDIwMDAwMDAKKFhFTikgLi4uLi4uLiAgICA6IHBoeXNp
Y2FsIEFQSUMgaWQ6IDAyCihYRU4pIC4uLi4uLi4gICAgOiBEZWxpdmVyeSBUeXBlOiAwCihYRU4p
IC4uLi4uLi4gICAgOiBMVFMgICAgICAgICAgOiAwCjAwMDAwMDAwMDAwMDAwMDAoWEVOKSAuLi4u
IHJlZ2lzdGVyICMwMTogMDAxNzAwMjAKKFhFTikgLi4uLi4uLiAgICAgOiBtYXggcmVkaXJlY3Rp
b24gZW50cmllczogMDAxNwooWEVOKSAuLi4uLi4uICAgICA6IFBSUSBpbXBsZW1lbnRlZDogMAoo
WEVOKSAuLi4uLi4uICAgICA6IElPIEFQSUMgdmVyc2lvbjogMDAyMAooWEVOKSAuLi4uIElSUSBy
ZWRpcmVjdGlvbiB0YWJsZToKKFhFTikgIE5SIExvZyBQaHkgTWFzayBUcmlnIElSUiBQb2wgU3Rh
dCBEZXN0IERlbGkgVmVjdDogICAKIChYRU4pICAwMCAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAg
IDAgICAgMCAgICAwICAgIDAwCjAwMDAwMDAwMDAwMDAwMDAoWEVOKSAgMDEgMDAwIDAwICAwICAg
IDAgICAgMCAgIDAgICAwICAgIDEgICAgMSAgICAzOAogKFhFTikgIDAyIDAwMCAwMCAgMCAgICAw
ICAgIDAgICAwICAgMCAgICAxICAgIDEgICAgRjAKMDAwMDAwMDAwMDAwMDAwMChYRU4pICAwMyAw
MDAgMDAgIDAgICAgMCAgICAwICAgMCAgIDAgICAgMSAgICAxICAgIDQwCiAoWEVOKSAgMDQgMDAw
IDAwICAwICAgIDAgICAgMCAgIDAgICAwICAgIDEgICAgMSAgICA0OAowMDAwMDAwMDAwMDAwMDAw
KFhFTikgIDA1IDAwMCAwMCAgMCAgICAwICAgIDAgICAwICAgMCAgICAxICAgIDEgICAgNTAKIChY
RU4pICAwNiAwMDAgMDAgIDAgICAgMCAgICAwICAgMCAgIDAgICAgMSAgICAxICAgIDU4CjAwMDAw
MDAwMDAwMDAwMDAoWEVOKSAgMDcgMDAwIDAwICAwICAgIDAgICAgMCAgIDAgICAwICAgIDEgICAg
MSAgICA2MAogKFhFTikgIDA4IDAwMCAwMCAgMCAgICAwICAgIDAgICAwICAgMCAgICAxICAgIDEg
ICAgNjgKMDAwMDAwMDAwMDAwMDAwMChYRU4pICAwOSAwMDAgMDAgIDAgICAgMSAgICAwICAgMCAg
IDAgICAgMSAgICAxICAgIDcwCiAoWEVOKSAgMGEgMDAwIDAwICAwICAgIDAgICAgMCAgIDAgICAw
ICAgIDEgICAgMSAgICA3OAowMDAwMDAwMDAwMDAwMDAwKFhFTikgIDBiIDAwMCAwMCAgMCAgICAw
ICAgIDAgICAwICAgMCAgICAxICAgIDEgICAgODgKIChYRU4pICAwYyAwMDAgMDAgIDAgICAgMCAg
ICAwICAgMCAgIDAgICAgMSAgICAxICAgIDkwCjAwMDAwMDAwMDAwMDAwMDAoWEVOKSAgMGQgMDAw
IDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDEgICAgMSAgICA5OAoKKFhFTikgIDBlIDAwMCAw
MCAgMCAgICAwICAgIDAgICAwICAgMCAgICAxICAgIDEgICAgQTAKWyAgMzI5LjcyNDcyMV0gIChY
RU4pICAwZiAwMDAgMDAgIDAgICAgMCAgICAwICAgMCAgIDAgICAgMSAgICAxICAgIEE4CiAgKFhF
TikgIDEwIDAwMCAwMCAgMCAgICAxICAgIDAgICAxICAgMCAgICAxICAgIDEgICAgQjAKMDAwMDAw
MDAwMDAwMDAwMChYRU4pICAxMSAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAw
ICAgIDAwCiAoWEVOKSAgMTIgMDAwIDAwICAxICAgIDEgICAgMCAgIDEgICAwICAgIDEgICAgMSAg
ICBCOAowMDAwMDAwMDAwMDAwMDAwKFhFTikgIDEzIDAwMCAwMCAgMSAgICAxICAgIDAgICAxICAg
MCAgICAxICAgIDEgICAgQzAKIChYRU4pICAxNCAwMDAgMDAgIDEgICAgMSAgICAwICAgMSAgIDAg
ICAgMSAgICAxICAgIDMxCjAwMDAwMDAwMDAwMDAwMDAoWEVOKSAgMTUgMDAwIDAwICAxICAgIDAg
ICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAwMAogKFhFTikgIDE2IDAwMCAwMCAgMSAgICAxICAg
IDAgICAxICAgMCAgICAxICAgIDEgICAgRDAKMDAwMDAwMDAwMDAwMDAwMChYRU4pICAxNyAwMDAg
MDAgIDAgICAgMSAgICAwICAgMSAgIDAgICAgMSAgICAxICAgIEM4CihYRU4pIFVzaW5nIHZlY3Rv
ci1iYXNlZCBpbmRleGluZwooWEVOKSBJUlEgdG8gcGluIG1hcHBpbmdzOgogKFhFTikgSVJRMjQw
IC0+IDA6MgooWEVOKSBJUlE1NiAtPiAwOjEKKFhFTikgSVJRNjQgLT4gMDozCihYRU4pIElSUTcy
IC0+IDA6NAooWEVOKSBJUlE4MCAtPiAwOjUKKFhFTikgSVJRODggLT4gMDo2CihYRU4pIElSUTk2
IC0+IDA6NwooWEVOKSBJUlExMDQgLT4gMDo4CihYRU4pIElSUTExMiAtPiAwOjkKKFhFTikgSVJR
MTIwIC0+IDA6MTAKKFhFTikgSVJRMTM2IC0+IDA6MTEKKFhFTikgSVJRMTQ0IC0+IDA6MTIKKFhF
TikgSVJRMTUyIC0+IDA6MTMKKFhFTikgSVJRMTYwIC0+IDA6MTQKKFhFTikgSVJRMTY4IC0+IDA6
MTUKKFhFTikgSVJRMTc2IC0+IDA6MTYKKFhFTikgSVJRMTg0IC0+IDA6MTgKKFhFTikgSVJRMTky
IC0+IDA6MTkKKFhFTikgSVJRNDkgLT4gMDoyMAooWEVOKSBJUlEyMDggLT4gMDoyMgooWEVOKSBJ
UlEyMDAgLT4gMDoyMwooWEVOKSAuLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4g
ZG9uZS4KMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMApbICAzMjkuODQ3Nzc1XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAK
WyAgMzI5Ljg2MTczNl0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMyOS44NzU2OThdICAg
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMjkuODg5NjU5XSAgICAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAKWyAgMzI5LjkwMzYyMF0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMyOS45MTc1
ODJdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwNDAwODAwMjE4MgpbICAzMjkuOTMxNTQzXSAgICAKWyAgMzI5Ljkz
NDg1NF0gZ2xvYmFsIG1hc2s6ClsgIDMyOS45MzQ4NTRdICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZm
ZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZm
ZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZgpbICAzMjkuOTUwMDY5XSAgICBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZm
ZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZm
ZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYKWyAgMzI5Ljk2NDAzMF0g
ICAgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZm
ZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZm
ZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmClsgIDMyOS45Nzc5OTFdICAgIGZmZmZmZmZmZmZmZmZm
ZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZm
ZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZm
ZmZmZmZmZgpbICAzMjkuOTkxOTUyXSAgICBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZm
ZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZm
ZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYKWyAgMzMwLjAw
NTkxM10gICAgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZm
ZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZm
ZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmClsgIDMzMC4wMTk4NzRdICAgIGZmZmZmZmZm
ZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZm
ZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZm
ZmZmZmZmZmZmZmZmZgpbICAzMzAuMDMzODM1XSAgICBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZm
ZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZm
ZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZjgwMDgxMDQxMDUKWyAg
MzMwLjA0Nzc5Nl0gICAgClsgIDMzMC4wNTExMDhdIGdsb2JhbGx5IHVubWFza2VkOgpbICAzMzAu
MDUxMTA4XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzMwLjA2Njg1OV0gICAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwClsgIDMzMC4wODA4MjFdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApb
ICAzMzAuMDk0NzgxXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzMwLjEwODc0M10gICAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMzMC4xMjI3MDRdICAgIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMApbICAzMzAuMTM2NjY1XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzMwLjE1MDYy
N10gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDA0MDAwMDAyMDgyClsgIDMzMC4xNjQ1ODddICAgIApbICAzMzAuMTY3
ODk4XSBsb2NhbCBjcHUxIG1hc2s6ClsgIDMzMC4xNjc4OTldICAgIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMApbICAzMzAuMTgzNDcwXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzMwLjE5NzQz
Ml0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMzMC4yMTEzOTJdICAgIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMApbICAzMzAuMjI1MzU0XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzMw
LjIzOTMxNl0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMzMC4yNTMyNzddICAgIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMApbICAzMzAuMjY3MjM4XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDFmODAK
WyAgMzMwLjI4MTE5OV0gICAgClsgIDMzMC4yODQ1MTBdIGxvY2FsbHkgdW5tYXNrZWQ6ClsgIDMz
MC4yODQ1MTFdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMzAuMzAwMTcyXSAgICAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAKWyAgMzMwLjMxNDEzM10gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
ClsgIDMzMC4zMjgwOTRdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMzAuMzQyMDU2XSAg
ICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzMwLjM1NjAxN10gICAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwClsgIDMzMC4zNjk5NzhdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMzAuMzgz
OTQyXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwODAKWyAgMzMwLjM5NzkwMF0gICAgClsgIDMzMC40
MDEyMTJdIHBlbmRpbmcgbGlzdDoKWyAgMzMwLjQwNDI1NV0gICAwOiBldmVudCAxIC0+IGlycSAy
NzIgbG9jYWxseS1tYXNrZWQKWyAgMzMwLjQwOTI2Nl0gICAxOiBldmVudCA3IC0+IGlycSAyNzgK
WyAgMzMwLjQxMjkzN10gICAxOiBldmVudCA4IC0+IGlycSAyNzkgZ2xvYmFsbHktbWFza2VkClsg
IDMzMC40MTgwMzddICAgMjogZXZlbnQgMTMgLT4gaXJxIDI4NCBsb2NhbGx5LW1hc2tlZApbICAz
MzAuNDIzMTM4XSAgIDA6IGV2ZW50IDI3IC0+IGlycSAyOTcgZ2xvYmFsbHktbWFza2VkIGxvY2Fs
bHktbWFza2VkClsgIDMzMC40Mjk2NzFdICAgMDogZXZlbnQgMzggLT4gaXJxIDMwMyBsb2NhbGx5
LW1hc2tlZApbICAzMzAuNDM0Nzk0XSAKWyAgMzMwLjQzNDc5Nl0gdmNwdSAwClsgIDMzMC40MzQ3
OTZdICAgMDogbWFza2VkPTAgcGVuZGluZz0wIGV2ZW50X3NlbCAwMDAwMDAwMDAwMDAwMDAwClsg
IDMzMC40NDAwMzhdICAgMTogbWFza2VkPTAgcGVuZGluZz0wIGV2ZW50X3NlbCAwMDAwMDAwMDAw
MDAwMDAwClsgIDMzMC40NDYxMjNdICAgMjogbWFza2VkPTEgcGVuZGluZz0xIGV2ZW50X3NlbCAw
MDAwMDAwMDAwMDAwMDAxClsgIDMzMC40NTIyMDldICAgMzogbWFza2VkPTEgcGVuZGluZz0xIGV2
ZW50X3NlbCAwMDAwMDAwMDAwMDAwMDAxClsgIDMzMC40NTgyOTVdICAgClsgIDMzMC40NjQzNzld
IHBlbmRpbmc6ClsgIDMzMC40NjQzODBdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMzAu
NDc5MjM2XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzMwLjQ5MzE5N10gICAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwClsgIDMzMC41MDcxNThdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApb
ICAzMzAuNTIxMTE5XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzMwLjUzNTA4MF0gICAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMzMC41NDkwNDFdICAgIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMApbICAzMzAuNTYzMDAzXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDQwMDgyMDIxMDYKWyAgMzMwLjU3Njk2
NF0gICAgClsgIDMzMC41ODAyNzVdIGdsb2JhbCBtYXNrOgpbICAzMzAuNTgwMjc1XSAgICBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZm
ZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZm
IGZmZmZmZmZmZmZmZmZmZmYKWyAgMzMwLjU5NTQ4OV0gICAgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZm
ZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZm
ClsgIDMzMC42MDk0NTBdICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZm
ZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZgpbICAzMzAuNjIzNDEyXSAg
ICBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZm
ZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYKWyAgMzMwLjYzNzM3Ml0gICAgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZm
ZmZmZmZmClsgIDMzMC42NTEzMzRdICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZgpbICAzMzAuNjY1
Mjk1XSAgICBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYKWyAgMzMwLjY3OTI1N10gICAgZmZmZmZmZmZm
ZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmY4MDA4MTA0MTA1ClsgIDMzMC42OTMyMThdICAgIApbICAzMzAuNjk2NTI5XSBnbG9iYWxseSB1
bm1hc2tlZDoKWyAgMzMwLjY5NjUzMF0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMzMC43
MTIyODBdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMzAuNzI2MjQxXSAgICAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAKWyAgMzMwLjc0MDIwM10gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsg
IDMzMC43NTQxNjNdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMzAuNzY4MTI1XSAgICAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzMwLjc4MjA4Nl0gICAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwClsgIDMzMC43OTYwNDhdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwNDAwMDIwMjAwMgpbICAzMzAuODEwMDA4
XSAgICAKWyAgMzMwLjgxMzMxOV0gbG9jYWwgY3B1MCBtYXNrOgpbICAzMzAuODEzMzIwXSAgICAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzMwLjgyODg5MV0gICAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwClsgIDMzMC44NDI4NTRdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMzAuODU2ODE0
XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzMwLjg3MDc3Nl0gICAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwClsgIDMzMC44ODQ3MzZdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMzAu
ODk4Njk4XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzMwLjkxMjY1OV0gICAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCBm
ZmZmZmZmZmZlMDAwMDdmClsgIDMzMC45MjY2MjBdICAgIApbICAzMzAuOTI5OTMxXSBsb2NhbGx5
IHVubWFza2VkOgpbICAzMzAuOTI5OTMyXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzMw
Ljk0NTU5M10gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMzMC45NTk1NTVdICAgIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMApbICAzMzAuOTczNTE2XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAK
WyAgMzMwLjk4NzQ3Nl0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMzMS4wMDE0MzhdICAg
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMzEuMDE1NDAwXSAgICAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAKWyAgMzMxLjAyOTM2MF0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDA0MDAwMDAwMDAyClsgIDMzMS4wNDMz
MjFdICAgIApbICAzMzEuMDQ2NjMzXSBwZW5kaW5nIGxpc3Q6ClsgIDMzMS4wNDk2ODFdICAgMDog
ZXZlbnQgMSAtPiBpcnEgMjcyIGwyLWNsZWFyClsgIDMzMS4wNTQxNTFdICAgMDogZXZlbnQgMiAt
PiBpcnEgMjczIGwyLWNsZWFyIGdsb2JhbGx5LW1hc2tlZApbICAzMzEuMDYwMDU4XSAgIDE6IGV2
ZW50IDggLT4gaXJxIDI3OSBsMi1jbGVhciBnbG9iYWxseS1tYXNrZWQgbG9jYWxseS1tYXNrZWQK
WyAgMzMxLjA2NzMxMl0gICAyOiBldmVudCAxMyAtPiBpcnEgMjg0IGwyLWNsZWFyIGxvY2FsbHkt
bWFza2VkClsgIDMzMS4wNzMyMTRdICAgMzogZXZlbnQgMjEgLT4gaXJxIDI5MiBsMi1jbGVhciBs
b2NhbGx5LW1hc2tlZApbICAzMzEuMDc5MTIxXSAgIDA6IGV2ZW50IDI3IC0+IGlycSAyOTcgbDIt
Y2xlYXIgZ2xvYmFsbHktbWFza2VkClsgIDMzMS4wODUxMTZdICAgMDogZXZlbnQgMzggLT4gaXJx
IDMwMyBsMi1jbGVhcgpbICAzMzEuMDg5NzE2XSAKWyAgMzMxLjA4OTcxN10gdmNwdSAyClsgIDMz
MS4wODk3MThdICAgMDogbWFza2VkPTAgcGVuZGluZz0wIGV2ZW50X3NlbCAwMDAwMDAwMDAwMDAw
MDAwClsgIDMzMS4wOTQ5NzldICAgMTogbWFza2VkPTEgcGVuZGluZz0xIGV2ZW50X3NlbCAwMDAw
MDAwMDAwMDAwMDAxClsgIDMzMS4xMDEwNjRdICAgMjogbWFza2VkPTAgcGVuZGluZz0xIGV2ZW50
X3NlbCAwMDAwMDAwMDAwMDAwMDAxClsgIDMzMS4xMDcxNDldICAgMzogbWFza2VkPTEgcGVuZGlu
Zz0xIGV2ZW50X3NlbCAwMDAwMDAwMDAwMDAwMDAxClsgIDMzMS4xMTMyMzRdICAgClsgIDMzMS4x
MTkzMTldIHBlbmRpbmc6ClsgIDMzMS4xMTkzMTldICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApb
ICAzMzEuMTM0MTc1XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzMxLjE0ODEzNl0gICAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMzMS4xNjIwOTddICAgIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMApbICAzMzEuMTc2MDU4XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzMxLjE5MDAy
MF0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMzMS4yMDM5ODFdICAgIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMApbICAzMzEuMjE3OTQ0XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDgyMGUxMDYKWyAgMzMx
LjIzMTkwNF0gICAgClsgIDMzMS4yMzUyMTRdIGdsb2JhbCBtYXNrOgpbICAzMzEuMjM1MjE0XSAg
ICBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZm
ZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYKWyAgMzMxLjI1MDQyOF0gICAgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZm
ZmZmZmZmClsgIDMzMS4yNjQzODldICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZgpbICAzMzEuMjc4
MzUxXSAgICBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYKWyAgMzMxLjI5MjMxMl0gICAgZmZmZmZmZmZm
ZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmClsgIDMzMS4zMDYyNzRdICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZm
ZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZgpbICAz
MzEuMzIwMjM0XSAgICBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZm
ZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYKWyAgMzMxLjMzNDE5Nl0gICAgZmZm
ZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZm
ZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmY4MDA4MTA0MTA1ClsgIDMzMS4zNDgxNTddICAgIApbICAzMzEuMzUxNDY4XSBnbG9i
YWxseSB1bm1hc2tlZDoKWyAgMzMxLjM1MTQ2OF0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsg
IDMzMS4zNjcyMjBdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMzEuMzgxMTgxXSAgICAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzMxLjM5NTE0MV0gICAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwClsgIDMzMS40MDkxMDNdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMzEuNDIzMDY0
XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzMxLjQzNzAyNV0gICAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwClsgIDMzMS40NTA5ODZdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDIwYTAwMApbICAzMzEu
NDY0OTQ4XSAgICAKWyAgMzMxLjQ2ODI1OF0gbG9jYWwgY3B1MiBtYXNrOgpbICAzMzEuNDY4MjU5
XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzMxLjQ4MzgzMV0gICAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwClsgIDMzMS40OTc3OTNdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMzEu
NTExNzUzXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzMxLjUyNTcxNV0gICAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwClsgIDMzMS41Mzk2NzZdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApb
ICAzMzEuNTUzNjQwXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzMxLjU2NzU5OF0gICAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDdlMDAwClsgIDMzMS41ODE1NjBdICAgIApbICAzMzEuNTg0ODcwXSBs
b2NhbGx5IHVubWFza2VkOgpbICAzMzEuNTg0ODcxXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAK
WyAgMzMxLjYwMDUzMl0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMzMS42MTQ0OTRdICAg
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMzEuNjI4NDU0XSAgICAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAKWyAgMzMxLjY0MjQxNl0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMzMS42NTYz
NzddICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMzEuNjcwMzM5XSAgICAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAKWyAgMzMxLjY4NDI5OV0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDBhMDAwClsgIDMz
MS42OTgyNjBdICAgIApbICAzMzEuNzAxNTcxXSBwZW5kaW5nIGxpc3Q6ClsgIDMzMS43MDQ2MTZd
ICAgMDogZXZlbnQgMiAtPiBpcnEgMjczIGdsb2JhbGx5LW1hc2tlZCBsb2NhbGx5LW1hc2tlZApb
ICAzMzEuNzExMDU4XSAgIDE6IGV2ZW50IDggLT4gaXJxIDI3OSBnbG9iYWxseS1tYXNrZWQgbG9j
YWxseS1tYXNrZWQKWyAgMzMxLjcxNzUwMl0gICAyOiBldmVudCAxMyAtPiBpcnEgMjg0ClsgIDMz
MS43MjEyNjFdICAgMjogZXZlbnQgMTQgLT4gaXJxIDI4NSBnbG9iYWxseS1tYXNrZWQKWyAgMzMx
LjcyNjQ1Ml0gICAyOiBldmVudCAxNSAtPiBpcnEgMjg2ClsgIDMzMS43MzAyMTFdICAgMzogZXZl
bnQgMjEgLT4gaXJxIDI5MiBsb2NhbGx5LW1hc2tlZApbICAzMzEuNzM1MzEyXSAgIDA6IGV2ZW50
IDI3IC0+IGlycSAyOTcgZ2xvYmFsbHktbWFza2VkIGxvY2FsbHktbWFza2VkClsgIDMzMS43NDE4
NjhdIApbICAzMzEuNzQxODY5XSB2Y3B1IDMKWyAgMzMxLjc0MTg2OV0gICAwOiBtYXNrZWQ9MSBw
ZW5kaW5nPTEgZXZlbnRfc2VsIDAwMDAwMDAwMDAwMDAwMDEKWyAgMzMxLjc0NzEyN10gICAxOiBt
YXNrZWQ9MCBwZW5kaW5nPTAgZXZlbnRfc2VsIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzMxLjc1MzIx
M10gICAyOiBtYXNrZWQ9MCBwZW5kaW5nPTAgZXZlbnRfc2VsIDAwMDAwMDAwMDAwMDAwMDAKWyAg
MzMxLjc1OTI5OV0gICAzOiBtYXNrZWQ9MCBwZW5kaW5nPTEgZXZlbnRfc2VsIDAwMDAwMDAwMDAw
MDAwMDEKWyAgMzMxLjc2NTM4NF0gICAKWyAgMzMxLjc3MTQ2OF0gcGVuZGluZzoKWyAgMzMxLjc3
MTQ2OF0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMzMS43ODYzMjRdICAgIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMApbICAzMzEuODAwMjg1XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAg
MzMxLjgxNDI0NV0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMzMS44MjgyMDZdICAgIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMzEuODQyMTY4XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAKWyAgMzMxLjg1NjEyOV0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMzMS44NzAwOTFd
ICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwODMwNDEwNApbICAzMzEuODg0MDUzXSAgICAKWyAgMzMxLjg4NzM2
NF0gZ2xvYmFsIG1hc2s6ClsgIDMzMS44ODczNjVdICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZm
ZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZm
ZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZgpb
ICAzMzEuOTAyNTc3XSAgICBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZm
ZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZm
ZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYKWyAgMzMxLjkxNjUzOF0gICAg
ZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZm
ZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZm
ZmZmZiBmZmZmZmZmZmZmZmZmZmZmClsgIDMzMS45MzA1MDBdICAgIGZmZmZmZmZmZmZmZmZmZmYg
ZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZm
ZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZm
ZmZmZgpbICAzMzEuOTQ0NDYxXSAgICBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYg
ZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZm
ZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYKWyAgMzMxLjk1ODQy
Ml0gICAgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYg
ZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZm
ZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmClsgIDMzMS45NzIzODRdICAgIGZmZmZmZmZmZmZm
ZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYg
ZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZm
ZmZmZmZmZmZmZgpbICAzMzEuOTg2MzQ1XSAgICBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZm
ZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYg
ZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZjgwMDgxMDQxMDUKWyAgMzMy
LjAwMDMwNV0gICAgClsgIDMzMi4wMDM2MTZdIGdsb2JhbGx5IHVubWFza2VkOgpbICAzMzIuMDAz
NjE3XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzMyLjAxOTM2OF0gICAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwClsgIDMzMi4wMzMzMjldICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAz
MzIuMDQ3MjkxXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzMyLjA2MTI1Ml0gICAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwClsgIDMzMi4wNzUyMTNdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MApbICAzMzIuMDg5MTc0XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzMyLjEwMzEzNV0g
ICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMjgwMDAwClsgIDMzMi4xMTcwOTddICAgIApbICAzMzIuMTIwNDA3
XSBsb2NhbCBjcHUzIG1hc2s6ClsgIDMzMi4xMjA0MDhdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MApbICAzMzIuMTM1OTgwXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzMyLjE0OTk0MV0g
ICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMzMi4xNjM5MDJdICAgIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMApbICAzMzIuMTc3ODY0XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzMyLjE5
MTgyNV0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMzMi4yMDU3ODZdICAgIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMApbICAzMzIuMjE5NzQ3XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDFmODAwMDAKWyAg
MzMyLjIzMzcwOV0gICAgClsgIDMzMi4yMzcwMTldIGxvY2FsbHkgdW5tYXNrZWQ6ClsgIDMzMi4y
MzcwMjBdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMzIuMjUyNjgxXSAgICAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAKWyAgMzMyLjI2NjY0M10gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsg
IDMzMi4yODA2MDRdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMzIuMjk0NTY1XSAgICAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzMyLjMwODUyNV0gICAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwClsgIDMzMi4zMjI0ODddICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMzIuMzM2NDQ4
XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAyODAwMDAKWyAgMzMyLjM1MDQwOV0gICAgClsgIDMzMi4zNTM3
MjFdIHBlbmRpbmcgbGlzdDoKWyAgMzMyLjM1Njc2NF0gICAwOiBldmVudCAyIC0+IGlycSAyNzMg
Z2xvYmFsbHktbWFza2VkIGxvY2FsbHktbWFza2VkClsgIDMzMi4zNjMyMDhdICAgMTogZXZlbnQg
OCAtPiBpcnEgMjc5IGdsb2JhbGx5LW1hc2tlZCBsb2NhbGx5LW1hc2tlZApbICAzMzIuMzY5NjUy
XSAgIDI6IGV2ZW50IDE0IC0+IGlycSAyODUgZ2xvYmFsbHktbWFza2VkIGxvY2FsbHktbWFza2Vk
ClsgIDMzMi4zNzYxODVdICAgMzogZXZlbnQgMTkgLT4gaXJxIDI5MApbICAzMzIuMzc5OTQyXSAg
IDM6IGV2ZW50IDIwIC0+IGlycSAyOTEgZ2xvYmFsbHktbWFza2VkClsgIDMzMi4zODUxMzNdICAg
MzogZXZlbnQgMjEgLT4gaXJxIDI5MgpbICAzMzIuMzg4ODkzXSAgIDA6IGV2ZW50IDI3IC0+IGly
cSAyOTcgZ2xvYmFsbHktbWFza2VkIGxvY2FsbHktbWFza2VkCgo=
--14dae9340a33ade82404c6b2a306
Content-Type: text/plain; charset=US-ASCII; name="syslog-bad.txt"
Content-Disposition: attachment; filename="syslog-bad.txt"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_h5lfctaa1

WyAgMjMyLjU1NzcwOV0gaXdsd2lmaSAwMDAwOjAyOjAwLjA6IEwxIERpc2FibGVkOyBFbmFibGlu
ZyBMMFMKWyAgMjMyLjU2OTcxMl0gaXdsd2lmaSAwMDAwOjAyOjAwLjA6IFJhZGlvIHR5cGU9MHgx
LTB4Mi0weDAKWyAgMjM1LjAxNTg4NV0gaXdsd2lmaSAwMDAwOjAyOjAwLjA6IFBDSSBJTlQgQSBk
aXNhYmxlZApbICAyMzUuNzU5NjM1XSBici1ldGgwOiBwb3J0IDEoZXRoMCkgZW50ZXJpbmcgZm9y
d2FyZGluZyBzdGF0ZQpbICAyMzUuNzczNTk5XSBici1ldGgwOiBwb3J0IDEoZXRoMCkgZW50ZXJp
bmcgZm9yd2FyZGluZyBzdGF0ZQpbICAyMzUuNzc5MjY1XSBici1ldGgwOiBwb3J0IDEoZXRoMCkg
ZW50ZXJpbmcgZm9yd2FyZGluZyBzdGF0ZQpbICAyMzUuNzg1NTI5XSBici1ldGgwOiBwb3J0IDEo
ZXRoMCkgZW50ZXJpbmcgZm9yd2FyZGluZyBzdGF0ZQpbICAyMzYuMDI1NDk3XSBlMTAwMGUgMDAw
MDowMDoxOS4wOiBQQ0kgSU5UIEEgZGlzYWJsZWQKWyAgMjM2LjMzNTQwMF0gZWhjaV9oY2QgMDAw
MDowMDoxZC4wOiByZW1vdmUsIHN0YXRlIDEKWyAgMjM2LjM0MDI0OV0gdXNiIHVzYjI6IFVTQiBk
aXNjb25uZWN0LCBkZXZpY2UgbnVtYmVyIDEKWyAgMjM2LjM0NTUzOF0gdXNiIDItMTogVVNCIGRp
c2Nvbm5lY3QsIGRldmljZSBudW1iZXIgMgpbICAyMzYuMzUwNzE2XSB1c2IgMi0xLjU6IFVTQiBk
aXNjb25uZWN0LCBkZXZpY2UgbnVtYmVyIDMKWyAgMjM2LjQ5NTU3MF0gdXNiIDItMS42OiBVU0Ig
ZGlzY29ubmVjdCwgZGV2aWNlIG51bWJlciA0ClsgIDIzNi43MTAyNjldIGVoY2lfaGNkIDAwMDA6
MDA6MWQuMDogVVNCIGJ1cyAyIGRlcmVnaXN0ZXJlZApbICAyMzYuNzE1ODU2XSBlaGNpX2hjZCAw
MDAwOjAwOjFkLjA6IFBDSSBJTlQgQSBkaXNhYmxlZApbICAyMzYuNzIxMDI3XSBlaGNpX2hjZCAw
MDAwOjAwOjFhLjA6IHJlbW92ZSwgc3RhdGUgNApbICAyMzYuNzI2MDQ4XSB1c2IgdXNiMTogVVNC
IGRpc2Nvbm5lY3QsIGRldmljZSBudW1iZXIgMQpbICAyMzYuNzMxMzExXSB1c2IgMS0xOiBVU0Ig
ZGlzY29ubmVjdCwgZGV2aWNlIG51bWJlciAyClsgIDIzNi43NDA2NThdIGVoY2lfaGNkIDAwMDA6
MDA6MWEuMDogVVNCIGJ1cyAxIGRlcmVnaXN0ZXJlZApbICAyMzYuNzQ2MjI0XSBlaGNpX2hjZCAw
MDAwOjAwOjFhLjA6IFBDSSBJTlQgQSBkaXNhYmxlZApbICAyMzYuNzk1NDE4XSB4aGNpX2hjZCAw
MDAwOjAwOjE0LjA6IHJlbW92ZSwgc3RhdGUgNApbICAyMzYuODAwMjY1XSB1c2IgdXNiNDogVVNC
IGRpc2Nvbm5lY3QsIGRldmljZSBudW1iZXIgMQpbICAyMzYuODA1NTg5XSB4SENJIHhoY2lfZHJv
cF9lbmRwb2ludCBjYWxsZWQgZm9yIHJvb3QgaHViClsgIDIzNi44MTEwMDRdIHhIQ0kgeGhjaV9j
aGVja19iYW5kd2lkdGggY2FsbGVkIGZvciByb290IGh1YgpbICAyMzYuODE2NzI5XSB4aGNpX2hj
ZCAwMDAwOjAwOjE0LjA6IFVTQiBidXMgNCBkZXJlZ2lzdGVyZWQKWyAgMjM2LjgyMjMzMF0geGhj
aV9oY2QgMDAwMDowMDoxNC4wOiByZW1vdmUsIHN0YXRlIDQKWyAgMjM2LjgyNzI5OV0gdXNiIHVz
YjM6IFVTQiBkaXNjb25uZWN0LCBkZXZpY2UgbnVtYmVyIDEKWyAgMjM2LjgzMjU5OF0geEhDSSB4
aGNpX2Ryb3BfZW5kcG9pbnQgY2FsbGVkIGZvciByb290IGh1YgpbICAyMzYuODM4MDMzXSB4SENJ
IHhoY2lfY2hlY2tfYmFuZHdpZHRoIGNhbGxlZCBmb3Igcm9vdCBodWIKWyAgMjM2Ljg0MzgwN10g
eGhjaV9oY2QgMDAwMDowMDoxNC4wOiBVU0IgYnVzIDMgZGVyZWdpc3RlcmVkClsgIDIzNi44OTU0
ODBdIHhoY2lfaGNkIDAwMDA6MDA6MTQuMDogUENJIElOVCBBIGRpc2FibGVkClsgIDIzNy4yMDcx
OTVdIFBNOiBTeW5jaW5nIGZpbGVzeXN0ZW1zIC4uLiBkb25lLgpbICAyMzcuMjExODMxXSBQTTog
UHJlcGFyaW5nIHN5c3RlbSBmb3IgbWVtIHNsZWVwClsgIDIzNy40NDU0MTldIEZyZWV6aW5nIHVz
ZXIgc3BhY2UgcHJvY2Vzc2VzIC4uLiAoZWxhcHNlZCAwLjAxIHNlY29uZHMpIGRvbmUuClsgIDIz
Ny40Njc5ODldIEZyZWV6aW5nIHJlbWFpbmluZyBmcmVlemFibGUgdGFza3MgLi4uIChlbGFwc2Vk
IDAuMDEgc2Vjb25kcykgZG9uZS4KWyAgMjM3LjQ4Nzk4Nl0gUE06IEVudGVyaW5nIG1lbSBzbGVl
cApbICAyMzcuNDkxOTA0XSBzZCAwOjA6MDowOiBbc2RhXSBTeW5jaHJvbml6aW5nIFNDU0kgY2Fj
aGUKWyAgMjM3LjQ5MjI3NV0gc25kX2hkYV9pbnRlbCAwMDAwOjAwOjFiLjA6IFBDSSBJTlQgQSBk
aXNhYmxlZApbICAyMzcuNDkyMzY3XSBBQ1BJIGhhbmRsZSBoYXMgbm8gY29udGV4dCEKWyAgMjM3
LjUwNjk0NV0gc2QgMDowOjA6MDogW3NkYV0gU3RvcHBpbmcgZGlzawpbICAyMzcuNjA1Mzc4XSBQ
TTogc3VzcGVuZCBvZiBkcnY6YWhjaSBkZXY6MDAwMDowMDoxZi4yIGNvbXBsZXRlIGFmdGVyIDEx
My4yNTMgbXNlY3MKWyAgMjM3LjYxMzAwOF0gUE06IHN1c3BlbmQgb2YgZHJ2OiBkZXY6cGNpMDAw
MDowMCBjb21wbGV0ZSBhZnRlciAxMDcuNjQxIG1zZWNzClsgIDIzNy42MjAyNjJdIFBNOiBzdXNw
ZW5kIG9mIGRldmljZXMgY29tcGxldGUgYWZ0ZXIgMTI4LjQ1MSBtc2VjcwpbICAyMzcuNjI2NDI4
XSBQTTogc3VzcGVuZCBkZXZpY2VzIHRvb2sgMC4xNDAgc2Vjb25kcwpbICAyMzcuNjMyNjIzXSBQ
TTogbGF0ZSBzdXNwZW5kIG9mIGRldmljZXMgY29tcGxldGUgYWZ0ZXIgMS4xOTEgbXNlY3MKWyAg
MjM3LjYzOTMwOV0gQUNQSTogUHJlcGFyaW5nIHRvIGVudGVyIHN5c3RlbSBzbGVlcCBzdGF0ZSBT
MwpbICAyMzcuNjQ1MTIyXSBQTTogU2F2aW5nIHBsYXRmb3JtIE5WUyBtZW1vcnkKWyAgMjM3Ljg0
MjYxMF0gRGlzYWJsaW5nIG5vbi1ib290IENQVXMgLi4uCihYRU4pIFByZXBhcmluZyBzeXN0ZW0g
Zm9yIEFDUEkgUzMgc3RhdGUuCihYRU4pIERpc2FibGluZyBub24tYm9vdCBDUFVzIC4uLgooWEVO
KSBFbnRlcmluZyBBQ1BJIFMzIHN0YXRlLgooWEVOKSBtY2VfaW50ZWwuYzoxMjM5OiBNQ0EgQ2Fw
YWJpbGl0eTogQkNBU1QgMSBTRVIgMCBDTUNJIDEgZmlyc3RiYW5rIDAgZXh0ZW5kZWQgTUNFIE1T
UiAwCihYRU4pIENQVTAgQ01DSSBMVlQgdmVjdG9yICgweGYxKSBhbHJlYWR5IGluc3RhbGxlZAoo
WEVOKSBGaW5pc2hpbmcgd2FrZXVwIGZyb20gQUNQSSBTMyBzdGF0ZS4KKFhFTikgbWljcm9jb2Rl
OiBjb2xsZWN0X2NwdV9pbmZvIDogc2lnPTB4MzA2YTQsIHBmPTB4MiwgcmV2PTB4NwooWEVOKSBF
bmFibGluZyBub24tYm9vdCBDUFVzICAuLi4KKFhFTikgbWljcm9jb2RlOiBjb2xsZWN0X2NwdV9p
bmZvIDogc2lnPTB4MzA2YTQsIHBmPTB4MiwgcmV2PTB4NwooWEVOKSBtaWNyb2NvZGU6IGNvbGxl
Y3RfY3B1X2luZm8gOiBzaWc9MHgzMDZhNCwgcGY9MHgyLCByZXY9MHg3CihYRU4pIG1pY3JvY29k
ZTogY29sbGVjdF9jcHVfaW5mbyA6IHNpZz0weDMwNmE0LCBwZj0weDIsIHJldj0weDcKWyAgMjM5
LjYyMjU0OV0gQUNQSTogTG93LWxldmVsIHJlc3VtZSBjb21wbGV0ZQpbICAyMzkuNjI2ODY3XSBQ
TTogUmVzdG9yaW5nIHBsYXRmb3JtIE5WUyBtZW1vcnkKKFhFTikgbWljcm9jb2RlOiBjb2xsZWN0
X2NwdV9pbmZvIDogc2lnPTB4MzA2YTQsIHBmPTB4MiwgcmV2PTB4NwooWEVOKSBtaWNyb2NvZGU6
IGNvbGxlY3RfY3B1X2luZm8gOiBzaWc9MHgzMDZhNCwgcGY9MHgyLCByZXY9MHg3CihYRU4pIG1p
Y3JvY29kZTogY29sbGVjdF9jcHVfaW5mbyA6IHNpZz0weDMwNmE0LCBwZj0weDIsIHJldj0weDcK
KFhFTikgbWljcm9jb2RlOiBjb2xsZWN0X2NwdV9pbmZvIDogc2lnPTB4MzA2YTQsIHBmPTB4Miwg
cmV2PTB4NwpbICAyMzkuNzc5ODg0XSBFbmFibGluZyBub24tYm9vdCBDUFVzIC4uLgpbICAyMzku
NzgzNzY2XSBpbnN0YWxsaW5nIFhlbiB0aW1lciBmb3IgQ1BVIDEKWyAgMjM5Ljc4ODAwMF0gY3B1
IDEgc3BpbmxvY2sgZXZlbnQgaXJxIDI3OQpbICAyMzkuNzkzMjQwXSBDUFUxIGlzIHVwClsgIDIz
OS43OTU3ODVdIGluc3RhbGxpbmcgWGVuIHRpbWVyIGZvciBDUFUgMgpbICAyMzkuNzk5OTQ5XSBj
cHUgMiBzcGlubG9jayBldmVudCBpcnEgMjg1ClsgIDIzOS44MDUyMzJdIENQVTIgaXMgdXAKWyAg
MjM5LjgwNzcyNV0gaW5zdGFsbGluZyBYZW4gdGltZXIgZm9yIENQVSAzClsgIDIzOS44MTE4ODld
IGNwdSAzIHNwaW5sb2NrIGV2ZW50IGlycSAyOTEKWyAgMjM5LjgxNzI0NV0gQ1BVMyBpcyB1cApb
ICAyMzkuODIxNjU3XSBBQ1BJOiBXYWtpbmcgdXAgZnJvbSBzeXN0ZW0gc2xlZXAgc3RhdGUgUzMK
WyAgMjM5LjgyNzI1NV0gaTkxNSAwMDAwOjAwOjAyLjA6IHJlc3RvcmluZyBjb25maWcgc3BhY2Ug
YXQgb2Zmc2V0IDB4ZiAod2FzIDB4MTAwLCB3cml0aW5nIDB4MTBiKQpbICAyMzkuODM2MDcxXSBp
OTE1IDAwMDA6MDA6MDIuMDogcmVzdG9yaW5nIGNvbmZpZyBzcGFjZSBhdCBvZmZzZXQgMHgxICh3
YXMgMHg5MDAwMDcsIHdyaXRpbmcgMHg5MDA0MDcpClsgIDIzOS44NDU1OTBdIHBjaSAwMDAwOjAw
OjE0LjA6IHJlc3RvcmluZyBjb25maWcgc3BhY2UgYXQgb2Zmc2V0IDB4ZiAod2FzIDB4MTAwLCB3
cml0aW5nIDB4MTBiKQpbICAyMzkuODU0NDE4XSBwY2kgMDAwMDowMDoxNC4wOiByZXN0b3Jpbmcg
Y29uZmlnIHNwYWNlIGF0IG9mZnNldCAweDQgKHdhcyAweDQsIHdyaXRpbmcgMHhiMDJiMDAwNCkK
WyAgMjM5Ljg2MzUyN10gcGNpIDAwMDA6MDA6MTQuMDogcmVzdG9yaW5nIGNvbmZpZyBzcGFjZSBh
dCBvZmZzZXQgMHgxICh3YXMgMHgyOTAwMDAwLCB3cml0aW5nIDB4MjkwMDAwMikKWyAgMjM5Ljg3
MzEzOV0gcGNpIDAwMDA6MDA6MTYuMDogcmVzdG9yaW5nIGNvbmZpZyBzcGFjZSBhdCBvZmZzZXQg
MHhmICh3YXMgMHgxMDAsIHdyaXRpbmcgMHgxMGIpClsgIDIzOS44ODE5ODZdIHBjaSAwMDAwOjAw
OjE2LjA6IHJlc3RvcmluZyBjb25maWcgc3BhY2UgYXQgb2Zmc2V0IDB4NCAod2FzIDB4ZmVkYjAw
MDQsIHdyaXRpbmcgMHhiMDJhMDAwNCkKWyAgMjM5Ljg5MTc2MF0gc2VyaWFsIDAwMDA6MDA6MTYu
MzogcmVzdG9yaW5nIGNvbmZpZyBzcGFjZSBhdCBvZmZzZXQgMHhmICh3YXMgMHgyMDAsIHdyaXRp
bmcgMHgyMGEpClsgIDIzOS45MDA4NjZdIHNlcmlhbCAwMDAwOjAwOjE2LjM6IHJlc3RvcmluZyBj
b25maWcgc3BhY2UgYXQgb2Zmc2V0IDB4NSAod2FzIDB4MCwgd3JpdGluZyAweGIwMjkwMDAwKQpb
ICAyMzkuOTEwMjM3XSBzZXJpYWwgMDAwMDowMDoxNi4zOiByZXN0b3JpbmcgY29uZmlnIHNwYWNl
IGF0IG9mZnNldCAweDQgKHdhcyAweDEsIHdyaXRpbmcgMHgzMGUxKQpbICAyMzkuOTE5MjgyXSBz
ZXJpYWwgMDAwMDowMDoxNi4zOiByZXN0b3JpbmcgY29uZmlnIHNwYWNlIGF0IG9mZnNldCAweDEg
KHdhcyAweGIwMDAwMCwgd3JpdGluZyAweGIwMDAwNykKWyAgMjM5LjkyODk4N10gcGNpIDAwMDA6
MDA6MTkuMDogcmVzdG9yaW5nIGNvbmZpZyBzcGFjZSBhdCBvZmZzZXQgMHhmICh3YXMgMHgxMDAs
IHdyaXRpbmcgMHgxMDUpClsgIDIzOS45Mzc4NDBdIHBjaSAwMDAwOjAwOjE5LjA6IHJlc3Rvcmlu
ZyBjb25maWcgc3BhY2UgYXQgb2Zmc2V0IDB4MSAod2FzIDB4MTAwMDAyLCB3cml0aW5nIDB4MTAw
MDAzKQpbICAyMzkuOTQ3MjQ5XSBwY2kgMDAwMDowMDoxYS4wOiByZXN0b3JpbmcgY29uZmlnIHNw
YWNlIGF0IG9mZnNldCAweGYgKHdhcyAweDEwMCwgd3JpdGluZyAweDEwYikKWyAgMjM5Ljk1NjA4
NV0gcGNpIDAwMDA6MDA6MWEuMDogcmVzdG9yaW5nIGNvbmZpZyBzcGFjZSBhdCBvZmZzZXQgMHg0
ICh3YXMgMHgwLCB3cml0aW5nIDB4YjAyNzAwMDApClsgIDIzOS45NjUxOTNdIHBjaSAwMDAwOjAw
OjFhLjA6IHJlc3RvcmluZyBjb25maWcgc3BhY2UgYXQgb2Zmc2V0IDB4MSAod2FzIDB4MjkwMDAw
MCwgd3JpdGluZyAweDI5MDAwMDIpClsgIDIzOS45NzQ4MTVdIHNuZF9oZGFfaW50ZWwgMDAwMDow
MDoxYi4wOiByZXN0b3JpbmcgY29uZmlnIHNwYWNlIGF0IG9mZnNldCAweGYgKHdhcyAweDEwMCwg
d3JpdGluZyAweDEwMykKWyAgMjM5Ljk4NDU0NV0gc25kX2hkYV9pbnRlbCAwMDAwOjAwOjFiLjA6
IHJlc3RvcmluZyBjb25maWcgc3BhY2UgYXQgb2Zmc2V0IDB4NCAod2FzIDB4NCwgd3JpdGluZyAw
eGIwMjYwMDA0KQpbICAyMzkuOTk0NTQxXSBzbmRfaGRhX2ludGVsIDAwMDA6MDA6MWIuMDogcmVz
dG9yaW5nIGNvbmZpZyBzcGFjZSBhdCBvZmZzZXQgMHgzICh3YXMgMHgwLCB3cml0aW5nIDB4MTAp
ClsgIDI0MC4wMDQwMzBdIHNuZF9oZGFfaW50ZWwgMDAwMDowMDoxYi4wOiByZXN0b3JpbmcgY29u
ZmlnIHNwYWNlIGF0IG9mZnNldCAweDEgKHdhcyAweDEwMDAwMCwgd3JpdGluZyAweDEwMDAwMikK
WyAgMjQwLjAxNDM4MV0gcGNpZXBvcnQgMDAwMDowMDoxYy4wOiByZXN0b3JpbmcgY29uZmlnIHNw
YWNlIGF0IG9mZnNldCAweGYgKHdhcyAweDEwMCwgd3JpdGluZyAweDEwYikKWyAgMjQwLjAyMzY1
OV0gcGNpZXBvcnQgMDAwMDowMDoxYy4wOiByZXN0b3JpbmcgY29uZmlnIHNwYWNlIGF0IG9mZnNl
dCAweDMgKHdhcyAweDgxMDAwMCwgd3JpdGluZyAweDgxMDAxMCkKWyAgMjQwLjAzMzQ3NF0gcGNp
ZXBvcnQgMDAwMDowMDoxYy4wOiByZXN0b3JpbmcgY29uZmlnIHNwYWNlIGF0IG9mZnNldCAweDEg
KHdhcyAweDEwMDAwMCwgd3JpdGluZyAweDEwMDAwNykKWyAgMjQwLjA0MzM4N10gcGNpZXBvcnQg
MDAwMDowMDoxYy42OiByZXN0b3JpbmcgY29uZmlnIHNwYWNlIGF0IG9mZnNldCAweGYgKHdhcyAw
eDMwMCwgd3JpdGluZyAweDMwNCkKWyAgMjQwLjA1MjY1M10gcGNpZXBvcnQgMDAwMDowMDoxYy42
OiByZXN0b3JpbmcgY29uZmlnIHNwYWNlIGF0IG9mZnNldCAweDMgKHdhcyAweDgxMDAwMCwgd3Jp
dGluZyAweDgxMDAxMCkKWyAgMjQwLjA2MjQ3MV0gcGNpZXBvcnQgMDAwMDowMDoxYy42OiByZXN0
b3JpbmcgY29uZmlnIHNwYWNlIGF0IG9mZnNldCAweDEgKHdhcyAweDEwMDAwMCwgd3JpdGluZyAw
eDEwMDAwNykKWyAgMjQwLjA3MjM4NV0gcGNpZXBvcnQgMDAwMDowMDoxYy43OiByZXN0b3Jpbmcg
Y29uZmlnIHNwYWNlIGF0IG9mZnNldCAweGYgKHdhcyAweDQwMCwgd3JpdGluZyAweDQwYSkKWyAg
MjQwLjA4MTY1Ml0gcGNpZXBvcnQgMDAwMDowMDoxYy43OiByZXN0b3JpbmcgY29uZmlnIHNwYWNl
IGF0IG9mZnNldCAweDMgKHdhcyAweDgxMDAwMCwgd3JpdGluZyAweDgxMDAxMCkKWyAgMjQwLjA5
MTUzM10gcGNpIDAwMDA6MDA6MWQuMDogcmVzdG9yaW5nIGNvbmZpZyBzcGFjZSBhdCBvZmZzZXQg
MHhmICh3YXMgMHgxMDAsIHdyaXRpbmcgMHgxMGIpClsgIDI0MC4xMDAzNTJdIHBjaSAwMDAwOjAw
OjFkLjA6IHJlc3RvcmluZyBjb25maWcgc3BhY2UgYXQgb2Zmc2V0IDB4NCAod2FzIDB4MCwgd3Jp
dGluZyAweGIwMjUwMDAwKQpbICAyNDAuMTA5NDU4XSBwY2kgMDAwMDowMDoxZC4wOiByZXN0b3Jp
bmcgY29uZmlnIHNwYWNlIGF0IG9mZnNldCAweDEgKHdhcyAweDI5MDAwMDAsIHdyaXRpbmcgMHgy
OTAwMDAyKQpbICAyNDAuMTE5MjYwXSBhaGNpIDAwMDA6MDA6MWYuMjogcmVzdG9yaW5nIGNvbmZp
ZyBzcGFjZSBhdCBvZmZzZXQgMHgxICh3YXMgMHgyYjAwMDA3LCB3cml0aW5nIDB4MmIwMDQwNykK
WyAgMjQwLjEyODg4MF0gcGNpIDAwMDA6MDA6MWYuMzogcmVzdG9yaW5nIGNvbmZpZyBzcGFjZSBh
dCBvZmZzZXQgMHgxICh3YXMgMHgyODAwMDAxLCB3cml0aW5nIDB4MjgwMDAwMykKWyAgMjQwLjEz
ODQwMV0gcGNpIDAwMDA6MDI6MDAuMDogcmVzdG9yaW5nIGNvbmZpZyBzcGFjZSBhdCBvZmZzZXQg
MHhmICh3YXMgMHgxMDAsIHdyaXRpbmcgMHgxMDQpClsgIDI0MC4xNDcyNTBdIHBjaSAwMDAwOjAy
OjAwLjA6IHJlc3RvcmluZyBjb25maWcgc3BhY2UgYXQgb2Zmc2V0IDB4NCAod2FzIDB4NCwgd3Jp
dGluZyAweGIwMTAwMDA0KQpbICAyNDAuMTU2MzMzXSBwY2kgMDAwMDowMjowMC4wOiByZXN0b3Jp
bmcgY29uZmlnIHNwYWNlIGF0IG9mZnNldCAweDMgKHdhcyAweDAsIHdyaXRpbmcgMHgxMCkKWyAg
MjQwLjE2NDkyNl0gcGNpIDAwMDA6MDI6MDAuMDogcmVzdG9yaW5nIGNvbmZpZyBzcGFjZSBhdCBv
ZmZzZXQgMHgxICh3YXMgMHgxMDAwMDAsIHdyaXRpbmcgMHgxMDAwMDIpClsgIDI0MC4xNzQ0MjZd
IHBjaSAwMDAwOjAzOjAwLjA6IHJlc3RvcmluZyBjb25maWcgc3BhY2UgYXQgb2Zmc2V0IDB4OSAo
d2FzIDB4MTAwMDEsIHdyaXRpbmcgMHgxZmZmMSkKWyAgMjQwLjE4MzU0MV0gcGNpIDAwMDA6MDM6
MDAuMDogcmVzdG9yaW5nIGNvbmZpZyBzcGFjZSBhdCBvZmZzZXQgMHg3ICh3YXMgMHgyMmEwMDEw
MSwgd3JpdGluZyAweDJhMDAxZjEpClsgIDI0MC4xOTMyMTRdIHBjaSAwMDAwOjAzOjAwLjA6IHJl
c3RvcmluZyBjb25maWcgc3BhY2UgYXQgb2Zmc2V0IDB4MyAod2FzIDB4MTAwMDAsIHdyaXRpbmcg
MHgxMDAxMCkKWyAgMjQwLjIwMjQ4NV0gcGNpIDAwMDA6MDQ6MDAuMDogcmVzdG9yaW5nIGNvbmZp
ZyBzcGFjZSBhdCBvZmZzZXQgMHhmICh3YXMgMHg0MDIwMTAwLCB3cml0aW5nIDB4NDAyMDEwYSkK
WyAgMjQwLjIxMjAzMF0gcGNpIDAwMDA6MDQ6MDAuMDogcmVzdG9yaW5nIGNvbmZpZyBzcGFjZSBh
dCBvZmZzZXQgMHg1ICh3YXMgMHgwLCB3cml0aW5nIDB4YjAwMDAwMDApClsgIDI0MC4yMjExMjld
IHBjaSAwMDAwOjA0OjAwLjA6IHJlc3RvcmluZyBjb25maWcgc3BhY2UgYXQgb2Zmc2V0IDB4MyAo
d2FzIDB4MCwgd3JpdGluZyAweDIwMTApClsgIDI0MC4yMjk5MjVdIHNlcmlhbCAwMDAwOjA1OjAx
LjA6IHJlc3RvcmluZyBjb25maWcgc3BhY2UgYXQgb2Zmc2V0IDB4ZiAod2FzIDB4MWZmLCB3cml0
aW5nIDB4MTAzKQpbICAyNDAuMjM5MDM2XSBzZXJpYWwgMDAwMDowNTowMS4wOiByZXN0b3Jpbmcg
Y29uZmlnIHNwYWNlIGF0IG9mZnNldCAweDkgKHdhcyAweDEsIHdyaXRpbmcgMHgyMDAxKQpbICAy
NDAuMjQ4MDYyXSBzZXJpYWwgMDAwMDowNTowMS4wOiByZXN0b3JpbmcgY29uZmlnIHNwYWNlIGF0
IG9mZnNldCAweDggKHdhcyAweDEsIHdyaXRpbmcgMHgyMDExKQpbICAyNDAuMjU3MTAwXSBzZXJp
YWwgMDAwMDowNTowMS4wOiByZXN0b3JpbmcgY29uZmlnIHNwYWNlIGF0IG9mZnNldCAweDcgKHdh
cyAweDEsIHdyaXRpbmcgMHgyMDIxKQpbICAyNDAuMjY2MTQwXSBzZXJpYWwgMDAwMDowNTowMS4w
OiByZXN0b3JpbmcgY29uZmlnIHNwYWNlIGF0IG9mZnNldCAweDYgKHdhcyAweDEsIHdyaXRpbmcg
MHgyMDMxKQpbICAyNDAuMjc1MTc5XSBzZXJpYWwgMDAwMDowNTowMS4wOiByZXN0b3JpbmcgY29u
ZmlnIHNwYWNlIGF0IG9mZnNldCAweDUgKHdhcyAweDEsIHdyaXRpbmcgMHgyMDQxKQpbICAyNDAu
Mjg0MjE3XSBzZXJpYWwgMDAwMDowNTowMS4wOiByZXN0b3JpbmcgY29uZmlnIHNwYWNlIGF0IG9m
ZnNldCAweDMgKHdhcyAweDgsIHdyaXRpbmcgMHgyMDEwKQpbICAyNDAuMjkzMzM0XSBQTTogZWFy
bHkgcmVzdW1lIG9mIGRldmljZXMgY29tcGxldGUgYWZ0ZXIgNDY2LjE2NiBtc2VjcwpbICAyNDAu
Mjk5OTM2XSBpOTE1IDAwMDA6MDA6MDIuMDogc2V0dGluZyBsYXRlbmN5IHRpbWVyIHRvIDY0Clsg
IDI0MC4yOTk5NDNdIHhlbjogcmVnaXN0ZXJpbmcgZ3NpIDIyIHRyaWdnZXJpbmcgMCBwb2xhcml0
eSAxClsgIDI0MC4yOTk5NDNdIEFscmVhZHkgc2V0dXAgdGhlIEdTSSA6MjIKWyAgMjQwLjI5OTk0
M10gc25kX2hkYV9pbnRlbCAwMDAwOjAwOjFiLjA6IFBDSSBJTlQgQSAtPiBHU0kgMjIgKGxldmVs
LCBsb3cpIC0+IElSUSAyMgpbICAyNDAuMjk5OTU4XSBwY2kgMDAwMDowMDoxZS4wOiBzZXR0aW5n
IGxhdGVuY3kgdGltZXIgdG8gNjQKWyAgMjQwLjI5OTk2OF0gc25kX2hkYV9pbnRlbCAwMDAwOjAw
OjFiLjA6IHNldHRpbmcgbGF0ZW5jeSB0aW1lciB0byA2NApbICAyNDAuMjk5OTgxXSBhaGNpIDAw
MDA6MDA6MWYuMjogc2V0dGluZyBsYXRlbmN5IHRpbWVyIHRvIDY0ClsgIDI0MC4zMDAwMDFdIHBj
aSAwMDAwOjAzOjAwLjA6IHNldHRpbmcgbGF0ZW5jeSB0aW1lciB0byA2NApbICAyNDAuMzAwMDg5
XSBzZCAwOjA6MDowOiBbc2RhXSBTdGFydGluZyBkaXNrClsgIDI0MC42NDUyMDBdIGF0YTE6IFNB
VEEgbGluayB1cCA2LjAgR2JwcyAoU1N0YXR1cyAxMzMgU0NvbnRyb2wgMzAwKQpbICAyNDAuNzA1
MjAyXSBhdGEzOiBTQVRBIGxpbmsgdXAgMS41IEdicHMgKFNTdGF0dXMgMTEzIFNDb250cm9sIDMw
MCkKWyAgMjQwLjg3MzM4MF0gUE06IHJlc3VtZSBvZiBkcnY6aTkxNSBkZXY6MDAwMDowMDowMi4w
IGNvbXBsZXRlIGFmdGVyIDU3My40NTYgbXNlY3MKWyAgMjQ1LjA5OTY3Ml0gW2RybTpwY2hfaXJx
X2hhbmRsZXJdICpFUlJPUiogUENIIHBvaXNvbiBpbnRlcnJ1cHQKWyAgMjQ1LjY0NTE5NF0gYXRh
MS4wMDogcWMgdGltZW91dCAoY21kIDB4ZWMpClsgIDI0NS42NjUxOTRdIGF0YTEuMDA6IGZhaWxl
ZCB0byBJREVOVElGWSAoSS9PIGVycm9yLCBlcnJfbWFzaz0weDQpClsgIDI0NS42NzEzNzddIGF0
YTEuMDA6IHJldmFsaWRhdGlvbiBmYWlsZWQgKGVycm5vPS01KQpbICAyNDUuNzA1MTk3XSBhdGEz
LjAwOiBxYyB0aW1lb3V0IChjbWQgMHhhMSkKWyAgMjQ1LjcwOTMyM10gYXRhMy4wMDogZmFpbGVk
IHRvIElERU5USUZZIChJL08gZXJyb3IsIGVycl9tYXNrPTB4NCkKWyAgMjQ1LjcxNTY3OF0gYXRh
My4wMDogcmV2YWxpZGF0aW9uIGZhaWxlZCAoZXJybm89LTUpClsgIDI0Ni4wNDUyMDRdIGF0YTE6
IFNBVEEgbGluayB1cCA2LjAgR2JwcyAoU1N0YXR1cyAxMzMgU0NvbnRyb2wgMzAwKQpbICAyNDYu
MDY1MjAwXSBhdGEzOiBTQVRBIGxpbmsgdXAgMS41IEdicHMgKFNTdGF0dXMgMTEzIFNDb250cm9s
IDMwMCkKWyAgMjU2LjA0NTE5NV0gYXRhMS4wMDogcWMgdGltZW91dCAoY21kIDB4ZWMpClsgIDI1
Ni4wNjUxNzBdIGF0YTEuMDA6IGZhaWxlZCB0byBJREVOVElGWSAoSS9PIGVycm9yLCBlcnJfbWFz
az0weDQpClsgIDI1Ni4wNzEzNTldIGF0YTEuMDA6IHJldmFsaWRhdGlvbiBmYWlsZWQgKGVycm5v
PS01KQpbICAyNTYuMDcxMzY0XSBhdGExOiBsaW1pdGluZyBTQVRBIGxpbmsgc3BlZWQgdG8gMy4w
IEdicHMKWyAgMjU2LjA3MTM3M10gYXRhMy4wMDogcWMgdGltZW91dCAoY21kIDB4YTEpClsgIDI1
Ni4wNzEzODVdIGF0YTMuMDA6IGZhaWxlZCB0byBJREVOVElGWSAoSS9PIGVycm9yLCBlcnJfbWFz
az0weDQpClsgIDI1Ni4wNzEzODhdIGF0YTMuMDA6IHJldmFsaWRhdGlvbiBmYWlsZWQgKGVycm5v
PS01KQpbICAyNTYuMDcxMzkxXSBhdGEzOiBsaW1pdGluZyBTQVRBIGxpbmsgc3BlZWQgdG8gMS41
IEdicHMKWyAgMjU2LjQxNTIwMF0gYXRhMzogU0FUQSBsaW5rIHVwIDEuNSBHYnBzIChTU3RhdHVz
IDExMyBTQ29udHJvbCAzMTApClsgIDI1Ni40MzUyMDJdIGF0YTE6IFNBVEEgbGluayB1cCAzLjAg
R2JwcyAoU1N0YXR1cyAxMjMgU0NvbnRyb2wgMzIwKQpbICAyODYuNDE1MTkyXSBhdGEzLjAwOiBx
YyB0aW1lb3V0IChjbWQgMHhhMSkKWyAgMjg2LjQxOTMyNF0gYXRhMy4wMDogZmFpbGVkIHRvIElE
RU5USUZZIChJL08gZXJyb3IsIGVycl9tYXNrPTB4NCkKWyAgMjg2LjQyNTY4M10gYXRhMy4wMDog
cmV2YWxpZGF0aW9uIGZhaWxlZCAoZXJybm89LTUpClsgIDI4Ni40MzA3NzVdIGF0YTMuMDA6IGRp
c2FibGVkClsgIDI4Ni40MzUxOTBdIGF0YTEuMDA6IHFjIHRpbWVvdXQgKGNtZCAweGVjKQpbICAy
ODYuNDQ1MTk3XSBhdGEzOiBoYXJkIHJlc2V0dGluZyBsaW5rClsgIDI4Ni40NTUxOTFdIGF0YTEu
MDA6IGZhaWxlZCB0byBJREVOVElGWSAoSS9PIGVycm9yLCBlcnJfbWFzaz0weDQpClsgIDI4Ni40
NjEzNzhdIGF0YTEuMDA6IHJldmFsaWRhdGlvbiBmYWlsZWQgKGVycm5vPS01KQpbICAyODYuNDY2
NDg1XSBhdGExLjAwOiBkaXNhYmxlZApbICAyODYuNTA1MTk2XSBhdGExOiBoYXJkIHJlc2V0dGlu
ZyBsaW5rClsgIDI4Ni43OTUxOTldIGF0YTM6IFNBVEEgbGluayB1cCAxLjUgR2JwcyAoU1N0YXR1
cyAxMTMgU0NvbnRyb2wgMzEwKQpbICAyODYuODE1MTkzXSBhdGEzOiBFSCBjb21wbGV0ZQpbICAy
ODYuODU1MjAxXSBhdGExOiBTQVRBIGxpbmsgdXAgMy4wIEdicHMgKFNTdGF0dXMgMTIzIFNDb250
cm9sIDMyMCkKWyAgMjg2Ljg5NTE5M10gYXRhMTogRUggY29tcGxldGUKWyAgMjg2Ljg5ODE4OV0g
c2QgMDowOjA6MDogW3NkYV0gU1RBUlRfU1RPUCBGQUlMRUQKWyAgMjg2LjkwMjkwMF0gc2QgMDow
OjA6MDogW3NkYV0gIFJlc3VsdDogaG9zdGJ5dGU9RElEX0JBRF9UQVJHRVQgZHJpdmVyYnl0ZT1E
UklWRVJfT0sKWyAgMjg2LjkxMDg4OF0gcG1fb3AoKTogc2NzaV9idXNfcmVzdW1lX2NvbW1vbisw
eDAvMHg2MCByZXR1cm5zIDI2MjE0NApbICAyODYuOTE3NDA4XSBzZCAwOjA6MDowOiBbc2RhXSBV
bmhhbmRsZWQgZXJyb3IgY29kZQpbICAyODYuOTIyNDA4XSBzZCAwOjA6MDowOiBbc2RhXSAgUmVz
dWx0OiBob3N0Ynl0ZT1ESURfQkFEX1RBUkdFVCBkcml2ZXJieXRlPURSSVZFUl9PSwpbICAyODYu
OTMwMzc0XSBzZCAwOjA6MDowOiBbc2RhXSBDREI6IFdyaXRlKDEwKTogMmEgMDAgMDIgMTkgMGMg
YjAgMDAgMDAgMDggMDAKWyAgMjg2LjkzNzYyM10gZW5kX3JlcXVlc3Q6IEkvTyBlcnJvciwgZGV2
IHNkYSwgc2VjdG9yIDM1MTk2MDgwClsgIDI4Ni45NDM1MzNdIEJ1ZmZlciBJL08gZXJyb3Igb24g
ZGV2aWNlIGRtLTUsIGxvZ2ljYWwgYmxvY2sgODAzOApbICAyODYuOTQ5NzA5XSBsb3N0IHBhZ2Ug
d3JpdGUgZHVlIHRvIEkvTyBlcnJvciBvbiBkbS01ClsgIDI4Ni45NTQ5MDRdIHNkIDA6MDowOjA6
IFtzZGFdIFVuaGFuZGxlZCBlcnJvciBjb2RlClsgIDI4Ni45NTk5MTFdIHNkIDA6MDowOjA6IFtz
ZGFdICBSZXN1bHQ6IGhvc3RieXRlPURJRF9CQURfVEFSR0VUIGRyaXZlcmJ5dGU9RFJJVkVSX09L
ClsgIDI4Ni45Njc4NzZdIHNkIDA6MDowOjA6IFtzZGFdIENEQjogV3JpdGUoMTApOiAyYSAwMCAw
MiAyMCAxOCAzMCAwMCAwMCAxMCAwMApbICAyODYuOTc1MTIxXSBlbmRfcmVxdWVzdDogSS9PIGVy
cm9yLCBkZXYgc2RhLCBzZWN0b3IgMzU2NTc3NzYKWyAgMjg2Ljk4MTAzNV0gc2QgMDowOjA6MDog
W3NkYV0gVW5oYW5kbGVkIGVycm9yIGNvZGUKWyAgMjg2Ljk4NjA0NF0gc2QgMDowOjA6MDogW3Nk
YV0gIFJlc3VsdDogaG9zdGJ5dGU9RElEX0JBRF9UQVJHRVQgZHJpdmVyYnl0ZT1EUklWRVJfT0sK
WyAgMjg2Ljk5NDAwNF0gc2QgMDowOjA6MDogW3NkYV0gQ0RCOiBXcml0ZSgxMCk6IDJhIDAwIDAy
IDI0IDNjIDIwIDAwIDAwIDA4IDAwClsgIDI4Ny4wMDEyNTNdIGVuZF9yZXF1ZXN0OiBJL08gZXJy
b3IsIGRldiBzZGEsIHNlY3RvciAzNTkyOTEyMApbICAyODcuMDA3MTYzXSBCdWZmZXIgSS9PIGVy
cm9yIG9uIGRldmljZSBkbS01LCBsb2dpY2FsIGJsb2NrIDk5NjY4ClsgIDI4Ny4wMTM0MjVdIGxv
c3QgcGFnZSB3cml0ZSBkdWUgdG8gSS9PIGVycm9yIG9uIGRtLTUKWyAgMjg3LjAxODYyM10gc2Qg
MDowOjA6MDogW3NkYV0gVW5oYW5kbGVkIGVycm9yIGNvZGUKWyAgMjg3LjAxODYyM10gSkJEMjog
RGV0ZWN0ZWQgSU8gZXJyb3JzIHdoaWxlIGZsdXNoaW5nIGZpbGUgZGF0YSBvbiBkbS01LTgKWyAg
Mjg3LjAxODYyM10gQWJvcnRpbmcgam91cm5hbCBvbiBkZXZpY2UgZG0tNS04LgpbICAyODcuMDE4
NjQwXSBzZCAwOjA6MDowOiBbc2RhXSBVbmhhbmRsZWQgZXJyb3IgY29kZQpbICAyODcuMDE4NjQx
XSBzZCAwOjA6MDowOiBbc2RhXSAgUmVzdWx0OiBob3N0Ynl0ZT1ESURfQkFEX1RBUkdFVCBkcml2
ZXJieXRlPURSSVZFUl9PSwpbICAyODcuMDE4NjQ0XSBzZCAwOjA6MDowOiBbc2RhXSBDREI6IFdy
aXRlKDEwKTogMmEgMDAgMDIgMjAgMTEgODAgMDAgMDAgMDggMDAKWyAgMjg3LjAxODY0OF0gZW5k
X3JlcXVlc3Q6IEkvTyBlcnJvciwgZGV2IHNkYSwgc2VjdG9yIDM1NjU2MDY0ClsgIDI4Ny4wMTg2
NTFdIEJ1ZmZlciBJL08gZXJyb3Igb24gZGV2aWNlIGRtLTUsIGxvZ2ljYWwgYmxvY2sgNjU1MzYK
WyAgMjg3LjAxODY1Ml0gbG9zdCBwYWdlIHdyaXRlIGR1ZSB0byBJL08gZXJyb3Igb24gZG0tNQpb
ICAyODcuMDE4NjgwXSBKQkQyOiBJL08gZXJyb3IgZGV0ZWN0ZWQgd2hlbiB1cGRhdGluZyBqb3Vy
bmFsIHN1cGVyYmxvY2sgZm9yIGRtLTUtOC4KWyAgMjg3LjA4MDU1MV0gc2QgMDowOjA6MDogW3Nk
YV0gIFJlc3VsdDogaG9zdGJ5dGU9RElEX0JBRF9UQVJHRVQgZHJpdmVyYnl0ZT1EUklWRVJfT0sK
WyAgMjg3LjA4ODUxM10gc2QgMDowOjA6MDogW3NkYV0gQ0RCOiBXcml0ZSgxMCk6IDJhIDAwIDBl
IGFjIGIyIDk4IDAwIDAwIDA4IDAwClsgIDI4Ny4wOTU3NTldIGVuZF9yZXF1ZXN0OiBJL08gZXJy
b3IsIGRldiBzZGEsIHNlY3RvciAyNDYxOTg5MzYKWyAgMjg3LjEwMTc1OF0gQnVmZmVyIEkvTyBl
cnJvciBvbiBkZXZpY2UgZG0tNiwgbG9naWNhbCBibG9jayAyNjI1MjMyMwpbICAyODcuMTA4Mjky
XSBsb3N0IHBhZ2Ugd3JpdGUgZHVlIHRvIEkvTyBlcnJvciBvbiBkbS02ClsgIDI4Ny4xMTM0ODVd
IHNkIDA6MDowOjA6IFtzZGFdIFVuaGFuZGxlZCBlcnJvciBjb2RlClsgIDI4Ny4xMTM0OTFdIEpC
RDI6IERldGVjdGVkIElPIGVycm9ycyB3aGlsZSBmbHVzaGluZyBmaWxlIGRhdGEgb24gZG0tNi04
ClsgIDI4Ny4xMjUzODZdIHNkIDA6MDowOjA6IFtzZGFdICBSZXN1bHQ6IGhvc3RieXRlPURJRF9C
QURfVEFSR0VUIGRyaXZlcmJ5dGU9RFJJVkVSX09LClsgIDI4Ny4xMzMzNDhdIHNkIDA6MDowOjA6
IFtzZGFdIENEQjogV3JpdGUoMTApOiAyYSAwMCAwZiAyYyAxOSBmOCAwMCAwMCAxMCAwMApbICAy
ODcuMTQwNTk3XSBlbmRfcmVxdWVzdDogSS9PIGVycm9yLCBkZXYgc2RhLCBzZWN0b3IgMjU0NTQ4
NDcyClsgIDI4Ny4xNDY2MDRdIFBNOiByZXN1bWUgb2YgZHJ2OnNkIGRldjowOjA6MDowIGNvbXBs
ZXRlIGFmdGVyIDQ2ODQ2LjUxNiBtc2VjcwpbICAyODcuMTQ2NjA4XSBBYm9ydGluZyBqb3VybmFs
IG9uIGRldmljZSBkbS02LTguClsgIDI4Ny4xNDY2MjRdIHNkIDA6MDowOjA6IFtzZGFdIFVuaGFu
ZGxlZCBlcnJvciBjb2RlClsgIDI4Ny4xNDY2MjVdIHNkIDA6MDowOjA6IFtzZGFdICBSZXN1bHQ6
IGhvc3RieXRlPURJRF9CQURfVEFSR0VUIGRyaXZlcmJ5dGU9RFJJVkVSX09LClsgIDI4Ny4xNDY2
MjddIHNkIDA6MDowOjA6IFtzZGFdIENEQjogV3JpdGUoMTApOiAyYSAwMCAwZiAyYyAxMSA4MCAw
MCAwMCAwOCAwMApbICAyODcuMTQ2NjMyXSBlbmRfcmVxdWVzdDogSS9PIGVycm9yLCBkZXYgc2Rh
LCBzZWN0b3IgMjU0NTQ2MzA0ClsgIDI4Ny4xNDY2MzVdIEJ1ZmZlciBJL08gZXJyb3Igb24gZGV2
aWNlIGRtLTYsIGxvZ2ljYWwgYmxvY2sgMjcyOTU3NDQKWyAgMjg3LjE0NjYzNl0gbG9zdCBwYWdl
IHdyaXRlIGR1ZSB0byBJL08gZXJyb3Igb24gZG0tNgpbICAyODcuMTQ2NjQ4XSBKQkQyOiBJL08g
ZXJyb3IgZGV0ZWN0ZWQgd2hlbiB1cGRhdGluZyBqb3VybmFsIHN1cGVyYmxvY2sgZm9yIGRtLTYt
OC4KWyAgMjg3LjIwNDI0M10gUE06IHJlc3VtZSBvZiBkcnY6c2NzaV9kZXZpY2UgZGV2OjA6MDow
OjAgY29tcGxldGUgYWZ0ZXIgNDY5MDQuMTM5IG1zZWNzClsgIDI4Ny4yMDQyNTFdIFBNOiByZXN1
bWUgb2YgZHJ2OnNjc2lfZGlzayBkZXY6MDowOjA6MCBjb21wbGV0ZSBhZnRlciA0NjMyMy4zMDAg
bXNlY3MKWyAgMjg3LjIwNDI1N10gUE06IERldmljZSAwOjA6MDowIGZhaWxlZCB0byByZXN1bWUg
YXN5bmM6IGVycm9yIDI2MjE0NApbICAyODcuMjI2NzMxXSBQTTogcmVzdW1lIG9mIGRldmljZXMg
Y29tcGxldGUgYWZ0ZXIgNDY5MjYuODM3IG1zZWNzClsgIDI4Ny4yMzMwMjNdIFBNOiByZXN1bWUg
ZGV2aWNlcyB0b29rIDQ2LjkzMCBzZWNvbmRzClsgIDI4Ny4yMzc5NjVdIC0tLS0tLS0tLS0tLVsg
Y3V0IGhlcmUgXS0tLS0tLS0tLS0tLQpbICAyODcuMjQyNzkxXSBXQVJOSU5HOiBhdCAvZGF0YS9o
b21lL2JndXRocm8vZGV2L29yYy1uZXdkZXYuZ2l0L2xpbnV4LTMuMi9rZXJuZWwvcG93ZXIvc3Vz
cGVuZF90ZXN0LmM6NTMgc3VzcGVuZF90ZXN0X2ZpbmlzaCsweDg2LzB4OTAoKQpbICAyODcuMjU1
MzI3XSBIYXJkd2FyZSBuYW1lOiAyMDEyIENsaWVudCBQbGF0Zm9ybQpbICAyODcuMjYwMDYxXSBD
b21wb25lbnQ6IHJlc3VtZSBkZXZpY2VzLCB0aW1lOiA0NjkzMApbICAyODcuMjY1MDY5XSBNb2R1
bGVzIGxpbmtlZCBpbjogaXB0X01BU1FVRVJBREUgaXNjc2lfc2NzdChPKSBzY3N0X3ZkaXNrKE8p
IGNyYzMyYyB4dF90Y3B1ZHAgbGliY3JjMzJjIHh0X3N0YXRlIHh0X211bHRpcG9ydCBzY3N0X2Nk
cm9tKE8pIGlwdGFibGVfZmlsdGVyIGlwdGFibGVfbmF0IG5mX25hdCBuZl9jb25udHJhY2tfaXB2
NCBuZl9jb25udHJhY2sgbmZfZGVmcmFnX2lwdjQgaXBfdGFibGVzIHNjc3QoTykgeF90YWJsZXMg
YnJpZGdlIHN0cCBsbGMgbWljcm9jb2RlIGlzY3NpX3RjcCBsaWJpc2NzaV90Y3AgbGliaXNjc2kg
c2NzaV90cmFuc3BvcnRfaXNjc2kgYXJjNCBzbmRfaGRhX2NvZGVjX3JlYWx0ZWsgc25kX2hkYV9p
bnRlbCBzbmRfaGRhX2NvZGVjIHNuZF9od2RlcCBzbmRfcGNtIHNuZF90aW1lciBzbmQgc2hwY2hw
IHNvdW5kY29yZSBzbmRfcGFnZV9hbGxvYyB6cmFtKEMpIHVzYmhpZCBoaWQgaTkxNSBkcm1fa21z
X2hlbHBlciBkcm0gYWhjaSBsaWJhaGNpIGkyY19hbGdvX2JpdCB2aWRlbyBpbnRlbF9hZ3AgaW50
ZWxfZ3R0IFtsYXN0IHVubG9hZGVkOiB0cG1fYmlvc10KWyAgMjg3LjMxNTgxOF0gUGlkOiAzOTE0
LCBjb21tOiBwbS1zdXNwZW5kIFRhaW50ZWQ6IEcgICAgICAgICBDIE8gMy4yLjIzLW9yYyAjMQpb
ICAyODcuMzIzMTUxXSBDYWxsIFRyYWNlOgpbICAyODcuMzI1NzcyXSAgWzxmZmZmZmZmZjgxMDY0
MmFmPl0gd2Fybl9zbG93cGF0aF9jb21tb24rMHg3Zi8weGMwClsgIDI4Ny4zMzIwMTNdICBbPGZm
ZmZmZmZmODEwNjQzYTY+XSB3YXJuX3Nsb3dwYXRoX2ZtdCsweDQ2LzB4NTAKWyAgMjg3LjMzODAx
M10gIFs8ZmZmZmZmZmY4MTBhNjRkNj5dIHN1c3BlbmRfdGVzdF9maW5pc2grMHg4Ni8weDkwClsg
IDI4Ny4zNDQxODRdICBbPGZmZmZmZmZmODEwYTYwOGU+XSBzdXNwZW5kX2RldmljZXNfYW5kX2Vu
dGVyKzB4MTVlLzB4MzEwClsgIDI4Ny4zNTEwOTldICBbPGZmZmZmZmZmODEwYTYzYTc+XSBlbnRl
cl9zdGF0ZSsweDE2Ny8weDE5MApbICAyODcuMzUxMTAyXSAgWzxmZmZmZmZmZjgxMGE0ZDI3Pl0g
c3RhdGVfc3RvcmUrMHhiNy8weDEzMApbICAyODcuMzUxMTA0XSAgWzxmZmZmZmZmZjgxMmI1NGRm
Pl0ga29ial9hdHRyX3N0b3JlKzB4Zi8weDMwClsgIDI4Ny4zNTExMDRdICBbPGZmZmZmZmZmODEx
ZDM4MmY+XSBzeXNmc193cml0ZV9maWxlKzB4ZWYvMHgxNzAKWyAgMjg3LjM1MTEwNF0gIFs8ZmZm
ZmZmZmY4MTE2NjhkMz5dIHZmc193cml0ZSsweGIzLzB4MTgwClsgIDI4Ny4zNTExMDRdICBbPGZm
ZmZmZmZmODExNjZiZmE+XSBzeXNfd3JpdGUrMHg0YS8weDkwClsgIDI4Ny4zNTExMDVdICBbPGZm
ZmZmZmZmODE1ODIxNDI+XSBzeXN0ZW1fY2FsbF9mYXN0cGF0aCsweDE2LzB4MWIKWyAgMjg3LjM1
MTEwN10gLS0tWyBlbmQgdHJhY2UgMDI4ZDI5MWFkNjNjMTEyMSBdLS0tClsgIDI4Ny4zNTExMjVd
IFBNOiBGaW5pc2hpbmcgd2FrZXVwLgpbICAyODcuMzUxMTI2XSBSZXN0YXJ0aW5nIHRhc2tzIC4u
LiBkb25lLgpbICAyODcuMzUxNTUwXSB2aWRlbyBMTlhWSURFTzowMDogUmVzdG9yaW5nIGJhY2ts
aWdodCBzdGF0ZQpbICAyODcuMzUxNzM4XSBzZCAwOjA6MDowOiBbc2RhXSBVbmhhbmRsZWQgZXJy
b3IgY29kZQpbICAyODcuMzUxNzQwXSBzZCAwOjA6MDowOiBbc2RhXSAgUmVzdWx0OiBob3N0Ynl0
ZT1ESURfQkFEX1RBUkdFVCBkcml2ZXJieXRlPURSSVZFUl9PSwpbICAyODcuMzUxNzQyXSBzZCAw
OjA6MDowOiBbc2RhXSBDREI6IFdyaXRlKDEwKTogMmEgMDAgMDIgMTggMTEgODAgMDAgMDAgMDgg
MDAKWyAgMjg3LjM1MTc0N10gZW5kX3JlcXVlc3Q6IEkvTyBlcnJvciwgZGV2IHNkYSwgc2VjdG9y
IDM1MTMxNzc2ClsgIDI4Ny4zNTE3NTBdIEJ1ZmZlciBJL08gZXJyb3Igb24gZGV2aWNlIGRtLTUs
IGxvZ2ljYWwgYmxvY2sgMApbICAyODcuMzUxNzUxXSBsb3N0IHBhZ2Ugd3JpdGUgZHVlIHRvIEkv
TyBlcnJvciBvbiBkbS01ClsgIDI4Ny4zNTE4NjZdIEVYVDQtZnMgZXJyb3IgKGRldmljZSBkbS01
KTogZXh0NF9qb3VybmFsX3N0YXJ0X3NiOjMyNzogRGV0ZWN0ZWQgYWJvcnRlZCBqb3VybmFsClsg
IDI4Ny4zNTE4NjldIEVYVDQtZnMgKGRtLTUpOiBSZW1vdW50aW5nIGZpbGVzeXN0ZW0gcmVhZC1v
bmx5ClsgIDI4Ny4zNTE4NzFdIEVYVDQtZnMgKGRtLTUpOiBwcmV2aW91cyBJL08gZXJyb3IgdG8g
c3VwZXJibG9jayBkZXRlY3RlZApbICAyODcuMzUxODg2XSBzZCAwOjA6MDowOiBbc2RhXSBVbmhh
bmRsZWQgZXJyb3IgY29kZQpbICAyODcuMzUxODg3XSBzZCAwOjA6MDowOiBbc2RhXSAgUmVzdWx0
OiBob3N0Ynl0ZT1ESURfQkFEX1RBUkdFVCBkcml2ZXJieXRlPURSSVZFUl9PSwpbICAyODcuMzUx
ODg5XSBzZCAwOjA6MDowOiBbc2RhXSBDREI6IFdyaXRlKDEwKTogMmEgMDAgMDIgMTggMTEgODAg
MDAgMDAgMDggMDAKWyAgMjg3LjM1MTg5M10gZW5kX3JlcXVlc3Q6IEkvTyBlcnJvciwgZGV2IHNk
YSwgc2VjdG9yIDM1MTMxNzc2ClsgIDI4Ny4zNTE4OTVdIEJ1ZmZlciBJL08gZXJyb3Igb24gZGV2
aWNlIGRtLTUsIGxvZ2ljYWwgYmxvY2sgMApbICAyODcuMzUxODk3XSBsb3N0IHBhZ2Ugd3JpdGUg
ZHVlIHRvIEkvTyBlcnJvciBvbiBkbS01ClsgIDI4Ny4zNTMzNDRdIHNkIDA6MDowOjA6IFtzZGFd
IFVuaGFuZGxlZCBlcnJvciBjb2RlClsgIDI4Ny4zNTMzNDZdIHNkIDA6MDowOjA6IFtzZGFdICBS
ZXN1bHQ6IGhvc3RieXRlPURJRF9CQURfVEFSR0VUIGRyaXZlcmJ5dGU9RFJJVkVSX09LClsgIDI4
Ny4zNTMzNDhdIHNkIDA6MDowOjA6IFtzZGFdIENEQjogV3JpdGUoMTApOiAyYSAwMCAwMiAyOCAx
MSA4MCAwMCAwMCAwOCAwMApbICAyODcuMzUzMzUzXSBlbmRfcmVxdWVzdDogSS9PIGVycm9yLCBk
ZXYgc2RhLCBzZWN0b3IgMzYxODAzNTIKWyAgMjg3LjM1MzM1NV0gQnVmZmVyIEkvTyBlcnJvciBv
biBkZXZpY2UgZG0tNiwgbG9naWNhbCBibG9jayAwClsgIDI4Ny4zNTMzNTZdIGxvc3QgcGFnZSB3
cml0ZSBkdWUgdG8gSS9PIGVycm9yIG9uIGRtLTYKWyAgMjg3LjM1MzM2NV0gRVhUNC1mcyBlcnJv
ciAoZGV2aWNlIGRtLTYpOiBleHQ0X2pvdXJuYWxfc3RhcnRfc2I6MzI3OiBEZXRlY3RlZCBhYm9y
dGVkIGpvdXJuYWwKWyAgMjg3LjM1MzM2Nl0gRVhUNC1mcyAoZG0tNik6IFJlbW91bnRpbmcgZmls
ZXN5c3RlbSByZWFkLW9ubHkKWyAgMjg3LjM1MzM2OF0gRVhUNC1mcyAoZG0tNik6IHByZXZpb3Vz
IEkvTyBlcnJvciB0byBzdXBlcmJsb2NrIGRldGVjdGVkClsgIDI4Ny4zNTMzNzhdIHNkIDA6MDow
OjA6IFtzZGFdIFVuaGFuZGxlZCBlcnJvciBjb2RlClsgIDI4Ny4zNTMzNzldIHNkIDA6MDowOjA6
IFtzZGFdICBSZXN1bHQ6IGhvc3RieXRlPURJRF9CQURfVEFSR0VUIGRyaXZlcmJ5dGU9RFJJVkVS
X09LClsgIDI4Ny4zNTMzODFdIHNkIDA6MDowOjA6IFtzZGFdIENEQjogV3JpdGUoMTApOiAyYSAw
MCAwMiAyOCAxMSA4MCAwMCAwMCAwOCAwMApbICAyODcuMzUzMzg1XSBlbmRfcmVxdWVzdDogSS9P
IGVycm9yLCBkZXYgc2RhLCBzZWN0b3IgMzYxODAzNTIKWyAgMjg3LjM1MzM4Nl0gQnVmZmVyIEkv
TyBlcnJvciBvbiBkZXZpY2UgZG0tNiwgbG9naWNhbCBibG9jayAwClsgIDI4Ny4zNTMzODddIGxv
c3QgcGFnZSB3cml0ZSBkdWUgdG8gSS9PIGVycm9yIG9uIGRtLTYKWyAgMjg3LjM3MzIwMV0gc2Qg
MDowOjA6MDogW3NkYV0gVW5oYW5kbGVkIGVycm9yIGNvZGUKWyAgMjg3LjM3MzIwMV0gc2QgMDow
OjA6MDogW3NkYV0gIFJlc3VsdDogaG9zdGJ5dGU9RElEX0JBRF9UQVJHRVQgZHJpdmVyYnl0ZT1E
UklWRVJfT0sKWyAgMjg3LjM3MzIwMV0gc2QgMDowOjA6MDogW3NkYV0gQ0RCOiBSZWFkKDEwKTog
MjggMDAgMDAgMDkgMjQgYTAgMDAgMDAgMDggMDAKWyAgMjg3LjM3MzIwNF0gZW5kX3JlcXVlc3Q6
IEkvTyBlcnJvciwgZGV2IHNkYSwgc2VjdG9yIDU5OTIwMApbICAyODcuMzczNzU5XSBzZCAwOjA6
MDowOiBbc2RhXSBVbmhhbmRsZWQgZXJyb3IgY29kZQpbICAyODcuMzczNzU5XSBzZCAwOjA6MDow
OiBbc2RhXSAgUmVzdWx0OiBob3N0Ynl0ZT1ESURfQkFEX1RBUkdFVCBkcml2ZXJieXRlPURSSVZF
Ul9PSwpbICAyODcuMzczNzU5XSBzZCAwOjA6MDowOiBbc2RhXSBDREI6IFdyaXRlKDEwKTogMmEg
MDAgMDAgMDggMTEgODAgMDAgMDAgMDggMDAKWyAgMjg3LjM3Mzc2M10gZW5kX3JlcXVlc3Q6IEkv
TyBlcnJvciwgZGV2IHNkYSwgc2VjdG9yIDUyODc2OApbICAyODcuMzczNzY2XSBCdWZmZXIgSS9P
IGVycm9yIG9uIGRldmljZSBkbS0yLCBsb2dpY2FsIGJsb2NrIDAKWyAgMjg3LjM3Mzc2N10gbG9z
dCBwYWdlIHdyaXRlIGR1ZSB0byBJL08gZXJyb3Igb24gZG0tMgpbICAyODcuMzczODc5XSBFWFQ0
LWZzIGVycm9yIChkZXZpY2UgZG0tMik6IGV4dDRfZmluZF9lbnRyeTo5MzU6IGlub2RlICMzNzM6
IGNvbW0gY3JvbjogcmVhZGluZyBkaXJlY3RvcnkgbGJsb2NrIDAKWyAgMjg3LjM3Mzg3OV0gQWJv
cnRpbmcgam91cm5hbCBvbiBkZXZpY2UgZG0tMi04LgpbICAyODcuMzczOTg2XSBzZCAwOjA6MDow
OiBbc2RhXSBVbmhhbmRsZWQgZXJyb3IgY29kZQpbICAyODcuMzczOTg2XSBzZCAwOjA6MDowOiBb
c2RhXSAgUmVzdWx0OiBob3N0Ynl0ZT1ESURfQkFEX1RBUkdFVCBkcml2ZXJieXRlPURSSVZFUl9P
SwpbICAyODcuMzczOTg2XSBzZCAwOjA6MDowOiBbc2RhXSBDREI6IFdyaXRlKDEwKTogMmEgMDAg
MDAgOGMgMTEgODAgMDAgMDAgMDggMDAKWyAgMjg3LjM3Mzk4Nl0gZW5kX3JlcXVlc3Q6IEkvTyBl
cnJvciwgZGV2IHNkYSwgc2VjdG9yIDkxNzk1MjAKWyAgMjg3LjM3NDAxOF0gSkJEMjogSS9PIGVy
cm9yIGRldGVjdGVkIHdoZW4gdXBkYXRpbmcgam91cm5hbCBzdXBlcmJsb2NrIGZvciBkbS0yLTgu
ClsgIDI4Ny4zNzQwMThdIEVYVDQtZnMgKGRtLTIpOiBSZW1vdW50aW5nIGZpbGVzeXN0ZW0gcmVh
ZC1vbmx5ClsgIDI4Ny4zNzQzMjRdIHNkIDA6MDowOjA6IFtzZGFdIFVuaGFuZGxlZCBlcnJvciBj
b2RlClsgIDI4Ny4zNzQzMjRdIHNkIDA6MDowOjA6IFtzZGFdICBSZXN1bHQ6IGhvc3RieXRlPURJ
RF9CQURfVEFSR0VUIGRyaXZlcmJ5dGU9RFJJVkVSX09LClsgIDI4Ny4zNzQzMjRdIHNkIDA6MDow
OjA6IFtzZGFdIENEQjogUmVhZCgxMCk6IDI4IDAwIDAwIDA5IDI0IGEwIDAwIDAwIDA4IDAwClsg
IDI4Ny4zNzQzMjRdIGVuZF9yZXF1ZXN0OiBJL08gZXJyb3IsIGRldiBzZGEsIHNlY3RvciA1OTky
MDAKWyAgMjg3LjM3NDM2MV0gRVhUNC1mcyAoZG0tMik6IHByZXZpb3VzIEkvTyBlcnJvciB0byBz
dXBlcmJsb2NrIGRldGVjdGVkClsgIDI4Ny4zNzQ0MjVdIHNkIDA6MDowOjA6IFtzZGFdIFVuaGFu
ZGxlZCBlcnJvciBjb2RlClsgIDI4Ny4zNzQ0MjVdIHNkIDA6MDowOjA6IFtzZGFdICBSZXN1bHQ6
IGhvc3RieXRlPURJRF9CQURfVEFSR0VUIGRyaXZlcmJ5dGU9RFJJVkVSX09LClsgIDI4Ny4zNzQ0
MjVdIHNkIDA6MDowOjA6IFtzZGFdIENEQjogV3JpdGUoMTApOiAyYSAwMCAwMCAwOCAxMSA4MCAw
MCAwMCAwOCAwMApbICAyODcuMzc0NDI1XSBlbmRfcmVxdWVzdDogSS9PIGVycm9yLCBkZXYgc2Rh
LCBzZWN0b3IgNTI4NzY4ClsgIDI4Ny4zNzQ0NTddIEVYVDQtZnMgZXJyb3IgKGRldmljZSBkbS0y
KTogZXh0NF9maW5kX2VudHJ5OjkzNTogaW5vZGUgIzM3MzogY29tbSBjcm9uOiByZWFkaW5nIGRp
cmVjdG9yeSBsYmxvY2sgMApbICAyODguMjMzNTg2XSBpbml0OiBhbmFjcm9uIG1haW4gcHJvY2Vz
cyAoNDExNikgdGVybWluYXRlZCB3aXRoIHN0YXR1cyAxClsgIDI4OC4yNTU4MDddIGVoY2lfaGNk
OiBVU0IgMi4wICdFbmhhbmNlZCcgSG9zdCBDb250cm9sbGVyIChFSENJKSBEcml2ZXIKWyAgMjg4
LjI2MjQ5Nl0geGVuOiByZWdpc3RlcmluZyBnc2kgMTYgdHJpZ2dlcmluZyAwIHBvbGFyaXR5IDEK
WyAgMjg4LjI2ODI5N10gQWxyZWFkeSBzZXR1cCB0aGUgR1NJIDoxNgpbICAyODguMjcyMTEyXSBl
aGNpX2hjZCAwMDAwOjAwOjFhLjA6IFBDSSBJTlQgQSAtPiBHU0kgMTYgKGxldmVsLCBsb3cpIC0+
IElSUSAxNgpbICAyODguMjc5Njc1XSBlaGNpX2hjZCAwMDAwOjAwOjFhLjA6IHNldHRpbmcgbGF0
ZW5jeSB0aW1lciB0byA2NApbICAyODguMjg1NjQ3XSBlaGNpX2hjZCAwMDAwOjAwOjFhLjA6IEVI
Q0kgSG9zdCBDb250cm9sbGVyClsgIDI4OC4yOTExNzldIGVoY2lfaGNkIDAwMDA6MDA6MWEuMDog
bmV3IFVTQiBidXMgcmVnaXN0ZXJlZCwgYXNzaWduZWQgYnVzIG51bWJlciAxClsgIDI4OC4yOTg4
NTRdIGVoY2lfaGNkIDAwMDA6MDA6MWEuMDogZGVidWcgcG9ydCAyClsgIDI4OC4zMDc0MDddIGVo
Y2lfaGNkIDAwMDA6MDA6MWEuMDogY2FjaGUgbGluZSBzaXplIG9mIDY0IGlzIG5vdCBzdXBwb3J0
ZWQKWyAgMjg4LjMxNDM1Nl0gZWhjaV9oY2QgMDAwMDowMDoxYS4wOiBpcnEgMTYsIGlvIG1lbSAw
eGIwMjcwMDAwClsgIDI4OC4zMzUxOTFdIGVoY2lfaGNkIDAwMDA6MDA6MWEuMDogVVNCIDIuMCBz
dGFydGVkLCBFSENJIDEuMDAKWyAgMjg4LjM0MTIwOF0gaHViIDEtMDoxLjA6IFVTQiBodWIgZm91
bmQKWyAgMjg4LjM0NTA3MF0gaHViIDEtMDoxLjA6IDMgcG9ydHMgZGV0ZWN0ZWQKWyAgMjg4LjM4
NTIzM10geGVuOiByZWdpc3RlcmluZyBnc2kgMjMgdHJpZ2dlcmluZyAwIHBvbGFyaXR5IDEKWyAg
Mjg4LjM5MDg4N10gQWxyZWFkeSBzZXR1cCB0aGUgR1NJIDoyMwpbICAyODguMzk0NzMyXSBlaGNp
X2hjZCAwMDAwOjAwOjFkLjA6IFBDSSBJTlQgQSAtPiBHU0kgMjMgKGxldmVsLCBsb3cpIC0+IElS
USAyMwpbICAyODguNDAyMjEzXSBlaGNpX2hjZCAwMDAwOjAwOjFkLjA6IHNldHRpbmcgbGF0ZW5j
eSB0aW1lciB0byA2NApbICAyODguNDA4Mjc4XSBlaGNpX2hjZCAwMDAwOjAwOjFkLjA6IEVIQ0kg
SG9zdCBDb250cm9sbGVyClsgIDI4OC40MTM3OTBdIGVoY2lfaGNkIDAwMDA6MDA6MWQuMDogbmV3
IFVTQiBidXMgcmVnaXN0ZXJlZCwgYXNzaWduZWQgYnVzIG51bWJlciAyClsgIDI4OC40MjE0NzBd
IGVoY2lfaGNkIDAwMDA6MDA6MWQuMDogZGVidWcgcG9ydCAyClsgIDI4OC40MzAwNTBdIGVoY2lf
aGNkIDAwMDA6MDA6MWQuMDogY2FjaGUgbGluZSBzaXplIG9mIDY0IGlzIG5vdCBzdXBwb3J0ZWQK
WyAgMjg4LjQzNzAxM10gZWhjaV9oY2QgMDAwMDowMDoxZC4wOiBpcnEgMjMsIGlvIG1lbSAweGIw
MjUwMDAwClsgIDI4OC40NjUxOTJdIGVoY2lfaGNkIDAwMDA6MDA6MWQuMDogVVNCIDIuMCBzdGFy
dGVkLCBFSENJIDEuMDAKWyAgMjg4LjQ3MTE4NV0gaHViIDItMDoxLjA6IFVTQiBodWIgZm91bmQK
WyAgMjg4LjQ3NTAyNV0gaHViIDItMDoxLjA6IDMgcG9ydHMgZGV0ZWN0ZWQKWyAgMjg4LjUzNTIy
OV0geGVuOiByZWdpc3RlcmluZyBnc2kgMTYgdHJpZ2dlcmluZyAwIHBvbGFyaXR5IDEKWyAgMjg4
LjU0MDg4NF0gQWxyZWFkeSBzZXR1cCB0aGUgR1NJIDoxNgpbICAyODguNTQ0NzMwXSB4aGNpX2hj
ZCAwMDAwOjAwOjE0LjA6IFBDSSBJTlQgQSAtPiBHU0kgMTYgKGxldmVsLCBsb3cpIC0+IElSUSAx
NgpbICAyODguNTUyMjY5XSB4aGNpX2hjZCAwMDAwOjAwOjE0LjA6IHNldHRpbmcgbGF0ZW5jeSB0
aW1lciB0byA2NApbICAyODguNTU4MjU2XSB4aGNpX2hjZCAwMDAwOjAwOjE0LjA6IHhIQ0kgSG9z
dCBDb250cm9sbGVyClsgIDI4OC41NjM3OTBdIHhoY2lfaGNkIDAwMDA6MDA6MTQuMDogbmV3IFVT
QiBidXMgcmVnaXN0ZXJlZCwgYXNzaWduZWQgYnVzIG51bWJlciAzClsgIDI4OC41NzE1NjRdIHho
Y2lfaGNkIDAwMDA6MDA6MTQuMDogY2FjaGUgbGluZSBzaXplIG9mIDY0IGlzIG5vdCBzdXBwb3J0
ZWQKWyAgMjg4LjU3ODQ3OV0geGhjaV9oY2QgMDAwMDowMDoxNC4wOiBpcnEgMTYsIGlvIG1lbSAw
eGIwMmIwMDAwClsgIDI4OC41ODQ2NDNdIHhIQ0kgeGhjaV9hZGRfZW5kcG9pbnQgY2FsbGVkIGZv
ciByb290IGh1YgpbICAyODguNTg5ODU2XSB4SENJIHhoY2lfY2hlY2tfYmFuZHdpZHRoIGNhbGxl
ZCBmb3Igcm9vdCBodWIKWyAgMjg4LjU5NTUxM10gaHViIDMtMDoxLjA6IFVTQiBodWIgZm91bmQK
WyAgMjg4LjU5OTQyNF0gaHViIDMtMDoxLjA6IDQgcG9ydHMgZGV0ZWN0ZWQKWyAgMjg4LjYzNTIx
MV0geGhjaV9oY2QgMDAwMDowMDoxNC4wOiB4SENJIEhvc3QgQ29udHJvbGxlcgpbICAyODguNjQw
NTgzXSB4aGNpX2hjZCAwMDAwOjAwOjE0LjA6IG5ldyBVU0IgYnVzIHJlZ2lzdGVyZWQsIGFzc2ln
bmVkIGJ1cyBudW1iZXIgNApbICAyODguNjQ4MzQzXSB4SENJIHhoY2lfYWRkX2VuZHBvaW50IGNh
bGxlZCBmb3Igcm9vdCBodWIKWyAgMjg4LjY1MzU2OV0geEhDSSB4aGNpX2NoZWNrX2JhbmR3aWR0
aCBjYWxsZWQgZm9yIHJvb3QgaHViClsgIDI4OC42NTkyNTNdIGh1YiA0LTA6MS4wOiBVU0IgaHVi
IGZvdW5kClsgIDI4OC42NjMxNTBdIGh1YiA0LTA6MS4wOiA0IHBvcnRzIGRldGVjdGVkClsgIDI4
OC42NjUxOThdIHVzYiAxLTE6IG5ldyBoaWdoLXNwZWVkIFVTQiBkZXZpY2UgbnVtYmVyIDIgdXNp
bmcgZWhjaV9oY2QKWyAgMjg4LjgwMjM1NV0gY2ZnODAyMTE6IENhbGxpbmcgQ1JEQSB0byB1cGRh
dGUgd29ybGQgcmVndWxhdG9yeSBkb21haW4KWyAgMjg4LjgxNjIxMl0gaHViIDEtMToxLjA6IFVT
QiBodWIgZm91bmQKWyAgMjg4LjgxOTgwMl0gSW50ZWwoUikgV2lyZWxlc3MgV2lGaSBMaW5rIEFH
TiBkcml2ZXIgZm9yIExpbnV4LCBpbi10cmVlOgpbICAyODguODE5ODA1XSBDb3B5cmlnaHQoYykg
MjAwMy0yMDExIEludGVsIENvcnBvcmF0aW9uClsgIDI4OC44MTk4NTddIHhlbjogcmVnaXN0ZXJp
bmcgZ3NpIDE4IHRyaWdnZXJpbmcgMCBwb2xhcml0eSAxClsgIDI4OC44MTk4NjJdIEFscmVhZHkg
c2V0dXAgdGhlIEdTSSA6MTgKWyAgMjg4LjgxOTg2Nl0gaXdsd2lmaSAwMDAwOjAyOjAwLjA6IFBD
SSBJTlQgQSAtPiBHU0kgMTggKGxldmVsLCBsb3cpIC0+IElSUSAxOApbICAyODguODE5ODg3XSBp
d2x3aWZpIDAwMDA6MDI6MDAuMDogc2V0dGluZyBsYXRlbmN5IHRpbWVyIHRvIDY0ClsgIDI4OC44
MTk5NTVdIGl3bHdpZmkgMDAwMDowMjowMC4wOiBwY2lfcmVzb3VyY2VfbGVuID0gMHgwMDAwMjAw
MApbICAyODguODE5OTU3XSBpd2x3aWZpIDAwMDA6MDI6MDAuMDogcGNpX3Jlc291cmNlX2Jhc2Ug
PSBmZmZmYzkwMDE2MzNjMDAwClsgIDI4OC44MTk5NjBdIGl3bHdpZmkgMDAwMDowMjowMC4wOiBI
VyBSZXZpc2lvbiBJRCA9IDB4MzQKWyAgMjg4LjgyMDI1NF0gaXdsd2lmaSAwMDAwOjAyOjAwLjA6
IENPTkZJR19JV0xXSUZJX0RFQlVHIGRpc2FibGVkClsgIDI4OC44MjAyNTZdIGl3bHdpZmkgMDAw
MDowMjowMC4wOiBDT05GSUdfSVdMV0lGSV9ERUJVR0ZTIGRpc2FibGVkClsgIDI4OC44MjAyNTdd
IGl3bHdpZmkgMDAwMDowMjowMC4wOiBDT05GSUdfSVdMV0lGSV9ERVZJQ0VfVFJBQ0lORyBlbmFi
bGVkClsgIDI4OC44MjAyNThdIGl3bHdpZmkgMDAwMDowMjowMC4wOiBDT05GSUdfSVdMV0lGSV9E
RVZJQ0VfVEVTVE1PREUgZGlzYWJsZWQKWyAgMjg4LjgyMDI2MF0gaXdsd2lmaSAwMDAwOjAyOjAw
LjA6IENPTkZJR19JV0xXSUZJX1AyUCBkaXNhYmxlZApbICAyODguODIwMjY1XSBpd2x3aWZpIDAw
MDA6MDI6MDAuMDogRGV0ZWN0ZWQgSW50ZWwoUikgQ2VudHJpbm8oUikgQWR2YW5jZWQtTiA2MjA1
IEFHTiwgUkVWPTB4QjAKWyAgMjg4LjgyMDM5NF0gaXdsd2lmaSAwMDAwOjAyOjAwLjA6IEwxIERp
c2FibGVkOyBFbmFibGluZyBMMFMKWyAgMjg4Ljg0MDE0N10gaXdsd2lmaSAwMDAwOjAyOjAwLjA6
IGRldmljZSBFRVBST00gVkVSPTB4NzE1LCBDQUxJQj0weDYKWyAgMjg4Ljg0MDE0OV0gaXdsd2lm
aSAwMDAwOjAyOjAwLjA6IERldmljZSBTS1U6IDB4MUYwClsgIDI4OC44NDAxNTBdIGl3bHdpZmkg
MDAwMDowMjowMC4wOiBWYWxpZCBUeCBhbnQ6IDB4MywgVmFsaWQgUnggYW50OiAweDMKWyAgMjg4
Ljg0MDE2Ml0gaXdsd2lmaSAwMDAwOjAyOjAwLjA6IFR1bmFibGUgY2hhbm5lbHM6IDEzIDgwMi4x
MWJnLCAyNCA4MDIuMTFhIGNoYW5uZWxzClsgIDI4OC45NDczOTddIGh1YiAxLTE6MS4wOiA2IHBv
cnRzIGRldGVjdGVkClsgIDI4OC45NTM3MDBdIGl3bHdpZmkgMDAwMDowMjowMC4wOiBsb2FkZWQg
ZmlybXdhcmUgdmVyc2lvbiAxNy4xNjguNS4zIGJ1aWxkIDQyMzAxClsgIDI4OC45NTQzMTddIGUx
MDAwZTogSW50ZWwoUikgUFJPLzEwMDAgTmV0d29yayBEcml2ZXIgLSAxLjUuMS1rClsgIDI4OC45
NTQzMTldIGUxMDAwZTogQ29weXJpZ2h0KGMpIDE5OTkgLSAyMDExIEludGVsIENvcnBvcmF0aW9u
LgpbICAyODguOTU0MzQzXSB4ZW46IHJlZ2lzdGVyaW5nIGdzaSAyMCB0cmlnZ2VyaW5nIDAgcG9s
YXJpdHkgMQpbICAyODguOTU0MzQ2XSBBbHJlYWR5IHNldHVwIHRoZSBHU0kgOjIwClsgIDI4OC45
NTQzNDldIGUxMDAwZSAwMDAwOjAwOjE5LjA6IFBDSSBJTlQgQSAtPiBHU0kgMjAgKGxldmVsLCBs
b3cpIC0+IElSUSAyMApbICAyODguOTU0MzYzXSBlMTAwMGUgMDAwMDowMDoxOS4wOiBzZXR0aW5n
IGxhdGVuY3kgdGltZXIgdG8gNjQKWyAgMjg4Ljk5NjQ3NF0gUmVnaXN0ZXJlZCBsZWQgZGV2aWNl
OiBwaHkwLWxlZApbICAyODkuMDAwNzE5XSBjZmc4MDIxMTogSWdub3JpbmcgcmVndWxhdG9yeSBy
ZXF1ZXN0IFNldCBieSBjb3JlIHNpbmNlIHRoZSBkcml2ZXIgdXNlcyBpdHMgb3duIGN1c3RvbSBy
ZWd1bGF0b3J5IGRvbWFpbgpbICAyODkuMDExNzEzXSBpZWVlODAyMTEgcGh5MDogU2VsZWN0ZWQg
cmF0ZSBjb250cm9sIGFsZ29yaXRobSAnaXdsLWFnbi1ycycKWyAgMjg5LjA2NTIwM10gdXNiIDIt
MTogbmV3IGhpZ2gtc3BlZWQgVVNCIGRldmljZSBudW1iZXIgMiB1c2luZyBlaGNpX2hjZApbICAy
ODkuMjE1OTMzXSBodWIgMi0xOjEuMDogVVNCIGh1YiBmb3VuZApbICAyODkuMjE5NzM3XSBodWIg
Mi0xOjEuMDogOCBwb3J0cyBkZXRlY3RlZApbICAyODkuMjIxMDAxXSBlMTAwMGUgMDAwMDowMDox
OS4wOiBldGgwOiAoUENJIEV4cHJlc3M6Mi41R1QvczpXaWR0aCB4MSkgMDA6MTM6MjA6Zjk6ZGU6
MjQKWyAgMjg5LjIyMTAwM10gZTEwMDBlIDAwMDA6MDA6MTkuMDogZXRoMDogSW50ZWwoUikgUFJP
LzEwMDAgTmV0d29yayBDb25uZWN0aW9uClsgIDI4OS4yMjEwODJdIGUxMDAwZSAwMDAwOjAwOjE5
LjA6IGV0aDA6IE1BQzogMTAsIFBIWTogMTEsIFBCQSBObzogRkZGRkZGLTBGRgpbICAyODkuMjM5
NjM4XSBkZXZpY2UgZXRoMCBlbnRlcmVkIHByb21pc2N1b3VzIG1vZGUKWyAgMjg5LjU0NTM4Ml0g
dXNiIDItMS41OiBuZXcgbG93LXNwZWVkIFVTQiBkZXZpY2UgbnVtYmVyIDMgdXNpbmcgZWhjaV9o
Y2QKWyAgMjg5LjY2MTAzOV0gaW5wdXQ6IFVTQiBPcHRpY2FsIE1vdXNlIGFzIC9kZXZpY2VzL3Bj
aTAwMDA6MDAvMDAwMDowMDoxZC4wL3VzYjIvMi0xLzItMS41LzItMS41OjEuMC9pbnB1dC9pbnB1
dDEzClsgIDI4OS42NzE3OTJdIGdlbmVyaWMtdXNiIDAwMDM6MDRCMzozMTBDLjAwMDU6IGlucHV0
LGhpZHJhdzA6IFVTQiBISUQgdjEuMTEgTW91c2UgW1VTQiBPcHRpY2FsIE1vdXNlXSBvbiB1c2It
MDAwMDowMDoxZC4wLTEuNS9pbnB1dDAKWyAgMjg5LjY5NzUzNV0gc2QgMDowOjA6MDogW3NkYV0g
VW5oYW5kbGVkIGVycm9yIGNvZGUKWyAgMjg5LjcwMjM4NV0gc2QgMDowOjA6MDogW3NkYV0gIFJl
c3VsdDogaG9zdGJ5dGU9RElEX0JBRF9UQVJHRVQgZHJpdmVyYnl0ZT1EUklWRVJfT0sKWyAgMjg5
LjcxMDM0NF0gc2QgMDowOjA6MDogW3NkYV0gQ0RCOiBXcml0ZSgxMCk6IDJhIDAwIDAwIDAwIDc1
IDg4IDAwIDAwIDAyIDAwClsgIDI4OS43MTc1OTRdIGVuZF9yZXF1ZXN0OiBJL08gZXJyb3IsIGRl
diBzZGEsIHNlY3RvciAzMDA4OApbICAyODkuNzIzMjQyXSBCdWZmZXIgSS9PIGVycm9yIG9uIGRl
dmljZSBkbS0wLCBsb2dpY2FsIGJsb2NrIDEyODA0ClsgIDI4OS43Mjk1NDFdIEVYVDQtZnMgd2Fy
bmluZyAoZGV2aWNlIGRtLTApOiBleHQ0X2VuZF9iaW86MjUxOiBJL08gZXJyb3Igd3JpdGluZyB0
byBpbm9kZSAyMiAob2Zmc2V0IDAgc2l6ZSAxMDI0IHN0YXJ0aW5nIGJsb2NrIDEyODA0KQpbICAy
ODkuNzUwNzA5XSBzZCAwOjA6MDowOiBbc2RhXSBVbmhhbmRsZWQgZXJyb3IgY29kZQpbICAyODku
NzU1MjMwXSB1c2IgMi0xLjY6IG5ldyBsb3ctc3BlZWQgVVNCIGRldmljZSBudW1iZXIgNCB1c2lu
ZyBlaGNpX2hjZApbICAyODkuNzYyNDUzXSBzZCAwOjA6MDowOiBbc2RhXSAgUmVzdWx0OiBob3N0
Ynl0ZT1ESURfQkFEX1RBUkdFVCBkcml2ZXJieXRlPURSSVZFUl9PSwpbICAyODkuNzcwNDEyXSBz
ZCAwOjA6MDowOiBbc2RhXSBDREI6IFJlYWQoMTApOiAyOCAwMCAwMCAxZiAwNyA4MCAwMCAwMCAy
MCAwMApbICAyODkuNzc3NTY4XSBlbmRfcmVxdWVzdDogSS9PIGVycm9yLCBkZXYgc2RhLCBzZWN0
b3IgMjAzMzUzNgpbICAyODkuNzgzNDAzXSBzZCAwOjA6MDowOiBbc2RhXSBVbmhhbmRsZWQgZXJy
b3IgY29kZQpbICAyODkuNzg4NDA0XSBzZCAwOjA6MDowOiBbc2RhXSAgUmVzdWx0OiBob3N0Ynl0
ZT1ESURfQkFEX1RBUkdFVCBkcml2ZXJieXRlPURSSVZFUl9PSwpbICAyODkuNzk2MzUyXSBzZCAw
OjA6MDowOiBbc2RhXSBDREI6IFJlYWQoMTApOiAyOCAwMCAwMCAxZiAwNyA4MCAwMCAwMCAwOCAw
MApbICAyODkuODAzNTA2XSBlbmRfcmVxdWVzdDogSS9PIGVycm9yLCBkZXYgc2RhLCBzZWN0b3Ig
MjAzMzUzNgpbICAyODkuODA5OTA3XSBpbml0OiBGYWlsZWQgdG8gd3JpdGUgdG8gbG9nIGZpbGUg
L3Zhci9sb2cvdXBzdGFydC94c2VydmVyLmxvZwpbICAyODkuODMxMjI2XSBzZCAwOjA6MDowOiBb
c2RhXSBVbmhhbmRsZWQgZXJyb3IgY29kZQpbICAyODkuODM2MDc5XSBzZCAwOjA6MDowOiBbc2Rh
XSAgUmVzdWx0OiBob3N0Ynl0ZT1ESURfQkFEX1RBUkdFVCBkcml2ZXJieXRlPURSSVZFUl9PSwpb
ICAyODkuODQ0MDM2XSBzZCAwOjA6MDowOiBbc2RhXSBDREI6IFdyaXRlKDEwKTogMmEgMDAgMDAg
MDAgODEgODIgMDAgMDAgMDIgMDAKWyAgMjg5Ljg1MTI4NF0gZW5kX3JlcXVlc3Q6IEkvTyBlcnJv
ciwgZGV2IHNkYSwgc2VjdG9yIDMzMTU0ClsgIDI4OS44NTY5MzBdIEJ1ZmZlciBJL08gZXJyb3Ig
b24gZGV2aWNlIGRtLTAsIGxvZ2ljYWwgYmxvY2sgMTQzMzcKWyAgMjg5Ljg2MzE5NF0gRVhUNC1m
cyB3YXJuaW5nIChkZXZpY2UgZG0tMCk6IGV4dDRfZW5kX2JpbzoyNTE6IEkvTyBlcnJvciB3cml0
aW5nIHRvIGlub2RlIDIyIChvZmZzZXQgMCBzaXplIDEwMjQgc3RhcnRpbmcgYmxvY2sgMTQzMzcp
ClsgIDI4OS44NjMzMDNdIHNkIDA6MDowOjA6IFtzZGFdIFVuaGFuZGxlZCBlcnJvciBjb2RlClsg
IDI4OS44NjMzMDVdIHNkIDA6MDowOjA6IFtzZGFdICBSZXN1bHQ6IGhvc3RieXRlPURJRF9CQURf
VEFSR0VUIGRyaXZlcmJ5dGU9RFJJVkVSX09LClsgIDI4OS44NjMzMDddIHNkIDA6MDowOjA6IFtz
ZGFdIENEQjogV3JpdGUoMTApOiAyYSAwMCAwMCAwMCA4NSA4NCAwMCAwMCAwMiAwMApbICAyODku
ODYzMzEyXSBlbmRfcmVxdWVzdDogSS9PIGVycm9yLCBkZXYgc2RhLCBzZWN0b3IgMzQxODAKWyAg
Mjg5Ljg2MzMxNV0gQnVmZmVyIEkvTyBlcnJvciBvbiBkZXZpY2UgZG0tMCwgbG9naWNhbCBibG9j
ayAxNDg1MApbICAyODkuODYzMzE4XSBFWFQ0LWZzIHdhcm5pbmcgKGRldmljZSBkbS0wKTogZXh0
NF9lbmRfYmlvOjI1MTogSS9PIGVycm9yIHdyaXRpbmcgdG8gaW5vZGUgMjIgKG9mZnNldCAwIHNp
emUgMTAyNCBzdGFydGluZyBibG9jayAxNDg1MCkKWyAgMjg5LjkyMzI2Nl0gc2QgMDowOjA6MDog
W3NkYV0gVW5oYW5kbGVkIGVycm9yIGNvZGUKWyAgMjg5LjkyNTQxOV0gaW5wdXQ6IExJVEUtT04g
VGVjaG5vbG9neSBVU0IgTmV0VmlzdGEgRnVsbCBXaWR0aCBLZXlib2FyZC4gYXMgL2RldmljZXMv
cGNpMDAwMDowMC8wMDAwOjAwOjFkLjAvdXNiMi8yLTEvMi0xLjYvMi0xLjY6MS4wL2lucHV0L2lu
cHV0MTQKWyAgMjg5LjkyNTUxMF0gZ2VuZXJpYy11c2IgMDAwMzowNEIzOjMwMjUuMDAwNjogaW5w
dXQsaGlkcmF3MTogVVNCIEhJRCB2MS4xMCBLZXlib2FyZCBbTElURS1PTiBUZWNobm9sb2d5IFVT
QiBOZXRWaXN0YSBGdWxsIFdpZHRoIEtleWJvYXJkLl0gb24gdXNiLTAwMDA6MDA6MWQuMC0xLjYv
aW5wdXQwClsgIDI4OS45NTcyOTldIHNkIDA6MDowOjA6IFtzZGFdICBSZXN1bHQ6IGhvc3RieXRl
PURJRF9CQURfVEFSR0VUIGRyaXZlcmJ5dGU9RFJJVkVSX09LClsgIDI4OS45NjUyNThdIHNkIDA6
MDowOjA6IFtzZGFdIENEQjogUmVhZCgxMCk6IDI4IDAwIDAwIDFmIDA3IDgwIDAwIDAwIDA4IDAw
ClsgIDI4OS45NzI0MTJdIGVuZF9yZXF1ZXN0OiBJL08gZXJyb3IsIGRldiBzZGEsIHNlY3RvciAy
MDMzNTM2ClsgIDI4OS45ODc3NzFdIHNkIDA6MDowOjA6IFtzZGFdIFVuaGFuZGxlZCBlcnJvciBj
b2RlClsgIDI4OS45OTI2MTldIHNkIDA6MDowOjA6IFtzZGFdICBSZXN1bHQ6IGhvc3RieXRlPURJ
RF9CQURfVEFSR0VUIGRyaXZlcmJ5dGU9RFJJVkVSX09LClsgIDI5MC4wMDA1ODJdIHNkIDA6MDow
OjA6IFtzZGFdIENEQjogUmVhZCgxMCk6IDI4IDAwIDAwIDFmIDA3IDgwIDAwIDAwIDA4IDAwClsg
IDI5MC4wMDc3NDFdIGVuZF9yZXF1ZXN0OiBJL08gZXJyb3IsIGRldiBzZGEsIHNlY3RvciAyMDMz
NTM2ClsgIDI5MC4wMTU1OTldIHNkIDA6MDowOjA6IFtzZGFdIFVuaGFuZGxlZCBlcnJvciBjb2Rl
ClsgIDI5MC4wMjA0NDNdIHNkIDA6MDowOjA6IFtzZGFdICBSZXN1bHQ6IGhvc3RieXRlPURJRF9C
QURfVEFSR0VUIGRyaXZlcmJ5dGU9RFJJVkVSX09LClsgIDI5MC4wMjg0MThdIHNkIDA6MDowOjA6
IFtzZGFdIENEQjogV3JpdGUoMTApOiAyYSAwMCAwMCAwMCA4NSA4NiAwMCAwMCAwMiAwMApbICAy
OTAuMDM1NjYwXSBlbmRfcmVxdWVzdDogSS9PIGVycm9yLCBkZXYgc2RhLCBzZWN0b3IgMzQxODIK
WyAgMjkwLjA0MTMwMF0gQnVmZmVyIEkvTyBlcnJvciBvbiBkZXZpY2UgZG0tMCwgbG9naWNhbCBi
bG9jayAxNDg1MQpbICAyOTAuMDQ3NTc2XSBFWFQ0LWZzIHdhcm5pbmcgKGRldmljZSBkbS0wKTog
ZXh0NF9lbmRfYmlvOjI1MTogSS9PIGVycm9yIHdyaXRpbmcgdG8gaW5vZGUgMjIgKG9mZnNldCAw
IHNpemUgMTAyNCBzdGFydGluZyBibG9jayAxNDg1MSkKWyAgMjkwLjA2ODk1OF0gc2QgMDowOjA6
MDogW3NkYV0gVW5oYW5kbGVkIGVycm9yIGNvZGUKWyAgMjkwLjA3MzgwMl0gc2QgMDowOjA6MDog
W3NkYV0gIFJlc3VsdDogaG9zdGJ5dGU9RElEX0JBRF9UQVJHRVQgZHJpdmVyYnl0ZT1EUklWRVJf
T0sKWyAgMjkwLjA4MTc2N10gc2QgMDowOjA6MDogW3NkYV0gQ0RCOiBSZWFkKDEwKTogMjggMDAg
MDAgMWYgMDcgODAgMDAgMDAgMDggMDAKWyAgMjkwLjA4ODkyNl0gZW5kX3JlcXVlc3Q6IEkvTyBl
cnJvciwgZGV2IHNkYSwgc2VjdG9yIDIwMzM1MzYKWyAgMjkwLjEwOTEzMV0gc2QgMDowOjA6MDog
W3NkYV0gVW5oYW5kbGVkIGVycm9yIGNvZGUKWyAgMjkwLjExMzk3N10gc2QgMDowOjA6MDogW3Nk
YV0gIFJlc3VsdDogaG9zdGJ5dGU9RElEX0JBRF9UQVJHRVQgZHJpdmVyYnl0ZT1EUklWRVJfT0sK
WyAgMjkwLjEyMTk0MV0gc2QgMDowOjA6MDogW3NkYV0gQ0RCOiBXcml0ZSgxMCk6IDJhIDAwIDAw
IDAwIDgxIDg0IDAwIDAwIDAyIDAwClsgIDI5MC4xMjkxOTBdIGVuZF9yZXF1ZXN0OiBJL08gZXJy
b3IsIGRldiBzZGEsIHNlY3RvciAzMzE1NgpbICAyOTAuMTM0ODMwXSBCdWZmZXIgSS9PIGVycm9y
IG9uIGRldmljZSBkbS0wLCBsb2dpY2FsIGJsb2NrIDE0MzM4ClsgIDI5MC4xNDExMDVdIEVYVDQt
ZnMgd2FybmluZyAoZGV2aWNlIGRtLTApOiBleHQ0X2VuZF9iaW86MjUxOiBJL08gZXJyb3Igd3Jp
dGluZyB0byBpbm9kZSAyMiAob2Zmc2V0IDAgc2l6ZSAxMDI0IHN0YXJ0aW5nIGJsb2NrIDE0MzM4
KQpbICAyOTAuMTU2MjU0XSBzZCAwOjA6MDowOiBbc2RhXSBVbmhhbmRsZWQgZXJyb3IgY29kZQpb
ICAyOTAuMTYxMDk4XSBzZCAwOjA6MDowOiBbc2RhXSAgUmVzdWx0OiBob3N0Ynl0ZT1ESURfQkFE
X1RBUkdFVCBkcml2ZXJieXRlPURSSVZFUl9PSwpbICAyOTAuMTY5MDcwXSBzZCAwOjA6MDowOiBb
c2RhXSBDREI6IFJlYWQoMTApOiAyOCAwMCAwMCAxZiAwNyA4MCAwMCAwMCAwOCAwMApbICAyOTAu
MTc2MjIyXSBlbmRfcmVxdWVzdDogSS9PIGVycm9yLCBkZXYgc2RhLCBzZWN0b3IgMjAzMzUzNgpb
ICAyOTIuNTAwOTc1XSBlMTAwMGU6IGV0aDAgTklDIExpbmsgaXMgVXAgMTAwMCBNYnBzIEZ1bGwg
RHVwbGV4LCBGbG93IENvbnRyb2w6IFJ4L1R4ClsgIDI5Mi41MDg3OTddIGJyLWV0aDA6IHBvcnQg
MShldGgwKSBlbnRlcmluZyBmb3J3YXJkaW5nIHN0YXRlClsgIDI5Mi41MTQ1MTNdIGJyLWV0aDA6
IHBvcnQgMShldGgwKSBlbnRlcmluZyBmb3J3YXJkaW5nIHN0YXRlClsgIDI5NC45ODUyMjRdIEpC
RDI6IERldGVjdGVkIElPIGVycm9ycyB3aGlsZSBmbHVzaGluZyBmaWxlIGRhdGEgb24gZG0tMC04
ClsgIDI5NC45OTE5NjZdIHNkIDA6MDowOjA6IFtzZGFdIFVuaGFuZGxlZCBlcnJvciBjb2RlClsg
IDI5NC45OTY5ODJdIHNkIDA6MDowOjA6IFtzZGFdICBSZXN1bHQ6IGhvc3RieXRlPURJRF9CQURf
VEFSR0VUIGRyaXZlcmJ5dGU9RFJJVkVSX09LClsgIDI5NS4wMDQ5MjVdIHNkIDA6MDowOjA6IFtz
ZGFdIENEQjogV3JpdGUoMTApOiAyYSAwMCAwMCAwMSA5MSBiZSAwMCAwMCAwYSAwMApbICAyOTUu
MDEyMTc2XSBlbmRfcmVxdWVzdDogSS9PIGVycm9yLCBkZXYgc2RhLCBzZWN0b3IgMTAyODQ2Clsg
IDI5NS4wMTc5NTNdIEFib3J0aW5nIGpvdXJuYWwgb24gZGV2aWNlIGRtLTAtOC4KWyAgMjk1LjAy
MjU5NF0gc2QgMDowOjA6MDogW3NkYV0gVW5oYW5kbGVkIGVycm9yIGNvZGUKWyAgMjk1LjAyNzU5
NF0gc2QgMDowOjA6MDogW3NkYV0gIFJlc3VsdDogaG9zdGJ5dGU9RElEX0JBRF9UQVJHRVQgZHJp
dmVyYnl0ZT1EUklWRVJfT0sKWyAgMjk1LjAzNTU1Nl0gc2QgMDowOjA6MDogW3NkYV0gQ0RCOiBX
cml0ZSgxMCk6IDJhIDAwIDAwIDAxIDkxIDgyIDAwIDAwIDAyIDAwClsgIDI5NS4wNDI3OTZdIGVu
ZF9yZXF1ZXN0OiBJL08gZXJyb3IsIGRldiBzZGEsIHNlY3RvciAxMDI3ODYKWyAgMjk1LjA0ODUz
Ml0gcXVpZXRfZXJyb3I6IDIgY2FsbGJhY2tzIHN1cHByZXNzZWQKWyAgMjk1LjA1MzI2OF0gQnVm
ZmVyIEkvTyBlcnJvciBvbiBkZXZpY2UgZG0tMCwgbG9naWNhbCBibG9jayA0OTE1MwpbICAyOTUu
MDU5NTM1XSBsb3N0IHBhZ2Ugd3JpdGUgZHVlIHRvIEkvTyBlcnJvciBvbiBkbS0wClsgIDI5NS4w
NjQ3MzBdIEpCRDI6IEkvTyBlcnJvciBkZXRlY3RlZCB3aGVuIHVwZGF0aW5nIGpvdXJuYWwgc3Vw
ZXJibG9jayBmb3IgZG0tMC04LgoK
--14dae9340a33ade82404c6b2a306
Content-Type: text/plain; charset=US-ASCII; name="xen-dump-good.txt"
Content-Disposition: attachment; filename="xen-dump-good.txt"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_h5lfctac2

KFhFTikgKioqIFNlcmlhbCBpbnB1dCAtPiBYZW4gKHR5cGUgJ0NUUkwtYScgdGhyZWUgdGltZXMg
dG8gc3dpdGNoIGlucHV0IHRvIERPTTApCihYRU4pICcqJyBwcmVzc2VkIC0+IGZpcmluZyBhbGwg
ZGlhZ25vc3RpYyBrZXloYW5kbGVycwooWEVOKSBbZDogZHVtcCByZWdpc3RlcnNdCihYRU4pICdk
JyBwcmVzc2VkIC0+IGR1bXBpbmcgcmVnaXN0ZXJzCihYRU4pIAooWEVOKSAqKiogRHVtcGluZyBD
UFUwIGhvc3Qgc3RhdGU6ICoqKgooWEVOKSAtLS0tWyBYZW4tNC4yLjAtcmMyLXByZSAgeDg2XzY0
ICBkZWJ1Zz15ICBUYWludGVkOiAgICBDIF0tLS0tCihYRU4pIENQVTogICAgMAooWEVOKSBSSVA6
ICAgIGUwMDg6WzxmZmZmODJjNDgwMTNkNzdlPl0gbnMxNjU1MF9wb2xsKzB4MjcvMHgzMwooWEVO
KSBSRkxBR1M6IDAwMDAwMDAwMDAwMTAyODYgICBDT05URVhUOiBoeXBlcnZpc29yCihYRU4pIHJh
eDogZmZmZjgyYzQ4MDMwMjVhMCAgIHJieDogZmZmZjgyYzQ4MDMwMjQ4MCAgIHJjeDogMDAwMDAw
MDAwMDAwMDAwMwooWEVOKSByZHg6IDAwMDAwMDAwMDAwMDAwMDAgICByc2k6IGZmZmY4MmM0ODAy
ZTI1YzggICByZGk6IGZmZmY4MmM0ODAyNzE4MDAKKFhFTikgcmJwOiBmZmZmODJjNDgwMmI3ZTMw
ICAgcnNwOiBmZmZmODJjNDgwMmI3ZTMwICAgcjg6ICAwMDAwMDAwMDAwMDAwMDAxCihYRU4pIHI5
OiAgZmZmZjgzMDE0ODk5YWVhOCAgIHIxMDogMDAwMDAwMjg2NDIzMGZkYyAgIHIxMTogMDAwMDAw
MDAwMDAwMDI0NgooWEVOKSByMTI6IGZmZmY4MmM0ODAyNzE4MDAgICByMTM6IGZmZmY4MmM0ODAx
M2Q3NTcgICByMTQ6IDAwMDAwMDI4NjNmNWQwN2EKKFhFTikgcjE1OiBmZmZmODJjNDgwMzAyMzA4
ICAgY3IwOiAwMDAwMDAwMDgwMDUwMDNiICAgY3I0OiAwMDAwMDAwMDAwMTAyNmYwCihYRU4pIGNy
MzogMDAwMDAwMDEzZDk2YzAwMCAgIGNyMjogZmZmZjg4MDAyNTY4ZWI5OAooWEVOKSBkczogMDAw
MCAgIGVzOiAwMDAwICAgZnM6IDAwMDAgICBnczogMDAwMCAgIHNzOiBlMDEwICAgY3M6IGUwMDgK
KFhFTikgWGVuIHN0YWNrIHRyYWNlIGZyb20gcnNwPWZmZmY4MmM0ODAyYjdlMzA6CihYRU4pICAg
IGZmZmY4MmM0ODAyYjdlNjAgZmZmZjgyYzQ4MDEyODE3ZiAwMDAwMDAwMDAwMDAwMDAyIGZmZmY4
MmM0ODAyZTI1YzgKKFhFTikgICAgZmZmZjgyYzQ4MDMwMjQ4MCBmZmZmODMwMTQ4OTkyZDQwIGZm
ZmY4MmM0ODAyYjdlYjAgZmZmZjgyYzQ4MDEyODI4MQooWEVOKSAgICBmZmZmODJjNDgwMmI3ZjE4
IDAwMDAwMDAwMDAwMDAyNDYgMDAwMDAwMjg2NDIzMGZkYyBmZmZmODJjNDgwMmQ4ODgwCihYRU4p
ICAgIGZmZmY4MmM0ODAyZDg4ODAgZmZmZjgyYzQ4MDJiN2YxOCBmZmZmZmZmZmZmZmZmZmZmIGZm
ZmY4MmM0ODAzMDIzMDgKKFhFTikgICAgZmZmZjgyYzQ4MDJiN2VlMCBmZmZmODJjNDgwMTI1NDA1
IGZmZmY4MmM0ODAyYjdmMTggZmZmZjgyYzQ4MDJiN2YxOAooWEVOKSAgICAwMDAwMDAwMGZmZmZm
ZmZmIDAwMDAwMDAwMDAwMDAwMDIgZmZmZjgyYzQ4MDJiN2VmMCBmZmZmODJjNDgwMTI1NDg0CihY
RU4pICAgIGZmZmY4MmM0ODAyYjdmMTAgZmZmZjgyYzQ4MDE1OGMwNSBmZmZmODMwMGFhNTg0MDAw
IGZmZmY4MzAwYWEwZmMwMDAKKFhFTikgICAgZmZmZjgyYzQ4MDJiN2RhOCAwMDAwMDAwMDAwMDAw
MDAwIGZmZmZmZmZmZmZmZmZmZmYgMDAwMDAwMDAwMDAwMDAwMAooWEVOKSAgICBmZmZmZmZmZjgx
YWFmZGEwIGZmZmZmZmZmODFhMDFlZTggZmZmZmZmZmY4MWEwMWZkOCAwMDAwMDAwMDAwMDAwMjQ2
CihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDEgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgZmZmZmZmZmY4MTAwMTNhYSAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwZGVhZGJlZWYgMDAwMDAwMDBkZWFkYmVlZgooWEVOKSAgICAwMDAwMDEw
MDAwMDAwMDAwIGZmZmZmZmZmODEwMDEzYWEgMDAwMDAwMDAwMDAwZTAzMyAwMDAwMDAwMDAwMDAw
MjQ2CihYRU4pICAgIGZmZmZmZmZmODFhMDFlZDAgMDAwMDAwMDAwMDAwZTAyYiAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgZmZmZjgzMDBhYTU4NDAwMAooWEVOKSAgICAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgWGVuIGNhbGwgdHJhY2U6CihYRU4p
ICAgIFs8ZmZmZjgyYzQ4MDEzZDc3ZT5dIG5zMTY1NTBfcG9sbCsweDI3LzB4MzMKKFhFTikgICAg
WzxmZmZmODJjNDgwMTI4MTdmPl0gZXhlY3V0ZV90aW1lcisweDRlLzB4NmMKKFhFTikgICAgWzxm
ZmZmODJjNDgwMTI4MjgxPl0gdGltZXJfc29mdGlycV9hY3Rpb24rMHhlNC8weDIxYQooWEVOKSAg
ICBbPGZmZmY4MmM0ODAxMjU0MDU+XSBfX2RvX3NvZnRpcnErMHg5NS8weGEwCihYRU4pICAgIFs8
ZmZmZjgyYzQ4MDEyNTQ4ND5dIGRvX3NvZnRpcnErMHgyNi8weDI4CihYRU4pICAgIFs8ZmZmZjgy
YzQ4MDE1OGMwNT5dIGlkbGVfbG9vcCsweDZmLzB4NzEKKFhFTikgICAgCihYRU4pICoqKiBEdW1w
aW5nIENQVTEgaG9zdCBzdGF0ZTogKioqCihYRU4pIC0tLS1bIFhlbi00LjIuMC1yYzItcHJlICB4
ODZfNjQgIGRlYnVnPXkgIFRhaW50ZWQ6ICAgIEMgXS0tLS0KKFhFTikgQ1BVOiAgICAxCihYRU4p
IFJJUDogICAgZTAwODpbPGZmZmY4MmM0ODAxNTgzYzQ+XSBkZWZhdWx0X2lkbGUrMHg5OS8weDll
CihYRU4pIFJGTEFHUzogMDAwMDAwMDAwMDAwMDI0NiAgIENPTlRFWFQ6IGh5cGVydmlzb3IKKFhF
TikgcmF4OiBmZmZmODJjNDgwMzAyMzcwICAgcmJ4OiBmZmZmODMwMTNlNjdmZjE4ICAgcmN4OiAw
MDAwMDAwMDAwMDAwMDAxCihYRU4pIHJkeDogMDAwMDAwM2NjY2VkMWQ4MCAgIHJzaTogMDAwMDAw
MDA1YjM0MDU2MCAgIHJkaTogMDAwMDAwMDAwMDAwMDAwMQooWEVOKSByYnA6IGZmZmY4MzAxM2U2
N2ZlZjAgICByc3A6IGZmZmY4MzAxM2U2N2ZlZjAgICByODogIDAwMDAwMDJlNzBiYTMwMjQKKFhF
Tikgcjk6ICBmZmZmODMwMGE4M2ZjMDYwICAgcjEwOiAwMDAwMDAwMGRlYWRiZWVmICAgcjExOiAw
MDAwMDAwMDAwMDAwMjQ2CihYRU4pIHIxMjogZmZmZjgzMDEzZTY3ZmYxOCAgIHIxMzogMDAwMDAw
MDBmZmZmZmZmZiAgIHIxNDogMDAwMDAwMDAwMDAwMDAwMgooWEVOKSByMTU6IGZmZmY4MzAxNGQx
ZDQwODggICBjcjA6IDAwMDAwMDAwODAwNTAwM2IgICBjcjQ6IDAwMDAwMDAwMDAxMDI2ZjAKKFhF
TikgY3IzOiAwMDAwMDAwMTRjZGFiMDAwICAgY3IyOiBmZmZmODgwMDAzMTk4YzgwCihYRU4pIGRz
OiAwMDJiICAgZXM6IDAwMmIgICBmczogMDAwMCAgIGdzOiAwMDAwICAgc3M6IGUwMTAgICBjczog
ZTAwOAooWEVOKSBYZW4gc3RhY2sgdHJhY2UgZnJvbSByc3A9ZmZmZjgzMDEzZTY3ZmVmMDoKKFhF
TikgICAgZmZmZjgzMDEzZTY3ZmYxMCBmZmZmODJjNDgwMTU4YmY4IGZmZmY4MzAwYWEwZmUwMDAg
ZmZmZjgzMDBhODNmYzAwMAooWEVOKSAgICBmZmZmODMwMTNlNjdmZGE4IDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAyCihYRU4pICAgIGZmZmZmZmZmODFh
YWZkYTAgZmZmZjg4MDAyNzg2ZmVlMCBmZmZmODgwMDI3ODZmZmQ4IDAwMDAwMDAwMDAwMDAyNDYK
KFhFTikgICAgMDAwMDAwMDAwMDAwMDAwMSAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMAooWEVOKSAgICBmZmZmZmZmZjgxMDAxM2FhIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDBkZWFkYmVlZiAwMDAwMDAwMGRlYWRiZWVmCihYRU4pICAgIDAwMDAwMTAw
MDAwMDAwMDAgZmZmZmZmZmY4MTAwMTNhYSAwMDAwMDAwMDAwMDBlMDMzIDAwMDAwMDAwMDAwMDAy
NDYKKFhFTikgICAgZmZmZjg4MDAyNzg2ZmVjOCAwMDAwMDAwMDAwMDBlMDJiIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMAooWEVOKSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMSBmZmZmODMwMGFhMGZlMDAwCihYRU4pICAgIDAwMDAw
MDNjY2NlZDFkODAgMDAwMDAwMDAwMDAwMDAwMAooWEVOKSBYZW4gY2FsbCB0cmFjZToKKFhFTikg
ICAgWzxmZmZmODJjNDgwMTU4M2M0Pl0gZGVmYXVsdF9pZGxlKzB4OTkvMHg5ZQooWEVOKSAgICBb
PGZmZmY4MmM0ODAxNThiZjg+XSBpZGxlX2xvb3ArMHg2Mi8weDcxCihYRU4pICAgIAooWEVOKSAq
KiogRHVtcGluZyBDUFUyIGhvc3Qgc3RhdGU6ICoqKgooWEVOKSAtLS0tWyBYZW4tNC4yLjAtcmMy
LXByZSAgeDg2XzY0ICBkZWJ1Zz15ICBUYWludGVkOiAgICBDIF0tLS0tCihYRU4pIENQVTogICAg
MgooWEVOKSBSSVA6ICAgIGUwMDg6WzxmZmZmODJjNDgwMTU4M2M0Pl0gZGVmYXVsdF9pZGxlKzB4
OTkvMHg5ZQooWEVOKSBSRkxBR1M6IDAwMDAwMDAwMDAwMDAyNDYgICBDT05URVhUOiBoeXBlcnZp
c29yCihYRU4pIHJheDogZmZmZjgyYzQ4MDMwMjM3MCAgIHJieDogZmZmZjgzMDE0ODkzZmYxOCAg
IHJjeDogMDAwMDAwMDAwMDAwMDAwMgooWEVOKSByZHg6IDAwMDAwMDNjY2M2YmJkODAgICByc2k6
IDAwMDAwMDAwNWJmOTgyMTAgICByZGk6IDAwMDAwMDAwMDAwMDAwMDIKKFhFTikgcmJwOiBmZmZm
ODMwMTQ4OTNmZWYwICAgcnNwOiBmZmZmODMwMTQ4OTNmZWYwICAgcjg6ICAwMDAwMDAyZTkzYjg4
ODc0CihYRU4pIHI5OiAgZmZmZjgzMDBhYTU4MzA2MCAgIHIxMDogMDAwMDAwMDBkZWFkYmVlZiAg
IHIxMTogMDAwMDAwMDAwMDAwMDI0NgooWEVOKSByMTI6IGZmZmY4MzAxNDg5M2ZmMTggICByMTM6
IDAwMDAwMDAwZmZmZmZmZmYgICByMTQ6IDAwMDAwMDAwMDAwMDAwMDIKKFhFTikgcjE1OiBmZmZm
ODMwMTRjOWJlMDg4ICAgY3IwOiAwMDAwMDAwMDgwMDUwMDNiICAgY3I0OiAwMDAwMDAwMDAwMTAy
NmYwCihYRU4pIGNyMzogMDAwMDAwMDEzZGQ0NjAwMCAgIGNyMjogZmZmZjg4MDAyNWU5NDA1MAoo
WEVOKSBkczogMDAyYiAgIGVzOiAwMDJiICAgZnM6IDAwMDAgICBnczogMDAwMCAgIHNzOiBlMDEw
ICAgY3M6IGUwMDgKKFhFTikgWGVuIHN0YWNrIHRyYWNlIGZyb20gcnNwPWZmZmY4MzAxNDg5M2Zl
ZjA6CihYRU4pICAgIGZmZmY4MzAxNDg5M2ZmMTAgZmZmZjgyYzQ4MDE1OGJmOCBmZmZmODMwMGE4
NWM3MDAwIGZmZmY4MzAwYWE1ODMwMDAKKFhFTikgICAgZmZmZjgzMDE0ODkzZmRhOCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMwooWEVOKSAgICBmZmZm
ZmZmZjgxYWFmZGEwIGZmZmY4ODAwMjc4ODFlZTAgZmZmZjg4MDAyNzg4MWZkOCAwMDAwMDAwMDAw
MDAwMjQ2CihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDEgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgZmZmZmZmZmY4MTAwMTNhYSAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwZGVhZGJlZWYgMDAwMDAwMDBkZWFkYmVlZgooWEVOKSAgICAw
MDAwMDEwMDAwMDAwMDAwIGZmZmZmZmZmODEwMDEzYWEgMDAwMDAwMDAwMDAwZTAzMyAwMDAwMDAw
MDAwMDAwMjQ2CihYRU4pICAgIGZmZmY4ODAwMjc4ODFlYzggMDAwMDAwMDAwMDAwZTAyYiAzZWMy
ZTMyNjgwOTJlMTY3IGI5N2NlYzViOWU2OGRjOTMKKFhFTikgICAgY2M5ODA3OTI0OWZiNzNiNSBl
MWEzNWRlNGZkMTYxYTZmIGUxZTM1ZDY0MDAwMDAwMDIgZmZmZjgzMDBhODVjNzAwMAooWEVOKSAg
ICAwMDAwMDAzY2NjNmJiZDgwIGUyNGQ1YTM4ZjJhZTA1MWUKKFhFTikgWGVuIGNhbGwgdHJhY2U6
CihYRU4pICAgIFs8ZmZmZjgyYzQ4MDE1ODNjND5dIGRlZmF1bHRfaWRsZSsweDk5LzB4OWUKKFhF
TikgICAgWzxmZmZmODJjNDgwMTU4YmY4Pl0gaWRsZV9sb29wKzB4NjIvMHg3MQooWEVOKSAgICAK
KFhFTikgKioqIER1bXBpbmcgQ1BVMyBob3N0IHN0YXRlOiAqKioKKFhFTikgLS0tLVsgWGVuLTQu
Mi4wLXJjMi1wcmUgIHg4Nl82NCAgZGVidWc9eSAgVGFpbnRlZDogICAgQyBdLS0tLQooWEVOKSBD
UFU6ICAgIDMKKFhFTikgUklQOiAgICBlMDA4Ols8ZmZmZjgyYzQ4MDE1ODNjND5dIGRlZmF1bHRf
aWRsZSsweDk5LzB4OWUKKFhFTikgUkZMQUdTOiAwMDAwMDAwMDAwMDAwMjQ2ICAgQ09OVEVYVDog
aHlwZXJ2aXNvcgooWEVOKSByYXg6IGZmZmY4MmM0ODAzMDIzNzAgICByYng6IGZmZmY4MzAxNDg5
MmZmMTggICByY3g6IDAwMDAwMDAwMDAwMDAwMDMKKFhFTikgcmR4OiAwMDAwMDAzY2M4Njg1ZDgw
ICAgcnNpOiAwMDAwMDAwMDVjYmY0NmE4ICAgcmRpOiAwMDAwMDAwMDAwMDAwMDAzCihYRU4pIHJi
cDogZmZmZjgzMDE0ODkyZmVmMCAgIHJzcDogZmZmZjgzMDE0ODkyZmVmMCAgIHI4OiAgMDAwMDAw
MmViNjk0ZjFjOAooWEVOKSByOTogIGZmZmY4MzAwYTgzZmQwNjAgICByMTA6IDAwMDAwMDAwZGVh
ZGJlZWYgICByMTE6IDAwMDAwMDAwMDAwMDAyNDYKKFhFTikgcjEyOiBmZmZmODMwMTQ4OTJmZjE4
ICAgcjEzOiAwMDAwMDAwMGZmZmZmZmZmICAgcjE0OiAwMDAwMDAwMDAwMDAwMDAyCihYRU4pIHIx
NTogZmZmZjgzMDE0ODk4ODA4OCAgIGNyMDogMDAwMDAwMDA4MDA1MDAzYiAgIGNyNDogMDAwMDAw
MDAwMDEwMjZmMAooWEVOKSBjcjM6IDAwMDAwMDAxNGNkYjIwMDAgICBjcjI6IGZmZmY4ODAwMDJi
ZDExNTAKKFhFTikgZHM6IDAwMmIgICBlczogMDAyYiAgIGZzOiAwMDAwICAgZ3M6IDAwMDAgICBz
czogZTAxMCAgIGNzOiBlMDA4CihYRU4pIFhlbiBzdGFjayB0cmFjZSBmcm9tIHJzcD1mZmZmODMw
MTQ4OTJmZWYwOgooWEVOKSAgICBmZmZmODMwMTQ4OTJmZjEwIGZmZmY4MmM0ODAxNThiZjggZmZm
ZjgzMDBhODNmZTAwMCBmZmZmODMwMGE4M2ZkMDAwCihYRU4pICAgIGZmZmY4MzAxNDg5MmZkYTgg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDEKKFhFTikg
ICAgZmZmZmZmZmY4MWFhZmRhMCBmZmZmODgwMDI3ODZkZWUwIGZmZmY4ODAwMjc4NmRmZDggMDAw
MDAwMDAwMDAwMDI0NgooWEVOKSAgICAwMDAwMDAwMDAwMDAwMDAxIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwCihYRU4pICAgIGZmZmZmZmZmODEwMDEz
YWEgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMGRlYWRiZWVmIDAwMDAwMDAwZGVhZGJlZWYKKFhF
TikgICAgMDAwMDAxMDAwMDAwMDAwMCBmZmZmZmZmZjgxMDAxM2FhIDAwMDAwMDAwMDAwMGUwMzMg
MDAwMDAwMDAwMDAwMDI0NgooWEVOKSAgICBmZmZmODgwMDI3ODZkZWM4IDAwMDAwMDAwMDAwMGUw
MmIgM2VjMmUzMjY4MDkyZTE2NyBiOTdjZWM1YjllNjhkYzkzCihYRU4pICAgIGNjOTgwNzkyNDlm
YjczYjUgZTFhMzVkZTRmZDE2MWE2ZiBlMWUzNWQ2NDAwMDAwMDAzIGZmZmY4MzAwYTgzZmUwMDAK
KFhFTikgICAgMDAwMDAwM2NjODY4NWQ4MCBlMjRkNWEzOGYyYWUwNTFlCihYRU4pIFhlbiBjYWxs
IHRyYWNlOgooWEVOKSAgICBbPGZmZmY4MmM0ODAxNTgzYzQ+XSBkZWZhdWx0X2lkbGUrMHg5OS8w
eDllCihYRU4pICAgIFs8ZmZmZjgyYzQ4MDE1OGJmOD5dIGlkbGVfbG9vcCsweDYyLzB4NzEKKFhF
TikgICAgCihYRU4pIFswOiBkdW1wIERvbTAgcmVnaXN0ZXJzXQooWEVOKSAnMCcgcHJlc3NlZCAt
PiBkdW1waW5nIERvbTAncyByZWdpc3RlcnMKKFhFTikgKioqIER1bXBpbmcgRG9tMCB2Y3B1IzAg
c3RhdGU6ICoqKgooWEVOKSBSSVA6ICAgIGUwMzM6WzxmZmZmZmZmZjgxMDAxM2FhPl0KKFhFTikg
UkZMQUdTOiAwMDAwMDAwMDAwMDAwMjQ2ICAgRU06IDAgICBDT05URVhUOiBwdiBndWVzdAooWEVO
KSByYXg6IDAwMDAwMDAwMDAwMDAwMDAgICByYng6IGZmZmZmZmZmODFhMDFmZDggICByY3g6IGZm
ZmZmZmZmODEwMDEzYWEKKFhFTikgcmR4OiAwMDAwMDAwMDAwMDAwMDAwICAgcnNpOiAwMDAwMDAw
MGRlYWRiZWVmICAgcmRpOiAwMDAwMDAwMGRlYWRiZWVmCihYRU4pIHJicDogZmZmZmZmZmY4MWEw
MWVlOCAgIHJzcDogZmZmZmZmZmY4MWEwMWVkMCAgIHI4OiAgMDAwMDAwMDAwMDAwMDAwMAooWEVO
KSByOTogIDAwMDAwMDAwMDAwMDAwMDAgICByMTA6IDAwMDAwMDAwMDAwMDAwMDEgICByMTE6IDAw
MDAwMDAwMDAwMDAyNDYKKFhFTikgcjEyOiBmZmZmZmZmZjgxYWFmZGEwICAgcjEzOiAwMDAwMDAw
MDAwMDAwMDAwICAgcjE0OiBmZmZmZmZmZmZmZmZmZmZmCihYRU4pIHIxNTogMDAwMDAwMDAwMDAw
MDAwMCAgIGNyMDogMDAwMDAwMDAwMDAwMDAwOCAgIGNyNDogMDAwMDAwMDAwMDAwMjY2MAooWEVO
KSBjcjM6IDAwMDAwMDAxM2Q5NmMwMDAgICBjcjI6IDAwMDA3ZmQyMWU5NzMwMDAKKFhFTikgZHM6
IDAwMDAgICBlczogMDAwMCAgIGZzOiAwMDAwICAgZ3M6IDAwMDAgICBzczogZTAyYiAgIGNzOiBl
MDMzCihYRU4pIEd1ZXN0IHN0YWNrIHRyYWNlIGZyb20gcnNwPWZmZmZmZmZmODFhMDFlZDA6CihY
RU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDBmZmZmZmZmZiBmZmZmZmZmZjgxMDBhNWMw
IGZmZmZmZmZmODFhMDFmMTgKKFhFTikgICAgZmZmZmZmZmY4MTAxYzY2MyBmZmZmZmZmZjgxYTAx
ZmQ4IGZmZmZmZmZmODFhYWZkYTAgZmZmZjg4MDAyZGVlMWEwMAooWEVOKSAgICBmZmZmZmZmZmZm
ZmZmZmZmIGZmZmZmZmZmODFhMDFmNDggZmZmZmZmZmY4MTAxMzIzNiBmZmZmZmZmZmZmZmZmZmZm
CihYRU4pICAgIDhlNmExZGU5NjBlNzVkYzMgMDAwMDAwMDAwMDAwMDAwMCBmZmZmZmZmZjgxYjE1
MTYwIGZmZmZmZmZmODFhMDFmNTgKKFhFTikgICAgZmZmZmZmZmY4MTU1NGY1ZSBmZmZmZmZmZjgx
YTAxZjk4IGZmZmZmZmZmODFhY2NiZjUgZmZmZmZmZmY4MWIxNTE2MAooWEVOKSAgICBkMGEwZjA3
NTJhZDlhMDA4IDAwMDAwMDAwMDBjZGYwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgZmZmZmZmZmY4MWEwMWZiOCBmZmZmZmZmZjgx
YWNjMzRiIGZmZmZmZmZmN2ZmZmZmZmYKKFhFTikgICAgZmZmZmZmZmY4NGIyNTAwMCBmZmZmZmZm
ZjgxYTAxZmY4IGZmZmZmZmZmODFhY2ZlY2MgMDAwMDAwMDAwMDAwMDAwMAooWEVOKSAgICAwMDAw
MDAwMTAwMDAwMDAwIDAwMTAwODAwMDAwMzA2YTQgMWZjOThiNzVlM2I4MjI4MyAwMDAwMDAwMDAw
MDAwMDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMAooWEVOKSAq
KiogRHVtcGluZyBEb20wIHZjcHUjMSBzdGF0ZTogKioqCihYRU4pIFJJUDogICAgZTAzMzpbPGZm
ZmZmZmZmODEwMDEzYWE+XQooWEVOKSBSRkxBR1M6IDAwMDAwMDAwMDAwMDAyNDYgICBFTTogMCAg
IENPTlRFWFQ6IHB2IGd1ZXN0CihYRU4pIHJheDogMDAwMDAwMDAwMDAwMDAwMCAgIHJieDogZmZm
Zjg4MDAyNzg2ZGZkOCAgIHJjeDogZmZmZmZmZmY4MTAwMTNhYQooWEVOKSByZHg6IDAwMDAwMDAw
MDAwMDAwMDAgICByc2k6IDAwMDAwMDAwZGVhZGJlZWYgICByZGk6IDAwMDAwMDAwZGVhZGJlZWYK
KFhFTikgcmJwOiBmZmZmODgwMDI3ODZkZWUwICAgcnNwOiBmZmZmODgwMDI3ODZkZWM4ICAgcjg6
ICAwMDAwMDAwMDAwMDAwMDAwCihYRU4pIHI5OiAgMDAwMDAwMDAwMDAwMDAwMCAgIHIxMDogMDAw
MDAwMDAwMDAwMDAwMSAgIHIxMTogMDAwMDAwMDAwMDAwMDI0NgooWEVOKSByMTI6IGZmZmZmZmZm
ODFhYWZkYTAgICByMTM6IDAwMDAwMDAwMDAwMDAwMDEgICByMTQ6IDAwMDAwMDAwMDAwMDAwMDAK
KFhFTikgcjE1OiAwMDAwMDAwMDAwMDAwMDAwICAgY3IwOiAwMDAwMDAwMDgwMDUwMDNiICAgY3I0
OiAwMDAwMDAwMDAwMDAyNjYwCihYRU4pIGNyMzogMDAwMDAwMDE0Y2RiMjAwMCAgIGNyMjogMDAw
MDdmZDlhYTQyYTAwMAooWEVOKSBkczogMDAyYiAgIGVzOiAwMDJiICAgZnM6IDAwMDAgICBnczog
MDAwMCAgIHNzOiBlMDJiICAgY3M6IGUwMzMKKFhFTikgR3Vlc3Qgc3RhY2sgdHJhY2UgZnJvbSBy
c3A9ZmZmZjg4MDAyNzg2ZGVjODoKKFhFTikgICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMGZm
ZmZmZmZmIGZmZmZmZmZmODEwMGE1YzAgZmZmZjg4MDAyNzg2ZGYxMAooWEVOKSAgICBmZmZmZmZm
ZjgxMDFjNjYzIGZmZmY4ODAwMjc4NmRmZDggZmZmZmZmZmY4MWFhZmRhMCAwMDAwMDAwMDAwMDAw
MDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgZmZmZjg4MDAyNzg2ZGY0MCBmZmZmZmZmZjgx
MDEzMjM2IGZmZmZmZmZmODEwMGFkZTkKKFhFTikgICAgYTEwMjFlZjczOWZiMjdkMyAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgZmZmZjg4MDAyNzg2ZGY1MAooWEVOKSAgICBmZmZm
ZmZmZjgxNTYzNDM4IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMAooWEVOKSAgICAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAwMDAwMDAwMDAwMDAwMCBm
ZmZmODgwMDI3ODZkZjU4IDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgKioqIER1bXBpbmcgRG9tMCB2
Y3B1IzIgc3RhdGU6ICoqKgooWEVOKSBSSVA6ICAgIGUwMzM6WzxmZmZmZmZmZjgxMDAxM2FhPl0K
KFhFTikgUkZMQUdTOiAwMDAwMDAwMDAwMDAwMjQ2ICAgRU06IDAgICBDT05URVhUOiBwdiBndWVz
dAooWEVOKSByYXg6IDAwMDAwMDAwMDAwMDAwMDAgICByYng6IGZmZmY4ODAwMjc4NmZmZDggICBy
Y3g6IGZmZmZmZmZmODEwMDEzYWEKKFhFTikgcmR4OiAwMDAwMDAwMDAwMDAwMDAwICAgcnNpOiAw
MDAwMDAwMGRlYWRiZWVmICAgcmRpOiAwMDAwMDAwMGRlYWRiZWVmCihYRU4pIHJicDogZmZmZjg4
MDAyNzg2ZmVlMCAgIHJzcDogZmZmZjg4MDAyNzg2ZmVjOCAgIHI4OiAgMDAwMDAwMDAwMDAwMDAw
MAooWEVOKSByOTogIDAwMDAwMDAwMDAwMDAwMDAgICByMTA6IDAwMDAwMDAwMDAwMDAwMDEgICBy
MTE6IDAwMDAwMDAwMDAwMDAyNDYKKFhFTikgcjEyOiBmZmZmZmZmZjgxYWFmZGEwICAgcjEzOiAw
MDAwMDAwMDAwMDAwMDAyICAgcjE0OiAwMDAwMDAwMDAwMDAwMDAwCihYRU4pIHIxNTogMDAwMDAw
MDAwMDAwMDAwMCAgIGNyMDogMDAwMDAwMDA4MDA1MDAzYiAgIGNyNDogMDAwMDAwMDAwMDAwMjY2
MAooWEVOKSBjcjM6IDAwMDAwMDAxNGNkYWIwMDAgICBjcjI6IDAwMDAwMDAwMDFiOTAwNDgKKFhF
TikgZHM6IDAwMmIgICBlczogMDAyYiAgIGZzOiAwMDAwICAgZ3M6IDAwMDAgICBzczogZTAyYiAg
IGNzOiBlMDMzCihYRU4pIEd1ZXN0IHN0YWNrIHRyYWNlIGZyb20gcnNwPWZmZmY4ODAwMjc4NmZl
Yzg6CihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDBmZmZmZmZmZiBmZmZmZmZmZjgx
MDBhNWMwIGZmZmY4ODAwMjc4NmZmMTAKKFhFTikgICAgZmZmZmZmZmY4MTAxYzY2MyBmZmZmODgw
MDI3ODZmZmQ4IGZmZmZmZmZmODFhYWZkYTAgMDAwMDAwMDAwMDAwMDAwMAooWEVOKSAgICAwMDAw
MDAwMDAwMDAwMDAwIGZmZmY4ODAwMjc4NmZmNDAgZmZmZmZmZmY4MTAxMzIzNiBmZmZmZmZmZjgx
MDBhZGU5CihYRU4pICAgIDU3OTAyZTE4ZWUzMjEyZjkgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIGZmZmY4ODAwMjc4NmZmNTAKKFhFTikgICAgZmZmZmZmZmY4MTU2MzQzOCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMAooWEVOKSAgICAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMAooWEVOKSAg
ICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgZmZmZjg4MDAyNzg2ZmY1OCAw
MDAwMDAwMDAwMDAwMDAwCihYRU4pICoqKiBEdW1waW5nIERvbTAgdmNwdSMzIHN0YXRlOiAqKioK
KFhFTikgUklQOiAgICBlMDMzOls8ZmZmZmZmZmY4MTAwMTNhYT5dCihYRU4pIFJGTEFHUzogMDAw
MDAwMDAwMDAwMDI0NiAgIEVNOiAwICAgQ09OVEVYVDogcHYgZ3Vlc3QKKFhFTikgcmF4OiAwMDAw
MDAwMDAwMDAwMDAwICAgcmJ4OiBmZmZmODgwMDI3ODgxZmQ4ICAgcmN4OiBmZmZmZmZmZjgxMDAx
M2FhCihYRU4pIHJkeDogMDAwMDAwMDAwMDAwMDAwMCAgIHJzaTogMDAwMDAwMDBkZWFkYmVlZiAg
IHJkaTogMDAwMDAwMDBkZWFkYmVlZgooWEVOKSByYnA6IGZmZmY4ODAwMjc4ODFlZTAgICByc3A6
IGZmZmY4ODAwMjc4ODFlYzggICByODogIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgcjk6ICAwMDAw
MDAwMDAwMDAwMDAwICAgcjEwOiAwMDAwMDAwMDAwMDAwMDAxICAgcjExOiAwMDAwMDAwMDAwMDAw
MjQ2CihYRU4pIHIxMjogZmZmZmZmZmY4MWFhZmRhMCAgIHIxMzogMDAwMDAwMDAwMDAwMDAwMyAg
IHIxNDogMDAwMDAwMDAwMDAwMDAwMAooWEVOKSByMTU6IDAwMDAwMDAwMDAwMDAwMDAgICBjcjA6
IDAwMDAwMDAwODAwNTAwM2IgICBjcjQ6IDAwMDAwMDAwMDAwMDI2NjAKKFhFTikgY3IzOiAwMDAw
MDAwMTNkZDQ2MDAwICAgY3IyOiAwMDAwN2ZkMjA0MDBhNWEwCihYRU4pIGRzOiAwMDJiICAgZXM6
IDAwMmIgICBmczogMDAwMCAgIGdzOiAwMDAwICAgc3M6IGUwMmIgICBjczogZTAzMwooWEVOKSBH
dWVzdCBzdGFjayB0cmFjZSBmcm9tIHJzcD1mZmZmODgwMDI3ODgxZWM4OgooWEVOKSAgICAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwZmZmZmZmZmYgZmZmZmZmZmY4MTAwYTVjMCBmZmZmODgwMDI3
ODgxZjEwCihYRU4pICAgIGZmZmZmZmZmODEwMWM2NjMgZmZmZjg4MDAyNzg4MWZkOCBmZmZmZmZm
ZjgxYWFmZGEwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAwMDAwMDAwMDAwMDAwMCBmZmZm
ODgwMDI3ODgxZjQwIGZmZmZmZmZmODEwMTMyMzYgZmZmZmZmZmY4MTAwYWRlOQooWEVOKSAgICA2
N2IxMDQ0Y2M0YjdjMzkxIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCBmZmZmODgw
MDI3ODgxZjUwCihYRU4pICAgIGZmZmZmZmZmODE1NjM0MzggMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMAooWEVOKSAg
ICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMAooWEVO
KSAgICAwMDAwMDAwMDAwMDAwMDAwIGZmZmY4ODAwMjc4ODFmNTggMDAwMDAwMDAwMDAwMDAwMAoo
WEVOKSBbSDogZHVtcCBoZWFwIGluZm9dCihYRU4pICdIJyBwcmVzc2VkIC0+IGR1bXBpbmcgaGVh
cCBpbmZvIChub3ctMHgyODpBREMxODFEMikKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MF0gLT4g
MCBwYWdlcwooWEVOKSBoZWFwW25vZGU9MF1bem9uZT0xXSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBb
bm9kZT0wXVt6b25lPTJdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9M10gLT4g
MCBwYWdlcwooWEVOKSBoZWFwW25vZGU9MF1bem9uZT00XSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBb
bm9kZT0wXVt6b25lPTVdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9Nl0gLT4g
MCBwYWdlcwooWEVOKSBoZWFwW25vZGU9MF1bem9uZT03XSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBb
bm9kZT0wXVt6b25lPThdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9OV0gLT4g
MCBwYWdlcwooWEVOKSBoZWFwW25vZGU9MF1bem9uZT0xMF0gLT4gMCBwYWdlcwooWEVOKSBoZWFw
W25vZGU9MF1bem9uZT0xMV0gLT4gMCBwYWdlcwooWEVOKSBoZWFwW25vZGU9MF1bem9uZT0xMl0g
LT4gMCBwYWdlcwooWEVOKSBoZWFwW25vZGU9MF1bem9uZT0xM10gLT4gMCBwYWdlcwooWEVOKSBo
ZWFwW25vZGU9MF1bem9uZT0xNF0gLT4gMTYxMjggcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pv
bmU9MTVdIC0+IDMyNzY4IHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTE2XSAtPiA2NTUz
NiBwYWdlcwooWEVOKSBoZWFwW25vZGU9MF1bem9uZT0xN10gLT4gMTMwNTU5IHBhZ2VzCihYRU4p
IGhlYXBbbm9kZT0wXVt6b25lPTE4XSAtPiAyNjIxNDMgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBd
W3pvbmU9MTldIC0+IDE3Mjc5NyBwYWdlcwooWEVOKSBoZWFwW25vZGU9MF1bem9uZT0yMF0gLT4g
MTM0MjY1IHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTIxXSAtPiAwIHBhZ2VzCihYRU4p
IGhlYXBbbm9kZT0wXVt6b25lPTIyXSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0wXVt6b25l
PTIzXSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTI0XSAtPiAwIHBhZ2VzCihY
RU4pIGhlYXBbbm9kZT0wXVt6b25lPTI1XSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0wXVt6
b25lPTI2XSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTI3XSAtPiAwIHBhZ2Vz
CihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTI4XSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0w
XVt6b25lPTI5XSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTMwXSAtPiAwIHBh
Z2VzCihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTMxXSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9k
ZT0wXVt6b25lPTMyXSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTMzXSAtPiAw
IHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTM0XSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBb
bm9kZT0wXVt6b25lPTM1XSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTM2XSAt
PiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTM3XSAtPiAwIHBhZ2VzCihYRU4pIGhl
YXBbbm9kZT0wXVt6b25lPTM4XSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTM5
XSAtPiAwIHBhZ2VzCihYRU4pIFtJOiBkdW1wIEhWTSBpcnEgaW5mb10KKFhFTikgJ0knIHByZXNz
ZWQgLT4gZHVtcGluZyBIVk0gaXJxIGluZm8KKFhFTikgW006IGR1bXAgTVNJIHN0YXRlXQooWEVO
KSBQQ0ktTVNJIGludGVycnVwdCBpbmZvcm1hdGlvbjoKKFhFTikgIE1TSSAgICAyNiB2ZWM9NzEg
bG93ZXN0ICBlZGdlICAgYXNzZXJ0ICBsb2cgbG93ZXN0IGRlc3Q9MDAwMDAwMDEgbWFzaz0wLzEv
LTEKKFhFTikgIE1TSSAgICAyNyB2ZWM9MDAgIGZpeGVkICBlZGdlIGRlYXNzZXJ0IHBoeXMgbG93
ZXN0IGRlc3Q9MDAwMDAwMDEgbWFzaz0wLzEvLTEKKFhFTikgIE1TSSAgICAyOCB2ZWM9MjkgbG93
ZXN0ICBlZGdlICAgYXNzZXJ0ICBsb2cgbG93ZXN0IGRlc3Q9MDAwMDAwMDEgbWFzaz0wLzEvLTEK
KFhFTikgIE1TSSAgICAyOSB2ZWM9NzkgbG93ZXN0ICBlZGdlICAgYXNzZXJ0ICBsb2cgbG93ZXN0
IGRlc3Q9MDAwMDAwMDEgbWFzaz0wLzEvLTEKKFhFTikgIE1TSSAgICAzMCB2ZWM9ODEgbG93ZXN0
ICBlZGdlICAgYXNzZXJ0ICBsb2cgbG93ZXN0IGRlc3Q9MDAwMDAwMDEgbWFzaz0wLzEvLTEKKFhF
TikgIE1TSSAgICAzMSB2ZWM9OTkgbG93ZXN0ICBlZGdlICAgYXNzZXJ0ICBsb2cgbG93ZXN0IGRl
c3Q9MDAwMDAwMDEgbWFzaz0wLzEvLTEKKFhFTikgW1E6IGR1bXAgUENJIGRldmljZXNdCihYRU4p
ID09PT0gUENJIGRldmljZXMgPT09PQooWEVOKSA9PT09IHNlZ21lbnQgMDAwMCA9PT09CihYRU4p
IDAwMDA6MDU6MDEuMCAtIGRvbSAwICAgLSBNU0lzIDwgPgooWEVOKSAwMDAwOjA0OjAwLjAgLSBk
b20gMCAgIC0gTVNJcyA8ID4KKFhFTikgMDAwMDowMzowMC4wIC0gZG9tIDAgICAtIE1TSXMgPCA+
CihYRU4pIDAwMDA6MDI6MDAuMCAtIGRvbSAwICAgLSBNU0lzIDwgMzAgPgooWEVOKSAwMDAwOjAw
OjFmLjMgLSBkb20gMCAgIC0gTVNJcyA8ID4KKFhFTikgMDAwMDowMDoxZi4yIC0gZG9tIDAgICAt
IE1TSXMgPCAyNyA+CihYRU4pIDAwMDA6MDA6MWYuMCAtIGRvbSAwICAgLSBNU0lzIDwgPgooWEVO
KSAwMDAwOjAwOjFlLjAgLSBkb20gMCAgIC0gTVNJcyA8ID4KKFhFTikgMDAwMDowMDoxZC4wIC0g
ZG9tIDAgICAtIE1TSXMgPCA+CihYRU4pIDAwMDA6MDA6MWMuNyAtIGRvbSAwICAgLSBNU0lzIDwg
PgooWEVOKSAwMDAwOjAwOjFjLjYgLSBkb20gMCAgIC0gTVNJcyA8ID4KKFhFTikgMDAwMDowMDox
Yy4wIC0gZG9tIDAgICAtIE1TSXMgPCA+CihYRU4pIDAwMDA6MDA6MWIuMCAtIGRvbSAwICAgLSBN
U0lzIDwgMjYgPgooWEVOKSAwMDAwOjAwOjFhLjAgLSBkb20gMCAgIC0gTVNJcyA8ID4KKFhFTikg
MDAwMDowMDoxOS4wIC0gZG9tIDAgICAtIE1TSXMgPCAzMSA+CihYRU4pIDAwMDA6MDA6MTYuMyAt
IGRvbSAwICAgLSBNU0lzIDwgPgooWEVOKSAwMDAwOjAwOjE2LjAgLSBkb20gMCAgIC0gTVNJcyA8
ID4KKFhFTikgMDAwMDowMDoxNC4wIC0gZG9tIDAgICAtIE1TSXMgPCAyOSA+CihYRU4pIDAwMDA6
MDA6MDIuMCAtIGRvbSAwICAgLSBNU0lzIDwgMjggPgooWEVOKSAwMDAwOjAwOjAwLjAgLSBkb20g
MCAgIC0gTVNJcyA8ID4KKFhFTikgW1Y6IGR1bXAgaW9tbXUgaW5mb10KKFhFTikgCihYRU4pIGlv
bW11IDA6IG5yX3B0X2xldmVscyA9IDMuCihYRU4pICAgUXVldWVkIEludmFsaWRhdGlvbjogc3Vw
cG9ydGVkIGFuZCBlbmFibGVkLgooWEVOKSAgIEludGVycnVwdCBSZW1hcHBpbmc6IHN1cHBvcnRl
ZCBhbmQgZW5hYmxlZC4KKFhFTikgICBJbnRlcnJ1cHQgcmVtYXBwaW5nIHRhYmxlIChucl9lbnRy
eT0weDEwMDAwLiBPbmx5IGR1bXAgUD0xIGVudHJpZXMgaGVyZSk6CihYRU4pICAgICAgICBTVlQg
IFNRICAgU0lEICAgICAgRFNUICBWICBBVkwgRExNIFRNIFJIIERNIEZQRCBQCihYRU4pICAgMDAw
MDogIDEgICAwICAwMDEwIDAwMDAwMDAxIDI5ICAgIDAgICAxICAwICAxICAxICAgMCAxCihYRU4p
IAooWEVOKSBpb21tdSAxOiBucl9wdF9sZXZlbHMgPSAzLgooWEVOKSAgIFF1ZXVlZCBJbnZhbGlk
YXRpb246IHN1cHBvcnRlZCBhbmQgZW5hYmxlZC4KKFhFTikgICBJbnRlcnJ1cHQgUmVtYXBwaW5n
OiBzdXBwb3J0ZWQgYW5kIGVuYWJsZWQuCihYRU4pICAgSW50ZXJydXB0IHJlbWFwcGluZyB0YWJs
ZSAobnJfZW50cnk9MHgxMDAwMC4gT25seSBkdW1wIFA9MSBlbnRyaWVzIGhlcmUpOgooWEVOKSAg
ICAgICAgU1ZUICBTUSAgIFNJRCAgICAgIERTVCAgViAgQVZMIERMTSBUTSBSSCBETSBGUEQgUAoo
WEVOKSAgIDAwMDA6ICAxICAgMCAgZjBmOCAwMDAwMDAwMSAzOCAgICAwICAgMSAgMCAgMSAgMSAg
IDAgMQooWEVOKSAgIDAwMDE6ICAxICAgMCAgZjBmOCAwMDAwMDAwMSBmMCAgICAwICAgMSAgMCAg
MSAgMSAgIDAgMQooWEVOKSAgIDAwMDI6ICAxICAgMCAgZjBmOCAwMDAwMDAwMSA0MCAgICAwICAg
MSAgMCAgMSAgMSAgIDAgMQooWEVOKSAgIDAwMDM6ICAxICAgMCAgZjBmOCAwMDAwMDAwMSA0OCAg
ICAwICAgMSAgMCAgMSAgMSAgIDAgMQooWEVOKSAgIDAwMDQ6ICAxICAgMCAgZjBmOCAwMDAwMDAw
MSA1MCAgICAwICAgMSAgMCAgMSAgMSAgIDAgMQooWEVOKSAgIDAwMDU6ICAxICAgMCAgZjBmOCAw
MDAwMDAwMSA1OCAgICAwICAgMSAgMCAgMSAgMSAgIDAgMQooWEVOKSAgIDAwMDY6ICAxICAgMCAg
ZjBmOCAwMDAwMDAwMSA2MCAgICAwICAgMSAgMCAgMSAgMSAgIDAgMQooWEVOKSAgIDAwMDc6ICAx
ICAgMCAgZjBmOCAwMDAwMDAwMSA2OCAgICAwICAgMSAgMCAgMSAgMSAgIDAgMQooWEVOKSAgIDAw
MDg6ICAxICAgMCAgZjBmOCAwMDAwMDAwMSA3MCAgICAwICAgMSAgMSAgMSAgMSAgIDAgMQooWEVO
KSAgIDAwMDk6ICAxICAgMCAgZjBmOCAwMDAwMDAwMSA3OCAgICAwICAgMSAgMCAgMSAgMSAgIDAg
MQooWEVOKSAgIDAwMGE6ICAxICAgMCAgZjBmOCAwMDAwMDAwMSA4OCAgICAwICAgMSAgMCAgMSAg
MSAgIDAgMQooWEVOKSAgIDAwMGI6ICAxICAgMCAgZjBmOCAwMDAwMDAwMSA5MCAgICAwICAgMSAg
MCAgMSAgMSAgIDAgMQooWEVOKSAgIDAwMGM6ICAxICAgMCAgZjBmOCAwMDAwMDAwMSA5OCAgICAw
ICAgMSAgMCAgMSAgMSAgIDAgMQooWEVOKSAgIDAwMGQ6ICAxICAgMCAgZjBmOCAwMDAwMDAwMSBh
MCAgICAwICAgMSAgMCAgMSAgMSAgIDAgMQooWEVOKSAgIDAwMGU6ICAxICAgMCAgZjBmOCAwMDAw
MDAwMSBhOCAgICAwICAgMSAgMCAgMSAgMSAgIDAgMQooWEVOKSAgIDAwMGY6ICAxICAgMCAgZjBm
OCAwMDAwMDAwMSBiMCAgICAwICAgMSAgMSAgMSAgMSAgIDAgMQooWEVOKSAgIDAwMTA6ICAxICAg
MCAgZjBmOCAwMDAwMDAwMSBiOCAgICAwICAgMSAgMSAgMSAgMSAgIDAgMQooWEVOKSAgIDAwMTE6
ICAxICAgMCAgZjBmOCAwMDAwMDAwMSBjMCAgICAwICAgMSAgMSAgMSAgMSAgIDAgMQooWEVOKSAg
IDAwMTI6ICAxICAgMCAgZjBmOCAwMDAwMDAwMSBjOCAgICAwICAgMSAgMSAgMSAgMSAgIDAgMQoo
WEVOKSAgIDAwMTM6ICAxICAgMCAgZjBmOCAwMDAwMDAwMSBkMCAgICAwICAgMSAgMSAgMSAgMSAg
IDAgMQooWEVOKSAgIDAwMTQ6ICAxICAgMCAgMDBkOCAwMDAwMDAwMSA3MSAgICAwICAgMSAgMCAg
MSAgMSAgIDAgMQooWEVOKSAgIDAwMTU6ICAxICAgMCAgMDBmYSAwMDAwMDAwMSAyMSAgICAwICAg
MSAgMCAgMSAgMSAgIDAgMQooWEVOKSAgIDAwMTY6ICAxICAgMCAgZjBmOCAwMDAwMDAwMSAzMSAg
ICAwICAgMSAgMSAgMSAgMSAgIDAgMQooWEVOKSAgIDAwMTc6ICAxICAgMCAgMDBhMCAwMDAwMDAw
MSA3OSAgICAwICAgMSAgMCAgMSAgMSAgIDAgMQooWEVOKSAgIDAwMTg6ICAxICAgMCAgMDIwMCAw
MDAwMDAwMSA4MSAgICAwICAgMSAgMCAgMSAgMSAgIDAgMQooWEVOKSAgIDAwMTk6ICAxICAgMCAg
MDBjOCAwMDAwMDAwMSA5OSAgICAwICAgMSAgMCAgMSAgMSAgIDAgMQooWEVOKSAKKFhFTikgUmVk
aXJlY3Rpb24gdGFibGUgb2YgSU9BUElDIDA6CihYRU4pICAgI2VudHJ5IElEWCBGTVQgTUFTSyBU
UklHIElSUiBQT0wgU1RBVCBERUxJICBWRUNUT1IKKFhFTikgICAgMDE6ICAwMDAwICAgMSAgICAw
ICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAgMzgKKFhFTikgICAgMDI6ICAwMDAxICAgMSAgICAw
ICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAgZjAKKFhFTikgICAgMDM6ICAwMDAyICAgMSAgICAw
ICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAgNDAKKFhFTikgICAgMDQ6ICAwMDAzICAgMSAgICAw
ICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAgNDgKKFhFTikgICAgMDU6ICAwMDA0ICAgMSAgICAw
ICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAgNTAKKFhFTikgICAgMDY6ICAwMDA1ICAgMSAgICAw
ICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAgNTgKKFhFTikgICAgMDc6ICAwMDA2ICAgMSAgICAw
ICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAgNjAKKFhFTikgICAgMDg6ICAwMDA3ICAgMSAgICAw
ICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAgNjgKKFhFTikgICAgMDk6ICAwMDA4ICAgMSAgICAw
ICAgMSAgIDAgICAwICAgIDAgICAgMCAgICAgNzAKKFhFTikgICAgMGE6ICAwMDA5ICAgMSAgICAw
ICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAgNzgKKFhFTikgICAgMGI6ICAwMDBhICAgMSAgICAw
ICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAgODgKKFhFTikgICAgMGM6ICAwMDBiICAgMSAgICAw
ICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAgOTAKKFhFTikgICAgMGQ6ICAwMDBjICAgMSAgICAx
ICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAgOTgKKFhFTikgICAgMGU6ICAwMDBkICAgMSAgICAw
ICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAgYTAKKFhFTikgICAgMGY6ICAwMDBlICAgMSAgICAw
ICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAgYTgKKFhFTikgICAgMTA6ICAwMDBmICAgMSAgICAw
ICAgMSAgIDAgICAxICAgIDAgICAgMCAgICAgYjAKKFhFTikgICAgMTI6ICAwMDEwICAgMSAgICAx
ICAgMSAgIDAgICAxICAgIDAgICAgMCAgICAgYjgKKFhFTikgICAgMTM6ICAwMDExICAgMSAgICAx
ICAgMSAgIDAgICAxICAgIDAgICAgMCAgICAgYzAKKFhFTikgICAgMTQ6ICAwMDE2ICAgMSAgICAx
ICAgMSAgIDAgICAxICAgIDAgICAgMCAgICAgMzEKKFhFTikgICAgMTY6ICAwMDEzICAgMSAgICAx
ICAgMSAgIDAgICAxICAgIDAgICAgMCAgICAgZDAKKFhFTikgICAgMTc6ICAwMDEyICAgMSAgICAw
ICAgMSAgIDAgICAxICAgIDAgICAgMCAgICAgYzgKKFhFTikgW2E6IGR1bXAgdGltZXIgcXVldWVz
XQooWEVOKSBEdW1waW5nIHRpbWVyIHF1ZXVlczoKKFhFTikgQ1BVMDA6CihYRU4pICAgZXg9ICAg
LTE2ODB1cyB0aW1lcj1mZmZmODJjNDgwMmUyNWM4IGNiPWZmZmY4MmM0ODAxM2Q3NTcoZmZmZjgy
YzQ4MDI3MTgwMCkgbnMxNjU1MF9wb2xsKzB4MC8weDMzCihYRU4pICAgZXg9ICAgIDczMTl1cyB0
aW1lcj1mZmZmODMwMTQ4OTlhMWI4IGNiPWZmZmY4MmM0ODAxMTlkNzIoZmZmZjgzMDE0ODk5YTE5
MCkgY3NjaGVkX2FjY3QrMHgwLzB4NDJhCihYRU4pICAgZXg9MTI1MTM2NDc3dXMgdGltZXI9ZmZm
ZjgyYzQ4MDJmZTI4MCBjYj1mZmZmODJjNDgwMTgwN2MyKDAwMDAwMDAwMDAwMDAwMDApIHBsdF9v
dmVyZmxvdysweDAvMHgxMzEKKFhFTikgICBleD0gOTI2MDI5M3VzIHRpbWVyPWZmZmY4MmM0ODAz
MDA1ODAgY2I9ZmZmZjgyYzQ4MDFhODg1MCgwMDAwMDAwMDAwMDAwMDAwKSBtY2Vfd29ya19mbisw
eDAvMHhhOQooWEVOKSAgIGV4PSAgICA3MzE5dXMgdGltZXI9ZmZmZjgzMDE0ODk5YWVhOCBjYj1m
ZmZmODJjNDgwMTFhYWYwKDAwMDAwMDAwMDAwMDAwMDApIGNzY2hlZF90aWNrKzB4MC8weDMxNAoo
WEVOKSBDUFUwMToKKFhFTikgICBleD0gICA2MTM4MHVzIHRpbWVyPWZmZmY4MzAxMzZhNDlhMjgg
Y2I9ZmZmZjgyYzQ4MDExYWFmMCgwMDAwMDAwMDAwMDAwMDAxKSBjc2NoZWRfdGljaysweDAvMHgz
MTQKKFhFTikgICBleD0gIDMwMTY0MHVzIHRpbWVyPWZmZmY4MzAwYTgzZmMwNjAgY2I9ZmZmZjgy
YzQ4MDEyMWM2YihmZmZmODMwMGE4M2ZjMDAwKSB2Y3B1X3NpbmdsZXNob3RfdGltZXJfZm4rMHgw
LzB4YgooWEVOKSBDUFUwMjoKKFhFTikgICBleD0gICA4Mjk0NXVzIHRpbWVyPWZmZmY4MzAxMWM5
ZTlmNDggY2I9ZmZmZjgyYzQ4MDExYWFmMCgwMDAwMDAwMDAwMDAwMDAyKSBjc2NoZWRfdGljaysw
eDAvMHgzMTQKKFhFTikgICBleD0gIDY2NjAwMnVzIHRpbWVyPWZmZmY4MzAwYWE1ODMwNjAgY2I9
ZmZmZjgyYzQ4MDEyMWM2YihmZmZmODMwMGFhNTgzMDAwKSB2Y3B1X3NpbmdsZXNob3RfdGltZXJf
Zm4rMHgwLzB4YgooWEVOKSBDUFUwMzoKKFhFTikgICBleD0gIDEwMzI2MnVzIHRpbWVyPWZmZmY4
MzAxMWM5ZTlhMzggY2I9ZmZmZjgyYzQ4MDExYWFmMCgwMDAwMDAwMDAwMDAwMDAzKSBjc2NoZWRf
dGljaysweDAvMHgzMTQKKFhFTikgICBleD0gIDE3ODkzMHVzIHRpbWVyPWZmZmY4MzAwYTgzZmQw
NjAgY2I9ZmZmZjgyYzQ4MDEyMWM2YihmZmZmODMwMGE4M2ZkMDAwKSB2Y3B1X3NpbmdsZXNob3Rf
dGltZXJfZm4rMHgwLzB4YgooWEVOKSBbYzogZHVtcCBBQ1BJIEN4IHN0cnVjdHVyZXNdCihYRU4p
ICdjJyBwcmVzc2VkIC0+IHByaW50aW5nIEFDUEkgQ3ggc3RydWN0dXJlcwooWEVOKSA9PWNwdTA9
PQooWEVOKSBhY3RpdmUgc3RhdGU6CQlDMjU1CihYRU4pIG1heF9jc3RhdGU6CQlDNwooWEVOKSBz
dGF0ZXM6CihYRU4pICAgICBDMToJdHlwZVtDMV0gbGF0ZW5jeVswMDBdIHVzYWdlWzAwMDAwMDAw
XSBtZXRob2RbIEhBTFRdIGR1cmF0aW9uWzBdCihYRU4pICAgICBDMDoJdXNhZ2VbMDAwMDAwMDBd
IGR1cmF0aW9uWzE3NTQ3NDI2OTk2NF0KKFhFTikgUEMyWzBdIFBDM1swXSBQQzZbMF0gUEM3WzBd
CihYRU4pIENDM1swXSBDQzZbMF0gQ0M3WzBdCihYRU4pID09Y3B1MT09CihYRU4pIGFjdGl2ZSBz
dGF0ZToJCUMyNTUKKFhFTikgbWF4X2NzdGF0ZToJCUM3CihYRU4pIHN0YXRlczoKKFhFTikgICAg
IEMxOgl0eXBlW0MxXSBsYXRlbmN5WzAwMF0gdXNhZ2VbMDAwMDAwMDBdIG1ldGhvZFsgSEFMVF0g
ZHVyYXRpb25bMF0KKFhFTikgICAgIEMwOgl1c2FnZVswMDAwMDAwMF0gZHVyYXRpb25bMTc1NDk5
MDYwNjUwXQooWEVOKSBQQzJbMF0gUEMzWzBdIFBDNlswXSBQQzdbMF0KKFhFTikgQ0MzWzBdIEND
NlswXSBDQzdbMF0KKFhFTikgPT1jcHUyPT0KKFhFTikgYWN0aXZlIHN0YXRlOgkJQzI1NQooWEVO
KSBtYXhfY3N0YXRlOgkJQzcKKFhFTikgc3RhdGVzOgooWEVOKSAgICAgQzE6CXR5cGVbQzFdIGxh
dGVuY3lbMDAwXSB1c2FnZVswMDAwMDAwMF0gbWV0aG9kWyBIQUxUXSBkdXJhdGlvblswXQooWEVO
KSAgICAgQzA6CXVzYWdlWzAwMDAwMDAwXSBkdXJhdGlvblsxNzU1MjM4NDk5MzNdCihYRU4pIFBD
MlswXSBQQzNbMF0gUEM2WzBdIFBDN1swXQooWEVOKSBDQzNbMF0gQ0M2WzBdIENDN1swXQooWEVO
KSA9PWNwdTM9PQooWEVOKSBhY3RpdmUgc3RhdGU6CQlDMjU1CihYRU4pIG1heF9jc3RhdGU6CQlD
NwooWEVOKSBzdGF0ZXM6CihYRU4pICAgICBDMToJdHlwZVtDMV0gbGF0ZW5jeVswMDBdIHVzYWdl
WzAwMDAwMDAwXSBtZXRob2RbIEhBTFRdIGR1cmF0aW9uWzBdCihYRU4pICAgICBDMDoJdXNhZ2Vb
MDAwMDAwMDBdIGR1cmF0aW9uWzE3NTU0ODYzOTk2NF0KKFhFTikgUEMyWzBdIFBDM1swXSBQQzZb
MF0gUEM3WzBdCihYRU4pIENDM1swXSBDQzZbMF0gQ0M3WzBdCihYRU4pIFtlOiBkdW1wIGV2dGNo
biBpbmZvXQooWEVOKSAnZScgcHJlc3NlZCAtPiBkdW1waW5nIGV2ZW50LWNoYW5uZWwgaW5mbwoo
WEVOKSBFdmVudCBjaGFubmVsIGluZm9ybWF0aW9uIGZvciBkb21haW4gMDoKKFhFTikgUG9sbGlu
ZyB2Q1BVczoge30KKFhFTikgICAgIHBvcnQgW3AvbV0KKFhFTikgICAgICAgIDEgWzEvMF06IHM9
NSBuPTAgeD0wIHY9MAooWEVOKSAgICAgICAgMiBbMS8xXTogcz02IG49MCB4PTAKKFhFTikgICAg
ICAgIDMgWzEvMF06IHM9NiBuPTAgeD0wCihYRU4pICAgICAgICA0IFswLzBdOiBzPTYgbj0wIHg9
MAooWEVOKSAgICAgICAgNSBbMC8wXTogcz01IG49MCB4PTAgdj0xCihYRU4pICAgICAgICA2IFsw
LzBdOiBzPTYgbj0wIHg9MAooWEVOKSAgICAgICAgNyBbMC8wXTogcz01IG49MSB4PTAgdj0wCihY
RU4pICAgICAgICA4IFswLzFdOiBzPTYgbj0xIHg9MAooWEVOKSAgICAgICAgOSBbMC8wXTogcz02
IG49MSB4PTAKKFhFTikgICAgICAgMTAgWzAvMF06IHM9NiBuPTEgeD0wCihYRU4pICAgICAgIDEx
IFswLzBdOiBzPTUgbj0xIHg9MCB2PTEKKFhFTikgICAgICAgMTIgWzAvMF06IHM9NiBuPTEgeD0w
CihYRU4pICAgICAgIDEzIFswLzBdOiBzPTUgbj0yIHg9MCB2PTAKKFhFTikgICAgICAgMTQgWzEv
MV06IHM9NiBuPTIgeD0wCihYRU4pICAgICAgIDE1IFswLzBdOiBzPTYgbj0yIHg9MAooWEVOKSAg
ICAgICAxNiBbMC8wXTogcz02IG49MiB4PTAKKFhFTikgICAgICAgMTcgWzAvMF06IHM9NSBuPTIg
eD0wIHY9MQooWEVOKSAgICAgICAxOCBbMC8wXTogcz02IG49MiB4PTAKKFhFTikgICAgICAgMTkg
WzAvMF06IHM9NSBuPTMgeD0wIHY9MAooWEVOKSAgICAgICAyMCBbMS8xXTogcz02IG49MyB4PTAK
KFhFTikgICAgICAgMjEgWzAvMF06IHM9NiBuPTMgeD0wCihYRU4pICAgICAgIDIyIFswLzBdOiBz
PTYgbj0zIHg9MAooWEVOKSAgICAgICAyMyBbMC8wXTogcz01IG49MyB4PTAgdj0xCihYRU4pICAg
ICAgIDI0IFswLzBdOiBzPTYgbj0zIHg9MAooWEVOKSAgICAgICAyNSBbMC8wXTogcz0zIG49MCB4
PTAgZD0wIHA9MzYKKFhFTikgICAgICAgMjYgWzAvMF06IHM9NCBuPTAgeD0wIHA9OSBpPTkKKFhF
TikgICAgICAgMjcgWzAvMV06IHM9NSBuPTAgeD0wIHY9MgooWEVOKSAgICAgICAyOCBbMC8wXTog
cz00IG49MCB4PTAgcD04IGk9OAooWEVOKSAgICAgICAyOSBbMC8wXTogcz00IG49MCB4PTAgcD0y
NzkgaT0yNgooWEVOKSAgICAgICAzMCBbMC8wXTogcz00IG49MCB4PTAgcD0yNzcgaT0yOAooWEVO
KSAgICAgICAzMSBbMC8wXTogcz00IG49MCB4PTAgcD0xNiBpPTE2CihYRU4pICAgICAgIDMyIFsw
LzBdOiBzPTQgbj0wIHg9MCBwPTI3OCBpPTI3CihYRU4pICAgICAgIDMzIFswLzBdOiBzPTQgbj0w
IHg9MCBwPTIzIGk9MjMKKFhFTikgICAgICAgMzQgWzAvMF06IHM9NCBuPTAgeD0wIHA9Mjc2IGk9
MjkKKFhFTikgICAgICAgMzUgWzEvMF06IHM9NCBuPTAgeD0wIHA9Mjc1IGk9MzAKKFhFTikgICAg
ICAgMzYgWzAvMF06IHM9MyBuPTAgeD0wIGQ9MCBwPTI1CihYRU4pICAgICAgIDM3IFswLzBdOiBz
PTUgbj0wIHg9MCB2PTMKKFhFTikgICAgICAgMzggWzEvMF06IHM9NCBuPTAgeD0wIHA9Mjc0IGk9
MzEKKFhFTikgW2c6IHByaW50IGdyYW50IHRhYmxlIHVzYWdlXQooWEVOKSBnbnR0YWJfdXNhZ2Vf
cHJpbnRfYWxsIFsga2V5ICdnJyBwcmVzc2VkCihYRU4pICAgICAgIC0tLS0tLS0tIGFjdGl2ZSAt
LS0tLS0tLSAgICAgICAtLS0tLS0tLSBzaGFyZWQgLS0tLS0tLS0KKFhFTikgW3JlZl0gbG9jYWxk
b20gbWZuICAgICAgcGluICAgICAgICAgIGxvY2FsZG9tIGdtZm4gICAgIGZsYWdzCihYRU4pIGdy
YW50LXRhYmxlIGZvciByZW1vdGUgZG9tYWluOiAgICAwIC4uLiBubyBhY3RpdmUgZ3JhbnQgdGFi
bGUgZW50cmllcwooWEVOKSBnbnR0YWJfdXNhZ2VfcHJpbnRfYWxsIF0gZG9uZQooWEVOKSBbaTog
ZHVtcCBpbnRlcnJ1cHQgYmluZGluZ3NdCihYRU4pIEd1ZXN0IGludGVycnVwdCBpbmZvcm1hdGlv
bjoKKFhFTikgICAgSVJROiAgIDAgYWZmaW5pdHk6MDAwMSB2ZWM6ZjAgdHlwZT1JTy1BUElDLWVk
Z2UgICAgc3RhdHVzPTAwMDAwMDAwIG1hcHBlZCwgdW5ib3VuZAooWEVOKSAgICBJUlE6ICAgMSBh
ZmZpbml0eTowMDAxIHZlYzozOCB0eXBlPUlPLUFQSUMtZWRnZSAgICBzdGF0dXM9MDAwMDAwMDIg
bWFwcGVkLCB1bmJvdW5kCihYRU4pICAgIElSUTogICAyIGFmZmluaXR5OmZmZmYgdmVjOmUyIHR5
cGU9WFQtUElDICAgICAgICAgIHN0YXR1cz0wMDAwMDAwMCBtYXBwZWQsIHVuYm91bmQKKFhFTikg
ICAgSVJROiAgIDMgYWZmaW5pdHk6MDAwMSB2ZWM6NDAgdHlwZT1JTy1BUElDLWVkZ2UgICAgc3Rh
dHVzPTAwMDAwMDAyIG1hcHBlZCwgdW5ib3VuZAooWEVOKSAgICBJUlE6ICAgNCBhZmZpbml0eTow
MDAxIHZlYzo0OCB0eXBlPUlPLUFQSUMtZWRnZSAgICBzdGF0dXM9MDAwMDAwMDIgbWFwcGVkLCB1
bmJvdW5kCihYRU4pICAgIElSUTogICA1IGFmZmluaXR5OjAwMDEgdmVjOjUwIHR5cGU9SU8tQVBJ
Qy1lZGdlICAgIHN0YXR1cz0wMDAwMDAwMiBtYXBwZWQsIHVuYm91bmQKKFhFTikgICAgSVJROiAg
IDYgYWZmaW5pdHk6MDAwMSB2ZWM6NTggdHlwZT1JTy1BUElDLWVkZ2UgICAgc3RhdHVzPTAwMDAw
MDAyIG1hcHBlZCwgdW5ib3VuZAooWEVOKSAgICBJUlE6ICAgNyBhZmZpbml0eTowMDAxIHZlYzo2
MCB0eXBlPUlPLUFQSUMtZWRnZSAgICBzdGF0dXM9MDAwMDAwMDIgbWFwcGVkLCB1bmJvdW5kCihY
RU4pICAgIElSUTogICA4IGFmZmluaXR5OjAwMDEgdmVjOjY4IHR5cGU9SU8tQVBJQy1lZGdlICAg
IHN0YXR1cz0wMDAwMDAxMCBpbi1mbGlnaHQ9MCBkb21haW4tbGlzdD0wOiAgOCgtUy0tKSwKKFhF
TikgICAgSVJROiAgIDkgYWZmaW5pdHk6MDAwMSB2ZWM6NzAgdHlwZT1JTy1BUElDLWxldmVsICAg
c3RhdHVzPTAwMDAwMDEwIGluLWZsaWdodD0wIGRvbWFpbi1saXN0PTA6ICA5KC1TLS0pLAooWEVO
KSAgICBJUlE6ICAxMCBhZmZpbml0eTowMDAxIHZlYzo3OCB0eXBlPUlPLUFQSUMtZWRnZSAgICBz
dGF0dXM9MDAwMDAwMDIgbWFwcGVkLCB1bmJvdW5kCihYRU4pICAgIElSUTogIDExIGFmZmluaXR5
OjAwMDEgdmVjOjg4IHR5cGU9SU8tQVBJQy1lZGdlICAgIHN0YXR1cz0wMDAwMDAwMiBtYXBwZWQs
IHVuYm91bmQKKFhFTikgICAgSVJROiAgMTIgYWZmaW5pdHk6MDAwMSB2ZWM6OTAgdHlwZT1JTy1B
UElDLWVkZ2UgICAgc3RhdHVzPTAwMDAwMDAyIG1hcHBlZCwgdW5ib3VuZAooWEVOKSAgICBJUlE6
ICAxMyBhZmZpbml0eTowMDBmIHZlYzo5OCB0eXBlPUlPLUFQSUMtZWRnZSAgICBzdGF0dXM9MDAw
MDAwMDIgbWFwcGVkLCB1bmJvdW5kCihYRU4pICAgIElSUTogIDE0IGFmZmluaXR5OjAwMDEgdmVj
OmEwIHR5cGU9SU8tQVBJQy1lZGdlICAgIHN0YXR1cz0wMDAwMDAwMiBtYXBwZWQsIHVuYm91bmQK
KFhFTikgICAgSVJROiAgMTUgYWZmaW5pdHk6MDAwMSB2ZWM6YTggdHlwZT1JTy1BUElDLWVkZ2Ug
ICAgc3RhdHVzPTAwMDAwMDAyIG1hcHBlZCwgdW5ib3VuZAooWEVOKSAgICBJUlE6ICAxNiBhZmZp
bml0eTowMDAxIHZlYzpiMCB0eXBlPUlPLUFQSUMtbGV2ZWwgICBzdGF0dXM9MDAwMDAwMTAgaW4t
ZmxpZ2h0PTAgZG9tYWluLWxpc3Q9MDogMTYoLVMtLSksCihYRU4pICAgIElSUTogIDE4IGFmZmlu
aXR5OjAwMGYgdmVjOmI4IHR5cGU9SU8tQVBJQy1sZXZlbCAgIHN0YXR1cz0wMDAwMDAwMiBtYXBw
ZWQsIHVuYm91bmQKKFhFTikgICAgSVJROiAgMTkgYWZmaW5pdHk6MDAwMSB2ZWM6YzAgdHlwZT1J
Ty1BUElDLWxldmVsICAgc3RhdHVzPTAwMDAwMDAyIG1hcHBlZCwgdW5ib3VuZAooWEVOKSAgICBJ
UlE6ICAyMCBhZmZpbml0eTowMDBmIHZlYzozMSB0eXBlPUlPLUFQSUMtbGV2ZWwgICBzdGF0dXM9
MDAwMDAwMDIgbWFwcGVkLCB1bmJvdW5kCihYRU4pICAgIElSUTogIDIyIGFmZmluaXR5OjAwMDEg
dmVjOmQwIHR5cGU9SU8tQVBJQy1sZXZlbCAgIHN0YXR1cz0wMDAwMDAwMiBtYXBwZWQsIHVuYm91
bmQKKFhFTikgICAgSVJROiAgMjMgYWZmaW5pdHk6MDAwMSB2ZWM6YzggdHlwZT1JTy1BUElDLWxl
dmVsICAgc3RhdHVzPTAwMDAwMDEwIGluLWZsaWdodD0wIGRvbWFpbi1saXN0PTA6IDIzKC1TLS0p
LAooWEVOKSAgICBJUlE6ICAyNCBhZmZpbml0eTowMDAxIHZlYzoyOCB0eXBlPURNQV9NU0kgICAg
ICAgICBzdGF0dXM9MDAwMDAwMDAgbWFwcGVkLCB1bmJvdW5kCihYRU4pICAgIElSUTogIDI1IGFm
ZmluaXR5OjAwMDEgdmVjOjMwIHR5cGU9RE1BX01TSSAgICAgICAgIHN0YXR1cz0wMDAwMDAwMCBt
YXBwZWQsIHVuYm91bmQKKFhFTikgICAgSVJROiAgMjYgYWZmaW5pdHk6MDAwMSB2ZWM6NzEgdHlw
ZT1QQ0ktTVNJICAgICAgICAgc3RhdHVzPTAwMDAwMDEwIGluLWZsaWdodD0wIGRvbWFpbi1saXN0
PTA6Mjc5KC1TLS0pLAooWEVOKSAgICBJUlE6ICAyNyBhZmZpbml0eTowMDAxIHZlYzoyMSB0eXBl
PVBDSS1NU0kgICAgICAgICBzdGF0dXM9MDAwMDAwMTAgaW4tZmxpZ2h0PTAgZG9tYWluLWxpc3Q9
MDoyNzgoLVMtLSksCihYRU4pICAgIElSUTogIDI4IGFmZmluaXR5OjAwMDEgdmVjOjI5IHR5cGU9
UENJLU1TSSAgICAgICAgIHN0YXR1cz0wMDAwMDAxMCBpbi1mbGlnaHQ9MCBkb21haW4tbGlzdD0w
OjI3NygtUy0tKSwKKFhFTikgICAgSVJROiAgMjkgYWZmaW5pdHk6MDAwMSB2ZWM6NzkgdHlwZT1Q
Q0ktTVNJICAgICAgICAgc3RhdHVzPTAwMDAwMDEwIGluLWZsaWdodD0wIGRvbWFpbi1saXN0PTA6
Mjc2KC1TLS0pLAooWEVOKSAgICBJUlE6ICAzMCBhZmZpbml0eTowMDAxIHZlYzo4MSB0eXBlPVBD
SS1NU0kgICAgICAgICBzdGF0dXM9MDAwMDAwMTAgaW4tZmxpZ2h0PTAgZG9tYWluLWxpc3Q9MDoy
NzUoUFMtLSksCihYRU4pICAgIElSUTogIDMxIGFmZmluaXR5OjAwMDEgdmVjOjk5IHR5cGU9UENJ
LU1TSSAgICAgICAgIHN0YXR1cz0wMDAwMDAxMCBpbi1mbGlnaHQ9MCBkb21haW4tbGlzdD0wOjI3
NChQUy0tKSwKKFhFTikgSU8tQVBJQyBpbnRlcnJ1cHQgaW5mb3JtYXRpb246CihYRU4pICAgICBJ
UlEgIDAgVmVjMjQwOgooWEVOKSAgICAgICBBcGljIDB4MDAsIFBpbiAgMjogdmVjPWYwIGRlbGl2
ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0dXM9MCBwb2xhcml0eT0wIGlycj0wIHRyaWc9RSBtYXNrPTAg
ZGVzdF9pZDowCihYRU4pICAgICBJUlEgIDEgVmVjIDU2OgooWEVOKSAgICAgICBBcGljIDB4MDAs
IFBpbiAgMTogdmVjPTM4IGRlbGl2ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0dXM9MCBwb2xhcml0eT0w
IGlycj0wIHRyaWc9RSBtYXNrPTAgZGVzdF9pZDowCihYRU4pICAgICBJUlEgIDMgVmVjIDY0Ogoo
WEVOKSAgICAgICBBcGljIDB4MDAsIFBpbiAgMzogdmVjPTQwIGRlbGl2ZXJ5PUxvUHJpIGRlc3Q9
TCBzdGF0dXM9MCBwb2xhcml0eT0wIGlycj0wIHRyaWc9RSBtYXNrPTAgZGVzdF9pZDowCihYRU4p
ICAgICBJUlEgIDQgVmVjIDcyOgooWEVOKSAgICAgICBBcGljIDB4MDAsIFBpbiAgNDogdmVjPTQ4
IGRlbGl2ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0dXM9MCBwb2xhcml0eT0wIGlycj0wIHRyaWc9RSBt
YXNrPTAgZGVzdF9pZDowCihYRU4pICAgICBJUlEgIDUgVmVjIDgwOgooWEVOKSAgICAgICBBcGlj
IDB4MDAsIFBpbiAgNTogdmVjPTUwIGRlbGl2ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0dXM9MCBwb2xh
cml0eT0wIGlycj0wIHRyaWc9RSBtYXNrPTAgZGVzdF9pZDowCihYRU4pICAgICBJUlEgIDYgVmVj
IDg4OgooWEVOKSAgICAgICBBcGljIDB4MDAsIFBpbiAgNjogdmVjPTU4IGRlbGl2ZXJ5PUxvUHJp
IGRlc3Q9TCBzdGF0dXM9MCBwb2xhcml0eT0wIGlycj0wIHRyaWc9RSBtYXNrPTAgZGVzdF9pZDow
CihYRU4pICAgICBJUlEgIDcgVmVjIDk2OgooWEVOKSAgICAgICBBcGljIDB4MDAsIFBpbiAgNzog
dmVjPTYwIGRlbGl2ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0dXM9MCBwb2xhcml0eT0wIGlycj0wIHRy
aWc9RSBtYXNrPTAgZGVzdF9pZDowCihYRU4pICAgICBJUlEgIDggVmVjMTA0OgooWEVOKSAgICAg
ICBBcGljIDB4MDAsIFBpbiAgODogdmVjPTY4IGRlbGl2ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0dXM9
MCBwb2xhcml0eT0wIGlycj0wIHRyaWc9RSBtYXNrPTAgZGVzdF9pZDowCihYRU4pICAgICBJUlEg
IDkgVmVjMTEyOgooWEVOKSAgICAgICBBcGljIDB4MDAsIFBpbiAgOTogdmVjPTcwIGRlbGl2ZXJ5
PUxvUHJpIGRlc3Q9TCBzdGF0dXM9MCBwb2xhcml0eT0wIGlycj0wIHRyaWc9TCBtYXNrPTAgZGVz
dF9pZDowCihYRU4pICAgICBJUlEgMTAgVmVjMTIwOgooWEVOKSAgICAgICBBcGljIDB4MDAsIFBp
biAxMDogdmVjPTc4IGRlbGl2ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0dXM9MCBwb2xhcml0eT0wIGly
cj0wIHRyaWc9RSBtYXNrPTAgZGVzdF9pZDowCihYRU4pICAgICBJUlEgMTEgVmVjMTM2OgooWEVO
KSAgICAgICBBcGljIDB4MDAsIFBpbiAxMTogdmVjPTg4IGRlbGl2ZXJ5PUxvUHJpIGRlc3Q9TCBz
dGF0dXM9MCBwb2xhcml0eT0wIGlycj0wIHRyaWc9RSBtYXNrPTAgZGVzdF9pZDowCihYRU4pICAg
ICBJUlEgMTIgVmVjMTQ0OgooWEVOKSAgICAgICBBcGljIDB4MDAsIFBpbiAxMjogdmVjPTkwIGRl
bGl2ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0dXM9MCBwb2xhcml0eT0wIGlycj0wIHRyaWc9RSBtYXNr
PTAgZGVzdF9pZDowCihYRU4pICAgICBJUlEgMTMgVmVjMTUyOgooWEVOKSAgICAgICBBcGljIDB4
MDAsIFBpbiAxMzogdmVjPTk4IGRlbGl2ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0dXM9MCBwb2xhcml0
eT0wIGlycj0wIHRyaWc9RSBtYXNrPTEgZGVzdF9pZDowCihYRU4pICAgICBJUlEgMTQgVmVjMTYw
OgooWEVOKSAgICAgICBBcGljIDB4MDAsIFBpbiAxNDogdmVjPWEwIGRlbGl2ZXJ5PUxvUHJpIGRl
c3Q9TCBzdGF0dXM9MCBwb2xhcml0eT0wIGlycj0wIHRyaWc9RSBtYXNrPTAgZGVzdF9pZDowCihY
RU4pICAgICBJUlEgMTUgVmVjMTY4OgooWEVOKSAgICAgICBBcGljIDB4MDAsIFBpbiAxNTogdmVj
PWE4IGRlbGl2ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0dXM9MCBwb2xhcml0eT0wIGlycj0wIHRyaWc9
RSBtYXNrPTAgZGVzdF9pZDowCihYRU4pICAgICBJUlEgMTYgVmVjMTc2OgooWEVOKSAgICAgICBB
cGljIDB4MDAsIFBpbiAxNjogdmVjPWIwIGRlbGl2ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0dXM9MCBw
b2xhcml0eT0xIGlycj0wIHRyaWc9TCBtYXNrPTAgZGVzdF9pZDowCihYRU4pICAgICBJUlEgMTgg
VmVjMTg0OgooWEVOKSAgICAgICBBcGljIDB4MDAsIFBpbiAxODogdmVjPWI4IGRlbGl2ZXJ5PUxv
UHJpIGRlc3Q9TCBzdGF0dXM9MCBwb2xhcml0eT0xIGlycj0wIHRyaWc9TCBtYXNrPTEgZGVzdF9p
ZDowCihYRU4pICAgICBJUlEgMTkgVmVjMTkyOgooWEVOKSAgICAgICBBcGljIDB4MDAsIFBpbiAx
OTogdmVjPWMwIGRlbGl2ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0dXM9MCBwb2xhcml0eT0xIGlycj0w
IHRyaWc9TCBtYXNrPTEgZGVzdF9pZDowCihYRU4pICAgICBJUlEgMjAgVmVjIDQ5OgooWEVOKSAg
ICAgICBBcGljIDB4MDAsIFBpbiAyMDogdmVjPTMxIGRlbGl2ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0
dXM9MCBwb2xhcml0eT0xIGlycj0wIHRyaWc9TCBtYXNrPTEgZGVzdF9pZDowCihYRU4pICAgICBJ
UlEgMjIgVmVjMjA4OgooWEVOKSAgICAgICBBcGljIDB4MDAsIFBpbiAyMjogdmVjPWQwIGRlbGl2
ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0dXM9MCBwb2xhcml0eT0xIGlycj0wIHRyaWc9TCBtYXNrPTEg
ZGVzdF9pZDowCihYRU4pICAgICBJUlEgMjMgVmVjMjAwOgooWEVOKSAgICAgICBBcGljIDB4MDAs
IFBpbiAyMzogdmVjPWM4IGRlbGl2ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0dXM9MCBwb2xhcml0eT0x
IGlycj0wIHRyaWc9TCBtYXNrPTAgZGVzdF9pZDowCihYRU4pIFttOiBtZW1vcnkgaW5mb10KKFhF
TikgUGh5c2ljYWwgbWVtb3J5IGluZm9ybWF0aW9uOgooWEVOKSAgICAgWGVuIGhlYXA6IDBrQiBm
cmVlCihYRU4pICAgICBoZWFwWzE0XTogNjQ1MTJrQiBmcmVlCihYRU4pICAgICBoZWFwWzE1XTog
MTMxMDcya0IgZnJlZQooWEVOKSAgICAgaGVhcFsxNl06IDI2MjE0NGtCIGZyZWUKKFhFTikgICAg
IGhlYXBbMTddOiA1MjIyMzZrQiBmcmVlCihYRU4pICAgICBoZWFwWzE4XTogMTA0ODU3MmtCIGZy
ZWUKKFhFTikgICAgIGhlYXBbMTldOiA2OTExODhrQiBmcmVlCihYRU4pICAgICBoZWFwWzIwXTog
NTM3MDYwa0IgZnJlZQooWEVOKSAgICAgRG9tIGhlYXA6IDMyNTY3ODRrQiBmcmVlCihYRU4pIFtu
OiBOTUkgc3RhdGlzdGljc10KKFhFTikgQ1BVCU5NSQooWEVOKSAgIDAJICAwCihYRU4pICAgMQkg
IDAKKFhFTikgICAyCSAgMAooWEVOKSAgIDMJICAwCihYRU4pIGRvbTAgdmNwdTA6IE5NSSBuZWl0
aGVyIHBlbmRpbmcgbm9yIG1hc2tlZAooWEVOKSBbcTogZHVtcCBkb21haW4gKGFuZCBndWVzdCBk
ZWJ1ZykgaW5mb10KKFhFTikgJ3EnIHByZXNzZWQgLT4gZHVtcGluZyBkb21haW4gaW5mbyAobm93
PTB4Mjk6MERCNzAwQjEpCihYRU4pIEdlbmVyYWwgaW5mb3JtYXRpb24gZm9yIGRvbWFpbiAwOgoo
WEVOKSAgICAgcmVmY250PTMgZHlpbmc9MCBwYXVzZV9jb3VudD0wCihYRU4pICAgICBucl9wYWdl
cz0xODc1MzkgeGVuaGVhcF9wYWdlcz02IHNoYXJlZF9wYWdlcz0wIHBhZ2VkX3BhZ2VzPTAgZGly
dHlfY3B1cz17MS0zfSBtYXhfcGFnZXM9MTg4MTQ3CihYRU4pICAgICBoYW5kbGU9MDAwMDAwMDAt
MDAwMC0wMDAwLTAwMDAtMDAwMDAwMDAwMDAwIHZtX2Fzc2lzdD0wMDAwMDAwZAooWEVOKSBSYW5n
ZXNldHMgYmVsb25naW5nIHRvIGRvbWFpbiAwOgooWEVOKSAgICAgSS9PIFBvcnRzICB7IDAtMWYs
IDIyLTNmLCA0NC02MCwgNjItOWYsIGEyLTQwNywgNDBjLWNmYiwgZDAwLTIwNGYsIDIwNTgtZmZm
ZiB9CihYRU4pICAgICBJbnRlcnJ1cHRzIHsgMC0yNzkgfQooWEVOKSAgICAgSS9PIE1lbW9yeSB7
IDAtZmViZmYsIGZlYzAxLWZlZGZmLCBmZWUwMS1mZmZmZmZmZmZmZmZmZmZmIH0KKFhFTikgTWVt
b3J5IHBhZ2VzIGJlbG9uZ2luZyB0byBkb21haW4gMDoKKFhFTikgICAgIERvbVBhZ2UgbGlzdCB0
b28gbG9uZyB0byBkaXNwbGF5CihYRU4pICAgICBYZW5QYWdlIDAwMDAwMDAwMDAxNDc2ZjY6IGNh
Zj1jMDAwMDAwMDAwMDAwMDAyLCB0YWY9NzQwMDAwMDAwMDAwMDAwMgooWEVOKSAgICAgWGVuUGFn
ZSAwMDAwMDAwMDAwMTQ3NmY1OiBjYWY9YzAwMDAwMDAwMDAwMDAwMSwgdGFmPTc0MDAwMDAwMDAw
MDAwMDEKKFhFTikgICAgIFhlblBhZ2UgMDAwMDAwMDAwMDE0NzZmNDogY2FmPWMwMDAwMDAwMDAw
MDAwMDEsIHRhZj03NDAwMDAwMDAwMDAwMDAxCihYRU4pICAgICBYZW5QYWdlIDAwMDAwMDAwMDAx
NDc2ZjM6IGNhZj1jMDAwMDAwMDAwMDAwMDAxLCB0YWY9NzQwMDAwMDAwMDAwMDAwMQooWEVOKSAg
ICAgWGVuUGFnZSAwMDAwMDAwMDAwMGFhMGZkOiBjYWY9YzAwMDAwMDAwMDAwMDAwMiwgdGFmPTc0
MDAwMDAwMDAwMDAwMDIKKFhFTikgICAgIFhlblBhZ2UgMDAwMDAwMDAwMDExYzllYTogY2FmPWMw
MDAwMDAwMDAwMDAwMDIsIHRhZj03NDAwMDAwMDAwMDAwMDAyCihYRU4pIFZDUFUgaW5mb3JtYXRp
b24gYW5kIGNhbGxiYWNrcyBmb3IgZG9tYWluIDA6CihYRU4pICAgICBWQ1BVMDogQ1BVMCBbaGFz
PUZdIHBvbGw9MCB1cGNhbGxfcGVuZCA9IDAxLCB1cGNhbGxfbWFzayA9IDAwIGRpcnR5X2NwdXM9
e30gY3B1X2FmZmluaXR5PXswfQooWEVOKSAgICAgcGF1c2VfY291bnQ9MCBwYXVzZV9mbGFncz0w
CihYRU4pICAgICBObyBwZXJpb2RpYyB0aW1lcgooWEVOKSAgICAgVkNQVTE6IENQVTMgW2hhcz1G
XSBwb2xsPTAgdXBjYWxsX3BlbmQgPSAwMCwgdXBjYWxsX21hc2sgPSAwMCBkaXJ0eV9jcHVzPXsz
fSBjcHVfYWZmaW5pdHk9ezAtMTV9CihYRU4pICAgICBwYXVzZV9jb3VudD0wIHBhdXNlX2ZsYWdz
PTEKKFhFTikgICAgIE5vIHBlcmlvZGljIHRpbWVyCihYRU4pICAgICBWQ1BVMjogQ1BVMSBbaGFz
PUZdIHBvbGw9MCB1cGNhbGxfcGVuZCA9IDAwLCB1cGNhbGxfbWFzayA9IDAwIGRpcnR5X2NwdXM9
ezF9IGNwdV9hZmZpbml0eT17MC0xNX0KKFhFTikgICAgIHBhdXNlX2NvdW50PTAgcGF1c2VfZmxh
Z3M9MQooWEVOKSAgICAgTm8gcGVyaW9kaWMgdGltZXIKKFhFTikgICAgIFZDUFUzOiBDUFUyIFto
YXM9Rl0gcG9sbD0wIHVwY2FsbF9wZW5kID0gMDAsIHVwY2FsbF9tYXNrID0gMDAgZGlydHlfY3B1
cz17Mn0gY3B1X2FmZmluaXR5PXswLTE1fQooWEVOKSAgICAgcGF1c2VfY291bnQ9MCBwYXVzZV9m
bGFncz0xCihYRU4pICAgICBObyBwZXJpb2RpYyB0aW1lcgooWEVOKSBOb3RpZnlpbmcgZ3Vlc3Qg
MDowICh2aXJxIDEsIHBvcnQgNSwgc3RhdCAwLzAvLTEpCihYRU4pIE5vdGlmeWluZyBndWVzdCAw
OjEgKHZpcnEgMSwgcG9ydCAxMSwgc3RhdCAwLzAvMCkKKFhFTikgTm90aWZ5aW5nIGd1ZXN0IDA6
MiAodmlycSAxLCBwb3J0IDE3LCBzdGF0IDAvMC8wKQooWEVOKSBOb3RpZnlpbmcgZ3Vlc3QgMDoz
ICh2aXJxIDEsIHBvcnQgMjMsIHN0YXQgMC8wLzApCgooWEVOKSBTaGFyZWQgZnJhbWVzIDAgLS0g
U2F2ZWQgZnJhbWVzIDAKWyAgMTc2LjUwMTk1NV0gdihYRU4pIFtyOiBkdW1wIHJ1biBxdWV1ZXNd
CmNwdSAxCihYRU4pIHNjaGVkX3NtdF9wb3dlcl9zYXZpbmdzOiBkaXNhYmxlZAooWEVOKSBOT1c9
MHgwMDAwMDAyOTE5N0UzNTAyCihYRU4pIElkbGUgY3B1cG9vbDoKWyAgMTc2LjUwMTk1NV0gIChY
RU4pIFNjaGVkdWxlcjogU01QIENyZWRpdCBTY2hlZHVsZXIgKGNyZWRpdCkKIChYRU4pIGluZm86
CihYRU4pIAluY3B1cyAgICAgICAgICAgICAgPSA0CihYRU4pIAltYXN0ZXIgICAgICAgICAgICAg
PSAwCihYRU4pIAljcmVkaXQgICAgICAgICAgICAgPSA0MDAKKFhFTikgCWNyZWRpdCBiYWxhbmNl
ICAgICA9IC0xMDAKKFhFTikgCXdlaWdodCAgICAgICAgICAgICA9IDI1NgooWEVOKSAJcnVucV9z
b3J0ICAgICAgICAgID0gMTk0NQooWEVOKSAJZGVmYXVsdC13ZWlnaHQgICAgID0gMjU2CihYRU4p
IAl0c2xpY2UgICAgICAgICAgICAgPSAxMG1zCihYRU4pIAlyYXRlbGltaXQgICAgICAgICAgPSAx
MDAwdXMKKFhFTikgCWNyZWRpdHMgcGVyIG1zZWMgICA9IDEwCihYRU4pIAl0aWNrcyBwZXIgdHNs
aWNlICAgPSAxCihYRU4pIAltaWdyYXRpb24gZGVsYXkgICAgPSAwdXMKMDogbWFza2VkPTAgcGVu
ZChYRU4pIGlkbGVyczogMDAwNgooWEVOKSBhY3RpdmUgdmNwdXM6CihYRU4pIAkgIDE6IGluZz0x
IGV2ZW50X3NlbCBbMC4xXSBwcmk9LTIgZmxhZ3M9MCBjcHU9MyBjcmVkaXQ9LTcwMyBbdz0yNTZd
CjAwMDAwMDAwMDAwMDAwMDEoWEVOKSBDcHVwb29sIDA6CgooWEVOKSBTY2hlZHVsZXI6IFNNUCBD
cmVkaXQgU2NoZWR1bGVyIChjcmVkaXQpClsgIDE3Ni41MzYxMzddICAoWEVOKSBpbmZvOgooWEVO
KSAJbmNwdXMgICAgICAgICAgICAgID0gNAooWEVOKSAJbWFzdGVyICAgICAgICAgICAgID0gMAoo
WEVOKSAJY3JlZGl0ICAgICAgICAgICAgID0gNDAwCihYRU4pIAljcmVkaXQgYmFsYW5jZSAgICAg
PSAtMTAwCihYRU4pIAl3ZWlnaHQgICAgICAgICAgICAgPSAyNTYKKFhFTikgCXJ1bnFfc29ydCAg
ICAgICAgICA9IDE5NDUKKFhFTikgCWRlZmF1bHQtd2VpZ2h0ICAgICA9IDI1NgooWEVOKSAJdHNs
aWNlICAgICAgICAgICAgID0gMTBtcwooWEVOKSAJcmF0ZWxpbWl0ICAgICAgICAgID0gMTAwMHVz
CihYRU4pIAljcmVkaXRzIHBlciBtc2VjICAgPSAxMAooWEVOKSAJdGlja3MgcGVyIHRzbGljZSAg
ID0gMQooWEVOKSAJbWlncmF0aW9uIGRlbGF5ICAgID0gMHVzCiAoWEVOKSBpZGxlcnM6IDAwMDYK
KFhFTikgYWN0aXZlIHZjcHVzOgooWEVOKSAJICAxOiBbMC4xXSBwcmk9LTIgZmxhZ3M9MCBjcHU9
MyBjcmVkaXQ9LTEyNjQgW3c9MjU2XQoxOiBtYXNrZWQ9MCBwZW5kKFhFTikgQ1BVWzAwXSBpbmc9
MSBldmVudF9zZWwgIHNvcnQ9MTk0NSwgc2libGluZz0wMDAxLCAwMDAwMDAwMDAwMDAwMDAxY29y
ZT0wMDBmCihYRU4pIAlydW46IFszMjc2Ny4wXSBwcmk9MCBmbGFncz0wIGNwdT0wCgooWEVOKSAJ
ICAxOiBbMC4wXSBwcmk9MCBmbGFncz0wIGNwdT0wIGNyZWRpdD02MiBbdz0yNTZdClsgIDE3Ni42
MzAxOTVdICAoWEVOKSBDUFVbMDFdICAgc29ydD0xOTQ1LCBzaWJsaW5nPTAwMDIsIDI6IG1hc2tl
ZD0xIHBlbmRjb3JlPTAwMGYKKFhFTikgCXJ1bjogaW5nPTEgZXZlbnRfc2VsIFszMjc2Ny4xXSBw
cmk9LTY0IGZsYWdzPTAgY3B1PTEKMDAwMDAwMDAwMDAwMDAwMShYRU4pIENQVVswMl0gCiBzb3J0
PTE5NDUsIHNpYmxpbmc9MDAwNCwgWyAgMTc2LjY2MDE3NV0gIGNvcmU9MDAwZgooWEVOKSAJcnVu
OiAgWzMyNzY3LjJdIHByaT0tNjQgZmxhZ3M9MCBjcHU9MgozOiBtYXNrZWQ9MSBwZW5kKFhFTikg
Q1BVWzAzXSBpbmc9MCBldmVudF9zZWwgIHNvcnQ9MTk0NSwgc2libGluZz0wMDA4LCAwMDAwMDAw
MDAwMDAwMDAwY29yZT0wMDBmCihYRU4pIAlydW46IFswLjFdIHByaT0tMiBmbGFncz0wIGNwdT0z
IGNyZWRpdD0tMTg2MyBbdz0yNTZdCihYRU4pIAkgIDE6IFszMjc2Ny4zXSBwcmk9LTY0IGZsYWdz
PTAgY3B1PTMKCihYRU4pIFtzOiBkdW1wIHNvZnR0c2Mgc3RhdHNdClsgIDE3Ni42NzkyMzddICAo
WEVOKSBUU0MgbWFya2VkIGFzIHJlbGlhYmxlLCB3YXJwID0gMCAoY291bnQ9MikKIChYRU4pIE5v
IGRvbWFpbnMgaGF2ZSBlbXVsYXRlZCBUU0MKCihYRU4pIFt0OiBkaXNwbGF5IG11bHRpLWNwdSBj
bG9jayBpbmZvXQpbICAxNzYuNzI5NDA3XSBwKFhFTikgU3luY2VkIHN0aW1lIHNrZXc6IG1heD04
NDkybnMgYXZnPTg0OTJucyBzYW1wbGVzPTEgY3VycmVudD04NDkybnMKKFhFTikgU3luY2VkIGN5
Y2xlcyBza2V3OiBtYXg9MTcwIGF2Zz0xNzAgc2FtcGxlcz0xIGN1cnJlbnQ9MTcwCmVuZGluZzoK
KFhFTikgW3U6IGR1bXAgbnVtYSBpbmZvXQpbICAxNzYuNzI5NDA4XSAgKFhFTikgJ3UnIHByZXNz
ZWQgLT4gZHVtcGluZyBudW1hIGluZm8gKG5vdy0weDI5OjI3NTZCNjNCKQogIChYRU4pIGlkeDAg
LT4gTk9ERTAgc3RhcnQtPjAgc2l6ZS0+MTM2OTYwMCBmcmVlLT44MTQxOTYKMDAwMDAwMDAwMDAw
MDAwMChYRU4pIHBoeXNfdG9fbmlkKDAwMDAwMDAwMDAwMDEwMDApIC0+IDAgc2hvdWxkIGJlIDAK
IChYRU4pIENQVTAgLT4gTk9ERTAKKFhFTikgQ1BVMSAtPiBOT0RFMAooWEVOKSBDUFUyIC0+IE5P
REUwCihYRU4pIENQVTMgLT4gTk9ERTAKKFhFTikgTWVtb3J5IGxvY2F0aW9uIG9mIGVhY2ggZG9t
YWluOgooWEVOKSBEb21haW4gMCAodG90YWw6IDE4NzUzOSk6CjAwMDAwMDAwMDAwMDAwMDAoWEVO
KSAgICAgTm9kZSAwOiAxODc1MzkKIChYRU4pIFt2OiBkdW1wIEludGVsJ3MgVk1DU10KMDAwMDAw
MDAwMDAwMDAwMChYRU4pICoqKioqKioqKioqIFZNQ1MgQXJlYXMgKioqKioqKioqKioqKioKIChY
RU4pICoqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqCjAwMDAwMDAwMDAwMDAw
MDAoWEVOKSBbejogcHJpbnQgaW9hcGljIGluZm9dCiAoWEVOKSBudW1iZXIgb2YgTVAgSVJRIHNv
dXJjZXM6IDE1LgowMDAwMDAwMDAwMDAwMDAwKFhFTikgbnVtYmVyIG9mIElPLUFQSUMgIzIgcmVn
aXN0ZXJzOiAyNC4KKFhFTikgdGVzdGluZyB0aGUgSU8gQVBJQy4uLi4uLi4uLi4uLi4uLi4uLi4u
Li4uCiAoWEVOKSBJTyBBUElDICMyLi4uLi4uCihYRU4pIC4uLi4gcmVnaXN0ZXIgIzAwOiAwMjAw
MDAwMAooWEVOKSAuLi4uLi4uICAgIDogcGh5c2ljYWwgQVBJQyBpZDogMDIKKFhFTikgLi4uLi4u
LiAgICA6IERlbGl2ZXJ5IFR5cGU6IDAKKFhFTikgLi4uLi4uLiAgICA6IExUUyAgICAgICAgICA6
IDAKMDAwMDAwMDAwMDAwMDAwMChYRU4pIC4uLi4gcmVnaXN0ZXIgIzAxOiAwMDE3MDAyMAooWEVO
KSAuLi4uLi4uICAgICA6IG1heCByZWRpcmVjdGlvbiBlbnRyaWVzOiAwMDE3CihYRU4pIC4uLi4u
Li4gICAgIDogUFJRIGltcGxlbWVudGVkOiAwCihYRU4pIC4uLi4uLi4gICAgIDogSU8gQVBJQyB2
ZXJzaW9uOiAwMDIwCihYRU4pIC4uLi4gSVJRIHJlZGlyZWN0aW9uIHRhYmxlOgooWEVOKSAgTlIg
TG9nIFBoeSBNYXNrIFRyaWcgSVJSIFBvbCBTdGF0IERlc3QgRGVsaSBWZWN0OiAgIAogKFhFTikg
IDAwIDAwMCAwMCAgMDAwMDAwMDAwMDAwMDAwMDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAw
ICAgIDAwCiAoWEVOKSAgMDEgMDAwIDAwICAwICAgIDAgICAgMCAgIDAgICAwICAgIDEgICAgMSAg
ICAzOAowMDAwMDAwMDAwMDAwMDAwKFhFTikgIDAyIDAwMCAwMCAgMCAgICAwICAgIDAgICAwICAg
MCAgICAxICAgIDEgICAgRjAKCihYRU4pICAwMyAwMDAgMDAgIDAgICAgMCAgICAwICAgMCAgIDAg
ICAgMSAgICAxICAgIDQwClsgIDE3Ni44NjY4NzJdICAoWEVOKSAgMDQgMDAwIDAwICAwICAgIDAg
ICAgMCAgIDAgICAwICAgIDEgICAgMSAgICA0OAogIChYRU4pICAwNSAwMDAgMDAgIDAgICAgMCAg
ICAwICAgMCAgIDAgICAgMSAgICAxICAgIDUwCjAwMDAwMDAwMDAwMDAwMDAoWEVOKSAgMDYgMDAw
IDAwICAwICAgIDAgICAgMCAgIDAgICAwICAgIDEgICAgMSAgICA1OAogKFhFTikgIDA3IDAwMCAw
MCAgMCAgICAwICAgIDAgICAwICAgMCAgICAxICAgIDEgICAgNjAKMDAwMDAwMDAwMDAwMDAwMChY
RU4pICAwOCAwMDAgMDAgIDAgICAgMCAgICAwICAgMCAgIDAgICAgMSAgICAxICAgIDY4CiAoWEVO
KSAgMDkgMDAwIDAwICAwICAgIDEgICAgMCAgIDAgICAwICAgIDEgICAgMSAgICA3MAowMDAwMDAw
MDAwMDAwMDAwKFhFTikgIDBhIDAwMCAwMCAgMCAgICAwICAgIDAgICAwICAgMCAgICAxICAgIDEg
ICAgNzgKIChYRU4pICAwYiAwMDAgMDAgIDAgICAgMCAgICAwICAgMCAgIDAgICAgMSAgICAxICAg
IDg4CjAwMDAwMDAwMDAwMDAwMDAoWEVOKSAgMGMgMDAwIDAwICAwICAgIDAgICAgMCAgIDAgICAw
ICAgIDEgICAgMSAgICA5MAogKFhFTikgIDBkIDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAg
ICAxICAgIDEgICAgOTgKMDAwMDAwMDAwMDAwMDAwMChYRU4pICAwZSAwMDAgMDAgIDAgICAgMCAg
ICAwICAgMCAgIDAgICAgMSAgICAxICAgIEEwCiAoWEVOKSAgMGYgMDAwIDAwICAwICAgIDAgICAg
MCAgIDAgICAwICAgIDEgICAgMSAgICBBOAowMDAwMDAwMDAwMDAwMDAwKFhFTikgIDEwIDAwMCAw
MCAgMCAgICAxICAgIDAgICAxICAgMCAgICAxICAgIDEgICAgQjAKIChYRU4pICAxMSAwMDAgMDAg
IDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAwICAgIDAwCjAwMDAwMDAwMDAwMDAwMDAoWEVO
KSAgMTIgMDAwIDAwICAxICAgIDEgICAgMCAgIDEgICAwICAgIDEgICAgMSAgICBCOAogKFhFTikg
IDEzIDAwMCAwMCAgMSAgICAxICAgIDAgICAxICAgMCAgICAxICAgIDEgICAgQzAKMDAwMDAwMDAw
MDAwMDAwMChYRU4pICAxNCAwMDAgMDAgIDEgICAgMSAgICAwICAgMSAgIDAgICAgMSAgICAxICAg
IDMxCgooWEVOKSAgMTUgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAw
MApbICAxNzYuOTY5NDMzXSAgKFhFTikgIDE2IDAwMCAwMCAgMSAgICAxICAgIDAgICAxICAgMCAg
ICAxICAgIDEgICAgRDAKICAoWEVOKSAgMTcgMDAwIDAwICAwICAgIDEgICAgMCAgIDEgICAwICAg
IDEgICAgMSAgICBDOAowMDAwMDAwMDAwMDAwMDAwKFhFTikgVXNpbmcgdmVjdG9yLWJhc2VkIGlu
ZGV4aW5nCihYRU4pIElSUSB0byBwaW4gbWFwcGluZ3M6CiAoWEVOKSBJUlEyNDAgLT4gMDoyCihY
RU4pIElSUTU2IC0+IDA6MQooWEVOKSBJUlE2NCAtPiAwOjMKKFhFTikgSVJRNzIgLT4gMDo0CjAw
MDAwMDAwMDAwMDAwMDAoWEVOKSBJUlE4MCAtPiAwOjUKKFhFTikgSVJRODggLT4gMDo2CihYRU4p
IElSUTk2IC0+IDA6NwooWEVOKSBJUlExMDQgLT4gMDo4CihYRU4pIElSUTExMiAtPiAwOjkKKFhF
TikgSVJRMTIwIC0+IDA6MTAKKFhFTikgSVJRMTM2IC0+IDA6MTEKKFhFTikgSVJRMTQ0IC0+IDA6
MTIKKFhFTikgSVJRMTUyIC0+IDA6MTMKKFhFTikgSVJRMTYwIC0+IDA6MTQKKFhFTikgSVJRMTY4
IC0+IDA6MTUKKFhFTikgSVJRMTc2IC0+IDA6MTYKIChYRU4pIElSUTE4NCAtPiAwOjE4CihYRU4p
IElSUTE5MiAtPiAwOjE5CihYRU4pIElSUTQ5IC0+IDA6MjAKKFhFTikgSVJRMjA4IC0+IDA6MjIK
KFhFTikgSVJRMjAwIC0+IDA6MjMKMDAwMDAwMDAwMDAwMDAwMChYRU4pIC4uLi4uLi4uLi4uLi4u
Li4uLi4uLi4uLi4uLi4uLi4uLi4uLiBkb25lLgogMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
ClsgIDE3Ny4wNTgwMzJdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxNzcuMDcxOTk0XSAg
ICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTc3LjA4NTk1NV0gICAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwClsgIDE3Ny4wOTk5MTZdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxNzcuMTEz
ODc4XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDQ4MDAwODIyODIKWyAgMTc3LjEyNzgzOV0gICAgClsgIDE3Ny4x
MzExNTBdIGdsb2JhbCBtYXNrOgpbICAxNzcuMTMxMTUwXSAgICBmZmZmZmZmZmZmZmZmZmZmIGZm
ZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZm
ZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZm
ZmYKWyAgMTc3LjE0NjM2NF0gICAgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZm
ZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZm
ZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmClsgIDE3Ny4xNjAzMjZd
ICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZm
ZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZm
ZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZgpbICAxNzcuMTc0Mjg2XSAgICBmZmZmZmZmZmZmZmZm
ZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZm
ZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZm
ZmZmZmZmZmYKWyAgMTc3LjE4ODI0OF0gICAgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZm
ZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZm
ZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmClsgIDE3Ny4y
MDIyMDldICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZm
ZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZm
ZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZgpbICAxNzcuMjE2MTcwXSAgICBmZmZmZmZm
ZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZm
ZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZm
ZmZmZmZmZmZmZmZmZmYKWyAgMTc3LjIzMDEzMl0gICAgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZm
ZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZm
ZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmY4MDA4MTA0MTA1Clsg
IDE3Ny4yNDQwOTJdICAgIApbICAxNzcuMjQ3NDA0XSBnbG9iYWxseSB1bm1hc2tlZDoKWyAgMTc3
LjI0NzQwNF0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDE3Ny4yNjMxNTVdICAgIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMApbICAxNzcuMjc3MTE2XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAK
WyAgMTc3LjI5MTA3OF0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDE3Ny4zMDUwMzldICAg
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxNzcuMzE5MDAwXSAgICAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAKWyAgMTc3LjMzMjk2MV0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDE3Ny4zNDY5
MjJdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwNDgwMDA4MjI4MgpbICAxNzcuMzYwODgzXSAgICAKWyAgMTc3LjM2
NDE5NV0gbG9jYWwgY3B1MSBtYXNrOgpbICAxNzcuMzY0MTk2XSAgICAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAKWyAgMTc3LjM3OTc2N10gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDE3Ny4zOTM3
MjldICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxNzcuNDA3Njg5XSAgICAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAKWyAgMTc3LjQyMTY1MV0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDE3
Ny40MzU2MTFdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxNzcuNDQ5NTczXSAgICAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAKWyAgMTc3LjQ2MzUzNF0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAxZjgw
ClsgIDE3Ny40Nzc0OTVdICAgIApbICAxNzcuNDgwODA2XSBsb2NhbGx5IHVubWFza2VkOgpbICAx
NzcuNDgwODA2XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTc3LjQ5NjQ2OV0gICAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwClsgIDE3Ny41MTA0MjldICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MApbICAxNzcuNTI0MzkwXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTc3LjUzODM1Ml0g
ICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDE3Ny41NTIzMTNdICAgIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMApbICAxNzcuNTY2Mjc0XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTc3LjU4
MDIzNl0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMjgwClsgIDE3Ny41OTQxOTZdICAgIApbICAxNzcu
NTk3NTA4XSBwZW5kaW5nIGxpc3Q6ClsgIDE3Ny42MDA1NTFdICAgMDogZXZlbnQgMSAtPiBpcnEg
MjcyIGxvY2FsbHktbWFza2VkClsgIDE3Ny42MDU1NjJdICAgMTogZXZlbnQgNyAtPiBpcnEgMjc4
ClsgIDE3Ny42MDkyMzJdICAgMTogZXZlbnQgOSAtPiBpcnEgMjgwClsgIDE3Ny42MTI5MDJdICAg
MjogZXZlbnQgMTMgLT4gaXJxIDI4NCBsb2NhbGx5LW1hc2tlZApbICAxNzcuNjE4MDAyXSAgIDM6
IGV2ZW50IDE5IC0+IGlycSAyOTAgbG9jYWxseS1tYXNrZWQKWyAgMTc3LjYyMzEwNF0gICAwOiBl
dmVudCAzNSAtPiBpcnEgMzAyIGxvY2FsbHktbWFza2VkClsgIDE3Ny42MjgyMDVdICAgMDogZXZl
bnQgMzggLT4gaXJxIDMwMyBsb2NhbGx5LW1hc2tlZApbICAxNzcuNjMzMzI2XSAKWyAgMTc3LjYz
MzMyNl0gdmNwdSAwClsgIDE3Ny42MzMzMjddICAgMDogbWFza2VkPTAgcGVuZGluZz0xIGV2ZW50
X3NlbCAwMDAwMDAwMDAwMDAwMDAxClsgIDE3Ny42Mzg1NzldICAgMTogbWFza2VkPTAgcGVuZGlu
Zz0wIGV2ZW50X3NlbCAwMDAwMDAwMDAwMDAwMDAwClsgIDE3Ny42NDQ2NjVdICAgMjogbWFza2Vk
PTEgcGVuZGluZz0xIGV2ZW50X3NlbCAwMDAwMDAwMDAwMDAwMDAxClsgIDE3Ny42NTA3NDldICAg
MzogbWFza2VkPTEgcGVuZGluZz0xIGV2ZW50X3NlbCAwMDAwMDAwMDAwMDAwMDAxClsgIDE3Ny42
NTY4MzVdICAgClsgIDE3Ny42NjI5MjBdIHBlbmRpbmc6ClsgIDE3Ny42NjI5MjFdICAgIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMApbICAxNzcuNjc3Nzc2XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAK
WyAgMTc3LjY5MTczOF0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDE3Ny43MDU3MDBdICAg
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxNzcuNzE5NjYwXSAgICAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAKWyAgMTc3LjczMzYyMl0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDE3Ny43NDc1
ODJdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxNzcuNzYxNTQzXSAgICAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDQ4MDAyOGEwMDYKWyAgMTc3Ljc3NTUwNV0gICAgClsgIDE3Ny43Nzg4MTZdIGdsb2JhbCBtYXNr
OgpbICAxNzcuNzc4ODE2XSAgICBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZm
ZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZm
ZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYKWyAgMTc3Ljc5NDAzMF0g
ICAgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZm
ZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZm
ZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmClsgIDE3Ny44MDc5OTJdICAgIGZmZmZmZmZmZmZmZmZm
ZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZm
ZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZm
ZmZmZmZmZgpbICAxNzcuODIxOTUyXSAgICBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZm
ZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZm
ZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYKWyAgMTc3Ljgz
NTkxNF0gICAgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZm
ZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZm
ZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmClsgIDE3Ny44NDk4NzVdICAgIGZmZmZmZmZm
ZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZm
ZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZm
ZmZmZmZmZmZmZmZmZgpbICAxNzcuODYzODM2XSAgICBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZm
ZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZm
ZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYKWyAg
MTc3Ljg3Nzc5OF0gICAgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZm
ZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZm
ZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmY4MDA4MTA0MTA1ClsgIDE3Ny44OTE3NTldICAgIApb
ICAxNzcuODk1MDY5XSBnbG9iYWxseSB1bm1hc2tlZDoKWyAgMTc3Ljg5NTA3MF0gICAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwClsgIDE3Ny45MTA4MjFdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApb
ICAxNzcuOTI0NzgzXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTc3LjkzODc0M10gICAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDE3Ny45NTI3MDRdICAgIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMApbICAxNzcuOTY2NjY2XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTc3Ljk4MDYy
N10gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDE3Ny45OTQ1ODldICAgIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
NDgwMDI4YTAwMgpbICAxNzguMDA4NTUwXSAgICAKWyAgMTc4LjAxMTg2MV0gbG9jYWwgY3B1MCBt
YXNrOgpbICAxNzguMDExODYxXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTc4LjAyNzQz
M10gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDE3OC4wNDEzOTRdICAgIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMApbICAxNzguMDU1MzU1XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTc4
LjA2OTMxN10gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDE3OC4wODMyNzhdICAgIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMApbICAxNzguMDk3MjM5XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAK
WyAgMTc4LjExMTIwMF0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCBmZmZmZmZmZmZlMDAwMDdmClsgIDE3OC4xMjUxNjFdICAg
IApbICAxNzguMTI4NDcyXSBsb2NhbGx5IHVubWFza2VkOgpbICAxNzguMTI4NDczXSAgICAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAKWyAgMTc4LjE0NDEzNF0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
ClsgIDE3OC4xNTgwOTZdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxNzguMTcyMDU2XSAg
ICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTc4LjE4NjAxOF0gICAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwClsgIDE3OC4xOTk5NzldICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxNzguMjEz
OTQwXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTc4LjIyNzkwMl0gICAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDA0ODAwMDAwMDAyClsgIDE3OC4yNDE4NjNdICAgIApbICAxNzguMjQ1MTc0XSBwZW5kaW5nIGxp
c3Q6ClsgIDE3OC4yNDgyMjFdICAgMDogZXZlbnQgMSAtPiBpcnEgMjcyClsgIDE3OC4yNTE4ODhd
ICAgMDogZXZlbnQgMiAtPiBpcnEgMjczIGdsb2JhbGx5LW1hc2tlZApbICAxNzguMjU2OTg4XSAg
IDI6IGV2ZW50IDEzIC0+IGlycSAyODQgbG9jYWxseS1tYXNrZWQKWyAgMTc4LjI2MjA5MF0gICAy
OiBldmVudCAxNSAtPiBpcnEgMjg2IGxvY2FsbHktbWFza2VkClsgIDE3OC4yNjcxOTBdICAgMzog
ZXZlbnQgMTkgLT4gaXJxIDI5MCBsb2NhbGx5LW1hc2tlZApbICAxNzguMjcyMjkyXSAgIDM6IGV2
ZW50IDIxIC0+IGlycSAyOTIgbG9jYWxseS1tYXNrZWQKWyAgMTc4LjI3NzM5Ml0gICAwOiBldmVu
dCAzNSAtPiBpcnEgMzAyClsgIDE3OC4yODExNTFdICAgMDogZXZlbnQgMzggLT4gaXJxIDMwMwpb
ICAxNzguMjg0OTM4XSAKWyAgMTc4LjI4NDkzOV0gdmNwdSAyClsgIDE3OC4yODQ5NDBdICAgMDog
bWFza2VkPTAgcGVuZGluZz0wIGV2ZW50X3NlbCAwMDAwMDAwMDAwMDAwMDAwClsgIDE3OC4yOTAy
MDBdICAgMTogbWFza2VkPTAgcGVuZGluZz0wIGV2ZW50X3NlbCAwMDAwMDAwMDAwMDAwMDAwClsg
IDE3OC4yOTYyODVdICAgMjogbWFza2VkPTAgcGVuZGluZz0xIGV2ZW50X3NlbCAwMDAwMDAwMDAw
MDAwMDAxClsgIDE3OC4zMDIzNzFdICAgMzogbWFza2VkPTEgcGVuZGluZz0xIGV2ZW50X3NlbCAw
MDAwMDAwMDAwMDAwMDAxClsgIDE3OC4zMDg0NTVdICAgClsgIDE3OC4zMTQ1NDFdIHBlbmRpbmc6
ClsgIDE3OC4zMTQ1NDFdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxNzguMzI5Mzk3XSAg
ICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTc4LjM0MzM1OF0gICAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwClsgIDE3OC4zNTczMjBdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxNzguMzcx
MjgxXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTc4LjM4NTI0Ml0gICAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwClsgIDE3OC4zOTkyMDNdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAx
NzguNDEzMTY1XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAyOGUwMDQKWyAgMTc4LjQyNzEyNl0gICAgClsg
IDE3OC40MzA0MzddIGdsb2JhbCBtYXNrOgpbICAxNzguNDMwNDM3XSAgICBmZmZmZmZmZmZmZmZm
ZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZm
ZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZm
ZmZmZmZmZmYKWyAgMTc4LjQ0NTY1MV0gICAgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZm
ZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZm
ZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmClsgIDE3OC40
NTk2MTJdICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZm
ZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZm
ZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZgpbICAxNzguNDczNTczXSAgICBmZmZmZmZm
ZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZm
ZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZm
ZmZmZmZmZmZmZmZmZmYKWyAgMTc4LjQ4NzUzNV0gICAgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZm
ZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZm
ZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmClsg
IDE3OC41MDE0OTZdICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZm
ZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZm
ZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZgpbICAxNzguNTE1NDU3XSAgICBm
ZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZm
ZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZm
ZmZmIGZmZmZmZmZmZmZmZmZmZmYKWyAgMTc4LjUyOTQxOF0gICAgZmZmZmZmZmZmZmZmZmZmZiBm
ZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZm
ZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmY4MDA4MTA0
MTA1ClsgIDE3OC41NDMzNzldICAgIApbICAxNzguNTQ2NjkwXSBnbG9iYWxseSB1bm1hc2tlZDoK
WyAgMTc4LjU0NjY5MV0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDE3OC41NjI0NDFdICAg
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxNzguNTc2NDAzXSAgICAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAKWyAgMTc4LjU5MDM2NV0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDE3OC42MDQz
MjVdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxNzguNjE4Mjg2XSAgICAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAKWyAgMTc4LjYzMjI0N10gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDE3
OC42NDYyMDldICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDI4YTAwMApbICAxNzguNjYwMTcwXSAgICAKWyAg
MTc4LjY2MzQ4MV0gbG9jYWwgY3B1MiBtYXNrOgpbICAxNzguNjYzNDgyXSAgICAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAKWyAgMTc4LjY3OTA1M10gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDE3
OC42OTMwMTVdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxNzguNzA2OTc2XSAgICAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAKWyAgMTc4LjcyMDkzN10gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
ClsgIDE3OC43MzQ4OThdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxNzguNzQ4ODU5XSAg
ICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTc4Ljc2MjgyMF0gICAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDdlMDAwClsgIDE3OC43NzY3ODRdICAgIApbICAxNzguNzgwMDkzXSBsb2NhbGx5IHVubWFza2Vk
OgpbICAxNzguNzgwMDkzXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTc4Ljc5NTc1NV0g
ICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDE3OC44MDk3MTZdICAgIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMApbICAxNzguODIzNjc3XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTc4Ljgz
NzYzOF0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDE3OC44NTE1OTldICAgIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMApbICAxNzguODY1NTYxXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAg
MTc4Ljg3OTUyMl0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDBhMDAwClsgIDE3OC44OTM0ODRdICAgIApb
ICAxNzguODk2Nzk0XSBwZW5kaW5nIGxpc3Q6ClsgIDE3OC44OTk4MzhdICAgMDogZXZlbnQgMiAt
PiBpcnEgMjczIGdsb2JhbGx5LW1hc2tlZCBsb2NhbGx5LW1hc2tlZApbICAxNzguOTA2MjgxXSAg
IDI6IGV2ZW50IDEzIC0+IGlycSAyODQKWyAgMTc4LjkxMDAzOV0gICAyOiBldmVudCAxNCAtPiBp
cnEgMjg1IGdsb2JhbGx5LW1hc2tlZApbICAxNzguOTE1MjMxXSAgIDI6IGV2ZW50IDE1IC0+IGly
cSAyODYKWyAgMTc4LjkxODk5MF0gICAzOiBldmVudCAxOSAtPiBpcnEgMjkwIGxvY2FsbHktbWFz
a2VkClsgIDE3OC45MjQwOTFdICAgMzogZXZlbnQgMjEgLT4gaXJxIDI5MiBsb2NhbGx5LW1hc2tl
ZApbICAxNzguOTI5MjE0XSAKWyAgMTc4LjkyOTIxNV0gdmNwdSAzClsgIDE3OC45MjkyMTZdICAg
MDogbWFza2VkPTAgcGVuZGluZz0wIGV2ZW50X3NlbCAwMDAwMDAwMDAwMDAwMDAwClsgIDE3OC45
MzQ0NzNdICAgMTogbWFza2VkPTAgcGVuZGluZz0xIGV2ZW50X3NlbCAwMDAwMDAwMDAwMDAwMDAx
ClsgIDE3OC45NDA1NThdICAgMjogbWFza2VkPTEgcGVuZGluZz0wIGV2ZW50X3NlbCAwMDAwMDAw
MDAwMDAwMDAwClsgIDE3OC45NDY2NDRdICAgMzogbWFza2VkPTAgcGVuZGluZz0xIGV2ZW50X3Nl
bCAwMDAwMDAwMDAwMDAwMDAxClsgIDE3OC45NTI3MjldICAgClsgIDE3OC45NTg4MTVdIHBlbmRp
bmc6ClsgIDE3OC45NTg4MTZdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxNzguOTczNjcx
XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTc4Ljk4NzYzMl0gICAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwClsgIDE3OS4wMDE1OTRdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxNzku
MDE1NTU0XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTc5LjAyOTUxNV0gICAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwClsgIDE3OS4wNDM0NzddICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApb
ICAxNzkuMDU3NDM4XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAzODQwMDQKWyAgMTc5LjA3MTM5OV0gICAg
ClsgIDE3OS4wNzQ3MTBdIGdsb2JhbCBtYXNrOgpbICAxNzkuMDc0NzExXSAgICBmZmZmZmZmZmZm
ZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZm
IGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZm
ZmZmZmZmZmZmZmYKWyAgMTc5LjA4OTkyNV0gICAgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZm
ZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZm
IGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmClsgIDE3
OS4xMDM4ODVdICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZm
ZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZm
IGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZgpbICAxNzkuMTE3ODQ3XSAgICBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZm
ZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZm
IGZmZmZmZmZmZmZmZmZmZmYKWyAgMTc5LjEzMTgwOF0gICAgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZm
ZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZm
ClsgIDE3OS4xNDU3NzBdICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZm
ZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZgpbICAxNzkuMTU5NzMwXSAg
ICBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZm
ZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYKWyAgMTc5LjE3MzY5Ml0gICAgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmY4MDA4
MTA0MTA1ClsgIDE3OS4xODc2NTJdICAgIApbICAxNzkuMTkwOTYzXSBnbG9iYWxseSB1bm1hc2tl
ZDoKWyAgMTc5LjE5MDk2NF0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDE3OS4yMDY3MTVd
ICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxNzkuMjIwNjc2XSAgICAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAKWyAgMTc5LjIzNDYzOF0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDE3OS4y
NDg1OThdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxNzkuMjYyNTYwXSAgICAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAKWyAgMTc5LjI3NjUyMl0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsg
IDE3OS4yOTA0ODJdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDI4MDAwMApbICAxNzkuMzA0NDQ2XSAgICAK
WyAgMTc5LjMwNzc1NV0gbG9jYWwgY3B1MyBtYXNrOgpbICAxNzkuMzA3NzU2XSAgICAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAKWyAgMTc5LjMyMzMyN10gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsg
IDE3OS4zMzcyODhdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxNzkuMzUxMjQ5XSAgICAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTc5LjM2NTIxMV0gICAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwClsgIDE3OS4zNzkxNzJdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxNzkuMzkzMTMz
XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTc5LjQwNzA5NF0gICAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAxZjgwMDAwClsgIDE3OS40MjEwNTVdICAgIApbICAxNzkuNDI0MzY3XSBsb2NhbGx5IHVubWFz
a2VkOgpbICAxNzkuNDI0MzY3XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTc5LjQ0MDAy
OV0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDE3OS40NTM5OTBdICAgIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMApbICAxNzkuNDY3OTUxXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTc5
LjQ4MTkxMl0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDE3OS40OTU4NzNdICAgIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMApbICAxNzkuNTA5ODM1XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAK
WyAgMTc5LjUyMzc5NV0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMjgwMDAwClsgIDE3OS41Mzc3NTddICAg
IApbICAxNzkuNTQxMDY4XSBwZW5kaW5nIGxpc3Q6ClsgIDE3OS41NDQxMTFdICAgMDogZXZlbnQg
MiAtPiBpcnEgMjczIGdsb2JhbGx5LW1hc2tlZCBsb2NhbGx5LW1hc2tlZApbICAxNzkuNTUwNTU1
XSAgIDI6IGV2ZW50IDE0IC0+IGlycSAyODUgZ2xvYmFsbHktbWFza2VkIGxvY2FsbHktbWFza2Vk
ClsgIDE3OS41NTcwODhdICAgMzogZXZlbnQgMTkgLT4gaXJxIDI5MApbICAxNzkuNTYwODQ3XSAg
IDM6IGV2ZW50IDIwIC0+IGlycSAyOTEgZ2xvYmFsbHktbWFza2VkClsgIDE3OS41NjYwMzhdICAg
MzogZXZlbnQgMjEgLT4gaXJxIDI5MgoK
--14dae9340a33ade82404c6b2a306
Content-Type: text/plain; charset=US-ASCII; name="syslog-good.txt"
Content-Disposition: attachment; filename="syslog-good.txt"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_h5lfctae3

WyAgMTE2Ljk3MDEzMF0gaXdsd2lmaSAwMDAwOjAyOjAwLjA6IEwxIERpc2FibGVkOyBFbmFibGlu
ZyBMMFMKWyAgMTE2Ljk4MjEyNV0gaXdsd2lmaSAwMDAwOjAyOjAwLjA6IFJhZGlvIHR5cGU9MHgx
LTB4Mi0weDAKWyAgMTE5LjY4MDQ5NV0gaXdsd2lmaSAwMDAwOjAyOjAwLjA6IFBDSSBJTlQgQSBk
aXNhYmxlZApbICAxMjAuMzU0NjcxXSBici1ldGgwOiBwb3J0IDEoZXRoMCkgZW50ZXJpbmcgZm9y
d2FyZGluZyBzdGF0ZQpbICAxMjAuMzY4Njk0XSBici1ldGgwOiBwb3J0IDEoZXRoMCkgZW50ZXJp
bmcgZm9yd2FyZGluZyBzdGF0ZQpbICAxMjAuMzc0MzU5XSBici1ldGgwOiBwb3J0IDEoZXRoMCkg
ZW50ZXJpbmcgZm9yd2FyZGluZyBzdGF0ZQpbICAxMjAuMzgwNzg2XSBici1ldGgwOiBwb3J0IDEo
ZXRoMCkgZW50ZXJpbmcgZm9yd2FyZGluZyBzdGF0ZQpbICAxMjAuNzUwMTAxXSBlMTAwMGUgMDAw
MDowMDoxOS4wOiBQQ0kgSU5UIEEgZGlzYWJsZWQKWyAgMTIxLjA4MDAwOF0gZWhjaV9oY2QgMDAw
MDowMDoxZC4wOiByZW1vdmUsIHN0YXRlIDEKWyAgMTIxLjA4NDg1N10gdXNiIHVzYjQ6IFVTQiBk
aXNjb25uZWN0LCBkZXZpY2UgbnVtYmVyIDEKWyAgMTIxLjA5MDE0NV0gdXNiIDQtMTogVVNCIGRp
c2Nvbm5lY3QsIGRldmljZSBudW1iZXIgMgpbICAxMjEuMDk1NDQwXSB1c2IgNC0xLjU6IFVTQiBk
aXNjb25uZWN0LCBkZXZpY2UgbnVtYmVyIDMKWyAgMTIxLjIyMTc1MF0gdXNiIDQtMS42OiBVU0Ig
ZGlzY29ubmVjdCwgZGV2aWNlIG51bWJlciA0ClsgIDEyMS41NTYwMTFdIGVoY2lfaGNkIDAwMDA6
MDA6MWQuMDogVVNCIGJ1cyA0IGRlcmVnaXN0ZXJlZApbICAxMjEuNTYyNjc4XSBlaGNpX2hjZCAw
MDAwOjAwOjFkLjA6IFBDSSBJTlQgQSBkaXNhYmxlZApbICAxMjEuNTY3ODAwXSBlaGNpX2hjZCAw
MDAwOjAwOjFhLjA6IHJlbW92ZSwgc3RhdGUgNApbICAxMjEuNTczMjk0XSB1c2IgdXNiMzogVVNC
IGRpc2Nvbm5lY3QsIGRldmljZSBudW1iZXIgMQpbICAxMjEuNTc4NDA1XSB1c2IgMy0xOiBVU0Ig
ZGlzY29ubmVjdCwgZGV2aWNlIG51bWJlciAyClsgIDEyMS41ODgzOTddIGVoY2lfaGNkIDAwMDA6
MDA6MWEuMDogVVNCIGJ1cyAzIGRlcmVnaXN0ZXJlZApbICAxMjEuNTk0MjAyXSBlaGNpX2hjZCAw
MDAwOjAwOjFhLjA6IFBDSSBJTlQgQSBkaXNhYmxlZApbICAxMjEuNjcwMDIxXSB4aGNpX2hjZCAw
MDAwOjAwOjE0LjA6IHJlbW92ZSwgc3RhdGUgNApbICAxMjEuNjc0ODY3XSB1c2IgdXNiMjogVVNC
IGRpc2Nvbm5lY3QsIGRldmljZSBudW1iZXIgMQpbICAxMjEuNjgwODY3XSB4SENJIHhoY2lfZHJv
cF9lbmRwb2ludCBjYWxsZWQgZm9yIHJvb3QgaHViClsgIDEyMS42ODYxNThdIHhIQ0kgeGhjaV9j
aGVja19iYW5kd2lkdGggY2FsbGVkIGZvciByb290IGh1YgpbICAxMjEuNjkyNzA5XSB4aGNpX2hj
ZCAwMDAwOjAwOjE0LjA6IFVTQiBidXMgMiBkZXJlZ2lzdGVyZWQKWyAgMTIxLjY5ODQ2MF0geGhj
aV9oY2QgMDAwMDowMDoxNC4wOiByZW1vdmUsIHN0YXRlIDQKWyAgMTIxLjcwMzgyMl0gdXNiIHVz
YjE6IFVTQiBkaXNjb25uZWN0LCBkZXZpY2UgbnVtYmVyIDEKWyAgMTIxLjcwOTE1Nl0geEhDSSB4
aGNpX2Ryb3BfZW5kcG9pbnQgY2FsbGVkIGZvciByb290IGh1YgpbICAxMjEuNzE0NDU1XSB4SENJ
IHhoY2lfY2hlY2tfYmFuZHdpZHRoIGNhbGxlZCBmb3Igcm9vdCBodWIKWyAgMTIxLjcyMDU2MF0g
eGhjaV9oY2QgMDAwMDowMDoxNC4wOiBVU0IgYnVzIDEgZGVyZWdpc3RlcmVkClsgIDEyMS43NzAz
MzRdIHhoY2lfaGNkIDAwMDA6MDA6MTQuMDogUENJIElOVCBBIGRpc2FibGVkClsgIDEyMi4xMTcx
NDNdIFBNOiBTeW5jaW5nIGZpbGVzeXN0ZW1zIC4uLiBkb25lLgpbICAxMjIuMTIyMDAwXSBQTTog
UHJlcGFyaW5nIHN5c3RlbSBmb3IgbWVtIHNsZWVwClsgIDEyMi4zNjAwMjldIEZyZWV6aW5nIHVz
ZXIgc3BhY2UgcHJvY2Vzc2VzIC4uLiAoZWxhcHNlZCAwLjAxIHNlY29uZHMpIGRvbmUuClsgIDEy
Mi4zODI1OTVdIEZyZWV6aW5nIHJlbWFpbmluZyBmcmVlemFibGUgdGFza3MgLi4uIChlbGFwc2Vk
IDAuMDEgc2Vjb25kcykgZG9uZS4KWyAgMTIyLjQwMjU5NV0gUE06IEVudGVyaW5nIG1lbSBzbGVl
cApbICAxMjIuNDA2NTE0XSBzZCAwOjA6MDowOiBbc2RhXSBTeW5jaHJvbml6aW5nIFNDU0kgY2Fj
aGUKWyAgMTIyLjQxMTc3OF0gc2QgMDowOjA6MDogW3NkYV0gU3RvcHBpbmcgZGlzawpbICAxMjIu
NDE2NDQxXSBBQ1BJIGhhbmRsZSBoYXMgbm8gY29udGV4dCEKWyAgMTIyLjUyMDA2OF0gc25kX2hk
YV9pbnRlbCAwMDAwOjAwOjFiLjA6IFBDSSBJTlQgQSBkaXNhYmxlZApbICAxMjIuNTM5OTg2XSBQ
TTogc3VzcGVuZCBvZiBkcnY6c25kX2hkYV9pbnRlbCBkZXY6MDAwMDowMDoxYi4wIGNvbXBsZXRl
IGFmdGVyIDEyMy42MTkgbXNlY3MKWyAgMTIyLjU0ODQyNV0gUE06IHN1c3BlbmQgb2YgZHJ2OiBk
ZXY6cGNpMDAwMDowMCBjb21wbGV0ZSBhZnRlciAxMzEuODkwIG1zZWNzClsgIDEyMi41NTU2NzVd
IFBNOiBzdXNwZW5kIG9mIGRldmljZXMgY29tcGxldGUgYWZ0ZXIgMTQ5LjI1MCBtc2VjcwpbICAx
MjIuNTYxODM4XSBQTTogc3VzcGVuZCBkZXZpY2VzIHRvb2sgMC4xNjAgc2Vjb25kcwpbICAxMjIu
NTY3NzU0XSBQTTogbGF0ZSBzdXNwZW5kIG9mIGRldmljZXMgY29tcGxldGUgYWZ0ZXIgMC45MDkg
bXNlY3MKWyAgMTIyLjU3NDQ0NV0gQUNQSTogUHJlcGFyaW5nIHRvIGVudGVyIHN5c3RlbSBzbGVl
cCBzdGF0ZSBTMwpbICAxMjIuNTgwMjc3XSBQTTogU2F2aW5nIHBsYXRmb3JtIE5WUyBtZW1vcnkK
WyAgMTIyLjc3ODQ5Ml0gRGlzYWJsaW5nIG5vbi1ib290IENQVXMgLi4uCihYRU4pIFByZXBhcmlu
ZyBzeXN0ZW0gZm9yIEFDUEkgUzMgc3RhdGUuCihYRU4pIERpc2FibGluZyBub24tYm9vdCBDUFVz
IC4uLgooWEVOKSBCcmVha2luZyB2Y3B1IGFmZmluaXR5IGZvciBkb21haW4gMCB2Y3B1IDEKKFhF
TikgQnJlYWtpbmcgdmNwdSBhZmZpbml0eSBmb3IgZG9tYWluIDAgdmNwdSAyCihYRU4pIEJyZWFr
aW5nIHZjcHUgYWZmaW5pdHkgZm9yIGRvbWFpbiAwIHZjcHUgMwooWEVOKSBFbnRlcmluZyBBQ1BJ
IFMzIHN0YXRlLgooWEVOKSBtY2VfaW50ZWwuYzoxMjM5OiBNQ0EgQ2FwYWJpbGl0eTogQkNBU1Qg
MSBTRVIgMCBDTUNJIDEgZmlyc3RiYW5rIDAgZXh0ZW5kZWQgTUNFIE1TUiAwCihYRU4pIENQVTAg
Q01DSSBMVlQgdmVjdG9yICgweGYxKSBhbHJlYWR5IGluc3RhbGxlZAooWEVOKSBGaW5pc2hpbmcg
d2FrZXVwIGZyb20gQUNQSSBTMyBzdGF0ZS4KKFhFTikgbWljcm9jb2RlOiBjb2xsZWN0X2NwdV9p
bmZvIDogc2lnPTB4MzA2YTQsIHBmPTB4MiwgcmV2PTB4NwooWEVOKSBFbmFibGluZyBub24tYm9v
dCBDUFVzICAuLi4KKFhFTikgbWljcm9jb2RlOiBjb2xsZWN0X2NwdV9pbmZvIDogc2lnPTB4MzA2
YTQsIHBmPTB4MiwgcmV2PTB4NwooWEVOKSBtaWNyb2NvZGU6IGNvbGxlY3RfY3B1X2luZm8gOiBz
aWc9MHgzMDZhNCwgcGY9MHgyLCByZXY9MHg3CihYRU4pIG1pY3JvY29kZTogY29sbGVjdF9jcHVf
aW5mbyA6IHNpZz0weDMwNmE0LCBwZj0weDIsIHJldj0weDcKWyAgMTIzLjg2MjczOV0gQUNQSTog
TG93LWxldmVsIHJlc3VtZSBjb21wbGV0ZQpbICAxMjMuODY3MDU2XSBQTTogUmVzdG9yaW5nIHBs
YXRmb3JtIE5WUyBtZW1vcnkKKFhFTikgbWljcm9jb2RlOiBjb2xsZWN0X2NwdV9pbmZvIDogc2ln
PTB4MzA2YTQsIHBmPTB4MiwgcmV2PTB4NwooWEVOKSBtaWNyb2NvZGU6IGNvbGxlY3RfY3B1X2lu
Zm8gOiBzaWc9MHgzMDZhNCwgcGY9MHgyLCByZXY9MHg3CihYRU4pIG1pY3JvY29kZTogY29sbGVj
dF9jcHVfaW5mbyA6IHNpZz0weDMwNmE0LCBwZj0weDIsIHJldj0weDcKKFhFTikgbWljcm9jb2Rl
OiBjb2xsZWN0X2NwdV9pbmZvIDogc2lnPTB4MzA2YTQsIHBmPTB4MiwgcmV2PTB4NwpbICAxMjQu
MDE5NzE1XSBFbmFibGluZyBub24tYm9vdCBDUFVzIC4uLgpbICAxMjQuMDIzNTkzXSBpbnN0YWxs
aW5nIFhlbiB0aW1lciBmb3IgQ1BVIDEKWyAgMTI0LjAyNzgzM10gY3B1IDEgc3BpbmxvY2sgZXZl
bnQgaXJxIDI3OQpbICAxMjQuMDM0MDgwXSBDUFUxIGlzIHVwClsgIDEyNC4wMzY2MjVdIGluc3Rh
bGxpbmcgWGVuIHRpbWVyIGZvciBDUFUgMgpbICAxMjQuMDQwNzkzXSBjcHUgMiBzcGlubG9jayBl
dmVudCBpcnEgMjg1ClsgIDEyNC4wNDU5MDNdIENQVTIgaXMgdXAKWyAgMTI0LjA0ODM5OF0gaW5z
dGFsbGluZyBYZW4gdGltZXIgZm9yIENQVSAzClsgIDEyNC4wNTI1NjFdIGNwdSAzIHNwaW5sb2Nr
IGV2ZW50IGlycSAyOTEKWyAgMTI0LjA1Nzc4Ml0gQ1BVMyBpcyB1cApbICAxMjQuMDYxOTU5XSBB
Q1BJOiBXYWtpbmcgdXAgZnJvbSBzeXN0ZW0gc2xlZXAgc3RhdGUgUzMKWyAgMTI0LjA2NzU2N10g
aTkxNSAwMDAwOjAwOjAyLjA6IHJlc3RvcmluZyBjb25maWcgc3BhY2UgYXQgb2Zmc2V0IDB4ZiAo
d2FzIDB4MTAwLCB3cml0aW5nIDB4MTBiKQpbICAxMjQuMDc2Mzg0XSBpOTE1IDAwMDA6MDA6MDIu
MDogcmVzdG9yaW5nIGNvbmZpZyBzcGFjZSBhdCBvZmZzZXQgMHgxICh3YXMgMHg5MDAwMDcsIHdy
aXRpbmcgMHg5MDA0MDcpClsgIDEyNC4wODU4OTldIHBjaSAwMDAwOjAwOjE0LjA6IHJlc3Rvcmlu
ZyBjb25maWcgc3BhY2UgYXQgb2Zmc2V0IDB4ZiAod2FzIDB4MTAwLCB3cml0aW5nIDB4MTBiKQpb
ICAxMjQuMDk0NzI3XSBwY2kgMDAwMDowMDoxNC4wOiByZXN0b3JpbmcgY29uZmlnIHNwYWNlIGF0
IG9mZnNldCAweDQgKHdhcyAweDQsIHdyaXRpbmcgMHhiMDJiMDAwNCkKWyAgMTI0LjEwMzgzOV0g
cGNpIDAwMDA6MDA6MTQuMDogcmVzdG9yaW5nIGNvbmZpZyBzcGFjZSBhdCBvZmZzZXQgMHgxICh3
YXMgMHgyOTAwMDAwLCB3cml0aW5nIDB4MjkwMDAwMikKWyAgMTI0LjExMzQ0NF0gcGNpIDAwMDA6
MDA6MTYuMDogcmVzdG9yaW5nIGNvbmZpZyBzcGFjZSBhdCBvZmZzZXQgMHhmICh3YXMgMHgxMDAs
IHdyaXRpbmcgMHgxMGIpClsgIDEyNC4xMjIyOTNdIHBjaSAwMDAwOjAwOjE2LjA6IHJlc3Rvcmlu
ZyBjb25maWcgc3BhY2UgYXQgb2Zmc2V0IDB4NCAod2FzIDB4ZmVkYjAwMDQsIHdyaXRpbmcgMHhi
MDJhMDAwNCkKWyAgMTI0LjEzMjAyNV0gcGNpIDAwMDA6MDA6MTYuMDogcmVzdG9yaW5nIGNvbmZp
ZyBzcGFjZSBhdCBvZmZzZXQgMHgxICh3YXMgMHgxODAwMDYsIHdyaXRpbmcgMHgxMDAwMDYpClsg
IDEyNC4xNDE0NTldIHNlcmlhbCAwMDAwOjAwOjE2LjM6IHJlc3RvcmluZyBjb25maWcgc3BhY2Ug
YXQgb2Zmc2V0IDB4ZiAod2FzIDB4MjAwLCB3cml0aW5nIDB4MjBhKQpbICAxMjQuMTUwNTcyXSBz
ZXJpYWwgMDAwMDowMDoxNi4zOiByZXN0b3JpbmcgY29uZmlnIHNwYWNlIGF0IG9mZnNldCAweDUg
KHdhcyAweDAsIHdyaXRpbmcgMHhiMDI5MDAwMCkKWyAgMTI0LjE1OTk0NF0gc2VyaWFsIDAwMDA6
MDA6MTYuMzogcmVzdG9yaW5nIGNvbmZpZyBzcGFjZSBhdCBvZmZzZXQgMHg0ICh3YXMgMHgxLCB3
cml0aW5nIDB4MzBlMSkKWyAgMTI0LjE2ODk4OF0gc2VyaWFsIDAwMDA6MDA6MTYuMzogcmVzdG9y
aW5nIGNvbmZpZyBzcGFjZSBhdCBvZmZzZXQgMHgxICh3YXMgMHhiMDAwMDAsIHdyaXRpbmcgMHhi
MDAwMDcpClsgIDEyNC4xNzg2OTddIHBjaSAwMDAwOjAwOjE5LjA6IHJlc3RvcmluZyBjb25maWcg
c3BhY2UgYXQgb2Zmc2V0IDB4ZiAod2FzIDB4MTAwLCB3cml0aW5nIDB4MTA1KQpbICAxMjQuMTg3
NTQ2XSBwY2kgMDAwMDowMDoxOS4wOiByZXN0b3JpbmcgY29uZmlnIHNwYWNlIGF0IG9mZnNldCAw
eDEgKHdhcyAweDEwMDAwMiwgd3JpdGluZyAweDEwMDAwMykKWyAgMTI0LjE5Njk1NV0gcGNpIDAw
MDA6MDA6MWEuMDogcmVzdG9yaW5nIGNvbmZpZyBzcGFjZSBhdCBvZmZzZXQgMHhmICh3YXMgMHgx
MDAsIHdyaXRpbmcgMHgxMGIpClsgIDEyNC4yMDU3OTNdIHBjaSAwMDAwOjAwOjFhLjA6IHJlc3Rv
cmluZyBjb25maWcgc3BhY2UgYXQgb2Zmc2V0IDB4NCAod2FzIDB4MCwgd3JpdGluZyAweGIwMjcw
MDAwKQpbICAxMjQuMjE0ODk2XSBwY2kgMDAwMDowMDoxYS4wOiByZXN0b3JpbmcgY29uZmlnIHNw
YWNlIGF0IG9mZnNldCAweDEgKHdhcyAweDI5MDAwMDAsIHdyaXRpbmcgMHgyOTAwMDAyKQpbICAx
MjQuMjI0NTI0XSBzbmRfaGRhX2ludGVsIDAwMDA6MDA6MWIuMDogcmVzdG9yaW5nIGNvbmZpZyBz
cGFjZSBhdCBvZmZzZXQgMHhmICh3YXMgMHgxMDAsIHdyaXRpbmcgMHgxMDMpClsgIDEyNC4yMzQy
NTRdIHNuZF9oZGFfaW50ZWwgMDAwMDowMDoxYi4wOiByZXN0b3JpbmcgY29uZmlnIHNwYWNlIGF0
IG9mZnNldCAweDQgKHdhcyAweDQsIHdyaXRpbmcgMHhiMDI2MDAwNCkKWyAgMTI0LjI0NDI0OF0g
c25kX2hkYV9pbnRlbCAwMDAwOjAwOjFiLjA6IHJlc3RvcmluZyBjb25maWcgc3BhY2UgYXQgb2Zm
c2V0IDB4MyAod2FzIDB4MCwgd3JpdGluZyAweDEwKQpbICAxMjQuMjUzNzM5XSBzbmRfaGRhX2lu
dGVsIDAwMDA6MDA6MWIuMDogcmVzdG9yaW5nIGNvbmZpZyBzcGFjZSBhdCBvZmZzZXQgMHgxICh3
YXMgMHgxMDAwMDAsIHdyaXRpbmcgMHgxMDAwMDIpClsgIDEyNC4yNjQwODldIHBjaWVwb3J0IDAw
MDA6MDA6MWMuMDogcmVzdG9yaW5nIGNvbmZpZyBzcGFjZSBhdCBvZmZzZXQgMHhmICh3YXMgMHgx
MDAsIHdyaXRpbmcgMHgxMGIpClsgIDEyNC4yNzMzNTZdIHBjaWVwb3J0IDAwMDA6MDA6MWMuMDog
cmVzdG9yaW5nIGNvbmZpZyBzcGFjZSBhdCBvZmZzZXQgMHg3ICh3YXMgMHhmMCwgd3JpdGluZyAw
eDIwMDAwMGYwKQpbICAxMjQuMjgzMDA4XSBwY2llcG9ydCAwMDAwOjAwOjFjLjA6IHJlc3Rvcmlu
ZyBjb25maWcgc3BhY2UgYXQgb2Zmc2V0IDB4MyAod2FzIDB4ODEwMDAwLCB3cml0aW5nIDB4ODEw
MDEwKQpbICAxMjQuMjkyODQ4XSBwY2llcG9ydCAwMDAwOjAwOjFjLjA6IHJlc3RvcmluZyBjb25m
aWcgc3BhY2UgYXQgb2Zmc2V0IDB4MSAod2FzIDB4MTAwMDAwLCB3cml0aW5nIDB4MTAwMDA3KQpb
ICAxMjQuMzAyNzYwXSBwY2llcG9ydCAwMDAwOjAwOjFjLjY6IHJlc3RvcmluZyBjb25maWcgc3Bh
Y2UgYXQgb2Zmc2V0IDB4ZiAod2FzIDB4MzAwLCB3cml0aW5nIDB4MzA0KQpbICAxMjQuMzEyMDE4
XSBwY2llcG9ydCAwMDAwOjAwOjFjLjY6IHJlc3RvcmluZyBjb25maWcgc3BhY2UgYXQgb2Zmc2V0
IDB4NyAod2FzIDB4ZjAsIHdyaXRpbmcgMHgyMDAwMDBmMCkKWyAgMTI0LjMyMTY3MF0gcGNpZXBv
cnQgMDAwMDowMDoxYy42OiByZXN0b3JpbmcgY29uZmlnIHNwYWNlIGF0IG9mZnNldCAweDMgKHdh
cyAweDgxMDAwMCwgd3JpdGluZyAweDgxMDAxMCkKWyAgMTI0LjMzMTUwOV0gcGNpZXBvcnQgMDAw
MDowMDoxYy42OiByZXN0b3JpbmcgY29uZmlnIHNwYWNlIGF0IG9mZnNldCAweDEgKHdhcyAweDEw
MDAwMCwgd3JpdGluZyAweDEwMDAwNykKWyAgMTI0LjM0MTQyMl0gcGNpZXBvcnQgMDAwMDowMDox
Yy43OiByZXN0b3JpbmcgY29uZmlnIHNwYWNlIGF0IG9mZnNldCAweGYgKHdhcyAweDQwMCwgd3Jp
dGluZyAweDQwYSkKWyAgMTI0LjM1MDY3N10gcGNpZXBvcnQgMDAwMDowMDoxYy43OiByZXN0b3Jp
bmcgY29uZmlnIHNwYWNlIGF0IG9mZnNldCAweDcgKHdhcyAweGYwLCB3cml0aW5nIDB4MjAwMDAw
ZjApClsgIDEyNC4zNjAzMzNdIHBjaWVwb3J0IDAwMDA6MDA6MWMuNzogcmVzdG9yaW5nIGNvbmZp
ZyBzcGFjZSBhdCBvZmZzZXQgMHgzICh3YXMgMHg4MTAwMDAsIHdyaXRpbmcgMHg4MTAwMTApClsg
IDEyNC4zNzAyMzddIHBjaSAwMDAwOjAwOjFkLjA6IHJlc3RvcmluZyBjb25maWcgc3BhY2UgYXQg
b2Zmc2V0IDB4ZiAod2FzIDB4MTAwLCB3cml0aW5nIDB4MTBiKQpbICAxMjQuMzc5MDU1XSBwY2kg
MDAwMDowMDoxZC4wOiByZXN0b3JpbmcgY29uZmlnIHNwYWNlIGF0IG9mZnNldCAweDQgKHdhcyAw
eDAsIHdyaXRpbmcgMHhiMDI1MDAwMCkKWyAgMTI0LjM4ODE2Ml0gcGNpIDAwMDA6MDA6MWQuMDog
cmVzdG9yaW5nIGNvbmZpZyBzcGFjZSBhdCBvZmZzZXQgMHgxICh3YXMgMHgyOTAwMDAwLCB3cml0
aW5nIDB4MjkwMDAwMikKWyAgMTI0LjM5Nzk2M10gYWhjaSAwMDAwOjAwOjFmLjI6IHJlc3Rvcmlu
ZyBjb25maWcgc3BhY2UgYXQgb2Zmc2V0IDB4MSAod2FzIDB4MmIwMDAwNywgd3JpdGluZyAweDJi
MDA0MDcpClsgIDEyNC40MDc1ODFdIHBjaSAwMDAwOjAwOjFmLjM6IHJlc3RvcmluZyBjb25maWcg
c3BhY2UgYXQgb2Zmc2V0IDB4MSAod2FzIDB4MjgwMDAwMSwgd3JpdGluZyAweDI4MDAwMDMpClsg
IDEyNC40MTcxMDVdIHBjaSAwMDAwOjAyOjAwLjA6IHJlc3RvcmluZyBjb25maWcgc3BhY2UgYXQg
b2Zmc2V0IDB4ZiAod2FzIDB4MTAwLCB3cml0aW5nIDB4MTA0KQpbICAxMjQuNDI1OTU0XSBwY2kg
MDAwMDowMjowMC4wOiByZXN0b3JpbmcgY29uZmlnIHNwYWNlIGF0IG9mZnNldCAweDQgKHdhcyAw
eDQsIHdyaXRpbmcgMHhiMDEwMDAwNCkKWyAgMTI0LjQzNTAzMl0gcGNpIDAwMDA6MDI6MDAuMDog
cmVzdG9yaW5nIGNvbmZpZyBzcGFjZSBhdCBvZmZzZXQgMHgzICh3YXMgMHgwLCB3cml0aW5nIDB4
MTApClsgIDEyNC40NDM2MzNdIHBjaSAwMDAwOjAyOjAwLjA6IHJlc3RvcmluZyBjb25maWcgc3Bh
Y2UgYXQgb2Zmc2V0IDB4MSAod2FzIDB4MTAwMDAwLCB3cml0aW5nIDB4MTAwMDAyKQpbICAxMjQu
NDUzMTI4XSBwY2kgMDAwMDowMzowMC4wOiByZXN0b3JpbmcgY29uZmlnIHNwYWNlIGF0IG9mZnNl
dCAweDkgKHdhcyAweDEwMDAxLCB3cml0aW5nIDB4MWZmZjEpClsgIDEyNC40NjIyNDRdIHBjaSAw
MDAwOjAzOjAwLjA6IHJlc3RvcmluZyBjb25maWcgc3BhY2UgYXQgb2Zmc2V0IDB4NyAod2FzIDB4
MjJhMDAxMDEsIHdyaXRpbmcgMHgyMmEwMDFmMSkKWyAgMTI0LjQ3MjAwNl0gcGNpIDAwMDA6MDM6
MDAuMDogcmVzdG9yaW5nIGNvbmZpZyBzcGFjZSBhdCBvZmZzZXQgMHgzICh3YXMgMHgxMDAwMCwg
d3JpdGluZyAweDEwMDEwKQpbICAxMjQuNDgxMjc5XSBwY2kgMDAwMDowNDowMC4wOiByZXN0b3Jp
bmcgY29uZmlnIHNwYWNlIGF0IG9mZnNldCAweGYgKHdhcyAweDQwMjAxMDAsIHdyaXRpbmcgMHg0
MDIwMTBhKQpbICAxMjQuNDkwODIyXSBwY2kgMDAwMDowNDowMC4wOiByZXN0b3JpbmcgY29uZmln
IHNwYWNlIGF0IG9mZnNldCAweDUgKHdhcyAweDAsIHdyaXRpbmcgMHhiMDAwMDAwMCkKWyAgMTI0
LjQ5OTkyMV0gcGNpIDAwMDA6MDQ6MDAuMDogcmVzdG9yaW5nIGNvbmZpZyBzcGFjZSBhdCBvZmZz
ZXQgMHgzICh3YXMgMHgwLCB3cml0aW5nIDB4MjAxMCkKWyAgMTI0LjUwODcxOF0gc2VyaWFsIDAw
MDA6MDU6MDEuMDogcmVzdG9yaW5nIGNvbmZpZyBzcGFjZSBhdCBvZmZzZXQgMHhmICh3YXMgMHgx
ZmYsIHdyaXRpbmcgMHgxMDMpClsgIDEyNC41MTc4MjldIHNlcmlhbCAwMDAwOjA1OjAxLjA6IHJl
c3RvcmluZyBjb25maWcgc3BhY2UgYXQgb2Zmc2V0IDB4OSAod2FzIDB4MSwgd3JpdGluZyAweDIw
MDEpClsgIDEyNC41MjY4NTNdIHNlcmlhbCAwMDAwOjA1OjAxLjA6IHJlc3RvcmluZyBjb25maWcg
c3BhY2UgYXQgb2Zmc2V0IDB4OCAod2FzIDB4MSwgd3JpdGluZyAweDIwMTEpClsgIDEyNC41MzU4
OTNdIHNlcmlhbCAwMDAwOjA1OjAxLjA6IHJlc3RvcmluZyBjb25maWcgc3BhY2UgYXQgb2Zmc2V0
IDB4NyAod2FzIDB4MSwgd3JpdGluZyAweDIwMjEpClsgIDEyNC41NDQ5MjldIHNlcmlhbCAwMDAw
OjA1OjAxLjA6IHJlc3RvcmluZyBjb25maWcgc3BhY2UgYXQgb2Zmc2V0IDB4NiAod2FzIDB4MSwg
d3JpdGluZyAweDIwMzEpClsgIDEyNC41NTM5NzFdIHNlcmlhbCAwMDAwOjA1OjAxLjA6IHJlc3Rv
cmluZyBjb25maWcgc3BhY2UgYXQgb2Zmc2V0IDB4NSAod2FzIDB4MSwgd3JpdGluZyAweDIwNDEp
ClsgIDEyNC41NjMwMTJdIHNlcmlhbCAwMDAwOjA1OjAxLjA6IHJlc3RvcmluZyBjb25maWcgc3Bh
Y2UgYXQgb2Zmc2V0IDB4MyAod2FzIDB4OCwgd3JpdGluZyAweDIwMTApClsgIDEyNC41NzIxMjhd
IFBNOiBlYXJseSByZXN1bWUgb2YgZGV2aWNlcyBjb21wbGV0ZSBhZnRlciA1MDQuNjQ1IG1zZWNz
ClsgIDEyNC41Nzg3MzBdIGk5MTUgMDAwMDowMDowMi4wOiBzZXR0aW5nIGxhdGVuY3kgdGltZXIg
dG8gNjQKWyAgMTI0LjU3ODc0M10geGVuOiByZWdpc3RlcmluZyBnc2kgMjIgdHJpZ2dlcmluZyAw
IHBvbGFyaXR5IDEKWyAgMTI0LjU3ODc0N10gQWxyZWFkeSBzZXR1cCB0aGUgR1NJIDoyMgpbICAx
MjQuNTc4NzQ5XSBzbmRfaGRhX2ludGVsIDAwMDA6MDA6MWIuMDogUENJIElOVCBBIC0+IEdTSSAy
MiAobGV2ZWwsIGxvdykgLT4gSVJRIDIyClsgIDEyNC41Nzg3NTNdIHBjaSAwMDAwOjAwOjFlLjA6
IHNldHRpbmcgbGF0ZW5jeSB0aW1lciB0byA2NApbICAxMjQuNTc4NzYxXSBzbmRfaGRhX2ludGVs
IDAwMDA6MDA6MWIuMDogc2V0dGluZyBsYXRlbmN5IHRpbWVyIHRvIDY0ClsgIDEyNC41Nzg3OTdd
IGFoY2kgMDAwMDowMDoxZi4yOiBzZXR0aW5nIGxhdGVuY3kgdGltZXIgdG8gNjQKWyAgMTI0LjU3
ODk2M10gcGNpIDAwMDA6MDM6MDAuMDogc2V0dGluZyBsYXRlbmN5IHRpbWVyIHRvIDY0ClsgIDEy
NC41Nzg5OThdIHNkIDA6MDowOjA6IFtzZGFdIFN0YXJ0aW5nIGRpc2sKWyAgMTI0LjkyNTM3NF0g
YXRhMzogU0FUQSBsaW5rIHVwIDEuNSBHYnBzIChTU3RhdHVzIDExMyBTQ29udHJvbCAzMDApClsg
IDEyNC45NDUzODNdIGF0YTE6IFNBVEEgbGluayB1cCA2LjAgR2JwcyAoU1N0YXR1cyAxMzMgU0Nv
bnRyb2wgMzAwKQpbICAxMjQuOTc0MzUxXSBhdGEzLjAwOiBjb25maWd1cmVkIGZvciBVRE1BLzEw
MApbICAxMjUuMTUzNTQ5XSBQTTogcmVzdW1lIG9mIGRydjppOTE1IGRldjowMDAwOjAwOjAyLjAg
Y29tcGxldGUgYWZ0ZXIgNTc0LjgzMyBtc2VjcwpbICAxMjYuODUzOTY4XSBhdGExLjAwOiBjb25m
aWd1cmVkIGZvciBVRE1BLzEzMwpbICAxMjYuODc1NDI4XSBQTTogcmVzdW1lIG9mIGRydjpzZCBk
ZXY6MDowOjA6MCBjb21wbGV0ZSBhZnRlciAyMjk2LjQyOCBtc2VjcwpbICAxMjYuODgyNDM3XSBQ
TTogcmVzdW1lIG9mIGRydjpzY3NpX2RldmljZSBkZXY6MDowOjA6MCBjb21wbGV0ZSBhZnRlciAy
MzAzLjQxMyBtc2VjcwpbICAxMjYuODgyNDQ4XSBQTTogcmVzdW1lIG9mIGRydjpzY3NpX2Rpc2sg
ZGV2OjA6MDowOjAgY29tcGxldGUgYWZ0ZXIgMTcyMS4zMjAgbXNlY3MKWyAgMTI2Ljg5ODE5NF0g
UE06IHJlc3VtZSBvZiBkZXZpY2VzIGNvbXBsZXRlIGFmdGVyIDIzMTkuNTA2IG1zZWNzClsgIDEy
Ni45MDQ0MjJdIFBNOiByZXN1bWUgZGV2aWNlcyB0b29rIDIuMzIwIHNlY29uZHMKWyAgMTI2Ljkw
OTI3Nl0gUE06IEZpbmlzaGluZyB3YWtldXAuClsgIDEyNi45MTI3NTBdIFJlc3RhcnRpbmcgdGFz
a3MgLi4uIGRvbmUuClsgIDEyNi45MTczMTddIHZpZGVvIExOWFZJREVPOjAwOiBSZXN0b3Jpbmcg
YmFja2xpZ2h0IHN0YXRlClsgIDEyNi45OTM5MDddIFtkcm06cGNoX2lycV9oYW5kbGVyXSAqRVJS
T1IqIFBDSCBwb2lzb24gaW50ZXJydXB0ClsgIDEyNy41MzEwMDddIGVoY2lfaGNkOiBVU0IgMi4w
ICdFbmhhbmNlZCcgSG9zdCBDb250cm9sbGVyIChFSENJKSBEcml2ZXIKWyAgMTI3LjUzODAwNV0g
eGVuOiByZWdpc3RlcmluZyBnc2kgMTYgdHJpZ2dlcmluZyAwIHBvbGFyaXR5IDEKWyAgMTI3LjU0
MzY1OF0gQWxyZWFkeSBzZXR1cCB0aGUgR1NJIDoxNgpbICAxMjcuNTQ3NTE1XSBlaGNpX2hjZCAw
MDAwOjAwOjFhLjA6IFBDSSBJTlQgQSAtPiBHU0kgMTYgKGxldmVsLCBsb3cpIC0+IElSUSAxNgpb
ICAxMjcuNTU1MDAwXSBlaGNpX2hjZCAwMDAwOjAwOjFhLjA6IHNldHRpbmcgbGF0ZW5jeSB0aW1l
ciB0byA2NApbICAxMjcuNTYxMDMxXSBlaGNpX2hjZCAwMDAwOjAwOjFhLjA6IEVIQ0kgSG9zdCBD
b250cm9sbGVyClsgIDEyNy41NjY1NjFdIGVoY2lfaGNkIDAwMDA6MDA6MWEuMDogbmV3IFVTQiBi
dXMgcmVnaXN0ZXJlZCwgYXNzaWduZWQgYnVzIG51bWJlciAxClsgIDEyNy41NzQyMTRdIGVoY2lf
aGNkIDAwMDA6MDA6MWEuMDogZGVidWcgcG9ydCAyClsgIDEyNy41ODI4MjNdIGVoY2lfaGNkIDAw
MDA6MDA6MWEuMDogY2FjaGUgbGluZSBzaXplIG9mIDY0IGlzIG5vdCBzdXBwb3J0ZWQKWyAgMTI3
LjU4OTc4M10gZWhjaV9oY2QgMDAwMDowMDoxYS4wOiBpcnEgMTYsIGlvIG1lbSAweGIwMjcwMDAw
ClsgIDEyNy42MTUzNjFdIGVoY2lfaGNkIDAwMDA6MDA6MWEuMDogVVNCIDIuMCBzdGFydGVkLCBF
SENJIDEuMDAKWyAgMTI3LjYyMTM3OV0gaHViIDEtMDoxLjA6IFVTQiBodWIgZm91bmQKWyAgMTI3
LjYyNTQwM10gaHViIDEtMDoxLjA6IDMgcG9ydHMgZGV0ZWN0ZWQKWyAgMTI3LjcyNTQzMl0geGVu
OiByZWdpc3RlcmluZyBnc2kgMjMgdHJpZ2dlcmluZyAwIHBvbGFyaXR5IDEKWyAgMTI3LjczMTA4
NV0gQWxyZWFkeSBzZXR1cCB0aGUgR1NJIDoyMwpbICAxMjcuNzM0OTMyXSBlaGNpX2hjZCAwMDAw
OjAwOjFkLjA6IFBDSSBJTlQgQSAtPiBHU0kgMjMgKGxldmVsLCBsb3cpIC0+IElSUSAyMwpbICAx
MjcuNzQyNDEwXSBlaGNpX2hjZCAwMDAwOjAwOjFkLjA6IHNldHRpbmcgbGF0ZW5jeSB0aW1lciB0
byA2NApbICAxMjcuNzQ4NDU5XSBlaGNpX2hjZCAwMDAwOjAwOjFkLjA6IEVIQ0kgSG9zdCBDb250
cm9sbGVyClsgIDEyNy43NTM5OTVdIGVoY2lfaGNkIDAwMDA6MDA6MWQuMDogbmV3IFVTQiBidXMg
cmVnaXN0ZXJlZCwgYXNzaWduZWQgYnVzIG51bWJlciAyClsgIDEyNy43NjE2NTVdIGVoY2lfaGNk
IDAwMDA6MDA6MWQuMDogZGVidWcgcG9ydCAyClsgIDEyNy43NzAyMjddIGVoY2lfaGNkIDAwMDA6
MDA6MWQuMDogY2FjaGUgbGluZSBzaXplIG9mIDY0IGlzIG5vdCBzdXBwb3J0ZWQKWyAgMTI3Ljc3
NzE3Ml0gZWhjaV9oY2QgMDAwMDowMDoxZC4wOiBpcnEgMjMsIGlvIG1lbSAweGIwMjUwMDAwClsg
IDEyNy44MDUzNjZdIGVoY2lfaGNkIDAwMDA6MDA6MWQuMDogVVNCIDIuMCBzdGFydGVkLCBFSENJ
IDEuMDAKWyAgMTI3LjgxMTM1MF0gaHViIDItMDoxLjA6IFVTQiBodWIgZm91bmQKWyAgMTI3Ljgx
NTEzNF0gaHViIDItMDoxLjA6IDMgcG9ydHMgZGV0ZWN0ZWQKWyAgMTI3Ljg3NjE4MV0geGVuOiBy
ZWdpc3RlcmluZyBnc2kgMTYgdHJpZ2dlcmluZyAwIHBvbGFyaXR5IDEKWyAgMTI3Ljg4MTgzNl0g
QWxyZWFkeSBzZXR1cCB0aGUgR1NJIDoxNgpbICAxMjcuODg1NzA0XSB4aGNpX2hjZCAwMDAwOjAw
OjE0LjA6IFBDSSBJTlQgQSAtPiBHU0kgMTYgKGxldmVsLCBsb3cpIC0+IElSUSAxNgpbICAxMjcu
ODkzMTkwXSB4aGNpX2hjZCAwMDAwOjAwOjE0LjA6IHNldHRpbmcgbGF0ZW5jeSB0aW1lciB0byA2
NApbICAxMjcuODk5MjEyXSB4aGNpX2hjZCAwMDAwOjAwOjE0LjA6IHhIQ0kgSG9zdCBDb250cm9s
bGVyClsgIDEyNy45MDQ3MTZdIHhoY2lfaGNkIDAwMDA6MDA6MTQuMDogbmV3IFVTQiBidXMgcmVn
aXN0ZXJlZCwgYXNzaWduZWQgYnVzIG51bWJlciAzClsgIDEyNy45MTI1MzJdIHhoY2lfaGNkIDAw
MDA6MDA6MTQuMDogY2FjaGUgbGluZSBzaXplIG9mIDY0IGlzIG5vdCBzdXBwb3J0ZWQKWyAgMTI3
LjkxOTQ1MF0geGhjaV9oY2QgMDAwMDowMDoxNC4wOiBpcnEgMTYsIGlvIG1lbSAweGIwMmIwMDAw
ClsgIDEyNy45MjU2MTJdIHhIQ0kgeGhjaV9hZGRfZW5kcG9pbnQgY2FsbGVkIGZvciByb290IGh1
YgpbICAxMjcuOTMwODEyXSB4SENJIHhoY2lfY2hlY2tfYmFuZHdpZHRoIGNhbGxlZCBmb3Igcm9v
dCBodWIKWyAgMTI3LjkzNjQ4OV0gaHViIDMtMDoxLjA6IFVTQiBodWIgZm91bmQKWyAgMTI3Ljk0
MDM5Ml0gaHViIDMtMDoxLjA6IDQgcG9ydHMgZGV0ZWN0ZWQKWyAgMTI3Ljk0NTM4MF0gdXNiIDEt
MTogbmV3IGhpZ2gtc3BlZWQgVVNCIGRldmljZSBudW1iZXIgMiB1c2luZyBlaGNpX2hjZApbICAx
MjcuOTg1Mzg0XSB4aGNpX2hjZCAwMDAwOjAwOjE0LjA6IHhIQ0kgSG9zdCBDb250cm9sbGVyClsg
IDEyNy45OTA3NjBdIHhoY2lfaGNkIDAwMDA6MDA6MTQuMDogbmV3IFVTQiBidXMgcmVnaXN0ZXJl
ZCwgYXNzaWduZWQgYnVzIG51bWJlciA0ClsgIDEyNy45OTg1MThdIHhIQ0kgeGhjaV9hZGRfZW5k
cG9pbnQgY2FsbGVkIGZvciByb290IGh1YgpbICAxMjguMDAzNzQ3XSB4SENJIHhoY2lfY2hlY2tf
YmFuZHdpZHRoIGNhbGxlZCBmb3Igcm9vdCBodWIKWyAgMTI4LjAwOTQzOV0gaHViIDQtMDoxLjA6
IFVTQiBodWIgZm91bmQKWyAgMTI4LjAxMzMyOV0gaHViIDQtMDoxLjA6IDQgcG9ydHMgZGV0ZWN0
ZWQKWyAgMTI4LjA5NjIyMV0gaHViIDEtMToxLjA6IFVTQiBodWIgZm91bmQKWyAgMTI4LjEwMDA0
N10gaHViIDEtMToxLjA6IDYgcG9ydHMgZGV0ZWN0ZWQKWyAgMTI4LjE5MDMyNV0gY2ZnODAyMTE6
IENhbGxpbmcgQ1JEQSB0byB1cGRhdGUgd29ybGQgcmVndWxhdG9yeSBkb21haW4KWyAgMTI4LjIw
NTI4Ml0gSW50ZWwoUikgV2lyZWxlc3MgV2lGaSBMaW5rIEFHTiBkcml2ZXIgZm9yIExpbnV4LCBp
bi10cmVlOgpbICAxMjguMjExOTQ5XSBDb3B5cmlnaHQoYykgMjAwMy0yMDExIEludGVsIENvcnBv
cmF0aW9uClsgIDEyOC4yMTcxODhdIHhlbjogcmVnaXN0ZXJpbmcgZ3NpIDE4IHRyaWdnZXJpbmcg
MCBwb2xhcml0eSAxClsgIDEyOC4yMjI5MjddIEFscmVhZHkgc2V0dXAgdGhlIEdTSSA6MTgKWyAg
MTI4LjIyNTM3Ml0gdXNiIDItMTogbmV3IGhpZ2gtc3BlZWQgVVNCIGRldmljZSBudW1iZXIgMiB1
c2luZyBlaGNpX2hjZApbICAxMjguMjMzNjExXSBpd2x3aWZpIDAwMDA6MDI6MDAuMDogUENJIElO
VCBBIC0+IEdTSSAxOCAobGV2ZWwsIGxvdykgLT4gSVJRIDE4ClsgIDEyOC4yNDA5MzVdIGl3bHdp
ZmkgMDAwMDowMjowMC4wOiBzZXR0aW5nIGxhdGVuY3kgdGltZXIgdG8gNjQKWyAgMTI4LjI0Njk1
OF0gaXdsd2lmaSAwMDAwOjAyOjAwLjA6IHBjaV9yZXNvdXJjZV9sZW4gPSAweDAwMDAyMDAwClsg
IDEyOC4yNTMwODRdIGl3bHdpZmkgMDAwMDowMjowMC4wOiBwY2lfcmVzb3VyY2VfYmFzZSA9IGZm
ZmZjOTAwMTU5ZjgwMDAKWyAgMTI4LjI1OTg5NF0gaXdsd2lmaSAwMDAwOjAyOjAwLjA6IEhXIFJl
dmlzaW9uIElEID0gMHgzNApbICAxMjguMjY1NTk1XSBpd2x3aWZpIDAwMDA6MDI6MDAuMDogQ09O
RklHX0lXTFdJRklfREVCVUcgZGlzYWJsZWQKWyAgMTI4LjI3MTYwMF0gaXdsd2lmaSAwMDAwOjAy
OjAwLjA6IENPTkZJR19JV0xXSUZJX0RFQlVHRlMgZGlzYWJsZWQKWyAgMTI4LjI3Nzk2M10gaXds
d2lmaSAwMDAwOjAyOjAwLjA6IENPTkZJR19JV0xXSUZJX0RFVklDRV9UUkFDSU5HIGVuYWJsZWQK
WyAgMTI4LjI4NDg0Nl0gaXdsd2lmaSAwMDAwOjAyOjAwLjA6IENPTkZJR19JV0xXSUZJX0RFVklD
RV9URVNUTU9ERSBkaXNhYmxlZApbICAxMjguMjkxOTMyXSBpd2x3aWZpIDAwMDA6MDI6MDAuMDog
Q09ORklHX0lXTFdJRklfUDJQIGRpc2FibGVkClsgIDEyOC4yOTc5MjNdIGl3bHdpZmkgMDAwMDow
MjowMC4wOiBEZXRlY3RlZCBJbnRlbChSKSBDZW50cmlubyhSKSBBZHZhbmNlZC1OIDYyMDUgQUdO
LCBSRVY9MHhCMApbICAxMjguMzA2ODg1XSBpd2x3aWZpIDAwMDA6MDI6MDAuMDogTDEgRGlzYWJs
ZWQ7IEVuYWJsaW5nIEwwUwpbICAxMjguMzI5NzQ5XSBpd2x3aWZpIDAwMDA6MDI6MDAuMDogZGV2
aWNlIEVFUFJPTSBWRVI9MHg3MTUsIENBTElCPTB4NgpbICAxMjguMzM2MjA3XSBpd2x3aWZpIDAw
MDA6MDI6MDAuMDogRGV2aWNlIFNLVTogMHgxRjAKWyAgMTI4LjM0MTMwM10gaXdsd2lmaSAwMDAw
OjAyOjAwLjA6IFZhbGlkIFR4IGFudDogMHgzLCBWYWxpZCBSeCBhbnQ6IDB4MwpbICAxMjguMzQ4
MTIwXSBpd2x3aWZpIDAwMDA6MDI6MDAuMDogVHVuYWJsZSBjaGFubmVsczogMTMgODAyLjExYmcs
IDI0IDgwMi4xMWEgY2hhbm5lbHMKWyAgMTI4LjM2MzQxN10gaXdsd2lmaSAwMDAwOjAyOjAwLjA6
IGxvYWRlZCBmaXJtd2FyZSB2ZXJzaW9uIDE3LjE2OC41LjMgYnVpbGQgNDIzMDEKWyAgMTI4LjM3
MTQ5MF0gUmVnaXN0ZXJlZCBsZWQgZGV2aWNlOiBwaHkwLWxlZApbICAxMjguMzc1NzU1XSBjZmc4
MDIxMTogSWdub3JpbmcgcmVndWxhdG9yeSByZXF1ZXN0IFNldCBieSBjb3JlIHNpbmNlIHRoZSBk
cml2ZXIgdXNlcyBpdHMgb3duIGN1c3RvbSByZWd1bGF0b3J5IGRvbWFpbgpbICAxMjguMzg2ODA5
XSBpZWVlODAyMTEgcGh5MDogU2VsZWN0ZWQgcmF0ZSBjb250cm9sIGFsZ29yaXRobSAnaXdsLWFn
bi1ycycKWyAgMTI4LjM5NjA3NF0gaHViIDItMToxLjA6IFVTQiBodWIgZm91bmQKWyAgMTI4LjM5
OTkxOF0gaHViIDItMToxLjA6IDggcG9ydHMgZGV0ZWN0ZWQKWyAgMTI4LjQwMDk1M10gaXdsd2lm
aSAwMDAwOjAyOjAwLjA6IEwxIERpc2FibGVkOyBFbmFibGluZyBMMFMKWyAgMTI4LjQwNzI3Nl0g
aXdsd2lmaSAwMDAwOjAyOjAwLjA6IFJhZGlvIHR5cGU9MHgxLTB4Mi0weDAKWyAgMTI4LjQyMDE1
MF0gZTEwMDBlOiBJbnRlbChSKSBQUk8vMTAwMCBOZXR3b3JrIERyaXZlciAtIDEuNS4xLWsKWyAg
MTI4LjQyNjExNF0gZTEwMDBlOiBDb3B5cmlnaHQoYykgMTk5OSAtIDIwMTEgSW50ZWwgQ29ycG9y
YXRpb24uClsgIDEyOC40MzIyNzVdIHhlbjogcmVnaXN0ZXJpbmcgZ3NpIDIwIHRyaWdnZXJpbmcg
MCBwb2xhcml0eSAxClsgIDEyOC40MzgwODRdIEFscmVhZHkgc2V0dXAgdGhlIEdTSSA6MjAKWyAg
MTI4LjQ0MTkxOF0gZTEwMDBlIDAwMDA6MDA6MTkuMDogUENJIElOVCBBIC0+IEdTSSAyMCAobGV2
ZWwsIGxvdykgLT4gSVJRIDIwClsgIDEyOC40NDkyMDFdIGUxMDAwZSAwMDAwOjAwOjE5LjA6IHNl
dHRpbmcgbGF0ZW5jeSB0aW1lciB0byA2NApbICAxMjguNzA1NTMxXSB1c2IgMi0xLjU6IG5ldyBs
b3ctc3BlZWQgVVNCIGRldmljZSBudW1iZXIgMyB1c2luZyBlaGNpX2hjZApbICAxMjguNzE5NjA0
XSBpd2x3aWZpIDAwMDA6MDI6MDAuMDogTDEgRGlzYWJsZWQ7IEVuYWJsaW5nIEwwUwpbICAxMjgu
NzMxNTk2XSBpd2x3aWZpIDAwMDA6MDI6MDAuMDogUmFkaW8gdHlwZT0weDEtMHgyLTB4MApbICAx
MjguODMxNTcyXSBpbnB1dDogVVNCIE9wdGljYWwgTW91c2UgYXMgL2RldmljZXMvcGNpMDAwMDow
MC8wMDAwOjAwOjFkLjAvdXNiMi8yLTEvMi0xLjUvMi0xLjU6MS4wL2lucHV0L2lucHV0MTEKWyAg
MTI4Ljg0MjM5OV0gZ2VuZXJpYy11c2IgMDAwMzowNEIzOjMxMEMuMDAwMzogaW5wdXQsaGlkcmF3
MDogVVNCIEhJRCB2MS4xMSBNb3VzZSBbVVNCIE9wdGljYWwgTW91c2VdIG9uIHVzYi0wMDAwOjAw
OjFkLjAtMS41L2lucHV0MApbICAxMjguODU5MTExXSBlMTAwMGUgMDAwMDowMDoxOS4wOiBldGgw
OiAoUENJIEV4cHJlc3M6Mi41R1QvczpXaWR0aCB4MSkgMDA6MTM6MjA6Zjk6ZGU6MjQKWyAgMTI4
Ljg2NzI4OF0gZTEwMDBlIDAwMDA6MDA6MTkuMDogZXRoMDogSW50ZWwoUikgUFJPLzEwMDAgTmV0
d29yayBDb25uZWN0aW9uClsgIDEyOC44NzQ1NzhdIGUxMDAwZSAwMDAwOjAwOjE5LjA6IGV0aDA6
IE1BQzogMTAsIFBIWTogMTEsIFBCQSBObzogRkZGRkZGLTBGRgpbICAxMjguODgyNDE3XSBkZXZp
Y2UgZXRoMCBlbnRlcmVkIHByb21pc2N1b3VzIG1vZGUKWyAgMTI4Ljk0NTUzNF0gdXNiIDItMS42
OiBuZXcgbG93LXNwZWVkIFVTQiBkZXZpY2UgbnVtYmVyIDQgdXNpbmcgZWhjaV9oY2QKWyAgMTI5
LjA4NzU0NV0gaW5wdXQ6IExJVEUtT04gVGVjaG5vbG9neSBVU0IgTmV0VmlzdGEgRnVsbCBXaWR0
aCBLZXlib2FyZC4gYXMgL2RldmljZXMvcGNpMDAwMDowMC8wMDAwOjAwOjFkLjAvdXNiMi8yLTEv
Mi0xLjYvMi0xLjY6MS4wL2lucHV0L2lucHV0MTIKWyAgMTI5LjEwMjAzMV0gZ2VuZXJpYy11c2Ig
MDAwMzowNEIzOjMwMjUuMDAwNDogaW5wdXQsaGlkcmF3MTogVVNCIEhJRCB2MS4xMCBLZXlib2Fy
ZCBbTElURS1PTiBUZWNobm9sb2d5IFVTQiBOZXRWaXN0YSBGdWxsIFdpZHRoIEtleWJvYXJkLl0g
b24gdXNiLTAwMDA6MDA6MWQuMC0xLjYvaW5wdXQwClsgIDEyOS44ODUwNTJdIGNmZzgwMjExOiBG
b3VuZCBuZXcgYmVhY29uIG9uIGZyZXF1ZW5jeTogNTE4MCBNSHogKENoIDM2KSBvbiBwaHkwClsg
IDEyOS44OTIzMzZdIGNmZzgwMjExOiBGb3VuZCBuZXcgYmVhY29uIG9uIGZyZXF1ZW5jeTogNTE4
MCBNSHogKENoIDM2KSBvbiBwaHkwClsgIDEyOS44OTk3NjBdIGNmZzgwMjExOiBQZW5kaW5nIHJl
Z3VsYXRvcnkgcmVxdWVzdCwgd2FpdGluZyBmb3IgaXQgdG8gYmUgcHJvY2Vzc2VkLi4uClsgIDEz
MC4wMDA2MjddIGNmZzgwMjExOiBGb3VuZCBuZXcgYmVhY29uIG9uIGZyZXF1ZW5jeTogNTIwMCBN
SHogKENoIDQwKSBvbiBwaHkwClsgIDEzMC4wMDc5MTddIGNmZzgwMjExOiBGb3VuZCBuZXcgYmVh
Y29uIG9uIGZyZXF1ZW5jeTogNTIwMCBNSHogKENoIDQwKSBvbiBwaHkwClsgIDEzMC4wMTUzMjZd
IGNmZzgwMjExOiBQZW5kaW5nIHJlZ3VsYXRvcnkgcmVxdWVzdCwgd2FpdGluZyBmb3IgaXQgdG8g
YmUgcHJvY2Vzc2VkLi4uClsgIDEzMC4wOTI4MTBdIGNmZzgwMjExOiBGb3VuZCBuZXcgYmVhY29u
IG9uIGZyZXF1ZW5jeTogNTIyMCBNSHogKENoIDQ0KSBvbiBwaHkwClsgIDEzMC4xMDAxMDBdIGNm
ZzgwMjExOiBGb3VuZCBuZXcgYmVhY29uIG9uIGZyZXF1ZW5jeTogNTIyMCBNSHogKENoIDQ0KSBv
biBwaHkwClsgIDEzMC4xMDc1MjddIGNmZzgwMjExOiBQZW5kaW5nIHJlZ3VsYXRvcnkgcmVxdWVz
dCwgd2FpdGluZyBmb3IgaXQgdG8gYmUgcHJvY2Vzc2VkLi4uClsgIDEzMC4xNjI0NjddIGNmZzgw
MjExOiBGb3VuZCBuZXcgYmVhY29uIG9uIGZyZXF1ZW5jeTogNTI0MCBNSHogKENoIDQ4KSBvbiBw
aHkwClsgIDEzMC4xNjk3NDVdIGNmZzgwMjExOiBQZW5kaW5nIHJlZ3VsYXRvcnkgcmVxdWVzdCwg
d2FpdGluZyBmb3IgaXQgdG8gYmUgcHJvY2Vzc2VkLi4uClsgIDEzMi4wNDExNDRdIGUxMDAwZTog
ZXRoMCBOSUMgTGluayBpcyBVcCAxMDAwIE1icHMgRnVsbCBEdXBsZXgsIEZsb3cgQ29udHJvbDog
UngvVHgKWyAgMTMyLjA0ODk2MF0gYnItZXRoMDogcG9ydCAxKGV0aDApIGVudGVyaW5nIGZvcndh
cmRpbmcgc3RhdGUKWyAgMTMyLjA1NDY5OF0gYnItZXRoMDogcG9ydCAxKGV0aDApIGVudGVyaW5n
IGZvcndhcmRpbmcgc3RhdGUKWyAgMTMyLjIyMTQ3NF0gY2ZnODAyMTE6IEZvdW5kIG5ldyBiZWFj
b24gb24gZnJlcXVlbmN5OiA1Nzg1IE1IeiAoQ2ggMTU3KSBvbiBwaHkwClsgIDEzMi4yMjkyMDld
IGNmZzgwMjExOiBQZW5kaW5nIHJlZ3VsYXRvcnkgcmVxdWVzdCwgd2FpdGluZyBmb3IgaXQgdG8g
YmUgcHJvY2Vzc2VkLi4uCgo=
--14dae9340a33ade82404c6b2a306
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--14dae9340a33ade82404c6b2a306--


From xen-devel-bounces@lists.xen.org Tue Aug 07 20:15:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 20:15:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyqAk-0004zG-Gs; Tue, 07 Aug 2012 20:14:34 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ben.guthro@gmail.com>) id 1SyqAh-0004zB-4n
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 20:14:32 +0000
Received: from [85.158.138.51:48616] by server-6.bemta-3.messagelabs.com id
	15/A5-02321-62771205; Tue, 07 Aug 2012 20:14:30 +0000
X-Env-Sender: ben.guthro@gmail.com
X-Msg-Ref: server-13.tower-174.messagelabs.com!1344370460!9396063!1
X-Originating-IP: [209.85.161.173]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
  RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31008 invoked from network); 7 Aug 2012 20:14:21 -0000
Received: from mail-gg0-f173.google.com (HELO mail-gg0-f173.google.com)
	(209.85.161.173)
	by server-13.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 20:14:21 -0000
Received: by ggna5 with SMTP id a5so20875ggn.32
	for <xen-devel@lists.xen.org>; Tue, 07 Aug 2012 13:14:20 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=4pUaKMITuDLEOHc/ziQyE1criF7BPdF4udsLqgPPJK8=;
	b=m2dmrg9/1KrKu397uzBzoLiuOr1tIqOJ103qY0ijHQYTp65lfh+VSQzsj5VK+GBfKm
	i5vDMPE0+/OCDjqFk2BXyxNKuCf+Dyts+n5C3mAmes04wvGl18ksGl+4gXeqBpOmL43J
	2TtJm1E2IyY7+IiU3A3j3kMM2O+MVO3J+9LSpy1VhMjCW44r23ebXcpzU0uwsVCbZAmU
	ABJsB1pXCQcj62X91q7w2p7zz7WY5Y+ZHQ7t/5MZ7Yw3d3GZoCLuWfikaqbeuQAXMoUH
	PsKff++6nI5rdDrWpmYhIVE5Hrj6qKGWsLZLAv8WikIp/JKsfhOQLhSbQa+VMvvDcBNU
	w3hQ==
MIME-Version: 1.0
Received: by 10.50.15.196 with SMTP id z4mr3874908igc.52.1344370459404; Tue,
	07 Aug 2012 13:14:19 -0700 (PDT)
Received: by 10.64.6.4 with HTTP; Tue, 7 Aug 2012 13:14:19 -0700 (PDT)
In-Reply-To: <CAOvdn6XcRGKtJxGwJO8J+ZBJr89wfTLc1woW5rhtMM540H9h1w@mail.gmail.com>
References: <CAOvdn6WwgBD-2yf1uu3m9-16=Tb5KUrybXf2KibtnxKbP7YtwA@mail.gmail.com>
	<CAOvdn6UBW-yTOMd3YSf5M9JiHLRivyaKL-qzcKG8epszqaKmGg@mail.gmail.com>
	<20120807163320.GC15053@phenom.dumpdata.com>
	<CAOvdn6XcRGKtJxGwJO8J+ZBJr89wfTLc1woW5rhtMM540H9h1w@mail.gmail.com>
Date: Tue, 7 Aug 2012 16:14:19 -0400
X-Google-Sender-Auth: 3HJxXkrYQnLQkUfaw_dzffIw7Oo
Message-ID: <CAOvdn6XUoSeGoo7X=AJ1kpDxZB9HZEv+TJ4v-=FHcyR4=CHK-w@mail.gmail.com>
From: Ben Guthro <ben@guthro.net>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Content-Type: multipart/mixed; boundary=14dae9340a33ade82404c6b2a306
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen4.2 S3 regression?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--14dae9340a33ade82404c6b2a306
Content-Type: text/plain; charset=ISO-8859-1

Any suggestions on how best to chase this down?

The first S3 suspend/resume cycle works, but the second does not.

On the second try, no interrupts are ever delivered to ahci
(at least according to /proc/interrupts).
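For reference, one way to confirm the "no interrupts delivered" observation is to snapshot /proc/interrupts before and after a resume and diff the ahci counters. A minimal sketch (the sample lines below are illustrative stand-ins, not output from this machine):

```python
# Sketch: diff per-CPU interrupt counts for a device between two
# snapshots of /proc/interrupts. The sample text stands in for real
# reads of the file before and after a suspend/resume cycle.
before = """\
 40:  12345  0  0  0  PCI-MSI-edge  ahci
"""
after = """\
 40:  12345  0  0  0  PCI-MSI-edge  ahci
"""

def irq_counts(snapshot, device):
    """Sum the per-CPU interrupt counters on lines naming `device`."""
    total = 0
    for line in snapshot.splitlines():
        fields = line.split()
        if fields and fields[-1] == device:
            # Fields after "NN:" and before the chip/handler names are
            # per-CPU counters; keep only the purely numeric tokens.
            total += sum(int(f) for f in fields[1:] if f.isdigit())
    return total

delta = irq_counts(after, "ahci") - irq_counts(before, "ahci")
print("ahci interrupts since last snapshot:", delta)  # 0 => none delivered
```

A zero delta across a disk access is the symptom described above: the controller is wedged waiting on interrupts that never arrive.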


Syslog traces from the first (good) and second (bad) cycles are attached,
as well as the output of the '*' Ctrl-A debug-key handler in both cases.




On Tue, Aug 7, 2012 at 12:48 PM, Ben Guthro <ben@guthro.net> wrote:
> No - the issue seems to follow xen-4.2
>
> my test matrix looks as such:
>
> Xen    Linux              S3 result
> 4.0.3  3.2.23             OK
> 4.0.3  3.5                OK
> 4.2    3.2.23             FAIL
> 4.2    3.5                FAIL
> 4.2    3.2.23 pci=nomsi   OK
> 4.2    3.5 pci=nomsi      (untested)
>
>
>
>
> On Tue, Aug 7, 2012 at 12:33 PM, Konrad Rzeszutek Wilk
> <konrad.wilk@oracle.com> wrote:
>> On Tue, Aug 07, 2012 at 12:21:22PM -0400, Ben Guthro wrote:
>>> It looks like this regression may be related to MSI handling.
>>>
>>> "pci=nomsi" on the kernel command line seems to bypass the issue.
>>>
>>> Clearly, legacy interrupts are not ideal.
>>
>> This is with v3.5 kernel right? With the earlier one you did not have
>> this issue?
>>>
>>>
>>> On Tue, Aug 7, 2012 at 11:04 AM, Ben Guthro <ben@guthro.net> wrote:
>>> > I have been doing some experiments in upgrading the Xen version in a
>>> > future version of XenClient Enterprise, and I've been running into a
>>> > regression that I'm wondering if anyone else has seen.
>>> >
>>> > dom0 suspend/resume (S3) does not seem to be working for me.
>>> >
>>> > In swapping out components of the system, the common failure seems to
>>> > be when I use Xen-4.2 (upgraded from Xen-4.0.3)
>>> >
>>> > The first suspend seems to mostly work...but subsequent ones always
>>> > resume improperly.
>>> > By "improperly" - I see I/O failures, and stalls of many processes.
>>> >
>>> > Below is a log excerpt of 2 S3 attempts.
>>> >
>>> >
>>> > Has anyone else seen these failures?
>>> >
>>> > - Ben
>>> >
>>> >
>>> > (XEN) Preparing system for ACPI S3 state.
>>> > (XEN) Disabling non-boot CPUs ...
>>> > (XEN) Breaking vcpu affinity for domain 0 vcpu 1
>>> > (XEN) Breaking vcpu affinity for domain 0 vcpu 2
>>> > (XEN) Breaking vcpu affinity for domain 0 vcpu 3
>>> > (XEN) Entering ACPI S3 state.
>>> > (XEN) mce_intel.c:1239: MCA Capability: BCAST 1 SER 0 CMCI 1 firstbank
>>> > 0 extended MCE MSR 0
>>> > (XEN) CPU0 CMCI LVT vector (0xf1) already installed
>>> > (XEN) Finishing wakeup from ACPI S3 state.
>>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>>> > (XEN) Enabling non-boot CPUs  ...
>>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>>> > [   36.440696] [drm:pch_irq_handler] *ERROR* PCH poison interrupt
>>> > (XEN) Preparing system for ACPI S3 state.
>>> > (XEN) Disabling non-boot CPUs ...
>>> > (XEN) Entering ACPI S3 state.
>>> > (XEN) mce_intel.c:1239: MCA Capability: BCAST 1 SER 0 CMCI 1 firstbank
>>> > 0 extended MCE MSR 0
>>> > (XEN) CPU0 CMCI LVT vector (0xf1) already installed
>>> > (XEN) Finishing wakeup from ACPI S3 state.
>>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>>> > (XEN) Enabling non-boot CPUs  ...
>>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>>> > [   65.893235] [drm:pch_irq_handler] *ERROR* PCH poison interrupt
>>> > [   66.508829] ata3.00: revalidation failed (errno=-5)
>>> > [   66.508861] ata1.00: revalidation failed (errno=-5)
>>> > [   76.858815] ata3.00: revalidation failed (errno=-5)
>>> > [   76.898807] ata1.00: revalidation failed (errno=-5)
>>> > [  107.208817] ata3.00: revalidation failed (errno=-5)
>>> > [  107.288807] ata1.00: revalidation failed (errno=-5)
>>> > [  107.718866] pm_op(): scsi_bus_resume_common+0x0/0x60 returns 262144
>>> > [  107.718877] PM: Device 0:0:0:0 failed to resume async: error 262144
>>> > [  107.718913] end_request: I/O error, dev sda, sector 35193296
>>> > [  107.718919] Buffer I/O error on device dm-5, logical block 7690
>>> > [  107.718947] end_request: I/O error, dev sda, sector 35657184
>>> > [  107.718965] end_request: I/O error, dev sda, sector 246202760
>>> > [  107.718968] Buffer I/O error on device dm-6, logical block 26252801
>>> > [  107.718995] end_request: I/O error, dev sda, sector 254548368
>>> > [  107.719009] Aborting journal on device dm-6-8.
>>> > [  107.719021] end_request: I/O error, dev sda, sector 35164192
>>> > [  107.719023] Buffer I/O error on device dm-5, logical block 4052
>>> > [  107.719063] Aborting journal on device dm-5-8.
>>> > [  107.719085] end_request: I/O error, dev sda, sector 254546304
>>> > [  107.719097] Buffer I/O error on device dm-6, logical block 27295744
>>> > [  107.719129] JBD2: I/O error detected when updating journal
>>> > superblock for dm-6-8.
>>> > [  107.719141] end_request: I/O error, dev sda, sector 35656064
>>> > [  107.719146] Buffer I/O error on device dm-5, logical block 65536
>>> > [  107.719168] JBD2: I/O error detected when updating journal
>>> > superblock for dm-5-8.
>>> > [  107.870082] end_request: I/O error, dev sda, sector 35131776
>>> > [  107.875825] Buffer I/O error on device dm-5, logical block 0
>>> > [  107.881805] end_request: I/O error, dev sda, sector 35131776
>>> > [  107.887637] Buffer I/O error on device dm-5, logical block 0
>>> > [  107.893573] EXT4-fs error (device dm-5): ext4_journal_start_sb:327:
>>> > [  107.893579] EXT4-fs (dm-5): I/O error while writing superblock
>>> > [  107.893582] EXT4-fs error (device dm-5): ext4_journal_start_sb:327:
>>> > Detected aborted journal
>>> > [  107.893584] EXT4-fs (dm-5): Remounting filesystem read-only
>>> > [  107.893617] end_request: I/O error, dev sda, sector 35131776
>>> > [  107.893620] Buffer I/O error on device dm-5, logical block 0
>>> > [  107.893749] end_request: I/O error, dev sda, sector 36180352
>>> > [  107.893752] Buffer I/O error on device dm-6, logical block 0
>>> > [  107.893762] EXT4-fs error (device dm-6): ext4_journal_start_sb:327:
>>> > Detected aborted journal
>>> > [  107.893765] EXT4-fs (dm-6): Remounting filesystem read-only
>>> > [  107.893766] EXT4-fs (dm-6): previous I/O error to superblock detected
>>> > [  107.893784] end_request: I/O error, dev sda, sector 36180352
>>> > [  107.893787] Buffer I/O error on device dm-6, logical block 0
>>> > [  107.894467] EXT4-fs error (device dm-5): ext4_journal_start_sb:327:
>>> > Detected aborted journal
>>> > [  108.669763] end_request: I/O error, dev sda, sector 25957784
>>> > [  108.675555] Aborting journal on device dm-3-8.
>>> > [  108.680246] end_request: I/O error, dev sda, sector 25956736
>>> > [  108.686099] JBD2: I/O error detected when updating journal
>>> > superblock for dm-3-8.
>>> > [  108.693908] journal commit I/O error
>>> > [  108.755829] end_request: I/O error, dev sda, sector 17305984
>>> > [  108.761600] EXT4-fs error (device dm-3): ext4_journal_start_sb:327:
>>> > Detected aborted journal
>>> > [  108.770340] EXT4-fs (dm-3): Remounting filesystem read-only
>>> > [  108.776159] EXT4-fs (dm-3): previous I/O error to superblock detected
>>> > [  108.782904] end_request: I/O error, dev sda, sector 17305984
>>> > [  109.660011] end_request: I/O error, dev sda, sector 358788
>>> > [  109.665572] Buffer I/O error on device dm-1, logical block 46082
>>> > [  109.682479] end_request: I/O error, dev sda, sector 18832256
>>> > [  109.688246] end_request: I/O error, dev sda, sector 18832256
>>> > [  109.709559] end_request: I/O error, dev sda, sector 357762
>>> > [  109.715120] Buffer I/O error on device dm-1, logical block 45569
>>> > [  109.721506] end_request: I/O error, dev sda, sector 358790
>>> > [  109.727114] Buffer I/O error on device dm-1, logical block 46083
>>> > [  109.743714] end_request: I/O error, dev sda, sector 18832256
>>> > [  109.755555] end_request: I/O error, dev sda, sector 18832256
>>> > [  109.886187] end_request: I/O error, dev sda, sector 357764
>>> > [  109.891756] Buffer I/O error on device dm-1, logical block 45570
>>> > [  109.908344] end_request: I/O error, dev sda, sector 18832256
>>> > [  109.928369] end_request: I/O error, dev sda, sector 349574
>>> > [  109.933938] Buffer I/O error on device dm-1, logical block 41475
>>> > [  109.950336] end_request: I/O error, dev sda, sector 18832256
>>> > [  115.378875] end_request: I/O error, dev sda, sector 365000
>>> > [  115.384445] Aborting journal on device dm-1-8.
>>> > [  115.389120] end_request: I/O error, dev sda, sector 364930
>>> > [  115.394798] Buffer I/O error on device dm-1, logical block 49153
>>> > [  115.401101] JBD2: I/O error detected when updating journal
>>> > superblock for dm-1-8.
>>> > [  207.207426] end_request: I/O error, dev sda, sector 246192376
>>> > [  207.213313] end_request: I/O error, dev sda, sector 246192376
>>> > [  207.903181] end_request: I/O error, dev sda, sector 246192376
>>> > [  209.234399] end_request: I/O error, dev sda, sector 18518400
>>> > [  209.240221] end_request: I/O error, dev sda, sector 18518400
>>>
>>> _______________________________________________
>>> Xen-devel mailing list
>>> Xen-devel@lists.xen.org
>>> http://lists.xen.org/xen-devel
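
For anyone reproducing the pci=nomsi rows in the matrix above: the flag is appended to the dom0 kernel (module) line of the Xen boot entry. A hypothetical grub.cfg entry, with placeholder paths, version numbers, and dom0 options:

```shell
# Illustrative grub.cfg menuentry for booting Linux as dom0 under Xen
# with MSI disabled; file names and options here are placeholders.
menuentry 'Xen 4.2 / Linux 3.2.23 (pci=nomsi)' {
    multiboot /boot/xen-4.2.gz dom0_mem=1024M
    module    /boot/vmlinuz-3.2.23 root=/dev/sda1 ro console=tty0 pci=nomsi
    module    /boot/initrd.img-3.2.23
}
```

Note pci=nomsi goes on the Linux `module` line, not the `multiboot` line; it tells the dom0 kernel to fall back to legacy (IO-APIC) interrupts for all PCI devices.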

--14dae9340a33ade82404c6b2a306
Content-Type: text/plain; charset=US-ASCII; name="xen-dump-bad.txt"
Content-Disposition: attachment; filename="xen-dump-bad.txt"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_h5lfcta50

KFhFTikgKioqIFNlcmlhbCBpbnB1dCAtPiBYZW4gKHR5cGUgJ0NUUkwtYScgdGhyZWUgdGltZXMg
dG8gc3dpdGNoIGlucHV0IHRvIERPTTApCihYRU4pICcqJyBwcmVzc2VkIC0+IGZpcmluZyBhbGwg
ZGlhZ25vc3RpYyBrZXloYW5kbGVycwooWEVOKSBbZDogZHVtcCByZWdpc3RlcnNdCihYRU4pICdk
JyBwcmVzc2VkIC0+IGR1bXBpbmcgcmVnaXN0ZXJzCihYRU4pIAooWEVOKSAqKiogRHVtcGluZyBD
UFUwIGhvc3Qgc3RhdGU6ICoqKgooWEVOKSAtLS0tWyBYZW4tNC4yLjAtcmMyLXByZSAgeDg2XzY0
ICBkZWJ1Zz15ICBUYWludGVkOiAgICBDIF0tLS0tCihYRU4pIENQVTogICAgMAooWEVOKSBSSVA6
ICAgIGUwMDg6WzxmZmZmODJjNDgwMTNkNzdlPl0gbnMxNjU1MF9wb2xsKzB4MjcvMHgzMwooWEVO
KSBSRkxBR1M6IDAwMDAwMDAwMDAwMTAyODYgICBDT05URVhUOiBoeXBlcnZpc29yCihYRU4pIHJh
eDogZmZmZjgyYzQ4MDMwMjVhMCAgIHJieDogZmZmZjgyYzQ4MDMwMjQ4MCAgIHJjeDogMDAwMDAw
MDAwMDAwMDAwMwooWEVOKSByZHg6IDAwMDAwMDAwMDAwMDAwMDAgICByc2k6IGZmZmY4MmM0ODAy
ZTI1YzggICByZGk6IGZmZmY4MmM0ODAyNzE4MDAKKFhFTikgcmJwOiBmZmZmODJjNDgwMmI3ZTMw
ICAgcnNwOiBmZmZmODJjNDgwMmI3ZTMwICAgcjg6ICAwMDAwMDAwMDAwMDAwMDAxCihYRU4pIHI5
OiAgZmZmZjgzMDE0ODk5YWVhOCAgIHIxMDogMDAwMDAwNGJmN2Q0MTc4MyAgIHIxMTogMDAwMDAw
MDAwMDAwMDI0NgooWEVOKSByMTI6IGZmZmY4MmM0ODAyNzE4MDAgICByMTM6IGZmZmY4MmM0ODAx
M2Q3NTcgICByMTQ6IDAwMDAwMDRiZjdjNjg5NGMKKFhFTikgcjE1OiBmZmZmODJjNDgwMzAyMzA4
ICAgY3IwOiAwMDAwMDAwMDgwMDUwMDNiICAgY3I0OiAwMDAwMDAwMDAwMTAyNmYwCihYRU4pIGNy
MzogMDAwMDAwMDBhYTJjNTAwMCAgIGNyMjogZmZmZjg4MDAyNmJhYTEwOAooWEVOKSBkczogMDAw
MCAgIGVzOiAwMDAwICAgZnM6IDAwMDAgICBnczogMDAwMCAgIHNzOiBlMDEwICAgY3M6IGUwMDgK
KFhFTikgWGVuIHN0YWNrIHRyYWNlIGZyb20gcnNwPWZmZmY4MmM0ODAyYjdlMzA6CihYRU4pICAg
IGZmZmY4MmM0ODAyYjdlNjAgZmZmZjgyYzQ4MDEyODE3ZiAwMDAwMDAwMDAwMDAwMDAyIGZmZmY4
MmM0ODAyZTI1YzgKKFhFTikgICAgZmZmZjgyYzQ4MDMwMjQ4MCBmZmZmODMwMTQ4OTkyZDQwIGZm
ZmY4MmM0ODAyYjdlYjAgZmZmZjgyYzQ4MDEyODI4MQooWEVOKSAgICBmZmZmODJjNDgwMmI3ZjE4
IDAwMDAwMDAwMDAwMDAyNDYgMDAwMDAwNGJmN2Q0MTc4MyBmZmZmODJjNDgwMmQ4ODgwCihYRU4p
ICAgIGZmZmY4MmM0ODAyZDg4ODAgZmZmZjgyYzQ4MDJiN2YxOCBmZmZmZmZmZmZmZmZmZmZmIGZm
ZmY4MmM0ODAzMDIzMDgKKFhFTikgICAgZmZmZjgyYzQ4MDJiN2VlMCBmZmZmODJjNDgwMTI1NDA1
IGZmZmY4MmM0ODAyYjdmMTggZmZmZjgyYzQ4MDJiN2YxOAooWEVOKSAgICAwMDAwMDAwMGZmZmZm
ZmZmIDAwMDAwMDAwMDAwMDAwMDIgZmZmZjgyYzQ4MDJiN2VmMCBmZmZmODJjNDgwMTI1NDg0CihY
RU4pICAgIGZmZmY4MmM0ODAyYjdmMTAgZmZmZjgyYzQ4MDE1OGMwNSBmZmZmODMwMGFhNTg0MDAw
IGZmZmY4MzAwYWEwZmMwMDAKKFhFTikgICAgZmZmZjgyYzQ4MDJiN2RhOCAwMDAwMDAwMDAwMDAw
MDAwIGZmZmZmZmZmZmZmZmZmZmYgMDAwMDAwMDAwMDAwMDAwMAooWEVOKSAgICBmZmZmZmZmZjgx
YWFmZGEwIGZmZmZmZmZmODFhMDFlZTggZmZmZmZmZmY4MWEwMWZkOCAwMDAwMDAwMDAwMDAwMjQ2
CihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDEgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgZmZmZmZmZmY4MTAwMTNhYSAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwZGVhZGJlZWYgMDAwMDAwMDBkZWFkYmVlZgooWEVOKSAgICAwMDAwMDEw
MDAwMDAwMDAwIGZmZmZmZmZmODEwMDEzYWEgMDAwMDAwMDAwMDAwZTAzMyAwMDAwMDAwMDAwMDAw
MjQ2CihYRU4pICAgIGZmZmZmZmZmODFhMDFlZDAgMDAwMDAwMDAwMDAwZTAyYiAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgZmZmZjgzMDBhYTU4NDAwMAooWEVOKSAgICAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgWGVuIGNhbGwgdHJhY2U6CihYRU4p
ICAgIFs8ZmZmZjgyYzQ4MDEzZDc3ZT5dIG5zMTY1NTBfcG9sbCsweDI3LzB4MzMKKFhFTikgICAg
WzxmZmZmODJjNDgwMTI4MTdmPl0gZXhlY3V0ZV90aW1lcisweDRlLzB4NmMKKFhFTikgICAgWzxm
ZmZmODJjNDgwMTI4MjgxPl0gdGltZXJfc29mdGlycV9hY3Rpb24rMHhlNC8weDIxYQooWEVOKSAg
ICBbPGZmZmY4MmM0ODAxMjU0MDU+XSBfX2RvX3NvZnRpcnErMHg5NS8weGEwCihYRU4pICAgIFs8
ZmZmZjgyYzQ4MDEyNTQ4ND5dIGRvX3NvZnRpcnErMHgyNi8weDI4CihYRU4pICAgIFs8ZmZmZjgy
YzQ4MDE1OGMwNT5dIGlkbGVfbG9vcCsweDZmLzB4NzEKKFhFTikgICAgCihYRU4pICoqKiBEdW1w
aW5nIENQVTEgaG9zdCBzdGF0ZTogKioqCihYRU4pIC0tLS1bIFhlbi00LjIuMC1yYzItcHJlICB4
ODZfNjQgIGRlYnVnPXkgIFRhaW50ZWQ6ICAgIEMgXS0tLS0KKFhFTikgQ1BVOiAgICAxCihYRU4p
IFJJUDogICAgZTAwODpbPGZmZmY4MmM0ODAxNTgzYzQ+XSBkZWZhdWx0X2lkbGUrMHg5OS8weDll
CihYRU4pIFJGTEFHUzogMDAwMDAwMDAwMDAwMDI0NiAgIENPTlRFWFQ6IGh5cGVydmlzb3IKKFhF
TikgcmF4OiBmZmZmODJjNDgwMzAyMzcwICAgcmJ4OiBmZmZmODMwMTNlNjdmZjE4ICAgcmN4OiAw
MDAwMDAwMDAwMDAwMDAxCihYRU4pIHJkeDogMDAwMDAwM2NiZDM2OGQ4MCAgIHJzaTogMDAwMDAw
MDA4Y2U0Y2E3YSAgIHJkaTogMDAwMDAwMDAwMDAwMDAwMQooWEVOKSByYnA6IGZmZmY4MzAxM2U2
N2ZlZjAgICByc3A6IGZmZmY4MzAxM2U2N2ZlZjAgICByODogIDAwMDAwMDUwZWY3ZWY1NzAKKFhF
Tikgcjk6ICBmZmZmODMwMGE4M2ZkMDYwICAgcjEwOiAwMDAwMDAwMGRlYWRiZWVmICAgcjExOiAw
MDAwMDAwMDAwMDAwMjQ2CihYRU4pIHIxMjogZmZmZjgzMDEzZTY3ZmYxOCAgIHIxMzogMDAwMDAw
MDBmZmZmZmZmZiAgIHIxNDogMDAwMDAwMDAwMDAwMDAwMgooWEVOKSByMTU6IGZmZmY4MzAxM2Q2
NmIwODggICBjcjA6IDAwMDAwMDAwODAwNTAwM2IgICBjcjQ6IDAwMDAwMDAwMDAxMDI2ZjAKKFhF
TikgY3IzOiAwMDAwMDAwMTNkOTZjMDAwICAgY3IyOiBmZmZmODgwMDI2YjhlMWU4CihYRU4pIGRz
OiAwMDJiICAgZXM6IDAwMmIgICBmczogMDAwMCAgIGdzOiAwMDAwICAgc3M6IGUwMTAgICBjczog
ZTAwOAooWEVOKSBYZW4gc3RhY2sgdHJhY2UgZnJvbSByc3A9ZmZmZjgzMDEzZTY3ZmVmMDoKKFhF
TikgICAgZmZmZjgzMDEzZTY3ZmYxMCBmZmZmODJjNDgwMTU4YmY4IGZmZmY4MzAwYWEwZmUwMDAg
ZmZmZjgzMDBhODNmZDAwMAooWEVOKSAgICBmZmZmODMwMTNlNjdmZGE4IDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAxCihYRU4pICAgIGZmZmZmZmZmODFh
YWZkYTAgZmZmZjg4MDAyNzg2ZGVlMCBmZmZmODgwMDI3ODZkZmQ4IDAwMDAwMDAwMDAwMDAyNDYK
KFhFTikgICAgMDAwMDAwMDAwMDAwMDAwMSAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMAooWEVOKSAgICBmZmZmZmZmZjgxMDAxM2FhIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDBkZWFkYmVlZiAwMDAwMDAwMGRlYWRiZWVmCihYRU4pICAgIDAwMDAwMTAw
MDAwMDAwMDAgZmZmZmZmZmY4MTAwMTNhYSAwMDAwMDAwMDAwMDBlMDMzIDAwMDAwMDAwMDAwMDAy
NDYKKFhFTikgICAgZmZmZjg4MDAyNzg2ZGVjOCAwMDAwMDAwMDAwMDBlMDJiIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMAooWEVOKSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMSBmZmZmODMwMGFhMGZlMDAwCihYRU4pICAgIDAwMDAw
MDNjYmQzNjhkODAgMDAwMDAwMDAwMDAwMDAwMAooWEVOKSBYZW4gY2FsbCB0cmFjZToKKFhFTikg
ICAgWzxmZmZmODJjNDgwMTU4M2M0Pl0gZGVmYXVsdF9pZGxlKzB4OTkvMHg5ZQooWEVOKSAgICBb
PGZmZmY4MmM0ODAxNThiZjg+XSBpZGxlX2xvb3ArMHg2Mi8weDcxCihYRU4pICAgIAooWEVOKSAq
KiogRHVtcGluZyBDUFUyIGhvc3Qgc3RhdGU6ICoqKgooWEVOKSAtLS0tWyBYZW4tNC4yLjAtcmMy
LXByZSAgeDg2XzY0ICBkZWJ1Zz15ICBUYWludGVkOiAgICBDIF0tLS0tCihYRU4pIENQVTogICAg
MgooWEVOKSBSSVA6ICAgIGUwMDg6WzxmZmZmODJjNDgwMTU4M2M0Pl0gZGVmYXVsdF9pZGxlKzB4
OTkvMHg5ZQooWEVOKSBSRkxBR1M6IDAwMDAwMDAwMDAwMDAyNDYgICBDT05URVhUOiBoeXBlcnZp
c29yCihYRU4pIHJheDogZmZmZjgyYzQ4MDMwMjM3MCAgIHJieDogZmZmZjgzMDE0ODkzZmYxOCAg
IHJjeDogMDAwMDAwMDAwMDAwMDAwMgooWEVOKSByZHg6IDAwMDAwMDNjY2NmYjVkODAgICByc2k6
IDAwMDAwMDAwOGRhYTZiYjAgICByZGk6IDAwMDAwMDAwMDAwMDAwMDIKKFhFTikgcmJwOiBmZmZm
ODMwMTQ4OTNmZWYwICAgcnNwOiBmZmZmODMwMTQ4OTNmZWYwICAgcjg6ICAwMDAwMDA1MTExZGEx
Njk0CihYRU4pIHI5OiAgZmZmZjgzMDBhODNmYzA2MCAgIHIxMDogMDAwMDAwMDBkZWFkYmVlZiAg
IHIxMTogMDAwMDAwMDAwMDAwMDI0NgooWEVOKSByMTI6IGZmZmY4MzAxNDg5M2ZmMTggICByMTM6
IDAwMDAwMDAwZmZmZmZmZmYgICByMTQ6IDAwMDAwMDAwMDAwMDAwMDIKKFhFTikgcjE1OiBmZmZm
ODMwMTRkMmI4MDg4ICAgY3IwOiAwMDAwMDAwMDgwMDUwMDNiICAgY3I0OiAwMDAwMDAwMDAwMTAy
NmYwCihYRU4pIGNyMzogMDAwMDAwMDE0Y2RhYjAwMCAgIGNyMjogZmZmZjg4MDAwMmU2Mzc2OAoo
WEVOKSBkczogMDAyYiAgIGVzOiAwMDJiICAgZnM6IDAwMDAgICBnczogMDAwMCAgIHNzOiBlMDEw
ICAgY3M6IGUwMDgKKFhFTikgWGVuIHN0YWNrIHRyYWNlIGZyb20gcnNwPWZmZmY4MzAxNDg5M2Zl
ZjA6CihYRU4pICAgIGZmZmY4MzAxNDg5M2ZmMTAgZmZmZjgyYzQ4MDE1OGJmOCBmZmZmODMwMGE4
NWM3MDAwIGZmZmY4MzAwYTgzZmMwMDAKKFhFTikgICAgZmZmZjgzMDE0ODkzZmRhOCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMgooWEVOKSAgICBmZmZm
ZmZmZjgxYWFmZGEwIGZmZmY4ODAwMjc4NmZlZTAgZmZmZjg4MDAyNzg2ZmZkOCAwMDAwMDAwMDAw
MDAwMjQ2CihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDEgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgZmZmZmZmZmY4MTAwMTNhYSAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwZGVhZGJlZWYgMDAwMDAwMDBkZWFkYmVlZgooWEVOKSAgICAw
MDAwMDEwMDAwMDAwMDAwIGZmZmZmZmZmODEwMDEzYWEgMDAwMDAwMDAwMDAwZTAzMyAwMDAwMDAw
MDAwMDAwMjQ2CihYRU4pICAgIGZmZmY4ODAwMjc4NmZlYzggMDAwMDAwMDAwMDAwZTAyYiAzZWMy
ZTMyNjgwOTJlMTY3IGI5N2NlYzViOWU2OGRjOTMKKFhFTikgICAgY2M5ODA3OTI0OWZiNzNiNSBl
MWEzNWRlNGZkMTYxYTZmIGUxZTM1ZDY0MDAwMDAwMDIgZmZmZjgzMDBhODVjNzAwMAooWEVOKSAg
ICAwMDAwMDAzY2NjZmI1ZDgwIGUyNGQ1YTM4ZjJhZTA1MWUKKFhFTikgWGVuIGNhbGwgdHJhY2U6
CihYRU4pICAgIFs8ZmZmZjgyYzQ4MDE1ODNjND5dIGRlZmF1bHRfaWRsZSsweDk5LzB4OWUKKFhF
TikgICAgWzxmZmZmODJjNDgwMTU4YmY4Pl0gaWRsZV9sb29wKzB4NjIvMHg3MQooWEVOKSAgICAK
KFhFTikgKioqIER1bXBpbmcgQ1BVMyBob3N0IHN0YXRlOiAqKioKKFhFTikgLS0tLVsgWGVuLTQu
Mi4wLXJjMi1wcmUgIHg4Nl82NCAgZGVidWc9eSAgVGFpbnRlZDogICAgQyBdLS0tLQooWEVOKSBD
UFU6ICAgIDMKKFhFTikgUklQOiAgICBlMDA4Ols8ZmZmZjgyYzQ4MDE1ODNjND5dIGRlZmF1bHRf
aWRsZSsweDk5LzB4OWUKKFhFTikgUkZMQUdTOiAwMDAwMDAwMDAwMDAwMjQ2ICAgQ09OVEVYVDog
aHlwZXJ2aXNvcgooWEVOKSByYXg6IGZmZmY4MmM0ODAzMDIzNzAgICByYng6IGZmZmY4MzAxNDg5
MmZmMTggICByY3g6IDAwMDAwMDAwMDAwMDAwMDMKKFhFTikgcmR4OiAwMDAwMDAzY2NjYzUyZDgw
ICAgcnNpOiAwMDAwMDAwMDhlNzAzMjUwICAgcmRpOiAwMDAwMDAwMDAwMDAwMDAzCihYRU4pIHJi
cDogZmZmZjgzMDE0ODkyZmVmMCAgIHJzcDogZmZmZjgzMDE0ODkyZmVmMCAgIHI4OiAgMDAwMDAw
NTEzNDhmMTc0NAooWEVOKSByOTogIGZmZmY4MzAwYWE1ODMwNjAgICByMTA6IDAwMDAwMDAwZGVh
ZGJlZWYgICByMTE6IDAwMDAwMDAwMDAwMDAyNDYKKFhFTikgcjEyOiBmZmZmODMwMTQ4OTJmZjE4
ICAgcjEzOiAwMDAwMDAwMGZmZmZmZmZmICAgcjE0OiAwMDAwMDAwMDAwMDAwMDAyCihYRU4pIHIx
NTogZmZmZjgzMDE0Y2Y1NTA4OCAgIGNyMDogMDAwMDAwMDA4MDA1MDAzYiAgIGNyNDogMDAwMDAw
MDAwMDEwMjZmMAooWEVOKSBjcjM6IDAwMDAwMDAxNGNhZjQwMDAgICBjcjI6IGZmZmY4ODAwMDMy
MTMxMDgKKFhFTikgZHM6IDAwMmIgICBlczogMDAyYiAgIGZzOiAwMDAwICAgZ3M6IDAwMDAgICBz
czogZTAxMCAgIGNzOiBlMDA4CihYRU4pIFhlbiBzdGFjayB0cmFjZSBmcm9tIHJzcD1mZmZmODMw
MTQ4OTJmZWYwOgooWEVOKSAgICBmZmZmODMwMTQ4OTJmZjEwIGZmZmY4MmM0ODAxNThiZjggZmZm
ZjgzMDBhODNmZTAwMCBmZmZmODMwMGFhNTgzMDAwCihYRU4pICAgIGZmZmY4MzAxNDg5MmZkYTgg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDMKKFhFTikg
ICAgZmZmZmZmZmY4MWFhZmRhMCBmZmZmODgwMDI3ODgxZWUwIGZmZmY4ODAwMjc4ODFmZDggMDAw
MDAwMDAwMDAwMDI0NgooWEVOKSAgICAwMDAwMDAwMDAwMDAwMDAxIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwCihYRU4pICAgIGZmZmZmZmZmODEwMDEz
YWEgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMGRlYWRiZWVmIDAwMDAwMDAwZGVhZGJlZWYKKFhF
TikgICAgMDAwMDAxMDAwMDAwMDAwMCBmZmZmZmZmZjgxMDAxM2FhIDAwMDAwMDAwMDAwMGUwMzMg
MDAwMDAwMDAwMDAwMDI0NgooWEVOKSAgICBmZmZmODgwMDI3ODgxZWM4IDAwMDAwMDAwMDAwMGUw
MmIgM2VjMmUzMjY4MDkyZTE2NyBiOTdjZWM1YjllNjhkYzkzCihYRU4pICAgIGNjOTgwNzkyNDlm
YjczYjUgZTFhMzVkZTRmZDE2MWE2ZiBlMWUzNWQ2NDAwMDAwMDAzIGZmZmY4MzAwYTgzZmUwMDAK
KFhFTikgICAgMDAwMDAwM2NjY2M1MmQ4MCBlMjRkNWEzOGYyYWUwNTFlCihYRU4pIFhlbiBjYWxs
IHRyYWNlOgooWEVOKSAgICBbPGZmZmY4MmM0ODAxNTgzYzQ+XSBkZWZhdWx0X2lkbGUrMHg5OS8w
eDllCihYRU4pICAgIFs8ZmZmZjgyYzQ4MDE1OGJmOD5dIGlkbGVfbG9vcCsweDYyLzB4NzEKKFhF
TikgICAgCihYRU4pIFswOiBkdW1wIERvbTAgcmVnaXN0ZXJzXQooWEVOKSAnMCcgcHJlc3NlZCAt
PiBkdW1waW5nIERvbTAncyByZWdpc3RlcnMKKFhFTikgKioqIER1bXBpbmcgRG9tMCB2Y3B1IzAg
c3RhdGU6ICoqKgooWEVOKSBSSVA6ICAgIGUwMzM6WzxmZmZmZmZmZjgxMDAxM2FhPl0KKFhFTikg
UkZMQUdTOiAwMDAwMDAwMDAwMDAwMjQ2ICAgRU06IDAgICBDT05URVhUOiBwdiBndWVzdAooWEVO
KSByYXg6IDAwMDAwMDAwMDAwMDAwMDAgICByYng6IGZmZmZmZmZmODFhMDFmZDggICByY3g6IGZm
ZmZmZmZmODEwMDEzYWEKKFhFTikgcmR4OiAwMDAwMDAwMDAwMDAwMDAwICAgcnNpOiAwMDAwMDAw
MGRlYWRiZWVmICAgcmRpOiAwMDAwMDAwMGRlYWRiZWVmCihYRU4pIHJicDogZmZmZmZmZmY4MWEw
MWVlOCAgIHJzcDogZmZmZmZmZmY4MWEwMWVkMCAgIHI4OiAgMDAwMDAwMDAwMDAwMDAwMAooWEVO
KSByOTogIDAwMDAwMDAwMDAwMDAwMDAgICByMTA6IDAwMDAwMDAwMDAwMDAwMDEgICByMTE6IDAw
MDAwMDAwMDAwMDAyNDYKKFhFTikgcjEyOiBmZmZmZmZmZjgxYWFmZGEwICAgcjEzOiAwMDAwMDAw
MDAwMDAwMDAwICAgcjE0OiBmZmZmZmZmZmZmZmZmZmZmCihYRU4pIHIxNTogMDAwMDAwMDAwMDAw
MDAwMCAgIGNyMDogMDAwMDAwMDAwMDAwMDAwOCAgIGNyNDogMDAwMDAwMDAwMDAwMjY2MAooWEVO
KSBjcjM6IDAwMDAwMDAxNDFhMDUwMDAgICBjcjI6IDAwMDA3ZmM5Njk2MjFjNjIKKFhFTikgZHM6
IDAwMDAgICBlczogMDAwMCAgIGZzOiAwMDAwICAgZ3M6IDAwMDAgICBzczogZTAyYiAgIGNzOiBl
MDMzCihYRU4pIEd1ZXN0IHN0YWNrIHRyYWNlIGZyb20gcnNwPWZmZmZmZmZmODFhMDFlZDA6CihY
RU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDBmZmZmZmZmZiBmZmZmZmZmZjgxMDBhNWMw
IGZmZmZmZmZmODFhMDFmMTgKKFhFTikgICAgZmZmZmZmZmY4MTAxYzY2MyBmZmZmZmZmZjgxYTAx
ZmQ4IGZmZmZmZmZmODFhYWZkYTAgZmZmZjg4MDAyZGVlMWEwMAooWEVOKSAgICBmZmZmZmZmZmZm
ZmZmZmZmIGZmZmZmZmZmODFhMDFmNDggZmZmZmZmZmY4MTAxMzIzNiBmZmZmZmZmZmZmZmZmZmZm
CihYRU4pICAgIDhlNmExZGU5NjBlNzVkYzMgMDAwMDAwMDAwMDAwMDAwMCBmZmZmZmZmZjgxYjE1
MTYwIGZmZmZmZmZmODFhMDFmNTgKKFhFTikgICAgZmZmZmZmZmY4MTU1NGY1ZSBmZmZmZmZmZjgx
YTAxZjk4IGZmZmZmZmZmODFhY2NiZjUgZmZmZmZmZmY4MWIxNTE2MAooWEVOKSAgICBkMGEwZjA3
NTJhZDlhMDA4IDAwMDAwMDAwMDBjZGYwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgZmZmZmZmZmY4MWEwMWZiOCBmZmZmZmZmZjgx
YWNjMzRiIGZmZmZmZmZmN2ZmZmZmZmYKKFhFTikgICAgZmZmZmZmZmY4NGIyNTAwMCBmZmZmZmZm
ZjgxYTAxZmY4IGZmZmZmZmZmODFhY2ZlY2MgMDAwMDAwMDAwMDAwMDAwMAooWEVOKSAgICAwMDAw
MDAwMTAwMDAwMDAwIDAwMTAwODAwMDAwMzA2YTQgMWZjOThiNzVlM2I4MjI4MyAwMDAwMDAwMDAw
MDAwMDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMAooWEVOKSAq
KiogRHVtcGluZyBEb20wIHZjcHUjMSBzdGF0ZTogKioqCihYRU4pIFJJUDogICAgZTAzMzpbPGZm
ZmZmZmZmODEwMDEzYWE+XQooWEVOKSBSRkxBR1M6IDAwMDAwMDAwMDAwMDAyNDYgICBFTTogMCAg
IENPTlRFWFQ6IHB2IGd1ZXN0CihYRU4pIHJheDogMDAwMDAwMDAwMDAwMDAwMCAgIHJieDogZmZm
Zjg4MDAyNzg2ZGZkOCAgIHJjeDogZmZmZmZmZmY4MTAwMTNhYQooWEVOKSByZHg6IDAwMDAwMDAw
MDAwMDAwMDAgICByc2k6IDAwMDAwMDAwZGVhZGJlZWYgICByZGk6IDAwMDAwMDAwZGVhZGJlZWYK
KFhFTikgcmJwOiBmZmZmODgwMDI3ODZkZWUwICAgcnNwOiBmZmZmODgwMDI3ODZkZWM4ICAgcjg6
ICAwMDAwMDAwMDAwMDAwMDAwCihYRU4pIHI5OiAgMDAwMDAwMDAwMDAwMDAwMCAgIHIxMDogMDAw
MDAwMDAwMDAwMDAwMSAgIHIxMTogMDAwMDAwMDAwMDAwMDI0NgooWEVOKSByMTI6IGZmZmZmZmZm
ODFhYWZkYTAgICByMTM6IDAwMDAwMDAwMDAwMDAwMDEgICByMTQ6IDAwMDAwMDAwMDAwMDAwMDAK
KFhFTikgcjE1OiAwMDAwMDAwMDAwMDAwMDAwICAgY3IwOiAwMDAwMDAwMDgwMDUwMDNiICAgY3I0
OiAwMDAwMDAwMDAwMDAyNjYwCihYRU4pIGNyMzogMDAwMDAwMDEzZDk2YzAwMCAgIGNyMjogMDAw
MDdmY2I2NjQzZDAzOQooWEVOKSBkczogMDAyYiAgIGVzOiAwMDJiICAgZnM6IDAwMDAgICBnczog
MDAwMCAgIHNzOiBlMDJiICAgY3M6IGUwMzMKKFhFTikgR3Vlc3Qgc3RhY2sgdHJhY2UgZnJvbSBy
c3A9ZmZmZjg4MDAyNzg2ZGVjODoKKFhFTikgICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMGZm
ZmZmZmZmIGZmZmZmZmZmODEwMGE1YzAgZmZmZjg4MDAyNzg2ZGYxMAooWEVOKSAgICBmZmZmZmZm
ZjgxMDFjNjYzIGZmZmY4ODAwMjc4NmRmZDggZmZmZmZmZmY4MWFhZmRhMCAwMDAwMDAwMDAwMDAw
MDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgZmZmZjg4MDAyNzg2ZGY0MCBmZmZmZmZmZjgx
MDEzMjM2IGZmZmZmZmZmODEwMGFkZTkKKFhFTikgICAgYTEwMjFlZjczOWZiMjdkMyAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgZmZmZjg4MDAyNzg2ZGY1MAooWEVOKSAgICBmZmZm
ZmZmZjgxNTYzNDM4IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMAooWEVOKSAgICAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAwMDAwMDAwMDAwMDAwMCBm
ZmZmODgwMDI3ODZkZjU4IDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgKioqIER1bXBpbmcgRG9tMCB2
Y3B1IzIgc3RhdGU6ICoqKgooWEVOKSBSSVA6ICAgIGUwMzM6WzxmZmZmZmZmZjgxMDAxM2FhPl0K
KFhFTikgUkZMQUdTOiAwMDAwMDAwMDAwMDAwMjQ2ICAgRU06IDAgICBDT05URVhUOiBwdiBndWVz
dAooWEVOKSByYXg6IDAwMDAwMDAwMDAwMDAwMDAgICByYng6IGZmZmY4ODAwMjc4NmZmZDggICBy
Y3g6IGZmZmZmZmZmODEwMDEzYWEKKFhFTikgcmR4OiAwMDAwMDAwMDAwMDAwMDAwICAgcnNpOiAw
MDAwMDAwMGRlYWRiZWVmICAgcmRpOiAwMDAwMDAwMGRlYWRiZWVmCihYRU4pIHJicDogZmZmZjg4
MDAyNzg2ZmVlMCAgIHJzcDogZmZmZjg4MDAyNzg2ZmVjOCAgIHI4OiAgMDAwMDAwMDAwMDAwMDAw
MAooWEVOKSByOTogIDAwMDAwMDAwMDAwMDAwMDAgICByMTA6IDAwMDAwMDAwMDAwMDAwMDEgICBy
MTE6IDAwMDAwMDAwMDAwMDAyNDYKKFhFTikgcjEyOiBmZmZmZmZmZjgxYWFmZGEwICAgcjEzOiAw
MDAwMDAwMDAwMDAwMDAyICAgcjE0OiAwMDAwMDAwMDAwMDAwMDAwCihYRU4pIHIxNTogMDAwMDAw
MDAwMDAwMDAwMCAgIGNyMDogMDAwMDAwMDA4MDA1MDAzYiAgIGNyNDogMDAwMDAwMDAwMDAwMjY2
MAooWEVOKSBjcjM6IDAwMDAwMDAxNGNkYWIwMDAgICBjcjI6IDAwMDAwMDAwMDE0ZWQxODgKKFhF
TikgZHM6IDAwMmIgICBlczogMDAyYiAgIGZzOiAwMDAwICAgZ3M6IDAwMDAgICBzczogZTAyYiAg
IGNzOiBlMDMzCihYRU4pIEd1ZXN0IHN0YWNrIHRyYWNlIGZyb20gcnNwPWZmZmY4ODAwMjc4NmZl
Yzg6CihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDBmZmZmZmZmZiBmZmZmZmZmZjgx
MDBhNWMwIGZmZmY4ODAwMjc4NmZmMTAKKFhFTikgICAgZmZmZmZmZmY4MTAxYzY2MyBmZmZmODgw
MDI3ODZmZmQ4IGZmZmZmZmZmODFhYWZkYTAgMDAwMDAwMDAwMDAwMDAwMAooWEVOKSAgICAwMDAw
MDAwMDAwMDAwMDAwIGZmZmY4ODAwMjc4NmZmNDAgZmZmZmZmZmY4MTAxMzIzNiBmZmZmZmZmZjgx
MDBhZGU5CihYRU4pICAgIDU3OTAyZTE4ZWUzMjEyZjkgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIGZmZmY4ODAwMjc4NmZmNTAKKFhFTikgICAgZmZmZmZmZmY4MTU2MzQzOCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMAooWEVOKSAgICAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMAooWEVOKSAg
ICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgZmZmZjg4MDAyNzg2ZmY1OCAw
MDAwMDAwMDAwMDAwMDAwCihYRU4pICoqKiBEdW1waW5nIERvbTAgdmNwdSMzIHN0YXRlOiAqKioK
KFhFTikgUklQOiAgICBlMDMzOls8ZmZmZmZmZmY4MTAwMTNhYT5dCihYRU4pIFJGTEFHUzogMDAw
MDAwMDAwMDAwMDI0NiAgIEVNOiAwICAgQ09OVEVYVDogcHYgZ3Vlc3QKKFhFTikgcmF4OiAwMDAw
MDAwMDAwMDAwMDAwICAgcmJ4OiBmZmZmODgwMDI3ODgxZmQ4ICAgcmN4OiBmZmZmZmZmZjgxMDAx
M2FhCihYRU4pIHJkeDogMDAwMDAwMDAwMDAwMDAwMCAgIHJzaTogMDAwMDAwMDBkZWFkYmVlZiAg
IHJkaTogMDAwMDAwMDBkZWFkYmVlZgooWEVOKSByYnA6IGZmZmY4ODAwMjc4ODFlZTAgICByc3A6
IGZmZmY4ODAwMjc4ODFlYzggICByODogIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgcjk6ICAwMDAw
MDAwMDAwMDAwMDAwICAgcjEwOiAwMDAwMDAwMDAwMDAwMDAxICAgcjExOiAwMDAwMDAwMDAwMDAw
MjQ2CihYRU4pIHIxMjogZmZmZmZmZmY4MWFhZmRhMCAgIHIxMzogMDAwMDAwMDAwMDAwMDAwMyAg
IHIxNDogMDAwMDAwMDAwMDAwMDAwMAooWEVOKSByMTU6IDAwMDAwMDAwMDAwMDAwMDAgICBjcjA6
IDAwMDAwMDAwODAwNTAwM2IgICBjcjQ6IDAwMDAwMDAwMDAwMDI2NjAKKFhFTikgY3IzOiAwMDAw
MDAwMTNlNDIxMDAwICAgY3IyOiAwMDAwN2ZjOTY5NjIxYzYyCihYRU4pIGRzOiAwMDJiICAgZXM6
IDAwMmIgICBmczogMDAwMCAgIGdzOiAwMDAwICAgc3M6IGUwMmIgICBjczogZTAzMwooWEVOKSBH
dWVzdCBzdGFjayB0cmFjZSBmcm9tIHJzcD1mZmZmODgwMDI3ODgxZWM4OgooWEVOKSAgICAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwZmZmZmZmZmYgZmZmZmZmZmY4MTAwYTVjMCBmZmZmODgwMDI3
ODgxZjEwCihYRU4pICAgIGZmZmZmZmZmODEwMWM2NjMgZmZmZjg4MDAyNzg4MWZkOCBmZmZmZmZm
ZjgxYWFmZGEwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAwMDAwMDAwMDAwMDAwMCBmZmZm
ODgwMDI3ODgxZjQwIGZmZmZmZmZmODEwMTMyMzYgZmZmZmZmZmY4MTAwYWRlOQooWEVOKSAgICA2
N2IxMDQ0Y2M0YjdjMzkxIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCBmZmZmODgw
MDI3ODgxZjUwCihYRU4pICAgIGZmZmZmZmZmODE1NjM0MzggMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMAooWEVOKSAg
ICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMAooWEVO
KSAgICAwMDAwMDAwMDAwMDAwMDAwIGZmZmY4ODAwMjc4ODFmNTggMDAwMDAwMDAwMDAwMDAwMAoo
WEVOKSBbSDogZHVtcCBoZWFwIGluZm9dCihYRU4pICdIJyBwcmVzc2VkIC0+IGR1bXBpbmcgaGVh
cCBpbmZvIChub3ctMHg0Qzo0MTkyM0I2QikKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MF0gLT4g
MCBwYWdlcwooWEVOKSBoZWFwW25vZGU9MF1bem9uZT0xXSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBb
bm9kZT0wXVt6b25lPTJdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9M10gLT4g
MCBwYWdlcwooWEVOKSBoZWFwW25vZGU9MF1bem9uZT00XSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBb
bm9kZT0wXVt6b25lPTVdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9Nl0gLT4g
MCBwYWdlcwooWEVOKSBoZWFwW25vZGU9MF1bem9uZT03XSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBb
bm9kZT0wXVt6b25lPThdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9OV0gLT4g
MCBwYWdlcwooWEVOKSBoZWFwW25vZGU9MF1bem9uZT0xMF0gLT4gMCBwYWdlcwooWEVOKSBoZWFw
W25vZGU9MF1bem9uZT0xMV0gLT4gMCBwYWdlcwooWEVOKSBoZWFwW25vZGU9MF1bem9uZT0xMl0g
LT4gMCBwYWdlcwooWEVOKSBoZWFwW25vZGU9MF1bem9uZT0xM10gLT4gMCBwYWdlcwooWEVOKSBo
ZWFwW25vZGU9MF1bem9uZT0xNF0gLT4gMTYxMjggcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pv
bmU9MTVdIC0+IDMyNzY4IHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTE2XSAtPiA2NTUz
NiBwYWdlcwooWEVOKSBoZWFwW25vZGU9MF1bem9uZT0xN10gLT4gMTMwNTU5IHBhZ2VzCihYRU4p
IGhlYXBbbm9kZT0wXVt6b25lPTE4XSAtPiAyNjIxNDMgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBd
W3pvbmU9MTldIC0+IDE3Mjc3MiBwYWdlcwooWEVOKSBoZWFwW25vZGU9MF1bem9uZT0yMF0gLT4g
MTM0MjkwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTIxXSAtPiAwIHBhZ2VzCihYRU4p
IGhlYXBbbm9kZT0wXVt6b25lPTIyXSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0wXVt6b25l
PTIzXSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTI0XSAtPiAwIHBhZ2VzCihY
RU4pIGhlYXBbbm9kZT0wXVt6b25lPTI1XSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0wXVt6
b25lPTI2XSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTI3XSAtPiAwIHBhZ2Vz
CihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTI4XSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0w
XVt6b25lPTI5XSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTMwXSAtPiAwIHBh
Z2VzCihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTMxXSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9k
ZT0wXVt6b25lPTMyXSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTMzXSAtPiAw
IHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTM0XSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBb
bm9kZT0wXVt6b25lPTM1XSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTM2XSAt
PiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTM3XSAtPiAwIHBhZ2VzCihYRU4pIGhl
YXBbbm9kZT0wXVt6b25lPTM4XSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTM5
XSAtPiAwIHBhZ2VzCihYRU4pIFtJOiBkdW1wIEhWTSBpcnEgaW5mb10KKFhFTikgJ0knIHByZXNz
ZWQgLT4gZHVtcGluZyBIVk0gaXJxIGluZm8KKFhFTikgW006IGR1bXAgTVNJIHN0YXRlXQooWEVO
KSBQQ0ktTVNJIGludGVycnVwdCBpbmZvcm1hdGlvbjoKKFhFTikgIE1TSSAgICAyNiB2ZWM9YTEg
bG93ZXN0ICBlZGdlICAgYXNzZXJ0ICBsb2cgbG93ZXN0IGRlc3Q9MDAwMDAwMDEgbWFzaz0wLzEv
LTEKKFhFTikgIE1TSSAgICAyNyB2ZWM9MDAgIGZpeGVkICBlZGdlIGRlYXNzZXJ0IHBoeXMgbG93
ZXN0IGRlc3Q9MDAwMDAwMDEgbWFzaz0wLzEvLTEKKFhFTikgIE1TSSAgICAyOCB2ZWM9MjkgbG93
ZXN0ICBlZGdlICAgYXNzZXJ0ICBsb2cgbG93ZXN0IGRlc3Q9MDAwMDAwMDEgbWFzaz0wLzEvLTEK
KFhFTikgIE1TSSAgICAyOSB2ZWM9YTkgbG93ZXN0ICBlZGdlICAgYXNzZXJ0ICBsb2cgbG93ZXN0
IGRlc3Q9MDAwMDAwMDEgbWFzaz0wLzEvLTEKKFhFTikgIE1TSSAgICAzMCB2ZWM9YjEgbG93ZXN0
ICBlZGdlICAgYXNzZXJ0ICBsb2cgbG93ZXN0IGRlc3Q9MDAwMDAwMDEgbWFzaz0wLzEvLTEKKFhF
TikgIE1TSSAgICAzMSB2ZWM9YzkgbG93ZXN0ICBlZGdlICAgYXNzZXJ0ICBsb2cgbG93ZXN0IGRl
c3Q9MDAwMDAwMDEgbWFzaz0wLzEvLTEKKFhFTikgW1E6IGR1bXAgUENJIGRldmljZXNdCihYRU4p
ID09PT0gUENJIGRldmljZXMgPT09PQooWEVOKSA9PT09IHNlZ21lbnQgMDAwMCA9PT09CihYRU4p
IDAwMDA6MDU6MDEuMCAtIGRvbSAwICAgLSBNU0lzIDwgPgooWEVOKSAwMDAwOjA0OjAwLjAgLSBk
b20gMCAgIC0gTVNJcyA8ID4KKFhFTikgMDAwMDowMzowMC4wIC0gZG9tIDAgICAtIE1TSXMgPCA+
CihYRU4pIDAwMDA6MDI6MDAuMCAtIGRvbSAwICAgLSBNU0lzIDwgMzAgPgooWEVOKSAwMDAwOjAw
OjFmLjMgLSBkb20gMCAgIC0gTVNJcyA8ID4KKFhFTikgMDAwMDowMDoxZi4yIC0gZG9tIDAgICAt
IE1TSXMgPCAyNyA+CihYRU4pIDAwMDA6MDA6MWYuMCAtIGRvbSAwICAgLSBNU0lzIDwgPgooWEVO
KSAwMDAwOjAwOjFlLjAgLSBkb20gMCAgIC0gTVNJcyA8ID4KKFhFTikgMDAwMDowMDoxZC4wIC0g
ZG9tIDAgICAtIE1TSXMgPCA+CihYRU4pIDAwMDA6MDA6MWMuNyAtIGRvbSAwICAgLSBNU0lzIDwg
PgooWEVOKSAwMDAwOjAwOjFjLjYgLSBkb20gMCAgIC0gTVNJcyA8ID4KKFhFTikgMDAwMDowMDox
Yy4wIC0gZG9tIDAgICAtIE1TSXMgPCA+CihYRU4pIDAwMDA6MDA6MWIuMCAtIGRvbSAwICAgLSBN
U0lzIDwgMjYgPgooWEVOKSAwMDAwOjAwOjFhLjAgLSBkb20gMCAgIC0gTVNJcyA8ID4KKFhFTikg
MDAwMDowMDoxOS4wIC0gZG9tIDAgICAtIE1TSXMgPCAzMSA+CihYRU4pIDAwMDA6MDA6MTYuMyAt
IGRvbSAwICAgLSBNU0lzIDwgPgooWEVOKSAwMDAwOjAwOjE2LjAgLSBkb20gMCAgIC0gTVNJcyA8
ID4KKFhFTikgMDAwMDowMDoxNC4wIC0gZG9tIDAgICAtIE1TSXMgPCAyOSA+CihYRU4pIDAwMDA6
MDA6MDIuMCAtIGRvbSAwICAgLSBNU0lzIDwgMjggPgooWEVOKSAwMDAwOjAwOjAwLjAgLSBkb20g
MCAgIC0gTVNJcyA8ID4KKFhFTikgW1Y6IGR1bXAgaW9tbXUgaW5mb10KKFhFTikgCihYRU4pIGlv
bW11IDA6IG5yX3B0X2xldmVscyA9IDMuCihYRU4pICAgUXVldWVkIEludmFsaWRhdGlvbjogc3Vw
cG9ydGVkIGFuZCBlbmFibGVkLgooWEVOKSAgIEludGVycnVwdCBSZW1hcHBpbmc6IHN1cHBvcnRl
ZCBhbmQgZW5hYmxlZC4KKFhFTikgICBJbnRlcnJ1cHQgcmVtYXBwaW5nIHRhYmxlIChucl9lbnRy
eT0weDEwMDAwLiBPbmx5IGR1bXAgUD0xIGVudHJpZXMgaGVyZSk6CihYRU4pICAgICAgICBTVlQg
IFNRICAgU0lEICAgICAgRFNUICBWICBBVkwgRExNIFRNIFJIIERNIEZQRCBQCihYRU4pICAgMDAw
MDogIDEgICAwICAwMDEwIDAwMDAwMDAxIDI5ICAgIDAgICAxICAwICAxICAxICAgMCAxCihYRU4p
IAooWEVOKSBpb21tdSAxOiBucl9wdF9sZXZlbHMgPSAzLgooWEVOKSAgIFF1ZXVlZCBJbnZhbGlk
YXRpb246IHN1cHBvcnRlZCBhbmQgZW5hYmxlZC4KKFhFTikgICBJbnRlcnJ1cHQgUmVtYXBwaW5n
OiBzdXBwb3J0ZWQgYW5kIGVuYWJsZWQuCihYRU4pICAgSW50ZXJydXB0IHJlbWFwcGluZyB0YWJs
ZSAobnJfZW50cnk9MHgxMDAwMC4gT25seSBkdW1wIFA9MSBlbnRyaWVzIGhlcmUpOgooWEVOKSAg
ICAgICAgU1ZUICBTUSAgIFNJRCAgICAgIERTVCAgViAgQVZMIERMTSBUTSBSSCBETSBGUEQgUAoo
WEVOKSAgIDAwMDA6ICAxICAgMCAgZjBmOCAwMDAwMDAwMSAzOCAgICAwICAgMSAgMCAgMSAgMSAg
IDAgMQooWEVOKSAgIDAwMDE6ICAxICAgMCAgZjBmOCAwMDAwMDAwMSBmMCAgICAwICAgMSAgMCAg
MSAgMSAgIDAgMQooWEVOKSAgIDAwMDI6ICAxICAgMCAgZjBmOCAwMDAwMDAwMSA0MCAgICAwICAg
MSAgMCAgMSAgMSAgIDAgMQooWEVOKSAgIDAwMDM6ICAxICAgMCAgZjBmOCAwMDAwMDAwMSA0OCAg
ICAwICAgMSAgMCAgMSAgMSAgIDAgMQooWEVOKSAgIDAwMDQ6ICAxICAgMCAgZjBmOCAwMDAwMDAw
MSA1MCAgICAwICAgMSAgMCAgMSAgMSAgIDAgMQooWEVOKSAgIDAwMDU6ICAxICAgMCAgZjBmOCAw
MDAwMDAwMSA1OCAgICAwICAgMSAgMCAgMSAgMSAgIDAgMQooWEVOKSAgIDAwMDY6ICAxICAgMCAg
ZjBmOCAwMDAwMDAwMSA2MCAgICAwICAgMSAgMCAgMSAgMSAgIDAgMQooWEVOKSAgIDAwMDc6ICAx
ICAgMCAgZjBmOCAwMDAwMDAwMSA2OCAgICAwICAgMSAgMCAgMSAgMSAgIDAgMQooWEVOKSAgIDAw
MDg6ICAxICAgMCAgZjBmOCAwMDAwMDAwMSA3MCAgICAwICAgMSAgMSAgMSAgMSAgIDAgMQooWEVO
KSAgIDAwMDk6ICAxICAgMCAgZjBmOCAwMDAwMDAwMSA3OCAgICAwICAgMSAgMCAgMSAgMSAgIDAg
MQooWEVOKSAgIDAwMGE6ICAxICAgMCAgZjBmOCAwMDAwMDAwMSA4OCAgICAwICAgMSAgMCAgMSAg
MSAgIDAgMQooWEVOKSAgIDAwMGI6ICAxICAgMCAgZjBmOCAwMDAwMDAwMSA5MCAgICAwICAgMSAg
MCAgMSAgMSAgIDAgMQooWEVOKSAgIDAwMGM6ICAxICAgMCAgZjBmOCAwMDAwMDAwMSA5OCAgICAw
ICAgMSAgMCAgMSAgMSAgIDAgMQooWEVOKSAgIDAwMGQ6ICAxICAgMCAgZjBmOCAwMDAwMDAwMSBh
MCAgICAwICAgMSAgMCAgMSAgMSAgIDAgMQooWEVOKSAgIDAwMGU6ICAxICAgMCAgZjBmOCAwMDAw
MDAwMSBhOCAgICAwICAgMSAgMCAgMSAgMSAgIDAgMQooWEVOKSAgIDAwMGY6ICAxICAgMCAgZjBm
OCAwMDAwMDAwMSBiMCAgICAwICAgMSAgMSAgMSAgMSAgIDAgMQooWEVOKSAgIDAwMTA6ICAxICAg
MCAgZjBmOCAwMDAwMDAwMSBiOCAgICAwICAgMSAgMSAgMSAgMSAgIDAgMQooWEVOKSAgIDAwMTE6
ICAxICAgMCAgZjBmOCAwMDAwMDAwMSBjMCAgICAwICAgMSAgMSAgMSAgMSAgIDAgMQooWEVOKSAg
IDAwMTI6ICAxICAgMCAgZjBmOCAwMDAwMDAwMSBjOCAgICAwICAgMSAgMSAgMSAgMSAgIDAgMQoo
WEVOKSAgIDAwMTM6ICAxICAgMCAgZjBmOCAwMDAwMDAwMSBkMCAgICAwICAgMSAgMSAgMSAgMSAg
IDAgMQooWEVOKSAgIDAwMTQ6ICAxICAgMCAgMDBkOCAwMDAwMDAwMSBhMSAgICAwICAgMSAgMCAg
MSAgMSAgIDAgMQooWEVOKSAgIDAwMTU6ICAxICAgMCAgMDBmYSAwMDAwMDAwMSAwMCAgICAwICAg
MCAgMCAgMCAgMCAgIDAgMQooWEVOKSAgIDAwMTY6ICAxICAgMCAgZjBmOCAwMDAwMDAwMSAzMSAg
ICAwICAgMSAgMSAgMSAgMSAgIDAgMQooWEVOKSAgIDAwMTc6ICAxICAgMCAgMDBhMCAwMDAwMDAw
MSBhOSAgICAwICAgMSAgMCAgMSAgMSAgIDAgMQooWEVOKSAgIDAwMTg6ICAxICAgMCAgMDIwMCAw
MDAwMDAwMSBiMSAgICAwICAgMSAgMCAgMSAgMSAgIDAgMQooWEVOKSAgIDAwMTk6ICAxICAgMCAg
MDBjOCAwMDAwMDAwMSBjOSAgICAwICAgMSAgMCAgMSAgMSAgIDAgMQooWEVOKSAKKFhFTikgUmVk
aXJlY3Rpb24gdGFibGUgb2YgSU9BUElDIDA6CihYRU4pICAgI2VudHJ5IElEWCBGTVQgTUFTSyBU
UklHIElSUiBQT0wgU1RBVCBERUxJICBWRUNUT1IKKFhFTikgICAgMDE6ICAwMDAwICAgMSAgICAw
ICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAgMzgKKFhFTikgICAgMDI6ICAwMDAxICAgMSAgICAw
ICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAgZjAKKFhFTikgICAgMDM6ICAwMDAyICAgMSAgICAw
ICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAgNDAKKFhFTikgICAgMDQ6ICAwMDAzICAgMSAgICAw
ICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAgNDgKKFhFTikgICAgMDU6ICAwMDA0ICAgMSAgICAw
ICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAgNTAKKFhFTikgICAgMDY6ICAwMDA1ICAgMSAgICAw
ICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAgNTgKKFhFTikgICAgMDc6ICAwMDA2ICAgMSAgICAw
ICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAgNjAKKFhFTikgICAgMDg6ICAwMDA3ICAgMSAgICAw
ICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAgNjgKKFhFTikgICAgMDk6ICAwMDA4ICAgMSAgICAw
ICAgMSAgIDAgICAwICAgIDAgICAgMCAgICAgNzAKKFhFTikgICAgMGE6ICAwMDA5ICAgMSAgICAw
ICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAgNzgKKFhFTikgICAgMGI6ICAwMDBhICAgMSAgICAw
ICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAgODgKKFhFTikgICAgMGM6ICAwMDBiICAgMSAgICAw
ICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAgOTAKKFhFTikgICAgMGQ6ICAwMDBjICAgMSAgICAx
ICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAgOTgKKFhFTikgICAgMGU6ICAwMDBkICAgMSAgICAw
ICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAgYTAKKFhFTikgICAgMGY6ICAwMDBlICAgMSAgICAw
ICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAgYTgKKFhFTikgICAgMTA6ICAwMDBmICAgMSAgICAw
ICAgMSAgIDAgICAxICAgIDAgICAgMCAgICAgYjAKKFhFTikgICAgMTI6ICAwMDEwICAgMSAgICAx
ICAgMSAgIDAgICAxICAgIDAgICAgMCAgICAgYjgKKFhFTikgICAgMTM6ICAwMDExICAgMSAgICAx
ICAgMSAgIDAgICAxICAgIDAgICAgMCAgICAgYzAKKFhFTikgICAgMTQ6ICAwMDE2ICAgMSAgICAx
ICAgMSAgIDAgICAxICAgIDAgICAgMCAgICAgMzEKKFhFTikgICAgMTY6ICAwMDEzICAgMSAgICAx
ICAgMSAgIDAgICAxICAgIDAgICAgMCAgICAgZDAKKFhFTikgICAgMTc6ICAwMDEyICAgMSAgICAw
ICAgMSAgIDAgICAxICAgIDAgICAgMCAgICAgYzgKKFhFTikgW2E6IGR1bXAgdGltZXIgcXVldWVz
XQooWEVOKSBEdW1waW5nIHRpbWVyIHF1ZXVlczoKKFhFTikgQ1BVMDA6CihYRU4pICAgZXg9ICAg
LTE2ODF1cyB0aW1lcj1mZmZmODJjNDgwMmUyNWM4IGNiPWZmZmY4MmM0ODAxM2Q3NTcoZmZmZjgy
YzQ4MDI3MTgwMCkgbnMxNjU1MF9wb2xsKzB4MC8weDMzCihYRU4pICAgZXg9ICAgIDczMTh1cyB0
aW1lcj1mZmZmODMwMTQ4OTlhMWI4IGNiPWZmZmY4MmM0ODAxMTlkNzIoZmZmZjgzMDE0ODk5YTE5
MCkgY3NjaGVkX2FjY3QrMHgwLzB4NDJhCihYRU4pICAgZXg9ICA0NTcwMDF1cyB0aW1lcj1mZmZm
ODJjNDgwMzAwNTgwIGNiPWZmZmY4MmM0ODAxYTg4NTAoMDAwMDAwMDAwMDAwMDAwMCkgbWNlX3dv
cmtfZm4rMHgwLzB4YTkKKFhFTikgICBleD0xMjIzMTU3NTl1cyB0aW1lcj1mZmZmODJjNDgwMmZl
MjgwIGNiPWZmZmY4MmM0ODAxODA3YzIoMDAwMDAwMDAwMDAwMDAwMCkgcGx0X292ZXJmbG93KzB4
MC8weDEzMQooWEVOKSAgIGV4PSAgICA3MzE4dXMgdGltZXI9ZmZmZjgzMDE0ODk5YWVhOCBjYj1m
ZmZmODJjNDgwMTFhYWYwKDAwMDAwMDAwMDAwMDAwMDApIGNzY2hlZF90aWNrKzB4MC8weDMxNAoo
WEVOKSBDUFUwMToKKFhFTikgICBleD0gICA2MjYxMXVzIHRpbWVyPWZmZmY4MzAxNGM5NjY1Zjgg
Y2I9ZmZmZjgyYzQ4MDExYWFmMCgwMDAwMDAwMDAwMDAwMDAxKSBjc2NoZWRfdGljaysweDAvMHgz
MTQKKFhFTikgICBleD0gIDI2MjA5M3VzIHRpbWVyPWZmZmY4MzAwYTgzZmQwNjAgY2I9ZmZmZjgy
YzQ4MDEyMWM2YihmZmZmODMwMGE4M2ZkMDAwKSB2Y3B1X3NpbmdsZXNob3RfdGltZXJfZm4rMHgw
LzB4YgooWEVOKSBDUFUwMjoKKFhFTikgICBleD0gICA4Mjk1M3VzIHRpbWVyPWZmZmY4MzAxMzZh
NDlhMjggY2I9ZmZmZjgyYzQ4MDExYWFmMCgwMDAwMDAwMDAwMDAwMDAyKSBjc2NoZWRfdGljaysw
eDAvMHgzMTQKKFhFTikgICBleD0gIDIzODA5OXVzIHRpbWVyPWZmZmY4MzAwYTgzZmMwNjAgY2I9
ZmZmZjgyYzQ4MDEyMWM2YihmZmZmODMwMGE4M2ZjMDAwKSB2Y3B1X3NpbmdsZXNob3RfdGltZXJf
Zm4rMHgwLzB4YgooWEVOKSBDUFUwMzoKKFhFTikgICBleD0gIDEwMzI2N3VzIHRpbWVyPWZmZmY4
MzAxMWM5ZTk5NDggY2I9ZmZmZjgyYzQ4MDExYWFmMCgwMDAwMDAwMDAwMDAwMDAzKSBjc2NoZWRf
dGljaysweDAvMHgzMTQKKFhFTikgICBleD0gMzkyMDAxOXVzIHRpbWVyPWZmZmY4MzAwYWE1ODMw
NjAgY2I9ZmZmZjgyYzQ4MDEyMWM2YihmZmZmODMwMGFhNTgzMDAwKSB2Y3B1X3NpbmdsZXNob3Rf
dGltZXJfZm4rMHgwLzB4YgooWEVOKSBbYzogZHVtcCBBQ1BJIEN4IHN0cnVjdHVyZXNdCihYRU4p
ICdjJyBwcmVzc2VkIC0+IHByaW50aW5nIEFDUEkgQ3ggc3RydWN0dXJlcwooWEVOKSA9PWNwdTA9
PQooWEVOKSBhY3RpdmUgc3RhdGU6CQlDMjU1CihYRU4pIG1heF9jc3RhdGU6CQlDNwooWEVOKSBz
dGF0ZXM6CihYRU4pICAgICBDMToJdHlwZVtDMV0gbGF0ZW5jeVswMDBdIHVzYWdlWzAwMDAwMDAw
XSBtZXRob2RbIEhBTFRdIGR1cmF0aW9uWzBdCihYRU4pICAgICBDMDoJdXNhZ2VbMDAwMDAwMDBd
IGR1cmF0aW9uWzMyODI3ODA1NDE2Ml0KKFhFTikgUEMyWzBdIFBDM1swXSBQQzZbMF0gUEM3WzBd
CihYRU4pIENDM1swXSBDQzZbMF0gQ0M3WzBdCihYRU4pID09Y3B1MT09CihYRU4pIGFjdGl2ZSBz
dGF0ZToJCUMyNTUKKFhFTikgbWF4X2NzdGF0ZToJCUM3CihYRU4pIHN0YXRlczoKKFhFTikgICAg
IEMxOgl0eXBlW0MxXSBsYXRlbmN5WzAwMF0gdXNhZ2VbMDAwMDAwMDBdIG1ldGhvZFsgSEFMVF0g
ZHVyYXRpb25bMF0KKFhFTikgICAgIEMwOgl1c2FnZVswMDAwMDAwMF0gZHVyYXRpb25bMzI4MzAy
ODQ0NDI5XQooWEVOKSBQQzJbMF0gUEMzWzBdIFBDNlswXSBQQzdbMF0KKFhFTikgQ0MzWzBdIEND
NlswXSBDQzdbMF0KKFhFTikgPT1jcHUyPT0KKFhFTikgYWN0aXZlIHN0YXRlOgkJQzI1NQooWEVO
KSBtYXhfY3N0YXRlOgkJQzcKKFhFTikgc3RhdGVzOgooWEVOKSAgICAgQzE6CXR5cGVbQzFdIGxh
dGVuY3lbMDAwXSB1c2FnZVswMDAwMDAwMF0gbWV0aG9kWyBIQUxUXSBkdXJhdGlvblswXQooWEVO
KSAgICAgQzA6CXVzYWdlWzAwMDAwMDAwXSBkdXJhdGlvblszMjgzMjc2MzQ4MjNdCihYRU4pIFBD
MlswXSBQQzNbMF0gUEM2WzBdIFBDN1swXQooWEVOKSBDQzNbMF0gQ0M2WzBdIENDN1swXQooWEVO
KSA9PWNwdTM9PQooWEVOKSBhY3RpdmUgc3RhdGU6CQlDMjU1CihYRU4pIG1heF9jc3RhdGU6CQlD
NwooWEVOKSBzdGF0ZXM6CihYRU4pICAgICBDMToJdHlwZVtDMV0gbGF0ZW5jeVswMDBdIHVzYWdl
WzAwMDAwMDAwXSBtZXRob2RbIEhBTFRdIGR1cmF0aW9uWzBdCihYRU4pICAgICBDMDoJdXNhZ2Vb
MDAwMDAwMDBdIGR1cmF0aW9uWzMyODM1MjQyNDUyMl0KKFhFTikgUEMyWzBdIFBDM1swXSBQQzZb
MF0gUEM3WzBdCihYRU4pIENDM1swXSBDQzZbMF0gQ0M3WzBdCihYRU4pIFtlOiBkdW1wIGV2dGNo
biBpbmZvXQooWEVOKSAnZScgcHJlc3NlZCAtPiBkdW1waW5nIGV2ZW50LWNoYW5uZWwgaW5mbwoo
WEVOKSBFdmVudCBjaGFubmVsIGluZm9ybWF0aW9uIGZvciBkb21haW4gMDoKKFhFTikgUG9sbGlu
ZyB2Q1BVczoge30KKFhFTikgICAgIHBvcnQgW3AvbV0KKFhFTikgICAgICAgIDEgWzEvMF06IHM9
NSBuPTAgeD0wIHY9MAooWEVOKSAgICAgICAgMiBbMS8xXTogcz02IG49MCB4PTAKKFhFTikgICAg
ICAgIDMgWzEvMF06IHM9NiBuPTAgeD0wCihYRU4pICAgICAgICA0IFswLzBdOiBzPTYgbj0wIHg9
MAooWEVOKSAgICAgICAgNSBbMC8wXTogcz01IG49MCB4PTAgdj0xCihYRU4pICAgICAgICA2IFsw
LzBdOiBzPTYgbj0wIHg9MAooWEVOKSAgICAgICAgNyBbMC8wXTogcz01IG49MSB4PTAgdj0wCihY
RU4pICAgICAgICA4IFsxLzFdOiBzPTYgbj0xIHg9MAooWEVOKSAgICAgICAgOSBbMC8wXTogcz02
IG49MSB4PTAKKFhFTikgICAgICAgMTAgWzAvMF06IHM9NiBuPTEgeD0wCihYRU4pICAgICAgIDEx
IFswLzBdOiBzPTUgbj0xIHg9MCB2PTEKKFhFTikgICAgICAgMTIgWzAvMF06IHM9NiBuPTEgeD0w
CihYRU4pICAgICAgIDEzIFswLzBdOiBzPTUgbj0yIHg9MCB2PTAKKFhFTikgICAgICAgMTQgWzEv
MV06IHM9NiBuPTIgeD0wCihYRU4pICAgICAgIDE1IFswLzBdOiBzPTYgbj0yIHg9MAooWEVOKSAg
ICAgICAxNiBbMC8wXTogcz02IG49MiB4PTAKKFhFTikgICAgICAgMTcgWzAvMF06IHM9NSBuPTIg
eD0wIHY9MQooWEVOKSAgICAgICAxOCBbMC8wXTogcz02IG49MiB4PTAKKFhFTikgICAgICAgMTkg
WzAvMF06IHM9NSBuPTMgeD0wIHY9MAooWEVOKSAgICAgICAyMCBbMS8xXTogcz02IG49MyB4PTAK
KFhFTikgICAgICAgMjEgWzAvMF06IHM9NiBuPTMgeD0wCihYRU4pICAgICAgIDIyIFswLzBdOiBz
PTYgbj0zIHg9MAooWEVOKSAgICAgICAyMyBbMC8wXTogcz01IG49MyB4PTAgdj0xCihYRU4pICAg
ICAgIDI0IFswLzBdOiBzPTYgbj0zIHg9MAooWEVOKSAgICAgICAyNSBbMC8wXTogcz0zIG49MCB4
PTAgZD0wIHA9MzYKKFhFTikgICAgICAgMjYgWzAvMF06IHM9NCBuPTAgeD0wIHA9OSBpPTkKKFhF
TikgICAgICAgMjcgWzEvMV06IHM9NSBuPTAgeD0wIHY9MgooWEVOKSAgICAgICAyOCBbMC8wXTog
cz00IG49MCB4PTAgcD04IGk9OAooWEVOKSAgICAgICAyOSBbMC8wXTogcz00IG49MCB4PTAgcD0y
NzkgaT0yNgooWEVOKSAgICAgICAzMCBbMC8wXTogcz00IG49MCB4PTAgcD0yNzcgaT0yOAooWEVO
KSAgICAgICAzMSBbMC8wXTogcz00IG49MCB4PTAgcD0xNiBpPTE2CihYRU4pICAgICAgIDMyIFsw
LzBdOiBzPTQgbj0wIHg9MCBwPTI3OCBpPTI3CihYRU4pICAgICAgIDMzIFswLzBdOiBzPTQgbj0w
IHg9MCBwPTIzIGk9MjMKKFhFTikgICAgICAgMzQgWzAvMF06IHM9NCBuPTAgeD0wIHA9Mjc2IGk9
MjkKKFhFTikgICAgICAgMzUgWzAvMF06IHM9NCBuPTAgeD0wIHA9Mjc1IGk9MzAKKFhFTikgICAg
ICAgMzYgWzAvMF06IHM9MyBuPTAgeD0wIGQ9MCBwPTI1CihYRU4pICAgICAgIDM3IFswLzBdOiBz
PTUgbj0wIHg9MCB2PTMKKFhFTikgICAgICAgMzggWzEvMF06IHM9NCBuPTAgeD0wIHA9Mjc0IGk9
MzEKKFhFTikgW2c6IHByaW50IGdyYW50IHRhYmxlIHVzYWdlXQooWEVOKSBnbnR0YWJfdXNhZ2Vf
cHJpbnRfYWxsIFsga2V5ICdnJyBwcmVzc2VkCihYRU4pICAgICAgIC0tLS0tLS0tIGFjdGl2ZSAt
LS0tLS0tLSAgICAgICAtLS0tLS0tLSBzaGFyZWQgLS0tLS0tLS0KKFhFTikgW3JlZl0gbG9jYWxk
b20gbWZuICAgICAgcGluICAgICAgICAgIGxvY2FsZG9tIGdtZm4gICAgIGZsYWdzCihYRU4pIGdy
YW50LXRhYmxlIGZvciByZW1vdGUgZG9tYWluOiAgICAwIC4uLiBubyBhY3RpdmUgZ3JhbnQgdGFi
bGUgZW50cmllcwooWEVOKSBnbnR0YWJfdXNhZ2VfcHJpbnRfYWxsIF0gZG9uZQooWEVOKSBbaTog
ZHVtcCBpbnRlcnJ1cHQgYmluZGluZ3NdCihYRU4pIEd1ZXN0IGludGVycnVwdCBpbmZvcm1hdGlv
bjoKKFhFTikgICAgSVJROiAgIDAgYWZmaW5pdHk6MDAwMSB2ZWM6ZjAgdHlwZT1JTy1BUElDLWVk
Z2UgICAgc3RhdHVzPTAwMDAwMDAwIG1hcHBlZCwgdW5ib3VuZAooWEVOKSAgICBJUlE6ICAgMSBh
ZmZpbml0eTowMDAxIHZlYzozOCB0eXBlPUlPLUFQSUMtZWRnZSAgICBzdGF0dXM9MDAwMDAwMDIg
bWFwcGVkLCB1bmJvdW5kCihYRU4pICAgIElSUTogICAyIGFmZmluaXR5OmZmZmYgdmVjOmUyIHR5
cGU9WFQtUElDICAgICAgICAgIHN0YXR1cz0wMDAwMDAwMCBtYXBwZWQsIHVuYm91bmQKKFhFTikg
ICAgSVJROiAgIDMgYWZmaW5pdHk6MDAwMSB2ZWM6NDAgdHlwZT1JTy1BUElDLWVkZ2UgICAgc3Rh
dHVzPTAwMDAwMDAyIG1hcHBlZCwgdW5ib3VuZAooWEVOKSAgICBJUlE6ICAgNCBhZmZpbml0eTow
MDAxIHZlYzo0OCB0eXBlPUlPLUFQSUMtZWRnZSAgICBzdGF0dXM9MDAwMDAwMDIgbWFwcGVkLCB1
bmJvdW5kCihYRU4pICAgIElSUTogICA1IGFmZmluaXR5OjAwMDEgdmVjOjUwIHR5cGU9SU8tQVBJ
Qy1lZGdlICAgIHN0YXR1cz0wMDAwMDAwMiBtYXBwZWQsIHVuYm91bmQKKFhFTikgICAgSVJROiAg
IDYgYWZmaW5pdHk6MDAwMSB2ZWM6NTggdHlwZT1JTy1BUElDLWVkZ2UgICAgc3RhdHVzPTAwMDAw
MDAyIG1hcHBlZCwgdW5ib3VuZAooWEVOKSAgICBJUlE6ICAgNyBhZmZpbml0eTowMDAxIHZlYzo2
MCB0eXBlPUlPLUFQSUMtZWRnZSAgICBzdGF0dXM9MDAwMDAwMDIgbWFwcGVkLCB1bmJvdW5kCihY
RU4pICAgIElSUTogICA4IGFmZmluaXR5OjAwMDEgdmVjOjY4IHR5cGU9SU8tQVBJQy1lZGdlICAg
IHN0YXR1cz0wMDAwMDAxMCBpbi1mbGlnaHQ9MCBkb21haW4tbGlzdD0wOiAgOCgtUy0tKSwKKFhF
TikgICAgSVJROiAgIDkgYWZmaW5pdHk6MDAwMSB2ZWM6NzAgdHlwZT1JTy1BUElDLWxldmVsICAg
c3RhdHVzPTAwMDAwMDEwIGluLWZsaWdodD0wIGRvbWFpbi1saXN0PTA6ICA5KC1TLS0pLAooWEVO
KSAgICBJUlE6ICAxMCBhZmZpbml0eTowMDAxIHZlYzo3OCB0eXBlPUlPLUFQSUMtZWRnZSAgICBz
dGF0dXM9MDAwMDAwMDIgbWFwcGVkLCB1bmJvdW5kCihYRU4pICAgIElSUTogIDExIGFmZmluaXR5
OjAwMDEgdmVjOjg4IHR5cGU9SU8tQVBJQy1lZGdlICAgIHN0YXR1cz0wMDAwMDAwMiBtYXBwZWQs
IHVuYm91bmQKKFhFTikgICAgSVJROiAgMTIgYWZmaW5pdHk6MDAwMSB2ZWM6OTAgdHlwZT1JTy1B
UElDLWVkZ2UgICAgc3RhdHVzPTAwMDAwMDAyIG1hcHBlZCwgdW5ib3VuZAooWEVOKSAgICBJUlE6
ICAxMyBhZmZpbml0eTowMDBmIHZlYzo5OCB0eXBlPUlPLUFQSUMtZWRnZSAgICBzdGF0dXM9MDAw
MDAwMDIgbWFwcGVkLCB1bmJvdW5kCihYRU4pICAgIElSUTogIDE0IGFmZmluaXR5OjAwMDEgdmVj
OmEwIHR5cGU9SU8tQVBJQy1lZGdlICAgIHN0YXR1cz0wMDAwMDAwMiBtYXBwZWQsIHVuYm91bmQK
KFhFTikgICAgSVJROiAgMTUgYWZmaW5pdHk6MDAwMSB2ZWM6YTggdHlwZT1JTy1BUElDLWVkZ2Ug
ICAgc3RhdHVzPTAwMDAwMDAyIG1hcHBlZCwgdW5ib3VuZAooWEVOKSAgICBJUlE6ICAxNiBhZmZp
bml0eTowMDAxIHZlYzpiMCB0eXBlPUlPLUFQSUMtbGV2ZWwgICBzdGF0dXM9MDAwMDAwMTAgaW4t
ZmxpZ2h0PTAgZG9tYWluLWxpc3Q9MDogMTYoLVMtLSksCihYRU4pICAgIElSUTogIDE4IGFmZmlu
aXR5OjAwMGYgdmVjOmI4IHR5cGU9SU8tQVBJQy1sZXZlbCAgIHN0YXR1cz0wMDAwMDAwMiBtYXBw
ZWQsIHVuYm91bmQKKFhFTikgICAgSVJROiAgMTkgYWZmaW5pdHk6MDAwMSB2ZWM6YzAgdHlwZT1J
Ty1BUElDLWxldmVsICAgc3RhdHVzPTAwMDAwMDAyIG1hcHBlZCwgdW5ib3VuZAooWEVOKSAgICBJ
UlE6ICAyMCBhZmZpbml0eTowMDBmIHZlYzozMSB0eXBlPUlPLUFQSUMtbGV2ZWwgICBzdGF0dXM9
MDAwMDAwMDIgbWFwcGVkLCB1bmJvdW5kCihYRU4pICAgIElSUTogIDIyIGFmZmluaXR5OjAwMDEg
dmVjOmQwIHR5cGU9SU8tQVBJQy1sZXZlbCAgIHN0YXR1cz0wMDAwMDAwMiBtYXBwZWQsIHVuYm91
bmQKKFhFTikgICAgSVJROiAgMjMgYWZmaW5pdHk6MDAwMSB2ZWM6YzggdHlwZT1JTy1BUElDLWxl
dmVsICAgc3RhdHVzPTAwMDAwMDEwIGluLWZsaWdodD0wIGRvbWFpbi1saXN0PTA6IDIzKC1TLS0p
LAooWEVOKSAgICBJUlE6ICAyNCBhZmZpbml0eTowMDAxIHZlYzoyOCB0eXBlPURNQV9NU0kgICAg
ICAgICBzdGF0dXM9MDAwMDAwMDAgbWFwcGVkLCB1bmJvdW5kCihYRU4pICAgIElSUTogIDI1IGFm
ZmluaXR5OjAwMDEgdmVjOjMwIHR5cGU9RE1BX01TSSAgICAgICAgIHN0YXR1cz0wMDAwMDAwMCBt
YXBwZWQsIHVuYm91bmQKKFhFTikgICAgSVJROiAgMjYgYWZmaW5pdHk6MDAwMSB2ZWM6YTEgdHlw
ZT1QQ0ktTVNJICAgICAgICAgc3RhdHVzPTAwMDAwMDEwIGluLWZsaWdodD0wIGRvbWFpbi1saXN0
PTA6Mjc5KC1TLS0pLAooWEVOKSAgICBJUlE6ICAyNyBhZmZpbml0eTowMDAxIHZlYzoyMSB0eXBl
PVBDSS1NU0kgICAgICAgICBzdGF0dXM9MDAwMDAwMTAgaW4tZmxpZ2h0PTAgZG9tYWluLWxpc3Q9
MDoyNzgoLVMtLSksCihYRU4pICAgIElSUTogIDI4IGFmZmluaXR5OjAwMDEgdmVjOjI5IHR5cGU9
UENJLU1TSSAgICAgICAgIHN0YXR1cz0wMDAwMDAxMCBpbi1mbGlnaHQ9MCBkb21haW4tbGlzdD0w
OjI3NygtUy0tKSwKKFhFTikgICAgSVJROiAgMjkgYWZmaW5pdHk6MDAwMSB2ZWM6YTkgdHlwZT1Q
Q0ktTVNJICAgICAgICAgc3RhdHVzPTAwMDAwMDEwIGluLWZsaWdodD0wIGRvbWFpbi1saXN0PTA6
Mjc2KC1TLS0pLAooWEVOKSAgICBJUlE6ICAzMCBhZmZpbml0eTowMDAxIHZlYzpiMSB0eXBlPVBD
SS1NU0kgICAgICAgICBzdGF0dXM9MDAwMDAwMTAgaW4tZmxpZ2h0PTAgZG9tYWluLWxpc3Q9MDoy
NzUoLVMtLSksCihYRU4pICAgIElSUTogIDMxIGFmZmluaXR5OjAwMDEgdmVjOmM5IHR5cGU9UENJ
LU1TSSAgICAgICAgIHN0YXR1cz0wMDAwMDAxMCBpbi1mbGlnaHQ9MCBkb21haW4tbGlzdD0wOjI3
NChQUy0tKSwKKFhFTikgSU8tQVBJQyBpbnRlcnJ1cHQgaW5mb3JtYXRpb246CihYRU4pICAgICBJ
UlEgIDAgVmVjMjQwOgooWEVOKSAgICAgICBBcGljIDB4MDAsIFBpbiAgMjogdmVjPWYwIGRlbGl2
ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0dXM9MCBwb2xhcml0eT0wIGlycj0wIHRyaWc9RSBtYXNrPTAg
ZGVzdF9pZDowCihYRU4pICAgICBJUlEgIDEgVmVjIDU2OgooWEVOKSAgICAgICBBcGljIDB4MDAs
IFBpbiAgMTogdmVjPTM4IGRlbGl2ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0dXM9MCBwb2xhcml0eT0w
IGlycj0wIHRyaWc9RSBtYXNrPTAgZGVzdF9pZDowCihYRU4pICAgICBJUlEgIDMgVmVjIDY0Ogoo
WEVOKSAgICAgICBBcGljIDB4MDAsIFBpbiAgMzogdmVjPTQwIGRlbGl2ZXJ5PUxvUHJpIGRlc3Q9
TCBzdGF0dXM9MCBwb2xhcml0eT0wIGlycj0wIHRyaWc9RSBtYXNrPTAgZGVzdF9pZDowCihYRU4p
ICAgICBJUlEgIDQgVmVjIDcyOgooWEVOKSAgICAgICBBcGljIDB4MDAsIFBpbiAgNDogdmVjPTQ4
IGRlbGl2ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0dXM9MCBwb2xhcml0eT0wIGlycj0wIHRyaWc9RSBt
YXNrPTAgZGVzdF9pZDowCihYRU4pICAgICBJUlEgIDUgVmVjIDgwOgooWEVOKSAgICAgICBBcGlj
IDB4MDAsIFBpbiAgNTogdmVjPTUwIGRlbGl2ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0dXM9MCBwb2xh
cml0eT0wIGlycj0wIHRyaWc9RSBtYXNrPTAgZGVzdF9pZDowCihYRU4pICAgICBJUlEgIDYgVmVj
IDg4OgooWEVOKSAgICAgICBBcGljIDB4MDAsIFBpbiAgNjogdmVjPTU4IGRlbGl2ZXJ5PUxvUHJp
IGRlc3Q9TCBzdGF0dXM9MCBwb2xhcml0eT0wIGlycj0wIHRyaWc9RSBtYXNrPTAgZGVzdF9pZDow
CihYRU4pICAgICBJUlEgIDcgVmVjIDk2OgooWEVOKSAgICAgICBBcGljIDB4MDAsIFBpbiAgNzog
dmVjPTYwIGRlbGl2ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0dXM9MCBwb2xhcml0eT0wIGlycj0wIHRy
aWc9RSBtYXNrPTAgZGVzdF9pZDowCihYRU4pICAgICBJUlEgIDggVmVjMTA0OgooWEVOKSAgICAg
ICBBcGljIDB4MDAsIFBpbiAgODogdmVjPTY4IGRlbGl2ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0dXM9
MCBwb2xhcml0eT0wIGlycj0wIHRyaWc9RSBtYXNrPTAgZGVzdF9pZDowCihYRU4pICAgICBJUlEg
IDkgVmVjMTEyOgooWEVOKSAgICAgICBBcGljIDB4MDAsIFBpbiAgOTogdmVjPTcwIGRlbGl2ZXJ5
PUxvUHJpIGRlc3Q9TCBzdGF0dXM9MCBwb2xhcml0eT0wIGlycj0wIHRyaWc9TCBtYXNrPTAgZGVz
dF9pZDowCihYRU4pICAgICBJUlEgMTAgVmVjMTIwOgooWEVOKSAgICAgICBBcGljIDB4MDAsIFBp
biAxMDogdmVjPTc4IGRlbGl2ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0dXM9MCBwb2xhcml0eT0wIGly
cj0wIHRyaWc9RSBtYXNrPTAgZGVzdF9pZDowCihYRU4pICAgICBJUlEgMTEgVmVjMTM2OgooWEVO
KSAgICAgICBBcGljIDB4MDAsIFBpbiAxMTogdmVjPTg4IGRlbGl2ZXJ5PUxvUHJpIGRlc3Q9TCBz
dGF0dXM9MCBwb2xhcml0eT0wIGlycj0wIHRyaWc9RSBtYXNrPTAgZGVzdF9pZDowCihYRU4pICAg
ICBJUlEgMTIgVmVjMTQ0OgooWEVOKSAgICAgICBBcGljIDB4MDAsIFBpbiAxMjogdmVjPTkwIGRl
bGl2ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0dXM9MCBwb2xhcml0eT0wIGlycj0wIHRyaWc9RSBtYXNr
PTAgZGVzdF9pZDowCihYRU4pICAgICBJUlEgMTMgVmVjMTUyOgooWEVOKSAgICAgICBBcGljIDB4
MDAsIFBpbiAxMzogdmVjPTk4IGRlbGl2ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0dXM9MCBwb2xhcml0
eT0wIGlycj0wIHRyaWc9RSBtYXNrPTEgZGVzdF9pZDowCihYRU4pICAgICBJUlEgMTQgVmVjMTYw
OgooWEVOKSAgICAgICBBcGljIDB4MDAsIFBpbiAxNDogdmVjPWEwIGRlbGl2ZXJ5PUxvUHJpIGRl
c3Q9TCBzdGF0dXM9MCBwb2xhcml0eT0wIGlycj0wIHRyaWc9RSBtYXNrPTAgZGVzdF9pZDowCihY
RU4pICAgICBJUlEgMTUgVmVjMTY4OgooWEVOKSAgICAgICBBcGljIDB4MDAsIFBpbiAxNTogdmVj
PWE4IGRlbGl2ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0dXM9MCBwb2xhcml0eT0wIGlycj0wIHRyaWc9
RSBtYXNrPTAgZGVzdF9pZDowCihYRU4pICAgICBJUlEgMTYgVmVjMTc2OgooWEVOKSAgICAgICBB
cGljIDB4MDAsIFBpbiAxNjogdmVjPWIwIGRlbGl2ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0dXM9MCBw
b2xhcml0eT0xIGlycj0wIHRyaWc9TCBtYXNrPTAgZGVzdF9pZDowCihYRU4pICAgICBJUlEgMTgg
VmVjMTg0OgooWEVOKSAgICAgICBBcGljIDB4MDAsIFBpbiAxODogdmVjPWI4IGRlbGl2ZXJ5PUxv
UHJpIGRlc3Q9TCBzdGF0dXM9MCBwb2xhcml0eT0xIGlycj0wIHRyaWc9TCBtYXNrPTEgZGVzdF9p
ZDowCihYRU4pICAgICBJUlEgMTkgVmVjMTkyOgooWEVOKSAgICAgICBBcGljIDB4MDAsIFBpbiAx
OTogdmVjPWMwIGRlbGl2ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0dXM9MCBwb2xhcml0eT0xIGlycj0w
IHRyaWc9TCBtYXNrPTEgZGVzdF9pZDowCihYRU4pICAgICBJUlEgMjAgVmVjIDQ5OgooWEVOKSAg
ICAgICBBcGljIDB4MDAsIFBpbiAyMDogdmVjPTMxIGRlbGl2ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0
dXM9MCBwb2xhcml0eT0xIGlycj0wIHRyaWc9TCBtYXNrPTEgZGVzdF9pZDowCihYRU4pICAgICBJ
UlEgMjIgVmVjMjA4OgooWEVOKSAgICAgICBBcGljIDB4MDAsIFBpbiAyMjogdmVjPWQwIGRlbGl2
ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0dXM9MCBwb2xhcml0eT0xIGlycj0wIHRyaWc9TCBtYXNrPTEg
ZGVzdF9pZDowCihYRU4pICAgICBJUlEgMjMgVmVjMjAwOgooWEVOKSAgICAgICBBcGljIDB4MDAs
IFBpbiAyMzogdmVjPWM4IGRlbGl2ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0dXM9MCBwb2xhcml0eT0x
IGlycj0wIHRyaWc9TCBtYXNrPTAgZGVzdF9pZDowCihYRU4pIFttOiBtZW1vcnkgaW5mb10KKFhF
TikgUGh5c2ljYWwgbWVtb3J5IGluZm9ybWF0aW9uOgooWEVOKSAgICAgWGVuIGhlYXA6IDBrQiBm
cmVlCihYRU4pICAgICBoZWFwWzE0XTogNjQ1MTJrQiBmcmVlCihYRU4pICAgICBoZWFwWzE1XTog
MTMxMDcya0IgZnJlZQooWEVOKSAgICAgaGVhcFsxNl06IDI2MjE0NGtCIGZyZWUKKFhFTikgICAg
IGhlYXBbMTddOiA1MjIyMzZrQiBmcmVlCihYRU4pICAgICBoZWFwWzE4XTogMTA0ODU3MmtCIGZy
ZWUKKFhFTikgICAgIGhlYXBbMTldOiA2OTEwODhrQiBmcmVlCihYRU4pICAgICBoZWFwWzIwXTog
NTM3MTYwa0IgZnJlZQooWEVOKSAgICAgRG9tIGhlYXA6IDMyNTY3ODRrQiBmcmVlCihYRU4pIFtu
OiBOTUkgc3RhdGlzdGljc10KKFhFTikgQ1BVCU5NSQooWEVOKSAgIDAJICAwCihYRU4pICAgMQkg
IDAKKFhFTikgICAyCSAgMAooWEVOKSAgIDMJICAwCihYRU4pIGRvbTAgdmNwdTA6IE5NSSBuZWl0
aGVyIHBlbmRpbmcgbm9yIG1hc2tlZAooWEVOKSBbcTogZHVtcCBkb21haW4gKGFuZCBndWVzdCBk
ZWJ1ZykgaW5mb10KKFhFTikgJ3EnIHByZXNzZWQgLT4gZHVtcGluZyBkb21haW4gaW5mbyAobm93
PTB4NEM6QTE4N0I1RkQpCihYRU4pIEdlbmVyYWwgaW5mb3JtYXRpb24gZm9yIGRvbWFpbiAwOgoo
WEVOKSAgICAgcmVmY250PTMgZHlpbmc9MCBwYXVzZV9jb3VudD0wCihYRU4pICAgICBucl9wYWdl
cz0xODc1MzkgeGVuaGVhcF9wYWdlcz02IHNoYXJlZF9wYWdlcz0wIHBhZ2VkX3BhZ2VzPTAgZGly
dHlfY3B1cz17MS0zfSBtYXhfcGFnZXM9MTg4MTQ3CihYRU4pICAgICBoYW5kbGU9MDAwMDAwMDAt
MDAwMC0wMDAwLTAwMDAtMDAwMDAwMDAwMDAwIHZtX2Fzc2lzdD0wMDAwMDAwZAooWEVOKSBSYW5n
ZXNldHMgYmVsb25naW5nIHRvIGRvbWFpbiAwOgooWEVOKSAgICAgSS9PIFBvcnRzICB7IDAtMWYs
IDIyLTNmLCA0NC02MCwgNjItOWYsIGEyLTQwNywgNDBjLWNmYiwgZDAwLTIwNGYsIDIwNTgtZmZm
ZiB9CihYRU4pICAgICBJbnRlcnJ1cHRzIHsgMC0yNzkgfQooWEVOKSAgICAgSS9PIE1lbW9yeSB7
IDAtZmViZmYsIGZlYzAxLWZlZGZmLCBmZWUwMS1mZmZmZmZmZmZmZmZmZmZmIH0KKFhFTikgTWVt
b3J5IHBhZ2VzIGJlbG9uZ2luZyB0byBkb21haW4gMDoKKFhFTikgICAgIERvbVBhZ2UgbGlzdCB0
b28gbG9uZyB0byBkaXNwbGF5CihYRU4pICAgICBYZW5QYWdlIDAwMDAwMDAwMDAxNDc2ZjY6IGNh
Zj1jMDAwMDAwMDAwMDAwMDAyLCB0YWY9NzQwMDAwMDAwMDAwMDAwMgooWEVOKSAgICAgWGVuUGFn
ZSAwMDAwMDAwMDAwMTQ3NmY1OiBjYWY9YzAwMDAwMDAwMDAwMDAwMSwgdGFmPTc0MDAwMDAwMDAw
MDAwMDEKKFhFTikgICAgIFhlblBhZ2UgMDAwMDAwMDAwMDE0NzZmNDogY2FmPWMwMDAwMDAwMDAw
MDAwMDEsIHRhZj03NDAwMDAwMDAwMDAwMDAxCihYRU4pICAgICBYZW5QYWdlIDAwMDAwMDAwMDAx
NDc2ZjM6IGNhZj1jMDAwMDAwMDAwMDAwMDAxLCB0YWY9NzQwMDAwMDAwMDAwMDAwMQooWEVOKSAg
ICAgWGVuUGFnZSAwMDAwMDAwMDAwMGFhMGZkOiBjYWY9YzAwMDAwMDAwMDAwMDAwMiwgdGFmPTc0
MDAwMDAwMDAwMDAwMDIKKFhFTikgICAgIFhlblBhZ2UgMDAwMDAwMDAwMDExYzllYTogY2FmPWMw
MDAwMDAwMDAwMDAwMDIsIHRhZj03NDAwMDAwMDAwMDAwMDAyCihYRU4pIFZDUFUgaW5mb3JtYXRp
b24gYW5kIGNhbGxiYWNrcyBmb3IgZG9tYWluIDA6CihYRU4pICAgICBWQ1BVMDogQ1BVMCBbaGFz
PUZdIHBvbGw9MCB1cGNhbGxfcGVuZCA9IDAxLCB1cGNhbGxfbWFzayA9IDAwIGRpcnR5X2NwdXM9
e30gY3B1X2FmZmluaXR5PXswfQooWEVOKSAgICAgcGF1c2VfY291bnQ9MCBwYXVzZV9mbGFncz0w
CihYRU4pICAgICBObyBwZXJpb2RpYyB0aW1lcgooWEVOKSAgICAgVkNQVTE6IENQVTEgW2hhcz1G
XSBwb2xsPTAgdXBjYWxsX3BlbmQgPSAwMCwgdXBjYWxsX21hc2sgPSAwMCBkaXJ0eV9jcHVzPXsx
fSBjcHVfYWZmaW5pdHk9ezAtMTV9CihYRU4pICAgICBwYXVzZV9jb3VudD0wIHBhdXNlX2ZsYWdz
PTEKKFhFTikgICAgIE5vIHBlcmlvZGljIHRpbWVyCihYRU4pICAgICBWQ1BVMjogQ1BVMiBbaGFz
PUZdIHBvbGw9MCB1cGNhbGxfcGVuZCA9IDAwLCB1cGNhbGxfbWFzayA9IDAwIGRpcnR5X2NwdXM9
ezJ9IGNwdV9hZmZpbml0eT17MC0xNX0KKFhFTikgICAgIHBhdXNlX2NvdW50PTAgcGF1c2VfZmxh
Z3M9MQooWEVOKSAgICAgTm8gcGVyaW9kaWMgdGltZXIKKFhFTikgICAgIFZDUFUzOiBDUFUzIFto
YXM9Rl0gcG9sbD0wIHVwY2FsbF9wZW5kID0gMDAsIHVwY2FsbF9tYXNrID0gMDAgZGlydHlfY3B1
cz17M30gY3B1X2FmZmluaXR5PXswLTE1fQooWEVOKSAgICAgcGF1c2VfY291bnQ9MCBwYXVzZV9m
bGFncz0xCihYRU4pICAgICBObyBwZXJpb2RpYyB0aW1lcgooWEVOKSBOb3RpZnlpbmcgZ3Vlc3Qg
MDowICh2aXJxIDEsIHBvcnQgNSwgc3RhdCAwLzAvLTEpCihYRU4pIE5vdGlmeWluZyBndWVzdCAw
OjEgKHZpcnEgMSwgcG9ydCAxMSwgc3RhdCAwLzAvMCkKKFhFTikgTm90aWZ5aW5nIGd1ZXN0IDA6
MiAodmlycSAxLCBwb3J0IDE3LCBzdGF0IDAvMC8wKQooWEVOKSBOb3RpZnlpbmcgZ3Vlc3QgMDoz
ICh2aXJxIDEsIHBvcnQgMjMsIHN0YXQgMC8wLzApCgooWEVOKSBTaGFyZWQgZnJhbWVzIDAgLS0g
U2F2ZWQgZnJhbWVzIDAKWyAgMzI5LjMwNTc0N10gdihYRU4pIFtyOiBkdW1wIHJ1biBxdWV1ZXNd
CmNwdSAxCihYRU4pIHNjaGVkX3NtdF9wb3dlcl9zYXZpbmdzOiBkaXNhYmxlZAooWEVOKSBOT1c9
MHgwMDAwMDA0Q0FENEVFQjRGCihYRU4pIElkbGUgY3B1cG9vbDoKKFhFTikgU2NoZWR1bGVyOiBT
TVAgQ3JlZGl0IFNjaGVkdWxlciAoY3JlZGl0KQpbICAzMjkuMzA1NzQ4XSAgKFhFTikgaW5mbzoK
KFhFTikgCW5jcHVzICAgICAgICAgICAgICA9IDQKKFhFTikgCW1hc3RlciAgICAgICAgICAgICA9
IDAKKFhFTikgCWNyZWRpdCAgICAgICAgICAgICA9IDQwMAooWEVOKSAJY3JlZGl0IGJhbGFuY2Ug
ICAgID0gNDcKKFhFTikgCXdlaWdodCAgICAgICAgICAgICA9IDI1NgooWEVOKSAJcnVucV9zb3J0
ICAgICAgICAgID0gMjkyNAooWEVOKSAJZGVmYXVsdC13ZWlnaHQgICAgID0gMjU2CihYRU4pIAl0
c2xpY2UgICAgICAgICAgICAgPSAxMG1zCihYRU4pIAlyYXRlbGltaXQgICAgICAgICAgPSAxMDAw
dXMKKFhFTikgCWNyZWRpdHMgcGVyIG1zZWMgICA9IDEwCihYRU4pIAl0aWNrcyBwZXIgdHNsaWNl
ICAgPSAxCihYRU4pIAltaWdyYXRpb24gZGVsYXkgICAgPSAwdXMKIChYRU4pIGlkbGVyczogMDAw
YwooWEVOKSBhY3RpdmUgdmNwdXM6CihYRU4pIAkgIDE6IDA6IG1hc2tlZD0wIHBlbmRbMC4xXSBw
cmk9LTEgZmxhZ3M9MCBjcHU9MSBjcmVkaXQ9LTU0MCBbdz0yNTZdCmluZz0xIGV2ZW50X3NlbCAo
WEVOKSBDcHVwb29sIDA6CihYRU4pIFNjaGVkdWxlcjogU01QIENyZWRpdCBTY2hlZHVsZXIgKGNy
ZWRpdCkKKFhFTikgaW5mbzoKKFhFTikgCW5jcHVzICAgICAgICAgICAgICA9IDQKKFhFTikgCW1h
c3RlciAgICAgICAgICAgICA9IDAKKFhFTikgCWNyZWRpdCAgICAgICAgICAgICA9IDQwMAooWEVO
KSAJY3JlZGl0IGJhbGFuY2UgICAgID0gNDcKKFhFTikgCXdlaWdodCAgICAgICAgICAgICA9IDI1
NgooWEVOKSAJcnVucV9zb3J0ICAgICAgICAgID0gMjkyNAooWEVOKSAJZGVmYXVsdC13ZWlnaHQg
ICAgID0gMjU2CihYRU4pIAl0c2xpY2UgICAgICAgICAgICAgPSAxMG1zCihYRU4pIAlyYXRlbGlt
aXQgICAgICAgICAgPSAxMDAwdXMKKFhFTikgCWNyZWRpdHMgcGVyIG1zZWMgICA9IDEwCihYRU4p
IAl0aWNrcyBwZXIgdHNsaWNlICAgPSAxCihYRU4pIAltaWdyYXRpb24gZGVsYXkgICAgPSAwdXMK
MDAwMDAwMDAwMDAwMDAwMShYRU4pIGlkbGVyczogMDAwYwooWEVOKSBhY3RpdmUgdmNwdXM6CihY
RU4pIAkgIDE6IFswLjFdIHByaT0tMSBmbGFncz0wIGNwdT0xIGNyZWRpdD0tMTA5NiBbdz0yNTZd
CgooWEVOKSBDUFVbMDBdIFsgIDMyOS4zNzYyNjRdICAgc29ydD0yOTI0LCBzaWJsaW5nPTAwMDEs
ICBjb3JlPTAwMGYKKFhFTikgCXJ1bjogWzMyNzY3LjBdIHByaT0wIGZsYWdzPTAgY3B1PTAKKFhF
TikgCSAgMTogWzAuMF0gcHJpPTAgZmxhZ3M9MCBjcHU9MCBjcmVkaXQ9NzQgW3c9MjU2XQoxOiBt
YXNrZWQ9MCBwZW5kKFhFTikgQ1BVWzAxXSAgc29ydD0yOTI0LCBzaWJsaW5nPTAwMDIsIGNvcmU9
MDAwZgooWEVOKSAJcnVuOiBbMC4xXSBwcmk9LTEgZmxhZ3M9MCBjcHU9MSBjcmVkaXQ9LTEzNTUg
W3c9MjU2XQooWEVOKSAJICAxOiBbMzI3NjcuMV0gcHJpPS02NCBmbGFncz0wIGNwdT0xCihYRU4p
IENQVVswMl0gaW5nPTEgZXZlbnRfc2VsICBzb3J0PTI5MjQsIHNpYmxpbmc9MDAwNCwgMDAwMDAw
MDAwMDAwMDAwMWNvcmU9MDAwZgooWEVOKSAJcnVuOiBbMzI3NjcuMl0gcHJpPS02NCBmbGFncz0w
IGNwdT0yCgooWEVOKSBDUFVbMDNdIFsgIDMyOS40NDYzMzddICAgc29ydD0yOTI0LCBzaWJsaW5n
PTAwMDgsICBjb3JlPTAwMGYKKFhFTikgCXJ1bjogWzMyNzY3LjNdIHByaT0tNjQgZmxhZ3M9MCBj
cHU9MwoyOiBtYXNrZWQ9MSBwZW5kKFhFTikgW3M6IGR1bXAgc29mdHRzYyBzdGF0c10KaW5nPTEg
ZXZlbnRfc2VsIChYRU4pIFRTQyBtYXJrZWQgYXMgcmVsaWFibGUsIHdhcnAgPSAwIChjb3VudD0z
KQowMDAwMDAwMDAwMDAwMDAxKFhFTikgTm8gZG9tYWlucyBoYXZlIGVtdWxhdGVkIFRTQwoKKFhF
TikgW3Q6IGRpc3BsYXkgbXVsdGktY3B1IGNsb2NrIGluZm9dClsgIDMyOS40ODg1NzldICAoWEVO
KSBTeW5jZWQgc3RpbWUgc2tldzogbWF4PTE1NzYxbnMgYXZnPTEyMTI2bnMgc2FtcGxlcz0yIGN1
cnJlbnQ9MTU3NjFucwooWEVOKSBTeW5jZWQgY3ljbGVzIHNrZXc6IG1heD0xNzAgYXZnPTE2NSBz
YW1wbGVzPTIgY3VycmVudD0xNjAKIChYRU4pIFt1OiBkdW1wIG51bWEgaW5mb10KMzogbWFza2Vk
PTEgcGVuZChYRU4pICd1JyBwcmVzc2VkIC0+IGR1bXBpbmcgbnVtYSBpbmZvIChub3ctMHg0QzpC
QUExOUM1NykKaW5nPTAgZXZlbnRfc2VsIChYRU4pIGlkeDAgLT4gTk9ERTAgc3RhcnQtPjAgc2l6
ZS0+MTM2OTYwMCBmcmVlLT44MTQxOTYKMDAwMDAwMDAwMDAwMDAwMChYRU4pIHBoeXNfdG9fbmlk
KDAwMDAwMDAwMDAwMDEwMDApIC0+IDAgc2hvdWxkIGJlIDAKCihYRU4pIENQVTAgLT4gTk9ERTAK
KFhFTikgQ1BVMSAtPiBOT0RFMAooWEVOKSBDUFUyIC0+IE5PREUwCihYRU4pIENQVTMgLT4gTk9E
RTAKKFhFTikgTWVtb3J5IGxvY2F0aW9uIG9mIGVhY2ggZG9tYWluOgooWEVOKSBEb21haW4gMCAo
dG90YWw6IDE4NzUzOSk6ClsgIDMyOS41NDYyNjldICAoWEVOKSAgICAgTm9kZSAwOiAxODc1MzkK
IChYRU4pIFt2OiBkdW1wIEludGVsJ3MgVk1DU10KCihYRU4pICoqKioqKioqKioqIFZNQ1MgQXJl
YXMgKioqKioqKioqKioqKioKKFhFTikgKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioKWyAgMzI5LjU4NjM2Ml0gcChYRU4pIFt6OiBwcmludCBpb2FwaWMgaW5mb10KZW5kaW5n
OgooWEVOKSBudW1iZXIgb2YgTVAgSVJRIHNvdXJjZXM6IDE1LgpbICAzMjkuNTg2MzYzXSAgKFhF
TikgbnVtYmVyIG9mIElPLUFQSUMgIzIgcmVnaXN0ZXJzOiAyNC4KKFhFTikgdGVzdGluZyB0aGUg
SU8gQVBJQy4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uCiAgKFhFTikgSU8gQVBJQyAjMi4uLi4uLgoo
WEVOKSAuLi4uIHJlZ2lzdGVyICMwMDogMDIwMDAwMDAKKFhFTikgLi4uLi4uLiAgICA6IHBoeXNp
Y2FsIEFQSUMgaWQ6IDAyCihYRU4pIC4uLi4uLi4gICAgOiBEZWxpdmVyeSBUeXBlOiAwCihYRU4p
IC4uLi4uLi4gICAgOiBMVFMgICAgICAgICAgOiAwCjAwMDAwMDAwMDAwMDAwMDAoWEVOKSAuLi4u
IHJlZ2lzdGVyICMwMTogMDAxNzAwMjAKKFhFTikgLi4uLi4uLiAgICAgOiBtYXggcmVkaXJlY3Rp
b24gZW50cmllczogMDAxNwooWEVOKSAuLi4uLi4uICAgICA6IFBSUSBpbXBsZW1lbnRlZDogMAoo
WEVOKSAuLi4uLi4uICAgICA6IElPIEFQSUMgdmVyc2lvbjogMDAyMAooWEVOKSAuLi4uIElSUSBy
ZWRpcmVjdGlvbiB0YWJsZToKKFhFTikgIE5SIExvZyBQaHkgTWFzayBUcmlnIElSUiBQb2wgU3Rh
dCBEZXN0IERlbGkgVmVjdDogICAKIChYRU4pICAwMCAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAg
IDAgICAgMCAgICAwICAgIDAwCjAwMDAwMDAwMDAwMDAwMDAoWEVOKSAgMDEgMDAwIDAwICAwICAg
IDAgICAgMCAgIDAgICAwICAgIDEgICAgMSAgICAzOAogKFhFTikgIDAyIDAwMCAwMCAgMCAgICAw
ICAgIDAgICAwICAgMCAgICAxICAgIDEgICAgRjAKMDAwMDAwMDAwMDAwMDAwMChYRU4pICAwMyAw
MDAgMDAgIDAgICAgMCAgICAwICAgMCAgIDAgICAgMSAgICAxICAgIDQwCiAoWEVOKSAgMDQgMDAw
IDAwICAwICAgIDAgICAgMCAgIDAgICAwICAgIDEgICAgMSAgICA0OAowMDAwMDAwMDAwMDAwMDAw
KFhFTikgIDA1IDAwMCAwMCAgMCAgICAwICAgIDAgICAwICAgMCAgICAxICAgIDEgICAgNTAKIChY
RU4pICAwNiAwMDAgMDAgIDAgICAgMCAgICAwICAgMCAgIDAgICAgMSAgICAxICAgIDU4CjAwMDAw
MDAwMDAwMDAwMDAoWEVOKSAgMDcgMDAwIDAwICAwICAgIDAgICAgMCAgIDAgICAwICAgIDEgICAg
MSAgICA2MAogKFhFTikgIDA4IDAwMCAwMCAgMCAgICAwICAgIDAgICAwICAgMCAgICAxICAgIDEg
ICAgNjgKMDAwMDAwMDAwMDAwMDAwMChYRU4pICAwOSAwMDAgMDAgIDAgICAgMSAgICAwICAgMCAg
IDAgICAgMSAgICAxICAgIDcwCiAoWEVOKSAgMGEgMDAwIDAwICAwICAgIDAgICAgMCAgIDAgICAw
ICAgIDEgICAgMSAgICA3OAowMDAwMDAwMDAwMDAwMDAwKFhFTikgIDBiIDAwMCAwMCAgMCAgICAw
ICAgIDAgICAwICAgMCAgICAxICAgIDEgICAgODgKIChYRU4pICAwYyAwMDAgMDAgIDAgICAgMCAg
ICAwICAgMCAgIDAgICAgMSAgICAxICAgIDkwCjAwMDAwMDAwMDAwMDAwMDAoWEVOKSAgMGQgMDAw
IDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDEgICAgMSAgICA5OAoKKFhFTikgIDBlIDAwMCAw
MCAgMCAgICAwICAgIDAgICAwICAgMCAgICAxICAgIDEgICAgQTAKWyAgMzI5LjcyNDcyMV0gIChY
RU4pICAwZiAwMDAgMDAgIDAgICAgMCAgICAwICAgMCAgIDAgICAgMSAgICAxICAgIEE4CiAgKFhF
TikgIDEwIDAwMCAwMCAgMCAgICAxICAgIDAgICAxICAgMCAgICAxICAgIDEgICAgQjAKMDAwMDAw
MDAwMDAwMDAwMChYRU4pICAxMSAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAw
ICAgIDAwCiAoWEVOKSAgMTIgMDAwIDAwICAxICAgIDEgICAgMCAgIDEgICAwICAgIDEgICAgMSAg
ICBCOAowMDAwMDAwMDAwMDAwMDAwKFhFTikgIDEzIDAwMCAwMCAgMSAgICAxICAgIDAgICAxICAg
MCAgICAxICAgIDEgICAgQzAKIChYRU4pICAxNCAwMDAgMDAgIDEgICAgMSAgICAwICAgMSAgIDAg
ICAgMSAgICAxICAgIDMxCjAwMDAwMDAwMDAwMDAwMDAoWEVOKSAgMTUgMDAwIDAwICAxICAgIDAg
ICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAwMAogKFhFTikgIDE2IDAwMCAwMCAgMSAgICAxICAg
IDAgICAxICAgMCAgICAxICAgIDEgICAgRDAKMDAwMDAwMDAwMDAwMDAwMChYRU4pICAxNyAwMDAg
MDAgIDAgICAgMSAgICAwICAgMSAgIDAgICAgMSAgICAxICAgIEM4CihYRU4pIFVzaW5nIHZlY3Rv
ci1iYXNlZCBpbmRleGluZwooWEVOKSBJUlEgdG8gcGluIG1hcHBpbmdzOgogKFhFTikgSVJRMjQw
IC0+IDA6MgooWEVOKSBJUlE1NiAtPiAwOjEKKFhFTikgSVJRNjQgLT4gMDozCihYRU4pIElSUTcy
IC0+IDA6NAooWEVOKSBJUlE4MCAtPiAwOjUKKFhFTikgSVJRODggLT4gMDo2CihYRU4pIElSUTk2
IC0+IDA6NwooWEVOKSBJUlExMDQgLT4gMDo4CihYRU4pIElSUTExMiAtPiAwOjkKKFhFTikgSVJR
MTIwIC0+IDA6MTAKKFhFTikgSVJRMTM2IC0+IDA6MTEKKFhFTikgSVJRMTQ0IC0+IDA6MTIKKFhF
TikgSVJRMTUyIC0+IDA6MTMKKFhFTikgSVJRMTYwIC0+IDA6MTQKKFhFTikgSVJRMTY4IC0+IDA6
MTUKKFhFTikgSVJRMTc2IC0+IDA6MTYKKFhFTikgSVJRMTg0IC0+IDA6MTgKKFhFTikgSVJRMTky
IC0+IDA6MTkKKFhFTikgSVJRNDkgLT4gMDoyMAooWEVOKSBJUlEyMDggLT4gMDoyMgooWEVOKSBJ
UlEyMDAgLT4gMDoyMwooWEVOKSAuLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4g
ZG9uZS4KMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMApbICAzMjkuODQ3Nzc1XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAK
WyAgMzI5Ljg2MTczNl0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMyOS44NzU2OThdICAg
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMjkuODg5NjU5XSAgICAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAKWyAgMzI5LjkwMzYyMF0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMyOS45MTc1
ODJdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwNDAwODAwMjE4MgpbICAzMjkuOTMxNTQzXSAgICAKWyAgMzI5Ljkz
NDg1NF0gZ2xvYmFsIG1hc2s6ClsgIDMyOS45MzQ4NTRdICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZm
ZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZm
ZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZgpbICAzMjkuOTUwMDY5XSAgICBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZm
ZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZm
ZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYKWyAgMzI5Ljk2NDAzMF0g
ICAgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZm
ZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZm
ZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmClsgIDMyOS45Nzc5OTFdICAgIGZmZmZmZmZmZmZmZmZm
ZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZm
ZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZm
ZmZmZmZmZgpbICAzMjkuOTkxOTUyXSAgICBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZm
ZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZm
ZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYKWyAgMzMwLjAw
NTkxM10gICAgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZm
ZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZm
ZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmClsgIDMzMC4wMTk4NzRdICAgIGZmZmZmZmZm
ZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZm
ZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZm
ZmZmZmZmZmZmZmZmZgpbICAzMzAuMDMzODM1XSAgICBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZm
ZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZm
ZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZjgwMDgxMDQxMDUKWyAg
MzMwLjA0Nzc5Nl0gICAgClsgIDMzMC4wNTExMDhdIGdsb2JhbGx5IHVubWFza2VkOgpbICAzMzAu
MDUxMTA4XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzMwLjA2Njg1OV0gICAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwClsgIDMzMC4wODA4MjFdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApb
ICAzMzAuMDk0NzgxXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzMwLjEwODc0M10gICAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMzMC4xMjI3MDRdICAgIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMApbICAzMzAuMTM2NjY1XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzMwLjE1MDYy
N10gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDA0MDAwMDAyMDgyClsgIDMzMC4xNjQ1ODddICAgIApbICAzMzAuMTY3
ODk4XSBsb2NhbCBjcHUxIG1hc2s6ClsgIDMzMC4xNjc4OTldICAgIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMApbICAzMzAuMTgzNDcwXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzMwLjE5NzQz
Ml0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMzMC4yMTEzOTJdICAgIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMApbICAzMzAuMjI1MzU0XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzMw
LjIzOTMxNl0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMzMC4yNTMyNzddICAgIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMApbICAzMzAuMjY3MjM4XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDFmODAK
WyAgMzMwLjI4MTE5OV0gICAgClsgIDMzMC4yODQ1MTBdIGxvY2FsbHkgdW5tYXNrZWQ6ClsgIDMz
MC4yODQ1MTFdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMzAuMzAwMTcyXSAgICAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAKWyAgMzMwLjMxNDEzM10gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
ClsgIDMzMC4zMjgwOTRdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMzAuMzQyMDU2XSAg
ICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzMwLjM1NjAxN10gICAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwClsgIDMzMC4zNjk5NzhdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMzAuMzgz
OTQyXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwODAKWyAgMzMwLjM5NzkwMF0gICAgClsgIDMzMC40
MDEyMTJdIHBlbmRpbmcgbGlzdDoKWyAgMzMwLjQwNDI1NV0gICAwOiBldmVudCAxIC0+IGlycSAy
NzIgbG9jYWxseS1tYXNrZWQKWyAgMzMwLjQwOTI2Nl0gICAxOiBldmVudCA3IC0+IGlycSAyNzgK
WyAgMzMwLjQxMjkzN10gICAxOiBldmVudCA4IC0+IGlycSAyNzkgZ2xvYmFsbHktbWFza2VkClsg
IDMzMC40MTgwMzddICAgMjogZXZlbnQgMTMgLT4gaXJxIDI4NCBsb2NhbGx5LW1hc2tlZApbICAz
MzAuNDIzMTM4XSAgIDA6IGV2ZW50IDI3IC0+IGlycSAyOTcgZ2xvYmFsbHktbWFza2VkIGxvY2Fs
bHktbWFza2VkClsgIDMzMC40Mjk2NzFdICAgMDogZXZlbnQgMzggLT4gaXJxIDMwMyBsb2NhbGx5
LW1hc2tlZApbICAzMzAuNDM0Nzk0XSAKWyAgMzMwLjQzNDc5Nl0gdmNwdSAwClsgIDMzMC40MzQ3
OTZdICAgMDogbWFza2VkPTAgcGVuZGluZz0wIGV2ZW50X3NlbCAwMDAwMDAwMDAwMDAwMDAwClsg
IDMzMC40NDAwMzhdICAgMTogbWFza2VkPTAgcGVuZGluZz0wIGV2ZW50X3NlbCAwMDAwMDAwMDAw
MDAwMDAwClsgIDMzMC40NDYxMjNdICAgMjogbWFza2VkPTEgcGVuZGluZz0xIGV2ZW50X3NlbCAw
MDAwMDAwMDAwMDAwMDAxClsgIDMzMC40NTIyMDldICAgMzogbWFza2VkPTEgcGVuZGluZz0xIGV2
ZW50X3NlbCAwMDAwMDAwMDAwMDAwMDAxClsgIDMzMC40NTgyOTVdICAgClsgIDMzMC40NjQzNzld
IHBlbmRpbmc6ClsgIDMzMC40NjQzODBdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMzAu
NDc5MjM2XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzMwLjQ5MzE5N10gICAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwClsgIDMzMC41MDcxNThdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApb
ICAzMzAuNTIxMTE5XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzMwLjUzNTA4MF0gICAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMzMC41NDkwNDFdICAgIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMApbICAzMzAuNTYzMDAzXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDQwMDgyMDIxMDYKWyAgMzMwLjU3Njk2
NF0gICAgClsgIDMzMC41ODAyNzVdIGdsb2JhbCBtYXNrOgpbICAzMzAuNTgwMjc1XSAgICBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZm
ZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZm
IGZmZmZmZmZmZmZmZmZmZmYKWyAgMzMwLjU5NTQ4OV0gICAgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZm
ZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZm
ClsgIDMzMC42MDk0NTBdICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZm
ZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZgpbICAzMzAuNjIzNDEyXSAg
ICBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZm
ZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYKWyAgMzMwLjYzNzM3Ml0gICAgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZm
ZmZmZmZmClsgIDMzMC42NTEzMzRdICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZgpbICAzMzAuNjY1
Mjk1XSAgICBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYKWyAgMzMwLjY3OTI1N10gICAgZmZmZmZmZmZm
ZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmY4MDA4MTA0MTA1ClsgIDMzMC42OTMyMThdICAgIApbICAzMzAuNjk2NTI5XSBnbG9iYWxseSB1
bm1hc2tlZDoKWyAgMzMwLjY5NjUzMF0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMzMC43
MTIyODBdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMzAuNzI2MjQxXSAgICAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAKWyAgMzMwLjc0MDIwM10gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsg
IDMzMC43NTQxNjNdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMzAuNzY4MTI1XSAgICAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzMwLjc4MjA4Nl0gICAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwClsgIDMzMC43OTYwNDhdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwNDAwMDIwMjAwMgpbICAzMzAuODEwMDA4
XSAgICAKWyAgMzMwLjgxMzMxOV0gbG9jYWwgY3B1MCBtYXNrOgpbICAzMzAuODEzMzIwXSAgICAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzMwLjgyODg5MV0gICAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwClsgIDMzMC44NDI4NTRdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMzAuODU2ODE0
XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzMwLjg3MDc3Nl0gICAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwClsgIDMzMC44ODQ3MzZdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMzAu
ODk4Njk4XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzMwLjkxMjY1OV0gICAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCBm
ZmZmZmZmZmZlMDAwMDdmClsgIDMzMC45MjY2MjBdICAgIApbICAzMzAuOTI5OTMxXSBsb2NhbGx5
IHVubWFza2VkOgpbICAzMzAuOTI5OTMyXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzMw
Ljk0NTU5M10gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMzMC45NTk1NTVdICAgIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMApbICAzMzAuOTczNTE2XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAK
WyAgMzMwLjk4NzQ3Nl0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMzMS4wMDE0MzhdICAg
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMzEuMDE1NDAwXSAgICAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAKWyAgMzMxLjAyOTM2MF0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDA0MDAwMDAwMDAyClsgIDMzMS4wNDMz
MjFdICAgIApbICAzMzEuMDQ2NjMzXSBwZW5kaW5nIGxpc3Q6ClsgIDMzMS4wNDk2ODFdICAgMDog
ZXZlbnQgMSAtPiBpcnEgMjcyIGwyLWNsZWFyClsgIDMzMS4wNTQxNTFdICAgMDogZXZlbnQgMiAt
PiBpcnEgMjczIGwyLWNsZWFyIGdsb2JhbGx5LW1hc2tlZApbICAzMzEuMDYwMDU4XSAgIDE6IGV2
ZW50IDggLT4gaXJxIDI3OSBsMi1jbGVhciBnbG9iYWxseS1tYXNrZWQgbG9jYWxseS1tYXNrZWQK
WyAgMzMxLjA2NzMxMl0gICAyOiBldmVudCAxMyAtPiBpcnEgMjg0IGwyLWNsZWFyIGxvY2FsbHkt
bWFza2VkClsgIDMzMS4wNzMyMTRdICAgMzogZXZlbnQgMjEgLT4gaXJxIDI5MiBsMi1jbGVhciBs
b2NhbGx5LW1hc2tlZApbICAzMzEuMDc5MTIxXSAgIDA6IGV2ZW50IDI3IC0+IGlycSAyOTcgbDIt
Y2xlYXIgZ2xvYmFsbHktbWFza2VkClsgIDMzMS4wODUxMTZdICAgMDogZXZlbnQgMzggLT4gaXJx
IDMwMyBsMi1jbGVhcgpbICAzMzEuMDg5NzE2XSAKWyAgMzMxLjA4OTcxN10gdmNwdSAyClsgIDMz
MS4wODk3MThdICAgMDogbWFza2VkPTAgcGVuZGluZz0wIGV2ZW50X3NlbCAwMDAwMDAwMDAwMDAw
MDAwClsgIDMzMS4wOTQ5NzldICAgMTogbWFza2VkPTEgcGVuZGluZz0xIGV2ZW50X3NlbCAwMDAw
MDAwMDAwMDAwMDAxClsgIDMzMS4xMDEwNjRdICAgMjogbWFza2VkPTAgcGVuZGluZz0xIGV2ZW50
X3NlbCAwMDAwMDAwMDAwMDAwMDAxClsgIDMzMS4xMDcxNDldICAgMzogbWFza2VkPTEgcGVuZGlu
Zz0xIGV2ZW50X3NlbCAwMDAwMDAwMDAwMDAwMDAxClsgIDMzMS4xMTMyMzRdICAgClsgIDMzMS4x
MTkzMTldIHBlbmRpbmc6ClsgIDMzMS4xMTkzMTldICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApb
ICAzMzEuMTM0MTc1XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzMxLjE0ODEzNl0gICAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMzMS4xNjIwOTddICAgIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMApbICAzMzEuMTc2MDU4XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzMxLjE5MDAy
MF0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMzMS4yMDM5ODFdICAgIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMApbICAzMzEuMjE3OTQ0XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDgyMGUxMDYKWyAgMzMx
LjIzMTkwNF0gICAgClsgIDMzMS4yMzUyMTRdIGdsb2JhbCBtYXNrOgpbICAzMzEuMjM1MjE0XSAg
ICBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZm
ZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYKWyAgMzMxLjI1MDQyOF0gICAgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZm
ZmZmZmZmClsgIDMzMS4yNjQzODldICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZgpbICAzMzEuMjc4
MzUxXSAgICBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYKWyAgMzMxLjI5MjMxMl0gICAgZmZmZmZmZmZm
ZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmClsgIDMzMS4zMDYyNzRdICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZm
ZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZgpbICAz
MzEuMzIwMjM0XSAgICBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZm
ZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYKWyAgMzMxLjMzNDE5Nl0gICAgZmZm
ZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZm
ZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmY4MDA4MTA0MTA1ClsgIDMzMS4zNDgxNTddICAgIApbICAzMzEuMzUxNDY4XSBnbG9i
YWxseSB1bm1hc2tlZDoKWyAgMzMxLjM1MTQ2OF0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsg
IDMzMS4zNjcyMjBdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMzEuMzgxMTgxXSAgICAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzMxLjM5NTE0MV0gICAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwClsgIDMzMS40MDkxMDNdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMzEuNDIzMDY0
XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzMxLjQzNzAyNV0gICAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwClsgIDMzMS40NTA5ODZdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDIwYTAwMApbICAzMzEu
NDY0OTQ4XSAgICAKWyAgMzMxLjQ2ODI1OF0gbG9jYWwgY3B1MiBtYXNrOgpbICAzMzEuNDY4MjU5
XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzMxLjQ4MzgzMV0gICAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwClsgIDMzMS40OTc3OTNdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMzEu
NTExNzUzXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzMxLjUyNTcxNV0gICAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwClsgIDMzMS41Mzk2NzZdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApb
ICAzMzEuNTUzNjQwXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzMxLjU2NzU5OF0gICAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDdlMDAwClsgIDMzMS41ODE1NjBdICAgIApbICAzMzEuNTg0ODcwXSBs
b2NhbGx5IHVubWFza2VkOgpbICAzMzEuNTg0ODcxXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAK
WyAgMzMxLjYwMDUzMl0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMzMS42MTQ0OTRdICAg
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMzEuNjI4NDU0XSAgICAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAKWyAgMzMxLjY0MjQxNl0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMzMS42NTYz
NzddICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMzEuNjcwMzM5XSAgICAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAKWyAgMzMxLjY4NDI5OV0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDBhMDAwClsgIDMz
MS42OTgyNjBdICAgIApbICAzMzEuNzAxNTcxXSBwZW5kaW5nIGxpc3Q6ClsgIDMzMS43MDQ2MTZd
ICAgMDogZXZlbnQgMiAtPiBpcnEgMjczIGdsb2JhbGx5LW1hc2tlZCBsb2NhbGx5LW1hc2tlZApb
ICAzMzEuNzExMDU4XSAgIDE6IGV2ZW50IDggLT4gaXJxIDI3OSBnbG9iYWxseS1tYXNrZWQgbG9j
YWxseS1tYXNrZWQKWyAgMzMxLjcxNzUwMl0gICAyOiBldmVudCAxMyAtPiBpcnEgMjg0ClsgIDMz
MS43MjEyNjFdICAgMjogZXZlbnQgMTQgLT4gaXJxIDI4NSBnbG9iYWxseS1tYXNrZWQKWyAgMzMx
LjcyNjQ1Ml0gICAyOiBldmVudCAxNSAtPiBpcnEgMjg2ClsgIDMzMS43MzAyMTFdICAgMzogZXZl
bnQgMjEgLT4gaXJxIDI5MiBsb2NhbGx5LW1hc2tlZApbICAzMzEuNzM1MzEyXSAgIDA6IGV2ZW50
IDI3IC0+IGlycSAyOTcgZ2xvYmFsbHktbWFza2VkIGxvY2FsbHktbWFza2VkClsgIDMzMS43NDE4
NjhdIApbICAzMzEuNzQxODY5XSB2Y3B1IDMKWyAgMzMxLjc0MTg2OV0gICAwOiBtYXNrZWQ9MSBw
ZW5kaW5nPTEgZXZlbnRfc2VsIDAwMDAwMDAwMDAwMDAwMDEKWyAgMzMxLjc0NzEyN10gICAxOiBt
YXNrZWQ9MCBwZW5kaW5nPTAgZXZlbnRfc2VsIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzMxLjc1MzIx
M10gICAyOiBtYXNrZWQ9MCBwZW5kaW5nPTAgZXZlbnRfc2VsIDAwMDAwMDAwMDAwMDAwMDAKWyAg
MzMxLjc1OTI5OV0gICAzOiBtYXNrZWQ9MCBwZW5kaW5nPTEgZXZlbnRfc2VsIDAwMDAwMDAwMDAw
MDAwMDEKWyAgMzMxLjc2NTM4NF0gICAKWyAgMzMxLjc3MTQ2OF0gcGVuZGluZzoKWyAgMzMxLjc3
MTQ2OF0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMzMS43ODYzMjRdICAgIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMApbICAzMzEuODAwMjg1XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAg
MzMxLjgxNDI0NV0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMzMS44MjgyMDZdICAgIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMzEuODQyMTY4XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAKWyAgMzMxLjg1NjEyOV0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMzMS44NzAwOTFd
ICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwODMwNDEwNApbICAzMzEuODg0MDUzXSAgICAKWyAgMzMxLjg4NzM2
NF0gZ2xvYmFsIG1hc2s6ClsgIDMzMS44ODczNjVdICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZm
ZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZm
ZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZgpb
ICAzMzEuOTAyNTc3XSAgICBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZm
ZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZm
ZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYKWyAgMzMxLjkxNjUzOF0gICAg
ZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZm
ZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZm
ZmZmZiBmZmZmZmZmZmZmZmZmZmZmClsgIDMzMS45MzA1MDBdICAgIGZmZmZmZmZmZmZmZmZmZmYg
ZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZm
ZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZm
ZmZmZgpbICAzMzEuOTQ0NDYxXSAgICBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYg
ZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZm
ZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYKWyAgMzMxLjk1ODQy
Ml0gICAgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYg
ZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZm
ZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmClsgIDMzMS45NzIzODRdICAgIGZmZmZmZmZmZmZm
ZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYg
ZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZm
ZmZmZmZmZmZmZgpbICAzMzEuOTg2MzQ1XSAgICBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZm
ZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYg
ZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZjgwMDgxMDQxMDUKWyAgMzMy
LjAwMDMwNV0gICAgClsgIDMzMi4wMDM2MTZdIGdsb2JhbGx5IHVubWFza2VkOgpbICAzMzIuMDAz
NjE3XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzMyLjAxOTM2OF0gICAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwClsgIDMzMi4wMzMzMjldICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAz
MzIuMDQ3MjkxXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzMyLjA2MTI1Ml0gICAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwClsgIDMzMi4wNzUyMTNdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MApbICAzMzIuMDg5MTc0XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzMyLjEwMzEzNV0g
ICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMjgwMDAwClsgIDMzMi4xMTcwOTddICAgIApbICAzMzIuMTIwNDA3
XSBsb2NhbCBjcHUzIG1hc2s6ClsgIDMzMi4xMjA0MDhdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MApbICAzMzIuMTM1OTgwXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzMyLjE0OTk0MV0g
ICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMzMi4xNjM5MDJdICAgIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMApbICAzMzIuMTc3ODY0XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzMyLjE5
MTgyNV0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMzMi4yMDU3ODZdICAgIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMApbICAzMzIuMjE5NzQ3XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDFmODAwMDAKWyAg
MzMyLjIzMzcwOV0gICAgClsgIDMzMi4yMzcwMTldIGxvY2FsbHkgdW5tYXNrZWQ6ClsgIDMzMi4y
MzcwMjBdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMzIuMjUyNjgxXSAgICAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAKWyAgMzMyLjI2NjY0M10gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsg
IDMzMi4yODA2MDRdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMzIuMjk0NTY1XSAgICAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzMyLjMwODUyNV0gICAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwClsgIDMzMi4zMjI0ODddICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMzIuMzM2NDQ4
XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAyODAwMDAKWyAgMzMyLjM1MDQwOV0gICAgClsgIDMzMi4zNTM3
MjFdIHBlbmRpbmcgbGlzdDoKWyAgMzMyLjM1Njc2NF0gICAwOiBldmVudCAyIC0+IGlycSAyNzMg
Z2xvYmFsbHktbWFza2VkIGxvY2FsbHktbWFza2VkClsgIDMzMi4zNjMyMDhdICAgMTogZXZlbnQg
OCAtPiBpcnEgMjc5IGdsb2JhbGx5LW1hc2tlZCBsb2NhbGx5LW1hc2tlZApbICAzMzIuMzY5NjUy
XSAgIDI6IGV2ZW50IDE0IC0+IGlycSAyODUgZ2xvYmFsbHktbWFza2VkIGxvY2FsbHktbWFza2Vk
ClsgIDMzMi4zNzYxODVdICAgMzogZXZlbnQgMTkgLT4gaXJxIDI5MApbICAzMzIuMzc5OTQyXSAg
IDM6IGV2ZW50IDIwIC0+IGlycSAyOTEgZ2xvYmFsbHktbWFza2VkClsgIDMzMi4zODUxMzNdICAg
MzogZXZlbnQgMjEgLT4gaXJxIDI5MgpbICAzMzIuMzg4ODkzXSAgIDA6IGV2ZW50IDI3IC0+IGly
cSAyOTcgZ2xvYmFsbHktbWFza2VkIGxvY2FsbHktbWFza2VkCgo=
--14dae9340a33ade82404c6b2a306
Content-Type: text/plain; charset=US-ASCII; name="syslog-bad.txt"
Content-Disposition: attachment; filename="syslog-bad.txt"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_h5lfctaa1

WyAgMjMyLjU1NzcwOV0gaXdsd2lmaSAwMDAwOjAyOjAwLjA6IEwxIERpc2FibGVkOyBFbmFibGlu
ZyBMMFMKWyAgMjMyLjU2OTcxMl0gaXdsd2lmaSAwMDAwOjAyOjAwLjA6IFJhZGlvIHR5cGU9MHgx
LTB4Mi0weDAKWyAgMjM1LjAxNTg4NV0gaXdsd2lmaSAwMDAwOjAyOjAwLjA6IFBDSSBJTlQgQSBk
aXNhYmxlZApbICAyMzUuNzU5NjM1XSBici1ldGgwOiBwb3J0IDEoZXRoMCkgZW50ZXJpbmcgZm9y
d2FyZGluZyBzdGF0ZQpbICAyMzUuNzczNTk5XSBici1ldGgwOiBwb3J0IDEoZXRoMCkgZW50ZXJp
bmcgZm9yd2FyZGluZyBzdGF0ZQpbICAyMzUuNzc5MjY1XSBici1ldGgwOiBwb3J0IDEoZXRoMCkg
ZW50ZXJpbmcgZm9yd2FyZGluZyBzdGF0ZQpbICAyMzUuNzg1NTI5XSBici1ldGgwOiBwb3J0IDEo
ZXRoMCkgZW50ZXJpbmcgZm9yd2FyZGluZyBzdGF0ZQpbICAyMzYuMDI1NDk3XSBlMTAwMGUgMDAw
MDowMDoxOS4wOiBQQ0kgSU5UIEEgZGlzYWJsZWQKWyAgMjM2LjMzNTQwMF0gZWhjaV9oY2QgMDAw
MDowMDoxZC4wOiByZW1vdmUsIHN0YXRlIDEKWyAgMjM2LjM0MDI0OV0gdXNiIHVzYjI6IFVTQiBk
aXNjb25uZWN0LCBkZXZpY2UgbnVtYmVyIDEKWyAgMjM2LjM0NTUzOF0gdXNiIDItMTogVVNCIGRp
c2Nvbm5lY3QsIGRldmljZSBudW1iZXIgMgpbICAyMzYuMzUwNzE2XSB1c2IgMi0xLjU6IFVTQiBk
aXNjb25uZWN0LCBkZXZpY2UgbnVtYmVyIDMKWyAgMjM2LjQ5NTU3MF0gdXNiIDItMS42OiBVU0Ig
ZGlzY29ubmVjdCwgZGV2aWNlIG51bWJlciA0ClsgIDIzNi43MTAyNjldIGVoY2lfaGNkIDAwMDA6
MDA6MWQuMDogVVNCIGJ1cyAyIGRlcmVnaXN0ZXJlZApbICAyMzYuNzE1ODU2XSBlaGNpX2hjZCAw
MDAwOjAwOjFkLjA6IFBDSSBJTlQgQSBkaXNhYmxlZApbICAyMzYuNzIxMDI3XSBlaGNpX2hjZCAw
MDAwOjAwOjFhLjA6IHJlbW92ZSwgc3RhdGUgNApbICAyMzYuNzI2MDQ4XSB1c2IgdXNiMTogVVNC
IGRpc2Nvbm5lY3QsIGRldmljZSBudW1iZXIgMQpbICAyMzYuNzMxMzExXSB1c2IgMS0xOiBVU0Ig
ZGlzY29ubmVjdCwgZGV2aWNlIG51bWJlciAyClsgIDIzNi43NDA2NThdIGVoY2lfaGNkIDAwMDA6
MDA6MWEuMDogVVNCIGJ1cyAxIGRlcmVnaXN0ZXJlZApbICAyMzYuNzQ2MjI0XSBlaGNpX2hjZCAw
MDAwOjAwOjFhLjA6IFBDSSBJTlQgQSBkaXNhYmxlZApbICAyMzYuNzk1NDE4XSB4aGNpX2hjZCAw
MDAwOjAwOjE0LjA6IHJlbW92ZSwgc3RhdGUgNApbICAyMzYuODAwMjY1XSB1c2IgdXNiNDogVVNC
IGRpc2Nvbm5lY3QsIGRldmljZSBudW1iZXIgMQpbICAyMzYuODA1NTg5XSB4SENJIHhoY2lfZHJv
cF9lbmRwb2ludCBjYWxsZWQgZm9yIHJvb3QgaHViClsgIDIzNi44MTEwMDRdIHhIQ0kgeGhjaV9j
aGVja19iYW5kd2lkdGggY2FsbGVkIGZvciByb290IGh1YgpbICAyMzYuODE2NzI5XSB4aGNpX2hj
ZCAwMDAwOjAwOjE0LjA6IFVTQiBidXMgNCBkZXJlZ2lzdGVyZWQKWyAgMjM2LjgyMjMzMF0geGhj
aV9oY2QgMDAwMDowMDoxNC4wOiByZW1vdmUsIHN0YXRlIDQKWyAgMjM2LjgyNzI5OV0gdXNiIHVz
YjM6IFVTQiBkaXNjb25uZWN0LCBkZXZpY2UgbnVtYmVyIDEKWyAgMjM2LjgzMjU5OF0geEhDSSB4
aGNpX2Ryb3BfZW5kcG9pbnQgY2FsbGVkIGZvciByb290IGh1YgpbICAyMzYuODM4MDMzXSB4SENJ
IHhoY2lfY2hlY2tfYmFuZHdpZHRoIGNhbGxlZCBmb3Igcm9vdCBodWIKWyAgMjM2Ljg0MzgwN10g
eGhjaV9oY2QgMDAwMDowMDoxNC4wOiBVU0IgYnVzIDMgZGVyZWdpc3RlcmVkClsgIDIzNi44OTU0
ODBdIHhoY2lfaGNkIDAwMDA6MDA6MTQuMDogUENJIElOVCBBIGRpc2FibGVkClsgIDIzNy4yMDcx
OTVdIFBNOiBTeW5jaW5nIGZpbGVzeXN0ZW1zIC4uLiBkb25lLgpbICAyMzcuMjExODMxXSBQTTog
UHJlcGFyaW5nIHN5c3RlbSBmb3IgbWVtIHNsZWVwClsgIDIzNy40NDU0MTldIEZyZWV6aW5nIHVz
ZXIgc3BhY2UgcHJvY2Vzc2VzIC4uLiAoZWxhcHNlZCAwLjAxIHNlY29uZHMpIGRvbmUuClsgIDIz
Ny40Njc5ODldIEZyZWV6aW5nIHJlbWFpbmluZyBmcmVlemFibGUgdGFza3MgLi4uIChlbGFwc2Vk
IDAuMDEgc2Vjb25kcykgZG9uZS4KWyAgMjM3LjQ4Nzk4Nl0gUE06IEVudGVyaW5nIG1lbSBzbGVl
cApbICAyMzcuNDkxOTA0XSBzZCAwOjA6MDowOiBbc2RhXSBTeW5jaHJvbml6aW5nIFNDU0kgY2Fj
aGUKWyAgMjM3LjQ5MjI3NV0gc25kX2hkYV9pbnRlbCAwMDAwOjAwOjFiLjA6IFBDSSBJTlQgQSBk
aXNhYmxlZApbICAyMzcuNDkyMzY3XSBBQ1BJIGhhbmRsZSBoYXMgbm8gY29udGV4dCEKWyAgMjM3
LjUwNjk0NV0gc2QgMDowOjA6MDogW3NkYV0gU3RvcHBpbmcgZGlzawpbICAyMzcuNjA1Mzc4XSBQ
TTogc3VzcGVuZCBvZiBkcnY6YWhjaSBkZXY6MDAwMDowMDoxZi4yIGNvbXBsZXRlIGFmdGVyIDEx
My4yNTMgbXNlY3MKWyAgMjM3LjYxMzAwOF0gUE06IHN1c3BlbmQgb2YgZHJ2OiBkZXY6cGNpMDAw
MDowMCBjb21wbGV0ZSBhZnRlciAxMDcuNjQxIG1zZWNzClsgIDIzNy42MjAyNjJdIFBNOiBzdXNw
ZW5kIG9mIGRldmljZXMgY29tcGxldGUgYWZ0ZXIgMTI4LjQ1MSBtc2VjcwpbICAyMzcuNjI2NDI4
XSBQTTogc3VzcGVuZCBkZXZpY2VzIHRvb2sgMC4xNDAgc2Vjb25kcwpbICAyMzcuNjMyNjIzXSBQ
TTogbGF0ZSBzdXNwZW5kIG9mIGRldmljZXMgY29tcGxldGUgYWZ0ZXIgMS4xOTEgbXNlY3MKWyAg
MjM3LjYzOTMwOV0gQUNQSTogUHJlcGFyaW5nIHRvIGVudGVyIHN5c3RlbSBzbGVlcCBzdGF0ZSBT
MwpbICAyMzcuNjQ1MTIyXSBQTTogU2F2aW5nIHBsYXRmb3JtIE5WUyBtZW1vcnkKWyAgMjM3Ljg0
MjYxMF0gRGlzYWJsaW5nIG5vbi1ib290IENQVXMgLi4uCihYRU4pIFByZXBhcmluZyBzeXN0ZW0g
Zm9yIEFDUEkgUzMgc3RhdGUuCihYRU4pIERpc2FibGluZyBub24tYm9vdCBDUFVzIC4uLgooWEVO
KSBFbnRlcmluZyBBQ1BJIFMzIHN0YXRlLgooWEVOKSBtY2VfaW50ZWwuYzoxMjM5OiBNQ0EgQ2Fw
YWJpbGl0eTogQkNBU1QgMSBTRVIgMCBDTUNJIDEgZmlyc3RiYW5rIDAgZXh0ZW5kZWQgTUNFIE1T
UiAwCihYRU4pIENQVTAgQ01DSSBMVlQgdmVjdG9yICgweGYxKSBhbHJlYWR5IGluc3RhbGxlZAoo
WEVOKSBGaW5pc2hpbmcgd2FrZXVwIGZyb20gQUNQSSBTMyBzdGF0ZS4KKFhFTikgbWljcm9jb2Rl
OiBjb2xsZWN0X2NwdV9pbmZvIDogc2lnPTB4MzA2YTQsIHBmPTB4MiwgcmV2PTB4NwooWEVOKSBF
bmFibGluZyBub24tYm9vdCBDUFVzICAuLi4KKFhFTikgbWljcm9jb2RlOiBjb2xsZWN0X2NwdV9p
bmZvIDogc2lnPTB4MzA2YTQsIHBmPTB4MiwgcmV2PTB4NwooWEVOKSBtaWNyb2NvZGU6IGNvbGxl
Y3RfY3B1X2luZm8gOiBzaWc9MHgzMDZhNCwgcGY9MHgyLCByZXY9MHg3CihYRU4pIG1pY3JvY29k
ZTogY29sbGVjdF9jcHVfaW5mbyA6IHNpZz0weDMwNmE0LCBwZj0weDIsIHJldj0weDcKWyAgMjM5
LjYyMjU0OV0gQUNQSTogTG93LWxldmVsIHJlc3VtZSBjb21wbGV0ZQpbICAyMzkuNjI2ODY3XSBQ
TTogUmVzdG9yaW5nIHBsYXRmb3JtIE5WUyBtZW1vcnkKKFhFTikgbWljcm9jb2RlOiBjb2xsZWN0
X2NwdV9pbmZvIDogc2lnPTB4MzA2YTQsIHBmPTB4MiwgcmV2PTB4NwooWEVOKSBtaWNyb2NvZGU6
IGNvbGxlY3RfY3B1X2luZm8gOiBzaWc9MHgzMDZhNCwgcGY9MHgyLCByZXY9MHg3CihYRU4pIG1p
Y3JvY29kZTogY29sbGVjdF9jcHVfaW5mbyA6IHNpZz0weDMwNmE0LCBwZj0weDIsIHJldj0weDcK
KFhFTikgbWljcm9jb2RlOiBjb2xsZWN0X2NwdV9pbmZvIDogc2lnPTB4MzA2YTQsIHBmPTB4Miwg
cmV2PTB4NwpbICAyMzkuNzc5ODg0XSBFbmFibGluZyBub24tYm9vdCBDUFVzIC4uLgpbICAyMzku
NzgzNzY2XSBpbnN0YWxsaW5nIFhlbiB0aW1lciBmb3IgQ1BVIDEKWyAgMjM5Ljc4ODAwMF0gY3B1
IDEgc3BpbmxvY2sgZXZlbnQgaXJxIDI3OQpbICAyMzkuNzkzMjQwXSBDUFUxIGlzIHVwClsgIDIz
OS43OTU3ODVdIGluc3RhbGxpbmcgWGVuIHRpbWVyIGZvciBDUFUgMgpbICAyMzkuNzk5OTQ5XSBj
cHUgMiBzcGlubG9jayBldmVudCBpcnEgMjg1ClsgIDIzOS44MDUyMzJdIENQVTIgaXMgdXAKWyAg
MjM5LjgwNzcyNV0gaW5zdGFsbGluZyBYZW4gdGltZXIgZm9yIENQVSAzClsgIDIzOS44MTE4ODld
IGNwdSAzIHNwaW5sb2NrIGV2ZW50IGlycSAyOTEKWyAgMjM5LjgxNzI0NV0gQ1BVMyBpcyB1cApb
ICAyMzkuODIxNjU3XSBBQ1BJOiBXYWtpbmcgdXAgZnJvbSBzeXN0ZW0gc2xlZXAgc3RhdGUgUzMK
WyAgMjM5LjgyNzI1NV0gaTkxNSAwMDAwOjAwOjAyLjA6IHJlc3RvcmluZyBjb25maWcgc3BhY2Ug
YXQgb2Zmc2V0IDB4ZiAod2FzIDB4MTAwLCB3cml0aW5nIDB4MTBiKQpbICAyMzkuODM2MDcxXSBp
OTE1IDAwMDA6MDA6MDIuMDogcmVzdG9yaW5nIGNvbmZpZyBzcGFjZSBhdCBvZmZzZXQgMHgxICh3
YXMgMHg5MDAwMDcsIHdyaXRpbmcgMHg5MDA0MDcpClsgIDIzOS44NDU1OTBdIHBjaSAwMDAwOjAw
OjE0LjA6IHJlc3RvcmluZyBjb25maWcgc3BhY2UgYXQgb2Zmc2V0IDB4ZiAod2FzIDB4MTAwLCB3
cml0aW5nIDB4MTBiKQpbICAyMzkuODU0NDE4XSBwY2kgMDAwMDowMDoxNC4wOiByZXN0b3Jpbmcg
Y29uZmlnIHNwYWNlIGF0IG9mZnNldCAweDQgKHdhcyAweDQsIHdyaXRpbmcgMHhiMDJiMDAwNCkK
WyAgMjM5Ljg2MzUyN10gcGNpIDAwMDA6MDA6MTQuMDogcmVzdG9yaW5nIGNvbmZpZyBzcGFjZSBh
dCBvZmZzZXQgMHgxICh3YXMgMHgyOTAwMDAwLCB3cml0aW5nIDB4MjkwMDAwMikKWyAgMjM5Ljg3
MzEzOV0gcGNpIDAwMDA6MDA6MTYuMDogcmVzdG9yaW5nIGNvbmZpZyBzcGFjZSBhdCBvZmZzZXQg
MHhmICh3YXMgMHgxMDAsIHdyaXRpbmcgMHgxMGIpClsgIDIzOS44ODE5ODZdIHBjaSAwMDAwOjAw
OjE2LjA6IHJlc3RvcmluZyBjb25maWcgc3BhY2UgYXQgb2Zmc2V0IDB4NCAod2FzIDB4ZmVkYjAw
MDQsIHdyaXRpbmcgMHhiMDJhMDAwNCkKWyAgMjM5Ljg5MTc2MF0gc2VyaWFsIDAwMDA6MDA6MTYu
MzogcmVzdG9yaW5nIGNvbmZpZyBzcGFjZSBhdCBvZmZzZXQgMHhmICh3YXMgMHgyMDAsIHdyaXRp
bmcgMHgyMGEpClsgIDIzOS45MDA4NjZdIHNlcmlhbCAwMDAwOjAwOjE2LjM6IHJlc3RvcmluZyBj
b25maWcgc3BhY2UgYXQgb2Zmc2V0IDB4NSAod2FzIDB4MCwgd3JpdGluZyAweGIwMjkwMDAwKQpb
ICAyMzkuOTEwMjM3XSBzZXJpYWwgMDAwMDowMDoxNi4zOiByZXN0b3JpbmcgY29uZmlnIHNwYWNl
IGF0IG9mZnNldCAweDQgKHdhcyAweDEsIHdyaXRpbmcgMHgzMGUxKQpbICAyMzkuOTE5MjgyXSBz
ZXJpYWwgMDAwMDowMDoxNi4zOiByZXN0b3JpbmcgY29uZmlnIHNwYWNlIGF0IG9mZnNldCAweDEg
KHdhcyAweGIwMDAwMCwgd3JpdGluZyAweGIwMDAwNykKWyAgMjM5LjkyODk4N10gcGNpIDAwMDA6
MDA6MTkuMDogcmVzdG9yaW5nIGNvbmZpZyBzcGFjZSBhdCBvZmZzZXQgMHhmICh3YXMgMHgxMDAs
IHdyaXRpbmcgMHgxMDUpClsgIDIzOS45Mzc4NDBdIHBjaSAwMDAwOjAwOjE5LjA6IHJlc3Rvcmlu
ZyBjb25maWcgc3BhY2UgYXQgb2Zmc2V0IDB4MSAod2FzIDB4MTAwMDAyLCB3cml0aW5nIDB4MTAw
MDAzKQpbICAyMzkuOTQ3MjQ5XSBwY2kgMDAwMDowMDoxYS4wOiByZXN0b3JpbmcgY29uZmlnIHNw
YWNlIGF0IG9mZnNldCAweGYgKHdhcyAweDEwMCwgd3JpdGluZyAweDEwYikKWyAgMjM5Ljk1NjA4
NV0gcGNpIDAwMDA6MDA6MWEuMDogcmVzdG9yaW5nIGNvbmZpZyBzcGFjZSBhdCBvZmZzZXQgMHg0
ICh3YXMgMHgwLCB3cml0aW5nIDB4YjAyNzAwMDApClsgIDIzOS45NjUxOTNdIHBjaSAwMDAwOjAw
OjFhLjA6IHJlc3RvcmluZyBjb25maWcgc3BhY2UgYXQgb2Zmc2V0IDB4MSAod2FzIDB4MjkwMDAw
MCwgd3JpdGluZyAweDI5MDAwMDIpClsgIDIzOS45NzQ4MTVdIHNuZF9oZGFfaW50ZWwgMDAwMDow
MDoxYi4wOiByZXN0b3JpbmcgY29uZmlnIHNwYWNlIGF0IG9mZnNldCAweGYgKHdhcyAweDEwMCwg
d3JpdGluZyAweDEwMykKWyAgMjM5Ljk4NDU0NV0gc25kX2hkYV9pbnRlbCAwMDAwOjAwOjFiLjA6
IHJlc3RvcmluZyBjb25maWcgc3BhY2UgYXQgb2Zmc2V0IDB4NCAod2FzIDB4NCwgd3JpdGluZyAw
eGIwMjYwMDA0KQpbICAyMzkuOTk0NTQxXSBzbmRfaGRhX2ludGVsIDAwMDA6MDA6MWIuMDogcmVz
dG9yaW5nIGNvbmZpZyBzcGFjZSBhdCBvZmZzZXQgMHgzICh3YXMgMHgwLCB3cml0aW5nIDB4MTAp
ClsgIDI0MC4wMDQwMzBdIHNuZF9oZGFfaW50ZWwgMDAwMDowMDoxYi4wOiByZXN0b3JpbmcgY29u
ZmlnIHNwYWNlIGF0IG9mZnNldCAweDEgKHdhcyAweDEwMDAwMCwgd3JpdGluZyAweDEwMDAwMikK
WyAgMjQwLjAxNDM4MV0gcGNpZXBvcnQgMDAwMDowMDoxYy4wOiByZXN0b3JpbmcgY29uZmlnIHNw
YWNlIGF0IG9mZnNldCAweGYgKHdhcyAweDEwMCwgd3JpdGluZyAweDEwYikKWyAgMjQwLjAyMzY1
OV0gcGNpZXBvcnQgMDAwMDowMDoxYy4wOiByZXN0b3JpbmcgY29uZmlnIHNwYWNlIGF0IG9mZnNl
dCAweDMgKHdhcyAweDgxMDAwMCwgd3JpdGluZyAweDgxMDAxMCkKWyAgMjQwLjAzMzQ3NF0gcGNp
ZXBvcnQgMDAwMDowMDoxYy4wOiByZXN0b3JpbmcgY29uZmlnIHNwYWNlIGF0IG9mZnNldCAweDEg
KHdhcyAweDEwMDAwMCwgd3JpdGluZyAweDEwMDAwNykKWyAgMjQwLjA0MzM4N10gcGNpZXBvcnQg
MDAwMDowMDoxYy42OiByZXN0b3JpbmcgY29uZmlnIHNwYWNlIGF0IG9mZnNldCAweGYgKHdhcyAw
eDMwMCwgd3JpdGluZyAweDMwNCkKWyAgMjQwLjA1MjY1M10gcGNpZXBvcnQgMDAwMDowMDoxYy42
OiByZXN0b3JpbmcgY29uZmlnIHNwYWNlIGF0IG9mZnNldCAweDMgKHdhcyAweDgxMDAwMCwgd3Jp
dGluZyAweDgxMDAxMCkKWyAgMjQwLjA2MjQ3MV0gcGNpZXBvcnQgMDAwMDowMDoxYy42OiByZXN0
b3JpbmcgY29uZmlnIHNwYWNlIGF0IG9mZnNldCAweDEgKHdhcyAweDEwMDAwMCwgd3JpdGluZyAw
eDEwMDAwNykKWyAgMjQwLjA3MjM4NV0gcGNpZXBvcnQgMDAwMDowMDoxYy43OiByZXN0b3Jpbmcg
Y29uZmlnIHNwYWNlIGF0IG9mZnNldCAweGYgKHdhcyAweDQwMCwgd3JpdGluZyAweDQwYSkKWyAg
MjQwLjA4MTY1Ml0gcGNpZXBvcnQgMDAwMDowMDoxYy43OiByZXN0b3JpbmcgY29uZmlnIHNwYWNl
IGF0IG9mZnNldCAweDMgKHdhcyAweDgxMDAwMCwgd3JpdGluZyAweDgxMDAxMCkKWyAgMjQwLjA5
MTUzM10gcGNpIDAwMDA6MDA6MWQuMDogcmVzdG9yaW5nIGNvbmZpZyBzcGFjZSBhdCBvZmZzZXQg
MHhmICh3YXMgMHgxMDAsIHdyaXRpbmcgMHgxMGIpClsgIDI0MC4xMDAzNTJdIHBjaSAwMDAwOjAw
OjFkLjA6IHJlc3RvcmluZyBjb25maWcgc3BhY2UgYXQgb2Zmc2V0IDB4NCAod2FzIDB4MCwgd3Jp
dGluZyAweGIwMjUwMDAwKQpbICAyNDAuMTA5NDU4XSBwY2kgMDAwMDowMDoxZC4wOiByZXN0b3Jp
bmcgY29uZmlnIHNwYWNlIGF0IG9mZnNldCAweDEgKHdhcyAweDI5MDAwMDAsIHdyaXRpbmcgMHgy
OTAwMDAyKQpbICAyNDAuMTE5MjYwXSBhaGNpIDAwMDA6MDA6MWYuMjogcmVzdG9yaW5nIGNvbmZp
ZyBzcGFjZSBhdCBvZmZzZXQgMHgxICh3YXMgMHgyYjAwMDA3LCB3cml0aW5nIDB4MmIwMDQwNykK
WyAgMjQwLjEyODg4MF0gcGNpIDAwMDA6MDA6MWYuMzogcmVzdG9yaW5nIGNvbmZpZyBzcGFjZSBh
dCBvZmZzZXQgMHgxICh3YXMgMHgyODAwMDAxLCB3cml0aW5nIDB4MjgwMDAwMykKWyAgMjQwLjEz
ODQwMV0gcGNpIDAwMDA6MDI6MDAuMDogcmVzdG9yaW5nIGNvbmZpZyBzcGFjZSBhdCBvZmZzZXQg
MHhmICh3YXMgMHgxMDAsIHdyaXRpbmcgMHgxMDQpClsgIDI0MC4xNDcyNTBdIHBjaSAwMDAwOjAy
OjAwLjA6IHJlc3RvcmluZyBjb25maWcgc3BhY2UgYXQgb2Zmc2V0IDB4NCAod2FzIDB4NCwgd3Jp
dGluZyAweGIwMTAwMDA0KQpbICAyNDAuMTU2MzMzXSBwY2kgMDAwMDowMjowMC4wOiByZXN0b3Jp
bmcgY29uZmlnIHNwYWNlIGF0IG9mZnNldCAweDMgKHdhcyAweDAsIHdyaXRpbmcgMHgxMCkKWyAg
MjQwLjE2NDkyNl0gcGNpIDAwMDA6MDI6MDAuMDogcmVzdG9yaW5nIGNvbmZpZyBzcGFjZSBhdCBv
ZmZzZXQgMHgxICh3YXMgMHgxMDAwMDAsIHdyaXRpbmcgMHgxMDAwMDIpClsgIDI0MC4xNzQ0MjZd
IHBjaSAwMDAwOjAzOjAwLjA6IHJlc3RvcmluZyBjb25maWcgc3BhY2UgYXQgb2Zmc2V0IDB4OSAo
d2FzIDB4MTAwMDEsIHdyaXRpbmcgMHgxZmZmMSkKWyAgMjQwLjE4MzU0MV0gcGNpIDAwMDA6MDM6
MDAuMDogcmVzdG9yaW5nIGNvbmZpZyBzcGFjZSBhdCBvZmZzZXQgMHg3ICh3YXMgMHgyMmEwMDEw
MSwgd3JpdGluZyAweDJhMDAxZjEpClsgIDI0MC4xOTMyMTRdIHBjaSAwMDAwOjAzOjAwLjA6IHJl
c3RvcmluZyBjb25maWcgc3BhY2UgYXQgb2Zmc2V0IDB4MyAod2FzIDB4MTAwMDAsIHdyaXRpbmcg
MHgxMDAxMCkKWyAgMjQwLjIwMjQ4NV0gcGNpIDAwMDA6MDQ6MDAuMDogcmVzdG9yaW5nIGNvbmZp
ZyBzcGFjZSBhdCBvZmZzZXQgMHhmICh3YXMgMHg0MDIwMTAwLCB3cml0aW5nIDB4NDAyMDEwYSkK
WyAgMjQwLjIxMjAzMF0gcGNpIDAwMDA6MDQ6MDAuMDogcmVzdG9yaW5nIGNvbmZpZyBzcGFjZSBh
dCBvZmZzZXQgMHg1ICh3YXMgMHgwLCB3cml0aW5nIDB4YjAwMDAwMDApClsgIDI0MC4yMjExMjld
IHBjaSAwMDAwOjA0OjAwLjA6IHJlc3RvcmluZyBjb25maWcgc3BhY2UgYXQgb2Zmc2V0IDB4MyAo
d2FzIDB4MCwgd3JpdGluZyAweDIwMTApClsgIDI0MC4yMjk5MjVdIHNlcmlhbCAwMDAwOjA1OjAx
LjA6IHJlc3RvcmluZyBjb25maWcgc3BhY2UgYXQgb2Zmc2V0IDB4ZiAod2FzIDB4MWZmLCB3cml0
aW5nIDB4MTAzKQpbICAyNDAuMjM5MDM2XSBzZXJpYWwgMDAwMDowNTowMS4wOiByZXN0b3Jpbmcg
Y29uZmlnIHNwYWNlIGF0IG9mZnNldCAweDkgKHdhcyAweDEsIHdyaXRpbmcgMHgyMDAxKQpbICAy
NDAuMjQ4MDYyXSBzZXJpYWwgMDAwMDowNTowMS4wOiByZXN0b3JpbmcgY29uZmlnIHNwYWNlIGF0
IG9mZnNldCAweDggKHdhcyAweDEsIHdyaXRpbmcgMHgyMDExKQpbICAyNDAuMjU3MTAwXSBzZXJp
YWwgMDAwMDowNTowMS4wOiByZXN0b3JpbmcgY29uZmlnIHNwYWNlIGF0IG9mZnNldCAweDcgKHdh
cyAweDEsIHdyaXRpbmcgMHgyMDIxKQpbICAyNDAuMjY2MTQwXSBzZXJpYWwgMDAwMDowNTowMS4w
OiByZXN0b3JpbmcgY29uZmlnIHNwYWNlIGF0IG9mZnNldCAweDYgKHdhcyAweDEsIHdyaXRpbmcg
MHgyMDMxKQpbICAyNDAuMjc1MTc5XSBzZXJpYWwgMDAwMDowNTowMS4wOiByZXN0b3JpbmcgY29u
ZmlnIHNwYWNlIGF0IG9mZnNldCAweDUgKHdhcyAweDEsIHdyaXRpbmcgMHgyMDQxKQpbICAyNDAu
Mjg0MjE3XSBzZXJpYWwgMDAwMDowNTowMS4wOiByZXN0b3JpbmcgY29uZmlnIHNwYWNlIGF0IG9m
ZnNldCAweDMgKHdhcyAweDgsIHdyaXRpbmcgMHgyMDEwKQpbICAyNDAuMjkzMzM0XSBQTTogZWFy
bHkgcmVzdW1lIG9mIGRldmljZXMgY29tcGxldGUgYWZ0ZXIgNDY2LjE2NiBtc2VjcwpbICAyNDAu
Mjk5OTM2XSBpOTE1IDAwMDA6MDA6MDIuMDogc2V0dGluZyBsYXRlbmN5IHRpbWVyIHRvIDY0Clsg
IDI0MC4yOTk5NDNdIHhlbjogcmVnaXN0ZXJpbmcgZ3NpIDIyIHRyaWdnZXJpbmcgMCBwb2xhcml0
eSAxClsgIDI0MC4yOTk5NDNdIEFscmVhZHkgc2V0dXAgdGhlIEdTSSA6MjIKWyAgMjQwLjI5OTk0
M10gc25kX2hkYV9pbnRlbCAwMDAwOjAwOjFiLjA6IFBDSSBJTlQgQSAtPiBHU0kgMjIgKGxldmVs
LCBsb3cpIC0+IElSUSAyMgpbICAyNDAuMjk5OTU4XSBwY2kgMDAwMDowMDoxZS4wOiBzZXR0aW5n
IGxhdGVuY3kgdGltZXIgdG8gNjQKWyAgMjQwLjI5OTk2OF0gc25kX2hkYV9pbnRlbCAwMDAwOjAw
OjFiLjA6IHNldHRpbmcgbGF0ZW5jeSB0aW1lciB0byA2NApbICAyNDAuMjk5OTgxXSBhaGNpIDAw
MDA6MDA6MWYuMjogc2V0dGluZyBsYXRlbmN5IHRpbWVyIHRvIDY0ClsgIDI0MC4zMDAwMDFdIHBj
aSAwMDAwOjAzOjAwLjA6IHNldHRpbmcgbGF0ZW5jeSB0aW1lciB0byA2NApbICAyNDAuMzAwMDg5
XSBzZCAwOjA6MDowOiBbc2RhXSBTdGFydGluZyBkaXNrClsgIDI0MC42NDUyMDBdIGF0YTE6IFNB
VEEgbGluayB1cCA2LjAgR2JwcyAoU1N0YXR1cyAxMzMgU0NvbnRyb2wgMzAwKQpbICAyNDAuNzA1
MjAyXSBhdGEzOiBTQVRBIGxpbmsgdXAgMS41IEdicHMgKFNTdGF0dXMgMTEzIFNDb250cm9sIDMw
MCkKWyAgMjQwLjg3MzM4MF0gUE06IHJlc3VtZSBvZiBkcnY6aTkxNSBkZXY6MDAwMDowMDowMi4w
IGNvbXBsZXRlIGFmdGVyIDU3My40NTYgbXNlY3MKWyAgMjQ1LjA5OTY3Ml0gW2RybTpwY2hfaXJx
X2hhbmRsZXJdICpFUlJPUiogUENIIHBvaXNvbiBpbnRlcnJ1cHQKWyAgMjQ1LjY0NTE5NF0gYXRh
MS4wMDogcWMgdGltZW91dCAoY21kIDB4ZWMpClsgIDI0NS42NjUxOTRdIGF0YTEuMDA6IGZhaWxl
ZCB0byBJREVOVElGWSAoSS9PIGVycm9yLCBlcnJfbWFzaz0weDQpClsgIDI0NS42NzEzNzddIGF0
YTEuMDA6IHJldmFsaWRhdGlvbiBmYWlsZWQgKGVycm5vPS01KQpbICAyNDUuNzA1MTk3XSBhdGEz
LjAwOiBxYyB0aW1lb3V0IChjbWQgMHhhMSkKWyAgMjQ1LjcwOTMyM10gYXRhMy4wMDogZmFpbGVk
IHRvIElERU5USUZZIChJL08gZXJyb3IsIGVycl9tYXNrPTB4NCkKWyAgMjQ1LjcxNTY3OF0gYXRh
My4wMDogcmV2YWxpZGF0aW9uIGZhaWxlZCAoZXJybm89LTUpClsgIDI0Ni4wNDUyMDRdIGF0YTE6
IFNBVEEgbGluayB1cCA2LjAgR2JwcyAoU1N0YXR1cyAxMzMgU0NvbnRyb2wgMzAwKQpbICAyNDYu
MDY1MjAwXSBhdGEzOiBTQVRBIGxpbmsgdXAgMS41IEdicHMgKFNTdGF0dXMgMTEzIFNDb250cm9s
IDMwMCkKWyAgMjU2LjA0NTE5NV0gYXRhMS4wMDogcWMgdGltZW91dCAoY21kIDB4ZWMpClsgIDI1
Ni4wNjUxNzBdIGF0YTEuMDA6IGZhaWxlZCB0byBJREVOVElGWSAoSS9PIGVycm9yLCBlcnJfbWFz
az0weDQpClsgIDI1Ni4wNzEzNTldIGF0YTEuMDA6IHJldmFsaWRhdGlvbiBmYWlsZWQgKGVycm5v
PS01KQpbICAyNTYuMDcxMzY0XSBhdGExOiBsaW1pdGluZyBTQVRBIGxpbmsgc3BlZWQgdG8gMy4w
IEdicHMKWyAgMjU2LjA3MTM3M10gYXRhMy4wMDogcWMgdGltZW91dCAoY21kIDB4YTEpClsgIDI1
Ni4wNzEzODVdIGF0YTMuMDA6IGZhaWxlZCB0byBJREVOVElGWSAoSS9PIGVycm9yLCBlcnJfbWFz
az0weDQpClsgIDI1Ni4wNzEzODhdIGF0YTMuMDA6IHJldmFsaWRhdGlvbiBmYWlsZWQgKGVycm5v
PS01KQpbICAyNTYuMDcxMzkxXSBhdGEzOiBsaW1pdGluZyBTQVRBIGxpbmsgc3BlZWQgdG8gMS41
IEdicHMKWyAgMjU2LjQxNTIwMF0gYXRhMzogU0FUQSBsaW5rIHVwIDEuNSBHYnBzIChTU3RhdHVz
IDExMyBTQ29udHJvbCAzMTApClsgIDI1Ni40MzUyMDJdIGF0YTE6IFNBVEEgbGluayB1cCAzLjAg
R2JwcyAoU1N0YXR1cyAxMjMgU0NvbnRyb2wgMzIwKQpbICAyODYuNDE1MTkyXSBhdGEzLjAwOiBx
YyB0aW1lb3V0IChjbWQgMHhhMSkKWyAgMjg2LjQxOTMyNF0gYXRhMy4wMDogZmFpbGVkIHRvIElE
RU5USUZZIChJL08gZXJyb3IsIGVycl9tYXNrPTB4NCkKWyAgMjg2LjQyNTY4M10gYXRhMy4wMDog
cmV2YWxpZGF0aW9uIGZhaWxlZCAoZXJybm89LTUpClsgIDI4Ni40MzA3NzVdIGF0YTMuMDA6IGRp
c2FibGVkClsgIDI4Ni40MzUxOTBdIGF0YTEuMDA6IHFjIHRpbWVvdXQgKGNtZCAweGVjKQpbICAy
ODYuNDQ1MTk3XSBhdGEzOiBoYXJkIHJlc2V0dGluZyBsaW5rClsgIDI4Ni40NTUxOTFdIGF0YTEu
MDA6IGZhaWxlZCB0byBJREVOVElGWSAoSS9PIGVycm9yLCBlcnJfbWFzaz0weDQpClsgIDI4Ni40
NjEzNzhdIGF0YTEuMDA6IHJldmFsaWRhdGlvbiBmYWlsZWQgKGVycm5vPS01KQpbICAyODYuNDY2
NDg1XSBhdGExLjAwOiBkaXNhYmxlZApbICAyODYuNTA1MTk2XSBhdGExOiBoYXJkIHJlc2V0dGlu
ZyBsaW5rClsgIDI4Ni43OTUxOTldIGF0YTM6IFNBVEEgbGluayB1cCAxLjUgR2JwcyAoU1N0YXR1
cyAxMTMgU0NvbnRyb2wgMzEwKQpbICAyODYuODE1MTkzXSBhdGEzOiBFSCBjb21wbGV0ZQpbICAy
ODYuODU1MjAxXSBhdGExOiBTQVRBIGxpbmsgdXAgMy4wIEdicHMgKFNTdGF0dXMgMTIzIFNDb250
cm9sIDMyMCkKWyAgMjg2Ljg5NTE5M10gYXRhMTogRUggY29tcGxldGUKWyAgMjg2Ljg5ODE4OV0g
c2QgMDowOjA6MDogW3NkYV0gU1RBUlRfU1RPUCBGQUlMRUQKWyAgMjg2LjkwMjkwMF0gc2QgMDow
OjA6MDogW3NkYV0gIFJlc3VsdDogaG9zdGJ5dGU9RElEX0JBRF9UQVJHRVQgZHJpdmVyYnl0ZT1E
UklWRVJfT0sKWyAgMjg2LjkxMDg4OF0gcG1fb3AoKTogc2NzaV9idXNfcmVzdW1lX2NvbW1vbisw
eDAvMHg2MCByZXR1cm5zIDI2MjE0NApbICAyODYuOTE3NDA4XSBzZCAwOjA6MDowOiBbc2RhXSBV
bmhhbmRsZWQgZXJyb3IgY29kZQpbICAyODYuOTIyNDA4XSBzZCAwOjA6MDowOiBbc2RhXSAgUmVz
dWx0OiBob3N0Ynl0ZT1ESURfQkFEX1RBUkdFVCBkcml2ZXJieXRlPURSSVZFUl9PSwpbICAyODYu
OTMwMzc0XSBzZCAwOjA6MDowOiBbc2RhXSBDREI6IFdyaXRlKDEwKTogMmEgMDAgMDIgMTkgMGMg
YjAgMDAgMDAgMDggMDAKWyAgMjg2LjkzNzYyM10gZW5kX3JlcXVlc3Q6IEkvTyBlcnJvciwgZGV2
IHNkYSwgc2VjdG9yIDM1MTk2MDgwClsgIDI4Ni45NDM1MzNdIEJ1ZmZlciBJL08gZXJyb3Igb24g
ZGV2aWNlIGRtLTUsIGxvZ2ljYWwgYmxvY2sgODAzOApbICAyODYuOTQ5NzA5XSBsb3N0IHBhZ2Ug
d3JpdGUgZHVlIHRvIEkvTyBlcnJvciBvbiBkbS01ClsgIDI4Ni45NTQ5MDRdIHNkIDA6MDowOjA6
IFtzZGFdIFVuaGFuZGxlZCBlcnJvciBjb2RlClsgIDI4Ni45NTk5MTFdIHNkIDA6MDowOjA6IFtz
ZGFdICBSZXN1bHQ6IGhvc3RieXRlPURJRF9CQURfVEFSR0VUIGRyaXZlcmJ5dGU9RFJJVkVSX09L
ClsgIDI4Ni45Njc4NzZdIHNkIDA6MDowOjA6IFtzZGFdIENEQjogV3JpdGUoMTApOiAyYSAwMCAw
MiAyMCAxOCAzMCAwMCAwMCAxMCAwMApbICAyODYuOTc1MTIxXSBlbmRfcmVxdWVzdDogSS9PIGVy
cm9yLCBkZXYgc2RhLCBzZWN0b3IgMzU2NTc3NzYKWyAgMjg2Ljk4MTAzNV0gc2QgMDowOjA6MDog
W3NkYV0gVW5oYW5kbGVkIGVycm9yIGNvZGUKWyAgMjg2Ljk4NjA0NF0gc2QgMDowOjA6MDogW3Nk
YV0gIFJlc3VsdDogaG9zdGJ5dGU9RElEX0JBRF9UQVJHRVQgZHJpdmVyYnl0ZT1EUklWRVJfT0sK
WyAgMjg2Ljk5NDAwNF0gc2QgMDowOjA6MDogW3NkYV0gQ0RCOiBXcml0ZSgxMCk6IDJhIDAwIDAy
IDI0IDNjIDIwIDAwIDAwIDA4IDAwClsgIDI4Ny4wMDEyNTNdIGVuZF9yZXF1ZXN0OiBJL08gZXJy
b3IsIGRldiBzZGEsIHNlY3RvciAzNTkyOTEyMApbICAyODcuMDA3MTYzXSBCdWZmZXIgSS9PIGVy
cm9yIG9uIGRldmljZSBkbS01LCBsb2dpY2FsIGJsb2NrIDk5NjY4ClsgIDI4Ny4wMTM0MjVdIGxv
c3QgcGFnZSB3cml0ZSBkdWUgdG8gSS9PIGVycm9yIG9uIGRtLTUKWyAgMjg3LjAxODYyM10gc2Qg
MDowOjA6MDogW3NkYV0gVW5oYW5kbGVkIGVycm9yIGNvZGUKWyAgMjg3LjAxODYyM10gSkJEMjog
RGV0ZWN0ZWQgSU8gZXJyb3JzIHdoaWxlIGZsdXNoaW5nIGZpbGUgZGF0YSBvbiBkbS01LTgKWyAg
Mjg3LjAxODYyM10gQWJvcnRpbmcgam91cm5hbCBvbiBkZXZpY2UgZG0tNS04LgpbICAyODcuMDE4
NjQwXSBzZCAwOjA6MDowOiBbc2RhXSBVbmhhbmRsZWQgZXJyb3IgY29kZQpbICAyODcuMDE4NjQx
XSBzZCAwOjA6MDowOiBbc2RhXSAgUmVzdWx0OiBob3N0Ynl0ZT1ESURfQkFEX1RBUkdFVCBkcml2
ZXJieXRlPURSSVZFUl9PSwpbICAyODcuMDE4NjQ0XSBzZCAwOjA6MDowOiBbc2RhXSBDREI6IFdy
aXRlKDEwKTogMmEgMDAgMDIgMjAgMTEgODAgMDAgMDAgMDggMDAKWyAgMjg3LjAxODY0OF0gZW5k
X3JlcXVlc3Q6IEkvTyBlcnJvciwgZGV2IHNkYSwgc2VjdG9yIDM1NjU2MDY0ClsgIDI4Ny4wMTg2
NTFdIEJ1ZmZlciBJL08gZXJyb3Igb24gZGV2aWNlIGRtLTUsIGxvZ2ljYWwgYmxvY2sgNjU1MzYK
WyAgMjg3LjAxODY1Ml0gbG9zdCBwYWdlIHdyaXRlIGR1ZSB0byBJL08gZXJyb3Igb24gZG0tNQpb
ICAyODcuMDE4NjgwXSBKQkQyOiBJL08gZXJyb3IgZGV0ZWN0ZWQgd2hlbiB1cGRhdGluZyBqb3Vy
bmFsIHN1cGVyYmxvY2sgZm9yIGRtLTUtOC4KWyAgMjg3LjA4MDU1MV0gc2QgMDowOjA6MDogW3Nk
YV0gIFJlc3VsdDogaG9zdGJ5dGU9RElEX0JBRF9UQVJHRVQgZHJpdmVyYnl0ZT1EUklWRVJfT0sK
WyAgMjg3LjA4ODUxM10gc2QgMDowOjA6MDogW3NkYV0gQ0RCOiBXcml0ZSgxMCk6IDJhIDAwIDBl
IGFjIGIyIDk4IDAwIDAwIDA4IDAwClsgIDI4Ny4wOTU3NTldIGVuZF9yZXF1ZXN0OiBJL08gZXJy
b3IsIGRldiBzZGEsIHNlY3RvciAyNDYxOTg5MzYKWyAgMjg3LjEwMTc1OF0gQnVmZmVyIEkvTyBl
cnJvciBvbiBkZXZpY2UgZG0tNiwgbG9naWNhbCBibG9jayAyNjI1MjMyMwpbICAyODcuMTA4Mjky
XSBsb3N0IHBhZ2Ugd3JpdGUgZHVlIHRvIEkvTyBlcnJvciBvbiBkbS02ClsgIDI4Ny4xMTM0ODVd
IHNkIDA6MDowOjA6IFtzZGFdIFVuaGFuZGxlZCBlcnJvciBjb2RlClsgIDI4Ny4xMTM0OTFdIEpC
RDI6IERldGVjdGVkIElPIGVycm9ycyB3aGlsZSBmbHVzaGluZyBmaWxlIGRhdGEgb24gZG0tNi04
ClsgIDI4Ny4xMjUzODZdIHNkIDA6MDowOjA6IFtzZGFdICBSZXN1bHQ6IGhvc3RieXRlPURJRF9C
QURfVEFSR0VUIGRyaXZlcmJ5dGU9RFJJVkVSX09LClsgIDI4Ny4xMzMzNDhdIHNkIDA6MDowOjA6
IFtzZGFdIENEQjogV3JpdGUoMTApOiAyYSAwMCAwZiAyYyAxOSBmOCAwMCAwMCAxMCAwMApbICAy
ODcuMTQwNTk3XSBlbmRfcmVxdWVzdDogSS9PIGVycm9yLCBkZXYgc2RhLCBzZWN0b3IgMjU0NTQ4
NDcyClsgIDI4Ny4xNDY2MDRdIFBNOiByZXN1bWUgb2YgZHJ2OnNkIGRldjowOjA6MDowIGNvbXBs
ZXRlIGFmdGVyIDQ2ODQ2LjUxNiBtc2VjcwpbICAyODcuMTQ2NjA4XSBBYm9ydGluZyBqb3VybmFs
IG9uIGRldmljZSBkbS02LTguClsgIDI4Ny4xNDY2MjRdIHNkIDA6MDowOjA6IFtzZGFdIFVuaGFu
ZGxlZCBlcnJvciBjb2RlClsgIDI4Ny4xNDY2MjVdIHNkIDA6MDowOjA6IFtzZGFdICBSZXN1bHQ6
IGhvc3RieXRlPURJRF9CQURfVEFSR0VUIGRyaXZlcmJ5dGU9RFJJVkVSX09LClsgIDI4Ny4xNDY2
MjddIHNkIDA6MDowOjA6IFtzZGFdIENEQjogV3JpdGUoMTApOiAyYSAwMCAwZiAyYyAxMSA4MCAw
MCAwMCAwOCAwMApbICAyODcuMTQ2NjMyXSBlbmRfcmVxdWVzdDogSS9PIGVycm9yLCBkZXYgc2Rh
LCBzZWN0b3IgMjU0NTQ2MzA0ClsgIDI4Ny4xNDY2MzVdIEJ1ZmZlciBJL08gZXJyb3Igb24gZGV2
aWNlIGRtLTYsIGxvZ2ljYWwgYmxvY2sgMjcyOTU3NDQKWyAgMjg3LjE0NjYzNl0gbG9zdCBwYWdl
IHdyaXRlIGR1ZSB0byBJL08gZXJyb3Igb24gZG0tNgpbICAyODcuMTQ2NjQ4XSBKQkQyOiBJL08g
ZXJyb3IgZGV0ZWN0ZWQgd2hlbiB1cGRhdGluZyBqb3VybmFsIHN1cGVyYmxvY2sgZm9yIGRtLTYt
OC4KWyAgMjg3LjIwNDI0M10gUE06IHJlc3VtZSBvZiBkcnY6c2NzaV9kZXZpY2UgZGV2OjA6MDow
OjAgY29tcGxldGUgYWZ0ZXIgNDY5MDQuMTM5IG1zZWNzClsgIDI4Ny4yMDQyNTFdIFBNOiByZXN1
bWUgb2YgZHJ2OnNjc2lfZGlzayBkZXY6MDowOjA6MCBjb21wbGV0ZSBhZnRlciA0NjMyMy4zMDAg
bXNlY3MKWyAgMjg3LjIwNDI1N10gUE06IERldmljZSAwOjA6MDowIGZhaWxlZCB0byByZXN1bWUg
YXN5bmM6IGVycm9yIDI2MjE0NApbICAyODcuMjI2NzMxXSBQTTogcmVzdW1lIG9mIGRldmljZXMg
Y29tcGxldGUgYWZ0ZXIgNDY5MjYuODM3IG1zZWNzClsgIDI4Ny4yMzMwMjNdIFBNOiByZXN1bWUg
ZGV2aWNlcyB0b29rIDQ2LjkzMCBzZWNvbmRzClsgIDI4Ny4yMzc5NjVdIC0tLS0tLS0tLS0tLVsg
Y3V0IGhlcmUgXS0tLS0tLS0tLS0tLQpbICAyODcuMjQyNzkxXSBXQVJOSU5HOiBhdCAvZGF0YS9o
b21lL2JndXRocm8vZGV2L29yYy1uZXdkZXYuZ2l0L2xpbnV4LTMuMi9rZXJuZWwvcG93ZXIvc3Vz
cGVuZF90ZXN0LmM6NTMgc3VzcGVuZF90ZXN0X2ZpbmlzaCsweDg2LzB4OTAoKQpbICAyODcuMjU1
MzI3XSBIYXJkd2FyZSBuYW1lOiAyMDEyIENsaWVudCBQbGF0Zm9ybQpbICAyODcuMjYwMDYxXSBD
b21wb25lbnQ6IHJlc3VtZSBkZXZpY2VzLCB0aW1lOiA0NjkzMApbICAyODcuMjY1MDY5XSBNb2R1
bGVzIGxpbmtlZCBpbjogaXB0X01BU1FVRVJBREUgaXNjc2lfc2NzdChPKSBzY3N0X3ZkaXNrKE8p
IGNyYzMyYyB4dF90Y3B1ZHAgbGliY3JjMzJjIHh0X3N0YXRlIHh0X211bHRpcG9ydCBzY3N0X2Nk
cm9tKE8pIGlwdGFibGVfZmlsdGVyIGlwdGFibGVfbmF0IG5mX25hdCBuZl9jb25udHJhY2tfaXB2
NCBuZl9jb25udHJhY2sgbmZfZGVmcmFnX2lwdjQgaXBfdGFibGVzIHNjc3QoTykgeF90YWJsZXMg
YnJpZGdlIHN0cCBsbGMgbWljcm9jb2RlIGlzY3NpX3RjcCBsaWJpc2NzaV90Y3AgbGliaXNjc2kg
c2NzaV90cmFuc3BvcnRfaXNjc2kgYXJjNCBzbmRfaGRhX2NvZGVjX3JlYWx0ZWsgc25kX2hkYV9p
bnRlbCBzbmRfaGRhX2NvZGVjIHNuZF9od2RlcCBzbmRfcGNtIHNuZF90aW1lciBzbmQgc2hwY2hw
IHNvdW5kY29yZSBzbmRfcGFnZV9hbGxvYyB6cmFtKEMpIHVzYmhpZCBoaWQgaTkxNSBkcm1fa21z
X2hlbHBlciBkcm0gYWhjaSBsaWJhaGNpIGkyY19hbGdvX2JpdCB2aWRlbyBpbnRlbF9hZ3AgaW50
ZWxfZ3R0IFtsYXN0IHVubG9hZGVkOiB0cG1fYmlvc10KWyAgMjg3LjMxNTgxOF0gUGlkOiAzOTE0
LCBjb21tOiBwbS1zdXNwZW5kIFRhaW50ZWQ6IEcgICAgICAgICBDIE8gMy4yLjIzLW9yYyAjMQpb
ICAyODcuMzIzMTUxXSBDYWxsIFRyYWNlOgpbICAyODcuMzI1NzcyXSAgWzxmZmZmZmZmZjgxMDY0
MmFmPl0gd2Fybl9zbG93cGF0aF9jb21tb24rMHg3Zi8weGMwClsgIDI4Ny4zMzIwMTNdICBbPGZm
ZmZmZmZmODEwNjQzYTY+XSB3YXJuX3Nsb3dwYXRoX2ZtdCsweDQ2LzB4NTAKWyAgMjg3LjMzODAx
M10gIFs8ZmZmZmZmZmY4MTBhNjRkNj5dIHN1c3BlbmRfdGVzdF9maW5pc2grMHg4Ni8weDkwClsg
IDI4Ny4zNDQxODRdICBbPGZmZmZmZmZmODEwYTYwOGU+XSBzdXNwZW5kX2RldmljZXNfYW5kX2Vu
dGVyKzB4MTVlLzB4MzEwClsgIDI4Ny4zNTEwOTldICBbPGZmZmZmZmZmODEwYTYzYTc+XSBlbnRl
cl9zdGF0ZSsweDE2Ny8weDE5MApbICAyODcuMzUxMTAyXSAgWzxmZmZmZmZmZjgxMGE0ZDI3Pl0g
c3RhdGVfc3RvcmUrMHhiNy8weDEzMApbICAyODcuMzUxMTA0XSAgWzxmZmZmZmZmZjgxMmI1NGRm
Pl0ga29ial9hdHRyX3N0b3JlKzB4Zi8weDMwClsgIDI4Ny4zNTExMDRdICBbPGZmZmZmZmZmODEx
ZDM4MmY+XSBzeXNmc193cml0ZV9maWxlKzB4ZWYvMHgxNzAKWyAgMjg3LjM1MTEwNF0gIFs8ZmZm
ZmZmZmY4MTE2NjhkMz5dIHZmc193cml0ZSsweGIzLzB4MTgwClsgIDI4Ny4zNTExMDRdICBbPGZm
ZmZmZmZmODExNjZiZmE+XSBzeXNfd3JpdGUrMHg0YS8weDkwClsgIDI4Ny4zNTExMDVdICBbPGZm
ZmZmZmZmODE1ODIxNDI+XSBzeXN0ZW1fY2FsbF9mYXN0cGF0aCsweDE2LzB4MWIKWyAgMjg3LjM1
MTEwN10gLS0tWyBlbmQgdHJhY2UgMDI4ZDI5MWFkNjNjMTEyMSBdLS0tClsgIDI4Ny4zNTExMjVd
IFBNOiBGaW5pc2hpbmcgd2FrZXVwLgpbICAyODcuMzUxMTI2XSBSZXN0YXJ0aW5nIHRhc2tzIC4u
LiBkb25lLgpbICAyODcuMzUxNTUwXSB2aWRlbyBMTlhWSURFTzowMDogUmVzdG9yaW5nIGJhY2ts
aWdodCBzdGF0ZQpbICAyODcuMzUxNzM4XSBzZCAwOjA6MDowOiBbc2RhXSBVbmhhbmRsZWQgZXJy
b3IgY29kZQpbICAyODcuMzUxNzQwXSBzZCAwOjA6MDowOiBbc2RhXSAgUmVzdWx0OiBob3N0Ynl0
ZT1ESURfQkFEX1RBUkdFVCBkcml2ZXJieXRlPURSSVZFUl9PSwpbICAyODcuMzUxNzQyXSBzZCAw
OjA6MDowOiBbc2RhXSBDREI6IFdyaXRlKDEwKTogMmEgMDAgMDIgMTggMTEgODAgMDAgMDAgMDgg
MDAKWyAgMjg3LjM1MTc0N10gZW5kX3JlcXVlc3Q6IEkvTyBlcnJvciwgZGV2IHNkYSwgc2VjdG9y
IDM1MTMxNzc2ClsgIDI4Ny4zNTE3NTBdIEJ1ZmZlciBJL08gZXJyb3Igb24gZGV2aWNlIGRtLTUs
IGxvZ2ljYWwgYmxvY2sgMApbICAyODcuMzUxNzUxXSBsb3N0IHBhZ2Ugd3JpdGUgZHVlIHRvIEkv
TyBlcnJvciBvbiBkbS01ClsgIDI4Ny4zNTE4NjZdIEVYVDQtZnMgZXJyb3IgKGRldmljZSBkbS01
KTogZXh0NF9qb3VybmFsX3N0YXJ0X3NiOjMyNzogRGV0ZWN0ZWQgYWJvcnRlZCBqb3VybmFsClsg
IDI4Ny4zNTE4NjldIEVYVDQtZnMgKGRtLTUpOiBSZW1vdW50aW5nIGZpbGVzeXN0ZW0gcmVhZC1v
bmx5ClsgIDI4Ny4zNTE4NzFdIEVYVDQtZnMgKGRtLTUpOiBwcmV2aW91cyBJL08gZXJyb3IgdG8g
c3VwZXJibG9jayBkZXRlY3RlZApbICAyODcuMzUxODg2XSBzZCAwOjA6MDowOiBbc2RhXSBVbmhh
bmRsZWQgZXJyb3IgY29kZQpbICAyODcuMzUxODg3XSBzZCAwOjA6MDowOiBbc2RhXSAgUmVzdWx0
OiBob3N0Ynl0ZT1ESURfQkFEX1RBUkdFVCBkcml2ZXJieXRlPURSSVZFUl9PSwpbICAyODcuMzUx
ODg5XSBzZCAwOjA6MDowOiBbc2RhXSBDREI6IFdyaXRlKDEwKTogMmEgMDAgMDIgMTggMTEgODAg
MDAgMDAgMDggMDAKWyAgMjg3LjM1MTg5M10gZW5kX3JlcXVlc3Q6IEkvTyBlcnJvciwgZGV2IHNk
YSwgc2VjdG9yIDM1MTMxNzc2ClsgIDI4Ny4zNTE4OTVdIEJ1ZmZlciBJL08gZXJyb3Igb24gZGV2
aWNlIGRtLTUsIGxvZ2ljYWwgYmxvY2sgMApbICAyODcuMzUxODk3XSBsb3N0IHBhZ2Ugd3JpdGUg
ZHVlIHRvIEkvTyBlcnJvciBvbiBkbS01ClsgIDI4Ny4zNTMzNDRdIHNkIDA6MDowOjA6IFtzZGFd
IFVuaGFuZGxlZCBlcnJvciBjb2RlClsgIDI4Ny4zNTMzNDZdIHNkIDA6MDowOjA6IFtzZGFdICBS
ZXN1bHQ6IGhvc3RieXRlPURJRF9CQURfVEFSR0VUIGRyaXZlcmJ5dGU9RFJJVkVSX09LClsgIDI4
Ny4zNTMzNDhdIHNkIDA6MDowOjA6IFtzZGFdIENEQjogV3JpdGUoMTApOiAyYSAwMCAwMiAyOCAx
MSA4MCAwMCAwMCAwOCAwMApbICAyODcuMzUzMzUzXSBlbmRfcmVxdWVzdDogSS9PIGVycm9yLCBk
ZXYgc2RhLCBzZWN0b3IgMzYxODAzNTIKWyAgMjg3LjM1MzM1NV0gQnVmZmVyIEkvTyBlcnJvciBv
biBkZXZpY2UgZG0tNiwgbG9naWNhbCBibG9jayAwClsgIDI4Ny4zNTMzNTZdIGxvc3QgcGFnZSB3
cml0ZSBkdWUgdG8gSS9PIGVycm9yIG9uIGRtLTYKWyAgMjg3LjM1MzM2NV0gRVhUNC1mcyBlcnJv
ciAoZGV2aWNlIGRtLTYpOiBleHQ0X2pvdXJuYWxfc3RhcnRfc2I6MzI3OiBEZXRlY3RlZCBhYm9y
dGVkIGpvdXJuYWwKWyAgMjg3LjM1MzM2Nl0gRVhUNC1mcyAoZG0tNik6IFJlbW91bnRpbmcgZmls
ZXN5c3RlbSByZWFkLW9ubHkKWyAgMjg3LjM1MzM2OF0gRVhUNC1mcyAoZG0tNik6IHByZXZpb3Vz
IEkvTyBlcnJvciB0byBzdXBlcmJsb2NrIGRldGVjdGVkClsgIDI4Ny4zNTMzNzhdIHNkIDA6MDow
OjA6IFtzZGFdIFVuaGFuZGxlZCBlcnJvciBjb2RlClsgIDI4Ny4zNTMzNzldIHNkIDA6MDowOjA6
IFtzZGFdICBSZXN1bHQ6IGhvc3RieXRlPURJRF9CQURfVEFSR0VUIGRyaXZlcmJ5dGU9RFJJVkVS
X09LClsgIDI4Ny4zNTMzODFdIHNkIDA6MDowOjA6IFtzZGFdIENEQjogV3JpdGUoMTApOiAyYSAw
MCAwMiAyOCAxMSA4MCAwMCAwMCAwOCAwMApbICAyODcuMzUzMzg1XSBlbmRfcmVxdWVzdDogSS9P
IGVycm9yLCBkZXYgc2RhLCBzZWN0b3IgMzYxODAzNTIKWyAgMjg3LjM1MzM4Nl0gQnVmZmVyIEkv
TyBlcnJvciBvbiBkZXZpY2UgZG0tNiwgbG9naWNhbCBibG9jayAwClsgIDI4Ny4zNTMzODddIGxv
c3QgcGFnZSB3cml0ZSBkdWUgdG8gSS9PIGVycm9yIG9uIGRtLTYKWyAgMjg3LjM3MzIwMV0gc2Qg
MDowOjA6MDogW3NkYV0gVW5oYW5kbGVkIGVycm9yIGNvZGUKWyAgMjg3LjM3MzIwMV0gc2QgMDow
OjA6MDogW3NkYV0gIFJlc3VsdDogaG9zdGJ5dGU9RElEX0JBRF9UQVJHRVQgZHJpdmVyYnl0ZT1E
UklWRVJfT0sKWyAgMjg3LjM3MzIwMV0gc2QgMDowOjA6MDogW3NkYV0gQ0RCOiBSZWFkKDEwKTog
MjggMDAgMDAgMDkgMjQgYTAgMDAgMDAgMDggMDAKWyAgMjg3LjM3MzIwNF0gZW5kX3JlcXVlc3Q6
IEkvTyBlcnJvciwgZGV2IHNkYSwgc2VjdG9yIDU5OTIwMApbICAyODcuMzczNzU5XSBzZCAwOjA6
MDowOiBbc2RhXSBVbmhhbmRsZWQgZXJyb3IgY29kZQpbICAyODcuMzczNzU5XSBzZCAwOjA6MDow
OiBbc2RhXSAgUmVzdWx0OiBob3N0Ynl0ZT1ESURfQkFEX1RBUkdFVCBkcml2ZXJieXRlPURSSVZF
Ul9PSwpbICAyODcuMzczNzU5XSBzZCAwOjA6MDowOiBbc2RhXSBDREI6IFdyaXRlKDEwKTogMmEg
MDAgMDAgMDggMTEgODAgMDAgMDAgMDggMDAKWyAgMjg3LjM3Mzc2M10gZW5kX3JlcXVlc3Q6IEkv
TyBlcnJvciwgZGV2IHNkYSwgc2VjdG9yIDUyODc2OApbICAyODcuMzczNzY2XSBCdWZmZXIgSS9P
IGVycm9yIG9uIGRldmljZSBkbS0yLCBsb2dpY2FsIGJsb2NrIDAKWyAgMjg3LjM3Mzc2N10gbG9z
dCBwYWdlIHdyaXRlIGR1ZSB0byBJL08gZXJyb3Igb24gZG0tMgpbICAyODcuMzczODc5XSBFWFQ0
LWZzIGVycm9yIChkZXZpY2UgZG0tMik6IGV4dDRfZmluZF9lbnRyeTo5MzU6IGlub2RlICMzNzM6
IGNvbW0gY3JvbjogcmVhZGluZyBkaXJlY3RvcnkgbGJsb2NrIDAKWyAgMjg3LjM3Mzg3OV0gQWJv
cnRpbmcgam91cm5hbCBvbiBkZXZpY2UgZG0tMi04LgpbICAyODcuMzczOTg2XSBzZCAwOjA6MDow
OiBbc2RhXSBVbmhhbmRsZWQgZXJyb3IgY29kZQpbICAyODcuMzczOTg2XSBzZCAwOjA6MDowOiBb
c2RhXSAgUmVzdWx0OiBob3N0Ynl0ZT1ESURfQkFEX1RBUkdFVCBkcml2ZXJieXRlPURSSVZFUl9P
SwpbICAyODcuMzczOTg2XSBzZCAwOjA6MDowOiBbc2RhXSBDREI6IFdyaXRlKDEwKTogMmEgMDAg
MDAgOGMgMTEgODAgMDAgMDAgMDggMDAKWyAgMjg3LjM3Mzk4Nl0gZW5kX3JlcXVlc3Q6IEkvTyBl
cnJvciwgZGV2IHNkYSwgc2VjdG9yIDkxNzk1MjAKWyAgMjg3LjM3NDAxOF0gSkJEMjogSS9PIGVy
cm9yIGRldGVjdGVkIHdoZW4gdXBkYXRpbmcgam91cm5hbCBzdXBlcmJsb2NrIGZvciBkbS0yLTgu
ClsgIDI4Ny4zNzQwMThdIEVYVDQtZnMgKGRtLTIpOiBSZW1vdW50aW5nIGZpbGVzeXN0ZW0gcmVh
ZC1vbmx5ClsgIDI4Ny4zNzQzMjRdIHNkIDA6MDowOjA6IFtzZGFdIFVuaGFuZGxlZCBlcnJvciBj
b2RlClsgIDI4Ny4zNzQzMjRdIHNkIDA6MDowOjA6IFtzZGFdICBSZXN1bHQ6IGhvc3RieXRlPURJ
RF9CQURfVEFSR0VUIGRyaXZlcmJ5dGU9RFJJVkVSX09LClsgIDI4Ny4zNzQzMjRdIHNkIDA6MDow
OjA6IFtzZGFdIENEQjogUmVhZCgxMCk6IDI4IDAwIDAwIDA5IDI0IGEwIDAwIDAwIDA4IDAwClsg
IDI4Ny4zNzQzMjRdIGVuZF9yZXF1ZXN0OiBJL08gZXJyb3IsIGRldiBzZGEsIHNlY3RvciA1OTky
MDAKWyAgMjg3LjM3NDM2MV0gRVhUNC1mcyAoZG0tMik6IHByZXZpb3VzIEkvTyBlcnJvciB0byBz
dXBlcmJsb2NrIGRldGVjdGVkClsgIDI4Ny4zNzQ0MjVdIHNkIDA6MDowOjA6IFtzZGFdIFVuaGFu
ZGxlZCBlcnJvciBjb2RlClsgIDI4Ny4zNzQ0MjVdIHNkIDA6MDowOjA6IFtzZGFdICBSZXN1bHQ6
IGhvc3RieXRlPURJRF9CQURfVEFSR0VUIGRyaXZlcmJ5dGU9RFJJVkVSX09LClsgIDI4Ny4zNzQ0
MjVdIHNkIDA6MDowOjA6IFtzZGFdIENEQjogV3JpdGUoMTApOiAyYSAwMCAwMCAwOCAxMSA4MCAw
MCAwMCAwOCAwMApbICAyODcuMzc0NDI1XSBlbmRfcmVxdWVzdDogSS9PIGVycm9yLCBkZXYgc2Rh
LCBzZWN0b3IgNTI4NzY4ClsgIDI4Ny4zNzQ0NTddIEVYVDQtZnMgZXJyb3IgKGRldmljZSBkbS0y
KTogZXh0NF9maW5kX2VudHJ5OjkzNTogaW5vZGUgIzM3MzogY29tbSBjcm9uOiByZWFkaW5nIGRp
cmVjdG9yeSBsYmxvY2sgMApbICAyODguMjMzNTg2XSBpbml0OiBhbmFjcm9uIG1haW4gcHJvY2Vz
cyAoNDExNikgdGVybWluYXRlZCB3aXRoIHN0YXR1cyAxClsgIDI4OC4yNTU4MDddIGVoY2lfaGNk
OiBVU0IgMi4wICdFbmhhbmNlZCcgSG9zdCBDb250cm9sbGVyIChFSENJKSBEcml2ZXIKWyAgMjg4
LjI2MjQ5Nl0geGVuOiByZWdpc3RlcmluZyBnc2kgMTYgdHJpZ2dlcmluZyAwIHBvbGFyaXR5IDEK
WyAgMjg4LjI2ODI5N10gQWxyZWFkeSBzZXR1cCB0aGUgR1NJIDoxNgpbICAyODguMjcyMTEyXSBl
aGNpX2hjZCAwMDAwOjAwOjFhLjA6IFBDSSBJTlQgQSAtPiBHU0kgMTYgKGxldmVsLCBsb3cpIC0+
IElSUSAxNgpbICAyODguMjc5Njc1XSBlaGNpX2hjZCAwMDAwOjAwOjFhLjA6IHNldHRpbmcgbGF0
ZW5jeSB0aW1lciB0byA2NApbICAyODguMjg1NjQ3XSBlaGNpX2hjZCAwMDAwOjAwOjFhLjA6IEVI
Q0kgSG9zdCBDb250cm9sbGVyClsgIDI4OC4yOTExNzldIGVoY2lfaGNkIDAwMDA6MDA6MWEuMDog
bmV3IFVTQiBidXMgcmVnaXN0ZXJlZCwgYXNzaWduZWQgYnVzIG51bWJlciAxClsgIDI4OC4yOTg4
NTRdIGVoY2lfaGNkIDAwMDA6MDA6MWEuMDogZGVidWcgcG9ydCAyClsgIDI4OC4zMDc0MDddIGVo
Y2lfaGNkIDAwMDA6MDA6MWEuMDogY2FjaGUgbGluZSBzaXplIG9mIDY0IGlzIG5vdCBzdXBwb3J0
ZWQKWyAgMjg4LjMxNDM1Nl0gZWhjaV9oY2QgMDAwMDowMDoxYS4wOiBpcnEgMTYsIGlvIG1lbSAw
eGIwMjcwMDAwClsgIDI4OC4zMzUxOTFdIGVoY2lfaGNkIDAwMDA6MDA6MWEuMDogVVNCIDIuMCBz
dGFydGVkLCBFSENJIDEuMDAKWyAgMjg4LjM0MTIwOF0gaHViIDEtMDoxLjA6IFVTQiBodWIgZm91
bmQKWyAgMjg4LjM0NTA3MF0gaHViIDEtMDoxLjA6IDMgcG9ydHMgZGV0ZWN0ZWQKWyAgMjg4LjM4
NTIzM10geGVuOiByZWdpc3RlcmluZyBnc2kgMjMgdHJpZ2dlcmluZyAwIHBvbGFyaXR5IDEKWyAg
Mjg4LjM5MDg4N10gQWxyZWFkeSBzZXR1cCB0aGUgR1NJIDoyMwpbICAyODguMzk0NzMyXSBlaGNp
X2hjZCAwMDAwOjAwOjFkLjA6IFBDSSBJTlQgQSAtPiBHU0kgMjMgKGxldmVsLCBsb3cpIC0+IElS
USAyMwpbICAyODguNDAyMjEzXSBlaGNpX2hjZCAwMDAwOjAwOjFkLjA6IHNldHRpbmcgbGF0ZW5j
eSB0aW1lciB0byA2NApbICAyODguNDA4Mjc4XSBlaGNpX2hjZCAwMDAwOjAwOjFkLjA6IEVIQ0kg
SG9zdCBDb250cm9sbGVyClsgIDI4OC40MTM3OTBdIGVoY2lfaGNkIDAwMDA6MDA6MWQuMDogbmV3
IFVTQiBidXMgcmVnaXN0ZXJlZCwgYXNzaWduZWQgYnVzIG51bWJlciAyClsgIDI4OC40MjE0NzBd
IGVoY2lfaGNkIDAwMDA6MDA6MWQuMDogZGVidWcgcG9ydCAyClsgIDI4OC40MzAwNTBdIGVoY2lf
aGNkIDAwMDA6MDA6MWQuMDogY2FjaGUgbGluZSBzaXplIG9mIDY0IGlzIG5vdCBzdXBwb3J0ZWQK
WyAgMjg4LjQzNzAxM10gZWhjaV9oY2QgMDAwMDowMDoxZC4wOiBpcnEgMjMsIGlvIG1lbSAweGIw
MjUwMDAwClsgIDI4OC40NjUxOTJdIGVoY2lfaGNkIDAwMDA6MDA6MWQuMDogVVNCIDIuMCBzdGFy
dGVkLCBFSENJIDEuMDAKWyAgMjg4LjQ3MTE4NV0gaHViIDItMDoxLjA6IFVTQiBodWIgZm91bmQK
WyAgMjg4LjQ3NTAyNV0gaHViIDItMDoxLjA6IDMgcG9ydHMgZGV0ZWN0ZWQKWyAgMjg4LjUzNTIy
OV0geGVuOiByZWdpc3RlcmluZyBnc2kgMTYgdHJpZ2dlcmluZyAwIHBvbGFyaXR5IDEKWyAgMjg4
LjU0MDg4NF0gQWxyZWFkeSBzZXR1cCB0aGUgR1NJIDoxNgpbICAyODguNTQ0NzMwXSB4aGNpX2hj
ZCAwMDAwOjAwOjE0LjA6IFBDSSBJTlQgQSAtPiBHU0kgMTYgKGxldmVsLCBsb3cpIC0+IElSUSAx
NgpbICAyODguNTUyMjY5XSB4aGNpX2hjZCAwMDAwOjAwOjE0LjA6IHNldHRpbmcgbGF0ZW5jeSB0
aW1lciB0byA2NApbICAyODguNTU4MjU2XSB4aGNpX2hjZCAwMDAwOjAwOjE0LjA6IHhIQ0kgSG9z
dCBDb250cm9sbGVyClsgIDI4OC41NjM3OTBdIHhoY2lfaGNkIDAwMDA6MDA6MTQuMDogbmV3IFVT
QiBidXMgcmVnaXN0ZXJlZCwgYXNzaWduZWQgYnVzIG51bWJlciAzClsgIDI4OC41NzE1NjRdIHho
Y2lfaGNkIDAwMDA6MDA6MTQuMDogY2FjaGUgbGluZSBzaXplIG9mIDY0IGlzIG5vdCBzdXBwb3J0
ZWQKWyAgMjg4LjU3ODQ3OV0geGhjaV9oY2QgMDAwMDowMDoxNC4wOiBpcnEgMTYsIGlvIG1lbSAw
eGIwMmIwMDAwClsgIDI4OC41ODQ2NDNdIHhIQ0kgeGhjaV9hZGRfZW5kcG9pbnQgY2FsbGVkIGZv
ciByb290IGh1YgpbICAyODguNTg5ODU2XSB4SENJIHhoY2lfY2hlY2tfYmFuZHdpZHRoIGNhbGxl
ZCBmb3Igcm9vdCBodWIKWyAgMjg4LjU5NTUxM10gaHViIDMtMDoxLjA6IFVTQiBodWIgZm91bmQK
WyAgMjg4LjU5OTQyNF0gaHViIDMtMDoxLjA6IDQgcG9ydHMgZGV0ZWN0ZWQKWyAgMjg4LjYzNTIx
MV0geGhjaV9oY2QgMDAwMDowMDoxNC4wOiB4SENJIEhvc3QgQ29udHJvbGxlcgpbICAyODguNjQw
NTgzXSB4aGNpX2hjZCAwMDAwOjAwOjE0LjA6IG5ldyBVU0IgYnVzIHJlZ2lzdGVyZWQsIGFzc2ln
bmVkIGJ1cyBudW1iZXIgNApbICAyODguNjQ4MzQzXSB4SENJIHhoY2lfYWRkX2VuZHBvaW50IGNh
bGxlZCBmb3Igcm9vdCBodWIKWyAgMjg4LjY1MzU2OV0geEhDSSB4aGNpX2NoZWNrX2JhbmR3aWR0
aCBjYWxsZWQgZm9yIHJvb3QgaHViClsgIDI4OC42NTkyNTNdIGh1YiA0LTA6MS4wOiBVU0IgaHVi
IGZvdW5kClsgIDI4OC42NjMxNTBdIGh1YiA0LTA6MS4wOiA0IHBvcnRzIGRldGVjdGVkClsgIDI4
OC42NjUxOThdIHVzYiAxLTE6IG5ldyBoaWdoLXNwZWVkIFVTQiBkZXZpY2UgbnVtYmVyIDIgdXNp
bmcgZWhjaV9oY2QKWyAgMjg4LjgwMjM1NV0gY2ZnODAyMTE6IENhbGxpbmcgQ1JEQSB0byB1cGRh
dGUgd29ybGQgcmVndWxhdG9yeSBkb21haW4KWyAgMjg4LjgxNjIxMl0gaHViIDEtMToxLjA6IFVT
QiBodWIgZm91bmQKWyAgMjg4LjgxOTgwMl0gSW50ZWwoUikgV2lyZWxlc3MgV2lGaSBMaW5rIEFH
TiBkcml2ZXIgZm9yIExpbnV4LCBpbi10cmVlOgpbICAyODguODE5ODA1XSBDb3B5cmlnaHQoYykg
MjAwMy0yMDExIEludGVsIENvcnBvcmF0aW9uClsgIDI4OC44MTk4NTddIHhlbjogcmVnaXN0ZXJp
bmcgZ3NpIDE4IHRyaWdnZXJpbmcgMCBwb2xhcml0eSAxClsgIDI4OC44MTk4NjJdIEFscmVhZHkg
c2V0dXAgdGhlIEdTSSA6MTgKWyAgMjg4LjgxOTg2Nl0gaXdsd2lmaSAwMDAwOjAyOjAwLjA6IFBD
SSBJTlQgQSAtPiBHU0kgMTggKGxldmVsLCBsb3cpIC0+IElSUSAxOApbICAyODguODE5ODg3XSBp
d2x3aWZpIDAwMDA6MDI6MDAuMDogc2V0dGluZyBsYXRlbmN5IHRpbWVyIHRvIDY0ClsgIDI4OC44
MTk5NTVdIGl3bHdpZmkgMDAwMDowMjowMC4wOiBwY2lfcmVzb3VyY2VfbGVuID0gMHgwMDAwMjAw
MApbICAyODguODE5OTU3XSBpd2x3aWZpIDAwMDA6MDI6MDAuMDogcGNpX3Jlc291cmNlX2Jhc2Ug
PSBmZmZmYzkwMDE2MzNjMDAwClsgIDI4OC44MTk5NjBdIGl3bHdpZmkgMDAwMDowMjowMC4wOiBI
VyBSZXZpc2lvbiBJRCA9IDB4MzQKWyAgMjg4LjgyMDI1NF0gaXdsd2lmaSAwMDAwOjAyOjAwLjA6
IENPTkZJR19JV0xXSUZJX0RFQlVHIGRpc2FibGVkClsgIDI4OC44MjAyNTZdIGl3bHdpZmkgMDAw
MDowMjowMC4wOiBDT05GSUdfSVdMV0lGSV9ERUJVR0ZTIGRpc2FibGVkClsgIDI4OC44MjAyNTdd
IGl3bHdpZmkgMDAwMDowMjowMC4wOiBDT05GSUdfSVdMV0lGSV9ERVZJQ0VfVFJBQ0lORyBlbmFi
bGVkClsgIDI4OC44MjAyNThdIGl3bHdpZmkgMDAwMDowMjowMC4wOiBDT05GSUdfSVdMV0lGSV9E
RVZJQ0VfVEVTVE1PREUgZGlzYWJsZWQKWyAgMjg4LjgyMDI2MF0gaXdsd2lmaSAwMDAwOjAyOjAw
LjA6IENPTkZJR19JV0xXSUZJX1AyUCBkaXNhYmxlZApbICAyODguODIwMjY1XSBpd2x3aWZpIDAw
MDA6MDI6MDAuMDogRGV0ZWN0ZWQgSW50ZWwoUikgQ2VudHJpbm8oUikgQWR2YW5jZWQtTiA2MjA1
IEFHTiwgUkVWPTB4QjAKWyAgMjg4LjgyMDM5NF0gaXdsd2lmaSAwMDAwOjAyOjAwLjA6IEwxIERp
c2FibGVkOyBFbmFibGluZyBMMFMKWyAgMjg4Ljg0MDE0N10gaXdsd2lmaSAwMDAwOjAyOjAwLjA6
IGRldmljZSBFRVBST00gVkVSPTB4NzE1LCBDQUxJQj0weDYKWyAgMjg4Ljg0MDE0OV0gaXdsd2lm
aSAwMDAwOjAyOjAwLjA6IERldmljZSBTS1U6IDB4MUYwClsgIDI4OC44NDAxNTBdIGl3bHdpZmkg
MDAwMDowMjowMC4wOiBWYWxpZCBUeCBhbnQ6IDB4MywgVmFsaWQgUnggYW50OiAweDMKWyAgMjg4
Ljg0MDE2Ml0gaXdsd2lmaSAwMDAwOjAyOjAwLjA6IFR1bmFibGUgY2hhbm5lbHM6IDEzIDgwMi4x
MWJnLCAyNCA4MDIuMTFhIGNoYW5uZWxzClsgIDI4OC45NDczOTddIGh1YiAxLTE6MS4wOiA2IHBv
cnRzIGRldGVjdGVkClsgIDI4OC45NTM3MDBdIGl3bHdpZmkgMDAwMDowMjowMC4wOiBsb2FkZWQg
ZmlybXdhcmUgdmVyc2lvbiAxNy4xNjguNS4zIGJ1aWxkIDQyMzAxClsgIDI4OC45NTQzMTddIGUx
MDAwZTogSW50ZWwoUikgUFJPLzEwMDAgTmV0d29yayBEcml2ZXIgLSAxLjUuMS1rClsgIDI4OC45
NTQzMTldIGUxMDAwZTogQ29weXJpZ2h0KGMpIDE5OTkgLSAyMDExIEludGVsIENvcnBvcmF0aW9u
LgpbICAyODguOTU0MzQzXSB4ZW46IHJlZ2lzdGVyaW5nIGdzaSAyMCB0cmlnZ2VyaW5nIDAgcG9s
YXJpdHkgMQpbICAyODguOTU0MzQ2XSBBbHJlYWR5IHNldHVwIHRoZSBHU0kgOjIwClsgIDI4OC45
NTQzNDldIGUxMDAwZSAwMDAwOjAwOjE5LjA6IFBDSSBJTlQgQSAtPiBHU0kgMjAgKGxldmVsLCBs
b3cpIC0+IElSUSAyMApbICAyODguOTU0MzYzXSBlMTAwMGUgMDAwMDowMDoxOS4wOiBzZXR0aW5n
IGxhdGVuY3kgdGltZXIgdG8gNjQKWyAgMjg4Ljk5NjQ3NF0gUmVnaXN0ZXJlZCBsZWQgZGV2aWNl
OiBwaHkwLWxlZApbICAyODkuMDAwNzE5XSBjZmc4MDIxMTogSWdub3JpbmcgcmVndWxhdG9yeSBy
ZXF1ZXN0IFNldCBieSBjb3JlIHNpbmNlIHRoZSBkcml2ZXIgdXNlcyBpdHMgb3duIGN1c3RvbSBy
ZWd1bGF0b3J5IGRvbWFpbgpbICAyODkuMDExNzEzXSBpZWVlODAyMTEgcGh5MDogU2VsZWN0ZWQg
cmF0ZSBjb250cm9sIGFsZ29yaXRobSAnaXdsLWFnbi1ycycKWyAgMjg5LjA2NTIwM10gdXNiIDIt
MTogbmV3IGhpZ2gtc3BlZWQgVVNCIGRldmljZSBudW1iZXIgMiB1c2luZyBlaGNpX2hjZApbICAy
ODkuMjE1OTMzXSBodWIgMi0xOjEuMDogVVNCIGh1YiBmb3VuZApbICAyODkuMjE5NzM3XSBodWIg
Mi0xOjEuMDogOCBwb3J0cyBkZXRlY3RlZApbICAyODkuMjIxMDAxXSBlMTAwMGUgMDAwMDowMDox
OS4wOiBldGgwOiAoUENJIEV4cHJlc3M6Mi41R1QvczpXaWR0aCB4MSkgMDA6MTM6MjA6Zjk6ZGU6
MjQKWyAgMjg5LjIyMTAwM10gZTEwMDBlIDAwMDA6MDA6MTkuMDogZXRoMDogSW50ZWwoUikgUFJP
LzEwMDAgTmV0d29yayBDb25uZWN0aW9uClsgIDI4OS4yMjEwODJdIGUxMDAwZSAwMDAwOjAwOjE5
LjA6IGV0aDA6IE1BQzogMTAsIFBIWTogMTEsIFBCQSBObzogRkZGRkZGLTBGRgpbICAyODkuMjM5
NjM4XSBkZXZpY2UgZXRoMCBlbnRlcmVkIHByb21pc2N1b3VzIG1vZGUKWyAgMjg5LjU0NTM4Ml0g
dXNiIDItMS41OiBuZXcgbG93LXNwZWVkIFVTQiBkZXZpY2UgbnVtYmVyIDMgdXNpbmcgZWhjaV9o
Y2QKWyAgMjg5LjY2MTAzOV0gaW5wdXQ6IFVTQiBPcHRpY2FsIE1vdXNlIGFzIC9kZXZpY2VzL3Bj
aTAwMDA6MDAvMDAwMDowMDoxZC4wL3VzYjIvMi0xLzItMS41LzItMS41OjEuMC9pbnB1dC9pbnB1
dDEzClsgIDI4OS42NzE3OTJdIGdlbmVyaWMtdXNiIDAwMDM6MDRCMzozMTBDLjAwMDU6IGlucHV0
LGhpZHJhdzA6IFVTQiBISUQgdjEuMTEgTW91c2UgW1VTQiBPcHRpY2FsIE1vdXNlXSBvbiB1c2It
MDAwMDowMDoxZC4wLTEuNS9pbnB1dDAKWyAgMjg5LjY5NzUzNV0gc2QgMDowOjA6MDogW3NkYV0g
VW5oYW5kbGVkIGVycm9yIGNvZGUKWyAgMjg5LjcwMjM4NV0gc2QgMDowOjA6MDogW3NkYV0gIFJl
c3VsdDogaG9zdGJ5dGU9RElEX0JBRF9UQVJHRVQgZHJpdmVyYnl0ZT1EUklWRVJfT0sKWyAgMjg5
LjcxMDM0NF0gc2QgMDowOjA6MDogW3NkYV0gQ0RCOiBXcml0ZSgxMCk6IDJhIDAwIDAwIDAwIDc1
IDg4IDAwIDAwIDAyIDAwClsgIDI4OS43MTc1OTRdIGVuZF9yZXF1ZXN0OiBJL08gZXJyb3IsIGRl
diBzZGEsIHNlY3RvciAzMDA4OApbICAyODkuNzIzMjQyXSBCdWZmZXIgSS9PIGVycm9yIG9uIGRl
dmljZSBkbS0wLCBsb2dpY2FsIGJsb2NrIDEyODA0ClsgIDI4OS43Mjk1NDFdIEVYVDQtZnMgd2Fy
bmluZyAoZGV2aWNlIGRtLTApOiBleHQ0X2VuZF9iaW86MjUxOiBJL08gZXJyb3Igd3JpdGluZyB0
byBpbm9kZSAyMiAob2Zmc2V0IDAgc2l6ZSAxMDI0IHN0YXJ0aW5nIGJsb2NrIDEyODA0KQpbICAy
ODkuNzUwNzA5XSBzZCAwOjA6MDowOiBbc2RhXSBVbmhhbmRsZWQgZXJyb3IgY29kZQpbICAyODku
NzU1MjMwXSB1c2IgMi0xLjY6IG5ldyBsb3ctc3BlZWQgVVNCIGRldmljZSBudW1iZXIgNCB1c2lu
ZyBlaGNpX2hjZApbICAyODkuNzYyNDUzXSBzZCAwOjA6MDowOiBbc2RhXSAgUmVzdWx0OiBob3N0
Ynl0ZT1ESURfQkFEX1RBUkdFVCBkcml2ZXJieXRlPURSSVZFUl9PSwpbICAyODkuNzcwNDEyXSBz
ZCAwOjA6MDowOiBbc2RhXSBDREI6IFJlYWQoMTApOiAyOCAwMCAwMCAxZiAwNyA4MCAwMCAwMCAy
MCAwMApbICAyODkuNzc3NTY4XSBlbmRfcmVxdWVzdDogSS9PIGVycm9yLCBkZXYgc2RhLCBzZWN0
b3IgMjAzMzUzNgpbICAyODkuNzgzNDAzXSBzZCAwOjA6MDowOiBbc2RhXSBVbmhhbmRsZWQgZXJy
b3IgY29kZQpbICAyODkuNzg4NDA0XSBzZCAwOjA6MDowOiBbc2RhXSAgUmVzdWx0OiBob3N0Ynl0
ZT1ESURfQkFEX1RBUkdFVCBkcml2ZXJieXRlPURSSVZFUl9PSwpbICAyODkuNzk2MzUyXSBzZCAw
OjA6MDowOiBbc2RhXSBDREI6IFJlYWQoMTApOiAyOCAwMCAwMCAxZiAwNyA4MCAwMCAwMCAwOCAw
MApbICAyODkuODAzNTA2XSBlbmRfcmVxdWVzdDogSS9PIGVycm9yLCBkZXYgc2RhLCBzZWN0b3Ig
MjAzMzUzNgpbICAyODkuODA5OTA3XSBpbml0OiBGYWlsZWQgdG8gd3JpdGUgdG8gbG9nIGZpbGUg
L3Zhci9sb2cvdXBzdGFydC94c2VydmVyLmxvZwpbICAyODkuODMxMjI2XSBzZCAwOjA6MDowOiBb
c2RhXSBVbmhhbmRsZWQgZXJyb3IgY29kZQpbICAyODkuODM2MDc5XSBzZCAwOjA6MDowOiBbc2Rh
XSAgUmVzdWx0OiBob3N0Ynl0ZT1ESURfQkFEX1RBUkdFVCBkcml2ZXJieXRlPURSSVZFUl9PSwpb
ICAyODkuODQ0MDM2XSBzZCAwOjA6MDowOiBbc2RhXSBDREI6IFdyaXRlKDEwKTogMmEgMDAgMDAg
MDAgODEgODIgMDAgMDAgMDIgMDAKWyAgMjg5Ljg1MTI4NF0gZW5kX3JlcXVlc3Q6IEkvTyBlcnJv
ciwgZGV2IHNkYSwgc2VjdG9yIDMzMTU0ClsgIDI4OS44NTY5MzBdIEJ1ZmZlciBJL08gZXJyb3Ig
b24gZGV2aWNlIGRtLTAsIGxvZ2ljYWwgYmxvY2sgMTQzMzcKWyAgMjg5Ljg2MzE5NF0gRVhUNC1m
cyB3YXJuaW5nIChkZXZpY2UgZG0tMCk6IGV4dDRfZW5kX2JpbzoyNTE6IEkvTyBlcnJvciB3cml0
aW5nIHRvIGlub2RlIDIyIChvZmZzZXQgMCBzaXplIDEwMjQgc3RhcnRpbmcgYmxvY2sgMTQzMzcp
ClsgIDI4OS44NjMzMDNdIHNkIDA6MDowOjA6IFtzZGFdIFVuaGFuZGxlZCBlcnJvciBjb2RlClsg
IDI4OS44NjMzMDVdIHNkIDA6MDowOjA6IFtzZGFdICBSZXN1bHQ6IGhvc3RieXRlPURJRF9CQURf
VEFSR0VUIGRyaXZlcmJ5dGU9RFJJVkVSX09LClsgIDI4OS44NjMzMDddIHNkIDA6MDowOjA6IFtz
ZGFdIENEQjogV3JpdGUoMTApOiAyYSAwMCAwMCAwMCA4NSA4NCAwMCAwMCAwMiAwMApbICAyODku
ODYzMzEyXSBlbmRfcmVxdWVzdDogSS9PIGVycm9yLCBkZXYgc2RhLCBzZWN0b3IgMzQxODAKWyAg
Mjg5Ljg2MzMxNV0gQnVmZmVyIEkvTyBlcnJvciBvbiBkZXZpY2UgZG0tMCwgbG9naWNhbCBibG9j
ayAxNDg1MApbICAyODkuODYzMzE4XSBFWFQ0LWZzIHdhcm5pbmcgKGRldmljZSBkbS0wKTogZXh0
NF9lbmRfYmlvOjI1MTogSS9PIGVycm9yIHdyaXRpbmcgdG8gaW5vZGUgMjIgKG9mZnNldCAwIHNp
emUgMTAyNCBzdGFydGluZyBibG9jayAxNDg1MCkKWyAgMjg5LjkyMzI2Nl0gc2QgMDowOjA6MDog
W3NkYV0gVW5oYW5kbGVkIGVycm9yIGNvZGUKWyAgMjg5LjkyNTQxOV0gaW5wdXQ6IExJVEUtT04g
VGVjaG5vbG9neSBVU0IgTmV0VmlzdGEgRnVsbCBXaWR0aCBLZXlib2FyZC4gYXMgL2RldmljZXMv
cGNpMDAwMDowMC8wMDAwOjAwOjFkLjAvdXNiMi8yLTEvMi0xLjYvMi0xLjY6MS4wL2lucHV0L2lu
cHV0MTQKWyAgMjg5LjkyNTUxMF0gZ2VuZXJpYy11c2IgMDAwMzowNEIzOjMwMjUuMDAwNjogaW5w
dXQsaGlkcmF3MTogVVNCIEhJRCB2MS4xMCBLZXlib2FyZCBbTElURS1PTiBUZWNobm9sb2d5IFVT
QiBOZXRWaXN0YSBGdWxsIFdpZHRoIEtleWJvYXJkLl0gb24gdXNiLTAwMDA6MDA6MWQuMC0xLjYv
aW5wdXQwClsgIDI4OS45NTcyOTldIHNkIDA6MDowOjA6IFtzZGFdICBSZXN1bHQ6IGhvc3RieXRl
PURJRF9CQURfVEFSR0VUIGRyaXZlcmJ5dGU9RFJJVkVSX09LClsgIDI4OS45NjUyNThdIHNkIDA6
MDowOjA6IFtzZGFdIENEQjogUmVhZCgxMCk6IDI4IDAwIDAwIDFmIDA3IDgwIDAwIDAwIDA4IDAw
ClsgIDI4OS45NzI0MTJdIGVuZF9yZXF1ZXN0OiBJL08gZXJyb3IsIGRldiBzZGEsIHNlY3RvciAy
MDMzNTM2ClsgIDI4OS45ODc3NzFdIHNkIDA6MDowOjA6IFtzZGFdIFVuaGFuZGxlZCBlcnJvciBj
b2RlClsgIDI4OS45OTI2MTldIHNkIDA6MDowOjA6IFtzZGFdICBSZXN1bHQ6IGhvc3RieXRlPURJ
RF9CQURfVEFSR0VUIGRyaXZlcmJ5dGU9RFJJVkVSX09LClsgIDI5MC4wMDA1ODJdIHNkIDA6MDow
OjA6IFtzZGFdIENEQjogUmVhZCgxMCk6IDI4IDAwIDAwIDFmIDA3IDgwIDAwIDAwIDA4IDAwClsg
IDI5MC4wMDc3NDFdIGVuZF9yZXF1ZXN0OiBJL08gZXJyb3IsIGRldiBzZGEsIHNlY3RvciAyMDMz
NTM2ClsgIDI5MC4wMTU1OTldIHNkIDA6MDowOjA6IFtzZGFdIFVuaGFuZGxlZCBlcnJvciBjb2Rl
ClsgIDI5MC4wMjA0NDNdIHNkIDA6MDowOjA6IFtzZGFdICBSZXN1bHQ6IGhvc3RieXRlPURJRF9C
QURfVEFSR0VUIGRyaXZlcmJ5dGU9RFJJVkVSX09LClsgIDI5MC4wMjg0MThdIHNkIDA6MDowOjA6
IFtzZGFdIENEQjogV3JpdGUoMTApOiAyYSAwMCAwMCAwMCA4NSA4NiAwMCAwMCAwMiAwMApbICAy
OTAuMDM1NjYwXSBlbmRfcmVxdWVzdDogSS9PIGVycm9yLCBkZXYgc2RhLCBzZWN0b3IgMzQxODIK
WyAgMjkwLjA0MTMwMF0gQnVmZmVyIEkvTyBlcnJvciBvbiBkZXZpY2UgZG0tMCwgbG9naWNhbCBi
bG9jayAxNDg1MQpbICAyOTAuMDQ3NTc2XSBFWFQ0LWZzIHdhcm5pbmcgKGRldmljZSBkbS0wKTog
ZXh0NF9lbmRfYmlvOjI1MTogSS9PIGVycm9yIHdyaXRpbmcgdG8gaW5vZGUgMjIgKG9mZnNldCAw
IHNpemUgMTAyNCBzdGFydGluZyBibG9jayAxNDg1MSkKWyAgMjkwLjA2ODk1OF0gc2QgMDowOjA6
MDogW3NkYV0gVW5oYW5kbGVkIGVycm9yIGNvZGUKWyAgMjkwLjA3MzgwMl0gc2QgMDowOjA6MDog
W3NkYV0gIFJlc3VsdDogaG9zdGJ5dGU9RElEX0JBRF9UQVJHRVQgZHJpdmVyYnl0ZT1EUklWRVJf
T0sKWyAgMjkwLjA4MTc2N10gc2QgMDowOjA6MDogW3NkYV0gQ0RCOiBSZWFkKDEwKTogMjggMDAg
MDAgMWYgMDcgODAgMDAgMDAgMDggMDAKWyAgMjkwLjA4ODkyNl0gZW5kX3JlcXVlc3Q6IEkvTyBl
cnJvciwgZGV2IHNkYSwgc2VjdG9yIDIwMzM1MzYKWyAgMjkwLjEwOTEzMV0gc2QgMDowOjA6MDog
W3NkYV0gVW5oYW5kbGVkIGVycm9yIGNvZGUKWyAgMjkwLjExMzk3N10gc2QgMDowOjA6MDogW3Nk
YV0gIFJlc3VsdDogaG9zdGJ5dGU9RElEX0JBRF9UQVJHRVQgZHJpdmVyYnl0ZT1EUklWRVJfT0sK
WyAgMjkwLjEyMTk0MV0gc2QgMDowOjA6MDogW3NkYV0gQ0RCOiBXcml0ZSgxMCk6IDJhIDAwIDAw
IDAwIDgxIDg0IDAwIDAwIDAyIDAwClsgIDI5MC4xMjkxOTBdIGVuZF9yZXF1ZXN0OiBJL08gZXJy
b3IsIGRldiBzZGEsIHNlY3RvciAzMzE1NgpbICAyOTAuMTM0ODMwXSBCdWZmZXIgSS9PIGVycm9y
IG9uIGRldmljZSBkbS0wLCBsb2dpY2FsIGJsb2NrIDE0MzM4ClsgIDI5MC4xNDExMDVdIEVYVDQt
ZnMgd2FybmluZyAoZGV2aWNlIGRtLTApOiBleHQ0X2VuZF9iaW86MjUxOiBJL08gZXJyb3Igd3Jp
dGluZyB0byBpbm9kZSAyMiAob2Zmc2V0IDAgc2l6ZSAxMDI0IHN0YXJ0aW5nIGJsb2NrIDE0MzM4
KQpbICAyOTAuMTU2MjU0XSBzZCAwOjA6MDowOiBbc2RhXSBVbmhhbmRsZWQgZXJyb3IgY29kZQpb
ICAyOTAuMTYxMDk4XSBzZCAwOjA6MDowOiBbc2RhXSAgUmVzdWx0OiBob3N0Ynl0ZT1ESURfQkFE
X1RBUkdFVCBkcml2ZXJieXRlPURSSVZFUl9PSwpbICAyOTAuMTY5MDcwXSBzZCAwOjA6MDowOiBb
c2RhXSBDREI6IFJlYWQoMTApOiAyOCAwMCAwMCAxZiAwNyA4MCAwMCAwMCAwOCAwMApbICAyOTAu
MTc2MjIyXSBlbmRfcmVxdWVzdDogSS9PIGVycm9yLCBkZXYgc2RhLCBzZWN0b3IgMjAzMzUzNgpb
ICAyOTIuNTAwOTc1XSBlMTAwMGU6IGV0aDAgTklDIExpbmsgaXMgVXAgMTAwMCBNYnBzIEZ1bGwg
RHVwbGV4LCBGbG93IENvbnRyb2w6IFJ4L1R4ClsgIDI5Mi41MDg3OTddIGJyLWV0aDA6IHBvcnQg
MShldGgwKSBlbnRlcmluZyBmb3J3YXJkaW5nIHN0YXRlClsgIDI5Mi41MTQ1MTNdIGJyLWV0aDA6
IHBvcnQgMShldGgwKSBlbnRlcmluZyBmb3J3YXJkaW5nIHN0YXRlClsgIDI5NC45ODUyMjRdIEpC
RDI6IERldGVjdGVkIElPIGVycm9ycyB3aGlsZSBmbHVzaGluZyBmaWxlIGRhdGEgb24gZG0tMC04
ClsgIDI5NC45OTE5NjZdIHNkIDA6MDowOjA6IFtzZGFdIFVuaGFuZGxlZCBlcnJvciBjb2RlClsg
IDI5NC45OTY5ODJdIHNkIDA6MDowOjA6IFtzZGFdICBSZXN1bHQ6IGhvc3RieXRlPURJRF9CQURf
VEFSR0VUIGRyaXZlcmJ5dGU9RFJJVkVSX09LClsgIDI5NS4wMDQ5MjVdIHNkIDA6MDowOjA6IFtz
ZGFdIENEQjogV3JpdGUoMTApOiAyYSAwMCAwMCAwMSA5MSBiZSAwMCAwMCAwYSAwMApbICAyOTUu
MDEyMTc2XSBlbmRfcmVxdWVzdDogSS9PIGVycm9yLCBkZXYgc2RhLCBzZWN0b3IgMTAyODQ2Clsg
IDI5NS4wMTc5NTNdIEFib3J0aW5nIGpvdXJuYWwgb24gZGV2aWNlIGRtLTAtOC4KWyAgMjk1LjAy
MjU5NF0gc2QgMDowOjA6MDogW3NkYV0gVW5oYW5kbGVkIGVycm9yIGNvZGUKWyAgMjk1LjAyNzU5
NF0gc2QgMDowOjA6MDogW3NkYV0gIFJlc3VsdDogaG9zdGJ5dGU9RElEX0JBRF9UQVJHRVQgZHJp
dmVyYnl0ZT1EUklWRVJfT0sKWyAgMjk1LjAzNTU1Nl0gc2QgMDowOjA6MDogW3NkYV0gQ0RCOiBX
cml0ZSgxMCk6IDJhIDAwIDAwIDAxIDkxIDgyIDAwIDAwIDAyIDAwClsgIDI5NS4wNDI3OTZdIGVu
ZF9yZXF1ZXN0OiBJL08gZXJyb3IsIGRldiBzZGEsIHNlY3RvciAxMDI3ODYKWyAgMjk1LjA0ODUz
Ml0gcXVpZXRfZXJyb3I6IDIgY2FsbGJhY2tzIHN1cHByZXNzZWQKWyAgMjk1LjA1MzI2OF0gQnVm
ZmVyIEkvTyBlcnJvciBvbiBkZXZpY2UgZG0tMCwgbG9naWNhbCBibG9jayA0OTE1MwpbICAyOTUu
MDU5NTM1XSBsb3N0IHBhZ2Ugd3JpdGUgZHVlIHRvIEkvTyBlcnJvciBvbiBkbS0wClsgIDI5NS4w
NjQ3MzBdIEpCRDI6IEkvTyBlcnJvciBkZXRlY3RlZCB3aGVuIHVwZGF0aW5nIGpvdXJuYWwgc3Vw
ZXJibG9jayBmb3IgZG0tMC04LgoK
--14dae9340a33ade82404c6b2a306
Content-Type: text/plain; charset=US-ASCII; name="xen-dump-good.txt"
Content-Disposition: attachment; filename="xen-dump-good.txt"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_h5lfctac2

KFhFTikgKioqIFNlcmlhbCBpbnB1dCAtPiBYZW4gKHR5cGUgJ0NUUkwtYScgdGhyZWUgdGltZXMg
dG8gc3dpdGNoIGlucHV0IHRvIERPTTApCihYRU4pICcqJyBwcmVzc2VkIC0+IGZpcmluZyBhbGwg
ZGlhZ25vc3RpYyBrZXloYW5kbGVycwooWEVOKSBbZDogZHVtcCByZWdpc3RlcnNdCihYRU4pICdk
JyBwcmVzc2VkIC0+IGR1bXBpbmcgcmVnaXN0ZXJzCihYRU4pIAooWEVOKSAqKiogRHVtcGluZyBD
UFUwIGhvc3Qgc3RhdGU6ICoqKgooWEVOKSAtLS0tWyBYZW4tNC4yLjAtcmMyLXByZSAgeDg2XzY0
ICBkZWJ1Zz15ICBUYWludGVkOiAgICBDIF0tLS0tCihYRU4pIENQVTogICAgMAooWEVOKSBSSVA6
ICAgIGUwMDg6WzxmZmZmODJjNDgwMTNkNzdlPl0gbnMxNjU1MF9wb2xsKzB4MjcvMHgzMwooWEVO
KSBSRkxBR1M6IDAwMDAwMDAwMDAwMTAyODYgICBDT05URVhUOiBoeXBlcnZpc29yCihYRU4pIHJh
eDogZmZmZjgyYzQ4MDMwMjVhMCAgIHJieDogZmZmZjgyYzQ4MDMwMjQ4MCAgIHJjeDogMDAwMDAw
MDAwMDAwMDAwMwooWEVOKSByZHg6IDAwMDAwMDAwMDAwMDAwMDAgICByc2k6IGZmZmY4MmM0ODAy
ZTI1YzggICByZGk6IGZmZmY4MmM0ODAyNzE4MDAKKFhFTikgcmJwOiBmZmZmODJjNDgwMmI3ZTMw
ICAgcnNwOiBmZmZmODJjNDgwMmI3ZTMwICAgcjg6ICAwMDAwMDAwMDAwMDAwMDAxCihYRU4pIHI5
OiAgZmZmZjgzMDE0ODk5YWVhOCAgIHIxMDogMDAwMDAwMjg2NDIzMGZkYyAgIHIxMTogMDAwMDAw
MDAwMDAwMDI0NgooWEVOKSByMTI6IGZmZmY4MmM0ODAyNzE4MDAgICByMTM6IGZmZmY4MmM0ODAx
M2Q3NTcgICByMTQ6IDAwMDAwMDI4NjNmNWQwN2EKKFhFTikgcjE1OiBmZmZmODJjNDgwMzAyMzA4
ICAgY3IwOiAwMDAwMDAwMDgwMDUwMDNiICAgY3I0OiAwMDAwMDAwMDAwMTAyNmYwCihYRU4pIGNy
MzogMDAwMDAwMDEzZDk2YzAwMCAgIGNyMjogZmZmZjg4MDAyNTY4ZWI5OAooWEVOKSBkczogMDAw
MCAgIGVzOiAwMDAwICAgZnM6IDAwMDAgICBnczogMDAwMCAgIHNzOiBlMDEwICAgY3M6IGUwMDgK
KFhFTikgWGVuIHN0YWNrIHRyYWNlIGZyb20gcnNwPWZmZmY4MmM0ODAyYjdlMzA6CihYRU4pICAg
IGZmZmY4MmM0ODAyYjdlNjAgZmZmZjgyYzQ4MDEyODE3ZiAwMDAwMDAwMDAwMDAwMDAyIGZmZmY4
MmM0ODAyZTI1YzgKKFhFTikgICAgZmZmZjgyYzQ4MDMwMjQ4MCBmZmZmODMwMTQ4OTkyZDQwIGZm
ZmY4MmM0ODAyYjdlYjAgZmZmZjgyYzQ4MDEyODI4MQooWEVOKSAgICBmZmZmODJjNDgwMmI3ZjE4
IDAwMDAwMDAwMDAwMDAyNDYgMDAwMDAwMjg2NDIzMGZkYyBmZmZmODJjNDgwMmQ4ODgwCihYRU4p
ICAgIGZmZmY4MmM0ODAyZDg4ODAgZmZmZjgyYzQ4MDJiN2YxOCBmZmZmZmZmZmZmZmZmZmZmIGZm
ZmY4MmM0ODAzMDIzMDgKKFhFTikgICAgZmZmZjgyYzQ4MDJiN2VlMCBmZmZmODJjNDgwMTI1NDA1
IGZmZmY4MmM0ODAyYjdmMTggZmZmZjgyYzQ4MDJiN2YxOAooWEVOKSAgICAwMDAwMDAwMGZmZmZm
ZmZmIDAwMDAwMDAwMDAwMDAwMDIgZmZmZjgyYzQ4MDJiN2VmMCBmZmZmODJjNDgwMTI1NDg0CihY
RU4pICAgIGZmZmY4MmM0ODAyYjdmMTAgZmZmZjgyYzQ4MDE1OGMwNSBmZmZmODMwMGFhNTg0MDAw
IGZmZmY4MzAwYWEwZmMwMDAKKFhFTikgICAgZmZmZjgyYzQ4MDJiN2RhOCAwMDAwMDAwMDAwMDAw
MDAwIGZmZmZmZmZmZmZmZmZmZmYgMDAwMDAwMDAwMDAwMDAwMAooWEVOKSAgICBmZmZmZmZmZjgx
YWFmZGEwIGZmZmZmZmZmODFhMDFlZTggZmZmZmZmZmY4MWEwMWZkOCAwMDAwMDAwMDAwMDAwMjQ2
CihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDEgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgZmZmZmZmZmY4MTAwMTNhYSAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwZGVhZGJlZWYgMDAwMDAwMDBkZWFkYmVlZgooWEVOKSAgICAwMDAwMDEw
MDAwMDAwMDAwIGZmZmZmZmZmODEwMDEzYWEgMDAwMDAwMDAwMDAwZTAzMyAwMDAwMDAwMDAwMDAw
MjQ2CihYRU4pICAgIGZmZmZmZmZmODFhMDFlZDAgMDAwMDAwMDAwMDAwZTAyYiAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgZmZmZjgzMDBhYTU4NDAwMAooWEVOKSAgICAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgWGVuIGNhbGwgdHJhY2U6CihYRU4p
ICAgIFs8ZmZmZjgyYzQ4MDEzZDc3ZT5dIG5zMTY1NTBfcG9sbCsweDI3LzB4MzMKKFhFTikgICAg
WzxmZmZmODJjNDgwMTI4MTdmPl0gZXhlY3V0ZV90aW1lcisweDRlLzB4NmMKKFhFTikgICAgWzxm
ZmZmODJjNDgwMTI4MjgxPl0gdGltZXJfc29mdGlycV9hY3Rpb24rMHhlNC8weDIxYQooWEVOKSAg
ICBbPGZmZmY4MmM0ODAxMjU0MDU+XSBfX2RvX3NvZnRpcnErMHg5NS8weGEwCihYRU4pICAgIFs8
ZmZmZjgyYzQ4MDEyNTQ4ND5dIGRvX3NvZnRpcnErMHgyNi8weDI4CihYRU4pICAgIFs8ZmZmZjgy
YzQ4MDE1OGMwNT5dIGlkbGVfbG9vcCsweDZmLzB4NzEKKFhFTikgICAgCihYRU4pICoqKiBEdW1w
aW5nIENQVTEgaG9zdCBzdGF0ZTogKioqCihYRU4pIC0tLS1bIFhlbi00LjIuMC1yYzItcHJlICB4
ODZfNjQgIGRlYnVnPXkgIFRhaW50ZWQ6ICAgIEMgXS0tLS0KKFhFTikgQ1BVOiAgICAxCihYRU4p
IFJJUDogICAgZTAwODpbPGZmZmY4MmM0ODAxNTgzYzQ+XSBkZWZhdWx0X2lkbGUrMHg5OS8weDll
CihYRU4pIFJGTEFHUzogMDAwMDAwMDAwMDAwMDI0NiAgIENPTlRFWFQ6IGh5cGVydmlzb3IKKFhF
TikgcmF4OiBmZmZmODJjNDgwMzAyMzcwICAgcmJ4OiBmZmZmODMwMTNlNjdmZjE4ICAgcmN4OiAw
MDAwMDAwMDAwMDAwMDAxCihYRU4pIHJkeDogMDAwMDAwM2NjY2VkMWQ4MCAgIHJzaTogMDAwMDAw
MDA1YjM0MDU2MCAgIHJkaTogMDAwMDAwMDAwMDAwMDAwMQooWEVOKSByYnA6IGZmZmY4MzAxM2U2
N2ZlZjAgICByc3A6IGZmZmY4MzAxM2U2N2ZlZjAgICByODogIDAwMDAwMDJlNzBiYTMwMjQKKFhF
Tikgcjk6ICBmZmZmODMwMGE4M2ZjMDYwICAgcjEwOiAwMDAwMDAwMGRlYWRiZWVmICAgcjExOiAw
MDAwMDAwMDAwMDAwMjQ2CihYRU4pIHIxMjogZmZmZjgzMDEzZTY3ZmYxOCAgIHIxMzogMDAwMDAw
MDBmZmZmZmZmZiAgIHIxNDogMDAwMDAwMDAwMDAwMDAwMgooWEVOKSByMTU6IGZmZmY4MzAxNGQx
ZDQwODggICBjcjA6IDAwMDAwMDAwODAwNTAwM2IgICBjcjQ6IDAwMDAwMDAwMDAxMDI2ZjAKKFhF
TikgY3IzOiAwMDAwMDAwMTRjZGFiMDAwICAgY3IyOiBmZmZmODgwMDAzMTk4YzgwCihYRU4pIGRz
OiAwMDJiICAgZXM6IDAwMmIgICBmczogMDAwMCAgIGdzOiAwMDAwICAgc3M6IGUwMTAgICBjczog
ZTAwOAooWEVOKSBYZW4gc3RhY2sgdHJhY2UgZnJvbSByc3A9ZmZmZjgzMDEzZTY3ZmVmMDoKKFhF
TikgICAgZmZmZjgzMDEzZTY3ZmYxMCBmZmZmODJjNDgwMTU4YmY4IGZmZmY4MzAwYWEwZmUwMDAg
ZmZmZjgzMDBhODNmYzAwMAooWEVOKSAgICBmZmZmODMwMTNlNjdmZGE4IDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAyCihYRU4pICAgIGZmZmZmZmZmODFh
YWZkYTAgZmZmZjg4MDAyNzg2ZmVlMCBmZmZmODgwMDI3ODZmZmQ4IDAwMDAwMDAwMDAwMDAyNDYK
KFhFTikgICAgMDAwMDAwMDAwMDAwMDAwMSAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMAooWEVOKSAgICBmZmZmZmZmZjgxMDAxM2FhIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDBkZWFkYmVlZiAwMDAwMDAwMGRlYWRiZWVmCihYRU4pICAgIDAwMDAwMTAw
MDAwMDAwMDAgZmZmZmZmZmY4MTAwMTNhYSAwMDAwMDAwMDAwMDBlMDMzIDAwMDAwMDAwMDAwMDAy
NDYKKFhFTikgICAgZmZmZjg4MDAyNzg2ZmVjOCAwMDAwMDAwMDAwMDBlMDJiIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMAooWEVOKSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMSBmZmZmODMwMGFhMGZlMDAwCihYRU4pICAgIDAwMDAw
MDNjY2NlZDFkODAgMDAwMDAwMDAwMDAwMDAwMAooWEVOKSBYZW4gY2FsbCB0cmFjZToKKFhFTikg
ICAgWzxmZmZmODJjNDgwMTU4M2M0Pl0gZGVmYXVsdF9pZGxlKzB4OTkvMHg5ZQooWEVOKSAgICBb
PGZmZmY4MmM0ODAxNThiZjg+XSBpZGxlX2xvb3ArMHg2Mi8weDcxCihYRU4pICAgIAooWEVOKSAq
KiogRHVtcGluZyBDUFUyIGhvc3Qgc3RhdGU6ICoqKgooWEVOKSAtLS0tWyBYZW4tNC4yLjAtcmMy
LXByZSAgeDg2XzY0ICBkZWJ1Zz15ICBUYWludGVkOiAgICBDIF0tLS0tCihYRU4pIENQVTogICAg
MgooWEVOKSBSSVA6ICAgIGUwMDg6WzxmZmZmODJjNDgwMTU4M2M0Pl0gZGVmYXVsdF9pZGxlKzB4
OTkvMHg5ZQooWEVOKSBSRkxBR1M6IDAwMDAwMDAwMDAwMDAyNDYgICBDT05URVhUOiBoeXBlcnZp
c29yCihYRU4pIHJheDogZmZmZjgyYzQ4MDMwMjM3MCAgIHJieDogZmZmZjgzMDE0ODkzZmYxOCAg
IHJjeDogMDAwMDAwMDAwMDAwMDAwMgooWEVOKSByZHg6IDAwMDAwMDNjY2M2YmJkODAgICByc2k6
IDAwMDAwMDAwNWJmOTgyMTAgICByZGk6IDAwMDAwMDAwMDAwMDAwMDIKKFhFTikgcmJwOiBmZmZm
ODMwMTQ4OTNmZWYwICAgcnNwOiBmZmZmODMwMTQ4OTNmZWYwICAgcjg6ICAwMDAwMDAyZTkzYjg4
ODc0CihYRU4pIHI5OiAgZmZmZjgzMDBhYTU4MzA2MCAgIHIxMDogMDAwMDAwMDBkZWFkYmVlZiAg
IHIxMTogMDAwMDAwMDAwMDAwMDI0NgooWEVOKSByMTI6IGZmZmY4MzAxNDg5M2ZmMTggICByMTM6
IDAwMDAwMDAwZmZmZmZmZmYgICByMTQ6IDAwMDAwMDAwMDAwMDAwMDIKKFhFTikgcjE1OiBmZmZm
ODMwMTRjOWJlMDg4ICAgY3IwOiAwMDAwMDAwMDgwMDUwMDNiICAgY3I0OiAwMDAwMDAwMDAwMTAy
NmYwCihYRU4pIGNyMzogMDAwMDAwMDEzZGQ0NjAwMCAgIGNyMjogZmZmZjg4MDAyNWU5NDA1MAoo
WEVOKSBkczogMDAyYiAgIGVzOiAwMDJiICAgZnM6IDAwMDAgICBnczogMDAwMCAgIHNzOiBlMDEw
ICAgY3M6IGUwMDgKKFhFTikgWGVuIHN0YWNrIHRyYWNlIGZyb20gcnNwPWZmZmY4MzAxNDg5M2Zl
ZjA6CihYRU4pICAgIGZmZmY4MzAxNDg5M2ZmMTAgZmZmZjgyYzQ4MDE1OGJmOCBmZmZmODMwMGE4
NWM3MDAwIGZmZmY4MzAwYWE1ODMwMDAKKFhFTikgICAgZmZmZjgzMDE0ODkzZmRhOCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMwooWEVOKSAgICBmZmZm
ZmZmZjgxYWFmZGEwIGZmZmY4ODAwMjc4ODFlZTAgZmZmZjg4MDAyNzg4MWZkOCAwMDAwMDAwMDAw
MDAwMjQ2CihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDEgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgZmZmZmZmZmY4MTAwMTNhYSAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwZGVhZGJlZWYgMDAwMDAwMDBkZWFkYmVlZgooWEVOKSAgICAw
MDAwMDEwMDAwMDAwMDAwIGZmZmZmZmZmODEwMDEzYWEgMDAwMDAwMDAwMDAwZTAzMyAwMDAwMDAw
MDAwMDAwMjQ2CihYRU4pICAgIGZmZmY4ODAwMjc4ODFlYzggMDAwMDAwMDAwMDAwZTAyYiAzZWMy
ZTMyNjgwOTJlMTY3IGI5N2NlYzViOWU2OGRjOTMKKFhFTikgICAgY2M5ODA3OTI0OWZiNzNiNSBl
MWEzNWRlNGZkMTYxYTZmIGUxZTM1ZDY0MDAwMDAwMDIgZmZmZjgzMDBhODVjNzAwMAooWEVOKSAg
ICAwMDAwMDAzY2NjNmJiZDgwIGUyNGQ1YTM4ZjJhZTA1MWUKKFhFTikgWGVuIGNhbGwgdHJhY2U6
CihYRU4pICAgIFs8ZmZmZjgyYzQ4MDE1ODNjND5dIGRlZmF1bHRfaWRsZSsweDk5LzB4OWUKKFhF
TikgICAgWzxmZmZmODJjNDgwMTU4YmY4Pl0gaWRsZV9sb29wKzB4NjIvMHg3MQooWEVOKSAgICAK
KFhFTikgKioqIER1bXBpbmcgQ1BVMyBob3N0IHN0YXRlOiAqKioKKFhFTikgLS0tLVsgWGVuLTQu
Mi4wLXJjMi1wcmUgIHg4Nl82NCAgZGVidWc9eSAgVGFpbnRlZDogICAgQyBdLS0tLQooWEVOKSBD
UFU6ICAgIDMKKFhFTikgUklQOiAgICBlMDA4Ols8ZmZmZjgyYzQ4MDE1ODNjND5dIGRlZmF1bHRf
aWRsZSsweDk5LzB4OWUKKFhFTikgUkZMQUdTOiAwMDAwMDAwMDAwMDAwMjQ2ICAgQ09OVEVYVDog
aHlwZXJ2aXNvcgooWEVOKSByYXg6IGZmZmY4MmM0ODAzMDIzNzAgICByYng6IGZmZmY4MzAxNDg5
MmZmMTggICByY3g6IDAwMDAwMDAwMDAwMDAwMDMKKFhFTikgcmR4OiAwMDAwMDAzY2M4Njg1ZDgw
ICAgcnNpOiAwMDAwMDAwMDVjYmY0NmE4ICAgcmRpOiAwMDAwMDAwMDAwMDAwMDAzCihYRU4pIHJi
cDogZmZmZjgzMDE0ODkyZmVmMCAgIHJzcDogZmZmZjgzMDE0ODkyZmVmMCAgIHI4OiAgMDAwMDAw
MmViNjk0ZjFjOAooWEVOKSByOTogIGZmZmY4MzAwYTgzZmQwNjAgICByMTA6IDAwMDAwMDAwZGVh
ZGJlZWYgICByMTE6IDAwMDAwMDAwMDAwMDAyNDYKKFhFTikgcjEyOiBmZmZmODMwMTQ4OTJmZjE4
ICAgcjEzOiAwMDAwMDAwMGZmZmZmZmZmICAgcjE0OiAwMDAwMDAwMDAwMDAwMDAyCihYRU4pIHIx
NTogZmZmZjgzMDE0ODk4ODA4OCAgIGNyMDogMDAwMDAwMDA4MDA1MDAzYiAgIGNyNDogMDAwMDAw
MDAwMDEwMjZmMAooWEVOKSBjcjM6IDAwMDAwMDAxNGNkYjIwMDAgICBjcjI6IGZmZmY4ODAwMDJi
ZDExNTAKKFhFTikgZHM6IDAwMmIgICBlczogMDAyYiAgIGZzOiAwMDAwICAgZ3M6IDAwMDAgICBz
czogZTAxMCAgIGNzOiBlMDA4CihYRU4pIFhlbiBzdGFjayB0cmFjZSBmcm9tIHJzcD1mZmZmODMw
MTQ4OTJmZWYwOgooWEVOKSAgICBmZmZmODMwMTQ4OTJmZjEwIGZmZmY4MmM0ODAxNThiZjggZmZm
ZjgzMDBhODNmZTAwMCBmZmZmODMwMGE4M2ZkMDAwCihYRU4pICAgIGZmZmY4MzAxNDg5MmZkYTgg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDEKKFhFTikg
ICAgZmZmZmZmZmY4MWFhZmRhMCBmZmZmODgwMDI3ODZkZWUwIGZmZmY4ODAwMjc4NmRmZDggMDAw
MDAwMDAwMDAwMDI0NgooWEVOKSAgICAwMDAwMDAwMDAwMDAwMDAxIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwCihYRU4pICAgIGZmZmZmZmZmODEwMDEz
YWEgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMGRlYWRiZWVmIDAwMDAwMDAwZGVhZGJlZWYKKFhF
TikgICAgMDAwMDAxMDAwMDAwMDAwMCBmZmZmZmZmZjgxMDAxM2FhIDAwMDAwMDAwMDAwMGUwMzMg
MDAwMDAwMDAwMDAwMDI0NgooWEVOKSAgICBmZmZmODgwMDI3ODZkZWM4IDAwMDAwMDAwMDAwMGUw
MmIgM2VjMmUzMjY4MDkyZTE2NyBiOTdjZWM1YjllNjhkYzkzCihYRU4pICAgIGNjOTgwNzkyNDlm
YjczYjUgZTFhMzVkZTRmZDE2MWE2ZiBlMWUzNWQ2NDAwMDAwMDAzIGZmZmY4MzAwYTgzZmUwMDAK
KFhFTikgICAgMDAwMDAwM2NjODY4NWQ4MCBlMjRkNWEzOGYyYWUwNTFlCihYRU4pIFhlbiBjYWxs
IHRyYWNlOgooWEVOKSAgICBbPGZmZmY4MmM0ODAxNTgzYzQ+XSBkZWZhdWx0X2lkbGUrMHg5OS8w
eDllCihYRU4pICAgIFs8ZmZmZjgyYzQ4MDE1OGJmOD5dIGlkbGVfbG9vcCsweDYyLzB4NzEKKFhF
TikgICAgCihYRU4pIFswOiBkdW1wIERvbTAgcmVnaXN0ZXJzXQooWEVOKSAnMCcgcHJlc3NlZCAt
PiBkdW1waW5nIERvbTAncyByZWdpc3RlcnMKKFhFTikgKioqIER1bXBpbmcgRG9tMCB2Y3B1IzAg
c3RhdGU6ICoqKgooWEVOKSBSSVA6ICAgIGUwMzM6WzxmZmZmZmZmZjgxMDAxM2FhPl0KKFhFTikg
UkZMQUdTOiAwMDAwMDAwMDAwMDAwMjQ2ICAgRU06IDAgICBDT05URVhUOiBwdiBndWVzdAooWEVO
KSByYXg6IDAwMDAwMDAwMDAwMDAwMDAgICByYng6IGZmZmZmZmZmODFhMDFmZDggICByY3g6IGZm
ZmZmZmZmODEwMDEzYWEKKFhFTikgcmR4OiAwMDAwMDAwMDAwMDAwMDAwICAgcnNpOiAwMDAwMDAw
MGRlYWRiZWVmICAgcmRpOiAwMDAwMDAwMGRlYWRiZWVmCihYRU4pIHJicDogZmZmZmZmZmY4MWEw
MWVlOCAgIHJzcDogZmZmZmZmZmY4MWEwMWVkMCAgIHI4OiAgMDAwMDAwMDAwMDAwMDAwMAooWEVO
KSByOTogIDAwMDAwMDAwMDAwMDAwMDAgICByMTA6IDAwMDAwMDAwMDAwMDAwMDEgICByMTE6IDAw
MDAwMDAwMDAwMDAyNDYKKFhFTikgcjEyOiBmZmZmZmZmZjgxYWFmZGEwICAgcjEzOiAwMDAwMDAw
MDAwMDAwMDAwICAgcjE0OiBmZmZmZmZmZmZmZmZmZmZmCihYRU4pIHIxNTogMDAwMDAwMDAwMDAw
MDAwMCAgIGNyMDogMDAwMDAwMDAwMDAwMDAwOCAgIGNyNDogMDAwMDAwMDAwMDAwMjY2MAooWEVO
KSBjcjM6IDAwMDAwMDAxM2Q5NmMwMDAgICBjcjI6IDAwMDA3ZmQyMWU5NzMwMDAKKFhFTikgZHM6
IDAwMDAgICBlczogMDAwMCAgIGZzOiAwMDAwICAgZ3M6IDAwMDAgICBzczogZTAyYiAgIGNzOiBl
MDMzCihYRU4pIEd1ZXN0IHN0YWNrIHRyYWNlIGZyb20gcnNwPWZmZmZmZmZmODFhMDFlZDA6CihY
RU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDBmZmZmZmZmZiBmZmZmZmZmZjgxMDBhNWMw
IGZmZmZmZmZmODFhMDFmMTgKKFhFTikgICAgZmZmZmZmZmY4MTAxYzY2MyBmZmZmZmZmZjgxYTAx
ZmQ4IGZmZmZmZmZmODFhYWZkYTAgZmZmZjg4MDAyZGVlMWEwMAooWEVOKSAgICBmZmZmZmZmZmZm
ZmZmZmZmIGZmZmZmZmZmODFhMDFmNDggZmZmZmZmZmY4MTAxMzIzNiBmZmZmZmZmZmZmZmZmZmZm
CihYRU4pICAgIDhlNmExZGU5NjBlNzVkYzMgMDAwMDAwMDAwMDAwMDAwMCBmZmZmZmZmZjgxYjE1
MTYwIGZmZmZmZmZmODFhMDFmNTgKKFhFTikgICAgZmZmZmZmZmY4MTU1NGY1ZSBmZmZmZmZmZjgx
YTAxZjk4IGZmZmZmZmZmODFhY2NiZjUgZmZmZmZmZmY4MWIxNTE2MAooWEVOKSAgICBkMGEwZjA3
NTJhZDlhMDA4IDAwMDAwMDAwMDBjZGYwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgZmZmZmZmZmY4MWEwMWZiOCBmZmZmZmZmZjgx
YWNjMzRiIGZmZmZmZmZmN2ZmZmZmZmYKKFhFTikgICAgZmZmZmZmZmY4NGIyNTAwMCBmZmZmZmZm
ZjgxYTAxZmY4IGZmZmZmZmZmODFhY2ZlY2MgMDAwMDAwMDAwMDAwMDAwMAooWEVOKSAgICAwMDAw
MDAwMTAwMDAwMDAwIDAwMTAwODAwMDAwMzA2YTQgMWZjOThiNzVlM2I4MjI4MyAwMDAwMDAwMDAw
MDAwMDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMAooWEVOKSAq
KiogRHVtcGluZyBEb20wIHZjcHUjMSBzdGF0ZTogKioqCihYRU4pIFJJUDogICAgZTAzMzpbPGZm
ZmZmZmZmODEwMDEzYWE+XQooWEVOKSBSRkxBR1M6IDAwMDAwMDAwMDAwMDAyNDYgICBFTTogMCAg
IENPTlRFWFQ6IHB2IGd1ZXN0CihYRU4pIHJheDogMDAwMDAwMDAwMDAwMDAwMCAgIHJieDogZmZm
Zjg4MDAyNzg2ZGZkOCAgIHJjeDogZmZmZmZmZmY4MTAwMTNhYQooWEVOKSByZHg6IDAwMDAwMDAw
MDAwMDAwMDAgICByc2k6IDAwMDAwMDAwZGVhZGJlZWYgICByZGk6IDAwMDAwMDAwZGVhZGJlZWYK
KFhFTikgcmJwOiBmZmZmODgwMDI3ODZkZWUwICAgcnNwOiBmZmZmODgwMDI3ODZkZWM4ICAgcjg6
ICAwMDAwMDAwMDAwMDAwMDAwCihYRU4pIHI5OiAgMDAwMDAwMDAwMDAwMDAwMCAgIHIxMDogMDAw
MDAwMDAwMDAwMDAwMSAgIHIxMTogMDAwMDAwMDAwMDAwMDI0NgooWEVOKSByMTI6IGZmZmZmZmZm
ODFhYWZkYTAgICByMTM6IDAwMDAwMDAwMDAwMDAwMDEgICByMTQ6IDAwMDAwMDAwMDAwMDAwMDAK
KFhFTikgcjE1OiAwMDAwMDAwMDAwMDAwMDAwICAgY3IwOiAwMDAwMDAwMDgwMDUwMDNiICAgY3I0
OiAwMDAwMDAwMDAwMDAyNjYwCihYRU4pIGNyMzogMDAwMDAwMDE0Y2RiMjAwMCAgIGNyMjogMDAw
MDdmZDlhYTQyYTAwMAooWEVOKSBkczogMDAyYiAgIGVzOiAwMDJiICAgZnM6IDAwMDAgICBnczog
MDAwMCAgIHNzOiBlMDJiICAgY3M6IGUwMzMKKFhFTikgR3Vlc3Qgc3RhY2sgdHJhY2UgZnJvbSBy
c3A9ZmZmZjg4MDAyNzg2ZGVjODoKKFhFTikgICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMGZm
ZmZmZmZmIGZmZmZmZmZmODEwMGE1YzAgZmZmZjg4MDAyNzg2ZGYxMAooWEVOKSAgICBmZmZmZmZm
ZjgxMDFjNjYzIGZmZmY4ODAwMjc4NmRmZDggZmZmZmZmZmY4MWFhZmRhMCAwMDAwMDAwMDAwMDAw
MDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgZmZmZjg4MDAyNzg2ZGY0MCBmZmZmZmZmZjgx
MDEzMjM2IGZmZmZmZmZmODEwMGFkZTkKKFhFTikgICAgYTEwMjFlZjczOWZiMjdkMyAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgZmZmZjg4MDAyNzg2ZGY1MAooWEVOKSAgICBmZmZm
ZmZmZjgxNTYzNDM4IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMAooWEVOKSAgICAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAwMDAwMDAwMDAwMDAwMCBm
ZmZmODgwMDI3ODZkZjU4IDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgKioqIER1bXBpbmcgRG9tMCB2
Y3B1IzIgc3RhdGU6ICoqKgooWEVOKSBSSVA6ICAgIGUwMzM6WzxmZmZmZmZmZjgxMDAxM2FhPl0K
KFhFTikgUkZMQUdTOiAwMDAwMDAwMDAwMDAwMjQ2ICAgRU06IDAgICBDT05URVhUOiBwdiBndWVz
dAooWEVOKSByYXg6IDAwMDAwMDAwMDAwMDAwMDAgICByYng6IGZmZmY4ODAwMjc4NmZmZDggICBy
Y3g6IGZmZmZmZmZmODEwMDEzYWEKKFhFTikgcmR4OiAwMDAwMDAwMDAwMDAwMDAwICAgcnNpOiAw
MDAwMDAwMGRlYWRiZWVmICAgcmRpOiAwMDAwMDAwMGRlYWRiZWVmCihYRU4pIHJicDogZmZmZjg4
MDAyNzg2ZmVlMCAgIHJzcDogZmZmZjg4MDAyNzg2ZmVjOCAgIHI4OiAgMDAwMDAwMDAwMDAwMDAw
MAooWEVOKSByOTogIDAwMDAwMDAwMDAwMDAwMDAgICByMTA6IDAwMDAwMDAwMDAwMDAwMDEgICBy
MTE6IDAwMDAwMDAwMDAwMDAyNDYKKFhFTikgcjEyOiBmZmZmZmZmZjgxYWFmZGEwICAgcjEzOiAw
MDAwMDAwMDAwMDAwMDAyICAgcjE0OiAwMDAwMDAwMDAwMDAwMDAwCihYRU4pIHIxNTogMDAwMDAw
MDAwMDAwMDAwMCAgIGNyMDogMDAwMDAwMDA4MDA1MDAzYiAgIGNyNDogMDAwMDAwMDAwMDAwMjY2
MAooWEVOKSBjcjM6IDAwMDAwMDAxNGNkYWIwMDAgICBjcjI6IDAwMDAwMDAwMDFiOTAwNDgKKFhF
TikgZHM6IDAwMmIgICBlczogMDAyYiAgIGZzOiAwMDAwICAgZ3M6IDAwMDAgICBzczogZTAyYiAg
IGNzOiBlMDMzCihYRU4pIEd1ZXN0IHN0YWNrIHRyYWNlIGZyb20gcnNwPWZmZmY4ODAwMjc4NmZl
Yzg6CihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDBmZmZmZmZmZiBmZmZmZmZmZjgx
MDBhNWMwIGZmZmY4ODAwMjc4NmZmMTAKKFhFTikgICAgZmZmZmZmZmY4MTAxYzY2MyBmZmZmODgw
MDI3ODZmZmQ4IGZmZmZmZmZmODFhYWZkYTAgMDAwMDAwMDAwMDAwMDAwMAooWEVOKSAgICAwMDAw
MDAwMDAwMDAwMDAwIGZmZmY4ODAwMjc4NmZmNDAgZmZmZmZmZmY4MTAxMzIzNiBmZmZmZmZmZjgx
MDBhZGU5CihYRU4pICAgIDU3OTAyZTE4ZWUzMjEyZjkgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIGZmZmY4ODAwMjc4NmZmNTAKKFhFTikgICAgZmZmZmZmZmY4MTU2MzQzOCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMAooWEVOKSAgICAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMAooWEVOKSAg
ICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgZmZmZjg4MDAyNzg2ZmY1OCAw
MDAwMDAwMDAwMDAwMDAwCihYRU4pICoqKiBEdW1waW5nIERvbTAgdmNwdSMzIHN0YXRlOiAqKioK
KFhFTikgUklQOiAgICBlMDMzOls8ZmZmZmZmZmY4MTAwMTNhYT5dCihYRU4pIFJGTEFHUzogMDAw
MDAwMDAwMDAwMDI0NiAgIEVNOiAwICAgQ09OVEVYVDogcHYgZ3Vlc3QKKFhFTikgcmF4OiAwMDAw
MDAwMDAwMDAwMDAwICAgcmJ4OiBmZmZmODgwMDI3ODgxZmQ4ICAgcmN4OiBmZmZmZmZmZjgxMDAx
M2FhCihYRU4pIHJkeDogMDAwMDAwMDAwMDAwMDAwMCAgIHJzaTogMDAwMDAwMDBkZWFkYmVlZiAg
IHJkaTogMDAwMDAwMDBkZWFkYmVlZgooWEVOKSByYnA6IGZmZmY4ODAwMjc4ODFlZTAgICByc3A6
IGZmZmY4ODAwMjc4ODFlYzggICByODogIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgcjk6ICAwMDAw
MDAwMDAwMDAwMDAwICAgcjEwOiAwMDAwMDAwMDAwMDAwMDAxICAgcjExOiAwMDAwMDAwMDAwMDAw
MjQ2CihYRU4pIHIxMjogZmZmZmZmZmY4MWFhZmRhMCAgIHIxMzogMDAwMDAwMDAwMDAwMDAwMyAg
IHIxNDogMDAwMDAwMDAwMDAwMDAwMAooWEVOKSByMTU6IDAwMDAwMDAwMDAwMDAwMDAgICBjcjA6
IDAwMDAwMDAwODAwNTAwM2IgICBjcjQ6IDAwMDAwMDAwMDAwMDI2NjAKKFhFTikgY3IzOiAwMDAw
MDAwMTNkZDQ2MDAwICAgY3IyOiAwMDAwN2ZkMjA0MDBhNWEwCihYRU4pIGRzOiAwMDJiICAgZXM6
IDAwMmIgICBmczogMDAwMCAgIGdzOiAwMDAwICAgc3M6IGUwMmIgICBjczogZTAzMwooWEVOKSBH
dWVzdCBzdGFjayB0cmFjZSBmcm9tIHJzcD1mZmZmODgwMDI3ODgxZWM4OgooWEVOKSAgICAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwZmZmZmZmZmYgZmZmZmZmZmY4MTAwYTVjMCBmZmZmODgwMDI3
ODgxZjEwCihYRU4pICAgIGZmZmZmZmZmODEwMWM2NjMgZmZmZjg4MDAyNzg4MWZkOCBmZmZmZmZm
ZjgxYWFmZGEwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAwMDAwMDAwMDAwMDAwMCBmZmZm
ODgwMDI3ODgxZjQwIGZmZmZmZmZmODEwMTMyMzYgZmZmZmZmZmY4MTAwYWRlOQooWEVOKSAgICA2
N2IxMDQ0Y2M0YjdjMzkxIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCBmZmZmODgw
MDI3ODgxZjUwCihYRU4pICAgIGZmZmZmZmZmODE1NjM0MzggMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMAooWEVOKSAg
ICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMAooWEVO
KSAgICAwMDAwMDAwMDAwMDAwMDAwIGZmZmY4ODAwMjc4ODFmNTggMDAwMDAwMDAwMDAwMDAwMAoo
WEVOKSBbSDogZHVtcCBoZWFwIGluZm9dCihYRU4pICdIJyBwcmVzc2VkIC0+IGR1bXBpbmcgaGVh
cCBpbmZvIChub3ctMHgyODpBREMxODFEMikKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MF0gLT4g
MCBwYWdlcwooWEVOKSBoZWFwW25vZGU9MF1bem9uZT0xXSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBb
bm9kZT0wXVt6b25lPTJdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9M10gLT4g
MCBwYWdlcwooWEVOKSBoZWFwW25vZGU9MF1bem9uZT00XSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBb
bm9kZT0wXVt6b25lPTVdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9Nl0gLT4g
MCBwYWdlcwooWEVOKSBoZWFwW25vZGU9MF1bem9uZT03XSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBb
bm9kZT0wXVt6b25lPThdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9OV0gLT4g
MCBwYWdlcwooWEVOKSBoZWFwW25vZGU9MF1bem9uZT0xMF0gLT4gMCBwYWdlcwooWEVOKSBoZWFw
W25vZGU9MF1bem9uZT0xMV0gLT4gMCBwYWdlcwooWEVOKSBoZWFwW25vZGU9MF1bem9uZT0xMl0g
LT4gMCBwYWdlcwooWEVOKSBoZWFwW25vZGU9MF1bem9uZT0xM10gLT4gMCBwYWdlcwooWEVOKSBo
ZWFwW25vZGU9MF1bem9uZT0xNF0gLT4gMTYxMjggcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pv
bmU9MTVdIC0+IDMyNzY4IHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTE2XSAtPiA2NTUz
NiBwYWdlcwooWEVOKSBoZWFwW25vZGU9MF1bem9uZT0xN10gLT4gMTMwNTU5IHBhZ2VzCihYRU4p
IGhlYXBbbm9kZT0wXVt6b25lPTE4XSAtPiAyNjIxNDMgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBd
W3pvbmU9MTldIC0+IDE3Mjc5NyBwYWdlcwooWEVOKSBoZWFwW25vZGU9MF1bem9uZT0yMF0gLT4g
MTM0MjY1IHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTIxXSAtPiAwIHBhZ2VzCihYRU4p
IGhlYXBbbm9kZT0wXVt6b25lPTIyXSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0wXVt6b25l
PTIzXSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTI0XSAtPiAwIHBhZ2VzCihY
RU4pIGhlYXBbbm9kZT0wXVt6b25lPTI1XSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0wXVt6
b25lPTI2XSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTI3XSAtPiAwIHBhZ2Vz
CihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTI4XSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0w
XVt6b25lPTI5XSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTMwXSAtPiAwIHBh
Z2VzCihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTMxXSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9k
ZT0wXVt6b25lPTMyXSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTMzXSAtPiAw
IHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTM0XSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBb
bm9kZT0wXVt6b25lPTM1XSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTM2XSAt
PiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTM3XSAtPiAwIHBhZ2VzCihYRU4pIGhl
YXBbbm9kZT0wXVt6b25lPTM4XSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTM5
XSAtPiAwIHBhZ2VzCihYRU4pIFtJOiBkdW1wIEhWTSBpcnEgaW5mb10KKFhFTikgJ0knIHByZXNz
ZWQgLT4gZHVtcGluZyBIVk0gaXJxIGluZm8KKFhFTikgW006IGR1bXAgTVNJIHN0YXRlXQooWEVO
KSBQQ0ktTVNJIGludGVycnVwdCBpbmZvcm1hdGlvbjoKKFhFTikgIE1TSSAgICAyNiB2ZWM9NzEg
bG93ZXN0ICBlZGdlICAgYXNzZXJ0ICBsb2cgbG93ZXN0IGRlc3Q9MDAwMDAwMDEgbWFzaz0wLzEv
LTEKKFhFTikgIE1TSSAgICAyNyB2ZWM9MDAgIGZpeGVkICBlZGdlIGRlYXNzZXJ0IHBoeXMgbG93
ZXN0IGRlc3Q9MDAwMDAwMDEgbWFzaz0wLzEvLTEKKFhFTikgIE1TSSAgICAyOCB2ZWM9MjkgbG93
ZXN0ICBlZGdlICAgYXNzZXJ0ICBsb2cgbG93ZXN0IGRlc3Q9MDAwMDAwMDEgbWFzaz0wLzEvLTEK
KFhFTikgIE1TSSAgICAyOSB2ZWM9NzkgbG93ZXN0ICBlZGdlICAgYXNzZXJ0ICBsb2cgbG93ZXN0
IGRlc3Q9MDAwMDAwMDEgbWFzaz0wLzEvLTEKKFhFTikgIE1TSSAgICAzMCB2ZWM9ODEgbG93ZXN0
ICBlZGdlICAgYXNzZXJ0ICBsb2cgbG93ZXN0IGRlc3Q9MDAwMDAwMDEgbWFzaz0wLzEvLTEKKFhF
TikgIE1TSSAgICAzMSB2ZWM9OTkgbG93ZXN0ICBlZGdlICAgYXNzZXJ0ICBsb2cgbG93ZXN0IGRl
c3Q9MDAwMDAwMDEgbWFzaz0wLzEvLTEKKFhFTikgW1E6IGR1bXAgUENJIGRldmljZXNdCihYRU4p
ID09PT0gUENJIGRldmljZXMgPT09PQooWEVOKSA9PT09IHNlZ21lbnQgMDAwMCA9PT09CihYRU4p
IDAwMDA6MDU6MDEuMCAtIGRvbSAwICAgLSBNU0lzIDwgPgooWEVOKSAwMDAwOjA0OjAwLjAgLSBk
b20gMCAgIC0gTVNJcyA8ID4KKFhFTikgMDAwMDowMzowMC4wIC0gZG9tIDAgICAtIE1TSXMgPCA+
CihYRU4pIDAwMDA6MDI6MDAuMCAtIGRvbSAwICAgLSBNU0lzIDwgMzAgPgooWEVOKSAwMDAwOjAw
OjFmLjMgLSBkb20gMCAgIC0gTVNJcyA8ID4KKFhFTikgMDAwMDowMDoxZi4yIC0gZG9tIDAgICAt
IE1TSXMgPCAyNyA+CihYRU4pIDAwMDA6MDA6MWYuMCAtIGRvbSAwICAgLSBNU0lzIDwgPgooWEVO
KSAwMDAwOjAwOjFlLjAgLSBkb20gMCAgIC0gTVNJcyA8ID4KKFhFTikgMDAwMDowMDoxZC4wIC0g
ZG9tIDAgICAtIE1TSXMgPCA+CihYRU4pIDAwMDA6MDA6MWMuNyAtIGRvbSAwICAgLSBNU0lzIDwg
PgooWEVOKSAwMDAwOjAwOjFjLjYgLSBkb20gMCAgIC0gTVNJcyA8ID4KKFhFTikgMDAwMDowMDox
Yy4wIC0gZG9tIDAgICAtIE1TSXMgPCA+CihYRU4pIDAwMDA6MDA6MWIuMCAtIGRvbSAwICAgLSBN
U0lzIDwgMjYgPgooWEVOKSAwMDAwOjAwOjFhLjAgLSBkb20gMCAgIC0gTVNJcyA8ID4KKFhFTikg
MDAwMDowMDoxOS4wIC0gZG9tIDAgICAtIE1TSXMgPCAzMSA+CihYRU4pIDAwMDA6MDA6MTYuMyAt
IGRvbSAwICAgLSBNU0lzIDwgPgooWEVOKSAwMDAwOjAwOjE2LjAgLSBkb20gMCAgIC0gTVNJcyA8
ID4KKFhFTikgMDAwMDowMDoxNC4wIC0gZG9tIDAgICAtIE1TSXMgPCAyOSA+CihYRU4pIDAwMDA6
MDA6MDIuMCAtIGRvbSAwICAgLSBNU0lzIDwgMjggPgooWEVOKSAwMDAwOjAwOjAwLjAgLSBkb20g
MCAgIC0gTVNJcyA8ID4KKFhFTikgW1Y6IGR1bXAgaW9tbXUgaW5mb10KKFhFTikgCihYRU4pIGlv
bW11IDA6IG5yX3B0X2xldmVscyA9IDMuCihYRU4pICAgUXVldWVkIEludmFsaWRhdGlvbjogc3Vw
cG9ydGVkIGFuZCBlbmFibGVkLgooWEVOKSAgIEludGVycnVwdCBSZW1hcHBpbmc6IHN1cHBvcnRl
ZCBhbmQgZW5hYmxlZC4KKFhFTikgICBJbnRlcnJ1cHQgcmVtYXBwaW5nIHRhYmxlIChucl9lbnRy
eT0weDEwMDAwLiBPbmx5IGR1bXAgUD0xIGVudHJpZXMgaGVyZSk6CihYRU4pICAgICAgICBTVlQg
IFNRICAgU0lEICAgICAgRFNUICBWICBBVkwgRExNIFRNIFJIIERNIEZQRCBQCihYRU4pICAgMDAw
MDogIDEgICAwICAwMDEwIDAwMDAwMDAxIDI5ICAgIDAgICAxICAwICAxICAxICAgMCAxCihYRU4p
IAooWEVOKSBpb21tdSAxOiBucl9wdF9sZXZlbHMgPSAzLgooWEVOKSAgIFF1ZXVlZCBJbnZhbGlk
YXRpb246IHN1cHBvcnRlZCBhbmQgZW5hYmxlZC4KKFhFTikgICBJbnRlcnJ1cHQgUmVtYXBwaW5n
OiBzdXBwb3J0ZWQgYW5kIGVuYWJsZWQuCihYRU4pICAgSW50ZXJydXB0IHJlbWFwcGluZyB0YWJs
ZSAobnJfZW50cnk9MHgxMDAwMC4gT25seSBkdW1wIFA9MSBlbnRyaWVzIGhlcmUpOgooWEVOKSAg
ICAgICAgU1ZUICBTUSAgIFNJRCAgICAgIERTVCAgViAgQVZMIERMTSBUTSBSSCBETSBGUEQgUAoo
WEVOKSAgIDAwMDA6ICAxICAgMCAgZjBmOCAwMDAwMDAwMSAzOCAgICAwICAgMSAgMCAgMSAgMSAg
IDAgMQooWEVOKSAgIDAwMDE6ICAxICAgMCAgZjBmOCAwMDAwMDAwMSBmMCAgICAwICAgMSAgMCAg
MSAgMSAgIDAgMQooWEVOKSAgIDAwMDI6ICAxICAgMCAgZjBmOCAwMDAwMDAwMSA0MCAgICAwICAg
MSAgMCAgMSAgMSAgIDAgMQooWEVOKSAgIDAwMDM6ICAxICAgMCAgZjBmOCAwMDAwMDAwMSA0OCAg
ICAwICAgMSAgMCAgMSAgMSAgIDAgMQooWEVOKSAgIDAwMDQ6ICAxICAgMCAgZjBmOCAwMDAwMDAw
MSA1MCAgICAwICAgMSAgMCAgMSAgMSAgIDAgMQooWEVOKSAgIDAwMDU6ICAxICAgMCAgZjBmOCAw
MDAwMDAwMSA1OCAgICAwICAgMSAgMCAgMSAgMSAgIDAgMQooWEVOKSAgIDAwMDY6ICAxICAgMCAg
ZjBmOCAwMDAwMDAwMSA2MCAgICAwICAgMSAgMCAgMSAgMSAgIDAgMQooWEVOKSAgIDAwMDc6ICAx
ICAgMCAgZjBmOCAwMDAwMDAwMSA2OCAgICAwICAgMSAgMCAgMSAgMSAgIDAgMQooWEVOKSAgIDAw
MDg6ICAxICAgMCAgZjBmOCAwMDAwMDAwMSA3MCAgICAwICAgMSAgMSAgMSAgMSAgIDAgMQooWEVO
KSAgIDAwMDk6ICAxICAgMCAgZjBmOCAwMDAwMDAwMSA3OCAgICAwICAgMSAgMCAgMSAgMSAgIDAg
MQooWEVOKSAgIDAwMGE6ICAxICAgMCAgZjBmOCAwMDAwMDAwMSA4OCAgICAwICAgMSAgMCAgMSAg
MSAgIDAgMQooWEVOKSAgIDAwMGI6ICAxICAgMCAgZjBmOCAwMDAwMDAwMSA5MCAgICAwICAgMSAg
MCAgMSAgMSAgIDAgMQooWEVOKSAgIDAwMGM6ICAxICAgMCAgZjBmOCAwMDAwMDAwMSA5OCAgICAw
ICAgMSAgMCAgMSAgMSAgIDAgMQooWEVOKSAgIDAwMGQ6ICAxICAgMCAgZjBmOCAwMDAwMDAwMSBh
MCAgICAwICAgMSAgMCAgMSAgMSAgIDAgMQooWEVOKSAgIDAwMGU6ICAxICAgMCAgZjBmOCAwMDAw
MDAwMSBhOCAgICAwICAgMSAgMCAgMSAgMSAgIDAgMQooWEVOKSAgIDAwMGY6ICAxICAgMCAgZjBm
OCAwMDAwMDAwMSBiMCAgICAwICAgMSAgMSAgMSAgMSAgIDAgMQooWEVOKSAgIDAwMTA6ICAxICAg
MCAgZjBmOCAwMDAwMDAwMSBiOCAgICAwICAgMSAgMSAgMSAgMSAgIDAgMQooWEVOKSAgIDAwMTE6
ICAxICAgMCAgZjBmOCAwMDAwMDAwMSBjMCAgICAwICAgMSAgMSAgMSAgMSAgIDAgMQooWEVOKSAg
IDAwMTI6ICAxICAgMCAgZjBmOCAwMDAwMDAwMSBjOCAgICAwICAgMSAgMSAgMSAgMSAgIDAgMQoo
WEVOKSAgIDAwMTM6ICAxICAgMCAgZjBmOCAwMDAwMDAwMSBkMCAgICAwICAgMSAgMSAgMSAgMSAg
IDAgMQooWEVOKSAgIDAwMTQ6ICAxICAgMCAgMDBkOCAwMDAwMDAwMSA3MSAgICAwICAgMSAgMCAg
MSAgMSAgIDAgMQooWEVOKSAgIDAwMTU6ICAxICAgMCAgMDBmYSAwMDAwMDAwMSAyMSAgICAwICAg
MSAgMCAgMSAgMSAgIDAgMQooWEVOKSAgIDAwMTY6ICAxICAgMCAgZjBmOCAwMDAwMDAwMSAzMSAg
ICAwICAgMSAgMSAgMSAgMSAgIDAgMQooWEVOKSAgIDAwMTc6ICAxICAgMCAgMDBhMCAwMDAwMDAw
MSA3OSAgICAwICAgMSAgMCAgMSAgMSAgIDAgMQooWEVOKSAgIDAwMTg6ICAxICAgMCAgMDIwMCAw
MDAwMDAwMSA4MSAgICAwICAgMSAgMCAgMSAgMSAgIDAgMQooWEVOKSAgIDAwMTk6ICAxICAgMCAg
MDBjOCAwMDAwMDAwMSA5OSAgICAwICAgMSAgMCAgMSAgMSAgIDAgMQooWEVOKSAKKFhFTikgUmVk
aXJlY3Rpb24gdGFibGUgb2YgSU9BUElDIDA6CihYRU4pICAgI2VudHJ5IElEWCBGTVQgTUFTSyBU
UklHIElSUiBQT0wgU1RBVCBERUxJICBWRUNUT1IKKFhFTikgICAgMDE6ICAwMDAwICAgMSAgICAw
ICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAgMzgKKFhFTikgICAgMDI6ICAwMDAxICAgMSAgICAw
ICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAgZjAKKFhFTikgICAgMDM6ICAwMDAyICAgMSAgICAw
ICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAgNDAKKFhFTikgICAgMDQ6ICAwMDAzICAgMSAgICAw
ICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAgNDgKKFhFTikgICAgMDU6ICAwMDA0ICAgMSAgICAw
ICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAgNTAKKFhFTikgICAgMDY6ICAwMDA1ICAgMSAgICAw
ICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAgNTgKKFhFTikgICAgMDc6ICAwMDA2ICAgMSAgICAw
ICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAgNjAKKFhFTikgICAgMDg6ICAwMDA3ICAgMSAgICAw
ICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAgNjgKKFhFTikgICAgMDk6ICAwMDA4ICAgMSAgICAw
ICAgMSAgIDAgICAwICAgIDAgICAgMCAgICAgNzAKKFhFTikgICAgMGE6ICAwMDA5ICAgMSAgICAw
ICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAgNzgKKFhFTikgICAgMGI6ICAwMDBhICAgMSAgICAw
ICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAgODgKKFhFTikgICAgMGM6ICAwMDBiICAgMSAgICAw
ICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAgOTAKKFhFTikgICAgMGQ6ICAwMDBjICAgMSAgICAx
ICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAgOTgKKFhFTikgICAgMGU6ICAwMDBkICAgMSAgICAw
ICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAgYTAKKFhFTikgICAgMGY6ICAwMDBlICAgMSAgICAw
ICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAgYTgKKFhFTikgICAgMTA6ICAwMDBmICAgMSAgICAw
ICAgMSAgIDAgICAxICAgIDAgICAgMCAgICAgYjAKKFhFTikgICAgMTI6ICAwMDEwICAgMSAgICAx
ICAgMSAgIDAgICAxICAgIDAgICAgMCAgICAgYjgKKFhFTikgICAgMTM6ICAwMDExICAgMSAgICAx
ICAgMSAgIDAgICAxICAgIDAgICAgMCAgICAgYzAKKFhFTikgICAgMTQ6ICAwMDE2ICAgMSAgICAx
ICAgMSAgIDAgICAxICAgIDAgICAgMCAgICAgMzEKKFhFTikgICAgMTY6ICAwMDEzICAgMSAgICAx
ICAgMSAgIDAgICAxICAgIDAgICAgMCAgICAgZDAKKFhFTikgICAgMTc6ICAwMDEyICAgMSAgICAw
ICAgMSAgIDAgICAxICAgIDAgICAgMCAgICAgYzgKKFhFTikgW2E6IGR1bXAgdGltZXIgcXVldWVz
XQooWEVOKSBEdW1waW5nIHRpbWVyIHF1ZXVlczoKKFhFTikgQ1BVMDA6CihYRU4pICAgZXg9ICAg
LTE2ODB1cyB0aW1lcj1mZmZmODJjNDgwMmUyNWM4IGNiPWZmZmY4MmM0ODAxM2Q3NTcoZmZmZjgy
YzQ4MDI3MTgwMCkgbnMxNjU1MF9wb2xsKzB4MC8weDMzCihYRU4pICAgZXg9ICAgIDczMTl1cyB0
aW1lcj1mZmZmODMwMTQ4OTlhMWI4IGNiPWZmZmY4MmM0ODAxMTlkNzIoZmZmZjgzMDE0ODk5YTE5
MCkgY3NjaGVkX2FjY3QrMHgwLzB4NDJhCihYRU4pICAgZXg9MTI1MTM2NDc3dXMgdGltZXI9ZmZm
ZjgyYzQ4MDJmZTI4MCBjYj1mZmZmODJjNDgwMTgwN2MyKDAwMDAwMDAwMDAwMDAwMDApIHBsdF9v
dmVyZmxvdysweDAvMHgxMzEKKFhFTikgICBleD0gOTI2MDI5M3VzIHRpbWVyPWZmZmY4MmM0ODAz
MDA1ODAgY2I9ZmZmZjgyYzQ4MDFhODg1MCgwMDAwMDAwMDAwMDAwMDAwKSBtY2Vfd29ya19mbisw
eDAvMHhhOQooWEVOKSAgIGV4PSAgICA3MzE5dXMgdGltZXI9ZmZmZjgzMDE0ODk5YWVhOCBjYj1m
ZmZmODJjNDgwMTFhYWYwKDAwMDAwMDAwMDAwMDAwMDApIGNzY2hlZF90aWNrKzB4MC8weDMxNAoo
WEVOKSBDUFUwMToKKFhFTikgICBleD0gICA2MTM4MHVzIHRpbWVyPWZmZmY4MzAxMzZhNDlhMjgg
Y2I9ZmZmZjgyYzQ4MDExYWFmMCgwMDAwMDAwMDAwMDAwMDAxKSBjc2NoZWRfdGljaysweDAvMHgz
MTQKKFhFTikgICBleD0gIDMwMTY0MHVzIHRpbWVyPWZmZmY4MzAwYTgzZmMwNjAgY2I9ZmZmZjgy
YzQ4MDEyMWM2YihmZmZmODMwMGE4M2ZjMDAwKSB2Y3B1X3NpbmdsZXNob3RfdGltZXJfZm4rMHgw
LzB4YgooWEVOKSBDUFUwMjoKKFhFTikgICBleD0gICA4Mjk0NXVzIHRpbWVyPWZmZmY4MzAxMWM5
ZTlmNDggY2I9ZmZmZjgyYzQ4MDExYWFmMCgwMDAwMDAwMDAwMDAwMDAyKSBjc2NoZWRfdGljaysw
eDAvMHgzMTQKKFhFTikgICBleD0gIDY2NjAwMnVzIHRpbWVyPWZmZmY4MzAwYWE1ODMwNjAgY2I9
ZmZmZjgyYzQ4MDEyMWM2YihmZmZmODMwMGFhNTgzMDAwKSB2Y3B1X3NpbmdsZXNob3RfdGltZXJf
Zm4rMHgwLzB4YgooWEVOKSBDUFUwMzoKKFhFTikgICBleD0gIDEwMzI2MnVzIHRpbWVyPWZmZmY4
MzAxMWM5ZTlhMzggY2I9ZmZmZjgyYzQ4MDExYWFmMCgwMDAwMDAwMDAwMDAwMDAzKSBjc2NoZWRf
dGljaysweDAvMHgzMTQKKFhFTikgICBleD0gIDE3ODkzMHVzIHRpbWVyPWZmZmY4MzAwYTgzZmQw
NjAgY2I9ZmZmZjgyYzQ4MDEyMWM2YihmZmZmODMwMGE4M2ZkMDAwKSB2Y3B1X3NpbmdsZXNob3Rf
dGltZXJfZm4rMHgwLzB4YgooWEVOKSBbYzogZHVtcCBBQ1BJIEN4IHN0cnVjdHVyZXNdCihYRU4p
ICdjJyBwcmVzc2VkIC0+IHByaW50aW5nIEFDUEkgQ3ggc3RydWN0dXJlcwooWEVOKSA9PWNwdTA9
PQooWEVOKSBhY3RpdmUgc3RhdGU6CQlDMjU1CihYRU4pIG1heF9jc3RhdGU6CQlDNwooWEVOKSBz
dGF0ZXM6CihYRU4pICAgICBDMToJdHlwZVtDMV0gbGF0ZW5jeVswMDBdIHVzYWdlWzAwMDAwMDAw
XSBtZXRob2RbIEhBTFRdIGR1cmF0aW9uWzBdCihYRU4pICAgICBDMDoJdXNhZ2VbMDAwMDAwMDBd
IGR1cmF0aW9uWzE3NTQ3NDI2OTk2NF0KKFhFTikgUEMyWzBdIFBDM1swXSBQQzZbMF0gUEM3WzBd
CihYRU4pIENDM1swXSBDQzZbMF0gQ0M3WzBdCihYRU4pID09Y3B1MT09CihYRU4pIGFjdGl2ZSBz
dGF0ZToJCUMyNTUKKFhFTikgbWF4X2NzdGF0ZToJCUM3CihYRU4pIHN0YXRlczoKKFhFTikgICAg
IEMxOgl0eXBlW0MxXSBsYXRlbmN5WzAwMF0gdXNhZ2VbMDAwMDAwMDBdIG1ldGhvZFsgSEFMVF0g
ZHVyYXRpb25bMF0KKFhFTikgICAgIEMwOgl1c2FnZVswMDAwMDAwMF0gZHVyYXRpb25bMTc1NDk5
MDYwNjUwXQooWEVOKSBQQzJbMF0gUEMzWzBdIFBDNlswXSBQQzdbMF0KKFhFTikgQ0MzWzBdIEND
NlswXSBDQzdbMF0KKFhFTikgPT1jcHUyPT0KKFhFTikgYWN0aXZlIHN0YXRlOgkJQzI1NQooWEVO
KSBtYXhfY3N0YXRlOgkJQzcKKFhFTikgc3RhdGVzOgooWEVOKSAgICAgQzE6CXR5cGVbQzFdIGxh
dGVuY3lbMDAwXSB1c2FnZVswMDAwMDAwMF0gbWV0aG9kWyBIQUxUXSBkdXJhdGlvblswXQooWEVO
KSAgICAgQzA6CXVzYWdlWzAwMDAwMDAwXSBkdXJhdGlvblsxNzU1MjM4NDk5MzNdCihYRU4pIFBD
MlswXSBQQzNbMF0gUEM2WzBdIFBDN1swXQooWEVOKSBDQzNbMF0gQ0M2WzBdIENDN1swXQooWEVO
KSA9PWNwdTM9PQooWEVOKSBhY3RpdmUgc3RhdGU6CQlDMjU1CihYRU4pIG1heF9jc3RhdGU6CQlD
NwooWEVOKSBzdGF0ZXM6CihYRU4pICAgICBDMToJdHlwZVtDMV0gbGF0ZW5jeVswMDBdIHVzYWdl
WzAwMDAwMDAwXSBtZXRob2RbIEhBTFRdIGR1cmF0aW9uWzBdCihYRU4pICAgICBDMDoJdXNhZ2Vb
MDAwMDAwMDBdIGR1cmF0aW9uWzE3NTU0ODYzOTk2NF0KKFhFTikgUEMyWzBdIFBDM1swXSBQQzZb
MF0gUEM3WzBdCihYRU4pIENDM1swXSBDQzZbMF0gQ0M3WzBdCihYRU4pIFtlOiBkdW1wIGV2dGNo
biBpbmZvXQooWEVOKSAnZScgcHJlc3NlZCAtPiBkdW1waW5nIGV2ZW50LWNoYW5uZWwgaW5mbwoo
WEVOKSBFdmVudCBjaGFubmVsIGluZm9ybWF0aW9uIGZvciBkb21haW4gMDoKKFhFTikgUG9sbGlu
ZyB2Q1BVczoge30KKFhFTikgICAgIHBvcnQgW3AvbV0KKFhFTikgICAgICAgIDEgWzEvMF06IHM9
NSBuPTAgeD0wIHY9MAooWEVOKSAgICAgICAgMiBbMS8xXTogcz02IG49MCB4PTAKKFhFTikgICAg
ICAgIDMgWzEvMF06IHM9NiBuPTAgeD0wCihYRU4pICAgICAgICA0IFswLzBdOiBzPTYgbj0wIHg9
MAooWEVOKSAgICAgICAgNSBbMC8wXTogcz01IG49MCB4PTAgdj0xCihYRU4pICAgICAgICA2IFsw
LzBdOiBzPTYgbj0wIHg9MAooWEVOKSAgICAgICAgNyBbMC8wXTogcz01IG49MSB4PTAgdj0wCihY
RU4pICAgICAgICA4IFswLzFdOiBzPTYgbj0xIHg9MAooWEVOKSAgICAgICAgOSBbMC8wXTogcz02
IG49MSB4PTAKKFhFTikgICAgICAgMTAgWzAvMF06IHM9NiBuPTEgeD0wCihYRU4pICAgICAgIDEx
IFswLzBdOiBzPTUgbj0xIHg9MCB2PTEKKFhFTikgICAgICAgMTIgWzAvMF06IHM9NiBuPTEgeD0w
CihYRU4pICAgICAgIDEzIFswLzBdOiBzPTUgbj0yIHg9MCB2PTAKKFhFTikgICAgICAgMTQgWzEv
MV06IHM9NiBuPTIgeD0wCihYRU4pICAgICAgIDE1IFswLzBdOiBzPTYgbj0yIHg9MAooWEVOKSAg
ICAgICAxNiBbMC8wXTogcz02IG49MiB4PTAKKFhFTikgICAgICAgMTcgWzAvMF06IHM9NSBuPTIg
eD0wIHY9MQooWEVOKSAgICAgICAxOCBbMC8wXTogcz02IG49MiB4PTAKKFhFTikgICAgICAgMTkg
WzAvMF06IHM9NSBuPTMgeD0wIHY9MAooWEVOKSAgICAgICAyMCBbMS8xXTogcz02IG49MyB4PTAK
KFhFTikgICAgICAgMjEgWzAvMF06IHM9NiBuPTMgeD0wCihYRU4pICAgICAgIDIyIFswLzBdOiBz
PTYgbj0zIHg9MAooWEVOKSAgICAgICAyMyBbMC8wXTogcz01IG49MyB4PTAgdj0xCihYRU4pICAg
ICAgIDI0IFswLzBdOiBzPTYgbj0zIHg9MAooWEVOKSAgICAgICAyNSBbMC8wXTogcz0zIG49MCB4
PTAgZD0wIHA9MzYKKFhFTikgICAgICAgMjYgWzAvMF06IHM9NCBuPTAgeD0wIHA9OSBpPTkKKFhF
TikgICAgICAgMjcgWzAvMV06IHM9NSBuPTAgeD0wIHY9MgooWEVOKSAgICAgICAyOCBbMC8wXTog
cz00IG49MCB4PTAgcD04IGk9OAooWEVOKSAgICAgICAyOSBbMC8wXTogcz00IG49MCB4PTAgcD0y
NzkgaT0yNgooWEVOKSAgICAgICAzMCBbMC8wXTogcz00IG49MCB4PTAgcD0yNzcgaT0yOAooWEVO
KSAgICAgICAzMSBbMC8wXTogcz00IG49MCB4PTAgcD0xNiBpPTE2CihYRU4pICAgICAgIDMyIFsw
LzBdOiBzPTQgbj0wIHg9MCBwPTI3OCBpPTI3CihYRU4pICAgICAgIDMzIFswLzBdOiBzPTQgbj0w
IHg9MCBwPTIzIGk9MjMKKFhFTikgICAgICAgMzQgWzAvMF06IHM9NCBuPTAgeD0wIHA9Mjc2IGk9
MjkKKFhFTikgICAgICAgMzUgWzEvMF06IHM9NCBuPTAgeD0wIHA9Mjc1IGk9MzAKKFhFTikgICAg
ICAgMzYgWzAvMF06IHM9MyBuPTAgeD0wIGQ9MCBwPTI1CihYRU4pICAgICAgIDM3IFswLzBdOiBz
PTUgbj0wIHg9MCB2PTMKKFhFTikgICAgICAgMzggWzEvMF06IHM9NCBuPTAgeD0wIHA9Mjc0IGk9
MzEKKFhFTikgW2c6IHByaW50IGdyYW50IHRhYmxlIHVzYWdlXQooWEVOKSBnbnR0YWJfdXNhZ2Vf
cHJpbnRfYWxsIFsga2V5ICdnJyBwcmVzc2VkCihYRU4pICAgICAgIC0tLS0tLS0tIGFjdGl2ZSAt
LS0tLS0tLSAgICAgICAtLS0tLS0tLSBzaGFyZWQgLS0tLS0tLS0KKFhFTikgW3JlZl0gbG9jYWxk
b20gbWZuICAgICAgcGluICAgICAgICAgIGxvY2FsZG9tIGdtZm4gICAgIGZsYWdzCihYRU4pIGdy
YW50LXRhYmxlIGZvciByZW1vdGUgZG9tYWluOiAgICAwIC4uLiBubyBhY3RpdmUgZ3JhbnQgdGFi
bGUgZW50cmllcwooWEVOKSBnbnR0YWJfdXNhZ2VfcHJpbnRfYWxsIF0gZG9uZQooWEVOKSBbaTog
ZHVtcCBpbnRlcnJ1cHQgYmluZGluZ3NdCihYRU4pIEd1ZXN0IGludGVycnVwdCBpbmZvcm1hdGlv
bjoKKFhFTikgICAgSVJROiAgIDAgYWZmaW5pdHk6MDAwMSB2ZWM6ZjAgdHlwZT1JTy1BUElDLWVk
Z2UgICAgc3RhdHVzPTAwMDAwMDAwIG1hcHBlZCwgdW5ib3VuZAooWEVOKSAgICBJUlE6ICAgMSBh
ZmZpbml0eTowMDAxIHZlYzozOCB0eXBlPUlPLUFQSUMtZWRnZSAgICBzdGF0dXM9MDAwMDAwMDIg
bWFwcGVkLCB1bmJvdW5kCihYRU4pICAgIElSUTogICAyIGFmZmluaXR5OmZmZmYgdmVjOmUyIHR5
cGU9WFQtUElDICAgICAgICAgIHN0YXR1cz0wMDAwMDAwMCBtYXBwZWQsIHVuYm91bmQKKFhFTikg
ICAgSVJROiAgIDMgYWZmaW5pdHk6MDAwMSB2ZWM6NDAgdHlwZT1JTy1BUElDLWVkZ2UgICAgc3Rh
dHVzPTAwMDAwMDAyIG1hcHBlZCwgdW5ib3VuZAooWEVOKSAgICBJUlE6ICAgNCBhZmZpbml0eTow
MDAxIHZlYzo0OCB0eXBlPUlPLUFQSUMtZWRnZSAgICBzdGF0dXM9MDAwMDAwMDIgbWFwcGVkLCB1
bmJvdW5kCihYRU4pICAgIElSUTogICA1IGFmZmluaXR5OjAwMDEgdmVjOjUwIHR5cGU9SU8tQVBJ
Qy1lZGdlICAgIHN0YXR1cz0wMDAwMDAwMiBtYXBwZWQsIHVuYm91bmQKKFhFTikgICAgSVJROiAg
IDYgYWZmaW5pdHk6MDAwMSB2ZWM6NTggdHlwZT1JTy1BUElDLWVkZ2UgICAgc3RhdHVzPTAwMDAw
MDAyIG1hcHBlZCwgdW5ib3VuZAooWEVOKSAgICBJUlE6ICAgNyBhZmZpbml0eTowMDAxIHZlYzo2
MCB0eXBlPUlPLUFQSUMtZWRnZSAgICBzdGF0dXM9MDAwMDAwMDIgbWFwcGVkLCB1bmJvdW5kCihY
RU4pICAgIElSUTogICA4IGFmZmluaXR5OjAwMDEgdmVjOjY4IHR5cGU9SU8tQVBJQy1lZGdlICAg
IHN0YXR1cz0wMDAwMDAxMCBpbi1mbGlnaHQ9MCBkb21haW4tbGlzdD0wOiAgOCgtUy0tKSwKKFhF
TikgICAgSVJROiAgIDkgYWZmaW5pdHk6MDAwMSB2ZWM6NzAgdHlwZT1JTy1BUElDLWxldmVsICAg
c3RhdHVzPTAwMDAwMDEwIGluLWZsaWdodD0wIGRvbWFpbi1saXN0PTA6ICA5KC1TLS0pLAooWEVO
KSAgICBJUlE6ICAxMCBhZmZpbml0eTowMDAxIHZlYzo3OCB0eXBlPUlPLUFQSUMtZWRnZSAgICBz
dGF0dXM9MDAwMDAwMDIgbWFwcGVkLCB1bmJvdW5kCihYRU4pICAgIElSUTogIDExIGFmZmluaXR5
OjAwMDEgdmVjOjg4IHR5cGU9SU8tQVBJQy1lZGdlICAgIHN0YXR1cz0wMDAwMDAwMiBtYXBwZWQs
IHVuYm91bmQKKFhFTikgICAgSVJROiAgMTIgYWZmaW5pdHk6MDAwMSB2ZWM6OTAgdHlwZT1JTy1B
UElDLWVkZ2UgICAgc3RhdHVzPTAwMDAwMDAyIG1hcHBlZCwgdW5ib3VuZAooWEVOKSAgICBJUlE6
ICAxMyBhZmZpbml0eTowMDBmIHZlYzo5OCB0eXBlPUlPLUFQSUMtZWRnZSAgICBzdGF0dXM9MDAw
MDAwMDIgbWFwcGVkLCB1bmJvdW5kCihYRU4pICAgIElSUTogIDE0IGFmZmluaXR5OjAwMDEgdmVj
OmEwIHR5cGU9SU8tQVBJQy1lZGdlICAgIHN0YXR1cz0wMDAwMDAwMiBtYXBwZWQsIHVuYm91bmQK
KFhFTikgICAgSVJROiAgMTUgYWZmaW5pdHk6MDAwMSB2ZWM6YTggdHlwZT1JTy1BUElDLWVkZ2Ug
ICAgc3RhdHVzPTAwMDAwMDAyIG1hcHBlZCwgdW5ib3VuZAooWEVOKSAgICBJUlE6ICAxNiBhZmZp
bml0eTowMDAxIHZlYzpiMCB0eXBlPUlPLUFQSUMtbGV2ZWwgICBzdGF0dXM9MDAwMDAwMTAgaW4t
ZmxpZ2h0PTAgZG9tYWluLWxpc3Q9MDogMTYoLVMtLSksCihYRU4pICAgIElSUTogIDE4IGFmZmlu
aXR5OjAwMGYgdmVjOmI4IHR5cGU9SU8tQVBJQy1sZXZlbCAgIHN0YXR1cz0wMDAwMDAwMiBtYXBw
ZWQsIHVuYm91bmQKKFhFTikgICAgSVJROiAgMTkgYWZmaW5pdHk6MDAwMSB2ZWM6YzAgdHlwZT1J
Ty1BUElDLWxldmVsICAgc3RhdHVzPTAwMDAwMDAyIG1hcHBlZCwgdW5ib3VuZAooWEVOKSAgICBJ
UlE6ICAyMCBhZmZpbml0eTowMDBmIHZlYzozMSB0eXBlPUlPLUFQSUMtbGV2ZWwgICBzdGF0dXM9
MDAwMDAwMDIgbWFwcGVkLCB1bmJvdW5kCihYRU4pICAgIElSUTogIDIyIGFmZmluaXR5OjAwMDEg
dmVjOmQwIHR5cGU9SU8tQVBJQy1sZXZlbCAgIHN0YXR1cz0wMDAwMDAwMiBtYXBwZWQsIHVuYm91
bmQKKFhFTikgICAgSVJROiAgMjMgYWZmaW5pdHk6MDAwMSB2ZWM6YzggdHlwZT1JTy1BUElDLWxl
dmVsICAgc3RhdHVzPTAwMDAwMDEwIGluLWZsaWdodD0wIGRvbWFpbi1saXN0PTA6IDIzKC1TLS0p
LAooWEVOKSAgICBJUlE6ICAyNCBhZmZpbml0eTowMDAxIHZlYzoyOCB0eXBlPURNQV9NU0kgICAg
ICAgICBzdGF0dXM9MDAwMDAwMDAgbWFwcGVkLCB1bmJvdW5kCihYRU4pICAgIElSUTogIDI1IGFm
ZmluaXR5OjAwMDEgdmVjOjMwIHR5cGU9RE1BX01TSSAgICAgICAgIHN0YXR1cz0wMDAwMDAwMCBt
YXBwZWQsIHVuYm91bmQKKFhFTikgICAgSVJROiAgMjYgYWZmaW5pdHk6MDAwMSB2ZWM6NzEgdHlw
ZT1QQ0ktTVNJICAgICAgICAgc3RhdHVzPTAwMDAwMDEwIGluLWZsaWdodD0wIGRvbWFpbi1saXN0
PTA6Mjc5KC1TLS0pLAooWEVOKSAgICBJUlE6ICAyNyBhZmZpbml0eTowMDAxIHZlYzoyMSB0eXBl
PVBDSS1NU0kgICAgICAgICBzdGF0dXM9MDAwMDAwMTAgaW4tZmxpZ2h0PTAgZG9tYWluLWxpc3Q9
MDoyNzgoLVMtLSksCihYRU4pICAgIElSUTogIDI4IGFmZmluaXR5OjAwMDEgdmVjOjI5IHR5cGU9
UENJLU1TSSAgICAgICAgIHN0YXR1cz0wMDAwMDAxMCBpbi1mbGlnaHQ9MCBkb21haW4tbGlzdD0w
OjI3NygtUy0tKSwKKFhFTikgICAgSVJROiAgMjkgYWZmaW5pdHk6MDAwMSB2ZWM6NzkgdHlwZT1Q
Q0ktTVNJICAgICAgICAgc3RhdHVzPTAwMDAwMDEwIGluLWZsaWdodD0wIGRvbWFpbi1saXN0PTA6
Mjc2KC1TLS0pLAooWEVOKSAgICBJUlE6ICAzMCBhZmZpbml0eTowMDAxIHZlYzo4MSB0eXBlPVBD
SS1NU0kgICAgICAgICBzdGF0dXM9MDAwMDAwMTAgaW4tZmxpZ2h0PTAgZG9tYWluLWxpc3Q9MDoy
NzUoUFMtLSksCihYRU4pICAgIElSUTogIDMxIGFmZmluaXR5OjAwMDEgdmVjOjk5IHR5cGU9UENJ
LU1TSSAgICAgICAgIHN0YXR1cz0wMDAwMDAxMCBpbi1mbGlnaHQ9MCBkb21haW4tbGlzdD0wOjI3
NChQUy0tKSwKKFhFTikgSU8tQVBJQyBpbnRlcnJ1cHQgaW5mb3JtYXRpb246CihYRU4pICAgICBJ
UlEgIDAgVmVjMjQwOgooWEVOKSAgICAgICBBcGljIDB4MDAsIFBpbiAgMjogdmVjPWYwIGRlbGl2
ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0dXM9MCBwb2xhcml0eT0wIGlycj0wIHRyaWc9RSBtYXNrPTAg
ZGVzdF9pZDowCihYRU4pICAgICBJUlEgIDEgVmVjIDU2OgooWEVOKSAgICAgICBBcGljIDB4MDAs
IFBpbiAgMTogdmVjPTM4IGRlbGl2ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0dXM9MCBwb2xhcml0eT0w
IGlycj0wIHRyaWc9RSBtYXNrPTAgZGVzdF9pZDowCihYRU4pICAgICBJUlEgIDMgVmVjIDY0Ogoo
WEVOKSAgICAgICBBcGljIDB4MDAsIFBpbiAgMzogdmVjPTQwIGRlbGl2ZXJ5PUxvUHJpIGRlc3Q9
TCBzdGF0dXM9MCBwb2xhcml0eT0wIGlycj0wIHRyaWc9RSBtYXNrPTAgZGVzdF9pZDowCihYRU4p
ICAgICBJUlEgIDQgVmVjIDcyOgooWEVOKSAgICAgICBBcGljIDB4MDAsIFBpbiAgNDogdmVjPTQ4
IGRlbGl2ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0dXM9MCBwb2xhcml0eT0wIGlycj0wIHRyaWc9RSBt
YXNrPTAgZGVzdF9pZDowCihYRU4pICAgICBJUlEgIDUgVmVjIDgwOgooWEVOKSAgICAgICBBcGlj
IDB4MDAsIFBpbiAgNTogdmVjPTUwIGRlbGl2ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0dXM9MCBwb2xh
cml0eT0wIGlycj0wIHRyaWc9RSBtYXNrPTAgZGVzdF9pZDowCihYRU4pICAgICBJUlEgIDYgVmVj
IDg4OgooWEVOKSAgICAgICBBcGljIDB4MDAsIFBpbiAgNjogdmVjPTU4IGRlbGl2ZXJ5PUxvUHJp
IGRlc3Q9TCBzdGF0dXM9MCBwb2xhcml0eT0wIGlycj0wIHRyaWc9RSBtYXNrPTAgZGVzdF9pZDow
CihYRU4pICAgICBJUlEgIDcgVmVjIDk2OgooWEVOKSAgICAgICBBcGljIDB4MDAsIFBpbiAgNzog
dmVjPTYwIGRlbGl2ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0dXM9MCBwb2xhcml0eT0wIGlycj0wIHRy
aWc9RSBtYXNrPTAgZGVzdF9pZDowCihYRU4pICAgICBJUlEgIDggVmVjMTA0OgooWEVOKSAgICAg
ICBBcGljIDB4MDAsIFBpbiAgODogdmVjPTY4IGRlbGl2ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0dXM9
MCBwb2xhcml0eT0wIGlycj0wIHRyaWc9RSBtYXNrPTAgZGVzdF9pZDowCihYRU4pICAgICBJUlEg
IDkgVmVjMTEyOgooWEVOKSAgICAgICBBcGljIDB4MDAsIFBpbiAgOTogdmVjPTcwIGRlbGl2ZXJ5
PUxvUHJpIGRlc3Q9TCBzdGF0dXM9MCBwb2xhcml0eT0wIGlycj0wIHRyaWc9TCBtYXNrPTAgZGVz
dF9pZDowCihYRU4pICAgICBJUlEgMTAgVmVjMTIwOgooWEVOKSAgICAgICBBcGljIDB4MDAsIFBp
biAxMDogdmVjPTc4IGRlbGl2ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0dXM9MCBwb2xhcml0eT0wIGly
cj0wIHRyaWc9RSBtYXNrPTAgZGVzdF9pZDowCihYRU4pICAgICBJUlEgMTEgVmVjMTM2OgooWEVO
KSAgICAgICBBcGljIDB4MDAsIFBpbiAxMTogdmVjPTg4IGRlbGl2ZXJ5PUxvUHJpIGRlc3Q9TCBz
dGF0dXM9MCBwb2xhcml0eT0wIGlycj0wIHRyaWc9RSBtYXNrPTAgZGVzdF9pZDowCihYRU4pICAg
ICBJUlEgMTIgVmVjMTQ0OgooWEVOKSAgICAgICBBcGljIDB4MDAsIFBpbiAxMjogdmVjPTkwIGRl
bGl2ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0dXM9MCBwb2xhcml0eT0wIGlycj0wIHRyaWc9RSBtYXNr
PTAgZGVzdF9pZDowCihYRU4pICAgICBJUlEgMTMgVmVjMTUyOgooWEVOKSAgICAgICBBcGljIDB4
MDAsIFBpbiAxMzogdmVjPTk4IGRlbGl2ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0dXM9MCBwb2xhcml0
eT0wIGlycj0wIHRyaWc9RSBtYXNrPTEgZGVzdF9pZDowCihYRU4pICAgICBJUlEgMTQgVmVjMTYw
OgooWEVOKSAgICAgICBBcGljIDB4MDAsIFBpbiAxNDogdmVjPWEwIGRlbGl2ZXJ5PUxvUHJpIGRl
c3Q9TCBzdGF0dXM9MCBwb2xhcml0eT0wIGlycj0wIHRyaWc9RSBtYXNrPTAgZGVzdF9pZDowCihY
RU4pICAgICBJUlEgMTUgVmVjMTY4OgooWEVOKSAgICAgICBBcGljIDB4MDAsIFBpbiAxNTogdmVj
PWE4IGRlbGl2ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0dXM9MCBwb2xhcml0eT0wIGlycj0wIHRyaWc9
RSBtYXNrPTAgZGVzdF9pZDowCihYRU4pICAgICBJUlEgMTYgVmVjMTc2OgooWEVOKSAgICAgICBB
cGljIDB4MDAsIFBpbiAxNjogdmVjPWIwIGRlbGl2ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0dXM9MCBw
b2xhcml0eT0xIGlycj0wIHRyaWc9TCBtYXNrPTAgZGVzdF9pZDowCihYRU4pICAgICBJUlEgMTgg
VmVjMTg0OgooWEVOKSAgICAgICBBcGljIDB4MDAsIFBpbiAxODogdmVjPWI4IGRlbGl2ZXJ5PUxv
UHJpIGRlc3Q9TCBzdGF0dXM9MCBwb2xhcml0eT0xIGlycj0wIHRyaWc9TCBtYXNrPTEgZGVzdF9p
ZDowCihYRU4pICAgICBJUlEgMTkgVmVjMTkyOgooWEVOKSAgICAgICBBcGljIDB4MDAsIFBpbiAx
OTogdmVjPWMwIGRlbGl2ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0dXM9MCBwb2xhcml0eT0xIGlycj0w
IHRyaWc9TCBtYXNrPTEgZGVzdF9pZDowCihYRU4pICAgICBJUlEgMjAgVmVjIDQ5OgooWEVOKSAg
ICAgICBBcGljIDB4MDAsIFBpbiAyMDogdmVjPTMxIGRlbGl2ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0
dXM9MCBwb2xhcml0eT0xIGlycj0wIHRyaWc9TCBtYXNrPTEgZGVzdF9pZDowCihYRU4pICAgICBJ
UlEgMjIgVmVjMjA4OgooWEVOKSAgICAgICBBcGljIDB4MDAsIFBpbiAyMjogdmVjPWQwIGRlbGl2
ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0dXM9MCBwb2xhcml0eT0xIGlycj0wIHRyaWc9TCBtYXNrPTEg
ZGVzdF9pZDowCihYRU4pICAgICBJUlEgMjMgVmVjMjAwOgooWEVOKSAgICAgICBBcGljIDB4MDAs
IFBpbiAyMzogdmVjPWM4IGRlbGl2ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0dXM9MCBwb2xhcml0eT0x
IGlycj0wIHRyaWc9TCBtYXNrPTAgZGVzdF9pZDowCihYRU4pIFttOiBtZW1vcnkgaW5mb10KKFhF
TikgUGh5c2ljYWwgbWVtb3J5IGluZm9ybWF0aW9uOgooWEVOKSAgICAgWGVuIGhlYXA6IDBrQiBm
cmVlCihYRU4pICAgICBoZWFwWzE0XTogNjQ1MTJrQiBmcmVlCihYRU4pICAgICBoZWFwWzE1XTog
MTMxMDcya0IgZnJlZQooWEVOKSAgICAgaGVhcFsxNl06IDI2MjE0NGtCIGZyZWUKKFhFTikgICAg
IGhlYXBbMTddOiA1MjIyMzZrQiBmcmVlCihYRU4pICAgICBoZWFwWzE4XTogMTA0ODU3MmtCIGZy
ZWUKKFhFTikgICAgIGhlYXBbMTldOiA2OTExODhrQiBmcmVlCihYRU4pICAgICBoZWFwWzIwXTog
NTM3MDYwa0IgZnJlZQooWEVOKSAgICAgRG9tIGhlYXA6IDMyNTY3ODRrQiBmcmVlCihYRU4pIFtu
OiBOTUkgc3RhdGlzdGljc10KKFhFTikgQ1BVCU5NSQooWEVOKSAgIDAJICAwCihYRU4pICAgMQkg
IDAKKFhFTikgICAyCSAgMAooWEVOKSAgIDMJICAwCihYRU4pIGRvbTAgdmNwdTA6IE5NSSBuZWl0
aGVyIHBlbmRpbmcgbm9yIG1hc2tlZAooWEVOKSBbcTogZHVtcCBkb21haW4gKGFuZCBndWVzdCBk
ZWJ1ZykgaW5mb10KKFhFTikgJ3EnIHByZXNzZWQgLT4gZHVtcGluZyBkb21haW4gaW5mbyAobm93
PTB4Mjk6MERCNzAwQjEpCihYRU4pIEdlbmVyYWwgaW5mb3JtYXRpb24gZm9yIGRvbWFpbiAwOgoo
WEVOKSAgICAgcmVmY250PTMgZHlpbmc9MCBwYXVzZV9jb3VudD0wCihYRU4pICAgICBucl9wYWdl
cz0xODc1MzkgeGVuaGVhcF9wYWdlcz02IHNoYXJlZF9wYWdlcz0wIHBhZ2VkX3BhZ2VzPTAgZGly
dHlfY3B1cz17MS0zfSBtYXhfcGFnZXM9MTg4MTQ3CihYRU4pICAgICBoYW5kbGU9MDAwMDAwMDAt
MDAwMC0wMDAwLTAwMDAtMDAwMDAwMDAwMDAwIHZtX2Fzc2lzdD0wMDAwMDAwZAooWEVOKSBSYW5n
ZXNldHMgYmVsb25naW5nIHRvIGRvbWFpbiAwOgooWEVOKSAgICAgSS9PIFBvcnRzICB7IDAtMWYs
IDIyLTNmLCA0NC02MCwgNjItOWYsIGEyLTQwNywgNDBjLWNmYiwgZDAwLTIwNGYsIDIwNTgtZmZm
ZiB9CihYRU4pICAgICBJbnRlcnJ1cHRzIHsgMC0yNzkgfQooWEVOKSAgICAgSS9PIE1lbW9yeSB7
IDAtZmViZmYsIGZlYzAxLWZlZGZmLCBmZWUwMS1mZmZmZmZmZmZmZmZmZmZmIH0KKFhFTikgTWVt
b3J5IHBhZ2VzIGJlbG9uZ2luZyB0byBkb21haW4gMDoKKFhFTikgICAgIERvbVBhZ2UgbGlzdCB0
b28gbG9uZyB0byBkaXNwbGF5CihYRU4pICAgICBYZW5QYWdlIDAwMDAwMDAwMDAxNDc2ZjY6IGNh
Zj1jMDAwMDAwMDAwMDAwMDAyLCB0YWY9NzQwMDAwMDAwMDAwMDAwMgooWEVOKSAgICAgWGVuUGFn
ZSAwMDAwMDAwMDAwMTQ3NmY1OiBjYWY9YzAwMDAwMDAwMDAwMDAwMSwgdGFmPTc0MDAwMDAwMDAw
MDAwMDEKKFhFTikgICAgIFhlblBhZ2UgMDAwMDAwMDAwMDE0NzZmNDogY2FmPWMwMDAwMDAwMDAw
MDAwMDEsIHRhZj03NDAwMDAwMDAwMDAwMDAxCihYRU4pICAgICBYZW5QYWdlIDAwMDAwMDAwMDAx
NDc2ZjM6IGNhZj1jMDAwMDAwMDAwMDAwMDAxLCB0YWY9NzQwMDAwMDAwMDAwMDAwMQooWEVOKSAg
ICAgWGVuUGFnZSAwMDAwMDAwMDAwMGFhMGZkOiBjYWY9YzAwMDAwMDAwMDAwMDAwMiwgdGFmPTc0
MDAwMDAwMDAwMDAwMDIKKFhFTikgICAgIFhlblBhZ2UgMDAwMDAwMDAwMDExYzllYTogY2FmPWMw
MDAwMDAwMDAwMDAwMDIsIHRhZj03NDAwMDAwMDAwMDAwMDAyCihYRU4pIFZDUFUgaW5mb3JtYXRp
b24gYW5kIGNhbGxiYWNrcyBmb3IgZG9tYWluIDA6CihYRU4pICAgICBWQ1BVMDogQ1BVMCBbaGFz
PUZdIHBvbGw9MCB1cGNhbGxfcGVuZCA9IDAxLCB1cGNhbGxfbWFzayA9IDAwIGRpcnR5X2NwdXM9
e30gY3B1X2FmZmluaXR5PXswfQooWEVOKSAgICAgcGF1c2VfY291bnQ9MCBwYXVzZV9mbGFncz0w
CihYRU4pICAgICBObyBwZXJpb2RpYyB0aW1lcgooWEVOKSAgICAgVkNQVTE6IENQVTMgW2hhcz1G
XSBwb2xsPTAgdXBjYWxsX3BlbmQgPSAwMCwgdXBjYWxsX21hc2sgPSAwMCBkaXJ0eV9jcHVzPXsz
fSBjcHVfYWZmaW5pdHk9ezAtMTV9CihYRU4pICAgICBwYXVzZV9jb3VudD0wIHBhdXNlX2ZsYWdz
PTEKKFhFTikgICAgIE5vIHBlcmlvZGljIHRpbWVyCihYRU4pICAgICBWQ1BVMjogQ1BVMSBbaGFz
PUZdIHBvbGw9MCB1cGNhbGxfcGVuZCA9IDAwLCB1cGNhbGxfbWFzayA9IDAwIGRpcnR5X2NwdXM9
ezF9IGNwdV9hZmZpbml0eT17MC0xNX0KKFhFTikgICAgIHBhdXNlX2NvdW50PTAgcGF1c2VfZmxh
Z3M9MQooWEVOKSAgICAgTm8gcGVyaW9kaWMgdGltZXIKKFhFTikgICAgIFZDUFUzOiBDUFUyIFto
YXM9Rl0gcG9sbD0wIHVwY2FsbF9wZW5kID0gMDAsIHVwY2FsbF9tYXNrID0gMDAgZGlydHlfY3B1
cz17Mn0gY3B1X2FmZmluaXR5PXswLTE1fQooWEVOKSAgICAgcGF1c2VfY291bnQ9MCBwYXVzZV9m
bGFncz0xCihYRU4pICAgICBObyBwZXJpb2RpYyB0aW1lcgooWEVOKSBOb3RpZnlpbmcgZ3Vlc3Qg
MDowICh2aXJxIDEsIHBvcnQgNSwgc3RhdCAwLzAvLTEpCihYRU4pIE5vdGlmeWluZyBndWVzdCAw
OjEgKHZpcnEgMSwgcG9ydCAxMSwgc3RhdCAwLzAvMCkKKFhFTikgTm90aWZ5aW5nIGd1ZXN0IDA6
MiAodmlycSAxLCBwb3J0IDE3LCBzdGF0IDAvMC8wKQooWEVOKSBOb3RpZnlpbmcgZ3Vlc3QgMDoz
ICh2aXJxIDEsIHBvcnQgMjMsIHN0YXQgMC8wLzApCgooWEVOKSBTaGFyZWQgZnJhbWVzIDAgLS0g
U2F2ZWQgZnJhbWVzIDAKWyAgMTc2LjUwMTk1NV0gdihYRU4pIFtyOiBkdW1wIHJ1biBxdWV1ZXNd
CmNwdSAxCihYRU4pIHNjaGVkX3NtdF9wb3dlcl9zYXZpbmdzOiBkaXNhYmxlZAooWEVOKSBOT1c9
MHgwMDAwMDAyOTE5N0UzNTAyCihYRU4pIElkbGUgY3B1cG9vbDoKWyAgMTc2LjUwMTk1NV0gIChY
RU4pIFNjaGVkdWxlcjogU01QIENyZWRpdCBTY2hlZHVsZXIgKGNyZWRpdCkKIChYRU4pIGluZm86
CihYRU4pIAluY3B1cyAgICAgICAgICAgICAgPSA0CihYRU4pIAltYXN0ZXIgICAgICAgICAgICAg
PSAwCihYRU4pIAljcmVkaXQgICAgICAgICAgICAgPSA0MDAKKFhFTikgCWNyZWRpdCBiYWxhbmNl
ICAgICA9IC0xMDAKKFhFTikgCXdlaWdodCAgICAgICAgICAgICA9IDI1NgooWEVOKSAJcnVucV9z
b3J0ICAgICAgICAgID0gMTk0NQooWEVOKSAJZGVmYXVsdC13ZWlnaHQgICAgID0gMjU2CihYRU4p
IAl0c2xpY2UgICAgICAgICAgICAgPSAxMG1zCihYRU4pIAlyYXRlbGltaXQgICAgICAgICAgPSAx
MDAwdXMKKFhFTikgCWNyZWRpdHMgcGVyIG1zZWMgICA9IDEwCihYRU4pIAl0aWNrcyBwZXIgdHNs
aWNlICAgPSAxCihYRU4pIAltaWdyYXRpb24gZGVsYXkgICAgPSAwdXMKMDogbWFza2VkPTAgcGVu
ZChYRU4pIGlkbGVyczogMDAwNgooWEVOKSBhY3RpdmUgdmNwdXM6CihYRU4pIAkgIDE6IGluZz0x
IGV2ZW50X3NlbCBbMC4xXSBwcmk9LTIgZmxhZ3M9MCBjcHU9MyBjcmVkaXQ9LTcwMyBbdz0yNTZd
CjAwMDAwMDAwMDAwMDAwMDEoWEVOKSBDcHVwb29sIDA6CgooWEVOKSBTY2hlZHVsZXI6IFNNUCBD
cmVkaXQgU2NoZWR1bGVyIChjcmVkaXQpClsgIDE3Ni41MzYxMzddICAoWEVOKSBpbmZvOgooWEVO
KSAJbmNwdXMgICAgICAgICAgICAgID0gNAooWEVOKSAJbWFzdGVyICAgICAgICAgICAgID0gMAoo
WEVOKSAJY3JlZGl0ICAgICAgICAgICAgID0gNDAwCihYRU4pIAljcmVkaXQgYmFsYW5jZSAgICAg
PSAtMTAwCihYRU4pIAl3ZWlnaHQgICAgICAgICAgICAgPSAyNTYKKFhFTikgCXJ1bnFfc29ydCAg
ICAgICAgICA9IDE5NDUKKFhFTikgCWRlZmF1bHQtd2VpZ2h0ICAgICA9IDI1NgooWEVOKSAJdHNs
aWNlICAgICAgICAgICAgID0gMTBtcwooWEVOKSAJcmF0ZWxpbWl0ICAgICAgICAgID0gMTAwMHVz
CihYRU4pIAljcmVkaXRzIHBlciBtc2VjICAgPSAxMAooWEVOKSAJdGlja3MgcGVyIHRzbGljZSAg
ID0gMQooWEVOKSAJbWlncmF0aW9uIGRlbGF5ICAgID0gMHVzCiAoWEVOKSBpZGxlcnM6IDAwMDYK
KFhFTikgYWN0aXZlIHZjcHVzOgooWEVOKSAJICAxOiBbMC4xXSBwcmk9LTIgZmxhZ3M9MCBjcHU9
MyBjcmVkaXQ9LTEyNjQgW3c9MjU2XQoxOiBtYXNrZWQ9MCBwZW5kKFhFTikgQ1BVWzAwXSBpbmc9
MSBldmVudF9zZWwgIHNvcnQ9MTk0NSwgc2libGluZz0wMDAxLCAwMDAwMDAwMDAwMDAwMDAxY29y
ZT0wMDBmCihYRU4pIAlydW46IFszMjc2Ny4wXSBwcmk9MCBmbGFncz0wIGNwdT0wCgooWEVOKSAJ
ICAxOiBbMC4wXSBwcmk9MCBmbGFncz0wIGNwdT0wIGNyZWRpdD02MiBbdz0yNTZdClsgIDE3Ni42
MzAxOTVdICAoWEVOKSBDUFVbMDFdICAgc29ydD0xOTQ1LCBzaWJsaW5nPTAwMDIsIDI6IG1hc2tl
ZD0xIHBlbmRjb3JlPTAwMGYKKFhFTikgCXJ1bjogaW5nPTEgZXZlbnRfc2VsIFszMjc2Ny4xXSBw
cmk9LTY0IGZsYWdzPTAgY3B1PTEKMDAwMDAwMDAwMDAwMDAwMShYRU4pIENQVVswMl0gCiBzb3J0
PTE5NDUsIHNpYmxpbmc9MDAwNCwgWyAgMTc2LjY2MDE3NV0gIGNvcmU9MDAwZgooWEVOKSAJcnVu
OiAgWzMyNzY3LjJdIHByaT0tNjQgZmxhZ3M9MCBjcHU9MgozOiBtYXNrZWQ9MSBwZW5kKFhFTikg
Q1BVWzAzXSBpbmc9MCBldmVudF9zZWwgIHNvcnQ9MTk0NSwgc2libGluZz0wMDA4LCAwMDAwMDAw
MDAwMDAwMDAwY29yZT0wMDBmCihYRU4pIAlydW46IFswLjFdIHByaT0tMiBmbGFncz0wIGNwdT0z
IGNyZWRpdD0tMTg2MyBbdz0yNTZdCihYRU4pIAkgIDE6IFszMjc2Ny4zXSBwcmk9LTY0IGZsYWdz
PTAgY3B1PTMKCihYRU4pIFtzOiBkdW1wIHNvZnR0c2Mgc3RhdHNdClsgIDE3Ni42NzkyMzddICAo
WEVOKSBUU0MgbWFya2VkIGFzIHJlbGlhYmxlLCB3YXJwID0gMCAoY291bnQ9MikKIChYRU4pIE5v
IGRvbWFpbnMgaGF2ZSBlbXVsYXRlZCBUU0MKCihYRU4pIFt0OiBkaXNwbGF5IG11bHRpLWNwdSBj
bG9jayBpbmZvXQpbICAxNzYuNzI5NDA3XSBwKFhFTikgU3luY2VkIHN0aW1lIHNrZXc6IG1heD04
NDkybnMgYXZnPTg0OTJucyBzYW1wbGVzPTEgY3VycmVudD04NDkybnMKKFhFTikgU3luY2VkIGN5
Y2xlcyBza2V3OiBtYXg9MTcwIGF2Zz0xNzAgc2FtcGxlcz0xIGN1cnJlbnQ9MTcwCmVuZGluZzoK
KFhFTikgW3U6IGR1bXAgbnVtYSBpbmZvXQpbICAxNzYuNzI5NDA4XSAgKFhFTikgJ3UnIHByZXNz
ZWQgLT4gZHVtcGluZyBudW1hIGluZm8gKG5vdy0weDI5OjI3NTZCNjNCKQogIChYRU4pIGlkeDAg
LT4gTk9ERTAgc3RhcnQtPjAgc2l6ZS0+MTM2OTYwMCBmcmVlLT44MTQxOTYKMDAwMDAwMDAwMDAw
MDAwMChYRU4pIHBoeXNfdG9fbmlkKDAwMDAwMDAwMDAwMDEwMDApIC0+IDAgc2hvdWxkIGJlIDAK
IChYRU4pIENQVTAgLT4gTk9ERTAKKFhFTikgQ1BVMSAtPiBOT0RFMAooWEVOKSBDUFUyIC0+IE5P
REUwCihYRU4pIENQVTMgLT4gTk9ERTAKKFhFTikgTWVtb3J5IGxvY2F0aW9uIG9mIGVhY2ggZG9t
YWluOgooWEVOKSBEb21haW4gMCAodG90YWw6IDE4NzUzOSk6CjAwMDAwMDAwMDAwMDAwMDAoWEVO
KSAgICAgTm9kZSAwOiAxODc1MzkKIChYRU4pIFt2OiBkdW1wIEludGVsJ3MgVk1DU10KMDAwMDAw
MDAwMDAwMDAwMChYRU4pICoqKioqKioqKioqIFZNQ1MgQXJlYXMgKioqKioqKioqKioqKioKIChY
RU4pICoqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqCjAwMDAwMDAwMDAwMDAw
MDAoWEVOKSBbejogcHJpbnQgaW9hcGljIGluZm9dCiAoWEVOKSBudW1iZXIgb2YgTVAgSVJRIHNv
dXJjZXM6IDE1LgowMDAwMDAwMDAwMDAwMDAwKFhFTikgbnVtYmVyIG9mIElPLUFQSUMgIzIgcmVn
aXN0ZXJzOiAyNC4KKFhFTikgdGVzdGluZyB0aGUgSU8gQVBJQy4uLi4uLi4uLi4uLi4uLi4uLi4u
Li4uCiAoWEVOKSBJTyBBUElDICMyLi4uLi4uCihYRU4pIC4uLi4gcmVnaXN0ZXIgIzAwOiAwMjAw
MDAwMAooWEVOKSAuLi4uLi4uICAgIDogcGh5c2ljYWwgQVBJQyBpZDogMDIKKFhFTikgLi4uLi4u
LiAgICA6IERlbGl2ZXJ5IFR5cGU6IDAKKFhFTikgLi4uLi4uLiAgICA6IExUUyAgICAgICAgICA6
IDAKMDAwMDAwMDAwMDAwMDAwMChYRU4pIC4uLi4gcmVnaXN0ZXIgIzAxOiAwMDE3MDAyMAooWEVO
KSAuLi4uLi4uICAgICA6IG1heCByZWRpcmVjdGlvbiBlbnRyaWVzOiAwMDE3CihYRU4pIC4uLi4u
Li4gICAgIDogUFJRIGltcGxlbWVudGVkOiAwCihYRU4pIC4uLi4uLi4gICAgIDogSU8gQVBJQyB2
ZXJzaW9uOiAwMDIwCihYRU4pIC4uLi4gSVJRIHJlZGlyZWN0aW9uIHRhYmxlOgooWEVOKSAgTlIg
TG9nIFBoeSBNYXNrIFRyaWcgSVJSIFBvbCBTdGF0IERlc3QgRGVsaSBWZWN0OiAgIAogKFhFTikg
IDAwIDAwMCAwMCAgMDAwMDAwMDAwMDAwMDAwMDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAw
ICAgIDAwCiAoWEVOKSAgMDEgMDAwIDAwICAwICAgIDAgICAgMCAgIDAgICAwICAgIDEgICAgMSAg
ICAzOAowMDAwMDAwMDAwMDAwMDAwKFhFTikgIDAyIDAwMCAwMCAgMCAgICAwICAgIDAgICAwICAg
MCAgICAxICAgIDEgICAgRjAKCihYRU4pICAwMyAwMDAgMDAgIDAgICAgMCAgICAwICAgMCAgIDAg
ICAgMSAgICAxICAgIDQwClsgIDE3Ni44NjY4NzJdICAoWEVOKSAgMDQgMDAwIDAwICAwICAgIDAg
ICAgMCAgIDAgICAwICAgIDEgICAgMSAgICA0OAogIChYRU4pICAwNSAwMDAgMDAgIDAgICAgMCAg
ICAwICAgMCAgIDAgICAgMSAgICAxICAgIDUwCjAwMDAwMDAwMDAwMDAwMDAoWEVOKSAgMDYgMDAw
IDAwICAwICAgIDAgICAgMCAgIDAgICAwICAgIDEgICAgMSAgICA1OAogKFhFTikgIDA3IDAwMCAw
MCAgMCAgICAwICAgIDAgICAwICAgMCAgICAxICAgIDEgICAgNjAKMDAwMDAwMDAwMDAwMDAwMChY
RU4pICAwOCAwMDAgMDAgIDAgICAgMCAgICAwICAgMCAgIDAgICAgMSAgICAxICAgIDY4CiAoWEVO
KSAgMDkgMDAwIDAwICAwICAgIDEgICAgMCAgIDAgICAwICAgIDEgICAgMSAgICA3MAowMDAwMDAw
MDAwMDAwMDAwKFhFTikgIDBhIDAwMCAwMCAgMCAgICAwICAgIDAgICAwICAgMCAgICAxICAgIDEg
ICAgNzgKIChYRU4pICAwYiAwMDAgMDAgIDAgICAgMCAgICAwICAgMCAgIDAgICAgMSAgICAxICAg
IDg4CjAwMDAwMDAwMDAwMDAwMDAoWEVOKSAgMGMgMDAwIDAwICAwICAgIDAgICAgMCAgIDAgICAw
ICAgIDEgICAgMSAgICA5MAogKFhFTikgIDBkIDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAg
ICAxICAgIDEgICAgOTgKMDAwMDAwMDAwMDAwMDAwMChYRU4pICAwZSAwMDAgMDAgIDAgICAgMCAg
ICAwICAgMCAgIDAgICAgMSAgICAxICAgIEEwCiAoWEVOKSAgMGYgMDAwIDAwICAwICAgIDAgICAg
MCAgIDAgICAwICAgIDEgICAgMSAgICBBOAowMDAwMDAwMDAwMDAwMDAwKFhFTikgIDEwIDAwMCAw
MCAgMCAgICAxICAgIDAgICAxICAgMCAgICAxICAgIDEgICAgQjAKIChYRU4pICAxMSAwMDAgMDAg
IDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAwICAgIDAwCjAwMDAwMDAwMDAwMDAwMDAoWEVO
KSAgMTIgMDAwIDAwICAxICAgIDEgICAgMCAgIDEgICAwICAgIDEgICAgMSAgICBCOAogKFhFTikg
IDEzIDAwMCAwMCAgMSAgICAxICAgIDAgICAxICAgMCAgICAxICAgIDEgICAgQzAKMDAwMDAwMDAw
MDAwMDAwMChYRU4pICAxNCAwMDAgMDAgIDEgICAgMSAgICAwICAgMSAgIDAgICAgMSAgICAxICAg
IDMxCgooWEVOKSAgMTUgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAw
MApbICAxNzYuOTY5NDMzXSAgKFhFTikgIDE2IDAwMCAwMCAgMSAgICAxICAgIDAgICAxICAgMCAg
ICAxICAgIDEgICAgRDAKICAoWEVOKSAgMTcgMDAwIDAwICAwICAgIDEgICAgMCAgIDEgICAwICAg
IDEgICAgMSAgICBDOAowMDAwMDAwMDAwMDAwMDAwKFhFTikgVXNpbmcgdmVjdG9yLWJhc2VkIGlu
ZGV4aW5nCihYRU4pIElSUSB0byBwaW4gbWFwcGluZ3M6CiAoWEVOKSBJUlEyNDAgLT4gMDoyCihY
RU4pIElSUTU2IC0+IDA6MQooWEVOKSBJUlE2NCAtPiAwOjMKKFhFTikgSVJRNzIgLT4gMDo0CjAw
MDAwMDAwMDAwMDAwMDAoWEVOKSBJUlE4MCAtPiAwOjUKKFhFTikgSVJRODggLT4gMDo2CihYRU4p
IElSUTk2IC0+IDA6NwooWEVOKSBJUlExMDQgLT4gMDo4CihYRU4pIElSUTExMiAtPiAwOjkKKFhF
TikgSVJRMTIwIC0+IDA6MTAKKFhFTikgSVJRMTM2IC0+IDA6MTEKKFhFTikgSVJRMTQ0IC0+IDA6
MTIKKFhFTikgSVJRMTUyIC0+IDA6MTMKKFhFTikgSVJRMTYwIC0+IDA6MTQKKFhFTikgSVJRMTY4
IC0+IDA6MTUKKFhFTikgSVJRMTc2IC0+IDA6MTYKIChYRU4pIElSUTE4NCAtPiAwOjE4CihYRU4p
IElSUTE5MiAtPiAwOjE5CihYRU4pIElSUTQ5IC0+IDA6MjAKKFhFTikgSVJRMjA4IC0+IDA6MjIK
KFhFTikgSVJRMjAwIC0+IDA6MjMKMDAwMDAwMDAwMDAwMDAwMChYRU4pIC4uLi4uLi4uLi4uLi4u
Li4uLi4uLi4uLi4uLi4uLi4uLi4uLiBkb25lLgogMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
ClsgIDE3Ny4wNTgwMzJdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxNzcuMDcxOTk0XSAg
ICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTc3LjA4NTk1NV0gICAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwClsgIDE3Ny4wOTk5MTZdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxNzcuMTEz
ODc4XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDQ4MDAwODIyODIKWyAgMTc3LjEyNzgzOV0gICAgClsgIDE3Ny4x
MzExNTBdIGdsb2JhbCBtYXNrOgpbICAxNzcuMTMxMTUwXSAgICBmZmZmZmZmZmZmZmZmZmZmIGZm
ZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZm
ZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZm
ZmYKWyAgMTc3LjE0NjM2NF0gICAgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZm
ZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZm
ZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmClsgIDE3Ny4xNjAzMjZd
ICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZm
ZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZm
ZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZgpbICAxNzcuMTc0Mjg2XSAgICBmZmZmZmZmZmZmZmZm
ZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZm
ZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZm
ZmZmZmZmZmYKWyAgMTc3LjE4ODI0OF0gICAgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZm
ZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZm
ZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmClsgIDE3Ny4y
MDIyMDldICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZm
ZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZm
ZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZgpbICAxNzcuMjE2MTcwXSAgICBmZmZmZmZm
ZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZm
ZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZm
ZmZmZmZmZmZmZmZmZmYKWyAgMTc3LjIzMDEzMl0gICAgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZm
ZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZm
ZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmY4MDA4MTA0MTA1Clsg
IDE3Ny4yNDQwOTJdICAgIApbICAxNzcuMjQ3NDA0XSBnbG9iYWxseSB1bm1hc2tlZDoKWyAgMTc3
LjI0NzQwNF0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDE3Ny4yNjMxNTVdICAgIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMApbICAxNzcuMjc3MTE2XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAK
WyAgMTc3LjI5MTA3OF0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDE3Ny4zMDUwMzldICAg
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxNzcuMzE5MDAwXSAgICAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAKWyAgMTc3LjMzMjk2MV0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDE3Ny4zNDY5
MjJdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwNDgwMDA4MjI4MgpbICAxNzcuMzYwODgzXSAgICAKWyAgMTc3LjM2
NDE5NV0gbG9jYWwgY3B1MSBtYXNrOgpbICAxNzcuMzY0MTk2XSAgICAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAKWyAgMTc3LjM3OTc2N10gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDE3Ny4zOTM3
MjldICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxNzcuNDA3Njg5XSAgICAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAKWyAgMTc3LjQyMTY1MV0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDE3
Ny40MzU2MTFdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxNzcuNDQ5NTczXSAgICAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAKWyAgMTc3LjQ2MzUzNF0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAxZjgw
ClsgIDE3Ny40Nzc0OTVdICAgIApbICAxNzcuNDgwODA2XSBsb2NhbGx5IHVubWFza2VkOgpbICAx
NzcuNDgwODA2XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTc3LjQ5NjQ2OV0gICAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwClsgIDE3Ny41MTA0MjldICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MApbICAxNzcuNTI0MzkwXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTc3LjUzODM1Ml0g
ICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDE3Ny41NTIzMTNdICAgIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMApbICAxNzcuNTY2Mjc0XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTc3LjU4
MDIzNl0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMjgwClsgIDE3Ny41OTQxOTZdICAgIApbICAxNzcu
NTk3NTA4XSBwZW5kaW5nIGxpc3Q6ClsgIDE3Ny42MDA1NTFdICAgMDogZXZlbnQgMSAtPiBpcnEg
MjcyIGxvY2FsbHktbWFza2VkClsgIDE3Ny42MDU1NjJdICAgMTogZXZlbnQgNyAtPiBpcnEgMjc4
ClsgIDE3Ny42MDkyMzJdICAgMTogZXZlbnQgOSAtPiBpcnEgMjgwClsgIDE3Ny42MTI5MDJdICAg
MjogZXZlbnQgMTMgLT4gaXJxIDI4NCBsb2NhbGx5LW1hc2tlZApbICAxNzcuNjE4MDAyXSAgIDM6
IGV2ZW50IDE5IC0+IGlycSAyOTAgbG9jYWxseS1tYXNrZWQKWyAgMTc3LjYyMzEwNF0gICAwOiBl
dmVudCAzNSAtPiBpcnEgMzAyIGxvY2FsbHktbWFza2VkClsgIDE3Ny42MjgyMDVdICAgMDogZXZl
bnQgMzggLT4gaXJxIDMwMyBsb2NhbGx5LW1hc2tlZApbICAxNzcuNjMzMzI2XSAKWyAgMTc3LjYz
MzMyNl0gdmNwdSAwClsgIDE3Ny42MzMzMjddICAgMDogbWFza2VkPTAgcGVuZGluZz0xIGV2ZW50
X3NlbCAwMDAwMDAwMDAwMDAwMDAxClsgIDE3Ny42Mzg1NzldICAgMTogbWFza2VkPTAgcGVuZGlu
Zz0wIGV2ZW50X3NlbCAwMDAwMDAwMDAwMDAwMDAwClsgIDE3Ny42NDQ2NjVdICAgMjogbWFza2Vk
PTEgcGVuZGluZz0xIGV2ZW50X3NlbCAwMDAwMDAwMDAwMDAwMDAxClsgIDE3Ny42NTA3NDldICAg
MzogbWFza2VkPTEgcGVuZGluZz0xIGV2ZW50X3NlbCAwMDAwMDAwMDAwMDAwMDAxClsgIDE3Ny42
NTY4MzVdICAgClsgIDE3Ny42NjI5MjBdIHBlbmRpbmc6ClsgIDE3Ny42NjI5MjFdICAgIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMApbICAxNzcuNjc3Nzc2XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAK
WyAgMTc3LjY5MTczOF0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDE3Ny43MDU3MDBdICAg
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxNzcuNzE5NjYwXSAgICAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAKWyAgMTc3LjczMzYyMl0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDE3Ny43NDc1
ODJdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxNzcuNzYxNTQzXSAgICAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDQ4MDAyOGEwMDYKWyAgMTc3Ljc3NTUwNV0gICAgClsgIDE3Ny43Nzg4MTZdIGdsb2JhbCBtYXNr
OgpbICAxNzcuNzc4ODE2XSAgICBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZm
ZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZm
ZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYKWyAgMTc3Ljc5NDAzMF0g
ICAgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZm
ZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZm
ZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmClsgIDE3Ny44MDc5OTJdICAgIGZmZmZmZmZmZmZmZmZm
ZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZm
ZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZm
ZmZmZmZmZgpbICAxNzcuODIxOTUyXSAgICBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZm
ZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZm
ZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYKWyAgMTc3Ljgz
NTkxNF0gICAgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZm
ZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZm
ZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmClsgIDE3Ny44NDk4NzVdICAgIGZmZmZmZmZm
ZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZm
ZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZm
ZmZmZmZmZmZmZmZmZgpbICAxNzcuODYzODM2XSAgICBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZm
ZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZm
ZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYKWyAg
MTc3Ljg3Nzc5OF0gICAgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZm
ZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZm
ZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmY4MDA4MTA0MTA1ClsgIDE3Ny44OTE3NTldICAgIApb
ICAxNzcuODk1MDY5XSBnbG9iYWxseSB1bm1hc2tlZDoKWyAgMTc3Ljg5NTA3MF0gICAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwClsgIDE3Ny45MTA4MjFdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApb
ICAxNzcuOTI0NzgzXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTc3LjkzODc0M10gICAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDE3Ny45NTI3MDRdICAgIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMApbICAxNzcuOTY2NjY2XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTc3Ljk4MDYy
N10gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDE3Ny45OTQ1ODldICAgIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
NDgwMDI4YTAwMgpbICAxNzguMDA4NTUwXSAgICAKWyAgMTc4LjAxMTg2MV0gbG9jYWwgY3B1MCBt
YXNrOgpbICAxNzguMDExODYxXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTc4LjAyNzQz
M10gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDE3OC4wNDEzOTRdICAgIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMApbICAxNzguMDU1MzU1XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTc4
LjA2OTMxN10gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDE3OC4wODMyNzhdICAgIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMApbICAxNzguMDk3MjM5XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAK
WyAgMTc4LjExMTIwMF0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCBmZmZmZmZmZmZlMDAwMDdmClsgIDE3OC4xMjUxNjFdICAg
IApbICAxNzguMTI4NDcyXSBsb2NhbGx5IHVubWFza2VkOgpbICAxNzguMTI4NDczXSAgICAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAKWyAgMTc4LjE0NDEzNF0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
ClsgIDE3OC4xNTgwOTZdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxNzguMTcyMDU2XSAg
ICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTc4LjE4NjAxOF0gICAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwClsgIDE3OC4xOTk5NzldICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxNzguMjEz
OTQwXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTc4LjIyNzkwMl0gICAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDA0ODAwMDAwMDAyClsgIDE3OC4yNDE4NjNdICAgIApbICAxNzguMjQ1MTc0XSBwZW5kaW5nIGxp
c3Q6ClsgIDE3OC4yNDgyMjFdICAgMDogZXZlbnQgMSAtPiBpcnEgMjcyClsgIDE3OC4yNTE4ODhd
ICAgMDogZXZlbnQgMiAtPiBpcnEgMjczIGdsb2JhbGx5LW1hc2tlZApbICAxNzguMjU2OTg4XSAg
IDI6IGV2ZW50IDEzIC0+IGlycSAyODQgbG9jYWxseS1tYXNrZWQKWyAgMTc4LjI2MjA5MF0gICAy
OiBldmVudCAxNSAtPiBpcnEgMjg2IGxvY2FsbHktbWFza2VkClsgIDE3OC4yNjcxOTBdICAgMzog
ZXZlbnQgMTkgLT4gaXJxIDI5MCBsb2NhbGx5LW1hc2tlZApbICAxNzguMjcyMjkyXSAgIDM6IGV2
ZW50IDIxIC0+IGlycSAyOTIgbG9jYWxseS1tYXNrZWQKWyAgMTc4LjI3NzM5Ml0gICAwOiBldmVu
dCAzNSAtPiBpcnEgMzAyClsgIDE3OC4yODExNTFdICAgMDogZXZlbnQgMzggLT4gaXJxIDMwMwpb
ICAxNzguMjg0OTM4XSAKWyAgMTc4LjI4NDkzOV0gdmNwdSAyClsgIDE3OC4yODQ5NDBdICAgMDog
bWFza2VkPTAgcGVuZGluZz0wIGV2ZW50X3NlbCAwMDAwMDAwMDAwMDAwMDAwClsgIDE3OC4yOTAy
MDBdICAgMTogbWFza2VkPTAgcGVuZGluZz0wIGV2ZW50X3NlbCAwMDAwMDAwMDAwMDAwMDAwClsg
IDE3OC4yOTYyODVdICAgMjogbWFza2VkPTAgcGVuZGluZz0xIGV2ZW50X3NlbCAwMDAwMDAwMDAw
MDAwMDAxClsgIDE3OC4zMDIzNzFdICAgMzogbWFza2VkPTEgcGVuZGluZz0xIGV2ZW50X3NlbCAw
MDAwMDAwMDAwMDAwMDAxClsgIDE3OC4zMDg0NTVdICAgClsgIDE3OC4zMTQ1NDFdIHBlbmRpbmc6
ClsgIDE3OC4zMTQ1NDFdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxNzguMzI5Mzk3XSAg
ICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTc4LjM0MzM1OF0gICAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwClsgIDE3OC4zNTczMjBdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxNzguMzcx
MjgxXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTc4LjM4NTI0Ml0gICAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwClsgIDE3OC4zOTkyMDNdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAx
NzguNDEzMTY1XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAyOGUwMDQKWyAgMTc4LjQyNzEyNl0gICAgClsg
IDE3OC40MzA0MzddIGdsb2JhbCBtYXNrOgpbICAxNzguNDMwNDM3XSAgICBmZmZmZmZmZmZmZmZm
ZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZm
ZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZm
ZmZmZmZmZmYKWyAgMTc4LjQ0NTY1MV0gICAgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZm
ZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZm
ZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmClsgIDE3OC40
NTk2MTJdICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZm
ZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZm
ZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZgpbICAxNzguNDczNTczXSAgICBmZmZmZmZm
ZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZm
ZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZm
ZmZmZmZmZmZmZmZmZmYKWyAgMTc4LjQ4NzUzNV0gICAgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZm
ZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZm
ZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmClsg
IDE3OC41MDE0OTZdICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZm
ZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZm
ZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZgpbICAxNzguNTE1NDU3XSAgICBm
ZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZm
ZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZm
ZmZmIGZmZmZmZmZmZmZmZmZmZmYKWyAgMTc4LjUyOTQxOF0gICAgZmZmZmZmZmZmZmZmZmZmZiBm
ZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZm
ZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmY4MDA4MTA0
MTA1ClsgIDE3OC41NDMzNzldICAgIApbICAxNzguNTQ2NjkwXSBnbG9iYWxseSB1bm1hc2tlZDoK
WyAgMTc4LjU0NjY5MV0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDE3OC41NjI0NDFdICAg
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxNzguNTc2NDAzXSAgICAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAKWyAgMTc4LjU5MDM2NV0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDE3OC42MDQz
MjVdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxNzguNjE4Mjg2XSAgICAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAKWyAgMTc4LjYzMjI0N10gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDE3
OC42NDYyMDldICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDI4YTAwMApbICAxNzguNjYwMTcwXSAgICAKWyAg
MTc4LjY2MzQ4MV0gbG9jYWwgY3B1MiBtYXNrOgpbICAxNzguNjYzNDgyXSAgICAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAKWyAgMTc4LjY3OTA1M10gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDE3
OC42OTMwMTVdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxNzguNzA2OTc2XSAgICAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAKWyAgMTc4LjcyMDkzN10gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
ClsgIDE3OC43MzQ4OThdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxNzguNzQ4ODU5XSAg
ICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTc4Ljc2MjgyMF0gICAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDdlMDAwClsgIDE3OC43NzY3ODRdICAgIApbICAxNzguNzgwMDkzXSBsb2NhbGx5IHVubWFza2Vk
OgpbICAxNzguNzgwMDkzXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTc4Ljc5NTc1NV0g
ICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDE3OC44MDk3MTZdICAgIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMApbICAxNzguODIzNjc3XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTc4Ljgz
NzYzOF0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDE3OC44NTE1OTldICAgIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMApbICAxNzguODY1NTYxXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAg
MTc4Ljg3OTUyMl0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDBhMDAwClsgIDE3OC44OTM0ODRdICAgIApb
ICAxNzguODk2Nzk0XSBwZW5kaW5nIGxpc3Q6ClsgIDE3OC44OTk4MzhdICAgMDogZXZlbnQgMiAt
PiBpcnEgMjczIGdsb2JhbGx5LW1hc2tlZCBsb2NhbGx5LW1hc2tlZApbICAxNzguOTA2MjgxXSAg
IDI6IGV2ZW50IDEzIC0+IGlycSAyODQKWyAgMTc4LjkxMDAzOV0gICAyOiBldmVudCAxNCAtPiBp
cnEgMjg1IGdsb2JhbGx5LW1hc2tlZApbICAxNzguOTE1MjMxXSAgIDI6IGV2ZW50IDE1IC0+IGly
cSAyODYKWyAgMTc4LjkxODk5MF0gICAzOiBldmVudCAxOSAtPiBpcnEgMjkwIGxvY2FsbHktbWFz
a2VkClsgIDE3OC45MjQwOTFdICAgMzogZXZlbnQgMjEgLT4gaXJxIDI5MiBsb2NhbGx5LW1hc2tl
ZApbICAxNzguOTI5MjE0XSAKWyAgMTc4LjkyOTIxNV0gdmNwdSAzClsgIDE3OC45MjkyMTZdICAg
MDogbWFza2VkPTAgcGVuZGluZz0wIGV2ZW50X3NlbCAwMDAwMDAwMDAwMDAwMDAwClsgIDE3OC45
MzQ0NzNdICAgMTogbWFza2VkPTAgcGVuZGluZz0xIGV2ZW50X3NlbCAwMDAwMDAwMDAwMDAwMDAx
ClsgIDE3OC45NDA1NThdICAgMjogbWFza2VkPTEgcGVuZGluZz0wIGV2ZW50X3NlbCAwMDAwMDAw
MDAwMDAwMDAwClsgIDE3OC45NDY2NDRdICAgMzogbWFza2VkPTAgcGVuZGluZz0xIGV2ZW50X3Nl
bCAwMDAwMDAwMDAwMDAwMDAxClsgIDE3OC45NTI3MjldICAgClsgIDE3OC45NTg4MTVdIHBlbmRp
bmc6ClsgIDE3OC45NTg4MTZdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxNzguOTczNjcx
XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTc4Ljk4NzYzMl0gICAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwClsgIDE3OS4wMDE1OTRdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxNzku
MDE1NTU0XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTc5LjAyOTUxNV0gICAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwClsgIDE3OS4wNDM0NzddICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApb
ICAxNzkuMDU3NDM4XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAzODQwMDQKWyAgMTc5LjA3MTM5OV0gICAg
ClsgIDE3OS4wNzQ3MTBdIGdsb2JhbCBtYXNrOgpbICAxNzkuMDc0NzExXSAgICBmZmZmZmZmZmZm
ZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZm
IGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZm
ZmZmZmZmZmZmZmYKWyAgMTc5LjA4OTkyNV0gICAgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZm
ZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZm
IGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmClsgIDE3
OS4xMDM4ODVdICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZm
ZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZm
IGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZgpbICAxNzkuMTE3ODQ3XSAgICBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZm
ZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZm
IGZmZmZmZmZmZmZmZmZmZmYKWyAgMTc5LjEzMTgwOF0gICAgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZm
ZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZm
ClsgIDE3OS4xNDU3NzBdICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZm
ZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZgpbICAxNzkuMTU5NzMwXSAg
ICBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZm
ZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYKWyAgMTc5LjE3MzY5Ml0gICAgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmY4MDA4
MTA0MTA1ClsgIDE3OS4xODc2NTJdICAgIApbICAxNzkuMTkwOTYzXSBnbG9iYWxseSB1bm1hc2tl
ZDoKWyAgMTc5LjE5MDk2NF0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDE3OS4yMDY3MTVd
ICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxNzkuMjIwNjc2XSAgICAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAKWyAgMTc5LjIzNDYzOF0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDE3OS4y
NDg1OThdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxNzkuMjYyNTYwXSAgICAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAKWyAgMTc5LjI3NjUyMl0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsg
IDE3OS4yOTA0ODJdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDI4MDAwMApbICAxNzkuMzA0NDQ2XSAgICAK
WyAgMTc5LjMwNzc1NV0gbG9jYWwgY3B1MyBtYXNrOgpbICAxNzkuMzA3NzU2XSAgICAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAKWyAgMTc5LjMyMzMyN10gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsg
IDE3OS4zMzcyODhdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxNzkuMzUxMjQ5XSAgICAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTc5LjM2NTIxMV0gICAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwClsgIDE3OS4zNzkxNzJdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxNzkuMzkzMTMz
XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTc5LjQwNzA5NF0gICAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAxZjgwMDAwClsgIDE3OS40MjEwNTVdICAgIApbICAxNzkuNDI0MzY3XSBsb2NhbGx5IHVubWFz
a2VkOgpbICAxNzkuNDI0MzY3XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTc5LjQ0MDAy
OV0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDE3OS40NTM5OTBdICAgIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMApbICAxNzkuNDY3OTUxXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTc5
LjQ4MTkxMl0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDE3OS40OTU4NzNdICAgIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMApbICAxNzkuNTA5ODM1XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAK
WyAgMTc5LjUyMzc5NV0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMjgwMDAwClsgIDE3OS41Mzc3NTddICAg
IApbICAxNzkuNTQxMDY4XSBwZW5kaW5nIGxpc3Q6ClsgIDE3OS41NDQxMTFdICAgMDogZXZlbnQg
MiAtPiBpcnEgMjczIGdsb2JhbGx5LW1hc2tlZCBsb2NhbGx5LW1hc2tlZApbICAxNzkuNTUwNTU1
XSAgIDI6IGV2ZW50IDE0IC0+IGlycSAyODUgZ2xvYmFsbHktbWFza2VkIGxvY2FsbHktbWFza2Vk
ClsgIDE3OS41NTcwODhdICAgMzogZXZlbnQgMTkgLT4gaXJxIDI5MApbICAxNzkuNTYwODQ3XSAg
IDM6IGV2ZW50IDIwIC0+IGlycSAyOTEgZ2xvYmFsbHktbWFza2VkClsgIDE3OS41NjYwMzhdICAg
MzogZXZlbnQgMjEgLT4gaXJxIDI5MgoK
--14dae9340a33ade82404c6b2a306
Content-Type: text/plain; charset=US-ASCII; name="syslog-good.txt"
Content-Disposition: attachment; filename="syslog-good.txt"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_h5lfctae3

WyAgMTE2Ljk3MDEzMF0gaXdsd2lmaSAwMDAwOjAyOjAwLjA6IEwxIERpc2FibGVkOyBFbmFibGlu
ZyBMMFMKWyAgMTE2Ljk4MjEyNV0gaXdsd2lmaSAwMDAwOjAyOjAwLjA6IFJhZGlvIHR5cGU9MHgx
LTB4Mi0weDAKWyAgMTE5LjY4MDQ5NV0gaXdsd2lmaSAwMDAwOjAyOjAwLjA6IFBDSSBJTlQgQSBk
aXNhYmxlZApbICAxMjAuMzU0NjcxXSBici1ldGgwOiBwb3J0IDEoZXRoMCkgZW50ZXJpbmcgZm9y
d2FyZGluZyBzdGF0ZQpbICAxMjAuMzY4Njk0XSBici1ldGgwOiBwb3J0IDEoZXRoMCkgZW50ZXJp
bmcgZm9yd2FyZGluZyBzdGF0ZQpbICAxMjAuMzc0MzU5XSBici1ldGgwOiBwb3J0IDEoZXRoMCkg
ZW50ZXJpbmcgZm9yd2FyZGluZyBzdGF0ZQpbICAxMjAuMzgwNzg2XSBici1ldGgwOiBwb3J0IDEo
ZXRoMCkgZW50ZXJpbmcgZm9yd2FyZGluZyBzdGF0ZQpbICAxMjAuNzUwMTAxXSBlMTAwMGUgMDAw
MDowMDoxOS4wOiBQQ0kgSU5UIEEgZGlzYWJsZWQKWyAgMTIxLjA4MDAwOF0gZWhjaV9oY2QgMDAw
MDowMDoxZC4wOiByZW1vdmUsIHN0YXRlIDEKWyAgMTIxLjA4NDg1N10gdXNiIHVzYjQ6IFVTQiBk
aXNjb25uZWN0LCBkZXZpY2UgbnVtYmVyIDEKWyAgMTIxLjA5MDE0NV0gdXNiIDQtMTogVVNCIGRp
c2Nvbm5lY3QsIGRldmljZSBudW1iZXIgMgpbICAxMjEuMDk1NDQwXSB1c2IgNC0xLjU6IFVTQiBk
aXNjb25uZWN0LCBkZXZpY2UgbnVtYmVyIDMKWyAgMTIxLjIyMTc1MF0gdXNiIDQtMS42OiBVU0Ig
ZGlzY29ubmVjdCwgZGV2aWNlIG51bWJlciA0ClsgIDEyMS41NTYwMTFdIGVoY2lfaGNkIDAwMDA6
MDA6MWQuMDogVVNCIGJ1cyA0IGRlcmVnaXN0ZXJlZApbICAxMjEuNTYyNjc4XSBlaGNpX2hjZCAw
MDAwOjAwOjFkLjA6IFBDSSBJTlQgQSBkaXNhYmxlZApbICAxMjEuNTY3ODAwXSBlaGNpX2hjZCAw
MDAwOjAwOjFhLjA6IHJlbW92ZSwgc3RhdGUgNApbICAxMjEuNTczMjk0XSB1c2IgdXNiMzogVVNC
IGRpc2Nvbm5lY3QsIGRldmljZSBudW1iZXIgMQpbICAxMjEuNTc4NDA1XSB1c2IgMy0xOiBVU0Ig
ZGlzY29ubmVjdCwgZGV2aWNlIG51bWJlciAyClsgIDEyMS41ODgzOTddIGVoY2lfaGNkIDAwMDA6
MDA6MWEuMDogVVNCIGJ1cyAzIGRlcmVnaXN0ZXJlZApbICAxMjEuNTk0MjAyXSBlaGNpX2hjZCAw
MDAwOjAwOjFhLjA6IFBDSSBJTlQgQSBkaXNhYmxlZApbICAxMjEuNjcwMDIxXSB4aGNpX2hjZCAw
MDAwOjAwOjE0LjA6IHJlbW92ZSwgc3RhdGUgNApbICAxMjEuNjc0ODY3XSB1c2IgdXNiMjogVVNC
IGRpc2Nvbm5lY3QsIGRldmljZSBudW1iZXIgMQpbICAxMjEuNjgwODY3XSB4SENJIHhoY2lfZHJv
cF9lbmRwb2ludCBjYWxsZWQgZm9yIHJvb3QgaHViClsgIDEyMS42ODYxNThdIHhIQ0kgeGhjaV9j
aGVja19iYW5kd2lkdGggY2FsbGVkIGZvciByb290IGh1YgpbICAxMjEuNjkyNzA5XSB4aGNpX2hj
ZCAwMDAwOjAwOjE0LjA6IFVTQiBidXMgMiBkZXJlZ2lzdGVyZWQKWyAgMTIxLjY5ODQ2MF0geGhj
aV9oY2QgMDAwMDowMDoxNC4wOiByZW1vdmUsIHN0YXRlIDQKWyAgMTIxLjcwMzgyMl0gdXNiIHVz
YjE6IFVTQiBkaXNjb25uZWN0LCBkZXZpY2UgbnVtYmVyIDEKWyAgMTIxLjcwOTE1Nl0geEhDSSB4
aGNpX2Ryb3BfZW5kcG9pbnQgY2FsbGVkIGZvciByb290IGh1YgpbICAxMjEuNzE0NDU1XSB4SENJ
IHhoY2lfY2hlY2tfYmFuZHdpZHRoIGNhbGxlZCBmb3Igcm9vdCBodWIKWyAgMTIxLjcyMDU2MF0g
eGhjaV9oY2QgMDAwMDowMDoxNC4wOiBVU0IgYnVzIDEgZGVyZWdpc3RlcmVkClsgIDEyMS43NzAz
MzRdIHhoY2lfaGNkIDAwMDA6MDA6MTQuMDogUENJIElOVCBBIGRpc2FibGVkClsgIDEyMi4xMTcx
NDNdIFBNOiBTeW5jaW5nIGZpbGVzeXN0ZW1zIC4uLiBkb25lLgpbICAxMjIuMTIyMDAwXSBQTTog
UHJlcGFyaW5nIHN5c3RlbSBmb3IgbWVtIHNsZWVwClsgIDEyMi4zNjAwMjldIEZyZWV6aW5nIHVz
ZXIgc3BhY2UgcHJvY2Vzc2VzIC4uLiAoZWxhcHNlZCAwLjAxIHNlY29uZHMpIGRvbmUuClsgIDEy
Mi4zODI1OTVdIEZyZWV6aW5nIHJlbWFpbmluZyBmcmVlemFibGUgdGFza3MgLi4uIChlbGFwc2Vk
IDAuMDEgc2Vjb25kcykgZG9uZS4KWyAgMTIyLjQwMjU5NV0gUE06IEVudGVyaW5nIG1lbSBzbGVl
cApbICAxMjIuNDA2NTE0XSBzZCAwOjA6MDowOiBbc2RhXSBTeW5jaHJvbml6aW5nIFNDU0kgY2Fj
aGUKWyAgMTIyLjQxMTc3OF0gc2QgMDowOjA6MDogW3NkYV0gU3RvcHBpbmcgZGlzawpbICAxMjIu
NDE2NDQxXSBBQ1BJIGhhbmRsZSBoYXMgbm8gY29udGV4dCEKWyAgMTIyLjUyMDA2OF0gc25kX2hk
YV9pbnRlbCAwMDAwOjAwOjFiLjA6IFBDSSBJTlQgQSBkaXNhYmxlZApbICAxMjIuNTM5OTg2XSBQ
TTogc3VzcGVuZCBvZiBkcnY6c25kX2hkYV9pbnRlbCBkZXY6MDAwMDowMDoxYi4wIGNvbXBsZXRl
IGFmdGVyIDEyMy42MTkgbXNlY3MKWyAgMTIyLjU0ODQyNV0gUE06IHN1c3BlbmQgb2YgZHJ2OiBk
ZXY6cGNpMDAwMDowMCBjb21wbGV0ZSBhZnRlciAxMzEuODkwIG1zZWNzClsgIDEyMi41NTU2NzVd
IFBNOiBzdXNwZW5kIG9mIGRldmljZXMgY29tcGxldGUgYWZ0ZXIgMTQ5LjI1MCBtc2VjcwpbICAx
MjIuNTYxODM4XSBQTTogc3VzcGVuZCBkZXZpY2VzIHRvb2sgMC4xNjAgc2Vjb25kcwpbICAxMjIu
NTY3NzU0XSBQTTogbGF0ZSBzdXNwZW5kIG9mIGRldmljZXMgY29tcGxldGUgYWZ0ZXIgMC45MDkg
bXNlY3MKWyAgMTIyLjU3NDQ0NV0gQUNQSTogUHJlcGFyaW5nIHRvIGVudGVyIHN5c3RlbSBzbGVl
cCBzdGF0ZSBTMwpbICAxMjIuNTgwMjc3XSBQTTogU2F2aW5nIHBsYXRmb3JtIE5WUyBtZW1vcnkK
WyAgMTIyLjc3ODQ5Ml0gRGlzYWJsaW5nIG5vbi1ib290IENQVXMgLi4uCihYRU4pIFByZXBhcmlu
ZyBzeXN0ZW0gZm9yIEFDUEkgUzMgc3RhdGUuCihYRU4pIERpc2FibGluZyBub24tYm9vdCBDUFVz
IC4uLgooWEVOKSBCcmVha2luZyB2Y3B1IGFmZmluaXR5IGZvciBkb21haW4gMCB2Y3B1IDEKKFhF
TikgQnJlYWtpbmcgdmNwdSBhZmZpbml0eSBmb3IgZG9tYWluIDAgdmNwdSAyCihYRU4pIEJyZWFr
aW5nIHZjcHUgYWZmaW5pdHkgZm9yIGRvbWFpbiAwIHZjcHUgMwooWEVOKSBFbnRlcmluZyBBQ1BJ
IFMzIHN0YXRlLgooWEVOKSBtY2VfaW50ZWwuYzoxMjM5OiBNQ0EgQ2FwYWJpbGl0eTogQkNBU1Qg
MSBTRVIgMCBDTUNJIDEgZmlyc3RiYW5rIDAgZXh0ZW5kZWQgTUNFIE1TUiAwCihYRU4pIENQVTAg
Q01DSSBMVlQgdmVjdG9yICgweGYxKSBhbHJlYWR5IGluc3RhbGxlZAooWEVOKSBGaW5pc2hpbmcg
d2FrZXVwIGZyb20gQUNQSSBTMyBzdGF0ZS4KKFhFTikgbWljcm9jb2RlOiBjb2xsZWN0X2NwdV9p
bmZvIDogc2lnPTB4MzA2YTQsIHBmPTB4MiwgcmV2PTB4NwooWEVOKSBFbmFibGluZyBub24tYm9v
dCBDUFVzICAuLi4KKFhFTikgbWljcm9jb2RlOiBjb2xsZWN0X2NwdV9pbmZvIDogc2lnPTB4MzA2
YTQsIHBmPTB4MiwgcmV2PTB4NwooWEVOKSBtaWNyb2NvZGU6IGNvbGxlY3RfY3B1X2luZm8gOiBz
aWc9MHgzMDZhNCwgcGY9MHgyLCByZXY9MHg3CihYRU4pIG1pY3JvY29kZTogY29sbGVjdF9jcHVf
aW5mbyA6IHNpZz0weDMwNmE0LCBwZj0weDIsIHJldj0weDcKWyAgMTIzLjg2MjczOV0gQUNQSTog
TG93LWxldmVsIHJlc3VtZSBjb21wbGV0ZQpbICAxMjMuODY3MDU2XSBQTTogUmVzdG9yaW5nIHBs
YXRmb3JtIE5WUyBtZW1vcnkKKFhFTikgbWljcm9jb2RlOiBjb2xsZWN0X2NwdV9pbmZvIDogc2ln
PTB4MzA2YTQsIHBmPTB4MiwgcmV2PTB4NwooWEVOKSBtaWNyb2NvZGU6IGNvbGxlY3RfY3B1X2lu
Zm8gOiBzaWc9MHgzMDZhNCwgcGY9MHgyLCByZXY9MHg3CihYRU4pIG1pY3JvY29kZTogY29sbGVj
dF9jcHVfaW5mbyA6IHNpZz0weDMwNmE0LCBwZj0weDIsIHJldj0weDcKKFhFTikgbWljcm9jb2Rl
OiBjb2xsZWN0X2NwdV9pbmZvIDogc2lnPTB4MzA2YTQsIHBmPTB4MiwgcmV2PTB4NwpbICAxMjQu
MDE5NzE1XSBFbmFibGluZyBub24tYm9vdCBDUFVzIC4uLgpbICAxMjQuMDIzNTkzXSBpbnN0YWxs
aW5nIFhlbiB0aW1lciBmb3IgQ1BVIDEKWyAgMTI0LjAyNzgzM10gY3B1IDEgc3BpbmxvY2sgZXZl
bnQgaXJxIDI3OQpbICAxMjQuMDM0MDgwXSBDUFUxIGlzIHVwClsgIDEyNC4wMzY2MjVdIGluc3Rh
bGxpbmcgWGVuIHRpbWVyIGZvciBDUFUgMgpbICAxMjQuMDQwNzkzXSBjcHUgMiBzcGlubG9jayBl
dmVudCBpcnEgMjg1ClsgIDEyNC4wNDU5MDNdIENQVTIgaXMgdXAKWyAgMTI0LjA0ODM5OF0gaW5z
dGFsbGluZyBYZW4gdGltZXIgZm9yIENQVSAzClsgIDEyNC4wNTI1NjFdIGNwdSAzIHNwaW5sb2Nr
IGV2ZW50IGlycSAyOTEKWyAgMTI0LjA1Nzc4Ml0gQ1BVMyBpcyB1cApbICAxMjQuMDYxOTU5XSBB
Q1BJOiBXYWtpbmcgdXAgZnJvbSBzeXN0ZW0gc2xlZXAgc3RhdGUgUzMKWyAgMTI0LjA2NzU2N10g
aTkxNSAwMDAwOjAwOjAyLjA6IHJlc3RvcmluZyBjb25maWcgc3BhY2UgYXQgb2Zmc2V0IDB4ZiAo
d2FzIDB4MTAwLCB3cml0aW5nIDB4MTBiKQpbICAxMjQuMDc2Mzg0XSBpOTE1IDAwMDA6MDA6MDIu
MDogcmVzdG9yaW5nIGNvbmZpZyBzcGFjZSBhdCBvZmZzZXQgMHgxICh3YXMgMHg5MDAwMDcsIHdy
aXRpbmcgMHg5MDA0MDcpClsgIDEyNC4wODU4OTldIHBjaSAwMDAwOjAwOjE0LjA6IHJlc3Rvcmlu
ZyBjb25maWcgc3BhY2UgYXQgb2Zmc2V0IDB4ZiAod2FzIDB4MTAwLCB3cml0aW5nIDB4MTBiKQpb
ICAxMjQuMDk0NzI3XSBwY2kgMDAwMDowMDoxNC4wOiByZXN0b3JpbmcgY29uZmlnIHNwYWNlIGF0
IG9mZnNldCAweDQgKHdhcyAweDQsIHdyaXRpbmcgMHhiMDJiMDAwNCkKWyAgMTI0LjEwMzgzOV0g
cGNpIDAwMDA6MDA6MTQuMDogcmVzdG9yaW5nIGNvbmZpZyBzcGFjZSBhdCBvZmZzZXQgMHgxICh3
YXMgMHgyOTAwMDAwLCB3cml0aW5nIDB4MjkwMDAwMikKWyAgMTI0LjExMzQ0NF0gcGNpIDAwMDA6
MDA6MTYuMDogcmVzdG9yaW5nIGNvbmZpZyBzcGFjZSBhdCBvZmZzZXQgMHhmICh3YXMgMHgxMDAs
IHdyaXRpbmcgMHgxMGIpClsgIDEyNC4xMjIyOTNdIHBjaSAwMDAwOjAwOjE2LjA6IHJlc3Rvcmlu
ZyBjb25maWcgc3BhY2UgYXQgb2Zmc2V0IDB4NCAod2FzIDB4ZmVkYjAwMDQsIHdyaXRpbmcgMHhi
MDJhMDAwNCkKWyAgMTI0LjEzMjAyNV0gcGNpIDAwMDA6MDA6MTYuMDogcmVzdG9yaW5nIGNvbmZp
ZyBzcGFjZSBhdCBvZmZzZXQgMHgxICh3YXMgMHgxODAwMDYsIHdyaXRpbmcgMHgxMDAwMDYpClsg
IDEyNC4xNDE0NTldIHNlcmlhbCAwMDAwOjAwOjE2LjM6IHJlc3RvcmluZyBjb25maWcgc3BhY2Ug
YXQgb2Zmc2V0IDB4ZiAod2FzIDB4MjAwLCB3cml0aW5nIDB4MjBhKQpbICAxMjQuMTUwNTcyXSBz
ZXJpYWwgMDAwMDowMDoxNi4zOiByZXN0b3JpbmcgY29uZmlnIHNwYWNlIGF0IG9mZnNldCAweDUg
KHdhcyAweDAsIHdyaXRpbmcgMHhiMDI5MDAwMCkKWyAgMTI0LjE1OTk0NF0gc2VyaWFsIDAwMDA6
MDA6MTYuMzogcmVzdG9yaW5nIGNvbmZpZyBzcGFjZSBhdCBvZmZzZXQgMHg0ICh3YXMgMHgxLCB3
cml0aW5nIDB4MzBlMSkKWyAgMTI0LjE2ODk4OF0gc2VyaWFsIDAwMDA6MDA6MTYuMzogcmVzdG9y
aW5nIGNvbmZpZyBzcGFjZSBhdCBvZmZzZXQgMHgxICh3YXMgMHhiMDAwMDAsIHdyaXRpbmcgMHhi
MDAwMDcpClsgIDEyNC4xNzg2OTddIHBjaSAwMDAwOjAwOjE5LjA6IHJlc3RvcmluZyBjb25maWcg
c3BhY2UgYXQgb2Zmc2V0IDB4ZiAod2FzIDB4MTAwLCB3cml0aW5nIDB4MTA1KQpbICAxMjQuMTg3
NTQ2XSBwY2kgMDAwMDowMDoxOS4wOiByZXN0b3JpbmcgY29uZmlnIHNwYWNlIGF0IG9mZnNldCAw
eDEgKHdhcyAweDEwMDAwMiwgd3JpdGluZyAweDEwMDAwMykKWyAgMTI0LjE5Njk1NV0gcGNpIDAw
MDA6MDA6MWEuMDogcmVzdG9yaW5nIGNvbmZpZyBzcGFjZSBhdCBvZmZzZXQgMHhmICh3YXMgMHgx
MDAsIHdyaXRpbmcgMHgxMGIpClsgIDEyNC4yMDU3OTNdIHBjaSAwMDAwOjAwOjFhLjA6IHJlc3Rv
cmluZyBjb25maWcgc3BhY2UgYXQgb2Zmc2V0IDB4NCAod2FzIDB4MCwgd3JpdGluZyAweGIwMjcw
MDAwKQpbICAxMjQuMjE0ODk2XSBwY2kgMDAwMDowMDoxYS4wOiByZXN0b3JpbmcgY29uZmlnIHNw
YWNlIGF0IG9mZnNldCAweDEgKHdhcyAweDI5MDAwMDAsIHdyaXRpbmcgMHgyOTAwMDAyKQpbICAx
MjQuMjI0NTI0XSBzbmRfaGRhX2ludGVsIDAwMDA6MDA6MWIuMDogcmVzdG9yaW5nIGNvbmZpZyBz
cGFjZSBhdCBvZmZzZXQgMHhmICh3YXMgMHgxMDAsIHdyaXRpbmcgMHgxMDMpClsgIDEyNC4yMzQy
NTRdIHNuZF9oZGFfaW50ZWwgMDAwMDowMDoxYi4wOiByZXN0b3JpbmcgY29uZmlnIHNwYWNlIGF0
IG9mZnNldCAweDQgKHdhcyAweDQsIHdyaXRpbmcgMHhiMDI2MDAwNCkKWyAgMTI0LjI0NDI0OF0g
c25kX2hkYV9pbnRlbCAwMDAwOjAwOjFiLjA6IHJlc3RvcmluZyBjb25maWcgc3BhY2UgYXQgb2Zm
c2V0IDB4MyAod2FzIDB4MCwgd3JpdGluZyAweDEwKQpbICAxMjQuMjUzNzM5XSBzbmRfaGRhX2lu
dGVsIDAwMDA6MDA6MWIuMDogcmVzdG9yaW5nIGNvbmZpZyBzcGFjZSBhdCBvZmZzZXQgMHgxICh3
YXMgMHgxMDAwMDAsIHdyaXRpbmcgMHgxMDAwMDIpClsgIDEyNC4yNjQwODldIHBjaWVwb3J0IDAw
MDA6MDA6MWMuMDogcmVzdG9yaW5nIGNvbmZpZyBzcGFjZSBhdCBvZmZzZXQgMHhmICh3YXMgMHgx
MDAsIHdyaXRpbmcgMHgxMGIpClsgIDEyNC4yNzMzNTZdIHBjaWVwb3J0IDAwMDA6MDA6MWMuMDog
cmVzdG9yaW5nIGNvbmZpZyBzcGFjZSBhdCBvZmZzZXQgMHg3ICh3YXMgMHhmMCwgd3JpdGluZyAw
eDIwMDAwMGYwKQpbICAxMjQuMjgzMDA4XSBwY2llcG9ydCAwMDAwOjAwOjFjLjA6IHJlc3Rvcmlu
ZyBjb25maWcgc3BhY2UgYXQgb2Zmc2V0IDB4MyAod2FzIDB4ODEwMDAwLCB3cml0aW5nIDB4ODEw
MDEwKQpbICAxMjQuMjkyODQ4XSBwY2llcG9ydCAwMDAwOjAwOjFjLjA6IHJlc3RvcmluZyBjb25m
aWcgc3BhY2UgYXQgb2Zmc2V0IDB4MSAod2FzIDB4MTAwMDAwLCB3cml0aW5nIDB4MTAwMDA3KQpb
ICAxMjQuMzAyNzYwXSBwY2llcG9ydCAwMDAwOjAwOjFjLjY6IHJlc3RvcmluZyBjb25maWcgc3Bh
Y2UgYXQgb2Zmc2V0IDB4ZiAod2FzIDB4MzAwLCB3cml0aW5nIDB4MzA0KQpbICAxMjQuMzEyMDE4
XSBwY2llcG9ydCAwMDAwOjAwOjFjLjY6IHJlc3RvcmluZyBjb25maWcgc3BhY2UgYXQgb2Zmc2V0
IDB4NyAod2FzIDB4ZjAsIHdyaXRpbmcgMHgyMDAwMDBmMCkKWyAgMTI0LjMyMTY3MF0gcGNpZXBv
cnQgMDAwMDowMDoxYy42OiByZXN0b3JpbmcgY29uZmlnIHNwYWNlIGF0IG9mZnNldCAweDMgKHdh
cyAweDgxMDAwMCwgd3JpdGluZyAweDgxMDAxMCkKWyAgMTI0LjMzMTUwOV0gcGNpZXBvcnQgMDAw
MDowMDoxYy42OiByZXN0b3JpbmcgY29uZmlnIHNwYWNlIGF0IG9mZnNldCAweDEgKHdhcyAweDEw
MDAwMCwgd3JpdGluZyAweDEwMDAwNykKWyAgMTI0LjM0MTQyMl0gcGNpZXBvcnQgMDAwMDowMDox
Yy43OiByZXN0b3JpbmcgY29uZmlnIHNwYWNlIGF0IG9mZnNldCAweGYgKHdhcyAweDQwMCwgd3Jp
dGluZyAweDQwYSkKWyAgMTI0LjM1MDY3N10gcGNpZXBvcnQgMDAwMDowMDoxYy43OiByZXN0b3Jp
bmcgY29uZmlnIHNwYWNlIGF0IG9mZnNldCAweDcgKHdhcyAweGYwLCB3cml0aW5nIDB4MjAwMDAw
ZjApClsgIDEyNC4zNjAzMzNdIHBjaWVwb3J0IDAwMDA6MDA6MWMuNzogcmVzdG9yaW5nIGNvbmZp
ZyBzcGFjZSBhdCBvZmZzZXQgMHgzICh3YXMgMHg4MTAwMDAsIHdyaXRpbmcgMHg4MTAwMTApClsg
IDEyNC4zNzAyMzddIHBjaSAwMDAwOjAwOjFkLjA6IHJlc3RvcmluZyBjb25maWcgc3BhY2UgYXQg
b2Zmc2V0IDB4ZiAod2FzIDB4MTAwLCB3cml0aW5nIDB4MTBiKQpbICAxMjQuMzc5MDU1XSBwY2kg
MDAwMDowMDoxZC4wOiByZXN0b3JpbmcgY29uZmlnIHNwYWNlIGF0IG9mZnNldCAweDQgKHdhcyAw
eDAsIHdyaXRpbmcgMHhiMDI1MDAwMCkKWyAgMTI0LjM4ODE2Ml0gcGNpIDAwMDA6MDA6MWQuMDog
cmVzdG9yaW5nIGNvbmZpZyBzcGFjZSBhdCBvZmZzZXQgMHgxICh3YXMgMHgyOTAwMDAwLCB3cml0
aW5nIDB4MjkwMDAwMikKWyAgMTI0LjM5Nzk2M10gYWhjaSAwMDAwOjAwOjFmLjI6IHJlc3Rvcmlu
ZyBjb25maWcgc3BhY2UgYXQgb2Zmc2V0IDB4MSAod2FzIDB4MmIwMDAwNywgd3JpdGluZyAweDJi
MDA0MDcpClsgIDEyNC40MDc1ODFdIHBjaSAwMDAwOjAwOjFmLjM6IHJlc3RvcmluZyBjb25maWcg
c3BhY2UgYXQgb2Zmc2V0IDB4MSAod2FzIDB4MjgwMDAwMSwgd3JpdGluZyAweDI4MDAwMDMpClsg
IDEyNC40MTcxMDVdIHBjaSAwMDAwOjAyOjAwLjA6IHJlc3RvcmluZyBjb25maWcgc3BhY2UgYXQg
b2Zmc2V0IDB4ZiAod2FzIDB4MTAwLCB3cml0aW5nIDB4MTA0KQpbICAxMjQuNDI1OTU0XSBwY2kg
MDAwMDowMjowMC4wOiByZXN0b3JpbmcgY29uZmlnIHNwYWNlIGF0IG9mZnNldCAweDQgKHdhcyAw
eDQsIHdyaXRpbmcgMHhiMDEwMDAwNCkKWyAgMTI0LjQzNTAzMl0gcGNpIDAwMDA6MDI6MDAuMDog
cmVzdG9yaW5nIGNvbmZpZyBzcGFjZSBhdCBvZmZzZXQgMHgzICh3YXMgMHgwLCB3cml0aW5nIDB4
MTApClsgIDEyNC40NDM2MzNdIHBjaSAwMDAwOjAyOjAwLjA6IHJlc3RvcmluZyBjb25maWcgc3Bh
Y2UgYXQgb2Zmc2V0IDB4MSAod2FzIDB4MTAwMDAwLCB3cml0aW5nIDB4MTAwMDAyKQpbICAxMjQu
NDUzMTI4XSBwY2kgMDAwMDowMzowMC4wOiByZXN0b3JpbmcgY29uZmlnIHNwYWNlIGF0IG9mZnNl
dCAweDkgKHdhcyAweDEwMDAxLCB3cml0aW5nIDB4MWZmZjEpClsgIDEyNC40NjIyNDRdIHBjaSAw
MDAwOjAzOjAwLjA6IHJlc3RvcmluZyBjb25maWcgc3BhY2UgYXQgb2Zmc2V0IDB4NyAod2FzIDB4
MjJhMDAxMDEsIHdyaXRpbmcgMHgyMmEwMDFmMSkKWyAgMTI0LjQ3MjAwNl0gcGNpIDAwMDA6MDM6
MDAuMDogcmVzdG9yaW5nIGNvbmZpZyBzcGFjZSBhdCBvZmZzZXQgMHgzICh3YXMgMHgxMDAwMCwg
d3JpdGluZyAweDEwMDEwKQpbICAxMjQuNDgxMjc5XSBwY2kgMDAwMDowNDowMC4wOiByZXN0b3Jp
bmcgY29uZmlnIHNwYWNlIGF0IG9mZnNldCAweGYgKHdhcyAweDQwMjAxMDAsIHdyaXRpbmcgMHg0
MDIwMTBhKQpbICAxMjQuNDkwODIyXSBwY2kgMDAwMDowNDowMC4wOiByZXN0b3JpbmcgY29uZmln
IHNwYWNlIGF0IG9mZnNldCAweDUgKHdhcyAweDAsIHdyaXRpbmcgMHhiMDAwMDAwMCkKWyAgMTI0
LjQ5OTkyMV0gcGNpIDAwMDA6MDQ6MDAuMDogcmVzdG9yaW5nIGNvbmZpZyBzcGFjZSBhdCBvZmZz
ZXQgMHgzICh3YXMgMHgwLCB3cml0aW5nIDB4MjAxMCkKWyAgMTI0LjUwODcxOF0gc2VyaWFsIDAw
MDA6MDU6MDEuMDogcmVzdG9yaW5nIGNvbmZpZyBzcGFjZSBhdCBvZmZzZXQgMHhmICh3YXMgMHgx
ZmYsIHdyaXRpbmcgMHgxMDMpClsgIDEyNC41MTc4MjldIHNlcmlhbCAwMDAwOjA1OjAxLjA6IHJl
c3RvcmluZyBjb25maWcgc3BhY2UgYXQgb2Zmc2V0IDB4OSAod2FzIDB4MSwgd3JpdGluZyAweDIw
MDEpClsgIDEyNC41MjY4NTNdIHNlcmlhbCAwMDAwOjA1OjAxLjA6IHJlc3RvcmluZyBjb25maWcg
c3BhY2UgYXQgb2Zmc2V0IDB4OCAod2FzIDB4MSwgd3JpdGluZyAweDIwMTEpClsgIDEyNC41MzU4
OTNdIHNlcmlhbCAwMDAwOjA1OjAxLjA6IHJlc3RvcmluZyBjb25maWcgc3BhY2UgYXQgb2Zmc2V0
IDB4NyAod2FzIDB4MSwgd3JpdGluZyAweDIwMjEpClsgIDEyNC41NDQ5MjldIHNlcmlhbCAwMDAw
OjA1OjAxLjA6IHJlc3RvcmluZyBjb25maWcgc3BhY2UgYXQgb2Zmc2V0IDB4NiAod2FzIDB4MSwg
d3JpdGluZyAweDIwMzEpClsgIDEyNC41NTM5NzFdIHNlcmlhbCAwMDAwOjA1OjAxLjA6IHJlc3Rv
cmluZyBjb25maWcgc3BhY2UgYXQgb2Zmc2V0IDB4NSAod2FzIDB4MSwgd3JpdGluZyAweDIwNDEp
ClsgIDEyNC41NjMwMTJdIHNlcmlhbCAwMDAwOjA1OjAxLjA6IHJlc3RvcmluZyBjb25maWcgc3Bh
Y2UgYXQgb2Zmc2V0IDB4MyAod2FzIDB4OCwgd3JpdGluZyAweDIwMTApClsgIDEyNC41NzIxMjhd
IFBNOiBlYXJseSByZXN1bWUgb2YgZGV2aWNlcyBjb21wbGV0ZSBhZnRlciA1MDQuNjQ1IG1zZWNz
ClsgIDEyNC41Nzg3MzBdIGk5MTUgMDAwMDowMDowMi4wOiBzZXR0aW5nIGxhdGVuY3kgdGltZXIg
dG8gNjQKWyAgMTI0LjU3ODc0M10geGVuOiByZWdpc3RlcmluZyBnc2kgMjIgdHJpZ2dlcmluZyAw
IHBvbGFyaXR5IDEKWyAgMTI0LjU3ODc0N10gQWxyZWFkeSBzZXR1cCB0aGUgR1NJIDoyMgpbICAx
MjQuNTc4NzQ5XSBzbmRfaGRhX2ludGVsIDAwMDA6MDA6MWIuMDogUENJIElOVCBBIC0+IEdTSSAy
MiAobGV2ZWwsIGxvdykgLT4gSVJRIDIyClsgIDEyNC41Nzg3NTNdIHBjaSAwMDAwOjAwOjFlLjA6
IHNldHRpbmcgbGF0ZW5jeSB0aW1lciB0byA2NApbICAxMjQuNTc4NzYxXSBzbmRfaGRhX2ludGVs
IDAwMDA6MDA6MWIuMDogc2V0dGluZyBsYXRlbmN5IHRpbWVyIHRvIDY0ClsgIDEyNC41Nzg3OTdd
IGFoY2kgMDAwMDowMDoxZi4yOiBzZXR0aW5nIGxhdGVuY3kgdGltZXIgdG8gNjQKWyAgMTI0LjU3
ODk2M10gcGNpIDAwMDA6MDM6MDAuMDogc2V0dGluZyBsYXRlbmN5IHRpbWVyIHRvIDY0ClsgIDEy
NC41Nzg5OThdIHNkIDA6MDowOjA6IFtzZGFdIFN0YXJ0aW5nIGRpc2sKWyAgMTI0LjkyNTM3NF0g
YXRhMzogU0FUQSBsaW5rIHVwIDEuNSBHYnBzIChTU3RhdHVzIDExMyBTQ29udHJvbCAzMDApClsg
IDEyNC45NDUzODNdIGF0YTE6IFNBVEEgbGluayB1cCA2LjAgR2JwcyAoU1N0YXR1cyAxMzMgU0Nv
bnRyb2wgMzAwKQpbICAxMjQuOTc0MzUxXSBhdGEzLjAwOiBjb25maWd1cmVkIGZvciBVRE1BLzEw
MApbICAxMjUuMTUzNTQ5XSBQTTogcmVzdW1lIG9mIGRydjppOTE1IGRldjowMDAwOjAwOjAyLjAg
Y29tcGxldGUgYWZ0ZXIgNTc0LjgzMyBtc2VjcwpbICAxMjYuODUzOTY4XSBhdGExLjAwOiBjb25m
aWd1cmVkIGZvciBVRE1BLzEzMwpbICAxMjYuODc1NDI4XSBQTTogcmVzdW1lIG9mIGRydjpzZCBk
ZXY6MDowOjA6MCBjb21wbGV0ZSBhZnRlciAyMjk2LjQyOCBtc2VjcwpbICAxMjYuODgyNDM3XSBQ
TTogcmVzdW1lIG9mIGRydjpzY3NpX2RldmljZSBkZXY6MDowOjA6MCBjb21wbGV0ZSBhZnRlciAy
MzAzLjQxMyBtc2VjcwpbICAxMjYuODgyNDQ4XSBQTTogcmVzdW1lIG9mIGRydjpzY3NpX2Rpc2sg
ZGV2OjA6MDowOjAgY29tcGxldGUgYWZ0ZXIgMTcyMS4zMjAgbXNlY3MKWyAgMTI2Ljg5ODE5NF0g
UE06IHJlc3VtZSBvZiBkZXZpY2VzIGNvbXBsZXRlIGFmdGVyIDIzMTkuNTA2IG1zZWNzClsgIDEy
Ni45MDQ0MjJdIFBNOiByZXN1bWUgZGV2aWNlcyB0b29rIDIuMzIwIHNlY29uZHMKWyAgMTI2Ljkw
OTI3Nl0gUE06IEZpbmlzaGluZyB3YWtldXAuClsgIDEyNi45MTI3NTBdIFJlc3RhcnRpbmcgdGFz
a3MgLi4uIGRvbmUuClsgIDEyNi45MTczMTddIHZpZGVvIExOWFZJREVPOjAwOiBSZXN0b3Jpbmcg
YmFja2xpZ2h0IHN0YXRlClsgIDEyNi45OTM5MDddIFtkcm06cGNoX2lycV9oYW5kbGVyXSAqRVJS
T1IqIFBDSCBwb2lzb24gaW50ZXJydXB0ClsgIDEyNy41MzEwMDddIGVoY2lfaGNkOiBVU0IgMi4w
ICdFbmhhbmNlZCcgSG9zdCBDb250cm9sbGVyIChFSENJKSBEcml2ZXIKWyAgMTI3LjUzODAwNV0g
eGVuOiByZWdpc3RlcmluZyBnc2kgMTYgdHJpZ2dlcmluZyAwIHBvbGFyaXR5IDEKWyAgMTI3LjU0
MzY1OF0gQWxyZWFkeSBzZXR1cCB0aGUgR1NJIDoxNgpbICAxMjcuNTQ3NTE1XSBlaGNpX2hjZCAw
MDAwOjAwOjFhLjA6IFBDSSBJTlQgQSAtPiBHU0kgMTYgKGxldmVsLCBsb3cpIC0+IElSUSAxNgpb
ICAxMjcuNTU1MDAwXSBlaGNpX2hjZCAwMDAwOjAwOjFhLjA6IHNldHRpbmcgbGF0ZW5jeSB0aW1l
ciB0byA2NApbICAxMjcuNTYxMDMxXSBlaGNpX2hjZCAwMDAwOjAwOjFhLjA6IEVIQ0kgSG9zdCBD
b250cm9sbGVyClsgIDEyNy41NjY1NjFdIGVoY2lfaGNkIDAwMDA6MDA6MWEuMDogbmV3IFVTQiBi
dXMgcmVnaXN0ZXJlZCwgYXNzaWduZWQgYnVzIG51bWJlciAxClsgIDEyNy41NzQyMTRdIGVoY2lf
aGNkIDAwMDA6MDA6MWEuMDogZGVidWcgcG9ydCAyClsgIDEyNy41ODI4MjNdIGVoY2lfaGNkIDAw
MDA6MDA6MWEuMDogY2FjaGUgbGluZSBzaXplIG9mIDY0IGlzIG5vdCBzdXBwb3J0ZWQKWyAgMTI3
LjU4OTc4M10gZWhjaV9oY2QgMDAwMDowMDoxYS4wOiBpcnEgMTYsIGlvIG1lbSAweGIwMjcwMDAw
ClsgIDEyNy42MTUzNjFdIGVoY2lfaGNkIDAwMDA6MDA6MWEuMDogVVNCIDIuMCBzdGFydGVkLCBF
SENJIDEuMDAKWyAgMTI3LjYyMTM3OV0gaHViIDEtMDoxLjA6IFVTQiBodWIgZm91bmQKWyAgMTI3
LjYyNTQwM10gaHViIDEtMDoxLjA6IDMgcG9ydHMgZGV0ZWN0ZWQKWyAgMTI3LjcyNTQzMl0geGVu
OiByZWdpc3RlcmluZyBnc2kgMjMgdHJpZ2dlcmluZyAwIHBvbGFyaXR5IDEKWyAgMTI3LjczMTA4
NV0gQWxyZWFkeSBzZXR1cCB0aGUgR1NJIDoyMwpbICAxMjcuNzM0OTMyXSBlaGNpX2hjZCAwMDAw
OjAwOjFkLjA6IFBDSSBJTlQgQSAtPiBHU0kgMjMgKGxldmVsLCBsb3cpIC0+IElSUSAyMwpbICAx
MjcuNzQyNDEwXSBlaGNpX2hjZCAwMDAwOjAwOjFkLjA6IHNldHRpbmcgbGF0ZW5jeSB0aW1lciB0
byA2NApbICAxMjcuNzQ4NDU5XSBlaGNpX2hjZCAwMDAwOjAwOjFkLjA6IEVIQ0kgSG9zdCBDb250
cm9sbGVyClsgIDEyNy43NTM5OTVdIGVoY2lfaGNkIDAwMDA6MDA6MWQuMDogbmV3IFVTQiBidXMg
cmVnaXN0ZXJlZCwgYXNzaWduZWQgYnVzIG51bWJlciAyClsgIDEyNy43NjE2NTVdIGVoY2lfaGNk
IDAwMDA6MDA6MWQuMDogZGVidWcgcG9ydCAyClsgIDEyNy43NzAyMjddIGVoY2lfaGNkIDAwMDA6
MDA6MWQuMDogY2FjaGUgbGluZSBzaXplIG9mIDY0IGlzIG5vdCBzdXBwb3J0ZWQKWyAgMTI3Ljc3
NzE3Ml0gZWhjaV9oY2QgMDAwMDowMDoxZC4wOiBpcnEgMjMsIGlvIG1lbSAweGIwMjUwMDAwClsg
IDEyNy44MDUzNjZdIGVoY2lfaGNkIDAwMDA6MDA6MWQuMDogVVNCIDIuMCBzdGFydGVkLCBFSENJ
IDEuMDAKWyAgMTI3LjgxMTM1MF0gaHViIDItMDoxLjA6IFVTQiBodWIgZm91bmQKWyAgMTI3Ljgx
NTEzNF0gaHViIDItMDoxLjA6IDMgcG9ydHMgZGV0ZWN0ZWQKWyAgMTI3Ljg3NjE4MV0geGVuOiBy
ZWdpc3RlcmluZyBnc2kgMTYgdHJpZ2dlcmluZyAwIHBvbGFyaXR5IDEKWyAgMTI3Ljg4MTgzNl0g
QWxyZWFkeSBzZXR1cCB0aGUgR1NJIDoxNgpbICAxMjcuODg1NzA0XSB4aGNpX2hjZCAwMDAwOjAw
OjE0LjA6IFBDSSBJTlQgQSAtPiBHU0kgMTYgKGxldmVsLCBsb3cpIC0+IElSUSAxNgpbICAxMjcu
ODkzMTkwXSB4aGNpX2hjZCAwMDAwOjAwOjE0LjA6IHNldHRpbmcgbGF0ZW5jeSB0aW1lciB0byA2
NApbICAxMjcuODk5MjEyXSB4aGNpX2hjZCAwMDAwOjAwOjE0LjA6IHhIQ0kgSG9zdCBDb250cm9s
bGVyClsgIDEyNy45MDQ3MTZdIHhoY2lfaGNkIDAwMDA6MDA6MTQuMDogbmV3IFVTQiBidXMgcmVn
aXN0ZXJlZCwgYXNzaWduZWQgYnVzIG51bWJlciAzClsgIDEyNy45MTI1MzJdIHhoY2lfaGNkIDAw
MDA6MDA6MTQuMDogY2FjaGUgbGluZSBzaXplIG9mIDY0IGlzIG5vdCBzdXBwb3J0ZWQKWyAgMTI3
LjkxOTQ1MF0geGhjaV9oY2QgMDAwMDowMDoxNC4wOiBpcnEgMTYsIGlvIG1lbSAweGIwMmIwMDAw
ClsgIDEyNy45MjU2MTJdIHhIQ0kgeGhjaV9hZGRfZW5kcG9pbnQgY2FsbGVkIGZvciByb290IGh1
YgpbICAxMjcuOTMwODEyXSB4SENJIHhoY2lfY2hlY2tfYmFuZHdpZHRoIGNhbGxlZCBmb3Igcm9v
dCBodWIKWyAgMTI3LjkzNjQ4OV0gaHViIDMtMDoxLjA6IFVTQiBodWIgZm91bmQKWyAgMTI3Ljk0
MDM5Ml0gaHViIDMtMDoxLjA6IDQgcG9ydHMgZGV0ZWN0ZWQKWyAgMTI3Ljk0NTM4MF0gdXNiIDEt
MTogbmV3IGhpZ2gtc3BlZWQgVVNCIGRldmljZSBudW1iZXIgMiB1c2luZyBlaGNpX2hjZApbICAx
MjcuOTg1Mzg0XSB4aGNpX2hjZCAwMDAwOjAwOjE0LjA6IHhIQ0kgSG9zdCBDb250cm9sbGVyClsg
IDEyNy45OTA3NjBdIHhoY2lfaGNkIDAwMDA6MDA6MTQuMDogbmV3IFVTQiBidXMgcmVnaXN0ZXJl
ZCwgYXNzaWduZWQgYnVzIG51bWJlciA0ClsgIDEyNy45OTg1MThdIHhIQ0kgeGhjaV9hZGRfZW5k
cG9pbnQgY2FsbGVkIGZvciByb290IGh1YgpbICAxMjguMDAzNzQ3XSB4SENJIHhoY2lfY2hlY2tf
YmFuZHdpZHRoIGNhbGxlZCBmb3Igcm9vdCBodWIKWyAgMTI4LjAwOTQzOV0gaHViIDQtMDoxLjA6
IFVTQiBodWIgZm91bmQKWyAgMTI4LjAxMzMyOV0gaHViIDQtMDoxLjA6IDQgcG9ydHMgZGV0ZWN0
ZWQKWyAgMTI4LjA5NjIyMV0gaHViIDEtMToxLjA6IFVTQiBodWIgZm91bmQKWyAgMTI4LjEwMDA0
N10gaHViIDEtMToxLjA6IDYgcG9ydHMgZGV0ZWN0ZWQKWyAgMTI4LjE5MDMyNV0gY2ZnODAyMTE6
IENhbGxpbmcgQ1JEQSB0byB1cGRhdGUgd29ybGQgcmVndWxhdG9yeSBkb21haW4KWyAgMTI4LjIw
NTI4Ml0gSW50ZWwoUikgV2lyZWxlc3MgV2lGaSBMaW5rIEFHTiBkcml2ZXIgZm9yIExpbnV4LCBp
bi10cmVlOgpbICAxMjguMjExOTQ5XSBDb3B5cmlnaHQoYykgMjAwMy0yMDExIEludGVsIENvcnBv
cmF0aW9uClsgIDEyOC4yMTcxODhdIHhlbjogcmVnaXN0ZXJpbmcgZ3NpIDE4IHRyaWdnZXJpbmcg
MCBwb2xhcml0eSAxClsgIDEyOC4yMjI5MjddIEFscmVhZHkgc2V0dXAgdGhlIEdTSSA6MTgKWyAg
MTI4LjIyNTM3Ml0gdXNiIDItMTogbmV3IGhpZ2gtc3BlZWQgVVNCIGRldmljZSBudW1iZXIgMiB1
c2luZyBlaGNpX2hjZApbICAxMjguMjMzNjExXSBpd2x3aWZpIDAwMDA6MDI6MDAuMDogUENJIElO
VCBBIC0+IEdTSSAxOCAobGV2ZWwsIGxvdykgLT4gSVJRIDE4ClsgIDEyOC4yNDA5MzVdIGl3bHdp
ZmkgMDAwMDowMjowMC4wOiBzZXR0aW5nIGxhdGVuY3kgdGltZXIgdG8gNjQKWyAgMTI4LjI0Njk1
OF0gaXdsd2lmaSAwMDAwOjAyOjAwLjA6IHBjaV9yZXNvdXJjZV9sZW4gPSAweDAwMDAyMDAwClsg
IDEyOC4yNTMwODRdIGl3bHdpZmkgMDAwMDowMjowMC4wOiBwY2lfcmVzb3VyY2VfYmFzZSA9IGZm
ZmZjOTAwMTU5ZjgwMDAKWyAgMTI4LjI1OTg5NF0gaXdsd2lmaSAwMDAwOjAyOjAwLjA6IEhXIFJl
dmlzaW9uIElEID0gMHgzNApbICAxMjguMjY1NTk1XSBpd2x3aWZpIDAwMDA6MDI6MDAuMDogQ09O
RklHX0lXTFdJRklfREVCVUcgZGlzYWJsZWQKWyAgMTI4LjI3MTYwMF0gaXdsd2lmaSAwMDAwOjAy
OjAwLjA6IENPTkZJR19JV0xXSUZJX0RFQlVHRlMgZGlzYWJsZWQKWyAgMTI4LjI3Nzk2M10gaXds
d2lmaSAwMDAwOjAyOjAwLjA6IENPTkZJR19JV0xXSUZJX0RFVklDRV9UUkFDSU5HIGVuYWJsZWQK
WyAgMTI4LjI4NDg0Nl0gaXdsd2lmaSAwMDAwOjAyOjAwLjA6IENPTkZJR19JV0xXSUZJX0RFVklD
RV9URVNUTU9ERSBkaXNhYmxlZApbICAxMjguMjkxOTMyXSBpd2x3aWZpIDAwMDA6MDI6MDAuMDog
Q09ORklHX0lXTFdJRklfUDJQIGRpc2FibGVkClsgIDEyOC4yOTc5MjNdIGl3bHdpZmkgMDAwMDow
MjowMC4wOiBEZXRlY3RlZCBJbnRlbChSKSBDZW50cmlubyhSKSBBZHZhbmNlZC1OIDYyMDUgQUdO
LCBSRVY9MHhCMApbICAxMjguMzA2ODg1XSBpd2x3aWZpIDAwMDA6MDI6MDAuMDogTDEgRGlzYWJs
ZWQ7IEVuYWJsaW5nIEwwUwpbICAxMjguMzI5NzQ5XSBpd2x3aWZpIDAwMDA6MDI6MDAuMDogZGV2
aWNlIEVFUFJPTSBWRVI9MHg3MTUsIENBTElCPTB4NgpbICAxMjguMzM2MjA3XSBpd2x3aWZpIDAw
MDA6MDI6MDAuMDogRGV2aWNlIFNLVTogMHgxRjAKWyAgMTI4LjM0MTMwM10gaXdsd2lmaSAwMDAw
OjAyOjAwLjA6IFZhbGlkIFR4IGFudDogMHgzLCBWYWxpZCBSeCBhbnQ6IDB4MwpbICAxMjguMzQ4
MTIwXSBpd2x3aWZpIDAwMDA6MDI6MDAuMDogVHVuYWJsZSBjaGFubmVsczogMTMgODAyLjExYmcs
IDI0IDgwMi4xMWEgY2hhbm5lbHMKWyAgMTI4LjM2MzQxN10gaXdsd2lmaSAwMDAwOjAyOjAwLjA6
IGxvYWRlZCBmaXJtd2FyZSB2ZXJzaW9uIDE3LjE2OC41LjMgYnVpbGQgNDIzMDEKWyAgMTI4LjM3
MTQ5MF0gUmVnaXN0ZXJlZCBsZWQgZGV2aWNlOiBwaHkwLWxlZApbICAxMjguMzc1NzU1XSBjZmc4
MDIxMTogSWdub3JpbmcgcmVndWxhdG9yeSByZXF1ZXN0IFNldCBieSBjb3JlIHNpbmNlIHRoZSBk
cml2ZXIgdXNlcyBpdHMgb3duIGN1c3RvbSByZWd1bGF0b3J5IGRvbWFpbgpbICAxMjguMzg2ODA5
XSBpZWVlODAyMTEgcGh5MDogU2VsZWN0ZWQgcmF0ZSBjb250cm9sIGFsZ29yaXRobSAnaXdsLWFn
bi1ycycKWyAgMTI4LjM5NjA3NF0gaHViIDItMToxLjA6IFVTQiBodWIgZm91bmQKWyAgMTI4LjM5
OTkxOF0gaHViIDItMToxLjA6IDggcG9ydHMgZGV0ZWN0ZWQKWyAgMTI4LjQwMDk1M10gaXdsd2lm
aSAwMDAwOjAyOjAwLjA6IEwxIERpc2FibGVkOyBFbmFibGluZyBMMFMKWyAgMTI4LjQwNzI3Nl0g
aXdsd2lmaSAwMDAwOjAyOjAwLjA6IFJhZGlvIHR5cGU9MHgxLTB4Mi0weDAKWyAgMTI4LjQyMDE1
MF0gZTEwMDBlOiBJbnRlbChSKSBQUk8vMTAwMCBOZXR3b3JrIERyaXZlciAtIDEuNS4xLWsKWyAg
MTI4LjQyNjExNF0gZTEwMDBlOiBDb3B5cmlnaHQoYykgMTk5OSAtIDIwMTEgSW50ZWwgQ29ycG9y
YXRpb24uClsgIDEyOC40MzIyNzVdIHhlbjogcmVnaXN0ZXJpbmcgZ3NpIDIwIHRyaWdnZXJpbmcg
MCBwb2xhcml0eSAxClsgIDEyOC40MzgwODRdIEFscmVhZHkgc2V0dXAgdGhlIEdTSSA6MjAKWyAg
MTI4LjQ0MTkxOF0gZTEwMDBlIDAwMDA6MDA6MTkuMDogUENJIElOVCBBIC0+IEdTSSAyMCAobGV2
ZWwsIGxvdykgLT4gSVJRIDIwClsgIDEyOC40NDkyMDFdIGUxMDAwZSAwMDAwOjAwOjE5LjA6IHNl
dHRpbmcgbGF0ZW5jeSB0aW1lciB0byA2NApbICAxMjguNzA1NTMxXSB1c2IgMi0xLjU6IG5ldyBs
b3ctc3BlZWQgVVNCIGRldmljZSBudW1iZXIgMyB1c2luZyBlaGNpX2hjZApbICAxMjguNzE5NjA0
XSBpd2x3aWZpIDAwMDA6MDI6MDAuMDogTDEgRGlzYWJsZWQ7IEVuYWJsaW5nIEwwUwpbICAxMjgu
NzMxNTk2XSBpd2x3aWZpIDAwMDA6MDI6MDAuMDogUmFkaW8gdHlwZT0weDEtMHgyLTB4MApbICAx
MjguODMxNTcyXSBpbnB1dDogVVNCIE9wdGljYWwgTW91c2UgYXMgL2RldmljZXMvcGNpMDAwMDow
MC8wMDAwOjAwOjFkLjAvdXNiMi8yLTEvMi0xLjUvMi0xLjU6MS4wL2lucHV0L2lucHV0MTEKWyAg
MTI4Ljg0MjM5OV0gZ2VuZXJpYy11c2IgMDAwMzowNEIzOjMxMEMuMDAwMzogaW5wdXQsaGlkcmF3
MDogVVNCIEhJRCB2MS4xMSBNb3VzZSBbVVNCIE9wdGljYWwgTW91c2VdIG9uIHVzYi0wMDAwOjAw
OjFkLjAtMS41L2lucHV0MApbICAxMjguODU5MTExXSBlMTAwMGUgMDAwMDowMDoxOS4wOiBldGgw
OiAoUENJIEV4cHJlc3M6Mi41R1QvczpXaWR0aCB4MSkgMDA6MTM6MjA6Zjk6ZGU6MjQKWyAgMTI4
Ljg2NzI4OF0gZTEwMDBlIDAwMDA6MDA6MTkuMDogZXRoMDogSW50ZWwoUikgUFJPLzEwMDAgTmV0
d29yayBDb25uZWN0aW9uClsgIDEyOC44NzQ1NzhdIGUxMDAwZSAwMDAwOjAwOjE5LjA6IGV0aDA6
IE1BQzogMTAsIFBIWTogMTEsIFBCQSBObzogRkZGRkZGLTBGRgpbICAxMjguODgyNDE3XSBkZXZp
Y2UgZXRoMCBlbnRlcmVkIHByb21pc2N1b3VzIG1vZGUKWyAgMTI4Ljk0NTUzNF0gdXNiIDItMS42
OiBuZXcgbG93LXNwZWVkIFVTQiBkZXZpY2UgbnVtYmVyIDQgdXNpbmcgZWhjaV9oY2QKWyAgMTI5
LjA4NzU0NV0gaW5wdXQ6IExJVEUtT04gVGVjaG5vbG9neSBVU0IgTmV0VmlzdGEgRnVsbCBXaWR0
aCBLZXlib2FyZC4gYXMgL2RldmljZXMvcGNpMDAwMDowMC8wMDAwOjAwOjFkLjAvdXNiMi8yLTEv
Mi0xLjYvMi0xLjY6MS4wL2lucHV0L2lucHV0MTIKWyAgMTI5LjEwMjAzMV0gZ2VuZXJpYy11c2Ig
MDAwMzowNEIzOjMwMjUuMDAwNDogaW5wdXQsaGlkcmF3MTogVVNCIEhJRCB2MS4xMCBLZXlib2Fy
ZCBbTElURS1PTiBUZWNobm9sb2d5IFVTQiBOZXRWaXN0YSBGdWxsIFdpZHRoIEtleWJvYXJkLl0g
b24gdXNiLTAwMDA6MDA6MWQuMC0xLjYvaW5wdXQwClsgIDEyOS44ODUwNTJdIGNmZzgwMjExOiBG
b3VuZCBuZXcgYmVhY29uIG9uIGZyZXF1ZW5jeTogNTE4MCBNSHogKENoIDM2KSBvbiBwaHkwClsg
IDEyOS44OTIzMzZdIGNmZzgwMjExOiBGb3VuZCBuZXcgYmVhY29uIG9uIGZyZXF1ZW5jeTogNTE4
MCBNSHogKENoIDM2KSBvbiBwaHkwClsgIDEyOS44OTk3NjBdIGNmZzgwMjExOiBQZW5kaW5nIHJl
Z3VsYXRvcnkgcmVxdWVzdCwgd2FpdGluZyBmb3IgaXQgdG8gYmUgcHJvY2Vzc2VkLi4uClsgIDEz
MC4wMDA2MjddIGNmZzgwMjExOiBGb3VuZCBuZXcgYmVhY29uIG9uIGZyZXF1ZW5jeTogNTIwMCBN
SHogKENoIDQwKSBvbiBwaHkwClsgIDEzMC4wMDc5MTddIGNmZzgwMjExOiBGb3VuZCBuZXcgYmVh
Y29uIG9uIGZyZXF1ZW5jeTogNTIwMCBNSHogKENoIDQwKSBvbiBwaHkwClsgIDEzMC4wMTUzMjZd
IGNmZzgwMjExOiBQZW5kaW5nIHJlZ3VsYXRvcnkgcmVxdWVzdCwgd2FpdGluZyBmb3IgaXQgdG8g
YmUgcHJvY2Vzc2VkLi4uClsgIDEzMC4wOTI4MTBdIGNmZzgwMjExOiBGb3VuZCBuZXcgYmVhY29u
IG9uIGZyZXF1ZW5jeTogNTIyMCBNSHogKENoIDQ0KSBvbiBwaHkwClsgIDEzMC4xMDAxMDBdIGNm
ZzgwMjExOiBGb3VuZCBuZXcgYmVhY29uIG9uIGZyZXF1ZW5jeTogNTIyMCBNSHogKENoIDQ0KSBv
biBwaHkwClsgIDEzMC4xMDc1MjddIGNmZzgwMjExOiBQZW5kaW5nIHJlZ3VsYXRvcnkgcmVxdWVz
dCwgd2FpdGluZyBmb3IgaXQgdG8gYmUgcHJvY2Vzc2VkLi4uClsgIDEzMC4xNjI0NjddIGNmZzgw
MjExOiBGb3VuZCBuZXcgYmVhY29uIG9uIGZyZXF1ZW5jeTogNTI0MCBNSHogKENoIDQ4KSBvbiBw
aHkwClsgIDEzMC4xNjk3NDVdIGNmZzgwMjExOiBQZW5kaW5nIHJlZ3VsYXRvcnkgcmVxdWVzdCwg
d2FpdGluZyBmb3IgaXQgdG8gYmUgcHJvY2Vzc2VkLi4uClsgIDEzMi4wNDExNDRdIGUxMDAwZTog
ZXRoMCBOSUMgTGluayBpcyBVcCAxMDAwIE1icHMgRnVsbCBEdXBsZXgsIEZsb3cgQ29udHJvbDog
UngvVHgKWyAgMTMyLjA0ODk2MF0gYnItZXRoMDogcG9ydCAxKGV0aDApIGVudGVyaW5nIGZvcndh
cmRpbmcgc3RhdGUKWyAgMTMyLjA1NDY5OF0gYnItZXRoMDogcG9ydCAxKGV0aDApIGVudGVyaW5n
IGZvcndhcmRpbmcgc3RhdGUKWyAgMTMyLjIyMTQ3NF0gY2ZnODAyMTE6IEZvdW5kIG5ldyBiZWFj
b24gb24gZnJlcXVlbmN5OiA1Nzg1IE1IeiAoQ2ggMTU3KSBvbiBwaHkwClsgIDEzMi4yMjkyMDld
IGNmZzgwMjExOiBQZW5kaW5nIHJlZ3VsYXRvcnkgcmVxdWVzdCwgd2FpdGluZyBmb3IgaXQgdG8g
YmUgcHJvY2Vzc2VkLi4uCgo=
--14dae9340a33ade82404c6b2a306
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--14dae9340a33ade82404c6b2a306--


From xen-devel-bounces@lists.xen.org Tue Aug 07 20:25:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 20:25:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyqLL-0005Aa-3i; Tue, 07 Aug 2012 20:25:31 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SyqLJ-0005AU-0k
	for xen-devel@lists.xensource.com; Tue, 07 Aug 2012 20:25:29 +0000
Received: from [85.158.143.35:11182] by server-3.bemta-4.messagelabs.com id
	BE/B1-01511-8B971205; Tue, 07 Aug 2012 20:25:28 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1344371126!15965679!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDgzNjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7605 invoked from network); 7 Aug 2012 20:25:27 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 20:25:27 -0000
X-IronPort-AV: E=Sophos;i="4.77,728,1336348800"; d="scan'208";a="13894717"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	07 Aug 2012 20:25:26 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 7 Aug 2012 21:25:26 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1SyqLG-0002Nx-7j;
	Tue, 07 Aug 2012 20:25:26 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1SyqLF-00035v-Ot;
	Tue, 07 Aug 2012 21:25:26 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13570-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 7 Aug 2012 21:25:25 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13570: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13570 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13570/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13536
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13536
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13536
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13536

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  8ecd9177d977
baseline version:
 xen                  3d17148e465c

------------------------------------------------------------
People who touched revisions under test:
  Christoph Egger <Christoph.Egger@amd.com>
  Dario Faggioli <dario.faggioli@citrix.com>
  David Vrabel <david.vrabel@citrix.com>
  Fabio Fantoni <fabio.fantoni@heliman.it>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Campbell <ian.campbell@eu.citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Liu, Jinsong <jinsong.liu@intel.com>
  Matt Wilson <msw@amazon.com>
  Nakajima, Jun <jun.nakajima@intel.com>
  Olaf Hering <olaf@aepfle.de>
  Roger Pau Monne <roger.pau@citrix.com>
  Santosh Jodh <santosh.jodh@citrix.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=8ecd9177d977
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable 8ecd9177d977
+ branch=xen-unstable
+ revision=8ecd9177d977
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/staging/xen-unstable.hg
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-unstable.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-unstable.hg
+ hg push -r 8ecd9177d977 ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 36 changesets with 90 changes to 50 files

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=8ecd9177d977
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable 8ecd9177d977
+ branch=xen-unstable
+ revision=8ecd9177d977
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/staging/xen-unstable.hg
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-unstable.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-unstable.hg
+ hg push -r 8ecd9177d977 ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 36 changesets with 90 changes to 50 files

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 07 21:44:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 21:44:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyrYr-00070B-Mk; Tue, 07 Aug 2012 21:43:33 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <m.a.young@durham.ac.uk>) id 1SyrYq-000706-NW
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 21:43:32 +0000
Received: from [85.158.143.35:22897] by server-3.bemta-4.messagelabs.com id
	5A/97-01511-40C81205; Tue, 07 Aug 2012 21:43:32 +0000
X-Env-Sender: m.a.young@durham.ac.uk
X-Msg-Ref: server-2.tower-21.messagelabs.com!1344375809!5859578!1
X-Originating-IP: [129.234.248.2]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTI5LjIzNC4yNDguMiA9PiA4ODcyNA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3706 invoked from network); 7 Aug 2012 21:43:29 -0000
Received: from hermes2.dur.ac.uk (HELO hermes2.dur.ac.uk) (129.234.248.2)
	by server-2.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Aug 2012 21:43:29 -0000
Received: from smtphost2.dur.ac.uk (smtphost2.dur.ac.uk [129.234.252.2])
	by hermes2.dur.ac.uk (8.13.8/8.13.8) with ESMTP id q77Lgt9R011243;
	Tue, 7 Aug 2012 22:42:59 +0100
Received: from vega-c.dur.ac.uk (vega-c.dur.ac.uk [129.234.250.135])
	by smtphost2.dur.ac.uk (8.13.8/8.13.7) with ESMTP id q77Lgdpp026676
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 7 Aug 2012 22:42:39 +0100
Received: from vega-c.dur.ac.uk (localhost [127.0.0.1])
	by vega-c.dur.ac.uk (8.14.3/8.11.1) with ESMTP id q77LgdM2022165;
	Tue, 7 Aug 2012 22:42:39 +0100
Received: from localhost (dcl0may@localhost)
	by vega-c.dur.ac.uk (8.14.3/8.14.3/Submit) with ESMTP id q77LgctJ022159;
	Tue, 7 Aug 2012 22:42:38 +0100
Date: Tue, 7 Aug 2012 22:42:38 +0100 (BST)
From: M A Young <m.a.young@durham.ac.uk>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <20120806140226.GB3093@phenom.dumpdata.com>
Message-ID: <alpine.DEB.2.00.1208072239120.8441@vega-c.dur.ac.uk>
References: <501F98A1.4070806@brockmann-consult.de>
	<501FBB63.3050309@citrix.com>
	<20120806140226.GB3093@phenom.dumpdata.com>
User-Agent: Alpine 2.00 (DEB 1167 2008-08-23)
MIME-Version: 1.0
X-DurhamAcUk-MailScanner: Found to be clean, Found to be clean
X-DurhamAcUk-MailScanner-ID: q77Lgt9R011243
Cc: xen@lists.fedoraproject.org, Malcolm Crossley <malcolm.crossley@citrix.com>,
	mike@flyn.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] 4.1.2 very slow without upstream patches,
 but fast with them, also 4.2 very slow
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 6 Aug 2012, Konrad Rzeszutek Wilk wrote:
> On Mon, Aug 06, 2012 at 01:41:07PM +0100, Malcolm Crossley wrote:

>> I suspect you may need the following patch to improve your 4.1.2
>> performance:
>>
>> http://xenbits.xen.org/hg/xen-4.1-testing.hg/rev/435493696053
>>
>> The cache flush on every C2 transition is very expensive and causes
>> a large slow down.
>>
>> 4.1.3-rc3 already includes that patch so it would be worth testing
>> that version.
>
> MA Young, could this be back-ported to F17 and F16? I believe
> Michael Petullo set up a bug for that?

I don't recall seeing a bug for this, but the patch is in 
xen-4.1.2-25.fc18 and xen-4.1.2-25.fc17 (which is building now).

 	Michael Young


From xen-devel-bounces@lists.xen.org Tue Aug 07 21:56:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 21:56:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyrlT-0007A8-2T; Tue, 07 Aug 2012 21:56:35 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <msw@amazon.com>) id 1SyrlS-0007A3-0P
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 21:56:34 +0000
X-Env-Sender: msw@amazon.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1344376585!6025393!1
X-Originating-IP: [207.171.184.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA3LjE3MS4xODQuMjUgPT4gMzA3NTY4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4067 invoked from network); 7 Aug 2012 21:56:27 -0000
Received: from smtp-fw-9101.amazon.com (HELO smtp-fw-9101.amazon.com)
	(207.171.184.25)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 21:56:27 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=amazon.com; i=msw@amazon.com; q=dns/txt; s=rte02;
	t=1344376587; x=1375912587;
	h=date:from:to:cc:subject:message-id:references:
	mime-version:in-reply-to;
	bh=77wqeFv9qDJET9gTnexm1Ya/cA4w/m9gZFmmfBMq7pk=;
	b=PNGcUaGM7uORchedr3XNuvOHTzw1f2Eu2YtW9+/Q3FX0wDHzpGW0QZWR
	JQn4UUk1vYrFGmbAwxJ9XtKHTwt2OA==;
X-IronPort-AV: E=Sophos;i="4.77,729,1336348800"; d="scan'208";a="1007893267"
Received: from smtp-in-1101.vdc.amazon.com ([10.146.54.37])
	by smtp-border-fw-out-9101.sea19.amazon.com with
	ESMTP/TLS/DHE-RSA-AES256-SHA; 07 Aug 2012 21:56:24 +0000
Received: from ex10-hub-31006.ant.amazon.com (ex10-hub-31006.sea31.amazon.com
	[10.185.176.13])
	by smtp-in-1101.vdc.amazon.com (8.13.8/8.13.8) with ESMTP id
	q77LuMP5006657
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=FAIL);
	Tue, 7 Aug 2012 21:56:23 GMT
Received: from US-SEA-R8XVZTX (10.224.80.43) by ex10-hub-31006.ant.amazon.com
	(10.185.176.13) with Microsoft SMTP Server id 14.2.247.3;
	Tue, 7 Aug 2012 14:56:06 -0700
Received: by US-SEA-R8XVZTX (sSMTP sendmail emulation); Tue, 07 Aug 2012
	14:56:07 -0700
Date: Tue, 7 Aug 2012 14:56:06 -0700
From: Matt Wilson <msw@amazon.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Message-ID: <20120807215606.GC5592@US-SEA-R8XVZTX>
References: <20120802211157.GG8228@US-SEA-R8XVZTX>
	<20507.58318.416753.917851@mariner.uk.xensource.com>
	<20120803205146.GA6268@US-SEA-R8XVZTX>
	<20511.54596.544654.176170@mariner.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20511.54596.544654.176170@mariner.uk.xensource.com>
User-Agent: Mutt/1.5.20 (2009-12-10)
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	Lars Kurth <lars.kurth@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] lists.xen.org Mailman configuration and DKIM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Aug 06, 2012 at 07:31:32AM -0700, Ian Jackson wrote:
> Matt Wilson writes ("Re: [Xen-devel] lists.xen.org Mailman configuration and DKIM"):
> > On Fri, Aug 03, 2012 at 07:44:30AM -0700, Ian Jackson wrote:
> > > That would be better than asking lists.xen.org to start violating the
> > > specified protocol.  Now of course a SHOULD is not an absolute
> > > requirement.  Perhaps mailing lists are a special case somehow; but if
> > > so I would expect this to be addressed in the relevant standards
> > > documents.  I don't see any particular reason to think that
> > > lists.xen.org is somehow unusual.
> > 
> > Ultimately I think that Mailman should verify DKIM signatures, provide
> > a new signature for the modified message (or have the outbound MTA do
> > the signing), and retain the original DKIM signature as a trace. I
> > believe that this is in line with the recommendations for intermediary
> > email handlers like Mailman in RFC 5863 [4]. Of course, I don't know
> > if Gmail will rework their implementation to ignore the invalid
> > signature. At least one Mailman user reported success simply adding a
> > new signature and not stripping any header [5].
> 
> The solution to the broken DKIM implementations, or broken spec, must
> not be allowed to become "install more DKIM".  That is making the
> problem worse, not better.

That's possibly true, but I'm not well versed enough in the DKIM and
related specs to say if "install more DKIM" makes things worse or
better.

> > Personally, I think that stripping DKIM headers as a short term
> > workaround is less objectionable.
> 
> So bottom line is you think that Gmail is violating a SHOULD NOT.
> And you are suggesting that the right fix for this is for us to also
> violate a SHOULD NOT.  That can't be right.

Andrew Cooper helpfully pointed out that the actual problem is a DMARC
policy advertised by amazon.com that requests all messages pass DKIM
checks:

$ dig +short TXT _dmarc.amazon.com.
"v=DMARC1\; p=quarantine\; pct=100\; rua=mailto:dmarc-reports@bounces.amazon.com\; ruf=mailto:dmarc-reports@bounces.amazon.com"

Gmail will treat a message with no DKIM signature from amazon.com the
same as a broken DKIM signature from amazon.com, so stripping the
headers won't actually help here.
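To make the quoted record above concrete, here is a minimal sketch of splitting a DMARC TXT record into its tag/value pairs; `parse_dmarc` is a hypothetical helper written for illustration, not part of any DNS or mail library.

```python
# Minimal sketch: split a DMARC TXT record ("tag=value; tag=value; ...")
# into a dict of tags. parse_dmarc is a hypothetical helper, not a
# library API; it does no validation beyond basic splitting.

def parse_dmarc(record: str) -> dict:
    tags = {}
    for field in record.split(";"):
        field = field.strip()
        if not field:
            continue
        key, _, value = field.partition("=")
        tags[key.strip()] = value.strip()
    return tags

policy = parse_dmarc(
    "v=DMARC1; p=quarantine; pct=100; "
    "rua=mailto:dmarc-reports@bounces.amazon.com; "
    "ruf=mailto:dmarc-reports@bounces.amazon.com"
)
print(policy["p"])  # the policy a receiver applies to failing mail
```

With `p=quarantine` and `pct=100`, a receiver such as Gmail is asked to quarantine every amazon.com message that fails the DMARC checks, which is why stripping the (broken) signature does not help.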

> > If a test of removing DKIM headers to see if it helps with delivery to
> > Gmail is off the table, then perhaps configuring Mailman in a way that
> > doesn't break DKIM signatures would be an option? Amazon's signed
> > headers include date, from, to, cc, subject, message-id and
> > mime-version. If the subject manipulation of adding [Xen-devel] was
> > removed, the signature would likely still be valid.
> 
> I don't think that would be popular and I don't think this is a good
> reason to do it.
> 
> Personally I think these subject line prefixes are annoying and if it
> were my list it wouldn't have had them to start with.  But if you want
> us to turn that off I think you need to get consensus for that.

The DMARC FAQ, http://dmarc.org/faq.html, has only this advice to
mailing list operators:

  I operate a mailing list, what should I do?

    DMARC introduces the concept of aligned identifiers. It means the
    domain in the from header must match the d= in the DKIM signature
    and the domain in the mail from envelope.  You have a few
    solutions:
     * operate as a strict forwarder, where the message is not changed
       and the validity of the DKIM signature is preserved
     * introduce an "Original Authentication Results" header to
       indicate you have performed the authentication and you are
       validating it
     * take ownership of the email, by removing the DKIM signature and
       putting your own as well as changing the from header in the
       email to contain an email address within your mailing list domain.

None of these options are terribly compelling to me. I think that
Mailman could run as a strict forwarder if the subject tags and
message footers were disabled, but you're right that we'd need to get
consensus to make that change. The "Original Authentication Results"
option sounds like yet another header that will have non-standard
handling depending on the implementer.
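The "aligned identifiers" idea from the FAQ can be sketched as follows; `is_aligned` is a hypothetical illustration only (it ignores relaxed vs. strict alignment and subdomain matching, which a real DMARC check would handle).

```python
# Sketch of DMARC identifier alignment: the RFC5322 From domain must match
# the DKIM d= domain and the envelope (RFC5321 MAIL FROM) domain.
# is_aligned is a simplified, hypothetical check written for illustration.

def domain_of(addr: str) -> str:
    # Take everything after the last "@" as the domain.
    return addr.rsplit("@", 1)[-1].lower()

def is_aligned(header_from: str, dkim_d: str, envelope_from: str) -> bool:
    return domain_of(header_from) == dkim_d.lower() == domain_of(envelope_from)

# Posted through the list: From still says amazon.com, but the envelope
# sender becomes the list's bounce address, so alignment fails.
print(is_aligned("msw@amazon.com", "amazon.com",
                 "xen-devel-bounces@lists.xen.org"))  # False
```

This is why the FAQ's third option rewrites the From header into the list's own domain: it restores alignment at the cost of taking ownership of the message.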

Since modifying the Xen.org mailman configurations would only address
the problem on one list server, I'll investigate alternative solutions
on the Amazon side of things. It seems that Google has a googlers.com
domain for this type of thing [1].

Thanks again,

Matt
[1] http://mail.python.org/pipermail/mailman-developers/2011-December/021640.html




From xen-devel-bounces@lists.xen.org Tue Aug 07 22:57:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 22:57:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sysha-0007dA-UI; Tue, 07 Aug 2012 22:56:38 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin.df@gmail.com>) id 1SyshZ-0007d5-2s
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 22:56:37 +0000
Received: from [85.158.139.83:38091] by server-9.bemta-5.messagelabs.com id
	FC/19-06631-42D91205; Tue, 07 Aug 2012 22:56:36 +0000
X-Env-Sender: raistlin.df@gmail.com
X-Msg-Ref: server-10.tower-182.messagelabs.com!1344380195!31030175!1
X-Originating-IP: [74.125.82.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12408 invoked from network); 7 Aug 2012 22:56:35 -0000
Received: from mail-we0-f173.google.com (HELO mail-we0-f173.google.com)
	(74.125.82.173)
	by server-10.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 22:56:35 -0000
Received: by weyz53 with SMTP id z53so100730wey.32
	for <xen-devel@lists.xen.org>; Tue, 07 Aug 2012 15:56:34 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:subject:from:to:cc:date:in-reply-to:references
	:content-type:x-mailer:mime-version;
	bh=QRLHvoFOoySJi1PreIpnnKgXB2HgTjHOIiq8OMMqwfA=;
	b=IZ768QcD1rWQ7yBzoi+SUuyg/s7XCLtNQ+rcgy941nMhyOAG2W2Jo5MvJO2M9qQ7bp
	T4lmxxXyrQVFUugL8hSSajMjKLQjUzB0YxdUOFtypJbTb/TrJ/DGUBEcxT/UcaBIJGYa
	C+y0GiXKhOY6iDelRVBmvHyBFbmi5Wo5/7ZaoGci4Z78dQeoHQbjh4l3FzWzoCOFolCM
	qOf6UqP+GhgjUzjR0TFgR9PbGJqhMGiL2eX6AuuQJRGY7rqrnrGZTspMOFJNk3GwINJ/
	WdELkLgYei91SY4GLuD6zaaXNyy7kwiS5noyqyynsQCr5HH17NVQHIewVCu1AO3ZCCTh
	0t5A==
Received: by 10.180.82.164 with SMTP id j4mr533266wiy.18.1344380194782;
	Tue, 07 Aug 2012 15:56:34 -0700 (PDT)
Received: from [192.168.0.40] (ip-180-4.sn1.eutelia.it. [62.94.180.4])
	by mx.google.com with ESMTPS id t8sm1881240wiy.3.2012.08.07.15.56.29
	(version=TLSv1/SSLv3 cipher=OTHER);
	Tue, 07 Aug 2012 15:56:33 -0700 (PDT)
Message-ID: <1344380174.4778.2.camel@Solace>
From: Dario Faggioli <raistlin@linux.it>
To: "Zhang, Yang Z" <yang.z.zhang@intel.com>
Date: Wed, 08 Aug 2012 00:56:14 +0200
In-Reply-To: <A9667DDFB95DB7438FA9D7D576C3D87E2032D6@SHSMSX101.ccr.corp.intel.com>
References: <1343837796.4958.32.camel@Solace>
	<A9667DDFB95DB7438FA9D7D576C3D87E2032D6@SHSMSX101.ccr.corp.intel.com>
X-Mailer: Evolution 3.4.3-1
Mime-Version: 1.0
Cc: Andre Przywara <andre.przywara@amd.com>,
	Anil Madhavapeddy <anil@recoil.org>, George Dunlap <dunlapg@gmail.com>,
	xen-devel <xen-devel@lists.xen.org>, Jan Beulich <JBeulich@suse.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>
Subject: Re: [Xen-devel] NUMA TODO-list for xen-devel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0882652826794966639=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--===============0882652826794966639==
Content-Type: multipart/signed; micalg="pgp-sha1"; protocol="application/pgp-signature";
	boundary="=-oqCbyvS31NML87C0odkA"


--=-oqCbyvS31NML87C0odkA
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Thu, 2012-08-02 at 01:04 +0000, Zhang, Yang Z wrote:
> >     - Automatic placement at guest creation time. Basics are there and
> >       will be shipping with 4.2. However, a lot of other things are
> >       missing and/or can be improved, for instance:
> > [D]    * automated verification and testing of the placement;
> >        * benchmarks and improvements of the placement heuristic;
> > [D]    * choosing/building up some measure of node load (more accurate
> >          than just counting vcpus) onto which to rely during placement;
> >        * consider IONUMA during placement;
> We should consider two things:
> 1. Dom0 IONUMA: devices used by dom0 should get their DMA buffers from
> the node on which they reside. Currently, dom0 allocates DMA buffers
> without providing the node info to the hypercall.
> 2. Guest IONUMA: when a guest boots with a pass-through device, we need
> to allocate memory from the node where the device resides for further
> DMA buffer allocation, and let the guest know the IONUMA topology. This
> relies on guest NUMA.
> This topic was mentioned at Xen Summit 2011:
> http://xen.org/files/xensummit_seoul11/nov2/5_XSAsia11_KTian_IO_Scalability_in_Xen.pdf
>
Sounds good. I knew of that presentation, and I have added these details
to the Wiki page (sorry for the delay). Are you (or someone from your
group) perhaps working, or planning to work, on this?
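For the dom0 side of the IONUMA point above, Linux already exposes the NUMA
node a PCI device resides on via sysfs, which is exactly the piece of
information that would need to flow into the allocation hypercall. A minimal
sketch of reading it (the BDF strings are only examples; this code is mine,
not from the Xen tree):

```python
# Sketch: look up the NUMA node of a PCI device via Linux sysfs.
# This is the node information dom0 could pass down so that DMA
# buffers are allocated on the node where the device resides.
import os

def device_numa_node(bdf: str) -> int:
    """Return the NUMA node of a PCI device (BDF like '0000:00:1f.2'),
    or -1 when the kernel exposes no node information for it."""
    path = os.path.join("/sys/bus/pci/devices", bdf, "numa_node")
    try:
        with open(path) as f:
            return int(f.read().strip())
    except (OSError, ValueError):
        return -1
```

A value of -1 is also what the kernel itself reports for devices without a
known node, so callers can treat it uniformly as "no placement preference".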

Thanks and Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-oqCbyvS31NML87C0odkA
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iEYEABECAAYFAlAhnQ4ACgkQk4XaBE3IOsRskQCdGINVesGFsUuZFlXEtD7gt4hF
28EAn1L0uSEr2xv1HOxjoCVmM+ytS1iB
=6rKm
-----END PGP SIGNATURE-----

--=-oqCbyvS31NML87C0odkA--



--===============0882652826794966639==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0882652826794966639==--




From xen-devel-bounces@lists.xen.org Tue Aug 07 23:50:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 23:50:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SytXa-00084x-1S; Tue, 07 Aug 2012 23:50:22 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin.df@gmail.com>) id 1SytXX-00084s-KP
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 23:50:19 +0000
Received: from [85.158.143.35:9183] by server-1.bemta-4.messagelabs.com id
	49/7D-24392-AB9A1205; Tue, 07 Aug 2012 23:50:18 +0000
X-Env-Sender: raistlin.df@gmail.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1344383416!17288944!1
X-Originating-IP: [209.85.212.179]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22163 invoked from network); 7 Aug 2012 23:50:17 -0000
Received: from mail-wi0-f179.google.com (HELO mail-wi0-f179.google.com)
	(209.85.212.179)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 23:50:17 -0000
Received: by wibhq4 with SMTP id hq4so125883wib.14
	for <xen-devel@lists.xen.org>; Tue, 07 Aug 2012 16:50:16 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:subject:from:to:cc:date:in-reply-to:references
	:content-type:x-mailer:mime-version;
	bh=hU+vfesHAPwpOQ7MEcpR+i7Yv79tXKPYMydMOQXqnr0=;
	b=x8/Ixr0Q/BGakxbJQ95TEY5ulYlq1j6Vf9cQVbXx6prmOY9gkPJ/osfuBBAITsoiva
	9qEX1Ed4U30lhEvGvaH4o5falLvq0GBdyC+/DEcRKnw1vJAlQf+GsJ0W7VQ3whEKaX0I
	I0UXEFnk9C5ow5EI2NL6I+gVmfzumr31DbMB029ZiEm2Cx+bZawTo5Rn8a7Xr1q/ocp4
	4y4928n8/0TqDkKgp5ueTzPnBopkXjQDI6NkMakKAPK3BEdDqtFTfiAhk1siE192scQT
	nusNi5tduJNgSdWTupAWwcaOFo/LSrLKgkNBYo1s1xd/iM686yzHOAgHmICp89KWIShs
	iEaw==
Received: by 10.180.91.228 with SMTP id ch4mr927573wib.7.1344383416808;
	Tue, 07 Aug 2012 16:50:16 -0700 (PDT)
Received: from [192.168.0.40] (ip-180-4.sn1.eutelia.it. [62.94.180.4])
	by mx.google.com with ESMTPS id dc3sm2080471wib.7.2012.08.07.16.50.14
	(version=TLSv1/SSLv3 cipher=OTHER);
	Tue, 07 Aug 2012 16:50:15 -0700 (PDT)
Message-ID: <1344383394.1890.11.camel@Solace>
From: Dario Faggioli <raistlin@linux.it>
To: Dan Magenheimer <dan.magenheimer@oracle.com>
Date: Wed, 08 Aug 2012 01:49:54 +0200
In-Reply-To: <ed108492-5601-4bc3-8a5f-a8a7cb5916fb@default>
References: <1343837796.4958.32.camel@Solace>
	<ed108492-5601-4bc3-8a5f-a8a7cb5916fb@default>
X-Mailer: Evolution 3.4.3-1
Mime-Version: 1.0
Cc: Andre Przywara <andre.przywara@amd.com>,
	Anil Madhavapeddy <anil@recoil.org>,
	George Dunlap <George.Dunlap@eu.citrix.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Konrad Wilk <konrad.wilk@oracle.com>, xen-devel <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>,
	Kurt Hackel <kurt.hackel@oracle.com>, "Zhang,
	Yang Z" <yang.z.zhang@intel.com>
Subject: Re: [Xen-devel] NUMA TODO-list for xen-devel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3761522102143106158=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--===============3761522102143106158==
Content-Type: multipart/signed; micalg="pgp-sha1"; protocol="application/pgp-signature";
	boundary="=-wJDHyyRKEVT2RgpqoFvW"


--=-wJDHyyRKEVT2RgpqoFvW
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Fri, 2012-08-03 at 15:22 -0700, Dan Magenheimer wrote:
> Hi Dario --
>
Hello Dan,

> Thanks for your great work on NUMA... an area of interest of mine,
> but one that, sadly, I haven't been able to give much time to,
> so I'm glad you've taken this bull by the horns.
>
Trying to... Let's see! :-P

> I've been sitting on an idea for some time that probably
> deserves some exposure on your list.  Naturally, it involves
> my favorite topic tmem (readers, please don't tune out yet :-).
>
It sure does! I've already put something quite generic about "memory
sharing" there, because I know it has anything but trivial interactions
with the improved NUMA support I am/we are trying to envision.

The fact that it is, as I said, generic, is due to my ignorance (let's
say for now) of the whole tmem thing, so thanks for the contribution,
it's very useful to hear your point of view on this!

> It has occurred to me that a fundamental tenet of NUMA
> is to put infrequently used data on "other" nodes, while
> pulling frequently used data onto a "local" node.
>
> Tmem very nicely separates infrequently-used data from
> frequently-used data with an API/ABI that is now fully
> implemented in upstream Linux.
>
I see, and it seems nice.

> [..]
>
> Naturally, this doesn't solve any NUMA problems at all for
> tmem-ignorant or tmem-disabled guests, but if it works
> sufficiently well for tmem-enabled guests, that might
> encourage other OS's to do a simple implementation of tmem.
>
Sure. In my opinion, this is not an area where we could aim at "solving
every problem for everyone". However, we should definitely target having
a sensible solution for default and/or most common use cases and
scenarios.

> Sadly, I'm not able to invest much time in this idea,
> but the combination of tmem and NUMA might interest some
> developers and/or grad students, in which case I'd be happy
> to spend a little time assisting.
>
That's definitely the case. I've put a summary of what you said in this
mail on the Wiki (http://wiki.xen.org/wiki/Xen_NUMA_Roadmap) and have
also put your contact details next to it. Feel free to update/correct it
if you find something wrong. :-P

> I'll be at Xen Summit for at least the first day, so we
> can chat more if you are interested.
>
I am indeed interested, so let's make that happen! :-)

Thanks and Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-wJDHyyRKEVT2RgpqoFvW
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iEYEABECAAYFAlAhqaIACgkQk4XaBE3IOsSQZACdEfjV77miGGTfawJzIL+yI0Cl
18gAmwSvtv6TFpEtt2v4kA0O/gPlAOs+
=jeiU
-----END PGP SIGNATURE-----

--=-wJDHyyRKEVT2RgpqoFvW--



--===============3761522102143106158==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3761522102143106158==--




From xen-devel-bounces@lists.xen.org Tue Aug 07 23:54:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Aug 2012 23:54:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sytb4-0008Cv-QZ; Tue, 07 Aug 2012 23:53:58 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin.df@gmail.com>) id 1Sytb4-0008Cq-0R
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 23:53:58 +0000
Received: from [85.158.138.51:43281] by server-4.bemta-3.messagelabs.com id
	D2/81-06379-59AA1205; Tue, 07 Aug 2012 23:53:57 +0000
X-Env-Sender: raistlin.df@gmail.com
X-Msg-Ref: server-14.tower-174.messagelabs.com!1344383634!24569867!1
X-Originating-IP: [209.85.212.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13450 invoked from network); 7 Aug 2012 23:53:54 -0000
Received: from mail-wi0-f173.google.com (HELO mail-wi0-f173.google.com)
	(209.85.212.173)
	by server-14.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Aug 2012 23:53:54 -0000
Received: by wibhm6 with SMTP id hm6so2742904wib.14
	for <xen-devel@lists.xen.org>; Tue, 07 Aug 2012 16:53:54 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:subject:from:to:cc:date:in-reply-to:references
	:content-type:x-mailer:mime-version;
	bh=AWC2vBukZUlsaAmAfSoLNyCKfRCF5SFSMX3T/fXfLvk=;
	b=R/9k3+WzfAB68ogVwTB2dIv7NpG4y27gxxxaFdJUlj0nQ0ktl89YjMrHMS3LGieJyc
	BbAZcschCjvkO5VvdM2w/W838a2rKee5twnea2qe2rDyPYzS7CFExsDYXjBnhBenuCM/
	SkyiRFdd+Lc9tEQah2yzqKMVx0H+IrPgQCiqpIabQVNqZ5gzS0SADYVQeWVareYmBPN5
	oXvDkL6z5rRSi3ypRs6W+ToFUMA04QWNFKLZmIWXWCyloXNL0WuWSZB+fPEIvs9WvsDK
	z2cwg0LH96aqL41ei2s32X5U/MUW096EgqgW1lCgLtBSO2cMXqHK1RuI5vp8LcFsyAWV
	YXpg==
Received: by 10.180.100.131 with SMTP id ey3mr900984wib.15.1344383634509;
	Tue, 07 Aug 2012 16:53:54 -0700 (PDT)
Received: from [192.168.0.40] (ip-180-4.sn1.eutelia.it. [62.94.180.4])
	by mx.google.com with ESMTPS id fb20sm2993187wid.1.2012.08.07.16.53.52
	(version=TLSv1/SSLv3 cipher=OTHER);
	Tue, 07 Aug 2012 16:53:53 -0700 (PDT)
Message-ID: <1344383625.1890.17.camel@Solace>
From: Dario Faggioli <raistlin@linux.it>
To: Malte Schwarzkopf <malte.schwarzkopf@cl.cam.ac.uk>
Date: Wed, 08 Aug 2012 01:53:45 +0200
In-Reply-To: <5019C3F7.1080404@cl.cam.ac.uk>
References: <1343837796.4958.32.camel@Solace>
	<A178E46B-25C1-4251-BB86-292B4CE3082D@recoil.org>
	<1343840334.4958.45.camel@Solace> <5019C3F7.1080404@cl.cam.ac.uk>
X-Mailer: Evolution 3.4.3-1
Mime-Version: 1.0
Cc: Andre Przywara <andre.przywara@amd.com>,
	Anil Madhavapeddy <anil@recoil.org>,
	George Dunlap <George.Dunlap@eu.citrix.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>, "Zhang, Yang Z" <yang.z.zhang@intel.com>,
	Steven Smith <steven.smith@cl.cam.ac.uk>
Subject: Re: [Xen-devel] NUMA TODO-list for xen-devel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0548752285100658640=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--===============0548752285100658640==
Content-Type: multipart/signed; micalg="pgp-sha1"; protocol="application/pgp-signature";
	boundary="=-q7TZ3Ivb5WO6flBclQN2"


--=-q7TZ3Ivb5WO6flBclQN2
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Thu, 2012-08-02 at 01:04 +0100, Malte Schwarzkopf wrote:
> > Wow... That's really cool. I'll definitely take a deep look at all
> > these data! I'm also adding the link to the wiki, if you're fine with
> > that...
>
> No problem with adding a link, as this is public data :) If possible,
> it'd be splendid to put a note next to this link encouraging people to
> submit their own results -- doing so is very simple, and helps us extend
> the database. Instructions are at
> http://www.cl.cam.ac.uk/research/srg/netos/ipc-bench/ (or, for a short
> link, http://fable.io).
>
Ok, I've tried doing this, here it is how it looks:
 http://wiki.xen.org/wiki/Xen_NUMA_Roadmap
 http://wiki.xen.org/wiki/Xen_NUMA_Roadmap#Inter-VM_dependencies_and_communication_issues

Thanks also for the references, I'll definitely take a look at them. :-)

> One interesting thing to look at (that we haven't looked at yet) is what
> memory allocators do about NUMA these days; there is an AMD whitepaper
> from 2009 discussing the performance benefits of a NUMA-aware version of
> tcmalloc [3], but I have found it hard to reproduce their results on
> modern hardware. Of course, being virtualized may complicate matters
> here, since the memory allocator can no longer freely pick and choose
> where to allocate from.
> 
> Scheduling, notably, is key here, since the CPU a process is scheduled
> on may determine where its memory is allocated -- frequent migrations
> are likely to be bad for performance due to remote memory accesses,
>
That might be true for Linux, but it's not so much true
(fortunately :-P) for Xen. However, I also think scheduling is a very
important aspect of this whole NUMA thing... I'll repost my NUMA-aware
credit scheduler patches soon.

> although we have been unable to quantify a significant difference on
> non-synthetic macrobenchmarks; that said, we did not try very hard so far.
> 
I think both kinds of benchmarks are interesting. I tried to concentrate
a bit on macrobenchmarks (specjbb; I'll let you decide whether that's
synthetic or not :-D).

Another issue, if we want to tackle the problem of communicating/cooperating
VMs, pops up at the interface level, i.e., how do we want the user to
tell us that 2 (or more) VMs are "related"? Up to what level of detail?
Should this "relationship" be permanent or might it change over time?

Thanks and Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)




--=-q7TZ3Ivb5WO6flBclQN2
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iEYEABECAAYFAlAhqokACgkQk4XaBE3IOsSeVQCdEBkOx3JJd5sqXfLTo8uJDtlF
uXwAn07AA5nNEWVQBEpvCNix/4Vpfgz0
=JEng
-----END PGP SIGNATURE-----

--=-q7TZ3Ivb5WO6flBclQN2--



--===============0548752285100658640==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0548752285100658640==--



From xen-devel-bounces@lists.xen.org Wed Aug 08 01:16:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 01:16:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Syusd-0004xm-Qb; Wed, 08 Aug 2012 01:16:11 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <greg@wind.enjellic.com>) id 1Syusc-0004xh-SY
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 01:16:11 +0000
Received: from [85.158.143.99:39696] by server-3.bemta-4.messagelabs.com id
	77/94-01511-ADDB1205; Wed, 08 Aug 2012 01:16:10 +0000
X-Env-Sender: greg@wind.enjellic.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1344388567!30164405!1
X-Originating-IP: [76.10.64.91]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11877 invoked from network); 8 Aug 2012 01:16:08 -0000
Received: from wind.enjellic.com (HELO wind.enjellic.com) (76.10.64.91)
	by server-3.tower-216.messagelabs.com with SMTP;
	8 Aug 2012 01:16:08 -0000
Received: from wind.enjellic.com (localhost [127.0.0.1])
	by wind.enjellic.com (8.14.3/8.14.3) with ESMTP id q781G0fY011374;
	Tue, 7 Aug 2012 20:16:01 -0500
Received: (from greg@localhost)
	by wind.enjellic.com (8.14.3/8.14.3/Submit) id q781G0Ei011373;
	Tue, 7 Aug 2012 20:16:00 -0500
Date: Tue, 7 Aug 2012 20:16:00 -0500
From: "Dr. Greg Wettstein" <greg@wind.enjellic.com>
Message-Id: <201208080116.q781G0Ei011373@wind.enjellic.com>
In-Reply-To: Ian Jackson <Ian.Jackson@eu.citrix.com>
	"Re: [Xen-devel] [Patch 1 of 1] [xen-4.1.2] libxl: Seal tapdisk minor
	leak." (Aug 7, 4:45pm)
X-Mailer: Mail User's Shell (7.2.6-ESD1.0 03/31/2012)
To: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
X-Greylist: Sender passed SPF test, not delayed by milter-greylist-4.2.3
	(wind.enjellic.com [0.0.0.0]);
	Tue, 07 Aug 2012 20:16:01 -0500 (CDT)
Cc: "greg@enjellic.com" <greg@enjellic.com>, "Keir \(Xen.org\)" <keir@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Patch 1 of 1] [xen-4.1.2] libxl: Seal tapdisk
	minor leak.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: greg@enjellic.com
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Aug 7,  4:45pm, Ian Jackson wrote:
} Subject: Re: [Xen-devel] [Patch 1 of 1] [xen-4.1.2] libxl: Seal tapdisk mi

Hi, hope the day is going well for everyone.

> Ian Campbell writes ("Re: [Xen-devel] [Patch 1 of 1] [xen-4.1.2] libxl: Seal tapdisk minor leak."):
> > On Tue, 2012-08-07 at 16:14 +0100, Ian Jackson wrote:
> > > Dr. Greg Wettstein writes ("[Patch 1 of 1] [xen-4.1.2] libxl: Seal tapdisk minor leak."):
> > > > libxl: Seal tapdisk minor leak.
> > > ...
> > > > To implement correct cleanup of blktap devices in Xen 4.1.2.
> > > 
> > > Is this patch is supposed to be against xen-4.1-testing.hg ?
> > > 
> > > > diff -r b2b7a7a49af5 tools/libxl/libxl_blktap2.c
> > > > --- a/tools/libxl/libxl_blktap2.c	Sat Aug 04 16:17:08 2012 -0500
> > > > +++ b/tools/libxl/libxl_blktap2.c	Sun Aug 05 09:22:35 2012 -0500
> > > > @@ -59,6 +59,7 @@ void libxl__device_destroy_tapdisk(libxl
> > > 
> > > This function doesn't seem to exist in my copy of xen-4.1-testing.hg
> > > tip nor in 23172:3eca5bf65e6c.
> > 
> > It comes from <201208051440.q75Een7F008501@wind.enjellic.com> which is a
> > backport request that introduces this function.
> 
> Oh I see, these are two interdependent patches.  Let me read them
> again.

Sorry for the confusion, I should have explicitly called out the
inter-dependency.

The first patch is a backport of one which Ian did for xen-unstable,
corrected primarily for the change in calling convention for the
context argument to libxl__device_destroy_tapdisk().

By itself this patch corrects the problem with the tapdisk2 userspace
process not being killed when the guest exits.

The second builds on top of that patch and fixes an ordering problem
which causes a deadlock between xen-blkback and blktap2 when the user
space process attempts to unmap its ring buffer.  The major effect is
orphaning of the tapdisk minor.

The more subtle effect is the deadlock which stalls shutdown of the
guest and actually livelocks the kernel for the duration of the
timeout on the select() call used for the IPC socket communications.
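The stall described here is easy to reproduce in miniature. This is a generic sketch using plain sockets, not the actual blktap2 IPC path: a reader blocks in select() on a socket whose peer never answers, so the caller only resumes when the timeout expires, which is exactly the kind of duration-of-the-timeout stall at issue.

```python
import select
import socket
import time

# Generic illustration (not Xen/blktap2 code): block in select() on an
# IPC socket whose peer is wedged and never writes anything.
a, b = socket.socketpair()

start = time.monotonic()
readable, _, _ = select.select([a], [], [], 1.0)  # 1-second timeout
elapsed = time.monotonic() - start

assert readable == []   # the peer never wrote anything
assert elapsed >= 0.9   # we stalled for (roughly) the whole timeout
print("stalled %.2fs waiting on a silent peer" % elapsed)
```

With a long timeout configured, every such exchange against a deadlocked peer costs the full timeout, which is how the shutdown path ends up stalled for the duration described above.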

> Ian.

Have a good day.

}-- End of excerpt from Ian Jackson

As always,
Dr. G.W. Wettstein, Ph.D.   Enjellic Systems Development, LLC.
4206 N. 19th Ave.           Specializing in information infra-structure
Fargo, ND  58102            development.
PH: 701-281-1686
FAX: 701-281-3949           EMAIL: greg@enjellic.com
------------------------------------------------------------------------------
"Some of them are.  A surprising number aren't.  A personal favorite of
 mine was the log from a cracker who couldn't figure out how to untar
 and install the trojan package he'd ftped onto the machine.  He tried a
 few times, and then eventually gave up and logged out."
                                -- Nat Lanza

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 02:39:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 02:39:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SywBB-0005rf-AB; Wed, 08 Aug 2012 02:39:25 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SywBA-0005ra-4e
	for xen-devel@lists.xensource.com; Wed, 08 Aug 2012 02:39:24 +0000
Received: from [85.158.138.51:38402] by server-8.bemta-3.messagelabs.com id
	DC/7A-25919-B51D1205; Wed, 08 Aug 2012 02:39:23 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-3.tower-174.messagelabs.com!1344393562!22835996!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg0Mjg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20863 invoked from network); 8 Aug 2012 02:39:22 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-3.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Aug 2012 02:39:22 -0000
X-IronPort-AV: E=Sophos;i="4.77,730,1336348800"; d="scan'208";a="13899082"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Aug 2012 02:38:50 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 8 Aug 2012 03:38:50 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1SywAc-0004eX-2m;
	Wed, 08 Aug 2012 02:38:50 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1SywAb-0007K4-NB;
	Wed, 08 Aug 2012 03:38:49 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13571-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 8 Aug 2012 03:38:49 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13571: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13571 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13571/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13570
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13570
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13570
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13570

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  472fc515a463
baseline version:
 xen                  8ecd9177d977

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <Ian.Jackson@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=472fc515a463
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable 472fc515a463
+ branch=xen-unstable
+ revision=472fc515a463
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/staging/xen-unstable.hg
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-unstable.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-unstable.hg
+ hg push -r 472fc515a463 ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 1 changesets with 1 changes to 1 files

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 02:39:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 02:39:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SywBB-0005rf-AB; Wed, 08 Aug 2012 02:39:25 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SywBA-0005ra-4e
	for xen-devel@lists.xensource.com; Wed, 08 Aug 2012 02:39:24 +0000
Received: from [85.158.138.51:38402] by server-8.bemta-3.messagelabs.com id
	DC/7A-25919-B51D1205; Wed, 08 Aug 2012 02:39:23 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-3.tower-174.messagelabs.com!1344393562!22835996!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg0Mjg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20863 invoked from network); 8 Aug 2012 02:39:22 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-3.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Aug 2012 02:39:22 -0000
X-IronPort-AV: E=Sophos;i="4.77,730,1336348800"; d="scan'208";a="13899082"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Aug 2012 02:38:50 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 8 Aug 2012 03:38:50 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1SywAc-0004eX-2m;
	Wed, 08 Aug 2012 02:38:50 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1SywAb-0007K4-NB;
	Wed, 08 Aug 2012 03:38:49 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13571-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 8 Aug 2012 03:38:49 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13571: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13571 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13571/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13570
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13570
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13570
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13570

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  472fc515a463
baseline version:
 xen                  8ecd9177d977

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <Ian.Jackson@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=472fc515a463
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable 472fc515a463
+ branch=xen-unstable
+ revision=472fc515a463
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/staging/xen-unstable.hg
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-unstable.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-unstable.hg
+ hg push -r 472fc515a463 ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 1 changesets with 1 changes to 1 files

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 03:45:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 03:45:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyxCw-0006eE-26; Wed, 08 Aug 2012 03:45:18 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wangzhenguo@huawei.com>) id 1SyxCu-0006e8-4g
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 03:45:16 +0000
Received: from [85.158.138.51:51135] by server-11.bemta-3.messagelabs.com id
	2A/BD-10722-BC0E1205; Wed, 08 Aug 2012 03:45:15 +0000
X-Env-Sender: wangzhenguo@huawei.com
X-Msg-Ref: server-14.tower-174.messagelabs.com!1344397511!24588303!1
X-Originating-IP: [119.145.14.65]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTE5LjE0NS4xNC42NSA9PiAzMTA4NQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21894 invoked from network); 8 Aug 2012 03:45:13 -0000
Received: from szxga02-in.huawei.com (HELO szxga02-in.huawei.com)
	(119.145.14.65) by server-14.tower-174.messagelabs.com with SMTP;
	8 Aug 2012 03:45:13 -0000
Received: from 172.24.2.119 (EHLO szxeml212-edg.china.huawei.com)
	([172.24.2.119])
	by szxrg02-dlp.huawei.com (MOS 4.3.4-GA FastPath queued)
	with ESMTP id AND13758; Wed, 08 Aug 2012 11:45:10 +0800 (CST)
Received: from SZXEML409-HUB.china.huawei.com (10.82.67.136) by
	szxeml212-edg.china.huawei.com (172.24.2.181) with Microsoft SMTP
	Server (TLS) id 14.1.323.3; Wed, 8 Aug 2012 11:44:28 +0800
Received: from SZXEML528-MBX.china.huawei.com ([169.254.4.120]) by
	szxeml409-hub.china.huawei.com ([10.82.67.136]) with mapi id
	14.01.0323.003; Wed, 8 Aug 2012 11:44:19 +0800
From: Wangzhenguo <wangzhenguo@huawei.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Thread-Topic: [Xen-devel] The hypercall will fail and return EFAULT when the
	page becomes COW by forking process in linux
Thread-Index: Ac1YHX79yLSELb+4TLqeIKwBXZDIFf//q9kA//3DDTCAJQongP/9/iEAgAOV5ICADfI9AP/6ldlwAUsPRwD//lly8P/8/06A//lxU9D/80ShAP/lChpQ
Date: Wed, 8 Aug 2012 03:44:18 +0000
Message-ID: <B44CA5218606DC4FA941D19CCEB27B532CF76B9E@szxeml528-mbx.china.huawei.com>
References: <B44CA5218606DC4FA941D19CCEB27B5323BEF12C@szxeml528-mbs.china.huawei.com>
	<1341221926.4625.12.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B5323BEF99A@szxeml528-mbs.china.huawei.com>
	<1343135163.18971.15.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B532CF769F2@szxeml528-mbx.china.huawei.com>
	<1343221925.18971.93.camel@zakaz.uk.xensource.com>
	<1343988628.21372.46.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B532CF76ACE@szxeml528-mbx.china.huawei.com>
	<1344259710.11339.39.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B532CF76B32@szxeml528-mbx.china.huawei.com>
	<1344334043.11339.85.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B532CF76B4F@szxeml528-mbx.china.huawei.com>
	<1344343342.11339.96.camel@zakaz.uk.xensource.com>
In-Reply-To: <1344343342.11339.96.camel@zakaz.uk.xensource.com>
Accept-Language: zh-CN, en-US
Content-Language: zh-CN
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.135.65.30]
MIME-Version: 1.0
X-CFilter-Loop: Reflected
Cc: Yangxiaowei <xiaowei.yang@huawei.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Hongtao <bobby.hong@huawei.com>, Yechuan <yechuan@huawei.com>
Subject: Re: [Xen-devel] The hypercall will fail and return EFAULT when the
 page becomes COW by forking process in linux
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> From: Ian Campbell [mailto:Ian.Campbell@citrix.com]
> 
> I don't think we should expect it to be valid to keep an xc interface
> handle open after a fork. The child should open a fresh handle if it
> wants to keep interacting with xc.
OK, I see that xs, libxl and python each open a fresh handle to interact with xc in the child process.
I have modified the patch as follows:

# HG changeset patch
# Parent a5dfd924fcdb173a154dad9f37073c1de1302065
libxc: Set the VM_DONTCOPY flag on the hypercall buffer's VMA, to avoid the buffer becoming COW during a hypercall.

In a multi-threaded, multi-process environment (e.g. a process with two threads), thread A
may issue a hypercall while thread B calls fork() to create a child process. After the fork, all
pages of the process, including the hypercall buffers, are marked copy-on-write. When the
hypervisor then performs copy_to_user on the buffer in thread A's context, the write hits the
write-protected page and the hypercall fails with EFAULT.

Fix:
1. Before the hypercall: use the madvise syscall with MADV_DONTFORK so that the hypercall
   buffer is not copied into the child process on fork.
2. After the hypercall: undo the effect of MADV_DONTFORK on the hypercall buffer with
   madvise MADV_DOFORK.
3. Use mmap/munmap instead of malloc/free for buffer allocation and freeing, bypassing libc.

Note:
The child process must not use an xc interface handle opened by the parent process; it should
open a fresh handle if it wants to keep interacting with xc. Otherwise, accessing the hypercall
buffer cache in the inherited handle may cause a segmentation fault.

Signed-off-by: Zhenguo Wang <wangzhenguo@huawei.com>
Signed-off-by: Xiaowei Yang <xiaowei.yang@huawei.com>

diff -r a5dfd924fcdb tools/libxc/xc_linux_osdep.c
--- a/tools/libxc/xc_linux_osdep.c	Tue Aug 07 13:52:10 2012 +0800
+++ b/tools/libxc/xc_linux_osdep.c	Wed Aug 08 11:33:38 2012 +0800
@@ -93,22 +93,22 @@ static void *linux_privcmd_alloc_hyperca
     size_t size = npages * XC_PAGE_SIZE;
     void *p;
 
-    p = xc_memalign(xch, XC_PAGE_SIZE, size);
-    if (!p)
-        return NULL;
+    /* The address returned by mmap is page aligned. */
+    p = mmap(NULL, size, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_LOCKED, -1, 0);
+    if ( p == MAP_FAILED )
+        return NULL;
 
-    if ( mlock(p, size) < 0 )
-    {
-        free(p);
-        return NULL;
-    }
+    /* Do not copy this VMA to the child on fork, so the buffer cannot become COW during a hypercall. */
+    madvise(p, size, MADV_DONTFORK);
     return p;
 }
 
 static void linux_privcmd_free_hypercall_buffer(xc_interface *xch, xc_osdep_handle h, void *ptr, int npages)
 {
-    munlock(ptr, npages * XC_PAGE_SIZE);
-    free(ptr);
+    /* Undo MADV_DONTFORK before unmapping; this may not be strictly necessary. */
+    madvise(ptr, npages * XC_PAGE_SIZE, MADV_DOFORK);
+    
+    munmap(ptr, npages * XC_PAGE_SIZE);
 }
 
 static int linux_privcmd_hypercall(xc_interface *xch, xc_osdep_handle h, privcmd_hypercall_t *hypercall)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 05:08:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 05:08:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SyyVC-0007Io-E5; Wed, 08 Aug 2012 05:08:14 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mike@flyn.org>) id 1Sys9S-0007TB-1w
	for xen-devel@lists.xen.org; Tue, 07 Aug 2012 22:21:22 +0000
Received: from [85.158.143.99:63845] by server-2.bemta-4.messagelabs.com id
	07/27-17938-1E491205; Tue, 07 Aug 2012 22:21:21 +0000
X-Env-Sender: mike@flyn.org
X-Msg-Ref: server-3.tower-216.messagelabs.com!1344378077!30149583!1
X-Originating-IP: [72.251.202.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8302 invoked from network); 7 Aug 2012 22:21:17 -0000
Received: from cust-smtp2.lga6.us.voxel.net (HELO
	cust-smtp2.lga6.us.voxel.net) (72.251.202.115)
	by server-3.tower-216.messagelabs.com with SMTP;
	7 Aug 2012 22:21:17 -0000
Received: from vhost2.lga6.us.voxel.net (vhost2.lga6.us.voxel.net
	[72.251.193.170])
	by cust-smtp2.lga6.us.voxel.net (Postfix) with ESMTP id D2459F00F1
	for <xen-devel@lists.xen.org>; Tue,  7 Aug 2012 18:21:17 -0400 (EDT)
Received: (qmail 29602 invoked by uid 108); 7 Aug 2012 18:21:16 -0400
Received: from unknown (HELO imp.flyn.org) (mike@flyn.org@99.31.121.212)
	by vhost2.lga6.us.voxel.net with ESMTPSA; 7 Aug 2012 18:21:16 -0400
Received: by imp.flyn.org (Postfix, from userid 1101)
	id C01D1D0A002; Tue,  7 Aug 2012 17:21:15 -0500 (CDT)
Date: Tue, 7 Aug 2012 17:21:15 -0500
From: "W. Michael Petullo" <mike@flyn.org>
To: M A Young <m.a.young@durham.ac.uk>
Message-ID: <20120807222115.GA3684@imp.flyn.org>
References: <501F98A1.4070806@brockmann-consult.de>
	<501FBB63.3050309@citrix.com>
	<20120806140226.GB3093@phenom.dumpdata.com>
	<alpine.DEB.2.00.1208072239120.8441@vega-c.dur.ac.uk>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.00.1208072239120.8441@vega-c.dur.ac.uk>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Mailman-Approved-At: Wed, 08 Aug 2012 05:08:13 +0000
Cc: xen@lists.fedoraproject.org, Malcolm Crossley <malcolm.crossley@citrix.com>,
	xen-devel@lists.xen.org, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] 4.1.2 very slow without upstream patches,
 but fast with them, also 4.2 very slow
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> I suspect you may need the following patch to improve your 4.1.2
>>> performance:
>>>
>>> http://xenbits.xen.org/hg/xen-4.1-testing.hg/rev/435493696053
>>>
>>> The cache flush on every C2 transition is very expensive and causes
>>> a large slow down.

>>> 4.1.3-rc3 already includes that patch so it would be worth testing
>>> that version.

>> MA Young, could this be backported into F17 and F16? I believe
>> Michael Petullo set up a bug for that?
 
> I don't recall seeing a bug for this, but the patch is in
> xen-4.1.2-25.fc18 and xen-4.1.2-25.fc17 (which is building now).

The performance-related bug I filed is at:

https://bugzilla.redhat.com/show_bug.cgi?id=841330

-- 
Mike

:wq

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 06:38:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 06:38:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Syztg-0008Ka-OR; Wed, 08 Aug 2012 06:37:36 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Syztf-0008KV-2v
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 06:37:35 +0000
Received: from [85.158.139.83:43935] by server-9.bemta-5.messagelabs.com id
	2F/AE-06631-C2902205; Wed, 08 Aug 2012 06:37:32 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-4.tower-182.messagelabs.com!1344407852!28090375!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM2NzQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27258 invoked from network); 8 Aug 2012 06:37:32 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-4.tower-182.messagelabs.com with SMTP;
	8 Aug 2012 06:37:32 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 08 Aug 2012 07:37:31 +0100
Message-Id: <5022254A02000078000936FD@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Wed, 08 Aug 2012 07:37:30 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Jinsong Liu" <jinsong.liu@intel.com>
References: <5020D418020000780009318C@nat28.tlf.novell.com>
	<CC468762.3AE90%keir.xen@gmail.com>
	<5020E85202000078000931F6@nat28.tlf.novell.com>
	<DE8DF0795D48FD4CA783C40EC82923352D7ABC@SHSMSX101.ccr.corp.intel.com>
In-Reply-To: <DE8DF0795D48FD4CA783C40EC82923352D7ABC@SHSMSX101.ccr.corp.intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Keir Fraser <keir.xen@gmail.com>, Ian Campbell <Ian.Campbell@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.2 TODO / Release Plan
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 07.08.12 at 20:13, "Liu, Jinsong" <jinsong.liu@intel.com> wrote:
> Jan Beulich wrote:
>>>>> On 07.08.12 at 09:50, Keir Fraser <keir.xen@gmail.com> wrote:
>>> On 07/08/2012 07:38, "Jan Beulich" <JBeulich@suse.com> wrote:
>>> 
>>>> Otoh, restoring from saved state that only includes MCG_CAP (but
>>>> no MCi_CTL2-s) needs to be handled anyway (forcing MCi_CTL2
>>>> to be zero, which would be trivial as that's the startup state, i.e.
>>>> the only complication here is the variable size save record), so
>>>> pushing this to post-4.2 as well is a reasonable alternative.
>>>> 
>>>> Keir, Ian?
>>> 
>>> I think we should leave it and handle the variable-sized save record
>>> in 4.3. Using hvm_load_entry_zeroextend() to read in save records,
>>> with zero padding for older shorter records, should be
>>> straightforward enough. 
>> 
>> Okay. So Ian, you could then take the corresponding item
>> off the list. Or do you do that only once patches make it
>> through the regression tester?
>> 
> 
> So we will leave it and handle it after 4.2, right?

Yes. It would be nice if you could then resubmit the rest of
the series, addressing pending review comments.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 07:07:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 07:07:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz0Ma-0000GM-Jv; Wed, 08 Aug 2012 07:07:28 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Sz0MX-0000GH-Oy
	for xen-devel@lists.xensource.com; Wed, 08 Aug 2012 07:07:26 +0000
Received: from [85.158.138.51:10336] by server-4.bemta-3.messagelabs.com id
	EC/17-06379-C2012205; Wed, 08 Aug 2012 07:07:24 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-7.tower-174.messagelabs.com!1344409644!22044544!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM2NzQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27839 invoked from network); 8 Aug 2012 07:07:24 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-7.tower-174.messagelabs.com with SMTP;
	8 Aug 2012 07:07:24 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 08 Aug 2012 08:07:23 +0100
Message-Id: <50222C4A0200007800093711@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Wed, 08 Aug 2012 08:07:22 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Jackson" <Ian.Jackson@eu.citrix.com>
References: <4FC766F7.2030802@tiscali.it>
	<20434.1848.747678.259199@mariner.uk.xensource.com>
	<1343824081.27221.84.camel@zakaz.uk.xensource.com>
	<20507.45344.552468.930223@mariner.uk.xensource.com>
	<502137BA0200007800093442@nat28.tlf.novell.com>
	<20513.20168.993350.590876@mariner.uk.xensource.com>
In-Reply-To: <20513.20168.993350.590876@mariner.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xensource.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	"fantonifabio@tiscali.it" <fantonifabio@tiscali.it>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [PATCH] tools/hotplug/Linux/init.d/: added other
 xen kernel modules on xencommons start
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 07.08.12 at 19:22, Ian Jackson <Ian.Jackson@eu.citrix.com> wrote:
>> Jan Beulich writes ("Re: [Xen-devel] [PATCH] tools/hotplug/Linux/init.d/: 
>> - not exclusively try the pv-ops kernel's module names.
> 
> Do you mean that 4.2 should try loading some bigger set of module
> names ?  If so then do you have a list ? :-)

- xen-blkback's counterpart is blkbk
- xen-netback's counterpart is netbk

xen-evtchn/evtchn and xen-gntdev/gntdev are already taken
care of, albeit in a somewhat strange way (the two entries are
separated by an increasing number of other ones, when it is
really pointless to load the second one if the first one's
load succeeded).

To avoid needlessly trying everything, it might additionally
be worth considering the following:
- first try loading via module alias rather than module name (if
  that succeeds for a carefully chosen module that got its alias
  added late - according to our patches, the devname: aliases
  got introduced in 2.6.35, and the xen-backend: ones in 3.1 -,
  only try loading via module alias for all subsequent ones)
- second try loading a (or all) pvops named module(s) (if that/
  any of them succeed(s), there's no need to try _any_ non-
  pvops names subsequently, i.e. including ones that don't even
  exist in the legacy trees)
- last try loading the traditional/forward-port named ones

I notice, however, that in pvops no devname: module aliases
exist - is that an oversight, Konrad? While for misc devices with
dynamic minors these don't help with autoloading, they do help
with abstracting away the module name as would be desirable
here (and later in the tools, when they get to load the modules).

Bottom line is - for the recently (c/s 25728:a6edbc39fc84)
added backend modules the above outlined scheme should work,
while for the infrastructure ones step 1 should be skipped.

Jan
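The fallback order outlined above could be sketched in shell along these lines; the alias strings and the try_load helper are purely illustrative (the real xencommons logic and exact alias names would need checking against the kernels involved):

```shell
#!/bin/sh
# try_load LOADER NAME... : run LOADER on each candidate name in turn
# and stop at the first success, so later (legacy) names are only
# attempted when the earlier (alias / pvops) ones fail.
try_load() {
    loader=$1; shift
    for name in "$@"; do
        "$loader" "$name" 2>/dev/null && return 0
    done
    return 1
}

# In an init script this would be invoked with modprobe, e.g.:
#   try_load modprobe xen-backend:vbd xen-blkback blkbk
#   try_load modprobe xen-backend:vif xen-netback netbk
```

The point of the helper is simply that each probing step short-circuits the ones after it, matching the "skip the rest once one succeeds" reasoning above.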


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 07:08:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 07:08:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz0NR-0000Il-1P; Wed, 08 Aug 2012 07:08:21 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin.df@gmail.com>) id 1Sz0NQ-0000IX-5l
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 07:08:20 +0000
X-Env-Sender: raistlin.df@gmail.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1344409688!4378803!1
X-Originating-IP: [74.125.82.173]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
  RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11202 invoked from network); 8 Aug 2012 07:08:08 -0000
Received: from mail-we0-f173.google.com (HELO mail-we0-f173.google.com)
	(74.125.82.173)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Aug 2012 07:08:08 -0000
Received: by weyz53 with SMTP id z53so315709wey.32
	for <xen-devel@lists.xen.org>; Wed, 08 Aug 2012 00:08:08 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:subject:from:to:cc:date:in-reply-to:references
	:content-type:x-mailer:mime-version;
	bh=AozTcV74WCV5nenGWuUikZP0sxT0clVp9jxnELUFMjg=;
	b=Jw34xQbc6VbqM6v2hM7DfWaX85sBsdLKUoiZwH6vQEPQAHUdyKya/PpckjCQgqV+zN
	AJxlxHtYgZAbzV32iOoZHI341qahv59dZUEB4msFQHd8GA1oC9f8gIZ1rhSEczkbQNm7
	vHEaRpC9AuCew4fxIEE4B6lj6R8G44sgUpMQb0KdtOfUT87moxFEz3rddaF3t6x4jhhr
	v6RmYS0IdpmaH2caA/nt1NByvTujTpolqIBjuTIKwGEOEmMWHpJugpQzDDLwOUjhuY+X
	i9uV7rAQ9qtrkVq15Toqrb75FQCkFATb7Iqm+SuWWmSY9g+xjZQk5PJZpa8eYZeSnpKP
	4dJA==
Received: by 10.216.7.70 with SMTP id 48mr8485484weo.40.1344409688100;
	Wed, 08 Aug 2012 00:08:08 -0700 (PDT)
Received: from [192.168.0.40] (ip-170-181.sn3.eutelia.it. [213.136.170.181])
	by mx.google.com with ESMTPS id ef5sm5452967wib.3.2012.08.08.00.08.04
	(version=TLSv1/SSLv3 cipher=OTHER);
	Wed, 08 Aug 2012 00:08:07 -0700 (PDT)
Message-ID: <1344409670.5763.2.camel@Solace>
From: Dario Faggioli <raistlin@linux.it>
To: Dan Magenheimer <dan.magenheimer@oracle.com>
Date: Wed, 08 Aug 2012 09:07:50 +0200
In-Reply-To: <45216a40-585d-47df-86a0-3b78843d7ef7@default>
References: <1343837796.4958.32.camel@Solace> <501BA1C0.7040100@amd.com>
	<45216a40-585d-47df-86a0-3b78843d7ef7@default>
X-Mailer: Evolution 3.4.3-1
Mime-Version: 1.0
Cc: Andre Przywara <andre.przywara@amd.com>,
	Anil Madhavapeddy <anil@recoil.org>,
	George Dunlap <George.Dunlap@eu.citrix.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>, "Zhang, Yang Z" <yang.z.zhang@intel.com>
Subject: Re: [Xen-devel] NUMA TODO-list for xen-devel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2318029299027796970=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--===============2318029299027796970==
Content-Type: multipart/signed; micalg="pgp-sha1"; protocol="application/pgp-signature";
	boundary="=-q5hHlyQTu337/Gee3836"


--=-q5hHlyQTu337/Gee3836
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Fri, 2012-08-03 at 15:42 -0700, Dan Magenheimer wrote:
> > > [D] - Dynamic memory migration between different nodes of the host. As
> > >        the counterpart of the NUMA-aware scheduler.
> >
> > I once read about a VMware feature: bandwidth-limited migration in the
> > background, hot pages first. So we get flexibility and avoid CPU
> > starving, but still don't hog the system with memory copying.
> > Sounds quite ambitious, though.
>
> Something like this, but between NUMA nodes instead of physical systems?
>
> http://osnet.cs.binghamton.edu/publications/hines09postcopy_osr.pdf
>
Likely. The analogy between this kind of "memory migration" and the
actual live migration we already have is indeed something I want to take
advantage of. The fact that we support that small thing called
_paravirtualization_ is complicating it all quite a bit, but I'm looking
into it... Thanks for the reference. :-)

Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-q5hHlyQTu337/Gee3836
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iEYEABECAAYFAlAiEEYACgkQk4XaBE3IOsTkwgCfWVd4mBosYyWvC5KvOaa9kcFX
RuIAn32TeafeQFJ38e4gzMwdtMosspud
=PIYe
-----END PGP SIGNATURE-----

--=-q5hHlyQTu337/Gee3836--



--===============2318029299027796970==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2318029299027796970==--



From xen-devel-bounces@lists.xen.org Wed Aug 08 07:09:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 07:09:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz0Nv-0000Kt-FB; Wed, 08 Aug 2012 07:08:51 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Sz0Nt-0000KZ-Pa
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 07:08:49 +0000
Received: from [85.158.138.51:21560] by server-9.bemta-3.messagelabs.com id
	9D/6B-14615-08012205; Wed, 08 Aug 2012 07:08:48 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-11.tower-174.messagelabs.com!1344409728!30944167!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM2NzQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26293 invoked from network); 8 Aug 2012 07:08:48 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-11.tower-174.messagelabs.com with SMTP;
	8 Aug 2012 07:08:48 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 08 Aug 2012 08:08:47 +0100
Message-Id: <50222C9E0200007800093719@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Wed, 08 Aug 2012 08:08:46 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Matt Wilson" <msw@amazon.com>
References: <bf922651da96e045cd8f.1343503160@kaos-source-31003.sea31.amazon.com>
	<50164C5F0200007800091325@nat28.tlf.novell.com>
	<CAL54oT0HAgS-+sQVmWJt9kMiTGdoBmR5Mj-TqEwBoQzzEiUwDg@mail.gmail.com>
	<5020D6880200007800093198@nat28.tlf.novell.com>
	<20120807174733.GA5592@US-SEA-R8XVZTX>
In-Reply-To: <20120807174733.GA5592@US-SEA-R8XVZTX>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Jinsong Liu <jinsong.liu@intel.com>, Keir Fraser <keir@xen.org>,
	Jun Nakajima <jun.nakajima@intel.com>,
	Donald D Dugger <donald.d.dugger@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] xen/x86: Add support for cpuid masking on
 Intel Xeon Processor E5 Family
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 07.08.12 at 19:47, Matt Wilson <msw@amazon.com> wrote:
> For what it's worth, I think that the first line of the commit log got
> dropped, which makes for a strange short log message of:
> 
>   Although the "Intel Virtualization Technology FlexMigration

Yes, I'm sorry for that, but I realized this only after pushing, and
I'm unaware of ways to adjust the commit message of an existing
c/s.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 07:15:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 07:15:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz0Td-0000gO-8E; Wed, 08 Aug 2012 07:14:45 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Sz0Tb-0000gI-VX
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 07:14:44 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1344410076!1958002!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM2NzQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11415 invoked from network); 8 Aug 2012 07:14:36 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-11.tower-27.messagelabs.com with SMTP;
	8 Aug 2012 07:14:36 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 08 Aug 2012 08:14:35 +0100
Message-Id: <50222DFB0200007800093746@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Wed, 08 Aug 2012 08:14:35 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Stefano Stabellini" <stefano.stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
	<1344262325-26598-1-git-send-email-stefano.stabellini@eu.citrix.com>
	<501FFFB50200007800092FC7@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208061640270.4645@kaball.uk.xensource.com>
	<502004BE0200007800093028@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208071318430.4645@kaball.uk.xensource.com>
	<CAEBdQ921bVoRss3iuf1oLDrv9JBrhcvx=S16XsjrYWA2eg1Y8g@mail.gmail.com>
	<502131E202000078000933F9@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208071613400.4645@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1208071613400.4645@kaball.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Jean Guyader <jean.guyader@gmail.com>, xen-devel <xen-devel@lists.xen.org>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	"JeanGuyader \(3P\)" <jean.guyader@citrix.com>,
	"Tim Deegan \(3P\)" <Tim.Deegan@citrix.com>
Subject: Re: [Xen-devel] [PATCH 1/5] xen: improve changes to
 xen_add_to_physmap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 07.08.12 at 19:07, Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
> Regarding the name, maybe it should be XEN_ADD_TO_PHYSMAP_FIELD?

Sounds fine (and I like this better than the ..._ARG one you used
below).

> #if (defined(__GNUC__) && !defined(__STRICT_ANSI__)) || (__STDC_VERSION__ >= 201112L)

#if (defined(__GNUC__) && !defined(__STRICT_ANSI__)) || \
    (defined(__STDC_VERSION__) && __STDC_VERSION__ >= 201112L)

This avoids compilers warning about the use of the possibly
undefined __STDC_VERSION__, which was only introduced after
C89 had already been published (and which e.g. gcc indeed
doesn't define if the value would end up below 199409L).

> # define XEN_ADD_TO_PHYSMAP_ARG
> #else
> # define XEN_ADD_TO_PHYSMAP_ARG u
> #endif

Also, please don't forget to #undef it after use.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 07:32:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 07:32:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz0kU-0000qf-SF; Wed, 08 Aug 2012 07:32:10 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Sz0kT-0000qX-Ib
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 07:32:09 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1344411120!1961120!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM2NzQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6082 invoked from network); 8 Aug 2012 07:32:00 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-11.tower-27.messagelabs.com with SMTP;
	8 Aug 2012 07:32:00 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 08 Aug 2012 08:31:58 +0100
Message-Id: <5022320C0200007800093760@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Wed, 08 Aug 2012 08:31:56 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Santosh Jodh" <santosh.jodh@citrix.com>
References: <8b634f1786825a98ce70.1344350958@REDBLD-XS.ad.xensource.com>
In-Reply-To: <8b634f1786825a98ce70.1344350958@REDBLD-XS.ad.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: wei.wang2@amd.com, tim@xen.org, xiantao.zhang@intel.com,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] dump_p2m_table: For IOMMU
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 07.08.12 at 16:49, Santosh Jodh <santosh.jodh@citrix.com> wrote:
> +void __init setup_iommu_dump(void)
> +{
> +    register_keyhandler('I', &iommu_p2m_table);
> +}

Are you btw aware that a handler for 'I' already exists (in
xen/arch/x86/hvm/irq.c)?

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 07:44:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 07:44:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz0vc-000113-1d; Wed, 08 Aug 2012 07:43:40 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin.df@gmail.com>) id 1Sz0va-00010y-M8
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 07:43:38 +0000
Received: from [85.158.138.51:63481] by server-11.bemta-3.messagelabs.com id
	FA/F6-10722-9A812205; Wed, 08 Aug 2012 07:43:37 +0000
X-Env-Sender: raistlin.df@gmail.com
X-Msg-Ref: server-4.tower-174.messagelabs.com!1344411816!30931852!1
X-Originating-IP: [74.125.82.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24477 invoked from network); 8 Aug 2012 07:43:37 -0000
Received: from mail-we0-f173.google.com (HELO mail-we0-f173.google.com)
	(74.125.82.173)
	by server-4.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Aug 2012 07:43:37 -0000
Received: by weyz53 with SMTP id z53so337445wey.32
	for <xen-devel@lists.xen.org>; Wed, 08 Aug 2012 00:43:36 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:subject:from:to:cc:date:in-reply-to:references
	:content-type:x-mailer:mime-version;
	bh=yxq15V/CibVGbUb8/aLuxJagX40SJaucaRdrdj6UQFQ=;
	b=FROF4Ms6TIG0df59YSuCiL8At0XoLZLeeBkgSFBFQCvjIzGi5HB4LzD8mjTYm9F2bU
	uANP3wagBTRtg8OlXa0hYuZ5Jm2riSkBpz66CczCvo+WulNts09Qv7qDf+imgF5Hr0Db
	tmZeN9xBUxHf5XgAL1/C6LdR9oYAzOMmsIpfBQ0LIIe9DEiWAR2jpj/ej0OMVzmnovWe
	ldwULhD5Zob/c55IHdMTIvSjZtJfWS2EXPnU+EF/9v+YJPmTipF5U3XLAVoDWliNrpSG
	WdwspD6Lir3TyOAOmXD0dDzVXScPazHgdZHGcnV8jlTtK6qZCHva2BVosUCo5RGKV7n+
	COpQ==
Received: by 10.180.83.106 with SMTP id p10mr264525wiy.21.1344411816742;
	Wed, 08 Aug 2012 00:43:36 -0700 (PDT)
Received: from [192.168.0.40] (ip-170-181.sn3.eutelia.it. [213.136.170.181])
	by mx.google.com with ESMTPS id k20sm3786962wiv.11.2012.08.08.00.43.34
	(version=TLSv1/SSLv3 cipher=OTHER);
	Wed, 08 Aug 2012 00:43:35 -0700 (PDT)
Message-ID: <1344411808.5763.6.camel@Solace>
From: Dario Faggioli <raistlin@linux.it>
To: Andre Przywara <andre.przywara@amd.com>
Date: Wed, 08 Aug 2012 09:43:28 +0200
In-Reply-To: <501BA1C0.7040100@amd.com>
References: <1343837796.4958.32.camel@Solace> <501BA1C0.7040100@amd.com>
X-Mailer: Evolution 3.4.3-1
Mime-Version: 1.0
Cc: Anil Madhavapeddy <anil@recoil.org>,
	George Dunlap <George.Dunlap@eu.citrix.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>, "Zhang, Yang Z" <yang.z.zhang@intel.com>
Subject: Re: [Xen-devel] NUMA TODO-list for xen-devel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5389100668122147790=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--===============5389100668122147790==
Content-Type: multipart/signed; micalg="pgp-sha1"; protocol="application/pgp-signature";
	boundary="=-0wsv0xJ54i/q+cWLa42t"


--=-0wsv0xJ54i/q+cWLa42t
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Fri, 2012-08-03 at 12:02 +0200, Andre Przywara wrote:
> On 08/01/2012 06:16 PM, Dario Faggioli wrote:
> > ...
> >
> >         * automatic placement of Dom0, if possible (my current series is
> >           only affecting DomU)
>
> I think Dom0 NUMA awareness should be one of the top priorities. If I
> boot my 8-node box with Xen, I end up with a NUMA-clueless Dom0 which
> actually has memory from all 8 nodes and thinks its memory is flat.
>
>
Ok, I updated the Wiki page with a link to this (sub)thread --- more
specifically, the mails where we agree about the new syntax. I can work
on it, but not in the next few days, so let's see if anyone steps up
before I get to look at it. :-)

> > [D] - NUMA aware scheduling in Xen. Don't pin vcpus on nodes' pcpus,
> >        just have them _prefer_ running on the nodes where their memory
> >        is.
>
> This would be really cool. I once thought about something like a
> home-node. We start with placement to allocate memory from one node.
> Then we relax the VCPU-pinning, but mark this node as special for this
> guest, so that it, if possible, happens to run there. But in times of
> CPU pressure we are happy to let it run on other nodes: CPU starving is
> much worse than the NUMA penalty.
>
Yep. Patches coming soon.

Thanks and Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-0wsv0xJ54i/q+cWLa42t
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iEYEABECAAYFAlAiGKAACgkQk4XaBE3IOsSoEwCeNpwhfSJ/P/Xp+3W0CbU7dIWT
w5QAniTYfdWGC2XmztdNrVli58PzRHvm
=aNqW
-----END PGP SIGNATURE-----

--=-0wsv0xJ54i/q+cWLa42t--



--===============5389100668122147790==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5389100668122147790==--



From xen-devel-bounces@lists.xen.org Wed Aug 08 07:44:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 07:44:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz0vc-000113-1d; Wed, 08 Aug 2012 07:43:40 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin.df@gmail.com>) id 1Sz0va-00010y-M8
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 07:43:38 +0000
Received: from [85.158.138.51:63481] by server-11.bemta-3.messagelabs.com id
	FA/F6-10722-9A812205; Wed, 08 Aug 2012 07:43:37 +0000
X-Env-Sender: raistlin.df@gmail.com
X-Msg-Ref: server-4.tower-174.messagelabs.com!1344411816!30931852!1
X-Originating-IP: [74.125.82.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24477 invoked from network); 8 Aug 2012 07:43:37 -0000
Received: from mail-we0-f173.google.com (HELO mail-we0-f173.google.com)
	(74.125.82.173)
	by server-4.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Aug 2012 07:43:37 -0000
Received: by weyz53 with SMTP id z53so337445wey.32
	for <xen-devel@lists.xen.org>; Wed, 08 Aug 2012 00:43:36 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:subject:from:to:cc:date:in-reply-to:references
	:content-type:x-mailer:mime-version;
	bh=yxq15V/CibVGbUb8/aLuxJagX40SJaucaRdrdj6UQFQ=;
	b=FROF4Ms6TIG0df59YSuCiL8At0XoLZLeeBkgSFBFQCvjIzGi5HB4LzD8mjTYm9F2bU
	uANP3wagBTRtg8OlXa0hYuZ5Jm2riSkBpz66CczCvo+WulNts09Qv7qDf+imgF5Hr0Db
	tmZeN9xBUxHf5XgAL1/C6LdR9oYAzOMmsIpfBQ0LIIe9DEiWAR2jpj/ej0OMVzmnovWe
	ldwULhD5Zob/c55IHdMTIvSjZtJfWS2EXPnU+EF/9v+YJPmTipF5U3XLAVoDWliNrpSG
	WdwspD6Lir3TyOAOmXD0dDzVXScPazHgdZHGcnV8jlTtK6qZCHva2BVosUCo5RGKV7n+
	COpQ==
Received: by 10.180.83.106 with SMTP id p10mr264525wiy.21.1344411816742;
	Wed, 08 Aug 2012 00:43:36 -0700 (PDT)
Received: from [192.168.0.40] (ip-170-181.sn3.eutelia.it. [213.136.170.181])
	by mx.google.com with ESMTPS id k20sm3786962wiv.11.2012.08.08.00.43.34
	(version=TLSv1/SSLv3 cipher=OTHER);
	Wed, 08 Aug 2012 00:43:35 -0700 (PDT)
Message-ID: <1344411808.5763.6.camel@Solace>
From: Dario Faggioli <raistlin@linux.it>
To: Andre Przywara <andre.przywara@amd.com>
Date: Wed, 08 Aug 2012 09:43:28 +0200
In-Reply-To: <501BA1C0.7040100@amd.com>
References: <1343837796.4958.32.camel@Solace> <501BA1C0.7040100@amd.com>
X-Mailer: Evolution 3.4.3-1
Mime-Version: 1.0
Cc: Anil Madhavapeddy <anil@recoil.org>,
	George Dunlap <George.Dunlap@eu.citrix.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>, "Zhang, Yang Z" <yang.z.zhang@intel.com>
Subject: Re: [Xen-devel] NUMA TODO-list for xen-devel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5389100668122147790=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--===============5389100668122147790==
Content-Type: multipart/signed; micalg="pgp-sha1"; protocol="application/pgp-signature";
	boundary="=-0wsv0xJ54i/q+cWLa42t"


--=-0wsv0xJ54i/q+cWLa42t
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Fri, 2012-08-03 at 12:02 +0200, Andre Przywara wrote:
> On 08/01/2012 06:16 PM, Dario Faggioli wrote:
> > ...
> >
> >         * automatic placement of Dom0, if possible (my current series is
> >           only affecting DomU)
> 
> I think Dom0 NUMA awareness should be one of the top priorities. If I
> boot my 8-node box with Xen, I end up with a NUMA-clueless Dom0 which
> actually has memory from all 8 nodes and thinks its memory is flat.
>
Ok, I updated the Wiki page with a link to this (sub)thread --- more
specifically, the mails where we agree about the new syntax. I can work
on it, but not in the next few days, so let's see if anyone steps up
before I get to look at it. :-)

> > [D] - NUMA aware scheduling in Xen. Don't pin vcpus on nodes' pcpus,
> >        just have them _prefer_ running on the nodes where their memory
> >        is.
> 
> This would be really cool. I once thought about something like a
> home-node. We start with placement to allocate memory from one node.
> Then we relax the VCPU-pinning, but mark this node as special for this
> guest, so that it, if possible, happens to get run there. But in times of
> CPU pressure we are happy to let it run on other nodes: CPU starving is
> much worse than NUMA penalty.
> 
Yep. Patches coming soon.
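The home-node idea sketched above (prefer the node holding the guest's memory, but never starve a vcpu) could look roughly like this. This is an illustrative sketch only, not actual Xen scheduler code; all names (`vcpu_sketch`, `pick_pcpu`, etc.) are hypothetical:

```c
#include <stdint.h>

#define NR_PCPUS 8

/* Hypothetical vcpu state: the node its memory was placed on. */
struct vcpu_sketch {
    int home_node;
};

/*
 * Pick a pcpu for v: prefer an idle pcpu on the home node, but fall
 * back to any idle pcpu rather than letting the vcpu starve -- CPU
 * starvation is much worse than the NUMA penalty.
 */
static int pick_pcpu(const struct vcpu_sketch *v,
                     const int pcpu_node[NR_PCPUS],
                     const int pcpu_busy[NR_PCPUS])
{
    int i, fallback = -1;

    for (i = 0; i < NR_PCPUS; i++) {
        if (pcpu_busy[i])
            continue;
        if (pcpu_node[i] == v->home_node)
            return i;           /* idle pcpu on the home node: best case */
        if (fallback < 0)
            fallback = i;       /* remember the first idle pcpu anywhere */
    }
    /* CPU pressure on the home node: run remotely instead of waiting. */
    return fallback;
}
```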

Thanks and Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-0wsv0xJ54i/q+cWLa42t
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iEYEABECAAYFAlAiGKAACgkQk4XaBE3IOsSoEwCeNpwhfSJ/P/Xp+3W0CbU7dIWT
w5QAniTYfdWGC2XmztdNrVli58PzRHvm
=aNqW
-----END PGP SIGNATURE-----

--=-0wsv0xJ54i/q+cWLa42t--



--===============5389100668122147790==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5389100668122147790==--



From xen-devel-bounces@lists.xen.org Wed Aug 08 07:46:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 07:46:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz0xk-00016B-JE; Wed, 08 Aug 2012 07:45:52 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Sz0xi-000166-MY
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 07:45:50 +0000
Received: from [85.158.139.83:50252] by server-3.bemta-5.messagelabs.com id
	28/B9-31899-D2912205; Wed, 08 Aug 2012 07:45:49 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-182.messagelabs.com!1344411949!30853645!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg0Mjg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2779 invoked from network); 8 Aug 2012 07:45:49 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-5.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Aug 2012 07:45:49 -0000
X-IronPort-AV: E=Sophos;i="4.77,730,1336348800"; d="scan'208";a="13901852"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Aug 2012 07:45:24 +0000
Received: from [127.0.0.1] (10.80.16.67) by smtprelay.citrix.com
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Wed, 8 Aug 2012
	08:45:23 +0100
Message-ID: <1344411923.11783.1.camel@dagon.hellion.org.uk>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Wed, 8 Aug 2012 08:45:23 +0100
In-Reply-To: <50222DFB0200007800093746@nat28.tlf.novell.com>
References: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
	<1344262325-26598-1-git-send-email-stefano.stabellini@eu.citrix.com>
	<501FFFB50200007800092FC7@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208061640270.4645@kaball.uk.xensource.com>
	<502004BE0200007800093028@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208071318430.4645@kaball.uk.xensource.com>
	<CAEBdQ921bVoRss3iuf1oLDrv9JBrhcvx=S16XsjrYWA2eg1Y8g@mail.gmail.com>
	<502131E202000078000933F9@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208071613400.4645@kaball.uk.xensource.com>
	<50222DFB0200007800093746@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Jean Guyader <jean.guyader@gmail.com>, xen-devel <xen-devel@lists.xen.org>,
	"Tim Deegan \(3P\)" <Tim.Deegan@citrix.com>, "Jean Guyader
	\(3P\)" <jean.guyader@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 1/5] xen: improve changes to
 xen_add_to_physmap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-08-08 at 08:14 +0100, Jan Beulich wrote:
> >>> On 07.08.12 at 19:07, Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
> > Regarding the name, maybe it should be XEN_ADD_TO_PHYSMAP_FIELD?
> 
> Sounds fine (and I like this better than the ..._ARG one you used
> below.)
> 
> > #if (defined(__GNUC__) && !defined(__STRICT_ANSI__)) || (__STDC_VERSION__ >= 201112L)
> 
> #if (defined(__GNUC__) && !defined(__STRICT_ANSI__)) || \
>     (defined(__STDC_VERSION__) && __STDC_VERSION__ >= 201112L)

The downside of this is that users of this header might need to change
their code depending on what compiler they actually build with today (or
even what options).

Is adding the ".u" throughout the Xen code base too intrusive?

> avoiding compiler warnings about the use of the possibly
> undefined __STDC_VERSION__, which only got introduced
> after C89 was already published (and which e.g. gcc indeed
> doesn't define if the value would end up being below 199409L).
> 
> > # define XEN_ADD_TO_PHYSMAP_ARG
> > #else
> > # define XEN_ADD_TO_PHYSMAP_ARG u
> > #endif
> 
> Also, please don't forget to #undef it after use.
> 
> Jan
> 
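For context, the trick under discussion can be sketched as follows. This is only an illustration of the technique, not the final header: the macro and struct names (`XEN_ATP_FIELD`, `xen_add_to_physmap_sketch`) and the union members are hypothetical. Where anonymous unions are available (GNU extensions or C11) the accessor macro names the member directly; otherwise the union gets a name and the macro supplies the `.u` step:

```c
#include <stdint.h>

#if (defined(__GNUC__) && !defined(__STRICT_ANSI__)) || \
    (defined(__STDC_VERSION__) && __STDC_VERSION__ >= 201112L)
/* Anonymous unions supported: access the member directly. */
# define XEN_ATP_FIELD(name) name
#else
/* Fallback: the union is named "u", so insert the extra step. */
# define XEN_ATP_FIELD(name) u.name
#endif

struct xen_add_to_physmap_sketch {
    uint32_t space;
#if (defined(__GNUC__) && !defined(__STRICT_ANSI__)) || \
    (defined(__STDC_VERSION__) && __STDC_VERSION__ >= 201112L)
    union {                     /* anonymous union */
        uint64_t idx;
        uint64_t foreign_id;
    };
#else
    union {                     /* named union for strict C89 compilers */
        uint64_t idx;
        uint64_t foreign_id;
    } u;
#endif
};
```

Callers then write `xatp.XEN_ATP_FIELD(idx)` and get the right access path either way, which is what makes leaving the macro defined (rather than `#undef`ing it) a usability question for out-of-tree users.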



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 07:49:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 07:49:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz115-0001I9-H0; Wed, 08 Aug 2012 07:49:19 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Sz114-0001Hx-55
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 07:49:18 +0000
Received: from [85.158.143.35:24878] by server-3.bemta-4.messagelabs.com id
	B7/7C-31486-DF912205; Wed, 08 Aug 2012 07:49:17 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1344412121!14008829!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg0Mjg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30635 invoked from network); 8 Aug 2012 07:48:41 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Aug 2012 07:48:41 -0000
X-IronPort-AV: E=Sophos;i="4.77,730,1336348800"; d="scan'208";a="13901920"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Aug 2012 07:48:41 +0000
Received: from [127.0.0.1] (10.80.16.67) by smtprelay.citrix.com
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Wed, 8 Aug 2012
	08:48:40 +0100
Message-ID: <1344412120.11783.4.camel@dagon.hellion.org.uk>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Wed, 8 Aug 2012 08:48:40 +0100
In-Reply-To: <50212F6902000078000933E4@nat28.tlf.novell.com>
References: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
	<1344262325-26598-4-git-send-email-stefano.stabellini@eu.citrix.com>
	<5020025C0200007800092FEE@nat28.tlf.novell.com>
	<1344268070.11339.53.camel@zakaz.uk.xensource.com>
	<502005B2020000780009302B@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208061659320.4645@kaball.uk.xensource.com>
	<5020D0B90200007800093181@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208071331030.4645@kaball.uk.xensource.com>
	<50212F6902000078000933E4@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: xen-devel <xen-devel@lists.xen.org>,
	"Tim Deegan \(3P\)" <Tim.Deegan@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 4/5] xen: introduce XEN_GUEST_HANDLE_PARAM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-08-07 at 14:08 +0100, Jan Beulich wrote:
> See how
> x86-64 just recently got the ILP32 model [aka x32] added for
> this very reason.)

For userspace only though, with the kernel remaining a full 64-bit
kernel. I don't know what direction they eventually took but there was
also a contingent in favour of using the 64-bit syscall ABI for x32.

I don't think that the wastage at the hypercall interface is going to be
significant, especially once you consider that many of the 32-bit fields
actually need to be 64-bit for a compat mode guest anyway; e.g. MFNs should
certainly be 64-bit regardless, or else you impose limitations on 32-bit
guests which the 64-bit hypervisor doesn't itself have.
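To make the MFN-width point concrete: with standard 4 KiB pages, a 32-bit MFN field caps the machine addresses a guest can name at 2^(32+12) bytes = 16 TiB, a limit the 64-bit hypervisor itself does not have. A small sketch of that arithmetic (the function name is hypothetical, purely for illustration):

```c
#include <stdint.h>

#define PAGE_SHIFT 12           /* 4 KiB pages */

/*
 * Bytes addressable when machine frame numbers are mfn_bits wide:
 * 2^mfn_bits pages of 2^PAGE_SHIFT bytes each.  Saturate rather than
 * shifting by >= 64 bits, which would be undefined behaviour.
 */
static uint64_t bytes_addressable(unsigned int mfn_bits)
{
    if (mfn_bits + PAGE_SHIFT >= 64)
        return UINT64_MAX;
    return (uint64_t)1 << (mfn_bits + PAGE_SHIFT);
}
```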

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 07:59:26 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 07:59:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz1Af-0001Tn-K2; Wed, 08 Aug 2012 07:59:13 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Sz1Ae-0001Tf-Ih
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 07:59:12 +0000
Received: from [85.158.139.83:31644] by server-4.bemta-5.messagelabs.com id
	35/5A-32474-F4C12205; Wed, 08 Aug 2012 07:59:11 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-182.messagelabs.com!1344412751!23601241!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg0Mjg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23397 invoked from network); 8 Aug 2012 07:59:11 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-11.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Aug 2012 07:59:11 -0000
X-IronPort-AV: E=Sophos;i="4.77,730,1336348800"; d="scan'208";a="13902112"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Aug 2012 07:59:11 +0000
Received: from [127.0.0.1] (10.80.16.67) by smtprelay.citrix.com
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Wed, 8 Aug 2012
	08:59:10 +0100
Message-ID: <1344412750.11783.12.camel@dagon.hellion.org.uk>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Wed, 8 Aug 2012 08:59:10 +0100
In-Reply-To: <50212C2B02000078000933CE@nat28.tlf.novell.com>
References: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
	<1344262325-26598-3-git-send-email-stefano.stabellini@eu.citrix.com>
	<5020011F0200007800092FD7@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208071257330.4645@kaball.uk.xensource.com>
	<50212C2B02000078000933CE@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: xen-devel <xen-devel@lists.xen.org>,
	"Tim Deegan \(3P\)" <Tim.Deegan@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 3/5] xen: few more xen_ulong_t substitutions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-08-07 at 13:54 +0100, Jan Beulich wrote:
> >>> On 07.08.12 at 14:08, Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
> > On Mon, 6 Aug 2012, Jan Beulich wrote:
> >> >>> On 06.08.12 at 16:12, Stefano Stabellini <stefano.stabellini@eu.citrix.com>  wrote:
> >> > --- a/xen/include/public/physdev.h
> >> > +++ b/xen/include/public/physdev.h
> >> > @@ -124,7 +124,7 @@ DEFINE_XEN_GUEST_HANDLE(physdev_set_iobitmap_t);
> >> >  #define PHYSDEVOP_apic_write             9
> >> >  struct physdev_apic {
> >> >      /* IN */
> >> > -    unsigned long apic_physbase;
> >> > +    xen_ulong_t apic_physbase;
> >> >      uint32_t reg;
> >> >      /* IN or OUT */
> >> >      uint32_t value;
> >> >...
> > 
> > This change is actually not required, considering that ARM doesn't have
> > an APIC. I changed apic_physbase to xen_ulong_t only for consistency,
> > but it wouldn't make any difference for ARM (or x86).
> > If you think that it is better to keep it unsigned long, I'll remove
> > this chunk for the patch.
> 
> I'd prefer that. Also any others that you may not actually need
> to be converted.
> 
> Also, seems I was misremembering how PPC dealt with this - they
> indeed had xen_ulong_t defined to be a 64-bit value too.
> 
> > Considering that each field of a multicall_entry is usually passed as an
> > hypercall parameter, they should all remain unsigned long.
> 
> That'll give you subtle bugs I'm afraid: do_memory_op()'s
> encoding of a continuation start extent (into the 'cmd' value),
> for example, depends on being able to store the full value into
> the command field of the multicall structure. The limit checking
> of the permitted number of extents therefore is different
> between native (ULONG_MAX >> MEMOP_EXTENT_SHIFT) and
> compat (UINT_MAX >> MEMOP_EXTENT_SHIFT).

On compat this would be 2^(32-6), i.e. a bit over 67 million extents. I
think we can live with this limit on 64-bit ARM too.

We could have lived with it on 64-bit x86 too, but by the time we noticed
it was too late, so we have to live with it.
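The continuation encoding being discussed packs the restart extent into the upper bits of the "cmd" value, with the sub-op in the low bits; the shift and mask below match Xen's public memory.h constants, while the helper names are illustrative only. The width of cmd (32 vs 64 bits) is what produces the differing extent limits:

```c
#include <limits.h>

/* As in Xen's public/memory.h: low 6 bits carry the memory sub-op,
 * the remaining bits carry the continuation's start extent. */
#define MEMOP_EXTENT_SHIFT 6
#define MEMOP_CMD_MASK     ((1UL << MEMOP_EXTENT_SHIFT) - 1)

/* Fold a restart point into the cmd value (illustrative helper). */
static unsigned long encode_cmd(unsigned long op, unsigned long start_extent)
{
    return op | (start_extent << MEMOP_EXTENT_SHIFT);
}

static unsigned long decode_op(unsigned long cmd)
{
    return cmd & MEMOP_CMD_MASK;
}

static unsigned long decode_start_extent(unsigned long cmd)
{
    return cmd >> MEMOP_EXTENT_SHIFT;
}
```

On a native 64-bit guest the limit is ULONG_MAX >> MEMOP_EXTENT_SHIFT; on compat, cmd is 32 bits wide, so it is UINT_MAX >> MEMOP_EXTENT_SHIFT (67,108,863 extents).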

>  I would
> neither find it very appealing to have do_memory_op() adjusted
> for dealing with this new special case, nor am I sure that's the
> only place your approach would cause problems if you excluded
> the multicall structure from the model change.

On the other hand it doesn't seem right for the multicall args to be
able to represent values which you couldn't pass to the regular
hypercall itself (since the 32 bit args are only 32 bit).

Perhaps it might work if use of the upper portions for 32-bit guests were
restricted to continuations only -- it would rely on the top half of the
64-bit x0 register surviving a trip back to 32-bit mode and into hypervisor
mode again. I think that can be expected (but I'd have to double-check the
AArch64 docs to be sure). On the other-other hand that would mean that a
32-on-32 guest would see different things to a 32-on-64 guest, which is bad.

I don't expect we will be able to get away with no compat layer whatsoever.
At the minimum there will have to be compat hypercall entry points which
correctly extend the 32-bit arguments to 64 bits, and do the same for the
return value etc., but if we can avoid the majority of the struct munging
(ideally completely avoiding the need for a hypercall argument translation
area) then I think that is worth doing.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 07:59:26 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 07:59:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz1Af-0001Tn-K2; Wed, 08 Aug 2012 07:59:13 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Sz1Ae-0001Tf-Ih
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 07:59:12 +0000
Received: from [85.158.139.83:31644] by server-4.bemta-5.messagelabs.com id
	35/5A-32474-F4C12205; Wed, 08 Aug 2012 07:59:11 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-182.messagelabs.com!1344412751!23601241!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg0Mjg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23397 invoked from network); 8 Aug 2012 07:59:11 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-11.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Aug 2012 07:59:11 -0000
X-IronPort-AV: E=Sophos;i="4.77,730,1336348800"; d="scan'208";a="13902112"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Aug 2012 07:59:11 +0000
Received: from [127.0.0.1] (10.80.16.67) by smtprelay.citrix.com
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Wed, 8 Aug 2012
	08:59:10 +0100
Message-ID: <1344412750.11783.12.camel@dagon.hellion.org.uk>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Wed, 8 Aug 2012 08:59:10 +0100
In-Reply-To: <50212C2B02000078000933CE@nat28.tlf.novell.com>
References: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
	<1344262325-26598-3-git-send-email-stefano.stabellini@eu.citrix.com>
	<5020011F0200007800092FD7@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208071257330.4645@kaball.uk.xensource.com>
	<50212C2B02000078000933CE@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: xen-devel <xen-devel@lists.xen.org>,
	"Tim Deegan \(3P\)" <Tim.Deegan@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 3/5] xen: few more xen_ulong_t substitutions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-08-07 at 13:54 +0100, Jan Beulich wrote:
> >>> On 07.08.12 at 14:08, Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
> > On Mon, 6 Aug 2012, Jan Beulich wrote:
> >> >>> On 06.08.12 at 16:12, Stefano Stabellini <stefano.stabellini@eu.citrix.com>  wrote:
> >> > --- a/xen/include/public/physdev.h
> >> > +++ b/xen/include/public/physdev.h
> >> > @@ -124,7 +124,7 @@ DEFINE_XEN_GUEST_HANDLE(physdev_set_iobitmap_t);
> >> >  #define PHYSDEVOP_apic_write             9
> >> >  struct physdev_apic {
> >> >      /* IN */
> >> > -    unsigned long apic_physbase;
> >> > +    xen_ulong_t apic_physbase;
> >> >      uint32_t reg;
> >> >      /* IN or OUT */
> >> >      uint32_t value;
> >> >...
> > 
> > This change is actually not required, considering that ARM doesn't have
> > an APIC. I changed apic_physbase to xen_ulong_t only for consistency,
> > but it wouldn't make any difference for ARM (or x86).
> > If you think that it is better to keep it unsigned long, I'll remove
> > this chunk from the patch.
> 
> I'd prefer that. Also any others that you may not actually need
> to be converted.
> 
> Also, seems I was misremembering how PPC dealt with this - they
> indeed had xen_ulong_t defined to be a 64-bit value too.
> 
> > Considering that each field of a multicall_entry is usually passed as a
> > hypercall parameter, they should all remain unsigned long.
> 
> That'll give you subtle bugs I'm afraid: do_memory_op()'s
> encoding of a continuation start extent (into the 'cmd' value),
> for example, depends on being able to store the full value into
> the command field of the multicall structure. The limit checking
> of the permitted number of extents therefore is different
> between native (ULONG_MAX >> MEMOP_EXTENT_SHIFT) and
> compat (UINT_MAX >> MEMOP_EXTENT_SHIFT).

On compat this would be 2^(32-6), i.e. a bit over 67 million extents. I
think we can live with this limit on 64-bit ARM too.

We could have lived with it on 64-bit x86 too, but by the time we
noticed it was too late, so we have to live with it.

>  I would
> neither find it very appealing to have do_memory_op() adjusted
> for dealing with this new special case, nor am I sure that's the
> only place your approach would cause problems if you excluded
> the multicall structure from the model change.

On the other hand it doesn't seem right for the multicall args to be
able to represent values which you couldn't pass to the regular
hypercall itself (since the 32-bit args are only 32 bits wide).

Perhaps it would work if use of the upper portions by 32-bit guests
were restricted to continuations only -- it would rely on the top half
of the 64-bit x0 register surviving a trip back to 32-bit mode and into
hypervisor mode again. I think that can be expected (but I'd have to
double-check the AArch64 docs to be sure). On the other-other hand,
that would mean a 32-on-32 guest would see different things from a
32-on-64 guest, which is bad.

I don't expect we will be able to get away with no compat layer
whatsoever. At the minimum there will have to be compat hypercall entry
points which correctly extend the 32-bit arguments to 64 bits, and do
the same for the return value etc., but if we can avoid the majority of
the struct munging (ideally avoiding the need for a hypercall argument
translation area entirely) then I think that is worth doing.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 08:02:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 08:02:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz1E5-00022E-4y; Wed, 08 Aug 2012 08:02:45 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Sz1E3-000220-FA
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 08:02:43 +0000
Received: from [85.158.139.83:44620] by server-2.bemta-5.messagelabs.com id
	AA/F8-25118-22D12205; Wed, 08 Aug 2012 08:02:42 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-182.messagelabs.com!1344412962!30857523!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg0Mjg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17229 invoked from network); 8 Aug 2012 08:02:42 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-5.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Aug 2012 08:02:42 -0000
X-IronPort-AV: E=Sophos;i="4.77,730,1336348800"; d="scan'208";a="13902226"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Aug 2012 08:02:41 +0000
Received: from [127.0.0.1] (10.80.16.67) by smtprelay.citrix.com
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Wed, 8 Aug 2012
	09:02:40 +0100
Message-ID: <1344412960.11783.15.camel@dagon.hellion.org.uk>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Wed, 8 Aug 2012 09:02:40 +0100
In-Reply-To: <50222C9E0200007800093719@nat28.tlf.novell.com>
References: <bf922651da96e045cd8f.1343503160@kaos-source-31003.sea31.amazon.com>
	<50164C5F0200007800091325@nat28.tlf.novell.com>
	<CAL54oT0HAgS-+sQVmWJt9kMiTGdoBmR5Mj-TqEwBoQzzEiUwDg@mail.gmail.com>
	<5020D6880200007800093198@nat28.tlf.novell.com>
	<20120807174733.GA5592@US-SEA-R8XVZTX>
	<50222C9E0200007800093719@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Jinsong Liu <jinsong.liu@intel.com>, "Keir
	\(Xen.org\)" <keir@xen.org>, Matt Wilson <msw@amazon.com>, Donald D
	Dugger <donald.d.dugger@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Jun Nakajima <jun.nakajima@intel.com>
Subject: Re: [Xen-devel] [PATCH] xen/x86: Add support for cpuid masking on
 Intel Xeon Processor E5 Family
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-08-08 at 08:08 +0100, Jan Beulich wrote:
> >>> On 07.08.12 at 19:47, Matt Wilson <msw@amazon.com> wrote:
> > For what it's worth, I think that the first line of the commit log got
> > dropped, which makes for a strange short log message of:
> > 
> >   Although the "Intel Virtualization Technology FlexMigration
> 
> Yes, I'm sorry for that, but I realized this only after pushing, and
> I'm unaware of ways to adjust the commit message of an existing
> c/s.

There is an hg rebase extension, something like git's rebase -i, but I
find the easiest way is to use the mq extension's function which pulls
the tip commit into a patch in the queue.

Actually, that's not quite true: I find the real easiest way is to hg
strip the wrong commit and try again.

Actually, that's not true either: the real easiest way IMHO is to use a
git mirror for all the leg work and Ian J's git2hgapply script to
actually "apply" it. YMMV depending on your feelings about git
though ;-)

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 08:33:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 08:33:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz1hC-0002I1-PA; Wed, 08 Aug 2012 08:32:50 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <aware.why@gmail.com>) id 1Sz1hB-0002Hq-34
	for xen-devel@lists.xensource.com; Wed, 08 Aug 2012 08:32:49 +0000
Received: from [85.158.138.51:27939] by server-6.bemta-3.messagelabs.com id
	84/41-02321-03422205; Wed, 08 Aug 2012 08:32:48 +0000
X-Env-Sender: aware.why@gmail.com
X-Msg-Ref: server-9.tower-174.messagelabs.com!1344414767!31027094!1
X-Originating-IP: [74.125.83.43]
X-SpamReason: No, hits=1.1 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_MESSAGE,MIME_BASE64_TEXT,ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6420 invoked from network); 8 Aug 2012 08:32:47 -0000
Received: from mail-ee0-f43.google.com (HELO mail-ee0-f43.google.com)
	(74.125.83.43)
	by server-9.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Aug 2012 08:32:47 -0000
Received: by eekd4 with SMTP id d4so128286eek.30
	for <xen-devel@lists.xensource.com>;
	Wed, 08 Aug 2012 01:32:46 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=ljcVI5NxDfQNG3PcHlcO7im5/tBWenht2/m6cowmCsQ=;
	b=dIrXyHihdJh5NRW9qH3bGFTUZUEVHymF71TabOvk5jdNatFv/ypjEQlT56BmiJowf7
	7ZX5vyE666JSXFifvdsGozlwMk7jwXBFYpSXXsJKHx6IkpeOz2RO3iH6/dClJaKSrPtk
	YlgnpG6xeKkL9bmipLSdKJdYTMFQvRb6sG8OolsurEf9V2PblAo4zyB7lgl3JLEh2rko
	LfBnHQ/bLmUGE2udzgTwrxnValGjpODvSAm8ij+qBoMKWMDWi17C1akSrzeyk9+qrSHq
	jcNwmXp6Ng3Pp1Zud8QdM5LrQffpZJHEBIQLMMcgc1hRSWCehC2x4hEnJiQUS6ZvpdAI
	wmzg==
MIME-Version: 1.0
Received: by 10.14.209.129 with SMTP id s1mr11463632eeo.24.1344414766881; Wed,
	08 Aug 2012 01:32:46 -0700 (PDT)
Received: by 10.14.218.200 with HTTP; Wed, 8 Aug 2012 01:32:46 -0700 (PDT)
Date: Wed, 8 Aug 2012 16:32:46 +0800
Message-ID: <CA+ePHTAGBYkyO1kUH2Cuf_fRRB4ELZeUO5Ot_d79c30N1=+VUQ@mail.gmail.com>
From: =?UTF-8?B?6ams56OK?= <aware.why@gmail.com>
To: xen-devel@lists.xensource.com
Subject: [Xen-devel] [help] problem with `tools/xenstore/xs.c: xs_talkv()`
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8128598206000349684=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8128598206000349684==
Content-Type: multipart/alternative; boundary=047d7b603a789c5b9d04c6bcf413

--047d7b603a789c5b9d04c6bcf413
Content-Type: text/plain; charset=ISO-8859-1

Hi all,

    In xen-4.1.2 src/tools/xenstore/xs.c, the xs_talkv() function
contains the following lines:

 437        mutex_lock(&h->request_mutex);
 438
 439        if (!xs_write_all(h->fd, &msg, sizeof(msg)))
 440                goto fail;
 441
 442        for (i = 0; i < num_vecs; i++)
 443                if (!xs_write_all(h->fd, iovec[i].iov_base, iovec[i].iov_len))
 444                        goto fail;
 445
 446        ret = read_reply(h, &msg.type, len);
 447        if (!ret)
 448                goto fail;
 449
 450        mutex_unlock(&h->request_mutex);

The above seems to show that, immediately after writing the request to
h->fd, read_reply (via read_message) reads back from h->fd.
What is the reason for doing it this way?


Thanks

--047d7b603a789c5b9d04c6bcf413--


--===============8128598206000349684==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8128598206000349684==--


From xen-devel-bounces@lists.xen.org Wed Aug 08 08:35:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 08:35:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz1jt-0002OX-Bh; Wed, 08 Aug 2012 08:35:37 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Sz1js-0002OS-5I
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 08:35:36 +0000
Received: from [85.158.143.35:33599] by server-1.bemta-4.messagelabs.com id
	EC/8E-20198-7D422205; Wed, 08 Aug 2012 08:35:35 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1344414933!16045356!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM2NzQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4110 invoked from network); 8 Aug 2012 08:35:34 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-21.messagelabs.com with SMTP;
	8 Aug 2012 08:35:34 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 08 Aug 2012 09:35:32 +0100
Message-Id: <502240F302000078000937A6@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Wed, 08 Aug 2012 09:35:31 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ben Guthro" <ben@guthro.net>
References: <CAOvdn6WwgBD-2yf1uu3m9-16=Tb5KUrybXf2KibtnxKbP7YtwA@mail.gmail.com>
	<CAOvdn6UBW-yTOMd3YSf5M9JiHLRivyaKL-qzcKG8epszqaKmGg@mail.gmail.com>
	<20120807163320.GC15053@phenom.dumpdata.com>
	<CAOvdn6XcRGKtJxGwJO8J+ZBJr89wfTLc1woW5rhtMM540H9h1w@mail.gmail.com>
	<CAOvdn6XUoSeGoo7X=AJ1kpDxZB9HZEv+TJ4v-=FHcyR4=CHK-w@mail.gmail.com>
In-Reply-To: <CAOvdn6XUoSeGoo7X=AJ1kpDxZB9HZEv+TJ4v-=FHcyR4=CHK-w@mail.gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen4.2 S3 regression?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 07.08.12 at 22:14, Ben Guthro <ben@guthro.net> wrote:
> Any suggestions on how best to chase this down?
> 
> The first S3 suspend/resume cycle works, but the second does not.
> 
> On the second try, I never get any interrupts delivered to ahci.
> (at least according to /proc/interrupts)
> 
> 
> syslog traces from the first (good) and the second (bad) are attached,
> as well as the output from the "*" debug Ctrl+a handler in both cases.

You should have provided this also for the state before the
first suspend. The state after the first resume already looks
corrupted (presumably just not as badly):

(XEN) PCI-MSI interrupt information:
(XEN)  MSI    26 vec=71 lowest  edge   assert  log lowest dest=00000001 mask=0/1/-1
(XEN)  MSI    27 vec=00  fixed  edge deassert phys lowest dest=00000001 mask=0/1/-1
                     ^^
(XEN)  MSI    28 vec=29 lowest  edge   assert  log lowest dest=00000001 mask=0/1/-1
(XEN)  MSI    29 vec=79 lowest  edge   assert  log lowest dest=00000001 mask=0/1/-1
(XEN)  MSI    30 vec=81 lowest  edge   assert  log lowest dest=00000001 mask=0/1/-1
(XEN)  MSI    31 vec=99 lowest  edge   assert  log lowest dest=00000001 mask=0/1/-1

so this is likely the reason for things falling apart on the second
iteration:

(XEN)   Interrupt Remapping: supported and enabled.
(XEN)   Interrupt remapping table (nr_entry=0x10000. Only dump P=1 entries here):
(XEN)        SVT  SQ   SID      DST  V  AVL DLM TM RH DM FPD P
(XEN)   0000:  1   0  f0f8 00000001 38    0   1  0  1  1   0 1
...
(XEN)   0014:  1   0  00d8 00000001 a1    0   1  0  1  1   0 1
(XEN)   0015:  1   0  00fa 00000001 00    0   0  0  0  0   0 1
                                              ^     ^  ^
(XEN)   0016:  1   0  f0f8 00000001 31    0   1  1  1  1   0 1
(XEN)   0017:  1   0  00a0 00000001 a9    0   1  0  1  1   0 1
(XEN)   0018:  1   0  0200 00000001 b1    0   1  0  1  1   0 1
(XEN)   0019:  1   0  00c8 00000001 c9    0   1  0  1  1   0 1

Surprisingly in both cases we get (with the other vector fields varying
accordingly)

(XEN)    IRQ:  26 affinity:0001 vec:71 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:279(-S--),
(XEN)    IRQ:  27 affinity:0001 vec:21 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:278(-S--),
                                    ^^
(XEN)    IRQ:  28 affinity:0001 vec:29 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:277(-S--),
(XEN)    IRQ:  29 affinity:0001 vec:79 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:276(-S--),
(XEN)    IRQ:  30 affinity:0001 vec:81 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:275(PS--),
(XEN)    IRQ:  31 affinity:0001 vec:99 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:274(PS--),

The interrupt in question belongs to 0000:00:1f.2, i.e. the
AHCI controller.

Unfortunately I can't make sense of the kernel side config space
restore messages - an offset of 1 gets reported for the device in
question (and various other odd offsets exist), yet 3.5's
drivers/pci/pci.c:pci_restore_config_space_range() calls
pci_restore_config_dword() with an offset that's always divisible
by 4. Could you clarify which kernel version you were using here?
We first need to determine whether the kernel corrupts something
(after all, config space isn't protected from Dom0 modifications) -
if that's the case, we may need to understand why older Xen was
immune against that. If that's not the case, adding some extra
logging to Xen's pci_restore_msi_state() would seem the best
first step, plus (maybe) logging of Dom0 post-resume config space
accesses to the device in question.

The most likely thing happening (though unclear where) is that
the corresponding struct msi_msg instance gets cleared in the
course of the first resume (but after the corresponding interrupt
remapping entry already got restored).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 08:48:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 08:48:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz1wO-0002cJ-Ke; Wed, 08 Aug 2012 08:48:32 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Sz1wN-0002cE-FW
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 08:48:31 +0000
Received: from [85.158.139.83:33466] by server-10.bemta-5.messagelabs.com id
	74/A7-24472-ED722205; Wed, 08 Aug 2012 08:48:30 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-6.tower-182.messagelabs.com!1344415710!26958410!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM2NzQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23851 invoked from network); 8 Aug 2012 08:48:30 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-6.tower-182.messagelabs.com with SMTP;
	8 Aug 2012 08:48:30 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 08 Aug 2012 09:48:29 +0100
Message-Id: <502243FC02000078000937B6@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Wed, 08 Aug 2012 09:48:28 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>
References: <bf922651da96e045cd8f.1343503160@kaos-source-31003.sea31.amazon.com>
	<50164C5F0200007800091325@nat28.tlf.novell.com>
	<CAL54oT0HAgS-+sQVmWJt9kMiTGdoBmR5Mj-TqEwBoQzzEiUwDg@mail.gmail.com>
	<5020D6880200007800093198@nat28.tlf.novell.com>
	<20120807174733.GA5592@US-SEA-R8XVZTX>
	<50222C9E0200007800093719@nat28.tlf.novell.com>
	<1344412960.11783.15.camel@dagon.hellion.org.uk>
In-Reply-To: <1344412960.11783.15.camel@dagon.hellion.org.uk>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Jinsong Liu <jinsong.liu@intel.com>, "Keir\(Xen.org\)" <keir@xen.org>,
	Jun Nakajima <jun.nakajima@intel.com>,
	Donald DDugger <donald.d.dugger@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Matt Wilson <msw@amazon.com>
Subject: Re: [Xen-devel] [PATCH] xen/x86: Add support for cpuid masking on
 Intel Xeon Processor E5 Family
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 08.08.12 at 10:02, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Wed, 2012-08-08 at 08:08 +0100, Jan Beulich wrote:
>> >>> On 07.08.12 at 19:47, Matt Wilson <msw@amazon.com> wrote:
>> > For what it's worth, I think that the first line of the commit log got
>> > dropped, which makes for a strange short log message of:
>> > 
>> >   Although the "Intel Virtualization Technology FlexMigration
>> 
>> Yes, I'm sorry for that, but I realized this only after pushing, and
>> I'm unaware of ways to adjust the commit message of an existing
>> c/s.
> 
> There is an hg rebase extension something like git's rebase -i but I
> find the easiest way is to use the mq extension's function which pulls
> the tip commit into a patch in the queue.
> 
> Actually, that's not quite true, I find the real easiest way is to hg
> strip the wrong commit and try again.

But that's only if it didn't get pushed yet?

> Actually, that's not true either, the real easiest way IMHO is to use a
> git mirror for all the leg work and Ian J's git2hgapply script to
> actually "apply" it. YMMV depending on your feelings about git
> though ;-)

Indeed. While I'm slowly getting to know it better, I'm still not
really friends with it.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 08:50:19 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 08:50:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz1xq-0002hj-9J; Wed, 08 Aug 2012 08:50:02 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Sz1xp-0002hI-3C
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 08:50:01 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1344415794!12086228!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM2NzQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3300 invoked from network); 8 Aug 2012 08:49:55 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-27.messagelabs.com with SMTP;
	8 Aug 2012 08:49:55 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 08 Aug 2012 09:49:53 +0100
Message-Id: <5022445002000078000937B9@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Wed, 08 Aug 2012 09:49:52 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>
References: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
	<1344262325-26598-1-git-send-email-stefano.stabellini@eu.citrix.com>
	<501FFFB50200007800092FC7@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208061640270.4645@kaball.uk.xensource.com>
	<502004BE0200007800093028@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208071318430.4645@kaball.uk.xensource.com>
	<CAEBdQ921bVoRss3iuf1oLDrv9JBrhcvx=S16XsjrYWA2eg1Y8g@mail.gmail.com>
	<502131E202000078000933F9@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208071613400.4645@kaball.uk.xensource.com>
	<50222DFB0200007800093746@nat28.tlf.novell.com>
	<1344411923.11783.1.camel@dagon.hellion.org.uk>
In-Reply-To: <1344411923.11783.1.camel@dagon.hellion.org.uk>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Jean Guyader <jean.guyader@gmail.com>, xen-devel <xen-devel@lists.xen.org>,
	"Tim Deegan \(3P\)" <Tim.Deegan@citrix.com>,
	"Jean Guyader\(3P\)" <jean.guyader@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 1/5] xen: improve changes to
 xen_add_to_physmap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 08.08.12 at 09:45, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Wed, 2012-08-08 at 08:14 +0100, Jan Beulich wrote:
>> >>> On 07.08.12 at 19:07, Stefano Stabellini <stefano.stabellini@eu.citrix.com> 
> wrote:
>> > Regarding the name, maybe it should be XEN_ADD_TO_PHYSMAP_FIELD?
>> 
>> Sounds fine (and I like this better than the ..._ARG one you used
>> below.)
>> 
>> > #if (defined(__GNUC__) && !defined(__STRICT_ANSI__)) || (__STDC_VERSION__ >= 
> 201112L)
>> 
>> #if (defined(__GNUC__) && !defined(__STRICT_ANSI__)) || \
>>     (defined(__STDC_VERSION__) && __STDC_VERSION__ >= 201112L)
> 
> The downside of this is that users of this header might need to change
> their code depending on what compiler they actually build with today (or
> even what options).
> 
> Is adding the ".u" throughout the Xen code base too intrusive?

I don't know and didn't check; I think the goal was to avoid having
to change consumers that use gcc for compilation.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 08:54:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 08:54:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz22L-0002v0-23; Wed, 08 Aug 2012 08:54:41 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Sz22K-0002ut-Cb
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 08:54:40 +0000
Received: from [85.158.138.51:48027] by server-10.bemta-3.messagelabs.com id
	0B/76-07905-F4922205; Wed, 08 Aug 2012 08:54:39 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-174.messagelabs.com!1344416078!30938227!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM2NzQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4554 invoked from network); 8 Aug 2012 08:54:39 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-174.messagelabs.com with SMTP;
	8 Aug 2012 08:54:39 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 08 Aug 2012 09:54:37 +0100
Message-Id: <5022456C02000078000937CA@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Wed, 08 Aug 2012 09:54:36 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>
References: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
	<1344262325-26598-4-git-send-email-stefano.stabellini@eu.citrix.com>
	<5020025C0200007800092FEE@nat28.tlf.novell.com>
	<1344268070.11339.53.camel@zakaz.uk.xensource.com>
	<502005B2020000780009302B@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208061659320.4645@kaball.uk.xensource.com>
	<5020D0B90200007800093181@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208071331030.4645@kaball.uk.xensource.com>
	<50212F6902000078000933E4@nat28.tlf.novell.com>
	<1344412120.11783.4.camel@dagon.hellion.org.uk>
In-Reply-To: <1344412120.11783.4.camel@dagon.hellion.org.uk>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xen.org>,
	"Tim Deegan \(3P\)" <Tim.Deegan@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 4/5] xen: introduce XEN_GUEST_HANDLE_PARAM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 08.08.12 at 09:48, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> I don't think that the wastage at the hypercall interface is going to be
> significant, especially once you consider that many of the 32-bit fields
> actually need to be 64-bit for a compat mode guest anyway; e.g. MFNs should
> certainly be 64-bit regardless, or else you impose limitations on 32-bit
> guests which the 64-bit hypervisor doesn't itself have.

Hmm, MFNs are probably a bad example, because we already have
a separate xen_pfn_t to deal with precisely that case.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 09:01:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 09:01:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz28G-0003Ai-40; Wed, 08 Aug 2012 09:00:48 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Sz28E-0003AZ-HH
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 09:00:46 +0000
Received: from [85.158.139.83:60913] by server-12.bemta-5.messagelabs.com id
	78/47-26304-DBA22205; Wed, 08 Aug 2012 09:00:45 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-182.messagelabs.com!1344416442!28116802!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg0Mjg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12835 invoked from network); 8 Aug 2012 09:00:42 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-4.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Aug 2012 09:00:42 -0000
X-IronPort-AV: E=Sophos;i="4.77,732,1336348800"; d="scan'208";a="13903460"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Aug 2012 09:00:42 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Wed, 8 Aug 2012
	10:00:41 +0100
Message-ID: <1344416440.32142.7.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Wangzhenguo <wangzhenguo@huawei.com>
Date: Wed, 8 Aug 2012 10:00:40 +0100
In-Reply-To: <B44CA5218606DC4FA941D19CCEB27B532CF76B9E@szxeml528-mbx.china.huawei.com>
References: <B44CA5218606DC4FA941D19CCEB27B5323BEF12C@szxeml528-mbs.china.huawei.com>
	<1341221926.4625.12.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B5323BEF99A@szxeml528-mbs.china.huawei.com>
	<1343135163.18971.15.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B532CF769F2@szxeml528-mbx.china.huawei.com>
	<1343221925.18971.93.camel@zakaz.uk.xensource.com>
	<1343988628.21372.46.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B532CF76ACE@szxeml528-mbx.china.huawei.com>
	<1344259710.11339.39.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B532CF76B32@szxeml528-mbx.china.huawei.com>
	<1344334043.11339.85.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B532CF76B4F@szxeml528-mbx.china.huawei.com>
	<1344343342.11339.96.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B532CF76B9E@szxeml528-mbx.china.huawei.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Yangxiaowei <xiaowei.yang@huawei.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Hongtao <bobby.hong@huawei.com>, Yechuan <yechuan@huawei.com>
Subject: Re: [Xen-devel] The hypercall will fail and return EFAULT when the
 page becomes COW by forking process in linux
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Adding Ian J for his opinion.

I think this is a candidate for 4.2.0, although I would like to get rc2
under our belts first.

On Wed, 2012-08-08 at 04:44 +0100, Wangzhenguo wrote:
> > From: Ian Campbell [mailto:Ian.Campbell@citrix.com]
> > 
> > I don't think we should expect it to be valid to keep an xc interface
> > handle open after a fork. The child should open a fresh handle if it
> > wants to keep interacting with xc.
> OK, I see that xs, libxl and python each open a fresh handle to interact with xc in the child process.
> I modified the patch as follows:
> 
> # HG changeset patch
> # Parent a5dfd924fcdb173a154dad9f37073c1de1302065
> libxc: Set the VM_DONTCOPY flag on the VMA of the hypercall buffer, to avoid the buffer becoming COW during a hypercall.
> 
> In a multi-threaded, multi-process environment (e.g. a process with two threads, where thread A
> may issue a hypercall while thread B calls fork() to create a child process), all pages
> of the process, including the hypercall buffers, become COW after the fork. When the hypervisor
> then calls copy_to_user in thread A's context, it hits the write protection and returns an EFAULT error.
> 
> Fix:
> 1. Before the hypercall: use MADV_DONTFORK (via the madvise syscall) so that the hypercall buffer
>    is not copied to the child process after fork.
> 2. After the hypercall: undo the effect of MADV_DONTFORK on the hypercall buffer
>    with MADV_DOFORK (via the madvise syscall).
> 3. Use mmap/munmap instead of malloc/free for allocation and freeing, to bypass libc.
> 
> Note:
> A child process must not use an xc interface handle opened by the parent process; it should open
> a fresh handle if it wants to keep interacting with xc. Otherwise it may get a segmentation fault
> when accessing the hypercall buffer cache in the handle.

It would be good to record this in a comment next to each of the
xc_{interface,evtchn,gnttab,gntshr}_open() prototypes in the header
(assuming it applies to all of them; since they all make hypercalls I
expect it does, and in any case it is easy to relax this restriction in
the future if not).

Otherwise:
Acked-by: Ian Campbell <ian.campbell@citrix.com>

> 
> Signed-off-by: Zhenguo Wang <wangzhenguo@huawei.com>
> Signed-off-by: Xiaowei Yang <xiaowei.yang@huawei.com>
> 
> diff -r a5dfd924fcdb tools/libxc/xc_linux_osdep.c
> --- a/tools/libxc/xc_linux_osdep.c	Tue Aug 07 13:52:10 2012 +0800
> +++ b/tools/libxc/xc_linux_osdep.c	Wed Aug 08 11:33:38 2012 +0800
> @@ -93,22 +93,20 @@ static void *linux_privcmd_alloc_hyperca
>      size_t size = npages * XC_PAGE_SIZE;
>      void *p;
>  
> -    p = xc_memalign(xch, XC_PAGE_SIZE, size);
> -    if (!p)
> -        return NULL;
> +    /* Address returned by mmap is page aligned. */
> +    p = mmap(NULL, size, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_LOCKED, -1, 0);
>  
> -    if ( mlock(p, size) < 0 )
> -    {
> -        free(p);
> -        return NULL;
> -    }
> +    /* Do not copy this VMA to the child process on fork, so the pages cannot become COW during a hypercall. */
> +    madvise(p, size, MADV_DONTFORK);
>      return p;
>  }
>  
>  static void linux_privcmd_free_hypercall_buffer(xc_interface *xch, xc_osdep_handle h, void *ptr, int npages)
>  {
> -    munlock(ptr, npages * XC_PAGE_SIZE);
> -    free(ptr);
> +    /* Restore the VMA flags; this may not be strictly necessary before munmap. */
> +    madvise(ptr, npages * XC_PAGE_SIZE, MADV_DOFORK);
> +    
> +    munmap(ptr, npages * XC_PAGE_SIZE);
>  }
>  
>  static int linux_privcmd_hypercall(xc_interface *xch, xc_osdep_handle h, privcmd_hypercall_t *hypercall)
> 
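
[For readers following the thread, a minimal standalone sketch of the alloc/free pattern in the patch above. Names are illustrative, not the libxc API; MAP_LOCKED is omitted here because it can fail under RLIMIT_MEMLOCK, whereas the real patch keeps it:]

```c
#define _GNU_SOURCE
#include <stddef.h>
#include <sys/mman.h>

#define PAGE_SZ 4096   /* stand-in for XC_PAGE_SIZE */

/* Allocate a page-aligned buffer that fork() will not duplicate,
 * mirroring linux_privcmd_alloc_hypercall_buffer above. */
static void *demo_alloc(size_t npages)
{
    size_t size = npages * PAGE_SZ;
    void *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED)
        return NULL;
    /* Keep this VMA out of any child: a concurrent fork() can no longer
     * mark these pages copy-on-write while a hypercall is in flight. */
    if (madvise(p, size, MADV_DONTFORK) != 0) {
        munmap(p, size);
        return NULL;
    }
    return p;
}

static void demo_free(void *p, size_t npages)
{
    size_t size = npages * PAGE_SZ;
    madvise(p, size, MADV_DOFORK);   /* restore default fork behaviour */
    munmap(p, size);
}
```

[A design note: applying MADV_DONTFORK at allocation time, rather than around each individual hypercall, means the window where a fork could make the buffer COW never exists for the buffer's lifetime.]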



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 09:05:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 09:05:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz2CE-0003JZ-Or; Wed, 08 Aug 2012 09:04:54 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Sz2CC-0003JN-JS
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 09:04:52 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1344416659!2696595!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg1NTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6827 invoked from network); 8 Aug 2012 09:04:20 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Aug 2012 09:04:20 -0000
X-IronPort-AV: E=Sophos;i="4.77,732,1336348800"; d="scan'208";a="13903613"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Aug 2012 09:04:18 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Wed, 8 Aug 2012
	10:04:18 +0100
Message-ID: <1344416657.32142.9.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Wed, 8 Aug 2012 10:04:17 +0100
In-Reply-To: <502243FC02000078000937B6@nat28.tlf.novell.com>
References: <bf922651da96e045cd8f.1343503160@kaos-source-31003.sea31.amazon.com>
	<50164C5F0200007800091325@nat28.tlf.novell.com>
	<CAL54oT0HAgS-+sQVmWJt9kMiTGdoBmR5Mj-TqEwBoQzzEiUwDg@mail.gmail.com>
	<5020D6880200007800093198@nat28.tlf.novell.com>
	<20120807174733.GA5592@US-SEA-R8XVZTX>
	<50222C9E0200007800093719@nat28.tlf.novell.com>
	<1344412960.11783.15.camel@dagon.hellion.org.uk>
	<502243FC02000078000937B6@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Jinsong Liu <jinsong.liu@intel.com>, "Keir \(Xen.org\)" <keir@xen.org>,
	Jun Nakajima <jun.nakajima@intel.com>,
	Donald DDugger <donald.d.dugger@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Matt Wilson <msw@amazon.com>
Subject: Re: [Xen-devel] [PATCH] xen/x86: Add support for cpuid masking on
 Intel Xeon Processor E5 Family
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-08-08 at 09:48 +0100, Jan Beulich wrote:
> >>> On 08.08.12 at 10:02, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > On Wed, 2012-08-08 at 08:08 +0100, Jan Beulich wrote:
> >> >>> On 07.08.12 at 19:47, Matt Wilson <msw@amazon.com> wrote:
> >> > For what it's worth, I think that the first line of the commit log got
> >> > dropped, which makes for a strange short log message of:
> >> > 
> >> >   Although the "Intel Virtualization Technology FlexMigration
> >> 
> >> Yes, I'm sorry for that, but I realized this only after pushing, and
> >> I'm unaware of ways to adjust the commit message of an existing
> >> c/s.
> > 
> > There is an hg rebase extension, something like git's rebase -i, but I
> > find the easiest way is to use the mq extension's function which pulls
> > the tip commit into a patch in the queue.
> > 
> > Actually, that's not quite true, I find the real easiest way is to hg
> > strip the wrong commit and try again.
> 
> But that's only if it didn't get pushed yet?

Yes, it's only if you catch the mistake before pushing. If you've pushed,
then in principle you could hg strip on the server, but in practice you
don't want to do that on a widely used/shared repo, so you just have to
live with the mistake; or I suppose you could hg revert and recommit if
the bad cset was really confusing etc.

> > Actually, that's not true either, the real easiest way IMHO is to use a
> > git mirror for all the leg work and Ian J's git2hgapply script to
> > actually "apply" it. YMMV depending on your feelings about git
> > though ;-)
> 
> Indeed. While I'm slowly getting to know it better, I'm still not
> really friends with it.

I'm not sure anyone is ;-)

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 09:07:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 09:07:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz2Es-0003Rg-Fi; Wed, 08 Aug 2012 09:07:38 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Sz2Eq-0003RV-Ng
	for xen-devel@lists.xensource.com; Wed, 08 Aug 2012 09:07:37 +0000
Received: from [85.158.138.51:14507] by server-11.bemta-3.messagelabs.com id
	B7/4F-10722-75C22205; Wed, 08 Aug 2012 09:07:35 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-174.messagelabs.com!1344416853!26935137!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg0Mjg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13990 invoked from network); 8 Aug 2012 09:07:33 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-10.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Aug 2012 09:07:33 -0000
X-IronPort-AV: E=Sophos;i="4.77,732,1336348800"; d="scan'208";a="13903691"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Aug 2012 09:07:14 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Wed, 8 Aug 2012
	10:07:14 +0100
Message-ID: <1344416832.32142.10.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 8 Aug 2012 10:07:12 +0100
In-Reply-To: <osstest-13571-mainreport@xen.org>
References: <osstest-13571-mainreport@xen.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Keir Fraser <keir@xen.org>, George
	Dunlap <George.Dunlap@citrix.com>, Lars Kurth <lars.kurth@xen.org>
Subject: Re: [Xen-devel] [xen-unstable test] 13571: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-08-08 at 03:38 +0100, xen.org wrote:
> flight 13571 xen-unstable real [real]
> http://www.chiark.greenend.org.uk/~xensrcts/logs/13571/
> 
> Failures :-/ but no regressions.

I think this (25739:472fc515a463) is a good candidate for rc2.

There was a proposal to have an RC test day on Monday (the 13th). I
don't think there is anything else outstanding which would be considered
a blocker for having a test day. It would be good to tag this now so
that we definitely have a decent baseline tagged and through the test
system for then.

Ian.

> 
> Regressions which are regarded as allowable (not blocking):
>  test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13570
>  test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13570
>  test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13570
>  test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13570
> 
> Tests which did not succeed, but are not blocking:
>  test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
>  test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
>  test-amd64-amd64-win         16 leak-check/check             fail   never pass
>  test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
>  test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
>  test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
>  test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
>  test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
>  test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
>  test-amd64-i386-win          16 leak-check/check             fail   never pass
>  test-i386-i386-win           16 leak-check/check             fail   never pass
>  test-i386-i386-xl-win        13 guest-stop                   fail   never pass
>  test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
>  test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
> 
> version targeted for testing:
>  xen                  472fc515a463
> baseline version:
>  xen                  8ecd9177d977
> 
> ------------------------------------------------------------
> People who touched revisions under test:
>   Ian Campbell <ian.campbell@citrix.com>
>   Ian Jackson <Ian.Jackson@eu.citrix.com>
> ------------------------------------------------------------
> 
> jobs:
>  build-amd64                                                  pass    
>  build-i386                                                   pass    
>  build-amd64-oldkern                                          pass    
>  build-i386-oldkern                                           pass    
>  build-amd64-pvops                                            pass    
>  build-i386-pvops                                             pass    
>  test-amd64-amd64-xl                                          pass    
>  test-amd64-i386-xl                                           pass    
>  test-i386-i386-xl                                            pass    
>  test-amd64-i386-rhel6hvm-amd                                 pass    
>  test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
>  test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
>  test-amd64-amd64-xl-win7-amd64                               fail    
>  test-amd64-i386-xl-win7-amd64                                fail    
>  test-amd64-i386-xl-credit2                                   pass    
>  test-amd64-amd64-xl-pcipt-intel                              fail    
>  test-amd64-i386-rhel6hvm-intel                               pass    
>  test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
>  test-amd64-i386-xl-multivcpu                                 pass    
>  test-amd64-amd64-pair                                        pass    
>  test-amd64-i386-pair                                         pass    
>  test-i386-i386-pair                                          pass    
>  test-amd64-amd64-xl-sedf-pin                                 fail    
>  test-amd64-amd64-pv                                          pass    
>  test-amd64-i386-pv                                           pass    
>  test-i386-i386-pv                                            pass    
>  test-amd64-amd64-xl-sedf                                     pass    
>  test-amd64-i386-win-vcpus1                                   fail    
>  test-amd64-i386-xl-win-vcpus1                                fail    
>  test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
>  test-amd64-amd64-win                                         fail    
>  test-amd64-i386-win                                          fail    
>  test-i386-i386-win                                           fail    
>  test-amd64-amd64-xl-win                                      fail    
>  test-i386-i386-xl-win                                        fail    
>  test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
>  test-i386-i386-xl-qemuu-winxpsp3                             fail    
>  test-amd64-i386-xend-winxpsp3                                fail    
>  test-amd64-amd64-xl-winxpsp3                                 fail    
>  test-i386-i386-xl-winxpsp3                                   fail    
> 
> 
> ------------------------------------------------------------
> sg-report-flight on woking.cam.xci-test.com
> logs: /home/xc_osstest/logs
> images: /home/xc_osstest/images
> 
> Logs, config files, etc. are available at
>     http://www.chiark.greenend.org.uk/~xensrcts/logs
> 
> Test harness code can be found at
>     http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary
> 
> 
> Pushing revision :
> 
> + branch=xen-unstable
> + revision=472fc515a463
> + . cri-lock-repos
> ++ . cri-common
> +++ umask 002
> +++ getconfig Repos
> +++ perl -e '
>                 use Osstest;
>                 readconfigonly();
>                 print $c{Repos} or die $!;
>         '
> ++ repos=/export/home/osstest/repos
> ++ repos_lock=/export/home/osstest/repos/lock
> ++ '[' x '!=' x/export/home/osstest/repos/lock ']'
> ++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
> ++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable 472fc515a463
> + branch=xen-unstable
> + revision=472fc515a463
> + . cri-lock-repos
> ++ . cri-common
> +++ umask 002
> +++ getconfig Repos
> +++ perl -e '
>                 use Osstest;
>                 readconfigonly();
>                 print $c{Repos} or die $!;
>         '
> ++ repos=/export/home/osstest/repos
> ++ repos_lock=/export/home/osstest/repos/lock
> ++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
> + . cri-common
> ++ umask 002
> + select_xenbranch
> + case "$branch" in
> + tree=xen
> + xenbranch=xen-unstable
> + '[' xxen = xlinux ']'
> + linuxbranch=linux
> + : master
> + : tested/2.6.39.x
> + . ap-common
> ++ : xen@xenbits.xensource.com
> ++ : http://xenbits.xen.org/staging/xen-unstable.hg
> ++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
> ++ : git://git.kernel.org
> ++ : git://git.kernel.org/pub/scm/linux/kernel/git
> ++ : git
> ++ : git://xenbits.xen.org/linux-pvops.git
> ++ : master
> ++ : xen@xenbits.xensource.com:git/linux-pvops.git
> ++ : git://xenbits.xen.org/linux-pvops.git
> ++ : master
> ++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
> ++ : tested/2.6.39.x
> ++ : daily-cron.xen-unstable
> ++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
> ++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
> ++ : daily-cron.xen-unstable
> + TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
> + TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-unstable.git
> + info_linux_tree xen-unstable
> + case $1 in
> + return 1
> + case "$branch" in
> + cd /export/home/osstest/repos/xen-unstable.hg
> + hg push -r 472fc515a463 ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
> pushing to ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
> searching for changes
> remote: adding changesets
> remote: adding manifests
> remote: adding file changes
> remote: added 1 changesets with 1 changes to 1 files
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 09:18:22 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 09:18:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz2Oz-0003hF-JH; Wed, 08 Aug 2012 09:18:05 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Sz2Ox-0003hA-RN
	for xen-devel@lists.xensource.com; Wed, 08 Aug 2012 09:18:04 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1344417460!12093453!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg1NTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25111 invoked from network); 8 Aug 2012 09:17:40 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Aug 2012 09:17:40 -0000
X-IronPort-AV: E=Sophos;i="4.77,732,1336348800"; d="scan'208";a="13903962"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Aug 2012 09:16:53 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 8 Aug 2012 10:16:53 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Sz2No-0006sV-RW;
	Wed, 08 Aug 2012 09:16:52 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Sz2No-0004is-Jp;
	Wed, 08 Aug 2012 10:16:52 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13572-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 8 Aug 2012 10:16:52 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13572: tolerable FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13572 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13572/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13571
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13571
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13571
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13571

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  472fc515a463
baseline version:
 xen                  472fc515a463

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13572 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13572/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13571
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13571
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13571
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13571

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  472fc515a463
baseline version:
 xen                  472fc515a463

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 09:20:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 09:20:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz2RH-0003p3-9S; Wed, 08 Aug 2012 09:20:27 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Sz2RF-0003ow-AZ
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 09:20:25 +0000
Received: from [85.158.139.83:49835] by server-7.bemta-5.messagelabs.com id
	74/6A-00857-85F22205; Wed, 08 Aug 2012 09:20:24 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-182.messagelabs.com!1344417622!30805018!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
	MAILTO_TO_SPAM_ADDR
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27909 invoked from network); 8 Aug 2012 09:20:22 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-182.messagelabs.com with SMTP;
	8 Aug 2012 09:20:22 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 08 Aug 2012 10:20:21 +0100
Message-Id: <50224B7402000078000937DA@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Wed, 08 Aug 2012 10:20:20 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "tupeng212" <tupeng212@gmail.com>
References: <201208070018394210381@gmail.com>
In-Reply-To: <201208070018394210381@gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Big Bug:Time in VM running on xen goes slower
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 07.08.12 at 17:44, tupeng212 <tupeng212@gmail.com> wrote:
> 2 Xen
> vmx_vmexit_handler --> ... --> handle_rtc_io --> rtc_ioport_write --> 
> rtc_timer_update --> set RTC's REG_A to a high rate --> create_periodic_time (disable 
> the former timer, and init a new one)
> Win7 is installed in the VM. This call path is executed so frequently that 
> it may end up setting the RTC's REG_A hundreds of times every second, yet 
> always with the same rate (976.562us, i.e. 1024Hz); such behavior looks 
> very abnormal to me.

_If_ the problem is merely the high rate of calls to
create_periodic_time(), I think it could be taken care of by
avoiding the call (and perhaps the call to rtc_timer_update() in
the first place) by checking whether anything actually changes
due to the current write. I don't think, however, that this would
help much, as the high rate of port accesses (and hence traps
into the hypervisor) would remain. It might, nevertheless, take
care of your immediate problem of time slowing down, if that is
caused inside Xen (but the cause may just as well be in the
Windows kernel).

> 3 OS
> I have tried to find out why Win7 set the RTC's REG_A so frequently, and 
> finally got the result: it all comes from one function: NtSetTimerResolution --> 
> ports 0x70,0x71
> When I attached windbg to the guest OS, I also found the source: they are 
> all triggered by an upper system call that comes from the JVM (Java Virtual 
> Machine).

Getting Windows to be a little smarter and avoid the port I/O when
doing redundant writes would of course be even better, but that is
clearly a difficult goal to achieve.

> 4 JVM
> I don't know why the JVM calls NtSetTimerResolution so frequently to set the 
> same RTC rate (976.562us, 1024Hz). 
> But I found something useful: in the Java source code there are many places 
> that use time.scheduleAtFixedRate(), and information from the Internet told 
> me that scheduleAtFixedRate demands a higher time resolution. So I guess the 
> whole process may be this: 
> Java wants a higher-resolution timer, so it changes the RTC's rate from 
> 15.625ms (64Hz) to 976.562us (1024Hz); after that, it rechecks whether the 
> timing is as accurate as expected, but unfortunately finds that it is still 
> not accurate. So it sets the RTC's rate from 15.625ms (64Hz) to 976.562us 
> (1024Hz) again and again..., which in the end results in a slow system timer 
> in the VM.

Now that's really the fundamental thing to find out - what makes it
think the clock isn't accurate? Is this an artifact of scheduling (as
the scheduler tick certainly is several milliseconds, whereas
"accurate" here appears to require below 1ms granularity), perhaps
as a result of the box being overloaded (i.e. the VM not being able
to get scheduled quickly enough when the timer expires)? For that,
did you try lowering the scheduler time slice and/or its rate limit
(possible via command line option)? Of course doing so may have
other undesirable side effects, but it would be worth a try.

Did you further check whether the adjustments done to the
scheduled time in create_periodic_time() are responsible for this
conclusion of the JVM (could be effectively doubling the first
interval if HVM_PARAM_VPT_ALIGN is set, and with the high rate
of re-sets this could certainly have a more visible effect than
intended)?

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 09:23:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 09:23:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz2UA-0003y2-SU; Wed, 08 Aug 2012 09:23:26 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zhenzhong.duan@oracle.com>) id 1Sz2U9-0003xo-GW
	for xen-devel@lists.xensource.com; Wed, 08 Aug 2012 09:23:25 +0000
X-Env-Sender: zhenzhong.duan@oracle.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1344417793!11844483!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDY4NTk0MQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4720 invoked from network); 8 Aug 2012 09:23:14 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-9.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 8 Aug 2012 09:23:14 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q789NAau031696
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK)
	for <xen-devel@lists.xensource.com>; Wed, 8 Aug 2012 09:23:11 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q789NA1T026466
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO)
	for <xen-devel@lists.xensource.com>; Wed, 8 Aug 2012 09:23:10 GMT
Received: from abhmt113.oracle.com (abhmt113.oracle.com [141.146.116.65])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q789NAVw007188
	for <xen-devel@lists.xensource.com>; Wed, 8 Aug 2012 04:23:10 -0500
Received: from [10.191.15.208] (/10.191.15.208)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 08 Aug 2012 02:23:10 -0700
Message-ID: <50223027.6080502@oracle.com>
Date: Wed, 08 Aug 2012 17:23:51 +0800
From: "zhenzhong.duan" <zhenzhong.duan@oracle.com>
Organization: oracle
User-Agent: Mozilla/5.0 (Windows NT 5.1;
	rv:12.0) Gecko/20120428 Thunderbird/12.0.1
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <5020C24A.3060604@oracle.com>
	<20120807162637.GB15053@phenom.dumpdata.com>
In-Reply-To: <20120807162637.GB15053@phenom.dumpdata.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: xen-devel@lists.xensource.com, Feng Jin <joe.jin@oracle.com>
Subject: Re: [Xen-devel] kernel bootup slow issue on ovm3.1.1
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: zhenzhong.duan@oracle.com
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4844915715082705569=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.
--===============4844915715082705569==
Content-Type: multipart/alternative;
 boundary="------------060509090501000408040803"

This is a multi-part message in MIME format.
--------------060509090501000408040803
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit



On 2012-08-08 00:26, Konrad Rzeszutek Wilk wrote:
> On Tue, Aug 07, 2012 at 03:22:50PM +0800, zhenzhong.duan wrote:
>> Hi maintainers,
>>
>> We are seeing a slow UEK2 boot issue on our OVM product (OVM 3.0.3 and OVM 3.1.1).
>>
>> The system is an Exalogic node with 24 cores + 100G mem (2 sockets,
>> 6 cores per socket, 2 HT threads per core).
>> After booting this node with all cores enabled,
>> we boot a PVHVM guest with 12 vcpus (or 24) + 90 GB + a PCI passthrough device,
>> and it takes 30+ mins to boot.
>> If we remove the passthrough device from vm.cfg, boot takes about 2 mins.
>> If we use a small mem (e.g. 10G + 24 vcpus), boot takes about 3 mins.
>> So big mem + a passthrough device is the worst case.
>>
>> If we boot this node with HT disabled in the BIOS, only 12 cores are
>> available.
>> OVM on the same node, same config with 12 vcpus + 90GB, boots in 1.5 mins!
>>
>> After some debugging, we found it is the kernel MTRR init that causes this delay:
>>
>> mtrr_aps_init()
>>   \->  set_mtrr()
>>       \->  mtrr_work_handler()
>>
>> The kernel spins in mtrr_work_handler().
>>
>> But we don't know what is happening inside the hypervisor. Why does big mem +
>> passthrough make the worst case?
>> Is this already fixed in xen upstream?
>> Any comments are welcome; I'll upload any data you need.
> What happens if you run with an upstream version of the kernel? Say v3.4.7?
Hi konrad, Jan,

I tried 3.5.0-2.fc17.x86_64 and 3.6.0-rc1.

*3.5.0-2.fc17.x86_64 took ~30 mins.*
Below is a piece of the fc17 dmesg:
  #22[    0.002999] installing Xen timer for CPU 22
  #23[    0.002999] installing Xen timer for CPU 23
[    1.844896] Brought up 24 CPUs
[    1.844898] Total of 24 processors activated (140449.34 BogoMIPS).
*block for 30 mins here.*
[    1.899794] devtmpfs: initialized
[    1.905956] atomic64 test passed for x86-64 platform with CX8 and 
with SSE
*3.6.0-rc1 took more than 2 hours.*
A piece of its dmesg:
cpu 22 spinlock event irq 218
[    1.884775]  #22[    0.001999] installing Xen timer for CPU 22
cpu 23 spinlock event irq 225
[    1.932764]  #23[    0.001999] installing Xen timer for CPU 23
[    1.977734] Brought up 24 CPUs
[    1.978706] smpboot: Total of 24 processors activated (140449.34 
BogoMIPS)
*block for more than 2 hours here.*
[    1.988859] devtmpfs: initialized
[    2.021785] dummy:
[    2.023706] NET: Registered protocol family 16
[    2.026735] ACPI: bus type pci registered
[    2.028002] PCI: Using configuration type 1 for base access

I also sent a patch to LKML that can work around this issue, but I don't 
know the reason for the blockage on the Xen side.
link: https://lkml.org/lkml/2012/8/7/50

regards
zduan

--------------060509090501000408040803--


--===============4844915715082705569==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4844915715082705569==--


Content-Type: multipart/alternative;
 boundary="------------060509090501000408040803"

This is a multi-part message in MIME format.
--------------060509090501000408040803
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit



On 2012-08-08 00:26, Konrad Rzeszutek Wilk wrote:
> On Tue, Aug 07, 2012 at 03:22:50PM +0800, zhenzhong.duan wrote:
>> Hi maintainers,
>>
>> We are seeing a UEK2 slow-bootup issue on our OVM product (OVM 3.0.3 and OVM 3.1.1).
>>
>> The system environment is an Exalogic node with 24 cores + 100GB memory (2 sockets,
>> 6 cores per socket, 2 HT threads per core).
>> After booting this node with all cores enabled,
>> we boot a PVHVM guest with 12 vCPUs (or 24) + 90GB + a PCI passthrough device;
>> it takes 30+ minutes to boot.
>> If we remove the passthrough device from vm.cfg, bootup takes about 2 minutes.
>> If we use a small amount of memory (e.g. 10GB + 24 vCPUs), bootup takes about 3 minutes.
>> So big memory + a passthrough device is the worst case.
>>
>> If we boot this node with HT disabled in the BIOS, only 12 cores are
>> available.
>> OVM on the same node, with the same config of 12 vCPUs + 90GB, boots in 1.5 minutes!
>>
>> After some debugging, we found it is the kernel's MTRR init that causes this delay:
>>
>> mtrr_aps_init()
>>   \->  set_mtrr()
>>       \->  mtrr_work_handler()
>>
>> The kernel spins in mtrr_work_handler.
>>
>> But we don't know what is going on inside the hypervisor, or why big memory +
>> passthrough makes the worst case.
>> Is this already fixed in Xen upstream?
>> Any comments are welcome; I'll upload any data you need.
> What happens if you run with an upstream version of the kernel? Say v3.4.7?
Hi Konrad, Jan,

I tried 3.5.0-2.fc17.x86_64 and 3.6.0-rc1.

*3.5.0-2.fc17.x86_64 took ~30 mins.*
Below is a piece of the fc17 dmesg:
  #22[    0.002999] installing Xen timer for CPU 22
  #23[    0.002999] installing Xen timer for CPU 23
[    1.844896] Brought up 24 CPUs
[    1.844898] Total of 24 processors activated (140449.34 BogoMIPS).
*block for 30 mins here.*
[    1.899794] devtmpfs: initialized
[    1.905956] atomic64 test passed for x86-64 platform with CX8 and 
with SSE
*3.6.0-rc1 took more than 2 hours.*
A piece of its dmesg:
cpu 22 spinlock event irq 218
[    1.884775]  #22[    0.001999] installing Xen timer for CPU 22
cpu 23 spinlock event irq 225
[    1.932764]  #23[    0.001999] installing Xen timer for CPU 23
[    1.977734] Brought up 24 CPUs
[    1.978706] smpboot: Total of 24 processors activated (140449.34 
BogoMIPS)
*block for more than 2 hours here.*
[    1.988859] devtmpfs: initialized
[    2.021785] dummy:
[    2.023706] NET: Registered protocol family 16
[    2.026735] ACPI: bus type pci registered
[    2.028002] PCI: Using configuration type 1 for base access

I also sent a patch to LKML that can work around this issue, but I don't
know the reason for the blocking on the Xen side.
Link: https://lkml.org/lkml/2012/8/7/50

regards
zduan

--------------060509090501000408040803--


--===============4844915715082705569==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4844915715082705569==--


From xen-devel-bounces@lists.xen.org Wed Aug 08 09:45:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 09:45:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz2pO-0004E7-RY; Wed, 08 Aug 2012 09:45:22 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1Sz2pN-0004E2-A7
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 09:45:21 +0000
Received: from [85.158.138.51:11885] by server-3.bemta-3.messagelabs.com id
	29/57-13122-03532205; Wed, 08 Aug 2012 09:45:20 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-14.tower-174.messagelabs.com!1344419118!24646127!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjMzNzA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13274 invoked from network); 8 Aug 2012 09:45:19 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Aug 2012 09:45:19 -0000
X-IronPort-AV: E=Sophos;i="4.77,732,1336363200"; d="scan'208";a="33943270"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Aug 2012 05:45:17 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 8 Aug 2012 05:45:17 -0400
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1Sz2pJ-000406-2p;
	Wed, 08 Aug 2012 10:45:17 +0100
Message-ID: <5022352D.7080607@citrix.com>
Date: Wed, 8 Aug 2012 10:45:17 +0100
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120714 Thunderbird/14.0
MIME-Version: 1.0
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
References: <501973F5.4030804@citrix.com>
	<20513.24085.195729.702722@mariner.uk.xensource.com>
In-Reply-To: <20513.24085.195729.702722@mariner.uk.xensource.com>
X-Enigmail-Version: 1.4.3
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] tools/makefile: Add build target
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 07/08/12 19:27, Ian Jackson wrote:
> Andrew Cooper writes ("[Xen-devel] tools/makefile: Add build target"):
>> The alternative is to change the root makefile to call "$(MAKE) -C
>> tools/ all" instead, but this way feels neater to me.
> Yes.
>
>> tools/makefile: Add build target
>>
>> The root Makefile has a build target which calls "$(MAKE) build" in each of xen/
>> tools/ stubdom/ and docs/, which fails because of the tools/ Makefile.
> That this didn't work is clearly a bug.
>
> But when I tried it my build did this:
>
>   CC    i386-stubdom/eeprom93xx.o
> /u/iwj/work/xen-unstable-tools.hg/stubdom/mini-os-x86_32-c/test.o: In function `app_main':
> /u/iwj/work/xen-unstable-tools.hg/extras/mini-os/test.c:441: multiple definition of `app_main'
> /u/iwj/work/xen-unstable-tools.hg/stubdom/mini-os-x86_32-c/main.o:/u/iwj/work/xen-unstable-tools.hg/extras/mini-os/main.c:187: first defined here
>   CC    i386-stubdom/eepro100.o
> make[2]: *** [/u/iwj/work/xen-unstable-tools.hg/stubdom/mini-os-x86_32-c/mini-os] Error 1
> make[2]: Leaving directory `/u/iwj/work/xen-unstable-tools.hg/extras/mini-os'
> make[1]: *** [c-stubdom] Error 2
> make[1]: *** Waiting for unfinished jobs....
>
> That's the result of:
>
>  (make -j4 build && echo ok.) 2>&1 | tee ../log
>
> Ian.

The issue there is that app_main() is defined as __attribute__((weak)),
and has two definitions in the code.

I have just tested and confirmed that minios (and therefore stubdom) is
not -j safe.  As a result, I would argue that this build failure is not
necessarily a barrier to entry for the patch itself; there are more
issues which need fixing as well.

stubdom itself has further build issues regarding relative paths to the
configure scripts, which I was going to get around to fixing after some of
my more important tasks.
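
For context, the shape of the change under discussion is roughly this (a sketch, not the actual patch — the real target list and prerequisites live in tools/Makefile):

```make
# Hypothetical sketch: give tools/ a "build" target so the root
# Makefile's recursive "$(MAKE) build" finds something to run.
# The conventional fix is to alias it to the existing default target.
.PHONY: build
build: all
```

The alternative mentioned earlier in the thread would instead change the root Makefile to invoke "$(MAKE) -C tools all" directly.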

-- 
Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
T: +44 (0)1223 225 900, http://www.citrix.com


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 09:47:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 09:47:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz2rc-0004JO-Fy; Wed, 08 Aug 2012 09:47:40 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zhenzhong.duan@oracle.com>) id 1Sz2ra-0004JG-Do
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 09:47:38 +0000
Received: from [85.158.143.99:25983] by server-2.bemta-4.messagelabs.com id
	34/D2-19021-9B532205; Wed, 08 Aug 2012 09:47:37 +0000
X-Env-Sender: zhenzhong.duan@oracle.com
X-Msg-Ref: server-16.tower-216.messagelabs.com!1344419255!19129384!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjc4MzQ5\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5052 invoked from network); 8 Aug 2012 09:47:37 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-16.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 8 Aug 2012 09:47:37 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q789lYR0012006
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 8 Aug 2012 09:47:35 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q789lXRU015899
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 8 Aug 2012 09:47:34 GMT
Received: from abhmt107.oracle.com (abhmt107.oracle.com [141.146.116.59])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q789lXm1023161; Wed, 8 Aug 2012 04:47:33 -0500
Received: from [10.191.15.208] (/10.191.15.208)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 08 Aug 2012 02:47:33 -0700
Message-ID: <502235E8.9040309@oracle.com>
Date: Wed, 08 Aug 2012 17:48:24 +0800
From: "zhenzhong.duan" <zhenzhong.duan@oracle.com>
Organization: oracle
User-Agent: Mozilla/5.0 (Windows NT 5.1;
	rv:12.0) Gecko/20120428 Thunderbird/12.0.1
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <5020C24A.3060604@oracle.com>
	<5020F003020000780009322C@nat28.tlf.novell.com>
In-Reply-To: <5020F003020000780009322C@nat28.tlf.novell.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Feng Jin <joe.jin@oracle.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] kernel bootup slow issue on ovm3.1.1
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: zhenzhong.duan@oracle.com
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset="utf-8"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 2012-08-07 16:37, Jan Beulich wrote:
>>>> On 07.08.12 at 09:22, "zhenzhong.duan"<zhenzhong.duan@oracle.com>  wrote:
>> After some debugging, we found it is the kernel's MTRR init that causes this delay:
>>
>> mtrr_aps_init()
>>   \->  set_mtrr()
>>       \->  mtrr_work_handler()
>>
>> The kernel spins in mtrr_work_handler.
>>
>> But we don't know what is going on inside the hypervisor, or why big memory +
>> passthrough makes the worst case.
>> Is this already fixed in Xen upstream?
> First of all it would have been useful to indicate the kernel version,
> since mtrr_work_handler() disappeared after 3.0. Obviously worth
> checking whether that change by itself already addresses your
> problem.
No luck; I tried upstream kernel 3.6.0-rc1, and it seems worse. It took 2 hours
to boot up.
>
> Next, if you already spotted where the spinning occurs, you
> should also be able to tell what's going on at the other side, i.e.
> why the event that is being waited for isn't occurring for this
> long a time. Since there's a number of open coded spin loops
> here, knowing exactly which one each CPU is sitting in (and
> which ones might not be in any) is the fundamental information
> needed.
>
>  From what you're telling us so far, I'd rather suspect a kernel
> problem, not a hypervisor one here.
Per my findings, most of the vCPUs spin at set_atomicity_lock. Some spin in
stop_machine after finishing their job.
Only one vCPU is calling generic_set_all.
I'm not sure whether the vCPU calling generic_set_all lacks sufficient
priority and is perhaps preempted frequently by other vCPUs and dom0.
This wastes much time.
>
> Jan
>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 09:51:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 09:51:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz2vU-0004U8-4n; Wed, 08 Aug 2012 09:51:40 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Sz2vS-0004Ty-GJ
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 09:51:38 +0000
Received: from [85.158.143.35:22880] by server-2.bemta-4.messagelabs.com id
	EF/D9-19021-9A632205; Wed, 08 Aug 2012 09:51:37 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1344419497!19244128!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg1NTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31752 invoked from network); 8 Aug 2012 09:51:37 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Aug 2012 09:51:37 -0000
X-IronPort-AV: E=Sophos;i="4.77,732,1336348800"; d="scan'208";a="13904886"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Aug 2012 09:51:37 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 8 Aug 2012 10:51:36 +0100
Date: Wed, 8 Aug 2012 10:51:13 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Jan Beulich <JBeulich@suse.com>
In-Reply-To: <5022445002000078000937B9@nat28.tlf.novell.com>
Message-ID: <alpine.DEB.2.02.1208081048410.4645@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
	<1344262325-26598-1-git-send-email-stefano.stabellini@eu.citrix.com>
	<501FFFB50200007800092FC7@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208061640270.4645@kaball.uk.xensource.com>
	<502004BE0200007800093028@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208071318430.4645@kaball.uk.xensource.com>
	<CAEBdQ921bVoRss3iuf1oLDrv9JBrhcvx=S16XsjrYWA2eg1Y8g@mail.gmail.com>
	<502131E202000078000933F9@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208071613400.4645@kaball.uk.xensource.com>
	<50222DFB0200007800093746@nat28.tlf.novell.com>
	<1344411923.11783.1.camel@dagon.hellion.org.uk>
	<5022445002000078000937B9@nat28.tlf.novell.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"Jean Guyader \(3P\)" <jean.guyader@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>, Jean Guyader <jean.guyader@gmail.com>,
	"Tim Deegan \(3P\)" <Tim.Deegan@citrix.com>
Subject: Re: [Xen-devel] [PATCH 1/5] xen: improve changes to
 xen_add_to_physmap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 8 Aug 2012, Jan Beulich wrote:
> >>> On 08.08.12 at 09:45, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > On Wed, 2012-08-08 at 08:14 +0100, Jan Beulich wrote:
> >> >>> On 07.08.12 at 19:07, Stefano Stabellini <stefano.stabellini@eu.citrix.com> 
> > wrote:
> >> > Regarding the name, maybe it should be XEN_ADD_TO_PHYSMAP_FIELD?
> >> 
> >> Sounds fine (and I like this better than the ..._ARG one you used
> >> below.)
> >> 
> >> > #if (defined(__GNUC__) && !defined(__STRICT_ANSI__)) || (__STDC_VERSION__ >= 
> > 201112L)
> >> 
> >> #if (defined(__GNUC__) && !defined(__STRICT_ANSI__)) || \
> >>     (defined(__STDC_VERSION__) && __STDC_VERSION__ >= 201112L)
> > 
> > The downside of this is that users of this header might need to change
> > their code depending on what compiler they actually build with today (or
> > even what options).
> > 
> > Is adding the ".u" throughout the Xen code base too intrusive?
> 
> I don't know and didn't check; I think the goal was to avoid having
> to change consumers that use gcc for compilation.

For ARM it is not an issue, but the size parameter can be used by
out-of-tree code (V4V?).
That's why I CC'ed Jean; I was hoping he would say that it is OK
to add ".u".

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 10:04:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 10:04:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz37n-0004mI-EO; Wed, 08 Aug 2012 10:04:23 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jean.guyader@gmail.com>) id 1Sz37m-0004mD-43
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 10:04:22 +0000
Received: from [85.158.143.99:35068] by server-1.bemta-4.messagelabs.com id
	67/4B-20198-5A932205; Wed, 08 Aug 2012 10:04:21 +0000
X-Env-Sender: jean.guyader@gmail.com
X-Msg-Ref: server-9.tower-216.messagelabs.com!1344420259!30843928!1
X-Originating-IP: [209.85.220.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12563 invoked from network); 8 Aug 2012 10:04:20 -0000
Received: from mail-vc0-f173.google.com (HELO mail-vc0-f173.google.com)
	(209.85.220.173)
	by server-9.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Aug 2012 10:04:20 -0000
Received: by vcbfl15 with SMTP id fl15so671609vcb.32
	for <xen-devel@lists.xen.org>; Wed, 08 Aug 2012 03:04:19 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type;
	bh=DT4E6dfW3Br3KZk5VkwVvpiYsQk1zoGh+uV8loqg6Sk=;
	b=UNFN18jiz1L6mUGCiB01CxdSZUgRDwZomB3/Zk8f8Ctr04EU+s5UkbHKjPwQ4JU57x
	zjvkDK5/SnInm0tUG2Kf1+Z9TFsNAW+SRZT5Jc0CtPxHTo5wK1UKZQ2s8xw6VYC1fcIf
	bfOgun6os0AMbIabhiDPk3V0Ta1fRQQ0YWIp9IsX3iuSWAfLAtFMOS6OGawmkGNTtAAF
	yameuXzER/92bhdCmtdla2vrKSb8SV6vpBPAmmnamfblmFPUSnWleYB3QVTMV/jP1tOs
	QnNho9B/XSb2NNjmsL1g9NOtUWy7LKBnZ3QhRk7fIjnRHNj8cGVV1btkkoj1EtREIzs0
	FTBg==
Received: by 10.220.148.210 with SMTP id q18mr13408446vcv.6.1344420259464;
	Wed, 08 Aug 2012 03:04:19 -0700 (PDT)
MIME-Version: 1.0
Received: by 10.58.79.175 with HTTP; Wed, 8 Aug 2012 03:03:59 -0700 (PDT)
In-Reply-To: <alpine.DEB.2.02.1208081048410.4645@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
	<1344262325-26598-1-git-send-email-stefano.stabellini@eu.citrix.com>
	<501FFFB50200007800092FC7@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208061640270.4645@kaball.uk.xensource.com>
	<502004BE0200007800093028@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208071318430.4645@kaball.uk.xensource.com>
	<CAEBdQ921bVoRss3iuf1oLDrv9JBrhcvx=S16XsjrYWA2eg1Y8g@mail.gmail.com>
	<502131E202000078000933F9@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208071613400.4645@kaball.uk.xensource.com>
	<50222DFB0200007800093746@nat28.tlf.novell.com>
	<1344411923.11783.1.camel@dagon.hellion.org.uk>
	<5022445002000078000937B9@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208081048410.4645@kaball.uk.xensource.com>
From: Jean Guyader <jean.guyader@gmail.com>
Date: Wed, 8 Aug 2012 11:03:59 +0100
Message-ID: <CAEBdQ92wgXRs3xX87NE=Db-yF-VOJMLrOYh+6e=gkhPf+qAb7g@mail.gmail.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: "Jean Guyader \(3P\)" <jean.guyader@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>,
	Ian Campbell <Ian.Campbell@citrix.com>, Jan Beulich <JBeulich@suse.com>,
	"Tim Deegan \(3P\)" <Tim.Deegan@citrix.com>
Subject: Re: [Xen-devel] [PATCH 1/5] xen: improve changes to
	xen_add_to_physmap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 8 August 2012 10:51, Stefano Stabellini
<stefano.stabellini@eu.citrix.com> wrote:
> On Wed, 8 Aug 2012, Jan Beulich wrote:
>> >>> On 08.08.12 at 09:45, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>> > On Wed, 2012-08-08 at 08:14 +0100, Jan Beulich wrote:
>> >> >>> On 07.08.12 at 19:07, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
>> > wrote:
>> >> > Regarding the name, maybe it should be XEN_ADD_TO_PHYSMAP_FIELD?
>> >>
>> >> Sounds fine (and I like this better than the ..._ARG one you used
>> >> below.)
>> >>
>> >> > #if (defined(__GNUC__) && !defined(__STRICT_ANSI__)) || (__STDC_VERSION__ >=
>> > 201112L)
>> >>
>> >> #if (defined(__GNUC__) && !defined(__STRICT_ANSI__)) || \
>> >>     (defined(__STDC_VERSION__) && __STDC_VERSION__ >= 201112L)
>> >
>> > The downside of this is that users of this header might need to change
>> > their code depending on what compiler they actually build with today (or
>> > even what options).
>> >
>> > Is adding the ".u" throughout the Xen code base too intrusive?
>>
>> I don't know and didn't check; I think the goal was to avoid having
>> to change consumers that use gcc for compilation.
>
> For ARM it is not an issue, but the size parameter can be used by
> out-of-tree code (V4V?).
> That's why I CC'ed Jean; I was hoping he would say that it is OK
> to add ".u".

I introduced the size field for XENMAPSPACE_gmfn_range; it is used by
hvmloader when we want to relocate memory because the PCI hole needs to
be bigger.

Maybe something else uses it by now; if not, you could get away with
just updating the Xen code to use .u.

Jean

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 10:10:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 10:10:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz3D3-0004wG-BN; Wed, 08 Aug 2012 10:09:49 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Sz3D2-0004wB-0r
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 10:09:48 +0000
Received: from [85.158.138.51:12017] by server-3.bemta-3.messagelabs.com id
	09/43-13122-BEA32205; Wed, 08 Aug 2012 10:09:47 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-3.tower-174.messagelabs.com!1344420584!22902654!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg1NTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14614 invoked from network); 8 Aug 2012 10:09:44 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-3.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Aug 2012 10:09:44 -0000
X-IronPort-AV: E=Sophos;i="4.77,732,1336348800"; d="scan'208";a="13905288"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Aug 2012 10:09:12 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 8 Aug 2012 11:09:12 +0100
Date: Wed, 8 Aug 2012 11:08:48 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Jean Guyader <jean.guyader@gmail.com>
In-Reply-To: <CAEBdQ92wgXRs3xX87NE=Db-yF-VOJMLrOYh+6e=gkhPf+qAb7g@mail.gmail.com>
Message-ID: <alpine.DEB.2.02.1208081108370.4645@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
	<1344262325-26598-1-git-send-email-stefano.stabellini@eu.citrix.com>
	<501FFFB50200007800092FC7@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208061640270.4645@kaball.uk.xensource.com>
	<502004BE0200007800093028@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208071318430.4645@kaball.uk.xensource.com>
	<CAEBdQ921bVoRss3iuf1oLDrv9JBrhcvx=S16XsjrYWA2eg1Y8g@mail.gmail.com>
	<502131E202000078000933F9@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208071613400.4645@kaball.uk.xensource.com>
	<50222DFB0200007800093746@nat28.tlf.novell.com>
	<1344411923.11783.1.camel@dagon.hellion.org.uk>
	<5022445002000078000937B9@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208081048410.4645@kaball.uk.xensource.com>
	<CAEBdQ92wgXRs3xX87NE=Db-yF-VOJMLrOYh+6e=gkhPf+qAb7g@mail.gmail.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "Tim Deegan \(3P\)" <Tim.Deegan@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>, "Jean Guyader
	\(3P\)" <jean.guyader@citrix.com>, xen-devel <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>, Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH 1/5] xen: improve changes to
 xen_add_to_physmap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 8 Aug 2012, Jean Guyader wrote:
> On 8 August 2012 10:51, Stefano Stabellini
> <stefano.stabellini@eu.citrix.com> wrote:
> > On Wed, 8 Aug 2012, Jan Beulich wrote:
> >> >>> On 08.08.12 at 09:45, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> >> > On Wed, 2012-08-08 at 08:14 +0100, Jan Beulich wrote:
> >> >> >>> On 07.08.12 at 19:07, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> >> > wrote:
> >> >> > Regarding the name, maybe it should be XEN_ADD_TO_PHYSMAP_FIELD?
> >> >>
> >> >> Sounds fine (and I like this better than the ..._ARG one you used
> >> >> below.)
> >> >>
> >> >> > #if (defined(__GNUC__) && !defined(__STRICT_ANSI__)) || (__STDC_VERSION__ >=
> >> > 201112L)
> >> >>
> >> >> #if (defined(__GNUC__) && !defined(__STRICT_ANSI__)) || \
> >> >>     (defined(__STDC_VERSION__) && __STDC_VERSION__ >= 201112L)
> >> >
> >> > The downside of this is that users of this header might need to change
> >> > their code depending on what compiler they actually build with today (or
> >> > even what options).
> >> >
> >> > Is adding the ".u" throughout the Xen code base too intrusive?
> >>
> >> I don't know and didn't check; I think the goal was to avoid having
> >> to change consumers that use gcc for compilation.
> >
> > For ARM it is not an issue, but the size parameter can be used by
> > out-of-tree code (V4V?).
> > That's why I CC'ed Jean; I was hoping he would say that it is OK
> > to add ".u".
> 
> I introduced the size field for XENMAPSPACE_gmfn_range; it is used by
> hvmloader when we want to relocate memory because the PCI hole needs
> to be bigger.
> 
> Maybe something else uses it by now; if not, you could get away with
> just updating the Xen code to use .u.

Great, I'll do that then.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 10:10:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 10:10:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz3D3-0004wG-BN; Wed, 08 Aug 2012 10:09:49 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Sz3D2-0004wB-0r
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 10:09:48 +0000
Received: from [85.158.138.51:12017] by server-3.bemta-3.messagelabs.com id
	09/43-13122-BEA32205; Wed, 08 Aug 2012 10:09:47 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-3.tower-174.messagelabs.com!1344420584!22902654!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg1NTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14614 invoked from network); 8 Aug 2012 10:09:44 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-3.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Aug 2012 10:09:44 -0000
X-IronPort-AV: E=Sophos;i="4.77,732,1336348800"; d="scan'208";a="13905288"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Aug 2012 10:09:12 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 8 Aug 2012 11:09:12 +0100
Date: Wed, 8 Aug 2012 11:08:48 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Jean Guyader <jean.guyader@gmail.com>
In-Reply-To: <CAEBdQ92wgXRs3xX87NE=Db-yF-VOJMLrOYh+6e=gkhPf+qAb7g@mail.gmail.com>
Message-ID: <alpine.DEB.2.02.1208081108370.4645@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
	<1344262325-26598-1-git-send-email-stefano.stabellini@eu.citrix.com>
	<501FFFB50200007800092FC7@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208061640270.4645@kaball.uk.xensource.com>
	<502004BE0200007800093028@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208071318430.4645@kaball.uk.xensource.com>
	<CAEBdQ921bVoRss3iuf1oLDrv9JBrhcvx=S16XsjrYWA2eg1Y8g@mail.gmail.com>
	<502131E202000078000933F9@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208071613400.4645@kaball.uk.xensource.com>
	<50222DFB0200007800093746@nat28.tlf.novell.com>
	<1344411923.11783.1.camel@dagon.hellion.org.uk>
	<5022445002000078000937B9@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208081048410.4645@kaball.uk.xensource.com>
	<CAEBdQ92wgXRs3xX87NE=Db-yF-VOJMLrOYh+6e=gkhPf+qAb7g@mail.gmail.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "Tim Deegan \(3P\)" <Tim.Deegan@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>, "Jean Guyader
	\(3P\)" <jean.guyader@citrix.com>, xen-devel <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>, Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH 1/5] xen: improve changes to
 xen_add_to_physmap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 8 Aug 2012, Jean Guyader wrote:
> On 8 August 2012 10:51, Stefano Stabellini
> <stefano.stabellini@eu.citrix.com> wrote:
> > On Wed, 8 Aug 2012, Jan Beulich wrote:
> >> >>> On 08.08.12 at 09:45, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> >> > On Wed, 2012-08-08 at 08:14 +0100, Jan Beulich wrote:
> >> >> >>> On 07.08.12 at 19:07, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> >> > wrote:
> >> >> > Regarding the name, maybe it should be XEN_ADD_TO_PHYSMAP_FIELD?
> >> >>
> >> >> Sounds fine (and I like this better than the ..._ARG one you used
> >> >> below).
> >> >>
> >> >> > #if (defined(__GNUC__) && !defined(__STRICT_ANSI__)) || (__STDC_VERSION__ >=
> >> > 201112L)
> >> >>
> >> >> #if (defined(__GNUC__) && !defined(__STRICT_ANSI__)) || \
> >> >>     (defined(__STDC_VERSION__) && __STDC_VERSION__ >= 201112L)
> >> >
> >> > The downside of this is that users of this header might need to change
> >> > their code depending on what compiler they actually build with today (or
> >> > even what options).
> >> >
> >> > Is adding the ".u" throughout the Xen code base too intrusive?
> >>
> >> I don't know and didn't check; I think the goal was to avoid having
> >> to change consumers that use gcc for compilation.
> >
> > For ARM it is not an issue, but the size parameter can be used by
> > out-of-tree code (V4V?).
> > That's why I CC'ed Jean; I was hoping he was going to say that it is
> > OK to add ".u".
> 
> I introduced the size field for XENMAPSPACE_gmfn_range; it is used by
> hvmloader when we want to relocate the memory because the PCI hole
> needs to be bigger.
> 
> Maybe something else uses it by now; if not, you could get away with
> just updating the Xen code to use .u.

Great, I'll do that then.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 10:26:22 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 10:26:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz3Se-0005Ti-L3; Wed, 08 Aug 2012 10:25:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Sz3Sd-0005Ta-Nn
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 10:25:55 +0000
Received: from [85.158.143.35:43146] by server-3.bemta-4.messagelabs.com id
	C6/89-31486-3BE32205; Wed, 08 Aug 2012 10:25:55 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1344421551!16720656!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14321 invoked from network); 8 Aug 2012 10:25:52 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-3.tower-21.messagelabs.com with SMTP;
	8 Aug 2012 10:25:52 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 08 Aug 2012 11:25:50 +0100
Message-Id: <50225ACD020000780009380F@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Wed, 08 Aug 2012 11:25:49 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part99A8E6BD.0__="
Subject: [Xen-devel] [PATCH] libxl: fix build for gcc prior to 4.3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part99A8E6BD.0__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

So far all we (explicitly) require is gcc 3.4 or better, so we
shouldn't be unconditionally using features supported only by much
newer versions.

Short of a proper replacement, use the "deprecated" attribute instead:
It also produces a warning (thus causing the build to fail due to
-Werror), and is at least getting close to the intention here.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -55,8 +55,12 @@
 #ifdef LIBXL_H
 # error libxl.h should be included via libxl_internal.h, not separately
 #endif
-#define LIBXL_EXTERNAL_CALLERS_ONLY \
+#if __GNUC__ > 4 || (__GNUC__ =3D=3D 4 && __GNUC_MINOR__ >=3D 3)
+# define LIBXL_EXTERNAL_CALLERS_ONLY \
     __attribute__((warning("may not be called from within libxl")))
+#else
+# define LIBXL_EXTERNAL_CALLERS_ONLY __attribute__((__deprecated__))
+#endif
 
 #include "libxl.h"
 #include "_paths.h"




--=__Part99A8E6BD.0__=
Content-Type: text/plain; name="libxl-build-gcc-pre-4.3.patch"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment; filename="libxl-build-gcc-pre-4.3.patch"

libxl: fix build for gcc prior to 4.3

So far all we (explicitly) require is gcc 3.4 or better, so we
shouldn't be unconditionally using features supported only by much
newer versions.

Short of a proper replacement, use the "deprecated" attribute instead:
It also produces a warning (thus causing the build to fail due to
-Werror), and is at least getting close to the intention here.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -55,8 +55,12 @@
 #ifdef LIBXL_H
 # error libxl.h should be included via libxl_internal.h, not separately
 #endif
-#define LIBXL_EXTERNAL_CALLERS_ONLY \
+#if __GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 3)
+# define LIBXL_EXTERNAL_CALLERS_ONLY \
     __attribute__((warning("may not be called from within libxl")))
+#else
+# define LIBXL_EXTERNAL_CALLERS_ONLY __attribute__((__deprecated__))
+#endif
 
 #include "libxl.h"
 #include "_paths.h"
--=__Part99A8E6BD.0__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part99A8E6BD.0__=--


From xen-devel-bounces@lists.xen.org Wed Aug 08 10:37:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 10:37:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz3dI-0005vV-QF; Wed, 08 Aug 2012 10:36:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <trashsee@gmail.com>) id 1Sz3dH-0005vO-4M
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 10:36:55 +0000
Received: from [85.158.143.99:50842] by server-2.bemta-4.messagelabs.com id
	3C/DB-19021-64142205; Wed, 08 Aug 2012 10:36:54 +0000
X-Env-Sender: trashsee@gmail.com
X-Msg-Ref: server-12.tower-216.messagelabs.com!1344422211!25565247!1
X-Originating-IP: [209.85.160.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25433 invoked from network); 8 Aug 2012 10:36:52 -0000
Received: from mail-gh0-f173.google.com (HELO mail-gh0-f173.google.com)
	(209.85.160.173)
	by server-12.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Aug 2012 10:36:52 -0000
Received: by ghrr17 with SMTP id r17so664877ghr.32
	for <xen-devel@lists.xen.org>; Wed, 08 Aug 2012 03:36:51 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=VCk0KWUP8i+rhaljaWa5zXYeLBvxSRqhYEBlX1ABR78=;
	b=YTy6TOP63+kqVLJs3eU75yyHbUxsW0/bnqydsIomr0sQ8TkFhECJ2lvF3cPGlbX/uH
	jeoFJ5970mx4tvJdbTouE8er5Vsp5JxTlST7zJkYJiCV/7upiCyOS1dFuTJPZwGLNqnB
	C2lFQAJ/G/AZJ2kHWUFXuUWt+ZiLiWCigi9iV7F3cQG6GhJZRosH8HSwnmMj9B7HiiVz
	13HV6k9zhvsQn7m7xp0ET/YH4gxjroDWGa7NNSmug+AjLZ6s2USO4MmGbk8OTNmXXvtc
	sWjmD9/+AHj299pnLqhIsrRzvKTiIdZQ9XbL7O1Li7sjQmilMY8+R/apzCAMZZ4lqiJa
	DzNg==
MIME-Version: 1.0
Received: by 10.50.87.133 with SMTP id ay5mr51593igb.49.1344422210841; Wed, 08
	Aug 2012 03:36:50 -0700 (PDT)
Received: by 10.64.22.10 with HTTP; Wed, 8 Aug 2012 03:36:50 -0700 (PDT)
In-Reply-To: <CAPny0soV4Z0R_PADtjn4JpCFMPkU-m+O4vBWA+DJRb9GVV36=g@mail.gmail.com>
References: <CAPny0soyuQkUmAU+kYrBvG+w_jxKUsY8YxCrxBA=7cwmdwV6Xw@mail.gmail.com>
	<alpine.DEB.2.02.1207301934540.4645@kaball.uk.xensource.com>
	<CAPny0soV4Z0R_PADtjn4JpCFMPkU-m+O4vBWA+DJRb9GVV36=g@mail.gmail.com>
Date: Wed, 8 Aug 2012 14:36:50 +0400
Message-ID: <CAPny0sqtgo7MvndfLN6JExkQMP40ro1FT6Edc6OKWt0KreNYnQ@mail.gmail.com>
From: Alexey Klimov <trashsee@gmail.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>, 
	Ian Campbell <Ian.Campbell@citrix.com>
Content-Type: multipart/mixed; boundary=e89a8f3bafff4e2c0804c6beb06b
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [questions] Dom0/DomU on ARM under Xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--e89a8f3bafff4e2c0804c6beb06b
Content-Type: text/plain; charset=ISO-8859-1

2012/8/1 Alexey Klimov <trashsee@gmail.com>:
> And I saw that Ian set up a git repository for Xen with the latest
> patches for ARM. So I'll try to use this repository.

Hello Stefano and Ian,

I used Ian's new xen-unstable git repository
(git://xenbits.xen.org/people/ianc/xen-unstable.git arm-for-4.) and
Stefano's Linux kernel git repository
(git://xenbits.xen.org/people/sstabellini/linux-pvhvm.git
3.5-rc7-arm-2) with additional patches:

- for the Linux kernel, "xen/events: fix unmask_evtchn for PV on HVM
guests";
- the "ARM hypercall ABI: 64 bit ready" patch series for Xen.
I also attached a few versions of xcbuild (Ian's early version and the
latest one). After applying the 64-bit-ready patches I observed the
following errors when building Xen and the tools:

1)
for i in public/callback.h public/dom0_ops.h public/elfnote.h
public/event_channel.h public/features.h public/grant_table.h
public/kexec.h public/mem_event.h public/memory.h public/nmi.h
public/physdev.h public/platform.h public/sched.h public/tmem.h
public/trace.h public/vcpu.h public/version.h public/xen-compat.h
public/xen.h public/xencomm.h public/xenoprof.h public/hvm/e820.h
public/hvm/hvm_info_table.h public/hvm/hvm_op.h public/hvm/ioreq.h
public/hvm/params.h public/io/blkif.h public/io/console.h
public/io/fbif.h public/io/fsif.h public/io/kbdif.h
public/io/libxenvchan.h public/io/netif.h public/io/pciif.h
public/io/protocols.h public/io/ring.h public/io/tpmif.h
public/io/usbif.h public/io/vscsiif.h public/io/xenbus.h
public/io/xs_wire.h; do gcc -ansi -include stdint.h -Wall -W -Werror
-S -o /dev/null -xc $i || exit 1; echo $i; done >headers.chk.new
public/version.h:61:5: error: unknown type name 'xen_ulong_t'
make[3]: *** [headers.chk] Error 1
make[3]: Leaving directory `/src/xen/xen/include'

I fixed this by inserting #include "arch-arm.h" into
xen/include/public/version.h.

2)
building 'xc' extension
gcc -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall
-Wstrict-prototypes -O1 -fno-omit-frame-pointer -marm -g
-fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes
-Wdeclaration-after-statement -Wno-unused-but-set-variable
-D__XEN_TOOLS__ -MMD -MF .build.d -D_LARGEFILE_SOURCE
-D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE
-fno-optimize-sibling-calls -fPIC -I../../tools/include
-I../../tools/libxc -Ixen/lowlevel/xc -I/usr/include/python2.7 -c
xen/lowlevel/xc/xc.c -o
build/temp.linux-armv7l-2.7/xen/lowlevel/xc/xc.o -fno-strict-aliasing
-Werror
xen/lowlevel/xc/xc.c: In function 'pyxc_xeninfo':
xen/lowlevel/xc/xc.c:1442:5: error: format '%lx' expects argument of
type 'long unsigned int', but argument 4 has type 'xen_ulong_t'
[-Werror=format]
xen/lowlevel/xc/xc.c:1442:5: error: format '%lx' expects argument of
type 'long unsigned int', but argument 4 has type 'xen_ulong_t'
[-Werror=format]
cc1: all warnings being treated as errors

I just commented out the snprintf(str, sizeof(str), "virt_start=0x%lx",
p_parms.virt_start); call in xc.c.

Then it compiled and I tried to run a DomU. It looks like the
allocation of console_pfn and xenstore_pfn in alloc_magic_pages() in
xc_dom_arm.c is the real source of trouble for me. With this
allocation/patch, Xen prints "bad p2m lookup" messages before booting
the DomU:
(XEN) bad p2m lookup
(XEN) dom1 IPA 0x0000000090000000
(XEN) P2M @ 02ffcac0 mfn:0xffe56
(XEN) 1ST[0x2] = 0x00000000f3f686ff
(XEN) 2ND[0x80] = 0x0000000000000000
(XEN) bad p2m lookup
(XEN) dom1 IPA 0x0000000090001000
(XEN) P2M @ 02ffcac0 mfn:0xffe56
(XEN) 1ST[0x2] = 0x00000000f3f686ff
(XEN) 2ND[0x80] = 0x0000000000000000
(XEN) bad p2m lookup
(XEN) dom1 IPA 0x0000000090001000
(XEN) P2M @ 02ffcac0 mfn:0xffe56
(XEN) 1ST[0x2] = 0x00000000f3f686ff
(XEN) 2ND[0x80] = 0x0000000000000000

and then everything hangs with a translation fault:

(XEN) DOM1: Grant tables using version 1 layout.
(XEN) DOM1: Grant table initialized
(XEN) DOM1: NET: Registered protocol family 16
(XEN) Guest data abort: Translation fault at level 2
(XEN)     gva=88808804
(XEN)     gpa=0000000090001804
(XEN)     size=2 sign=0 write=0 reg=2
(XEN)     eat=0 cm=0 s1ptw=0 dfsc=6
(XEN) dom1 IPA 0x0000000090001804
(XEN) P2M @ 02ffcac0 mfn:0xffe56
(XEN) 1ST[0x2] = 0x00000000f3f686ff
(XEN) 2ND[0x80] = 0x0000000000000000

A detailed log is attached.
OK, I moved the allocation for the console and xenstore pages back into
arch_setup_meminit() as in
http://lists.xen.org/archives/html/xen-devel/2012-06/msg01340.html and
then added the kernel parameter keep_bootcon to the DomU device tree
file, and everything booted up to "unable to open an initial console"
and a failure to mount the rootfs. I still haven't learned how to deal
with xenstore, hvc0, xvda, and how to boot from an initramfs on ARM
using xcbuild, but I'll try to understand and learn this :) So it may
be worth investigating, or taking a deep look at, why add_to_physmap
failed in xcbuild and why there is a bad p2m lookup in Xen. The log is
attached.

Are there any differences between your Dom0 .config and DomU .config?
Did you just attach the initrd using the xc_dom_ramdisk_file() call in
xcbuild? Any special configuration of the Xen console/xenstore?

Well, I don't mean that I'm doing everything correctly, but I tried to
run it, fixing/commenting out as much as I could. Could you please help
if you have time? I can test new changes and send other useful
info/logs.

-- 
Best regards,
Alexey.

--e89a8f3bafff4e2c0804c6beb06b
Content-Type: application/octet-stream; 
	name="fault_xcbuild-simple-Dom0+U-A15x1_07082012.log"
Content-Disposition: attachment; 
	filename="fault_xcbuild-simple-Dom0+U-A15x1_07082012.log"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_h5m9l05a2

cm9vdEB0dXo6L3Vzci9saWIveGVuL2JpbiMgLi94Y2J1aWxkLXNpbXBsZSAvYm9vdC96SW1hZ2Ut
dGVzdDQgCkltYWdlOiAvYm9vdC96SW1hZ2UtdGVzdDQKTWVtb3J5OiAyNjQxOTJLQgp4Y19kb21h
aW5fY3JlYXRlOiAwICgwKQpidWlsZGluZyBkb20xCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogeGNf
ZG9tX2FsbG9jYXRlOiBjbWRsaW5lPSIiLCBmZWF0dXJlcz0iKG51bGwpIgpkb21haW5idWlsZGVy
OiBkZXRhaWw6IHhjX2RvbV9rZXJuZWxfZmlsZTogZmlsZW5hbWU9Ii9ib290L3pJbWFnZS10ZXN0
NCIKZG9tYWluYnVpbGRlcjogZGV0YWlsOiB4Y19kb21fbWFsbG9jX2ZpbGVtYXAgICAgOiAyMDI4
IGtCCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogeGNfZG9tX2Jvb3RfeGVuX2luaXQ6IHZlciA0LjIs
IGNhcHMgeGVuLTMuMC1hcm12N2wgCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogeGNfZG9tX3BhcnNl
X2ltYWdlOiBjYWxsZWQKZG9tYWluYnVpbGRlcjogZGV0YWlsOiB4Y19kb21fZmluZF9sb2FkZXI6
IHRyeWluZyBtdWx0aWJvb3QtYmluYXJ5IGxvYWRlciAuLi4gCmRvbWFpbmJ1aWxkZXI6IGRldGFp
bDogbG9hZGVyIHByb2JlIGZhaWxlZApkb21haW5idWlsZGVyOiBkZXRhaWw6IHhjX2RvbV9maW5k
X2xvYWRlcjogdHJ5aW5nIExpbnV4IHpJbWFnZSAoQVJNKSBsb2FkZXIgLi4uIAoKZG9tYWluYnVp
bGRlcjogZGV0YWlsOiB4Y19kb21fcHJvYmVfemltYWdlX2tlcm5lbDogZm91bmQgYW4gYXBwZW5k
ZWQgRFRCCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogbG9hZGVyIHByb2JlIE9LCmRvbWFpbmJ1aWxk
ZXI6IGRldGFpbDogeGNfZG9tX3BhcnNlX3ppbWFnZV9rZXJuZWw6IGNhbGxlZApkb21haW5idWls
ZGVyOiBkZXRhaWw6IHhjX2RvbV9wYXJzZV96aW1hZ2Vfa2VybmVsOiB4ZW4tMy4wLWFybXY3bDog
UkFNIHN0YXJ0cyBhdCA4MDAwMApkb21haW5idWlsZGVyOiBkZXRhaWw6IHhjX2RvbV9wYXJzZV96
aW1hZ2Vfa2VybmVsOiB4ZW4tMy4wLWFybXY3bDogMHg4MDAwODAwMCAtPiAweDgwMjAzM2EwCmRv
bWFpbmJ1aWxkZXI6IGRldGFpbDogeGNfZG9tX21lbV9pbml0OiBtZW0gMjU2IE1CLCBwYWdlcyAw
eDEwMDAwIHBhZ2VzLCA0ayBlYWNoCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogeGNfZG9tX21lbV9p
bml0OiAweDEwMDAwIHBhZ2VzCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogeGNfZG9tX2Jvb3RfbWVt
X2luaXQ6IGNhbGxlZApkb21haW5idWlsZGVyOiBkZXRhaWw6IHhjX2RvbV9tYWxsb2MgICAgICAg
ICAgICA6IDUxMiBrQgpkb21haW5idWlsZGVyOiBkZXRhaWw6IHJlbWFwX2FyZWFfbWZuX3B0ZV9m
bjogcHRlcCA4NzJjMzVjMCBhZGRyIDB4NzY5NzAwMDAgPT4gMHg5MDAwMDMwZiAvIDB4OTAwMDAK
eGNfZG9tX2J1aWxkX2ltYXJlbWFwX2FyZWFfbWZuX3B0ZV9mbjogcHRlcCA4NzJjMzVjNCBhZGRy
IDB4NzY5NzEwMDAgPT4gMHg5MDAwMTMwZiAvIDB4OTAwMDEKZ2U6IGNhbGxlZApkb21hcmVtYXBf
YXJlYV9tZm5fcHRlX2ZuOiBwdGVwIDg3MmMzNWM4IGFkZHIgMHg3Njk3MjAwMCA9PiAweDkwMDAy
MzBmIC8gMHg5MDAwMgppbmJ1aWxkZXI6IGRldGFpcmVtYXBfYXJlYV9tZm5fcHRlX2ZuOiBwdGVw
IDg3MmMzNWNjIGFkZHIgMHg3Njk3MzAwMCA9PiAweDkwMDAzMzBmIC8gMHg5MDAwMwpsOiB4Y19k
b21fYWxsb2NfcmVtYXBfYXJlYV9tZm5fcHRlX2ZuOiBwdGVwIDg3MmMzNWQwIGFkZHIgMHg3Njk3
NDAwMCA9PiAweDkwMDA0MzBmIC8gMHg5MDAwNApzZWdtZW50OiAgIGtlcm5lcmVtYXBfYXJlYV9t
Zm5fcHRlX2ZuOiBwdGVwIDg3MmMzNWQ0IGFkZHIgMHg3Njk3NTAwMCA9PiAweDkwMDA1MzBmIC8g
MHg5MDAwNQpsICAgICAgIDogMHg4MDAwcmVtYXBfYXJlYV9tZm5fcHRlX2ZuOiBwdGVwIDg3MmMz
NWQ4IGFkZHIgMHg3Njk3NjAwMCA9PiAweDkwMDA2MzBmIC8gMHg5MDAwNgo4MDAwIC0+IDB4ODAy
MDQwcmVtYXBfYXJlYV9tZm5fcHRlX2ZuOiBwdGVwIDg3MmMzNWRjIGFkZHIgMHg3Njk3NzAwMCA9
PiAweDkwMDA3MzBmIC8gMHg5MDAwNwowMCAgKHBmbiAweDgwMDA4cmVtYXBfYXJlYV9tZm5fcHRl
X2ZuOiBwdGVwIDg3MmMzNWUwIGFkZHIgMHg3Njk3ODAwMCA9PiAweDkwMDA4MzBmIC8gMHg5MDAw
OApyZW1hcF9hcmVhX21mbl9wdGVfZm46IHB0ZXAgODcyYzM1ZTQgYWRkciAweDc2OTc5MDAwID0+
IDB4OTAwMDkzMGYgLyAweDkwMDA5CgpyZW1hcF9hcmVhX21mbl9wdGVfZm46IHB0ZXAgODcyYzM1
ZTggYWRkciAweDc2OTdhMDAwID0+IDB4OTAwMGEzMGYgLyAweDkwMDBhCnJlbWFwX2FyZWFfbWZu
X3B0ZV9mbjogcHRlcCA4NzJjMzVlYyBhZGRyIDB4NzY5N2IwMDAgPT4gMHg5MDAwYjMwZiAvIDB4
OTAwMGIKcmVtYXBfYXJlYV9tZm5fcHRlX2ZuOiBwdGVwIDg3MmMzNWYwIGFkZHIgMHg3Njk3YzAw
MCA9PiAweDkwMDBjMzBmIC8gMHg5MDAwYwpyZW1hcF9hcmVhX21mbl9wdGVfZm46IHB0ZXAgODcy
YzM1ZjQgYWRkciAweDc2OTdkMDAwID0+IDB4OTAwMGQzMGYgLyAweDkwMDBkCnJlbWFwX2FyZWFf
bWZuX3B0ZV9mbjogcHRlcCA4NzJjMzVmOCBhZGRyIDB4NzY5N2UwMDAgPT4gMHg5MDAwZTMwZiAv
IDB4OTAwMGUKcmVtYXBfYXJlYV9tZm5fcHRlX2ZuOiBwdGVwIDg3MmMzNWZjIGFkZHIgMHg3Njk3
ZjAwMCA9PiAweDkwMDBmMzBmIC8gMHg5MDAwZgpkb21haW5idWlsZGVyOiBkZXRhaWw6IHhjX2Rv
bV9wZm5fdG9fcHRyOiBkb21VIG1hcHBpbmc6IHBmbiAweDgwMDA4KzB4MWZjIGF0IDB4NzY5NzAw
MDAKZG9tYWluYnVpbGRlcjogZGV0YWlsOiB4Y19kb21fbG9hZF96aW1hZ2Vfa2VybmVsOiBjYWxs
ZWQKZG9tYWluYnVpbGRlcjogZGV0YWlsOiB4Y19kb21fbG9hZF96aW1hZ2Vfa2VybmVsOiBrZXJu
ZWwgc2VkIGZvcmVpZ24gbWFwIGFkZF90b19waHlzbWFwIGZhaWxlZCwgZXJyPS0yMgoweDgwMDA4
MDAwLTB4ODAyMDQwMDAKZG9tYWluYnVpZm9yZWlnbiBtYXAgYWRkX3RvX3BoeXNtYXAgZmFpbGVk
LCBlcnI9LTIyCmxkZXI6IGRldGFpbDogeGNfZG9tX2xvYWRfemltYWdlX2tlcm5lbDogY29weSAy
MDc3NjAwIGJ5dGVzIGZyb20gYmxvYiAweDc2YmVkMDAwIHRvIGRzdCAweDc2OTcwMDAwCmRvbWFp
bmJ1aWxkZXI6IGRldGFpbDogYWxsb2NfbWFnaWNfcGFnZXM6IGNhbGxlZApkb21haW5idWlsZGVy
OiBkZXRhaWw6IGNvdW50X3BndGFibGVzX2FybTogY2FsbGVkCmRvbWFpbmJ1aWxkZXI6IGRldGFp
bDogeGNfZG9tX2Jmb3JlaWduIG1hcCBhZGRfdG9fcGh5c21hcCBmYWlsZWQsIGVycj0tMjIKdWls
ZF9pbWFnZSAgOiB2aXJ0X2FsbG9jX2VuZCA6IDB4ODAyMDQwMDAKZG9tYWluYnVpbGRlcjogZGV0
YWlsOiB4Y19kb21fYnVpbGRfaW1hZ2UgIDogdmlydF9wZ3RhYl9lbmQgOiAweDAKZG9tYWluYnVp
bGRlcjogZGV0YWlsOiB4Y19kb21fYm9vdF9pbWFnZTogY2FsbGVkCmRvbWFpbmJ1aWxkZXI6IGRl
dGFpbDogYXJjaF9zZXR1cF9ib290ZWFybHk6IGRvaW5nIG5vdGhpbmcKZG9tYWluYnVpbGRlcjog
ZGV0YWlsOiB4Y19kb21fY29tcGF0X2NoZWNrOiBzdXBwb3J0ZWQgZ3Vlc3QgdHlwZTogeGVuLTMu
MC1hcm12N2wgPD0gbWF0Y2hlcwpkb21haW5idWlsZGVyOiBkZXRhaWw6IHNldHVwX3BndGFibGVz
X2FybTogY2FsbGVkCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogc3RhcnRfaW5mb19hcm06IGNhbGxl
ZApkb21haW5idWlsZGVyOiBkZXRhaWw6IGRvbWFpbiBidWlsZGVyIG1lbW9yeSBmb290cHJpbnQK
ZG9tYWluYnVpbGRlcjogZGV0YWlsOiAgICBhbGxvY2F0ZWQKZG9tYWluYnVpbGRlcjogZGV0YWls
OiAgICAgICBtYWxsb2MgICAgICAgICAgICAgOiA1MjUga0IKZG9tYWluYnVpbGRlcjogZGV0YWls
OiAgICAgICBhbm9uIG1tYXAgICAgICAgICAgOiAwIGJ5dGVzCmRvbWFpbmJ1aWxkZXI6IGRldGFp
bDogICAgbWFwcGVkCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogICAgICAgZmlsZSBtbWFwICAgICAg
ICAgIDogMjAyOCBrQgpkb21haW5idWlsZGVyOiBkZXRhaWw6ICAgICAgIGRvbVUgbW1hcCAgICAg
ICAgICA6IDIwMzIga0IKZG9tYWluYnVpbGRlcjogZGV0YWlsOiB2Y3B1X2FybTogY2FsbGVkCmRv
bWFpbmJ1aWxkZXI6IGRldGFpbDogSW5pdGlhbCBzdGF0ZSBDUFNSIDB4MWQzIFBDIDB4ODAwMDgw
MDAKZG9tYWluYnVpbGRlcjogZGV0YWlsOiBsYXVuY2hfdm06IGNhbGxlZCwgY3R4dD0weDdlZjhk
N2U0CmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogeGNfZG9tX3JlbGVhc2U6IGNhbGxlZAp4YzogZGVi
dWc6IGh5cGVyY2FsbCBidWZmZXI6IHRvdGFsIGFsbG9jYXRpb25zOjIwIHRvdGFsIHJlbGVhc2Vz
OjIwCnhjOiBkZWJ1ZzogaHlwZXJjYWxsIGJ1ZmZlcjogY3VycmVudCBhbGxvY2F0aW9uczowIG1h
eGltdW0gYWxsb2NhdGlvbnM6Mgp4YzogZGVidWc6IGh5cGVyY2FsbCBidWZmZXI6IGNhY2hlIGN1
cnJlbnQgc2l6ZToyCnhjOiBkZWJ1ZzogaHlwZXJjYWxsIGJ1ZmZlcjogY2FjaGUgaGl0czoxNyBt
aXNzZXM6MiB0b29iaWc6MQoKCgoKCi0gVUFSVCBlbmFibGVkIC0KLSBDUFUgMDAwMDAwMDAgYm9v
dGluZyAtCi0gU3RhcnRlZCBpbiBTZWN1cmUgc3RhdGUgLQotIEVudGVyaW5nIEh5cCBtb2RlIC0K
LSBTZXR0aW5nIHVwIGNvbnRyb2wgcmVnaXN0ZXJzIC0KLSBUdXJuaW5nIG9uIHBhZ2luZyAtCi0g
UmVhZHkgLQpSQU06IDAwMDAwMDAwODAwMDAwMDAgLSAwMDAwMDAwMGZmZmZmZmZmCiBfXyAgX18g
ICAgICAgICAgICBfICBfICAgIF9fX18gICAgX19fICAgICAgICAgICAgICBfX19fICAgICAgICAg
ICAgICAgICAgICAKIFwgXC8gL19fXyBfIF9fICAgfCB8fCB8ICB8X19fIFwgIC8gXyBcICAgIF8g
X18gX19ffF9fXyBcICAgIF8gX18gIF8gX18gX19fIAogIFwgIC8vIF8gXCAnXyBcICB8IHx8IHxf
ICAgX18pIHx8IHwgfCB8X198ICdfXy8gX198IF9fKSB8X198ICdfIFx8ICdfXy8gXyBcCiAgLyAg
XCAgX18vIHwgfCB8IHxfXyAgIF98IC8gX18vIHwgfF98IHxfX3wgfCB8IChfXyAvIF9fL3xfX3wg
fF8pIHwgfCB8ICBfXy8KIC9fL1xfXF9fX3xffCB8X3wgICAgfF98KF8pX19fX18oXylfX18vICAg
fF98ICBcX19ffF9fX19ffCAgfCAuX18vfF98ICBcX19ffAogICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICB8X3wgICAgICAgICAgICAgCihY
RU4pIFhlbiB2ZXJzaW9uIDQuMi4wLXJjMi1wcmUgKHJvb3RAKG5vbmUpKSAoZ2NjIHZlcnNpb24g
NC42LjMgKFVidW50dS9MaW5hcm8gNC42LjMtMXVidW50dTUpICkgVHVlIEF1ZyAgNyAxMToyMzoz
MCBVVEMgMjAxMgooWEVOKSBMYXRlc3QgQ2hhbmdlU2V0OiB1bmF2YWlsYWJsZQooWEVOKSBHSUM6
IDY0IGxpbmVzLCAxIGNwdSwgc2VjdXJlIChJSUQgMDAwMDA0M2IpLgooWEVOKSBXYWl0aW5nIGZv
ciAwIG90aGVyIENQVXMgdG8gYmUgcmVhZHkKKFhFTikgVXNpbmcgZ2VuZXJpYyB0aW1lciBhdCAx
MDAwMDAwMDAgSHoKKFhFTikgWGVuIGhlYXA6IDI2MjE0NCBwYWdlcyAgRG9tIGhlYXA6IDI1Mzk1
MiBwYWdlcwooWEVOKSBEb21haW4gaGVhcCBpbml0aWFsaXNlZAooWEVOKSBTZXQgaHlwIHZlY3Rv
ciBiYXNlIHRvIDIzY2ZjMCAoZXhwZWN0ZWQgMDAyM2NmYzApCihYRU4pIFByb2Nlc3NvciBGZWF0
dXJlczogMDAwMDExMzEgMDAwMDExMzEKKFhFTikgRGVidWcgRmVhdHVyZXM6IDAyMDEwNTU1CihY
RU4pIEF1eGlsaWFyeSBGZWF0dXJlczogMDAwMDAwMDAKKFhFTikgTWVtb3J5IE1vZGVsIEZlYXR1
cmVzOiAxMDIwMTEwNSAyMDAwMDAwMCAwMTI0MDAwMCAwMjEwMjIxMQooWEVOKSBJU0EgRmVhdHVy
ZXM6IDAyMTAxMTEwIDEzMTEyMTExIDIxMjMyMDQxIDExMTEyMTMxIDEwMDExMTQyIDAwMDAwMDAw
CihYRU4pIFVzaW5nIHNjaGVkdWxlcjogU01QIENyZWRpdCBTY2hlZHVsZXIgKGNyZWRpdCkKKFhF
TikgQWxsb2NhdGVkIGNvbnNvbGUgcmluZyBvZiAxNiBLaUIuCihYRU4pIEJyb3VnaHQgdXAgMSBD
UFVzCihYRU4pICoqKiBMT0FESU5HIERPTUFJTiAwICoqKgooWEVOKSBQb3B1bGF0ZSBQMk0gMHg4
MDAwMDAwMC0+MHg4ODAwMDAwMAooWEVOKSBNYXAgQ1MyIE1NSU8gcmVnaW9ucyAxOjEgaW4gdGhl
IFAyTSAweDE4MDAwMDAwLT4weDFiZmZmZmZmCihYRU4pIE1hcCBDUzMgTU1JTyByZWdpb25zIDE6
MSBpbiB0aGUgUDJNIDB4MWMwMDAwMDAtPjB4MWZmZmZmZmYKKFhFTikgTWFwIFZHSUMgTU1JTyBy
ZWdpb25zIDE6MSBpbiB0aGUgUDJNIDB4MmMwMDgwMDAtPjB4MmRmZmZmZmYKKFhFTikgUm91dGlu
ZyBwZXJpcGhlcmFsIGludGVycnVwdHMgdG8gZ3Vlc3QKKFhFTikgTG9hZGluZyAwMDAwMDAwMDAw
MWZhYTAwIGJ5dGUgekltYWdlIGZyb20gZmxhc2ggMDAwMDAwMDAwMDAwMDAwMCB0byAwMDAwMDAw
MDgwMDA4MDAwLTAwMDAwMDAwODAyMDJhMDA6IFsuLl0KKFhFTikgU3RkLiBMb2dsZXZlbDogQWxs
CihYRU4pIEd1ZXN0IExvZ2xldmVsOiBBbGwKKFhFTikgKioqIFNlcmlhbCBpbnB1dCAtPiBET00w
ICh0eXBlICdDVFJMLWEnIHRocmVlIHRpbWVzIHRvIHN3aXRjaCBpbnB1dCB0byBYZW4pCihYRU4p
IEZyZWVkIDIwNGtCIGluaXQgbWVtb3J5LgpVbmNvbXByZXNzaW5nIExpbnV4Li4uIGRvbmUsIGJv
b3RpbmcgdGhlIGtlcm5lbC4KQm9vdGluZyBMaW51eCBvbiBwaHlzaWNhbCBDUFUgMApMaW51eCB2
ZXJzaW9uIDMuNS4wLXJjNysgKHJvb3RAdHV6KSAoZ2NjIHZlcnNpb24gNC42LjMgKFVidW50dS9M
aW5hcm8gNC42LjMtMXVidW50dTUpICkgIzEgTW9uIEF1ZyA2IDIwOjU0OjI0IE1TSyAyMDEyCkNQ
VTogQVJNdjcgUHJvY2Vzc29yIFs0MTJmYzBmMF0gcmV2aXNpb24gMCAoQVJNdjcpLCBjcj0xMGM1
M2M3ZApDUFU6IFBJUFQgLyBWSVBUIG5vbmFsaWFzaW5nIGRhdGEgY2FjaGUsIFBJUFQgaW5zdHJ1
Y3Rpb24gY2FjaGUKTWFjaGluZTogQVJNLVZlcnNhdGlsZSBFeHByZXNzLCBtb2RlbDogVjJQLUNB
MTUKYm9vdGNvbnNvbGUgW2Vhcmx5Y29uMF0gZW5hYmxlZApNZW1vcnkgcG9saWN5OiBFQ0MgZGlz
YWJsZWQsIERhdGEgY2FjaGUgd3JpdGViYWNrCk9uIG5vZGUgMCB0b3RhbHBhZ2VzOiAzMjc2OApm
cmVlX2FyZWFfaW5pdF9ub2RlOiBub2RlIDAsIHBnZGF0IDgwM2VhZTk0LCBub2RlX21lbV9tYXAg
ODA0MDgwMDAKICBOb3JtYWwgem9uZTogMjU2IHBhZ2VzIHVzZWQgZm9yIG1lbW1hcAogIE5vcm1h
bCB6b25lOiAwIHBhZ2VzIHJlc2VydmVkCiAgTm9ybWFsIHpvbmU6IDMyNTEyIHBhZ2VzLCBMSUZP
IGJhdGNoOjcKLS0tLS0tLS0tLS0tWyBjdXQgaGVyZSBdLS0tLS0tLS0tLS0tCldBUk5JTkc6IGF0
IGFyY2gvYXJtL21hY2gtdmV4cHJlc3MvdjJtLmM6NjEzIHYybV9kdF9pbml0X2Vhcmx5KzB4YWMv
MHhlYygpCk1vZHVsZXMgbGlua2VkIGluOgpCYWNrdHJhY2U6IApbPDgwMDExYjBjPl0gKGR1bXBf
YmFja3RyYWNlKzB4MC8weDEwYykgZnJvbSBbPDgwMmQ3ZWQwPl0gKGR1bXBfc3RhY2srMHgxOC8w
eDFjKQogcjY6MDAwMDAyNjUgcjU6ODAzYTZlMzQgcjQ6MDAwMDAwMDAgcjM6ODAzY2Y5M2MKWzw4
MDJkN2ViOD5dIChkdW1wX3N0YWNrKzB4MC8weDFjKSBmcm9tIFs8ODAwMWIxZGM+XSAod2Fybl9z
bG93cGF0aF9jb21tb24rMHg1NC8weDZjKQpbPDgwMDFiMTg4Pl0gKHdhcm5fc2xvd3BhdGhfY29t
bW9uKzB4MC8weDZjKSBmcm9tIFs8ODAwMWIyMTg+XSAod2Fybl9zbG93cGF0aF9udWxsKzB4MjQv
MHgyYykKIHI4OjgwM2NkMzM4IHI3OjgwNTA4NDQwIHI2OjgwMDAwMjAwIHI1OjgwM2YzYjg4IHI0
OjAwMDAwMDAwCnIzOjAwMDAwMDA5Cls8ODAwMWIxZjQ+XSAod2Fybl9zbG93cGF0aF9udWxsKzB4
MC8weDJjKSBmcm9tIFs8ODAzYTZlMzQ+XSAodjJtX2R0X2luaXRfZWFybHkrMHhhYy8weGVjKQpb
PDgwM2E2ZDg4Pl0gKHYybV9kdF9pbml0X2Vhcmx5KzB4MC8weGVjKSBmcm9tIFs8ODAzYTM2N2M+
XSAoc2V0dXBfYXJjaCsweDcxMC8weDdmYykKIHI0OjgwM2JhOTI4Cls8ODAzYTJmNmM+XSAoc2V0
dXBfYXJjaCsweDAvMHg3ZmMpIGZyb20gWzw4MDNhMTU5Yz5dIChzdGFydF9rZXJuZWwrMHg3OC8w
eDI2YykKWzw4MDNhMTUyND5dIChzdGFydF9rZXJuZWwrMHgwLzB4MjZjKSBmcm9tIFs8ODAwMDgw
NDA+XSAoMHg4MDAwODA0MCkKIHI3OjgwM2NkMjg0IHI2OjgwM2JiY2NjIHI1OjgwM2NhMDU0IHI0
OjEwYzUzYzdkCi0tLVsgZW5kIHRyYWNlIDFiNzViMzFhMjcxOWVkMWMgXS0tLQp2ZXhwcmVzczog
RFQgSEJJICgyMzcpIGlzIG5vdCBtYXRjaGluZyBoYXJkd2FyZSAoMCkhCnBjcHUtYWxsb2M6IHMw
IHIwIGQzMjc2OCB1MzI3NjggYWxsb2M9MSozMjc2OApwY3B1LWFsbG9jOiBbMF0gMCAKQnVpbHQg
MSB6b25lbGlzdHMgaW4gWm9uZSBvcmRlciwgbW9iaWxpdHkgZ3JvdXBpbmcgb24uICBUb3RhbCBw
YWdlczogMzI1MTIKS2VybmVsIGNvbW1hbmQgbGluZTogZWFybHlwcmludGsgY29uc29sZT10dHlB
TUExIHJvb3Q9L2Rldi9tbWNibGswIGRlYnVnIHJ3ClBJRCBoYXNoIHRhYmxlIGVudHJpZXM6IDUx
MiAob3JkZXI6IC0xLCAyMDQ4IGJ5dGVzKQpEZW50cnkgY2FjaGUgaGFzaCB0YWJsZSBlbnRyaWVz
OiAxNjM4NCAob3JkZXI6IDQsIDY1NTM2IGJ5dGVzKQpJbm9kZS1jYWNoZSBoYXNoIHRhYmxlIGVu
dHJpZXM6IDgxOTIgKG9yZGVyOiAzLCAzMjc2OCBieXRlcykKTWVtb3J5OiAxMjhNQiA9IDEyOE1C
IHRvdGFsCk1lbW9yeTogMTI1NzY0ay8xMjU3NjRrIGF2YWlsYWJsZSwgNTMwOGsgcmVzZXJ2ZWQs
IDBLIGhpZ2htZW0KVmlydHVhbCBrZXJuZWwgbWVtb3J5IGxheW91dDoKICAgIHZlY3RvciAgOiAw
eGZmZmYwMDAwIC0gMHhmZmZmMTAwMCAgICggICA0IGtCKQogICAgZml4bWFwICA6IDB4ZmZmMDAw
MDAgLSAweGZmZmUwMDAwICAgKCA4OTYga0IpCiAgICB2bWFsbG9jIDogMHg4ODgwMDAwMCAtIDB4
ZmYwMDAwMDAgICAoMTg5NiBNQikKICAgIGxvd21lbSAgOiAweDgwMDAwMDAwIC0gMHg4ODAwMDAw
MCAgICggMTI4IE1CKQogICAgbW9kdWxlcyA6IDB4N2YwMDAwMDAgLSAweDgwMDAwMDAwICAgKCAg
MTYgTUIpCiAgICAgIC50ZXh0IDogMHg4MDAwODAwMCAtIDB4ODAzYTEwMDAgICAoMzY4NCBrQikK
ICAgICAgLmluaXQgOiAweDgwM2ExMDAwIC0gMHg4MDNjMTU2OCAgICggMTMwIGtCKQogICAgICAu
ZGF0YSA6IDB4ODAzYzIwMDAgLSAweDgwM2ViNWMwICAgKCAxNjYga0IpCiAgICAgICAuYnNzIDog
MHg4MDNlYjVlNCAtIDB4ODA0MDcxNjQgICAoIDExMSBrQikKU0xVQjogR2Vuc2xhYnM9MTEsIEhX
YWxpZ249NjQsIE9yZGVyPTAtMywgTWluT2JqZWN0cz0wLCBDUFVzPTEsIE5vZGVzPTEKTlJfSVJR
UzoyNTYKYXJjaF90aW1lcjogY2FuJ3QgZmluZCBEVCBub2RlCkFyY2hpdGVjdGVkIGxvY2FsIHRp
bWVyIHJ1bm5pbmcgYXQgMTAwLjAwTUh6LgpzY2hlZF9jbG9jazogMzIgYml0cyBhdCAxMDBNSHos
IHJlc29sdXRpb24gMTBucywgd3JhcHMgZXZlcnkgNDI5NDltcwpDb25zb2xlOiBjb2xvdXIgZHVt
bXkgZGV2aWNlIDgweDMwCkNhbGlicmF0aW5nIGRlbGF5IGxvb3AuLi4gOTguNzEgQm9nb01JUFMg
KGxwaj00OTM1NjgpCgpwaWRfbWF4OiBkZWZhdWx0OiAzMjc2OCBtaW5pbXVtOiAzMDEKTW91bnQt
Y2FjaGUgaGFzaCB0YWJsZSBlbnRyaWVzOiA1MTIKQ1BVOiBUZXN0aW5nIHdyaXRlIGJ1ZmZlciBj
b2hlcmVuY3k6IG9rClNldHRpbmcgdXAgc3RhdGljIGlkZW50aXR5IG1hcCBmb3IgMHg4MDJkYzEz
OCAtIDB4ODAyZGMxNmMKWGVuIHN1cHBvcnQgZm91bmQsIGV2ZW50c19pcnE9MzEgZ250dGFiX2Zy
YW1lX3Bmbj1iMDAwMApHcmFudCB0YWJsZXMgdXNpbmcgdmVyc2lvbiAxIGxheW91dC4KR3JhbnQg
dGFibGUgaW5pdGlhbGl6ZWQKTkVUOiBSZWdpc3RlcmVkIHByb3RvY29sIGZhbWlseSAxNgotLS0t
LS0tLS0tLS1bIGN1dCBoZXJlIF0tLS0tLS0tLS0tLS0KV0FSTklORzogYXQga2VybmVsL2lycS9p
cnFkb21haW4uYzoxMzUgaXJxX2RvbWFpbl9sZWdhY3lfcmV2bWFwKzB4MjgvMHg1MCgpCk1vZHVs
ZXMgbGlua2VkIGluOgpCYWNrdHJhY2U6IApbPDgwMDExYjBjPl0gKGR1bXBfYmFja3RyYWNlKzB4
MC8weDEwYykgZnJvbSBbPDgwMmQ3ZWQwPl0gKGR1bXBfc3RhY2srMHgxOC8weDFjKQogcjY6MDAw
MDAwODcgcjU6ODAwNTEzMjAgcjQ6MDAwMDAwMDAgcjM6ODAzY2Y5M2MKWzw4MDJkN2ViOD5dIChk
dW1wX3N0YWNrKzB4MC8weDFjKSBmcm9tIFs8ODAwMWIxZGM+XSAod2Fybl9zbG93cGF0aF9jb21t
b24rMHg1NC8weDZjKQpbPDgwMDFiMTg4Pl0gKHdhcm5fc2xvd3BhdGhfY29tbW9uKzB4MC8weDZj
KSBmcm9tIFs8ODAwMWIyMTg+XSAod2Fybl9zbG93cGF0aF9udWxsKzB4MjQvMHgyYykKIHI4Ojg3
ODRlMTAwIHI3OjAwMDAwMDAzIHI2OjAwMDAwMDY0IHI1Ojg3ODAwNDQwIHI0OjgwM2Q4OTUwCnIz
OjAwMDAwMDA5Cls8ODAwMWIxZjQ+XSAod2Fybl9zbG93cGF0aF9udWxsKzB4MC8weDJjKSBmcm9t
IFs8ODAwNTEzMjA+XSAoaXJxX2RvbWFpbl9sZWdhY3lfcmV2bWFwKzB4MjgvMHg1MCkKWzw4MDA1
MTJmOD5dIChpcnFfZG9tYWluX2xlZ2FjeV9yZXZtYXArMHgwLzB4NTApIGZyb20gWzw4MDA1MTNl
OD5dIChpcnFfZmluZF9tYXBwaW5nKzB4YTAvMHhkMCkKWzw4MDA1MTM0OD5dIChpcnFfZmluZF9t
YXBwaW5nKzB4MC8weGQwKSBmcm9tIFs8ODAwNTE4MjQ+XSAoaXJxX2NyZWF0ZV9tYXBwaW5nKzB4
MjgvMHgxMjgpCiByODo4Nzg0ZTEwMCByNzowMDAwMDAwMyByNjowMDAwMDA2NCByNTo4NzgyZmRk
MCByNDo4NzgwMDQ0MApyMzo4NzgyZmRhNApbPDgwMDUxN2ZjPl0gKGlycV9jcmVhdGVfbWFwcGlu
ZysweDAvMHgxMjgpIGZyb20gWzw4MDA1MTlhOD5dIChpcnFfY3JlYXRlX29mX21hcHBpbmcrMHg4
NC8weGY4KQogcjc6MDAwMDAwMDMgcjY6ODA1MDg4YTggcjU6ODc4MmZkZDAgcjQ6ODc4MDA0NDAK
Wzw4MDA1MTkyND5dIChpcnFfY3JlYXRlX29mX21hcHBpbmcrMHgwLzB4ZjgpIGZyb20gWzw4MDIz
Y2RhOD5dIChpcnFfb2ZfcGFyc2VfYW5kX21hcCsweDM0LzB4M2MpCiByNzowMDAwMDAwMCByNjo4
MDUwODlkYyByNTowMDAwMDAwMCByNDowMDAwMDAwMApbPDgwMjNjZDc0Pl0gKGlycV9vZl9wYXJz
ZV9hbmRfbWFwKzB4MC8weDNjKSBmcm9tIFs8ODAyM2NkZDA+XSAob2ZfaXJxX3RvX3Jlc291cmNl
KzB4MjAvMHg3YykKWzw4MDIzY2RiMD5dIChvZl9pcnFfdG9fcmVzb3VyY2UrMHgwLzB4N2MpIGZy
b20gWzw4MDIzY2U1OD5dIChvZl9pcnFfY291bnQrMHgyYy8weDNjKQogcjc6MDAwMDAwMDAgcjY6
ODA1MDg5ZGMgcjU6ODA1MDg5ZGMgcjQ6MDAwMDAwMDAKWzw4MDIzY2UyYz5dIChvZl9pcnFfY291
bnQrMHgwLzB4M2MpIGZyb20gWzw4MDIzZDQxYz5dIChvZl9kZXZpY2VfYWxsb2MrMHg1Yy8weDE1
YykKIHI1OjAwMDAwMDAwIHI0OjAwMDAwMDAwCls8ODAyM2QzYzA+XSAob2ZfZGV2aWNlX2FsbG9j
KzB4MC8weDE1YykgZnJvbSBbPDgwMjNkNTU4Pl0gKG9mX3BsYXRmb3JtX2RldmljZV9jcmVhdGVf
cGRhdGErMHgzYy8weDg4KQpbPDgwMjNkNTFjPl0gKG9mX3BsYXRmb3JtX2RldmljZV9jcmVhdGVf
cGRhdGErMHgwLzB4ODgpIGZyb20gWzw4MDIzZDY3OD5dIChvZl9wbGF0Zm9ybV9idXNfY3JlYXRl
KzB4ZDQvMHgyNzgpCiByNzowMDAwMDAwMSByNjowMDAwMDAwMCByNTo4MDNiYzFkYyByNDo4MDUw
ODlkYwpbPDgwMjNkNWE0Pl0gKG9mX3BsYXRmb3JtX2J1c19jcmVhdGUrMHgwLzB4Mjc4KSBmcm9t
IFs8ODAyM2Q4ODQ+XSAob2ZfcGxhdGZvcm1fcG9wdWxhdGUrMHg2OC8weGEwKQpbPDgwMjNkODFj
Pl0gKG9mX3BsYXRmb3JtX3BvcHVsYXRlKzB4MC8weGEwKSBmcm9tIFs8ODAzYTZiY2M+XSAodjJt
X2R0X2luaXQrMHgyYy8weDRjKQpbPDgwM2E2YmEwPl0gKHYybV9kdF9pbml0KzB4MC8weDRjKSBm
cm9tIFs8ODAzYTJiZWM+XSAoY3VzdG9taXplX21hY2hpbmUrMHgyNC8weDMwKQpbPDgwM2EyYmM4
Pl0gKGN1c3RvbWl6ZV9tYWNoaW5lKzB4MC8weDMwKSBmcm9tIFs8ODAwMDg2M2M+XSAoZG9fb25l
X2luaXRjYWxsKzB4NDAvMHgxODQpCls8ODAwMDg1ZmM+XSAoZG9fb25lX2luaXRjYWxsKzB4MC8w
eDE4NCkgZnJvbSBbPDgwM2ExODgwPl0gKGtlcm5lbF9pbml0KzB4ZjAvMHgxYWMpCls8ODAzYTE3
OTA+XSAoa2VybmVsX2luaXQrMHgwLzB4MWFjKSBmcm9tIFs8ODAwMWY3YjQ+XSAoZG9fZXhpdCsw
eDAvMHg2YmMpCi0tLVsgZW5kIHRyYWNlIDFiNzViMzFhMjcxOWVkMWQgXS0tLQotLS0tLS0tLS0t
LS1bIGN1dCBoZXJlIF0tLS0tLS0tLS0tLS0KV0FSTklORzogYXQga2VybmVsL2lycS9pcnFkb21h
aW4uYzoxMzUgaXJxX2RvbWFpbl9sZWdhY3lfcmV2bWFwKzB4MjgvMHg1MCgpCk1vZHVsZXMgbGlu
a2VkIGluOgpCYWNrdHJhY2U6IApbPDgwMDExYjBjPl0gKGR1bXBfYmFja3RyYWNlKzB4MC8weDEw
YykgZnJvbSBbPDgwMmQ3ZWQwPl0gKGR1bXBfc3RhY2srMHgxOC8weDFjKQogcjY6MDAwMDAwODcg
cjU6ODAwNTEzMjAgcjQ6MDAwMDAwMDAgcjM6ODAzY2Y5M2MKWzw4MDJkN2ViOD5dIChkdW1wX3N0
YWNrKzB4MC8weDFjKSBmcm9tIFs8ODAwMWIxZGM+XSAod2Fybl9zbG93cGF0aF9jb21tb24rMHg1
NC8weDZjKQpbPDgwMDFiMTg4Pl0gKHdhcm5fc2xvd3BhdGhfY29tbW9uKzB4MC8weDZjKSBmcm9t
IFs8ODAwMWIyMTg+XSAod2Fybl9zbG93cGF0aF9udWxsKzB4MjQvMHgyYykKIHI4Ojg3ODRlMTAw
IHI3OjAwMDAwMDAzIHI2OjAwMDAwMDY0IHI1OjAwMDAwMDAwIHI0Ojg3ODAwNDQwCnIzOjAwMDAw
MDA5Cls8ODAwMWIxZjQ+XSAod2Fybl9zbG93cGF0aF9udWxsKzB4MC8weDJjKSBmcm9tIFs8ODAw
NTEzMjA+XSAoaXJxX2RvbWFpbl9sZWdhY3lfcmV2bWFwKzB4MjgvMHg1MCkKWzw4MDA1MTJmOD5d
IChpcnFfZG9tYWluX2xlZ2FjeV9yZXZtYXArMHgwLzB4NTApIGZyb20gWzw4MDA1MThhYz5dIChp
cnFfY3JlYXRlX21hcHBpbmcrMHhiMC8weDEyOCkKWzw4MDA1MTdmYz5dIChpcnFfY3JlYXRlX21h
cHBpbmcrMHgwLzB4MTI4KSBmcm9tIFs8ODAwNTE5YTg+XSAoaXJxX2NyZWF0ZV9vZl9tYXBwaW5n
KzB4ODQvMHhmOCkKIHI3OjAwMDAwMDAzIHI2OjgwNTA4OGE4IHI1Ojg3ODJmZGQwIHI0Ojg3ODAw
NDQwCls8ODAwNTE5MjQ+XSAoaXJxX2NyZWF0ZV9vZl9tYXBwaW5nKzB4MC8weGY4KSBmcm9tIFs8
ODAyM2NkYTg+XSAoaXJxX29mX3BhcnNlX2FuZF9tYXArMHgzNC8weDNjKQogcjc6MDAwMDAwMDAg
cjY6ODA1MDg5ZGMgcjU6MDAwMDAwMDAgcjQ6MDAwMDAwMDAKWzw4MDIzY2Q3ND5dIChpcnFfb2Zf
cGFyc2VfYW5kX21hcCsweDAvMHgzYykgZnJvbSBbPDgwMjNjZGQwPl0gKG9mX2lycV90b19yZXNv
dXJjZSsweDIwLzB4N2MpCls8ODAyM2NkYjA+XSAob2ZfaXJxX3RvX3Jlc291cmNlKzB4MC8weDdj
KSBmcm9tIFs8ODAyM2NlNTg+XSAob2ZfaXJxX2NvdW50KzB4MmMvMHgzYykKIHI3OjAwMDAwMDAw
IHI2OjgwNTA4OWRjIHI1OjgwNTA4OWRjIHI0OjAwMDAwMDAwCls8ODAyM2NlMmM+XSAob2ZfaXJx
X2NvdW50KzB4MC8weDNjKSBmcm9tIFs8ODAyM2Q0MWM+XSAob2ZfZGV2aWNlX2FsbG9jKzB4NWMv
MHgxNWMpCiByNTowMDAwMDAwMCByNDowMDAwMDAwMApbPDgwMjNkM2MwPl0gKG9mX2RldmljZV9h
bGxvYysweDAvMHgxNWMpIGZyb20gWzw4MDIzZDU1OD5dIChvZl9wbGF0Zm9ybV9kZXZpY2VfY3Jl
YXRlX3BkYXRhKzB4M2MvMHg4OCkKWzw4MDIzZDUxYz5dIChvZl9wbGF0Zm9ybV9kZXZpY2VfY3Jl
YXRlX3BkYXRhKzB4MC8weDg4KSBmcm9tIFs8ODAyM2Q2Nzg+XSAob2ZfcGxhdGZvcm1fYnVzX2Ny
ZWF0ZSsweGQ0LzB4Mjc4KQogcjc6MDAwMDAwMDEgcjY6MDAwMDAwMDAgcjU6ODAzYmMxZGMgcjQ6
ODA1MDg5ZGMKWzw4MDIzZDVhND5dIChvZl9wbGF0Zm9ybV9idXNfY3JlYXRlKzB4MC8weDI3OCkg
ZnJvbSBbPDgwMjNkODg0Pl0gKG9mX3BsYXRmb3JtX3BvcHVsYXRlKzB4NjgvMHhhMCkKWzw4MDIz
ZDgxYz5dIChvZl9wbGF0Zm9ybV9wb3B1bGF0ZSsweDAvMHhhMCkgZnJvbSBbPDgwM2E2YmNjPl0g
KHYybV9kdF9pbml0KzB4MmMvMHg0YykKWzw4MDNhNmJhMD5dICh2Mm1fZHRfaW5pdCsweDAvMHg0
YykgZnJvbSBbPDgwM2EyYmVjPl0gKGN1c3RvbWl6ZV9tYWNoaW5lKzB4MjQvMHgzMCkKWzw4MDNh
MmJjOD5dIChjdXN0b21pemVfbWFjaGluZSsweDAvMHgzMCkgZnJvbSBbPDgwMDA4NjNjPl0gKGRv
X29uZV9pbml0Y2FsbCsweDQwLzB4MTg0KQpbPDgwMDA4NWZjPl0gKGRvX29uZV9pbml0Y2FsbCsw
eDAvMHgxODQpIGZyb20gWzw4MDNhMTg4MD5dIChrZXJuZWxfaW5pdCsweGYwLzB4MWFjKQpbPDgw
M2ExNzkwPl0gKGtlcm5lbF9pbml0KzB4MC8weDFhYykgZnJvbSBbPDgwMDFmN2I0Pl0gKGRvX2V4
aXQrMHgwLzB4NmJjKQotLS1bIGVuZCB0cmFjZSAxYjc1YjMxYTI3MTllZDFlIF0tLS0KU2VyaWFs
OiBBTUJBIFBMMDExIFVBUlQgZHJpdmVyCjFjMDkwMDAwLnVhcnQ6IHR0eUFNQTAgYXQgTU1JTyAw
eDFjMDkwMDAwIChpcnEgPSAzNykgaXMgYSBQTDAxMSByZXYxCjFjMGEwMDAwLnVhcnQ6IHR0eUFN
QTEgYXQgTU1JTyAweDFjMGEwMDAwIChpcnEgPSAzOCkgaXMgYSBQTDAxMSByZXYxCmNvbnNvbGUg
W3R0eUFNQTFdIGVuYWJsZWQsIGJvb3Rjb25zb2xlIGRpc2FibGVkCihYRU4pIGJhZCBwMm0gbG9v
a3VwCihYRU4pIGRvbTEgSVBBIDB4MDAwMDAwMDA5MDAwMDAwMAooWEVOKSBQMk0gQCAwMmZmY2Fj
MCBtZm46MHhmZmU1NgooWEVOKSAxU1RbMHgyXSA9IDB4MDAwMDAwMDBmM2Y2ODZmZgooWEVOKSAy
TkRbMHg4MF0gPSAweDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgYmFkIHAybSBsb29rdXAKKFhFTikg
ZG9tMSBJUEEgMHgwMDAwMDAwMDkwMDAxMDAwCihYRU4pIFAyTSBAIDAyZmZjYWMwIG1mbjoweGZm
ZTU2CihYRU4pIDFTVFsweDJdID0gMHgwMDAwMDAwMGYzZjY4NmZmCihYRU4pIDJORFsweDgwXSA9
IDB4MDAwMDAwMDAwMDAwMDAwMAooWEVOKSBiYWQgcDJtIGxvb2t1cAooWEVOKSBkb20xIElQQSAw
eDAwMDAwMDAwOTAwMDEwMDAKKFhFTikgUDJNIEAgMDJmZmNhYzAgbWZuOjB4ZmZlNTYKKFhFTikg
MVNUWzB4Ml0gPSAweDAwMDAwMDAwZjNmNjg2ZmYKKFhFTikgMk5EWzB4ODBdID0gMHgwMDAwMDAw
MDAwMDAwMDAwCihYRU4pIERPTTE6IFVuY29tcHJlc3NpbmcgTGludXguLi4gZG9uZSwgYm9vdGlu
ZyB0aGUga2VybmVsLgooWEVOKSBET00xOiBCb290aW5nIExpbnV4IG9uIHBoeXNpY2FsIENQVSAw
CihYRU4pIERPTTE6IExpbnV4IHZlcnNpb24gMy41LjAtcmM3KyAocm9vdEB0dXopIChnY2MgdmVy
c2lvbiA0LjYuMyAoVWJ1bnR1L0xpbmFybyA0LjYuMy0xdWJ1bnR1NSkgKSAjMSBNb24gQXVnIDYg
MjA6NTQ6MjQgTVNLIDIwMTIKKFhFTikgRE9NMTogQ1BVOiBBUk12NyBQcm9jZXNzb3IgWzQxMmZj
MGYwXSByZXZpc2lvbiAwIChBUk12NyksIGNyPTEwYzUzYzdkCihYRU4pIERPTTE6IENQVTogUElQ
VCAvIFZJUFQgbm9uYWxpYXNpbmcgZGF0YSBjYWNoZSwgUElQVCBpbnN0cnVjdGlvbiBjYWNoZQoo
WEVOKSBET00xOiBNYWNoaW5lOiBBUk0tVmVyc2F0aWxlIEV4cHJlc3MsIG1vZGVsOiBWMlAtQUVN
djdBCihYRU4pIERPTTE6IGJvb3Rjb25zb2xlIFtlYXJseWNvbjBdIGVuYWJsZWQKKFhFTikgRE9N
MTogTWVtb3J5IHBvbGljeTogRUNDIGRpc2FibGVkLCBEYXRhIGNhY2hlIHdyaXRlYmFjawooWEVO
KSBET00xOiBPbiBub2RlIDAgdG90YWxwYWdlczogMzI3NjgKKFhFTikgRE9NMTogZnJlZV9hcmVh
X2luaXRfbm9kZTogbm9kZSAwLCBwZ2RhdCA4MDNlYWU5NCwgbm9kZV9tZW1fbWFwIDgwNDA4MDAw
CihYRU4pIERPTTE6ICAgTm9ybWFsIHpvbmU6IDI1NiBwYWdlcyB1c2VkIGZvciBtZW1tYXAKKFhF
TikgRE9NMTogICBOb3JtYWwgem9uZTogMCBwYWdlcyByZXNlcnZlZAooWEVOKSBET00xOiAgIE5v
cm1hbCB6b25lOiAzMjUxMiBwYWdlcywgTElGTyBiYXRjaDo3CihYRU4pIERPTTE6IC0tLS0tLS0t
LS0tLVsgY3V0IGhlcmUgXS0tLS0tLS0tLS0tLQooWEVOKSBET00xOiBXQVJOSU5HOiBhdCBhcmNo
L2FybS9tYWNoLXZleHByZXNzL3YybS5jOjYwMyB2Mm1fZHRfaW5pdF9lYXJseSsweDQ0LzB4ZWMo
KQooWEVOKSBET00xOiBNb2R1bGVzIGxpbmtlZCBpbjoKKFhFTikgRE9NMTogQmFja3RyYWNlOiAK
KFhFTikgRE9NMTogWzw4MDAxMWIwYz5dIChkdW1wX2JhY2t0cmFjZSsweDAvMHgxMGMpIGZyb20g
Wzw4MDJkN2VkMD5dIChkdW1wX3N0YWNrKzB4MTgvMHgxYykKKFhFTikgRE9NMTogIHI2OjAwMDAw
MjViIHI1OjgwM2E2ZGNjIHI0OjAwMDAwMDAwIHIzOjgwM2NmOTNjCihYRU4pIERPTTE6IFs8ODAy
ZDdlYjg+XSAoZHVtcF9zdGFjaysweDAvMHgxYykgZnJvbSBbPDgwMDFiMWRjPl0gKHdhcm5fc2xv
d3BhdGhfY29tbW9uKzB4NTQvMHg2YykKKFhFTikgRE9NMTogWzw4MDAxYjE4OD5dICh3YXJuX3Ns
b3dwYXRoX2NvbW1vbisweDAvMHg2YykgZnJvbSBbPDgwMDFiMjE4Pl0gKHdhcm5fc2xvd3BhdGhf
bnVsbCsweDI0LzB4MmMpCihYRU4pIERPTTE6ICByODo4MDNjZDMzOCByNzo4MDUwODQ0MCByNjo4
MDAwMDIwMCByNTo4MDNmM2I4OCByNDo4MDNlYzAzOAooWEVOKSBET00xOiByMzowMDAwMDAwOQoK
KFhFTikgRE9NMTogWzw4MDAxYjFmND5dICh3YXJuX3Nsb3dwYXRoX251bGwrMHgwLzB4MmMpIGZy
b20gWzw4MDNhNmRjYz5dICh2Mm1fZHRfaW5pdF9lYXJseSsweDQ0LzB4ZWMpCihYRU4pIERPTTE6
IFs8ODAzYTZkODg+XSAodjJtX2R0X2luaXRfZWFybHkrMHgwLzB4ZWMpIGZyb20gWzw4MDNhMzY3
Yz5dIChzZXR1cF9hcmNoKzB4NzEwLzB4N2ZjKQooWEVOKSBET00xOiAgcjQ6ODAzYmE5MjgKKFhF
TikgRE9NMTogWzw4MDNhMmY2Yz5dIChzZXR1cF9hcmNoKzB4MC8weDdmYykgZnJvbSBbPDgwM2Ex
NTljPl0gKHN0YXJ0X2tlcm5lbCsweDc4LzB4MjZjKQooWEVOKSBET00xOiBbPDgwM2ExNTI0Pl0g
KHN0YXJ0X2tlcm5lbCsweDAvMHgyNmMpIGZyb20gWzw4MDAwODA0MD5dICgweDgwMDA4MDQwKQoo
WEVOKSBET00xOiAgcjc6ODAzY2QyODQgcjY6ODAzYmJjY2MgcjU6ODAzY2EwNTQgcjQ6MTBjNTNj
N2QKKFhFTikgRE9NMTogLS0tWyBlbmQgdHJhY2UgMWI3NWIzMWEyNzE5ZWQxYyBdLS0tCihYRU4p
IERPTTE6IHBjcHUtYWxsb2M6IHMwIHIwIGQzMjc2OCB1MzI3NjggYWxsb2M9MSozMjc2OAooWEVO
KSBET00xOiBwY3B1LWFsbG9jOiBbMF0gMCAKKFhFTikgRE9NMTogQnVpbHQgMSB6b25lbGlzdHMg
aW4gWm9uZSBvcmRlciwgbW9iaWxpdHkgZ3JvdXBpbmcgb24uICBUb3RhbCBwYWdlczogMzI1MTIK
KFhFTikgRE9NMTogS2VybmVsIGNvbW1hbmQgbGluZTogZWFybHlwcmludGsgZGVidWcgbG9nbGV2
ZWw9OSBjb25zb2xlPWh2YzAgcm9vdD0vZGV2L3h2ZGEgaW5pdD0vc2Jpbi9pbml0CihYRU4pIERP
TTE6IFBJRCBoYXNoIHRhYmxlIGVudHJpZXM6IDUxMiAob3JkZXI6IC0xLCAyMDQ4IGJ5dGVzKQoo
WEVOKSBET00xOiBEZW50cnkgY2FjaGUgaGFzaCB0YWJsZSBlbnRyaWVzOiAxNjM4NCAob3JkZXI6
IDQsIDY1NTM2IGJ5dGVzKQooWEVOKSBET00xOiBJbm9kZS1jYWNoZSBoYXNoIHRhYmxlIGVudHJp
ZXM6IDgxOTIgKG9yZGVyOiAzLCAzMjc2OCBieXRlcykKKFhFTikgRE9NMTogTWVtb3J5OiAxMjhN
QiA9IDEyOE1CIHRvdGFsCihYRU4pIERPTTE6IE1lbW9yeTogMTI1Nzcyay8xMjU3NzJrIGF2YWls
YWJsZSwgNTMwMGsgcmVzZXJ2ZWQsIDBLIGhpZ2htZW0KKFhFTikgRE9NMTogVmlydHVhbCBrZXJu
ZWwgbWVtb3J5IGxheW91dDoKKFhFTikgRE9NMTogICAgIHZlY3RvciAgOiAweGZmZmYwMDAwIC0g
MHhmZmZmMTAwMCAgICggICA0IGtCKQooWEVOKSBET00xOiAgICAgZml4bWFwICA6IDB4ZmZmMDAw
MDAgLSAweGZmZmUwMDAwICAgKCA4OTYga0IpCihYRU4pIERPTTE6ICAgICB2bWFsbG9jIDogMHg4
ODgwMDAwMCAtIDB4ZmYwMDAwMDAgICAoMTg5NiBNQikKKFhFTikgRE9NMTogICAgIGxvd21lbSAg
OiAweDgwMDAwMDAwIC0gMHg4ODAwMDAwMCAgICggMTI4IE1CKQooWEVOKSBET00xOiAgICAgbW9k
dWxlcyA6IDB4N2YwMDAwMDAgLSAweDgwMDAwMDAwICAgKCAgMTYgTUIpCihYRU4pIERPTTE6ICAg
ICAgIC50ZXh0IDogMHg4MDAwODAwMCAtIDB4ODAzYTEwMDAgICAoMzY4NCBrQikKKFhFTikgRE9N
MTogICAgICAgLmluaXQgOiAweDgwM2ExMDAwIC0gMHg4MDNjMTU2OCAgICggMTMwIGtCKQooWEVO
KSBET00xOiAgICAgICAuZGF0YSA6IDB4ODAzYzIwMDAgLSAweDgwM2ViNWMwICAgKCAxNjYga0Ip
CihYRU4pIERPTTE6ICAgICAgICAuYnNzIDogMHg4MDNlYjVlNCAtIDB4ODA0MDcxNjQgICAoIDEx
MSBrQikKKFhFTikgRE9NMTogU0xVQjogR2Vuc2xhYnM9MTEsIEhXYWxpZ249NjQsIE9yZGVyPTAt
MywgTWluT2JqZWN0cz0wLCBDUFVzPTEsIE5vZGVzPTEKKFhFTikgRE9NMTogTlJfSVJRUzoyNTYK
KFhFTikgRE9NMTogLS0tLS0tLS0tLS0tWyBjdXQgaGVyZSBdLS0tLS0tLS0tLS0tCihYRU4pIERP
TTE6IFdBUk5JTkc6IGF0IGFyY2gvYXJtL21hY2gtdmV4cHJlc3MvdjJtLmM6NjIgdjJtX3N5c2N0
bF9pbml0KzB4MjAvMHg1OCgpCihYRU4pIERPTTE6IE1vZHVsZXMgbGlua2VkIGluOgooWEVOKSBE
T00xOiBCYWNrdHJhY2U6IAooWEVOKSBET00xOiBbPDgwMDExYjBjPl0gKGR1bXBfYmFja3RyYWNl
KzB4MC8weDEwYykgZnJvbSBbPDgwMmQ3ZWQwPl0gKGR1bXBfc3RhY2srMHgxOC8weDFjKQooWEVO
KSBET00xOiAgcjY6MDAwMDAwM2UgcjU6ODAzYTY5YmMgcjQ6MDAwMDAwMDAgcjM6ODAzY2Y5M2MK
KFhFTikgRE9NMTogWzw4MDJkN2ViOD5dIChkdW1wX3N0YWNrKzB4MC8weDFjKSBmcm9tIFs8ODAw
MWIxZGM+XSAod2Fybl9zbG93cGF0aF9jb21tb24rMHg1NC8weDZjKQooWEVOKSBET00xOiBbPDgw
MDFiMTg4Pl0gKHdhcm5fc2xvd3BhdGhfY29tbW9uKzB4MC8weDZjKSBmcm9tIFs8ODAwMWIyMTg+
XSAod2Fybl9zbG93cGF0aF9udWxsKzB4MjQvMHgyYykKKFhFTikgRE9NMTogIHI4OjgwMDA0MDU5
IHI3OjgwNTA4YmMwIHI2OjgwM2JiY2QwIHI1OjgwM2ViNjAwIHI0OjAwMDAwMDAwCihYRU4pIERP
TTE6IHIzOjAwMDAwMDA5CihYRU4pIERPTTE6IFs8ODAwMWIxZjQ+XSAod2Fybl9zbG93cGF0aF9u
dWxsKzB4MC8weDJjKSBmcm9tIFs8ODAzYTY5YmM+XSAodjJtX3N5c2N0bF9pbml0KzB4MjAvMHg1
OCkKKFhFTikgRE9NMTogWzw4MDNhNjk5Yz5dICh2Mm1fc3lzY3RsX2luaXQrMHgwLzB4NTgpIGZy
b20gWzw4MDNhNmFiYz5dICh2Mm1fZHRfdGltZXJfaW5pdCsweDJjLzB4Y2MpCihYRU4pIERPTTE6
ICByNTo4MDNlYjYwMCByNDpmZmZmZmZmZgooWEVOKSBET00xOiBbPDgwM2E2YTkwPl0gKHYybV9k
dF90aW1lcl9pbml0KzB4MC8weGNjKSBmcm9tIFs8ODAzYTM4MTA+XSAodGltZV9pbml0KzB4Mjgv
MHgzOCkKKFhFTikgRE9NMTogIHI2OjgwM2JiY2QwIHI1OjgwM2ViNjAwIHI0OmZmZmZmZmZmCihY
RU4pIERPTTE6IFs8ODAzYTM3ZTg+XSAodGltZV9pbml0KzB4MC8weDM4KSBmcm9tIFs8ODAzYTE2
YWM+XSAoc3RhcnRfa2VybmVsKzB4MTg4LzB4MjZjKQooWEVOKSBET00xOiBbPDgwM2ExNTI0Pl0g
KHN0YXJ0X2tlcm5lbCsweDAvMHgyNmMpIGZyb20gWzw4MDAwODA0MD5dICgweDgwMDA4MDQwKQoo
WEVOKSBET00xOiAgcjc6ODAzY2QyODQgcjY6ODAzYmJjY2MgcjU6ODAzY2EwNTQgcjQ6MTBjNTNj
N2QKKFhFTikgRE9NMTogLS0tWyBlbmQgdHJhY2UgMWI3NWIzMWEyNzE5ZWQxZCBdLS0tCihYRU4p
IERPTTE6IGFyY2hfdGltZXI6IGZvdW5kIHRpbWVyIGlycXMgMjkgMzAKKFhFTikgRE9NMTogQXJj
aGl0ZWN0ZWQgbG9jYWwgdGltZXIgcnVubmluZyBhdCAxMDAuMDBNSHouCihYRU4pIERPTTE6IHNj
aGVkX2Nsb2NrOiAzMiBiaXRzIGF0IDEwME1IeiwgcmVzb2x1dGlvbiAxMG5zLCB3cmFwcyBldmVy
eSA0Mjk0OW1zCihYRU4pIERPTTE6IC0tLS0tLS0tLS0tLVsgY3V0IGhlcmUgXS0tLS0tLS0tLS0t
LQooWEVOKSBET00xOiBXQVJOSU5HOiBhdCBhcmNoL2FybS9tYWNoLXZleHByZXNzL3YybS5jOjY0
NyB2Mm1fZHRfdGltZXJfaW5pdCsweDdjLzB4Y2MoKQooWEVOKSBET00xOiBNb2R1bGVzIGxpbmtl
ZCBpbjoKKFhFTikgRE9NMTogQmFja3RyYWNlOiAKKFhFTikgRE9NMTogWzw4MDAxMWIwYz5dIChk
dW1wX2JhY2t0cmFjZSsweDAvMHgxMGMpIGZyb20gWzw4MDJkN2VkMD5dIChkdW1wX3N0YWNrKzB4
MTgvMHgxYykKKFhFTikgRE9NMTogIHI2OjAwMDAwMjg3IHI1OjgwM2E2YjBjIHI0OjAwMDAwMDAw
IHIzOjgwM2NmOTNjCihYRU4pIERPTTE6IFs8ODAyZDdlYjg+XSAoZHVtcF9zdGFjaysweDAvMHgx
YykgZnJvbSBbPDgwMDFiMWRjPl0gKHdhcm5fc2xvd3BhdGhfY29tbW9uKzB4NTQvMHg2YykKKFhF
TikgRE9NMTogWzw4MDAxYjE4OD5dICh3YXJuX3Nsb3dwYXRoX2NvbW1vbisweDAvMHg2YykgZnJv
bSBbPDgwMDFiMjE4Pl0gKHdhcm5fc2xvd3BhdGhfbnVsbCsweDI0LzB4MmMpCihYRU4pIERPTTE6
ICByODo4MDAwNDA1OSByNzo4MDUwOGJjMCByNjo4MDNiYmNkMCByNTo4MDNlYjYwMCByNDpmZmZm
ZmZlYQooWEVOKSBET00xOiByMzowMDAwMDAwOQoKKFhFTikgRE9NMTogWzw4MDAxYjFmND5dICh3
YXJuX3Nsb3dwYXRoX251bGwrMHgwLzB4MmMpIGZyb20gWzw4MDNhNmIwYz5dICh2Mm1fZHRfdGlt
ZXJfaW5pdCsweDdjLzB4Y2MpCihYRU4pIERPTTE6IFs8ODAzYTZhOTA+XSAodjJtX2R0X3RpbWVy
X2luaXQrMHgwLzB4Y2MpIGZyb20gWzw4MDNhMzgxMD5dICh0aW1lX2luaXQrMHgyOC8weDM4KQoo
WEVOKSBET00xOiAgcjY6ODAzYmJjZDAgcjU6ODAzZWI2MDAgcjQ6ZmZmZmZmZmYKKFhFTikgRE9N
MTogWzw4MDNhMzdlOD5dICh0aW1lX2luaXQrMHgwLzB4MzgpIGZyb20gWzw4MDNhMTZhYz5dIChz
dGFydF9rZXJuZWwrMHgxODgvMHgyNmMpCihYRU4pIERPTTE6IFs8ODAzYTE1MjQ+XSAoc3RhcnRf
a2VybmVsKzB4MC8weDI2YykgZnJvbSBbPDgwMDA4MDQwPl0gKDB4ODAwMDgwNDApCihYRU4pIERP
TTE6ICByNzo4MDNjZDI4NCByNjo4MDNiYmNjYyByNTo4MDNjYTA1NCByNDoxMGM1M2M3ZAooWEVO
KSBET00xOiAtLS1bIGVuZCB0cmFjZSAxYjc1YjMxYTI3MTllZDFlIF0tLS0KKFhFTikgRE9NMTog
Q29uc29sZTogY29sb3VyIGR1bW15IGRldmljZSA4MHgzMAooWEVOKSBET00xOiBDYWxpYnJhdGlu
ZyBkZWxheSBsb29wLi4uIDk4LjcxIEJvZ29NSVBTIChscGo9NDkzNTY4KQooWEVOKSBET00xOiBw
aWRfbWF4OiBkZWZhdWx0OiAzMjc2OCBtaW5pbXVtOiAzMDEKKFhFTikgRE9NMTogTW91bnQtY2Fj
aGUgaGFzaCB0YWJsZSBlbnRyaWVzOiA1MTIKKFhFTikgRE9NMTogQ1BVOiBUZXN0aW5nIHdyaXRl
IGJ1ZmZlciBjb2hlcmVuY3k6IG9rCihYRU4pIERPTTE6IFNldHRpbmcgdXAgc3RhdGljIGlkZW50
aXR5IG1hcCBmb3IgMHg4MDJkYzEzOCAtIDB4ODAyZGMxNmMKKFhFTikgRE9NMTogWGVuIHN1cHBv
cnQgZm91bmQsIGV2ZW50c19pcnE9MzEgZ250dGFiX2ZyYW1lX3Bmbj1iMDAwMAooWEVOKSBET00x
OiBHcmFudCB0YWJsZXMgdXNpbmcgdmVyc2lvbiAxIGxheW91dC4KKFhFTikgRE9NMTogR3JhbnQg
dGFibGUgaW5pdGlhbGl6ZWQKKFhFTikgRE9NMTogTkVUOiBSZWdpc3RlcmVkIHByb3RvY29sIGZh
bWlseSAxNgooWEVOKSBHdWVzdCBkYXRhIGFib3J0OiBUcmFuc2xhdGlvbiBmYXVsdCBhdCBsZXZl
bCAyCihYRU4pICAgICBndmE9ODg4MDg4MDQKKFhFTikgICAgIGdwYT0wMDAwMDAwMDkwMDAxODA0
CihYRU4pICAgICBzaXplPTIgc2lnbj0wIHdyaXRlPTAgcmVnPTIKKFhFTikgICAgIGVhdD0wIGNt
PTAgczFwdHc9MCBkZnNjPTYKKFhFTikgZG9tMSBJUEEgMHgwMDAwMDAwMDkwMDAxODA0CihYRU4p
IFAyTSBAIDAyZmZjYWMwIG1mbjoweGZmZTU2CihYRU4pIDFTVFsweDJdID0gMHgwMDAwMDAwMGYz
ZjY4NmZmCihYRU4pIDJORFsweDgwXSA9IDB4MDAwMDAwMDAwMDAwMDAwMAooWEVOKSAtLS0tWyBY
ZW4tNC4yLjAtcmMyLXByZSAgeDg2XzY0ICBkZWJ1Zz15ICBOb3QgdGFpbnRlZCBdLS0tLQooWEVO
KSBDUFU6ICAgIDAKKFhFTikgUEM6ICAgICA4MDE3NTYwNAooWEVOKSBDUFNSOiAgIDIwMDAwMDEz
IE1PREU6U1ZDCihYRU4pICAgICAgUjA6IDgwM2ZmYWE4IFIxOiA4MDM2ODBjYyBSMjogODAzZmZh
YzAgUjM6IDgwM2ZmYWM4CihYRU4pICAgICAgUjQ6IDg4ODA4MDAwIFI1OiA4MDNmZmFjMCBSNjog
MDAwMDdmZjAgUjc6IDAwMDAwMDAxCihYRU4pICAgICAgUjg6IDgwM2ExMjVjIFI5OiA4MDNjMTIy
YyBSMTA6ODAzZWI2MDAgUjExOjg3ODJkZjA0IFIxMjo4NzgyZGYwOAooWEVOKSBVU1I6IFNQOiAw
MDAwMDAwMCBMUjogMDAwMDAwMDAgQ1BTUjoyMDAwMDAxMwooWEVOKSBTVkM6IFNQOiA4NzgyZGVl
MCBMUjogODAxNzY5NzAgU1BTUjowMDAwMDA5MwooWEVOKSBBQlQ6IFNQOiA4MDNlYmQ4YyBMUjog
ODAzZWJkOGMgU1BTUjowMDAwMDAwMAooWEVOKSBVTkQ6IFNQOiA4MDNlYmQ5OCBMUjogODAzZWJk
OTggU1BTUjowMDAwMDAwMAooWEVOKSBJUlE6IFNQOiA4MDNlYmQ4MCBMUjogODAwMGRmYzAgU1BT
Ujo2MDAwMDE5MwooWEVOKSBGSVE6IFNQOiAwMDAwMDAwMCBMUjogMDAwMDAwMDAgU1BTUjowMDAw
MDAwMAooWEVOKSBGSVE6IFI4OiAwMDAwMDAwMCBSOTogMDAwMDAwMDAgUjEwOjAwMDAwMDAwIFIx
MTowMDAwMDAwMCBSMTI6MDAwMDAwMDAKKFhFTikgCihYRU4pIFRUQlIwIDgwMDA0MDU5IFRUQlIx
IDgwMDA0MDU5IFRUQkNSIDAwMDAwMDAwCihYRU4pIFNDVExSIDEwYzUzYzdkCihYRU4pIFZUVEJS
IDIwMDAwZmZlNTYwMDAKKFhFTikgCihYRU4pIEhUVEJSIGZmZWMyMDAwCihYRU4pIEhERkFSIDg4
ODA4ODA0CihYRU4pIEhJRkFSIDAKKFhFTikgSFBGQVIgOTAwMDEwCihYRU4pIEhDUiAwMDAwMDgz
NQooWEVOKSBIU1IgICA5MzgyMDAwNgooWEVOKSAKKFhFTikgREZTUiAwIERGQVIgMAooWEVOKSBJ
RlNSIDAgSUZBUiAwCihYRU4pIAooWEVOKSBHVUVTVCBTVEFDSyBHT0VTIEhFUkUKKFhFTikgCihY
RU4pICoqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioKKFhFTikgUGFuaWMg
b24gQ1BVIDA6CihYRU4pIFVuaGFuZGxlZCBndWVzdCBkYXRhIGFib3J0CihYRU4pICoqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioKKFhFTikgCihYRU4pIFJlYm9vdCBpbiBm
aXZlIHNlY29uZHMuLi4KCg==
--e89a8f3bafff4e2c0804c6beb06b
Content-Type: application/octet-stream; 
	name="roots-console_xcbuild-simple-DomU-A15x1_08082012.log"
Content-Disposition: attachment; 
	filename="roots-console_xcbuild-simple-DomU-A15x1_08082012.log"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_h5ma1v573

KFhFTikgRE9NMTogVW5jb21wcmVzc2luZyBMaW51eC4uLiBkb25lLCBib290aW5nIHRoZSBrZXJu
ZWwuCihYRU4pIERPTTE6IEJvb3RpbmcgTGludXggb24gcGh5c2ljYWwgQ1BVIDAKKFhFTikgRE9N
MTogTGludXggdmVyc2lvbiAzLjUuMC1yYzcrIChyb290QHR1eikgKGdjYyB2ZXJzaW9uIDQuNi4z
IChVYnVudHUvTGluYXJvIDQuNi4zLTF1YnVudHU1KSApICM5IFR1ZSBBdWcgNyAxOToxOToxMyBN
U0sgMjAxMgooWEVOKSBET00xOiBDUFU6IEFSTXY3IFByb2Nlc3NvciBbNDEyZmMwZjBdIHJldmlz
aW9uIDAgKEFSTXY3KSwgY3I9MTBjNTNjN2QKKFhFTikgRE9NMTogQ1BVOiBQSVBUIC8gVklQVCBu
b25hbGlhc2luZyBkYXRhIGNhY2hlLCBQSVBUIGluc3RydWN0aW9uIGNhY2hlCihYRU4pIERPTTE6
IE1hY2hpbmU6IEFSTS1WZXJzYXRpbGUgRXhwcmVzcywgbW9kZWw6IFYyUC1BRU12N0EKKFhFTikg
RE9NMTogYm9vdGNvbnNvbGUgW2Vhcmx5Y29uMF0gZW5hYmxlZAooWEVOKSBET00xOiBkZWJ1Zzog
c2tpcCBib290IGNvbnNvbGUgZGUtcmVnaXN0cmF0aW9uLgooWEVOKSBET00xOiBNZW1vcnkgcG9s
aWN5OiBFQ0MgZGlzYWJsZWQsIERhdGEgY2FjaGUgd3JpdGViYWNrCihYRU4pIERPTTE6IE9uIG5v
ZGUgMCB0b3RhbHBhZ2VzOiAzMjc2OAooWEVOKSBET00xOiBmcmVlX2FyZWFfaW5pdF9ub2RlOiBu
b2RlIDAsIHBnZGF0IDgwM2UyYjk0LCBub2RlX21lbV9tYXAgODAzZmYwMDAKKFhFTikgRE9NMTog
Tm9ybWFsIHpvbmU6IDI1NiBwYWdlcyB1c2VkIGZvciBtZW1tYXAKKFhFTikgRE9NMTogTm9ybWFs
IHpvbmU6IDAgcGFnZXMgcmVzZXJ2ZWQKKFhFTikgRE9NMTogTm9ybWFsIHpvbmU6IDMyNTEyIHBh
Z2VzLCBMSUZPIGJhdGNoOjcKKFhFTikgRE9NMTogLS0tLS0tLS0tLS0tWyBjdXQgaGVyZSBdLS0t
LS0tLS0tLS0tCihYRU4pIERPTTE6IFdBUk5JTkc6IGF0IGFyY2gvYXJtL21hY2gtdmV4cHJlc3Mv
djJtLmM6NjAzIHYybV9kdF9pbml0X2Vhcmx5KzB4NDQvMHhlYygpCihYRU4pIERPTTE6IE1vZHVs
ZXMgbGlua2VkIGluOgooWEVOKSBET00xOiBCYWNrdHJhY2U6CihYRU4pIERPTTE6IFs8ODAwMTFi
MGM+XSAoZHVtcF9iYWNrdHJhY2UrMHgwLzB4MTBjKSBmcm9tIFs8ODAyZDA2NTg+XSAoZHVtcF9z
dGFjaysweDE4LzB4MWMpCihYRU4pIERPTTE6IHI2OjAwMDAwMjViIHI1OjgwMzllZGNjIHI0OjAw
MDAwMDAwIHIzOjgwM2M3OTNjCihYRU4pIERPTTE6IFs8ODAyZDA2NDA+XSAoZHVtcF9zdGFjaysw
eDAvMHgxYykgZnJvbSBbPDgwMDFiMWRjPl0gKHdhcm5fc2xvd3BhdGhfY29tbW9uKzB4NTQvMHg2
YykKKFhFTikgRE9NMTogWzw4MDAxYjE4OD5dICh3YXJuX3Nsb3dwYXRoX2NvbW1vbisweDAvMHg2
YykgZnJvbSBbPDgwMDFiMjE4Pl0gKHdhcm5fc2xvd3BhdGhfbnVsbCsweDI0LzB4MmMpCihYRU4p
IERPTTE6IHI4OjgwM2M1MzM4IHI3OjgwNGZmNDQwIHI2OjgwMDAwMjAwIHI1OjgwM2ViODQ4IHI0
OjgwM2UzZDM4CihYRU4pIERPTTE6IHIzOjAwMDAwMDA5CihYRU4pIERPTTE6IFs8ODAwMWIxZjQ+
XSAod2Fybl9zbG93cGF0aF9udWxsKzB4MC8weDJjKSBmcm9tIFs8ODAzOWVkY2M+XSAodjJtX2R0
X2luaXRfZWFybHkrMHg0NC8weGVjKQooWEVOKSBET00xOiBbPDgwMzllZDg4Pl0gKHYybV9kdF9p
bml0X2Vhcmx5KzB4MC8weGVjKSBmcm9tIFs8ODAzOWI2N2M+XSAoc2V0dXBfYXJjaCsweDcxMC8w
eDdmYykKKFhFTikgRE9NMTogcjQ6ODAzYjIzYzQKKFhFTikgRE9NMTogWzw4MDM5YWY2Yz5dIChz
ZXR1cF9hcmNoKzB4MC8weDdmYykgZnJvbSBbPDgwMzk5NTljPl0gKHN0YXJ0X2tlcm5lbCsweDc4
LzB4MjZjKQooWEVOKSBET00xOiBbPDgwMzk5NTI0Pl0gKHN0YXJ0X2tlcm5lbCsweDAvMHgyNmMp
IGZyb20gWzw4MDAwODA0MD5dICgweDgwMDA4MDQwKQooWEVOKSBET00xOiByNzo4MDNjNTI4NCBy
Njo4MDNiMzc2OCByNTo4MDNjMjA1NCByNDoxMGM1M2M3ZAooWEVOKSBET00xOiAtLS1bIGVuZCB0
cmFjZSAxYjc1YjMxYTI3MTllZDFjIF0tLS0KKFhFTikgRE9NMTogcGNwdS1hbGxvYzogczAgcjAg
ZDMyNzY4IHUzMjc2OCBhbGxvYz0xKjMyNzY4CihYRU4pIERPTTE6IHBjcHUtYWxsb2M6IFswXSAw
CihYRU4pIERPTTE6IEJ1aWx0IDEgem9uZWxpc3RzIGluIFpvbmUgb3JkZXIsIG1vYmlsaXR5IGdy
b3VwaW5nIG9uLiBUb3RhbCBwYWdlczogMzI1MTIKKFhFTikgRE9NMTogS2VybmVsIGNvbW1hbmQg
bGluZTogZWFybHlwcmludGsgZGVidWcgbG9nbGV2ZWw9OSBrZWVwX2Jvb3Rjb24gY29uc29sZT1o
dmMwIHJvb3Q9L2Rldi94dmRhIGluaXQ9L3NiaW4vaW5pdAooWEVOKSBET00xOiBQSUQgaGFzaCB0
YWJsZSBlbnRyaWVzOiA1MTIgKG9yZGVyOiAtMSwgMjA0OCBieXRlcykKKFhFTikgRE9NMTogRGVu
dHJ5IGNhY2hlIGhhc2ggdGFibGUgZW50cmllczogMTYzODQgKG9yZGVyOiA0LCA2NTUzNiBieXRl
cykKKFhFTikgRE9NMTogSW5vZGUtY2FjaGUgaGFzaCB0YWJsZSBlbnRyaWVzOiA4MTkyIChvcmRl
cjogMywgMzI3NjggYnl0ZXMpCihYRU4pIERPTTE6IE1lbW9yeTogMTI4TUIgPSAxMjhNQiB0b3Rh
bAooWEVOKSBET00xOiBNZW1vcnk6IDEyNTgxMmsvMTI1ODEyayBhdmFpbGFibGUsIDUyNjBrIHJl
c2VydmVkLCAwSyBoaWdobWVtCihYRU4pIERPTTE6IFZpcnR1YWwga2VybmVsIG1lbW9yeSBsYXlv
dXQ6CihYRU4pIERPTTE6IHZlY3RvciA6IDB4ZmZmZjAwMDAgLSAweGZmZmYxMDAwICggNCBrQikK
KFhFTikgRE9NMTogZml4bWFwIDogMHhmZmYwMDAwMCAtIDB4ZmZmZTAwMDAgKCA4OTYga0IpCihY
RU4pIERPTTE6IHZtYWxsb2MgOiAweDg4ODAwMDAwIC0gMHhmZjAwMDAwMCAoMTg5NiBNQikKKFhF
TikgRE9NMTogbG93bWVtIDogMHg4MDAwMDAwMCAtIDB4ODgwMDAwMDAgKCAxMjggTUIpCihYRU4p
IERPTTE6IG1vZHVsZXMgOiAweDdmMDAwMDAwIC0gMHg4MDAwMDAwMCAoIDE2IE1CKQooWEVOKSBE
T00xOiAudGV4dCA6IDB4ODAwMDgwMDAgLSAweDgwMzk5MDAwICgzNjUyIGtCKQooWEVOKSBET00x
OiAuaW5pdCA6IDB4ODAzOTkwMDAgLSAweDgwM2I4ZmQ4ICggMTI4IGtCKQooWEVOKSBET00xOiAu
ZGF0YSA6IDB4ODAzYmEwMDAgLSAweDgwM2UzMmMwICggMTY1IGtCKQooWEVOKSBET00xOiAuYnNz
IDogMHg4MDNlMzJlNCAtIDB4ODAzZmVkYTQgKCAxMTEga0IpCihYRU4pIERPTTE6IFNMVUI6IEdl
bnNsYWJzPTExLCBIV2FsaWduPTY0LCBPcmRlcj0wLTMsIE1pbk9iamVjdHM9MCwgQ1BVcz0xLCBO
b2Rlcz0xCihYRU4pIERPTTE6IE5SX0lSUVM6MjU2CihYRU4pIERPTTE6IC0tLS0tLS0tLS0tLVsg
Y3V0IGhlcmUgXS0tLS0tLS0tLS0tLQooWEVOKSBET00xOiBXQVJOSU5HOiBhdCBhcmNoL2FybS9t
YWNoLXZleHByZXNzL3YybS5jOjYyIHYybV9zeXNjdGxfaW5pdCsweDIwLzB4NTgoKQooWEVOKSBE
T00xOiBNb2R1bGVzIGxpbmtlZCBpbjoKKFhFTikgRE9NMTogQmFja3RyYWNlOgooWEVOKSBET00x
OiBbPDgwMDExYjBjPl0gKGR1bXBfYmFja3RyYWNlKzB4MC8weDEwYykgZnJvbSBbPDgwMmQwNjU4
Pl0gKGR1bXBfc3RhY2srMHgxOC8weDFjKQooWEVOKSBET00xOiByNjowMDAwMDAzZSByNTo4MDM5
ZTliYyByNDowMDAwMDAwMCByMzo4MDNjNzkzYwooWEVOKSBET00xOiBbPDgwMmQwNjQwPl0gKGR1
bXBfc3RhY2srMHgwLzB4MWMpIGZyb20gWzw4MDAxYjFkYz5dICh3YXJuX3Nsb3dwYXRoX2NvbW1v
bisweDU0LzB4NmMpCihYRU4pIERPTTE6IFs8ODAwMWIxODg+XSAod2Fybl9zbG93cGF0aF9jb21t
b24rMHgwLzB4NmMpIGZyb20gWzw4MDAxYjIxOD5dICh3YXJuX3Nsb3dwYXRoX251bGwrMHgyNC8w
eDJjKQooWEVOKSBET00xOiByODo4MDAwNDA1OSByNzo4MDRmZmJjMCByNjo4MDNiMzc2YyByNTo4
MDNlMzMwMCByNDowMDAwMDAwMAooWEVOKSBET00xOiByMzowMDAwMDAwOQooWEVOKSBET00xOiBb
PDgwMDFiMWY0Pl0gKHdhcm5fc2xvd3BhdGhfbnVsbCsweDAvMHgyYykgZnJvbSBbPDgwMzllOWJj
Pl0gKHYybV9zeXNjdGxfaW5pdCsweDIwLzB4NTgpCihYRU4pIERPTTE6IFs8ODAzOWU5OWM+XSAo
djJtX3N5c2N0bF9pbml0KzB4MC8weDU4KSBmcm9tIFs8ODAzOWVhYmM+XSAodjJtX2R0X3RpbWVy
X2luaXQrMHgyYy8weGNjKQooWEVOKSBET00xOiByNTo4MDNlMzMwMCByNDpmZmZmZmZmZgooWEVO
KSBET00xOiBbPDgwMzllYTkwPl0gKHYybV9kdF90aW1lcl9pbml0KzB4MC8weGNjKSBmcm9tIFs8
ODAzOWI4MTA+XSAodGltZV9pbml0KzB4MjgvMHgzOCkKKFhFTikgRE9NMTogcjY6ODAzYjM3NmMg
cjU6ODAzZTMzMDAgcjQ6ZmZmZmZmZmYKKFhFTikgRE9NMTogWzw4MDM5YjdlOD5dICh0aW1lX2lu
aXQrMHgwLzB4MzgpIGZyb20gWzw4MDM5OTZhYz5dIChzdGFydF9rZXJuZWwrMHgxODgvMHgyNmMp
CihYRU4pIERPTTE6IFs8ODAzOTk1MjQ+XSAoc3RhcnRfa2VybmVsKzB4MC8weDI2YykgZnJvbSBb
PDgwMDA4MDQwPl0gKDB4ODAwMDgwNDApCihYRU4pIERPTTE6IHI3OjgwM2M1Mjg0IHI2OjgwM2Iz
NzY4IHI1OjgwM2MyMDU0IHI0OjEwYzUzYzdkCihYRU4pIERPTTE6IC0tLVsgZW5kIHRyYWNlIDFi
NzViMzFhMjcxOWVkMWQgXS0tLQooWEVOKSBET00xOiBhcmNoX3RpbWVyOiBmb3VuZCB0aW1lciBp
cnFzIDI5IDMwCihYRU4pIERPTTE6IEFyY2hpdGVjdGVkIGxvY2FsIHRpbWVyIHJ1bm5pbmcgYXQg
MTAwLjAwTUh6LgooWEVOKSBET00xOiBzY2hlZF9jbG9jazogMzIgYml0cyBhdCAxMDBNSHosIHJl
c29sdXRpb24gMTBucywgd3JhcHMgZXZlcnkgNDI5NDltcwooWEVOKSBET00xOiAtLS0tLS0tLS0t
LS1bIGN1dCBoZXJlIF0tLS0tLS0tLS0tLS0KKFhFTikgRE9NMTogV0FSTklORzogYXQgYXJjaC9h
cm0vbWFjaC12ZXhwcmVzcy92Mm0uYzo2NDcgdjJtX2R0X3RpbWVyX2luaXQrMHg3Yy8weGNjKCkK
KFhFTikgRE9NMTogTW9kdWxlcyBsaW5rZWQgaW46CihYRU4pIERPTTE6IEJhY2t0cmFjZToKKFhF
TikgRE9NMTogWzw4MDAxMWIwYz5dIChkdW1wX2JhY2t0cmFjZSsweDAvMHgxMGMpIGZyb20gWzw4
MDJkMDY1OD5dIChkdW1wX3N0YWNrKzB4MTgvMHgxYykKKFhFTikgRE9NMTogcjY6MDAwMDAyODcg
cjU6ODAzOWViMGMgcjQ6MDAwMDAwMDAgcjM6ODAzYzc5M2MKKFhFTikgRE9NMTogWzw4MDJkMDY0
MD5dIChkdW1wX3N0YWNrKzB4MC8weDFjKSBmcm9tIFs8ODAwMWIxZGM+XSAod2Fybl9zbG93cGF0
aF9jb21tb24rMHg1NC8weDZjKQooWEVOKSBET00xOiBbPDgwMDFiMTg4Pl0gKHdhcm5fc2xvd3Bh
dGhfY29tbW9uKzB4MC8weDZjKSBmcm9tIFs8ODAwMWIyMTg+XSAod2Fybl9zbG93cGF0aF9udWxs
KzB4MjQvMHgyYykKKFhFTikgRE9NMTogcjg6ODAwMDQwNTkgcjc6ODA0ZmZiYzAgcjY6ODAzYjM3
NmMgcjU6ODAzZTMzMDAgcjQ6ZmZmZmZmZWEKKFhFTikgRE9NMTogcjM6MDAwMDAwMDkKKFhFTikg
RE9NMTogWzw4MDAxYjFmND5dICh3YXJuX3Nsb3dwYXRoX251bGwrMHgwLzB4MmMpIGZyb20gWzw4
MDM5ZWIwYz5dICh2Mm1fZHRfdGltZXJfaW5pdCsweDdjLzB4Y2MpCihYRU4pIERPTTE6IFs8ODAz
OWVhOTA+XSAodjJtX2R0X3RpbWVyX2luaXQrMHgwLzB4Y2MpIGZyb20gWzw4MDM5YjgxMD5dICh0
aW1lX2luaXQrMHgyOC8weDM4KQooWEVOKSBET00xOiByNjo4MDNiMzc2YyByNTo4MDNlMzMwMCBy
NDpmZmZmZmZmZgooWEVOKSBET00xOiBbPDgwMzliN2U4Pl0gKHRpbWVfaW5pdCsweDAvMHgzOCkg
ZnJvbSBbPDgwMzk5NmFjPl0gKHN0YXJ0X2tlcm5lbCsweDE4OC8weDI2YykKKFhFTikgRE9NMTog
Wzw4MDM5OTUyND5dIChzdGFydF9rZXJuZWwrMHgwLzB4MjZjKSBmcm9tIFs8ODAwMDgwNDA+XSAo
MHg4MDAwODA0MCkKKFhFTikgRE9NMTogcjc6ODAzYzUyODQgcjY6ODAzYjM3NjggcjU6ODAzYzIw
NTQgcjQ6MTBjNTNjN2QKKFhFTikgRE9NMTogLS0tWyBlbmQgdHJhY2UgMWI3NWIzMWEyNzE5ZWQx
ZSBdLS0tCihYRU4pIERPTTE6IENvbnNvbGU6IGNvbG91ciBkdW1teSBkZXZpY2UgODB4MzAKKFhF
TikgRE9NMTogQ2FsaWJyYXRpbmcgZGVsYXkgbG9vcC4uLiA5OC43MSBCb2dvTUlQUyAobHBqPTQ5
MzU2OCkKKFhFTikgRE9NMTogcGlkX21heDogZGVmYXVsdDogMzI3NjggbWluaW11bTogMzAxCihY
RU4pIERPTTE6IE1vdW50LWNhY2hlIGhhc2ggdGFibGUgZW50cmllczogNTEyCihYRU4pIERPTTE6
IENQVTogVGVzdGluZyB3cml0ZSBidWZmZXIgY29oZXJlbmN5OiBvawooWEVOKSBET00xOiBTZXR0
aW5nIHVwIHN0YXRpYyBpZGVudGl0eSBtYXAgZm9yIDB4ODAyZDQ4MjAgLSAweDgwMmQ0ODU0CihY
RU4pIERPTTE6IFhlbiBzdXBwb3J0IGZvdW5kLCBldmVudHNfaXJxPTMxIGdudHRhYl9mcmFtZV9w
Zm49YjAwMDAKKFhFTikgRE9NMTogR3JhbnQgdGFibGVzIHVzaW5nIHZlcnNpb24gMSBsYXlvdXQu
CihYRU4pIERPTTE6IEdyYW50IHRhYmxlIGluaXRpYWxpemVkCihYRU4pIERPTTE6IE5FVDogUmVn
aXN0ZXJlZCBwcm90b2NvbCBmYW1pbHkgMTYKKFhFTikgRE9NMTogYmlvOiBjcmVhdGUgc2xhYiA8
YmlvLTA+IGF0IDAKKFhFTikgRE9NMTogU0NTSSBzdWJzeXN0ZW0gaW5pdGlhbGl6ZWQKKFhFTikg
RE9NMTogbGliYXRhIHZlcnNpb24gMy4wMCBsb2FkZWQuCihYRU4pIERPTTE6IHVzYmNvcmU6IHJl
Z2lzdGVyZWQgbmV3IGludGVyZmFjZSBkcml2ZXIgdXNiZnMKKFhFTikgRE9NMTogdXNiY29yZTog
cmVnaXN0ZXJlZCBuZXcgaW50ZXJmYWNlIGRyaXZlciBodWIKKFhFTikgRE9NMTogdXNiY29yZTog
cmVnaXN0ZXJlZCBuZXcgZGV2aWNlIGRyaXZlciB1c2IKKFhFTikgRE9NMTogU3dpdGNoaW5nIHRv
IGNsb2Nrc291cmNlIGFyY2hfc3lzX2NvdW50ZXIKKFhFTikgRE9NMTogTkVUOiBSZWdpc3RlcmVk
IHByb3RvY29sIGZhbWlseSAyCihYRU4pIERPTTE6IElQIHJvdXRlIGNhY2hlIGhhc2ggdGFibGUg
ZW50cmllczogMTAyNCAob3JkZXI6IDAsIDQwOTYgYnl0ZXMpCihYRU4pIERPTTE6IFRDUCBlc3Rh
Ymxpc2hlZCBoYXNoIHRhYmxlIGVudHJpZXM6IDQwOTYgKG9yZGVyOiAzLCAzMjc2OCBieXRlcykK
KFhFTikgRE9NMTogVENQIGJpbmQgaGFzaCB0YWJsZSBlbnRyaWVzOiA0MDk2IChvcmRlcjogMiwg
MTYzODQgYnl0ZXMpCihYRU4pIERPTTE6IFRDUDogSGFzaCB0YWJsZXMgY29uZmlndXJlZCAoZXN0
YWJsaXNoZWQgNDA5NiBiaW5kIDQwOTYpCihYRU4pIERPTTE6IFRDUDogcmVubyByZWdpc3RlcmVk
CihYRU4pIERPTTE6IFVEUCBoYXNoIHRhYmxlIGVudHJpZXM6IDI1NiAob3JkZXI6IDAsIDQwOTYg
Ynl0ZXMpCihYRU4pIERPTTE6IFVEUC1MaXRlIGhhc2ggdGFibGUgZW50cmllczogMjU2IChvcmRl
cjogMCwgNDA5NiBieXRlcykKKFhFTikgRE9NMTogTkVUOiBSZWdpc3RlcmVkIHByb3RvY29sIGZh
bWlseSAxCihYRU4pIERPTTE6IFJQQzogUmVnaXN0ZXJlZCBuYW1lZCBVTklYIHNvY2tldCB0cmFu
c3BvcnQgbW9kdWxlLgooWEVOKSBET00xOiBSUEM6IFJlZ2lzdGVyZWQgdWRwIHRyYW5zcG9ydCBt
b2R1bGUuCihYRU4pIERPTTE6IFJQQzogUmVnaXN0ZXJlZCB0Y3AgdHJhbnNwb3J0IG1vZHVsZS4K
KFhFTikgRE9NMTogUlBDOiBSZWdpc3RlcmVkIHRjcCBORlN2NC4xIGJhY2tjaGFubmVsIHRyYW5z
cG9ydCBtb2R1bGUuCihYRU4pIERPTTE6IGpmZnMyOiB2ZXJzaW9uIDIuMi4gKE5BTkQpIMKpIDIw
MDEtMjAwNiBSZWQgSGF0LCBJbmMuCihYRU4pIERPTTE6IG1zZ21uaSBoYXMgYmVlbiBzZXQgdG8g
MjQ1CihYRU4pIERPTTE6IGlvIHNjaGVkdWxlciBub29wIHJlZ2lzdGVyZWQgKGRlZmF1bHQpCihY
RU4pIERPTTE6IEV2ZW50LWNoYW5uZWwgZGV2aWNlIGluc3RhbGxlZC4KKFhFTikgRE9NMTogSW5p
dGlhbGlzaW5nIFhlbiB2aXJ0dWFsIGV0aGVybmV0IGRyaXZlci4KKFhFTikgRE9NMTogYmxrZnJv
bnQ6IHh2ZGE6IGJhcnJpZXIgb3IgZmx1c2g6IGRpc2FibGVkCihYRU4pIGdyYW50X3RhYmxlLmM6
MTIwNDpkMSBFeHBhbmRpbmcgZG9tICgxKSBncmFudCB0YWJsZSBmcm9tICgxKSB0byAoMikgZnJh
bWVzLgooWEVOKSBET00xOiBJbml0aWFsaXppbmcgVVNCIE1hc3MgU3RvcmFnZSBkcml2ZXIuLi4K
KFhFTikgRE9NMTogdXNiY29yZTogcmVnaXN0ZXJlZCBuZXcgaW50ZXJmYWNlIGRyaXZlciB1c2It
c3RvcmFnZQooWEVOKSBET00xOiBVU0IgTWFzcyBTdG9yYWdlIHN1cHBvcnQgcmVnaXN0ZXJlZC4K
KFhFTikgRE9NMTogbW91c2VkZXY6IFBTLzIgbW91c2UgZGV2aWNlIGNvbW1vbiBmb3IgYWxsIG1p
Y2UKKFhFTikgRE9NMTogdXNiY29yZTogcmVnaXN0ZXJlZCBuZXcgaW50ZXJmYWNlIGRyaXZlciB1
c2JoaWQKKFhFTikgRE9NMTogdXNiaGlkOiBVU0IgSElEIGNvcmUgZHJpdmVyCihYRU4pIERPTTE6
IFRDUDogY3ViaWMgcmVnaXN0ZXJlZAooWEVOKSBET00xOiBORVQ6IFJlZ2lzdGVyZWQgcHJvdG9j
b2wgZmFtaWx5IDE3CihYRU4pIERPTTE6IFZGUCBzdXBwb3J0IHYwLjM6IGltcGxlbWVudG9yIDQx
IGFyY2hpdGVjdHVyZSA0IHBhcnQgMzAgdmFyaWFudCBmIHJldiAwCihYRU4pIERPTTE6IFdhcm5p
bmc6IHVuYWJsZSB0byBvcGVuIGFuIGluaXRpYWwgY29uc29sZS4KKFhFTikgRE9NMTogZW5kX3Jl
cXVlc3Q6IEkvTyBlcnJvciwgZGV2IHh2ZGEsIHNlY3RvciAyCihYRU4pIERPTTE6IEVYVDMtZnMg
KHh2ZGEpOiBlcnJvcjogdW5hYmxlIHRvIHJlYWQgc3VwZXJibG9jawooWEVOKSBET00xOiBlbmRf
cmVxdWVzdDogSS9PIGVycm9yLCBkZXYgeHZkYSwgc2VjdG9yIDIKKFhFTikgRE9NMTogRVhUMi1m
cyAoeHZkYSk6IGVycm9yOiB1bmFibGUgdG8gcmVhZCBzdXBlcmJsb2NrCihYRU4pIERPTTE6IGVu
ZF9yZXF1ZXN0OiBJL08gZXJyb3IsIGRldiB4dmRhLCBzZWN0b3IgMAooWEVOKSBET00xOiBGQVQt
ZnMgKHh2ZGEpOiB1bmFibGUgdG8gcmVhZCBib290IHNlY3RvcgooWEVOKSBET00xOiBWRlM6IENh
bm5vdCBvcGVuIHJvb3QgZGV2aWNlICJ4dmRhIiBvciB1bmtub3duLWJsb2NrKDIwMiwwKTogZXJy
b3IgLTUKKFhFTikgRE9NMTogUGxlYXNlIGFwcGVuZCBhIGNvcnJlY3QgInJvb3Q9IiBib290IG9w
dGlvbjsgaGVyZSBhcmUgdGhlIGF2YWlsYWJsZSBwYXJ0aXRpb25zOgooWEVOKSBET00xOiBLZXJu
ZWwgcGFuaWMgLSBub3Qgc3luY2luZzogVkZTOiBVbmFibGUgdG8gbW91bnQgcm9vdCBmcyBvbiB1
bmtub3duLWJsb2NrKDIwMiwwKQooWEVOKSBET00xOiBCYWNrdHJhY2U6CihYRU4pIERPTTE6IFs8
ODAwMTFiMGM+XSAoZHVtcF9iYWNrdHJhY2UrMHgwLzB4MTBjKSBmcm9tIFs8ODAyZDA2NTg+XSAo
ZHVtcF9zdGFjaysweDE4LzB4MWMpCihYRU4pIERPTTE6IHI2OjgwM2IzMmRjIHI1Ojg3ODFlMDAw
IHI0OjgwM2UzZGE4IHIzOjAwMDAwMDAxCihYRU4pIERPTTE6IFs8ODAyZDA2NDA+XSAoZHVtcF9z
dGFjaysweDAvMHgxYykgZnJvbSBbPDgwMmQwODMwPl0gKHBhbmljKzB4N2MvMHgxYWMpCihYRU4p
IERPTTE6IFs8ODAyZDA3YjQ+XSAocGFuaWMrMHgwLzB4MWFjKSBmcm9tIFs8ODAzOTljYWM+XSAo
bW91bnRfYmxvY2tfcm9vdCsweDE4MC8weDIzNCkKKFhFTikgRE9NMTogcjM6MDAwMDAwMDAgcjI6
MDAwMDAwMDAgcjE6ODc4MmRmMjAgcjA6ODAzNDgzMzgKKFhFTikgRE9NMTogcjc6MDAwMDgwMDEK
KFhFTikgRE9NMTogWzw4MDM5OWIyYz5dIChtb3VudF9ibG9ja19yb290KzB4MC8weDIzNCkgZnJv
bSBbPDgwMzk5ZTUwPl0gKG1vdW50X3Jvb3QrMHhmMC8weDExMCkKKFhFTikgRE9NMTogWzw4MDM5
OWQ2MD5dIChtb3VudF9yb290KzB4MC8weDExMCkgZnJvbSBbPDgwMzk5Zjk0Pl0gKHByZXBhcmVf
bmFtZXNwYWNlKzB4MTI0LzB4MTdjKQooWEVOKSBET00xOiByNzo4MDNlMzMwMCByNjo4MDNiMzJi
NCByNTo4MDNiMzJlZCByNDo4MDNlMzM2MAooWEVOKSBET00xOiBbPDgwMzk5ZTcwPl0gKHByZXBh
cmVfbmFtZXNwYWNlKzB4MC8weDE3YykgZnJvbSBbPDgwMzk5OTAwPl0gKGtlcm5lbF9pbml0KzB4
MTcwLzB4MWFjKQooWEVOKSBET00xOiByNTowMDAwMDAwNyByNDo4MDNiMzJkNAooWEVOKSBET00x
OiBbPDgwMzk5NzkwPl0gKGtlcm5lbF9pbml0KzB4MC8weDFhYykgZnJvbSBbPDgwMDFmN2I0Pl0g
KGRvX2V4aXQrMHgwLzB4NmJjKSAK
--e89a8f3bafff4e2c0804c6beb06b
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--e89a8f3bafff4e2c0804c6beb06b--


From xen-devel-bounces@lists.xen.org Wed Aug 08 10:37:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 10:37:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz3dI-0005vV-QF; Wed, 08 Aug 2012 10:36:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <trashsee@gmail.com>) id 1Sz3dH-0005vO-4M
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 10:36:55 +0000
Received: from [85.158.143.99:50842] by server-2.bemta-4.messagelabs.com id
	3C/DB-19021-64142205; Wed, 08 Aug 2012 10:36:54 +0000
X-Env-Sender: trashsee@gmail.com
X-Msg-Ref: server-12.tower-216.messagelabs.com!1344422211!25565247!1
X-Originating-IP: [209.85.160.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25433 invoked from network); 8 Aug 2012 10:36:52 -0000
Received: from mail-gh0-f173.google.com (HELO mail-gh0-f173.google.com)
	(209.85.160.173)
	by server-12.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Aug 2012 10:36:52 -0000
Received: by ghrr17 with SMTP id r17so664877ghr.32
	for <xen-devel@lists.xen.org>; Wed, 08 Aug 2012 03:36:51 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=VCk0KWUP8i+rhaljaWa5zXYeLBvxSRqhYEBlX1ABR78=;
	b=YTy6TOP63+kqVLJs3eU75yyHbUxsW0/bnqydsIomr0sQ8TkFhECJ2lvF3cPGlbX/uH
	jeoFJ5970mx4tvJdbTouE8er5Vsp5JxTlST7zJkYJiCV/7upiCyOS1dFuTJPZwGLNqnB
	C2lFQAJ/G/AZJ2kHWUFXuUWt+ZiLiWCigi9iV7F3cQG6GhJZRosH8HSwnmMj9B7HiiVz
	13HV6k9zhvsQn7m7xp0ET/YH4gxjroDWGa7NNSmug+AjLZ6s2USO4MmGbk8OTNmXXvtc
	sWjmD9/+AHj299pnLqhIsrRzvKTiIdZQ9XbL7O1Li7sjQmilMY8+R/apzCAMZZ4lqiJa
	DzNg==
MIME-Version: 1.0
Received: by 10.50.87.133 with SMTP id ay5mr51593igb.49.1344422210841; Wed, 08
	Aug 2012 03:36:50 -0700 (PDT)
Received: by 10.64.22.10 with HTTP; Wed, 8 Aug 2012 03:36:50 -0700 (PDT)
In-Reply-To: <CAPny0soV4Z0R_PADtjn4JpCFMPkU-m+O4vBWA+DJRb9GVV36=g@mail.gmail.com>
References: <CAPny0soyuQkUmAU+kYrBvG+w_jxKUsY8YxCrxBA=7cwmdwV6Xw@mail.gmail.com>
	<alpine.DEB.2.02.1207301934540.4645@kaball.uk.xensource.com>
	<CAPny0soV4Z0R_PADtjn4JpCFMPkU-m+O4vBWA+DJRb9GVV36=g@mail.gmail.com>
Date: Wed, 8 Aug 2012 14:36:50 +0400
Message-ID: <CAPny0sqtgo7MvndfLN6JExkQMP40ro1FT6Edc6OKWt0KreNYnQ@mail.gmail.com>
From: Alexey Klimov <trashsee@gmail.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>, 
	Ian Campbell <Ian.Campbell@citrix.com>
Content-Type: multipart/mixed; boundary=e89a8f3bafff4e2c0804c6beb06b
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [questions] Dom0/DomU on ARM under Xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--e89a8f3bafff4e2c0804c6beb06b
Content-Type: text/plain; charset=ISO-8859-1

2012/8/1 Alexey Klimov <trashsee@gmail.com>:
> And i saw that Ian set up git repository for xen with latest patches
> for ARM. So i'll try to use this repository.

Hello Stefano and Ian,

I used Ian's new xen-unstable git repository
(git://xenbits.xen.org/people/ianc/xen-unstable.git arm-for-4.) and
Stefano's linux kernel git repository (
git://xenbits.xen.org/people/sstabellini/linux-pvhvm.git
3.5-rc7-arm-2) with additional patches:

- for the linux kernel: "xen/events: fix unmask_evtchn for PV on HVM guests",
- the "ARM hypercall ABI: 64 bit ready" patch series for xen, and I
attached a few versions of xcbuild (Ian's early version and the latest
one). After applying the 64-bit-ready patches I observed the following
errors when building xen and the tools:

1)
for i in public/callback.h public/dom0_ops.h public/elfnote.h
public/event_channel.h public/features.h public/grant_table.h
public/kexec.h public/mem_event.h public/memory.h public/nmi.h
public/physdev.h public/platform.h public/sched.h public/tmem.h
public/trace.h public/vcpu.h public/version.h public/xen-compat.h
public/xen.h public/xencomm.h public/xenoprof.h public/hvm/e820.h
public/hvm/hvm_info_table.h public/hvm/hvm_op.h public/hvm/ioreq.h
public/hvm/params.h public/io/blkif.h public/io/console.h
public/io/fbif.h public/io/fsif.h public/io/kbdif.h
public/io/libxenvchan.h public/io/netif.h public/io/pciif.h
public/io/protocols.h public/io/ring.h public/io/tpmif.h
public/io/usbif.h public/io/vscsiif.h public/io/xenbus.h
public/io/xs_wire.h; do gcc -ansi -include stdint.h -Wall -W -Werror
-S -o /dev/null -xc $i || exit 1; echo $i; done >headers.chk.new
public/version.h:61:5: error: unknown type name 'xen_ulong_t'
make[3]: *** [headers.chk] Error 1
make[3]: Leaving directory `/src/xen/xen/include'

Fixed by inserting #include "arch-arm.h" in xen/include/public/version.h

2)
building 'xc' extension
gcc -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall
-Wstrict-prototypes -O1 -fno-omit-frame-pointer -marm -g
-fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes
-Wdeclaration-after-statement -Wno-unused-but-set-variable
-D__XEN_TOOLS__ -MMD -MF .build.d -D_LARGEFILE_SOURCE
-D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE
-fno-optimize-sibling-calls -fPIC -I../../tools/include
-I../../tools/libxc -Ixen/lowlevel/xc -I/usr/include/python2.7 -c
xen/lowlevel/xc/xc.c -o
build/temp.linux-armv7l-2.7/xen/lowlevel/xc/xc.o -fno-strict-aliasing
-Werror
xen/lowlevel/xc/xc.c: In function 'pyxc_xeninfo':
xen/lowlevel/xc/xc.c:1442:5: error: format '%lx' expects argument of
type 'long unsigned int', but argument 4 has type 'xen_ulong_t'
[-Werror=format]
xen/lowlevel/xc/xc.c:1442:5: error: format '%lx' expects argument of
type 'long unsigned int', but argument 4 has type 'xen_ulong_t'
[-Werror=format]
cc1: all warnings being treated as errors

I worked around this by commenting out the snprintf(str, sizeof(str),
"virt_start=0x%lx", p_parms.virt_start); call in xc.c.

Then it compiled and I tried to run a DomU. It looks like the
allocation of console_pfn and xenstore_pfn in alloc_magic_pages() in
xc_dom_arm.c is the real source of pain for me. With this
allocation/patch, xen prints "bad p2m lookup" messages before booting
the DomU:
(XEN) bad p2m lookup
(XEN) dom1 IPA 0x0000000090000000
(XEN) P2M @ 02ffcac0 mfn:0xffe56
(XEN) 1ST[0x2] = 0x00000000f3f686ff
(XEN) 2ND[0x80] = 0x0000000000000000
(XEN) bad p2m lookup
(XEN) dom1 IPA 0x0000000090001000
(XEN) P2M @ 02ffcac0 mfn:0xffe56
(XEN) 1ST[0x2] = 0x00000000f3f686ff
(XEN) 2ND[0x80] = 0x0000000000000000
(XEN) bad p2m lookup
(XEN) dom1 IPA 0x0000000090001000
(XEN) P2M @ 02ffcac0 mfn:0xffe56
(XEN) 1ST[0x2] = 0x00000000f3f686ff
(XEN) 2ND[0x80] = 0x0000000000000000

and then everything hangs with a translation fault:

(XEN) DOM1: Grant tables using version 1 layout.
(XEN) DOM1: Grant table initialized
(XEN) DOM1: NET: Registered protocol family 16
(XEN) Guest data abort: Translation fault at level 2
(XEN)     gva=88808804
(XEN)     gpa=0000000090001804
(XEN)     size=2 sign=0 write=0 reg=2
(XEN)     eat=0 cm=0 s1ptw=0 dfsc=6
(XEN) dom1 IPA 0x0000000090001804
(XEN) P2M @ 02ffcac0 mfn:0xffe56
(XEN) 1ST[0x2] = 0x00000000f3f686ff
(XEN) 2ND[0x80] = 0x0000000000000000

A detailed log is attached.
OK, I moved the allocation of the console and xenstore pages back into
arch_setup_meminit() as in
http://lists.xen.org/archives/html/xen-devel/2012-06/msg01340.html and
then added the kernel parameter keep_bootcon to the DomU device tree
file, and everything booted up to "unable to open an initial console"
and failed to mount the rootfs. I still haven't learned how to deal
with xenstore, hvc0, xvda and how to boot from an initramfs on ARM
using xcbuild, but I'll try to understand and learn this :) So maybe
it would be good to investigate, or take a deep look at, why
add_to_physmap failed in xcbuild and why there is a bad p2m lookup in
xen. The log is attached.

Is there any difference between your Dom0 .config and DomU .config?
Did you just attach the initrd using the xc_dom_ramdisk_file() call in
xcbuild? Any special configuration of the xen console/xenstore?

Well, I don't claim that I'm doing everything correctly, but I tried
to run it, fixing/commenting out as much as I could. Could you please
help if you have time? I can test new changes and send other useful
info/logs.

-- 
Best regards,
Alexey.

--e89a8f3bafff4e2c0804c6beb06b
Content-Type: application/octet-stream; 
	name="fault_xcbuild-simple-Dom0+U-A15x1_07082012.log"
Content-Disposition: attachment; 
	filename="fault_xcbuild-simple-Dom0+U-A15x1_07082012.log"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_h5m9l05a2

cm9vdEB0dXo6L3Vzci9saWIveGVuL2JpbiMgLi94Y2J1aWxkLXNpbXBsZSAvYm9vdC96SW1hZ2Ut
dGVzdDQgCkltYWdlOiAvYm9vdC96SW1hZ2UtdGVzdDQKTWVtb3J5OiAyNjQxOTJLQgp4Y19kb21h
aW5fY3JlYXRlOiAwICgwKQpidWlsZGluZyBkb20xCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogeGNf
ZG9tX2FsbG9jYXRlOiBjbWRsaW5lPSIiLCBmZWF0dXJlcz0iKG51bGwpIgpkb21haW5idWlsZGVy
OiBkZXRhaWw6IHhjX2RvbV9rZXJuZWxfZmlsZTogZmlsZW5hbWU9Ii9ib290L3pJbWFnZS10ZXN0
NCIKZG9tYWluYnVpbGRlcjogZGV0YWlsOiB4Y19kb21fbWFsbG9jX2ZpbGVtYXAgICAgOiAyMDI4
IGtCCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogeGNfZG9tX2Jvb3RfeGVuX2luaXQ6IHZlciA0LjIs
IGNhcHMgeGVuLTMuMC1hcm12N2wgCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogeGNfZG9tX3BhcnNl
X2ltYWdlOiBjYWxsZWQKZG9tYWluYnVpbGRlcjogZGV0YWlsOiB4Y19kb21fZmluZF9sb2FkZXI6
IHRyeWluZyBtdWx0aWJvb3QtYmluYXJ5IGxvYWRlciAuLi4gCmRvbWFpbmJ1aWxkZXI6IGRldGFp
bDogbG9hZGVyIHByb2JlIGZhaWxlZApkb21haW5idWlsZGVyOiBkZXRhaWw6IHhjX2RvbV9maW5k
X2xvYWRlcjogdHJ5aW5nIExpbnV4IHpJbWFnZSAoQVJNKSBsb2FkZXIgLi4uIAoKZG9tYWluYnVp
bGRlcjogZGV0YWlsOiB4Y19kb21fcHJvYmVfemltYWdlX2tlcm5lbDogZm91bmQgYW4gYXBwZW5k
ZWQgRFRCCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogbG9hZGVyIHByb2JlIE9LCmRvbWFpbmJ1aWxk
ZXI6IGRldGFpbDogeGNfZG9tX3BhcnNlX3ppbWFnZV9rZXJuZWw6IGNhbGxlZApkb21haW5idWls
ZGVyOiBkZXRhaWw6IHhjX2RvbV9wYXJzZV96aW1hZ2Vfa2VybmVsOiB4ZW4tMy4wLWFybXY3bDog
UkFNIHN0YXJ0cyBhdCA4MDAwMApkb21haW5idWlsZGVyOiBkZXRhaWw6IHhjX2RvbV9wYXJzZV96
aW1hZ2Vfa2VybmVsOiB4ZW4tMy4wLWFybXY3bDogMHg4MDAwODAwMCAtPiAweDgwMjAzM2EwCmRv
bWFpbmJ1aWxkZXI6IGRldGFpbDogeGNfZG9tX21lbV9pbml0OiBtZW0gMjU2IE1CLCBwYWdlcyAw
eDEwMDAwIHBhZ2VzLCA0ayBlYWNoCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogeGNfZG9tX21lbV9p
bml0OiAweDEwMDAwIHBhZ2VzCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogeGNfZG9tX2Jvb3RfbWVt
X2luaXQ6IGNhbGxlZApkb21haW5idWlsZGVyOiBkZXRhaWw6IHhjX2RvbV9tYWxsb2MgICAgICAg
ICAgICA6IDUxMiBrQgpkb21haW5idWlsZGVyOiBkZXRhaWw6IHJlbWFwX2FyZWFfbWZuX3B0ZV9m
bjogcHRlcCA4NzJjMzVjMCBhZGRyIDB4NzY5NzAwMDAgPT4gMHg5MDAwMDMwZiAvIDB4OTAwMDAK
eGNfZG9tX2J1aWxkX2ltYXJlbWFwX2FyZWFfbWZuX3B0ZV9mbjogcHRlcCA4NzJjMzVjNCBhZGRy
IDB4NzY5NzEwMDAgPT4gMHg5MDAwMTMwZiAvIDB4OTAwMDEKZ2U6IGNhbGxlZApkb21hcmVtYXBf
YXJlYV9tZm5fcHRlX2ZuOiBwdGVwIDg3MmMzNWM4IGFkZHIgMHg3Njk3MjAwMCA9PiAweDkwMDAy
MzBmIC8gMHg5MDAwMgppbmJ1aWxkZXI6IGRldGFpcmVtYXBfYXJlYV9tZm5fcHRlX2ZuOiBwdGVw
IDg3MmMzNWNjIGFkZHIgMHg3Njk3MzAwMCA9PiAweDkwMDAzMzBmIC8gMHg5MDAwMwpsOiB4Y19k
b21fYWxsb2NfcmVtYXBfYXJlYV9tZm5fcHRlX2ZuOiBwdGVwIDg3MmMzNWQwIGFkZHIgMHg3Njk3
NDAwMCA9PiAweDkwMDA0MzBmIC8gMHg5MDAwNApzZWdtZW50OiAgIGtlcm5lcmVtYXBfYXJlYV9t
Zm5fcHRlX2ZuOiBwdGVwIDg3MmMzNWQ0IGFkZHIgMHg3Njk3NTAwMCA9PiAweDkwMDA1MzBmIC8g
MHg5MDAwNQpsICAgICAgIDogMHg4MDAwcmVtYXBfYXJlYV9tZm5fcHRlX2ZuOiBwdGVwIDg3MmMz
NWQ4IGFkZHIgMHg3Njk3NjAwMCA9PiAweDkwMDA2MzBmIC8gMHg5MDAwNgo4MDAwIC0+IDB4ODAy
MDQwcmVtYXBfYXJlYV9tZm5fcHRlX2ZuOiBwdGVwIDg3MmMzNWRjIGFkZHIgMHg3Njk3NzAwMCA9
PiAweDkwMDA3MzBmIC8gMHg5MDAwNwowMCAgKHBmbiAweDgwMDA4cmVtYXBfYXJlYV9tZm5fcHRl
X2ZuOiBwdGVwIDg3MmMzNWUwIGFkZHIgMHg3Njk3ODAwMCA9PiAweDkwMDA4MzBmIC8gMHg5MDAw
OApyZW1hcF9hcmVhX21mbl9wdGVfZm46IHB0ZXAgODcyYzM1ZTQgYWRkciAweDc2OTc5MDAwID0+
IDB4OTAwMDkzMGYgLyAweDkwMDA5CgpyZW1hcF9hcmVhX21mbl9wdGVfZm46IHB0ZXAgODcyYzM1
ZTggYWRkciAweDc2OTdhMDAwID0+IDB4OTAwMGEzMGYgLyAweDkwMDBhCnJlbWFwX2FyZWFfbWZu
X3B0ZV9mbjogcHRlcCA4NzJjMzVlYyBhZGRyIDB4NzY5N2IwMDAgPT4gMHg5MDAwYjMwZiAvIDB4
OTAwMGIKcmVtYXBfYXJlYV9tZm5fcHRlX2ZuOiBwdGVwIDg3MmMzNWYwIGFkZHIgMHg3Njk3YzAw
MCA9PiAweDkwMDBjMzBmIC8gMHg5MDAwYwpyZW1hcF9hcmVhX21mbl9wdGVfZm46IHB0ZXAgODcy
YzM1ZjQgYWRkciAweDc2OTdkMDAwID0+IDB4OTAwMGQzMGYgLyAweDkwMDBkCnJlbWFwX2FyZWFf
bWZuX3B0ZV9mbjogcHRlcCA4NzJjMzVmOCBhZGRyIDB4NzY5N2UwMDAgPT4gMHg5MDAwZTMwZiAv
IDB4OTAwMGUKcmVtYXBfYXJlYV9tZm5fcHRlX2ZuOiBwdGVwIDg3MmMzNWZjIGFkZHIgMHg3Njk3
ZjAwMCA9PiAweDkwMDBmMzBmIC8gMHg5MDAwZgpkb21haW5idWlsZGVyOiBkZXRhaWw6IHhjX2Rv
bV9wZm5fdG9fcHRyOiBkb21VIG1hcHBpbmc6IHBmbiAweDgwMDA4KzB4MWZjIGF0IDB4NzY5NzAw
MDAKZG9tYWluYnVpbGRlcjogZGV0YWlsOiB4Y19kb21fbG9hZF96aW1hZ2Vfa2VybmVsOiBjYWxs
ZWQKZG9tYWluYnVpbGRlcjogZGV0YWlsOiB4Y19kb21fbG9hZF96aW1hZ2Vfa2VybmVsOiBrZXJu
ZWwgc2VkIGZvcmVpZ24gbWFwIGFkZF90b19waHlzbWFwIGZhaWxlZCwgZXJyPS0yMgoweDgwMDA4
MDAwLTB4ODAyMDQwMDAKZG9tYWluYnVpZm9yZWlnbiBtYXAgYWRkX3RvX3BoeXNtYXAgZmFpbGVk
LCBlcnI9LTIyCmxkZXI6IGRldGFpbDogeGNfZG9tX2xvYWRfemltYWdlX2tlcm5lbDogY29weSAy
MDc3NjAwIGJ5dGVzIGZyb20gYmxvYiAweDc2YmVkMDAwIHRvIGRzdCAweDc2OTcwMDAwCmRvbWFp
bmJ1aWxkZXI6IGRldGFpbDogYWxsb2NfbWFnaWNfcGFnZXM6IGNhbGxlZApkb21haW5idWlsZGVy
OiBkZXRhaWw6IGNvdW50X3BndGFibGVzX2FybTogY2FsbGVkCmRvbWFpbmJ1aWxkZXI6IGRldGFp
bDogeGNfZG9tX2Jmb3JlaWduIG1hcCBhZGRfdG9fcGh5c21hcCBmYWlsZWQsIGVycj0tMjIKdWls
ZF9pbWFnZSAgOiB2aXJ0X2FsbG9jX2VuZCA6IDB4ODAyMDQwMDAKZG9tYWluYnVpbGRlcjogZGV0
YWlsOiB4Y19kb21fYnVpbGRfaW1hZ2UgIDogdmlydF9wZ3RhYl9lbmQgOiAweDAKZG9tYWluYnVp
bGRlcjogZGV0YWlsOiB4Y19kb21fYm9vdF9pbWFnZTogY2FsbGVkCmRvbWFpbmJ1aWxkZXI6IGRl
dGFpbDogYXJjaF9zZXR1cF9ib290ZWFybHk6IGRvaW5nIG5vdGhpbmcKZG9tYWluYnVpbGRlcjog
ZGV0YWlsOiB4Y19kb21fY29tcGF0X2NoZWNrOiBzdXBwb3J0ZWQgZ3Vlc3QgdHlwZTogeGVuLTMu
MC1hcm12N2wgPD0gbWF0Y2hlcwpkb21haW5idWlsZGVyOiBkZXRhaWw6IHNldHVwX3BndGFibGVz
X2FybTogY2FsbGVkCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogc3RhcnRfaW5mb19hcm06IGNhbGxl
ZApkb21haW5idWlsZGVyOiBkZXRhaWw6IGRvbWFpbiBidWlsZGVyIG1lbW9yeSBmb290cHJpbnQK
ZG9tYWluYnVpbGRlcjogZGV0YWlsOiAgICBhbGxvY2F0ZWQKZG9tYWluYnVpbGRlcjogZGV0YWls
OiAgICAgICBtYWxsb2MgICAgICAgICAgICAgOiA1MjUga0IKZG9tYWluYnVpbGRlcjogZGV0YWls
OiAgICAgICBhbm9uIG1tYXAgICAgICAgICAgOiAwIGJ5dGVzCmRvbWFpbmJ1aWxkZXI6IGRldGFp
bDogICAgbWFwcGVkCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogICAgICAgZmlsZSBtbWFwICAgICAg
ICAgIDogMjAyOCBrQgpkb21haW5idWlsZGVyOiBkZXRhaWw6ICAgICAgIGRvbVUgbW1hcCAgICAg
ICAgICA6IDIwMzIga0IKZG9tYWluYnVpbGRlcjogZGV0YWlsOiB2Y3B1X2FybTogY2FsbGVkCmRv
bWFpbmJ1aWxkZXI6IGRldGFpbDogSW5pdGlhbCBzdGF0ZSBDUFNSIDB4MWQzIFBDIDB4ODAwMDgw
MDAKZG9tYWluYnVpbGRlcjogZGV0YWlsOiBsYXVuY2hfdm06IGNhbGxlZCwgY3R4dD0weDdlZjhk
N2U0CmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogeGNfZG9tX3JlbGVhc2U6IGNhbGxlZAp4YzogZGVi
dWc6IGh5cGVyY2FsbCBidWZmZXI6IHRvdGFsIGFsbG9jYXRpb25zOjIwIHRvdGFsIHJlbGVhc2Vz
OjIwCnhjOiBkZWJ1ZzogaHlwZXJjYWxsIGJ1ZmZlcjogY3VycmVudCBhbGxvY2F0aW9uczowIG1h
eGltdW0gYWxsb2NhdGlvbnM6Mgp4YzogZGVidWc6IGh5cGVyY2FsbCBidWZmZXI6IGNhY2hlIGN1
cnJlbnQgc2l6ZToyCnhjOiBkZWJ1ZzogaHlwZXJjYWxsIGJ1ZmZlcjogY2FjaGUgaGl0czoxNyBt
aXNzZXM6MiB0b29iaWc6MQoKCgoKCi0gVUFSVCBlbmFibGVkIC0KLSBDUFUgMDAwMDAwMDAgYm9v
dGluZyAtCi0gU3RhcnRlZCBpbiBTZWN1cmUgc3RhdGUgLQotIEVudGVyaW5nIEh5cCBtb2RlIC0K
LSBTZXR0aW5nIHVwIGNvbnRyb2wgcmVnaXN0ZXJzIC0KLSBUdXJuaW5nIG9uIHBhZ2luZyAtCi0g
UmVhZHkgLQpSQU06IDAwMDAwMDAwODAwMDAwMDAgLSAwMDAwMDAwMGZmZmZmZmZmCiBfXyAgX18g
ICAgICAgICAgICBfICBfICAgIF9fX18gICAgX19fICAgICAgICAgICAgICBfX19fICAgICAgICAg
ICAgICAgICAgICAKIFwgXC8gL19fXyBfIF9fICAgfCB8fCB8ICB8X19fIFwgIC8gXyBcICAgIF8g
X18gX19ffF9fXyBcICAgIF8gX18gIF8gX18gX19fIAogIFwgIC8vIF8gXCAnXyBcICB8IHx8IHxf
ICAgX18pIHx8IHwgfCB8X198ICdfXy8gX198IF9fKSB8X198ICdfIFx8ICdfXy8gXyBcCiAgLyAg
XCAgX18vIHwgfCB8IHxfXyAgIF98IC8gX18vIHwgfF98IHxfX3wgfCB8IChfXyAvIF9fL3xfX3wg
fF8pIHwgfCB8ICBfXy8KIC9fL1xfXF9fX3xffCB8X3wgICAgfF98KF8pX19fX18oXylfX18vICAg
fF98ICBcX19ffF9fX19ffCAgfCAuX18vfF98ICBcX19ffAogICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICB8X3wgICAgICAgICAgICAgCihY
RU4pIFhlbiB2ZXJzaW9uIDQuMi4wLXJjMi1wcmUgKHJvb3RAKG5vbmUpKSAoZ2NjIHZlcnNpb24g
NC42LjMgKFVidW50dS9MaW5hcm8gNC42LjMtMXVidW50dTUpICkgVHVlIEF1ZyAgNyAxMToyMzoz
MCBVVEMgMjAxMgooWEVOKSBMYXRlc3QgQ2hhbmdlU2V0OiB1bmF2YWlsYWJsZQooWEVOKSBHSUM6
IDY0IGxpbmVzLCAxIGNwdSwgc2VjdXJlIChJSUQgMDAwMDA0M2IpLgooWEVOKSBXYWl0aW5nIGZv
ciAwIG90aGVyIENQVXMgdG8gYmUgcmVhZHkKKFhFTikgVXNpbmcgZ2VuZXJpYyB0aW1lciBhdCAx
MDAwMDAwMDAgSHoKKFhFTikgWGVuIGhlYXA6IDI2MjE0NCBwYWdlcyAgRG9tIGhlYXA6IDI1Mzk1
MiBwYWdlcwooWEVOKSBEb21haW4gaGVhcCBpbml0aWFsaXNlZAooWEVOKSBTZXQgaHlwIHZlY3Rv
ciBiYXNlIHRvIDIzY2ZjMCAoZXhwZWN0ZWQgMDAyM2NmYzApCihYRU4pIFByb2Nlc3NvciBGZWF0
dXJlczogMDAwMDExMzEgMDAwMDExMzEKKFhFTikgRGVidWcgRmVhdHVyZXM6IDAyMDEwNTU1CihY
RU4pIEF1eGlsaWFyeSBGZWF0dXJlczogMDAwMDAwMDAKKFhFTikgTWVtb3J5IE1vZGVsIEZlYXR1
cmVzOiAxMDIwMTEwNSAyMDAwMDAwMCAwMTI0MDAwMCAwMjEwMjIxMQooWEVOKSBJU0EgRmVhdHVy
ZXM6IDAyMTAxMTEwIDEzMTEyMTExIDIxMjMyMDQxIDExMTEyMTMxIDEwMDExMTQyIDAwMDAwMDAw
CihYRU4pIFVzaW5nIHNjaGVkdWxlcjogU01QIENyZWRpdCBTY2hlZHVsZXIgKGNyZWRpdCkKKFhF
TikgQWxsb2NhdGVkIGNvbnNvbGUgcmluZyBvZiAxNiBLaUIuCihYRU4pIEJyb3VnaHQgdXAgMSBD
UFVzCihYRU4pICoqKiBMT0FESU5HIERPTUFJTiAwICoqKgooWEVOKSBQb3B1bGF0ZSBQMk0gMHg4
MDAwMDAwMC0+MHg4ODAwMDAwMAooWEVOKSBNYXAgQ1MyIE1NSU8gcmVnaW9ucyAxOjEgaW4gdGhl
IFAyTSAweDE4MDAwMDAwLT4weDFiZmZmZmZmCihYRU4pIE1hcCBDUzMgTU1JTyByZWdpb25zIDE6
MSBpbiB0aGUgUDJNIDB4MWMwMDAwMDAtPjB4MWZmZmZmZmYKKFhFTikgTWFwIFZHSUMgTU1JTyBy
ZWdpb25zIDE6MSBpbiB0aGUgUDJNIDB4MmMwMDgwMDAtPjB4MmRmZmZmZmYKKFhFTikgUm91dGlu
ZyBwZXJpcGhlcmFsIGludGVycnVwdHMgdG8gZ3Vlc3QKKFhFTikgTG9hZGluZyAwMDAwMDAwMDAw
MWZhYTAwIGJ5dGUgekltYWdlIGZyb20gZmxhc2ggMDAwMDAwMDAwMDAwMDAwMCB0byAwMDAwMDAw
MDgwMDA4MDAwLTAwMDAwMDAwODAyMDJhMDA6IFsuLl0KKFhFTikgU3RkLiBMb2dsZXZlbDogQWxs
CihYRU4pIEd1ZXN0IExvZ2xldmVsOiBBbGwKKFhFTikgKioqIFNlcmlhbCBpbnB1dCAtPiBET00w
ICh0eXBlICdDVFJMLWEnIHRocmVlIHRpbWVzIHRvIHN3aXRjaCBpbnB1dCB0byBYZW4pCihYRU4p
IEZyZWVkIDIwNGtCIGluaXQgbWVtb3J5LgpVbmNvbXByZXNzaW5nIExpbnV4Li4uIGRvbmUsIGJv
b3RpbmcgdGhlIGtlcm5lbC4KQm9vdGluZyBMaW51eCBvbiBwaHlzaWNhbCBDUFUgMApMaW51eCB2
ZXJzaW9uIDMuNS4wLXJjNysgKHJvb3RAdHV6KSAoZ2NjIHZlcnNpb24gNC42LjMgKFVidW50dS9M
aW5hcm8gNC42LjMtMXVidW50dTUpICkgIzEgTW9uIEF1ZyA2IDIwOjU0OjI0IE1TSyAyMDEyCkNQ
VTogQVJNdjcgUHJvY2Vzc29yIFs0MTJmYzBmMF0gcmV2aXNpb24gMCAoQVJNdjcpLCBjcj0xMGM1
M2M3ZApDUFU6IFBJUFQgLyBWSVBUIG5vbmFsaWFzaW5nIGRhdGEgY2FjaGUsIFBJUFQgaW5zdHJ1
Y3Rpb24gY2FjaGUKTWFjaGluZTogQVJNLVZlcnNhdGlsZSBFeHByZXNzLCBtb2RlbDogVjJQLUNB
MTUKYm9vdGNvbnNvbGUgW2Vhcmx5Y29uMF0gZW5hYmxlZApNZW1vcnkgcG9saWN5OiBFQ0MgZGlz
YWJsZWQsIERhdGEgY2FjaGUgd3JpdGViYWNrCk9uIG5vZGUgMCB0b3RhbHBhZ2VzOiAzMjc2OApm
cmVlX2FyZWFfaW5pdF9ub2RlOiBub2RlIDAsIHBnZGF0IDgwM2VhZTk0LCBub2RlX21lbV9tYXAg
ODA0MDgwMDAKICBOb3JtYWwgem9uZTogMjU2IHBhZ2VzIHVzZWQgZm9yIG1lbW1hcAogIE5vcm1h
bCB6b25lOiAwIHBhZ2VzIHJlc2VydmVkCiAgTm9ybWFsIHpvbmU6IDMyNTEyIHBhZ2VzLCBMSUZP
IGJhdGNoOjcKLS0tLS0tLS0tLS0tWyBjdXQgaGVyZSBdLS0tLS0tLS0tLS0tCldBUk5JTkc6IGF0
IGFyY2gvYXJtL21hY2gtdmV4cHJlc3MvdjJtLmM6NjEzIHYybV9kdF9pbml0X2Vhcmx5KzB4YWMv
MHhlYygpCk1vZHVsZXMgbGlua2VkIGluOgpCYWNrdHJhY2U6IApbPDgwMDExYjBjPl0gKGR1bXBf
YmFja3RyYWNlKzB4MC8weDEwYykgZnJvbSBbPDgwMmQ3ZWQwPl0gKGR1bXBfc3RhY2srMHgxOC8w
eDFjKQogcjY6MDAwMDAyNjUgcjU6ODAzYTZlMzQgcjQ6MDAwMDAwMDAgcjM6ODAzY2Y5M2MKWzw4
MDJkN2ViOD5dIChkdW1wX3N0YWNrKzB4MC8weDFjKSBmcm9tIFs8ODAwMWIxZGM+XSAod2Fybl9z
bG93cGF0aF9jb21tb24rMHg1NC8weDZjKQpbPDgwMDFiMTg4Pl0gKHdhcm5fc2xvd3BhdGhfY29t
bW9uKzB4MC8weDZjKSBmcm9tIFs8ODAwMWIyMTg+XSAod2Fybl9zbG93cGF0aF9udWxsKzB4MjQv
MHgyYykKIHI4OjgwM2NkMzM4IHI3OjgwNTA4NDQwIHI2OjgwMDAwMjAwIHI1OjgwM2YzYjg4IHI0
OjAwMDAwMDAwCnIzOjAwMDAwMDA5Cls8ODAwMWIxZjQ+XSAod2Fybl9zbG93cGF0aF9udWxsKzB4
MC8weDJjKSBmcm9tIFs8ODAzYTZlMzQ+XSAodjJtX2R0X2luaXRfZWFybHkrMHhhYy8weGVjKQpb
PDgwM2E2ZDg4Pl0gKHYybV9kdF9pbml0X2Vhcmx5KzB4MC8weGVjKSBmcm9tIFs8ODAzYTM2N2M+
XSAoc2V0dXBfYXJjaCsweDcxMC8weDdmYykKIHI0OjgwM2JhOTI4Cls8ODAzYTJmNmM+XSAoc2V0
dXBfYXJjaCsweDAvMHg3ZmMpIGZyb20gWzw4MDNhMTU5Yz5dIChzdGFydF9rZXJuZWwrMHg3OC8w
eDI2YykKWzw4MDNhMTUyND5dIChzdGFydF9rZXJuZWwrMHgwLzB4MjZjKSBmcm9tIFs8ODAwMDgw
NDA+XSAoMHg4MDAwODA0MCkKIHI3OjgwM2NkMjg0IHI2OjgwM2JiY2NjIHI1OjgwM2NhMDU0IHI0
OjEwYzUzYzdkCi0tLVsgZW5kIHRyYWNlIDFiNzViMzFhMjcxOWVkMWMgXS0tLQp2ZXhwcmVzczog
RFQgSEJJICgyMzcpIGlzIG5vdCBtYXRjaGluZyBoYXJkd2FyZSAoMCkhCnBjcHUtYWxsb2M6IHMw
IHIwIGQzMjc2OCB1MzI3NjggYWxsb2M9MSozMjc2OApwY3B1LWFsbG9jOiBbMF0gMCAKQnVpbHQg
MSB6b25lbGlzdHMgaW4gWm9uZSBvcmRlciwgbW9iaWxpdHkgZ3JvdXBpbmcgb24uICBUb3RhbCBw
YWdlczogMzI1MTIKS2VybmVsIGNvbW1hbmQgbGluZTogZWFybHlwcmludGsgY29uc29sZT10dHlB
TUExIHJvb3Q9L2Rldi9tbWNibGswIGRlYnVnIHJ3ClBJRCBoYXNoIHRhYmxlIGVudHJpZXM6IDUx
MiAob3JkZXI6IC0xLCAyMDQ4IGJ5dGVzKQpEZW50cnkgY2FjaGUgaGFzaCB0YWJsZSBlbnRyaWVz
OiAxNjM4NCAob3JkZXI6IDQsIDY1NTM2IGJ5dGVzKQpJbm9kZS1jYWNoZSBoYXNoIHRhYmxlIGVu
dHJpZXM6IDgxOTIgKG9yZGVyOiAzLCAzMjc2OCBieXRlcykKTWVtb3J5OiAxMjhNQiA9IDEyOE1C
IHRvdGFsCk1lbW9yeTogMTI1NzY0ay8xMjU3NjRrIGF2YWlsYWJsZSwgNTMwOGsgcmVzZXJ2ZWQs
IDBLIGhpZ2htZW0KVmlydHVhbCBrZXJuZWwgbWVtb3J5IGxheW91dDoKICAgIHZlY3RvciAgOiAw
eGZmZmYwMDAwIC0gMHhmZmZmMTAwMCAgICggICA0IGtCKQogICAgZml4bWFwICA6IDB4ZmZmMDAw
MDAgLSAweGZmZmUwMDAwICAgKCA4OTYga0IpCiAgICB2bWFsbG9jIDogMHg4ODgwMDAwMCAtIDB4
ZmYwMDAwMDAgICAoMTg5NiBNQikKICAgIGxvd21lbSAgOiAweDgwMDAwMDAwIC0gMHg4ODAwMDAw
MCAgICggMTI4IE1CKQogICAgbW9kdWxlcyA6IDB4N2YwMDAwMDAgLSAweDgwMDAwMDAwICAgKCAg
MTYgTUIpCiAgICAgIC50ZXh0IDogMHg4MDAwODAwMCAtIDB4ODAzYTEwMDAgICAoMzY4NCBrQikK
ICAgICAgLmluaXQgOiAweDgwM2ExMDAwIC0gMHg4MDNjMTU2OCAgICggMTMwIGtCKQogICAgICAu
ZGF0YSA6IDB4ODAzYzIwMDAgLSAweDgwM2ViNWMwICAgKCAxNjYga0IpCiAgICAgICAuYnNzIDog
MHg4MDNlYjVlNCAtIDB4ODA0MDcxNjQgICAoIDExMSBrQikKU0xVQjogR2Vuc2xhYnM9MTEsIEhX
YWxpZ249NjQsIE9yZGVyPTAtMywgTWluT2JqZWN0cz0wLCBDUFVzPTEsIE5vZGVzPTEKTlJfSVJR
UzoyNTYKYXJjaF90aW1lcjogY2FuJ3QgZmluZCBEVCBub2RlCkFyY2hpdGVjdGVkIGxvY2FsIHRp
bWVyIHJ1bm5pbmcgYXQgMTAwLjAwTUh6LgpzY2hlZF9jbG9jazogMzIgYml0cyBhdCAxMDBNSHos
IHJlc29sdXRpb24gMTBucywgd3JhcHMgZXZlcnkgNDI5NDltcwpDb25zb2xlOiBjb2xvdXIgZHVt
bXkgZGV2aWNlIDgweDMwCkNhbGlicmF0aW5nIGRlbGF5IGxvb3AuLi4gOTguNzEgQm9nb01JUFMg
KGxwaj00OTM1NjgpCgpwaWRfbWF4OiBkZWZhdWx0OiAzMjc2OCBtaW5pbXVtOiAzMDEKTW91bnQt
Y2FjaGUgaGFzaCB0YWJsZSBlbnRyaWVzOiA1MTIKQ1BVOiBUZXN0aW5nIHdyaXRlIGJ1ZmZlciBj
b2hlcmVuY3k6IG9rClNldHRpbmcgdXAgc3RhdGljIGlkZW50aXR5IG1hcCBmb3IgMHg4MDJkYzEz
OCAtIDB4ODAyZGMxNmMKWGVuIHN1cHBvcnQgZm91bmQsIGV2ZW50c19pcnE9MzEgZ250dGFiX2Zy
YW1lX3Bmbj1iMDAwMApHcmFudCB0YWJsZXMgdXNpbmcgdmVyc2lvbiAxIGxheW91dC4KR3JhbnQg
dGFibGUgaW5pdGlhbGl6ZWQKTkVUOiBSZWdpc3RlcmVkIHByb3RvY29sIGZhbWlseSAxNgotLS0t
LS0tLS0tLS1bIGN1dCBoZXJlIF0tLS0tLS0tLS0tLS0KV0FSTklORzogYXQga2VybmVsL2lycS9p
cnFkb21haW4uYzoxMzUgaXJxX2RvbWFpbl9sZWdhY3lfcmV2bWFwKzB4MjgvMHg1MCgpCk1vZHVs
ZXMgbGlua2VkIGluOgpCYWNrdHJhY2U6IApbPDgwMDExYjBjPl0gKGR1bXBfYmFja3RyYWNlKzB4
MC8weDEwYykgZnJvbSBbPDgwMmQ3ZWQwPl0gKGR1bXBfc3RhY2srMHgxOC8weDFjKQogcjY6MDAw
MDAwODcgcjU6ODAwNTEzMjAgcjQ6MDAwMDAwMDAgcjM6ODAzY2Y5M2MKWzw4MDJkN2ViOD5dIChk
dW1wX3N0YWNrKzB4MC8weDFjKSBmcm9tIFs8ODAwMWIxZGM+XSAod2Fybl9zbG93cGF0aF9jb21t
b24rMHg1NC8weDZjKQpbPDgwMDFiMTg4Pl0gKHdhcm5fc2xvd3BhdGhfY29tbW9uKzB4MC8weDZj
KSBmcm9tIFs8ODAwMWIyMTg+XSAod2Fybl9zbG93cGF0aF9udWxsKzB4MjQvMHgyYykKIHI4Ojg3
ODRlMTAwIHI3OjAwMDAwMDAzIHI2OjAwMDAwMDY0IHI1Ojg3ODAwNDQwIHI0OjgwM2Q4OTUwCnIz
OjAwMDAwMDA5Cls8ODAwMWIxZjQ+XSAod2Fybl9zbG93cGF0aF9udWxsKzB4MC8weDJjKSBmcm9t
IFs8ODAwNTEzMjA+XSAoaXJxX2RvbWFpbl9sZWdhY3lfcmV2bWFwKzB4MjgvMHg1MCkKWzw4MDA1
MTJmOD5dIChpcnFfZG9tYWluX2xlZ2FjeV9yZXZtYXArMHgwLzB4NTApIGZyb20gWzw4MDA1MTNl
OD5dIChpcnFfZmluZF9tYXBwaW5nKzB4YTAvMHhkMCkKWzw4MDA1MTM0OD5dIChpcnFfZmluZF9t
YXBwaW5nKzB4MC8weGQwKSBmcm9tIFs8ODAwNTE4MjQ+XSAoaXJxX2NyZWF0ZV9tYXBwaW5nKzB4
MjgvMHgxMjgpCiByODo4Nzg0ZTEwMCByNzowMDAwMDAwMyByNjowMDAwMDA2NCByNTo4NzgyZmRk
MCByNDo4NzgwMDQ0MApyMzo4NzgyZmRhNApbPDgwMDUxN2ZjPl0gKGlycV9jcmVhdGVfbWFwcGlu
ZysweDAvMHgxMjgpIGZyb20gWzw4MDA1MTlhOD5dIChpcnFfY3JlYXRlX29mX21hcHBpbmcrMHg4
NC8weGY4KQogcjc6MDAwMDAwMDMgcjY6ODA1MDg4YTggcjU6ODc4MmZkZDAgcjQ6ODc4MDA0NDAK
Wzw4MDA1MTkyND5dIChpcnFfY3JlYXRlX29mX21hcHBpbmcrMHgwLzB4ZjgpIGZyb20gWzw4MDIz
Y2RhOD5dIChpcnFfb2ZfcGFyc2VfYW5kX21hcCsweDM0LzB4M2MpCiByNzowMDAwMDAwMCByNjo4
MDUwODlkYyByNTowMDAwMDAwMCByNDowMDAwMDAwMApbPDgwMjNjZDc0Pl0gKGlycV9vZl9wYXJz
ZV9hbmRfbWFwKzB4MC8weDNjKSBmcm9tIFs8ODAyM2NkZDA+XSAob2ZfaXJxX3RvX3Jlc291cmNl
KzB4MjAvMHg3YykKWzw4MDIzY2RiMD5dIChvZl9pcnFfdG9fcmVzb3VyY2UrMHgwLzB4N2MpIGZy
b20gWzw4MDIzY2U1OD5dIChvZl9pcnFfY291bnQrMHgyYy8weDNjKQogcjc6MDAwMDAwMDAgcjY6
ODA1MDg5ZGMgcjU6ODA1MDg5ZGMgcjQ6MDAwMDAwMDAKWzw4MDIzY2UyYz5dIChvZl9pcnFfY291
bnQrMHgwLzB4M2MpIGZyb20gWzw4MDIzZDQxYz5dIChvZl9kZXZpY2VfYWxsb2MrMHg1Yy8weDE1
YykKIHI1OjAwMDAwMDAwIHI0OjAwMDAwMDAwCls8ODAyM2QzYzA+XSAob2ZfZGV2aWNlX2FsbG9j
KzB4MC8weDE1YykgZnJvbSBbPDgwMjNkNTU4Pl0gKG9mX3BsYXRmb3JtX2RldmljZV9jcmVhdGVf
cGRhdGErMHgzYy8weDg4KQpbPDgwMjNkNTFjPl0gKG9mX3BsYXRmb3JtX2RldmljZV9jcmVhdGVf
cGRhdGErMHgwLzB4ODgpIGZyb20gWzw4MDIzZDY3OD5dIChvZl9wbGF0Zm9ybV9idXNfY3JlYXRl
KzB4ZDQvMHgyNzgpCiByNzowMDAwMDAwMSByNjowMDAwMDAwMCByNTo4MDNiYzFkYyByNDo4MDUw
ODlkYwpbPDgwMjNkNWE0Pl0gKG9mX3BsYXRmb3JtX2J1c19jcmVhdGUrMHgwLzB4Mjc4KSBmcm9t
IFs8ODAyM2Q4ODQ+XSAob2ZfcGxhdGZvcm1fcG9wdWxhdGUrMHg2OC8weGEwKQpbPDgwMjNkODFj
Pl0gKG9mX3BsYXRmb3JtX3BvcHVsYXRlKzB4MC8weGEwKSBmcm9tIFs8ODAzYTZiY2M+XSAodjJt
X2R0X2luaXQrMHgyYy8weDRjKQpbPDgwM2E2YmEwPl0gKHYybV9kdF9pbml0KzB4MC8weDRjKSBm
cm9tIFs8ODAzYTJiZWM+XSAoY3VzdG9taXplX21hY2hpbmUrMHgyNC8weDMwKQpbPDgwM2EyYmM4
Pl0gKGN1c3RvbWl6ZV9tYWNoaW5lKzB4MC8weDMwKSBmcm9tIFs8ODAwMDg2M2M+XSAoZG9fb25l
X2luaXRjYWxsKzB4NDAvMHgxODQpCls8ODAwMDg1ZmM+XSAoZG9fb25lX2luaXRjYWxsKzB4MC8w
eDE4NCkgZnJvbSBbPDgwM2ExODgwPl0gKGtlcm5lbF9pbml0KzB4ZjAvMHgxYWMpCls8ODAzYTE3
OTA+XSAoa2VybmVsX2luaXQrMHgwLzB4MWFjKSBmcm9tIFs8ODAwMWY3YjQ+XSAoZG9fZXhpdCsw
eDAvMHg2YmMpCi0tLVsgZW5kIHRyYWNlIDFiNzViMzFhMjcxOWVkMWQgXS0tLQotLS0tLS0tLS0t
LS1bIGN1dCBoZXJlIF0tLS0tLS0tLS0tLS0KV0FSTklORzogYXQga2VybmVsL2lycS9pcnFkb21h
aW4uYzoxMzUgaXJxX2RvbWFpbl9sZWdhY3lfcmV2bWFwKzB4MjgvMHg1MCgpCk1vZHVsZXMgbGlu
a2VkIGluOgpCYWNrdHJhY2U6IApbPDgwMDExYjBjPl0gKGR1bXBfYmFja3RyYWNlKzB4MC8weDEw
YykgZnJvbSBbPDgwMmQ3ZWQwPl0gKGR1bXBfc3RhY2srMHgxOC8weDFjKQogcjY6MDAwMDAwODcg
cjU6ODAwNTEzMjAgcjQ6MDAwMDAwMDAgcjM6ODAzY2Y5M2MKWzw4MDJkN2ViOD5dIChkdW1wX3N0
YWNrKzB4MC8weDFjKSBmcm9tIFs8ODAwMWIxZGM+XSAod2Fybl9zbG93cGF0aF9jb21tb24rMHg1
NC8weDZjKQpbPDgwMDFiMTg4Pl0gKHdhcm5fc2xvd3BhdGhfY29tbW9uKzB4MC8weDZjKSBmcm9t
IFs8ODAwMWIyMTg+XSAod2Fybl9zbG93cGF0aF9udWxsKzB4MjQvMHgyYykKIHI4Ojg3ODRlMTAw
IHI3OjAwMDAwMDAzIHI2OjAwMDAwMDY0IHI1OjAwMDAwMDAwIHI0Ojg3ODAwNDQwCnIzOjAwMDAw
MDA5Cls8ODAwMWIxZjQ+XSAod2Fybl9zbG93cGF0aF9udWxsKzB4MC8weDJjKSBmcm9tIFs8ODAw
NTEzMjA+XSAoaXJxX2RvbWFpbl9sZWdhY3lfcmV2bWFwKzB4MjgvMHg1MCkKWzw4MDA1MTJmOD5d
IChpcnFfZG9tYWluX2xlZ2FjeV9yZXZtYXArMHgwLzB4NTApIGZyb20gWzw4MDA1MThhYz5dIChp
cnFfY3JlYXRlX21hcHBpbmcrMHhiMC8weDEyOCkKWzw4MDA1MTdmYz5dIChpcnFfY3JlYXRlX21h
cHBpbmcrMHgwLzB4MTI4KSBmcm9tIFs8ODAwNTE5YTg+XSAoaXJxX2NyZWF0ZV9vZl9tYXBwaW5n
KzB4ODQvMHhmOCkKIHI3OjAwMDAwMDAzIHI2OjgwNTA4OGE4IHI1Ojg3ODJmZGQwIHI0Ojg3ODAw
NDQwCls8ODAwNTE5MjQ+XSAoaXJxX2NyZWF0ZV9vZl9tYXBwaW5nKzB4MC8weGY4KSBmcm9tIFs8
ODAyM2NkYTg+XSAoaXJxX29mX3BhcnNlX2FuZF9tYXArMHgzNC8weDNjKQogcjc6MDAwMDAwMDAg
cjY6ODA1MDg5ZGMgcjU6MDAwMDAwMDAgcjQ6MDAwMDAwMDAKWzw4MDIzY2Q3ND5dIChpcnFfb2Zf
cGFyc2VfYW5kX21hcCsweDAvMHgzYykgZnJvbSBbPDgwMjNjZGQwPl0gKG9mX2lycV90b19yZXNv
dXJjZSsweDIwLzB4N2MpCls8ODAyM2NkYjA+XSAob2ZfaXJxX3RvX3Jlc291cmNlKzB4MC8weDdj
KSBmcm9tIFs8ODAyM2NlNTg+XSAob2ZfaXJxX2NvdW50KzB4MmMvMHgzYykKIHI3OjAwMDAwMDAw
IHI2OjgwNTA4OWRjIHI1OjgwNTA4OWRjIHI0OjAwMDAwMDAwCls8ODAyM2NlMmM+XSAob2ZfaXJx
X2NvdW50KzB4MC8weDNjKSBmcm9tIFs8ODAyM2Q0MWM+XSAob2ZfZGV2aWNlX2FsbG9jKzB4NWMv
MHgxNWMpCiByNTowMDAwMDAwMCByNDowMDAwMDAwMApbPDgwMjNkM2MwPl0gKG9mX2RldmljZV9h
bGxvYysweDAvMHgxNWMpIGZyb20gWzw4MDIzZDU1OD5dIChvZl9wbGF0Zm9ybV9kZXZpY2VfY3Jl
YXRlX3BkYXRhKzB4M2MvMHg4OCkKWzw4MDIzZDUxYz5dIChvZl9wbGF0Zm9ybV9kZXZpY2VfY3Jl
YXRlX3BkYXRhKzB4MC8weDg4KSBmcm9tIFs8ODAyM2Q2Nzg+XSAob2ZfcGxhdGZvcm1fYnVzX2Ny
ZWF0ZSsweGQ0LzB4Mjc4KQogcjc6MDAwMDAwMDEgcjY6MDAwMDAwMDAgcjU6ODAzYmMxZGMgcjQ6
ODA1MDg5ZGMKWzw4MDIzZDVhND5dIChvZl9wbGF0Zm9ybV9idXNfY3JlYXRlKzB4MC8weDI3OCkg
ZnJvbSBbPDgwMjNkODg0Pl0gKG9mX3BsYXRmb3JtX3BvcHVsYXRlKzB4NjgvMHhhMCkKWzw4MDIz
ZDgxYz5dIChvZl9wbGF0Zm9ybV9wb3B1bGF0ZSsweDAvMHhhMCkgZnJvbSBbPDgwM2E2YmNjPl0g
KHYybV9kdF9pbml0KzB4MmMvMHg0YykKWzw4MDNhNmJhMD5dICh2Mm1fZHRfaW5pdCsweDAvMHg0
YykgZnJvbSBbPDgwM2EyYmVjPl0gKGN1c3RvbWl6ZV9tYWNoaW5lKzB4MjQvMHgzMCkKWzw4MDNh
MmJjOD5dIChjdXN0b21pemVfbWFjaGluZSsweDAvMHgzMCkgZnJvbSBbPDgwMDA4NjNjPl0gKGRv
X29uZV9pbml0Y2FsbCsweDQwLzB4MTg0KQpbPDgwMDA4NWZjPl0gKGRvX29uZV9pbml0Y2FsbCsw
eDAvMHgxODQpIGZyb20gWzw4MDNhMTg4MD5dIChrZXJuZWxfaW5pdCsweGYwLzB4MWFjKQpbPDgw
M2ExNzkwPl0gKGtlcm5lbF9pbml0KzB4MC8weDFhYykgZnJvbSBbPDgwMDFmN2I0Pl0gKGRvX2V4
aXQrMHgwLzB4NmJjKQotLS1bIGVuZCB0cmFjZSAxYjc1YjMxYTI3MTllZDFlIF0tLS0KU2VyaWFs
OiBBTUJBIFBMMDExIFVBUlQgZHJpdmVyCjFjMDkwMDAwLnVhcnQ6IHR0eUFNQTAgYXQgTU1JTyAw
eDFjMDkwMDAwIChpcnEgPSAzNykgaXMgYSBQTDAxMSByZXYxCjFjMGEwMDAwLnVhcnQ6IHR0eUFN
QTEgYXQgTU1JTyAweDFjMGEwMDAwIChpcnEgPSAzOCkgaXMgYSBQTDAxMSByZXYxCmNvbnNvbGUg
W3R0eUFNQTFdIGVuYWJsZWQsIGJvb3Rjb25zb2xlIGRpc2FibGVkCihYRU4pIGJhZCBwMm0gbG9v
a3VwCihYRU4pIGRvbTEgSVBBIDB4MDAwMDAwMDA5MDAwMDAwMAooWEVOKSBQMk0gQCAwMmZmY2Fj
MCBtZm46MHhmZmU1NgooWEVOKSAxU1RbMHgyXSA9IDB4MDAwMDAwMDBmM2Y2ODZmZgooWEVOKSAy
TkRbMHg4MF0gPSAweDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgYmFkIHAybSBsb29rdXAKKFhFTikg
ZG9tMSBJUEEgMHgwMDAwMDAwMDkwMDAxMDAwCihYRU4pIFAyTSBAIDAyZmZjYWMwIG1mbjoweGZm
ZTU2CihYRU4pIDFTVFsweDJdID0gMHgwMDAwMDAwMGYzZjY4NmZmCihYRU4pIDJORFsweDgwXSA9
IDB4MDAwMDAwMDAwMDAwMDAwMAooWEVOKSBiYWQgcDJtIGxvb2t1cAooWEVOKSBkb20xIElQQSAw
eDAwMDAwMDAwOTAwMDEwMDAKKFhFTikgUDJNIEAgMDJmZmNhYzAgbWZuOjB4ZmZlNTYKKFhFTikg
MVNUWzB4Ml0gPSAweDAwMDAwMDAwZjNmNjg2ZmYKKFhFTikgMk5EWzB4ODBdID0gMHgwMDAwMDAw
MDAwMDAwMDAwCihYRU4pIERPTTE6IFVuY29tcHJlc3NpbmcgTGludXguLi4gZG9uZSwgYm9vdGlu
ZyB0aGUga2VybmVsLgooWEVOKSBET00xOiBCb290aW5nIExpbnV4IG9uIHBoeXNpY2FsIENQVSAw
CihYRU4pIERPTTE6IExpbnV4IHZlcnNpb24gMy41LjAtcmM3KyAocm9vdEB0dXopIChnY2MgdmVy
c2lvbiA0LjYuMyAoVWJ1bnR1L0xpbmFybyA0LjYuMy0xdWJ1bnR1NSkgKSAjMSBNb24gQXVnIDYg
MjA6NTQ6MjQgTVNLIDIwMTIKKFhFTikgRE9NMTogQ1BVOiBBUk12NyBQcm9jZXNzb3IgWzQxMmZj
MGYwXSByZXZpc2lvbiAwIChBUk12NyksIGNyPTEwYzUzYzdkCihYRU4pIERPTTE6IENQVTogUElQ
VCAvIFZJUFQgbm9uYWxpYXNpbmcgZGF0YSBjYWNoZSwgUElQVCBpbnN0cnVjdGlvbiBjYWNoZQoo
WEVOKSBET00xOiBNYWNoaW5lOiBBUk0tVmVyc2F0aWxlIEV4cHJlc3MsIG1vZGVsOiBWMlAtQUVN
djdBCihYRU4pIERPTTE6IGJvb3Rjb25zb2xlIFtlYXJseWNvbjBdIGVuYWJsZWQKKFhFTikgRE9N
MTogTWVtb3J5IHBvbGljeTogRUNDIGRpc2FibGVkLCBEYXRhIGNhY2hlIHdyaXRlYmFjawooWEVO
KSBET00xOiBPbiBub2RlIDAgdG90YWxwYWdlczogMzI3NjgKKFhFTikgRE9NMTogZnJlZV9hcmVh
X2luaXRfbm9kZTogbm9kZSAwLCBwZ2RhdCA4MDNlYWU5NCwgbm9kZV9tZW1fbWFwIDgwNDA4MDAw
CihYRU4pIERPTTE6ICAgTm9ybWFsIHpvbmU6IDI1NiBwYWdlcyB1c2VkIGZvciBtZW1tYXAKKFhF
TikgRE9NMTogICBOb3JtYWwgem9uZTogMCBwYWdlcyByZXNlcnZlZAooWEVOKSBET00xOiAgIE5v
cm1hbCB6b25lOiAzMjUxMiBwYWdlcywgTElGTyBiYXRjaDo3CihYRU4pIERPTTE6IC0tLS0tLS0t
LS0tLVsgY3V0IGhlcmUgXS0tLS0tLS0tLS0tLQooWEVOKSBET00xOiBXQVJOSU5HOiBhdCBhcmNo
L2FybS9tYWNoLXZleHByZXNzL3YybS5jOjYwMyB2Mm1fZHRfaW5pdF9lYXJseSsweDQ0LzB4ZWMo
KQooWEVOKSBET00xOiBNb2R1bGVzIGxpbmtlZCBpbjoKKFhFTikgRE9NMTogQmFja3RyYWNlOiAK
KFhFTikgRE9NMTogWzw4MDAxMWIwYz5dIChkdW1wX2JhY2t0cmFjZSsweDAvMHgxMGMpIGZyb20g
Wzw4MDJkN2VkMD5dIChkdW1wX3N0YWNrKzB4MTgvMHgxYykKKFhFTikgRE9NMTogIHI2OjAwMDAw
MjViIHI1OjgwM2E2ZGNjIHI0OjAwMDAwMDAwIHIzOjgwM2NmOTNjCihYRU4pIERPTTE6IFs8ODAy
ZDdlYjg+XSAoZHVtcF9zdGFjaysweDAvMHgxYykgZnJvbSBbPDgwMDFiMWRjPl0gKHdhcm5fc2xv
d3BhdGhfY29tbW9uKzB4NTQvMHg2YykKKFhFTikgRE9NMTogWzw4MDAxYjE4OD5dICh3YXJuX3Ns
b3dwYXRoX2NvbW1vbisweDAvMHg2YykgZnJvbSBbPDgwMDFiMjE4Pl0gKHdhcm5fc2xvd3BhdGhf
bnVsbCsweDI0LzB4MmMpCihYRU4pIERPTTE6ICByODo4MDNjZDMzOCByNzo4MDUwODQ0MCByNjo4
MDAwMDIwMCByNTo4MDNmM2I4OCByNDo4MDNlYzAzOAooWEVOKSBET00xOiByMzowMDAwMDAwOQoK
KFhFTikgRE9NMTogWzw4MDAxYjFmND5dICh3YXJuX3Nsb3dwYXRoX251bGwrMHgwLzB4MmMpIGZy
b20gWzw4MDNhNmRjYz5dICh2Mm1fZHRfaW5pdF9lYXJseSsweDQ0LzB4ZWMpCihYRU4pIERPTTE6
IFs8ODAzYTZkODg+XSAodjJtX2R0X2luaXRfZWFybHkrMHgwLzB4ZWMpIGZyb20gWzw4MDNhMzY3
Yz5dIChzZXR1cF9hcmNoKzB4NzEwLzB4N2ZjKQooWEVOKSBET00xOiAgcjQ6ODAzYmE5MjgKKFhF
TikgRE9NMTogWzw4MDNhMmY2Yz5dIChzZXR1cF9hcmNoKzB4MC8weDdmYykgZnJvbSBbPDgwM2Ex
NTljPl0gKHN0YXJ0X2tlcm5lbCsweDc4LzB4MjZjKQooWEVOKSBET00xOiBbPDgwM2ExNTI0Pl0g
KHN0YXJ0X2tlcm5lbCsweDAvMHgyNmMpIGZyb20gWzw4MDAwODA0MD5dICgweDgwMDA4MDQwKQoo
WEVOKSBET00xOiAgcjc6ODAzY2QyODQgcjY6ODAzYmJjY2MgcjU6ODAzY2EwNTQgcjQ6MTBjNTNj
N2QKKFhFTikgRE9NMTogLS0tWyBlbmQgdHJhY2UgMWI3NWIzMWEyNzE5ZWQxYyBdLS0tCihYRU4p
IERPTTE6IHBjcHUtYWxsb2M6IHMwIHIwIGQzMjc2OCB1MzI3NjggYWxsb2M9MSozMjc2OAooWEVO
KSBET00xOiBwY3B1LWFsbG9jOiBbMF0gMCAKKFhFTikgRE9NMTogQnVpbHQgMSB6b25lbGlzdHMg
aW4gWm9uZSBvcmRlciwgbW9iaWxpdHkgZ3JvdXBpbmcgb24uICBUb3RhbCBwYWdlczogMzI1MTIK
KFhFTikgRE9NMTogS2VybmVsIGNvbW1hbmQgbGluZTogZWFybHlwcmludGsgZGVidWcgbG9nbGV2
ZWw9OSBjb25zb2xlPWh2YzAgcm9vdD0vZGV2L3h2ZGEgaW5pdD0vc2Jpbi9pbml0CihYRU4pIERP
TTE6IFBJRCBoYXNoIHRhYmxlIGVudHJpZXM6IDUxMiAob3JkZXI6IC0xLCAyMDQ4IGJ5dGVzKQoo
WEVOKSBET00xOiBEZW50cnkgY2FjaGUgaGFzaCB0YWJsZSBlbnRyaWVzOiAxNjM4NCAob3JkZXI6
IDQsIDY1NTM2IGJ5dGVzKQooWEVOKSBET00xOiBJbm9kZS1jYWNoZSBoYXNoIHRhYmxlIGVudHJp
ZXM6IDgxOTIgKG9yZGVyOiAzLCAzMjc2OCBieXRlcykKKFhFTikgRE9NMTogTWVtb3J5OiAxMjhN
QiA9IDEyOE1CIHRvdGFsCihYRU4pIERPTTE6IE1lbW9yeTogMTI1Nzcyay8xMjU3NzJrIGF2YWls
YWJsZSwgNTMwMGsgcmVzZXJ2ZWQsIDBLIGhpZ2htZW0KKFhFTikgRE9NMTogVmlydHVhbCBrZXJu
ZWwgbWVtb3J5IGxheW91dDoKKFhFTikgRE9NMTogICAgIHZlY3RvciAgOiAweGZmZmYwMDAwIC0g
MHhmZmZmMTAwMCAgICggICA0IGtCKQooWEVOKSBET00xOiAgICAgZml4bWFwICA6IDB4ZmZmMDAw
MDAgLSAweGZmZmUwMDAwICAgKCA4OTYga0IpCihYRU4pIERPTTE6ICAgICB2bWFsbG9jIDogMHg4
ODgwMDAwMCAtIDB4ZmYwMDAwMDAgICAoMTg5NiBNQikKKFhFTikgRE9NMTogICAgIGxvd21lbSAg
OiAweDgwMDAwMDAwIC0gMHg4ODAwMDAwMCAgICggMTI4IE1CKQooWEVOKSBET00xOiAgICAgbW9k
dWxlcyA6IDB4N2YwMDAwMDAgLSAweDgwMDAwMDAwICAgKCAgMTYgTUIpCihYRU4pIERPTTE6ICAg
ICAgIC50ZXh0IDogMHg4MDAwODAwMCAtIDB4ODAzYTEwMDAgICAoMzY4NCBrQikKKFhFTikgRE9N
MTogICAgICAgLmluaXQgOiAweDgwM2ExMDAwIC0gMHg4MDNjMTU2OCAgICggMTMwIGtCKQooWEVO
KSBET00xOiAgICAgICAuZGF0YSA6IDB4ODAzYzIwMDAgLSAweDgwM2ViNWMwICAgKCAxNjYga0Ip
CihYRU4pIERPTTE6ICAgICAgICAuYnNzIDogMHg4MDNlYjVlNCAtIDB4ODA0MDcxNjQgICAoIDEx
MSBrQikKKFhFTikgRE9NMTogU0xVQjogR2Vuc2xhYnM9MTEsIEhXYWxpZ249NjQsIE9yZGVyPTAt
MywgTWluT2JqZWN0cz0wLCBDUFVzPTEsIE5vZGVzPTEKKFhFTikgRE9NMTogTlJfSVJRUzoyNTYK
KFhFTikgRE9NMTogLS0tLS0tLS0tLS0tWyBjdXQgaGVyZSBdLS0tLS0tLS0tLS0tCihYRU4pIERP
TTE6IFdBUk5JTkc6IGF0IGFyY2gvYXJtL21hY2gtdmV4cHJlc3MvdjJtLmM6NjIgdjJtX3N5c2N0
bF9pbml0KzB4MjAvMHg1OCgpCihYRU4pIERPTTE6IE1vZHVsZXMgbGlua2VkIGluOgooWEVOKSBE
T00xOiBCYWNrdHJhY2U6IAooWEVOKSBET00xOiBbPDgwMDExYjBjPl0gKGR1bXBfYmFja3RyYWNl
KzB4MC8weDEwYykgZnJvbSBbPDgwMmQ3ZWQwPl0gKGR1bXBfc3RhY2srMHgxOC8weDFjKQooWEVO
KSBET00xOiAgcjY6MDAwMDAwM2UgcjU6ODAzYTY5YmMgcjQ6MDAwMDAwMDAgcjM6ODAzY2Y5M2MK
KFhFTikgRE9NMTogWzw4MDJkN2ViOD5dIChkdW1wX3N0YWNrKzB4MC8weDFjKSBmcm9tIFs8ODAw
MWIxZGM+XSAod2Fybl9zbG93cGF0aF9jb21tb24rMHg1NC8weDZjKQooWEVOKSBET00xOiBbPDgw
MDFiMTg4Pl0gKHdhcm5fc2xvd3BhdGhfY29tbW9uKzB4MC8weDZjKSBmcm9tIFs8ODAwMWIyMTg+
XSAod2Fybl9zbG93cGF0aF9udWxsKzB4MjQvMHgyYykKKFhFTikgRE9NMTogIHI4OjgwMDA0MDU5
IHI3OjgwNTA4YmMwIHI2OjgwM2JiY2QwIHI1OjgwM2ViNjAwIHI0OjAwMDAwMDAwCihYRU4pIERP
TTE6IHIzOjAwMDAwMDA5CihYRU4pIERPTTE6IFs8ODAwMWIxZjQ+XSAod2Fybl9zbG93cGF0aF9u
dWxsKzB4MC8weDJjKSBmcm9tIFs8ODAzYTY5YmM+XSAodjJtX3N5c2N0bF9pbml0KzB4MjAvMHg1
OCkKKFhFTikgRE9NMTogWzw4MDNhNjk5Yz5dICh2Mm1fc3lzY3RsX2luaXQrMHgwLzB4NTgpIGZy
b20gWzw4MDNhNmFiYz5dICh2Mm1fZHRfdGltZXJfaW5pdCsweDJjLzB4Y2MpCihYRU4pIERPTTE6
ICByNTo4MDNlYjYwMCByNDpmZmZmZmZmZgooWEVOKSBET00xOiBbPDgwM2E2YTkwPl0gKHYybV9k
dF90aW1lcl9pbml0KzB4MC8weGNjKSBmcm9tIFs8ODAzYTM4MTA+XSAodGltZV9pbml0KzB4Mjgv
MHgzOCkKKFhFTikgRE9NMTogIHI2OjgwM2JiY2QwIHI1OjgwM2ViNjAwIHI0OmZmZmZmZmZmCihY
RU4pIERPTTE6IFs8ODAzYTM3ZTg+XSAodGltZV9pbml0KzB4MC8weDM4KSBmcm9tIFs8ODAzYTE2
YWM+XSAoc3RhcnRfa2VybmVsKzB4MTg4LzB4MjZjKQooWEVOKSBET00xOiBbPDgwM2ExNTI0Pl0g
KHN0YXJ0X2tlcm5lbCsweDAvMHgyNmMpIGZyb20gWzw4MDAwODA0MD5dICgweDgwMDA4MDQwKQoo
WEVOKSBET00xOiAgcjc6ODAzY2QyODQgcjY6ODAzYmJjY2MgcjU6ODAzY2EwNTQgcjQ6MTBjNTNj
N2QKKFhFTikgRE9NMTogLS0tWyBlbmQgdHJhY2UgMWI3NWIzMWEyNzE5ZWQxZCBdLS0tCihYRU4p
IERPTTE6IGFyY2hfdGltZXI6IGZvdW5kIHRpbWVyIGlycXMgMjkgMzAKKFhFTikgRE9NMTogQXJj
aGl0ZWN0ZWQgbG9jYWwgdGltZXIgcnVubmluZyBhdCAxMDAuMDBNSHouCihYRU4pIERPTTE6IHNj
aGVkX2Nsb2NrOiAzMiBiaXRzIGF0IDEwME1IeiwgcmVzb2x1dGlvbiAxMG5zLCB3cmFwcyBldmVy
eSA0Mjk0OW1zCihYRU4pIERPTTE6IC0tLS0tLS0tLS0tLVsgY3V0IGhlcmUgXS0tLS0tLS0tLS0t
LQooWEVOKSBET00xOiBXQVJOSU5HOiBhdCBhcmNoL2FybS9tYWNoLXZleHByZXNzL3YybS5jOjY0
NyB2Mm1fZHRfdGltZXJfaW5pdCsweDdjLzB4Y2MoKQooWEVOKSBET00xOiBNb2R1bGVzIGxpbmtl
ZCBpbjoKKFhFTikgRE9NMTogQmFja3RyYWNlOiAKKFhFTikgRE9NMTogWzw4MDAxMWIwYz5dIChk
dW1wX2JhY2t0cmFjZSsweDAvMHgxMGMpIGZyb20gWzw4MDJkN2VkMD5dIChkdW1wX3N0YWNrKzB4
MTgvMHgxYykKKFhFTikgRE9NMTogIHI2OjAwMDAwMjg3IHI1OjgwM2E2YjBjIHI0OjAwMDAwMDAw
IHIzOjgwM2NmOTNjCihYRU4pIERPTTE6IFs8ODAyZDdlYjg+XSAoZHVtcF9zdGFjaysweDAvMHgx
YykgZnJvbSBbPDgwMDFiMWRjPl0gKHdhcm5fc2xvd3BhdGhfY29tbW9uKzB4NTQvMHg2YykKKFhF
TikgRE9NMTogWzw4MDAxYjE4OD5dICh3YXJuX3Nsb3dwYXRoX2NvbW1vbisweDAvMHg2YykgZnJv
bSBbPDgwMDFiMjE4Pl0gKHdhcm5fc2xvd3BhdGhfbnVsbCsweDI0LzB4MmMpCihYRU4pIERPTTE6
ICByODo4MDAwNDA1OSByNzo4MDUwOGJjMCByNjo4MDNiYmNkMCByNTo4MDNlYjYwMCByNDpmZmZm
ZmZlYQooWEVOKSBET00xOiByMzowMDAwMDAwOQoKKFhFTikgRE9NMTogWzw4MDAxYjFmND5dICh3
YXJuX3Nsb3dwYXRoX251bGwrMHgwLzB4MmMpIGZyb20gWzw4MDNhNmIwYz5dICh2Mm1fZHRfdGlt
ZXJfaW5pdCsweDdjLzB4Y2MpCihYRU4pIERPTTE6IFs8ODAzYTZhOTA+XSAodjJtX2R0X3RpbWVy
X2luaXQrMHgwLzB4Y2MpIGZyb20gWzw4MDNhMzgxMD5dICh0aW1lX2luaXQrMHgyOC8weDM4KQoo
WEVOKSBET00xOiAgcjY6ODAzYmJjZDAgcjU6ODAzZWI2MDAgcjQ6ZmZmZmZmZmYKKFhFTikgRE9N
MTogWzw4MDNhMzdlOD5dICh0aW1lX2luaXQrMHgwLzB4MzgpIGZyb20gWzw4MDNhMTZhYz5dIChz
dGFydF9rZXJuZWwrMHgxODgvMHgyNmMpCihYRU4pIERPTTE6IFs8ODAzYTE1MjQ+XSAoc3RhcnRf
a2VybmVsKzB4MC8weDI2YykgZnJvbSBbPDgwMDA4MDQwPl0gKDB4ODAwMDgwNDApCihYRU4pIERP
TTE6ICByNzo4MDNjZDI4NCByNjo4MDNiYmNjYyByNTo4MDNjYTA1NCByNDoxMGM1M2M3ZAooWEVO
KSBET00xOiAtLS1bIGVuZCB0cmFjZSAxYjc1YjMxYTI3MTllZDFlIF0tLS0KKFhFTikgRE9NMTog
Q29uc29sZTogY29sb3VyIGR1bW15IGRldmljZSA4MHgzMAooWEVOKSBET00xOiBDYWxpYnJhdGlu
ZyBkZWxheSBsb29wLi4uIDk4LjcxIEJvZ29NSVBTIChscGo9NDkzNTY4KQooWEVOKSBET00xOiBw
aWRfbWF4OiBkZWZhdWx0OiAzMjc2OCBtaW5pbXVtOiAzMDEKKFhFTikgRE9NMTogTW91bnQtY2Fj
aGUgaGFzaCB0YWJsZSBlbnRyaWVzOiA1MTIKKFhFTikgRE9NMTogQ1BVOiBUZXN0aW5nIHdyaXRl
IGJ1ZmZlciBjb2hlcmVuY3k6IG9rCihYRU4pIERPTTE6IFNldHRpbmcgdXAgc3RhdGljIGlkZW50
aXR5IG1hcCBmb3IgMHg4MDJkYzEzOCAtIDB4ODAyZGMxNmMKKFhFTikgRE9NMTogWGVuIHN1cHBv
cnQgZm91bmQsIGV2ZW50c19pcnE9MzEgZ250dGFiX2ZyYW1lX3Bmbj1iMDAwMAooWEVOKSBET00x
OiBHcmFudCB0YWJsZXMgdXNpbmcgdmVyc2lvbiAxIGxheW91dC4KKFhFTikgRE9NMTogR3JhbnQg
dGFibGUgaW5pdGlhbGl6ZWQKKFhFTikgRE9NMTogTkVUOiBSZWdpc3RlcmVkIHByb3RvY29sIGZh
bWlseSAxNgooWEVOKSBHdWVzdCBkYXRhIGFib3J0OiBUcmFuc2xhdGlvbiBmYXVsdCBhdCBsZXZl
bCAyCihYRU4pICAgICBndmE9ODg4MDg4MDQKKFhFTikgICAgIGdwYT0wMDAwMDAwMDkwMDAxODA0
CihYRU4pICAgICBzaXplPTIgc2lnbj0wIHdyaXRlPTAgcmVnPTIKKFhFTikgICAgIGVhdD0wIGNt
PTAgczFwdHc9MCBkZnNjPTYKKFhFTikgZG9tMSBJUEEgMHgwMDAwMDAwMDkwMDAxODA0CihYRU4p
IFAyTSBAIDAyZmZjYWMwIG1mbjoweGZmZTU2CihYRU4pIDFTVFsweDJdID0gMHgwMDAwMDAwMGYz
ZjY4NmZmCihYRU4pIDJORFsweDgwXSA9IDB4MDAwMDAwMDAwMDAwMDAwMAooWEVOKSAtLS0tWyBY
ZW4tNC4yLjAtcmMyLXByZSAgeDg2XzY0ICBkZWJ1Zz15ICBOb3QgdGFpbnRlZCBdLS0tLQooWEVO
KSBDUFU6ICAgIDAKKFhFTikgUEM6ICAgICA4MDE3NTYwNAooWEVOKSBDUFNSOiAgIDIwMDAwMDEz
IE1PREU6U1ZDCihYRU4pICAgICAgUjA6IDgwM2ZmYWE4IFIxOiA4MDM2ODBjYyBSMjogODAzZmZh
YzAgUjM6IDgwM2ZmYWM4CihYRU4pICAgICAgUjQ6IDg4ODA4MDAwIFI1OiA4MDNmZmFjMCBSNjog
MDAwMDdmZjAgUjc6IDAwMDAwMDAxCihYRU4pICAgICAgUjg6IDgwM2ExMjVjIFI5OiA4MDNjMTIy
YyBSMTA6ODAzZWI2MDAgUjExOjg3ODJkZjA0IFIxMjo4NzgyZGYwOAooWEVOKSBVU1I6IFNQOiAw
MDAwMDAwMCBMUjogMDAwMDAwMDAgQ1BTUjoyMDAwMDAxMwooWEVOKSBTVkM6IFNQOiA4NzgyZGVl
MCBMUjogODAxNzY5NzAgU1BTUjowMDAwMDA5MwooWEVOKSBBQlQ6IFNQOiA4MDNlYmQ4YyBMUjog
ODAzZWJkOGMgU1BTUjowMDAwMDAwMAooWEVOKSBVTkQ6IFNQOiA4MDNlYmQ5OCBMUjogODAzZWJk
OTggU1BTUjowMDAwMDAwMAooWEVOKSBJUlE6IFNQOiA4MDNlYmQ4MCBMUjogODAwMGRmYzAgU1BT
Ujo2MDAwMDE5MwooWEVOKSBGSVE6IFNQOiAwMDAwMDAwMCBMUjogMDAwMDAwMDAgU1BTUjowMDAw
MDAwMAooWEVOKSBGSVE6IFI4OiAwMDAwMDAwMCBSOTogMDAwMDAwMDAgUjEwOjAwMDAwMDAwIFIx
MTowMDAwMDAwMCBSMTI6MDAwMDAwMDAKKFhFTikgCihYRU4pIFRUQlIwIDgwMDA0MDU5IFRUQlIx
IDgwMDA0MDU5IFRUQkNSIDAwMDAwMDAwCihYRU4pIFNDVExSIDEwYzUzYzdkCihYRU4pIFZUVEJS
IDIwMDAwZmZlNTYwMDAKKFhFTikgCihYRU4pIEhUVEJSIGZmZWMyMDAwCihYRU4pIEhERkFSIDg4
ODA4ODA0CihYRU4pIEhJRkFSIDAKKFhFTikgSFBGQVIgOTAwMDEwCihYRU4pIEhDUiAwMDAwMDgz
NQooWEVOKSBIU1IgICA5MzgyMDAwNgooWEVOKSAKKFhFTikgREZTUiAwIERGQVIgMAooWEVOKSBJ
RlNSIDAgSUZBUiAwCihYRU4pIAooWEVOKSBHVUVTVCBTVEFDSyBHT0VTIEhFUkUKKFhFTikgCihY
RU4pICoqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioKKFhFTikgUGFuaWMg
b24gQ1BVIDA6CihYRU4pIFVuaGFuZGxlZCBndWVzdCBkYXRhIGFib3J0CihYRU4pICoqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioKKFhFTikgCihYRU4pIFJlYm9vdCBpbiBm
aXZlIHNlY29uZHMuLi4KCg==
--e89a8f3bafff4e2c0804c6beb06b
Content-Type: application/octet-stream; 
	name="roots-console_xcbuild-simple-DomU-A15x1_08082012.log"
Content-Disposition: attachment; 
	filename="roots-console_xcbuild-simple-DomU-A15x1_08082012.log"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_h5ma1v573

KFhFTikgRE9NMTogVW5jb21wcmVzc2luZyBMaW51eC4uLiBkb25lLCBib290aW5nIHRoZSBrZXJu
ZWwuCihYRU4pIERPTTE6IEJvb3RpbmcgTGludXggb24gcGh5c2ljYWwgQ1BVIDAKKFhFTikgRE9N
MTogTGludXggdmVyc2lvbiAzLjUuMC1yYzcrIChyb290QHR1eikgKGdjYyB2ZXJzaW9uIDQuNi4z
IChVYnVudHUvTGluYXJvIDQuNi4zLTF1YnVudHU1KSApICM5IFR1ZSBBdWcgNyAxOToxOToxMyBN
U0sgMjAxMgooWEVOKSBET00xOiBDUFU6IEFSTXY3IFByb2Nlc3NvciBbNDEyZmMwZjBdIHJldmlz
aW9uIDAgKEFSTXY3KSwgY3I9MTBjNTNjN2QKKFhFTikgRE9NMTogQ1BVOiBQSVBUIC8gVklQVCBu
b25hbGlhc2luZyBkYXRhIGNhY2hlLCBQSVBUIGluc3RydWN0aW9uIGNhY2hlCihYRU4pIERPTTE6
IE1hY2hpbmU6IEFSTS1WZXJzYXRpbGUgRXhwcmVzcywgbW9kZWw6IFYyUC1BRU12N0EKKFhFTikg
RE9NMTogYm9vdGNvbnNvbGUgW2Vhcmx5Y29uMF0gZW5hYmxlZAooWEVOKSBET00xOiBkZWJ1Zzog
c2tpcCBib290IGNvbnNvbGUgZGUtcmVnaXN0cmF0aW9uLgooWEVOKSBET00xOiBNZW1vcnkgcG9s
aWN5OiBFQ0MgZGlzYWJsZWQsIERhdGEgY2FjaGUgd3JpdGViYWNrCihYRU4pIERPTTE6IE9uIG5v
ZGUgMCB0b3RhbHBhZ2VzOiAzMjc2OAooWEVOKSBET00xOiBmcmVlX2FyZWFfaW5pdF9ub2RlOiBu
b2RlIDAsIHBnZGF0IDgwM2UyYjk0LCBub2RlX21lbV9tYXAgODAzZmYwMDAKKFhFTikgRE9NMTog
Tm9ybWFsIHpvbmU6IDI1NiBwYWdlcyB1c2VkIGZvciBtZW1tYXAKKFhFTikgRE9NMTogTm9ybWFs
IHpvbmU6IDAgcGFnZXMgcmVzZXJ2ZWQKKFhFTikgRE9NMTogTm9ybWFsIHpvbmU6IDMyNTEyIHBh
Z2VzLCBMSUZPIGJhdGNoOjcKKFhFTikgRE9NMTogLS0tLS0tLS0tLS0tWyBjdXQgaGVyZSBdLS0t
LS0tLS0tLS0tCihYRU4pIERPTTE6IFdBUk5JTkc6IGF0IGFyY2gvYXJtL21hY2gtdmV4cHJlc3Mv
djJtLmM6NjAzIHYybV9kdF9pbml0X2Vhcmx5KzB4NDQvMHhlYygpCihYRU4pIERPTTE6IE1vZHVs
ZXMgbGlua2VkIGluOgooWEVOKSBET00xOiBCYWNrdHJhY2U6CihYRU4pIERPTTE6IFs8ODAwMTFi
MGM+XSAoZHVtcF9iYWNrdHJhY2UrMHgwLzB4MTBjKSBmcm9tIFs8ODAyZDA2NTg+XSAoZHVtcF9z
dGFjaysweDE4LzB4MWMpCihYRU4pIERPTTE6IHI2OjAwMDAwMjViIHI1OjgwMzllZGNjIHI0OjAw
MDAwMDAwIHIzOjgwM2M3OTNjCihYRU4pIERPTTE6IFs8ODAyZDA2NDA+XSAoZHVtcF9zdGFjaysw
eDAvMHgxYykgZnJvbSBbPDgwMDFiMWRjPl0gKHdhcm5fc2xvd3BhdGhfY29tbW9uKzB4NTQvMHg2
YykKKFhFTikgRE9NMTogWzw4MDAxYjE4OD5dICh3YXJuX3Nsb3dwYXRoX2NvbW1vbisweDAvMHg2
YykgZnJvbSBbPDgwMDFiMjE4Pl0gKHdhcm5fc2xvd3BhdGhfbnVsbCsweDI0LzB4MmMpCihYRU4p
IERPTTE6IHI4OjgwM2M1MzM4IHI3OjgwNGZmNDQwIHI2OjgwMDAwMjAwIHI1OjgwM2ViODQ4IHI0
OjgwM2UzZDM4CihYRU4pIERPTTE6IHIzOjAwMDAwMDA5CihYRU4pIERPTTE6IFs8ODAwMWIxZjQ+
XSAod2Fybl9zbG93cGF0aF9udWxsKzB4MC8weDJjKSBmcm9tIFs8ODAzOWVkY2M+XSAodjJtX2R0
X2luaXRfZWFybHkrMHg0NC8weGVjKQooWEVOKSBET00xOiBbPDgwMzllZDg4Pl0gKHYybV9kdF9p
bml0X2Vhcmx5KzB4MC8weGVjKSBmcm9tIFs8ODAzOWI2N2M+XSAoc2V0dXBfYXJjaCsweDcxMC8w
eDdmYykKKFhFTikgRE9NMTogcjQ6ODAzYjIzYzQKKFhFTikgRE9NMTogWzw4MDM5YWY2Yz5dIChz
ZXR1cF9hcmNoKzB4MC8weDdmYykgZnJvbSBbPDgwMzk5NTljPl0gKHN0YXJ0X2tlcm5lbCsweDc4
LzB4MjZjKQooWEVOKSBET00xOiBbPDgwMzk5NTI0Pl0gKHN0YXJ0X2tlcm5lbCsweDAvMHgyNmMp
IGZyb20gWzw4MDAwODA0MD5dICgweDgwMDA4MDQwKQooWEVOKSBET00xOiByNzo4MDNjNTI4NCBy
Njo4MDNiMzc2OCByNTo4MDNjMjA1NCByNDoxMGM1M2M3ZAooWEVOKSBET00xOiAtLS1bIGVuZCB0
cmFjZSAxYjc1YjMxYTI3MTllZDFjIF0tLS0KKFhFTikgRE9NMTogcGNwdS1hbGxvYzogczAgcjAg
ZDMyNzY4IHUzMjc2OCBhbGxvYz0xKjMyNzY4CihYRU4pIERPTTE6IHBjcHUtYWxsb2M6IFswXSAw
CihYRU4pIERPTTE6IEJ1aWx0IDEgem9uZWxpc3RzIGluIFpvbmUgb3JkZXIsIG1vYmlsaXR5IGdy
b3VwaW5nIG9uLiBUb3RhbCBwYWdlczogMzI1MTIKKFhFTikgRE9NMTogS2VybmVsIGNvbW1hbmQg
bGluZTogZWFybHlwcmludGsgZGVidWcgbG9nbGV2ZWw9OSBrZWVwX2Jvb3Rjb24gY29uc29sZT1o
dmMwIHJvb3Q9L2Rldi94dmRhIGluaXQ9L3NiaW4vaW5pdAooWEVOKSBET00xOiBQSUQgaGFzaCB0
YWJsZSBlbnRyaWVzOiA1MTIgKG9yZGVyOiAtMSwgMjA0OCBieXRlcykKKFhFTikgRE9NMTogRGVu
dHJ5IGNhY2hlIGhhc2ggdGFibGUgZW50cmllczogMTYzODQgKG9yZGVyOiA0LCA2NTUzNiBieXRl
cykKKFhFTikgRE9NMTogSW5vZGUtY2FjaGUgaGFzaCB0YWJsZSBlbnRyaWVzOiA4MTkyIChvcmRl
cjogMywgMzI3NjggYnl0ZXMpCihYRU4pIERPTTE6IE1lbW9yeTogMTI4TUIgPSAxMjhNQiB0b3Rh
bAooWEVOKSBET00xOiBNZW1vcnk6IDEyNTgxMmsvMTI1ODEyayBhdmFpbGFibGUsIDUyNjBrIHJl
c2VydmVkLCAwSyBoaWdobWVtCihYRU4pIERPTTE6IFZpcnR1YWwga2VybmVsIG1lbW9yeSBsYXlv
dXQ6CihYRU4pIERPTTE6IHZlY3RvciA6IDB4ZmZmZjAwMDAgLSAweGZmZmYxMDAwICggNCBrQikK
KFhFTikgRE9NMTogZml4bWFwIDogMHhmZmYwMDAwMCAtIDB4ZmZmZTAwMDAgKCA4OTYga0IpCihY
RU4pIERPTTE6IHZtYWxsb2MgOiAweDg4ODAwMDAwIC0gMHhmZjAwMDAwMCAoMTg5NiBNQikKKFhF
TikgRE9NMTogbG93bWVtIDogMHg4MDAwMDAwMCAtIDB4ODgwMDAwMDAgKCAxMjggTUIpCihYRU4p
IERPTTE6IG1vZHVsZXMgOiAweDdmMDAwMDAwIC0gMHg4MDAwMDAwMCAoIDE2IE1CKQooWEVOKSBE
T00xOiAudGV4dCA6IDB4ODAwMDgwMDAgLSAweDgwMzk5MDAwICgzNjUyIGtCKQooWEVOKSBET00x
OiAuaW5pdCA6IDB4ODAzOTkwMDAgLSAweDgwM2I4ZmQ4ICggMTI4IGtCKQooWEVOKSBET00xOiAu
ZGF0YSA6IDB4ODAzYmEwMDAgLSAweDgwM2UzMmMwICggMTY1IGtCKQooWEVOKSBET00xOiAuYnNz
IDogMHg4MDNlMzJlNCAtIDB4ODAzZmVkYTQgKCAxMTEga0IpCihYRU4pIERPTTE6IFNMVUI6IEdl
bnNsYWJzPTExLCBIV2FsaWduPTY0LCBPcmRlcj0wLTMsIE1pbk9iamVjdHM9MCwgQ1BVcz0xLCBO
b2Rlcz0xCihYRU4pIERPTTE6IE5SX0lSUVM6MjU2CihYRU4pIERPTTE6IC0tLS0tLS0tLS0tLVsg
Y3V0IGhlcmUgXS0tLS0tLS0tLS0tLQooWEVOKSBET00xOiBXQVJOSU5HOiBhdCBhcmNoL2FybS9t
YWNoLXZleHByZXNzL3YybS5jOjYyIHYybV9zeXNjdGxfaW5pdCsweDIwLzB4NTgoKQooWEVOKSBE
T00xOiBNb2R1bGVzIGxpbmtlZCBpbjoKKFhFTikgRE9NMTogQmFja3RyYWNlOgooWEVOKSBET00x
OiBbPDgwMDExYjBjPl0gKGR1bXBfYmFja3RyYWNlKzB4MC8weDEwYykgZnJvbSBbPDgwMmQwNjU4
Pl0gKGR1bXBfc3RhY2srMHgxOC8weDFjKQooWEVOKSBET00xOiByNjowMDAwMDAzZSByNTo4MDM5
ZTliYyByNDowMDAwMDAwMCByMzo4MDNjNzkzYwooWEVOKSBET00xOiBbPDgwMmQwNjQwPl0gKGR1
bXBfc3RhY2srMHgwLzB4MWMpIGZyb20gWzw4MDAxYjFkYz5dICh3YXJuX3Nsb3dwYXRoX2NvbW1v
bisweDU0LzB4NmMpCihYRU4pIERPTTE6IFs8ODAwMWIxODg+XSAod2Fybl9zbG93cGF0aF9jb21t
b24rMHgwLzB4NmMpIGZyb20gWzw4MDAxYjIxOD5dICh3YXJuX3Nsb3dwYXRoX251bGwrMHgyNC8w
eDJjKQooWEVOKSBET00xOiByODo4MDAwNDA1OSByNzo4MDRmZmJjMCByNjo4MDNiMzc2YyByNTo4
MDNlMzMwMCByNDowMDAwMDAwMAooWEVOKSBET00xOiByMzowMDAwMDAwOQooWEVOKSBET00xOiBb
PDgwMDFiMWY0Pl0gKHdhcm5fc2xvd3BhdGhfbnVsbCsweDAvMHgyYykgZnJvbSBbPDgwMzllOWJj
Pl0gKHYybV9zeXNjdGxfaW5pdCsweDIwLzB4NTgpCihYRU4pIERPTTE6IFs8ODAzOWU5OWM+XSAo
djJtX3N5c2N0bF9pbml0KzB4MC8weDU4KSBmcm9tIFs8ODAzOWVhYmM+XSAodjJtX2R0X3RpbWVy
X2luaXQrMHgyYy8weGNjKQooWEVOKSBET00xOiByNTo4MDNlMzMwMCByNDpmZmZmZmZmZgooWEVO
KSBET00xOiBbPDgwMzllYTkwPl0gKHYybV9kdF90aW1lcl9pbml0KzB4MC8weGNjKSBmcm9tIFs8
ODAzOWI4MTA+XSAodGltZV9pbml0KzB4MjgvMHgzOCkKKFhFTikgRE9NMTogcjY6ODAzYjM3NmMg
cjU6ODAzZTMzMDAgcjQ6ZmZmZmZmZmYKKFhFTikgRE9NMTogWzw4MDM5YjdlOD5dICh0aW1lX2lu
aXQrMHgwLzB4MzgpIGZyb20gWzw4MDM5OTZhYz5dIChzdGFydF9rZXJuZWwrMHgxODgvMHgyNmMp
CihYRU4pIERPTTE6IFs8ODAzOTk1MjQ+XSAoc3RhcnRfa2VybmVsKzB4MC8weDI2YykgZnJvbSBb
PDgwMDA4MDQwPl0gKDB4ODAwMDgwNDApCihYRU4pIERPTTE6IHI3OjgwM2M1Mjg0IHI2OjgwM2Iz
NzY4IHI1OjgwM2MyMDU0IHI0OjEwYzUzYzdkCihYRU4pIERPTTE6IC0tLVsgZW5kIHRyYWNlIDFi
NzViMzFhMjcxOWVkMWQgXS0tLQooWEVOKSBET00xOiBhcmNoX3RpbWVyOiBmb3VuZCB0aW1lciBp
cnFzIDI5IDMwCihYRU4pIERPTTE6IEFyY2hpdGVjdGVkIGxvY2FsIHRpbWVyIHJ1bm5pbmcgYXQg
MTAwLjAwTUh6LgooWEVOKSBET00xOiBzY2hlZF9jbG9jazogMzIgYml0cyBhdCAxMDBNSHosIHJl
c29sdXRpb24gMTBucywgd3JhcHMgZXZlcnkgNDI5NDltcwooWEVOKSBET00xOiAtLS0tLS0tLS0t
LS1bIGN1dCBoZXJlIF0tLS0tLS0tLS0tLS0KKFhFTikgRE9NMTogV0FSTklORzogYXQgYXJjaC9h
cm0vbWFjaC12ZXhwcmVzcy92Mm0uYzo2NDcgdjJtX2R0X3RpbWVyX2luaXQrMHg3Yy8weGNjKCkK
KFhFTikgRE9NMTogTW9kdWxlcyBsaW5rZWQgaW46CihYRU4pIERPTTE6IEJhY2t0cmFjZToKKFhF
TikgRE9NMTogWzw4MDAxMWIwYz5dIChkdW1wX2JhY2t0cmFjZSsweDAvMHgxMGMpIGZyb20gWzw4
MDJkMDY1OD5dIChkdW1wX3N0YWNrKzB4MTgvMHgxYykKKFhFTikgRE9NMTogcjY6MDAwMDAyODcg
cjU6ODAzOWViMGMgcjQ6MDAwMDAwMDAgcjM6ODAzYzc5M2MKKFhFTikgRE9NMTogWzw4MDJkMDY0
MD5dIChkdW1wX3N0YWNrKzB4MC8weDFjKSBmcm9tIFs8ODAwMWIxZGM+XSAod2Fybl9zbG93cGF0
aF9jb21tb24rMHg1NC8weDZjKQooWEVOKSBET00xOiBbPDgwMDFiMTg4Pl0gKHdhcm5fc2xvd3Bh
dGhfY29tbW9uKzB4MC8weDZjKSBmcm9tIFs8ODAwMWIyMTg+XSAod2Fybl9zbG93cGF0aF9udWxs
KzB4MjQvMHgyYykKKFhFTikgRE9NMTogcjg6ODAwMDQwNTkgcjc6ODA0ZmZiYzAgcjY6ODAzYjM3
NmMgcjU6ODAzZTMzMDAgcjQ6ZmZmZmZmZWEKKFhFTikgRE9NMTogcjM6MDAwMDAwMDkKKFhFTikg
RE9NMTogWzw4MDAxYjFmND5dICh3YXJuX3Nsb3dwYXRoX251bGwrMHgwLzB4MmMpIGZyb20gWzw4
MDM5ZWIwYz5dICh2Mm1fZHRfdGltZXJfaW5pdCsweDdjLzB4Y2MpCihYRU4pIERPTTE6IFs8ODAz
OWVhOTA+XSAodjJtX2R0X3RpbWVyX2luaXQrMHgwLzB4Y2MpIGZyb20gWzw4MDM5YjgxMD5dICh0
aW1lX2luaXQrMHgyOC8weDM4KQooWEVOKSBET00xOiByNjo4MDNiMzc2YyByNTo4MDNlMzMwMCBy
NDpmZmZmZmZmZgooWEVOKSBET00xOiBbPDgwMzliN2U4Pl0gKHRpbWVfaW5pdCsweDAvMHgzOCkg
ZnJvbSBbPDgwMzk5NmFjPl0gKHN0YXJ0X2tlcm5lbCsweDE4OC8weDI2YykKKFhFTikgRE9NMTog
Wzw4MDM5OTUyND5dIChzdGFydF9rZXJuZWwrMHgwLzB4MjZjKSBmcm9tIFs8ODAwMDgwNDA+XSAo
MHg4MDAwODA0MCkKKFhFTikgRE9NMTogcjc6ODAzYzUyODQgcjY6ODAzYjM3NjggcjU6ODAzYzIw
NTQgcjQ6MTBjNTNjN2QKKFhFTikgRE9NMTogLS0tWyBlbmQgdHJhY2UgMWI3NWIzMWEyNzE5ZWQx
ZSBdLS0tCihYRU4pIERPTTE6IENvbnNvbGU6IGNvbG91ciBkdW1teSBkZXZpY2UgODB4MzAKKFhF
TikgRE9NMTogQ2FsaWJyYXRpbmcgZGVsYXkgbG9vcC4uLiA5OC43MSBCb2dvTUlQUyAobHBqPTQ5
MzU2OCkKKFhFTikgRE9NMTogcGlkX21heDogZGVmYXVsdDogMzI3NjggbWluaW11bTogMzAxCihY
RU4pIERPTTE6IE1vdW50LWNhY2hlIGhhc2ggdGFibGUgZW50cmllczogNTEyCihYRU4pIERPTTE6
IENQVTogVGVzdGluZyB3cml0ZSBidWZmZXIgY29oZXJlbmN5OiBvawooWEVOKSBET00xOiBTZXR0
aW5nIHVwIHN0YXRpYyBpZGVudGl0eSBtYXAgZm9yIDB4ODAyZDQ4MjAgLSAweDgwMmQ0ODU0CihY
RU4pIERPTTE6IFhlbiBzdXBwb3J0IGZvdW5kLCBldmVudHNfaXJxPTMxIGdudHRhYl9mcmFtZV9w
Zm49YjAwMDAKKFhFTikgRE9NMTogR3JhbnQgdGFibGVzIHVzaW5nIHZlcnNpb24gMSBsYXlvdXQu
CihYRU4pIERPTTE6IEdyYW50IHRhYmxlIGluaXRpYWxpemVkCihYRU4pIERPTTE6IE5FVDogUmVn
aXN0ZXJlZCBwcm90b2NvbCBmYW1pbHkgMTYKKFhFTikgRE9NMTogYmlvOiBjcmVhdGUgc2xhYiA8
YmlvLTA+IGF0IDAKKFhFTikgRE9NMTogU0NTSSBzdWJzeXN0ZW0gaW5pdGlhbGl6ZWQKKFhFTikg
RE9NMTogbGliYXRhIHZlcnNpb24gMy4wMCBsb2FkZWQuCihYRU4pIERPTTE6IHVzYmNvcmU6IHJl
Z2lzdGVyZWQgbmV3IGludGVyZmFjZSBkcml2ZXIgdXNiZnMKKFhFTikgRE9NMTogdXNiY29yZTog
cmVnaXN0ZXJlZCBuZXcgaW50ZXJmYWNlIGRyaXZlciBodWIKKFhFTikgRE9NMTogdXNiY29yZTog
cmVnaXN0ZXJlZCBuZXcgZGV2aWNlIGRyaXZlciB1c2IKKFhFTikgRE9NMTogU3dpdGNoaW5nIHRv
IGNsb2Nrc291cmNlIGFyY2hfc3lzX2NvdW50ZXIKKFhFTikgRE9NMTogTkVUOiBSZWdpc3RlcmVk
IHByb3RvY29sIGZhbWlseSAyCihYRU4pIERPTTE6IElQIHJvdXRlIGNhY2hlIGhhc2ggdGFibGUg
ZW50cmllczogMTAyNCAob3JkZXI6IDAsIDQwOTYgYnl0ZXMpCihYRU4pIERPTTE6IFRDUCBlc3Rh
Ymxpc2hlZCBoYXNoIHRhYmxlIGVudHJpZXM6IDQwOTYgKG9yZGVyOiAzLCAzMjc2OCBieXRlcykK
KFhFTikgRE9NMTogVENQIGJpbmQgaGFzaCB0YWJsZSBlbnRyaWVzOiA0MDk2IChvcmRlcjogMiwg
MTYzODQgYnl0ZXMpCihYRU4pIERPTTE6IFRDUDogSGFzaCB0YWJsZXMgY29uZmlndXJlZCAoZXN0
YWJsaXNoZWQgNDA5NiBiaW5kIDQwOTYpCihYRU4pIERPTTE6IFRDUDogcmVubyByZWdpc3RlcmVk
CihYRU4pIERPTTE6IFVEUCBoYXNoIHRhYmxlIGVudHJpZXM6IDI1NiAob3JkZXI6IDAsIDQwOTYg
Ynl0ZXMpCihYRU4pIERPTTE6IFVEUC1MaXRlIGhhc2ggdGFibGUgZW50cmllczogMjU2IChvcmRl
cjogMCwgNDA5NiBieXRlcykKKFhFTikgRE9NMTogTkVUOiBSZWdpc3RlcmVkIHByb3RvY29sIGZh
bWlseSAxCihYRU4pIERPTTE6IFJQQzogUmVnaXN0ZXJlZCBuYW1lZCBVTklYIHNvY2tldCB0cmFu
c3BvcnQgbW9kdWxlLgooWEVOKSBET00xOiBSUEM6IFJlZ2lzdGVyZWQgdWRwIHRyYW5zcG9ydCBt
b2R1bGUuCihYRU4pIERPTTE6IFJQQzogUmVnaXN0ZXJlZCB0Y3AgdHJhbnNwb3J0IG1vZHVsZS4K
KFhFTikgRE9NMTogUlBDOiBSZWdpc3RlcmVkIHRjcCBORlN2NC4xIGJhY2tjaGFubmVsIHRyYW5z
cG9ydCBtb2R1bGUuCihYRU4pIERPTTE6IGpmZnMyOiB2ZXJzaW9uIDIuMi4gKE5BTkQpIMKpIDIw
MDEtMjAwNiBSZWQgSGF0LCBJbmMuCihYRU4pIERPTTE6IG1zZ21uaSBoYXMgYmVlbiBzZXQgdG8g
MjQ1CihYRU4pIERPTTE6IGlvIHNjaGVkdWxlciBub29wIHJlZ2lzdGVyZWQgKGRlZmF1bHQpCihY
RU4pIERPTTE6IEV2ZW50LWNoYW5uZWwgZGV2aWNlIGluc3RhbGxlZC4KKFhFTikgRE9NMTogSW5p
dGlhbGlzaW5nIFhlbiB2aXJ0dWFsIGV0aGVybmV0IGRyaXZlci4KKFhFTikgRE9NMTogYmxrZnJv
bnQ6IHh2ZGE6IGJhcnJpZXIgb3IgZmx1c2g6IGRpc2FibGVkCihYRU4pIGdyYW50X3RhYmxlLmM6
MTIwNDpkMSBFeHBhbmRpbmcgZG9tICgxKSBncmFudCB0YWJsZSBmcm9tICgxKSB0byAoMikgZnJh
bWVzLgooWEVOKSBET00xOiBJbml0aWFsaXppbmcgVVNCIE1hc3MgU3RvcmFnZSBkcml2ZXIuLi4K
KFhFTikgRE9NMTogdXNiY29yZTogcmVnaXN0ZXJlZCBuZXcgaW50ZXJmYWNlIGRyaXZlciB1c2It
c3RvcmFnZQooWEVOKSBET00xOiBVU0IgTWFzcyBTdG9yYWdlIHN1cHBvcnQgcmVnaXN0ZXJlZC4K
KFhFTikgRE9NMTogbW91c2VkZXY6IFBTLzIgbW91c2UgZGV2aWNlIGNvbW1vbiBmb3IgYWxsIG1p
Y2UKKFhFTikgRE9NMTogdXNiY29yZTogcmVnaXN0ZXJlZCBuZXcgaW50ZXJmYWNlIGRyaXZlciB1
c2JoaWQKKFhFTikgRE9NMTogdXNiaGlkOiBVU0IgSElEIGNvcmUgZHJpdmVyCihYRU4pIERPTTE6
IFRDUDogY3ViaWMgcmVnaXN0ZXJlZAooWEVOKSBET00xOiBORVQ6IFJlZ2lzdGVyZWQgcHJvdG9j
b2wgZmFtaWx5IDE3CihYRU4pIERPTTE6IFZGUCBzdXBwb3J0IHYwLjM6IGltcGxlbWVudG9yIDQx
IGFyY2hpdGVjdHVyZSA0IHBhcnQgMzAgdmFyaWFudCBmIHJldiAwCihYRU4pIERPTTE6IFdhcm5p
bmc6IHVuYWJsZSB0byBvcGVuIGFuIGluaXRpYWwgY29uc29sZS4KKFhFTikgRE9NMTogZW5kX3Jl
cXVlc3Q6IEkvTyBlcnJvciwgZGV2IHh2ZGEsIHNlY3RvciAyCihYRU4pIERPTTE6IEVYVDMtZnMg
KHh2ZGEpOiBlcnJvcjogdW5hYmxlIHRvIHJlYWQgc3VwZXJibG9jawooWEVOKSBET00xOiBlbmRf
cmVxdWVzdDogSS9PIGVycm9yLCBkZXYgeHZkYSwgc2VjdG9yIDIKKFhFTikgRE9NMTogRVhUMi1m
cyAoeHZkYSk6IGVycm9yOiB1bmFibGUgdG8gcmVhZCBzdXBlcmJsb2NrCihYRU4pIERPTTE6IGVu
ZF9yZXF1ZXN0OiBJL08gZXJyb3IsIGRldiB4dmRhLCBzZWN0b3IgMAooWEVOKSBET00xOiBGQVQt
ZnMgKHh2ZGEpOiB1bmFibGUgdG8gcmVhZCBib290IHNlY3RvcgooWEVOKSBET00xOiBWRlM6IENh
bm5vdCBvcGVuIHJvb3QgZGV2aWNlICJ4dmRhIiBvciB1bmtub3duLWJsb2NrKDIwMiwwKTogZXJy
b3IgLTUKKFhFTikgRE9NMTogUGxlYXNlIGFwcGVuZCBhIGNvcnJlY3QgInJvb3Q9IiBib290IG9w
dGlvbjsgaGVyZSBhcmUgdGhlIGF2YWlsYWJsZSBwYXJ0aXRpb25zOgooWEVOKSBET00xOiBLZXJu
ZWwgcGFuaWMgLSBub3Qgc3luY2luZzogVkZTOiBVbmFibGUgdG8gbW91bnQgcm9vdCBmcyBvbiB1
bmtub3duLWJsb2NrKDIwMiwwKQooWEVOKSBET00xOiBCYWNrdHJhY2U6CihYRU4pIERPTTE6IFs8
ODAwMTFiMGM+XSAoZHVtcF9iYWNrdHJhY2UrMHgwLzB4MTBjKSBmcm9tIFs8ODAyZDA2NTg+XSAo
ZHVtcF9zdGFjaysweDE4LzB4MWMpCihYRU4pIERPTTE6IHI2OjgwM2IzMmRjIHI1Ojg3ODFlMDAw
IHI0OjgwM2UzZGE4IHIzOjAwMDAwMDAxCihYRU4pIERPTTE6IFs8ODAyZDA2NDA+XSAoZHVtcF9z
dGFjaysweDAvMHgxYykgZnJvbSBbPDgwMmQwODMwPl0gKHBhbmljKzB4N2MvMHgxYWMpCihYRU4p
IERPTTE6IFs8ODAyZDA3YjQ+XSAocGFuaWMrMHgwLzB4MWFjKSBmcm9tIFs8ODAzOTljYWM+XSAo
bW91bnRfYmxvY2tfcm9vdCsweDE4MC8weDIzNCkKKFhFTikgRE9NMTogcjM6MDAwMDAwMDAgcjI6
MDAwMDAwMDAgcjE6ODc4MmRmMjAgcjA6ODAzNDgzMzgKKFhFTikgRE9NMTogcjc6MDAwMDgwMDEK
KFhFTikgRE9NMTogWzw4MDM5OWIyYz5dIChtb3VudF9ibG9ja19yb290KzB4MC8weDIzNCkgZnJv
bSBbPDgwMzk5ZTUwPl0gKG1vdW50X3Jvb3QrMHhmMC8weDExMCkKKFhFTikgRE9NMTogWzw4MDM5
OWQ2MD5dIChtb3VudF9yb290KzB4MC8weDExMCkgZnJvbSBbPDgwMzk5Zjk0Pl0gKHByZXBhcmVf
bmFtZXNwYWNlKzB4MTI0LzB4MTdjKQooWEVOKSBET00xOiByNzo4MDNlMzMwMCByNjo4MDNiMzJi
NCByNTo4MDNiMzJlZCByNDo4MDNlMzM2MAooWEVOKSBET00xOiBbPDgwMzk5ZTcwPl0gKHByZXBh
cmVfbmFtZXNwYWNlKzB4MC8weDE3YykgZnJvbSBbPDgwMzk5OTAwPl0gKGtlcm5lbF9pbml0KzB4
MTcwLzB4MWFjKQooWEVOKSBET00xOiByNTowMDAwMDAwNyByNDo4MDNiMzJkNAooWEVOKSBET00x
OiBbPDgwMzk5NzkwPl0gKGtlcm5lbF9pbml0KzB4MC8weDFhYykgZnJvbSBbPDgwMDFmN2I0Pl0g
KGRvX2V4aXQrMHgwLzB4NmJjKSAK
--e89a8f3bafff4e2c0804c6beb06b
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--e89a8f3bafff4e2c0804c6beb06b--


From xen-devel-bounces@lists.xen.org Wed Aug 08 10:40:12 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 10:40:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz3gD-000648-Iq; Wed, 08 Aug 2012 10:39:57 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ben.guthro@gmail.com>) id 1Sz3gC-000642-Qq
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 10:39:57 +0000
X-Env-Sender: ben.guthro@gmail.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1344422389!12853650!1
X-Originating-IP: [209.85.213.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21648 invoked from network); 8 Aug 2012 10:39:50 -0000
Received: from mail-yw0-f45.google.com (HELO mail-yw0-f45.google.com)
	(209.85.213.45)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Aug 2012 10:39:50 -0000
Received: by yhpp34 with SMTP id p34so660836yhp.32
	for <xen-devel@lists.xen.org>; Wed, 08 Aug 2012 03:39:49 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=/Z7KcSXsIlctXEAqOjcNuh32mGlrfr7e9l2xfdE92Ws=;
	b=eNtHCn5XCkvNAsT3z2jwNEq/LJDMyfd5d/9d7D2/mohiz84pRfgUC+q7MnK5FEIDk1
	NAGkCN9JhPd7PXu1j2j8nO7Dvww+HmXSmRQcj7NcR3Oquo8pnX4A6rbC7bZRXHg5FxPy
	mO4xShIkZqtcx7fUvKHxbzn4VepCIjOo6g5RkjxaGBq5nil/IHydzKg90Gx1tvPo9qs8
	YAVWOMu8Hq9036KKpm5v+1kJxArU/8xqAFZzbgRIFyTpAH0YPywcWMmDNhcdZtIu4PNR
	tUKwh7FJ3ibW5qQJ+MOBNnVHnlQWlbAON+PNIunOkJvMZowSFBfUxZeJQ/eZ7O+fh4t5
	kV9w==
MIME-Version: 1.0
Received: by 10.50.220.195 with SMTP id py3mr338520igc.70.1344422388992; Wed,
	08 Aug 2012 03:39:48 -0700 (PDT)
Received: by 10.64.33.15 with HTTP; Wed, 8 Aug 2012 03:39:48 -0700 (PDT)
In-Reply-To: <502240F302000078000937A6@nat28.tlf.novell.com>
References: <CAOvdn6WwgBD-2yf1uu3m9-16=Tb5KUrybXf2KibtnxKbP7YtwA@mail.gmail.com>
	<CAOvdn6UBW-yTOMd3YSf5M9JiHLRivyaKL-qzcKG8epszqaKmGg@mail.gmail.com>
	<20120807163320.GC15053@phenom.dumpdata.com>
	<CAOvdn6XcRGKtJxGwJO8J+ZBJr89wfTLc1woW5rhtMM540H9h1w@mail.gmail.com>
	<CAOvdn6XUoSeGoo7X=AJ1kpDxZB9HZEv+TJ4v-=FHcyR4=CHK-w@mail.gmail.com>
	<502240F302000078000937A6@nat28.tlf.novell.com>
Date: Wed, 8 Aug 2012 06:39:48 -0400
X-Google-Sender-Auth: YzvmotfBAtrdth-eTL92Rgoh2bI
Message-ID: <CAOvdn6WN3kxC8Bb9g6MpfLULu47EGSGCtajPc-SFeJ-8VSUQDQ@mail.gmail.com>
From: Ben Guthro <ben@guthro.net>
To: Jan Beulich <JBeulich@suse.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Thomas Goetz <thomas.goetz@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen4.2 S3 regression?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Thanks for taking the time to reply.

I'm out of the office today, so I don't have direct access to the
machine in question until tomorrow... but I'll do my best to answer
(inline below) and I'll follow up tomorrow with concrete answers.

On Wed, Aug 8, 2012 at 4:35 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>>> On 07.08.12 at 22:14, Ben Guthro <ben@guthro.net> wrote:
>> Any suggestions on how best to chase this down?
>>
>> The first S3 suspend/resume cycle works, but the second does not.
>>
>> On the second try, I never get any interrupts delivered to ahci.
>> (at least according to /proc/interrupts)
>>
>>
>> syslog traces from the first (good) and the second (bad) are attached,
>> as well as the output from the "*" debug Ctrl+a handler in both cases.
>
> You should have provided this also for the state before the
> first suspend. The state after the first resume already looks
> corrupted (presumably just not as badly):

I'll be able to send this tomorrow.

>
> (XEN) PCI-MSI interrupt information:
> (XEN)  MSI    26 vec=71 lowest  edge   assert  log lowest dest=00000001 mask=0/1/-1
> (XEN)  MSI    27 vec=00  fixed  edge deassert phys lowest dest=00000001 mask=0/1/-1
>                      ^^
> (XEN)  MSI    28 vec=29 lowest  edge   assert  log lowest dest=00000001 mask=0/1/-1
> (XEN)  MSI    29 vec=79 lowest  edge   assert  log lowest dest=00000001 mask=0/1/-1
> (XEN)  MSI    30 vec=81 lowest  edge   assert  log lowest dest=00000001 mask=0/1/-1
> (XEN)  MSI    31 vec=99 lowest  edge   assert  log lowest dest=00000001 mask=0/1/-1
>
> so this is likely the reason for things falling apart on the second
> iteration:
>
> (XEN)   Interrupt Remapping: supported and enabled.
> (XEN)   Interrupt remapping table (nr_entry=0x10000. Only dump P=1 entries here):
> (XEN)        SVT  SQ   SID      DST  V  AVL DLM TM RH DM FPD P
> (XEN)   0000:  1   0  f0f8 00000001 38    0   1  0  1  1   0 1
> ...
> (XEN)   0014:  1   0  00d8 00000001 a1    0   1  0  1  1   0 1
> (XEN)   0015:  1   0  00fa 00000001 00    0   0  0  0  0   0 1
>                                               ^     ^  ^
> (XEN)   0016:  1   0  f0f8 00000001 31    0   1  1  1  1   0 1
> (XEN)   0017:  1   0  00a0 00000001 a9    0   1  0  1  1   0 1
> (XEN)   0018:  1   0  0200 00000001 b1    0   1  0  1  1   0 1
> (XEN)   0019:  1   0  00c8 00000001 c9    0   1  0  1  1   0 1
>
> Surprisingly in both cases we get (with the other vector fields varying
> accordingly)
>
> (XEN)    IRQ:  26 affinity:0001 vec:71 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:279(-S--),
> (XEN)    IRQ:  27 affinity:0001 vec:21 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:278(-S--),
>                                     ^^
> (XEN)    IRQ:  28 affinity:0001 vec:29 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:277(-S--),
> (XEN)    IRQ:  29 affinity:0001 vec:79 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:276(-S--),
> (XEN)    IRQ:  30 affinity:0001 vec:81 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:275(PS--),
> (XEN)    IRQ:  31 affinity:0001 vec:99 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:274(PS--),
>
> The interrupt in question belongs to 0000:00:1f.2, i.e. the
> AHCI controller.

This would be consistent with what I've observed.

>
> Unfortunately I can't make sense of the kernel side config space
> restore messages - an offset of 1 gets reported for the device in
> question (and various other odd offsets exist), yet 3.5's
> drivers/pci/pci.c:pci_restore_config_space_range() calls
> pci_restore_config_dword() with an offset that's always divisible
> by 4. Could you clarify which kernel version you were using here?
> We first need to determine whether the kernel corrupts something
> (after all, config space isn't protected from Dom0 modifications) -
> if that's the case, we may need to understand why older Xen was
> immune against that. If that's not the case, adding some extra
> logging to Xen's pci_restore_msi_state() would seem the best
> first step, plus (maybe) logging of Dom0 post-resume config space
> accesses to the device in question.

This particular failure is using linux-3.2.23 + some of Konrad's
branches that haven't been merged into mainline (s3 branches, are
probably the most appropriate here)

>
> The most likely thing happening (though unclear where) is that
> the corresponding struct msi_msg instance gets cleared in the
> course of the first resume (but after the corresponding interrupt
> remapping entry already got restored).
>
> Jan
>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 10:40:12 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 10:40:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz3gD-000648-Iq; Wed, 08 Aug 2012 10:39:57 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ben.guthro@gmail.com>) id 1Sz3gC-000642-Qq
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 10:39:57 +0000
X-Env-Sender: ben.guthro@gmail.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1344422389!12853650!1
X-Originating-IP: [209.85.213.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21648 invoked from network); 8 Aug 2012 10:39:50 -0000
Received: from mail-yw0-f45.google.com (HELO mail-yw0-f45.google.com)
	(209.85.213.45)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Aug 2012 10:39:50 -0000
Received: by yhpp34 with SMTP id p34so660836yhp.32
	for <xen-devel@lists.xen.org>; Wed, 08 Aug 2012 03:39:49 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=/Z7KcSXsIlctXEAqOjcNuh32mGlrfr7e9l2xfdE92Ws=;
	b=eNtHCn5XCkvNAsT3z2jwNEq/LJDMyfd5d/9d7D2/mohiz84pRfgUC+q7MnK5FEIDk1
	NAGkCN9JhPd7PXu1j2j8nO7Dvww+HmXSmRQcj7NcR3Oquo8pnX4A6rbC7bZRXHg5FxPy
	mO4xShIkZqtcx7fUvKHxbzn4VepCIjOo6g5RkjxaGBq5nil/IHydzKg90Gx1tvPo9qs8
	YAVWOMu8Hq9036KKpm5v+1kJxArU/8xqAFZzbgRIFyTpAH0YPywcWMmDNhcdZtIu4PNR
	tUKwh7FJ3ibW5qQJ+MOBNnVHnlQWlbAON+PNIunOkJvMZowSFBfUxZeJQ/eZ7O+fh4t5
	kV9w==
MIME-Version: 1.0
Received: by 10.50.220.195 with SMTP id py3mr338520igc.70.1344422388992; Wed,
	08 Aug 2012 03:39:48 -0700 (PDT)
Received: by 10.64.33.15 with HTTP; Wed, 8 Aug 2012 03:39:48 -0700 (PDT)
In-Reply-To: <502240F302000078000937A6@nat28.tlf.novell.com>
References: <CAOvdn6WwgBD-2yf1uu3m9-16=Tb5KUrybXf2KibtnxKbP7YtwA@mail.gmail.com>
	<CAOvdn6UBW-yTOMd3YSf5M9JiHLRivyaKL-qzcKG8epszqaKmGg@mail.gmail.com>
	<20120807163320.GC15053@phenom.dumpdata.com>
	<CAOvdn6XcRGKtJxGwJO8J+ZBJr89wfTLc1woW5rhtMM540H9h1w@mail.gmail.com>
	<CAOvdn6XUoSeGoo7X=AJ1kpDxZB9HZEv+TJ4v-=FHcyR4=CHK-w@mail.gmail.com>
	<502240F302000078000937A6@nat28.tlf.novell.com>
Date: Wed, 8 Aug 2012 06:39:48 -0400
X-Google-Sender-Auth: YzvmotfBAtrdth-eTL92Rgoh2bI
Message-ID: <CAOvdn6WN3kxC8Bb9g6MpfLULu47EGSGCtajPc-SFeJ-8VSUQDQ@mail.gmail.com>
From: Ben Guthro <ben@guthro.net>
To: Jan Beulich <JBeulich@suse.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Thomas Goetz <thomas.goetz@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen4.2 S3 regression?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Thanks for taking the time to reply.

I'm out of the office today, so don't have direct access to the
machine in question until tomorrow... but I'll do my best to answer
(inline below) and I'll follow up tomorrow with concrete answers.

On Wed, Aug 8, 2012 at 4:35 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>>> On 07.08.12 at 22:14, Ben Guthro <ben@guthro.net> wrote:
>> Any suggestions on how best to chase this down?
>>
>> The first S3 suspend/resume cycle works, but the second does not.
>>
>> On the second try, I never get any interrupts delivered to ahci.
>> (at least according to /proc/interrupts)
>>
>>
>> syslog traces from the first (good) and the second (bad) are attached,
>> as well as the output from the "*" debug Ctrl+a handler in both cases.
>
> You should have provided this also for the state before the
> first suspend. The state after the first resume already looks
> corrupted (presumably just not as badly):

I'll be able to send this tomorrow.

>
> (XEN) PCI-MSI interrupt information:
> (XEN)  MSI    26 vec=71 lowest  edge   assert  log lowest dest=00000001 mask=0/1/-1
> (XEN)  MSI    27 vec=00  fixed  edge deassert phys lowest dest=00000001 mask=0/1/-1
>                      ^^
> (XEN)  MSI    28 vec=29 lowest  edge   assert  log lowest dest=00000001 mask=0/1/-1
> (XEN)  MSI    29 vec=79 lowest  edge   assert  log lowest dest=00000001 mask=0/1/-1
> (XEN)  MSI    30 vec=81 lowest  edge   assert  log lowest dest=00000001 mask=0/1/-1
> (XEN)  MSI    31 vec=99 lowest  edge   assert  log lowest dest=00000001 mask=0/1/-1
>
> so this is likely the reason for things falling apart on the second
> iteration:
>
> (XEN)   Interrupt Remapping: supported and enabled.
> (XEN)   Interrupt remapping table (nr_entry=0x10000. Only dump P=1 entries here):
> (XEN)        SVT  SQ   SID      DST  V  AVL DLM TM RH DM FPD P
> (XEN)   0000:  1   0  f0f8 00000001 38    0   1  0  1  1   0 1
> ...
> (XEN)   0014:  1   0  00d8 00000001 a1    0   1  0  1  1   0 1
> (XEN)   0015:  1   0  00fa 00000001 00    0   0  0  0  0   0 1
>                                               ^     ^  ^
> (XEN)   0016:  1   0  f0f8 00000001 31    0   1  1  1  1   0 1
> (XEN)   0017:  1   0  00a0 00000001 a9    0   1  0  1  1   0 1
> (XEN)   0018:  1   0  0200 00000001 b1    0   1  0  1  1   0 1
> (XEN)   0019:  1   0  00c8 00000001 c9    0   1  0  1  1   0 1
>
> Surprisingly in both cases we get (with the other vector fields varying
> accordingly)
>
> (XEN)    IRQ:  26 affinity:0001 vec:71 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:279(-S--),
> (XEN)    IRQ:  27 affinity:0001 vec:21 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:278(-S--),
>                                     ^^
> (XEN)    IRQ:  28 affinity:0001 vec:29 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:277(-S--),
> (XEN)    IRQ:  29 affinity:0001 vec:79 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:276(-S--),
> (XEN)    IRQ:  30 affinity:0001 vec:81 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:275(PS--),
> (XEN)    IRQ:  31 affinity:0001 vec:99 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:274(PS--),
>
> The interrupt in question belongs to 0000:00:1f.2, i.e. the
> AHCI controller.

This would be consistent with what I've observed.

>
> Unfortunately I can't make sense of the kernel side config space
> restore messages - an offset of 1 gets reported for the device in
> question (and various other odd offsets exist), yet 3.5's
> drivers/pci/pci.c:pci_restore_config_space_range() calls
> pci_restore_config_dword() with an offset that's always divisible
> by 4. Could you clarify which kernel version you were using here?
> We first need to determine whether the kernel corrupts something
> (after all, config space isn't protected from Dom0 modifications) -
> if that's the case, we may need to understand why older Xen was
> immune against that. If that's not the case, adding some extra
> logging to Xen's pci_restore_msi_state() would seem the best
> first step, plus (maybe) logging of Dom0 post-resume config space
> accesses to the device in question.

This particular failure is with linux-3.2.23 plus some of Konrad's
branches that haven't been merged into mainline (the s3 branches are
probably the most relevant here).

>
> The most likely thing happening (though unclear where) is that
> the corresponding struct msi_msg instance gets cleared in the
> course of the first resume (but after the corresponding interrupt
> remapping entry already got restored).
>
> Jan
>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 11:38:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 11:38:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz4af-0006S2-CP; Wed, 08 Aug 2012 11:38:17 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wangzhenguo@huawei.com>) id 1Sz4ae-0006Rx-8T
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 11:38:16 +0000
Received: from [85.158.139.83:53379] by server-5.bemta-5.messagelabs.com id
	6A/84-03096-7AF42205; Wed, 08 Aug 2012 11:38:15 +0000
X-Env-Sender: wangzhenguo@huawei.com
X-Msg-Ref: server-5.tower-182.messagelabs.com!1344425892!30903573!1
X-Originating-IP: [119.145.14.65]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTE5LjE0NS4xNC42NSA9PiAzMTIzNg==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1125 invoked from network); 8 Aug 2012 11:38:14 -0000
Received: from szxga02-in.huawei.com (HELO szxga02-in.huawei.com)
	(119.145.14.65) by server-5.tower-182.messagelabs.com with SMTP;
	8 Aug 2012 11:38:14 -0000
Received: from 172.24.2.119 (EHLO szxeml212-edg.china.huawei.com)
	([172.24.2.119])
	by szxrg02-dlp.huawei.com (MOS 4.3.4-GA FastPath queued)
	with ESMTP id AND81129; Wed, 08 Aug 2012 19:38:07 +0800 (CST)
Received: from SZXEML424-HUB.china.huawei.com (10.82.67.163) by
	szxeml212-edg.china.huawei.com (172.24.2.181) with Microsoft SMTP
	Server (TLS) id 14.1.323.3; Wed, 8 Aug 2012 19:36:46 +0800
Received: from SZXEML528-MBX.china.huawei.com ([169.254.4.120]) by
	szxeml424-hub.china.huawei.com ([10.82.67.163]) with mapi id
	14.01.0323.003; Wed, 8 Aug 2012 19:36:34 +0800
From: Wangzhenguo <wangzhenguo@huawei.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Thread-Topic: [Xen-devel] The hypercall will fail and return EFAULT when the
	page becomes COW by forking process in linux
Thread-Index: Ac1YHX79yLSELb+4TLqeIKwBXZDIFf//q9kA//3DDTCAJQongP/9/iEAgAOV5ICADfI9AP/6ldlwAUsPRwD//lly8P/8/06A//lxU9D/80ShAP/lChpQ/8o++QD/k88DUA==
Date: Wed, 8 Aug 2012 11:36:33 +0000
Message-ID: <B44CA5218606DC4FA941D19CCEB27B532CF76BEE@szxeml528-mbx.china.huawei.com>
References: <B44CA5218606DC4FA941D19CCEB27B5323BEF12C@szxeml528-mbs.china.huawei.com>
	<1341221926.4625.12.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B5323BEF99A@szxeml528-mbs.china.huawei.com>
	<1343135163.18971.15.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B532CF769F2@szxeml528-mbx.china.huawei.com>
	<1343221925.18971.93.camel@zakaz.uk.xensource.com>
	<1343988628.21372.46.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B532CF76ACE@szxeml528-mbx.china.huawei.com>
	<1344259710.11339.39.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B532CF76B32@szxeml528-mbx.china.huawei.com>
	<1344334043.11339.85.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B532CF76B4F@szxeml528-mbx.china.huawei.com>
	<1344343342.11339.96.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B532CF76B9E@szxeml528-mbx.china.huawei.com>
	<1344416440.32142.7.camel@zakaz.uk.xensource.com>
In-Reply-To: <1344416440.32142.7.camel@zakaz.uk.xensource.com>
Accept-Language: zh-CN, en-US
Content-Language: zh-CN
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.135.65.30]
MIME-Version: 1.0
X-CFilter-Loop: Reflected
Cc: Yangxiaowei <xiaowei.yang@huawei.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Hongtao <bobby.hong@huawei.com>, Yechuan <yechuan@huawei.com>
Subject: Re: [Xen-devel] The hypercall will fail and return EFAULT when the
 page becomes COW by forking process in linux
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> From: Ian Campbell [mailto:Ian.Campbell@citrix.com]
> 
> It would be good to write this in a comment next to each of the
> xc_{interface,evtchn,gnttab,gntshr}_open() prototypes in the header
> (assuming it applies to all of them, since they all make hypercalls I
> expect it does and in any case it is easy to relax this restriction in
> the future if not)
> 
> Otherwise:
> Acked-by: Ian Campbell <ian.campbell@citrix.com>

I added the comment and your Acked-by to the patch.

The patch is:

# HG changeset patch
# Parent a5dfd924fcdb173a154dad9f37073c1de1302065
libxc: Set the VM_DONTCOPY flag on the hypercall buffer's VMA, to avoid the buffer becoming COW during a hypercall.

In a multi-threaded, multi-process environment (e.g. a process with two threads, where
thread A issues a hypercall while thread B calls fork() to create a child process), all
pages of the process, including the hypercall buffers, become COW after the fork. When
the hypervisor then performs the copy_to_user for thread A's hypercall, it hits the
write protection and the hypercall fails with EFAULT.

Fix:
1. Before the hypercall: use madvise(MADV_DONTFORK) so the hypercall buffer
   is not copied to the child process on fork.
2. After the hypercall: undo the effect of MADV_DONTFORK on the hypercall
   buffer with madvise(MADV_DOFORK).
3. Use mmap/munmap instead of malloc/free for allocating and freeing the
   buffer, bypassing libc.

Note:
A child process must not use an xc interface handle opened by its parent; it
should open a fresh handle if it wants to keep interacting with xc. Otherwise,
accessing the hypercall buffer cache in the handle may cause a segmentation fault.

Signed-off-by: Zhenguo Wang <wangzhenguo@huawei.com>
Signed-off-by: Xiaowei Yang <xiaowei.yang@huawei.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>

diff -r a5dfd924fcdb tools/libxc/xc_linux_osdep.c
--- a/tools/libxc/xc_linux_osdep.c	Tue Aug 07 13:52:10 2012 +0800
+++ b/tools/libxc/xc_linux_osdep.c	Wed Aug 08 19:31:53 2012 +0800
@@ -93,22 +93,20 @@ static void *linux_privcmd_alloc_hyperca
     size_t size = npages * XC_PAGE_SIZE;
     void *p;
 
-    p = xc_memalign(xch, XC_PAGE_SIZE, size);
-    if (!p)
-        return NULL;
+    /* Address returned by mmap is page aligned. */
+    p = mmap(NULL, size, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_LOCKED, -1, 0);
 
-    if ( mlock(p, size) < 0 )
-    {
-        free(p);
-        return NULL;
-    }
+    /* Do not copy the VMA to the child process on fork, so the pages cannot become COW during a hypercall. */
+    madvise(p, size, MADV_DONTFORK);
     return p;
 }
 
 static void linux_privcmd_free_hypercall_buffer(xc_interface *xch, xc_osdep_handle h, void *ptr, int npages)
 {
-    munlock(ptr, npages * XC_PAGE_SIZE);
-    free(ptr);
+    /* Restore the VMA flags; strictly speaking this may not be necessary. */
+    madvise(ptr, npages * XC_PAGE_SIZE, MADV_DOFORK);
+    
+    munmap(ptr, npages * XC_PAGE_SIZE);
 }
 
 static int linux_privcmd_hypercall(xc_interface *xch, xc_osdep_handle h, privcmd_hypercall_t *hypercall)
diff -r a5dfd924fcdb tools/libxc/xenctrl.h
--- a/tools/libxc/xenctrl.h	Tue Aug 07 13:52:10 2012 +0800
+++ b/tools/libxc/xenctrl.h	Wed Aug 08 19:31:53 2012 +0800
@@ -131,8 +131,11 @@ typedef enum xc_error_code xc_error_code
 
 /**
  * This function opens a handle to the hypervisor interface.  This function can
- * be called multiple times within a single process.  Multiple processes can
- * have an open hypervisor interface at the same time.
+ * be called multiple times within a single process.  Multiple processes can
+ * have an open hypervisor interface at the same time.  However, a child
+ * process must not use an xc interface handle opened by its parent; it should
+ * open a fresh handle if it wants to keep interacting with xc.  Otherwise,
+ * accessing the hypercall buffer cache in the handle may cause a segmentation fault.
  *
  * Each call to this function should have a corresponding call to
  * xc_interface_close().
@@ -908,6 +911,11 @@ int xc_evtchn_status(xc_interface *xch, 
  * Return a handle to the event channel driver, or -1 on failure, in which case
  * errno will be set appropriately.
  *
+ * Note:
+ * A child process must not use an xc evtchn handle opened by its parent; it
+ * should open a fresh handle to keep interacting with xc.  Otherwise, accessing
+ * the hypercall buffer cache in the handle may cause a segmentation fault.
+ *
  * Before Xen pre-4.1 this function would sometimes report errors with perror.
  */
 xc_evtchn *xc_evtchn_open(xentoollog_logger *logger,
@@ -1339,9 +1347,12 @@ int xc_domain_subscribe_for_suspend(
 
 /*
  * These functions sometimes log messages as above, but not always.
- */
-
-/*
+ *
+ * Note:
+ * A child process must not use an xc gnttab handle opened by its parent; it
+ * should open a fresh handle to keep interacting with xc.  Otherwise, accessing
+ * the hypercall buffer cache in the handle may cause a segmentation fault.
+ *
  * Return an fd onto the grant table driver.  Logs errors.
  */
 xc_gnttab *xc_gnttab_open(xentoollog_logger *logger,
@@ -1458,6 +1469,12 @@ grant_entry_v2_t *xc_gnttab_map_table_v2
 
 /*
  * Return an fd onto the grant sharing driver.  Logs errors.
+ *
+ * Note:
+ * A child process must not use an xc gntshr handle opened by its parent; it
+ * should open a fresh handle to keep interacting with xc.  Otherwise, accessing
+ * the hypercall buffer cache in the handle may cause a segmentation fault.
+ *
  */
 xc_gntshr *xc_gntshr_open(xentoollog_logger *logger,
 			  unsigned open_flags);
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 12:06:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 12:06:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz51x-0006rh-Vq; Wed, 08 Aug 2012 12:06:29 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Sz51w-0006rc-PK
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 12:06:29 +0000
Received: from [85.158.143.35:18784] by server-1.bemta-4.messagelabs.com id
	23/31-20198-44652205; Wed, 08 Aug 2012 12:06:28 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1344427587!18026646!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg1NTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1876 invoked from network); 8 Aug 2012 12:06:27 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Aug 2012 12:06:27 -0000
X-IronPort-AV: E=Sophos;i="4.77,732,1336348800"; d="scan'208";a="13907688"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Aug 2012 12:06:27 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Wed, 8 Aug 2012
	13:06:27 +0100
Message-ID: <1344427585.32142.34.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Wangzhenguo <wangzhenguo@huawei.com>
Date: Wed, 8 Aug 2012 13:06:25 +0100
In-Reply-To: <B44CA5218606DC4FA941D19CCEB27B532CF76BEE@szxeml528-mbx.china.huawei.com>
References: <B44CA5218606DC4FA941D19CCEB27B5323BEF12C@szxeml528-mbs.china.huawei.com>
	<1341221926.4625.12.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B5323BEF99A@szxeml528-mbs.china.huawei.com>
	<1343135163.18971.15.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B532CF769F2@szxeml528-mbx.china.huawei.com>
	<1343221925.18971.93.camel@zakaz.uk.xensource.com>
	<1343988628.21372.46.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B532CF76ACE@szxeml528-mbx.china.huawei.com>
	<1344259710.11339.39.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B532CF76B32@szxeml528-mbx.china.huawei.com>
	<1344334043.11339.85.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B532CF76B4F@szxeml528-mbx.china.huawei.com>
	<1344343342.11339.96.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B532CF76B9E@szxeml528-mbx.china.huawei.com>
	<1344416440.32142.7.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B532CF76BEE@szxeml528-mbx.china.huawei.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Yangxiaowei <xiaowei.yang@huawei.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Hongtao <bobby.hong@huawei.com>, Yechuan <yechuan@huawei.com>
Subject: Re: [Xen-devel] The hypercall will fail and return EFAULT when the
 page becomes COW by forking process in linux
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-08-08 at 12:36 +0100, Wangzhenguo wrote:
> > From: Ian Campbell [mailto:Ian.Campbell@citrix.com]
> > 
> > It would be good to write this in a comment next to each of the
> > xc_{interface,evtchn,gnttab,gntshr}_open() prototypes in the header
> > (assuming it applies to all of them, since they all make hypercalls I
> > expect it does and in any case it is easy to relax this restriction in
> > the future if not)
> > 
> > Otherwise:
> > Acked-by: Ian Campbell <ian.campbell@citrix.com>
> 
> I added a comment and your acked-by in patch 
> 
> The patch is
> 
> # HG changeset patch
> # Parent a5dfd924fcdb173a154dad9f37073c1de1302065
> libxc: Add VM_DONTCOPY flag of the VMA of the hypercall buffer, to avoid the hypercall buffer becoming COW on hypercall.
> 
> In multi-threads and multi-processes environment, e.g. the process has two threads, thread A 
> may call hypercall, thread B may call fork() to create child process. After forking, all pages 
> of the process including hypercall buffers are cow. The hypervisor calls copy_to_user in hypercall 
> in thread A context, will cause a write protection and return EFAULT error.
> 
> Fix:
> 1. Before hypercall: use MADV_DONTFORK of madvise syscall to make the hypercall buffer 
>    not to be copied to child process after fork. 
> 2. After hypercall: undo the effect of MADV_DONTFORK for the hypercall buffer by 
>    using MADV_DOFORK of madvise syscall.
> 3. Use mmap/munmap for memory alloc/free instead of malloc/free to bypass libc.
> 
> Note: 
> Chlid process do not use xc interface handle which is opend by parent process, it should open 

"Child" and "opened" (you made these typos pretty consistently
throughout ;-))

Please can you try and keep the commit message to 75-80 characters wide.

> a fresh handle if it wants to keep interacting with xc. Otherwise, it may cause segment fault 
> to access hypercall buffer cache in the handle.
> 
> Signed-off-by: Zhenguo Wang <wangzhenguo@huawei.com>
> Signed-off-by: Xiaowei Yang <xiaowei.yang@huawei.com>
> Acked-by: Ian Campbell <ian.campbell@citrix.com>
> 
> diff -r a5dfd924fcdb tools/libxc/xc_linux_osdep.c
> --- a/tools/libxc/xc_linux_osdep.c	Tue Aug 07 13:52:10 2012 +0800
> +++ b/tools/libxc/xc_linux_osdep.c	Wed Aug 08 19:31:53 2012 +0800
> @@ -93,22 +93,20 @@ static void *linux_privcmd_alloc_hyperca
>      size_t size = npages * XC_PAGE_SIZE;
>      void *p;
>  
> -    p = xc_memalign(xch, XC_PAGE_SIZE, size);
> -    if (!p)
> -        return NULL;
> +    /* Address returned by mmap is page aligned. */
> +    p = mmap(NULL, size, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_LOCKED, -1, 0);
>  
> -    if ( mlock(p, size) < 0 )
> -    {
> -        free(p);
> -        return NULL;
> -    }
> +    /* Do not copy the VMA to child process on fork. Avoid the page being COW on hypercall. */
> +    madvise(p, npages * XC_PAGE_SIZE, MADV_DONTFORK);
>      return p;
>  }
>  
>  static void linux_privcmd_free_hypercall_buffer(xc_interface *xch, xc_osdep_handle h, void *ptr, int npages)
>  {
> -    munlock(ptr, npages * XC_PAGE_SIZE);
> -    free(ptr);
> +    /* Recover the VMA flags. Maybe it's not necessary */
> +    madvise(ptr, npages * XC_PAGE_SIZE, MADV_DOFORK);
> +    
> +    munmap(ptr, npages * XC_PAGE_SIZE);
>  }
>  
>  static int linux_privcmd_hypercall(xc_interface *xch, xc_osdep_handle h, privcmd_hypercall_t *hypercall)
> diff -r a5dfd924fcdb tools/libxc/xenctrl.h
> --- a/tools/libxc/xenctrl.h	Tue Aug 07 13:52:10 2012 +0800
> +++ b/tools/libxc/xenctrl.h	Wed Aug 08 19:31:53 2012 +0800
> @@ -131,8 +131,11 @@ typedef enum xc_error_code xc_error_code
>  
>  /**
>   * This function opens a handle to the hypervisor interface.  This function can
> - * be called multiple times within a single process.  Multiple processes can
> - * have an open hypervisor interface at the same time.
> + * be called multiple times within a single process.  Multiple processes can not
> + * have an open hypervisor interface at the same time. Chlid process do not 
> + * use xc interface handle which is opend by parent process, it should open

This would be more naturally written as "Child processes must not
use..."

> + * a fresh handle if it wants to keep interacting with xc. Otherwise, it may 
> + * cause segment fault to access hypercall buffer cache in the handle.
>   *
>   * Each call to this function should have a corresponding call to
>   * xc_interface_close().
> @@ -908,6 +911,11 @@ int xc_evtchn_status(xc_interface *xch, 
>   * Return a handle to the event channel driver, or -1 on failure, in which case
>   * errno will be set appropriately.
>   *
> + * Note:
> + * Chlid process do not use xc evtchn handle which is opend by parent process, 
> + * it should open a fresh handle if it wants to keep interacting with xc. Otherwise, 
> + * it may cause segment fault to access hypercall buffer cache in the handle.
> + *
>   * Before Xen pre-4.1 this function would sometimes report errors with perror.
>   */
>  xc_evtchn *xc_evtchn_open(xentoollog_logger *logger,
> @@ -1339,9 +1347,12 @@ int xc_domain_subscribe_for_suspend(
>  
>  /*
>   * These functions sometimes log messages as above, but not always.
> - */
> -
> -/*
> + *
> + * Note:
> + * Chlid process do not use xc gnttab handle which is opend by parent process, 
> + * it should open a fresh handle if it wants to keep interacting with xc. Otherwise, 
> + * it may cause segment fault to access hypercall buffer cache in the handle.
> + *
>   * Return an fd onto the grant table driver.  Logs errors.
>   */
>  xc_gnttab *xc_gnttab_open(xentoollog_logger *logger,
> @@ -1458,6 +1469,12 @@ grant_entry_v2_t *xc_gnttab_map_table_v2
>  
>  /*
>   * Return an fd onto the grant sharing driver.  Logs errors.
> + *
> + * Note:
> + * Chlid process do not use xc gntshr handle which is opend by parent process, 
> + * it should open a fresh handle if it wants to keep interacting with xc. Otherwise, 
> + * it may cause segment fault to access hypercall buffer cache in the handle.
> + *
>   */
>  xc_gntshr *xc_gntshr_open(xentoollog_logger *logger,
>  			  unsigned open_flags);



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 12:13:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 12:13:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz58Q-00070W-Py; Wed, 08 Aug 2012 12:13:10 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Sz58O-00070L-FP
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 12:13:08 +0000
Received: from [85.158.143.35:58099] by server-2.bemta-4.messagelabs.com id
	CC/01-19021-3D752205; Wed, 08 Aug 2012 12:13:07 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1344427986!12733789!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg1NTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8046 invoked from network); 8 Aug 2012 12:13:07 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Aug 2012 12:13:07 -0000
X-IronPort-AV: E=Sophos;i="4.77,732,1336348800"; d="scan'208";a="13907835"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Aug 2012 12:13:06 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 8 Aug 2012 13:13:06 +0100
Date: Wed, 8 Aug 2012 13:12:42 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Jan Beulich <JBeulich@suse.com>
In-Reply-To: <50212C2B02000078000933CE@nat28.tlf.novell.com>
Message-ID: <alpine.DEB.2.02.1208071942480.4645@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
	<1344262325-26598-3-git-send-email-stefano.stabellini@eu.citrix.com>
	<5020011F0200007800092FD7@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208071257330.4645@kaball.uk.xensource.com>
	<50212C2B02000078000933CE@nat28.tlf.novell.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "Tim Deegan \(3P\)" <Tim.Deegan@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 3/5] xen: few more xen_ulong_t substitutions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 7 Aug 2012, Jan Beulich wrote:
> > Considering that each field of a multicall_entry is usually passed as an
> > hypercall parameter, they should all remain unsigned long.
> 
> That'll give you subtle bugs I'm afraid: do_memory_op()'s
> encoding of a continuation start extent (into the 'cmd' value),
> for example, depends on being able to store the full value into
> the command field of the multicall structure. The limit checking
> of the permitted number of extents therefore is different
> between native (ULONG_MAX >> MEMOP_EXTENT_SHIFT) and
> compat (UINT_MAX >> MEMOP_EXTENT_SHIFT). I would
> neither find it very appealing to have do_memory_op() adjusted
> for dealing with this new special case, nor am I sure that's the
> only place your approach would cause problems if you excluded
> the multicall structure from the model change.

Given the way the continuation is implemented, the same problem can
also happen on x86.
In fact, considering that we don't use any compat code, and that
do_memory_op has the following check:

    /* Is size too large for us to encode a continuation? */
    if ( reservation.nr_extents > (ULONG_MAX >> MEMOP_EXTENT_SHIFT) )
        return start_extent;

it would work as-is for ARM too.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 12:18:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 12:18:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz5D5-00078l-Ga; Wed, 08 Aug 2012 12:17:59 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Sz5D4-00078e-Kp
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 12:17:58 +0000
Received: from [85.158.139.83:43195] by server-4.bemta-5.messagelabs.com id
	80/FD-32474-5F852205; Wed, 08 Aug 2012 12:17:57 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-182.messagelabs.com!1344428277!28157914!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg1NTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14328 invoked from network); 8 Aug 2012 12:17:57 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-4.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Aug 2012 12:17:57 -0000
X-IronPort-AV: E=Sophos;i="4.77,732,1336348800"; d="scan'208";a="13907935"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Aug 2012 12:17:56 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Wed, 8 Aug 2012
	13:17:56 +0100
Message-ID: <1344428274.32142.36.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Date: Wed, 8 Aug 2012 13:17:54 +0100
In-Reply-To: <alpine.DEB.2.02.1208071942480.4645@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
	<1344262325-26598-3-git-send-email-stefano.stabellini@eu.citrix.com>
	<5020011F0200007800092FD7@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208071257330.4645@kaball.uk.xensource.com>
	<50212C2B02000078000933CE@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208071942480.4645@kaball.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "Tim Deegan \(3P\)" <Tim.Deegan@citrix.com>,
	Jan Beulich <JBeulich@suse.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 3/5] xen: few more xen_ulong_t substitutions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-08-08 at 13:12 +0100, Stefano Stabellini wrote:
> On Tue, 7 Aug 2012, Jan Beulich wrote:
> > > Considering that each field of a multicall_entry is usually passed as an
> > > hypercall parameter, they should all remain unsigned long.
> > 
> > That'll give you subtle bugs I'm afraid: do_memory_op()'s
> > encoding of a continuation start extent (into the 'cmd' value),
> > for example, depends on being able to store the full value into
> > the command field of the multicall structure. The limit checking
> > of the permitted number of extents therefore is different
> > between native (ULONG_MAX >> MEMOP_EXTENT_SHIFT) and
> > compat (UINT_MAX >> MEMOP_EXTENT_SHIFT). I would
> > neither find it very appealing to have do_memory_op() adjusted
> > for dealing with this new special case, nor am I sure that's the
> > only place your approach would cause problems if you excluded
> > the multicall structure from the model change.
> 
> Given the way the continuation is implemented, the same problem can
> also happen on x86.
> In fact, considering that we don't use any compat code, and that
> do_memory_op has the following check:
> 
>     /* Is size too large for us to encode a continuation? */
>     if ( reservation.nr_extents > (ULONG_MAX >> MEMOP_EXTENT_SHIFT) )
>         return start_extent;
> 
> it would work as-is for ARM too.

For 32-on-32 ARM it'll work. It'll need changing for 32-on-64 ARM
though. We might even consider making the limit the smaller number even
for 64-on-64 ARM.

We should make the limit MEMOP_MAX_EXTENTS and allow it to be per-arch;
on x86 it'll remain ULONG_MAX >> MEMOP_EXTENT_SHIFT.
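
[Editor's note: for readers skimming the thread, the encoding under discussion
can be sketched in C. Names mirror xen/include/public/memory.h, where
MEMOP_EXTENT_SHIFT is 6; treat the details as illustrative, not as the exact
Xen implementation.]

```c
#include <limits.h>

/* Illustrative sketch of the memory_op continuation encoding discussed
 * above: Xen packs the sub-op and the continuation start extent into the
 * single 'cmd' value passed to do_memory_op(). */
#define MEMOP_EXTENT_SHIFT 6
#define MEMOP_CMD_MASK     ((1UL << MEMOP_EXTENT_SHIFT) - 1)

static unsigned long memop_encode(unsigned long op, unsigned long start_extent)
{
    /* The start extent lives in the bits above the sub-op, hence the
     * limit check quoted above: nr_extents may be at most
     * ULONG_MAX >> MEMOP_EXTENT_SHIFT natively, but only
     * UINT_MAX >> MEMOP_EXTENT_SHIFT when 'cmd' is round-tripped
     * through a 32-bit multicall field. */
    return op | (start_extent << MEMOP_EXTENT_SHIFT);
}

static unsigned long memop_op(unsigned long cmd)
{
    return cmd & MEMOP_CMD_MASK;
}

static unsigned long memop_start_extent(unsigned long cmd)
{
    return cmd >> MEMOP_EXTENT_SHIFT;
}
```

A 32-bit guest storing such a cmd into a 32-bit multicall_entry field would
silently truncate any start extent above UINT_MAX >> MEMOP_EXTENT_SHIFT, which
is the subtle bug Jan points at.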

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 12:41:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 12:41:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz5Zk-0007N5-JF; Wed, 08 Aug 2012 12:41:24 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dave.martin@linaro.org>) id 1Sz5Zj-0007N0-Aw
	for xen-devel@lists.xensource.com; Wed, 08 Aug 2012 12:41:23 +0000
Received: from [85.158.138.51:49098] by server-1.bemta-3.messagelabs.com id
	15/C6-29224-E6E52205; Wed, 08 Aug 2012 12:41:18 +0000
X-Env-Sender: dave.martin@linaro.org
X-Msg-Ref: server-2.tower-174.messagelabs.com!1344429676!31030859!1
X-Originating-IP: [209.85.215.171]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4160 invoked from network); 8 Aug 2012 12:41:16 -0000
Received: from mail-ey0-f171.google.com (HELO mail-ey0-f171.google.com)
	(209.85.215.171)
	by server-2.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Aug 2012 12:41:16 -0000
Received: by eaah11 with SMTP id h11so223734eaa.30
	for <xen-devel@lists.xensource.com>;
	Wed, 08 Aug 2012 05:41:16 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=date:from:to:cc:subject:message-id:references:mime-version
	:content-type:content-disposition:in-reply-to:user-agent
	:x-gm-message-state;
	bh=PgxZCV5JBEDUrRPvkUJRhFbg8mofdMi3KZ8zgVZYtVk=;
	b=i3h/U/Y+sTaFitDppg+HoT4Pbwhtj8HsyMOqyPGEesAhidngovlnxD3vruqfvmagO4
	H8mKrNGzeKtjCdTa0Z57L1hnReprxQl4Ghk4S14qpsXubUjZbjHFMh4YCZG2cU/U6Ef5
	yUDyvie0UxmyEdcb5O7jaYXZHTCxOoIv8kjS8ITQxg21NRY/orLUXZqE/9CiWbYxnQF5
	Yu/t0g1FFe2vq/Mi86VOI2fiQbI0GeCSEBSslze/e5l+waIPCtZfQcFGLu484dqDt6tr
	gJ3mV9ZAsEiAjCPslR6h8Z8LcKQeRYVdb39e2ClEBVqQkRcvUmOfDNeqWLuP1XLLoY1E
	GjBA==
Received: by 10.14.175.130 with SMTP id z2mr22230890eel.0.1344429675944;
	Wed, 08 Aug 2012 05:41:15 -0700 (PDT)
Received: from linaro.org (fw-lnat.cambridge.arm.com. [217.140.96.63])
	by mx.google.com with ESMTPS id k41sm64359664eep.13.2012.08.08.05.41.13
	(version=SSLv3 cipher=OTHER); Wed, 08 Aug 2012 05:41:15 -0700 (PDT)
Date: Wed, 8 Aug 2012 13:41:11 +0100
From: Dave Martin <dave.martin@linaro.org>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20120808124111.GB2134@linaro.org>
References: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
	<1344263246-28036-2-git-send-email-stefano.stabellini@eu.citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1344263246-28036-2-git-send-email-stefano.stabellini@eu.citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Gm-Message-State: ALoCoQmHw2J8FiY/ENk4O64BnGegVL3dRbXIrtdQ9US4M1vaFKsAkDGLsxGAyRkCTtSAtO/7A0sA
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, konrad.wilk@oracle.com,
	catalin.marinas@arm.com, tim@xen.org, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH v2 02/23] xen/arm: hypercalls
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Aug 06, 2012 at 03:27:05PM +0100, Stefano Stabellini wrote:
> Use r12 to pass the hypercall number to the hypervisor.
> 
> We need a register to pass the hypercall number because we might not
> know it at compile time and HVC only takes an immediate argument.
> 
> Among the available registers r12 seems to be the best choice because it
> is defined as "intra-procedure call scratch register".
> 
> Use the ISS to pass a hypervisor-specific tag.
> 
> Changes in v2:
> - define a HYPERCALL macro for 5-argument hypercall wrappers, even if
> at the moment it is unused;
> - use ldm instead of pop;
> - fix up comments.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> ---
>  arch/arm/include/asm/xen/hypercall.h |   50 ++++++++++++++++
>  arch/arm/xen/Makefile                |    2 +-
>  arch/arm/xen/hypercall.S             |  106 ++++++++++++++++++++++++++++++++++
>  3 files changed, 157 insertions(+), 1 deletions(-)
>  create mode 100644 arch/arm/include/asm/xen/hypercall.h
>  create mode 100644 arch/arm/xen/hypercall.S

[...]

> diff --git a/arch/arm/xen/hypercall.S b/arch/arm/xen/hypercall.S
> new file mode 100644
> index 0000000..074f5ed
> --- /dev/null
> +++ b/arch/arm/xen/hypercall.S
> @@ -0,0 +1,106 @@
> +/******************************************************************************
> + * hypercall.S
> + *
> + * Xen hypercall wrappers
> + *
> + * Stefano Stabellini <stefano.stabellini@eu.citrix.com>, Citrix, 2012
> + *
> + * This program is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU General Public License version 2
> + * as published by the Free Software Foundation; or, when distributed
> + * separately from the Linux kernel or incorporated into other
> + * software packages, subject to the following license:
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a copy
> + * of this source file (the "Software"), to deal in the Software without
> + * restriction, including without limitation the rights to use, copy, modify,
> + * merge, publish, distribute, sublicense, and/or sell copies of the Software,
> + * and to permit persons to whom the Software is furnished to do so, subject to
> + * the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be included in
> + * all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
> + * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
> + * IN THE SOFTWARE.
> + */
> +
> +/*
> + * The Xen hypercall calling convention is very similar to the ARM
> + * procedure calling convention: the first parameter is passed in r0, the
> + * second in r1, the third in r2 and the fourth in r3. Considering that
> + * Xen hypercalls have 5 arguments at most, the fifth parameter is passed
> + * in r4, differently from the procedure calling convention of using the
> + * stack for that case.
> + *
> + * The hypercall number is passed in r12.
> + *
> + * The return value is in r0.
> + *
> + * The hvc ISS is required to be 0xEA1, that is the Xen specific ARM
> + * hypercall tag.
> + */
> +
> +#include <linux/linkage.h>
> +#include <asm/assembler.h>
> +#include <xen/interface/xen.h>
> +
> +
> +/* HVC 0xEA1 */
> +#ifdef CONFIG_THUMB2_KERNEL
> +#define xen_hvc .word 0xf7e08ea1
> +#else
> +#define xen_hvc .word 0xe140ea71
> +#endif

Consider using my opcode injection helpers patch for this (see
separate repost: [PATCH v2 REPOST 0/4] ARM: opcodes: Facilitate custom
opcode injection), assuming that nobody objects to it.  This should mean
that the right opcodes get generated when building a kernel for a big-
endian target for example.

I believe the __HVC(imm) macro which I put in <asm/opcodes-virt.h> as an
example should do what you need in this case.
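
[Editor's note: as a cross-check of the magic constants in the hunk above, the
two HVC encodings decompose as sketched below. This is illustrative only; the
kernel's __HVC() helper in <asm/opcodes-virt.h> produces the equivalent
opcodes at build time.]

```c
#include <stdint.h>

/* ARM (A32) HVC #imm16 encoding:
 * cond=1110 | 0001 0100 | imm16[15:4] | 0111 | imm16[3:0] */
static uint32_t arm_hvc(uint16_t imm16)
{
    return 0xe1400070u | ((uint32_t)(imm16 & 0xfff0) << 4) | (imm16 & 0xfu);
}

/* Thumb2 (T32) HVC #imm16 encoding:
 * 1111 0111 1110 imm16[15:12] | 1000 imm16[11:0] */
static uint32_t thumb2_hvc(uint16_t imm16)
{
    return 0xf7e08000u | ((uint32_t)(imm16 & 0xf000) << 4) | (imm16 & 0xfffu);
}
```

With imm16 = 0xEA1 these reproduce the patch's constants: arm_hvc(0x0ea1) is
0xe140ea71 and thumb2_hvc(0x0ea1) is 0xf7e08ea1.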

> +
> +#define HYPERCALL_SIMPLE(hypercall)		\
> +ENTRY(HYPERVISOR_##hypercall)			\
> +	mov r12, #__HYPERVISOR_##hypercall;	\
> +	xen_hvc;							\
> +	mov pc, lr;							\
> +ENDPROC(HYPERVISOR_##hypercall)
> +
> +#define HYPERCALL0 HYPERCALL_SIMPLE
> +#define HYPERCALL1 HYPERCALL_SIMPLE
> +#define HYPERCALL2 HYPERCALL_SIMPLE
> +#define HYPERCALL3 HYPERCALL_SIMPLE
> +#define HYPERCALL4 HYPERCALL_SIMPLE
> +
> +#define HYPERCALL5(hypercall)			\
> +ENTRY(HYPERVISOR_##hypercall)			\
> +	stmdb sp!, {r4}						\
> +	ldr r4, [sp, #4]					\
> +	mov r12, #__HYPERVISOR_##hypercall;	\
> +	xen_hvc								\
> +	ldm sp!, {r4}						\
> +	mov pc, lr							\
> +ENDPROC(HYPERVISOR_##hypercall)
> +
> +                .text
> +
> +HYPERCALL2(xen_version);
> +HYPERCALL3(console_io);
> +HYPERCALL3(grant_table_op);
> +HYPERCALL2(sched_op);
> +HYPERCALL2(event_channel_op);
> +HYPERCALL2(hvm_op);
> +HYPERCALL2(memory_op);
> +HYPERCALL2(physdev_op);
> +
> +ENTRY(privcmd_call)
> +	stmdb sp!, {r4}
> +	mov r12, r0
> +	mov r0, r1
> +	mov r1, r2
> +	mov r2, r3
> +	ldr r3, [sp, #8]
> +	ldr r4, [sp, #4]
> +	xen_hvc
> +	ldm sp!, {r4}
> +	mov pc, lr

Note that the preferred entry/exit sequences in such cases are:

	stmfd	sp!, {r4,lr}
	...
	ldmfd	sp!, {r4,pc}

...but it works either way.  I wouldn't bother to change it unless you
have other changes to make too.


Cheers
---Dave


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 13:29:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 13:29:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz6K0-0007et-KM; Wed, 08 Aug 2012 13:29:12 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mcd40@cam.ac.uk>) id 1Sz6Hl-0007eK-WC
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 13:26:54 +0000
Received: from [85.158.138.51:33557] by server-12.bemta-3.messagelabs.com id
	57/E5-21301-D1962205; Wed, 08 Aug 2012 13:26:53 +0000
X-Env-Sender: mcd40@cam.ac.uk
X-Msg-Ref: server-12.tower-174.messagelabs.com!1344432411!21511992!1
X-Originating-IP: [131.111.8.150]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMxLjExMS44LjE1MCA9PiAxMTcwMTM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18170 invoked from network); 8 Aug 2012 13:26:52 -0000
Received: from ppsw-50.csi.cam.ac.uk (HELO ppsw-50.csi.cam.ac.uk)
	(131.111.8.150) by server-12.tower-174.messagelabs.com with SMTP;
	8 Aug 2012 13:26:52 -0000
X-Cam-AntiVirus: no malware found
X-Cam-SpamDetails: not scanned
X-Cam-ScannerInfo: http://www.cam.ac.uk/cs/email/scanner/
Received: from mcd40.sp.phy.cam.ac.uk ([131.111.73.213]:61878)
	by ppsw-50.csi.cam.ac.uk (smtp.hermes.cam.ac.uk [131.111.8.157]:465)
	with esmtpsa (PLAIN:mcd40) (TLSv1:DHE-RSA-CAMELLIA256-SHA:256)
	id 1Sz6Hj-0001py-rc (Exim 4.72) for xen-devel@lists.xen.org
	(return-path <mcd40@cam.ac.uk>); Wed, 08 Aug 2012 14:26:51 +0100
Message-ID: <50226909.9090804@cam.ac.uk>
Date: Wed, 08 Aug 2012 14:26:33 +0100
From: Matthew Dean <mcd40@cam.ac.uk>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:15.0) Gecko/20120731 Thunderbird/15.0
MIME-Version: 1.0
To: xen-devel@lists.xen.org
X-Mailman-Approved-At: Wed, 08 Aug 2012 13:29:11 +0000
Subject: [Xen-devel] GPU passthrough with Xen 4.2 on Ubuntu 12.04
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5433275568835359528=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.
--===============5433275568835359528==
Content-Type: multipart/alternative;
 boundary="------------000500090909060000050000"

This is a multi-part message in MIME format.
--------------000500090909060000050000
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit

I have been trying to set up GPU passthrough for a couple of days now 
with little luck.  I'm hoping someone can shed some light on where I 
may be going wrong, or at least identify some genuine bugs.  Essentially 
PCI passthrough works for me but GPU passthrough doesn't.

My system is currently configured as follows (please ask if you need 
further details)

Asrock Z77 e-Itx (stock BIOS; not sure of the version, but I can find 
out.  VT-d is enabled)
i7 3770S
AMD Radeon HD 7750

Dom0 - Ubuntu server 12.04 with desktop environment and the following 
extra packages installed before the build

build-essential zlib1g-dev python-dev libncurses5-dev libssl-dev 
libx11-dev uuid-dev libyajl-dev libaio-dev libglib2.0-dev pkg-config 
bridge-utils iproute udev bison flex gettext bin86 bcc iasl markdown 
ocaml-nox ocaml-findlib git gcc-multilib texinfo pciutils-dev gawk 
libcurl3 libcurl4-openssl-dev bzip2 module-init-tools transfig tgif 
texlive-latex-base texlive-latex-recommended texlive-fonts-extra 
texlive-fonts-recommended mercurial make gcc libc6-dev python 
python-twisted patch libsdl-dev libjpeg62-dev libbz2-dev e2fslibs-dev 
git-core xz-utils liblzma-dev liblzo2-dev libvncserver-dev

I've used the stock Ubuntu provided kernel which is version 
3.2.0-27-generic.  Dom0 is connected to a display via the intel 
integrated graphics.

Xen 4.2 has been built from xen-unstable tag 4.2.0-rc1 changeset 25689 
using the following commands:

./configure
make world
make install

I had to manually add a line to /etc/fstab to get /proc/xen to mount on 
startup.
Modules xen-evtchn, xen-gntdev and xen-pciback were set up to load on 
boot in /etc/modules.
I've then set up xencommons to start on boot: "update-rc.d xencommons 
defaults 19 18"

Grub2 has then been set up to automatically boot the Xen kernel.  I've 
also had to add the option xsave=0 to the Xen boot command line to get 
things to boot.

On restart everything looks good. "xl list" returns the Dom0 domain 
only.  "cat /proc/xen/capabilities" returns control_d.  I've created a 
windows 7 domU without any pci passthrough and successfully installed 
windows 7 ultimate.  The config file looks like:

builder='hvm'
vcpus = 4
memory = 8192
shadow_memory = 48
name = "xenwin7"
vif = [ 'bridge=br0' ]
acpi = 1
apic = 1
disk = [ 'file:/usr/local/xenImages/xenwin7.img,hda,w']
boot="c"
sdl=0
vnc=1
vncconsole=1
vncpasswd=''
viridian=1
usb=1
serial='pty'
usbdevice='tablet'
on_poweroff="destroy"
on_reboot="restart"
on_crash="destroy"

I would like to be able to pass through the HD 7750 and a USB controller 
to the VM.  I see from lspci there are three devices I need to pass 
through: two for the GPU (the card itself and the HD audio for the HDMI 
device) and one for the USB controller.  The bus/device IDs are:

0000:01:00.0
0000:01:00.1
0000:00:14.0

There are other usb controllers but I would like to leave them with 
dom0.  I can successfully configure the devices for passthrough using 
"xl pci-assignable-add".  In the config I then add the line:

  pci=["01:00.0","01:00.1","00:14.0"]

When I boot the VM everything seems to work fine.  Windows picks up all 
three devices and I can install drivers for them.  The USB controller 
works flawlessly and I can use an attached mouse and keyboard.  The 
Radeon card requires a restart, after which an attached display comes to 
life and I have 3D acceleration. Restarting the VM seems to work OK.  If 
however I shut down the VM I have no end of problems.  On any subsequent 
startup the VM struggles to get past the Windows splash screen, waiting 
much longer than usual.  During this period dom0 is sluggish regarding 
mouse and keyboard input even though CPU and memory usage are very low.  
When Windows finally loads I have no active display and I have to view 
the VM via VNC.  In Device Manager I find that Windows has disabled the 
GPU, saying there are not enough resources to run the card.  From this 
point onwards I can do nothing to get the GPU working again aside from 
removing the device, manually deleting the drivers, and starting again.

Note that at this point I am only trying to do secondary passthrough, 
though I would ideally like to get to the point of doing primary 
passthrough.  Adding the line gfx_passthru=1 to the machine config at 
any point in all this, however, just prevents the VM from booting 
entirely; when I VNC in all I get is a qemu prompt and I never get any 
output on the real display.

Any suggestions as to how to get this to work would be greatly 
appreciated as I've hit a bit of a brick wall.  I should also say that I 
have managed to get secondary passthrough working using Debian Wheezy 
and the repository version of xen 4.1.  In that case though Dom0 didn't 
boot reliably.

Sincerely,

Matthew Dean

--------------000500090909060000050000--


--===============5433275568835359528==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5433275568835359528==--


From xen-devel-bounces@lists.xen.org Wed Aug 08 13:29:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 13:29:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz6K0-0007et-KM; Wed, 08 Aug 2012 13:29:12 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mcd40@cam.ac.uk>) id 1Sz6Hl-0007eK-WC
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 13:26:54 +0000
Received: from [85.158.138.51:33557] by server-12.bemta-3.messagelabs.com id
	57/E5-21301-D1962205; Wed, 08 Aug 2012 13:26:53 +0000
X-Env-Sender: mcd40@cam.ac.uk
X-Msg-Ref: server-12.tower-174.messagelabs.com!1344432411!21511992!1
X-Originating-IP: [131.111.8.150]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMxLjExMS44LjE1MCA9PiAxMTcwMTM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18170 invoked from network); 8 Aug 2012 13:26:52 -0000
Received: from ppsw-50.csi.cam.ac.uk (HELO ppsw-50.csi.cam.ac.uk)
	(131.111.8.150) by server-12.tower-174.messagelabs.com with SMTP;
	8 Aug 2012 13:26:52 -0000
X-Cam-AntiVirus: no malware found
X-Cam-SpamDetails: not scanned
X-Cam-ScannerInfo: http://www.cam.ac.uk/cs/email/scanner/
Received: from mcd40.sp.phy.cam.ac.uk ([131.111.73.213]:61878)
	by ppsw-50.csi.cam.ac.uk (smtp.hermes.cam.ac.uk [131.111.8.157]:465)
	with esmtpsa (PLAIN:mcd40) (TLSv1:DHE-RSA-CAMELLIA256-SHA:256)
	id 1Sz6Hj-0001py-rc (Exim 4.72) for xen-devel@lists.xen.org
	(return-path <mcd40@cam.ac.uk>); Wed, 08 Aug 2012 14:26:51 +0100
Message-ID: <50226909.9090804@cam.ac.uk>
Date: Wed, 08 Aug 2012 14:26:33 +0100
From: Matthew Dean <mcd40@cam.ac.uk>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:15.0) Gecko/20120731 Thunderbird/15.0
MIME-Version: 1.0
To: xen-devel@lists.xen.org
X-Mailman-Approved-At: Wed, 08 Aug 2012 13:29:11 +0000
Subject: [Xen-devel] GPU passthrough with Xen 4.2 on Ubuntu 12.04
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5433275568835359528=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.
--===============5433275568835359528==
Content-Type: multipart/alternative;
 boundary="------------000500090909060000050000"

This is a multi-part message in MIME format.
--------------000500090909060000050000
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit

I have been trying to setup GPU passthrough for a couple of days now 
with little luck.  I'm hoping someone can shed some light as to where I 
may be going wrong or at least identify some genuine bugs. Essentially 
pci passthrough works for me but gpu passthrough doesn't.

My system is currently configured as follows (please ask if you need 
further details)

Asrock Z77 e-Itx (stock bios, not sure the version but I can find out.  
vt-d is enabled)
i7 3770S
AMD Radeon HD 7750

Dom0 - Ubuntu server 12.04 with desktop environment and the following 
extra packages installed before the build

build-essential zlib1g-dev python-dev libncurses5-dev libssl-dev 
libx11-dev uuid-dev libyajl-dev libaio-dev libglib2.0-dev pkg-config 
bridge-utils iproute udev bison flex gettext bin86 bcc iasl markdown 
ocaml-nox ocaml-findlib git gcc-multilib texinfo pciutils-dev gawk 
libcurl3 libcurl4-openssl-dev bzip2 module-init-tools transfig tgif 
texlive-latex-base texlive-latex-recommended texlive-fonts-extra 
texlive-fonts-recommended mercurial make gcc libc6-dev python 
python-twisted patch libsdl-dev libjpeg62-dev libbz2-dev e2fslibs-dev 
git-core xz-utils liblzma-dev liblzo2-dev libvncserver-dev

I've used the stock Ubuntu provided kernel which is version 
3.2.0-27-generic.  Dom0 is connected to a display via the intel 
integrated graphics.

xen 4.2 has been built from the xen-unstable tag 4.2.0-rc1 (changeset 
25689) using the following commands:

./configure
make world
make install

I had to manually add a line to /etc/fstab to get /proc/xen to mount on 
startup.
The modules xen-evtchn, xen-gntdev and xen-pciback were set up to load 
on boot in /etc/modules.
I then set up xencommons to start on boot: "update-rc.d xencommons 
defaults 19 18"

Grub2 has then been set up to automatically boot the xen kernel. I've 
also had to add the option xsave=0 to the xen boot command line to get 
things to boot.

On restart everything looks good. "xl list" returns the Dom0 domain 
only.  "cat /proc/xen/capabilities" returns control_d.  I've created a 
Windows 7 domU without any pci passthrough and successfully installed 
Windows 7 Ultimate.  The config file looks like:

builder='hvm'
vcpus = 4
memory = 8192
shadow_memory = 48
name = "xenwin7"
vif = [ 'bridge=br0' ]
acpi = 1
apic = 1
disk = [ 'file:/usr/local/xenImages/xenwin7.img,hda,w']
boot="c"
sdl=0
vnc=1
vncconsole=1
vncpasswd=''
viridian=1
usb=1
serial='pty'
usbdevice='tablet'
on_poweroff="destroy"
on_reboot="restart"
on_crash="destroy"

I would like to be able to pass through the HD 7750 and a USB controller 
to the VM.  I see from lspci that there are three devices I need to pass 
through: two for the GPU (the card itself and the HDMI audio device) and 
one for the USB controller.  The bus/device IDs are:

0000:01:00.0
0000:01:00.1
0000:00:14.0

There are other usb controllers but I would like to leave them with 
dom0.  I can successfully configure the devices for passthrough using 
"xl pci-assignable-add".  In the config I then add the line:

  pci=["01:00.0","01:00.1","00:14.0"]

When I boot the VM everything seems to work fine.  Windows picks up all 
three devices and I can install drivers for them.  The USB controller 
works flawlessly and I can use an attached mouse and keyboard.  The 
Radeon card requires a restart, after which an attached display comes to 
life and I have 3D acceleration. Restarting the VM seems to work OK.  If 
however I shut down the VM I have no end of problems.  On any subsequent 
startup the VM struggles to get past the Windows splash screen, waiting 
much longer than usual.  During this period dom0 is sluggish regarding 
mouse and keyboard input even though CPU and memory usage are very low.  
When Windows finally loads I have no active display and I have to view 
the VM via VNC.  In Device Manager I find that Windows has disabled the 
GPU, saying there are not enough resources to run the card.  From this 
point onwards I can do nothing to get the GPU working again aside from 
removing the device, manually deleting the drivers, and starting again.

Note that at this point I am only trying to do secondary passthrough, 
though I would ideally like to get to the point of doing primary 
passthrough.  Adding the line gfx_passthru=1 to the machine config at 
any point in all this, however, just prevents the VM from booting 
entirely; when I VNC in all I get is a qemu prompt and I never get any 
output on the real display.

Any suggestions as to how to get this to work would be greatly 
appreciated as I've hit a bit of a brick wall.  I should also say that I 
have managed to get secondary passthrough working using Debian Wheezy 
and the repository version of xen 4.1.  In that case though Dom0 didn't 
boot reliably.

Sincerely,

Matthew Dean

--------------000500090909060000050000--


--===============5433275568835359528==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5433275568835359528==--


From xen-devel-bounces@lists.xen.org Wed Aug 08 13:42:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 13:42:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz6Wa-0007sk-4i; Wed, 08 Aug 2012 13:42:12 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1Sz6WZ-0007sc-Aw
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 13:42:11 +0000
Received: from [85.158.139.83:8225] by server-5.bemta-5.messagelabs.com id
	23/A7-03096-2BC62205; Wed, 08 Aug 2012 13:42:10 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-7.tower-182.messagelabs.com!1344433329!25201822!1
X-Originating-IP: [192.89.123.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjg5LjEyMy4yNSA9PiA0NzA4NDM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11213 invoked from network); 8 Aug 2012 13:42:10 -0000
Received: from smtp.tele.fi (HELO smtp.tele.fi) (192.89.123.25)
	by server-7.tower-182.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 8 Aug 2012 13:42:10 -0000
X-Originating-Ip: [194.89.68.22]
Received: from ydin.reaktio.net (reaktio.net [194.89.68.22])
	by smtp.tele.fi (Postfix) with ESMTP id 4DF591647;
	Wed,  8 Aug 2012 16:42:09 +0300 (EEST)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id 09F9F2005D; Wed,  8 Aug 2012 16:42:09 +0300 (EEST)
Date: Wed, 8 Aug 2012 16:42:08 +0300
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: Matthew Dean <mcd40@cam.ac.uk>
Message-ID: <20120808134208.GI19851@reaktio.net>
References: <50226909.9090804@cam.ac.uk>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50226909.9090804@cam.ac.uk>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] GPU passthrough with Xen 4.2 on Ubuntu 12.04
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Aug 08, 2012 at 02:26:33PM +0100, Matthew Dean wrote:
> 
>    I've used the stock Ubuntu provided kernel which is version
>    3.2.0-27-generic.  Dom0 is connected to a display via the intel integrated
>    graphics.
> 

You might want to try Linux 3.4.x or 3.5.x dom0 kernels as well;
I think there have been some fixes in xen-pciback.


> 
>    When I boot the VM everything seems to work fine.  Windows picks up all
>    three devices and I can install drivers for them.  The USB controller
>    works flawlessly and I can use an attached mouse and keyboard.  The radeon
>    card requires a restart after which an attached display comes to life and
>    I have 3D acceleration.  Restarting the VM seems to work OK.  If however I
>    shut down the VM I have no end of problems.  On any subsequent startup the
>    vm struggles to get past the windows splash screen, waiting much longer
>    than usual.  During this period dom0 is sluggish regarding mouse and
>    keyboard input even though cpu and memory usage are very low.  When
>    windows finally loads I have no active display and I have to view the VM
>    via VNC.  In device manage I find that windows has disabled the GPU saying
>    there are not enough resources to run the card.  From this point onwards I
>    can do nothing to get the gpu working again aside from removing the
>    device, manually deleting the drivers, and starting again.
> 

Do you get any errors from Xen (xl dmesg), or from dom0 kernel (dmesg) ? 

Do you have a serial console? 


>    Note that at this point I am only trying to do secondary passthrough
>    though I would ideally like to get to the point of doing primary
>    passthrough.  Adding the line gfx_passthru=1 at any point in all this to
>    the machine config however just prevents the VM from booting entirely;
>    when I VNC in all I get is a qemu prompt and I never get any ouput to the
>    real display.
> 

AMD/ATI primary passthru requires extra patches to the Xen qemu-dm;
those are not included out of the box yet.

>    Any suggestions as to how to get this to work would be greatly appreciated
>    as I've hit a bit of a brick wall.  I should also say that I have managed
>    to get secondary passthrough working using Debian Wheezy and the
>    repository version of xen 4.1.  In that case though Dom0 didn't boot
>    reliably.
> 

-- Pasi


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 13:42:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 13:42:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz6Wa-0007sk-4i; Wed, 08 Aug 2012 13:42:12 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1Sz6WZ-0007sc-Aw
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 13:42:11 +0000
Received: from [85.158.139.83:8225] by server-5.bemta-5.messagelabs.com id
	23/A7-03096-2BC62205; Wed, 08 Aug 2012 13:42:10 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-7.tower-182.messagelabs.com!1344433329!25201822!1
X-Originating-IP: [192.89.123.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjg5LjEyMy4yNSA9PiA0NzA4NDM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11213 invoked from network); 8 Aug 2012 13:42:10 -0000
Received: from smtp.tele.fi (HELO smtp.tele.fi) (192.89.123.25)
	by server-7.tower-182.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 8 Aug 2012 13:42:10 -0000
X-Originating-Ip: [194.89.68.22]
Received: from ydin.reaktio.net (reaktio.net [194.89.68.22])
	by smtp.tele.fi (Postfix) with ESMTP id 4DF591647;
	Wed,  8 Aug 2012 16:42:09 +0300 (EEST)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id 09F9F2005D; Wed,  8 Aug 2012 16:42:09 +0300 (EEST)
Date: Wed, 8 Aug 2012 16:42:08 +0300
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: Matthew Dean <mcd40@cam.ac.uk>
Message-ID: <20120808134208.GI19851@reaktio.net>
References: <50226909.9090804@cam.ac.uk>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50226909.9090804@cam.ac.uk>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] GPU passthrough with Xen 4.2 on Ubuntu 12.04
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Aug 08, 2012 at 02:26:33PM +0100, Matthew Dean wrote:
> 
>    I've used the stock Ubuntu provided kernel which is version
>    3.2.0-27-generic.  Dom0 is connected to a display via the intel integrated
>    graphics.
> 

You might want to try Linux 3.4.x or 3.5.x dom0 kernels aswell.
I think there has been some fixes in xen-pciback.


> 
>    When I boot the VM everything seems to work fine.  Windows picks up all
>    three devices and I can install drivers for them.  The USB controller
>    works flawlessly and I can use an attached mouse and keyboard.  The radeon
>    card requires a restart after which an attached display comes to life and
>    I have 3D acceleration.  Restarting the VM seems to work OK.  If however I
>    shut down the VM I have no end of problems.  On any subsequent startup the
>    vm struggles to get past the windows splash screen, waiting much longer
>    than usual.  During this period dom0 is sluggish regarding mouse and
>    keyboard input even though cpu and memory usage are very low.  When
>    windows finally loads I have no active display and I have to view the VM
>    via VNC.  In device manage I find that windows has disabled the GPU saying
>    there are not enough resources to run the card.  From this point onwards I
>    can do nothing to get the gpu working again aside from removing the
>    device, manually deleting the drivers, and starting again.
> 

Do you get any errors from Xen (xl dmesg), or from dom0 kernel (dmesg) ? 

Do you have a serial console? 


>    Note that at this point I am only trying to do secondary passthrough
>    though I would ideally like to get to the point of doing primary
>    passthrough.  Adding the line gfx_passthru=1 at any point in all this to
>    the machine config however just prevents the VM from booting entirely;
>    when I VNC in all I get is a qemu prompt and I never get any ouput to the
>    real display.
> 

AMD/ATI primary passthru requires extra patches to Xen qemu-dm;
those are not included out of the box yet.

>    Any suggestions as to how to get this to work would be greatly appreciated
>    as I've hit a bit of a brick wall.  I should also say that I have managed
>    to get secondary passthrough working using Debian Wheezy and the
>    repository version of xen 4.1.  In that case though Dom0 didn't boot
>    reliably.
> 

-- Pasi


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 13:48:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 13:48:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz6bv-0007zJ-TK; Wed, 08 Aug 2012 13:47:43 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Sz6bt-0007zB-Pp
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 13:47:42 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1344433655!11438295!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg1NTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6680 invoked from network); 8 Aug 2012 13:47:35 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Aug 2012 13:47:35 -0000
X-IronPort-AV: E=Sophos;i="4.77,733,1336348800"; d="scan'208";a="13910139"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Aug 2012 13:47:34 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 8 Aug 2012 14:47:34 +0100
Date: Wed, 8 Aug 2012 14:47:10 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Alexey Klimov <trashsee@gmail.com>
In-Reply-To: <CAPny0sqtgo7MvndfLN6JExkQMP40ro1FT6Edc6OKWt0KreNYnQ@mail.gmail.com>
Message-ID: <alpine.DEB.2.02.1208081421570.4645@kaball.uk.xensource.com>
References: <CAPny0soyuQkUmAU+kYrBvG+w_jxKUsY8YxCrxBA=7cwmdwV6Xw@mail.gmail.com>
	<alpine.DEB.2.02.1207301934540.4645@kaball.uk.xensource.com>
	<CAPny0soV4Z0R_PADtjn4JpCFMPkU-m+O4vBWA+DJRb9GVV36=g@mail.gmail.com>
	<CAPny0sqtgo7MvndfLN6JExkQMP40ro1FT6Edc6OKWt0KreNYnQ@mail.gmail.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Content-Type: multipart/mixed;
	boundary="1342847746-1131063668-1344432587=:21096"
Content-ID: <alpine.DEB.2.02.1208081432000.21096@kaball.uk.xensource.com>
Cc: xen-devel <xen-devel@lists.xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [questions] Dom0/DomU on ARM under Xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--1342847746-1131063668-1344432587=:21096
Content-Type: text/plain; charset="US-ASCII"
Content-ID: <alpine.DEB.2.02.1208081432001.21096@kaball.uk.xensource.com>

On Wed, 8 Aug 2012, Alexey Klimov wrote:
> 2012/8/1 Alexey Klimov <trashsee@gmail.com>:
> > And I saw that Ian set up a git repository for xen with the latest
> > patches for ARM. So I'll try to use this repository.
> 
> Hello Stefano and Ian,
> 
> I used Ian's new xen-unstable git repository
> (git://xenbits.xen.org/people/ianc/xen-unstable.git arm-for-4.) and
> Stefano's linux kernel git repository
> (git://xenbits.xen.org/people/sstabellini/linux-pvhvm.git
> 3.5-rc7-arm-2) with additional patches:
> 
> - for the linux kernel, "xen/events: fix unmask_evtchn for PV on HVM guests",
> - the "ARM hypercall ABI: 64 bit ready" patch series for xen, plus the
> attached few versions of xcbuild (Ian's early version and the latest one).
> After applying the 64-bit ready patches I observed the following errors
> when building xen and tools:
> 
> 1)
> for i in public/callback.h public/dom0_ops.h public/elfnote.h
> public/event_channel.h public/features.h public/grant_table.h
> public/kexec.h public/mem_event.h public/memory.h public/nmi.h
> public/physdev.h public/platform.h public/sched.h public/tmem.h
> public/trace.h public/vcpu.h public/version.h public/xen-compat.h
> public/xen.h public/xencomm.h public/xenoprof.h public/hvm/e820.h
> public/hvm/hvm_info_table.h public/hvm/hvm_op.h public/hvm/ioreq.h
> public/hvm/params.h public/io/blkif.h public/io/console.h
> public/io/fbif.h public/io/fsif.h public/io/kbdif.h
> public/io/libxenvchan.h public/io/netif.h public/io/pciif.h
> public/io/protocols.h public/io/ring.h public/io/tpmif.h
> public/io/usbif.h public/io/vscsiif.h public/io/xenbus.h
> public/io/xs_wire.h; do gcc -ansi -include stdint.h -Wall -W -Werror
> -S -o /dev/null -xc $i || exit 1; echo $i; done >headers.chk.new
> public/version.h:61:5: error: unknown type name 'xen_ulong_t'
> make[3]: *** [headers.chk] Error 1
> make[3]: Leaving directory `/src/xen/xen/include'
> 
> Fixed by inserting #include "arch-arm.h" in xen/include/public/version.h

I think that this is a legitimate error; I wasn't seeing it because I am
cross-compiling.


> 2)
> building 'xc' extension
> gcc -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall
> -Wstrict-prototypes -O1 -fno-omit-frame-pointer -marm -g
> -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes
> -Wdeclaration-after-statement -Wno-unused-but-set-variable
> -D__XEN_TOOLS__ -MMD -MF .build.d -D_LARGEFILE_SOURCE
> -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE
> -fno-optimize-sibling-calls -fPIC -I../../tools/include
> -I../../tools/libxc -Ixen/lowlevel/xc -I/usr/include/python2.7 -c
> xen/lowlevel/xc/xc.c -o
> build/temp.linux-armv7l-2.7/xen/lowlevel/xc/xc.o -fno-strict-aliasing
> -Werror
> xen/lowlevel/xc/xc.c: In function 'pyxc_xeninfo':
> xen/lowlevel/xc/xc.c:1442:5: error: format '%lx' expects argument of
> type 'long unsigned int', but argument 4 has type 'xen_ulong_t'
> [-Werror=format]
> xen/lowlevel/xc/xc.c:1442:5: error: format '%lx' expects argument of
> type 'long unsigned int', but argument 4 has type 'xen_ulong_t'
> [-Werror=format]
> cc1: all warnings being treated as errors
> 
> I just commented out the snprintf(str, sizeof(str), "virt_start=0x%lx",
> p_parms.virt_start); call in xc.c.

That is another legitimate error; I'll fix it in the next version
of the patch series. Thanks for testing!


> Then it compiled and I tried to run a DomU. It looks like the allocation
> of console_pfn and xenstore_pfn in alloc_magic_pages() in xc_dom_arm.c
> creates real pain for me. With this allocation/patch xen prints "bad
> p2m lookup" messages before booting the DomU:
> (XEN) bad p2m lookup
> (XEN) dom1 IPA 0x0000000090000000
> (XEN) P2M @ 02ffcac0 mfn:0xffe56
> (XEN) 1ST[0x2] = 0x00000000f3f686ff
> (XEN) 2ND[0x80] = 0x0000000000000000
> (XEN) bad p2m lookup
> (XEN) dom1 IPA 0x0000000090001000
> (XEN) P2M @ 02ffcac0 mfn:0xffe56
> (XEN) 1ST[0x2] = 0x00000000f3f686ff
> (XEN) 2ND[0x80] = 0x0000000000000000
> (XEN) bad p2m lookup
> (XEN) dom1 IPA 0x0000000090001000
> (XEN) P2M @ 02ffcac0 mfn:0xffe56
> (XEN) 1ST[0x2] = 0x00000000f3f686ff
> (XEN) 2ND[0x80] = 0x0000000000000000
> 
> and then everything hangs with a translation fault:
> 
> (XEN) DOM1: Grant tables using version 1 layout.
> (XEN) DOM1: Grant table initialized
> (XEN) DOM1: NET: Registered protocol family 16
> (XEN) Guest data abort: Translation fault at level 2
> (XEN)     gva=88808804
> (XEN)     gpa=0000000090001804
> (XEN)     size=2 sign=0 write=0 reg=2
> (XEN)     eat=0 cm=0 s1ptw=0 dfsc=6
> (XEN) dom1 IPA 0x0000000090001804
> (XEN) P2M @ 02ffcac0 mfn:0xffe56
> (XEN) 1ST[0x2] = 0x00000000f3f686ff
> (XEN) 2ND[0x80] = 0x0000000000000000
> 
> Detailed log is attached.
> OK, I moved the allocation for the console and xenstore pages back into
> arch_setup_meminit() as in
> http://lists.xen.org/archives/html/xen-devel/2012-06/msg01340.html and
> then added the kernel parameter keep_bootcon to the DomU device tree
> file; everything booted up to "unable to open an initial console" and
> was unable to mount the rootfs.

You are probably missing Ian's fix to alloc_magic_pages:

http://marc.info/?l=xen-devel&m=134398933530124


> I still haven't learned how to deal with xenstore, hvc0,
> xvda, and how to boot from an initramfs on ARM using xcbuild, but I'll
> try to understand and learn this :) So maybe it's worth investigating,
> or taking a deep look at, why add_to_physmap failed in xcbuild and why
> there is a bad p2m lookup in xen. Log is attached.
> 
> Is there any difference between your Dom0 .config and DomU .config? Did
> you just attach the initrd using the xc_dom_ramdisk_file() call in
> xcbuild? Any special configuration of the xen console/xenstore?

I am just using one config, attached.


> Well, I don't mean that I'm doing everything correctly, but I tried to
> run it, fixing/commenting out as much as I could. Could you please help
> if you have time? I can test new changes and send other useful
> info/logs.

Looking at the guest data abort that you are getting, I think that you
didn't update the dts and dtsi to the latest version. They are attached
to the 00/23 email "Introduce Xen support on ARM" for the linux kernel.
--1342847746-1131063668-1344432587=:21096
Content-Type: text/plain; charset="US-ASCII"; name="config-linux"
Content-Transfer-Encoding: BASE64
Content-ID: <alpine.DEB.2.02.1208081429470.21096@kaball.uk.xensource.com>
Content-Description: 
Content-Disposition: attachment; filename="config-linux"

Iw0KIyBBdXRvbWF0aWNhbGx5IGdlbmVyYXRlZCBmaWxlOyBETyBOT1QgRURJ
VC4NCiMgTGludXgvYXJtIDMuNS4wLXJjNyBLZXJuZWwgQ29uZmlndXJhdGlv
bg0KIw0KQ09ORklHX0FSTT15DQpDT05GSUdfU1lTX1NVUFBPUlRTX0FQTV9F
TVVMQVRJT049eQ0KQ09ORklHX0hBVkVfUFJPQ19DUFU9eQ0KQ09ORklHX05P
X0lPUE9SVD15DQpDT05GSUdfU1RBQ0tUUkFDRV9TVVBQT1JUPXkNCkNPTkZJ
R19IQVZFX0xBVEVOQ1lUT1BfU1VQUE9SVD15DQpDT05GSUdfTE9DS0RFUF9T
VVBQT1JUPXkNCkNPTkZJR19UUkFDRV9JUlFGTEFHU19TVVBQT1JUPXkNCkNP
TkZJR19SV1NFTV9HRU5FUklDX1NQSU5MT0NLPXkNCkNPTkZJR19HRU5FUklD
X0hXRUlHSFQ9eQ0KQ09ORklHX0dFTkVSSUNfQ0FMSUJSQVRFX0RFTEFZPXkN
CkNPTkZJR19ORUVEX0RNQV9NQVBfU1RBVEU9eQ0KQ09ORklHX1ZFQ1RPUlNf
QkFTRT0weGZmZmYwMDAwDQpDT05GSUdfQVJNX1BBVENIX1BIWVNfVklSVD15
DQpDT05GSUdfR0VORVJJQ19CVUc9eQ0KQ09ORklHX0RFRkNPTkZJR19MSVNU
PSIvbGliL21vZHVsZXMvJFVOQU1FX1JFTEVBU0UvLmNvbmZpZyINCkNPTkZJ
R19IQVZFX0lSUV9XT1JLPXkNCg0KIw0KIyBHZW5lcmFsIHNldHVwDQojDQpD
T05GSUdfRVhQRVJJTUVOVEFMPXkNCkNPTkZJR19CUk9LRU5fT05fU01QPXkN
CkNPTkZJR19JTklUX0VOVl9BUkdfTElNSVQ9MzINCkNPTkZJR19DUk9TU19D
T01QSUxFPSIiDQpDT05GSUdfTE9DQUxWRVJTSU9OPSIiDQojIENPTkZJR19M
T0NBTFZFUlNJT05fQVVUTyBpcyBub3Qgc2V0DQpDT05GSUdfSEFWRV9LRVJO
RUxfR1pJUD15DQpDT05GSUdfSEFWRV9LRVJORUxfTFpNQT15DQpDT05GSUdf
SEFWRV9LRVJORUxfWFo9eQ0KQ09ORklHX0hBVkVfS0VSTkVMX0xaTz15DQpD
T05GSUdfS0VSTkVMX0daSVA9eQ0KIyBDT05GSUdfS0VSTkVMX0xaTUEgaXMg
bm90IHNldA0KIyBDT05GSUdfS0VSTkVMX1haIGlzIG5vdCBzZXQNCiMgQ09O
RklHX0tFUk5FTF9MWk8gaXMgbm90IHNldA0KQ09ORklHX0RFRkFVTFRfSE9T
VE5BTUU9Iihub25lKSINCkNPTkZJR19TV0FQPXkNCkNPTkZJR19TWVNWSVBD
PXkNCkNPTkZJR19TWVNWSVBDX1NZU0NUTD15DQojIENPTkZJR19QT1NJWF9N
UVVFVUUgaXMgbm90IHNldA0KIyBDT05GSUdfQlNEX1BST0NFU1NfQUNDVCBp
cyBub3Qgc2V0DQojIENPTkZJR19GSEFORExFIGlzIG5vdCBzZXQNCiMgQ09O
RklHX1RBU0tTVEFUUyBpcyBub3Qgc2V0DQojIENPTkZJR19BVURJVCBpcyBu
b3Qgc2V0DQpDT05GSUdfSEFWRV9HRU5FUklDX0hBUkRJUlFTPXkNCg0KIw0K
IyBJUlEgc3Vic3lzdGVtDQojDQpDT05GSUdfR0VORVJJQ19IQVJESVJRUz15
DQpDT05GSUdfR0VORVJJQ19JUlFfUFJPQkU9eQ0KQ09ORklHX0dFTkVSSUNf
SVJRX1NIT1c9eQ0KQ09ORklHX0hBUkRJUlFTX1NXX1JFU0VORD15DQpDT05G
SUdfSVJRX0RPTUFJTj15DQpDT05GSUdfS1RJTUVfU0NBTEFSPXkNCkNPTkZJ
R19HRU5FUklDX0NMT0NLRVZFTlRTPXkNCkNPTkZJR19HRU5FUklDX0NMT0NL
RVZFTlRTX0JVSUxEPXkNCg0KIw0KIyBUaW1lcnMgc3Vic3lzdGVtDQojDQpD
T05GSUdfVElDS19PTkVTSE9UPXkNCkNPTkZJR19OT19IWj15DQpDT05GSUdf
SElHSF9SRVNfVElNRVJTPXkNCg0KIw0KIyBSQ1UgU3Vic3lzdGVtDQojDQpD
T05GSUdfVElOWV9SQ1U9eQ0KIyBDT05GSUdfUFJFRU1QVF9SQ1UgaXMgbm90
IHNldA0KIyBDT05GSUdfVFJFRV9SQ1VfVFJBQ0UgaXMgbm90IHNldA0KIyBD
T05GSUdfSUtDT05GSUcgaXMgbm90IHNldA0KQ09ORklHX0xPR19CVUZfU0hJ
RlQ9MTQNCiMgQ09ORklHX0NHUk9VUFMgaXMgbm90IHNldA0KIyBDT05GSUdf
Q0hFQ0tQT0lOVF9SRVNUT1JFIGlzIG5vdCBzZXQNCkNPTkZJR19OQU1FU1BB
Q0VTPXkNCkNPTkZJR19VVFNfTlM9eQ0KQ09ORklHX0lQQ19OUz15DQpDT05G
SUdfUElEX05TPXkNCkNPTkZJR19ORVRfTlM9eQ0KIyBDT05GSUdfU0NIRURf
QVVUT0dST1VQIGlzIG5vdCBzZXQNCiMgQ09ORklHX1NZU0ZTX0RFUFJFQ0FU
RUQgaXMgbm90IHNldA0KIyBDT05GSUdfUkVMQVkgaXMgbm90IHNldA0KQ09O
RklHX0JMS19ERVZfSU5JVFJEPXkNCkNPTkZJR19JTklUUkFNRlNfU09VUkNF
PSIiDQpDT05GSUdfUkRfR1pJUD15DQojIENPTkZJR19SRF9CWklQMiBpcyBu
b3Qgc2V0DQojIENPTkZJR19SRF9MWk1BIGlzIG5vdCBzZXQNCiMgQ09ORklH
X1JEX1haIGlzIG5vdCBzZXQNCiMgQ09ORklHX1JEX0xaTyBpcyBub3Qgc2V0
DQpDT05GSUdfQ0NfT1BUSU1JWkVfRk9SX1NJWkU9eQ0KQ09ORklHX1NZU0NU
TD15DQpDT05GSUdfQU5PTl9JTk9ERVM9eQ0KQ09ORklHX0VYUEVSVD15DQpD
T05GSUdfVUlEMTY9eQ0KIyBDT05GSUdfU1lTQ1RMX1NZU0NBTEwgaXMgbm90
IHNldA0KQ09ORklHX0tBTExTWU1TPXkNCiMgQ09ORklHX0tBTExTWU1TX0FM
TCBpcyBub3Qgc2V0DQpDT05GSUdfSE9UUExVRz15DQpDT05GSUdfUFJJTlRL
PXkNCkNPTkZJR19CVUc9eQ0KQ09ORklHX0VMRl9DT1JFPXkNCkNPTkZJR19C
QVNFX0ZVTEw9eQ0KQ09ORklHX0ZVVEVYPXkNCkNPTkZJR19FUE9MTD15DQpD
T05GSUdfU0lHTkFMRkQ9eQ0KQ09ORklHX1RJTUVSRkQ9eQ0KQ09ORklHX0VW
RU5URkQ9eQ0KQ09ORklHX1NITUVNPXkNCkNPTkZJR19BSU89eQ0KIyBDT05G
SUdfRU1CRURERUQgaXMgbm90IHNldA0KQ09ORklHX0hBVkVfUEVSRl9FVkVO
VFM9eQ0KQ09ORklHX1BFUkZfVVNFX1ZNQUxMT0M9eQ0KDQojDQojIEtlcm5l
bCBQZXJmb3JtYW5jZSBFdmVudHMgQW5kIENvdW50ZXJzDQojDQojIENPTkZJ
R19QRVJGX0VWRU5UUyBpcyBub3Qgc2V0DQpDT05GSUdfVk1fRVZFTlRfQ09V
TlRFUlM9eQ0KQ09ORklHX0NPTVBBVF9CUks9eQ0KQ09ORklHX1NMQUI9eQ0K
IyBDT05GSUdfU0xVQiBpcyBub3Qgc2V0DQojIENPTkZJR19TTE9CIGlzIG5v
dCBzZXQNCiMgQ09ORklHX1BST0ZJTElORyBpcyBub3Qgc2V0DQpDT05GSUdf
SEFWRV9PUFJPRklMRT15DQojIENPTkZJR19KVU1QX0xBQkVMIGlzIG5vdCBz
ZXQNCkNPTkZJR19IQVZFX0tQUk9CRVM9eQ0KQ09ORklHX0hBVkVfS1JFVFBS
T0JFUz15DQpDT05GSUdfSEFWRV9BUkNIX1RSQUNFSE9PSz15DQpDT05GSUdf
SEFWRV9ETUFfQVRUUlM9eQ0KQ09ORklHX0hBVkVfRE1BX0NPTlRJR1VPVVM9
eQ0KQ09ORklHX0dFTkVSSUNfU01QX0lETEVfVEhSRUFEPXkNCkNPTkZJR19I
QVZFX1JFR1NfQU5EX1NUQUNLX0FDQ0VTU19BUEk9eQ0KQ09ORklHX0hBVkVf
Q0xLPXkNCkNPTkZJR19IQVZFX0RNQV9BUElfREVCVUc9eQ0KQ09ORklHX0hB
VkVfQVJDSF9KVU1QX0xBQkVMPXkNCg0KIw0KIyBHQ09WLWJhc2VkIGtlcm5l
bCBwcm9maWxpbmcNCiMNCkNPTkZJR19IQVZFX0dFTkVSSUNfRE1BX0NPSEVS
RU5UPXkNCkNPTkZJR19TTEFCSU5GTz15DQpDT05GSUdfUlRfTVVURVhFUz15
DQpDT05GSUdfQkFTRV9TTUFMTD0wDQojIENPTkZJR19NT0RVTEVTIGlzIG5v
dCBzZXQNCkNPTkZJR19CTE9DSz15DQpDT05GSUdfTEJEQUY9eQ0KQ09ORklH
X0JMS19ERVZfQlNHPXkNCiMgQ09ORklHX0JMS19ERVZfQlNHTElCIGlzIG5v
dCBzZXQNCiMgQ09ORklHX0JMS19ERVZfSU5URUdSSVRZIGlzIG5vdCBzZXQN
Cg0KIw0KIyBQYXJ0aXRpb24gVHlwZXMNCiMNCkNPTkZJR19QQVJUSVRJT05f
QURWQU5DRUQ9eQ0KIyBDT05GSUdfQUNPUk5fUEFSVElUSU9OIGlzIG5vdCBz
ZXQNCiMgQ09ORklHX09TRl9QQVJUSVRJT04gaXMgbm90IHNldA0KIyBDT05G
SUdfQU1JR0FfUEFSVElUSU9OIGlzIG5vdCBzZXQNCiMgQ09ORklHX0FUQVJJ
X1BBUlRJVElPTiBpcyBub3Qgc2V0DQojIENPTkZJR19NQUNfUEFSVElUSU9O
IGlzIG5vdCBzZXQNCkNPTkZJR19NU0RPU19QQVJUSVRJT049eQ0KIyBDT05G
SUdfQlNEX0RJU0tMQUJFTCBpcyBub3Qgc2V0DQojIENPTkZJR19NSU5JWF9T
VUJQQVJUSVRJT04gaXMgbm90IHNldA0KIyBDT05GSUdfU09MQVJJU19YODZf
UEFSVElUSU9OIGlzIG5vdCBzZXQNCiMgQ09ORklHX1VOSVhXQVJFX0RJU0tM
QUJFTCBpcyBub3Qgc2V0DQojIENPTkZJR19MRE1fUEFSVElUSU9OIGlzIG5v
dCBzZXQNCiMgQ09ORklHX1NHSV9QQVJUSVRJT04gaXMgbm90IHNldA0KIyBD
T05GSUdfVUxUUklYX1BBUlRJVElPTiBpcyBub3Qgc2V0DQojIENPTkZJR19T
VU5fUEFSVElUSU9OIGlzIG5vdCBzZXQNCiMgQ09ORklHX0tBUk1BX1BBUlRJ
VElPTiBpcyBub3Qgc2V0DQojIENPTkZJR19FRklfUEFSVElUSU9OIGlzIG5v
dCBzZXQNCiMgQ09ORklHX1NZU1Y2OF9QQVJUSVRJT04gaXMgbm90IHNldA0K
DQojDQojIElPIFNjaGVkdWxlcnMNCiMNCkNPTkZJR19JT1NDSEVEX05PT1A9
eQ0KQ09ORklHX0lPU0NIRURfREVBRExJTkU9eQ0KQ09ORklHX0lPU0NIRURf
Q0ZRPXkNCiMgQ09ORklHX0RFRkFVTFRfREVBRExJTkUgaXMgbm90IHNldA0K
Q09ORklHX0RFRkFVTFRfQ0ZRPXkNCiMgQ09ORklHX0RFRkFVTFRfTk9PUCBp
cyBub3Qgc2V0DQpDT05GSUdfREVGQVVMVF9JT1NDSEVEPSJjZnEiDQojIENP
TkZJR19JTkxJTkVfU1BJTl9UUllMT0NLIGlzIG5vdCBzZXQNCiMgQ09ORklH
X0lOTElORV9TUElOX1RSWUxPQ0tfQkggaXMgbm90IHNldA0KIyBDT05GSUdf
SU5MSU5FX1NQSU5fTE9DSyBpcyBub3Qgc2V0DQojIENPTkZJR19JTkxJTkVf
U1BJTl9MT0NLX0JIIGlzIG5vdCBzZXQNCiMgQ09ORklHX0lOTElORV9TUElO
X0xPQ0tfSVJRIGlzIG5vdCBzZXQNCiMgQ09ORklHX0lOTElORV9TUElOX0xP
Q0tfSVJRU0FWRSBpcyBub3Qgc2V0DQojIENPTkZJR19JTkxJTkVfU1BJTl9V
TkxPQ0tfQkggaXMgbm90IHNldA0KQ09ORklHX0lOTElORV9TUElOX1VOTE9D
S19JUlE9eQ0KIyBDT05GSUdfSU5MSU5FX1NQSU5fVU5MT0NLX0lSUVJFU1RP
UkUgaXMgbm90IHNldA0KIyBDT05GSUdfSU5MSU5FX1JFQURfVFJZTE9DSyBp
cyBub3Qgc2V0DQojIENPTkZJR19JTkxJTkVfUkVBRF9MT0NLIGlzIG5vdCBz
ZXQNCiMgQ09ORklHX0lOTElORV9SRUFEX0xPQ0tfQkggaXMgbm90IHNldA0K
IyBDT05GSUdfSU5MSU5FX1JFQURfTE9DS19JUlEgaXMgbm90IHNldA0KIyBD
T05GSUdfSU5MSU5FX1JFQURfTE9DS19JUlFTQVZFIGlzIG5vdCBzZXQNCkNP
TkZJR19JTkxJTkVfUkVBRF9VTkxPQ0s9eQ0KIyBDT05GSUdfSU5MSU5FX1JF
QURfVU5MT0NLX0JIIGlzIG5vdCBzZXQNCkNPTkZJR19JTkxJTkVfUkVBRF9V
TkxPQ0tfSVJRPXkNCiMgQ09ORklHX0lOTElORV9SRUFEX1VOTE9DS19JUlFS
RVNUT1JFIGlzIG5vdCBzZXQNCiMgQ09ORklHX0lOTElORV9XUklURV9UUllM
T0NLIGlzIG5vdCBzZXQNCiMgQ09ORklHX0lOTElORV9XUklURV9MT0NLIGlz
IG5vdCBzZXQNCiMgQ09ORklHX0lOTElORV9XUklURV9MT0NLX0JIIGlzIG5v
dCBzZXQNCiMgQ09ORklHX0lOTElORV9XUklURV9MT0NLX0lSUSBpcyBub3Qg
c2V0DQojIENPTkZJR19JTkxJTkVfV1JJVEVfTE9DS19JUlFTQVZFIGlzIG5v
dCBzZXQNCkNPTkZJR19JTkxJTkVfV1JJVEVfVU5MT0NLPXkNCiMgQ09ORklH
X0lOTElORV9XUklURV9VTkxPQ0tfQkggaXMgbm90IHNldA0KQ09ORklHX0lO
TElORV9XUklURV9VTkxPQ0tfSVJRPXkNCiMgQ09ORklHX0lOTElORV9XUklU
RV9VTkxPQ0tfSVJRUkVTVE9SRSBpcyBub3Qgc2V0DQojIENPTkZJR19NVVRF
WF9TUElOX09OX09XTkVSIGlzIG5vdCBzZXQNCkNPTkZJR19GUkVFWkVSPXkN
Cg0KIw0KIyBTeXN0ZW0gVHlwZQ0KIw0KQ09ORklHX01NVT15DQojIENPTkZJ
R19BUkNIX0lOVEVHUkFUT1IgaXMgbm90IHNldA0KIyBDT05GSUdfQVJDSF9S
RUFMVklFVyBpcyBub3Qgc2V0DQojIENPTkZJR19BUkNIX1ZFUlNBVElMRSBp
cyBub3Qgc2V0DQpDT05GSUdfQVJDSF9WRVhQUkVTUz15DQojIENPTkZJR19B
UkNIX0FUOTEgaXMgbm90IHNldA0KIyBDT05GSUdfQVJDSF9CQ01SSU5HIGlz
IG5vdCBzZXQNCiMgQ09ORklHX0FSQ0hfSElHSEJBTksgaXMgbm90IHNldA0K
IyBDT05GSUdfQVJDSF9DTFBTNzExWCBpcyBub3Qgc2V0DQojIENPTkZJR19B
UkNIX0NOUzNYWFggaXMgbm90IHNldA0KIyBDT05GSUdfQVJDSF9HRU1JTkkg
aXMgbm90IHNldA0KIyBDT05GSUdfQVJDSF9QUklNQTIgaXMgbm90IHNldA0K
IyBDT05GSUdfQVJDSF9FQlNBMTEwIGlzIG5vdCBzZXQNCiMgQ09ORklHX0FS
Q0hfRVA5M1hYIGlzIG5vdCBzZXQNCiMgQ09ORklHX0FSQ0hfRk9PVEJSSURH
RSBpcyBub3Qgc2V0DQojIENPTkZJR19BUkNIX01YQyBpcyBub3Qgc2V0DQoj
IENPTkZJR19BUkNIX01YUyBpcyBub3Qgc2V0DQojIENPTkZJR19BUkNIX05F
VFggaXMgbm90IHNldA0KIyBDT05GSUdfQVJDSF9INzIwWCBpcyBub3Qgc2V0
DQojIENPTkZJR19BUkNIX0lPUDEzWFggaXMgbm90IHNldA0KIyBDT05GSUdf
QVJDSF9JT1AzMlggaXMgbm90IHNldA0KIyBDT05GSUdfQVJDSF9JT1AzM1gg
aXMgbm90IHNldA0KIyBDT05GSUdfQVJDSF9JWFA0WFggaXMgbm90IHNldA0K
IyBDT05GSUdfQVJDSF9ET1ZFIGlzIG5vdCBzZXQNCiMgQ09ORklHX0FSQ0hf
S0lSS1dPT0QgaXMgbm90IHNldA0KIyBDT05GSUdfQVJDSF9MUEMzMlhYIGlz
IG5vdCBzZXQNCiMgQ09ORklHX0FSQ0hfTVY3OFhYMCBpcyBub3Qgc2V0DQoj
IENPTkZJR19BUkNIX09SSU9ONVggaXMgbm90IHNldA0KIyBDT05GSUdfQVJD
SF9NTVAgaXMgbm90IHNldA0KIyBDT05GSUdfQVJDSF9LUzg2OTUgaXMgbm90
IHNldA0KIyBDT05GSUdfQVJDSF9XOTBYOTAwIGlzIG5vdCBzZXQNCiMgQ09O
RklHX0FSQ0hfVEVHUkEgaXMgbm90IHNldA0KIyBDT05GSUdfQVJDSF9QSUNP
WENFTEwgaXMgbm90IHNldA0KIyBDT05GSUdfQVJDSF9QTlg0MDA4IGlzIG5v
dCBzZXQNCiMgQ09ORklHX0FSQ0hfUFhBIGlzIG5vdCBzZXQNCiMgQ09ORklH
X0FSQ0hfTVNNIGlzIG5vdCBzZXQNCiMgQ09ORklHX0FSQ0hfU0hNT0JJTEUg
aXMgbm90IHNldA0KIyBDT05GSUdfQVJDSF9SUEMgaXMgbm90IHNldA0KIyBD
T05GSUdfQVJDSF9TQTExMDAgaXMgbm90IHNldA0KIyBDT05GSUdfQVJDSF9T
M0MyNFhYIGlzIG5vdCBzZXQNCiMgQ09ORklHX0FSQ0hfUzNDNjRYWCBpcyBu
b3Qgc2V0DQojIENPTkZJR19BUkNIX1M1UDY0WDAgaXMgbm90IHNldA0KIyBD
T05GSUdfQVJDSF9TNVBDMTAwIGlzIG5vdCBzZXQNCiMgQ09ORklHX0FSQ0hf
UzVQVjIxMCBpcyBub3Qgc2V0DQojIENPTkZJR19BUkNIX0VYWU5PUyBpcyBu
b3Qgc2V0DQojIENPTkZJR19BUkNIX1NIQVJLIGlzIG5vdCBzZXQNCiMgQ09O
RklHX0FSQ0hfVTMwMCBpcyBub3Qgc2V0DQojIENPTkZJR19BUkNIX1U4NTAw
IGlzIG5vdCBzZXQNCiMgQ09ORklHX0FSQ0hfTk9NQURJSyBpcyBub3Qgc2V0
DQojIENPTkZJR19BUkNIX0RBVklOQ0kgaXMgbm90IHNldA0KIyBDT05GSUdf
QVJDSF9PTUFQIGlzIG5vdCBzZXQNCiMgQ09ORklHX1BMQVRfU1BFQVIgaXMg
bm90IHNldA0KIyBDT05GSUdfQVJDSF9WVDg1MDAgaXMgbm90IHNldA0KIyBD
T05GSUdfQVJDSF9aWU5RIGlzIG5vdCBzZXQNCg0KIw0KIyBWZXJzYXRpbGUg
RXhwcmVzcyBwbGF0Zm9ybSB0eXBlDQojDQpDT05GSUdfQVJDSF9WRVhQUkVT
U19DT1JURVhfQTVfQTlfRVJSQVRBPXkNCiMgQ09ORklHX0FSQ0hfVkVYUFJF
U1NfQ0E5WDQgaXMgbm90IHNldA0KQ09ORklHX0FSQ0hfVkVYUFJFU1NfRFQ9
eQ0KQ09ORklHX1BMQVRfVkVSU0FUSUxFX0NMQ0Q9eQ0KQ09ORklHX1BMQVRf
VkVSU0FUSUxFX1NDSEVEX0NMT0NLPXkNCkNPTkZJR19QTEFUX1ZFUlNBVElM
RT15DQpDT05GSUdfQVJNX1RJTUVSX1NQODA0PXkNCg0KIw0KIyBQcm9jZXNz
b3IgVHlwZQ0KIw0KQ09ORklHX0NQVV9WNz15DQpDT05GSUdfQ1BVXzMydjZL
PXkNCkNPTkZJR19DUFVfMzJ2Nz15DQpDT05GSUdfQ1BVX0FCUlRfRVY3PXkN
CkNPTkZJR19DUFVfUEFCUlRfVjc9eQ0KQ09ORklHX0NQVV9DQUNIRV9WNz15
DQpDT05GSUdfQ1BVX0NBQ0hFX1ZJUFQ9eQ0KQ09ORklHX0NQVV9DT1BZX1Y2
PXkNCkNPTkZJR19DUFVfVExCX1Y3PXkNCkNPTkZJR19DUFVfSEFTX0FTSUQ9
eQ0KQ09ORklHX0NQVV9DUDE1PXkNCkNPTkZJR19DUFVfQ1AxNV9NTVU9eQ0K
DQojDQojIFByb2Nlc3NvciBGZWF0dXJlcw0KIw0KIyBDT05GSUdfQVJNX0xQ
QUUgaXMgbm90IHNldA0KIyBDT05GSUdfQVJDSF9QSFlTX0FERFJfVF82NEJJ
VCBpcyBub3Qgc2V0DQpDT05GSUdfQVJNX1RIVU1CPXkNCkNPTkZJR19BUk1f
VEhVTUJFRT15DQpDT05GSUdfU1dQX0VNVUxBVEU9eQ0KIyBDT05GSUdfQ1BV
X0lDQUNIRV9ESVNBQkxFIGlzIG5vdCBzZXQNCiMgQ09ORklHX0NQVV9EQ0FD
SEVfRElTQUJMRSBpcyBub3Qgc2V0DQojIENPTkZJR19DUFVfQlBSRURJQ1Rf
RElTQUJMRSBpcyBub3Qgc2V0DQpDT05GSUdfT1VURVJfQ0FDSEU9eQ0KQ09O
RklHX09VVEVSX0NBQ0hFX1NZTkM9eQ0KQ09ORklHX01JR0hUX0hBVkVfQ0FD
SEVfTDJYMD15DQpDT05GSUdfQ0FDSEVfTDJYMD15DQpDT05GSUdfQ0FDSEVf
UEwzMTA9eQ0KQ09ORklHX0FSTV9MMV9DQUNIRV9TSElGVF82PXkNCkNPTkZJ
R19BUk1fTDFfQ0FDSEVfU0hJRlQ9Ng0KQ09ORklHX0FSTV9ETUFfTUVNX0JV
RkZFUkFCTEU9eQ0KQ09ORklHX0FSTV9OUl9CQU5LUz04DQpDT05GSUdfQ1BV
X0hBU19QTVU9eQ0KQ09ORklHX01VTFRJX0lSUV9IQU5ETEVSPXkNCiMgQ09O
RklHX0FSTV9FUlJBVEFfNDMwOTczIGlzIG5vdCBzZXQNCiMgQ09ORklHX0FS
TV9FUlJBVEFfNDU4NjkzIGlzIG5vdCBzZXQNCiMgQ09ORklHX0FSTV9FUlJB
VEFfNDYwMDc1IGlzIG5vdCBzZXQNCiMgQ09ORklHX1BMMzEwX0VSUkFUQV81
ODgzNjkgaXMgbm90IHNldA0KQ09ORklHX0FSTV9FUlJBVEFfNzIwNzg5PXkN
CiMgQ09ORklHX1BMMzEwX0VSUkFUQV83Mjc5MTUgaXMgbm90IHNldA0KIyBD
T05GSUdfQVJNX0VSUkFUQV83NDM2MjIgaXMgbm90IHNldA0KQ09ORklHX0FS
TV9FUlJBVEFfNzUxNDcyPXkNCkNPTkZJR19QTDMxMF9FUlJBVEFfNzUzOTcw
PXkNCiMgQ09ORklHX0FSTV9FUlJBVEFfNzU0MzIyIGlzIG5vdCBzZXQNCiMg
Q09ORklHX1BMMzEwX0VSUkFUQV83Njk0MTkgaXMgbm90IHNldA0KQ09ORklH
X0FSTV9HSUM9eQ0KQ09ORklHX0lDU1Q9eQ0KDQojDQojIEJ1cyBzdXBwb3J0
DQojDQpDT05GSUdfQVJNX0FNQkE9eQ0KIyBDT05GSUdfUENJX1NZU0NBTEwg
aXMgbm90IHNldA0KIyBDT05GSUdfQVJDSF9TVVBQT1JUU19NU0kgaXMgbm90
IHNldA0KIyBDT05GSUdfUENDQVJEIGlzIG5vdCBzZXQNCg0KIw0KIyBLZXJu
ZWwgRmVhdHVyZXMNCiMNCkNPTkZJR19IQVZFX1NNUD15DQojIENPTkZJR19T
TVAgaXMgbm90IHNldA0KQ09ORklHX0FSTV9BUkNIX1RJTUVSPXkNCkNPTkZJ
R19WTVNQTElUXzNHPXkNCiMgQ09ORklHX1ZNU1BMSVRfMkcgaXMgbm90IHNl
dA0KIyBDT05GSUdfVk1TUExJVF8xRyBpcyBub3Qgc2V0DQpDT05GSUdfUEFH
RV9PRkZTRVQ9MHhDMDAwMDAwMA0KQ09ORklHX0FSQ0hfTlJfR1BJTz0wDQpD
T05GSUdfUFJFRU1QVF9OT05FPXkNCiMgQ09ORklHX1BSRUVNUFRfVk9MVU5U
QVJZIGlzIG5vdCBzZXQNCiMgQ09ORklHX1BSRUVNUFQgaXMgbm90IHNldA0K
Q09ORklHX0haPTEwMA0KIyBDT05GSUdfVEhVTUIyX0tFUk5FTCBpcyBub3Qg
c2V0DQpDT05GSUdfQUVBQkk9eQ0KQ09ORklHX09BQklfQ09NUEFUPXkNCiMg
Q09ORklHX0FSQ0hfU1BBUlNFTUVNX0RFRkFVTFQgaXMgbm90IHNldA0KIyBD
T05GSUdfQVJDSF9TRUxFQ1RfTUVNT1JZX01PREVMIGlzIG5vdCBzZXQNCkNP
TkZJR19IQVZFX0FSQ0hfUEZOX1ZBTElEPXkNCkNPTkZJR19ISUdITUVNPXkN
CkNPTkZJR19ISUdIUFRFPXkNCkNPTkZJR19TRUxFQ1RfTUVNT1JZX01PREVM
PXkNCkNPTkZJR19GTEFUTUVNX01BTlVBTD15DQpDT05GSUdfRkxBVE1FTT15
DQpDT05GSUdfRkxBVF9OT0RFX01FTV9NQVA9eQ0KQ09ORklHX0hBVkVfTUVN
QkxPQ0s9eQ0KQ09ORklHX1BBR0VGTEFHU19FWFRFTkRFRD15DQpDT05GSUdf
U1BMSVRfUFRMT0NLX0NQVVM9NA0KIyBDT05GSUdfQ09NUEFDVElPTiBpcyBu
b3Qgc2V0DQojIENPTkZJR19QSFlTX0FERFJfVF82NEJJVCBpcyBub3Qgc2V0
DQpDT05GSUdfWk9ORV9ETUFfRkxBRz0wDQpDT05GSUdfQk9VTkNFPXkNCkNP
TkZJR19WSVJUX1RPX0JVUz15DQojIENPTkZJR19LU00gaXMgbm90IHNldA0K
Q09ORklHX0RFRkFVTFRfTU1BUF9NSU5fQUREUj00MDk2DQpDT05GSUdfQ1JP
U1NfTUVNT1JZX0FUVEFDSD15DQpDT05GSUdfTkVFRF9QRVJfQ1BVX0tNPXkN
CiMgQ09ORklHX0NMRUFOQ0FDSEUgaXMgbm90IHNldA0KIyBDT05GSUdfRlJP
TlRTV0FQIGlzIG5vdCBzZXQNCkNPTkZJR19GT1JDRV9NQVhfWk9ORU9SREVS
PTExDQpDT05GSUdfQUxJR05NRU5UX1RSQVA9eQ0KIyBDT05GSUdfVUFDQ0VT
U19XSVRIX01FTUNQWSBpcyBub3Qgc2V0DQojIENPTkZJR19TRUNDT01QIGlz
IG5vdCBzZXQNCiMgQ09ORklHX0NDX1NUQUNLUFJPVEVDVE9SIGlzIG5vdCBz
ZXQNCiMgQ09ORklHX0RFUFJFQ0FURURfUEFSQU1fU1RSVUNUIGlzIG5vdCBz
ZXQNCg0KIw0KIyBCb290IG9wdGlvbnMNCiMNCkNPTkZJR19VU0VfT0Y9eQ0K
Q09ORklHX1pCT09UX1JPTV9URVhUPTB4MA0KQ09ORklHX1pCT09UX1JPTV9C
U1M9MHgwDQpDT05GSUdfQVJNX0FQUEVOREVEX0RUQj15DQpDT05GSUdfQVJN
X0FUQUdfRFRCX0NPTVBBVD15DQpDT05GSUdfQ01ETElORT0iZWFybHlwcmlu
dGs9eGVuYm9vdCBjb25zb2xlPXR0eUFNQTEgcm9vdD0vZGV2L21tY2JsazAg
ZGVidWcgcncgaW5pdD0vYmluL2Jhc2giDQpDT05GSUdfQ01ETElORV9GUk9N
X0JPT1RMT0FERVI9eQ0KIyBDT05GSUdfQ01ETElORV9FWFRFTkQgaXMgbm90
IHNldA0KIyBDT05GSUdfQ01ETElORV9GT1JDRSBpcyBub3Qgc2V0DQojIENP
TkZJR19YSVBfS0VSTkVMIGlzIG5vdCBzZXQNCiMgQ09ORklHX0tFWEVDIGlz
IG5vdCBzZXQNCiMgQ09ORklHX0NSQVNIX0RVTVAgaXMgbm90IHNldA0KQ09O
RklHX0FVVE9fWlJFTEFERFI9eQ0KDQojDQojIENQVSBQb3dlciBNYW5hZ2Vt
ZW50DQojDQojIENPTkZJR19DUFVfSURMRSBpcyBub3Qgc2V0DQoNCiMNCiMg
RmxvYXRpbmcgcG9pbnQgZW11bGF0aW9uDQojDQoNCiMNCiMgQXQgbGVhc3Qg
b25lIGVtdWxhdGlvbiBtdXN0IGJlIHNlbGVjdGVkDQojDQojIENPTkZJR19G
UEVfTldGUEUgaXMgbm90IHNldA0KIyBDT05GSUdfRlBFX0ZBU1RGUEUgaXMg
bm90IHNldA0KQ09ORklHX1ZGUD15DQpDT05GSUdfVkZQdjM9eQ0KIyBDT05G
SUdfTkVPTiBpcyBub3Qgc2V0DQpDT05GSUdfWEVOX0RPTTA9eQ0KQ09ORklH
X1hFTj15DQoNCiMNCiMgVXNlcnNwYWNlIGJpbmFyeSBmb3JtYXRzDQojDQpD
T05GSUdfQklORk1UX0VMRj15DQpDT05GSUdfQVJDSF9CSU5GTVRfRUxGX1JB
TkRPTUlaRV9QSUU9eQ0KQ09ORklHX0NPUkVfRFVNUF9ERUZBVUxUX0VMRl9I
RUFERVJTPXkNCkNPTkZJR19IQVZFX0FPVVQ9eQ0KQ09ORklHX0JJTkZNVF9B
T1VUPXkNCkNPTkZJR19CSU5GTVRfTUlTQz15DQoNCiMNCiMgUG93ZXIgbWFu
YWdlbWVudCBvcHRpb25zDQojDQpDT05GSUdfU1VTUEVORD15DQpDT05GSUdf
U1VTUEVORF9GUkVFWkVSPXkNCkNPTkZJR19QTV9TTEVFUD15DQojIENPTkZJ
R19QTV9BVVRPU0xFRVAgaXMgbm90IHNldA0KIyBDT05GSUdfUE1fV0FLRUxP
Q0tTIGlzIG5vdCBzZXQNCiMgQ09ORklHX1BNX1JVTlRJTUUgaXMgbm90IHNl
dA0KQ09ORklHX1BNPXkNCiMgQ09ORklHX1BNX0RFQlVHIGlzIG5vdCBzZXQN
CiMgQ09ORklHX0FQTV9FTVVMQVRJT04gaXMgbm90IHNldA0KQ09ORklHX1BN
X0NMSz15DQpDT05GSUdfQ1BVX1BNPXkNCkNPTkZJR19BUkNIX1NVU1BFTkRf
UE9TU0lCTEU9eQ0KQ09ORklHX0FSTV9DUFVfU1VTUEVORD15DQpDT05GSUdf
TkVUPXkNCg0KIw0KIyBOZXR3b3JraW5nIG9wdGlvbnMNCiMNCkNPTkZJR19Q
QUNLRVQ9eQ0KQ09ORklHX1VOSVg9eQ0KIyBDT05GSUdfVU5JWF9ESUFHIGlz
IG5vdCBzZXQNCkNPTkZJR19YRlJNPXkNCiMgQ09ORklHX1hGUk1fVVNFUiBp
cyBub3Qgc2V0DQojIENPTkZJR19YRlJNX1NVQl9QT0xJQ1kgaXMgbm90IHNl
dA0KIyBDT05GSUdfWEZSTV9NSUdSQVRFIGlzIG5vdCBzZXQNCiMgQ09ORklH
X1hGUk1fU1RBVElTVElDUyBpcyBub3Qgc2V0DQojIENPTkZJR19ORVRfS0VZ
IGlzIG5vdCBzZXQNCkNPTkZJR19JTkVUPXkNCkNPTkZJR19JUF9NVUxUSUNB
U1Q9eQ0KIyBDT05GSUdfSVBfQURWQU5DRURfUk9VVEVSIGlzIG5vdCBzZXQN
CkNPTkZJR19JUF9QTlA9eQ0KIyBDT05GSUdfSVBfUE5QX0RIQ1AgaXMgbm90
IHNldA0KQ09ORklHX0lQX1BOUF9CT09UUD15DQojIENPTkZJR19JUF9QTlBf
UkFSUCBpcyBub3Qgc2V0DQojIENPTkZJR19ORVRfSVBJUCBpcyBub3Qgc2V0
DQojIENPTkZJR19ORVRfSVBHUkVfREVNVVggaXMgbm90IHNldA0KIyBDT05G
SUdfSVBfTVJPVVRFIGlzIG5vdCBzZXQNCiMgQ09ORklHX0FSUEQgaXMgbm90
IHNldA0KIyBDT05GSUdfU1lOX0NPT0tJRVMgaXMgbm90IHNldA0KIyBDT05G
SUdfSU5FVF9BSCBpcyBub3Qgc2V0DQojIENPTkZJR19JTkVUX0VTUCBpcyBu
b3Qgc2V0DQojIENPTkZJR19JTkVUX0lQQ09NUCBpcyBub3Qgc2V0DQojIENP
TkZJR19JTkVUX1hGUk1fVFVOTkVMIGlzIG5vdCBzZXQNCiMgQ09ORklHX0lO
RVRfVFVOTkVMIGlzIG5vdCBzZXQNCkNPTkZJR19JTkVUX1hGUk1fTU9ERV9U
UkFOU1BPUlQ9eQ0KQ09ORklHX0lORVRfWEZSTV9NT0RFX1RVTk5FTD15DQpD
T05GSUdfSU5FVF9YRlJNX01PREVfQkVFVD15DQpDT05GSUdfSU5FVF9MUk89
eQ0KIyBDT05GSUdfSU5FVF9ESUFHIGlzIG5vdCBzZXQNCiMgQ09ORklHX1RD
UF9DT05HX0FEVkFOQ0VEIGlzIG5vdCBzZXQNCkNPTkZJR19UQ1BfQ09OR19D
VUJJQz15DQpDT05GSUdfREVGQVVMVF9UQ1BfQ09ORz0iY3ViaWMiDQojIENP
TkZJR19UQ1BfTUQ1U0lHIGlzIG5vdCBzZXQNCiMgQ09ORklHX0lQVjYgaXMg
bm90IHNldA0KIyBDT05GSUdfTkVUV09SS19TRUNNQVJLIGlzIG5vdCBzZXQN
CiMgQ09ORklHX05FVFdPUktfUEhZX1RJTUVTVEFNUElORyBpcyBub3Qgc2V0
DQojIENPTkZJR19ORVRGSUxURVIgaXMgbm90IHNldA0KIyBDT05GSUdfSVBf
RENDUCBpcyBub3Qgc2V0DQojIENPTkZJR19JUF9TQ1RQIGlzIG5vdCBzZXQN
CiMgQ09ORklHX1JEUyBpcyBub3Qgc2V0DQojIENPTkZJR19USVBDIGlzIG5v
dCBzZXQNCiMgQ09ORklHX0FUTSBpcyBub3Qgc2V0DQojIENPTkZJR19MMlRQ
IGlzIG5vdCBzZXQNCkNPTkZJR19TVFA9eQ0KQ09ORklHX0JSSURHRT15DQpD
T05GSUdfQlJJREdFX0lHTVBfU05PT1BJTkc9eQ0KIyBDT05GSUdfTkVUX0RT
QSBpcyBub3Qgc2V0DQojIENPTkZJR19WTEFOXzgwMjFRIGlzIG5vdCBzZXQN
CiMgQ09ORklHX0RFQ05FVCBpcyBub3Qgc2V0DQpDT05GSUdfTExDPXkNCiMg
Q09ORklHX0xMQzIgaXMgbm90IHNldA0KIyBDT05GSUdfSVBYIGlzIG5vdCBz
ZXQNCiMgQ09ORklHX0FUQUxLIGlzIG5vdCBzZXQNCiMgQ09ORklHX1gyNSBp
cyBub3Qgc2V0DQojIENPTkZJR19MQVBCIGlzIG5vdCBzZXQNCiMgQ09ORklH
X1dBTl9ST1VURVIgaXMgbm90IHNldA0KIyBDT05GSUdfUEhPTkVUIGlzIG5v
dCBzZXQNCiMgQ09ORklHX0lFRUU4MDIxNTQgaXMgbm90IHNldA0KIyBDT05G
SUdfTkVUX1NDSEVEIGlzIG5vdCBzZXQNCiMgQ09ORklHX0RDQiBpcyBub3Qg
c2V0DQojIENPTkZJR19CQVRNQU5fQURWIGlzIG5vdCBzZXQNCiMgQ09ORklH
X09QRU5WU1dJVENIIGlzIG5vdCBzZXQNCkNPTkZJR19CUUw9eQ0KDQojDQoj
IE5ldHdvcmsgdGVzdGluZw0KIw0KIyBDT05GSUdfTkVUX1BLVEdFTiBpcyBu
b3Qgc2V0DQojIENPTkZJR19IQU1SQURJTyBpcyBub3Qgc2V0DQojIENPTkZJ
R19DQU4gaXMgbm90IHNldA0KIyBDT05GSUdfSVJEQSBpcyBub3Qgc2V0DQoj
IENPTkZJR19CVCBpcyBub3Qgc2V0DQojIENPTkZJR19BRl9SWFJQQyBpcyBu
b3Qgc2V0DQpDT05GSUdfV0lSRUxFU1M9eQ0KIyBDT05GSUdfQ0ZHODAyMTEg
aXMgbm90IHNldA0KIyBDT05GSUdfTElCODAyMTEgaXMgbm90IHNldA0KDQoj
DQojIENGRzgwMjExIG5lZWRzIHRvIGJlIGVuYWJsZWQgZm9yIE1BQzgwMjEx
DQojDQojIENPTkZJR19XSU1BWCBpcyBub3Qgc2V0DQojIENPTkZJR19SRktJ
TEwgaXMgbm90IHNldA0KIyBDT05GSUdfTkVUXzlQIGlzIG5vdCBzZXQNCiMg
Q09ORklHX0NBSUYgaXMgbm90IHNldA0KIyBDT05GSUdfQ0VQSF9MSUIgaXMg
bm90IHNldA0KIyBDT05GSUdfTkZDIGlzIG5vdCBzZXQNCkNPTkZJR19IQVZF
X0JQRl9KSVQ9eQ0KDQojDQojIERldmljZSBEcml2ZXJzDQojDQoNCiMNCiMg
R2VuZXJpYyBEcml2ZXIgT3B0aW9ucw0KIw0KQ09ORklHX1VFVkVOVF9IRUxQ
RVJfUEFUSD0iIg0KIyBDT05GSUdfREVWVE1QRlMgaXMgbm90IHNldA0KQ09O
RklHX1NUQU5EQUxPTkU9eQ0KQ09ORklHX1BSRVZFTlRfRklSTVdBUkVfQlVJ
TEQ9eQ0KQ09ORklHX0ZXX0xPQURFUj15DQpDT05GSUdfRklSTVdBUkVfSU5f
S0VSTkVMPXkNCkNPTkZJR19FWFRSQV9GSVJNV0FSRT0iIg0KIyBDT05GSUdf
REVCVUdfRFJJVkVSIGlzIG5vdCBzZXQNCiMgQ09ORklHX0RFQlVHX0RFVlJF
UyBpcyBub3Qgc2V0DQojIENPTkZJR19TWVNfSFlQRVJWSVNPUiBpcyBub3Qg
c2V0DQojIENPTkZJR19HRU5FUklDX0NQVV9ERVZJQ0VTIGlzIG5vdCBzZXQN
CiMgQ09ORklHX0RNQV9TSEFSRURfQlVGRkVSIGlzIG5vdCBzZXQNCiMgQ09O
RklHX0NNQSBpcyBub3Qgc2V0DQojIENPTkZJR19DT05ORUNUT1IgaXMgbm90
IHNldA0KQ09ORklHX01URD15DQojIENPTkZJR19NVERfUkVEQk9PVF9QQVJU
UyBpcyBub3Qgc2V0DQpDT05GSUdfTVREX0NNRExJTkVfUEFSVFM9eQ0KIyBD
T05GSUdfTVREX0FGU19QQVJUUyBpcyBub3Qgc2V0DQojIENPTkZJR19NVERf
T0ZfUEFSVFMgaXMgbm90IHNldA0KIyBDT05GSUdfTVREX0FSN19QQVJUUyBp
cyBub3Qgc2V0DQoNCiMNCiMgVXNlciBNb2R1bGVzIEFuZCBUcmFuc2xhdGlv
biBMYXllcnMNCiMNCkNPTkZJR19NVERfQ0hBUj15DQpDT05GSUdfTVREX0JM
S0RFVlM9eQ0KQ09ORklHX01URF9CTE9DSz15DQojIENPTkZJR19GVEwgaXMg
bm90IHNldA0KIyBDT05GSUdfTkZUTCBpcyBub3Qgc2V0DQojIENPTkZJR19J
TkZUTCBpcyBub3Qgc2V0DQojIENPTkZJR19SRkRfRlRMIGlzIG5vdCBzZXQN
CiMgQ09ORklHX1NTRkRDIGlzIG5vdCBzZXQNCiMgQ09ORklHX1NNX0ZUTCBp
cyBub3Qgc2V0DQojIENPTkZJR19NVERfT09QUyBpcyBub3Qgc2V0DQojIENP
TkZJR19NVERfU1dBUCBpcyBub3Qgc2V0DQoNCiMNCiMgUkFNL1JPTS9GbGFz
aCBjaGlwIGRyaXZlcnMNCiMNCkNPTkZJR19NVERfQ0ZJPXkNCiMgQ09ORklH
X01URF9KRURFQ1BST0JFIGlzIG5vdCBzZXQNCkNPTkZJR19NVERfR0VOX1BS
T0JFPXkNCkNPTkZJR19NVERfQ0ZJX0FEVl9PUFRJT05TPXkNCkNPTkZJR19N
VERfQ0ZJX05PU1dBUD15DQojIENPTkZJR19NVERfQ0ZJX0JFX0JZVEVfU1dB
UCBpcyBub3Qgc2V0DQojIENPTkZJR19NVERfQ0ZJX0xFX0JZVEVfU1dBUCBp
cyBub3Qgc2V0DQojIENPTkZJR19NVERfQ0ZJX0dFT01FVFJZIGlzIG5vdCBz
ZXQNCkNPTkZJR19NVERfTUFQX0JBTktfV0lEVEhfMT15DQpDT05GSUdfTVRE
X01BUF9CQU5LX1dJRFRIXzI9eQ0KQ09ORklHX01URF9NQVBfQkFOS19XSURU
SF80PXkNCiMgQ09ORklHX01URF9NQVBfQkFOS19XSURUSF84IGlzIG5vdCBz
ZXQNCiMgQ09ORklHX01URF9NQVBfQkFOS19XSURUSF8xNiBpcyBub3Qgc2V0
DQojIENPTkZJR19NVERfTUFQX0JBTktfV0lEVEhfMzIgaXMgbm90IHNldA0K
Q09ORklHX01URF9DRklfSTE9eQ0KQ09ORklHX01URF9DRklfSTI9eQ0KIyBD
T05GSUdfTVREX0NGSV9JNCBpcyBub3Qgc2V0DQojIENPTkZJR19NVERfQ0ZJ
X0k4IGlzIG5vdCBzZXQNCiMgQ09ORklHX01URF9PVFAgaXMgbm90IHNldA0K
Q09ORklHX01URF9DRklfSU5URUxFWFQ9eQ0KIyBDT05GSUdfTVREX0NGSV9B
TURTVEQgaXMgbm90IHNldA0KIyBDT05GSUdfTVREX0NGSV9TVEFBIGlzIG5v
dCBzZXQNCkNPTkZJR19NVERfQ0ZJX1VUSUw9eQ0KIyBDT05GSUdfTVREX1JB
TSBpcyBub3Qgc2V0DQojIENPTkZJR19NVERfUk9NIGlzIG5vdCBzZXQNCiMg
Q09ORklHX01URF9BQlNFTlQgaXMgbm90IHNldA0KDQojDQojIE1hcHBpbmcg
ZHJpdmVycyBmb3IgY2hpcCBhY2Nlc3MNCiMNCiMgQ09ORklHX01URF9DT01Q
TEVYX01BUFBJTkdTIGlzIG5vdCBzZXQNCiMgQ09ORklHX01URF9QSFlTTUFQ
IGlzIG5vdCBzZXQNCiMgQ09ORklHX01URF9QSFlTTUFQX09GIGlzIG5vdCBz
ZXQNCiMgQ09ORklHX01URF9QTEFUUkFNIGlzIG5vdCBzZXQNCg0KIw0KIyBT
ZWxmLWNvbnRhaW5lZCBNVEQgZGV2aWNlIGRyaXZlcnMNCiMNCiMgQ09ORklH
X01URF9TTFJBTSBpcyBub3Qgc2V0DQojIENPTkZJR19NVERfUEhSQU0gaXMg
bm90IHNldA0KIyBDT05GSUdfTVREX01URFJBTSBpcyBub3Qgc2V0DQojIENP
TkZJR19NVERfQkxPQ0syTVREIGlzIG5vdCBzZXQNCg0KIw0KIyBEaXNrLU9u
LUNoaXAgRGV2aWNlIERyaXZlcnMNCiMNCiMgQ09ORklHX01URF9ET0NHMyBp
cyBub3Qgc2V0DQojIENPTkZJR19NVERfTkFORCBpcyBub3Qgc2V0DQojIENP
TkZJR19NVERfT05FTkFORCBpcyBub3Qgc2V0DQoNCiMNCiMgTFBERFIgZmxh
c2ggbWVtb3J5IGRyaXZlcnMNCiMNCiMgQ09ORklHX01URF9MUEREUiBpcyBu
b3Qgc2V0DQojIENPTkZJR19NVERfVUJJIGlzIG5vdCBzZXQNCkNPTkZJR19E
VEM9eQ0KQ09ORklHX09GPXkNCg0KIw0KIyBEZXZpY2UgVHJlZSBhbmQgT3Bl
biBGaXJtd2FyZSBzdXBwb3J0DQojDQojIENPTkZJR19QUk9DX0RFVklDRVRS
RUUgaXMgbm90IHNldA0KIyBDT05GSUdfT0ZfU0VMRlRFU1QgaXMgbm90IHNl
dA0KQ09ORklHX09GX0ZMQVRUUkVFPXkNCkNPTkZJR19PRl9FQVJMWV9GTEFU
VFJFRT15DQpDT05GSUdfT0ZfQUREUkVTUz15DQpDT05GSUdfT0ZfSVJRPXkN
CkNPTkZJR19PRl9ERVZJQ0U9eQ0KQ09ORklHX09GX0kyQz15DQpDT05GSUdf
T0ZfTkVUPXkNCkNPTkZJR19PRl9NRElPPXkNCkNPTkZJR19PRl9NVEQ9eQ0K
IyBDT05GSUdfUEFSUE9SVCBpcyBub3Qgc2V0DQpDT05GSUdfQkxLX0RFVj15
DQojIENPTkZJR19CTEtfREVWX0NPV19DT01NT04gaXMgbm90IHNldA0KQ09O
RklHX0JMS19ERVZfTE9PUD15DQpDT05GSUdfQkxLX0RFVl9MT09QX01JTl9D
T1VOVD04DQojIENPTkZJR19CTEtfREVWX0NSWVBUT0xPT1AgaXMgbm90IHNl
dA0KDQojDQojIERSQkQgZGlzYWJsZWQgYmVjYXVzZSBQUk9DX0ZTLCBJTkVU
IG9yIENPTk5FQ1RPUiBub3Qgc2VsZWN0ZWQNCiMNCiMgQ09ORklHX0JMS19E
RVZfTkJEIGlzIG5vdCBzZXQNCkNPTkZJR19CTEtfREVWX1JBTT15DQpDT05G
SUdfQkxLX0RFVl9SQU1fQ09VTlQ9MTYNCkNPTkZJR19CTEtfREVWX1JBTV9T
SVpFPTQwOTYNCiMgQ09ORklHX0JMS19ERVZfWElQIGlzIG5vdCBzZXQNCiMg
Q09ORklHX0NEUk9NX1BLVENEVkQgaXMgbm90IHNldA0KIyBDT05GSUdfQVRB
X09WRVJfRVRIIGlzIG5vdCBzZXQNCkNPTkZJR19YRU5fQkxLREVWX0ZST05U
RU5EPXkNCkNPTkZJR19YRU5fQkxLREVWX0JBQ0tFTkQ9eQ0KIyBDT05GSUdf
QkxLX0RFVl9SQkQgaXMgbm90IHNldA0KDQojDQojIE1pc2MgZGV2aWNlcw0K
Iw0KIyBDT05GSUdfU0VOU09SU19MSVMzTFYwMkQgaXMgbm90IHNldA0KIyBD
T05GSUdfQUQ1MjVYX0RQT1QgaXMgbm90IHNldA0KIyBDT05GSUdfQVRNRUxf
UFdNIGlzIG5vdCBzZXQNCiMgQ09ORklHX0lDUzkzMlM0MDEgaXMgbm90IHNl
dA0KIyBDT05GSUdfRU5DTE9TVVJFX1NFUlZJQ0VTIGlzIG5vdCBzZXQNCiMg
Q09ORklHX0FQRFM5ODAyQUxTIGlzIG5vdCBzZXQNCiMgQ09ORklHX0lTTDI5
MDAzIGlzIG5vdCBzZXQNCiMgQ09ORklHX0lTTDI5MDIwIGlzIG5vdCBzZXQN
CiMgQ09ORklHX1NFTlNPUlNfVFNMMjU1MCBpcyBub3Qgc2V0DQojIENPTkZJ
R19TRU5TT1JTX0JIMTc4MCBpcyBub3Qgc2V0DQojIENPTkZJR19TRU5TT1JT
X0JIMTc3MCBpcyBub3Qgc2V0DQojIENPTkZJR19TRU5TT1JTX0FQRFM5OTBY
IGlzIG5vdCBzZXQNCiMgQ09ORklHX0hNQzYzNTIgaXMgbm90IHNldA0KIyBD
T05GSUdfRFMxNjgyIGlzIG5vdCBzZXQNCiMgQ09ORklHX0FSTV9DSEFSTENE
IGlzIG5vdCBzZXQNCiMgQ09ORklHX0JNUDA4NV9JMkMgaXMgbm90IHNldA0K
IyBDT05GSUdfVVNCX1NXSVRDSF9GU0E5NDgwIGlzIG5vdCBzZXQNCiMgQ09O
RklHX0MyUE9SVCBpcyBub3Qgc2V0DQoNCiMNCiMgRUVQUk9NIHN1cHBvcnQN
CiMNCiMgQ09ORklHX0VFUFJPTV9BVDI0IGlzIG5vdCBzZXQNCiMgQ09ORklH
X0VFUFJPTV9MRUdBQ1kgaXMgbm90IHNldA0KIyBDT05GSUdfRUVQUk9NX01B
WDY4NzUgaXMgbm90IHNldA0KIyBDT05GSUdfRUVQUk9NXzkzQ1g2IGlzIG5v
dCBzZXQNCiMgQ09ORklHX0lXTUMzMjAwVE9QIGlzIG5vdCBzZXQNCg0KIw0K
IyBUZXhhcyBJbnN0cnVtZW50cyBzaGFyZWQgdHJhbnNwb3J0IGxpbmUgZGlz
Y2lwbGluZQ0KIw0KIyBDT05GSUdfU0VOU09SU19MSVMzX0kyQyBpcyBub3Qg
c2V0DQoNCiMNCiMgQWx0ZXJhIEZQR0EgZmlybXdhcmUgZG93bmxvYWQgbW9k
dWxlDQojDQojIENPTkZJR19BTFRFUkFfU1RBUEwgaXMgbm90IHNldA0KDQoj
DQojIFNDU0kgZGV2aWNlIHN1cHBvcnQNCiMNCkNPTkZJR19TQ1NJX01PRD15
DQojIENPTkZJR19SQUlEX0FUVFJTIGlzIG5vdCBzZXQNCkNPTkZJR19TQ1NJ
PXkNCkNPTkZJR19TQ1NJX0RNQT15DQojIENPTkZJR19TQ1NJX1RHVCBpcyBu
b3Qgc2V0DQojIENPTkZJR19TQ1NJX05FVExJTksgaXMgbm90IHNldA0KQ09O
RklHX1NDU0lfUFJPQ19GUz15DQoNCiMNCiMgU0NTSSBzdXBwb3J0IHR5cGUg
KGRpc2ssIHRhcGUsIENELVJPTSkNCiMNCkNPTkZJR19CTEtfREVWX1NEPXkN
CiMgQ09ORklHX0NIUl9ERVZfU1QgaXMgbm90IHNldA0KIyBDT05GSUdfQ0hS
X0RFVl9PU1NUIGlzIG5vdCBzZXQNCiMgQ09ORklHX0JMS19ERVZfU1IgaXMg
bm90IHNldA0KIyBDT05GSUdfQ0hSX0RFVl9TRyBpcyBub3Qgc2V0DQojIENP
TkZJR19DSFJfREVWX1NDSCBpcyBub3Qgc2V0DQojIENPTkZJR19TQ1NJX01V
TFRJX0xVTiBpcyBub3Qgc2V0DQojIENPTkZJR19TQ1NJX0NPTlNUQU5UUyBp
cyBub3Qgc2V0DQojIENPTkZJR19TQ1NJX0xPR0dJTkcgaXMgbm90IHNldA0K
IyBDT05GSUdfU0NTSV9TQ0FOX0FTWU5DIGlzIG5vdCBzZXQNCg0KIw0KIyBT
Q1NJIFRyYW5zcG9ydHMNCiMNCiMgQ09ORklHX1NDU0lfU1BJX0FUVFJTIGlz
IG5vdCBzZXQNCiMgQ09ORklHX1NDU0lfRkNfQVRUUlMgaXMgbm90IHNldA0K
IyBDT05GSUdfU0NTSV9JU0NTSV9BVFRSUyBpcyBub3Qgc2V0DQojIENPTkZJ
R19TQ1NJX1NBU19BVFRSUyBpcyBub3Qgc2V0DQojIENPTkZJR19TQ1NJX1NB
U19MSUJTQVMgaXMgbm90IHNldA0KIyBDT05GSUdfU0NTSV9TUlBfQVRUUlMg
aXMgbm90IHNldA0KQ09ORklHX1NDU0lfTE9XTEVWRUw9eQ0KIyBDT05GSUdf
SVNDU0lfVENQIGlzIG5vdCBzZXQNCiMgQ09ORklHX0lTQ1NJX0JPT1RfU1lT
RlMgaXMgbm90IHNldA0KIyBDT05GSUdfTElCRkMgaXMgbm90IHNldA0KIyBD
T05GSUdfTElCRkNPRSBpcyBub3Qgc2V0DQojIENPTkZJR19TQ1NJX0RFQlVH
IGlzIG5vdCBzZXQNCiMgQ09ORklHX1NDU0lfREggaXMgbm90IHNldA0KIyBD
T05GSUdfU0NTSV9PU0RfSU5JVElBVE9SIGlzIG5vdCBzZXQNCkNPTkZJR19I
QVZFX1BBVEFfUExBVEZPUk09eQ0KQ09ORklHX0FUQT15DQojIENPTkZJR19B
VEFfTk9OU1RBTkRBUkQgaXMgbm90IHNldA0KQ09ORklHX0FUQV9WRVJCT1NF
X0VSUk9SPXkNCkNPTkZJR19TQVRBX1BNUD15DQoNCiMNCiMgQ29udHJvbGxl
cnMgd2l0aCBub24tU0ZGIG5hdGl2ZSBpbnRlcmZhY2UNCiMNCiMgQ09ORklH
X1NBVEFfQUhDSV9QTEFURk9STSBpcyBub3Qgc2V0DQpDT05GSUdfQVRBX1NG
Rj15DQoNCiMNCiMgU0ZGIGNvbnRyb2xsZXJzIHdpdGggY3VzdG9tIERNQSBp
bnRlcmZhY2UNCiMNCiMgQ09ORklHX0FUQV9CTURNQSBpcyBub3Qgc2V0DQoN
CiMNCiMgUElPLW9ubHkgU0ZGIGNvbnRyb2xsZXJzDQojDQojIENPTkZJR19Q
QVRBX1BMQVRGT1JNIGlzIG5vdCBzZXQNCg0KIw0KIyBHZW5lcmljIGZhbGxi
YWNrIC8gbGVnYWN5IGRyaXZlcnMNCiMNCiMgQ09ORklHX01EIGlzIG5vdCBz
ZXQNCiMgQ09ORklHX1RBUkdFVF9DT1JFIGlzIG5vdCBzZXQNCkNPTkZJR19O
RVRERVZJQ0VTPXkNCkNPTkZJR19ORVRfQ09SRT15DQojIENPTkZJR19CT05E
SU5HIGlzIG5vdCBzZXQNCiMgQ09ORklHX0RVTU1ZIGlzIG5vdCBzZXQNCiMg
Q09ORklHX0VRVUFMSVpFUiBpcyBub3Qgc2V0DQpDT05GSUdfTUlJPXkNCiMg
Q09ORklHX05FVF9URUFNIGlzIG5vdCBzZXQNCiMgQ09ORklHX01BQ1ZMQU4g
aXMgbm90IHNldA0KIyBDT05GSUdfTkVUQ09OU09MRSBpcyBub3Qgc2V0DQoj
IENPTkZJR19ORVRQT0xMIGlzIG5vdCBzZXQNCiMgQ09ORklHX05FVF9QT0xM
X0NPTlRST0xMRVIgaXMgbm90IHNldA0KQ09ORklHX1RVTj15DQojIENPTkZJ
R19WRVRIIGlzIG5vdCBzZXQNCg0KIw0KIyBDQUlGIHRyYW5zcG9ydCBkcml2
ZXJzDQojDQpDT05GSUdfRVRIRVJORVQ9eQ0KQ09ORklHX05FVF9WRU5ET1Jf
QlJPQURDT009eQ0KIyBDT05GSUdfQjQ0IGlzIG5vdCBzZXQNCiMgQ09ORklH
X05FVF9DQUxYRURBX1hHTUFDIGlzIG5vdCBzZXQNCkNPTkZJR19ORVRfVkVO
RE9SX0NIRUxTSU89eQ0KQ09ORklHX05FVF9WRU5ET1JfQ0lSUlVTPXkNCiMg
Q09ORklHX0NTODl4MCBpcyBub3Qgc2V0DQojIENPTkZJR19ETTkwMDAgaXMg
bm90IHNldA0KIyBDT05GSUdfRE5FVCBpcyBub3Qgc2V0DQpDT05GSUdfTkVU
X1ZFTkRPUl9GQVJBREFZPXkNCiMgQ09ORklHX0ZUTUFDMTAwIGlzIG5vdCBz
ZXQNCiMgQ09ORklHX0ZUR01BQzEwMCBpcyBub3Qgc2V0DQpDT05GSUdfTkVU
X1ZFTkRPUl9JTlRFTD15DQpDT05GSUdfTkVUX1ZFTkRPUl9JODI1WFg9eQ0K
Q09ORklHX05FVF9WRU5ET1JfTUFSVkVMTD15DQpDT05GSUdfTkVUX1ZFTkRP
Ul9NSUNSRUw9eQ0KIyBDT05GSUdfS1M4ODUxX01MTCBpcyBub3Qgc2V0DQpD
T05GSUdfTkVUX1ZFTkRPUl9OQVRTRU1JPXkNCkNPTkZJR19ORVRfVkVORE9S
XzgzOTA9eQ0KIyBDT05GSUdfQVg4ODc5NiBpcyBub3Qgc2V0DQojIENPTkZJ
R19FVEhPQyBpcyBub3Qgc2V0DQpDT05GSUdfTkVUX1ZFTkRPUl9TRUVRPXkN
CiMgQ09ORklHX1NFRVE4MDA1IGlzIG5vdCBzZXQNCkNPTkZJR19ORVRfVkVO
RE9SX1NNU0M9eQ0KQ09ORklHX1NNQzkxWD15DQpDT05GSUdfU01DOTExWD15
DQpDT05GSUdfU01TQzkxMVg9eQ0KIyBDT05GSUdfU01TQzkxMVhfQVJDSF9I
T09LUyBpcyBub3Qgc2V0DQpDT05GSUdfTkVUX1ZFTkRPUl9TVE1JQ1JPPXkN
CiMgQ09ORklHX1NUTU1BQ19FVEggaXMgbm90IHNldA0KQ09ORklHX05FVF9W
RU5ET1JfV0laTkVUPXkNCiMgQ09ORklHX1dJWk5FVF9XNTEwMCBpcyBub3Qg
c2V0DQojIENPTkZJR19XSVpORVRfVzUzMDAgaXMgbm90IHNldA0KQ09ORklH
X1BIWUxJQj15DQoNCiMNCiMgTUlJIFBIWSBkZXZpY2UgZHJpdmVycw0KIw0K
IyBDT05GSUdfQU1EX1BIWSBpcyBub3Qgc2V0DQojIENPTkZJR19NQVJWRUxM
X1BIWSBpcyBub3Qgc2V0DQojIENPTkZJR19EQVZJQ09NX1BIWSBpcyBub3Qg
c2V0DQojIENPTkZJR19RU0VNSV9QSFkgaXMgbm90IHNldA0KIyBDT05GSUdf
TFhUX1BIWSBpcyBub3Qgc2V0DQojIENPTkZJR19DSUNBREFfUEhZIGlzIG5v
dCBzZXQNCiMgQ09ORklHX1ZJVEVTU0VfUEhZIGlzIG5vdCBzZXQNCkNPTkZJ
R19TTVNDX1BIWT15DQojIENPTkZJR19CUk9BRENPTV9QSFkgaXMgbm90IHNl
dA0KIyBDT05GSUdfSUNQTFVTX1BIWSBpcyBub3Qgc2V0DQojIENPTkZJR19S
RUFMVEVLX1BIWSBpcyBub3Qgc2V0DQojIENPTkZJR19OQVRJT05BTF9QSFkg
aXMgbm90IHNldA0KIyBDT05GSUdfU1RFMTBYUCBpcyBub3Qgc2V0DQojIENP
TkZJR19MU0lfRVQxMDExQ19QSFkgaXMgbm90IHNldA0KIyBDT05GSUdfTUlD
UkVMX1BIWSBpcyBub3Qgc2V0DQojIENPTkZJR19GSVhFRF9QSFkgaXMgbm90
IHNldA0KIyBDT05GSUdfTURJT19CSVRCQU5HIGlzIG5vdCBzZXQNCiMgQ09O
RklHX1BQUCBpcyBub3Qgc2V0DQojIENPTkZJR19TTElQIGlzIG5vdCBzZXQN
CkNPTkZJR19XTEFOPXkNCiMgQ09ORklHX0hPU1RBUCBpcyBub3Qgc2V0DQoj
IENPTkZJR19XTF9USSBpcyBub3Qgc2V0DQoNCiMNCiMgRW5hYmxlIFdpTUFY
IChOZXR3b3JraW5nIG9wdGlvbnMpIHRvIHNlZSB0aGUgV2lNQVggZHJpdmVy
cw0KIw0KIyBDT05GSUdfV0FOIGlzIG5vdCBzZXQNCkNPTkZJR19YRU5fTkVU
REVWX0ZST05URU5EPXkNCkNPTkZJR19YRU5fTkVUREVWX0JBQ0tFTkQ9eQ0K
IyBDT05GSUdfSVNETiBpcyBub3Qgc2V0DQoNCiMNCiMgSW5wdXQgZGV2aWNl
IHN1cHBvcnQNCiMNCkNPTkZJR19JTlBVVD15DQojIENPTkZJR19JTlBVVF9G
Rl9NRU1MRVNTIGlzIG5vdCBzZXQNCiMgQ09ORklHX0lOUFVUX1BPTExERVYg
aXMgbm90IHNldA0KIyBDT05GSUdfSU5QVVRfU1BBUlNFS01BUCBpcyBub3Qg
c2V0DQojIENPTkZJR19JTlBVVF9NQVRSSVhLTUFQIGlzIG5vdCBzZXQNCg0K
Iw0KIyBVc2VybGFuZCBpbnRlcmZhY2VzDQojDQpDT05GSUdfSU5QVVRfTU9V
U0VERVY9eQ0KQ09ORklHX0lOUFVUX01PVVNFREVWX1BTQVVYPXkNCkNPTkZJ
R19JTlBVVF9NT1VTRURFVl9TQ1JFRU5fWD0xMDI0DQpDT05GSUdfSU5QVVRf
TU9VU0VERVZfU0NSRUVOX1k9NzY4DQojIENPTkZJR19JTlBVVF9KT1lERVYg
aXMgbm90IHNldA0KQ09ORklHX0lOUFVUX0VWREVWPXkNCiMgQ09ORklHX0lO
UFVUX0VWQlVHIGlzIG5vdCBzZXQNCg0KIw0KIyBJbnB1dCBEZXZpY2UgRHJp
dmVycw0KIw0KQ09ORklHX0lOUFVUX0tFWUJPQVJEPXkNCiMgQ09ORklHX0tF
WUJPQVJEX0FEUDU1ODggaXMgbm90IHNldA0KIyBDT05GSUdfS0VZQk9BUkRf
QURQNTU4OSBpcyBub3Qgc2V0DQpDT05GSUdfS0VZQk9BUkRfQVRLQkQ9eQ0K
IyBDT05GSUdfS0VZQk9BUkRfUVQxMDcwIGlzIG5vdCBzZXQNCiMgQ09ORklH
X0tFWUJPQVJEX1FUMjE2MCBpcyBub3Qgc2V0DQojIENPTkZJR19LRVlCT0FS
RF9MS0tCRCBpcyBub3Qgc2V0DQojIENPTkZJR19LRVlCT0FSRF9UQ0E2NDE2
IGlzIG5vdCBzZXQNCiMgQ09ORklHX0tFWUJPQVJEX1RDQTg0MTggaXMgbm90
IHNldA0KIyBDT05GSUdfS0VZQk9BUkRfTE04MzMzIGlzIG5vdCBzZXQNCiMg
Q09ORklHX0tFWUJPQVJEX01BWDczNTkgaXMgbm90IHNldA0KIyBDT05GSUdf
S0VZQk9BUkRfTUNTIGlzIG5vdCBzZXQNCiMgQ09ORklHX0tFWUJPQVJEX01Q
UjEyMSBpcyBub3Qgc2V0DQojIENPTkZJR19LRVlCT0FSRF9ORVdUT04gaXMg
bm90IHNldA0KIyBDT05GSUdfS0VZQk9BUkRfT1BFTkNPUkVTIGlzIG5vdCBz
ZXQNCiMgQ09ORklHX0tFWUJPQVJEX1NBTVNVTkcgaXMgbm90IHNldA0KIyBD
T05GSUdfS0VZQk9BUkRfU1RPV0FXQVkgaXMgbm90IHNldA0KIyBDT05GSUdf
S0VZQk9BUkRfU1VOS0JEIGlzIG5vdCBzZXQNCiMgQ09ORklHX0tFWUJPQVJE
X09NQVA0IGlzIG5vdCBzZXQNCiMgQ09ORklHX0tFWUJPQVJEX1hUS0JEIGlz
IG5vdCBzZXQNCkNPTkZJR19JTlBVVF9NT1VTRT15DQpDT05GSUdfTU9VU0Vf
UFMyPXkNCkNPTkZJR19NT1VTRV9QUzJfQUxQUz15DQpDT05GSUdfTU9VU0Vf
UFMyX0xPR0lQUzJQUD15DQpDT05GSUdfTU9VU0VfUFMyX1NZTkFQVElDUz15
DQpDT05GSUdfTU9VU0VfUFMyX1RSQUNLUE9JTlQ9eQ0KIyBDT05GSUdfTU9V
U0VfUFMyX0VMQU5URUNIIGlzIG5vdCBzZXQNCiMgQ09ORklHX01PVVNFX1BT
Ml9TRU5URUxJQyBpcyBub3Qgc2V0DQojIENPTkZJR19NT1VTRV9QUzJfVE9V
Q0hLSVQgaXMgbm90IHNldA0KIyBDT05GSUdfTU9VU0VfU0VSSUFMIGlzIG5v
dCBzZXQNCiMgQ09ORklHX01PVVNFX0FQUExFVE9VQ0ggaXMgbm90IHNldA0K
IyBDT05GSUdfTU9VU0VfQkNNNTk3NCBpcyBub3Qgc2V0DQojIENPTkZJR19N
T1VTRV9WU1hYWEFBIGlzIG5vdCBzZXQNCiMgQ09ORklHX01PVVNFX1NZTkFQ
VElDU19JMkMgaXMgbm90IHNldA0KIyBDT05GSUdfTU9VU0VfU1lOQVBUSUNT
X1VTQiBpcyBub3Qgc2V0DQojIENPTkZJR19JTlBVVF9KT1lTVElDSyBpcyBu
b3Qgc2V0DQojIENPTkZJR19JTlBVVF9UQUJMRVQgaXMgbm90IHNldA0KIyBD
T05GSUdfSU5QVVRfVE9VQ0hTQ1JFRU4gaXMgbm90IHNldA0KIyBDT05GSUdf
SU5QVVRfTUlTQyBpcyBub3Qgc2V0DQoNCiMNCiMgSGFyZHdhcmUgSS9PIHBv
cnRzDQojDQpDT05GSUdfU0VSSU89eQ0KIyBDT05GSUdfU0VSSU9fU0VSUE9S
VCBpcyBub3Qgc2V0DQpDT05GSUdfU0VSSU9fQU1CQUtNST15DQpDT05GSUdf
U0VSSU9fTElCUFMyPXkNCiMgQ09ORklHX1NFUklPX1JBVyBpcyBub3Qgc2V0
DQojIENPTkZJR19TRVJJT19BTFRFUkFfUFMyIGlzIG5vdCBzZXQNCiMgQ09O
RklHX1NFUklPX1BTMk1VTFQgaXMgbm90IHNldA0KIyBDT05GSUdfR0FNRVBP
UlQgaXMgbm90IHNldA0KDQojDQojIENoYXJhY3RlciBkZXZpY2VzDQojDQpD
T05GSUdfVlQ9eQ0KQ09ORklHX0NPTlNPTEVfVFJBTlNMQVRJT05TPXkNCkNP
TkZJR19WVF9DT05TT0xFPXkNCkNPTkZJR19WVF9DT05TT0xFX1NMRUVQPXkN
CkNPTkZJR19IV19DT05TT0xFPXkNCiMgQ09ORklHX1ZUX0hXX0NPTlNPTEVf
QklORElORyBpcyBub3Qgc2V0DQpDT05GSUdfVU5JWDk4X1BUWVM9eQ0KIyBD
T05GSUdfREVWUFRTX01VTFRJUExFX0lOU1RBTkNFUyBpcyBub3Qgc2V0DQpD
T05GSUdfTEVHQUNZX1BUWVM9eQ0KQ09ORklHX0xFR0FDWV9QVFlfQ09VTlQ9
MTYNCiMgQ09ORklHX1NFUklBTF9OT05TVEFOREFSRCBpcyBub3Qgc2V0DQoj
IENPTkZJR19OX0dTTSBpcyBub3Qgc2V0DQojIENPTkZJR19UUkFDRV9TSU5L
IGlzIG5vdCBzZXQNCkNPTkZJR19ERVZLTUVNPXkNCg0KIw0KIyBTZXJpYWwg
ZHJpdmVycw0KIw0KQ09ORklHX1NFUklBTF84MjUwPXkNCiMgQ09ORklHX1NF
UklBTF84MjUwX0NPTlNPTEUgaXMgbm90IHNldA0KQ09ORklHX1NFUklBTF84
MjUwX05SX1VBUlRTPTQNCkNPTkZJR19TRVJJQUxfODI1MF9SVU5USU1FX1VB
UlRTPTQNCkNPTkZJR19TRVJJQUxfODI1MF9FWFRFTkRFRD15DQpDT05GSUdf
U0VSSUFMXzgyNTBfTUFOWV9QT1JUUz15DQpDT05GSUdfU0VSSUFMXzgyNTBf
U0hBUkVfSVJRPXkNCiMgQ09ORklHX1NFUklBTF84MjUwX0RFVEVDVF9JUlEg
aXMgbm90IHNldA0KQ09ORklHX1NFUklBTF84MjUwX1JTQT15DQojIENPTkZJ
R19TRVJJQUxfODI1MF9EVyBpcyBub3Qgc2V0DQojIENPTkZJR19TRVJJQUxf
ODI1MF9FTSBpcyBub3Qgc2V0DQoNCiMNCiMgTm9uLTgyNTAgc2VyaWFsIHBv
cnQgc3VwcG9ydA0KIw0KIyBDT05GSUdfU0VSSUFMX0FNQkFfUEwwMTAgaXMg
bm90IHNldA0KQ09ORklHX1NFUklBTF9BTUJBX1BMMDExPXkNCkNPTkZJR19T
RVJJQUxfQU1CQV9QTDAxMV9DT05TT0xFPXkNCkNPTkZJR19TRVJJQUxfQ09S
RT15DQpDT05GSUdfU0VSSUFMX0NPUkVfQ09OU09MRT15DQojIENPTkZJR19T
RVJJQUxfT0ZfUExBVEZPUk0gaXMgbm90IHNldA0KIyBDT05GSUdfU0VSSUFM
X1RJTUJFUkRBTEUgaXMgbm90IHNldA0KIyBDT05GSUdfU0VSSUFMX0FMVEVS
QV9KVEFHVUFSVCBpcyBub3Qgc2V0DQojIENPTkZJR19TRVJJQUxfQUxURVJB
X1VBUlQgaXMgbm90IHNldA0KIyBDT05GSUdfU0VSSUFMX1hJTElOWF9QU19V
QVJUIGlzIG5vdCBzZXQNCiMgQ09ORklHX1RUWV9QUklOVEsgaXMgbm90IHNl
dA0KQ09ORklHX0hWQ19EUklWRVI9eQ0KQ09ORklHX0hWQ19JUlE9eQ0KQ09O
RklHX0hWQ19YRU49eQ0KIyBDT05GSUdfSFZDX1hFTl9GUk9OVEVORCBpcyBu
b3Qgc2V0DQojIENPTkZJR19IVkNfRENDIGlzIG5vdCBzZXQNCiMgQ09ORklH
X0lQTUlfSEFORExFUiBpcyBub3Qgc2V0DQpDT05GSUdfSFdfUkFORE9NPXkN
CiMgQ09ORklHX0hXX1JBTkRPTV9USU1FUklPTUVNIGlzIG5vdCBzZXQNCiMg
Q09ORklHX0hXX1JBTkRPTV9BVE1FTCBpcyBub3Qgc2V0DQojIENPTkZJR19S
Mzk2NCBpcyBub3Qgc2V0DQojIENPTkZJR19SQVdfRFJJVkVSIGlzIG5vdCBz
ZXQNCiMgQ09ORklHX1RDR19UUE0gaXMgbm90IHNldA0KQ09ORklHX0kyQz15
DQpDT05GSUdfSTJDX0JPQVJESU5GTz15DQpDT05GSUdfSTJDX0NPTVBBVD15
DQpDT05GSUdfSTJDX0NIQVJERVY9eQ0KIyBDT05GSUdfSTJDX01VWCBpcyBu
b3Qgc2V0DQpDT05GSUdfSTJDX0hFTFBFUl9BVVRPPXkNCg0KIw0KIyBJMkMg
SGFyZHdhcmUgQnVzIHN1cHBvcnQNCiMNCg0KIw0KIyBJMkMgc3lzdGVtIGJ1
cyBkcml2ZXJzIChtb3N0bHkgZW1iZWRkZWQgLyBzeXN0ZW0tb24tY2hpcCkN
CiMNCiMgQ09ORklHX0kyQ19ERVNJR05XQVJFX1BMQVRGT1JNIGlzIG5vdCBz
ZXQNCiMgQ09ORklHX0kyQ19PQ09SRVMgaXMgbm90IHNldA0KIyBDT05GSUdf
STJDX1BDQV9QTEFURk9STSBpcyBub3Qgc2V0DQojIENPTkZJR19JMkNfUFhB
X1BDSSBpcyBub3Qgc2V0DQojIENPTkZJR19JMkNfU0lNVEVDIGlzIG5vdCBz
ZXQNCiMgQ09ORklHX0kyQ19WRVJTQVRJTEUgaXMgbm90IHNldA0KIyBDT05G
SUdfSTJDX1hJTElOWCBpcyBub3Qgc2V0DQoNCiMNCiMgRXh0ZXJuYWwgSTJD
L1NNQnVzIGFkYXB0ZXIgZHJpdmVycw0KIw0KIyBDT05GSUdfSTJDX1BBUlBP
UlRfTElHSFQgaXMgbm90IHNldA0KIyBDT05GSUdfSTJDX1RBT1NfRVZNIGlz
IG5vdCBzZXQNCg0KIw0KIyBPdGhlciBJMkMvU01CdXMgYnVzIGRyaXZlcnMN
CiMNCiMgQ09ORklHX0kyQ19ERUJVR19DT1JFIGlzIG5vdCBzZXQNCiMgQ09O
RklHX0kyQ19ERUJVR19BTEdPIGlzIG5vdCBzZXQNCiMgQ09ORklHX0kyQ19E
RUJVR19CVVMgaXMgbm90IHNldA0KIyBDT05GSUdfU1BJIGlzIG5vdCBzZXQN
CiMgQ09ORklHX0hTSSBpcyBub3Qgc2V0DQoNCiMNCiMgUFBTIHN1cHBvcnQN
CiMNCiMgQ09ORklHX1BQUyBpcyBub3Qgc2V0DQoNCiMNCiMgUFBTIGdlbmVy
YXRvcnMgc3VwcG9ydA0KIw0KDQojDQojIFBUUCBjbG9jayBzdXBwb3J0DQoj
DQoNCiMNCiMgRW5hYmxlIERldmljZSBEcml2ZXJzIC0+IFBQUyB0byBzZWUg
dGhlIFBUUCBjbG9jayBvcHRpb25zLg0KIw0KQ09ORklHX0FSQ0hfSEFWRV9D
VVNUT01fR1BJT19IPXkNCkNPTkZJR19BUkNIX1dBTlRfT1BUSU9OQUxfR1BJ
T0xJQj15DQojIENPTkZJR19HUElPTElCIGlzIG5vdCBzZXQNCiMgQ09ORklH
X1cxIGlzIG5vdCBzZXQNCiMgQ09ORklHX1BPV0VSX1NVUFBMWSBpcyBub3Qg
c2V0DQojIENPTkZJR19IV01PTiBpcyBub3Qgc2V0DQojIENPTkZJR19USEVS
TUFMIGlzIG5vdCBzZXQNCiMgQ09ORklHX1dBVENIRE9HIGlzIG5vdCBzZXQN
CkNPTkZJR19TU0JfUE9TU0lCTEU9eQ0KDQojDQojIFNvbmljcyBTaWxpY29u
IEJhY2twbGFuZQ0KIw0KIyBDT05GSUdfU1NCIGlzIG5vdCBzZXQNCkNPTkZJ
R19CQ01BX1BPU1NJQkxFPXkNCg0KIw0KIyBCcm9hZGNvbSBzcGVjaWZpYyBB
TUJBDQojDQojIENPTkZJR19CQ01BIGlzIG5vdCBzZXQNCg0KIw0KIyBNdWx0
aWZ1bmN0aW9uIGRldmljZSBkcml2ZXJzDQojDQojIENPTkZJR19NRkRfQ09S
RSBpcyBub3Qgc2V0DQojIENPTkZJR19NRkRfODhQTTg2MFggaXMgbm90IHNl
dA0KIyBDT05GSUdfTUZEX1NNNTAxIGlzIG5vdCBzZXQNCiMgQ09ORklHX0hU
Q19QQVNJQzMgaXMgbm90IHNldA0KIyBDT05GSUdfTUZEX0xNMzUzMyBpcyBu
b3Qgc2V0DQojIENPTkZJR19UUFM2MTA1WCBpcyBub3Qgc2V0DQojIENPTkZJ
R19UUFM2NTA3WCBpcyBub3Qgc2V0DQojIENPTkZJR19NRkRfVFBTNjUyMTcg
aXMgbm90IHNldA0KIyBDT05GSUdfVFdMNDAzMF9DT1JFIGlzIG5vdCBzZXQN
CiMgQ09ORklHX1RXTDYwNDBfQ09SRSBpcyBub3Qgc2V0DQojIENPTkZJR19N
RkRfU1RNUEUgaXMgbm90IHNldA0KIyBDT05GSUdfTUZEX1RDMzU4OVggaXMg
bm90IHNldA0KIyBDT05GSUdfTUZEX1RNSU8gaXMgbm90IHNldA0KIyBDT05G
SUdfTUZEX1Q3TDY2WEIgaXMgbm90IHNldA0KIyBDT05GSUdfTUZEX1RDNjM4
N1hCIGlzIG5vdCBzZXQNCiMgQ09ORklHX1BNSUNfREE5MDNYIGlzIG5vdCBz
ZXQNCiMgQ09ORklHX01GRF9EQTkwNTJfSTJDIGlzIG5vdCBzZXQNCiMgQ09O
RklHX1BNSUNfQURQNTUyMCBpcyBub3Qgc2V0DQojIENPTkZJR19NRkRfTUFY
Nzc2OTMgaXMgbm90IHNldA0KIyBDT05GSUdfTUZEX01BWDg5MjUgaXMgbm90
IHNldA0KIyBDT05GSUdfTUZEX01BWDg5OTcgaXMgbm90IHNldA0KIyBDT05G
SUdfTUZEX01BWDg5OTggaXMgbm90IHNldA0KIyBDT05GSUdfTUZEX1M1TV9D
T1JFIGlzIG5vdCBzZXQNCiMgQ09ORklHX01GRF9XTTg0MDAgaXMgbm90IHNl
dA0KIyBDT05GSUdfTUZEX1dNODMxWF9JMkMgaXMgbm90IHNldA0KIyBDT05G
SUdfTUZEX1dNODM1MF9JMkMgaXMgbm90IHNldA0KIyBDT05GSUdfTUZEX1dN
ODk5NCBpcyBub3Qgc2V0DQojIENPTkZJR19NRkRfUENGNTA2MzMgaXMgbm90
IHNldA0KIyBDT05GSUdfTUZEX01DMTNYWFhfSTJDIGlzIG5vdCBzZXQNCiMg
Q09ORklHX0FCWDUwMF9DT1JFIGlzIG5vdCBzZXQNCiMgQ09ORklHX01GRF9X
TDEyNzNfQ09SRSBpcyBub3Qgc2V0DQojIENPTkZJR19NRkRfVFBTNjUwOTAg
aXMgbm90IHNldA0KIyBDT05GSUdfTUZEX1JDNVQ1ODMgaXMgbm90IHNldA0K
IyBDT05GSUdfTUZEX1BBTE1BUyBpcyBub3Qgc2V0DQojIENPTkZJR19SRUdV
TEFUT1IgaXMgbm90IHNldA0KIyBDT05GSUdfTUVESUFfU1VQUE9SVCBpcyBu
b3Qgc2V0DQoNCiMNCiMgR3JhcGhpY3Mgc3VwcG9ydA0KIw0KIyBDT05GSUdf
RFJNIGlzIG5vdCBzZXQNCiMgQ09ORklHX1ZHQVNUQVRFIGlzIG5vdCBzZXQN
CiMgQ09ORklHX1ZJREVPX09VVFBVVF9DT05UUk9MIGlzIG5vdCBzZXQNCkNP
TkZJR19GQj15DQojIENPTkZJR19GSVJNV0FSRV9FRElEIGlzIG5vdCBzZXQN
CiMgQ09ORklHX0ZCX0REQyBpcyBub3Qgc2V0DQojIENPTkZJR19GQl9CT09U
X1ZFU0FfU1VQUE9SVCBpcyBub3Qgc2V0DQpDT05GSUdfRkJfQ0ZCX0ZJTExS
RUNUPXkNCkNPTkZJR19GQl9DRkJfQ09QWUFSRUE9eQ0KQ09ORklHX0ZCX0NG
Ql9JTUFHRUJMSVQ9eQ0KIyBDT05GSUdfRkJfQ0ZCX1JFVl9QSVhFTFNfSU5f
QllURSBpcyBub3Qgc2V0DQojIENPTkZJR19GQl9TWVNfRklMTFJFQ1QgaXMg
bm90IHNldA0KIyBDT05GSUdfRkJfU1lTX0NPUFlBUkVBIGlzIG5vdCBzZXQN
CiMgQ09ORklHX0ZCX1NZU19JTUFHRUJMSVQgaXMgbm90IHNldA0KIyBDT05G
SUdfRkJfRk9SRUlHTl9FTkRJQU4gaXMgbm90IHNldA0KIyBDT05GSUdfRkJf
U1lTX0ZPUFMgaXMgbm90IHNldA0KIyBDT05GSUdfRkJfV01UX0dFX1JPUFMg
aXMgbm90IHNldA0KIyBDT05GSUdfRkJfU1ZHQUxJQiBpcyBub3Qgc2V0DQoj
IENPTkZJR19GQl9NQUNNT0RFUyBpcyBub3Qgc2V0DQojIENPTkZJR19GQl9C
QUNLTElHSFQgaXMgbm90IHNldA0KIyBDT05GSUdfRkJfTU9ERV9IRUxQRVJT
IGlzIG5vdCBzZXQNCiMgQ09ORklHX0ZCX1RJTEVCTElUVElORyBpcyBub3Qg
c2V0DQoNCiMNCiMgRnJhbWUgYnVmZmVyIGhhcmR3YXJlIGRyaXZlcnMNCiMN
CkNPTkZJR19GQl9BUk1DTENEPXkNCiMgQ09ORklHX0ZCX1MxRDEzWFhYIGlz
IG5vdCBzZXQNCiMgQ09ORklHX0ZCX1ZJUlRVQUwgaXMgbm90IHNldA0KIyBD
T05GSUdfWEVOX0ZCREVWX0ZST05URU5EIGlzIG5vdCBzZXQNCiMgQ09ORklH
X0ZCX01FVFJPTk9NRSBpcyBub3Qgc2V0DQojIENPTkZJR19GQl9CUk9BRFNI
RUVUIGlzIG5vdCBzZXQNCiMgQ09ORklHX0ZCX0FVT19LMTkwWCBpcyBub3Qg
c2V0DQojIENPTkZJR19FWFlOT1NfVklERU8gaXMgbm90IHNldA0KIyBDT05G
SUdfQkFDS0xJR0hUX0xDRF9TVVBQT1JUIGlzIG5vdCBzZXQNCg0KIw0KIyBD
b25zb2xlIGRpc3BsYXkgZHJpdmVyIHN1cHBvcnQNCiMNCkNPTkZJR19EVU1N
WV9DT05TT0xFPXkNCkNPTkZJR19GUkFNRUJVRkZFUl9DT05TT0xFPXkNCiMg
Q09ORklHX0ZSQU1FQlVGRkVSX0NPTlNPTEVfREVURUNUX1BSSU1BUlkgaXMg
bm90IHNldA0KIyBDT05GSUdfRlJBTUVCVUZGRVJfQ09OU09MRV9ST1RBVElP
TiBpcyBub3Qgc2V0DQpDT05GSUdfRk9OVFM9eQ0KIyBDT05GSUdfRk9OVF84
eDggaXMgbm90IHNldA0KIyBDT05GSUdfRk9OVF84eDE2IGlzIG5vdCBzZXQN
CiMgQ09ORklHX0ZPTlRfNngxMSBpcyBub3Qgc2V0DQojIENPTkZJR19GT05U
Xzd4MTQgaXMgbm90IHNldA0KIyBDT05GSUdfRk9OVF9QRUFSTF84eDggaXMg
bm90IHNldA0KQ09ORklHX0ZPTlRfQUNPUk5fOHg4PXkNCiMgQ09ORklHX0ZP
TlRfTUlOSV80eDYgaXMgbm90IHNldA0KIyBDT05GSUdfRk9OVF9TVU44eDE2
IGlzIG5vdCBzZXQNCiMgQ09ORklHX0ZPTlRfU1VOMTJ4MjIgaXMgbm90IHNl
dA0KIyBDT05GSUdfRk9OVF8xMHgxOCBpcyBub3Qgc2V0DQojIENPTkZJR19M
T0dPIGlzIG5vdCBzZXQNCkNPTkZJR19TT1VORD15DQpDT05GSUdfU09VTkRf
T1NTX0NPUkU9eQ0KQ09ORklHX1NPVU5EX09TU19DT1JFX1BSRUNMQUlNPXkN
CkNPTkZJR19TTkQ9eQ0KQ09ORklHX1NORF9USU1FUj15DQpDT05GSUdfU05E
X1BDTT15DQojIENPTkZJR19TTkRfU0VRVUVOQ0VSIGlzIG5vdCBzZXQNCkNP
TkZJR19TTkRfT1NTRU1VTD15DQpDT05GSUdfU05EX01JWEVSX09TUz15DQpD
T05GSUdfU05EX1BDTV9PU1M9eQ0KQ09ORklHX1NORF9QQ01fT1NTX1BMVUdJ
TlM9eQ0KIyBDT05GSUdfU05EX0hSVElNRVIgaXMgbm90IHNldA0KIyBDT05G
SUdfU05EX0RZTkFNSUNfTUlOT1JTIGlzIG5vdCBzZXQNCkNPTkZJR19TTkRf
U1VQUE9SVF9PTERfQVBJPXkNCkNPTkZJR19TTkRfVkVSQk9TRV9QUk9DRlM9
eQ0KIyBDT05GSUdfU05EX1ZFUkJPU0VfUFJJTlRLIGlzIG5vdCBzZXQNCiMg
Q09ORklHX1NORF9ERUJVRyBpcyBub3Qgc2V0DQpDT05GSUdfU05EX1ZNQVNU
RVI9eQ0KIyBDT05GSUdfU05EX1JBV01JRElfU0VRIGlzIG5vdCBzZXQNCiMg
Q09ORklHX1NORF9PUEwzX0xJQl9TRVEgaXMgbm90IHNldA0KIyBDT05GSUdf
U05EX09QTDRfTElCX1NFUSBpcyBub3Qgc2V0DQojIENPTkZJR19TTkRfU0JB
V0VfU0VRIGlzIG5vdCBzZXQNCiMgQ09ORklHX1NORF9FTVUxMEsxX1NFUSBp
cyBub3Qgc2V0DQpDT05GSUdfU05EX0FDOTdfQ09ERUM9eQ0KQ09ORklHX1NO
RF9EUklWRVJTPXkNCiMgQ09ORklHX1NORF9EVU1NWSBpcyBub3Qgc2V0DQoj
IENPTkZJR19TTkRfQUxPT1AgaXMgbm90IHNldA0KIyBDT05GSUdfU05EX01U
UEFWIGlzIG5vdCBzZXQNCiMgQ09ORklHX1NORF9TRVJJQUxfVTE2NTUwIGlz
IG5vdCBzZXQNCiMgQ09ORklHX1NORF9NUFU0MDEgaXMgbm90IHNldA0KIyBD
T05GSUdfU05EX0FDOTdfUE9XRVJfU0FWRSBpcyBub3Qgc2V0DQpDT05GSUdf
U05EX0FSTT15DQpDT05GSUdfU05EX0FSTUFBQ0k9eQ0KIyBDT05GSUdfU05E
X1NPQyBpcyBub3Qgc2V0DQojIENPTkZJR19TT1VORF9QUklNRSBpcyBub3Qg
c2V0DQpDT05GSUdfQUM5N19CVVM9eQ0KDQojDQojIEhJRCBzdXBwb3J0DQoj
DQpDT05GSUdfSElEPXkNCiMgQ09ORklHX0hJRFJBVyBpcyBub3Qgc2V0DQpD
T05GSUdfSElEX0dFTkVSSUM9eQ0KDQojDQojIFNwZWNpYWwgSElEIGRyaXZl
cnMNCiMNCiMgQ09ORklHX1VTQl9BUkNIX0hBU19PSENJIGlzIG5vdCBzZXQN
CiMgQ09ORklHX1VTQl9BUkNIX0hBU19FSENJIGlzIG5vdCBzZXQNCiMgQ09O
RklHX1VTQl9BUkNIX0hBU19YSENJIGlzIG5vdCBzZXQNCkNPTkZJR19VU0Jf
U1VQUE9SVD15DQpDT05GSUdfVVNCX0FSQ0hfSEFTX0hDRD15DQojIENPTkZJ
R19VU0IgaXMgbm90IHNldA0KIyBDT05GSUdfVVNCX09UR19XSElURUxJU1Qg
aXMgbm90IHNldA0KIyBDT05GSUdfVVNCX09UR19CTEFDS0xJU1RfSFVCIGlz
IG5vdCBzZXQNCg0KIw0KIyBOT1RFOiBVU0JfU1RPUkFHRSBkZXBlbmRzIG9u
IFNDU0kgYnV0IEJMS19ERVZfU0QgbWF5DQojDQojIENPTkZJR19VU0JfR0FE
R0VUIGlzIG5vdCBzZXQNCg0KIw0KIyBPVEcgYW5kIHJlbGF0ZWQgaW5mcmFz
dHJ1Y3R1cmUNCiMNCkNPTkZJR19NTUM9eQ0KIyBDT05GSUdfTU1DX0RFQlVH
IGlzIG5vdCBzZXQNCiMgQ09ORklHX01NQ19VTlNBRkVfUkVTVU1FIGlzIG5v
dCBzZXQNCiMgQ09ORklHX01NQ19DTEtHQVRFIGlzIG5vdCBzZXQNCg0KIw0K
IyBNTUMvU0QvU0RJTyBDYXJkIERyaXZlcnMNCiMNCkNPTkZJR19NTUNfQkxP
Q0s9eQ0KQ09ORklHX01NQ19CTE9DS19NSU5PUlM9OA0KQ09ORklHX01NQ19C
TE9DS19CT1VOQ0U9eQ0KIyBDT05GSUdfU0RJT19VQVJUIGlzIG5vdCBzZXQN
CiMgQ09ORklHX01NQ19URVNUIGlzIG5vdCBzZXQNCg0KIw0KIyBNTUMvU0Qv
U0RJTyBIb3N0IENvbnRyb2xsZXIgRHJpdmVycw0KIw0KQ09ORklHX01NQ19B
Uk1NTUNJPXkNCiMgQ09ORklHX01NQ19TREhDSSBpcyBub3Qgc2V0DQojIENP
TkZJR19NTUNfU0RIQ0lfUFhBVjMgaXMgbm90IHNldA0KIyBDT05GSUdfTU1D
X1NESENJX1BYQVYyIGlzIG5vdCBzZXQNCiMgQ09ORklHX01NQ19EVyBpcyBu
b3Qgc2V0DQojIENPTkZJR19NRU1TVElDSyBpcyBub3Qgc2V0DQojIENPTkZJ
R19ORVdfTEVEUyBpcyBub3Qgc2V0DQojIENPTkZJR19BQ0NFU1NJQklMSVRZ
IGlzIG5vdCBzZXQNCkNPTkZJR19SVENfTElCPXkNCiMgQ09ORklHX1JUQ19D
TEFTUyBpcyBub3Qgc2V0DQojIENPTkZJR19ETUFERVZJQ0VTIGlzIG5vdCBz
ZXQNCiMgQ09ORklHX0FVWERJU1BMQVkgaXMgbm90IHNldA0KIyBDT05GSUdf
VUlPIGlzIG5vdCBzZXQNCg0KIw0KIyBWaXJ0aW8gZHJpdmVycw0KIw0KIyBD
T05GSUdfVklSVElPX0JBTExPT04gaXMgbm90IHNldA0KIyBDT05GSUdfVklS
VElPX01NSU8gaXMgbm90IHNldA0KDQojDQojIE1pY3Jvc29mdCBIeXBlci1W
IGd1ZXN0IHN1cHBvcnQNCiMNCg0KIw0KIyBYZW4gZHJpdmVyIHN1cHBvcnQN
CiMNCiMgQ09ORklHX1hFTl9CQUxMT09OIGlzIG5vdCBzZXQNCkNPTkZJR19Y
RU5fREVWX0VWVENITj15DQpDT05GSUdfWEVOX0JBQ0tFTkQ9eQ0KQ09ORklH
X1hFTkZTPXkNCkNPTkZJR19YRU5fQ09NUEFUX1hFTkZTPXkNCiMgQ09ORklH
X1hFTl9TWVNfSFlQRVJWSVNPUiBpcyBub3Qgc2V0DQpDT05GSUdfWEVOX1hF
TkJVU19GUk9OVEVORD15DQojIENPTkZJR19YRU5fR05UREVWIGlzIG5vdCBz
ZXQNCiMgQ09ORklHX1hFTl9HUkFOVF9ERVZfQUxMT0MgaXMgbm90IHNldA0K
Q09ORklHX1hFTl9QUklWQ01EPXkNCiMgQ09ORklHX1NUQUdJTkcgaXMgbm90
IHNldA0KQ09ORklHX0NMS0RFVl9MT09LVVA9eQ0KQ09ORklHX0hBVkVfTUFD
SF9DTEtERVY9eQ0KDQojDQojIEhhcmR3YXJlIFNwaW5sb2NrIGRyaXZlcnMN
CiMNCkNPTkZJR19DTEtTUkNfTU1JTz15DQpDT05GSUdfSU9NTVVfU1VQUE9S
VD15DQoNCiMNCiMgUmVtb3RlcHJvYyBkcml2ZXJzIChFWFBFUklNRU5UQUwp
DQojDQoNCiMNCiMgUnBtc2cgZHJpdmVycyAoRVhQRVJJTUVOVEFMKQ0KIw0K
IyBDT05GSUdfVklSVF9EUklWRVJTIGlzIG5vdCBzZXQNCiMgQ09ORklHX1BN
X0RFVkZSRVEgaXMgbm90IHNldA0KIyBDT05GSUdfRVhUQ09OIGlzIG5vdCBz
ZXQNCiMgQ09ORklHX01FTU9SWSBpcyBub3Qgc2V0DQojIENPTkZJR19JSU8g
aXMgbm90IHNldA0KDQojDQojIEZpbGUgc3lzdGVtcw0KIw0KQ09ORklHX0VY
VDJfRlM9eQ0KIyBDT05GSUdfRVhUMl9GU19YQVRUUiBpcyBub3Qgc2V0DQoj
IENPTkZJR19FWFQyX0ZTX1hJUCBpcyBub3Qgc2V0DQpDT05GSUdfRVhUM19G
Uz15DQpDT05GSUdfRVhUM19ERUZBVUxUU19UT19PUkRFUkVEPXkNCkNPTkZJ
R19FWFQzX0ZTX1hBVFRSPXkNCiMgQ09ORklHX0VYVDNfRlNfUE9TSVhfQUNM
IGlzIG5vdCBzZXQNCiMgQ09ORklHX0VYVDNfRlNfU0VDVVJJVFkgaXMgbm90
IHNldA0KQ09ORklHX0VYVDRfRlM9eQ0KQ09ORklHX0VYVDRfRlNfWEFUVFI9
eQ0KIyBDT05GSUdfRVhUNF9GU19QT1NJWF9BQ0wgaXMgbm90IHNldA0KIyBD
T05GSUdfRVhUNF9GU19TRUNVUklUWSBpcyBub3Qgc2V0DQojIENPTkZJR19F
WFQ0X0RFQlVHIGlzIG5vdCBzZXQNCkNPTkZJR19KQkQ9eQ0KQ09ORklHX0pC
RDI9eQ0KQ09ORklHX0ZTX01CQ0FDSEU9eQ0KIyBDT05GSUdfUkVJU0VSRlNf
RlMgaXMgbm90IHNldA0KIyBDT05GSUdfSkZTX0ZTIGlzIG5vdCBzZXQNCiMg
Q09ORklHX1hGU19GUyBpcyBub3Qgc2V0DQojIENPTkZJR19HRlMyX0ZTIGlz
IG5vdCBzZXQNCiMgQ09ORklHX0JUUkZTX0ZTIGlzIG5vdCBzZXQNCiMgQ09O
RklHX05JTEZTMl9GUyBpcyBub3Qgc2V0DQpDT05GSUdfRlNfUE9TSVhfQUNM
PXkNCkNPTkZJR19FWFBPUlRGUz15DQpDT05GSUdfRklMRV9MT0NLSU5HPXkN
CkNPTkZJR19GU05PVElGWT15DQpDT05GSUdfRE5PVElGWT15DQpDT05GSUdf
SU5PVElGWV9VU0VSPXkNCiMgQ09ORklHX0ZBTk9USUZZIGlzIG5vdCBzZXQN
CiMgQ09ORklHX1FVT1RBIGlzIG5vdCBzZXQNCiMgQ09ORklHX1FVT1RBQ1RM
IGlzIG5vdCBzZXQNCiMgQ09ORklHX0FVVE9GUzRfRlMgaXMgbm90IHNldA0K
IyBDT05GSUdfRlVTRV9GUyBpcyBub3Qgc2V0DQpDT05GSUdfR0VORVJJQ19B
Q0w9eQ0KDQojDQojIENhY2hlcw0KIw0KIyBDT05GSUdfRlNDQUNIRSBpcyBu
b3Qgc2V0DQoNCiMNCiMgQ0QtUk9NL0RWRCBGaWxlc3lzdGVtcw0KIw0KIyBD
T05GSUdfSVNPOTY2MF9GUyBpcyBub3Qgc2V0DQojIENPTkZJR19VREZfRlMg
aXMgbm90IHNldA0KDQojDQojIERPUy9GQVQvTlQgRmlsZXN5c3RlbXMNCiMN
CkNPTkZJR19GQVRfRlM9eQ0KIyBDT05GSUdfTVNET1NfRlMgaXMgbm90IHNl
dA0KQ09ORklHX1ZGQVRfRlM9eQ0KQ09ORklHX0ZBVF9ERUZBVUxUX0NPREVQ
QUdFPTQzNw0KQ09ORklHX0ZBVF9ERUZBVUxUX0lPQ0hBUlNFVD0iaXNvODg1
OS0xIg0KIyBDT05GSUdfTlRGU19GUyBpcyBub3Qgc2V0DQoNCiMNCiMgUHNl
dWRvIGZpbGVzeXN0ZW1zDQojDQpDT05GSUdfUFJPQ19GUz15DQpDT05GSUdf
UFJPQ19TWVNDVEw9eQ0KQ09ORklHX1BST0NfUEFHRV9NT05JVE9SPXkNCkNP
TkZJR19TWVNGUz15DQpDT05GSUdfVE1QRlM9eQ0KQ09ORklHX1RNUEZTX1BP
U0lYX0FDTD15DQpDT05GSUdfVE1QRlNfWEFUVFI9eQ0KIyBDT05GSUdfSFVH
RVRMQl9QQUdFIGlzIG5vdCBzZXQNCiMgQ09ORklHX0NPTkZJR0ZTX0ZTIGlz
IG5vdCBzZXQNCkNPTkZJR19NSVNDX0ZJTEVTWVNURU1TPXkNCiMgQ09ORklH
X0FERlNfRlMgaXMgbm90IHNldA0KIyBDT05GSUdfQUZGU19GUyBpcyBub3Qg
c2V0DQojIENPTkZJR19IRlNfRlMgaXMgbm90IHNldA0KIyBDT05GSUdfSEZT
UExVU19GUyBpcyBub3Qgc2V0DQojIENPTkZJR19CRUZTX0ZTIGlzIG5vdCBz
ZXQNCiMgQ09ORklHX0JGU19GUyBpcyBub3Qgc2V0DQojIENPTkZJR19FRlNf
RlMgaXMgbm90IHNldA0KQ09ORklHX0pGRlMyX0ZTPXkNCkNPTkZJR19KRkZT
Ml9GU19ERUJVRz0wDQpDT05GSUdfSkZGUzJfRlNfV1JJVEVCVUZGRVI9eQ0K
IyBDT05GSUdfSkZGUzJfRlNfV0JVRl9WRVJJRlkgaXMgbm90IHNldA0KIyBD
T05GSUdfSkZGUzJfU1VNTUFSWSBpcyBub3Qgc2V0DQojIENPTkZJR19KRkZT
Ml9GU19YQVRUUiBpcyBub3Qgc2V0DQojIENPTkZJR19KRkZTMl9DT01QUkVT
U0lPTl9PUFRJT05TIGlzIG5vdCBzZXQNCkNPTkZJR19KRkZTMl9aTElCPXkN
CiMgQ09ORklHX0pGRlMyX0xaTyBpcyBub3Qgc2V0DQpDT05GSUdfSkZGUzJf
UlRJTUU9eQ0KIyBDT05GSUdfSkZGUzJfUlVCSU4gaXMgbm90IHNldA0KIyBD
T05GSUdfTE9HRlMgaXMgbm90IHNldA0KQ09ORklHX0NSQU1GUz15DQojIENP
TkZJR19TUVVBU0hGUyBpcyBub3Qgc2V0DQojIENPTkZJR19WWEZTX0ZTIGlz
IG5vdCBzZXQNCkNPTkZJR19NSU5JWF9GUz15DQojIENPTkZJR19PTUZTX0ZT
IGlzIG5vdCBzZXQNCiMgQ09ORklHX0hQRlNfRlMgaXMgbm90IHNldA0KIyBD
T05GSUdfUU5YNEZTX0ZTIGlzIG5vdCBzZXQNCiMgQ09ORklHX1FOWDZGU19G
UyBpcyBub3Qgc2V0DQpDT05GSUdfUk9NRlNfRlM9eQ0KQ09ORklHX1JPTUZT
X0JBQ0tFRF9CWV9CTE9DSz15DQojIENPTkZJR19ST01GU19CQUNLRURfQllf
TVREIGlzIG5vdCBzZXQNCiMgQ09ORklHX1JPTUZTX0JBQ0tFRF9CWV9CT1RI
IGlzIG5vdCBzZXQNCkNPTkZJR19ST01GU19PTl9CTE9DSz15DQojIENPTkZJ
R19QU1RPUkUgaXMgbm90IHNldA0KIyBDT05GSUdfU1lTVl9GUyBpcyBub3Qg
c2V0DQojIENPTkZJR19VRlNfRlMgaXMgbm90IHNldA0KQ09ORklHX05FVFdP
UktfRklMRVNZU1RFTVM9eQ0KQ09ORklHX05GU19GUz15DQpDT05GSUdfTkZT
X1YyPXkNCkNPTkZJR19ORlNfVjM9eQ0KIyBDT05GSUdfTkZTX1YzX0FDTCBp
cyBub3Qgc2V0DQojIENPTkZJR19ORlNfVjQgaXMgbm90IHNldA0KQ09ORklH
X1JPT1RfTkZTPXkNCkNPTkZJR19ORlNEPXkNCkNPTkZJR19ORlNEX1YzPXkN
CiMgQ09ORklHX05GU0RfVjNfQUNMIGlzIG5vdCBzZXQNCiMgQ09ORklHX05G
U0RfVjQgaXMgbm90IHNldA0KQ09ORklHX0xPQ0tEPXkNCkNPTkZJR19MT0NL
RF9WND15DQpDT05GSUdfTkZTX0NPTU1PTj15DQpDT05GSUdfU1VOUlBDPXkN
CiMgQ09ORklHX1NVTlJQQ19ERUJVRyBpcyBub3Qgc2V0DQojIENPTkZJR19D
RVBIX0ZTIGlzIG5vdCBzZXQNCiMgQ09ORklHX0NJRlMgaXMgbm90IHNldA0K
IyBDT05GSUdfTkNQX0ZTIGlzIG5vdCBzZXQNCiMgQ09ORklHX0NPREFfRlMg
aXMgbm90IHNldA0KIyBDT05GSUdfQUZTX0ZTIGlzIG5vdCBzZXQNCkNPTkZJ
R19OTFM9eQ0KQ09ORklHX05MU19ERUZBVUxUPSJpc284ODU5LTEiDQojIENP
TkZJR19OTFNfQ09ERVBBR0VfNDM3IGlzIG5vdCBzZXQNCiMgQ09ORklHX05M
U19DT0RFUEFHRV83MzcgaXMgbm90IHNldA0KIyBDT05GSUdfTkxTX0NPREVQ
QUdFXzc3NSBpcyBub3Qgc2V0DQpDT05GSUdfTkxTX0NPREVQQUdFXzg1MD15
DQojIENPTkZJR19OTFNfQ09ERVBBR0VfODUyIGlzIG5vdCBzZXQNCiMgQ09O
RklHX05MU19DT0RFUEFHRV84NTUgaXMgbm90IHNldA0KIyBDT05GSUdfTkxT
X0NPREVQQUdFXzg1NyBpcyBub3Qgc2V0DQojIENPTkZJR19OTFNfQ09ERVBB
R0VfODYwIGlzIG5vdCBzZXQNCiMgQ09ORklHX05MU19DT0RFUEFHRV84NjEg
aXMgbm90IHNldA0KIyBDT05GSUdfTkxTX0NPREVQQUdFXzg2MiBpcyBub3Qg
c2V0DQojIENPTkZJR19OTFNfQ09ERVBBR0VfODYzIGlzIG5vdCBzZXQNCiMg
Q09ORklHX05MU19DT0RFUEFHRV84NjQgaXMgbm90IHNldA0KIyBDT05GSUdf
TkxTX0NPREVQQUdFXzg2NSBpcyBub3Qgc2V0DQojIENPTkZJR19OTFNfQ09E
RVBBR0VfODY2IGlzIG5vdCBzZXQNCiMgQ09ORklHX05MU19DT0RFUEFHRV84
NjkgaXMgbm90IHNldA0KIyBDT05GSUdfTkxTX0NPREVQQUdFXzkzNiBpcyBu
b3Qgc2V0DQojIENPTkZJR19OTFNfQ09ERVBBR0VfOTUwIGlzIG5vdCBzZXQN
CiMgQ09ORklHX05MU19DT0RFUEFHRV85MzIgaXMgbm90IHNldA0KIyBDT05G
SUdfTkxTX0NPREVQQUdFXzk0OSBpcyBub3Qgc2V0DQojIENPTkZJR19OTFNf
Q09ERVBBR0VfODc0IGlzIG5vdCBzZXQNCiMgQ09ORklHX05MU19JU084ODU5
XzggaXMgbm90IHNldA0KIyBDT05GSUdfTkxTX0NPREVQQUdFXzEyNTAgaXMg
bm90IHNldA0KIyBDT05GSUdfTkxTX0NPREVQQUdFXzEyNTEgaXMgbm90IHNl
dA0KIyBDT05GSUdfTkxTX0FTQ0lJIGlzIG5vdCBzZXQNCkNPTkZJR19OTFNf
SVNPODg1OV8xPXkNCiMgQ09ORklHX05MU19JU084ODU5XzIgaXMgbm90IHNl
dA0KIyBDT05GSUdfTkxTX0lTTzg4NTlfMyBpcyBub3Qgc2V0DQojIENPTkZJ
R19OTFNfSVNPODg1OV80IGlzIG5vdCBzZXQNCiMgQ09ORklHX05MU19JU084
ODU5XzUgaXMgbm90IHNldA0KIyBDT05GSUdfTkxTX0lTTzg4NTlfNiBpcyBu
b3Qgc2V0DQojIENPTkZJR19OTFNfSVNPODg1OV83IGlzIG5vdCBzZXQNCiMg
Q09ORklHX05MU19JU084ODU5XzkgaXMgbm90IHNldA0KIyBDT05GSUdfTkxT
X0lTTzg4NTlfMTMgaXMgbm90IHNldA0KIyBDT05GSUdfTkxTX0lTTzg4NTlf
MTQgaXMgbm90IHNldA0KIyBDT05GSUdfTkxTX0lTTzg4NTlfMTUgaXMgbm90
IHNldA0KIyBDT05GSUdfTkxTX0tPSThfUiBpcyBub3Qgc2V0DQojIENPTkZJ
R19OTFNfS09JOF9VIGlzIG5vdCBzZXQNCiMgQ09ORklHX05MU19NQUNfUk9N
QU4gaXMgbm90IHNldA0KIyBDT05GSUdfTkxTX01BQ19DRUxUSUMgaXMgbm90
IHNldA0KIyBDT05GSUdfTkxTX01BQ19DRU5URVVSTyBpcyBub3Qgc2V0DQoj
IENPTkZJR19OTFNfTUFDX0NST0FUSUFOIGlzIG5vdCBzZXQNCiMgQ09ORklH
X05MU19NQUNfQ1lSSUxMSUMgaXMgbm90IHNldA0KIyBDT05GSUdfTkxTX01B
Q19HQUVMSUMgaXMgbm90IHNldA0KIyBDT05GSUdfTkxTX01BQ19HUkVFSyBp
cyBub3Qgc2V0DQojIENPTkZJR19OTFNfTUFDX0lDRUxBTkQgaXMgbm90IHNl
dA0KIyBDT05GSUdfTkxTX01BQ19JTlVJVCBpcyBub3Qgc2V0DQojIENPTkZJ
R19OTFNfTUFDX1JPTUFOSUFOIGlzIG5vdCBzZXQNCiMgQ09ORklHX05MU19N
QUNfVFVSS0lTSCBpcyBub3Qgc2V0DQojIENPTkZJR19OTFNfVVRGOCBpcyBu
b3Qgc2V0DQoNCiMNCiMgS2VybmVsIGhhY2tpbmcNCiMNCiMgQ09ORklHX1BS
SU5US19USU1FIGlzIG5vdCBzZXQNCkNPTkZJR19ERUZBVUxUX01FU1NBR0Vf
TE9HTEVWRUw9NA0KQ09ORklHX0VOQUJMRV9XQVJOX0RFUFJFQ0FURUQ9eQ0K
Q09ORklHX0VOQUJMRV9NVVNUX0NIRUNLPXkNCkNPTkZJR19GUkFNRV9XQVJO
PTEwMjQNCkNPTkZJR19NQUdJQ19TWVNSUT15DQojIENPTkZJR19TVFJJUF9B
U01fU1lNUyBpcyBub3Qgc2V0DQojIENPTkZJR19SRUFEQUJMRV9BU00gaXMg
bm90IHNldA0KIyBDT05GSUdfVU5VU0VEX1NZTUJPTFMgaXMgbm90IHNldA0K
IyBDT05GSUdfREVCVUdfRlMgaXMgbm90IHNldA0KIyBDT05GSUdfSEVBREVS
U19DSEVDSyBpcyBub3Qgc2V0DQojIENPTkZJR19ERUJVR19TRUNUSU9OX01J
U01BVENIIGlzIG5vdCBzZXQNCkNPTkZJR19ERUJVR19LRVJORUw9eQ0KIyBD
T05GSUdfREVCVUdfU0hJUlEgaXMgbm90IHNldA0KIyBDT05GSUdfTE9DS1VQ
X0RFVEVDVE9SIGlzIG5vdCBzZXQNCiMgQ09ORklHX0hBUkRMT0NLVVBfREVU
RUNUT1IgaXMgbm90IHNldA0KIyBDT05GSUdfUEFOSUNfT05fT09QUyBpcyBu
b3Qgc2V0DQpDT05GSUdfUEFOSUNfT05fT09QU19WQUxVRT0wDQojIENPTkZJ
R19ERVRFQ1RfSFVOR19UQVNLIGlzIG5vdCBzZXQNCkNPTkZJR19TQ0hFRF9E
RUJVRz15DQojIENPTkZJR19TQ0hFRFNUQVRTIGlzIG5vdCBzZXQNCiMgQ09O
RklHX1RJTUVSX1NUQVRTIGlzIG5vdCBzZXQNCiMgQ09ORklHX0RFQlVHX09C
SkVDVFMgaXMgbm90IHNldA0KIyBDT05GSUdfREVCVUdfU0xBQiBpcyBub3Qg
c2V0DQojIENPTkZJR19ERUJVR19LTUVNTEVBSyBpcyBub3Qgc2V0DQojIENP
TkZJR19ERUJVR19SVF9NVVRFWEVTIGlzIG5vdCBzZXQNCiMgQ09ORklHX1JU
X01VVEVYX1RFU1RFUiBpcyBub3Qgc2V0DQojIENPTkZJR19ERUJVR19TUElO
TE9DSyBpcyBub3Qgc2V0DQojIENPTkZJR19ERUJVR19NVVRFWEVTIGlzIG5v
dCBzZXQNCiMgQ09ORklHX0RFQlVHX0xPQ0tfQUxMT0MgaXMgbm90IHNldA0K
IyBDT05GSUdfUFJPVkVfTE9DS0lORyBpcyBub3Qgc2V0DQojIENPTkZJR19T
UEFSU0VfUkNVX1BPSU5URVIgaXMgbm90IHNldA0KIyBDT05GSUdfTE9DS19T
VEFUIGlzIG5vdCBzZXQNCiMgQ09ORklHX0RFQlVHX0FUT01JQ19TTEVFUCBp
cyBub3Qgc2V0DQojIENPTkZJR19ERUJVR19MT0NLSU5HX0FQSV9TRUxGVEVT
VFMgaXMgbm90IHNldA0KIyBDT05GSUdfREVCVUdfU1RBQ0tfVVNBR0UgaXMg
bm90IHNldA0KIyBDT05GSUdfREVCVUdfS09CSkVDVCBpcyBub3Qgc2V0DQoj
IENPTkZJR19ERUJVR19ISUdITUVNIGlzIG5vdCBzZXQNCkNPTkZJR19ERUJV
R19CVUdWRVJCT1NFPXkNCiMgQ09ORklHX0RFQlVHX0lORk8gaXMgbm90IHNl
dA0KIyBDT05GSUdfREVCVUdfVk0gaXMgbm90IHNldA0KIyBDT05GSUdfREVC
VUdfV1JJVEVDT1VOVCBpcyBub3Qgc2V0DQojIENPTkZJR19ERUJVR19NRU1P
UllfSU5JVCBpcyBub3Qgc2V0DQojIENPTkZJR19ERUJVR19MSVNUIGlzIG5v
dCBzZXQNCiMgQ09ORklHX1RFU1RfTElTVF9TT1JUIGlzIG5vdCBzZXQNCiMg
Q09ORklHX0RFQlVHX1NHIGlzIG5vdCBzZXQNCiMgQ09ORklHX0RFQlVHX05P
VElGSUVSUyBpcyBub3Qgc2V0DQojIENPTkZJR19ERUJVR19DUkVERU5USUFM
UyBpcyBub3Qgc2V0DQojIENPTkZJR19CT09UX1BSSU5US19ERUxBWSBpcyBu
b3Qgc2V0DQojIENPTkZJR19SQ1VfVE9SVFVSRV9URVNUIGlzIG5vdCBzZXQN
CiMgQ09ORklHX1JDVV9UUkFDRSBpcyBub3Qgc2V0DQojIENPTkZJR19CQUNL
VFJBQ0VfU0VMRl9URVNUIGlzIG5vdCBzZXQNCiMgQ09ORklHX0RFQlVHX0JM
T0NLX0VYVF9ERVZUIGlzIG5vdCBzZXQNCiMgQ09ORklHX0RFQlVHX0ZPUkNF
X1dFQUtfUEVSX0NQVSBpcyBub3Qgc2V0DQojIENPTkZJR19GQVVMVF9JTkpF
Q1RJT04gaXMgbm90IHNldA0KIyBDT05GSUdfTEFURU5DWVRPUCBpcyBub3Qg
c2V0DQojIENPTkZJR19ERUJVR19QQUdFQUxMT0MgaXMgbm90IHNldA0KQ09O
RklHX0hBVkVfRlVOQ1RJT05fVFJBQ0VSPXkNCkNPTkZJR19IQVZFX0ZVTkNU
SU9OX0dSQVBIX1RSQUNFUj15DQpDT05GSUdfSEFWRV9EWU5BTUlDX0ZUUkFD
RT15DQpDT05GSUdfSEFWRV9GVFJBQ0VfTUNPVU5UX1JFQ09SRD15DQpDT05G
SUdfSEFWRV9DX1JFQ09SRE1DT1VOVD15DQpDT05GSUdfVFJBQ0lOR19TVVBQ
T1JUPXkNCkNPTkZJR19GVFJBQ0U9eQ0KIyBDT05GSUdfRlVOQ1RJT05fVFJB
Q0VSIGlzIG5vdCBzZXQNCiMgQ09ORklHX0lSUVNPRkZfVFJBQ0VSIGlzIG5v
dCBzZXQNCiMgQ09ORklHX1NDSEVEX1RSQUNFUiBpcyBub3Qgc2V0DQojIENP
TkZJR19FTkFCTEVfREVGQVVMVF9UUkFDRVJTIGlzIG5vdCBzZXQNCkNPTkZJ
R19CUkFOQ0hfUFJPRklMRV9OT05FPXkNCiMgQ09ORklHX1BST0ZJTEVfQU5O
T1RBVEVEX0JSQU5DSEVTIGlzIG5vdCBzZXQNCiMgQ09ORklHX1BST0ZJTEVf
QUxMX0JSQU5DSEVTIGlzIG5vdCBzZXQNCiMgQ09ORklHX1NUQUNLX1RSQUNF
UiBpcyBub3Qgc2V0DQojIENPTkZJR19CTEtfREVWX0lPX1RSQUNFIGlzIG5v
dCBzZXQNCiMgQ09ORklHX1BST0JFX0VWRU5UUyBpcyBub3Qgc2V0DQojIENP
TkZJR19ETUFfQVBJX0RFQlVHIGlzIG5vdCBzZXQNCiMgQ09ORklHX0FUT01J
QzY0X1NFTEZURVNUIGlzIG5vdCBzZXQNCiMgQ09ORklHX1NBTVBMRVMgaXMg
bm90IHNldA0KQ09ORklHX0hBVkVfQVJDSF9LR0RCPXkNCiMgQ09ORklHX0tH
REIgaXMgbm90IHNldA0KIyBDT05GSUdfVEVTVF9LU1RSVE9YIGlzIG5vdCBz
ZXQNCiMgQ09ORklHX1NUUklDVF9ERVZNRU0gaXMgbm90IHNldA0KQ09ORklH
X0FSTV9VTldJTkQ9eQ0KQ09ORklHX0RFQlVHX1VTRVI9eQ0KQ09ORklHX0RF
QlVHX0xMPXkNCkNPTkZJR19ERUJVR19MTF9VQVJUX05PTkU9eQ0KIyBDT05G
SUdfREVCVUdfSUNFRENDIGlzIG5vdCBzZXQNCiMgQ09ORklHX0RFQlVHX1NF
TUlIT1NUSU5HIGlzIG5vdCBzZXQNCkNPTkZJR19FQVJMWV9QUklOVEs9eQ0K
IyBDT05GSUdfT0NfRVRNIGlzIG5vdCBzZXQNCg0KIw0KIyBTZWN1cml0eSBv
cHRpb25zDQojDQojIENPTkZJR19LRVlTIGlzIG5vdCBzZXQNCiMgQ09ORklH
X1NFQ1VSSVRZX0RNRVNHX1JFU1RSSUNUIGlzIG5vdCBzZXQNCiMgQ09ORklH
X1NFQ1VSSVRZIGlzIG5vdCBzZXQNCiMgQ09ORklHX1NFQ1VSSVRZRlMgaXMg
bm90IHNldA0KQ09ORklHX0RFRkFVTFRfU0VDVVJJVFlfREFDPXkNCkNPTkZJ
R19ERUZBVUxUX1NFQ1VSSVRZPSIiDQpDT05GSUdfQ1JZUFRPPXkNCg0KIw0K
IyBDcnlwdG8gY29yZSBvciBoZWxwZXINCiMNCkNPTkZJR19DUllQVE9fQUxH
QVBJPXkNCkNPTkZJR19DUllQVE9fQUxHQVBJMj15DQpDT05GSUdfQ1JZUFRP
X0FFQUQyPXkNCkNPTkZJR19DUllQVE9fQkxLQ0lQSEVSPXkNCkNPTkZJR19D
UllQVE9fQkxLQ0lQSEVSMj15DQpDT05GSUdfQ1JZUFRPX0hBU0g9eQ0KQ09O
RklHX0NSWVBUT19IQVNIMj15DQpDT05GSUdfQ1JZUFRPX1JORz15DQpDT05G
SUdfQ1JZUFRPX1JORzI9eQ0KQ09ORklHX0NSWVBUT19QQ09NUDI9eQ0KQ09O
RklHX0NSWVBUT19NQU5BR0VSPXkNCkNPTkZJR19DUllQVE9fTUFOQUdFUjI9
eQ0KIyBDT05GSUdfQ1JZUFRPX1VTRVIgaXMgbm90IHNldA0KQ09ORklHX0NS
WVBUT19NQU5BR0VSX0RJU0FCTEVfVEVTVFM9eQ0KIyBDT05GSUdfQ1JZUFRP
X0dGMTI4TVVMIGlzIG5vdCBzZXQNCiMgQ09ORklHX0NSWVBUT19OVUxMIGlz
IG5vdCBzZXQNCkNPTkZJR19DUllQVE9fV09SS1FVRVVFPXkNCiMgQ09ORklH
X0NSWVBUT19DUllQVEQgaXMgbm90IHNldA0KIyBDT05GSUdfQ1JZUFRPX0FV
VEhFTkMgaXMgbm90IHNldA0KDQojDQojIEF1dGhlbnRpY2F0ZWQgRW5jcnlw
dGlvbiB3aXRoIEFzc29jaWF0ZWQgRGF0YQ0KIw0KIyBDT05GSUdfQ1JZUFRP
X0NDTSBpcyBub3Qgc2V0DQojIENPTkZJR19DUllQVE9fR0NNIGlzIG5vdCBz
ZXQNCiMgQ09ORklHX0NSWVBUT19TRVFJViBpcyBub3Qgc2V0DQoNCiMNCiMg
QmxvY2sgbW9kZXMNCiMNCkNPTkZJR19DUllQVE9fQ0JDPXkNCiMgQ09ORklH
X0NSWVBUT19DVFIgaXMgbm90IHNldA0KIyBDT05GSUdfQ1JZUFRPX0NUUyBp
cyBub3Qgc2V0DQojIENPTkZJR19DUllQVE9fRUNCIGlzIG5vdCBzZXQNCiMg
Q09ORklHX0NSWVBUT19MUlcgaXMgbm90IHNldA0KIyBDT05GSUdfQ1JZUFRP
X1BDQkMgaXMgbm90IHNldA0KIyBDT05GSUdfQ1JZUFRPX1hUUyBpcyBub3Qg
c2V0DQoNCiMNCiMgSGFzaCBtb2Rlcw0KIw0KIyBDT05GSUdfQ1JZUFRPX0hN
QUMgaXMgbm90IHNldA0KIyBDT05GSUdfQ1JZUFRPX1hDQkMgaXMgbm90IHNl
dA0KIyBDT05GSUdfQ1JZUFRPX1ZNQUMgaXMgbm90IHNldA0KDQojDQojIERp
Z2VzdA0KIw0KQ09ORklHX0NSWVBUT19DUkMzMkM9eQ0KIyBDT05GSUdfQ1JZ
UFRPX0dIQVNIIGlzIG5vdCBzZXQNCiMgQ09ORklHX0NSWVBUT19NRDQgaXMg
bm90IHNldA0KQ09ORklHX0NSWVBUT19NRDU9eQ0KIyBDT05GSUdfQ1JZUFRP
X01JQ0hBRUxfTUlDIGlzIG5vdCBzZXQNCiMgQ09ORklHX0NSWVBUT19STUQx
MjggaXMgbm90IHNldA0KIyBDT05GSUdfQ1JZUFRPX1JNRDE2MCBpcyBub3Qg
c2V0DQojIENPTkZJR19DUllQVE9fUk1EMjU2IGlzIG5vdCBzZXQNCiMgQ09O
RklHX0NSWVBUT19STUQzMjAgaXMgbm90IHNldA0KIyBDT05GSUdfQ1JZUFRP
X1NIQTEgaXMgbm90IHNldA0KIyBDT05GSUdfQ1JZUFRPX1NIQTI1NiBpcyBu
b3Qgc2V0DQojIENPTkZJR19DUllQVE9fU0hBNTEyIGlzIG5vdCBzZXQNCiMg
Q09ORklHX0NSWVBUT19UR1IxOTIgaXMgbm90IHNldA0KIyBDT05GSUdfQ1JZ
UFRPX1dQNTEyIGlzIG5vdCBzZXQNCg0KIw0KIyBDaXBoZXJzDQojDQpDT05G
SUdfQ1JZUFRPX0FFUz15DQojIENPTkZJR19DUllQVE9fQU5VQklTIGlzIG5v
dCBzZXQNCiMgQ09ORklHX0NSWVBUT19BUkM0IGlzIG5vdCBzZXQNCiMgQ09O
RklHX0NSWVBUT19CTE9XRklTSCBpcyBub3Qgc2V0DQojIENPTkZJR19DUllQ
VE9fQ0FNRUxMSUEgaXMgbm90IHNldA0KIyBDT05GSUdfQ1JZUFRPX0NBU1Q1
IGlzIG5vdCBzZXQNCiMgQ09ORklHX0NSWVBUT19DQVNUNiBpcyBub3Qgc2V0
DQpDT05GSUdfQ1JZUFRPX0RFUz15DQojIENPTkZJR19DUllQVE9fRkNSWVBU
IGlzIG5vdCBzZXQNCiMgQ09ORklHX0NSWVBUT19LSEFaQUQgaXMgbm90IHNl
dA0KIyBDT05GSUdfQ1JZUFRPX1NBTFNBMjAgaXMgbm90IHNldA0KIyBDT05G
SUdfQ1JZUFRPX1NFRUQgaXMgbm90IHNldA0KIyBDT05GSUdfQ1JZUFRPX1NF
UlBFTlQgaXMgbm90IHNldA0KIyBDT05GSUdfQ1JZUFRPX1RFQSBpcyBub3Qg
c2V0DQojIENPTkZJR19DUllQVE9fVFdPRklTSCBpcyBub3Qgc2V0DQoNCiMN
CiMgQ29tcHJlc3Npb24NCiMNCiMgQ09ORklHX0NSWVBUT19ERUZMQVRFIGlz
IG5vdCBzZXQNCiMgQ09ORklHX0NSWVBUT19aTElCIGlzIG5vdCBzZXQNCiMg
Q09ORklHX0NSWVBUT19MWk8gaXMgbm90IHNldA0KDQojDQojIFJhbmRvbSBO
dW1iZXIgR2VuZXJhdGlvbg0KIw0KQ09ORklHX0NSWVBUT19BTlNJX0NQUk5H
PXkNCiMgQ09ORklHX0NSWVBUT19VU0VSX0FQSV9IQVNIIGlzIG5vdCBzZXQN
CiMgQ09ORklHX0NSWVBUT19VU0VSX0FQSV9TS0NJUEhFUiBpcyBub3Qgc2V0
DQpDT05GSUdfQ1JZUFRPX0hXPXkNCiMgQ09ORklHX0JJTkFSWV9QUklOVEYg
aXMgbm90IHNldA0KDQojDQojIExpYnJhcnkgcm91dGluZXMNCiMNCkNPTkZJ
R19CSVRSRVZFUlNFPXkNCkNPTkZJR19HRU5FUklDX1BDSV9JT01BUD15DQpD
T05GSUdfR0VORVJJQ19JTz15DQojIENPTkZJR19DUkNfQ0NJVFQgaXMgbm90
IHNldA0KQ09ORklHX0NSQzE2PXkNCiMgQ09ORklHX0NSQ19UMTBESUYgaXMg
bm90IHNldA0KIyBDT05GSUdfQ1JDX0lUVV9UIGlzIG5vdCBzZXQNCkNPTkZJ
R19DUkMzMj15DQojIENPTkZJR19DUkMzMl9TRUxGVEVTVCBpcyBub3Qgc2V0
DQpDT05GSUdfQ1JDMzJfU0xJQ0VCWTg9eQ0KIyBDT05GSUdfQ1JDMzJfU0xJ
Q0VCWTQgaXMgbm90IHNldA0KIyBDT05GSUdfQ1JDMzJfU0FSV0FURSBpcyBu
b3Qgc2V0DQojIENPTkZJR19DUkMzMl9CSVQgaXMgbm90IHNldA0KIyBDT05G
SUdfQ1JDNyBpcyBub3Qgc2V0DQojIENPTkZJR19MSUJDUkMzMkMgaXMgbm90
IHNldA0KIyBDT05GSUdfQ1JDOCBpcyBub3Qgc2V0DQpDT05GSUdfWkxJQl9J
TkZMQVRFPXkNCkNPTkZJR19aTElCX0RFRkxBVEU9eQ0KQ09ORklHX1haX0RF
Qz15DQpDT05GSUdfWFpfREVDX1g4Nj15DQpDT05GSUdfWFpfREVDX1BPV0VS
UEM9eQ0KQ09ORklHX1haX0RFQ19JQTY0PXkNCkNPTkZJR19YWl9ERUNfQVJN
PXkNCkNPTkZJR19YWl9ERUNfQVJNVEhVTUI9eQ0KQ09ORklHX1haX0RFQ19T
UEFSQz15DQpDT05GSUdfWFpfREVDX0JDSj15DQojIENPTkZJR19YWl9ERUNf
VEVTVCBpcyBub3Qgc2V0DQpDT05GSUdfREVDT01QUkVTU19HWklQPXkNCkNP
TkZJR19IQVNfSU9NRU09eQ0KQ09ORklHX0hBU19ETUE9eQ0KQ09ORklHX0RR
TD15DQpDT05GSUdfTkxBVFRSPXkNCiMgQ09ORklHX0FWRVJBR0UgaXMgbm90
IHNldA0KIyBDT05GSUdfQ09SRElDIGlzIG5vdCBzZXQNCiMgQ09ORklHX0RE
UiBpcyBub3Qgc2V0DQo=

--1342847746-1131063668-1344432587=:21096
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--1342847746-1131063668-1344432587=:21096--


From xen-devel-bounces@lists.xen.org Wed Aug 08 13:48:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 13:48:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz6bv-0007zJ-TK; Wed, 08 Aug 2012 13:47:43 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Sz6bt-0007zB-Pp
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 13:47:42 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1344433655!11438295!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg1NTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6680 invoked from network); 8 Aug 2012 13:47:35 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Aug 2012 13:47:35 -0000
X-IronPort-AV: E=Sophos;i="4.77,733,1336348800"; d="scan'208";a="13910139"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Aug 2012 13:47:34 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 8 Aug 2012 14:47:34 +0100
Date: Wed, 8 Aug 2012 14:47:10 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Alexey Klimov <trashsee@gmail.com>
In-Reply-To: <CAPny0sqtgo7MvndfLN6JExkQMP40ro1FT6Edc6OKWt0KreNYnQ@mail.gmail.com>
Message-ID: <alpine.DEB.2.02.1208081421570.4645@kaball.uk.xensource.com>
References: <CAPny0soyuQkUmAU+kYrBvG+w_jxKUsY8YxCrxBA=7cwmdwV6Xw@mail.gmail.com>
	<alpine.DEB.2.02.1207301934540.4645@kaball.uk.xensource.com>
	<CAPny0soV4Z0R_PADtjn4JpCFMPkU-m+O4vBWA+DJRb9GVV36=g@mail.gmail.com>
	<CAPny0sqtgo7MvndfLN6JExkQMP40ro1FT6Edc6OKWt0KreNYnQ@mail.gmail.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Content-Type: multipart/mixed;
	boundary="1342847746-1131063668-1344432587=:21096"
Content-ID: <alpine.DEB.2.02.1208081432000.21096@kaball.uk.xensource.com>
Cc: xen-devel <xen-devel@lists.xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [questions] Dom0/DomU on ARM under Xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--1342847746-1131063668-1344432587=:21096
Content-Type: text/plain; charset="US-ASCII"
Content-ID: <alpine.DEB.2.02.1208081432001.21096@kaball.uk.xensource.com>

On Wed, 8 Aug 2012, Alexey Klimov wrote:
> 2012/8/1 Alexey Klimov <trashsee@gmail.com>:
> > And I saw that Ian has set up a git repository for Xen with the latest
> > patches for ARM, so I'll try to use this repository.
> 
> Hello Stefano and Ian,
> 
> I used Ian's new xen-unstable git repository
> (git://xenbits.xen.org/people/ianc/xen-unstable.git arm-for-4.) and
> Stefano's Linux kernel git repository (
> git://xenbits.xen.org/people/sstabellini/linux-pvhvm.git
> 3.5-rc7-arm-2) with additional patches:
> 
> - for the Linux kernel: "xen/events: fix unmask_evtchn for PV on HVM guests",
> - the "ARM hypercall ABI: 64 bit ready" patch series for Xen; I also
> attached a few versions of xcbuild (Ian's early version and the latest one).
> After applying the 64-bit-ready patches I observed the following errors
> when building Xen and the tools:
> 
> 1)
> for i in public/callback.h public/dom0_ops.h public/elfnote.h
> public/event_channel.h public/features.h public/grant_table.h
> public/kexec.h public/mem_event.h public/memory.h public/nmi.h
> public/physdev.h public/platform.h public/sched.h public/tmem.h
> public/trace.h public/vcpu.h public/version.h public/xen-compat.h
> public/xen.h public/xencomm.h public/xenoprof.h public/hvm/e820.h
> public/hvm/hvm_info_table.h public/hvm/hvm_op.h public/hvm/ioreq.h
> public/hvm/params.h public/io/blkif.h public/io/console.h
> public/io/fbif.h public/io/fsif.h public/io/kbdif.h
> public/io/libxenvchan.h public/io/netif.h public/io/pciif.h
> public/io/protocols.h public/io/ring.h public/io/tpmif.h
> public/io/usbif.h public/io/vscsiif.h public/io/xenbus.h
> public/io/xs_wire.h; do gcc -ansi -include stdint.h -Wall -W -Werror
> -S -o /dev/null -xc $i || exit 1; echo $i; done >headers.chk.new
> public/version.h:61:5: error: unknown type name 'xen_ulong_t'
> make[3]: *** [headers.chk] Error 1
> make[3]: Leaving directory `/src/xen/xen/include'
> 
> I fixed it by inserting #include "arch-arm.h" in xen/include/public/version.h.

I think that this is a legitimate error; I wasn't seeing it because I am
cross-compiling.


> 2)
> building 'xc' extension
> gcc -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall
> -Wstrict-prototypes -O1 -fno-omit-frame-pointer -marm -g
> -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes
> -Wdeclaration-after-statement -Wno-unused-but-set-variable
> -D__XEN_TOOLS__ -MMD -MF .build.d -D_LARGEFILE_SOURCE
> -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE
> -fno-optimize-sibling-calls -fPIC -I../../tools/include
> -I../../tools/libxc -Ixen/lowlevel/xc -I/usr/include/python2.7 -c
> xen/lowlevel/xc/xc.c -o
> build/temp.linux-armv7l-2.7/xen/lowlevel/xc/xc.o -fno-strict-aliasing
> -Werror
> xen/lowlevel/xc/xc.c: In function 'pyxc_xeninfo':
> xen/lowlevel/xc/xc.c:1442:5: error: format '%lx' expects argument of
> type 'long unsigned int', but argument 4 has type 'xen_ulong_t'
> [-Werror=format]
> xen/lowlevel/xc/xc.c:1442:5: error: format '%lx' expects argument of
> type 'long unsigned int', but argument 4 has type 'xen_ulong_t'
> [-Werror=format]
> cc1: all warnings being treated as errors
> 
> I just commented out the snprintf(str, sizeof(str), "virt_start=0x%lx",
> p_parms.virt_start); call in xc.c.

That is another legitimate error; I'll fix it in the next version
of the patch series. Thanks for testing!


> Then it compiled and I tried to run a DomU. It looks like the allocation
> of console_pfn and xenstore_pfn in alloc_magic_pages() in xc_dom_arm.c
> is what causes real pain for me. With this allocation/patch, Xen prints
> "bad p2m lookup" messages before booting the DomU:
> (XEN) bad p2m lookup
> (XEN) dom1 IPA 0x0000000090000000
> (XEN) P2M @ 02ffcac0 mfn:0xffe56
> (XEN) 1ST[0x2] = 0x00000000f3f686ff
> (XEN) 2ND[0x80] = 0x0000000000000000
> (XEN) bad p2m lookup
> (XEN) dom1 IPA 0x0000000090001000
> (XEN) P2M @ 02ffcac0 mfn:0xffe56
> (XEN) 1ST[0x2] = 0x00000000f3f686ff
> (XEN) 2ND[0x80] = 0x0000000000000000
> (XEN) bad p2m lookup
> (XEN) dom1 IPA 0x0000000090001000
> (XEN) P2M @ 02ffcac0 mfn:0xffe56
> (XEN) 1ST[0x2] = 0x00000000f3f686ff
> (XEN) 2ND[0x80] = 0x0000000000000000
> 
> and then everything hangs with a translation fault:
> 
> (XEN) DOM1: Grant tables using version 1 layout.
> (XEN) DOM1: Grant table initialized
> (XEN) DOM1: NET: Registered protocol family 16
> (XEN) Guest data abort: Translation fault at level 2
> (XEN)     gva=88808804
> (XEN)     gpa=0000000090001804
> (XEN)     size=2 sign=0 write=0 reg=2
> (XEN)     eat=0 cm=0 s1ptw=0 dfsc=6
> (XEN) dom1 IPA 0x0000000090001804
> (XEN) P2M @ 02ffcac0 mfn:0xffe56
> (XEN) 1ST[0x2] = 0x00000000f3f686ff
> (XEN) 2ND[0x80] = 0x0000000000000000
> 
> A detailed log is attached.
> OK, I moved the allocation of the console and xenstore pages back into
> arch_setup_meminit(), as in
> http://lists.xen.org/archives/html/xen-devel/2012-06/msg01340.html, and
> then added the kernel parameter keep_bootcon to the DomU device tree
> file; everything booted up to "unable to open an initial console" and
> failed to mount the rootfs.

You are probably missing Ian's fix to alloc_magic_pages:

http://marc.info/?l=xen-devel&m=134398933530124
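As an aside, the keep_bootcon parameter mentioned above is an ordinary kernel command-line option, so in a flattened device tree it would go into the chosen node's bootargs. The fragment below is purely illustrative; the console and root device names are assumptions, not taken from the attached files.

```dts
/* Illustrative DomU device tree fragment (hypothetical values). */
chosen {
	bootargs = "console=hvc0 keep_bootcon root=/dev/xvda rw";
};
```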


> I still haven't learned how to deal with xenstore, hvc0, and xvda, or
> how to boot from an initramfs on ARM using xcbuild, but I'll try to
> understand and learn this :) So maybe it's worth investigating, or
> taking a deeper look at, why add_to_physmap failed in xcbuild and why
> there is a bad p2m lookup in Xen. The log is attached.
> 
> Is there any difference between your Dom0 .config and DomU .config? Did
> you just attach the initrd using an xc_dom_ramdisk_file() call in
> xcbuild? Any special configuration of the Xen console/xenstore?

I am just using one config, attached.


> Well, I don't claim that I'm doing everything correctly, but I tried
> to run it, fixing/commenting out as much as I could. Could you please
> help if you have time? I can test new changes and send other useful
> info/logs.

Looking at the guest data abort that you are getting, I think that you
didn't update the dts and dtsi to the latest version. They are attached
to the 00/23 email "Introduce Xen support on ARM" for the Linux kernel.
--1342847746-1131063668-1344432587=:21096
Content-Type: text/plain; charset="US-ASCII"; name="config-linux"
Content-Transfer-Encoding: BASE64
Content-ID: <alpine.DEB.2.02.1208081429470.21096@kaball.uk.xensource.com>
Content-Description: 
Content-Disposition: attachment; filename="config-linux"

Iw0KIyBBdXRvbWF0aWNhbGx5IGdlbmVyYXRlZCBmaWxlOyBETyBOT1QgRURJ
VC4NCiMgTGludXgvYXJtIDMuNS4wLXJjNyBLZXJuZWwgQ29uZmlndXJhdGlv
bg0KIw0KQ09ORklHX0FSTT15DQpDT05GSUdfU1lTX1NVUFBPUlRTX0FQTV9F
TVVMQVRJT049eQ0KQ09ORklHX0hBVkVfUFJPQ19DUFU9eQ0KQ09ORklHX05P
X0lPUE9SVD15DQpDT05GSUdfU1RBQ0tUUkFDRV9TVVBQT1JUPXkNCkNPTkZJ
R19IQVZFX0xBVEVOQ1lUT1BfU1VQUE9SVD15DQpDT05GSUdfTE9DS0RFUF9T
VVBQT1JUPXkNCkNPTkZJR19UUkFDRV9JUlFGTEFHU19TVVBQT1JUPXkNCkNP
TkZJR19SV1NFTV9HRU5FUklDX1NQSU5MT0NLPXkNCkNPTkZJR19HRU5FUklD
X0hXRUlHSFQ9eQ0KQ09ORklHX0dFTkVSSUNfQ0FMSUJSQVRFX0RFTEFZPXkN
CkNPTkZJR19ORUVEX0RNQV9NQVBfU1RBVEU9eQ0KQ09ORklHX1ZFQ1RPUlNf
QkFTRT0weGZmZmYwMDAwDQpDT05GSUdfQVJNX1BBVENIX1BIWVNfVklSVD15
DQpDT05GSUdfR0VORVJJQ19CVUc9eQ0KQ09ORklHX0RFRkNPTkZJR19MSVNU
PSIvbGliL21vZHVsZXMvJFVOQU1FX1JFTEVBU0UvLmNvbmZpZyINCkNPTkZJ
R19IQVZFX0lSUV9XT1JLPXkNCg0KIw0KIyBHZW5lcmFsIHNldHVwDQojDQpD
T05GSUdfRVhQRVJJTUVOVEFMPXkNCkNPTkZJR19CUk9LRU5fT05fU01QPXkN
CkNPTkZJR19JTklUX0VOVl9BUkdfTElNSVQ9MzINCkNPTkZJR19DUk9TU19D
T01QSUxFPSIiDQpDT05GSUdfTE9DQUxWRVJTSU9OPSIiDQojIENPTkZJR19M
T0NBTFZFUlNJT05fQVVUTyBpcyBub3Qgc2V0DQpDT05GSUdfSEFWRV9LRVJO
RUxfR1pJUD15DQpDT05GSUdfSEFWRV9LRVJORUxfTFpNQT15DQpDT05GSUdf
SEFWRV9LRVJORUxfWFo9eQ0KQ09ORklHX0hBVkVfS0VSTkVMX0xaTz15DQpD
T05GSUdfS0VSTkVMX0daSVA9eQ0KIyBDT05GSUdfS0VSTkVMX0xaTUEgaXMg
bm90IHNldA0KIyBDT05GSUdfS0VSTkVMX1haIGlzIG5vdCBzZXQNCiMgQ09O
RklHX0tFUk5FTF9MWk8gaXMgbm90IHNldA0KQ09ORklHX0RFRkFVTFRfSE9T
VE5BTUU9Iihub25lKSINCkNPTkZJR19TV0FQPXkNCkNPTkZJR19TWVNWSVBD
PXkNCkNPTkZJR19TWVNWSVBDX1NZU0NUTD15DQojIENPTkZJR19QT1NJWF9N
UVVFVUUgaXMgbm90IHNldA0KIyBDT05GSUdfQlNEX1BST0NFU1NfQUNDVCBp
cyBub3Qgc2V0DQojIENPTkZJR19GSEFORExFIGlzIG5vdCBzZXQNCiMgQ09O
RklHX1RBU0tTVEFUUyBpcyBub3Qgc2V0DQojIENPTkZJR19BVURJVCBpcyBu
b3Qgc2V0DQpDT05GSUdfSEFWRV9HRU5FUklDX0hBUkRJUlFTPXkNCg0KIw0K
IyBJUlEgc3Vic3lzdGVtDQojDQpDT05GSUdfR0VORVJJQ19IQVJESVJRUz15
DQpDT05GSUdfR0VORVJJQ19JUlFfUFJPQkU9eQ0KQ09ORklHX0dFTkVSSUNf
SVJRX1NIT1c9eQ0KQ09ORklHX0hBUkRJUlFTX1NXX1JFU0VORD15DQpDT05G
SUdfSVJRX0RPTUFJTj15DQpDT05GSUdfS1RJTUVfU0NBTEFSPXkNCkNPTkZJ
R19HRU5FUklDX0NMT0NLRVZFTlRTPXkNCkNPTkZJR19HRU5FUklDX0NMT0NL
RVZFTlRTX0JVSUxEPXkNCg0KIw0KIyBUaW1lcnMgc3Vic3lzdGVtDQojDQpD
T05GSUdfVElDS19PTkVTSE9UPXkNCkNPTkZJR19OT19IWj15DQpDT05GSUdf
SElHSF9SRVNfVElNRVJTPXkNCg0KIw0KIyBSQ1UgU3Vic3lzdGVtDQojDQpD
T05GSUdfVElOWV9SQ1U9eQ0KIyBDT05GSUdfUFJFRU1QVF9SQ1UgaXMgbm90
IHNldA0KIyBDT05GSUdfVFJFRV9SQ1VfVFJBQ0UgaXMgbm90IHNldA0KIyBD
T05GSUdfSUtDT05GSUcgaXMgbm90IHNldA0KQ09ORklHX0xPR19CVUZfU0hJ
RlQ9MTQNCiMgQ09ORklHX0NHUk9VUFMgaXMgbm90IHNldA0KIyBDT05GSUdf
Q0hFQ0tQT0lOVF9SRVNUT1JFIGlzIG5vdCBzZXQNCkNPTkZJR19OQU1FU1BB
Q0VTPXkNCkNPTkZJR19VVFNfTlM9eQ0KQ09ORklHX0lQQ19OUz15DQpDT05G
SUdfUElEX05TPXkNCkNPTkZJR19ORVRfTlM9eQ0KIyBDT05GSUdfU0NIRURf
QVVUT0dST1VQIGlzIG5vdCBzZXQNCiMgQ09ORklHX1NZU0ZTX0RFUFJFQ0FU
RUQgaXMgbm90IHNldA0KIyBDT05GSUdfUkVMQVkgaXMgbm90IHNldA0KQ09O
RklHX0JMS19ERVZfSU5JVFJEPXkNCkNPTkZJR19JTklUUkFNRlNfU09VUkNF
PSIiDQpDT05GSUdfUkRfR1pJUD15DQojIENPTkZJR19SRF9CWklQMiBpcyBu
b3Qgc2V0DQojIENPTkZJR19SRF9MWk1BIGlzIG5vdCBzZXQNCiMgQ09ORklH
X1JEX1haIGlzIG5vdCBzZXQNCiMgQ09ORklHX1JEX0xaTyBpcyBub3Qgc2V0
DQpDT05GSUdfQ0NfT1BUSU1JWkVfRk9SX1NJWkU9eQ0KQ09ORklHX1NZU0NU
TD15DQpDT05GSUdfQU5PTl9JTk9ERVM9eQ0KQ09ORklHX0VYUEVSVD15DQpD
T05GSUdfVUlEMTY9eQ0KIyBDT05GSUdfU1lTQ1RMX1NZU0NBTEwgaXMgbm90
IHNldA0KQ09ORklHX0tBTExTWU1TPXkNCiMgQ09ORklHX0tBTExTWU1TX0FM
TCBpcyBub3Qgc2V0DQpDT05GSUdfSE9UUExVRz15DQpDT05GSUdfUFJJTlRL
PXkNCkNPTkZJR19CVUc9eQ0KQ09ORklHX0VMRl9DT1JFPXkNCkNPTkZJR19C
QVNFX0ZVTEw9eQ0KQ09ORklHX0ZVVEVYPXkNCkNPTkZJR19FUE9MTD15DQpD
T05GSUdfU0lHTkFMRkQ9eQ0KQ09ORklHX1RJTUVSRkQ9eQ0KQ09ORklHX0VW
RU5URkQ9eQ0KQ09ORklHX1NITUVNPXkNCkNPTkZJR19BSU89eQ0KIyBDT05G
SUdfRU1CRURERUQgaXMgbm90IHNldA0KQ09ORklHX0hBVkVfUEVSRl9FVkVO
VFM9eQ0KQ09ORklHX1BFUkZfVVNFX1ZNQUxMT0M9eQ0KDQojDQojIEtlcm5l
bCBQZXJmb3JtYW5jZSBFdmVudHMgQW5kIENvdW50ZXJzDQojDQojIENPTkZJ
R19QRVJGX0VWRU5UUyBpcyBub3Qgc2V0DQpDT05GSUdfVk1fRVZFTlRfQ09V
TlRFUlM9eQ0KQ09ORklHX0NPTVBBVF9CUks9eQ0KQ09ORklHX1NMQUI9eQ0K
IyBDT05GSUdfU0xVQiBpcyBub3Qgc2V0DQojIENPTkZJR19TTE9CIGlzIG5v
dCBzZXQNCiMgQ09ORklHX1BST0ZJTElORyBpcyBub3Qgc2V0DQpDT05GSUdf
SEFWRV9PUFJPRklMRT15DQojIENPTkZJR19KVU1QX0xBQkVMIGlzIG5vdCBz
ZXQNCkNPTkZJR19IQVZFX0tQUk9CRVM9eQ0KQ09ORklHX0hBVkVfS1JFVFBS
T0JFUz15DQpDT05GSUdfSEFWRV9BUkNIX1RSQUNFSE9PSz15DQpDT05GSUdf
SEFWRV9ETUFfQVRUUlM9eQ0KQ09ORklHX0hBVkVfRE1BX0NPTlRJR1VPVVM9
eQ0KQ09ORklHX0dFTkVSSUNfU01QX0lETEVfVEhSRUFEPXkNCkNPTkZJR19I
QVZFX1JFR1NfQU5EX1NUQUNLX0FDQ0VTU19BUEk9eQ0KQ09ORklHX0hBVkVf
Q0xLPXkNCkNPTkZJR19IQVZFX0RNQV9BUElfREVCVUc9eQ0KQ09ORklHX0hB
VkVfQVJDSF9KVU1QX0xBQkVMPXkNCg0KIw0KIyBHQ09WLWJhc2VkIGtlcm5l
bCBwcm9maWxpbmcNCiMNCkNPTkZJR19IQVZFX0dFTkVSSUNfRE1BX0NPSEVS
RU5UPXkNCkNPTkZJR19TTEFCSU5GTz15DQpDT05GSUdfUlRfTVVURVhFUz15
DQpDT05GSUdfQkFTRV9TTUFMTD0wDQojIENPTkZJR19NT0RVTEVTIGlzIG5v
dCBzZXQNCkNPTkZJR19CTE9DSz15DQpDT05GSUdfTEJEQUY9eQ0KQ09ORklH
X0JMS19ERVZfQlNHPXkNCiMgQ09ORklHX0JMS19ERVZfQlNHTElCIGlzIG5v
dCBzZXQNCiMgQ09ORklHX0JMS19ERVZfSU5URUdSSVRZIGlzIG5vdCBzZXQN
Cg0KIw0KIyBQYXJ0aXRpb24gVHlwZXMNCiMNCkNPTkZJR19QQVJUSVRJT05f
QURWQU5DRUQ9eQ0KIyBDT05GSUdfQUNPUk5fUEFSVElUSU9OIGlzIG5vdCBz
ZXQNCiMgQ09ORklHX09TRl9QQVJUSVRJT04gaXMgbm90IHNldA0KIyBDT05G
SUdfQU1JR0FfUEFSVElUSU9OIGlzIG5vdCBzZXQNCiMgQ09ORklHX0FUQVJJ
X1BBUlRJVElPTiBpcyBub3Qgc2V0DQojIENPTkZJR19NQUNfUEFSVElUSU9O
IGlzIG5vdCBzZXQNCkNPTkZJR19NU0RPU19QQVJUSVRJT049eQ0KIyBDT05G
SUdfQlNEX0RJU0tMQUJFTCBpcyBub3Qgc2V0DQojIENPTkZJR19NSU5JWF9T
VUJQQVJUSVRJT04gaXMgbm90IHNldA0KIyBDT05GSUdfU09MQVJJU19YODZf
UEFSVElUSU9OIGlzIG5vdCBzZXQNCiMgQ09ORklHX1VOSVhXQVJFX0RJU0tM
QUJFTCBpcyBub3Qgc2V0DQojIENPTkZJR19MRE1fUEFSVElUSU9OIGlzIG5v
dCBzZXQNCiMgQ09ORklHX1NHSV9QQVJUSVRJT04gaXMgbm90IHNldA0KIyBD
T05GSUdfVUxUUklYX1BBUlRJVElPTiBpcyBub3Qgc2V0DQojIENPTkZJR19T
VU5fUEFSVElUSU9OIGlzIG5vdCBzZXQNCiMgQ09ORklHX0tBUk1BX1BBUlRJ
VElPTiBpcyBub3Qgc2V0DQojIENPTkZJR19FRklfUEFSVElUSU9OIGlzIG5v
dCBzZXQNCiMgQ09ORklHX1NZU1Y2OF9QQVJUSVRJT04gaXMgbm90IHNldA0K
DQojDQojIElPIFNjaGVkdWxlcnMNCiMNCkNPTkZJR19JT1NDSEVEX05PT1A9
eQ0KQ09ORklHX0lPU0NIRURfREVBRExJTkU9eQ0KQ09ORklHX0lPU0NIRURf
Q0ZRPXkNCiMgQ09ORklHX0RFRkFVTFRfREVBRExJTkUgaXMgbm90IHNldA0K
Q09ORklHX0RFRkFVTFRfQ0ZRPXkNCiMgQ09ORklHX0RFRkFVTFRfTk9PUCBp
cyBub3Qgc2V0DQpDT05GSUdfREVGQVVMVF9JT1NDSEVEPSJjZnEiDQojIENP
TkZJR19JTkxJTkVfU1BJTl9UUllMT0NLIGlzIG5vdCBzZXQNCiMgQ09ORklH
X0lOTElORV9TUElOX1RSWUxPQ0tfQkggaXMgbm90IHNldA0KIyBDT05GSUdf
SU5MSU5FX1NQSU5fTE9DSyBpcyBub3Qgc2V0DQojIENPTkZJR19JTkxJTkVf
U1BJTl9MT0NLX0JIIGlzIG5vdCBzZXQNCiMgQ09ORklHX0lOTElORV9TUElO
X0xPQ0tfSVJRIGlzIG5vdCBzZXQNCiMgQ09ORklHX0lOTElORV9TUElOX0xP
Q0tfSVJRU0FWRSBpcyBub3Qgc2V0DQojIENPTkZJR19JTkxJTkVfU1BJTl9V
TkxPQ0tfQkggaXMgbm90IHNldA0KQ09ORklHX0lOTElORV9TUElOX1VOTE9D
S19JUlE9eQ0KIyBDT05GSUdfSU5MSU5FX1NQSU5fVU5MT0NLX0lSUVJFU1RP
UkUgaXMgbm90IHNldA0KIyBDT05GSUdfSU5MSU5FX1JFQURfVFJZTE9DSyBp
cyBub3Qgc2V0DQojIENPTkZJR19JTkxJTkVfUkVBRF9MT0NLIGlzIG5vdCBz
ZXQNCiMgQ09ORklHX0lOTElORV9SRUFEX0xPQ0tfQkggaXMgbm90IHNldA0K
IyBDT05GSUdfSU5MSU5FX1JFQURfTE9DS19JUlEgaXMgbm90IHNldA0KIyBD
T05GSUdfSU5MSU5FX1JFQURfTE9DS19JUlFTQVZFIGlzIG5vdCBzZXQNCkNP
TkZJR19JTkxJTkVfUkVBRF9VTkxPQ0s9eQ0KIyBDT05GSUdfSU5MSU5FX1JF
QURfVU5MT0NLX0JIIGlzIG5vdCBzZXQNCkNPTkZJR19JTkxJTkVfUkVBRF9V
TkxPQ0tfSVJRPXkNCiMgQ09ORklHX0lOTElORV9SRUFEX1VOTE9DS19JUlFS
RVNUT1JFIGlzIG5vdCBzZXQNCiMgQ09ORklHX0lOTElORV9XUklURV9UUllM
T0NLIGlzIG5vdCBzZXQNCiMgQ09ORklHX0lOTElORV9XUklURV9MT0NLIGlz
IG5vdCBzZXQNCiMgQ09ORklHX0lOTElORV9XUklURV9MT0NLX0JIIGlzIG5v
dCBzZXQNCiMgQ09ORklHX0lOTElORV9XUklURV9MT0NLX0lSUSBpcyBub3Qg
c2V0DQojIENPTkZJR19JTkxJTkVfV1JJVEVfTE9DS19JUlFTQVZFIGlzIG5v
dCBzZXQNCkNPTkZJR19JTkxJTkVfV1JJVEVfVU5MT0NLPXkNCiMgQ09ORklH
X0lOTElORV9XUklURV9VTkxPQ0tfQkggaXMgbm90IHNldA0KQ09ORklHX0lO
TElORV9XUklURV9VTkxPQ0tfSVJRPXkNCiMgQ09ORklHX0lOTElORV9XUklU
RV9VTkxPQ0tfSVJRUkVTVE9SRSBpcyBub3Qgc2V0DQojIENPTkZJR19NVVRF
WF9TUElOX09OX09XTkVSIGlzIG5vdCBzZXQNCkNPTkZJR19GUkVFWkVSPXkN
Cg0KIw0KIyBTeXN0ZW0gVHlwZQ0KIw0KQ09ORklHX01NVT15DQojIENPTkZJ
R19BUkNIX0lOVEVHUkFUT1IgaXMgbm90IHNldA0KIyBDT05GSUdfQVJDSF9S
RUFMVklFVyBpcyBub3Qgc2V0DQojIENPTkZJR19BUkNIX1ZFUlNBVElMRSBp
cyBub3Qgc2V0DQpDT05GSUdfQVJDSF9WRVhQUkVTUz15DQojIENPTkZJR19B
UkNIX0FUOTEgaXMgbm90IHNldA0KIyBDT05GSUdfQVJDSF9CQ01SSU5HIGlz
IG5vdCBzZXQNCiMgQ09ORklHX0FSQ0hfSElHSEJBTksgaXMgbm90IHNldA0K
IyBDT05GSUdfQVJDSF9DTFBTNzExWCBpcyBub3Qgc2V0DQojIENPTkZJR19B
UkNIX0NOUzNYWFggaXMgbm90IHNldA0KIyBDT05GSUdfQVJDSF9HRU1JTkkg
aXMgbm90IHNldA0KIyBDT05GSUdfQVJDSF9QUklNQTIgaXMgbm90IHNldA0K
IyBDT05GSUdfQVJDSF9FQlNBMTEwIGlzIG5vdCBzZXQNCiMgQ09ORklHX0FS
Q0hfRVA5M1hYIGlzIG5vdCBzZXQNCiMgQ09ORklHX0FSQ0hfRk9PVEJSSURH
RSBpcyBub3Qgc2V0DQojIENPTkZJR19BUkNIX01YQyBpcyBub3Qgc2V0DQoj
IENPTkZJR19BUkNIX01YUyBpcyBub3Qgc2V0DQojIENPTkZJR19BUkNIX05F
VFggaXMgbm90IHNldA0KIyBDT05GSUdfQVJDSF9INzIwWCBpcyBub3Qgc2V0
DQojIENPTkZJR19BUkNIX0lPUDEzWFggaXMgbm90IHNldA0KIyBDT05GSUdf
QVJDSF9JT1AzMlggaXMgbm90IHNldA0KIyBDT05GSUdfQVJDSF9JT1AzM1gg
aXMgbm90IHNldA0KIyBDT05GSUdfQVJDSF9JWFA0WFggaXMgbm90IHNldA0K
IyBDT05GSUdfQVJDSF9ET1ZFIGlzIG5vdCBzZXQNCiMgQ09ORklHX0FSQ0hf
S0lSS1dPT0QgaXMgbm90IHNldA0KIyBDT05GSUdfQVJDSF9MUEMzMlhYIGlz
IG5vdCBzZXQNCiMgQ09ORklHX0FSQ0hfTVY3OFhYMCBpcyBub3Qgc2V0DQoj
IENPTkZJR19BUkNIX09SSU9ONVggaXMgbm90IHNldA0KIyBDT05GSUdfQVJD
SF9NTVAgaXMgbm90IHNldA0KIyBDT05GSUdfQVJDSF9LUzg2OTUgaXMgbm90
IHNldA0KIyBDT05GSUdfQVJDSF9XOTBYOTAwIGlzIG5vdCBzZXQNCiMgQ09O
RklHX0FSQ0hfVEVHUkEgaXMgbm90IHNldA0KIyBDT05GSUdfQVJDSF9QSUNP
WENFTEwgaXMgbm90IHNldA0KIyBDT05GSUdfQVJDSF9QTlg0MDA4IGlzIG5v
dCBzZXQNCiMgQ09ORklHX0FSQ0hfUFhBIGlzIG5vdCBzZXQNCiMgQ09ORklH
X0FSQ0hfTVNNIGlzIG5vdCBzZXQNCiMgQ09ORklHX0FSQ0hfU0hNT0JJTEUg
aXMgbm90IHNldA0KIyBDT05GSUdfQVJDSF9SUEMgaXMgbm90IHNldA0KIyBD
T05GSUdfQVJDSF9TQTExMDAgaXMgbm90IHNldA0KIyBDT05GSUdfQVJDSF9T
M0MyNFhYIGlzIG5vdCBzZXQNCiMgQ09ORklHX0FSQ0hfUzNDNjRYWCBpcyBu
b3Qgc2V0DQojIENPTkZJR19BUkNIX1M1UDY0WDAgaXMgbm90IHNldA0KIyBD
T05GSUdfQVJDSF9TNVBDMTAwIGlzIG5vdCBzZXQNCiMgQ09ORklHX0FSQ0hf
UzVQVjIxMCBpcyBub3Qgc2V0DQojIENPTkZJR19BUkNIX0VYWU5PUyBpcyBu
b3Qgc2V0DQojIENPTkZJR19BUkNIX1NIQVJLIGlzIG5vdCBzZXQNCiMgQ09O
RklHX0FSQ0hfVTMwMCBpcyBub3Qgc2V0DQojIENPTkZJR19BUkNIX1U4NTAw
IGlzIG5vdCBzZXQNCiMgQ09ORklHX0FSQ0hfTk9NQURJSyBpcyBub3Qgc2V0
DQojIENPTkZJR19BUkNIX0RBVklOQ0kgaXMgbm90IHNldA0KIyBDT05GSUdf
QVJDSF9PTUFQIGlzIG5vdCBzZXQNCiMgQ09ORklHX1BMQVRfU1BFQVIgaXMg
bm90IHNldA0KIyBDT05GSUdfQVJDSF9WVDg1MDAgaXMgbm90IHNldA0KIyBD
T05GSUdfQVJDSF9aWU5RIGlzIG5vdCBzZXQNCg0KIw0KIyBWZXJzYXRpbGUg
RXhwcmVzcyBwbGF0Zm9ybSB0eXBlDQojDQpDT05GSUdfQVJDSF9WRVhQUkVT
U19DT1JURVhfQTVfQTlfRVJSQVRBPXkNCiMgQ09ORklHX0FSQ0hfVkVYUFJF
U1NfQ0E5WDQgaXMgbm90IHNldA0KQ09ORklHX0FSQ0hfVkVYUFJFU1NfRFQ9
eQ0KQ09ORklHX1BMQVRfVkVSU0FUSUxFX0NMQ0Q9eQ0KQ09ORklHX1BMQVRf
VkVSU0FUSUxFX1NDSEVEX0NMT0NLPXkNCkNPTkZJR19QTEFUX1ZFUlNBVElM
RT15DQpDT05GSUdfQVJNX1RJTUVSX1NQODA0PXkNCg0KIw0KIyBQcm9jZXNz
b3IgVHlwZQ0KIw0KQ09ORklHX0NQVV9WNz15DQpDT05GSUdfQ1BVXzMydjZL
PXkNCkNPTkZJR19DUFVfMzJ2Nz15DQpDT05GSUdfQ1BVX0FCUlRfRVY3PXkN
CkNPTkZJR19DUFVfUEFCUlRfVjc9eQ0KQ09ORklHX0NQVV9DQUNIRV9WNz15
DQpDT05GSUdfQ1BVX0NBQ0hFX1ZJUFQ9eQ0KQ09ORklHX0NQVV9DT1BZX1Y2
PXkNCkNPTkZJR19DUFVfVExCX1Y3PXkNCkNPTkZJR19DUFVfSEFTX0FTSUQ9
eQ0KQ09ORklHX0NQVV9DUDE1PXkNCkNPTkZJR19DUFVfQ1AxNV9NTVU9eQ0K
DQojDQojIFByb2Nlc3NvciBGZWF0dXJlcw0KIw0KIyBDT05GSUdfQVJNX0xQ
QUUgaXMgbm90IHNldA0KIyBDT05GSUdfQVJDSF9QSFlTX0FERFJfVF82NEJJ
VCBpcyBub3Qgc2V0DQpDT05GSUdfQVJNX1RIVU1CPXkNCkNPTkZJR19BUk1f
VEhVTUJFRT15DQpDT05GSUdfU1dQX0VNVUxBVEU9eQ0KIyBDT05GSUdfQ1BV
X0lDQUNIRV9ESVNBQkxFIGlzIG5vdCBzZXQNCiMgQ09ORklHX0NQVV9EQ0FD
SEVfRElTQUJMRSBpcyBub3Qgc2V0DQojIENPTkZJR19DUFVfQlBSRURJQ1Rf
RElTQUJMRSBpcyBub3Qgc2V0DQpDT05GSUdfT1VURVJfQ0FDSEU9eQ0KQ09O
RklHX09VVEVSX0NBQ0hFX1NZTkM9eQ0KQ09ORklHX01JR0hUX0hBVkVfQ0FD
SEVfTDJYMD15DQpDT05GSUdfQ0FDSEVfTDJYMD15DQpDT05GSUdfQ0FDSEVf
UEwzMTA9eQ0KQ09ORklHX0FSTV9MMV9DQUNIRV9TSElGVF82PXkNCkNPTkZJ
R19BUk1fTDFfQ0FDSEVfU0hJRlQ9Ng0KQ09ORklHX0FSTV9ETUFfTUVNX0JV
RkZFUkFCTEU9eQ0KQ09ORklHX0FSTV9OUl9CQU5LUz04DQpDT05GSUdfQ1BV
X0hBU19QTVU9eQ0KQ09ORklHX01VTFRJX0lSUV9IQU5ETEVSPXkNCiMgQ09O
RklHX0FSTV9FUlJBVEFfNDMwOTczIGlzIG5vdCBzZXQNCiMgQ09ORklHX0FS
TV9FUlJBVEFfNDU4NjkzIGlzIG5vdCBzZXQNCiMgQ09ORklHX0FSTV9FUlJB
VEFfNDYwMDc1IGlzIG5vdCBzZXQNCiMgQ09ORklHX1BMMzEwX0VSUkFUQV81
ODgzNjkgaXMgbm90IHNldA0KQ09ORklHX0FSTV9FUlJBVEFfNzIwNzg5PXkN
CiMgQ09ORklHX1BMMzEwX0VSUkFUQV83Mjc5MTUgaXMgbm90IHNldA0KIyBD
T05GSUdfQVJNX0VSUkFUQV83NDM2MjIgaXMgbm90IHNldA0KQ09ORklHX0FS
TV9FUlJBVEFfNzUxNDcyPXkNCkNPTkZJR19QTDMxMF9FUlJBVEFfNzUzOTcw
PXkNCiMgQ09ORklHX0FSTV9FUlJBVEFfNzU0MzIyIGlzIG5vdCBzZXQNCiMg
Q09ORklHX1BMMzEwX0VSUkFUQV83Njk0MTkgaXMgbm90IHNldA0KQ09ORklH
X0FSTV9HSUM9eQ0KQ09ORklHX0lDU1Q9eQ0KDQojDQojIEJ1cyBzdXBwb3J0
DQojDQpDT05GSUdfQVJNX0FNQkE9eQ0KIyBDT05GSUdfUENJX1NZU0NBTEwg
aXMgbm90IHNldA0KIyBDT05GSUdfQVJDSF9TVVBQT1JUU19NU0kgaXMgbm90
IHNldA0KIyBDT05GSUdfUENDQVJEIGlzIG5vdCBzZXQNCg0KIw0KIyBLZXJu
ZWwgRmVhdHVyZXMNCiMNCkNPTkZJR19IQVZFX1NNUD15DQojIENPTkZJR19T
TVAgaXMgbm90IHNldA0KQ09ORklHX0FSTV9BUkNIX1RJTUVSPXkNCkNPTkZJ
R19WTVNQTElUXzNHPXkNCiMgQ09ORklHX1ZNU1BMSVRfMkcgaXMgbm90IHNl
dA0KIyBDT05GSUdfVk1TUExJVF8xRyBpcyBub3Qgc2V0DQpDT05GSUdfUEFH
RV9PRkZTRVQ9MHhDMDAwMDAwMA0KQ09ORklHX0FSQ0hfTlJfR1BJTz0wDQpD
T05GSUdfUFJFRU1QVF9OT05FPXkNCiMgQ09ORklHX1BSRUVNUFRfVk9MVU5U
QVJZIGlzIG5vdCBzZXQNCiMgQ09ORklHX1BSRUVNUFQgaXMgbm90IHNldA0K
Q09ORklHX0haPTEwMA0KIyBDT05GSUdfVEhVTUIyX0tFUk5FTCBpcyBub3Qg
c2V0DQpDT05GSUdfQUVBQkk9eQ0KQ09ORklHX09BQklfQ09NUEFUPXkNCiMg
Q09ORklHX0FSQ0hfU1BBUlNFTUVNX0RFRkFVTFQgaXMgbm90IHNldA0KIyBD
T05GSUdfQVJDSF9TRUxFQ1RfTUVNT1JZX01PREVMIGlzIG5vdCBzZXQNCkNP
TkZJR19IQVZFX0FSQ0hfUEZOX1ZBTElEPXkNCkNPTkZJR19ISUdITUVNPXkN
CkNPTkZJR19ISUdIUFRFPXkNCkNPTkZJR19TRUxFQ1RfTUVNT1JZX01PREVM
PXkNCkNPTkZJR19GTEFUTUVNX01BTlVBTD15DQpDT05GSUdfRkxBVE1FTT15
DQpDT05GSUdfRkxBVF9OT0RFX01FTV9NQVA9eQ0KQ09ORklHX0hBVkVfTUVN
QkxPQ0s9eQ0KQ09ORklHX1BBR0VGTEFHU19FWFRFTkRFRD15DQpDT05GSUdf
U1BMSVRfUFRMT0NLX0NQVVM9NA0KIyBDT05GSUdfQ09NUEFDVElPTiBpcyBu
b3Qgc2V0DQojIENPTkZJR19QSFlTX0FERFJfVF82NEJJVCBpcyBub3Qgc2V0
DQpDT05GSUdfWk9ORV9ETUFfRkxBRz0wDQpDT05GSUdfQk9VTkNFPXkNCkNP
TkZJR19WSVJUX1RPX0JVUz15DQojIENPTkZJR19LU00gaXMgbm90IHNldA0K
Q09ORklHX0RFRkFVTFRfTU1BUF9NSU5fQUREUj00MDk2DQpDT05GSUdfQ1JP
U1NfTUVNT1JZX0FUVEFDSD15DQpDT05GSUdfTkVFRF9QRVJfQ1BVX0tNPXkN
CiMgQ09ORklHX0NMRUFOQ0FDSEUgaXMgbm90IHNldA0KIyBDT05GSUdfRlJP
TlRTV0FQIGlzIG5vdCBzZXQNCkNPTkZJR19GT1JDRV9NQVhfWk9ORU9SREVS
PTExDQpDT05GSUdfQUxJR05NRU5UX1RSQVA9eQ0KIyBDT05GSUdfVUFDQ0VT
U19XSVRIX01FTUNQWSBpcyBub3Qgc2V0DQojIENPTkZJR19TRUNDT01QIGlz
IG5vdCBzZXQNCiMgQ09ORklHX0NDX1NUQUNLUFJPVEVDVE9SIGlzIG5vdCBz
ZXQNCiMgQ09ORklHX0RFUFJFQ0FURURfUEFSQU1fU1RSVUNUIGlzIG5vdCBz
ZXQNCg0KIw0KIyBCb290IG9wdGlvbnMNCiMNCkNPTkZJR19VU0VfT0Y9eQ0K
Q09ORklHX1pCT09UX1JPTV9URVhUPTB4MA0KQ09ORklHX1pCT09UX1JPTV9C
U1M9MHgwDQpDT05GSUdfQVJNX0FQUEVOREVEX0RUQj15DQpDT05GSUdfQVJN
X0FUQUdfRFRCX0NPTVBBVD15DQpDT05GSUdfQ01ETElORT0iZWFybHlwcmlu
dGs9eGVuYm9vdCBjb25zb2xlPXR0eUFNQTEgcm9vdD0vZGV2L21tY2JsazAg
ZGVidWcgcncgaW5pdD0vYmluL2Jhc2giDQpDT05GSUdfQ01ETElORV9GUk9N
X0JPT1RMT0FERVI9eQ0KIyBDT05GSUdfQ01ETElORV9FWFRFTkQgaXMgbm90
IHNldA0KIyBDT05GSUdfQ01ETElORV9GT1JDRSBpcyBub3Qgc2V0DQojIENP
TkZJR19YSVBfS0VSTkVMIGlzIG5vdCBzZXQNCiMgQ09ORklHX0tFWEVDIGlz
IG5vdCBzZXQNCiMgQ09ORklHX0NSQVNIX0RVTVAgaXMgbm90IHNldA0KQ09O
RklHX0FVVE9fWlJFTEFERFI9eQ0KDQojDQojIENQVSBQb3dlciBNYW5hZ2Vt
ZW50DQojDQojIENPTkZJR19DUFVfSURMRSBpcyBub3Qgc2V0DQoNCiMNCiMg
RmxvYXRpbmcgcG9pbnQgZW11bGF0aW9uDQojDQoNCiMNCiMgQXQgbGVhc3Qg
b25lIGVtdWxhdGlvbiBtdXN0IGJlIHNlbGVjdGVkDQojDQojIENPTkZJR19G
UEVfTldGUEUgaXMgbm90IHNldA0KIyBDT05GSUdfRlBFX0ZBU1RGUEUgaXMg
bm90IHNldA0KQ09ORklHX1ZGUD15DQpDT05GSUdfVkZQdjM9eQ0KIyBDT05G
SUdfTkVPTiBpcyBub3Qgc2V0DQpDT05GSUdfWEVOX0RPTTA9eQ0KQ09ORklH
X1hFTj15DQoNCiMNCiMgVXNlcnNwYWNlIGJpbmFyeSBmb3JtYXRzDQojDQpD
T05GSUdfQklORk1UX0VMRj15DQpDT05GSUdfQVJDSF9CSU5GTVRfRUxGX1JB
TkRPTUlaRV9QSUU9eQ0KQ09ORklHX0NPUkVfRFVNUF9ERUZBVUxUX0VMRl9I
RUFERVJTPXkNCkNPTkZJR19IQVZFX0FPVVQ9eQ0KQ09ORklHX0JJTkZNVF9B
T1VUPXkNCkNPTkZJR19CSU5GTVRfTUlTQz15DQoNCiMNCiMgUG93ZXIgbWFu
YWdlbWVudCBvcHRpb25zDQojDQpDT05GSUdfU1VTUEVORD15DQpDT05GSUdf
U1VTUEVORF9GUkVFWkVSPXkNCkNPTkZJR19QTV9TTEVFUD15DQojIENPTkZJ
R19QTV9BVVRPU0xFRVAgaXMgbm90IHNldA0KIyBDT05GSUdfUE1fV0FLRUxP
Q0tTIGlzIG5vdCBzZXQNCiMgQ09ORklHX1BNX1JVTlRJTUUgaXMgbm90IHNl
dA0KQ09ORklHX1BNPXkNCiMgQ09ORklHX1BNX0RFQlVHIGlzIG5vdCBzZXQN
CiMgQ09ORklHX0FQTV9FTVVMQVRJT04gaXMgbm90IHNldA0KQ09ORklHX1BN
X0NMSz15DQpDT05GSUdfQ1BVX1BNPXkNCkNPTkZJR19BUkNIX1NVU1BFTkRf
UE9TU0lCTEU9eQ0KQ09ORklHX0FSTV9DUFVfU1VTUEVORD15DQpDT05GSUdf
TkVUPXkNCg0KIw0KIyBOZXR3b3JraW5nIG9wdGlvbnMNCiMNCkNPTkZJR19Q
QUNLRVQ9eQ0KQ09ORklHX1VOSVg9eQ0KIyBDT05GSUdfVU5JWF9ESUFHIGlz
IG5vdCBzZXQNCkNPTkZJR19YRlJNPXkNCiMgQ09ORklHX1hGUk1fVVNFUiBp
cyBub3Qgc2V0DQojIENPTkZJR19YRlJNX1NVQl9QT0xJQ1kgaXMgbm90IHNl
dA0KIyBDT05GSUdfWEZSTV9NSUdSQVRFIGlzIG5vdCBzZXQNCiMgQ09ORklH
X1hGUk1fU1RBVElTVElDUyBpcyBub3Qgc2V0DQojIENPTkZJR19ORVRfS0VZ
IGlzIG5vdCBzZXQNCkNPTkZJR19JTkVUPXkNCkNPTkZJR19JUF9NVUxUSUNB
U1Q9eQ0KIyBDT05GSUdfSVBfQURWQU5DRURfUk9VVEVSIGlzIG5vdCBzZXQN
CkNPTkZJR19JUF9QTlA9eQ0KIyBDT05GSUdfSVBfUE5QX0RIQ1AgaXMgbm90
IHNldA0KQ09ORklHX0lQX1BOUF9CT09UUD15DQojIENPTkZJR19JUF9QTlBf
UkFSUCBpcyBub3Qgc2V0DQojIENPTkZJR19ORVRfSVBJUCBpcyBub3Qgc2V0
DQojIENPTkZJR19ORVRfSVBHUkVfREVNVVggaXMgbm90IHNldA0KIyBDT05G
SUdfSVBfTVJPVVRFIGlzIG5vdCBzZXQNCiMgQ09ORklHX0FSUEQgaXMgbm90
IHNldA0KIyBDT05GSUdfU1lOX0NPT0tJRVMgaXMgbm90IHNldA0KIyBDT05G
SUdfSU5FVF9BSCBpcyBub3Qgc2V0DQojIENPTkZJR19JTkVUX0VTUCBpcyBu
b3Qgc2V0DQojIENPTkZJR19JTkVUX0lQQ09NUCBpcyBub3Qgc2V0DQojIENP
TkZJR19JTkVUX1hGUk1fVFVOTkVMIGlzIG5vdCBzZXQNCiMgQ09ORklHX0lO
RVRfVFVOTkVMIGlzIG5vdCBzZXQNCkNPTkZJR19JTkVUX1hGUk1fTU9ERV9U
UkFOU1BPUlQ9eQ0KQ09ORklHX0lORVRfWEZSTV9NT0RFX1RVTk5FTD15DQpD
T05GSUdfSU5FVF9YRlJNX01PREVfQkVFVD15DQpDT05GSUdfSU5FVF9MUk89
eQ0KIyBDT05GSUdfSU5FVF9ESUFHIGlzIG5vdCBzZXQNCiMgQ09ORklHX1RD
UF9DT05HX0FEVkFOQ0VEIGlzIG5vdCBzZXQNCkNPTkZJR19UQ1BfQ09OR19D
VUJJQz15DQpDT05GSUdfREVGQVVMVF9UQ1BfQ09ORz0iY3ViaWMiDQojIENP
TkZJR19UQ1BfTUQ1U0lHIGlzIG5vdCBzZXQNCiMgQ09ORklHX0lQVjYgaXMg
bm90IHNldA0KIyBDT05GSUdfTkVUV09SS19TRUNNQVJLIGlzIG5vdCBzZXQN
CiMgQ09ORklHX05FVFdPUktfUEhZX1RJTUVTVEFNUElORyBpcyBub3Qgc2V0
DQojIENPTkZJR19ORVRGSUxURVIgaXMgbm90IHNldA0KIyBDT05GSUdfSVBf
RENDUCBpcyBub3Qgc2V0DQojIENPTkZJR19JUF9TQ1RQIGlzIG5vdCBzZXQN
CiMgQ09ORklHX1JEUyBpcyBub3Qgc2V0DQojIENPTkZJR19USVBDIGlzIG5v
dCBzZXQNCiMgQ09ORklHX0FUTSBpcyBub3Qgc2V0DQojIENPTkZJR19MMlRQ
IGlzIG5vdCBzZXQNCkNPTkZJR19TVFA9eQ0KQ09ORklHX0JSSURHRT15DQpD
T05GSUdfQlJJREdFX0lHTVBfU05PT1BJTkc9eQ0KIyBDT05GSUdfTkVUX0RT
QSBpcyBub3Qgc2V0DQojIENPTkZJR19WTEFOXzgwMjFRIGlzIG5vdCBzZXQN
CiMgQ09ORklHX0RFQ05FVCBpcyBub3Qgc2V0DQpDT05GSUdfTExDPXkNCiMg
Q09ORklHX0xMQzIgaXMgbm90IHNldA0KIyBDT05GSUdfSVBYIGlzIG5vdCBz
ZXQNCiMgQ09ORklHX0FUQUxLIGlzIG5vdCBzZXQNCiMgQ09ORklHX1gyNSBp
cyBub3Qgc2V0DQojIENPTkZJR19MQVBCIGlzIG5vdCBzZXQNCiMgQ09ORklH
X1dBTl9ST1VURVIgaXMgbm90IHNldA0KIyBDT05GSUdfUEhPTkVUIGlzIG5v
dCBzZXQNCiMgQ09ORklHX0lFRUU4MDIxNTQgaXMgbm90IHNldA0KIyBDT05G
SUdfTkVUX1NDSEVEIGlzIG5vdCBzZXQNCiMgQ09ORklHX0RDQiBpcyBub3Qg
c2V0DQojIENPTkZJR19CQVRNQU5fQURWIGlzIG5vdCBzZXQNCiMgQ09ORklH
X09QRU5WU1dJVENIIGlzIG5vdCBzZXQNCkNPTkZJR19CUUw9eQ0KDQojDQoj
IE5ldHdvcmsgdGVzdGluZw0KIw0KIyBDT05GSUdfTkVUX1BLVEdFTiBpcyBu
b3Qgc2V0DQojIENPTkZJR19IQU1SQURJTyBpcyBub3Qgc2V0DQojIENPTkZJ
R19DQU4gaXMgbm90IHNldA0KIyBDT05GSUdfSVJEQSBpcyBub3Qgc2V0DQoj
IENPTkZJR19CVCBpcyBub3Qgc2V0DQojIENPTkZJR19BRl9SWFJQQyBpcyBu
b3Qgc2V0DQpDT05GSUdfV0lSRUxFU1M9eQ0KIyBDT05GSUdfQ0ZHODAyMTEg
aXMgbm90IHNldA0KIyBDT05GSUdfTElCODAyMTEgaXMgbm90IHNldA0KDQoj
DQojIENGRzgwMjExIG5lZWRzIHRvIGJlIGVuYWJsZWQgZm9yIE1BQzgwMjEx
DQojDQojIENPTkZJR19XSU1BWCBpcyBub3Qgc2V0DQojIENPTkZJR19SRktJ
TEwgaXMgbm90IHNldA0KIyBDT05GSUdfTkVUXzlQIGlzIG5vdCBzZXQNCiMg
Q09ORklHX0NBSUYgaXMgbm90IHNldA0KIyBDT05GSUdfQ0VQSF9MSUIgaXMg
bm90IHNldA0KIyBDT05GSUdfTkZDIGlzIG5vdCBzZXQNCkNPTkZJR19IQVZF
X0JQRl9KSVQ9eQ0KDQojDQojIERldmljZSBEcml2ZXJzDQojDQoNCiMNCiMg
R2VuZXJpYyBEcml2ZXIgT3B0aW9ucw0KIw0KQ09ORklHX1VFVkVOVF9IRUxQ
RVJfUEFUSD0iIg0KIyBDT05GSUdfREVWVE1QRlMgaXMgbm90IHNldA0KQ09O
RklHX1NUQU5EQUxPTkU9eQ0KQ09ORklHX1BSRVZFTlRfRklSTVdBUkVfQlVJ
TEQ9eQ0KQ09ORklHX0ZXX0xPQURFUj15DQpDT05GSUdfRklSTVdBUkVfSU5f
S0VSTkVMPXkNCkNPTkZJR19FWFRSQV9GSVJNV0FSRT0iIg0KIyBDT05GSUdf
REVCVUdfRFJJVkVSIGlzIG5vdCBzZXQNCiMgQ09ORklHX0RFQlVHX0RFVlJF
UyBpcyBub3Qgc2V0DQojIENPTkZJR19TWVNfSFlQRVJWSVNPUiBpcyBub3Qg
c2V0DQojIENPTkZJR19HRU5FUklDX0NQVV9ERVZJQ0VTIGlzIG5vdCBzZXQN
CiMgQ09ORklHX0RNQV9TSEFSRURfQlVGRkVSIGlzIG5vdCBzZXQNCiMgQ09O
RklHX0NNQSBpcyBub3Qgc2V0DQojIENPTkZJR19DT05ORUNUT1IgaXMgbm90
IHNldA0KQ09ORklHX01URD15DQojIENPTkZJR19NVERfUkVEQk9PVF9QQVJU
UyBpcyBub3Qgc2V0DQpDT05GSUdfTVREX0NNRExJTkVfUEFSVFM9eQ0KIyBD
T05GSUdfTVREX0FGU19QQVJUUyBpcyBub3Qgc2V0DQojIENPTkZJR19NVERf
T0ZfUEFSVFMgaXMgbm90IHNldA0KIyBDT05GSUdfTVREX0FSN19QQVJUUyBp
cyBub3Qgc2V0DQoNCiMNCiMgVXNlciBNb2R1bGVzIEFuZCBUcmFuc2xhdGlv
biBMYXllcnMNCiMNCkNPTkZJR19NVERfQ0hBUj15DQpDT05GSUdfTVREX0JM
S0RFVlM9eQ0KQ09ORklHX01URF9CTE9DSz15DQojIENPTkZJR19GVEwgaXMg
bm90IHNldA0KIyBDT05GSUdfTkZUTCBpcyBub3Qgc2V0DQojIENPTkZJR19J
TkZUTCBpcyBub3Qgc2V0DQojIENPTkZJR19SRkRfRlRMIGlzIG5vdCBzZXQN
CiMgQ09ORklHX1NTRkRDIGlzIG5vdCBzZXQNCiMgQ09ORklHX1NNX0ZUTCBp
cyBub3Qgc2V0DQojIENPTkZJR19NVERfT09QUyBpcyBub3Qgc2V0DQojIENP
TkZJR19NVERfU1dBUCBpcyBub3Qgc2V0DQoNCiMNCiMgUkFNL1JPTS9GbGFz
aCBjaGlwIGRyaXZlcnMNCiMNCkNPTkZJR19NVERfQ0ZJPXkNCiMgQ09ORklH
X01URF9KRURFQ1BST0JFIGlzIG5vdCBzZXQNCkNPTkZJR19NVERfR0VOX1BS
T0JFPXkNCkNPTkZJR19NVERfQ0ZJX0FEVl9PUFRJT05TPXkNCkNPTkZJR19N
VERfQ0ZJX05PU1dBUD15DQojIENPTkZJR19NVERfQ0ZJX0JFX0JZVEVfU1dB
UCBpcyBub3Qgc2V0DQojIENPTkZJR19NVERfQ0ZJX0xFX0JZVEVfU1dBUCBp
cyBub3Qgc2V0DQojIENPTkZJR19NVERfQ0ZJX0dFT01FVFJZIGlzIG5vdCBz
ZXQNCkNPTkZJR19NVERfTUFQX0JBTktfV0lEVEhfMT15DQpDT05GSUdfTVRE
X01BUF9CQU5LX1dJRFRIXzI9eQ0KQ09ORklHX01URF9NQVBfQkFOS19XSURU
SF80PXkNCiMgQ09ORklHX01URF9NQVBfQkFOS19XSURUSF84IGlzIG5vdCBz
ZXQNCiMgQ09ORklHX01URF9NQVBfQkFOS19XSURUSF8xNiBpcyBub3Qgc2V0
DQojIENPTkZJR19NVERfTUFQX0JBTktfV0lEVEhfMzIgaXMgbm90IHNldA0K
Q09ORklHX01URF9DRklfSTE9eQ0KQ09ORklHX01URF9DRklfSTI9eQ0KIyBD
T05GSUdfTVREX0NGSV9JNCBpcyBub3Qgc2V0DQojIENPTkZJR19NVERfQ0ZJ
X0k4IGlzIG5vdCBzZXQNCiMgQ09ORklHX01URF9PVFAgaXMgbm90IHNldA0K
Q09ORklHX01URF9DRklfSU5URUxFWFQ9eQ0KIyBDT05GSUdfTVREX0NGSV9B
TURTVEQgaXMgbm90IHNldA0KIyBDT05GSUdfTVREX0NGSV9TVEFBIGlzIG5v
dCBzZXQNCkNPTkZJR19NVERfQ0ZJX1VUSUw9eQ0KIyBDT05GSUdfTVREX1JB
TSBpcyBub3Qgc2V0DQojIENPTkZJR19NVERfUk9NIGlzIG5vdCBzZXQNCiMg
Q09ORklHX01URF9BQlNFTlQgaXMgbm90IHNldA0KDQojDQojIE1hcHBpbmcg
ZHJpdmVycyBmb3IgY2hpcCBhY2Nlc3MNCiMNCiMgQ09ORklHX01URF9DT01Q
TEVYX01BUFBJTkdTIGlzIG5vdCBzZXQNCiMgQ09ORklHX01URF9QSFlTTUFQ
IGlzIG5vdCBzZXQNCiMgQ09ORklHX01URF9QSFlTTUFQX09GIGlzIG5vdCBz
ZXQNCiMgQ09ORklHX01URF9QTEFUUkFNIGlzIG5vdCBzZXQNCg0KIw0KIyBT
ZWxmLWNvbnRhaW5lZCBNVEQgZGV2aWNlIGRyaXZlcnMNCiMNCiMgQ09ORklH
X01URF9TTFJBTSBpcyBub3Qgc2V0DQojIENPTkZJR19NVERfUEhSQU0gaXMg
bm90IHNldA0KIyBDT05GSUdfTVREX01URFJBTSBpcyBub3Qgc2V0DQojIENP
TkZJR19NVERfQkxPQ0syTVREIGlzIG5vdCBzZXQNCg0KIw0KIyBEaXNrLU9u
LUNoaXAgRGV2aWNlIERyaXZlcnMNCiMNCiMgQ09ORklHX01URF9ET0NHMyBp
cyBub3Qgc2V0DQojIENPTkZJR19NVERfTkFORCBpcyBub3Qgc2V0DQojIENP
TkZJR19NVERfT05FTkFORCBpcyBub3Qgc2V0DQoNCiMNCiMgTFBERFIgZmxh
c2ggbWVtb3J5IGRyaXZlcnMNCiMNCiMgQ09ORklHX01URF9MUEREUiBpcyBu
b3Qgc2V0DQojIENPTkZJR19NVERfVUJJIGlzIG5vdCBzZXQNCkNPTkZJR19E
VEM9eQ0KQ09ORklHX09GPXkNCg0KIw0KIyBEZXZpY2UgVHJlZSBhbmQgT3Bl
biBGaXJtd2FyZSBzdXBwb3J0DQojDQojIENPTkZJR19QUk9DX0RFVklDRVRS
RUUgaXMgbm90IHNldA0KIyBDT05GSUdfT0ZfU0VMRlRFU1QgaXMgbm90IHNl
dA0KQ09ORklHX09GX0ZMQVRUUkVFPXkNCkNPTkZJR19PRl9FQVJMWV9GTEFU
VFJFRT15DQpDT05GSUdfT0ZfQUREUkVTUz15DQpDT05GSUdfT0ZfSVJRPXkN
CkNPTkZJR19PRl9ERVZJQ0U9eQ0KQ09ORklHX09GX0kyQz15DQpDT05GSUdf
T0ZfTkVUPXkNCkNPTkZJR19PRl9NRElPPXkNCkNPTkZJR19PRl9NVEQ9eQ0K
IyBDT05GSUdfUEFSUE9SVCBpcyBub3Qgc2V0DQpDT05GSUdfQkxLX0RFVj15
DQojIENPTkZJR19CTEtfREVWX0NPV19DT01NT04gaXMgbm90IHNldA0KQ09O
RklHX0JMS19ERVZfTE9PUD15DQpDT05GSUdfQkxLX0RFVl9MT09QX01JTl9D
T1VOVD04DQojIENPTkZJR19CTEtfREVWX0NSWVBUT0xPT1AgaXMgbm90IHNl
dA0KDQojDQojIERSQkQgZGlzYWJsZWQgYmVjYXVzZSBQUk9DX0ZTLCBJTkVU
IG9yIENPTk5FQ1RPUiBub3Qgc2VsZWN0ZWQNCiMNCiMgQ09ORklHX0JMS19E
RVZfTkJEIGlzIG5vdCBzZXQNCkNPTkZJR19CTEtfREVWX1JBTT15DQpDT05G
SUdfQkxLX0RFVl9SQU1fQ09VTlQ9MTYNCkNPTkZJR19CTEtfREVWX1JBTV9T
SVpFPTQwOTYNCiMgQ09ORklHX0JMS19ERVZfWElQIGlzIG5vdCBzZXQNCiMg
Q09ORklHX0NEUk9NX1BLVENEVkQgaXMgbm90IHNldA0KIyBDT05GSUdfQVRB
X09WRVJfRVRIIGlzIG5vdCBzZXQNCkNPTkZJR19YRU5fQkxLREVWX0ZST05U
RU5EPXkNCkNPTkZJR19YRU5fQkxLREVWX0JBQ0tFTkQ9eQ0KIyBDT05GSUdf
QkxLX0RFVl9SQkQgaXMgbm90IHNldA0KDQojDQojIE1pc2MgZGV2aWNlcw0K
Iw0KIyBDT05GSUdfU0VOU09SU19MSVMzTFYwMkQgaXMgbm90IHNldA0KIyBD
T05GSUdfQUQ1MjVYX0RQT1QgaXMgbm90IHNldA0KIyBDT05GSUdfQVRNRUxf
UFdNIGlzIG5vdCBzZXQNCiMgQ09ORklHX0lDUzkzMlM0MDEgaXMgbm90IHNl
dA0KIyBDT05GSUdfRU5DTE9TVVJFX1NFUlZJQ0VTIGlzIG5vdCBzZXQNCiMg
Q09ORklHX0FQRFM5ODAyQUxTIGlzIG5vdCBzZXQNCiMgQ09ORklHX0lTTDI5
MDAzIGlzIG5vdCBzZXQNCiMgQ09ORklHX0lTTDI5MDIwIGlzIG5vdCBzZXQN
CiMgQ09ORklHX1NFTlNPUlNfVFNMMjU1MCBpcyBub3Qgc2V0DQojIENPTkZJ
R19TRU5TT1JTX0JIMTc4MCBpcyBub3Qgc2V0DQojIENPTkZJR19TRU5TT1JT
X0JIMTc3MCBpcyBub3Qgc2V0DQojIENPTkZJR19TRU5TT1JTX0FQRFM5OTBY
IGlzIG5vdCBzZXQNCiMgQ09ORklHX0hNQzYzNTIgaXMgbm90IHNldA0KIyBD
T05GSUdfRFMxNjgyIGlzIG5vdCBzZXQNCiMgQ09ORklHX0FSTV9DSEFSTENE
IGlzIG5vdCBzZXQNCiMgQ09ORklHX0JNUDA4NV9JMkMgaXMgbm90IHNldA0K
IyBDT05GSUdfVVNCX1NXSVRDSF9GU0E5NDgwIGlzIG5vdCBzZXQNCiMgQ09O
RklHX0MyUE9SVCBpcyBub3Qgc2V0DQoNCiMNCiMgRUVQUk9NIHN1cHBvcnQN
CiMNCiMgQ09ORklHX0VFUFJPTV9BVDI0IGlzIG5vdCBzZXQNCiMgQ09ORklH
X0VFUFJPTV9MRUdBQ1kgaXMgbm90IHNldA0KIyBDT05GSUdfRUVQUk9NX01B
WDY4NzUgaXMgbm90IHNldA0KIyBDT05GSUdfRUVQUk9NXzkzQ1g2IGlzIG5v
dCBzZXQNCiMgQ09ORklHX0lXTUMzMjAwVE9QIGlzIG5vdCBzZXQNCg0KIw0K
IyBUZXhhcyBJbnN0cnVtZW50cyBzaGFyZWQgdHJhbnNwb3J0IGxpbmUgZGlz
Y2lwbGluZQ0KIw0KIyBDT05GSUdfU0VOU09SU19MSVMzX0kyQyBpcyBub3Qg
c2V0DQoNCiMNCiMgQWx0ZXJhIEZQR0EgZmlybXdhcmUgZG93bmxvYWQgbW9k
dWxlDQojDQojIENPTkZJR19BTFRFUkFfU1RBUEwgaXMgbm90IHNldA0KDQoj
DQojIFNDU0kgZGV2aWNlIHN1cHBvcnQNCiMNCkNPTkZJR19TQ1NJX01PRD15
DQojIENPTkZJR19SQUlEX0FUVFJTIGlzIG5vdCBzZXQNCkNPTkZJR19TQ1NJ
PXkNCkNPTkZJR19TQ1NJX0RNQT15DQojIENPTkZJR19TQ1NJX1RHVCBpcyBu
b3Qgc2V0DQojIENPTkZJR19TQ1NJX05FVExJTksgaXMgbm90IHNldA0KQ09O
RklHX1NDU0lfUFJPQ19GUz15DQoNCiMNCiMgU0NTSSBzdXBwb3J0IHR5cGUg
KGRpc2ssIHRhcGUsIENELVJPTSkNCiMNCkNPTkZJR19CTEtfREVWX1NEPXkN
CiMgQ09ORklHX0NIUl9ERVZfU1QgaXMgbm90IHNldA0KIyBDT05GSUdfQ0hS
X0RFVl9PU1NUIGlzIG5vdCBzZXQNCiMgQ09ORklHX0JMS19ERVZfU1IgaXMg
bm90IHNldA0KIyBDT05GSUdfQ0hSX0RFVl9TRyBpcyBub3Qgc2V0DQojIENP
TkZJR19DSFJfREVWX1NDSCBpcyBub3Qgc2V0DQojIENPTkZJR19TQ1NJX01V
TFRJX0xVTiBpcyBub3Qgc2V0DQojIENPTkZJR19TQ1NJX0NPTlNUQU5UUyBp
cyBub3Qgc2V0DQojIENPTkZJR19TQ1NJX0xPR0dJTkcgaXMgbm90IHNldA0K
IyBDT05GSUdfU0NTSV9TQ0FOX0FTWU5DIGlzIG5vdCBzZXQNCg0KIw0KIyBT
Q1NJIFRyYW5zcG9ydHMNCiMNCiMgQ09ORklHX1NDU0lfU1BJX0FUVFJTIGlz
IG5vdCBzZXQNCiMgQ09ORklHX1NDU0lfRkNfQVRUUlMgaXMgbm90IHNldA0K
IyBDT05GSUdfU0NTSV9JU0NTSV9BVFRSUyBpcyBub3Qgc2V0DQojIENPTkZJ
R19TQ1NJX1NBU19BVFRSUyBpcyBub3Qgc2V0DQojIENPTkZJR19TQ1NJX1NB
U19MSUJTQVMgaXMgbm90IHNldA0KIyBDT05GSUdfU0NTSV9TUlBfQVRUUlMg
aXMgbm90IHNldA0KQ09ORklHX1NDU0lfTE9XTEVWRUw9eQ0KIyBDT05GSUdf
SVNDU0lfVENQIGlzIG5vdCBzZXQNCiMgQ09ORklHX0lTQ1NJX0JPT1RfU1lT
RlMgaXMgbm90IHNldA0KIyBDT05GSUdfTElCRkMgaXMgbm90IHNldA0KIyBD
T05GSUdfTElCRkNPRSBpcyBub3Qgc2V0DQojIENPTkZJR19TQ1NJX0RFQlVH
IGlzIG5vdCBzZXQNCiMgQ09ORklHX1NDU0lfREggaXMgbm90IHNldA0KIyBD
T05GSUdfU0NTSV9PU0RfSU5JVElBVE9SIGlzIG5vdCBzZXQNCkNPTkZJR19I
QVZFX1BBVEFfUExBVEZPUk09eQ0KQ09ORklHX0FUQT15DQojIENPTkZJR19B
VEFfTk9OU1RBTkRBUkQgaXMgbm90IHNldA0KQ09ORklHX0FUQV9WRVJCT1NF
X0VSUk9SPXkNCkNPTkZJR19TQVRBX1BNUD15DQoNCiMNCiMgQ29udHJvbGxl
cnMgd2l0aCBub24tU0ZGIG5hdGl2ZSBpbnRlcmZhY2UNCiMNCiMgQ09ORklH
X1NBVEFfQUhDSV9QTEFURk9STSBpcyBub3Qgc2V0DQpDT05GSUdfQVRBX1NG
Rj15DQoNCiMNCiMgU0ZGIGNvbnRyb2xsZXJzIHdpdGggY3VzdG9tIERNQSBp
bnRlcmZhY2UNCiMNCiMgQ09ORklHX0FUQV9CTURNQSBpcyBub3Qgc2V0DQoN
CiMNCiMgUElPLW9ubHkgU0ZGIGNvbnRyb2xsZXJzDQojDQojIENPTkZJR19Q
QVRBX1BMQVRGT1JNIGlzIG5vdCBzZXQNCg0KIw0KIyBHZW5lcmljIGZhbGxi
YWNrIC8gbGVnYWN5IGRyaXZlcnMNCiMNCiMgQ09ORklHX01EIGlzIG5vdCBz
ZXQNCiMgQ09ORklHX1RBUkdFVF9DT1JFIGlzIG5vdCBzZXQNCkNPTkZJR19O
RVRERVZJQ0VTPXkNCkNPTkZJR19ORVRfQ09SRT15DQojIENPTkZJR19CT05E
SU5HIGlzIG5vdCBzZXQNCiMgQ09ORklHX0RVTU1ZIGlzIG5vdCBzZXQNCiMg
Q09ORklHX0VRVUFMSVpFUiBpcyBub3Qgc2V0DQpDT05GSUdfTUlJPXkNCiMg
Q09ORklHX05FVF9URUFNIGlzIG5vdCBzZXQNCiMgQ09ORklHX01BQ1ZMQU4g
aXMgbm90IHNldA0KIyBDT05GSUdfTkVUQ09OU09MRSBpcyBub3Qgc2V0DQoj
IENPTkZJR19ORVRQT0xMIGlzIG5vdCBzZXQNCiMgQ09ORklHX05FVF9QT0xM
X0NPTlRST0xMRVIgaXMgbm90IHNldA0KQ09ORklHX1RVTj15DQojIENPTkZJ
R19WRVRIIGlzIG5vdCBzZXQNCg0KIw0KIyBDQUlGIHRyYW5zcG9ydCBkcml2
ZXJzDQojDQpDT05GSUdfRVRIRVJORVQ9eQ0KQ09ORklHX05FVF9WRU5ET1Jf
QlJPQURDT009eQ0KIyBDT05GSUdfQjQ0IGlzIG5vdCBzZXQNCiMgQ09ORklH
X05FVF9DQUxYRURBX1hHTUFDIGlzIG5vdCBzZXQNCkNPTkZJR19ORVRfVkVO
RE9SX0NIRUxTSU89eQ0KQ09ORklHX05FVF9WRU5ET1JfQ0lSUlVTPXkNCiMg
Q09ORklHX0NTODl4MCBpcyBub3Qgc2V0DQojIENPTkZJR19ETTkwMDAgaXMg
bm90IHNldA0KIyBDT05GSUdfRE5FVCBpcyBub3Qgc2V0DQpDT05GSUdfTkVU
X1ZFTkRPUl9GQVJBREFZPXkNCiMgQ09ORklHX0ZUTUFDMTAwIGlzIG5vdCBz
ZXQNCiMgQ09ORklHX0ZUR01BQzEwMCBpcyBub3Qgc2V0DQpDT05GSUdfTkVU
X1ZFTkRPUl9JTlRFTD15DQpDT05GSUdfTkVUX1ZFTkRPUl9JODI1WFg9eQ0K
Q09ORklHX05FVF9WRU5ET1JfTUFSVkVMTD15DQpDT05GSUdfTkVUX1ZFTkRP
Ul9NSUNSRUw9eQ0KIyBDT05GSUdfS1M4ODUxX01MTCBpcyBub3Qgc2V0DQpD
T05GSUdfTkVUX1ZFTkRPUl9OQVRTRU1JPXkNCkNPTkZJR19ORVRfVkVORE9S
XzgzOTA9eQ0KIyBDT05GSUdfQVg4ODc5NiBpcyBub3Qgc2V0DQojIENPTkZJ
R19FVEhPQyBpcyBub3Qgc2V0DQpDT05GSUdfTkVUX1ZFTkRPUl9TRUVRPXkN
CiMgQ09ORklHX1NFRVE4MDA1IGlzIG5vdCBzZXQNCkNPTkZJR19ORVRfVkVO
RE9SX1NNU0M9eQ0KQ09ORklHX1NNQzkxWD15DQpDT05GSUdfU01DOTExWD15
DQpDT05GSUdfU01TQzkxMVg9eQ0KIyBDT05GSUdfU01TQzkxMVhfQVJDSF9I
T09LUyBpcyBub3Qgc2V0DQpDT05GSUdfTkVUX1ZFTkRPUl9TVE1JQ1JPPXkN
CiMgQ09ORklHX1NUTU1BQ19FVEggaXMgbm90IHNldA0KQ09ORklHX05FVF9W
RU5ET1JfV0laTkVUPXkNCiMgQ09ORklHX1dJWk5FVF9XNTEwMCBpcyBub3Qg
c2V0DQojIENPTkZJR19XSVpORVRfVzUzMDAgaXMgbm90IHNldA0KQ09ORklH
X1BIWUxJQj15DQoNCiMNCiMgTUlJIFBIWSBkZXZpY2UgZHJpdmVycw0KIw0K
IyBDT05GSUdfQU1EX1BIWSBpcyBub3Qgc2V0DQojIENPTkZJR19NQVJWRUxM
X1BIWSBpcyBub3Qgc2V0DQojIENPTkZJR19EQVZJQ09NX1BIWSBpcyBub3Qg
c2V0DQojIENPTkZJR19RU0VNSV9QSFkgaXMgbm90IHNldA0KIyBDT05GSUdf
TFhUX1BIWSBpcyBub3Qgc2V0DQojIENPTkZJR19DSUNBREFfUEhZIGlzIG5v
dCBzZXQNCiMgQ09ORklHX1ZJVEVTU0VfUEhZIGlzIG5vdCBzZXQNCkNPTkZJ
R19TTVNDX1BIWT15DQojIENPTkZJR19CUk9BRENPTV9QSFkgaXMgbm90IHNl
dA0KIyBDT05GSUdfSUNQTFVTX1BIWSBpcyBub3Qgc2V0DQojIENPTkZJR19S
RUFMVEVLX1BIWSBpcyBub3Qgc2V0DQojIENPTkZJR19OQVRJT05BTF9QSFkg
aXMgbm90IHNldA0KIyBDT05GSUdfU1RFMTBYUCBpcyBub3Qgc2V0DQojIENP
TkZJR19MU0lfRVQxMDExQ19QSFkgaXMgbm90IHNldA0KIyBDT05GSUdfTUlD
UkVMX1BIWSBpcyBub3Qgc2V0DQojIENPTkZJR19GSVhFRF9QSFkgaXMgbm90
IHNldA0KIyBDT05GSUdfTURJT19CSVRCQU5HIGlzIG5vdCBzZXQNCiMgQ09O
RklHX1BQUCBpcyBub3Qgc2V0DQojIENPTkZJR19TTElQIGlzIG5vdCBzZXQN
CkNPTkZJR19XTEFOPXkNCiMgQ09ORklHX0hPU1RBUCBpcyBub3Qgc2V0DQoj
IENPTkZJR19XTF9USSBpcyBub3Qgc2V0DQoNCiMNCiMgRW5hYmxlIFdpTUFY
IChOZXR3b3JraW5nIG9wdGlvbnMpIHRvIHNlZSB0aGUgV2lNQVggZHJpdmVy
cw0KIw0KIyBDT05GSUdfV0FOIGlzIG5vdCBzZXQNCkNPTkZJR19YRU5fTkVU
REVWX0ZST05URU5EPXkNCkNPTkZJR19YRU5fTkVUREVWX0JBQ0tFTkQ9eQ0K
IyBDT05GSUdfSVNETiBpcyBub3Qgc2V0DQoNCiMNCiMgSW5wdXQgZGV2aWNl
IHN1cHBvcnQNCiMNCkNPTkZJR19JTlBVVD15DQojIENPTkZJR19JTlBVVF9G
Rl9NRU1MRVNTIGlzIG5vdCBzZXQNCiMgQ09ORklHX0lOUFVUX1BPTExERVYg
aXMgbm90IHNldA0KIyBDT05GSUdfSU5QVVRfU1BBUlNFS01BUCBpcyBub3Qg
c2V0DQojIENPTkZJR19JTlBVVF9NQVRSSVhLTUFQIGlzIG5vdCBzZXQNCg0K
Iw0KIyBVc2VybGFuZCBpbnRlcmZhY2VzDQojDQpDT05GSUdfSU5QVVRfTU9V
U0VERVY9eQ0KQ09ORklHX0lOUFVUX01PVVNFREVWX1BTQVVYPXkNCkNPTkZJ
R19JTlBVVF9NT1VTRURFVl9TQ1JFRU5fWD0xMDI0DQpDT05GSUdfSU5QVVRf
TU9VU0VERVZfU0NSRUVOX1k9NzY4DQojIENPTkZJR19JTlBVVF9KT1lERVYg
aXMgbm90IHNldA0KQ09ORklHX0lOUFVUX0VWREVWPXkNCiMgQ09ORklHX0lO
UFVUX0VWQlVHIGlzIG5vdCBzZXQNCg0KIw0KIyBJbnB1dCBEZXZpY2UgRHJp
dmVycw0KIw0KQ09ORklHX0lOUFVUX0tFWUJPQVJEPXkNCiMgQ09ORklHX0tF
WUJPQVJEX0FEUDU1ODggaXMgbm90IHNldA0KIyBDT05GSUdfS0VZQk9BUkRf
QURQNTU4OSBpcyBub3Qgc2V0DQpDT05GSUdfS0VZQk9BUkRfQVRLQkQ9eQ0K
IyBDT05GSUdfS0VZQk9BUkRfUVQxMDcwIGlzIG5vdCBzZXQNCiMgQ09ORklH
X0tFWUJPQVJEX1FUMjE2MCBpcyBub3Qgc2V0DQojIENPTkZJR19LRVlCT0FS
RF9MS0tCRCBpcyBub3Qgc2V0DQojIENPTkZJR19LRVlCT0FSRF9UQ0E2NDE2
IGlzIG5vdCBzZXQNCiMgQ09ORklHX0tFWUJPQVJEX1RDQTg0MTggaXMgbm90
IHNldA0KIyBDT05GSUdfS0VZQk9BUkRfTE04MzMzIGlzIG5vdCBzZXQNCiMg
Q09ORklHX0tFWUJPQVJEX01BWDczNTkgaXMgbm90IHNldA0KIyBDT05GSUdf
S0VZQk9BUkRfTUNTIGlzIG5vdCBzZXQNCiMgQ09ORklHX0tFWUJPQVJEX01Q
UjEyMSBpcyBub3Qgc2V0DQojIENPTkZJR19LRVlCT0FSRF9ORVdUT04gaXMg
bm90IHNldA0KIyBDT05GSUdfS0VZQk9BUkRfT1BFTkNPUkVTIGlzIG5vdCBz
ZXQNCiMgQ09ORklHX0tFWUJPQVJEX1NBTVNVTkcgaXMgbm90IHNldA0KIyBD
T05GSUdfS0VZQk9BUkRfU1RPV0FXQVkgaXMgbm90IHNldA0KIyBDT05GSUdf
S0VZQk9BUkRfU1VOS0JEIGlzIG5vdCBzZXQNCiMgQ09ORklHX0tFWUJPQVJE
X09NQVA0IGlzIG5vdCBzZXQNCiMgQ09ORklHX0tFWUJPQVJEX1hUS0JEIGlz
IG5vdCBzZXQNCkNPTkZJR19JTlBVVF9NT1VTRT15DQpDT05GSUdfTU9VU0Vf
UFMyPXkNCkNPTkZJR19NT1VTRV9QUzJfQUxQUz15DQpDT05GSUdfTU9VU0Vf
UFMyX0xPR0lQUzJQUD15DQpDT05GSUdfTU9VU0VfUFMyX1NZTkFQVElDUz15
DQpDT05GSUdfTU9VU0VfUFMyX1RSQUNLUE9JTlQ9eQ0KIyBDT05GSUdfTU9V
U0VfUFMyX0VMQU5URUNIIGlzIG5vdCBzZXQNCiMgQ09ORklHX01PVVNFX1BT
Ml9TRU5URUxJQyBpcyBub3Qgc2V0DQojIENPTkZJR19NT1VTRV9QUzJfVE9V
Q0hLSVQgaXMgbm90IHNldA0KIyBDT05GSUdfTU9VU0VfU0VSSUFMIGlzIG5v
dCBzZXQNCiMgQ09ORklHX01PVVNFX0FQUExFVE9VQ0ggaXMgbm90IHNldA0K
IyBDT05GSUdfTU9VU0VfQkNNNTk3NCBpcyBub3Qgc2V0DQojIENPTkZJR19N
T1VTRV9WU1hYWEFBIGlzIG5vdCBzZXQNCiMgQ09ORklHX01PVVNFX1NZTkFQ
VElDU19JMkMgaXMgbm90IHNldA0KIyBDT05GSUdfTU9VU0VfU1lOQVBUSUNT
X1VTQiBpcyBub3Qgc2V0DQojIENPTkZJR19JTlBVVF9KT1lTVElDSyBpcyBu
b3Qgc2V0DQojIENPTkZJR19JTlBVVF9UQUJMRVQgaXMgbm90IHNldA0KIyBD
T05GSUdfSU5QVVRfVE9VQ0hTQ1JFRU4gaXMgbm90IHNldA0KIyBDT05GSUdf
SU5QVVRfTUlTQyBpcyBub3Qgc2V0DQoNCiMNCiMgSGFyZHdhcmUgSS9PIHBv
cnRzDQojDQpDT05GSUdfU0VSSU89eQ0KIyBDT05GSUdfU0VSSU9fU0VSUE9S
VCBpcyBub3Qgc2V0DQpDT05GSUdfU0VSSU9fQU1CQUtNST15DQpDT05GSUdf
U0VSSU9fTElCUFMyPXkNCiMgQ09ORklHX1NFUklPX1JBVyBpcyBub3Qgc2V0
DQojIENPTkZJR19TRVJJT19BTFRFUkFfUFMyIGlzIG5vdCBzZXQNCiMgQ09O
RklHX1NFUklPX1BTMk1VTFQgaXMgbm90IHNldA0KIyBDT05GSUdfR0FNRVBP
UlQgaXMgbm90IHNldA0KDQojDQojIENoYXJhY3RlciBkZXZpY2VzDQojDQpD
T05GSUdfVlQ9eQ0KQ09ORklHX0NPTlNPTEVfVFJBTlNMQVRJT05TPXkNCkNP
TkZJR19WVF9DT05TT0xFPXkNCkNPTkZJR19WVF9DT05TT0xFX1NMRUVQPXkN
CkNPTkZJR19IV19DT05TT0xFPXkNCiMgQ09ORklHX1ZUX0hXX0NPTlNPTEVf
QklORElORyBpcyBub3Qgc2V0DQpDT05GSUdfVU5JWDk4X1BUWVM9eQ0KIyBD
T05GSUdfREVWUFRTX01VTFRJUExFX0lOU1RBTkNFUyBpcyBub3Qgc2V0DQpD
T05GSUdfTEVHQUNZX1BUWVM9eQ0KQ09ORklHX0xFR0FDWV9QVFlfQ09VTlQ9
MTYNCiMgQ09ORklHX1NFUklBTF9OT05TVEFOREFSRCBpcyBub3Qgc2V0DQoj
IENPTkZJR19OX0dTTSBpcyBub3Qgc2V0DQojIENPTkZJR19UUkFDRV9TSU5L
IGlzIG5vdCBzZXQNCkNPTkZJR19ERVZLTUVNPXkNCg0KIw0KIyBTZXJpYWwg
ZHJpdmVycw0KIw0KQ09ORklHX1NFUklBTF84MjUwPXkNCiMgQ09ORklHX1NF
UklBTF84MjUwX0NPTlNPTEUgaXMgbm90IHNldA0KQ09ORklHX1NFUklBTF84
MjUwX05SX1VBUlRTPTQNCkNPTkZJR19TRVJJQUxfODI1MF9SVU5USU1FX1VB
UlRTPTQNCkNPTkZJR19TRVJJQUxfODI1MF9FWFRFTkRFRD15DQpDT05GSUdf
U0VSSUFMXzgyNTBfTUFOWV9QT1JUUz15DQpDT05GSUdfU0VSSUFMXzgyNTBf
U0hBUkVfSVJRPXkNCiMgQ09ORklHX1NFUklBTF84MjUwX0RFVEVDVF9JUlEg
aXMgbm90IHNldA0KQ09ORklHX1NFUklBTF84MjUwX1JTQT15DQojIENPTkZJ
R19TRVJJQUxfODI1MF9EVyBpcyBub3Qgc2V0DQojIENPTkZJR19TRVJJQUxf
ODI1MF9FTSBpcyBub3Qgc2V0DQoNCiMNCiMgTm9uLTgyNTAgc2VyaWFsIHBv
cnQgc3VwcG9ydA0KIw0KIyBDT05GSUdfU0VSSUFMX0FNQkFfUEwwMTAgaXMg
bm90IHNldA0KQ09ORklHX1NFUklBTF9BTUJBX1BMMDExPXkNCkNPTkZJR19T
RVJJQUxfQU1CQV9QTDAxMV9DT05TT0xFPXkNCkNPTkZJR19TRVJJQUxfQ09S
RT15DQpDT05GSUdfU0VSSUFMX0NPUkVfQ09OU09MRT15DQojIENPTkZJR19T
RVJJQUxfT0ZfUExBVEZPUk0gaXMgbm90IHNldA0KIyBDT05GSUdfU0VSSUFM
X1RJTUJFUkRBTEUgaXMgbm90IHNldA0KIyBDT05GSUdfU0VSSUFMX0FMVEVS
QV9KVEFHVUFSVCBpcyBub3Qgc2V0DQojIENPTkZJR19TRVJJQUxfQUxURVJB
X1VBUlQgaXMgbm90IHNldA0KIyBDT05GSUdfU0VSSUFMX1hJTElOWF9QU19V
QVJUIGlzIG5vdCBzZXQNCiMgQ09ORklHX1RUWV9QUklOVEsgaXMgbm90IHNl
dA0KQ09ORklHX0hWQ19EUklWRVI9eQ0KQ09ORklHX0hWQ19JUlE9eQ0KQ09O
RklHX0hWQ19YRU49eQ0KIyBDT05GSUdfSFZDX1hFTl9GUk9OVEVORCBpcyBu
b3Qgc2V0DQojIENPTkZJR19IVkNfRENDIGlzIG5vdCBzZXQNCiMgQ09ORklH
X0lQTUlfSEFORExFUiBpcyBub3Qgc2V0DQpDT05GSUdfSFdfUkFORE9NPXkN
CiMgQ09ORklHX0hXX1JBTkRPTV9USU1FUklPTUVNIGlzIG5vdCBzZXQNCiMg
Q09ORklHX0hXX1JBTkRPTV9BVE1FTCBpcyBub3Qgc2V0DQojIENPTkZJR19S
Mzk2NCBpcyBub3Qgc2V0DQojIENPTkZJR19SQVdfRFJJVkVSIGlzIG5vdCBz
ZXQNCiMgQ09ORklHX1RDR19UUE0gaXMgbm90IHNldA0KQ09ORklHX0kyQz15
DQpDT05GSUdfSTJDX0JPQVJESU5GTz15DQpDT05GSUdfSTJDX0NPTVBBVD15
DQpDT05GSUdfSTJDX0NIQVJERVY9eQ0KIyBDT05GSUdfSTJDX01VWCBpcyBu
b3Qgc2V0DQpDT05GSUdfSTJDX0hFTFBFUl9BVVRPPXkNCg0KIw0KIyBJMkMg
SGFyZHdhcmUgQnVzIHN1cHBvcnQNCiMNCg0KIw0KIyBJMkMgc3lzdGVtIGJ1
cyBkcml2ZXJzIChtb3N0bHkgZW1iZWRkZWQgLyBzeXN0ZW0tb24tY2hpcCkN
CiMNCiMgQ09ORklHX0kyQ19ERVNJR05XQVJFX1BMQVRGT1JNIGlzIG5vdCBz
ZXQNCiMgQ09ORklHX0kyQ19PQ09SRVMgaXMgbm90IHNldA0KIyBDT05GSUdf
STJDX1BDQV9QTEFURk9STSBpcyBub3Qgc2V0DQojIENPTkZJR19JMkNfUFhB
X1BDSSBpcyBub3Qgc2V0DQojIENPTkZJR19JMkNfU0lNVEVDIGlzIG5vdCBz
ZXQNCiMgQ09ORklHX0kyQ19WRVJTQVRJTEUgaXMgbm90IHNldA0KIyBDT05G
SUdfSTJDX1hJTElOWCBpcyBub3Qgc2V0DQoNCiMNCiMgRXh0ZXJuYWwgSTJD
L1NNQnVzIGFkYXB0ZXIgZHJpdmVycw0KIw0KIyBDT05GSUdfSTJDX1BBUlBP
UlRfTElHSFQgaXMgbm90IHNldA0KIyBDT05GSUdfSTJDX1RBT1NfRVZNIGlz
IG5vdCBzZXQNCg0KIw0KIyBPdGhlciBJMkMvU01CdXMgYnVzIGRyaXZlcnMN
CiMNCiMgQ09ORklHX0kyQ19ERUJVR19DT1JFIGlzIG5vdCBzZXQNCiMgQ09O
RklHX0kyQ19ERUJVR19BTEdPIGlzIG5vdCBzZXQNCiMgQ09ORklHX0kyQ19E
RUJVR19CVVMgaXMgbm90IHNldA0KIyBDT05GSUdfU1BJIGlzIG5vdCBzZXQN
CiMgQ09ORklHX0hTSSBpcyBub3Qgc2V0DQoNCiMNCiMgUFBTIHN1cHBvcnQN
CiMNCiMgQ09ORklHX1BQUyBpcyBub3Qgc2V0DQoNCiMNCiMgUFBTIGdlbmVy
YXRvcnMgc3VwcG9ydA0KIw0KDQojDQojIFBUUCBjbG9jayBzdXBwb3J0DQoj
DQoNCiMNCiMgRW5hYmxlIERldmljZSBEcml2ZXJzIC0+IFBQUyB0byBzZWUg
dGhlIFBUUCBjbG9jayBvcHRpb25zLg0KIw0KQ09ORklHX0FSQ0hfSEFWRV9D
VVNUT01fR1BJT19IPXkNCkNPTkZJR19BUkNIX1dBTlRfT1BUSU9OQUxfR1BJ
T0xJQj15DQojIENPTkZJR19HUElPTElCIGlzIG5vdCBzZXQNCiMgQ09ORklH
X1cxIGlzIG5vdCBzZXQNCiMgQ09ORklHX1BPV0VSX1NVUFBMWSBpcyBub3Qg
c2V0DQojIENPTkZJR19IV01PTiBpcyBub3Qgc2V0DQojIENPTkZJR19USEVS
TUFMIGlzIG5vdCBzZXQNCiMgQ09ORklHX1dBVENIRE9HIGlzIG5vdCBzZXQN
CkNPTkZJR19TU0JfUE9TU0lCTEU9eQ0KDQojDQojIFNvbmljcyBTaWxpY29u
IEJhY2twbGFuZQ0KIw0KIyBDT05GSUdfU1NCIGlzIG5vdCBzZXQNCkNPTkZJ
R19CQ01BX1BPU1NJQkxFPXkNCg0KIw0KIyBCcm9hZGNvbSBzcGVjaWZpYyBB
TUJBDQojDQojIENPTkZJR19CQ01BIGlzIG5vdCBzZXQNCg0KIw0KIyBNdWx0
aWZ1bmN0aW9uIGRldmljZSBkcml2ZXJzDQojDQojIENPTkZJR19NRkRfQ09S
RSBpcyBub3Qgc2V0DQojIENPTkZJR19NRkRfODhQTTg2MFggaXMgbm90IHNl
dA0KIyBDT05GSUdfTUZEX1NNNTAxIGlzIG5vdCBzZXQNCiMgQ09ORklHX0hU
Q19QQVNJQzMgaXMgbm90IHNldA0KIyBDT05GSUdfTUZEX0xNMzUzMyBpcyBu
b3Qgc2V0DQojIENPTkZJR19UUFM2MTA1WCBpcyBub3Qgc2V0DQojIENPTkZJ
R19UUFM2NTA3WCBpcyBub3Qgc2V0DQojIENPTkZJR19NRkRfVFBTNjUyMTcg
aXMgbm90IHNldA0KIyBDT05GSUdfVFdMNDAzMF9DT1JFIGlzIG5vdCBzZXQN
CiMgQ09ORklHX1RXTDYwNDBfQ09SRSBpcyBub3Qgc2V0DQojIENPTkZJR19N
RkRfU1RNUEUgaXMgbm90IHNldA0KIyBDT05GSUdfTUZEX1RDMzU4OVggaXMg
bm90IHNldA0KIyBDT05GSUdfTUZEX1RNSU8gaXMgbm90IHNldA0KIyBDT05G
SUdfTUZEX1Q3TDY2WEIgaXMgbm90IHNldA0KIyBDT05GSUdfTUZEX1RDNjM4
N1hCIGlzIG5vdCBzZXQNCiMgQ09ORklHX1BNSUNfREE5MDNYIGlzIG5vdCBz
ZXQNCiMgQ09ORklHX01GRF9EQTkwNTJfSTJDIGlzIG5vdCBzZXQNCiMgQ09O
RklHX1BNSUNfQURQNTUyMCBpcyBub3Qgc2V0DQojIENPTkZJR19NRkRfTUFY
Nzc2OTMgaXMgbm90IHNldA0KIyBDT05GSUdfTUZEX01BWDg5MjUgaXMgbm90
IHNldA0KIyBDT05GSUdfTUZEX01BWDg5OTcgaXMgbm90IHNldA0KIyBDT05G
SUdfTUZEX01BWDg5OTggaXMgbm90IHNldA0KIyBDT05GSUdfTUZEX1M1TV9D
T1JFIGlzIG5vdCBzZXQNCiMgQ09ORklHX01GRF9XTTg0MDAgaXMgbm90IHNl
dA0KIyBDT05GSUdfTUZEX1dNODMxWF9JMkMgaXMgbm90IHNldA0KIyBDT05G
SUdfTUZEX1dNODM1MF9JMkMgaXMgbm90IHNldA0KIyBDT05GSUdfTUZEX1dN
ODk5NCBpcyBub3Qgc2V0DQojIENPTkZJR19NRkRfUENGNTA2MzMgaXMgbm90
IHNldA0KIyBDT05GSUdfTUZEX01DMTNYWFhfSTJDIGlzIG5vdCBzZXQNCiMg
Q09ORklHX0FCWDUwMF9DT1JFIGlzIG5vdCBzZXQNCiMgQ09ORklHX01GRF9X
TDEyNzNfQ09SRSBpcyBub3Qgc2V0DQojIENPTkZJR19NRkRfVFBTNjUwOTAg
aXMgbm90IHNldA0KIyBDT05GSUdfTUZEX1JDNVQ1ODMgaXMgbm90IHNldA0K
IyBDT05GSUdfTUZEX1BBTE1BUyBpcyBub3Qgc2V0DQojIENPTkZJR19SRUdV
TEFUT1IgaXMgbm90IHNldA0KIyBDT05GSUdfTUVESUFfU1VQUE9SVCBpcyBu
b3Qgc2V0DQoNCiMNCiMgR3JhcGhpY3Mgc3VwcG9ydA0KIw0KIyBDT05GSUdf
RFJNIGlzIG5vdCBzZXQNCiMgQ09ORklHX1ZHQVNUQVRFIGlzIG5vdCBzZXQN
CiMgQ09ORklHX1ZJREVPX09VVFBVVF9DT05UUk9MIGlzIG5vdCBzZXQNCkNP
TkZJR19GQj15DQojIENPTkZJR19GSVJNV0FSRV9FRElEIGlzIG5vdCBzZXQN
CiMgQ09ORklHX0ZCX0REQyBpcyBub3Qgc2V0DQojIENPTkZJR19GQl9CT09U
X1ZFU0FfU1VQUE9SVCBpcyBub3Qgc2V0DQpDT05GSUdfRkJfQ0ZCX0ZJTExS
RUNUPXkNCkNPTkZJR19GQl9DRkJfQ09QWUFSRUE9eQ0KQ09ORklHX0ZCX0NG
Ql9JTUFHRUJMSVQ9eQ0KIyBDT05GSUdfRkJfQ0ZCX1JFVl9QSVhFTFNfSU5f
QllURSBpcyBub3Qgc2V0DQojIENPTkZJR19GQl9TWVNfRklMTFJFQ1QgaXMg
bm90IHNldA0KIyBDT05GSUdfRkJfU1lTX0NPUFlBUkVBIGlzIG5vdCBzZXQN
CiMgQ09ORklHX0ZCX1NZU19JTUFHRUJMSVQgaXMgbm90IHNldA0KIyBDT05G
SUdfRkJfRk9SRUlHTl9FTkRJQU4gaXMgbm90IHNldA0KIyBDT05GSUdfRkJf
U1lTX0ZPUFMgaXMgbm90IHNldA0KIyBDT05GSUdfRkJfV01UX0dFX1JPUFMg
aXMgbm90IHNldA0KIyBDT05GSUdfRkJfU1ZHQUxJQiBpcyBub3Qgc2V0DQoj
IENPTkZJR19GQl9NQUNNT0RFUyBpcyBub3Qgc2V0DQojIENPTkZJR19GQl9C
QUNLTElHSFQgaXMgbm90IHNldA0KIyBDT05GSUdfRkJfTU9ERV9IRUxQRVJT
IGlzIG5vdCBzZXQNCiMgQ09ORklHX0ZCX1RJTEVCTElUVElORyBpcyBub3Qg
c2V0DQoNCiMNCiMgRnJhbWUgYnVmZmVyIGhhcmR3YXJlIGRyaXZlcnMNCiMN
CkNPTkZJR19GQl9BUk1DTENEPXkNCiMgQ09ORklHX0ZCX1MxRDEzWFhYIGlz
IG5vdCBzZXQNCiMgQ09ORklHX0ZCX1ZJUlRVQUwgaXMgbm90IHNldA0KIyBD
T05GSUdfWEVOX0ZCREVWX0ZST05URU5EIGlzIG5vdCBzZXQNCiMgQ09ORklH
X0ZCX01FVFJPTk9NRSBpcyBub3Qgc2V0DQojIENPTkZJR19GQl9CUk9BRFNI
RUVUIGlzIG5vdCBzZXQNCiMgQ09ORklHX0ZCX0FVT19LMTkwWCBpcyBub3Qg
c2V0DQojIENPTkZJR19FWFlOT1NfVklERU8gaXMgbm90IHNldA0KIyBDT05G
SUdfQkFDS0xJR0hUX0xDRF9TVVBQT1JUIGlzIG5vdCBzZXQNCg0KIw0KIyBD
b25zb2xlIGRpc3BsYXkgZHJpdmVyIHN1cHBvcnQNCiMNCkNPTkZJR19EVU1N
WV9DT05TT0xFPXkNCkNPTkZJR19GUkFNRUJVRkZFUl9DT05TT0xFPXkNCiMg
Q09ORklHX0ZSQU1FQlVGRkVSX0NPTlNPTEVfREVURUNUX1BSSU1BUlkgaXMg
bm90IHNldA0KIyBDT05GSUdfRlJBTUVCVUZGRVJfQ09OU09MRV9ST1RBVElP
TiBpcyBub3Qgc2V0DQpDT05GSUdfRk9OVFM9eQ0KIyBDT05GSUdfRk9OVF84
eDggaXMgbm90IHNldA0KIyBDT05GSUdfRk9OVF84eDE2IGlzIG5vdCBzZXQN
CiMgQ09ORklHX0ZPTlRfNngxMSBpcyBub3Qgc2V0DQojIENPTkZJR19GT05U
Xzd4MTQgaXMgbm90IHNldA0KIyBDT05GSUdfRk9OVF9QRUFSTF84eDggaXMg
bm90IHNldA0KQ09ORklHX0ZPTlRfQUNPUk5fOHg4PXkNCiMgQ09ORklHX0ZP
TlRfTUlOSV80eDYgaXMgbm90IHNldA0KIyBDT05GSUdfRk9OVF9TVU44eDE2
IGlzIG5vdCBzZXQNCiMgQ09ORklHX0ZPTlRfU1VOMTJ4MjIgaXMgbm90IHNl
dA0KIyBDT05GSUdfRk9OVF8xMHgxOCBpcyBub3Qgc2V0DQojIENPTkZJR19M
T0dPIGlzIG5vdCBzZXQNCkNPTkZJR19TT1VORD15DQpDT05GSUdfU09VTkRf
T1NTX0NPUkU9eQ0KQ09ORklHX1NPVU5EX09TU19DT1JFX1BSRUNMQUlNPXkN
CkNPTkZJR19TTkQ9eQ0KQ09ORklHX1NORF9USU1FUj15DQpDT05GSUdfU05E
X1BDTT15DQojIENPTkZJR19TTkRfU0VRVUVOQ0VSIGlzIG5vdCBzZXQNCkNP
TkZJR19TTkRfT1NTRU1VTD15DQpDT05GSUdfU05EX01JWEVSX09TUz15DQpD
T05GSUdfU05EX1BDTV9PU1M9eQ0KQ09ORklHX1NORF9QQ01fT1NTX1BMVUdJ
TlM9eQ0KIyBDT05GSUdfU05EX0hSVElNRVIgaXMgbm90IHNldA0KIyBDT05G
SUdfU05EX0RZTkFNSUNfTUlOT1JTIGlzIG5vdCBzZXQNCkNPTkZJR19TTkRf
U1VQUE9SVF9PTERfQVBJPXkNCkNPTkZJR19TTkRfVkVSQk9TRV9QUk9DRlM9
eQ0KIyBDT05GSUdfU05EX1ZFUkJPU0VfUFJJTlRLIGlzIG5vdCBzZXQNCiMg
Q09ORklHX1NORF9ERUJVRyBpcyBub3Qgc2V0DQpDT05GSUdfU05EX1ZNQVNU
RVI9eQ0KIyBDT05GSUdfU05EX1JBV01JRElfU0VRIGlzIG5vdCBzZXQNCiMg
Q09ORklHX1NORF9PUEwzX0xJQl9TRVEgaXMgbm90IHNldA0KIyBDT05GSUdf
U05EX09QTDRfTElCX1NFUSBpcyBub3Qgc2V0DQojIENPTkZJR19TTkRfU0JB
V0VfU0VRIGlzIG5vdCBzZXQNCiMgQ09ORklHX1NORF9FTVUxMEsxX1NFUSBp
cyBub3Qgc2V0DQpDT05GSUdfU05EX0FDOTdfQ09ERUM9eQ0KQ09ORklHX1NO
RF9EUklWRVJTPXkNCiMgQ09ORklHX1NORF9EVU1NWSBpcyBub3Qgc2V0DQoj
IENPTkZJR19TTkRfQUxPT1AgaXMgbm90IHNldA0KIyBDT05GSUdfU05EX01U
UEFWIGlzIG5vdCBzZXQNCiMgQ09ORklHX1NORF9TRVJJQUxfVTE2NTUwIGlz
IG5vdCBzZXQNCiMgQ09ORklHX1NORF9NUFU0MDEgaXMgbm90IHNldA0KIyBD
T05GSUdfU05EX0FDOTdfUE9XRVJfU0FWRSBpcyBub3Qgc2V0DQpDT05GSUdf
U05EX0FSTT15DQpDT05GSUdfU05EX0FSTUFBQ0k9eQ0KIyBDT05GSUdfU05E
X1NPQyBpcyBub3Qgc2V0DQojIENPTkZJR19TT1VORF9QUklNRSBpcyBub3Qg
c2V0DQpDT05GSUdfQUM5N19CVVM9eQ0KDQojDQojIEhJRCBzdXBwb3J0DQoj
DQpDT05GSUdfSElEPXkNCiMgQ09ORklHX0hJRFJBVyBpcyBub3Qgc2V0DQpD
T05GSUdfSElEX0dFTkVSSUM9eQ0KDQojDQojIFNwZWNpYWwgSElEIGRyaXZl
cnMNCiMNCiMgQ09ORklHX1VTQl9BUkNIX0hBU19PSENJIGlzIG5vdCBzZXQN
CiMgQ09ORklHX1VTQl9BUkNIX0hBU19FSENJIGlzIG5vdCBzZXQNCiMgQ09O
RklHX1VTQl9BUkNIX0hBU19YSENJIGlzIG5vdCBzZXQNCkNPTkZJR19VU0Jf
U1VQUE9SVD15DQpDT05GSUdfVVNCX0FSQ0hfSEFTX0hDRD15DQojIENPTkZJ
R19VU0IgaXMgbm90IHNldA0KIyBDT05GSUdfVVNCX09UR19XSElURUxJU1Qg
aXMgbm90IHNldA0KIyBDT05GSUdfVVNCX09UR19CTEFDS0xJU1RfSFVCIGlz
IG5vdCBzZXQNCg0KIw0KIyBOT1RFOiBVU0JfU1RPUkFHRSBkZXBlbmRzIG9u
IFNDU0kgYnV0IEJMS19ERVZfU0QgbWF5DQojDQojIENPTkZJR19VU0JfR0FE
R0VUIGlzIG5vdCBzZXQNCg0KIw0KIyBPVEcgYW5kIHJlbGF0ZWQgaW5mcmFz
dHJ1Y3R1cmUNCiMNCkNPTkZJR19NTUM9eQ0KIyBDT05GSUdfTU1DX0RFQlVH
IGlzIG5vdCBzZXQNCiMgQ09ORklHX01NQ19VTlNBRkVfUkVTVU1FIGlzIG5v
dCBzZXQNCiMgQ09ORklHX01NQ19DTEtHQVRFIGlzIG5vdCBzZXQNCg0KIw0K
IyBNTUMvU0QvU0RJTyBDYXJkIERyaXZlcnMNCiMNCkNPTkZJR19NTUNfQkxP
Q0s9eQ0KQ09ORklHX01NQ19CTE9DS19NSU5PUlM9OA0KQ09ORklHX01NQ19C
TE9DS19CT1VOQ0U9eQ0KIyBDT05GSUdfU0RJT19VQVJUIGlzIG5vdCBzZXQN
CiMgQ09ORklHX01NQ19URVNUIGlzIG5vdCBzZXQNCg0KIw0KIyBNTUMvU0Qv
U0RJTyBIb3N0IENvbnRyb2xsZXIgRHJpdmVycw0KIw0KQ09ORklHX01NQ19B
Uk1NTUNJPXkNCiMgQ09ORklHX01NQ19TREhDSSBpcyBub3Qgc2V0DQojIENP
TkZJR19NTUNfU0RIQ0lfUFhBVjMgaXMgbm90IHNldA0KIyBDT05GSUdfTU1D
X1NESENJX1BYQVYyIGlzIG5vdCBzZXQNCiMgQ09ORklHX01NQ19EVyBpcyBu
b3Qgc2V0DQojIENPTkZJR19NRU1TVElDSyBpcyBub3Qgc2V0DQojIENPTkZJ
R19ORVdfTEVEUyBpcyBub3Qgc2V0DQojIENPTkZJR19BQ0NFU1NJQklMSVRZ
IGlzIG5vdCBzZXQNCkNPTkZJR19SVENfTElCPXkNCiMgQ09ORklHX1JUQ19D
TEFTUyBpcyBub3Qgc2V0DQojIENPTkZJR19ETUFERVZJQ0VTIGlzIG5vdCBz
ZXQNCiMgQ09ORklHX0FVWERJU1BMQVkgaXMgbm90IHNldA0KIyBDT05GSUdf
VUlPIGlzIG5vdCBzZXQNCg0KIw0KIyBWaXJ0aW8gZHJpdmVycw0KIw0KIyBD
T05GSUdfVklSVElPX0JBTExPT04gaXMgbm90IHNldA0KIyBDT05GSUdfVklS
VElPX01NSU8gaXMgbm90IHNldA0KDQojDQojIE1pY3Jvc29mdCBIeXBlci1W
IGd1ZXN0IHN1cHBvcnQNCiMNCg0KIw0KIyBYZW4gZHJpdmVyIHN1cHBvcnQN
CiMNCiMgQ09ORklHX1hFTl9CQUxMT09OIGlzIG5vdCBzZXQNCkNPTkZJR19Y
RU5fREVWX0VWVENITj15DQpDT05GSUdfWEVOX0JBQ0tFTkQ9eQ0KQ09ORklH
X1hFTkZTPXkNCkNPTkZJR19YRU5fQ09NUEFUX1hFTkZTPXkNCiMgQ09ORklH
X1hFTl9TWVNfSFlQRVJWSVNPUiBpcyBub3Qgc2V0DQpDT05GSUdfWEVOX1hF
TkJVU19GUk9OVEVORD15DQojIENPTkZJR19YRU5fR05UREVWIGlzIG5vdCBz
ZXQNCiMgQ09ORklHX1hFTl9HUkFOVF9ERVZfQUxMT0MgaXMgbm90IHNldA0K
Q09ORklHX1hFTl9QUklWQ01EPXkNCiMgQ09ORklHX1NUQUdJTkcgaXMgbm90
IHNldA0KQ09ORklHX0NMS0RFVl9MT09LVVA9eQ0KQ09ORklHX0hBVkVfTUFD
SF9DTEtERVY9eQ0KDQojDQojIEhhcmR3YXJlIFNwaW5sb2NrIGRyaXZlcnMN
CiMNCkNPTkZJR19DTEtTUkNfTU1JTz15DQpDT05GSUdfSU9NTVVfU1VQUE9S
VD15DQoNCiMNCiMgUmVtb3RlcHJvYyBkcml2ZXJzIChFWFBFUklNRU5UQUwp
DQojDQoNCiMNCiMgUnBtc2cgZHJpdmVycyAoRVhQRVJJTUVOVEFMKQ0KIw0K
IyBDT05GSUdfVklSVF9EUklWRVJTIGlzIG5vdCBzZXQNCiMgQ09ORklHX1BN
X0RFVkZSRVEgaXMgbm90IHNldA0KIyBDT05GSUdfRVhUQ09OIGlzIG5vdCBz
ZXQNCiMgQ09ORklHX01FTU9SWSBpcyBub3Qgc2V0DQojIENPTkZJR19JSU8g
aXMgbm90IHNldA0KDQojDQojIEZpbGUgc3lzdGVtcw0KIw0KQ09ORklHX0VY
VDJfRlM9eQ0KIyBDT05GSUdfRVhUMl9GU19YQVRUUiBpcyBub3Qgc2V0DQoj
IENPTkZJR19FWFQyX0ZTX1hJUCBpcyBub3Qgc2V0DQpDT05GSUdfRVhUM19G
Uz15DQpDT05GSUdfRVhUM19ERUZBVUxUU19UT19PUkRFUkVEPXkNCkNPTkZJ
R19FWFQzX0ZTX1hBVFRSPXkNCiMgQ09ORklHX0VYVDNfRlNfUE9TSVhfQUNM
IGlzIG5vdCBzZXQNCiMgQ09ORklHX0VYVDNfRlNfU0VDVVJJVFkgaXMgbm90
IHNldA0KQ09ORklHX0VYVDRfRlM9eQ0KQ09ORklHX0VYVDRfRlNfWEFUVFI9
eQ0KIyBDT05GSUdfRVhUNF9GU19QT1NJWF9BQ0wgaXMgbm90IHNldA0KIyBD
T05GSUdfRVhUNF9GU19TRUNVUklUWSBpcyBub3Qgc2V0DQojIENPTkZJR19F
WFQ0X0RFQlVHIGlzIG5vdCBzZXQNCkNPTkZJR19KQkQ9eQ0KQ09ORklHX0pC
RDI9eQ0KQ09ORklHX0ZTX01CQ0FDSEU9eQ0KIyBDT05GSUdfUkVJU0VSRlNf
RlMgaXMgbm90IHNldA0KIyBDT05GSUdfSkZTX0ZTIGlzIG5vdCBzZXQNCiMg
Q09ORklHX1hGU19GUyBpcyBub3Qgc2V0DQojIENPTkZJR19HRlMyX0ZTIGlz
IG5vdCBzZXQNCiMgQ09ORklHX0JUUkZTX0ZTIGlzIG5vdCBzZXQNCiMgQ09O
RklHX05JTEZTMl9GUyBpcyBub3Qgc2V0DQpDT05GSUdfRlNfUE9TSVhfQUNM
PXkNCkNPTkZJR19FWFBPUlRGUz15DQpDT05GSUdfRklMRV9MT0NLSU5HPXkN
CkNPTkZJR19GU05PVElGWT15DQpDT05GSUdfRE5PVElGWT15DQpDT05GSUdf
SU5PVElGWV9VU0VSPXkNCiMgQ09ORklHX0ZBTk9USUZZIGlzIG5vdCBzZXQN
CiMgQ09ORklHX1FVT1RBIGlzIG5vdCBzZXQNCiMgQ09ORklHX1FVT1RBQ1RM
IGlzIG5vdCBzZXQNCiMgQ09ORklHX0FVVE9GUzRfRlMgaXMgbm90IHNldA0K
IyBDT05GSUdfRlVTRV9GUyBpcyBub3Qgc2V0DQpDT05GSUdfR0VORVJJQ19B
Q0w9eQ0KDQojDQojIENhY2hlcw0KIw0KIyBDT05GSUdfRlNDQUNIRSBpcyBu
b3Qgc2V0DQoNCiMNCiMgQ0QtUk9NL0RWRCBGaWxlc3lzdGVtcw0KIw0KIyBD
T05GSUdfSVNPOTY2MF9GUyBpcyBub3Qgc2V0DQojIENPTkZJR19VREZfRlMg
aXMgbm90IHNldA0KDQojDQojIERPUy9GQVQvTlQgRmlsZXN5c3RlbXMNCiMN
CkNPTkZJR19GQVRfRlM9eQ0KIyBDT05GSUdfTVNET1NfRlMgaXMgbm90IHNl
dA0KQ09ORklHX1ZGQVRfRlM9eQ0KQ09ORklHX0ZBVF9ERUZBVUxUX0NPREVQ
QUdFPTQzNw0KQ09ORklHX0ZBVF9ERUZBVUxUX0lPQ0hBUlNFVD0iaXNvODg1
OS0xIg0KIyBDT05GSUdfTlRGU19GUyBpcyBub3Qgc2V0DQoNCiMNCiMgUHNl
dWRvIGZpbGVzeXN0ZW1zDQojDQpDT05GSUdfUFJPQ19GUz15DQpDT05GSUdf
UFJPQ19TWVNDVEw9eQ0KQ09ORklHX1BST0NfUEFHRV9NT05JVE9SPXkNCkNP
TkZJR19TWVNGUz15DQpDT05GSUdfVE1QRlM9eQ0KQ09ORklHX1RNUEZTX1BP
U0lYX0FDTD15DQpDT05GSUdfVE1QRlNfWEFUVFI9eQ0KIyBDT05GSUdfSFVH
RVRMQl9QQUdFIGlzIG5vdCBzZXQNCiMgQ09ORklHX0NPTkZJR0ZTX0ZTIGlz
IG5vdCBzZXQNCkNPTkZJR19NSVNDX0ZJTEVTWVNURU1TPXkNCiMgQ09ORklH
X0FERlNfRlMgaXMgbm90IHNldA0KIyBDT05GSUdfQUZGU19GUyBpcyBub3Qg
c2V0DQojIENPTkZJR19IRlNfRlMgaXMgbm90IHNldA0KIyBDT05GSUdfSEZT
UExVU19GUyBpcyBub3Qgc2V0DQojIENPTkZJR19CRUZTX0ZTIGlzIG5vdCBz
ZXQNCiMgQ09ORklHX0JGU19GUyBpcyBub3Qgc2V0DQojIENPTkZJR19FRlNf
RlMgaXMgbm90IHNldA0KQ09ORklHX0pGRlMyX0ZTPXkNCkNPTkZJR19KRkZT
Ml9GU19ERUJVRz0wDQpDT05GSUdfSkZGUzJfRlNfV1JJVEVCVUZGRVI9eQ0K
IyBDT05GSUdfSkZGUzJfRlNfV0JVRl9WRVJJRlkgaXMgbm90IHNldA0KIyBD
T05GSUdfSkZGUzJfU1VNTUFSWSBpcyBub3Qgc2V0DQojIENPTkZJR19KRkZT
Ml9GU19YQVRUUiBpcyBub3Qgc2V0DQojIENPTkZJR19KRkZTMl9DT01QUkVT
U0lPTl9PUFRJT05TIGlzIG5vdCBzZXQNCkNPTkZJR19KRkZTMl9aTElCPXkN
CiMgQ09ORklHX0pGRlMyX0xaTyBpcyBub3Qgc2V0DQpDT05GSUdfSkZGUzJf
UlRJTUU9eQ0KIyBDT05GSUdfSkZGUzJfUlVCSU4gaXMgbm90IHNldA0KIyBD
T05GSUdfTE9HRlMgaXMgbm90IHNldA0KQ09ORklHX0NSQU1GUz15DQojIENP
TkZJR19TUVVBU0hGUyBpcyBub3Qgc2V0DQojIENPTkZJR19WWEZTX0ZTIGlz
IG5vdCBzZXQNCkNPTkZJR19NSU5JWF9GUz15DQojIENPTkZJR19PTUZTX0ZT
IGlzIG5vdCBzZXQNCiMgQ09ORklHX0hQRlNfRlMgaXMgbm90IHNldA0KIyBD
T05GSUdfUU5YNEZTX0ZTIGlzIG5vdCBzZXQNCiMgQ09ORklHX1FOWDZGU19G
UyBpcyBub3Qgc2V0DQpDT05GSUdfUk9NRlNfRlM9eQ0KQ09ORklHX1JPTUZT
X0JBQ0tFRF9CWV9CTE9DSz15DQojIENPTkZJR19ST01GU19CQUNLRURfQllf
TVREIGlzIG5vdCBzZXQNCiMgQ09ORklHX1JPTUZTX0JBQ0tFRF9CWV9CT1RI
IGlzIG5vdCBzZXQNCkNPTkZJR19ST01GU19PTl9CTE9DSz15DQojIENPTkZJ
R19QU1RPUkUgaXMgbm90IHNldA0KIyBDT05GSUdfU1lTVl9GUyBpcyBub3Qg
c2V0DQojIENPTkZJR19VRlNfRlMgaXMgbm90IHNldA0KQ09ORklHX05FVFdP
UktfRklMRVNZU1RFTVM9eQ0KQ09ORklHX05GU19GUz15DQpDT05GSUdfTkZT
X1YyPXkNCkNPTkZJR19ORlNfVjM9eQ0KIyBDT05GSUdfTkZTX1YzX0FDTCBp
cyBub3Qgc2V0DQojIENPTkZJR19ORlNfVjQgaXMgbm90IHNldA0KQ09ORklH
X1JPT1RfTkZTPXkNCkNPTkZJR19ORlNEPXkNCkNPTkZJR19ORlNEX1YzPXkN
CiMgQ09ORklHX05GU0RfVjNfQUNMIGlzIG5vdCBzZXQNCiMgQ09ORklHX05G
U0RfVjQgaXMgbm90IHNldA0KQ09ORklHX0xPQ0tEPXkNCkNPTkZJR19MT0NL
RF9WND15DQpDT05GSUdfTkZTX0NPTU1PTj15DQpDT05GSUdfU1VOUlBDPXkN
CiMgQ09ORklHX1NVTlJQQ19ERUJVRyBpcyBub3Qgc2V0DQojIENPTkZJR19D
RVBIX0ZTIGlzIG5vdCBzZXQNCiMgQ09ORklHX0NJRlMgaXMgbm90IHNldA0K
IyBDT05GSUdfTkNQX0ZTIGlzIG5vdCBzZXQNCiMgQ09ORklHX0NPREFfRlMg
aXMgbm90IHNldA0KIyBDT05GSUdfQUZTX0ZTIGlzIG5vdCBzZXQNCkNPTkZJ
R19OTFM9eQ0KQ09ORklHX05MU19ERUZBVUxUPSJpc284ODU5LTEiDQojIENP
TkZJR19OTFNfQ09ERVBBR0VfNDM3IGlzIG5vdCBzZXQNCiMgQ09ORklHX05M
U19DT0RFUEFHRV83MzcgaXMgbm90IHNldA0KIyBDT05GSUdfTkxTX0NPREVQ
QUdFXzc3NSBpcyBub3Qgc2V0DQpDT05GSUdfTkxTX0NPREVQQUdFXzg1MD15
DQojIENPTkZJR19OTFNfQ09ERVBBR0VfODUyIGlzIG5vdCBzZXQNCiMgQ09O
RklHX05MU19DT0RFUEFHRV84NTUgaXMgbm90IHNldA0KIyBDT05GSUdfTkxT
X0NPREVQQUdFXzg1NyBpcyBub3Qgc2V0DQojIENPTkZJR19OTFNfQ09ERVBB
R0VfODYwIGlzIG5vdCBzZXQNCiMgQ09ORklHX05MU19DT0RFUEFHRV84NjEg
aXMgbm90IHNldA0KIyBDT05GSUdfTkxTX0NPREVQQUdFXzg2MiBpcyBub3Qg
c2V0DQojIENPTkZJR19OTFNfQ09ERVBBR0VfODYzIGlzIG5vdCBzZXQNCiMg
Q09ORklHX05MU19DT0RFUEFHRV84NjQgaXMgbm90IHNldA0KIyBDT05GSUdf
TkxTX0NPREVQQUdFXzg2NSBpcyBub3Qgc2V0DQojIENPTkZJR19OTFNfQ09E
RVBBR0VfODY2IGlzIG5vdCBzZXQNCiMgQ09ORklHX05MU19DT0RFUEFHRV84
NjkgaXMgbm90IHNldA0KIyBDT05GSUdfTkxTX0NPREVQQUdFXzkzNiBpcyBu
b3Qgc2V0DQojIENPTkZJR19OTFNfQ09ERVBBR0VfOTUwIGlzIG5vdCBzZXQN
CiMgQ09ORklHX05MU19DT0RFUEFHRV85MzIgaXMgbm90IHNldA0KIyBDT05G
SUdfTkxTX0NPREVQQUdFXzk0OSBpcyBub3Qgc2V0DQojIENPTkZJR19OTFNf
Q09ERVBBR0VfODc0IGlzIG5vdCBzZXQNCiMgQ09ORklHX05MU19JU084ODU5
XzggaXMgbm90IHNldA0KIyBDT05GSUdfTkxTX0NPREVQQUdFXzEyNTAgaXMg
bm90IHNldA0KIyBDT05GSUdfTkxTX0NPREVQQUdFXzEyNTEgaXMgbm90IHNl
dA0KIyBDT05GSUdfTkxTX0FTQ0lJIGlzIG5vdCBzZXQNCkNPTkZJR19OTFNf
SVNPODg1OV8xPXkNCiMgQ09ORklHX05MU19JU084ODU5XzIgaXMgbm90IHNl
dA0KIyBDT05GSUdfTkxTX0lTTzg4NTlfMyBpcyBub3Qgc2V0DQojIENPTkZJ
R19OTFNfSVNPODg1OV80IGlzIG5vdCBzZXQNCiMgQ09ORklHX05MU19JU084
ODU5XzUgaXMgbm90IHNldA0KIyBDT05GSUdfTkxTX0lTTzg4NTlfNiBpcyBu
b3Qgc2V0DQojIENPTkZJR19OTFNfSVNPODg1OV83IGlzIG5vdCBzZXQNCiMg
Q09ORklHX05MU19JU084ODU5XzkgaXMgbm90IHNldA0KIyBDT05GSUdfTkxT
X0lTTzg4NTlfMTMgaXMgbm90IHNldA0KIyBDT05GSUdfTkxTX0lTTzg4NTlf
MTQgaXMgbm90IHNldA0KIyBDT05GSUdfTkxTX0lTTzg4NTlfMTUgaXMgbm90
IHNldA0KIyBDT05GSUdfTkxTX0tPSThfUiBpcyBub3Qgc2V0DQojIENPTkZJ
R19OTFNfS09JOF9VIGlzIG5vdCBzZXQNCiMgQ09ORklHX05MU19NQUNfUk9N
QU4gaXMgbm90IHNldA0KIyBDT05GSUdfTkxTX01BQ19DRUxUSUMgaXMgbm90
IHNldA0KIyBDT05GSUdfTkxTX01BQ19DRU5URVVSTyBpcyBub3Qgc2V0DQoj
IENPTkZJR19OTFNfTUFDX0NST0FUSUFOIGlzIG5vdCBzZXQNCiMgQ09ORklH
X05MU19NQUNfQ1lSSUxMSUMgaXMgbm90IHNldA0KIyBDT05GSUdfTkxTX01B
Q19HQUVMSUMgaXMgbm90IHNldA0KIyBDT05GSUdfTkxTX01BQ19HUkVFSyBp
cyBub3Qgc2V0DQojIENPTkZJR19OTFNfTUFDX0lDRUxBTkQgaXMgbm90IHNl
dA0KIyBDT05GSUdfTkxTX01BQ19JTlVJVCBpcyBub3Qgc2V0DQojIENPTkZJ
R19OTFNfTUFDX1JPTUFOSUFOIGlzIG5vdCBzZXQNCiMgQ09ORklHX05MU19N
QUNfVFVSS0lTSCBpcyBub3Qgc2V0DQojIENPTkZJR19OTFNfVVRGOCBpcyBu
b3Qgc2V0DQoNCiMNCiMgS2VybmVsIGhhY2tpbmcNCiMNCiMgQ09ORklHX1BS
SU5US19USU1FIGlzIG5vdCBzZXQNCkNPTkZJR19ERUZBVUxUX01FU1NBR0Vf
TE9HTEVWRUw9NA0KQ09ORklHX0VOQUJMRV9XQVJOX0RFUFJFQ0FURUQ9eQ0K
Q09ORklHX0VOQUJMRV9NVVNUX0NIRUNLPXkNCkNPTkZJR19GUkFNRV9XQVJO
PTEwMjQNCkNPTkZJR19NQUdJQ19TWVNSUT15DQojIENPTkZJR19TVFJJUF9B
U01fU1lNUyBpcyBub3Qgc2V0DQojIENPTkZJR19SRUFEQUJMRV9BU00gaXMg
bm90IHNldA0KIyBDT05GSUdfVU5VU0VEX1NZTUJPTFMgaXMgbm90IHNldA0K
IyBDT05GSUdfREVCVUdfRlMgaXMgbm90IHNldA0KIyBDT05GSUdfSEVBREVS
U19DSEVDSyBpcyBub3Qgc2V0DQojIENPTkZJR19ERUJVR19TRUNUSU9OX01J
U01BVENIIGlzIG5vdCBzZXQNCkNPTkZJR19ERUJVR19LRVJORUw9eQ0KIyBD
T05GSUdfREVCVUdfU0hJUlEgaXMgbm90IHNldA0KIyBDT05GSUdfTE9DS1VQ
X0RFVEVDVE9SIGlzIG5vdCBzZXQNCiMgQ09ORklHX0hBUkRMT0NLVVBfREVU
RUNUT1IgaXMgbm90IHNldA0KIyBDT05GSUdfUEFOSUNfT05fT09QUyBpcyBu
b3Qgc2V0DQpDT05GSUdfUEFOSUNfT05fT09QU19WQUxVRT0wDQojIENPTkZJ
R19ERVRFQ1RfSFVOR19UQVNLIGlzIG5vdCBzZXQNCkNPTkZJR19TQ0hFRF9E
RUJVRz15DQojIENPTkZJR19TQ0hFRFNUQVRTIGlzIG5vdCBzZXQNCiMgQ09O
RklHX1RJTUVSX1NUQVRTIGlzIG5vdCBzZXQNCiMgQ09ORklHX0RFQlVHX09C
SkVDVFMgaXMgbm90IHNldA0KIyBDT05GSUdfREVCVUdfU0xBQiBpcyBub3Qg
c2V0DQojIENPTkZJR19ERUJVR19LTUVNTEVBSyBpcyBub3Qgc2V0DQojIENP
TkZJR19ERUJVR19SVF9NVVRFWEVTIGlzIG5vdCBzZXQNCiMgQ09ORklHX1JU
X01VVEVYX1RFU1RFUiBpcyBub3Qgc2V0DQojIENPTkZJR19ERUJVR19TUElO
TE9DSyBpcyBub3Qgc2V0DQojIENPTkZJR19ERUJVR19NVVRFWEVTIGlzIG5v
dCBzZXQNCiMgQ09ORklHX0RFQlVHX0xPQ0tfQUxMT0MgaXMgbm90IHNldA0K
IyBDT05GSUdfUFJPVkVfTE9DS0lORyBpcyBub3Qgc2V0DQojIENPTkZJR19T
UEFSU0VfUkNVX1BPSU5URVIgaXMgbm90IHNldA0KIyBDT05GSUdfTE9DS19T
VEFUIGlzIG5vdCBzZXQNCiMgQ09ORklHX0RFQlVHX0FUT01JQ19TTEVFUCBp
cyBub3Qgc2V0DQojIENPTkZJR19ERUJVR19MT0NLSU5HX0FQSV9TRUxGVEVT
VFMgaXMgbm90IHNldA0KIyBDT05GSUdfREVCVUdfU1RBQ0tfVVNBR0UgaXMg
bm90IHNldA0KIyBDT05GSUdfREVCVUdfS09CSkVDVCBpcyBub3Qgc2V0DQoj
IENPTkZJR19ERUJVR19ISUdITUVNIGlzIG5vdCBzZXQNCkNPTkZJR19ERUJV
R19CVUdWRVJCT1NFPXkNCiMgQ09ORklHX0RFQlVHX0lORk8gaXMgbm90IHNl
dA0KIyBDT05GSUdfREVCVUdfVk0gaXMgbm90IHNldA0KIyBDT05GSUdfREVC
VUdfV1JJVEVDT1VOVCBpcyBub3Qgc2V0DQojIENPTkZJR19ERUJVR19NRU1P
UllfSU5JVCBpcyBub3Qgc2V0DQojIENPTkZJR19ERUJVR19MSVNUIGlzIG5v
dCBzZXQNCiMgQ09ORklHX1RFU1RfTElTVF9TT1JUIGlzIG5vdCBzZXQNCiMg
Q09ORklHX0RFQlVHX1NHIGlzIG5vdCBzZXQNCiMgQ09ORklHX0RFQlVHX05P
VElGSUVSUyBpcyBub3Qgc2V0DQojIENPTkZJR19ERUJVR19DUkVERU5USUFM
UyBpcyBub3Qgc2V0DQojIENPTkZJR19CT09UX1BSSU5US19ERUxBWSBpcyBu
b3Qgc2V0DQojIENPTkZJR19SQ1VfVE9SVFVSRV9URVNUIGlzIG5vdCBzZXQN
CiMgQ09ORklHX1JDVV9UUkFDRSBpcyBub3Qgc2V0DQojIENPTkZJR19CQUNL
VFJBQ0VfU0VMRl9URVNUIGlzIG5vdCBzZXQNCiMgQ09ORklHX0RFQlVHX0JM
T0NLX0VYVF9ERVZUIGlzIG5vdCBzZXQNCiMgQ09ORklHX0RFQlVHX0ZPUkNF
X1dFQUtfUEVSX0NQVSBpcyBub3Qgc2V0DQojIENPTkZJR19GQVVMVF9JTkpF
Q1RJT04gaXMgbm90IHNldA0KIyBDT05GSUdfTEFURU5DWVRPUCBpcyBub3Qg
c2V0DQojIENPTkZJR19ERUJVR19QQUdFQUxMT0MgaXMgbm90IHNldA0KQ09O
RklHX0hBVkVfRlVOQ1RJT05fVFJBQ0VSPXkNCkNPTkZJR19IQVZFX0ZVTkNU
SU9OX0dSQVBIX1RSQUNFUj15DQpDT05GSUdfSEFWRV9EWU5BTUlDX0ZUUkFD
RT15DQpDT05GSUdfSEFWRV9GVFJBQ0VfTUNPVU5UX1JFQ09SRD15DQpDT05G
SUdfSEFWRV9DX1JFQ09SRE1DT1VOVD15DQpDT05GSUdfVFJBQ0lOR19TVVBQ
T1JUPXkNCkNPTkZJR19GVFJBQ0U9eQ0KIyBDT05GSUdfRlVOQ1RJT05fVFJB
Q0VSIGlzIG5vdCBzZXQNCiMgQ09ORklHX0lSUVNPRkZfVFJBQ0VSIGlzIG5v
dCBzZXQNCiMgQ09ORklHX1NDSEVEX1RSQUNFUiBpcyBub3Qgc2V0DQojIENP
TkZJR19FTkFCTEVfREVGQVVMVF9UUkFDRVJTIGlzIG5vdCBzZXQNCkNPTkZJ
R19CUkFOQ0hfUFJPRklMRV9OT05FPXkNCiMgQ09ORklHX1BST0ZJTEVfQU5O
T1RBVEVEX0JSQU5DSEVTIGlzIG5vdCBzZXQNCiMgQ09ORklHX1BST0ZJTEVf
QUxMX0JSQU5DSEVTIGlzIG5vdCBzZXQNCiMgQ09ORklHX1NUQUNLX1RSQUNF
UiBpcyBub3Qgc2V0DQojIENPTkZJR19CTEtfREVWX0lPX1RSQUNFIGlzIG5v
dCBzZXQNCiMgQ09ORklHX1BST0JFX0VWRU5UUyBpcyBub3Qgc2V0DQojIENP
TkZJR19ETUFfQVBJX0RFQlVHIGlzIG5vdCBzZXQNCiMgQ09ORklHX0FUT01J
QzY0X1NFTEZURVNUIGlzIG5vdCBzZXQNCiMgQ09ORklHX1NBTVBMRVMgaXMg
bm90IHNldA0KQ09ORklHX0hBVkVfQVJDSF9LR0RCPXkNCiMgQ09ORklHX0tH
REIgaXMgbm90IHNldA0KIyBDT05GSUdfVEVTVF9LU1RSVE9YIGlzIG5vdCBz
ZXQNCiMgQ09ORklHX1NUUklDVF9ERVZNRU0gaXMgbm90IHNldA0KQ09ORklH
X0FSTV9VTldJTkQ9eQ0KQ09ORklHX0RFQlVHX1VTRVI9eQ0KQ09ORklHX0RF
QlVHX0xMPXkNCkNPTkZJR19ERUJVR19MTF9VQVJUX05PTkU9eQ0KIyBDT05G
SUdfREVCVUdfSUNFRENDIGlzIG5vdCBzZXQNCiMgQ09ORklHX0RFQlVHX1NF
TUlIT1NUSU5HIGlzIG5vdCBzZXQNCkNPTkZJR19FQVJMWV9QUklOVEs9eQ0K
IyBDT05GSUdfT0NfRVRNIGlzIG5vdCBzZXQNCg0KIw0KIyBTZWN1cml0eSBv
cHRpb25zDQojDQojIENPTkZJR19LRVlTIGlzIG5vdCBzZXQNCiMgQ09ORklH
X1NFQ1VSSVRZX0RNRVNHX1JFU1RSSUNUIGlzIG5vdCBzZXQNCiMgQ09ORklH
X1NFQ1VSSVRZIGlzIG5vdCBzZXQNCiMgQ09ORklHX1NFQ1VSSVRZRlMgaXMg
bm90IHNldA0KQ09ORklHX0RFRkFVTFRfU0VDVVJJVFlfREFDPXkNCkNPTkZJ
R19ERUZBVUxUX1NFQ1VSSVRZPSIiDQpDT05GSUdfQ1JZUFRPPXkNCg0KIw0K
IyBDcnlwdG8gY29yZSBvciBoZWxwZXINCiMNCkNPTkZJR19DUllQVE9fQUxH
QVBJPXkNCkNPTkZJR19DUllQVE9fQUxHQVBJMj15DQpDT05GSUdfQ1JZUFRP
X0FFQUQyPXkNCkNPTkZJR19DUllQVE9fQkxLQ0lQSEVSPXkNCkNPTkZJR19D
UllQVE9fQkxLQ0lQSEVSMj15DQpDT05GSUdfQ1JZUFRPX0hBU0g9eQ0KQ09O
RklHX0NSWVBUT19IQVNIMj15DQpDT05GSUdfQ1JZUFRPX1JORz15DQpDT05G
SUdfQ1JZUFRPX1JORzI9eQ0KQ09ORklHX0NSWVBUT19QQ09NUDI9eQ0KQ09O
RklHX0NSWVBUT19NQU5BR0VSPXkNCkNPTkZJR19DUllQVE9fTUFOQUdFUjI9
eQ0KIyBDT05GSUdfQ1JZUFRPX1VTRVIgaXMgbm90IHNldA0KQ09ORklHX0NS
WVBUT19NQU5BR0VSX0RJU0FCTEVfVEVTVFM9eQ0KIyBDT05GSUdfQ1JZUFRP
X0dGMTI4TVVMIGlzIG5vdCBzZXQNCiMgQ09ORklHX0NSWVBUT19OVUxMIGlz
IG5vdCBzZXQNCkNPTkZJR19DUllQVE9fV09SS1FVRVVFPXkNCiMgQ09ORklH
X0NSWVBUT19DUllQVEQgaXMgbm90IHNldA0KIyBDT05GSUdfQ1JZUFRPX0FV
VEhFTkMgaXMgbm90IHNldA0KDQojDQojIEF1dGhlbnRpY2F0ZWQgRW5jcnlw
dGlvbiB3aXRoIEFzc29jaWF0ZWQgRGF0YQ0KIw0KIyBDT05GSUdfQ1JZUFRP
X0NDTSBpcyBub3Qgc2V0DQojIENPTkZJR19DUllQVE9fR0NNIGlzIG5vdCBz
ZXQNCiMgQ09ORklHX0NSWVBUT19TRVFJViBpcyBub3Qgc2V0DQoNCiMNCiMg
QmxvY2sgbW9kZXMNCiMNCkNPTkZJR19DUllQVE9fQ0JDPXkNCiMgQ09ORklH
X0NSWVBUT19DVFIgaXMgbm90IHNldA0KIyBDT05GSUdfQ1JZUFRPX0NUUyBp
cyBub3Qgc2V0DQojIENPTkZJR19DUllQVE9fRUNCIGlzIG5vdCBzZXQNCiMg
Q09ORklHX0NSWVBUT19MUlcgaXMgbm90IHNldA0KIyBDT05GSUdfQ1JZUFRP
X1BDQkMgaXMgbm90IHNldA0KIyBDT05GSUdfQ1JZUFRPX1hUUyBpcyBub3Qg
c2V0DQoNCiMNCiMgSGFzaCBtb2Rlcw0KIw0KIyBDT05GSUdfQ1JZUFRPX0hN
QUMgaXMgbm90IHNldA0KIyBDT05GSUdfQ1JZUFRPX1hDQkMgaXMgbm90IHNl
dA0KIyBDT05GSUdfQ1JZUFRPX1ZNQUMgaXMgbm90IHNldA0KDQojDQojIERp
Z2VzdA0KIw0KQ09ORklHX0NSWVBUT19DUkMzMkM9eQ0KIyBDT05GSUdfQ1JZ
UFRPX0dIQVNIIGlzIG5vdCBzZXQNCiMgQ09ORklHX0NSWVBUT19NRDQgaXMg
bm90IHNldA0KQ09ORklHX0NSWVBUT19NRDU9eQ0KIyBDT05GSUdfQ1JZUFRP
X01JQ0hBRUxfTUlDIGlzIG5vdCBzZXQNCiMgQ09ORklHX0NSWVBUT19STUQx
MjggaXMgbm90IHNldA0KIyBDT05GSUdfQ1JZUFRPX1JNRDE2MCBpcyBub3Qg
c2V0DQojIENPTkZJR19DUllQVE9fUk1EMjU2IGlzIG5vdCBzZXQNCiMgQ09O
RklHX0NSWVBUT19STUQzMjAgaXMgbm90IHNldA0KIyBDT05GSUdfQ1JZUFRP
X1NIQTEgaXMgbm90IHNldA0KIyBDT05GSUdfQ1JZUFRPX1NIQTI1NiBpcyBu
b3Qgc2V0DQojIENPTkZJR19DUllQVE9fU0hBNTEyIGlzIG5vdCBzZXQNCiMg
Q09ORklHX0NSWVBUT19UR1IxOTIgaXMgbm90IHNldA0KIyBDT05GSUdfQ1JZ
UFRPX1dQNTEyIGlzIG5vdCBzZXQNCg0KIw0KIyBDaXBoZXJzDQojDQpDT05G
SUdfQ1JZUFRPX0FFUz15DQojIENPTkZJR19DUllQVE9fQU5VQklTIGlzIG5v
dCBzZXQNCiMgQ09ORklHX0NSWVBUT19BUkM0IGlzIG5vdCBzZXQNCiMgQ09O
RklHX0NSWVBUT19CTE9XRklTSCBpcyBub3Qgc2V0DQojIENPTkZJR19DUllQ
VE9fQ0FNRUxMSUEgaXMgbm90IHNldA0KIyBDT05GSUdfQ1JZUFRPX0NBU1Q1
IGlzIG5vdCBzZXQNCiMgQ09ORklHX0NSWVBUT19DQVNUNiBpcyBub3Qgc2V0
DQpDT05GSUdfQ1JZUFRPX0RFUz15DQojIENPTkZJR19DUllQVE9fRkNSWVBU
IGlzIG5vdCBzZXQNCiMgQ09ORklHX0NSWVBUT19LSEFaQUQgaXMgbm90IHNl
dA0KIyBDT05GSUdfQ1JZUFRPX1NBTFNBMjAgaXMgbm90IHNldA0KIyBDT05G
SUdfQ1JZUFRPX1NFRUQgaXMgbm90IHNldA0KIyBDT05GSUdfQ1JZUFRPX1NF
UlBFTlQgaXMgbm90IHNldA0KIyBDT05GSUdfQ1JZUFRPX1RFQSBpcyBub3Qg
c2V0DQojIENPTkZJR19DUllQVE9fVFdPRklTSCBpcyBub3Qgc2V0DQoNCiMN
CiMgQ29tcHJlc3Npb24NCiMNCiMgQ09ORklHX0NSWVBUT19ERUZMQVRFIGlz
IG5vdCBzZXQNCiMgQ09ORklHX0NSWVBUT19aTElCIGlzIG5vdCBzZXQNCiMg
Q09ORklHX0NSWVBUT19MWk8gaXMgbm90IHNldA0KDQojDQojIFJhbmRvbSBO
dW1iZXIgR2VuZXJhdGlvbg0KIw0KQ09ORklHX0NSWVBUT19BTlNJX0NQUk5H
PXkNCiMgQ09ORklHX0NSWVBUT19VU0VSX0FQSV9IQVNIIGlzIG5vdCBzZXQN
CiMgQ09ORklHX0NSWVBUT19VU0VSX0FQSV9TS0NJUEhFUiBpcyBub3Qgc2V0
DQpDT05GSUdfQ1JZUFRPX0hXPXkNCiMgQ09ORklHX0JJTkFSWV9QUklOVEYg
aXMgbm90IHNldA0KDQojDQojIExpYnJhcnkgcm91dGluZXMNCiMNCkNPTkZJ
R19CSVRSRVZFUlNFPXkNCkNPTkZJR19HRU5FUklDX1BDSV9JT01BUD15DQpD
T05GSUdfR0VORVJJQ19JTz15DQojIENPTkZJR19DUkNfQ0NJVFQgaXMgbm90
IHNldA0KQ09ORklHX0NSQzE2PXkNCiMgQ09ORklHX0NSQ19UMTBESUYgaXMg
bm90IHNldA0KIyBDT05GSUdfQ1JDX0lUVV9UIGlzIG5vdCBzZXQNCkNPTkZJ
R19DUkMzMj15DQojIENPTkZJR19DUkMzMl9TRUxGVEVTVCBpcyBub3Qgc2V0
DQpDT05GSUdfQ1JDMzJfU0xJQ0VCWTg9eQ0KIyBDT05GSUdfQ1JDMzJfU0xJ
Q0VCWTQgaXMgbm90IHNldA0KIyBDT05GSUdfQ1JDMzJfU0FSV0FURSBpcyBu
b3Qgc2V0DQojIENPTkZJR19DUkMzMl9CSVQgaXMgbm90IHNldA0KIyBDT05G
SUdfQ1JDNyBpcyBub3Qgc2V0DQojIENPTkZJR19MSUJDUkMzMkMgaXMgbm90
IHNldA0KIyBDT05GSUdfQ1JDOCBpcyBub3Qgc2V0DQpDT05GSUdfWkxJQl9J
TkZMQVRFPXkNCkNPTkZJR19aTElCX0RFRkxBVEU9eQ0KQ09ORklHX1haX0RF
Qz15DQpDT05GSUdfWFpfREVDX1g4Nj15DQpDT05GSUdfWFpfREVDX1BPV0VS
UEM9eQ0KQ09ORklHX1haX0RFQ19JQTY0PXkNCkNPTkZJR19YWl9ERUNfQVJN
PXkNCkNPTkZJR19YWl9ERUNfQVJNVEhVTUI9eQ0KQ09ORklHX1haX0RFQ19T
UEFSQz15DQpDT05GSUdfWFpfREVDX0JDSj15DQojIENPTkZJR19YWl9ERUNf
VEVTVCBpcyBub3Qgc2V0DQpDT05GSUdfREVDT01QUkVTU19HWklQPXkNCkNP
TkZJR19IQVNfSU9NRU09eQ0KQ09ORklHX0hBU19ETUE9eQ0KQ09ORklHX0RR
TD15DQpDT05GSUdfTkxBVFRSPXkNCiMgQ09ORklHX0FWRVJBR0UgaXMgbm90
IHNldA0KIyBDT05GSUdfQ09SRElDIGlzIG5vdCBzZXQNCiMgQ09ORklHX0RE
UiBpcyBub3Qgc2V0DQo=

--1342847746-1131063668-1344432587=:21096
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--1342847746-1131063668-1344432587=:21096--


From xen-devel-bounces@lists.xen.org Wed Aug 08 13:54:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 13:54:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz6ht-0008AJ-V8; Wed, 08 Aug 2012 13:53:53 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Sz6hs-0008AE-9X
	for xen-devel@lists.xensource.com; Wed, 08 Aug 2012 13:53:52 +0000
Received: from [85.158.143.99:37133] by server-2.bemta-4.messagelabs.com id
	D5/1E-19021-F6F62205; Wed, 08 Aug 2012 13:53:51 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-9.tower-216.messagelabs.com!1344434030!30889638!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg1NTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4700 invoked from network); 8 Aug 2012 13:53:51 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-9.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Aug 2012 13:53:51 -0000
X-IronPort-AV: E=Sophos;i="4.77,733,1336348800"; d="scan'208";a="13910326"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Aug 2012 13:53:50 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 8 Aug 2012 14:53:50 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Sz6hp-0000hP-Uo; Wed, 08 Aug 2012 13:53:49 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Sz6hp-0001fZ-Oh;
	Wed, 08 Aug 2012 14:53:49 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20514.28525.474693.960425@mariner.uk.xensource.com>
Date: Wed, 8 Aug 2012 14:53:49 +0100
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1344416832.32142.10.camel@zakaz.uk.xensource.com>
References: <osstest-13571-mainreport@xen.org>
	<1344416832.32142.10.camel@zakaz.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"Keir \(Xen.org\)" <keir@xen.org>, Lars Kurth <lars.kurth@xen.org>
Subject: Re: [Xen-devel] [xen-unstable test] 13571: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [Xen-devel] [xen-unstable test] 13571: tolerable FAIL - PUSHED"):
> On Wed, 2012-08-08 at 03:38 +0100, xen.org wrote:
> > flight 13571 xen-unstable real [real]
> > http://www.chiark.greenend.org.uk/~xensrcts/logs/13571/
> > 
> > Failures :-/ but no regressions.
> 
> I think this (25739:472fc515a463) is a good candidate for rc2.
> 
> There was a proposal to have an RC test day on Monday (the 13th). I
> don't think there is anything else outstanding which would be considered
> a blocker for having a test day. It would be good to tag this now so
> that we definitely have a decent baseline tagged and through the test
> system for then.

I have tagged the qemu trees.  Keir, shall we have RC2 today then?

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 14:08:07 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 14:08:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz6vJ-0008RV-FM; Wed, 08 Aug 2012 14:07:45 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Sz6vI-0008RQ-F1
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 14:07:44 +0000
Received: from [85.158.143.99:26160] by server-1.bemta-4.messagelabs.com id
	59/85-20198-FA272205; Wed, 08 Aug 2012 14:07:43 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-216.messagelabs.com!1344434863!21096415!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22866 invoked from network); 8 Aug 2012 14:07:43 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-216.messagelabs.com with SMTP;
	8 Aug 2012 14:07:43 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 08 Aug 2012 15:07:42 +0100
Message-Id: <50228ECD0200007800093A0B@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Wed, 08 Aug 2012 15:07:41 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Stefano Stabellini" <Stefano.Stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
	<1344262325-26598-3-git-send-email-stefano.stabellini@eu.citrix.com>
	<5020011F0200007800092FD7@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208071257330.4645@kaball.uk.xensource.com>
	<50212C2B02000078000933CE@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208071942480.4645@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1208071942480.4645@kaball.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	"Tim Deegan \(3P\)" <Tim.Deegan@citrix.com>
Subject: Re: [Xen-devel] [PATCH 3/5] xen: few more xen_ulong_t substitutions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 08.08.12 at 14:12, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
wrote:
> On Tue, 7 Aug 2012, Jan Beulich wrote:
>> > Considering that each field of a multicall_entry is usually passed as an
>> > hypercall parameter, they should all remain unsigned long.
>> 
>> That'll give you subtle bugs I'm afraid: do_memory_op()'s
>> encoding of a continuation start extent (into the 'cmd' value),
>> for example, depends on being able to store the full value into
>> the command field of the multicall structure. The limit checking
>> of the permitted number of extents therefore is different
>> between native (ULONG_MAX >> MEMOP_EXTENT_SHIFT) and
>> compat (UINT_MAX >> MEMOP_EXTENT_SHIFT). I would
>> neither find it very appealing to have do_memory_op() adjusted
>> for dealing with this new special case, nor am I sure that's the
>> only place your approach would cause problems if you excluded
>> the multicall structure from the model change.
> 
> Given the way the continuation is implemented, the same problem can
> also happen on x86.

No. The compat wrapper, as pointed out there, has a different
check on the maximum number of extents, and hence the
continuation index can't overflow.

> In fact, considering that we don't use any compat code, and that
> do_memory_op has the following check:
> 
>     /* Is size too large for us to encode a continuation? */
>     if ( reservation.nr_extents > (ULONG_MAX >> MEMOP_EXTENT_SHIFT) )
>         return start_extent;
> 
> it would work as-is for ARM too.

Not afaict. For a 32-bit guest, with the above code executed in
a 64-bit hypervisor, the guest could pass in (theoretically)
UINT_MAX, which would pass this check, yet the eventual
continuation index would get truncated when stored in the
32-bit hypercall operation field.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 14:08:07 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 14:08:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz6vJ-0008RV-FM; Wed, 08 Aug 2012 14:07:45 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Sz6vI-0008RQ-F1
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 14:07:44 +0000
Received: from [85.158.143.99:26160] by server-1.bemta-4.messagelabs.com id
	59/85-20198-FA272205; Wed, 08 Aug 2012 14:07:43 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-216.messagelabs.com!1344434863!21096415!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22866 invoked from network); 8 Aug 2012 14:07:43 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-216.messagelabs.com with SMTP;
	8 Aug 2012 14:07:43 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 08 Aug 2012 15:07:42 +0100
Message-Id: <50228ECD0200007800093A0B@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Wed, 08 Aug 2012 15:07:41 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Stefano Stabellini" <Stefano.Stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
	<1344262325-26598-3-git-send-email-stefano.stabellini@eu.citrix.com>
	<5020011F0200007800092FD7@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208071257330.4645@kaball.uk.xensource.com>
	<50212C2B02000078000933CE@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208071942480.4645@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1208071942480.4645@kaball.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	"Tim Deegan \(3P\)" <Tim.Deegan@citrix.com>
Subject: Re: [Xen-devel] [PATCH 3/5] xen: few more xen_ulong_t substitutions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 08.08.12 at 14:12, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
wrote:
> On Tue, 7 Aug 2012, Jan Beulich wrote:
>> > Considering that each field of a multicall_entry is usually passed as an
>> > hypercall parameter, they should all remain unsigned long.
>> 
>> That'll give you subtle bugs I'm afraid: do_memory_op()'s
>> encoding of a continuation start extent (into the 'cmd' value),
>> for example, depends on being able to store the full value into
>> the command field of the multicall structure. The limit checking
>> of the permitted number of extents therefore is different
>> between native (ULONG_MAX >> MEMOP_EXTENT_SHIFT) and
>> compat (UINT_MAX >> MEMOP_EXTENT_SHIFT). I would
>> neither find it very appealing to have do_memory_op() adjusted
>> for dealing with this new special case, nor am I sure that's the
>> only place your approach would cause problems if you excluded
>> the multicall structure from the model change.
> 
> Given the way the continuation is implemented, the same problem can
> also happen on x86.

No. The compat wrapper, as pointed out there, has a different
check on the maximum number of extents, and hence the
continuation index can't overflow.

> In fact, considering that we don't use any compat code, and that
> do_memory_op has the following check:
> 
>     /* Is size too large for us to encode a continuation? */
>     if ( reservation.nr_extents > (ULONG_MAX >> MEMOP_EXTENT_SHIFT) )
>         return start_extent;
> 
> it would work as-is for ARM too.

Not afaict. For a 32-bit guest, with the above code executed in
a 64-bit hypervisor, the guest could (theoretically) pass in
UINT_MAX, which would pass this check, yet the eventual
continuation index would get truncated when stored in the
32-bit hypercall operation field.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 14:21:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 14:21:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz788-0000Ba-PG; Wed, 08 Aug 2012 14:21:00 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <dvrabel@cantab.net>) id 1Sz786-0000BU-TX
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 14:20:59 +0000
X-Env-Sender: dvrabel@cantab.net
X-Msg-Ref: server-7.tower-27.messagelabs.com!1344435651!4478002!1
X-Originating-IP: [212.23.1.2]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjEyLjIzLjEuMiA9PiA2NDAwNQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5661 invoked from network); 8 Aug 2012 14:20:51 -0000
Received: from smarthost02.mail.zen.net.uk (HELO smarthost02.mail.zen.net.uk)
	(212.23.1.2) by server-7.tower-27.messagelabs.com with SMTP;
	8 Aug 2012 14:20:51 -0000
Received: from [82.70.146.41] (helo=pear)
	by smarthost02.mail.zen.net.uk with esmtps
	(TLS1.0:RSA_AES_256_CBC_SHA1:32) (Exim 4.72)
	(envelope-from <dvrabel@cantab.net>)
	id 1Sz77w-0002Mi-QN; Wed, 08 Aug 2012 14:20:48 +0000
Received: from apple.davidvrabel.org.uk ([82.70.146.43])
	by pear with esmtp (Exim 4.72) (envelope-from <dvrabel@cantab.net>)
	id 1Sz77u-0005dT-Ri; Wed, 08 Aug 2012 15:20:47 +0100
Message-ID: <502275B5.9040500@cantab.net>
Date: Wed, 08 Aug 2012 15:20:37 +0100
From: David Vrabel <dvrabel@cantab.net>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:14.0) Gecko/20120713 Thunderbird/14.0
MIME-Version: 1.0
To: Jean Guyader <jean.guyader@gmail.com>
References: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
	<1344262325-26598-1-git-send-email-stefano.stabellini@eu.citrix.com>
	<501FFFB50200007800092FC7@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208061640270.4645@kaball.uk.xensource.com>
	<502004BE0200007800093028@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208071318430.4645@kaball.uk.xensource.com>
	<CAEBdQ921bVoRss3iuf1oLDrv9JBrhcvx=S16XsjrYWA2eg1Y8g@mail.gmail.com>
	<502131E202000078000933F9@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208071613400.4645@kaball.uk.xensource.com>
	<50222DFB0200007800093746@nat28.tlf.novell.com>
	<1344411923.11783.1.camel@dagon.hellion.org.uk>
	<5022445002000078000937B9@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208081048410.4645@kaball.uk.xensource.com>
	<CAEBdQ92wgXRs3xX87NE=Db-yF-VOJMLrOYh+6e=gkhPf+qAb7g@mail.gmail.com>
In-Reply-To: <CAEBdQ92wgXRs3xX87NE=Db-yF-VOJMLrOYh+6e=gkhPf+qAb7g@mail.gmail.com>
X-SA-Exim-Connect-IP: 82.70.146.43
X-SA-Exim-Mail-From: dvrabel@cantab.net
X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pear
X-Spam-Level: 
X-Spam-Status: No, score=-1.1 required=5.0 tests=BAYES_00,SPF_NEUTRAL
	autolearn=no version=3.3.1
X-SA-Exim-Version: 4.2.1 (built Mon, 22 Mar 2010 06:26:47 +0000)
X-SA-Exim-Scanned: Yes (on pear)
X-Originating-Smarthost02-IP: [82.70.146.41]
Cc: "Tim Deegan \(3P\)" <Tim.Deegan@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	"Jean Guyader \(3P\)" <jean.guyader@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>, Jan Beulich <JBeulich@suse.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH 1/5] xen: improve changes to
	xen_add_to_physmap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/08/2012 11:03, Jean Guyader wrote:
>
> I have introduced the size field for XENMAPSPACE_gmfn_range, and that is used
> by hvmloader when we want to relocate the memory because the PCI hole needs to
> be bigger.

Why do this so late, rather than creating the domain with a correctly
sized PCI hole?

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 14:26:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 14:26:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz7Dc-0000KP-HZ; Wed, 08 Aug 2012 14:26:40 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Sz7Da-0000KI-JK
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 14:26:38 +0000
Received: from [85.158.143.99:20536] by server-1.bemta-4.messagelabs.com id
	D0/B6-20198-D1772205; Wed, 08 Aug 2012 14:26:37 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-13.tower-216.messagelabs.com!1344435996!29961343!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3095 invoked from network); 8 Aug 2012 14:26:36 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-13.tower-216.messagelabs.com with SMTP;
	8 Aug 2012 14:26:36 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 08 Aug 2012 15:26:35 +0100
Message-Id: <5022933B0200007800093A1F@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Wed, 08 Aug 2012 15:26:35 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part6657190B.3__="
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, Matt Wilson <msw@amazon.com>
Subject: [Xen-devel] [PATCH] tools: don't expand prefix and exec_prefix too
	early
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part6657190B.3__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

A comment in tools/configure says that it is intended for these to be
command line overridable, so they shouldn't get expanded at configure
time.

The patch is fixing tools/m4/default_lib.m4 as far as I can see myself
doing this, but imo it is flawed altogether and should rather be
removed:
- setting prefix and exec_prefix to default values is being done later
  in tools/configure anyway
- setting LIB_PATH based on the (non-)existence of a lib64 directory
  underneath ${exec_prefix} is plain wrong (it can obviously exist on a
  32-bit installation)
- I wasn't able to locate any use of LIB_PATH
(I did see IanC's comment in c/s 25594:ad08cd8e7097 that removing it
supposedly causes other problems, but I don't see how that would
happen).

Signed-off-by: Jan Beulich <jbeulich@suse.com>

---
This will require tools/configure to be re-generated.

--- a/config/Tools.mk.in
+++ b/config/Tools.mk.in
@@ -1,5 +1,6 @@
 # Prefix and install folder
-PREFIX              :=3D @prefix@
+prefix              :=3D @prefix@
+PREFIX              :=3D $(prefix)
 exec_prefix         :=3D @exec_prefix@
 LIBDIR              :=3D @libdir@
=20
--- a/tools/m4/default_lib.m4
+++ b/tools/m4/default_lib.m4
@@ -1,14 +1,19 @@
 AC_DEFUN([AX_DEFAULT_LIB],
-[AS_IF([test "\${exec_prefix}/lib" =3D "$libdir"],
-    [AS_IF([test "$exec_prefix" =3D "NONE" && test "$prefix" !=3D =
"NONE"],
-        [exec_prefix=3D$prefix])
-    AS_IF([test "$exec_prefix" =3D "NONE"], [exec_prefix=3D$ac_default_pre=
fix])
-    AS_IF([test -d "${exec_prefix}/lib64"], [
+[AS_IF([test "\${exec_prefix}/lib" =3D "$libdir"], [
+    AS_IF([test "$prefix" =3D "NONE"], [prefix=3D$ac_default_prefix])
+    AS_IF([test "$exec_prefix" =3D "NONE"], [exec_prefix=3D'${prefix}'])
+    AS_IF([eval test -d "${exec_prefix}/lib64"], [
         LIB_PATH=3D"lib64"
     ],[
         LIB_PATH=3D"lib"
     ])
 ], [
     LIB_PATH=3D"${libdir:`expr length "$exec_prefix" + 1`}"
+    AS_IF([test -z "${libdir##\$\{exec_prefix\}/*}"], [
+        LIB_PATH=3D"${libdir:15}"
+    ])
+    AS_IF([test -z "${libdir##\$exec_prefix/*}"], [
+        LIB_PATH=3D"${libdir:13}"
+    ])
 ])
 AC_SUBST(LIB_PATH)])




--=__Part6657190B.3__=
Content-Type: text/plain; name="tools-cfg-libdir-x86_64.patch"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment; filename="tools-cfg-libdir-x86_64.patch"

A comment in tools/configure says that it is intended for these to =
be=0Acommand line overridable, so they shouldn't get expanded at =
configure=0Atime.=0A=0AThe patch is fixing tools/m4/default_lib.m4 as far =
as I can see myself=0Adoing this, but imo it is flawed altogether and =
should rather be=0Aremoved:=0A- setting prefix and exec_prefix to default =
values is being done later=0A  in tools/configure anyway=0A- setting =
LIB_PATH based on the (non-)existence of a lib64 directory=0A  underneath =
${exec_prefix} is plain wrong (it can obviously exist on a=0A  32-bit =
installation)=0A- I wasn't able to locate any use of LIB_PATH=0A(I did see =
IanC's comment in c/s 25594:ad08cd8e7097 that removing it=0Asupposedly =
causes other problems, but I don't see how that would=0Ahappen).=0A=0ASigne=
d-off-by: Jan Beulich <jbeulich@suse.com>=0A=0A---=0AThis will require =
tools/configure to be re-generated.=0A=0A--- a/config/Tools.mk.in=0A+++ =
b/config/Tools.mk.in=0A@@ -1,5 +1,6 @@=0A # Prefix and install folder=0A-PR=
EFIX              :=3D @prefix@=0A+prefix              :=3D @prefix@=0A+PRE=
FIX              :=3D $(prefix)=0A exec_prefix         :=3D @exec_prefix@=
=0A LIBDIR              :=3D @libdir@=0A =0A--- a/tools/m4/default_lib.m4=
=0A+++ b/tools/m4/default_lib.m4=0A@@ -1,14 +1,19 @@=0A AC_DEFUN([AX_DEFAUL=
T_LIB],=0A-[AS_IF([test "\${exec_prefix}/lib" =3D "$libdir"],=0A-    =
[AS_IF([test "$exec_prefix" =3D "NONE" && test "$prefix" !=3D "NONE"],=0A- =
       [exec_prefix=3D$prefix])=0A-    AS_IF([test "$exec_prefix" =3D =
"NONE"], [exec_prefix=3D$ac_default_prefix])=0A-    AS_IF([test -d =
"${exec_prefix}/lib64"], [=0A+[AS_IF([test "\${exec_prefix}/lib" =3D =
"$libdir"], [=0A+    AS_IF([test "$prefix" =3D "NONE"], [prefix=3D$ac_defau=
lt_prefix])=0A+    AS_IF([test "$exec_prefix" =3D "NONE"], [exec_prefix=3D'=
${prefix}'])=0A+    AS_IF([eval test -d "${exec_prefix}/lib64"], [=0A      =
   LIB_PATH=3D"lib64"=0A     ],[=0A         LIB_PATH=3D"lib"=0A     ])=0A =
], [=0A     LIB_PATH=3D"${libdir:`expr length "$exec_prefix" + 1`}"=0A+    =
AS_IF([test -z "${libdir##\$\{exec_prefix\}/*}"], [=0A+        LIB_PATH=3D"=
${libdir:15}"=0A+    ])=0A+    AS_IF([test -z "${libdir##\$exec_prefix/*}"]=
, [=0A+        LIB_PATH=3D"${libdir:13}"=0A+    ])=0A ])=0A AC_SUBST(LIB_PA=
TH)])=0A
--=__Part6657190B.3__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part6657190B.3__=--


From xen-devel-bounces@lists.xen.org Wed Aug 08 14:39:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 14:39:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz7Pn-0000Xo-Vl; Wed, 08 Aug 2012 14:39:15 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Sz7Pn-0000Xj-AX
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 14:39:15 +0000
Received: from [85.158.138.51:50689] by server-9.bemta-3.messagelabs.com id
	7A/AB-14615-21A72205; Wed, 08 Aug 2012 14:39:14 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-10.tower-174.messagelabs.com!1344436753!27002008!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24489 invoked from network); 8 Aug 2012 14:39:13 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-10.tower-174.messagelabs.com with SMTP;
	8 Aug 2012 14:39:13 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 08 Aug 2012 15:39:13 +0100
Message-Id: <502296310200007800093A35@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Wed, 08 Aug 2012 15:39:13 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Matt Wilson" <msw@amazon.com>, "Ian Campbell" <ian.campbell@citrix.com>,
	"Ian Jackson" <Ian.Jackson@eu.citrix.com>
References: <5022933B0200007800093A1F@nat28.tlf.novell.com>
In-Reply-To: <5022933B0200007800093A1F@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] tools: don't expand prefix and exec_prefix
 too early
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 08.08.12 at 16:26, "Jan Beulich" <JBeulich@suse.com> wrote:
> A comment in tools/configure says that it is intended for these to be
> command line overridable, so they shouldn't get expanded at configure
> time.

In addition, it would have been _very_ nice if it had been
prominently announced that, with (I believe) 25594:ad08cd8e7097,
it is now _required_ to configure with --libdir on x86-64, or else all
the .so-s end up under /usr/lib. Figuring this out, and getting the
patch here into the right form to be able to use the most compatible
form --libdir='${exec_prefix}'/lib64, has taken me a good part of
the day, which could have been avoided if this whole configure
adjustment (much like what had apparently been missing already in
earlier cases) had been done properly. I just can't imagine I'm
the only one having used no options at all, with things working
nevertheless despite .../lib not being in the library search paths
used when running xl et al.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 14:39:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 14:39:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz7Pn-0000Xo-Vl; Wed, 08 Aug 2012 14:39:15 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Sz7Pn-0000Xj-AX
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 14:39:15 +0000
Received: from [85.158.138.51:50689] by server-9.bemta-3.messagelabs.com id
	7A/AB-14615-21A72205; Wed, 08 Aug 2012 14:39:14 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-10.tower-174.messagelabs.com!1344436753!27002008!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24489 invoked from network); 8 Aug 2012 14:39:13 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-10.tower-174.messagelabs.com with SMTP;
	8 Aug 2012 14:39:13 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 08 Aug 2012 15:39:13 +0100
Message-Id: <502296310200007800093A35@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Wed, 08 Aug 2012 15:39:13 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Matt Wilson" <msw@amazon.com>, "Ian Campbell" <ian.campbell@citrix.com>,
	"Ian Jackson" <Ian.Jackson@eu.citrix.com>
References: <5022933B0200007800093A1F@nat28.tlf.novell.com>
In-Reply-To: <5022933B0200007800093A1F@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] tools: don't expand prefix and exec_prefix
 too early
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 08.08.12 at 16:26, "Jan Beulich" <JBeulich@suse.com> wrote:
> A comment in tools/configure says that it is intended for these to be
> command line overridable, so they shouldn't get expanded at configure
> time.

In addition, it would have been _very_ nice if it had been
prominently announced that with (I believe) 25594:ad08cd8e7097
it is now _required_ to configure with --libdir on x86-64, or else all
the .so-s end up under /usr/lib. Figuring this out and getting the
patch here in the right form to be able to use the most compatible
form --libdir='${exec_prefix}'/lib64 has taken me a good part of
the day, which could have been avoided if this whole configure
adjustment (much like had apparently been missing already in
earlier cases) had been done properly. I just can't imagine I'm
the only one having used no options at all, and things working
nevertheless despite .../lib not being in the library search paths
used when running xl et al.
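[Editor's note: a short shell sketch of why the quoting in the --libdir
form above matters (values illustrative, not from the Xen tree). The
single quotes keep ${exec_prefix} literal, so configure substitutes the
unexpanded string into the generated Makefiles and it is only resolved
at build time, once exec_prefix has its final value.]

```shell
# Single quotes pass the literal string ${exec_prefix}/lib64 through
# unexpanded; it can then be resolved later, when exec_prefix is set.
libdir='${exec_prefix}'/lib64
echo "$libdir"          # still the literal ${exec_prefix}/lib64

exec_prefix=/usr
eval echo "$libdir"     # expands to /usr/lib64
```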

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 14:43:47 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 14:43:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz7Tu-0000eL-Ks; Wed, 08 Aug 2012 14:43:30 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Sz7Tt-0000eG-47
	for xen-devel@lists.xensource.com; Wed, 08 Aug 2012 14:43:29 +0000
Received: from [85.158.138.51:56565] by server-4.bemta-3.messagelabs.com id
	56/1C-06379-01B72205; Wed, 08 Aug 2012 14:43:28 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-6.tower-174.messagelabs.com!1344437007!23006284!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12420 invoked from network); 8 Aug 2012 14:43:27 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-6.tower-174.messagelabs.com with SMTP;
	8 Aug 2012 14:43:27 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 08 Aug 2012 15:43:27 +0100
Message-Id: <5022972F0200007800093A3C@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Wed, 08 Aug 2012 15:43:27 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: <zhenzhong.duan@oracle.com>
References: <5020C24A.3060604@oracle.com>
	<20120807162637.GB15053@phenom.dumpdata.com>
	<50223027.6080502@oracle.com>
In-Reply-To: <50223027.6080502@oracle.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xensource.com, Feng Jin <joe.jin@oracle.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] kernel bootup slow issue on ovm3.1.1
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 08.08.12 at 11:23, "zhenzhong.duan" <zhenzhong.duan@oracle.com> wrote:
> I also sent a patch to lkml that can work around this issue, but I don't
> know the reason for the blocking on the Xen side.
> link: https://lkml.org/lkml/2012/8/7/50 

Without understanding the reason for this, I agree with hpa that
blindly changing the kernel to address this is not really a good
idea.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 14:45:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 14:45:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz7Vc-0000jq-4b; Wed, 08 Aug 2012 14:45:16 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <msw@amazon.com>) id 1Sz7Va-0000jk-SO
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 14:45:15 +0000
Received: from [85.158.143.99:49225] by server-2.bemta-4.messagelabs.com id
	9B/46-19021-A7B72205; Wed, 08 Aug 2012 14:45:14 +0000
X-Env-Sender: msw@amazon.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1344437110!30285323!1
X-Originating-IP: [72.21.196.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNzIuMjEuMTk2LjI1ID0+IDEwNjU0NA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7907 invoked from network); 8 Aug 2012 14:45:12 -0000
Received: from smtp-fw-2101.amazon.com (HELO smtp-fw-2101.amazon.com)
	(72.21.196.25)
	by server-3.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Aug 2012 14:45:12 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=amazon.com; i=msw@amazon.com; q=dns/txt; s=rte02;
	t=1344437112; x=1375973112;
	h=date:from:to:cc:subject:message-id:references:
	mime-version:in-reply-to;
	bh=dnIHctCKhrq91o0cBYpQBEVUEIPEfShKVcgjAiDmwmw=;
	b=jn1Kr887uc92QUfpAlEJjiMRYqAJnL9cYlpsbPTBg1y+EVl26aH5J77h
	yjnKZ4ICZhTU2I6892IVn7/r65KiSQ==;
X-IronPort-AV: E=Sophos;i="4.77,733,1336348800"; d="scan'208";a="420538167"
Received: from smtp-in-0105.sea3.amazon.com ([10.224.19.45])
	by smtp-border-fw-out-2101.iad2.amazon.com with
	ESMTP/TLS/DHE-RSA-AES256-SHA; 08 Aug 2012 14:45:08 +0000
Received: from ex10-hub-9002.ant.amazon.com (ex10-hub-9002.ant.amazon.com
	[10.185.137.130])
	by smtp-in-0105.sea3.amazon.com (8.13.8/8.13.8) with ESMTP id
	q78Ej7RT032086
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=FAIL);
	Wed, 8 Aug 2012 14:45:07 GMT
Received: from US-SEA-R8XVZTX (10.224.80.41) by ex10-hub-9002.ant.amazon.com
	(10.185.137.130) with Microsoft SMTP Server id 14.2.247.3;
	Wed, 8 Aug 2012 07:45:01 -0700
Received: by US-SEA-R8XVZTX (sSMTP sendmail emulation); Wed, 08 Aug 2012
	07:45:00 -0700
Date: Wed, 8 Aug 2012 07:45:00 -0700
From: Matt Wilson <msw@amazon.com>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20120808144500.GD5592@US-SEA-R8XVZTX>
References: <5022933B0200007800093A1F@nat28.tlf.novell.com>
	<502296310200007800093A35@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <502296310200007800093A35@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.20 (2009-12-10)
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] tools: don't expand prefix and exec_prefix
 too early
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Aug 08, 2012 at 07:39:13AM -0700, Jan Beulich wrote:
> >>> On 08.08.12 at 16:26, "Jan Beulich" <JBeulich@suse.com> wrote:
> > A comment in tools/configure says that it is intended for these to be
> > command line overridable, so they shouldn't get expanded at configure
> > time.
> 
> In addition, it would have been _very_ nice if it had been
> prominently announced that with (I believe) 25594:ad08cd8e7097
> it is now _required_ to configure with --libdir on x86-64, or else all
> the .so-s end up under /usr/lib. Figuring this out and getting the
> patch here in the right form to be able to use the most compatible
> form --libdir='${exec_prefix}'/lib64 has taken me a good part of
> the day, which could have been avoided if this whole configure
> adjustment (much like had apparently been missing already in
> earlier cases) had been done properly. I just can't imagine I'm
> the only one having used no options at all, and things working
> nevertheless despite .../lib not being in the library search paths
> used when running xl et al.

I'm sorry for the trouble it caused you today, Jan. I thought I called
it out sufficiently in the commit log:

    With this change, packagers can supply the desired location for
    shared libraries on the ./configure command line. Packagers need
    to note that the default behaviour on 64-bit Linux systems will be
    to install shared libraries in /usr/lib, not /usr/lib64, unless a
    --libdir value is provided to ./configure.

The new behavior is consistent with all packages that use autoconf.
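[Editor's note: for context, the autoconf default directory hierarchy
that this consistency refers to can be sketched as follows (values
illustrative). Each directory defaults in terms of the previous one,
and none of the defaults is architecture-aware, which is why 64-bit
packagers must override libdir themselves.]

```shell
# Autoconf/GNU standard directory variables: libdir derives from
# exec_prefix, which derives from prefix; the default is always
# .../lib, never .../lib64, unless --libdir is given explicitly.
prefix=/usr/local              # configure's default --prefix
exec_prefix="$prefix"          # defaults to ${prefix}
libdir="$exec_prefix/lib"      # default, even on 64-bit systems
echo "$libdir"
```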

Would a separate email on the topic to xen-devel, apart from the patch
discussion, have helped raise awareness?

Matt

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 14:47:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 14:47:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz7Xt-0000s3-Ls; Wed, 08 Aug 2012 14:47:37 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Sz7Xr-0000rw-FT
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 14:47:35 +0000
Received: from [85.158.138.51:61835] by server-11.bemta-3.messagelabs.com id
	5E/63-10722-60C72205; Wed, 08 Aug 2012 14:47:34 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-7.tower-174.messagelabs.com!1344437253!22142294!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24923 invoked from network); 8 Aug 2012 14:47:33 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-7.tower-174.messagelabs.com with SMTP;
	8 Aug 2012 14:47:33 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 08 Aug 2012 15:47:33 +0100
Message-Id: <502298230200007800093A57@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Wed, 08 Aug 2012 15:47:31 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: <zhenzhong.duan@oracle.com>
References: <5020C24A.3060604@oracle.com>
	<5020F003020000780009322C@nat28.tlf.novell.com>
	<502235E8.9040309@oracle.com>
In-Reply-To: <502235E8.9040309@oracle.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Feng Jin <joe.jin@oracle.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] kernel bootup slow issue on ovm3.1.1
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 08.08.12 at 11:48, "zhenzhong.duan" <zhenzhong.duan@oracle.com> wrote:
> On 2012-08-07 16:37, Jan Beulich wrote:
>>>>> On 07.08.12 at 09:22, "zhenzhong.duan" <zhenzhong.duan@oracle.com> wrote:
>>> After some debugging, we found it's the kernel mtrr init that causes
>>> this delay:
>>>
>>> mtrr_aps_init()
>>>   \->  set_mtrr()
>>>       \->  mtrr_work_handler()
>>>
>>> The kernel spins in mtrr_work_handler.
>>>
>>> But we don't know what is going on inside the hypervisor, or why big
>>> mem + passthrough makes the worst case.
>>> Is this already fixed in xen upstream?
>> First of all it would have been useful to indicate the kernel version,
>> since mtrr_work_handler() disappeared after 3.0. Obviously worth
>> checking whether that change by itself already addresses your
>> problem.
> No luck, tried upstream kernel 3.6.0-rc1, seems worse. It took 2 hours
> to boot up.

That's quite a big step from 3.0.x. And in another response you
point out that 3.6 is way worse than 3.5 was. So maybe going
back to 3.1 or 3.2 might be a better idea if debugging the issue
doesn't get you anywhere.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 14:54:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 14:54:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz7eI-00018d-L5; Wed, 08 Aug 2012 14:54:14 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Sz7eH-00018Y-CQ
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 14:54:13 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1344437576!4492569!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29129 invoked from network); 8 Aug 2012 14:52:57 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-10.tower-27.messagelabs.com with SMTP;
	8 Aug 2012 14:52:57 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 08 Aug 2012 15:52:55 +0100
Message-Id: <502299650200007800093A68@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Wed, 08 Aug 2012 15:52:53 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Matt Wilson" <msw@amazon.com>
References: <5022933B0200007800093A1F@nat28.tlf.novell.com>
	<502296310200007800093A35@nat28.tlf.novell.com>
	<20120808144500.GD5592@US-SEA-R8XVZTX>
In-Reply-To: <20120808144500.GD5592@US-SEA-R8XVZTX>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] tools: don't expand prefix and exec_prefix
 too early
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 08.08.12 at 16:45, Matt Wilson <msw@amazon.com> wrote:
> On Wed, Aug 08, 2012 at 07:39:13AM -0700, Jan Beulich wrote:
>> >>> On 08.08.12 at 16:26, "Jan Beulich" <JBeulich@suse.com> wrote:
>> > A comment in tools/configure says that it is intended for these to be
>> > command line overridable, so they shouldn't get expanded at configure
>> > time.
>> 
>> In addition, it would have been _very_ nice if it had been
>> prominently announced that with (I believe) 25594:ad08cd8e7097
>> it is now _required_ to configure with --libdir on x86-64, or else all
>> the .so-s end up under /usr/lib. Figuring this out and getting the
>> patch here in the right form to be able to use the most compatible
>> form --libdir='${exec_prefix}'/lib64 has taken me a good part of
>> the day, which could have been avoided if this whole configure
>> adjustment (much like had apparently been missing already in
>> earlier cases) had been done properly. I just can't imagine I'm
>> the only one having used no options at all, and things working
>> nevertheless despite .../lib not being in the library search paths
>> used when running xl et al.
> 
> I'm sorry for the trouble it caused you today, Jan. I thought I called
> it out sufficiently in the commit log:
> 
>     With this change, packagers can supply the desired location for
>     shared libraries on the ./configure command line. Packagers need
>     to note that the default behaviour on 64-bit Linux systems will be
>     to install shared libraries in /usr/lib, not /usr/lib64, unless a
>     --libdir value is provided to ./configure.

No, this comment says "can", not "have to". Plus you don't really
expect people to read every changeset's description, do you?

> The new behavior is consistent with all packages that use autoconf.

That indeed appears to be the case, admittedly to my not
insignificant surprise.

> Would a separate email on the topic to xen-devel, apart from the patch
> discussion, have helped raise awareness?

Yes. For my part I follow only selected tools-side discussions,
and expect the tools maintainers to get things sorted out. But of
course I need to build and run the tools, so if a change is made
to the way things need to be built (or run) successfully, then
announcing it independently of the patch would certainly be
helpful in general.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 14:54:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 14:54:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz7eI-00018d-L5; Wed, 08 Aug 2012 14:54:14 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Sz7eH-00018Y-CQ
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 14:54:13 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1344437576!4492569!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29129 invoked from network); 8 Aug 2012 14:52:57 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-10.tower-27.messagelabs.com with SMTP;
	8 Aug 2012 14:52:57 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 08 Aug 2012 15:52:55 +0100
Message-Id: <502299650200007800093A68@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Wed, 08 Aug 2012 15:52:53 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Matt Wilson" <msw@amazon.com>
References: <5022933B0200007800093A1F@nat28.tlf.novell.com>
	<502296310200007800093A35@nat28.tlf.novell.com>
	<20120808144500.GD5592@US-SEA-R8XVZTX>
In-Reply-To: <20120808144500.GD5592@US-SEA-R8XVZTX>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] tools: don't expand prefix and exec_prefix
 too early
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 08.08.12 at 16:45, Matt Wilson <msw@amazon.com> wrote:
> On Wed, Aug 08, 2012 at 07:39:13AM -0700, Jan Beulich wrote:
>> >>> On 08.08.12 at 16:26, "Jan Beulich" <JBeulich@suse.com> wrote:
>> > A comment in tools/configure says that it is intended for these to be
>> > command line overridable, so they shouldn't get expanded at configure
>> > time.
>> 
>> In addition, it would have been _very_ nice if it had been
>> prominently announced that with (I believe) 25594:ad08cd8e7097
>>
>> it is now _required_ to configure with --libdir on x86-64, or else all
>> the .so-s end up under /usr/lib. Figuring this out and getting the
>> patch here in the right form to be able to use the most compatible
>> form --libdir='${exec_prefix}'/lib64 has taken me a good part of
>> the day, which could have been avoided if this whole configure
>> adjustment (much like had apparently been missing already in
>> earlier cases) had been done properly. I just can't imagine I'm
>> the only one having used no options at all, and things working
>> nevertheless despite .../lib not being in the library search paths
>> used when running xl et al.
> 
> I'm sorry for the trouble it caused you today, Jan. I thought I called
> it out sufficiently in the commit log:
> 
>     With this change, packagers can supply the desired location for
>     shared libraries on the ./configure command line. Packagers need
>     to note that the default behaviour on 64-bit Linux systems will be
>     to install shared libraries in /usr/lib, not /usr/lib64, unless a
>     --libdir value is provided to ./configure.

No, this comment says "can", not "have to". Plus you don't really
expect people to read every changeset's description, do you?

> The new behavior is consistent with all packages that use autoconf.

That indeed appears to be the case, admittedly to my not
insignificant surprise.

> Would a separate email on the topic to xen-devel, apart from the patch
> discussion, have helped raise awareness?

Yes. I for my part follow only very selected tools side discussions,
and expect the tools maintainers to get things sorted out. But of
course I need to build and run the tools, so if some change gets
done to the way things need to get built (or run) successfully
then announcing this independently of the patch would certainly
be helpful in general.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
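[Archive editor's note: the shell-quoting subtlety behind the workaround Jan
mentions, --libdir='${exec_prefix}'/lib64, can be sketched outside of Xen's
build system. The point is that the single quotes keep ${exec_prefix}
literal, so it is substituted late (e.g. by make), not by the shell invoking
./configure. The paths below are illustrative assumptions, not Xen's actual
defaults.]

```shell
# Minimal sketch (not Xen's build system): why the quoting in
# --libdir='${exec_prefix}'/lib64 matters.
exec_prefix=/usr

# Single quotes: configure records the value unexpanded.
libdir='${exec_prefix}/lib64'
echo "recorded: $libdir"        # still contains ${exec_prefix}

# Late expansion, roughly what make does with the recorded value.
eval "expanded=\"$libdir\""
echo "expanded: $expanded"      # now /usr/lib64
```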

From xen-devel-bounces@lists.xen.org Wed Aug 08 14:55:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 14:55:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz7fE-0001Bf-2k; Wed, 08 Aug 2012 14:55:12 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Sz7fC-0001BR-RN
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 14:55:11 +0000
Received: from [85.158.143.35:20565] by server-1.bemta-4.messagelabs.com id
	D6/54-20198-ECD72205; Wed, 08 Aug 2012 14:55:10 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1344437708!6147452!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg1NTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1138 invoked from network); 8 Aug 2012 14:55:09 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Aug 2012 14:55:09 -0000
X-IronPort-AV: E=Sophos;i="4.77,733,1336348800"; d="scan'208";a="13912046"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Aug 2012 14:55:08 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Wed, 8 Aug 2012
	15:55:08 +0100
Message-ID: <1344437707.32142.38.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Wed, 8 Aug 2012 15:55:07 +0100
In-Reply-To: <502299650200007800093A68@nat28.tlf.novell.com>
References: <5022933B0200007800093A1F@nat28.tlf.novell.com>
	<502296310200007800093A35@nat28.tlf.novell.com>
	<20120808144500.GD5592@US-SEA-R8XVZTX>
	<502299650200007800093A68@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>, Matt Wilson <msw@amazon.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] tools: don't expand prefix and exec_prefix
 too early
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-08-08 at 15:52 +0100, Jan Beulich wrote:
> >>> On 08.08.12 at 16:45, Matt Wilson <msw@amazon.com> wrote:
> > On Wed, Aug 08, 2012 at 07:39:13AM -0700, Jan Beulich wrote:
> >> >>> On 08.08.12 at 16:26, "Jan Beulich" <JBeulich@suse.com> wrote:
> >> > A comment in tools/configure says that it is intended for these to be
> >> > command line overridable, so they shouldn't get expanded at configure
> >> > time.
> >> 
> >> In addition, it would have been _very_ nice if it had been
> >> prominently announced that with (I believe) 25594:ad08cd8e7097
> >>
> >> it is now _required_ to configure with --libdir on x86-64, or else all
> >> the .so-s end up under /usr/lib. Figuring this out and getting the
> >> patch here in the right form to be able to use the most compatible
> >> form --libdir='${exec_prefix}'/lib64 has taken me a good part of
> >> the day, which could have been avoided if this whole configure
> >> adjustment (much like had apparently been missing already in
> >> earlier cases) had been done properly. I just can't imagine I'm
> >> the only one having used no options at all, and things working
> >> nevertheless despite .../lib not being in the library search paths
> >> used when running xl et al.
> > 
> > I'm sorry for the trouble it caused you today, Jan. I thought I called
> > it out sufficiently in the commit log:
> > 
> >     With this change, packagers can supply the desired location for
> >     shared libraries on the ./configure command line. Packagers need
> >     to note that the default behaviour on 64-bit Linux systems will be
> >     to install shared libraries in /usr/lib, not /usr/lib64, unless a
> >     --libdir value is provided to ./configure.
> 
> No, this comment says "can", not "have to". Plus you don't really
> expect people to read every changeset's description, do you?
> 
> > The new behavior is consistent with all packages that use autoconf.
> 
> That indeed appears to be the case, admittedly to my not
> insignificant surprise.
> 
> > Would a separate email on the topic to xen-devel, apart from the patch
> > discussion, have helped raise awareness?
> 
> Yes. I for my part follow only very selected tools side discussions,
> and expect the tools maintainers to get things sorted out. But of
> course I need to build and run the tools, so if some change gets
> done to the way things need to get built (or run) successfully
> then announcing this independently of the patch would certainly
> be helpful in general.

FWIW I've just added it to http://wiki.xen.org/wiki/Xen_4.2_Release_Notes,
which probably wouldn't have helped you but should help in general.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 15:02:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 15:02:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz7lr-0001Rw-UV; Wed, 08 Aug 2012 15:02:03 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Sz7lp-0001Rr-Pd
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 15:02:01 +0000
Received: from [85.158.143.35:9952] by server-2.bemta-4.messagelabs.com id
	0F/E0-19021-96F72205; Wed, 08 Aug 2012 15:02:01 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1344438119!19307403!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24963 invoked from network); 8 Aug 2012 15:02:00 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-6.tower-21.messagelabs.com with SMTP;
	8 Aug 2012 15:02:00 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 08 Aug 2012 16:01:58 +0100
Message-Id: <50229B840200007800093A73@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Wed, 08 Aug 2012 16:01:56 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: <zhenzhong.duan@oracle.com>
References: <5020C24A.3060604@oracle.com>
	<5020F003020000780009322C@nat28.tlf.novell.com>
	<502235E8.9040309@oracle.com>
In-Reply-To: <502235E8.9040309@oracle.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Feng Jin <joe.jin@oracle.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] kernel bootup slow issue on ovm3.1.1
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 08.08.12 at 11:48, "zhenzhong.duan" <zhenzhong.duan@oracle.com> wrote:
> On 2012-08-07 16:37, Jan Beulich wrote:
>>>>> On 07.08.12 at 09:22, "zhenzhong.duan"<zhenzhong.duan@oracle.com>  wrote:
>> Next, if you already spotted where the spinning occurs, you
>> should also be able to tell what's going on at the other side, i.e.
>> why the event that is being waited for isn't occurring for this
>> long a time. Since there's a number of open coded spin loops
>> here, knowing exactly which one each CPU is sitting in (and
>> which ones might not be in any) is the fundamental information
>> needed.
>>
>>  From what you're telling us so far, I'd rather suspect a kernel
>> problem, not a hypervisor one here.
> Per my finding, most of vcpus spin at set_atomicity_lock.

Then you need to determine what the current owner of the
lock is doing.

> Some spin at stop_machine after finish their job.

And here you'd need to find out what they're waiting for,
and what those CPUs are doing.

> Only one vcpu is calling generic_set_all.
> I'm not sure if the vcpu calling generic_set_all don't have higher 
> priority and maybe preempt by other vcpus and dom0 frequently.
> This waste much time.

There's not that much being done in generic_set_all(), so the
code should finish reasonably quickly. Are you perhaps having
more vCPU-s in the guest than pCPU-s they can run on? Does
your hardware support Pause-Loop-Exiting (or the AMD
equivalent, don't recall their term right now)?

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 15:02:26 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 15:02:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz7m3-0001Su-As; Wed, 08 Aug 2012 15:02:15 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Sz7m2-0001SD-9C
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 15:02:14 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1344438124!9218904!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg1NTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32596 invoked from network); 8 Aug 2012 15:02:04 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Aug 2012 15:02:04 -0000
X-IronPort-AV: E=Sophos;i="4.77,733,1336348800"; d="scan'208";a="13912230"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Aug 2012 15:02:04 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 8 Aug 2012 16:02:04 +0100
Date: Wed, 8 Aug 2012 16:01:40 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Jan Beulich <JBeulich@suse.com>
In-Reply-To: <50228ECD0200007800093A0B@nat28.tlf.novell.com>
Message-ID: <alpine.DEB.2.02.1208081554170.21096@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
	<1344262325-26598-3-git-send-email-stefano.stabellini@eu.citrix.com>
	<5020011F0200007800092FD7@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208071257330.4645@kaball.uk.xensource.com>
	<50212C2B02000078000933CE@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208071942480.4645@kaball.uk.xensource.com>
	<50228ECD0200007800093A0B@nat28.tlf.novell.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "Tim Deegan \(3P\)" <Tim.Deegan@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 3/5] xen: few more xen_ulong_t substitutions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 8 Aug 2012, Jan Beulich wrote:
> >>> On 08.08.12 at 14:12, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> wrote:
> > On Tue, 7 Aug 2012, Jan Beulich wrote:
> >> > Considering that each field of a multicall_entry is usually passed as an
> >> > hypercall parameter, they should all remain unsigned long.
> >> 
> >> That'll give you subtle bugs I'm afraid: do_memory_op()'s
> >> encoding of a continuation start extent (into the 'cmd' value),
> >> for example, depends on being able to store the full value into
> >> the command field of the multicall structure. The limit checking
> >> of the permitted number of extents therefore is different
> >> between native (ULONG_MAX >> MEMOP_EXTENT_SHIFT) and
> >> compat (UINT_MAX >> MEMOP_EXTENT_SHIFT). I would
> >> neither find it very appealing to have do_memory_op() adjusted
> >> for dealing with this new special case, nor am I sure that's the
> >> only place your approach would cause problems if you excluded
> >> the multicall structure from the model change.
> > 
> > Given the way the continuation is implemented, the same problem can
> > also happen on x86.
> 
> No. The compat wrapper, as pointed out there has a different
> check on the maximum number of extents, and hence the
> continuation index can't overflow.

Right. I meant the same conceptual problem: the guest passing a number of
extents that is too big for the hypervisor to handle.


> > In fact, considering that we don't use any compat code, and that
> > do_memory_op has the following check:
> > 
> >     /* Is size too large for us to encode a continuation? */
> >     if ( reservation.nr_extents > (ULONG_MAX >> MEMOP_EXTENT_SHIFT) )
> >         return start_extent;
> > 
> > it would work as-is for ARM too.
> 
> Not afaict. For a 32-bit guest, but the above code executed in
> a 64-bit hypervisor, the guest could pass in (theoretically)
> UINT_MAX, which would pass this check, yet the eventual
> continuation index would get truncated when stored in the
> 32-bit hypercall operation field.
 
Actually, like Ian wrote, I expect that using the upper 32-bit half of
the x0 register, only for continuations, would work just fine.

In any case, we can make that check arch-dependent and restrict the
maximum number of extents on aarch64 to the same limit we have on
aarch32.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 15:10:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 15:10:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz7tn-0001n1-9l; Wed, 08 Aug 2012 15:10:15 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <gbtju85@gmail.com>) id 1Sz7tm-0001mw-AP
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 15:10:14 +0000
Received: from [85.158.139.83:58664] by server-2.bemta-5.messagelabs.com id
	DF/EE-25118-55182205; Wed, 08 Aug 2012 15:10:13 +0000
X-Env-Sender: gbtju85@gmail.com
X-Msg-Ref: server-11.tower-182.messagelabs.com!1344438610!23693004!1
X-Originating-IP: [209.85.217.173]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_10_20,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23891 invoked from network); 8 Aug 2012 15:10:10 -0000
Received: from mail-lb0-f173.google.com (HELO mail-lb0-f173.google.com)
	(209.85.217.173)
	by server-11.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Aug 2012 15:10:10 -0000
Received: by lbbgm13 with SMTP id gm13so603666lbb.32
	for <xen-devel@lists.xen.org>; Wed, 08 Aug 2012 08:10:09 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:cc:content-type;
	bh=kB78q/PsRavTf/1ysYA2tRBUAy4aqjafkquvbgJNWoM=;
	b=Yx52v5p1hPgNOiiXL3lD21doUearsKVU0DLigoB+9L06pru9rB0J2Og8EhFAUVqpeo
	DLc/oekmMXHOJLyphzR+lqErzoeVXXwtWxG6yLryzreO8XYkkgH096PCFKgTnXIaQN/4
	d/5N1Y2CIsRWBMGpcYOnqorUra7ok7EFJ7I7OWEk/ENp41Z96iPbfjYgqYXQr8ZRzsW6
	oFzWPUg0xdm6Q6OytnB9dcQzBqBekmTLcKbRyUTxFNUHydR+aG9oiiGAREZrwfMECsU4
	umo9hzyCQg+k7De4dc53jmoPTkISX171X32LMVWmNPKz9IGvsib5Qfte70NpgSYo+fe9
	d6KQ==
MIME-Version: 1.0
Received: by 10.152.108.144 with SMTP id hk16mr18430790lab.2.1344438609770;
	Wed, 08 Aug 2012 08:10:09 -0700 (PDT)
Received: by 10.114.23.194 with HTTP; Wed, 8 Aug 2012 08:10:09 -0700 (PDT)
Date: Wed, 8 Aug 2012 23:10:09 +0800
Message-ID: <CAEQjb-Rd2=DaxrxiK2TYzNNBH01w_5OgPeKxugpS26n4tGw4Yg@mail.gmail.com>
From: Bei Guan <gbtju85@gmail.com>
To: xen-devel <xen-devel@lists.xen.org>
Cc: Attilio Rao <attilio.rao@citrix.com>, Jordan Justen <jljusten@gmail.com>,
	"Andrei E. Warkentin" <andrey.warkentin@gmail.com>
Subject: [Xen-devel] How to initialize the grant table in a HVM guest OS and
	its bios
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2187543447621051890=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2187543447621051890==
Content-Type: multipart/alternative; boundary=bcaec54d4dd6c1f70f04c6c28199

--bcaec54d4dd6c1f70f04c6c28199
Content-Type: text/plain; charset=ISO-8859-1

Hi,

In Mini-OS, the initialization of the grant table maps the grant table
entries into an area called demand_map_area. The code is in
mini-os/gnttab.c and looks like this:
gnttab_table = map_frames(frames, NR_GRANT_FRAMES);

It seems that getting the demand_map_area needs the information from the
start_info page in Mini-OS (refer to the function allocate_ondemand in
mini-os/arch/x86/mm.c). However, there is no start_info page in an HVM
guest OS (maybe it's also true for its virtual BIOS, such as SeaBIOS and a
UEFI BIOS). So, how do we map the grant table entries in an HVM guest OS
and its BIOS?

Any suggestions are appreciated.


Best Regards,
Bei Guan

--bcaec54d4dd6c1f70f04c6c28199--


--===============2187543447621051890==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2187543447621051890==--


From xen-devel-bounces@lists.xen.org Wed Aug 08 15:11:26 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 15:11:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz7ud-0001qi-En; Wed, 08 Aug 2012 15:11:07 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <msw@amazon.com>) id 1Sz7uc-0001qC-G2
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 15:11:06 +0000
X-Env-Sender: msw@amazon.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1344438656!2777301!1
X-Originating-IP: [72.21.196.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNzIuMjEuMTk2LjI1ID0+IDEwNjU0NA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12714 invoked from network); 8 Aug 2012 15:10:57 -0000
Received: from smtp-fw-2101.amazon.com (HELO smtp-fw-2101.amazon.com)
	(72.21.196.25)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Aug 2012 15:10:57 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=amazon.com; i=msw@amazon.com; q=dns/txt; s=rte02;
	t=1344438657; x=1375974657;
	h=date:from:to:cc:subject:message-id:references:
	mime-version:in-reply-to;
	bh=x/5kmZEYVFC6H4hAs/e9Ls/ssSAAgB5WjshTuPmuSPI=;
	b=Zhdks5hi16PU5+ICgntQ4GyTXxawKUO4pbdh+eeS7HcB29A3NnjTKC5n
	xUhH60moyfazdf8nzkFl+up4udv+yA==;
X-IronPort-AV: E=Sophos;i="4.77,733,1336348800"; d="scan'208";a="420548121"
Received: from smtp-in-6002.iad6.amazon.com ([10.195.76.108])
	by smtp-border-fw-out-2101.iad2.amazon.com with
	ESMTP/TLS/DHE-RSA-AES256-SHA; 08 Aug 2012 15:10:45 +0000
Received: from ex10-hub-9002.ant.amazon.com (ex10-hub-9002.ant.amazon.com
	[10.185.137.130])
	by smtp-in-6002.iad6.amazon.com (8.13.8/8.13.8) with ESMTP id
	q78FAiTg023297
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=FAIL);
	Wed, 8 Aug 2012 15:10:44 GMT
Received: from US-SEA-R8XVZTX (10.224.80.34) by ex10-hub-9002.ant.amazon.com
	(10.185.137.130) with Microsoft SMTP Server id 14.2.247.3;
	Wed, 8 Aug 2012 08:10:37 -0700
Received: by US-SEA-R8XVZTX (sSMTP sendmail emulation); Wed, 08 Aug 2012
	08:10:37 -0700
Date: Wed, 8 Aug 2012 08:10:37 -0700
From: Matt Wilson <msw@amazon.com>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20120808151037.GE5592@US-SEA-R8XVZTX>
References: <5022933B0200007800093A1F@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <5022933B0200007800093A1F@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.20 (2009-12-10)
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] tools: don't expand prefix and exec_prefix
	too early
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Aug 08, 2012 at 07:26:35AM -0700, Jan Beulich wrote:
> A comment in tools/configure says that it is intended for these to be
> command line overridable, so they shouldn't get expanded at configure
> time.

I'm not sure which comment you're referencing, or on which command line
you're meant to be able to override prefix/exec_prefix. Could you point
it out? What's not working?

> The patch is fixing tools/m4/default_lib.m4 as far as I can see myself
> doing this, but imo it is flawed altogether and should rather be
> removed:
> - setting prefix and exec_prefix to default values is being done later
>   in tools/configure anyway
> - setting LIB_PATH based on the (non-)existence of a lib64 directory
>   underneath ${exec_prefix} is plain wrong (it can obviously exist on a
>   32-bit installation)
> - I wasn't able to locate any use of LIB_PATH

I believe that the LIB_PATH portions can now be removed. I also
believe you're right that all of this can be removed. Did you attempt
that as well?

> (I did see IanC's comment in c/s 25594:ad08cd8e7097 that removing it
> supposedly causes other problems, but I don't see how that would
> happen).

The tail of the thread where IanC had trouble is here: 
  http://lists.xen.org/archives/html/xen-devel/2012-07/msg00268.html

If we're going back into this code, I think that adding a lowercase
prefix variable and removing default_lib.m4 altogether should resolve
remaining problems. But we should probably only do that for 4.2 if
something's very broken with what's in the tree now.
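The late-expansion point can be seen with a toy Tools.mk. The file names
mirror the patch, but the contents and paths are illustrative: because
prefix stays a make variable rather than being expanded at configure time,
it remains overridable on the make command line.

```shell
# Write a minimal Tools.mk mimicking the patched config/Tools.mk.in
# (printf keeps $(prefix) literal; the recipe line needs a tab, hence \t).
printf 'prefix      := /usr/local\nexec_prefix := $(prefix)\nLIBDIR      := $(exec_prefix)/lib\n' > Tools.mk
printf 'include Tools.mk\nshow:\n\t@echo $(LIBDIR)\n' > Makefile

make -s show                  # -> /usr/local/lib
make -s show prefix=/opt/xen  # -> /opt/xen/lib (command line overrides win)
```

Had configure expanded prefix into a literal path inside Tools.mk, the
second invocation would have no effect.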

> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> ---
> This will require tools/configure to be re-generated.

This is typically part of the same commit.

Matt

> --- a/config/Tools.mk.in
> +++ b/config/Tools.mk.in
> @@ -1,5 +1,6 @@
>  # Prefix and install folder
> -PREFIX              := @prefix@
> +prefix              := @prefix@
> +PREFIX              := $(prefix)
>  exec_prefix         := @exec_prefix@
>  LIBDIR              := @libdir@
>  
> --- a/tools/m4/default_lib.m4
> +++ b/tools/m4/default_lib.m4
> @@ -1,14 +1,19 @@
>  AC_DEFUN([AX_DEFAULT_LIB],
> -[AS_IF([test "\${exec_prefix}/lib" = "$libdir"],
> -    [AS_IF([test "$exec_prefix" = "NONE" && test "$prefix" != "NONE"],
> -        [exec_prefix=$prefix])
> -    AS_IF([test "$exec_prefix" = "NONE"], [exec_prefix=$ac_default_prefix])
> -    AS_IF([test -d "${exec_prefix}/lib64"], [
> +[AS_IF([test "\${exec_prefix}/lib" = "$libdir"], [
> +    AS_IF([test "$prefix" = "NONE"], [prefix=$ac_default_prefix])
> +    AS_IF([test "$exec_prefix" = "NONE"], [exec_prefix='${prefix}'])
> +    AS_IF([eval test -d "${exec_prefix}/lib64"], [
>          LIB_PATH="lib64"
>      ],[
>          LIB_PATH="lib"
>      ])
>  ], [
>      LIB_PATH="${libdir:`expr length "$exec_prefix" + 1`}"
> +    AS_IF([test -z "${libdir##\$\{exec_prefix\}/*}"], [
> +        LIB_PATH="${libdir:15}"
> +    ])
> +    AS_IF([test -z "${libdir##\$exec_prefix/*}"], [
> +        LIB_PATH="${libdir:13}"
> +    ])
>  ])
>  AC_SUBST(LIB_PATH)])
> 
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 15:12:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 15:12:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz7w3-00021t-VU; Wed, 08 Aug 2012 15:12:35 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Sz7w2-000219-C7
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 15:12:34 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1344438725!9220946!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9681 invoked from network); 8 Aug 2012 15:12:06 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-16.tower-27.messagelabs.com with SMTP;
	8 Aug 2012 15:12:06 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 08 Aug 2012 16:12:05 +0100
Message-Id: <50229DE40200007800093A8D@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Wed, 08 Aug 2012 16:12:04 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Stefano Stabellini" <Stefano.Stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
	<1344262325-26598-3-git-send-email-stefano.stabellini@eu.citrix.com>
	<5020011F0200007800092FD7@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208071257330.4645@kaball.uk.xensource.com>
	<50212C2B02000078000933CE@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208071942480.4645@kaball.uk.xensource.com>
	<50228ECD0200007800093A0B@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208081554170.21096@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1208081554170.21096@kaball.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	"Tim Deegan \(3P\)" <Tim.Deegan@citrix.com>
Subject: Re: [Xen-devel] [PATCH 3/5] xen: few more xen_ulong_t substitutions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 08.08.12 at 17:01, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
wrote:
> On Wed, 8 Aug 2012, Jan Beulich wrote:
>> >>> On 08.08.12 at 14:12, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
>> wrote:
>> > On Tue, 7 Aug 2012, Jan Beulich wrote:
>> >> > Considering that each field of a multicall_entry is usually passed as a
>> >> > hypercall parameter, they should all remain unsigned long.
>> >> 
>> >> That'll give you subtle bugs I'm afraid: do_memory_op()'s
>> >> encoding of a continuation start extent (into the 'cmd' value),
>> >> for example, depends on being able to store the full value into
>> >> the command field of the multicall structure. The limit checking
>> >> of the permitted number of extents therefore is different
>> >> between native (ULONG_MAX >> MEMOP_EXTENT_SHIFT) and
>> >> compat (UINT_MAX >> MEMOP_EXTENT_SHIFT). I would
>> >> neither find it very appealing to have do_memory_op() adjusted
>> >> for dealing with this new special case, nor am I sure that's the
>> >> only place your approach would cause problems if you excluded
>> >> the multicall structure from the model change.
>> > 
>> > Given the way the continuation is implemented, the same problem can
>> > also happen on x86.
>> 
>> No. The compat wrapper, as pointed out there, has a different
>> check on the maximum number of extents, and hence the
>> continuation index can't overflow.
> 
> Right. I meant the same conceptual problem: the guest passing a number of
> extents that is too big for the hypervisor to handle.

I don't think the hypervisor has a problem handling such large
amounts.

>> > In fact, considering that we don't use any compat code, and that
>> > do_memory_op has the following check:
>> > 
>> >     /* Is size too large for us to encode a continuation? */
>> >     if ( reservation.nr_extents > (ULONG_MAX >> MEMOP_EXTENT_SHIFT) )
>> >         return start_extent;
>> > 
>> > it would work as-is for ARM too.
>> 
>> Not afaict. For a 32-bit guest, but the above code executed in
>> a 64-bit hypervisor, the guest could pass in (theoretically)
>> UINT_MAX, which would pass this check, yet the eventual
>> continuation index would get truncated when stored in the
>> 32-bit hypercall operation field.
>  
> Actually, like Ian wrote, I expect that using the upper 32 bit part of
> the x0 register, only for continuations, would work just fine.
> 
> In any case we can make that check arch dependent and restrict the
> maximum number of extents on aarch64 to the same limit we have on
> aarch32.

We should even discuss lowering the limit universally to the
UINT_MAX-based value. I don't think anyone is actively using
such insane numbers.

And don't forget that I pointed out this issue only as an example
of possible problems with your intended approach to the
handling of multicalls.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 15:13:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 15:13:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz7wR-00025z-Iu; Wed, 08 Aug 2012 15:12:59 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1Sz7wQ-00025j-Mc
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 15:12:58 +0000
Received: from [85.158.143.35:22249] by server-2.bemta-4.messagelabs.com id
	DA/43-19021-AF182205; Wed, 08 Aug 2012 15:12:58 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-3.tower-21.messagelabs.com!1344438777!16777696!1
X-Originating-IP: [81.169.146.162]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MiA9PiAzNjIwMDU=\n,sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MiA9PiAzNjIwMDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6441 invoked from network); 8 Aug 2012 15:12:57 -0000
Received: from mo-p00-ob.rzone.de (HELO mo-p00-ob.rzone.de) (81.169.146.162)
	by server-3.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 8 Aug 2012 15:12:57 -0000
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+zrwiavkK6tmQaLfmztM8TOFGji0PEjyX
X-RZG-CLASS-ID: mo00
Received: from probook.site
	(dslb-088-065-080-112.pools.arcor-ip.net [88.65.80.112])
	by smtp.strato.de (josoe mo57) (RZmta 30.7 DYNA|AUTH)
	with (DHE-RSA-AES256-SHA encrypted) ESMTPA id w04ea6o78BH0Sx ;
	Wed, 8 Aug 2012 17:11:35 +0200 (CEST)
Received: by probook.site (Postfix, from userid 1000)
	id 3855518639; Wed,  8 Aug 2012 17:11:35 +0200 (CEST)
Date: Wed, 8 Aug 2012 17:11:35 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20120808151134.GA11659@aepfle.de>
References: <5022933B0200007800093A1F@nat28.tlf.novell.com>
	<502296310200007800093A35@nat28.tlf.novell.com>
	<20120808144500.GD5592@US-SEA-R8XVZTX>
	<502299650200007800093A68@nat28.tlf.novell.com>
	<1344437707.32142.38.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1344437707.32142.38.camel@zakaz.uk.xensource.com>
User-Agent: Mutt/1.5.21.rev5555 (2012-07-20)
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>, Matt Wilson <msw@amazon.com>,
	Jan Beulich <JBeulich@suse.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] tools: don't expand prefix and exec_prefix
 too early
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Aug 08, Ian Campbell wrote:

> FWIW I've just added it to http://wiki.xen.org/wiki/Xen_4.2_Release_Notes,
> which probably wouldn't have helped you but should help in general.

Maybe we can simplify configure like this (untested patch):

diff -r f9f47654484e configure
--- a/configure
+++ b/configure
@@ -1,2 +1,7 @@
 #!/bin/sh -e
-cd tools && ./configure $@
+_easy_lib64=
+if test "`arch`" = "x86_64"
+then
+	_easy_lib64=--libdir=/usr/lib64
+fi
+cd tools && ./configure ${_easy_lib64} $@

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 15:19:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 15:19:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz82d-0002b3-DQ; Wed, 08 Aug 2012 15:19:23 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Sz82c-0002as-Fq
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 15:19:22 +0000
Received: from [85.158.138.51:21813] by server-12.bemta-3.messagelabs.com id
	98/F0-21301-97382205; Wed, 08 Aug 2012 15:19:21 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-174.messagelabs.com!1344439161!22967574!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg1NTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20573 invoked from network); 8 Aug 2012 15:19:21 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-3.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Aug 2012 15:19:21 -0000
X-IronPort-AV: E=Sophos;i="4.77,733,1336348800"; d="scan'208";a="13912629"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Aug 2012 15:18:58 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Wed, 8 Aug 2012
	16:18:58 +0100
Message-ID: <1344439136.32142.44.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Olaf Hering <olaf@aepfle.de>
Date: Wed, 8 Aug 2012 16:18:56 +0100
In-Reply-To: <20120808151134.GA11659@aepfle.de>
References: <5022933B0200007800093A1F@nat28.tlf.novell.com>
	<502296310200007800093A35@nat28.tlf.novell.com>
	<20120808144500.GD5592@US-SEA-R8XVZTX>
	<502299650200007800093A68@nat28.tlf.novell.com>
	<1344437707.32142.38.camel@zakaz.uk.xensource.com>
	<20120808151134.GA11659@aepfle.de>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>, Matt Wilson <msw@amazon.com>,
	Jan Beulich <JBeulich@suse.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] tools: don't expand prefix and exec_prefix
 too early
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-08-08 at 16:11 +0100, Olaf Hering wrote:
> On Wed, Aug 08, Ian Campbell wrote:
> 
> > FWIW I've just added it to http://wiki.xen.org/wiki/Xen_4.2_Release_Notes,
> > which probably wouldn't have helped you but should help in general.
> 
> Maybe we can simplify configure like this (untested patch):

Ew, what a horribly opaque way to go about this.

Unless there is some common autoconf macro which all autoconf-using
projects use to get this sort of behaviour, we should just leave things
as is, i.e. our behaviour should match the behaviour of every other
autoconf-using project, not invent our own madness.

> 
> diff -r f9f47654484e configure
> --- a/configure
> +++ b/configure
> @@ -1,2 +1,7 @@
>  #!/bin/sh -e
> -cd tools && ./configure $@
> +_easy_lib64=
> +if test "`arch`" = "x86_64"
> +then
> +	_easy_lib64=--libdir=/usr/lib64
> +fi
> +cd tools && ./configure ${_easy_lib64} $@



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 15:23:19 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 15:23:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz86I-0002pJ-Lt; Wed, 08 Aug 2012 15:23:10 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Sz86H-0002oQ-CX
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 15:23:09 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1344439374!4490311!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg1NTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19054 invoked from network); 8 Aug 2012 15:22:58 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Aug 2012 15:22:58 -0000
X-IronPort-AV: E=Sophos;i="4.77,733,1336348800"; d="scan'208";a="13912729"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Aug 2012 15:22:53 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Wed, 8 Aug 2012
	16:22:52 +0100
Message-ID: <1344439371.32142.46.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Bei Guan <gbtju85@gmail.com>
Date: Wed, 8 Aug 2012 16:22:51 +0100
In-Reply-To: <CAEQjb-Rd2=DaxrxiK2TYzNNBH01w_5OgPeKxugpS26n4tGw4Yg@mail.gmail.com>
References: <CAEQjb-Rd2=DaxrxiK2TYzNNBH01w_5OgPeKxugpS26n4tGw4Yg@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Attilio Rao <attilio.rao@citrix.com>,
	"Andrei E. Warkentin" <andrey.warkentin@gmail.com>,
	Jordan Justen <jljusten@gmail.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] How to initialize the grant table in a HVM guest OS
 and its bios
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-08-08 at 16:10 +0100, Bei Guan wrote:
> Hi,
> 
> 
> In Mini-OS, the initialization of the grant table maps the grant table
> entries into an area called demand_map_area. The code is in the file
> mini-os/gnttab.c and it's like this 
> gnttab_table = map_frames(frames, NR_GRANT_FRAMES);
> 
> 
> It seems that getting the demand_map_area needs the information from
> the start_info page in Mini-OS (refer to the method allocate_ondemand
> in mini-os/arch/x86/mm.c). However, there is no start_info page in a
> HVM guest OS (maybe it's also true for its virtual BIOS, such as
> SeaBIOS and UEFI). So, how does one map the grant table entries in a
> HVM guest OS and its BIOS?

The PCI platform device provides a BAR region which is specifically set
aside for the guest OS to use as a hole for grant table mappings. Or, if
you control the physical address space, you can just allocate somewhere
to do the mapping.


> 
> 
> Any suggestions are appreciated.
> 
> 
> 
> 
> Best Regards,
> Bei Guan
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> Hi,
> 
> 
> In Mini-OS, the initialization of the grant table maps the grant table
> entries into an area called demand_map_area. The code is in
> mini-os/gnttab.c and looks like this:
> gnttab_table = map_frames(frames, NR_GRANT_FRAMES);
> 
> 
> It seems that getting the demand_map_area needs information from
> the start_info page in Mini-OS (refer to the function allocate_ondemand
> in mini-os/arch/x86/mm.c). However, there is no start_info page in an
> HVM guest OS (and probably none in its virtual BIOS either, such as
> SeaBIOS or a UEFI BIOS). So, how can the grant table entries be mapped
> in an HVM guest OS and its BIOS?

The PCI platform device provides a BAR region which is specifically set
aside for the guest OS to use as a hole for grant table mappings. Or, if
you control the physical address space, you can simply allocate a region
to do the mapping in.
to do the mapping.


> 
> 
> Any suggestions are appreciated.
> 
> 
> 
> 
> Best Regards,
> Bei Guan
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 15:33:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 15:33:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz8Fz-0003I5-Oq; Wed, 08 Aug 2012 15:33:11 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Santosh.Jodh@citrix.com>) id 1Sz8Fy-0003Hw-4q
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 15:33:10 +0000
Received: from [85.158.139.83:27307] by server-7.bemta-5.messagelabs.com id
	D7/00-00857-5B682205; Wed, 08 Aug 2012 15:33:09 +0000
X-Env-Sender: Santosh.Jodh@citrix.com
X-Msg-Ref: server-2.tower-182.messagelabs.com!1344439986!30882745!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjMzNzA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9742 invoked from network); 8 Aug 2012 15:33:07 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Aug 2012 15:33:07 -0000
X-IronPort-AV: E=Sophos;i="4.77,733,1336363200"; d="scan'208";a="33986332"
Received: from sjcpmailmx01.citrite.net ([10.216.14.74])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Aug 2012 11:33:05 -0400
Received: from SJCPMAILBOX01.citrite.net ([10.216.4.72]) by
	SJCPMAILMX01.citrite.net ([10.216.14.74]) with mapi;
	Wed, 8 Aug 2012 08:33:04 -0700
From: Santosh Jodh <Santosh.Jodh@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Wed, 8 Aug 2012 08:32:49 -0700
Thread-Topic: [Xen-devel] [PATCH] dump_p2m_table: For IOMMU
Thread-Index: Ac11N+a/peJlDKt2TAin6p+TNthfGQAQqt7w
Message-ID: <7914B38A4445B34AA16EB9F1352942F1012F0E1E9331@SJCPMAILBOX01.citrite.net>
References: <8b634f1786825a98ce70.1344350958@REDBLD-XS.ad.xensource.com>
	<5022320C0200007800093760@nat28.tlf.novell.com>
In-Reply-To: <5022320C0200007800093760@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
MIME-Version: 1.0
Cc: "wei.wang2@amd.com" <wei.wang2@amd.com>, "Tim \(Xen.org\)" <tim@xen.org>,
	"xiantao.zhang@intel.com" <xiantao.zhang@intel.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] dump_p2m_table: For IOMMU
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Thanks for catching that. I did my work in the XenServer tree (based on 4.1), which does not have that handler. I will look for the next free letter and resend the patch.

Thanks,
Santosh

-----Original Message-----
From: Jan Beulich [mailto:JBeulich@suse.com] 
Sent: Wednesday, August 08, 2012 12:32 AM
To: Santosh Jodh
Cc: wei.wang2@amd.com; xiantao.zhang@intel.com; xen-devel; Tim (Xen.org)
Subject: Re: [Xen-devel] [PATCH] dump_p2m_table: For IOMMU

>>> On 07.08.12 at 16:49, Santosh Jodh <santosh.jodh@citrix.com> wrote:
> +void __init setup_iommu_dump(void)
> +{
> +    register_keyhandler('I', &iommu_p2m_table);
> +}

Are you btw aware that a handler for 'I' already exists (in xen/arch/x86/hvm/irq.c)?

Jan
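
[Editor's note: on the collision Jan points out, Xen dispatches debug keys
through a per-key handler table (xen/common/keyhandler.c), so each ASCII key
can carry only one handler. The sketch below is a simplified, hypothetical
model of such a registry that refuses duplicate keys; the real
register_keyhandler has a different signature, so treat this purely as an
illustration of why the 'I' key must move to a free letter.]

```c
#include <assert.h>
#include <stddef.h>

typedef void (*keyhandler_fn_t)(unsigned char key);

/* Hypothetical per-key registry modelling a debug-key table:
 * one handler slot per ASCII key. */
static keyhandler_fn_t key_table[128];

/* Returns 0 on success, -1 if the key is already claimed (as 'I'
 * already is by xen/arch/x86/hvm/irq.c in the upstream tree). */
static int register_keyhandler_checked(unsigned char key, keyhandler_fn_t fn)
{
    if (key >= 128 || key_table[key] != NULL)
        return -1;
    key_table[key] = fn;
    return 0;
}
```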


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 15:34:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 15:34:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz8HS-0003MM-7u; Wed, 08 Aug 2012 15:34:42 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Sz8HR-0003MH-0j
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 15:34:41 +0000
Received: from [85.158.143.99:57664] by server-1.bemta-4.messagelabs.com id
	F9/D0-20198-01782205; Wed, 08 Aug 2012 15:34:40 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1344440078!30293952!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg1NTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31005 invoked from network); 8 Aug 2012 15:34:39 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-3.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Aug 2012 15:34:39 -0000
X-IronPort-AV: E=Sophos;i="4.77,733,1336348800"; d="scan'208";a="13913077"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Aug 2012 15:34:35 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 8 Aug 2012 16:34:35 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Sz8HK-000244-N3; Wed, 08 Aug 2012 15:34:34 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Sz8HK-0001ky-IT;
	Wed, 08 Aug 2012 16:34:34 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20514.34570.249805.733820@mariner.uk.xensource.com>
Date: Wed, 8 Aug 2012 16:34:34 +0100
To: Jan Beulich <JBeulich@suse.com>
In-Reply-To: <502296310200007800093A35@nat28.tlf.novell.com>
References: <5022933B0200007800093A1F@nat28.tlf.novell.com>
	<502296310200007800093A35@nat28.tlf.novell.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: Ian Campbell <Ian.Campbell@citrix.com>, Matt Wilson <msw@amazon.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] tools: don't expand prefix and exec_prefix
 too early
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jan Beulich writes ("Re: [Xen-devel] [PATCH] tools: don't expand prefix and exec_prefix too early"):
> In addition, it would have been _very_ nice if it had been
> prominently announced that with (I believe) 25594:ad08cd8e7097
> it is now _required_ to configure with --libdir on x86-64,

This is only required on FHS-noncompliant systems.  On (for example)
a Debian amd64 system the default directory /usr/lib is correct.

I expect we will revisit all of this in 4.3 since it might be a good
idea to support multiarch.

Ian.
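
[Editor's note: for readers hitting the change Jan describes, a hedged
example of the workaround. On an x86-64 distribution that installs 64-bit
libraries under /usr/lib64 rather than the Debian-style /usr/lib, pass
--libdir explicitly at configure time; the exact path is
distribution-specific and shown here only as an illustration.]

```shell
# Only needed where the FHS default /usr/lib is not the 64-bit libdir;
# adjust the path to match your distribution's convention.
./configure --prefix=/usr --libdir=/usr/lib64
```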

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 15:44:18 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 15:44:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz8QU-0003h2-99; Wed, 08 Aug 2012 15:44:02 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lists+xen@internecto.net>) id 1Sz8QR-0003gx-I6
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 15:44:00 +0000
Received: from [85.158.143.99:62113] by server-3.bemta-4.messagelabs.com id
	24/28-31486-E3982205; Wed, 08 Aug 2012 15:43:58 +0000
X-Env-Sender: lists+xen@internecto.net
X-Msg-Ref: server-2.tower-216.messagelabs.com!1344440638!25576328!1
X-Originating-IP: [176.9.245.29]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14924 invoked from network); 8 Aug 2012 15:43:58 -0000
Received: from polaris.internecto.net (HELO mx1.internecto.net) (176.9.245.29)
	by server-2.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 8 Aug 2012 15:43:58 -0000
Received: from localhost (localhost [127.0.0.1])
	by mx1.internecto.net (Postfix) with ESMTP id BB04DA02EB;
	Wed,  8 Aug 2012 15:43:57 +0000 (UTC)
X-Virus-Scanned: Debian amavisd-new at mail.internecto.net
Received: from mx1.internecto.net ([127.0.0.1])
	by localhost (mail.polaris.internecto.net [127.0.0.1]) (amavisd-new,
	port 10024)
	with ESMTP id 5oix4sgEm7Jo; Wed,  8 Aug 2012 15:43:57 +0000 (UTC)
Received: from internecto.net (5ED4FDEB.cm-7-5d.dynamic.ziggo.nl
	[94.212.253.235])
	(using TLSv1 with cipher DHE-RSA-AES128-SHA (128/128 bits))
	(Client did not present a certificate)
	(Authenticated sender: lists@internecto.net)
	by mx1.internecto.net (Postfix) with ESMTPSA id DAB6CA02EA;
	Wed,  8 Aug 2012 15:43:56 +0000 (UTC)
Date: Wed, 8 Aug 2012 17:43:55 +0200
From: Mark van Dijk <lists+xen@internecto.net>
To: xen-devel@lists.xen.org
Message-ID: <20120808174355.470746e0@internecto.net>
Organization: Internecto SIS
X-Mailer: Claws Mail 3.8.0 (GTK+ 2.24.10; x86_64-pc-linux-gnu)
Mime-Version: 1.0
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] Possible bug with huge unflushed console buffer
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Greetings list,

Yesterday the following scenario took place. In short, I had an HVM
host with a poorly configured firewall. It had been logging far more
than it should for days: iptables rules with -j LOG were writing to
the console. I told Konrad about this, but I said it was a PV host
when in fact it is an HVM host.

Anyway, when I saw what was happening I corrected the iptables rules
and removed the faulty log entries from the log files in /var/log.
Finally I issued '/etc/init.d/rsyslogd reload'.

Suddenly the server stopped responding entirely; not even ARP requests
were answered. I opened a console via xl (none was open before this)
and was greeted with an enormous number of log entries. It took about
five or six minutes before the whole buffer had been flushed to my
screen; then the server responded again.

Today I viewed syslog's messages in detail and found this and a lot of
similar entries:

http://pastebin.com/Ga7aE7hb

So the questions I have now are: Is this a bug? How is console history
handled, and where is the console's unread history stored: somewhere in
the hypervisor, or in dom0 or domU memory space? Is there a limit to
how much can be stored?
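
[Editor's note on the storage question: the hypervisor keeps its own console
output in a fixed-size circular buffer (conring, whose size can be set with
the conring_size boot parameter), while a PV console's traffic goes through a
small shared-memory ring drained by xenconsoled in dom0; once such a ring
fills, the writer has to wait for a consumer. The C model below is a hedged
illustration of a fixed-size ring with free-running producer/consumer
indices; all names and sizes are illustrative, not Xen's actual code.]

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Toy model of a fixed-size console ring: free-running 32-bit
 * producer/consumer indices over a power-of-two buffer. */
#define RING_SIZE 8  /* must be a power of two */
#define MASK(idx) ((idx) & (RING_SIZE - 1))

struct ring {
    char buf[RING_SIZE];
    unsigned int prod, cons;  /* free-running indices */
};

/* Returns the number of bytes actually queued; stops when full, i.e.
 * a producer with no consumer eventually stalls rather than losing
 * data -- consistent with a large backlog being drained only once a
 * console reader attaches. */
static size_t ring_write(struct ring *r, const char *data, size_t len)
{
    size_t n = 0;
    while (n < len && r->prod - r->cons < RING_SIZE)
        r->buf[MASK(r->prod++)] = data[n++];
    return n;
}

static size_t ring_read(struct ring *r, char *out, size_t len)
{
    size_t n = 0;
    while (n < len && r->cons != r->prod)
        out[n++] = r->buf[MASK(r->cons++)];
    return n;
}
```

Under this model, the multi-minute flush described above matches a writer
that had filled the ring and could only make progress as 'xl console'
consumed the backlog.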

The behaviour I encountered, i.e. a system lock-up, seems to suggest
some kind of buffer overflow, and it can be triggered by something as
simple as 'iptables -I INPUT -j LOG'. Perhaps it would be wise to
consider an 'xl console -c' option to clear the console's history; then
at least I could exit the console and discard the unsent messages...

I am not receiving xen-devel messages, so please CC me on replies.
Thank you.

-- 
Stay in touch,
Mark van Dijk.                  ,-----------------------------------
-------------------------------'           Wed Aug 08 15:29 UTC 2012
Today is Setting Orange, the 1st day of Bureaucracy in the YOLD 3178

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 15:49:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 15:49:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
From xen-devel-bounces@lists.xen.org Wed Aug 08 15:49:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 15:49:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz8VG-0003oY-0b; Wed, 08 Aug 2012 15:48:58 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <gbtju85@gmail.com>) id 1Sz8VF-0003oQ-53
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 15:48:57 +0000
Received: from [85.158.139.83:11134] by server-6.bemta-5.messagelabs.com id
	3D/3D-27759-86A82205; Wed, 08 Aug 2012 15:48:56 +0000
X-Env-Sender: gbtju85@gmail.com
X-Msg-Ref: server-3.tower-182.messagelabs.com!1344440935!28985996!1
X-Originating-IP: [209.85.215.45]
X-SpamReason: No, hits=0.9 required=7.0 tests=HTML_40_50,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23374 invoked from network); 8 Aug 2012 15:48:55 -0000
Received: from mail-lpp01m010-f45.google.com (HELO
	mail-lpp01m010-f45.google.com) (209.85.215.45)
	by server-3.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Aug 2012 15:48:55 -0000
Received: by lagz14 with SMTP id z14so550353lag.32
	for <xen-devel@lists.xen.org>; Wed, 08 Aug 2012 08:48:54 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=r5ci43unEI8mXoLavWpI5wq0HURywlQdiLkkiUSclI0=;
	b=jr1KPNkhnJEpW9o2YRUwXaCmhh4TF+LU7ZiJGeVku9Q1Gxi7IwDU7Px1UdIXVxnXtW
	rijsM4csj86J3GC2OjQf1MLCHrusVquf3HpdJbF360li4SnXN7qMZ3NUCAwgj9u7Y5RV
	iRXyN7AKA7J+cBZHewlKm/nSlQmt86mePHrYZyuUgror4pZGrZVs54wxO+JWmfju28fp
	84oxHhwZgg1SbmZNbKjHE6UryqmDcg5OjLgDmorMLBbi2EgfaYkBlOs27tljPzyNuSfl
	+D+WsPXWp28jzH8PctAQFN6IJcjYcGcCvyryBInEM320tNsKK6dq2APwx3/JExb0TFqC
	eFfA==
MIME-Version: 1.0
Received: by 10.112.9.133 with SMTP id z5mr4313921lba.69.1344440934728; Wed,
	08 Aug 2012 08:48:54 -0700 (PDT)
Received: by 10.114.23.194 with HTTP; Wed, 8 Aug 2012 08:48:54 -0700 (PDT)
In-Reply-To: <1344439371.32142.46.camel@zakaz.uk.xensource.com>
References: <CAEQjb-Rd2=DaxrxiK2TYzNNBH01w_5OgPeKxugpS26n4tGw4Yg@mail.gmail.com>
	<1344439371.32142.46.camel@zakaz.uk.xensource.com>
Date: Wed, 8 Aug 2012 23:48:54 +0800
Message-ID: <CAEQjb-T9Jg_9EjHi7fKUABBW3_PGNYgjU+cVGTLZ-ZiLA29AqA@mail.gmail.com>
From: Bei Guan <gbtju85@gmail.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: Attilio Rao <attilio.rao@citrix.com>,
	"Andrei E. Warkentin" <andrey.warkentin@gmail.com>,
	Jordan Justen <jljusten@gmail.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] How to initialize the grant table in a HVM guest OS
 and its bios
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4068623418670276297=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4068623418670276297==
Content-Type: multipart/alternative; boundary=e0cb4efe2d645603d904c6c30c42

--e0cb4efe2d645603d904c6c30c42
Content-Type: text/plain; charset=ISO-8859-1

2012/8/8 Ian Campbell <Ian.Campbell@citrix.com>

> On Wed, 2012-08-08 at 16:10 +0100, Bei Guan wrote:
> > Hi,
> >
> >
> > In Mini-OS, the initialization of the grant table maps the grant table
> > entries into an area called demand_map_area. The code is in the file
> > mini-os/gnttab.c and it's like this
> > gnttab_table = map_frames(frames, NR_GRANT_FRAMES);
> >
> >
> > It seems that getting the demand_map_area needs the information from
> > the start_info page in Mini-OS (refer to the method allocate_ondemand
> > in mini-os/arch/x86/mm.c). However, there is no start_info page in a
> > HVM guest OS (maybe it's also true for its virtual bios, such as
> > seabios and uefi bios). So, how to map the grant table entries in a
> > HVM guest OS and its bios?
>
> The PCI Platform device provides a BAR region which is specifically set
> aside for the guest OS to use as a hole for grant tab mappings. Or if
> you control the physical address space you can just allocate somewhere
> to do the mapping.
>
Hi Ian,

Thank you very much for your help.
Is there any example code for initializing the grant table in HVM that I
can refer to?


Thanks,
Bei Guan



>
>
> >
> >
> > Any suggestions are appreciated.
> >
> >
> >
> >
> > Best Regards,
> > Bei Guan
> >
>
>
>


-- 
Best Regards,
Bei Guan

--e0cb4efe2d645603d904c6c30c42--


--===============4068623418670276297==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4068623418670276297==--


From xen-devel-bounces@lists.xen.org Wed Aug 08 15:53:47 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 15:53:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz8Zh-0003xb-NO; Wed, 08 Aug 2012 15:53:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Sz8Zg-0003xW-GS
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 15:53:32 +0000
Received: from [85.158.139.83:33377] by server-6.bemta-5.messagelabs.com id
	08/75-27759-B7B82205; Wed, 08 Aug 2012 15:53:31 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-9.tower-182.messagelabs.com!1344441211!30213975!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10698 invoked from network); 8 Aug 2012 15:53:31 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-9.tower-182.messagelabs.com with SMTP;
	8 Aug 2012 15:53:31 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 08 Aug 2012 16:53:30 +0100
Message-Id: <5022A7980200007800093AD9@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Wed, 08 Aug 2012 16:53:28 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Matt Wilson" <msw@amazon.com>
References: <5022933B0200007800093A1F@nat28.tlf.novell.com>
	<20120808151037.GE5592@US-SEA-R8XVZTX>
In-Reply-To: <20120808151037.GE5592@US-SEA-R8XVZTX>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] tools: don't expand prefix and exec_prefix
 too early
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 08.08.12 at 17:10, Matt Wilson <msw@amazon.com> wrote:
> On Wed, Aug 08, 2012 at 07:26:35AM -0700, Jan Beulich wrote:
>> A comment in tools/configure says that it is intended for these to be
>> command line overridable, so they shouldn't get expanded at configure
>> time.
> 
> I'm not sure which comment you're referencing, or which command line
> it's intended that you're able to override prefix/exec_prefix. Could
> you point it out? What's not working?

The comment (around line 790 currently) reads
"# Installation directory options.
 # These are left unexpanded so users can "make install exec_prefix=/foo"
 # and all the variables that are supposed to be based on exec_prefix
 # by default will actually change.
 # Use braces instead of parens because sh, perl, etc. also accept them.
 # (The list follows the same order as the GNU Coding Standards.)"

The thing that wasn't working for me was passing
--libdir='${exec_prefix}'/lib64 to ./configure.
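The unexpanded-variable behaviour that comment depends on can be demonstrated in plain sh; the paths here are illustrative only:

```shell
#!/bin/sh
# configure stores the literal, unexpanded string in libdir
# (single quotes keep ${exec_prefix} from expanding here):
libdir='${exec_prefix}/lib64'

# Later, "make install exec_prefix=/foo" sets exec_prefix and the
# install recipe finally expands libdir -- emulated here with eval:
exec_prefix=/foo
eval "expanded=\"$libdir\""
echo "$expanded"   # -> /foo/lib64
```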

>> The patch fixes tools/m4/default_lib.m4, which is as far as I can see
>> myself taking this; but imo that file is flawed altogether and should
>> rather be removed:
>> - setting prefix and exec_prefix to default values is being done later
>>   in tools/configure anyway
>> - setting LIB_PATH based on the (non-)existence of a lib64 directory
>>   underneath ${exec_prefix} is plain wrong (it can obviously exist on a
>>   32-bit installation)
>> - I wasn't able to locate any use of LIB_PATH
> 
> I believe that the LIB_PATH portions can now be removed. I also
> believe you're right that all of this can be removed. Did you attempt
> that as well?

No, I didn't, not least because I don't have the right autoconf
version around and didn't dare to re-generate tools/configure
myself (instead I made the resulting adjustments manually).

>> This will require tools/configure to be re-generated.
> 
> This is typically part of the same commit.

Sure, it's just that generally one wouldn't include generated parts
in the patch, but would expect them to be produced when committing
(and, as said above, I'd have to rely on the tools maintainers
for this so as not to risk corrupting something).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 15:55:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 15:55:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz8bh-000468-Ok; Wed, 08 Aug 2012 15:55:37 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Sz8bh-00045o-4I
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 15:55:37 +0000
Received: from [85.158.143.99:27756] by server-2.bemta-4.messagelabs.com id
	D4/5F-19021-8FB82205; Wed, 08 Aug 2012 15:55:36 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-10.tower-216.messagelabs.com!1344441334!24478700!2
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg1NTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7803 invoked from network); 8 Aug 2012 15:55:35 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-10.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Aug 2012 15:55:35 -0000
X-IronPort-AV: E=Sophos;i="4.77,733,1336348800"; d="scan'208";a="13913496"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Aug 2012 15:55:35 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 8 Aug 2012 16:55:35 +0100
Date: Wed, 8 Aug 2012 16:55:11 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Jan Beulich <JBeulich@suse.com>
In-Reply-To: <50229DE40200007800093A8D@nat28.tlf.novell.com>
Message-ID: <alpine.DEB.2.02.1208081630280.21096@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
	<1344262325-26598-3-git-send-email-stefano.stabellini@eu.citrix.com>
	<5020011F0200007800092FD7@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208071257330.4645@kaball.uk.xensource.com>
	<50212C2B02000078000933CE@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208071942480.4645@kaball.uk.xensource.com>
	<50228ECD0200007800093A0B@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208081554170.21096@kaball.uk.xensource.com>
	<50229DE40200007800093A8D@nat28.tlf.novell.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "Tim Deegan \(3P\)" <Tim.Deegan@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 3/5] xen: few more xen_ulong_t substitutions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> >> > do_memory_op has the following check:
> >> > 
> >> >     /* Is size too large for us to encode a continuation? */
> >> >     if ( reservation.nr_extents > (ULONG_MAX >> MEMOP_EXTENT_SHIFT) )
> >> >         return start_extent;
> >> > 
> >> > it would work as-is for ARM too.
> >> 
> >> Not afaict. For a 32-bit guest, but the above code executed in
> >> a 64-bit hypervisor, the guest could pass in (theoretically)
> >> UINT_MAX, which would pass this check, yet the eventual
> >> continuation index would get truncated when stored in the
> >> 32-bit hypercall operation field.
> >  
> > Actually, like Ian wrote, I expect that using the upper 32 bit part of
> > the x0 register, only for continuations, would work just fine.
> > 
> > In any case we can make that check arch dependent and restrict the
> > maximum number of extents on aarch64 to the same limit we have on
> > aarch32.
> 
> We should even discuss lowering the limit universally to the
> UINT_MAX based value. I don't think anyone is actively using
> such insane numbers.

I am OK with that, it would make things easier for me.


> And don't forget that I pointed out this issue only as example
> of possible problems with your intended approach to the
> handling of multicalls.

I just went through the hypercall continuations in common code, and
do_memory_op is the only one that uses this "trick" of encoding the
start extent in another parameter.
I think we'll have to deal with these issues as we find them.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 15:55:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 15:55:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz8bh-000468-Ok; Wed, 08 Aug 2012 15:55:37 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Sz8bh-00045o-4I
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 15:55:37 +0000
Received: from [85.158.143.99:27756] by server-2.bemta-4.messagelabs.com id
	D4/5F-19021-8FB82205; Wed, 08 Aug 2012 15:55:36 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-10.tower-216.messagelabs.com!1344441334!24478700!2
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg1NTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7803 invoked from network); 8 Aug 2012 15:55:35 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-10.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Aug 2012 15:55:35 -0000
X-IronPort-AV: E=Sophos;i="4.77,733,1336348800"; d="scan'208";a="13913496"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Aug 2012 15:55:35 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 8 Aug 2012 16:55:35 +0100
Date: Wed, 8 Aug 2012 16:55:11 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Jan Beulich <JBeulich@suse.com>
In-Reply-To: <50229DE40200007800093A8D@nat28.tlf.novell.com>
Message-ID: <alpine.DEB.2.02.1208081630280.21096@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
	<1344262325-26598-3-git-send-email-stefano.stabellini@eu.citrix.com>
	<5020011F0200007800092FD7@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208071257330.4645@kaball.uk.xensource.com>
	<50212C2B02000078000933CE@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208071942480.4645@kaball.uk.xensource.com>
	<50228ECD0200007800093A0B@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208081554170.21096@kaball.uk.xensource.com>
	<50229DE40200007800093A8D@nat28.tlf.novell.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "Tim Deegan \(3P\)" <Tim.Deegan@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 3/5] xen: few more xen_ulong_t substitutions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> >> > do_memory_op has the following check:
> >> > 
> >> >     /* Is size too large for us to encode a continuation? */
> >> >     if ( reservation.nr_extents > (ULONG_MAX >> MEMOP_EXTENT_SHIFT) )
> >> >         return start_extent;
> >> > 
> >> > it would work as-is for ARM too.
> >> 
> >> Not afaict. For a 32-bit guest, but the above code executed in
> >> a 64-bit hypervisor, the guest could pass in (theoretically)
> >> UINT_MAX, which would pass this check, yet the eventual
> >> continuation index would get truncated when stored in the
> >> 32-bit hypercall operation field.
> >  
> > Actually, like Ian wrote, I expect that using the upper 32 bit part of
> > the x0 register, only for continuations, would work just fine.
> > 
> > In any case we can make that check arch dependent and restrict the
> > maximum number of extents on aarch64 to the same limit we have on
> > aarch32.
> 
> We should even discuss lowering the limit universally to the
> UINT_MAX based value. I don't think anyone is actively using
> such insane numbers.

I am OK with that, it would make things easier for me.


> And don't forget that I pointed out this issue only as example
> of possible problems with your intended approach to the
> handling of multicalls.

I just went through the hypercall continuations in common code, and
do_memory_op is the only one that uses this "trick" of encoding the
start extent in another parameter.
I think we'll have to deal with these issues as we find them.
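The truncation Jan describes can be sketched outside the hypervisor. This is a
hypothetical standalone model, not Xen code; only the shift value 6 is taken
from the public headers (MEMOP_EXTENT_SHIFT in xen/include/public/memory.h):

```c
#include <assert.h>
#include <stdint.h>

/* do_memory_op encodes the continuation start extent in the upper bits
 * of the hypercall command register.  The shift matches the public
 * header; everything else here is an illustrative model. */
#define MEMOP_EXTENT_SHIFT 6

/* 64-bit hypervisor side: pack cmd and start_extent into one value. */
static uint64_t encode_continuation(unsigned int cmd, uint64_t start_extent)
{
    return cmd | (start_extent << MEMOP_EXTENT_SHIFT);
}

/* 32-bit guest side: the register holding the encoded value is only
 * 32 bits wide, so high bits of the start extent are silently lost. */
static uint64_t decode_in_32bit_guest(uint64_t encoded)
{
    uint32_t reg = (uint32_t)encoded;   /* truncation happens here */
    return reg >> MEMOP_EXTENT_SHIFT;
}
```

With start_extent = 1 << 30 the encoded value needs 36 bits, so the 32-bit
guest would resume from extent 0 instead of 2^30; bounding nr_extents by a
UINT_MAX-based limit, as proposed above, rules this out.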

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 15:55:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 15:55:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz8bh-000461-Cv; Wed, 08 Aug 2012 15:55:37 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1Sz8bg-00045o-M0
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 15:55:36 +0000
Received: from [85.158.143.99:48495] by server-2.bemta-4.messagelabs.com id
	52/5F-19021-8FB82205; Wed, 08 Aug 2012 15:55:36 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-216.messagelabs.com!1344441334!24478700!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg1NTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7776 invoked from network); 8 Aug 2012 15:55:35 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-10.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Aug 2012 15:55:35 -0000
X-IronPort-AV: E=Sophos;i="4.77,733,1336348800"; d="scan'208";a="13913495"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Aug 2012 15:55:34 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Wed, 8 Aug 2012
	16:55:34 +0100
Message-ID: <1344441333.32142.48.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Bei Guan <gbtju85@gmail.com>
Date: Wed, 8 Aug 2012 16:55:33 +0100
In-Reply-To: <CAEQjb-T9Jg_9EjHi7fKUABBW3_PGNYgjU+cVGTLZ-ZiLA29AqA@mail.gmail.com>
References: <CAEQjb-Rd2=DaxrxiK2TYzNNBH01w_5OgPeKxugpS26n4tGw4Yg@mail.gmail.com>
	<1344439371.32142.46.camel@zakaz.uk.xensource.com>
	<CAEQjb-T9Jg_9EjHi7fKUABBW3_PGNYgjU+cVGTLZ-ZiLA29AqA@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Attilio Rao <attilio.rao@citrix.com>,
	"Andrei E. Warkentin" <andrey.warkentin@gmail.com>,
	Jordan Justen <jljusten@gmail.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] How to initialize the grant table in a HVM guest OS
 and its bios
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-08-08 at 16:48 +0100, Bei Guan wrote:

> Thank you very much for your help.
> Is there any example code of initialization of grant table in HVM that
> I can refer to?

The PVHVM support in upstream Linux would be a good place to look.

So might the code in the xen tree in unmodified_drivers/linux-2.6/

IIRC Daniel got grant tables working in SeaBIOS last summer for GSoC so
you might also find some useful examples in
git://github.com/evildani/seabios_patch.git
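For orientation, the PVHVM path boils down to one XENMEM_add_to_physmap call
per grant-table frame. A minimal standalone sketch with a stubbed hypercall
(struct layout and constants follow the public headers; the stub, frame count
and base gpfn are purely illustrative):

```c
#include <assert.h>
#include <stdint.h>

typedef uint16_t domid_t;
typedef unsigned long xen_pfn_t;
typedef unsigned long xen_ulong_t;

#define DOMID_SELF              0x7FF0U
#define XENMEM_add_to_physmap   7
#define XENMAPSPACE_grant_table 1

/* From xen/include/public/memory.h */
struct xen_add_to_physmap {
    domid_t domid;        /* which domain to change */
    unsigned int space;   /* source mapping space */
    xen_ulong_t idx;      /* index into source mapping space */
    xen_pfn_t gpfn;       /* GPFN where the source page should appear */
};

/* Stub standing in for HYPERVISOR_memory_op(); records the last request. */
static struct xen_add_to_physmap last_req;
static int memory_op_stub(int cmd, struct xen_add_to_physmap *xatp)
{
    assert(cmd == XENMEM_add_to_physmap);
    last_req = *xatp;
    return 0;
}

/* Ask Xen to place each grant-table frame at a guest pseudo-physical
 * address, roughly what the PVHVM grant-table init in Linux does. */
static int map_grant_frames(xen_pfn_t base_gpfn, unsigned int nr_frames)
{
    for (unsigned int i = 0; i < nr_frames; i++) {
        struct xen_add_to_physmap xatp = {
            .domid = DOMID_SELF,
            .space = XENMAPSPACE_grant_table,
            .idx   = i,
            .gpfn  = base_gpfn + i,
        };
        int rc = memory_op_stub(XENMEM_add_to_physmap, &xatp);
        if (rc)
            return rc;
    }
    return 0;
}
```

After the frames are mapped, the guest can treat the region as its
grant_entry array; the real implementations linked above handle frame-count
discovery and error paths that this sketch omits.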

Ian
> 
> Thanks,
> Bei Guan
> 
>         > Any suggestions is appreciated.
>         >
>         > Best Regards,
>         > Bei Guan
> 
> -- 
> Best Regards,
> Bei Guan



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 15:56:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 15:56:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz8cD-0004Ai-5v; Wed, 08 Aug 2012 15:56:09 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Santosh.Jodh@citrix.com>) id 1Sz8cB-0004AQ-3w
	for xen-devel@lists.xensource.com; Wed, 08 Aug 2012 15:56:07 +0000
Received: from [85.158.139.83:60222] by server-7.bemta-5.messagelabs.com id
	AA/77-00857-61C82205; Wed, 08 Aug 2012 15:56:06 +0000
X-Env-Sender: Santosh.Jodh@citrix.com
X-Msg-Ref: server-9.tower-182.messagelabs.com!1344441364!30214450!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjMzNzA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19928 invoked from network); 8 Aug 2012 15:56:05 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Aug 2012 15:56:05 -0000
X-IronPort-AV: E=Sophos;i="4.77,733,1336363200"; d="scan'208";a="33989998"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Aug 2012 11:56:03 -0400
Received: from REDBLD-XS.ad.xensource.com (10.232.6.25) by
	smtprelay.citrix.com (10.13.107.66) with Microsoft SMTP Server id
	8.3.213.0; Wed, 8 Aug 2012 11:56:03 -0400
MIME-Version: 1.0
X-Mercurial-Node: 68946c7e1d67f823b072cc385e58b059108e0728
Message-ID: <68946c7e1d67f823b072.1344441362@REDBLD-XS.ad.xensource.com>
User-Agent: Mercurial-patchbomb/1.6.4
Date: Wed, 8 Aug 2012 08:56:02 -0700
From: Santosh Jodh <santosh.jodh@citrix.com>
To: <xen-devel@lists.xensource.com>
Cc: wei.wang2@amd.com, tim@xen.org, xiantao.zhang@intel.com
Subject: [Xen-devel] [PATCH] dump_p2m_table: For IOMMU
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

New key handler 'o' to dump the IOMMU p2m table for each domain;
domain0's table is skipped. Adds Intel- and AMD-specific iommu_ops
handlers for dumping the p2m table.

Signed-off-by: Santosh Jodh <santosh.jodh@citrix.com>

diff -r 472fc515a463 -r 68946c7e1d67 xen/drivers/passthrough/amd/pci_amd_iommu.c
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c	Tue Aug 07 18:37:31 2012 +0100
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c	Wed Aug 08 08:55:49 2012 -0700
@@ -22,6 +22,7 @@
 #include <xen/pci.h>
 #include <xen/pci_regs.h>
 #include <xen/paging.h>
+#include <xen/softirq.h>
 #include <asm/hvm/iommu.h>
 #include <asm/amd-iommu.h>
 #include <asm/hvm/svm/amd-iommu-proto.h>
@@ -512,6 +513,69 @@ static int amd_iommu_group_id(u16 seg, u
 
 #include <asm/io_apic.h>
 
+static void amd_dump_p2m_table_level(struct page_info* pg, int level, u64 gpa)
+{
+    u64 address;
+    void *table_vaddr, *pde;
+    u64 next_table_maddr;
+    int index, next_level, present;
+    u32 *entry;
+
+    if ( level <= 1 )
+        return;
+
+    table_vaddr = __map_domain_page(pg);
+    if ( table_vaddr == NULL )
+    {
+        printk("Failed to map IOMMU domain page %-16lx\n", page_to_maddr(pg));
+        return;
+    }
+
+    for ( index = 0; index < PTE_PER_TABLE_SIZE; index++ )
+    {
+        if ( !(index % 2) )
+            process_pending_softirqs();
+
+        pde = table_vaddr + (index * IOMMU_PAGE_TABLE_ENTRY_SIZE);
+        next_table_maddr = amd_iommu_get_next_table_from_pte(pde);
+        entry = (u32*)pde;
+
+        next_level = get_field_from_reg_u32(entry[0],
+                                            IOMMU_PDE_NEXT_LEVEL_MASK,
+                                            IOMMU_PDE_NEXT_LEVEL_SHIFT);
+
+        present = get_field_from_reg_u32(entry[0],
+                                         IOMMU_PDE_PRESENT_MASK,
+                                         IOMMU_PDE_PRESENT_SHIFT);
+
+        address = gpa + amd_offset_level_address(index, level);
+        if ( (next_table_maddr != 0) && (next_level != 0)
+            && present )
+        {
+            amd_dump_p2m_table_level(
+                maddr_to_page(next_table_maddr), level - 1, address);
+        }
+
+        if ( present )
+        {
+            printk("gfn: %-16lx  mfn: %-16lx\n",
+                   address, next_table_maddr);
+        }
+    }
+
+    unmap_domain_page(table_vaddr);
+}
+
+static void amd_dump_p2m_table(struct domain *d)
+{
+    struct hvm_iommu *hd  = domain_hvm_iommu(d);
+
+    if ( !hd->root_table ) 
+        return;
+
+    amd_dump_p2m_table_level(hd->root_table, hd->paging_mode, 0);
+}
+
 const struct iommu_ops amd_iommu_ops = {
     .init = amd_iommu_domain_init,
     .dom0_init = amd_iommu_dom0_init,
@@ -531,4 +595,5 @@ const struct iommu_ops amd_iommu_ops = {
     .resume = amd_iommu_resume,
     .share_p2m = amd_iommu_share_p2m,
     .crash_shutdown = amd_iommu_suspend,
+    .dump_p2m_table = amd_dump_p2m_table,
 };
diff -r 472fc515a463 -r 68946c7e1d67 xen/drivers/passthrough/iommu.c
--- a/xen/drivers/passthrough/iommu.c	Tue Aug 07 18:37:31 2012 +0100
+++ b/xen/drivers/passthrough/iommu.c	Wed Aug 08 08:55:49 2012 -0700
@@ -18,6 +18,7 @@
 #include <asm/hvm/iommu.h>
 #include <xen/paging.h>
 #include <xen/guest_access.h>
+#include <xen/keyhandler.h>
 #include <xen/softirq.h>
 #include <xsm/xsm.h>
 
@@ -54,6 +55,8 @@ bool_t __read_mostly amd_iommu_perdev_in
 
 DEFINE_PER_CPU(bool_t, iommu_dont_flush_iotlb);
 
+void setup_iommu_dump(void);
+
 static void __init parse_iommu_param(char *s)
 {
     char *ss;
@@ -119,6 +122,7 @@ void __init iommu_dom0_init(struct domai
     if ( !iommu_enabled )
         return;
 
+    setup_iommu_dump();
     d->need_iommu = !!iommu_dom0_strict;
     if ( need_iommu(d) )
     {
@@ -654,6 +658,46 @@ int iommu_do_domctl(
     return ret;
 }
 
+static void iommu_dump_p2m_table(unsigned char key)
+{
+    struct domain *d;
+    const struct iommu_ops *ops;
+
+    if ( !iommu_enabled )
+    {
+        printk("IOMMU not enabled!\n");
+        return;
+    }
+
+    ops = iommu_get_ops();
+    for_each_domain(d)
+    {
+        if ( !d->domain_id )
+            continue;
+
+        if ( iommu_use_hap_pt(d) )
+        {
+            printk("\ndomain%d IOMMU p2m table shared with MMU: \n", d->domain_id);
+            continue;
+        }
+
+        printk("\ndomain%d IOMMU p2m table: \n", d->domain_id);
+        ops->dump_p2m_table(d);
+    }
+}
+
+static struct keyhandler iommu_p2m_table = {
+    .diagnostic = 0,
+    .u.fn = iommu_dump_p2m_table,
+    .desc = "dump iommu p2m table"
+};
+
+void __init setup_iommu_dump(void)
+{
+    register_keyhandler('o', &iommu_p2m_table);
+}
+
+
 /*
  * Local variables:
  * mode: C
diff -r 472fc515a463 -r 68946c7e1d67 xen/drivers/passthrough/vtd/iommu.c
--- a/xen/drivers/passthrough/vtd/iommu.c	Tue Aug 07 18:37:31 2012 +0100
+++ b/xen/drivers/passthrough/vtd/iommu.c	Wed Aug 08 08:55:49 2012 -0700
@@ -31,6 +31,7 @@
 #include <xen/pci.h>
 #include <xen/pci_regs.h>
 #include <xen/keyhandler.h>
+#include <xen/softirq.h>
 #include <asm/msi.h>
 #include <asm/irq.h>
 #if defined(__i386__) || defined(__x86_64__)
@@ -2365,6 +2366,56 @@ static void vtd_resume(void)
     }
 }
 
+static void vtd_dump_p2m_table_level(u64 pt_maddr, int level, u64 gpa)
+{
+    u64 address;
+    int i;
+    struct dma_pte *pt_vaddr, *pte;
+    int next_level;
+
+    if ( pt_maddr == 0 )
+        return;
+
+    pt_vaddr = (struct dma_pte *)map_vtd_domain_page(pt_maddr);
+    if ( pt_vaddr == NULL )
+    {
+        printk("Failed to map VT-D domain page %-16lx\n", pt_maddr);
+        return;
+    }
+
+    next_level = level - 1;
+    for ( i = 0; i < PTE_NUM; i++ )
+    {
+        if ( !(i % 2) )
+            process_pending_softirqs();
+
+        pte = &pt_vaddr[i];
+        if ( !dma_pte_present(*pte) )
+            continue;
+        
+        address = gpa + offset_level_address(i, level);
+        if ( next_level >= 1 )
+            vtd_dump_p2m_table_level(dma_pte_addr(*pte), next_level, address);
+
+        if ( level == 1 )
+            printk("gfn: %-16lx mfn: %-16lx superpage=%d\n", 
+                    address >> PAGE_SHIFT_4K, pte->val >> PAGE_SHIFT_4K, dma_pte_superpage(*pte)? 1 : 0);
+    }
+
+    unmap_vtd_domain_page(pt_vaddr);
+}
+
+static void vtd_dump_p2m_table(struct domain *d)
+{
+    struct hvm_iommu *hd;
+
+    if ( list_empty(&acpi_drhd_units) )
+        return;
+
+    hd = domain_hvm_iommu(d);
+    vtd_dump_p2m_table_level(hd->pgd_maddr, agaw_to_level(hd->agaw), 0);
+}
+
 const struct iommu_ops intel_iommu_ops = {
     .init = intel_iommu_domain_init,
     .dom0_init = intel_iommu_dom0_init,
@@ -2387,6 +2438,7 @@ const struct iommu_ops intel_iommu_ops =
     .crash_shutdown = vtd_crash_shutdown,
     .iotlb_flush = intel_iommu_iotlb_flush,
     .iotlb_flush_all = intel_iommu_iotlb_flush_all,
+    .dump_p2m_table = vtd_dump_p2m_table,
 };
 
 /*
diff -r 472fc515a463 -r 68946c7e1d67 xen/drivers/passthrough/vtd/iommu.h
--- a/xen/drivers/passthrough/vtd/iommu.h	Tue Aug 07 18:37:31 2012 +0100
+++ b/xen/drivers/passthrough/vtd/iommu.h	Wed Aug 08 08:55:49 2012 -0700
@@ -248,6 +248,8 @@ struct context_entry {
 #define level_to_offset_bits(l) (12 + (l - 1) * LEVEL_STRIDE)
 #define address_level_offset(addr, level) \
             ((addr >> level_to_offset_bits(level)) & LEVEL_MASK)
+#define offset_level_address(offset, level) \
+            ((u64)(offset) << level_to_offset_bits(level))
 #define level_mask(l) (((u64)(-1)) << level_to_offset_bits(l))
 #define level_size(l) (1 << level_to_offset_bits(l))
 #define align_to_level(addr, l) ((addr + level_size(l) - 1) & level_mask(l))
@@ -277,6 +279,7 @@ struct dma_pte {
 #define dma_set_pte_addr(p, addr) do {\
             (p).val |= ((addr) & PAGE_MASK_4K); } while (0)
 #define dma_pte_present(p) (((p).val & 3) != 0)
+#define dma_pte_superpage(p) (((p).val & (1<<7)) != 0)
 
 /* interrupt remap entry */
 struct iremap_entry {
diff -r 472fc515a463 -r 68946c7e1d67 xen/include/asm-x86/hvm/svm/amd-iommu-defs.h
--- a/xen/include/asm-x86/hvm/svm/amd-iommu-defs.h	Tue Aug 07 18:37:31 2012 +0100
+++ b/xen/include/asm-x86/hvm/svm/amd-iommu-defs.h	Wed Aug 08 08:55:49 2012 -0700
@@ -38,6 +38,10 @@
 #define PTE_PER_TABLE_ALLOC(entries)	\
 	PAGE_SIZE * (PTE_PER_TABLE_ALIGN(entries) >> PTE_PER_TABLE_SHIFT)
 
+#define amd_offset_level_address(offset, level) \
+      	((u64)(offset) << ((PTE_PER_TABLE_SHIFT * \
+                             (level - IOMMU_PAGING_MODE_LEVEL_1))))
+
 #define PCI_MIN_CAP_OFFSET	0x40
 #define PCI_MAX_CAP_BLOCKS	48
 #define PCI_CAP_PTR_MASK	0xFC
diff -r 472fc515a463 -r 68946c7e1d67 xen/include/xen/iommu.h
--- a/xen/include/xen/iommu.h	Tue Aug 07 18:37:31 2012 +0100
+++ b/xen/include/xen/iommu.h	Wed Aug 08 08:55:49 2012 -0700
@@ -141,6 +141,7 @@ struct iommu_ops {
     void (*crash_shutdown)(void);
     void (*iotlb_flush)(struct domain *d, unsigned long gfn, unsigned int page_count);
     void (*iotlb_flush_all)(struct domain *d);
+    void (*dump_p2m_table)(struct domain *d);
 };
 
 void iommu_update_ire_from_apic(unsigned int apic, unsigned int reg, unsigned int value);
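The gfn reconstruction the new dump helpers perform can be sanity-checked in
isolation. Below is a standalone mirror of the VT-d macros touched by this
patch (assuming LEVEL_STRIDE = 9, i.e. 512-entry tables, as in the existing
vtd/iommu.h):

```c
#include <stdint.h>

/* Mirror of level_to_offset_bits(): number of address bits covered
 * below one entry of a level-l page table (4K pages, 9-bit stride). */
#define LEVEL_STRIDE 9
#define level_to_offset_bits(l) (12 + ((l) - 1) * LEVEL_STRIDE)

/* Mirror of the new offset_level_address(): the guest-physical offset
 * contributed by table index `offset` at paging level `level`. */
static uint64_t offset_level_address(uint64_t offset, int level)
{
    return offset << level_to_offset_bits(level);
}
```

So entry 3 of a level-2 table contributes 3 * 2MB to the reconstructed
address and entry 1 of a level-3 table contributes 1GB, which is what the
recursive vtd_dump_p2m_table_level() accumulates in its gpa argument.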

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 16:05:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 16:05:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz8l5-0004zz-6l; Wed, 08 Aug 2012 16:05:19 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Sz8l3-0004zu-Kp
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 16:05:17 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1344441910!9749090!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7118 invoked from network); 8 Aug 2012 16:05:10 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-4.tower-27.messagelabs.com with SMTP;
	8 Aug 2012 16:05:10 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 08 Aug 2012 17:05:10 +0100
Message-Id: <5022AA540200007800093AF6@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Wed, 08 Aug 2012 17:05:08 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__PartB081CF24.0__="
Cc: dgdegra@tycho.nsa.gov
Subject: [Xen-devel] [PATCH] make all (native) hypercalls consistently have
 "long" return type
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__PartB081CF24.0__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

for common and x86 ones at least, to address the problem of storing
zero-extended values into the multicall result field otherwise.

Reported-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -2973,7 +2973,7 @@ static inline void fixunmap_domain_page(
 #define fixunmap_domain_page(ptr) ((void)(ptr))
 #endif
=20
-int do_mmuext_op(
+long do_mmuext_op(
     XEN_GUEST_HANDLE(mmuext_op_t) uops,
     unsigned int count,
     XEN_GUEST_HANDLE(uint) pdone,
@@ -3437,7 +3437,7 @@ int do_mmuext_op(
     return rc;
 }
=20
-int do_mmu_update(
+long do_mmu_update(
     XEN_GUEST_HANDLE(mmu_update_t) ureqs,
     unsigned int count,
     XEN_GUEST_HANDLE(uint) pdone,
@@ -4285,15 +4285,15 @@ static int __do_update_va_mapping(
     return rc;
 }
=20
-int do_update_va_mapping(unsigned long va, u64 val64,
-                         unsigned long flags)
+long do_update_va_mapping(unsigned long va, u64 val64,
+                          unsigned long flags)
 {
     return __do_update_va_mapping(va, val64, flags, current->domain);
 }
=20
-int do_update_va_mapping_otherdomain(unsigned long va, u64 val64,
-                                     unsigned long flags,
-                                     domid_t domid)
+long do_update_va_mapping_otherdomain(unsigned long va, u64 val64,
+                                      unsigned long flags,
+                                      domid_t domid)
 {
     struct domain *pg_owner;
     int rc;
--- a/xen/common/compat/xenoprof.c
+++ b/xen/common/compat/xenoprof.c
@@ -5,6 +5,7 @@
 #include <compat/xenoprof.h>
=20
 #define COMPAT
+#define ret_t int
=20
 #define do_xenoprof_op compat_xenoprof_op
=20
--- a/xen/common/kexec.c
+++ b/xen/common/kexec.c
@@ -845,8 +845,8 @@ static int kexec_exec(XEN_GUEST_HANDLE(v
     return -EINVAL; /* never reached */
 }
=20
-int do_kexec_op_internal(unsigned long op, XEN_GUEST_HANDLE(void) uarg,
-                           int compat)
+static int do_kexec_op_internal(unsigned long op, XEN_GUEST_HANDLE(void) =
uarg,
+                                bool_t compat)
 {
     unsigned long flags;
     int ret =3D -EINVAL;
--- a/xen/common/xenoprof.c
+++ b/xen/common/xenoprof.c
@@ -607,6 +607,8 @@ static int xenoprof_op_init(XEN_GUEST_HA
     return (copy_to_guest(arg, &xenoprof_init, 1) ? -EFAULT : 0);
 }
=20
+#define ret_t long
+
 #endif /* !COMPAT */
=20
 static int xenoprof_op_get_buffer(XEN_GUEST_HANDLE(void) arg)
@@ -660,7 +662,7 @@ static int xenoprof_op_get_buffer(XEN_GU
                       || (op =3D=3D XENOPROF_disable_virq)  \
                       || (op =3D=3D XENOPROF_get_buffer))
 =20
-int do_xenoprof_op(int op, XEN_GUEST_HANDLE(void) arg)
+ret_t do_xenoprof_op(int op, XEN_GUEST_HANDLE(void) arg)
 {
     int ret =3D 0;
    =20
@@ -904,6 +906,7 @@ int do_xenoprof_op(int op, XEN_GUEST_HAN
 }
=20
 #if defined(CONFIG_COMPAT) && !defined(COMPAT)
+#undef ret_t
 #include "compat/xenoprof.c"
 #endif
=20
--- a/xen/include/asm-x86/hypercall.h
+++ b/xen/include/asm-x86/hypercall.h
@@ -24,7 +24,7 @@ extern long
 do_set_trap_table(
     XEN_GUEST_HANDLE(const_trap_info_t) traps);
=20
-extern int
+extern long
 do_mmu_update(
     XEN_GUEST_HANDLE(mmu_update_t) ureqs,
     unsigned int count,
@@ -62,7 +62,7 @@ do_update_descriptor(
 extern long
 do_mca(XEN_GUEST_HANDLE(xen_mc_t) u_xen_mc);
=20
-extern int
+extern long
 do_update_va_mapping(
     unsigned long va,
     u64 val64,
@@ -72,14 +72,14 @@ extern long
 do_physdev_op(
     int cmd, XEN_GUEST_HANDLE(void) arg);
=20
-extern int
+extern long
 do_update_va_mapping_otherdomain(
     unsigned long va,
     u64 val64,
     unsigned long flags,
     domid_t domid);
=20
-extern int
+extern long
 do_mmuext_op(
     XEN_GUEST_HANDLE(mmuext_op_t) uops,
     unsigned int count,
@@ -90,10 +90,6 @@ extern unsigned long
 do_iret(
     void);
=20
-extern int
-do_kexec(
-    unsigned long op, unsigned arg1, XEN_GUEST_HANDLE(void) uarg);
-
 #ifdef __x86_64__
=20
 extern long
--- a/xen/include/xen/hypercall.h
+++ b/xen/include/xen/hypercall.h
@@ -137,7 +137,7 @@ extern long
 do_tmem_op(
     XEN_GUEST_HANDLE(tmem_op_t) uops);
=20
-extern int
+extern long
 do_xenoprof_op(int op, XEN_GUEST_HANDLE(void) arg);
=20
 #ifdef CONFIG_COMPAT



--=__PartB081CF24.0__=
Content-Type: text/plain; name="hypercall-return-long.patch"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment; filename="hypercall-return-long.patch"

make all (native) hypercalls consistently have "long" return type=0A=0Afor =
common and x86 ones at least, to address the problem of storing=0Azero-exte=
nded values into the multicall result field otherwise.=0A=0AReported-by: =
Daniel De Graaf <dgdegra@tycho.nsa.gov>=0ASigned-off-by: Jan Beulich =
<jbeulich@suse.com>=0A=0A--- a/xen/arch/x86/mm.c=0A+++ b/xen/arch/x86/mm.c=
=0A@@ -2973,7 +2973,7 @@ static inline void fixunmap_domain_page(=0A =
#define fixunmap_domain_page(ptr) ((void)(ptr))=0A #endif=0A =0A-int =
do_mmuext_op(=0A+long do_mmuext_op(=0A     XEN_GUEST_HANDLE(mmuext_op_t) =
uops,=0A     unsigned int count,=0A     XEN_GUEST_HANDLE(uint) pdone,=0A@@ =
-3437,7 +3437,7 @@ int do_mmuext_op(=0A     return rc;=0A }=0A =0A-int =
do_mmu_update(=0A+long do_mmu_update(=0A     XEN_GUEST_HANDLE(mmu_update_t)=
 ureqs,=0A     unsigned int count,=0A     XEN_GUEST_HANDLE(uint) pdone,=0A@=
@ -4285,15 +4285,15 @@ static int __do_update_va_mapping(=0A     return =
rc;=0A }=0A =0A-int do_update_va_mapping(unsigned long va, u64 val64,=0A-  =
                       unsigned long flags)=0A+long do_update_va_mapping(un=
signed long va, u64 val64,=0A+                          unsigned long =
flags)=0A {=0A     return __do_update_va_mapping(va, val64, flags, =
current->domain);=0A }=0A =0A-int do_update_va_mapping_otherdomain(unsigned=
 long va, u64 val64,=0A-                                     unsigned long =
flags,=0A-                                     domid_t domid)=0A+long =
do_update_va_mapping_otherdomain(unsigned long va, u64 val64,=0A+          =
                            unsigned long flags,=0A+                       =
               domid_t domid)=0A {=0A     struct domain *pg_owner;=0A     =
int rc;=0A--- a/xen/common/compat/xenoprof.c=0A+++ b/xen/common/compat/xeno=
prof.c=0A@@ -5,6 +5,7 @@=0A #include <compat/xenoprof.h>=0A =0A #define =
COMPAT=0A+#define ret_t int=0A =0A #define do_xenoprof_op compat_xenoprof_o=
p=0A =0A--- a/xen/common/kexec.c=0A+++ b/xen/common/kexec.c=0A@@ -845,8 =
+845,8 @@ static int kexec_exec(XEN_GUEST_HANDLE(v=0A     return -EINVAL; =
/* never reached */=0A }=0A =0A-int do_kexec_op_internal(unsigned long op, =
XEN_GUEST_HANDLE(void) uarg,=0A-                           int compat)=0A+s=
tatic int do_kexec_op_internal(unsigned long op, XEN_GUEST_HANDLE(void) =
uarg,=0A+                                bool_t compat)=0A {=0A     =
unsigned long flags;=0A     int ret =3D -EINVAL;=0A--- a/xen/common/xenopro=
f.c=0A+++ b/xen/common/xenoprof.c=0A@@ -607,6 +607,8 @@ static int =
xenoprof_op_init(XEN_GUEST_HA=0A     return (copy_to_guest(arg, &xenoprof_i=
nit, 1) ? -EFAULT : 0);=0A }=0A =0A+#define ret_t long=0A+=0A #endif /* =
!COMPAT */=0A =0A static int xenoprof_op_get_buffer(XEN_GUEST_HANDLE(void) =
arg)=0A@@ -660,7 +662,7 @@ static int xenoprof_op_get_buffer(XEN_GU=0A     =
                  || (op =3D=3D XENOPROF_disable_virq)  \=0A               =
        || (op =3D=3D XENOPROF_get_buffer))=0A  =0A-int do_xenoprof_op(int =
op, XEN_GUEST_HANDLE(void) arg)=0A+ret_t do_xenoprof_op(int op, XEN_GUEST_H=
ANDLE(void) arg)=0A {=0A     int ret =3D 0;=0A     =0A@@ -904,6 +906,7 @@ =
int do_xenoprof_op(int op, XEN_GUEST_HAN=0A }=0A =0A #if defined(CONFIG_COM=
PAT) && !defined(COMPAT)=0A+#undef ret_t=0A #include "compat/xenoprof.c"=0A=
 #endif=0A =0A--- a/xen/include/asm-x86/hypercall.h=0A+++ b/xen/include/asm=
-x86/hypercall.h=0A@@ -24,7 +24,7 @@ extern long=0A do_set_trap_table(=0A  =
   XEN_GUEST_HANDLE(const_trap_info_t) traps);=0A =0A-extern int=0A+extern =
long=0A do_mmu_update(=0A     XEN_GUEST_HANDLE(mmu_update_t) ureqs,=0A     =
unsigned int count,=0A@@ -62,7 +62,7 @@ do_update_descriptor(=0A extern =
long=0A do_mca(XEN_GUEST_HANDLE(xen_mc_t) u_xen_mc);=0A =0A-extern =
int=0A+extern long=0A do_update_va_mapping(=0A     unsigned long va,=0A    =
 u64 val64,=0A@@ -72,14 +72,14 @@ extern long=0A do_physdev_op(=0A     int =
cmd, XEN_GUEST_HANDLE(void) arg);=0A =0A-extern int=0A+extern long=0A =
do_update_va_mapping_otherdomain(=0A     unsigned long va,=0A     u64 =
val64,=0A     unsigned long flags,=0A     domid_t domid);=0A =0A-extern =
int=0A+extern long=0A do_mmuext_op(=0A     XEN_GUEST_HANDLE(mmuext_op_t) =
uops,=0A     unsigned int count,=0A@@ -90,10 +90,6 @@ extern unsigned =
long=0A do_iret(=0A     void);=0A =0A-extern int=0A-do_kexec(=0A-    =
unsigned long op, unsigned arg1, XEN_GUEST_HANDLE(void) uarg);=0A-=0A =
#ifdef __x86_64__=0A =0A extern long=0A--- a/xen/include/xen/hypercall.h=0A=
+++ b/xen/include/xen/hypercall.h=0A@@ -137,7 +137,7 @@ extern long=0A =
do_tmem_op(=0A     XEN_GUEST_HANDLE(tmem_op_t) uops);=0A =0A-extern =
int=0A+extern long=0A do_xenoprof_op(int op, XEN_GUEST_HANDLE(void) =
arg);=0A =0A #ifdef CONFIG_COMPAT=0A
--=__PartB081CF24.0__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__PartB081CF24.0__=--


From xen-devel-bounces@lists.xen.org Wed Aug 08 16:06:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 16:06:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz8mH-00054m-Ra; Wed, 08 Aug 2012 16:06:33 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <msw@amazon.com>) id 1Sz8mG-00054a-Mi
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 16:06:32 +0000
Received: from [85.158.138.51:45206] by server-7.bemta-3.messagelabs.com id
	EF/DE-04660-78E82205; Wed, 08 Aug 2012 16:06:31 +0000
X-Env-Sender: msw@amazon.com
X-Msg-Ref: server-7.tower-174.messagelabs.com!1344441989!22156884!1
X-Originating-IP: [207.171.178.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA3LjE3MS4xNzguMjUgPT4gNTYyMTk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24300 invoked from network); 8 Aug 2012 16:06:30 -0000
Received: from smtp-fw-31001.amazon.com (HELO smtp-fw-31001.amazon.com)
	(207.171.178.25)
	by server-7.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Aug 2012 16:06:30 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=amazon.com; i=msw@amazon.com; q=dns/txt; s=rte02;
	t=1344441990; x=1375977990;
	h=date:from:to:cc:subject:message-id:references:
	mime-version:in-reply-to;
	bh=iXG0ycjhprsBkAwKJSpP6F3GycfM5MoGcXOjHjdp9jE=;
	b=Muecr4WtWn9GoU6KAVGWQDWiF6Tv2Kvx45lBWrlnCRj1HQAZShtfKQ9Q
	sJAuna/eYijUTxxZ8iRgelMgWUo4/A==;
X-IronPort-AV: E=Sophos;i="4.77,733,1336348800"; d="scan'208";a="275773689"
Received: from smtp-in-6001.iad6.amazon.com ([10.195.76.178])
	by smtp-border-fw-out-31001.sea31.amazon.com with
	ESMTP/TLS/DHE-RSA-AES256-SHA; 08 Aug 2012 16:06:24 +0000
Received: from ex10-hub-9003.ant.amazon.com (ex10-hub-9003.ant.amazon.com
	[10.185.137.132])
	by smtp-in-6001.iad6.amazon.com (8.13.8/8.13.8) with ESMTP id
	q78G6M5c019980
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=FAIL);
	Wed, 8 Aug 2012 16:06:23 GMT
Received: from US-SEA-R8XVZTX (10.224.80.42) by ex10-hub-9003.ant.amazon.com
	(10.185.137.132) with Microsoft SMTP Server id 14.2.247.3;
	Wed, 8 Aug 2012 09:06:06 -0700
Received: by US-SEA-R8XVZTX (sSMTP sendmail emulation); Wed, 08 Aug 2012
	09:06:06 -0700
Date: Wed, 8 Aug 2012 09:06:06 -0700
From: Matt Wilson <msw@amazon.com>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20120808160606.GA9048@US-SEA-R8XVZTX>
References: <5022933B0200007800093A1F@nat28.tlf.novell.com>
	<20120808151037.GE5592@US-SEA-R8XVZTX>
	<5022A7980200007800093AD9@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <5022A7980200007800093AD9@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.20 (2009-12-10)
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] tools: don't expand prefix and exec_prefix
	too early
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Aug 08, 2012 at 08:53:28AM -0700, Jan Beulich wrote:
> >>> On 08.08.12 at 17:10, Matt Wilson <msw@amazon.com> wrote:
> > On Wed, Aug 08, 2012 at 07:26:35AM -0700, Jan Beulich wrote:
> >> A comment in tools/configure says that it is intended for these to be
> >> command line overridable, so they shouldn't get expanded at configure
> >> time.
> > 
> > I'm not sure which comment you're referencing, or on which command
> > line prefix/exec_prefix are meant to be overridable. Could you
> > point it out? What's not working?
> 
> The comment (around line 790 currently) reads
> "# Installation directory options.
>  # These are left unexpanded so users can "make install exec_prefix=/foo"
>  # and all the variables that are supposed to be based on exec_prefix
>  # by default will actually change.
>  # Use braces instead of parens because sh, perl, etc. also accept them.
>  # (The list follows the same order as the GNU Coding Standards.)"
> 
> The thing that wasn't working for me was passing
> --libdir='${exec_prefix}'/lib64 to ./configure.

Aah, OK. That's not something that I commonly see packaging control
files (RPM .spec / dpkg rules) do, but you're right that it's intended
to work.

> >> The patch is fixing tools/m4/default_lib.m4 as far as I can see myself
> >> doing this, but imo it is flawed altogether and should rather be
> >> removed:
> >> - setting prefix and exec_prefix to default values is being done later
> >>   in tools/configure anyway
> >> - setting LIB_PATH based on the (non-)existence of a lib64 directory
> >>   underneath ${exec_prefix} is plain wrong (it can obviously exist on a
> >>   32-bit installation)
> >> - I wasn't able to locate any use of LIB_PATH
> > 
> > I believe that the LIB_PATH portions can now be removed. I also
> > believe you're right that all of this can be removed. Did you attempt
> > that as well?
> 
> No, I didn't, not least because I don't have the right autoconf
> version around and didn't dare to re-generate tools/configure
> myself (instead I made the resulting adjustments manually).

Indeed, it's a bit of a pain to set it up. I imagine that's why the
autogenerated files are committed into Mercurial, which is a practice
I don't like...

> >> This will require tools/configure to be re-generated.
> > 
> > This is typically part of the same commit.
> 
> Sure, it's just that generally one wouldn't include the generated
> parts in the patch itself, but would expect them to be produced when
> committing (and as said above, I'd have to rely on the tools
> maintainers for this in order to not risk corrupting something).

Fair enough. :-)

Would you like me to take another pass at fixing this the "right" way,
by removing this bit altogether and adding lowercase variables to
tools/Config.mk?

Matt

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 16:06:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 16:06:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz8mH-00054m-Ra; Wed, 08 Aug 2012 16:06:33 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <msw@amazon.com>) id 1Sz8mG-00054a-Mi
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 16:06:32 +0000
Received: from [85.158.138.51:45206] by server-7.bemta-3.messagelabs.com id
	EF/DE-04660-78E82205; Wed, 08 Aug 2012 16:06:31 +0000
X-Env-Sender: msw@amazon.com
X-Msg-Ref: server-7.tower-174.messagelabs.com!1344441989!22156884!1
X-Originating-IP: [207.171.178.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA3LjE3MS4xNzguMjUgPT4gNTYyMTk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24300 invoked from network); 8 Aug 2012 16:06:30 -0000
Received: from smtp-fw-31001.amazon.com (HELO smtp-fw-31001.amazon.com)
	(207.171.178.25)
	by server-7.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Aug 2012 16:06:30 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=amazon.com; i=msw@amazon.com; q=dns/txt; s=rte02;
	t=1344441990; x=1375977990;
	h=date:from:to:cc:subject:message-id:references:
	mime-version:in-reply-to;
	bh=iXG0ycjhprsBkAwKJSpP6F3GycfM5MoGcXOjHjdp9jE=;
	b=Muecr4WtWn9GoU6KAVGWQDWiF6Tv2Kvx45lBWrlnCRj1HQAZShtfKQ9Q
	sJAuna/eYijUTxxZ8iRgelMgWUo4/A==;
X-IronPort-AV: E=Sophos;i="4.77,733,1336348800"; d="scan'208";a="275773689"
Received: from smtp-in-6001.iad6.amazon.com ([10.195.76.178])
	by smtp-border-fw-out-31001.sea31.amazon.com with
	ESMTP/TLS/DHE-RSA-AES256-SHA; 08 Aug 2012 16:06:24 +0000
Received: from ex10-hub-9003.ant.amazon.com (ex10-hub-9003.ant.amazon.com
	[10.185.137.132])
	by smtp-in-6001.iad6.amazon.com (8.13.8/8.13.8) with ESMTP id
	q78G6M5c019980
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=FAIL);
	Wed, 8 Aug 2012 16:06:23 GMT
Received: from US-SEA-R8XVZTX (10.224.80.42) by ex10-hub-9003.ant.amazon.com
	(10.185.137.132) with Microsoft SMTP Server id 14.2.247.3;
	Wed, 8 Aug 2012 09:06:06 -0700
Received: by US-SEA-R8XVZTX (sSMTP sendmail emulation); Wed, 08 Aug 2012
	09:06:06 -0700
Date: Wed, 8 Aug 2012 09:06:06 -0700
From: Matt Wilson <msw@amazon.com>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20120808160606.GA9048@US-SEA-R8XVZTX>
References: <5022933B0200007800093A1F@nat28.tlf.novell.com>
	<20120808151037.GE5592@US-SEA-R8XVZTX>
	<5022A7980200007800093AD9@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <5022A7980200007800093AD9@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.20 (2009-12-10)
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] tools: don't expand prefix and exec_prefix
	too early
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Aug 08, 2012 at 08:53:28AM -0700, Jan Beulich wrote:
> >>> On 08.08.12 at 17:10, Matt Wilson <msw@amazon.com> wrote:
> > On Wed, Aug 08, 2012 at 07:26:35AM -0700, Jan Beulich wrote:
> >> A comment in tools/configure says that it is intended for these to be
> >> command line overridable, so they shouldn't get expanded at configure
> >> time.
> > 
> > I'm not sure which comment you're referencing, or which command line
> > it's intended that you're able to override prefix/exec_prefix. Could
> > you point it out? What's not working?
> 
> The comment (around line 790 currently) reads
> "# Installation directory options.
>  # These are left unexpanded so users can "make install exec_prefix=/foo"
>  # and all the variables that are supposed to be based on exec_prefix
>  # by default will actually change.
>  # Use braces instead of parens because sh, perl, etc. also accept them.
>  # (The list follows the same order as the GNU Coding Standards.)"
> 
> The thing that wasn't working for me was passing
> --libdir='${exec_prefix}'/lib64 to ./configure.

Aah, OK. That's not something that I commonly see packaging control
files (RPM .spec / dpkg rules) do, but you're right that it's intended
to work.
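
The mechanism under discussion can be sketched outside of autoconf entirely: the point of leaving `${exec_prefix}` unexpanded is that expansion is deferred until `make install` time, so a late override still propagates into `libdir`. A minimal shell illustration (the paths are hypothetical, not Xen's actual configure output):

```shell
# Why configure keeps ${exec_prefix} unexpanded: if libdir is stored as a
# literal string, expansion is deferred until install time, so overriding
# exec_prefix late (as in "make install exec_prefix=/opt/xen") changes
# libdir as well.
libdir='${exec_prefix}/lib64'    # stored verbatim, e.g. from
                                 # --libdir='${exec_prefix}'/lib64

exec_prefix='/opt/xen'           # late override at "install" time
eval "expanded_libdir=${libdir}" # deferred expansion, as make would do
echo "$expanded_libdir"          # prints /opt/xen/lib64
```

Had configure expanded `${exec_prefix}` at configure time instead, `libdir` would have been frozen to the old value and the override would be silently ignored.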

> >> The patch is fixing tools/m4/default_lib.m4 as far as I can see myself
> >> doing this, but imo it is flawed altogether and should rather be
> >> removed:
> >> - setting prefix and exec_prefix to default values is being done later
> >>   in tools/configure anyway
> >> - setting LIB_PATH based on the (non-)existence of a lib64 directory
> >>   underneath ${exec_prefix} is plain wrong (it can obviously exist on a
> >>   32-bit installation)
> >> - I wasn't able to locate any use of LIB_PATH
> > 
> > I believe that the LIB_PATH portions can now be removed. I also
> > believe you're right that all of this can be removed. Did you attempt
> > that as well?
> 
> No, I didn't, not least because I don't have the right autoconf
> version around and didn't dare to re-generate tools/configure
> myself (instead I did the resulting adjustments manually).

Indeed, it's a bit of a pain to set it up. I imagine that's why the
autogenerated files are committed into Mercurial, which is a practice
I don't like...

> >> This will require tools/configure to be re-generated.
> > 
> > This is typically part of the same commit.
> 
> Sure, it's just that generally one wouldn't include the generated
> parts in the patch itself, but would expect them to be produced when
> committing (and as said above, I'd have to rely on the tools
> maintainers for this in order to not risk corrupting something).

Fair enough. :-)

Would you like me to take another pass at fixing this the "right" way,
by removing this bit altogether and adding lowercase variables to
tools/Config.mk?

Matt

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 16:09:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 16:09:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz8oi-0005Dg-DR; Wed, 08 Aug 2012 16:09:04 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Sz8oh-0005DQ-8o
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 16:09:03 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1344442116!9749670!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17935 invoked from network); 8 Aug 2012 16:08:37 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-4.tower-27.messagelabs.com with SMTP;
	8 Aug 2012 16:08:37 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 08 Aug 2012 17:08:36 +0100
Message-Id: <5022AB240200007800093B0E@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Wed, 08 Aug 2012 17:08:35 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Matt Wilson" <msw@amazon.com>
References: <5022933B0200007800093A1F@nat28.tlf.novell.com>
	<20120808151037.GE5592@US-SEA-R8XVZTX>
	<5022A7980200007800093AD9@nat28.tlf.novell.com>
	<20120808160606.GA9048@US-SEA-R8XVZTX>
In-Reply-To: <20120808160606.GA9048@US-SEA-R8XVZTX>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] tools: don't expand prefix and exec_prefix
 too early
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 08.08.12 at 18:06, Matt Wilson <msw@amazon.com> wrote:
> Would you like me to take another pass at fixing this the "right" way,
> by removing this bit altogether and adding lowercase variables to
> tools/Config.mk?

Oh, yes, if you can do this in a more complete fashion, removing
the suspicious default_dir.m4 altogether, that would be wonderful.
Not sure what the tools maintainers think of this, though.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 16:15:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 16:15:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz8ur-0005WT-72; Wed, 08 Aug 2012 16:15:25 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Sz8up-0005WN-A2
	for xen-devel@lists.xensource.com; Wed, 08 Aug 2012 16:15:23 +0000
Received: from [85.158.139.83:13303] by server-9.bemta-5.messagelabs.com id
	34/94-06631-A9092205; Wed, 08 Aug 2012 16:15:22 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-2.tower-182.messagelabs.com!1344442521!30890222!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg1NTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20935 invoked from network); 8 Aug 2012 16:15:22 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-2.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Aug 2012 16:15:22 -0000
X-IronPort-AV: E=Sophos;i="4.77,733,1336348800"; d="scan'208";a="13913930"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Aug 2012 16:14:40 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 8 Aug 2012 17:14:40 +0100
Date: Wed, 8 Aug 2012 17:14:26 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: <xen-devel@lists.xensource.com>
Message-ID: <alpine.DEB.2.02.1208071855270.4645@kaball.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH] xen/arm: protect LR registers and lr_mask
 changes with spin_lock_irq
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

GICH_LR registers and lr_mask need to be kept in sync: make sure that
their modifications are protected by spin_lock_irq(&gic.lock).

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 378158b..e444032 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -57,9 +57,11 @@ void gic_save_state(struct vcpu *v)
 {
     int i;
 
+    spin_lock_irq(&gic.lock);
     for ( i=0; i<nr_lrs; i++)
         v->arch.gic_lr[i] = GICH[GICH_LR + i];
     v->arch.lr_mask = gic.lr_mask;
+    spin_unlock_irq(&gic.lock);
     /* Disable until next VCPU scheduled */
     GICH[GICH_HCR] = 0;
     isb();
@@ -72,9 +74,11 @@ void gic_restore_state(struct vcpu *v)
     if ( is_idle_vcpu(v) )
         return;
 
+    spin_lock_irq(&gic.lock);
     gic.lr_mask = v->arch.lr_mask;
     for ( i=0; i<nr_lrs; i++)
         GICH[GICH_LR + i] = v->arch.gic_lr[i];
+    spin_unlock_irq(&gic.lock);
     GICH[GICH_HCR] = GICH_HCR_EN;
     isb();
 
@@ -469,9 +473,11 @@ static void gic_restore_pending_irqs(struct vcpu *v)
         i = find_first_zero_bit(&gic.lr_mask, nr_lrs);
         if ( i >= nr_lrs ) return;
 
+        spin_lock_irq(&gic.lock);
         gic_set_lr(i, p->irq, GICH_LR_PENDING, p->priority);
         list_del_init(&p->lr_queue);
         set_bit(i, &gic.lr_mask);
+        spin_unlock_irq(&gic.lock);
     }
 
 }

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 16:21:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 16:21:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz90r-0005gY-1D; Wed, 08 Aug 2012 16:21:37 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Sz90q-0005gS-5I
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 16:21:36 +0000
Received: from [85.158.143.35:27064] by server-2.bemta-4.messagelabs.com id
	52/5F-19021-F0292205; Wed, 08 Aug 2012 16:21:35 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1344442874!6161717!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19957 invoked from network); 8 Aug 2012 16:21:14 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-4.tower-21.messagelabs.com with SMTP;
	8 Aug 2012 16:21:14 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 08 Aug 2012 17:21:12 +0100
Message-Id: <5022AE180200007800093B41@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Wed, 08 Aug 2012 17:21:12 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Santosh Jodh" <santosh.jodh@citrix.com>
References: <68946c7e1d67f823b072.1344441362@REDBLD-XS.ad.xensource.com>
In-Reply-To: <68946c7e1d67f823b072.1344441362@REDBLD-XS.ad.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: wei.wang2@amd.com, tim@xen.org, xiantao.zhang@intel.com,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] dump_p2m_table: For IOMMU
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 08.08.12 at 17:56, Santosh Jodh <santosh.jodh@citrix.com> wrote:
> +    if ( table_vaddr == NULL )
> +    {
> +        printk("Failed to map IOMMU domain page %-16lx\n", page_to_maddr(pg));

Does that build on x86-32? Similar format specifier problems
appear to be present elsewhere in the patch.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 16:25:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 16:25:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz946-0005mn-Kc; Wed, 08 Aug 2012 16:24:58 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <donald.d.dugger@intel.com>) id 1Sz945-0005me-KY
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 16:24:57 +0000
Received: from [85.158.143.99:34336] by server-2.bemta-4.messagelabs.com id
	D2/F2-19021-8D292205; Wed, 08 Aug 2012 16:24:56 +0000
X-Env-Sender: donald.d.dugger@intel.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1344443095!23850706!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzA2NDU4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12930 invoked from network); 8 Aug 2012 16:24:56 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-6.tower-216.messagelabs.com with SMTP;
	8 Aug 2012 16:24:56 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga101.jf.intel.com with ESMTP; 08 Aug 2012 09:24:55 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.77,733,1336374000"; d="scan'208";a="183216697"
Received: from orsmsx603.amr.corp.intel.com ([10.22.226.49])
	by orsmga002.jf.intel.com with ESMTP; 08 Aug 2012 09:24:54 -0700
Received: from orsmsx105.amr.corp.intel.com (10.22.225.132) by
	orsmsx603.amr.corp.intel.com (10.22.226.49) with Microsoft SMTP Server
	(TLS) id 8.2.255.0; Wed, 8 Aug 2012 09:24:54 -0700
Received: from orsmsx102.amr.corp.intel.com ([169.254.1.159]) by
	ORSMSX105.amr.corp.intel.com ([169.254.4.231]) with mapi id
	14.01.0355.002; Wed, 8 Aug 2012 09:24:54 -0700
From: "Dugger, Donald D" <donald.d.dugger@intel.com>
To: Jan Beulich <JBeulich@suse.com>, Ian Campbell <Ian.Campbell@citrix.com>
Thread-Topic: [Xen-devel] [PATCH] xen/x86: Add support for cpuid masking on
	Intel Xeon Processor E5 Family
Thread-Index: AQHNcPu9hqaUly7tyUG6Nw6+hFvQIJdNtiyAgACutgCAALfxgIAA39sAgAAPDwCAAAzMAIAACZqQ
Date: Wed, 8 Aug 2012 16:24:53 +0000
Message-ID: <6AF484C0160C61439DE06F17668F3BCB285CCD36@ORSMSX102.amr.corp.intel.com>
References: <bf922651da96e045cd8f.1343503160@kaos-source-31003.sea31.amazon.com>
	<50164C5F0200007800091325@nat28.tlf.novell.com>
	<CAL54oT0HAgS-+sQVmWJt9kMiTGdoBmR5Mj-TqEwBoQzzEiUwDg@mail.gmail.com>
	<5020D6880200007800093198@nat28.tlf.novell.com>
	<20120807174733.GA5592@US-SEA-R8XVZTX>
	<50222C9E0200007800093719@nat28.tlf.novell.com>
	<1344412960.11783.15.camel@dagon.hellion.org.uk>
	<502243FC02000078000937B6@nat28.tlf.novell.com>
In-Reply-To: <502243FC02000078000937B6@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.22.254.138]
MIME-Version: 1.0
Cc: "Liu, Jinsong" <jinsong.liu@intel.com>, "Keir\(Xen.org\)" <keir@xen.org>,
	"Nakajima, Jun" <jun.nakajima@intel.com>, Matt Wilson <msw@amazon.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] xen/x86: Add support for cpuid masking on
 Intel Xeon Processor E5 Family
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jan-

If we're using git, doesn't `git commit --amend' do what you want?  If we're talking `hg', then I have no clue.
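A minimal sketch of rewriting the most recent commit message with `git commit --amend`, using a throwaway repository (the paths, identity, and messages below are purely illustrative):

```shell
# Create a scratch repository so nothing real is touched.
repo=$(mktemp -d)
cd "$repo"
git init -q .

# Record a commit with a bad subject line, then rewrite it in place.
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "bad subject line"
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --amend --allow-empty -m "xen/x86: corrected subject line"

# Still a single commit; only its message changed.
git log --oneline
```

Note that, as Jan's situation illustrates, this only rewrites local history; a changeset that has already been pushed cannot be fixed this way without force-pushing.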

--
Don Dugger
"Censeo Toto nos in Kansa esse decisse." - D. Gale
Ph: 303/443-3786


-----Original Message-----
From: Jan Beulich [mailto:JBeulich@suse.com] 
Sent: Wednesday, August 08, 2012 2:48 AM
To: Ian Campbell
Cc: Matt Wilson; Dugger, Donald D; Liu, Jinsong; Nakajima, Jun; xen-devel@lists.xen.org; Keir(Xen.org)
Subject: Re: [Xen-devel] [PATCH] xen/x86: Add support for cpuid masking on Intel Xeon Processor E5 Family

>>> On 08.08.12 at 10:02, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Wed, 2012-08-08 at 08:08 +0100, Jan Beulich wrote:
>> >>> On 07.08.12 at 19:47, Matt Wilson <msw@amazon.com> wrote:
>> > For what it's worth, I think that the first line of the commit log got
>> > dropped, which makes for a strange short log message of:
>> > 
>> >   Although the "Intel Virtualization Technology FlexMigration
>> 
>> Yes, I'm sorry for that, but I realized this only after pushing, and
>> I'm unaware of ways to adjust the commit message of an existing
>> c/s.
> 
> There is an hg rebase extension, something like git's rebase -i, but I
> find the easiest way is to use the mq extension's function which pulls
> the tip commit into a patch in the queue.
> 
> Actually, that's not quite true, I find the real easiest way is to hg
> strip the wrong commit and try again.

But that's only if it didn't get pushed yet?

> Actually, that's not true either, the real easiest way IMHO is to use a
> git mirror for all the leg work and Ian J's git2hgapply script to
> actually "apply" it. YMMV depending on your feelings about git
> though ;-)

Indeed. While I'm slowly getting to know it better, I'm still not
really friends with it.

Jan
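The `hg strip` route Ian describes has a straightforward git analogue: reset the branch to the parent of the bad commit and redo it. A sketch in a throwaway repository (names are illustrative); like `hg strip`, this is only safe while the commit has not been pushed anywhere:

```shell
# Scratch repository; a small helper keeps the commit lines short.
repo=$(mktemp -d)
cd "$repo"
git init -q .
ci() { git -c user.name=demo -c user.email=demo@example.com \
           commit -q --allow-empty -m "$1"; }

ci "good commit"
ci "broken commit"          # the changeset we want gone

# Drop the tip commit entirely (the git analogue of `hg strip tip`)...
git reset -q --hard HEAD~1

# ...and commit again, correctly this time.
ci "fixed commit"

git log --format=%s
```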


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 16:25:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 16:25:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz94p-0005qK-2B; Wed, 08 Aug 2012 16:25:43 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Sz94n-0005pm-La
	for xen-devel@lists.xensource.com; Wed, 08 Aug 2012 16:25:41 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1344443135!11930179!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg1NTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19364 invoked from network); 8 Aug 2012 16:25:35 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Aug 2012 16:25:35 -0000
X-IronPort-AV: E=Sophos;i="4.77,733,1336348800"; d="scan'208";a="13914136"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Aug 2012 16:25:07 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 8 Aug 2012 17:25:07 +0100
Date: Wed, 8 Aug 2012 17:24:43 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: David Vrabel <david.vrabel@citrix.com>
In-Reply-To: <501FEF65.1000304@citrix.com>
Message-ID: <alpine.DEB.2.02.1208081715550.21096@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
	<1344263246-28036-7-git-send-email-stefano.stabellini@eu.citrix.com>
	<501FEF65.1000304@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linaro-dev@lists.linaro.org" <linaro-dev@lists.linaro.org>,
	Ian Campbell <Ian.Campbell@citrix.com>, "arnd@arndb.de" <arnd@arndb.de>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"catalin.marinas@arm.com" <catalin.marinas@arm.com>,
	"konrad.wilk@oracle.com" <konrad.wilk@oracle.com>,
	"Tim \(Xen.org\)" <tim@xen.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [PATCH v2 07/23] xen/arm: Xen detection and
 shared_info page mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 6 Aug 2012, David Vrabel wrote:
> On 06/08/12 15:27, Stefano Stabellini wrote:
> > Check for a "/xen" node in the device tree, if it is present set
> > xen_domain_type to XEN_HVM_DOMAIN and continue initialization.
> > 
> > Map the real shared info page using XENMEM_add_to_physmap with
> > XENMAPSPACE_shared_info.
> > 
> > Changes in v2:
> > 
> > - replace pr_info with pr_debug.
> > 
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > ---
> >  arch/arm/xen/enlighten.c |   52 ++++++++++++++++++++++++++++++++++++++++++++++
> >  1 files changed, 52 insertions(+), 0 deletions(-)
> > 
> > diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
> > index d27c2a6..102d823 100644
> > --- a/arch/arm/xen/enlighten.c
> > +++ b/arch/arm/xen/enlighten.c
> > @@ -5,6 +5,9 @@
> >  #include <asm/xen/hypervisor.h>
> >  #include <asm/xen/hypercall.h>
> >  #include <linux/module.h>
> > +#include <linux/of.h>
> > +#include <linux/of_irq.h>
> > +#include <linux/of_address.h>
> >  
> >  struct start_info _xen_start_info;
> >  struct start_info *xen_start_info = &_xen_start_info;
> > @@ -33,3 +36,52 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
> >  	return -ENOSYS;
> >  }
> >  EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_range);
> > +
> > +/*
> > + * == Xen Device Tree format ==
> > + * - /xen node;
> > + * - compatible "arm,xen";
> > + * - one interrupt for Xen event notifications;
> > + * - one memory region to map the grant_table.
> > + */
> 
> These need to be documented in Documentation/devicetree/bindings/ and
> should be sent to the devicetree-discuss mailing list for review.

That's a good idea.


> The node should be called 'hypervisor' I think.
> 
> The first word of the compatible string is the vendor/organization that
> defined the binding, so it should be "xen" here.  This does give an odd
> looking "xen,xen" but we'll have to live with that.
> 
> I'd suggest that the DT provided by the hypervisor or tools give the
> hypercall ABI version in the compatible string as well.  e.g.,
> 
> hypervisor {
>     compatible = "xen,xen-4.3", "xen,xen"
> };

It makes sense, I'll do that.


> I missed the Xen patch that adds this node for dom0.  Can you point me
> to it?

Nope, you didn't miss it: I don't have a patch for Xen yet.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 16:31:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 16:31:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz9Ah-00068C-SB; Wed, 08 Aug 2012 16:31:47 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Sz9Ag-000686-6h
	for xen-devel@lists.xensource.com; Wed, 08 Aug 2012 16:31:46 +0000
Received: from [85.158.143.35:34415] by server-1.bemta-4.messagelabs.com id
	49/28-20198-17492205; Wed, 08 Aug 2012 16:31:45 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1344443497!16140603!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg1NTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11636 invoked from network); 8 Aug 2012 16:31:38 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Aug 2012 16:31:38 -0000
X-IronPort-AV: E=Sophos;i="4.77,733,1336348800"; d="scan'208";a="13914211"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Aug 2012 16:31:37 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 8 Aug 2012 17:31:37 +0100
Date: Wed, 8 Aug 2012 17:31:13 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <20120807181017.GF15053@phenom.dumpdata.com>
Message-ID: <alpine.DEB.2.02.1208081730560.21096@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
	<1344263246-28036-1-git-send-email-stefano.stabellini@eu.citrix.com>
	<20120807181017.GF15053@phenom.dumpdata.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linaro-dev@lists.linaro.org" <linaro-dev@lists.linaro.org>,
	Ian Campbell <Ian.Campbell@citrix.com>, "arnd@arndb.de" <arnd@arndb.de>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"catalin.marinas@arm.com" <catalin.marinas@arm.com>,
	"Tim \(Xen.org\)" <tim@xen.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [PATCH v2 01/23] arm: initial Xen support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 7 Aug 2012, Konrad Rzeszutek Wilk wrote:
> On Mon, Aug 06, 2012 at 03:27:04PM +0100, Stefano Stabellini wrote:
> > - Basic hypervisor.h and interface.h definitions.
> > - Skeleton enlighten.c, set xen_start_info to an empty struct.
> > - Make xen_initial_domain dependent on the SIF_PRIVILIGED_BIT.
> > 
> > The new code only compiles when CONFIG_XEN is set, that is going to be
> > added to arch/arm/Kconfig in patch #11 "xen/arm: introduce CONFIG_XEN on
> > ARM".
> 
> You can add my Ack, but do one change pls:

Thanks! I'll make the changes.


> > +/* XXX: Move pvclock definitions some place arch independent */
> 
> Just use 'TODO'
> 
> > +struct pvclock_vcpu_time_info {
> > +	u32   version;
> > +	u32   pad0;
> > +	u64   tsc_timestamp;
> > +	u64   system_time;
> > +	u32   tsc_to_system_mul;
> > +	s8    tsc_shift;
> > +	u8    flags;
> > +	u8    pad[2];
> > +} __attribute__((__packed__)); /* 32 bytes */
> > +
> > +struct pvclock_wall_clock {
> > +	u32   version;
> > +	u32   sec;
> > +	u32   nsec;
> > +} __attribute__((__packed__));
> 
> Mention the size and why it is OK to have it be a weird
> size while the one above is nicely padded.
> 
> > +#endif
> > +
> > +#endif /* _ASM_ARM_XEN_INTERFACE_H */
> > diff --git a/arch/arm/xen/Makefile b/arch/arm/xen/Makefile
> > new file mode 100644
> > index 0000000..0bad594
> > --- /dev/null
> > +++ b/arch/arm/xen/Makefile
> > @@ -0,0 +1 @@
> > +obj-y		:= enlighten.o
> > diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
> > new file mode 100644
> > index 0000000..d27c2a6
> > --- /dev/null
> > +++ b/arch/arm/xen/enlighten.c
> > @@ -0,0 +1,35 @@
> > +#include <xen/xen.h>
> > +#include <xen/interface/xen.h>
> > +#include <xen/interface/memory.h>
> > +#include <xen/platform_pci.h>
> > +#include <asm/xen/hypervisor.h>
> > +#include <asm/xen/hypercall.h>
> > +#include <linux/module.h>
> > +
> > +struct start_info _xen_start_info;
> > +struct start_info *xen_start_info = &_xen_start_info;
> > +EXPORT_SYMBOL_GPL(xen_start_info);
> > +
> > +enum xen_domain_type xen_domain_type = XEN_NATIVE;
> > +EXPORT_SYMBOL_GPL(xen_domain_type);
> > +
> > +struct shared_info xen_dummy_shared_info;
> > +struct shared_info *HYPERVISOR_shared_info = (void *)&xen_dummy_shared_info;
> > +
> > +DEFINE_PER_CPU(struct vcpu_info *, xen_vcpu);
> > +
> > +/* XXX: to be removed */
> 
> s/XXX/TODO/ here, and mention pls why it needs to be removed.
> 
> > +__read_mostly int xen_have_vector_callback;
> > +EXPORT_SYMBOL_GPL(xen_have_vector_callback);
> > +
> > +int xen_platform_pci_unplug = XEN_UNPLUG_ALL;
> > +EXPORT_SYMBOL_GPL(xen_platform_pci_unplug);
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 16:33:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 16:33:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz9CX-0006Hn-Hx; Wed, 08 Aug 2012 16:33:41 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Sz9CW-0006Hd-HB
	for xen-devel@lists.xensource.com; Wed, 08 Aug 2012 16:33:40 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1344443613!4502194!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg1NTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26433 invoked from network); 8 Aug 2012 16:33:34 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Aug 2012 16:33:34 -0000
X-IronPort-AV: E=Sophos;i="4.77,733,1336348800"; d="scan'208";a="13914251"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Aug 2012 16:33:33 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 8 Aug 2012 17:33:33 +0100
Date: Wed, 8 Aug 2012 17:33:09 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <20120807181322.GG15053@phenom.dumpdata.com>
Message-ID: <alpine.DEB.2.02.1208081732590.21096@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
	<1344263246-28036-3-git-send-email-stefano.stabellini@eu.citrix.com>
	<20120807181322.GG15053@phenom.dumpdata.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linaro-dev@lists.linaro.org" <linaro-dev@lists.linaro.org>,
	Ian Campbell <Ian.Campbell@citrix.com>, "arnd@arndb.de" <arnd@arndb.de>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"catalin.marinas@arm.com" <catalin.marinas@arm.com>,
	"Tim \(Xen.org\)" <tim@xen.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [PATCH v2 03/23] xen/arm: page.h definitions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 7 Aug 2012, Konrad Rzeszutek Wilk wrote:
> On Mon, Aug 06, 2012 at 03:27:06PM +0100, Stefano Stabellini wrote:
> > ARM Xen guests always use paging in hardware, like PV on HVM guests in
> > the X86 world.
> > 
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> 
> Ack.. with one nitpick
> 
> > +/* XXX: this shouldn't be here */
> 
> .. but it's here because the frontend drivers are using it (it's rolled
> into headers), even though we won't hit the code path. So for right now
> just punt with this.

Yep, I'll do that.


> > +static inline pte_t *lookup_address(unsigned long address, unsigned int *level)
> > +{
> > +	BUG();
> > +	return NULL;
> > +}
> > +
> > +static inline int m2p_add_override(unsigned long mfn, struct page *page,
> > +		struct gnttab_map_grant_ref *kmap_op)
> > +{
> > +	return 0;
> > +}
> > +
> > +static inline int m2p_remove_override(struct page *page, bool clear_pte)
> > +{
> > +	return 0;
> > +}
> > +
> > +static inline bool set_phys_to_machine(unsigned long pfn, unsigned long mfn)
> > +{
> > +	BUG();
> > +	return false;
> > +}
> > +#endif /* _ASM_ARM_XEN_PAGE_H */
> > -- 
> > 1.7.2.5
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 16:38:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 16:38:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz9HR-0006XX-9Y; Wed, 08 Aug 2012 16:38:45 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Sz9HP-0006XS-TB
	for xen-devel@lists.xensource.com; Wed, 08 Aug 2012 16:38:44 +0000
Received: from [85.158.143.35:64049] by server-1.bemta-4.messagelabs.com id
	90/BE-20198-31692205; Wed, 08 Aug 2012 16:38:43 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1344443921!16792699!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg1NTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15787 invoked from network); 8 Aug 2012 16:38:41 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Aug 2012 16:38:41 -0000
X-IronPort-AV: E=Sophos;i="4.77,733,1336348800"; d="scan'208";a="13914324"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Aug 2012 16:38:40 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 8 Aug 2012 17:38:40 +0100
Date: Wed, 8 Aug 2012 17:38:17 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <20120807181457.GJ15053@phenom.dumpdata.com>
Message-ID: <alpine.DEB.2.02.1208081737330.21096@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
	<1344263246-28036-6-git-send-email-stefano.stabellini@eu.citrix.com>
	<20120807181457.GJ15053@phenom.dumpdata.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linaro-dev@lists.linaro.org" <linaro-dev@lists.linaro.org>,
	Ian Campbell <Ian.Campbell@citrix.com>, "arnd@arndb.de" <arnd@arndb.de>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"catalin.marinas@arm.com" <catalin.marinas@arm.com>,
	"Tim \(Xen.org\)" <tim@xen.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [PATCH v2 06/23] xen: missing includes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 7 Aug 2012, Konrad Rzeszutek Wilk wrote:
> On Mon, Aug 06, 2012 at 03:27:09PM +0100, Stefano Stabellini wrote:
> > Changes in v2:
> > - remove pvclock hack;
> > - remove include linux/types.h from xen/interface/xen.h.
> 
> I think I can take this in my tree now by itself, right? Or do
> you want to keep it in your patchqueue? If so, Ack from me.

Yep, just go ahead and take the patch, thanks.


> > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > ---
> >  arch/x86/include/asm/xen/interface.h       |    2 ++
> >  drivers/tty/hvc/hvc_xen.c                  |    2 ++
> >  drivers/xen/grant-table.c                  |    1 +
> >  drivers/xen/xenbus/xenbus_probe_frontend.c |    1 +
> >  include/xen/interface/xen.h                |    1 -
> >  include/xen/privcmd.h                      |    1 +
> >  6 files changed, 7 insertions(+), 1 deletions(-)
> > 
> > diff --git a/arch/x86/include/asm/xen/interface.h b/arch/x86/include/asm/xen/interface.h
> > index cbf0c9d..a93db16 100644
> > --- a/arch/x86/include/asm/xen/interface.h
> > +++ b/arch/x86/include/asm/xen/interface.h
> > @@ -121,6 +121,8 @@ struct arch_shared_info {
> >  #include "interface_64.h"
> >  #endif
> >  
> > +#include <asm/pvclock-abi.h>
> > +
> >  #ifndef __ASSEMBLY__
> >  /*
> >   * The following is all CPU context. Note that the fpu_ctxt block is filled
> > diff --git a/drivers/tty/hvc/hvc_xen.c b/drivers/tty/hvc/hvc_xen.c
> > index 944eaeb..dc07f56 100644
> > --- a/drivers/tty/hvc/hvc_xen.c
> > +++ b/drivers/tty/hvc/hvc_xen.c
> > @@ -21,6 +21,7 @@
> >  #include <linux/console.h>
> >  #include <linux/delay.h>
> >  #include <linux/err.h>
> > +#include <linux/irq.h>
> >  #include <linux/init.h>
> >  #include <linux/types.h>
> >  #include <linux/list.h>
> > @@ -35,6 +36,7 @@
> >  #include <xen/page.h>
> >  #include <xen/events.h>
> >  #include <xen/interface/io/console.h>
> > +#include <xen/interface/sched.h>
> >  #include <xen/hvc-console.h>
> >  #include <xen/xenbus.h>
> >  
> > diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
> > index 0bfc1ef..1d0d95e 100644
> > --- a/drivers/xen/grant-table.c
> > +++ b/drivers/xen/grant-table.c
> > @@ -47,6 +47,7 @@
> >  #include <xen/interface/memory.h>
> >  #include <xen/hvc-console.h>
> >  #include <asm/xen/hypercall.h>
> > +#include <asm/xen/interface.h>
> >  
> >  #include <asm/pgtable.h>
> >  #include <asm/sync_bitops.h>
> > diff --git a/drivers/xen/xenbus/xenbus_probe_frontend.c b/drivers/xen/xenbus/xenbus_probe_frontend.c
> > index a31b54d..3159a37 100644
> > --- a/drivers/xen/xenbus/xenbus_probe_frontend.c
> > +++ b/drivers/xen/xenbus/xenbus_probe_frontend.c
> > @@ -21,6 +21,7 @@
> >  #include <xen/xenbus.h>
> >  #include <xen/events.h>
> >  #include <xen/page.h>
> > +#include <xen/xen.h>
> >  
> >  #include <xen/platform_pci.h>
> >  
> > diff --git a/include/xen/interface/xen.h b/include/xen/interface/xen.h
> > index a890804..3871e47 100644
> > --- a/include/xen/interface/xen.h
> > +++ b/include/xen/interface/xen.h
> > @@ -10,7 +10,6 @@
> >  #define __XEN_PUBLIC_XEN_H__
> >  
> >  #include <asm/xen/interface.h>
> > -#include <asm/pvclock-abi.h>
> >  
> >  /*
> >   * XEN "SYSTEM CALLS" (a.k.a. HYPERCALLS).
> > diff --git a/include/xen/privcmd.h b/include/xen/privcmd.h
> > index 17857fb..4d58881 100644
> > --- a/include/xen/privcmd.h
> > +++ b/include/xen/privcmd.h
> > @@ -35,6 +35,7 @@
> >  
> >  #include <linux/types.h>
> >  #include <linux/compiler.h>
> > +#include <xen/interface/xen.h>
> >  
> >  typedef unsigned long xen_pfn_t;
> >  
> > -- 
> > 1.7.2.5
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 16:42:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 16:42:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz9L5-0006er-T7; Wed, 08 Aug 2012 16:42:31 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1Sz9L4-0006ek-S4
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 16:42:31 +0000
Received: from [85.158.138.51:50032] by server-4.bemta-3.messagelabs.com id
	84/5B-06379-5F692205; Wed, 08 Aug 2012 16:42:29 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-10.tower-174.messagelabs.com!1344444149!27023575!1
X-Originating-IP: [74.125.83.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28969 invoked from network); 8 Aug 2012 16:42:29 -0000
Received: from mail-ee0-f45.google.com (HELO mail-ee0-f45.google.com)
	(74.125.83.45)
	by server-10.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Aug 2012 16:42:29 -0000
Received: by eeke53 with SMTP id e53so319849eek.32
	for <xen-devel@lists.xen.org>; Wed, 08 Aug 2012 09:42:29 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:user-agent:date:subject:from:to:cc:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=qGJfqNYWzEsUpd1HUib5SQEhd/yC9oIYfdY6DHP4zWw=;
	b=q6eooARh0ErocIeOS7VZXp6IX5phD52caD2ZDNyem5LnFxS0iD/F2HSHb2YSxMQWns
	8QEHl77fDGMzDAgJFhJ8xgpT4SzYN+KxBaHg31lK7hVzboiKHfsSwM/TF3K+hZDqgk2L
	mlrp2b1D9KyDbkdNI82mm6Cj9UF5WzZ+YSAMlQlqKZnH2H9VXr9AgeoQS7m1p1rOJPma
	WcByhf7VR+V+oMhXtdoVmqTDsSY4wl3ck4gaIBUs/Agkmi3TFaFHjwKqFzbBaBdtlLcw
	i8B4P36uuGLK+OmcVXAKMgM0oR1V/BE97se0+gbk2kYQdfANivsvUwMpfNl5lfrXwpy9
	jiPw==
Received: by 10.14.175.7 with SMTP id y7mr22865376eel.29.1344444148934;
	Wed, 08 Aug 2012 09:42:28 -0700 (PDT)
Received: from [192.168.1.3] (host86-178-46-238.range86-178.btcentralplus.com.
	[86.178.46.238])
	by mx.google.com with ESMTPS id h2sm23177547eeo.3.2012.08.08.09.42.26
	(version=SSLv3 cipher=OTHER); Wed, 08 Aug 2012 09:42:28 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.33.0.120411
Date: Wed, 08 Aug 2012 17:42:23 +0100
From: Keir Fraser <keir@xen.org>
To: Mark van Dijk <lists+xen@internecto.net>,
	<xen-devel@lists.xen.org>
Message-ID: <CC48557F.484FE%keir@xen.org>
Thread-Topic: [Xen-devel] Possible bug with huge unflushed console buffer
Thread-Index: Ac11hMjiMN8F120MA0CVQDUc3XZfaQ==
In-Reply-To: <20120808174355.470746e0@internecto.net>
Mime-version: 1.0
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] Possible bug with huge unflushed console buffer
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/08/2012 16:43, "Mark van Dijk" <lists+xen@internecto.net> wrote:

> Greetings list,
> 
> Yesterday the following scenario took place. In short, I had an HVM host
> with a poorly configured firewall. It logged far more than it should,
> and had been doing so for days. iptables entries with -j LOG
> show up in the console. I told Konrad about this but said it was a PV
> host, while in fact it concerns an HVM host.
> 
> Anyway, when I saw that this was the case I corrected iptables and then
> I removed those faulty log entries from the logfiles in /var/log.
> Finally I issued /etc/init.d/rsyslogd reload.
> 
> Suddenly the server stopped responding at all, not even arp requests
> were answered. I opened up a console via xl (none was open before
> this) and I was greeted with a crazy amount of log entries - it took
> about five or six minutes before the whole buffer was flushed to my
> screen. Then the server responded again.
> 
> Today I viewed syslog's messages in detail and found this and a lot of
> similar entries:
> 
> http://pastebin.com/Ga7aE7hb
> 
> So the questions that I have now are - Is this a bug? How is console
> history handled and where is the console's unread history stored?
> Somewhere in the hypervisor or in dom0 or domU memory space? Is there a
> limit to how much info can be stored?
> 
> The behaviour I encountered, i.e. a system lock-up, seems to suggest that
> there was some kind of buffer overflow. And this can be triggered by
> something as simple as 'iptables -I INPUT -j LOG'. Perhaps it would be
> wise to consider an 'xl console -c' option to clear the console's
> history; then at least I could exit the console and clear unsent messages...

Is your VM logging to its virtual serial line? You could just not do that...

I think this is due to the guest's qemu-dm process stuffing its transmitted
serial data into a pty, to be consumed by 'xl console', and if that doesn't
happen the buffers fill up, serial processing stops, and we back up all the
way to the guest.

 -- Keir

> I am not receiving xen-devel messages, so please CC me on replies.
> Thank you.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/08/2012 16:43, "Mark van Dijk" <lists+xen@internecto.net> wrote:

> Greetings list,
> 
> Yesterday the following scenario took place. In short, I had an HVM
> host with a poorly configured firewall. It logged far more than it
> should have, and had been doing so for days. iptables entries with
> -j LOG show up on the console. I told Konrad about this, but I
> mistakenly said it was a PV host when in fact it is an HVM host.
> 
> Anyway, when I saw that this was the case I corrected iptables and then
> I removed those faulty log entries from the logfiles in /var/log.
> Finally I issued /etc/init.d/rsyslogd reload.
> 
> Suddenly the server stopped responding entirely; not even ARP requests
> were answered. I opened a console via xl (none was open before this)
> and was greeted with an enormous backlog of log entries - it took about
> five or six minutes for the whole buffer to flush to my screen. After
> that the server responded again.
> 
> Today I viewed syslog's messages in detail and found this and a lot of
> similar entries:
> 
> http://pastebin.com/Ga7aE7hb
> 
> So the questions I have now are: Is this a bug? How is console
> history handled, and where is the console's unread history stored -
> in the hypervisor, or in dom0 or domU memory space? Is there a
> limit to how much can be stored?
> 
> The behaviour I encountered, i.e. a system lock-up, seems to suggest
> that there was some kind of buffer overflow, and it can be triggered by
> something as simple as 'iptables -I INPUT -j LOG'. Perhaps it would be
> wise to consider an 'xl console -c' option to clear the console's
> history; then at least I could exit the console and discard unsent
> messages...

Is your VM logging to its virtual serial line? You could just not do that...

I think this is due to the guest's qemu-dm process stuffing its transmitted
serial data into a pty, to be consumed by 'xl console'. If that consumer
isn't running, the buffers fill up, serial processing stops, and the
backpressure reaches all the way back to the guest.

 -- Keir

> I am not receiving xen-devel messages, so please CC me on replies.
> Thank you.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 16:43:12 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 16:43:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz9LZ-0006hg-A5; Wed, 08 Aug 2012 16:43:01 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Sz9LY-0006hS-3J
	for xen-devel@lists.xensource.com; Wed, 08 Aug 2012 16:43:00 +0000
Received: from [85.158.139.83:58035] by server-4.bemta-5.messagelabs.com id
	A3/5B-32474-31792205; Wed, 08 Aug 2012 16:42:59 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-11.tower-182.messagelabs.com!1344444176!23708462!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg1NTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2122 invoked from network); 8 Aug 2012 16:42:57 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-11.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Aug 2012 16:42:57 -0000
X-IronPort-AV: E=Sophos;i="4.77,733,1336348800"; d="scan'208";a="13914378"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Aug 2012 16:42:56 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 8 Aug 2012 17:42:56 +0100
Date: Wed, 8 Aug 2012 17:42:33 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <20120807181839.GM15053@phenom.dumpdata.com>
Message-ID: <alpine.DEB.2.02.1208081741331.21096@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
	<1344263246-28036-9-git-send-email-stefano.stabellini@eu.citrix.com>
	<20120807181839.GM15053@phenom.dumpdata.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linaro-dev@lists.linaro.org" <linaro-dev@lists.linaro.org>,
	Ian Campbell <Ian.Campbell@citrix.com>, "arnd@arndb.de" <arnd@arndb.de>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"catalin.marinas@arm.com" <catalin.marinas@arm.com>,
	"Tim \(Xen.org\)" <tim@xen.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [PATCH v2 09/23] xen/arm: Introduce xen_ulong_t for
 unsigned long
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 7 Aug 2012, Konrad Rzeszutek Wilk wrote:
> On Mon, Aug 06, 2012 at 03:27:12PM +0100, Stefano Stabellini wrote:
> > The original Xen headers define xen_ulong_t (as unsigned long); however,
> > when they were imported into Linux, xen_ulong_t was replaced with plain
> > unsigned long. That may work for x86 and ia64, but it does not for ARM.
> > Bring back xen_ulong_t and let each architecture define it as it sees
> > fit.
> > 
> > Also explicitly size pointers (__DEFINE_GUEST_HANDLE) to 64 bit.
> 
> Looks ok to me.

Considering that I'll have to change it a bit in the next version
(remove the apic_physbase and multicall_entry changes), I won't add your
acked-by here just yet.


> > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > ---
> >  arch/arm/include/asm/xen/interface.h  |    8 ++++++--
> >  arch/ia64/include/asm/xen/interface.h |    1 +
> >  arch/x86/include/asm/xen/interface.h  |    1 +
> >  include/xen/interface/memory.h        |   12 ++++++------
> >  include/xen/interface/physdev.h       |    4 ++--
> >  include/xen/interface/version.h       |    2 +-
> >  include/xen/interface/xen.h           |    6 +++---
> >  7 files changed, 20 insertions(+), 14 deletions(-)
> > 
> > diff --git a/arch/arm/include/asm/xen/interface.h b/arch/arm/include/asm/xen/interface.h
> > index f904dcc..1d3030c 100644
> > --- a/arch/arm/include/asm/xen/interface.h
> > +++ b/arch/arm/include/asm/xen/interface.h
> > @@ -9,8 +9,11 @@
> >  
> >  #include <linux/types.h>
> >  
> > +#define uint64_aligned_t uint64_t __attribute__((aligned(8)))
> > +
> >  #define __DEFINE_GUEST_HANDLE(name, type) \
> > -	typedef type * __guest_handle_ ## name
> > +	typedef struct { union { type *p; uint64_aligned_t q; }; }  \
> > +        __guest_handle_ ## name
> >  
> >  #define DEFINE_GUEST_HANDLE_STRUCT(name) \
> >  	__DEFINE_GUEST_HANDLE(name, struct name)
> > @@ -21,13 +24,14 @@
> >  	do {						\
> >  		if (sizeof(hnd) == 8)			\
> >  			*(uint64_t *)&(hnd) = 0;	\
> > -		(hnd) = val;				\
> > +		(hnd).p = val;				\
> >  	} while (0)
> >  
> >  #ifndef __ASSEMBLY__
> >  /* Explicitly size integers that represent pfns in the interface with
> >   * Xen so that we can have one ABI that works for 32 and 64 bit guests. */
> >  typedef uint64_t xen_pfn_t;
> > +typedef uint64_t xen_ulong_t;
> >  /* Guest handles for primitive C types. */
> >  __DEFINE_GUEST_HANDLE(uchar, unsigned char);
> >  __DEFINE_GUEST_HANDLE(uint,  unsigned int);
> > diff --git a/arch/ia64/include/asm/xen/interface.h b/arch/ia64/include/asm/xen/interface.h
> > index 686464e..7c83445 100644
> > --- a/arch/ia64/include/asm/xen/interface.h
> > +++ b/arch/ia64/include/asm/xen/interface.h
> > @@ -71,6 +71,7 @@
> >   * with Xen so that we could have one ABI that works for 32 and 64 bit
> >   * guests. */
> >  typedef unsigned long xen_pfn_t;
> > +typedef unsigned long xen_ulong_t;
> >  /* Guest handles for primitive C types. */
> >  __DEFINE_GUEST_HANDLE(uchar, unsigned char);
> >  __DEFINE_GUEST_HANDLE(uint, unsigned int);
> > diff --git a/arch/x86/include/asm/xen/interface.h b/arch/x86/include/asm/xen/interface.h
> > index 555f94d..28fc621 100644
> > --- a/arch/x86/include/asm/xen/interface.h
> > +++ b/arch/x86/include/asm/xen/interface.h
> > @@ -51,6 +51,7 @@
> >   * with Xen so that on ARM we can have one ABI that works for 32 and 64
> >   * bit guests. */
> >  typedef unsigned long xen_pfn_t;
> > +typedef unsigned long xen_ulong_t;
> >  /* Guest handles for primitive C types. */
> >  __DEFINE_GUEST_HANDLE(uchar, unsigned char);
> >  __DEFINE_GUEST_HANDLE(uint,  unsigned int);
> > diff --git a/include/xen/interface/memory.h b/include/xen/interface/memory.h
> > index abbbff0..b5c3098 100644
> > --- a/include/xen/interface/memory.h
> > +++ b/include/xen/interface/memory.h
> > @@ -34,7 +34,7 @@ struct xen_memory_reservation {
> >      GUEST_HANDLE(xen_pfn_t) extent_start;
> >  
> >      /* Number of extents, and size/alignment of each (2^extent_order pages). */
> > -    unsigned long  nr_extents;
> > +    xen_ulong_t  nr_extents;
> >      unsigned int   extent_order;
> >  
> >      /*
> > @@ -92,7 +92,7 @@ struct xen_memory_exchange {
> >       *     command will be non-zero.
> >       *  5. THIS FIELD MUST BE INITIALISED TO ZERO BY THE CALLER!
> >       */
> > -    unsigned long nr_exchanged;
> > +    xen_ulong_t nr_exchanged;
> >  };
> >  
> >  DEFINE_GUEST_HANDLE_STRUCT(xen_memory_exchange);
> > @@ -148,8 +148,8 @@ DEFINE_GUEST_HANDLE_STRUCT(xen_machphys_mfn_list);
> >   */
> >  #define XENMEM_machphys_mapping     12
> >  struct xen_machphys_mapping {
> > -    unsigned long v_start, v_end; /* Start and end virtual addresses.   */
> > -    unsigned long max_mfn;        /* Maximum MFN that can be looked up. */
> > +    xen_ulong_t v_start, v_end; /* Start and end virtual addresses.   */
> > +    xen_ulong_t max_mfn;        /* Maximum MFN that can be looked up. */
> >  };
> >  DEFINE_GUEST_HANDLE_STRUCT(xen_machphys_mapping_t);
> >  
> > @@ -169,7 +169,7 @@ struct xen_add_to_physmap {
> >      unsigned int space;
> >  
> >      /* Index into source mapping space. */
> > -    unsigned long idx;
> > +    xen_ulong_t idx;
> >  
> >      /* GPFN where the source mapping page should appear. */
> >      xen_pfn_t gpfn;
> > @@ -186,7 +186,7 @@ struct xen_translate_gpfn_list {
> >      domid_t domid;
> >  
> >      /* Length of list. */
> > -    unsigned long nr_gpfns;
> > +    xen_ulong_t nr_gpfns;
> >  
> >      /* List of GPFNs to translate. */
> >      GUEST_HANDLE(ulong) gpfn_list;
> > diff --git a/include/xen/interface/physdev.h b/include/xen/interface/physdev.h
> > index 9ce788d..bc3ae14 100644
> > --- a/include/xen/interface/physdev.h
> > +++ b/include/xen/interface/physdev.h
> > @@ -56,7 +56,7 @@ struct physdev_eoi {
> >  #define PHYSDEVOP_pirq_eoi_gmfn_v2       28
> >  struct physdev_pirq_eoi_gmfn {
> >      /* IN */
> > -    unsigned long gmfn;
> > +    xen_ulong_t gmfn;
> >  };
> >  
> >  /*
> > @@ -108,7 +108,7 @@ struct physdev_set_iobitmap {
> >  #define PHYSDEVOP_apic_write		 9
> >  struct physdev_apic {
> >  	/* IN */
> > -	unsigned long apic_physbase;
> > +	xen_ulong_t apic_physbase;
> >  	uint32_t reg;
> >  	/* IN or OUT */
> >  	uint32_t value;
> > diff --git a/include/xen/interface/version.h b/include/xen/interface/version.h
> > index e8b6519..30280c9 100644
> > --- a/include/xen/interface/version.h
> > +++ b/include/xen/interface/version.h
> > @@ -45,7 +45,7 @@ struct xen_changeset_info {
> >  
> >  #define XENVER_platform_parameters 5
> >  struct xen_platform_parameters {
> > -    unsigned long virt_start;
> > +    xen_ulong_t virt_start;
> >  };
> >  
> >  #define XENVER_get_features 6
> > diff --git a/include/xen/interface/xen.h b/include/xen/interface/xen.h
> > index 42834a3..ec32115 100644
> > --- a/include/xen/interface/xen.h
> > +++ b/include/xen/interface/xen.h
> > @@ -274,9 +274,9 @@ DEFINE_GUEST_HANDLE_STRUCT(mmu_update);
> >   * NB. The fields are natural register size for this architecture.
> >   */
> >  struct multicall_entry {
> > -    unsigned long op;
> > -    long result;
> > -    unsigned long args[6];
> > +    xen_ulong_t op;
> > +    xen_ulong_t result;
> > +    xen_ulong_t args[6];
> >  };
> >  DEFINE_GUEST_HANDLE_STRUCT(multicall_entry);
> >  
> > -- 
> > 1.7.2.5
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 16:52:18 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 16:52:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz9UM-0006zD-Di; Wed, 08 Aug 2012 16:52:06 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Sz9UK-0006z7-JM
	for xen-devel@lists.xensource.com; Wed, 08 Aug 2012 16:52:04 +0000
Received: from [85.158.138.51:60323] by server-11.bemta-3.messagelabs.com id
	B7/DB-10722-33992205; Wed, 08 Aug 2012 16:52:03 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-14.tower-174.messagelabs.com!1344444723!24730234!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg1NTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12800 invoked from network); 8 Aug 2012 16:52:03 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-14.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Aug 2012 16:52:03 -0000
X-IronPort-AV: E=Sophos;i="4.77,733,1336348800"; d="scan'208";a="13914482"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Aug 2012 16:52:03 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 8 Aug 2012 17:52:02 +0100
Date: Wed, 8 Aug 2012 17:51:39 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Daniel De Graaf <dgdegra@tycho.nsa.gov>
In-Reply-To: <50216228.7010407@tycho.nsa.gov>
Message-ID: <alpine.DEB.2.02.1208081743430.21096@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
	<1344263246-28036-10-git-send-email-stefano.stabellini@eu.citrix.com>
	<20120807182157.GN15053@phenom.dumpdata.com>
	<50216228.7010407@tycho.nsa.gov>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linaro-dev@lists.linaro.org" <linaro-dev@lists.linaro.org>,
	Ian Campbell <Ian.Campbell@citrix.com>, "arnd@arndb.de" <arnd@arndb.de>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"catalin.marinas@arm.com" <catalin.marinas@arm.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"Tim \(Xen.org\)" <tim@xen.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [PATCH v2 10/23] xen/arm: compile and run xenbus
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 7 Aug 2012, Daniel De Graaf wrote:
> On 08/07/2012 02:21 PM, Konrad Rzeszutek Wilk wrote:
> > On Mon, Aug 06, 2012 at 03:27:13PM +0100, Stefano Stabellini wrote:
> >> bind_evtchn_to_irqhandler can legitimately return 0 (irq 0): it is not
> >> an error.
> >>
> >> If Linux is running as an HVM domain and is running as Dom0, use
> >> xenstored_local_init to initialize the xenstore page and event channel.
> >>
> >> Changes in v2:
> >>
> >> - refactor xenbus_init.
> > 
> > Thank you. Let's also CC our friend at the NSA, who has been doing
> > some work in that area. Daniel, are you OK with this change - will the
> > PV initial domain still work with the MiniOS XenBus driver?
> > 
> > Thanks.
> 
> That case will work, but what this will break is launching the initial domain
> with a Xenstore stub domain already running (see below).
> 
> >>
> >> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> >> ---
> >>  drivers/xen/xenbus/xenbus_comms.c |    2 +-
> >>  drivers/xen/xenbus/xenbus_probe.c |   62 +++++++++++++++++++++++++-----------
> >>  drivers/xen/xenbus/xenbus_xs.c    |    1 +
> >>  3 files changed, 45 insertions(+), 20 deletions(-)
> >>
> >> diff --git a/drivers/xen/xenbus/xenbus_comms.c b/drivers/xen/xenbus/xenbus_comms.c
> >> index 52fe7ad..c5aa55c 100644
> >> --- a/drivers/xen/xenbus/xenbus_comms.c
> >> +++ b/drivers/xen/xenbus/xenbus_comms.c
> >> @@ -224,7 +224,7 @@ int xb_init_comms(void)
> >>  		int err;
> >>  		err = bind_evtchn_to_irqhandler(xen_store_evtchn, wake_waiting,
> >>  						0, "xenbus", &xb_waitq);
> >> -		if (err <= 0) {
> >> +		if (err < 0) {
> >>  			printk(KERN_ERR "XENBUS request irq failed %i\n", err);
> >>  			return err;
> >>  		}
> > 
> >> diff --git a/drivers/xen/xenbus/xenbus_probe.c b/drivers/xen/xenbus/xenbus_probe.c
> >> index b793723..a67ccc0 100644
> >> --- a/drivers/xen/xenbus/xenbus_probe.c
> >> +++ b/drivers/xen/xenbus/xenbus_probe.c
> >> @@ -719,37 +719,61 @@ static int __init xenstored_local_init(void)
> >>  	return err;
> >>  }
> >>  
> >> +enum xenstore_init {
> >> +	UNKNOWN,
> >> +	PV,
> >> +	HVM,
> >> +	LOCAL,
> >> +};
> >>  static int __init xenbus_init(void)
> >>  {
> >>  	int err = 0;
> >> +	enum xenstore_init usage = UNKNOWN;
> >> +	uint64_t v = 0;
> >>  
> >>  	if (!xen_domain())
> >>  		return -ENODEV;
> >>  
> >>  	xenbus_ring_ops_init();
> >>  
> >> -	if (xen_hvm_domain()) {
> >> -		uint64_t v = 0;
> >> -		err = hvm_get_parameter(HVM_PARAM_STORE_EVTCHN, &v);
> >> -		if (err)
> >> -			goto out_error;
> >> -		xen_store_evtchn = (int)v;
> >> -		err = hvm_get_parameter(HVM_PARAM_STORE_PFN, &v);
> >> -		if (err)
> >> -			goto out_error;
> >> -		xen_store_mfn = (unsigned long)v;
> >> -		xen_store_interface = ioremap(xen_store_mfn << PAGE_SHIFT, PAGE_SIZE);
> >> -	} else {
> >> -		xen_store_evtchn = xen_start_info->store_evtchn;
> >> -		xen_store_mfn = xen_start_info->store_mfn;
> >> -		if (xen_store_evtchn)
> >> -			xenstored_ready = 1;
> >> -		else {
> >> +	if (xen_pv_domain())
> >> +		usage = PV;
> >> +	if (xen_hvm_domain())
> >> +		usage = HVM;
> 
> The above is correct for domUs, and is overridden for dom0s:
>
> >> +	if (xen_hvm_domain() && xen_initial_domain())
> >> +		usage = LOCAL;
> >> +	if (xen_pv_domain() && !xen_start_info->store_evtchn)
> >> +		usage = LOCAL;
> 
> Instead of these checks, I think it should just be:
> 
> if (!xen_start_info->store_evtchn)
> 	usage = LOCAL;
> 
> Any domain started after xenstore will have store_evtchn set, so if you don't
> have this set, you are either going to be running xenstore locally, or will
> use the ioctl to change it later (and so should still set up everything as if
> it will be running locally).

That would be wrong for an HVM dom0 domain (at least on ARM), because
we don't have a start_info page at all.


> >> +	if (xen_pv_domain() && xen_start_info->store_evtchn)
> >> +		xenstored_ready = 1;
> 
> This part can now just be moved unconditionally into case PV.

What about:

if (xen_pv_domain())
    usage = PV;
if (xen_hvm_domain())
    usage = HVM;
if (!xen_store_evtchn)
    usage = LOCAL;

and moving xenstored_ready in case PV, like you suggested.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 17:01:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 17:01:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz9di-0007CL-NM; Wed, 08 Aug 2012 17:01:46 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1Sz9dh-0007CF-TR
	for xen-devel@lists.xensource.com; Wed, 08 Aug 2012 17:01:46 +0000
Received: from [85.158.143.35:51536] by server-1.bemta-4.messagelabs.com id
	AA/12-20198-97B92205; Wed, 08 Aug 2012 17:01:45 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-6.tower-21.messagelabs.com!1344445303!19327310!1
X-Originating-IP: [63.239.67.9]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6427 invoked from network); 8 Aug 2012 17:01:44 -0000
Received: from emvm-gh1-uea08.nsa.gov (HELO nsa.gov) (63.239.67.9)
	by server-6.tower-21.messagelabs.com with SMTP;
	8 Aug 2012 17:01:44 -0000
X-TM-IMSS-Message-ID: <85dc5e84000510bd@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov ([63.239.67.9])
	with ESMTP (TREND IMSS SMTP Service 7.1) id 85dc5e84000510bd ;
	Wed, 8 Aug 2012 13:01:43 -0400
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id q78H1arw004605; 
	Wed, 8 Aug 2012 13:01:36 -0400
Message-ID: <50229B70.3090507@tycho.nsa.gov>
Date: Wed, 08 Aug 2012 13:01:36 -0400
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Organization: National Security Agency
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120717 Thunderbird/14.0
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
	<1344263246-28036-10-git-send-email-stefano.stabellini@eu.citrix.com>
	<20120807182157.GN15053@phenom.dumpdata.com>
	<50216228.7010407@tycho.nsa.gov>
	<alpine.DEB.2.02.1208081743430.21096@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1208081743430.21096@kaball.uk.xensource.com>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linaro-dev@lists.linaro.org" <linaro-dev@lists.linaro.org>,
	Ian Campbell <Ian.Campbell@citrix.com>, "arnd@arndb.de" <arnd@arndb.de>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"catalin.marinas@arm.com" <catalin.marinas@arm.com>,
	"Tim \(Xen.org\)" <tim@xen.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [PATCH v2 10/23] xen/arm: compile and run xenbus
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/08/2012 12:51 PM, Stefano Stabellini wrote:
> On Tue, 7 Aug 2012, Daniel De Graaf wrote:
>> On 08/07/2012 02:21 PM, Konrad Rzeszutek Wilk wrote:
>>> On Mon, Aug 06, 2012 at 03:27:13PM +0100, Stefano Stabellini wrote:
>>>> bind_evtchn_to_irqhandler can legitimately return 0 (irq 0): it is not
>>>> an error.
>>>>
>>>> If Linux is running as an HVM domain and is running as Dom0, use
>>>> xenstored_local_init to initialize the xenstore page and event channel.
>>>>
>>>> Changes in v2:
>>>>
>>>> - refactor xenbus_init.
>>>
>>> Thank you. Let's also CC our friend at NSA who has been doing some work
>>> in that area. Daniel, are you OK with this change - will it still make
>>> a PV initial domain work with the MiniOS XenBus driver?
>>>
>>> Thanks.
>>
>> That case will work, but what this will break is launching the initial domain
>> with a Xenstore stub domain already running (see below).
>>
>>>>
>>>> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
>>>> ---
>>>>  drivers/xen/xenbus/xenbus_comms.c |    2 +-
>>>>  drivers/xen/xenbus/xenbus_probe.c |   62 +++++++++++++++++++++++++-----------
>>>>  drivers/xen/xenbus/xenbus_xs.c    |    1 +
>>>>  3 files changed, 45 insertions(+), 20 deletions(-)
>>>>
>>>> diff --git a/drivers/xen/xenbus/xenbus_comms.c b/drivers/xen/xenbus/xenbus_comms.c
>>>> index 52fe7ad..c5aa55c 100644
>>>> --- a/drivers/xen/xenbus/xenbus_comms.c
>>>> +++ b/drivers/xen/xenbus/xenbus_comms.c
>>>> @@ -224,7 +224,7 @@ int xb_init_comms(void)
>>>>  		int err;
>>>>  		err = bind_evtchn_to_irqhandler(xen_store_evtchn, wake_waiting,
>>>>  						0, "xenbus", &xb_waitq);
>>>> -		if (err <= 0) {
>>>> +		if (err < 0) {
>>>>  			printk(KERN_ERR "XENBUS request irq failed %i\n", err);
>>>>  			return err;
>>>>  		}
>>>
>>>> diff --git a/drivers/xen/xenbus/xenbus_probe.c b/drivers/xen/xenbus/xenbus_probe.c
>>>> index b793723..a67ccc0 100644
>>>> --- a/drivers/xen/xenbus/xenbus_probe.c
>>>> +++ b/drivers/xen/xenbus/xenbus_probe.c
>>>> @@ -719,37 +719,61 @@ static int __init xenstored_local_init(void)
>>>>  	return err;
>>>>  }
>>>>  
>>>> +enum xenstore_init {
>>>> +	UNKNOWN,
>>>> +	PV,
>>>> +	HVM,
>>>> +	LOCAL,
>>>> +};
>>>>  static int __init xenbus_init(void)
>>>>  {
>>>>  	int err = 0;
>>>> +	enum xenstore_init usage = UNKNOWN;
>>>> +	uint64_t v = 0;
>>>>  
>>>>  	if (!xen_domain())
>>>>  		return -ENODEV;
>>>>  
>>>>  	xenbus_ring_ops_init();
>>>>  
>>>> -	if (xen_hvm_domain()) {
>>>> -		uint64_t v = 0;
>>>> -		err = hvm_get_parameter(HVM_PARAM_STORE_EVTCHN, &v);
>>>> -		if (err)
>>>> -			goto out_error;
>>>> -		xen_store_evtchn = (int)v;
>>>> -		err = hvm_get_parameter(HVM_PARAM_STORE_PFN, &v);
>>>> -		if (err)
>>>> -			goto out_error;
>>>> -		xen_store_mfn = (unsigned long)v;
>>>> -		xen_store_interface = ioremap(xen_store_mfn << PAGE_SHIFT, PAGE_SIZE);
>>>> -	} else {
>>>> -		xen_store_evtchn = xen_start_info->store_evtchn;
>>>> -		xen_store_mfn = xen_start_info->store_mfn;
>>>> -		if (xen_store_evtchn)
>>>> -			xenstored_ready = 1;
>>>> -		else {
>>>> +	if (xen_pv_domain())
>>>> +		usage = PV;
>>>> +	if (xen_hvm_domain())
>>>> +		usage = HVM;
>>
>> The above is correct for domUs, and is overridden for dom0s:
>>
>>>> +	if (xen_hvm_domain() && xen_initial_domain())
>>>> +		usage = LOCAL;
>>>> +	if (xen_pv_domain() && !xen_start_info->store_evtchn)
>>>> +		usage = LOCAL;
>>
>> Instead of these checks, I think it should just be:
>>
>> if (!xen_start_info->store_evtchn)
>> 	usage = LOCAL;
>>
>> Any domain started after xenstore will have store_evtchn set, so if you don't
>> have this set, you are either going to be running xenstore locally, or will
>> use the ioctl to change it later (and so should still set up everything as if
>> it will be running locally).
> 
> That would be wrong for an HVM dom0 domain (at least on ARM), because
> we don't have a start_info page at all.
> 
> 
>>>> +	if (xen_pv_domain() && xen_start_info->store_evtchn)
>>>> +		xenstored_ready = 1;
>>
>> This part can now just be moved unconditionally into case PV.
> 
> What about:
> 
> if (xen_pv_domain())
>     usage = PV;
> if (xen_hvm_domain())
>     usage = HVM;
> if (!xen_store_evtchn)
>     usage = LOCAL;
> 
> and moving xenstored_ready in case PV, like you suggested.
> 

That looks correct, but you'd need to split up the switch statement in
order to populate xen_store_evtchn before that last condition, which
ends up pretty much eliminating the usage variable.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 17:19:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 17:19:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz9ut-0007TR-Hx; Wed, 08 Aug 2012 17:19:31 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Santosh.Jodh@citrix.com>) id 1Sz9us-0007TM-3a
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 17:19:30 +0000
Received: from [85.158.139.83:62704] by server-1.bemta-5.messagelabs.com id
	CD/19-14385-1AF92205; Wed, 08 Aug 2012 17:19:29 +0000
X-Env-Sender: Santosh.Jodh@citrix.com
X-Msg-Ref: server-8.tower-182.messagelabs.com!1344446367!19652836!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjMzNzA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21411 invoked from network); 8 Aug 2012 17:19:28 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Aug 2012 17:19:28 -0000
X-IronPort-AV: E=Sophos;i="4.77,733,1336363200"; d="scan'208";a="34004141"
Received: from sjcpmailmx01.citrite.net ([10.216.14.74])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Aug 2012 13:17:32 -0400
Received: from SJCPMAILBOX01.citrite.net ([10.216.4.72]) by
	SJCPMAILMX01.citrite.net ([10.216.14.74]) with mapi;
	Wed, 8 Aug 2012 10:17:32 -0700
From: Santosh Jodh <Santosh.Jodh@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Wed, 8 Aug 2012 10:17:17 -0700
Thread-Topic: [Xen-devel] [PATCH] dump_p2m_table: For IOMMU
Thread-Index: Ac11ggDi2HQAt37aSbmCvHsyQECFgQAAZEmA
Message-ID: <7914B38A4445B34AA16EB9F1352942F1012F0E1E93A1@SJCPMAILBOX01.citrite.net>
References: <68946c7e1d67f823b072.1344441362@REDBLD-XS.ad.xensource.com>
	<5022AE180200007800093B41@nat28.tlf.novell.com>
In-Reply-To: <5022AE180200007800093B41@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
MIME-Version: 1.0
Cc: "wei.wang2@amd.com" <wei.wang2@amd.com>, "Tim \(Xen.org\)" <tim@xen.org>,
	"xiantao.zhang@intel.com" <xiantao.zhang@intel.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] dump_p2m_table: For IOMMU
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Thanks Jan. No - it does not compile on x86_32. Who runs x86_32 xen anyway? Just kidding...

BTW, I hate printf format specifiers...

Thanks,
Santosh

-----Original Message-----
From: Jan Beulich [mailto:JBeulich@suse.com] 
Sent: Wednesday, August 08, 2012 9:21 AM
To: Santosh Jodh
Cc: wei.wang2@amd.com; xiantao.zhang@intel.com; xen-devel; Tim (Xen.org)
Subject: Re: [Xen-devel] [PATCH] dump_p2m_table: For IOMMU

>>> On 08.08.12 at 17:56, Santosh Jodh <santosh.jodh@citrix.com> wrote:
> +    if ( table_vaddr == NULL )
> +    {
> +        printk("Failed to map IOMMU domain page %-16lx\n", 
> + page_to_maddr(pg));

Does that build on x86-32? Similar format specifier problems appear to be present elsewhere in the patch.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Thanks Jan. No - it does not compile on x86_32. Who runs x86_32 xen anyway? Just kidding...

BTW, I hate printf format specifiers...

Thanks,
Santosh

-----Original Message-----
From: Jan Beulich [mailto:JBeulich@suse.com] 
Sent: Wednesday, August 08, 2012 9:21 AM
To: Santosh Jodh
Cc: wei.wang2@amd.com; xiantao.zhang@intel.com; xen-devel; Tim (Xen.org)
Subject: Re: [Xen-devel] [PATCH] dump_p2m_table: For IOMMU

>>> On 08.08.12 at 17:56, Santosh Jodh <santosh.jodh@citrix.com> wrote:
> +    if ( table_vaddr == NULL )
> +    {
> +        printk("Failed to map IOMMU domain page %-16lx\n", 
> + page_to_maddr(pg));

Does that build on x86-32? Similar format specifier problems appear to be present elsewhere in the patch.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 17:19:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 17:19:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz9v0-0007U0-Af; Wed, 08 Aug 2012 17:19:38 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Santosh.Jodh@citrix.com>) id 1Sz9uy-0007Ta-0L
	for xen-devel@lists.xensource.com; Wed, 08 Aug 2012 17:19:36 +0000
Received: from [85.158.143.35:56698] by server-1.bemta-4.messagelabs.com id
	5A/21-20198-7AF92205; Wed, 08 Aug 2012 17:19:35 +0000
X-Env-Sender: Santosh.Jodh@citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1344446371!6033322!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjMzNzA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5748 invoked from network); 8 Aug 2012 17:19:32 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Aug 2012 17:19:32 -0000
X-IronPort-AV: E=Sophos;i="4.77,733,1336363200"; d="scan'208";a="34004188"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Aug 2012 13:17:36 -0400
Received: from REDBLD-XS.ad.xensource.com (10.232.6.25) by
	smtprelay.citrix.com (10.13.107.65) with Microsoft SMTP Server id
	8.3.213.0; Wed, 8 Aug 2012 13:17:36 -0400
MIME-Version: 1.0
X-Mercurial-Node: 8deb7c7a25c4a3bc50d7c2f80afc94f8776bf462
Message-ID: <8deb7c7a25c4a3bc50d7.1344446255@REDBLD-XS.ad.xensource.com>
User-Agent: Mercurial-patchbomb/1.6.4
Date: Wed, 8 Aug 2012 10:17:35 -0700
From: Santosh Jodh <santosh.jodh@citrix.com>
To: <xen-devel@lists.xensource.com>
Cc: wei.wang2@amd.com, tim@xen.org, xiantao.zhang@intel.com
Subject: [Xen-devel] [PATCH] dump_p2m_table: For IOMMU
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

New key handler 'o' to dump the IOMMU p2m table for each domain.
Skips dumping table for domain0.
Intel and AMD specific iommu_ops handler for dumping p2m table.

Signed-off-by: Santosh Jodh <santosh.jodh@citrix.com>

diff -r 472fc515a463 -r 8deb7c7a25c4 xen/drivers/passthrough/amd/pci_amd_iommu.c
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c	Tue Aug 07 18:37:31 2012 +0100
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c	Wed Aug 08 09:56:50 2012 -0700
@@ -22,6 +22,7 @@
 #include <xen/pci.h>
 #include <xen/pci_regs.h>
 #include <xen/paging.h>
+#include <xen/softirq.h>
 #include <asm/hvm/iommu.h>
 #include <asm/amd-iommu.h>
 #include <asm/hvm/svm/amd-iommu-proto.h>
@@ -512,6 +513,69 @@ static int amd_iommu_group_id(u16 seg, u
 
 #include <asm/io_apic.h>
 
+static void amd_dump_p2m_table_level(struct page_info* pg, int level, u64 gpa)
+{
+    u64 address;
+    void *table_vaddr, *pde;
+    u64 next_table_maddr;
+    int index, next_level, present;
+    u32 *entry;
+
+    if ( level <= 1 )
+        return;
+
+    table_vaddr = __map_domain_page(pg);
+    if ( table_vaddr == NULL )
+    {
+        printk("Failed to map IOMMU domain page %016"PRIx64"\n", page_to_maddr(pg));
+        return;
+    }
+
+    for ( index = 0; index < PTE_PER_TABLE_SIZE; index++ )
+    {
+        if ( !(index % 2) )
+            process_pending_softirqs();
+
+        pde = table_vaddr + (index * IOMMU_PAGE_TABLE_ENTRY_SIZE);
+        next_table_maddr = amd_iommu_get_next_table_from_pte(pde);
+        entry = (u32*)pde;
+
+        next_level = get_field_from_reg_u32(entry[0],
+                                            IOMMU_PDE_NEXT_LEVEL_MASK,
+                                            IOMMU_PDE_NEXT_LEVEL_SHIFT);
+
+        present = get_field_from_reg_u32(entry[0],
+                                         IOMMU_PDE_PRESENT_MASK,
+                                         IOMMU_PDE_PRESENT_SHIFT);
+
+        address = gpa + amd_offset_level_address(index, level);
+        if ( (next_table_maddr != 0) && (next_level != 0)
+            && present )
+        {
+            amd_dump_p2m_table_level(
+                maddr_to_page(next_table_maddr), level - 1, address);
+        }
+
+        if ( present )
+        {
+            printk("gfn: %016"PRIx64"  mfn: %016"PRIx64"\n",
+                   address >> PAGE_SHIFT, next_table_maddr >> PAGE_SHIFT);
+        }
+    }
+
+    unmap_domain_page(table_vaddr);
+}
+
+static void amd_dump_p2m_table(struct domain *d)
+{
+    struct hvm_iommu *hd  = domain_hvm_iommu(d);
+
+    if ( !hd->root_table ) 
+        return;
+
+    amd_dump_p2m_table_level(hd->root_table, hd->paging_mode, 0);
+}
+
 const struct iommu_ops amd_iommu_ops = {
     .init = amd_iommu_domain_init,
     .dom0_init = amd_iommu_dom0_init,
@@ -531,4 +595,5 @@ const struct iommu_ops amd_iommu_ops = {
     .resume = amd_iommu_resume,
     .share_p2m = amd_iommu_share_p2m,
     .crash_shutdown = amd_iommu_suspend,
+    .dump_p2m_table = amd_dump_p2m_table,
 };
diff -r 472fc515a463 -r 8deb7c7a25c4 xen/drivers/passthrough/iommu.c
--- a/xen/drivers/passthrough/iommu.c	Tue Aug 07 18:37:31 2012 +0100
+++ b/xen/drivers/passthrough/iommu.c	Wed Aug 08 09:56:50 2012 -0700
@@ -18,6 +18,7 @@
 #include <asm/hvm/iommu.h>
 #include <xen/paging.h>
 #include <xen/guest_access.h>
+#include <xen/keyhandler.h>
 #include <xen/softirq.h>
 #include <xsm/xsm.h>
 
@@ -54,6 +55,8 @@ bool_t __read_mostly amd_iommu_perdev_in
 
 DEFINE_PER_CPU(bool_t, iommu_dont_flush_iotlb);
 
+void setup_iommu_dump(void);
+
 static void __init parse_iommu_param(char *s)
 {
     char *ss;
@@ -119,6 +122,7 @@ void __init iommu_dom0_init(struct domai
     if ( !iommu_enabled )
         return;
 
+    setup_iommu_dump();
     d->need_iommu = !!iommu_dom0_strict;
     if ( need_iommu(d) )
     {
@@ -654,6 +658,46 @@ int iommu_do_domctl(
     return ret;
 }
 
+static void iommu_dump_p2m_table(unsigned char key)
+{
+    struct domain *d;
+    const struct iommu_ops *ops;
+
+    if ( !iommu_enabled )
+    {
+        printk("IOMMU not enabled!\n");
+        return;
+    }
+
+    ops = iommu_get_ops();
+    for_each_domain(d)
+    {
+        if ( !d->domain_id )
+            continue;
+
+        if ( iommu_use_hap_pt(d) )
+        {
+            printk("\ndomain%d IOMMU p2m table shared with MMU: \n", d->domain_id);
+            continue;
+        }
+
+        printk("\ndomain%d IOMMU p2m table: \n", d->domain_id);
+        ops->dump_p2m_table(d);
+    }
+}
+
+static struct keyhandler iommu_p2m_table = {
+    .diagnostic = 0,
+    .u.fn = iommu_dump_p2m_table,
+    .desc = "dump iommu p2m table"
+};
+
+void __init setup_iommu_dump(void)
+{
+    register_keyhandler('o', &iommu_p2m_table);
+}
+
+
 /*
  * Local variables:
  * mode: C
diff -r 472fc515a463 -r 8deb7c7a25c4 xen/drivers/passthrough/vtd/iommu.c
--- a/xen/drivers/passthrough/vtd/iommu.c	Tue Aug 07 18:37:31 2012 +0100
+++ b/xen/drivers/passthrough/vtd/iommu.c	Wed Aug 08 09:56:50 2012 -0700
@@ -31,6 +31,7 @@
 #include <xen/pci.h>
 #include <xen/pci_regs.h>
 #include <xen/keyhandler.h>
+#include <xen/softirq.h>
 #include <asm/msi.h>
 #include <asm/irq.h>
 #if defined(__i386__) || defined(__x86_64__)
@@ -2365,6 +2366,56 @@ static void vtd_resume(void)
     }
 }
 
+static void vtd_dump_p2m_table_level(u64 pt_maddr, int level, u64 gpa)
+{
+    u64 address;
+    int i;
+    struct dma_pte *pt_vaddr, *pte;
+    int next_level;
+
+    if ( pt_maddr == 0 )
+        return;
+
+    pt_vaddr = (struct dma_pte *)map_vtd_domain_page(pt_maddr);
+    if ( pt_vaddr == NULL )
+    {
+        printk("Failed to map VT-D domain page %016"PRIx64"\n", pt_maddr);
+        return;
+    }
+
+    next_level = level - 1;
+    for ( i = 0; i < PTE_NUM; i++ )
+    {
+        if ( !(i % 2) )
+            process_pending_softirqs();
+
+        pte = &pt_vaddr[i];
+        if ( !dma_pte_present(*pte) )
+            continue;
+        
+        address = gpa + offset_level_address(i, level);
+        if ( next_level >= 1 )
+            vtd_dump_p2m_table_level(dma_pte_addr(*pte), next_level, address);
+
+        if ( level == 1 )
+            printk("gfn: %016"PRIx64" mfn: %016"PRIx64" superpage=%d\n", 
+                    address >> PAGE_SHIFT_4K, pte->val >> PAGE_SHIFT_4K, dma_pte_superpage(*pte)? 1 : 0);
+    }
+
+    unmap_vtd_domain_page(pt_vaddr);
+}
+
+static void vtd_dump_p2m_table(struct domain *d)
+{
+    struct hvm_iommu *hd;
+
+    if ( list_empty(&acpi_drhd_units) )
+        return;
+
+    hd = domain_hvm_iommu(d);
+    vtd_dump_p2m_table_level(hd->pgd_maddr, agaw_to_level(hd->agaw), 0);
+}
+
 const struct iommu_ops intel_iommu_ops = {
     .init = intel_iommu_domain_init,
     .dom0_init = intel_iommu_dom0_init,
@@ -2387,6 +2438,7 @@ const struct iommu_ops intel_iommu_ops =
     .crash_shutdown = vtd_crash_shutdown,
     .iotlb_flush = intel_iommu_iotlb_flush,
     .iotlb_flush_all = intel_iommu_iotlb_flush_all,
+    .dump_p2m_table = vtd_dump_p2m_table,
 };
 
 /*
diff -r 472fc515a463 -r 8deb7c7a25c4 xen/drivers/passthrough/vtd/iommu.h
--- a/xen/drivers/passthrough/vtd/iommu.h	Tue Aug 07 18:37:31 2012 +0100
+++ b/xen/drivers/passthrough/vtd/iommu.h	Wed Aug 08 09:56:50 2012 -0700
@@ -248,6 +248,8 @@ struct context_entry {
 #define level_to_offset_bits(l) (12 + (l - 1) * LEVEL_STRIDE)
 #define address_level_offset(addr, level) \
             ((addr >> level_to_offset_bits(level)) & LEVEL_MASK)
+#define offset_level_address(offset, level) \
+            ((u64)(offset) << level_to_offset_bits(level))
 #define level_mask(l) (((u64)(-1)) << level_to_offset_bits(l))
 #define level_size(l) (1 << level_to_offset_bits(l))
 #define align_to_level(addr, l) ((addr + level_size(l) - 1) & level_mask(l))
@@ -277,6 +279,7 @@ struct dma_pte {
 #define dma_set_pte_addr(p, addr) do {\
             (p).val |= ((addr) & PAGE_MASK_4K); } while (0)
 #define dma_pte_present(p) (((p).val & 3) != 0)
+#define dma_pte_superpage(p) (((p).val & (1<<7)) != 0)
 
 /* interrupt remap entry */
 struct iremap_entry {
diff -r 472fc515a463 -r 8deb7c7a25c4 xen/include/asm-x86/hvm/svm/amd-iommu-defs.h
--- a/xen/include/asm-x86/hvm/svm/amd-iommu-defs.h	Tue Aug 07 18:37:31 2012 +0100
+++ b/xen/include/asm-x86/hvm/svm/amd-iommu-defs.h	Wed Aug 08 09:56:50 2012 -0700
@@ -38,6 +38,10 @@
 #define PTE_PER_TABLE_ALLOC(entries)	\
 	PAGE_SIZE * (PTE_PER_TABLE_ALIGN(entries) >> PTE_PER_TABLE_SHIFT)
 
+#define amd_offset_level_address(offset, level) \
+      	((u64)(offset) << ((PTE_PER_TABLE_SHIFT * \
+                             (level - IOMMU_PAGING_MODE_LEVEL_1))))
+
 #define PCI_MIN_CAP_OFFSET	0x40
 #define PCI_MAX_CAP_BLOCKS	48
 #define PCI_CAP_PTR_MASK	0xFC
diff -r 472fc515a463 -r 8deb7c7a25c4 xen/include/xen/iommu.h
--- a/xen/include/xen/iommu.h	Tue Aug 07 18:37:31 2012 +0100
+++ b/xen/include/xen/iommu.h	Wed Aug 08 09:56:50 2012 -0700
@@ -141,6 +141,7 @@ struct iommu_ops {
     void (*crash_shutdown)(void);
     void (*iotlb_flush)(struct domain *d, unsigned long gfn, unsigned int page_count);
     void (*iotlb_flush_all)(struct domain *d);
+    void (*dump_p2m_table)(struct domain *d);
 };
 
 void iommu_update_ire_from_apic(unsigned int apic, unsigned int reg, unsigned int value);

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 17:19:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 17:19:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz9uy-0007To-UU; Wed, 08 Aug 2012 17:19:36 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Sz9uw-0007Ta-Ok
	for xen-devel@lists.xensource.com; Wed, 08 Aug 2012 17:19:35 +0000
Received: from [85.158.143.99:65234] by server-1.bemta-4.messagelabs.com id
	05/21-20198-6AF92205; Wed, 08 Aug 2012 17:19:34 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-12.tower-216.messagelabs.com!1344446373!25635736!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg1NTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30219 invoked from network); 8 Aug 2012 17:19:33 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-12.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Aug 2012 17:19:33 -0000
X-IronPort-AV: E=Sophos;i="4.77,733,1336348800"; d="scan'208";a="13915090"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Aug 2012 17:19:32 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 8 Aug 2012 18:19:32 +0100
Date: Wed, 8 Aug 2012 18:19:08 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Daniel De Graaf <dgdegra@tycho.nsa.gov>
In-Reply-To: <50229B70.3090507@tycho.nsa.gov>
Message-ID: <alpine.DEB.2.02.1208081805520.21096@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
	<1344263246-28036-10-git-send-email-stefano.stabellini@eu.citrix.com>
	<20120807182157.GN15053@phenom.dumpdata.com>
	<50216228.7010407@tycho.nsa.gov>
	<alpine.DEB.2.02.1208081743430.21096@kaball.uk.xensource.com>
	<50229B70.3090507@tycho.nsa.gov>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linaro-dev@lists.linaro.org" <linaro-dev@lists.linaro.org>,
	Ian Campbell <Ian.Campbell@citrix.com>, "arnd@arndb.de" <arnd@arndb.de>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"catalin.marinas@arm.com" <catalin.marinas@arm.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"Tim \(Xen.org\)" <tim@xen.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [PATCH v2 10/23] xen/arm: compile and run xenbus
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 8 Aug 2012, Daniel De Graaf wrote:
> On 08/08/2012 12:51 PM, Stefano Stabellini wrote:
> > On Tue, 7 Aug 2012, Daniel De Graaf wrote:
> >> On 08/07/2012 02:21 PM, Konrad Rzeszutek Wilk wrote:
> >>> On Mon, Aug 06, 2012 at 03:27:13PM +0100, Stefano Stabellini wrote:
> >>>> bind_evtchn_to_irqhandler can legitimately return 0 (irq 0): it is not
> >>>> an error.
> >>>>
> >>>> If Linux is running as an HVM domain and is running as Dom0, use
> >>>> xenstored_local_init to initialize the xenstore page and event channel.
> >>>>
> >>>> Changes in v2:
> >>>>
> >>>> - refactor xenbus_init.
> >>>
> >>> Thank you. Let's also CC our friend at NSA who has been doing some work
> >>> in that area. Daniel, are you OK with this change - will it still make
> >>> PV initial domain work with the MiniOS XenBus driver?
> >>>
> >>> Thanks.
> >>
> >> That case will work, but what this will break is launching the initial domain
> >> with a Xenstore stub domain already running (see below).
> >>
> >>>>
> >>>> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> >>>> ---
> >>>>  drivers/xen/xenbus/xenbus_comms.c |    2 +-
> >>>>  drivers/xen/xenbus/xenbus_probe.c |   62 +++++++++++++++++++++++++-----------
> >>>>  drivers/xen/xenbus/xenbus_xs.c    |    1 +
> >>>>  3 files changed, 45 insertions(+), 20 deletions(-)
> >>>>
> >>>> diff --git a/drivers/xen/xenbus/xenbus_comms.c b/drivers/xen/xenbus/xenbus_comms.c
> >>>> index 52fe7ad..c5aa55c 100644
> >>>> --- a/drivers/xen/xenbus/xenbus_comms.c
> >>>> +++ b/drivers/xen/xenbus/xenbus_comms.c
> >>>> @@ -224,7 +224,7 @@ int xb_init_comms(void)
> >>>>  		int err;
> >>>>  		err = bind_evtchn_to_irqhandler(xen_store_evtchn, wake_waiting,
> >>>>  						0, "xenbus", &xb_waitq);
> >>>> -		if (err <= 0) {
> >>>> +		if (err < 0) {
> >>>>  			printk(KERN_ERR "XENBUS request irq failed %i\n", err);
> >>>>  			return err;
> >>>>  		}
> >>>
> >>>> diff --git a/drivers/xen/xenbus/xenbus_probe.c b/drivers/xen/xenbus/xenbus_probe.c
> >>>> index b793723..a67ccc0 100644
> >>>> --- a/drivers/xen/xenbus/xenbus_probe.c
> >>>> +++ b/drivers/xen/xenbus/xenbus_probe.c
> >>>> @@ -719,37 +719,61 @@ static int __init xenstored_local_init(void)
> >>>>  	return err;
> >>>>  }
> >>>>  
> >>>> +enum xenstore_init {
> >>>> +	UNKNOWN,
> >>>> +	PV,
> >>>> +	HVM,
> >>>> +	LOCAL,
> >>>> +};
> >>>>  static int __init xenbus_init(void)
> >>>>  {
> >>>>  	int err = 0;
> >>>> +	enum xenstore_init usage = UNKNOWN;
> >>>> +	uint64_t v = 0;
> >>>>  
> >>>>  	if (!xen_domain())
> >>>>  		return -ENODEV;
> >>>>  
> >>>>  	xenbus_ring_ops_init();
> >>>>  
> >>>> -	if (xen_hvm_domain()) {
> >>>> -		uint64_t v = 0;
> >>>> -		err = hvm_get_parameter(HVM_PARAM_STORE_EVTCHN, &v);
> >>>> -		if (err)
> >>>> -			goto out_error;
> >>>> -		xen_store_evtchn = (int)v;
> >>>> -		err = hvm_get_parameter(HVM_PARAM_STORE_PFN, &v);
> >>>> -		if (err)
> >>>> -			goto out_error;
> >>>> -		xen_store_mfn = (unsigned long)v;
> >>>> -		xen_store_interface = ioremap(xen_store_mfn << PAGE_SHIFT, PAGE_SIZE);
> >>>> -	} else {
> >>>> -		xen_store_evtchn = xen_start_info->store_evtchn;
> >>>> -		xen_store_mfn = xen_start_info->store_mfn;
> >>>> -		if (xen_store_evtchn)
> >>>> -			xenstored_ready = 1;
> >>>> -		else {
> >>>> +	if (xen_pv_domain())
> >>>> +		usage = PV;
> >>>> +	if (xen_hvm_domain())
> >>>> +		usage = HVM;
> >>
> >> The above is correct for domUs, and is overridden for dom0s:
> >>
> >>>> +	if (xen_hvm_domain() && xen_initial_domain())
> >>>> +		usage = LOCAL;
> >>>> +	if (xen_pv_domain() && !xen_start_info->store_evtchn)
> >>>> +		usage = LOCAL;
> >>
> >> Instead of these checks, I think it should just be:
> >>
> >> if (!xen_start_info->store_evtchn)
> >> 	usage = LOCAL;
> >>
> >> Any domain started after xenstore will have store_evtchn set, so if you don't
> >> have this set, you are either going to be running xenstore locally, or will
> >> use the ioctl to change it later (and so should still set up everything as if
> >> it will be running locally).
> > 
> > That would be wrong for an HVM dom0 domain (at least on ARM), because
> > we don't have a start_info page at all.
> > 
> > 
> >>>> +	if (xen_pv_domain() && xen_start_info->store_evtchn)
> >>>> +		xenstored_ready = 1;
> >>
> >> This part can now just be moved unconditionally into case PV.
> > 
> > What about:
> > 
> > if (xen_pv_domain())
> >     usage = PV;
> > if (xen_hvm_domain())
> >     usage = HVM;
> > if (!xen_store_evtchn)
> >     usage = LOCAL;
> > 
> > and moving xenstored_ready in case PV, like you suggested.
> > 
> 
> That looks correct, but you'd need to split up the switch statement in
> order to populate xen_store_evtchn before that last condition, which
> ends up pretty much eliminating the usage variable.

Going back to what you wrote in the previous email: in what way does this
patch break the case where an initial domain is started after a Xenstore
stub domain?

Assuming that we are talking about a PV initial domain on x86, the
following check

if (xen_pv_domain() && !xen_start_info->store_evtchn)
    usage = LOCAL;

will return false (because store_evtchn is set), therefore usage will
remain set to PV.
And the check:

if (xen_pv_domain() && xen_start_info->store_evtchn)
	xenstored_ready = 1;

will return true so xenstored_ready is going to be set to 1.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 17:22:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 17:22:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz9xa-0007ks-CX; Wed, 08 Aug 2012 17:22:18 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lars.kurth.xen@gmail.com>) id 1Sz9xY-0007kE-LH
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 17:22:16 +0000
Received: from [85.158.143.99:17348] by server-1.bemta-4.messagelabs.com id
	42/33-20198-740A2205; Wed, 08 Aug 2012 17:22:15 +0000
X-Env-Sender: lars.kurth.xen@gmail.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1344446524!30308058!1
X-Originating-IP: [209.85.213.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17724 invoked from network); 8 Aug 2012 17:22:09 -0000
Received: from mail-yw0-f45.google.com (HELO mail-yw0-f45.google.com)
	(209.85.213.45)
	by server-3.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Aug 2012 17:22:09 -0000
Received: by yhpp34 with SMTP id p34so1166262yhp.32
	for <multiple recipients>; Wed, 08 Aug 2012 10:22:03 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:date:from:reply-to:user-agent:mime-version:to
	:subject:content-type:content-transfer-encoding;
	bh=G2AxySMZvIVlMKu7SBnacJHcIdcjbzONtODtQoGDtw4=;
	b=xt9s9hGRLRY7JxsFBI76/ZH1yJj7jpTHimezpsoB+8O019oBo4pkiG9tWqeo4s+j8J
	lGQqzEQgl2FOzpw/2ryLAWzdxXACQltp4Kj1H4hxS0aSNgCV/I50v0RLgvqSx2XJHl27
	2IlQAg2ynLjgbCj9RRWODY8qhQ5Vqane4ms5u7r2NL53sJlUGMwiexK6ZKb2X9K8wEPj
	XiyHZOq+D/Hi3UzYxnmImGGsCwynDR+5iCFm+B7jczWO/dQ7U1obh05pn4ucZ7PF0udQ
	mfkIi88UCa//eNZCaExKv9H/a+/PVk3z+oC00pu8am5v0HvPcJl4/lwlLQtJir9K9b0Y
	X+ng==
Received: by 10.236.154.2 with SMTP id g2mr17357655yhk.29.1344446523587;
	Wed, 08 Aug 2012 10:22:03 -0700 (PDT)
Received: from [172.16.25.10] (firewall.ctxuk.citrix.com. [62.200.22.2])
	by mx.google.com with ESMTPS id h8sm20747948ank.9.2012.08.08.10.22.01
	(version=SSLv3 cipher=OTHER); Wed, 08 Aug 2012 10:22:02 -0700 (PDT)
Message-ID: <5022A037.8080706@xen.org>
Date: Wed, 08 Aug 2012 18:21:59 +0100
From: Lars Kurth <lars.kurth@xen.org>
User-Agent: Mozilla/5.0 (Windows NT 6.1;
	rv:14.0) Gecko/20120713 Thunderbird/14.0
MIME-Version: 1.0
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>, 
	"xen-api@lists.xen.org" <xen-api@lists.xen.org>,
	xen-users@lists.xen.org, xen-arm@lists.xen.org
Subject: [Xen-devel] First Xen Test Day, August 14th
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: lars.kurth@xen.org
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi,

After the success of Xen Document Days and with Xen 4.2 being close to 
release, we decided to trial Xen Test Days. The first Xen Test Day will 
be on Tuesday, August 14th, on IRC freenode channel *#xentest*. The plan 
is to test Xen 4.2 RC2, which should be released shortly. You can find 
more information about Xen Test Days at:

* http://wiki.xen.org/wiki/Xen_Test_Days
* http://wiki.xen.org/wiki/Xen_4.2_RC2_test_instructions

What is a Xen Test Day?
=======================

Xen Test Days are all-day IRC events, facilitated by members of the Xen 
community. The purpose of Xen Test Days is to

  * Provide focus in testing Xen release candidates
  * Ensure that Xen RCs work with distros, with your hardware, and in
    your environment (the primary focus)
  * For Xen 4.2, also test the new XL tools

How Does it Work?
=================

The pattern is the same as for Xen Document Days:

  * Join us on IRC: freenode channel *#xentest*
  * Tell people what you intend to test
  * Make sure that a Xen release candidate works for you
  * Help others, get help!
  * And above all: have fun!

It is also OK to do some testing beforehand and join the Test Day if you 
get stuck and need some help. Looking forward to seeing you on IRC!

Best Regards
Lars


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 17:23:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 17:23:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sz9yq-00080Q-Lh; Wed, 08 Aug 2012 17:23:36 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Sz9yp-0007z5-Am
	for xen-devel@lists.xensource.com; Wed, 08 Aug 2012 17:23:35 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1344446609!2798361!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg1NTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11118 invoked from network); 8 Aug 2012 17:23:29 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Aug 2012 17:23:29 -0000
X-IronPort-AV: E=Sophos;i="4.77,733,1336348800"; d="scan'208";a="13915138"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Aug 2012 17:23:14 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 8 Aug 2012 18:23:14 +0100
Date: Wed, 8 Aug 2012 18:22:50 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <20120807183306.GX15053@phenom.dumpdata.com>
Message-ID: <alpine.DEB.2.02.1208081821380.21096@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
	<1344263246-28036-21-git-send-email-stefano.stabellini@eu.citrix.com>
	<20120807183306.GX15053@phenom.dumpdata.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linaro-dev@lists.linaro.org" <linaro-dev@lists.linaro.org>,
	Ian Campbell <Ian.Campbell@citrix.com>, "arnd@arndb.de" <arnd@arndb.de>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"catalin.marinas@arm.com" <catalin.marinas@arm.com>,
	"Tim \(Xen.org\)" <tim@xen.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [PATCH v2 21/23] xen: update xen_add_to_physmap
	interface
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 7 Aug 2012, Konrad Rzeszutek Wilk wrote:
> On Mon, Aug 06, 2012 at 03:27:24PM +0100, Stefano Stabellini wrote:
> > Update struct xen_add_to_physmap to be in sync with Xen's version of the
> > structure.
> > The size field was introduced by:
> > 
> > changeset:   24164:707d27fe03e7
> > user:        Jean Guyader <jean.guyader@eu.citrix.com>
> > date:        Fri Nov 18 13:42:08 2011 +0000
> > summary:     mm: New XENMEM space, XENMAPSPACE_gmfn_range
> > 
> > According to the comment:
> > 
> > "This new field .size is located in the 16 bits padding between .domid
> > and .space in struct xen_add_to_physmap to stay compatible with older
> > versions."
> > 
> > Changes in v2:
> 
> Looks good. Let me take this into my tree to prep it for Mukesh's patches.

OK.
Beware that patch #23 is going to modify xen_add_to_physmap again to
replace .size with a union.


> > - remove erroneous comment in the commit message.
> > 
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > ---
> >  include/xen/interface/memory.h |    3 +++
> >  1 files changed, 3 insertions(+), 0 deletions(-)
> > 
> > diff --git a/include/xen/interface/memory.h b/include/xen/interface/memory.h
> > index b5c3098..b66d04c 100644
> > --- a/include/xen/interface/memory.h
> > +++ b/include/xen/interface/memory.h
> > @@ -163,6 +163,9 @@ struct xen_add_to_physmap {
> >      /* Which domain to change the mapping for. */
> >      domid_t domid;
> >  
> > +    /* Number of pages to go through for gmfn_range */
> > +    uint16_t    size;
> > +
> >      /* Source mapping space. */
> >  #define XENMAPSPACE_shared_info 0 /* shared info page */
> >  #define XENMAPSPACE_grant_table 1 /* grant table page */
> > -- 
> > 1.7.2.5
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> >  #define XENMAPSPACE_grant_table 1 /* grant table page */
> > -- 
> > 1.7.2.5
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 17:28:26 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 17:28:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzA3J-0008U9-Cc; Wed, 08 Aug 2012 17:28:13 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1SzA3H-0008U0-Su
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 17:28:12 +0000
Received: from [85.158.139.83:54769] by server-9.bemta-5.messagelabs.com id
	BD/4E-06631-AA1A2205; Wed, 08 Aug 2012 17:28:10 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-5.tower-182.messagelabs.com!1344446890!30968160!1
X-Originating-IP: [81.169.146.161]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MSA9PiAzOTEwMTM=\n,sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MSA9PiAzOTEwMTM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26441 invoked from network); 8 Aug 2012 17:28:10 -0000
Received: from mo-p00-ob.rzone.de (HELO mo-p00-ob.rzone.de) (81.169.146.161)
	by server-5.tower-182.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 8 Aug 2012 17:28:10 -0000
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+zrwiavkK6tmQaLfmztM8TOFGji0PEjyX
X-RZG-CLASS-ID: mo00
Received: from probook.site
	(dslb-088-065-080-112.pools.arcor-ip.net [88.65.80.112])
	by smtp.strato.de (josoe mo49) (RZmta 30.7 DYNA|AUTH)
	with (DHE-RSA-AES256-SHA encrypted) ESMTPA id Y04cd8o78HCtgN ;
	Wed, 8 Aug 2012 19:28:09 +0200 (CEST)
Received: by probook.site (Postfix, from userid 1000)
	id 65C6D18639; Wed,  8 Aug 2012 19:28:09 +0200 (CEST)
Date: Wed, 8 Aug 2012 19:28:09 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20120808172809.GA22206@aepfle.de>
References: <20120806173905.GA26336@aepfle.de>
	<1344318133.24794.16.camel@dagon.hellion.org.uk>
	<20120807152502.GA24503@aepfle.de>
	<1344353581.11339.105.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1344353581.11339.105.camel@zakaz.uk.xensource.com>
User-Agent: Mutt/1.5.21.rev5555 (2012-07-20)
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] vif backend configuration times out
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Aug 07, Ian Campbell wrote:

> On Tue, 2012-08-07 at 16:25 +0100, Olaf Hering wrote:
> > On Tue, Aug 07, Ian Campbell wrote:
> > 
> > > On Mon, 2012-08-06 at 18:39 +0100, Olaf Hering wrote:
> > > > With current xen-unstable 25733:353bc0801b11 the attached hvm.cfg does
> > > > not start anymore with a SLES11SP2 dom0 kernel, but it starts if I run a
> > > > 3.5 pvops dom0 kernel. I have no modifications other than the stubdom -j
> > > > patch.
> > > > 
> > > > The output from this command is attached:
> > > > xl -vvvv create -d -f /root/xenpaging/sles11sp2_full_xenpaging_local.cfg 2>&1 | tee xl-create-`uname -r`.txt &
> > > > 
> > > > Any ideas how to fix this timeout error?
> > > 
> > > The tools are waiting for the backend to move from state 1
> > > (XenbusStateInitialising) to state 2 (XenbusStateInitWait). A backend
> > > driver typically makes that transition at the end of its probe function
> > > -- what is the SLES11SP2 netback waiting for? Or is it failing to init,
> > > in which case perhaps there is an error node in XS?
> > 
> > I think there is a difference between the two kernels. The pvops kernel
> > goes into state 2 right away (I can't tell from repeated xenstore-ls runs
> > whether it was also in state 1 at some point).
> > The sles11 kernel remains in state 1.
> 
> What is it waiting for?

I have no idea; I'll have to browse the code and debug it.
A quick test with plain sles11sp2+xend and xm start -p shows that
/local/domain/0/backend/vif/1/0/state finally gets into state 2.

Looks like something to fix before 4.2.


> >  Did the expectations of libxl
> > change recently? xl create used to work not too long ago.
> 
> I don't think the expectation has changed but the implementation is
> probably more picky since Roger's hotplug patches.
> 
> > xm does not work either, so the change is most likely in the scripts.
> 
> If you are switching from xl to xm then you should either reboot or
> remove libxl/disable_udev in xenstore manually.
> 
> Other than that, not much has changed in the scripts either. Are you sure
> it isn't the kernel which has changed?

The kernel is ok.

Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 17:34:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 17:34:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzA8y-0000Kf-5X; Wed, 08 Aug 2012 17:34:04 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1SzA8w-0000Ka-TV
	for xen-devel@lists.xensource.com; Wed, 08 Aug 2012 17:34:03 +0000
Received: from [85.158.139.83:25755] by server-8.bemta-5.messagelabs.com id
	20/79-05939-A03A2205; Wed, 08 Aug 2012 17:34:02 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-15.tower-182.messagelabs.com!1344447240!30846368!1
X-Originating-IP: [63.239.67.9]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10429 invoked from network); 8 Aug 2012 17:34:01 -0000
Received: from emvm-gh1-uea08.nsa.gov (HELO nsa.gov) (63.239.67.9)
	by server-15.tower-182.messagelabs.com with SMTP;
	8 Aug 2012 17:34:01 -0000
X-TM-IMSS-Message-ID: <85f9f23c00051cf0@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov ([63.239.67.9])
	with ESMTP (TREND IMSS SMTP Service 7.1) id 85f9f23c00051cf0 ;
	Wed, 8 Aug 2012 13:34:01 -0400
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id q78HXtTp027388; 
	Wed, 8 Aug 2012 13:33:55 -0400
Message-ID: <5022A303.709@tycho.nsa.gov>
Date: Wed, 08 Aug 2012 13:33:55 -0400
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Organization: National Security Agency
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120717 Thunderbird/14.0
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
	<1344263246-28036-10-git-send-email-stefano.stabellini@eu.citrix.com>
	<20120807182157.GN15053@phenom.dumpdata.com>
	<50216228.7010407@tycho.nsa.gov>
	<alpine.DEB.2.02.1208081743430.21096@kaball.uk.xensource.com>
	<50229B70.3090507@tycho.nsa.gov>
	<alpine.DEB.2.02.1208081805520.21096@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1208081805520.21096@kaball.uk.xensource.com>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linaro-dev@lists.linaro.org" <linaro-dev@lists.linaro.org>,
	Ian Campbell <Ian.Campbell@citrix.com>, "arnd@arndb.de" <arnd@arndb.de>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"catalin.marinas@arm.com" <catalin.marinas@arm.com>,
	"Tim \(Xen.org\)" <tim@xen.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [PATCH v2 10/23] xen/arm: compile and run xenbus
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/08/2012 01:19 PM, Stefano Stabellini wrote:
> On Wed, 8 Aug 2012, Daniel De Graaf wrote:
>> On 08/08/2012 12:51 PM, Stefano Stabellini wrote:
>>> On Tue, 7 Aug 2012, Daniel De Graaf wrote:
>>>> On 08/07/2012 02:21 PM, Konrad Rzeszutek Wilk wrote:
>>>>> On Mon, Aug 06, 2012 at 03:27:13PM +0100, Stefano Stabellini wrote:
>>>>>> bind_evtchn_to_irqhandler can legitimately return 0 (irq 0): it is not
>>>>>> an error.
>>>>>>
>>>>>> If Linux is running as an HVM domain and is running as Dom0, use
>>>>>> xenstored_local_init to initialize the xenstore page and event channel.
>>>>>>
>>>>>> Changes in v2:
>>>>>>
>>>>>> - refactor xenbus_init.
>>>>>
>>>>> Thank you. Let's also CC our friend at NSA who has been doing some work
>>>>> in that area. Daniel, are you OK with this change - will it still make the
>>>>> PV initial domain work with the MiniOS XenBus driver?
>>>>>
>>>>> Thanks.
>>>>
>>>> That case will work, but what this will break is launching the initial domain
>>>> with a Xenstore stub domain already running (see below).
>>>>
>>>>>>
>>>>>> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
>>>>>> ---
>>>>>>  drivers/xen/xenbus/xenbus_comms.c |    2 +-
>>>>>>  drivers/xen/xenbus/xenbus_probe.c |   62 +++++++++++++++++++++++++-----------
>>>>>>  drivers/xen/xenbus/xenbus_xs.c    |    1 +
>>>>>>  3 files changed, 45 insertions(+), 20 deletions(-)
>>>>>>
>>>>>> diff --git a/drivers/xen/xenbus/xenbus_comms.c b/drivers/xen/xenbus/xenbus_comms.c
>>>>>> index 52fe7ad..c5aa55c 100644
>>>>>> --- a/drivers/xen/xenbus/xenbus_comms.c
>>>>>> +++ b/drivers/xen/xenbus/xenbus_comms.c
>>>>>> @@ -224,7 +224,7 @@ int xb_init_comms(void)
>>>>>>  		int err;
>>>>>>  		err = bind_evtchn_to_irqhandler(xen_store_evtchn, wake_waiting,
>>>>>>  						0, "xenbus", &xb_waitq);
>>>>>> -		if (err <= 0) {
>>>>>> +		if (err < 0) {
>>>>>>  			printk(KERN_ERR "XENBUS request irq failed %i\n", err);
>>>>>>  			return err;
>>>>>>  		}
>>>>>
>>>>>> diff --git a/drivers/xen/xenbus/xenbus_probe.c b/drivers/xen/xenbus/xenbus_probe.c
>>>>>> index b793723..a67ccc0 100644
>>>>>> --- a/drivers/xen/xenbus/xenbus_probe.c
>>>>>> +++ b/drivers/xen/xenbus/xenbus_probe.c
>>>>>> @@ -719,37 +719,61 @@ static int __init xenstored_local_init(void)
>>>>>>  	return err;
>>>>>>  }
>>>>>>  
>>>>>> +enum xenstore_init {
>>>>>> +	UNKNOWN,
>>>>>> +	PV,
>>>>>> +	HVM,
>>>>>> +	LOCAL,
>>>>>> +};
>>>>>>  static int __init xenbus_init(void)
>>>>>>  {
>>>>>>  	int err = 0;
>>>>>> +	enum xenstore_init usage = UNKNOWN;
>>>>>> +	uint64_t v = 0;
>>>>>>  
>>>>>>  	if (!xen_domain())
>>>>>>  		return -ENODEV;
>>>>>>  
>>>>>>  	xenbus_ring_ops_init();
>>>>>>  
>>>>>> -	if (xen_hvm_domain()) {
>>>>>> -		uint64_t v = 0;
>>>>>> -		err = hvm_get_parameter(HVM_PARAM_STORE_EVTCHN, &v);
>>>>>> -		if (err)
>>>>>> -			goto out_error;
>>>>>> -		xen_store_evtchn = (int)v;
>>>>>> -		err = hvm_get_parameter(HVM_PARAM_STORE_PFN, &v);
>>>>>> -		if (err)
>>>>>> -			goto out_error;
>>>>>> -		xen_store_mfn = (unsigned long)v;
>>>>>> -		xen_store_interface = ioremap(xen_store_mfn << PAGE_SHIFT, PAGE_SIZE);
>>>>>> -	} else {
>>>>>> -		xen_store_evtchn = xen_start_info->store_evtchn;
>>>>>> -		xen_store_mfn = xen_start_info->store_mfn;
>>>>>> -		if (xen_store_evtchn)
>>>>>> -			xenstored_ready = 1;
>>>>>> -		else {
>>>>>> +	if (xen_pv_domain())
>>>>>> +		usage = PV;
>>>>>> +	if (xen_hvm_domain())
>>>>>> +		usage = HVM;
>>>>
>>>> The above is correct for domUs, and is overridden for dom0s:
>>>>
>>>>>> +	if (xen_hvm_domain() && xen_initial_domain())
>>>>>> +		usage = LOCAL;
>>>>>> +	if (xen_pv_domain() && !xen_start_info->store_evtchn)
>>>>>> +		usage = LOCAL;
>>>>
>>>> Instead of these checks, I think it should just be:
>>>>
>>>> if (!xen_start_info->store_evtchn)
>>>> 	usage = LOCAL;
>>>>
>>>> Any domain started after xenstore will have store_evtchn set, so if you don't
>>>> have this set, you are either going to be running xenstore locally, or will
>>>> use the ioctl to change it later (and so should still set up everything as if
>>>> it will be running locally).
>>>
>>> That would be wrong for an HVM dom0 domain (at least on ARM), because
>>> we don't have a start_info page at all.
>>>
>>>
>>>>>> +	if (xen_pv_domain() && xen_start_info->store_evtchn)
>>>>>> +		xenstored_ready = 1;
>>>>
>>>> This part can now just be moved unconditionally into case PV.
>>>
>>> What about:
>>>
>>> if (xen_pv_domain())
>>>     usage = PV;
>>> if (xen_hvm_domain())
>>>     usage = HVM;
>>> if (!xen_store_evtchn)
>>>     usage = LOCAL;
>>>
>>> and moving xenstored_ready in case PV, like you suggested.
>>>
>>
>> That looks correct, but you'd need to split up the switch statement in
>> order to populate xen_store_evtchn before that last condition, which
>> ends up pretty much eliminating the usage variable.
> 
> Going back to what you wrote in the previous email, in what way does this
> patch break the case where an initial domain is started after a Xenstore
> stub domain?
> 
> Assuming that we are talking about a PV initial domain on x86, the
> following check
> 
> if (xen_pv_domain() && !xen_start_info->store_evtchn)
>     usage = LOCAL;
> 
> will return false (because store_evtchn is set), therefore usage will
> remain set to PV.
> And the check:
> 
> if (xen_pv_domain() && xen_start_info->store_evtchn)
> 	xenstored_ready = 1;
> 
> will return true so xenstored_ready is going to be set to 1.
> 

Right, the original patch didn't break anything with PV domains. The case
it doesn't handle is an HVM initial domain with an already-running
Xenstore domain; I think this applies both to ARM and hybrid/PVH on x86.
In that case, usage would be set to LOCAL instead of HVM.

As a side note: the value of xen_initial_domain() shouldn't be used to
determine whether xenstore is running locally or not.

-- 
Daniel De Graaf
National Security Agency

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 17:42:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 17:42:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzAHF-0000YZ-5X; Wed, 08 Aug 2012 17:42:37 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SzAHD-0000YS-78
	for xen-devel@lists.xensource.com; Wed, 08 Aug 2012 17:42:35 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1344447749!6203984!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg1NTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15964 invoked from network); 8 Aug 2012 17:42:29 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Aug 2012 17:42:29 -0000
X-IronPort-AV: E=Sophos;i="4.77,733,1336348800"; d="scan'208";a="13915528"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Aug 2012 17:42:29 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 8 Aug 2012 18:42:28 +0100
Date: Wed, 8 Aug 2012 18:42:05 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Daniel De Graaf <dgdegra@tycho.nsa.gov>
In-Reply-To: <5022A303.709@tycho.nsa.gov>
Message-ID: <alpine.DEB.2.02.1208081836050.21096@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
	<1344263246-28036-10-git-send-email-stefano.stabellini@eu.citrix.com>
	<20120807182157.GN15053@phenom.dumpdata.com>
	<50216228.7010407@tycho.nsa.gov>
	<alpine.DEB.2.02.1208081743430.21096@kaball.uk.xensource.com>
	<50229B70.3090507@tycho.nsa.gov>
	<alpine.DEB.2.02.1208081805520.21096@kaball.uk.xensource.com>
	<5022A303.709@tycho.nsa.gov>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linaro-dev@lists.linaro.org" <linaro-dev@lists.linaro.org>,
	Ian Campbell <Ian.Campbell@citrix.com>, "arnd@arndb.de" <arnd@arndb.de>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"catalin.marinas@arm.com" <catalin.marinas@arm.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"Tim \(Xen.org\)" <tim@xen.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [PATCH v2 10/23] xen/arm: compile and run xenbus
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 8 Aug 2012, Daniel De Graaf wrote:
> On 08/08/2012 01:19 PM, Stefano Stabellini wrote:
> > On Wed, 8 Aug 2012, Daniel De Graaf wrote:
> >> On 08/08/2012 12:51 PM, Stefano Stabellini wrote:
> >>> On Tue, 7 Aug 2012, Daniel De Graaf wrote:
> >>>> On 08/07/2012 02:21 PM, Konrad Rzeszutek Wilk wrote:
> >>>>> On Mon, Aug 06, 2012 at 03:27:13PM +0100, Stefano Stabellini wrote:
> >>>>>> bind_evtchn_to_irqhandler can legitimately return 0 (irq 0): it is not
> >>>>>> an error.
> >>>>>>
> >>>>>> If Linux is running as an HVM domain and is running as Dom0, use
> >>>>>> xenstored_local_init to initialize the xenstore page and event channel.
> >>>>>>
> >>>>>> Changes in v2:
> >>>>>>
> >>>>>> - refactor xenbus_init.
> >>>>>
> >>>>> Thank you. Let's also CC our friend at the NSA who has been doing some
> >>>>> work in that area. Daniel, are you OK with this change - will it still
> >>>>> work for a PV initial domain with the MiniOS XenBus driver?
> >>>>>
> >>>>> Thanks.
> >>>>
> >>>> That case will work, but what this will break is launching the initial domain
> >>>> with a Xenstore stub domain already running (see below).
> >>>>
> >>>>>>
> >>>>>> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> >>>>>> ---
> >>>>>>  drivers/xen/xenbus/xenbus_comms.c |    2 +-
> >>>>>>  drivers/xen/xenbus/xenbus_probe.c |   62 +++++++++++++++++++++++++-----------
> >>>>>>  drivers/xen/xenbus/xenbus_xs.c    |    1 +
> >>>>>>  3 files changed, 45 insertions(+), 20 deletions(-)
> >>>>>>
> >>>>>> diff --git a/drivers/xen/xenbus/xenbus_comms.c b/drivers/xen/xenbus/xenbus_comms.c
> >>>>>> index 52fe7ad..c5aa55c 100644
> >>>>>> --- a/drivers/xen/xenbus/xenbus_comms.c
> >>>>>> +++ b/drivers/xen/xenbus/xenbus_comms.c
> >>>>>> @@ -224,7 +224,7 @@ int xb_init_comms(void)
> >>>>>>  		int err;
> >>>>>>  		err = bind_evtchn_to_irqhandler(xen_store_evtchn, wake_waiting,
> >>>>>>  						0, "xenbus", &xb_waitq);
> >>>>>> -		if (err <= 0) {
> >>>>>> +		if (err < 0) {
> >>>>>>  			printk(KERN_ERR "XENBUS request irq failed %i\n", err);
> >>>>>>  			return err;
> >>>>>>  		}
> >>>>>
> >>>>>> diff --git a/drivers/xen/xenbus/xenbus_probe.c b/drivers/xen/xenbus/xenbus_probe.c
> >>>>>> index b793723..a67ccc0 100644
> >>>>>> --- a/drivers/xen/xenbus/xenbus_probe.c
> >>>>>> +++ b/drivers/xen/xenbus/xenbus_probe.c
> >>>>>> @@ -719,37 +719,61 @@ static int __init xenstored_local_init(void)
> >>>>>>  	return err;
> >>>>>>  }
> >>>>>>  
> >>>>>> +enum xenstore_init {
> >>>>>> +	UNKNOWN,
> >>>>>> +	PV,
> >>>>>> +	HVM,
> >>>>>> +	LOCAL,
> >>>>>> +};
> >>>>>>  static int __init xenbus_init(void)
> >>>>>>  {
> >>>>>>  	int err = 0;
> >>>>>> +	enum xenstore_init usage = UNKNOWN;
> >>>>>> +	uint64_t v = 0;
> >>>>>>  
> >>>>>>  	if (!xen_domain())
> >>>>>>  		return -ENODEV;
> >>>>>>  
> >>>>>>  	xenbus_ring_ops_init();
> >>>>>>  
> >>>>>> -	if (xen_hvm_domain()) {
> >>>>>> -		uint64_t v = 0;
> >>>>>> -		err = hvm_get_parameter(HVM_PARAM_STORE_EVTCHN, &v);
> >>>>>> -		if (err)
> >>>>>> -			goto out_error;
> >>>>>> -		xen_store_evtchn = (int)v;
> >>>>>> -		err = hvm_get_parameter(HVM_PARAM_STORE_PFN, &v);
> >>>>>> -		if (err)
> >>>>>> -			goto out_error;
> >>>>>> -		xen_store_mfn = (unsigned long)v;
> >>>>>> -		xen_store_interface = ioremap(xen_store_mfn << PAGE_SHIFT, PAGE_SIZE);
> >>>>>> -	} else {
> >>>>>> -		xen_store_evtchn = xen_start_info->store_evtchn;
> >>>>>> -		xen_store_mfn = xen_start_info->store_mfn;
> >>>>>> -		if (xen_store_evtchn)
> >>>>>> -			xenstored_ready = 1;
> >>>>>> -		else {
> >>>>>> +	if (xen_pv_domain())
> >>>>>> +		usage = PV;
> >>>>>> +	if (xen_hvm_domain())
> >>>>>> +		usage = HVM;
> >>>>
> >>>> The above is correct for domUs, and is overridden for dom0s:
> >>>>
> >>>>>> +	if (xen_hvm_domain() && xen_initial_domain())
> >>>>>> +		usage = LOCAL;
> >>>>>> +	if (xen_pv_domain() && !xen_start_info->store_evtchn)
> >>>>>> +		usage = LOCAL;
> >>>>
> >>>> Instead of these checks, I think it should just be:
> >>>>
> >>>> if (!xen_start_info->store_evtchn)
> >>>> 	usage = LOCAL;
> >>>>
> >>>> Any domain started after xenstore will have store_evtchn set, so if you don't
> >>>> have this set, you are either going to be running xenstore locally, or will
> >>>> use the ioctl to change it later (and so should still set up everything as if
> >>>> it will be running locally).
> >>>
> >>> That would be wrong for an HVM dom0 domain (at least on ARM), because
> >>> we don't have a start_info page at all.
> >>>
> >>>
> >>>>>> +	if (xen_pv_domain() && xen_start_info->store_evtchn)
> >>>>>> +		xenstored_ready = 1;
> >>>>
> >>>> This part can now just be moved unconditionally into case PV.
> >>>
> >>> What about:
> >>>
> >>> if (xen_pv_domain())
> >>>     usage = PV;
> >>> if (xen_hvm_domain())
> >>>     usage = HVM;
> >>> if (!xen_store_evtchn)
> >>>     usage = LOCAL;
> >>>
> >>> and moving xenstored_ready in case PV, like you suggested.
> >>>
> >>
> >> That looks correct, but you'd need to split up the switch statement in
> >> order to populate xen_store_evtchn before that last condition, which
> >> ends up pretty much eliminating the usage variable.
> > 
> > Going back to what you wrote in the previous email, in what way does
> > this patch break the case where an initial domain is started after a
> > Xenstore stub domain?
> > 
> > Assuming that we are talking about a PV initial domain on x86, the
> > following check
> > 
> > if (xen_pv_domain() && !xen_start_info->store_evtchn)
> >     usage = LOCAL;
> > 
> > will return false (because store_evtchn is set), therefore usage will
> > remain set to PV.
> > And the check:
> > 
> > if (xen_pv_domain() && xen_start_info->store_evtchn)
> > 	xenstored_ready = 1;
> > 
> > will return true so xenstored_ready is going to be set to 1.
> > 
> 
> Right, the original patch didn't break anything with PV domains. The case
> it doesn't handle is an HVM initial domain with an already-running
> Xenstore domain; I think this applies both to ARM and hybrid/PVH on x86.
> In that case, usage would be set to LOCAL instead of HVM.


Right. However, if I am not mistaken, there is no such thing as an HVM
dom0 on x86 right now, and hybrid/PVH is probably going to return true
for xen_pv_domain() and false for xen_hvm_domain().

In the ARM case, given that we don't have a start_info page, we would
need another way to figure out whether a xenstore stub domain is already
running, so I think we can just postpone solving that problem for now.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 18:06:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 18:06:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzAdm-0000qj-Gw; Wed, 08 Aug 2012 18:05:54 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SzAdk-0000qe-Bl
	for xen-devel@lists.xensource.com; Wed, 08 Aug 2012 18:05:52 +0000
Received: from [85.158.138.51:14837] by server-11.bemta-3.messagelabs.com id
	79/C2-10722-F7AA2205; Wed, 08 Aug 2012 18:05:51 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-10.tower-174.messagelabs.com!1344449150!27034067!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg1NTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20775 invoked from network); 8 Aug 2012 18:05:51 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-10.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Aug 2012 18:05:51 -0000
X-IronPort-AV: E=Sophos;i="4.77,734,1336348800"; d="scan'208";a="13915802"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	08 Aug 2012 18:05:50 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 8 Aug 2012 19:05:50 +0100
Date: Wed, 8 Aug 2012 19:05:26 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <20120807183036.GR15053@phenom.dumpdata.com>
Message-ID: <alpine.DEB.2.02.1208081826190.21096@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
	<1344263246-28036-15-git-send-email-stefano.stabellini@eu.citrix.com>
	<20120807183036.GR15053@phenom.dumpdata.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linaro-dev@lists.linaro.org" <linaro-dev@lists.linaro.org>,
	Ian Campbell <Ian.Campbell@citrix.com>, "arnd@arndb.de" <arnd@arndb.de>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"catalin.marinas@arm.com" <catalin.marinas@arm.com>,
	"Tim \(Xen.org\)" <tim@xen.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [PATCH v2 15/23] xen/arm: receive Xen events on ARM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 7 Aug 2012, Konrad Rzeszutek Wilk wrote:
> On Mon, Aug 06, 2012 at 03:27:18PM +0100, Stefano Stabellini wrote:
> > Compile events.c on ARM.
> > Parse, map and enable the IRQ to get event notifications from the device
> > tree (node "/xen").
> > 
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > ---
> >  arch/arm/include/asm/xen/events.h |   18 ++++++++++++++++++
> >  arch/arm/xen/enlighten.c          |   33 +++++++++++++++++++++++++++++++++
> >  arch/x86/xen/enlighten.c          |    1 +
> >  arch/x86/xen/irq.c                |    1 +
> >  arch/x86/xen/xen-ops.h            |    1 -
> >  drivers/xen/events.c              |   17 ++++++++++++++---
> >  include/xen/events.h              |    2 ++
> >  7 files changed, 69 insertions(+), 4 deletions(-)
> >  create mode 100644 arch/arm/include/asm/xen/events.h
> > 
> > diff --git a/arch/arm/include/asm/xen/events.h b/arch/arm/include/asm/xen/events.h
> > new file mode 100644
> > index 0000000..94b4e90
> > --- /dev/null
> > +++ b/arch/arm/include/asm/xen/events.h
> > @@ -0,0 +1,18 @@
> > +#ifndef _ASM_ARM_XEN_EVENTS_H
> > +#define _ASM_ARM_XEN_EVENTS_H
> > +
> > +#include <asm/ptrace.h>
> > +
> > +enum ipi_vector {
> > +	XEN_PLACEHOLDER_VECTOR,
> > +
> > +	/* Xen IPIs go here */
> > +	XEN_NR_IPIS,
> > +};
> > +
> > +static inline int xen_irqs_disabled(struct pt_regs *regs)
> > +{
> > +	return raw_irqs_disabled_flags(regs->ARM_cpsr);
> > +}
> > +
> > +#endif /* _ASM_ARM_XEN_EVENTS_H */
> > diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
> > index e5e92d5..87b17f0 100644
> > --- a/arch/arm/xen/enlighten.c
> > +++ b/arch/arm/xen/enlighten.c
> > @@ -1,4 +1,5 @@
> >  #include <xen/xen.h>
> > +#include <xen/events.h>
> >  #include <xen/grant_table.h>
> >  #include <xen/hvm.h>
> >  #include <xen/interface/xen.h>
> > @@ -9,6 +10,8 @@
> >  #include <xen/xenbus.h>
> >  #include <asm/xen/hypervisor.h>
> >  #include <asm/xen/hypercall.h>
> > +#include <linux/interrupt.h>
> > +#include <linux/irqreturn.h>
> >  #include <linux/module.h>
> >  #include <linux/of.h>
> >  #include <linux/of_irq.h>
> > @@ -33,6 +36,8 @@ EXPORT_SYMBOL_GPL(xen_have_vector_callback);
> >  int xen_platform_pci_unplug = XEN_UNPLUG_ALL;
> >  EXPORT_SYMBOL_GPL(xen_platform_pci_unplug);
> >  
> > +static __read_mostly int xen_events_irq = -1;
> > +
> 
> So this is global..
> >  int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
> >  			       unsigned long addr,
> >  			       unsigned long mfn, int nr,
> > @@ -66,6 +71,9 @@ static int __init xen_guest_init(void)
> >  	if (of_address_to_resource(node, GRANT_TABLE_PHYSADDR, &res))
> >  		return 0;
> >  	xen_hvm_resume_frames = res.start >> PAGE_SHIFT;
> > +	xen_events_irq = irq_of_parse_and_map(node, 0);
> > +	pr_info("Xen support found, events_irq=%d gnttab_frame_pfn=%lx\n",
> > +			xen_events_irq, xen_hvm_resume_frames);
> >  	xen_domain_type = XEN_HVM_DOMAIN;
> >  
> >  	xen_setup_features();
> > @@ -107,3 +115,28 @@ static int __init xen_guest_init(void)
> >  	return 0;
> >  }
> >  core_initcall(xen_guest_init);
> > +
> > +static irqreturn_t xen_arm_callback(int irq, void *arg)
> > +{
> > +	xen_hvm_evtchn_do_upcall();
> > +	return IRQ_HANDLED;
> > +}
> > +
> > +static int __init xen_init_events(void)
> > +{
> > +	if (!xen_domain() || xen_events_irq < 0)
> > +		return -ENODEV;
> > +
> > +	xen_init_IRQ();
> > +
> > +	if (request_percpu_irq(xen_events_irq, xen_arm_callback,
> > +			"events", xen_vcpu)) {
> 
> But here you are asking for it to be percpu? What if there are other
> interrupts on the _other_ CPUs that conflict with it?
> > +		pr_err("Error requesting IRQ %d\n", xen_events_irq);
> > +		return -EINVAL;
> > +	}
> > +
> > +	enable_percpu_irq(xen_events_irq, 0);
> 
> Uh, that is bold. One global to rule them all, eh? Should you make
> it at least:
> static DEFINE_PER_CPU(int, xen_events_irq);
> ?

That is an interesting observation.

Currently Xen uses a per-cpu interrupt (a PPI, in GIC terminology),
which makes sense so that we can receive event notifications on
multiple vcpus independently.
The irq range 16-31 is reserved for PPIs, and I am assuming that Xen
will be able to find a spare one, the same one, for all vcpus.
In fact, the third cell of the interrupt specifier in the DT (0xf08 in
my dts) contains the cpu mask, and it is currently set to 0xf (the
maximum).

Maybe I should just BUG_ON(xen_events_irq > 31 || xen_events_irq < 16)?

The versioning of the hypervisor node on the DT is going to help us make
any changes to the interface in the future.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> > +{
> > +	return raw_irqs_disabled_flags(regs->ARM_cpsr);
> > +}
> > +
> > +#endif /* _ASM_ARM_XEN_EVENTS_H */
> > diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
> > index e5e92d5..87b17f0 100644
> > --- a/arch/arm/xen/enlighten.c
> > +++ b/arch/arm/xen/enlighten.c
> > @@ -1,4 +1,5 @@
> >  #include <xen/xen.h>
> > +#include <xen/events.h>
> >  #include <xen/grant_table.h>
> >  #include <xen/hvm.h>
> >  #include <xen/interface/xen.h>
> > @@ -9,6 +10,8 @@
> >  #include <xen/xenbus.h>
> >  #include <asm/xen/hypervisor.h>
> >  #include <asm/xen/hypercall.h>
> > +#include <linux/interrupt.h>
> > +#include <linux/irqreturn.h>
> >  #include <linux/module.h>
> >  #include <linux/of.h>
> >  #include <linux/of_irq.h>
> > @@ -33,6 +36,8 @@ EXPORT_SYMBOL_GPL(xen_have_vector_callback);
> >  int xen_platform_pci_unplug = XEN_UNPLUG_ALL;
> >  EXPORT_SYMBOL_GPL(xen_platform_pci_unplug);
> >  
> > +static __read_mostly int xen_events_irq = -1;
> > +
> 
> So this is global..
> >  int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
> >  			       unsigned long addr,
> >  			       unsigned long mfn, int nr,
> > @@ -66,6 +71,9 @@ static int __init xen_guest_init(void)
> >  	if (of_address_to_resource(node, GRANT_TABLE_PHYSADDR, &res))
> >  		return 0;
> >  	xen_hvm_resume_frames = res.start >> PAGE_SHIFT;
> > +	xen_events_irq = irq_of_parse_and_map(node, 0);
> > +	pr_info("Xen support found, events_irq=%d gnttab_frame_pfn=%lx\n",
> > +			xen_events_irq, xen_hvm_resume_frames);
> >  	xen_domain_type = XEN_HVM_DOMAIN;
> >  
> >  	xen_setup_features();
> > @@ -107,3 +115,28 @@ static int __init xen_guest_init(void)
> >  	return 0;
> >  }
> >  core_initcall(xen_guest_init);
> > +
> > +static irqreturn_t xen_arm_callback(int irq, void *arg)
> > +{
> > +	xen_hvm_evtchn_do_upcall();
> > +	return IRQ_HANDLED;
> > +}
> > +
> > +static int __init xen_init_events(void)
> > +{
> > +	if (!xen_domain() || xen_events_irq < 0)
> > +		return -ENODEV;
> > +
> > +	xen_init_IRQ();
> > +
> > +	if (request_percpu_irq(xen_events_irq, xen_arm_callback,
> > +			"events", xen_vcpu)) {
> 
> But here you are asking for it to be percpu? What if there are other
> interrupts on the _other_ CPUs that conflict with it?
> > +		pr_err("Error requesting IRQ %d\n", xen_events_irq);
> > +		return -EINVAL;
> > +	}
> > +
> > +	enable_percpu_irq(xen_events_irq, 0);
> 
> Uh, that is bold. One global to rule them all, eh? Should you make
> it at least:
> static DEFINE_PER_CPU(int, xen_events_irq);
> ?

That is an interesting observation.

Currently Xen is using a per-cpu interrupt (a PPI, in GIC terminology),
which makes sense so that we can receive event notifications on multiple
vcpus independently.
The IRQ range 16-31 is reserved for PPIs, and I am assuming that Xen will
be able to find a spare one, the same one, for all vcpus.
In fact the third field of the interrupt specifier in the DT (0xf08 in my
dts) contains the cpu mask, and it is set to 0xf (the maximum) right now.

Maybe I should just BUG_ON(xen_events_irq > 31 || xen_events_irq < 16)?

The versioning of the hypervisor node on the DT is going to help us make
any changes to the interface in the future.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 19:18:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 19:18:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzBl7-0001Nf-3n; Wed, 08 Aug 2012 19:17:33 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <riel@redhat.com>) id 1SzBl5-0001Na-GM
	for xen-devel@lists.xensource.com; Wed, 08 Aug 2012 19:17:31 +0000
Received: from [85.158.138.51:36589] by server-5.bemta-3.messagelabs.com id
	13/83-27557-A4BB2205; Wed, 08 Aug 2012 19:17:30 +0000
X-Env-Sender: riel@redhat.com
X-Msg-Ref: server-5.tower-174.messagelabs.com!1344453449!31168685!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTUyNjI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8222 invoked from network); 8 Aug 2012 19:17:29 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-5.tower-174.messagelabs.com with SMTP;
	8 Aug 2012 19:17:29 -0000
Received: from int-mx02.intmail.prod.int.phx2.redhat.com
	(int-mx02.intmail.prod.int.phx2.redhat.com [10.5.11.12])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id q78JGZu4026576
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 8 Aug 2012 15:16:35 -0400
Received: from cuia.bos.redhat.com (cuia.bos.redhat.com [10.16.184.35])
	by int-mx02.intmail.prod.int.phx2.redhat.com (8.13.8/8.13.8) with ESMTP
	id q78JGW7C018871; Wed, 8 Aug 2012 15:16:32 -0400
Message-ID: <5022BAA9.7090604@redhat.com>
Date: Wed, 08 Aug 2012 15:14:49 -0400
From: Rik van Riel <riel@redhat.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:13.0) Gecko/20120605 Thunderbird/13.0
MIME-Version: 1.0
To: Mel Gorman <mgorman@suse.de>
References: <20120807085554.GF29814@suse.de>
In-Reply-To: <20120807085554.GF29814@suse.de>
X-Scanned-By: MIMEDefang 2.67 on 10.5.11.12
Cc: Xen-devel <xen-devel@lists.xensource.com>,
	Linux-Netdev <netdev@vger.kernel.org>, LKML <linux-kernel@vger.kernel.org>,
	Ian Campbell <Ian.Campbell@eu.citrix.com>, Linux-MM <linux-mm@kvack.org>,
	Konrad Rzeszutek Wilk <konrad@darnok.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	David Miller <davem@davemloft.net>
Subject: Re: [Xen-devel] [PATCH] netvm: check for page == NULL when
 propogating the skb->pfmemalloc flag
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/07/2012 04:55 AM, Mel Gorman wrote:
> Commit [c48a11c7: netvm: propagate page->pfmemalloc to skb] is responsible
> for the following bug triggered by a xen network driver
>
> [    1.908592] BUG: unable to handle kernel NULL pointer dereference at 0000000000000010
> [    1.908643] IP: [<ffffffffa0037750>] xennet_poll+0x980/0xec0 [xen_netfront]
> [    1.908703] PGD ea1df067 PUD e8ada067 PMD 0
> [    1.908774] Oops: 0000 [#1] SMP
> [    1.908797] Modules linked in: fbcon tileblit font radeon bitblit softcursor ttm drm_kms_helper crc32c_intel xen_blkfront xen_netfront xen_fbfront fb_sys_fops sysimgblt sysfillrect syscopyarea +xen_kbdfront xenfs xen_privcmd
> [    1.908938] CPU 0
> [    1.908950] Pid: 2165, comm: ip Not tainted 3.5.0upstream-08854-g444fa66 #1
> [    1.908983] RIP: e030:[<ffffffffa0037750>]  [<ffffffffa0037750>] xennet_poll+0x980/0xec0 [xen_netfront]
> [    1.909029] RSP: e02b:ffff8800ffc03db8  EFLAGS: 00010282
> [    1.909055] RAX: ffff8800ea010140 RBX: ffff8800f00e86c0 RCX: 000000000000009a
> [    1.909055] RDX: 0000000000000040 RSI: 000000000000005a RDI: ffff8800fa7dee80
> [    1.909055] RBP: ffff8800ffc03ee8 R08: ffff8800f00e86d8 R09: ffff8800ea010000
> [    1.909055] R10: dead000000200200 R11: dead000000100100 R12: ffff8800fa7dee80
> [    1.909055] R13: 000000000000005a R14: ffff8800fa7dee80 R15: 0000000000000200
> [    1.909055] FS:  00007fbafc188700(0000) GS:ffff8800ffc00000(0000) knlGS:0000000000000000
> [    1.909055] CS:  e033 DS: 0000 ES: 0000 CR0: 000000008005003b
> [    1.909055] CR2: 0000000000000010 CR3: 00000000ea108000 CR4: 0000000000002660
> [    1.909055] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> [    1.909055] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
> [    1.909055] Process ip (pid: 2165, threadinfo ffff8800ea0f2000, task ffff8800fa783040)
> [    1.909055] Stack:
> [    1.909055]  ffff8800e27e5040 ffff8800ffc03e88 ffff8800ffc03e68 ffff8800ffc03e48
> [    1.909055]  7fffffffffffffff ffff8800ffc03e00 ffff8800e27e5040 ffff8800f00e86d8
> [    1.909055]  ffff8800ffc03eb0 00000040ffffffff ffff8800f00e8000 00000000ffc03e30
> [    1.909055] Call Trace:
> [    1.909055]  <IRQ>
> [    1.909055]  [<ffffffff81066028>] ?  pvclock_clocksource_read+0x58/0xd0
> [    1.909055]  [<ffffffff81486352>] net_rx_action+0x112/0x240
> [    1.909055]  [<ffffffff8107f319>] __do_softirq+0xb9/0x190
> [    1.909055]  [<ffffffff815d8d7c>] call_softirq+0x1c/0x30
>
> The problem is that the xenfront driver is passing a NULL page to
> __skb_fill_page_desc() which was unexpected. This patch checks that
> there is a page before dereferencing.
>
> Reported-and-Tested-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Signed-off-by: Mel Gorman <mgorman@suse.de>

Acked-by: Rik van Riel <riel@redhat.com>



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 19:34:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 19:34:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzC1S-0001bd-LW; Wed, 08 Aug 2012 19:34:26 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jean.guyader@gmail.com>) id 1SzC1R-0001bY-Id
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 19:34:25 +0000
X-Env-Sender: jean.guyader@gmail.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1344454456!11487891!1
X-Originating-IP: [209.85.160.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19686 invoked from network); 8 Aug 2012 19:34:17 -0000
Received: from mail-pb0-f45.google.com (HELO mail-pb0-f45.google.com)
	(209.85.160.45)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Aug 2012 19:34:17 -0000
Received: by pbbrp12 with SMTP id rp12so1557029pbb.32
	for <xen-devel@lists.xen.org>; Wed, 08 Aug 2012 12:34:15 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type;
	bh=5J9nPKubqlLV1oaqnjNd5Yj97qDgbjQiwAZnUVLUs2U=;
	b=M9H7b1myr96sJOTB4zfnkPtukuZHP2huGaZnjNiJgg4BBAG2WImUCv/7UfiLyvLCIk
	J+LkUs0HmEIHqQN9IHluDcvjk6ano5ila1LSsbMQogsVwmP/GoDwjcwCbU/GroeH/Gx5
	XNcThDko0AcX5NCKvAeMCLN04g7SR4+BFXe61N9GDI+XQY6RvU7GNzyPssl7hMpyOsvQ
	Ii7aRan1ET4YH0+aliayBnxMENI8MDzJU3b5yTzMLdOaGuXGAjDSIoKK6HCmb05WX2si
	Ut/iJ1JIEMXJWDATVK6lqPHSTLQOBx366jqqp5gWsEZy0SKEB0yNmuZB8iFc8uLxm2Mm
	ELAQ==
Received: by 10.68.195.202 with SMTP id ig10mr38772830pbc.37.1344454455601;
	Wed, 08 Aug 2012 12:34:15 -0700 (PDT)
MIME-Version: 1.0
Received: by 10.68.55.166 with HTTP; Wed, 8 Aug 2012 12:33:55 -0700 (PDT)
In-Reply-To: <502275B5.9040500@cantab.net>
References: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
	<1344262325-26598-1-git-send-email-stefano.stabellini@eu.citrix.com>
	<501FFFB50200007800092FC7@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208061640270.4645@kaball.uk.xensource.com>
	<502004BE0200007800093028@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208071318430.4645@kaball.uk.xensource.com>
	<CAEBdQ921bVoRss3iuf1oLDrv9JBrhcvx=S16XsjrYWA2eg1Y8g@mail.gmail.com>
	<502131E202000078000933F9@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208071613400.4645@kaball.uk.xensource.com>
	<50222DFB0200007800093746@nat28.tlf.novell.com>
	<1344411923.11783.1.camel@dagon.hellion.org.uk>
	<5022445002000078000937B9@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208081048410.4645@kaball.uk.xensource.com>
	<CAEBdQ92wgXRs3xX87NE=Db-yF-VOJMLrOYh+6e=gkhPf+qAb7g@mail.gmail.com>
	<502275B5.9040500@cantab.net>
From: Jean Guyader <jean.guyader@gmail.com>
Date: Wed, 8 Aug 2012 20:33:55 +0100
Message-ID: <CAEBdQ93LpNXYQGtXOUOnQ0Z+01QmBRHfmBhz7d+oskii4UO7KQ@mail.gmail.com>
To: David Vrabel <dvrabel@cantab.net>
Cc: "Tim Deegan \(3P\)" <Tim.Deegan@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	"Jean Guyader \(3P\)" <jean.guyader@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>, Jan Beulich <JBeulich@suse.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH 1/5] xen: improve changes to
	xen_add_to_physmap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 8 August 2012 15:20, David Vrabel <dvrabel@cantab.net> wrote:
> On 08/08/2012 11:03, Jean Guyader wrote:
>>
>>
>> I have introduced the size field for XENMAPSPACE_gmfn_range, and that is
>> used by hvmloader when we want to relocate the memory because the PCI hole
>> needs to be bigger.
>
>
> Why do this so late? Rather than creating the domain with the correctly
> sized PCI hole?
>

The PCI hole isn't fixed in size, especially if you do passthrough. It's
not really easy to compute outside of the domain; you would have to ask
QEMU. I think it's fine how it's done today.

Jean

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 20:54:07 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 20:54:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzDG6-0003C9-RM; Wed, 08 Aug 2012 20:53:38 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andy@strugglers.net>) id 1SzDG5-0003C4-5U
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 20:53:37 +0000
Received: from [85.158.143.99:10340] by server-3.bemta-4.messagelabs.com id
	F6/7E-31486-0D1D2205; Wed, 08 Aug 2012 20:53:36 +0000
X-Env-Sender: andy@strugglers.net
X-Msg-Ref: server-10.tower-216.messagelabs.com!1344459215!24509995!1
X-Originating-IP: [85.119.80.223]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14512 invoked from network); 8 Aug 2012 20:53:35 -0000
Received: from bitfolk.com (HELO mail.bitfolk.com) (85.119.80.223)
	by server-10.tower-216.messagelabs.com with AES256-SHA encrypted SMTP;
	8 Aug 2012 20:53:35 -0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=bitfolk.com;
	s=alpha; 
	h=Subject:In-Reply-To:Content-Type:MIME-Version:References:Message-ID:To:From:Date;
	bh=sKoZoA4q0A3LBeKzhsFcDd327J7Fyga1sAOwah+Os+g=; 
	b=fzScXUzl7BoKQpHOa5i7uh4q8tdShWQQN3suESXMXt+Se0V1LciRzrbgFPRo3BHdmL7VcsMEz12SD24kGl513vLUKMX/eHNiZzBMKOgznyQ57YyV8LsIA1J61J5poyqa;
Received: from andy by mail.bitfolk.com with local (Exim 4.72)
	(envelope-from <andy@strugglers.net>) id 1SzDG3-0000cT-6v
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 20:53:35 +0000
Date: Wed, 8 Aug 2012 20:53:35 +0000
From: Andy Smith <andy@strugglers.net>
To: xen-devel@lists.xen.org
Message-ID: <20120808205335.GC11695@bitfolk.com>
References: <20120727161545.GR11695@bitfolk.com>
	<50165AAB0200007800091380@nat28.tlf.novell.com>
	<20120806035540.GC11695@bitfolk.com>
	<501F8F240200007800092C84@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <501F8F240200007800092C84@nat28.tlf.novell.com>
OpenPGP: id=BF15490B; url=http://strugglers.net/~andy/pubkey.asc
X-URL: http://strugglers.net/wiki/User:Andy
User-Agent: Mutt/1.5.20 (2009-06-14)
X-Virus-Scanner: Scanned by ClamAV on mail.bitfolk.com at Wed,
	08 Aug 2012 20:53:35 +0000
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: andy@strugglers.net
X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on
	spamd1.lon.bitfolk.com
X-Spam-Level: 
X-Spam-ASN: 
X-Spam-Status: No, score=-0.0 required=5.0 tests=NO_RELAYS shortcircuit=no
	autolearn=disabled version=3.3.1
X-Spam-Report: * -0.0 NO_RELAYS Informational: message was not relayed via SMTP
X-SA-Exim-Version: 4.2.1 (built Mon, 22 Mar 2010 06:51:10 +0000)
X-SA-Exim-Scanned: Yes (on mail.bitfolk.com)
Subject: Re: [Xen-devel] Failure to boot,
 Debian squeeze with 4.0.1 hypervisor, timer problems?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi,

On Mon, Aug 06, 2012 at 08:32:20AM +0100, Jan Beulich wrote:
> >>> On 06.08.12 at 05:55, Andy Smith <andy@strugglers.net> wrote:
> > Is there any point at this stage in trying to find out which commit
> > appears to have fixed the problem I was seeing?
> 
> That's a question you have to ask yourself (or the Debian folks
> if you want them to deliver a fixed package). From our
> (developers') perspective, knowing the problem is fixed is
> sufficient.

I bisected it down to changeset 21332 fixing the problem for me:

    http://xenbits.xen.org/hg/xen-4.0-testing.hg/rev/ab1fb1b8b569?revcount=480

And have reported it as Debian bug #684334:

    http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=684334

in case they should want to fix it for their soon-to-be oldstable
release.

Thanks for your help,
Andy

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 20:54:07 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 20:54:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzDG6-0003C9-RM; Wed, 08 Aug 2012 20:53:38 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andy@strugglers.net>) id 1SzDG5-0003C4-5U
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 20:53:37 +0000
Received: from [85.158.143.99:10340] by server-3.bemta-4.messagelabs.com id
	F6/7E-31486-0D1D2205; Wed, 08 Aug 2012 20:53:36 +0000
X-Env-Sender: andy@strugglers.net
X-Msg-Ref: server-10.tower-216.messagelabs.com!1344459215!24509995!1
X-Originating-IP: [85.119.80.223]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14512 invoked from network); 8 Aug 2012 20:53:35 -0000
Received: from bitfolk.com (HELO mail.bitfolk.com) (85.119.80.223)
	by server-10.tower-216.messagelabs.com with AES256-SHA encrypted SMTP;
	8 Aug 2012 20:53:35 -0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=bitfolk.com;
	s=alpha; 
	h=Subject:In-Reply-To:Content-Type:MIME-Version:References:Message-ID:To:From:Date;
	bh=sKoZoA4q0A3LBeKzhsFcDd327J7Fyga1sAOwah+Os+g=; 
	b=fzScXUzl7BoKQpHOa5i7uh4q8tdShWQQN3suESXMXt+Se0V1LciRzrbgFPRo3BHdmL7VcsMEz12SD24kGl513vLUKMX/eHNiZzBMKOgznyQ57YyV8LsIA1J61J5poyqa;
Received: from andy by mail.bitfolk.com with local (Exim 4.72)
	(envelope-from <andy@strugglers.net>) id 1SzDG3-0000cT-6v
	for xen-devel@lists.xen.org; Wed, 08 Aug 2012 20:53:35 +0000
Date: Wed, 8 Aug 2012 20:53:35 +0000
From: Andy Smith <andy@strugglers.net>
To: xen-devel@lists.xen.org
Message-ID: <20120808205335.GC11695@bitfolk.com>
References: <20120727161545.GR11695@bitfolk.com>
	<50165AAB0200007800091380@nat28.tlf.novell.com>
	<20120806035540.GC11695@bitfolk.com>
	<501F8F240200007800092C84@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <501F8F240200007800092C84@nat28.tlf.novell.com>
OpenPGP: id=BF15490B; url=http://strugglers.net/~andy/pubkey.asc
X-URL: http://strugglers.net/wiki/User:Andy
User-Agent: Mutt/1.5.20 (2009-06-14)
X-Virus-Scanner: Scanned by ClamAV on mail.bitfolk.com at Wed,
	08 Aug 2012 20:53:35 +0000
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: andy@strugglers.net
X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on
	spamd1.lon.bitfolk.com
X-Spam-Level: 
X-Spam-ASN: 
X-Spam-Status: No, score=-0.0 required=5.0 tests=NO_RELAYS shortcircuit=no
	autolearn=disabled version=3.3.1
X-Spam-Report: * -0.0 NO_RELAYS Informational: message was not relayed via SMTP
X-SA-Exim-Version: 4.2.1 (built Mon, 22 Mar 2010 06:51:10 +0000)
X-SA-Exim-Scanned: Yes (on mail.bitfolk.com)
Subject: Re: [Xen-devel] Failure to boot,
 Debian squeeze with 4.0.1 hypervisor, timer problems?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi,

On Mon, Aug 06, 2012 at 08:32:20AM +0100, Jan Beulich wrote:
> >>> On 06.08.12 at 05:55, Andy Smith <andy@strugglers.net> wrote:
> > Is there any point at this stage in trying to find out which commit
> > appears to have fixed the problem I was seeing?
> 
> That's a question you have to ask yourself (or the Debian folks
> if you want them to deliver a fixed package). From our
> (developers') perspective knowing the problem is fixed is
> sufficient.

I bisected it down to changeset 21332 fixing the problem for me:

    http://xenbits.xen.org/hg/xen-4.0-testing.hg/rev/ab1fb1b8b569?revcount=480

And have reported it as Debian bug #684334:

    http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=684334

in case they should want to fix it for their soon-to-be oldstable
release.

Thanks for your help,
Andy

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 08 22:51:22 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Aug 2012 22:51:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzF5c-0004cw-5N; Wed, 08 Aug 2012 22:50:56 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <davem@davemloft.net>) id 1SzF5a-0004cr-Ot
	for xen-devel@lists.xensource.com; Wed, 08 Aug 2012 22:50:54 +0000
X-Env-Sender: davem@davemloft.net
X-Msg-Ref: server-16.tower-27.messagelabs.com!1344466248!9272119!1
X-Originating-IP: [149.20.54.216]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7453 invoked from network); 8 Aug 2012 22:50:48 -0000
Received: from shards.monkeyblade.net (HELO shards.monkeyblade.net)
	(149.20.54.216) by server-16.tower-27.messagelabs.com with SMTP;
	8 Aug 2012 22:50:48 -0000
Received: from localhost (74-93-104-98-Washington.hfc.comcastbusiness.net
	[74.93.104.98])
	by shards.monkeyblade.net (Postfix) with ESMTPSA id 6B25E584F81;
	Wed,  8 Aug 2012 15:50:49 -0700 (PDT)
Date: Wed, 08 Aug 2012 15:50:46 -0700 (PDT)
Message-Id: <20120808.155046.820543563969484712.davem@davemloft.net>
To: mgorman@suse.de
From: David Miller <davem@davemloft.net>
In-Reply-To: <20120807085554.GF29814@suse.de>
References: <20120807085554.GF29814@suse.de>
X-Mailer: Mew version 6.5 on Emacs 24.1 / Mule 6.0 (HANACHIRUSATO)
Mime-Version: 1.0
Cc: xen-devel@lists.xensource.com, netdev@vger.kernel.org,
	linux-kernel@vger.kernel.org, Ian.Campbell@eu.citrix.com,
	linux-mm@kvack.org, konrad@darnok.org, akpm@linux-foundation.org
Subject: Re: [Xen-devel] [PATCH] netvm: check for page == NULL when
 propagating the skb->pfmemalloc flag
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Mel Gorman <mgorman@suse.de>
Date: Tue, 7 Aug 2012 09:55:55 +0100

> Commit [c48a11c7: netvm: propagate page->pfmemalloc to skb] is responsible
> for the following bug triggered by a xen network driver
 ...
> The problem is that the xenfront driver is passing a NULL page to
> __skb_fill_page_desc() which was unexpected. This patch checks that
> there is a page before dereferencing.
> 
> Reported-and-Tested-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Signed-off-by: Mel Gorman <mgorman@suse.de>
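
The guard the quoted patch describes (propagate `pfmemalloc` only when a page was actually supplied) can be sketched as a self-contained toy model; `toy_page` and `toy_skb` are simplified stand-ins for the kernel's `struct page` and `struct sk_buff`, not the real definitions:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Toy model of the fix quoted above.  These structs are simplified
 * stand-ins for the kernel's struct page / struct sk_buff. */
struct toy_page { bool pfmemalloc; };
struct toy_skb  { bool pfmemalloc; };

static void toy_fill_page_desc(struct toy_skb *skb, struct toy_page *page)
{
    /* The pre-patch code dereferenced `page` unconditionally and so
     * crashed when xen-netfront passed NULL; the guard avoids that. */
    if (page && page->pfmemalloc)
        skb->pfmemalloc = true;
}
```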

That call to __skb_fill_page_desc() in xen-netfront.c looks completely bogus.
It's the only driver passing NULL here.

That whole song and dance figuring out what to do with the head
fragment page, depending upon whether the length is greater than the
RX_COPY_THRESHOLD, is completely unnecessary.

Just use something like a call to __pskb_pull_tail(skb, len) and all
that other crap around that area can simply be deleted.
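
The suggested simplification can likewise be modelled in miniature. The sketch below is a toy stand-in for `__pskb_pull_tail()` (fixed-size buffers, a single fragment), assuming only the behaviour named above: move `len` bytes of fragment data into the linear area, which is what makes the head-fragment special-casing unnecessary. It is not the kernel's real implementation:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Toy stand-in for __pskb_pull_tail(): pull up to `len` bytes from the
 * first fragment into the linear ("head") area.  Fixed-size buffers and
 * a single fragment keep the model self-contained. */
struct toy_skb {
    unsigned char linear[64];   /* linear data area */
    size_t        linear_len;
    unsigned char frag[64];     /* first fragment's data */
    size_t        frag_len;
};

static void toy_pskb_pull_tail(struct toy_skb *skb, size_t len)
{
    size_t n = len < skb->frag_len ? len : skb->frag_len;

    memcpy(skb->linear + skb->linear_len, skb->frag, n);
    skb->linear_len += n;

    /* Shift the remaining fragment data down. */
    memmove(skb->frag, skb->frag + n, skb->frag_len - n);
    skb->frag_len -= n;
}
```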

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 09 00:44:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 00:44:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzGrU-00064V-Al; Thu, 09 Aug 2012 00:44:28 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SzGrS-00064Q-Fj
	for xen-devel@lists.xensource.com; Thu, 09 Aug 2012 00:44:26 +0000
Received: from [85.158.143.99:52020] by server-2.bemta-4.messagelabs.com id
	2E/D6-19021-9E703205; Thu, 09 Aug 2012 00:44:25 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1344473065!30345126!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg2MTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19013 invoked from network); 9 Aug 2012 00:44:25 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-3.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Aug 2012 00:44:25 -0000
X-IronPort-AV: E=Sophos;i="4.77,735,1336348800"; d="scan'208";a="13920120"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	09 Aug 2012 00:44:24 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 9 Aug 2012 01:44:24 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1SzGrP-0006yX-Sq;
	Thu, 09 Aug 2012 00:44:23 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1SzGrP-0002kH-Ad;
	Thu, 09 Aug 2012 01:44:23 +0100
To: xen-devel@lists.xensource.com
Message-ID: <osstest-13573-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 9 Aug 2012 01:44:23 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13573: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13573 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13573/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-i386-i386-xl-win        12 guest-localmigrate/x10    fail REGR. vs. 13572

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13572
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13572
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13572
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13572

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  d25406e25af4
baseline version:
 xen                  472fc515a463

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   25743:d25406e25af4
tag:         tip
user:        Keir Fraser <keir@xen.org>
date:        Wed Aug 08 18:02:57 2012 +0100
    
    Update Xen version to 4.2.0-rc3-pre
    
    
changeset:   25742:24ac8a177376
user:        Keir Fraser <keir@xen.org>
date:        Wed Aug 08 18:02:35 2012 +0100
    
    Added signature for changeset f4c47bcc01e1
    
    
changeset:   25741:9bb05a2360a8
user:        Keir Fraser <keir@xen.org>
date:        Wed Aug 08 18:02:27 2012 +0100
    
    Added tag 4.2.0-rc2 for changeset f4c47bcc01e1
    
    
changeset:   25740:f4c47bcc01e1
tag:         4.2.0-rc2
user:        Keir Fraser <keir@xen.org>
date:        Wed Aug 08 18:02:21 2012 +0100
    
    Update Xen version to 4.2.0-rc2
    
    
changeset:   25739:472fc515a463
user:        Ian Jackson <Ian.Jackson@eu.citrix.com>
date:        Tue Aug 07 18:37:31 2012 +0100
    
    QEMU_TAG update
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 09 06:50:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 06:50:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzMYt-0004dD-Oy; Thu, 09 Aug 2012 06:49:39 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <chao.zhou@intel.com>) id 1SzMYt-0004d8-7C
	for xen-devel@lists.xen.org; Thu, 09 Aug 2012 06:49:39 +0000
Received: from [85.158.138.51:30612] by server-1.bemta-3.messagelabs.com id
	B8/70-29224-28D53205; Thu, 09 Aug 2012 06:49:38 +0000
X-Env-Sender: chao.zhou@intel.com
X-Msg-Ref: server-4.tower-174.messagelabs.com!1344494977!27314389!1
X-Originating-IP: [192.55.52.88]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjg4ID0+IDMwNzczNg==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8302 invoked from network); 9 Aug 2012 06:49:37 -0000
Received: from mga01.intel.com (HELO mga01.intel.com) (192.55.52.88)
	by server-4.tower-174.messagelabs.com with SMTP;
	9 Aug 2012 06:49:37 -0000
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
	by fmsmga101.fm.intel.com with ESMTP; 08 Aug 2012 23:49:36 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.77,738,1336374000"; d="scan'208";a="205331656"
Received: from fmsmsx104.amr.corp.intel.com ([10.19.9.35])
	by fmsmga002.fm.intel.com with ESMTP; 08 Aug 2012 23:49:23 -0700
Received: from fmsmsx153.amr.corp.intel.com (10.19.17.7) by
	FMSMSX104.amr.corp.intel.com (10.19.9.35) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Wed, 8 Aug 2012 23:49:23 -0700
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	FMSMSX153.amr.corp.intel.com (10.19.17.7) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Wed, 8 Aug 2012 23:49:22 -0700
Received: from shsmsx102.ccr.corp.intel.com ([169.254.2.196]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.232]) with mapi id
	14.01.0355.002; Thu, 9 Aug 2012 14:49:21 +0800
From: "Zhou, Chao" <chao.zhou@intel.com>
To: Ian Campbell <Ian.Campbell@citrix.com>, Stefano Stabellini
	<Stefano.Stabellini@eu.citrix.com>
Thread-Topic: [Xen-devel] error when pass through device to guest with
	qemu-xen-dir-remote
Thread-Index: AQHNcWQHo7KALelyBkWancTZpqx1/5dREAmQ
Date: Thu, 9 Aug 2012 06:49:21 +0000
Message-ID: <40352EBA8B4DF841A9907B883F22B59B0FDBCE1E@SHSMSX102.ccr.corp.intel.com>
References: <A9667DDFB95DB7438FA9D7D576C3D87E203D54@SHSMSX101.ccr.corp.intel.com>
	<alpine.DEB.2.02.1208031122170.4645@kaball.uk.xensource.com>
	<1343990187.21372.48.camel@zakaz.uk.xensource.com>
In-Reply-To: <1343990187.21372.48.camel@zakaz.uk.xensource.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "Zhang, Yang Z" <yang.z.zhang@intel.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] error when pass through device to guest
	with	qemu-xen-dir-remote
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I rebuilt the upstream QEMU according to the wiki, but static device assignment doesn't work: there is no lspci output in the guest. However, hotplug and unplug work fine.

-----Original Message-----
From: xen-devel-bounces@lists.xen.org [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Ian Campbell
Sent: Friday, August 03, 2012 6:36 PM
To: Stefano Stabellini
Cc: Zhang, Yang Z; Anthony Perard; xen-devel
Subject: Re: [Xen-devel] error when pass through device to guest with qemu-xen-dir-remote

On Fri, 2012-08-03 at 11:29 +0100, Stefano Stabellini wrote:
> On Fri, 3 Aug 2012, Zhang, Yang Z wrote:
> > When creating a guest with a device assigned, it shows this error and the device doesn't work inside the guest:
> > libxl: error: libxl_qmp.c:288:qmp_handle_error_response: received an 
> > error message from QMP server: Parameter 'driver' expects a driver 
> > name
> > 
> > It only fails with qemu-xen-dir-remote (is this tree closer to upstream qemu?). I don't see the error with the traditional QEMU.
> > I also tried qemu-upstream, but it fails when I try to enable PCI pass-through for Xen. I think Anthony's patch adding PCI pass-through support for Xen has been accepted by qemu-upstream, am I right?
> 
> Yes, it was accepted, but it is present only in upstream QEMU (from 
> git://git.qemu.org/qemu.git), not the tree we are currently using in 
> xen-unstable for development 
> (git://xenbits.xensource.com/qemu-upstream-unstable.git).
> Make sure you are using the right tree!
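As an aside, the QMP error above ("Parameter 'driver' expects a driver name") is what QEMU reports when libxl asks for a device model the binary was not built with. A quick way to check whether a given qemu binary has the Xen passthrough device compiled in; the device name, xen-pci-passthrough, is an assumption based on Anthony's upstream patches rather than something quoted in this thread:

```shell
# List the devices this qemu binary knows about and look for the Xen
# PCI passthrough device (name assumed: xen-pci-passthrough).
./i386-softmmu/qemu-system-i386 -device help 2>&1 | grep -i xen
```

If nothing matches, the binary was built from a tree without the passthrough patches (or without Xen support enabled).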

http://wiki.xen.org/wiki/QEMU_Upstream has some notes on how to use the upstream qemu tree instead of our stable branch of upstream.
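For reference, a sketch of pointing a xen-unstable tools build at upstream qemu instead of the stable branch; the Config.mk variable names (QEMU_UPSTREAM_URL, QEMU_UPSTREAM_REVISION) are assumptions based on that wiki page, so check the Config.mk in your checkout:

```shell
# Build the Xen tools against upstream QEMU rather than
# qemu-upstream-unstable.git (variable names assumed; see Config.mk).
cd xen-unstable
make tools \
    QEMU_UPSTREAM_URL=git://git.qemu.org/qemu.git \
    QEMU_UPSTREAM_REVISION=master
```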

> 
> Anthony is currently on vacation and is going to be back in about a 
> week.
> 
> > Another question:
> > Now I am trying to add some features (related to device pass-through) to QEMU; which tree should I use? Since traditional qemu differs greatly from qemu-upstream, it is too old to develop patches against. But apart from the old one, I cannot find a working qemu.
> 
> You should use upstream QEMU; I am going to rebase our tree on it early
> in the 4.3 release cycle.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Thu Aug 09 06:59:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 06:59:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzMi8-0004mT-QI; Thu, 09 Aug 2012 06:59:12 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <wangzhenguo@huawei.com>) id 1SzMi7-0004mL-22
	for xen-devel@lists.xen.org; Thu, 09 Aug 2012 06:59:11 +0000
X-Env-Sender: wangzhenguo@huawei.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1344495510!9316022!1
X-Originating-IP: [119.145.14.64]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTE5LjE0NS4xNC42NCA9PiA0MTQ4Mg==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18592 invoked from network); 9 Aug 2012 06:58:33 -0000
Received: from szxga01-in.huawei.com (HELO szxga01-in.huawei.com)
	(119.145.14.64) by server-16.tower-27.messagelabs.com with SMTP;
	9 Aug 2012 06:58:33 -0000
Received: from 172.24.2.119 (EHLO szxeml209-edg.china.huawei.com)
	([172.24.2.119])
	by szxrg01-dlp.huawei.com (MOS 4.3.4-GA FastPath queued)
	with ESMTP id AMX71722; Thu, 09 Aug 2012 14:58:22 +0800 (CST)
Received: from SZXEML418-HUB.china.huawei.com (10.82.67.157) by
	szxeml209-edg.china.huawei.com (172.24.2.184) with Microsoft SMTP
	Server (TLS) id 14.1.323.3; Thu, 9 Aug 2012 14:57:08 +0800
Received: from SZXEML528-MBX.china.huawei.com ([169.254.4.120]) by
	szxeml418-hub.china.huawei.com ([10.82.67.157]) with mapi id
	14.01.0323.003; Thu, 9 Aug 2012 14:57:00 +0800
From: Wangzhenguo <wangzhenguo@huawei.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Thread-Topic: [Xen-devel] The hypercall will fail and return EFAULT when the
	page becomes COW by forking process in linux
Thread-Index: Ac1YHX79yLSELb+4TLqeIKwBXZDIFf//q9kA//3DDTCAJQongP/9/iEAgAOV5ICADfI9AP/6ldlwAUsPRwD//lly8P/8/06A//lxU9D/80ShAP/lChpQ/8o++QD/k88DUP8oGQ+A/k5x8JA=
Date: Thu, 9 Aug 2012 06:56:59 +0000
Message-ID: <B44CA5218606DC4FA941D19CCEB27B532CF76C40@szxeml528-mbx.china.huawei.com>
References: <B44CA5218606DC4FA941D19CCEB27B5323BEF12C@szxeml528-mbs.china.huawei.com>
	<1341221926.4625.12.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B5323BEF99A@szxeml528-mbs.china.huawei.com>
	<1343135163.18971.15.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B532CF769F2@szxeml528-mbx.china.huawei.com>
	<1343221925.18971.93.camel@zakaz.uk.xensource.com>
	<1343988628.21372.46.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B532CF76ACE@szxeml528-mbx.china.huawei.com>
	<1344259710.11339.39.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B532CF76B32@szxeml528-mbx.china.huawei.com>
	<1344334043.11339.85.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B532CF76B4F@szxeml528-mbx.china.huawei.com>
	<1344343342.11339.96.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B532CF76B9E@szxeml528-mbx.china.huawei.com>
	<1344416440.32142.7.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B532CF76BEE@szxeml528-mbx.china.huawei.com>
	<1344427585.32142.34.camel@zakaz.uk.xensource.com>
In-Reply-To: <1344427585.32142.34.camel@zakaz.uk.xensource.com>
Accept-Language: zh-CN, en-US
Content-Language: zh-CN
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.135.65.30]
MIME-Version: 1.0
X-CFilter-Loop: Reflected
Cc: Yangxiaowei <xiaowei.yang@huawei.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Hongtao <bobby.hong@huawei.com>, Yechuan <yechuan@huawei.com>
Subject: Re: [Xen-devel] The hypercall will fail and return EFAULT when the
 page becomes COW by forking process in linux
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> From: Ian Campbell [mailto:Ian.Campbell@citrix.com]
> Sent: Wednesday, August 08, 2012 8:06 PM
> To: Wangzhenguo
> Cc: Yangxiaowei; Yechuan; Hongtao; xen-devel@lists.xen.org; Ian Jackson
> Subject: Re: [Xen-devel] The hypercall will fail and return EFAULT when
> the page becomes COW by forking process in linux
> 
> > Note:
> > Chlid process do not use xc interface handle which is opend by parent
> process, it should open
> 
> "Child" and "opened" (you made these typos pretty consistently
> throughout ;-))
> 
> Please can you try and keep the commit message to 75-80 characters wide.

Hi Ian, thanks for your reply. Here is the new patch, as follows:

# HG changeset patch
# Parent a5dfd924fcdb173a154dad9f37073c1de1302065
libxc: Set the VM_DONTCOPY flag on the hypercall buffer's VMA, to avoid the
       buffer becoming COW during a hypercall.

In a multi-threaded, multi-process environment (e.g. a process with two
threads, where thread A makes a hypercall while thread B calls fork() to
create a child process), all pages of the process, including the hypercall
buffers, become COW after the fork. The hypercall then hits a write
protection fault and returns EFAULT when the hypervisor calls copy_to_user
in thread A's context.

Fix:
1. Before the hypercall: use the MADV_DONTFORK madvise syscall so that the
   hypercall buffer is not copied to the child process on fork.
2. After the hypercall: undo the effect of MADV_DONTFORK on the hypercall
   buffer with MADV_DOFORK.
3. Use mmap/munmap instead of malloc/free for allocation and freeing, to
   bypass libc.

Note:
Child processes must not use an xc_{interface,evtchn,gnttab,gntshr} handle
inherited from the parent; they should open their own handle if they want to
interact with xc. Otherwise, accessing the hypercall buffer caches of the
handle may cause a segmentation fault.

Signed-off-by: Zhenguo Wang <wangzhenguo@huawei.com>
Signed-off-by: Xiaowei Yang <xiaowei.yang@huawei.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>

diff -r a5dfd924fcdb tools/libxc/xc_linux_osdep.c
--- a/tools/libxc/xc_linux_osdep.c	Tue Aug 07 13:52:10 2012 +0800
+++ b/tools/libxc/xc_linux_osdep.c	Thu Aug 09 14:42:29 2012 +0800
@@ -93,22 +93,21 @@ static void *linux_privcmd_alloc_hyperca
     size_t size = npages * XC_PAGE_SIZE;
     void *p;
 
-    p = xc_memalign(xch, XC_PAGE_SIZE, size);
-    if (!p)
-        return NULL;
+    /* Address returned by mmap is page aligned. */
+    p = mmap(NULL, size, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_LOCKED, -1, 0);
 
-    if ( mlock(p, size) < 0 )
-    {
-        free(p);
-        return NULL;
-    }
+    /* Do not copy the VMA to the child process on fork; this avoids the
+       pages becoming COW during a hypercall. */
+    madvise(p, size, MADV_DONTFORK);
     return p;
 }
 
 static void linux_privcmd_free_hypercall_buffer(xc_interface *xch, xc_osdep_handle h, void *ptr, int npages)
 {
-    munlock(ptr, npages * XC_PAGE_SIZE);
-    free(ptr);
+    /* Restore the VMA flags; probably unnecessary just before unmapping. */
+    madvise(ptr, npages * XC_PAGE_SIZE, MADV_DOFORK);
+
+    munmap(ptr, npages * XC_PAGE_SIZE);
 }
 
 static int linux_privcmd_hypercall(xc_interface *xch, xc_osdep_handle h, privcmd_hypercall_t *hypercall)
diff -r a5dfd924fcdb tools/libxc/xenctrl.h
--- a/tools/libxc/xenctrl.h	Tue Aug 07 13:52:10 2012 +0800
+++ b/tools/libxc/xenctrl.h	Thu Aug 09 14:42:29 2012 +0800
@@ -134,6 +134,12 @@ typedef enum xc_error_code xc_error_code
  * be called multiple times within a single process.  Multiple processes can
  * have an open hypervisor interface at the same time.
  *
+ * Note:
+ * Child processes must not use an xc interface handle inherited from the
+ * parent; they should open their own handle if they want to interact with
+ * xc. Otherwise, accessing the hypercall buffer caches of the handle may
+ * cause a segmentation fault.
+ *
  * Each call to this function should have a corresponding call to
  * xc_interface_close().
  *
@@ -908,6 +914,12 @@ int xc_evtchn_status(xc_interface *xch, 
  * Return a handle to the event channel driver, or -1 on failure, in which case
  * errno will be set appropriately.
  *
+ * Note:
+ * Child processes must not use an xc evtchn handle inherited from the
+ * parent; they should open their own handle if they want to interact with
+ * xc. Otherwise, accessing the hypercall buffer caches of the handle may
+ * cause a segmentation fault.
+ *
  * Before Xen pre-4.1 this function would sometimes report errors with perror.
  */
 xc_evtchn *xc_evtchn_open(xentoollog_logger *logger,
@@ -1339,9 +1351,13 @@ int xc_domain_subscribe_for_suspend(
 
 /*
  * These functions sometimes log messages as above, but not always.
- */
-
-/*
+ *
+ * Note:
+ * Child processes must not use an xc gnttab handle inherited from the
+ * parent; they should open their own handle if they want to interact with
+ * xc. Otherwise, accessing the hypercall buffer caches of the handle may
+ * cause a segmentation fault.
+ *
  * Return an fd onto the grant table driver.  Logs errors.
  */
 xc_gnttab *xc_gnttab_open(xentoollog_logger *logger,
@@ -1458,6 +1474,13 @@ grant_entry_v2_t *xc_gnttab_map_table_v2
 
 /*
  * Return an fd onto the grant sharing driver.  Logs errors.
+ *
+ * Note:
+ * Child processes must not use an xc gntshr handle inherited from the
+ * parent; they should open their own handle if they want to interact with
+ * xc. Otherwise, accessing the hypercall buffer caches of the handle may
+ * cause a segmentation fault.
+ *
  */
 xc_gntshr *xc_gntshr_open(xentoollog_logger *logger,
 			  unsigned open_flags);

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 09 06:59:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 06:59:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzMi8-0004mT-QI; Thu, 09 Aug 2012 06:59:12 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <wangzhenguo@huawei.com>) id 1SzMi7-0004mL-22
	for xen-devel@lists.xen.org; Thu, 09 Aug 2012 06:59:11 +0000
X-Env-Sender: wangzhenguo@huawei.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1344495510!9316022!1
X-Originating-IP: [119.145.14.64]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTE5LjE0NS4xNC42NCA9PiA0MTQ4Mg==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18592 invoked from network); 9 Aug 2012 06:58:33 -0000
Received: from szxga01-in.huawei.com (HELO szxga01-in.huawei.com)
	(119.145.14.64) by server-16.tower-27.messagelabs.com with SMTP;
	9 Aug 2012 06:58:33 -0000
Received: from 172.24.2.119 (EHLO szxeml209-edg.china.huawei.com)
	([172.24.2.119])
	by szxrg01-dlp.huawei.com (MOS 4.3.4-GA FastPath queued)
	with ESMTP id AMX71722; Thu, 09 Aug 2012 14:58:22 +0800 (CST)
Received: from SZXEML418-HUB.china.huawei.com (10.82.67.157) by
	szxeml209-edg.china.huawei.com (172.24.2.184) with Microsoft SMTP
	Server (TLS) id 14.1.323.3; Thu, 9 Aug 2012 14:57:08 +0800
Received: from SZXEML528-MBX.china.huawei.com ([169.254.4.120]) by
	szxeml418-hub.china.huawei.com ([10.82.67.157]) with mapi id
	14.01.0323.003; Thu, 9 Aug 2012 14:57:00 +0800
From: Wangzhenguo <wangzhenguo@huawei.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Thread-Topic: [Xen-devel] The hypercall will fail and return EFAULT when the
	page becomes COW by forking process in linux
Thread-Index: Ac1YHX79yLSELb+4TLqeIKwBXZDIFf//q9kA//3DDTCAJQongP/9/iEAgAOV5ICADfI9AP/6ldlwAUsPRwD//lly8P/8/06A//lxU9D/80ShAP/lChpQ/8o++QD/k88DUP8oGQ+A/k5x8JA=
Date: Thu, 9 Aug 2012 06:56:59 +0000
Message-ID: <B44CA5218606DC4FA941D19CCEB27B532CF76C40@szxeml528-mbx.china.huawei.com>
References: <B44CA5218606DC4FA941D19CCEB27B5323BEF12C@szxeml528-mbs.china.huawei.com>
	<1341221926.4625.12.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B5323BEF99A@szxeml528-mbs.china.huawei.com>
	<1343135163.18971.15.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B532CF769F2@szxeml528-mbx.china.huawei.com>
	<1343221925.18971.93.camel@zakaz.uk.xensource.com>
	<1343988628.21372.46.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B532CF76ACE@szxeml528-mbx.china.huawei.com>
	<1344259710.11339.39.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B532CF76B32@szxeml528-mbx.china.huawei.com>
	<1344334043.11339.85.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B532CF76B4F@szxeml528-mbx.china.huawei.com>
	<1344343342.11339.96.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B532CF76B9E@szxeml528-mbx.china.huawei.com>
	<1344416440.32142.7.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B532CF76BEE@szxeml528-mbx.china.huawei.com>
	<1344427585.32142.34.camel@zakaz.uk.xensource.com>
In-Reply-To: <1344427585.32142.34.camel@zakaz.uk.xensource.com>
Accept-Language: zh-CN, en-US
Content-Language: zh-CN
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.135.65.30]
MIME-Version: 1.0
X-CFilter-Loop: Reflected
Cc: Yangxiaowei <xiaowei.yang@huawei.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Hongtao <bobby.hong@huawei.com>, Yechuan <yechuan@huawei.com>
Subject: Re: [Xen-devel] The hypercall will fail and return EFAULT when the
 page becomes COW by forking process in linux
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> From: Ian Campbell [mailto:Ian.Campbell@citrix.com]
> Sent: Wednesday, August 08, 2012 8:06 PM
> To: Wangzhenguo
> Cc: Yangxiaowei; Yechuan; Hongtao; xen-devel@lists.xen.org; Ian Jackson
> Subject: Re: [Xen-devel] The hypercall will fail and return EFAULT when
> the page becomes COW by forking process in linux
> 
> > Note:
> > Chlid process do not use xc interface handle which is opend by parent
> process, it should open
> 
> "Child" and "opened" (you made these typos pretty consistently
> throughout ;-))
> 
> Please can you try and keep the commit message to 75-80 characters wide.

Hi, Ian, thanks your reply. There is a new patch as follow

# HG changeset patch
# Parent a5dfd924fcdb173a154dad9f37073c1de1302065
libxc: Add VM_DONTCOPY flag of the VMA of the hypercall buffer, to avoid the
       hypercall buffer becoming COW on hypercall.

In multi-threads and multi-processes environment, e.g. the process has two
threads, thread A may call hypercall, thread B may call fork() to create child
process. After forking, all pages of the process including hypercall buffers
are cow. It will cause a write protection and return EFAULT error if hypervisor
calls copy_to_user in hypercall in thread A context,

Fix:
1. Before hypercall: use MADV_DONTFORK of madvise syscall to make the hypercall
   buffer not to be copied to child process after fork.
2. After hypercall: undo the effect of MADV_DONTFORK for the hypercall buffer
   by using MADV_DOFORK of madvise syscall.
3. Use mmap/nunmap for memory alloc/free instead of malloc/free to bypass libc.

Note:
Child processes must not use the opened xc_{interface,evtchn,gnttab,gntshr}
handle that inherits from parents. They should reopen the handle if they want
to interact with xc. Otherwise, it may cause segment fault to access hypercall
buffer caches of the handle.

Signed-off-by: Zhenguo Wang <wangzhenguo@huawei.com>
Signed-off-by: Xiaowei Yang <xiaowei.yang@huawei.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>

diff -r a5dfd924fcdb tools/libxc/xc_linux_osdep.c
--- a/tools/libxc/xc_linux_osdep.c	Tue Aug 07 13:52:10 2012 +0800
+++ b/tools/libxc/xc_linux_osdep.c	Thu Aug 09 14:42:29 2012 +0800
@@ -93,22 +93,21 @@ static void *linux_privcmd_alloc_hyperca
     size_t size = npages * XC_PAGE_SIZE;
     void *p;
 
-    p = xc_memalign(xch, XC_PAGE_SIZE, size);
-    if (!p)
-        return NULL;
+    /* Address returned by mmap is page aligned. */
+    p = mmap(NULL, size, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_LOCKED, -1, 0);
 
-    if ( mlock(p, size) < 0 )
-    {
-        free(p);
-        return NULL;
-    }
+    /* Do not copy the VMA to child process on fork. Avoid the page being COW
+        on hypercall. */
+    madvise(ptr, npages * XC_PAGE_SIZE, MADV_DONTFORK);
     return p;
 }
 
 static void linux_privcmd_free_hypercall_buffer(xc_interface *xch, xc_osdep_handle h, void *ptr, int npages)
 {
-    munlock(ptr, npages * XC_PAGE_SIZE);
-    free(ptr);
+    /* Recover the VMA flags. Maybe it's not necessary */
+    madvise(ptr, npages * XC_PAGE_SIZE, MADV_DOFORK);
+    
+    munmap(ptr, npages * XC_PAGE_SIZE);
 }
 
 static int linux_privcmd_hypercall(xc_interface *xch, xc_osdep_handle h, privcmd_hypercall_t *hypercall)
diff -r a5dfd924fcdb tools/libxc/xenctrl.h
--- a/tools/libxc/xenctrl.h	Tue Aug 07 13:52:10 2012 +0800
+++ b/tools/libxc/xenctrl.h	Thu Aug 09 14:42:29 2012 +0800
@@ -134,6 +134,12 @@ typedef enum xc_error_code xc_error_code
  * be called multiple times within a single process.  Multiple processes can
  * have an open hypervisor interface at the same time.
  *
+ * Note:
+ * Child processes must not use the opened xc interface handle that inherits
+ * from parents. They should reopen the handle if they want to interact with
+ * xc. Otherwise, it may cause segment fault to access hypercall buffer caches
+ * of the handle.
+ *
  * Each call to this function should have a corresponding call to
  * xc_interface_close().
  *
@@ -908,6 +914,12 @@ int xc_evtchn_status(xc_interface *xch, 
  * Return a handle to the event channel driver, or -1 on failure, in which case
  * errno will be set appropriately.
  *
+ * Note:
+ * Child processes must not use the opened xc evtchn handle that inherits from
+ * parents. They should reopen the handle if they want to interact with xc.
+ * Otherwise, it may cause segment fault to access hypercall buffer caches of
+ * the handle.
+ *
  * Before Xen pre-4.1 this function would sometimes report errors with perror.
  */
 xc_evtchn *xc_evtchn_open(xentoollog_logger *logger,
@@ -1339,9 +1351,13 @@ int xc_domain_subscribe_for_suspend(
 
 /*
  * These functions sometimes log messages as above, but not always.
- */
-
-/*
+ *
+ * Note:
+ * Child processes must not use the opened xc gnttab handle that inherits from
+ * parents. They should reopen the handle if they want to interact with xc.
+ * Otherwise, it may cause segment fault to access hypercall buffer caches of
+ * the handle.
+ *
  * Return an fd onto the grant table driver.  Logs errors.
  */
 xc_gnttab *xc_gnttab_open(xentoollog_logger *logger,
@@ -1458,6 +1474,13 @@ grant_entry_v2_t *xc_gnttab_map_table_v2
 
 /*
  * Return an fd onto the grant sharing driver.  Logs errors.
+ *
+ * Note:
+ * A child process must not use an xc gntshr handle inherited from its
+ * parent across fork(); it should open its own handle if it needs to
+ * interact with xc.  Otherwise, accessing the handle's hypercall buffer
+ * cache may cause a segmentation fault.
+ *
  */
 xc_gntshr *xc_gntshr_open(xentoollog_logger *logger,
 			  unsigned open_flags);
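The fork-safety rule these comments describe ("don't inherit the handle; reopen it") can be sketched as follows. This is an illustrative pattern only: the stub types and functions below stand in for the real libxc entry points (xc_interface_open / xc_interface_close from <xenctrl.h>) so that the shape of the pattern compiles without a Xen host.

```c
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Stub stand-ins for the real libxc calls, so the fork pattern
 * compiles anywhere.  The real handle carries per-process hypercall
 * buffer caches; here we just record the pid that opened it. */
typedef struct { pid_t owner; } xc_interface_stub;

static xc_interface_stub *stub_open(void)
{
    xc_interface_stub *h = malloc(sizeof(*h));
    if (h)
        h->owner = getpid();   /* caches are valid only for the opener */
    return h;
}

static void stub_close(xc_interface_stub *h)
{
    free(h);
}

/* The pattern the note asks for: after fork(), the child leaves the
 * inherited handle alone and opens a fresh one before talking to xc.
 * Returns 1 if the child successfully used its own handle. */
static int child_reopens_handle(void)
{
    xc_interface_stub *xch = stub_open();      /* parent's handle */
    if (!xch)
        return 0;

    pid_t pid = fork();
    if (pid == 0) {
        /* Child: must NOT touch the inherited xch.  Reopen instead. */
        xc_interface_stub *child_xch = stub_open();
        int ok = child_xch && child_xch->owner == getpid();
        stub_close(child_xch);
        _exit(ok ? 0 : 1);
    }

    int status = 0;
    waitpid(pid, &status, 0);
    stub_close(xch);           /* parent closes only its own handle */
    return WIFEXITED(status) && WEXITSTATUS(status) == 0;
}
```

On a real Xen host the child would call xc_interface_open() (and the matching xc_evtchn_open, xc_gnttab_open, or xc_gntshr_open) itself rather than a stub.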

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 09 07:06:07 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 07:06:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzMoU-00058H-Iv; Thu, 09 Aug 2012 07:05:46 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SzMoS-00058B-VB
	for xen-devel@lists.xensource.com; Thu, 09 Aug 2012 07:05:45 +0000
Received: from [85.158.138.51:23149] by server-1.bemta-3.messagelabs.com id
	4C/B3-29224-84163205; Thu, 09 Aug 2012 07:05:44 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-12.tower-174.messagelabs.com!1344495942!19403764!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg2MTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20181 invoked from network); 9 Aug 2012 07:05:43 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-12.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Aug 2012 07:05:43 -0000
X-IronPort-AV: E=Sophos;i="4.77,738,1336348800"; d="scan'208";a="13922617"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	09 Aug 2012 07:05:42 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 9 Aug 2012 08:05:42 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1SzMoP-0000oZ-VM;
	Thu, 09 Aug 2012 07:05:41 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1SzMoP-0002Dc-OQ;
	Thu, 09 Aug 2012 08:05:41 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13574-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 9 Aug 2012 08:05:41 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13574: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13574 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13574/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13572
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13572
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13572
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13572

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  d25406e25af4
baseline version:
 xen                  472fc515a463

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=d25406e25af4
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable d25406e25af4
+ branch=xen-unstable
+ revision=d25406e25af4
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/staging/xen-unstable.hg
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-unstable.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-unstable.hg
+ hg push -r d25406e25af4 ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 4 changesets with 5 changes to 4 files

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 09 07:08:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 07:08:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzMqP-0005D7-3T; Thu, 09 Aug 2012 07:07:45 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <aware.why@gmail.com>) id 1SzMqO-0005Cy-98
	for xen-devel@lists.xensource.com; Thu, 09 Aug 2012 07:07:44 +0000
Received: from [85.158.139.83:61749] by server-4.bemta-5.messagelabs.com id
	9A/EE-32474-EB163205; Thu, 09 Aug 2012 07:07:42 +0000
X-Env-Sender: aware.why@gmail.com
X-Msg-Ref: server-9.tower-182.messagelabs.com!1344496062!30299691!1
X-Originating-IP: [209.85.215.171]
X-SpamReason: No, hits=0.9 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_30_40, HTML_MESSAGE, ML_RADAR_SPEW_LINKS_14, RCVD_BY_IP,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27115 invoked from network); 9 Aug 2012 07:07:42 -0000
Received: from mail-ey0-f171.google.com (HELO mail-ey0-f171.google.com)
	(209.85.215.171)
	by server-9.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Aug 2012 07:07:42 -0000
Received: by eaah11 with SMTP id h11so30048eaa.30
	for <xen-devel@lists.xensource.com>;
	Thu, 09 Aug 2012 00:07:41 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:content-type; bh=PYQ9aPZ9rVOO5WOs1uhMJEWvi81y9oVAPkZ74d0EH1s=;
	b=d/ABKhOZ75JGCNJrKAfsVkL5gynRehhF1ceXCbeP9kt9Ej6Wvj3oiL1nPclYjM+Nyj
	1F2ELNsLFbrTfsSjx2/VmVuUm8hLHQUsQAomGXKbW9UW/uAAKZg3+VSWJvEd30xYBEbO
	26YPfmLCmP96dLUO2OYg7gYu/OV5Cto8EH1CyyYKQ1Vq6+rRLnKKjQUmVvFrr1kNYjrK
	Dsiyp5CyjpCYfkHBVNC4Dfejf3in3u/uw6iTziN7cl+ayPDPKrbRKc31GhsXdqkN5yuc
	wdaF3QAkIAZRGtD1VYnYrZvfeOMVfX8ybtbpaDEdQbCffZTo7wYe7L/dr7aDGaJ5hVKY
	vN0A==
MIME-Version: 1.0
Received: by 10.14.209.129 with SMTP id s1mr16363434eeo.24.1344496061886; Thu,
	09 Aug 2012 00:07:41 -0700 (PDT)
Received: by 10.14.218.200 with HTTP; Thu, 9 Aug 2012 00:07:41 -0700 (PDT)
In-Reply-To: <CA+ePHTAGBYkyO1kUH2Cuf_fRRB4ELZeUO5Ot_d79c30N1=+VUQ@mail.gmail.com>
References: <CA+ePHTAGBYkyO1kUH2Cuf_fRRB4ELZeUO5Ot_d79c30N1=+VUQ@mail.gmail.com>
Date: Thu, 9 Aug 2012 15:07:41 +0800
Message-ID: <CA+ePHTAQHidwtaATLsH6kVz89sTs6vF_=k1OQ462JL4UBVZsnA@mail.gmail.com>
From: =?UTF-8?B?6ams56OK?= <aware.why@gmail.com>
To: xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [help] problem with `tools/xenstore/xs.c:
	xs_talkv()`
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0024301747621438595=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============0024301747621438595==
Content-Type: multipart/alternative; boundary=047d7b603a782bb28704c6cfe26b

--047d7b603a782bb28704c6cfe26b
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

On Wed, Aug 8, 2012 at 4:32 PM, 马磊 <aware.why@gmail.com> wrote:

> Hi all,
>
>     In xen-4.1.2 src/tools/xenstore/xs.c, the xs_talkv function contains
> the following lines:
>
>  437        mutex_lock(&h->request_mutex);
>  438
>  439        if (!xs_write_all(h->fd, &msg, sizeof(msg)))
>  440                goto fail;
>  441
>  442        for (i = 0; i < num_vecs; i++)
>  443                if (!xs_write_all(h->fd, iovec[i].iov_base, iovec[i].iov_len))
>  444                        goto fail;
>  445
>  446        ret = read_reply(h, &msg.type, len);
>  447        if (!ret)
>  448                goto fail;
>  449
>  450        mutex_unlock(&h->request_mutex);
>
> The above seems to say that after writing to h->fd, read_reply invokes
> read_message, which immediately reads from h->fd?
> What does that mean?
>
> Thanks
>

If h->fd refers to a socket descriptor, writing and then immediately
reading is understandable; that is the case where the fd is returned by
get_handle(xs_daemon_socket(), flags).

But when the fd is retrieved by get_handle(xs_domain_dev(), flags), it
means writing to a file and then immediately reading from the same file.
Does this have something to do with the internal communication protocol?

Thanks for replying.
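The write-then-read in xs_talkv is not ordinary file I/O in either case: the fd is a connection to xenstored (a socket, or the /dev/xen/xenbus char device), so a write sends a request and the next read receives the daemon's reply. A minimal sketch of that round trip follows; the xsd_sockmsg layout matches xen/io/xs_wire.h, while xs_roundtrip and demo are illustrative names, and the forked child is only a hypothetical stand-in for xenstored.

```c
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Wire header used by the xenstore protocol (matches struct
 * xsd_sockmsg in xen/io/xs_wire.h). */
struct xsd_sockmsg {
    uint32_t type;    /* XS_READ, XS_WRITE, ... */
    uint32_t req_id;  /* echoed back in the reply */
    uint32_t tx_id;   /* transaction id, 0 if none */
    uint32_t len;     /* length of the payload that follows */
};

/* One request/response exchange, mirroring what xs_talkv does:
 * write the header and payload, then read the reply on the SAME fd. */
static int xs_roundtrip(int fd, uint32_t type, const char *payload,
                        struct xsd_sockmsg *reply)
{
    struct xsd_sockmsg req = { .type = type, .req_id = 7,
                               .tx_id = 0, .len = (uint32_t)strlen(payload) };
    if (write(fd, &req, sizeof(req)) != (ssize_t)sizeof(req))
        return -1;
    if (write(fd, payload, req.len) != (ssize_t)req.len)
        return -1;
    /* The same fd now yields the daemon's answer. */
    return read(fd, reply, sizeof(*reply)) == (ssize_t)sizeof(*reply) ? 0 : -1;
}

/* Demo: a forked "fake xenstored" reads the request and echoes the
 * header back, showing that write-then-read on one fd is a
 * request/reply exchange, not a file read-back.  Returns 0 on success. */
static int demo(void)
{
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0)
        return -1;

    pid_t pid = fork();
    if (pid == 0) {              /* fake daemon end */
        close(sv[0]);
        struct xsd_sockmsg m;
        char buf[64];
        read(sv[1], &m, sizeof(m));
        read(sv[1], buf, m.len);
        m.len = 0;               /* reply with an empty body */
        write(sv[1], &m, sizeof(m));
        _exit(0);
    }
    close(sv[1]);

    struct xsd_sockmsg reply;
    int rc = xs_roundtrip(sv[0], 2 /* XS_READ */, "domid", &reply);
    close(sv[0]);
    waitpid(pid, 0, 0);
    return (rc == 0 && reply.req_id == 7) ? 0 : -1;
}
```

With /dev/xen/xenbus the kernel driver plays the role the child plays here, relaying requests to xenstored and queuing its replies for the subsequent read.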

oG11dGV4X3VubG9jaygmYW1wO2gtJmd0O3JlcXVlc3RfbXV0ZXgpOyDCoDwvZGl2PjxkaXY+PGJy
PjwvZGl2PjxkaXY+PGJyPjwvZGl2PjxkaXY+VGhlIGFib3ZlIHNlZW1zIHRvIHRlbGwgbWUgdGhh
dCBhZnRlciB3cml0aW5nIHRvIGgtJmd0O2ZkICwgdGhlIHJlYWRfcmVwbHkgaW52b2tpbmcgcmVh
ZF9tZXNzYWdlIHdoaWNoIGltbWVkaWF0ZWxseSDCoHJlYWQgZnJvbSBoZC0mZ3Q7ZmQ/PC9kaXY+
Cgo8ZGl2PldoYXQgZGlkIGl0IG1lYW4gYnkgdGhpcz88L2Rpdj48ZGl2Pjxicj48L2Rpdj48ZGl2
Pjxicj48L2Rpdj48ZGl2PlRoYW5rczwvZGl2Pgo8L2Jsb2NrcXVvdGU+PC9kaXY+PGJyPjxkaXY+
PGZvbnQgY29sb3I9IiNmZjY2NjYiPklmIGhkLSZndDtmZCByZWZlcnMgdG8gYSBzb2NrZXQgZGVz
Y3JpcHRvciwgaXQmIzM5O3MgdW5kZXJzdGFuZGFibGUgdGhhdCB3cml0aW5nIGFuZCB0aGVuIGlt
ZWRpYXRlbGx5IHJlYWRpbmcsIGluIHdoaWNoIGNhc2UgdGhlIGZkIGlzIHJldHVybmVkIGJ5IGdl
dF9oYW5kbGUoeHNfZGFlbW9uX3NvY2tldCgpLCBmbGFncykuPC9mb250PjwvZGl2Pgo8ZGl2Pjxm
b250IGNvbG9yPSIjZmY2NjY2Ij48YnI+PC9mb250PjwvZGl2PjxkaXY+PGZvbnQgY29sb3I9IiNm
ZjY2NjYiPkJ1dCB3aGVuIGZkIGlzIHJldHJpdmVkIGJ5wqBnZXRfaGFuZGxlKHhzX2RvbWFpbl9k
ZXYoKSwgZmxhZ3MpLCBpdCBtZWFucyB0byB3cml0ZSB0byBhIGZpbGUgYW5kIHRoZW4gcmVhZCBm
cm9tIHRoZSBzYW1lIGZpbGUgaW1lZGlhdGVsbHkuPC9mb250PjwvZGl2PjxkaXY+Cjxmb250IGNv
bG9yPSIjZmY2NjY2Ij5Eb3NlIGl0IGhhdmUgc29tZXRoaW5nIHRvIGRvIHdpdGggdGhlIGludGVy
bmFsIGNvbW11bmljYXRpb24gcHJvdG9jb2w/IcKgPC9mb250PjwvZGl2PjxkaXY+PGZvbnQgY29s
b3I9IiNmZjY2NjYiPjxicj48L2ZvbnQ+PC9kaXY+PGRpdj5UaGFua3MgZm9yIHJlcGx5aW5nLjwv
ZGl2PjxkaXY+PGZvbnQgY29sb3I9IiNmZjY2NjYiPjxicj48L2ZvbnQ+PC9kaXY+CjxkaXY+PGZv
bnQgY29sb3I9IiNmZjY2NjYiPjxicj48L2ZvbnQ+PC9kaXY+PGRpdj48Zm9udCBjb2xvcj0iI2Zm
NjY2NiI+PGJyPjwvZm9udD48L2Rpdj4K
--047d7b603a782bb28704c6cfe26b--


--===============0024301747621438595==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0024301747621438595==--


From xen-devel-bounces@lists.xen.org Thu Aug 09 07:08:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 07:08:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzMqP-0005D7-3T; Thu, 09 Aug 2012 07:07:45 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <aware.why@gmail.com>) id 1SzMqO-0005Cy-98
	for xen-devel@lists.xensource.com; Thu, 09 Aug 2012 07:07:44 +0000
Received: from [85.158.139.83:61749] by server-4.bemta-5.messagelabs.com id
	9A/EE-32474-EB163205; Thu, 09 Aug 2012 07:07:42 +0000
X-Env-Sender: aware.why@gmail.com
X-Msg-Ref: server-9.tower-182.messagelabs.com!1344496062!30299691!1
X-Originating-IP: [209.85.215.171]
X-SpamReason: No, hits=0.9 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_30_40, HTML_MESSAGE, ML_RADAR_SPEW_LINKS_14, RCVD_BY_IP,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27115 invoked from network); 9 Aug 2012 07:07:42 -0000
Received: from mail-ey0-f171.google.com (HELO mail-ey0-f171.google.com)
	(209.85.215.171)
	by server-9.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Aug 2012 07:07:42 -0000
Received: by eaah11 with SMTP id h11so30048eaa.30
	for <xen-devel@lists.xensource.com>;
	Thu, 09 Aug 2012 00:07:41 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:content-type; bh=PYQ9aPZ9rVOO5WOs1uhMJEWvi81y9oVAPkZ74d0EH1s=;
	b=d/ABKhOZ75JGCNJrKAfsVkL5gynRehhF1ceXCbeP9kt9Ej6Wvj3oiL1nPclYjM+Nyj
	1F2ELNsLFbrTfsSjx2/VmVuUm8hLHQUsQAomGXKbW9UW/uAAKZg3+VSWJvEd30xYBEbO
	26YPfmLCmP96dLUO2OYg7gYu/OV5Cto8EH1CyyYKQ1Vq6+rRLnKKjQUmVvFrr1kNYjrK
	Dsiyp5CyjpCYfkHBVNC4Dfejf3in3u/uw6iTziN7cl+ayPDPKrbRKc31GhsXdqkN5yuc
	wdaF3QAkIAZRGtD1VYnYrZvfeOMVfX8ybtbpaDEdQbCffZTo7wYe7L/dr7aDGaJ5hVKY
	vN0A==
MIME-Version: 1.0
Received: by 10.14.209.129 with SMTP id s1mr16363434eeo.24.1344496061886; Thu,
	09 Aug 2012 00:07:41 -0700 (PDT)
Received: by 10.14.218.200 with HTTP; Thu, 9 Aug 2012 00:07:41 -0700 (PDT)
In-Reply-To: <CA+ePHTAGBYkyO1kUH2Cuf_fRRB4ELZeUO5Ot_d79c30N1=+VUQ@mail.gmail.com>
References: <CA+ePHTAGBYkyO1kUH2Cuf_fRRB4ELZeUO5Ot_d79c30N1=+VUQ@mail.gmail.com>
Date: Thu, 9 Aug 2012 15:07:41 +0800
Message-ID: <CA+ePHTAQHidwtaATLsH6kVz89sTs6vF_=k1OQ462JL4UBVZsnA@mail.gmail.com>
From: =?UTF-8?B?6ams56OK?= <aware.why@gmail.com>
To: xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [help] problem with `tools/xenstore/xs.c:
	xs_talkv()`
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0024301747621438595=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============0024301747621438595==
Content-Type: multipart/alternative; boundary=047d7b603a782bb28704c6cfe26b

--047d7b603a782bb28704c6cfe26b
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

On Wed, Aug 8, 2012 at 4:32 PM, 马磊 <aware.why@gmail.com> wrote:

> Hi all,
>
>     In xen-4.1.2 src/tools/xenstore/xs.c, the xs_talkv function contains
> the following lines:
>
>  437        mutex_lock(&h->request_mutex);
>  438
>  439        if (!xs_write_all(h->fd, &msg, sizeof(msg)))
>  440                goto fail;
>  441
>  442        for (i = 0; i < num_vecs; i++)
>  443                if (!xs_write_all(h->fd, iovec[i].iov_base,
>                                       iovec[i].iov_len))
>  444                        goto fail;
>  445
>  446        ret = read_reply(h, &msg.type, len);
>  447        if (!ret)
>  448                goto fail;
>  449
>  450        mutex_unlock(&h->request_mutex);
>
> The above seems to tell me that after writing to h->fd, read_reply
> invokes read_message, which immediately reads from h->fd?
> What does this mean?
>
> Thanks
>

If h->fd refers to a socket descriptor, writing and then immediately
reading is understandable; that is the case where the fd is returned by
get_handle(xs_daemon_socket(), flags).

But when the fd is retrieved by get_handle(xs_domain_dev(), flags), it
means writing to a file and then immediately reading from the same file.
Does this have something to do with the internal communication protocol?

Thanks for replying.

--047d7b603a782bb28704c6cfe26b--


--===============0024301747621438595==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0024301747621438595==--


From xen-devel-bounces@lists.xen.org Thu Aug 09 07:26:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 07:26:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzN8H-0005aj-9o; Thu, 09 Aug 2012 07:26:13 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SzN8F-0005ae-12
	for xen-devel@lists.xen.org; Thu, 09 Aug 2012 07:26:11 +0000
Received: from [85.158.143.99:46935] by server-3.bemta-4.messagelabs.com id
	1F/56-31486-21663205; Thu, 09 Aug 2012 07:26:10 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-15.tower-216.messagelabs.com!1344497169!27280193!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM4MTg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8312 invoked from network); 9 Aug 2012 07:26:09 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-15.tower-216.messagelabs.com with SMTP;
	9 Aug 2012 07:26:09 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 09 Aug 2012 08:26:07 +0100
Message-Id: <5023822E0200007800093D2A@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Thu, 09 Aug 2012 08:26:06 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: <wei.wang2@amd.com>,"Santosh Jodh" <santosh.jodh@citrix.com>
References: <8deb7c7a25c4a3bc50d7.1344446255@REDBLD-XS.ad.xensource.com>
In-Reply-To: <8deb7c7a25c4a3bc50d7.1344446255@REDBLD-XS.ad.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: tim@xen.org, xiantao.zhang@intel.com, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] dump_p2m_table: For IOMMU
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 08.08.12 at 19:17, Santosh Jodh <santosh.jodh@citrix.com> wrote:
> New key handler 'o' to dump the IOMMU p2m table for each domain.
> Skips dumping table for domain0.
> Intel and AMD specific iommu_ops handler for dumping p2m table.

I'm sorry Santosh, but this is still not quite right.

> @@ -512,6 +513,69 @@ static int amd_iommu_group_id(u16 seg, u
>  
>  #include <asm/io_apic.h>
>  
> +static void amd_dump_p2m_table_level(struct page_info* pg, int level, u64 gpa)

static void amd_dump_p2m_table_level(struct page_info *pg, int level, paddr_t gpa)

> +{
> +    u64 address;

paddr_t

> +    void *table_vaddr, *pde;
> +    u64 next_table_maddr;

This ought to also be paddr_t, but I realize that the whole AMD
IOMMU code is using u64 instead. So here it's probably okay to
remain u64.

> +    int index, next_level, present;
> +    u32 *entry;
> +
> +    if ( level <= 1 )
> +        return;
> +
> +    table_vaddr = __map_domain_page(pg);
> +    if ( table_vaddr == NULL )
> +    {
> +        printk("Failed to map IOMMU domain page %016"PRIx64"\n", page_to_maddr(pg));

We specifically have PRIpaddr for this purpose.

> +        return;
> +    }
> +
> +    for ( index = 0; index < PTE_PER_TABLE_SIZE; index++ )
> +    {
> +        if ( !(index % 2) )
> +            process_pending_softirqs();
> +
> +        pde = table_vaddr + (index * IOMMU_PAGE_TABLE_ENTRY_SIZE);
> +        next_table_maddr = amd_iommu_get_next_table_from_pte(pde);
> +        entry = (u32*)pde;
> +
> +        next_level = get_field_from_reg_u32(entry[0],
> +                                            IOMMU_PDE_NEXT_LEVEL_MASK,
> +                                            IOMMU_PDE_NEXT_LEVEL_SHIFT);
> +
> +        present = get_field_from_reg_u32(entry[0],
> +                                         IOMMU_PDE_PRESENT_MASK,
> +                                         IOMMU_PDE_PRESENT_SHIFT);
> +
> +        address = gpa + amd_offset_level_address(index, level);
> +        if ( (next_table_maddr != 0) && (next_level != 0)
> +            && present )
> +        {
> +            amd_dump_p2m_table_level(
> +                maddr_to_page(next_table_maddr), level - 1, address);

I guess I see now that you started by cloning
deallocate_next_page_table(), which has almost all the same
issues I have been complaining about here.

Wei - here I'm particularly worried about the use of "level - 1"
instead of "next_level", which would similarly apply to the
original function. If the way this is currently done is okay, then
why is next_level being computed in the first place? (And similar
to the issue Santosh has already fixed here - the original
function pointlessly maps/unmaps the page when "level <= 1".
Furthermore, iommu_map.c has nice helper functions
iommu_next_level() and amd_iommu_is_pte_present() - why
aren't they in a header instead, so they could be used here,
avoiding the open coding of them?)

> +        }
> +
> +        if ( present )
> +        {
> +            printk("gfn: %016"PRIx64"  mfn: %016"PRIx64"\n",
> +                   address >> PAGE_SHIFT, next_table_maddr >> PAGE_SHIFT);

I'd prefer you to use PFN_DOWN() here.

Also, depth first, as requested by Tim, to me doesn't mean
recursing before printing. I think you really want to print first,
then recurse. Otherwise how would the output be made sense
of?

> +        }
> +    }
> +
> +    unmap_domain_page(table_vaddr);
> +}
>...
> --- a/xen/drivers/passthrough/iommu.c	Tue Aug 07 18:37:31 2012 +0100
> +++ b/xen/drivers/passthrough/iommu.c	Wed Aug 08 09:56:50 2012 -0700
> @@ -54,6 +55,8 @@ bool_t __read_mostly amd_iommu_perdev_in
>  
>  DEFINE_PER_CPU(bool_t, iommu_dont_flush_iotlb);
>  
> +void setup_iommu_dump(void);
> +

This is completely bogus. If the function was called from another
source file, the declaration would belong into a header file. Since
it's only used here, it ought to be static.

>  static void __init parse_iommu_param(char *s)
>  {
>      char *ss;
> @@ -119,6 +122,7 @@ void __init iommu_dom0_init(struct domai
>      if ( !iommu_enabled )
>          return;
>  
> +    setup_iommu_dump();
>      d->need_iommu = !!iommu_dom0_strict;
>      if ( need_iommu(d) )
>      {
>...
> +void __init setup_iommu_dump(void)
> +{
> +    register_keyhandler('o', &iommu_p2m_table);
> +}

Furthermore, there's no real need for a separate function here
anyway. Just call register_key_handler() directly. Or
alternatively this ought to match other code doing the same -
using an initcall.

> --- a/xen/drivers/passthrough/vtd/iommu.c	Tue Aug 07 18:37:31 2012 +0100
> +++ b/xen/drivers/passthrough/vtd/iommu.c	Wed Aug 08 09:56:50 2012 -0700
> +static void vtd_dump_p2m_table_level(u64 pt_maddr, int level, u64 gpa)
> +{
> +    u64 address;

Again, both gpa and address ought to be paddr_t, and the format
specifiers should match.

> +    int i;
> +    struct dma_pte *pt_vaddr, *pte;
> +    int next_level;
> +
> +    if ( pt_maddr == 0 )
> +        return;
> +
> +    pt_vaddr = (struct dma_pte *)map_vtd_domain_page(pt_maddr);

Pointless cast.

> +    if ( pt_vaddr == NULL )
> +    {
> +        printk("Failed to map VT-D domain page %016"PRIx64"\n", pt_maddr);
> +        return;
> +    }
> +
> +    next_level = level - 1;
> +    for ( i = 0; i < PTE_NUM; i++ )
> +    {
> +        if ( !(i % 2) )
> +            process_pending_softirqs();
> +
> +        pte = &pt_vaddr[i];
> +        if ( !dma_pte_present(*pte) )
> +            continue;
> +        
> +        address = gpa + offset_level_address(i, level);
> +        if ( next_level >= 1 )
> +            vtd_dump_p2m_table_level(dma_pte_addr(*pte), next_level, address);
> +
> +        if ( level == 1 )
> +            printk("gfn: %016"PRIx64" mfn: %016"PRIx64" superpage=%d\n", 
> +                    address >> PAGE_SHIFT_4K, pte->val >> PAGE_SHIFT_4K, dma_pte_superpage(*pte)? 1 : 0);

Why do you print leaf (level 1) tables here only?

And the last line certainly is above 80 chars, so needs breaking up.

(Also, just to avoid you needing to do another iteration: Don't
switch to PFN_DOWN() here.)

I further wonder whether "superpage" alone is enough - don't we
have both 2M and 1G pages? Of course, that would become moot
if higher levels also got dumped (as then this knowledge is implicit).

Which reminds me to ask that both here and in the AMD code the
recursion level should probably be reflected by indenting the
printed strings.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 09 07:26:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 07:26:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzN8H-0005aj-9o; Thu, 09 Aug 2012 07:26:13 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SzN8F-0005ae-12
	for xen-devel@lists.xen.org; Thu, 09 Aug 2012 07:26:11 +0000
Received: from [85.158.143.99:46935] by server-3.bemta-4.messagelabs.com id
	1F/56-31486-21663205; Thu, 09 Aug 2012 07:26:10 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-15.tower-216.messagelabs.com!1344497169!27280193!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM4MTg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8312 invoked from network); 9 Aug 2012 07:26:09 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-15.tower-216.messagelabs.com with SMTP;
	9 Aug 2012 07:26:09 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 09 Aug 2012 08:26:07 +0100
Message-Id: <5023822E0200007800093D2A@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Thu, 09 Aug 2012 08:26:06 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: <wei.wang2@amd.com>,"Santosh Jodh" <santosh.jodh@citrix.com>
References: <8deb7c7a25c4a3bc50d7.1344446255@REDBLD-XS.ad.xensource.com>
In-Reply-To: <8deb7c7a25c4a3bc50d7.1344446255@REDBLD-XS.ad.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: tim@xen.org, xiantao.zhang@intel.com, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] dump_p2m_table: For IOMMU
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 08.08.12 at 19:17, Santosh Jodh <santosh.jodh@citrix.com> wrote:
> New key handler 'o' to dump the IOMMU p2m table for each domain.
> Skips dumping table for domain0.
> Intel and AMD specific iommu_ops handler for dumping p2m table.

I'm sorry Santosh, but this is still not quite right.

> @@ -512,6 +513,69 @@ static int amd_iommu_group_id(u16 seg, u
>  
>  #include <asm/io_apic.h>
>  
> +static void amd_dump_p2m_table_level(struct page_info* pg, int level, u64 gpa)

static void amd_dump_p2m_table_level(struct page_info *pg, int level, paddr_t gpa)

> +{
> +    u64 address;

paddr_t

> +    void *table_vaddr, *pde;
> +    u64 next_table_maddr;

This ought to also be paddr_t, but I realize that the whole AMD
IOMMU code is using u64 instead. So here it's probably okay to
remain u64.

> +    int index, next_level, present;
> +    u32 *entry;
> +
> +    if ( level <= 1 )
> +        return;
> +
> +    table_vaddr = __map_domain_page(pg);
> +    if ( table_vaddr == NULL )
> +    {
> +        printk("Failed to map IOMMU domain page %016"PRIx64"\n", page_to_maddr(pg));

We specifically have PRIpaddr for this purpose.

> +        return;
> +    }
> +
> +    for ( index = 0; index < PTE_PER_TABLE_SIZE; index++ )
> +    {
> +        if ( !(index % 2) )
> +            process_pending_softirqs();
> +
> +        pde = table_vaddr + (index * IOMMU_PAGE_TABLE_ENTRY_SIZE);
> +        next_table_maddr = amd_iommu_get_next_table_from_pte(pde);
> +        entry = (u32*)pde;
> +
> +        next_level = get_field_from_reg_u32(entry[0],
> +                                            IOMMU_PDE_NEXT_LEVEL_MASK,
> +                                            IOMMU_PDE_NEXT_LEVEL_SHIFT);
> +
> +        present = get_field_from_reg_u32(entry[0],
> +                                         IOMMU_PDE_PRESENT_MASK,
> +                                         IOMMU_PDE_PRESENT_SHIFT);
> +
> +        address = gpa + amd_offset_level_address(index, level);
> +        if ( (next_table_maddr != 0) && (next_level != 0)
> +            && present )
> +        {
> +            amd_dump_p2m_table_level(
> +                maddr_to_page(next_table_maddr), level - 1, address);

I guess I see now that you started by cloning
deallocate_next_page_table(), which has almost all the same
issues I have been complaining about here.

Wei - here I'm particularly worried about the use of "level - 1"
instead of "next_level", which would similarly apply to the
original function. If the way this is currently done is okay, then
why is next_level being computed in the first place? (And similar
to the issue Santosh has already fixed here - the original
function pointlessly maps/unmaps the page when "level <= 1".
Furthermore, iommu_map.c has nice helper functions
iommu_next_level() and amd_iommu_is_pte_present() - why
aren't they in a header instead, so they could be used here,
avoiding the open coding of them?)

> +        }
> +
> +        if ( present )
> +        {
> +            printk("gfn: %016"PRIx64"  mfn: %016"PRIx64"\n",
> +                   address >> PAGE_SHIFT, next_table_maddr >> PAGE_SHIFT);

I'd prefer you to use PFN_DOWN() here.

Also, depth first, as requested by Tim, to me doesn't mean
recursing before printing. I think you really want to print first,
then recurse. Otherwise how would the output be made sense
of?

> +        }
> +    }
> +
> +    unmap_domain_page(table_vaddr);
> +}
>...
> --- a/xen/drivers/passthrough/iommu.c	Tue Aug 07 18:37:31 2012 +0100
> +++ b/xen/drivers/passthrough/iommu.c	Wed Aug 08 09:56:50 2012 -0700
> @@ -54,6 +55,8 @@ bool_t __read_mostly amd_iommu_perdev_in
>  
>  DEFINE_PER_CPU(bool_t, iommu_dont_flush_iotlb);
>  
> +void setup_iommu_dump(void);
> +

This is completely bogus. If the function was called from another
source file, the declaration would belong in a header file. Since
it's only used here, it ought to be static.

>  static void __init parse_iommu_param(char *s)
>  {
>      char *ss;
> @@ -119,6 +122,7 @@ void __init iommu_dom0_init(struct domai
>      if ( !iommu_enabled )
>          return;
>  
> +    setup_iommu_dump();
>      d->need_iommu = !!iommu_dom0_strict;
>      if ( need_iommu(d) )
>      {
>...
> +void __init setup_iommu_dump(void)
> +{
> +    register_keyhandler('o', &iommu_p2m_table);
> +}

Furthermore, there's no real need for a separate function here
anyway. Just call register_keyhandler() directly. Or
alternatively this ought to match other code doing the same -
using an initcall.

> --- a/xen/drivers/passthrough/vtd/iommu.c	Tue Aug 07 18:37:31 2012 +0100
> +++ b/xen/drivers/passthrough/vtd/iommu.c	Wed Aug 08 09:56:50 2012 -0700
> +static void vtd_dump_p2m_table_level(u64 pt_maddr, int level, u64 gpa)
> +{
> +    u64 address;

Again, both gpa and address ought to be paddr_t, and the format
specifiers should match.

> +    int i;
> +    struct dma_pte *pt_vaddr, *pte;
> +    int next_level;
> +
> +    if ( pt_maddr == 0 )
> +        return;
> +
> +    pt_vaddr = (struct dma_pte *)map_vtd_domain_page(pt_maddr);

Pointless cast.

> +    if ( pt_vaddr == NULL )
> +    {
> +        printk("Failed to map VT-D domain page %016"PRIx64"\n", pt_maddr);
> +        return;
> +    }
> +
> +    next_level = level - 1;
> +    for ( i = 0; i < PTE_NUM; i++ )
> +    {
> +        if ( !(i % 2) )
> +            process_pending_softirqs();
> +
> +        pte = &pt_vaddr[i];
> +        if ( !dma_pte_present(*pte) )
> +            continue;
> +        
> +        address = gpa + offset_level_address(i, level);
> +        if ( next_level >= 1 )
> +            vtd_dump_p2m_table_level(dma_pte_addr(*pte), next_level, address);
> +
> +        if ( level == 1 )
> +            printk("gfn: %016"PRIx64" mfn: %016"PRIx64" superpage=%d\n", 
> +                    address >> PAGE_SHIFT_4K, pte->val >> PAGE_SHIFT_4K, dma_pte_superpage(*pte)? 1 : 0);

Why do you print leaf (level 1) tables here only?

And the last line certainly is above 80 chars, so needs breaking up.

(Also, just to avoid you needing to do another iteration: Don't
switch to PFN_DOWN() here.)

I further wonder whether "superpage" alone is enough - don't we
have both 2M and 1G pages? Of course, that would become moot
if higher levels also got dumped (as then this knowledge is implicit).

Which reminds me to ask that both here and in the AMD code the
recursion level should probably be reflected by indenting the
printed strings.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 09 08:21:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 08:21:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzNzn-0006LA-Hy; Thu, 09 Aug 2012 08:21:31 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SzNzm-0006L5-5M
	for xen-devel@lists.xensource.com; Thu, 09 Aug 2012 08:21:30 +0000
Received: from [85.158.139.83:47694] by server-7.bemta-5.messagelabs.com id
	2A/C7-00857-90373205; Thu, 09 Aug 2012 08:21:29 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-182.messagelabs.com!1344500487!31283725!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg2MTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9605 invoked from network); 9 Aug 2012 08:21:27 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-10.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Aug 2012 08:21:27 -0000
X-IronPort-AV: E=Sophos;i="4.77,738,1336348800"; d="scan'208";a="13923991"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	09 Aug 2012 08:21:27 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Thu, 9 Aug 2012
	09:21:27 +0100
Message-ID: <1344500485.32142.70.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: =?UTF-8?Q?=E9=A9=AC=E7=A3=8A?= <aware.why@gmail.com>
Date: Thu, 9 Aug 2012 09:21:25 +0100
In-Reply-To: <CA+ePHTAQHidwtaATLsH6kVz89sTs6vF_=k1OQ462JL4UBVZsnA@mail.gmail.com>
References: <CA+ePHTAGBYkyO1kUH2Cuf_fRRB4ELZeUO5Ot_d79c30N1=+VUQ@mail.gmail.com>
	<CA+ePHTAQHidwtaATLsH6kVz89sTs6vF_=k1OQ462JL4UBVZsnA@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [help] problem with `tools/xenstore/xs.c:
 xs_talkv()`
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-09 at 08:07 +0100, 马磊 wrote:
> On Wed, Aug 8, 2012 at 4:32 PM, 马磊 <aware.why@gmail.com> wrote:
>         Hi all,
>
>         In xen-4.1.2 src/tools/xenstore/xs.c: xs_talkv function,
>         there are several lines as follows:
>
>         437        mutex_lock(&h->request_mutex);
>         438
>         439        if (!xs_write_all(h->fd, &msg, sizeof(msg)))
>         440                goto fail;
>         441
>         442        for (i = 0; i < num_vecs; i++)
>         443                if (!xs_write_all(h->fd, iovec[i].iov_base, iovec[i].iov_len))
>         444                        goto fail;
>         445
>         446        ret = read_reply(h, &msg.type, len);
>         447        if (!ret)
>         448                goto fail;
>         449
>         450        mutex_unlock(&h->request_mutex);
>
>         The above seems to tell me that after writing to h->fd, the
>         read_reply invokes read_message, which immediately reads from
>         h->fd?
>         What did it mean by this?
>
>         Thanks
>
> If h->fd refers to a socket descriptor, it's understandable that one
> writes and then immediately reads, in which case the fd is returned
> by get_handle(xs_daemon_socket(), flags).
>
> But when fd is retrieved by get_handle(xs_domain_dev(), flags), it
> means writing to a file and then reading from the same file immediately.
> Does it have something to do with the internal communication
> protocol?!

Yes, the xenstore protocol involves both writing messages and reading
replies, but that seems trivially obvious so I'm afraid I really have no
idea what your question is nor what is confusing you. Perhaps describing
in more detail what you are trying to achieve will help?

Reading http://wiki.xen.org/wiki/Asking_Xen_Devel_Questions might also
help you consider what it is you are asking.

> Thanks for replying.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 09 08:33:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 08:33:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzOAY-0006jA-Ep; Thu, 09 Aug 2012 08:32:38 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SzOAW-0006j4-UJ
	for xen-devel@lists.xen.org; Thu, 09 Aug 2012 08:32:37 +0000
Received: from [85.158.143.99:35365] by server-2.bemta-4.messagelabs.com id
	69/F3-19021-4A573205; Thu, 09 Aug 2012 08:32:36 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1344501153!20280994!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg2MTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16903 invoked from network); 9 Aug 2012 08:32:34 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-6.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Aug 2012 08:32:34 -0000
X-IronPort-AV: E=Sophos;i="4.77,739,1336348800"; d="scan'208";a="13924221"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	09 Aug 2012 08:32:33 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Thu, 9 Aug 2012
	09:32:33 +0100
Message-ID: <1344501152.32142.78.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Olaf Hering <olaf@aepfle.de>
Date: Thu, 9 Aug 2012 09:32:32 +0100
In-Reply-To: <20120808172809.GA22206@aepfle.de>
References: <20120806173905.GA26336@aepfle.de>
	<1344318133.24794.16.camel@dagon.hellion.org.uk>
	<20120807152502.GA24503@aepfle.de>
	<1344353581.11339.105.camel@zakaz.uk.xensource.com>
	<20120808172809.GA22206@aepfle.de>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] vif backend configuration times out
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-08-08 at 18:28 +0100, Olaf Hering wrote:
> On Tue, Aug 07, Ian Campbell wrote:
> 
> > On Tue, 2012-08-07 at 16:25 +0100, Olaf Hering wrote:
> > > On Tue, Aug 07, Ian Campbell wrote:
> > > 
> > > > On Mon, 2012-08-06 at 18:39 +0100, Olaf Hering wrote:
> > > > > With current xen-unstable 25733:353bc0801b11 the attached hvm.cfg does
> > > > > not start anymore with a SLES11SP2 dom0 kernel, but it starts if I run a
> > > > > 3.5 pvops dom0 kernel. I have no modifications other than the stubdom -j
> > > > > patch.
> > > > > 
> > > > > The output from this command is attached:
> > > > > xl -vvvv create -d -f /root/xenpaging/sles11sp2_full_xenpaging_local.cfg 2>&1 | tee xl-create-`uname -r`.txt &
> > > > > 
> > > > > Any ideas how to fix this timeout error?
> > > > 
> > > > The tools are waiting for the backend to move from state 1
> > > > (XenbusStateInitialising) to state 2 (XenbusStateInitWait). A backend
> > > > driver typically makes that transition at the end of its probe function
> > > > -- what is the SLES11SP2 netback waiting for? Or is it failing to init,
> > > > in which case perhaps there is an error node in XS?
> > > 
> > > I think there is a difference between the two kernels. The pvops kernel
> > > goes into state 2 right away (I cant tell from repeated xenstore-ls runs
> > > if it had also state 1).
> > > The sles11 kernel remains in state 1.
> > 
> > What is it waiting for?
> 
> I have no idea, have to browse code debug it.
> A quick test with plain sles11sp2+xend and xm start -p shows that
> /local/domain/0/backend/vif/1/0/state finally gets into state 2.

When you say "finally" do you mean that it takes an unusually long time?

> Looks like something to fix before 4.2.
> 
> 
> > >  Did the expectations of libxl
> > > change recently? xl create used to work not too long ago.
> > 
> > I don't think the expectation has changed but the implementation is
> > probably more picky since Roger's hotplug patches.
> > 
> > > xm does not work either, so the change is most likely in the scripts.
> > 
> > If you are switching from xl to xm then you should either reboot or
> > remove libxl/disable_udev in xenstore manually.
> > 
> > Other than that nor much has changed in the scripts either. Are you sure
> > it isn't the kernel which has changed?
> 
> The kernel is ok.

I think there is at least the possibility that this kernel has a latent
bug exposed by recent changes to libxl, or at least we should consider
that possibility.

Is this kernel tree available somewhere convenient (i.e. somewhere
which doesn't involve unpacking .src.rpms and applying patches etc.)?

I checked netback_probe in the linux-2.6.18-xen.hg tree (which I believe
relates at least somewhat to the SLES kernel) and it switches to
XenbusStateInitWait just before calling the function which triggers the
hotplug script -- so libxl's behaviour of waiting for
XenbusStateInitWait before running the hotplug scripts would seem to be
correct. I couldn't find anything before this point which would cause
the driver to block. So if your observation is that your kernel is
blocking in state 1 or taking an inordinate amount of time to get to
state 2 then that is what you need to dig into.

Have you reinstalled your udev rules etc? They changed recently and I
suspect they need to be up to date to work with the latest scripts.
Although you don't appear to be getting to that point so I don't think
it would matter (yet).

You didn't answer my question about error nodes in xenstore.

You could, experimentally, try increasing LIBXL_INIT_TIMEOUT to some
enormous time.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 09 08:38:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 08:38:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzOFX-0006wu-70; Thu, 09 Aug 2012 08:37:47 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SzOFV-0006wo-Hk
	for xen-devel@lists.xen.org; Thu, 09 Aug 2012 08:37:45 +0000
Received: from [85.158.143.35:7003] by server-3.bemta-4.messagelabs.com id
	DF/F6-31486-8D673205; Thu, 09 Aug 2012 08:37:44 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1344501464!13126646!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM4MTg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27092 invoked from network); 9 Aug 2012 08:37:44 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-16.tower-21.messagelabs.com with SMTP;
	9 Aug 2012 08:37:44 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 09 Aug 2012 09:37:43 +0100
Message-Id: <502392F60200007800093D47@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Thu, 09 Aug 2012 09:37:42 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "George Dunlap" <george.dunlap@eu.citrix.com>
References: <501173540200007800090A04@nat28.tlf.novell.com>
	<50116CD9.6000503@eu.citrix.com>
	<501FE96E0200007800092E25@nat28.tlf.novell.com>
	<501FED030200007800092E35@nat28.tlf.novell.com>
	<CAFLBxZZ5ifar7ou3NaeKne3b0rrw=wkjN6E1bEZ8+M=2w2bz2A@mail.gmail.com>
	<502123620200007800093377@nat28.tlf.novell.com>
	<CAFLBxZbq1mevbGwVMTq+M6VAs=EvAUt_G_acPyOvwbH7_GdL6Q@mail.gmail.com>
	<502134440200007800093416@nat28.tlf.novell.com>
	<50212F5F.3090405@eu.citrix.com>
In-Reply-To: <50212F5F.3090405@eu.citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] PoD code killing domain before it really gets
 started
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 07.08.12 at 17:08, George Dunlap <george.dunlap@eu.citrix.com> wrote:
> On 07/08/12 14:29, Jan Beulich wrote:
>> Hmm, that would look more like a hack to me.
>>
>> How about doing the initial check as suggested earlier
>>
>>      if ( p2md->pod.entry_count == 0 && d->tot_pages > 0 )
>>          goto out;
>>
>> and the latter check in a similar way
>>
>>      if ( pod_target > p2md->pod.entry_count && d->tot_pages > 0 )
>>          pod_target = p2md->pod.entry_count;
>>
>> (which would still take care of any ballooning activity)? Or are
>> there any other traps to fall into?
> The "d->tot_pages > 0" seems more like a hack to me. :-) What's hackish 
> about having an interface like this?
> * allocate_pod_mem()
> * for() { populate_pod_mem() }
> * [Boot VM]
> * set_pod_target()
> 
> Right now set_pod_mem() is used both for initial allocation and for 
> adjustments.  But it seems like there's good reason to make a distinction.

So that one didn't take hypercall preemption into account.

While adjusting this to use "d->tot_pages > p2md->pod.count",
which in turn I converted to simply "populated > 0" (moving the
calculation of "populated" earlier), I noticed that there's quite a
bit of type inconsistency: I first stumbled across
p2m_pod_set_mem_target()'s "pod_target" being "unsigned"
instead of "unsigned long", then noticed that none of the fields in
struct p2m_domain's pod sub-structure are "long", as at least
the GFNs ought to be (the counts should be too, not only for
consistency but also because they're signed and hence overflow
earlier). As a
consequence, the PoD code itself would also need careful review,
as there are a number of questionable uses of plain "int" (and, quite
the opposite, two cases of "order" pointlessly being "unsigned long").

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 09 09:11:26 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 09:11:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzOle-0007Hb-Lr; Thu, 09 Aug 2012 09:10:58 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SzOld-0007HK-8A
	for xen-devel@lists.xen.org; Thu, 09 Aug 2012 09:10:57 +0000
Received: from [85.158.138.51:33992] by server-12.bemta-3.messagelabs.com id
	AD/36-21301-E9E73205; Thu, 09 Aug 2012 09:10:54 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-174.messagelabs.com!1344503453!27405290!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg2MTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5175 invoked from network); 9 Aug 2012 09:10:54 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-2.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Aug 2012 09:10:54 -0000
X-IronPort-AV: E=Sophos;i="4.77,739,1336348800"; d="scan'208";a="13925215"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	09 Aug 2012 09:10:52 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Thu, 9 Aug 2012
	10:10:53 +0100
Message-ID: <1344503451.32142.85.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Tim Deegan <tim@xen.org>
Date: Thu, 9 Aug 2012 10:10:51 +0100
In-Reply-To: <20120807094243.GB84051@ocelot.phlegethon.org>
References: <1344252872.11339.27.camel@zakaz.uk.xensource.com>
	<1344252900-26148-1-git-send-email-ian.campbell@citrix.com>
	<20120806115516.GA68290@ocelot.phlegethon.org>
	<1344254179.11339.29.camel@zakaz.uk.xensource.com>
	<1344260756.11339.44.camel@zakaz.uk.xensource.com>
	<20120807094243.GB84051@ocelot.phlegethon.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 1/4] arm: disable distributor delivery on
 boot CPU only
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-08-07 at 10:42 +0100, Tim Deegan wrote:
> Acked-by: Tim Deegan <tim@xen.org>

Thanks, pushed to my arm-for-4.3 branch.

Any comments/objections/acks for patches #2-4?

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 09 09:11:42 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 09:11:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzOlh-0007I5-7G; Thu, 09 Aug 2012 09:11:01 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SzOlf-0007Hh-Gf
	for xen-devel@lists.xen.org; Thu, 09 Aug 2012 09:10:59 +0000
Received: from [85.158.138.51:50805] by server-6.bemta-3.messagelabs.com id
	92/43-02321-2AE73205; Thu, 09 Aug 2012 09:10:58 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-174.messagelabs.com!1344503453!27405290!2
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg2MTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5551 invoked from network); 9 Aug 2012 09:10:58 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-2.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Aug 2012 09:10:58 -0000
X-IronPort-AV: E=Sophos;i="4.77,739,1336348800"; d="scan'208";a="13925217"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	09 Aug 2012 09:10:58 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Thu, 9 Aug 2012
	10:10:58 +0100
Message-ID: <1344503456.32142.86.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Date: Thu, 9 Aug 2012 10:10:56 +0100
In-Reply-To: <1343991322.21372.49.camel@zakaz.uk.xensource.com>
References: <1343988996-20803-1-git-send-email-ian.campbell@citrix.com>
	<alpine.DEB.2.02.1208031151120.4645@kaball.uk.xensource.com>
	<1343991322.21372.49.camel@zakaz.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] arm/tools: pass correct p2m array to
 popphysmap in alloc_magic_pages
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-08-03 at 11:55 +0100, Ian Campbell wrote:
> On Fri, 2012-08-03 at 11:51 +0100, Stefano Stabellini wrote:
> > On Fri, 3 Aug 2012, Ian Campbell wrote:
> > > Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> > 
> > 
> > Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> 
> I may end up folding this into your original patch in my branch.

I added it to my arm-for-4.3 branch as is; perhaps I'll collapse it when
I submit for 4.3 proper.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 09 09:11:42 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 09:11:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzOlU-0007H3-9N; Thu, 09 Aug 2012 09:10:48 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SzOlT-0007Gy-8b
	for xen-devel@lists.xensource.com; Thu, 09 Aug 2012 09:10:47 +0000
Received: from [85.158.139.83:45883] by server-12.bemta-5.messagelabs.com id
	BB/8D-16438-69E73205; Thu, 09 Aug 2012 09:10:46 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-182.messagelabs.com!1344503445!26293035!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg2ODI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25480 invoked from network); 9 Aug 2012 09:10:45 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-14.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Aug 2012 09:10:45 -0000
X-IronPort-AV: E=Sophos;i="4.77,739,1336348800"; d="scan'208";a="13925208"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	09 Aug 2012 09:10:45 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Thu, 9 Aug 2012
	10:10:45 +0100
Message-ID: <1344503444.32142.84.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Thu, 9 Aug 2012 10:10:44 +0100
In-Reply-To: <alpine.DEB.2.02.1208071855270.4645@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208071855270.4645@kaball.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [PATCH] xen/arm: protect LR registers and lr_mask
 changes with spin_lock_irq
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-08-08 at 17:14 +0100, Stefano Stabellini wrote:
> GICH_LR registers and lr_mask need to be kept in sync: make sure that
> their modifications are protected by spin_lock_irq(&gic.lock).
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

Applied to my arm-for-4.3 branch.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 09 09:17:18 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 09:17:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzOrM-0007hR-CR; Thu, 09 Aug 2012 09:16:52 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SzOrL-0007hJ-5F
	for xen-devel@lists.xensource.com; Thu, 09 Aug 2012 09:16:51 +0000
Received: from [85.158.143.35:63039] by server-2.bemta-4.messagelabs.com id
	C4/39-19021-20083205; Thu, 09 Aug 2012 09:16:50 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1344503806!11282537!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg2MTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20378 invoked from network); 9 Aug 2012 09:16:46 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Aug 2012 09:16:46 -0000
X-IronPort-AV: E=Sophos;i="4.77,739,1336348800"; d="scan'208";a="13925400"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	09 Aug 2012 09:16:46 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Thu, 9 Aug 2012
	10:16:46 +0100
Message-ID: <1344503804.32142.88.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Thu, 9 Aug 2012 10:16:44 +0100
In-Reply-To: <1344262325-26598-2-git-send-email-stefano.stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
	<1344262325-26598-2-git-send-email-stefano.stabellini@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>, "Tim
	Deegan \(3P\)" <Tim.Deegan@citrix.com>
Subject: Re: [Xen-devel] [PATCH 2/5] xen/arm: introduce __lshrdi3 and
	__aeabi_llsr
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2012-08-06 at 15:12 +0100, Stefano Stabellini wrote:
> Taken from Linux.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

Any idea what lshrdi means?

Anyway, given that this is presumably required by code that gcc
generates, and that it comes directly from Linux:

Acked-by: Ian Campbell <ian.campbell@citrix.com>

and pushed to my arm-for-4.3 branch.

> ---
>  xen/arch/arm/lib/Makefile  |    2 +-
>  xen/arch/arm/lib/lshrdi3.S |   54 ++++++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 55 insertions(+), 1 deletions(-)
>  create mode 100644 xen/arch/arm/lib/lshrdi3.S
> 
> diff --git a/xen/arch/arm/lib/Makefile b/xen/arch/arm/lib/Makefile
> index cbbed68..4cf41f4 100644
> --- a/xen/arch/arm/lib/Makefile
> +++ b/xen/arch/arm/lib/Makefile
> @@ -2,4 +2,4 @@ obj-y += memcpy.o memmove.o memset.o memzero.o
>  obj-y += findbit.o setbit.o
>  obj-y += setbit.o clearbit.o changebit.o
>  obj-y += testsetbit.o testclearbit.o testchangebit.o
> -obj-y += lib1funcs.o div64.o
> +obj-y += lib1funcs.o lshrdi3.o div64.o
> diff --git a/xen/arch/arm/lib/lshrdi3.S b/xen/arch/arm/lib/lshrdi3.S
> new file mode 100644
> index 0000000..3e8887e
> --- /dev/null
> +++ b/xen/arch/arm/lib/lshrdi3.S
> @@ -0,0 +1,54 @@
> +/* Copyright 1995, 1996, 1998, 1999, 2000, 2003, 2004, 2005
> +   Free Software Foundation, Inc.
> +
> +This file is free software; you can redistribute it and/or modify it
> +under the terms of the GNU General Public License as published by the
> +Free Software Foundation; either version 2, or (at your option) any
> +later version.
> +
> +In addition to the permissions in the GNU General Public License, the
> +Free Software Foundation gives you unlimited permission to link the
> +compiled version of this file into combinations with other programs,
> +and to distribute those combinations without any restriction coming
> +from the use of this file.  (The General Public License restrictions
> +do apply in other respects; for example, they cover modification of
> +the file, and distribution when not linked into a combine
> +executable.)
> +
> +This file is distributed in the hope that it will be useful, but
> +WITHOUT ANY WARRANTY; without even the implied warranty of
> +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> +General Public License for more details.
> +
> +You should have received a copy of the GNU General Public License
> +along with this program; see the file COPYING.  If not, write to
> +the Free Software Foundation, 51 Franklin Street, Fifth Floor,
> +Boston, MA 02110-1301, USA.  */
> +
> +
> +#include <xen/config.h>
> +#include "assembler.h"
> +
> +#ifdef __ARMEB__
> +#define al r1
> +#define ah r0
> +#else
> +#define al r0
> +#define ah r1
> +#endif
> +
> +ENTRY(__lshrdi3)
> +ENTRY(__aeabi_llsr)
> +
> +	subs	r3, r2, #32
> +	rsb	ip, r2, #32
> +	movmi	al, al, lsr r2
> +	movpl	al, ah, lsr r3
> + ARM(	orrmi	al, al, ah, lsl ip	)
> + THUMB(	lslmi	r3, ah, ip		)
> + THUMB(	orrmi	al, al, r3		)
> +	mov	ah, ah, lsr r2
> +	mov	pc, lr
> +
> +ENDPROC(__lshrdi3)
> +ENDPROC(__aeabi_llsr)



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 09 09:42:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 09:42:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzPFz-0007ww-H0; Thu, 09 Aug 2012 09:42:19 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zhenzhong.duan@oracle.com>) id 1SzPFy-0007wr-7B
	for xen-devel@lists.xen.org; Thu, 09 Aug 2012 09:42:18 +0000
X-Env-Sender: zhenzhong.duan@oracle.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1344505327!12302508!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDY4ODYxMg==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12896 invoked from network); 9 Aug 2012 09:42:09 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-12.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 9 Aug 2012 09:42:09 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q799g5wZ026053
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 9 Aug 2012 09:42:06 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q799g4B6025120
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 9 Aug 2012 09:42:05 GMT
Received: from abhmt114.oracle.com (abhmt114.oracle.com [141.146.116.66])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q799g4VQ016811; Thu, 9 Aug 2012 04:42:04 -0500
Received: from [10.191.1.148] (/10.191.1.148)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 09 Aug 2012 02:42:04 -0700
Message-ID: <5023860E.7080908@oracle.com>
Date: Thu, 09 Aug 2012 17:42:38 +0800
From: "zhenzhong.duan" <zhenzhong.duan@oracle.com>
Organization: oracle
User-Agent: Mozilla/5.0 (Windows NT 5.1;
	rv:12.0) Gecko/20120428 Thunderbird/12.0.1
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <5020C24A.3060604@oracle.com>
	<5020F003020000780009322C@nat28.tlf.novell.com>
	<502235E8.9040309@oracle.com>
	<50229B840200007800093A73@nat28.tlf.novell.com>
In-Reply-To: <50229B840200007800093A73@nat28.tlf.novell.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Feng Jin <joe.jin@oracle.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] kernel bootup slow issue on ovm3.1.1
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: zhenzhong.duan@oracle.com
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset="utf-8"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 2012-08-08 23:01, Jan Beulich wrote:
>>>> On 08.08.12 at 11:48, "zhenzhong.duan"<zhenzhong.duan@oracle.com>  wrote:
>> On 2012-08-07 16:37, Jan Beulich wrote:
>>>>>> On 07.08.12 at 09:22, "zhenzhong.duan"<zhenzhong.duan@oracle.com>   wrote:
>>> Next, if you already spotted where the spinning occurs, you
>>> should also be able to tell what's going on at the other side, i.e.
>>> why the event that is being waited for isn't occurring for this
>>> long a time. Since there's a number of open coded spin loops
>>> here, knowing exactly which one each CPU is sitting in (and
>>> which ones might not be in any) is the fundamental information
>>> needed.
>>>
>>>   From what you're telling us so far, I'd rather suspect a kernel
>>> problem, not a hypervisor one here.
>> Per my finding, most of vcpus spin at set_atomicity_lock.
> Then you need to determine what the current owner of the
> lock is doing.
I added printk.time=1 to the kernel cmdline, but dmesg doesn't help much:

[    1.978706] smpboot: Total of 24 processors activated (140449.34 BogoMIPS)
(blocks ~30 mins here)
[    1.988859] devtmpfs: initialized
>
>> Some spin at stop_machine after finish their job.
> And here you'd need to find out what they're waiting for,
> and what those CPUs are doing.
They are waiting for the vcpu calling generic_set_all, and those spin at
set_atomicity_lock. In fact, all are waiting for generic_set_all.
>
>> Only one vcpu is calling generic_set_all.
>> I'm not sure if the vcpu calling generic_set_all don't have higher
>> priority and maybe preempt by other vcpus and dom0 frequently.
>> This waste much time.
> There's not that much being done in generic_set_all(), so the
> code should finish reasonably quickly. Are you perhaps having
> more vCPU-s in the guest than pCPU-s they can run on?
The system is an Exalogic node with 24 cores + 100G mem (2 sockets, 6
cores per socket, 2 HT threads per core), booting a PVHVM guest with 12
(or 24) vcpus + 90 GB + a PCI passthrough device.
>   Does
> your hardware support Pause-Loop-Exiting (or the AMD
> equivalent, don't recall their term right now)?
I have no access to the serial line; could I get the info by a command?
/proc/cpuinfo shows:
cpu family      : 6
model           : 44
model name      : Intel(R) Xeon(R) CPU           X5670  @ 2.93GHz
>
> Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 09 09:42:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 09:42:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzPGH-0007xf-TF; Thu, 09 Aug 2012 09:42:37 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <aware.why@gmail.com>) id 1SzPGG-0007xV-Ev
	for xen-devel@lists.xensource.com; Thu, 09 Aug 2012 09:42:36 +0000
Received: from [85.158.143.35:55164] by server-1.bemta-4.messagelabs.com id
	F5/C4-20198-B0683205; Thu, 09 Aug 2012 09:42:35 +0000
X-Env-Sender: aware.why@gmail.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1344505354!13865551!1
X-Originating-IP: [209.85.215.171]
X-SpamReason: No, hits=0.9 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_50_60, HTML_MESSAGE, ML_RADAR_SPEW_LINKS_14, RCVD_BY_IP,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15761 invoked from network); 9 Aug 2012 09:42:34 -0000
Received: from mail-ey0-f171.google.com (HELO mail-ey0-f171.google.com)
	(209.85.215.171)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Aug 2012 09:42:34 -0000
Received: by eaah11 with SMTP id h11so75588eaa.30
	for <xen-devel@lists.xensource.com>;
	Thu, 09 Aug 2012 02:42:34 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=CrR339w0CiqGoSjOCkEdZTMcA14Dam435FlNnINJvUk=;
	b=iR6eyU173l4OJpB1F84Fn4thb1ftehIJjdphooLLDdF/um9mqnMn6k0lzgp20TBzCu
	NXldhToK3Rm6orKEzsOrxQtkIY5Yi3DyAgckRXq1i/GuFkA5O6mzEPZprgyLVjzmzJDk
	FpBy+1e+nRD70NglMfBUTDJ54FUmB99/w/80JGhuh5jRzcMVgaqSRzcdlR3pm10A5tZm
	UoSxAgMHg6k0cmSNtc2WaH7hohC+vx2KsBhiPvOEdxxQibRwnNznALYpOYT+JCh7DmmH
	OMim0caMX9L04BCj3F+bovEEfHrk7ySbIE2ap98q+KZ0+ccLsGRt2dM/QFXvRxI0mbWF
	8pQQ==
MIME-Version: 1.0
Received: by 10.14.203.73 with SMTP id e49mr4130012eeo.27.1344505354377; Thu,
	09 Aug 2012 02:42:34 -0700 (PDT)
Received: by 10.14.218.200 with HTTP; Thu, 9 Aug 2012 02:42:34 -0700 (PDT)
In-Reply-To: <1344500485.32142.70.camel@zakaz.uk.xensource.com>
References: <CA+ePHTAGBYkyO1kUH2Cuf_fRRB4ELZeUO5Ot_d79c30N1=+VUQ@mail.gmail.com>
	<CA+ePHTAQHidwtaATLsH6kVz89sTs6vF_=k1OQ462JL4UBVZsnA@mail.gmail.com>
	<1344500485.32142.70.camel@zakaz.uk.xensource.com>
Date: Thu, 9 Aug 2012 17:42:34 +0800
Message-ID: <CA+ePHTAcXAtdJhqih7gihyDfZBthmWe0kMJBN=szOMbDgA8YWA@mail.gmail.com>
From: =?UTF-8?B?6ams56OK?= <aware.why@gmail.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [help] problem with `tools/xenstore/xs.c:
	xs_talkv()`
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1081108737946152375=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============1081108737946152375==
Content-Type: multipart/alternative; boundary=047d7b343f060bd9ef04c6d20c52

--047d7b343f060bd9ef04c6d20c52
Content-Type: text/plain; charset=GB2312
Content-Transfer-Encoding: quoted-printable

On Thu, Aug 9, 2012 at 4:21 PM, Ian Campbell <Ian.Campbell@citrix.com> wrote:

> On Thu, 2012-08-09 at 08:07 +0100, 马磊 wrote:
> >
> >
> > On Wed, Aug 8, 2012 at 4:32 PM, 马磊 <aware.why@gmail.com> wrote:
> >         Hi all,
> >
> >             In xen-4.1.2 src/tools/xenstore/xs.c: xs_talkv function,
> >          there are several lines as follows:
> >
> >          437        mutex_lock(&h->request_mutex);
> >          438
> >          439        if (!xs_write_all(h->fd, &msg, sizeof(msg)))
> >          440                goto fail;
> >          441
> >          442        for (i = 0; i < num_vecs; i++)
> >          443                if (!xs_write_all(h->fd, iovec[i].iov_base, iovec[i].iov_len))
> >          444                        goto fail;
> >          445
> >          446        ret = read_reply(h, &msg.type, len);
> >          447        if (!ret)
> >          448                goto fail;
> >          449
> >          450        mutex_unlock(&h->request_mutex);
> >
> >         The above seems to tell me that after writing to h->fd,
> >         read_reply invokes read_message, which immediately reads
> >         from h->fd?
> >         What is the meaning of this?
> >
> >         Thanks
> >
> > If h->fd refers to a socket descriptor, it's understandable that
> > writing and then immediately reading works, in which case the fd is
> > returned by get_handle(xs_daemon_socket(), flags).
> >
> > But when fd is retrieved by get_handle(xs_domain_dev(), flags), it
> > means writing to a file and then reading from the same file
> > immediately. Does it have something to do with the internal
> > communication protocol?!
>
> Yes, the xenstore protocol involves both writing messages and reading
> replies, but that seems trivially obvious so I'm afraid I really have no
> idea what your question is nor what is confusing you. Perhaps describing
> in more detail what you are trying to achieve will help?
>
> Reading http://wiki.xen.org/wiki/Asking_Xen_Devel_Questions might also
> help you consider what it is you are asking.
>
>
> >
> >
> > Thanks for replying.
> >
> >
> >
> >
> >
> >
>
>
Sorry about that, pehaps it results from my lacking of rudimentary
knowledge.
IIRC, when using socket, there are two buffers of which one is for sending
and another for receiving, so I  said 'If hd->fd refers to a socket
descriptor, it's understandable that
> writing and then imediatelly reading,'

But  when fd is retrived by get_handle(xs_domain_dev(), flags), it seems
that sender and receiver both writing to the same file, I have no idea
about the file /proc/xen/xenbus for communicating between domains?


--047d7b343f060bd9ef04c6d20c52--


--===============1081108737946152375==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1081108737946152375==--


From xen-devel-bounces@lists.xen.org Thu Aug 09 09:43:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 09:43:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzPHB-00082b-B1; Thu, 09 Aug 2012 09:43:33 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SzPHA-00082P-0V
	for xen-devel@lists.xensource.com; Thu, 09 Aug 2012 09:43:32 +0000
Received: from [85.158.143.99:17330] by server-1.bemta-4.messagelabs.com id
	64/86-20198-34683205; Thu, 09 Aug 2012 09:43:31 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-7.tower-216.messagelabs.com!1344505410!23960864!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg2ODI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29200 invoked from network); 9 Aug 2012 09:43:30 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-7.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Aug 2012 09:43:30 -0000
X-IronPort-AV: E=Sophos;i="4.77,739,1336348800"; d="scan'208";a="13926058"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	09 Aug 2012 09:43:30 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 9 Aug 2012 10:43:30 +0100
Date: Thu, 9 Aug 2012 10:43:16 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1344503804.32142.88.camel@zakaz.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1208091041380.21096@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
	<1344262325-26598-2-git-send-email-stefano.stabellini@eu.citrix.com>
	<1344503804.32142.88.camel@zakaz.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"Tim Deegan \(3P\)" <Tim.Deegan@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 2/5] xen/arm: introduce __lshrdi3 and
	__aeabi_llsr
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 9 Aug 2012, Ian Campbell wrote:
> On Mon, 2012-08-06 at 15:12 +0100, Stefano Stabellini wrote:
> > Taken from Linux.
> > 
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> 
> Any idea what lshrdi means?

I am not sure TBH, but it is one of the needed libgcc functions.


> Anyway, given that this is presumably required by code which gcc
> generates and that it comes direct from Linux:
> 
> Acked-by: Ian Campbell <ian.campbell@citrix.com>
> 
> and pushed to my arm-for-4.3 branch.
> 
> > ---
> >  xen/arch/arm/lib/Makefile  |    2 +-
> >  xen/arch/arm/lib/lshrdi3.S |   54 ++++++++++++++++++++++++++++++++++++++++++++
> >  2 files changed, 55 insertions(+), 1 deletions(-)
> >  create mode 100644 xen/arch/arm/lib/lshrdi3.S
> > 
> > diff --git a/xen/arch/arm/lib/Makefile b/xen/arch/arm/lib/Makefile
> > index cbbed68..4cf41f4 100644
> > --- a/xen/arch/arm/lib/Makefile
> > +++ b/xen/arch/arm/lib/Makefile
> > @@ -2,4 +2,4 @@ obj-y += memcpy.o memmove.o memset.o memzero.o
> >  obj-y += findbit.o setbit.o
> >  obj-y += setbit.o clearbit.o changebit.o
> >  obj-y += testsetbit.o testclearbit.o testchangebit.o
> > -obj-y += lib1funcs.o div64.o
> > +obj-y += lib1funcs.o lshrdi3.o div64.o
> > diff --git a/xen/arch/arm/lib/lshrdi3.S b/xen/arch/arm/lib/lshrdi3.S
> > new file mode 100644
> > index 0000000..3e8887e
> > --- /dev/null
> > +++ b/xen/arch/arm/lib/lshrdi3.S
> > @@ -0,0 +1,54 @@
> > +/* Copyright 1995, 1996, 1998, 1999, 2000, 2003, 2004, 2005
> > +   Free Software Foundation, Inc.
> > +
> > +This file is free software; you can redistribute it and/or modify it
> > +under the terms of the GNU General Public License as published by the
> > +Free Software Foundation; either version 2, or (at your option) any
> > +later version.
> > +
> > +In addition to the permissions in the GNU General Public License, the
> > +Free Software Foundation gives you unlimited permission to link the
> > +compiled version of this file into combinations with other programs,
> > +and to distribute those combinations without any restriction coming
> > +from the use of this file.  (The General Public License restrictions
> > +do apply in other respects; for example, they cover modification of
> > +the file, and distribution when not linked into a combine
> > +executable.)
> > +
> > +This file is distributed in the hope that it will be useful, but
> > +WITHOUT ANY WARRANTY; without even the implied warranty of
> > +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> > +General Public License for more details.
> > +
> > +You should have received a copy of the GNU General Public License
> > +along with this program; see the file COPYING.  If not, write to
> > +the Free Software Foundation, 51 Franklin Street, Fifth Floor,
> > +Boston, MA 02110-1301, USA.  */
> > +
> > +
> > +#include <xen/config.h>
> > +#include "assembler.h"
> > +
> > +#ifdef __ARMEB__
> > +#define al r1
> > +#define ah r0
> > +#else
> > +#define al r0
> > +#define ah r1
> > +#endif
> > +
> > +ENTRY(__lshrdi3)
> > +ENTRY(__aeabi_llsr)
> > +
> > +	subs	r3, r2, #32
> > +	rsb	ip, r2, #32
> > +	movmi	al, al, lsr r2
> > +	movpl	al, ah, lsr r3
> > + ARM(	orrmi	al, al, ah, lsl ip	)
> > + THUMB(	lslmi	r3, ah, ip		)
> > + THUMB(	orrmi	al, al, r3		)
> > +	mov	ah, ah, lsr r2
> > +	mov	pc, lr
> > +
> > +ENDPROC(__lshrdi3)
> > +ENDPROC(__aeabi_llsr)
> 
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Thu Aug 09 09:50:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 09:50:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzPNH-0008Oc-Am; Thu, 09 Aug 2012 09:49:51 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SzPNF-0008OX-Eh
	for xen-devel@lists.xensource.com; Thu, 09 Aug 2012 09:49:49 +0000
Received: from [85.158.139.83:60900] by server-10.bemta-5.messagelabs.com id
	BB/19-24472-CB783205; Thu, 09 Aug 2012 09:49:48 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-182.messagelabs.com!1344505788!31040665!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg2ODI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31813 invoked from network); 9 Aug 2012 09:49:48 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-12.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Aug 2012 09:49:48 -0000
X-IronPort-AV: E=Sophos;i="4.77,739,1336348800"; d="scan'208";a="13926215"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	09 Aug 2012 09:49:48 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Thu, 9 Aug 2012
	10:49:48 +0100
Message-ID: <1344505786.32142.90.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: =?UTF-8?Q?=E9=A9=AC=E7=A3=8A?= <aware.why@gmail.com>
Date: Thu, 9 Aug 2012 10:49:46 +0100
In-Reply-To: <CA+ePHTAcXAtdJhqih7gihyDfZBthmWe0kMJBN=szOMbDgA8YWA@mail.gmail.com>
References: <CA+ePHTAGBYkyO1kUH2Cuf_fRRB4ELZeUO5Ot_d79c30N1=+VUQ@mail.gmail.com>
	<CA+ePHTAQHidwtaATLsH6kVz89sTs6vF_=k1OQ462JL4UBVZsnA@mail.gmail.com>
	<1344500485.32142.70.camel@zakaz.uk.xensource.com>
	<CA+ePHTAcXAtdJhqih7gihyDfZBthmWe0kMJBN=szOMbDgA8YWA@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [help] problem with `tools/xenstore/xs.c:
 xs_talkv()`
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-09 at 10:42 +0100, 马磊 wrote:
> On Thu, Aug 9, 2012 at 4:21 PM, Ian Campbell <Ian.Campbell@citrix.com>
> wrote:
>         On Thu, 2012-08-09 at 08:07 +0100, 马磊 wrote:
>         > On Wed, Aug 8, 2012 at 4:32 PM, 马磊 <aware.why@gmail.com>
>         wrote:
>         >         Hi all,
>         >
>         >             In xen-4.1.2 src/tools/xenstore/xs.c: xs_talkv
>         function,
>         >         there are several lines as follow:
>         >
>         >          437        mutex_lock(&h->request_mutex);
>         >          438
>         >          439        if (!xs_write_all(h->fd, &msg,
>         sizeof(msg)))
>         >          440                goto fail;
>         >          441
>         >          442        for (i = 0; i < num_vecs; i++)
>         >          443                if (!xs_write_all(h->fd,
>         >         iovec[i].iov_base, iovec[i].iov_len))
>         >          444                        goto fail;
>         >          445
>         >          446        ret = read_reply(h, &msg.type, len);
>         >          447        if (!ret)
>         >          448                goto fail;
>         >          449
>         >          450        mutex_unlock(&h->request_mutex);
>         >
>         >         The above seems to tell me that after writing to
>         h->fd, the
>         >         read_reply invoking read_message which immediately
>         reads from
>         >         hd->fd?
>         >         What did it mean by this?
>         >
>         >         Thanks
>         >
>         > If hd->fd refers to a socket descriptor, it's understandable
>         that
>         > writing and then immediately reading, in which case the fd
>         is returned
>         > by get_handle(xs_daemon_socket(), flags).
>         >
>         > But when fd is retrieved by get_handle(xs_domain_dev(),
>         flags), it
>         > means to write to a file and then read from the same file
>         immediately.
>         > Does it have something to do with the internal communication
>         > protocol?!
>
>         Yes, the xenstore protocol involves both writing messages and
>         reading
>         replies, but that seems trivially obvious so I'm afraid I
>         really have no
>         idea what your question is nor what is confusing you. Perhaps
>         describing
>         in more detail what you are trying to achieve will help?
>
>         Reading http://wiki.xen.org/wiki/Asking_Xen_Devel_Questions
>         might also
>         help you consider what it is you are asking.
>
>         > Thanks for replying.
>
> Sorry about that, perhaps it results from my lack of rudimentary
> knowledge.
> IIRC, when using a socket, there are two buffers of which one is for
> sending and another for receiving, so I said 'If hd->fd refers to a
> socket descriptor, it's understandable that
> > writing and then immediately reading,'
>
> But when fd is retrieved by get_handle(xs_domain_dev(), flags), it
> seems that sender and receiver both write to the same file; I have
> no idea about the file /proc/xen/xenbus for communicating between
> domains?

/proc/xen/xenbus is a virtual device which arbitrates access to the
xenbus ring, which in turn is used to communicate with the xenstore
daemon. You could think of an fd onto this device as a socket connected
to the xenstore daemon.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
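
As background for the xs_talkv fragment quoted above: every xenstore request and reply on that fd begins with a fixed wire header (the `msg` written at line 439), followed by `len` bytes of payload carried in the iovecs. A sketch of the header, based on Xen's public xs_wire.h interface (reproduced from memory, so verify against your Xen tree):

```c
#include <stdint.h>

/* Xenstore wire header, as in xen/include/public/io/xs_wire.h
   (reproduced from memory -- treat field details as an assumption).
   Every request and reply starts with this fixed-size header,
   followed by `len` bytes of payload. */
struct xsd_sockmsg {
    uint32_t type;   /* e.g. XS_READ, XS_WRITE, ... */
    uint32_t req_id; /* echoed in the reply so callers can match it */
    uint32_t tx_id;  /* transaction id, 0 if none */
    uint32_t len;    /* length of the payload that follows */
};
```

This is why read_reply can run straight after the writes: it first reads one fixed-size header, then exactly `len` more bytes.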

From xen-devel-bounces@lists.xen.org Thu Aug 09 09:50:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 09:50:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzPO3-0008R3-P3; Thu, 09 Aug 2012 09:50:39 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <aware.why@gmail.com>) id 1SzPO1-0008Qq-Py
	for xen-devel@lists.xensource.com; Thu, 09 Aug 2012 09:50:38 +0000
Received: from [85.158.139.83:10445] by server-12.bemta-5.messagelabs.com id
	AD/F6-16438-CE783205; Thu, 09 Aug 2012 09:50:36 +0000
X-Env-Sender: aware.why@gmail.com
X-Msg-Ref: server-16.tower-182.messagelabs.com!1344505834!23686500!1
X-Originating-IP: [74.125.83.43]
X-SpamReason: No, hits=0.9 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_50_60, HTML_MESSAGE, ML_RADAR_SPEW_LINKS_14, RCVD_BY_IP,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8858 invoked from network); 9 Aug 2012 09:50:34 -0000
Received: from mail-ee0-f43.google.com (HELO mail-ee0-f43.google.com)
	(74.125.83.43)
	by server-16.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Aug 2012 09:50:34 -0000
Received: by eekd4 with SMTP id d4so79293eek.30
	for <xen-devel@lists.xensource.com>;
	Thu, 09 Aug 2012 02:50:33 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=jRAuBn9wIPB4eteUs6Wn2WmTAL+RZWviFmwki6P1Haw=;
	b=bKehRxfjupwTyssXJx6+GxwMejIcH7evyuXdLzX0ePwvruSHCQTjJB/vk8ufsGkZW7
	MnDDPykecxVEoXOYu1nbGfZWYgUh0IYfl4Sx6FaXlSWVq3RYoscyjTeN5hs4kWXGGbRU
	gKyPAiUcFSYIu53+VD6gjAMioONmRi+usPKcqgNtNQyy64CpOYG+YZvo1XIOrn2HkVTE
	p1wOhi0yPCB9Cx4yeiFSEFzc3sSMpAQHmfvW+x4DsHUMhxASYcSUazMd/QGjBJ7hnnlT
	nh/XKV34HDC13ySFgG49Uus5cmWSNILjOKdKjt3oaeKIky9cY73QKY+orxuB8M9E74vN
	qcPA==
MIME-Version: 1.0
Received: by 10.14.223.9 with SMTP id u9mr26589403eep.10.1344505833791; Thu,
	09 Aug 2012 02:50:33 -0700 (PDT)
Received: by 10.14.218.200 with HTTP; Thu, 9 Aug 2012 02:50:33 -0700 (PDT)
In-Reply-To: <1344500485.32142.70.camel@zakaz.uk.xensource.com>
References: <CA+ePHTAGBYkyO1kUH2Cuf_fRRB4ELZeUO5Ot_d79c30N1=+VUQ@mail.gmail.com>
	<CA+ePHTAQHidwtaATLsH6kVz89sTs6vF_=k1OQ462JL4UBVZsnA@mail.gmail.com>
	<1344500485.32142.70.camel@zakaz.uk.xensource.com>
Date: Thu, 9 Aug 2012 17:50:33 +0800
Message-ID: <CA+ePHTCW-NL0S10J3Ru9xhuO+i2-UUHFnjD9H0==DrMWNcW58g@mail.gmail.com>
From: =?UTF-8?B?6ams56OK?= <aware.why@gmail.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [help] problem with `tools/xenstore/xs.c:
	xs_talkv()`
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8891045425445543588=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8891045425445543588==
Content-Type: multipart/alternative; boundary=047d7b670a9d9f23ab04c6d2283d

--047d7b670a9d9f23ab04c6d2283d
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On Thu, Aug 9, 2012 at 4:21 PM, Ian Campbell <Ian.Campbell@citrix.com> wrote:

> On Thu, 2012-08-09 at 08:07 +0100, 马磊 wrote:
> >
> > On Wed, Aug 8, 2012 at 4:32 PM, 马磊 <aware.why@gmail.com> wrote:
> >         Hi all,
> >
> >             In xen-4.1.2 src/tools/xenstore/xs.c: xs_talkv function,
> >          there are several lines as follow:
> >
> >          437        mutex_lock(&h->request_mutex);
> >          438
> >          439        if (!xs_write_all(h->fd, &msg, sizeof(msg)))
> >          440                goto fail;
> >          441
> >          442        for (i = 0; i < num_vecs; i++)
> >          443                if (!xs_write_all(h->fd,
> >         iovec[i].iov_base, iovec[i].iov_len))
> >          444                        goto fail;
> >          445
> >          446        ret = read_reply(h, &msg.type, len);
> >          447        if (!ret)
> >          448                goto fail;
> >          449
> >          450        mutex_unlock(&h->request_mutex);
> >
> >         The above seems to tell me that after writing to h->fd, the
> >         read_reply invoking read_message which immediately reads from
> >         hd->fd?
> >         What did it mean by this?
> >
> >         Thanks
> >
> > If hd->fd refers to a socket descriptor, it's understandable that
> > writing and then immediately reading, in which case the fd is returned
> > by get_handle(xs_daemon_socket(), flags).
> >
> > But when fd is retrieved by get_handle(xs_domain_dev(), flags), it
> > means to write to a file and then read from the same file immediately.
> > Does it have something to do with the internal communication
> > protocol?!
>
> Yes, the xenstore protocol involves both writing messages and reading
> replies, but that seems trivially obvious so I'm afraid I really have no
> idea what your question is nor what is confusing you. Perhaps describing
> in more detail what you are trying to achieve will help?
>
> Reading http://wiki.xen.org/wiki/Asking_Xen_Devel_Questions might also
> help you consider what it is you are asking.
>
> > Thanks for replying.

The final read and write operations are achieved by:

read(fd, data, len);
write(fd, data, len);

Maybe my confusion lies in this point: what's the distinction between
the read and write operations on a socket file, the /proc/xen/xenbus
device, and a regular file?
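
Ian's point can be demonstrated in isolation: a single connected descriptor carries both directions, so write-then-immediately-read is the normal request/reply idiom, and the read consumes the peer's reply rather than the bytes just written. A minimal self-contained sketch (a socketpair stands in for the connection to xenstored; this is my illustration, not Xen code):

```c
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Returns 1 if the write-then-read request/reply exchange behaves as
   described, 0 otherwise. sv[0] plays the client (like the fd from
   get_handle()), sv[1] plays the xenstore daemon. */
static int demo_request_reply(void)
{
    int sv[2];
    char buf[8];

    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) != 0)
        return 0;

    /* client: write the request, then immediately read on the SAME fd */
    if (write(sv[0], "req", 3) != 3) return 0;

    /* peer (daemon stand-in): consume the request, send a reply */
    if (read(sv[1], buf, sizeof(buf)) != 3) return 0;
    if (write(sv[1], "reply", 5) != 5) return 0;

    /* the client's read returns the peer's reply, not its own bytes */
    if (read(sv[0], buf, sizeof(buf)) != 5) return 0;
    if (memcmp(buf, "reply", 5) != 0) return 0;

    close(sv[0]);
    close(sv[1]);
    return 1;
}
```

An fd onto /proc/xen/xenbus presents the same two-direction semantics to the client, which is why xs.c treats both transports identically; a regular file is the odd one out, since there a read after a write would just re-read the file's own contents.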

--047d7b670a9d9f23ab04c6d2283d--


--===============8891045425445543588==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8891045425445543588==--


From xen-devel-bounces@lists.xen.org Thu Aug 09 09:50:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 09:50:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzPO3-0008R3-P3; Thu, 09 Aug 2012 09:50:39 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <aware.why@gmail.com>) id 1SzPO1-0008Qq-Py
	for xen-devel@lists.xensource.com; Thu, 09 Aug 2012 09:50:38 +0000
Received: from [85.158.139.83:10445] by server-12.bemta-5.messagelabs.com id
	AD/F6-16438-CE783205; Thu, 09 Aug 2012 09:50:36 +0000
X-Env-Sender: aware.why@gmail.com
X-Msg-Ref: server-16.tower-182.messagelabs.com!1344505834!23686500!1
X-Originating-IP: [74.125.83.43]
X-SpamReason: No, hits=0.9 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_50_60, HTML_MESSAGE, ML_RADAR_SPEW_LINKS_14, RCVD_BY_IP,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8858 invoked from network); 9 Aug 2012 09:50:34 -0000
Received: from mail-ee0-f43.google.com (HELO mail-ee0-f43.google.com)
	(74.125.83.43)
	by server-16.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Aug 2012 09:50:34 -0000
Received: by eekd4 with SMTP id d4so79293eek.30
	for <xen-devel@lists.xensource.com>;
	Thu, 09 Aug 2012 02:50:33 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=jRAuBn9wIPB4eteUs6Wn2WmTAL+RZWviFmwki6P1Haw=;
	b=bKehRxfjupwTyssXJx6+GxwMejIcH7evyuXdLzX0ePwvruSHCQTjJB/vk8ufsGkZW7
	MnDDPykecxVEoXOYu1nbGfZWYgUh0IYfl4Sx6FaXlSWVq3RYoscyjTeN5hs4kWXGGbRU
	gKyPAiUcFSYIu53+VD6gjAMioONmRi+usPKcqgNtNQyy64CpOYG+YZvo1XIOrn2HkVTE
	p1wOhi0yPCB9Cx4yeiFSEFzc3sSMpAQHmfvW+x4DsHUMhxASYcSUazMd/QGjBJ7hnnlT
	nh/XKV34HDC13ySFgG49Uus5cmWSNILjOKdKjt3oaeKIky9cY73QKY+orxuB8M9E74vN
	qcPA==
MIME-Version: 1.0
Received: by 10.14.223.9 with SMTP id u9mr26589403eep.10.1344505833791; Thu,
	09 Aug 2012 02:50:33 -0700 (PDT)
Received: by 10.14.218.200 with HTTP; Thu, 9 Aug 2012 02:50:33 -0700 (PDT)
In-Reply-To: <1344500485.32142.70.camel@zakaz.uk.xensource.com>
References: <CA+ePHTAGBYkyO1kUH2Cuf_fRRB4ELZeUO5Ot_d79c30N1=+VUQ@mail.gmail.com>
	<CA+ePHTAQHidwtaATLsH6kVz89sTs6vF_=k1OQ462JL4UBVZsnA@mail.gmail.com>
	<1344500485.32142.70.camel@zakaz.uk.xensource.com>
Date: Thu, 9 Aug 2012 17:50:33 +0800
Message-ID: <CA+ePHTCW-NL0S10J3Ru9xhuO+i2-UUHFnjD9H0==DrMWNcW58g@mail.gmail.com>
From: =?UTF-8?B?6ams56OK?= <aware.why@gmail.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [help] problem with `tools/xenstore/xs.c:
	xs_talkv()`
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8891045425445543588=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8891045425445543588==
Content-Type: multipart/alternative; boundary=047d7b670a9d9f23ab04c6d2283d

--047d7b670a9d9f23ab04c6d2283d
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On Thu, Aug 9, 2012 at 4:21 PM, Ian Campbell <Ian.Campbell@citrix.com> wrote:

> On Thu, 2012-08-09 at 08:07 +0100, 马磊 wrote:
> >
> > On Wed, Aug 8, 2012 at 4:32 PM, 马磊 <aware.why@gmail.com> wrote:
> >         Hi all,
> >
> >             In the xen-4.1.2 source, tools/xenstore/xs.c contains the
> >         following lines in the xs_talkv() function:
> >
> >          437        mutex_lock(&h->request_mutex);
> >          438
> >          439        if (!xs_write_all(h->fd, &msg, sizeof(msg)))
> >          440                goto fail;
> >          441
> >          442        for (i = 0; i < num_vecs; i++)
> >          443                if (!xs_write_all(h->fd,
> >                                 iovec[i].iov_base, iovec[i].iov_len))
> >          444                        goto fail;
> >          445
> >          446        ret = read_reply(h, &msg.type, len);
> >          447        if (!ret)
> >          448                goto fail;
> >          449
> >          450        mutex_unlock(&h->request_mutex);
> >
> >         This seems to say that after writing to h->fd, read_reply()
> >         invokes read_message(), which immediately reads from h->fd.
> >         What does this mean?
> >
> >         Thanks
> >
> > If h->fd refers to a socket descriptor, writing and then immediately
> > reading is understandable; in that case the fd is returned by
> > get_handle(xs_daemon_socket(), flags).
> >
> > But when the fd is retrieved by get_handle(xs_domain_dev(), flags),
> > this means writing to a file and then immediately reading from the
> > same file. Does that have something to do with the internal
> > communication protocol?
>
> Yes, the xenstore protocol involves both writing messages and reading
> replies, but that seems trivially obvious so I'm afraid I really have no
> idea what your question is nor what is confusing you. Perhaps describing
> in more detail what you are trying to achieve will help?
>
> Reading http://wiki.xen.org/wiki/Asking_Xen_Devel_Questions might also
> help you consider what it is you are asking.
>
> > Thanks for replying.

The final read and write operations are performed by:

read(fd, data, len);
write(fd, data, len);

Maybe my confusion lies in this point: what is the distinction between
read and write operations on a socket, on /proc/xen/xenbus, and on a
regular file?


--047d7b670a9d9f23ab04c6d2283d--


--===============8891045425445543588==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8891045425445543588==--


From xen-devel-bounces@lists.xen.org Thu Aug 09 09:52:19 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 09:52:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzPP8-00005J-8X; Thu, 09 Aug 2012 09:51:46 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1SzPP6-000059-UE
	for xen-devel@lists.xen.org; Thu, 09 Aug 2012 09:51:45 +0000
Received: from [85.158.139.83:20159] by server-4.bemta-5.messagelabs.com id
	95/3B-32474-03883205; Thu, 09 Aug 2012 09:51:44 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-16.tower-182.messagelabs.com!1344505901!23686798!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14498 invoked from network); 9 Aug 2012 09:51:42 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-16.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 9 Aug 2012 09:51:42 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1SzPOz-0004WS-1W; Thu, 09 Aug 2012 09:51:37 +0000
Date: Thu, 9 Aug 2012 10:51:36 +0100
From: Tim Deegan <tim@xen.org>
To: Jean Guyader <jean.guyader@gmail.com>
Message-ID: <20120809095136.GB16986@ocelot.phlegethon.org>
References: <1344023454-31425-1-git-send-email-jean.guyader@citrix.com>
	<1344023454-31425-2-git-send-email-jean.guyader@citrix.com>
	<501F97890200007800092CA9@nat28.tlf.novell.com>
	<CAEBdQ92yh41qEhN=etNpkZDaQargVw2iJiCGwkG_hAmGe9aZdw@mail.gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAEBdQ92yh41qEhN=etNpkZDaQargVw2iJiCGwkG_hAmGe9aZdw@mail.gmail.com>
User-Agent: Mutt/1.4.2.1i
Cc: Jean Guyader <jean.guyader@citrix.com>, Jan Beulich <JBeulich@suse.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 1/5] xen: add ssize_t
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 15:47 +0100 on 06 Aug (1344268059), Jean Guyader wrote:
> On 6 August 2012 09:08, Jan Beulich <JBeulich@suse.com> wrote:
> >>>> On 03.08.12 at 21:50, Jean Guyader <jean.guyader@citrix.com> wrote:
> >
> > Without finally explaining why you need this type in the first place,
> > I'll continue to NAK this patch. (This is made even worse by the fact
> > that the two inline functions in patch 5 that make use of the type
> > appear to be unused.)
> >
> 
> Understood. I'll switch to using long instead of ssize_t in my
> forthcoming patch series.

Please use an explicitly 64-bit type - AFAICS you're holding the sum of
some 64-bit length fields.

Cheers,

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 09 09:58:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 09:58:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzPVe-0000NR-4P; Thu, 09 Aug 2012 09:58:30 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SzPVc-0000NM-3L
	for xen-devel@lists.xensource.com; Thu, 09 Aug 2012 09:58:28 +0000
Received: from [85.158.143.35:30545] by server-2.bemta-4.messagelabs.com id
	EB/98-19021-3C983205; Thu, 09 Aug 2012 09:58:27 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1344506306!4641373!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg2ODI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27255 invoked from network); 9 Aug 2012 09:58:27 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Aug 2012 09:58:27 -0000
X-IronPort-AV: E=Sophos;i="4.77,739,1336348800"; d="scan'208";a="13926399"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	09 Aug 2012 09:58:26 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Thu, 9 Aug 2012
	10:58:26 +0100
Message-ID: <1344506305.32142.93.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: =?UTF-8?Q?=E9=A9=AC=E7=A3=8A?= <aware.why@gmail.com>
Date: Thu, 9 Aug 2012 10:58:25 +0100
In-Reply-To: <CA+ePHTCW-NL0S10J3Ru9xhuO+i2-UUHFnjD9H0==DrMWNcW58g@mail.gmail.com>
References: <CA+ePHTAGBYkyO1kUH2Cuf_fRRB4ELZeUO5Ot_d79c30N1=+VUQ@mail.gmail.com>
	<CA+ePHTAQHidwtaATLsH6kVz89sTs6vF_=k1OQ462JL4UBVZsnA@mail.gmail.com>
	<1344500485.32142.70.camel@zakaz.uk.xensource.com>
	<CA+ePHTCW-NL0S10J3Ru9xhuO+i2-UUHFnjD9H0==DrMWNcW58g@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [help] problem with `tools/xenstore/xs.c:
 xs_talkv()`
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-09 at 10:50 +0100, 马磊 wrote:
> The final read and write operations are performed by:
>
> read(fd, data, len);
> write(fd, data, len);
>
> Maybe my confusion lies in this point: what is the distinction between
> read and write operations on a socket, on /proc/xen/xenbus, and on a
> regular file?

/proc/xen/xenbus is not a regular file. /proc is a virtual file system
where the files often have special and magic semantics. /proc/xen/xenbus
is effectively something like a character device, even though it isn't
actually implemented as one.

Take a look at drivers/xen/xenbus/xenbus_dev_frontend.c, which is the
driver that backs /proc/xen/xenbus.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 09 09:58:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 09:58:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzPVe-0000NR-4P; Thu, 09 Aug 2012 09:58:30 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SzPVc-0000NM-3L
	for xen-devel@lists.xensource.com; Thu, 09 Aug 2012 09:58:28 +0000
Received: from [85.158.143.35:30545] by server-2.bemta-4.messagelabs.com id
	EB/98-19021-3C983205; Thu, 09 Aug 2012 09:58:27 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1344506306!4641373!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg2ODI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27255 invoked from network); 9 Aug 2012 09:58:27 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Aug 2012 09:58:27 -0000
X-IronPort-AV: E=Sophos;i="4.77,739,1336348800"; d="scan'208";a="13926399"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	09 Aug 2012 09:58:26 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Thu, 9 Aug 2012
	10:58:26 +0100
Message-ID: <1344506305.32142.93.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: =?UTF-8?Q?=E9=A9=AC=E7=A3=8A?= <aware.why@gmail.com>
Date: Thu, 9 Aug 2012 10:58:25 +0100
In-Reply-To: <CA+ePHTCW-NL0S10J3Ru9xhuO+i2-UUHFnjD9H0==DrMWNcW58g@mail.gmail.com>
References: <CA+ePHTAGBYkyO1kUH2Cuf_fRRB4ELZeUO5Ot_d79c30N1=+VUQ@mail.gmail.com>
	<CA+ePHTAQHidwtaATLsH6kVz89sTs6vF_=k1OQ462JL4UBVZsnA@mail.gmail.com>
	<1344500485.32142.70.camel@zakaz.uk.xensource.com>
	<CA+ePHTCW-NL0S10J3Ru9xhuO+i2-UUHFnjD9H0==DrMWNcW58g@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [help] problem with `tools/xenstore/xs.c:
 xs_talkv()`
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

T24gVGh1LCAyMDEyLTA4LTA5IGF0IDEwOjUwICswMTAwLCDpqazno4ogd3JvdGU6Cj4gCj4gCj4g
T24gVGh1LCBBdWcgOSwgMjAxMiBhdCA0OjIxIFBNLCBJYW4gQ2FtcGJlbGwgPElhbi5DYW1wYmVs
bEBjaXRyaXguY29tPgo+IHdyb3RlOgo+ICAgICAgICAgT24gVGh1LCAyMDEyLTA4LTA5IGF0IDA4
OjA3ICswMTAwLCDpqazno4ogd3JvdGU6Cj4gICAgICAgICA+Cj4gICAgICAgICA+Cj4gICAgICAg
ICA+IE9uIFdlZCwgQXVnIDgsIDIwMTIgYXQgNDozMiBQTSwg6ams56OKIDxhd2FyZS53aHlAZ21h
aWwuY29tPgo+ICAgICAgICAgd3JvdGU6Cj4gICAgICAgICA+ICAgICAgICAgSGkgYWxsLAo+ICAg
ICAgICAgPgo+ICAgICAgICAgPiAgICAgICAgICAgICBJbiAgeGVuLTQuMS4yIHNyYy90b29scy94
ZW5zdG9yZS94cy5jOiB4c190YWxrdgo+ICAgICAgICAgZnVuY3Rpb24sCj4gICAgICAgICA+ICAg
ICAgICAgIHRoZXJlIGFyZSBzZXZlcmFsIGxpbmVzIGFzIGZvbGxvdzoKPiAgICAgICAgID4KPiAg
ICAgICAgID4KPiAgICAgICAgID4gICAgICAgICAgNDM3ICAgICAgICBtdXRleF9sb2NrKCZoLT5y
ZXF1ZXN0X211dGV4KTsKPiAgICAgICAgID4KPiAgICAgICAgID4KPiAgICAgICAgID4KPiAgICAg
ICAgID4gICAgICAgICAgNDM4Cj4gICAgICAgICA+Cj4gICAgICAgICA+Cj4gICAgICAgICA+Cj4g
ICAgICAgICA+ICAgICAgICAgIDQzOSAgICAgICAgaWYgKCF4c193cml0ZV9hbGwoaC0+ZmQsICZt
c2csCj4gICAgICAgICBzaXplb2YobXNnKSkpCj4gICAgICAgICA+Cj4gICAgICAgICA+Cj4gICAg
ICAgICA+Cj4gICAgICAgICA+ICAgICAgICAgIDQ0MCAgICAgICAgICAgICAgICBnb3RvIGZhaWw7
Cj4gICAgICAgICA+Cj4gICAgICAgICA+Cj4gICAgICAgICA+Cj4gICAgICAgICA+ICAgICAgICAg
IDQ0MQo+ICAgICAgICAgPgo+ICAgICAgICAgPgo+ICAgICAgICAgPgo+ICAgICAgICAgPiAgICAg
ICAgICA0NDIgICAgICAgIGZvciAoaSA9IDA7IGkgPCBudW1fdmVjczsgaSsrKQo+ICAgICAgICAg
Pgo+ICAgICAgICAgPgo+ICAgICAgICAgPgo+ICAgICAgICAgPiAgICAgICAgICA0NDMgICAgICAg
ICAgICAgICAgaWYgKCF4c193cml0ZV9hbGwoaC0+ZmQsCj4gICAgICAgICA+ICAgICAgICAgaW92
ZWNbaV0uaW92X2Jhc2UsIGlvdmVjW2ldLmlvdl9sZW4pKQo+ICAgICAgICAgPgo+ICAgICAgICAg
Pgo+ICAgICAgICAgPgo+ICAgICAgICAgPiAgICAgICAgICA0NDQgICAgICAgICAgICAgICAgICAg
ICAgICBnb3RvIGZhaWw7Cj4gICAgICAgICA+Cj4gICAgICAgICA+Cj4gICAgICAgICA+Cj4gICAg
ICAgICA+ICAgICAgICAgIDQ0NQo+ICAgICAgICAgPgo+ICAgICAgICAgPgo+ICAgICAgICAgPgo+
         >          446        ret = read_reply(h, &msg.type, len);
>         >          447        if (!ret)
>         >          448                goto fail;
>         >          449
>         >          450        mutex_unlock(&h->request_mutex);
>         >
>         >         The above seems to tell me that after writing to
>         >         h->fd, read_reply invokes read_message, which
>         >         immediately reads from h->fd?
>         >         What is meant by this?
>         >
>         >         Thanks
>         >
>         > If h->fd refers to a socket descriptor, it's understandable
>         > that it is written and then immediately read, in which case
>         > the fd is returned by get_handle(xs_daemon_socket(), flags).
>         >
>         > But when the fd is retrieved by get_handle(xs_domain_dev(),
>         > flags), it means writing to a file and then immediately
>         > reading from the same file. Does it have something to do
>         > with the internal communication protocol?
>
>         Yes, the xenstore protocol involves both writing messages and
>         reading replies, but that seems trivially obvious so I'm
>         afraid I really have no idea what your question is nor what
>         is confusing you. Perhaps describing in more detail what you
>         are trying to achieve will help?
>
>         Reading http://wiki.xen.org/wiki/Asking_Xen_Devel_Questions
>         might also help you consider what it is you are asking.
>
>         > Thanks for replying.
>
> The final read and write operations are achieved by:
> read(fd, data, len);
> write(fd, data, len);
>
> Maybe my confusion lies in this point: what is the distinction
> between the read and write operations on a socket file versus
> /proc/xen/xenbus, a regular file?

/proc/xen/xenbus is not a regular file. /proc is a virtual file system
where the files often have special and magic semantics. /proc/xen/xenbus
is effectively something like a character device, even though it isn't
actually implemented as one.

Take a look at drivers/xen/xenbus/xenbus_dev_frontend.c, which is the
driver which backs /proc/xen/xenbus.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
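
[Editorial note: the crux of the thread above is that on a socket fd, or on
the device-like /proc/xen/xenbus, writing a request and then immediately
reading the reply from the same descriptor is well-defined. A minimal sketch
of that pattern follows; the socketpair() stands in for xs_daemon_socket()
and the framing is a simplification, not the real xenstore wire format.]

```c
/* Sketch of the libxenstore request/reply pattern: write a request to
 * one descriptor, then immediately read the reply back from the SAME
 * descriptor.  A socketpair() stands in for xs_daemon_socket(); the
 * payloads are simplified stand-ins for real xenstore messages. */
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Returns 0 when the reply read back matches what the "daemon" sent. */
static int xs_roundtrip_demo(void)
{
    int sv[2];
    char buf[16] = { 0 };
    const char req[] = "read name";
    const char reply[] = "OK";

    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) != 0)
        return -1;

    /* "Daemon" side: pre-load the reply so one process suffices; in
     * reality xenstored reads the request first, then answers. */
    if (write(sv[1], reply, sizeof(reply)) != (ssize_t)sizeof(reply))
        return -1;

    /* Client side: send the request, then immediately read the reply
     * from the same fd.  This is well-defined for sockets and for
     * /proc/xen/xenbus (device-like semantics), unlike a regular file
     * where reads and writes share a single file offset. */
    if (write(sv[0], req, sizeof(req)) != (ssize_t)sizeof(req))
        return -1;
    if (read(sv[0], buf, sizeof(buf)) != (ssize_t)sizeof(reply))
        return -1;

    close(sv[0]);
    close(sv[1]);
    return strcmp(buf, "OK");
}
```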

From xen-devel-bounces@lists.xen.org Thu Aug 09 10:00:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 10:00:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzPWw-0000R8-JH; Thu, 09 Aug 2012 09:59:50 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SzPWu-0000Qx-P8
	for xen-devel@lists.xen.org; Thu, 09 Aug 2012 09:59:49 +0000
Received: from [85.158.138.51:12163] by server-3.bemta-3.messagelabs.com id
	DA/7E-13122-31A83205; Thu, 09 Aug 2012 09:59:47 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-174.messagelabs.com!1344506387!25624299!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg2ODI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4571 invoked from network); 9 Aug 2012 09:59:47 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Aug 2012 09:59:47 -0000
X-IronPort-AV: E=Sophos;i="4.77,739,1336348800"; d="scan'208";a="13926433"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	09 Aug 2012 09:59:46 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Thu, 9 Aug 2012
	10:59:46 +0100
Message-ID: <1344506385.32142.94.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Date: Thu, 9 Aug 2012 10:59:45 +0100
In-Reply-To: <1343641349-28344-1-git-send-email-ian.campbell@citrix.com>
References: <1343641349-28344-1-git-send-email-ian.campbell@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Subject: Re: [Xen-devel] [DOCDAY PATCH] docs: document/mark-up SCHEDOP_*
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2012-07-30 at 10:42 +0100, Ian Campbell wrote:
> The biggest subtlety here is the additional argument when op ==
> SCHEDOP_shutdown and reason == SHUTDOWN_suspend and its interpretation by
> xc_domain_{save,restore}. Add some clarifying comments to libxc as well.
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

Any thoughts on this doc patch?

> ---
>  tools/libxc/xc_domain_restore.c |   10 ++++-
>  tools/libxc/xc_domain_save.c    |    9 ++++-
>  xen/include/public/sched.h      |   84 ++++++++++++++++++++++++++-------------
>  3 files changed, 72 insertions(+), 31 deletions(-)
> 
> diff --git a/tools/libxc/xc_domain_restore.c b/tools/libxc/xc_domain_restore.c
> index 3fe2b12..5541e73 100644
> --- a/tools/libxc/xc_domain_restore.c
> +++ b/tools/libxc/xc_domain_restore.c
> @@ -1895,8 +1895,14 @@ int xc_domain_restore(xc_interface *xch, int io_fd, uint32_t dom,
>          if ( i == 0 )
>          {
>              /*
> -             * Uncanonicalise the suspend-record frame number and poke
> -             * resume record.
> +             * Uncanonicalise the start info frame number and poke
> +             * updated values into the start info itself.
> +             *
> +             * The start info MFN is the 3rd argument to the
> +             * HYPERVISOR_sched_op hypercall when op==SCHEDOP_shutdown
> +             * and reason==SHUTDOWN_suspend, it is canonicalised in
> +             * xc_domain_save and therefore the PFN is found in the
> +             * edx register.
>               */
>              pfn = GET_FIELD(ctxt, user_regs.edx);
>              if ( (pfn >= dinfo->p2m_size) ||
> diff --git a/tools/libxc/xc_domain_save.c b/tools/libxc/xc_domain_save.c
> index c359649..f161472 100644
> --- a/tools/libxc/xc_domain_save.c
> +++ b/tools/libxc/xc_domain_save.c
> @@ -1867,7 +1867,14 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
>          goto out;
>      }
>  
> -    /* Canonicalise the suspend-record frame number. */
> +    /*
> +     * Canonicalise the start info frame number.
> +     *
> +     * The start info MFN is the 3rd argument to the
> +     * HYPERVISOR_sched_op hypercall when op==SCHEDOP_shutdown and
> +     * reason==SHUTDOWN_suspend and is therefore found in the edx
> +     * register.
> +     */
>      mfn = GET_FIELD(&ctxt, user_regs.edx);
>      if ( !MFN_IS_IN_PSEUDOPHYS_MAP(mfn) )
>      {
> diff --git a/xen/include/public/sched.h b/xen/include/public/sched.h
> index 7f87420..db5124a 100644
> --- a/xen/include/public/sched.h
> +++ b/xen/include/public/sched.h
> @@ -1,8 +1,8 @@
>  /******************************************************************************
>   * sched.h
> - * 
> + *
>   * Scheduler state interactions
> - * 
> + *
>   * Permission is hereby granted, free of charge, to any person obtaining a copy
>   * of this software and associated documentation files (the "Software"), to
>   * deal in the Software without restriction, including without limitation the
> @@ -30,20 +30,33 @@
>  #include "event_channel.h"
>  
>  /*
> + * `incontents 150 sched Guest Scheduler Operations
> + *
> + * The SCHEDOP interface provides mechanisms for a guest to interact
> + * with the scheduler, including yield, blocking and shutting itself
> + * down.
> + */
> +
> +/*
>   * The prototype for this hypercall is:
> - *  long sched_op(int cmd, void *arg)
> + * ` long HYPERVISOR_sched_op(enum sched_op cmd, void *arg, ...)
> + *
>   * @cmd == SCHEDOP_??? (scheduler operation).
>   * @arg == Operation-specific extra argument(s), as described below.
> - * 
> + * ...  == Additional Operation-specific extra arguments, described below.
> + *
>   * Versions of Xen prior to 3.0.2 provided only the following legacy version
>   * of this hypercall, supporting only the commands yield, block and shutdown:
>   *  long sched_op(int cmd, unsigned long arg)
>   * @cmd == SCHEDOP_??? (scheduler operation).
>   * @arg == 0               (SCHEDOP_yield and SCHEDOP_block)
>   *      == SHUTDOWN_* code (SCHEDOP_shutdown)
> - * This legacy version is available to new guests as sched_op_compat().
> + *
> + * This legacy version is available to new guests as:
> + * ` long HYPERVISOR_sched_op_compat(enum sched_op cmd, unsigned long arg)
>   */
>  
> +/* ` enum sched_op { // SCHEDOP_* => struct sched_* */
>  /*
>   * Voluntarily yield the CPU.
>   * @arg == NULL.
> @@ -61,59 +74,72 @@
>  
>  /*
>   * Halt execution of this domain (all VCPUs) and notify the system controller.
> - * @arg == pointer to sched_shutdown structure.
> + * @arg == pointer to sched_shutdown_t structure.
> + *
> + * If the sched_shutdown_t reason is SHUTDOWN_suspend then this
> + * hypercall takes an additional extra argument which should be the
> + * MFN of the guest's start_info_t.
> + *
> + * In addition, when the reason is SHUTDOWN_suspend this hypercall
> + * returns 1 if suspend was cancelled or the domain was merely
> + * checkpointed, and 0 if it is resuming in a new domain.
>   */
>  #define SCHEDOP_shutdown    2
> -struct sched_shutdown {
> -    unsigned int reason; /* SHUTDOWN_* */
> -};
> -typedef struct sched_shutdown sched_shutdown_t;
> -DEFINE_XEN_GUEST_HANDLE(sched_shutdown_t);
>  
>  /*
>   * Poll a set of event-channel ports. Return when one or more are pending. An
>   * optional timeout may be specified.
> - * @arg == pointer to sched_poll structure.
> + * @arg == pointer to sched_poll_t structure.
>   */
>  #define SCHEDOP_poll        3
> -struct sched_poll {
> -    XEN_GUEST_HANDLE(evtchn_port_t) ports;
> -    unsigned int nr_ports;
> -    uint64_t timeout;
> -};
> -typedef struct sched_poll sched_poll_t;
> -DEFINE_XEN_GUEST_HANDLE(sched_poll_t);
>  
>  /*
>   * Declare a shutdown for another domain. The main use of this function is
>   * in interpreting shutdown requests and reasons for fully-virtualized
>   * domains.  A para-virtualized domain may use SCHEDOP_shutdown directly.
> - * @arg == pointer to sched_remote_shutdown structure.
> + * @arg == pointer to sched_remote_shutdown_t structure.
>   */
>  #define SCHEDOP_remote_shutdown        4
> -struct sched_remote_shutdown {
> -    domid_t domain_id;         /* Remote domain ID */
> -    unsigned int reason;       /* SHUTDOWN_xxx reason */
> -};
> -typedef struct sched_remote_shutdown sched_remote_shutdown_t;
> -DEFINE_XEN_GUEST_HANDLE(sched_remote_shutdown_t);
>  
>  /*
>   * Latch a shutdown code, so that when the domain later shuts down it
>   * reports this code to the control tools.
> - * @arg == as for SCHEDOP_shutdown.
> + * @arg == sched_shutdown_t, as for SCHEDOP_shutdown.
>   */
>  #define SCHEDOP_shutdown_code 5
>  
>  /*
>   * Setup, poke and destroy a domain watchdog timer.
> - * @arg == pointer to sched_watchdog structure.
> + * @arg == pointer to sched_watchdog_t structure.
>   * With id == 0, setup a domain watchdog timer to cause domain shutdown
>   *               after timeout, returns watchdog id.
>   * With id != 0 and timeout == 0, destroy domain watchdog timer.
>   * With id != 0 and timeout != 0, poke watchdog timer and set new timeout.
>   */
>  #define SCHEDOP_watchdog    6
> +/* ` } */
> +
> +struct sched_shutdown {
> +    unsigned int reason; /* SHUTDOWN_* => enum sched_shutdown_reason */
> +};
> +typedef struct sched_shutdown sched_shutdown_t;
> +DEFINE_XEN_GUEST_HANDLE(sched_shutdown_t);
> +
> +struct sched_poll {
> +    XEN_GUEST_HANDLE(evtchn_port_t) ports;
> +    unsigned int nr_ports;
> +    uint64_t timeout;
> +};
> +typedef struct sched_poll sched_poll_t;
> +DEFINE_XEN_GUEST_HANDLE(sched_poll_t);
> +
> +struct sched_remote_shutdown {
> +    domid_t domain_id;         /* Remote domain ID */
> +    unsigned int reason;       /* SHUTDOWN_* => enum sched_shutdown_reason */
> +};
> +typedef struct sched_remote_shutdown sched_remote_shutdown_t;
> +DEFINE_XEN_GUEST_HANDLE(sched_remote_shutdown_t);
> +
>  struct sched_watchdog {
>      uint32_t id;                /* watchdog ID */
>      uint32_t timeout;           /* timeout */
> @@ -126,11 +152,13 @@ DEFINE_XEN_GUEST_HANDLE(sched_watchdog_t);
>   * software to determine the appropriate action. For the most part, Xen does
>   * not care about the shutdown code.
>   */
> +/* ` enum sched_shutdown_reason { */
>  #define SHUTDOWN_poweroff   0  /* Domain exited normally. Clean up and kill. */
>  #define SHUTDOWN_reboot     1  /* Clean up, kill, and then restart.          */
>  #define SHUTDOWN_suspend    2  /* Clean up, save suspend info, kill.         */
>  #define SHUTDOWN_crash      3  /* Tell controller we've crashed.             */
>  #define SHUTDOWN_watchdog   4  /* Restart because watchdog time expired.     */
> +/* ` } */
>  
>  #endif /* __XEN_PUBLIC_SCHED_H__ */
>  
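
[Editorial note: the shutdown-reason encoding documented in the patch can be
mirrored in ordinary C. The constants and struct below are copied from the
quoted sched.h hunks; interpret_suspend_rc is a hypothetical helper, added
only to illustrate the documented return-value semantics, since the real
hypercall can only be issued by a guest kernel.]

```c
/* Constants and struct copied from the quoted xen/include/public/sched.h
 * hunks; only the helper at the bottom is new and purely illustrative. */
#include <string.h>

#define SCHEDOP_shutdown    2

/* enum sched_shutdown_reason */
#define SHUTDOWN_poweroff   0  /* Domain exited normally. */
#define SHUTDOWN_reboot     1  /* Clean up, kill, and then restart. */
#define SHUTDOWN_suspend    2  /* Extra hypercall arg: start_info MFN. */
#define SHUTDOWN_crash      3  /* Tell controller we've crashed. */
#define SHUTDOWN_watchdog   4  /* Watchdog timer expired. */

struct sched_shutdown {
    unsigned int reason;       /* SHUTDOWN_* */
};

/* Hypothetical helper: interpret the SCHEDOP_shutdown/SHUTDOWN_suspend
 * return value as documented above -- 1 means the suspend was cancelled
 * or the domain was merely checkpointed, 0 means it is resuming in a
 * new domain. */
static const char *interpret_suspend_rc(long rc)
{
    return rc ? "cancelled or checkpointed" : "resuming in new domain";
}
```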



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 09 10:00:34 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 10:00:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzPXR-0000YO-6w; Thu, 09 Aug 2012 10:00:21 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SzPXQ-0000YF-Db
	for xen-devel@lists.xen.org; Thu, 09 Aug 2012 10:00:20 +0000
Received: from [85.158.143.35:3226] by server-2.bemta-4.messagelabs.com id
	2D/DB-19021-33A83205; Thu, 09 Aug 2012 10:00:19 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1344506411!4737742!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg2ODI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10910 invoked from network); 9 Aug 2012 10:00:15 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Aug 2012 10:00:15 -0000
X-IronPort-AV: E=Sophos;i="4.77,739,1336348800"; d="scan'208";a="13926444"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	09 Aug 2012 10:00:11 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Thu, 9 Aug 2012
	11:00:11 +0100
Message-ID: <1344506410.32142.95.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Date: Thu, 9 Aug 2012 11:00:10 +0100
In-Reply-To: <1343649872-23799-1-git-send-email-ian.campbell@citrix.com>
References: <1343649872-23799-1-git-send-email-ian.campbell@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [DOCDAY PATCH] docs: console: correct example
 console type definition
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2012-07-30 at 13:04 +0100, Ian Campbell wrote:
> I think this is intended to be under the specific consoles directory.
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

Any thoughts on this patch?

> ---
>  docs/misc/console.txt |    2 +-
>  1 files changed, 1 insertions(+), 1 deletions(-)
> 
> diff --git a/docs/misc/console.txt b/docs/misc/console.txt
> index 73ca835..8a53a95 100644
> --- a/docs/misc/console.txt
> +++ b/docs/misc/console.txt
> @@ -36,7 +36,7 @@ toolstack in the "type" node on xenstore, under the relevant console
>  section.
>  For example:
>  
> -# xenstore-read /local/domain/26/console/type
> +# xenstore-read /local/domain/26/console/1/type
>  xenconsoled
>  
>  The supported values are only xenconsoled or ioemu; xenconsoled has
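
[Editorial note: the corrected example shows each console's "type" node under
a numbered subdirectory. The helper below sketches that path layout;
build_console_type_path is an illustrative name, not a real libxl or
xenstore function, and the special-casing of console 0 is an assumption
based on the original, uncorrected example.]

```c
/* Illustrative helper (not a real libxl/xenstore function): build the
 * xenstore path of the "type" node for console <idx> of domain <domid>,
 * matching the corrected example /local/domain/26/console/1/type. */
#include <stdio.h>
#include <string.h>

static int build_console_type_path(char *buf, size_t len,
                                   unsigned int domid, unsigned int idx)
{
    /* Assumption from the original example: the primary console's nodes
     * sit directly under .../console, while additional consoles live in
     * numbered subdirectories .../console/<idx>. */
    if (idx == 0)
        return snprintf(buf, len, "/local/domain/%u/console/type", domid);
    return snprintf(buf, len, "/local/domain/%u/console/%u/type",
                    domid, idx);
}
```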



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 09 10:00:34 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 10:00:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzPXR-0000YO-6w; Thu, 09 Aug 2012 10:00:21 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SzPXQ-0000YF-Db
	for xen-devel@lists.xen.org; Thu, 09 Aug 2012 10:00:20 +0000
Received: from [85.158.143.35:3226] by server-2.bemta-4.messagelabs.com id
	2D/DB-19021-33A83205; Thu, 09 Aug 2012 10:00:19 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1344506411!4737742!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg2ODI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10910 invoked from network); 9 Aug 2012 10:00:15 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Aug 2012 10:00:15 -0000
X-IronPort-AV: E=Sophos;i="4.77,739,1336348800"; d="scan'208";a="13926444"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	09 Aug 2012 10:00:11 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Thu, 9 Aug 2012
	11:00:11 +0100
Message-ID: <1344506410.32142.95.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Date: Thu, 9 Aug 2012 11:00:10 +0100
In-Reply-To: <1343649872-23799-1-git-send-email-ian.campbell@citrix.com>
References: <1343649872-23799-1-git-send-email-ian.campbell@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [DOCDAY PATCH] docs: console: correct example
 console type definition
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2012-07-30 at 13:04 +0100, Ian Campbell wrote:
> I think this is intended to be under the specific consoles directory.
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

Any thoughts on this patch?

> ---
>  docs/misc/console.txt |    2 +-
>  1 files changed, 1 insertions(+), 1 deletions(-)
> 
> diff --git a/docs/misc/console.txt b/docs/misc/console.txt
> index 73ca835..8a53a95 100644
> --- a/docs/misc/console.txt
> +++ b/docs/misc/console.txt
> @@ -36,7 +36,7 @@ toolstack in the "type" node on xenstore, under the relevant console
>  section.
>  For example:
>  
> -# xenstore-read /local/domain/26/console/type
> +# xenstore-read /local/domain/26/console/1/type
>  xenconsoled
>  
>  The supported values are only xenconsoled or ioemu; xenconsoled has
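A small sketch of what the corrected path layout implies (illustration only; the helper name and the treatment of the primary console are my assumptions, not part of the patch):

```python
def console_type_path(domid, console_num=None):
    """Build the xenstore path for a console's "type" node.

    The primary PV console lives directly under .../console, while
    secondary consoles get numbered subdirectories, which is why the
    example in console.txt needs the extra path component.
    """
    base = "/local/domain/%d/console" % domid
    if console_num is None:
        return base + "/type"                  # primary console
    return "%s/%d/type" % (base, console_num)  # numbered console

print(console_type_path(26, 1))  # -> /local/domain/26/console/1/type
```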



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 09 10:02:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 10:02:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzPZL-0000kd-NP; Thu, 09 Aug 2012 10:02:19 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SzPZJ-0000k1-Uh
	for xen-devel@lists.xen.org; Thu, 09 Aug 2012 10:02:18 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1344506466!9356874!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg2ODI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7381 invoked from network); 9 Aug 2012 10:01:06 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Aug 2012 10:01:06 -0000
X-IronPort-AV: E=Sophos;i="4.77,739,1336348800"; d="scan'208";a="13926475"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	09 Aug 2012 10:01:03 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Thu, 9 Aug 2012
	11:01:03 +0100
Message-ID: <1344506462.32142.96.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Date: Thu, 9 Aug 2012 11:01:02 +0100
In-Reply-To: <1343657037-28495-1-git-send-email-ian.campbell@citrix.com>
References: <1343657037-28495-1-git-send-email-ian.campbell@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: Re: [Xen-devel] [DOCDAY PATCH] docs: initial documentation for
	xenstore paths
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2012-07-30 at 15:03 +0100, Ian Campbell wrote:
> This is based upon my inspection of a system with a single PV domain running
> and is therefore very incomplete.
> 
> There are several things I'm not sure of here, mostly marked with XXX in the
> text.
> 
> In particular:
> 
>  - We seem to expose various things to the guest which really it has no need to
>    know (at least not via xenstore). e.g. its own domid, its device model pid,
>    the size of the video ram, store port and gref.
>  - Missing reference for ~/device-model/*
>  - Missing protocol reference for ~/control/shutdown
>  - What is the distinction between /vm/UUID and /local/domain/DOMID
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

Any thoughts on this or the follow up?

Ian J is this machine-readable in a way which is useful for whatever it
is you wanted to machine read it into?

> ---
>  docs/INDEX                        |    1 +
>  docs/misc/xenstore-paths.markdown |  294 +++++++++++++++++++++++++++++++++++++
>  2 files changed, 295 insertions(+), 0 deletions(-)
> 
> diff --git a/docs/INDEX b/docs/INDEX
> index 5a0a2c2..f5ccae2 100644
> --- a/docs/INDEX
> +++ b/docs/INDEX
> @@ -12,6 +12,7 @@ misc/kexec_and_kdump		Kexec and Kdump for Xen
>  misc/tscmode			TSC Mode HOWTO
>  misc/vbd-interface		Xen Guest Disk (VBD) Interface
>  misc/xenstore			Xenstore protocol specification
> +misc/xenstore-paths		Xenstore path documentation
>  misc/xl-disk-configuration	XL Disk Configuration
>  misc/xl-network-configuration	XL Network Configuration
>  misc/distro_mapping		Distro Directory Layouts
> diff --git a/docs/misc/xenstore-paths.markdown b/docs/misc/xenstore-paths.markdown
> index e69de29..967ed7b 100644
> --- a/docs/misc/xenstore-paths.markdown
> +++ b/docs/misc/xenstore-paths.markdown
> @@ -0,0 +1,294 @@
> +# XenStore Paths
> +
> +This document attempts to define all the paths in common use by
> +guests, front-/back-end drivers, toolstacks etc.
> +
> +The XenStore wire protocol itself is described in
> +[xenstore.txt](xenstore.txt).
> +
> +## Notation
> +
> +This document is intended to be partially machine readable, such that
> +test systems etc. can use it to validate whether the xenstore paths
> +used by a test are allowable.
> +
> +Therefore the following notation conventions apply:
> +
> +A xenstore path is generically defined as:
> +
> +        PATH = VALUES [TAGS]
> +    
> +        PATH/* [TAGS]
> +
> +The first syntax defines a simple path with a single value. The second
> +syntax defines an aggregated set of paths which are usually described
> +externally to this document. The text will give a pointer to the
> +appropriate external documentation.
> +
> +PATH can contain simple regex constructs following the POSIX regexp
> +syntax described in regexp(7). In addition the following wildcard
> +names are defined and are evaluated before regexp expansion:
> +
> +* ~ -- expands to a domain's home path (described below). Only valid
> +  at the beginning of a path.
> +* $DEVID -- a per-device type device identifier. Typically an integer.
> +* $DOMID -- a domain identifier, an integer. Typically this refers to
> +  the "other" domain. i.e. ~ refers to the domain providing a service
> +  while $DOMID is the consumer of that service.
> +* $UUID -- a UUID in the form xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
> +
> +VALUES are strings and can take the following forms:
> +
> +* PATH -- a XenStore path.
> +* STRING -- an arbitrary string.
> +* INTEGER -- the decimal representation of an integer.
> +  * MEMKB -- the decimal representation of a number of kilobytes.
> +  * EVTCHN -- the decimal representation of an event channel.
> +  * GNTREF -- the decimal representation of a grant reference.
> +* "a literal string" -- literal strings are contained within quotes.
> +* (VALUE | VALUE | ... ) -- a set of alternatives. Alternatives are
> +  separated by a "|" and all the alternatives are enclosed in "(" and
> +  ")".
> +
> +Additional TAGS may follow as a comma separated set of the following
> +tags enclosed in square brackets.
> +
> +* w -- Path is writable by the containing domain, that is the domain
> +  whose home path ~ this key is under or which /vm/$UUID refers to. By
> +  default paths under both of these locations are read only for the
> +  domain.
> +* HVM -- Path is valid for HVM domains only
> +* PV --  Path is valid for PV domains only
> +* BACKEND -- Path is valid for a backend domain (AKA driver domain)
> +
> +Owning domain means the domain whose home path this tag is found
> +under.
> +
> +Lack of either a __HVM__ or __PV__ tag indicates that the path is
> +valid for either type of domain (including PVonHVM and similar mixed
> +domain types).
> +
> +## Domain Home Path
> +
> +Every domain has a home path within the xenstore hierarchy. This is
> +the path where the majority of the domain-visible information about
> +each domain is stored.
> +
> +This path is:
> +    
> +      /local/domain/$DOMID
> +
> +All non-absolute paths are relative to this path.
> +
> +Although this path could be considered a "Home Directory" for the
> +domain it would not usually be writable by the domain. The tools will
> +create writable subdirectories as necessary.
> +
> +## Per Domain Paths
> +
> +### General Paths
> +
> +#### ~/vm = PATH []
> +
> +A pointer back to the domain's /vm/$UUID path (described below).
> +
> +#### ~/name = STRING []
> +
> +The guest's name.
> +
> +#### ~/domid = INTEGER   []
> +
> +The domain's own ID.
> +
> +XXX why is this exposed to the guest here?
> +
> +#### ~/image/device-model-pid = INTEGER   [r]
> +
> +The process ID of the device model associated with this domain, if it
> +has one.
> +
> +XXX why is this visible to the guest?
> +
> +#### ~/cpu/[0-9]+/availability = ("online"|"offline") [PV]
> +
> +One node for each virtual CPU up to the guest's configured
> +maximum. Valid values are "online" and "offline". 
> +
> +#### ~/memory/static-max = MEMKB []
> +
> +Specifies a static maximum amount of memory which this domain should
> +expect to be given. In the absence of in-guest memory hotplug support
> +this is set at domain boot and is usually the maximum amount of RAM
> +which a guest can make use of.
> +
> +#### ~/memory/target = MEMKB []
> +
> +The current balloon target for the domain. The balloon driver within
> +the guest is expected to make every effort to reach this target.
> +
> +#### ~/memory/videoram = MEMKB [HVM]
> +
> +The size of the video RAM this domain is configured with.
> +
> +XXX why is this exposed to the guest here instead of as a PCI BAR or
> +some other property of the virtual GFX card? Perhaps should be
> +non-guest visible.
> +
> +#### ~/device/suspend/event-channel = ""|EVTCHN [w]
> +
> +The domain's suspend event channel. The use of a suspend event channel
> +is optional at the domain's discretion. If it is not used then this
> +path will be left blank.
> +
> +### Frontend device paths
> +
> +Paravirtual device frontends are generally specified by their own
> +directory within the XenStore hierarchy. Usually this is under
> +~/device/$TYPE/$DEVID although there are exceptions, e.g. ~/console
> +for the first PV console.
> +
> +#### ~/device/vbd/$DEVID/* []
> +
> +A virtual block device frontend. Described by
> +[xen/include/public/io/blkif.h][BLKIF]
> +
> +#### ~/device/vfb/$DEVID/* []
> +
> +A virtual framebuffer frontend. Described by
> +[xen/include/public/io/fbif.h][FBIF]
> +
> +#### ~/device/vkbd/$DEVID/* []
> +
> +A virtual keyboard device frontend. Described by
> +[xen/include/public/io/kbdif.h][KBDIF]
> +
> +#### ~/device/vif/$DEVID/* []
> +
> +A virtual network device frontend. Described by
> +[xen/include/public/io/netif.h][NETIF]
> +
> +#### ~/console/* []
> +
> +The primary PV console device. Described in [console.txt](console.txt)
> +
> +#### ~/device/console/$DEVID/* []
> +
> +A secondary PV console device. Described in [console.txt](console.txt)
> +
> +#### ~/device/serial/$DEVID/* [HVM]
> +
> +An emulated serial device.
> +
> +#### ~/store/port = EVTCHN []
> +
> +The event channel used by the domain's connection to XenStore.
> +
> +XXX why is this exposed to the guest?
> +
> +#### ~/store/ring-ref = GNTREF []
> +
> +The grant reference of the domain's XenStore ring.
> +
> +XXX why is this exposed to the guest?
> +
> +### Backend Device Paths
> +
> +Paravirtual device backends are generally specified by their own
> +directory within the XenStore hierarchy. Usually this is under
> +~/backend/$TYPE/$DOMID/$DEVID.
> +
> +#### ~/backend/vbd/$DOMID/$DEVID/* []
> +
> +A virtual block device backend. Described by
> +[xen/include/public/io/blkif.h][BLKIF]
> +
> +#### ~/backend/vfb/$DOMID/$DEVID/* []
> +
> +A virtual framebuffer backend. Described by
> +[xen/include/public/io/fbif.h][FBIF]
> +
> +#### ~/backend/vkbd/$DOMID/$DEVID/* []
> +
> +A virtual keyboard device backend. Described by
> +[xen/include/public/io/kbdif.h][KBDIF]
> +
> +#### ~/backend/vif/$DOMID/$DEVID/* []
> +
> +A virtual network device backend. Described by
> +[xen/include/public/io/netif.h][NETIF]
> +
> +#### ~/backend/console/$DOMID/$DEVID/* []
> +
> +A PV console backend. Described in [console.txt](console.txt)
> +
> +#### ~/device-model/$DOMID/* []
> +
> +Information relating to device models running in the domain. $DOMID is
> +the target domain of the device model.
> +
> +XXX where is the contents of this directory specified?
> +
> +#### ~/libxl/disable_udev = ("1"|"0") []
> +
> +Indicates whether device hotplug scripts in this domain should be run
> +by udev ("0") or will be run by the toolstack directly ("1").
> +
> +### Platform Feature and Control Paths
> +
> +#### ~/control/shutdown = (""|COMMAND) [w]
> +
> +This is the PV shutdown control node. A toolstack can write various
> +commands here to cause various guest shutdown, reboot or suspend
> +activities. The guest acknowledges a request by writing the empty
> +string back to the command node.
> +
> +XXX where is this protocol and the valid keys defined?
> +
> +#### ~/control/platform-feature-multiprocessor-suspend = (0|1) []
> +
> +Indicates to the guest that this platform supports the multiprocessor
> +suspend feature.
> +
> +#### ~/control/platform-feature-xs\_reset\_watches = (0|1) []
> +
> +Indicates to the guest that this platform supports the
> +XS_RESET_WATCHES xenstore message. See xen/include/public/io/xs\_wire.h
> +for the XenStore wire protocol definition.
> +
> +### Domain Controlled Paths
> +
> +#### ~/data/* [w]
> +
> +A domain writable path. Available for arbitrary domain use.
> +
> +## Virtual Machine paths
> +
> +XXX somehow describe how /vm is different to /local/domain/
> +
> +### /vm/$UUID/uuid = UUID []
> +
> +Value is the same UUID as the path.
> +
> +### /vm/$UUID/name = STRING []
> +
> +The domain's name.
> +
> +### /vm/$UUID/image/* []
> +
> +Various information relating to the domain builder.
> +
> +### /vm/$UUID/start_time = INTEGER "." INTEGER []
> +
> +The time at which the guest was started, in SECONDS.MICROSECONDS
> +format.
> +
> +## Platform-Level paths
> +
> +### libxl Specific Paths
> +
> +#### /libxl/$DOMID/dm-version = ("qemu_xen"|"qemu_xen_traditional") []
> +
> +The device model version for a domain.
> +
> +[BLKIF]: http://xenbits.xen.org/docs/unstable/hypercall/include,public,io,blkif.h.html
> +[FBIF]: http://xenbits.xen.org/docs/unstable/hypercall/include,public,io,fbif.h.html
> +[KBDIF]: http://xenbits.xen.org/docs/unstable/hypercall/include,public,io,kbdif.h.html
> +[NETIF]: http://xenbits.xen.org/docs/unstable/hypercall/include,public,io,netif.h.html
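
As a rough illustration of the machine-readability goal above, the wildcard names could be expanded into regexes along these lines (a sketch only; the expansions and helper names are my assumptions, not part of the patch):

```python
import re

# Hypothetical expansions for the wildcard names defined in the
# Notation section; applied before the remaining POSIX regex constructs.
WILDCARDS = {
    "$DEVID": r"[0-9]+",
    "$DOMID": r"[0-9]+",
    "$UUID": r"[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}",
}

def pattern_to_regex(pattern, domid):
    # "~" expands to the domain's home path, only at the beginning.
    if pattern.startswith("~"):
        pattern = "/local/domain/%d" % domid + pattern[1:]
    for name, regex in WILDCARDS.items():
        pattern = pattern.replace(name, regex)
    return re.compile("^" + pattern + "$")

rx = pattern_to_regex("~/device/vif/$DEVID/state", 7)
print(bool(rx.match("/local/domain/7/device/vif/0/state")))  # True
```

A test harness could then check each xenstore access made by a test against the set of documented patterns.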



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 09 10:02:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 10:02:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzPZV-0000ly-3j; Thu, 09 Aug 2012 10:02:29 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SzPZT-0000lb-NU
	for xen-devel@lists.xen.org; Thu, 09 Aug 2012 10:02:27 +0000
Received: from [85.158.139.83:60702] by server-3.bemta-5.messagelabs.com id
	55/A9-31899-2BA83205; Thu, 09 Aug 2012 10:02:26 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-15.tower-182.messagelabs.com!1344506546!30956906!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg2ODI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10638 invoked from network); 9 Aug 2012 10:02:26 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Aug 2012 10:02:26 -0000
X-IronPort-AV: E=Sophos;i="4.77,739,1336348800"; d="scan'208";a="13926501"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	09 Aug 2012 10:02:03 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 9 Aug 2012 11:02:03 +0100
Date: Thu, 9 Aug 2012 11:01:38 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1344506410.32142.95.camel@zakaz.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1208091101270.21096@kaball.uk.xensource.com>
References: <1343649872-23799-1-git-send-email-ian.campbell@citrix.com>
	<1344506410.32142.95.camel@zakaz.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [DOCDAY PATCH] docs: console: correct example
 console type definition
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 9 Aug 2012, Ian Campbell wrote:
> On Mon, 2012-07-30 at 13:04 +0100, Ian Campbell wrote:
> > I think this is intended to be under the specific consoles directory.
> > 
> > Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> 
> Any thoughts on this patch?


Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

> > ---
> >  docs/misc/console.txt |    2 +-
> >  1 files changed, 1 insertions(+), 1 deletions(-)
> > 
> > diff --git a/docs/misc/console.txt b/docs/misc/console.txt
> > index 73ca835..8a53a95 100644
> > --- a/docs/misc/console.txt
> > +++ b/docs/misc/console.txt
> > @@ -36,7 +36,7 @@ toolstack in the "type" node on xenstore, under the relevant console
> >  section.
> >  For example:
> >  
> > -# xenstore-read /local/domain/26/console/type
> > +# xenstore-read /local/domain/26/console/1/type
> >  xenconsoled
> >  
> >  The supported values are only xenconsoled or ioemu; xenconsoled has
> 
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 09 10:06:26 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 10:06:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzPd9-0001Bi-O1; Thu, 09 Aug 2012 10:06:15 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1SzPd8-0001BU-9t
	for xen-devel@lists.xen.org; Thu, 09 Aug 2012 10:06:14 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-16.tower-27.messagelabs.com!1344506765!9357733!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27660 invoked from network); 9 Aug 2012 10:06:06 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-16.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 9 Aug 2012 10:06:06 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1SzPcz-0004Zx-AA; Thu, 09 Aug 2012 10:06:05 +0000
Date: Thu, 9 Aug 2012 11:06:05 +0100
From: Tim Deegan <tim@xen.org>
To: Jean Guyader <jean.guyader@citrix.com>
Message-ID: <20120809100605.GC16986@ocelot.phlegethon.org>
References: <1344023454-31425-1-git-send-email-jean.guyader@citrix.com>
	<1344023454-31425-5-git-send-email-jean.guyader@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1344023454-31425-5-git-send-email-jean.guyader@citrix.com>
User-Agent: Mutt/1.4.2.1i
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 4/5] xen: events,
	exposes evtchn_alloc_unbound_domain
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 20:50 +0100 on 03 Aug (1344027053), Jean Guyader wrote:
> 
> Exposes evtchn_alloc_unbound_domain to the rest of
> Xen so we can create allocated unbound evtchn within Xen.
> 
> Signed-off-by: Jean Guyader <jean.guyader@citrix.com>

> @@ -161,18 +163,18 @@ static long evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc)
>  {
>      struct evtchn *chn;
>      struct domain *d;
> -    int            port;
> +    evtchn_port_t  port;
>      domid_t        dom = alloc->dom;
> -    long           rc;
> +    int            rc;

The function returns long; if you're tidying this up to be an int, might
as well change the return type too.

>  
>      rc = rcu_lock_target_domain_by_id(dom, &d);
>      if ( rc )
>          return rc;
>  
> -    spin_lock(&d->event_lock);
> +    rc = evtchn_alloc_unbound_domain(d, &port);

This moves some of the setting of channel state before the xsm hook.
Also, the state changes lower down in this function are no longer under
the event_lock. :(

Cheers,

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 09 10:19:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 10:19:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzPpx-0001OY-2n; Thu, 09 Aug 2012 10:19:29 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jean.guyader@gmail.com>) id 1SzPpv-0001OT-P2
	for xen-devel@lists.xen.org; Thu, 09 Aug 2012 10:19:27 +0000
Received: from [85.158.138.51:27413] by server-11.bemta-3.messagelabs.com id
	FA/27-10722-FAE83205; Thu, 09 Aug 2012 10:19:27 +0000
X-Env-Sender: jean.guyader@gmail.com
X-Msg-Ref: server-9.tower-174.messagelabs.com!1344507563!27444912!1
X-Originating-IP: [209.85.160.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2232 invoked from network); 9 Aug 2012 10:19:25 -0000
Received: from mail-pb0-f45.google.com (HELO mail-pb0-f45.google.com)
	(209.85.160.45)
	by server-9.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Aug 2012 10:19:25 -0000
Received: by pbbrp12 with SMTP id rp12so690851pbb.32
	for <xen-devel@lists.xen.org>; Thu, 09 Aug 2012 03:19:23 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type;
	bh=h35yyVaF2fnYsJGt37sPoXl8TkVPmPyPPAB8DfaVJ28=;
	b=qr+4vI8jpi1wQ2anAwMRmwQ5I1W4GYYafKuzO7v4K32L0VeQ8FkpAD5lvYRzICOsJT
	D4zx66E8nmg26gGc7r1G0tcFxx4hYz1u4yXAi0ZqFnl4972G8SncZ/BNUiHRAXW0rP+M
	5w3AwKfRu+a72iBS/GDMXd9daxccN57ngqz8L8+EtjB54/bz/Y4WL+h98OoV3pw5AGmP
	q38Z3Dbqrr1HuHv88RJTpt/YMu2Jep02pOHZ7+301xOIugHYgDy7yEJOyhb+IE+rdplR
	/zd5kMxYQuzhfUzab0OF3inuEu6DOihU5xj/cs9x4YujfdGgMFZYcqOX0MYCl1hcsMFp
	XlZw==
Received: by 10.68.195.202 with SMTP id ig10mr3041519pbc.37.1344507563117;
	Thu, 09 Aug 2012 03:19:23 -0700 (PDT)
MIME-Version: 1.0
Received: by 10.68.135.129 with HTTP; Thu, 9 Aug 2012 03:19:02 -0700 (PDT)
In-Reply-To: <20120809095136.GB16986@ocelot.phlegethon.org>
References: <1344023454-31425-1-git-send-email-jean.guyader@citrix.com>
	<1344023454-31425-2-git-send-email-jean.guyader@citrix.com>
	<501F97890200007800092CA9@nat28.tlf.novell.com>
	<CAEBdQ92yh41qEhN=etNpkZDaQargVw2iJiCGwkG_hAmGe9aZdw@mail.gmail.com>
	<20120809095136.GB16986@ocelot.phlegethon.org>
From: Jean Guyader <jean.guyader@gmail.com>
Date: Thu, 9 Aug 2012 11:19:02 +0100
Message-ID: <CAEBdQ90OoRmmY7PBD-ZAkdV2ha8zs9_aP6EAANod0HkwgPQOrg@mail.gmail.com>
To: Tim Deegan <tim@xen.org>
Cc: Jean Guyader <jean.guyader@citrix.com>, Jan Beulich <JBeulich@suse.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 1/5] xen: add ssize_t
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 9 August 2012 10:51, Tim Deegan <tim@xen.org> wrote:
> At 15:47 +0100 on 06 Aug (1344268059), Jean Guyader wrote:
>> On 6 August 2012 09:08, Jan Beulich <JBeulich@suse.com> wrote:
>> >>>> On 03.08.12 at 21:50, Jean Guyader <jean.guyader@citrix.com> wrote:
>> >
>> > Without finally explaining why you need this type in the first place,
>> > I'll continue to NAK this patch. (This is made even worse by the fact
>> > that the two inline functions in patch 5 that make use of the type
>> > appear to be unused.)
>> >
>>
>> Understood. I'll switch to using long instead of ssize_t in my
>> forthcoming patch series.
>
> Please use an explicitly 64-bit type - AFAICS you're holding the sum of
> some 64-bit length fields.
>

OK, but ssize_t is kind of a funny one: it should accept everything
that size_t can accept, plus negative values.

The Linux kernel defines ssize_t as int on 32-bit and long on 64-bit.
I could do the same, then check the size of the copy and return
-EMSGSIZE if it is bigger than INT_MAX/LONG_MAX.

http://lxr.free-electrons.com/source/include/asm-generic/posix_types.h#L68
(look for __kernel_ssize_t)

Jean

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 09 10:19:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 10:19:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzPq9-0001PC-G3; Thu, 09 Aug 2012 10:19:41 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SzPq7-0001P2-HG
	for xen-devel@lists.xen.org; Thu, 09 Aug 2012 10:19:39 +0000
Received: from [85.158.143.99:64190] by server-1.bemta-4.messagelabs.com id
	0F/CD-20198-ABE83205; Thu, 09 Aug 2012 10:19:38 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-216.messagelabs.com!1344507577!22085554!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg2ODI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2303 invoked from network); 9 Aug 2012 10:19:38 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-12.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Aug 2012 10:19:38 -0000
X-IronPort-AV: E=Sophos;i="4.77,739,1336348800"; d="scan'208";a="13926916"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	09 Aug 2012 10:19:37 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Thu, 9 Aug 2012
	11:19:38 +0100
Message-ID: <1344507576.32142.113.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Thu, 9 Aug 2012 11:19:36 +0100
In-Reply-To: <20502.48288.351664.168722@mariner.uk.xensource.com>
References: <20502.48288.351664.168722@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] docs: document hotplug script protocol
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2012-07-30 at 17:56 +0100, Ian Jackson wrote:

> diff --git a/docs/misc/xl-scripts-block.txt b/docs/misc/xl-scripts-block.txt
> new file mode 100644
> index 0000000..d6a5123
> --- /dev/null
> +++ b/docs/misc/xl-scripts-block.txt

It'd be nice to use markdown.

> @@ -0,0 +1,246 @@
> +                    -----------------------
> +                    DEVICE SCRIPT INTERFACE
> +                    -----------------------
> +
> +Guest devices can be specified in [lib]xl to be provided with the help
> +of a "script".
> +
> +This document describes the interface between the toolstack and device
> +scripts.  This protocol was partially reverse-engineered from the xend
> +infrastructure during the development of libxl; the interface is
> +intended to be compatible between libxl and xend.  This document
> +describes the protocol implemented by libxl.
> +
> +This description is applicable to Linux.
> +
> +Only block and network devices support these scripts.
> +
> +Note that we are consider deprecating this protocol after the release

                    considering

> +of Xen 4.2, or providing a more sophisticated replacement for it.

I'm not a huge fan of these sorts of TODO list statements scattered
around docs; they quickly become out of date or irrelevant.

> +
> +
> +=============
> +BLOCK SCRIPTS
> +=============
> +
> +When a script is specifed, the <target> value from the

                   specified

> +domain configuration (the pdev_path value in the libxl API) is not
> +interpreted by libxl.  Instead, it is passed as an opaque parameter to
> +the script.
> +
> +The block script is responsible for taking the pdev_path (<target> from the
> +xl configuration), henceforth known as the "params", and providing a
> +block device in the toolstack domain.

Do you mean backend domain here?

> +
> +Block scripts are optional; if not specified, the pdev_path is used as
> +the path of the intended block device or file in the driver domain.
> +
> +
> +
> +NETWORK SCRIPTS
> +===============
> +
> +When a guest's networking is being set up, it appears in the driver
> +domain as a virtual network device (a "vif" or "tap" device).
> +
> +The network script is responsible for plumbing this virtual network
> +device through to the driver domain's networking arrangements.  For
> +example, enslaving the virtual device to a suitable bridge, creating
> +routes to it, or whatever is necessary.
> +
> +The script is also responsible for renaming the interface as may be
> +desired.

Should we mention here and in the block section that the script is
responsible for tearing stuff down as well as setting it up?

> +=====
> +MODEL
> +=====
> +
> +We will define a "connection" which is the making available of a
> +single block or network device, relating to a single guest, to a
> +single facility (in the guest or toolstack or service domain).
> +
> +It may be necessary to make multiple connections for one guest device;
> +even multiple connections which overlap in time.
> +
> +Such connection is created by an "add" execution of the script and
> +destroyed by a "remove" execution.  The script is invoked in the
> +driver domain (usually dom0).
> +
> +
> +=========
> +EXECUTION
> +=========
> +
> +Environment
> +-----------
> +
> +The scripts are invoked with the following environment variables:
> +
> + script=<value specified in domain configuration>
> +
> +     This may or may not be a fully qualified path.  Scripts should
> +     not use this information as the form is not reliable.

Looking at the kernel I think this is only supplied by netback in the
traditional udev based model. libxl does indeed supply it in both cases
when it runs the hotplug scripts.

Only the vif-setup script currently makes use of it. AFAIK vif-setup is
only called from the udev based mechanism to shell out to $script; libxl
will call $script itself directly.

We could therefore declare this variable to be internal to the udev
based hotplug script infrastructure and not for use by "users". 

> + XENBUS_TYPE=vbd    [block devices]
> + XENBUS_TYPE=vif    [network devices]
> +
> + XENBUS_PATH=<backend>
> +
> +     This is a path in xenstore which is used extensively to
> +     communicate between the script and the rest of the backend
> +     system.  This may be a relative or absolute xenstore path.
> +
> +     See below for the contents and usage of this value.
> +
> + XENBUS_BASE_PATH=backend
> +
> +     Do not use this value.
> +
> + vif=<devname>             [network vif devices]
> + INTERFACE=<devname>       [network tap devices]
> +
> +     Current name of the virtual device in the toolstack domain.

"Initial name" perhaps, since strictly speaking someone might have
already renamed it (although the scripts are a bit stuffed if someone
does)

> +
> +
> +Arguments
> +---------
> +
> +Block scripts:
> +   The script will get one argument, "add" or "remove"
> +
> +Network scripts:
> +   The script will get two arguments.  They vary according to
> +   whether the script is running with respect to a vif or
> +   a tap virtual device:
> +
> +                arguments for add        arguments for remove
> +   vif device   "online" "type_if=vif"   "offline" "type_if=vif"
> +   tap device   "add" "type_if=tap"      "remove" "type_if=tap"

True, sigh.

> +Exit status and error handling
> +------------------------------
> +
> +The script should exit zero if it is successful.  If it fails it
> +should clean everything up, try to write an error message to the
> +hotplug-error path (see below), and exit 1 or >=126 or die from a
> +signal.
> +
> +Failure of "remove" scripts will be logged and reported but there are
> +no arrangements to re-invoke a failed script.

Is it worth referencing the helper libraries which we supply
(xen-hotplug-common.sh, {block,vif}-common.sh etc).

Although it's true that scripts can just do all this themselves it would
be more consistent and likely to be done correctly if people would use
the supplied functions.

> +
> +
> +Locking, timeouts, concurrency
> +------------------------------
> +
> +Scripts should not block indefinitely.  If a script takes longer than
> +30s to exit it will be sent a SIGKILL which will result in the
> +operation being considered failed.

30s? LIBXL_{INIT,DESTROY,HOTPLUG}_TIMEOUT are all 10s (I'm not sure
which if any actually applies here).

It's also a bit toolstack specific, IIRC xapi has a much longer timeout
(it also has its own hotplug scripts but eventually I suppose we'd like
to converge)

> +
> +The scripts may be invoked concurrently for the same or different
> +resources, and the same or different guests, and must do any necessary
> +locking.

Again point to the helpers in locking.h and *-common.sh?

> +
> +
> +
> +========
> +XENSTORE
> +========
> +
> +The <backend> directory in xenstore contains keys and values used to
> +communicate between the script and the rest of the infrastructure.
> +
> +The contents of this path are shared between the script and other
> +parts of the infrastructure.  The script should therefore not write
> +arbitrary subpaths for its own purposes.
> +
> +The following paths are defined:
> +
> +
> +All devices
> +-----------
> +
> + <backend>/hotplug-error
> +
> +     If a script is about to fail, it may write an error message
> +     string to this path.  The string will be reported to the user.
> +
> +
> +Block devices
> +-------------
> +
> + <backend>/params
> +
> +     The params.  Ie, the <target> string from the configuration file,
> +     which corresponds to the "pdev_path" in the libxl API.
> +
> +     This is not interpreted by the toolstack and the script may
> +     define whatever syntax it likes for it.  Newlines, whitespace,
> +     backslashes, quotes, and so forth, should be avoided as
> +     specifying them from a config file may be difficult.  Consult the
> +     xl configuration API documentation for information about the
> +     exact syntactic transformations in xl.

Reference xl-disk-configuration specifically?

> +
> +     The value is written by the toolstack before the add script is
> +     run and remains present until the remove script has finished.  It
> +     should not be modified by the script.
> +  
> + <backend>/mode = "r" | "w"
> +
> +     The device is to be readonly or read/write, respectively.

Maybe "read-only"? (perhaps it is alloneword, I'm not sure)

> +     Written by the toolstack; should not be modified.
> +
> + <backend>/physical-device
> +
> +     The main result of the add script.  This key does not exist on
> +     entry to add.  The add script should, if successful, write this
> +     key with the major and minor number of the block device
> +     (as accessible in the driver domain) in the format "%x:%x".
> +
> +
> +Other information that may be useful to the script is also present.
> +The block script does not need to pay attention to this if it doesn't
> +want to.  These keys include:
> +
> + <backend>/removable = "0" | "1"       [block devices]
> +
> +     Indicates whether the device is being presented to the guest as a
> +     removable device.
> +
> + <backend>/device-type = "disk" | "cdrom"       [block devices]
> +
> +
> +Network devices
> +---------------
> +
> +All of these values are provided by the toolstack to the script and
> +should not be changed by the script:
> +
> + <backend>/vifname      [network devices]

I guess "[network devices]" (and "[block devices]" above) predate you
splitting them into their own sections?

> +
> +     Name that the interface should be renamed to.
> +
> + <backend>/mac
> +
> +     mac parameter from the libxl API.
> +     Ethernet address which is to be used by the guest.
> +
> + <backend>/ip
> +
> +     ip parameter from the libxl API.
> +     IP address which is to be used by the guest.
> +
> + <backend>/bridge
> +
> +     bridge parameter from the libxl API.
> +     Bridge to which the network should be enslaved (not interpreted
> +     by the libxl toolstack).

I suppose "ip" and "bridge" are something like the block scripts'
"params" node, in so much as we could consider making them more
explicitly generic and opaque to the toolstack in the future.

In practice at the moment bridge is the parameter for vif-bridge and ip
is the parameter for vif-{route,nat}. Or something like that anyway.

> +
> + <backend>/rate = "%"PRIu64",%"PRIu32""
> +
> +     rate_bytes_per_interval and rate_interval_usecs parameters
> +     from the libxl API.
> +     The network script is responsible for enforcing this.

Actually, netback does this itself in practice.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 09 10:19:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 10:19:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzPq9-0001PC-G3; Thu, 09 Aug 2012 10:19:41 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SzPq7-0001P2-HG
	for xen-devel@lists.xen.org; Thu, 09 Aug 2012 10:19:39 +0000
Received: from [85.158.143.99:64190] by server-1.bemta-4.messagelabs.com id
	0F/CD-20198-ABE83205; Thu, 09 Aug 2012 10:19:38 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-216.messagelabs.com!1344507577!22085554!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg2ODI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2303 invoked from network); 9 Aug 2012 10:19:38 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-12.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Aug 2012 10:19:38 -0000
X-IronPort-AV: E=Sophos;i="4.77,739,1336348800"; d="scan'208";a="13926916"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	09 Aug 2012 10:19:37 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Thu, 9 Aug 2012
	11:19:38 +0100
Message-ID: <1344507576.32142.113.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Thu, 9 Aug 2012 11:19:36 +0100
In-Reply-To: <20502.48288.351664.168722@mariner.uk.xensource.com>
References: <20502.48288.351664.168722@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] docs: document hotplug script protocol
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2012-07-30 at 17:56 +0100, Ian Jackson wrote:

> diff --git a/docs/misc/xl-scripts-block.txt b/docs/misc/xl-scripts-block.txt
> new file mode 100644
> index 0000000..d6a5123
> --- /dev/null
> +++ b/docs/misc/xl-scripts-block.txt

It'd be nice to use markdown.

> @@ -0,0 +1,246 @@
> +                    -----------------------
> +                    DEVICE SCRIPT INTERFACE
> +                    -----------------------
> +
> +Guest devices can be specified in [lib]xl to be provided with the help
> +of a "script".
> +
> +This document describes the interface between the toolstack and device
> +scripts.  This protocol was partially reverse-engineered from the xend
> +infrastructure during the development of libxl; the interface is
> +intended to be compatible between libxl and xend.  This document
> +describes the protocol implemented by libxl.
> +
> +This description is applicable to Linux.
> +
> +Only block and network devices support these scripts.
> +
> +Note that we are consider deprecating this protocol after the release

                    considering

> +of Xen 4.2, or providing a more sophisticated replacement for it.

I'm not a huge fan of these sorts of TODO-list statements scattered
around docs; they quickly become out of date or irrelevant.

> +
> +
> +=============
> +BLOCK SCRIPTS
> +=============
> +
> +When a script is specifed, the <target> value from the

                   specified

> +domain configuration (the pdev_path value in the libxl API) is not
> +interpreted by libxl.  Instead, it is passed as an opaque parameter to
> +the script.
> +
> +The block script is responsible for taking the pdev_path (<target> from the
> +xl configuration), henceforth known as the "params", and providing a
> +block device in the toolstack domain.

Do you mean backend domain here?

> +
> +Block scripts are optional; if not specified, the pdev_path is used as
> +the path of the intended block device or file in the driver domain.
> +
> +
> +
> +NETWORK SCRIPTS
> +===============
> +
> +When a guest's networking is being set up, it appears in the driver
> +domain as a virtual network device (a "vif" or "tap" device).
> +
> +The network script is responsible for plumbing this virtual network
> +device through to the driver domain's networking arrangements.  For
> +example, enslaving the virtual device to a suitable bridge, creating
> +routes to it, or whatever is necessary.
> +
> +The script is also responsible for renaming the interface as may be
> +desired.

Should we mention here and in the block section that the script is
responsible for tearing stuff down as well as setting it up?

> +=====
> +MODEL
> +=====
> +
> +We will define a "connection", which is the making available of a
> +single block or network device, relating to a single guest, to a
> +single facility (in the guest or toolstack or service domain).
> +
> +It may be necessary to make multiple connections for one guest device;
> +even multiple connections which overlap in time.
> +
> +Such a connection is created by an "add" execution of the script and
> +destroyed by a "remove" execution.  The script is invoked in the
> +driver domain (usually dom0).
> +
> +
> +=========
> +EXECUTION
> +=========
> +
> +Environment
> +-----------
> +
> +The scripts are invoked with the following environment variables:
> +
> + script=<value specified in domain configuration>
> +
> +     This may or may not be a fully qualified path.  Scripts should
> +     not use this information as the form is not reliable.

Looking at the kernel I think this is only supplied by netback in the
traditional udev based model. libxl does indeed supply it in both cases
when it runs the hotplug scripts.

Only the vif-setup script currently makes use of it. AFAIK vif-setup is
only called from the udev based mechanism to shell out to $script, libxl
will call $script itself directly.

We could therefore declare this variable to be internal to the udev
based hotplug script infrastructure and not for use by "users". 

> + XENBUS_TYPE=vbd    [block devices]
> + XENBUS_TYPE=vif    [network devices]
> +
> + XENBUS_PATH=<backend>
> +
> +     This is a path in xenstore which is used extensively to
> +     communicate between the script and the rest of the backend
> +     system.  This may be a relative or absolute xenstore path.
> +
> +     See below for the contents and usage of this value.
> +
> + XENBUS_BASE_PATH=backend
> +
> +     Do not use this value.
> +
> + vif=<devname>             [network vif devices]
> + INTERFACE=<devname>       [network tap devices]
> +
> +     Current name of the virtual device in the toolstack domain.

"Initial name" perhaps, since strictly speaking someone might have
already renamed it (although the scripts are a bit stuffed if someone
does)

> +
> +
> +Arguments
> +---------
> +
> +Block scripts:
> +   The script will get one argument, "add" or "remove"
> +
> +Network scripts:
> +   The script will get two arguments.  They vary according to
> +   whether the script is running with respect to a vif or
> +   a tap virtual device:
> +
> +                arguments for add        arguments for remove
> +   vif device   "online" "type_if=vif"   "offline" "type_if=vif"
> +   tap device   "add" "type_if=tap"      "remove" "type_if=tap"

True, sigh.
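
A script that serves both device types typically normalizes these
asymmetric argument pairs before doing anything else. A minimal sketch
(the `normalize_command` helper is invented for illustration; it is not
one of the shipped Xen scripts):

```shell
#!/bin/sh
# Map the four possible command arguments ("online"/"offline" for vif
# devices, "add"/"remove" for tap devices) onto a single add/remove
# pair, so the rest of the script is device-type agnostic.
normalize_command() {
    case "$1" in
        online|add)     echo add ;;
        offline|remove) echo remove ;;
        *)              echo unknown ;;
    esac
}
```

The script body can then dispatch on `$(normalize_command "$1")` and
only consult `$2` (type_if=vif or type_if=tap) where the device types
genuinely differ.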

> +Exit status and error handling
> +------------------------------
> +
> +The script should exit zero if it is successful.  If it fails it
> +should clean everything up, try to write an error message to the
> +hotplug-error path (see below), and exit 1 or >=126 or die from a
> +signal.
> +
> +Failure of "remove" scripts will be logged and reported but there are
> +no arrangements to re-invoke a failed script.

Is it worth referencing the helper libraries which we supply
(xen-hotplug-common.sh, {block,vif}-common.sh etc).

Although it's true that scripts can just do all this themselves it would
be more consistent and likely to be done correctly if people would use
the supplied functions.
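
As an illustration of that error path, a script might centralize it in
one helper. This is a hypothetical sketch, not the shipped helper;
`xenstore-write` is only invoked if it is actually installed:

```shell
#!/bin/sh
# Illustrative error helper: record the failure in the hotplug-error
# node (when xenstore-write is available) and exit non-zero, as the
# protocol requires on failure.  XENBUS_PATH is supplied to the script
# in its environment by the toolstack.
fatal() {
    msg=$1
    if command -v xenstore-write >/dev/null 2>&1; then
        xenstore-write "$XENBUS_PATH/hotplug-error" "$msg"
    fi
    echo "hotplug error: $msg" >&2
    exit 1
}
```

A script would call it as, e.g., `fatal "loop device exhausted"` after
undoing any partial setup.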

> +
> +
> +Locking, timeouts, concurrency
> +------------------------------
> +
> +Scripts should not block indefinitely.  If a script takes longer than
> +30s to exit it will be sent a SIGKILL which will result in the
> +operation being considered failed.

30s? LIBXL_{INIT,DESTROY,HOTPLUG}_TIMEOUT are all 10s (I'm not sure
which if any actually applies here).

It's also a bit toolstack specific, IIRC xapi has a much longer timeout
(it also has its own hotplug scripts but eventually I suppose we'd like
to converge)

> +
> +The scripts may be invoked concurrently for the same or different
> +resources, and the same or different guests, and must do any necessary
> +locking.

Again point to the helpers in locking.h and *-common.sh?
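
One portable way to satisfy the locking requirement is a mkdir-based
mutex; this is a toy reimplementation of the claim_lock/release_lock
idea from xen-hotplug-common.sh, for illustration only (mkdir is atomic
on POSIX filesystems):

```shell
#!/bin/sh
# Per-resource mutual exclusion via atomic mkdir.  The lock name should
# identify the shared resource (e.g. the block device), not the guest.
LOCKDIR=${LOCKDIR:-/tmp/xen-hotplug-locks}

claim_lock() {
    mkdir -p "$LOCKDIR"
    tries=0
    while ! mkdir "$LOCKDIR/$1" 2>/dev/null; do
        tries=$((tries + 1))
        [ "$tries" -lt 300 ] || return 1   # give up rather than block forever
        sleep 0.1    # GNU sleep; POSIX sleep only takes whole seconds
    done
}

release_lock() {
    rmdir "$LOCKDIR/$1" 2>/dev/null
}
```

Usage would bracket the critical section: `claim_lock block-xvda`,
do the work, `release_lock block-xvda` (also on the error path).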

> +
> +
> +
> +========
> +XENSTORE
> +========
> +
> +The <backend> directory in xenstore contains keys and values used to
> +communicate between the script and the rest of the infrastructure.
> +
> +The contents of this path are shared between the script and other
> +parts of the infrastructure.  The script should therefore not write
> +arbitrary subpaths for its own purposes.
> +
> +The following paths are defined:
> +
> +
> +All devices
> +-----------
> +
> + <backend>/hotplug-error
> +
> +     If a script is about to fail, it may write an error message
> +     string to this path.  The string will be reported to the user.
> +
> +
> +Block devices
> +-------------
> +
> + <backend>/params
> +
> +     The params.  Ie, the <target> string from the configuration file,
> +     which corresponds to the "pdev_path" in the libxl API.
> +
> +     This is not interpreted by the toolstack and the script may
> +     define whatever syntax it likes for it.  Newlines, whitespace,
> +     backslashes, quotes, and so forth, should be avoided as
> +     specifying them from a config file may be difficult.  Consult the
> +     xl configuration API documentation for information about the
> +     exact syntactic transformations in xl.

Reference xl-disk-configuration specifically?

> +
> +     The value is written by the toolstack before the add script is
> +     run and remains present until the remove script has finished.  It
> +     should not be modified by the script.
> +  
> + <backend>/mode = "r" | "w"
> +
> +     The device is to be readonly or read/write, respectively.

Maybe "read-only"? (perhaps it is alloneword, I'm not sure)

> +     Written by the toolstack; should not be modified.
> +
> + <backend>/physical-device
> +
> +     The main result of the add script.  This key does not exist on
> +     entry to add.  The add script should, if successful, write this
> +     key with the major and minor number of the block device
> +     (as accessible in the driver domain) in the format "%x:%x".
> +
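As an aside on the "%x:%x" format above: for a device node the value
can be derived directly, since stat(1)'s %t/%T format specifiers
already print the major and minor numbers in hex. A sketch (GNU stat
assumed; this is not the shipped block script):

```shell
#!/bin/sh
# Print <major>:<minor> of a device node in the hex "%x:%x" form
# expected under <backend>/physical-device.
device_major_minor() {
    stat -L -c '%t:%T' "$1"
}

# An add script would then do something like:
#   xenstore-write "$XENBUS_PATH/physical-device" "$(device_major_minor "$dev")"
device_major_minor /dev/null    # the char device /dev/null is 1:3 on Linux
```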
> +
> +Other information that may be useful to the script is also present.
> +The block script does not need to pay attention to this if it doesn't
> +want to.  These keys include:
> +
> + <backend>/removable = "0" | "1"       [block devices]
> +
> +     Indicates whether the device is being presented to the guest as a
> +     removable device.
> +
> + <backend>/device-type = "disk" | "cdrom"       [block devices]
> +
> +
> +Network devices
> +---------------
> +
> +All of these values are provided by the toolstack to the script and
> +should not be changed by the script:
> +
> + <backend>/vifname      [network devices]

I guess "[network devices]" (and "[block devices]" above) predate you
splitting them into their own sections?

> +
> +     Name that the interface should be renamed to.
> +
> + <backend>/mac
> +
> +     mac parameter from the libxl API.
> +     Ethernet address which is to be used by the guest.
> +
> + <backend>/ip
> +
> +     ip parameter from the libxl API.
> +     IP address which is to be used by the guest.
> +
> + <backend>/bridge
> +
> +     bridge parameter from the libxl API.
> +     Bridge to which the network should be enslaved (not interpreted
> +     by the libxl toolstack).

I suppose "ip" and "bridge" are something like the block scripts'
"params" node, in so much as we could consider making them more
explicitly generic and opaque to the toolstack in the future.

In practice at the moment bridge is the parameter for vif-bridge and ip
is the parameter for vif-{route,nat}. Or something like that anyway.

> +
> + <backend>/rate = "%"PRIu64",%"PRIu32""
> +
> +     rate_bytes_per_interval and rate_interval_usecs parameters
> +     from the libxl API.
> +     The network script is responsible for enforcing this.

Actually, netback does this itself in practice.
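
The rate node is the only multi-field value described here; splitting
it in a script is a one-liner. An illustrative sketch (in a real script
the value would come from `xenstore-read "$XENBUS_PATH/rate"`):

```shell
#!/bin/sh
# Split the "<bytes>,<usecs>" rate value into its two fields,
# rate_bytes_per_interval and rate_interval_usecs.
parse_rate() {
    IFS=, read -r rate_bytes_per_interval rate_interval_usecs <<EOF
$1
EOF
}
```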

Ian.



From xen-devel-bounces@lists.xen.org Thu Aug 09 10:21:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 10:21:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzPs6-0001aK-7D; Thu, 09 Aug 2012 10:21:42 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <aware.why@gmail.com>) id 1SzPs5-0001aA-2s
	for xen-devel@lists.xensource.com; Thu, 09 Aug 2012 10:21:41 +0000
Received: from [85.158.139.83:60966] by server-5.bemta-5.messagelabs.com id
	D5/A3-03096-43F83205; Thu, 09 Aug 2012 10:21:40 +0000
X-Env-Sender: aware.why@gmail.com
X-Msg-Ref: server-9.tower-182.messagelabs.com!1344507698!30342122!1
X-Originating-IP: [74.125.83.43]
X-SpamReason: No, hits=1.1 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_60_70, HTML_MESSAGE, ML_RADAR_SPEW_LINKS_14, RCVD_BY_IP,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9186 invoked from network); 9 Aug 2012 10:21:38 -0000
Received: from mail-ee0-f43.google.com (HELO mail-ee0-f43.google.com)
	(74.125.83.43)
	by server-9.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Aug 2012 10:21:38 -0000
Received: by eekd4 with SMTP id d4so89034eek.30
	for <xen-devel@lists.xensource.com>;
	Thu, 09 Aug 2012 03:21:38 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=N0jKQC88pKEo7FnBbtQbXtgQmKTKlq+p9Gl+7/dNuGc=;
	b=aE1i92+yvJ5L0+jS5bFv1jVbzpneFUmVsyVfWJ2esUGwZs1UR9X0hcCPL6zjg0abIC
	fpd7qps0Q6n7bK/S8rdHupyt1wIl68hnA63wZfeBHweSKt/9D4VO85Y/lDptnOqCtq2T
	S05m90j2dpjeOdduNUWo5y6bHHBcFkTnoC2abh5vteDOFjbCeJctD9yIoDh/ZEC7CaXH
	e5ar2DJ0rd4PxuAjpwQSTGYu6V6o6Mrb3adSh9TEwIJBN4zJbx1RgOXUT41W800hM0or
	Ypary1QsmsyAyasOM0PDhgjhHHEk4vwKLmAFAt/RWnMJvhMEjxhxe+oET3JDQJPKyxfU
	8hbg==
MIME-Version: 1.0
Received: by 10.14.223.72 with SMTP id u48mr4315990eep.37.1344507698541; Thu,
	09 Aug 2012 03:21:38 -0700 (PDT)
Received: by 10.14.218.200 with HTTP; Thu, 9 Aug 2012 03:21:38 -0700 (PDT)
In-Reply-To: <1344506305.32142.93.camel@zakaz.uk.xensource.com>
References: <CA+ePHTAGBYkyO1kUH2Cuf_fRRB4ELZeUO5Ot_d79c30N1=+VUQ@mail.gmail.com>
	<CA+ePHTAQHidwtaATLsH6kVz89sTs6vF_=k1OQ462JL4UBVZsnA@mail.gmail.com>
	<1344500485.32142.70.camel@zakaz.uk.xensource.com>
	<CA+ePHTCW-NL0S10J3Ru9xhuO+i2-UUHFnjD9H0==DrMWNcW58g@mail.gmail.com>
	<1344506305.32142.93.camel@zakaz.uk.xensource.com>
Date: Thu, 9 Aug 2012 18:21:38 +0800
Message-ID: <CA+ePHTATByp2yk_qBEJXNT_wVBPZcNrDn6T87AzHFKevRiCtMA@mail.gmail.com>
From: =?UTF-8?B?6ams56OK?= <aware.why@gmail.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [help] problem with `tools/xenstore/xs.c:
	xs_talkv()`
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8930282536863291018=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8930282536863291018==
Content-Type: multipart/alternative; boundary=047d7b622810c4f5f404c6d29757

--047d7b622810c4f5f404c6d29757
Content-Type: text/plain; charset=GB2312
Content-Transfer-Encoding: quoted-printable

On Thu, Aug 9, 2012 at 5:58 PM, Ian Campbell <Ian.Campbell@citrix.com> wrote:

> On Thu, 2012-08-09 at 10:50 +0100, 马磊 wrote:
> >
> > On Thu, Aug 9, 2012 at 4:21 PM, Ian Campbell <Ian.Campbell@citrix.com>
> > wrote:
> >         On Thu, 2012-08-09 at 08:07 +0100, 马磊 wrote:
> >         >
> >         > On Wed, Aug 8, 2012 at 4:32 PM, 马磊 <aware.why@gmail.com>
> >         > wrote:
> >         >         Hi all,
> >         >
> >         >             In xen-4.1.2 src/tools/xenstore/xs.c, the
> >         >         xs_talkv function contains the following lines:
> >         >
> >         >          437        mutex_lock(&h->request_mutex);
> >         >          438
> >         >          439        if (!xs_write_all(h->fd, &msg, sizeof(msg)))
> >         >          440                goto fail;
> >         >          441
> >         >          442        for (i = 0; i < num_vecs; i++)
> >         >          443                if (!xs_write_all(h->fd, iovec[i].iov_base, iovec[i].iov_len))
> >         >          444                        goto fail;
> >         >          445
> >         >          446        ret = read_reply(h, &msg.type, len);
> >         >          447        if (!ret)
> >         >          448                goto fail;
> >         >          449
> >         >          450        mutex_unlock(&h->request_mutex);
> >         >
> >         >         The above seems to tell me that after writing to
> >         >         h->fd, read_reply invokes read_message, which
> >         >         immediately reads from h->fd?  What is meant by
> >         >         this?
> >         >
> >         >         Thanks
> >         >
> >         > If h->fd refers to a socket descriptor, writing and then
> >         > immediately reading is understandable; in that case the fd
> >         > is returned by get_handle(xs_daemon_socket(), flags).
> >         >
> >         > But when the fd is retrieved by get_handle(xs_domain_dev(),
> >         > flags), it means writing to a file and then immediately
> >         > reading from the same file.  Does it have something to do
> >         > with the internal communication protocol?!
> >
> >         Yes, the xenstore protocol involves both writing messages and
> >         reading replies, but that seems trivially obvious so I'm
> >         afraid I really have no idea what your question is nor what
> >         is confusing you. Perhaps describing in more detail what you
> >         are trying to achieve will help?
> >
> >         Reading http://wiki.xen.org/wiki/Asking_Xen_Devel_Questions
> >         might also help you consider what it is you are asking.
> >
> >         > Thanks for replying.
> >
> > The final read and write operations are achieved by:
> > read(fd, data, len);
> > write(fd, data, len);
> >
> > Maybe my confusion lies in this point: what is the distinction
> > between the read and write operations on a socket file,
> > on /proc/xen/xenbus, and on a regular file?
>
> /proc/xen/xenbus is not a regular file. /proc is a virtual file system
> where the files often have special and magic semantics. /proc/xen/xenbus
> is effectively something like a character device, even though it isn't
> actually implemented as one.
>
> Take a look at drivers/xen/xenbus/xenbus_dev_frontend.c, which is the
> driver that backs /proc/xen/xenbus.
>
> Ian.

I didn't find xenbus_dev_frontend.c with the `` find / -name "*xenbus*"
2>/dev/null `` command.

&gt; &nbsp; &nbsp; &nbsp; &nbsp; &gt; means to write to a file and then rea=
d from the same file<br>
&gt; &nbsp; &nbsp; &nbsp; &nbsp; imediatelly.<br>
&gt; &nbsp; &nbsp; &nbsp; &nbsp; &gt; Dose it have something to do with the=
 internal communication<br>
&gt; &nbsp; &nbsp; &nbsp; &nbsp; &gt; protocol?!<br>
&gt;<br>
&gt;<br>
&gt; &nbsp; &nbsp; &nbsp; &nbsp; Yes, the xenstore protocol involves both w=
riting messages and<br>
&gt; &nbsp; &nbsp; &nbsp; &nbsp; reading<br>
&gt; &nbsp; &nbsp; &nbsp; &nbsp; replies, but that seems trivially obvious =
so I&#39;m afraid I<br>
&gt; &nbsp; &nbsp; &nbsp; &nbsp; really have no<br>
&gt; &nbsp; &nbsp; &nbsp; &nbsp; idea what your question is nor what is con=
fusing you. Perhaps<br>
&gt; &nbsp; &nbsp; &nbsp; &nbsp; describing<br>
&gt; &nbsp; &nbsp; &nbsp; &nbsp; in more detail what you are trying to achi=
eve will help?<br>
&gt;<br>
&gt; &nbsp; &nbsp; &nbsp; &nbsp; Reading <a href=3D"http://wiki.xen.org/wik=
i/Asking_Xen_Devel_Questions" target=3D"_blank">http://wiki.xen.org/wiki/As=
king_Xen_Devel_Questions</a><br>
&gt; &nbsp; &nbsp; &nbsp; &nbsp; might also<br>
&gt; &nbsp; &nbsp; &nbsp; &nbsp; help you consider what it is you are askin=
g.<br>
&gt;<br>
&gt;<br>
&gt; &nbsp; &nbsp; &nbsp; &nbsp; &gt;<br>
&gt; &nbsp; &nbsp; &nbsp; &nbsp; &gt;<br>
&gt; &nbsp; &nbsp; &nbsp; &nbsp; &gt; Thanks for replying.<br>
&gt; &nbsp; &nbsp; &nbsp; &nbsp; &gt;<br>
&gt; &nbsp; &nbsp; &nbsp; &nbsp; &gt;<br>
&gt; &nbsp; &nbsp; &nbsp; &nbsp; &gt;<br>
&gt; &nbsp; &nbsp; &nbsp; &nbsp; &gt;<br>
&gt; &nbsp; &nbsp; &nbsp; &nbsp; &gt;<br>
&gt; &nbsp; &nbsp; &nbsp; &nbsp; &gt;<br>
&gt; &nbsp; &nbsp; &nbsp; &nbsp; The final read and write operations are ac=
hieved by:<br>
&gt; read(fd, data, len);<br>
&gt; write(fd, data, len);<br>
&gt;<br>
&gt;<br>
&gt; Maybe my confusing lies in this point that what&#39;s the distinction<=
br>
&gt; between the read and write operations on a socket file,<br>
&gt; the /proc/xen/xenbus, a regular file?<br>
<br>
</div></div>/proc/xen/xenbus is not a regular file. /proc is a virtual file=
 system<br>
where the files often have special and magic semantics. /proc/xen/xenbus<br=
>
is effectively something like a character device, even though it isn&#39;t<=
br>
actually implemented as one.<br>
<br>
Take a look at drivers/xen/xenbus/xenbus_dev_frontend.c which is the<br>
driver which backends /proc/xen/xenbus<br>
<br>
Ian.<br>
<span class=3D"HOEnZb"><font color=3D"#888888"><br>
<br>
Ian.<br>
<br>
<br>
<br>
</font></span></blockquote></div><br><div>I didn&#39;t find the xenbus_dev_=
frontend.c by `` find / -name &quot;*xenbus*&quot; &nbsp; 2&gt;/dev/null ``=
 command.</div><div><br></div>

--047d7b622810c4f5f404c6d29757--


--===============8930282536863291018==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8930282536863291018==--


From xen-devel-bounces@lists.xen.org Thu Aug 09 10:21:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 10:21:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzPs6-0001aK-7D; Thu, 09 Aug 2012 10:21:42 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <aware.why@gmail.com>) id 1SzPs5-0001aA-2s
	for xen-devel@lists.xensource.com; Thu, 09 Aug 2012 10:21:41 +0000
Received: from [85.158.139.83:60966] by server-5.bemta-5.messagelabs.com id
	D5/A3-03096-43F83205; Thu, 09 Aug 2012 10:21:40 +0000
X-Env-Sender: aware.why@gmail.com
X-Msg-Ref: server-9.tower-182.messagelabs.com!1344507698!30342122!1
X-Originating-IP: [74.125.83.43]
X-SpamReason: No, hits=1.1 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_60_70, HTML_MESSAGE, ML_RADAR_SPEW_LINKS_14, RCVD_BY_IP,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9186 invoked from network); 9 Aug 2012 10:21:38 -0000
Received: from mail-ee0-f43.google.com (HELO mail-ee0-f43.google.com)
	(74.125.83.43)
	by server-9.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Aug 2012 10:21:38 -0000
Received: by eekd4 with SMTP id d4so89034eek.30
	for <xen-devel@lists.xensource.com>;
	Thu, 09 Aug 2012 03:21:38 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=N0jKQC88pKEo7FnBbtQbXtgQmKTKlq+p9Gl+7/dNuGc=;
	b=aE1i92+yvJ5L0+jS5bFv1jVbzpneFUmVsyVfWJ2esUGwZs1UR9X0hcCPL6zjg0abIC
	fpd7qps0Q6n7bK/S8rdHupyt1wIl68hnA63wZfeBHweSKt/9D4VO85Y/lDptnOqCtq2T
	S05m90j2dpjeOdduNUWo5y6bHHBcFkTnoC2abh5vteDOFjbCeJctD9yIoDh/ZEC7CaXH
	e5ar2DJ0rd4PxuAjpwQSTGYu6V6o6Mrb3adSh9TEwIJBN4zJbx1RgOXUT41W800hM0or
	Ypary1QsmsyAyasOM0PDhgjhHHEk4vwKLmAFAt/RWnMJvhMEjxhxe+oET3JDQJPKyxfU
	8hbg==
MIME-Version: 1.0
Received: by 10.14.223.72 with SMTP id u48mr4315990eep.37.1344507698541; Thu,
	09 Aug 2012 03:21:38 -0700 (PDT)
Received: by 10.14.218.200 with HTTP; Thu, 9 Aug 2012 03:21:38 -0700 (PDT)
In-Reply-To: <1344506305.32142.93.camel@zakaz.uk.xensource.com>
References: <CA+ePHTAGBYkyO1kUH2Cuf_fRRB4ELZeUO5Ot_d79c30N1=+VUQ@mail.gmail.com>
	<CA+ePHTAQHidwtaATLsH6kVz89sTs6vF_=k1OQ462JL4UBVZsnA@mail.gmail.com>
	<1344500485.32142.70.camel@zakaz.uk.xensource.com>
	<CA+ePHTCW-NL0S10J3Ru9xhuO+i2-UUHFnjD9H0==DrMWNcW58g@mail.gmail.com>
	<1344506305.32142.93.camel@zakaz.uk.xensource.com>
Date: Thu, 9 Aug 2012 18:21:38 +0800
Message-ID: <CA+ePHTATByp2yk_qBEJXNT_wVBPZcNrDn6T87AzHFKevRiCtMA@mail.gmail.com>
From: =?UTF-8?B?6ams56OK?= <aware.why@gmail.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [help] problem with `tools/xenstore/xs.c:
	xs_talkv()`
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8930282536863291018=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8930282536863291018==
Content-Type: multipart/alternative; boundary=047d7b622810c4f5f404c6d29757

--047d7b622810c4f5f404c6d29757
Content-Type: text/plain; charset=GB2312
Content-Transfer-Encoding: quoted-printable

On Thu, Aug 9, 2012 at 5:58 PM, Ian Campbell <Ian.Campbell@citrix.com> wrote:

> On Thu, 2012-08-09 at 10:50 +0100, 马磊 wrote:
> >
> >
> > On Thu, Aug 9, 2012 at 4:21 PM, Ian Campbell <Ian.Campbell@citrix.com>
> > wrote:
> >         On Thu, 2012-08-09 at 08:07 +0100, 马磊 wrote:
> >         >
> >         > On Wed, Aug 8, 2012 at 4:32 PM, 马磊 <aware.why@gmail.com>
> >         wrote:
> >         >         Hi all,
> >         >
> >         >             In xen-4.1.2 src/tools/xenstore/xs.c, the
> >         xs_talkv function contains the following lines:
> >         >
> >         >          437        mutex_lock(&h->request_mutex);
> >         >          438
> >         >          439        if (!xs_write_all(h->fd, &msg, sizeof(msg)))
> >         >          440                goto fail;
> >         >          441
> >         >          442        for (i = 0; i < num_vecs; i++)
> >         >          443                if (!xs_write_all(h->fd,
> >         >                  iovec[i].iov_base, iovec[i].iov_len))
> >         >          444                        goto fail;
> >         >          445
> >         >          446        ret = read_reply(h, &msg.type, len);
> >         >          447        if (!ret)
> >         >          448                goto fail;
> >         >          449
> >         >          450        mutex_unlock(&h->request_mutex);
> >         >
> >         >         The above seems to tell me that after writing to
> >         >         h->fd, read_reply invokes read_message, which
> >         >         immediately reads from h->fd?
> >         >         What is meant by this?
> >         >
> >         >         Thanks
> >         >
> >         > If h->fd refers to a socket descriptor, it is understandable
> >         > that one writes and then immediately reads; in that case the
> >         > fd is returned by get_handle(xs_daemon_socket(), flags).
> >         >
> >         > But when the fd is retrieved by get_handle(xs_domain_dev(),
> >         > flags), it means writing to a file and then immediately
> >         > reading from the same file. Does it have something to do with
> >         > the internal communication protocol?
> >
> >
> >         Yes, the xenstore protocol involves both writing messages and
> >         reading
> >         replies, but that seems trivially obvious so I'm afraid I
> >         really have no
> >         idea what your question is nor what is confusing you. Perhaps
> >         describing
> >         in more detail what you are trying to achieve will help?
> >
> >         Reading http://wiki.xen.org/wiki/Asking_Xen_Devel_Questions
> >         might also
> >         help you consider what it is you are asking.
> >
> >
> >         >
> >         >
> >         > Thanks for replying.
> >         >
> >         >
> >         >
> >         >
> >         >
> >         >
> >         The final read and write operations are ultimately just:
> > read(fd, data, len);
> > write(fd, data, len);
> >
> >
> > Maybe my confusion lies in this point: what is the distinction
> > between read and write operations on a socket and on
> > /proc/xen/xenbus, which looks like a regular file?
>
> /proc/xen/xenbus is not a regular file. /proc is a virtual file system
> where the files often have special and magic semantics. /proc/xen/xenbus
> is effectively something like a character device, even though it isn't
> actually implemented as one.
>
> Take a look at drivers/xen/xenbus/xenbus_dev_frontend.c, which is the
> driver that backs /proc/xen/xenbus.
>
> Ian.
>
>
>
>
>
>
I couldn't find xenbus_dev_frontend.c with the command
`` find / -name "*xenbus*" 2>/dev/null ``.


--047d7b622810c4f5f404c6d29757--



--===============8930282536863291018==--


From xen-devel-bounces@lists.xen.org Thu Aug 09 10:24:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 10:24:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzPuJ-0001kd-Ov; Thu, 09 Aug 2012 10:23:59 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SzPuI-0001kV-1p
	for xen-devel@lists.xen.org; Thu, 09 Aug 2012 10:23:58 +0000
Received: from [85.158.143.99:20272] by server-3.bemta-4.messagelabs.com id
	10/B0-31486-DBF83205; Thu, 09 Aug 2012 10:23:57 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-216.messagelabs.com!1344507827!19807493!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg2ODI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23621 invoked from network); 9 Aug 2012 10:23:57 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-11.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Aug 2012 10:23:57 -0000
X-IronPort-AV: E=Sophos;i="4.77,739,1336348800"; d="scan'208";a="13927008"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	09 Aug 2012 10:23:47 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Thu, 9 Aug 2012
	11:23:47 +0100
Message-ID: <1344507826.32142.116.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Tim Deegan <tim@xen.org>
Date: Thu, 9 Aug 2012 11:23:46 +0100
In-Reply-To: <20120809100605.GC16986@ocelot.phlegethon.org>
References: <1344023454-31425-1-git-send-email-jean.guyader@citrix.com>
	<1344023454-31425-5-git-send-email-jean.guyader@citrix.com>
	<20120809100605.GC16986@ocelot.phlegethon.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "Jean Guyader \(3P\)" <jean.guyader@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 4/5] xen: events,
 exposes evtchn_alloc_unbound_domain
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-09 at 11:06 +0100, Tim Deegan wrote:
> At 20:50 +0100 on 03 Aug (1344027053), Jean Guyader wrote:
> > 
> > Exposes evtchn_alloc_unbound_domain to the rest of
> > Xen so we can create allocated unbound evtchn within Xen.
> > 
> > Signed-off-by: Jean Guyader <jean.guyader@citrix.com>
> 
> > @@ -161,18 +163,18 @@ static long evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc)
> >  {
> >      struct evtchn *chn;
> >      struct domain *d;
> > -    int            port;
> > +    evtchn_port_t  port;
> >      domid_t        dom = alloc->dom;
> > -    long           rc;
> > +    int            rc;
> 
> The function returns long; if you're tidying this up to be an int, might
> as well change the return type too.

I'm not sure if this is relevant, but Jan just sent a patch to "make all
(native) hypercalls consistently have "long" return type". I
think/suspect this rc here turns into the result of the hypercall?

Jan's patch was motivated by sign extension issues when a hypercall's
int return is written to the long in the multicall argument struct,
which causes strangeness. Perhaps not totally relevant to
evtchn_alloc, which is unlikely to be in a multicall.

Ian.




From xen-devel-bounces@lists.xen.org Thu Aug 09 10:28:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 10:28:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzPyz-0001yq-Nj; Thu, 09 Aug 2012 10:28:49 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SzPyy-0001yj-BQ
	for xen-devel@lists.xensource.com; Thu, 09 Aug 2012 10:28:48 +0000
Received: from [85.158.139.83:64819] by server-10.bemta-5.messagelabs.com id
	1B/9D-24472-FD093205; Thu, 09 Aug 2012 10:28:47 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-182.messagelabs.com!1344508125!30343553!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg2ODI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7477 invoked from network); 9 Aug 2012 10:28:45 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-9.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Aug 2012 10:28:45 -0000
X-IronPort-AV: E=Sophos;i="4.77,739,1336348800"; d="scan'208";a="13927110"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	09 Aug 2012 10:28:44 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Thu, 9 Aug 2012
	11:28:45 +0100
Message-ID: <1344508123.32142.118.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: =?UTF-8?Q?=E9=A9=AC=E7=A3=8A?= <aware.why@gmail.com>
Date: Thu, 9 Aug 2012 11:28:43 +0100
In-Reply-To: <CA+ePHTATByp2yk_qBEJXNT_wVBPZcNrDn6T87AzHFKevRiCtMA@mail.gmail.com>
References: <CA+ePHTAGBYkyO1kUH2Cuf_fRRB4ELZeUO5Ot_d79c30N1=+VUQ@mail.gmail.com>
	<CA+ePHTAQHidwtaATLsH6kVz89sTs6vF_=k1OQ462JL4UBVZsnA@mail.gmail.com>
	<1344500485.32142.70.camel@zakaz.uk.xensource.com>
	<CA+ePHTCW-NL0S10J3Ru9xhuO+i2-UUHFnjD9H0==DrMWNcW58g@mail.gmail.com>
	<1344506305.32142.93.camel@zakaz.uk.xensource.com>
	<CA+ePHTATByp2yk_qBEJXNT_wVBPZcNrDn6T87AzHFKevRiCtMA@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [help] problem with `tools/xenstore/xs.c:
 xs_talkv()`
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-09 at 11:21 +0100, 马磊 wrote:

> I didn't find the xenbus_dev_frontend.c by `` find / -name "*xenbus*"
> 2>/dev/null `` command.

Look in the upstream Linux kernel source.

In some older kernels it is
drivers/xen/xenbus/xenbus_dev.c

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 09 10:31:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 10:31:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzQ1e-0002BL-U6; Thu, 09 Aug 2012 10:31:34 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SzQ1d-0002B4-5e
	for xen-devel@lists.xensource.com; Thu, 09 Aug 2012 10:31:33 +0000
Received: from [85.158.143.35:55579] by server-2.bemta-4.messagelabs.com id
	DA/58-19021-48193205; Thu, 09 Aug 2012 10:31:32 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1344508290!11298926!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg2ODI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17387 invoked from network); 9 Aug 2012 10:31:31 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Aug 2012 10:31:31 -0000
X-IronPort-AV: E=Sophos;i="4.77,739,1336348800"; d="scan'208";a="13927168"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	09 Aug 2012 10:31:30 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Thu, 9 Aug 2012
	11:31:30 +0100
Message-ID: <1344508288.32142.119.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: =?UTF-8?Q?=E9=A9=AC=E7=A3=8A?= <aware.why@gmail.com>
Date: Thu, 9 Aug 2012 11:31:28 +0100
In-Reply-To: <1344508123.32142.118.camel@zakaz.uk.xensource.com>
References: <CA+ePHTAGBYkyO1kUH2Cuf_fRRB4ELZeUO5Ot_d79c30N1=+VUQ@mail.gmail.com>
	<CA+ePHTAQHidwtaATLsH6kVz89sTs6vF_=k1OQ462JL4UBVZsnA@mail.gmail.com>
	<1344500485.32142.70.camel@zakaz.uk.xensource.com>
	<CA+ePHTCW-NL0S10J3Ru9xhuO+i2-UUHFnjD9H0==DrMWNcW58g@mail.gmail.com>
	<1344506305.32142.93.camel@zakaz.uk.xensource.com>
	<CA+ePHTATByp2yk_qBEJXNT_wVBPZcNrDn6T87AzHFKevRiCtMA@mail.gmail.com>
	<1344508123.32142.118.camel@zakaz.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [help] problem with `tools/xenstore/xs.c:
 xs_talkv()`
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-09 at 11:28 +0100, Ian Campbell wrote:
> On Thu, 2012-08-09 at 11:21 +0100, 马磊 wrote:
>
> > I didn't find the xenbus_dev_frontend.c by `` find / -name "*xenbus*"
> > 2>/dev/null `` command.
>
> Look in the upstream Linux kernel source.
>
> In some older kernels it is
> drivers/xen/xenbus/xenbus_dev.c

Also please try using a search engine to answer these questions. This
driver is trivially found with google.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 09 10:35:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 10:35:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzQ5f-0002ZS-LA; Thu, 09 Aug 2012 10:35:43 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SzQ5e-0002ZG-5V
	for xen-devel@lists.xen.org; Thu, 09 Aug 2012 10:35:42 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1344508535!12313947!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18914 invoked from network); 9 Aug 2012 10:35:35 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-27.messagelabs.com with SMTP;
	9 Aug 2012 10:35:35 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 09 Aug 2012 11:35:34 +0100
Message-Id: <5023AE960200007800093DE8@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Thu, 09 Aug 2012 11:35:34 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: <zhenzhong.duan@oracle.com>
References: <5020C24A.3060604@oracle.com>
	<5020F003020000780009322C@nat28.tlf.novell.com>
	<502235E8.9040309@oracle.com>
	<50229B840200007800093A73@nat28.tlf.novell.com>
	<5023860E.7080908@oracle.com>
In-Reply-To: <5023860E.7080908@oracle.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Feng Jin <joe.jin@oracle.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] kernel bootup slow issue on ovm3.1.1
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 09.08.12 at 11:42, "zhenzhong.duan" <zhenzhong.duan@oracle.com> wrote:
> On 2012-08-08 23:01, Jan Beulich wrote:
>>>>> On 08.08.12 at 11:48, "zhenzhong.duan" <zhenzhong.duan@oracle.com> wrote:
>>> On 2012-08-07 16:37, Jan Beulich wrote:
>>> Some spin at stop_machine after finish their job.
>> And here you'd need to find out what they're waiting for,
>> and what those CPUs are doing.
> They are waiting the vcpu calling generic_set_all and those spin at
> set_atomicity_lock.
> In fact, all are waiting generic_set_all

I think we're moving in circles - what is the vCPU currently in
generic_set_all() then doing?

>> There's not that much being done in generic_set_all(), so the
>> code should finish reasonably quickly. Are you perhaps having
>> more vCPU-s in the guest than pCPU-s they can run on?
> System env is an exalogic node with 24 cores + 100G mem (2 sockets, 6
> cores per socket, 2 HT threads per core).
> Bootup a pvhvm with 12 vcpus (or 24) + 90 GB + pci passthroughed device.

So you're indeed over-committing the system. How many vCPU-s
does your Dom0 have? Are there any other VMs? Is there any
vCPU pinning in effect?

>>   Does
>> your hardware support Pause-Loop-Exiting (or the AMD
>> equivalent, don't recall their term right now)?
> I have no access to serial line, could I get the info by a command?

"xl dmesg" run early enough (i.e. before the log buffer wraps).

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 09 10:36:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 10:36:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzQ69-0002cw-7L; Thu, 09 Aug 2012 10:36:13 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1SzQ67-0002cJ-W4
	for xen-devel@lists.xen.org; Thu, 09 Aug 2012 10:36:12 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-10.tower-27.messagelabs.com!1344508557!4637427!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7887 invoked from network); 9 Aug 2012 10:35:57 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-10.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 9 Aug 2012 10:35:57 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1SzQ5t-0004gC-4Y; Thu, 09 Aug 2012 10:35:57 +0000
Date: Thu, 9 Aug 2012 11:35:57 +0100
From: Tim Deegan <tim@xen.org>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20120809103557.GA17503@ocelot.phlegethon.org>
References: <1344023454-31425-1-git-send-email-jean.guyader@citrix.com>
	<1344023454-31425-5-git-send-email-jean.guyader@citrix.com>
	<20120809100605.GC16986@ocelot.phlegethon.org>
	<1344507826.32142.116.camel@zakaz.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1344507826.32142.116.camel@zakaz.uk.xensource.com>
User-Agent: Mutt/1.4.2.1i
Cc: "Jean Guyader \(3P\)" <jean.guyader@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 4/5] xen: events,
	exposes evtchn_alloc_unbound_domain
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 11:23 +0100 on 09 Aug (1344511426), Ian Campbell wrote:
> On Thu, 2012-08-09 at 11:06 +0100, Tim Deegan wrote:
> > At 20:50 +0100 on 03 Aug (1344027053), Jean Guyader wrote:
> > > 
> > > Exposes evtchn_alloc_unbound_domain to the rest of
> > > Xen so we can create allocated unbound evtchn within Xen.
> > > 
> > > Signed-off-by: Jean Guyader <jean.guyader@citrix.com>
> > 
> > > @@ -161,18 +163,18 @@ static long evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc)
> > >  {
> > >      struct evtchn *chn;
> > >      struct domain *d;
> > > -    int            port;
> > > +    evtchn_port_t  port;
> > >      domid_t        dom = alloc->dom;
> > > -    long           rc;
> > > +    int            rc;
> > 
> > The function returns long; if you're tidying this up to be an int, might
> > as well change the return type too.
> 
> I'm not sure if this is relevant but Jan just sent a patch to "make all
> (native) hypercalls consistently have "long" return type". I
> think/suspect this rc here turns into the result of the hypercall?
> 
> Jan's patch was motivated by something to do with sign extension when a
> hypercall's int return is written to the long in the multicall arg
> struct which causes strangeness. Perhaps not totally relevant to
> evtchn_alloc which is unlikely to be in a MC.

Yes, this eventually ends up in a hypercall handler, but s/long/int/
here doesn't cause problems because
 - rc is only ever set to an 'int' value here so we can't lose data
   from the type being too narrow; and
 - Those int values get cast up to long (either in here or in the
   caller) directly, which will sign-extend them.

It really doesn't matter whether this function returns an int or a long,
but it's a bit untidy to change it half-way. 

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

here doesn't cause problems because
 - rc is only ever set to an 'int' value here so we can't lose data
   from the type being too narrow; and
 - Those int values get cast up to long (either in here or in the
   caller) directly, which will sign-extend them.

It really doesn't matter whether this function returns an int or a long,
but it's a bit untidy to change it half-way. 
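Tim's point about widening can be sketched in plain C (illustrative only, not Xen code; `evtchn_op_int` is a hypothetical stand-in for the handler):

```c
#include <assert.h>

/* Hypothetical stand-in for an evtchn operation that returns int:
 * a negative errno-style code held in the narrow type. */
static int evtchn_op_int(void)
{
    return -22; /* e.g. -EINVAL */
}

/* The hypercall path returns long; the implicit int -> long
 * conversion sign-extends, so the negative code survives intact. */
static long hypercall_rc(void)
{
    return evtchn_op_int();
}
```

Since every value `rc` ever holds fits in an int, it makes no difference to callers which of the two widths the function itself uses.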

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 09 10:38:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 10:38:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzQ8c-0002qG-PM; Thu, 09 Aug 2012 10:38:46 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1SzQ8b-0002q1-Oh
	for xen-devel@lists.xen.org; Thu, 09 Aug 2012 10:38:45 +0000
Received: from [85.158.139.83:31049] by server-8.bemta-5.messagelabs.com id
	11/BC-05939-33393205; Thu, 09 Aug 2012 10:38:43 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-12.tower-182.messagelabs.com!1344508720!31050600!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3041 invoked from network); 9 Aug 2012 10:38:41 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-12.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 9 Aug 2012 10:38:41 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1SzQ8W-0004hK-Fm; Thu, 09 Aug 2012 10:38:40 +0000
Date: Thu, 9 Aug 2012 11:38:40 +0100
From: Tim Deegan <tim@xen.org>
To: Jean Guyader <jean.guyader@citrix.com>
Message-ID: <20120809103840.GD16986@ocelot.phlegethon.org>
References: <1344023454-31425-1-git-send-email-jean.guyader@citrix.com>
	<1344023454-31425-6-git-send-email-jean.guyader@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1344023454-31425-6-git-send-email-jean.guyader@citrix.com>
User-Agent: Mutt/1.4.2.1i
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 5/5] xen: Add V4V implementation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi,

This looks pretty good; I think you've addressed almost all my comments
except for one, which is really a design decision rather than an
implementation one.  As I said last time: 

] And what about protocol?  Protocol seems to have ended up as a bit of a
] second-class citizen in v4v; it's defined, and indeed required, but not
] used for routing or for access control, so all traffic to a given port
] _on every protocol_ ends up on the same ring. 
] 
] This is the inverse of the TCP/IP namespace that you're copying, where
] protocol demux happens before port demux.  And I think it will bite
] someone if you ever, for example, want to send ICMP or GRE over a v4v
] channel.

I see Jan has some comments on the detail; all I have to add is this:

At 20:50 +0100 on 03 Aug (1344027054), Jean Guyader wrote:
> +
> +
> +struct list_head viprules = LIST_HEAD_INIT(viprules);
> +static DEFINE_RWLOCK(viprules_lock);

Please add comments describing this lock and where it comes in the
locking order relative to the locks below -- it looks like it's always
taken after any other locks but please make it clear.

> +/*
> + * Structure definitions
> + */
> +
> +#define V4V_RING_MAGIC          0xA822f72bb0b9d8cc
> +#define V4V_RING_DATA_MAGIC	0x45fe852220b801d4

Thanks for getting rid of the #ifdefs here, but you need to keep the
'ULL' suffix so this will compile properly on 32-bit.
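The requested fix looks like this (constants copied from the quoted patch; only the suffix changes):

```c
#include <stdint.h>

/* With the ULL suffix these 64-bit magic numbers are well-formed on
 * 32-bit builds too, where a plain hex literal this wide would not
 * fit in unsigned long. */
#define V4V_RING_MAGIC          0xA822f72bb0b9d8ccULL
#define V4V_RING_DATA_MAGIC     0x45fe852220b801d4ULL
```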

Cheers,

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 09 10:39:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 10:39:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzQ9g-0002yM-8R; Thu, 09 Aug 2012 10:39:52 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SzQ9e-0002xW-GR
	for xen-devel@lists.xen.org; Thu, 09 Aug 2012 10:39:50 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1344508754!2919294!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20147 invoked from network); 9 Aug 2012 10:39:14 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-15.tower-27.messagelabs.com with SMTP;
	9 Aug 2012 10:39:14 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 09 Aug 2012 11:39:14 +0100
Message-Id: <5023AF710200007800093DFE@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Thu, 09 Aug 2012 11:39:13 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Jean Guyader" <jean.guyader@gmail.com>
References: <1344023454-31425-1-git-send-email-jean.guyader@citrix.com>
	<1344023454-31425-2-git-send-email-jean.guyader@citrix.com>
	<501F97890200007800092CA9@nat28.tlf.novell.com>
	<CAEBdQ92yh41qEhN=etNpkZDaQargVw2iJiCGwkG_hAmGe9aZdw@mail.gmail.com>
	<20120809095136.GB16986@ocelot.phlegethon.org>
	<CAEBdQ90OoRmmY7PBD-ZAkdV2ha8zs9_aP6EAANod0HkwgPQOrg@mail.gmail.com>
In-Reply-To: <CAEBdQ90OoRmmY7PBD-ZAkdV2ha8zs9_aP6EAANod0HkwgPQOrg@mail.gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Tim Deegan <tim@xen.org>, Jean Guyader <jean.guyader@citrix.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 1/5] xen: add ssize_t
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 09.08.12 at 12:19, Jean Guyader <jean.guyader@gmail.com> wrote:
> On 9 August 2012 10:51, Tim Deegan <tim@xen.org> wrote:
>> At 15:47 +0100 on 06 Aug (1344268059), Jean Guyader wrote:
>>> On 6 August 2012 09:08, Jan Beulich <JBeulich@suse.com> wrote:
>>> >>>> On 03.08.12 at 21:50, Jean Guyader <jean.guyader@citrix.com> wrote:
>>> >
>>> > Without finally explaining why you need this type in the first place,
>>> > I'll continue to NAK this patch. (This is made even worse by the fact
>>> > taht the two inline functions in patch 5 that make use of the type
>>> > appear to be unused.)
>>> >
>>>
>>> Understood. I'll switch to use long instead of ssize_t in my
>>> forthcoming patch series.
>>
>> Please use an explicitly 64-bit type - AFAICS you're holding the sum of
>> some 64-bit length fields.
>>
> 
> Ok but ssize_t is kind of a funny one. It should accept everything
> that size_t can accept + negative values.

No. It's the same relation as between e.g. "signed int" and
"unsigned int". Value preserving conversions are only guaranteed
for non-negative values fitting both types.
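Jan's point, sketched with int/unsigned int (the same relation he cites; the out-of-range case is implementation-defined, shown here as it works out on two's-complement targets):

```c
#include <limits.h>

/* A value that fits both types converts back and forth unchanged. */
static int small_roundtrips(void)
{
    unsigned int u = 42;
    return (int)u == 42;
}

/* A value that fits only the unsigned type is NOT preserved: on
 * two's-complement machines UINT_MAX narrows to -1. */
static int large_is_not_preserved(void)
{
    unsigned int u = UINT_MAX;
    return (int)u < 0;   /* implementation-defined; -1 in practice */
}
```

The same reasoning applies verbatim to size_t vs. ssize_t: the signed type does not "accept everything size_t can, plus negatives".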

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 09 10:41:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 10:41:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzQAi-00036b-N5; Thu, 09 Aug 2012 10:40:56 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jean.guyader@gmail.com>) id 1SzQAh-00036N-QC
	for xen-devel@lists.xen.org; Thu, 09 Aug 2012 10:40:56 +0000
Received: from [85.158.139.83:57462] by server-11.bemta-5.messagelabs.com id
	F2/C9-11482-7B393205; Thu, 09 Aug 2012 10:40:55 +0000
X-Env-Sender: jean.guyader@gmail.com
X-Msg-Ref: server-9.tower-182.messagelabs.com!1344508848!30346139!1
X-Originating-IP: [209.85.160.45]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3434 invoked from network); 9 Aug 2012 10:40:50 -0000
Received: from mail-pb0-f45.google.com (HELO mail-pb0-f45.google.com)
	(209.85.160.45)
	by server-9.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Aug 2012 10:40:50 -0000
Received: by pbbrp12 with SMTP id rp12so718790pbb.32
	for <xen-devel@lists.xen.org>; Thu, 09 Aug 2012 03:40:48 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type;
	bh=oRTpXWVrAxjAKEjYN3eH2Xgg1m7New4fxCCo+zwXyd4=;
	b=0bghBhu8T2YhknXoglqqRVsBxeCAWY+GvhUOvr6lQF6csOT1sWXp4HDF6CsGDfWti2
	a0mxygrED6xmUgqFpZaJt6HbqRSoqItuhFL+Cvkh5HBwuqsf/c/s9oasH7jLpY4a1pmG
	ywzjF3lbLGHDlvq/YJMyLxfrBidmymfc9AGt3iKfz/sQnw7pQ3VFTJx7me9Rar8AALfF
	SYQgHt/adH31pTE9uwvXWXEM03sky6hzuoVO6jfbCiZdg77VmE8DHjBmA5RMDRD+8YXF
	wiITegbDuFDFAclMtSgIerFQK9/K3iao0jVh+07IZR6tZkOKVBQQkdwAJ7GKQoyivjrs
	A0pw==
Received: by 10.68.222.167 with SMTP id qn7mr3071934pbc.98.1344508847933; Thu,
	09 Aug 2012 03:40:47 -0700 (PDT)
MIME-Version: 1.0
Received: by 10.68.135.129 with HTTP; Thu, 9 Aug 2012 03:40:27 -0700 (PDT)
In-Reply-To: <20120809103557.GA17503@ocelot.phlegethon.org>
References: <1344023454-31425-1-git-send-email-jean.guyader@citrix.com>
	<1344023454-31425-5-git-send-email-jean.guyader@citrix.com>
	<20120809100605.GC16986@ocelot.phlegethon.org>
	<1344507826.32142.116.camel@zakaz.uk.xensource.com>
	<20120809103557.GA17503@ocelot.phlegethon.org>
From: Jean Guyader <jean.guyader@gmail.com>
Date: Thu, 9 Aug 2012 11:40:27 +0100
Message-ID: <CAEBdQ91rVN-wFwWBkfX1Ne133c4TDeXk+iktTDLTuM3StXdRFw@mail.gmail.com>
To: Tim Deegan <tim@xen.org>
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	"Jean Guyader \(3P\)" <jean.guyader@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 4/5] xen: events,
	exposes evtchn_alloc_unbound_domain
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 9 August 2012 11:35, Tim Deegan <tim@xen.org> wrote:
> At 11:23 +0100 on 09 Aug (1344511426), Ian Campbell wrote:
>> On Thu, 2012-08-09 at 11:06 +0100, Tim Deegan wrote:
>> > At 20:50 +0100 on 03 Aug (1344027053), Jean Guyader wrote:
>> > >
>> > > Exposes evtchn_alloc_unbound_domain to the rest of
>> > > Xen so we can create allocated unbound evtchn within Xen.
>> > >
>> > > Signed-off-by: Jean Guyader <jean.guyader@citrix.com>
>> >
>> > > @@ -161,18 +163,18 @@ static long evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc)
>> > >  {
>> > >      struct evtchn *chn;
>> > >      struct domain *d;
>> > > -    int            port;
>> > > +    evtchn_port_t  port;
>> > >      domid_t        dom = alloc->dom;
>> > > -    long           rc;
>> > > +    int            rc;
>> >
>> > The function returns long; if you're tidying this up to be an int, might
>> > as well change the return type too.
>>
>> I'm not sure if this is relevant but Jan just sent a patch to "make all
>> (native) hypercalls consistently have "long" return type". I
>> think/suspect this rc here turns into the result of the hypercall?
>>
>> Jan's patch was motivated by something to do with sign extension when a
>> hypercall's int return is written to the long in the multicall arg
>> struct which causes strangeness. Perhaps not totally relevant to
>> evtchn_alloc which is unlikely to be in a MC.
>
> Yes, this eventually ends up in a hypercall handler, but s/long/int/
> here doesn't cause problems because
>  - rc is only ever set to an 'int' value here so we can't lose data
>    from the type being too narrow; and
>  - Those int values get cast up to long (either in here or in the
>    caller) directly, which will sign-extend them.
>
> It really doesn't matter whether this function returns an int or a long,
> but it's a bit untidy to change it half-way.
>

The main reason I changed it is that ERROR_EXIT_DOM expects an int, based
on the format string. I guess I could cast the long to an int for the
call to ERROR_EXIT_DOM, but that doesn't really look nice either.
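The tension Jean describes can be sketched with a stand-in macro (ERROR_EXIT_DOM's real definition lives in Xen's common/event_channel.c; this hypothetical version only shows the "%d wants an int" issue):

```c
#include <stdio.h>

/* Illustrative stand-in for Xen's ERROR_EXIT_DOM: the "%d" conversion
 * expects an int, so passing a long rc needs the cast Jean finds ugly. */
#define ERROR_EXIT(_errno)                                          \
    do {                                                            \
        fprintf(stderr, "error: %d\n", (int)(_errno));              \
        rc = (_errno);                                              \
        goto out;                                                   \
    } while (0)

static long demo_alloc(void)
{
    long rc = 0;
    ERROR_EXIT(-22); /* e.g. -EINVAL */
out:
    return rc;
}
```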

Jean

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 09 10:48:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 10:48:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzQIC-0003PU-KV; Thu, 09 Aug 2012 10:48:40 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jean.guyader@gmail.com>) id 1SzQIB-0003PP-5u
	for xen-devel@lists.xen.org; Thu, 09 Aug 2012 10:48:39 +0000
Received: from [85.158.138.51:27877] by server-12.bemta-3.messagelabs.com id
	F4/CE-21301-68593205; Thu, 09 Aug 2012 10:48:38 +0000
X-Env-Sender: jean.guyader@gmail.com
X-Msg-Ref: server-12.tower-174.messagelabs.com!1344509316!19452618!1
X-Originating-IP: [209.85.160.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31867 invoked from network); 9 Aug 2012 10:48:37 -0000
Received: from mail-gh0-f173.google.com (HELO mail-gh0-f173.google.com)
	(209.85.160.173)
	by server-12.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Aug 2012 10:48:37 -0000
Received: by ghrr17 with SMTP id r17so295529ghr.32
	for <xen-devel@lists.xen.org>; Thu, 09 Aug 2012 03:48:36 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type;
	bh=9ssVGPOHyJPV8QxkVsDpsYuL7ut4vKFgvcCQnQVHTTg=;
	b=QNVwjUzv09+PUpxaUxWRz0neBUGb1NIjkcjz6PzlBV3MwBfCidCBCia/jInEne4siI
	jWsTlLmGlXf9dbnZ1pRUvdEs/ydakbjdyJmnbybmdfsYBXb6B+ZZShN8xlMt/fpJP1TF
	3ggDNC2Ombdn5Z/P9f157SC37L68jSD5aF7lSNE4ET4VHfrm822BI9diMAVcFjJw9gE4
	Dz3gIUIM9RVsFBBXOSkMuUVzx8UgfgjjPgzfr1iVlR4Dj02cELA7eYCM3U9vTzxf/GQ3
	XHEzykMtFnidcfxh4aIdETsHQH3r81SBnYx9Jb107gmGeTMmQgczph2mGW+cgtZ9tGnr
	vQfw==
Received: by 10.66.85.201 with SMTP id j9mr6786243paz.40.1344509315824; Thu,
	09 Aug 2012 03:48:35 -0700 (PDT)
MIME-Version: 1.0
Received: by 10.68.135.129 with HTTP; Thu, 9 Aug 2012 03:48:15 -0700 (PDT)
In-Reply-To: <5023AF710200007800093DFE@nat28.tlf.novell.com>
References: <1344023454-31425-1-git-send-email-jean.guyader@citrix.com>
	<1344023454-31425-2-git-send-email-jean.guyader@citrix.com>
	<501F97890200007800092CA9@nat28.tlf.novell.com>
	<CAEBdQ92yh41qEhN=etNpkZDaQargVw2iJiCGwkG_hAmGe9aZdw@mail.gmail.com>
	<20120809095136.GB16986@ocelot.phlegethon.org>
	<CAEBdQ90OoRmmY7PBD-ZAkdV2ha8zs9_aP6EAANod0HkwgPQOrg@mail.gmail.com>
	<5023AF710200007800093DFE@nat28.tlf.novell.com>
From: Jean Guyader <jean.guyader@gmail.com>
Date: Thu, 9 Aug 2012 11:48:15 +0100
Message-ID: <CAEBdQ92PBFq1kwvKhYz-mPBNm-NhO6HmaTw+Yi+oQP6+i89dGQ@mail.gmail.com>
To: Jan Beulich <JBeulich@suse.com>
Cc: Tim Deegan <tim@xen.org>, Jean Guyader <jean.guyader@citrix.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 1/5] xen: add ssize_t
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 9 August 2012 11:39, Jan Beulich <JBeulich@suse.com> wrote:
>>>> On 09.08.12 at 12:19, Jean Guyader <jean.guyader@gmail.com> wrote:
>> On 9 August 2012 10:51, Tim Deegan <tim@xen.org> wrote:
>>> At 15:47 +0100 on 06 Aug (1344268059), Jean Guyader wrote:
>>>> On 6 August 2012 09:08, Jan Beulich <JBeulich@suse.com> wrote:
>>>> >>>> On 03.08.12 at 21:50, Jean Guyader <jean.guyader@citrix.com> wrote:
>>>> >
>>>> > Without finally explaining why you need this type in the first place,
>>>> > I'll continue to NAK this patch. (This is made even worse by the fact
>>>> > that the two inline functions in patch 5 that make use of the type
>>>> > appear to be unused.)
>>>> >
>>>>
>>>> Understood. I'll switch to use long instead of ssize_t in my
>>>> forthcoming patch series.
>>>
>>> Please use an explicitly 64-bit type - AFAICS you're holding the sum of
>>> some 64-bit length fields.
>>>
>>
>> Ok but ssize_t is kind of a funny one. It should accept everything
>> that size_t can accept + negative values.
>
> No. It's the same relation as between e.g. "signed int" and
> "unsigned int". Value preserving conversions are only guaranteed
> for non-negative values fitting both types.
>

ssize_t is a *signed* type. I was wrong to say that it should accept
the full range of a size_t; it only allows a subset of it.
read()/write() use ssize_t as a return type.
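
To make that subset relation concrete (using int64_t/uint64_t here as
portable stand-ins for ssize_t/size_t, so the behaviour is the same on
every platform), only non-negative values up to the signed maximum are
guaranteed to convert value-preservingly:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative only, with int64_t/uint64_t standing in for
 * ssize_t/size_t.  Conversion between a signed/unsigned pair is
 * only guaranteed value-preserving for non-negative values that
 * fit both types, i.e. 0 .. INT64_MAX here (analogous to a length
 * check against SSIZE_MAX). */
static int fits_signed(uint64_t len)
{
    return len <= (uint64_t)INT64_MAX;
}
```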

>From man 2 read:
If count is zero, read() returns zero and has no other results. If
count is greater than SSIZE_MAX, the result is unspecified.

Would it be OK to claim the same thing here, i.e. that if count > INT_MAX
the result is unspecified?

Jean

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 09 10:50:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 10:50:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzQJR-0003Tk-30; Thu, 09 Aug 2012 10:49:57 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <aware.why@gmail.com>) id 1SzQJP-0003TU-8d
	for xen-devel@lists.xensource.com; Thu, 09 Aug 2012 10:49:55 +0000
Received: from [85.158.138.51:36785] by server-12.bemta-3.messagelabs.com id
	A4/61-21301-2D593205; Thu, 09 Aug 2012 10:49:54 +0000
X-Env-Sender: aware.why@gmail.com
X-Msg-Ref: server-4.tower-174.messagelabs.com!1344509393!27363914!1
X-Originating-IP: [209.85.215.171]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_50_60,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 960 invoked from network); 9 Aug 2012 10:49:53 -0000
Received: from mail-ey0-f171.google.com (HELO mail-ey0-f171.google.com)
	(209.85.215.171)
	by server-4.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Aug 2012 10:49:53 -0000
Received: by eaah11 with SMTP id h11so97169eaa.30
	for <xen-devel@lists.xensource.com>;
	Thu, 09 Aug 2012 03:49:53 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=+GARoSVnsqZVpUmQDj+K7/kFwnvXpmSGhaey7jRK0vk=;
	b=GGKx5VUeISlxNr1SzuU+FhGF2e0nS7FvuaLAhrTpEeT9IpiqTdZ6sb44wHMW13dt8z
	kbpwOYBuk9R+OYXLHhg3UzPCFLG5vGdptqdkQYQqGudPGPpH4pWtcWhpG4SyVtiu1lpt
	z6gpyDVA5s1KQ+BI5jBcd+nvb9MXJ0Ecu+KTWxdtJUqtUS3eRB7rhZ0dDin7KpvT/1PA
	j8rDVp4t4EgcI4oNxK4N/zPmqSSTY6Vw4sDahVN0S+TnFfLRvYrJNv/MVCUJCMhyFr4H
	3WcznmViXyQR5lmPy0TsdQfYCt9B5dPIzizdAOyjO9PGDLaUZlBzyfHr1T0+v92rmYT6
	T1Tg==
MIME-Version: 1.0
Received: by 10.14.209.129 with SMTP id s1mr17162787eeo.24.1344509393485; Thu,
	09 Aug 2012 03:49:53 -0700 (PDT)
Received: by 10.14.218.200 with HTTP; Thu, 9 Aug 2012 03:49:53 -0700 (PDT)
In-Reply-To: <1344508288.32142.119.camel@zakaz.uk.xensource.com>
References: <CA+ePHTAGBYkyO1kUH2Cuf_fRRB4ELZeUO5Ot_d79c30N1=+VUQ@mail.gmail.com>
	<CA+ePHTAQHidwtaATLsH6kVz89sTs6vF_=k1OQ462JL4UBVZsnA@mail.gmail.com>
	<1344500485.32142.70.camel@zakaz.uk.xensource.com>
	<CA+ePHTCW-NL0S10J3Ru9xhuO+i2-UUHFnjD9H0==DrMWNcW58g@mail.gmail.com>
	<1344506305.32142.93.camel@zakaz.uk.xensource.com>
	<CA+ePHTATByp2yk_qBEJXNT_wVBPZcNrDn6T87AzHFKevRiCtMA@mail.gmail.com>
	<1344508123.32142.118.camel@zakaz.uk.xensource.com>
	<1344508288.32142.119.camel@zakaz.uk.xensource.com>
Date: Thu, 9 Aug 2012 18:49:53 +0800
Message-ID: <CA+ePHTBkBQuhX4C9VZ2a_ARDkY=fvF-=UEBjkOt5ThFZCwa-+A@mail.gmail.com>
From: =?UTF-8?B?6ams56OK?= <aware.why@gmail.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [help] problem with `tools/xenstore/xs.c:
	xs_talkv()`
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2752899584458237063=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2752899584458237063==
Content-Type: multipart/alternative; boundary=047d7b603a78cbc01704c6d2fc18

--047d7b603a78cbc01704c6d2fc18
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

On Thu, Aug 9, 2012 at 6:31 PM, Ian Campbell <Ian.Campbell@citrix.com> wrote:

> On Thu, 2012-08-09 at 11:28 +0100, Ian Campbell wrote:
> > On Thu, 2012-08-09 at 11:21 +0100, 马磊 wrote:
> >
> > > I didn't find the xenbus_dev_frontend.c by `` find / -name "*xenbus*"
> > > 2>/dev/null `` command.
> >
> > Look in the upstream Linux kernel source.
> >
> > In some older kernels it is
> > drivers/xen/xenbus/xenbus_dev.c
>
> Also please try using a search engine to answer these questions. This
> driver is trivially found with google.
>
> Ian.
>
>

> By the way, which driver uses the socket file as a backend?...

--047d7b603a78cbc01704c6d2fc18--


--===============2752899584458237063==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2752899584458237063==--


From xen-devel-bounces@lists.xen.org Thu Aug 09 10:59:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 10:59:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzQSW-0003hW-42; Thu, 09 Aug 2012 10:59:20 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1SzQSU-0003hR-5z
	for xen-devel@lists.xen.org; Thu, 09 Aug 2012 10:59:18 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-10.tower-27.messagelabs.com!1344509949!4641960!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14004 invoked from network); 9 Aug 2012 10:59:09 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-10.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 9 Aug 2012 10:59:09 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1SzQSI-0004ll-SU; Thu, 09 Aug 2012 10:59:06 +0000
Date: Thu, 9 Aug 2012 11:59:06 +0100
From: Tim Deegan <tim@xen.org>
To: Jean Guyader <jean.guyader@gmail.com>
Message-ID: <20120809105906.GE16986@ocelot.phlegethon.org>
References: <1344023454-31425-1-git-send-email-jean.guyader@citrix.com>
	<1344023454-31425-2-git-send-email-jean.guyader@citrix.com>
	<501F97890200007800092CA9@nat28.tlf.novell.com>
	<CAEBdQ92yh41qEhN=etNpkZDaQargVw2iJiCGwkG_hAmGe9aZdw@mail.gmail.com>
	<20120809095136.GB16986@ocelot.phlegethon.org>
	<CAEBdQ90OoRmmY7PBD-ZAkdV2ha8zs9_aP6EAANod0HkwgPQOrg@mail.gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAEBdQ90OoRmmY7PBD-ZAkdV2ha8zs9_aP6EAANod0HkwgPQOrg@mail.gmail.com>
User-Agent: Mutt/1.4.2.1i
Cc: Jan Beulich <JBeulich@suse.com>, Jean Guyader <jean.guyader@citrix.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 1/5] xen: add ssize_t
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 11:19 +0100 on 09 Aug (1344511142), Jean Guyader wrote:
> On 9 August 2012 10:51, Tim Deegan <tim@xen.org> wrote:
> > At 15:47 +0100 on 06 Aug (1344268059), Jean Guyader wrote:
> >> On 6 August 2012 09:08, Jan Beulich <JBeulich@suse.com> wrote:
> >> >>>> On 03.08.12 at 21:50, Jean Guyader <jean.guyader@citrix.com> wrote:
> >> >
> >> > Without finally explaining why you need this type in the first place,
> >> > I'll continue to NAK this patch. (This is made even worse by the fact
> >> > that the two inline functions in patch 5 that make use of the type
> >> > appear to be unused.)
> >> >
> >>
> >> Understood. I'll switch to use long instead of ssize_t in my
> >> forthcoming patch series.
> >
> > Please use an explicitly 64-bit type - AFAICS you're holding the sum of
> > some 64-bit length fields.
> >
> 
> Ok but ssize_t is kind of a funny one. It should accept everything
> that size_t can accept + negative values.

Given that you start with user-supplied 64-bit iov lengths, there is no
such type. :)

And now that I look at it, your handling of sizes is pretty confused.
v4v_ringbuf_insertv() returns a ssize_t, but calculates it internally as
an int32_t, which is _smaller_ than the size_t it starts with (which is
the sum of some guest-supplied uint64_ts).  And then v4v_sendv() puts
that ssize_t into an int again before returning it as a size_t, which
d_v4v_op() casts as a long to give to the guest.  Whee!

Can you please make that lot consistent, and then audit the whole path
from user-supplied iovec lists to actual copies and make sure that there
are no overflows?

I think that explicitly limiting your sum-of-iovec-lengths to 31 bits
would be perfectly reasonable (1 hypercall per 2GB copy), and would
avoid a lot of pain here.  As long as it was documented in the
interface, of course!
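
For example, a checked summation along those lines might look like the
sketch below (names are hypothetical, this is not the actual v4v code):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical sketch of the suggested check, not the actual v4v
 * code: sum the guest-supplied 64-bit iov lengths and refuse any
 * call whose total exceeds a documented 31-bit cap, so the result
 * fits int, long and ssize_t alike on every return path. */
#define V4V_MAX_SUM ((uint64_t)INT32_MAX)   /* 2^31 - 1 */

static int sum_iov_lens(const uint64_t *lens, unsigned int n,
                        uint64_t *total)
{
    uint64_t sum = 0;
    unsigned int i;

    for (i = 0; i < n; i++) {
        /* Both operands are already <= V4V_MAX_SUM when the second
         * test runs, so the addition itself cannot wrap a uint64_t. */
        if (lens[i] > V4V_MAX_SUM || sum + lens[i] > V4V_MAX_SUM)
            return -1;                      /* over the 31-bit limit */
        sum += lens[i];
    }
    *total = sum;
    return 0;
}
```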

Cheers,

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 09 10:59:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 10:59:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzQSW-0003hW-42; Thu, 09 Aug 2012 10:59:20 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1SzQSU-0003hR-5z
	for xen-devel@lists.xen.org; Thu, 09 Aug 2012 10:59:18 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-10.tower-27.messagelabs.com!1344509949!4641960!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14004 invoked from network); 9 Aug 2012 10:59:09 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-10.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 9 Aug 2012 10:59:09 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1SzQSI-0004ll-SU; Thu, 09 Aug 2012 10:59:06 +0000
Date: Thu, 9 Aug 2012 11:59:06 +0100
From: Tim Deegan <tim@xen.org>
To: Jean Guyader <jean.guyader@gmail.com>
Message-ID: <20120809105906.GE16986@ocelot.phlegethon.org>
References: <1344023454-31425-1-git-send-email-jean.guyader@citrix.com>
	<1344023454-31425-2-git-send-email-jean.guyader@citrix.com>
	<501F97890200007800092CA9@nat28.tlf.novell.com>
	<CAEBdQ92yh41qEhN=etNpkZDaQargVw2iJiCGwkG_hAmGe9aZdw@mail.gmail.com>
	<20120809095136.GB16986@ocelot.phlegethon.org>
	<CAEBdQ90OoRmmY7PBD-ZAkdV2ha8zs9_aP6EAANod0HkwgPQOrg@mail.gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAEBdQ90OoRmmY7PBD-ZAkdV2ha8zs9_aP6EAANod0HkwgPQOrg@mail.gmail.com>
User-Agent: Mutt/1.4.2.1i
Cc: Jan Beulich <JBeulich@suse.com>, Jean Guyader <jean.guyader@citrix.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 1/5] xen: add ssize_t
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 11:19 +0100 on 09 Aug (1344511142), Jean Guyader wrote:
> On 9 August 2012 10:51, Tim Deegan <tim@xen.org> wrote:
> > At 15:47 +0100 on 06 Aug (1344268059), Jean Guyader wrote:
> >> On 6 August 2012 09:08, Jan Beulich <JBeulich@suse.com> wrote:
> >> >>>> On 03.08.12 at 21:50, Jean Guyader <jean.guyader@citrix.com> wrote:
> >> >
> >> > Without finally explaining why you need this type in the first place,
> >> > I'll continue to NAK this patch. (This is made even worse by the fact
> >> > taht the two inline functions in patch 5 that make use of the type
> >> > appear to be unused.)
> >> >
> >>
> >> Understood. I'll switch to use long instead of ssize_t in my
> >> forthcoming patch serie.
> >
> > Please use an explicitly 64-bit type - AFAICS you're holding the sum of
> > some 64-bit length fields.
> >
> 
> Ok but ssize_t is kind of a funny one. It should accept everything
> that size_t can accept + negative values.

Given that you start with user-supplied 64-bit iov lengths, there is no
such type. :)

And now that I look at it, your handling of sizes is pretty confused.
v4v_ringbuf_insertv() returns a ssize_t, but calculates it internally as
an int32_t, which is _smaller_ than the size_t it starts with (which is
the sum of some guest-supplied uint64_ts).  And then v4v_sendv() puts
that ssize_t into an int again before returning it as a size_t, which
d_v4v_op() casts as a long to give to the guest.  Whee!
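
That kind of silent narrowing is easy to demonstrate in isolation (a standalone illustration, not the actual v4v code; the function name is made up):

```c
#include <assert.h>
#include <stdint.h>

/* Mimic what happens when a 64-bit byte count is funnelled through
 * an int32_t intermediate: the high 32 bits are silently discarded. */
int32_t narrow_to_int32(uint64_t total)
{
    return (int32_t)total;
}
```

A sum just over 4GB comes back as a small, plausible-looking value, so the caller never notices that most of the copy length has been lost.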

Can you please make that lot consistent, and then audit the whole path
from user-supplied iovec lists to actual copies and make sure that there
are no overflows?

I think that explicitly limiting your sum-of-iovec-lengths to 31 bits
would be perfectly reasonable (1 hypercall per 2GB copy), and would
avoid a lot of pain here.  As long as it was documented in the
interface, of course!
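
That cap is cheap to enforce up front, before any copying happens. A sketch of what I mean (names here are illustrative, not from the actual patches):

```c
#include <assert.h>
#include <stdint.h>

/* 31-bit cap on the total message size: one hypercall per ~2GB. */
#define MAX_MSG_LEN ((uint64_t)INT32_MAX)

/* Sum guest-supplied iov lengths, rejecting overflow and anything
 * above the documented cap.  Returns 0 on success, -1 otherwise. */
int sum_iov_lengths(const uint64_t *lens, unsigned int n, uint32_t *total)
{
    uint64_t sum = 0;
    unsigned int i;

    for ( i = 0; i < n; i++ )
    {
        /* Checking against the cap also guards the 64-bit addition. */
        if ( lens[i] > MAX_MSG_LEN - sum )
            return -1;
        sum += lens[i];
    }
    *total = (uint32_t)sum;   /* provably fits in 31 bits */
    return 0;
}
```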

Cheers,

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 09 11:01:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 11:01:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzQU1-0003qR-Oy; Thu, 09 Aug 2012 11:00:53 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SzQU0-0003qK-L9
	for xen-devel@lists.xensource.com; Thu, 09 Aug 2012 11:00:52 +0000
Received: from [85.158.143.35:46961] by server-2.bemta-4.messagelabs.com id
	AF/9D-19021-36893205; Thu, 09 Aug 2012 11:00:51 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1344510046!13882477!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg2ODI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29216 invoked from network); 9 Aug 2012 11:00:46 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Aug 2012 11:00:46 -0000
X-IronPort-AV: E=Sophos;i="4.77,739,1336348800"; d="scan'208";a="13927841"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	09 Aug 2012 11:00:19 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Thu, 9 Aug 2012
	12:00:19 +0100
Message-ID: <1344510017.32142.125.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: =?UTF-8?Q?=E9=A9=AC=E7=A3=8A?= <aware.why@gmail.com>
Date: Thu, 9 Aug 2012 12:00:17 +0100
In-Reply-To: <CA+ePHTBkBQuhX4C9VZ2a_ARDkY=fvF-=UEBjkOt5ThFZCwa-+A@mail.gmail.com>
References: <CA+ePHTAGBYkyO1kUH2Cuf_fRRB4ELZeUO5Ot_d79c30N1=+VUQ@mail.gmail.com>
	<CA+ePHTAQHidwtaATLsH6kVz89sTs6vF_=k1OQ462JL4UBVZsnA@mail.gmail.com>
	<1344500485.32142.70.camel@zakaz.uk.xensource.com>
	<CA+ePHTCW-NL0S10J3Ru9xhuO+i2-UUHFnjD9H0==DrMWNcW58g@mail.gmail.com>
	<1344506305.32142.93.camel@zakaz.uk.xensource.com>
	<CA+ePHTATByp2yk_qBEJXNT_wVBPZcNrDn6T87AzHFKevRiCtMA@mail.gmail.com>
	<1344508123.32142.118.camel@zakaz.uk.xensource.com>
	<1344508288.32142.119.camel@zakaz.uk.xensource.com>
	<CA+ePHTBkBQuhX4C9VZ2a_ARDkY=fvF-=UEBjkOt5ThFZCwa-+A@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [help] problem with `tools/xenstore/xs.c:
 xs_talkv()`
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-09 at 11:49 +0100, 马磊 wrote:

>         By the way, which driver use the socket file as  backend ?...

You mean who uses the unix domain socket interface to xenstored?

libxenstore can optionally use this if it happens to be running in the
same domain as the daemon. Otherwise it will use the /proc/xen/xenbus
method.

Please, try (harder) to use grep, google etc to find the answers to
these things, you are asking a lot of questions which you really ought
to be able to answer by studying the code a bit.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 09 11:08:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 11:08:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzQbY-00048J-Mz; Thu, 09 Aug 2012 11:08:40 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jean.guyader@gmail.com>) id 1SzQbW-00048E-PS
	for xen-devel@lists.xen.org; Thu, 09 Aug 2012 11:08:38 +0000
Received: from [85.158.143.99:59183] by server-2.bemta-4.messagelabs.com id
	92/6E-19021-63A93205; Thu, 09 Aug 2012 11:08:38 +0000
X-Env-Sender: jean.guyader@gmail.com
X-Msg-Ref: server-5.tower-216.messagelabs.com!1344510513!27016221!1
X-Originating-IP: [209.85.160.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20247 invoked from network); 9 Aug 2012 11:08:35 -0000
Received: from mail-pb0-f45.google.com (HELO mail-pb0-f45.google.com)
	(209.85.160.45)
	by server-5.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Aug 2012 11:08:35 -0000
Received: by pbbrp12 with SMTP id rp12so754246pbb.32
	for <xen-devel@lists.xen.org>; Thu, 09 Aug 2012 04:08:33 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type;
	bh=aMvrKQtayaQLeQcTgDIkFI6UxmdVZwqTVSj/gaorqvQ=;
	b=tZsiSVpRbE+XKTqXWso2+vzh0EP/daMHU38CzgoKcHdI6LiGNuU80DdvJ3Czzbg6DQ
	Yh3Z9+EcluDWX5noO9f5/T/F/ge4GnfdtWETY6fhkDNOV5+loI5nLqIGZ0OWKKDZGT3o
	+Dkggi6pzM6oE6/cOl+XC+Xvy8unQ26bh9EcEnd2HUbBP53uhaCLKZHAQkm0YkViLVjN
	Hxdnmtlgb1VLC6i1/C3zZzKVVRaDa81ICmvad63xwAYyQqi01KOqW41Xz/33P2gZbMl8
	GwwOs5g7OowYFixHG7ZgnhPNOO5BT3bWqBHl6IhTc+VcREhNumsK1yzxYGL5AMw6SxVI
	M6gg==
Received: by 10.68.218.163 with SMTP id ph3mr3390084pbc.58.1344510513409; Thu,
	09 Aug 2012 04:08:33 -0700 (PDT)
MIME-Version: 1.0
Received: by 10.68.135.129 with HTTP; Thu, 9 Aug 2012 04:08:12 -0700 (PDT)
In-Reply-To: <20120809105906.GE16986@ocelot.phlegethon.org>
References: <1344023454-31425-1-git-send-email-jean.guyader@citrix.com>
	<1344023454-31425-2-git-send-email-jean.guyader@citrix.com>
	<501F97890200007800092CA9@nat28.tlf.novell.com>
	<CAEBdQ92yh41qEhN=etNpkZDaQargVw2iJiCGwkG_hAmGe9aZdw@mail.gmail.com>
	<20120809095136.GB16986@ocelot.phlegethon.org>
	<CAEBdQ90OoRmmY7PBD-ZAkdV2ha8zs9_aP6EAANod0HkwgPQOrg@mail.gmail.com>
	<20120809105906.GE16986@ocelot.phlegethon.org>
From: Jean Guyader <jean.guyader@gmail.com>
Date: Thu, 9 Aug 2012 12:08:12 +0100
Message-ID: <CAEBdQ915FDdioeurKv-FkEgoqdBM7L4MtntA4HwXFhBHrsWX2Q@mail.gmail.com>
To: Tim Deegan <tim@xen.org>
Cc: Jan Beulich <JBeulich@suse.com>, Jean Guyader <jean.guyader@citrix.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 1/5] xen: add ssize_t
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 9 August 2012 11:59, Tim Deegan <tim@xen.org> wrote:
> At 11:19 +0100 on 09 Aug (1344511142), Jean Guyader wrote:
>> On 9 August 2012 10:51, Tim Deegan <tim@xen.org> wrote:
>> > At 15:47 +0100 on 06 Aug (1344268059), Jean Guyader wrote:
>> >> On 6 August 2012 09:08, Jan Beulich <JBeulich@suse.com> wrote:
>> >> >>>> On 03.08.12 at 21:50, Jean Guyader <jean.guyader@citrix.com> wrote:
>> >> >
>> >> > Without finally explaining why you need this type in the first place,
>> >> > I'll continue to NAK this patch. (This is made even worse by the fact
>> >> > that the two inline functions in patch 5 that make use of the type
>> >> > appear to be unused.)
>> >> >
>> >>
>> >> Understood. I'll switch to using long instead of ssize_t in my
>> >> forthcoming patch series.
>> >
>> > Please use an explicitly 64-bit type - AFAICS you're holding the sum of
>> > some 64-bit length fields.
>> >
>>
>> Ok but ssize_t is kind of a funny one. It should accept everything
>> that size_t can accept + negative values.
>
> Given that you start with user-supplied 64-bit iov lengths, there is no
> such type. :)
>
> And now that I look at it, your handling of sizes is pretty confused.
> v4v_ringbuf_insertv() returns a ssize_t, but calculates it internally as
> an int32_t, which is _smaller_ than the size_t it starts with (which is
> the sum of some guest-supplied uint64_ts).  And then v4v_sendv() puts
> that ssize_t into an int again before returning it as a size_t, which
> d_v4v_op() casts as a long to give to the guest.  Whee!
>
> Can you please make that lot consistent, and then audit the whole path
> from user-supplied iovec lists to actual copies and make sure that there
> are no overflows?
>
> I think that explicitly limiting your sum-of-iovec-lengths to 31 bits
> would be perfectly reasonable (1 hypercall per 2GB copy), and would
> avoid a lot of pain here.  As long as it was documented in the
> interface, of course!
>

Ok fine that works for me. I'll do that for the next version.

Jean

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 09 11:26:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 11:26:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzQsf-0004Kw-BU; Thu, 09 Aug 2012 11:26:21 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1SzQse-0004Kr-2W
	for xen-devel@lists.xen.org; Thu, 09 Aug 2012 11:26:20 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-4.tower-27.messagelabs.com!1344511574!9893227!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11034 invoked from network); 9 Aug 2012 11:26:14 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-4.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 9 Aug 2012 11:26:14 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1SzQsW-0004qb-Ah; Thu, 09 Aug 2012 11:26:12 +0000
Date: Thu, 9 Aug 2012 12:26:12 +0100
From: Tim Deegan <tim@xen.org>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20120809112612.GF16986@ocelot.phlegethon.org>
References: <1344252872.11339.27.camel@zakaz.uk.xensource.com>
	<1344252900-26148-1-git-send-email-ian.campbell@citrix.com>
	<20120806115516.GA68290@ocelot.phlegethon.org>
	<1344254179.11339.29.camel@zakaz.uk.xensource.com>
	<1344260756.11339.44.camel@zakaz.uk.xensource.com>
	<20120807094243.GB84051@ocelot.phlegethon.org>
	<1344503451.32142.85.camel@zakaz.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1344503451.32142.85.camel@zakaz.uk.xensource.com>
User-Agent: Mutt/1.4.2.1i
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 1/4] arm: disable distributor delivery on
	boot CPU only
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 10:10 +0100 on 09 Aug (1344507051), Ian Campbell wrote:
> On Tue, 2012-08-07 at 10:42 +0100, Tim Deegan wrote:
> > Acked-by: Tim Deegan <tim@xen.org>
> 
> Thanks, pushed to my arm-for-4.3 branch.
> 
> Any comments/objections/acks for patches #2-4?

Ack to all three.

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 09 11:38:18 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 11:38:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzR3y-0004YY-HG; Thu, 09 Aug 2012 11:38:02 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SzR3x-0004Y4-Nt
	for xen-devel@lists.xen.org; Thu, 09 Aug 2012 11:38:01 +0000
Received: from [85.158.138.51:31195] by server-3.bemta-3.messagelabs.com id
	52/CD-13122-811A3205; Thu, 09 Aug 2012 11:38:00 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-174.messagelabs.com!1344512279!27366584!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg2ODI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27529 invoked from network); 9 Aug 2012 11:38:00 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Aug 2012 11:38:00 -0000
X-IronPort-AV: E=Sophos;i="4.77,739,1336348800"; d="scan'208";a="13928594"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	09 Aug 2012 11:37:59 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Thu, 9 Aug 2012
	12:37:59 +0100
Message-ID: <1344512278.32142.131.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Tim Deegan <tim@xen.org>
Date: Thu, 9 Aug 2012 12:37:58 +0100
In-Reply-To: <20120809112612.GF16986@ocelot.phlegethon.org>
References: <1344252872.11339.27.camel@zakaz.uk.xensource.com>
	<1344252900-26148-1-git-send-email-ian.campbell@citrix.com>
	<20120806115516.GA68290@ocelot.phlegethon.org>
	<1344254179.11339.29.camel@zakaz.uk.xensource.com>
	<1344260756.11339.44.camel@zakaz.uk.xensource.com>
	<20120807094243.GB84051@ocelot.phlegethon.org>
	<1344503451.32142.85.camel@zakaz.uk.xensource.com>
	<20120809112612.GF16986@ocelot.phlegethon.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 1/4] arm: disable distributor delivery on
 boot CPU only
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-09 at 12:26 +0100, Tim Deegan wrote:
> At 10:10 +0100 on 09 Aug (1344507051), Ian Campbell wrote:
> > On Tue, 2012-08-07 at 10:42 +0100, Tim Deegan wrote:
> > > Acked-by: Tim Deegan <tim@xen.org>
> > 
> > Thanks, pushed to my arm-for-4.3 branch.
> > 
> > Any comments/objections/acks for patches #2-4?
> 
> Ack to all three.

Thanks, I'll push shortly.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 1/4] arm: disable distributor delivery on
 boot CPU only
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-09 at 12:26 +0100, Tim Deegan wrote:
> At 10:10 +0100 on 09 Aug (1344507051), Ian Campbell wrote:
> > On Tue, 2012-08-07 at 10:42 +0100, Tim Deegan wrote:
> > > Acked-by: Tim Deegan <tim@xen.org>
> > 
> > Thanks, pushed to my arm-for-4.3 branch.
> > 
> > Any comments/objections/acks for patches #2-4?
> 
> Ack to all three.

Thanks, I'll push shortly.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 09 13:03:12 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 13:03:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzSNs-0005IN-8o; Thu, 09 Aug 2012 13:02:40 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SzSNq-0005II-Uf
	for xen-devel@lists.xen.org; Thu, 09 Aug 2012 13:02:39 +0000
Received: from [85.158.143.99:18462] by server-1.bemta-4.messagelabs.com id
	B8/9E-20198-EE4B3205; Thu, 09 Aug 2012 13:02:38 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-7.tower-216.messagelabs.com!1344517357!24000333!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9296 invoked from network); 9 Aug 2012 13:02:37 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-7.tower-216.messagelabs.com with SMTP;
	9 Aug 2012 13:02:37 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 09 Aug 2012 14:02:36 +0100
Message-Id: <5023D10C0200007800093E77@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Thu, 09 Aug 2012 14:02:36 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Jean Guyader" <jean.guyader@gmail.com>
References: <1344023454-31425-1-git-send-email-jean.guyader@citrix.com>
	<1344023454-31425-2-git-send-email-jean.guyader@citrix.com>
	<501F97890200007800092CA9@nat28.tlf.novell.com>
	<CAEBdQ92yh41qEhN=etNpkZDaQargVw2iJiCGwkG_hAmGe9aZdw@mail.gmail.com>
	<20120809095136.GB16986@ocelot.phlegethon.org>
	<CAEBdQ90OoRmmY7PBD-ZAkdV2ha8zs9_aP6EAANod0HkwgPQOrg@mail.gmail.com>
	<5023AF710200007800093DFE@nat28.tlf.novell.com>
	<CAEBdQ92PBFq1kwvKhYz-mPBNm-NhO6HmaTw+Yi+oQP6+i89dGQ@mail.gmail.com>
In-Reply-To: <CAEBdQ92PBFq1kwvKhYz-mPBNm-NhO6HmaTw+Yi+oQP6+i89dGQ@mail.gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Tim Deegan <tim@xen.org>, Jean Guyader <jean.guyader@citrix.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 1/5] xen: add ssize_t
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 09.08.12 at 12:48, Jean Guyader <jean.guyader@gmail.com> wrote:
> On 9 August 2012 11:39, Jan Beulich <JBeulich@suse.com> wrote:
>>>>> On 09.08.12 at 12:19, Jean Guyader <jean.guyader@gmail.com> wrote:
>>> On 9 August 2012 10:51, Tim Deegan <tim@xen.org> wrote:
>>>> At 15:47 +0100 on 06 Aug (1344268059), Jean Guyader wrote:
>>>>> On 6 August 2012 09:08, Jan Beulich <JBeulich@suse.com> wrote:
>>>>> >>>> On 03.08.12 at 21:50, Jean Guyader <jean.guyader@citrix.com> wrote:
>>>>> >
>>>>> > Without finally explaining why you need this type in the first place,
>>>>> > I'll continue to NAK this patch. (This is made even worse by the fact
>>>>> > that the two inline functions in patch 5 that make use of the type
>>>>> > appear to be unused.)
>>>>> >
>>>>>
>>>>> Understood. I'll switch to using long instead of ssize_t in my
>>>>> forthcoming patch series.
>>>>
>>>> Please use an explicitly 64-bit type - AFAICS you're holding the sum of
>>>> some 64-bit length fields.
>>>>
>>>
>>> Ok but ssize_t is kind of a funny one. It should accept everything
>>> that size_t can accept + negative values.
>>
>> No. It's the same relation as between e.g. "signed int" and
>> "unsigned int". Value preserving conversions are only guaranteed
>> for non-negative values fitting both types.
>>
> 
> ssize_t is a *signed* type; I was wrong to say that it should
> accept the whole range of a size_t, since it allows only
> a subset of it. read/write use ssize_t as their return type.
> 
> From man 2 read:
> If count is zero, read() returns zero and has no other results. If
> count is greater than SSIZE_MAX, the result is unspecified.
> 
> Would it be ok to claim the same thing here? i.e. if count > INT_MAX
> the result is unspecified.

Sure, except that I think you wanted to use long and hence
LONG_MAX.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 09 13:41:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 13:41:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzSyf-0005v5-7q; Thu, 09 Aug 2012 13:40:41 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SzSye-0005v0-0S
	for xen-devel@lists.xen.org; Thu, 09 Aug 2012 13:40:40 +0000
Received: from [85.158.143.35:24035] by server-1.bemta-4.messagelabs.com id
	F4/43-20198-7DDB3205; Thu, 09 Aug 2012 13:40:39 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1344519637!12434297!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg2ODI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20247 invoked from network); 9 Aug 2012 13:40:37 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Aug 2012 13:40:37 -0000
X-IronPort-AV: E=Sophos;i="4.77,739,1336348800"; d="scan'208";a="13931904"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	09 Aug 2012 13:40:36 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 9 Aug 2012 14:40:36 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1SzSwt-0004R3-Cb; Thu, 09 Aug 2012 13:38:51 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1SzSwt-0000nT-8T;
	Thu, 09 Aug 2012 14:38:51 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20515.48491.125590.97199@mariner.uk.xensource.com>
Date: Thu, 9 Aug 2012 14:38:51 +0100
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1344506462.32142.96.camel@zakaz.uk.xensource.com>
References: <1343657037-28495-1-git-send-email-ian.campbell@citrix.com>
	<1344506462.32142.96.camel@zakaz.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [DOCDAY PATCH] docs: initial documentation for
	xenstore paths
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [DOCDAY PATCH] docs: initial documentation for xenstore paths"):
> On Mon, 2012-07-30 at 15:03 +0100, Ian Campbell wrote:
> > This is based upon my inspection of a system with a single PV domain running
> > and is therefore very incomplete.
> > 
> > There are several things I'm not sure of here, mostly marked with XXX in the
> > text.
> > 
> > In particular:
> > 
> >  - We seem to expose various things to the guest which really it has no need to
> >    know (at least not via xenstore). e.g. its own domid, its device model pid,
> >    the size of the video ram, store port and gref.

Maybe we need to have a "???" or "?deprecated" tag ?  That would avoid
us documenting things which we don't want people to use.

> >  - Missing reference for ~/device-model/*
> >  - Missing protocol reference for ~/control/shutdown

These can wait for now.

> >  - What is the distinction between /vm/UUID and /local/domain/DOMID

I think the former should be abolished (eventually).

> Ian J is this machine-readable in a way which is useful for whatever it
> is you wanted to machine read it into?

I think it will do.

Thanks,
Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 09 14:02:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 14:02:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzTJh-00069c-3U; Thu, 09 Aug 2012 14:02:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SzTJf-00069X-69
	for xen-devel@lists.xen.org; Thu, 09 Aug 2012 14:02:23 +0000
Received: from [85.158.143.99:2296] by server-3.bemta-4.messagelabs.com id
	8C/94-31486-EE2C3205; Thu, 09 Aug 2012 14:02:22 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1344520941!26818811!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg2ODI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8110 invoked from network); 9 Aug 2012 14:02:22 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-3.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Aug 2012 14:02:22 -0000
X-IronPort-AV: E=Sophos;i="4.77,739,1336348800"; d="scan'208";a="13932549"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	09 Aug 2012 14:02:21 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 9 Aug 2012 15:02:21 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1SzTJd-0004rS-IT; Thu, 09 Aug 2012 14:02:21 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1SzTJd-0000pY-Ex;
	Thu, 09 Aug 2012 15:02:21 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20515.49901.369414.638893@mariner.uk.xensource.com>
Date: Thu, 9 Aug 2012 15:02:21 +0100
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1344506462.32142.96.camel@zakaz.uk.xensource.com>
References: <1343657037-28495-1-git-send-email-ian.campbell@citrix.com>
	<1344506462.32142.96.camel@zakaz.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [DOCDAY PATCH] docs: initial documentation for
	xenstore paths
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [DOCDAY PATCH] docs: initial documentation for xenstore paths"):
...
> > --- a/docs/misc/xenstore-paths.markdown
> > +++ b/docs/misc/xenstore-paths.markdown
> > @@ -0,0 +1,294 @@
...
> > +PATH can contain simple regex constructs following the POSIX regexp
> > +syntax described in regexp(7). In addition the following additional
> > +wild card names are defined and are evaluated before regexp expansion:

Can we use a restricted perl re syntax ?  That avoids weirdness with
the rules for \.  Also how does this interact with markdown ?

> > +#### ~/image/device-model-pid = INTEGER   [r]

This [r] tag is not defined above.  I assume you mean "readonly to the
domain" but that's the default.  Left over from an earlier version ?

> > +The process ID of the device model associated with this domain, if it
> > +has one.
> > +
> > +XXX why is this visible to the guest?

I think some of these things were put here just because there wasn't
another place for the toolstack to store things.  See also the
arbitrary junk stored by scripts in the device backend directories.

> > +#### ~/cpu/[0-9]+/availability = ("online"|"offline") [PV]
> > +
> > +One node for each virtual CPU up to the guest's configured
> > +maximum. Valid values are "online" and "offline". 

Should have a cross-reference to the cpu online/offline protocol,
which appears to be in xen/include/public/vcpu.h.  It doesn't seem to
be fully documented yet.

> > +#### ~/memory/static-max = MEMKB []
> > +
> > +Specifies a static maximum amount memory which this domain should
> > +expect to be given. In the absence of in-guest memory hotplug support
> > +this set on domain boot and is usually the maximum amount of RAM which
> > +a guest can make use of .

This should have a cross-reference to the documentation defining
static-max etc.  I thought we had some in tree but I can't seem to
find it.  The best I can find is docs/man/xl.cfg.pod.5.

> > +#### ~/memory/target = MEMKB []
> > +
> > +The current balloon target for the domain. The balloon driver within the guest is expected to make every effort 

every effort to ... ?

The interaction with the Xen maximum should be stated, preferably by
cross-reference.  In general it might be better to have a single place
where all these values and their semantics are written down ?

> > +#### ~/device/suspend/event-channel = ""|EVTCHN [w]
> > +
> > +The domain's suspend event channel. The use of a suspend event channel
> > +is optional at the domain's discression. If it is not used then this
> > +path will be left blank.

May it be ENOENT ?  Does the toolstack create it as "" then ?

> > +#### ~/device/serial/$DEVID/* [HVM]
> > +
> > +An emulated serial device

You should presumably add
    XXX documentation for the protocol needed
here.

> > +#### ~/store/port = EVTCHN []
> > +
> > +The event channel used by the domains connection to XenStore.

Apostrophe.

> > +XXX why is this exposed to the guest?

Is there really only one event channel ?  Ie the same evtchn is used
to signal to xenstore that the guest has sent a command, and to signal
the guest that xenstore has written the response ?

Anyway surely this is something the guest needs to know.  Why it's in
xenstore is a bit of a mystery since you can't use xenstore without it
and it's in the start_info.

Is this the same value as start_info.store_evtchn ?  Cross reference ?

> > +#### ~/store/ring-ref = GNTREF []
> > +
> > +The grant reference of the domain's XenStore ring.
> > +
> > +XXX why is this exposed to the guest?

See above.

> > +#### ~/device-model/$DOMID/* []
> > +
> > +Information relating to device models running in the domain. $DOMID is
> > +the target domain of the device model.
> > +
> > +XXX where is the contents of this directory specified?

I think it's not specified anywhere.  It's ad-hoc.  The guest
shouldn't need to see it but exposing it readonly is probably
harmless.  Except perhaps for the vnc password ?

> > +## Virtual Machine paths
> > +
> > +XXX somehow describe how /vm is different to /local/domain/

See my other email.

> > +### /vm/$UUID/uuid = UUID []
> > +
> > +Value is the same UUID as the path.
> > +
> > +### /vm/$UUID/name = STRING []
> > +
> > +The domains name.

IMO this should be
  (a) in /local/domain/$DOMID
  (b) also a copy in /byname/$NAME = $DOMID   for fast lookup
but not in 4.2.

Guests shouldn't rely on it.  In fact do guests actually need anything
from here ?

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [DOCDAY PATCH] docs: initial documentation for xenstore paths"):
...
> > --- a/docs/misc/xenstore-paths.markdown
> > +++ b/docs/misc/xenstore-paths.markdown
> > @@ -0,0 +1,294 @@
...
> > +PATH can contain simple regex constructs following the POSIX regexp
> > +syntax described in regexp(7). In addition the following additional
> > +wild card names are defined and are evaluated before regexp expansion:

Can we use a restricted perl re syntax ?  That avoids weirdness with
the rules for \.  Also how does this interact with markdown ?
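
To make the wildcard-expansion question concrete, here is a minimal editorial sketch (not part of the patch) of matching xenstore paths against documented patterns. The `WILDCARDS` table and its expansions are assumptions for illustration; Python's `re` module is Perl-style, close to the "restricted perl re" suggested above.

```python
import re

# Assumed expansions for the document's wildcard names; evaluated before
# the remaining regexp constructs, as the quoted text describes.
WILDCARDS = {"$DOMID": r"[0-9]+", "$UUID": r"[0-9a-f-]+"}

def path_pattern(doc_path):
    """Turn a documented path like /local/domain/$DOMID/cpu/[0-9]+/availability
    into a compiled, anchored regex."""
    pat = doc_path
    for name, frag in WILDCARDS.items():
        pat = pat.replace(name, frag)
    return re.compile("^" + pat + "$")

p = path_pattern("/local/domain/$DOMID/cpu/[0-9]+/availability")
print(bool(p.match("/local/domain/7/cpu/0/availability")))  # True
print(bool(p.match("/local/domain/7/cpu/x/availability")))  # False
```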

> > +#### ~/image/device-model-pid = INTEGER   [r]

This [r] tag is not defined above.  I assume you mean "readonly to the
domain" but that's the default.  Left over from an earlier version ?

> > +The process ID of the device model associated with this domain, if it
> > +has one.
> > +
> > +XXX why is this visible to the guest?

I think some of these things were put here just because there wasn't
another place for the toolstack to store things.  See also the
arbitrary junk stored by scripts in the device backend directories.

> > +#### ~/cpu/[0-9]+/availability = ("online"|"offline") [PV]
> > +
> > +One node for each virtual CPU up to the guest's configured
> > +maximum. Valid values are "online" and "offline". 

Should have a cross-reference to the cpu online/offline protocol,
which appears to be in xen/include/public/vcpu.h.  It doesn't seem to
be fully documented yet.

> > +#### ~/memory/static-max = MEMKB []
> > +
> > +Specifies a static maximum amount memory which this domain should
> > +expect to be given. In the absence of in-guest memory hotplug support
> > +this set on domain boot and is usually the maximum amount of RAM which
> > +a guest can make use of .

This should have a cross-reference to the documentation defining
static-max etc.  I thought we had some in tree but I can't seem to
find it.  The best I can find is docs/man/xl.cfg.pod.5.

> > +#### ~/memory/target = MEMKB []
> > +
> > +The current balloon target for the domain. The balloon driver within the guest is expected to make every effort 

every effort to ... ?

The interaction with the Xen maximum should be stated, preferably by
cross-reference.  In general it might be better to have a single place
where all these values and their semantics are written down ?
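
The interaction being asked about can be sketched as follows; the `min()` clamping rule is an assumption for illustration (it is exactly the kind of semantics the review asks to have written down somewhere), not something the patch specifies. Values are in KiB, matching the MEMKB type.

```python
def effective_target(target_kb, static_max_kb):
    """Assumed rule: a balloon driver reading memory/target would clamp it
    against memory/static-max, since a guest cannot usefully balloon above
    its static maximum."""
    return min(target_kb, static_max_kb)

print(effective_target(262144, 524288))   # 262144: target below the max
print(effective_target(1048576, 524288))  # 524288: clamped to static-max
```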

> > +#### ~/device/suspend/event-channel = ""|EVTCHN [w]
> > +
> > +The domain's suspend event channel. The use of a suspend event channel
> > +is optional at the domain's discression. If it is not used then this
> > +path will be left blank.

May it be ENOENT ?  Does the toolstack create it as "" then ?

> > +#### ~/device/serial/$DEVID/* [HVM]
> > +
> > +An emulated serial device

You should presumably add
    XXX documentation for the protocol needed
here.

> > +#### ~/store/port = EVTCHN []
> > +
> > +The event channel used by the domains connection to XenStore.

Apostrophe.

> > +XXX why is this exposed to the guest?

Is there really only one event channel ?  Ie the same evtchn is used
to signal to xenstore that the guest has sent a command, and to signal
the guest that xenstore has written the response ?

Anyway surely this is something the guest needs to know.  Why it's in
xenstore is a bit of a mystery since you can't use xenstore without it
and it's in the start_info.

Is this the same value as start_info.store_evtchn ?  Cross reference ?

> > +#### ~/store/ring-ref = GNTREF []
> > +
> > +The grant reference of the domain's XenStore ring.
> > +
> > +XXX why is this exposed to the guest?

See above.

> > +#### ~/device-model/$DOMID/* []
> > +
> > +Information relating to device models running in the domain. $DOMID is
> > +the target domain of the device model.
> > +
> > +XXX where is the contents of this directory specified?

I think it's not specified anywhere.  It's ad-hoc.  The guest
shouldn't need to see it but exposing it readonly is probably
harmless.  Except perhaps for the vnc password ?

> > +## Virtual Machine paths
> > +
> > +XXX somehow describe how /vm is different to /local/domain/

See my other email.

> > +### /vm/$UUID/uuid = UUID []
> > +
> > +Value is the same UUID as the path.
> > +
> > +### /vm/$UUID/name = STRING []
> > +
> > +The domains name.

IMO this should be
  (a) in /local/domain/$DOMID
  (b) also a copy in /byname/$NAME = $DOMID   for fast lookup
but not in 4.2.
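
The fast-lookup point in (b) can be illustrated with a hypothetical sketch using plain dicts in place of xenstore (all names, UUIDs and domids below are made up): finding a domid via /vm/$UUID/name means scanning every domain, while a /byname/$NAME = $DOMID copy is a single read.

```python
# Stand-ins for xenstore subtrees; the data is invented for illustration.
vm = {                       # /vm/$UUID/name
    "uuid-a": {"name": "web1"},
    "uuid-b": {"name": "db1"},
}
domid_of_uuid = {"uuid-a": 7, "uuid-b": 12}   # toolstack's own mapping

def lookup_by_scan(name):
    """O(domains): walk every /vm entry comparing names."""
    for uuid, node in vm.items():
        if node["name"] == name:
            return domid_of_uuid[uuid]
    return None

byname = {"web1": 7, "db1": 12}               # /byname/$NAME = $DOMID

print(lookup_by_scan("db1"))  # 12, after scanning
print(byname["db1"])          # 12, one read instead of a scan
```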

Guests shouldn't rely on it.  In fact do guests actually need anything
from here ?

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 09 14:03:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 14:03:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzTKX-0006Ey-It; Thu, 09 Aug 2012 14:03:17 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <thanos.makatos@citrix.com>) id 1SzTKU-0006Es-JN
	for xen-devel@lists.xen.org; Thu, 09 Aug 2012 14:03:15 +0000
Received: from [85.158.139.83:55797] by server-8.bemta-5.messagelabs.com id
	5F/11-05939-123C3205; Thu, 09 Aug 2012 14:03:13 +0000
X-Env-Sender: thanos.makatos@citrix.com
X-Msg-Ref: server-15.tower-182.messagelabs.com!1344520991!31008366!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg2ODI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8761 invoked from network); 9 Aug 2012 14:03:11 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Aug 2012 14:03:11 -0000
X-IronPort-AV: E=Sophos;i="4.77,739,1336348800"; 
	d="bz2'66?scan'66,208,217,66";a="13932572"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	09 Aug 2012 14:03:08 +0000
Received: from LONPMAILBOX01.citrite.net ([10.30.224.160]) by
	LONPMAILMX01.citrite.net ([10.30.203.162]) with mapi; Thu, 9 Aug 2012
	15:03:08 +0100
From: Thanos Makatos <thanos.makatos@citrix.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Date: Thu, 9 Aug 2012 15:03:06 +0100
Thread-Topic: RFC: blktap3
Thread-Index: Ac12N7M32j6sBcjRSBOZA6AeyHi9wg==
Message-ID: <4B45B535F7F6BE4CB1C044ED5115CDDE0107F6C1D44C@LONPMAILBOX01.citrite.net>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: yes
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
Content-Type: multipart/mixed;
	boundary="_004_4B45B535F7F6BE4CB1C044ED5115CDDE0107F6C1D44CLONPMAILBOX_"
MIME-Version: 1.0
Subject: [Xen-devel] RFC: blktap3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--_004_4B45B535F7F6BE4CB1C044ED5115CDDE0107F6C1D44CLONPMAILBOX_
Content-Type: multipart/alternative;
	boundary="_000_4B45B535F7F6BE4CB1C044ED5115CDDE0107F6C1D44CLONPMAILBOX_"

--_000_4B45B535F7F6BE4CB1C044ED5115CDDE0107F6C1D44CLONPMAILBOX_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable

Hi,

I'd like to introduce blktap3: essentially blktap2 without the need of
blkback. This has been developed by Santosh Jodh, and I'll maintain it.

In this patch, blktap2 binaries are suffixed with "2", so it's not yet
possible to use it along with blktap3.

An example configuration file I used is the following:
name = "debian bktap3 without pygrub"
memory = 256
disk = ['backendtype=xenio,format=vhd,vdev=xvda,access=rw,target=/root/debian-blktap3.vhd']
kernel = "vmlinuz-2.6.32-5-amd64"
root = '/dev/xvda1'
ramdisk = "initrd.img-2.6.32-5-amd64"
cpu_weight=256
vif=['bridge=xenbr0']

Before starting any blktap3 VM, the xenio daemon must be started.

I've tested it on changeset 472fc515a463 without pygrub.

Any comments are welcome :)

--
Thanos Makatos


--_000_4B45B535F7F6BE4CB1C044ED5115CDDE0107F6C1D44CLONPMAILBOX_--

--_004_4B45B535F7F6BE4CB1C044ED5115CDDE0107F6C1D44CLONPMAILBOX_
Content-Type: application/octet-stream; name="blktap3.bz2"
Content-Description: blktap3.bz2
Content-Disposition: attachment; filename="blktap3.bz2"; size=230728;
	creation-date="Thu, 09 Aug 2012 13:51:49 GMT";
	modification-date="Thu, 09 Aug 2012 13:51:20 GMT"
Content-Transfer-Encoding: base64

[base64-encoded attachment blktap3.bz2 omitted]
y2Pd+3W1E5fZReAoDEvsoN8on0z4foaQT6CO56fovGCh9aNWnqgB+E+ImEJH2nnp9Uz6BuLPkuRO
Gtd4vPDavm4z+bOx/ogr5Ihbb88Iw8npl4m/TTj5z4kI+gF5kOoTdoKHn//bPQ8aU+V+mMC1xogy
omjFvhiTsb0TnWzV6Y4K97IzuupkZUmQ8O6kTIumqwadZk4klBHk9jcXVzkEvU/5f0FBIazcUbUX
KlUoxIVIz9V1P6BXj+E6p/v1GpoMUTm7letzJlBNs3F/iJoFFnXhMCp+4h+E+X5bTpE7AQi9H9lI
VACKVXUT64jjL5PmgQ1HnHN6NcnSNn5HbmdL/tXSjhfJmhyUIvq+Y74e4uZeBSDqTSGY/8I6E0rU
/cuQhkzfo+h/mi/84iEOiB+j/3hCX6/zORi1F2KMPEXc3f5CAeRAJN8//qUh86Pv8PNDyddZRb7F
JXEfLAb673CUfzQ993qPjq9459FDWn8wUY/AlfD6q+uH15n0WmLocQIEhjoOYPIutf006EF6ZL50
0lja6V9t8PSVn9USs6KDihCiemMgf7ae1Ti40hg0hdL3UQah4qiknL1VBYkif1kIUt2H1jRMplz1
/GcpjWNQxN6jWg8Dd/TC2wdsU1ipoaUKGkyKjFz7yHXzS0hCENkUiCvU0gtZx4XE3wx2OlGYr/eb
lCL/34haug6Gd9nFpSwIag2aGFXY6/j+gj9hEX/Q4/JJQk2P0/iPnDgv9PjnIc9nspuBVlE/to0k
p/EnjYccn9yU0/hp7E54/A3TpHhfCw4Es688cX/NgLgkgusqMSL+pGmoyRNL+TGDjHeF/z0h54We
GalijRAK7bskThBTjbrOjCRMARx6v1YApAVudUfYbxF0JC6/XJPD2K1XMMgI9y8U4gSH75lcdp8X
fYR2xrMVLiBqDUJqJ1NIOkv80xgrELjv4ZOuzMqEEBpD5cMPoppSB23Sfmi4iGvn9WC4E/UZUAn+
pXlEYyfLKRnssBP/N+W0OTWqw/zPVqYWsKcYkRRUExODwJClT0YU/beupprjTmFPyyo+iGqbap/D
3gUnwnFekxHGh8B2vQEFG1/cl74B79jyFYmssDLVGKHn1kkyZLqiEMKJgwyX1dPWLv+X4x8hHCYg
+Syb4k5GxW2tFqcf7D8n5rxuyYfv0eMPuIeb4elTzerGppcf9BFeVGtuic1qqwdWDrPObNxH9h6y
8QdoZl/okgAcMh7PR1m+HGKb8MYxplLkwfQh1B0qtIx3uyTRWcaL6SI54wJtYsVMRU6tAPtXBMwe
lMIQM2i/nDdfA7EQ4Ca8R3xp+Oe8304W2kKdv+E5l039wp1bG0fxXoPwmx8Oz+vuOf7yinfo1X9E
ZBFC4WawxPVgxEHo4GEepabgMVGFQhoqqDmnBCpFPqbMrQKNI8tgd/gwQ+XwC/g95MWYtA5Bparp
swOEWZ0LJy+jdROxbXIkwomBo/TJUbM4ZEbXF2lIjDSD+KyikhREUW3ObToud7F73rj2I5nOYTLn
o1iOgSYNKImXSIsQopxCpuk6O7AsRVrM5kfOnjbiDHxc8FzHDcZQ0bFO0BmJGZjEx41J4xwe/Y3C
p51swzsgIMAgcIX+51fdsGoacyaaYzgEuVNmNu9HGaHK8ir/j8SGS4VBSP2yBhLMh11hoZyOm6wh
IlIfrmMNBjy7lV9lDsO1xXp/HVzhCyErJWWvQm1kHEG2f0H4yfWkfq/Zh5oj/N8DCk644JJSIUny
nz/DuA9xPkXuz/9v1bdMwd6PvP/wmu0QRD+a1r12iRRCf+mAKlHieEQlBfp6UkoNFoiHcKZx9rPJ
g6E2yZFuyvakbgbv/+rZ4bUxa34w9wSjFv6T3jl+bt6hNYJqlTQkQ+ycmJnu8iZ4WIODHrYi0cns
IMeih+Dg0+wNZ+E/sZuk+Wx8LcbdZFvzb0hTbkebr0H9JY3YNRqMOF21iViPGNG8kDeHXhUaCmug
+r672wZeDMf2ffsbDJa1Qyx+D7LNZPusdaa5a056wRzcSHsR94EzrkdN1ay7sGmwdHxayAWe0l6A
2lsBoTIv4x2Wv8Ht7cHHyNaiBGv0qQmH1OxXrEGEhG52lFxXIctzlg1HRRFSyxt2ytcnayGyvC9n
C4mNlN7d+kmoxQ48V8/Qf9lRgPZQ1nQwaSgyiIpceHBaWoGbbjKpVl6kZ8FODm6mVyRs28ddLfiX
xJ1NFtibZg8E86fHiuJ9fbM8Y8+HBaOfXJWvTHdFp+XkRBSi+I4N8EgJrCaXy4NlC4SnVyB3oxcd
SmoAhLb+jyjwiyGYgJsKOWILVhi+Jbjdk8LHT6PTdfLfW8naK3HpIRZiyhjEs01enL73jEo9WfVF
77bm/ni1OgZwQgD4e7j7j260p+uSHs0oqdjPHn/TDZGAwYeqLRLinTa3s+rXIPy8JQ+HqcD2SrDD
BxrlzR2YUvVnpykfP4UKLUu+FzMrOFZ9k28BiXlaBpDR7RKz7JnpxqHftMS/wzfDLGWOM4b+Nijn
O/Xb80lsMv2yyM9t70Otg/Ai7vm74nYvf93MRE0pERBIQQEe0geazLaNtRrFpSaptJKkGkWSGGQ+
zMGGIZWgCaKybYowVzWK8rnpVkygUrQRt7tdpo2bve14AhAwIb6XGLSUefmtI6xrcxKJHjMIKDWF
/WHp5v3FyUs2KM0AdC6ZCQJMBSMp/NH+Pnmx90TMD6OujMVPu+GLG4j3YpzAeOAl2uYvBRlkAuOL
a89L2/ZrtoYkrts8dr1BA/pgWIANyChf1yKZJSnSUUxEE4wQwxMQCRUfH6G+p9R0+japA+wkdENO
q5CZMIwwT5fGVmNFeMw4x+IR3WSg5Qe1RAUtoR/snF3grmGRLa79+4fHA5ULMw5SSECRMg2LMJZ7
Pnv33zp/K7MRxp3/t/n+XESvMMPT2fw68L6wY7dW7UYUukXiA3QBmY/SHxABu/MpDnKw9KY7dHPa
fDzYZm/9dC6rjSTd//s/go+lvGDeVmNj5o48EevwiWsZjg2iSSY6lPp4FEvXxZhQCY1yNhc4BqPd
Q9d0mlfd6uWQ6Q61RewE1vu/rbKZGAGLBJ2tVwgbfNxmuWSBr16RGStReggrSHaTRxh1IRyQE6xc
hEYgoPBfo92ES7AbgQYUCfswwV/74H8xPjLzhsaSFCXNjhSlzjkKeqXPRM9MKoFsl+v3b/Jpot9l
Dqdulo6cnwhx7GqIZB6jSmGKRsmN9+zAFzEBNIukXh5UBvHWKa6l2AYIGlZlDFAEkzVvdmCisEEu
qEA52lT7o9Mjj6vLcZWTSRPTF7i7mH6t84gyFpWwwOO6A+CDXtr7M3Bsj97AdjUcDgJ9wwOpG3Ob
y7EGSnr5zYjaF4k4INF6SuQU0h0k5hvfm0UxJ0Y7+MDhlHRjV2Iu5T7ZwIJmsRbNqEGL3OByF9sg
KlZ7zYxjpD80aJe6VwoO6AO4lNHB6dnsbC5LzCZOQDSL4S7SROzDzG8rtGnadviGNsqOCc/fre4C
dQrhCa6DRdEn9h2uC5kgbQ7oj2jGHT4v4a964iyvuVTz/qR3QFeH3eht7PM24M0zFeN13wo0oqCQ
2QEB3CAI9ORKEsifi/Tw/2f2fv6Jmeoq/67JEBv4fZ3bAiMtQiDFEVqwkpAGhAYlRTyJAHJUDJQF
X1LdS/3lzUUzTL0xRj6g0VRb9fKfrp/g77FPRqg9/vUwikyFGAEv0bwHLEFdjZFBYdA1yPrfwxDf
g3+Uk+vrRsFEkIoKoi/VGQHjBjVQp9vHyF83gdtg8kd0ZmAyZJUA9n32ZOkUnSdx4zHs51ySHEIi
K8vTDpGcNsN1It6aE1Ep4RpYs6VQQYKJfIO6c4l7GGEOUqj4LADyxQ6jOvNI2RLDk3aeMFEEQFEY
jBSCnENaRoiacXhTEiMdrTnzaHamRoBJJkLknEkDCEyEGs8Fedla6HzX64tDWUh+Kgeb0T+qmJBd
vwPuS5T/l8/m1R9r/oUlcfLm/C72vNG3S0875caPdOqn2v77+E44WttuY/Ri9Zvij8uuWWUocWbe
d9D8O3u0UHaAA1B7IQ+mf2VPXMXkWxU2Oy4Wx/BuzuwViLXossiHURjOzON9Dsup0zaECsCMOrk5
WPoanrlatcmpeWqWq9auuG3SkjTJnlZIHpUh0xHQBvnc5BmgKdXCBg+isRshujLCeSbuUBFmq/kf
4N5P4uAleetHyNVN+Hqh78TjDVGOMIg5lFh2MwdKX4bWIFhmGCdODbShErx9ihJUhibbJzIREw1R
zmZiZNN3/N+zlR2OHv1o5MwjgoJmhYmCKYhaSon8oyMREEgITkgZ7suvh5KaBQp8DDF3kKDUFKmn
cwzaBzLgB9RilQsZgmYHnwHoPp1E2MUQV4E6YuQuWMDTlqB1IPW9KOAQ7L7x0KVSDh22wV6SUb1Q
7iQyEgskHkGQSnar+k/pcE1BudMOIX2YZGw5pEkES7ajYpaHweFBOAYiTw6Q92+vJ6zHr7amLqa4
crBxSacMzL3RoiJG4VFP1lJctDSdvQvJgtXotOy3zKGN5IvaVhE3iMN2BfOtIlkVU/TH62K4iC43
svi384cE69DQSw7ZRCEcyo01BqtKpbQsFpy0cWNQagwYYhlOhcG+FYeUbbBG4eshyePVvojA3zTI
gyqecNlDJQ5eykC3uKaq7LizE2ROnYieOg50RKHUJcQb1WdAeWUuIc6NBRegGGpACRAJdbceBfgG
nMgFeEv/7F9E+L0vC1OmhDF+cqNNP+U2TJgufxdrXx4EXh/d9G+OvTbbxzgQRFOa3TvFO01RRE0H
dIFUMZ8C3Iwaj7tQwQLJcwd2fpXMRKM6zt0KF5CVl99l7d0yUq0IcHtsTLqY4dYvOwe8YZm/9/7V
9ZxjBRf6P/f7IBTqOkz37OiO2A8xkJhvxiZcOpdG8yHFJxIh8e7VSF1nlM3xh6bk9GDHsicIkx8/
PWuOtHKJL7kJEmOt98GaqDxjIQRjSs/PXbTTo9lk2Ai2EqixlGQFzdX8U1es1sEj+ot8VBoGddk7
KhXHlxgsqZb2nz/GazmH2D+Y/gm2O9mCdqtnYbX7A209JdcwfRKFI72SyMZkiIt84G4zs+T0P7Vw
p9G2PQ12tXCrK+GmmZOdlgQD9FnVB8ezc79n0fyyjKfpuWfGTyn3Tl+VxpfyV+/CzeMAHyYRjPsU
NoPZ7t8c/lOJt+f/UfT+78/mv05ouzOM5qf6RFencRD88BUIF/N9eljRhJv4OP1wgfNKhREhAsRR
cW4cWSUFM19fx3+B7gj3XXbLuwzbfOcss3951iCJXoBUQXzT3rOqEjwZx2wwnXDiLoiCGESilqgS
29VfXvRNNQs0m9xbXVshbCyYQsQX+BD6FQO3/Kh/V38+APYhjQ+wVXpmfjZNcm/YmB/ybIeQ6/yG
zZmXpkzJJkItlDQaTUCBFIxNCnnI4Q55/mPs+HrFH+en587Q2/z4pbLOWgvj7nH+zzP5fs/SBkbL
5/0e3lNuaxa1mqcEEOcpHTm2+LdLHO5T6G/RUDzFrs3yB/Jbd2N0MGLXiYI+3Hbzd6O8vfezOXMK
v7/kbVu+qvKjXnnmeQGSbkgvDTVr8h6gaadDoHHdncRuZQZDJoHA/Xd2ih6TnHOTh2GHPKIBEepr
Pe/tTfZX9cPz5/kfZ0flgQkhnR7gBxwSZovugD2/yVZwopqtrLLqTNbeHo+vw8viEV6lfOQ6yG1w
3FoJdL+/J4D/jk1D8I7TaZPzpnIohbug3q/c7MH0Mt9O/6jtuavsJeeuOBRRMGONi4TioQsvn7vr
T3vHX6zdEpjfXUsAUtkoSKs6cXqcK2QXjCjWQKaxs5qvGmtBSHTA6NCeHQ8JEyd7CIogmqIRWtzx
+H93YE8A4F3IIkSRohQ7pDtKGqgUmJCo0Yn37pZBDaixJk0I0skYMkWLTJSW2w1FYyhVmzpbdGwU
olJVUAtKBPcwiBttx4YCd1fjmSPnAbSfZyfE0hSFKe2D+S0VpdjdJgSstuhBvaN/Kn/Lm/d7E7FE
XtTjuZ7C6fnQ380o9WdCciyDsfvfDd/fq/cuPtlvTHaYM40EW5yfpkGwb2R3fe2d9O2Yxh7/PAvl
dCLIQjg7JoEJRdDRBJkCimmkp+pTSQhI9M4sLhkFko+42H9PLFIzwJo8/2kfhvs6WL201O15DCS+
c30eaQSTSg9wwzMmh+P/ZdAi2i9HQRPurfbW84Zv8Dz/sP167rqt5NP2jhrNJPwp+FrV/BTDhgVK
qTJjOxmImywpv37Jo8V1xlRigU/F5+PidHRYTNfqd2kdH3viSC5eOECl6uc0UI4XKJNSbn4dIZIX
3y1robG+o3jgMFyD8KAZxMxtmYVFL3bc+2RI52HvyiUJ+c0BvKk35ExedPO/ctIuW3xEJ0cYQxHb
Dz0424WZko3cPTeNZP0C/KiyErDFglL236oTES+WMZIWJNT9EdinO2Gx2bsjdcRVtCE4zvndQ8Uj
4Z6sbbdIaNG6ERC6k/4fOJmO5mOtjcAJC8tp1HvJxh5iB+qBb5umMLUFyXOpX+k8y8bv7fJ+C69C
yPLw83ye6lPlR6PLEtXL+L938EGDefi1I4I9Us78cXl0z49XyppHyl3DcH9XfEVU/CLbNSdb+yDr
G1JNFJnG22wvkSGEjLAVzZKUKfKSnSDcO9D1zPxO59XezcLvD7zO64Rec7SMoScSgpwRcmh5okIe
/N6vCDv6O7R4bmScbq6l1Ybsa2UKIuY5q7uv4zu4gVKolG9WNQfu60LnSkH/VA4xOSD8I8CTzjpG
2Yb94Yf5O2cEDtAbyudHBGY7OjTCyyxfumUDUPNDn33YL/n+S1TTAjVA6RQ+8pd+fSn3t/Hl32z7
5h65EdL0J6MzsJT6Jwb/Bzg0ojsbNbjxt82puq9/SLZ3vEbvTAMZlnsrr7JiawHaKZ9H81oGe3s7
vX9BocC6Tw2pQV1vJYg9URwd0zCsB4Ztzz0049XeaxGbNQkRkkJIHFVS/x/XX3eRsbz4q6NuPmrP
h1a3NSx1P6f4vIMv5i4MEPHSISWoREVohMyH24WhREvVeekuqcVgaKzVEuo2DPNiwwa5lfg0k5O0
RjbdfR75SYlGndyzgxFDGCnAexSOjz23UfbWCseOsxc8xXAjpPG2pFkwjU8ENNXbDqkaSNpQiSNm
yBvBUVueOGWrdqn2VtO62wRcJ3znBDRQRgoBFFBMkVU49R+jqm7E2M7qwa/XvVP6+idwh9SAkgVF
A+X0m/KjKGQ64KmpEJBHz+x2kVZWDfAQmEkJDa2geTXHZ3u6yOraLzYb+zfSNtXFvxyWC6F5J0qT
k9qlRY1aUJtNN22OU42nOJYWdi/P0cGEIM+7vHj53a5jhyjWzll7EzGd4JcyMWqtHTvJF0LM6wvw
vfVnbRXlYxjiYO98rZ2GCfDWdWUpGLx2yfjx2xiMvYzi/p64665a1zymuZ9Kv3ttQOOnANcLGR73
8aXNVjSWc+/X1+2bFrw9rbk46C6+GGGhmzMzEUzAespCHr6qqxg5+I+Ri+WSu48sYTEKm/CNI89U
QUM76Xxeam8Lb7y/nPdQpaWokawOEthSZTBgekL0qMBib0XIHEhMFBiD46NbdiOB4XKgooiQrCH4
7OMGznO6KTBBx3aUY979MazRl0VjvkWJwsHNpnlZ4d9kMtPB7zCVb+RPODk6Pms4xkRV8NlyplKK
lbO2GPzY/X2na6OZwcl1TSvV+vfwvnq9OvTd3Yn6v3a4+KDMufotNkcW1xB8Ywxz2uJJTyfOUYZu
ULBtmPFzq3B8ncfLvg9IKQqqgZ96+q9YafLq8UCi4bhboldBwpA0ghA5a0ZyiSAntvg2oFhXc76s
PN4dxQSiH8BQMBTgCeUOa7RryLbGLFZwLGTg9GlnnIV2/KNignL4LNZHB+wZcbAte3KHZprxrbCy
FlrWnDZprmTuHtyzhnNMckPOFr1ZjATzot8i4MkJqM9oTpdljua835Jmva623S+JmXCSZEIX3BjY
OyqhCZMkCmTBKJvns8Zb+NhAsEKAOwrUgToe7XKLX2mljSZlQv224Y029/cPvyanjEiRSF3mTU2d
jEP0yeqUKBIxm/eagIouzRxNcNry43Mj+le/ynuDgHcxGd1v3xFKGBHVhhy0t175rDO82IpfLU7w
s7YvrcWikyzI50paVua1nILeUu32tkWOUswd4kMfDsEgVA6MrNduECe++/XwoWRbEhPO2bPudktO
zsSSmUaUhzGAIQ2+eL92AFWPZUR72e7us8cpcy/XBn2uz6PRk+ujK3fO+y4yYic0A7EoXDItU8Oe
uyxNPhamw0Sk64Z6GAuWrR2wfRrz6Vjyy/W4ANoeHmu/KSSFLdv3ESdtxDsNtzTjK11whFEE4yQk
BZqZ8QjTelzfqUrt8DVZPYYRLSFhzpbBmdMK+x07cuUI3ujzPut3mMjZ2QyWh9qCg9o70SE+X1sc
eg9as1sOTG3ShWyvQWaAbIzYa6rtOVmOWM09WdNQk5rqRINNQ0sk1KPCy2U26J6bevEz2M23DAlZ
FsYVWV9Jk7JaftrDERTC/VJ7kWykGElu52cE189wXBZkiF0o5D8S8Blwa7YycksJEvnUFTSEey5u
NHEGyl9pXz3Br4GzWlloHhx88kfNRlSiuXgJ6V1fCUGluhjl0cMMsrbG13Q6o8ujy9lz28mYvZWs
wbWTMcKKTY2lKHLlt5dZNzxhqMCMiPfwqXzsN6HG8aoorqAHA2xJQIxrwUUxBDWjDjpQJBFHXzLO
LAYghKCRBxVo74NHyZaWMw4yU1a7pCCFnSS2ymTbTjy0nZqh2Pq5kTY0TlsnshqGSlhmo1Xh02Jm
G2sSiUUeWtPGtFANyFThElQgbnzSMHV7vjkQLUFQICHlacuwguZLo5Hx7jfjA4ER2I1vvaxgRRik
RiQSMIehcOVjNvQ4CY4SIO9NSMEiDyKCPkCIJ08yd/r80tvCSmu5jI3gsJD5jbmejjs7b4N6zzB7
GZ68+OyYN7CgFgMmxWzCE7zPErheOz2sUJC7nZ2HREDyg3FCwPqSgKDq8syI6bZVGSuwWRojmtiI
HhJyeaEvxjU9Ps8rCBY9O+EcCzQ4YQ3fnLUmK2Ns1Uwd3a5hlezQQeRrY7pDzx8wapqO0XqO/bYc
L7JltOY8+90ObiEFEWGHFy3udntvxjiWl1xNTXDpgfnuY9KJuP9MM9hdai8ff+JuWhCU4WfCJaU0
4HzEiVhGp7Nc5fBaj4fHWKp2uNQdNGTXxprFPVMSEpUGvc/koEeR7m2WjiJYDtz229UbMLcZ4med
7msttxul+k/ru01V80F4+qrRlR2cbfAy0nObCW3rH2MisjatvuQ8bxwa0sYDkoBkm9i6b3kNRg+6
H6EzQzNHa5UUvoj7TDCDVJ19lerTT+i/9z3eGTwdOjzCdRznnqjWwsm6rMrYzilZ1d1vOp3tSJ6e
8as06WrxZhPOqnU1ja3jLxnBinqkpmlo1hZwm5eM3qsa1Rt5VOltPdq9TH42YZhI/V5Nf2n6dpT9
sIN+jf9+f3finyf+j75ZA9Tn5PP31vh+0AH+ND5s+Ph+4OGJwx07/5E+6op+tRT7gAP3QAf4g836
4qpYANgqnTwDhFFPrAB7U9h+58JC13DhqJ3dvum7VVVUd93ty8dD3p9mY8UMA2oHHYR6SUIDpolO
1gl7z1+z2wwsX8LnM0Z5zZiBsY6GYZh2GgL8/Ef+T9rn7P5jg2ZfsJ9I/7976DgOKHtxQWQWLDDX
pAqowk30qi2KYhfS439CYpkODo1ImtWofB3i+UBg1gA1gADTaB2gA3vbLpM+MPtZV+Hfd2MdrT+P
7Ps7AAfWvrI9vefynUc/i91XuAD9AAOyilKKe319x6G/H5xRT0mATgSEISH8fV5Y/m9PHrfTuI69
/DJ6PxfbQ+2Ko7PGg3mv4wAfvqqa6gqZCZ6gHy+OogT6Z3kD4YCo6hAdLPvKqcPIAH/Trn7kPbAd
34qMz7TeBQfOHYd8e393qK+fF8iqKNyfhIPPnIUrkiQUg2EGmoj9nb4BZFQDmyqg/EfLrFi641qk
dhg0D7T0IVoAD/mABibKKecAHu38MHXWPcdOjxX9vEfUADx4/KgD6wAb1M2fV8/0Xj81YxW3ATsP
2zXTzM/C73WeGgffUeqCsvjd6qns06nzeKC++dRnZfSEkiw1/v94cPFRT67AB9U9Sm4/P8162o9h
q77LPXZmoRpFQlqjArTmADbgAfWCpkPrABoAGcbCKKbgqUEABuaABrRtOXtsMoYJvRI7GZmYp0HO
3qXtt9U28V5eAAPEAHsndz7x+L0v9d5JMGoJ1Ksj92IOmGEgBmW3qnOh4y+O7Syy20S8B3+uAAMg
iop2gA+d7PTty3O40TgiDuopp6Tn9F0kY4olfdsu/XVy8Xdlfe6c7+Hqrbqhi+rdByRTzRaAkJAk
JG0dObYO384AN7eG2VUrMeWry4fCH66fK5wOrbv7ZoY6Y+vqBOWU3bWSp+O2ixgU/ddyr7dI0Uk6
/IpitL+wAGoRkIBMxVITNKBqXq9VMRltNS/fri0PDKGJte/qdmb1AmYZivPCXDosNegAP2KinUJI
9p0BGgj63BDCQK9Z0wd9kTaA8oVoAiBiApaKzXdbm2hLDCg1BRgiKkqDfZRTY/3956tg9P4lFPuC
T8AVMB0ABsAH7AAaAByCpkAGwAeNB0UU+QAGAA7KKWop4agA4SwAYAD19x8Prv1AA9AAeiinoOgT
eJntIPyE7MJN+kxTIL3kmMMxpsaK3BAH1fiYAa64Sm4LQpBosgPf9+wAG8caqtKv9SYTCinsx3Ja
in2ef9/obPAe19AANVgBqVF0nEviWrvb1/RwPUUj8PdA9l9rn2O7PHt+qd7kxI93GsdrPunzY2zX
DfW26MzucTNYxFr5TyxYe/r767AAeVJSikCKKR7qKmdODPwFVUhvd2lFUUc7zzXpIxJIGqrt5wAf
EAGpHW+5YRgj5XlK/a8whvyXm3T+ajOgmh6qEFOPnAB61QhBWEFPH8/Dbv75uAD7sU9VjswDaDHv
AAaXcjip6zltzXRwhBOlQ6Ic5Q4T38uda2T94D9J5HHJ9pmr5SqKoqion/T8MfOvC1aW0tpatLWq
iqKKKgBAJgHdu663M+lw/dcXy9+Midy1VPsGqiqCUvSi61ZBRs0teR4O5bbbc7uXa2229KvzPB5S
JxbbS8drecuVBiggyJ5jbWVgSWnHy9azgRDtbJbUibahIFShiMUFgsj6OehYKPAa2/bg/XD9IAMU
UyAD6AAGwVPAKBU9FAg2Y2M6dCSVI+AADTc8AAaQXzsOHo+/0HO26GFtY6y0RZ0G3VKJylVla1dc
YEmuE7kJ+gAGyABvVJqWFKqG7yP9VJbq49bSxM+3z6HQu3ro19nU9sV7Z+M165MJprbqb4BvwedY
sWLIjBiocpQtoW0LaLB3djS+yI7Ylz9n7fTV5ZfHsX8DhfW2222l/1f6Lbea3lXpMHnLDlhWet/F
qdIoqWWqVS1FFt8zexL6gouw8JDNEffzfWtyvz8/o3euW1uDfEqwbENkjuZmU9MAB2+MAHPcqpnh
qXifowGvMqqKkJEx9wNbDuwiJ8yil662gjpEH3fFNfD3pYw7YH7g2i5n5HdMlVizv9P4pNkgaiSa
bQMDfFQCDdSBTzgB8qNk9JM/3fT0rqN3yvAnqcgCR65vIbV26FnI+ngeLGB3s39X90js6rWZYv9/
6vkwJkmO1xnHX9esdWRS0JMFLZjZioQf14B3X2/8n+Dk57frn1/wTfUv42/T/VTLy1/LPzFyry1V
VVVVtqrbVttpVVG223iVF2ANs0kfv+J455W1rbb+1e1XxbbbattqqhNg6n6elvHidB+GE98SQCTO
ruwphXA3YptpFGjNyZMLQklgkzFZi4Yfq2gQbVypLfF+ijRNrFSm2jUbYa+9BCp/ZAxGyoJFrubr
5oNE4fgoPzJlpJ0x79N3kwaXOXQWKS3ZujD7YFO1oPbjFQnHz5WwlecdbXGams4nCvNdLLrOHd4b
btFDd3QhtfvqVZz123x21pBwkuZW+CotaxeCdcB/LpJJJJSSRRWjfBvxzwAAY3tOm23o5eD0enGw
Lrkrqwpdnme0xyMoEEoDvjD6cyE+Z8ddhFZ4m0x1YN2aQPspeuz6PLuwsMt95UwkaY2RqQd/WppI
t3KkClalaStntjaxecnm9gG62V78749LUUkXlqupwjFUtqCbHhCPlMgIaRSmrCErr2pocnnH3VIp
YF1bopFi6EZxsjHjrot9xcix313QbfZdTLGr32T9HmPxhQ49lftFXPfSG3aADffam7oju4YVN+6O
2jW7n0shAms1WjzfK8eJ05W0wlOByhrjEABtdrvDvUDAAGtsw1ZUWGW6oANDB8M7btKa4EkTVik+
OzKIxkwCEkwmYq7O12rC+umeEoNx7XdN95BFFFMWpF6IanZmZjANzxESQkZAA10YBTfCBP+dxsc7
/xQ/uaGuWDXo+sKnuVTXloOrnTw2BeZk84ySvj07XpCSyvhDUVjG0nrg5SanPtedI+GEK+bFmJ0U
nwgc/Gukz2Fz6VhLGUY0Wjvtlea4njArlspGtztsbHt01swErnGg19l9lNy0q9Fsl02O1qtXVCUN
k50bQgOEq0jSOSaxY4WJE3VNtxfeRLLeKV0Lnk9i2yseDS2atI/vM/3/5v4avr+E/kOAF+s/ff03
7D+g+nAA84AAHOec84ABzgcDnA5znOHOHOc5znOc5znOc5znOc5znOHOc5znDnPyvD4+AAAA5555
4eXivoUklJSTpXcepvM25Ym9c+ceOse2ulnPN0cuiVJfq87xiXlrdcsXas2k0aT7Isc0saTn9ndt
O2Ot7rAuSV/jma+1DOaPHsvjU+s9qjE3i6UtK7O2JpKFQ3guq9rxp1lZ6mUYVaHWFMERBL2jxnSU
YnvMH7945+Gd18Zb1xJz7YjMmiTOU+peqmvv4oxj1x8PfX7bwAZwAbOjZ9Wxr8jAUNLINEQVtUoi
dLFpbtfF0LGkAK0x8JRuNVjFVIxHcGgMDsmEmVoYh62XcKKGpXD39W8yd+gH8E2+e6ufh/HYEMqW
B86aTyfJXt1bXih5UR6E4IfozIsXte41iYrZsRbx2R3/f6OjhxJ0bW19uD41LY0xfYyGRubujASa
H44dogLkbbBxIpAc2oiYScgjbju/NrhdOlgQfa2RC8tkpiUpwdkfYTjMOKufQ8WYr5nvRBtM0QMI
0Yu+t3FaroGXbQ/jwkdwhMhWbfDWWoF04k2YY6UVZiehz+x2iZG36dxTSVtnXdTyTaYITYESpxew
fkotHevUsPH6SCo/7PsnaZ93ho27VsM41MGw6aGtG/B2xXYiKa1UecW4KF7hbIfwUsMYFsJwD803
YsQcFjD7/0Fjaz6/ppGnzXMrK2golUXYn8umN9JzVMRDEEML4Ub59jkZNsNkboiLT4n9UYHhuFuF
Ga59Gsuxa93HxpELW3FXipzmMzT2W5wwezpT26l88R6vB8NMSam9kYHXMfSVNfj57bT2m1ggJvYI
ZrPvfr/fvuSEcZ54HByaWj6oSlqlg2Q3Evjata+IitCv124X+ry8d+h4iX43cSY9+RKb7EBhCJAk
yVBAaYDBdtsR3/l3Hc2xoU2lHm1FpmLbYrRebcsBSohnGDAJSwaKMXLiXcqjzfTZwefP83og+F76
srKvLdEq+ubrFCGUjU0M4wVhd6m+6hhvpSOb5Z40wnPThLUa7SBC+avBkmYNGZ2aeCeNJ2X2MCSa
cTUMK+3yXXCVGwxhc2TtYmxxsz7XuABtKQhdjSIXIb2R3QC3daSEFtt8t768uvInTTbWxtDXZSds
yOvB5JsdBMSpMlZ8vvfB9J423/UXMbwAGTMMx7e2Rr9Go0JSLjfgZ3PKERQk+yBA++cXIkbdz32W
6kog+KB/xcShoGfZ9Iz1mpVf1S5aUZPhw338WGXKGw2I9SOTalfgpu9f3eARGYNXi7MJfahhnQQU
IZlEhyfGwaSYwBdSUQ9nut1cry9HDhL5n1jTf26++Hs3pX2lyZhmLd9UWT1ZvKEU66dNXG6tkWLj
0lmrfy03WyqarmZmZh4bN1II8hr2WJFjew0HsZ3B0Jn18euMavNQXyr2/a0uNaXiBAkbY+Tc2dwS
oFDVHla8eik88EgXvtDZ76dT0sa6p0wginyvrQlyiU00JeZGlKwYqCcBVFWgPVOJRDpeRZnWQA11
i8yZVUc2TMdld0ToRl3PAILzX7lFPoSnttzg8+MFSyePbS6OjMQX9trbeu0ld27dlh130yqxJaWB
lGL3eX2PLz636cHO6DDfZ2R5BLc4M7ukQ4Vx3UzmXt19PQ+/S8mKl+ojbuKgzMQU5IkceuE2uv4C
UTyNbdWHkJu1e3LQMtkLIxvnAoCEZmTlLTTQ6eJ+jODJ309vbsd3H44uwaw6+j7HejJ8XkrLaPM8
OipZZKdSGd5l9NsLKCLFO/YtKNq4UrZCcfv2MzMxxQANeX6TLeiWvqv83TO6nSWl/tzlEs5cnjGz
HJ8YmOb0rS3Kd2chmdDfags7+UMi/uh24MaUsSlQ0MXABn5R+lgDrqSJ09MLOWLXUuUUSh/JbzgS
unn0b5Fml9WB0he5bpt0IyvfB+uUcOtzHfGmxgIrr9ELMb5qhDVLvwjrxLNmKvn3ZZytrljSBbKc
XhR2htAIZuiIPijRGKUhiOGd82Ju+DzylnUgVTpv4Vvmoik6w0mfzwnf3QLlMKzqP6CEYCw3X9u2
5y7CSquuXYW6bKVlGUkqT48hjoAN4vqLXYZkJgDChdrLjjuxKTVIIpCMtRntgatdKzWg6aEORbZC
gSt0YsiDMxfK3CBlvnfIolltr5tIZtvn6eqLMQaDjodJJMJOhjWMk1aWBp+X935Pyuj8akqlpzZX
smyv9xAMb9aXQoKdvYYpmlE+qSjUAUQlEVGP4hOvQ5KF2YiiVbCoPTKMIZg2NRrEmj5qo2nLjrnO
TnM7mjuq7uw53Iybl0YSgo5uT3u9bFXfPXl5c0mXdGZ3Qt06XNXXCVtItsWiKFtC2BYyR+sKQJxV
AuZ0yHfsxxw4cMnBKEPfFN6sODRSkaIImBZbOzPnTFiuCBxIEfQ+8vnBZnDtss9JspHHvlruun0+
tyyfSpW5arbGSYSbhtnucj8LO6DYCZLS/C+vzzA8YNKBw3p/SdHC08PTdTB17ZPPSDnKM02JuTuS
pPt3hB5z1D2kGsU1hxxJUyNUqQOv8M87lr8zjxuvAyuoHMMKUajCYoUHT525a2Y5sWuwAAkFErW0
nRN4eQR1+XGdg7D5B8Cn6T34fNd/Ldwl2wXcYR/HLqpnjHPX5aqy2w65ndvMrtcqeeN+Ey6HJ/L1
NW/Csoqx9UjscwGEgSskUjQ90adEy4JjaaBItP5d2StfmDB+LXGch94Opaz30dfjuHPfWZW7E3xs
Z2bVPVJoo0diZ3TuFmxIr5Bhmbbw+SV/er8f/qeoC+fdAqpU0lv1WW/Pv3Nu6DAPPvPWeOBZmefH
hxcuR8rl2zdss1BNHDfkRqNVwcWBZflIthxZh4nDdxkNBP1dxj5Jxmm5V67Cd0pSorQSQIGkmGnM
4ycINWfq/z89Oh4dAT9UvdDOH37NJ79xEQ8H+CjCeXXnr4Ld6UGK8pX4aPMxPw38rrd2P80fk+4r
M4nGX1jF/XTS4vHrb+KnjjH1TlcCpKiLUrKQqz6XTwe6ZFttlYxvh9t9M7F2eiG/R6tAEySEXfGF
l48pmXCULwAZeaeJYGyULeGoqADWtKeNrgnsczavpWyU6UvUz5biUfSMtVg6b4+Vnv4k670zK+ig
yCpcqbcPDJ8Tu2t8737q8DMK/HTHVhI3uZ4Xl8TXWL7ojW52YrqexfS7u7OkLPExi1tXxO3pIbLR
goIGa79Egg2oP5UO06b27ks7ktVq3a3/qtPWa7zA9J0ZcNK6ZHadnQzTuVAu+WzXZT5QbJa2lWh+
nOEub6O/Uu9Fm1y3zu/te17omOts4U22ZRjCChqzluyhO8Uts4wzBJeSdu0aZXJhiljhkcK4ItrO
xM231VLFyE0hDTYIohB7tWb/BjwzMIwhlXhcf1R6zU+GUyHp9UriOa03qyKqZV7GGYDMVCbiPy+d
2SRH94r5Vqn1j4ktY0ZTWKYKtYYOy8lOKfJJUi4gVgc4XlDRAmpdQ7Y62evye7OeZ1d/OzIaJolf
3wuEzMaDPojKe1aNtfmwPM68HO0kdo5722dazc236h6qUqCIQk+rALDGaKmWQqDMcl+3x22Pr7vH
RvwbeGu87+VG+SWz9u3x5bpWX9Fa7sIF2L2ZXfebZlGWDXdsPQXwtPbLZjrlC+7nXNWjmzhn0Yf0
8UduOokqUR3AwB/1toaZnm1v+6lsulF7eMS+vy/Zr0unpusvzFu+V3WWrLZtxvHaK1xL/1nZsmQ+
xF61YFZg4IIwRoWupkhnX22lCMspx/MpfR2fh/Dc8gAbwtyusIOyvndn3ttqc5E0hfPYStnsJ8EX
KTRE3QqIAG+9P5IszMxG/IvPfGUvwXl9HmvmsI0iVd0UdqpftfsjMt/m2yOlenG/0YVs5Py+X3Zt
nTA+dAfSmcRETIPub7iCcm46lEpdlsDk+N0engqxE30Djgkzu7Q9E4IK+Tm+706zaaG/hO3GP3hm
YGs2RdvGnrgYE93SD4IPFeazEPhWuLAUYTDDSRbGNNMWtFqNoyVRrSSVMLG1GqZRiUlDPX3EL8hI
erHbD0U2IAiHjbXluEG04sMkOOIVPAwkEGCkPQTnleQ4AWdCFTs7sMIiCkQ/vMK0IHMuidEvKBby
G8GSRJ06HTYimZ6QYsQEkEmYcdgDSeryco48O19krLqyiRwboXTLGGliKEZGEMgpxaBNTsXc9IoZ
i5j7E6bsopeAjJAnso4xuRECIb9pocvWTz9exezxiebbNuey7NT9Bon5ZJhhB2hTp2dwU0Hh5tsZ
NIEYPAcX3KKSxeymyFjX7B/jpI5E63Ul2QeD3WRjbZMk3mhIj9zcPsrI8Vh+DV9PORlG6hU1YTwl
5o7223y17LjXkeXuifmXb3Z6o3X5uurrMPuLKwWVsDNRTfgmdFIhz7L5ok7HL+CKFSmanqbj+PmO
86uqmfnZ+Bg6CGY8A+/SU/YJtp6h2nnpqxv1y9Pexj5Ft2YbsLD10USXzT3EH8o935M7Prl9eUam
yV20hpoUnrddkOHCy+dFobe6r8bWaMM+Eg0CEaFCOy+0XPJPGEeLnQH4f2dpG8yIUZ+w9tQrDyMx
x1gJFkHkjJCGNxkc4kbnbVXHt1vEx0lvV2+ss8d9C2bOIHMz9nX4d09Yiapvy2as/KqDdDavbdfs
e2EtV2COMZKOH4f5rpYFcWOhedA6US8d9ajBSd1AkO8lhJq7+NuzpRrVWq9gtpfujtjyztVbrZZU
qNYmSFKGMmQ1pbT5VKXVxHGg0eIaQwBpM6UsjbZiAA1+ek6sNVqOOBVc67INssDbD4fRmoMW3uAO
EWAGgp0CTcN10ZyGwRj5YkBJkOhxJkITNfSs5a9sT0yljw15QbSeb1mmxg0gW+BAgR2xhRsBCiPF
RMYehcMhxGM4TgVlbvd3pOK+u+DVPMsaSzLIsHRqdqxeCNrIsrV9IWSPAyOdLdHbZfZEqOLg/5Ew
oDOd/OtmIAm6tLaVx5nt+2qL6sIogjIs7sKIFRQFpxA+iXaOZjMN0kiIWITMyaYEGxbx7guhLnf4
qXDTPD16iBiSLzOHEPvoeMgEjOz8p2j7DIawTFGrr6+mJEUUaIJ87Ggmxd0l4Lu8In1xHZpL0fPZ
ycm5estcNXkxnx8XdamUNmUUX7NODYcr53aywt8/kmA21sFXWqZ3WxtfQtLWaydLW+Z/6fGVuuwK
9+tQgQ1NAtyqADYxO6yymdOciiEgke+HjsH5M9m1yhd0cPVG/NXc0V50d8e7q7v67WGu7ALjdpZF
xvv9M5EgNtm23wQ1E+677AiECdhSIGCqDSiAToX5wraccN7ADbVbPIcha3U4oEPtz9YHn6cBshTA
RESkFERjmGAT8qE49/XoVVfXsHN8J3VXu5JqQHngRXibTM5FUtpjG4iNfToqt8YDCiCIN/hxmuN+
7PesVNVCBPqgYFm7BWwSEBui1xO32pJJJJVhZfOquUJP145VmdNfp65Q4AAzgA01v8ecyLn0QbJO
jWJlmFlD3YwgsUlN0TZlOWMeC43vx5f6DcogKlxJt3moD5QK77sEuOnFdL0ciDMw6lGSR6I5icpO
q6OmCJlbOT2uug5l6p9Fz1rk8LueGRi8s/HDVO8oI0LawCo92GdlNVgzJBEbIlErvUWJM6yVwkaJ
8F5eH5YGdhvR/5ZSK/tvzuz78NOlmGY0uu0BMh5wab4SJoSaavg9GSIvRBJFlxXIp8qyse+pfXYs
89WdCVjZms9ED0wK3yJcC5Wi2dfD1X/rg+7QT25XBcDYvlgKItrZKA7SaLffg9svxRHzcKY8tzJd
98NIwVxaVEmCG9oxgIs6fHCcttH/Q/CcRZEYEzdCSZhmFE6fTiXwDwq+nGeManvoACGuQ+6dVnXK
vN8n1PPWS/i9W6mIIg4njRiRxh/iz21U4ZpZzc3WsHd4ABpG2ctclMnR55P1I83bjerr8qELczhH
GLwOiJrfxOW/ZjQ+bZPvQkQuq4MVsyQHEKtokfk3xBmj5QYAGgWgTGk+oklRsoYoBDcjc2JOl9sd
Mal29pkR3eT/VwGPz3dU6P07T7m3Y5nTY5pXMvv7QrpLPF6Rkj83O/lzfKuJX+TK6u84InadWvG8
ng9cCypsekh5KeCwyrKcYdf5L4y5wtL7ECOodB7FUZVPGHdhM7udKTxg4O+uvdANniTut6ZO882H
goyYo+2X0irvZcV2/R79Msfkycbxw5+bQsuno8fSvh++LfRH8f5MGPrJS+/TVi6jxIOZphmYfSr8
NXaADeEduRVEwjwlbM3K9YyABnLzW9J5QLOWHGb0V3QGKGGA+lAD4XFipi7elErRyVuuBj5bnyKV
nfsliRAl4xwjOWXSF421H1COh3d3eB8mNMrkerVCp+FVXy/GBYosvK3U35erX0X13PFuVtzFs+01
bZQONodFvmyZUbbPAILWnajhYefKXTOzzWeeFCkdqhGGUMKzNREYabLbI8kPemq0rbN9fGlIJYHf
Gw2bp01ysGOvpcksqekpE4oL0El1KzZKeqUUbdnO6PTbwjAtu012M1+FJ+R6iLc3oYyvkcOmyRzW
S+XEtZhmJPEAGyz0133x1T5O04B+xJG9RZhMyZEXYwh1sGHonjtu3WQ2bGjM0h58cjhaxecJBCA5
1HcR5mwlMYMceg2aRzlc5tvajQeVgXcB+JyJQgOOTi8o9u4wIZuSuqdcIGfDv25bVw2K1XbyLX12
E+fJpBI+kFTx0YMtNStgWbMY5CnPPz0VuDW3WVZhmLdpsO0owFjbbaaZvbGMdX8PD3dHMUU339uH
y5rSd296TAdecd2A3ABpVSvvbU7eeqgx3hy9VlkCJ+uFS05cCY5EBrrEoAmIM0UOn3G7KJFqXbLT
n8F9ELj6ZV0THafoYdCSR2epL/IviF3jEWvCI+bGMIb0jR159MvNUbl2cJeFNTtj1W+NpUBRTaPX
A9qAYUjQ4GNnzcrfu6vMKZjGH9hDLC1quihJXs96+uMV2x169nbEoV7XbX1SYY37UcbMbDhTX0J0
CZmmkC1ryqPW9dfLCo+zXyaWuVjvgwD6vhcWHO2H1+la5bUcnIqQkeRl3CSiERxTOfOK6TUkXBjC
6m2j46YK2RKRDOJVVro2kOg1m3M15au3zXW4435Wah7dIYB58kril1gtiaC1vqqWryWEjCmTm6rz
W6vkpezZaS7975nsrPNAA3Q26YMNRgVMyv3eu36yu0lNNMlea4EJWmTtZnwYgRECZCaMoTHV685G
MD+jfv2VW6OprjdGikT42ZKWcpXtiMyWNt3fjpAhhaocER20q2UNxZWyNmi4aao2mPRi1p2q+E9t
nyZRbayML67bS5Mww0p4trvjNuuijbGySsW5BemYiVuhf0tNVVdsL2ahz/p80Oroeiw6NTc5bzwx
IaNQTFUJhIRZVmdoAQqAUIDz+yzBy2o7TGpns8wHOCtd29qTw8NbLpKQPY6UpNHezDDVfq6dqn0U
U6bH8scG70x5t9ovONqpC/AsJgPZnbWPFoPh7dAfRfbAcEpcfNGw9lbWPDgVbry8PbZp6IHoh2w3
N8hVzrc8NHHv56buPSle3QSqRlIgqYJ8GIrYmsB0zwR1wMWSdMFHmKTBnp80LMP8Gq7WahELO1nB
78yImCJf03tmGJtB0414aNpnWx4F2KfjOzmDlKwREzKs/sU71F8C2d27HkaTfmJ4AREQfULK4pPH
RpGDJ1ciqixO6DBck1CTjaHghty290d2/pndTatV+vna1DFBZVwRwilHfDri85eXw116b3q5FRxy
lbxhxIKym6zfkQ6ls7dVBItg7EGSTNdIgh4DZugs4Wwk35rHjoLOPVb8hfZ8NKQi7oQTZxAO2PPH
dIJyU7CzOnzaX3sfVabJBkP5cLXx9z3d31b041982C5ZG3b5+9hmx6JIRcilVsaDENm6Gwmd/sjj
wrVKE3A5tscH2RSzxiqWayhHjv7us2GdCsnklDfM2L9Pa15wesLL4azlKUYPtltIZpoOSNvqLjcT
oW5xwepsFITXWKBZP0F+2wnyf2bWLxELwhzvgoSNkOGZx44EinCzyT20LJcLN1seOlmJZKbljmYK
yx7jxwhfLMwV5phtehupUg2PcV4mmmKcni0/LG4sv2Xwipa7cZHGPXCXsKgVNXPs083fPDy8nbpJ
41yE93O+EFJKfnzli2ImzZ0cHQQkUzHe85q6+zc9eYnTeimOLCg5zAYJjuexw/JHJ/ih5OMUcgh+
dk8vKhWSZCosGlhRAG2oKLFph8ZyDTPhfwSP0ye2D6pXnrgc0iKoPkhUCxIKHjxsx4MsQw0RcgId
WYR68P+YdeF8WPlQ0Z4lLI1AgTBkFrXcm/n9eO4i9Ds6RD1gy/1t/T+t+gRfcxrEGL3RR2Wu98Iy
l96btMZo24qkAsuOtbxa+a7QFsJRWuTYREHbP1HF1esVzhfj755ABogA1DpcfwWJBPO10lPU51dn
5peTAGZiphqutim2CFLZBiAmXX0whEknGr21U2aPNFH6aFLRUzT67NVpFNarZlvI8zZ274yre9iz
VxGV5hDHCgwUDB3k7JFP0bYLJ8crywPL0Yw3G+pifHnGclbF2g1vVqReWbqSM8SrWfBvgRKM2Cm1
IIyiJM39PnjP3JLGB3oBasjjNPLn+tGwsbTXxLJ6N35JOFyYKIMNTROUdiilDU7BRDA+WgVUesL+
6BBMWJh/JcA3wOuHCAasDq2ADEbI4jPuYLkY/SZCzICUUntP3cdx4dIHIOHhwch5EnOmJXHGdJA5
nSYckxUdI32ys2Dkg0gUR0NbsTTAGLXFqYvezv+Lbb8v46P6f9dHymwuPPAt6ceuFa1o0FtNpZ+2
6eGlsLvRdU5zaa7BG6y6wfu1whl6Ln1dr120C/EbbTHGeVpp5X+/vZsTE5bcos1CFxpu1jrXY5h9
G201lo63Jt93wTRYQm1dsKwpYT7/8fJP6u7zMaufp4b8eG6/LBkwm2oAQj8IgCi1JxO7iWowrnd1
xJP4ndPcdoKRdCEEyYu4uwMjyeyZF5039XMhtirrsNk8i0jaj7oHeDsJwagmCqYxkJA4REB1+WRw
NUoGCc3Dfl5al49MwMUBkgMyFYslKh74myRRpsVDMUl5l7LZwND3n2v4qj3XrNCEkhJqDyIgwgzA
OBKwkvHnmLqhwXMMdOIBqFI23EDbbyHw+E3bQpUtZJEolHpgUBMlIRD9F9+vnavVrfSz2MOXmmpa
qoog+/s+ODidKVqiMntLTIPdLnaVFi+PSzKrElSudEKUkkE5Yo9/VqCoS+Muft5mvr+7iuLT+ZHz
Gyg4/1MZ+V37/ccTGBeVOCghCEgWGtE6fV9VGdehoi+AWXv4anLJxdHZeOlDgXzUFFVI6N6RoZ6z
7w8T1QzqNcZtMGxkkVeEW+FA+ic9CskF/fg36+WpYYISENCmcSBSb4pvEpLJBO2HKNhyKPCzb6ff
ZnpXDfF3oeHVXdyYHDrbAOHYXx0MGZKFjAxLTj0EawfLfSVICr9nrh2zoMx2lOxQx/GfsVq7DUuM
ppIQUzV2AEWq1l1/Oipj6fGD3488Ug0ogiClpYKXdvTNc/TOzbfj+ToV91lKK17s1UUqfRZvtKw3
aQ0TEgTMMwgZmLUJnHU1q54Yfn9MqceTOiUp+PrWMa8OTn0/xZ7otYxL1bsXOkAETXbI6K7ZzI8H
uoAIuHEREQX37HtetGuXrbeGTNUuOrtFVrjDz7Z3VWRAzELcPC4voo3QfbYVLTVjOtgsrByhST47
L8rbNo9nCpi7UPELfbnPF73FRO5eNTvwXmlwv0rBvePG+XxGuazPhE8Pj+uIgIMoKW9prSstVuaF
72bm9hndQzqZkMD6bqY2mE7M4mdz69tkNNjUvwnmX6gAG6NeJdhrUQZmI44anuwet1DGU+UIeNLI
WhTfDUmfg5CfYtI0d2K9eUZZ9WkK8DdqqUv0uuEF8xmdIpB/HLzx6Wopq5kofUQQIS1LWuD1bUZN
rI19F+8EblU/yiuEiaTAvBqD5CAMIIQQdjUCHRSatzM0KcXTpPCRubnM/NbSpFqJlnbl+fd8qgvp
/3s+eirWU1EqOet9Cormc1qXpAiY/BFS790HpSDDMxXHltjdR5itEVbHGGJQJfTpxf+Oyu9RMW8o
QpQrJFUpVSB/U75+GqVYJU+2HXOK9uJ7LrjhbWONeN7ozBnYjdTKURQ2/HW/BuCEvc7BWo3grYO1
3a8VkkpTr1Cb58nMaXiZosMxujJZScTP5pX0nOH4Mdsis+WnR18Mmz73HMXPt81AxUljuwwN19UZ
LH8NgcYJ8mkzc/ASUhYBTTQBtJBnb2cfkJ6M695+V2u07YGZEey0hDajmPthg5C+PteBUcoMIEgE
LhuHlf68em6t7WCQJkmSZkgxH4Jjqc65mEH5G6aM9iTUB7d65r3+hzdPH8BeKLP24ggoOQDpkwxR
EA6AoHMVigCokkhnTQ8JZRuhYsMXviOQIjaTjDVY62rDDVwsgmv1mQxji5eJyEJTx949+psWDgDT
kavPWMJKVpojpnLosnCZCk83/tVni6WikX70X90/3Yt2sS3gzWoBsx7c5UmgAZE8XfOnhSNdklMi
RyTDMxciKxRRdWFqZQ65O3RTYdtjWj/RQv/tynD4Y40CZv4O2sbAgejMr1FJHYj3VGbfbGdm605G
PCVKCRTk9wYsmPBTS8zxRjHyLIscdkw7pg8qMezjnqzrv0ugY4iHR73dhJCBJhHmdxCG2JenKOOY
bZl8Dtpv06Ug39s5nRfstI7EV872UuLMVv77pZWPV1UwZ6nCI+2oxsygOHoccd3d+MGH/dYa2iRB
IDI3fTjkOQlJNAdIiqsU3mH+Pv9wKPgg6EYBDMNkxBzSSGE4AIqMpwo/nvr1RZ7ellrexgfb0HEW
CgrOHJNognUx6HOjXqdaaQMbaTERZGMafjtlukIoCgemshVSI6JB+doidWx8BZUUWoiKBHt7vTTs
h0g5YMwwTScSqX4re2bUfTUHOB0jjuzOgkTdSQ3UuJBwiHK1QwUNl3H9HbAFxhlk5lwSJihQgcsJ
RCS9rp/H5/6PpK+ntn8J47u/2jzCzJBJTRFSEDCxBGMpDWpm1UWmKao0aOcxZIBimJiIJT1ZOwo8
Uyvkbju0MhFCmGnRhREWHzP5MFOi0fceq+ARQkKxHOjnbF26053/nLMKtqnFoabPwWkObdsjZSR6
i4fxv/q69n77ZF2fRkdcIX290R+XKVjcp2Su06rPH+J6453qi5173O3ovEosx9er3+OUbG0kztdI
esx9ts+nLCwxJcHwXS/XAdX2PWuMMoO2zyMw3ATMkJL3xLI+JfefG43UckRucvNVnqg8WrB/KqrT
l5bTCwabJTuHhu2ndlrqUcAGe3OdI1RvTMzMddHznCrzUe+66xAA3CFSC5u+0uc1A1G8Ot1fA8tH
kJhOCE+L39NMcIR1YJQcI7HlHTOO+9yQI8FvRNylzx1uueyBbZaaaZ6gYui0Hd+cDCDsNsFNM6dj
QoSmnhc9vc5cvjzM2/pXgK4jXD44bUqIIY2TIHTKyz+smE8mJwUoXIYW1yLS10rILq/JlsstrHKC
DMjtwIQsw4X2ynG3qCxCEyYEwkxIQzVCBQg+cKGVCHmwSFkszfFVytkraKio5rUmLmQxhRcJWgcg
gloQSJQe5XlWxr7ty3Pg15G25dKgNodQgULSOQZLqDSSNLuIKPMuQRCUhmNJYWKu62cvbSnpKS1S
m90UiyyLZ23I7NX358ozvukpUleXKCnMlB2vd8bXgXk3xVfq3zi1sY0ThRa8HuHfZ+F2Xj0xXXbF
T6zEyvN9iVQ+yOZ16Ge328N/APBADtitwAsHh33xg4YCYBvpW5FJV+y3KLJaisbfJ3vebXx04ws1
SMQ6ci0D+Q+2LpYfsP7P83Uh/SfBTZdR+LrwZg77lblQP55Kf6Lq6J5rdTLBxqN+6ydis9nt4dTu
4sTPA1Cbe8PhRvcucfr2yZsGwF8LPXLIxidnm/eqmvbcYi9WH0emsQk/PE9M0xwHfUjZL3/PxoyX
mNzzsPjuPPZINY51eh26yg1kCegHHjXDXrr8f0cttu3xPtSADF2qoPx4te7nlTynpQmxgJE1P2eV
+jfLA+kf0P36X1k/KckXs+kz3YA/PP9gXEH79RBjJkZJ0MVRfm9aWzqIKitX03b+DuWNaKtGNYrY
qNoN/n3u1t4+dtq5RbFM1qSxFRWS82uR/Zlt0qTRrz/LSe/rfPbW9lY0VhX9dfoX2oAzPmF6O2IT
F99fMfc4DfbX5T7ldD0SeHLow/p5fP9krfs+08Lv5/k77z6+Q6lx3MNLduu+22Mcv4Rw+OqEuEv3
TJ6O33on21Nk5GuuBP7c6FyNWpxbi3byW4br/JsYyKuazg2847NRTPT9aIXbXlvwZ676b4UN28Ub
mwaoZLig/mxaMkVZN86J3+7XZ+HCZCh80J/knDWeBRfQhr0MGSPxIY2JvpfvgzMRQN83H4d/X2eo
w3HqdY9mwh/8O3akjBu1l3b/nV/RdGRHSjy81lYbkiZnlZ6oXf45dPhq+toZS6nwy7ZWNJ13OMSZ
HSgxgMen8CoJdtO7xNwhH5CA49dviXE+Ebtf8LfIwzA0Z6H4Vuh6fyey47X9XYfhXNTevduw47NP
R+jpY/8My+wG5XwGgMJrLLesi3edwdofzhIP6A7A/cH9ge5vNc7f7BocmrJP+VN6GP5mht/X65am
6ZH329sn/FD1cR+nQchywgP3XpJase3Yd4j6CHstzw1Dbz5W8/dlYbULZBu7QXC1ZXHShKPr9H8I
RlkQP209fti67J+pHoN+2lDYsko6YRN5T7owu1WN08fydfl3swzH5W2kj9Gk/E/3Lv4OQfzKCZPp
rNEF7sR+6DRMMD98BqTVB/P7j1/dp8fDV5mYNzejrPHtLraH4f0FWVP4chpri9i+CH6Fr9Psk0+A
z+wxhLHwzPE2cCkQ1KYYMYGTyjWlNrmulRdd2uyKTfUyWL/BXT9pfs3e9CLJYY0aZ9MLMKllmaqK
uCwtXxv5P59zUeql48/1UP3eY9Pjuh8rAcu/lAn04dXkrTqsunEzl946i7cKwHOOvfiqreNsP9nD
3S2ddGlQ/ivVgSKdNv+Fi9hmJj+NuRIZUb9QxBvxWOSTfm3Ofo+xxrEbrNe5ovu4LSF32ZB6SQ++
Rd/Klfq/o6rPT83ffwmxH2w7Yp14ygGh/c4NBMBvQVXcJiMHb+iDxXH97gFUASXwujdGy2kJyckp
Ti0+3Hl1N++V82Kt9vjBdFLIT/C9fPExQJBkGFntgBJJkU9C8yfHj3ukrc4d6rbsgWRhAl1jE+oh
h+uzD0yuuwwjcWobIxnJ4Jfq3yqv6OVdtWedrmYIuYd065mt2eFRQM6xqbCdjyZ6NZwt/bSNmdl9
k2we7LUCRdUO/Q2AwnKAbahvdQPTVTV+EOovjlxBITWqCRGYKwwsqtosTTOAoZKC5XmcL8ghnwHk
UJffGXdCD4zKEku6CNc5mC1H5tFThKWR1/JcwlxwEzxLIkfGu+KxYIBLCUCxLlQ4AW/xXkLLQgSW
Wfz6H7LOEeq1FFIZPukSK5/dI4j+cUn5h0MCin6GFYshoh/hQT+Y/vlE3/r/rKTByJBv+5L/7+8f
NK5kyZFyZtrF3jq01eTuJcPPNvX4WU8T6v3kvZZ4v1Ye5STprrnijKbtKUL4+tEUs+6BV4+Im938
wS/IwqnvGKT8AwyqApSIIWlpaTTaP3f4PXs2otKjN/Yckh+y3f7E/p7Wv65+3zsWoTbpSl/5NQ4B
35ssB5QqJPWUn/l/6nA0kQUOMTAW8jxIbTpnuApYID6/EPLY70mh6yBRIAyls8vPzwW0FD99nHcs
Q+7/8mAb74+FZgEzBUBxdIRdKxPA0BP62AW2s8NVPFsBkK/+WpARqh7uFr73clCbQib3vXkz83ND
KLUKyd0J5pxk4CIMKnlusSUQg+CkDpNjlnXKcpOkDMFFizxSh4EKznVqIs1iRi9oFZMwsEOi+FNW
dMDCiTR6ZcXWzMFj3W0WSCIHLXQ5ZooiwREw0RnC1krzrhzkCX9TSIIQiMkBQ8u6EhoyKskdaeO7
p4tS2iixk5bBQKPbNc4MlYxpShXpeX3uCIqEghmCv9/9otP/af+P/r//a44E64a/8hCoANIAJDgr
BIkiY/u4gXUBZ/MyVBYdQvvcuadDeWY1VLaTNilRa9prosyYzzO8715e6CUrERY620Kto2jFJbSW
0RBa8EYQhwiEKgQMcJkGRKY4SjELEEREBDaEdlU1eVlkmTooiH9ayq+eh2k0w8LEsDICfXIgYwqT
ApkqNIq4wCNCrAQQQKrSCKnSBQPjpMsP6LD7fz/8wfuinbwv5hSe0srD+IHVShohhKAhB9ZD0x/9
yCmwoX/cb75J15BiVCHbIeXl5JbTRRtjc6UT3tulte4U8vx0fjBuOHJrSaIDIEtNBQMJhIb/IJ66
cQUnOcuXPEbGnEMZ485OI84vLycR4OvOTiKYSiEUwcHLLA9+wxQkFe3FrIaCCfhyXSV5ctjEkm5x
AgqIKgs74Q6k7yvLwnEeQfGhk+lfhfPmvr5SpbrXb8L6KSAAAlNDfdxZ9S7RQUM0+RpDpqqJRPGL
IiLJrZbRjBRRoMZZYwoIMb3gWGg4paP+aYr0wbSw4bA//MKaDMzzWwsxbI/+zjNGy8YT9y2erDtB
GYZT6AbQT29U0yB4XTPAm/DBwxklWFK8mIxJRsaFtwfSRF/1frPX/vwgn7YLqQZOP2Qqc66Ga/ZW
9r6+VBsBwCuWphDhAMQUKSAJb56xGQQ0YUNPwsg90YEkR4S+sgNpfSAkiC2MmvPqHCV7wiBWSUQE
m/zc+qljF0wyDmStrIWkpiHsGxhMch1Ccv+rBekDyVUGxDQAZJklA2YrVHmhAbQU2jnS5FAf8dxM
85Ni4/XjlrpY9p5P3Ygb/3LAWISgIuv9/WkOpNhh8L+jtnpBxw4VQhVOGfQK7nmhuiinKBjq/+3w
r90HUXU4OnjBQ+bhT0PKybM6D1LUZuyRb7hcLYEcfmXugGpDYiEj4Qca1F974KjvJEzElq0ynJQk
7fNBxfM8oXUbExljKg6fSUGvRFMx24xga7LYBng15BCPYrYuk21TV+veQA1mx1Zr1Qy0m1USudji
KPO3SLOjNXLLDVAjBwvRBHkftg1jXdW/wCTS6Xs6jpgzHHjv7j23l6PkikW/IjXccu8cjx57SOiJ
CZsuuENDVXPGpxpZ5wz2XK/R9ZRXxn6NVcdJrDcaG4jCM3j91bRGBGiiWAgoKGmbZzKptjeXdfH3
6dkLGSZGxMetldA2olWXnW4TMAEr08DyodIYh7jSGCtx192BPXPw8K3BWIoeYnlPj6Ga+7gNc6Xi
XJ0j32YJ15Ec5PzUvW/tUhSfqQdaimxT4vdycjoYdRA6JivI8X5UczYsNltq0l9LLmOJGKxtL86f
reenKG86UzYo38+Os8NlYwK09/y4fhY4O7u1eFkKqD+eWUooDldbfEmm0TG1N7/OnoC8jmUGdAnf
ksYlYeSEIPu8sDBSE0jJzYoWzujDU+S/D8j0pVo9k1T69qipfdSoRvr2qJ6dIsUJRtWLuUlrp5YR
8HHRn2vFarXfoTgssXhlk91hjCE3KTxRmi1Khx0vpr2RJbf0Sz7LcNEfi7ZJRjqXKjqaUe198UTT
YkrVIp8YaXlfLB8Z1IPa2iV153fybo6hxiraxVUL82wYpf3KeSQRi/b2SynK36oDZx1qa2qXNwrF
yifHfCnY+xFqecqRp5I64gERMlN36+16ork42w1VbSJLB8DEoxARHbRJ5UQ9M9PmMgpFI3I8ZdL8
SGXx3rfGMUgUy5Ofv81SVMa8o6WVCKd+jpQhRsdqTujBLENifaRrGPWTPGOMWbUNo7XVCaiOkGEY
UJUIgdtpJIn8fp6Vjlx3zqXKmEm6LHME02kssXJwnCU3Za3t5vdpcUrTGJsi65LTvg6+XF/F8J9T
zxMIJv34mcrH3eGaX6etfRXpxxLWiV2tapQSKPJiCxWu2SKmmWUpapsYZRJEMqemk/HCxlCUHwXl
WOj0zoqQiWvOBOL5QJcfewa9OWjBRVsw8YNfHvnxnZ0TNvCmqNrIxfosz43R4p46W4qMbOFl5ree
OGtEtvKNvxzw3W9rdlCpHryMZXrO2FFkWbvQAA0vT49X68MNx8WZoCQgF+pxdlZ4DXEwWWO1j030
YDRMkNQDQkSJQny5gp98IYd36XgoP0n/T6ePDo0hWxrBPVtIe2MiB9z2eFYdv/k7pmMvwYUuqind
1TsrhKGR6SZrLuqYoRKlySa3MIUi0qE7mZHjlZS/GBTO+2BuPNDMgZSdeZevt1MdanKvYzOmYBjv
uP4nmLKeRhxkIbs+AVDyN/Af4u4N+dNcZLo4af6v8/OS0qLQVMkkUjJBEsMUQBIFIswCUjSq0j3F
/1QBsJAYoPRfn/l7C/mlGX1eJixdoEJ/uLuLCyNSiv/wkA/z6LDJxj/XAZIKuCCpmCHqhrE0f8qS
9X9+8Mf90zFSyj/N/Vsp7ICh4/0BFf8dKIdIR6Uf6M3v3hf9dbl8T++UHLgUie2I/FD/CA7RDSO0
kqKfGhseF0iGiuJK6vKaPHdwQz17yxvVvjkmSOmunfNWEFregcbJGkNDFB/uEjT83B7Ph23E2hTl
wMSjJ6K886AyL2zj3wt44CdIEMt98DlXeIJAJkgkYgSCCFIaKKRN9HHhHfoyQi4FBIpIKOcYyTHp
xsljNtYxtoIkmMcEWIUlgdUQS+0wDBJoOkiJlowxKCKn/X24O82Y8owXrDdYw4K85mMxaiTXUsmS
KDWtsKb/h/6p0ee69esVT3QHdTO9keNqdSnz2yPKHF5QOJM/VbD497EO3M+hkD6UdSnSAMJDrCGr
mVOYRDpG3ngCB8Bhbz4HFWQ5KCFnS+2n/VP9f/uoJrHe6wh4Q9p9svfHwgNQ1wyPyeOeOWHluF15
hHd8vF/qPs7IdPtOqDNe5pFYNCxCMp/fP9WMHAgkQjSUKhZRYLUOvZoaILAikWQ5QlIPELdSJZXr
LRekA6uuh4nVdhAg3JCovhFEwiogQ2l2kToxb3Uo8dh42x7oeU9RFPCKKKdJ9Lwgh256QUEQCxJM
URQvqp4qLcJAVR86JUiNdPKuELKxMNtUuZ9vcqyGmaSjCFpHmEd1CZfZAazlDF089LtdppBezShH
hANYgGsUOQSc+fGqGuMwaTvSNTq4opNsMTygf/Iufpu5dHO26gNnucY1I0ENis0AJAIq7AaQ2j4O
jKRq5dvCK3AxE1i2ZpA2hziOIGsThJVKGxBMa0YCHbDPPjYgcIO3Gh2zpLVkXnGiD4zSeMUNp6ZM
8umE7yFCuIfGzwMX3TvIeydgg75Ny85ClwoJW9vwojC4TRD5el5kYUjlcejk1c8KEXEXWGM0uNKA
5Q1OcZmwNuyjwhnw8PDB5PufRLBLaVn0l9RxEN57J3zlxxAWCEg7iIgko0ao55581VVVHqojkUZX
C8dUSwDnNKoeubekoTftoE3m0D6otwFxDEjKD2Xv4uQ9kZktTEXwwyIGqvEko2olE6UmkZAEq6CR
QzNIBnupcGP+W2mVTriJuQGuNJjahcQLgcvHjZki7EXzxQNCjjtxxQEOXZI9oO1MIyogGRLPAiA5
h5kczKTfwx578KF2IPjd23XjZMg7KLpgcryjv2qQXhxBR0OD5KOJGNnExB3lgByR2QRvx1ZhLpsU
Y75khkwwM1CEzR4Y4UzwkL32djEOhB7ZNQh3Q8wjxLq63N6o3GkMxRcRaAvMgyBDvk6wh6QCG0f7
yMg3hMJHvjiAOVEFKCLQkNQRD+fr89fOva8nv9/NUJVVTS2220Z6YAeuHw+Hyml846SnmQTRBe4r
WWim2RXbs5D3KIysC0o5Ri13kgLptYFi4lM2zJDIfCTeyOJ3ti9uC2InaFPYSHfH2XfbweEqp6d2
Cd0Z3PU0FU8XQmSIjUrpTthoxDpFTzwTBBMExFR9IQT/mMIP4fvfp/T7qT4yfgv+O3P+FbBsfT+v
AXAhs7tM2/z04gSVCvz8rnLEOMJ+/TexRu/UbRDBrT8uUfs/b8+RaXfw/a2F+l36Yswtf6ioUS1F
FSCh+NoIZ/zJIJxw+aP9Y9F19J168C3j/Tuz2EwfPfe1xw1TmywMO1nI3fx/C871kfzdiDFtYzAz
C/EIMA/SBH8Lberw08Ed1n7Mj+/Vzpo7V5myHd/N/X0xm3eX+D38AwdwpGmGFb4CTeW6RuuPLnXP
BVXzPal+pQT1EBRTA/fhB8+g+htdCcWG6zJnZgmkCQCTGq1gp2T7J7v8O7XU/dtw4kIQSJEOnKTR
vEgyJ+cgyAYOVBYd/9k+Otjl2IPLc5ck9vIVKZCIidiROYb8D6upK4dVUEz65WJqooIBt/Uj6uDR
ztiJqv/O50ibMoykKfRK6bI5/3gGDu4fw//YG8FJFUYBEUDaoigfxooHOvfK+f4l/2f++UAa9Z+i
39vyOqoP+yP9sQB/yAAZ6oUAD+CrIwnMLaYN8z5en5VfnPwfWbj+QFGCaf8TRYPmHukhGJ354ABH
+aH+dX6vB/jDVYUmKB9G46Ty+v/c39fdURYTeq8ED8jBAInY9Y/fH/i+ZDRQcD2itJQQ2ABiDtBU
1LP9QEXzcMqHj6Sj3AA4EADDiiHdrKzTiKwBKYpAGnhAB+QQORgMAB+Kn9yA6jiE6oUHNotCLyf8
h4igezt5+sCoWqp9CGAzoTIdKKRhibAA46YIAGRQCsd7ahnDmdAAYPDuWeuBnCYPYeWQordE0/8E
eQISHGFDjyDdZPMAA0GVAeEOJ/+2BeKvhACB7vdc+5xDMWdHt7wtRVIcROB6F7OpdUAcIAPW6myA
A9AxqAQoBAM88WY0s0sWWpQltllUmWWaapgIx4rwQB89K+YgmoAPmAAeIKYB0APE7eh0DYAHQNgU
QoXwLeaRRGA56g4AAxXloeSin/wnowOnYPP3oznRipSESt6oPE+fc+k9W+4eneFGHYweQGxS4mKw
UW5etRAt7WeOmO7rmAwdyVmBBZYG2L4IBZwBsKR4geLreV4CXilAXmAnRegAML4g+0apifyezA/J
v/E90k5IfK+MCqLOkMx1taIUS1VHpxdtQMSQsrz7W4cGbH0IgWeHiKKf/P4AA3stDsL7UyEeO9mv
nYMZVGnVAqDUKYsj/09n+G2PxYiY8c5+Ffeh+XEdgp+XGNcj7RdPiOF5x4/7I15th9wkEwmFiEOy
hQ+cv4wz1ChYbEI/5698HQEDY6WADKxoqUiFmAiEjD0+0/vD8/5fzAKHiZ+IAOAAyADI9P4C9U9x
+exsmxz+GPpqrWYJSHgC7qq7ODKcOlv/RjD0KlHpmqeGDsMufq/P7fM3SIP9B/oP3GgDtCdkiejJ
YDDzQCiTJOIVIuGUQxUc3W3UOvc5ZiaHghjFAPYvtxPUfegLgK86UfZESbfpCfiMPv0gANKr5zgJ
8Qap1iQoT1WliQU+cyfOYAB0RYAA7AovyAhUSBT2NAAxY9vEx7lg0FBR8oRXw5mqdI+aDiJeKDP4
p4TzG/lqePZqf2Zw2zEi+0KVYnK5mZBGJ/zkTRLvteiCh0t+fA+lKgu5zhyTcXGGbUtW20zNAjwg
Fzgcw6gN4yDkAGM9aux8yHAkdgAc81brVhi7H70gEdlAcOAKLeCG4yDud3Wd5eSPP7PSADyKIcCE
0+ikAcOEHih98D1BSlp1KpBUPUIqxyJzgGkAfUADl3+lGxlWVlYyjTgyQZFmKVMwhgiASlKUoWQU
7dgAd1TAJFDUQEPYHtfodkJkgJEmeJ7Pt50J6HzEYLHgoY+BjNFVVEkQafRA9Hpg8SQEa/iNIr7Q
oOMdwh3Ns6dKpqQ9WvCi6YYpQURby6ABvxwUQMQfEipSqpAjt/yUsRNTJ3WbD14CoADKENS0CqbA
AzkppGE+J3vZE+qaCojAeiSQEd0YSwzJNAFKYibIA0WADFsTMN73Dc78GIK4YmsfbAkXlihryg0p
czps2WYr0G3d7gKiUlAiqL9sArjPpL83n+g9CvpfqfV7TCQu6evD/mLZKuXQgpTbGCI+8oZOLLia
NveSrw4ZFsAIoAIQQEcuISHmIATFEAJQAiAARLdZ1go9p9m+j2HPMFh/LEIFHxanfre2RclnKeAd
GTgLaFYocTJUPYhUPAwNHMA2KBeEOXJSowvRwthId7Ag+Av+dnHmk4rDJHc/0dHWxmHTRkTpi7rW
zAETqXzHYAA8hSsQZGwAYlZKnuAL6lA2gH/aQuIEy6AHt1E1BxuHPhCYcAkSQJ6E9EDdLD1FJVGw
APXwyZGBETzGvk6qAzK7JHgB6MJ4wPgopInXhDw7D0A+6wWBMj8tTogBgVNh3VOhvt8A1aTgCpSQ
JQgAGA730Tb813w+Z9+B7SB6Hz4clNwXVCNBL0Om+lOQAfAAHEelQ2pCRJK00WuqrMlfr2SEiAkA
J94MIY9/5C/xUpT7bcXlLbd/P64RCRK9LqSL7a3479a23apwT0ND4j6DudE8zlwiPWPgJ8z83U2n
PcL1gm8ITBuXsg+GIUYOTo1jUEv4rsj5PFB2OgAPCknR6g9iqaBGjAAPWA9oUZNLlyFB+qksex4g
lKvBpKQLTvege5sALcGabLIIRylKwIaq3SGGkRI3yHwDQyaMBXI0kMaMVXRTdENJ0DgDRCcwTity
ZoepupaDU01NLYky1mowAjaGXCpbRqRcC0UNg2SmnoaJyCwcoBujoNptToqZ4tFKxIRYQAThRo8D
AbC8esPlndKPhBqEE3M8wMv9O0zdtnX45SJRM0NAkXDwJs5Iz0pYp2wPVatfuuue3W5XXW+a7Gfr
41vGXHdJOM4V3Vmh1XU0InfO+DipXc6xhUrd3DiZEa1WJExjkgAgFEpIXpWuu2mKhl5cOkpVTYAB
sAHgAD162agA5ABoAHqAB1AB2AB0yABEuN8ZkTcNwzg7IKiMNxW3M4nIfkhhXprFTUUVzrqasSjM
2pDx6vCGiNREIg16RZkQiarrtItmrzN8RXAAzlsUVWV91ijARGY1X3VoM97kUQoK1qZRHYRzY2cI
g4nCfEetWZBUwADyNExoSiO01qlOoGPJ4gBHHPF+IgBQIiDjgQCU4MTqhSSenLRxh5EMMOVDW++p
ir3rnhdB3qpIPWSULcUpdWwXkR0jMOiHfO4nSU7UiFjUKYoMlCMMQwAe19oBixKHbzQJHcSLaJ3x
yPoDcWtopIr1nZ9KdUOmH1dZQoKQoCZhKAoJBVGWyZok0w0jbTJQkyQEoQfmvr0nZkvywdUAfMAP
JQ7Knhg+3/0DYSS9RBkiqYEYGI4sCKoQotMUJVo9ePC66/Tjhx9PlNZg9HM8DhNCrN+IUADkPEAG
5YAOiADzZJIcgU6vBvhwcxfJ4KCmGGNyeqHHVSHZTManc6kbYy21t6W+xD1hRHyBIFSaZkocBt11
Vb4JnSVytVMJ0hwNa4rarSaidnaRCxA1Kgr8PSAU49RDgADuD5+tDDYWB04nFYyBxKVoZIsiyUia
LJUplLYrSawTCSkUQyBLISGwbCh29inkG5APScnuFAwADA9ZQeIAPUYC0rt7ownGAV3HkADG4RTG
AalWlTwpaPP0D8tVyD6zVJAeADwADwh3T0VfbLD0U4SQ5XoQEsEXAYZ7QgQxkNWyywrQoWn7Rthj
QOz3n5fmACdgANtpIQjCc+wAHyTIecAHigGyHRuGM8ZO8899lx7xLAoMcrMMQwIotIxewGDaRW4Q
bPafUdmxgOPCPYHnAB5oPf2aEQqh6l0N4Qu/jD3wEJPlip/iRNB01DVfcJYobgAxEH1qKY80J7qP
KL5wAb7AAbXJIM48dEBcJ6IA+p6Wh+BTCfOx9qBHop7Kk63cOSYhhmD0X3snWD89rerZDTJDg6ho
KSmTHoAD4qI9FE9aiGklAesAH1AW5B7aIaEITghAoHAYcBUMIJhobv4g7fD0WTyqJSANUipKhRVA
qctDYx5ZAB451ABspgvDc8iL3SY9MkAfCVUxVTZEE0nUqYoPPR5h0Q3EN3hwyCkGgYYcCb2iT3fS
etH3jU0oEulChSyAowxSXGPAyBB6GHTsABtswgbqZUuiqhstUvBDkmR3ANeYADgO4AHAj1LF9WEc
gA+kGg+LWkMAAwdBF614D2dgdsPAqidh1ZsPOB2wZ+vC7BYETLckSSDmgoNvAhwDUTydNlF01DY0
AzkkgYTXAECjioppq4bzxppDRy6gUOsXALvQ0+BTo3wmYSFF6SRt1Oi6pZ2FD0zCRdMYIgdwZDo4
nWekSHqga6Y9NGoND6xHOQ5XkDj3UVRURs7OwczibkDZDQYtAcchxGiHK4SMlXXqQb8oe44PDlqg
otoe/sOu+JTh7senL4xgy1BZRsvrZAo8BhfHsnUnamrWHkdQUch0U0No8giOSA2G7kA5oZiwOS+k
hA4J3j5z00VHq6G08w2e5QGuInDyABwgCJYwi5FEbiBlLDFj1OkuGA5HmOLyoFpYdzEgkICc3dx6
jd7oKWhC+Z+s9GHFHYeXyGf6IodCw7XJ099IdYCB0o4PXH31QcyEqAyncfHjEOgmUJ19x6opNIGE
BqN2eI7IA5AB2HCF7/t2n++H/5+Lpu+XsQ8hg1SZLDnyyYQtQngiL734YeiCQ+aADAKA0KXExhkm
lHuMxPhEh6wwOAQJ3iAO2wsHznRwuipGOV0BobyNOH5jqivTVfBMoQgEBi8bEB5kDhL9l32RckAF
6e/3HT7m6biSKEYRkTYNopSi0UUA2RzgKsPz2GSD1YoCR3RCiQLaTtCkAI5Iq30GfeklROcQ21ih
9tG6BYoOCHxgFq4ABiPh2BkUHc+DGBD4KQS0EHRHomwKpbzPetjRygAOjwGgf/XItm88YSLTX540
DAA7Bz85CdPkkPDwET0m8M0hUALhSQTYKU4DSKZuecyN4GkKWyBwz8gtcNAkYwMRDwyWcDz8Y534
SCQApVQGWY0A0FDUmIaFpWFmHQwF7ohoq0uhokWw81pZiSiQpIhw7AEWUhjwHQBjkAMb680/CP8n
nvogB5eBK5EJUntQBoqRYon3FOIvCfyMOtaAESARCQ/Ufl/LS/oO5BdbbeQ7QUoBSBywK4E0gg0I
BSAQhDgnQAFZX+13Ih67jAwzEwpRgkMP4w3ADviIQfEORzNDU4AAh9gAOoqGMBsCBkABiLhUFwop
8AwDm8t0ADlQsLO1RUxLEwf15WaIjjAmEkVooCltENPx++nth7/RCe/hzrDM0IQqqF1i/ZA6j84A
IeAAPiADXR0q9gAevfVV+lMAg8xRSICc+z45xqV7vQoL8oOj9Bqboo7IA43n/uV2wuQhd1ZAPuBw
AB7QAfITXTRgcgD+HfAAD9CnEOSwFUh9ALqugAPAPLySiuoEDKgeR2x5IA9hRSDF5ZYIDvBCQhpQ
CZR5PzkQOYWAtiERMdQUAEN0ABx7ByYBwLgSNdDHAAYUMOoqDR0AAaHU8CBmGEAQ0sE70SDFMFjc
CqhSGAAbPEEGlFICpE8LKlSEAXLYAPZu8egAMKANF+NHQaEOJEORzmk7gVLLp6rxf6pdl8QAfXhu
EmOKHwWAAeJwWN7ZOhJ0kQsivN4AnKhpDmIF0mvvDoo47UM0hSQ+PrHAB7EIYABidu3JUF9NDWTb
i4RFNVeibY8cR1YSaDIgTEmbQePWdHoq9I6Lt6kh7pcBIAGCCCAWRhAicyGmTahp1YOicQzgwvAO
WufeOjgVeCAoRR8xJADt8gKnIAMBBgANhs5pBGdQAPmAB8XgADuKInPPovRxT9I+RNMoto6KEVzG
qL3RWwAG6VeIGOD1r50TyQAdaXg4AB2aQAYkAAdUqXEMLuuKBgYSGw+ZxEziBPRjjGQIE6iu+whB
DMlBBQEKsSKEMoclgGn0gPL7JD3fZfhGmGYn6pRBqPpozYdAEdDqh4C9aTvRfBEMqNUsSM+RlZCC
XFH43eBc49r0PT7az9O5qEIurRBkE5rEyfp3XZ7GVscFBJ5qMjnib2LgXKlg5VKUCgwH5DoUmF80
EBOp5+Z4cOyQJGEOIxA4Mo4spEQMkQQipvimEBEYFhpUwMKEICVA0oThEyTPh6eP6ftOfP8GPm5u
p0qHEofvP1sw9zwKU7zQIRsAJSpUYIdK9k+RzSdU8ai0lrrOY0bUai0Ysja2mpNseE4sp9DYADAl
G9g0/jgYFIUjKw20YaJTF9muiRbRbAyUtkpUJm2jUQphiFJsGbQ2MzFolRhJaJlYyJCkClKahp7H
bjXr6CAPUAH3LygDAGAAeyKdvHgpkA6n18jQrCWaKKUncAXz+o4K5Q3IOX5aL+9ZIBAAy5YEQj29
tPYicRsSzIZly4pVSusTIjQMUgkXrEMW9hIil1U1bPJs+Ha8B0nRMMBwYMCS94HkSAIdw45GCVS+
nt7bDO4cTw8AzcSNNu+Dbak0HYM+6RGLLKfT+LBnijtjwhuHlzyt7wEUW7q4aXWZ3Udr8NcyYLU7
1DIAD3gA+xPHyo6gAMGzAfb1EPaEu4SAUlQwF/irHHSAA4HrA7oF9yir5aYgEb8kgBxNAAUCi655
Q2Dn27BImUYZQumQLMLAig+IhnqfUQPSeY/HA6kXE4VMUThRkw6Q1oOl+f91i9etutr+R9YDEbES
YKRYQPgNgYien0U4d35tPzf+If+WQhRXD2v4JeM19kxWJNDbakCIQI++HxlvkgOEgYAsmpxRZzcK
RRFTf7GBQk04zqHzoEL8ACAA+Y6yBBCFvnNhD68HqN/W4Bdgk5e2Diwj1EBDNxgTfeok+Jo5etVd
A/3czayFdPNbY8SjmfHCRmlBYLicmsNP39COj0Ng5orXJBqDAkMHGC0HETwEiTSITkkSmkOcnwGA
BJoQyDq7Xb8SAhoHyaokATe5k33e0v8Nh/z+kfP+b26q5mhr1R++nWMwkFMamA/LegwQ7axBEggR
jcU6VgNQAHxKDlmdkVL5AlhA5B+cd9eSYsMLP1ybGDjhocHGTYe9Pn5LSPAkqyfrMyHIMkpBCJGh
ZL5sbCUojEwaBtFsEgnD0RSSx+XQn/NLvsiCgOdbawqw7Mwzt3lHNkqMgXcpKg3c+Ilg4AB6xQPQ
cThxfIAH8QeUHZAHpnCIJEBiBnWxSgaQMTBxOyFn3FhwIHQ5+S1gdoSxCrp0AEcnIcGCfYeapYmT
5llEeb98ORVBpuLlAHNHAABuwdzHyhQ7pQMSEAMh6d0Oz5s5Tc+wQgPCAk5x5DpDfcMCCgZfeFmL
A+niHy/lEpVH/YnO1pJQ4pIR5fOED/Pxfv/A7X4s5qzZHwWrjF9jLYsmCr3JoXFo0YADAAOCAyAO
Wcxq+EYCciO6JjSOKTgFmombE0aTKYs2vLp1lACI/u6FQidHV1fOyBUHwhwOWgngxWqFg5vkCMwU
h3As8UkjFfVkwSR8xEooJoDcQKoAPtHpSewz9jweatqmwZPUlgA8wAaggIV2WjgYGcHYgAMG21Cx
L0QAgASEEBtlC0lPYq9Zy4AA09iimTsQBiqmUyUIaqgkVGyyEDzmATCKNAixCNIANIEARdjMh7KZ
rREpQhkBkiUWdUTQiHcIQYZTguVV9ZAE3AB1TKmocBoWEYQiVAkOoEeGyh5BAaN0Sh5AAaP3JhpP
sITzhTuUEOUHcjFsaUQZJDsgDAA4hrEwAFICDYAD0tcKohgRIOKjLgVTrMHsA7E9nyhGkf5akUyW
IZgihhoFF63VQ6PCHXAQG6nVAAcUykSSoILTBhK9lwx8UDEhxFeV80xQ/JCO5AG5GDD8gdyineEn
0EEF93BxuB9ij+SKA8bKqiUePGiwFTB82MdYAOGkjJIBkjV+bWxyjwLmDBwATZmJTlfIopD2wYwi
DgCtvTdnAoyWBUCpIjuHYh0Y78mi8MQPo7zde6QXoeeA6ZGWYgSkEICA7KOygeSAOnj9HdVj3QJi
6MIMMAbA5fQAHYH0O5VSJAEScYceIINiAPBaDiCECKG4LAGKSUiBS0CyxSPmyAWr6Tgxcq6CGOIA
b9aikWgQzoIvLoQ0pRBBASw8AhoBDwFLbvDlE8AAZTspMAxKEiD4x0CBUWl1DjuKUkUVOaDXQTik
HdiIQyAmyqatAAyhpAUMDmjoHK70UdWB6DHwAB4VPHt544yiwETpxVwnaPWcASSQRGJh5WeyAJ4n
FVUpADH8WyinBVTYxjjd1Lq0yKDeacKKBkgAOJgpvxWYZUTVNVnFoXRrQqSPhy9HR1CAeMCdksUU
2tkSgAdHmUcwAYPCKocT/GBxxetgAcTIcwxklVJJ30eB1VMgA2ADwgyNrzIAA+CuGJrbqWO4ANBY
MQe4AXkiHgvQd48NgAYq0ATwTI7AA9nLYAHjok3UUpHIng9s1XZ2A2cIbPri0GYlAecnj/5KMJNa
Tg2rw4AUGDI0fLFH55hDUI6OuEgmLU1EzHgLMmtnYhLYnSddkeG6cIppk3fiSYuMMOmZhHBhMxQC
xau9QwjouQlGiDvAbYJGeQYKwCDCHHvt4ARE9UjhMfwYRNVOyA8UOZSc+gBYoAfIaUm4gO7yvzIM
q+a4IjJ5epCTYeDAd1UDp9G4h8n+QdP9fyPjw5EA8R+QOs2zWEAaE7RCnDAAHtT5jYXQ8y0I0UNA
apAdOj7lAdBAcDHvSt34hDiFgd8/pAB5TyDZIiCk++BscVionduoK6rt1pnVZ2Mw1BBUsYyMrRZY
WWDBKWSr+IculOmYMfaj5RVogowAHOChAfIKEfM3Ezp6H16abBFfeBAAbCw6PX4W0evuABpRNAvA
mqANnugZVVP9u6FIQAkCE6sIv1BC4yhqmhpZQ1DBDEBlDQFLyD9fmmB2B6v0Z9A5yIrkCAIVQQih
3EHus/VlFp/n4oA2opFFJI+ag5w7QojiijGMR38jDrhTlaU2upYkbSQCUAIhIpQKkgDIg6QBkQH0
04HkXKes2OV/cb/QNBQQyzpxDE3x+WB9rpYYejoGcr42FjEWAgh7EzoGKFoY7lYoJijyKzhjHJqm
glZhckDHVzXI65iBoxEuG9FVW2S1UdfREfCfgi++XDdHx+O4UHEoSjoc0AH6g1w2LhAXrUiREHD1
toA7CaKGGKoZqNAA2YADhkiTgAOJDtsvX6TA9UFMH1mZMeBlUsiAaD2qHxgNcVTvAUcrow4jxgrT
D4mk3FaSj6vi14HUADvoGDvrg9pEIld5F2gImnZQBRETwSQ7X6IHgAG3kSDjB3HZ80MQ7zTiiZN0
QQokIfSKKZzZ4JqbuDpJCSiMTIQwQL/K9lWk91AckEXwOYAPqBU7UMD60CAI8Q13D0lB9QbPf1sm
g9bscu1CPelUqsQtUI0ieKRUo4kF5iCuqD5uSTKvU7ICdhqi9RAAerruyVUqRXuVQcXB+SopBmGI
QKKQBjwUU2NGqEiYallFNIqYDjOMjQGjx8t1E4E5Kj6BCDsBrwoFTRECvVC3CQiAPYZGlzDikR9+
XaJOR8h7vkO0AHiAj3JyYAD2rwMywUHsGaIgAabHGAkJBsVjjBD6Al37jAD1kCugE8mVCzcQ3H8v
9z6NfGJkAQoJjV+RiDZH67sLuq/dpIGZiQQZhmO0UYx1woaDx9F3uj1y6fVhR4ky1hFt84ZT7aF+
adhrCTJBaPYmDCJVb34rj59q11ualKENtFaQA0qFI0ZmTbEQ81XyTLtcaMGyQoUUk5oAPnAB+/gF
S1VTvAB7wAfOBB5D087v8s1/3Zrxh4YJh5UUxk7ujLZH38dcd87ztxVHA0KznJLMEv30xjgIwXQ4
HCd+u1+HCnfWuOAAb37MWhHsKeBIL1/bnZj2y5/Nt9+v2y8NLwjJK35BmAAi4AIw3xKoucQwdlBR
RE5d4QOCIJ8Cw4cED0n0J98PEQF/XLjOowjkAH6ZDnnJfgB7nvqvIz4yqmgAbJ8mOQAfnzeGABiq
e78y/lO4R1QhH7sDAKaVDfhQHoADDkJQXcbqu6pcdgAfWoh47m4AO2BtFVVTMRSrtKc9JVSBqKq7
xx9YfWiO7qQ6gAw0UQExLMvhng6BA9X6i+4neNgKQ97uFnMg8uZp0ABmmxSQnq1GrYyMRVOcAA94
AOepRO6E8jQh1h+ST0MDvgriIGCiUdRdJIJlqJdiFzbYAmj57Lt5hESgibbfbS+0Qx2IHqEQeRYd
555Dtt0AF61R0KPn2G0TnQAOMiFpVDfcuGkhrEAHQAGKqUREE6KIUpYANHRcJd0PGZkqoPgJAZis
dlenyIfDW+4c6V9lyeRlCqChDs5D5i3YAG9Ud8FCk2ABxAxRO+d3cwdCpBAxrbumGiQ0KjIcI2wA
ByDTxTqRfdZCdRtuNo7RSG/nR6Lz4cU9slymqld0otDp1rsHQ9WpJkvk8nt6tHgQdRi8iE24OcdE
147vm5ySVRJ4PzhFF9Xlo1BXeyf9AApbR/0MehdCYA2bRtbcdoamYZiIR3bJRCJR+pQOpdPqeOx5
90+w2+YmSqfKfY1LuVrAREGNSyzHydaxvnXa+CimpQAPREHcFU4gAwAHdRTAAd3cLMMMMySWPlnu
7sMybcDZB4WSWmC7PfzymIg5kLgBoOhgEFQ4NisUUJ0yDmjPNS4zrXCjZpIS3VzVhM5ip8zpXnJh
sWRmGrZC0doiLoRbg5UxGVAPkiXwsbqZkbLxMcEXgjd4y9WyUO5JkIiAEaujJBb7rAZ84zk3mjk4
zO5gMvU1Q45V0OSZm6QnZqHUIw34LBcW78OTSCOIA9wdzcPy8VVFRUH1+s/CGmwAPQF2Dv8yRQDX
3T0uUzij/P+zTLgodZpiT7afxNRStRStT/CmamZFhIpwtOWqLCvf7xpzhbYpWopbRStREtss7pUq
KW2Afxxh4gMhMkJ3uBJgE0osQD4Kh7UfWwp6eeyh8gnL0D+TwwpqnwwwqiKilowlHIzMcvGTaHRM
TERfhIz3IPLIE4iPwQqAvg/NTagBxAClCqO3j8ZhbFVMcoomgCvIpUkVBFp7jP23gAE+5vJORkXl
N8TzIdpp67PYQIxZ8A+u0MEVB5OSPLt+n39qd6+rIOF8kSEvzuLkMdyCL6xgzzHscM7H4ZE1LLvc
ABDIAPUADQADqDrsa7DabCm4ANUdyANgAOqjweYjwPDOhgeBxbPDBikYADYcWrIoxFRCTGLz0K3X
kAH18p9jgYIbnoqlfHDEIYB0ih1TyIICQiehne0AaAAewOtRpiieqAh9ACaoHpgnXtPZXhUoUU6A
A7KiernzkOvTB2hAwavaJYAO6hgVwliiloo8kg8G0Gjm6Ja5ZVNIOHtg7xLJNMHhFGRnmADm08Ka
8032Eh0O401AI/DJpg6EKJkAHrNR8QAHkJjEKQAfSMszBAGl3vPy4PpNfNpBA9R5oq+GTlrBo2ow
CQTm2dEhCEkJDGY4RiaQeCbAdibXhEKdKXWQmvDQiOFAvQCgrM0hTTUY2Wl44qWGg6cQdBo2d6ij
OOATohxwbp06JQw/J6ebR9NkGr2lSZJtRBkDtLtK/5W2h32yj/Zxhr1cPxePZf8sFNYdQCLDtEBD
L2VB+sBH2DEBKPUoneQULBYHD+D7Sh6kAfnVTjy8P3aerCqh/8QAR8H7IiFdwPAK4JqIW+0EuKEX
3/F3mBdgMgA9R3gC+G3iUoSgoMAA1S2xNQ8omiIZPegDygeetvfpue+j0cABoAGfQgAYSgpjQwnU
3AB7k4sReeg4G+KYCqRVA6hOAAPp+MhN3gDjoB2maH5cUu6AB2FFO3sJPN2GQ7PcxJUdsPhrAPqw
9R4c4L3vtdJQFAImUwv6sBIoO8a0trEGAycJGO92B7lUGBgp8rIC3YtBTp8eVwUflLCRA7IyJg0d
as4cSRXkdogEObFpsokHqLNDqNv3s0OMRCJIKCtP/N/HaOPo+PwDTpEP4kyAfOwlgfwYD9EI+Eu1
kNKGTkpyMCG7LSxAB3aMIkNDKDzt/w1z7MDghCyAGpwp8hfAAGJ4AA5D1AA+Gh5qdAoiEOzDt/u7
v8em0uZAhRIEziIFbyw+zA7BQG9wdIApgcADoJk6HFgooRABtgAOnFugAbQBo7SY5iA0YiXkVH1C
pfaAwHPxVOFBUkFCorZETTUuiQD1DWUIBQkOMeMKtPVeb4nk9JqvVfQ6/ZoLxdjC0JthiNBCYZiR
GLxLiEAD5eILXbwDvg03JI1VBPg6qOvxm6lgA8tPxP9L/VkeoMhR1CEh/CwriODYB7wwxhVpSZiI
vAVscoCnoSuZ6/PrVTFVDFVbMz6cBgAHCqpgAGgAdAAdAAcAqmCsBCUYIJmJvFzBNzbVQUnTHUOI
aaI1ZYwAGlVTOL0DOhDGCdUb1zVJrpm5mqHJpRdmg+hDy3XU1Q1EEMiilCBs7omwsBf/44uU/sgH
MRB8kCCuL7/8i8Cv2QgMFAVWsfmBvsM/iP6P92iEnQx9Am/KI/3f7D88v3+lPbg7eZEQKxEzUEXC
SqJGgoIho/yuwUQdD/Qf6LuOrnXPOYYuMIP8oiJCU9d3n/j6jWa/2xihKcCLI3YJQhQHRR7X/zWE
WYxe1e4pP0GyK4mFEmopzXH91hOk2EUiw6a7ytD+vcctG+k+dfVAvmXGcOKxNWfQMaCNKANRYs5V
wPSmrSSPQMQRSxmPqWsfFTB8M3g19AitpJaP8E1n3PqvFjVIydhFToHM7SQf/K/9vIODtE6upXw3
4Hdp9clP7y+auWaPbeZoIPdsIwwGG+1Ts+ldtaSl+2hSSOCdb4XSTqWxdkO6yZzmS7qS6ZkZZ3po
mlD+0QFQfWnDQQ4hUrUJSODUxP+iqMTNlcxMnKMiHpn+4QU/4Uwz4Wk1I52Dptl5Swj2rwhEKtF1
FnhBGIxCyPQW7oHpLBxkMKPaEDarthyMR7Bhy2eeP0Z5GEUxXmYGPywnrjaTI2WfyXe7mBBioMIZ
cYRacOouqJtJWChQsYyg4VoLMoeSjRqLr/rxMWYlzJjDcIkErSUf1KGo0qtpoGxGHCynP/h/0foz
v/9qGkcH/yHpHMsO8P7T9i5dnv7SPQLl/fcAfIeojBhw9jnNj5WbIPEspajCLP7T7D/5GfhmU+I/
E2If5vblftX6NOw5f8ULaF/no398BIziOfgkxCPT6iBR7bYDeXHlIpzWEWTGPKjjMWvFV3slB3Qd
+3WmxIXdHU4/7PQOI+r/7eNUz0TitwPtibPt6rmnnoUCO6smLHxdUsKVJuijSHF2sS/3rXGCkfb4
zGdqBCHpMEFP+eEWjPPRXdWbj+nvy5irGwTe1zLiIikXuBSxzGQTENQfUKBt/EQdXC/0JDGkHBJL
TS8xiBh+yEGRk7p1ciwOVsFJcEaFkr8YhDwIsJj7PHYQEMZfyf2fz+j9Pb+t/69W686UtUVf3hYm
ra5hG7GmNPacf9MPWPD0pelQpjL/l39p5v9NxhDkTvOkD8kzkAAPS8/iwGrqDPUKb1U4/zNYXrH4
+awVg4kIFITiSZWoDbAcD+13ZGDf2YKfq/hE8YzYQuuY95XM6Bzy5HNLkOlCGuG6HqHhP5JFvmRq
tb2849Tdl/P9FtYCb0MNkbNbRez/tr/Luj2+8lS+nDynb0MzU6u3+//0u1Hnc7IGBBnS55whN+ek
Ybokt4RMYktISYfqJ2DYI3oKkbTA2EP3CnY58vGOr6Pws3pbifQ1p/wn0llsOFjxjDp83n9Ee2yp
Cj7zcbzjzjvifETx9Ds8+FDtuNgvVt7dWDdOBcdtsP1+vxE44D5P9V/mX/q7PhFFf/7Uv5o1VR2m
1kGiZpCAnt8c7HHeEmZu8LXp3f8oNrQGFrtamIT5on3dvv0C2e7zdPos9BenLDB7rccjbKZR+ct3
9U872wsLdMTJGy4HwlZCNkMs0x4q5YIxRlo9u/dBUPgORXk/xoFAozIsxMN2dkrMdcvDSPPZOxLZ
DfvLCS/r5/XmYGy7ZXbhdt0z1NrNTpp8qPZRFrvTH4n0IxwyifWdHubixhpq/m/KSD5zz10H4fd4
p/xM0aFIMZszf7RJm3O30MOJ6e0zY8l3tflR2P9XsoKEqYIKoIRipCEkIn7D9FKAf7/yhD7Sk5gx
9234Dw6hCDpZvBm5eb2f7u0mCxZplmZYSJgIzY+thEMA4TQc/cNqwj+8QYHP+ORn+12uMebScOEd
OINx55aiCi6+Yni/BNnv7irvNzFDLsG4GJMIaCCpZu/xqsHqm3GpiaNH7MQIQc4KUqXLnOgoNwiI
VCFCpiIIqqiOvlvngD3a6ZlbG5ykGgzdspDOODiDcLIMQOwyJwDjp8XpzNnFr3UgtXAd+x5CTJE0
XF+SnQind2TroNIHXeKOl4esOifxEU7BNXtGHAujdIIQeRkq8w27Czpp8zVSpEmoGwxDKA7JExns
CL2tgaQ2NDiWVCo4Tekz+nlRuVW25v/74dDBiEbITMYUL3AQ84HESQmE7+UOi+JYGB0OtWnZMDgI
opdAYCu3qzkYxSPF7A5vSIOW4Gy9EvWTww0hlCg0Tla4CIRxg7a1FlzDx8JIHPhwdxW8+6ZoTbdM
hpKJbVTyI6vhQGYfGQ3kL6AX5nsTZ5JxPyNK2K8gopVgQWFhJ8JDwTuKI7Zs2j84ceDHTPI4PNCf
ViU5CGuoNqxVU6eDAzIdGLDBGIj19XmTYMVex6mc+4blub5qI+g7Bnr5bNd2lfsq1XL9NLcOVb76
wvskqXF6UL4JXXUrepQnJ3hNYJTrZduxut61rGKqpzq9zM4xtsJhQ9nXhlRncdyTv2/cikqYj6iz
LNHPXoCvwHo9YWfNg3PBnInZKXuBoZ93YW1gifg8I8MqoRHT1eFS001RVN7yfgeHo7cgD8Q36Xpv
QS2B1Lt4K3kkIQD8nu/L/dWf7w9HaPdhB8PakkJS+B4hsPl3uqb6yQka7zrPEyaAHK34nAbKBloD
4ggapAib0m7pTdcIQQk2E4yEYNAE0CkREWiVOljavyHcqhkRC0LuhHQDtPlJ5tLojEzBbk4tBxbY
EBxNBRVTMuBMzBVNmoG76jL3KwA7wg10PW2dA6VwPADoRQCHiUq1FZAQ5HM0y15YclPod8xXDA/l
O5OyJt718u6sjui9r+idXQF25SUlW6ygQ9JuFJ4lmOXeGHmvoFPZwGg7JnVen+L3ZPM2Pi9uQkhI
oKqQJmU22JTlwLqjuJVVQwUNEylCYEDkwTR4GntxmdTPKdbW8OW85VHahg7IT0YVBjKgVAWacN9Q
fjVIrBE/JS236Op6yIPm36dTzHK3KH4cbeLTrPEnFPauJv5Y+QUmsDPRizBoImqQ3PkEN9gp5Dd7
0EcIAaDbJFRkYXYgILfvgfe65jcEF7ExqhTwCnh84eYsXCGe0IdbjqHaQi1g31Oyd5MlqXUhpjJx
tEmBM0Dye29RdnTdTDRTkGQbCnka8MimCLJuvIHVwXU2cpCnkTgx05dGEHW+d3QUSTQHPZuh2qoo
pO55yO+0VaPowcq2sqCpp6Q3LHiuQi2HNOLvvzmmaCoHPdmzzMh53rDjkvrhDDJ2UnAykK2RleuG
mdmzh0HSK+RRuqwpqldVWFVGHgq7rFGKNnySHXBx8h8n+VdeZRd4PI9MFphqRkiX0Q4veHcvYa6O
qNqkKZHo5OnXXEQ5eHDkVdEyirPZ9J1twlvLy0VCKKLFaWJkF38pGMnudnx260drDaPK7Hwzbthh
OpontCXQECGHXK41w3pVIRMVSHGqWIeXedL5oipBYO8GIZPoPEs0D/Yfzo014HiJgZMmYA8r0K78
7MU7uJ5MY6xwybJpxaJYdjA1B7zQNxDv8G3f1hk4O4Na8eLOVfOfMxAQyMwkB4Rjrn3YFdL8/bnb
Ny0xmmZohw1HidfV61mvM2Fdk6cgdziR7kIECl5NpQFoRKDBALYGA51jBqE6EKsVWSfcQOFkD7fu
9KisC9Ic0yfGWUUNFYkni3tCOCmA9h2lQRIoII6AOvYn3Tr2bbb4ttt4h1ZSHERvExsL7RWjRCiY
iwBaycyeY0TGaXXg+BByQhaQqpWClsYS7LejMi2O3hqnFpIhuEXZ4Ds88i8H4fcnmF+iK+Y2s7MA
ZA5B8h7fo+Vvb3/Np+PZOf4NvuuCpFiQSBH0//Ef74P+39zSL9SRAkH/ZAA/8w9EDBH+//n/df71
/+L/tI8Z/Awr/JaHg21/GZL6ocMolTQM5xjMIRFfXZqUTVez/V95px5Ov9omM2cqxZPb8uz5fERY
zyc6E4m7rmMEV2rrR/6H/Tua20tq2lVV9TeuOKp+HX9qksmtkXUldZTa7ag+Z+gP4z5V/On6jlw4
n+k/UJqQdR+IIEYWFFBCwX+4OAf3j9JjUvpJd4LCgPVg3IJ/EX4u4cSdiJJPiEs+r09PQ81ez0D+
w9P4Iu7wZ4vi89lxA27WTiH6rbPENR6+rA6j3Iww8PsPxwnzH+t9JZ/zeXqvy+Gj9rmxlaltsyNS
j/z9vy8oOxDHyLBhFEQRF/lYuScfq88U1V0hgal+fcEWCqQDHt+rskkJJGYBTmh/XThHejxaLyZy
YzWYbwQRGT6jrC4n+vAploQ7nkb6fx3s8I+7b1x9avDW50BISSJlSPX6y28rv7vJEpjuhbGYxc4u
GoXrEdIFxADOSxHIMjAkDHaCYZkKcF6d9UXh06qaPV6x5hNIToaS+J0N/d7oZyQBWQDg7nlFVqBd
wby1gQpyDZCaHpP/fzA2h6Q6JvvthORgEH5DzrEytnYxOvd6B704aTmKJkESCBxcIMlGYjAVfwBI
DsI6MgUFBAHoTHOQhCi0UoHzepblcRR0c5yGIuZS3L7FvBXNm4xzazGLmkrKiJ3jSDjA6SRW920n
ZiSYkwI2cGXzcyr3w+MQXgxq4S3RJAgW7OUKeSPoPt47hdXdwlVamRSphQh6Mm48AxBJ9wSSeTiJ
Ncj2mUUdBwV4EU4kJVAnZgpY9k40EOYHF8e9Z76BLIMgNsJa1Rb1ug+5PccF3RDsx8Q4NPYCer1O
Ga5DEyihJccJwlx5dDgpAeMT45mHvM+LhsgRHV7jqHnF8nuq0Ovkr3YZCUAUAEwU1TDCkZZ8fAxR
JUD+v26+QujYfB1B7gn6O6zCQTLBjwSFR6OKVFMLCoZomDB2AUb93qZ1Tb1Q0zzkzr1qiwwXc7LP
i4d0nQwbnJ5Bv7nUHuQQiKu/HvolVwB6B2KzjvgLiXplvnMebiZJCZgNG6G6BCt7oBttNsfE7VPX
4nwi/f8edztAYgp9Ek7+ZVAixk7lTtTB5yiYleyDWKMTF8nPBNE+ewGZwkHI1m844mheQg6LOieh
ESrGJZOTSQU4N2G74vi1xCTtGmjXui+h9R2I9g+C7kYaxnM2MBIIHZVi0jm53abqRtuhfazDOSso
5PWuOL420wsgOyR0gTgD8axQXIoQoFAKA1r3NDhjjR3OY8Qxny4SWQycUJJcERjAyAuEYHOHzHD3
xEfU4j1QF6qbqJbY8i6CHTr8/yAxOTqaH4V61Nw5neEbiDr5Ubc7NKPkEt+HDFIyk5YdEqCR6wkQ
S8ee482d88jzrSxLW3SBYVmhahRtrApWxttlW0PE0h83/Oh+NkjIk5B6Ic3lZFJuzlPHKH+D0aHt
Dzm8EEDMKCGoGegx8TctXbQoszESRKFdFXDNFE0G8ysXM4PZ59Tt20Xc/oV/QeRzNIS6HBmi9R6G
/r8eLm7m7xmje1CEJCUIQkffs+Caez2fN9h9+B9+k95JjA5ANAQREEkTRJNC0hBFLGn7tYcHxnlI
e71X7BgxMhxmiFbtvmLMb4+LaXMIwsyzOxF8oYnrVCdjS+bp5IdI2pJ+vUODvEimBJUQMMXPeBed
rImSLjHY2Ng5j6nxdAbjhgomSqUIPLRJ1WSxuB1A9ZmEId3d3Nehz3nkVwkehhshrNrGZiKRDuDx
WTdUPNDc9RwkJFKC1NAyNmESL3mUex6yDJBmN0Zho80Vd4mF6mQlKzZqjJU0qNCw9DMbgbyTnIIM
GCoKIjPd92tspVs0FfOB4O5lVR9Pd4pbbS21S6dPXDN3dy7JLpOgZDvLeLE4kC43lymyuHnDBxAp
2Os94DDk9w9Z1m0KOfUSUOALXUpaSKb5OLgwrxax0vnlzaZzjM2HJh6uDRBMBx9EAr4eI0kzK1rg
M+FuEh389x4CYkBR8D1EvajlNITqIAj3nPSPplkNQ5q9v6uarcnpvQFHu2sLrw2pd9Deurpmru29
86qnVXcVd3aururU3cBVCu0plxVUJk5q7G07bTbmGOLtzJUTNWrEkjDnGMNJmFntMNDsTh4PU+Do
2HaSzE++1PE0lbniHOFy3rQ4n2FSiyC9Mm6wZe7ZBTiKSy0o8Zx7ScWM4WMCykIRmZGRQWZ2D0wm
fPyY0fhlx6p+bnvqt5kkIYxLKLulJhJWcZ3HiqGhoFKnI446TeEaQWsfrPd5c5PDzrZk9YdskrOH
MIw01Mj9BwDd3pQB4X4HfQF3V3+Vaq/YGDPujcYhv61IXMztQQG25LMmPLewYHg+eeft4yJCReXo
xmzYryjiVvgGFwd/WupGCHFv50Kpq5kMwoRaQkcBgky0uEDJ5yFLLZXRNQKuROOdSVeI4MqPMsXQ
85oWlllKaJkhSIrxtgdC3h3LJ9n1fD2h5dbK0u21qoRGImJEEMR8EqnA8TSrBuL0PDYow9PM2qS8
TpwkOh1VgZJJjW+9aRM2yA/KeQFRkYFyfnNaZjxdvAqz2BQeCSOCej6g9RKRYNBCr+N9m1Pox2VL
4nh3AOm+cQPEhnhV0JM9j3HW9jZIpTlPck5uOihiKeTNFFURa46W8uFCzajo5SJzKA4oSEWBRbjC
sw4egcVDQoKVKOAFE0krxq5ZdWSSNlFKHusu6K+rqPb6Nb7O71m9C7uOtOIEljPUTff/Pxes7h3T
2E0HEgrzPO8GinwaPE3ClKoiWZwBgxg3eWI2NDIYQQZ26TEDIiRJnf46eKodJZxX9R/K3ieMIPAI
PCGG4bqqR2j64o2eOuc6O9Gd+ZvNGatiWeEkndZ4Hf3yT7JA6w5o83YPceZ4L8pXUmAT1hlPagRA
v0pgdxwL1pqqiYsViCJFPUSpS0jQRYaSivRSPR5PI4Xp6TEzMxMz5PfHQeoTySQdqq2+TBhlRpyi
GBt6ixMyZQygWgJkJml1BsMLLxFqVEhQ1EQnJpHxqTw5dZsGomPmLFZz7iMxsMOSiEdT43iD7cGq
dTrwcBgiL2AQ6g37guP2COPwyi3aRMySSTarbKLqGoljFMeLsuUZzZhhgI4uGp3edfXwE6vGrjbH
HBzYj12Ir6qirPfNyG6k8U94Na4M8XI6ZKUxkwsqNFIGTqktuy4ZJIZODcmnvUG7PhKZMkNoZqYT
rN21yYMeqE81kg+JMkZAoOuc+lNV5rL2GnchuSsh20sexrtDjwhKozcZQhGdGuSmNHhtDSETI0L4
RB+oPkB8FdxT1KF+De+ApfyyeqBo3tSnmCFqFcqcvZg8CfJH1Szr954LxIPB8bk6SCOjtCSSS+a4
MwuU/EhnDJll3iBCJQ5Tg+O+1wPNV68bHDo2pbjQ68N66tuIiidXnLx3hOa+UCeoVsPpIovAspPk
w1nS5yLGHCKiyZnXRgbtHYHHuK1bnmBYTHYaJaHIMcWvLmeEtNlrE7hDYgg2F5RjAQlFHzbyEIaq
YU6B3m5CFvkQpPiD4ttj0s93t6wNt+vFIbTZhmXMy2Sy8esISRMxY1DZDXrCUxSKdIF77XTjhaXj
N5w5tiRQUim4mTvQLIXHqgQxMQgCtPOY2sbtxnGYmDVCngQIWJzYd08SdSExYnoaUzrdCL4Oet6O
/FDuQTnEVRghscd6cG6m8TFRUd7D3knfYQbjuL40E9xwH0IfTxW9RSJ5BIcnY6CGQosm71LGOiaL
EvTufW/E+Dobd8ldCMnA3AdXof++0sYxSZTW4zDo0ZrggIV5ihUHgkzUrpZdaxStgogq4n0vF1eX
Z0xV7AfZSSYEWYhOIExXj5Z3AKFyXZPPGbpioiNom3zjPp8Xz6n1MfVSi+xDT6i/KqWdAMPgU+vB
LBJyh0W4npYWayhXaWcTrDsq6KPSe7nMZqr+W0fRoDJOuizCjOqSyHpIUs9knso6jvctlhSmG+66
l2pCo02EuFEUKMQFwSHuYSF1kzFaNXMm9WSi+d8nF2WaXu1EWxbWVZYzlZj4mz5B3j5HTjq7PXmK
O7MG53MszJOVdHLh3psaoN8o863c5Ds9fCrHhDAGL3m+UqHaHgE8TRVT2ganhrzKOrwHsolJQ9oc
wOXNIanuKO4N+PdaWWVbHmQKa4HOPaPNOLgw6HcEaH3KGhEe0oR9CkhorsB6GynXzqMEz1FggcfO
GVtJXo7+PXxKJKlHIeRuwbXeEjO4cv3TKiJops6HekiFEL7DsDJY7LOo2bLzwkghqHFyJyab76JK
ge42IHe6puR4RHeBuEIVgfTHj7uc85ohKLEFSGlVq/GeqJkhDmZwAZTY4JJMuKxN7cUTh4SwmIKF
tS18DNmMcYpW0963ZiYalTnBq6ESIRV4ErmrHUQkZyaS8K7gyjOplFzJkZjZObOdMNOpyitO84mp
qYwhPGnV0noxMemqeobO6bng2mCqQwscIwwjCY0eA0B5dDxyEhHiBw7kiZXoJ6Di+f0ch5hOfEcq
y/GHxmD22e1EUYvtTwhjIUK2RkVaxWMaaQJJA0yzOQGrz0ornzwJyTNT648xjJRrEUZNQjCM4w44
PQRCRAmMYQJQQmt1Wi7OwJsE9x1JsTuIfSLkPmDq4G+FZm4qBpQlEkCBzW8HKHcHMyOBNA0jPWfD
sdHV0DgWg4FBStUKmMJKhHyncfdthhRn7hsRQ2a/emk4jax/q9KC5/5k9NTnhXm/ynE+7DpelORo
mhKVdNndj6cn20dqaSM1dfZYOOT4Tziz6MdgwKaeDwZx3z383vwcmk4Zwfbx4ydsa9IEY1zJ1jpV
uG1bXKGg0tCUm80cav4rtTsSyyqdZ9XKsUNQkkmXeirXgVFvt7wxu9Dnz9lYnDtIj2qUH66ppqGh
CQpFOxqWVCBCizxQ+GsPXXWC6w4hd4ceH3kxLizXBo3rWWl7Up+De8e1UYKjvjWZ9K+erMGZJsqc
o1gj3qRvEHMI1qH6qZHlHGZih7FFqWYcOJ6ZydcCnYe8FtND8HmyjrqUKISjpCSaXcmyzV0THiZQ
kL34k0m0xIQPMjOO2Q4xQiTKiCMd84KDQeShatpqo79wB0xCDIhE7/DIiHd6DxUJFIeHduMHswE0
h3Ch2UfQT2+gKbHkPLnraoqgZoiogg7HIOA90h6k78Q5JjyCTkO5OZKCkpiSKCCKiKNxequL60B0
D7dWQkooaGhxadnDDr4KMdGzPgL03wdpyHUfBcAjhRSxB6Xud3Vz468tBHsgGIV03SmEO13cnJxu
Jeh4deW8Gc0XoJ4GhEIwNdhH1R1uxmoTjo3IpzneE2A4Y99mJgiBk6qQqzsYrI3EC9y3PdqodQUa
4Hy63UQCGMaMcSgwhpKEgEiHhvug7QlNFAkaqPCOdjI25xopziCIGeI1WbC4H1t3WpM13+DtekXE
cei4UyI8JzmPF+MQYSS+GTNKpafejctIEo92cKqvTzrlRMG3A0dEHSD0y0dXjlVL5cOlVlq6jDGo
YpdtSy+SiwtDiaEXEhecEMq96gc6OxT3WQRgZnma2nl7MtB79ysxR28oo1HjZZKsZaZ5GvOu2tsr
6rTdZgqWDyxlKD2Te227nNmoxxmpiX4k4831zY5ne99W+dampR7c1Kl9KVnTfY9++3Pp11r0FqDc
gxkFkkJgSSNdxLkDTmY1FZ6h/J8lB9X5aI+gjIE/f9pXy083s1ETmcjo8DD3B1LaWfH0h5srA+/4
PsVSIzk7073l8SO0mwWMd0kxa+D7o+zMMY6dDadGax8joByDLR6Q+NUolESa+MoHbbHA1DTbH60J
dyO3f5EepAGnyB7n3yE+AuFcjw5h4kTAcz2gvcPUBlXvefbHUpqSqsw6IoHAOsna6j2+an1nvmIV
0fbl8wnch5iuK3nlwZnHRShHUHuO8k3ORDsDtNXQ3fR2kkdjgocRwh1EhTshCgEKIUKIiE/n0fQ8
80uX6l0s01hZt5p4qtb8YbHj0s7mH2/XVFC0lbvZ8TxsowSJCqC8zEw8AT7PM3QvHzZ8nz0+OizU
Y4Blh1BGEkF7b6+nZblq6xhM26G51vLFWTwHsPYbGz8hCsMgnr1I8eJZbFhJhpTIuzuAjsSCk6Qx
MRSVJAAj0jYIzPQ3qlm3X3HqA+8e6fTodh2jO1+tuxntNZfgCBWicEmc3M+bp+3XpFmSSlKCkSIu
A7Pr8D5ddLpJ2Dr2IpxPRrPXEd6PV5CQm+64pvGcqM2ZtNDoC8dppNucmvY+YNTdHw1qVaPecj1m
mDgyvT7Ztq0VcfXBweMhzgznUy1qqnBBo1Uk4XHNSJIBdlESXRGjEFxEjBpCAOhA2chpcNseVkv0
+u9Sh8PM6nAcj2IYhNIyEQShH6wwxYhkCU3RPF9XjW6D/p/af9euHd/dnHf8+MQ2yYD+TAY3kzTf
z3Vf6cHJjCJnL+wxj/Hl6ZzxqUr/BvygH/dPSmYcJ9BaVVAw/DKCGBVBJ+I/vzB9Mn30xoWm/ICo
Iw0kkqWvsqY9j/bGbwXlGBR7IwoKFF/Qo/GbzZjubpqpUobpgYZg9gg1UsM05jTZAjUSUb475dr/
rVff5jfXOnL/Z73ETwB/l8cBcig0eMrhRBH2gPXR5VHNuUscSEQpjxBkBgFNObf04OXXoiHODIiS
EmmWxEDyH9JrqZyd5/xsQOMR5QO+ApIf/M6gEfZ8ln4wZT+AgiTBGFMCVesBQ6B6hgPoD0+/9QYD
jAJEPoj07vXMc8ssCgsdKZLvIGZg7Su22xMd++t3eaoFAoAiTgwsMySIrZc0r3nMfB9n3fqwRCfa
eKab9Q9ry/qLju+hPtP86TpA/nPRig7m5i7bTcaZghOsD5/IH8c+cOSRVERajCIUgLgO8f0+0MCg
QgChBICUiD9z/0QAAGaozt8xia6IJsgfV2Q79/c59GUmiDYml1cMeBC7QzEfL+Kk2no7LsANkf+v
geH73agGvZF8gT8shBBKvU/lgePcgP773yomFTQ+2+nWX6THNRUkptpIKxbYJKUChQd49sLEBGsB
clAxCQcgfegWI9X0955Q0JqUbww0Qr2txkoMaSIE1CH5DFdZUf8fXQnuf3sWOYDRFtpoxINS2N8S
QpWgj3h7+6Xp2IgBfcQW0UDjSqeIBBDMQ+YMdO2pOR8dOgB9wTxSgB7EXCnZD+CexRBiHDjUSfTl
on35noQYkMEmTo0GGp1JIsrJVcOxSoVPY32WHXKcek4kBzq5e92NStFtlNuzXaaqFTNustXbuSyV
zbzcytz3kQlahtcxbIiayzUlIRJGIdlyQtLNrq7da27SaVdpBpbKIrYS2tG21C1mtIhICwgoCkhk
qENSgWykRQZI57a9NuNz17qIk9U26yrJYlgldMLiJBEEOkJwJgjIKTJVYHBxMFkzBXMxcTJzS06e
3F5qiqaZ4XYBdUMEo56G23IAEOukD5xPmjIgE/MWEyKKTKGCQIf9ZUNICVrWBkroYenIADHyfWdn
uyNxti62WtoAT2cvCc5ZQiSsORbiTYRJ2lNESdZrIdNlpNpWrcJ+eu09qZ/E+R2CepSIVH6wclJm
98hyoJg9pcVl8nSx3Mx1RwmRAKCYP/VMOigJmRE9af0WHWe/gBrgEwmPaHp9H7VR84g/Ej3xQShQ
pQXFPmokPzRQCSIGOEjYPuo+elY/EZJg3ymkADOmhLzbdMa/Oz+/DoolUTcYsNTnEYlxkUYcWsxd
qIBE6Dc2Kyy28mGBb0rJoEVESenL3yXK3C2toKait1JjrSTEwibRmYSaAOB2cVQ0QiYEKsEgDFiU
V9CnyPedsNIaIK8gTyAviDcRSYPN7x2fos/2DYeuT0nwzY+82HUl54GCTBe1uLvxwGCDzUmVCdRB
2sek/DVTQKQLGZClGUzsUSTGh48tCWVCR3i2rLpHsL4FNPwnjrRtF4HgSHdXLW715lq2xUfRq20P
sPCJ4h6soHgAHobQ7XC/JwPEFVP9s3ckDX2AFAtFIS6pLuuqjFGtfbrg2rJsm2kqktGjapLJaqUq
YjT36/A7kGqAAppFSIClRBKGcwgByUmu/cQ8FO3ZMxBCHlMnNQ1p82OFLyTkmEoaNVXhthJUzX/a
z8IxgqAcH6JPAyOMoDbDfB+PSQ2F7+5e5OKCRPZAg0OEoT78ZALgCj5ogjgzA5pVAcMKA4XuEMG4
GUE0UbQPWH2kHGeSLWgoRJHZQ514h9N40ypB/AkjzNb7gqbqAXO040O2GtEYEMvbH13kfXIGHL2T
2FtKSHnX0bA/QkMng/02GGEmtZyU6RTWHEwZKggn5GFFTiSnnZRkFGT6Y7JK1swX303a7dLqupIo
1EQb4xUN9VyLoGNjVNUKGlw0TrhWc4iiKKiGFINQXFFCwSIImzQl2IKNFUlZqN3prWXhtdC8RZDd
BEPKOAGQkkYJqxpWpUSaZpalNZWICIoggkIWKUiV2QxB6yCqrTzM6zCT5bPKQolOm3cXg1wpY9J6
pgFxvK1p7+mC9UDhsTxNkUUOwgkjIQKEIIL1ShgL5C4AGHuVU3gefks3R8evuPYIGRR5ZrjkNwVJ
ocpbYVKh97KwhCP8DVA5QtKX5RBPATUC0JKf64RqlAT8v3fDlP2mGe/E9NCsuoVUi2OcdFvp9uNb
Aa1IxisEQkYy3TqtzVRoyG21FWKZZUi0CRItIFKpMi0NKNI2jWMkxbIVG2WBgipEikw6Lr3bEa+n
YcO3090IRFE76UaAegep0RoZ6O9O9PORMVQNMCERH8eGdMvb+4wQf3mz8/GfmhMFKpT8rJhKP6JH
JAopaD/Mf7jF0e79Cfa++H7vVLqFWxT1U/hhi8gA0pbKoC0Q9kUBbgIOCHa+UOzlOZXhBMBou+9J
61FNJhhFG26sgHz9gb8pLK5I47NhhQaS7KXuHE0SEIfdg4f48N01RYYlO+mJQ6f78cVumAo4MdhM
EBxE7h71NgRMSVJyRlV1A2zszCm4YbIQgJn4l7EANb1xoJqfRGe8N5wt9u6eUOYshQNy8h8UsP7I
nC9FdGPbSSBmKSJCCRiWbX/d+7JcD9sGon9mB3Rp/rIe6dRU+tyuxaCiGIPt5O2YMGeePf5Z6/k9
3gip5UwZGQ98CokgEDCxoBmQGGUf5ahjANgAZQsHoxTmnThJmHM7K2/t7u0XiomRd/a8sAFCZfuH
PvNSHKG9UlwaIWIr6aCGFt+mPflw2VF/PBTx/BR/tmnqaOERB+cjjWvf6PiIEhEk57a5xnQXxwYU
L6/BFT83zGNCGgFHWpWUhD3/2nfLLDoXv2NH7NmUWYbQhVO5AWZkIUiTxoDcE4VMoNEIJZgX0BcR
0oUEoC5KN+H3zYAuAgEWw3r63okJVUJoNsL6ZCQOAYVIpWgAP+jGFA+5A7LRAPSdvh/Ah9fcbbSV
TX01x+bl1rb/mt+djeorKmTkC2bcaOM2nf/Zj8sHVJTu/L7CIpq/wKr3rgyeAeghSRH1vBbh+OSt
dR67RQcnccF8PlutIca971A7iECgG+MYIGSDRRMgWMkPEZMhypv7+3F4YvSctb5mJgmZwoO+WRja
alho0yZbO0FrCmDo9yUhtEGTQoTUK3lsYG+c7ZTSNuglG+caJnNxuiNQDWqBJC4jI17yOTjmhAku
MB4Hg4MR6HBvprcEKVRfYDvsxoDJEUKODudBoQuA7FLIw8VvZJXZSr9e25qeDE6SWVBqeP6rvAi8
EjJmLJrC6iUvp6FggUvBOgU9iPHSADMWmUoQiICxjAKaDEgURaS7tpApZjsZq2oACKHdVDihFjgJ
iYAJTmjDojSvVOhHiJxm393knBIZGGNYVBSjN5Tkj+bF24UDptwlJpxMZZCEDopaNpqTBjZp1uXT
jKXFC6ileMOCMDCJShzJxE5yVwCNTSrVFIJoZMxMwKaigrSLjNYKBqKkYJcUUgjTKNCMSXVNGgcU
RUYwTRi3m3JCQs4ji5E1gz/t6roF6vTJFeWE4dkoV6BSlOjZMLR2szL40KkAIdRvNXUXEFNynAFu
lOWMQojBjBLMOu/PG6qlKg6bjTGETW1tsZtagk8Ml6u08DDthzvh4GbtvW0uhW9WSF5THDgZFm02
nPZTp6iHu7LkDzJOunB/815zQQYZpNSqRESlCLiKA8c8LH+aCImsBOQaGm2aOkmgfM5D+s4Mp/fq
qjQFFBqOuxbQNljVEKd7N62XkoVduuZ6KGMRD4WdCp2h8OixnHqbKmSoVkokDlqDIb0EPLPHaQ1R
syJzG1QBtIwVbREvN05zahOAjB5GOVrVcGnIijeDES69+OsTd2YSoDnGlLSl7ps1rScx228z8Wfp
fAnjv6TTwn3npDK+BOKDFjoZduQaQTjUWkGDJtzlBRPumqK2ZTL4aHJj8L/HdS8scLTHQJnNXNQw
bBQoqYqOlKZxrNWdMX2b9OfhuJsTyhNm2eYh5vSJ0yS4FD1MGpSnfOhTbBye5uYjAARjBOBkk1kv
EYgcIuXIKpGFMQYkspRI8ElwoheHeMK5iRyKhKAJHrCHEIuaQwCkeTNYZOcmNal+F2MhJIeaFEho
3h1jaLWt5BKKx5PeIiYTMWaJZg5YoMEYKcL9yUjJgbLZ0U6LbJ7DlBTN518LzGbwykoNg+N2NSoM
DJiabnblGVdqTPgA6euPEpoevVCZUOalWTUdSvT8yZFIHbeVdL5iMxPB114Nmt8kap0u01xdRZO4
7cXpQgprIXFgdNToDDOLNvly5zSGpaWDQMT5QP67cGSJy75go7iyvgGNx7nZikNg7AdENEin5v0f
W/lwPANTUk2WyUEIGDc5OiJ3ljlg2hSaGpEwPJU57Hzh4aiGybEOPW7H789ZwMxsnzg0p/26wOgd
SafEkMMMUMWbuVeelo8gTdM8CKOR5MXzYN2OESP/YYQ9N9/g9COhYuRGZ8pg4NFUGIYMAwEKRKhJ
AgMnMcmt9d/w++Mh1CD9aztqJ/dsdnVBmFWiVFFVay+QraSH/yKOnurNVgwbBwxV2qp74/HN+Rm8
vLOpGZrnBkSUAKEb12Z3Po7xB0ZOOLyW+brWUwlJXaGcRkZrzKRZSi4P+CKm7lHU12po4RrrRMFt
CQ8qf8Y3+9euaMwab2yzFS6klc1NiHNTGV3LPK/4NmBFl4iThiQGuHdtoSjvfVnJug4WVC2T1i7D
iOLZdk5cAhRpRUZDBpBM2IH/KgHhxD1PFOukTOiMCEoN49M6WFYo0aCQloaQqT60XX696J36bOOS
DCeINyaq7McMJwlDion35OLoFgbH4Q9Ok1iFNyWt5IBLzZMJYqlCuN/4/4XZ4/0f0R72jfBsIDUo
pD+XjAC/b4IEEAa1pJDzFPaK1uCpD6B9FXrg9ZnXTfROfGc2zy6A+NFBCRZE++gg0UgxIcedCdEv
8B77Cz46dE5zDeBxwiI0KyBWTwE7xwhCBPASlMlMQoVxCyeBOxYhxSJyjjRFGDlpykLYyY5GQY8G
zbErCBHWTCGC3icSVsVDjOJGyFWVKkJCVrACVDPGHNAahEE4VCxoUOGLmuZUhYDEUxSkixjFJC2q
1hRNJREusLRyceTmVkAmP+j5+od8KgVk7kWQwMViSS4qkw0RLDprHSuMmjQzgYATbRIlC0oRm2TH
5lR3TNRg0BEKoOJDIIO5GQgMoUEImUEwEUYYBUsj/qzWIdQyNWkLPMoTmJIdqxiP65jQw1K2qvzK
1rmttURWgkqqZtcuySorJY21GrFQtMhM6wMyMCGIAklKVFkFiQKSIcLexfmUQMeRjhDH0YHlgmKL
EV5QqYzLEbTSDFJAx3d/lOMSS7sD6mLwyS8r+5bv8Tn+6z/e/YW0v3AjE8QJ/ZnHVkAkmBGrjJmN
YxqDc+0TtSoTQkeVMH94iso7zwLCz7Sg7UOxTxoszpUjJxQh7zVO+BnwxaMkxkq0zYbOSsRHYl2V
SokszIOu1yzqvmvXlRIc0lMc9mcFwsUaD49hYF+uzieTKo1ISuqcGKlC0Mmoi0EhPw/2VOQD57IM
71Xv9okEgUsCSe2U8zqPz/r10MwMitjZ0biCnCAqe5zF9Oro0RA1GbDbRaajapom1pmsWLUWjZlG
1bKUYtVYhKNKVY1TUybY2LLVS0yylqSLRYtKWS2otTaTDFLu1020zbYqaCMpUqEyCDDIqmIJhDK0
kGKOAYFqHgHqntMCZTz+q5bLoqXKu5QQzEuBcscKoxJFPt/BvqohMCpQAOPkoVbCGArhxIYSoIYi
KBjRI0m4BnG45AXKqDDuiAW+xTAHF4/JK0tFKJoU6Z2lNEioPaUQJHBiOuI6D+tE9P+x6CYQJiSJ
0zEIkczmTSDslL/idO+xhbyrkmpDMxApN9Z/tlFxhQIhiCkiCyKa69H/f3eR5+GZaf+atDAekLH9
nqphZNYWFsdEH2dlklH9XsP5Qwh9M1dVRpRSIAHgBYcJR+0M/4sNBT1I8DtxuwXCvLAxkiaEKiU3
viSfgH6xRSBTj1KVtA8n5OsvwZ9s+oPKyUbYpS5IlGKumzjBC5NKCo43P/aAe8Q+8Izoo8X+N9nV
8TZicCnyI739P8F2++VN71ixKe2+2Aoz7GQCBkAJy03LCBNmXH79K7ZgecAbzwS5FCZINbjuAu7p
qmkViUB0hCabJzZVA206WVQTaBSAfzyGDsYBhiWRqNzrP6zuQ7CQhHxZeNLT/HMVhk2sYMkCGiFZ
S7TBbsa4igzD4YaCLo1LUVKW+fPNR3JrtybjpaVtOtLhrW2ttaiA1BEqPCSybbRnyu5rZSlLO7Vj
oWNYbyGSCjqRQiEMkMgYmLW2LVNNZNslkq1JVXcee1XmtXzK1eWsW0RZMbFYNrJrVnXW1yiVmwDt
mEQOjDC2mHMEYlYkMlQUiDK9KMjprcClNSZbGk1pKNYW8udR5IghKMkwZRRVRNJOkMmGKXgVkSbD
tIJEqMdaMcxiDTUglBxJAyToTjDKM/35SQoMhAImTobaVrLEFqjRqWnTsNYVTUq6lJYtZWMLATg5
FRDgckOAoIaFlCIgqykCQThjGKaDRxoDZkKLCEqrcRQsxAEfepBqZJkDKYpLByztABQGyCsqAGlE
xdlIVaHVRxpYUpiDlpRS1VukEWUACOIopSJValAlirCh/qAQOnbR32PX9zs/efNnkp9Di1jmU1pr
glYqrS1+PzyigMTCxR+Q+Sf7P38H28OENIeRCRfZJ1973+Hl6ihQ8yeJA+EQkYRLH0xkXCMqLbsB
9+KSDbkxvfv9xCFfJfD9ySfgwaZzkkUg9d6rhvZwR5xhN130m5/RyIgIQlE9kC7/X8tAY03RTYHc
gaDeIrpSv7m9wGC/1ccBcQCiEFoZmOp1orNOhLA+6OATiHi3wMjMCiqUxikIMmQjiheoszAtcgmm
KBdAzFNYKXLgN58Kg/4VJqPDuwyU/5F3dcHvCA3gHtAC8yAgZIuGYZD8Ie/Rj278XhHAooXkWEco
JJPWIomKiVGUGOa7UULULuLnir9ZHYB0gAgmEaNNIjlFIDeHmOY3BzDjHMDWA5hsR7AztkoMcYqV
tfHA3BF0N+YauAZNQQztCSESis8nk75bLH4ZjymOdt9Jooohn0JOdsNkkNeDDid9Bevjg+BNTgok
UuJS605YUb45ipw6IiSlRhDllYwVQRkGWLWljC0bakPDk9b5MQ18GIqShELDiDV1IPFSFSs3VPx+
UqCICYSLYMTCEMCQcVDpJ1ha6dta6iNfZojG/BXSNGvtGZSrKxphFmam0iMT1PWUxkMSEcTtw4Gr
uY4XUJgSi76xq8cutLDz0yPkzjgyZj3tpOMKkOzGdFIiQoIdXaFSiEREK08i2BrSi07GZZDx/4/7
epw7kDoPEgFELkqUwDDEEKKGCUX6U9w/lnSgbEgbiyMrI5VEicEvrIdAbDgYSCZCYiODJIRgnR71
A/BSAOQgb8oesmwOzsopiSof95BMFDQUiD98rEIPVAFd9wT9G+UlqJpZSzTVWZVZNMomgZKppWtR
ksYKUjW2lZpTZapVUzLYiRQD7aRB4wLCCBjmqUg6qr0AB4zcZmr8SwYmmkyTQeOssiZIW0IANgyU
lsAJWqoCFsFMy5XZ1105cjUptlXcTRjFYRgmMoeuDUVv/o0ojvrhVU1gByr3oPsZJlUF9kAj+aTJ
B/og2FToV44AuSg0gmSATKoGJJLUSofM4H3hIPL7pPhdAiX9dUDBoT+PrdTlQCodIMRkiJ/LwiJI
7fB8Iy4YH9qgONsSEa+l4UDY6ETURDjeTFE5IvIJxmT2MGhRGHXWL1JYoydux2rpOrS8o0pdOIZG
IIZD0cIpD/N/S3fy0t+GwL7PKjoF/XTRP4aKkkn8k/qgFgeAJ+jvUU95D3hiK/ZC/NCIGoFKIgAi
QBA+UAkRD2tgNWp7yAh8aqCP7vq2kyH7lNkPwHp0IQ/GiLbBFNbIg6akmZhRMWlGUvWWumqFWaEB
mglCYEbfYf1J6M3D+vfgQIBOmz9wQUMACj2InBxcWE5xE3o2kgSqQKfuj/8MnUhuk6bJOhjRDgI7
7HfAohUShhqhhVyinXm7JSi9EVIVB/OHlK0hBEz+E0i9cOtexpqqz8vVctLS910Ikm5uzbd3VYzd
NdZqFp3dkhEiryy1ArMhDJJRvlA/SwwcJECgKHbrigPb6oyIv2RgCD1gQ1iutFBIq370n5CD9RWF
Qseg/tniECARIoh4u0gHzUzQME5W+a9VstFnrzY8YfpC1QMSxNMfNcmkBMBipCZaU3pkOuiRBY6E
CPlA5CA0iulYc1rv5OPVA4m8mKMynAPAIK/rhSgBxTkXCGaKCBVWomkpCSmEfEQU2NAgDS7DCiOV
xiimQ0oq/M8e1D5QG7x3UB+k12JmAIQiAggohQ+glUR/A7591/L45fJ3CdxByrf5YADBl29Voc0k
NqHdeYwaJJ4xSTIkhI9aMGI59CGgSP9790cb3CNpORKU3NE06GuHJIEkgzXaz6p9X8x/s/uxwkjs
blHQtPiwYS0O42X1SmwUB3/RiZMewjYHRJQfMQu+g3nQwQSxhGA2GY6hUyRm8whLCI+z6w2P6yRD
6N8I7PtwMtPlF1vVxL/UaYy4PN3FZe48z67VMTimtNzWYS3paoophhJ6yfz4qoqndsfw+pJcTynW
ZMG+ezeSTEkcKGEIMRqkJFvHaIArzEIYV0JO0TsobjnIy+Qk+dfxmGjsmADMHcuv2H0OQj6HwUPm
iiDJBRYPvQ7PV50jf7OMmtnzn5r+qwrttxKsvQdQ5SiTgMOIAQzqD11bVmhx6x0dWxAtkK8ZZCtJ
YjdqAxmsOpDru877E7OB8HwxZkO0l1MJhNFhcLpMlspDDq7W3W397TZPxsoJmHpOzk9AAdzLcgVP
uzS0pUGjDA3GNTiCkcxBECgpfJGJTlSsC5szdqACNLAnoPMJIWHmeryXxsBkgUZCZzC0uZExW2WW
pMWqyYUqWIxsClVqpqBc2tWLUKCU2IFZgVMbamG6WFlOrDTk4coX2IdIJ1ZUFHjDM2MmAwhaK6Fw
mqsVWIQlaR4BFgNETKiaYEiyGGHy8sxmY9jMGd3NAdKPN+x7And60FpeFCtpgci1qS3JZNQ1KQJ7
kCgWPEPSYT83BK10qUedOoN4SMJ+6NCo7iQAe6ER6eCfzHjnPPcopSFd00obczsu0lHKuV3NdZ3R
FzF7vjtoodosjJaiSsIQPwBU4fcvwC8LA9mADPeeRugPRNJXdZBQTQFBHw7q8Hw7SZ9mh1oxciAw
gesG9aL13TNFUlxnF8W98Z09x+ZSvCIjMgykoBFoTmFMlFhVIRV2RgHDI1VIf8sHAnykxW423dKN
eNCbwK7ac5KKin4MjQ4HBI1qDV3AprrUPtiVnlRAVNqgWA8x1QEMHRDyKgANNUkFAbM8REEIBul3
h7vE1t/Ch/7FTinls9ofPYboTRQRfDEz7cDE1UBhqpnNny+RPd0wKxXIFiFpZX+N42VPBgrIQggs
I+maPTvlRFHcfvAFPmEQfmBBDf0fmv6bKsmpFofv7KqZlSP1+yhPpK9Nrpp3FdIfTMGIoDmGGuUD
gJikhghmBlpX7IGyBBkH5QnGH1xHEwRxbwsMlUz6REw/g1UoOG9omQJJuuIBkgH9ASHMIGmBpoSu
V/06DDR7zc5NI+mA4UwN/e5PYj98ovzEp7MCVVJqIYlXFY4Q+UCf0R/4wvgog7ofc9p8X7hq2ror
W/t/+GOclGrG+NbakyXumtAYhBEESyTFCWQUZMxoKTM1h+jCePmtIohx25sQxKDD2e10H88O3BkU
ncqvn9pn9J9YVfnsYPx521oPK+R7mPFkwKFTGcI0SA/FzIzMf5T92RCDLoGSXEkVfmnF6yvENFo/
Lq1s+iq50V7Su/LDNwOrNzDrObq3+Q15qtsV7iDBhl4iKRZ81hZuqcZWcR51tbDKIjLtivg8juUN
GcSNf7DBA6NOGllxMkoNURmRidUaGJmVj5rDFlaRL2p/2i2ahYUQOI0s8BuMgni3qEpBKolYVCKq
wRoUapLIwtLGbFR6oQOZNSwbui81eHEbWojgxi+XxtpcJ1U1r89a4zO5IXv1QHqIS8ihmhRqYnDg
aDjIM5UdTJdXzpVFQ0wtPM5LxYjUORFAojKhQJ94nYnsvldJUA9i97E8kzD7WLJvLq4LEBvMWfbi
9qWORDa1S2F7uhqFuMWrF2dhf+iIxwzssandEGOX5z0Tzh40sF0Sq8VJaLhDRnwYM2V/UkzABpwZ
LxkoxACiOiFJhR8kYQYgjEFx7pDV53qlcEER1x4lIMCACgYj1RMRgZ6snh0f/1TUVJb0VCLm6ACE
ArMoxVKKp5dpxLYffKP/T0LRK7rOO9DQhGU9Cvz1VYjFJr+llSlbUuUrebpq2sWy4tL2ZmW8XVE2
qsdf6c6BkTjOw8g7nhAwgco5GcYCJ2EQ3IKiWHrvDhCRAkhRg2XFt3ANL0OeEg2Yi4XEA8Rl3SV0
sqm/TF4CAwkNiLhCE25aKBhD5Hn6x/2kPU90AvzYVman6fV9stcCet3MuNY2aVLkJUqnWYYP2scM
dlIHgJ2B1AfIDlQrIFaVwLBvlwrPuurL0miNZx4CNm1Ty9hTrHuKiCDPE53Iw1+DWqmEFEQogOLs
kkrlCHVEFEyIwP8f+bDURwWFBu0Uz9OH+zboAu5bTjHmnQ6J+SIfDw0S0gmvydtfFyzZIGJJHWIH
/P/cO2zW5wOYGUcZNPUxHQSvtt8cyqrdqsPpDdxCFjbV1vZPyebGRMj/6fYH/Hl+9xJ2T5SCU+cR
pSkLQoH2Ywp7P3982sPHy8ZB2TJPI9YSGOT9akSMq1mor792Sz6NiRYSu7L2guxHyKDv9O7K1FKW
KmBCA8rimcrFIxdmjnDNYtzx0MKjsy5TjcK1NtqM+kzFIhsH0LfBJ0mpNY5yovAt0CrVYN4DhK64
FQYdTMs5GyxWKjOlhzOLnnEhcYThKJV4dYcYXSOniSn59OO5yzvH7I+qLrWPK45qEhJQ3SfCQ8uk
mF0W/wM/5YH/qunZaxAY+IgZnQdRnsPu9Ro4P0c2fY5EaKpL3lVZEUX0IA/WeKD4IG0/3b0jVugf
aHlSHnAA6zgvv9amhsOw1sHuBmJBghgAAKyhRhEx+8GAPGA7pcukOZhSgZAIFSMAnw7Nv5f8ccjj
m4Zn33tBB+ogfeUMYewiPQ5oi48uNegIemdnao7wD/AIqCVEWiIJu+hjvhw0M7w+PT3OBxLIaWU0
3EtQfFBQeOpKr77a39zp3GjP8U4HnsmO/1f8X+zrpWdte2blxI4qIFNZkmA7DYwTNaikgSN7gP5x
H/CB9+cok8jmbQx5UV/ZDHApFU1SN3yRH+HAm0zq6u/wqwbP7nODcSrZwOHC5UbMniwJdTtUTkq0
OFWWYQ+zwzpX5gaAvUmkoiXY92MK2lyohOHfP5L5pNfo30Jxxh8ve7ULJlz06hOAd87JZBcAi8d+
HP+WI1/v9fIucEY/NUkZ5RX5UzWIqCdE9XU9eoGbCUo64sMIz25z8jn1EBsb3dmR2IdjLUlvUAiC
YMgGZh690luhey9Mp2uRlFH4IRipJ5OkofGJcLVHX3cAfhxxkFQ6dT/Wsfp+QRyHOdEHH1iNniPt
pscnWBDBNwpomnnCksuZ5milmYJIh7ku+PbG8XondHyQQGD21r/BPzESfRM4n0Sm4VVFIFiEf7+9
6JyOU8HzFpPhh7WB19eHCBAMQdwLG0dJ956z29OMKp/t97sEkcmqJvFC/eL5TBaNaf2oo+lwWI+A
gNfglX+WnxPzynQsOrVOQTObbdP7M9wTkA9TC/1y0eUIX3ezIxX6JBqFaa+TyIwIw3E7KH2i9JzX
8mKKS4SM5FbWXVvTXNklyo1WEhWAoZxtWye3f+1yLqpFT3XtVzE++iCHOIxH+aI3gtTCAwM98M0E
/i9pamo7oIF3XKv0cDSL7QAdFVNDqAgQCbUEn1VQfSG5qDyZAy4N8dTax6KR7LxJQH8YvvNRoHRk
j1I+ruq06/fQG8UAd887ajSTWggOiNqOF7M90/6VYZiRc6PbHERE1rAmbHLF9v2l78yWhgxKwgzF
oyBaGHTLzydgFYIN3YARpPJUKjt50SRSg4WimB2NR4vMI0ZtuYqVnNVypmeNRlEr7G27Lqm6hUCo
z3boXexGbhuhljFR+/zosUiiwopHlBYHD9e/wgcCqIfJ3ZDSGfD9H+ud48+sc/RXKKLN0xrE5xmK
tRJBUJQjGHF6vl8PQ9CSEkV/Zf3xTu2Y7KUieBfrgtKWHLfF/HC2ZOvhfWy811ZnV7f7qr+Dj6tk
rGXURt3Y9I2e9MJP/EAvQ9SAUkVIESxMoIn1+L9MfOCPpAkwnzwof5JUHxYpEkhQdQgOQLEICp3w
PjCIRAZI7QLmB9mGkFb1R/kErXqtf5Yn0dWrY4r+c0x4Q4ByCgODYgV1QoRXFFDOWrxvOUFVMeiQ
RJjuqoNeZA/YrsJSFEBhWAcxaj40fbc1Cpg0NJClDEjcgEzWy4SJCd7vMFhlRPZvuTII+p0R/BRQ
K4h+9RiMWFfocWsjLltsxJaNyBjsB5pzDiBnpQfXZzm+ADRgX5QsdKQGeGKAam8zBNyHeTTCh2Np
rkJCg4zAj2Rh/Rn+DrnEdE8cI7AmRC+4qwYKgqqfe0ts/KmH9gnuu/sclhNDD81YLUxE/FAuB6dN
LMQaiqxrRGD4tHulnl0bSQGwSgpLWkllPvcTuJNigg0Q+LHOznww3NUsVICFBd2KY9wUJohMIQGI
Ak1wEZoBWBhEWkHWi7v8OBoXeTkKavw1odVfkuTjPAk+jRzzsIQkIlC0KEQSwAnpCZFDkoZ/csHU
gUQkw7MoB0CCYUjjsbyYDtINBZKChXsYZDSMDphxCSnKcEgOdcbDuEmqZDeKApaTTtJndbwIbjEU
UsCLxLvUigbcGCeRIZZGQBiHHm7c6l+abGEiwZD4dVltEGW+5A6TipDjVE5aKDFNSqqWHKbDLbay
sBQtLJy0Tn/AWaDEFgqdUsSyhXnRNisgEqxCvHnCxHFphsiAEQW6iCwBzS0S22hYKoIijkaUAIzU
rbGspSSQV+d2rJyljEBYPLeQbWsET1YG1g9aTTQWIwkgwkgsYiUKlBQGmb0cLvAw2IxgiipIQGmq
ARgO7ujl0dgJwSQfRF1CyNm0QrF/hszxric4vWWOvYKfUIYDzGBeNdQTWQmI9FCnJSpbHpd4hEKM
ZKoDFlFxc4c5XAqqEOinpp6wB5fDBQ9pqLAB5YBlQC5KJQoZPSDW5BgJSiOzQYd350DepSs6T0Qo
kdVtWrzriYOOqlqC67UGlKl5S/V5c+FOg6CIH3tZgHqsgrpSyLtOrIMhN6IQhcXJTDWjFX75hoMU
rgEoxwru24nRD39ToBYexhO/CHFUW4owU1bmiaHdwaWMKgNpUZVJKLGME5ZvqicVZOPCaqqUsVtr
QspZSxMmcuMvw5tr3473KfO3RJmUEVYOpbiIMzIQUgYsTQ9sYm45BIQUWIcbRPL7yRBmJME3EjmG
dwYFkWJ6e66HViQorb4RwZO5eRKNwiTlB4ujLpKJQkI8zJxQecQ6VsimaWU7oRcMc1gtKIIeMFKj
obFJczVMJwObRbkaA2Eggz3WWL0KoRDjFTy0wBaJRVEqUi8k0v1wPZNZoBCR559NUVQd0ojQUoWY
GEFIOIiAcIDIjFWCD+DJ085FLYc7DPgV0FBDjskw7h0Re7e8uKHZBtccSEzWGNwyGoWWQobqmZwh
bZ4EDJkmaOQiMBfex1DwhQ4UigKBDyRyOkhkjbGOBTKOJBSmQ0oKVgVJ4SBmcGpy2Tg+GHIkDiOS
GBIUgZDkmShEsQm0W+3LTw1ky2kp0BZk64dYLyyBCshD7HYGcX4YlpzNKHBiIB4nkeUejrclNwlP
IZANIw44gL1E0qmRECkyIHVdkwcSRl0GI7koaGA1KMbDGKe5XHEISTvAxF7iVSpNJjKVBBQBSosy
1ImtQUVflVzGK6s2rtEK2kCagW4oMN7Qe2xMFh+QwezzTvlyCij9f1SjFdaqLSi2j/30p1aF2OaD
30cI8ZadUzPy+Mbrrxx1qDayAjf2Ys1yEgAvyGlFhoZhidavQ/G6EOQJ7SfPBeeoAm8+1fmOUm7U
hZ/Fh4+n7zXEWY+HXGj6IzPkWbRx5fOWa+dEf9kR7AfLyosPXGvY5lsHEkCo/75crz0jiPzViCQz
RQtXF+UmCk2S+tA/xRMGlk4HCdEwxWcoBsSO6LSZJ0orCdtWyHbf3tToQko2WqCq6uXh9PL9cOQv
uOnI/lqAKPo2NoiJpE48U0vkPmAKUAAw0ITJTMHUMN0dzvGZrLkwCvX35z81FIfiYJXGKhQBqxKv
sPEynRuob9+nCioFSKbFyMhyBL7NnWkpdTXbzHn6eU6ac61HnCubWaxS6o2AkdKDqECLKlVxXmCB
K6lhQb+mDTnMRzbR6W5kZNWcHtzzoUthUzUyquLIqmve8rwMEnDqqVKwLJRkKDIyWQ6PYJAH+8Ly
DQ5R3sUBOB1gNFQkiISIfdxdn4CvPGwDB81hLBO4f46Ac5KUMH4LQ7+BX1alUDBfuECQPRC/7KGy
M3NA6Dlc51ODhrw+34ARWY3mGGZm3RzQdQKCX3EMCD+WH76Kj6Tl3DqCeBleKQr+LCZQx+A94J/Z
6/V5tCX8AsnwBJFGhMHG0tvTQ9Ae/cflRnwCpBEKzzK1tSiR8/e7pYm6YCQObosDCMDE0JOEjp4z
iblTmodvQmEHpLRQscmnFwuyG2Xtg1xgGFJ3n6YEYwE/tKpZNq2zSsptMFVQzUzIyDKQsJIBKkwB
CMTIMAREAQSQLIBAHFufkeMBxfW701eCPDWRaRRmDi01t63uTD/jhA/kqQLBzeNC0f35jH6zvwif
7cH1T1+gikaIGJM3+IFBgPcUswfC/A8a8dU77aeBVLHz+c42hgeJ5rlg0Jg48L+bNQJ3sCexT2kQ
PbAbooRDUiJhJt7lH4Ugd3Q7aAf/GSbgwDEAes/PDyp7JRRRaYd5LevJ4SFkR9lhtGzhSzem/soI
U6mHBRDhXE/FVEjsRNAibec5caS/PyXbbcslLRAmsS9KomCMogVEVRnQngfOzqZ8/Lwd3HSbrC+x
KWwOSXrby66yFTk87ZheY/a1UEqGNpkNU3eJug5msMxh5p5rTiUpSjhQMFVBgm8VG8e+biG7k6HC
ajtmGVvbe4U3NrgTK2mwbm+UkxmgdSyjHAwdw3DhaByngPHBBVpK0Y7uaMpQCtzeEdhQUolea0Sz
ox0cOg3W7D1HogNxisUHoCztOjzYxSnlGdePOmjBWeavE6OcRXyOBurPAxFd3o07JpcxlDXAW51X
81aJuMXS0N9C6SwqW6iwuYeTAtL/OzhOumboSbU7E6bJPKKZoRFdoAMHbLjzFEE0uecYswRBlREM
SUiDsacDYIDwVdblpzvzTNbxXkzMXRny6jCDE4UWihrSmJYwvM0KJYXLaIsxTkHZIi1AJBDP6v68
qTj/CqgkSDIJDTjgwKhYQPt1oyEOHi0hYwnb63zXzJ+LddFLKz+HwccCCAgok4OJL6tg+rQphAjK
k0oi7iyObAREr7TBMKtGSUvG8YkeWIKmOUAw+mhN4JkFcVGBypsfz/trxmH+OB9MMExuk1ohMCEZ
Ij937LyAbhgNJ+6x5dcH6xKFfKaTQLb8MmHyx3eng6aBnPuQs80FR/Dt57272MIboWQbQnE09vxP
QTkaz7Wdq+l2Qh0hAmGwAPai9wnm+YhiZAROpNiB0RuAaIQgE+lXuDTOAykXslBO+7HBVJg+KDpH
+SHwJ/m2MliakO1KTBhMEZExdsfxqZiUm0qEp9GhyvYYom3gSS/I6ACKey+uo/FPsP7ac3uyJfdj
+jBk0GfD3Gj/hlc7KvRr8DQPQIEhICdUV8s0CyA/bsC2k0CDSKGoVbMxlVwzDaFA23XBN0IViFWR
kCIoGjeY2CcHUBtAupUiRDRJSOJKUiiGSFAZCbkjGsXJTQaxCtSkQG8htOzAVsw5IsiDpMXRdFAL
cuK/Qa4HPEGdPp67LmkYR/n9CK+3rDBHy1zPyZ+TKGqIe8x51IX+GKfeuIHYlAiUfYSgRIImyjKi
BCAH7qABrB+MNFAVwEZ8FPtWzoc4FAhTlmYTAuWdFRQ6wykEOCtkRoxZ2X7/hVP57xGD9Q+IzucI
aj2kZ9uIwjkSe7xX2BH6LQagG15kcULNWorUnaMjAi9xDUt2fqpu7sMDAgZIUZSS+Bx8VpfGKPM4
hDtb5dLlft89GmunDz0bUdiDsuEmr8mkNDH1S0ZTTBM2wecGRj9z23/BM9NQSqh12W1VjIVYYfwT
1AeMUVD0iwFT0a+1A6e3BDT6z9gq/4REQkVEpFCploQPenTnSPBKqfr1F3KJ7zcFS/1QIUKCp4KO
51SIqpOqSxAjHfYfIrSz8Us+b5hMMbV/pt/HeHW97w2DXIONoNClYxkLgysrDCPfVOhGZ2P8H5Me
fDoe4WM4Y1ChWFw3SUmDx5b+NHpsbS39l0V/Q0eiQ6rHCd40URhvG8Pg5VV6EqraUXrXeV5r5QiU
Vera1XgVHGqqjOBCWqopzmXLwC2OGihR8tT4zr6CJ1u4bDo+4clMDTt1fnovpsfPU2l3VSGdkEeZ
6rKchKhLSKpUgD0P8HtQEU2IRWYQRJhRYWFDiQckhkFUtjwilkhoKiUnJBwgUpmpCA2AX7FEBwJz
OWB5BG4fg1ofQHlAX9MABbEiiPPKCL84gfqiIuFVPaHxdgMUzlPZzPjIekofl2ewwMT7FVQ8fyJM
VRSEjEwHAhtFTriE5CeJj4nkNUw0UQbJ2KryPYZh3tcvR1/L6DAqAsgJIAKJIIghGIc/TxQ8Q2Mo
miU0pGCiLqP8+BiZK4SiJEwSAacCUdc0Zy6SDstcM0d12FQzSlJkLRrRUy3b67y2bW1hJRUYvbtz
CS2r8LQKmYREQLxmSpkgjTSyD39OLt1rEHQP1ukBI1PnA88bQaXemLsUhkhKEh0aoipxwHxXgdR2
Cfx9bvFuAVLYMrIg5tjChbVlpVbQjKUpUuS2uIpiNqXbtq5CWjKV+khgmhYmhYRh44GMAzkzis0k
BEPJGawwwwFHD1AHxU0gayAYKClcwAjWyA6KIgnH4us81twA5o4+6pf6cOonUDh4zbJUCezBk+Hd
QUm5GS6D6m1htGKGMxEJY7oeUoimnVwezicCl4SMP1wUKD0ua0mx5bFuUOAe/19EyaS42SlNgoxP
ewpwHkY0yYACUGBWqlFlY7YcTUaWTTrAWJFNTlkgxLMFMSSwBiGZWEqWlktsrJBYBLVTC1gLjGoq
koWmxwRwWEkYIJcfCu/Y6FvjsHij8OH5dAxE4/dLeeLxi7MMmoR/J/V99PWDCCyXAhWta/01WnU4
iCIOT+ZTDpunzjJrR2j0O9UzXQvh5bZLK3VZLdr1QIBf+K9yDPIsjHIeVqrnO2oU/3QttjVRqrJz
64eGg5O5hIav6XNMiG12fFeZ6mW+1F+p6bJjx1jgs+k1cnRm7kXsW28B/+f51/zw/KpCFo5tjfZ/
kv/E4gNSy+AXlGOjPXzhYVYlu2/7LK0N08AlqKvjdzDdzITE00Wu6hxO6Edjw0eN2UIRWu+cAxUs
42HPFpsKb0tt0xaM+AoZ9bv0Fhncx5K2SeRcKXU78Xjl11qrz7HoPURi1qPJrSS1M5Y26cak9Gdm
H8GxfbmXFzYqsGBVeEUUH67B5nm3Mh89bs6tiXsLypkJyOhlOkUYO2wwI5qkqYRsK4uCnSkQ2R74
AwWDTabMXw0yxO68llfYPhpOme+pS4pZhdJQ4wVxDVr40tYLNY10bwTsnUVWRfZGdM2ayg12hCJn
QzWu1rwxZsyl8Y9cu+8dsZ444NBcVUnWtb2dvjHD041xweBRfpLilnOJFHdP6voka19BTZ9EHiCY
p3aJaba397JxdSHBbcI8a4RnjDVyiWK0xuxiYX4yuiAA0uhVCu11rXzqaC9GZi4giBWI+t+79sVF
/C2u9cSOKe7Bozcy0yhQ0Y8L772mE7YwFHrg5s8VbFo3NCpiTzEzK8uvGVFMvtmLBU815gdIVOO5
V6Ysgrkz3JwdtW/jjN0aqXWthW+6zVBnKO+sqEFpY6N4KoaKFCPCc769AAifJw7gx27/Nz6retTu
TbJ4LURGVHYtRYoOOqoy2xaO2RDHN2ABqd77E1hfLaGco8nDuXsTJYSSNx6cOKBXZifDF6+fc3vK
qlmwgN8+Ob69PyYeDy2uvDuNceW28jiYKZplcQTSnkADTqiQ6axHVC7lGykoDibNEWYYaxRhsHcS
DjhQV6lI1QyfXtjI6YUpJEgZmHhg/CL3Y0OEixbisrCmHAhTR7CQ/BUbbQk+NcWp6swJ1MwhJB3y
Phv6/nPtzqeV76nXprqe25R09rjFpMWVjHYn1fxvTV6T42n57OO3ro9y3ryelss7OhSsp0UlSesv
kOlNyd7MBjbxfMUZ6aXSMUxWjaz//ZyMxmLIMnaX/RaK8QYW3dMHfhXqVkbqtcAPqI/QQFwRLns2
xxwnogB1z1DEcPiz04kIKOmZvEW6wTkJp1NaQo8IJucJm6F9tkGsbcKGzmH5xjEa6NxwLXGIdXQk
UlE8RaJoWNjAtG4FFpM1z23772VFPLEsBO5ybOVZMHcmdLMK0uUEflopCjpJEZGcTyIiUkCUFkHD
JBCUJRtasNIQa8XzrllFfJ5ylFHfvto+L+qGFa7bR1aSfWRpze1d+VmNZadVo6n2+Hs8675XtmfN
H5Q8poaahdmWuKg37HF0m3mr5mo1eJiUJwYmUbRjDjCC20koTZh7koSXbsT602m8HK7Qg5Ilyfqm
Re35fBSyfRBOJk+6JpPGxs1UlhOGrfJF0FLzYEYiUWts3F1YTn8WX8vhio58xVQ0eLW3MJCJ1M/T
8/hYRgvHrRwQlIRwtOKdQeMhRL4GgC40vTZkb2s3NaQauDs22tGspDcq3uSVUUFIVCmlrT0KlChG
8vjLBJBoMjRS0ZS9sSYv3cdg2gURxvxk9r3fV1FK3qX87k97fXft3R629uEnXnznziheeuuKgoZM
SjMc83Mc8smcTmlG8P/V4HntO+L6QfNcorGZxbvTk6i801UjOMP0ruGGOBrgvhHTxeFp4hMmuynG
HFzWUfYJI03t1EiktUH1qL2wyU7XVHwKSnJtezI4iRXbQe9RrWFBKbtFr5Gt6UU3zd98CEO8NnNR
3nljleSPLGOZN/HQzxpdjZ6eVmkeEd/LVhV4VMrxAzN0d0SAMAmga+fRK4KilZk7AG7dKADXT50w
bZMGlKlsyYQzKR3bbd9+hdqC3W6Tjy4XsGLay/hEBbMc7/lm+5r8IsR2vlskG27CTNeQwzLTggST
bgwaN94lO+GBjecMJLjurHUbxaXNEnMghtZqHMiYjbB1A165QTsYFAneVfo5RAlAG1pTyy/nWIjX
l8Vs7eLs7FnxOCyqYtc2UKVNzDKVIDNIs2yq16k4tI0YKm98Xe6+Ze9euuvwlkADYtZpboa6Jcg1
6Deh3gHXAifx3rWo5iJdLMh1udWXnNpNsbnAyxMV/Z+LH7OdH8lsR4ksGHlDzovE0nC/12zZE0S2
VLbeBcT/V5xNwNQmxNxDjEMSUlJROA0NUqUoJexnmAnDkQllmDb6+GMJexp0+adcrLES/ggeOBMJ
Ecx3jEhQNohjgSbTTRog3mihUQv7MVPWMVCM1ZFISFI8F04RaZCSbaS60WcawLaEC2UBjKr0EvQl
KlVJXUuBlEYoUQQ4Yjwyzs40pThIw+xyVSLyxl9d6vfSR9e5yKKY8e75z7v279y3D6tvF4w3wsg6
7ucKVfhOFVyhbA4qDQRgil1a3eucl+PWFt+LNYwoPs8lzyYUJCQvMk95dwgKUHrTX1IJSYo+VfuZ
e5O6bsyGA4PD0qtDptkwvs5N+CrohvHJ9UBidjE2WDqazfM+vH753NIdsKqgkxQ79Do8iVQHBXgM
wcQnBhcF5ppgwBiKB9NABgOB8p81a+Fwtl78kJWx20Kqfl2Tzi2uU6xYNyCijupAZrN01ONjikBx
Yqs/3KBoIPDeVA15cOAJ6N/6RDvkkH0N0GOEqeeGu53T8ptC52aatuLEiBFE0dCKKeyFxWJUm+e3
YZyzMDJZJ7oYXu7heZ9jg8Htaogk7x4CQ8ssLs7YdF4PZnW8UlDHM1rKjoirLqlaqMakKplN5LSy
WVlI346o11j0nEH133KKtT3J533+nOt9qos77rlypOUobJl8Yp8Kwyk9lco52xm8bq6QxtKWz2nw
vbo6N11TWqfp2VmfDhnWM447c53xPLDFnApxly8ria56Eme3bFql5R2Rxuc6Wpx5WNvGK6p7R5Vw
mjfLPCPpwTJHnei4zNo5rSr530KFHMcExobhUxpZox88fqN9esTPvZEo4rrole2I84dQ8cfK5pU+
ZUzEihGSr2kkJQoj0Nwwc1Si9i1Nb1FCjbl1rAiLIK7BkkkplQzEVhGCgRzpCBOw6ymluv3AZHUw
24qxtMWIEAfcbGKh7ng9gagaApKUhcDMiuDbYiDZB8BgM81evqIqJlgiKyDqcCzReJBA6dlEqc0o
QO8e9To6Cv5jz/NI8ROsDsAX5UMA/KGhaSDB9yL6k8oihKCQCVmQmWEaGoUiQEJU/4wUCOLI0QEJ
KIGFYuxwOONWH8vw9T9iZ5dZJqTrXhC/InWazhdpsfEfYUBg+rOWKdj6JRCfFVkC7tvXDqHJ5hFb
PlX+PodmpJM1VU1RdU1V94w+eLPmk/xbpe1bZFrajWi2tt9ZA00JOPIAB7liJ24MBF3jSTI5/tLd
oPzgAfJFX3ej07e4yrpmZxJUEulslaUoAZJ0haPTpwSSpAqSEKL0vPZunBztxYENQAm7s3dNjsJI
ak7L1i4QxZWPrZpCgMZzp1YlJERcKCKvNtXDQVJtIdjJPLHQaHEoiIZAAq2hKwLa2kAyoZhAVQES
Btap0AZBUGBiFJZIigOuE8PWYiKdEIG02MIDraUqpQlCTuS42MpIE6Q0rgbNUEJ7OnYpP0+R3MDx
lUOCJA2lK6XRLuyZr7WjFuMEomCBYO3s4tXkZOCGKHDzBPolTjDLqh3yjzmtS5ac0nAEmrtDcsoy
4MbZIBJI5xjATxoP6b/qs/l/u1t2KOEk6Ox/rgKLYeNmzueMUaPCwgkyc1Ia4oX2I+FJtIjAdJ/k
sD/MzdQKTpTy9NTnAs3OcmU5im80o8E6VgLmTMrFOlnp+7zjOukmWZoqZlNQscrPVJRjuYJTysNO
TkJCgZJCUIbgWmFRNP3e3ti7MSlA+z/t8OKlN1+v9sojL2KQNbd5crfI7ugoAmYghqbElbHB3wT9
OpQazJB7yPfu7mmG7p1UsvqwnBJzZucklIEIUhP87raqWLhOqTOajgyyoR4Ef4tVMYRj7P96sEHS
WEpy2+HUnPYqIMiDtrF4tInDHZ2+2fnVmCAPrPrmCI/L95NfqQnaM/vU616Qa0AIu22i3UnUPgP5
34J6flgmWlhcDqR4nPocF6tz7NwyjWHU0b3mfh8d+dbnVaiKO02GeyOxrnM5k56Z8sP74/H1B7Y9
gmw95SF9FiJdST8DRNOPhef88nqkI4MXI+X7jPp5H5X5F85PxkNH4sJEhI/j6hRdQ6508/nuJ7A7
rpUo+5koqmiZUy6CIkOPHlVHjdWBL8qKzz+81E2YqZAHw78U9kEQDsRPpjycpsagpyj0Pzqh+6NM
B+zTo1ECRKByS5DDKMPa0cAzyqqWzI+j54AlZAPlnGMChCSD/ugPPoZ9XAV/M5ExddhNU6zxMEi+
s8Gj8iO5YakYgBKEeYDcPBwZdMorzMcwCEYOPJ7nuAV5+2TSjrOPbeDAUBBP5psPjaP5jDRBEQym
5umJAm8tAOQa9dB5SZ5kekZru9Wy+uH5uONSw/BDrDBq744r0gB3xLelc9UcaZKeK86DqGH0cJFh
Yv89CqHn8+A3oQoe7BQlV+9vwtNCI6G8pkZ0IZJHh8ECwuAHDYfdi7QBGl/Dg0w6mZa40TTbJfLC
f8CeaZ9pygzRY6GPGUZTIGRmJ5vxpoXk9su6pKYRIGk9WCjESQCTktQBA7aNIGEjmnmezG8QnlTr
OyLq+Vl/9OPhH4AV8JYUVR8sLkJZVZtNmtXDo3VaaRpQPOAxRSpRJD1hZSTBDKljoEkB0g2Ccez7
Tbln9b1D7mTq3r/tsrkKqTonQcN+LVfjFOZmGoCmxkGJltll3cWCtk/mZdSaUyEwiP4+ti/wehzk
fOhWKJ1XzwscFkktJwyZswaUJoenInKD3KZGQCRd+cqvPoMn3IM1zWFGQH1pUV3e3rSFaUXRV9hE
x65tSPMyTDG02wlJPw3CwhsC9WChRWiCdWYKMRLTmxjxUcmGjP7USURRifhejSk8t4LufNSm0Ma2
IPX+Zk7YZ2MrjZKnDifI2vpYZksPja+tqzy/yWY1LbLERgoh+uz+Vz+5+p4IQtacSZhCNlbYh/Og
ktzmuA7IksNUNkpF/bnsx5aBSkJsayv0x+QMvUSMlFetG1rlIbNiFZ+jGAwFpWMLA+qIBYiCanDA
4cTaGJD7whTIYKjeUMJQ/VKmv35QnWVD9FtI5UvBKNKAbZiiUKUxFD4/rAZ/gTFADWpTSmRNsM1W
tctbdKAACRhUFkkQACUnk2E7iLInHo17IaJrEB+c2+f61VHQK8T9Jrs3LcNB9k+40ncd6fjh+vWb
HhPcThFBJl6Af0dvRi0AtSE0+ufbALEIP7P59/oOB+lALHNCu4uxAjEeRAbL6EC/JYP5plyUeiH6
oip8ESK+CeaV2FUgA+T3xqffATixfLCVIXVx/LZ+T3mLAk9tG8P5DB+DsxW9keCJv7pe8Nsb4D75
mVqKmlpiT//LIpIvxMyj9GfvR9nJ1ZFTwn7b2jyQwW0BEhA5zUBU7Tkpy8lfp8Pj8uVQjeEEIg7+
+J9OWYUIsuwzPzdnJF7MpnQPNvmAifA1inhnReo8QeHG3k5oY8fOKRQpo0nXH2gvrIp+YhxIAojD
PtgQB8z+/ocYQnbkNokAP6N7YEfU4u6IlQqUEyKl5c/jRHUPQE08vB+L0BDRo9+sBQtm0lGCkPSC
FT9y9aDLQGKIoQCCvdSVelkvRxjpkqog5qQSXn2SBALx5KukC1zDpzOB3TaECqZm76O+kpACdChe
gAmQAnM87iKrSRbYb5vvmbhuJvTATUxBBpdSUVa5ttxLVlMlbVNFPFI0tlWWFlSrLO2AtNZ7UE9p
QRJ9vvaHDuhUqB0mZWCzWj2XDoSTy8qBNvouTuQ/ZjREPD4FiFAphQifTY4gipP3BcEO0uJSgxRW
FWlDphFwBhwHMKADKgqISMVlU0HOZwJURakEzwDEUwU2HFAcWVCZQggQYDNiaCfyGxszgSS9PM0O
5wEcGMcBsBiDQBBIkBKKUtLTQTJskYwI8ByC4P+2DNB3odmwr+WLYoEjxxeA7i7+rujUsjOqkwUd
hvvx/P6QOb+TxPXG6u+MTNCFZg/u3YrXuTkel/T9DtWVp9iixn1u0LRJkg7h3/hfwaJNfEYzZuxJ
OCSD7HAxime7GHBSXR3v4ytfpB/ldBges1MU3BrlaWb/Pjokc6dR80ekKCIgpwV6oKIG2KlFArAJ
P1fxX6f4t7X3y5sZ33eEXzCfJCqrQJQIoYHicZ4l0AuOmtSYQLxTxjs/zdvsAlCElDuZ2LjMg63I
IJ0rGkKDyjLou/y2/bPv49j/4/xx/do9NRkkHSiGtzuzj+ss/qEbyztjbWj3D5VBV+50JAy+uoNC
EjJgxd2H6+nBvnDUgfwfLv8Z9gA/kE48aFpRfhOhAdYDLEflAnziu7HwPmFkH8SKcADQX62iTNUg
mDNffw0aAYokUjJwMgfVDTzO7ftA/FF3PT6Xvkn3/2YeUQRfyR9oN/guOwnp6lo2IxgHQaMxYlPa
bx7Td/5lMJL8CdBpchi+LxHuJrZRWihy13biDaRnirfoMiF3cpw0xHxTrPv5uTj1Bh/OnTy9Djap
5EAJcSfnhqqpKQ0vuEAGzzsiI+cQGiB+bzow/3nfmvJ8aU/pLfkk6bIY9/Y/4pN31W9JSeAq4p3h
RLv+nfWVosw1TOttsVLaW22qu1VZhDGtLdHfSczDhwTWa6iLJ81krwlBhYPk2TESCKQbRf67NGa9
rSDqQC7RRiahzqnHkWhRBRiD+ryt8dE5eKquBp9qBU7S8jrk1JSM0FLUNo1bAjbpcZpS2gZwbU0d
djtd1672e9wAB294TeO6bd3SWb2d9M/nfyPa+V9oAEQGJ0326bWbd4AACe3qSBq4poWmQ1p6azcP
TA4y5qqnazgAB05ziBznfNLWigugyBtbbkzWBhcSYJ+Fnr2VpQ4CHdAnwSZGHik5lIywoVgxswxr
ZgFgBW11cKptnGuaQoKZiDK+VCc4cqqqs001A2S1qBSjBLJsNNpChGARDdnbdld3uAAADrcp7UeQ
R6jygnwgMioEgSMgIX57+F3j4fEcD24O4gPa392MfoxxqJXx4LVJ3S/YU/x/mtPu86TYiSLtPrBC
fP9JXzEOb+XvUD9jcOtlvrEyCnWY1SPw3DtDmbPeIEPdsOBBEye0Lh/gB7ImpAOxDZ0PeHmYJwqf
waFzslyaw8qM+eqGfYyfcEE1Lv272nMRPtYOG8cMD+fNacckDVRNVwnMGtGEZ9qJWTxsBmZoHj3O
BtQF/2fcQcq+9QF77mUlwobeAUYbKQYX6lHRVyGJ1l1Kj65Q58ujr6xYkIfj8fzdsYbCMVPshkqX
OfqGZmYdgBrbfTXa3jv5aUIlo7WemL3+WAbwgUG6o0YtpK2NRak20Y0aiK2KTVaKqLG2wbSRpP/p
/c7/B47/HkswNXXc89dhSnaChyLtTRI0UKJn5RiBl3DHmRh92XcFAxDC7TZqUWoyhllpinbDZsKy
VDO/HfbIiWg2wcGp5hwZip8ZNyNIY1iK34Nc2aJbSY0xpktssakRGvi3IoEX2XLm6d1xkbFixSEF
FizVFShKy2krvnM+SwRZMkyvTDskQE2cnqVLBp242iyHqxqY5YmsGcWAcDRCXU1pNyNkmU4xK2OX
RuMXjNDEV2hH1S5bQqeDWR6iNh3l9gahJ4deiouvLscu/ItAcHbEQ5lKQKWlTaFIkQMhEIN+7N9H
OcGc9tHBxHR044qhEAUg0RCIYZipDFLISAZKoYSWZQqYQ4MGSCwQMVIzKMYtiZJMAYqJgmZqSwzM
tXaUKwExg4I8RohBXodHUWbgWREi9K3yYmxDjeoGoFFImHWQ31JAHkT9rioyfX1iYRepg2jIoLjS
56KyZ7gj2hVXfixx4SBR994+9OAoutUE9W22CPEkpR0mCSnE3sURZUbPGqmoz5qlQgvrWYjm7IJ4
9+bzvayNdTcShdQuOC+XtyGcQzM1HajdYBEstBKI4VVNKFreJhqpwoCYqSeIjLiLbGYHBVNwBWVG
lRKzFVGKKRmoeGbOKDO+NZZRxJKO3tBO40+IxHoIUOIuLMiN1S46uiAA5Lch1yPnV45UReFzF683
iuuHU6YELRbhREQTw5xK10ax31uepSCUzEHhm1D3bgvehqApTbjZdFBZ1w/qefO864tNnlpXOrwu
BGjNGOJUjkYaqEAIThqGiDbmEp7h0Qclmc5MpRi5wioSSGWjHBYBa0mjgXMnBE72+EQVAnbjRULU
4VYiAUDa5DJBxecYgGoqzF83GCryjgERzNbK20jXG9rQ1t2Kl1DhEAM7F3Jkuh2OJwEvDIKRZCMm
jpw4Zvi4Bl7hrLntT6zGYaiOUPRWgauwys7lKu55U4DjMiGhUaWqZWAKihRBky7GQ+uL7UlkrlSS
CMxrMEBMWKSCFCggBFJyfCtG8cC08meHoysCpErPKvi8SQ7gwFHE1byEIJHQyDUQBhC0Fge3He6M
cYuCdMp5PniFh7Bh47qyyngYxaBiHAsVwXuWl8NHTYcgakBQIQoB8NafhGlHNcR47cYgISPZRrWj
scT5xXIAEcKXqJWEiVxEQdxRAWZMpWNVKux4guSr1mzUGwTvzwI64GOeONOKGYut2sB47WgTbNys
oUG21uWO/O0vhCL2zNkF8qsHDLggRAUjhRUKCF336LG2ADRpeNORwy6E14FiyW7xGiGAAYW+x8h6
FOw5GbR8mVWGBh3dmRERFGXEBzUMb3lg7qS0aehdVDDDSZCQtDTQBErjGZ37i9EVN+xxgbdhimAI
QMIbbaRV5TBrgqjldkHXjYtwlPVVkIaVCVyawQigyMKVYhAlwgGu2+w6g6EZAGGZfbuvIq5tzYq6
lUs1spT6TIgvq3jAxEuACHoGipJSbcmtQGnFEDW8XK1rTHPjqZdJrMlY62tUU4cBU588yYiA7o0c
9yJDt2pHcs44cZ7LmBHO91QRYePdfdykOgQKJJBqIUtQRAoWDEoIlMhMRGSBEsMAMFUURBSIgool
Og73RvZKJuamNTNaKi5K4qg0qFCOUYOIIcFqB20o4BDJIHJ0gWnSiKtIY0LluXTsuIPCPFmJi47a
xdL/iUflz2NqNK58dU+18YMZjwQWXpe0+J3z0O+qIaFIaMch5Q5piT5DooDIHHkYHc7odRHhfE+P
DpAruAUihiCLMQYCPJEGs5zgjOhpQFpQi3A0GGBOO1wMmgiWZhjzcAqVYwy0YdwKgKKpjIorTDKh
y03eIjUJP0FgsoM6C6iwiZZHK55uDLWC9Q4L1WS4OLIMC/kzFUUaeL5TxXHjcFEV7NuebLIyCHQ+
vi4pizWDwKq3AwKtaALaMw3s0zWHWK365zHkrK0kJaYStAkZHditZT7l2qOYGL0LueOGcexgUVED
DJ59+nOR8Mfa2ElqJWBBm3BUsKsO0xEDh9uSqEsuJ4DyZawDjFVVigsQ6nGXyLnHNnUd1jic6Ou2
U9u6bTEwulnCBZG0jDhVE8qZUVbsqpkXTYngJ5/e5l4MVsYShooFv4RzWI5gtukkOESVPIKDsSPG
GaXCEIlAWhS4zY4Vdl7+uTBNS3hzL08N4w20927lnCMRi2XH2k1eUaxbLhQLfIYFOXxWCSKuOKLQ
1EDQkQF00gjCDaoHU2sdEpfHHboYobXWLpvmSXXskOuHa741kMm8U1Sy8wpuMJdYcL4t220RtTGI
08Lntmc0VbiphR43JMcw1MsUuW47QzbzhBjmL4KvsZrWTGeY3i72pqq4BMxCPKzFPru9dlc8uESS
SpzyTb5xcpzbkvFUY3OCcEtXiTPVRIljWarScK344ou0JG8JIzVRGaIa3Xz3e7azoOmdMDqzrZyK
+8RwpYOjA3lR45eKYIxmdoa2725oz4VoFCSFkQDIhIO1z1wyDsdTIC56RtUFKHk2xpGCsMKW2iR1
lQMXkyzGnGhY5H1fNBSQi7c77hUUOhC7rEPSyzohF/TM7xqCHivCWhihqXTRCm441EVzQQWXgtDY
aSg2dSBMKFCJLkmd4mN2Xro0PJnCj0ghjBBd6OK92h5hC47bcShQOg11MA2WWurxSpQgRC9cHbOc
ow3vcgjKKhEnFQTxjGMDvtUw1OsWotxZkYoSO+5BFRRmqo2SeUTURYT1yyTYRYTiKBkCjkTyxxxv
o7kDhyDRs5hSjQkWl2skMe4sJTU4B1CUlDhYFkSGhyliiDb668kLQPf9yaHSqzkP2+X6eHmoyGGA
IHZoFdWyTYgUE+ZGfaGhKVFbgqSrbCE5BJwAvjy8bdrbbOiaM7kYFlR1aNtxpczEztv5L14422uK
d4ZLXQ31PaZODNjhVw7gUJtHsM0MILGBgiUbBvuGLUmKaCxTJCiqlVtFEB8AWoO+drpQNiI1C1VN
SVts1N6wS5mjfAbIjSudZ3rrHXP6u6+lGeue7FMyT1TG2E7XaVZFqIl9nMCEXA1ktlHY7FaV4wSl
GraXyWyW/Mmp0m9vxWZseBzHEI4MK4eIVQ024SEobSSanOKlQZRVTqesYzcJ6cyaN5vFHgYE7YCy
KQ0YFZUEeNZrRwee1EpKA5QDXGY6L5xvLG2oEglR87vmmsV67qMbjgmM3pqYbcOMTPiE7VzNzunl
YHPFVvrXlt8ApTkkDpK4c809PLqFm5wUERFRpHfib4SypNZE26mF547arTOpyU5xdhiDxJ14G+NO
1CcjGimSBSUVvA3f+xNh+PGJ3Zg7EJS95ERruuusO3HbQV2JMuI4WIKSgXhNkxPBzTB0thpipYKy
Q6Dh5AlM8BApEPcaYnodHReQg37iiqaAqBeyHWw6LoIgjFExIeZTlAkNweo8Bw7vQztVRVFNXZex
wOx1MJI3qGxUOzCmSBqrwaBB3NU1pE1HDspiIEgbxzQWShqEaUTZOjC7LycvY2KLdFLNzR1bdANB
y6AlIUaMTg0ampQGq6hoEgSBhAw3bC8A00amQQTHJisORhTTqUJvNRKqFOFN0IknoE0EEp4OUPFz
EUwCeJjeTIVhVPKnSKosLsgKjillgwhKFhQBDkBCHJLYGgDADiQvYXxGEhg5TraJgoyUSbu8RQ1v
K404MEDksOxNIyyHR3AOHKgtBsBEWwdt2+tvtdU1969qvxm1EasVClqyUbBo0aZraKjWI1RbW+NC
CTMMcdRq9evyoIoZzoild2Eg1MiisHn1Irzd4TFEikm1w1eFDES3Z8a0JGo0CBJRCEHIqEGhhwh/
A+zKN5D960DwR7D4+P736rAAkggpQDICSQMtIGU7i8juPAjXxcFQ8+MzreBu+j5oWADVgnBfa1GR
Q4DyUT/lA9OaTeHweXn25l4DjAfrj5mdvp219O43i8GDBYXGaYLCh0ijgsQiKOCUpP0fKTdH2STJ
iUCGP75Xv6XfnxjGMW/P3YxjGMYZBqNRoAoK+vXhkNhEF06pOfSXWBrY9haUtKWlLSltVEtKWlLS
lpS0o0EppCmBRRYsgoKLvO5FMlH3hu+cUFi6BaLyWxYsFijzYdVsSiCrO4gAzQvQoUDgwKS0bZxi
gHxJTvNwyOCQ7pi2DpW8nStmR0CjpbTyLq2RwlBKaWMjiGY04LGTThmNOBmNOI5jTSFEbFlLYspb
L+naLMCRtiyyBSxZSWxZQsOXTB13dFnCWxZQccacAzGnDMcFPgnBwbu4xux47F3zX1Oe2upqrmqL
S1WjeuqZix/C76u7rrgB7q964AJ0uJxP0MEplAhpfcNVVHNbav8dKrrbeDh9D4ddYs/inVBZHXt3
ntqPeutttLaWtVYNtvqWvdsix5C2wtZSNQWDwpURHWoonPfISb2+2w779jXWnraYRO+6W0tpwU79
MueBec5nclMPdto1KUq+lLbXDbS1vbcqwfY7P91A6qSeWPvljgR/BQ6G2NQrY5eE3Aqmiop5Ofp8
IMDWtQP2TYuXf9qeFDOAaEDmw3Wvx6pgVUUeRVbaiiiti2/LYgzat60JtTlosbBtqNpGrbbawogt
RsFnz5aa2lC1aULafD4FhuNjUodGdoQNbUINGkBtKxvzXw6ACjwWFq0bBGlG+zj88/D276nE4yEl
XcM50544RU1WjWtITh/aKZReq06VdWPsoZ1LUVtqp3Eua6VtcK16cHq9e9Gc7nJ3XSAB3dVa1Wqs
9nVMq8GqLGtG20tpbZHqYugt89RxbQVslWAoW3uzIK+PVvXKxstttaqWCyUqxigtqW45oZF4NV8F
4Po7tL0KO7uG20aePOltETjyjRo1Y0a/Bw58GtddkdbVtPbssUPyfSQnZ5To+z7oXqMkaSQzzKoI
Rw/SqPs4kz2StdjGPPisxnuo94oh7hiqvyMBECfVSCiPwEB/gvumjSgK/fyP4jjhyo0CxkDgFS5C
JlT+sI7cHJknuw9Q/Ph+Yb6IDf5/u2/r34ewqYa7BYHY6We0nMcIwlLL4fYJ79G3G9mpgczFfmI5
dFNoti2EWw/yHR8DNaSfne+lg1HTG+0ohYDZr7xzp0xDFtLQumOjnvHzvM6H0pVqH1vkILVfKOs+
V271/Krm0cPEh7ozcgTL7DGRfyn3hvD1EJ7xIgEEWtujHbiqb9aFJ+p8jRTF3Vf8Xw8UpUVscbTq
UKZ7sP5lW5hW6le8fz1PaijvQ6UU8h9cPVMYleeMjN/CakIO4h6DnhMlKUM/QPtcyc6grMrkB3wX
wL1eksjQmXRYppSkMGxjvS9Y5kst9liwxbtM89JeVTs9nxbLN2dm76Z6S4Ib7DcZ4S1rg66RqlW/
KQWpjPdeRh9us+rqRORc4wGBdqpyw4+6hsJDZL0fJrsUD+NVmvAQK7Dr7+yff4w0dihHIEZDs6HQ
nfrL4znCz7sv7TYDPOu/phLJmQl71tVmZCEhAeqVTlLZJzrXkb4vlq57YCHQ6z3wYGBQ1Y5T3yki
LmYwx0zzMB8Jw20QIN14SO1EgVhNfHSW2ezODI1bO31zCdBX6yOIm1gm34Ta5nJkjDVyS7U1+8T6
3rlgCK+TwJrJtr5Xldjdm4+SdRKGIUSLkXrTjsqsQxcGdx4Gg8YcFKrhB10uX68fIZNus6BFU2Ma
s116dpfM0QBGHKuN5RgDHNFhWfrh0N4o6rOuA7DoJpibupOSS4PlHeg3xzedhx57ODoY7eUG5YE/
gRHFXxZGdsmGJVfyHoNym6jgmgja5GFC0ekjCTyccOJ4SsIts3yyQHppFOcD9wx09FX1vTVuq8QO
nYYoo4kju2Y2k2dbTS3TwNcqEs3sSmx6+MEBmJJsH9yGEREI4JhDOrA3+qpIip4WMjDEbGX5PTDB
JmxbOOa3IyP0mD9ezjJtd/ewnpRNdp2XzGZ5k+nV9Pv52a4ckuLIqoLsookGhhPVWKAON8+2GVnF
E4kvHlkbsL4QSQeLLWsZaP5GGMnda6Qjhr/QzX18rL16bNwn1Q7yzLEpP5BapNZZl4DJbHdnp2NG
CdBkJkbXpJ2NTjkwZKA4RB8etB0kMSVEc3Hhx01UQoHHY6iC5BIQI37u4zYmTQ7BuQ5wTDPzbBJD
Q91sRFt7FsBlxwUup6ZYxDqzmETUEzo482WJqQ8O/R9jDG6lrbQcMqzjamSMPB2qEPvVMhyUDuzx
AXGLGB1bNilVGJyJEfGdFpNmepbHy0KE2BI4M/q8HWDYm49HNZmjZLqemvXlfjvLknJQZJGfZbV7
be3DeF8DqYZJgR8Ntj0lRrFq9R2b+emBMfN2F3Ra8po6L0nJVK6JT64iRcVPpM20kd9yD5LkP3Os
Lh4EacEIyiqSKuj9cA0zouq2QgNCPD5VZMnyfneV6XRC0HuTDF54E2o6MNpQqIEN/OItgOYvXR8M
5NFP1Q4rHbU0VnwRteRgpXVCX9Jn+Vm/gMJmd1B/Aj+F8NdvQ3al1jOfGjt0UHgZpgHsTHkgGSCE
vVmRP4jAXl43hFKGqbDMTeCer3+H2lf8u+WIxs1YjSOhgomLvWz9wOzKh8fmnpp1II2N0zqhHdSk
uw+wwYRL9ODNuMbkIhb6Do8QvzpQz1fqr+xwP3K9ngsfs3NdhDZv9SKuSxVsoc3AyFy0BpWRK+gO
zDekY4ucX6PZXAPLUzUYn0aAydv5Afe56Ue4iRBu4/2Gh8SXLsOqOkTDGpsRXH3d+1hju4aU9Cdn
MfKfhB3UoKu6Y89ptbawm7I6dWWfDl+Pf0OXLlwD1j+cwqGCIIBiI/biShm2NhnE+7dlQJj7TFsM
0GPqJNEG3ZzinQ8ZcH2nfPCDCHNddVgbkVKCjC0h5dEP4dOOE64VNzExA2aaM8vQTomRT7MYhLqQ
XVtIn6MpmSB2M+TmteoevpwjZTMdO+yoTv63plymaJpUHH7EbMm1/3uzF/yUYiLz3zWx7N/oxTgN
YTx54fD+mnPbs4ImeSJD9mNSMsSJHJD9jjiZJYVI4W25cnLMGYW774aXO5Gx549vdfIsT2J+dp80
OLk5A/frJzq5S34UZf1Z79uTAFeFZqnJTwjXHm4H792VvCxj0cft2VjPPXoydMqY48CzXv5fVjfl
xw9PXiHYwxr7FW4+5UvftdhjpaozTBxs8mGIlvO6KCEcfn/wx5JYZOYphhMMXu1xJx3z0SaJgbLE
thXn1vYS70M4YsyEwWyMYJimuiDz/tcH8HpQ0wC7Z6RZylXCqaNX5XHNiKhjhP1OeBwJx+ZuozOF
GbLROJNGoczZiFsee42sbjcTeBoTEM1GlpQYtE99MFLd8KNIEpCmXRsJ7RpvI2QXyzs2emViEn3Y
7iJYp9PCk2GJ9mXUoj1EGaJI62jbzwwqy84DbmKWUnPqKetoFx48GrowxwrnU3nbXYUfAp88kpGw
H3Zw0jKfyex9GOsZMUSeEAqDXApLouqRuREowE7jpfq4b5NOZWcktltnyArnC20zbIMAksjpiYkd
XFhiv5TytwaKm24/e7lPER09/QedGEYmYxf5kXa+9bNP2lIWTZYCxdOHTORKUwnyyZuWcl4h9MmM
Dg98sPHI8UquS3VSVo6PA5PiX0KLJFTfOK8OO2WWCh9xafsunw35b9n8d7+j24/YTkbgKJZ9kGez
Mudkppjg5SnieHgiACbVVGGwlJJoDpZwDgbgxIOig41SYulpzIG18XtNzlzeTc3OiZ8oseOOGRGf
FukzjLnvHN5mG+md8U2BAbG3j78PjPKtCcI7Zt+bKVeNpDRoTuRz5JJG+e1twGDFcMy2HGL1OPAw
kt/j+JAYhKdsTRdF+uCx0DV+sW393v+e2/gxx4Sugv0PwCW6DzOfHLLcimbtWcvjHb8GF8L9eVu5
Z77dfgSH5/n25YcfUNkHP02HLHGR+C9VurrEcODHDRSKm5tq5v6v4ekPUj5OP3epUfth2VPWy1mc
0EARBIkPKIV19Tpz5TM3mlFJtLsajUGUVIlRo3+883my0NK9leEFe8iPN/oJoXdcHzuBTNmOSRJE
R/mwyfGwL+6QO2Pb0O3Q0EBQzyGsE0y6D2ld2koDLDDVF5MEmd6MyA/I9d/ITTbu74dk/P2zyRZc
hGnyOx2ZP0PkQ1eYaCcZiQwsOvSfqahRqHqYY2yyqwxLbDSEe7NReZ6tF5TPRtwwE38zthRFnBxx
Hp31RMvjRVXfmNfLBR5gvKxJJJKZ56Eo779FuvhRvIYQ1Pta/Rrm4911oHTQjxdx26yI4oKkD+GX
r+b4zLLtSOJqYLhih27zKNgu+bAYExnBra2pL8Doo3b44UhnMcmHmhnCYL6SB5tIdTcrE4crEHnP
aflGJ3DYXGvK8HTb21NgyMuGD4FWub/eKKfJoc/gZHWnH3kDCEwY5MmKu6WNtyJBLOnnxuOoe2Hs
mh7pR6fT83BVVWo8p+ftMa/YbfzqxBeKKcCCGSGqMGNn3JjXxX3bsqE7Nit23nzs2zFSMnp/8ReR
njxmOKGNzYkqKmL0NAJu2ufsKPLI2UPVM4+/6wq21AmQSd0jTTHZwI0xWyl1pTDFNMcB28be9b6D
OmAe9HzgrPRMUC4efviATH/BOYuGOGqZjimpmdO2Q7ITRa5YJybkJ5SHYHiOe+PN2GJzVNpHF4hJ
dFnIytxJNyVH1HYYjc5lQpSl0lRR4SzxMOdSVRyInj8/fVC4L8AZ6sNnYl6ZXKncNPCzfx56LQij
DEfCkFdtVm8Q9pZBqUr5nF9oUy/c+E+y+mMpySyNJ2DTdW2+d5Kro1kcWGMbw6SRSm4clip7YyPC
MPidYZ7s0hHPcQjuWG85dgHS+QfXWCtSw+mWulQFW1SmVIiInpE9NajDG9yRGZg0Wo89o0Ndvfd0
Gnw68WXuuKqXW5rMHdq+h/F4VNEfPlfeH930cPjwTvhuoekeJfNGj0FarIe0zePtSiHxV+psDx07
h0aeiiuieBhd90efbs80eNlaPgdRUw7r2yPHfq562er0RSTVR7HvOheVFMiEal5EPmFRhiitjncK
BDvqwxOpi9byKkWzwsT0VJlaks4sQmH9Qi6DI65yoOe0TBJgRR3yxymK0L3Rsm0qNz1XqnnpvyS+
mU8MR8j9JbfeTZoNhfDSI/Oi2y+V/K7lf7ijpsjsnJuQm9Tq46f5+D9Nhn2dE3CEQhzSKppYfF4R
qeWJwN1zfayIbEwh1Qk82kQLF2iCUG2xJXb+NFJdDXN+/XNl1Wg/z8qo3PPj1y2TSOh1Ny7RWJiF
PY+sYxsx8CJ/1aONiyUtj1lgRJlZSP6bFqZvALZniXC6Js5uH26klSUJjeh8HY2mHQQWsOxR5Yy3
3Ju0i0rWJwkRzftUHt+V5C+kdOsb7snZ9klqYuihm5PU0VJVmfx4n1zpMPSG45Ntx6voNx+RZzdY
AVZkhkdPJn+T6gabEDZD6cTb5KQJ9UcEANIiNllJ4n9eIZnI0A7h0U3ZG0wbPE9lrUrZXrSd2aSA
l986TOy/yZvOOBMag13KRwuzF0SCC7SpBJJHHO3fqzMxF60pOczugwYYjw51vS2kquJhhCf4pwbs
4D4BiOZoa/h0/mN3Q7Ur8F0fl36UM4OBtHAS5JZs7B0bg2nFtLYY+HcacWJwF+/PrwX8+Nd60jea
ocTCMzdczybp00o3A9ME93F9Ny7lsOp+Jb2EFw+YcxXXhECZtohAnDguOMvrpt3vyv9TLZGB0O/D
xXVmStY465MUTCWbBo9FBDysoh5EApxD9STrmD5/EtdYZmVYhQ4uQ/rlrdNWQwroX2H5P4KDnJJ+
tx17CD5lpgiwHlSYMiBiXXv/dgPdkmXnxVlRQkOfbVkEfAyNof2UVxyeMkqIU2FECydxp3aup6Oz
eraR0zFiWQHmDFjaATuQKAKWmsSGeLH0/LggeN6bylese2oOztMfVmM9yXl02tIUBTEo1TEbLpDx
LxOiDRSZmJH5ZDJNt+hX2qlJKlavPuLJEl9+7aotFiSjRjJot801fJDg+JIHCEIabhAoGhZ7zf+3
tNv1W8ceIYh6lwlmaIlmpBoPxGeKcVNBHQ7pIZbV8imFYTOAmHZMw/gaY2mvye/7rdmrPQy/UNB2
poUI+mB/X0ARz5Dz+Mdcyp/ZTMsdjWlTSdic9ONJo+YRaLLStvB64ib4j6yoZlDG8SXSZUq2kPkq
BmVvdoJuKHiZl10kSxsYRbIwwrlEyk++XPZf3X6JaMXzbMb8iuYrhw2N0S2pLflO/2yc+fQ3GpIO
XtYY24HDPCuZDny7DZIyF9K2IgD5xfa2eP1aTMKXiprHQS6ONOjoHCZSI1NwvRKUXG4MMJO4zCWP
4IIoMSKENkbmGN4Eqbh2sj5impq5nNSY3w8/KH495e99JZnZWo+5d97+mziTYj1RExRkHR232l2L
00ttOyqWhlBR9PFHI8puMbObxEkdqwG/1nAevze1nMva5UzQwYzcJVjjPvJblZrN5cXP5MIZE9Ph
VJXbDrljoSoPZcZc9nXvYloYZy2upv3vhxNh9kKhhrqppjph7DrOyBZmR+aNpiPPrZqWnyEH8Mj4
LruL42/FIDQe+j+QxyeGaUJ5MaaeYIaX6g9BAyMlwn26FTcSGrZ9xHA9UoSoIdrh6x55Gk5dprue
W8UabW4EINe+9dpn/ez4Ls7OG6DE6Z47Hn24UdJ+t6TGz/gkV6h2bUF877uWRdZbf0WIkdQOSL1O
MsYX1eyMD6NCscTqxrkYdrR3XzwNnGXE/sw6DAmZGAjBHACR2EF9x9i4IkeN9w3Mm+BaG6NJXbS1
jZVb+IwHU3VXhl2ycQl2ONxqdJDATxev8qCo5fyUUXT7TqNTzePcZr4/cNWqhZxPXbFfbnmZd3SY
Y2uLC0FalXpun0alWM5CxgPQWJxdHU1A6FREdI5IjOISQh4UFDqTzVCJwb/F21SStONnExgsaEHb
KNrZTUEjIvlEuWVTZ8KrAsGRrOGu09Wmmnrxki9ZPAyGMUCoK17hSgKYCGJ0yMTDPchwPn8Hh5gy
GCwyEIWBtvOdkfxEyHhQPURE0IeRUy3bzcemnjfzidUfEaEN098jRjPjSyN0G+TVLnYZ4ZJIQbBE
bhSS+68byVIegkztMh93aTlQgQiIzJHgVBxHQez/f18hb9J+UtUTQzsdoCCEDiBh2xVRV+bVdLGj
ajVD/M6s5CmpMhAqkTI4an7xkMkHcGQAhhK0tIGQg0rkoUtIUlCZKiZCOSHyyBqU4PX26BjnPYiQ
n6n9/x65fGtIrW1fpoXo1N+ZkdVYgMENIJjsoaiEISUs6syuhdGfGH6/f93w/P+CZVcZ7bKEkCIZ
TnD5z/UUkgP8alG/yb4HUMHVwOiv+P/ZwB5b/0kj/BvsH1vaPvX2MCDK/V1RNOdf2xVpf2+UpbU/
5orxY/ewd3fjxDpvru4ab2rr59XuPRJgmVcujUl7P5P4A32CR19Zf8cAKwQr+TsFcBv/35yZr8/0
P3hCQ9Tx/KkMEEEClRfuz1u5/ndCcEsKAwTJUHd8NA/rg3CyEjMCw0RtX13Jo0GKMBVFFFUYtitS
VQH8U1c3zAP7J1JTSUVSLMAPBGTcr9q/Cz7PPN8O4/bPjA++TAz8+tAEBNfm69UICA/dE/n/y/xt
UMpPyT5ro+NUTnZ1CfPTjKUYEA0L/bbwG6wTbmOZ/gzz7ElvTM6Q+/3/P3fNv3jefBs2xnGf7zvb
6aY/4sMf1f2cLt6PHQY/Pc+okfqq1CyPajiZny/hwz20fSVukEpkcvrvfP/1kqlAEv6s7rCGQT9W
9TrCFiQIIrWt75uICiCU6fPpi2gg7fW35BdPze0/YSfj4AH6TvXWl9tL5a1Ir6Yy8qVfR/Qwuj8/
zf3fN/uP0fjwx/z0MY2xoqO7epJHageW6ptm59XGkhrCB0E0QmFCPql/R/g2EdNjPodtPBxmxs5i
rTnkfzPPydzBNtQ38/G2gzmnlEonWMmPXtw8X/mgDnzzui4tfqM2gfxnJaQfX/L/HQp+//B9lHp7
9EePTkXG2cV7LvHE93hRRxyvU8Z2isOjng/e9Vhe0YSUl+XG1J/iWx7jVAVwlu8459P4+jSfA5QP
6Vyns0d3f+4SW8Sly34fDJ4uE/wle4zMVHcbBpMS/wR7pgOqXbiQgMSbV9mNtrnm1SalMPw4H0yH
SA2Sh6I51mpiBv0kGEiaISS3u2LVlATxPviXVJvYyX0gzpJt+mHsedBJMkwk3NkeZUr0O/O0LV1z
ho0Hsv+H4L5u/x/0v6LZ+HujPV69Dnju7hfJ04xRFHMEWSm9ElSTzBRwh9vuRfkJgmpSg/6rMzIA
yxmWqDMwD84L90RMkTP+bMDbs4Yxgv5qVHO0BG/NocoxwKH5sZN4djTfJupf0u6SZm+LliSvdykP
Na6TA5/Uer4uy7u2plY1SX9JW5UbPtOlIT7XXApx32Xm93P6nVytIxRlMtG2tnIK953oQCfEOvG7
SO25sAjN7liUEWBxOSsWRF/tN6Scpj9N12RSCKUglEVfRY0osNoBLeO0Ah3duBHezbxAIQ/134fm
PdVVYKVbarENRGEWDBhhF4epwT/nSP6kbk5KGgYpmoqmpRNMWW3d3aE5VCvhCBoGNGC5DIUtUuVz
BGQ264u2rty0tVyywBLQVBQNhbbzcEjk7UQWnbZsEDsIVjCVJhQxGiKto2wkwBkQP8hAIQMIm0iY
SkS+EiGEJblyhigqgcdhEB2FJJzKJk1Y9H9HbHCY7840vxCoA/tlzwOfym4xyCNv6lS+Yf7j7CwS
aKhQGbzJli6YyzICCEDsmdf5bcYfPD9WzvCXo9IOdenfUNP88odsjVDRSA00CnJl4hoXXZgm2Cgh
kqAEEwjazX9W83GgokwXze7Y9/lhyYy6YSof5c/TV6UTNn0Bd01tqRiVsMHskqIyE3GOQ9bcqbqI
VOSwDqSJhPSucHUDqy0Q/7tmVsecTvUQkJW/uUAlJCpUYIhooViFYT4wp8JPqgX/tncB8YED/7T2
R/5dv31v/d8b8u4U1Mn/B12VTZkVIpmWRU20mWIVqMiSizCM0hhmGTRhNhLS1fiutSJ67OrqsTCr
KwzZmastgKlmipms1MU01pGKGNvfj16zJiWKI0iizMU2alUVkrY1plVLSYUz73VyxiVMmlmWVlPs
rocc+HacGgP1/y+eJO541+Mv+0U+jvPa3952GR2m1ghB3eOnqPH3TlJe0X9Bsq/7+OVAnsTIbNuU
p7zqZpsdSBr4f7fxf64/L+j98af2/s8O86j5qH4t+76Ujr1f/bTb+jC/+0fVltP+Tf8P6N+u3aDf
zfwlGQ6xR17IdZZ7Cdc/0SYNmfTgB2h3nr8/Y4/tcDs/wodPiHy0DRBSbloaIIft8wUiHtqUKEWH
lfV5fJ/ZyMv3fGtNDcWY30YMUb0P8t7XdvVCjKJmwC/Hz+vTh1k8z87sowXkP+zh8mQByhuz8X9p
t6ycybWU4ld7zDo5vMAOhIKtDR/PC/5JRHqof6AQgooIDIn6lENkdnv0j8qHgHDseDUICP2y6jZA
bkaCjWGI6g7y1rOhGiXuua2S0ZfG74mlxLGvk9883aQjcWlRfK8So2nIgtS1k0lhZGH+MXs0G1/u
ccg4LpHUQ+cEzdPJ+zgPn0zDczw80znLWJPCSQnn6zZNJKrA3JFOX9XLGCW8pDSSSYojO02VKTdM
IVnmiCzwngydCTksisHxpKFhqYNWbu/1q5AOGzVIAAD16+fl9fQASAfV9WGQ8ITaWeBJTFGTmmRr
Xdm9MXjlg0/GUmbKuVKoJuEO+JW6iX+GvBJKKMVLuquYYazJCqfy7TgdpefIdDfWNaIJKhYu7es1
ee9/ROBQ2w19fXOfd1lP6xC5qak4M6Pq45qd6YxmKDwpPBf3oT2k6oPsj6yCGPRsYxx0pP5YH8pr
mdVSYd2esBghv3f0ZYr0E2hVRXQdhM3R58j5qD85HrPbYiliBHp64JF2ed6HW6HiGS3ztSpKluOp
rOl8Xc8ccCmRsK1J6VhxO48k03iAwYY+CBvMKDyxMJhqogg5Zwh4xMf3kesi8J/0J/kGuLV1ZGNk
h6B6ehenphmn/qeLQc8HJflf9IwL1WzZxMkBDoEgP3ccxuK0hht5o1oyj6B4FAMwMwww85n5li3T
ocMsQ39Yx2adjvu5olEDUlClPokHj8GaoN6t/RAYIn0OMMfNoAubmDIM2afsQf4au2/IOxu09+2g
Yhv9dD17D/j7TXKC9sJup3HKkJFSeTRtVA8e7Tw9OQFjTORL2mL+zr+SCDrW8rZp2ZOaN6xML575
HGod2LyADd8ZS6Vv+u+suBTmk5DcgJARCR4qXhMAdkztwskEq6yKgexYGqc2X554hMrpE2ehI7XG
qiqSSS1+agTuJTV7LUlTQj0YmN8neLl2FlBLIc0rZGO+YUE3zSzMqtPxzJYyByrZqCBq0/AeW5DZ
MmEf43bqJSVsucGm6eMRnERFlQbn+f/Y+LVGdKkBId/Ky7gxqIAsoXfjw7GfsMHZ6dhPo8ACvsiK
b4HniyB0nKLE+lKLiRtuQI36yg8w+z7u9fiHzt8XbSXN/J65HZMJ7nMMXqmMKo3oRVyRqqGazI9J
GZnkr1JPD2MnMJvDKa66NZeBD0ymZ1kjaoyDCoY9bUrKqY8T0p6fDs2+BscmiI4lyi1ZETd/D2eF
rnio0HKKq3d/vlnsGA4iBENrqdaqlHvNlGqnhDR+79wjfa1hWh3GHbpdDbYNsbfQ1B45Hs2EZdxU
yIByu5Hm80rJjjORo1oEjSHZDubUocrv47NNxXxRima85u5cl1ROEsxMHoVTO3GWZGIa440L+FdV
PbQpuwFQJJGaD66BslKLygPFEmLPsQx3/sdj6E1GPW/XzCNwjL8O2PuSQmpf8HEfh94k0Q97+vjq
viZPdoZQIfnwUceJkFl0dcE+UIiURB9iAAAzZMwIEN1jalut5vjY+Yt2WduxWQ0OF3Zxi7kHAl1s
S5sk47iT8U3DeYVkb2pLaXKHfMcimr1M/qtBJGxkzHekg3pnQ47Ds7ulgxZyTLpYgyEHFoMxvAww
sxOHYuYcYyIw54kZ6cMYsPfn366PLXaJm//KGy6DamyUOA5CHqBNPVlrTjGmRr1lydsAng0YXvRt
xK4QE7+6IwyPNMY8LDt+cBKAVwHE8bQNDeGCyjiUuHu6+cTTchjCPHBepCNDlv7tzfyNe9TaBZNN
FyuSbrVx8cTGFXCSpBRRl7P9eWz3YZ53ZMq7XiZj7LBCmru5w4PHRovL06c3npjnmtkhN63iGf1i
cVK6uGMjn0yYzTZlnbru8ZSRjpMOH270Z9z9lhsUxK/a0ijzFFR4i5a7f1uBBdcRA9pjhsVzJK+i
xPogG5eifotfug9CM3ybjBoaRr693k/u2KeeDG/XKXBptIH5Uq0nrlRMVnpZmhtsf1Uh02xsKWjK
LbZYUyiihaosnpzHqKV7u8PLODk47Um6ZssPblnSmTn9yG/R9tGR3FCcmAG3PIbBfphghx5vx+OE
0/mO8QgM0Sq7eaaOfhDYyYyk8cowKF01ApE5M6CkpsW4y9JGG2QS2DumKkp7q449GVrGhSJPQhsF
J9H0mHTRCkgqPLLToYIPE9TjYrLBt6ykZoYiY7GOgOSkdEGWo4H5UwNgQPy/3qkBrg25myFwcE2z
r3bNmzUZhe1pb2EmAx6nAyDkUPj0pNPoodY3oUdc2Ihh8JRJVV/Nlhs+iL5t+L2jmQ576D80Ogz5
n8At+QMBppJUjQtJ+0gjkGiGaYyQSQCEJJS5CNH5yNYjYeYb3HqufjFyjohAXzLVUMnk3Ko831eV
RpHBpiAtrpv29Wd9KqNe0XVAYqUA/KBF01O/JfY8778zvo4tfYMPQ0X6NWrsf4M9IEyZB8CmMMdK
sK1/KDcm3U/CEkj9eZJovl9MBBuo59MglD76uQP6BJCSQhCEtpzmFGXzco+kg6CCjfk6bKZ2yKHD
wMstjLOMon9Pg/euZx8aTTI2kxtZhMwgzgO2EzkwhB1rWnovUkzd7Ws1LQ9GiA6rZZo8jFUY3iVn
CeZJrJbJTF8PUOxqwjAo9sDyvlMxtoz2fSRJ70rXb9ZrEcwLNLUPeqrkjkGScvp053T+Vlu45L5z
Bm7rL/TkvOed8b67je7GVxtqR5FBjUtvhR60TTZ+rCa8OYXV68PLy8yma7HvwHoR0meT2uy9d7hi
QFoFLuWku7680+/NFn0bfgrPB0AwPBZ26DFDUsv5n7ohHQHS2FFUH9l9/nHH3IjNjcGEOJset2zs
LffwUIxJWjv+jIlJuov0GTGHodj3LIlNGmxMhOLr3Zw77ZhM/JqRTC0SqkySQpVdxEHZ0fRfhaws
JDrhR5ueomL5tbaEH3Tfn+Ki1zx7yzD6hoPuIH8p99InCCL3R0j5HaON82zQh35yes2HfUqcLJ3E
ERAhCEYucDDL9+7OL/xIVZdcW2ejCmn8c4wE5C9M9tP7PXU/hKaX16f18/t1/opV528+X4NB93LZ
6ZhvKoLv/hF/l6fjy/h9v0dXC50njkdAybh7n3VZxjFdFS/p1+7SMMbtcxa+602JNs4FKHyTpqIc
U0cn+/SRk9Dz6I0ik2/1xH1d98JfpVd+hzWKLN2KknaRzzYI6HG1SSTEMaOfYgYN6aaY7snGKoan
09+IlkgW3hzvlmmxDsk3am3Imn6IVCJOJkDHRzzjThDX5SI+wx0pcpj+/Hqy17al6AWI0bwqfjJ8
HNxV27+/feZY5P2ZvJ6el5GcSv4OsvTe7Zw7o3lXJpsUeMOzC8P++JooIOyjqOUMAemT4SPSQiOy
TnRg0gdDXT3te+OB+A32dB8eDMsR+SMCHRwV8MzMp0UOjyzgpQT2syC+ITq1N7uLV39Q7dyK1f1/
c9VlNznQkTl1zc6+jKAmgMVZFXkDxO9KyOmWBHTj/rK2WbvPxnyuTKuCBxHECZsQku51HXF68Llw
V/5TdsMCeSM1kjYg9CGnp7LVt3H+muKyvXTI7GoM47DiGxhB82LTEKNI/gxIL5Jql6NWQyGmvz38
u7RDptBLHvjYscEPZV2/CwUlwoYbDrNxUtsL+nDz447MIgz++UQ3mgf5+TyXLNyiZfV0/NFqevff
0/NVo9mXTxw5V5Uf6EY867dxxslXLiQdPz8o8u1b38Ibpsaocc7eXwfsRL8j1j3Ai3n2eZSfwt1R
n3IIQ9sfP21qU67lFVgvWXyvrjX63vhamC2Yv+3s3eZv1hbmWmw3NizSUZuWtJ3y1NNmDIwthS7/
LnOmT4YSgzxjKHpTKXB5m57abcLJSk0Vuzy+93t3B7PJCvUXnzLH4OnH4zesY28ZrVp1nCYvunL6
BWjK9k8LdTYVcrhZEPlLTB5pTwVSr4Xq9O6k+tbZ7Cydy2ed67+j8LrfI/WZvPk9GLnVHS1Dmlzg
9F3cIUz6GcFb632PZ4fvXjnjvRnq6dE9hfZZ6ePnB8wL9XhR1Xc2n2/Ctwww3iusC1qLWXOsxZfY
x5oD9AmA70eCYb5/U7PdwPbsp6+/27VfM9vT4Z5tj0P8gu7Qcz91dyfkd1PGh4+3Db1b/ktcKcRy
DgU6tZHqNn2G5z/vPqEVCYOSPCjsMCNumm3j4/HI03t15Kjb6zmfQzsS1MPpke3v84APWc+MH0T/
X/trY8zAccc2jmyQtT0xyz8Uefl9LdGl2aLtn6grts1edX9XQbZceCbr6oCa7goJ/l7ejkqe+h6u
B7689uJ3H1/8MDJ/q2N2HeMYggQgkI6CRUfC/gPau/5+b7GDPsW3byY959oVBhJgZV7D0nx1W6vj
xRu+2U/dTjyYl1/Q/5PSS8jLMrLJdBl0tnkH3GMyfzMH2LjoN6qHt/J7/kPdr+an8PNhjgG1hjzD
l5y7/Xsb6eppqhI77ejjbr6ttzQ9rLRr9mWeyJd5p/c0fMQHhubgY/5S0+WuTJtiDaNOkN+TbjNp
bD/c+vsfLae/okUM8OxxxxOwaPbhpLDKesx0bfd+ifx9fr0tOTt7zu+78PfMWbqurkSgcwXwU12V
9UNJiydZdew+ybZTfgn028iX5HrtyzQtf4a9OZjddHy5kGKEcK/bLri2Mqxr5vIkbp5+55y2V7LE
23HXnwloR1o2WGtmO2O37d/rl1XzzXuQubYhPXCPdXSL8TIvs3UCJ2g50plQ7yteyz5G7J6SxLow
gNPn/TQp444hZJm72A3fkixPit5L0Q/lwgaMS25aLIhuEPiqr4rtWmwp97Pp2LLn192mtaHV97yg
sqS37ht/drK2NLd23sgpMjHbL1JprNnQJMcimPUTtRunsOnemDT6iZkjBotgmNEWs4/VpcIRzxrc
wKfp9pKL0ja+i3daHm2tRR1ezftkgdND4ylaMoUT+x7+dIo59zjiRy+DsY+FWpLHi5Ha5Ke2N3Db
apOqXdLNNtDLnBUTNVTI7pSNg4u3q2FpkkzrBBRM0IOhDRb5iJppIQja5sz4q0n4ffSJC4YMM6FT
qcmbH9uEs+Pqp6pUgc5Lx+6Jk/B6egnr06+r9JbhtYPkYXHlPFZ1c+6vzR7aBJWkea+PvpGWTwKG
+jNyXs2t/8vtbt077W+3nj9rMI7HOrX51iR7CUNkYpfL8D1zJt14Zw7okPZLkYfOa9S4z6d/0Hm1
3HR5Pft6ZsamqPaPvJYLx2alLPuY2i2SMoolIWXdRJsPh+S30vh83ot9saqLNyIeUl6bEZjpvam/
OmNvY4ziab7/LD2FMsZ+apM5Dl6SarScuk/y0MthLvWf4+Mo+z4fpO8m9mVde3b3tgmZj48NMOee
PXyzy4ZNxtge/om2nlZlhp5wd3hT827drq3VfK3yU14HZx2XnjxT+M/VkeimOqYtqULKR1uzr09E
F8YJ2+X9tJc9j3XT6Dh0eHDOu+VX/5ZzUGAgwXjIvxSr6q/H4U7IPs6/Gce5vNovx91p+qZgJbsZ
X+ZRJ3HXLDAWtNxkZ5tVWVrWiqatCnr319KnhivEQr4QuR7qw3pKXO6TDrZm/xq9KGzLOd+XH74L
aUbEzfji4GT0rJpJJqSdcfcUkjg91zXu50jevbk/StFqqPp0vVXwrgiOOEDYvWeyc5bNssk2udIC
//fScfsw2G6W27H37HMuFRoLq61V7R8S9KNzLp20i8bWGfnWRI+S+ZYbd3eXxLhXLOkosd3jPl6T
kdRgbpb5Bkrpey5KOeiCNZbtjnZhT6TmN7ryAY8dwrOO1zsWi1WcpZ8Rev59525C+n2fSbtOGBGx
bG6tkqd7zXRbrUa4RSkUc7XKb5TU4pHT9H3Rp7GWlerbL4/JnT4+OO7apnCWtLejxrXzpXox5kxY
vWzrCN48/Jm+nB2D0nd7cpZHO5hpTRnNzLzUXHIJv8Jvjdyy21fVYv0w3FWK7fn6qHEy7fVKWOHT
q7U1zwm5wp4V20Mq6+rjlPoE7pY35cOORve3fLSV2nAser2H1dfl02xQObTpDOZPZSeS5kG1lt/0
bdM5/Py9GKns+g7Nu03bhyN3hE8W6qd0pqeVtpEKZJc6xdvX1ecWvdz9ec2OCDRFJFMM+mJ7bRXr
8TsrL4NxzLiQaUcOCg9ThNOjw75VWCy11n0Ozif1exyU/Dr7JZ0vRHrzNW5+vwyknU29PdDZk/W7
9ZyXTcvLWNX7JNJAld654XlTUyHtq2/WstcK/HZE05VzQlbi5wFO8wX1aaZk5xbPZfPWi4bfv1kv
GvLV43WtVqt36wkbfRutp25259fj91vBdm7ipRfp2GfI0nqtQ9+fVJtFReTIPflocm7e1/QpRoR4
zXDKT4tY9ISs7C3S5eq2mXn7scOfbiSm+qkoQcsXYxTSRVSZ/BfXF8a7N2VLfQbMyFngaPLk7LCl
HR7fDPP5ij1O+p2LpfTHMq+9zuMSWyFTnrE9jjz63iqlWsTq486xWk4iIt3c+fwUudHKtyacu4kf
pllBWj0ZLj37zyqedbwJJcbRhwT0xT7XN99tCdCnh8l9l3Xx91pbl5mHd0+3cGSureOHQdS5mx7K
te7x4UPCnPhnjoxWkjfsnOHcjG5DWsWMyRqe7qwkUmaD+xSkYaUlj2pTe7ybhWCPtdKfDb0RIou6
I2Sx+PnGaKE+G4kadP14yhaGa9Lt041KkhKrP3PmSTiv49EpI0EuTx3tW2W6s22eH576hmbQ7jP5
exYjzts9LdG/2w9PTVlWvx49q4joUrog7out3q8YeQT9uw1XF5rs9esslXPJRvJD/NiQRZUfGZML
E+eDUpPCv2GCvajSezm6R6JSnyH34v6ssxZWnB47tptFXUQ5s1u4mkkvJYzalYruyKrciiIxnScb
1bNvdjBsvfcoMXKEUgxJzhPRNpTMqqTraDYJ1LjXnBLtxgumObIEJYbS9s5Fq5qdJxY465SbZkjB
Tu2WeSjZswRM3RrUQ/byNNXnP57uHq04LV56Un6S9Zab3d23x6Zv2vXgZEznASMu7o38cC3x9HoW
bYrNqMJZxnATKzl4rCz2rFvQevslOhlfov7J1TTu9aOg6KShstN8yRPqwk8hRLCWcCrVQ8tuKiQu
eB4eMwLUTvrTVPHRjaup2Ia/bsvkYk92aBQ9k23q3zrQvD3s8iUFxURN2rGH3WxvSlsHy0DKcq+g
2tz05Wzq2mPlVJt5hnlrlbcB7r5RlUjlBtx1lt6pKjp9tnpPq68s+7QbfxfPjTb3S1Qs16nNi6Lz
5v2WynjPPd6Dry8NpmYZIdfSsJKRt65azk5rWZeLayPj0yr1evI3m288Yy+Gb496FxzK9nc6UpPz
fPXD4nw+Ec+f089s88PPTTDfX1238qGm/edGOcUjrh+7HdpLq3Trw8ns+s9uGUq1nyzjO1aiUZyl
jPq3jno8l0bBpgjVBRCuvNmzb9TukAhRNddOslLHHPlryd9uGD+ZjIfanyXVSUShDi1d0YCL9HXG
K6/ZLPLbSOzP37dkl6tmltrNwxd08lynvvKebVJScjeV996k64Ybjb1uGnZ8PR6tpkNj8DKJctHm
dTlUbKPhbbdS0tSFxKW3Yba8NX81KRfdfPXFZaL6O3bOVVv8uqWzdvMqcduPWZPp17lums3StXKN
ObkjiFPRtp6Q7fkx34ZR8PF9NWfVLWwKO/ZhrX3VbrtWTyvoYppQ8bOOhUKmFTok+Q6U4NekVmtB
31yaE22kk2MSjLHDSfXonlKU3tu19/HGVoJaWKT8erbpQVV44aG0nh7ThtuUHHzx0zeUtvsLKWdz
My2fNwqsvl2czDQpB2truv7oY6uGFqM4s92VI8d2mHdQma4cNmvhkS80JBqcwsT2dNIf04G2OUcd
sreF8Me/hrnzmKueGOvd7IfDPOk57Hya+Oyi9uM0lnJRXyd9ZYVI3UxIKnhsrpI1McKYa4VxqQXZ
8IdFLSyU6LfZq7OHV1Lk6aziMljqZ+nr9Fd2pf1/ZwOnY8d6cekQjwiOWeFo02l6pIOTg6I7I3mB
oiqSmuL/Baib7Oedm2JQhDNEacH7PmPuX0er1vl7M/TpmdLS4KA5cp9kqxQRgKQbt41BIUIXs102
ZG4P0fD1+O7Vma49/t1QNJ4Ov149bWmO0athjiOmcl2MtgpZaqzXng7YorP8HAyoWvVkCV+w0OaX
vDn27ni+uvV/F/pnZeM7kmyhxvli33aTb6A/UdT/XAoiB/JKBw6r7+AHHq5IdFqwqZJLdkRstaFD
7/q7LA94v+Z2+rhd1aVsymZqxqplMpYkKUkesrSdSHQwUyyAH+pkT/oJEyAE0rIkFM/6JMpmjRM1
WlNaQaa19+1dln71pDKGdthE6y6ohx8o2LOGSssvbOkWld9+9YFazbVJBIQwjYiQhiJJNApDpJVx
jjGI5NwbMDhyYnApdbXBjblbst7rjKilYUybNNi0WFlpgTUJqZvi815o5uUREmmkmRJEppaKWYwy
vuvn9d83qJkWkpmpNUkJVR52JMs0TZJKIhNfjNXMpYpabyvWa9kqSVsJaXtnCU3IYiRStEyBFJIT
0oklRR6tsHsOI/x/u7+YQ+QmN+uODHtwQ75cZ9UJqCirSgdg5Imng9d4xSlFCjpEN8Qf3GQ0icIg
J/4wP+8lFw/3GAHjKKHrJU54wV84T/ngcgChQpKVUPZ/3Z3yv64T5T0qYHl/e6+ZYH5lQPkr1dFs
YWCUChhRGCGAS+L/fiBxIf6Z9UJQOpShP/GAyUDWtbBQ/7YCyz/aWVcUqJcSoqGTDbY/9Hd6hcf+
3EY9IB4nskdGDwu6DggITR4guLbhxIG+HE4x/gQReyIONCWJgHMgtsLaoaAGxKyJUFqCtknK+GBa
ypQb14WB/purENHSxj8aXzkTJnwdA7nYg6Hox8OPCeC4sJuMXukdynWB3AQSGZhwSHXobdEw0hyP
BYDqB1BlkRQ0CjEV/Sp0wtgfi0Y1KhogQQdMRzXNaFyVoRPGRMhTnDAKT4zxAdSdwJ4w8vvG26FM
GZIMg5jL8Hr7ekiW/32uX/JtbWtWlDRVsDQMQoKKUCovhgmCoC/MYIHfAuAQsikIGpECGUDcpkKy
iK/abUQM78P8X7/xJf37BT+1/l/T+/pdeA/b/HFpX9ZI/1fp66Ok8w9k9Jdkjn9m4ACbux8+0ZmM
bKWz95GuZ+6b/QjWn73/m26Mp0gdOnyiVFRarCc01DLoPAQyEvB3fr6pQyOliOLMAFn2wEThFByA
SIOk+apQ+qChd0PRE4QlUIkaUWgmAUIg1YtotVWK1VklRTWrZm+vTf+tcJZZInHIgaqRCOxW5858
pkceiSJ9UQSeifCdv1fgsYna3XhPsm34Bv2a92T/3ou7mTvj/SG7KNZuYYTyxffS93SyU5VHrLDB
bOJ4Z2pzlRTwvgW2rnok9K638u958vfRw9vlY+8H4jL1kvJmMuclUXGuiEE3rorvoL2uuV1Zp6ve
zW57bwqqKL/yzfg4qJrb3o6cosp44/+LcimKCasRGY8g6VKef9WyUv0O6FrXkiShMBKqcmtgvUtm
f41/rthsl0Q2GBwikFIeZBE+HbQyRdDoKaXhoQyh8U6GwO78cRgP4oocGz/w/bjsym/64r39/b/d
sOE9+G+JByMcWGP4+LDHRhwwDcHEb9qynuT/7s7n9/9O//PpJdX8J9cKFLXtQletk8jpYTvO+rzH
V5ozO7yXV666JXTzur3enzd6LqN9TK3zrrqszUU1nW6VzREi+rrjyXvLuqKo1q+srWsT3Jpu+pLd
zWVV9BWhdMy807CuSpFZvKOTTu849a3QuGzp1vjDDMOVve9vSzyrIm7WqJ7MzJF0zpISleL3pesV
88+ltj9/OXsDrfd01bfKIdNRBtNc9IbcoS3SweTG1FU1Z1rX0w+lbOc65KyuqiOmaYOkFMEjTAQG
MVsAjBcYG2I3u9W2uTfOUjE61rb6JMmSlxm2UmcZGR1tuXupq5U3OSne5K6d7mPtALUy9XuSrXXe
t7vDGbRjk6qcZXdXxitgo0uNbaNtIze7zDeHRveLHx0zoZbfHVXc5yFbIUw6fL4VxnN5e9ZH1sm4
LKKquQWnbzrl3xoearjMdVOFuQdOD3IFE0Sll02ymXIsYbY6IaY5q0TcZnOKblIkjrW8U671uXdR
veGmRy4qYGQbKYlU6vMxbZpgtGEzd4QlOqe86vYcYax5lQ31ko4c6m30G9ucOcmuSGXcuwLcNnTN
sb5fDjlXeurKfR1OtSMbyo931fLmpOnDlUb6kZp75I+Z1w1d661VZs31Mvlb25IdPmsK05zGcquU
nmqvmThu+ibe9c1rDU65eb5rbd9U863rqU5UN47nWqdy9TOX1dc4a3Zua0N6yuquXOuutXT3m9Xm
+qebu5mr1hnWVvk3znW+TrNmc2bea5KMnOb6Te5rV5bNvfN7dvZcK1dY8zOnzq6l1T5vQ+rGUzXN
sKqb1HvjrpbqHbx9ds52ZGQmyc3H3hV6vm+97lTfJeUbcrdGudS8zM3zRqoTUK5jOaZN8vg2umSV
t9aOctc62MlQzDdM6y9PLNPbuTWun1rJMerZcuVXOPlSZt3NWUMxrpvrdGzW6rTMhzlke+o61N08
cY7663WwuXWot1VPqTNvrXWcrkWWTfV3T2s4c30F6N1fWjfCczc4b5VdM6Oboq+r3ivjrmLeby6d
OTrm9bUdKSac2Otcp65q5N9bit9butY93erfForGVzdW5hUNldWujV5syVUNnNS73l4Xu9p85ibL
mG+aKtSSy7NO75111OFVmVzpnXW9b48vnTyae97KZDe766OdR63SZea5vqttNPW711ejL6d75Ra0
TnWZynZ108d9Y82GRYFMFhlmi61dWxLXOIO4HJFM5529MyNJxZlXMTR1bqVXJnTdGddWbuin0+g6
5nOletVsZcap9J3dHQ7nSp9W5x9VSs51KXKLvR06VLrdLC1eoY62xjZ0+nLdRyTGqlPSp9aHj6Uv
lW+jFGXVSmTRsjb6vMrpaH0o3zoyo06bgxvaOA+EudLY8uo+lnUzKI6fSoKzqlGaG3UCMj6VSEHZ
T6dySuiyt2y1Oay8Oqx9b65q7vrrZ01c6y+Vqub1d72TVnWcod9ZeuPWX1k6zlZznQ9HR11mZrrd
1zrVxvfTrKrrWTk1NaeZetY70+rzmXd83mPMqnV27zNa1vWXe7u7qqq7u7qqq762YbGc3V7cHOu/
MNMwTXF5/4Px28g34d3YmDtPkQO2hu1AsHk569Nmh1NhQ+RU4f7s5JHX0xCOx+nq6+yjdL9VeM2n
pwgn6XpBPirFAp0J4k7DUw6uZ5Txr0XwJnuc+9QLIT1e2t7uHbjdpzrSrWb4ZfO7o3ssKHq+qJ3/
l8dc2+f0mfF/mbAY6mhp2bci816GqW4VNz4UQzE8Ka2ZTaK4npw2je3gU2F9uFibe756xJtyCkBG
JeGl2mpAyKOt2+G0Ri703bsEUEzTNmGRM0WNHTNgYNwao2s9xhKEt9yIkimEpU+Wh6vbbhmTxFuH
EpQbhY6DmYaRx1OFDaLqSGb1vUmwYerH6MyS+7l2vDj6uXgXhkHCr7d+73yZ4xdtFvflTVTVa4SJ
uQ86uR9Cl0J1VNsN1YjrRPik8jGqfO0V4YyehyQ93oc6FiX9qTkRPHTGb09i9nA6MzSaf+Tc9DFd
1OiCFUbYgJf758+norfHgobrbYfJL6tWsy5atvjb7lSWR3CYSQkrRlTbd6zI8SVk9Ch1YE99XL0x
pXA3GB/l3Nh6DJn8VgZtroEYJtaNq2in6H9Vjq5sdKEE1zC3HkMHyB9JQzwwCEI6z5ip8xzJGYTD
w56tb+4fiXgOJqY5NyI8NP2hGFtRpSv3Eof+vXz/fFurwasps21hzaK0JcxlLy2N33iJki/3WiYf
i0ifi9nX7jMrv+/5Dv9fr0vQpuexE3/OiUibzJBgHP6QwZjbeY1z4wetGZ5aNbLz84EL0DBEvXf/
Nhj3jDG6PYsUHQex+DC6/Y9PZN+ck59m4ZtkFeFXGOCaUM5UDIlVd13FZj2onmROrumA/MiY+ngc
8m+eTxnox8sa7bTZpBxRA6U2hCkgLZa8OA0kjszPmjV6of1S9vyCnvzxO6vFk+1+9KiKT7Nv53On
hyXT7vh8Ph1HbvPNAfX1+PuW7n6n8rdjIYCMRd9XSS02euLDNTYmODRLskbpHyHpo1JOOeT9KDwU
YnX78ptY0Pf77nhz5Z7sW0jJ8XkM3MerDGZJl5jBOvzhvH+nn7L52Hd8euGYk+eOcqjsm92+mXad
+N76JAtHMYFNDsIDxp4Rw2buSMO3BnL5Hbm5IQgmJjkS3ZylB0XHqFinF7NJpuobihpZ1dm6IUFa
y5FjLbcw53MyYExAIib4ra7sUEAgQhbV+mUsJFREglqoYhhCTI5bCIYccotmimhISlyzkqOPv3As
bW4cZ8lsN1nOJpm35Z5M0HvjluduIBropfUvklNX8GD+tW/lhBh+Kxb1v8c1qumP09Wjok1Z8mTG
kPjBYTV8+GMVTl6ZkadqB/RkRfh8Q8Hx9G5CVF3ZgzeV5c8+CilPgUN/OzLNN3esmo1tRmbdtc+Y
tfLHr9lgxplrCN6zoAURh3mWuuvwO2xplifag1xSwGLG7ILuF1Cq2Gm7Gs+ZjeJWe+XeRStuuTEp
HQ1XJDLHu49FapNR6LDq6YpMbCqJL39lMTO21Ge69eBx3Jksidza/BPsvPz13987F3R6uez5d16Y
01RVNUvlORNBNDGn5bbJF6cS6nD9NT6Qy6LTG8n1ff0lteygQOg3KMJTiRCPWtqlhXLAuG14wGng
+QErnoWcFnZxCvMwCWyoSGK6lL3ygMYlGw1kYk5hFHPt/xt0FZ3fdpfbRnO4oSkM2ybFwJtrFB+d
H3sbT8nO2VKvw28OzYUxdzfORbPHe7TEkecszsSXpXTx5FCU5+Mi5z0y2YiwYbblBdJTo8g2NZ2Z
pYLbOOz5Xy5+eKOp9YaqLQzuHH9uBQU4Ofl+xzdZqPrUm54ay69++UnaLVj0bnxlimZaiSV9qluL
EYHHKDB3DEMIlCRXjSCSCUbyfpoaN06NcpaiIweINNZNC3Kxuk0LRAeaDHC9r+qVO39Pc11Pp3LQ
O9Br3ajX5Zm7q+z6NNKI0c3JdRl0aW8TNwS37owXbN9yn14T0XSPSN+uJfbV3plsgHqTzoW2+DxR
MZMZ2p7V25G2luhjGcSzncRLAvsesWzsPNXeleuPx8MSHY67rkF2EJhnww4c7wxJeUEVN1CqabS7
VDB442OUZ4fNZcuG6XI3VJcTKxaO4rI51NkmREoEVD+bTtlyxNLFN66qU6NcjBzAwYktIN5DyQnL
O5xuGRQ42gxk3z/ldtT28PQRtYnoy37eQmHOjbskEhVNRkIZJhG+AtwcmxjUK1jBygi9E+bTfRfd
EYsY9EBUidDfyU8P6Qh+813bpbenHSuzdrgbejFL2T4exuGfGu/k5C2Ui7VYUQ7Op+Gjc5SH8k7S
kFwCecyjZNi9UE7zEeU8bEyD+pDwnRfolxisPRMqcHnEyYsXnoqImlsE7HJ4kdPZKmy77Ql1Jra+
hVDVnPKgba4CrLVtfRQ9ONO277t2nZn45t1mpxRm7SVhIDamM+AbpRzHXHMca90kcDdzHLaaGLoo
eTUtqvg5KXLuc27aUTG5dPg8jFbqvJB0pnUnMWTHHPV4Hkhxg4msMcd8b4iaA5Wh6sA/thGBX0Xb
bTXZCJs0+vOHjzw6a6ibhhCVts2waTzRVBisc9pBlsFKGayqg8kFIlg0nEgVXgiUdUMVQWnPZHyb
ZHCWTSpWXqnvRG5HXsbbiPl0Si3uwaVLIulQipc8LZ3nxXNeCxLdkJdYugyTkCchODjCF5a/oXuO
bVNo3dvOeDqZjw3ScUBtiKpObgzyjf+extwadTQQwxx0zzutTyftic9TqDw9Z3TM/rg0wpu14Rc3
Ngb29Ha+GVaxrNjdKNxvGWJDD9Pct1XZlODHuxrStma5NbHHmJglRmuDg5goHlCpqirML0y1liib
ZmM0yN/QZZVZpEw9Y1pQUbJkRuzvu+i1ETbCnljKNjUieIGqSarvLnEaYwI5UcUyVSUti13FKiTI
TCrXANJlkUINqas8ubtArtWw4DsONgLzeXjRxfi35jq7vtoM/fnx3jnfnpAzHhxwG1Dc0uLJUGwJ
3XI6XPHrIIxjWnMrOtvlZhtqb9rICh1GOOl9m5/Ht6LzDbTpbbl4yNxLiOMOzpwSOv62CNpuPWcw
gNAYHODIf7/L86H/G5K7/j9tSciapCsKCJUnO07VKeO29kWQNvGt4RmPo4ZSbDW76vP6LrnRTlLK
Kre4ZyPnexq3m9kHvl6zeuVuPK1mrHvrWzY66KfUzhzrdVxy5nUeutVznJzVjeHCzMfW3HvrrnIO
665K3mrYZqamjM5rWVdzLWj2r1fb0HzWD2h6vFcYzsez8PtKPebjL4BXRsPz/oIQhCTPefXlUvz7
yiHNFOoIi9BP4V/xCgIGiBQDFkAQf+7SriChqEDmVMADQKfVDn9UTWDgsBQJxORRSUC0sQHCqo/0
F2J6wg+6PsfHodeg1K5IBBCyeMY/yQuBUje8n/rR53mS8piSnj9WZ/17thU/Nb+LH+sP9Z3H9v93
h6BRDuQD1QSRURDJS/kDDAIiKCglSqUD8t2l9R+EMRP5t4CcEhzLEFEED7z1n8muwh6EB/NPJGyB
3QN8UyCO8icj9ND/RCQDxneHvPgnh9QcWHwCEyE/FH15/JpLU2YSlOsdVhMu+LECYXKZQWg/keFC
rwcHeOLmUGEsRBBgfS5117+zu/NV+7go8nc7QkIQkCGw+bkmYfXqSgZVHLLI1F7QTgOemKTFJRKQ
lUfvw2umAxdpaSJts7jkWBsPWfySQdGEKonDKQbBpNcDphr5SfDIpTrnvwSO15bYM4Qu0v1/yMJe
3y+dASlmNhSoaDOEDkO5NOzgqmSgioAhkqSSSjssYqWSHpJiR3B6o7+1Q47no9qH9teCHed5wno8
PO7ZgqgzCyq3d9u3L7O5T5bwleioNGQhIMyWYSJX8fvftfyPxt+hOv0OQLOWI+JGJvVBgprnmi5D
CKGa2jYBVsgpiICrtV7s2mHwbXr+FHOIXXgCLplyY17O6ltsfb9Oi3WprCFtmnjqPHGv1XNBY5s/
xZjf+Oar5YfF7p4XeZCVVeNas17mS0fMZ676GxNi+Pf74cts3CPFuzhMDBL6nZ8llSFI9HnzvGEu
4evhRBrkPTYcmPccj00TgvbJqhYbg8OxK05vQ2TQUsorfmdnLIfBwpxvfsdHsvtaotWfQ+i39F6S
1b2KBySy9izhFFd8346JsuRVHxsPRLGlvyIw0xeHE2M0B5evJ5D0aA6Iw49cOQByFpHnyXgGxhMb
uvCmCsZs3hqYF2/BBIv4qwkRd2QmIqKJNaaOonQRvFfEO8E2iQQvsP4lFEOVF/AAFDrYw2VEV2jR
RtsMZoMBabAxppEDn3Hy+nD3H4pA98UFflG6Cmth0xf45KH8eZTFU9+eZ+j+HxBpSOjvY5fkK+Ad
gVtbJgaybqbtkO30TOPpCc25hkGGNNxbFsp0nv5oMEwwwWqzcrKj6NsNGprrkMkU7164hWzsryfi
jfEIvWjFD9v2BwWrDOtTaadzNm1bXyZh7nisGoPiMsmH7chJ7ULpnrR8jbVD49iZcnbanQcNDyY+
wMZ8JGGHUj1SHaVJJXCHiTo9yoa8tU/1a5CBw39XTcCPCO/lCdfI3AJy5/lG8okiw+onKdjjdgoR
XrNuznngfsx/ztStvwX55fszOs6uEa2HqfvilQY6T31rTpYY/h4yJTYY9oXC0wvSjDGTDEj2by1R
AxlM3+eTDYo7UxNQv2+OtymNTLnJ5UiaQlXYtl8fpwrparvHrRgigIQ4kJ5ECFtKkpbJ/Ojy6c9m
V8gx1UL0ZuOsXu7y3zgRCaw1rHEjaifgYVk713ybz/hfYepTFjgbtz6io2erh86Y/66v31HQvtd7
H1fYY8D9Y/R/NWJu+7txkpbqQKjlGciKbhSeXBT7/vOyuNFhytOWOEsUar9ZdsIKp6r/XLKmFV6o
zgkZo8wlBymU9HdY30kezUvoW65bLPTb7cpZp1N8/RskOEpsl65GymEPxei/bbExOPkuvtvgvffc
JuWzxk/P0UR6Z7zD6PT3ytwfbD2W52iycv3ww5BTv7uqTUJbKEET287mhU4ewxwKTahA4j1KSbCV
CEsJuuj0/zsMSD1mNoKo6Ly8Q4FCN1acd1Jf8zyrFVO7DHdunG/UrQBv9eUUGuTYZvh8nd/zNwQw
WdhD+nTsP3cPNGUkkpOx6Bvw+cbdDkf0n9YQBtueq/mbfd2eqOjrEN1B4H6Zs4OkkpkoMpAfWKHk
G0KVPm4G7dXsnOYxWJeVb72+rnlD0s0qbrYxhlKlbUWSJ4YUrjYrIreVMpPjKuUY4aABe8sssa4m
MiuUqXk+Mq5RjEsM4xQgYSQZsWHBD+rmNrx30mQH0wCQkOaMETMi7oRM9z+wkD5sA9mI90efIxcL
qlI5HsCMtDek0vu/q/3/2r/8yX50QSf+pE/xUQSnE17LmjGqEmbVOgHadv6P79FDuyxluA9wb/sM
4xXadSXfLv6fCdCa1mu/G8N7m95x3Iaf6slMPohtsr611zryhCq0Ca9POGuTxJ96NGoTC/fiUxoN
Vl4SgmrO8FL1e/PH00FjSVWOpCPjIe/+71nec2/I+3iBj7gppkabumaHchyji0z8+c0uVhc4Im4o
w6u31VHKMOGyg4awh857b+bqkm5Epsuy7lOI9VfLprIQkQ4iE4bG0xkKG/k9uFTcqckOXJCQtV5p
QkPQonTViE2pqirPn6/MtzOMFbZK+91iGiCo74KW8b0NueZkbjeh68hJPuKGsgeoXtNBhwId/LBy
myfQJDVdaB6hh6KYZTtE8bwGFld2DJB6yLB/GnYgomYxMguDFAb/omBg6JQBD9eOQY4BBiUiUNl9
74BZT0A8IozAz8+4Cjv9iv9YoH9a4mHcmAmEmaP9XPV7XPXMqIhAVQ7Ej5CZB++RMgQfa3d66F/d
t56fDc57REO6Ldjv3jynUVunFoSbSGxtDcCDy6uwrYyytRyqB1VZGy6pmZlANMwZGHNt7vFgzVNl
uXUodOIdm61VYXTWqCgrTplIjBQuohUDZCdUjvuUfxiRqtUwobiY0barrO7OuspaQx/6mozZ1Kj1
/GWHnBEKGJ8Z/yF48E7jBZIg8xWqTBlIgmIYZIP4T5zkwhFP+bQKE3L3dX5+fzVoeQezyEl9wHh8
tKTD2wADeIrgwbh/ZKZH5PjCnOCbekGZvqHeFoLEU4HB49N45j22oGwAX1PHePr3+BfPW+75vra/
SiIiIiIiIiIiIiIiIiIiJ9p0kkSSSSSSShCQkIQhMDoNqhH9qIHaYs4HQ4cvrTM1gGptD7wOIL63
/IRueP4o/oZDfaUdKa8qTsFWp/Kv513oaa/8AApZGSEd0wyOrsNJM1NFZtdV18TQyw0rDl/0BLQO
KGJRU4kbXhRiAlLc8J6pJR57QoWMQN6GEcatw6nZQzzWqmPCWXse6pQYkLJyKaZSJJMKiayY96F/
wQh4aI8/hoI2D78qM3S+bJ1NVceWQobuvMZ1z1bTZ0F+jOjs6RlLtdMbO2+xTuPvmXl3C++X2DrT
0ZV0AayAgXfaTLF0KeWscfhDpj6Tl0Um3KKyIrssrbTbv3654qE35XxDlIvAMzJdQ0Qcz7vrpA1I
uZFqKSCrE+oAPpOwy47zqnTh05dcrXKYMsrCE0rGr9RDaDjLz96+wlf71RUn0lI9KcNG20gEvC0Y
A+ay8jFsqgLECIkX9FqtosIKGDPsrjEKwax/5MDaveEQQA2bKZ6FFHnygQWhfLDL4tGmqIlhWF9g
1WdNpUFJhQU9EoUPdIljWA0ZQQVrcMehAdqqNIcMaIUtsHfe0LLMm5m4G6ZANIuS9HFzYzUttCBe
ZnvFu1uKwD5yQQOICIEXcIB80UoJSgUpskRd3tP856CRdAhpmkX+9gTJIgAiR9rKfH4n7bWTXcJI
omY4IOxGkOGbIcTFuSiBjZ7f6FznyKUlixjLBkfFPzqCRM0CblPU4RMLmPQYEmnHc77xcHKbwMn/
wP4ZqiaSmfTmEhR+5jD7Iwb0iEDhT3Zg0qPoxDJ9UBu0aMOKpICgkYom4jrrJCTy8acXjZy4xC3n
hwKDxfW17E8GBr3JwzDdSGBkhRYUihqIoae2DsPkzE1Bz2mdDprF48NBT15s2Sqg8ZNxnng10HFm
xtN9GU+AQ46b2jDaj3kt9HVnSruKl3jGN9o7Ozve967uqlZLu79vZ/CYE7Kn7tWYfPAoHyo7JD0H
ccouvCl5QeLCQK55Tjo4OddWmI4vSFmYep4ge2TPs5AYBK5UuhYd2aDYc5w/Pcx+gjbIJxPEPdI9
IdXplMgJFz10TpHpgsyPNEdnRJwy65LDI38NCr4QsLgWzyxNmNIodNGmi2N3jhTrII63TQ92trdb
xlO6QTvaqEqigKW1yAUxDYXpP435xc78UjyxeCpQwPTQaa0z7BoxnMZDfziKa9Fd9AbygYwbBCiG
HoYe/nAOheBxKSHGHGjxcWDMqykhUGHPexugKD28Oxbb5A4we4W10022GVKZT4DCmNopn6L3UOol
wKF7ecwWUTpld9t0gtgXIVcCmoxU0mxCTHJhsKfrcxkGqpM1HYqxQNc0qEtKBgmMfLy79NNmNZIF
/g1MaqgHKyJMyIrI3UZG8UVE/xjC5/UDzAcBvdc7N3Ck5xectKKe+RHxPtrpo69uqGSHu7KIhtDY
Dlui3ZIKOMqoGPPl5pKjyjz4tGZF2wDvuDYU0DdVALqqLqqAqqdUw7yLjO2NAGEsFtzyqCfHKWSd
ByECATClCUGaYOcPMoNbaJG2j6r6q/Kfr/x8YD63Y++kpICjFE0mgQ2MfZcVLolyEpaXn7QMD3hr
4/L0OHPCdARn45dbYM71Y7xabOHTzecsZPxkM4sOuGwq9pvSYoD4EuH7eJ+zKmiOMokLruhbgxDX
H/h7inPBg3PQy2myU1RALvQ6Y799N02k972mILDoQZIt0r4treIxbeX5K15GCTaSXO9ZdTh1qoD1
rvs0PZrfAOts4stozEumxarrnUnRxpiw1rnSZTTGzoGhs65pKkUumRwIbkIm0Gda1ZlnXmrIVYZl
OWzMgiH39pUyVEBPSI0xo+ej2eIrLnmB9yZJuDSnP0abRFQJ6Dt4oggN5DBFs+zGW2jHfHEuoF/l
g+uOZR7DnDIaM7DJ4wMRbCIYrKpWVVWmA0qnpmMIPzs8UHHtwuXLPN1htB4jDhdcJBFgQIGXMyLb
lOV/peBD5qVrAnn7D4H62TL+Ej6iN4cGmqw+zTv79GE873HSaeN5wraMZ6dKcxXx9J4khemrEpKI
9xyf0T6oe1/rrVJ7E6V8sCno/nU5mE+J8lWOYSOYFrlZkT6oggXghBaEQ0evXFpZCQhVXFLtM8TE
Ybj4xXr1+Pws9Ie5aH2EPc4o5byH12mQBBSB7wIqxA/RaKGpBcoKtnXPxsOOfgFkhghy3Y9HqJkN
/NoYMepLxC4w4kAqDRSP5mbxkCDCyt/Wl2YTjdUtb3oxPXeit6TYLsA1dnBIngsMu/NqtMltumLE
wyiXcJXFNmJcpQLEiKVieNa3Raha7ZPcw8GGKxljDGAipU/lKtb7QiprxgkbRjSLYgjwl4EVIhKp
IbjFwm167roavGhFAMoWhQOZszj2aBwN0ae+sCjAm64UraGL27rtUyF8B6PN4jkfBCK08bSkg4co
i5E3ygPZDs+mB3xkh4ZHJ3RzHIb5He4KTgxyqg0Cm3W94WirTL5o4R8+fCHBORRMqqlaerDCYLaL
zK8wa3vDCorp043xcFDAYxqyrt2QHjgMeYRcBq0x8ZbvO76Ofh8q+BAkSSJFUKiJSSEkCJSCSE6d
Aiee9Meo+He7LDFusd7pEXY20Oy5SVGlDxzCFHhsZShIFtWRTMk53q9khjbsY06ISmEZYaaGmmSh
xRrd3XOpyWYjZD2g6MIiIjfzZ1TIivwrbXW3WyNSkiBEmZgLEk+tXa34aq3BvIBaJwIcUf1TdCQZ
FOJv3kEtAiZmgmqfv3FcVckTgZ7SykSsmwJpFUoCBBH/8nmUTh1YFPUIfJBJEU9Q+eA+6fQOg/VG
ngfsh6SGS5AtGOzNRFi2Na81q5Ra/zvnbRijUIm4CIEItw6lF1xj98I5AuiGgF/olpUfqgA0ShuU
FOCH2FOhTpBVAfdJ0R+nx0AGoEyBAiBIhADiFyRUzuJwNQHSBMgHUOmNEjRR3QCZ+Xx5VD7ixhHm
A1JzHjD2cZzxjfIo4KUA0AXvz3rpsqMJSotX5L88qvKp6SAZC0CcQjhLQMQi0UJEhuBWhUMj9N8B
/SPp13VyjpXw/kw32lQm/yk9/cUMp7yFaFRXWikMf1oMPGkckpAIkpClYgpsuCP8U9uXrpP1hDWB
6EODyFO7BxEokY4oamF93sV9PlhcTYmdQE1hX9kGueAckJYdy0B3ev11Xyl+djA+EprQxQXUCzni
8laapgcotsLM4Z3i2vm2UU6iBkQkJGQHCFKB8gX+KSESdm317XXE1PjMN99sP5xotEgWyCdw357/
ovSGvZxeGHfXxRjSIrETckiicZ2eUezPS5UkXfbWcu4yvttUosig5GRWY6noQ1CghIN2tIMRpwwx
aRpYRF9njyvX8fW+9XFo9dUF5wpsw8UKn1/oc1YsaR6GgGMT8EAR3TIgKGK/JFVaqhbnboZbjxlv
MPrSotqMwcYy4R6aIGVfFpmlvc7kSTrNUds60og6avSnGGD7GQRU59SL6wKjk8++VzYzR5wiAlHo
PXhGiNEhjqWeatt2y5RWSD83Zq8NXKEbhCDvZNUnYki8CvhMh1E6r+hkWIB6Mr5jD0k/45F7+wwz
G8rVoIYwCkapopeZZxVYH8wBEL47UFQe+67TSua7Fsapryko3UzQOEEKJB+GxBgijBZ52AGA381g
u8JE6I7Qennh1McGUgikMSkBMayJhBNkc2IskWsZw4Su0UX8XW3fzpdSJGbvS85x673tbyk0bJqN
U020bYWxKKmYyNMMsbXyinMVsOYECzhS0jbChA6cEINZf6Tdh/GcFuLLAkDtbAO8vvYxwB4OR3MO
mNRacKLtQ7gm265praaykmlMpZJUyyEmmSyaiLLSFUEVpTdpq7U1vV2VX5aMIlmEORH0N9QGbDAa
BcW8B3A2Da8ImzQ9YG0B3MBD38GkEo0YYyKbBNMiIHsgVCgCgFpEU1rVirbBZTL9Wv4fV+Yxpaoj
WWRo1Uy2qM01X5ScUC/d6/9RAvyO0h96Wh+uahuREDxitgxeUcbI/OlfXVmB3CCfCKAnYEBOj+2+
sqXJ3M0smnVRRaGywqWIrTNo6425W6srMCRSDCAkCJsgdUNrMHkGLCBpPmj/GT07wwPCT9PdnCrc
EdUaY3jiHbQ12/kpalqVLEgs2LBrMMDSxKyE1BIBASMKyOMACrHo/bJxTsepqJwn1m+/g98/MfaA
fgP9ed0DqEDLAICIAaiQ3n3YRzU84lUks4qNxhePudogZdKoLhIFMyHEYSqiNIqOEIZCAM8Jo0bN
wmyUoBZYhH6OD6cednDgN6AC1FrDuw/fZGXGsgjFHA+GLkJq6rKtFayrZwgnhG0mVEK1kURQIOvX
4oD1Duz5kcVAYna2w6hJE+jkcbkOE2MoAls46wAX+GkcZI3gvN6WB0Qb646FjQko9CrenVy0jtd6
7s0GPRhE9HgvbY7YHacG5CTeziEF6sKinCdzxB0bAZCMGcRDB3p0UE0zmt754f8xv/uDPnkMVACI
sTtJKLbu0LYtg4k0ci4sOMgCoJhiNbAotC0iDOi1WtdTkzrqg/bDgRq0pRRFPevbD5Q/DfXN9K/v
olTZTMoYxhqKQgpJ9Lly7J7523e6W82KMGU1HX3QLouN0JmoqbKJjixy/G39ec82FnOVJEmtmzVs
lMwXmQ0qbQL9e1swtlA2khjIzVB68HjL73R7XQ1RMdcwLONui2gZKxKj0jPEzJoSiqHYXEes5ewj
hSIsJpMRCzq/k+A01UwvJgzAxGBmQFGihJH9KB6IEtKvt08W+97wc7OnQKo8O4ccCCzIqmogUF6t
1r37p31HhXQ7vTxh0JrWZTRFEEw7OFCSQiBg4cMccB9C7AHCWQGAtcUSkWghSVAKJMgNihsYI+wy
2P5BAgRfmipAimYkhWkKE86vWE6u6nBdORkhoov4ZGXX2ZDpu/k3bl+X86B+Sz5569Bc3/apIIh6
kX72byz/Ax2ZkuzdDVvWqq0hXvlNhjxmh6veFbhHjGPGIobL2LFrCDpht7bbYyo7kBsYwYXu6wGZ
TaalOBsdqyVhdXjsJHRGQsdQ0KLvO2iJYqaaWlqkymdlBc9Vou3cCuDzHek/1IXtxCo+KmkFB7SK
qECU2Ux8ABNipKIfXHxJQUO85kKQNtk3bkJoiAGyk3H5inXc1SCj94Kvn0RHTN4fhPa+poDMeysz
11c8gnyMNYUJOVOLAzoTbkhfhThg8h7pv0qQp8ohVlIkVTYuMPLSA/Y02mmWGaHPaXz2K0Vbute8
Iysxjo0ZPQ60XZeausaMaVtJsG2RMghsJNODXb8Ca8B/4+9X2v1xEXQMGDQjuRIMLPB/H+bLCxnh
8ZzlB7CenGx5akTChOMoi48bGFciOT2TR8vk9x7OlFhfZnZt/k1hSHUQ7MMAk8wEWkEI7OjvcZ6x
LR6Y28iCbh/W9D5ye67BhDYmBYxKQylfEdoVbSUkQLae0GDD8W2Ar/v6henIft3d4MgxvE0qAjMa
gxsC9oatxyhBuof+TUrIU06kf6TsgnbDCwXcQGhiJ/t7yaZGA6ycRgfATh6po/KQGPIE86KKCjS7
PSq9fPonGM69QonGzzepGlANIGKoGADmJO7g7Qg2ZD7R2xMshGSFl2Rllyo9D8jD2Py3uKbhP84+
3jkk84hSRt7BHHHLr5cdfjt+IrRUVEUlaMuVPVofEnKIPY+f9S+rttgtfvdipbB3ATzwoWQBX2xK
//nspAOkQU9eqLXZJ51SjhME/E+ekTxtKvsK8x2SOktMwgeUibh5jKfyu2TVu3wSdhjcjtflB7Bu
DPjM60sifpwVjF9pH7O6nKbqKHmk74pthPywyI4A+HCgHdIm59BK/QQL/ql4laHcqwaUTEjhy1X3
fjZHTD7hnJFQaPsNEdx/EmZVoqDOx0HtcCyZgZ0wXLyhD+Tji8ZfVnk33qnvLszC8l1iPsX7f1Vt
wCRKJE/d+z8SCH5bLSCj3dHtnt7s8J6dUIcIgsX84jB0T5VF9mQluYkrzdEenFPDQAdFJ8kuI7ui
6EO4F9wndHhGVNcEq/VPs0WNpiZmUew1LdlaYXaCLCFlfiIDP1bJZTNGuY7uf1lBjtJtsYgA7GCa
RUPqhTpRB3WVOuuMt0cwIi11RnOaluMjB3CFPr2iEUdGDlUKoTp3d/jkmY1QLBoRJFDZlWILyhYN
caWFtjenuCdmBg4ztMDDM8EFHFHeTmdXHCpttnUZHfJVJxQt3WNKMZMSzJ1Sxnh229Mjk3U936a2
M7J2M1dJUiLLSWUGhtERwwsUP4iIH3gqShgfOCnJFIdA5RjgNEymLBpQHQ6Q8DkEjwXeBwdyxB80
DpKChSkQqgKgLvfdvffLH0TGRF2N7A4tRJQhgwSQBVINOEoI4JKkBgcAmFkNe4c2OZMXSoZHya4o
zIaEZszjKNpV7kOzAymYgwCkSAklF8ZHEB9zv4Y6mDNYBoGBmQ2kqY2JSEumA4DAD8cG0F/k/T5b
o/3nrPce8qFwFGqFfLS+49HmLwPf5hoF+UVN4PcGiV0767ECRYxlMDBxNQHeCjxIg6Ds40H8Wp1c
vxcHOfKEGRPZilTQEBEsQgf1QDWQCZnp1wy6r9GjJtFTTKKEsAUCjT0YAwgqQPheMV9NfV9kxnDx
eLInUdwuAO3YKfTZ7IHBeJPyQ7UXtTOsvi/VW3uV5xEmR8izYDmOx4DD8vzgDHpjEcrMn6ioyhD1
MhMu5SlcfXYIRa2OmU0RAZDmshccGA9bdAaakWg2xoOaafFAS0QN3p/H9n6NJGsVwDth1JSwqXeZ
hEvrskcihHCfRH2Q66JYHfYzOgMM1YO+GHYRVUOj4DXd6tYdJ64tlEVRNjwW0L0TDpw7GEs4JUiY
BQx9x/vjX1Vty/m+yv+HbuOPD8ACVAiyMae+DIsMcYoU9EI+cCfRAFrMitzkhkJ0CAOJXUnMsQ+m
ADJDmD5YXgk5IeCNQ6PWSclx0K9HIHRhO27JNQRUV2oIMOMevFHqhKBpgPmQlTWgMO34AcEGDKVY
YYQ3cWwCfTiCqGPQMyFYHhEO/DqfvikWgX+zDED+6wf0pwIHrNp+wBRG4IorCKAAHV8KVQAvf3e+
yvbns9Y9p+HsBxDylWhPOcJyDJECi2ja0UKWxaqNW2U0iFjFaxUGpDaLGrJWltIlKii2NiktGqpg
ImkwU7lE7b7CqCZBo8UO97ygnDByfCHNmPMX6PE0hlRXUUeJe3AaTTXRuqFpkZVysl4UOqnDGzRH
EfZeC4uI6IyAo4RD99I9/dSRBQRlLYNGQrUklK1QpwF+3XZu3rXaghPaTgPXEI0nGg8FBD5mRU96
QURQQ1AT0eeh+T6tVPTDhoB5PmlVCglSpEhIYoGtsBaf0+t1u47O7MItjH0+pEEEh4dLxyBR3Hl1
RCpULh5jlF8j0TQmnVIXcOh6w5Jbl2lb4Z6UfvmLhIdB/YwS9IAHgoB4phBBKeUUsQQVBGSBUttE
FlpRqyTTJPJvadsIbQbZ4KHBiHs4OwQtmLAbSAMEWip6JWghjaWuhhsYvrf1Wyg9oIvr50KfldjV
U3Gx7uQQR83ofw+kPS5fGPiR9BAA/rgDl4YhyWlApWgKaKFMkPVLrsDX8x9Afc9DnpUoRVSP80XR
sQoiNyqPdhjptuOBJxo0/bIpSu0sI3LpL8cfNrzHWH5PoNdpChtO3BMCVyXjzQDiUE4BQMgCfObC
X6oD5Qgh4uVg0yCNkpEzTz53cyZQWopSwPOZ3fW9UPmOb8UiagdYDoL1UAJQFdHqjBCRTIQAPylg
k4fRwYib2g8yqmIhiSj+kgLn8pBsY56+JYgxHpA7JpE0UQq4IkIiZKf2AORBi/4kXjX1/qTlxOxO
7tTrSJYH4+x055+oFTi/SZ2HKsUFgo+GjeXyOvLob9McdetJKlIRMhEQeohyYJiyyhF7DrwxZo7I
oNjD6tszCHI8h1sIU2PxqF6oHmo3Bm3uiQgcOiMdxG9sxBb4IcXn0+TZtM8H4XWuecMbQ9n8G8tx
ueGOS2OTxIx3hVuQqf0eCh1hBAhufSRRNg8y/QBtv6oeGZneAklWyj1YsuYPW3H+YQOB3YToXALS
yCPpDntZmC+bMoRe1VeEEE4/0hCwr579SBRPL+ykTGJnV3EJ/6Y/Vfrv+I2IQ0G54g1GgkQyBdSt
AFCAUPWZADwmAcjOx3bHCB44hMYPH1wiSxga0RD1hSlSSJLbpo/22WwEjco7kGjuMcQ69cE1KvO8
UMhGSUAHCBE1AjShqpUMhClEOZU7ZHZLywKNUaj/fzgcQKlF+x+a1llK6J5tsHTrV9t3F7CS8xn3
BBxplyxaKIyD4HhZ+SRDcN2yIIZgregqf64J4qMRO9KGjLLdQJUjBCQElUDUDy1+AY9oB/cQNVNj
5K3kgQfC/SfOaPnQcwAgImUT8MCMEVSIfSqGhGgSJGQpjvD8Af7Ips09p1efyQHn/xwScDRIwqQK
hKKMHoH7oRegsosgGxvO1IJqaXFgU1AUiIcKiSB9r99iq6D0nSv1O4By90GSVpIzDRAQf0coYDAJ
vLQ7OhzG2AfQq+kIHZF/UDD2H9I2raaugocML/c8KSP9zTsCpFP6pEhjRRbqAwohTxO5Bshaku7r
TDEDXymAkkLLlUbCeghB3QTkC9QqchEg/hFdTM2SuEK40yiqq7xzRPQQeIesOsoGEn5KB4wej+Wy
4W+4w2sMS57iNNi+Jwdp9O1jRhPJgDo1is1CUvJLko4QZAMiSA88G+vBAUVDRBwX3ZbdEnyPdXTa
sOiBPbpyH5WvoMta8nhPwtPPsEAhu3CJN470k7BP1fKKB/CfopFe6QA+/MFAkTLoA/W1fh8QPBii
LGhKgvg5p5B6Hhg+jIPqnuw4C2Cd880Ma30C4NgPHOK7DtD+i9neYSh9MQsUubQpNipSLW62rYWK
BCSIIiEYpFYWBUlXyc6VWa7cZZyH+pMWL9XUx6G5L39+33XGrNwdx0NVfb6Hn4Y0bLDJjBqY5Sa0
rQwsHwMC9FPFRE6Hvd4+TKsNvRGMYnHXf+XoSPGb99I3b0IR7ei0tpruUUGKPY8qGj0rF/leVyHZ
1Z2lY25GEzPF3V4dzJ10ao38puhE0T5dcrNLkp9O2PX5/2UWdya3RvqBCf3JLs+W1pmt57+/daWY
zuz6ZbAq4isr6+TW7I9/XOqVrGqejnVDaDTYMZ4yrtQaYwYMFKcrm9WZmXA46aRTV6HqtVTslwo+
CYawuCYurLDDKrmSwyB7jYBcYC4gaJkUOzNSIP0D6RPID0+VCkiIRhKIr2G159ATAzKbBH0IfE6R
/453K/pM+6SC8ILCA1B95/Po3HmnqbHun1BNsm0hopoaWZI2mbJiplNmqWNlpkkqRJWLSCiW0k1G
yYMKCgpJvuRmVvuCByWvNNOKX9MC2fxPBveYEm0gqk7GWC6B26xtDjnbXcdz7kXjf4zoHbAHgnrZ
whw/y2fsx2nTZv8+9IRCEkAXECGSCdMx086A7iNkW8TuLCNNwX753bShlHREjas+NM9kJFH5N9I2
gfDqYGjunG74HgSes9qEH3qQSEnO+Z9QiQo9lFIUgRC+E/t09L8UMc8jyichTKuKtttvsriYmDGx
tpkhEhBAfSnqEO02fymgIiIkRMJBMCEYZEDIE1CJ4SpolNkEEHQUfE2Mmx9xJqlgXSdD82DdHQQR
Mp9FqHxw/rA8QB9rwQc4iwH7TsPQ4H4jyItppA2VLInonKH9s0rsbhlUSIiRB24oYyxAUgEYQQGI
BiyqTCynXDzjnf2/61mBGTJk01iFRsNyYqkSAZAUP2ki5CUGicCS2i1lJNlNalkpklUmUk1pVkjW
Lao21b3XKkgkgL9ZuFIj3fIc911Ge6/XkmCH0GpBBL2Bhkj3v2ekPxmGLM8nqsHFvH8an7JKIolA
MU09/5II88Uwiki1/A5NMzFRAazaam0VbGGqNNaUMy2lKglikKYKmPP0fsMsaH1A/RHMAfw7dGzW
iIFKE/SHzezWJ7GGA8gzQpyQKkMIiH5uz+8T7IYmI0IGjn9MqDQLSgp2AOeqk+YAwMjvO1BHnTQ0
QRSVUNKdA6nbD2BhGYdgqWJRoVMn0iRgy27qi7iX8BizIkjkWQKVKwNha03EC0Iyq81VkGLu1MF4
wRqINLFPK+oFDI4pgIKe6IocyVQ7ej2ifVVfTZVx49yCPzqMqvgi+ngfEl7351J4vU6sMKgiKgMz
BDMa0a0quBrL7fMAQ6PxP2RTfAsyj03VVVUfmoqRnzYbNf5hHdap/rxXeQaAoZzsLROpAOs9J2bN
kkJAwjYB6CAvhFD3DtE2BRfYwjFAe1EPwYgdiaVVChD+AThCAgSEqCR4kOIABNEeKeZ6A95mH5w/
UHGkd4FwYRBYDX7Gj+YPm+VzkcdcATzm2Gj0+i3F3WArBsk3Hcsp1DZ87wQ0WTSYQY4Y0SmNIU1g
rS6R5M+JMl0Z7qASg3kT6pCj4+rwxr5D2G/+bDINhBBDJhBQZmqT/luuivZQX03e7b9xem1vDJLe
Jkko/qqR1xgFGSRTw4yoTjiGi5D7GtVCmQLpWrNXBWjE0k+odca2UYRTqTBOcTZEaky3rNJkRA72
c6TJEoOpBzhhE6Ywo3GkBPEgPATPztIdQSOQRxTbmlNTveq9h/zU3U+MLMqJkZg6Axhl3HXFdxVV
GWAnVQ4NaDuKROgQInaYYvEFNAbkLRDigZD0zTVqw50hkRcaH6tzDwQEwskNIHQIEcTYuAQwylbb
uqayq3N0sm2wpFiAJQwgAgkElAGA4VP8uxU2/2m+oZQQmRyGMHloTgPoeXlE3KpmKgEAN6IQz8de
/zFea6q7zEDgKiGbkTPwDcBzSJw6muMQ3DxFoczeB8p2oaB9ocDwVPOJhsyyjKjrJSaKfEOEPO7i
HUbukk6slBMiWWZUkBhAKwAvB7XQrr0134AHuPyh6VkeA4OsMscL/QMtmkygZqoWz8JeBMrdJZ8e
WmDtmRAk0hGVjWz1RB3ROzvO/aDs6xgwzQdIsMzHIHUphVzmflzYQ6YK3mVPBiJlDuXrpNEt3EMS
oozVrE2uziANmsz0SH8hfTbZZGxanVGzYyaZdjGPUTZBmDGEqzFpHTlUUKNX7BsMRVtmqISyJsLZ
cBqMqyXoTvDaaD5LmNhB2SmucHlIDUEn0+VimQONqAbjVaYEqb/LWfVVVrl1Zdt3RbSVLY3xulue
i4ANlNQLFCKmgd8RHQQOz499ZmACHN5UlgIKjsuxyEhUnDlPr+IL2L4MbO0Jnioh4iwBFCKCCFC6
qm6qwkSGOy0EaRAihCjyrg3iVlIxpn8xS6Pi48cg7IsWwY9a42EETZTjtxyOZ3wJIeczDrH+4Jfz
IKnYilpp32p2UZo7azPm7ULrqwaBJ5S/dEza0xobsBIEMPVN7t7JOVPiMtyAwB3gYTCSOQAZHLi+
tD2idfX0IiOzTIxJkBYZhmBWGaw4MygomSgWCCKf+zajokeGkwXHl1HPhmD3z0UdLBXnjDa7xLou
SBOhBFEIcXoG5QTfkJvynaHoLNQQFxxtNXCB6m65uItrbp06CRti1Gxr024025dm7pylq9FsYUUO
Ey88k7ck44clBGYaBCDM5NmtO2SQIU0RhIygRNjv+wirzCx2KOoCE0eupEoAYMHVaqFFIs7XQSs3
iEGjv8p2khSZtvZGPx9ZrnCc4LkfglEERRn5OvQtfgirG6GWdjs200pJh4gWMraDcbQ9Z6CbHKvI
nxNWHswmMAckX16TE9Qr7S+vw8vMxfXLCHj6F9ObpfaUJLAQhFFoOhmHTy/n4lov5wV3n4JsDRPw
DDBBPw9sASUF/G9J0zJJ/B+Xd7AeyBtCSsDWaR+z8JKP4JuWfogf9iH+k/yg/+s/3QJE3cQ2/Hif
Xf8n6cnq66SmdtmWziBcCQKiyZMCIkGYqYyTM5YAH2APxQtmzXwVFgkFG93tD2XP1j9HsInLTMat
ocPqHM9wcz6/7URP6O8ewIQiReHDRLhyQxOOYQQR93lPnkH337enTsElxNF4HTlS0cB98oAcBvsI
38B/Bfst+kJDuxbgpLTUrewdkX3GnPvBo3ZP6Nxug2yXaKVRChYofBX4PMW8cDmhdJRg3eFNoSEg
H3gba0h/lBnZmcqCIvNAPAghREtIESCIH0a/QAEcOFkAqCGizB+HpTKo7gZ/nxiJ9VhCt/X4L1M4
OYet7dYEw+YP8QQPShfDAql9DDEmStJEoBQwzNMQTEMkHZIpQBtQDgxekserD0eeeXb8khVUj+r3
eXmvm/sJYQPejL8UPGPkkT3qSaJiPRmMBKIbNNYD8rDqu8d94ep6U2WEzyxZnJBrGLf592XzGsBN
VPFH9wdN2XQo4yVtBxo7I0oGNau8O5COnBsSb0kiOog8HaagMByfqQ67VdxO02i2IGj6QghJ7cF5
1WsL2b81aR9VQIfxgShBTWjBFD2jAV2/ZH84QxLhI54Sq1wWJfCNGfoXxTC+EQRfADnQh98AUBg3
8TlRX+YqtHh6/QFTPsIxQDV1XYQzSnaDHo7+p+ThB+Rk8qSRMIiT61iqdWBgChZCIagAH7yKBqbb
VA7N/54VYGUuGIUZu82F10z8o/ge9FDnzdogQhY9D9IAA9ygIZKL1ooZodh9oj4HehSsTADHYaMJ
dq2e/zht6BKJHi3EOYgaePN3bpygqIVjB23t6w1bIG4czNeCII+Zvix4kcqED+05dv8HKhGy3Kq9
crSjS0U7VHGLX8CfhkNu86BA9UIj43njAWlVOYR8DGAtMFJvIaWd7AoIbzt+hx6mcaDeZAzHG0TD
jTtONvTSkQp6C3vDJcAcTEn97aPp8B4HKi+6oN7na/aYPyRHg6QZ15XmsGjBZAiY+og1D9+C2NHZ
wX7Oj9Pr/TXLuzNLEsXxIdppnE1XEUdhoY4AatDAWwhziVsGBMZgOStpgwHgQO71bg9CwH3b3rYS
9wCrdlEX2jmHsHhDVIYqHQp6oiezqfsZBS8xYH93uORfWUQeeVTFJlBy+iH0+GLnGUYrn91GsH2R
RMcmgsuVCABkuEKmSFlnN92ajQOvHeddjsk9HmfrLPDCim1bgi+pHj25p3ux0oL1S/hCk93Wairp
XQhh0maxXtNKhl2whAm0om103HPILNiViRuJhqhB6QQI7WdPbMBtjLcpvyKlGe5AaOo2Kixpl0UU
hHs3BILFeMbUyFspUoVCDgBh7ttMw0aMDSGrKqhNulI0FEZCRETUUTibcBCman9P5wp21jYD333R
K465e/eud8OqprpP37ZJiPA86RSKESBuOXgCmVkEuHlRTLSpEI848C0XMMcTJcgwkoZF+h21oyqj
P0+SUJhfo92IHRAb2bdmFZF/nzT2/qzR+uT+REhfF+xPoCChElJomQL8SBvviIdKD84vzx4kGe2R
OsBkKhkjT9TGZjotWLr173s2rzzZJS26gJduup1OQtVzWorRaoiyBDIACJRiQBMJAyUTIXFxDACo
qxMDofn7k+TiTk/w9e1QOtXRKeMKj1SmXIG+HWIl95FD0NoB78ylkQDznZ0qj5Bu0ly0nSClpYTq
I3ZTF7BEs3mY9RhlClVhtDBzDyJGLRRePloOohInmEnwnDdldVO0MCFM1H+bjkguFgMgHTMtACJt
NT3DA8AOfUVkc50EgZkP7KyEeYkm/WpEC0TA9+ie887xdhTDmE7OQh1b3/CaFkRTCg+HXrEIEHmE
dp7HA3LIeKxJK96Qfxk6BImiHiD15mmzsoT2ExUAUUhoA9BbE2KZHgKfvfDydpyxRyCuRiSUnXR6
/ZLb9hW/daNXBYMFzHDAaUi/wuOQDxQ+YZOwKJoEVk1H6fqTLVM0SB4UEA1q1LoRiJVIoczV/Ido
+vBSkDwcokAylxr75IPLG0P6WlQ7UGkh6gV9DReKDPlfuOyIpm8Dn2aBNXIC0R6PgPw8uzUe2pJv
qMKX+vFKIXKClRQ34kP6d6u209HQ9ul8Nhk+kjdnlAcGAiMdFsMLlwB8M1QO9YmpDSbT31Y+J4HO
4k3YZ1Ju6GmZ4P3zZj7EJIbSEgZqQ8sCAGQjAGNU+hsfAey7Xq98vUC32416enMTkPmJLCmDMrRe
jI7swpta/lmjBDMGUm1QNiKhshwWMPYyl5I5MEn6EUz0DP+4EwKEBiag0YUEERoESgJcCySymUFl
VLoppIjQhllDhaKKBqy0IshRSqoNN0iBbUd9LLuyoSshgXGYPialGRFal2mACCEQ3b7dDtY7EleJ
l85I8IuPR5Z9GjWiMMlkQ7SsouBoajdzKiHTdVMVXBDu7+sSK90hwxpwjWmbsuiDAgNGx4WP9Qyr
n7KWmdmbZqxYWhFxW6gKPDyMEDi7wuRdQ58fPDY4yeIQsHGeEwo2ELyohFXIDSRGquK2tSAUA1iK
Wqq3C2NZCDt6coGXIwxm7LdTFaKUq8QrYkriq0K6QwLbdTk1FNaMxiEzdr07ef/ug59Ro6mwDpCC
HXR93jsBV4C3Bxjn6Bbb+eqJDZja/wkCmU4qgZpBPMJ16YhMcpUVCVKTRUgkQYF0pbGOq+fWjQa0
MjEHkyIlfh704882FDUEQmhN96CCAiCCFZy1KwcEUFzQOgSdcYaY7DnQneSD3sxlwZTZlVn62T+4
4DSaTbGDIwbluYyMKHbCcLeX/lfeoabE4Q1n43sRg7oDXmUjeXcdCaBrDEdDUpgEgbKVtIYfMKoW
rIIMYJPZ4YHYbMCGIqXk5hc1hP3YYwA2UqzpleMay5Fh6/BHiZdTv0dJJNpRCZ9lhzyWUk8i6fdH
p6OuRLfOKu4THLqU9ZasshKrLLYYUadF00ZDbu8DKwGS7KAKg8Liwv3oQiRv5V2DbcINgb7Pz2H5
0JUhKqLUEWBBmFsihZF+n+/Be8NDS6qkwajaFqmqEMgyeqdY+rh45TAxDkCAxQAwgMiViaEs5TtI
h+noawNSVFjPK2yyoFMJBYRECyCtEECyZn1+4+acs6Ljv6ZXUan+lVQkLxmzDERK1iRHvx/CVLTx
jG1EgbDAZxRgjz+8OOMA6cfovjPvevrYo+/0Z95swIl5xSFonySdLy2egYHN5SZRwkb8lCwlQk6J
G2MGVZLd5+7JcWoUzKgxtssyV9MHWtWqEoh4VKokNCSbJORX0Z6xSI7MbMdgno7C2xtJ/EOHnfr1
EW3uHntFMWS/Q/ZnarcRrXT57NItqpy73EKqsmywZ7N4Yfxk1sZF7Rs99tkimDDhpFZA4ceihtcc
ZxAzrNJJEJbgWXVJCFlBUlwD6icdNW6Keo53Z2xkDGXl36ZGO8ieaCz6+Wf5suSH9dYUJ6MD5rRN
5CT8s2+RFD6wH3IL0PWdx/3e8PQ7G5pAtHtKSKK/QChCjr9J9R3trCPqLesNkRACNoAx7VCtIBB6
RBEUXJpIdgcjmI6U1kEDiMJMKTJgkkJtp3yZwoHIvM7xALBneHnfGr5AS1bptr4ovLPK823mu3Li
TSb5Tpa83s3xckvEaHVuR1ZGQqMAE6+5MVMlKU1tcUJjGAMISkCYEmIkBgjQhhgZpIBsc/KqUkSE
qI1olNZjSzfKlpVW9iIAUuUnMZfNwUDlQeB/+Tipwic8cGjWxMTt0qrwEiCxd+GCxUA5jlMEKpAx
FMUQKS+Afyqn38jtSJBekaFJ2ZjuxVmTnWjRRAZYcq/IiRCwHKbVezFQT7YIIbKB8U+gNE7FyTgb
BwydY6yH0d1q/7gAgj9XBo7TIqjxKWiGpj56ojbIF27cfhdpKji/OPYHiwlHpOe+kR0ItloVGEA9
tfXfYOeLGoJuv2XibZVlDs+RQw0c/PgENGhPEjoQ6OpEchjQVBwbDP13QtLDC47GOYEbc38iWULV
ppMU0p//fjkJZ0rSN5+c5RXhnlhR8kfEjtJNqDYSsouJ2F2xJsGEzWS1RsWtjbKl0tcrFsmxqpm5
ajWxaogk2wTIPICagwswCJENA9fdv0mwuidNKdD4ZQUawTGdfCZ0AKnI8Rt3LgDCB9RAhMxKES58
xYqRIZFB64kIFkANNBKCwR3UCY32vrippkoewNwYjFTcAgmyEMibtxAi+OAuSJEcQUr93ku2Ig0O
YUyHZGuIRww5/fsQN6xFdyhQhrUmNgDzV6Yxk+n1Ytr169msIJbGV9VRy9PBtEgvJA+yxMyCSBui
vsgHjETf+o326iJEw6UdF7mg/USEZIyK978o80UqEiBov3jms9aHsM3sdDccuEoJPFYmcAHC0u8c
G4dyY0k+R27bmJTIgncLpHQmEJASwETGGJciDi4na9uC6QAeZElOVweSNuAxgXH6LqHttuzVtdt3
YxJuGEDRz0MjMWtYHKmpznrwa+prXniGi1CbS52qXC9EG2wuAOZ5DY1QQp0bEIo7CHZNi2Q4raEr
NhOtuMplpLdXuLV72ruiRWDROAY1GQ2GsEwJWmlVZF0rtONJkTeWIKF4OgbAwvOGO7szXKJNFF63
rvXROOd3Kg7qF0bnTu4rW1uxiMXKO4OcptYEDiFR4LaMcdV1V0WaurnUrVWa9ZKDmVFzsIQUEZFU
bB3DtoTRG9C4BBMyQoETpMDEXQkAaJNZD1gjG+DYPLyFkseBgr8eClJ9MNAhoEQIQRN8QujoGtmg
BxOqLrAldQTYAYKQokIqIZW0FEWEE8BsCwLLbAH8BgMu5n4Qezsc/quw5JJAOLjwc7xMyQWASIQY
VE7Z2GiczrDm31ECKgUIPZAZ2YIdiQYMUAbb956+buFTp+PNFJjkqOeiTn0NBGQ4U8xvWSSN1sPC
VchdSuKcQV0emI7h7CR/0QAaWOMco7YQqD+qBIgXBXAH2UoIcMFAdW7gWpHhnGjsk2oyGxm0mHl6
aC2THJIDy95a0yb50urIEBPGc1i0iZABO/4fVjfL3PXY3jp2jIbqxlkUxA0qYCUo1XiiCtoFEIe7
WeZP9oazkxrgTgE3rIj0J2/Kli7tztvSuCDoYzp7rSkjJg5Y+7+A0BbSps9zi6s0XxaVjEDZBLbd
1vr+h226AA1GE5IFA7XSnBCY9qxkC6gE0QCOkFIQAxnCFUgCRiAN4mKMEuVLSBCGv5kA+pV1gnbQ
yGyFSFvu+b6vkwjPv880ZUb+c/S2e7M0bTrDuZRgiHZhbTDkh7Xv7h0EoIf65QiQUJJQCJEQkWkw
P4J4OFEV61060OwU8E1f0ZdwpoJMYYznB/dHSbCvBWckBDqMWWgVgEYcEJstpjCUDVbCJrA7Qoaw
Yv5Q9Y2P7eeL8vajgB8jEwMKQypCgFKINQRIgB7hslPaA+1PyzhIdxnB9o/66cj/R/PhhgY0cLoY
DI6IHIyuEh/TZoqBaFoi0NXuYmP9LI3sC3GhhgnUAxopmoGUYwrf6VCiUOMHTocyYKV1gyCeyEj1
pzv4rcP6D57RkaVktWECyEE4oTu4hr/JZRCUhEnnHXVmVJ2H93b27oe1gOU+53S2KWQSsY3Cwh7Z
HynRovX02a1DMSKuI9UAW/22dMDTNvA021TAoGqgKOTjEP/hed2pDta8OdBpzr3xO8W+pE+8UpIT
6/jC0Ew0qeZir8gfW/0oKh6vSex+CJ7zdoHfYPLzEf6rFK3lQzAT/dAQiQotROE4ici27iU82qzY
XYZmgNVrA0B1A+wpQu6u+yjAmq/69+Yf27+KHADq+hQCl48o1SihyBCkE2BDQxzGjYipaRHfBQuE
7SioGVFBhVFgRoLRpzoSm0CVhx6J7LAgGYhmBkgBzyRA0FIlwD5vnX5yBSFNUI9gw5mIZD7pOsmX
YEom5TYR6cclgd84uZgGhjVwE7lfVvQmTjwrZZBdbZ07XaMCfG35/0Y2GNOw5kXshMilcgMlewZF
cukBt+B+gk3DQbRHCU4JYoJiQpTIxJ+hcXEWSA+f4V+DxfT++U6gIqBzDpkH8OtdIHbOnV0E2cYq
hggZhzsxvAe0hkLsQEqGPjmAfGI556RWEGPVclVfXAhnoRJUF4kOAQsSBICqGfBu1Be4f804+aD9
sQJBMzMAKH1O8iQDURQ3jh9a/1wAQc/kzZB0g+6BUW/20tp0OQHwoeom7usAaJ8cF+pC4jzJrIa9
K+ngVdjwdONdQWQJG1PdkNE9nFuP8J8xfa8zuQnSTBhvEsjZ6n+T0PNqRbvhgNDxtOUSIEJ6O4MI
YCnSNkH8CU7C9KP6Py94UU0VR+zXm/REIDGIKRQZ1uvhCv0hdVBTofDLIsT1FoZltAcQEMLaMTIV
WKY0gfj3gaD9mBc5o908AVkmtYsxRHQ+7iOIQXWPKMjdvDw8EfLL+cT0lgwyQMJdILtRsbosUO1f
gVCI1YbsYj1jDuDRaDzXcMPRj4xhAp0cyhWx2iEY0IpWhFRqyAMYG+b7dKCI4Ryri2U/IW7rFRO8
YQcZM+rWzw+HTlUE6+pOJA4CZQl9ZKuwZRlkCJE0Mnem64D5Dpzt7F1oiGdnkO/1Ozh0fTEJCo4c
2vgfU5/hMWUUF/DxTdoOWONzIh2IMX33WP8S/U3s9I9OlGT6goPbxHVjJId1KvffmgUPQLx4AvMs
yqPtRPn90QffCfPoHKdGhsApWtEBph6bw59j7IAoI7eGSMKUaSiAUQcW9N2BZr9NWYipIQDqXXOB
sRaE639w+l5rtj7NB2XnKbZyViTJCgzdgcbG41kjY5Q3ChbtpBSBt6wFBvYcuAj4zkJ8GBTgBdOI
cVEEIg8zluB5PrB/5DF7mzI3LqU7zAFdsSRIMA741KJYngb2DoIUfAUhmI9jD4xUGoIRyNzxDrHy
+5BICvwVUNc+h+k+R3QT0460T1B7uI8iyXHAG4IecSuuwGJGYNEfny8T00e833eC0qvR9QNkZAYQ
RJGAihZtXHBSEIFsC7KYQOp0H8LDgR61DxB5qJBWgj3fL3HF11jVZQDPeKkOb2dxUBu2MKKpU5UC
BqWFIl3aL/afxMZQgCStUSN4mDmOPlO0qgzJj8ekzcWi9G6RulREHYm8UQyYWbnVGn9rSA4pi/Ni
kO5ocrxhZJG/KfaR+F887pcak/FVyvCyl6+Djr43Eh/tNUePTjTZnSwiTVk40CV/9P9IkyRNcxZR
3xMJPaY5qbBtBuKcITiEmZ4IUOcQ2m94TlTejGeUarKf06+7aPoyZGukoG2qbZxhGxrPLVH22s/V
fx0Lpl6+gzdoCLcnxO6S5/Yl6y2xWNDyT27MDZ/CTF1OJKhmC4RjW80qcBKinwROxnrFGAVHALvZ
AYTmQxkmhkZIhEkx0CDggpJ7u+4T8uqe9x5Wie/WjT7Ddg+DBzDFySuS+E9IS4LVVlOGKEkKjJIP
Z3GZn66PA6barg3oYeBlfh+H32XtqfDx62etI2109B8RJEatokUD4cYKmCWDL9QDmvY98xG2GJhx
izBqX79SkQyq5YvgsXTqlFU0oQIicyGJMsipMsXof6DXiXHbezr2nA2WP7A612i2k2DTOpKYEW4t
0Z7wsl0ZLdRPtQcm539/QP5qH4smpfC5UqW7kvMb8t5oK+O8DOPiBnZMqBwNpBe4YKpxCIuoJBEz
ICgQBgnE7QPR0t6CksgX5OATre4duJhPtASao/jPft9082OPceJ18HzDOGT9M4mRwfmzUaNaNvfB
E0CP2+yQr8t/0vga84xuf2y6p2Es3VMu05J/k5ZWzHpUnPGVccvDKUTMZYZLgN+XSDHjGPBxMbPO
d0bK8tUuC3NAIJcdVVT9F4mhKEmRTfHBRZnkHgd5CLPoaEqXMikTsKRQphnU37EtwyfNCt8551z/
uozUWiuhuHL3R6j0KsCx1gwTu7R3eiVUZIdPV912eSrXCZKwki8DvgHagN0sPd5sbXHZp6rFFU+B
TQzrKq+nDXr29r01zoiG0yFUcGy3upW73hjMHK3RUI7I22NDoZQyNVHNu6ZLc6JTcgxllR3qsvT5
sAmYlisqFzZZgVdhZSfr9O7L2A7fu7+YDymdOmKEn4Y5Tpm9UBVzNgQOCaA2jh6+hgYKHujp9fuV
Y+J1+F5H2Aj/ZYe1N1hSk+zEMj6UV+AEFUPEidz7+v7YQiI/2gJgchQNEKiChBDbTwuBRWBSZJk+
ovANjGKmZFTvlHFWjQAK7u/heVk/x2yln6UdRoZ20Q4cof3KEGfAQGpISKzFQT8NkPOHM5wognVm
HLAauJJRMj6aTNwJ9AeObqndRkBYhyNaT1qmDmYO6k91nbqVKIzIq5CU80tCoDwNyLWvt9Vucdet
KMtOya6kcE+LmhQ3PckkjYjNEvK+Ev1Q465AaVyEpLRzSjqCP4QbQnNJaDjyEuWbMbWTMYExpgIP
V2uzAeSBpHMPHBxxhagdWqNDVXW0KJJkUSu+y8bD6CAMj6GVE9B+6ikLE4cmD/EubrsJD5PuQPJ4
RIO6rM8dw6cB1ud8tQ9XbdD1EGt+P4y8ZUSdk0Am7qeRbuQ7RqFbH9+NU/dX4VKlVIlkis5UCHkf
nC0D4wH0s5UifcQiaZ0SgFIlLpFCoSISCZi6LkfYU/iqwwjsW1SwImE9IJXg8XZ3D1kDMX2P1MOx
z6ZZezkRSVJdIT8qJn2kiefGNC4eJgRAB2uiHHdcLKZaP5n/ws8ZA9MAWvTHf4XQWJhfK+iPqqiq
Cwkz0hmotIOcnj6CIwu7pJHEJW9HmnFYAWbUdqp08vjEgu5UJgAQAffgAZ4EYyyQxSBv1vZAfUSC
9biVzzQ6HBZdSwsRO0rGRo4kjkA0h+I7M5jqF8wAfkozDJacsmgou0DFIT5ubv5Nn6/o18m2jL7p
3W81HkeKSJBSBSUCUSCB6UQOEN4ht4HQsxkTzNogZhtQGpCSJE8CkqOzJTRsbBpXBWwr4TPzDRjq
hf8UjpzYxedSkNpIwqhjYyokTGxSFsuVZYKZvRoImcDDYQbfVOmDiFatDt1renQsELopCWIUWhcM
MYDrIqcoI36ZTEQDawILscAqKTTMwREZmLYHLJyujeajejhmfdqs+WpsFntaKQkIj4zbCnGhsQhm
iSnALnnN6zkOBAyIh0EvdCESFLITV0khtA1SguXEFMBCFpi0hgivskTGI4c3Zhkdj8ioFvQBYAqb
WUmb79m91eWEpr112yXMWP5o4zxrLsmRJJzAQe/HPZMouEImULCIjqBMWIUiUopCtlMsorvt16Z0
EHj27GKo2U2pI+zg20AeB5rSJEonpjNWtGLzJjBERKElsJJ8XETWlrLZi7d3Oo2fnd2ZoJYKU9K3
NI1pByO2EZtTfB29YeN+Fsqe1COLr95f5Q1oMAQisB37UiWfpm5bDMgWla8dg8M7xPKNOz84571s
+wVw3PieA7zQc7SBJlx5xYH2gvV4m2o7lyEVzMaFFyKRznehVOfE4K3uxckpI0WHqIFNz0wOQ6cq
G44kQoyR4jUpQA4FgeMaw90cGb0szlG9uW5pkDdwgxiGwjIwQ2YZjQLDQwEFCGieutzoh4w45d8a
3WtxlMZxrJ1TEQ6mkdwGXMrqKegbEy0ns49fY89kZGEFc0WUNU+hItsYO1adpUVp161A1oiTaqjq
g44YBN5WnPHD2U0+QXIV5Zi7VhYmKnEGycqJsttxl1Wrctqm+WUilMDY0ikeTPGraSVNYMGiBxiS
WJAeEW4oFhVwtAg4sbTAkF0kvdAhqNNrMMsIyDiTU8yK9zKoPZeo9B8NdAogajp889u3AO2EeJOW
aQJ2Rw3Sd0R0MAKiBkHde2EHbIO5eJCgiR1PfCGoDe8XIydQLktIUGocg8L3Q6ilCgDiVBpRzqHY
29BkGpaH0jE6ZUoVMUoC+Cs0VI1hQ2+j4VXt50cI50MOkDakbHa4IShwEI4LJ3DjiELykJ+BDQjy
cw0UelT54JFxvAECQ0TBSp7Y2BB7MyNRWyqZFK1FixkkESEhD05ZQqoxMACgDsUdo/E29hwhtR/g
cowkAEUchij+71+k/mb5q99gFU+qJSJSf6TvBf8kD300EhC9xB5gfzh7IfPnyB3l0PSPcQYREBEi
Z4+AX3az6DVsTC3ebFnyfds2Yf85c10CpMYXxOpE4cDP5vqIn4hnCOINMomgiFSmgiU+IieuBWer
2HqQmVXx0m4Cx/34gmie8uBTNIPQ3QTwyAQnduPzO8V/ZPNzgnF5eu0h4o4ITl0jHbsxjcEhy3V3
kiGqjHdFOsLCnVkVKimNgxvx+TmGGNshVWJVgwUwHsiHb2eR1Yv4+2nPOskDu+6s4fq33R7djhgs
mJMiZ8q0Y+7rfuJtGQof/id25KO9+tXSX8EykPHtCEbTRBBuhIyAeSfWjmoA3zIbwj8waUURd0II
A+Cn2ZhgH9UhQI5H4ZdMofDkDNDwIetPUOR5vQFVVksuvRmDxidyHzCkD80Ar6GCJUSgQyEXb5+9
xngzPMCJPfNTGocIogPen9Gw9QsbJsOj0O3QDdDubnYDJeA0yYfhP1o9hXxdwzrWnrmdu3KPnzuQ
bSiL5PJhCpsYn1sw3X2pmASQROl0rTRmTcq2q6qzOKRz7DXWd715Cu/AwN1yglcSP3lXOvAUQ9hG
HB6/3715vaLrtGTtDRiQkE8WQIhkwhIDXp7zvumLZFR5tLcrKs3anKTOG0aJ8xZQDwIQO2iq4ksl
pdk7WrMVulTxgYPACAlec3xWj2CRO2tuy8JuyufvhgxMvTV2eYgahghVRPqgIWIeYwp0gFhRgJ3E
qcMg+AJgg9dBg/whA1CUqJSgxAuv7bWmlQDJ3/MfXxxG+Tkfy7zYZ/stDSbudGCFKGQgxA0decd/
nsKQGJADylUTmXco7tRyb0hjhAf1jJbGFDevSVHMqvymHqy7CxGkHkC/1HhnA62sA/A+AaUAe1EP
cxEWoogyqFf44iB+Ux4/UfhOm/5AXVHQELfoA+CgBZD3jEFK4iFGoicooPrlIXFT8Mk3bgXXe6Jy
fyjZtleVo8xp24Q30QKrX4/XUfiPxI69jB8kzTvPlD9EwWLWc3Uf1Xh0YOcfsKDkXU8w5g/4Sw/G
vWIAvIdDxRvB+CH74Co2j3In3/o1UfIfKAQUI0voCoZ2idPMd/3GKnD1OqD6QhFmTQFCLIknpQFR
pSGFVIkiGIUSiSBXr9+vdkbIZqbJKE4KCtI4JIR7IknnihlUsT6VUeT4d53dNY+8PUMCBEomiJ75
3HTwcgQ0HIeaojy5KwGFoBmdgh+My0chUvKAHQR5dOo0E18v2UKaJICi8QCKlCArAD60XSe1NIgQ
GxFE/vkRIlE0wGmIUj76iex05UXediTyPOzWFZJPIt2QYpwiTNY86REFFFUVUc9TlAz7Ied4BcBw
cDG2CBkSV4gaBNKKbY6nmCpoDonSvtfZctmaEqTImSftdurMKTIc6krEQAyiKQIgiTRNWNJrUurt
Vywpo1BBhh6BieibA0OlZNmjHBiwKwkU5EfNANa5Q/o8yORBB58gT1P2WDFsY+vx70O6IkA+STlo
NQ3hYkIert4jBZ2MJhAfJ5wf00J3Xnbyuml7vMmyJRqUlliKZSEbbXrq9vbtNCnlrqTM0egWu8/u
vr5UI+1x6gmGMIhGHCZLJzG0qmWE2pPrpjR5POgRlX8sodX/HwphQzWsk0sVJQXggmiTp7rLBzx4
4H9Ek23LYQMSBMH6swhkMwIz2jgV8f2Le8l/LyXGaDgzD4LG1bHWqLgQZ6PEGcGXJ4uxkZz6jWva
xmkf45hG2+6J/lmdXc6aErWssrbkJHYrpubbMDKcCs7zeNptKV4qwow1aV9r6YNWmDjwb287Y7J7
MLGGFGq8NFHXPhGCrR09Sysc/3PnmNgxcos7VUXhoIwJRb6MHJz6Sl2vKKS3yka8ouxsRcpiDkyR
LBOmJNNIbZiaBubc1Tm2zXdXTe6bAwYcvokJtjvsc2Xe7k6JcG1uOHT206RJHRIceITqBzkQQ8Jy
FEZ/QynkqTVCI2g5IVNSuM5lkikQQS1la71HGGLFCN8mGHnfWCLqQ5HhsqZavvWO/DGpynQ4EGHo
Z440IlvzRez6GeMQjzQG+OepwovGdUd8J4y1ws1Rr5azRKFXy79uj2xIo6fN94kP7Dx1rGyh9vmG
EhlDBkQiRiyxVKQYD5cIpngWbAxc3pnQaFXE6j2cC8OpUZmSGGNt3M5PfLhFMDcaYzPPIduGTmMi
kDZNzVMz5EL+yYinOeS4qQieqAykI9AZGqnfTLFymHCq3xcxCe7UGkZqeBjOEwsUEpE3y+tGAHaE
WBBnumXWo0Im+ooWrPizfZ6+ueVhpJHtuHWQR4nijzkgzT830YQz5Q/dV8RPPi+qs0wpDYIlIqqQ
8I/yjoK2KxJ8qijYFll9c8WXzCill+PjdmsN6+PlA92q5zVnBhO7teb13wWLYxjBKWQumKSwwiQ/
Mrsg3zpnAwdacA6upOhfVYcIhdUYItwS2twWJ1EWkz7D8ThSWj0Z651MqvnwtjL/z3Vq4+OCH+MR
AexwIQPxYfgMJkBphgPbKBlhjPNQYEwrwXbb5M3GiYlwgwQmUSeiZlg8gSSEV2tDY8p8iXujBQfm
E1yKH+z1lBwy7+z7lnLhTbplFQjm1HSp0oOH2RMlvzk0Gg7ZcJ4GvC1ZYGyR4YPMu+14DJytcu8+
ZNkH3ScJ3UX1B+/ifHTXjlmXCmDInPnykz7evl5M222VTh1ffd6RMwmoiOyIkyoqH51KbmIlDZVU
351X23Bp826gx0rfFcGlqrE6mEOQ9ZTdZaY6SW26MGQy4l9keVJRlsd03nq5lV9UJNw42hulahwZ
kX4QSZrpow8nRetePNbWVlDQK/tq0LRVc77SKK6n6OKH17+a+dzGUbtgDs2Z8enRpVbp9LaJnzaZ
1PONFPnIYN2J7KVMWHJjTcv1YtBRc3qXE636+MrSLdGhOsnHEPr4d3XZh0hlDiiTxHh6HPBsHDe2
9vYFuZGyA0hoDoYuayzQjpyC0mf0xFOaYmV83JtnMrKw17XMuJ/zUVVMg3qXuEr9VbOUbLNJmDPI
htvmYU5LFrVeIdbeEuDdlZNUJsxNiH4ZM7XZHJGHyYEQa2Q3e6BWGIVp8SSRkM+btBZ0UQzDx3V8
ePktZ93HFXHqMyDqKbQQ8/Ss6p6WJkywB/LWeduh2HDj4mDqla0/W9wnzv0rg3C7kav9qy39cn69
SpDK7+mXl+9Urmhpu6/LvZt/d+VL3uBuHddqE2VBsthWtNMKWtdXm02dIvlUw5bu7fWWNdrYLPnE
oRkNwRd4fKT44xaQqCG0Q10w6YE5kkoEcttLUsAc5N5XH3RSIlXBOpIdlBKS0IKKiQPLZEjxN0Wz
N2eUlNQzA0JmBzjEQz71lNSKBvpq6epvH5EQ8U9InZB8lzTBYQOwpptImXhZhDgkAi6LuW1sUo0y
DFCm4QwVh9KU1e8C7YUIhQuQ8hUIOQe8hoDXlzqVVVVVX2DwOYPf6AMgTlIoRiK0sOlg4DNnP5X6
v9v1cgf0x0n8BOVk59L9yh/Qty5AP+7D+ey2FEEyqRVVRB3ZQ6dEqbQNRFPMigQTs6iVnVJMjnmC
DRKl/VAh4xpQaQYJSgJSECCda+gQIIlwSywMVM/nbra//fnp3/APJAD5oNQGzetHwMMHiGfqEE/u
lDOmCvECa6JQ4YGsRRp5xMOgdQ1g3LGtmbAjU8IBIm5pYjlE54NHFTFHE2iirKCh6NFKhglSYYkh
RiG5YkcgcOSzTQaLjNhaMemsl3nAOpDMhlLKI4cZPr6e3yDM8RyvXhZ/XRlpH9mRUFxTUFbbUsVU
1GRNUSMZE9Umo0lAUD+Ux50XOROawojHcciKZoUeNAgFFilcf2u4aXk6dg5x1038z272kYpTe1P+
XboaQ+NbNwFwbA684dzYyWsdZiZA0nKRP1HJodSm8c/n1mxUPZ7L54R8odxffZUrx1eR+uu7fTAW
iKREo4R1E2ez05b62YZuPQscESRAH9k/9mzOzjD8oZ8ZX1L83zQaMPHz9QB00jRT7MKwIIyEIDw6
fE4lnyTedCaGwQIzcnEQMWnfD3Fehjf8jQh4yr4QprrmH+OcG88xjjDgIHR+f2YP7FH8JDzCGuhJ
iYRMUZUUZTJQsc623fbChsfIdlwet5MF1NA8E3wPqw3mtZn1aO0WSYkIsIBGJwNQpNPfGp7f5659
dG0uKa0eZ+sR9h4+1wvTxIbR88Pq14k8VRfQeg4B7PRQfYZQpTglgFQM6IFj3nlSpR9wDSjQw1Ki
BDm09smsNzG3sczhYyRJMbxCFCP+wwBQQYQENijM2KTSIY90+E6o9t3eKZvVejM5S2gESgyJLHn3
LwBMBMGxIPHv80pf2fSy4VDt/OKDyEeD3ZY9aNIcyGz5I/kODB3UbzPTBWwvfsTCkN+h9tpmrITt
0UhlE4LwdaYaw2z+8Q7pw7/j6xPmFNm/RRXlqcs81auRD010z0byKZKTuIOS0Sc45PahHh4GEZgF
9eK6++8oWihvtwP5NHATNAxJqf1nI2goGdReWU/TS9FEbGEajuGatUwsZ8+pabH82jZucybJxn2/
OGWwUeyRgbEJZSKiMSaUT6Z01TZYyNxuQH0PunA+pU+JBrTh15rFS4D0ge6YcTAYK6YIzTjCEI+g
N0BCzo9BTR6iOEoPAsqikou69kafTmJEW7y0eyPzbDK6Y49b1XSdlEVTdY/5KgiJOGSSZ3X+u2RC
GRsxdC2Li5q0BRrh1H5sB5YUY0ECEqAXERr7IGYnbcE2MNBqRlTDZo9urCTgxTRCagvaOgXGUB/f
iQFwKYxM7+Kjfxih+ptz9sPBKVVUqdsI8kAHifV+AhaVaBAoUENJ9BdM/Rr0/Umcg3rF7D6O5DAf
F9aGI2aHLlJA8vDTOQjQd+BoxvOqFI/RJ56TwivmYAHUfQgFcQgeUQ2DbUeaGbcgiLhOA+5bG6ZA
DOG1zFAtlvrcSVpZpmqMCJlS+Z7ntvZxIpHqi9QdgaKE9X1MQNCBShgnzEOYRUmqUnD8L3J5KlIu
4z3xyOwWI3qiCGHy+416Qj54qZ313asyI2ZhuuXFD9splYv6/eZXsalnZje5Y9mdYe2nPTD9XRPa
yESJQFJSXpNYOolJJQPWfKevQBylyhhs0VUC9p9aReBgx5Is0tB2g2f7VqWfg/zCFgyradWi/8NR
v+7DCawzJgPMCZBodgDcwr2Sag62kkgq0QvLRdOwHVEKTkuqFlJJEyrAUYkMHHiC7ntdDMLxiLYE
WoOkylBtbFWAXThvL5dD58ug/J5wpPnmqdERg1qmNwaGRDdtVacWqioxFIciJ2ZE3aL3HF6QgHnQ
fP0brMlZBHsdNj+xrBF7dN73RHXUsx3RhUFzub2i1bVO40GGClwKGGIAdpA3sNcBLsnoYBwvBKQJ
QyTNM0EVyYnDyS8DtaWSo/w3YewEkICRIVg810IYn2dTXr5wuN5lmYWejMOC2l22PxOB/DAdQ2DM
Zw0RFkVDIsrPH+vBrMvNT9MAsXdF7Qh6ifnhZlLtjNEj+NNg4GyN3lzs1skY4JiQJilPT3YDlToQ
Psezt07QY7eSsDSxTFIUj+EuDZAhFBpfWGuwFAn0M7CAUY0rifh4x3ZMBrDbBDKpWNDsJcEQcGg0
LMgvyZ8IZ4+gVh+sTLVlvGyt7kk2nSRhBGz/jYucjSMf6YhIXduYatccsoftxozwMekBwij0uhH5
DelwBg1CIjSUev2gT1HvDvRdwDmEhnaTuDv4U8vMbrsxz1wIki4AJsnDUcIRGvBNo16UsL2hBx6E
swH84RI8FvYeBCIq/kASpFCE233jcbHe8Y6BxQR36hhCmg9w+AQTA9yOpk2PGkyCO2ej0FYrh+kk
B+BChytMVNtIVsedFXkWRRRVkbdem3WpzFzchqhSL+PULbtqJg/9//YkdlN7qjBwfB0J2gxZPMFs
4HD7pHHd2HO88xCdM8q9uNwCJQm6IJ6qXjbpWS4FURVpZbs/mPMdtFv657YSQctoh07i2Hr0SRFE
EpEFFIlPUefdp0yEPxENVD7BAbrdVJodfYBtqhoR0iUDs6PTaqhlDnEQpCM2UPH5SwbgHyhUcWeh
AgHUD5KFGrQfVYXA/R6urqndv5T1zr+f9YiZoOupfV5YVewipJIsGt+LqUzZNZTVY2NVFVhtAq1/
hke7M8dv0X1wIVZVLT8Dpw0apeS/Wa1vb5xeKUYskAYZ/VUY2NiY2AxDE202IaCjn+Cmmebw/z3z
0Gs0YeZfmd8x0LW2xjYxsgY6e2jVshf15G3kXPKrPjLuO2e4wrIQkAklcx4bGLRSbLlrK6fuuzpG
1Z1Vcnawa40QaEHQMCMoYDZ6sqQtqFOFTOtNFMgW2NISMTTIAWIE0khjIJEKMc4GEkStFLs1jKxq
QVjhNDdhkENo0zVy2aDRTVvGpCGXQoJryEKGx+ZptlPpb6OyXxmijwQp8uq2zutM8629GotUq3aj
bixNRKrlnUptHuwLrKJHF0O99UH1dl61Ss83NzXGrKytm176Kw1XmcrppWPTQ2mc81/UPlqLJxWe
C8DLm4o3bTc1jBk9mvAXoo2M+SSfLzK+bo4ZryJMTM8UFjEyGyjAKOM9b4GVGhXVzEY3zkzNmOAh
hLrxJ5PEG4IeqoqebDHeq4WrNgm9S4MjVttCJwmnATOJugaRMt90MB4FkoKkdsJtgLRUN8d22aGU
wUhUicYUW7xq+Ozka9e8vTQhCKwdN5dyCqc6qHZmamGDd6H4ot/Pxs9meOI0J7v278bYaR1tcwdx
MxgUkCDx2BejnYVc1tpWbVGBxoOx6bdFMSxdBgwwjAYxjEwwNe1PvTeZqle92dqyW6uLAdztO08Z
7TodnVA6duG6IVpAiqkiFsGmAMTC1+cvwu6fij3wy5fl0i7ODi0P0I8TrhyeVyGdTrkRp3kUi+UE
dYI56B+x4Xrxw7BGmEZwQJ88tQFWioKu9SxsZgzr3G7pjHnUXOHDL0m2ErhmGJp1TLEZORCTXTqY
dtsiqJh6ePXq7LYJHBHVp1mK3sQKDQtQMa7wgYe3W86MRpcfHckk5llQGd6JSGvbuue2L1voM45Z
kFpbzybqzkWcSHB3ilHDjy4G2fTfVwMGIaTwSOiHPUvx37XyRBXysRGkU0vc3EFw1i6wvis8PsPE
358avoYwA9pAbBjSW3522Fg0I9yFq/NOlzwCXZgtOOCZfjoXZj6Eb+u12jrxDwhL48HOvCK8Ivzb
evjRSEaDwJ4XL5FuwvRYUe1wZ2+3yxny89D2D7KzusneHSxT2uXc6dM8vORFvfiuz2Y/KWgbFXgD
T8oOuHRmw2+GEEZ66ZClsIEMNHCCJZFWIQvQTca1zbK6atMqMCJ+mQf/a34u99eDwB3GiFTsU1U1
7fbS9RUr1t10NqEMGEkQXh4MQkFoC2YAANEemoDF5mma6vHauq17vXqFuz6cosufPktEb9ig9vdp
scyvf0eSFizORUFQIl4wdFCjPbWtjvBi06ZAWQIwcrBsNkzQ1Eg0DQ6MA8gDCs0zSb63VDNMDHz2
1x3cvIVgjbCjRGiiRFEEyF05Q+c1q4lvKErbGaG3NPMvRsmucoziIZXqFFICydTW+ViT3MHUirxn
FZCPVJdpwQ2UMTJGhm06oKCA0OQro5m27MdDrRdwsqQpmh1RHRRSuZGD2MGUtq9Hd64akbKTnJdW
zqIvJdROfGUrbyRzKkog/iqssZkgbuy+VoPixt7TWqto9nYNBnPJWmrHIea4RwYy7e10DCrLUMq1
mXTxxQyA2OcpRgMaFjCJipkaVNIjTYJ6ixnswEVWx7eodRSiLWFCsd0aYQmPMw6N7QV1ytQ538VQ

--_004_4B45B535F7F6BE4CB1C044ED5115CDDE0107F6C1D44CLONPMAILBOX_
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--_004_4B45B535F7F6BE4CB1C044ED5115CDDE0107F6C1D44CLONPMAILBOX_--


From xen-devel-bounces@lists.xen.org Thu Aug 09 14:03:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 14:03:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzTKX-0006Ey-It; Thu, 09 Aug 2012 14:03:17 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <thanos.makatos@citrix.com>) id 1SzTKU-0006Es-JN
	for xen-devel@lists.xen.org; Thu, 09 Aug 2012 14:03:15 +0000
Received: from [85.158.139.83:55797] by server-8.bemta-5.messagelabs.com id
	5F/11-05939-123C3205; Thu, 09 Aug 2012 14:03:13 +0000
X-Env-Sender: thanos.makatos@citrix.com
X-Msg-Ref: server-15.tower-182.messagelabs.com!1344520991!31008366!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg2ODI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8761 invoked from network); 9 Aug 2012 14:03:11 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Aug 2012 14:03:11 -0000
X-IronPort-AV: E=Sophos;i="4.77,739,1336348800"; 
	d="bz2'66?scan'66,208,217,66";a="13932572"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	09 Aug 2012 14:03:08 +0000
Received: from LONPMAILBOX01.citrite.net ([10.30.224.160]) by
	LONPMAILMX01.citrite.net ([10.30.203.162]) with mapi; Thu, 9 Aug 2012
	15:03:08 +0100
From: Thanos Makatos <thanos.makatos@citrix.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Date: Thu, 9 Aug 2012 15:03:06 +0100
Thread-Topic: RFC: blktap3
Thread-Index: Ac12N7M32j6sBcjRSBOZA6AeyHi9wg==
Message-ID: <4B45B535F7F6BE4CB1C044ED5115CDDE0107F6C1D44C@LONPMAILBOX01.citrite.net>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: yes
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
Content-Type: multipart/mixed;
	boundary="_004_4B45B535F7F6BE4CB1C044ED5115CDDE0107F6C1D44CLONPMAILBOX_"
MIME-Version: 1.0
Subject: [Xen-devel] RFC: blktap3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--_004_4B45B535F7F6BE4CB1C044ED5115CDDE0107F6C1D44CLONPMAILBOX_
Content-Type: multipart/alternative;
	boundary="_000_4B45B535F7F6BE4CB1C044ED5115CDDE0107F6C1D44CLONPMAILBOX_"

--_000_4B45B535F7F6BE4CB1C044ED5115CDDE0107F6C1D44CLONPMAILBOX_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable

Hi,

I'd like to introduce blktap3: essentially blktap2 without the need for blkback. This has been developed by Santosh Jodh, and I'll maintain it.

In this patch, the blktap2 binaries are suffixed with "2", so it's not yet possible to use it alongside blktap3.

An example configuration file I used is the following:
name = "debian bktap3 without pygrub"
memory = 256
disk = ['backendtype=xenio,format=vhd,vdev=xvda,access=rw,target=/root/debian-blktap3.vhd']
kernel = "vmlinuz-2.6.32-5-amd64"
root = '/dev/xvda1'
ramdisk = "initrd.img-2.6.32-5-amd64"
cpu_weight=256
vif=['bridge=xenbr0']

Before starting any blktap3 VM, the xenio daemon must be started.
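A minimal sketch of that workflow as a shell session (the daemon binary name "xenio", the config filename, and the use of the xl toolstack are assumptions for illustration, not details taken from the patch):

```shell
# Assumed names: daemon binary "xenio", the config above saved as
# debian-blktap3.cfg, and the xl toolstack; adjust to match the actual build.
xenio &                        # the xenio daemon must be running first
xl create debian-blktap3.cfg   # then create the blktap3-backed guest
xl list                        # verify the guest came up
```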

I've tested it on changeset 472fc515a463, without pygrub.

Any comments are welcome :)

--
Thanos Makatos


--_000_4B45B535F7F6BE4CB1C044ED5115CDDE0107F6C1D44CLONPMAILBOX_--

--_004_4B45B535F7F6BE4CB1C044ED5115CDDE0107F6C1D44CLONPMAILBOX_
Content-Type: application/octet-stream; name="blktap3.bz2"
Content-Description: blktap3.bz2
Content-Disposition: attachment; filename="blktap3.bz2"; size=230728;
	creation-date="Thu, 09 Aug 2012 13:51:49 GMT";
	modification-date="Thu, 09 Aug 2012 13:51:20 GMT"
Content-Transfer-Encoding: base64

brJ39xXTTv5GkLZyBMWprbTEg5j46XjCEiu/LX3u9Hhtqhwj9PA0soiSLIEJGJO6VcIUP0uGUBML
fZGCO38v0P43TY1YjkFsjq2LqMuu6yrISt63ghkIggmdHO6I9ux1W7bV0/DwT2ZypzjJzlQSQJyr
ELmxcESFpp5fPfVByrYwNvUUx6/GLcjXKC6662IRQ3azzzISIK/Zsf/1G6bUu8hjK9ceia8RkyVn
fOvj2viWr3d844OMlIlSUHxOYffw7jbDuQHsJD+iHeTaGkWYgrIMRpAOvDpDaUiGl7oHx093dsid
IbZcnMzWvk7+3bYeYCgYmIgdSUPCxFtv6z8NhOCcLPkwJYxSAwkWeFKfbty1DHLWnPlKPuxD/V2f
y2Pd+3W1E5fZReAoDEvsoN8on0z4foaQT6CO56fovGCh9aNWnqgB+E+ImEJH2nnp9Uz6BuLPkuRO
Gtd4vPDavm4z+bOx/ogr5Ihbb88Iw8npl4m/TTj5z4kI+gF5kOoTdoKHn//bPQ8aU+V+mMC1xogy
omjFvhiTsb0TnWzV6Y4K97IzuupkZUmQ8O6kTIumqwadZk4klBHk9jcXVzkEvU/5f0FBIazcUbUX
KlUoxIVIz9V1P6BXj+E6p/v1GpoMUTm7letzJlBNs3F/iJoFFnXhMCp+4h+E+X5bTpE7AQi9H9lI
VACKVXUT64jjL5PmgQ1HnHN6NcnSNn5HbmdL/tXSjhfJmhyUIvq+Y74e4uZeBSDqTSGY/8I6E0rU
/cuQhkzfo+h/mi/84iEOiB+j/3hCX6/zORi1F2KMPEXc3f5CAeRAJN8//qUh86Pv8PNDyddZRb7F
JXEfLAb673CUfzQ993qPjq9459FDWn8wUY/AlfD6q+uH15n0WmLocQIEhjoOYPIutf006EF6ZL50
0lja6V9t8PSVn9USs6KDihCiemMgf7ae1Ti40hg0hdL3UQah4qiknL1VBYkif1kIUt2H1jRMplz1
/GcpjWNQxN6jWg8Dd/TC2wdsU1ipoaUKGkyKjFz7yHXzS0hCENkUiCvU0gtZx4XE3wx2OlGYr/eb
lCL/34haug6Gd9nFpSwIag2aGFXY6/j+gj9hEX/Q4/JJQk2P0/iPnDgv9PjnIc9nspuBVlE/to0k
p/EnjYccn9yU0/hp7E54/A3TpHhfCw4Es688cX/NgLgkgusqMSL+pGmoyRNL+TGDjHeF/z0h54We
GalijRAK7bskThBTjbrOjCRMARx6v1YApAVudUfYbxF0JC6/XJPD2K1XMMgI9y8U4gSH75lcdp8X
fYR2xrMVLiBqDUJqJ1NIOkv80xgrELjv4ZOuzMqEEBpD5cMPoppSB23Sfmi4iGvn9WC4E/UZUAn+
pXlEYyfLKRnssBP/N+W0OTWqw/zPVqYWsKcYkRRUExODwJClT0YU/beupprjTmFPyyo+iGqbap/D
3gUnwnFekxHGh8B2vQEFG1/cl74B79jyFYmssDLVGKHn1kkyZLqiEMKJgwyX1dPWLv+X4x8hHCYg
+Syb4k5GxW2tFqcf7D8n5rxuyYfv0eMPuIeb4elTzerGppcf9BFeVGtuic1qqwdWDrPObNxH9h6y
8QdoZl/okgAcMh7PR1m+HGKb8MYxplLkwfQh1B0qtIx3uyTRWcaL6SI54wJtYsVMRU6tAPtXBMwe
lMIQM2i/nDdfA7EQ4Ca8R3xp+Oe8304W2kKdv+E5l039wp1bG0fxXoPwmx8Oz+vuOf7yinfo1X9E
ZBFC4WawxPVgxEHo4GEepabgMVGFQhoqqDmnBCpFPqbMrQKNI8tgd/gwQ+XwC/g95MWYtA5Bparp
swOEWZ0LJy+jdROxbXIkwomBo/TJUbM4ZEbXF2lIjDSD+KyikhREUW3ObToud7F73rj2I5nOYTLn
o1iOgSYNKImXSIsQopxCpuk6O7AsRVrM5kfOnjbiDHxc8FzHDcZQ0bFO0BmJGZjEx41J4xwe/Y3C
p51swzsgIMAgcIX+51fdsGoacyaaYzgEuVNmNu9HGaHK8ir/j8SGS4VBSP2yBhLMh11hoZyOm6wh
IlIfrmMNBjy7lV9lDsO1xXp/HVzhCyErJWWvQm1kHEG2f0H4yfWkfq/Zh5oj/N8DCk644JJSIUny
nz/DuA9xPkXuz/9v1bdMwd6PvP/wmu0QRD+a1r12iRRCf+mAKlHieEQlBfp6UkoNFoiHcKZx9rPJ
g6E2yZFuyvakbgbv/+rZ4bUxa34w9wSjFv6T3jl+bt6hNYJqlTQkQ+ycmJnu8iZ4WIODHrYi0cns
IMeih+Dg0+wNZ+E/sZuk+Wx8LcbdZFvzb0hTbkebr0H9JY3YNRqMOF21iViPGNG8kDeHXhUaCmug
+r672wZeDMf2ffsbDJa1Qyx+D7LNZPusdaa5a056wRzcSHsR94EzrkdN1ay7sGmwdHxayAWe0l6A
2lsBoTIv4x2Wv8Ht7cHHyNaiBGv0qQmH1OxXrEGEhG52lFxXIctzlg1HRRFSyxt2ytcnayGyvC9n
C4mNlN7d+kmoxQ48V8/Qf9lRgPZQ1nQwaSgyiIpceHBaWoGbbjKpVl6kZ8FODm6mVyRs28ddLfiX
xJ1NFtibZg8E86fHiuJ9fbM8Y8+HBaOfXJWvTHdFp+XkRBSi+I4N8EgJrCaXy4NlC4SnVyB3oxcd
SmoAhLb+jyjwiyGYgJsKOWILVhi+Jbjdk8LHT6PTdfLfW8naK3HpIRZiyhjEs01enL73jEo9WfVF
77bm/ni1OgZwQgD4e7j7j260p+uSHs0oqdjPHn/TDZGAwYeqLRLinTa3s+rXIPy8JQ+HqcD2SrDD
BxrlzR2YUvVnpykfP4UKLUu+FzMrOFZ9k28BiXlaBpDR7RKz7JnpxqHftMS/wzfDLGWOM4b+Nijn
O/Xb80lsMv2yyM9t70Otg/Ai7vm74nYvf93MRE0pERBIQQEe0geazLaNtRrFpSaptJKkGkWSGGQ+
zMGGIZWgCaKybYowVzWK8rnpVkygUrQRt7tdpo2bve14AhAwIb6XGLSUefmtI6xrcxKJHjMIKDWF
/WHp5v3FyUs2KM0AdC6ZCQJMBSMp/NH+Pnmx90TMD6OujMVPu+GLG4j3YpzAeOAl2uYvBRlkAuOL
a89L2/ZrtoYkrts8dr1BA/pgWIANyChf1yKZJSnSUUxEE4wQwxMQCRUfH6G+p9R0+japA+wkdENO
q5CZMIwwT5fGVmNFeMw4x+IR3WSg5Qe1RAUtoR/snF3grmGRLa79+4fHA5ULMw5SSECRMg2LMJZ7
Pnv33zp/K7MRxp3/t/n+XESvMMPT2fw68L6wY7dW7UYUukXiA3QBmY/SHxABu/MpDnKw9KY7dHPa
fDzYZm/9dC6rjSTd//s/go+lvGDeVmNj5o48EevwiWsZjg2iSSY6lPp4FEvXxZhQCY1yNhc4BqPd
Q9d0mlfd6uWQ6Q61RewE1vu/rbKZGAGLBJ2tVwgbfNxmuWSBr16RGStReggrSHaTRxh1IRyQE6xc
hEYgoPBfo92ES7AbgQYUCfswwV/74H8xPjLzhsaSFCXNjhSlzjkKeqXPRM9MKoFsl+v3b/Jpot9l
Dqdulo6cnwhx7GqIZB6jSmGKRsmN9+zAFzEBNIukXh5UBvHWKa6l2AYIGlZlDFAEkzVvdmCisEEu
qEA52lT7o9Mjj6vLcZWTSRPTF7i7mH6t84gyFpWwwOO6A+CDXtr7M3Bsj97AdjUcDgJ9wwOpG3Ob
y7EGSnr5zYjaF4k4INF6SuQU0h0k5hvfm0UxJ0Y7+MDhlHRjV2Iu5T7ZwIJmsRbNqEGL3OByF9sg
KlZ7zYxjpD80aJe6VwoO6AO4lNHB6dnsbC5LzCZOQDSL4S7SROzDzG8rtGnadviGNsqOCc/fre4C
dQrhCa6DRdEn9h2uC5kgbQ7oj2jGHT4v4a964iyvuVTz/qR3QFeH3eht7PM24M0zFeN13wo0oqCQ
2QEB3CAI9ORKEsifi/Tw/2f2fv6Jmeoq/67JEBv4fZ3bAiMtQiDFEVqwkpAGhAYlRTyJAHJUDJQF
X1LdS/3lzUUzTL0xRj6g0VRb9fKfrp/g77FPRqg9/vUwikyFGAEv0bwHLEFdjZFBYdA1yPrfwxDf
g3+Uk+vrRsFEkIoKoi/VGQHjBjVQp9vHyF83gdtg8kd0ZmAyZJUA9n32ZOkUnSdx4zHs51ySHEIi
K8vTDpGcNsN1It6aE1Ep4RpYs6VQQYKJfIO6c4l7GGEOUqj4LADyxQ6jOvNI2RLDk3aeMFEEQFEY
jBSCnENaRoiacXhTEiMdrTnzaHamRoBJJkLknEkDCEyEGs8Fedla6HzX64tDWUh+Kgeb0T+qmJBd
vwPuS5T/l8/m1R9r/oUlcfLm/C72vNG3S0875caPdOqn2v77+E44WttuY/Ri9Zvij8uuWWUocWbe
d9D8O3u0UHaAA1B7IQ+mf2VPXMXkWxU2Oy4Wx/BuzuwViLXossiHURjOzON9Dsup0zaECsCMOrk5
WPoanrlatcmpeWqWq9auuG3SkjTJnlZIHpUh0xHQBvnc5BmgKdXCBg+isRshujLCeSbuUBFmq/kf
4N5P4uAleetHyNVN+Hqh78TjDVGOMIg5lFh2MwdKX4bWIFhmGCdODbShErx9ihJUhibbJzIREw1R
zmZiZNN3/N+zlR2OHv1o5MwjgoJmhYmCKYhaSon8oyMREEgITkgZ7suvh5KaBQp8DDF3kKDUFKmn
cwzaBzLgB9RilQsZgmYHnwHoPp1E2MUQV4E6YuQuWMDTlqB1IPW9KOAQ7L7x0KVSDh22wV6SUb1Q
7iQyEgskHkGQSnar+k/pcE1BudMOIX2YZGw5pEkES7ajYpaHweFBOAYiTw6Q92+vJ6zHr7amLqa4
crBxSacMzL3RoiJG4VFP1lJctDSdvQvJgtXotOy3zKGN5IvaVhE3iMN2BfOtIlkVU/TH62K4iC43
svi384cE69DQSw7ZRCEcyo01BqtKpbQsFpy0cWNQagwYYhlOhcG+FYeUbbBG4eshyePVvojA3zTI
gyqecNlDJQ5eykC3uKaq7LizE2ROnYieOg50RKHUJcQb1WdAeWUuIc6NBRegGGpACRAJdbceBfgG
nMgFeEv/7F9E+L0vC1OmhDF+cqNNP+U2TJgufxdrXx4EXh/d9G+OvTbbxzgQRFOa3TvFO01RRE0H
dIFUMZ8C3Iwaj7tQwQLJcwd2fpXMRKM6zt0KF5CVl99l7d0yUq0IcHtsTLqY4dYvOwe8YZm/9/7V
9ZxjBRf6P/f7IBTqOkz37OiO2A8xkJhvxiZcOpdG8yHFJxIh8e7VSF1nlM3xh6bk9GDHsicIkx8/
PWuOtHKJL7kJEmOt98GaqDxjIQRjSs/PXbTTo9lk2Ai2EqixlGQFzdX8U1es1sEj+ot8VBoGddk7
KhXHlxgsqZb2nz/GazmH2D+Y/gm2O9mCdqtnYbX7A209JdcwfRKFI72SyMZkiIt84G4zs+T0P7Vw
p9G2PQ12tXCrK+GmmZOdlgQD9FnVB8ezc79n0fyyjKfpuWfGTyn3Tl+VxpfyV+/CzeMAHyYRjPsU
NoPZ7t8c/lOJt+f/UfT+78/mv05ouzOM5qf6RFencRD88BUIF/N9eljRhJv4OP1wgfNKhREhAsRR
cW4cWSUFM19fx3+B7gj3XXbLuwzbfOcss3951iCJXoBUQXzT3rOqEjwZx2wwnXDiLoiCGESilqgS
29VfXvRNNQs0m9xbXVshbCyYQsQX+BD6FQO3/Kh/V38+APYhjQ+wVXpmfjZNcm/YmB/ybIeQ6/yG
zZmXpkzJJkItlDQaTUCBFIxNCnnI4Q55/mPs+HrFH+en587Q2/z4pbLOWgvj7nH+zzP5fs/SBkbL
5/0e3lNuaxa1mqcEEOcpHTm2+LdLHO5T6G/RUDzFrs3yB/Jbd2N0MGLXiYI+3Hbzd6O8vfezOXMK
v7/kbVu+qvKjXnnmeQGSbkgvDTVr8h6gaadDoHHdncRuZQZDJoHA/Xd2ih6TnHOTh2GHPKIBEepr
Pe/tTfZX9cPz5/kfZ0flgQkhnR7gBxwSZovugD2/yVZwopqtrLLqTNbeHo+vw8viEV6lfOQ6yG1w
3FoJdL+/J4D/jk1D8I7TaZPzpnIohbug3q/c7MH0Mt9O/6jtuavsJeeuOBRRMGONi4TioQsvn7vr
T3vHX6zdEpjfXUsAUtkoSKs6cXqcK2QXjCjWQKaxs5qvGmtBSHTA6NCeHQ8JEyd7CIogmqIRWtzx
+H93YE8A4F3IIkSRohQ7pDtKGqgUmJCo0Yn37pZBDaixJk0I0skYMkWLTJSW2w1FYyhVmzpbdGwU
olJVUAtKBPcwiBttx4YCd1fjmSPnAbSfZyfE0hSFKe2D+S0VpdjdJgSstuhBvaN/Kn/Lm/d7E7FE
XtTjuZ7C6fnQ380o9WdCciyDsfvfDd/fq/cuPtlvTHaYM40EW5yfpkGwb2R3fe2d9O2Yxh7/PAvl
dCLIQjg7JoEJRdDRBJkCimmkp+pTSQhI9M4sLhkFko+42H9PLFIzwJo8/2kfhvs6WL201O15DCS+
c30eaQSTSg9wwzMmh+P/ZdAi2i9HQRPurfbW84Zv8Dz/sP167rqt5NP2jhrNJPwp+FrV/BTDhgVK
qTJjOxmImywpv37Jo8V1xlRigU/F5+PidHRYTNfqd2kdH3viSC5eOECl6uc0UI4XKJNSbn4dIZIX
3y1robG+o3jgMFyD8KAZxMxtmYVFL3bc+2RI52HvyiUJ+c0BvKk35ExedPO/ctIuW3xEJ0cYQxHb
Dz0424WZko3cPTeNZP0C/KiyErDFglL236oTES+WMZIWJNT9EdinO2Gx2bsjdcRVtCE4zvndQ8Uj
4Z6sbbdIaNG6ERC6k/4fOJmO5mOtjcAJC8tp1HvJxh5iB+qBb5umMLUFyXOpX+k8y8bv7fJ+C69C
yPLw83ye6lPlR6PLEtXL+L938EGDefi1I4I9Us78cXl0z49XyppHyl3DcH9XfEVU/CLbNSdb+yDr
G1JNFJnG22wvkSGEjLAVzZKUKfKSnSDcO9D1zPxO59XezcLvD7zO64Rec7SMoScSgpwRcmh5okIe
/N6vCDv6O7R4bmScbq6l1Ybsa2UKIuY5q7uv4zu4gVKolG9WNQfu60LnSkH/VA4xOSD8I8CTzjpG
2Yb94Yf5O2cEDtAbyudHBGY7OjTCyyxfumUDUPNDn33YL/n+S1TTAjVA6RQ+8pd+fSn3t/Hl32z7
5h65EdL0J6MzsJT6Jwb/Bzg0ojsbNbjxt82puq9/SLZ3vEbvTAMZlnsrr7JiawHaKZ9H81oGe3s7
vX9BocC6Tw2pQV1vJYg9URwd0zCsB4Ztzz0049XeaxGbNQkRkkJIHFVS/x/XX3eRsbz4q6NuPmrP
h1a3NSx1P6f4vIMv5i4MEPHSISWoREVohMyH24WhREvVeekuqcVgaKzVEuo2DPNiwwa5lfg0k5O0
RjbdfR75SYlGndyzgxFDGCnAexSOjz23UfbWCseOsxc8xXAjpPG2pFkwjU8ENNXbDqkaSNpQiSNm
yBvBUVueOGWrdqn2VtO62wRcJ3znBDRQRgoBFFBMkVU49R+jqm7E2M7qwa/XvVP6+idwh9SAkgVF
A+X0m/KjKGQ64KmpEJBHz+x2kVZWDfAQmEkJDa2geTXHZ3u6yOraLzYb+zfSNtXFvxyWC6F5J0qT
k9qlRY1aUJtNN22OU42nOJYWdi/P0cGEIM+7vHj53a5jhyjWzll7EzGd4JcyMWqtHTvJF0LM6wvw
vfVnbRXlYxjiYO98rZ2GCfDWdWUpGLx2yfjx2xiMvYzi/p64665a1zymuZ9Kv3ttQOOnANcLGR73
8aXNVjSWc+/X1+2bFrw9rbk46C6+GGGhmzMzEUzAespCHr6qqxg5+I+Ri+WSu48sYTEKm/CNI89U
QUM76Xxeam8Lb7y/nPdQpaWokawOEthSZTBgekL0qMBib0XIHEhMFBiD46NbdiOB4XKgooiQrCH4
7OMGznO6KTBBx3aUY979MazRl0VjvkWJwsHNpnlZ4d9kMtPB7zCVb+RPODk6Pms4xkRV8NlyplKK
lbO2GPzY/X2na6OZwcl1TSvV+vfwvnq9OvTd3Yn6v3a4+KDMufotNkcW1xB8Ywxz2uJJTyfOUYZu
ULBtmPFzq3B8ncfLvg9IKQqqgZ96+q9YafLq8UCi4bhboldBwpA0ghA5a0ZyiSAntvg2oFhXc76s
PN4dxQSiH8BQMBTgCeUOa7RryLbGLFZwLGTg9GlnnIV2/KNignL4LNZHB+wZcbAte3KHZprxrbCy
FlrWnDZprmTuHtyzhnNMckPOFr1ZjATzot8i4MkJqM9oTpdljua835Jmva623S+JmXCSZEIX3BjY
OyqhCZMkCmTBKJvns8Zb+NhAsEKAOwrUgToe7XKLX2mljSZlQv224Y029/cPvyanjEiRSF3mTU2d
jEP0yeqUKBIxm/eagIouzRxNcNry43Mj+le/ynuDgHcxGd1v3xFKGBHVhhy0t175rDO82IpfLU7w
s7YvrcWikyzI50paVua1nILeUu32tkWOUswd4kMfDsEgVA6MrNduECe++/XwoWRbEhPO2bPudktO
zsSSmUaUhzGAIQ2+eL92AFWPZUR72e7us8cpcy/XBn2uz6PRk+ujK3fO+y4yYic0A7EoXDItU8Oe
uyxNPhamw0Sk64Z6GAuWrR2wfRrz6Vjyy/W4ANoeHmu/KSSFLdv3ESdtxDsNtzTjK11whFEE4yQk
BZqZ8QjTelzfqUrt8DVZPYYRLSFhzpbBmdMK+x07cuUI3ujzPut3mMjZ2QyWh9qCg9o70SE+X1sc
eg9as1sOTG3ShWyvQWaAbIzYa6rtOVmOWM09WdNQk5rqRINNQ0sk1KPCy2U26J6bevEz2M23DAlZ
FsYVWV9Jk7JaftrDERTC/VJ7kWykGElu52cE189wXBZkiF0o5D8S8Blwa7YycksJEvnUFTSEey5u
NHEGyl9pXz3Br4GzWlloHhx88kfNRlSiuXgJ6V1fCUGluhjl0cMMsrbG13Q6o8ujy9lz28mYvZWs
wbWTMcKKTY2lKHLlt5dZNzxhqMCMiPfwqXzsN6HG8aoorqAHA2xJQIxrwUUxBDWjDjpQJBFHXzLO
LAYghKCRBxVo74NHyZaWMw4yU1a7pCCFnSS2ymTbTjy0nZqh2Pq5kTY0TlsnshqGSlhmo1Xh02Jm
G2sSiUUeWtPGtFANyFThElQgbnzSMHV7vjkQLUFQICHlacuwguZLo5Hx7jfjA4ER2I1vvaxgRRik
RiQSMIehcOVjNvQ4CY4SIO9NSMEiDyKCPkCIJ08yd/r80tvCSmu5jI3gsJD5jbmejjs7b4N6zzB7
GZ68+OyYN7CgFgMmxWzCE7zPErheOz2sUJC7nZ2HREDyg3FCwPqSgKDq8syI6bZVGSuwWRojmtiI
HhJyeaEvxjU9Ps8rCBY9O+EcCzQ4YQ3fnLUmK2Ns1Uwd3a5hlezQQeRrY7pDzx8wapqO0XqO/bYc
L7JltOY8+90ObiEFEWGHFy3udntvxjiWl1xNTXDpgfnuY9KJuP9MM9hdai8ff+JuWhCU4WfCJaU0
4HzEiVhGp7Nc5fBaj4fHWKp2uNQdNGTXxprFPVMSEpUGvc/koEeR7m2WjiJYDtz229UbMLcZ4med
7msttxul+k/ru01V80F4+qrRlR2cbfAy0nObCW3rH2MisjatvuQ8bxwa0sYDkoBkm9i6b3kNRg+6
H6EzQzNHa5UUvoj7TDCDVJ19lerTT+i/9z3eGTwdOjzCdRznnqjWwsm6rMrYzilZ1d1vOp3tSJ6e
8as06WrxZhPOqnU1ja3jLxnBinqkpmlo1hZwm5eM3qsa1Rt5VOltPdq9TH42YZhI/V5Nf2n6dpT9
sIN+jf9+f3finyf+j75ZA9Tn5PP31vh+0AH+ND5s+Ph+4OGJwx07/5E+6op+tRT7gAP3QAf4g836
4qpYANgqnTwDhFFPrAB7U9h+58JC13DhqJ3dvum7VVVUd93ty8dD3p9mY8UMA2oHHYR6SUIDpolO
1gl7z1+z2wwsX8LnM0Z5zZiBsY6GYZh2GgL8/Ef+T9rn7P5jg2ZfsJ9I/7976DgOKHtxQWQWLDDX
pAqowk30qi2KYhfS439CYpkODo1ImtWofB3i+UBg1gA1gADTaB2gA3vbLpM+MPtZV+Hfd2MdrT+P
7Ps7AAfWvrI9vefynUc/i91XuAD9AAOyilKKe319x6G/H5xRT0mATgSEISH8fV5Y/m9PHrfTuI69
/DJ6PxfbQ+2Ko7PGg3mv4wAfvqqa6gqZCZ6gHy+OogT6Z3kD4YCo6hAdLPvKqcPIAH/Trn7kPbAd
34qMz7TeBQfOHYd8e393qK+fF8iqKNyfhIPPnIUrkiQUg2EGmoj9nb4BZFQDmyqg/EfLrFi641qk
dhg0D7T0IVoAD/mABibKKecAHu38MHXWPcdOjxX9vEfUADx4/KgD6wAb1M2fV8/0Xj81YxW3ATsP
2zXTzM/C73WeGgffUeqCsvjd6qns06nzeKC++dRnZfSEkiw1/v94cPFRT67AB9U9Sm4/P8162o9h
q77LPXZmoRpFQlqjArTmADbgAfWCpkPrABoAGcbCKKbgqUEABuaABrRtOXtsMoYJvRI7GZmYp0HO
3qXtt9U28V5eAAPEAHsndz7x+L0v9d5JMGoJ1Ksj92IOmGEgBmW3qnOh4y+O7Syy20S8B3+uAAMg
iop2gA+d7PTty3O40TgiDuopp6Tn9F0kY4olfdsu/XVy8Xdlfe6c7+Hqrbqhi+rdByRTzRaAkJAk
JG0dObYO384AN7eG2VUrMeWry4fCH66fK5wOrbv7ZoY6Y+vqBOWU3bWSp+O2ixgU/ddyr7dI0Uk6
/IpitL+wAGoRkIBMxVITNKBqXq9VMRltNS/fri0PDKGJte/qdmb1AmYZivPCXDosNegAP2KinUJI
9p0BGgj63BDCQK9Z0wd9kTaA8oVoAiBiApaKzXdbm2hLDCg1BRgiKkqDfZRTY/3956tg9P4lFPuC
T8AVMB0ABsAH7AAaAByCpkAGwAeNB0UU+QAGAA7KKWop4agA4SwAYAD19x8Prv1AA9AAeiinoOgT
eJntIPyE7MJN+kxTIL3kmMMxpsaK3BAH1fiYAa64Sm4LQpBosgPf9+wAG8caqtKv9SYTCinsx3Ja
in2ef9/obPAe19AANVgBqVF0nEviWrvb1/RwPUUj8PdA9l9rn2O7PHt+qd7kxI93GsdrPunzY2zX
DfW26MzucTNYxFr5TyxYe/r767AAeVJSikCKKR7qKmdODPwFVUhvd2lFUUc7zzXpIxJIGqrt5wAf
EAGpHW+5YRgj5XlK/a8whvyXm3T+ajOgmh6qEFOPnAB61QhBWEFPH8/Dbv75uAD7sU9VjswDaDHv
AAaXcjip6zltzXRwhBOlQ6Ic5Q4T38uda2T94D9J5HHJ9pmr5SqKoqion/T8MfOvC1aW0tpatLWq
iqKKKgBAJgHdu663M+lw/dcXy9+Midy1VPsGqiqCUvSi61ZBRs0teR4O5bbbc7uXa2229KvzPB5S
JxbbS8drecuVBiggyJ5jbWVgSWnHy9azgRDtbJbUibahIFShiMUFgsj6OehYKPAa2/bg/XD9IAMU
UyAD6AAGwVPAKBU9FAg2Y2M6dCSVI+AADTc8AAaQXzsOHo+/0HO26GFtY6y0RZ0G3VKJylVla1dc
YEmuE7kJ+gAGyABvVJqWFKqG7yP9VJbq49bSxM+3z6HQu3ro19nU9sV7Z+M165MJprbqb4BvwedY
sWLIjBiocpQtoW0LaLB3djS+yI7Ylz9n7fTV5ZfHsX8DhfW2222l/1f6Lbea3lXpMHnLDlhWet/F
qdIoqWWqVS1FFt8zexL6gouw8JDNEffzfWtyvz8/o3euW1uDfEqwbENkjuZmU9MAB2+MAHPcqpnh
qXifowGvMqqKkJEx9wNbDuwiJ8yil662gjpEH3fFNfD3pYw7YH7g2i5n5HdMlVizv9P4pNkgaiSa
bQMDfFQCDdSBTzgB8qNk9JM/3fT0rqN3yvAnqcgCR65vIbV26FnI+ngeLGB3s39X90js6rWZYv9/
6vkwJkmO1xnHX9esdWRS0JMFLZjZioQf14B3X2/8n+Dk57frn1/wTfUv42/T/VTLy1/LPzFyry1V
VVVVtqrbVttpVVG223iVF2ANs0kfv+J455W1rbb+1e1XxbbbattqqhNg6n6elvHidB+GE98SQCTO
ruwphXA3YptpFGjNyZMLQklgkzFZi4Yfq2gQbVypLfF+ijRNrFSm2jUbYa+9BCp/ZAxGyoJFrubr
5oNE4fgoPzJlpJ0x79N3kwaXOXQWKS3ZujD7YFO1oPbjFQnHz5WwlecdbXGams4nCvNdLLrOHd4b
btFDd3QhtfvqVZz123x21pBwkuZW+CotaxeCdcB/LpJJJJSSRRWjfBvxzwAAY3tOm23o5eD0enGw
Lrkrqwpdnme0xyMoEEoDvjD6cyE+Z8ddhFZ4m0x1YN2aQPspeuz6PLuwsMt95UwkaY2RqQd/WppI
t3KkClalaStntjaxecnm9gG62V78749LUUkXlqupwjFUtqCbHhCPlMgIaRSmrCErr2pocnnH3VIp
YF1bopFi6EZxsjHjrot9xcix313QbfZdTLGr32T9HmPxhQ49lftFXPfSG3aADffam7oju4YVN+6O
2jW7n0shAms1WjzfK8eJ05W0wlOByhrjEABtdrvDvUDAAGtsw1ZUWGW6oANDB8M7btKa4EkTVik+
OzKIxkwCEkwmYq7O12rC+umeEoNx7XdN95BFFFMWpF6IanZmZjANzxESQkZAA10YBTfCBP+dxsc7
/xQ/uaGuWDXo+sKnuVTXloOrnTw2BeZk84ySvj07XpCSyvhDUVjG0nrg5SanPtedI+GEK+bFmJ0U
nwgc/Gukz2Fz6VhLGUY0Wjvtlea4njArlspGtztsbHt01swErnGg19l9lNy0q9Fsl02O1qtXVCUN
k50bQgOEq0jSOSaxY4WJE3VNtxfeRLLeKV0Lnk9i2yseDS2atI/vM/3/5v4avr+E/kOAF+s/ff03
7D+g+nAA84AAHOec84ABzgcDnA5znOHOHOc5znOc5znOc5znOc5znOHOc5znDnPyvD4+AAAA5555
4eXivoUklJSTpXcepvM25Ym9c+ceOse2ulnPN0cuiVJfq87xiXlrdcsXas2k0aT7Isc0saTn9ndt
O2Ot7rAuSV/jma+1DOaPHsvjU+s9qjE3i6UtK7O2JpKFQ3guq9rxp1lZ6mUYVaHWFMERBL2jxnSU
YnvMH7945+Gd18Zb1xJz7YjMmiTOU+peqmvv4oxj1x8PfX7bwAZwAbOjZ9Wxr8jAUNLINEQVtUoi
dLFpbtfF0LGkAK0x8JRuNVjFVIxHcGgMDsmEmVoYh62XcKKGpXD39W8yd+gH8E2+e6ufh/HYEMqW
B86aTyfJXt1bXih5UR6E4IfozIsXte41iYrZsRbx2R3/f6OjhxJ0bW19uD41LY0xfYyGRubujASa
H44dogLkbbBxIpAc2oiYScgjbju/NrhdOlgQfa2RC8tkpiUpwdkfYTjMOKufQ8WYr5nvRBtM0QMI
0Yu+t3FaroGXbQ/jwkdwhMhWbfDWWoF04k2YY6UVZiehz+x2iZG36dxTSVtnXdTyTaYITYESpxew
fkotHevUsPH6SCo/7PsnaZ93ho27VsM41MGw6aGtG/B2xXYiKa1UecW4KF7hbIfwUsMYFsJwD803
YsQcFjD7/0Fjaz6/ppGnzXMrK2golUXYn8umN9JzVMRDEEML4Ub59jkZNsNkboiLT4n9UYHhuFuF
Ga59Gsuxa93HxpELW3FXipzmMzT2W5wwezpT26l88R6vB8NMSam9kYHXMfSVNfj57bT2m1ggJvYI
ZrPvfr/fvuSEcZ54HByaWj6oSlqlg2Q3Evjata+IitCv124X+ry8d+h4iX43cSY9+RKb7EBhCJAk
yVBAaYDBdtsR3/l3Hc2xoU2lHm1FpmLbYrRebcsBSohnGDAJSwaKMXLiXcqjzfTZwefP83og+F76
srKvLdEq+ubrFCGUjU0M4wVhd6m+6hhvpSOb5Z40wnPThLUa7SBC+avBkmYNGZ2aeCeNJ2X2MCSa
cTUMK+3yXXCVGwxhc2TtYmxxsz7XuABtKQhdjSIXIb2R3QC3daSEFtt8t768uvInTTbWxtDXZSds
yOvB5JsdBMSpMlZ8vvfB9J423/UXMbwAGTMMx7e2Rr9Go0JSLjfgZ3PKERQk+yBA++cXIkbdz32W
6kog+KB/xcShoGfZ9Iz1mpVf1S5aUZPhw338WGXKGw2I9SOTalfgpu9f3eARGYNXi7MJfahhnQQU
IZlEhyfGwaSYwBdSUQ9nut1cry9HDhL5n1jTf26++Hs3pX2lyZhmLd9UWT1ZvKEU66dNXG6tkWLj
0lmrfy03WyqarmZmZh4bN1II8hr2WJFjew0HsZ3B0Jn18euMavNQXyr2/a0uNaXiBAkbY+Tc2dwS
oFDVHla8eik88EgXvtDZ76dT0sa6p0wginyvrQlyiU00JeZGlKwYqCcBVFWgPVOJRDpeRZnWQA11
i8yZVUc2TMdld0ToRl3PAILzX7lFPoSnttzg8+MFSyePbS6OjMQX9trbeu0ld27dlh130yqxJaWB
lGL3eX2PLz636cHO6DDfZ2R5BLc4M7ukQ4Vx3UzmXt19PQ+/S8mKl+ojbuKgzMQU5IkceuE2uv4C
UTyNbdWHkJu1e3LQMtkLIxvnAoCEZmTlLTTQ6eJ+jODJ309vbsd3H44uwaw6+j7HejJ8XkrLaPM8
OipZZKdSGd5l9NsLKCLFO/YtKNq4UrZCcfv2MzMxxQANeX6TLeiWvqv83TO6nSWl/tzlEs5cnjGz
HJ8YmOb0rS3Kd2chmdDfags7+UMi/uh24MaUsSlQ0MXABn5R+lgDrqSJ09MLOWLXUuUUSh/JbzgS
unn0b5Fml9WB0he5bpt0IyvfB+uUcOtzHfGmxgIrr9ELMb5qhDVLvwjrxLNmKvn3ZZytrljSBbKc
XhR2htAIZuiIPijRGKUhiOGd82Ju+DzylnUgVTpv4Vvmoik6w0mfzwnf3QLlMKzqP6CEYCw3X9u2
5y7CSquuXYW6bKVlGUkqT48hjoAN4vqLXYZkJgDChdrLjjuxKTVIIpCMtRntgatdKzWg6aEORbZC
gSt0YsiDMxfK3CBlvnfIolltr5tIZtvn6eqLMQaDjodJJMJOhjWMk1aWBp+X935Pyuj8akqlpzZX
smyv9xAMb9aXQoKdvYYpmlE+qSjUAUQlEVGP4hOvQ5KF2YiiVbCoPTKMIZg2NRrEmj5qo2nLjrnO
TnM7mjuq7uw53Iybl0YSgo5uT3u9bFXfPXl5c0mXdGZ3Qt06XNXXCVtItsWiKFtC2BYyR+sKQJxV
AuZ0yHfsxxw4cMnBKEPfFN6sODRSkaIImBZbOzPnTFiuCBxIEfQ+8vnBZnDtss9JspHHvlruun0+
tyyfSpW5arbGSYSbhtnucj8LO6DYCZLS/C+vzzA8YNKBw3p/SdHC08PTdTB17ZPPSDnKM02JuTuS
pPt3hB5z1D2kGsU1hxxJUyNUqQOv8M87lr8zjxuvAyuoHMMKUajCYoUHT525a2Y5sWuwAAkFErW0
nRN4eQR1+XGdg7D5B8Cn6T34fNd/Ldwl2wXcYR/HLqpnjHPX5aqy2w65ndvMrtcqeeN+Ey6HJ/L1
NW/Csoqx9UjscwGEgSskUjQ90adEy4JjaaBItP5d2StfmDB+LXGch94Opaz30dfjuHPfWZW7E3xs
Z2bVPVJoo0diZ3TuFmxIr5Bhmbbw+SV/er8f/qeoC+fdAqpU0lv1WW/Pv3Nu6DAPPvPWeOBZmefH
hxcuR8rl2zdss1BNHDfkRqNVwcWBZflIthxZh4nDdxkNBP1dxj5Jxmm5V67Cd0pSorQSQIGkmGnM
4ycINWfq/z89Oh4dAT9UvdDOH37NJ79xEQ8H+CjCeXXnr4Ld6UGK8pX4aPMxPw38rrd2P80fk+4r
M4nGX1jF/XTS4vHrb+KnjjH1TlcCpKiLUrKQqz6XTwe6ZFttlYxvh9t9M7F2eiG/R6tAEySEXfGF
l48pmXCULwAZeaeJYGyULeGoqADWtKeNrgnsczavpWyU6UvUz5biUfSMtVg6b4+Vnv4k670zK+ig
yCpcqbcPDJ8Tu2t8737q8DMK/HTHVhI3uZ4Xl8TXWL7ojW52YrqexfS7u7OkLPExi1tXxO3pIbLR
goIGa79Egg2oP5UO06b27ks7ktVq3a3/qtPWa7zA9J0ZcNK6ZHadnQzTuVAu+WzXZT5QbJa2lWh+
nOEub6O/Uu9Fm1y3zu/te17omOts4U22ZRjCChqzluyhO8Uts4wzBJeSdu0aZXJhiljhkcK4ItrO
xM231VLFyE0hDTYIohB7tWb/BjwzMIwhlXhcf1R6zU+GUyHp9UriOa03qyKqZV7GGYDMVCbiPy+d
2SRH94r5Vqn1j4ktY0ZTWKYKtYYOy8lOKfJJUi4gVgc4XlDRAmpdQ7Y62evye7OeZ1d/OzIaJolf
3wuEzMaDPojKe1aNtfmwPM68HO0kdo5722dazc236h6qUqCIQk+rALDGaKmWQqDMcl+3x22Pr7vH
RvwbeGu87+VG+SWz9u3x5bpWX9Fa7sIF2L2ZXfebZlGWDXdsPQXwtPbLZjrlC+7nXNWjmzhn0Yf0
8UduOokqUR3AwB/1toaZnm1v+6lsulF7eMS+vy/Zr0unpusvzFu+V3WWrLZtxvHaK1xL/1nZsmQ+
xF61YFZg4IIwRoWupkhnX22lCMspx/MpfR2fh/Dc8gAbwtyusIOyvndn3ttqc5E0hfPYStnsJ8EX
KTRE3QqIAG+9P5IszMxG/IvPfGUvwXl9HmvmsI0iVd0UdqpftfsjMt/m2yOlenG/0YVs5Py+X3Zt
nTA+dAfSmcRETIPub7iCcm46lEpdlsDk+N0engqxE30Djgkzu7Q9E4IK+Tm+706zaaG/hO3GP3hm
YGs2RdvGnrgYE93SD4IPFeazEPhWuLAUYTDDSRbGNNMWtFqNoyVRrSSVMLG1GqZRiUlDPX3EL8hI
erHbD0U2IAiHjbXluEG04sMkOOIVPAwkEGCkPQTnleQ4AWdCFTs7sMIiCkQ/vMK0IHMuidEvKBby
G8GSRJ06HTYimZ6QYsQEkEmYcdgDSeryco48O19krLqyiRwboXTLGGliKEZGEMgpxaBNTsXc9IoZ
i5j7E6bsopeAjJAnso4xuRECIb9pocvWTz9exezxiebbNuey7NT9Bon5ZJhhB2hTp2dwU0Hh5tsZ
NIEYPAcX3KKSxeymyFjX7B/jpI5E63Ul2QeD3WRjbZMk3mhIj9zcPsrI8Vh+DV9PORlG6hU1YTwl
5o7223y17LjXkeXuifmXb3Z6o3X5uurrMPuLKwWVsDNRTfgmdFIhz7L5ok7HL+CKFSmanqbj+PmO
86uqmfnZ+Bg6CGY8A+/SU/YJtp6h2nnpqxv1y9Pexj5Ft2YbsLD10USXzT3EH8o935M7Prl9eUam
yV20hpoUnrddkOHCy+dFobe6r8bWaMM+Eg0CEaFCOy+0XPJPGEeLnQH4f2dpG8yIUZ+w9tQrDyMx
x1gJFkHkjJCGNxkc4kbnbVXHt1vEx0lvV2+ss8d9C2bOIHMz9nX4d09Yiapvy2as/KqDdDavbdfs
e2EtV2COMZKOH4f5rpYFcWOhedA6US8d9ajBSd1AkO8lhJq7+NuzpRrVWq9gtpfujtjyztVbrZZU
qNYmSFKGMmQ1pbT5VKXVxHGg0eIaQwBpM6UsjbZiAA1+ek6sNVqOOBVc67INssDbD4fRmoMW3uAO
EWAGgp0CTcN10ZyGwRj5YkBJkOhxJkITNfSs5a9sT0yljw15QbSeb1mmxg0gW+BAgR2xhRsBCiPF
RMYehcMhxGM4TgVlbvd3pOK+u+DVPMsaSzLIsHRqdqxeCNrIsrV9IWSPAyOdLdHbZfZEqOLg/5Ew
oDOd/OtmIAm6tLaVx5nt+2qL6sIogjIs7sKIFRQFpxA+iXaOZjMN0kiIWITMyaYEGxbx7guhLnf4
qXDTPD16iBiSLzOHEPvoeMgEjOz8p2j7DIawTFGrr6+mJEUUaIJ87Ggmxd0l4Lu8In1xHZpL0fPZ
ycm5estcNXkxnx8XdamUNmUUX7NODYcr53aywt8/kmA21sFXWqZ3WxtfQtLWaydLW+Z/6fGVuuwK
9+tQgQ1NAtyqADYxO6yymdOciiEgke+HjsH5M9m1yhd0cPVG/NXc0V50d8e7q7v67WGu7ALjdpZF
xvv9M5EgNtm23wQ1E+677AiECdhSIGCqDSiAToX5wraccN7ADbVbPIcha3U4oEPtz9YHn6cBshTA
RESkFERjmGAT8qE49/XoVVfXsHN8J3VXu5JqQHngRXibTM5FUtpjG4iNfToqt8YDCiCIN/hxmuN+
7PesVNVCBPqgYFm7BWwSEBui1xO32pJJJJVhZfOquUJP145VmdNfp65Q4AAzgA01v8ecyLn0QbJO
jWJlmFlD3YwgsUlN0TZlOWMeC43vx5f6DcogKlxJt3moD5QK77sEuOnFdL0ciDMw6lGSR6I5icpO
q6OmCJlbOT2uug5l6p9Fz1rk8LueGRi8s/HDVO8oI0LawCo92GdlNVgzJBEbIlErvUWJM6yVwkaJ
8F5eH5YGdhvR/5ZSK/tvzuz78NOlmGY0uu0BMh5wab4SJoSaavg9GSIvRBJFlxXIp8qyse+pfXYs
89WdCVjZms9ED0wK3yJcC5Wi2dfD1X/rg+7QT25XBcDYvlgKItrZKA7SaLffg9svxRHzcKY8tzJd
98NIwVxaVEmCG9oxgIs6fHCcttH/Q/CcRZEYEzdCSZhmFE6fTiXwDwq+nGeManvoACGuQ+6dVnXK
vN8n1PPWS/i9W6mIIg4njRiRxh/iz21U4ZpZzc3WsHd4ABpG2ctclMnR55P1I83bjerr8qELczhH
GLwOiJrfxOW/ZjQ+bZPvQkQuq4MVsyQHEKtokfk3xBmj5QYAGgWgTGk+oklRsoYoBDcjc2JOl9sd
Mal29pkR3eT/VwGPz3dU6P07T7m3Y5nTY5pXMvv7QrpLPF6Rkj83O/lzfKuJX+TK6u84InadWvG8
ng9cCypsekh5KeCwyrKcYdf5L4y5wtL7ECOodB7FUZVPGHdhM7udKTxg4O+uvdANniTut6ZO882H
goyYo+2X0irvZcV2/R79Msfkycbxw5+bQsuno8fSvh++LfRH8f5MGPrJS+/TVi6jxIOZphmYfSr8
NXaADeEduRVEwjwlbM3K9YyABnLzW9J5QLOWHGb0V3QGKGGA+lAD4XFipi7elErRyVuuBj5bnyKV
nfsliRAl4xwjOWXSF421H1COh3d3eB8mNMrkerVCp+FVXy/GBYosvK3U35erX0X13PFuVtzFs+01
bZQONodFvmyZUbbPAILWnajhYefKXTOzzWeeFCkdqhGGUMKzNREYabLbI8kPemq0rbN9fGlIJYHf
Gw2bp01ysGOvpcksqekpE4oL0El1KzZKeqUUbdnO6PTbwjAtu012M1+FJ+R6iLc3oYyvkcOmyRzW
S+XEtZhmJPEAGyz0133x1T5O04B+xJG9RZhMyZEXYwh1sGHonjtu3WQ2bGjM0h58cjhaxecJBCA5
1HcR5mwlMYMceg2aRzlc5tvajQeVgXcB+JyJQgOOTi8o9u4wIZuSuqdcIGfDv25bVw2K1XbyLX12
E+fJpBI+kFTx0YMtNStgWbMY5CnPPz0VuDW3WVZhmLdpsO0owFjbbaaZvbGMdX8PD3dHMUU339uH
y5rSd296TAdecd2A3ABpVSvvbU7eeqgx3hy9VlkCJ+uFS05cCY5EBrrEoAmIM0UOn3G7KJFqXbLT
n8F9ELj6ZV0THafoYdCSR2epL/IviF3jEWvCI+bGMIb0jR159MvNUbl2cJeFNTtj1W+NpUBRTaPX
A9qAYUjQ4GNnzcrfu6vMKZjGH9hDLC1quihJXs96+uMV2x169nbEoV7XbX1SYY37UcbMbDhTX0J0
CZmmkC1ryqPW9dfLCo+zXyaWuVjvgwD6vhcWHO2H1+la5bUcnIqQkeRl3CSiERxTOfOK6TUkXBjC
6m2j46YK2RKRDOJVVro2kOg1m3M15au3zXW4435Wah7dIYB58kril1gtiaC1vqqWryWEjCmTm6rz
W6vkpezZaS7975nsrPNAA3Q26YMNRgVMyv3eu36yu0lNNMlea4EJWmTtZnwYgRECZCaMoTHV685G
MD+jfv2VW6OprjdGikT42ZKWcpXtiMyWNt3fjpAhhaocER20q2UNxZWyNmi4aao2mPRi1p2q+E9t
nyZRbayML67bS5Mww0p4trvjNuuijbGySsW5BemYiVuhf0tNVVdsL2ahz/p80Oroeiw6NTc5bzwx
IaNQTFUJhIRZVmdoAQqAUIDz+yzBy2o7TGpns8wHOCtd29qTw8NbLpKQPY6UpNHezDDVfq6dqn0U
U6bH8scG70x5t9ovONqpC/AsJgPZnbWPFoPh7dAfRfbAcEpcfNGw9lbWPDgVbry8PbZp6IHoh2w3
N8hVzrc8NHHv56buPSle3QSqRlIgqYJ8GIrYmsB0zwR1wMWSdMFHmKTBnp80LMP8Gq7WahELO1nB
78yImCJf03tmGJtB0414aNpnWx4F2KfjOzmDlKwREzKs/sU71F8C2d27HkaTfmJ4AREQfULK4pPH
RpGDJ1ciqixO6DBck1CTjaHghty290d2/pndTatV+vna1DFBZVwRwilHfDri85eXw116b3q5FRxy
lbxhxIKym6zfkQ6ls7dVBItg7EGSTNdIgh4DZugs4Wwk35rHjoLOPVb8hfZ8NKQi7oQTZxAO2PPH
dIJyU7CzOnzaX3sfVabJBkP5cLXx9z3d31b041982C5ZG3b5+9hmx6JIRcilVsaDENm6Gwmd/sjj
wrVKE3A5tscH2RSzxiqWayhHjv7us2GdCsnklDfM2L9Pa15wesLL4azlKUYPtltIZpoOSNvqLjcT
oW5xwepsFITXWKBZP0F+2wnyf2bWLxELwhzvgoSNkOGZx44EinCzyT20LJcLN1seOlmJZKbljmYK
yx7jxwhfLMwV5phtehupUg2PcV4mmmKcni0/LG4sv2Xwipa7cZHGPXCXsKgVNXPs083fPDy8nbpJ
41yE93O+EFJKfnzli2ImzZ0cHQQkUzHe85q6+zc9eYnTeimOLCg5zAYJjuexw/JHJ/ih5OMUcgh+
dk8vKhWSZCosGlhRAG2oKLFph8ZyDTPhfwSP0ye2D6pXnrgc0iKoPkhUCxIKHjxsx4MsQw0RcgId
WYR68P+YdeF8WPlQ0Z4lLI1AgTBkFrXcm/n9eO4i9Ds6RD1gy/1t/T+t+gRfcxrEGL3RR2Wu98Iy
l96btMZo24qkAsuOtbxa+a7QFsJRWuTYREHbP1HF1esVzhfj755ABogA1DpcfwWJBPO10lPU51dn
5peTAGZiphqutim2CFLZBiAmXX0whEknGr21U2aPNFH6aFLRUzT67NVpFNarZlvI8zZ274yre9iz
VxGV5hDHCgwUDB3k7JFP0bYLJ8crywPL0Yw3G+pifHnGclbF2g1vVqReWbqSM8SrWfBvgRKM2Cm1
IIyiJM39PnjP3JLGB3oBasjjNPLn+tGwsbTXxLJ6N35JOFyYKIMNTROUdiilDU7BRDA+WgVUesL+
6BBMWJh/JcA3wOuHCAasDq2ADEbI4jPuYLkY/SZCzICUUntP3cdx4dIHIOHhwch5EnOmJXHGdJA5
nSYckxUdI32ys2Dkg0gUR0NbsTTAGLXFqYvezv+Lbb8v46P6f9dHymwuPPAt6ceuFa1o0FtNpZ+2
6eGlsLvRdU5zaa7BG6y6wfu1whl6Ln1dr120C/EbbTHGeVpp5X+/vZsTE5bcos1CFxpu1jrXY5h9
G201lo63Jt93wTRYQm1dsKwpYT7/8fJP6u7zMaufp4b8eG6/LBkwm2oAQj8IgCi1JxO7iWowrnd1
xJP4ndPcdoKRdCEEyYu4uwMjyeyZF5039XMhtirrsNk8i0jaj7oHeDsJwagmCqYxkJA4REB1+WRw
NUoGCc3Dfl5al49MwMUBkgMyFYslKh74myRRpsVDMUl5l7LZwND3n2v4qj3XrNCEkhJqDyIgwgzA
OBKwkvHnmLqhwXMMdOIBqFI23EDbbyHw+E3bQpUtZJEolHpgUBMlIRD9F9+vnavVrfSz2MOXmmpa
qoog+/s+ODidKVqiMntLTIPdLnaVFi+PSzKrElSudEKUkkE5Yo9/VqCoS+Muft5mvr+7iuLT+ZHz
Gyg4/1MZ+V37/ccTGBeVOCghCEgWGtE6fV9VGdehoi+AWXv4anLJxdHZeOlDgXzUFFVI6N6RoZ6z
7w8T1QzqNcZtMGxkkVeEW+FA+ic9CskF/fg36+WpYYISENCmcSBSb4pvEpLJBO2HKNhyKPCzb6ff
ZnpXDfF3oeHVXdyYHDrbAOHYXx0MGZKFjAxLTj0EawfLfSVICr9nrh2zoMx2lOxQx/GfsVq7DUuM
ppIQUzV2AEWq1l1/Oipj6fGD3488Ug0ogiClpYKXdvTNc/TOzbfj+ToV91lKK17s1UUqfRZvtKw3
aQ0TEgTMMwgZmLUJnHU1q54Yfn9MqceTOiUp+PrWMa8OTn0/xZ7otYxL1bsXOkAETXbI6K7ZzI8H
uoAIuHEREQX37HtetGuXrbeGTNUuOrtFVrjDz7Z3VWRAzELcPC4voo3QfbYVLTVjOtgsrByhST47
L8rbNo9nCpi7UPELfbnPF73FRO5eNTvwXmlwv0rBvePG+XxGuazPhE8Pj+uIgIMoKW9prSstVuaF
72bm9hndQzqZkMD6bqY2mE7M4mdz69tkNNjUvwnmX6gAG6NeJdhrUQZmI44anuwet1DGU+UIeNLI
WhTfDUmfg5CfYtI0d2K9eUZZ9WkK8DdqqUv0uuEF8xmdIpB/HLzx6Wopq5kofUQQIS1LWuD1bUZN
rI19F+8EblU/yiuEiaTAvBqD5CAMIIQQdjUCHRSatzM0KcXTpPCRubnM/NbSpFqJlnbl+fd8qgvp
/3s+eirWU1EqOet9Cormc1qXpAiY/BFS790HpSDDMxXHltjdR5itEVbHGGJQJfTpxf+Oyu9RMW8o
QpQrJFUpVSB/U75+GqVYJU+2HXOK9uJ7LrjhbWONeN7ozBnYjdTKURQ2/HW/BuCEvc7BWo3grYO1
3a8VkkpTr1Cb58nMaXiZosMxujJZScTP5pX0nOH4Mdsis+WnR18Mmz73HMXPt81AxUljuwwN19UZ
LH8NgcYJ8mkzc/ASUhYBTTQBtJBnb2cfkJ6M695+V2u07YGZEey0hDajmPthg5C+PteBUcoMIEgE
LhuHlf68em6t7WCQJkmSZkgxH4Jjqc65mEH5G6aM9iTUB7d65r3+hzdPH8BeKLP24ggoOQDpkwxR
EA6AoHMVigCokkhnTQ8JZRuhYsMXviOQIjaTjDVY62rDDVwsgmv1mQxji5eJyEJTx949+psWDgDT
kavPWMJKVpojpnLosnCZCk83/tVni6WikX70X90/3Yt2sS3gzWoBsx7c5UmgAZE8XfOnhSNdklMi
RyTDMxciKxRRdWFqZQ65O3RTYdtjWj/RQv/tynD4Y40CZv4O2sbAgejMr1FJHYj3VGbfbGdm605G
PCVKCRTk9wYsmPBTS8zxRjHyLIscdkw7pg8qMezjnqzrv0ugY4iHR73dhJCBJhHmdxCG2JenKOOY
bZl8Dtpv06Ug39s5nRfstI7EV872UuLMVv77pZWPV1UwZ6nCI+2oxsygOHoccd3d+MGH/dYa2iRB
IDI3fTjkOQlJNAdIiqsU3mH+Pv9wKPgg6EYBDMNkxBzSSGE4AIqMpwo/nvr1RZ7ellrexgfb0HEW
CgrOHJNognUx6HOjXqdaaQMbaTERZGMafjtlukIoCgemshVSI6JB+doidWx8BZUUWoiKBHt7vTTs
h0g5YMwwTScSqX4re2bUfTUHOB0jjuzOgkTdSQ3UuJBwiHK1QwUNl3H9HbAFxhlk5lwSJihQgcsJ
RCS9rp/H5/6PpK+ntn8J47u/2jzCzJBJTRFSEDCxBGMpDWpm1UWmKao0aOcxZIBimJiIJT1ZOwo8
Uyvkbju0MhFCmGnRhREWHzP5MFOi0fceq+ARQkKxHOjnbF26053/nLMKtqnFoabPwWkObdsjZSR6
i4fxv/q69n77ZF2fRkdcIX290R+XKVjcp2Su06rPH+J6453qi5173O3ovEosx9er3+OUbG0kztdI
esx9ts+nLCwxJcHwXS/XAdX2PWuMMoO2zyMw3ATMkJL3xLI+JfefG43UckRucvNVnqg8WrB/KqrT
l5bTCwabJTuHhu2ndlrqUcAGe3OdI1RvTMzMddHznCrzUe+66xAA3CFSC5u+0uc1A1G8Ot1fA8tH
kJhOCE+L39NMcIR1YJQcI7HlHTOO+9yQI8FvRNylzx1uueyBbZaaaZ6gYui0Hd+cDCDsNsFNM6dj
QoSmnhc9vc5cvjzM2/pXgK4jXD44bUqIIY2TIHTKyz+smE8mJwUoXIYW1yLS10rILq/JlsstrHKC
DMjtwIQsw4X2ynG3qCxCEyYEwkxIQzVCBQg+cKGVCHmwSFkszfFVytkraKio5rUmLmQxhRcJWgcg
gloQSJQe5XlWxr7ty3Pg15G25dKgNodQgULSOQZLqDSSNLuIKPMuQRCUhmNJYWKu62cvbSnpKS1S
m90UiyyLZ23I7NX358ozvukpUleXKCnMlB2vd8bXgXk3xVfq3zi1sY0ThRa8HuHfZ+F2Xj0xXXbF
T6zEyvN9iVQ+yOZ16Ge328N/APBADtitwAsHh33xg4YCYBvpW5FJV+y3KLJaisbfJ3vebXx04ws1
SMQ6ci0D+Q+2LpYfsP7P83Uh/SfBTZdR+LrwZg77lblQP55Kf6Lq6J5rdTLBxqN+6ydis9nt4dTu
4sTPA1Cbe8PhRvcucfr2yZsGwF8LPXLIxidnm/eqmvbcYi9WH0emsQk/PE9M0xwHfUjZL3/PxoyX
mNzzsPjuPPZINY51eh26yg1kCegHHjXDXrr8f0cttu3xPtSADF2qoPx4te7nlTynpQmxgJE1P2eV
+jfLA+kf0P36X1k/KckXs+kz3YA/PP9gXEH79RBjJkZJ0MVRfm9aWzqIKitX03b+DuWNaKtGNYrY
qNoN/n3u1t4+dtq5RbFM1qSxFRWS82uR/Zlt0qTRrz/LSe/rfPbW9lY0VhX9dfoX2oAzPmF6O2IT
F99fMfc4DfbX5T7ldD0SeHLow/p5fP9krfs+08Lv5/k77z6+Q6lx3MNLduu+22Mcv4Rw+OqEuEv3
TJ6O33on21Nk5GuuBP7c6FyNWpxbi3byW4br/JsYyKuazg2847NRTPT9aIXbXlvwZ676b4UN28Ub
mwaoZLig/mxaMkVZN86J3+7XZ+HCZCh80J/knDWeBRfQhr0MGSPxIY2JvpfvgzMRQN83H4d/X2eo
w3HqdY9mwh/8O3akjBu1l3b/nV/RdGRHSjy81lYbkiZnlZ6oXf45dPhq+toZS6nwy7ZWNJ13OMSZ
HSgxgMen8CoJdtO7xNwhH5CA49dviXE+Ebtf8LfIwzA0Z6H4Vuh6fyey47X9XYfhXNTevduw47NP
R+jpY/8My+wG5XwGgMJrLLesi3edwdofzhIP6A7A/cH9ge5vNc7f7BocmrJP+VN6GP5mht/X65am
6ZH329sn/FD1cR+nQchywgP3XpJase3Yd4j6CHstzw1Dbz5W8/dlYbULZBu7QXC1ZXHShKPr9H8I
RlkQP209fti67J+pHoN+2lDYsko6YRN5T7owu1WN08fydfl3swzH5W2kj9Gk/E/3Lv4OQfzKCZPp
rNEF7sR+6DRMMD98BqTVB/P7j1/dp8fDV5mYNzejrPHtLraH4f0FWVP4chpri9i+CH6Fr9Psk0+A
z+wxhLHwzPE2cCkQ1KYYMYGTyjWlNrmulRdd2uyKTfUyWL/BXT9pfs3e9CLJYY0aZ9MLMKllmaqK
uCwtXxv5P59zUeql48/1UP3eY9Pjuh8rAcu/lAn04dXkrTqsunEzl946i7cKwHOOvfiqreNsP9nD
3S2ddGlQ/ivVgSKdNv+Fi9hmJj+NuRIZUb9QxBvxWOSTfm3Ofo+xxrEbrNe5ovu4LSF32ZB6SQ++
Rd/Klfq/o6rPT83ffwmxH2w7Yp14ygGh/c4NBMBvQVXcJiMHb+iDxXH97gFUASXwujdGy2kJyckp
Ti0+3Hl1N++V82Kt9vjBdFLIT/C9fPExQJBkGFntgBJJkU9C8yfHj3ukrc4d6rbsgWRhAl1jE+oh
h+uzD0yuuwwjcWobIxnJ4Jfq3yqv6OVdtWedrmYIuYd065mt2eFRQM6xqbCdjyZ6NZwt/bSNmdl9
k2we7LUCRdUO/Q2AwnKAbahvdQPTVTV+EOovjlxBITWqCRGYKwwsqtosTTOAoZKC5XmcL8ghnwHk
UJffGXdCD4zKEku6CNc5mC1H5tFThKWR1/JcwlxwEzxLIkfGu+KxYIBLCUCxLlQ4AW/xXkLLQgSW
Wfz6H7LOEeq1FFIZPukSK5/dI4j+cUn5h0MCin6GFYshoh/hQT+Y/vlE3/r/rKTByJBv+5L/7+8f
NK5kyZFyZtrF3jq01eTuJcPPNvX4WU8T6v3kvZZ4v1Ye5STprrnijKbtKUL4+tEUs+6BV4+Im938
wS/IwqnvGKT8AwyqApSIIWlpaTTaP3f4PXs2otKjN/Yckh+y3f7E/p7Wv65+3zsWoTbpSl/5NQ4B
35ssB5QqJPWUn/l/6nA0kQUOMTAW8jxIbTpnuApYID6/EPLY70mh6yBRIAyls8vPzwW0FD99nHcs
Q+7/8mAb74+FZgEzBUBxdIRdKxPA0BP62AW2s8NVPFsBkK/+WpARqh7uFr73clCbQib3vXkz83ND
KLUKyd0J5pxk4CIMKnlusSUQg+CkDpNjlnXKcpOkDMFFizxSh4EKznVqIs1iRi9oFZMwsEOi+FNW
dMDCiTR6ZcXWzMFj3W0WSCIHLXQ5ZooiwREw0RnC1krzrhzkCX9TSIIQiMkBQ8u6EhoyKskdaeO7
p4tS2iixk5bBQKPbNc4MlYxpShXpeX3uCIqEghmCv9/9otP/af+P/r//a44E64a/8hCoANIAJDgr
BIkiY/u4gXUBZ/MyVBYdQvvcuadDeWY1VLaTNilRa9prosyYzzO8715e6CUrERY620Kto2jFJbSW
0RBa8EYQhwiEKgQMcJkGRKY4SjELEEREBDaEdlU1eVlkmTooiH9ayq+eh2k0w8LEsDICfXIgYwqT
ApkqNIq4wCNCrAQQQKrSCKnSBQPjpMsP6LD7fz/8wfuinbwv5hSe0srD+IHVShohhKAhB9ZD0x/9
yCmwoX/cb75J15BiVCHbIeXl5JbTRRtjc6UT3tulte4U8vx0fjBuOHJrSaIDIEtNBQMJhIb/IJ66
cQUnOcuXPEbGnEMZ485OI84vLycR4OvOTiKYSiEUwcHLLA9+wxQkFe3FrIaCCfhyXSV5ctjEkm5x
AgqIKgs74Q6k7yvLwnEeQfGhk+lfhfPmvr5SpbrXb8L6KSAAAlNDfdxZ9S7RQUM0+RpDpqqJRPGL
[base64-encoded binary attachment omitted]
giVr05GQ7QKyKKsPp68bn0cxiP17Y1iz6WceHvq5GOFdV4jnshMrWsLJVdiaQmOCCKIIKJoIFn6t
TfCwr4STsOrJxzS4+9vPOYgsSIg4UBi/zPeSC0S+pI4UpCF82a+67jlHNhK/Fz8zWevRY6yfJrL8
/ODw8kzDWj3a/aXS3jLFjpciih+btcMiIoKT6E4I4N/VwxONe98ycsopkQhFqCkAqY/E9/6AxgOk
u7ApEDu8VEQCRtZSQfG2SfrtyfbVTVdiSvKvnU9n/e1Xv34wsWS8Ixd8Dg8ldDCMu1QTBkjwQYpm
IMuVHToc08JEcl1LGEVmVMZSknn55XQMzqfps2S9RQZsPQzkyTnYgq3wxNlNRpnCTkB7Gjv3FrKI
r90HUXU4OnjBQ+bhT0PKybM6D1LUZuyRb7hcLYEcfmXugGpDYiEj4Qca1F974KjvJEzElq0ynJQk
7fNBxfM8oXUbExljKg6fSUGvRFMx24xga7LYBng15BCPYrYuk21TV+veQA1mx1Zr1Qy0m1USudji
KPO3SLOjNXLLDVAjBwvRBHkftg1jXdW/wCTS6Xs6jpgzHHjv7j23l6PkikW/IjXccu8cjx57SOiJ
CZsuuENDVXPGpxpZ5wz2XK/R9ZRXxn6NVcdJrDcaG4jCM3j91bRGBGiiWAgoKGmbZzKptjeXdfH3
6dkLGSZGxMetldA2olWXnW4TMAEr08DyodIYh7jSGCtx192BPXPw8K3BWIoeYnlPj6Ga+7gNc6Xi
XJ0j32YJ15Ec5PzUvW/tUhSfqQdaimxT4vdycjoYdRA6JivI8X5UczYsNltq0l9LLmOJGKxtL86f
reenKG86UzYo38+Os8NlYwK09/y4fhY4O7u1eFkKqD+eWUooDldbfEmm0TG1N7/OnoC8jmUGdAnf
ksYlYeSEIPu8sDBSE0jJzYoWzujDU+S/D8j0pVo9k1T69qipfdSoRvr2qJ6dIsUJRtWLuUlrp5YR
8HHRn2vFarXfoTgssXhlk91hjCE3KTxRmi1Khx0vpr2RJbf0Sz7LcNEfi7ZJRjqXKjqaUe198UTT
YkrVIp8YaXlfLB8Z1IPa2iV153fybo6hxiraxVUL82wYpf3KeSQRi/b2SynK36oDZx1qa2qXNwrF
yifHfCnY+xFqecqRp5I64gERMlN36+16ork42w1VbSJLB8DEoxARHbRJ5UQ9M9PmMgpFI3I8ZdL8
SGXx3rfGMUgUy5Ofv81SVMa8o6WVCKd+jpQhRsdqTujBLENifaRrGPWTPGOMWbUNo7XVCaiOkGEY
UJUIgdtpJIn8fp6Vjlx3zqXKmEm6LHME02kssXJwnCU3Za3t5vdpcUrTGJsi65LTvg6+XF/F8J9T
zxMIJv34mcrH3eGaX6etfRXpxxLWiV2tapQSKPJiCxWu2SKmmWUpapsYZRJEMqemk/HCxlCUHwXl
WOj0zoqQiWvOBOL5QJcfewa9OWjBRVsw8YNfHvnxnZ0TNvCmqNrIxfosz43R4p46W4qMbOFl5ree
OGtEtvKNvxzw3W9rdlCpHryMZXrO2FFkWbvQAA0vT49X68MNx8WZoCQgF+pxdlZ4DXEwWWO1j030
YDRMkNQDQkSJQny5gp98IYd36XgoP0n/T6ePDo0hWxrBPVtIe2MiB9z2eFYdv/k7pmMvwYUuqind
1TsrhKGR6SZrLuqYoRKlySa3MIUi0qE7mZHjlZS/GBTO+2BuPNDMgZSdeZevt1MdanKvYzOmYBjv
uP4nmLKeRhxkIbs+AVDyN/Af4u4N+dNcZLo4af6v8/OS0qLQVMkkUjJBEsMUQBIFIswCUjSq0j3F
/1QBsJAYoPRfn/l7C/mlGX1eJixdoEJ/uLuLCyNSiv/wkA/z6LDJxj/XAZIKuCCpmCHqhrE0f8qS
9X9+8Mf90zFSyj/N/Vsp7ICh4/0BFf8dKIdIR6Uf6M3v3hf9dbl8T++UHLgUie2I/FD/CA7RDSO0
kqKfGhseF0iGiuJK6vKaPHdwQz17yxvVvjkmSOmunfNWEFregcbJGkNDFB/uEjT83B7Ph23E2hTl
wMSjJ6K886AyL2zj3wt44CdIEMt98DlXeIJAJkgkYgSCCFIaKKRN9HHhHfoyQi4FBIpIKOcYyTHp
xsljNtYxtoIkmMcEWIUlgdUQS+0wDBJoOkiJlowxKCKn/X24O82Y8owXrDdYw4K85mMxaiTXUsmS
KDWtsKb/h/6p0ee69esVT3QHdTO9keNqdSnz2yPKHF5QOJM/VbD497EO3M+hkD6UdSnSAMJDrCGr
mVOYRDpG3ngCB8Bhbz4HFWQ5KCFnS+2n/VP9f/uoJrHe6wh4Q9p9svfHwgNQ1wyPyeOeOWHluF15
hHd8vF/qPs7IdPtOqDNe5pFYNCxCMp/fP9WMHAgkQjSUKhZRYLUOvZoaILAikWQ5QlIPELdSJZXr
LRekA6uuh4nVdhAg3JCovhFEwiogQ2l2kToxb3Uo8dh42x7oeU9RFPCKKKdJ9Lwgh256QUEQCxJM
URQvqp4qLcJAVR86JUiNdPKuELKxMNtUuZ9vcqyGmaSjCFpHmEd1CZfZAazlDF089LtdppBezShH
hANYgGsUOQSc+fGqGuMwaTvSNTq4opNsMTygf/Iufpu5dHO26gNnucY1I0ENis0AJAIq7AaQ2j4O
jKRq5dvCK3AxE1i2ZpA2hziOIGsThJVKGxBMa0YCHbDPPjYgcIO3Gh2zpLVkXnGiD4zSeMUNp6ZM
8umE7yFCuIfGzwMX3TvIeydgg75Ny85ClwoJW9vwojC4TRD5el5kYUjlcejk1c8KEXEXWGM0uNKA
5Q1OcZmwNuyjwhnw8PDB5PufRLBLaVn0l9RxEN57J3zlxxAWCEg7iIgko0ao55581VVVHqojkUZX
C8dUSwDnNKoeubekoTftoE3m0D6otwFxDEjKD2Xv4uQ9kZktTEXwwyIGqvEko2olE6UmkZAEq6CR
QzNIBnupcGP+W2mVTriJuQGuNJjahcQLgcvHjZki7EXzxQNCjjtxxQEOXZI9oO1MIyogGRLPAiA5
h5kczKTfwx578KF2IPjd23XjZMg7KLpgcryjv2qQXhxBR0OD5KOJGNnExB3lgByR2QRvx1ZhLpsU
Y75khkwwM1CEzR4Y4UzwkL32djEOhB7ZNQh3Q8wjxLq63N6o3GkMxRcRaAvMgyBDvk6wh6QCG0f7
yMg3hMJHvjiAOVEFKCLQkNQRD+fr89fOva8nv9/NUJVVTS2220Z6YAeuHw+Hyml846SnmQTRBe4r
WWim2RXbs5D3KIysC0o5Ri13kgLptYFi4lM2zJDIfCTeyOJ3ti9uC2InaFPYSHfH2XfbweEqp6d2
Cd0Z3PU0FU8XQmSIjUrpTthoxDpFTzwTBBMExFR9IQT/mMIP4fvfp/T7qT4yfgv+O3P+FbBsfT+v
AXAhs7tM2/z04gSVCvz8rnLEOMJ+/TexRu/UbRDBrT8uUfs/b8+RaXfw/a2F+l36Yswtf6ioUS1F
FSCh+NoIZ/zJIJxw+aP9Y9F19J168C3j/Tuz2EwfPfe1xw1TmywMO1nI3fx/C871kfzdiDFtYzAz
C/EIMA/SBH8Lberw08Ed1n7Mj+/Vzpo7V5myHd/N/X0xm3eX+D38AwdwpGmGFb4CTeW6RuuPLnXP
BVXzPal+pQT1EBRTA/fhB8+g+htdCcWG6zJnZgmkCQCTGq1gp2T7J7v8O7XU/dtw4kIQSJEOnKTR
vEgyJ+cgyAYOVBYd/9k+Otjl2IPLc5ck9vIVKZCIidiROYb8D6upK4dVUEz65WJqooIBt/Uj6uDR
ztiJqv/O50ibMoykKfRK6bI5/3gGDu4fw//YG8FJFUYBEUDaoigfxooHOvfK+f4l/2f++UAa9Z+i
39vyOqoP+yP9sQB/yAAZ6oUAD+CrIwnMLaYN8z5en5VfnPwfWbj+QFGCaf8TRYPmHukhGJ354ABH
+aH+dX6vB/jDVYUmKB9G46Ty+v/c39fdURYTeq8ED8jBAInY9Y/fH/i+ZDRQcD2itJQQ2ABiDtBU
1LP9QEXzcMqHj6Sj3AA4EADDiiHdrKzTiKwBKYpAGnhAB+QQORgMAB+Kn9yA6jiE6oUHNotCLyf8
h4igezt5+sCoWqp9CGAzoTIdKKRhibAA46YIAGRQCsd7ahnDmdAAYPDuWeuBnCYPYeWQordE0/8E
eQISHGFDjyDdZPMAA0GVAeEOJ/+2BeKvhACB7vdc+5xDMWdHt7wtRVIcROB6F7OpdUAcIAPW6myA
A9AxqAQoBAM88WY0s0sWWpQltllUmWWaapgIx4rwQB89K+YgmoAPmAAeIKYB0APE7eh0DYAHQNgU
QoXwLeaRRGA56g4AAxXloeSin/wnowOnYPP3oznRipSESt6oPE+fc+k9W+4eneFGHYweQGxS4mKw
UW5etRAt7WeOmO7rmAwdyVmBBZYG2L4IBZwBsKR4geLreV4CXilAXmAnRegAML4g+0apifyezA/J
v/E90k5IfK+MCqLOkMx1taIUS1VHpxdtQMSQsrz7W4cGbH0IgWeHiKKf/P4AA3stDsL7UyEeO9mv
nYMZVGnVAqDUKYsj/09n+G2PxYiY8c5+Ffeh+XEdgp+XGNcj7RdPiOF5x4/7I15th9wkEwmFiEOy
hQ+cv4wz1ChYbEI/5698HQEDY6WADKxoqUiFmAiEjD0+0/vD8/5fzAKHiZ+IAOAAyADI9P4C9U9x
+exsmxz+GPpqrWYJSHgC7qq7ODKcOlv/RjD0KlHpmqeGDsMufq/P7fM3SIP9B/oP3GgDtCdkiejJ
YDDzQCiTJOIVIuGUQxUc3W3UOvc5ZiaHghjFAPYvtxPUfegLgK86UfZESbfpCfiMPv0gANKr5zgJ
8Qap1iQoT1WliQU+cyfOYAB0RYAA7AovyAhUSBT2NAAxY9vEx7lg0FBR8oRXw5mqdI+aDiJeKDP4
p4TzG/lqePZqf2Zw2zEi+0KVYnK5mZBGJ/zkTRLvteiCh0t+fA+lKgu5zhyTcXGGbUtW20zNAjwg
Fzgcw6gN4yDkAGM9aux8yHAkdgAc81brVhi7H70gEdlAcOAKLeCG4yDud3Wd5eSPP7PSADyKIcCE
0+ikAcOEHih98D1BSlp1KpBUPUIqxyJzgGkAfUADl3+lGxlWVlYyjTgyQZFmKVMwhgiASlKUoWQU
7dgAd1TAJFDUQEPYHtfodkJkgJEmeJ7Pt50J6HzEYLHgoY+BjNFVVEkQafRA9Hpg8SQEa/iNIr7Q
oOMdwh3Ns6dKpqQ9WvCi6YYpQURby6ABvxwUQMQfEipSqpAjt/yUsRNTJ3WbD14CoADKENS0CqbA
AzkppGE+J3vZE+qaCojAeiSQEd0YSwzJNAFKYibIA0WADFsTMN73Dc78GIK4YmsfbAkXlihryg0p
czps2WYr0G3d7gKiUlAiqL9sArjPpL83n+g9CvpfqfV7TCQu6evD/mLZKuXQgpTbGCI+8oZOLLia
NveSrw4ZFsAIoAIQQEcuISHmIATFEAJQAiAARLdZ1go9p9m+j2HPMFh/LEIFHxanfre2RclnKeAd
GTgLaFYocTJUPYhUPAwNHMA2KBeEOXJSowvRwthId7Ag+Av+dnHmk4rDJHc/0dHWxmHTRkTpi7rW
zAETqXzHYAA8hSsQZGwAYlZKnuAL6lA2gH/aQuIEy6AHt1E1BxuHPhCYcAkSQJ6E9EDdLD1FJVGw
APXwyZGBETzGvk6qAzK7JHgB6MJ4wPgopInXhDw7D0A+6wWBMj8tTogBgVNh3VOhvt8A1aTgCpSQ
JQgAGA730Tb813w+Z9+B7SB6Hz4clNwXVCNBL0Om+lOQAfAAHEelQ2pCRJK00WuqrMlfr2SEiAkA
J94MIY9/5C/xUpT7bcXlLbd/P64RCRK9LqSL7a3479a23apwT0ND4j6DudE8zlwiPWPgJ8z83U2n
PcL1gm8ITBuXsg+GIUYOTo1jUEv4rsj5PFB2OgAPCknR6g9iqaBGjAAPWA9oUZNLlyFB+qksex4g
lKvBpKQLTvege5sALcGabLIIRylKwIaq3SGGkRI3yHwDQyaMBXI0kMaMVXRTdENJ0DgDRCcwTity
ZoepupaDU01NLYky1mowAjaGXCpbRqRcC0UNg2SmnoaJyCwcoBujoNptToqZ4tFKxIRYQAThRo8D
AbC8esPlndKPhBqEE3M8wMv9O0zdtnX45SJRM0NAkXDwJs5Iz0pYp2wPVatfuuue3W5XXW+a7Gfr
41vGXHdJOM4V3Vmh1XU0InfO+DipXc6xhUrd3DiZEa1WJExjkgAgFEpIXpWuu2mKhl5cOkpVTYAB
sAHgAD162agA5ABoAHqAB1AB2AB0yABEuN8ZkTcNwzg7IKiMNxW3M4nIfkhhXprFTUUVzrqasSjM
2pDx6vCGiNREIg16RZkQiarrtItmrzN8RXAAzlsUVWV91ijARGY1X3VoM97kUQoK1qZRHYRzY2cI
g4nCfEetWZBUwADyNExoSiO01qlOoGPJ4gBHHPF+IgBQIiDjgQCU4MTqhSSenLRxh5EMMOVDW++p
ir3rnhdB3qpIPWSULcUpdWwXkR0jMOiHfO4nSU7UiFjUKYoMlCMMQwAe19oBixKHbzQJHcSLaJ3x
yPoDcWtopIr1nZ9KdUOmH1dZQoKQoCZhKAoJBVGWyZok0w0jbTJQkyQEoQfmvr0nZkvywdUAfMAP
JQ7Knhg+3/0DYSS9RBkiqYEYGI4sCKoQotMUJVo9ePC66/Tjhx9PlNZg9HM8DhNCrN+IUADkPEAG
5YAOiADzZJIcgU6vBvhwcxfJ4KCmGGNyeqHHVSHZTManc6kbYy21t6W+xD1hRHyBIFSaZkocBt11
Vb4JnSVytVMJ0hwNa4rarSaidnaRCxA1Kgr8PSAU49RDgADuD5+tDDYWB04nFYyBxKVoZIsiyUia
LJUplLYrSawTCSkUQyBLISGwbCh29inkG5APScnuFAwADA9ZQeIAPUYC0rt7ownGAV3HkADG4RTG
AalWlTwpaPP0D8tVyD6zVJAeADwADwh3T0VfbLD0U4SQ5XoQEsEXAYZ7QgQxkNWyywrQoWn7Rthj
QOz3n5fmACdgANtpIQjCc+wAHyTIecAHigGyHRuGM8ZO8899lx7xLAoMcrMMQwIotIxewGDaRW4Q
bPafUdmxgOPCPYHnAB5oPf2aEQqh6l0N4Qu/jD3wEJPlip/iRNB01DVfcJYobgAxEH1qKY80J7qP
KL5wAb7AAbXJIM48dEBcJ6IA+p6Wh+BTCfOx9qBHop7Kk63cOSYhhmD0X3snWD89rerZDTJDg6ho
KSmTHoAD4qI9FE9aiGklAesAH1AW5B7aIaEITghAoHAYcBUMIJhobv4g7fD0WTyqJSANUipKhRVA
qctDYx5ZAB451ABspgvDc8iL3SY9MkAfCVUxVTZEE0nUqYoPPR5h0Q3EN3hwyCkGgYYcCb2iT3fS
etH3jU0oEulChSyAowxSXGPAyBB6GHTsABtswgbqZUuiqhstUvBDkmR3ANeYADgO4AHAj1LF9WEc
gA+kGg+LWkMAAwdBF614D2dgdsPAqidh1ZsPOB2wZ+vC7BYETLckSSDmgoNvAhwDUTydNlF01DY0
AzkkgYTXAECjioppq4bzxppDRy6gUOsXALvQ0+BTo3wmYSFF6SRt1Oi6pZ2FD0zCRdMYIgdwZDo4
nWekSHqga6Y9NGoND6xHOQ5XkDj3UVRURs7OwczibkDZDQYtAcchxGiHK4SMlXXqQb8oe44PDlqg
otoe/sOu+JTh7senL4xgy1BZRsvrZAo8BhfHsnUnamrWHkdQUch0U0No8giOSA2G7kA5oZiwOS+k
hA4J3j5z00VHq6G08w2e5QGuInDyABwgCJYwi5FEbiBlLDFj1OkuGA5HmOLyoFpYdzEgkICc3dx6
jd7oKWhC+Z+s9GHFHYeXyGf6IodCw7XJ099IdYCB0o4PXH31QcyEqAyncfHjEOgmUJ19x6opNIGE
BqN2eI7IA5AB2HCF7/t2n++H/5+Lpu+XsQ8hg1SZLDnyyYQtQngiL734YeiCQ+aADAKA0KXExhkm
lHuMxPhEh6wwOAQJ3iAO2wsHznRwuipGOV0BobyNOH5jqivTVfBMoQgEBi8bEB5kDhL9l32RckAF
6e/3HT7m6biSKEYRkTYNopSi0UUA2RzgKsPz2GSD1YoCR3RCiQLaTtCkAI5Iq30GfeklROcQ21ih
9tG6BYoOCHxgFq4ABiPh2BkUHc+DGBD4KQS0EHRHomwKpbzPetjRygAOjwGgf/XItm88YSLTX540
DAA7Bz85CdPkkPDwET0m8M0hUALhSQTYKU4DSKZuecyN4GkKWyBwz8gtcNAkYwMRDwyWcDz8Y534
SCQApVQGWY0A0FDUmIaFpWFmHQwF7ohoq0uhokWw81pZiSiQpIhw7AEWUhjwHQBjkAMb680/CP8n
nvogB5eBK5EJUntQBoqRYon3FOIvCfyMOtaAESARCQ/Ufl/LS/oO5BdbbeQ7QUoBSBywK4E0gg0I
BSAQhDgnQAFZX+13Ih67jAwzEwpRgkMP4w3ADviIQfEORzNDU4AAh9gAOoqGMBsCBkABiLhUFwop
8AwDm8t0ADlQsLO1RUxLEwf15WaIjjAmEkVooCltENPx++nth7/RCe/hzrDM0IQqqF1i/ZA6j84A
IeAAPiADXR0q9gAevfVV+lMAg8xRSICc+z45xqV7vQoL8oOj9Bqboo7IA43n/uV2wuQhd1ZAPuBw
AB7QAfITXTRgcgD+HfAAD9CnEOSwFUh9ALqugAPAPLySiuoEDKgeR2x5IA9hRSDF5ZYIDvBCQhpQ
CZR5PzkQOYWAtiERMdQUAEN0ABx7ByYBwLgSNdDHAAYUMOoqDR0AAaHU8CBmGEAQ0sE70SDFMFjc
CqhSGAAbPEEGlFICpE8LKlSEAXLYAPZu8egAMKANF+NHQaEOJEORzmk7gVLLp6rxf6pdl8QAfXhu
EmOKHwWAAeJwWN7ZOhJ0kQsivN4AnKhpDmIF0mvvDoo47UM0hSQ+PrHAB7EIYABidu3JUF9NDWTb
i4RFNVeibY8cR1YSaDIgTEmbQePWdHoq9I6Lt6kh7pcBIAGCCCAWRhAicyGmTahp1YOicQzgwvAO
WufeOjgVeCAoRR8xJADt8gKnIAMBBgANhs5pBGdQAPmAB8XgADuKInPPovRxT9I+RNMoto6KEVzG
qL3RWwAG6VeIGOD1r50TyQAdaXg4AB2aQAYkAAdUqXEMLuuKBgYSGw+ZxEziBPRjjGQIE6iu+whB
DMlBBQEKsSKEMoclgGn0gPL7JD3fZfhGmGYn6pRBqPpozYdAEdDqh4C9aTvRfBEMqNUsSM+RlZCC
XFH43eBc49r0PT7az9O5qEIurRBkE5rEyfp3XZ7GVscFBJ5qMjnib2LgXKlg5VKUCgwH5DoUmF80
EBOp5+Z4cOyQJGEOIxA4Mo4spEQMkQQipvimEBEYFhpUwMKEICVA0oThEyTPh6eP6ftOfP8GPm5u
p0qHEofvP1sw9zwKU7zQIRsAJSpUYIdK9k+RzSdU8ai0lrrOY0bUai0Ysja2mpNseE4sp9DYADAl
G9g0/jgYFIUjKw20YaJTF9muiRbRbAyUtkpUJm2jUQphiFJsGbQ2MzFolRhJaJlYyJCkClKahp7H
bjXr6CAPUAH3LygDAGAAeyKdvHgpkA6n18jQrCWaKKUncAXz+o4K5Q3IOX5aL+9ZIBAAy5YEQj29
tPYicRsSzIZly4pVSusTIjQMUgkXrEMW9hIil1U1bPJs+Ha8B0nRMMBwYMCS94HkSAIdw45GCVS+
nt7bDO4cTw8AzcSNNu+Dbak0HYM+6RGLLKfT+LBnijtjwhuHlzyt7wEUW7q4aXWZ3Udr8NcyYLU7
1DIAD3gA+xPHyo6gAMGzAfb1EPaEu4SAUlQwF/irHHSAA4HrA7oF9yir5aYgEb8kgBxNAAUCi655
Q2Dn27BImUYZQumQLMLAig+IhnqfUQPSeY/HA6kXE4VMUThRkw6Q1oOl+f91i9etutr+R9YDEbES
YKRYQPgNgYien0U4d35tPzf+If+WQhRXD2v4JeM19kxWJNDbakCIQI++HxlvkgOEgYAsmpxRZzcK
RRFTf7GBQk04zqHzoEL8ACAA+Y6yBBCFvnNhD68HqN/W4Bdgk5e2Diwj1EBDNxgTfeok+Jo5etVd
A/3czayFdPNbY8SjmfHCRmlBYLicmsNP39COj0Ng5orXJBqDAkMHGC0HETwEiTSITkkSmkOcnwGA
BJoQyDq7Xb8SAhoHyaokATe5k33e0v8Nh/z+kfP+b26q5mhr1R++nWMwkFMamA/LegwQ7axBEggR
jcU6VgNQAHxKDlmdkVL5AlhA5B+cd9eSYsMLP1ybGDjhocHGTYe9Pn5LSPAkqyfrMyHIMkpBCJGh
ZL5sbCUojEwaBtFsEgnD0RSSx+XQn/NLvsiCgOdbawqw7Mwzt3lHNkqMgXcpKg3c+Ilg4AB6xQPQ
cThxfIAH8QeUHZAHpnCIJEBiBnWxSgaQMTBxOyFn3FhwIHQ5+S1gdoSxCrp0AEcnIcGCfYeapYmT
5llEeb98ORVBpuLlAHNHAABuwdzHyhQ7pQMSEAMh6d0Oz5s5Tc+wQgPCAk5x5DpDfcMCCgZfeFmL
A+niHy/lEpVH/YnO1pJQ4pIR5fOED/Pxfv/A7X4s5qzZHwWrjF9jLYsmCr3JoXFo0YADAAOCAyAO
Wcxq+EYCciO6JjSOKTgFmombE0aTKYs2vLp1lACI/u6FQidHV1fOyBUHwhwOWgngxWqFg5vkCMwU
h3As8UkjFfVkwSR8xEooJoDcQKoAPtHpSewz9jweatqmwZPUlgA8wAaggIV2WjgYGcHYgAMG21Cx
L0QAgASEEBtlC0lPYq9Zy4AA09iimTsQBiqmUyUIaqgkVGyyEDzmATCKNAixCNIANIEARdjMh7KZ
rREpQhkBkiUWdUTQiHcIQYZTguVV9ZAE3AB1TKmocBoWEYQiVAkOoEeGyh5BAaN0Sh5AAaP3JhpP
sITzhTuUEOUHcjFsaUQZJDsgDAA4hrEwAFICDYAD0tcKohgRIOKjLgVTrMHsA7E9nyhGkf5akUyW
IZgihhoFF63VQ6PCHXAQG6nVAAcUykSSoILTBhK9lwx8UDEhxFeV80xQ/JCO5AG5GDD8gdyineEn
0EEF93BxuB9ij+SKA8bKqiUePGiwFTB82MdYAOGkjJIBkjV+bWxyjwLmDBwATZmJTlfIopD2wYwi
DgCtvTdnAoyWBUCpIjuHYh0Y78mi8MQPo7zde6QXoeeA6ZGWYgSkEICA7KOygeSAOnj9HdVj3QJi
6MIMMAbA5fQAHYH0O5VSJAEScYceIINiAPBaDiCECKG4LAGKSUiBS0CyxSPmyAWr6Tgxcq6CGOIA
b9aikWgQzoIvLoQ0pRBBASw8AhoBDwFLbvDlE8AAZTspMAxKEiD4x0CBUWl1DjuKUkUVOaDXQTik
HdiIQyAmyqatAAyhpAUMDmjoHK70UdWB6DHwAB4VPHt544yiwETpxVwnaPWcASSQRGJh5WeyAJ4n
FVUpADH8WyinBVTYxjjd1Lq0yKDeacKKBkgAOJgpvxWYZUTVNVnFoXRrQqSPhy9HR1CAeMCdksUU
2tkSgAdHmUcwAYPCKocT/GBxxetgAcTIcwxklVJJ30eB1VMgA2ADwgyNrzIAA+CuGJrbqWO4ANBY
MQe4AXkiHgvQd48NgAYq0ATwTI7AA9nLYAHjok3UUpHIng9s1XZ2A2cIbPri0GYlAecnj/5KMJNa
Tg2rw4AUGDI0fLFH55hDUI6OuEgmLU1EzHgLMmtnYhLYnSddkeG6cIppk3fiSYuMMOmZhHBhMxQC
xau9QwjouQlGiDvAbYJGeQYKwCDCHHvt4ARE9UjhMfwYRNVOyA8UOZSc+gBYoAfIaUm4gO7yvzIM
q+a4IjJ5epCTYeDAd1UDp9G4h8n+QdP9fyPjw5EA8R+QOs2zWEAaE7RCnDAAHtT5jYXQ8y0I0UNA
apAdOj7lAdBAcDHvSt34hDiFgd8/pAB5TyDZIiCk++BscVionduoK6rt1pnVZ2Mw1BBUsYyMrRZY
WWDBKWSr+IculOmYMfaj5RVogowAHOChAfIKEfM3Ezp6H16abBFfeBAAbCw6PX4W0evuABpRNAvA
mqANnugZVVP9u6FIQAkCE6sIv1BC4yhqmhpZQ1DBDEBlDQFLyD9fmmB2B6v0Z9A5yIrkCAIVQQih
3EHus/VlFp/n4oA2opFFJI+ag5w7QojiijGMR38jDrhTlaU2upYkbSQCUAIhIpQKkgDIg6QBkQH0
04HkXKes2OV/cb/QNBQQyzpxDE3x+WB9rpYYejoGcr42FjEWAgh7EzoGKFoY7lYoJijyKzhjHJqm
glZhckDHVzXI65iBoxEuG9FVW2S1UdfREfCfgi++XDdHx+O4UHEoSjoc0AH6g1w2LhAXrUiREHD1
toA7CaKGGKoZqNAA2YADhkiTgAOJDtsvX6TA9UFMH1mZMeBlUsiAaD2qHxgNcVTvAUcrow4jxgrT
D4mk3FaSj6vi14HUADvoGDvrg9pEIld5F2gImnZQBRETwSQ7X6IHgAG3kSDjB3HZ80MQ7zTiiZN0
QQokIfSKKZzZ4JqbuDpJCSiMTIQwQL/K9lWk91AckEXwOYAPqBU7UMD60CAI8Q13D0lB9QbPf1sm
g9bscu1CPelUqsQtUI0ieKRUo4kF5iCuqD5uSTKvU7ICdhqi9RAAerruyVUqRXuVQcXB+SopBmGI
QKKQBjwUU2NGqEiYallFNIqYDjOMjQGjx8t1E4E5Kj6BCDsBrwoFTRECvVC3CQiAPYZGlzDikR9+
XaJOR8h7vkO0AHiAj3JyYAD2rwMywUHsGaIgAabHGAkJBsVjjBD6Al37jAD1kCugE8mVCzcQ3H8v
9z6NfGJkAQoJjV+RiDZH67sLuq/dpIGZiQQZhmO0UYx1woaDx9F3uj1y6fVhR4ky1hFt84ZT7aF+
adhrCTJBaPYmDCJVb34rj59q11ualKENtFaQA0qFI0ZmTbEQ81XyTLtcaMGyQoUUk5oAPnAB+/gF
S1VTvAB7wAfOBB5D087v8s1/3Zrxh4YJh5UUxk7ujLZH38dcd87ztxVHA0KznJLMEv30xjgIwXQ4
HCd+u1+HCnfWuOAAb37MWhHsKeBIL1/bnZj2y5/Nt9+v2y8NLwjJK35BmAAi4AIw3xKoucQwdlBR
RE5d4QOCIJ8Cw4cED0n0J98PEQF/XLjOowjkAH6ZDnnJfgB7nvqvIz4yqmgAbJ8mOQAfnzeGABiq
e78y/lO4R1QhH7sDAKaVDfhQHoADDkJQXcbqu6pcdgAfWoh47m4AO2BtFVVTMRSrtKc9JVSBqKq7
xx9YfWiO7qQ6gAw0UQExLMvhng6BA9X6i+4neNgKQ97uFnMg8uZp0ABmmxSQnq1GrYyMRVOcAA94
AOepRO6E8jQh1h+ST0MDvgriIGCiUdRdJIJlqJdiFzbYAmj57Lt5hESgibbfbS+0Qx2IHqEQeRYd
555Dtt0AF61R0KPn2G0TnQAOMiFpVDfcuGkhrEAHQAGKqUREE6KIUpYANHRcJd0PGZkqoPgJAZis
dlenyIfDW+4c6V9lyeRlCqChDs5D5i3YAG9Ud8FCk2ABxAxRO+d3cwdCpBAxrbumGiQ0KjIcI2wA
ByDTxTqRfdZCdRtuNo7RSG/nR6Lz4cU9slymqld0otDp1rsHQ9WpJkvk8nt6tHgQdRi8iE24OcdE
147vm5ySVRJ4PzhFF9Xlo1BXeyf9AApbR/0MehdCYA2bRtbcdoamYZiIR3bJRCJR+pQOpdPqeOx5
90+w2+YmSqfKfY1LuVrAREGNSyzHydaxvnXa+CimpQAPREHcFU4gAwAHdRTAAd3cLMMMMySWPlnu
7sMybcDZB4WSWmC7PfzymIg5kLgBoOhgEFQ4NisUUJ0yDmjPNS4zrXCjZpIS3VzVhM5ip8zpXnJh
sWRmGrZC0doiLoRbg5UxGVAPkiXwsbqZkbLxMcEXgjd4y9WyUO5JkIiAEaujJBb7rAZ84zk3mjk4
zO5gMvU1Q45V0OSZm6QnZqHUIw34LBcW78OTSCOIA9wdzcPy8VVFRUH1+s/CGmwAPQF2Dv8yRQDX
3T0uUzij/P+zTLgodZpiT7afxNRStRStT/CmamZFhIpwtOWqLCvf7xpzhbYpWopbRStREtss7pUq
KW2Afxxh4gMhMkJ3uBJgE0osQD4Kh7UfWwp6eeyh8gnL0D+TwwpqnwwwqiKilowlHIzMcvGTaHRM
TERfhIz3IPLIE4iPwQqAvg/NTagBxAClCqO3j8ZhbFVMcoomgCvIpUkVBFp7jP23gAE+5vJORkXl
N8TzIdpp67PYQIxZ8A+u0MEVB5OSPLt+n39qd6+rIOF8kSEvzuLkMdyCL6xgzzHscM7H4ZE1LLvc
ABDIAPUADQADqDrsa7DabCm4ANUdyANgAOqjweYjwPDOhgeBxbPDBikYADYcWrIoxFRCTGLz0K3X
kAH18p9jgYIbnoqlfHDEIYB0ih1TyIICQiehne0AaAAewOtRpiieqAh9ACaoHpgnXtPZXhUoUU6A
A7KiernzkOvTB2hAwavaJYAO6hgVwliiloo8kg8G0Gjm6Ja5ZVNIOHtg7xLJNMHhFGRnmADm08Ka
8032Eh0O401AI/DJpg6EKJkAHrNR8QAHkJjEKQAfSMszBAGl3vPy4PpNfNpBA9R5oq+GTlrBo2ow
CQTm2dEhCEkJDGY4RiaQeCbAdibXhEKdKXWQmvDQiOFAvQCgrM0hTTUY2Wl44qWGg6cQdBo2d6ij
OOATohxwbp06JQw/J6ebR9NkGr2lSZJtRBkDtLtK/5W2h32yj/Zxhr1cPxePZf8sFNYdQCLDtEBD
L2VB+sBH2DEBKPUoneQULBYHD+D7Sh6kAfnVTjy8P3aerCqh/8QAR8H7IiFdwPAK4JqIW+0EuKEX
3/F3mBdgMgA9R3gC+G3iUoSgoMAA1S2xNQ8omiIZPegDygeetvfpue+j0cABoAGfQgAYSgpjQwnU
3AB7k4sReeg4G+KYCqRVA6hOAAPp+MhN3gDjoB2maH5cUu6AB2FFO3sJPN2GQ7PcxJUdsPhrAPqw
9R4c4L3vtdJQFAImUwv6sBIoO8a0trEGAycJGO92B7lUGBgp8rIC3YtBTp8eVwUflLCRA7IyJg0d
as4cSRXkdogEObFpsokHqLNDqNv3s0OMRCJIKCtP/N/HaOPo+PwDTpEP4kyAfOwlgfwYD9EI+Eu1
kNKGTkpyMCG7LSxAB3aMIkNDKDzt/w1z7MDghCyAGpwp8hfAAGJ4AA5D1AA+Gh5qdAoiEOzDt/u7
v8em0uZAhRIEziIFbyw+zA7BQG9wdIApgcADoJk6HFgooRABtgAOnFugAbQBo7SY5iA0YiXkVH1C
pfaAwHPxVOFBUkFCorZETTUuiQD1DWUIBQkOMeMKtPVeb4nk9JqvVfQ6/ZoLxdjC0JthiNBCYZiR
GLxLiEAD5eILXbwDvg03JI1VBPg6qOvxm6lgA8tPxP9L/VkeoMhR1CEh/CwriODYB7wwxhVpSZiI
vAVscoCnoSuZ6/PrVTFVDFVbMz6cBgAHCqpgAGgAdAAdAAcAqmCsBCUYIJmJvFzBNzbVQUnTHUOI
aaI1ZYwAGlVTOL0DOhDGCdUb1zVJrpm5mqHJpRdmg+hDy3XU1Q1EEMiilCBs7omwsBf/44uU/sgH
MRB8kCCuL7/8i8Cv2QgMFAVWsfmBvsM/iP6P92iEnQx9Am/KI/3f7D88v3+lPbg7eZEQKxEzUEXC
SqJGgoIho/yuwUQdD/Qf6LuOrnXPOYYuMIP8oiJCU9d3n/j6jWa/2xihKcCLI3YJQhQHRR7X/zWE
WYxe1e4pP0GyK4mFEmopzXH91hOk2EUiw6a7ytD+vcctG+k+dfVAvmXGcOKxNWfQMaCNKANRYs5V
wPSmrSSPQMQRSxmPqWsfFTB8M3g19AitpJaP8E1n3PqvFjVIydhFToHM7SQf/K/9vIODtE6upXw3
4Hdp9clP7y+auWaPbeZoIPdsIwwGG+1Ts+ldtaSl+2hSSOCdb4XSTqWxdkO6yZzmS7qS6ZkZZ3po
mlD+0QFQfWnDQQ4hUrUJSODUxP+iqMTNlcxMnKMiHpn+4QU/4Uwz4Wk1I52Dptl5Swj2rwhEKtF1
FnhBGIxCyPQW7oHpLBxkMKPaEDarthyMR7Bhy2eeP0Z5GEUxXmYGPywnrjaTI2WfyXe7mBBioMIZ
cYRacOouqJtJWChQsYyg4VoLMoeSjRqLr/rxMWYlzJjDcIkErSUf1KGo0qtpoGxGHCynP/h/0foz
v/9qGkcH/yHpHMsO8P7T9i5dnv7SPQLl/fcAfIeojBhw9jnNj5WbIPEspajCLP7T7D/5GfhmU+I/
E2If5vblftX6NOw5f8ULaF/no398BIziOfgkxCPT6iBR7bYDeXHlIpzWEWTGPKjjMWvFV3slB3Qd
+3WmxIXdHU4/7PQOI+r/7eNUz0TitwPtibPt6rmnnoUCO6smLHxdUsKVJuijSHF2sS/3rXGCkfb4
zGdqBCHpMEFP+eEWjPPRXdWbj+nvy5irGwTe1zLiIikXuBSxzGQTENQfUKBt/EQdXC/0JDGkHBJL
TS8xiBh+yEGRk7p1ciwOVsFJcEaFkr8YhDwIsJj7PHYQEMZfyf2fz+j9Pb+t/69W686UtUVf3hYm
ra5hG7GmNPacf9MPWPD0pelQpjL/l39p5v9NxhDkTvOkD8kzkAAPS8/iwGrqDPUKb1U4/zNYXrH4
+awVg4kIFITiSZWoDbAcD+13ZGDf2YKfq/hE8YzYQuuY95XM6Bzy5HNLkOlCGuG6HqHhP5JFvmRq
tb2849Tdl/P9FtYCb0MNkbNbRez/tr/Luj2+8lS+nDynb0MzU6u3+//0u1Hnc7IGBBnS55whN+ek
Ybokt4RMYktISYfqJ2DYI3oKkbTA2EP3CnY58vGOr6Pws3pbifQ1p/wn0llsOFjxjDp83n9Ee2yp
Cj7zcbzjzjvifETx9Ds8+FDtuNgvVt7dWDdOBcdtsP1+vxE44D5P9V/mX/q7PhFFf/7Uv5o1VR2m
1kGiZpCAnt8c7HHeEmZu8LXp3f8oNrQGFrtamIT5on3dvv0C2e7zdPos9BenLDB7rccjbKZR+ct3
9U872wsLdMTJGy4HwlZCNkMs0x4q5YIxRlo9u/dBUPgORXk/xoFAozIsxMN2dkrMdcvDSPPZOxLZ
DfvLCS/r5/XmYGy7ZXbhdt0z1NrNTpp8qPZRFrvTH4n0IxwyifWdHubixhpq/m/KSD5zz10H4fd4
p/xM0aFIMZszf7RJm3O30MOJ6e0zY8l3tflR2P9XsoKEqYIKoIRipCEkIn7D9FKAf7/yhD7Sk5gx
9234Dw6hCDpZvBm5eb2f7u0mCxZplmZYSJgIzY+thEMA4TQc/cNqwj+8QYHP+ORn+12uMebScOEd
OINx55aiCi6+Yni/BNnv7irvNzFDLsG4GJMIaCCpZu/xqsHqm3GpiaNH7MQIQc4KUqXLnOgoNwiI
VCFCpiIIqqiOvlvngD3a6ZlbG5ykGgzdspDOODiDcLIMQOwyJwDjp8XpzNnFr3UgtXAd+x5CTJE0
XF+SnQind2TroNIHXeKOl4esOifxEU7BNXtGHAujdIIQeRkq8w27Czpp8zVSpEmoGwxDKA7JExns
CL2tgaQ2NDiWVCo4Tekz+nlRuVW25v/74dDBiEbITMYUL3AQ84HESQmE7+UOi+JYGB0OtWnZMDgI
opdAYCu3qzkYxSPF7A5vSIOW4Gy9EvWTww0hlCg0Tla4CIRxg7a1FlzDx8JIHPhwdxW8+6ZoTbdM
hpKJbVTyI6vhQGYfGQ3kL6AX5nsTZ5JxPyNK2K8gopVgQWFhJ8JDwTuKI7Zs2j84ceDHTPI4PNCf
ViU5CGuoNqxVU6eDAzIdGLDBGIj19XmTYMVex6mc+4blub5qI+g7Bnr5bNd2lfsq1XL9NLcOVb76
wvskqXF6UL4JXXUrepQnJ3hNYJTrZduxut61rGKqpzq9zM4xtsJhQ9nXhlRncdyTv2/cikqYj6iz
LNHPXoCvwHo9YWfNg3PBnInZKXuBoZ93YW1gifg8I8MqoRHT1eFS001RVN7yfgeHo7cgD8Q36Xpv
QS2B1Lt4K3kkIQD8nu/L/dWf7w9HaPdhB8PakkJS+B4hsPl3uqb6yQka7zrPEyaAHK34nAbKBloD
4ggapAib0m7pTdcIQQk2E4yEYNAE0CkREWiVOljavyHcqhkRC0LuhHQDtPlJ5tLojEzBbk4tBxbY
EBxNBRVTMuBMzBVNmoG76jL3KwA7wg10PW2dA6VwPADoRQCHiUq1FZAQ5HM0y15YclPod8xXDA/l
O5OyJt718u6sjui9r+idXQF25SUlW6ygQ9JuFJ4lmOXeGHmvoFPZwGg7JnVen+L3ZPM2Pi9uQkhI
oKqQJmU22JTlwLqjuJVVQwUNEylCYEDkwTR4GntxmdTPKdbW8OW85VHahg7IT0YVBjKgVAWacN9Q
fjVIrBE/JS236Op6yIPm36dTzHK3KH4cbeLTrPEnFPauJv5Y+QUmsDPRizBoImqQ3PkEN9gp5Dd7
0EcIAaDbJFRkYXYgILfvgfe65jcEF7ExqhTwCnh84eYsXCGe0IdbjqHaQi1g31Oyd5MlqXUhpjJx
tEmBM0Dye29RdnTdTDRTkGQbCnka8MimCLJuvIHVwXU2cpCnkTgx05dGEHW+d3QUSTQHPZuh2qoo
pO55yO+0VaPowcq2sqCpp6Q3LHiuQi2HNOLvvzmmaCoHPdmzzMh53rDjkvrhDDJ2UnAykK2RleuG
mdmzh0HSK+RRuqwpqldVWFVGHgq7rFGKNnySHXBx8h8n+VdeZRd4PI9MFphqRkiX0Q4veHcvYa6O
qNqkKZHo5OnXXEQ5eHDkVdEyirPZ9J1twlvLy0VCKKLFaWJkF38pGMnudnx260drDaPK7Hwzbthh
OpontCXQECGHXK41w3pVIRMVSHGqWIeXedL5oipBYO8GIZPoPEs0D/Yfzo014HiJgZMmYA8r0K78
7MU7uJ5MY6xwybJpxaJYdjA1B7zQNxDv8G3f1hk4O4Na8eLOVfOfMxAQyMwkB4Rjrn3YFdL8/bnb
Ny0xmmZohw1HidfV61mvM2Fdk6cgdziR7kIECl5NpQFoRKDBALYGA51jBqE6EKsVWSfcQOFkD7fu
9KisC9Ic0yfGWUUNFYkni3tCOCmA9h2lQRIoII6AOvYn3Tr2bbb4ttt4h1ZSHERvExsL7RWjRCiY
iwBaycyeY0TGaXXg+BByQhaQqpWClsYS7LejMi2O3hqnFpIhuEXZ4Ds88i8H4fcnmF+iK+Y2s7MA
ZA5B8h7fo+Vvb3/Np+PZOf4NvuuCpFiQSBH0//Ef74P+39zSL9SRAkH/ZAA/8w9EDBH+//n/df71
/+L/tI8Z/Awr/JaHg21/GZL6ocMolTQM5xjMIRFfXZqUTVez/V95px5Ov9omM2cqxZPb8uz5fERY
zyc6E4m7rmMEV2rrR/6H/Tua20tq2lVV9TeuOKp+HX9qksmtkXUldZTa7ag+Z+gP4z5V/On6jlw4
n+k/UJqQdR+IIEYWFFBCwX+4OAf3j9JjUvpJd4LCgPVg3IJ/EX4u4cSdiJJPiEs+r09PQ81ez0D+
w9P4Iu7wZ4vi89lxA27WTiH6rbPENR6+rA6j3Iww8PsPxwnzH+t9JZ/zeXqvy+Gj9rmxlaltsyNS
j/z9vy8oOxDHyLBhFEQRF/lYuScfq88U1V0hgal+fcEWCqQDHt+rskkJJGYBTmh/XThHejxaLyZy
YzWYbwQRGT6jrC4n+vAploQ7nkb6fx3s8I+7b1x9avDW50BISSJlSPX6y28rv7vJEpjuhbGYxc4u
GoXrEdIFxADOSxHIMjAkDHaCYZkKcF6d9UXh06qaPV6x5hNIToaS+J0N/d7oZyQBWQDg7nlFVqBd
wby1gQpyDZCaHpP/fzA2h6Q6JvvthORgEH5DzrEytnYxOvd6B704aTmKJkESCBxcIMlGYjAVfwBI
DsI6MgUFBAHoTHOQhCi0UoHzepblcRR0c5yGIuZS3L7FvBXNm4xzazGLmkrKiJ3jSDjA6SRW920n
ZiSYkwI2cGXzcyr3w+MQXgxq4S3RJAgW7OUKeSPoPt47hdXdwlVamRSphQh6Mm48AxBJ9wSSeTiJ
Ncj2mUUdBwV4EU4kJVAnZgpY9k40EOYHF8e9Z76BLIMgNsJa1Rb1ug+5PccF3RDsx8Q4NPYCer1O
Ga5DEyihJccJwlx5dDgpAeMT45mHvM+LhsgRHV7jqHnF8nuq0Ovkr3YZCUAUAEwU1TDCkZZ8fAxR
JUD+v26+QujYfB1B7gn6O6zCQTLBjwSFR6OKVFMLCoZomDB2AUb93qZ1Tb1Q0zzkzr1qiwwXc7LP
i4d0nQwbnJ5Bv7nUHuQQiKu/HvolVwB6B2KzjvgLiXplvnMebiZJCZgNG6G6BCt7oBttNsfE7VPX
4nwi/f8edztAYgp9Ek7+ZVAixk7lTtTB5yiYleyDWKMTF8nPBNE+ewGZwkHI1m844mheQg6LOieh
ESrGJZOTSQU4N2G74vi1xCTtGmjXui+h9R2I9g+C7kYaxnM2MBIIHZVi0jm53abqRtuhfazDOSso
5PWuOL420wsgOyR0gTgD8axQXIoQoFAKA1r3NDhjjR3OY8Qxny4SWQycUJJcERjAyAuEYHOHzHD3
xEfU4j1QF6qbqJbY8i6CHTr8/yAxOTqaH4V61Nw5neEbiDr5Ubc7NKPkEt+HDFIyk5YdEqCR6wkQ
S8ee482d88jzrSxLW3SBYVmhahRtrApWxttlW0PE0h83/Oh+NkjIk5B6Ic3lZFJuzlPHKH+D0aHt
Dzm8EEDMKCGoGegx8TctXbQoszESRKFdFXDNFE0G8ysXM4PZ59Tt20Xc/oV/QeRzNIS6HBmi9R6G
/r8eLm7m7xmje1CEJCUIQkffs+Caez2fN9h9+B9+k95JjA5ANAQREEkTRJNC0hBFLGn7tYcHxnlI
e71X7BgxMhxmiFbtvmLMb4+LaXMIwsyzOxF8oYnrVCdjS+bp5IdI2pJ+vUODvEimBJUQMMXPeBed
rImSLjHY2Ng5j6nxdAbjhgomSqUIPLRJ1WSxuB1A9ZmEId3d3Nehz3nkVwkehhshrNrGZiKRDuDx
WTdUPNDc9RwkJFKC1NAyNmESL3mUex6yDJBmN0Zho80Vd4mF6mQlKzZqjJU0qNCw9DMbgbyTnIIM
GCoKIjPd92tspVs0FfOB4O5lVR9Pd4pbbS21S6dPXDN3dy7JLpOgZDvLeLE4kC43lymyuHnDBxAp
2Os94DDk9w9Z1m0KOfUSUOALXUpaSKb5OLgwrxax0vnlzaZzjM2HJh6uDRBMBx9EAr4eI0kzK1rg
M+FuEh389x4CYkBR8D1EvajlNITqIAj3nPSPplkNQ5q9v6uarcnpvQFHu2sLrw2pd9Deurpmru29
86qnVXcVd3aururU3cBVCu0plxVUJk5q7G07bTbmGOLtzJUTNWrEkjDnGMNJmFntMNDsTh4PU+Do
2HaSzE++1PE0lbniHOFy3rQ4n2FSiyC9Mm6wZe7ZBTiKSy0o8Zx7ScWM4WMCykIRmZGRQWZ2D0wm
fPyY0fhlx6p+bnvqt5kkIYxLKLulJhJWcZ3HiqGhoFKnI446TeEaQWsfrPd5c5PDzrZk9YdskrOH
MIw01Mj9BwDd3pQB4X4HfQF3V3+Vaq/YGDPujcYhv61IXMztQQG25LMmPLewYHg+eeft4yJCReXo
xmzYryjiVvgGFwd/WupGCHFv50Kpq5kMwoRaQkcBgky0uEDJ5yFLLZXRNQKuROOdSVeI4MqPMsXQ
85oWlllKaJkhSIrxtgdC3h3LJ9n1fD2h5dbK0u21qoRGImJEEMR8EqnA8TSrBuL0PDYow9PM2qS8
TpwkOh1VgZJJjW+9aRM2yA/KeQFRkYFyfnNaZjxdvAqz2BQeCSOCej6g9RKRYNBCr+N9m1Pox2VL
4nh3AOm+cQPEhnhV0JM9j3HW9jZIpTlPck5uOihiKeTNFFURa46W8uFCzajo5SJzKA4oSEWBRbjC
sw4egcVDQoKVKOAFE0krxq5ZdWSSNlFKHusu6K+rqPb6Nb7O71m9C7uOtOIEljPUTff/Pxes7h3T
2E0HEgrzPO8GinwaPE3ClKoiWZwBgxg3eWI2NDIYQQZ26TEDIiRJnf46eKodJZxX9R/K3ieMIPAI
PCGG4bqqR2j64o2eOuc6O9Gd+ZvNGatiWeEkndZ4Hf3yT7JA6w5o83YPceZ4L8pXUmAT1hlPagRA
v0pgdxwL1pqqiYsViCJFPUSpS0jQRYaSivRSPR5PI4Xp6TEzMxMz5PfHQeoTySQdqq2+TBhlRpyi
GBt6ixMyZQygWgJkJml1BsMLLxFqVEhQ1EQnJpHxqTw5dZsGomPmLFZz7iMxsMOSiEdT43iD7cGq
dTrwcBgiL2AQ6g37guP2COPwyi3aRMySSTarbKLqGoljFMeLsuUZzZhhgI4uGp3edfXwE6vGrjbH
HBzYj12Ir6qirPfNyG6k8U94Na4M8XI6ZKUxkwsqNFIGTqktuy4ZJIZODcmnvUG7PhKZMkNoZqYT
rN21yYMeqE81kg+JMkZAoOuc+lNV5rL2GnchuSsh20sexrtDjwhKozcZQhGdGuSmNHhtDSETI0L4
RB+oPkB8FdxT1KF+De+ApfyyeqBo3tSnmCFqFcqcvZg8CfJH1Szr954LxIPB8bk6SCOjtCSSS+a4
MwuU/EhnDJll3iBCJQ5Tg+O+1wPNV68bHDo2pbjQ68N66tuIiidXnLx3hOa+UCeoVsPpIovAspPk
w1nS5yLGHCKiyZnXRgbtHYHHuK1bnmBYTHYaJaHIMcWvLmeEtNlrE7hDYgg2F5RjAQlFHzbyEIaq
YU6B3m5CFvkQpPiD4ttj0s93t6wNt+vFIbTZhmXMy2Sy8esISRMxY1DZDXrCUxSKdIF77XTjhaXj
N5w5tiRQUim4mTvQLIXHqgQxMQgCtPOY2sbtxnGYmDVCngQIWJzYd08SdSExYnoaUzrdCL4Oet6O
/FDuQTnEVRghscd6cG6m8TFRUd7D3knfYQbjuL40E9xwH0IfTxW9RSJ5BIcnY6CGQosm71LGOiaL
EvTufW/E+Dobd8ldCMnA3AdXof++0sYxSZTW4zDo0ZrggIV5ihUHgkzUrpZdaxStgogq4n0vF1eX
Z0xV7AfZSSYEWYhOIExXj5Z3AKFyXZPPGbpioiNom3zjPp8Xz6n1MfVSi+xDT6i/KqWdAMPgU+vB
LBJyh0W4npYWayhXaWcTrDsq6KPSe7nMZqr+W0fRoDJOuizCjOqSyHpIUs9knso6jvctlhSmG+66
l2pCo02EuFEUKMQFwSHuYSF1kzFaNXMm9WSi+d8nF2WaXu1EWxbWVZYzlZj4mz5B3j5HTjq7PXmK
O7MG53MszJOVdHLh3psaoN8o863c5Ds9fCrHhDAGL3m+UqHaHgE8TRVT2ganhrzKOrwHsolJQ9oc
wOXNIanuKO4N+PdaWWVbHmQKa4HOPaPNOLgw6HcEaH3KGhEe0oR9CkhorsB6GynXzqMEz1FggcfO
GVtJXo7+PXxKJKlHIeRuwbXeEjO4cv3TKiJops6HekiFEL7DsDJY7LOo2bLzwkghqHFyJyab76JK
ge42IHe6puR4RHeBuEIVgfTHj7uc85ohKLEFSGlVq/GeqJkhDmZwAZTY4JJMuKxN7cUTh4SwmIKF
tS18DNmMcYpW0963ZiYalTnBq6ESIRV4ErmrHUQkZyaS8K7gyjOplFzJkZjZObOdMNOpyitO84mp
qYwhPGnV0noxMemqeobO6bng2mCqQwscIwwjCY0eA0B5dDxyEhHiBw7kiZXoJ6Di+f0ch5hOfEcq
y/GHxmD22e1EUYvtTwhjIUK2RkVaxWMaaQJJA0yzOQGrz0ornzwJyTNT648xjJRrEUZNQjCM4w44
PQRCRAmMYQJQQmt1Wi7OwJsE9x1JsTuIfSLkPmDq4G+FZm4qBpQlEkCBzW8HKHcHMyOBNA0jPWfD
sdHV0DgWg4FBStUKmMJKhHyncfdthhRn7hsRQ2a/emk4jax/q9KC5/5k9NTnhXm/ynE+7DpelORo
mhKVdNndj6cn20dqaSM1dfZYOOT4Tziz6MdgwKaeDwZx3z383vwcmk4Zwfbx4ydsa9IEY1zJ1jpV
uG1bXKGg0tCUm80cav4rtTsSyyqdZ9XKsUNQkkmXeirXgVFvt7wxu9Dnz9lYnDtIj2qUH66ppqGh
CQpFOxqWVCBCizxQ+GsPXXWC6w4hd4ceH3kxLizXBo3rWWl7Up+De8e1UYKjvjWZ9K+erMGZJsqc
o1gj3qRvEHMI1qH6qZHlHGZih7FFqWYcOJ6ZydcCnYe8FtND8HmyjrqUKISjpCSaXcmyzV0THiZQ
kL34k0m0xIQPMjOO2Q4xQiTKiCMd84KDQeShatpqo79wB0xCDIhE7/DIiHd6DxUJFIeHduMHswE0
h3Ch2UfQT2+gKbHkPLnraoqgZoiogg7HIOA90h6k78Q5JjyCTkO5OZKCkpiSKCCKiKNxequL60B0
D7dWQkooaGhxadnDDr4KMdGzPgL03wdpyHUfBcAjhRSxB6Xud3Vz468tBHsgGIV03SmEO13cnJxu
Jeh4deW8Gc0XoJ4GhEIwNdhH1R1uxmoTjo3IpzneE2A4Y99mJgiBk6qQqzsYrI3EC9y3PdqodQUa
4Hy63UQCGMaMcSgwhpKEgEiHhvug7QlNFAkaqPCOdjI25xopziCIGeI1WbC4H1t3WpM13+DtekXE
cei4UyI8JzmPF+MQYSS+GTNKpafejctIEo92cKqvTzrlRMG3A0dEHSD0y0dXjlVL5cOlVlq6jDGo
YpdtSy+SiwtDiaEXEhecEMq96gc6OxT3WQRgZnma2nl7MtB79ysxR28oo1HjZZKsZaZ5GvOu2tsr
6rTdZgqWDyxlKD2Te227nNmoxxmpiX4k4831zY5ne99W+dampR7c1Kl9KVnTfY9++3Pp11r0FqDc
gxkFkkJgSSNdxLkDTmY1FZ6h/J8lB9X5aI+gjIE/f9pXy083s1ETmcjo8DD3B1LaWfH0h5srA+/4
PsVSIzk7073l8SO0mwWMd0kxa+D7o+zMMY6dDadGax8joByDLR6Q+NUolESa+MoHbbHA1DTbH60J
dyO3f5EepAGnyB7n3yE+AuFcjw5h4kTAcz2gvcPUBlXvefbHUpqSqsw6IoHAOsna6j2+an1nvmIV
0fbl8wnch5iuK3nlwZnHRShHUHuO8k3ORDsDtNXQ3fR2kkdjgocRwh1EhTshCgEKIUKIiE/n0fQ8
80uX6l0s01hZt5p4qtb8YbHj0s7mH2/XVFC0lbvZ8TxsowSJCqC8zEw8AT7PM3QvHzZ8nz0+OizU
Y4Blh1BGEkF7b6+nZblq6xhM26G51vLFWTwHsPYbGz8hCsMgnr1I8eJZbFhJhpTIuzuAjsSCk6Qx
MRSVJAAj0jYIzPQ3qlm3X3HqA+8e6fTodh2jO1+tuxntNZfgCBWicEmc3M+bp+3XpFmSSlKCkSIu
A7Pr8D5ddLpJ2Dr2IpxPRrPXEd6PV5CQm+64pvGcqM2ZtNDoC8dppNucmvY+YNTdHw1qVaPecj1m
mDgyvT7Ztq0VcfXBweMhzgznUy1qqnBBo1Uk4XHNSJIBdlESXRGjEFxEjBpCAOhA2chpcNseVkv0
+u9Sh8PM6nAcj2IYhNIyEQShH6wwxYhkCU3RPF9XjW6D/p/af9euHd/dnHf8+MQ2yYD+TAY3kzTf
z3Vf6cHJjCJnL+wxj/Hl6ZzxqUr/BvygH/dPSmYcJ9BaVVAw/DKCGBVBJ+I/vzB9Mn30xoWm/ICo
Iw0kkqWvsqY9j/bGbwXlGBR7IwoKFF/Qo/GbzZjubpqpUobpgYZg9gg1UsM05jTZAjUSUb475dr/
rVff5jfXOnL/Z73ETwB/l8cBcig0eMrhRBH2gPXR5VHNuUscSEQpjxBkBgFNObf04OXXoiHODIiS
EmmWxEDyH9JrqZyd5/xsQOMR5QO+ApIf/M6gEfZ8ln4wZT+AgiTBGFMCVesBQ6B6hgPoD0+/9QYD
jAJEPoj07vXMc8ssCgsdKZLvIGZg7Su22xMd++t3eaoFAoAiTgwsMySIrZc0r3nMfB9n3fqwRCfa
eKab9Q9ry/qLju+hPtP86TpA/nPRig7m5i7bTcaZghOsD5/IH8c+cOSRVERajCIUgLgO8f0+0MCg
QgChBICUiD9z/0QAAGaozt8xia6IJsgfV2Q79/c59GUmiDYml1cMeBC7QzEfL+Kk2no7LsANkf+v
geH73agGvZF8gT8shBBKvU/lgePcgP773yomFTQ+2+nWX6THNRUkptpIKxbYJKUChQd49sLEBGsB
clAxCQcgfegWI9X0955Q0JqUbww0Qr2txkoMaSIE1CH5DFdZUf8fXQnuf3sWOYDRFtpoxINS2N8S
QpWgj3h7+6Xp2IgBfcQW0UDjSqeIBBDMQ+YMdO2pOR8dOgB9wTxSgB7EXCnZD+CexRBiHDjUSfTl
on35noQYkMEmTo0GGp1JIsrJVcOxSoVPY32WHXKcek4kBzq5e92NStFtlNuzXaaqFTNustXbuSyV
zbzcytz3kQlahtcxbIiayzUlIRJGIdlyQtLNrq7da27SaVdpBpbKIrYS2tG21C1mtIhICwgoCkhk
qENSgWykRQZI57a9NuNz17qIk9U26yrJYlgldMLiJBEEOkJwJgjIKTJVYHBxMFkzBXMxcTJzS06e
3F5qiqaZ4XYBdUMEo56G23IAEOukD5xPmjIgE/MWEyKKTKGCQIf9ZUNICVrWBkroYenIADHyfWdn
uyNxti62WtoAT2cvCc5ZQiSsORbiTYRJ2lNESdZrIdNlpNpWrcJ+eu09qZ/E+R2CepSIVH6wclJm
98hyoJg9pcVl8nSx3Mx1RwmRAKCYP/VMOigJmRE9af0WHWe/gBrgEwmPaHp9H7VR84g/Ej3xQShQ
pQXFPmokPzRQCSIGOEjYPuo+elY/EZJg3ymkADOmhLzbdMa/Oz+/DoolUTcYsNTnEYlxkUYcWsxd
qIBE6Dc2Kyy28mGBb0rJoEVESenL3yXK3C2toKait1JjrSTEwibRmYSaAOB2cVQ0QiYEKsEgDFiU
V9CnyPedsNIaIK8gTyAviDcRSYPN7x2fos/2DYeuT0nwzY+82HUl54GCTBe1uLvxwGCDzUmVCdRB
2sek/DVTQKQLGZClGUzsUSTGh48tCWVCR3i2rLpHsL4FNPwnjrRtF4HgSHdXLW715lq2xUfRq20P
sPCJ4h6soHgAHobQ7XC/JwPEFVP9s3ckDX2AFAtFIS6pLuuqjFGtfbrg2rJsm2kqktGjapLJaqUq
YjT36/A7kGqAAppFSIClRBKGcwgByUmu/cQ8FO3ZMxBCHlMnNQ1p82OFLyTkmEoaNVXhthJUzX/a
z8IxgqAcH6JPAyOMoDbDfB+PSQ2F7+5e5OKCRPZAg0OEoT78ZALgCj5ogjgzA5pVAcMKA4XuEMG4
GUE0UbQPWH2kHGeSLWgoRJHZQ514h9N40ypB/AkjzNb7gqbqAXO040O2GtEYEMvbH13kfXIGHL2T
9jgrhHz0Xb5QMF+jyLT2BjJCqqCcnWLqb6MMCcsikCIYNgvC975XgCQfJjeANnnFqqUSk2nfRvbU
jDtQX6CELIGkpoTgkOkJvoH7dIROsXFApTMcQMjPowfikWRnRg1JUyL8+PkY7xWdWpIf0W+zw0o5
gQO/Skn0kdBkIinOuTAjY5Oo+UmBykPzRH/Fx4UTDbOsV9flrikkpTleN/8PIZppLMICR5P53iyX
QVrhLPYU+2SSqOPp3t64AD6wOA9lycGL6o8PbXV6Q5SgP+XpD16EhWDgsbzxClPxqKT3SnBt9Woq
PzGBKwpJ5/z+fcv8e5zEXIA/MgHb833gaHu492bouFDRCkEEUiKgFSNGDsXWi1rbCSFKaudjDNqa
Aa1IxisEQkYy3TqtzVRoyG21FWKZZUi0CRItIFKpMi0NKNI2jWMkxbIVG2WBgipEikw6Lr3bEa+n
YcO3090IRFE76UaAegep0RoZ6O9O9PORMVQNMCERH8eGdMvb+4wQf3mz8/GfmhMFKpT8rJhKP6JH
JAopaD/Mf7jF0e79Cfa++H7vVLqFWxT1U/hhi8gA0pbKoC0Q9kUBbgIOCHa+UOzlOZXhBMBou+9J
61FNJhhFG26sgHz9gb8pLK5I47NhhQaS7KXuHE0SEIfdg4f48N01RYYlO+mJQ6f78cVumAo4MdhM
EBxE7h71NgRMSVJyRlV1A2zszCm4YbIQgJn4l7EANb1xoJqfRGe8N5wt9u6eUOYshQNy8h8UsP7I
nC9FdGPbSSBmKSJCCRiWbX/d+7JcD9sGon9mB3Rp/rIe6dRU+tyuxaCiGIPt5O2YMGeePf5Z6/k9
3gip5UwZGQ98CokgEDCxoBmQGGUf5ahjANgAZQsHoxTmnThJmHM7K2/t7u0XiomRd/a8sAFCZfuH
PvNSHKG9UlwaIWIr6aCGFt+mPflw2VF/PBTx/BR/tmnqaOERB+cjjWvf6PiIEhEk57a5xnQXxwYU
L6/BFT83zGNCGgFHWpWUhD3/2nfLLDoXv2NH7NmUWYbQhVO5AWZkIUiTxoDcE4VMoNEIJZgX0BcR
0oUEoC5KN+H3zYAuAgEWw3r63okJVUJoNsL6ZCQOAYVIpWgAP+jGFA+5A7LRAPSdvh/Ah9fcbbSV
TX01x+bl1rb/mt+djeorKmTkC2bcaOM2nf/Zj8sHVJTu/L7CIpq/wKr3rgyeAeghSRH1vBbh+OSt
dR67RQcnccF8PlutIca971A7iECgG+MYIGSDRRMgWMkPEZMhypv7+3F4YvSctb5mJgmZwoO+WRja
alho0yZbO0FrCmDo9yUhtEGTQoTUK3lsYG+c7ZTSNuglG+caJnNxuiNQDWqBJC4jI17yOTjmhAku
MB4Hg4MR6HBvprcEKVRfYDvsxoDJEUKODudBoQuA7FLIw8VvZJXZSr9e25qeDE6SWVBqeP6rvAi8
EjJmLJrC6iUvp6FggUvBOgU9iPHSADMWmUoQiICxjAKaDEgURaS7tpApZjsZq2oACKHdVDihFjgJ
iYAJTmjDojSvVOhHiJxm393knBIZGGNYVBSjN5Tkj+bF24UDptwlJpxMZZCEDopaNpqTBjZp1uXT
jKXFC6ileMOCMDCJShzJxE5yVwCNTSrVFIJoZMxMwKaigrSLjNYKBqKkYJcUUgjTKNCMSXVNGgcU
RUYwTRi3m3JCQs4ji5E1gz/t6roF6vTJFeWE4dkoV6BSlOjZMLR2szL40KkAIdRvNXUXEFNynAFu
lOWMQojBjBLMOu/PG6qlKg6bjTGETW1tsZtagk8Ml6u08DDthzvh4GbtvW0uhW9WSF5THDgZFm02
nPZTp6iHu7LkDzJOunB/815zQQYZpNSqRESlCLiKA8c8LH+aCImsBOQaGm2aOkmgfM5D+s4Mp/fq
qjQFFBqOuxbQNljVEKd7N62XkoVduuZ6KGMRD4WdCp2h8OixnHqbKmSoVkokDlqDIb0EPLPHaQ1R
syJzG1QBtIwVbREvN05zahOAjB5GOVrVcGnIijeDES69+OsTd2YSoDnGlLSl7ps1rScx228z8Wfp
fAnjv6TTwn3npDK+BOKDFjoZduQaQTjUWkGDJtzlBRPumqK2ZTL4aHJj8L/HdS8scLTHQJnNXNQw
bBQoqYqOlKZxrNWdMX2b9OfhuJsTyhNm2eYh5vSJ0yS4FD1MGpSnfOhTbBye5uYjAARjBOBkk1kv
EYgcIuXIKpGFMQYkspRI8ElwoheHeMK5iRyKhKAJHrCHEIuaQwCkeTNYZOcmNal+F2MhJIeaFEho
3h1jaLWt5BKKx5PeIiYTMWaJZg5YoMEYKcL9yUjJgbLZ0U6LbJ7DlBTN518LzGbwykoNg+N2NSoM
DJiabnblGVdqTPgA6euPEpoevVCZUOalWTUdSvT8yZFIHbeVdL5iMxPB114Nmt8kap0u01xdRZO4
7cXpQgprIXFgdNToDDOLNvly5zSGpaWDQMT5QP67cGSJy75go7iyvgGNx7nZikNg7AdENEin5v0f
W/lwPANTUk2WyUEIGDc5OiJ3ljlg2hSaGpEwPJU57Hzh4aiGybEOPW7H789ZwMxsnzg0p/26wOgd
SafEkMMMUMWbuVeelo8gTdM8CKOR5MXzYN2OESP/YYQ9N9/g9COhYuRGZ8pg4NFUGIYMAwEKRKhJ
AgMnMcmt9d/w++Mh1CD9aztqJ/dsdnVBmFWiVFFVay+QraSH/yKOnurNVgwbBwxV2qp74/HN+Rm8
vLOpGZrnBkSUAKEb12Z3Po7xB0ZOOLyW+brWUwlJXaGcRkZrzKRZSi4P+CKm7lHU12po4RrrRMFt
CQ8qf8Y3+9euaMwab2yzFS6klc1NiHNTGV3LPK/4NmBFl4iThiQGuHdtoSjvfVnJug4WVC2T1i7D
iOLZdk5cAhRpRUZDBpBM2IH/KgHhxD1PFOukTOiMCEoN49M6WFYo0aCQloaQqT60XX696J36bOOS
DCeINyaq7McMJwlDion35OLoFgbH4Q9Ok1iFNyWt5IBLzZMJYqlCuN/4/4XZ4/0f0R72jfBsIDUo
pD+XjAC/b4IEEAa1pJDzFPaK1uCpD6B9FXrg9ZnXTfROfGc2zy6A+NFBCRZE++gg0UgxIcedCdEv
8B77Cz46dE5zDeBxwiI0KyBWTwE7xwhCBPASlMlMQoVxCyeBOxYhxSJyjjRFGDlpykLYyY5GQY8G
zbErCBHWTCGC3icSVsVDjOJGyFWVKkJCVrACVDPGHNAahEE4VCxoUOGLmuZUhYDEUxSkixjFJC2q
1hRNJREusLRyceTmVkAmP+j5+od8KgVk7kWQwMViSS4qkw0RLDprHSuMmjQzgYATbRIlC0oRm2TH
5lR3TNRg0BEKoOJDIIO5GQgMoUEImUEwEUYYBUsj/qzWIdQyNWkLPMoTmJIdqxiP65jQw1K2qvzK
1rmttURWgkqqZtcuySorJY21GrFQtMhM6wMyMCGIAklKVFkFiQKSIcLexfmUQMeRjhDH0YHlgmKL
EV5QqYzLEbTSDFJAx3d/lOMSS7sD6mLwyS8r+5bv8Tn+6z/e/YW0v3AjE8QJ/ZnHVkAkmBGrjJmN
YxqDc+0TtSoTQkeVMH94iso7zwLCz7Sg7UOxTxoszpUjJxQh7zVO+BnwxaMkxkq0zYbOSsRHYl2V
SokszIOu1yzqvmvXlRIc0lMc9mcFwsUaD49hYF+uzieTKo1ISuqcGKlC0Mmoi0EhPw/2VOQD57IM
71Xv9okEgUsCSe2U8zqPz/r10MwMitjZ0biCnCAqe5zF9Oro0RA1GbDbRaajapom1pmsWLUWjZlG
1bKUYtVYhKNKVY1TUybY2LLVS0yylqSLRYtKWS2otTaTDFLu1020zbYqaCMpUqEyCDDIqmIJhDK0
kGKOAYFqHgHqntMCZTz+q5bLoqXKu5QQzEuBcscKoxJFPt/BvqohMCpQAOPkoVbCGArhxIYSoIYi
KBjRI0m4BnG45AXKqDDuiAW+xTAHF4/JK0tFKJoU6Z2lNEioPaUQJHBiOuI6D+tE9P+x6CYQJiSJ
0zEIkczmTSDslL/idO+xhbyrkmpDMxApN9Z/tlFxhQIhiCkiCyKa69H/f3eR5+GZaf+atDAekLH9
nqphZNYWFsdEH2dlklH9XsP5Qwh9M1dVRpRSIAHgBYcJR+0M/4sNBT1I8DtxuwXCvLAxkiaEKiU3
viSfgH6xRSBTj1KVtA8n5OsvwZ9s+oPKyUbYpS5IlGKumzjBC5NKCo43P/aAe8Q+8Izoo8X+N9nV
8TZicCnyI739P8F2++VN71ixKe2+2Aoz7GQCBkAJy03LCBNmXH79K7ZgecAbzwS5FCZINbjuAu7p
qmkViUB0hCabJzZVA206WVQTaBSAfzyGDsYBhiWRqNzrP6zuQ7CQhHxZeNLT/HMVhk2sYMkCGiFZ
S7TBbsa4igzD4YaCLo1LUVKW+fPNR3JrtybjpaVtOtLhrW2ttaiA1BEqPCSybbRnyu5rZSlLO7Vj
oWNYbyGSCjqRQiEMkMgYmLW2LVNNZNslkq1JVXcee1XmtXzK1eWsW0RZMbFYNrJrVnXW1yiVmwDt
mEQOjDC2mHMEYlYkMlQUiDK9KMjprcClNSZbGk1pKNYW8udR5IghKMkwZRRVRNJOkMmGKXgVkSbD
tIJEqMdaMcxiDTUglBxJAyToTjDKM/35SQoMhAImTobaVrLEFqjRqWnTsNYVTUq6lJYtZWMLATg5
FRDgckOAoIaFlCIgqykCQThjGKaDRxoDZkKLCEqrcRQsxAEfepBqZJkDKYpLByztABQGyCsqAGlE
xdlIVaHVRxpYUpiDlpRS1VukEWUACOIopSJValAlirCh/qAQOnbR32PX9zs/efNnkp9Di1jmU1pr
glYqrS1+PzyigMTCxR+Q+Sf7P38H28OENIeRCRfZJ1973+Hl6ihQ8yeJA+EQkYRLH0xkXCMqLbsB
9+KSDbkxvfv9xCFfJfD9ySfgwaZzkkUg9d6rhvZwR5xhN130m5/RyIgIQlE9kC7/X8tAY03RTYHc
gaDeIrpSv7m9wGC/1ccBcQCiEFoZmOp1orNOhLA+6OATiHi3wMjMCiqUxikIMmQjiheoszAtcgmm
KBdAzFNYKXLgN58Kg/4VJqPDuwyU/5F3dcHvCA3gHtAC8yAgZIuGYZD8Ie/Rj278XhHAooXkWEco
JJPWIomKiVGUGOa7UULULuLnir9ZHYB0gAgmEaNNIjlFIDeHmOY3BzDjHMDWA5hsR7AztkoMcYqV
tfHA3BF0N+YauAZNQQztCSESis8nk75bLH4ZjymOdt9Jooohn0JOdsNkkNeDDid9Bevjg+BNTgok
UuJS605YUb45ipw6IiSlRhDllYwVQRkGWLWljC0bakPDk9b5MQ18GIqShELDiDV1IPFSFSs3VPx+
UqCICYSLYMTCEMCQcVDpJ1ha6dta6iNfZojG/BXSNGvtGZSrKxphFmam0iMT1PWUxkMSEcTtw4Gr
uY4XUJgSi76xq8cutLDz0yPkzjgyZj3tpOMKkOzGdFIiQoIdXaFSiEREK08i2BrSi07GZZDx/4/7
epw7kDoPEgFELkqUwDDEEKKGCUX6U9w/lnSgbEgbiyMrI5VEicEvrIdAbDgYSCZCYiODJIRgnR71
A/BSAOQgb8oesmwOzsopiSof95BMFDQUiD98rEIPVAFd9wT9G+UlqJpZSzTVWZVZNMomgZKppWtR
ksYKUjW2lZpTZapVUzLYiRQD7aRB4wLCCBjmqUg6qr0AB4zcZmr8SwYmmkyTQeOssiZIW0IANgyU
lsAJWqoCFsFMy5XZ1105cjUptlXcTRjFYRgmMoeuDUVv/o0ojvrhVU1gByr3oPsZJlUF9kAj+aTJ
B/og2FToV44AuSg0gmSATKoGJJLUSofM4H3hIPL7pPhdAiX9dUDBoT+PrdTlQCodIMRkiJ/LwiJI
7fB8Iy4YH9qgONsSEa+l4UDY6ETURDjeTFE5IvIJxmT2MGhRGHXWL1JYoydux2rpOrS8o0pdOIZG
IIZD0cIpD/N/S3fy0t+GwL7PKjoF/XTRP4aKkkn8k/qgFgeAJ+jvUU95D3hiK/ZC/NCIGoFKIgAi
QBA+UAkRD2tgNWp7yAh8aqCP7vq2kyH7lNkPwHp0IQ/GiLbBFNbIg6akmZhRMWlGUvWWumqFWaEB
mglCYEbfYf1J6M3D+vfgQIBOmz9wQUMACj2InBxcWE5xE3o2kgSqQKfuj/8MnUhuk6bJOhjRDgI7
7HfAohUShhqhhVyinXm7JSi9EVIVB/OHlK0hBEz+E0i9cOtexpqqz8vVctLS910Ikm5uzbd3VYzd
NdZqFp3dkhEiryy1ArMhDJJRvlA/SwwcJECgKHbrigPb6oyIv2RgCD1gQ1iutFBIq370n5CD9RWF
Qseg/tniECARIoh4u0gHzUzQME5W+a9VstFnrzY8YfpC1QMSxNMfNcmkBMBipCZaU3pkOuiRBY6E
CPlA5CA0iulYc1rv5OPVA4m8mKMynAPAIK/rhSgBxTkXCGaKCBVWomkpCSmEfEQU2NAgDS7DCiOV
xiimQ0oq/M8e1D5QG7x3UB+k12JmAIQiAggohQ+glUR/A7591/L45fJ3CdxByrf5YADBl29Voc0k
NqHdeYwaJJ4xSTIkhI9aMGI59CGgSP9790cb3CNpORKU3NE06GuHJIEkgzXaz6p9X8x/s/uxwkjs
blHQtPiwYS0O42X1SmwUB3/RiZMewjYHRJQfMQu+g3nQwQSxhGA2GY6hUyRm8whLCI+z6w2P6yRD
6N8I7PtwMtPlF1vVxL/UaYy4PN3FZe48z67VMTimtNzWYS3paoophhJ6yfz4qoqndsfw+pJcTynW
ZMG+ezeSTEkcKGEIMRqkJFvHaIArzEIYV0JO0TsobjnIy+Qk+dfxmGjsmADMHcuv2H0OQj6HwUPm
iiDJBRYPvQ7PV50jf7OMmtnzn5r+qwrttxKsvQdQ5SiTgMOIAQzqD11bVmhx6x0dWxAtkK8ZZCtJ
YjdqAxmsOpDru877E7OB8HwxZkO0l1MJhNFhcLpMlspDDq7W3W397TZPxsoJmHpOzk9AAdzLcgVP
uzS0pUGjDA3GNTiCkcxBECgpfJGJTlSsC5szdqACNLAnoPMJIWHmeryXxsBkgUZCZzC0uZExW2WW
pMWqyYUqWIxsClVqpqBc2tWLUKCU2IFZgVMbamG6WFlOrDTk4coX2IdIJ1ZUFHjDM2MmAwhaK6Fw
mqsVWIQlaR4BFgNETKiaYEiyGGHy8sxmY9jMGd3NAdKPN+x7And60FpeFCtpgci1qS3JZNQ1KQJ7
kCgWPEPSYT83BK10qUedOoN4SMJ+6NCo7iQAe6ER6eCfzHjnPPcopSFd00obczsu0lHKuV3NdZ3R
FzF7vjtoodosjJaiSsIQPwBU4fcvwC8LA9mADPeeRugPRNJXdZBQTQFBHw7q8Hw7SZ9mh1oxciAw
gesG9aL13TNFUlxnF8W98Z09x+ZSvCIjMgykoBFoTmFMlFhVIRV2RgHDI1VIf8sHAnykxW423dKN
eNCbwK7ac5KKin4MjQ4HBI1qDV3AprrUPtiVnlRAVNqgWA8x1QEMHRDyKgANNUkFAbM8REEIBul3
h7vE1t/Ch/7FTinls9ofPYboTRQRfDEz7cDE1UBhqpnNny+RPd0wKxXIFiFpZX+N42VPBgrIQggs
I+maPTvlRFHcfvAFPmEQfmBBDf0fmv6bKsmpFofv7KqZlSP1+yhPpK9Nrpp3FdIfTMGIoDmGGuUD
gJikhghmBlpX7IGyBBkH5QnGH1xHEwRxbwsMlUz6REw/g1UoOG9omQJJuuIBkgH9ASHMIGmBpoSu
V/06DDR7zc5NI+mA4UwN/e5PYj98ovzEp7MCVVJqIYlXFY4Q+UCf0R/4wvgog7ofc9p8X7hq2ror
W/t/+GOclGrG+NbakyXumtAYhBEESyTFCWQUZMxoKTM1h+jCePmtIohx25sQxKDD2e10H88O3BkU
ncqvn9pn9J9YVfnsYPx521oPK+R7mPFkwKFTGcI0SA/FzIzMf5T92RCDLoGSXEkVfmnF6yvENFo/
Lq1s+iq50V7Su/LDNwOrNzDrObq3+Q15qtsV7iDBhl4iKRZ81hZuqcZWcR51tbDKIjLtivg8juUN
GcSNf7DBA6NOGllxMkoNURmRidUaGJmVj5rDFlaRL2p/2i2ahYUQOI0s8BuMgni3qEpBKolYVCKq
wRoUapLIwtLGbFR6oQOZNSwbui81eHEbWojgxi+XxtpcJ1U1r89a4zO5IXv1QHqIS8ihmhRqYnDg
aDjIM5UdTJdXzpVFQ0wtPM5LxYjUORFAojKhQJ94nYnsvldJUA9i97E8kzD7WLJvLq4LEBvMWfbi
9qWORDa1S2F7uhqFuMWrF2dhf+iIxwzssandEGOX5z0Tzh40sF0Sq8VJaLhDRnwYM2V/UkzABpwZ
LxkoxACiOiFJhR8kYQYgjEFx7pDV53qlcEER1x4lIMCACgYj1RMRgZ6snh0f/1TUVJb0VCLm6ACE
ArMoxVKKp5dpxLYffKP/T0LRK7rOO9DQhGU9Cvz1VYjFJr+llSlbUuUrebpq2sWy4tL2ZmW8XVE2
qsdf6c6BkTjOw8g7nhAwgco5GcYCJ2EQ3IKiWHrvDhCRAkhRg2XFt3ANL0OeEg2Yi4XEA8Rl3SV0
sqm/TF4CAwkNiLhCE25aKBhD5Hn6x/2kPU90AvzYVman6fV9stcCet3MuNY2aVLkJUqnWYYP2scM
dlIHgJ2B1AfIDlQrIFaVwLBvlwrPuurL0miNZx4CNm1Ty9hTrHuKiCDPE53Iw1+DWqmEFEQogOLs
kkrlCHVEFEyIwP8f+bDURwWFBu0Uz9OH+zboAu5bTjHmnQ6J+SIfDw0S0gmvydtfFyzZIGJJHWIH
/P/cO2zW5wOYGUcZNPUxHQSvtt8cyqrdqsPpDdxCFjbV1vZPyebGRMj/6fYH/Hl+9xJ2T5SCU+cR
pSkLQoH2Ywp7P3982sPHy8ZB2TJPI9YSGOT9akSMq1mor792Sz6NiRYSu7L2guxHyKDv9O7K1FKW
KmBCA8rimcrFIxdmjnDNYtzx0MKjsy5TjcK1NtqM+kzFIhsH0LfBJ0mpNY5yovAt0CrVYN4DhK64
FQYdTMs5GyxWKjOlhzOLnnEhcYThKJV4dYcYXSOniSn59OO5yzvH7I+qLrWPK45qEhJQ3SfCQ8uk
mF0W/wM/5YH/qunZaxAY+IgZnQdRnsPu9Ro4P0c2fY5EaKpL3lVZEUX0IA/WeKD4IG0/3b0jVugf
aHlSHnAA6zgvv9amhsOw1sHuBmJBghgAAKyhRhEx+8GAPGA7pcukOZhSgZAIFSMAnw7Nv5f8ccjj
m4Zn33tBB+ogfeUMYewiPQ5oi48uNegIemdnao7wD/AIqCVEWiIJu+hjvhw0M7w+PT3OBxLIaWU0
3EtQfFBQeOpKr77a39zp3GjP8U4HnsmO/1f8X+zrpWdte2blxI4qIFNZkmA7DYwTNaikgSN7gP5x
H/CB9+cok8jmbQx5UV/ZDHApFU1SN3yRH+HAm0zq6u/wqwbP7nODcSrZwOHC5UbMniwJdTtUTkq0
OFWWYQ+zwzpX5gaAvUmkoiXY92MK2lyohOHfP5L5pNfo30Jxxh8ve7ULJlz06hOAd87JZBcAi8d+
HP+WI1/v9fIucEY/NUkZ5RX5UzWIqCdE9XU9eoGbCUo64sMIz25z8jn1EBsb3dmR2IdjLUlvUAiC
YMgGZh690luhey9Mp2uRlFH4IRipJ5OkofGJcLVHX3cAfhxxkFQ6dT/Wsfp+QRyHOdEHH1iNniPt
pscnWBDBNwpomnnCksuZ5milmYJIh7ku+PbG8XondHyQQGD21r/BPzESfRM4n0Sm4VVFIFiEf7+9
6JyOU8HzFpPhh7WB19eHCBAMQdwLG0dJ956z29OMKp/t97sEkcmqJvFC/eL5TBaNaf2oo+lwWI+A
gNfglX+WnxPzynQsOrVOQTObbdP7M9wTkA9TC/1y0eUIX3ezIxX6JBqFaa+TyIwIw3E7KH2i9JzX
8mKKS4SM5FbWXVvTXNklyo1WEhWAoZxtWye3f+1yLqpFT3XtVzE++iCHOIxH+aI3gtTCAwM98M0E
/i9pamo7oIF3XKv0cDSL7QAdFVNDqAgQCbUEn1VQfSG5qDyZAy4N8dTax6KR7LxJQH8YvvNRoHRk
j1I+ruq06/fQG8UAd887ajSTWggOiNqOF7M90/6VYZiRc6PbHERE1rAmbHLF9v2l78yWhgxKwgzF
oyBaGHTLzydgFYIN3YARpPJUKjt50SRSg4WimB2NR4vMI0ZtuYqVnNVypmeNRlEr7G27Lqm6hUCo
z3boXexGbhuhljFR+/zosUiiwopHlBYHD9e/wgcCqIfJ3ZDSGfD9H+ud48+sc/RXKKLN0xrE5xmK
tRJBUJQjGHF6vl8PQ9CSEkV/Zf3xTu2Y7KUieBfrgtKWHLfF/HC2ZOvhfWy811ZnV7f7qr+Dj6tk
rGXURt3Y9I2e9MJP/EAvQ9SAUkVIESxMoIn1+L9MfOCPpAkwnzwof5JUHxYpEkhQdQgOQLEICp3w
PjCIRAZI7QLmB9mGkFb1R/kErXqtf5Yn0dWrY4r+c0x4Q4ByCgODYgV1QoRXFFDOWrxvOUFVMeiQ
RJjuqoNeZA/YrsJSFEBhWAcxaj40fbc1Cpg0NJClDEjcgEzWy4SJCd7vMFhlRPZvuTII+p0R/BRQ
K4h+9RiMWFfocWsjLltsxJaNyBjsB5pzDiBnpQfXZzm+ADRgX5QsdKQGeGKAam8zBNyHeTTCh2Np
rkJCg4zAj2Rh/Rn+DrnEdE8cI7AmRC+4qwYKgqqfe0ts/KmH9gnuu/sclhNDD81YLUxE/FAuB6dN
LMQaiqxrRGD4tHulnl0bSQGwSgpLWkllPvcTuJNigg0Q+LHOznww3NUsVICFBd2KY9wUJohMIQGI
Ak1wEZoBWBhEWkHWi7v8OBoXeTkKavw1odVfkuTjPAk+jRzzsIQkIlC0KEQSwAnpCZFDkoZ/csHU
gUQkw7MoB0CCYUjjsbyYDtINBZKChXsYZDSMDphxCSnKcEgOdcbDuEmqZDeKApaTTtJndbwIbjEU
UsCLxLvUigbcGCeRIZZGQBiHHm7c6l+abGEiwZD4dVltEGW+5A6TipDjVE5aKDFNSqqWHKbDLbay
sBQtLJy0Tn/AWaDEFgqdUsSyhXnRNisgEqxCvHnCxHFphsiAEQW6iCwBzS0S22hYKoIijkaUAIzU
rbGspSSQV+d2rJyljEBYPLeQbWsET1YG1g9aTTQWIwkgwkgsYiUKlBQGmb0cLvAw2IxgiipIQGmq
ARgO7ujl0dgJwSQfRF1CyNm0QrF/hszxric4vWWOvYKfUIYDzGBeNdQTWQmI9FCnJSpbHpd4hEKM
ZKoDFlFxc4c5XAqqEOinpp6wB5fDBQ9pqLAB5YBlQC5KJQoZPSDW5BgJSiOzQYd350DepSs6T0Qo
kdVtWrzriYOOqlqC67UGlKl5S/V5c+FOg6CIH3tZgHqsgrpSyLtOrIMhN6IQhcXJTDWjFX75hoMU
rgEoxwru24nRD39ToBYexhO/CHFUW4owU1bmiaHdwaWMKgNpUZVJKLGME5ZvqicVZOPCaqqUsVtr
QspZSxMmcuMvw5tr3473KfO3RJmUEVYOpbiIMzIQUgYsTQ9sYm45BIQUWIcbRPL7yRBmJME3EjmG
dwYFkWJ6e66HViQorb4RwZO5eRKNwiTlB4ujLpKJQkI8zJxQecQ6VsimaWU7oRcMc1gtKIIeMFKj
obFJczVMJwObRbkaA2Eggz3WWL0KoRDjFTy0wBaJRVEqUi8k0v1wPZNZoBCR559NUVQd0ojQUoWY
GEFIOIiAcIDIjFWCD+DJ085FLYc7DPgV0FBDjskw7h0Re7e8uKHZBtccSEzWGNwyGoWWQobqmZwh
bZ4EDJkmaOQiMBfex1DwhQ4UigKBDyRyOkhkjbGOBTKOJBSmQ0oKVgVJ4SBmcGpy2Tg+GHIkDiOS
GBIUgZDkmShEsQm0W+3LTw1ky2kp0BZk64dYLyyBCshD7HYGcX4YlpzNKHBiIB4nkeUejrclNwlP
IZANIw44gL1E0qmRECkyIHVdkwcSRl0GI7koaGA1KMbDGKe5XHEISTvAxF7iVSpNJjKVBBQBSosy
1ImtQUVflVzGK6s2rtEK2kCagW4oMN7Qe2xMFh+QwezzTvlyCij9f1SjFdaqLSi2j/30p1aF2OaD
30cI8ZadUzPy+Mbrrxx1qDayAjf2Ys1yEgAvyGlFhoZhidavQ/G6EOQJ7SfPBeeoAm8+1fmOUm7U
hZ/Fh4+n7zXEWY+HXGj6IzPkWbRx5fOWa+dEf9kR7AfLyosPXGvY5lsHEkCo/75crz0jiPzViCQz
RQtXF+UmCk2S+tA/xRMGlk4HCdEwxWcoBsSO6LSZJ0orCdtWyHbf3tToQko2WqCq6uXh9PL9cOQv
uOnI/lqAKPo2NoiJpE48U0vkPmAKUAAw0ITJTMHUMN0dzvGZrLkwCvX35z81FIfiYJXGKhQBqxKv
sPEynRuob9+nCioFSKbFyMhyBL7NnWkpdTXbzHn6eU6ac61HnCubWaxS6o2AkdKDqECLKlVxXmCB
K6lhQb+mDTnMRzbR6W5kZNWcHtzzoUthUzUyquLIqmve8rwMEnDqqVKwLJRkKDIyWQ6PYJAH+8Ly
DQ5R3sUBOB1gNFQkiISIfdxdn4CvPGwDB81hLBO4f46Ac5KUMH4LQ7+BX1alUDBfuECQPRC/7KGy
M3NA6Dlc51ODhrw+34ARWY3mGGZm3RzQdQKCX3EMCD+WH76Kj6Tl3DqCeBleKQr+LCZQx+A94J/Z
6/V5tCX8AsnwBJFGhMHG0tvTQ9Ae/cflRnwCpBEKzzK1tSiR8/e7pYm6YCQObosDCMDE0JOEjp4z
iblTmodvQmEHpLRQscmnFwuyG2Xtg1xgGFJ3n6YEYwE/tKpZNq2zSsptMFVQzUzIyDKQsJIBKkwB
CMTIMAREAQSQLIBAHFufkeMBxfW701eCPDWRaRRmDi01t63uTD/jhA/kqQLBzeNC0f35jH6zvwif
7cH1T1+gikaIGJM3+IFBgPcUswfC/A8a8dU77aeBVLHz+c42hgeJ5rlg0Jg48L+bNQJ3sCexT2kQ
PbAbooRDUiJhJt7lH4Ugd3Q7aAf/GSbgwDEAes/PDyp7JRRRaYd5LevJ4SFkR9lhtGzhSzem/soI
U6mHBRDhXE/FVEjsRNAibec5caS/PyXbbcslLRAmsS9KomCMogVEVRnQngfOzqZ8/Lwd3HSbrC+x
KWwOSXrby66yFTk87ZheY/a1UEqGNpkNU3eJug5msMxh5p5rTiUpSjhQMFVBgm8VG8e+biG7k6HC
ajtmGVvbe4U3NrgTK2mwbm+UkxmgdSyjHAwdw3DhaByngPHBBVpK0Y7uaMpQCtzeEdhQUolea0Sz
ox0cOg3W7D1HogNxisUHoCztOjzYxSnlGdePOmjBWeavE6OcRXyOBurPAxFd3o07JpcxlDXAW51X
81aJuMXS0N9C6SwqW6iwuYeTAtL/OzhOumboSbU7E6bJPKKZoRFdoAMHbLjzFEE0uecYswRBlREM
SUiDsacDYIDwVdblpzvzTNbxXkzMXRny6jCDE4UWihrSmJYwvM0KJYXLaIsxTkHZIi1AJBDP6v68
qTj/CqgkSDIJDTjgwKhYQPt1oyEOHi0hYwnb63zXzJ+LddFLKz+HwccCCAgok4OJL6tg+rQphAjK
k0oi7iyObAREr7TBMKtGSUvG8YkeWIKmOUAw+mhN4JkFcVGBypsfz/trxmH+OB9MMExuk1ohMCEZ
Ij937LyAbhgNJ+6x5dcH6xKFfKaTQLb8MmHyx3eng6aBnPuQs80FR/Dt57272MIboWQbQnE09vxP
QTkaz7Wdq+l2Qh0hAmGwAPai9wnm+YhiZAROpNiB0RuAaIQgE+lXuDTOAykXslBO+7HBVJg+KDpH
+SHwJ/m2MliakO1KTBhMEZExdsfxqZiUm0qEp9GhyvYYom3gSS/I6ACKey+uo/FPsP7ac3uyJfdj
+jBk0GfD3Gj/hlc7KvRr8DQPQIEhICdUV8s0CyA/bsC2k0CDSKGoVbMxlVwzDaFA23XBN0IViFWR
kCIoGjeY2CcHUBtAupUiRDRJSOJKUiiGSFAZCbkjGsXJTQaxCtSkQG8htOzAVsw5IsiDpMXRdFAL
cuK/Qa4HPEGdPp67LmkYR/n9CK+3rDBHy1zPyZ+TKGqIe8x51IX+GKfeuIHYlAiUfYSgRIImyjKi
BCAH7qABrB+MNFAVwEZ8FPtWzoc4FAhTlmYTAuWdFRQ6wykEOCtkRoxZ2X7/hVP57xGD9Q+IzucI
aj2kZ9uIwjkSe7xX2BH6LQagG15kcULNWorUnaMjAi9xDUt2fqpu7sMDAgZIUZSS+Bx8VpfGKPM4
hDtb5dLlft89GmunDz0bUdiDsuEmr8mkNDH1S0ZTTBM2wecGRj9z23/BM9NQSqh12W1VjIVYYfwT
1AeMUVD0iwFT0a+1A6e3BDT6z9gq/4REQkVEpFCploQPenTnSPBKqfr1F3KJ7zcFS/1QIUKCp4KO
51SIqpOqSxAjHfYfIrSz8Us+b5hMMbV/pt/HeHW97w2DXIONoNClYxkLgysrDCPfVOhGZ2P8H5Me
fDoe4WM4Y1ChWFw3SUmDx5b+NHpsbS39l0V/Q0eiQ6rHCd40URhvG8Pg5VV6EqraUXrXeV5r5QiU
Vera1XgVHGqqjOBCWqopzmXLwC2OGihR8tT4zr6CJ1u4bDo+4clMDTt1fnovpsfPU2l3VSGdkEeZ
6rKchKhLSKpUgD0P8HtQEU2IRWYQRJhRYWFDiQckhkFUtjwilkhoKiUnJBwgUpmpCA2AX7FEBwJz
OWB5BG4fg1ofQHlAX9MABbEiiPPKCL84gfqiIuFVPaHxdgMUzlPZzPjIekofl2ewwMT7FVQ8fyJM
VRSEjEwHAhtFTriE5CeJj4nkNUw0UQbJ2KryPYZh3tcvR1/L6DAqAsgJIAKJIIghGIc/TxQ8Q2Mo
miU0pGCiLqP8+BiZK4SiJEwSAacCUdc0Zy6SDstcM0d12FQzSlJkLRrRUy3b67y2bW1hJRUYvbtz
CS2r8LQKmYREQLxmSpkgjTSyD39OLt1rEHQP1ukBI1PnA88bQaXemLsUhkhKEh0aoipxwHxXgdR2
Cfx9bvFuAVLYMrIg5tjChbVlpVbQjKUpUuS2uIpiNqXbtq5CWjKV+khgmhYmhYRh44GMAzkzis0k
BEPJGawwwwFHD1AHxU0gayAYKClcwAjWyA6KIgnH4us81twA5o4+6pf6cOonUDh4zbJUCezBk+Hd
QUm5GS6D6m1htGKGMxEJY7oeUoimnVwezicCl4SMP1wUKD0ua0mx5bFuUOAe/19EyaS42SlNgoxP
ewpwHkY0yYACUGBWqlFlY7YcTUaWTTrAWJFNTlkgxLMFMSSwBiGZWEqWlktsrJBYBLVTC1gLjGoq
koWmxwRwWEkYIJcfCu/Y6FvjsHij8OH5dAxE4/dLeeLxi7MMmoR/J/V99PWDCCyXAhWta/01WnU4
iCIOT+ZTDpunzjJrR2j0O9UzXQvh5bZLK3VZLdr1QIBf+K9yDPIsjHIeVqrnO2oU/3QttjVRqrJz
64eGg5O5hIav6XNMiG12fFeZ6mW+1F+p6bJjx1jgs+k1cnRm7kXsW28B/+f51/zw/KpCFo5tjfZ/
kv/E4gNSy+AXlGOjPXzhYVYlu2/7LK0N08AlqKvjdzDdzITE00Wu6hxO6Edjw0eN2UIRWu+cAxUs
42HPFpsKb0tt0xaM+AoZ9bv0Fhncx5K2SeRcKXU78Xjl11qrz7HoPURi1qPJrSS1M5Y26cak9Gdm
H8GxfbmXFzYqsGBVeEUUH67B5nm3Mh89bs6tiXsLypkJyOhlOkUYO2wwI5qkqYRsK4uCnSkQ2R74
AwWDTabMXw0yxO68llfYPhpOme+pS4pZhdJQ4wVxDVr40tYLNY10bwTsnUVWRfZGdM2ayg12hCJn
QzWu1rwxZsyl8Y9cu+8dsZ444NBcVUnWtb2dvjHD041xweBRfpLilnOJFHdP6voka19BTZ9EHiCY
p3aJaba397JxdSHBbcI8a4RnjDVyiWK0xuxiYX4yuiAA0uhVCu11rXzqaC9GZi4giBWI+t+79sVF
/C2u9cSOKe7Bozcy0yhQ0Y8L772mE7YwFHrg5s8VbFo3NCpiTzEzK8uvGVFMvtmLBU815gdIVOO5
V6Ysgrkz3JwdtW/jjN0aqXWthW+6zVBnKO+sqEFpY6N4KoaKFCPCc769AAifJw7gx27/Nz6retTu
TbJ4LURGVHYtRYoOOqoy2xaO2RDHN2ABqd77E1hfLaGco8nDuXsTJYSSNx6cOKBXZifDF6+fc3vK
qlmwgN8+Ob69PyYeDy2uvDuNceW28jiYKZplcQTSnkADTqiQ6axHVC7lGykoDibNEWYYaxRhsHcS
DjhQV6lI1QyfXtjI6YUpJEgZmHhg/CL3Y0OEixbisrCmHAhTR7CQ/BUbbQk+NcWp6swJ1MwhJB3y
Phv6/nPtzqeV76nXprqe25R09rjFpMWVjHYn1fxvTV6T42n57OO3ro9y3ryelss7OhSsp0UlSesv
kOlNyd7MBjbxfMUZ6aXSMUxWjaz//ZyMxmLIMnaX/RaK8QYW3dMHfhXqVkbqtcAPqI/QQFwRLns2
xxwnogB1z1DEcPiz04kIKOmZvEW6wTkJp1NaQo8IJucJm6F9tkGsbcKGzmH5xjEa6NxwLXGIdXQk
UlE8RaJoWNjAtG4FFpM1z23772VFPLEsBO5ybOVZMHcmdLMK0uUEflopCjpJEZGcTyIiUkCUFkHD
JBCUJRtasNIQa8XzrllFfJ5ylFHfvto+L+qGFa7bR1aSfWRpze1d+VmNZadVo6n2+Hs8675XtmfN
H5Q8poaahdmWuKg37HF0m3mr5mo1eJiUJwYmUbRjDjCC20koTZh7koSXbsT602m8HK7Qg5Ilyfqm
Re35fBSyfRBOJk+6JpPGxs1UlhOGrfJF0FLzYEYiUWts3F1YTn8WX8vhio58xVQ0eLW3MJCJ1M/T
8/hYRgvHrRwQlIRwtOKdQeMhRL4GgC40vTZkb2s3NaQauDs22tGspDcq3uSVUUFIVCmlrT0KlChG
8vjLBJBoMjRS0ZS9sSYv3cdg2gURxvxk9r3fV1FK3qX87k97fXft3R629uEnXnznziheeuuKgoZM
SjMc83Mc8smcTmlG8P/V4HntO+L6QfNcorGZxbvTk6i801UjOMP0ruGGOBrgvhHTxeFp4hMmuynG
HFzWUfYJI03t1EiktUH1qL2wyU7XVHwKSnJtezI4iRXbQe9RrWFBKbtFr5Gt6UU3zd98CEO8NnNR
3nljleSPLGOZN/HQzxpdjZ6eVmkeEd/LVhV4VMrxAzN0d0SAMAmga+fRK4KilZk7AG7dKADXT50w
bZMGlKlsyYQzKR3bbd9+hdqC3W6Tjy4XsGLay/hEBbMc7/lm+5r8IsR2vlskG27CTNeQwzLTggST
bgwaN94lO+GBjecMJLjurHUbxaXNEnMghtZqHMiYjbB1A165QTsYFAneVfo5RAlAG1pTyy/nWIjX
l8Vs7eLs7FnxOCyqYtc2UKVNzDKVIDNIs2yq16k4tI0YKm98Xe6+Ze9euuvwlkADYtZpboa6Jcg1
6Deh3gHXAifx3rWo5iJdLMh1udWXnNpNsbnAyxMV/Z+LH7OdH8lsR4ksGHlDzovE0nC/12zZE0S2
VLbeBcT/V5xNwNQmxNxDjEMSUlJROA0NUqUoJexnmAnDkQllmDb6+GMJexp0+adcrLES/ggeOBMJ
Ecx3jEhQNohjgSbTTRog3mihUQv7MVPWMVCM1ZFISFI8F04RaZCSbaS60WcawLaEC2UBjKr0EvQl
KlVJXUuBlEYoUQQ4Yjwyzs40pThIw+xyVSLyxl9d6vfSR9e5yKKY8e75z7v279y3D6tvF4w3wsg6
7ucKVfhOFVyhbA4qDQRgil1a3eucl+PWFt+LNYwoPs8lzyYUJCQvMk95dwgKUHrTX1IJSYo+VfuZ
e5O6bsyGA4PD0qtDptkwvs5N+CrohvHJ9UBidjE2WDqazfM+vH753NIdsKqgkxQ79Do8iVQHBXgM
wcQnBhcF5ppgwBiKB9NABgOB8p81a+Fwtl78kJWx20Kqfl2Tzi2uU6xYNyCijupAZrN01ONjikBx
Yqs/3KBoIPDeVA15cOAJ6N/6RDvkkH0N0GOEqeeGu53T8ptC52aatuLEiBFE0dCKKeyFxWJUm+e3
YZyzMDJZJ7oYXu7heZ9jg8Htaogk7x4CQ8ssLs7YdF4PZnW8UlDHM1rKjoirLqlaqMakKplN5LSy
WVlI346o11j0nEH133KKtT3J533+nOt9qos77rlypOUobJl8Yp8Kwyk9lco52xm8bq6QxtKWz2nw
vbo6N11TWqfp2VmfDhnWM447c53xPLDFnApxly8ria56Eme3bFql5R2Rxuc6Wpx5WNvGK6p7R5Vw
mjfLPCPpwTJHnei4zNo5rSr530KFHMcExobhUxpZox88fqN9esTPvZEo4rrole2I84dQ8cfK5pU+
ZUzEihGSr2kkJQoj0Nwwc1Si9i1Nb1FCjbl1rAiLIK7BkkkplQzEVhGCgRzpCBOw6ymluv3AZHUw
24qxtMWIEAfcbGKh7ng9gagaApKUhcDMiuDbYiDZB8BgM81evqIqJlgiKyDqcCzReJBA6dlEqc0o
QO8e9To6Cv5jz/NI8ROsDsAX5UMA/KGhaSDB9yL6k8oihKCQCVmQmWEaGoUiQEJU/4wUCOLI0QEJ
KIGFYuxwOONWH8vw9T9iZ5dZJqTrXhC/InWazhdpsfEfYUBg+rOWKdj6JRCfFVkC7tvXDqHJ5hFb
PlX+PodmpJM1VU1RdU1V94w+eLPmk/xbpe1bZFrajWi2tt9ZA00JOPIAB7liJ24MBF3jSTI5/tLd
oPzgAfJFX3ej07e4yrpmZxJUEulslaUoAZJ0haPTpwSSpAqSEKL0vPZunBztxYENQAm7s3dNjsJI
ak7L1i4QxZWPrZpCgMZzp1YlJERcKCKvNtXDQVJtIdjJPLHQaHEoiIZAAq2hKwLa2kAyoZhAVQES
Btap0AZBUGBiFJZIigOuE8PWYiKdEIG02MIDraUqpQlCTuS42MpIE6Q0rgbNUEJ7OnYpP0+R3MDx
lUOCJA2lK6XRLuyZr7WjFuMEomCBYO3s4tXkZOCGKHDzBPolTjDLqh3yjzmtS5ac0nAEmrtDcsoy
4MbZIBJI5xjATxoP6b/qs/l/u1t2KOEk6Ox/rgKLYeNmzueMUaPCwgkyc1Ia4oX2I+FJtIjAdJ/k
sD/MzdQKTpTy9NTnAs3OcmU5im80o8E6VgLmTMrFOlnp+7zjOukmWZoqZlNQscrPVJRjuYJTysNO
TkJCgZJCUIbgWmFRNP3e3ti7MSlA+z/t8OKlN1+v9sojL2KQNbd5crfI7ugoAmYghqbElbHB3wT9
OpQazJB7yPfu7mmG7p1UsvqwnBJzZucklIEIUhP87raqWLhOqTOajgyyoR4Ef4tVMYRj7P96sEHS
WEpy2+HUnPYqIMiDtrF4tInDHZ2+2fnVmCAPrPrmCI/L95NfqQnaM/vU616Qa0AIu22i3UnUPgP5
34J6flgmWlhcDqR4nPocF6tz7NwyjWHU0b3mfh8d+dbnVaiKO02GeyOxrnM5k56Z8sP74/H1B7Y9
gmw95SF9FiJdST8DRNOPhef88nqkI4MXI+X7jPp5H5X5F85PxkNH4sJEhI/j6hRdQ6508/nuJ7A7
rpUo+5koqmiZUy6CIkOPHlVHjdWBL8qKzz+81E2YqZAHw78U9kEQDsRPpjycpsagpyj0Pzqh+6NM
B+zTo1ECRKByS5DDKMPa0cAzyqqWzI+j54AlZAPlnGMChCSD/ugPPoZ9XAV/M5ExddhNU6zxMEi+
s8Gj8iO5YakYgBKEeYDcPBwZdMorzMcwCEYOPJ7nuAV5+2TSjrOPbeDAUBBP5psPjaP5jDRBEQym
5umJAm8tAOQa9dB5SZ5kekZru9Wy+uH5uONSw/BDrDBq744r0gB3xLelc9UcaZKeK86DqGH0cJFh
Yv89CqHn8+A3oQoe7BQlV+9vwtNCI6G8pkZ0IZJHh8ECwuAHDYfdi7QBGl/Dg0w6mZa40TTbJfLC
f8CeaZ9pygzRY6GPGUZTIGRmJ5vxpoXk9su6pKYRIGk9WCjESQCTktQBA7aNIGEjmnmezG8QnlTr
OyLq+Vl/9OPhH4AV8JYUVR8sLkJZVZtNmtXDo3VaaRpQPOAxRSpRJD1hZSTBDKljoEkB0g2Ccez7
Tbln9b1D7mTq3r/tsrkKqTonQcN+LVfjFOZmGoCmxkGJltll3cWCtk/mZdSaUyEwiP4+ti/wehzk
fOhWKJ1XzwscFkktJwyZswaUJoenInKD3KZGQCRd+cqvPoMn3IM1zWFGQH1pUV3e3rSFaUXRV9hE
x65tSPMyTDG02wlJPw3CwhsC9WChRWiCdWYKMRLTmxjxUcmGjP7USURRifhejSk8t4LufNSm0Ma2
IPX+Zk7YZ2MrjZKnDifI2vpYZksPja+tqzy/yWY1LbLERgoh+uz+Vz+5+p4IQtacSZhCNlbYh/Og
ktzmuA7IksNUNkpF/bnsx5aBSkJsayv0x+QMvUSMlFetG1rlIbNiFZ+jGAwFpWMLA+qIBYiCanDA
4cTaGJD7whTIYKjeUMJQ/VKmv35QnWVD9FtI5UvBKNKAbZiiUKUxFD4/rAZ/gTFADWpTSmRNsM1W
tctbdKAACRhUFkkQACUnk2E7iLInHo17IaJrEB+c2+f61VHQK8T9Jrs3LcNB9k+40ncd6fjh+vWb
HhPcThFBJl6Af0dvRi0AtSE0+ufbALEIP7P59/oOB+lALHNCu4uxAjEeRAbL6EC/JYP5plyUeiH6
oip8ESK+CeaV2FUgA+T3xqffATixfLCVIXVx/LZ+T3mLAk9tG8P5DB+DsxW9keCJv7pe8Nsb4D75
mVqKmlpiT//LIpIvxMyj9GfvR9nJ1ZFTwn7b2jyQwW0BEhA5zUBU7Tkpy8lfp8Pj8uVQjeEEIg7+
+J9OWYUIsuwzPzdnJF7MpnQPNvmAifA1inhnReo8QeHG3k5oY8fOKRQpo0nXH2gvrIp+YhxIAojD
PtgQB8z+/ocYQnbkNokAP6N7YEfU4u6IlQqUEyKl5c/jRHUPQE08vB+L0BDRo9+sBQtm0lGCkPSC
FT9y9aDLQGKIoQCCvdSVelkvRxjpkqog5qQSXn2SBALx5KukC1zDpzOB3TaECqZm76O+kpACdChe
gAmQAnM87iKrSRbYb5vvmbhuJvTATUxBBpdSUVa5ttxLVlMlbVNFPFI0tlWWFlSrLO2AtNZ7UE9p
QRJ9vvaHDuhUqB0mZWCzWj2XDoSTy8qBNvouTuQ/ZjREPD4FiFAphQifTY4gipP3BcEO0uJSgxRW
FWlDphFwBhwHMKADKgqISMVlU0HOZwJURakEzwDEUwU2HFAcWVCZQggQYDNiaCfyGxszgSS9PM0O
5wEcGMcBsBiDQBBIkBKKUtLTQTJskYwI8ByC4P+2DNB3odmwr+WLYoEjxxeA7i7+rujUsjOqkwUd
hvvx/P6QOb+TxPXG6u+MTNCFZg/u3YrXuTkel/T9DtWVp9iixn1u0LRJkg7h3/hfwaJNfEYzZuxJ
OCSD7HAxime7GHBSXR3v4ytfpB/ldBges1MU3BrlaWb/Pjokc6dR80ekKCIgpwV6oKIG2KlFArAJ
P1fxX6f4t7X3y5sZ33eEXzCfJCqrQJQIoYHicZ4l0AuOmtSYQLxTxjs/zdvsAlCElDuZ2LjMg63I
IJ0rGkKDyjLou/y2/bPv49j/4/xx/do9NRkkHSiGtzuzj+ss/qEbyztjbWj3D5VBV+50JAy+uoNC
EjJgxd2H6+nBvnDUgfwfLv8Z9gA/kE48aFpRfhOhAdYDLEflAnziu7HwPmFkH8SKcADQX62iTNUg
mDNffw0aAYokUjJwMgfVDTzO7ftA/FF3PT6Xvkn3/2YeUQRfyR9oN/guOwnp6lo2IxgHQaMxYlPa
bx7Td/5lMJL8CdBpchi+LxHuJrZRWihy13biDaRnirfoMiF3cpw0xHxTrPv5uTj1Bh/OnTy9Djap
5EAJcSfnhqqpKQ0vuEAGzzsiI+cQGiB+bzow/3nfmvJ8aU/pLfkk6bIY9/Y/4pN31W9JSeAq4p3h
RLv+nfWVosw1TOttsVLaW22qu1VZhDGtLdHfSczDhwTWa6iLJ81krwlBhYPk2TESCKQbRf67NGa9
rSDqQC7RRiahzqnHkWhRBRiD+ryt8dE5eKquBp9qBU7S8jrk1JSM0FLUNo1bAjbpcZpS2gZwbU0d
djtd1672e9wAB294TeO6bd3SWb2d9M/nfyPa+V9oAEQGJ0326bWbd4AACe3qSBq4poWmQ1p6azcP
TA4y5qqnazgAB05ziBznfNLWigugyBtbbkzWBhcSYJ+Fnr2VpQ4CHdAnwSZGHik5lIywoVgxswxr
ZgFgBW11cKptnGuaQoKZiDK+VCc4cqqqs001A2S1qBSjBLJsNNpChGARDdnbdld3uAAADrcp7UeQ
R6jygnwgMioEgSMgIX57+F3j4fEcD24O4gPa392MfoxxqJXx4LVJ3S/YU/x/mtPu86TYiSLtPrBC
fP9JXzEOb+XvUD9jcOtlvrEyCnWY1SPw3DtDmbPeIEPdsOBBEye0Lh/gB7ImpAOxDZ0PeHmYJwqf
waFzslyaw8qM+eqGfYyfcEE1Lv272nMRPtYOG8cMD+fNacckDVRNVwnMGtGEZ9qJWTxsBmZoHj3O
BtQF/2fcQcq+9QF77mUlwobeAUYbKQYX6lHRVyGJ1l1Kj65Q58ujr6xYkIfj8fzdsYbCMVPshkqX
OfqGZmYdgBrbfTXa3jv5aUIlo7WemL3+WAbwgUG6o0YtpK2NRak20Y0aiK2KTVaKqLG2wbSRpP/p
/c7/B47/HkswNXXc89dhSnaChyLtTRI0UKJn5RiBl3DHmRh92XcFAxDC7TZqUWoyhllpinbDZsKy
VDO/HfbIiWg2wcGp5hwZip8ZNyNIY1iK34Nc2aJbSY0xpktssakRGvi3IoEX2XLm6d1xkbFixSEF
FizVFShKy2krvnM+SwRZMkyvTDskQE2cnqVLBp242iyHqxqY5YmsGcWAcDRCXU1pNyNkmU4xK2OX
RuMXjNDEV2hH1S5bQqeDWR6iNh3l9gahJ4deiouvLscu/ItAcHbEQ5lKQKWlTaFIkQMhEIN+7N9H
OcGc9tHBxHR044qhEAUg0RCIYZipDFLISAZKoYSWZQqYQ4MGSCwQMVIzKMYtiZJMAYqJgmZqSwzM
tXaUKwExg4I8RohBXodHUWbgWREi9K3yYmxDjeoGoFFImHWQ31JAHkT9rioyfX1iYRepg2jIoLjS
56KyZ7gj2hVXfixx4SBR994+9OAoutUE9W22CPEkpR0mCSnE3sURZUbPGqmoz5qlQgvrWYjm7IJ4
9+bzvayNdTcShdQuOC+XtyGcQzM1HajdYBEstBKI4VVNKFreJhqpwoCYqSeIjLiLbGYHBVNwBWVG
lRKzFVGKKRmoeGbOKDO+NZZRxJKO3tBO40+IxHoIUOIuLMiN1S46uiAA5Lch1yPnV45UReFzF683
iuuHU6YELRbhREQTw5xK10ax31uepSCUzEHhm1D3bgvehqApTbjZdFBZ1w/qefO864tNnlpXOrwu
BGjNGOJUjkYaqEAIThqGiDbmEp7h0Qclmc5MpRi5wioSSGWjHBYBa0mjgXMnBE72+EQVAnbjRULU
4VYiAUDa5DJBxecYgGoqzF83GCryjgERzNbK20jXG9rQ1t2Kl1DhEAM7F3Jkuh2OJwEvDIKRZCMm
jpw4Zvi4Bl7hrLntT6zGYaiOUPRWgauwys7lKu55U4DjMiGhUaWqZWAKihRBky7GQ+uL7UlkrlSS
CMxrMEBMWKSCFCggBFJyfCtG8cC08meHoysCpErPKvi8SQ7gwFHE1byEIJHQyDUQBhC0Fge3He6M
cYuCdMp5PniFh7Bh47qyyngYxaBiHAsVwXuWl8NHTYcgakBQIQoB8NafhGlHNcR47cYgISPZRrWj
scT5xXIAEcKXqJWEiVxEQdxRAWZMpWNVKux4guSr1mzUGwTvzwI64GOeONOKGYut2sB47WgTbNys
oUG21uWO/O0vhCL2zNkF8qsHDLggRAUjhRUKCF336LG2ADRpeNORwy6E14FiyW7xGiGAAYW+x8h6
FOw5GbR8mVWGBh3dmRERFGXEBzUMb3lg7qS0aehdVDDDSZCQtDTQBErjGZ37i9EVN+xxgbdhimAI
QMIbbaRV5TBrgqjldkHXjYtwlPVVkIaVCVyawQigyMKVYhAlwgGu2+w6g6EZAGGZfbuvIq5tzYq6
lUs1spT6TIgvq3jAxEuACHoGipJSbcmtQGnFEDW8XK1rTHPjqZdJrMlY62tUU4cBU588yYiA7o0c
9yJDt2pHcs44cZ7LmBHO91QRYePdfdykOgQKJJBqIUtQRAoWDEoIlMhMRGSBEsMAMFUURBSIgool
Og73RvZKJuamNTNaKi5K4qg0qFCOUYOIIcFqB20o4BDJIHJ0gWnSiKtIY0LluXTsuIPCPFmJi47a
xdL/iUflz2NqNK58dU+18YMZjwQWXpe0+J3z0O+qIaFIaMch5Q5piT5DooDIHHkYHc7odRHhfE+P
DpAruAUihiCLMQYCPJEGs5zgjOhpQFpQi3A0GGBOO1wMmgiWZhjzcAqVYwy0YdwKgKKpjIorTDKh
y03eIjUJP0FgsoM6C6iwiZZHK55uDLWC9Q4L1WS4OLIMC/kzFUUaeL5TxXHjcFEV7NuebLIyCHQ+
vi4pizWDwKq3AwKtaALaMw3s0zWHWK365zHkrK0kJaYStAkZHditZT7l2qOYGL0LueOGcexgUVED
DJ59+nOR8Mfa2ElqJWBBm3BUsKsO0xEDh9uSqEsuJ4DyZawDjFVVigsQ6nGXyLnHNnUd1jic6Ou2
U9u6bTEwulnCBZG0jDhVE8qZUVbsqpkXTYngJ5/e5l4MVsYShooFv4RzWI5gtukkOESVPIKDsSPG
GaXCEIlAWhS4zY4Vdl7+uTBNS3hzL08N4w20927lnCMRi2XH2k1eUaxbLhQLfIYFOXxWCSKuOKLQ
1EDQkQF00gjCDaoHU2sdEpfHHboYobXWLpvmSXXskOuHa741kMm8U1Sy8wpuMJdYcL4t220RtTGI
08Lntmc0VbiphR43JMcw1MsUuW47QzbzhBjmL4KvsZrWTGeY3i72pqq4BMxCPKzFPru9dlc8uESS
SpzyTb5xcpzbkvFUY3OCcEtXiTPVRIljWarScK344ou0JG8JIzVRGaIa3Xz3e7azoOmdMDqzrZyK
+8RwpYOjA3lR45eKYIxmdoa2725oz4VoFCSFkQDIhIO1z1wyDsdTIC56RtUFKHk2xpGCsMKW2iR1
lQMXkyzGnGhY5H1fNBSQi7c77hUUOhC7rEPSyzohF/TM7xqCHivCWhihqXTRCm441EVzQQWXgtDY
aSg2dSBMKFCJLkmd4mN2Xro0PJnCj0ghjBBd6OK92h5hC47bcShQOg11MA2WWurxSpQgRC9cHbOc
ow3vcgjKKhEnFQTxjGMDvtUw1OsWotxZkYoSO+5BFRRmqo2SeUTURYT1yyTYRYTiKBkCjkTyxxxv
o7kDhyDRs5hSjQkWl2skMe4sJTU4B1CUlDhYFkSGhyliiDb668kLQPf9yaHSqzkP2+X6eHmoyGGA
IHZoFdWyTYgUE+ZGfaGhKVFbgqSrbCE5BJwAvjy8bdrbbOiaM7kYFlR1aNtxpczEztv5L14422uK
d4ZLXQ31PaZODNjhVw7gUJtHsM0MILGBgiUbBvuGLUmKaCxTJCiqlVtFEB8AWoO+drpQNiI1C1VN
SVts1N6wS5mjfAbIjSudZ3rrHXP6u6+lGeue7FMyT1TG2E7XaVZFqIl9nMCEXA1ktlHY7FaV4wSl
GraXyWyW/Mmp0m9vxWZseBzHEI4MK4eIVQ024SEobSSanOKlQZRVTqesYzcJ6cyaN5vFHgYE7YCy
KQ0YFZUEeNZrRwee1EpKA5QDXGY6L5xvLG2oEglR87vmmsV67qMbjgmM3pqYbcOMTPiE7VzNzunl
YHPFVvrXlt8ApTkkDpK4c809PLqFm5wUERFRpHfib4SypNZE26mF547arTOpyU5xdhiDxJ14G+NO
1CcjGimSBSUVvA3f+xNh+PGJ3Zg7EJS95ERruuusO3HbQV2JMuI4WIKSgXhNkxPBzTB0thpipYKy
Q6Dh5AlM8BApEPcaYnodHReQg37iiqaAqBeyHWw6LoIgjFExIeZTlAkNweo8Bw7vQztVRVFNXZex
wOx1MJI3qGxUOzCmSBqrwaBB3NU1pE1HDspiIEgbxzQWShqEaUTZOjC7LycvY2KLdFLNzR1bdANB
y6AlIUaMTg0ampQGq6hoEgSBhAw3bC8A00amQQTHJisORhTTqUJvNRKqFOFN0IknoE0EEp4OUPFz
EUwCeJjeTIVhVPKnSKosLsgKjillgwhKFhQBDkBCHJLYGgDADiQvYXxGEhg5TraJgoyUSbu8RQ1v
K404MEDksOxNIyyHR3AOHKgtBsBEWwdt2+tvtdU1969qvxm1EasVClqyUbBo0aZraKjWI1RbW+NC
CTMMcdRq9evyoIoZzoild2Eg1MiisHn1Irzd4TFEikm1w1eFDES3Z8a0JGo0CBJRCEHIqEGhhwh/
A+zKN5D960DwR7D4+P736rAAkggpQDICSQMtIGU7i8juPAjXxcFQ8+MzreBu+j5oWADVgnBfa1GR
Q4DyUT/lA9OaTeHweXn25l4DjAfrj5mdvp219O43i8GDBYXGaYLCh0ijgsQiKOCUpP0fKTdH2STJ
iUCGP75Xv6XfnxjGMW/P3YxjGMYZBqNRoAoK+vXhkNhEF06pOfSXWBrY9haUtKWlLSltVEtKWlLS
lpS0o0EppCmBRRYsgoKLvO5FMlH3hu+cUFi6BaLyWxYsFijzYdVsSiCrO4gAzQvQoUDgwKS0bZxi
gHxJTvNwyOCQ7pi2DpW8nStmR0CjpbTyLq2RwlBKaWMjiGY04LGTThmNOBmNOI5jTSFEbFlLYspb
L+naLMCRtiyyBSxZSWxZQsOXTB13dFnCWxZQccacAzGnDMcFPgnBwbu4xux47F3zX1Oe2upqrmqL
S1WjeuqZix/C76u7rrgB7q964AJ0uJxP0MEplAhpfcNVVHNbav8dKrrbeDh9D4ddYs/inVBZHXt3
ntqPeutttLaWtVYNtvqWvdsix5C2wtZSNQWDwpURHWoonPfISb2+2w779jXWnraYRO+6W0tpwU79
MueBec5nclMPdto1KUq+lLbXDbS1vbcqwfY7P91A6qSeWPvljgR/BQ6G2NQrY5eE3Aqmiop5Ofp8
IMDWtQP2TYuXf9qeFDOAaEDmw3Wvx6pgVUUeRVbaiiiti2/LYgzat60JtTlosbBtqNpGrbbawogt
RsFnz5aa2lC1aULafD4FhuNjUodGdoQNbUINGkBtKxvzXw6ACjwWFq0bBGlG+zj88/D276nE4yEl
Zj6fpDyiVILFOJvcmcGS9nyfX81idpGr9QVnM1PX6NDKnxNmPID8mjlvRs2m21NScdf7fGZxmB9O
3imhPj/f/LZTf6N7Has3TDjiSZfS4/a7s0alvltncl76AcZCaaz+BGVJh/G9V3Ayj6NHT1h+2SkI
Y9I+BxWy9UiFigzUSIqThlSTTYYflb1y+eGGH+Wkl+n0E8voMwM64Q/wQS+ErCvLkxZBtVqnRv3n
TtgmHDEOZAaROWGRgxXsl9OlWVqHPZlGTOiRVRGNFB0jA0ivSZUNBvoJs6CKeSFCw5EpBFBGJXf6
XOrZsfYzL21FJMMko0xKIUAdxHYHLiHjDGIGM0iemQ/fmIa5DgIOQw5xGuMzL5ECy09+4wOM6SRT
dFNoti2EWw/yHR8DNaSfne+lg1HTG+0ohYDZr7xzp0xDFtLQumOjnvHzvM6H0pVqH1vkILVfKOs+
V271/Krm0cPEh7ozcgTL7DGRfyn3hvD1EJ7xIgEEWtujHbiqb9aFJ+p8jRTF3Vf8Xw8UpUVscbTq
UKZ7sP5lW5hW6le8fz1PaijvQ6UU8h9cPVMYleeMjN/CakIO4h6DnhMlKUM/QPtcyc6grMrkB3wX
wL1eksjQmXRYppSkMGxjvS9Y5kst9liwxbtM89JeVTs9nxbLN2dm76Z6S4Ib7DcZ4S1rg66RqlW/
KQWpjPdeRh9us+rqRORc4wGBdqpyw4+6hsJDZL0fJrsUD+NVmvAQK7Dr7+yff4w0dihHIEZDs6HQ
nfrL4znCz7sv7TYDPOu/phLJmQl71tVmZCEhAeqVTlLZJzrXkb4vlq57YCHQ6z3wYGBQ1Y5T3yki
LmYwx0zzMB8Jw20QIN14SO1EgVhNfHSW2ezODI1bO31zCdBX6yOIm1gm34Ta5nJkjDVyS7U1+8T6
3rlgCK+TwJrJtr5Xldjdm4+SdRKGIUSLkXrTjsqsQxcGdx4Gg8YcFKrhB10uX68fIZNus6BFU2Ma
s116dpfM0QBGHKuN5RgDHNFhWfrh0N4o6rOuA7DoJpibupOSS4PlHeg3xzedhx57ODoY7eUG5YE/
gRHFXxZGdsmGJVfyHoNym6jgmgja5GFC0ekjCTyccOJ4SsIts3yyQHppFOcD9wx09FX1vTVuq8QO
nYYoo4kju2Y2k2dbTS3TwNcqEs3sSmx6+MEBmJJsH9yGEREI4JhDOrA3+qpIip4WMjDEbGX5PTDB
JmxbOOa3IyP0mD9ezjJtd/ewnpRNdp2XzGZ5k+nV9Pv52a4ckuLIqoLsookGhhPVWKAON8+2GVnF
E4kvHlkbsL4QSQeLLWsZaP5GGMnda6Qjhr/QzX18rL16bNwn1Q7yzLEpP5BapNZZl4DJbHdnp2NG
CdBkJkbXpJ2NTjkwZKA4RB8etB0kMSVEc3Hhx01UQoHHY6iC5BIQI37u4zYmTQ7BuQ5wTDPzbBJD
Q91sRFt7FsBlxwUup6ZYxDqzmETUEzo482WJqQ8O/R9jDG6lrbQcMqzjamSMPB2qEPvVMhyUDuzx
AXGLGB1bNilVGJyJEfGdFpNmepbHy0KE2BI4M/q8HWDYm49HNZmjZLqemvXlfjvLknJQZJGfZbV7
be3DeF8DqYZJgR8Ntj0lRrFq9R2b+emBMfN2F3Ra8po6L0nJVK6JT64iRcVPpM20kd9yD5LkP3Os
Lh4EacEIyiqSKuj9cA0zouq2QgNCPD5VZMnyfneV6XRC0HuTDF54E2o6MNpQqIEN/OItgOYvXR8M
5NFP1Q4rHbU0VnwRteRgpXVCX9Jn+Vm/gMJmd1B/Aj+F8NdvQ3al1jOfGjt0UHgZpgHsTHkgGSCE
vVmRP4jAXl43hFKGqbDMTeCer3+H2lf8u+WIxs1YjSOhgomLvWz9wOzKh8fmnpp1II2N0zqhHdSk
uw+wwYRL9ODNuMbkIhb6Do8QvzpQz1fqr+xwP3K9ngsfs3NdhDZv9SKuSxVsoc3AyFy0BpWRK+gO
zDekY4ucX6PZXAPLUzUYn0aAydv5Afe56Ue4iRBu4/2Gh8SXLsOqOkTDGpsRXH3d+1hju4aU9Cdn
MfKfhB3UoKu6Y89ptbawm7I6dWWfDl+Pf0OXLlwD1j+cwqGCIIBiI/biShm2NhnE+7dlQJj7TFsM
0GPqJNEG3ZzinQ8ZcH2nfPCDCHNddVgbkVKCjC0h5dEP4dOOE64VNzExA2aaM8vQTomRT7MYhLqQ
XVtIn6MpmSB2M+TmteoevpwjZTMdO+yoTv63plymaJpUHH7EbMm1/3uzF/yUYiLz3zWx7N/oxTgN
YTx54fD+mnPbs4ImeSJD9mNSMsSJHJD9jjiZJYVI4W25cnLMGYW774aXO5Gx549vdfIsT2J+dp80
OLk5A/frJzq5S34UZf1Z79uTAFeFZqnJTwjXHm4H792VvCxj0cft2VjPPXoydMqY48CzXv5fVjfl
xw9PXiHYwxr7FW4+5UvftdhjpaozTBxs8mGIlvO6KCEcfn/wx5JYZOYphhMMXu1xJx3z0SaJgbLE
thXn1vYS70M4YsyEwWyMYJimuiDz/tcH8HpQ0wC7Z6RZylXCqaNX5XHNiKhjhP1OeBwJx+ZuozOF
GbLROJNGoczZiFsee42sbjcTeBoTEM1GlpQYtE99MFLd8KNIEpCmXRsJ7RpvI2QXyzs2emViEn3Y
7iJYp9PCk2GJ9mXUoj1EGaJI62jbzwwqy84DbmKWUnPqKetoFx48GrowxwrnU3nbXYUfAp88kpGw
H3Zw0jKfyex9GOsZMUSeEAqDXApLouqRuREowE7jpfq4b5NOZWcktltnyArnC20zbIMAksjpiYkd
XFhiv5TytwaKm24/e7lPER09/QedGEYmYxf5kXa+9bNP2lIWTZYCxdOHTORKUwnyyZuWcl4h9MmM
Dg98sPHI8UquS3VSVo6PA5PiX0KLJFTfOK8OO2WWCh9xafsunw35b9n8d7+j24/YTkbgKJZ9kGez
Mudkppjg5SnieHgiACbVVGGwlJJoDpZwDgbgxIOig41SYulpzIG18XtNzlzeTc3OiZ8oseOOGRGf
FukzjLnvHN5mG+md8U2BAbG3j78PjPKtCcI7Zt+bKVeNpDRoTuRz5JJG+e1twGDFcMy2HGL1OPAw
kt/j+JAYhKdsTRdF+uCx0DV+sW393v+e2/gxx4Sugv0PwCW6DzOfHLLcimbtWcvjHb8GF8L9eVu5
Z77dfgSH5/n25YcfUNkHP02HLHGR+C9VurrEcODHDRSKm5tq5v6v4ekPUj5OP3epUfth2VPWy1mc
0EARBIkPKIV19Tpz5TM3mlFJtLsajUGUVIlRo3+883my0NK9leEFe8iPN/oJoXdcHzuBTNmOSRJE
R/mwyfGwL+6QO2Pb0O3Q0EBQzyGsE0y6D2ld2koDLDDVF5MEmd6MyA/I9d/ITTbu74dk/P2zyRZc
hGnyOx2ZP0PkQ1eYaCcZiQwsOvSfqahRqHqYY2yyqwxLbDSEe7NReZ6tF5TPRtwwE38zthRFnBxx
Hp31RMvjRVXfmNfLBR5gvKxJJJKZ56Eo779FuvhRvIYQ1Pta/Rrm4911oHTQjxdx26yI4oKkD+GX
r+b4zLLtSOJqYLhih27zKNgu+bAYExnBra2pL8Doo3b44UhnMcmHmhnCYL6SB5tIdTcrE4crEHnP
aflGJ3DYXGvK8HTb21NgyMuGD4FWub/eKKfJoc/gZHWnH3kDCEwY5MmKu6WNtyJBLOnnxuOoe2Hs
mh7pR6fT83BVVWo8p+ftMa/YbfzqxBeKKcCCGSGqMGNn3JjXxX3bsqE7Nit23nzs2zFSMnp/8ReR
njxmOKGNzYkqKmL0NAJu2ufsKPLI2UPVM4+/6wq21AmQSd0jTTHZwI0xWyl1pTDFNMcB28be9b6D
OmAe9HzgrPRMUC4efviATH/BOYuGOGqZjimpmdO2Q7ITRa5YJybkJ5SHYHiOe+PN2GJzVNpHF4hJ
dFnIytxJNyVH1HYYjc5lQpSl0lRR4SzxMOdSVRyInj8/fVC4L8AZ6sNnYl6ZXKncNPCzfx56LQij
DEfCkFdtVm8Q9pZBqUr5nF9oUy/c+E+y+mMpySyNJ2DTdW2+d5Kro1kcWGMbw6SRSm4clip7YyPC
MPidYZ7s0hHPcQjuWG85dgHS+QfXWCtSw+mWulQFW1SmVIiInpE9NajDG9yRGZg0Wo89o0Ndvfd0
Gnw68WXuuKqXW5rMHdq+h/F4VNEfPlfeH930cPjwTvhuoekeJfNGj0FarIe0zePtSiHxV+psDx07
h0aeiiuieBhd90efbs80eNlaPgdRUw7r2yPHfq562er0RSTVR7HvOheVFMiEal5EPmFRhiitjncK
BDvqwxOpi9byKkWzwsT0VJlaks4sQmH9Qi6DI65yoOe0TBJgRR3yxymK0L3Rsm0qNz1XqnnpvyS+
mU8MR8j9JbfeTZoNhfDSI/Oi2y+V/K7lf7ijpsjsnJuQm9Tq46f5+D9Nhn2dE3CEQhzSKppYfF4R
qeWJwN1zfayIbEwh1Qk82kQLF2iCUG2xJXb+NFJdDXN+/XNl1Wg/z8qo3PPj1y2TSOh1Ny7RWJiF
PY+sYxsx8CJ/1aONiyUtj1lgRJlZSP6bFqZvALZniXC6Js5uH26klSUJjeh8HY2mHQQWsOxR5Yy3
3Ju0i0rWJwkRzftUHt+V5C+kdOsb7snZ9klqYuihm5PU0VJVmfx4n1zpMPSG45Ntx6voNx+RZzdY
AVZkhkdPJn+T6gabEDZD6cTb5KQJ9UcEANIiNllJ4n9eIZnI0A7h0U3ZG0wbPE9lrUrZXrSd2aSA
l986TOy/yZvOOBMag13KRwuzF0SCC7SpBJJHHO3fqzMxF60pOczugwYYjw51vS2kquJhhCf4pwbs
4D4BiOZoa/h0/mN3Q7Ur8F0fl36UM4OBtHAS5JZs7B0bg2nFtLYY+HcacWJwF+/PrwX8+Nd60jea
ocTCMzdczybp00o3A9ME93F9Ny7lsOp+Jb2EFw+YcxXXhECZtohAnDguOMvrpt3vyv9TLZGB0O/D
xXVmStY465MUTCWbBo9FBDysoh5EApxD9STrmD5/EtdYZmVYhQ4uQ/rlrdNWQwroX2H5P4KDnJJ+
tx17CD5lpgiwHlSYMiBiXXv/dgPdkmXnxVlRQkOfbVkEfAyNof2UVxyeMkqIU2FECydxp3aup6Oz
eraR0zFiWQHmDFjaATuQKAKWmsSGeLH0/LggeN6bylese2oOztMfVmM9yXl02tIUBTEo1TEbLpDx
LxOiDRSZmJH5ZDJNt+hX2qlJKlavPuLJEl9+7aotFiSjRjJot801fJDg+JIHCEIabhAoGhZ7zf+3
tNv1W8ceIYh6lwlmaIlmpBoPxGeKcVNBHQ7pIZbV8imFYTOAmHZMw/gaY2mvye/7rdmrPQy/UNB2
poUI+mB/X0ARz5Dz+Mdcyp/ZTMsdjWlTSdic9ONJo+YRaLLStvB64ib4j6yoZlDG8SXSZUq2kPkq
BmVvdoJuKHiZl10kSxsYRbIwwrlEyk++XPZf3X6JaMXzbMb8iuYrhw2N0S2pLflO/2yc+fQ3GpIO
XtYY24HDPCuZDny7DZIyF9K2IgD5xfa2eP1aTMKXiprHQS6ONOjoHCZSI1NwvRKUXG4MMJO4zCWP
4IIoMSKENkbmGN4Eqbh2sj5impq5nNSY3w8/KH495e99JZnZWo+5d97+mziTYj1RExRkHR232l2L
00ttOyqWhlBR9PFHI8puMbObxEkdqwG/1nAevze1nMva5UzQwYzcJVjjPvJblZrN5cXP5MIZE9Ph
VJXbDrljoSoPZcZc9nXvYloYZy2upv3vhxNh9kKhhrqppjph7DrOyBZmR+aNpiPPrZqWnyEH8Mj4
LruL42/FIDQe+j+QxyeGaUJ5MaaeYIaX6g9BAyMlwn26FTcSGrZ9xHA9UoSoIdrh6x55Gk5dprue
W8UabW4EINe+9dpn/ez4Ls7OG6DE6Z47Hn24UdJ+t6TGz/gkV6h2bUF877uWRdZbf0WIkdQOSL1O
MsYX1eyMD6NCscTqxrkYdrR3XzwNnGXE/sw6DAmZGAjBHACR2EF9x9i4IkeN9w3Mm+BaG6NJXbS1
jZVb+IwHU3VXhl2ycQl2ONxqdJDATxev8qCo5fyUUXT7TqNTzePcZr4/cNWqhZxPXbFfbnmZd3SY
Y2uLC0FalXpun0alWM5CxgPQWJxdHU1A6FREdI5IjOISQh4UFDqTzVCJwb/F21SStONnExgsaEHb
KNrZTUEjIvlEuWVTZ8KrAsGRrOGu09Wmmnrxki9ZPAyGMUCoK17hSgKYCGJ0yMTDPchwPn8Hh5gy
GCwyEIWBtvOdkfxEyHhQPURE0IeRUy3bzcemnjfzidUfEaEN098jRjPjSyN0G+TVLnYZ4ZJIQbBE
bhSS+68byVIegkztMh93aTlQgQiIzJHgVBxHQez/f18hb9J+UtUTQzsdoCCEDiBh2xVRV+bVdLGj
ajVD/M6s5CmpMhAqkTI4an7xkMkHcGQAhhK0tIGQg0rkoUtIUlCZKiZCOSHyyBqU4PX26BjnPYiQ
n6n9/x65fGtIrW1fpoXo1N+ZkdVYgMENIJjsoaiEISUs6syuhdGfGH6/f93w/P+CZVcZ7bKEkCIZ
TnD5z/UUkgP8alG/yb4HUMHVwOiv+P/ZwB5b/0kj/BvsH1vaPvX2MCDK/V1RNOdf2xVpf2+UpbU/
5orxY/ewd3fjxDpvru4ab2rr59XuPRJgmVcujUl7P5P4A32CR19Zf8cAKwQr+TsFcBv/35yZr8/0
P3hCQ9Tx/KkMEEEClRfuz1u5/ndCcEsKAwTJUHd8NA/rg3CyEjMCw0RtX13Jo0GKMBVFFFUYtitS
VQH8U1c3zAP7J1JTSUVSLMAPBGTcr9q/Cz7PPN8O4/bPjA++TAz8+tAEBNfm69UICA/dE/n/y/xt
UMpPyT5ro+NUTnZ1CfPTjKUYEA0L/bbwG6wTbmOZ/gzz7ElvTM6Q+/3/P3fNv3jefBs2xnGf7zvb
6aY/4sMf1f2cLt6PHQY/Pc+okfqq1CyPajiZny/hwz20fSVukEpkcvrvfP/1kqlAEv6s7rCGQT9W
9TrCFiQIIrWt75uICiCU6fPpi2gg7fW35BdPze0/YSfj4AH6TvXWl9tL5a1Ir6Yy8qVfR/Qwuj8/
zf3fN/uP0fjwx/z0MY2xoqO7epJHageW6ptm59XGkhrCB0E0QmFCPql/R/g2EdNjPodtPBxmxs5i
rTnkfzPPydzBNtQ38/G2gzmnlEonWMmPXtw8X/mgDnzzui4tfqM2gfxnJaQfX/L/HQp+//B9lHp7
9EePTkXG2cV7LvHE93hRRxyvU8Z2isOjng/e9Vhe0YSUl+XG1J/iWx7jVAVwlu8459P4+jSfA5QP
6Vyns0d3f+4SW8Sly34fDJ4uE/wle4zMVHcbBpMS/wR7pgOqXbiQgMSbV9mNtrnm1SalMPw4H0yH
SA2Sh6I51mpiBv0kGEiaISS3u2LVlATxPviXVJvYyX0gzpJt+mHsedBJMkwk3NkeZUr0O/O0LV1z
ho0Hsv+H4L5u/x/0v6LZ+HujPV69Dnju7hfJ04xRFHMEWSm9ElSTzBRwh9vuRfkJgmpSg/6rMzIA
yxmWqDMwD84L90RMkTP+bMDbs4Yxgv5qVHO0BG/NocoxwKH5sZN4djTfJupf0u6SZm+LliSvdykP
Na6TA5/Uer4uy7u2plY1SX9JW5UbPtOlIT7XXApx32Xm93P6nVytIxRlMtG2tnIK953oQCfEOvG7
SO25sAjN7liUEWBxOSsWRF/tN6Scpj9N12RSCKUglEVfRY0osNoBLeO0Ah3duBHezbxAIQ/134fm
PdVVYKVbarENRGEWDBhhF4epwT/nSP6kbk5KGgYpmoqmpRNMWW3d3aE5VCvhCBoGNGC5DIUtUuVz
BGQ264u2rty0tVyywBLQVBQNhbbzcEjk7UQWnbZsEDsIVjCVJhQxGiKto2wkwBkQP8hAIQMIm0iY
SkS+EiGEJblyhigqgcdhEB2FJJzKJk1Y9H9HbHCY7840vxCoA/tlzwOfym4xyCNv6lS+Yf7j7CwS
aKhQGbzJli6YyzICCEDsmdf5bcYfPD9WzvCXo9IOdenfUNP88odsjVDRSA00CnJl4hoXXZgm2Cgh
kqAEEwjazX9W83GgokwXze7Y9/lhyYy6YSof5c/TV6UTNn0Bd01tqRiVsMHskqIyE3GOQ9bcqbqI
VOSwDqSJhPSucHUDqy0Q/7tmVsecTvUQkJW/uUAlJCpUYIhooViFYT4wp8JPqgX/tncB8YED/7T2
R/5dv31v/d8b8u4U1Mn/B12VTZkVIpmWRU20mWIVqMiSizCM0hhmGTRhNhLS1fiutSJ67OrqsTCr
KwzZmastgKlmipms1MU01pGKGNvfj16zJiWKI0iizMU2alUVkrY1plVLSYUz73VyxiVMmlmWVlPs
rocc+HacGgP1/y+eJO541+Mv+0U+jvPa3952GR2m1ghB3eOnqPH3TlJe0X9Bsq/7+OVAnsTIbNuU
p7zqZpsdSBr4f7fxf64/L+j98af2/s8O86j5qH4t+76Ujr1f/bTb+jC/+0fVltP+Tf8P6N+u3aDf
zfwlGQ6xR17IdZZ7Cdc/0SYNmfTgB2h3nr8/Y4/tcDs/wodPiHy0DRBSbloaIIft8wUiHtqUKEWH
lfV5fJ/ZyMv3fGtNDcWY30YMUb0P8t7XdvVCjKJmwC/Hz+vTh1k8z87sowXkP+zh8mQByhuz8X9p
t6ycybWU4ld7zDo5vMAOhIKtDR/PC/5JRHqof6AQgooIDIn6lENkdnv0j8qHgHDseDUICP2y6jZA
bkaCjWGI6g7y1rOhGiXuua2S0ZfG74mlxLGvk9883aQjcWlRfK8So2nIgtS1k0lhZGH+MXs0G1/u
ccg4LpHUQ+cEzdPJ+zgPn0zDczw80znLWJPCSQnn6zZNJKrA3JFOX9XLGCW8pDSSSYojO02VKTdM
IVnmiCzwngydCTksisHxpKFhqYNWbu/1q5AOGzVIAAD16+fl9fQASAfV9WGQ8ITaWeBJTFGTmmRr
Xdm9MXjlg0/GUmbKuVKoJuEO+JW6iX+GvBJKKMVLuquYYazJCqfy7TgdpefIdDfWNaIJKhYu7es1
ee9/ROBQ2w19fXOfd1lP6xC5qak4M6Pq45qd6YxmKDwpPBf3oT2k6oPsj6yCGPRsYxx0pP5YH8pr
mdVSYd2esBghv3f0ZYr0E2hVRXQdhM3R58j5qD85HrPbYiliBHp64JF2ed6HW6HiGS3ztSpKluOp
rOl8Xc8ccCmRsK1J6VhxO48k03iAwYY+CBvMKDyxMJhqogg5Zwh4xMf3kesi8J/0J/kGuLV1ZGNk
h6B6ehenphmn/qeLQc8HJflf9IwL1WzZxMkBDoEgP3ccxuK0hht5o1oyj6B4FAMwMwww85n5li3T
ocMsQ39Yx2adjvu5olEDUlClPokHj8GaoN6t/RAYIn0OMMfNoAubmDIM2afsQf4au2/IOxu09+2g
Yhv9dD17D/j7TXKC9sJup3HKkJFSeTRtVA8e7Tw9OQFjTORL2mL+zr+SCDrW8rZp2ZOaN6xML575
HGod2LyADd8ZS6Vv+u+suBTmk5DcgJARCR4qXhMAdkztwskEq6yKgexYGqc2X554hMrpE2ehI7XG
qiqSSS1+agTuJTV7LUlTQj0YmN8neLl2FlBLIc0rZGO+YUE3zSzMqtPxzJYyByrZqCBq0/AeW5DZ
MmEf43bqJSVsucGm6eMRnERFlQbn+f/Y+LVGdKkBId/Ky7gxqIAsoXfjw7GfsMHZ6dhPo8ACvsiK
b4HniyB0nKLE+lKLiRtuQI36yg8w+z7u9fiHzt8XbSXN/J65HZMJ7nMMXqmMKo3oRVyRqqGazI9J
GZnkr1JPD2MnMJvDKa66NZeBD0ymZ1kjaoyDCoY9bUrKqY8T0p6fDs2+BscmiI4lyi1ZETd/D2eF
rnio0HKKq3d/vlnsGA4iBENrqdaqlHvNlGqnhDR+79wjfa1hWh3GHbpdDbYNsbfQ1B45Hs2EZdxU
yIByu5Hm80rJjjORo1oEjSHZDubUocrv47NNxXxRima85u5cl1ROEsxMHoVTO3GWZGIa440L+FdV
PbQpuwFQJJGaD66BslKLygPFEmLPsQx3/sdj6E1GPW/XzCNwjL8O2PuSQmpf8HEfh94k0Q97+vjq
viZPdoZQIfnwUceJkFl0dcE+UIiURB9iAAAzZMwIEN1jalut5vjY+Yt2WduxWQ0OF3Zxi7kHAl1s
S5sk47iT8U3DeYVkb2pLaXKHfMcimr1M/qtBJGxkzHekg3pnQ47Ds7ulgxZyTLpYgyEHFoMxvAww
sxOHYuYcYyIw54kZ6cMYsPfn366PLXaJm//KGy6DamyUOA5CHqBNPVlrTjGmRr1lydsAng0YXvRt
xK4QE7+6IwyPNMY8LDt+cBKAVwHE8bQNDeGCyjiUuHu6+cTTchjCPHBepCNDlv7tzfyNe9TaBZNN
FyuSbrVx8cTGFXCSpBRRl7P9eWz3YZ53ZMq7XiZj7LBCmru5w4PHRovL06c3npjnmtkhN63iGf1i
cVK6uGMjn0yYzTZlnbru8ZSRjpMOH270Z9z9lhsUxK/a0ijzFFR4i5a7f1uBBdcRA9pjhsVzJK+i
xPogG5eifotfug9CM3ybjBoaRr693k/u2KeeDG/XKXBptIH5Uq0nrlRMVnpZmhtsf1Uh02xsKWjK
LbZYUyiihaosnpzHqKV7u8PLODk47Um6ZssPblnSmTn9yG/R9tGR3FCcmAG3PIbBfphghx5vx+OE
0/mO8QgM0Sq7eaaOfhDYyYyk8cowKF01ApE5M6CkpsW4y9JGG2QS2DumKkp7q449GVrGhSJPQhsF
J9H0mHTRCkgqPLLToYIPE9TjYrLBt6ykZoYiY7GOgOSkdEGWo4H5UwNgQPy/3qkBrg25myFwcE2z
r3bNmzUZhe1pb2EmAx6nAyDkUPj0pNPoodY3oUdc2Ihh8JRJVV/Nlhs+iL5t+L2jmQ576D80Ogz5
n8At+QMBppJUjQtJ+0gjkGiGaYyQSQCEJJS5CNH5yNYjYeYb3HqufjFyjohAXzLVUMnk3Ko831eV
RpHBpiAtrpv29Wd9KqNe0XVAYqUA/KBF01O/JfY8778zvo4tfYMPQ0X6NWrsf4M9IEyZB8CmMMdK
sK1/KDcm3U/CEkj9eZJovl9MBBuo59MglD76uQP6BJCSQhCEtpzmFGXzco+kg6CCjfk6bKZ2yKHD
wMstjLOMon9Pg/euZx8aTTI2kxtZhMwgzgO2EzkwhB1rWnovUkzd7Ws1LQ9GiA6rZZo8jFUY3iVn
CeZJrJbJTF8PUOxqwjAo9sDyvlMxtoz2fSRJ70rXb9ZrEcwLNLUPeqrkjkGScvp053T+Vlu45L5z
Bm7rL/TkvOed8b67je7GVxtqR5FBjUtvhR60TTZ+rCa8OYXV68PLy8yma7HvwHoR0meT2uy9d7hi
QFoFLuWku7680+/NFn0bfgrPB0AwPBZ26DFDUsv5n7ohHQHS2FFUH9l9/nHH3IjNjcGEOJset2zs
LffwUIxJWjv+jIlJuov0GTGHodj3LIlNGmxMhOLr3Zw77ZhM/JqRTC0SqkySQpVdxEHZ0fRfhaws
JDrhR5ueomL5tbaEH3Tfn+Ki1zx7yzD6hoPuIH8p99InCCL3R0j5HaON82zQh35yes2HfUqcLJ3E
ERAhCEYucDDL9+7OL/xIVZdcW2ejCmn8c4wE5C9M9tP7PXU/hKaX16f18/t1/opV528+X4NB93LZ
6ZhvKoLv/hF/l6fjy/h9v0dXC50njkdAybh7n3VZxjFdFS/p1+7SMMbtcxa+602JNs4FKHyTpqIc
U0cn+/SRk9Dz6I0ik2/1xH1d98JfpVd+hzWKLN2KknaRzzYI6HG1SSTEMaOfYgYN6aaY7snGKoan
09+IlkgW3hzvlmmxDsk3am3Imn6IVCJOJkDHRzzjThDX5SI+wx0pcpj+/Hqy17al6AWI0bwqfjJ8
HNxV27+/feZY5P2ZvJ6el5GcSv4OsvTe7Zw7o3lXJpsUeMOzC8P++JooIOyjqOUMAemT4SPSQiOy
TnRg0gdDXT3te+OB+A32dB8eDMsR+SMCHRwV8MzMp0UOjyzgpQT2syC+ITq1N7uLV39Q7dyK1f1/
c9VlNznQkTl1zc6+jKAmgMVZFXkDxO9KyOmWBHTj/rK2WbvPxnyuTKuCBxHECZsQku51HXF68Llw
V/5TdsMCeSM1kjYg9CGnp7LVt3H+muKyvXTI7GoM47DiGxhB82LTEKNI/gxIL5Jql6NWQyGmvz38
u7RDptBLHvjYscEPZV2/CwUlwoYbDrNxUtsL+nDz447MIgz++UQ3mgf5+TyXLNyiZfV0/NFqevff
0/NVo9mXTxw5V5Uf6EY867dxxslXLiQdPz8o8u1b38Ibpsaocc7eXwfsRL8j1j3Ai3n2eZSfwt1R
n3IIQ9sfP21qU67lFVgvWXyvrjX63vhamC2Yv+3s3eZv1hbmWmw3NizSUZuWtJ3y1NNmDIwthS7/
LnOmT4YSgzxjKHpTKXB5m57abcLJSk0Vuzy+93t3B7PJCvUXnzLH4OnH4zesY28ZrVp1nCYvunL6
BWjK9k8LdTYVcrhZEPlLTB5pTwVSr4Xq9O6k+tbZ7Cydy2ed67+j8LrfI/WZvPk9GLnVHS1Dmlzg
9F3cIUz6GcFb632PZ4fvXjnjvRnq6dE9hfZZ6ePnB8wL9XhR1Xc2n2/Ctwww3iusC1qLWXOsxZfY
x5oD9AmA70eCYb5/U7PdwPbsp6+/27VfM9vT4Z5tj0P8gu7Qcz91dyfkd1PGh4+3Db1b/ktcKcRy
DgU6tZHqNn2G5z/vPqEVCYOSPCjsMCNumm3j4/HI03t15Kjb6zmfQzsS1MPpke3v84APWc+MH0T/
X/trY8zAccc2jmyQtT0xyz8Uefl9LdGl2aLtn6grts1edX9XQbZceCbr6oCa7goJ/l7ejkqe+h6u
B7689uJ3H1/8MDJ/q2N2HeMYggQgkI6CRUfC/gPau/5+b7GDPsW3byY959oVBhJgZV7D0nx1W6vj
xRu+2U/dTjyYl1/Q/5PSS8jLMrLJdBl0tnkH3GMyfzMH2LjoN6qHt/J7/kPdr+an8PNhjgG1hjzD
l5y7/Xsb6eppqhI77ejjbr6ttzQ9rLRr9mWeyJd5p/c0fMQHhubgY/5S0+WuTJtiDaNOkN+TbjNp
bD/c+vsfLae/okUM8OxxxxOwaPbhpLDKesx0bfd+ifx9fr0tOTt7zu+78PfMWbqurkSgcwXwU12V
9UNJiydZdew+ybZTfgn028iX5HrtyzQtf4a9OZjddHy5kGKEcK/bLri2Mqxr5vIkbp5+55y2V7LE
23HXnwloR1o2WGtmO2O37d/rl1XzzXuQubYhPXCPdXSL8TIvs3UCJ2g50plQ7yteyz5G7J6SxLow
gNPn/TQp444hZJm72A3fkixPit5L0Q/lwgaMS25aLIhuEPiqr4rtWmwp97Pp2LLn192mtaHV97yg
sqS37ht/drK2NLd23sgpMjHbL1JprNnQJMcimPUTtRunsOnemDT6iZkjBotgmNEWs4/VpcIRzxrc
wKfp9pKL0ja+i3daHm2tRR1ezftkgdND4ylaMoUT+x7+dIo59zjiRy+DsY+FWpLHi5Ha5Ke2N3Db
apOqXdLNNtDLnBUTNVTI7pSNg4u3q2FpkkzrBBRM0IOhDRb5iJppIQja5sz4q0n4ffSJC4YMM6FT
qcmbH9uEs+Pqp6pUgc5Lx+6Jk/B6egnr06+r9JbhtYPkYXHlPFZ1c+6vzR7aBJWkea+PvpGWTwKG
+jNyXs2t/8vtbt077W+3nj9rMI7HOrX51iR7CUNkYpfL8D1zJt14Zw7okPZLkYfOa9S4z6d/0Hm1
3HR5Pft6ZsamqPaPvJYLx2alLPuY2i2SMoolIWXdRJsPh+S30vh83ot9saqLNyIeUl6bEZjpvam/
OmNvY4ziab7/LD2FMsZ+apM5Dl6SarScuk/y0MthLvWf4+Mo+z4fpO8m9mVde3b3tgmZj48NMOee
PXyzy4ZNxtge/om2nlZlhp5wd3hT827drq3VfK3yU14HZx2XnjxT+M/VkeimOqYtqULKR1uzr09E
F8YJ2+X9tJc9j3XT6Dh0eHDOu+VX/5ZzUGAgwXjIvxSr6q/H4U7IPs6/Gce5vNovx91p+qZgJbsZ
X+ZRJ3HXLDAWtNxkZ5tVWVrWiqatCnr319KnhivEQr4QuR7qw3pKXO6TDrZm/xq9KGzLOd+XH74L
aUbEzfji4GT0rJpJJqSdcfcUkjg91zXu50jevbk/StFqqPp0vVXwrgiOOEDYvWeyc5bNssk2udIC
//fScfsw2G6W27H37HMuFRoLq61V7R8S9KNzLp20i8bWGfnWRI+S+ZYbd3eXxLhXLOkosd3jPl6T
kdRgbpb5Bkrpey5KOeiCNZbtjnZhT6TmN7ryAY8dwrOO1zsWi1WcpZ8Rev59525C+n2fSbtOGBGx
bG6tkqd7zXRbrUa4RSkUc7XKb5TU4pHT9H3Rp7GWlerbL4/JnT4+OO7apnCWtLejxrXzpXox5kxY
vWzrCN48/Jm+nB2D0nd7cpZHO5hpTRnNzLzUXHIJv8Jvjdyy21fVYv0w3FWK7fn6qHEy7fVKWOHT
q7U1zwm5wp4V20Mq6+rjlPoE7pY35cOORve3fLSV2nAser2H1dfl02xQObTpDOZPZSeS5kG1lt/0
bdM5/Py9GKns+g7Nu03bhyN3hE8W6qd0pqeVtpEKZJc6xdvX1ecWvdz9ec2OCDRFJFMM+mJ7bRXr
8TsrL4NxzLiQaUcOCg9ThNOjw75VWCy11n0Ozif1exyU/Dr7JZ0vRHrzNW5+vwyknU29PdDZk/W7
9ZyXTcvLWNX7JNJAld654XlTUyHtq2/WstcK/HZE05VzQlbi5wFO8wX1aaZk5xbPZfPWi4bfv1kv
GvLV43WtVqt36wkbfRutp25259fj91vBdm7ipRfp2GfI0nqtQ9+fVJtFReTIPflocm7e1/QpRoR4
zXDKT4tY9ISs7C3S5eq2mXn7scOfbiSm+qkoQcsXYxTSRVSZ/BfXF8a7N2VLfQbMyFngaPLk7LCl
HR7fDPP5ij1O+p2LpfTHMq+9zuMSWyFTnrE9jjz63iqlWsTq486xWk4iIt3c+fwUudHKtyacu4kf
pllBWj0ZLj37zyqedbwJJcbRhwT0xT7XN99tCdCnh8l9l3Xx91pbl5mHd0+3cGSureOHQdS5mx7K
te7x4UPCnPhnjoxWkjfsnOHcjG5DWsWMyRqe7qwkUmaD+xSkYaUlj2pTe7ybhWCPtdKfDb0RIou6
I2Sx+PnGaKE+G4kadP14yhaGa9Lt041KkhKrP3PmSTiv49EpI0EuTx3tW2W6s22eH576hmbQ7jP5
exYjzts9LdG/2w9PTVlWvx49q4joUrog7out3q8YeQT9uw1XF5rs9esslXPJRvJD/NiQRZUfGZML
E+eDUpPCv2GCvajSezm6R6JSnyH34v6ssxZWnB47tptFXUQ5s1u4mkkvJYzalYruyKrciiIxnScb
1bNvdjBsvfcoMXKEUgxJzhPRNpTMqqTraDYJ1LjXnBLtxgumObIEJYbS9s5Fq5qdJxY465SbZkjB
Tu2WeSjZswRM3RrUQ/byNNXnP57uHq04LV56Un6S9Zab3d23x6Zv2vXgZEznASMu7o38cC3x9HoW
bYrNqMJZxnATKzl4rCz2rFvQevslOhlfov7J1TTu9aOg6KShstN8yRPqwk8hRLCWcCrVQ8tuKiQu
eB4eMwLUTvrTVPHRjaup2Ia/bsvkYk92aBQ9k23q3zrQvD3s8iUFxURN2rGH3WxvSlsHy0DKcq+g
2tz05Wzq2mPlVJt5hnlrlbcB7r5RlUjlBtx1lt6pKjp9tnpPq68s+7QbfxfPjTb3S1Qs16nNi6Lz
5v2WynjPPd6Dry8NpmYZIdfSsJKRt65azk5rWZeLayPj0yr1evI3m288Yy+Gb496FxzK9nc6UpPz
fPXD4nw+Ec+f089s88PPTTDfX1238qGm/edGOcUjrh+7HdpLq3Trw8ns+s9uGUq1nyzjO1aiUZyl
jPq3jno8l0bBpgjVBRCuvNmzb9TukAhRNddOslLHHPlryd9uGD+ZjIfanyXVSUShDi1d0YCL9HXG
K6/ZLPLbSOzP37dkl6tmltrNwxd08lynvvKebVJScjeV996k64Ybjb1uGnZ8PR6tpkNj8DKJctHm
dTlUbKPhbbdS0tSFxKW3Yba8NX81KRfdfPXFZaL6O3bOVVv8uqWzdvMqcduPWZPp17lums3StXKN
ObkjiFPRtp6Q7fkx34ZR8PF9NWfVLWwKO/ZhrX3VbrtWTyvoYppQ8bOOhUKmFTok+Q6U4NekVmtB
31yaE22kk2MSjLHDSfXonlKU3tu19/HGVoJaWKT8erbpQVV44aG0nh7ThtuUHHzx0zeUtvsLKWdz
My2fNwqsvl2czDQpB2truv7oY6uGFqM4s92VI8d2mHdQma4cNmvhkS80JBqcwsT2dNIf04G2OUcd
sreF8Me/hrnzmKueGOvd7IfDPOk57Hya+Oyi9uM0lnJRXyd9ZYVI3UxIKnhsrpI1McKYa4VxqQXZ
8IdFLSyU6LfZq7OHV1Lk6aziMljqZ+nr9Fd2pf1/ZwOnY8d6cekQjwiOWeFo02l6pIOTg6I7I3mB
oiqSmuL/Baib7Oedm2JQhDNEacH7PmPuX0er1vl7M/TpmdLS4KA5cp9kqxQRgKQbt41BIUIXs102
ZG4P0fD1+O7Vma49/t1QNJ4Ov149bWmO0athjiOmcl2MtgpZaqzXng7YorP8HAyoWvVkCV+w0OaX
vDn27ni+uvV/F/pnZeM7kmyhxvli33aTb6A/UdT/XAoiB/JKBw6r7+AHHq5IdFqwqZJLdkRstaFD
7/q7LA94v+Z2+rhd1aVsymZqxqplMpYkKUkesrSdSHQwUyyAH+pkT/oJEyAE0rIkFM/6JMpmjRM1
WlNaQaa19+1dln71pDKGdthE6y6ohx8o2LOGSssvbOkWld9+9YFazbVJBIQwjYiQhiJJNApDpJVx
jjGI5NwbMDhyYnApdbXBjblbst7rjKilYUybNNi0WFlpgTUJqZvi815o5uUREmmkmRJEppaKWYwy
vuvn9d83qJkWkpmpNUkJVR52JMs0TZJKIhNfjNXMpYpabyvWa9kqSVsJaXtnCU3IYiRStEyBFJIT
0oklRR6tsHsOI/x/u7+YQ+QmN+uODHtwQ75cZ9UJqCirSgdg5Imng9d4xSlFCjpEN8Qf3GQ0icIg
J/4wP+8lFw/3GAHjKKHrJU54wV84T/ngcgChQpKVUPZ/3Z3yv64T5T0qYHl/e6+ZYH5lQPkr1dFs
YWCUChhRGCGAS+L/fiBxIf6Z9UJQOpShP/GAyUDWtbBQ/7YCyz/aWVcUqJcSoqGTDbY/9Hd6hcf+
3EY9IB4nskdGDwu6DggITR4guLbhxIG+HE4x/gQReyIONCWJgHMgtsLaoaAGxKyJUFqCtknK+GBa
ypQb14WB/purENHSxj8aXzkTJnwdA7nYg6Hox8OPCeC4sJuMXukdynWB3AQSGZhwSHXobdEw0hyP
BYDqB1BlkRQ0CjEV/Sp0wtgfi0Y1KhogQQdMRzXNaFyVoRPGRMhTnDAKT4zxAdSdwJ4w8vvG26FM
GZIMg5jL8Hr7ekiW/32uX/JtbWtWlDRVsDQMQoKKUCovhgmCoC/MYIHfAuAQsikIGpECGUDcpkKy
iK/abUQM78P8X7/xJf37BT+1/l/T+/pdeA/b/HFpX9ZI/1fp66Ok8w9k9Jdkjn9m4ACbux8+0ZmM
bKWz95GuZ+6b/QjWn73/m26Mp0gdOnyiVFRarCc01DLoPAQyEvB3fr6pQyOliOLMAFn2wEThFByA
SIOk+apQ+qChd0PRE4QlUIkaUWgmAUIg1YtotVWK1VklRTWrZm+vTf+tcJZZInHIgaqRCOxW5858
pkceiSJ9UQSeifCdv1fgsYna3XhPsm34Bv2a92T/3ou7mTvj/SG7KNZuYYTyxffS93SyU5VHrLDB
bOJ4Z2pzlRTwvgW2rnok9K638u958vfRw9vlY+8H4jL1kvJmMuclUXGuiEE3rorvoL2uuV1Zp6ve
zW57bwqqKL/yzfg4qJrb3o6cosp44/+LcimKCasRGY8g6VKef9WyUv0O6FrXkiShMBKqcmtgvUtm
f41/rthsl0Q2GBwikFIeZBE+HbQyRdDoKaXhoQyh8U6GwO78cRgP4oocGz/w/bjsym/64r39/b/d
sOE9+G+JByMcWGP4+LDHRhwwDcHEb9qynuT/7s7n9/9O//PpJdX8J9cKFLXtQletk8jpYTvO+rzH
V5ozO7yXV666JXTzur3enzd6LqN9TK3zrrqszUU1nW6VzREi+rrjyXvLuqKo1q+srWsT3Jpu+pLd
zWVV9BWhdMy807CuSpFZvKOTTu849a3QuGzp1vjDDMOVve9vSzyrIm7WqJ7MzJF0zpISleL3pesV
88+ltj9/OXsDrfd01bfKIdNRBtNc9IbcoS3SweTG1FU1Z1rX0w+lbOc65KyuqiOmaYOkFMEjTAQG
MVsAjBcYG2I3u9W2uTfOUjE61rb6JMmSlxm2UmcZGR1tuXupq5U3OSne5K6d7mPtALUy9XuSrXXe
t7vDGbRjk6qcZXdXxitgo0uNbaNtIze7zDeHRveLHx0zoZbfHVXc5yFbIUw6fL4VxnN5e9ZH1sm4
LKKquQWnbzrl3xoearjMdVOFuQdOD3IFE0Sll02ymXIsYbY6IaY5q0TcZnOKblIkjrW8U671uXdR
veGmRy4qYGQbKYlU6vMxbZpgtGEzd4QlOqe86vYcYax5lQ31ko4c6m30G9ucOcmuSGXcuwLcNnTN
sb5fDjlXeurKfR1OtSMbyo931fLmpOnDlUb6kZp75I+Z1w1d661VZs31Mvlb25IdPmsK05zGcquU
nmqvmThu+ibe9c1rDU65eb5rbd9U863rqU5UN47nWqdy9TOX1dc4a3Zua0N6yuquXOuutXT3m9Xm
+qebu5mr1hnWVvk3znW+TrNmc2bea5KMnOb6Te5rV5bNvfN7dvZcK1dY8zOnzq6l1T5vQ+rGUzXN
sKqb1HvjrpbqHbx9ds52ZGQmyc3H3hV6vm+97lTfJeUbcrdGudS8zM3zRqoTUK5jOaZN8vg2umSV
t9aOctc62MlQzDdM6y9PLNPbuTWun1rJMerZcuVXOPlSZt3NWUMxrpvrdGzW6rTMhzlke+o61N08
cY7663WwuXWot1VPqTNvrXWcrkWWTfV3T2s4c30F6N1fWjfCczc4b5VdM6Oboq+r3ivjrmLeby6d
OTrm9bUdKSac2Otcp65q5N9bit9butY93erfForGVzdW5hUNldWujV5syVUNnNS73l4Xu9p85ibL
mG+aKtSSy7NO75111OFVmVzpnXW9b48vnTyae97KZDe766OdR63SZea5vqttNPW711ejL6d75Ra0
TnWZynZ108d9Y82GRYFMFhlmi61dWxLXOIO4HJFM5529MyNJxZlXMTR1bqVXJnTdGddWbuin0+g6
5nOletVsZcap9J3dHQ7nSp9W5x9VSs51KXKLvR06VLrdLC1eoY62xjZ0+nLdRyTGqlPSp9aHj6Uv
lW+jFGXVSmTRsjb6vMrpaH0o3zoyo06bgxvaOA+EudLY8uo+lnUzKI6fSoKzqlGaG3UCMj6VSEHZ
T6dySuiyt2y1Oay8Oqx9b65q7vrrZ01c6y+Vqub1d72TVnWcod9ZeuPWX1k6zlZznQ9HR11mZrrd
1zrVxvfTrKrrWTk1NaeZetY70+rzmXd83mPMqnV27zNa1vWXe7u7qqq7u7qqq762YbGc3V7cHOu/
MNMwTXF5/4Px28g34d3YmDtPkQO2hu1AsHk569Nmh1NhQ+RU4f7s5JHX0xCOx+nq6+yjdL9VeM2n
pwgn6XpBPirFAp0J4k7DUw6uZ5Txr0XwJnuc+9QLIT1e2t7uHbjdpzrSrWb4ZfO7o3ssKHq+qJ3/
l8dc2+f0mfF/mbAY6mhp2bci816GqW4VNz4UQzE8Ka2ZTaK4npw2je3gU2F9uFibe756xJtyCkBG
JeGl2mpAyKOt2+G0Ri703bsEUEzTNmGRM0WNHTNgYNwao2s9xhKEt9yIkimEpU+Wh6vbbhmTxFuH
EpQbhY6DmYaRx1OFDaLqSGb1vUmwYerH6MyS+7l2vDj6uXgXhkHCr7d+73yZ4xdtFvflTVTVa4SJ
uQ86uR9Cl0J1VNsN1YjrRPik8jGqfO0V4YyehyQ93oc6FiX9qTkRPHTGb09i9nA6MzSaf+Tc9DFd
1OiCFUbYgJf758+norfHgobrbYfJL6tWsy5atvjb7lSWR3CYSQkrRlTbd6zI8SVk9Ch1YE99XL0x
pXA3GB/l3Nh6DJn8VgZtroEYJtaNq2in6H9Vjq5sdKEE1zC3HkMHyB9JQzwwCEI6z5ip8xzJGYTD
w56tb+4fiXgOJqY5NyI8NP2hGFtRpSv3Eof+vXz/fFurwasps21hzaK0JcxlLy2N33iJki/3WiYf
i0ifi9nX7jMrv+/5Dv9fr0vQpuexE3/OiUibzJBgHP6QwZjbeY1z4wetGZ5aNbLz84EL0DBEvXf/
Nhj3jDG6PYsUHQex+DC6/Y9PZN+ck59m4ZtkFeFXGOCaUM5UDIlVd13FZj2onmROrumA/MiY+ngc
8m+eTxnox8sa7bTZpBxRA6U2hCkgLZa8OA0kjszPmjV6of1S9vyCnvzxO6vFk+1+9KiKT7Nv53On
hyXT7vh8Ph1HbvPNAfX1+PuW7n6n8rdjIYCMRd9XSS02euLDNTYmODRLskbpHyHpo1JOOeT9KDwU
YnX78ptY0Pf77nhz5Z7sW0jJ8XkM3MerDGZJl5jBOvzhvH+nn7L52Hd8euGYk+eOcqjsm92+mXad
+N76JAtHMYFNDsIDxp4Rw2buSMO3BnL5Hbm5IQgmJjkS3ZylB0XHqFinF7NJpuobihpZ1dm6IUFa
y5FjLbcw53MyYExAIib4ra7sUEAgQhbV+mUsJFREglqoYhhCTI5bCIYccotmimhISlyzkqOPv3As
bW4cZ8lsN1nOJpm35Z5M0HvjluduIBropfUvklNX8GD+tW/lhBh+Kxb1v8c1qumP09Wjok1Z8mTG
kPjBYTV8+GMVTl6ZkadqB/RkRfh8Q8Hx9G5CVF3ZgzeV5c8+CilPgUN/OzLNN3esmo1tRmbdtc+Y
tfLHr9lgxplrCN6zoAURh3mWuuvwO2xplifag1xSwGLG7ILuF1Cq2Gm7Gs+ZjeJWe+XeRStuuTEp
HQ1XJDLHu49FapNR6LDq6YpMbCqJL39lMTO21Ge69eBx3Jksidza/BPsvPz13987F3R6uez5d16Y
01RVNUvlORNBNDGn5bbJF6cS6nD9NT6Qy6LTG8n1ff0lteygQOg3KMJTiRCPWtqlhXLAuG14wGng
+QErnoWcFnZxCvMwCWyoSGK6lL3ygMYlGw1kYk5hFHPt/xt0FZ3fdpfbRnO4oSkM2ybFwJtrFB+d
H3sbT8nO2VKvw28OzYUxdzfORbPHe7TEkecszsSXpXTx5FCU5+Mi5z0y2YiwYbblBdJTo8g2NZ2Z
pYLbOOz5Xy5+eKOp9YaqLQzuHH9uBQU4Ofl+xzdZqPrUm54ay69++UnaLVj0bnxlimZaiSV9qluL
EYHHKDB3DEMIlCRXjSCSCUbyfpoaN06NcpaiIweINNZNC3Kxuk0LRAeaDHC9r+qVO39Pc11Pp3LQ
O9Br3ajX5Zm7q+z6NNKI0c3JdRl0aW8TNwS37owXbN9yn14T0XSPSN+uJfbV3plsgHqTzoW2+DxR
MZMZ2p7V25G2luhjGcSzncRLAvsesWzsPNXeleuPx8MSHY67rkF2EJhnww4c7wxJeUEVN1CqabS7
VDB442OUZ4fNZcuG6XI3VJcTKxaO4rI51NkmREoEVD+bTtlyxNLFN66qU6NcjBzAwYktIN5DyQnL
O5xuGRQ42gxk3z/ldtT28PQRtYnoy37eQmHOjbskEhVNRkIZJhG+AtwcmxjUK1jBygi9E+bTfRfd
EYsY9EBUidDfyU8P6Qh+813bpbenHSuzdrgbejFL2T4exuGfGu/k5C2Ui7VYUQ7Op+Gjc5SH8k7S
kFwCecyjZNi9UE7zEeU8bEyD+pDwnRfolxisPRMqcHnEyYsXnoqImlsE7HJ4kdPZKmy77Ql1Jra+
hVDVnPKgba4CrLVtfRQ9ONO277t2nZn45t1mpxRm7SVhIDamM+AbpRzHXHMca90kcDdzHLaaGLoo
eTUtqvg5KXLuc27aUTG5dPg8jFbqvJB0pnUnMWTHHPV4Hkhxg4msMcd8b4iaA5Wh6sA/thGBX0Xb
bTXZCJs0+vOHjzw6a6ibhhCVts2waTzRVBisc9pBlsFKGayqg8kFIlg0nEgVXgiUdUMVQWnPZHyb
ZHCWTSpWXqnvRG5HXsbbiPl0Si3uwaVLIulQipc8LZ3nxXNeCxLdkJdYugyTkCchODjCF5a/oXuO
bVNo3dvOeDqZjw3ScUBtiKpObgzyjf+extwadTQQwxx0zzutTyftic9TqDw9Z3TM/rg0wpu14Rc3
Ngb29Ha+GVaxrNjdKNxvGWJDD9Pct1XZlODHuxrStma5NbHHmJglRmuDg5goHlCpqirML0y1liib
ZmM0yN/QZZVZpEw9Y1pQUbJkRuzvu+i1ETbCnljKNjUieIGqSarvLnEaYwI5UcUyVSUti13FKiTI
TCrXANJlkUINqas8ubtArtWw4DsONgLzeXjRxfi35jq7vtoM/fnx3jnfnpAzHhxwG1Dc0uLJUGwJ
3XI6XPHrIIxjWnMrOtvlZhtqb9rICh1GOOl9m5/Ht6LzDbTpbbl4yNxLiOMOzpwSOv62CNpuPWcw
gNAYHODIf7/L86H/G5K7/j9tSciapCsKCJUnO07VKeO29kWQNvGt4RmPo4ZSbDW76vP6LrnRTlLK
Kre4ZyPnexq3m9kHvl6zeuVuPK1mrHvrWzY66KfUzhzrdVxy5nUeutVznJzVjeHCzMfW3HvrrnIO
665K3mrYZqamjM5rWVdzLWj2r1fb0HzWD2h6vFcYzsez8PtKPebjL4BXRsPz/oIQhCTPefXlUvz7
yiHNFOoIi9BP4V/xCgIGiBQDFkAQf+7SriChqEDmVMADQKfVDn9UTWDgsBQJxORRSUC0sQHCqo/0
F2J6wg+6PsfHodeg1K5IBBCyeMY/yQuBUje8n/rR53mS8piSnj9WZ/17thU/Nb+LH+sP9Z3H9v93
h6BRDuQD1QSRURDJS/kDDAIiKCglSqUD8t2l9R+EMRP5t4CcEhzLEFEED7z1n8muwh6EB/NPJGyB
3QN8UyCO8icj9ND/RCQDxneHvPgnh9QcWHwCEyE/FH15/JpLU2YSlOsdVhMu+LECYXKZQWg/keFC
rwcHeOLmUGEsRBBgfS5117+zu/NV+7go8nc7QkIQkCGw+bkmYfXqSgZVHLLI1F7QTgOemKTFJRKQ
lUfvw2umAxdpaSJts7jkWBsPWfySQdGEKonDKQbBpNcDphr5SfDIpTrnvwSO15bYM4Qu0v1/yMJe
3y+dASlmNhSoaDOEDkO5NOzgqmSgioAhkqSSSjssYqWSHpJiR3B6o7+1Q47no9qH9teCHed5wno8
PO7ZgqgzCyq3d9u3L7O5T5bwleioNGQhIMyWYSJX8fvftfyPxt+hOv0OQLOWI+JGJvVBgprnmi5D
CKGa2jYBVsgpiICrtV7s2mHwbXr+FHOIXXgCLplyY17O6ltsfb9Oi3WprCFtmnjqPHGv1XNBY5s/
xZjf+Oar5YfF7p4XeZCVVeNas17mS0fMZ676GxNi+Pf74cts3CPFuzhMDBL6nZ8llSFI9HnzvGEu
4evhRBrkPTYcmPccj00TgvbJqhYbg8OxK05vQ2TQUsorfmdnLIfBwpxvfsdHsvtaotWfQ+i39F6S
1b2KBySy9izhFFd8346JsuRVHxsPRLGlvyIw0xeHE2M0B5evJ5D0aA6Iw49cOQByFpHnyXgGxhMb
uvCmCsZs3hqYF2/BBIv4qwkRd2QmIqKJNaaOonQRvFfEO8E2iQQvsP4lFEOVF/AAFDrYw2VEV2jR
RtsMZoMBabAxppEDn3Hy+nD3H4pA98UFflG6Cmth0xf45KH8eZTFU9+eZ+j+HxBpSOjvY5fkK+Ad
gVtbJgaybqbtkO30TOPpCc25hkGGNNxbFsp0nv5oMEwwwWqzcrKj6NsNGprrkMkU7164hWzsryfi
jfEIvWjFD9v2BwWrDOtTaadzNm1bXyZh7nisGoPiMsmH7chJ7ULpnrR8jbVD49iZcnbanQcNDyY+
wMZ8JGGHUj1SHaVJJXCHiTo9yoa8tU/1a5CBw39XTcCPCO/lCdfI3AJy5/lG8okiw+onKdjjdgoR
XrNuznngfsx/ztStvwX55fszOs6uEa2HqfvilQY6T31rTpYY/h4yJTYY9oXC0wvSjDGTDEj2by1R
AxlM3+eTDYo7UxNQv2+OtymNTLnJ5UiaQlXYtl8fpwrparvHrRgigIQ4kJ5ECFtKkpbJ/Ojy6c9m
V8gx1UL0ZuOsXu7y3zgRCaw1rHEjaifgYVk713ybz/hfYepTFjgbtz6io2erh86Y/66v31HQvtd7
H1fYY8D9Y/R/NWJu+7txkpbqQKjlGciKbhSeXBT7/vOyuNFhytOWOEsUar9ZdsIKp6r/XLKmFV6o
zgkZo8wlBymU9HdY30kezUvoW65bLPTb7cpZp1N8/RskOEpsl65GymEPxei/bbExOPkuvtvgvffc
JuWzxk/P0UR6Z7zD6PT3ytwfbD2W52iycv3ww5BTv7uqTUJbKEET287mhU4ewxwKTahA4j1KSbCV
CEsJuuj0/zsMSD1mNoKo6Ly8Q4FCN1acd1Jf8zyrFVO7DHdunG/UrQBv9eUUGuTYZvh8nd/zNwQw
WdhD+nTsP3cPNGUkkpOx6Bvw+cbdDkf0n9YQBtueq/mbfd2eqOjrEN1B4H6Zs4OkkpkoMpAfWKHk
G0KVPm4G7dXsnOYxWJeVb72+rnlD0s0qbrYxhlKlbUWSJ4YUrjYrIreVMpPjKuUY4aABe8sssa4m
MiuUqXk+Mq5RjEsM4xQgYSQZsWHBD+rmNrx30mQH0wCQkOaMETMi7oRM9z+wkD5sA9mI90efIxcL
qlI5HsCMtDek0vu/q/3/2r/8yX50QSf+pE/xUQSnE17LmjGqEmbVOgHadv6P79FDuyxluA9wb/sM
4xXadSXfLv6fCdCa1mu/G8N7m95x3Iaf6slMPohtsr611zryhCq0Ca9POGuTxJ96NGoTC/fiUxoN
Vl4SgmrO8FL1e/PH00FjSVWOpCPjIe/+71nec2/I+3iBj7gppkabumaHchyji0z8+c0uVhc4Im4o
w6u31VHKMOGyg4awh857b+bqkm5Epsuy7lOI9VfLprIQkQ4iE4bG0xkKG/k9uFTcqckOXJCQtV5p
QkPQonTViE2pqirPn6/MtzOMFbZK+91iGiCo74KW8b0NueZkbjeh68hJPuKGsgeoXtNBhwId/LBy
myfQJDVdaB6hh6KYZTtE8bwGFld2DJB6yLB/GnYgomYxMguDFAb/omBg6JQBD9eOQY4BBiUiUNl9
74BZT0A8IozAz8+4Cjv9iv9YoH9a4mHcmAmEmaP9XPV7XPXMqIhAVQ7Ej5CZB++RMgQfa3d66F/d
t56fDc57REO6Ldjv3jynUVunFoSbSGxtDcCDy6uwrYyytRyqB1VZGy6pmZlANMwZGHNt7vFgzVNl
uXUodOIdm61VYXTWqCgrTplIjBQuohUDZCdUjvuUfxiRqtUwobiY0barrO7OuspaQx/6mozZ1Kj1
/GWHnBEKGJ8Z/yF48E7jBZIg8xWqTBlIgmIYZIP4T5zkwhFP+bQKE3L3dX5+fzVoeQezyEl9wHh8
tKTD2wADeIrgwbh/ZKZH5PjCnOCbekGZvqHeFoLEU4HB49N45j22oGwAX1PHePr3+BfPW+75vra/
SiIiIiIiIiIiIiIiIiIiJ9p0kkSSSSSSShCQkIQhMDoNqhH9qIHaYs4HQ4cvrTM1gGptD7wOIL63
/IRueP4o/oZDfaUdKa8qTsFWp/Kv513oaa/8AApZGSEd0wyOrsNJM1NFZtdV18TQyw0rDl/0BLQO
KGJRU4kbXhRiAlLc8J6pJR57QoWMQN6GEcatw6nZQzzWqmPCWXse6pQYkLJyKaZSJJMKiayY96F/
wQh4aI8/hoI2D78qM3S+bJ1NVceWQobuvMZ1z1bTZ0F+jOjs6RlLtdMbO2+xTuPvmXl3C++X2DrT
0ZV0AayAgXfaTLF0KeWscfhDpj6Tl0Um3KKyIrssrbTbv3654qE35XxDlIvAMzJdQ0Qcz7vrpA1I
uZFqKSCrE+oAPpOwy47zqnTh05dcrXKYMsrCE0rGr9RDaDjLz96+wlf71RUn0lI9KcNG20gEvC0Y
A+ay8jFsqgLECIkX9FqtosIKGDPsrjEKwax/5MDaveEQQA2bKZ6FFHnygQWhfLDL4tGmqIlhWF9g
1WdNpUFJhQU9EoUPdIljWA0ZQQVrcMehAdqqNIcMaIUtsHfe0LLMm5m4G6ZANIuS9HFzYzUttCBe
ZnvFu1uKwD5yQQOICIEXcIB80UoJSgUpskRd3tP856CRdAhpmkX+9gTJIgAiR9rKfH4n7bWTXcJI
omY4IOxGkOGbIcTFuSiBjZ7f6FznyKUlixjLBkfFPzqCRM0CblPU4RMLmPQYEmnHc77xcHKbwMn/
wP4ZqiaSmfTmEhR+5jD7Iwb0iEDhT3Zg0qPoxDJ9UBu0aMOKpICgkYom4jrrJCTy8acXjZy4xC3n
hwKDxfW17E8GBr3JwzDdSGBkhRYUihqIoae2DsPkzE1Bz2mdDprF48NBT15s2Sqg8ZNxnng10HFm
xtN9GU+AQ46b2jDaj3kt9HVnSruKl3jGN9o7Ozve967uqlZLu79vZ/CYE7Kn7tWYfPAoHyo7JD0H
ccouvCl5QeLCQK55Tjo4OddWmI4vSFmYep4ge2TPs5AYBK5UuhYd2aDYc5w/Pcx+gjbIJxPEPdI9
IdXplMgJFz10TpHpgsyPNEdnRJwy65LDI38NCr4QsLgWzyxNmNIodNGmi2N3jhTrII63TQ92trdb
xlO6QTvaqEqigKW1yAUxDYXpP435xc78UjyxeCpQwPTQaa0z7BoxnMZDfziKa9Fd9AbygYwbBCiG
HoYe/nAOheBxKSHGHGjxcWDMqykhUGHPexugKD28Oxbb5A4we4W10022GVKZT4DCmNopn6L3UOol
wKF7ecwWUTpld9t0gtgXIVcCmoxU0mxCTHJhsKfrcxkGqpM1HYqxQNc0qEtKBgmMfLy79NNmNZIF
/g1MaqgHKyJMyIrI3UZG8UVE/xjC5/UDzAcBvdc7N3Ck5xectKKe+RHxPtrpo69uqGSHu7KIhtDY
Dlui3ZIKOMqoGPPl5pKjyjz4tGZF2wDvuDYU0DdVALqqLqqAqqdUw7yLjO2NAGEsFtzyqCfHKWSd
ByECATClCUGaYOcPMoNbaJG2j6r6q/Kfr/x8YD63Y++kpICjFE0mgQ2MfZcVLolyEpaXn7QMD3hr
[base64-encoded binary attachment omitted]
PCBEppKYBNkDY7Bs24tggqBULgGEjHOomY8AcDAN3w+8z+/Nmw/Kbr/XA+FQnaTKeCd7b61GYWmt
Qm41DX+AzNMH8EEdxbP8KTJTeErmJE8VsqmjGK7XvCmxtv2YLVhEKzCIQhi2oevY9GtHOoG0U1p4
7FH3Sxlf4/+/2b2vEegOAQppCDSEneQ4oQAZFnUf+P8ImTR9v26TVAiUBECRAB2fGLaJD3bgr1/6
AME8+Z61HoHtDREoAVMV9ahkhAbXYxAd6rmv8KmXXtA+EN+u6QD7kxJ4JJ+JKjKYO49fwnhuKCOE
k+yHnbiHQilEKFMrMQNYmukmx3jipvuMDUbJqJKQBMBdM8Gzk6wPiD5Hw9dVDNlF1QSm0oLSHccL
lOV/peBD5qVrAnn7D4H62TL+Ej6iN4cGmqw+zTv79GE873HSaeN5wraMZ6dKcxXx9J4khemrEpKI
9xyf0T6oe1/rrVJ7E6V8sCno/nU5mE+J8lWOYSOYFrlZkT6oggXghBaEQ0evXFpZCQhVXFLtM8TE
Ybj4xXr1+Pws9Ie5aH2EPc4o5byH12mQBBSB7wIqxA/RaKGpBcoKtnXPxsOOfgFkhghy3Y9HqJkN
/NoYMepLxC4w4kAqDRSP5mbxkCDCyt/Wl2YTjdUtb3oxPXeit6TYLsA1dnBIngsMu/NqtMltumLE
wyiXcJXFNmJcpQLEiKVieNa3Raha7ZPcw8GGKxljDGAipU/lKtb7QiprxgkbRjSLYgjwl4EVIhKp
IbjFwm167roavGhFAMoWhQOZszj2aBwN0ae+sCjAm64UraGL27rtUyF8B6PN4jkfBCK08bSkg4co
i5E3ygPZDs+mB3xkh4ZHJ3RzHIb5He4KTgxyqg0Cm3W94WirTL5o4R8+fCHBORRMqqlaerDCYLaL
zK8wa3vDCorp043xcFDAYxqyrt2QHjgMeYRcBq0x8ZbvO76Ofh8q+BAkSSJFUKiJSSEkCJSCSE6d
Aiee9Meo+He7LDFusd7pEXY20Oy5SVGlDxzCFHhsZShIFtWRTMk53q9khjbsY06ISmEZYaaGmmSh
xRrd3XOpyWYjZD2g6MIiIjfzZ1TIivwrbXW3WyNSkiBEmZgLEk+tXa34aq3BvIBaJwIcUf1TdCQZ
FOJv3kEtAiZmgmqfv3FcVckTgZ7SykSsmwJpFUoCBBH/8nmUTh1YFPUIfJBJEU9Q+eA+6fQOg/VG
ngfsh6SGS5AtGOzNRFi2Na81q5Ra/zvnbRijUIm4CIEItw6lF1xj98I5AuiGgF/olpUfqgA0ShuU
FOCH2FOhTpBVAfdJ0R+nx0AGoEyBAiBIhADiFyRUzuJwNQHSBMgHUOmNEjRR3QCZ+Xx5VD7ixhHm
A1JzHjD2cZzxjfIo4KUA0AXvz3rpsqMJSotX5L88qvKp6SAZC0CcQjhLQMQi0UJEhuBWhUMj9N8B
/SPp13VyjpXw/kw32lQm/yk9/cUMp7yFaFRXWikMf1oMPGkckpAIkpClYgpsuCP8U9uXrpP1hDWB
6EODyFO7BxEokY4oamF93sV9PlhcTYmdQE1hX9kGueAckJYdy0B3ev11Xyl+djA+EprQxQXUCzni
8laapgcotsLM4Z3i2vm2UU6iBkQkJGQHCFKB8gX+KSESdm317XXE1PjMN99sP5xotEgWyCdw357/
ovSGvZxeGHfXxRjSIrETckiicZ2eUezPS5UkXfbWcu4yvttUosig5GRWY6noQ1CghIN2tIMRpwwx
aRpYRF9njyvX8fW+9XFo9dUF5wpsw8UKn1/oc1YsaR6GgGMT8EAR3TIgKGK/JFVaqhbnboZbjxlv
MPrSotqMwcYy4R6aIGVfFpmlvc7kSTrNUds60og6avSnGGD7GQRU59SL6wKjk8++VzYzR5wiAlHo
PXhGiNEhjqWeatt2y5RWSD83Zq8NXKEbhCDvZNUnYki8CvhMh1E6r+hkWIB6Mr5jD0k/45F7+wwz
G8rVoIYwCkapopeZZxVYH8wBEL47UFQe+67TSua7Fsapryko3UzQOEEKJB+GxBgijBZ52AGA381g
u8JE6I7Qennh1McGUgikMSkBMayJhBNkc2IskWsZw4Su0UX8XW3fzpdSJGbvS85x673tbyk0bJqN
U020bYWxKKmYyNMMsbXyinMVsOYECzhS0jbChA6cEINZf6Tdh/GcFuLLAkDtbAO8vvYxwB4OR3MO
mNRacKLtQ7gm265praaykmlMpZJUyyEmmSyaiLLSFUEVpTdpq7U1vV2VX5aMIlmEORH0N9QGbDAa
BcW8B3A2Da8ImzQ9YG0B3MBD38GkEo0YYyKbBNMiIHsgVCgCgFpEU1rVirbBZTL9Wv4fV+Yxpaoj
WWRo1Uy2qM01X5ScUC/d6/9RAvyO0h96Wh+uahuREDxitgxeUcbI/OlfXVmB3CCfCKAnYEBOj+2+
sqXJ3M0smnVRRaGywqWIrTNo6425W6srMCRSDCAkCJsgdUNrMHkGLCBpPmj/GT07wwPCT9PdnCrc
EdUaY3jiHbQ12/kpalqVLEgs2LBrMMDSxKyE1BIBASMKyOMACrHo/bJxTsepqJwn1m+/g98/MfaA
fgP9ed0DqEDLAICIAaiQ3n3YRzU84lUks4qNxhePudogZdKoLhIFMyHEYSqiNIqOEIZCAM8Jo0bN
wmyUoBZYhH6OD6cednDgN6AC1FrDuw/fZGXGsgjFHA+GLkJq6rKtFayrZwgnhG0mVEK1kURQIOvX
4oD1Duz5kcVAYna2w6hJE+jkcbkOE2MoAls46wAX+GkcZI3gvN6WB0Qb646FjQko9CrenVy0jtd6
7s0GPRhE9HgvbY7YHacG5CTeziEF6sKinCdzxB0bAZCMGcRDB3p0UE0zmt754f8xv/uDPnkMVACI
sTtJKLbu0LYtg4k0ci4sOMgCoJhiNbAotC0iDOi1WtdTkzrqg/bDgRq0pRRFPevbD5Q/DfXN9K/v
olTZTMoYxhqKQgpJ9Lly7J7523e6W82KMGU1HX3QLouN0JmoqbKJjixy/G39ec82FnOVJEmtmzVs
lMwXmQ0qbQL9e1swtlA2khjIzVB68HjL73R7XQ1RMdcwLONui2gZKxKj0jPEzJoSiqHYXEes5ewj
hSIsJpMRCzq/k+A01UwvJgzAxGBmQFGihJH9KB6IEtKvt08W+97wc7OnQKo8O4ccCCzIqmogUF6t
1r37p31HhXQ7vTxh0JrWZTRFEEw7OFCSQiBg4cMccB9C7AHCWQGAtcUSkWghSVAKJMgNihsYI+wy
2P5BAgRfmipAimYkhWkKE86vWE6u6nBdORkhoov4ZGXX2ZDpu/k3bl+X86B+Sz5569Bc3/apIIh6
kX72byz/Ax2ZkuzdDVvWqq0hXvlNhjxmh6veFbhHjGPGIobL2LFrCDpht7bbYyo7kBsYwYXu6wGZ
TaalOBsdqyVhdXjsJHRGQsdQ0KLvO2iJYqaaWlqkymdlBc9Vou3cCuDzHek/1IXtxCo+KmkFB7SK
qECU2Ux8ABNipKIfXHxJQUO85kKQNtk3bkJoiAGyk3H5inXc1SCj94Kvn0RHTN4fhPa+poDMeysz
11c8gnyMNYUJOVOLAzoTbkhfhThg8h7pv0qQp8ohVlIkVTYuMPLSA/Y02mmWGaHPaXz2K0Vbute8
Iysxjo0ZPQ60XZeausaMaVtJsG2RMghsJNODXb8Ca8B/4+9X2v1xEXQMGDQjuRIMLPB/H+bLCxnh
8ZzlB7CenGx5akTChOMoi48bGFciOT2TR8vk9x7OlFhfZnZt/k1hSHUQ7MMAk8wEWkEI7OjvcZ6x
LR6Y28iCbh/W9D5ye67BhDYmBYxKQylfEdoVbSUkQLae0GDD8W2Ar/v6henIft3d4MgxvE0qAjMa
gxsC9oatxyhBuof+TUrIU06kf6TsgnbDCwXcQGhiJ/t7yaZGA6ycRgfATh6po/KQGPIE86KKCjS7
PSq9fPonGM69QonGzzepGlANIGKoGADmJO7g7Qg2ZD7R2xMshGSFl2Rllyo9D8jD2Py3uKbhP84+
3jkk84hSRt7BHHHLr5cdfjt+IrRUVEUlaMuVPVofEnKIPY+f9S+rttgtfvdipbB3ATzwoWQBX2xK
//nspAOkQU9eqLXZJ51SjhME/E+ekTxtKvsK8x2SOktMwgeUibh5jKfyu2TVu3wSdhjcjtflB7Bu
DPjM60sifpwVjF9pH7O6nKbqKHmk74pthPywyI4A+HCgHdIm59BK/QQL/ql4laHcqwaUTEjhy1X3
fjZHTD7hnJFQaPsNEdx/EmZVoqDOx0HtcCyZgZ0wXLyhD+Tji8ZfVnk33qnvLszC8l1iPsX7f1Vt
wCRKJE/d+z8SCH5bLSCj3dHtnt7s8J6dUIcIgsX84jB0T5VF9mQluYkrzdEenFPDQAdFJ8kuI7ui
6EO4F9wndHhGVNcEq/VPs0WNpiZmUew1LdlaYXaCLCFlfiIDP1bJZTNGuY7uf1lBjtJtsYgA7GCa
RUPqhTpRB3WVOuuMt0cwIi11RnOaluMjB3CFPr2iEUdGDlUKoTp3d/jkmY1QLBoRJFDZlWILyhYN
caWFtjenuCdmBg4ztMDDM8EFHFHeTmdXHCpttnUZHfJVJxQt3WNKMZMSzJ1Sxnh229Mjk3U936a2
M7J2M1dJUiLLSWUGhtERwwsUP4iIH3gqShgfOCnJFIdA5RjgNEymLBpQHQ6Q8DkEjwXeBwdyxB80
DpKChSkQqgKgLvfdvffLH0TGRF2N7A4tRJQhgwSQBVINOEoI4JKkBgcAmFkNe4c2OZMXSoZHya4o
zIaEZszjKNpV7kOzAymYgwCkSAklF8ZHEB9zv4Y6mDNYBoGBmQ2kqY2JSEumA4DAD8cG0F/k/T5b
o/3nrPce8qFwFGqFfLS+49HmLwPf5hoF+UVN4PcGiV0767ECRYxlMDBxNQHeCjxIg6Ds40H8Wp1c
vxcHOfKEGRPZilTQEBEsQgf1QDWQCZnp1wy6r9GjJtFTTKKEsAUCjT0YAwgqQPheMV9NfV9kxnDx
eLInUdwuAO3YKfTZ7IHBeJPyQ7UXtTOsvi/VW3uV5xEmR8izYDmOx4DD8vzgDHpjEcrMn6ioyhD1
MhMu5SlcfXYIRa2OmU0RAZDmshccGA9bdAaakWg2xoOaafFAS0QN3p/H9n6NJGsVwDth1JSwqXeZ
hEvrskcihHCfRH2Q66JYHfYzOgMM1YO+GHYRVUOj4DXd6tYdJ64tlEVRNjwW0L0TDpw7GEs4JUiY
BQx9x/vjX1Vty/m+yv+HbuOPD8ACVAiyMae+DIsMcYoU9EI+cCfRAFrMitzkhkJ0CAOJXUnMsQ+m
ADJDmD5YXgk5IeCNQ6PWSclx0K9HIHRhO27JNQRUV2oIMOMevFHqhKBpgPmQlTWgMO34AcEGDKVY
YYQ3cWwCfTiCqGPQMyFYHhEO/DqfvikWgX+zDED+6wf0pwIHrNp+wBRG4IorCKAAHV8KVQAvf3e+
yvbns9Y9p+HsBxDylWhPOcJyDJECi2ja0UKWxaqNW2U0iFjFaxUGpDaLGrJWltIlKii2NiktGqpg
ImkwU7lE7b7CqCZBo8UO97ygnDByfCHNmPMX6PE0hlRXUUeJe3AaTTXRuqFpkZVysl4UOqnDGzRH
EfZeC4uI6IyAo4RD99I9/dSRBQRlLYNGQrUklK1QpwF+3XZu3rXaghPaTgPXEI0nGg8FBD5mRU96
QURQQ1AT0eeh+T6tVPTDhoB5PmlVCglSpEhIYoGtsBaf0+t1u47O7MItjH0+pEEEh4dLxyBR3Hl1
RCpULh5jlF8j0TQmnVIXcOh6w5Jbl2lb4Z6UfvmLhIdB/YwS9IAHgoB4phBBKeUUsQQVBGSBUttE
FlpRqyTTJPJvadsIbQbZ4KHBiHs4OwQtmLAbSAMEWip6JWghjaWuhhsYvrf1Wyg9oIvr50KfldjV
U3Gx7uQQR83ofw+kPS5fGPiR9BAA/rgDl4YhyWlApWgKaKFMkPVLrsDX8x9Afc9DnpUoRVSP80XR
sQoiNyqPdhjptuOBJxo0/bIpSu0sI3LpL8cfNrzHWH5PoNdpChtO3BMCVyXjzQDiUE4BQMgCfObC
X6oD5Qgh4uVg0yCNkpEzTz53cyZQWopSwPOZ3fW9UPmOb8UiagdYDoL1UAJQFdHqjBCRTIQAPylg
k4fRwYib2g8yqmIhiSj+kgLn8pBsY56+JYgxHpA7JpE0UQq4IkIiZKf2AORBi/4kXjX1/qTlxOxO
7tTrSJYH4+x055+oFTi/SZ2HKsUFgo+GjeXyOvLob9McdetJKlIRMhEQeohyYJiyyhF7DrwxZo7I
oNjD6tszCHI8h1sIU2PxqF6oHmo3Bm3uiQgcOiMdxG9sxBb4IcXn0+TZtM8H4XWuecMbQ9n8G8tx
ueGOS2OTxIx3hVuQqf0eCh1hBAhufSRRNg8y/QBtv6oeGZneAklWyj1YsuYPW3H+YQOB3YToXALS
yCPpDntZmC+bMoRe1VeEEE4/0hCwr579SBRPL+ykTGJnV3EJ/6Y/Vfrv+I2IQ0G54g1GgkQyBdSt
AFCAUPWZADwmAcjOx3bHCB44hMYPH1wiSxga0RD1hSlSSJLbpo/22WwEjco7kGjuMcQ69cE1KvO8
UMhGSUAHCBE1AjShqpUMhClEOZU7ZHZLywKNUaj/fzgcQKlF+x+a1llK6J5tsHTrV9t3F7CS8xn3
BBxplyxaKIyD4HhZ+SRDcN2yIIZgregqf64J4qMRO9KGjLLdQJUjBCQElUDUDy1+AY9oB/cQNVNj
5K3kgQfC/SfOaPnQcwAgImUT8MCMEVSIfSqGhGgSJGQpjvD8Af7Ips09p1efyQHn/xwScDRIwqQK
hKKMHoH7oRegsosgGxvO1IJqaXFgU1AUiIcKiSB9r99iq6D0nSv1O4By90GSVpIzDRAQf0coYDAJ
vLQ7OhzG2AfQq+kIHZF/UDD2H9I2raaugocML/c8KSP9zTsCpFP6pEhjRRbqAwohTxO5Bshaku7r
TDEDXymAkkLLlUbCeghB3QTkC9QqchEg/hFdTM2SuEK40yiqq7xzRPQQeIesOsoGEn5KB4wej+Wy
4W+4w2sMS57iNNi+Jwdp9O1jRhPJgDo1is1CUvJLko4QZAMiSA88G+vBAUVDRBwX3ZbdEnyPdXTa
sOiBPbpyH5WvoMta8nhPwtPPsEAhu3CJN470k7BP1fKKB/CfopFe6QA+/MFAkTLoA/W1fh8QPBii
LGhKgvg5p5B6Hhg+jIPqnuw4C2Cd880Ma30C4NgPHOK7DtD+i9neYSh9MQsUubQpNipSLW62rYWK
BCSIIiEYpFYWBUlXyc6VWa7cZZyH+pMWL9XUx6G5L39+33XGrNwdx0NVfb6Hn4Y0bLDJjBqY5Sa0
rQwsHwMC9FPFRE6Hvd4+TKsNvRGMYnHXf+XoSPGb99I3b0IR7ei0tpruUUGKPY8qGj0rF/leVyHZ
1Z2lY25GEzPF3V4dzJ10ao38puhE0T5dcrNLkp9O2PX5/2UWdya3RvqBCf3JLs+W1pmt57+/daWY
zuz6ZbAq4isr6+TW7I9/XOqVrGqejnVDaDTYMZ4yrtQaYwYMFKcrm9WZmXA46aRTV6HqtVTslwo+
CYawuCYurLDDKrmSwyB7jYBcYC4gaJkUOzNSIP0D6RPID0+VCkiIRhKIr2G159ATAzKbBH0IfE6R
/453K/pM+6SC8ILCA1B95/Po3HmnqbHun1BNsm0hopoaWZI2mbJiplNmqWNlpkkqRJWLSCiW0k1G
yYMKCgpJvuRmVvuCByWvNNOKX9MC2fxPBveYEm0gqk7GWC6B26xtDjnbXcdz7kXjf4zoHbAHgnrZ
whw/y2fsx2nTZv8+9IRCEkAXECGSCdMx086A7iNkW8TuLCNNwX753bShlHREjas+NM9kJFH5N9I2
gfDqYGjunG74HgSes9qEH3qQSEnO+Z9QiQo9lFIUgRC+E/t09L8UMc8jyichTKuKtttvsriYmDGx
tpkhEhBAfSnqEO02fymgIiIkRMJBMCEYZEDIE1CJ4SpolNkEEHQUfE2Mmx9xJqlgXSdD82DdHQQR
Mp9FqHxw/rA8QB9rwQc4iwH7TsPQ4H4jyItppA2VLInonKH9s0rsbhlUSIiRB24oYyxAUgEYQQGI
BiyqTCynXDzjnf2/61mBGTJk01iFRsNyYqkSAZAUP2ki5CUGicCS2i1lJNlNalkpklUmUk1pVkjW
Lao21b3XKkgkgL9ZuFIj3fIc911Ge6/XkmCH0GpBBL2Bhkj3v2ekPxmGLM8nqsHFvH8an7JKIolA
MU09/5II88Uwiki1/A5NMzFRAazaam0VbGGqNNaUMy2lKglikKYKmPP0fsMsaH1A/RHMAfw7dGzW
iIFKE/SHzezWJ7GGA8gzQpyQKkMIiH5uz+8T7IYmI0IGjn9MqDQLSgp2AOeqk+YAwMjvO1BHnTQ0
QRSVUNKdA6nbD2BhGYdgqWJRoVMn0iRgy27qi7iX8BizIkjkWQKVKwNha03EC0Iyq81VkGLu1MF4
wRqINLFPK+oFDI4pgIKe6IocyVQ7ej2ifVVfTZVx49yCPzqMqvgi+ngfEl7351J4vU6sMKgiKgMz
BDMa0a0quBrL7fMAQ6PxP2RTfAsyj03VVVUfmoqRnzYbNf5hHdap/rxXeQaAoZzsLROpAOs9J2bN
kkJAwjYB6CAvhFD3DtE2BRfYwjFAe1EPwYgdiaVVChD+AThCAgSEqCR4kOIABNEeKeZ6A95mH5w/
UHGkd4FwYRBYDX7Gj+YPm+VzkcdcATzm2Gj0+i3F3WArBsk3Hcsp1DZ87wQ0WTSYQY4Y0SmNIU1g
rS6R5M+JMl0Z7qASg3kT6pCj4+rwxr5D2G/+bDINhBBDJhBQZmqT/luuivZQX03e7b9xem1vDJLe
Jkko/qqR1xgFGSRTw4yoTjiGi5D7GtVCmQLpWrNXBWjE0k+odca2UYRTqTBOcTZEaky3rNJkRA72
c6TJEoOpBzhhE6Ywo3GkBPEgPATPztIdQSOQRxTbmlNTveq9h/zU3U+MLMqJkZg6Axhl3HXFdxVV
GWAnVQ4NaDuKROgQInaYYvEFNAbkLRDigZD0zTVqw50hkRcaH6tzDwQEwskNIHQIEcTYuAQwylbb
uqayq3N0sm2wpFiAJQwgAgkElAGA4VP8uxU2/2m+oZQQmRyGMHloTgPoeXlE3KpmKgEAN6IQz8de
/zFea6q7zEDgKiGbkTPwDcBzSJw6muMQ3DxFoczeB8p2oaB9ocDwVPOJhsyyjKjrJSaKfEOEPO7i
HUbukk6slBMiWWZUkBhAKwAvB7XQrr0134AHuPyh6VkeA4OsMscL/QMtmkygZqoWz8JeBMrdJZ8e
WmDtmRAk0hGVjWz1RB3ROzvO/aDs6xgwzQdIsMzHIHUphVzmflzYQ6YK3mVPBiJlDuXrpNEt3EMS
oozVrE2uziANmsz0SH8hfTbZZGxanVGzYyaZdjGPUTZBmDGEqzFpHTlUUKNX7BsMRVtmqISyJsLZ
cBqMqyXoTvDaaD5LmNhB2SmucHlIDUEn0+VimQONqAbjVaYEqb/LWfVVVrl1Zdt3RbSVLY3xulue
i4ANlNQLFCKmgd8RHQQOz499ZmACHN5UlgIKjsuxyEhUnDlPr+IL2L4MbO0Jnioh4iwBFCKCCFC6
qm6qwkSGOy0EaRAihCjyrg3iVlIxpn8xS6Pi48cg7IsWwY9a42EETZTjtxyOZ3wJIeczDrH+4Jfz
IKnYilpp32p2UZo7azPm7ULrqwaBJ5S/dEza0xobsBIEMPVN7t7JOVPiMtyAwB3gYTCSOQAZHLi+
tD2idfX0IiOzTIxJkBYZhmBWGaw4MygomSgWCCKf+zajokeGkwXHl1HPhmD3z0UdLBXnjDa7xLou
SBOhBFEIcXoG5QTfkJvynaHoLNQQFxxtNXCB6m65uItrbp06CRti1Gxr024025dm7pylq9FsYUUO
Ey88k7ck44clBGYaBCDM5NmtO2SQIU0RhIygRNjv+wirzCx2KOoCE0eupEoAYMHVaqFFIs7XQSs3
iEGjv8p2khSZtvZGPx9ZrnCc4LkfglEERRn5OvQtfgirG6GWdjs200pJh4gWMraDcbQ9Z6CbHKvI
nxNWHswmMAckX16TE9Qr7S+vw8vMxfXLCHj6F9ObpfaUJLAQhFFoOhmHTy/n4lov5wV3n4JsDRPw
DDBBPw9sASUF/G9J0zJJ/B+Xd7AeyBtCSsDWaR+z8JKP4JuWfogf9iH+k/yg/+s/3QJE3cQ2/Hif
Xf8n6cnq66SmdtmWziBcCQKiyZMCIkGYqYyTM5YAH2APxQtmzXwVFgkFG93tD2XP1j9HsInLTMat
ocPqHM9wcz6/7URP6O8ewIQiReHDRLhyQxOOYQQR93lPnkH337enTsElxNF4HTlS0cB98oAcBvsI
38B/Bfst+kJDuxbgpLTUrewdkX3GnPvBo3ZP6Nxug2yXaKVRChYofBX4PMW8cDmhdJRg3eFNoSEg
H3gba0h/lBnZmcqCIvNAPAghREtIESCIH0a/QAEcOFkAqCGizB+HpTKo7gZ/nxiJ9VhCt/X4L1M4
OYet7dYEw+YP8QQPShfDAql9DDEmStJEoBQwzNMQTEMkHZIpQBtQDgxekserD0eeeXb8khVUj+r3
eXmvm/sJYQPejL8UPGPkkT3qSaJiPRmMBKIbNNYD8rDqu8d94ep6U2WEzyxZnJBrGLf592XzGsBN
VPFH9wdN2XQo4yVtBxo7I0oGNau8O5COnBsSb0kiOog8HaagMByfqQ67VdxO02i2IGj6QghJ7cF5
1WsL2b81aR9VQIfxgShBTWjBFD2jAV2/ZH84QxLhI54Sq1wWJfCNGfoXxTC+EQRfADnQh98AUBg3
8TlRX+YqtHh6/QFTPsIxQDV1XYQzSnaDHo7+p+ThB+Rk8qSRMIiT61iqdWBgChZCIagAH7yKBqbb
VA7N/54VYGUuGIUZu82F10z8o/ge9FDnzdogQhY9D9IAA9ygIZKL1ooZodh9oj4HehSsTADHYaMJ
dq2e/zht6BKJHi3EOYgaePN3bpygqIVjB23t6w1bIG4czNeCII+Zvix4kcqED+05dv8HKhGy3Kq9
crSjS0U7VHGLX8CfhkNu86BA9UIj43njAWlVOYR8DGAtMFJvIaWd7AoIbzt+hx6mcaDeZAzHG0TD
jTtONvTSkQp6C3vDJcAcTEn97aPp8B4HKi+6oN7na/aYPyRHg6QZ15XmsGjBZAiY+og1D9+C2NHZ
wX7Oj9Pr/TXLuzNLEsXxIdppnE1XEUdhoY4AatDAWwhziVsGBMZgOStpgwHgQO71bg9CwH3b3rYS
9wCrdlEX2jmHsHhDVIYqHQp6oiezqfsZBS8xYH93uORfWUQeeVTFJlBy+iH0+GLnGUYrn91GsH2R
RMcmgsuVCABkuEKmSFlnN92ajQOvHeddjsk9HmfrLPDCim1bgi+pHj25p3ux0oL1S/hCk93Wairp
XQhh0maxXtNKhl2whAm0om103HPILNiViRuJhqhB6QQI7WdPbMBtjLcpvyKlGe5AaOo2Kixpl0UU
hHs3BILFeMbUyFspUoVCDgBh7ttMw0aMDSGrKqhNulI0FEZCRETUUTibcBCman9P5wp21jYD333R
K465e/eud8OqprpP37ZJiPA86RSKESBuOXgCmVkEuHlRTLSpEI848C0XMMcTJcgwkoZF+h21oyqj
P0+SUJhfo92IHRAb2bdmFZF/nzT2/qzR+uT+REhfF+xPoCChElJomQL8SBvviIdKD84vzx4kGe2R
OsBkKhkjT9TGZjotWLr173s2rzzZJS26gJduup1OQtVzWorRaoiyBDIACJRiQBMJAyUTIXFxDACo
qxMDofn7k+TiTk/w9e1QOtXRKeMKj1SmXIG+HWIl95FD0NoB78ylkQDznZ0qj5Bu0ly0nSClpYTq
I3ZTF7BEs3mY9RhlClVhtDBzDyJGLRRePloOohInmEnwnDdldVO0MCFM1H+bjkguFgMgHTMtACJt
NT3DA8AOfUVkc50EgZkP7KyEeYkm/WpEC0TA9+ie887xdhTDmE7OQh1b3/CaFkRTCg+HXrEIEHmE
dp7HA3LIeKxJK96Qfxk6BImiHiD15mmzsoT2ExUAUUhoA9BbE2KZHgKfvfDydpyxRyCuRiSUnXR6
/ZLb9hW/daNXBYMFzHDAaUi/wuOQDxQ+YZOwKJoEVk1H6fqTLVM0SB4UEA1q1LoRiJVIoczV/Ido
+vBSkDwcokAylxr75IPLG0P6WlQ7UGkh6gV9DReKDPlfuOyIpm8Dn2aBNXIC0R6PgPw8uzUe2pJv
qMKX+vFKIXKClRQ34kP6d6u209HQ9ul8Nhk+kjdnlAcGAiMdFsMLlwB8M1QO9YmpDSbT31Y+J4HO
4k3YZ1Ju6GmZ4P3zZj7EJIbSEgZqQ8sCAGQjAGNU+hsfAey7Xq98vUC32416enMTkPmJLCmDMrRe
jI7swpta/lmjBDMGUm1QNiKhshwWMPYyl5I5MEn6EUz0DP+4EwKEBiag0YUEERoESgJcCySymUFl
VLoppIjQhllDhaKKBqy0IshRSqoNN0iBbUd9LLuyoSshgXGYPialGRFal2mACCEQ3b7dDtY7EleJ
l85I8IuPR5Z9GjWiMMlkQ7SsouBoajdzKiHTdVMVXBDu7+sSK90hwxpwjWmbsuiDAgNGx4WP9Qyr
n7KWmdmbZqxYWhFxW6gKPDyMEDi7wuRdQ58fPDY4yeIQsHGeEwo2ELyohFXIDSRGquK2tSAUA1iK
Wqq3C2NZCDt6coGXIwxm7LdTFaKUq8QrYkriq0K6QwLbdTk1FNaMxiEzdr07ef/ug59Ro6mwDpCC
HXR93jsBV4C3Bxjn6Bbb+eqJDZja/wkCmU4qgZpBPMJ16YhMcpUVCVKTRUgkQYF0pbGOq+fWjQa0
MjEHkyIlfh704882FDUEQmhN96CCAiCCFZy1KwcEUFzQOgSdcYaY7DnQneSD3sxlwZTZlVn62T+4
4DSaTbGDIwbluYyMKHbCcLeX/lfeoabE4Q1n43sRg7oDXmUjeXcdCaBrDEdDUpgEgbKVtIYfMKoW
rIIMYJPZ4YHYbMCGIqXk5hc1hP3YYwA2UqzpleMay5Fh6/BHiZdTv0dJJNpRCZ9lhzyWUk8i6fdH
p6OuRLfOKu4THLqU9ZasshKrLLYYUadF00ZDbu8DKwGS7KAKg8Liwv3oQiRv5V2DbcINgb7Pz2H5
0JUhKqLUEWBBmFsihZF+n+/Be8NDS6qkwajaFqmqEMgyeqdY+rh45TAxDkCAxQAwgMiViaEs5TtI
h+noawNSVFjPK2yyoFMJBYRECyCtEECyZn1+4+acs6Ljv6ZXUan+lVQkLxmzDERK1iRHvx/CVLTx
jG1EgbDAZxRgjz+8OOMA6cfovjPvevrYo+/0Z95swIl5xSFonySdLy2egYHN5SZRwkb8lCwlQk6J
G2MGVZLd5+7JcWoUzKgxtssyV9MHWtWqEoh4VKokNCSbJORX0Z6xSI7MbMdgno7C2xtJ/EOHnfr1
EW3uHntFMWS/Q/ZnarcRrXT57NItqpy73EKqsmywZ7N4Yfxk1sZF7Rs99tkimDDhpFZA4ceihtcc
ZxAzrNJJEJbgWXVJCFlBUlwD6icdNW6Keo53Z2xkDGXl36ZGO8ieaCz6+Wf5suSH9dYUJ6MD5rRN
5CT8s2+RFD6wH3IL0PWdx/3e8PQ7G5pAtHtKSKK/QChCjr9J9R3trCPqLesNkRACNoAx7VCtIBB6
RBEUXJpIdgcjmI6U1kEDiMJMKTJgkkJtp3yZwoHIvM7xALBneHnfGr5AS1bptr4ovLPK823mu3Li
TSb5Tpa83s3xckvEaHVuR1ZGQqMAE6+5MVMlKU1tcUJjGAMISkCYEmIkBgjQhhgZpIBsc/KqUkSE
qI1olNZjSzfKlpVW9iIAUuUnMZfNwUDlQeB/+Tipwic8cGjWxMTt0qrwEiCxd+GCxUA5jlMEKpAx
FMUQKS+Afyqn38jtSJBekaFJ2ZjuxVmTnWjRRAZYcq/IiRCwHKbVezFQT7YIIbKB8U+gNE7FyTgb
BwydY6yH0d1q/7gAgj9XBo7TIqjxKWiGpj56ojbIF27cfhdpKji/OPYHiwlHpOe+kR0ItloVGEA9
tfXfYOeLGoJuv2XibZVlDs+RQw0c/PgENGhPEjoQ6OpEchjQVBwbDP13QtLDC47GOYEbc38iWULV
ppMU0p//fjkJZ0rSN5+c5RXhnlhR8kfEjtJNqDYSsouJ2F2xJsGEzWS1RsWtjbKl0tcrFsmxqpm5
ajWxaogk2wTIPICagwswCJENA9fdv0mwuidNKdD4ZQUawTGdfCZ0AKnI8Rt3LgDCB9RAhMxKES58
xYqRIZFB64kIFkANNBKCwR3UCY32vrippkoewNwYjFTcAgmyEMibtxAi+OAuSJEcQUr93ku2Ig0O
YUyHZGuIRww5/fsQN6xFdyhQhrUmNgDzV6Yxk+n1Ytr169msIJbGV9VRy9PBtEgvJA+yxMyCSBui
vsgHjETf+o326iJEw6UdF7mg/USEZIyK978o80UqEiBov3jms9aHsM3sdDccuEoJPFYmcAHC0u8c
G4dyY0k+R27bmJTIgncLpHQmEJASwETGGJciDi4na9uC6QAeZElOVweSNuAxgXH6LqHttuzVtdt3
YxJuGEDRz0MjMWtYHKmpznrwa+prXniGi1CbS52qXC9EG2wuAOZ5DY1QQp0bEIo7CHZNi2Q4raEr
NhOtuMplpLdXuLV72ruiRWDROAY1GQ2GsEwJWmlVZF0rtONJkTeWIKF4OgbAwvOGO7szXKJNFF63
rvXROOd3Kg7qF0bnTu4rW1uxiMXKO4OcptYEDiFR4LaMcdV1V0WaurnUrVWa9ZKDmVFzsIQUEZFU
bB3DtoTRG9C4BBMyQoETpMDEXQkAaJNZD1gjG+DYPLyFkseBgr8eClJ9MNAhoEQIQRN8QujoGtmg
BxOqLrAldQTYAYKQokIqIZW0FEWEE8BsCwLLbAH8BgMu5n4Qezsc/quw5JJAOLjwc7xMyQWASIQY
VE7Z2GiczrDm31ECKgUIPZAZ2YIdiQYMUAbb956+buFTp+PNFJjkqOeiTn0NBGQ4U8xvWSSN1sPC
VchdSuKcQV0emI7h7CR/0QAaWOMco7YQqD+qBIgXBXAH2UoIcMFAdW7gWpHhnGjsk2oyGxm0mHl6
aC2THJIDy95a0yb50urIEBPGc1i0iZABO/4fVjfL3PXY3jp2jIbqxlkUxA0qYCUo1XiiCtoFEIe7
WeZP9oazkxrgTgE3rIj0J2/Kli7tztvSuCDoYzp7rSkjJg5Y+7+A0BbSps9zi6s0XxaVjEDZBLbd
1vr+h226AA1GE5IFA7XSnBCY9qxkC6gE0QCOkFIQAxnCFUgCRiAN4mKMEuVLSBCGv5kA+pV1gnbQ
yGyFSFvu+b6vkwjPv880ZUb+c/S2e7M0bTrDuZRgiHZhbTDkh7Xv7h0EoIf65QiQUJJQCJEQkWkw
P4J4OFEV61060OwU8E1f0ZdwpoJMYYznB/dHSbCvBWckBDqMWWgVgEYcEJstpjCUDVbCJrA7Qoaw
Yv5Q9Y2P7eeL8vajgB8jEwMKQypCgFKINQRIgB7hslPaA+1PyzhIdxnB9o/66cj/R/PhhgY0cLoY
DI6IHIyuEh/TZoqBaFoi0NXuYmP9LI3sC3GhhgnUAxopmoGUYwrf6VCiUOMHTocyYKV1gyCeyEj1
pzv4rcP6D57RkaVktWECyEE4oTu4hr/JZRCUhEnnHXVmVJ2H93b27oe1gOU+53S2KWQSsY3Cwh7Z
HynRovX02a1DMSKuI9UAW/22dMDTNvA021TAoGqgKOTjEP/hed2pDta8OdBpzr3xO8W+pE+8UpIT
6/jC0Ew0qeZir8gfW/0oKh6vSex+CJ7zdoHfYPLzEf6rFK3lQzAT/dAQiQotROE4ici27iU82qzY
XYZmgNVrA0B1A+wpQu6u+yjAmq/69+Yf27+KHADq+hQCl48o1SihyBCkE2BDQxzGjYipaRHfBQuE
7SioGVFBhVFgRoLRpzoSm0CVhx6J7LAgGYhmBkgBzyRA0FIlwD5vnX5yBSFNUI9gw5mIZD7pOsmX
YEom5TYR6cclgd84uZgGhjVwE7lfVvQmTjwrZZBdbZ07XaMCfG35/0Y2GNOw5kXshMilcgMlewZF
cukBt+B+gk3DQbRHCU4JYoJiQpTIxJ+hcXEWSA+f4V+DxfT++U6gIqBzDpkH8OtdIHbOnV0E2cYq
hggZhzsxvAe0hkLsQEqGPjmAfGI556RWEGPVclVfXAhnoRJUF4kOAQsSBICqGfBu1Be4f804+aD9
sQJBMzMAKH1O8iQDURQ3jh9a/1wAQc/kzZB0g+6BUW/20tp0OQHwoeom7usAaJ8cF+pC4jzJrIa9
K+ngVdjwdONdQWQJG1PdkNE9nFuP8J8xfa8zuQnSTBhvEsjZ6n+T0PNqRbvhgNDxtOUSIEJ6O4MI
YCnSNkH8CU7C9KP6Py94UU0VR+zXm/REIDGIKRQZ1uvhCv0hdVBTofDLIsT1FoZltAcQEMLaMTIV
WKY0gfj3gaD9mBc5o908AVkmtYsxRHQ+7iOIQXWPKMjdvDw8EfLL+cT0lgwyQMJdILtRsbosUO1f
gVCI1YbsYj1jDuDRaDzXcMPRj4xhAp0cyhWx2iEY0IpWhFRqyAMYG+b7dKCI4Ryri2U/IW7rFRO8
YQcZM+rWzw+HTlUE6+pOJA4CZQl9ZKuwZRlkCJE0Mnem64D5Dpzt7F1oiGdnkO/1Ozh0fTEJCo4c
2vgfU5/hMWUUF/DxTdoOWONzIh2IMX33WP8S/U3s9I9OlGT6goPbxHVjJId1KvffmgUPQLx4AvMs
yqPtRPn90QffCfPoHKdGhsApWtEBph6bw59j7IAoI7eGSMKUaSiAUQcW9N2BZr9NWYipIQDqXXOB
sRaE639w+l5rtj7NB2XnKbZyViTJCgzdgcbG41kjY5Q3ChbtpBSBt6wFBvYcuAj4zkJ8GBTgBdOI
cVEEIg8zluB5PrB/5DF7mzI3LqU7zAFdsSRIMA741KJYngb2DoIUfAUhmI9jD4xUGoIRyNzxDrHy
+5BICvwVUNc+h+k+R3QT0460T1B7uI8iyXHAG4IecSuuwGJGYNEfny8T00e833eC0qvR9QNkZAYQ
RJGAihZtXHBSEIFsC7KYQOp0H8LDgR61DxB5qJBWgj3fL3HF11jVZQDPeKkOb2dxUBu2MKKpU5UC
BqWFIl3aL/afxMZQgCStUSN4mDmOPlO0qgzJj8ekzcWi9G6RulREHYm8UQyYWbnVGn9rSA4pi/Ni
kO5ocrxhZJG/KfaR+F887pcak/FVyvCyl6+Djr43Eh/tNUePTjTZnSwiTVk40CV/9P9IkyRNcxZR
3xMJPaY5qbBtBuKcITiEmZ4IUOcQ2m94TlTejGeUarKf06+7aPoyZGukoG2qbZxhGxrPLVH22s/V
fx0Lpl6+gzdoCLcnxO6S5/Yl6y2xWNDyT27MDZ/CTF1OJKhmC4RjW80qcBKinwROxnrFGAVHALvZ
AYTmQxkmhkZIhEkx0CDggpJ7u+4T8uqe9x5Wie/WjT7Ddg+DBzDFySuS+E9IS4LVVlOGKEkKjJIP
Z3GZn66PA6barg3oYeBlfh+H32XtqfDx62etI2109B8RJEatokUD4cYKmCWDL9QDmvY98xG2GJhx
izBqX79SkQyq5YvgsXTqlFU0oQIicyGJMsipMsXof6DXiXHbezr2nA2WP7A612i2k2DTOpKYEW4t
0Z7wsl0ZLdRPtQcm539/QP5qH4smpfC5UqW7kvMb8t5oK+O8DOPiBnZMqBwNpBe4YKpxCIuoJBEz
ICgQBgnE7QPR0t6CksgX5OATre4duJhPtASao/jPft9082OPceJ18HzDOGT9M4mRwfmzUaNaNvfB
E0CP2+yQr8t/0vga84xuf2y6p2Es3VMu05J/k5ZWzHpUnPGVccvDKUTMZYZLgN+XSDHjGPBxMbPO
d0bK8tUuC3NAIJcdVVT9F4mhKEmRTfHBRZnkHgd5CLPoaEqXMikTsKRQphnU37EtwyfNCt8551z/
uozUWiuhuHL3R6j0KsCx1gwTu7R3eiVUZIdPV912eSrXCZKwki8DvgHagN0sPd5sbXHZp6rFFU+B
TQzrKq+nDXr29r01zoiG0yFUcGy3upW73hjMHK3RUI7I22NDoZQyNVHNu6ZLc6JTcgxllR3qsvT5
sAmYlisqFzZZgVdhZSfr9O7L2A7fu7+YDymdOmKEn4Y5Tpm9UBVzNgQOCaA2jh6+hgYKHujp9fuV
Y+J1+F5H2Aj/ZYe1N1hSk+zEMj6UV+AEFUPEidz7+v7YQiI/2gJgchQNEKiChBDbTwuBRWBSZJk+
ovANjGKmZFTvlHFWjQAK7u/heVk/x2yln6UdRoZ20Q4cof3KEGfAQGpISKzFQT8NkPOHM5wognVm
HLAauJJRMj6aTNwJ9AeObqndRkBYhyNaT1qmDmYO6k91nbqVKIzIq5CU80tCoDwNyLWvt9Vucdet
KMtOya6kcE+LmhQ3PckkjYjNEvK+Ev1Q465AaVyEpLRzSjqCP4QbQnNJaDjyEuWbMbWTMYExpgIP
V2uzAeSBpHMPHBxxhagdWqNDVXW0KJJkUSu+y8bD6CAMj6GVE9B+6ikLE4cmD/EubrsJD5PuQPJ4
RIO6rM8dw6cB1ud8tQ9XbdD1EGt+P4y8ZUSdk0Am7qeRbuQ7RqFbH9+NU/dX4VKlVIlkis5UCHkf
nC0D4wH0s5UifcQiaZ0SgFIlLpFCoSISCZi6LkfYU/iqwwjsW1SwImE9IJXg8XZ3D1kDMX2P1MOx
z6ZZezkRSVJdIT8qJn2kiefGNC4eJgRAB2uiHHdcLKZaP5n/ws8ZA9MAWvTHf4XQWJhfK+iPqqiq
Cwkz0hmotIOcnj6CIwu7pJHEJW9HmnFYAWbUdqp08vjEgu5UJgAQAffgAZ4EYyyQxSBv1vZAfUSC
9biVzzQ6HBZdSwsRO0rGRo4kjkA0h+I7M5jqF8wAfkozDJacsmgou0DFIT5ubv5Nn6/o18m2jL7p
3W81HkeKSJBSBSUCUSCB6UQOEN4ht4HQsxkTzNogZhtQGpCSJE8CkqOzJTRsbBpXBWwr4TPzDRjq
hf8UjpzYxedSkNpIwqhjYyokTGxSFsuVZYKZvRoImcDDYQbfVOmDiFatDt1renQsELopCWIUWhcM
MYDrIqcoI36ZTEQDawILscAqKTTMwREZmLYHLJyujeajejhmfdqs+WpsFntaKQkIj4zbCnGhsQhm
iSnALnnN6zkOBAyIh0EvdCESFLITV0khtA1SguXEFMBCFpi0hgivskTGI4c3Zhkdj8ioFvQBYAqb
WUmb79m91eWEpr112yXMWP5o4zxrLsmRJJzAQe/HPZMouEImULCIjqBMWIUiUopCtlMsorvt16Z0
EHj27GKo2U2pI+zg20AeB5rSJEonpjNWtGLzJjBERKElsJJ8XETWlrLZi7d3Oo2fnd2ZoJYKU9K3
NI1pByO2EZtTfB29YeN+Fsqe1COLr95f5Q1oMAQisB37UiWfpm5bDMgWla8dg8M7xPKNOz84571s
+wVw3PieA7zQc7SBJlx5xYH2gvV4m2o7lyEVzMaFFyKRznehVOfE4K3uxckpI0WHqIFNz0wOQ6cq
G44kQoyR4jUpQA4FgeMaw90cGb0szlG9uW5pkDdwgxiGwjIwQ2YZjQLDQwEFCGieutzoh4w45d8a
3WtxlMZxrJ1TEQ6mkdwGXMrqKegbEy0ns49fY89kZGEFc0WUNU+hItsYO1adpUVp161A1oiTaqjq
g44YBN5WnPHD2U0+QXIV5Zi7VhYmKnEGycqJsttxl1Wrctqm+WUilMDY0ikeTPGraSVNYMGiBxiS
WJAeEW4oFhVwtAg4sbTAkF0kvdAhqNNrMMsIyDiTU8yK9zKoPZeo9B8NdAogajp889u3AO2EeJOW
aQJ2Rw3Sd0R0MAKiBkHde2EHbIO5eJCgiR1PfCGoDe8XIydQLktIUGocg8L3Q6ilCgDiVBpRzqHY
29BkGpaH0jE6ZUoVMUoC+Cs0VI1hQ2+j4VXt50cI50MOkDakbHa4IShwEI4LJ3DjiELykJ+BDQjy
cw0UelT54JFxvAECQ0TBSp7Y2BB7MyNRWyqZFK1FixkkESEhD05ZQqoxMACgDsUdo/E29hwhtR/g
cowkAEUchij+71+k/mb5q99gFU+qJSJSf6TvBf8kD300EhC9xB5gfzh7IfPnyB3l0PSPcQYREBEi
Z4+AX3az6DVsTC3ebFnyfds2Yf85c10CpMYXxOpE4cDP5vqIn4hnCOINMomgiFSmgiU+IieuBWer
2HqQmVXx0m4Cx/34gmie8uBTNIPQ3QTwyAQnduPzO8V/ZPNzgnF5eu0h4o4ITl0jHbsxjcEhy3V3
kiGqjHdFOsLCnVkVKimNgxvx+TmGGNshVWJVgwUwHsiHb2eR1Yv4+2nPOskDu+6s4fq33R7djhgs
mJMiZ8q0Y+7rfuJtGQof/id25KO9+tXSX8EykPHtCEbTRBBuhIyAeSfWjmoA3zIbwj8waUURd0II
A+Cn2ZhgH9UhQI5H4ZdMofDkDNDwIetPUOR5vQFVVksuvRmDxidyHzCkD80Ar6GCJUSgQyEXb5+9
xngzPMCJPfNTGocIogPen9Gw9QsbJsOj0O3QDdDubnYDJeA0yYfhP1o9hXxdwzrWnrmdu3KPnzuQ
bSiL5PJhCpsYn1sw3X2pmASQROl0rTRmTcq2q6qzOKRz7DXWd715Cu/AwN1yglcSP3lXOvAUQ9hG
HB6/3715vaLrtGTtDRiQkE8WQIhkwhIDXp7zvumLZFR5tLcrKs3anKTOG0aJ8xZQDwIQO2iq4ksl
pdk7WrMVulTxgYPACAlec3xWj2CRO2tuy8JuyufvhgxMvTV2eYgahghVRPqgIWIeYwp0gFhRgJ3E
qcMg+AJgg9dBg/whA1CUqJSgxAuv7bWmlQDJ3/MfXxxG+Tkfy7zYZ/stDSbudGCFKGQgxA0decd/
nsKQGJADylUTmXco7tRyb0hjhAf1jJbGFDevSVHMqvymHqy7CxGkHkC/1HhnA62sA/A+AaUAe1EP
cxEWoogyqFf44iB+Ux4/UfhOm/5AXVHQELfoA+CgBZD3jEFK4iFGoicooPrlIXFT8Mk3bgXXe6Jy
fyjZtleVo8xp24Q30QKrX4/XUfiPxI69jB8kzTvPlD9EwWLWc3Uf1Xh0YOcfsKDkXU8w5g/4Sw/G
vWIAvIdDxRvB+CH74Co2j3In3/o1UfIfKAQUI0voCoZ2idPMd/3GKnD1OqD6QhFmTQFCLIknpQFR
pSGFVIkiGIUSiSBXr9+vdkbIZqbJKE4KCtI4JIR7IknnihlUsT6VUeT4d53dNY+8PUMCBEomiJ75
3HTwcgQ0HIeaojy5KwGFoBmdgh+My0chUvKAHQR5dOo0E18v2UKaJICi8QCKlCArAD60XSe1NIgQ
GxFE/vkRIlE0wGmIUj76iex05UXediTyPOzWFZJPIt2QYpwiTNY86REFFFUVUc9TlAz7Ied4BcBw
cDG2CBkSV4gaBNKKbY6nmCpoDonSvtfZctmaEqTImSftdurMKTIc6krEQAyiKQIgiTRNWNJrUurt
Vywpo1BBhh6BieibA0OlZNmjHBiwKwkU5EfNANa5Q/o8yORBB58gT1P2WDFsY+vx70O6IkA+STlo
NQ3hYkIert4jBZ2MJhAfJ5wf00J3Xnbyuml7vMmyJRqUlliKZSEbbXrq9vbtNCnlrqTM0egWu8/u
vr5UI+1x6gmGMIhGHCZLJzG0qmWE2pPrpjR5POgRlX8sodX/HwphQzWsk0sVJQXggmiTp7rLBzx4
4H9Ek23LYQMSBMH6swhkMwIz2jgV8f2Le8l/LyXGaDgzD4LG1bHWqLgQZ6PEGcGXJ4uxkZz6jWva
xmkf45hG2+6J/lmdXc6aErWssrbkJHYrpubbMDKcCs7zeNptKV4qwow1aV9r6YNWmDjwb287Y7J7
MLGGFGq8NFHXPhGCrR09Sysc/3PnmNgxcos7VUXhoIwJRb6MHJz6Sl2vKKS3yka8ouxsRcpiDkyR
LBOmJNNIbZiaBubc1Tm2zXdXTe6bAwYcvokJtjvsc2Xe7k6JcG1uOHT206RJHRIceITqBzkQQ8Jy
FEZ/QynkqTVCI2g5IVNSuM5lkikQQS1la71HGGLFCN8mGHnfWCLqQ5HhsqZavvWO/DGpynQ4EGHo
Z440IlvzRez6GeMQjzQG+OepwovGdUd8J4y1ws1Rr5azRKFXy79uj2xIo6fN94kP7Dx1rGyh9vmG
EhlDBkQiRiyxVKQYD5cIpngWbAxc3pnQaFXE6j2cC8OpUZmSGGNt3M5PfLhFMDcaYzPPIduGTmMi
kDZNzVMz5EL+yYinOeS4qQieqAykI9AZGqnfTLFymHCq3xcxCe7UGkZqeBjOEwsUEpE3y+tGAHaE
WBBnumXWo0Im+ooWrPizfZ6+ueVhpJHtuHWQR4nijzkgzT830YQz5Q/dV8RPPi+qs0wpDYIlIqqQ
8I/yjoK2KxJ8qijYFll9c8WXzCill+PjdmsN6+PlA92q5zVnBhO7teb13wWLYxjBKWQumKSwwiQ/
Mrsg3zpnAwdacA6upOhfVYcIhdUYItwS2twWJ1EWkz7D8ThSWj0Z651MqvnwtjL/z3Vq4+OCH+MR
AexwIQPxYfgMJkBphgPbKBlhjPNQYEwrwXbb5M3GiYlwgwQmUSeiZlg8gSSEV2tDY8p8iXujBQfm
E1yKH+z1lBwy7+z7lnLhTbplFQjm1HSp0oOH2RMlvzk0Gg7ZcJ4GvC1ZYGyR4YPMu+14DJytcu8+
ZNkH3ScJ3UX1B+/ifHTXjlmXCmDInPnykz7evl5M222VTh1ffd6RMwmoiOyIkyoqH51KbmIlDZVU
351X23Bp826gx0rfFcGlqrE6mEOQ9ZTdZaY6SW26MGQy4l9keVJRlsd03nq5lV9UJNw42hulahwZ
kX4QSZrpow8nRetePNbWVlDQK/tq0LRVc77SKK6n6OKH17+a+dzGUbtgDs2Z8enRpVbp9LaJnzaZ
1PONFPnIYN2J7KVMWHJjTcv1YtBRc3qXE636+MrSLdGhOsnHEPr4d3XZh0hlDiiTxHh6HPBsHDe2
9vYFuZGyA0hoDoYuayzQjpyC0mf0xFOaYmV83JtnMrKw17XMuJ/zUVVMg3qXuEr9VbOUbLNJmDPI
htvmYU5LFrVeIdbeEuDdlZNUJsxNiH4ZM7XZHJGHyYEQa2Q3e6BWGIVp8SSRkM+btBZ0UQzDx3V8
ePktZ93HFXHqMyDqKbQQ8/Ss6p6WJkywB/LWeduh2HDj4mDqla0/W9wnzv0rg3C7kav9qy39cn69
SpDK7+mXl+9Urmhpu6/LvZt/d+VL3uBuHddqE2VBsthWtNMKWtdXm02dIvlUw5bu7fWWNdrYLPnE
oRkNwRd4fKT44xaQqCG0Q10w6YE5kkoEcttLUsAc5N5XH3RSIlXBOpIdlBKS0IKKiQPLZEjxN0Wz
N2eUlNQzA0JmBzjEQz71lNSKBvpq6epvH5EQ8U9InZB8lzTBYQOwpptImXhZhDgkAi6LuW1sUo0y
DFCm4QwVh9KU1e8C7YUIhQuQ8hUIOQe8hoDXlzqVVVVVX2DwOYPf6AMgTlIoRiK0sOlg4DNnP5X6
v9v1cgf0x0n8BOVk59L9yh/Qty5AP+7D+ey2FEEyqRVVRB3ZQ6dEqbQNRFPMigQTs6iVnVJMjnmC
DRKl/VAh4xpQaQYJSgJSECCda+gQIIlwSywMVM/nbra//fnp3/APJAD5oNQGzetHwMMHiGfqEE/u
lDOmCvECa6JQ4YGsRRp5xMOgdQ1g3LGtmbAjU8IBIm5pYjlE54NHFTFHE2iirKCh6NFKhglSYYkh
RiG5YkcgcOSzTQaLjNhaMemsl3nAOpDMhlLKI4cZPr6e3yDM8RyvXhZ/XRlpH9mRUFxTUFbbUsVU
1GRNUSMZE9Umo0lAUD+Ux50XOROawojHcciKZoUeNAgFFilcf2u4aXk6dg5x1038z272kYpTe1P+
XboaQ+NbNwFwbA684dzYyWsdZiZA0nKRP1HJodSm8c/n1mxUPZ7L54R8odxffZUrx1eR+uu7fTAW
iKREo4R1E2ez05b62YZuPQscESRAH9k/9mzOzjD8oZ8ZX1L83zQaMPHz9QB00jRT7MKwIIyEIDw6
fE4lnyTedCaGwQIzcnEQMWnfD3Fehjf8jQh4yr4QprrmH+OcG88xjjDgIHR+f2YP7FH8JDzCGuhJ
iYRMUZUUZTJQsc623fbChsfIdlwet5MF1NA8E3wPqw3mtZn1aO0WSYkIsIBGJwNQpNPfGp7f5659
dG0uKa0eZ+sR9h4+1wvTxIbR88Pq14k8VRfQeg4B7PRQfYZQpTglgFQM6IFj3nlSpR9wDSjQw1Ki
BDm09smsNzG3sczhYyRJMbxCFCP+wwBQQYQENijM2KTSIY90+E6o9t3eKZvVejM5S2gESgyJLHn3
LwBMBMGxIPHv80pf2fSy4VDt/OKDyEeD3ZY9aNIcyGz5I/kODB3UbzPTBWwvfsTCkN+h9tpmrITt
0UhlE4LwdaYaw2z+8Q7pw7/j6xPmFNm/RRXlqcs81auRD010z0byKZKTuIOS0Sc45PahHh4GEZgF
9eK6++8oWihvtwP5NHATNAxJqf1nI2goGdReWU/TS9FEbGEajuGatUwsZ8+pabH82jZucybJxn2/
OGWwUeyRgbEJZSKiMSaUT6Z01TZYyNxuQH0PunA+pU+JBrTh15rFS4D0ge6YcTAYK6YIzTjCEI+g
N0BCzo9BTR6iOEoPAsqikou69kafTmJEW7y0eyPzbDK6Y49b1XSdlEVTdY/5KgiJOGSSZ3X+u2RC
GRsxdC2Li5q0BRrh1H5sB5YUY0ECEqAXERr7IGYnbcE2MNBqRlTDZo9urCTgxTRCagvaOgXGUB/f
iQFwKYxM7+Kjfxih+ptz9sPBKVVUqdsI8kAHifV+AhaVaBAoUENJ9BdM/Rr0/Umcg3rF7D6O5DAf
F9aGI2aHLlJA8vDTOQjQd+BoxvOqFI/RJ56TwivmYAHUfQgFcQgeUQ2DbUeaGbcgiLhOA+5bG6ZA
DOG1zFAtlvrcSVpZpmqMCJlS+Z7ntvZxIpHqi9QdgaKE9X1MQNCBShgnzEOYRUmqUnD8L3J5KlIu
4z3xyOwWI3qiCGHy+416Qj54qZ313asyI2ZhuuXFD9splYv6/eZXsalnZje5Y9mdYe2nPTD9XRPa
yESJQFJSXpNYOolJJQPWfKevQBylyhhs0VUC9p9aReBgx5Is0tB2g2f7VqWfg/zCFgyradWi/8NR
v+7DCawzJgPMCZBodgDcwr2Sag62kkgq0QvLRdOwHVEKTkuqFlJJEyrAUYkMHHiC7ntdDMLxiLYE
WoOkylBtbFWAXThvL5dD58ug/J5wpPnmqdERg1qmNwaGRDdtVacWqioxFIciJ2ZE3aL3HF6QgHnQ
fP0brMlZBHsdNj+xrBF7dN73RHXUsx3RhUFzub2i1bVO40GGClwKGGIAdpA3sNcBLsnoYBwvBKQJ
QyTNM0EVyYnDyS8DtaWSo/w3YewEkICRIVg810IYn2dTXr5wuN5lmYWejMOC2l22PxOB/DAdQ2DM
Zw0RFkVDIsrPH+vBrMvNT9MAsXdF7Qh6ifnhZlLtjNEj+NNg4GyN3lzs1skY4JiQJilPT3YDlToQ
Psezt07QY7eSsDSxTFIUj+EuDZAhFBpfWGuwFAn0M7CAUY0rifh4x3ZMBrDbBDKpWNDsJcEQcGg0
LMgvyZ8IZ4+gVh+sTLVlvGyt7kk2nSRhBGz/jYucjSMf6YhIXduYatccsoftxozwMekBwij0uhH5
DelwBg1CIjSUev2gT1HvDvRdwDmEhnaTuDv4U8vMbrsxz1wIki4AJsnDUcIRGvBNo16UsL2hBx6E
swH84RI8FvYeBCIq/kASpFCE233jcbHe8Y6BxQR36hhCmg9w+AQTA9yOpk2PGkyCO2ej0FYrh+kk
B+BChytMVNtIVsedFXkWRRRVkbdem3WpzFzchqhSL+PULbtqJg/9//YkdlN7qjBwfB0J2gxZPMFs
4HD7pHHd2HO88xCdM8q9uNwCJQm6IJ6qXjbpWS4FURVpZbs/mPMdtFv657YSQctoh07i2Hr0SRFE
EpEFFIlPUefdp0yEPxENVD7BAbrdVJodfYBtqhoR0iUDs6PTaqhlDnEQpCM2UPH5SwbgHyhUcWeh
AgHUD5KFGrQfVYXA/R6urqndv5T1zr+f9YiZoOupfV5YVewipJIsGt+LqUzZNZTVY2NVFVhtAq1/
hke7M8dv0X1wIVZVLT8Dpw0apeS/Wa1vb5xeKUYskAYZ/VUY2NiY2AxDE202IaCjn+Cmmebw/z3z
0Gs0YeZfmd8x0LW2xjYxsgY6e2jVshf15G3kXPKrPjLuO2e4wrIQkAklcx4bGLRSbLlrK6fuuzpG
1Z1Vcnawa40QaEHQMCMoYDZ6sqQtqFOFTOtNFMgW2NISMTTIAWIE0khjIJEKMc4GEkStFLs1jKxq
QVjhNDdhkENo0zVy2aDRTVvGpCGXQoJryEKGx+ZptlPpb6OyXxmijwQp8uq2zutM8629GotUq3aj
bixNRKrlnUptHuwLrKJHF0O99UH1dl61Ss83NzXGrKytm176Kw1XmcrppWPTQ2mc81/UPlqLJxWe
C8DLm4o3bTc1jBk9mvAXoo2M+SSfLzK+bo4ZryJMTM8UFjEyGyjAKOM9b4GVGhXVzEY3zkzNmOAh
hLrxJ5PEG4IeqoqebDHeq4WrNgm9S4MjVttCJwmnATOJugaRMt90MB4FkoKkdsJtgLRUN8d22aGU
wUhUicYUW7xq+Ozka9e8vTQhCKwdN5dyCqc6qHZmamGDd6H4ot/Pxs9meOI0J7v278bYaR1tcwdx
MxgUkCDx2BejnYVc1tpWbVGBxoOx6bdFMSxdBgwwjAYxjEwwNe1PvTeZqle92dqyW6uLAdztO08Z
7TodnVA6duG6IVpAiqkiFsGmAMTC1+cvwu6fij3wy5fl0i7ODi0P0I8TrhyeVyGdTrkRp3kUi+UE
dYI56B+x4Xrxw7BGmEZwQJ88tQFWioKu9SxsZgzr3G7pjHnUXOHDL0m2ErhmGJp1TLEZORCTXTqY
dtsiqJh6ePXq7LYJHBHVp1mK3sQKDQtQMa7wgYe3W86MRpcfHckk5llQGd6JSGvbuue2L1voM45Z
kFpbzybqzkWcSHB3ilHDjy4G2fTfVwMGIaTwSOiHPUvx37XyRBXysRGkU0vc3EFw1i6wvis8PsPE
358avoYwA9pAbBjSW3522Fg0I9yFq/NOlzwCXZgtOOCZfjoXZj6Eb+u12jrxDwhL48HOvCK8Ivzb
evjRSEaDwJ4XL5FuwvRYUe1wZ2+3yxny89D2D7KzusneHSxT2uXc6dM8vORFvfiuz2Y/KWgbFXgD
T8oOuHRmw2+GEEZ66ZClsIEMNHCCJZFWIQvQTca1zbK6atMqMCJ+mQf/a34u99eDwB3GiFTsU1U1
7fbS9RUr1t10NqEMGEkQXh4MQkFoC2YAANEemoDF5mma6vHauq17vXqFuz6cosufPktEb9ig9vdp

--_004_4B45B535F7F6BE4CB1C044ED5115CDDE0107F6C1D44CLONPMAILBOX_
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--_004_4B45B535F7F6BE4CB1C044ED5115CDDE0107F6C1D44CLONPMAILBOX_--


From xen-devel-bounces@lists.xen.org Thu Aug 09 14:34:42 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 14:34:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzToX-0006Zm-K8; Thu, 09 Aug 2012 14:34:17 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1SzToV-0006Zh-Oq
	for xen-devel@lists.xen.org; Thu, 09 Aug 2012 14:34:15 +0000
Received: from [85.158.139.83:46824] by server-12.bemta-5.messagelabs.com id
	AF/49-16438-76AC3205; Thu, 09 Aug 2012 14:34:15 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-11.tower-182.messagelabs.com!1344522854!23882888!1
X-Originating-IP: [81.169.146.161]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MSA9PiAzOTI4MTg=\n,sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MSA9PiAzOTI4MTg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17261 invoked from network); 9 Aug 2012 14:34:14 -0000
Received: from mo-p00-ob.rzone.de (HELO mo-p00-ob.rzone.de) (81.169.146.161)
	by server-11.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 9 Aug 2012 14:34:14 -0000
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+zrwiavkK6tmQaLfmwtM48/lk2s7pEXh/
X-RZG-CLASS-ID: mo00
Received: from probook.site
	(dslb-084-057-093-233.pools.arcor-ip.net [84.57.93.233])
	by smtp.strato.de (josoe mo47) (RZmta 30.7 DYNA|AUTH)
	with (DHE-RSA-AES256-SHA encrypted) ESMTPA id K04c4bo79E40wy ;
	Thu, 9 Aug 2012 16:34:13 +0200 (CEST)
Received: by probook.site (Postfix, from userid 1000)
	id 0968118639; Thu,  9 Aug 2012 16:34:06 +0200 (CEST)
Date: Thu, 9 Aug 2012 16:34:06 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20120809143406.GA9317@aepfle.de>
References: <20120806173905.GA26336@aepfle.de>
	<1344318133.24794.16.camel@dagon.hellion.org.uk>
	<20120807152502.GA24503@aepfle.de>
	<1344353581.11339.105.camel@zakaz.uk.xensource.com>
	<20120808172809.GA22206@aepfle.de>
	<1344501152.32142.78.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1344501152.32142.78.camel@zakaz.uk.xensource.com>
User-Agent: Mutt/1.5.21.rev5555 (2012-07-20)
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] vif backend configuration times out
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Aug 09, Ian Campbell wrote:

> > I have no idea; I have to browse the code and debug it.
> > A quick test with plain sles11sp2+xend and xm start -p shows that
> > /local/domain/0/backend/vif/1/0/state finally gets into state 2.
> 
> When you say "finally" do you mean that it takes an unusually long time?

'finally' was the wrong word. It gets into state 2; I notice no delay.

> Is this kernel tree available somewhere convenient (i.e. one which
> doesn't involve unpacking .src.rpms and applying patches, etc.)?

It's available via git, see http://kernel.opensuse.org/git
The web UI is here:
http://kernel.opensuse.org/cgit/kernel/tree/?h=SLE11-SP2

> I checked netback_probe in the linux-2.6.18-xen.hg tree (which I believe
> relates at least somewhat to the SLES kernel) and it switches to
> XenbusStateInitWait just before calling the function which triggers the
> hotplug script -- so libxl's behaviour of waiting for
> XenbusStateInitWait before running the hotplug scripts would seem to be
> correct. I couldn't find anything before this point which would cause
> the driver to block. So if your observation is that your kernel is
> blocking in state 1 or taking an inordinate amount of time to get to
> state 2 then that is what you need to dig into.

Indeed, netback_probe is apparently never called in my case. I will
check why that happens.

> Have you reinstalled your udev rules etc? They changed recently and I
> suspect they need to be up to date to work with the latest scripts.
> Although you don't appear to be getting to that point so I don't think
> it would matter (yet).

It's all coming from xen*.rpm packages, no manual install. The rules are
from xen-unstable.

> You didn't answer my question about error nodes in xenstore.

I don't see any error nodes in xenstore.

> You could, experimentally, try increasing LIBXL_INIT_TIMEOUT to some
> enormous time.

Thanks for the hint.  I will see what I find.


Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 09 15:04:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 15:04:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzUHJ-0006to-C8; Thu, 09 Aug 2012 15:04:01 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)
	id 1SzUHH-0006td-7x; Thu, 09 Aug 2012 15:03:59 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1344524560!4694129!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg2ODI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3623 invoked from network); 9 Aug 2012 15:02:41 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Aug 2012 15:02:41 -0000
X-IronPort-AV: E=Sophos;i="4.77,740,1336348800"; d="scan'208";a="13934006"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	09 Aug 2012 15:02:40 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 9 Aug 2012 16:02:40 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1SzUG0-0005Vc-1k; Thu, 09 Aug 2012 15:02:40 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1SzUFz-0003Ep-UX;
	Thu, 09 Aug 2012 16:02:39 +0100
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="55GXCx6iCO"
Content-Transfer-Encoding: 7bit
Message-ID: <20515.53519.833182.698887@mariner.uk.xensource.com>
Date: Thu, 9 Aug 2012 16:02:39 +0100
From: Xen.org security team <security@xen.org>
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
	xen-users@lists.xen.org, oss-security@lists.openwall.com
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Subject: [Xen-devel] Xen Security Advisory 11 (CVE-2012-3433) - HVM destroy
	p2m host DoS
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--55GXCx6iCO
Content-Type: text/plain; charset="us-ascii"
Content-Description: message body text
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

            Xen Security Advisory CVE-2012-3433 / XSA-11
                          version 3

	HVM guest destroy p2m teardown host DoS vulnerability

UPDATES IN VERSION 3
====================

Embargo ended Thursday 2012-08-09 12:00:00 UTC.

ISSUE DESCRIPTION
=================

An HVM guest is able to manipulate its physical address space such
that tearing down the guest takes an extended period of time
searching for shared pages.

This causes the domain 0 VCPU which tears down the domain to be
blocked in the destroy hypercall. This causes that domain 0 VCPU to
become unavailable and may cause the domain 0 kernel to panic.

There is no requirement for memory sharing to be in use.

IMPACT
======

A guest kernel can cause the host to become unresponsive for a period
of time, potentially leading to a DoS.

VULNERABLE SYSTEMS
==================

All systems running HVM guests with untrusted guest kernels.

This vulnerability affects only Xen 4.0 and 4.1. Xen 3.4 and earlier
and xen-unstable are not vulnerable.

MITIGATION
==========

This issue can be mitigated by running PV (para-virtualised) guests
only, or by ensuring (inside the guest) that the kernel is
trustworthy.

RESOLUTION
==========

Applying the appropriate attached patch will resolve the issue.

NOTE REGARDING CVE
==================

This vulnerability has been assigned CVE candidate number CVE-2012-3433.

PATCH INFORMATION
=================

The attached patches resolve this issue:

 Xen 4.1, 4.1.x                              xsa11-4.1.patch
 Xen 4.0, 4.0.x                              xsa11-4.0.patch

$ sha256sum xsa11-*.patch
c8ab767d831b20a1b22c69a28127303c89cf0379cbf6f1ba3acfda6240aa2a89  xsa11-4.0.patch
61c6424023a26a8b4ea591d0bff6969908091a1a1e1304567d0d910908f21e8d  xsa11-4.1.patch
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.10 (GNU/Linux)

iQEcBAEBAgAGBQJQI8/0AAoJEIP+FMlX6CvZ+fIH/R8w3J9KUiLiIai/QaA4xOjp
rkvdR40b0GzcllDQEy9bUCvRY3QPz7DRza90vLvxCL9R5OnbkRtGJxdmbxjwmoVX
zF03FLaFCd5ypFsTGAcxaUcxtOrt6Ut6R0i8GZp5BCkOV+UkNvu/uaOxL6N3UZ3w
HfCm88EAWsWeJuShiG5jY3BhgCeR7b3GV9uXP0vG5Pa7cwPGvMnx/E6OsC/zEMG2
7yTX0/AI4qKMT9XtiA024vloN1mMlRgN74ZIBqmPuDv5ggv1wLFseARWueYMBn8Y
aUDi97nJf+YWXIx+YwAmD0XLmJ/5tTAYvaV3B4vjMrfFc/plMKDvOqohVB+hv08=
=l4LY
-----END PGP SIGNATURE-----

--55GXCx6iCO
Content-Type: text/plain; name="xsa11-4.0.patch"
Content-Disposition: inline; filename="xsa11-4.0.patch"
Content-Transfer-Encoding: 7bit

# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1343123936 -3600
# Node ID 48ce1f45392708a70723e99fa80947958ae69732
# Parent  c6eb61ed6f04b4079525c3944b5a55268e1db4f1
xen: only check for shared pages while any exist on teardown

Avoids worst-case behaviour when the guest has a large p2m.

This is XSA-11 / CVE-2012-nnn

Signed-off-by: Tim Deegan <tim@xen.org>
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Tested-by: Olaf Hering <olaf@aepfle.de>

diff -r c6eb61ed6f04 -r 48ce1f453927 xen/arch/x86/mm/p2m.c
--- a/xen/arch/x86/mm/p2m.c	Mon May 14 17:02:16 2012 +0100
+++ b/xen/arch/x86/mm/p2m.c	Tue Jul 24 10:58:56 2012 +0100
@@ -1725,6 +1725,8 @@ void p2m_teardown(struct domain *d)
 #ifdef __x86_64__
     for ( gfn=0; gfn < p2m->max_mapped_pfn; gfn++ )
     {
+        if ( atomic_read(&d->shr_pages) == 0 )
+            break;
         mfn = p2m->get_entry(d, gfn, &t, p2m_query);
         if ( mfn_valid(mfn) && (t == p2m_ram_shared) )
             BUG_ON(mem_sharing_unshare_page(d, gfn, MEM_SHARING_DESTROY_GFN));

--55GXCx6iCO
Content-Type: text/plain; name="xsa11-4.1.patch"
Content-Disposition: inline; filename="xsa11-4.1.patch"
Content-Transfer-Encoding: 7bit

# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1343123777 -3600
# Node ID 83c979b30c9057dceb0a0bd2b6c19ab64616eb43
# Parent  3ce155e77f39d0c3cc787c1cc3d6bab1ef45a1dc
xen: only check for shared pages while any exist on teardown

Avoids worst-case behaviour when the guest has a large p2m.

This is XSA-11 / CVE-2012-nnn

Signed-off-by: Tim Deegan <tim@xen.org>
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Tested-by: Olaf Hering <olaf@aepfle.de>

diff -r 3ce155e77f39 -r 83c979b30c90 xen/arch/x86/mm/p2m.c
--- a/xen/arch/x86/mm/p2m.c	Mon Jul 09 10:30:44 2012 +0100
+++ b/xen/arch/x86/mm/p2m.c	Tue Jul 24 10:56:17 2012 +0100
@@ -2044,6 +2044,8 @@ void p2m_teardown(struct p2m_domain *p2m
 #ifdef __x86_64__
     for ( gfn=0; gfn < p2m->max_mapped_pfn; gfn++ )
     {
+        if ( atomic_read(&d->shr_pages) == 0 )
+            break;
         mfn = p2m->get_entry(p2m, gfn, &t, &a, p2m_query);
         if ( mfn_valid(mfn) && (t == p2m_ram_shared) )
             BUG_ON(mem_sharing_unshare_page(p2m, gfn, MEM_SHARING_DESTROY_GFN));

--55GXCx6iCO
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--55GXCx6iCO--


From xen-devel-bounces@lists.xen.org Thu Aug 09 15:04:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 15:04:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzUHJ-0006to-C8; Thu, 09 Aug 2012 15:04:01 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)
	id 1SzUHH-0006td-7x; Thu, 09 Aug 2012 15:03:59 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1344524560!4694129!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg2ODI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3623 invoked from network); 9 Aug 2012 15:02:41 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Aug 2012 15:02:41 -0000
X-IronPort-AV: E=Sophos;i="4.77,740,1336348800"; d="scan'208";a="13934006"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	09 Aug 2012 15:02:40 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 9 Aug 2012 16:02:40 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1SzUG0-0005Vc-1k; Thu, 09 Aug 2012 15:02:40 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1SzUFz-0003Ep-UX;
	Thu, 09 Aug 2012 16:02:39 +0100
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="55GXCx6iCO"
Content-Transfer-Encoding: 7bit
Message-ID: <20515.53519.833182.698887@mariner.uk.xensource.com>
Date: Thu, 9 Aug 2012 16:02:39 +0100
From: Xen.org security team <security@xen.org>
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
	xen-users@lists.xen.org, oss-security@lists.openwall.com
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Subject: [Xen-devel] Xen Security Advisory 11 (CVE-2012-3433) - HVM destroy
	p2m host DoS
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--55GXCx6iCO
Content-Type: text/plain; charset="us-ascii"
Content-Description: message body text
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

            Xen Security Advisory CVE-2012-3433 / XSA-11
                          version 3

	HVM guest destroy p2m teardown host DoS vulnerability

UPDATES IN VERSION 3
====================

Embargo ended Thursday 2012-08-09 12:00:00 UTC.

ISSUE DESCRIPTION
=================

An HVM guest is able to manipulate its physical address space such
that tearing down the guest takes an extended period amount of
time searching for shared pages.

This causes the domain 0 VCPU which tears down the domain to be
blocked in the destroy hypercall. This causes that domain 0 VCPU to
become unavailable and may cause the domain 0 kernel to panic.

There is no requirement for memory sharing to be in use.

IMPACT
======

A guest kernel can cause the host to become unresponsive for a period
of time, potentially leading to a DoS.

VULNERABLE SYSTEMS
==================

All systems running HVM guests with untrusted guest kernels.

This vulnerability effects only Xen 4.0 and 4.1. Xen 3.4 and earlier
and xen-unstable are not vulnerable.

MITIGATION
==========

This issue can be mitigated by running PV (para-virtualised) guests
only, or by ensuring (inside the guest) that the kernel is
trustworthy.

RESOLUTION
==========

Applying the appropriate attached patch will resolve the issue.

NOTE REGARDING CVE
==================

We do not yet have a CVE Candidate number for this vulnerability.

PATCH INFORMATION
=================

The attached patches resolve this issue

 Xen 4.1, 4.1.x                              xsa11-4.1.patch
 Xen 4.0, 4.0.x                              xsa11-4.0.patch

$ sha256sum xsa11-*.patch
c8ab767d831b20a1b22c69a28127303c89cf0379cbf6f1ba3acfda6240aa2a89  xsa11-4.0.patch
61c6424023a26a8b4ea591d0bff6969908091a1a1e1304567d0d910908f21e8d  xsa11-4.1.patch
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.10 (GNU/Linux)

iQEcBAEBAgAGBQJQI8/0AAoJEIP+FMlX6CvZ+fIH/R8w3J9KUiLiIai/QaA4xOjp
rkvdR40b0GzcllDQEy9bUCvRY3QPz7DRza90vLvxCL9R5OnbkRtGJxdmbxjwmoVX
zF03FLaFCd5ypFsTGAcxaUcxtOrt6Ut6R0i8GZp5BCkOV+UkNvu/uaOxL6N3UZ3w
HfCm88EAWsWeJuShiG5jY3BhgCeR7b3GV9uXP0vG5Pa7cwPGvMnx/E6OsC/zEMG2
7yTX0/AI4qKMT9XtiA024vloN1mMlRgN74ZIBqmPuDv5ggv1wLFseARWueYMBn8Y
aUDi97nJf+YWXIx+YwAmD0XLmJ/5tTAYvaV3B4vjMrfFc/plMKDvOqohVB+hv08=
=l4LY
-----END PGP SIGNATURE-----

--55GXCx6iCO
Content-Type: text/plain; name="xsa11-4.0.patch"
Content-Disposition: inline; filename="xsa11-4.0.patch"
Content-Transfer-Encoding: 7bit

# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1343123936 -3600
# Node ID 48ce1f45392708a70723e99fa80947958ae69732
# Parent  c6eb61ed6f04b4079525c3944b5a55268e1db4f1
xen: only check for shared pages while any exist on teardown

Avoids worst case behaviour when the guest has a large p2m.

This is XSA-11 / CVE-2012-nnn

Signed-off-by: Tim Deegan <tim@xen.org>
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Tested-by: Olaf Hering <olaf@aepfle.de>

diff -r c6eb61ed6f04 -r 48ce1f453927 xen/arch/x86/mm/p2m.c
--- a/xen/arch/x86/mm/p2m.c	Mon May 14 17:02:16 2012 +0100
+++ b/xen/arch/x86/mm/p2m.c	Tue Jul 24 10:58:56 2012 +0100
@@ -1725,6 +1725,8 @@ void p2m_teardown(struct domain *d)
 #ifdef __x86_64__
     for ( gfn=0; gfn < p2m->max_mapped_pfn; gfn++ )
     {
+        if ( atomic_read(&d->shr_pages) == 0 )
+            break;
         mfn = p2m->get_entry(d, gfn, &t, p2m_query);
         if ( mfn_valid(mfn) && (t == p2m_ram_shared) )
             BUG_ON(mem_sharing_unshare_page(d, gfn, MEM_SHARING_DESTROY_GFN));

--55GXCx6iCO
Content-Type: text/plain; name="xsa11-4.1.patch"
Content-Disposition: inline; filename="xsa11-4.1.patch"
Content-Transfer-Encoding: 7bit

# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1343123777 -3600
# Node ID 83c979b30c9057dceb0a0bd2b6c19ab64616eb43
# Parent  3ce155e77f39d0c3cc787c1cc3d6bab1ef45a1dc
xen: only check for shared pages while any exist on teardown

Avoids worst case behaviour when the guest has a large p2m.

This is XSA-11 / CVE-2012-nnn

Signed-off-by: Tim Deegan <tim@xen.org>
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Tested-by: Olaf Hering <olaf@aepfle.de>

diff -r 3ce155e77f39 -r 83c979b30c90 xen/arch/x86/mm/p2m.c
--- a/xen/arch/x86/mm/p2m.c	Mon Jul 09 10:30:44 2012 +0100
+++ b/xen/arch/x86/mm/p2m.c	Tue Jul 24 10:56:17 2012 +0100
@@ -2044,6 +2044,8 @@ void p2m_teardown(struct p2m_domain *p2m
 #ifdef __x86_64__
     for ( gfn=0; gfn < p2m->max_mapped_pfn; gfn++ )
     {
+        if ( atomic_read(&d->shr_pages) == 0 )
+            break;
         mfn = p2m->get_entry(p2m, gfn, &t, &a, p2m_query);
         if ( mfn_valid(mfn) && (t == p2m_ram_shared) )
             BUG_ON(mem_sharing_unshare_page(p2m, gfn, MEM_SHARING_DESTROY_GFN));

--55GXCx6iCO
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--55GXCx6iCO--


From xen-devel-bounces@lists.xen.org Thu Aug 09 15:07:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 15:07:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzUKI-00076x-Jy; Thu, 09 Aug 2012 15:07:06 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SzUKH-00076h-12
	for xen-devel@lists.xen.org; Thu, 09 Aug 2012 15:07:05 +0000
Received: from [85.158.138.51:45270] by server-2.bemta-3.messagelabs.com id
	C8/FA-29239-812D3205; Thu, 09 Aug 2012 15:07:04 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-16.tower-174.messagelabs.com!1344524823!27351926!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8674 invoked from network); 9 Aug 2012 15:07:03 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-16.tower-174.messagelabs.com with SMTP;
	9 Aug 2012 15:07:03 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 09 Aug 2012 16:07:02 +0100
Message-Id: <5023EE340200007800093ED0@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Thu, 09 Aug 2012 16:07:00 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__PartCDFCB304.0__="
Subject: [Xen-devel] [PATCH] x86/cpuidle: clean up statistics reporting to
 user mode
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__PartCDFCB304.0__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

First of all, when no ACPI Cx data was reported, make sure the usage
count passed back to user mode is not random.

Besides that, fold a lot of redundant code.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/acpi/cpu_idle.c
+++ b/xen/arch/x86/acpi/cpu_idle.c
@@ -1100,36 +1100,23 @@ int pmstat_get_cx_stat(uint32_t cpuid, s
     }
=20
     stat->last =3D power->last_state ? power->last_state->idx : 0;
-    stat->nr =3D power->count;
     stat->idle_time =3D get_cpu_idle_time(cpuid);
=20
     /* mimic the stat when detail info hasn't been registered by dom0 */
     if ( pm_idle_save =3D=3D NULL )
     {
-        /* C1 */
-        usage[1] =3D 1;
-        res[1] =3D stat->idle_time;
-
-        /* C0 */
-        res[0] =3D NOW() - res[1];
-
-        if ( copy_to_guest_offset(stat->triggers, 0, &usage[0], 2) ||
-            copy_to_guest_offset(stat->residencies, 0, &res[0], 2) )
-            return -EFAULT;
-
-        stat->pc2 =3D 0;
-        stat->pc3 =3D 0;
-        stat->pc6 =3D 0;
-        stat->pc7 =3D 0;
-        stat->cc3 =3D 0;
-        stat->cc6 =3D 0;
-        stat->cc7 =3D 0;
-        return 0;
-    }
+        stat->nr =3D 2;
+
+        usage[1] =3D idle_usage =3D 1;
+        res[1] =3D idle_res =3D stat->idle_time;
=20
-    for ( i =3D power->count - 1; i >=3D 0; i-- )
+        memset(&hw_res, 0, sizeof(hw_res));
+    }
+    else
     {
-        if ( i !=3D 0 )
+        stat->nr =3D power->count;
+
+        for ( i =3D 1; i < power->count; i++ )
         {
             spin_lock_irq(&power->stat_lock);
             usage[i] =3D power->states[i].usage;
@@ -1139,18 +1126,16 @@ int pmstat_get_cx_stat(uint32_t cpuid, s
             idle_usage +=3D usage[i];
             idle_res +=3D res[i];
         }
-        else
-        {
-            usage[i] =3D idle_usage;
-            res[i] =3D NOW() - idle_res;
-        }
+
+        get_hw_residencies(cpuid, &hw_res);
     }
=20
-    if ( copy_to_guest_offset(stat->triggers, 0, &usage[0], power->count) =
||
-        copy_to_guest_offset(stat->residencies, 0, &res[0], power->count) =
)
-        return -EFAULT;
+    usage[0] =3D idle_usage;
+    res[0] =3D NOW() - idle_res;
=20
-    get_hw_residencies(cpuid, &hw_res);
+    if ( copy_to_guest(stat->triggers, usage, stat->nr) ||
+         copy_to_guest(stat->residencies, res, stat->nr) )
+        return -EFAULT;
=20
     stat->pc2 =3D hw_res.pc2;
     stat->pc3 =3D hw_res.pc3;




--=__PartCDFCB304.0__=
Content-Type: text/plain; name="x86-cpuidle-stats-no-Cx.patch"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment; filename="x86-cpuidle-stats-no-Cx.patch"

x86/cpuidle: clean up statistics reporting to user mode=0A=0AFirst of all, =
when no ACPI Cx data was reported, make sure the usage=0Acount passed back =
to user mode is not random.=0A=0ABesides that, fold a lot of redundant =
code.=0A=0ASigned-off-by: Jan Beulich <jbeulich@suse.com>=0A=0A--- =
a/xen/arch/x86/acpi/cpu_idle.c=0A+++ b/xen/arch/x86/acpi/cpu_idle.c=0A@@ =
-1100,36 +1100,23 @@ int pmstat_get_cx_stat(uint32_t cpuid, s=0A     }=0A =
=0A     stat->last =3D power->last_state ? power->last_state->idx : 0;=0A- =
   stat->nr =3D power->count;=0A     stat->idle_time =3D get_cpu_idle_time(=
cpuid);=0A =0A     /* mimic the stat when detail info hasn't been =
registered by dom0 */=0A     if ( pm_idle_save =3D=3D NULL )=0A     {=0A-  =
      /* C1 */=0A-        usage[1] =3D 1;=0A-        res[1] =3D stat->idle_=
time;=0A-=0A-        /* C0 */=0A-        res[0] =3D NOW() - res[1];=0A-=0A-=
        if ( copy_to_guest_offset(stat->triggers, 0, &usage[0], 2) ||=0A-  =
          copy_to_guest_offset(stat->residencies, 0, &res[0], 2) )=0A-     =
       return -EFAULT;=0A-=0A-        stat->pc2 =3D 0;=0A-        =
stat->pc3 =3D 0;=0A-        stat->pc6 =3D 0;=0A-        stat->pc7 =3D =
0;=0A-        stat->cc3 =3D 0;=0A-        stat->cc6 =3D 0;=0A-        =
stat->cc7 =3D 0;=0A-        return 0;=0A-    }=0A+        stat->nr =3D =
2;=0A+=0A+        usage[1] =3D idle_usage =3D 1;=0A+        res[1] =3D =
idle_res =3D stat->idle_time;=0A =0A-    for ( i =3D power->count - 1; i =
>=3D 0; i-- )=0A+        memset(&hw_res, 0, sizeof(hw_res));=0A+    }=0A+  =
  else=0A     {=0A-        if ( i !=3D 0 )=0A+        stat->nr =3D =
power->count;=0A+=0A+        for ( i =3D 1; i < power->count; i++ )=0A     =
    {=0A             spin_lock_irq(&power->stat_lock);=0A             =
usage[i] =3D power->states[i].usage;=0A@@ -1139,18 +1126,16 @@ int =
pmstat_get_cx_stat(uint32_t cpuid, s=0A             idle_usage +=3D =
usage[i];=0A             idle_res +=3D res[i];=0A         }=0A-        =
else=0A-        {=0A-            usage[i] =3D idle_usage;=0A-            =
res[i] =3D NOW() - idle_res;=0A-        }=0A+=0A+        get_hw_residencies=
(cpuid, &hw_res);=0A     }=0A =0A-    if ( copy_to_guest_offset(stat->trigg=
ers, 0, &usage[0], power->count) ||=0A-        copy_to_guest_offset(stat->r=
esidencies, 0, &res[0], power->count) )=0A-        return -EFAULT;=0A+    =
usage[0] =3D idle_usage;=0A+    res[0] =3D NOW() - idle_res;=0A =0A-    =
get_hw_residencies(cpuid, &hw_res);=0A+    if ( copy_to_guest(stat->trigger=
s, usage, stat->nr) ||=0A+         copy_to_guest(stat->residencies, res, =
stat->nr) )=0A+        return -EFAULT;=0A =0A     stat->pc2 =3D hw_res.pc2;=
=0A     stat->pc3 =3D hw_res.pc3;=0A
--=__PartCDFCB304.0__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__PartCDFCB304.0__=--


From xen-devel-bounces@lists.xen.org Thu Aug 09 15:22:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 15:22:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzUYc-0007Wl-6i; Thu, 09 Aug 2012 15:21:54 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ben.guthro@gmail.com>) id 1SzUYZ-0007Wd-0y
	for xen-devel@lists.xen.org; Thu, 09 Aug 2012 15:21:51 +0000
X-Env-Sender: ben.guthro@gmail.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1344525691!2982079!1
X-Originating-IP: [209.85.213.45]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
  RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14957 invoked from network); 9 Aug 2012 15:21:33 -0000
Received: from mail-yw0-f45.google.com (HELO mail-yw0-f45.google.com)
	(209.85.213.45)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Aug 2012 15:21:33 -0000
Received: by yhpp34 with SMTP id p34so620849yhp.32
	for <xen-devel@lists.xen.org>; Thu, 09 Aug 2012 08:21:31 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=BR56oS6TKVGe4tGipfnpwE6CVgJZoTIosTSfBkpBDnc=;
	b=jgqzPJm8Jq5xi6OaPCkjHxsrd1qWU2CSwvouuhDTUh32wzWWlAznIAAHV7Qn9nxa6L
	dB69iyzJ8EQ740m+aWirHo67WE0IgewOHG6W6donHu6Hi5id6bM9IK9k1R1W1xXV71n2
	N9Wg2zHoNRsRyk4dtt7aiageiXymbXyPRTCB48KllbXfVLwhGuCs7ncxkGKYlPx3ngM0
	i8QR652AMGqbfioycLEVonVCA42Bnmn+7dgcFrd+e364pwrB4+xMPTPk8MZLtKlFNqTC
	VOD78I2nAHq/Yt1/Lxxpu78SIxfY4NCNOjDRTtc87595l14qU8O4A1BhcehMxlpBoT2d
	/0pg==
MIME-Version: 1.0
Received: by 10.50.159.135 with SMTP id xc7mr1490187igb.1.1344525691186; Thu,
	09 Aug 2012 08:21:31 -0700 (PDT)
Received: by 10.64.33.15 with HTTP; Thu, 9 Aug 2012 08:21:31 -0700 (PDT)
In-Reply-To: <CAOvdn6WN3kxC8Bb9g6MpfLULu47EGSGCtajPc-SFeJ-8VSUQDQ@mail.gmail.com>
References: <CAOvdn6WwgBD-2yf1uu3m9-16=Tb5KUrybXf2KibtnxKbP7YtwA@mail.gmail.com>
	<CAOvdn6UBW-yTOMd3YSf5M9JiHLRivyaKL-qzcKG8epszqaKmGg@mail.gmail.com>
	<20120807163320.GC15053@phenom.dumpdata.com>
	<CAOvdn6XcRGKtJxGwJO8J+ZBJr89wfTLc1woW5rhtMM540H9h1w@mail.gmail.com>
	<CAOvdn6XUoSeGoo7X=AJ1kpDxZB9HZEv+TJ4v-=FHcyR4=CHK-w@mail.gmail.com>
	<502240F302000078000937A6@nat28.tlf.novell.com>
	<CAOvdn6WN3kxC8Bb9g6MpfLULu47EGSGCtajPc-SFeJ-8VSUQDQ@mail.gmail.com>
Date: Thu, 9 Aug 2012 11:21:31 -0400
X-Google-Sender-Auth: d-ZaY-05EF6z53eHxW7sWz4c2AQ
Message-ID: <CAOvdn6Xk1inm1hU55i0phU2qvSWyJLNoyDdn43biBAY2OewPtw@mail.gmail.com>
From: Ben Guthro <ben@guthro.net>
To: Jan Beulich <JBeulich@suse.com>
Content-Type: multipart/mixed; boundary=14dae9399de136edb804c6d6c8a6
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Thomas Goetz <thomas.goetz@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen4.2 S3 regression?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--14dae9399de136edb804c6d6c8a6
Content-Type: text/plain; charset=ISO-8859-1

Attached is a new run for
new boot (pre-s3)
first suspend / resume cycle (s3-first)
second (failing) suspend / resume cycle (s3-second)



To go into greater detail on the kernel used -

It is a 3.2.23 kernel based off of the Ubuntu 12.04 git tree here
http://kernel.ubuntu.com/git?p=ubuntu/ubuntu-precise.git;a=summary

To that, I also have some of Konrad's branches - specifically
/devel/ioperm
/devel/acpi-s3.v7
/stable/misc  (mostly for the microcode fixes)
/stable/for-linus-fixes-3.3
/stable/for-linus-3.3
/devel/ttm.dma_pool.v2.9
/stable/for-x86

On top of that are some more patches specific to our operations; they
are not terribly interesting here, but I can provide them if necessary.


The 3.5 tree I tested with has a similar makeup - with some fewer
branches from Konrad.


On Wed, Aug 8, 2012 at 6:39 AM, Ben Guthro <ben@guthro.net> wrote:
> Thanks for taking the time to reply.
>
> I'm out of the office today, so don't have direct access to the
> machine in question until tomorrow... but I'll do my best to answer
> (inline below) and I'll follow up tomorrow with concrete answers.
>
> On Wed, Aug 8, 2012 at 4:35 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>>>> On 07.08.12 at 22:14, Ben Guthro <ben@guthro.net> wrote:
>>> Any suggestions on how best to chase this down?
>>>
>>> The first S3 suspend/resume cycle works, but the second does not.
>>>
>>> On the second try, I never get any interrupts delivered to ahci.
>>> (at least according to /proc/interrupts)
>>>
>>>
>>> syslog traces from the first (good) and the second (bad) are attached,
>>> as well as the output from the "*" debug Ctrl+a handler in both cases.
>>
>> You should have provided this also for the state before the
>> first suspend. The state after the first resume already looks
>> corrupted (presumably just not as badly):
>
> I'll be able to send this tomorrow.
>
>>
>> (XEN) PCI-MSI interrupt information:
>> (XEN)  MSI    26 vec=71 lowest  edge   assert  log lowest dest=00000001 mask=0/1/-1
>> (XEN)  MSI    27 vec=00  fixed  edge deassert phys lowest dest=00000001 mask=0/1/-1
>>                      ^^
>> (XEN)  MSI    28 vec=29 lowest  edge   assert  log lowest dest=00000001 mask=0/1/-1
>> (XEN)  MSI    29 vec=79 lowest  edge   assert  log lowest dest=00000001 mask=0/1/-1
>> (XEN)  MSI    30 vec=81 lowest  edge   assert  log lowest dest=00000001 mask=0/1/-1
>> (XEN)  MSI    31 vec=99 lowest  edge   assert  log lowest dest=00000001 mask=0/1/-1
>>
>> so this is likely the reason for things falling apart on the second
>> iteration:
>>
>> (XEN)   Interrupt Remapping: supported and enabled.
>> (XEN)   Interrupt remapping table (nr_entry=0x10000. Only dump P=1 entries here):
>> (XEN)        SVT  SQ   SID      DST  V  AVL DLM TM RH DM FPD P
>> (XEN)   0000:  1   0  f0f8 00000001 38    0   1  0  1  1   0 1
>> ...
>> (XEN)   0014:  1   0  00d8 00000001 a1    0   1  0  1  1   0 1
>> (XEN)   0015:  1   0  00fa 00000001 00    0   0  0  0  0   0 1
>>                                               ^     ^  ^
>> (XEN)   0016:  1   0  f0f8 00000001 31    0   1  1  1  1   0 1
>> (XEN)   0017:  1   0  00a0 00000001 a9    0   1  0  1  1   0 1
>> (XEN)   0018:  1   0  0200 00000001 b1    0   1  0  1  1   0 1
>> (XEN)   0019:  1   0  00c8 00000001 c9    0   1  0  1  1   0 1
>>
>> Surprisingly in both cases we get (with the other vector fields varying
>> accordingly)
>>
>> (XEN)    IRQ:  26 affinity:0001 vec:71 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:279(-S--),
>> (XEN)    IRQ:  27 affinity:0001 vec:21 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:278(-S--),
>>                                     ^^
>> (XEN)    IRQ:  28 affinity:0001 vec:29 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:277(-S--),
>> (XEN)    IRQ:  29 affinity:0001 vec:79 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:276(-S--),
>> (XEN)    IRQ:  30 affinity:0001 vec:81 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:275(PS--),
>> (XEN)    IRQ:  31 affinity:0001 vec:99 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:274(PS--),
>>
>> The interrupt in question belongs to 0000:00:1f.2, i.e. the
>> AHCI controller.
>
> This would be consistent with what I've observed.
>
>>
>> Unfortunately I can't make sense of the kernel side config space
>> restore messages - an offset of 1 gets reported for the device in
>> question (and various other odd offsets exist), yet 3.5's
>> drivers/pci/pci.c:pci_restore_config_space_range() calls
>> pci_restore_config_dword() with an offset that's always divisible
>> by 4. Could you clarify which kernel version you were using here?
>> We first need to determine whether the kernel corrupts something
>> (after all, config space isn't protected from Dom0 modifications) -
>> if that's the case, we may need to understand why older Xen was
>> immune against that. If that's not the case, adding some extra
>> logging to Xen's pci_restore_msi_state() would seem the best
>> first step, plus (maybe) logging of Dom0 post-resume config space
>> accesses to the device in question.
>
> This particular failure is using linux-3.2.23 + some of Konrad's
> branches that haven't been merged into mainline (s3 branches, are
> probably the most appropriate here)
>
>>
>> The most likely thing happening (though unclear where) is that
>> the corresponding struct msi_msg instance gets cleared in the
>> course of the first resume (but after the corresponding interrupt
>> remapping entry already got restored).
>>
>> Jan
>>

--14dae9399de136edb804c6d6c8a6
Content-Type: text/plain; charset=US-ASCII; name="xen-dump-s3-second.txt"
Content-Disposition: attachment; filename="xen-dump-s3-second.txt"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_h5nzo9g00

KFhFTikgJyonIHByZXNzZWQgLT4gZmlyaW5nIGFsbCBkaWFnbm9zdGljIGtleWhhbmRsZXJzCihY
RU4pIFtkOiBkdW1wIHJlZ2lzdGVyc10KKFhFTikgJ2QnIHByZXNzZWQgLT4gZHVtcGluZyByZWdp
c3RlcnMKKFhFTikgCihYRU4pICoqKiBEdW1waW5nIENQVTAgaG9zdCBzdGF0ZTogKioqCihYRU4p
IC0tLS1bIFhlbi00LjIuMC1yYzItcHJlICB4ODZfNjQgIGRlYnVnPXkgIFRhaW50ZWQ6ICAgIEMg
XS0tLS0KKFhFTikgQ1BVOiAgICAwCihYRU4pIFJJUDogICAgZTAwODpbPGZmZmY4MmM0ODAxM2Q3
N2U+XSBuczE2NTUwX3BvbGwrMHgyNy8weDMzCihYRU4pIFJGTEFHUzogMDAwMDAwMDAwMDAxMDI4
NiAgIENPTlRFWFQ6IGh5cGVydmlzb3IKKFhFTikgcmF4OiBmZmZmODJjNDgwMzAyNWEwICAgcmJ4
OiBmZmZmODJjNDgwMzAyNDgwICAgcmN4OiAwMDAwMDAwMDAwMDAwMDAzCihYRU4pIHJkeDogMDAw
MDAwMDAwMDAwMDAwMCAgIHJzaTogZmZmZjgyYzQ4MDJlMjVjOCAgIHJkaTogZmZmZjgyYzQ4MDI3
MTgwMAooWEVOKSByYnA6IGZmZmY4MmM0ODAyYjdlMzAgICByc3A6IGZmZmY4MmM0ODAyYjdlMzAg
ICByODogIDAwMDAwMDVjNTEzZGRkMDAKKFhFTikgcjk6ICBmZmZmODJjNDgwMzAyNjAwICAgcjEw
OiAwMDAwMDA1YzUwNmQyMGY4ICAgcjExOiAwMDAwMDAwMDAwMDAwMjQ2CihYRU4pIHIxMjogZmZm
ZjgyYzQ4MDI3MTgwMCAgIHIxMzogZmZmZjgyYzQ4MDEzZDc1NyAgIHIxNDogMDAwMDAwNWM1MDFj
YmQzMgooWEVOKSByMTU6IGZmZmY4MmM0ODAzMDIzMDggICBjcjA6IDAwMDAwMDAwODAwNTAwM2Ig
ICBjcjQ6IDAwMDAwMDAwMDAxMDI2ZjAKKFhFTikgY3IzOiAwMDAwMDAwMTRkMDBmMDAwICAgY3Iy
OiAwMDAwMDAwMDAwMDAwMDAwCihYRU4pIGRzOiAwMDAwICAgZXM6IDAwMDAgICBmczogMDAwMCAg
IGdzOiAwMDAwICAgc3M6IGUwMTAgICBjczogZTAwOAooWEVOKSBYZW4gc3RhY2sgdHJhY2UgZnJv
bSByc3A9ZmZmZjgyYzQ4MDJiN2UzMDoKKFhFTikgICAgZmZmZjgyYzQ4MDJiN2U2MCBmZmZmODJj
NDgwMTI4MTdmIDAwMDAwMDAwMDAwMDAwMDIgZmZmZjgyYzQ4MDJlMjVjOAooWEVOKSAgICBmZmZm
ODJjNDgwMzAyNDgwIGZmZmY4MzAxNDg5YjNkNDAgZmZmZjgyYzQ4MDJiN2ViMCBmZmZmODJjNDgw
MTI4MjgxCihYRU4pICAgIGZmZmY4MmM0ODAyYjdmMTggMDAwMDAwMDAwMDAwMDI0NiAwMDAwMDA1
YzUwMWJmNGVlIGZmZmY4MmM0ODAyZDg4ODAKKFhFTikgICAgZmZmZjgyYzQ4MDJkODg4MCBmZmZm
ODJjNDgwMmI3ZjE4IGZmZmZmZmZmZmZmZmZmZmYgZmZmZjgyYzQ4MDMwMjMwOAooWEVOKSAgICBm
ZmZmODJjNDgwMmI3ZWUwIGZmZmY4MmM0ODAxMjU0MDUgZmZmZjgyYzQ4MDJiN2YxOCBmZmZmODJj
NDgwMmI3ZjE4CihYRU4pICAgIDAwMDAwMDAwZmZmZmZmZmYgMDAwMDAwMDAwMDAwMDAwMiBmZmZm
ODJjNDgwMmI3ZWYwIGZmZmY4MmM0ODAxMjU0ODQKKFhFTikgICAgZmZmZjgyYzQ4MDJiN2YxMCBm
ZmZmODJjNDgwMTU4YzA1IGZmZmY4MzAwYWE1ODQwMDAgZmZmZjgzMDBhYTBmYzAwMAooWEVOKSAg
ICBmZmZmODJjNDgwMmI3ZGE4IDAwMDAwMDAwMDAwMDAwMDAgZmZmZmZmZmZmZmZmZmZmZiAwMDAw
MDAwMDAwMDAwMDAwCihYRU4pICAgIGZmZmZmZmZmODFhYWZkYTAgZmZmZmZmZmY4MWEwMWVlOCBm
ZmZmZmZmZjgxYTAxZmQ4IDAwMDAwMDAwMDAwMDAyNDYKKFhFTikgICAgMDAwMDAwMDAwMDAwMDAw
MSAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMAooWEVO
KSAgICBmZmZmZmZmZjgxMDAxM2FhIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDBkZWFkYmVlZiAw
MDAwMDAwMGRlYWRiZWVmCihYRU4pICAgIDAwMDAwMTAwMDAwMDAwMDAgZmZmZmZmZmY4MTAwMTNh
YSAwMDAwMDAwMDAwMDBlMDMzIDAwMDAwMDAwMDAwMDAyNDYKKFhFTikgICAgZmZmZmZmZmY4MWEw
MWVkMCAwMDAwMDAwMDAwMDBlMDJiIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMAoo
WEVOKSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCBmZmZmODMwMGFhNTg0MDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMAooWEVOKSBYZW4gY2FsbCB0cmFjZToKKFhFTikgICAgWzxmZmZmODJjNDgwMTNkNzdlPl0g
bnMxNjU1MF9wb2xsKzB4MjcvMHgzMwooWEVOKSAgICBbPGZmZmY4MmM0ODAxMjgxN2Y+XSBleGVj
dXRlX3RpbWVyKzB4NGUvMHg2YwooWEVOKSAgICBbPGZmZmY4MmM0ODAxMjgyODE+XSB0aW1lcl9z
b2Z0aXJxX2FjdGlvbisweGU0LzB4MjFhCihYRU4pICAgIFs8ZmZmZjgyYzQ4MDEyNTQwNT5dIF9f
ZG9fc29mdGlycSsweDk1LzB4YTAKKFhFTikgICAgWzxmZmZmODJjNDgwMTI1NDg0Pl0gZG9fc29m
dGlycSsweDI2LzB4MjgKKFhFTikgICAgWzxmZmZmODJjNDgwMTU4YzA1Pl0gaWRsZV9sb29wKzB4
NmYvMHg3MQooWEVOKSAgICAKKFhFTikgKioqIER1bXBpbmcgQ1BVMSBob3N0IHN0YXRlOiAqKioK
KFhFTikgLS0tLVsgWGVuLTQuMi4wLXJjMi1wcmUgIHg4Nl82NCAgZGVidWc9eSAgVGFpbnRlZDog
ICAgQyBdLS0tLQooWEVOKSBDUFU6ICAgIDEKKFhFTikgUklQOiAgICBlMDA4Ols8ZmZmZjgyYzQ4
MDE1ODNjND5dIGRlZmF1bHRfaWRsZSsweDk5LzB4OWUKKFhFTikgUkZMQUdTOiAwMDAwMDAwMDAw
MDAwMjQ2ICAgQ09OVEVYVDogaHlwZXJ2aXNvcgooWEVOKSByYXg6IGZmZmY4MmM0ODAzMDIzNzAg
ICByYng6IGZmZmY4MzAxM2U2ZTdmMTggICByY3g6IDAwMDAwMDAwMDAwMDAwMDEKKFhFTikgcmR4
OiAwMDAwMDAzY2JkMWI1ZDgwICAgcnNpOiAwMDAwMDAwMDM1NmNkMzg2ICAgcmRpOiAwMDAwMDAw
MDAwMDAwMDAxCihYRU4pIHJicDogZmZmZjgzMDEzZTZlN2VmMCAgIHJzcDogZmZmZjgzMDEzZTZl
N2VmMCAgIHI4OiAgMDAwMDAwMTc2M2UxODBhYwooWEVOKSByOTogIDAwMDAwMDAwMDAwMDAwM2Ug
ICByMTA6IDAwMDAwMDAwZGVhZGJlZWYgICByMTE6IDAwMDAwMDAwMDAwMDAyNDYKKFhFTikgcjEy
OiBmZmZmODMwMTNlNmU3ZjE4ICAgcjEzOiAwMDAwMDAwMGZmZmZmZmZmICAgcjE0OiAwMDAwMDAw
MDAwMDAwMDAyCihYRU4pIHIxNTogZmZmZjgzMDEzZDRiODA4OCAgIGNyMDogMDAwMDAwMDA4MDA1
MDAzYiAgIGNyNDogMDAwMDAwMDAwMDEwMjZmMAooWEVOKSBjcjM6IDAwMDAwMDAxNGQwMGYwMDAg
ICBjcjI6IGZmZmY4ODAwMjVmYzZiOTgKKFhFTikgZHM6IDAwMmIgICBlczogMDAyYiAgIGZzOiAw
MDAwICAgZ3M6IDAwMDAgICBzczogZTAxMCAgIGNzOiBlMDA4CihYRU4pIFhlbiBzdGFjayB0cmFj
ZSBmcm9tIHJzcD1mZmZmODMwMTNlNmU3ZWYwOgooWEVOKSAgICBmZmZmODMwMTNlNmU3ZjEwIGZm
ZmY4MmM0ODAxNThiZjggZmZmZjgzMDBhYTBmZTAwMCBmZmZmODMwMGE4M2ZkMDAwCihYRU4pICAg
IGZmZmY4MzAxM2U2ZTdkYTggMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDEKKFhFTikgICAgZmZmZmZmZmY4MWFhZmRhMCBmZmZmODgwMDI3ODZkZWUwIGZm
ZmY4ODAwMjc4NmRmZDggMDAwMDAwMDAwMDAwMDI0NgooWEVOKSAgICAwMDAwMDAwMDAwMDAwMDAx
IDAwMDAwMDAwMDAwMDAwNDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwCihYRU4p
ICAgIGZmZmZmZmZmODEwMDEzYWEgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMGRlYWRiZWVmIDAw
MDAwMDAwZGVhZGJlZWYKKFhFTikgICAgMDAwMDAxMDAwMDAwMDAwMCBmZmZmZmZmZjgxMDAxM2Fh
IDAwMDAwMDAwMDAwMGUwMzMgMDAwMDAwMDAwMDAwMDI0NgooWEVOKSAgICBmZmZmODgwMDI3ODZk
ZWM4IDAwMDAwMDAwMDAwMGUwMmIgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwCihY
RU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAx
IGZmZmY4MzAwYWEwZmUwMDAKKFhFTikgICAgMDAwMDAwM2NiZDFiNWQ4MCAwMDAwMDAwMDAwMDAw
MDAwCihYRU4pIFhlbiBjYWxsIHRyYWNlOgooWEVOKSAgICBbPGZmZmY4MmM0ODAxNTgzYzQ+XSBk
ZWZhdWx0X2lkbGUrMHg5OS8weDllCihYRU4pICAgIFs8ZmZmZjgyYzQ4MDE1OGJmOD5dIGlkbGVf
bG9vcCsweDYyLzB4NzEKKFhFTikgICAgCihYRU4pICoqKiBEdW1waW5nIENQVTIgaG9zdCBzdGF0
ZTogKioqCihYRU4pIC0tLS1bIFhlbi00LjIuMC1yYzItcHJlICB4ODZfNjQgIGRlYnVnPXkgIFRh
aW50ZWQ6ICAgIEMgXS0tLS0KKFhFTikgQ1BVOiAgICAyCihYRU4pIFJJUDogICAgZTAwODpbPGZm
ZmY4MmM0ODAxNTgzYzQ+XSBkZWZhdWx0X2lkbGUrMHg5OS8weDllCihYRU4pIFJGTEFHUzogMDAw
MDAwMDAwMDAwMDI0NiAgIENPTlRFWFQ6IGh5cGVydmlzb3IKKFhFTikgcmF4OiBmZmZmODJjNDgw
MzAyMzcwICAgcmJ4OiBmZmZmODMwMTQ4OTlmZjE4ICAgcmN4OiAwMDAwMDAwMDAwMDAwMDAyCihY
RU4pIHJkeDogMDAwMDAwM2NiZTNlZWQ4MCAgIHJzaTogMDAwMDAwMDAzNjMyN2E1MiAgIHJkaTog
MDAwMDAwMDAwMDAwMDAwMgooWEVOKSByYnA6IGZmZmY4MzAxNDg5OWZlZjAgICByc3A6IGZmZmY4
MzAxNDg5OWZlZjAgICByODogIDAwMDAwMDE3ODY4M2Y0ZWMKKFhFTikgcjk6ICBmZmZmODMwMGE4
M2ZjMDYwICAgcjEwOiAwMDAwMDAwMGRlYWRiZWVmICAgcjExOiAwMDAwMDAwMDAwMDAwMjQ2CihY
RU4pIHIxMjogZmZmZjgzMDE0ODk5ZmYxOCAgIHIxMzogMDAwMDAwMDBmZmZmZmZmZiAgIHIxNDog
MDAwMDAwMDAwMDAwMDAwMgooWEVOKSByMTU6IGZmZmY4MzAxM2U2ZjEwODggICBjcjA6IDAwMDAw
MDAwODAwNTAwM2IgICBjcjQ6IDAwMDAwMDAwMDAxMDI2ZjAKKFhFTikgY3IzOiAwMDAwMDAwMTQx
YTA1MDAwICAgY3IyOiBmZmZmODgwMDI3OGQwMGY4CihYRU4pIGRzOiAwMDJiICAgZXM6IDAwMmIg
ICBmczogMDAwMCAgIGdzOiAwMDAwICAgc3M6IGUwMTAgICBjczogZTAwOAooWEVOKSBYZW4gc3Rh
Y2sgdHJhY2UgZnJvbSByc3A9ZmZmZjgzMDE0ODk5ZmVmMDoKKFhFTikgICAgZmZmZjgzMDE0ODk5
ZmYxMCBmZmZmODJjNDgwMTU4YmY4IGZmZmY4MzAwYTg1YzcwMDAgZmZmZjgzMDBhODNmYzAwMAoo
WEVOKSAgICBmZmZmODMwMTQ4OTlmZGE4IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAyCihYRU4pICAgIGZmZmZmZmZmODFhYWZkYTAgZmZmZjg4MDAyNzg2
ZmVlMCBmZmZmODgwMDI3ODZmZmQ4IDAwMDAwMDAwMDAwMDAyNDYKKFhFTikgICAgMDAwMDAwMDAw
MDAwMDAwMSAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MAooWEVOKSAgICBmZmZmZmZmZjgxMDAxM2FhIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDBkZWFk
YmVlZiAwMDAwMDAwMGRlYWRiZWVmCihYRU4pICAgIDAwMDAwMTAwMDAwMDAwMDAgZmZmZmZmZmY4
MTAwMTNhYSAwMDAwMDAwMDAwMDBlMDMzIDAwMDAwMDAwMDAwMDAyNDYKKFhFTikgICAgZmZmZjg4
MDAyNzg2ZmVjOCAwMDAwMDAwMDAwMDBlMDJiIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMAooWEVOKSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMiBmZmZmODMwMGE4NWM3MDAwCihYRU4pICAgIDAwMDAwMDNjYmUzZWVkODAgMDAwMDAw
MDAwMDAwMDAwMAooWEVOKSBYZW4gY2FsbCB0cmFjZToKKFhFTikgICAgWzxmZmZmODJjNDgwMTU4
M2M0Pl0gZGVmYXVsdF9pZGxlKzB4OTkvMHg5ZQooWEVOKSAgICBbPGZmZmY4MmM0ODAxNThiZjg+
XSBpZGxlX2xvb3ArMHg2Mi8weDcxCihYRU4pICAgIAooWEVOKSAqKiogRHVtcGluZyBDUFUzIGhv
c3Qgc3RhdGU6ICoqKgooWEVOKSAtLS0tWyBYZW4tNC4yLjAtcmMyLXByZSAgeDg2XzY0ICBkZWJ1
Zz15ICBUYWludGVkOiAgICBDIF0tLS0tCihYRU4pIENQVTogICAgMwooWEVOKSBSSVA6ICAgIGUw
MDg6WzxmZmZmODJjNDgwMTU4M2M0Pl0gZGVmYXVsdF9pZGxlKzB4OTkvMHg5ZQooWEVOKSBSRkxB
R1M6IDAwMDAwMDAwMDAwMDAyNDYgICBDT05URVhUOiBoeXBlcnZpc29yCihYRU4pIHJheDogZmZm
ZjgyYzQ4MDMwMjM3MCAgIHJieDogZmZmZjgzMDE0ODk4ZmYxOCAgIHJjeDogMDAwMDAwMDAwMDAw
MDAwMwooWEVOKSByZHg6IDAwMDAwMDNjYzg2OTJkODAgICByc2k6IDAwMDAwMDAwMzZmODMyZmUg
ICByZGk6IDAwMDAwMDAwMDAwMDAwMDMKKFhFTikgcmJwOiBmZmZmODMwMTQ4OThmZWYwICAgcnNw
OiBmZmZmODMwMTQ4OThmZWYwICAgcjg6ICAwMDAwMDAxN2E4ZGJmMzk0CihYRU4pIHI5OiAgMDAw
MDAwMDAwMDAwMDAzYyAgIHIxMDogMDAwMDAwMDBkZWFkYmVlZiAgIHIxMTogMDAwMDAwMDAwMDAw
MDI0NgooWEVOKSByMTI6IGZmZmY4MzAxNDg5OGZmMTggICByMTM6IDAwMDAwMDAwZmZmZmZmZmYg
ICByMTQ6IDAwMDAwMDAwMDAwMDAwMDIKKFhFTikgcjE1OiBmZmZmODMwMTQ4OTk1MDg4ICAgY3Iw
OiAwMDAwMDAwMDgwMDUwMDNiICAgY3I0OiAwMDAwMDAwMDAwMTAyNmYwCihYRU4pIGNyMzogMDAw
MDAwMDE0MWEwNTAwMCAgIGNyMjogMDAwMDAwMDAwMDAwMDAwMAooWEVOKSBkczogMDAyYiAgIGVz
OiAwMDJiICAgZnM6IDAwMDAgICBnczogMDAwMCAgIHNzOiBlMDEwICAgY3M6IGUwMDgKKFhFTikg
WGVuIHN0YWNrIHRyYWNlIGZyb20gcnNwPWZmZmY4MzAxNDg5OGZlZjA6CihYRU4pICAgIGZmZmY4
MzAxNDg5OGZmMTAgZmZmZjgyYzQ4MDE1OGJmOCBmZmZmODMwMGE4M2ZlMDAwIGZmZmY4MzAwYWE1
ODMwMDAKKFhFTikgICAgZmZmZjgzMDE0ODk4ZmRhOCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMwooWEVOKSAgICBmZmZmZmZmZjgxYWFmZGEwIGZmZmY4
ODAwMjc4ODFlZTAgZmZmZjg4MDAyNzg4MWZkOCAwMDAwMDAwMDAwMDAwMjQ2CihYRU4pICAgIDAw
MDAwMDAwMDAwMDAwMDEgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAKKFhFTikgICAgZmZmZmZmZmY4MTAwMTNhYSAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwZGVhZGJlZWYgMDAwMDAwMDBkZWFkYmVlZgooWEVOKSAgICAwMDAwMDEwMDAwMDAwMDAwIGZm
ZmZmZmZmODEwMDEzYWEgMDAwMDAwMDAwMDAwZTAzMyAwMDAwMDAwMDAwMDAwMjQ2CihYRU4pICAg
IGZmZmY4ODAwMjc4ODFlYzggMDAwMDAwMDAwMDAwZTAyYiAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAKKFhFTikgICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDMgZmZmZjgzMDBhODNmZTAwMAooWEVOKSAgICAwMDAwMDAzY2M4NjkyZDgw
IDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgWGVuIGNhbGwgdHJhY2U6CihYRU4pICAgIFs8ZmZmZjgy
YzQ4MDE1ODNjND5dIGRlZmF1bHRfaWRsZSsweDk5LzB4OWUKKFhFTikgICAgWzxmZmZmODJjNDgw
MTU4YmY4Pl0gaWRsZV9sb29wKzB4NjIvMHg3MQooWEVOKSAgICAKKFhFTikgWzA6IGR1bXAgRG9t
MCByZWdpc3RlcnNdCihYRU4pICcwJyBwcmVzc2VkIC0+IGR1bXBpbmcgRG9tMCdzIHJlZ2lzdGVy
cwooWEVOKSAqKiogRHVtcGluZyBEb20wIHZjcHUjMCBzdGF0ZTogKioqCihYRU4pIFJJUDogICAg
ZTAzMzpbPGZmZmZmZmZmODEwMDEzYWE+XQooWEVOKSBSRkxBR1M6IDAwMDAwMDAwMDAwMDAyNDYg
ICBFTTogMCAgIENPTlRFWFQ6IHB2IGd1ZXN0CihYRU4pIHJheDogMDAwMDAwMDAwMDAwMDAwMCAg
IHJieDogZmZmZmZmZmY4MWEwMWZkOCAgIHJjeDogZmZmZmZmZmY4MTAwMTNhYQooWEVOKSByZHg6
IDAwMDAwMDAwMDAwMDAwMDAgICByc2k6IDAwMDAwMDAwZGVhZGJlZWYgICByZGk6IDAwMDAwMDAw
ZGVhZGJlZWYKKFhFTikgcmJwOiBmZmZmZmZmZjgxYTAxZWU4ICAgcnNwOiBmZmZmZmZmZjgxYTAx
ZWQwICAgcjg6ICAwMDAwMDAwMDAwMDAwMDAwCihYRU4pIHI5OiAgMDAwMDAwMDAwMDAwMDAwMCAg
IHIxMDogMDAwMDAwMDAwMDAwMDAwMSAgIHIxMTogMDAwMDAwMDAwMDAwMDI0NgooWEVOKSByMTI6
IGZmZmZmZmZmODFhYWZkYTAgICByMTM6IDAwMDAwMDAwMDAwMDAwMDAgICByMTQ6IGZmZmZmZmZm
ZmZmZmZmZmYKKFhFTikgcjE1OiAwMDAwMDAwMDAwMDAwMDAwICAgY3IwOiAwMDAwMDAwMDAwMDAw
MDA4ICAgY3I0OiAwMDAwMDAwMDAwMDAyNjYwCihYRU4pIGNyMzogMDAwMDAwMDE0ZDAwZjAwMCAg
IGNyMjogMDAwMDdmOGQ5ZTdkMzNkMAooWEVOKSBkczogMDAwMCAgIGVzOiAwMDAwICAgZnM6IDAw
MDAgICBnczogMDAwMCAgIHNzOiBlMDJiICAgY3M6IGUwMzMKKFhFTikgR3Vlc3Qgc3RhY2sgdHJh
Y2UgZnJvbSByc3A9ZmZmZmZmZmY4MWEwMWVkMDoKKFhFTikgICAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMGZmZmZmZmZmIGZmZmZmZmZmODEwMGE1YzAgZmZmZmZmZmY4MWEwMWYxOAooWEVOKSAg
ICBmZmZmZmZmZjgxMDFjNjYzIGZmZmZmZmZmODFhMDFmZDggZmZmZmZmZmY4MWFhZmRhMCBmZmZm
ODgwMDJkZWUxYTAwCihYRU4pICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmY4MWEwMWY0OCBm
ZmZmZmZmZjgxMDEzMjM2IGZmZmZmZmZmZmZmZmZmZmYKKFhFTikgICAgYTNmYzk4NzMzOWVkMDEz
YiAwMDAwMDAwMDAwMDAwMDAwIGZmZmZmZmZmODFiMTUxNjAgZmZmZmZmZmY4MWEwMWY1OAooWEVO
KSAgICBmZmZmZmZmZjgxNTU0ZjVlIGZmZmZmZmZmODFhMDFmOTggZmZmZmZmZmY4MWFjY2JmNSBm
ZmZmZmZmZjgxYjE1MTYwCihYRU4pICAgIGU0YjE1OWJhM2VlYTA5NGMgMDAwMDAwMDAwMGNkZjAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAwMDAwMDAwMDAw
MDAwMCBmZmZmZmZmZjgxYTAxZmI4IGZmZmZmZmZmODFhY2MzNGIgZmZmZmZmZmY3ZmZmZmZmZgoo
WEVOKSAgICBmZmZmZmZmZjg0YjA0MDAwIGZmZmZmZmZmODFhMDFmZjggZmZmZmZmZmY4MWFjZmVj
YyAwMDAwMDAwMDAwMDAwMDAwCihYRU4pICAgIDAwMDAwMDAxMDAwMDAwMDAgMDAxMDA4MDAwMDAz
MDZhNCAxZmM5OGI3NWUzYjgyMjgzIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwCihYRU4pICoqKiBEdW1waW5nIERvbTAgdmNwdSMxIHN0
YXRlOiAqKioKKFhFTikgUklQOiAgICBlMDMzOls8ZmZmZmZmZmY4MTAwMTNhYT5dCihYRU4pIFJG
TEFHUzogMDAwMDAwMDAwMDAwMDI0NiAgIEVNOiAwICAgQ09OVEVYVDogcHYgZ3Vlc3QKKFhFTikg
cmF4OiAwMDAwMDAwMDAwMDAwMDAwICAgcmJ4OiBmZmZmODgwMDI3ODZkZmQ4ICAgcmN4OiBmZmZm
ZmZmZjgxMDAxM2FhCihYRU4pIHJkeDogMDAwMDAwMDAwMDAwMDAwMCAgIHJzaTogMDAwMDAwMDBk
ZWFkYmVlZiAgIHJkaTogMDAwMDAwMDBkZWFkYmVlZgooWEVOKSByYnA6IGZmZmY4ODAwMjc4NmRl
ZTAgICByc3A6IGZmZmY4ODAwMjc4NmRlYzggICByODogIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikg
cjk6ICAwMDAwMDAwMDAwMDAwMDQwICAgcjEwOiAwMDAwMDAwMDAwMDAwMDAxICAgcjExOiAwMDAw
MDAwMDAwMDAwMjQ2CihYRU4pIHIxMjogZmZmZmZmZmY4MWFhZmRhMCAgIHIxMzogMDAwMDAwMDAw
MDAwMDAwMSAgIHIxNDogMDAwMDAwMDAwMDAwMDAwMAooWEVOKSByMTU6IDAwMDAwMDAwMDAwMDAw
MDAgICBjcjA6IDAwMDAwMDAwODAwNTAwM2IgICBjcjQ6IDAwMDAwMDAwMDAwMDI2NjAKKFhFTikg
Y3IzOiAwMDAwMDAwMTRkMDBmMDAwICAgY3IyOiAwMDAwMDAwMDAxZmU5M2YwCihYRU4pIGRzOiAw
MDJiICAgZXM6IDAwMmIgICBmczogMDAwMCAgIGdzOiAwMDAwICAgc3M6IGUwMmIgICBjczogZTAz
MwooWEVOKSBHdWVzdCBzdGFjayB0cmFjZSBmcm9tIHJzcD1mZmZmODgwMDI3ODZkZWM4OgooWEVO
KSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwZmZmZmZmZmYgZmZmZmZmZmY4MTAwYTVjMCBm
ZmZmODgwMDI3ODZkZjEwCihYRU4pICAgIGZmZmZmZmZmODEwMWM2NjMgZmZmZjg4MDAyNzg2ZGZk
OCBmZmZmZmZmZjgxYWFmZGEwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAwMDAwMDAwMDAw
MDAwMCBmZmZmODgwMDI3ODZkZjQwIGZmZmZmZmZmODEwMTMyMzYgZmZmZmZmZmY4MTAwYWRlOQoo
WEVOKSAgICBhZGNmNDU4MDdjMmQwNGZiIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCBmZmZmODgwMDI3ODZkZjUwCihYRU4pICAgIGZmZmZmZmZmODE1NjM0MzggMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MAooWEVOKSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMAooWEVOKSAgICAwMDAwMDAwMDAwMDAwMDAwIGZmZmY4ODAwMjc4NmRmNTggMDAwMDAwMDAw
MDAwMDAwMAooWEVOKSAqKiogRHVtcGluZyBEb20wIHZjcHUjMiBzdGF0ZTogKioqCihYRU4pIFJJ
UDogICAgZTAzMzpbPGZmZmZmZmZmODEwMDEzYWE+XQooWEVOKSBSRkxBR1M6IDAwMDAwMDAwMDAw
MDAyNDYgICBFTTogMCAgIENPTlRFWFQ6IHB2IGd1ZXN0CihYRU4pIHJheDogMDAwMDAwMDAwMDAw
MDAwMCAgIHJieDogZmZmZjg4MDAyNzg2ZmZkOCAgIHJjeDogZmZmZmZmZmY4MTAwMTNhYQooWEVO
KSByZHg6IDAwMDAwMDAwMDAwMDAwMDAgICByc2k6IDAwMDAwMDAwZGVhZGJlZWYgICByZGk6IDAw
MDAwMDAwZGVhZGJlZWYKKFhFTikgcmJwOiBmZmZmODgwMDI3ODZmZWUwICAgcnNwOiBmZmZmODgw
MDI3ODZmZWM4ICAgcjg6ICAwMDAwMDAwMDAwMDAwMDAwCihYRU4pIHI5OiAgMDAwMDAwMDAwMDAw
MDAwMCAgIHIxMDogMDAwMDAwMDAwMDAwMDAwMSAgIHIxMTogMDAwMDAwMDAwMDAwMDI0NgooWEVO
KSByMTI6IGZmZmZmZmZmODFhYWZkYTAgICByMTM6IDAwMDAwMDAwMDAwMDAwMDIgICByMTQ6IDAw
MDAwMDAwMDAwMDAwMDAKKFhFTikgcjE1OiAwMDAwMDAwMDAwMDAwMDAwICAgY3IwOiAwMDAwMDAw
MDgwMDUwMDNiICAgY3I0OiAwMDAwMDAwMDAwMDAyNjYwCihYRU4pIGNyMzogMDAwMDAwMDE0MWEw
NTAwMCAgIGNyMjogMDAwMDdmODE4YmZmZWNkNgooWEVOKSBkczogMDAyYiAgIGVzOiAwMDJiICAg
ZnM6IDAwMDAgICBnczogMDAwMCAgIHNzOiBlMDJiICAgY3M6IGUwMzMKKFhFTikgR3Vlc3Qgc3Rh
Y2sgdHJhY2UgZnJvbSByc3A9ZmZmZjg4MDAyNzg2ZmVjODoKKFhFTikgICAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMGZmZmZmZmZmIGZmZmZmZmZmODEwMGE1YzAgZmZmZjg4MDAyNzg2ZmYxMAoo
WEVOKSAgICBmZmZmZmZmZjgxMDFjNjYzIGZmZmY4ODAwMjc4NmZmZDggZmZmZmZmZmY4MWFhZmRh
MCAwMDAwMDAwMDAwMDAwMDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgZmZmZjg4MDAyNzg2
ZmY0MCBmZmZmZmZmZjgxMDEzMjM2IGZmZmZmZmZmODEwMGFkZTkKKFhFTikgICAgMWZlN2I1YTgy
MjE1MDQ5OSAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgZmZmZjg4MDAyNzg2ZmY1
MAooWEVOKSAgICBmZmZmZmZmZjgxNTYzNDM4IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMAooWEVOKSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAw
MDAwMDAwMDAwMDAwMCBmZmZmODgwMDI3ODZmZjU4IDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgKioq
IER1bXBpbmcgRG9tMCB2Y3B1IzMgc3RhdGU6ICoqKgooWEVOKSBSSVA6ICAgIGUwMzM6WzxmZmZm
ZmZmZjgxMDAxM2FhPl0KKFhFTikgUkZMQUdTOiAwMDAwMDAwMDAwMDAwMjQ2ICAgRU06IDAgICBD
T05URVhUOiBwdiBndWVzdAooWEVOKSByYXg6IDAwMDAwMDAwMDAwMDAwMDAgICByYng6IGZmZmY4
ODAwMjc4ODFmZDggICByY3g6IGZmZmZmZmZmODEwMDEzYWEKKFhFTikgcmR4OiAwMDAwMDAwMDAw
MDAwMDAwICAgcnNpOiAwMDAwMDAwMGRlYWRiZWVmICAgcmRpOiAwMDAwMDAwMGRlYWRiZWVmCihY
RU4pIHJicDogZmZmZjg4MDAyNzg4MWVlMCAgIHJzcDogZmZmZjg4MDAyNzg4MWVjOCAgIHI4OiAg
MDAwMDAwMDAwMDAwMDAwMAooWEVOKSByOTogIDAwMDAwMDAwMDAwMDAwMDAgICByMTA6IDAwMDAw
MDAwMDAwMDAwMDEgICByMTE6IDAwMDAwMDAwMDAwMDAyNDYKKFhFTikgcjEyOiBmZmZmZmZmZjgx
YWFmZGEwICAgcjEzOiAwMDAwMDAwMDAwMDAwMDAzICAgcjE0OiAwMDAwMDAwMDAwMDAwMDAwCihY
RU4pIHIxNTogMDAwMDAwMDAwMDAwMDAwMCAgIGNyMDogMDAwMDAwMDA4MDA1MDAzYiAgIGNyNDog
MDAwMDAwMDAwMDAwMjY2MAooWEVOKSBjcjM6IDAwMDAwMDAxNDFhMDUwMDAgICBjcjI6IDAwMDA3
ZjhkOWU4MDBhMDAKKFhFTikgZHM6IDAwMmIgICBlczogMDAyYiAgIGZzOiAwMDAwICAgZ3M6IDAw
MDAgICBzczogZTAyYiAgIGNzOiBlMDMzCihYRU4pIEd1ZXN0IHN0YWNrIHRyYWNlIGZyb20gcnNw
PWZmZmY4ODAwMjc4ODFlYzg6CihYRU4pICAgIDAwMDAwMDAwMDAwMDAwNDAgMDAwMDAwMDBmZmZm
ZmZmZiBmZmZmZmZmZjgxMDBhNWMwIGZmZmY4ODAwMjc4ODFmMTAKKFhFTikgICAgZmZmZmZmZmY4
MTAxYzY2MyBmZmZmODgwMDI3ODgxZmQ4IGZmZmZmZmZmODFhYWZkYTAgMDAwMDAwMDAwMDAwMDAw
MAooWEVOKSAgICAwMDAwMDAwMDAwMDAwMDAwIGZmZmY4ODAwMjc4ODFmNDAgZmZmZmZmZmY4MTAx
MzIzNiBmZmZmZmZmZjgxMDBhZGU5CihYRU4pICAgIDQ5ZGUxODMzZDEzZjJhMjYgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIGZmZmY4ODAwMjc4ODFmNTAKKFhFTikgICAgZmZmZmZm
ZmY4MTU2MzQzOCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMAooWEVOKSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMAooWEVOKSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgZmZm
Zjg4MDAyNzg4MWY1OCAwMDAwMDAwMDAwMDAwMDAwCihYRU4pIFtIOiBkdW1wIGhlYXAgaW5mb10K
KFhFTikgJ0gnIHByZXNzZWQgLT4gZHVtcGluZyBoZWFwIGluZm8gKG5vdy0weDVDOjk5RTgyMTU1
KQooWEVOKSBoZWFwW25vZGU9MF1bem9uZT0wXSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0w
XVt6b25lPTFdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9Ml0gLT4gMCBwYWdl
cwooWEVOKSBoZWFwW25vZGU9MF1bem9uZT0zXSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0w
XVt6b25lPTRdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9NV0gLT4gMCBwYWdl
cwooWEVOKSBoZWFwW25vZGU9MF1bem9uZT02XSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0w
XVt6b25lPTddIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9OF0gLT4gMCBwYWdl
cwooWEVOKSBoZWFwW25vZGU9MF1bem9uZT05XSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0w
XVt6b25lPTEwXSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTExXSAtPiAwIHBh
Z2VzCihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTEyXSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9k
ZT0wXVt6b25lPTEzXSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTE0XSAtPiAx
NjEyOCBwYWdlcwooWEVOKSBoZWFwW25vZGU9MF1bem9uZT0xNV0gLT4gMzI3NjggcGFnZXMKKFhF
TikgaGVhcFtub2RlPTBdW3pvbmU9MTZdIC0+IDY1NTM2IHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0w
XVt6b25lPTE3XSAtPiAxMzA1NTkgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MThdIC0+
IDI2MjE0MyBwYWdlcwooWEVOKSBoZWFwW25vZGU9MF1bem9uZT0xOV0gLT4gMTcyODM3IHBhZ2Vz
CihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTIwXSAtPiAxMzQyMjUgcGFnZXMKKFhFTikgaGVhcFtu
b2RlPTBdW3pvbmU9MjFdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MjJdIC0+
IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MjNdIC0+IDAgcGFnZXMKKFhFTikgaGVh
cFtub2RlPTBdW3pvbmU9MjRdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MjVd
IC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MjZdIC0+IDAgcGFnZXMKKFhFTikg
aGVhcFtub2RlPTBdW3pvbmU9MjddIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9
MjhdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MjldIC0+IDAgcGFnZXMKKFhF
TikgaGVhcFtub2RlPTBdW3pvbmU9MzBdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pv
bmU9MzFdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MzJdIC0+IDAgcGFnZXMK
KFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MzNdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBd
W3pvbmU9MzRdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MzVdIC0+IDAgcGFn
ZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MzZdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2Rl
PTBdW3pvbmU9MzddIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MzhdIC0+IDAg
cGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MzldIC0+IDAgcGFnZXMKKFhFTikgW0k6IGR1
bXAgSFZNIGlycSBpbmZvXQooWEVOKSAnSScgcHJlc3NlZCAtPiBkdW1waW5nIEhWTSBpcnEgaW5m
bwooWEVOKSBbTTogZHVtcCBNU0kgc3RhdGVdCihYRU4pIFBDSS1NU0kgaW50ZXJydXB0IGluZm9y
bWF0aW9uOgooWEVOKSAgTVNJICAgIDI2IHZlYz05MSBsb3dlc3QgIGVkZ2UgICBhc3NlcnQgIGxv
ZyBsb3dlc3QgZGVzdD0wMDAwMDAwMSBtYXNrPTAvMS8tMQooWEVOKSAgTVNJICAgIDI3IHZlYz0w
MCAgZml4ZWQgIGVkZ2UgZGVhc3NlcnQgcGh5cyBsb3dlc3QgZGVzdD0wMDAwMDAwMSBtYXNrPTAv
MS8tMQooWEVOKSAgTVNJICAgIDI4IHZlYz0zMSBsb3dlc3QgIGVkZ2UgICBhc3NlcnQgIGxvZyBs
b3dlc3QgZGVzdD0wMDAwMDAwMSBtYXNrPTAvMS8tMQooWEVOKSBbUTogZHVtcCBQQ0kgZGV2aWNl
c10KKFhFTikgPT09PSBQQ0kgZGV2aWNlcyA9PT09CihYRU4pID09PT0gc2VnbWVudCAwMDAwID09
PT0KKFhFTikgMDAwMDowNTowMS4wIC0gZG9tIDAgICAtIE1TSXMgPCA+CihYRU4pIDAwMDA6MDQ6
MDAuMCAtIGRvbSAwICAgLSBNU0lzIDwgPgooWEVOKSAwMDAwOjAzOjAwLjAgLSBkb20gMCAgIC0g
TVNJcyA8ID4KKFhFTikgMDAwMDowMjowMC4wIC0gZG9tIDAgICAtIE1TSXMgPCA+CihYRU4pIDAw
MDA6MDA6MWYuMyAtIGRvbSAwICAgLSBNU0lzIDwgPgooWEVOKSAwMDAwOjAwOjFmLjIgLSBkb20g
MCAgIC0gTVNJcyA8IDI3ID4KKFhFTikgMDAwMDowMDoxZi4wIC0gZG9tIDAgICAtIE1TSXMgPCA+
CihYRU4pIDAwMDA6MDA6MWUuMCAtIGRvbSAwICAgLSBNU0lzIDwgPgooWEVOKSAwMDAwOjAwOjFk
LjAgLSBkb20gMCAgIC0gTVNJcyA8ID4KKFhFTikgMDAwMDowMDoxYy43IC0gZG9tIDAgICAtIE1T
SXMgPCA+CihYRU4pIDAwMDA6MDA6MWMuNiAtIGRvbSAwICAgLSBNU0lzIDwgPgooWEVOKSAwMDAw
OjAwOjFjLjAgLSBkb20gMCAgIC0gTVNJcyA8ID4KKFhFTikgMDAwMDowMDoxYi4wIC0gZG9tIDAg
ICAtIE1TSXMgPCAyNiA+CihYRU4pIDAwMDA6MDA6MWEuMCAtIGRvbSAwICAgLSBNU0lzIDwgPgoo
WEVOKSAwMDAwOjAwOjE5LjAgLSBkb20gMCAgIC0gTVNJcyA8ID4KKFhFTikgMDAwMDowMDoxNi4z
IC0gZG9tIDAgICAtIE1TSXMgPCA+CihYRU4pIDAwMDA6MDA6MTYuMCAtIGRvbSAwICAgLSBNU0lz
IDwgPgooWEVOKSAwMDAwOjAwOjE0LjAgLSBkb20gMCAgIC0gTVNJcyA8ID4KKFhFTikgMDAwMDow
MDowMi4wIC0gZG9tIDAgICAtIE1TSXMgPCAyOCA+CihYRU4pIDAwMDA6MDA6MDAuMCAtIGRvbSAw
ICAgLSBNU0lzIDwgPgooWEVOKSBbVjogZHVtcCBpb21tdSBpbmZvXQooWEVOKSAKKFhFTikgaW9t
bXUgMDogbnJfcHRfbGV2ZWxzID0gMy4KKFhFTikgICBRdWV1ZWQgSW52YWxpZGF0aW9uOiBzdXBw
b3J0ZWQgYW5kIGVuYWJsZWQuCihYRU4pICAgSW50ZXJydXB0IFJlbWFwcGluZzogc3VwcG9ydGVk
IGFuZCBlbmFibGVkLgooWEVOKSAgIEludGVycnVwdCByZW1hcHBpbmcgdGFibGUgKG5yX2VudHJ5
PTB4MTAwMDAuIE9ubHkgZHVtcCBQPTEgZW50cmllcyBoZXJlKToKKFhFTikgICAgICAgIFNWVCAg
U1EgICBTSUQgICAgICBEU1QgIFYgIEFWTCBETE0gVE0gUkggRE0gRlBEIFAKKFhFTikgICAwMDAw
OiAgMSAgIDAgIDAwMTAgMDAwMDAwMDEgMzEgICAgMCAgIDEgIDAgIDEgIDEgICAwIDEKKFhFTikg
CihYRU4pIGlvbW11IDE6IG5yX3B0X2xldmVscyA9IDMuCihYRU4pICAgUXVldWVkIEludmFsaWRh
dGlvbjogc3VwcG9ydGVkIGFuZCBlbmFibGVkLgooWEVOKSAgIEludGVycnVwdCBSZW1hcHBpbmc6
IHN1cHBvcnRlZCBhbmQgZW5hYmxlZC4KKFhFTikgICBJbnRlcnJ1cHQgcmVtYXBwaW5nIHRhYmxl
IChucl9lbnRyeT0weDEwMDAwLiBPbmx5IGR1bXAgUD0xIGVudHJpZXMgaGVyZSk6CihYRU4pICAg
ICAgICBTVlQgIFNRICAgU0lEICAgICAgRFNUICBWICBBVkwgRExNIFRNIFJIIERNIEZQRCBQCihY
RU4pICAgMDAwMDogIDEgICAwICBmMGY4IDAwMDAwMDAxIDM4ICAgIDAgICAxICAwICAxICAxICAg
MCAxCihYRU4pICAgMDAwMTogIDEgICAwICBmMGY4IDAwMDAwMDAxIGYwICAgIDAgICAxICAwICAx
ICAxICAgMCAxCihYRU4pICAgMDAwMjogIDEgICAwICBmMGY4IDAwMDAwMDAxIDQwICAgIDAgICAx
ICAwICAxICAxICAgMCAxCihYRU4pICAgMDAwMzogIDEgICAwICBmMGY4IDAwMDAwMDAxIDQ4ICAg
IDAgICAxICAwICAxICAxICAgMCAxCihYRU4pICAgMDAwNDogIDEgICAwICBmMGY4IDAwMDAwMDAx
IDUwICAgIDAgICAxICAwICAxICAxICAgMCAxCihYRU4pICAgMDAwNTogIDEgICAwICBmMGY4IDAw
MDAwMDAxIDU4ICAgIDAgICAxICAwICAxICAxICAgMCAxCihYRU4pICAgMDAwNjogIDEgICAwICBm
MGY4IDAwMDAwMDAxIDYwICAgIDAgICAxICAwICAxICAxICAgMCAxCihYRU4pICAgMDAwNzogIDEg
ICAwICBmMGY4IDAwMDAwMDAxIDY4ICAgIDAgICAxICAwICAxICAxICAgMCAxCihYRU4pICAgMDAw
ODogIDEgICAwICBmMGY4IDAwMDAwMDAxIDcwICAgIDAgICAxICAxICAxICAxICAgMCAxCihYRU4p
ICAgMDAwOTogIDEgICAwICBmMGY4IDAwMDAwMDAxIDc4ICAgIDAgICAxICAwICAxICAxICAgMCAx
CihYRU4pICAgMDAwYTogIDEgICAwICBmMGY4IDAwMDAwMDAxIDg4ICAgIDAgICAxICAwICAxICAx
ICAgMCAxCihYRU4pICAgMDAwYjogIDEgICAwICBmMGY4IDAwMDAwMDAxIDkwICAgIDAgICAxICAw
ICAxICAxICAgMCAxCihYRU4pICAgMDAwYzogIDEgICAwICBmMGY4IDAwMDAwMDAxIDk4ICAgIDAg
ICAxICAwICAxICAxICAgMCAxCihYRU4pICAgMDAwZDogIDEgICAwICBmMGY4IDAwMDAwMDAxIGEw
ICAgIDAgICAxICAwICAxICAxICAgMCAxCihYRU4pICAgMDAwZTogIDEgICAwICBmMGY4IDAwMDAw
MDAxIGE4ICAgIDAgICAxICAwICAxICAxICAgMCAxCihYRU4pICAgMDAwZjogIDEgICAwICBmMGY4
IDAwMDAwMDAxIGIwICAgIDAgICAxICAxICAxICAxICAgMCAxCihYRU4pICAgMDAxMDogIDEgICAw
ICBmMGY4IDAwMDAwMDAxIGI4ICAgIDAgICAxICAxICAxICAxICAgMCAxCihYRU4pICAgMDAxMTog
IDEgICAwICBmMGY4IDAwMDAwMDAxIGMwICAgIDAgICAxICAxICAxICAxICAgMCAxCihYRU4pICAg
MDAxMjogIDEgICAwICBmMGY4IDAwMDAwMDAxIGM4ICAgIDAgICAxICAxICAxICAxICAgMCAxCihY
RU4pICAgMDAxMzogIDEgICAwICBmMGY4IDAwMDAwMDAxIGQwICAgIDAgICAxICAxICAxICAxICAg
MCAxCihYRU4pICAgMDAxNDogIDEgICAwICBmMGY4IDAwMDAwMDAxIGQ4ICAgIDAgICAxICAxICAx
ICAxICAgMCAxCihYRU4pICAgMDAxNTogIDEgICAwICAwMGQ4IDAwMDAwMDAxIDkxICAgIDAgICAx
ICAwICAxICAxICAgMCAxCihYRU4pICAgMDAxNjogIDEgICAwICAwMGZhIDAwMDAwMDAxIDAwICAg
IDAgICAwICAwICAwICAwICAgMCAxCihYRU4pIAooWEVOKSBSZWRpcmVjdGlvbiB0YWJsZSBvZiBJ
T0FQSUMgMDoKKFhFTikgICAjZW50cnkgSURYIEZNVCBNQVNLIFRSSUcgSVJSIFBPTCBTVEFUIERF
TEkgIFZFQ1RPUgooWEVOKSAgICAwMTogIDAwMDAgICAxICAgIDAgICAwICAgMCAgIDAgICAgMCAg
ICAwICAgICAzOAooWEVOKSAgICAwMjogIDAwMDEgICAxICAgIDAgICAwICAgMCAgIDAgICAgMCAg
ICAwICAgICBmMAooWEVOKSAgICAwMzogIDAwMDIgICAxICAgIDAgICAwICAgMCAgIDAgICAgMCAg
ICAwICAgICA0MAooWEVOKSAgICAwNDogIDAwMDMgICAxICAgIDAgICAwICAgMCAgIDAgICAgMCAg
ICAwICAgICA0OAooWEVOKSAgICAwNTogIDAwMDQgICAxICAgIDAgICAwICAgMCAgIDAgICAgMCAg
ICAwICAgICA1MAooWEVOKSAgICAwNjogIDAwMDUgICAxICAgIDAgICAwICAgMCAgIDAgICAgMCAg
ICAwICAgICA1OAooWEVOKSAgICAwNzogIDAwMDYgICAxICAgIDAgICAwICAgMCAgIDAgICAgMCAg
ICAwICAgICA2MAooWEVOKSAgICAwODogIDAwMDcgICAxICAgIDAgICAwICAgMCAgIDAgICAgMCAg
ICAwICAgICA2OAooWEVOKSAgICAwOTogIDAwMDggICAxICAgIDAgICAxICAgMCAgIDAgICAgMCAg
ICAwICAgICA3MAooWEVOKSAgICAwYTogIDAwMDkgICAxICAgIDAgICAwICAgMCAgIDAgICAgMCAg
ICAwICAgICA3OAooWEVOKSAgICAwYjogIDAwMGEgICAxICAgIDAgICAwICAgMCAgIDAgICAgMCAg
ICAwICAgICA4OAooWEVOKSAgICAwYzogIDAwMGIgICAxICAgIDAgICAwICAgMCAgIDAgICAgMCAg
ICAwICAgICA5MAooWEVOKSAgICAwZDogIDAwMGMgICAxICAgIDEgICAwICAgMCAgIDAgICAgMCAg
ICAwICAgICA5OAooWEVOKSAgICAwZTogIDAwMGQgICAxICAgIDAgICAwICAgMCAgIDAgICAgMCAg
ICAwICAgICBhMAooWEVOKSAgICAwZjogIDAwMGUgICAxICAgIDAgICAwICAgMCAgIDAgICAgMCAg
ICAwICAgICBhOAooWEVOKSAgICAxMDogIDAwMGYgICAxICAgIDEgICAxICAgMCAgIDEgICAgMCAg
ICAwICAgICBiMAooWEVOKSAgICAxMjogIDAwMTAgICAxICAgIDEgICAxICAgMCAgIDEgICAgMCAg
ICAwICAgICBiOAooWEVOKSAgICAxMzogIDAwMTEgICAxICAgIDEgICAxICAgMCAgIDEgICAgMCAg
ICAwICAgICBjMAooWEVOKSAgICAxNDogIDAwMTQgICAxICAgIDEgICAxICAgMCAgIDEgICAgMCAg
ICAwICAgICBkOAooWEVOKSAgICAxNjogIDAwMTMgICAxICAgIDEgICAxICAgMCAgIDEgICAgMCAg
ICAwICAgICBkMAooWEVOKSAgICAxNzogIDAwMTIgICAxICAgIDEgICAxICAgMCAgIDEgICAgMCAg
ICAwICAgICBjOAooWEVOKSBbYTogZHVtcCB0aW1lciBxdWV1ZXNdCihYRU4pIER1bXBpbmcgdGlt
ZXIgcXVldWVzOgooWEVOKSBDUFUwMDoKKFhFTikgICBleD0gICAtMTY4MnVzIHRpbWVyPWZmZmY4
MmM0ODAyZTI1YzggY2I9ZmZmZjgyYzQ4MDEzZDc1NyhmZmZmODJjNDgwMjcxODAwKSBuczE2NTUw
X3BvbGwrMHgwLzB4MzMKKFhFTikgICBleD0gICAgNzMxN3VzIHRpbWVyPWZmZmY4MzAxNDg5NzMx
YjggY2I9ZmZmZjgyYzQ4MDExOWQ3MihmZmZmODMwMTQ4OTczMTkwKSBjc2NoZWRfYWNjdCsweDAv
MHg0MmEKKFhFTikgICBleD0gICAgLTk2NHVzIHRpbWVyPWZmZmY4MmM0ODAzMDI2MDAgY2I9ZmZm
ZjgyYzQ4MDEzZjZmOChmZmZmODJjNDgwMzAyNWMwKSBkb19kYnNfdGltZXIrMHgwLzB4MjFmCihY
RU4pICAgZXg9MTAwOTY5Mzl1cyB0aW1lcj1mZmZmODJjNDgwMzAwNTgwIGNiPWZmZmY4MmM0ODAx
YTg4NTAoMDAwMDAwMDAwMDAwMDAwMCkgbWNlX3dvcmtfZm4rMHgwLzB4YTkKKFhFTikgICBleD01
MTk1NDk2M3VzIHRpbWVyPWZmZmY4MmM0ODAyZmUyODAgY2I9ZmZmZjgyYzQ4MDE4MDdjMigwMDAw
MDAwMDAwMDAwMDAwKSBwbHRfb3ZlcmZsb3crMHgwLzB4MTMxCihYRU4pICAgZXg9ICAgIDczMTd1
cyB0aW1lcj1mZmZmODMwMTQ4OTczZWE4IGNiPWZmZmY4MmM0ODAxMWFhZjAoMDAwMDAwMDAwMDAw
MDAwMCkgY3NjaGVkX3RpY2srMHgwLzB4MzE0CihYRU4pIENQVTAxOgooWEVOKSAgIGV4PSAgIDY4
NjM2dXMgdGltZXI9ZmZmZjgzMDBhODNmZDA2MCBjYj1mZmZmODJjNDgwMTIxYzZiKGZmZmY4MzAw
YTgzZmQwMDApIHZjcHVfc2luZ2xlc2hvdF90aW1lcl9mbisweDAvMHhiCihYRU4pICAgZXg9ICAg
NzA4NjF1cyB0aW1lcj1mZmZmODMwMTRiMzI5MmM4IGNiPWZmZmY4MmM0ODAxMWFhZjAoMDAwMDAw
MDAwMDAwMDAwMSkgY3NjaGVkX3RpY2srMHgwLzB4MzE0CihYRU4pICAgZXg9ICAgNzkwMzV1cyB0
aW1lcj1mZmZmODMwMTNkNGI4MzgwIGNiPWZmZmY4MmM0ODAxM2Y2ZjgoZmZmZjgzMDEzZDRiODM0
MCkgZG9fZGJzX3RpbWVyKzB4MC8weDIxZgooWEVOKSBDUFUwMjoKKFhFTikgICBleD0gICA5MzUw
MnVzIHRpbWVyPWZmZmY4MzAxNDg5OTQ1ZjggY2I9ZmZmZjgyYzQ4MDExYWFmMCgwMDAwMDAwMDAw
MDAwMDAyKSBjc2NoZWRfdGljaysweDAvMHgzMTQKKFhFTikgICBleD0gICA5OTAzNXVzIHRpbWVy
PWZmZmY4MzAxM2U2ZjEzODAgY2I9ZmZmZjgyYzQ4MDEzZjZmOChmZmZmODMwMTNlNmYxMzQwKSBk
b19kYnNfdGltZXIrMHgwLzB4MjFmCihYRU4pICAgZXg9ICAgOTg2MzZ1cyB0aW1lcj1mZmZmODMw
MGE4M2ZjMDYwIGNiPWZmZmY4MmM0ODAxMjFjNmIoZmZmZjgzMDBhODNmYzAwMCkgdmNwdV9zaW5n
bGVzaG90X3RpbWVyX2ZuKzB4MC8weGIKKFhFTikgQ1BVMDM6CihYRU4pICAgZXg9ICAxMjM4NjJ1
cyB0aW1lcj1mZmZmODMwMTRiMzI5OTA4IGNiPWZmZmY4MmM0ODAxMWFhZjAoMDAwMDAwMDAwMDAw
MDAwMykgY3NjaGVkX3RpY2srMHgwLzB4MzE0CihYRU4pICAgZXg9IDMzNzM2ODB1cyB0aW1lcj1m
ZmZmODMwMGFhNTgzMDYwIGNiPWZmZmY4MmM0ODAxMjFjNmIoZmZmZjgzMDBhYTU4MzAwMCkgdmNw
dV9zaW5nbGVzaG90X3RpbWVyX2ZuKzB4MC8weGIKKFhFTikgICBleD0gIDEzOTAzNXVzIHRpbWVy
PWZmZmY4MzAxNDg5OTUzODAgY2I9ZmZmZjgyYzQ4MDEzZjZmOChmZmZmODMwMTQ4OTk1MzQwKSBk
b19kYnNfdGltZXIrMHgwLzB4MjFmCihYRU4pIFtjOiBkdW1wIEFDUEkgQ3ggc3RydWN0dXJlc10K
KFhFTikgJ2MnIHByZXNzZWQgLT4gcHJpbnRpbmcgQUNQSSBDeCBzdHJ1Y3R1cmVzCihYRU4pID09
Y3B1MD09CihYRU4pIGFjdGl2ZSBzdGF0ZToJCUMyNTUKKFhFTikgbWF4X2NzdGF0ZToJCUM3CihY
RU4pIHN0YXRlczoKKFhFTikgICAgIEMxOgl0eXBlW0MxXSBsYXRlbmN5WzAwMF0gdXNhZ2VbMDAw
MDAwMDBdIG1ldGhvZFsgSEFMVF0gZHVyYXRpb25bMF0KKFhFTikgICAgIEMwOgl1c2FnZVswMDAw
MDAwMF0gZHVyYXRpb25bMzk4NDc3MDg2NzY2XQooWEVOKSBQQzJbMF0gUEMzWzBdIFBDNlswXSBQ
QzdbMF0KKFhFTikgQ0MzWzBdIENDNlswXSBDQzdbMF0KKFhFTikgPT1jcHUxPT0KKFhFTikgYWN0
aXZlIHN0YXRlOgkJQzI1NQooWEVOKSBtYXhfY3N0YXRlOgkJQzcKKFhFTikgc3RhdGVzOgooWEVO
KSAgICAgQzE6CXR5cGVbQzFdIGxhdGVuY3lbMDAwXSB1c2FnZVswMDAwMDAwMF0gbWV0aG9kWyBI
QUxUXSBkdXJhdGlvblswXQooWEVOKSAgICAgQzA6CXVzYWdlWzAwMDAwMDAwXSBkdXJhdGlvblsz
OTg1MDE4NzY3NzddCihYRU4pIFBDMlswXSBQQzNbMF0gUEM2WzBdIFBDN1swXQooWEVOKSBDQzNb
MF0gQ0M2WzBdIENDN1swXQooWEVOKSA9PWNwdTI9PQooWEVOKSBhY3RpdmUgc3RhdGU6CQlDMjU1
CihYRU4pIG1heF9jc3RhdGU6CQlDNwooWEVOKSBzdGF0ZXM6CihYRU4pICAgICBDMToJdHlwZVtD
MV0gbGF0ZW5jeVswMDBdIHVzYWdlWzAwMDAwMDAwXSBtZXRob2RbIEhBTFRdIGR1cmF0aW9uWzBd
CihYRU4pICAgICBDMDoJdXNhZ2VbMDAwMDAwMDBdIGR1cmF0aW9uWzM5ODUyNjY2NTU2OF0KKFhF
TikgUEMyWzBdIFBDM1swXSBQQzZbMF0gUEM3WzBdCihYRU4pIENDM1swXSBDQzZbMF0gQ0M3WzBd
CihYRU4pID09Y3B1Mz09CihYRU4pIGFjdGl2ZSBzdGF0ZToJCUMyNTUKKFhFTikgbWF4X2NzdGF0
ZToJCUM3CihYRU4pIHN0YXRlczoKKFhFTikgICAgIEMxOgl0eXBlW0MxXSBsYXRlbmN5WzAwMF0g
dXNhZ2VbMDAwMDAwMDBdIG1ldGhvZFsgSEFMVF0gZHVyYXRpb25bMF0KKFhFTikgICAgIEMwOgl1
c2FnZVswMDAwMDAwMF0gZHVyYXRpb25bMzk4NTUxNDU1NTExXQooWEVOKSBQQzJbMF0gUEMzWzBd
IFBDNlswXSBQQzdbMF0KKFhFTikgQ0MzWzBdIENDNlswXSBDQzdbMF0KKFhFTikgW2U6IGR1bXAg
ZXZ0Y2huIGluZm9dCihYRU4pICdlJyBwcmVzc2VkIC0+IGR1bXBpbmcgZXZlbnQtY2hhbm5lbCBp
bmZvCihYRU4pIEV2ZW50IGNoYW5uZWwgaW5mb3JtYXRpb24gZm9yIGRvbWFpbiAwOgooWEVOKSBQ
b2xsaW5nIHZDUFVzOiB7fQooWEVOKSAgICAgcG9ydCBbcC9tXQooWEVOKSAgICAgICAgMSBbMS8w
XTogcz01IG49MCB4PTAgdj0wCihYRU4pICAgICAgICAyIFsxLzFdOiBzPTYgbj0wIHg9MAooWEVO
KSAgICAgICAgMyBbMS8wXTogcz02IG49MCB4PTAKKFhFTikgICAgICAgIDQgWzAvMF06IHM9NiBu
PTAgeD0wCihYRU4pICAgICAgICA1IFswLzBdOiBzPTUgbj0wIHg9MCB2PTEKKFhFTikgICAgICAg
IDYgWzAvMF06IHM9NiBuPTAgeD0wCihYRU4pICAgICAgICA3IFswLzBdOiBzPTUgbj0xIHg9MCB2
PTAKKFhFTikgICAgICAgIDggWzEvMV06IHM9NiBuPTEgeD0wCihYRU4pICAgICAgICA5IFswLzBd
OiBzPTYgbj0xIHg9MAooWEVOKSAgICAgICAxMCBbMC8wXTogcz02IG49MSB4PTAKKFhFTikgICAg
ICAgMTEgWzAvMF06IHM9NSBuPTEgeD0wIHY9MQooWEVOKSAgICAgICAxMiBbMC8wXTogcz02IG49
MSB4PTAKKFhFTikgICAgICAgMTMgWzAvMF06IHM9NSBuPTIgeD0wIHY9MAooWEVOKSAgICAgICAx
NCBbMC8xXTogcz02IG49MiB4PTAKKFhFTikgICAgICAgMTUgWzAvMF06IHM9NiBuPTIgeD0wCihY
RU4pICAgICAgIDE2IFswLzBdOiBzPTYgbj0yIHg9MAooWEVOKSAgICAgICAxNyBbMC8wXTogcz01
IG49MiB4PTAgdj0xCihYRU4pICAgICAgIDE4IFswLzBdOiBzPTYgbj0yIHg9MAooWEVOKSAgICAg
ICAxOSBbMC8wXTogcz01IG49MyB4PTAgdj0wCihYRU4pICAgICAgIDIwIFsxLzFdOiBzPTYgbj0z
IHg9MAooWEVOKSAgICAgICAyMSBbMC8wXTogcz02IG49MyB4PTAKKFhFTikgICAgICAgMjIgWzAv
MF06IHM9NiBuPTMgeD0wCihYRU4pICAgICAgIDIzIFswLzBdOiBzPTUgbj0zIHg9MCB2PTEKKFhF
TikgICAgICAgMjQgWzAvMF06IHM9NiBuPTMgeD0wCihYRU4pICAgICAgIDI1IFswLzBdOiBzPTMg
bj0wIHg9MCBkPTAgcD0zNQooWEVOKSAgICAgICAyNiBbMC8wXTogcz00IG49MCB4PTAgcD05IGk9
OQooWEVOKSAgICAgICAyNyBbMC8wXTogcz01IG49MCB4PTAgdj0yCihYRU4pICAgICAgIDI4IFsw
LzBdOiBzPTQgbj0wIHg9MCBwPTggaT04CihYRU4pICAgICAgIDI5IFswLzBdOiBzPTQgbj0wIHg9
MCBwPTI3OCBpPTI3CihYRU4pICAgICAgIDMwIFswLzBdOiBzPTQgbj0wIHg9MCBwPTI3OSBpPTI2
CihYRU4pICAgICAgIDMxIFswLzBdOiBzPTQgbj0wIHg9MCBwPTI3NyBpPTI4CihYRU4pICAgICAg
IDM1IFswLzBdOiBzPTMgbj0wIHg9MCBkPTAgcD0yNQooWEVOKSAgICAgICAzNiBbMC8wXTogcz01
IG49MCB4PTAgdj0zCihYRU4pIFtnOiBwcmludCBncmFudCB0YWJsZSB1c2FnZV0KKFhFTikgZ250
dGFiX3VzYWdlX3ByaW50X2FsbCBbIGtleSAnZycgcHJlc3NlZAooWEVOKSAgICAgICAtLS0tLS0t
LSBhY3RpdmUgLS0tLS0tLS0gICAgICAgLS0tLS0tLS0gc2hhcmVkIC0tLS0tLS0tCihYRU4pIFty
ZWZdIGxvY2FsZG9tIG1mbiAgICAgIHBpbiAgICAgICAgICBsb2NhbGRvbSBnbWZuICAgICBmbGFn
cwooWEVOKSBncmFudC10YWJsZSBmb3IgcmVtb3RlIGRvbWFpbjogICAgMCAuLi4gbm8gYWN0aXZl
IGdyYW50IHRhYmxlIGVudHJpZXMKKFhFTikgZ250dGFiX3VzYWdlX3ByaW50X2FsbCBdIGRvbmUK
KFhFTikgW2k6IGR1bXAgaW50ZXJydXB0IGJpbmRpbmdzXQooWEVOKSBHdWVzdCBpbnRlcnJ1cHQg
aW5mb3JtYXRpb246CihYRU4pICAgIElSUTogICAwIGFmZmluaXR5OjAwMDEgdmVjOmYwIHR5cGU9
SU8tQVBJQy1lZGdlICAgIHN0YXR1cz0wMDAwMDAwMCBtYXBwZWQsIHVuYm91bmQKKFhFTikgICAg
SVJROiAgIDEgYWZmaW5pdHk6MDAwMSB2ZWM6MzggdHlwZT1JTy1BUElDLWVkZ2UgICAgc3RhdHVz
PTAwMDAwMDAyIG1hcHBlZCwgdW5ib3VuZAooWEVOKSAgICBJUlE6ICAgMiBhZmZpbml0eTpmZmZm
IHZlYzplMiB0eXBlPVhULVBJQyAgICAgICAgICBzdGF0dXM9MDAwMDAwMDAgbWFwcGVkLCB1bmJv
dW5kCihYRU4pICAgIElSUTogICAzIGFmZmluaXR5OjAwMDEgdmVjOjQwIHR5cGU9SU8tQVBJQy1l
ZGdlICAgIHN0YXR1cz0wMDAwMDAwMiBtYXBwZWQsIHVuYm91bmQKKFhFTikgICAgSVJROiAgIDQg
YWZmaW5pdHk6MDAwMSB2ZWM6NDggdHlwZT1JTy1BUElDLWVkZ2UgICAgc3RhdHVzPTAwMDAwMDAy
IG1hcHBlZCwgdW5ib3VuZAooWEVOKSAgICBJUlE6ICAgNSBhZmZpbml0eTowMDAxIHZlYzo1MCB0
eXBlPUlPLUFQSUMtZWRnZSAgICBzdGF0dXM9MDAwMDAwMDIgbWFwcGVkLCB1bmJvdW5kCihYRU4p
ICAgIElSUTogICA2IGFmZmluaXR5OjAwMDEgdmVjOjU4IHR5cGU9SU8tQVBJQy1lZGdlICAgIHN0
YXR1cz0wMDAwMDAwMiBtYXBwZWQsIHVuYm91bmQKKFhFTikgICAgSVJROiAgIDcgYWZmaW5pdHk6
MDAwMSB2ZWM6NjAgdHlwZT1JTy1BUElDLWVkZ2UgICAgc3RhdHVzPTAwMDAwMDAyIG1hcHBlZCwg
dW5ib3VuZAooWEVOKSAgICBJUlE6ICAgOCBhZmZpbml0eTowMDAxIHZlYzo2OCB0eXBlPUlPLUFQ
SUMtZWRnZSAgICBzdGF0dXM9MDAwMDAwMTAgaW4tZmxpZ2h0PTAgZG9tYWluLWxpc3Q9MDogIDgo
LVMtLSksCihYRU4pICAgIElSUTogICA5IGFmZmluaXR5OjAwMDEgdmVjOjcwIHR5cGU9SU8tQVBJ
Qy1sZXZlbCAgIHN0YXR1cz0wMDAwMDAxMCBpbi1mbGlnaHQ9MCBkb21haW4tbGlzdD0wOiAgOSgt
Uy0tKSwKKFhFTikgICAgSVJROiAgMTAgYWZmaW5pdHk6MDAwMSB2ZWM6NzggdHlwZT1JTy1BUElD
LWVkZ2UgICAgc3RhdHVzPTAwMDAwMDAyIG1hcHBlZCwgdW5ib3VuZAooWEVOKSAgICBJUlE6ICAx
MSBhZmZpbml0eTowMDAxIHZlYzo4OCB0eXBlPUlPLUFQSUMtZWRnZSAgICBzdGF0dXM9MDAwMDAw
MDIgbWFwcGVkLCB1bmJvdW5kCihYRU4pICAgIElSUTogIDEyIGFmZmluaXR5OjAwMDEgdmVjOjkw
IHR5cGU9SU8tQVBJQy1lZGdlICAgIHN0YXR1cz0wMDAwMDAwMiBtYXBwZWQsIHVuYm91bmQKKFhF
TikgICAgSVJROiAgMTMgYWZmaW5pdHk6MDAwZiB2ZWM6OTggdHlwZT1JTy1BUElDLWVkZ2UgICAg
c3RhdHVzPTAwMDAwMDAyIG1hcHBlZCwgdW5ib3VuZAooWEVOKSAgICBJUlE6ICAxNCBhZmZpbml0
eTowMDAxIHZlYzphMCB0eXBlPUlPLUFQSUMtZWRnZSAgICBzdGF0dXM9MDAwMDAwMDIgbWFwcGVk
LCB1bmJvdW5kCihYRU4pICAgIElSUTogIDE1IGFmZmluaXR5OjAwMDEgdmVjOmE4IHR5cGU9SU8t
QVBJQy1lZGdlICAgIHN0YXR1cz0wMDAwMDAwMiBtYXBwZWQsIHVuYm91bmQKKFhFTikgICAgSVJR
OiAgMTYgYWZmaW5pdHk6MDAwMSB2ZWM6YjAgdHlwZT1JTy1BUElDLWxldmVsICAgc3RhdHVzPTAw
MDAwMDAyIG1hcHBlZCwgdW5ib3VuZAooWEVOKSAgICBJUlE6ICAxOCBhZmZpbml0eTowMDBmIHZl
YzpiOCB0eXBlPUlPLUFQSUMtbGV2ZWwgICBzdGF0dXM9MDAwMDAwMDIgbWFwcGVkLCB1bmJvdW5k
CihYRU4pICAgIElSUTogIDE5IGFmZmluaXR5OjAwMDEgdmVjOmMwIHR5cGU9SU8tQVBJQy1sZXZl
bCAgIHN0YXR1cz0wMDAwMDAwMiBtYXBwZWQsIHVuYm91bmQKKFhFTikgICAgSVJROiAgMjAgYWZm
aW5pdHk6MDAwZiB2ZWM6ZDggdHlwZT1JTy1BUElDLWxldmVsICAgc3RhdHVzPTAwMDAwMDAyIG1h
cHBlZCwgdW5ib3VuZAooWEVOKSAgICBJUlE6ICAyMiBhZmZpbml0eTowMDAxIHZlYzpkMCB0eXBl
PUlPLUFQSUMtbGV2ZWwgICBzdGF0dXM9MDAwMDAwMDIgbWFwcGVkLCB1bmJvdW5kCihYRU4pICAg
IElSUTogIDIzIGFmZmluaXR5OjAwMDEgdmVjOmM4IHR5cGU9SU8tQVBJQy1sZXZlbCAgIHN0YXR1
cz0wMDAwMDAwMiBtYXBwZWQsIHVuYm91bmQKKFhFTikgICAgSVJROiAgMjQgYWZmaW5pdHk6MDAw
MSB2ZWM6MjggdHlwZT1ETUFfTVNJICAgICAgICAgc3RhdHVzPTAwMDAwMDAwIG1hcHBlZCwgdW5i
b3VuZAooWEVOKSAgICBJUlE6ICAyNSBhZmZpbml0eTowMDAxIHZlYzozMCB0eXBlPURNQV9NU0kg
ICAgICAgICBzdGF0dXM9MDAwMDAwMDAgbWFwcGVkLCB1bmJvdW5kCihYRU4pICAgIElSUTogIDI2
IGFmZmluaXR5OjAwMDEgdmVjOjkxIHR5cGU9UENJLU1TSSAgICAgICAgIHN0YXR1cz0wMDAwMDAx
MCBpbi1mbGlnaHQ9MCBkb21haW4tbGlzdD0wOjI3OSgtUy0tKSwKKFhFTikgICAgSVJROiAgMjcg
YWZmaW5pdHk6MDAwMSB2ZWM6MjkgdHlwZT1QQ0ktTVNJICAgICAgICAgc3RhdHVzPTAwMDAwMDEw
IGluLWZsaWdodD0wIGRvbWFpbi1saXN0PTA6Mjc4KC1TLS0pLAooWEVOKSAgICBJUlE6ICAyOCBh
ZmZpbml0eTowMDAxIHZlYzozMSB0eXBlPVBDSS1NU0kgICAgICAgICBzdGF0dXM9MDAwMDAwMTAg
aW4tZmxpZ2h0PTAgZG9tYWluLWxpc3Q9MDoyNzcoLVMtLSksCihYRU4pIElPLUFQSUMgaW50ZXJy
dXB0IGluZm9ybWF0aW9uOgooWEVOKSAgICAgSVJRICAwIFZlYzI0MDoKKFhFTikgICAgICAgQXBp
YyAweDAwLCBQaW4gIDI6IHZlYz1mMCBkZWxpdmVyeT1Mb1ByaSBkZXN0PUwgc3RhdHVzPTAgcG9s
YXJpdHk9MCBpcnI9MCB0cmlnPUUgbWFzaz0wIGRlc3RfaWQ6MAooWEVOKSAgICAgSVJRICAxIFZl
YyA1NjoKKFhFTikgICAgICAgQXBpYyAweDAwLCBQaW4gIDE6IHZlYz0zOCBkZWxpdmVyeT1Mb1By
aSBkZXN0PUwgc3RhdHVzPTAgcG9sYXJpdHk9MCBpcnI9MCB0cmlnPUUgbWFzaz0wIGRlc3RfaWQ6
MAooWEVOKSAgICAgSVJRICAzIFZlYyA2NDoKKFhFTikgICAgICAgQXBpYyAweDAwLCBQaW4gIDM6
IHZlYz00MCBkZWxpdmVyeT1Mb1ByaSBkZXN0PUwgc3RhdHVzPTAgcG9sYXJpdHk9MCBpcnI9MCB0
cmlnPUUgbWFzaz0wIGRlc3RfaWQ6MAooWEVOKSAgICAgSVJRICA0IFZlYyA3MjoKKFhFTikgICAg
ICAgQXBpYyAweDAwLCBQaW4gIDQ6IHZlYz00OCBkZWxpdmVyeT1Mb1ByaSBkZXN0PUwgc3RhdHVz
PTAgcG9sYXJpdHk9MCBpcnI9MCB0cmlnPUUgbWFzaz0wIGRlc3RfaWQ6MAooWEVOKSAgICAgSVJR
ICA1IFZlYyA4MDoKKFhFTikgICAgICAgQXBpYyAweDAwLCBQaW4gIDU6IHZlYz01MCBkZWxpdmVy
eT1Mb1ByaSBkZXN0PUwgc3RhdHVzPTAgcG9sYXJpdHk9MCBpcnI9MCB0cmlnPUUgbWFzaz0wIGRl
c3RfaWQ6MAooWEVOKSAgICAgSVJRICA2IFZlYyA4ODoKKFhFTikgICAgICAgQXBpYyAweDAwLCBQ
aW4gIDY6IHZlYz01OCBkZWxpdmVyeT1Mb1ByaSBkZXN0PUwgc3RhdHVzPTAgcG9sYXJpdHk9MCBp
cnI9MCB0cmlnPUUgbWFzaz0wIGRlc3RfaWQ6MAooWEVOKSAgICAgSVJRICA3IFZlYyA5NjoKKFhF
TikgICAgICAgQXBpYyAweDAwLCBQaW4gIDc6IHZlYz02MCBkZWxpdmVyeT1Mb1ByaSBkZXN0PUwg
c3RhdHVzPTAgcG9sYXJpdHk9MCBpcnI9MCB0cmlnPUUgbWFzaz0wIGRlc3RfaWQ6MAooWEVOKSAg
ICAgSVJRICA4IFZlYzEwNDoKKFhFTikgICAgICAgQXBpYyAweDAwLCBQaW4gIDg6IHZlYz02OCBk
ZWxpdmVyeT1Mb1ByaSBkZXN0PUwgc3RhdHVzPTAgcG9sYXJpdHk9MCBpcnI9MCB0cmlnPUUgbWFz
az0wIGRlc3RfaWQ6MAooWEVOKSAgICAgSVJRICA5IFZlYzExMjoKKFhFTikgICAgICAgQXBpYyAw
eDAwLCBQaW4gIDk6IHZlYz03MCBkZWxpdmVyeT1Mb1ByaSBkZXN0PUwgc3RhdHVzPTAgcG9sYXJp
dHk9MCBpcnI9MCB0cmlnPUwgbWFzaz0wIGRlc3RfaWQ6MAooWEVOKSAgICAgSVJRIDEwIFZlYzEy
MDoKKFhFTikgICAgICAgQXBpYyAweDAwLCBQaW4gMTA6IHZlYz03OCBkZWxpdmVyeT1Mb1ByaSBk
ZXN0PUwgc3RhdHVzPTAgcG9sYXJpdHk9MCBpcnI9MCB0cmlnPUUgbWFzaz0wIGRlc3RfaWQ6MAoo
WEVOKSAgICAgSVJRIDExIFZlYzEzNjoKKFhFTikgICAgICAgQXBpYyAweDAwLCBQaW4gMTE6IHZl
Yz04OCBkZWxpdmVyeT1Mb1ByaSBkZXN0PUwgc3RhdHVzPTAgcG9sYXJpdHk9MCBpcnI9MCB0cmln
PUUgbWFzaz0wIGRlc3RfaWQ6MAooWEVOKSAgICAgSVJRIDEyIFZlYzE0NDoKKFhFTikgICAgICAg
QXBpYyAweDAwLCBQaW4gMTI6IHZlYz05MCBkZWxpdmVyeT1Mb1ByaSBkZXN0PUwgc3RhdHVzPTAg
cG9sYXJpdHk9MCBpcnI9MCB0cmlnPUUgbWFzaz0wIGRlc3RfaWQ6MAooWEVOKSAgICAgSVJRIDEz
IFZlYzE1MjoKKFhFTikgICAgICAgQXBpYyAweDAwLCBQaW4gMTM6IHZlYz05OCBkZWxpdmVyeT1M
b1ByaSBkZXN0PUwgc3RhdHVzPTAgcG9sYXJpdHk9MCBpcnI9MCB0cmlnPUUgbWFzaz0xIGRlc3Rf
aWQ6MAooWEVOKSAgICAgSVJRIDE0IFZlYzE2MDoKKFhFTikgICAgICAgQXBpYyAweDAwLCBQaW4g
MTQ6IHZlYz1hMCBkZWxpdmVyeT1Mb1ByaSBkZXN0PUwgc3RhdHVzPTAgcG9sYXJpdHk9MCBpcnI9
MCB0cmlnPUUgbWFzaz0wIGRlc3RfaWQ6MAooWEVOKSAgICAgSVJRIDE1IFZlYzE2ODoKKFhFTikg
ICAgICAgQXBpYyAweDAwLCBQaW4gMTU6IHZlYz1hOCBkZWxpdmVyeT1Mb1ByaSBkZXN0PUwgc3Rh
dHVzPTAgcG9sYXJpdHk9MCBpcnI9MCB0cmlnPUUgbWFzaz0wIGRlc3RfaWQ6MAooWEVOKSAgICAg
SVJRIDE2IFZlYzE3NjoKKFhFTikgICAgICAgQXBpYyAweDAwLCBQaW4gMTY6IHZlYz1iMCBkZWxp
dmVyeT1Mb1ByaSBkZXN0PUwgc3RhdHVzPTAgcG9sYXJpdHk9MSBpcnI9MCB0cmlnPUwgbWFzaz0x
IGRlc3RfaWQ6MAooWEVOKSAgICAgSVJRIDE4IFZlYzE4NDoKKFhFTikgICAgICAgQXBpYyAweDAw
LCBQaW4gMTg6IHZlYz1iOCBkZWxpdmVyeT1Mb1ByaSBkZXN0PUwgc3RhdHVzPTAgcG9sYXJpdHk9
MSBpcnI9MCB0cmlnPUwgbWFzaz0xIGRlc3RfaWQ6MAooWEVOKSAgICAgSVJRIDE5IFZlYzE5MjoK
KFhFTikgICAgICAgQXBpYyAweDAwLCBQaW4gMTk6IHZlYz1jMCBkZWxpdmVyeT1Mb1ByaSBkZXN0
PUwgc3RhdHVzPTAgcG9sYXJpdHk9MSBpcnI9MCB0cmlnPUwgbWFzaz0xIGRlc3RfaWQ6MAooWEVO
KSAgICAgSVJRIDIwIFZlYzIxNjoKKFhFTikgICAgICAgQXBpYyAweDAwLCBQaW4gMjA6IHZlYz1k
OCBkZWxpdmVyeT1Mb1ByaSBkZXN0PUwgc3RhdHVzPTAgcG9sYXJpdHk9MSBpcnI9MCB0cmlnPUwg
bWFzaz0xIGRlc3RfaWQ6MAooWEVOKSAgICAgSVJRIDIyIFZlYzIwODoKKFhFTikgICAgICAgQXBp
YyAweDAwLCBQaW4gMjI6IHZlYz1kMCBkZWxpdmVyeT1Mb1ByaSBkZXN0PUwgc3RhdHVzPTAgcG9s
YXJpdHk9MSBpcnI9MCB0cmlnPUwgbWFzaz0xIGRlc3RfaWQ6MAooWEVOKSAgICAgSVJRIDIzIFZl
YzIwMDoKKFhFTikgICAgICAgQXBpYyAweDAwLCBQaW4gMjM6IHZlYz1jOCBkZWxpdmVyeT1Mb1By
aSBkZXN0PUwgc3RhdHVzPTAgcG9sYXJpdHk9MSBpcnI9MCB0cmlnPUwgbWFzaz0xIGRlc3RfaWQ6
MAooWEVOKSBbbTogbWVtb3J5IGluZm9dCihYRU4pIFBoeXNpY2FsIG1lbW9yeSBpbmZvcm1hdGlv
bjoKKFhFTikgICAgIFhlbiBoZWFwOiAwa0IgZnJlZQooWEVOKSAgICAgaGVhcFsxNF06IDY0NTEy
a0IgZnJlZQooWEVOKSAgICAgaGVhcFsxNV06IDEzMTA3MmtCIGZyZWUKKFhFTikgICAgIGhlYXBb
MTZdOiAyNjIxNDRrQiBmcmVlCihYRU4pICAgICBoZWFwWzE3XTogNTIyMjM2a0IgZnJlZQooWEVO
KSAgICAgaGVhcFsxOF06IDEwNDg1NzJrQiBmcmVlCihYRU4pICAgICBoZWFwWzE5XTogNjkxMzQ4
a0IgZnJlZQooWEVOKSAgICAgaGVhcFsyMF06IDUzNjkwMGtCIGZyZWUKKFhFTikgICAgIERvbSBo
ZWFwOiAzMjU2Nzg0a0IgZnJlZQooWEVOKSBbbjogTk1JIHN0YXRpc3RpY3NdCihYRU4pIENQVQlO
TUkKKFhFTikgICAwCSAgMAooWEVOKSAgIDEJICAwCihYRU4pICAgMgkgIDAKKFhFTikgICAzCSAg
MAooWEVOKSBkb20wIHZjcHUwOiBOTUkgbmVpdGhlciBwZW5kaW5nIG5vciBtYXNrZWQKKFhFTikg
W3E6IGR1bXAgZG9tYWluIChhbmQgZ3Vlc3QgZGVidWcpIGluZm9dCihYRU4pICdxJyBwcmVzc2Vk
IC0+IGR1bXBpbmcgZG9tYWluIGluZm8gKG5vdz0weDVDOkY2NzNGRTE2KQooWEVOKSBHZW5lcmFs
IGluZm9ybWF0aW9uIGZvciBkb21haW4gMDoKKFhFTikgICAgIHJlZmNudD0zIGR5aW5nPTAgcGF1
c2VfY291bnQ9MAooWEVOKSAgICAgbnJfcGFnZXM9MTg3NTM5IHhlbmhlYXBfcGFnZXM9NiBzaGFy
ZWRfcGFnZXM9MCBwYWdlZF9wYWdlcz0wIGRpcnR5X2NwdXM9ezEtMn0gbWF4X3BhZ2VzPTE4ODE0
NwooWEVOKSAgICAgaGFuZGxlPTAwMDAwMDAwLTAwMDAtMDAwMC0wMDAwLTAwMDAwMDAwMDAwMCB2
bV9hc3Npc3Q9MDAwMDAwMGQKKFhFTikgUmFuZ2VzZXRzIGJlbG9uZ2luZyB0byBkb21haW4gMDoK
KFhFTikgICAgIEkvTyBQb3J0cyAgeyAwLTFmLCAyMi0zZiwgNDQtNjAsIDYyLTlmLCBhMi00MDcs
IDQwYy1jZmIsIGQwMC0yMDRmLCAyMDU4LWZmZmYgfQooWEVOKSAgICAgSW50ZXJydXB0cyB7IDAt
Mjc0LCAyNzctMjc5IH0KKFhFTikgICAgIEkvTyBNZW1vcnkgeyAwLWZlYmZmLCBmZWMwMS1mZWRm
ZiwgZmVlMDEtZmZmZmZmZmZmZmZmZmZmZiB9CihYRU4pIE1lbW9yeSBwYWdlcyBiZWxvbmdpbmcg
dG8gZG9tYWluIDA6CihYRU4pICAgICBEb21QYWdlIGxpc3QgdG9vIGxvbmcgdG8gZGlzcGxheQoo
WEVOKSAgICAgWGVuUGFnZSAwMDAwMDAwMDAwMTQ4OTE3OiBjYWY9YzAwMDAwMDAwMDAwMDAwMiwg
dGFmPTc0MDAwMDAwMDAwMDAwMDIKKFhFTikgICAgIFhlblBhZ2UgMDAwMDAwMDAwMDE0ODkxNjog
Y2FmPWMwMDAwMDAwMDAwMDAwMDEsIHRhZj03NDAwMDAwMDAwMDAwMDAxCihYRU4pICAgICBYZW5Q
YWdlIDAwMDAwMDAwMDAxNDg5MTU6IGNhZj1jMDAwMDAwMDAwMDAwMDAxLCB0YWY9NzQwMDAwMDAw
MDAwMDAwMQooWEVOKSAgICAgWGVuUGFnZSAwMDAwMDAwMDAwMTQ4OTE0OiBjYWY9YzAwMDAwMDAw
MDAwMDAwMSwgdGFmPTc0MDAwMDAwMDAwMDAwMDEKKFhFTikgICAgIFhlblBhZ2UgMDAwMDAwMDAw
MDBhYTBmZDogY2FmPWMwMDAwMDAwMDAwMDAwMDIsIHRhZj03NDAwMDAwMDAwMDAwMDAyCihYRU4p
ICAgICBYZW5QYWdlIDAwMDAwMDAwMDAxM2Y0Mjg6IGNhZj1jMDAwMDAwMDAwMDAwMDAyLCB0YWY9
NzQwMDAwMDAwMDAwMDAwMgooWEVOKSBWQ1BVIGluZm9ybWF0aW9uIGFuZCBjYWxsYmFja3MgZm9y
IGRvbWFpbiAwOgooWEVOKSAgICAgVkNQVTA6IENQVTAgW2hhcz1GXSBwb2xsPTAgdXBjYWxsX3Bl
bmQgPSAwMSwgdXBjYWxsX21hc2sgPSAwMCBkaXJ0eV9jcHVzPXt9IGNwdV9hZmZpbml0eT17MH0K
KFhFTikgICAgIHBhdXNlX2NvdW50PTAgcGF1c2VfZmxhZ3M9MAooWEVOKSAgICAgTm8gcGVyaW9k
aWMgdGltZXIKKFhFTikgICAgIFZDUFUxOiBDUFUxIFtoYXM9Rl0gcG9sbD0wIHVwY2FsbF9wZW5k
ID0gMDAsIHVwY2FsbF9tYXNrID0gMDAgZGlydHlfY3B1cz17MX0gY3B1X2FmZmluaXR5PXswLTE1
fQooWEVOKSAgICAgcGF1c2VfY291bnQ9MCBwYXVzZV9mbGFncz0xCihYRU4pICAgICBObyBwZXJp
b2RpYyB0aW1lcgooWEVOKSAgICAgVkNQVTI6IENQVTIgW2hhcz1GXSBwb2xsPTAgdXBjYWxsX3Bl
bmQgPSAwMCwgdXBjYWxsX21hc2sgPSAwMCBkaXJ0eV9jcHVzPXsyfSBjcHVfYWZmaW5pdHk9ezAt
MTV9CihYRU4pICAgICBwYXVzZV9jb3VudD0wIHBhdXNlX2ZsYWdzPTEKKFhFTikgICAgIE5vIHBl
cmlvZGljIHRpbWVyCihYRU4pICAgICBWQ1BVMzogQ1BVMyBbaGFzPUZdIHBvbGw9MCB1cGNhbGxf
cGVuZCA9IDAwLCB1cGNhbGxfbWFzayA9IDAwIGRpcnR5X2NwdXM9e30gY3B1X2FmZmluaXR5PXsw
LTE1fQooWEVOKSAgICAgcGF1c2VfY291bnQ9MCBwYXVzZV9mbGFncz0xCihYRU4pICAgICBObyBw
ZXJpb2RpYyB0aW1lcgooWEVOKSBOb3RpZnlpbmcgZ3Vlc3QgMDowICh2aXJxIDEsIHBvcnQgNSwg
c3RhdCAwLzAvLTEpCihYRU4pIE5vdGlmeWluZyBndWVzdCAwOjEgKHZpcnEgMSwgcG9ydCAxMSwg
c3RhdCAwLzAvMCkKKFhFTikgTm90aWZ5aW5nIGd1ZXN0IDA6MiAodmlycSAxLCBwb3J0IDE3LCBz
dGF0IDAvMC8wKQooWEVOKSBOb3RpZnlpbmcgZ3Vlc3QgMDozICh2aXJxIDEsIHBvcnQgMjMsIHN0
YXQgMC8wLzApCgooWEVOKSBTaGFyZWQgZnJhbWVzIDAgLS0gU2F2ZWQgZnJhbWVzIDAKWyAgMzk5
LjQ1MDY5N10gdihYRU4pIFtyOiBkdW1wIHJ1biBxdWV1ZXNdCmNwdSAxCihYRU4pIHNjaGVkX3Nt
dF9wb3dlcl9zYXZpbmdzOiBkaXNhYmxlZAooWEVOKSBOT1c9MHgwMDAwMDA1RDAyNDYxNTVBCihY
RU4pIElkbGUgY3B1cG9vbDoKKFhFTikgU2NoZWR1bGVyOiBTTVAgQ3JlZGl0IFNjaGVkdWxlciAo
Y3JlZGl0KQpbICAzOTkuNDUwNjk4XSAgKFhFTikgaW5mbzoKKFhFTikgCW5jcHVzICAgICAgICAg
ICAgICA9IDQKKFhFTikgCW1hc3RlciAgICAgICAgICAgICA9IDAKKFhFTikgCWNyZWRpdCAgICAg
ICAgICAgICA9IDQwMAooWEVOKSAJY3JlZGl0IGJhbGFuY2UgICAgID0gNjcKKFhFTikgCXdlaWdo
dCAgICAgICAgICAgICA9IDI1NgooWEVOKSAJcnVucV9zb3J0ICAgICAgICAgID0gMjgwMgooWEVO
KSAJZGVmYXVsdC13ZWlnaHQgICAgID0gMjU2CihYRU4pIAl0c2xpY2UgICAgICAgICAgICAgPSAx
MG1zCihYRU4pIAlyYXRlbGltaXQgICAgICAgICAgPSAxMDAwdXMKKFhFTikgCWNyZWRpdHMgcGVy
IG1zZWMgICA9IDEwCihYRU4pIAl0aWNrcyBwZXIgdHNsaWNlICAgPSAxCihYRU4pIAltaWdyYXRp
b24gZGVsYXkgICAgPSAwdXMKIChYRU4pIGlkbGVyczogMDAwYwooWEVOKSBhY3RpdmUgdmNwdXM6
CjA6IG1hc2tlZD0wIHBlbmQoWEVOKSAJICAxOiBpbmc9MSBldmVudF9zZWwgWzAuMV0gcHJpPS0x
IGZsYWdzPTAgY3B1PTEgY3JlZGl0PS01MjEgW3c9MjU2XQowMDAwMDAwMDAwMDAwMDAxKFhFTikg
Q3B1cG9vbCAwOgooWEVOKSBTY2hlZHVsZXI6IFNNUCBDcmVkaXQgU2NoZWR1bGVyIChjcmVkaXQp
CihYRU4pIGluZm86CihYRU4pIAluY3B1cyAgICAgICAgICAgICAgPSA0CihYRU4pIAltYXN0ZXIg
ICAgICAgICAgICAgPSAwCihYRU4pIAljcmVkaXQgICAgICAgICAgICAgPSA0MDAKKFhFTikgCWNy
ZWRpdCBiYWxhbmNlICAgICA9IDY3CihYRU4pIAl3ZWlnaHQgICAgICAgICAgICAgPSAyNTYKKFhF
TikgCXJ1bnFfc29ydCAgICAgICAgICA9IDI4MDIKKFhFTikgCWRlZmF1bHQtd2VpZ2h0ICAgICA9
IDI1NgooWEVOKSAJdHNsaWNlICAgICAgICAgICAgID0gMTBtcwooWEVOKSAJcmF0ZWxpbWl0ICAg
ICAgICAgID0gMTAwMHVzCihYRU4pIAljcmVkaXRzIHBlciBtc2VjICAgPSAxMAooWEVOKSAJdGlj
a3MgcGVyIHRzbGljZSAgID0gMQooWEVOKSAJbWlncmF0aW9uIGRlbGF5ICAgID0gMHVzCgooWEVO
KSBpZGxlcnM6IDAwMGMKKFhFTikgYWN0aXZlIHZjcHVzOgooWEVOKSAJICAxOiBbMC4xXSBwcmk9
LTEgZmxhZ3M9MCBjcHU9MSBjcmVkaXQ9LTEwNzkgW3c9MjU2XQpbICAzOTkuNTIxMjEzXSAgKFhF
TikgQ1BVWzAwXSAgIHNvcnQ9MjgwMiwgc2libGluZz0wMDAxLCAxOiBtYXNrZWQ9MCBwZW5kY29y
ZT0wMDBmCihYRU4pIAlydW46IFszMjc2Ny4wXSBwcmk9MCBmbGFncz0wIGNwdT0wCihYRU4pIAkg
IDE6IFswLjBdIHByaT0wIGZsYWdzPTAgY3B1PTAgY3JlZGl0PTg0IFt3PTI1Nl0KaW5nPTEgZXZl
bnRfc2VsIChYRU4pIENQVVswMV0gIHNvcnQ9MjgwMiwgc2libGluZz0wMDAyLCBjb3JlPTAwMGYK
KFhFTikgCXJ1bjogWzAuMV0gcHJpPS0xIGZsYWdzPTAgY3B1PTEgY3JlZGl0PS0xMzQ5IFt3PTI1
Nl0KKFhFTikgCSAgMTogWzMyNzY3LjFdIHByaT0tNjQgZmxhZ3M9MCBjcHU9MQooWEVOKSBDUFVb
MDJdIDAwMDAwMDAwMDAwMDAwMDEgc29ydD0yODAyLCBzaWJsaW5nPTAwMDQsIApjb3JlPTAwMGYK
WyAgMzk5LjU4ODk1N10gIChYRU4pIAlydW46ICBbMzI3NjcuMl0gcHJpPS02NCBmbGFncz0wIGNw
dT0yCjI6IG1hc2tlZD0xIHBlbmQoWEVOKSBDUFVbMDNdIGluZz0xIGV2ZW50X3NlbCAgc29ydD0y
ODAyLCBzaWJsaW5nPTAwMDgsIDAwMDAwMDAwMDAwMDAwMDFjb3JlPTAwMGYKKFhFTikgCXJ1bjog
ClszMjc2Ny4zXSBwcmk9LTY0IGZsYWdzPTAgY3B1PTMKWyAgMzk5LjYyNjk5Ml0gIChYRU4pIFtz
OiBkdW1wIHNvZnR0c2Mgc3RhdHNdCiAoWEVOKSBUU0MgbWFya2VkIGFzIHJlbGlhYmxlLCB3YXJw
ID0gMjQ5OTQwMTU3Njg1IChjb3VudD00KQozOiBtYXNrZWQ9MSBwZW5kKFhFTikgTm8gZG9tYWlu
cyBoYXZlIGVtdWxhdGVkIFRTQwppbmc9MCBldmVudF9zZWwgKFhFTikgW3Q6IGRpc3BsYXkgbXVs
dGktY3B1IGNsb2NrIGluZm9dCjAwMDAwMDAwMDAwMDAwMDAoWEVOKSBTeW5jZWQgc3RpbWUgc2tl
dzogbWF4PTgxOTlucyBhdmc9NDQxNm5zIHNhbXBsZXM9MyBjdXJyZW50PTUwMjVucwooWEVOKSBT
eW5jZWQgY3ljbGVzIHNrZXc6IG1heD0xNzMgYXZnPTE2NSBzYW1wbGVzPTMgY3VycmVudD0xNzMK
CihYRU4pIFt1OiBkdW1wIG51bWEgaW5mb10KWyAgMzk5LjY2NzQwMV0gIChYRU4pICd1JyBwcmVz
c2VkIC0+IGR1bXBpbmcgbnVtYSBpbmZvIChub3ctMHg1RDowRkU5NDJFMSkKIChYRU4pIGlkeDAg
LT4gTk9ERTAgc3RhcnQtPjAgc2l6ZS0+MTM2OTYwMCBmcmVlLT44MTQxOTYKKFhFTikgcGh5c190
b19uaWQoMDAwMDAwMDAwMDAwMTAwMCkgLT4gMCBzaG91bGQgYmUgMAoKKFhFTikgQ1BVMCAtPiBO
T0RFMAooWEVOKSBDUFUxIC0+IE5PREUwCihYRU4pIENQVTIgLT4gTk9ERTAKKFhFTikgQ1BVMyAt
PiBOT0RFMAooWEVOKSBNZW1vcnkgbG9jYXRpb24gb2YgZWFjaCBkb21haW46CihYRU4pIERvbWFp
biAwICh0b3RhbDogMTg3NTM5KToKWyAgMzk5LjcwNTc5OV0gcChYRU4pICAgICBOb2RlIDA6IDE4
NzUzOQplbmRpbmc6CihYRU4pIFt2OiBkdW1wIEludGVsJ3MgVk1DU10KWyAgMzk5LjcwNTgwMF0g
IChYRU4pICoqKioqKioqKioqIFZNQ1MgQXJlYXMgKioqKioqKioqKioqKioKICAoWEVOKSAqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKgowMDAwMDAwMDAwMDAwMDAwKFhFTikg
W3o6IHByaW50IGlvYXBpYyBpbmZvXQogKFhFTikgbnVtYmVyIG9mIE1QIElSUSBzb3VyY2VzOiAx
NS4KMDAwMDAwMDAwMDAwMDAwMChYRU4pIG51bWJlciBvZiBJTy1BUElDICMyIHJlZ2lzdGVyczog
MjQuCihYRU4pIHRlc3RpbmcgdGhlIElPIEFQSUMuLi4uLi4uLi4uLi4uLi4uLi4uLi4uLgogKFhF
TikgSU8gQVBJQyAjMi4uLi4uLgooWEVOKSAuLi4uIHJlZ2lzdGVyICMwMDogMDIwMDAwMDAKKFhF
TikgLi4uLi4uLiAgICA6IHBoeXNpY2FsIEFQSUMgaWQ6IDAyCihYRU4pIC4uLi4uLi4gICAgOiBE
ZWxpdmVyeSBUeXBlOiAwCihYRU4pIC4uLi4uLi4gICAgOiBMVFMgICAgICAgICAgOiAwCjAwMDAw
MDAwMDAwMDAwMDAoWEVOKSAuLi4uIHJlZ2lzdGVyICMwMTogMDAxNzAwMjAKKFhFTikgLi4uLi4u
LiAgICAgOiBtYXggcmVkaXJlY3Rpb24gZW50cmllczogMDAxNwooWEVOKSAuLi4uLi4uICAgICA6
IFBSUSBpbXBsZW1lbnRlZDogMAooWEVOKSAuLi4uLi4uICAgICA6IElPIEFQSUMgdmVyc2lvbjog
MDAyMAooWEVOKSAuLi4uIElSUSByZWRpcmVjdGlvbiB0YWJsZToKKFhFTikgIE5SIExvZyBQaHkg
TWFzayBUcmlnIElSUiBQb2wgU3RhdCBEZXN0IERlbGkgVmVjdDogICAKIChYRU4pICAwMCAwMDAg
MDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAwICAgIDAwCjAwMDAwMDAwMDAwMDAwMDAo
WEVOKSAgMDEgMDAwIDAwICAwICAgIDAgICAgMCAgIDAgICAwICAgIDEgICAgMSAgICAzOAogKFhF
TikgIDAyIDAwMCAwMCAgMCAgICAwICAgIDAgICAwICAgMCAgICAxICAgIDEgICAgRjAKMDAwMDAw
MDAwMDAwMDAwMChYRU4pICAwMyAwMDAgMDAgIDAgICAgMCAgICAwICAgMCAgIDAgICAgMSAgICAx
ICAgIDQwCiAoWEVOKSAgMDQgMDAwIDAwICAwICAgIDAgICAgMCAgIDAgICAwICAgIDEgICAgMSAg
ICA0OAowMDAwMDAwMDAwMDAwMDAwKFhFTikgIDA1IDAwMCAwMCAgMCAgICAwICAgIDAgICAwICAg
MCAgICAxICAgIDEgICAgNTAKIChYRU4pICAwNiAwMDAgMDAgIDAgICAgMCAgICAwICAgMCAgIDAg
ICAgMSAgICAxICAgIDU4CjAwMDAwMDAwMDAwMDAwMDAoWEVOKSAgMDcgMDAwIDAwICAwICAgIDAg
ICAgMCAgIDAgICAwICAgIDEgICAgMSAgICA2MAogKFhFTikgIDA4IDAwMCAwMCAgMCAgICAwICAg
IDAgICAwICAgMCAgICAxICAgIDEgICAgNjgKMDAwMDAwMDAwMDAwMDAwMChYRU4pICAwOSAwMDAg
MDAgIDAgICAgMSAgICAwICAgMCAgIDAgICAgMSAgICAxICAgIDcwCgooWEVOKSAgMGEgMDAwIDAw
ICAwICAgIDAgICAgMCAgIDAgICAwICAgIDEgICAgMSAgICA3OApbICAzOTkuODUwNjkwXSAgKFhF
TikgIDBiIDAwMCAwMCAgMCAgICAwICAgIDAgICAwICAgMCAgICAxICAgIDEgICAgODgKICAoWEVO
KSAgMGMgMDAwIDAwICAwICAgIDAgICAgMCAgIDAgICAwICAgIDEgICAgMSAgICA5MAowMDAwMDAw
MDAwMDAwMDAwKFhFTikgIDBkIDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAgICAxICAgIDEg
ICAgOTgKIChYRU4pICAwZSAwMDAgMDAgIDAgICAgMCAgICAwICAgMCAgIDAgICAgMSAgICAxICAg
IEEwCjAwMDAwMDAwMDAwMDAwMDAoWEVOKSAgMGYgMDAwIDAwICAwICAgIDAgICAgMCAgIDAgICAw
ICAgIDEgICAgMSAgICBBOAogKFhFTikgIDEwIDAwMCAwMCAgMSAgICAxICAgIDAgICAxICAgMCAg
ICAxICAgIDEgICAgQjAKMDAwMDAwMDAwMDAwMDAwMChYRU4pICAxMSAwMDAgMDAgIDEgICAgMCAg
ICAwICAgMCAgIDAgICAgMCAgICAwICAgIDAwCiAoWEVOKSAgMTIgMDAwIDAwICAxICAgIDEgICAg
MCAgIDEgICAwICAgIDEgICAgMSAgICBCOAowMDAwMDAwMDAwMDAwMDAwKFhFTikgIDEzIDAwMCAw
MCAgMSAgICAxICAgIDAgICAxICAgMCAgICAxICAgIDEgICAgQzAKIChYRU4pICAxNCAwMDAgMDAg
IDEgICAgMSAgICAwICAgMSAgIDAgICAgMSAgICAxICAgIEQ4CjAwMDAwMDAwMDAwMDAwMDAoWEVO
KSAgMTUgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAwMAogKFhFTikg
IDE2IDAwMCAwMCAgMSAgICAxICAgIDAgICAxICAgMCAgICAxICAgIDEgICAgRDAKMDAwMDAwMDAw
MDAwMDAwMChYRU4pICAxNyAwMDAgMDAgIDEgICAgMSAgICAwICAgMSAgIDAgICAgMSAgICAxICAg
IEM4CihYRU4pIFVzaW5nIHZlY3Rvci1iYXNlZCBpbmRleGluZwooWEVOKSBJUlEgdG8gcGluIG1h
cHBpbmdzOgogKFhFTikgSVJRMjQwIC0+IDA6MgooWEVOKSBJUlE1NiAtPiAwOjEKKFhFTikgSVJR
NjQgLT4gMDozCihYRU4pIElSUTcyIC0+IDA6NAooWEVOKSBJUlE4MCAtPiAwOjUKKFhFTikgSVJR
ODggLT4gMDo2CihYRU4pIElSUTk2IC0+IDA6NwooWEVOKSBJUlExMDQgLT4gMDo4CihYRU4pIElS
UTExMiAtPiAwOjkKKFhFTikgSVJRMTIwIC0+IDA6MTAKKFhFTikgSVJRMTM2IC0+IDA6MTEKKFhF
TikgSVJRMTQ0IC0+IDA6MTIKKFhFTikgSVJRMTUyIC0+IDA6MTMKKFhFTikgSVJRMTYwIC0+IDA6
MTQKKFhFTikgSVJRMTY4IC0+IDA6MTUKKFhFTikgSVJRMTc2IC0+IDA6MTYKKFhFTikgSVJRMTg0
IC0+IDA6MTgKKFhFTikgSVJRMTkyIC0+IDA6MTkKKFhFTikgSVJRMjE2IC0+IDA6MjAKKFhFTikg
SVJRMjA4IC0+IDA6MjIKKFhFTikgSVJRMjAwIC0+IDA6MjMKKFhFTikgLi4uLi4uLi4uLi4uLi4u
Li4uLi4uLi4uLi4uLi4uLi4uLi4uIGRvbmUuCjAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMApbICAzOTkuOTkzNTIwXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgNDAwLjAwNzQ4
MV0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDQwMC4wMjE0NDJdICAgIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMApbICA0MDAuMDM1NDAzXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgNDAw
LjA0OTM2NV0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDQwMC4wNjMzMjZdICAgIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMjE4MgpbICA0MDAuMDc3Mjg3XSAgICAKWyAgNDAwLjA4MDU5N10gZ2xvYmFs
IG1hc2s6ClsgIDQwMC4wODA1OThdICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZgpbICA0MDAuMDk1
ODEyXSAgICBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYKWyAgNDAwLjEwOTc3M10gICAgZmZmZmZmZmZm
ZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmClsgIDQwMC4xMjM3MzRdICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZm
ZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZgpbICA0
MDAuMTM3Njk0XSAgICBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZm
ZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYKWyAgNDAwLjE1MTY1NV0gICAgZmZm
ZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZm
ZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmClsgIDQwMC4xNjU2MTZdICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZm
ZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZm
ZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZgpbICA0MDAuMTc5NTc3XSAgICBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZm
ZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZm
ZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmU3MDAxMDQxMDUKWyAgNDAwLjE5MzUzOF0g
ICAgClsgIDQwMC4xOTY4NDldIGdsb2JhbGx5IHVubWFza2VkOgpbICA0MDAuMTk2ODUwXSAgICAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgNDAwLjIxMjYwMF0gICAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwClsgIDQwMC4yMjY1NjFdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICA0MDAuMjQwNTIz
XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgNDAwLjI1NDQ4M10gICAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwClsgIDQwMC4yNjg0NDRdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICA0MDAu
MjgyNDA1XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgNDAwLjI5NjM2NV0gICAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAyMDgyClsgIDQwMC4zMTAzMjddICAgIApbICA0MDAuMzEzNjM4XSBsb2NhbCBj
cHUxIG1hc2s6ClsgIDQwMC4zMTM2MzhdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICA0MDAu
MzI5MjEwXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgNDAwLjM0MzE3MV0gICAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwClsgIDQwMC4zNTcxMzJdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApb
ICA0MDAuMzcxMDkyXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgNDAwLjM4NTA1NF0gICAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDQwMC4zOTkwMTRdICAgIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMApbICA0MDAuNDEyOTc1XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDFmODAKWyAgNDAwLjQyNjkz
N10gICAgClsgIDQwMC40MzAyNDddIGxvY2FsbHkgdW5tYXNrZWQ6ClsgIDQwMC40MzAyNDhdICAg
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICA0MDAuNDQ1OTA5XSAgICAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAKWyAgNDAwLjQ1OTg3MF0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDQwMC40NzM4
MzFdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICA0MDAuNDg3NzkzXSAgICAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAKWyAgNDAwLjUwMTc1NF0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDQw
MC41MTU3MTRdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICA0MDAuNTI5Njc2XSAgICAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwODAKWyAgNDAwLjU0MzYzNl0gICAgClsgIDQwMC41NDY5NDddIHBlbmRp
bmcgbGlzdDoKWyAgNDAwLjU0OTk5MV0gICAwOiBldmVudCAxIC0+IGlycSAyNzIgbG9jYWxseS1t
YXNrZWQKWyAgNDAwLjU1NTAwMl0gICAxOiBldmVudCA3IC0+IGlycSAyNzgKWyAgNDAwLjU1ODY3
MV0gICAxOiBldmVudCA4IC0+IGlycSAyNzkgZ2xvYmFsbHktbWFza2VkClsgIDQwMC41NjM3NzJd
ICAgMjogZXZlbnQgMTMgLT4gaXJxIDI4NCBsb2NhbGx5LW1hc2tlZApbICA0MDAuNTY4ODk0XSAK
WyAgNDAwLjU2ODg5NF0gdmNwdSAwClsgIDQwMC41Njg4OTRdICAgMDogbWFza2VkPTAgcGVuZGlu
Zz0wIGV2ZW50X3NlbCAwMDAwMDAwMDAwMDAwMDAwClsgIDQwMC41NzQxNTBdICAgMTogbWFza2Vk
PTAgcGVuZGluZz0wIGV2ZW50X3NlbCAwMDAwMDAwMDAwMDAwMDAwClsgIDQwMC41ODAyMzVdICAg
MjogbWFza2VkPTEgcGVuZGluZz0xIGV2ZW50X3NlbCAwMDAwMDAwMDAwMDAwMDAxClsgIDQwMC41
ODYzMjFdICAgMzogbWFza2VkPTEgcGVuZGluZz0wIGV2ZW50X3NlbCAwMDAwMDAwMDAwMDAwMDAw
ClsgIDQwMC41OTI0MDZdICAgClsgIDQwMC41OTg0OTFdIHBlbmRpbmc6ClsgIDQwMC41OTg0OTJd
ICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICA0MDAuNjEzMzQ3XSAgICAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAKWyAgNDAwLjYyNzMwOF0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDQwMC42
NDEyNjldICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICA0MDAuNjU1MjMwXSAgICAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAKWyAgNDAwLjY2OTE5MV0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsg
IDQwMC42ODMxNTFdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICA0MDAuNjk3MTEyXSAgICAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDIxMDYKWyAgNDAwLjcxMTA3NF0gICAgClsgIDQwMC43MTQzODRdIGds
b2JhbCBtYXNrOgpbICA0MDAuNzE0Mzg1XSAgICBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZm
ZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYg
ZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYKWyAgNDAw
LjcyOTU5OF0gICAgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZm
ZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYg
ZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmClsgIDQwMC43NDM1NjBdICAgIGZmZmZm
ZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZm
ZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYg
ZmZmZmZmZmZmZmZmZmZmZgpbICA0MDAuNzU3NTIxXSAgICBmZmZmZmZmZmZmZmZmZmZmIGZmZmZm
ZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZm
ZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYK
WyAgNDAwLjc3MTQ4Ml0gICAgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZm
ZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZm
ZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmClsgIDQwMC43ODU0NDJdICAg
IGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZm
ZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZm
ZmZmZmYgZmZmZmZmZmZmZmZmZmZmZgpbICA0MDAuNzk5NDAzXSAgICBmZmZmZmZmZmZmZmZmZmZm
IGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZm
ZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZm
ZmZmZmYKWyAgNDAwLjgxMzM2NF0gICAgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZm
IGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZm
ZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZlNzAwMTA0MTA1ClsgIDQwMC44Mjcz
MjZdICAgIApbICA0MDAuODMwNjM2XSBnbG9iYWxseSB1bm1hc2tlZDoKWyAgNDAwLjgzMDYzNl0g
ICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDQwMC44NDYzODddICAgIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMApbICA0MDAuODYwMzQ4XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgNDAwLjg3
NDMxMF0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDQwMC44ODgyNzFdICAgIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMApbICA0MDAuOTAyMjMyXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAg
NDAwLjkxNjE5Ml0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDQwMC45MzAxNTNdICAgIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMjAwMgpbICA0MDAuOTQ0MTE0XSAgICAKWyAgNDAwLjk0NzQyNV0gbG9j
YWwgY3B1MCBtYXNrOgpbICA0MDAuOTQ3NDI1XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAg
NDAwLjk2Mjk5N10gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDQwMC45NzY5NThdICAgIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMApbICA0MDAuOTkwOTE5XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAKWyAgNDAxLjAwNDg3OV0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDQwMS4wMTg4NDFd
ICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICA0MDEuMDMyODAyXSAgICAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAKWyAgNDAxLjA0Njc2M10gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCBmZmZmZmZmZmZlMDAwMDdmClsgIDQwMS4w
NjA3MjRdICAgIApbICA0MDEuMDY0MDM1XSBsb2NhbGx5IHVubWFza2VkOgpbICA0MDEuMDY0MDM2
XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgNDAxLjA3OTY5N10gICAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwClsgIDQwMS4wOTM2NTddICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICA0MDEu
MTA3NjE4XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgNDAxLjEyMTU4MF0gICAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwClsgIDQwMS4xMzU1NDFdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApb
ICA0MDEuMTQ5NTAyXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgNDAxLjE2MzQ2Ml0gICAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAyClsgIDQwMS4xNzc0MjNdICAgIApbICA0MDEuMTgwNzM1XSBw
ZW5kaW5nIGxpc3Q6ClsgIDQwMS4xODM3ODJdICAgMDogZXZlbnQgMSAtPiBpcnEgMjcyIGwyLWNs
ZWFyClsgIDQwMS4xODgyNTNdICAgMDogZXZlbnQgMiAtPiBpcnEgMjczIGwyLWNsZWFyIGdsb2Jh
bGx5LW1hc2tlZApbICA0MDEuMTk0MTU5XSAgIDE6IGV2ZW50IDggLT4gaXJxIDI3OSBsMi1jbGVh
ciBnbG9iYWxseS1tYXNrZWQgbG9jYWxseS1tYXNrZWQKWyAgNDAxLjIwMTQxMl0gICAyOiBldmVu
dCAxMyAtPiBpcnEgMjg0IGwyLWNsZWFyIGxvY2FsbHktbWFza2VkClsgIDQwMS4yMDczNDFdIApb
ICA0MDEuMjA3MzQxXSB2Y3B1IDIKWyAgNDAxLjIwNzM0Ml0gICAwOiBtYXNrZWQ9MCBwZW5kaW5n
PTAgZXZlbnRfc2VsIDAwMDAwMDAwMDAwMDAwMDAKWyAgNDAxLjIxMjYwMV0gICAxOiBtYXNrZWQ9
MCBwZW5kaW5nPTAgZXZlbnRfc2VsIDAwMDAwMDAwMDAwMDAwMDAKWyAgNDAxLjIxODY4Nl0gICAy
OiBtYXNrZWQ9MCBwZW5kaW5nPTEgZXZlbnRfc2VsIDAwMDAwMDAwMDAwMDAwMDEKWyAgNDAxLjIy
NDc3MV0gICAzOiBtYXNrZWQ9MSBwZW5kaW5nPTAgZXZlbnRfc2VsIDAwMDAwMDAwMDAwMDAwMDAK
WyAgNDAxLjIzMDg1N10gICAKWyAgNDAxLjIzNjk0MV0gcGVuZGluZzoKWyAgNDAxLjIzNjk0Ml0g
ICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDQwMS4yNTE3OTddICAgIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMApbICA0MDEuMjY1NzU5XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgNDAxLjI3
OTcxOV0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDQwMS4yOTM2ODFdICAgIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMApbICA0MDEuMzA3NjQxXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAg
NDAxLjMyMTYwMl0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDQwMS4zMzU1NjNdICAgIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwZTEwNApbICA0MDEuMzQ5NTIzXSAgICAKWyAgNDAxLjM1MjgzNV0gZ2xv
YmFsIG1hc2s6ClsgIDQwMS4zNTI4MzVdICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZm
ZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBm
ZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZgpbICA0MDEu
MzY4MDQ5XSAgICBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZm
ZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBm
ZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYKWyAgNDAxLjM4MjAxMV0gICAgZmZmZmZm
ZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZm
ZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBm
ZmZmZmZmZmZmZmZmZmZmClsgIDQwMS4zOTU5NzFdICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZm
ZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZm
ZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZgpb
ICA0MDEuNDA5OTMyXSAgICBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZm
ZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZm
ZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYKWyAgNDAxLjQyMzg5M10gICAg
ZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZm
ZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZm
ZmZmZiBmZmZmZmZmZmZmZmZmZmZmClsgIDQwMS40Mzc4NTRdICAgIGZmZmZmZmZmZmZmZmZmZmYg
ZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZm
ZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZm
ZmZmZgpbICA0MDEuNDUxODE1XSAgICBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYg
ZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZm
ZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmU3MDAxMDQxMDUKWyAgNDAxLjQ2NTc3
Nl0gICAgClsgIDQwMS40NjkwODddIGdsb2JhbGx5IHVubWFza2VkOgpbICA0MDEuNDY5MDg4XSAg
ICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgNDAxLjQ4NDgzOF0gICAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwClsgIDQwMS40OTg3OTldICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICA0MDEuNTEy
NzYwXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgNDAxLjUyNjcyMV0gICAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwClsgIDQwMS41NDA2ODJdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICA0
MDEuNTU0NjQzXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgNDAxLjU2ODYwNF0gICAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDBhMDAwClsgIDQwMS41ODI1NjVdICAgIApbICA0MDEuNTg1ODc2XSBsb2Nh
bCBjcHUyIG1hc2s6ClsgIDQwMS41ODU4NzddICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICA0
MDEuNjAxNDQ3XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgNDAxLjYxNTQwOF0gICAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwClsgIDQwMS42MjkzNzBdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MApbICA0MDEuNjQzMzMwXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgNDAxLjY1NzI5MV0g
ICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDQwMS42NzEyNTNdICAgIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMApbICA0MDEuNjg1MjE0XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwN2UwMDAKWyAgNDAxLjY5
OTE3NF0gICAgClsgIDQwMS43MDI0ODZdIGxvY2FsbHkgdW5tYXNrZWQ6ClsgIDQwMS43MDI0ODdd
ICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICA0MDEuNzE4MTQ3XSAgICAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAKWyAgNDAxLjczMjEwOF0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDQwMS43
NDYwNjldICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICA0MDEuNzYwMDMwXSAgICAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAKWyAgNDAxLjc3Mzk5MV0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsg
IDQwMS43ODc5NTFdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICA0MDEuODAxOTEzXSAgICAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMGEwMDAKWyAgNDAxLjgxNTg3NF0gICAgClsgIDQwMS44MTkxODVdIHBl
bmRpbmcgbGlzdDoKWyAgNDAxLjgyMjIyOF0gICAwOiBldmVudCAyIC0+IGlycSAyNzMgZ2xvYmFs
bHktbWFza2VkIGxvY2FsbHktbWFza2VkClsgIDQwMS44Mjg2NzFdICAgMTogZXZlbnQgOCAtPiBp
cnEgMjc5IGdsb2JhbGx5LW1hc2tlZCBsb2NhbGx5LW1hc2tlZApbICA0MDEuODM1MTE1XSAgIDI6
IGV2ZW50IDEzIC0+IGlycSAyODQKWyAgNDAxLjgzODg3NF0gICAyOiBldmVudCAxNCAtPiBpcnEg
Mjg1IGdsb2JhbGx5LW1hc2tlZApbICA0MDEuODQ0MDY1XSAgIDI6IGV2ZW50IDE1IC0+IGlycSAy
ODYKWyAgNDAxLjg0NzgyM10gICAzOiBldmVudCAxOSAtPiBpcnEgMjkwIGxvY2FsbHktbWFza2Vk
ClsgIDQwMS44NTI5NDZdIApbICA0MDEuODUyOTQ3XSB2Y3B1IDMKWyAgNDAxLjg1Mjk0N10gICAw
OiBtYXNrZWQ9MCBwZW5kaW5nPTAgZXZlbnRfc2VsIDAwMDAwMDAwMDAwMDAwMDAKWyAgNDAxLjg1
ODIwOV0gICAxOiBtYXNrZWQ9MCBwZW5kaW5nPTAgZXZlbnRfc2VsIDAwMDAwMDAwMDAwMDAwMDAK
WyAgNDAxLjg1ODIxMV0gICAyOiBtYXNrZWQ9MCBwZW5kaW5nPTAgZXZlbnRfc2VsIDAwMDAwMDAw
MDAwMDAwMDAKWyAgNDAxLjg1ODIxM10gICAzOiBtYXNrZWQ9MCBwZW5kaW5nPTEgZXZlbnRfc2Vs
IDAwMDAwMDAwMDAwMDAwMDEKWyAgNDAxLjg1ODIxNV0gICAKWyAgNDAxLjg1ODIxNl0gcGVuZGlu
ZzoKWyAgNDAxLjg1ODIxN10gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDQwMS44NTgyMjNd
ICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICA0MDEuODU4MjMwXSAgICAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAKWyAgNDAxLjg1ODI0M10gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDQwMS44
NTgyNDddICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICA0MDEuODU4MjUwXSAgICAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAKWyAgNDAxLjg1ODI1M10gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsg
IDQwMS44NTgyNTZdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDM4NDEwNApbICA0MDEuODU4MjU5XSAgICAK
WyAgNDAxLjg1ODI2MF0gZ2xvYmFsIG1hc2s6ClsgIDQwMS44NTgyNjBdICAgIGZmZmZmZmZmZmZm
ZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYg
ZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZm
ZmZmZmZmZmZmZgpbICA0MDEuODU4MjY0XSAgICBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZm
ZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYg
ZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYKWyAgNDAx
Ljg1ODI2N10gICAgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZm
ZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYg
ZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmClsgIDQwMS44NTgyNzBdICAgIGZmZmZm
ZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZm
ZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYg
ZmZmZmZmZmZmZmZmZmZmZgpbICA0MDEuODU4Mjc0XSAgICBmZmZmZmZmZmZmZmZmZmZmIGZmZmZm
ZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZm
ZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYK
WyAgNDAxLjg1ODI3N10gICAgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZm
ZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZm
ZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmClsgIDQwMS44NTgyODFdICAg
IGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZm
ZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZm
ZmZmZmYgZmZmZmZmZmZmZmZmZmZmZgpbICA0MDEuODU4Mjg0XSAgICBmZmZmZmZmZmZmZmZmZmZm
IGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZm
ZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmU3MDAx
MDQxMDUKWyAgNDAxLjg1ODI4N10gICAgClsgIDQwMS44NTgyODhdIGdsb2JhbGx5IHVubWFza2Vk
OgpbICA0MDEuODU4Mjg4XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgNDAxLjg1ODI5MV0g
ICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDQwMS44NTgyOTVdICAgIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMApbICA0MDEuODU4Mjk4XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgNDAxLjg1
ODMwMV0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDQwMS44NTgzMDRdICAgIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMApbICA0MDEuODU4MzA3XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAg
NDAxLjg1ODMxMF0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMjgwMDAwClsgIDQwMS44NTgzMTNdICAgIApb
ICA0MDEuODU4MzE0XSBsb2NhbCBjcHUzIG1hc2s6ClsgIDQwMS44NTgzMTVdICAgIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMApbICA0MDEuODU4MzE4XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAg
NDAxLjg1ODMyMV0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDQwMS44NTgzMjRdICAgIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMApbICA0MDEuODU4MzI3XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAKWyAgNDAxLjg1ODMzMF0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDQwMS44NTgzMzRd
ICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICA0MDEuODU4MzM3XSAgICAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDFmODAwMDAKWyAgNDAxLjg1ODM0MF0gICAgClsgIDQwMS44NTgzNDFdIGxvY2FsbHkgdW5tYXNr
ZWQ6ClsgIDQwMS44NTgzNDFdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICA0MDEuODU4MzQ0
XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgNDAxLjg1ODM0N10gICAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwClsgIDQwMS44NTgzNTBdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICA0MDEu
ODU4MzU0XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgNDAxLjg1ODM1N10gICAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwClsgIDQwMS44NTgzNjBdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApb
ICA0MDEuODU4MzYzXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAyODAwMDAKWyAgNDAxLjg1ODM2Nl0gICAg
ClsgIDQwMS44NTgzNjddIHBlbmRpbmcgbGlzdDoKWyAgNDAxLjg1ODM2OF0gICAwOiBldmVudCAy
IC0+IGlycSAyNzMgZ2xvYmFsbHktbWFza2VkIGxvY2FsbHktbWFza2VkClsgIDQwMS44NTgzNzBd
ICAgMTogZXZlbnQgOCAtPiBpcnEgMjc5IGdsb2JhbGx5LW1hc2tlZCBsb2NhbGx5LW1hc2tlZApb
ICA0MDEuODU4MzcxXSAgIDI6IGV2ZW50IDE0IC0+IGlycSAyODUgZ2xvYmFsbHktbWFza2VkIGxv
Y2FsbHktbWFza2VkClsgIDQwMS44NTgzNzNdICAgMzogZXZlbnQgMTkgLT4gaXJxIDI5MApbICA0
MDEuODU4Mzc0XSAgIDM6IGV2ZW50IDIwIC0+IGlycSAyOTEgZ2xvYmFsbHktbWFza2VkClsgIDQw
MS44NTgzNzVdICAgMzogZXZlbnQgMjEgLT4gaXJxIDI5MgoK
--14dae9399de136edb804c6d6c8a6
Content-Type: text/plain; charset=US-ASCII; name="xen-dump-s3-first.txt"
Content-Disposition: attachment; filename="xen-dump-s3-first.txt"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_h5nzo9g31

KFhFTikgJyonIHByZXNzZWQgLT4gZmlyaW5nIGFsbCBkaWFnbm9zdGljIGtleWhhbmRsZXJzCihY
RU4pIFtkOiBkdW1wIHJlZ2lzdGVyc10KKFhFTikgJ2QnIHByZXNzZWQgLT4gZHVtcGluZyByZWdp
c3RlcnMKKFhFTikgCihYRU4pICoqKiBEdW1waW5nIENQVTAgaG9zdCBzdGF0ZTogKioqCihYRU4p
IC0tLS1bIFhlbi00LjIuMC1yYzItcHJlICB4ODZfNjQgIGRlYnVnPXkgIFRhaW50ZWQ6ICAgIEMg
XS0tLS0KKFhFTikgQ1BVOiAgICAwCihYRU4pIFJJUDogICAgZTAwODpbPGZmZmY4MmM0ODAxM2Q3
N2U+XSBuczE2NTUwX3BvbGwrMHgyNy8weDMzCihYRU4pIFJGTEFHUzogMDAwMDAwMDAwMDAxMDI4
NiAgIENPTlRFWFQ6IGh5cGVydmlzb3IKKFhFTikgcmF4OiBmZmZmODJjNDgwMzAyNWEwICAgcmJ4
OiBmZmZmODJjNDgwMzAyNDgwICAgcmN4OiAwMDAwMDAwMDAwMDAwMDAzCihYRU4pIHJkeDogMDAw
MDAwMDAwMDAwMDAwMCAgIHJzaTogZmZmZjgyYzQ4MDJlMjVjOCAgIHJkaTogZmZmZjgyYzQ4MDI3
MTgwMAooWEVOKSByYnA6IGZmZmY4MmM0ODAyYjdlMzAgICByc3A6IGZmZmY4MmM0ODAyYjdlMzAg
ICByODogIDAwMDAwMDAwMDAwMDAwMDEKKFhFTikgcjk6ICBmZmZmODMwMTQ4OTczZWE4ICAgcjEw
OiAwMDAwMDA0YTkzZjFmNDY3ICAgcjExOiAwMDAwMDAwMDAwMDAwMjQ2CihYRU4pIHIxMjogZmZm
ZjgyYzQ4MDI3MTgwMCAgIHIxMzogZmZmZjgyYzQ4MDEzZDc1NyAgIHIxNDogMDAwMDAwNGE5M2I5
Zjg1ZQooWEVOKSByMTU6IGZmZmY4MmM0ODAzMDIzMDggICBjcjA6IDAwMDAwMDAwODAwNTAwM2Ig
ICBjcjQ6IDAwMDAwMDAwMDAxMDI2ZjAKKFhFTikgY3IzOiAwMDAwMDAwMTNjNjU2MDAwICAgY3Iy
OiBmZmZmODgwMDI2OGI1MDQwCihYRU4pIGRzOiAwMDAwICAgZXM6IDAwMDAgICBmczogMDAwMCAg
IGdzOiAwMDAwICAgc3M6IGUwMTAgICBjczogZTAwOAooWEVOKSBYZW4gc3RhY2sgdHJhY2UgZnJv
bSByc3A9ZmZmZjgyYzQ4MDJiN2UzMDoKKFhFTikgICAgZmZmZjgyYzQ4MDJiN2U2MCBmZmZmODJj
NDgwMTI4MTdmIDAwMDAwMDAwMDAwMDAwMDIgZmZmZjgyYzQ4MDJlMjVjOAooWEVOKSAgICBmZmZm
ODJjNDgwMzAyNDgwIGZmZmY4MzAxNDg5YjNkNDAgZmZmZjgyYzQ4MDJiN2ViMCBmZmZmODJjNDgw
MTI4MjgxCihYRU4pICAgIGZmZmY4MmM0ODAyYjdmMTggMDAwMDAwMDAwMDAwMDI0NiAwMDAwMDA0
YTkzZjFmNDY3IGZmZmY4MmM0ODAyZDg4ODAKKFhFTikgICAgZmZmZjgyYzQ4MDJkODg4MCBmZmZm
ODJjNDgwMmI3ZjE4IGZmZmZmZmZmZmZmZmZmZmYgZmZmZjgyYzQ4MDMwMjMwOAooWEVOKSAgICBm
ZmZmODJjNDgwMmI3ZWUwIGZmZmY4MmM0ODAxMjU0MDUgZmZmZjgyYzQ4MDJiN2YxOCBmZmZmODJj
NDgwMmI3ZjE4CihYRU4pICAgIDAwMDAwMDAwZmZmZmZmZmYgMDAwMDAwMDAwMDAwMDAwMiBmZmZm
ODJjNDgwMmI3ZWYwIGZmZmY4MmM0ODAxMjU0ODQKKFhFTikgICAgZmZmZjgyYzQ4MDJiN2YxMCBm
ZmZmODJjNDgwMTU4YzA1IGZmZmY4MzAwYWE1ODQwMDAgZmZmZjgzMDBhYTBmYzAwMAooWEVOKSAg
ICBmZmZmODJjNDgwMmI3ZGE4IDAwMDAwMDAwMDAwMDAwMDAgZmZmZmZmZmZmZmZmZmZmZiAwMDAw
MDAwMDAwMDAwMDAwCihYRU4pICAgIGZmZmZmZmZmODFhYWZkYTAgZmZmZmZmZmY4MWEwMWVlOCBm
ZmZmZmZmZjgxYTAxZmQ4IDAwMDAwMDAwMDAwMDAyNDYKKFhFTikgICAgMDAwMDAwMDAwMDAwMDAw
MSAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMAooWEVO
KSAgICBmZmZmZmZmZjgxMDAxM2FhIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDBkZWFkYmVlZiAw
MDAwMDAwMGRlYWRiZWVmCihYRU4pICAgIDAwMDAwMTAwMDAwMDAwMDAgZmZmZmZmZmY4MTAwMTNh
YSAwMDAwMDAwMDAwMDBlMDMzIDAwMDAwMDAwMDAwMDAyNDYKKFhFTikgICAgZmZmZmZmZmY4MWEw
MWVkMCAwMDAwMDAwMDAwMDBlMDJiIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMAoo
WEVOKSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCBmZmZmODMwMGFhNTg0MDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMAooWEVOKSBYZW4gY2FsbCB0cmFjZToKKFhFTikgICAgWzxmZmZmODJjNDgwMTNkNzdlPl0g
bnMxNjU1MF9wb2xsKzB4MjcvMHgzMwooWEVOKSAgICBbPGZmZmY4MmM0ODAxMjgxN2Y+XSBleGVj
dXRlX3RpbWVyKzB4NGUvMHg2YwooWEVOKSAgICBbPGZmZmY4MmM0ODAxMjgyODE+XSB0aW1lcl9z
b2Z0aXJxX2FjdGlvbisweGU0LzB4MjFhCihYRU4pICAgIFs8ZmZmZjgyYzQ4MDEyNTQwNT5dIF9f
ZG9fc29mdGlycSsweDk1LzB4YTAKKFhFTikgICAgWzxmZmZmODJjNDgwMTI1NDg0Pl0gZG9fc29m
dGlycSsweDI2LzB4MjgKKFhFTikgICAgWzxmZmZmODJjNDgwMTU4YzA1Pl0gaWRsZV9sb29wKzB4
NmYvMHg3MQooWEVOKSAgICAKKFhFTikgKioqIER1bXBpbmcgQ1BVMSBob3N0IHN0YXRlOiAqKioK
KFhFTikgLS0tLVsgWGVuLTQuMi4wLXJjMi1wcmUgIHg4Nl82NCAgZGVidWc9eSAgVGFpbnRlZDog
ICAgQyBdLS0tLQooWEVOKSBDUFU6ICAgIDEKKFhFTikgUklQOiAgICBlMDA4Ols8ZmZmZjgyYzQ4
MDE1ODNjND5dIGRlZmF1bHRfaWRsZSsweDk5LzB4OWUKKFhFTikgUkZMQUdTOiAwMDAwMDAwMDAw
MDAwMjQ2ICAgQ09OVEVYVDogaHlwZXJ2aXNvcgooWEVOKSByYXg6IGZmZmY4MmM0ODAzMDIzNzAg
ICByYng6IGZmZmY4MzAxM2U2ZTdmMTggICByY3g6IDAwMDAwMDAwMDAwMDAwMDEKKFhFTikgcmR4
OiAwMDAwMDAzY2JhZjI1ZDgwICAgcnNpOiAwMDAwMDAwMDQxNjk1NmEyICAgcmRpOiAwMDAwMDAw
MDAwMDAwMDAxCihYRU4pIHJicDogZmZmZjgzMDEzZTZlN2VmMCAgIHJzcDogZmZmZjgzMDEzZTZl
N2VmMCAgIHI4OiAgMDAwMDAwMGMyMzE2MWZhMAooWEVOKSByOTogIGZmZmY4MzAwYWE1ODMwNjAg
ICByMTA6IDAwMDAwMDAwZGVhZGJlZWYgICByMTE6IDAwMDAwMDAwMDAwMDAyNDYKKFhFTikgcjEy
OiBmZmZmODMwMTNlNmU3ZjE4ICAgcjEzOiAwMDAwMDAwMGZmZmZmZmZmICAgcjE0OiAwMDAwMDAw
MDAwMDAwMDAyCihYRU4pIHIxNTogZmZmZjgzMDEzYjIyODA4OCAgIGNyMDogMDAwMDAwMDA4MDA1
MDAzYiAgIGNyNDogMDAwMDAwMDAwMDEwMjZmMAooWEVOKSBjcjM6IDAwMDAwMDAxM2M0OWEwMDAg
ICBjcjI6IGZmZmY4ODAwMjU4MTdkZjAKKFhFTikgZHM6IDAwMmIgICBlczogMDAyYiAgIGZzOiAw
MDAwICAgZ3M6IDAwMDAgICBzczogZTAxMCAgIGNzOiBlMDA4CihYRU4pIFhlbiBzdGFjayB0cmFj
ZSBmcm9tIHJzcD1mZmZmODMwMTNlNmU3ZWYwOgooWEVOKSAgICBmZmZmODMwMTNlNmU3ZjEwIGZm
ZmY4MmM0ODAxNThiZjggZmZmZjgzMDBhYTBmZTAwMCBmZmZmODMwMGFhNTgzMDAwCihYRU4pICAg
IGZmZmY4MzAxM2U2ZTdkYTggMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDMKKFhFTikgICAgZmZmZmZmZmY4MWFhZmRhMCBmZmZmODgwMDI3ODgxZWUwIGZm
ZmY4ODAwMjc4ODFmZDggMDAwMDAwMDAwMDAwMDI0NgooWEVOKSAgICAwMDAwMDAwMDAwMDAwMDAx
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwCihYRU4p
ICAgIGZmZmZmZmZmODEwMDEzYWEgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMGRlYWRiZWVmIDAw
MDAwMDAwZGVhZGJlZWYKKFhFTikgICAgMDAwMDAxMDAwMDAwMDAwMCBmZmZmZmZmZjgxMDAxM2Fh
IDAwMDAwMDAwMDAwMGUwMzMgMDAwMDAwMDAwMDAwMDI0NgooWEVOKSAgICBmZmZmODgwMDI3ODgx
ZWM4IDAwMDAwMDAwMDAwMGUwMmIgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwCihY
RU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAx
IGZmZmY4MzAwYWEwZmUwMDAKKFhFTikgICAgMDAwMDAwM2NiYWYyNWQ4MCAwMDAwMDAwMDAwMDAw
MDAwCihYRU4pIFhlbiBjYWxsIHRyYWNlOgooWEVOKSAgICBbPGZmZmY4MmM0ODAxNTgzYzQ+XSBk
ZWZhdWx0X2lkbGUrMHg5OS8weDllCihYRU4pICAgIFs8ZmZmZjgyYzQ4MDE1OGJmOD5dIGlkbGVf
bG9vcCsweDYyLzB4NzEKKFhFTikgICAgCihYRU4pICoqKiBEdW1waW5nIENQVTIgaG9zdCBzdGF0
ZTogKioqCihYRU4pIC0tLS1bIFhlbi00LjIuMC1yYzItcHJlICB4ODZfNjQgIGRlYnVnPXkgIFRh
aW50ZWQ6ICAgIEMgXS0tLS0KKFhFTikgQ1BVOiAgICAyCihYRU4pIFJJUDogICAgZTAwODpbPGZm
ZmY4MmM0ODAxNTgzYzQ+XSBkZWZhdWx0X2lkbGUrMHg5OS8weDllCihYRU4pIFJGTEFHUzogMDAw
MDAwMDAwMDAwMDI0NiAgIENPTlRFWFQ6IGh5cGVydmlzb3IKKFhFTikgcmF4OiBmZmZmODJjNDgw
MzAyMzcwICAgcmJ4OiBmZmZmODMwMTQ4OTlmZjE4ICAgcmN4OiAwMDAwMDAwMDAwMDAwMDAyCihY
RU4pIHJkeDogMDAwMDAwM2NiYjk2NmQ4MCAgIHJzaTogMDAwMDAwMDA0MjJlZDVjYSAgIHJkaTog
MDAwMDAwMDAwMDAwMDAwMgooWEVOKSByYnA6IGZmZmY4MzAxNDg5OWZlZjAgICByc3A6IGZmZmY4
MzAxNDg5OWZlZjAgICByODogIDAwMDAwMDBjNDU2NDhmZTAKKFhFTikgcjk6ICBmZmZmODMwMGE4
M2ZkMDYwICAgcjEwOiAwMDAwMDAwMGRlYWRiZWVmICAgcjExOiAwMDAwMDAwMDAwMDAwMjQ2CihY
RU4pIHIxMjogZmZmZjgzMDE0ODk5ZmYxOCAgIHIxMzogMDAwMDAwMDBmZmZmZmZmZiAgIHIxNDog
MDAwMDAwMDAwMDAwMDAwMgooWEVOKSByMTU6IGZmZmY4MzAxM2JjNjkwODggICBjcjA6IDAwMDAw
MDAwODAwNTAwM2IgICBjcjQ6IDAwMDAwMDAwMDAxMDI2ZjAKKFhFTikgY3IzOiAwMDAwMDAwMTNk
OTVmMDAwICAgY3IyOiBmZmZmODgwMDI1ZWE4YzEwCihYRU4pIGRzOiAwMDJiICAgZXM6IDAwMmIg
ICBmczogMDAwMCAgIGdzOiAwMDAwICAgc3M6IGUwMTAgICBjczogZTAwOAooWEVOKSBYZW4gc3Rh
Y2sgdHJhY2UgZnJvbSByc3A9ZmZmZjgzMDE0ODk5ZmVmMDoKKFhFTikgICAgZmZmZjgzMDE0ODk5
ZmYxMCBmZmZmODJjNDgwMTU4YmY4IGZmZmY4MzAwYTg1YzcwMDAgZmZmZjgzMDBhODNmZDAwMAoo
WEVOKSAgICBmZmZmODMwMTQ4OTlmZGE4IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAxCihYRU4pICAgIGZmZmZmZmZmODFhYWZkYTAgZmZmZjg4MDAyNzg2
ZGVlMCBmZmZmODgwMDI3ODZkZmQ4IDAwMDAwMDAwMDAwMDAyNDYKKFhFTikgICAgMDAwMDAwMDAw
MDAwMDAwMSAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MAooWEVOKSAgICBmZmZmZmZmZjgxMDAxM2FhIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDBkZWFk
YmVlZiAwMDAwMDAwMGRlYWRiZWVmCihYRU4pICAgIDAwMDAwMTAwMDAwMDAwMDAgZmZmZmZmZmY4
MTAwMTNhYSAwMDAwMDAwMDAwMDBlMDMzIDAwMDAwMDAwMDAwMDAyNDYKKFhFTikgICAgZmZmZjg4
MDAyNzg2ZGVjOCAwMDAwMDAwMDAwMDBlMDJiIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMAooWEVOKSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMiBmZmZmODMwMGE4NWM3MDAwCihYRU4pICAgIDAwMDAwMDNjYmI5NjZkODAgMDAwMDAw
MDAwMDAwMDAwMAooWEVOKSBYZW4gY2FsbCB0cmFjZToKKFhFTikgICAgWzxmZmZmODJjNDgwMTU4
M2M0Pl0gZGVmYXVsdF9pZGxlKzB4OTkvMHg5ZQooWEVOKSAgICBbPGZmZmY4MmM0ODAxNThiZjg+
XSBpZGxlX2xvb3ArMHg2Mi8weDcxCihYRU4pICAgIAooWEVOKSAqKiogRHVtcGluZyBDUFUzIGhv
c3Qgc3RhdGU6ICoqKgooWEVOKSAtLS0tWyBYZW4tNC4yLjAtcmMyLXByZSAgeDg2XzY0ICBkZWJ1
Zz15ICBUYWludGVkOiAgICBDIF0tLS0tCihYRU4pIENQVTogICAgMwooWEVOKSBSSVA6ICAgIGUw
MDg6WzxmZmZmODJjNDgwMTU4M2M0Pl0gZGVmYXVsdF9pZGxlKzB4OTkvMHg5ZQooWEVOKSBSRkxB
R1M6IDAwMDAwMDAwMDAwMDAyNDYgICBDT05URVhUOiBoeXBlcnZpc29yCihYRU4pIHJheDogZmZm
ZjgyYzQ4MDMwMjM3MCAgIHJieDogZmZmZjgzMDE0ODk4ZmYxOCAgIHJjeDogMDAwMDAwMDAwMDAw
MDAwMwooWEVOKSByZHg6IDAwMDAwMDNjYmU0YTVkODAgICByc2k6IDAwMDAwMDAwNDJmNDkzNTIg
ICByZGk6IDAwMDAwMDAwMDAwMDAwMDMKKFhFTikgcmJwOiBmZmZmODMwMTQ4OThmZWYwICAgcnNw
OiBmZmZmODMwMTQ4OThmZWYwICAgcjg6ICAwMDAwMDAwYzY4NGYxYWYwCihYRU4pIHI5OiAgZmZm
ZjgzMDBhODNmYzA2MCAgIHIxMDogMDAwMDAwMDBkZWFkYmVlZiAgIHIxMTogMDAwMDAwMDAwMDAw
MDI0NgooWEVOKSByMTI6IGZmZmY4MzAxNDg5OGZmMTggICByMTM6IDAwMDAwMDAwZmZmZmZmZmYg
ICByMTQ6IDAwMDAwMDAwMDAwMDAwMDIKKFhFTikgcjE1OiBmZmZmODMwMTNlN2E4MDg4ICAgY3Iw
OiAwMDAwMDAwMDgwMDUwMDNiICAgY3I0OiAwMDAwMDAwMDAwMTAyNmYwCihYRU4pIGNyMzogMDAw
MDAwMDEzZGI2MjAwMCAgIGNyMjogZmZmZjg4MDAyNWU5MTI2MAooWEVOKSBkczogMDAyYiAgIGVz
OiAwMDJiICAgZnM6IDAwMDAgICBnczogMDAwMCAgIHNzOiBlMDEwICAgY3M6IGUwMDgKKFhFTikg
WGVuIHN0YWNrIHRyYWNlIGZyb20gcnNwPWZmZmY4MzAxNDg5OGZlZjA6CihYRU4pICAgIGZmZmY4
MzAxNDg5OGZmMTAgZmZmZjgyYzQ4MDE1OGJmOCBmZmZmODMwMGE4M2ZlMDAwIGZmZmY4MzAwYTgz
ZmMwMDAKKFhFTikgICAgZmZmZjgzMDE0ODk4ZmRhOCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMgooWEVOKSAgICBmZmZmZmZmZjgxYWFmZGEwIGZmZmY4
ODAwMjc4NmZlZTAgZmZmZjg4MDAyNzg2ZmZkOCAwMDAwMDAwMDAwMDAwMjQ2CihYRU4pICAgIDAw
MDAwMDAwMDAwMDAwMDEgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAKKFhFTikgICAgZmZmZmZmZmY4MTAwMTNhYSAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwZGVhZGJlZWYgMDAwMDAwMDBkZWFkYmVlZgooWEVOKSAgICAwMDAwMDEwMDAwMDAwMDAwIGZm
ZmZmZmZmODEwMDEzYWEgMDAwMDAwMDAwMDAwZTAzMyAwMDAwMDAwMDAwMDAwMjQ2CihYRU4pICAg
IGZmZmY4ODAwMjc4NmZlYzggMDAwMDAwMDAwMDAwZTAyYiAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAKKFhFTikgICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDMgZmZmZjgzMDBhODNmZTAwMAooWEVOKSAgICAwMDAwMDAzY2JlNGE1ZDgw
IDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgWGVuIGNhbGwgdHJhY2U6CihYRU4pICAgIFs8ZmZmZjgy
YzQ4MDE1ODNjND5dIGRlZmF1bHRfaWRsZSsweDk5LzB4OWUKKFhFTikgICAgWzxmZmZmODJjNDgw
MTU4YmY4Pl0gaWRsZV9sb29wKzB4NjIvMHg3MQooWEVOKSAgICAKKFhFTikgWzA6IGR1bXAgRG9t
MCByZWdpc3RlcnNdCihYRU4pICcwJyBwcmVzc2VkIC0+IGR1bXBpbmcgRG9tMCdzIHJlZ2lzdGVy
cwooWEVOKSAqKiogRHVtcGluZyBEb20wIHZjcHUjMCBzdGF0ZTogKioqCihYRU4pIFJJUDogICAg
ZTAzMzpbPGZmZmZmZmZmODEwMDEzYWE+XQooWEVOKSBSRkxBR1M6IDAwMDAwMDAwMDAwMDAyNDYg
ICBFTTogMCAgIENPTlRFWFQ6IHB2IGd1ZXN0CihYRU4pIHJheDogMDAwMDAwMDAwMDAwMDAwMCAg
IHJieDogZmZmZmZmZmY4MWEwMWZkOCAgIHJjeDogZmZmZmZmZmY4MTAwMTNhYQooWEVOKSByZHg6
IDAwMDAwMDAwMDAwMDAwMDAgICByc2k6IDAwMDAwMDAwZGVhZGJlZWYgICByZGk6IDAwMDAwMDAw
ZGVhZGJlZWYKKFhFTikgcmJwOiBmZmZmZmZmZjgxYTAxZWU4ICAgcnNwOiBmZmZmZmZmZjgxYTAx
ZWQwICAgcjg6ICAwMDAwMDAwMDAwMDAwMDAwCihYRU4pIHI5OiAgMDAwMDAwMDAwMDAwMDAwMCAg
IHIxMDogMDAwMDAwMDAwMDAwMDAwMSAgIHIxMTogMDAwMDAwMDAwMDAwMDI0NgooWEVOKSByMTI6
IGZmZmZmZmZmODFhYWZkYTAgICByMTM6IDAwMDAwMDAwMDAwMDAwMDAgICByMTQ6IGZmZmZmZmZm
ZmZmZmZmZmYKKFhFTikgcjE1OiAwMDAwMDAwMDAwMDAwMDAwICAgY3IwOiAwMDAwMDAwMDAwMDAw
MDA4ICAgY3I0OiAwMDAwMDAwMDAwMDAyNjYwCihYRU4pIGNyMzogMDAwMDAwMDEzYzY1NjAwMCAg
IGNyMjogMDAwMDdmODE5MjYwODk0YwooWEVOKSBkczogMDAwMCAgIGVzOiAwMDAwICAgZnM6IDAw
MDAgICBnczogMDAwMCAgIHNzOiBlMDJiICAgY3M6IGUwMzMKKFhFTikgR3Vlc3Qgc3RhY2sgdHJh
Y2UgZnJvbSByc3A9ZmZmZmZmZmY4MWEwMWVkMDoKKFhFTikgICAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMGZmZmZmZmZmIGZmZmZmZmZmODEwMGE1YzAgZmZmZmZmZmY4MWEwMWYxOAooWEVOKSAg
ICBmZmZmZmZmZjgxMDFjNjYzIGZmZmZmZmZmODFhMDFmZDggZmZmZmZmZmY4MWFhZmRhMCBmZmZm
ODgwMDJkZWUxYTAwCihYRU4pICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmY4MWEwMWY0OCBm
ZmZmZmZmZjgxMDEzMjM2IGZmZmZmZmZmZmZmZmZmZmYKKFhFTikgICAgYTNmYzk4NzMzOWVkMDEz
YiAwMDAwMDAwMDAwMDAwMDAwIGZmZmZmZmZmODFiMTUxNjAgZmZmZmZmZmY4MWEwMWY1OAooWEVO
KSAgICBmZmZmZmZmZjgxNTU0ZjVlIGZmZmZmZmZmODFhMDFmOTggZmZmZmZmZmY4MWFjY2JmNSBm
ZmZmZmZmZjgxYjE1MTYwCihYRU4pICAgIGU0YjE1OWJhM2VlYTA5NGMgMDAwMDAwMDAwMGNkZjAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAwMDAwMDAwMDAw
MDAwMCBmZmZmZmZmZjgxYTAxZmI4IGZmZmZmZmZmODFhY2MzNGIgZmZmZmZmZmY3ZmZmZmZmZgoo
WEVOKSAgICBmZmZmZmZmZjg0YjA0MDAwIGZmZmZmZmZmODFhMDFmZjggZmZmZmZmZmY4MWFjZmVj
YyAwMDAwMDAwMDAwMDAwMDAwCihYRU4pICAgIDAwMDAwMDAxMDAwMDAwMDAgMDAxMDA4MDAwMDAz
MDZhNCAxZmM5OGI3NWUzYjgyMjgzIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwCihYRU4pICoqKiBEdW1waW5nIERvbTAgdmNwdSMxIHN0
YXRlOiAqKioKKFhFTikgUklQOiAgICBlMDMzOls8ZmZmZmZmZmY4MTAwMTNhYT5dCihYRU4pIFJG
TEFHUzogMDAwMDAwMDAwMDAwMDI0NiAgIEVNOiAwICAgQ09OVEVYVDogcHYgZ3Vlc3QKKFhFTikg
cmF4OiAwMDAwMDAwMDAwMDAwMDAwICAgcmJ4OiBmZmZmODgwMDI3ODZkZmQ4ICAgcmN4OiBmZmZm
ZmZmZjgxMDAxM2FhCihYRU4pIHJkeDogMDAwMDAwMDAwMDAwMDAwMCAgIHJzaTogMDAwMDAwMDBk
ZWFkYmVlZiAgIHJkaTogMDAwMDAwMDBkZWFkYmVlZgooWEVOKSByYnA6IGZmZmY4ODAwMjc4NmRl
ZTAgICByc3A6IGZmZmY4ODAwMjc4NmRlYzggICByODogIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikg
cjk6ICAwMDAwMDAwMDAwMDAwMDAwICAgcjEwOiAwMDAwMDAwMDAwMDAwMDAxICAgcjExOiAwMDAw
MDAwMDAwMDAwMjQ2CihYRU4pIHIxMjogZmZmZmZmZmY4MWFhZmRhMCAgIHIxMzogMDAwMDAwMDAw
MDAwMDAwMSAgIHIxNDogMDAwMDAwMDAwMDAwMDAwMAooWEVOKSByMTU6IDAwMDAwMDAwMDAwMDAw
MDAgICBjcjA6IDAwMDAwMDAwODAwNTAwM2IgICBjcjQ6IDAwMDAwMDAwMDAwMDI2NjAKKFhFTikg
Y3IzOiAwMDAwMDAwMTNkOTVmMDAwICAgY3IyOiAwMDAwMDAwMDAyMTgyM2U4CihYRU4pIGRzOiAw
MDJiICAgZXM6IDAwMmIgICBmczogMDAwMCAgIGdzOiAwMDAwICAgc3M6IGUwMmIgICBjczogZTAz
MwooWEVOKSBHdWVzdCBzdGFjayB0cmFjZSBmcm9tIHJzcD1mZmZmODgwMDI3ODZkZWM4OgooWEVO
KSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwZmZmZmZmZmYgZmZmZmZmZmY4MTAwYTVjMCBm
ZmZmODgwMDI3ODZkZjEwCihYRU4pICAgIGZmZmZmZmZmODEwMWM2NjMgZmZmZjg4MDAyNzg2ZGZk
OCBmZmZmZmZmZjgxYWFmZGEwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAwMDAwMDAwMDAw
MDAwMCBmZmZmODgwMDI3ODZkZjQwIGZmZmZmZmZmODEwMTMyMzYgZmZmZmZmZmY4MTAwYWRlOQoo
WEVOKSAgICBhZGNmNDU4MDdjMmQwNGZiIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCBmZmZmODgwMDI3ODZkZjUwCihYRU4pICAgIGZmZmZmZmZmODE1NjM0MzggMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MAooWEVOKSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMAooWEVOKSAgICAwMDAwMDAwMDAwMDAwMDAwIGZmZmY4ODAwMjc4NmRmNTggMDAwMDAwMDAw
MDAwMDAwMAooWEVOKSAqKiogRHVtcGluZyBEb20wIHZjcHUjMiBzdGF0ZTogKioqCihYRU4pIFJJ
UDogICAgZTAzMzpbPGZmZmZmZmZmODEwMDEzYWE+XQooWEVOKSBSRkxBR1M6IDAwMDAwMDAwMDAw
MDAyNDYgICBFTTogMCAgIENPTlRFWFQ6IHB2IGd1ZXN0CihYRU4pIHJheDogMDAwMDAwMDAwMDAw
MDAwMCAgIHJieDogZmZmZjg4MDAyNzg2ZmZkOCAgIHJjeDogZmZmZmZmZmY4MTAwMTNhYQooWEVO
KSByZHg6IDAwMDAwMDAwMDAwMDAwMDAgICByc2k6IDAwMDAwMDAwZGVhZGJlZWYgICByZGk6IDAw
MDAwMDAwZGVhZGJlZWYKKFhFTikgcmJwOiBmZmZmODgwMDI3ODZmZWUwICAgcnNwOiBmZmZmODgw
MDI3ODZmZWM4ICAgcjg6ICAwMDAwMDAwMDAwMDAwMDAwCihYRU4pIHI5OiAgMDAwMDAwMDAwMDAw
MDAwMCAgIHIxMDogMDAwMDAwMDAwMDAwMDAwMSAgIHIxMTogMDAwMDAwMDAwMDAwMDI0NgooWEVO
KSByMTI6IGZmZmZmZmZmODFhYWZkYTAgICByMTM6IDAwMDAwMDAwMDAwMDAwMDIgICByMTQ6IDAw
MDAwMDAwMDAwMDAwMDAKKFhFTikgcjE1OiAwMDAwMDAwMDAwMDAwMDAwICAgY3IwOiAwMDAwMDAw
MDgwMDUwMDNiICAgY3I0OiAwMDAwMDAwMDAwMDAyNjYwCihYRU4pIGNyMzogMDAwMDAwMDE0Yzhk
ZTAwMCAgIGNyMjogMDAwMDdmZWFjYTY0YzAwMAooWEVOKSBkczogMDAyYiAgIGVzOiAwMDJiICAg
ZnM6IDAwMDAgICBnczogMDAwMCAgIHNzOiBlMDJiICAgY3M6IGUwMzMKKFhFTikgR3Vlc3Qgc3Rh
Y2sgdHJhY2UgZnJvbSByc3A9ZmZmZjg4MDAyNzg2ZmVjODoKKFhFTikgICAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMGZmZmZmZmZmIGZmZmZmZmZmODEwMGE1YzAgZmZmZjg4MDAyNzg2ZmYxMAoo
WEVOKSAgICBmZmZmZmZmZjgxMDFjNjYzIGZmZmY4ODAwMjc4NmZmZDggZmZmZmZmZmY4MWFhZmRh
MCAwMDAwMDAwMDAwMDAwMDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgZmZmZjg4MDAyNzg2
ZmY0MCBmZmZmZmZmZjgxMDEzMjM2IGZmZmZmZmZmODEwMGFkZTkKKFhFTikgICAgMWZlN2I1YTgy
MjE1MDQ5OSAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgZmZmZjg4MDAyNzg2ZmY1
MAooWEVOKSAgICBmZmZmZmZmZjgxNTYzNDM4IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMAooWEVOKSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAw
MDAwMDAwMDAwMDAwMCBmZmZmODgwMDI3ODZmZjU4IDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgKioq
IER1bXBpbmcgRG9tMCB2Y3B1IzMgc3RhdGU6ICoqKgooWEVOKSBSSVA6ICAgIGUwMzM6WzxmZmZm
ZmZmZjgxMDAxM2FhPl0KKFhFTikgUkZMQUdTOiAwMDAwMDAwMDAwMDAwMjQ2ICAgRU06IDAgICBD
T05URVhUOiBwdiBndWVzdAooWEVOKSByYXg6IDAwMDAwMDAwMDAwMDAwMDAgICByYng6IGZmZmY4
ODAwMjc4ODFmZDggICByY3g6IGZmZmZmZmZmODEwMDEzYWEKKFhFTikgcmR4OiAwMDAwMDAwMDAw
MDAwMDAwICAgcnNpOiAwMDAwMDAwMGRlYWRiZWVmICAgcmRpOiAwMDAwMDAwMGRlYWRiZWVmCihY
RU4pIHJicDogZmZmZjg4MDAyNzg4MWVlMCAgIHJzcDogZmZmZjg4MDAyNzg4MWVjOCAgIHI4OiAg
MDAwMDAwMDAwMDAwMDAwMAooWEVOKSByOTogIDAwMDAwMDAwMDAwMDAwMDAgICByMTA6IDAwMDAw
MDAwMDAwMDAwMDEgICByMTE6IDAwMDAwMDAwMDAwMDAyNDYKKFhFTikgcjEyOiBmZmZmZmZmZjgx
YWFmZGEwICAgcjEzOiAwMDAwMDAwMDAwMDAwMDAzICAgcjE0OiAwMDAwMDAwMDAwMDAwMDAwCihY
RU4pIHIxNTogMDAwMDAwMDAwMDAwMDAwMCAgIGNyMDogMDAwMDAwMDA4MDA1MDAzYiAgIGNyNDog
MDAwMDAwMDAwMDAwMjY2MAooWEVOKSBjcjM6IDAwMDAwMDAxM2M0OWEwMDAgICBjcjI6IDAwMDA3
ZjgxOWMzYmUwMDAKKFhFTikgZHM6IDAwMmIgICBlczogMDAyYiAgIGZzOiAwMDAwICAgZ3M6IDAw
MDAgICBzczogZTAyYiAgIGNzOiBlMDMzCihYRU4pIEd1ZXN0IHN0YWNrIHRyYWNlIGZyb20gcnNw
PWZmZmY4ODAwMjc4ODFlYzg6CihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDBmZmZm
ZmZmZiBmZmZmZmZmZjgxMDBhNWMwIGZmZmY4ODAwMjc4ODFmMTAKKFhFTikgICAgZmZmZmZmZmY4
MTAxYzY2MyBmZmZmODgwMDI3ODgxZmQ4IGZmZmZmZmZmODFhYWZkYTAgMDAwMDAwMDAwMDAwMDAw
MAooWEVOKSAgICAwMDAwMDAwMDAwMDAwMDAwIGZmZmY4ODAwMjc4ODFmNDAgZmZmZmZmZmY4MTAx
MzIzNiBmZmZmZmZmZjgxMDBhZGU5CihYRU4pICAgIDQ5ZGUxODMzZDEzZjJhMjYgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIGZmZmY4ODAwMjc4ODFmNTAKKFhFTikgICAgZmZmZmZm
ZmY4MTU2MzQzOCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMAooWEVOKSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMAooWEVOKSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgZmZm
Zjg4MDAyNzg4MWY1OCAwMDAwMDAwMDAwMDAwMDAwCihYRU4pIFtIOiBkdW1wIGhlYXAgaW5mb10K
KFhFTikgJ0gnIHByZXNzZWQgLT4gZHVtcGluZyBoZWFwIGluZm8gKG5vdy0weDRBOkREODU2MDM4
KQooWEVOKSBoZWFwW25vZGU9MF1bem9uZT0wXSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0w
XVt6b25lPTFdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9Ml0gLT4gMCBwYWdl
cwooWEVOKSBoZWFwW25vZGU9MF1bem9uZT0zXSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0w
XVt6b25lPTRdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9NV0gLT4gMCBwYWdl
cwooWEVOKSBoZWFwW25vZGU9MF1bem9uZT02XSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0w
XVt6b25lPTddIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9OF0gLT4gMCBwYWdl
cwooWEVOKSBoZWFwW25vZGU9MF1bem9uZT05XSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0w
XVt6b25lPTEwXSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTExXSAtPiAwIHBh
Z2VzCihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTEyXSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9k
ZT0wXVt6b25lPTEzXSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTE0XSAtPiAx
NjEyOCBwYWdlcwooWEVOKSBoZWFwW25vZGU9MF1bem9uZT0xNV0gLT4gMzI3NjggcGFnZXMKKFhF
TikgaGVhcFtub2RlPTBdW3pvbmU9MTZdIC0+IDY1NTM2IHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0w
XVt6b25lPTE3XSAtPiAxMzA1NTkgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MThdIC0+
IDI2MjE0MyBwYWdlcwooWEVOKSBoZWFwW25vZGU9MF1bem9uZT0xOV0gLT4gMTcyODM3IHBhZ2Vz
CihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTIwXSAtPiAxMzQyMjUgcGFnZXMKKFhFTikgaGVhcFtu
b2RlPTBdW3pvbmU9MjFdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MjJdIC0+
IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MjNdIC0+IDAgcGFnZXMKKFhFTikgaGVh
cFtub2RlPTBdW3pvbmU9MjRdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MjVd
IC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MjZdIC0+IDAgcGFnZXMKKFhFTikg
aGVhcFtub2RlPTBdW3pvbmU9MjddIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9
MjhdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MjldIC0+IDAgcGFnZXMKKFhF
TikgaGVhcFtub2RlPTBdW3pvbmU9MzBdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pv
bmU9MzFdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MzJdIC0+IDAgcGFnZXMK
KFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MzNdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBd
W3pvbmU9MzRdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MzVdIC0+IDAgcGFn
ZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MzZdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2Rl
PTBdW3pvbmU9MzddIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MzhdIC0+IDAg
cGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MzldIC0+IDAgcGFnZXMKKFhFTikgW0k6IGR1
bXAgSFZNIGlycSBpbmZvXQooWEVOKSAnSScgcHJlc3NlZCAtPiBkdW1waW5nIEhWTSBpcnEgaW5m
bwooWEVOKSBbTTogZHVtcCBNU0kgc3RhdGVdCihYRU4pIFBDSS1NU0kgaW50ZXJydXB0IGluZm9y
bWF0aW9uOgooWEVOKSAgTVNJICAgIDI2IHZlYz02OSBsb3dlc3QgIGVkZ2UgICBhc3NlcnQgIGxv
ZyBsb3dlc3QgZGVzdD0wMDAwMDAwMSBtYXNrPTAvMS8tMQooWEVOKSAgTVNJICAgIDI3IHZlYz0w
MCAgZml4ZWQgIGVkZ2UgZGVhc3NlcnQgcGh5cyBsb3dlc3QgZGVzdD0wMDAwMDAwMSBtYXNrPTAv
MS8tMQooWEVOKSAgTVNJICAgIDI4IHZlYz0zMSBsb3dlc3QgIGVkZ2UgICBhc3NlcnQgIGxvZyBs
b3dlc3QgZGVzdD0wMDAwMDAwMSBtYXNrPTAvMS8tMQooWEVOKSAgTVNJICAgIDI5IHZlYz03MSBs
b3dlc3QgIGVkZ2UgICBhc3NlcnQgIGxvZyBsb3dlc3QgZGVzdD0wMDAwMDAwMSBtYXNrPTAvMS8t
MQooWEVOKSAgTVNJICAgIDMwIHZlYz04OSBsb3dlc3QgIGVkZ2UgICBhc3NlcnQgIGxvZyBsb3dl
c3QgZGVzdD0wMDAwMDAwMSBtYXNrPTAvMS8tMQooWEVOKSBbUTogZHVtcCBQQ0kgZGV2aWNlc10K
KFhFTikgPT09PSBQQ0kgZGV2aWNlcyA9PT09CihYRU4pID09PT0gc2VnbWVudCAwMDAwID09PT0K
KFhFTikgMDAwMDowNTowMS4wIC0gZG9tIDAgICAtIE1TSXMgPCA+CihYRU4pIDAwMDA6MDQ6MDAu
MCAtIGRvbSAwICAgLSBNU0lzIDwgPgooWEVOKSAwMDAwOjAzOjAwLjAgLSBkb20gMCAgIC0gTVNJ
cyA8ID4KKFhFTikgMDAwMDowMjowMC4wIC0gZG9tIDAgICAtIE1TSXMgPCAyOSA+CihYRU4pIDAw
MDA6MDA6MWYuMyAtIGRvbSAwICAgLSBNU0lzIDwgPgooWEVOKSAwMDAwOjAwOjFmLjIgLSBkb20g
MCAgIC0gTVNJcyA8IDI3ID4KKFhFTikgMDAwMDowMDoxZi4wIC0gZG9tIDAgICAtIE1TSXMgPCA+
CihYRU4pIDAwMDA6MDA6MWUuMCAtIGRvbSAwICAgLSBNU0lzIDwgPgooWEVOKSAwMDAwOjAwOjFk
LjAgLSBkb20gMCAgIC0gTVNJcyA8ID4KKFhFTikgMDAwMDowMDoxYy43IC0gZG9tIDAgICAtIE1T
SXMgPCA+CihYRU4pIDAwMDA6MDA6MWMuNiAtIGRvbSAwICAgLSBNU0lzIDwgPgooWEVOKSAwMDAw
OjAwOjFjLjAgLSBkb20gMCAgIC0gTVNJcyA8ID4KKFhFTikgMDAwMDowMDoxYi4wIC0gZG9tIDAg
ICAtIE1TSXMgPCAyNiA+CihYRU4pIDAwMDA6MDA6MWEuMCAtIGRvbSAwICAgLSBNU0lzIDwgPgoo
WEVOKSAwMDAwOjAwOjE5LjAgLSBkb20gMCAgIC0gTVNJcyA8IDMwID4KKFhFTikgMDAwMDowMDox
Ni4zIC0gZG9tIDAgICAtIE1TSXMgPCA+CihYRU4pIDAwMDA6MDA6MTYuMCAtIGRvbSAwICAgLSBN
U0lzIDwgPgooWEVOKSAwMDAwOjAwOjE0LjAgLSBkb20gMCAgIC0gTVNJcyA8ID4KKFhFTikgMDAw
MDowMDowMi4wIC0gZG9tIDAgICAtIE1TSXMgPCAyOCA+CihYRU4pIDAwMDA6MDA6MDAuMCAtIGRv
bSAwICAgLSBNU0lzIDwgPgooWEVOKSBbVjogZHVtcCBpb21tdSBpbmZvXQooWEVOKSAKKFhFTikg
aW9tbXUgMDogbnJfcHRfbGV2ZWxzID0gMy4KKFhFTikgICBRdWV1ZWQgSW52YWxpZGF0aW9uOiBz
dXBwb3J0ZWQgYW5kIGVuYWJsZWQuCihYRU4pICAgSW50ZXJydXB0IFJlbWFwcGluZzogc3VwcG9y
dGVkIGFuZCBlbmFibGVkLgooWEVOKSAgIEludGVycnVwdCByZW1hcHBpbmcgdGFibGUgKG5yX2Vu
dHJ5PTB4MTAwMDAuIE9ubHkgZHVtcCBQPTEgZW50cmllcyBoZXJlKToKKFhFTikgICAgICAgIFNW
VCAgU1EgICBTSUQgICAgICBEU1QgIFYgIEFWTCBETE0gVE0gUkggRE0gRlBEIFAKKFhFTikgICAw
MDAwOiAgMSAgIDAgIDAwMTAgMDAwMDAwMDEgMzEgICAgMCAgIDEgIDAgIDEgIDEgICAwIDEKKFhF
TikgCihYRU4pIGlvbW11IDE6IG5yX3B0X2xldmVscyA9IDMuCihYRU4pICAgUXVldWVkIEludmFs
aWRhdGlvbjogc3VwcG9ydGVkIGFuZCBlbmFibGVkLgooWEVOKSAgIEludGVycnVwdCBSZW1hcHBp
bmc6IHN1cHBvcnRlZCBhbmQgZW5hYmxlZC4KKFhFTikgICBJbnRlcnJ1cHQgcmVtYXBwaW5nIHRh
YmxlIChucl9lbnRyeT0weDEwMDAwLiBPbmx5IGR1bXAgUD0xIGVudHJpZXMgaGVyZSk6CihYRU4p
ICAgICAgICBTVlQgIFNRICAgU0lEICAgICAgRFNUICBWICBBVkwgRExNIFRNIFJIIERNIEZQRCBQ
CihYRU4pICAgMDAwMDogIDEgICAwICBmMGY4IDAwMDAwMDAxIDM4ICAgIDAgICAxICAwICAxICAx
ICAgMCAxCihYRU4pICAgMDAwMTogIDEgICAwICBmMGY4IDAwMDAwMDAxIGYwICAgIDAgICAxICAw
ICAxICAxICAgMCAxCihYRU4pICAgMDAwMjogIDEgICAwICBmMGY4IDAwMDAwMDAxIDQwICAgIDAg
ICAxICAwICAxICAxICAgMCAxCihYRU4pICAgMDAwMzogIDEgICAwICBmMGY4IDAwMDAwMDAxIDQ4
ICAgIDAgICAxICAwICAxICAxICAgMCAxCihYRU4pICAgMDAwNDogIDEgICAwICBmMGY4IDAwMDAw
MDAxIDUwICAgIDAgICAxICAwICAxICAxICAgMCAxCihYRU4pICAgMDAwNTogIDEgICAwICBmMGY4
IDAwMDAwMDAxIDU4ICAgIDAgICAxICAwICAxICAxICAgMCAxCihYRU4pICAgMDAwNjogIDEgICAw
ICBmMGY4IDAwMDAwMDAxIDYwICAgIDAgICAxICAwICAxICAxICAgMCAxCihYRU4pICAgMDAwNzog
IDEgICAwICBmMGY4IDAwMDAwMDAxIDY4ICAgIDAgICAxICAwICAxICAxICAgMCAxCihYRU4pICAg
MDAwODogIDEgICAwICBmMGY4IDAwMDAwMDAxIDcwICAgIDAgICAxICAxICAxICAxICAgMCAxCihY
RU4pICAgMDAwOTogIDEgICAwICBmMGY4IDAwMDAwMDAxIDc4ICAgIDAgICAxICAwICAxICAxICAg
MCAxCihYRU4pICAgMDAwYTogIDEgICAwICBmMGY4IDAwMDAwMDAxIDg4ICAgIDAgICAxICAwICAx
ICAxICAgMCAxCihYRU4pICAgMDAwYjogIDEgICAwICBmMGY4IDAwMDAwMDAxIDkwICAgIDAgICAx
ICAwICAxICAxICAgMCAxCihYRU4pICAgMDAwYzogIDEgICAwICBmMGY4IDAwMDAwMDAxIDk4ICAg
IDAgICAxICAwICAxICAxICAgMCAxCihYRU4pICAgMDAwZDogIDEgICAwICBmMGY4IDAwMDAwMDAx
IGEwICAgIDAgICAxICAwICAxICAxICAgMCAxCihYRU4pICAgMDAwZTogIDEgICAwICBmMGY4IDAw
MDAwMDAxIGE4ICAgIDAgICAxICAwICAxICAxICAgMCAxCihYRU4pICAgMDAwZjogIDEgICAwICBm
MGY4IDAwMDAwMDAxIGIwICAgIDAgICAxICAxICAxICAxICAgMCAxCihYRU4pICAgMDAxMDogIDEg
ICAwICBmMGY4IDAwMDAwMDAxIGI4ICAgIDAgICAxICAxICAxICAxICAgMCAxCihYRU4pICAgMDAx
MTogIDEgICAwICBmMGY4IDAwMDAwMDAxIGMwICAgIDAgICAxICAxICAxICAxICAgMCAxCihYRU4p
ICAgMDAxMjogIDEgICAwICBmMGY4IDAwMDAwMDAxIGM4ICAgIDAgICAxICAxICAxICAxICAgMCAx
CihYRU4pICAgMDAxMzogIDEgICAwICBmMGY4IDAwMDAwMDAxIGQwICAgIDAgICAxICAxICAxICAx
ICAgMCAxCihYRU4pICAgMDAxNDogIDEgICAwICBmMGY4IDAwMDAwMDAxIGQ4ICAgIDAgICAxICAx
ICAxICAxICAgMCAxCihYRU4pICAgMDAxNTogIDEgICAwICAwMGQ4IDAwMDAwMDAxIDY5ICAgIDAg
ICAxICAwICAxICAxICAgMCAxCihYRU4pICAgMDAxNjogIDEgICAwICAwMGZhIDAwMDAwMDAxIDI5
ICAgIDAgICAxICAwICAxICAxICAgMCAxCihYRU4pICAgMDAxNzogIDEgICAwICAwMjAwIDAwMDAw
MDAxIDcxICAgIDAgICAxICAwICAxICAxICAgMCAxCihYRU4pICAgMDAxODogIDEgICAwICAwMGM4
IDAwMDAwMDAxIDg5ICAgIDAgICAxICAwICAxICAxICAgMCAxCihYRU4pIAooWEVOKSBSZWRpcmVj
dGlvbiB0YWJsZSBvZiBJT0FQSUMgMDoKKFhFTikgICAjZW50cnkgSURYIEZNVCBNQVNLIFRSSUcg
SVJSIFBPTCBTVEFUIERFTEkgIFZFQ1RPUgooWEVOKSAgICAwMTogIDAwMDAgICAxICAgIDAgICAw
ICAgMCAgIDAgICAgMCAgICAwICAgICAzOAooWEVOKSAgICAwMjogIDAwMDEgICAxICAgIDAgICAw
ICAgMCAgIDAgICAgMCAgICAwICAgICBmMAooWEVOKSAgICAwMzogIDAwMDIgICAxICAgIDAgICAw
ICAgMCAgIDAgICAgMCAgICAwICAgICA0MAooWEVOKSAgICAwNDogIDAwMDMgICAxICAgIDAgICAw
ICAgMCAgIDAgICAgMCAgICAwICAgICA0OAooWEVOKSAgICAwNTogIDAwMDQgICAxICAgIDAgICAw
ICAgMCAgIDAgICAgMCAgICAwICAgICA1MAooWEVOKSAgICAwNjogIDAwMDUgICAxICAgIDAgICAw
ICAgMCAgIDAgICAgMCAgICAwICAgICA1OAooWEVOKSAgICAwNzogIDAwMDYgICAxICAgIDAgICAw
ICAgMCAgIDAgICAgMCAgICAwICAgICA2MAooWEVOKSAgICAwODogIDAwMDcgICAxICAgIDAgICAw
ICAgMCAgIDAgICAgMCAgICAwICAgICA2OAooWEVOKSAgICAwOTogIDAwMDggICAxICAgIDAgICAx
ICAgMCAgIDAgICAgMCAgICAwICAgICA3MAooWEVOKSAgICAwYTogIDAwMDkgICAxICAgIDAgICAw
ICAgMCAgIDAgICAgMCAgICAwICAgICA3OAooWEVOKSAgICAwYjogIDAwMGEgICAxICAgIDAgICAw
ICAgMCAgIDAgICAgMCAgICAwICAgICA4OAooWEVOKSAgICAwYzogIDAwMGIgICAxICAgIDAgICAw
ICAgMCAgIDAgICAgMCAgICAwICAgICA5MAooWEVOKSAgICAwZDogIDAwMGMgICAxICAgIDEgICAw
ICAgMCAgIDAgICAgMCAgICAwICAgICA5OAooWEVOKSAgICAwZTogIDAwMGQgICAxICAgIDAgICAw
ICAgMCAgIDAgICAgMCAgICAwICAgICBhMAooWEVOKSAgICAwZjogIDAwMGUgICAxICAgIDAgICAw
ICAgMCAgIDAgICAgMCAgICAwICAgICBhOAooWEVOKSAgICAxMDogIDAwMGYgICAxICAgIDAgICAx
ICAgMCAgIDEgICAgMCAgICAwICAgICBiMAooWEVOKSAgICAxMjogIDAwMTAgICAxICAgIDEgICAx
ICAgMCAgIDEgICAgMCAgICAwICAgICBiOAooWEVOKSAgICAxMzogIDAwMTEgICAxICAgIDEgICAx
ICAgMCAgIDEgICAgMCAgICAwICAgICBjMAooWEVOKSAgICAxNDogIDAwMTQgICAxICAgIDEgICAx
ICAgMCAgIDEgICAgMCAgICAwICAgICBkOAooWEVOKSAgICAxNjogIDAwMTMgICAxICAgIDEgICAx
ICAgMCAgIDEgICAgMCAgICAwICAgICBkMAooWEVOKSAgICAxNzogIDAwMTIgICAxICAgIDAgICAx
ICAgMCAgIDEgICAgMCAgICAwICAgICBjOAooWEVOKSBbYTogZHVtcCB0aW1lciBxdWV1ZXNdCihY
RU4pIER1bXBpbmcgdGltZXIgcXVldWVzOgooWEVOKSBDUFUwMDoKKFhFTikgICBleD0gICAtMTY4
MXVzIHRpbWVyPWZmZmY4MmM0ODAyZTI1YzggY2I9ZmZmZjgyYzQ4MDEzZDc1NyhmZmZmODJjNDgw
MjcxODAwKSBuczE2NTUwX3BvbGwrMHgwLzB4MzMKKFhFTikgICBleD0gICAtMTY3OXVzIHRpbWVy
PWZmZmY4MzAxNGNhOTI1OTAgY2I9ZmZmZjgyYzQ4MDE2NjQxNihmZmZmODMwMTQ4OTQxZDgwKSBp
cnFfZ3Vlc3RfZW9pX3RpbWVyX2ZuKzB4MC8weDE1ZAooWEVOKSAgIGV4PSAgICA3MzIwdXMgdGlt
ZXI9ZmZmZjgzMDE0ODk3MzFiOCBjYj1mZmZmODJjNDgwMTE5ZDcyKGZmZmY4MzAxNDg5NzMxOTAp
IGNzY2hlZF9hY2N0KzB4MC8weDQyYQooWEVOKSAgIGV4PTEyODEwMjkzMHVzIHRpbWVyPWZmZmY4
MmM0ODAyZmUyODAgY2I9ZmZmZjgyYzQ4MDE4MDdjMigwMDAwMDAwMDAwMDAwMDAwKSBwbHRfb3Zl
cmZsb3crMHgwLzB4MTMxCihYRU4pICAgZXg9IDYyNDQ2Mzl1cyB0aW1lcj1mZmZmODJjNDgwMzAw
NTgwIGNiPWZmZmY4MmM0ODAxYTg4NTAoMDAwMDAwMDAwMDAwMDAwMCkgbWNlX3dvcmtfZm4rMHgw
LzB4YTkKKFhFTikgICBleD0gICAgNzMyMHVzIHRpbWVyPWZmZmY4MzAxNDg5NzNlYTggY2I9ZmZm
ZjgyYzQ4MDExYWFmMCgwMDAwMDAwMDAwMDAwMDAwKSBjc2NoZWRfdGljaysweDAvMHgzMTQKKFhF
TikgQ1BVMDE6CihYRU4pICAgZXg9ICAgNjYyMDZ1cyB0aW1lcj1mZmZmODMwMTRiMzI5ZWI4IGNi
PWZmZmY4MmM0ODAxMWFhZjAoMDAwMDAwMDAwMDAwMDAwMSkgY3NjaGVkX3RpY2srMHgwLzB4MzE0
CihYRU4pICAgZXg9ICAgNzA5MDV1cyB0aW1lcj1mZmZmODMwMGFhNTgzMDYwIGNiPWZmZmY4MmM0
ODAxMjFjNmIoZmZmZjgzMDBhYTU4MzAwMCkgdmNwdV9zaW5nbGVzaG90X3RpbWVyX2ZuKzB4MC8w
eGIKKFhFTikgQ1BVMDI6CihYRU4pICAgZXg9ICAgODY1MjF1cyB0aW1lcj1mZmZmODMwMTQ4OTk0
NjU4IGNiPWZmZmY4MmM0ODAxMWFhZjAoMDAwMDAwMDAwMDAwMDAwMikgY3NjaGVkX3RpY2srMHgw
LzB4MzE0CihYRU4pICAgZXg9ICA3OTQyNjJ1cyB0aW1lcj1mZmZmODMwMGE4M2ZkMDYwIGNiPWZm
ZmY4MmM0ODAxMjFjNmIoZmZmZjgzMDBhODNmZDAwMCkgdmNwdV9zaW5nbGVzaG90X3RpbWVyX2Zu
KzB4MC8weGIKKFhFTikgQ1BVMDM6CihYRU4pICAgZXg9ICAxMTIwNTN1cyB0aW1lcj1mZmZmODMw
MTRiMzI5MGI4IGNiPWZmZmY4MmM0ODAxMWFhZjAoMDAwMDAwMDAwMDAwMDAwMykgY3NjaGVkX3Rp
Y2srMHgwLzB4MzE0CihYRU4pICAgZXg9ICAzMzIyNzN1cyB0aW1lcj1mZmZmODMwMGE4M2ZjMDYw
IGNiPWZmZmY4MmM0ODAxMjFjNmIoZmZmZjgzMDBhODNmYzAwMCkgdmNwdV9zaW5nbGVzaG90X3Rp
bWVyX2ZuKzB4MC8weGIKKFhFTikgW2M6IGR1bXAgQUNQSSBDeCBzdHJ1Y3R1cmVzXQooWEVOKSAn
YycgcHJlc3NlZCAtPiBwcmludGluZyBBQ1BJIEN4IHN0cnVjdHVyZXMKKFhFTikgPT1jcHUwPT0K
KFhFTikgYWN0aXZlIHN0YXRlOgkJQzI1NQooWEVOKSBtYXhfY3N0YXRlOgkJQzcKKFhFTikgc3Rh
dGVzOgooWEVOKSAgICAgQzE6CXR5cGVbQzFdIGxhdGVuY3lbMDAwXSB1c2FnZVswMDAwMDAwMF0g
bWV0aG9kWyBIQUxUXSBkdXJhdGlvblswXQooWEVOKSAgICAgQzA6CXVzYWdlWzAwMDAwMDAwXSBk
dXJhdGlvblszMjIzMDEzNzcxNjRdCihYRU4pIFBDMlswXSBQQzNbMF0gUEM2WzBdIFBDN1swXQoo
WEVOKSBDQzNbMF0gQ0M2WzBdIENDN1swXQooWEVOKSA9PWNwdTE9PQooWEVOKSBhY3RpdmUgc3Rh
dGU6CQlDMjU1CihYRU4pIG1heF9jc3RhdGU6CQlDNwooWEVOKSBzdGF0ZXM6CihYRU4pICAgICBD
MToJdHlwZVtDMV0gbGF0ZW5jeVswMDBdIHVzYWdlWzAwMDAwMDAwXSBtZXRob2RbIEhBTFRdIGR1
cmF0aW9uWzBdCihYRU4pICAgICBDMDoJdXNhZ2VbMDAwMDAwMDBdIGR1cmF0aW9uWzMyMjMyNjE2
NzA3N10KKFhFTikgUEMyWzBdIFBDM1swXSBQQzZbMF0gUEM3WzBdCihYRU4pIENDM1swXSBDQzZb
MF0gQ0M3WzBdCihYRU4pID09Y3B1Mj09CihYRU4pIGFjdGl2ZSBzdGF0ZToJCUMyNTUKKFhFTikg
bWF4X2NzdGF0ZToJCUM3CihYRU4pIHN0YXRlczoKKFhFTikgICAgIEMxOgl0eXBlW0MxXSBsYXRl
bmN5WzAwMF0gdXNhZ2VbMDAwMDAwMDBdIG1ldGhvZFsgSEFMVF0gZHVyYXRpb25bMF0KKFhFTikg
ICAgIEMwOgl1c2FnZVswMDAwMDAwMF0gZHVyYXRpb25bMzIyMzUwOTU3MjkzXQooWEVOKSBQQzJb
MF0gUEMzWzBdIFBDNlswXSBQQzdbMF0KKFhFTikgQ0MzWzBdIENDNlswXSBDQzdbMF0KKFhFTikg
PT1jcHUzPT0KKFhFTikgYWN0aXZlIHN0YXRlOgkJQzI1NQooWEVOKSBtYXhfY3N0YXRlOgkJQzcK
KFhFTikgc3RhdGVzOgooWEVOKSAgICAgQzE6CXR5cGVbQzFdIGxhdGVuY3lbMDAwXSB1c2FnZVsw
MDAwMDAwMF0gbWV0aG9kWyBIQUxUXSBkdXJhdGlvblswXQooWEVOKSAgICAgQzA6CXVzYWdlWzAw
MDAwMDAwXSBkdXJhdGlvblszMjIzNzU3NDY3MjBdCihYRU4pIFBDMlswXSBQQzNbMF0gUEM2WzBd
IFBDN1swXQooWEVOKSBDQzNbMF0gQ0M2WzBdIENDN1swXQooWEVOKSBbZTogZHVtcCBldnRjaG4g
aW5mb10KKFhFTikgJ2UnIHByZXNzZWQgLT4gZHVtcGluZyBldmVudC1jaGFubmVsIGluZm8KKFhF
TikgRXZlbnQgY2hhbm5lbCBpbmZvcm1hdGlvbiBmb3IgZG9tYWluIDA6CihYRU4pIFBvbGxpbmcg
dkNQVXM6IHt9CihYRU4pICAgICBwb3J0IFtwL21dCihYRU4pICAgICAgICAxIFsxLzBdOiBzPTUg
bj0wIHg9MCB2PTAKKFhFTikgICAgICAgIDIgWzAvMV06IHM9NiBuPTAgeD0wCihYRU4pICAgICAg
ICAzIFsxLzBdOiBzPTYgbj0wIHg9MAooWEVOKSAgICAgICAgNCBbMC8wXTogcz02IG49MCB4PTAK
KFhFTikgICAgICAgIDUgWzAvMF06IHM9NSBuPTAgeD0wIHY9MQooWEVOKSAgICAgICAgNiBbMC8w
XTogcz02IG49MCB4PTAKKFhFTikgICAgICAgIDcgWzAvMF06IHM9NSBuPTEgeD0wIHY9MAooWEVO
KSAgICAgICAgOCBbMC8xXTogcz02IG49MSB4PTAKKFhFTikgICAgICAgIDkgWzAvMF06IHM9NiBu
PTEgeD0wCihYRU4pICAgICAgIDEwIFswLzBdOiBzPTYgbj0xIHg9MAooWEVOKSAgICAgICAxMSBb
MC8wXTogcz01IG49MSB4PTAgdj0xCihYRU4pICAgICAgIDEyIFswLzBdOiBzPTYgbj0xIHg9MAoo
WEVOKSAgICAgICAxMyBbMC8wXTogcz01IG49MiB4PTAgdj0wCihYRU4pICAgICAgIDE0IFswLzFd
OiBzPTYgbj0yIHg9MAooWEVOKSAgICAgICAxNSBbMC8wXTogcz02IG49MiB4PTAKKFhFTikgICAg
ICAgMTYgWzAvMF06IHM9NiBuPTIgeD0wCihYRU4pICAgICAgIDE3IFswLzBdOiBzPTUgbj0yIHg9
MCB2PTEKKFhFTikgICAgICAgMTggWzAvMF06IHM9NiBuPTIgeD0wCihYRU4pICAgICAgIDE5IFsw
LzBdOiBzPTUgbj0zIHg9MCB2PTAKKFhFTikgICAgICAgMjAgWzEvMV06IHM9NiBuPTMgeD0wCihY
RU4pICAgICAgIDIxIFswLzBdOiBzPTYgbj0zIHg9MAooWEVOKSAgICAgICAyMiBbMC8wXTogcz02
IG49MyB4PTAKKFhFTikgICAgICAgMjMgWzAvMF06IHM9NSBuPTMgeD0wIHY9MQooWEVOKSAgICAg
ICAyNCBbMC8wXTogcz02IG49MyB4PTAKKFhFTikgICAgICAgMjUgWzAvMF06IHM9MyBuPTAgeD0w
IGQ9MCBwPTM1CihYRU4pICAgICAgIDI2IFswLzBdOiBzPTQgbj0wIHg9MCBwPTkgaT05CihYRU4p
ICAgICAgIDI3IFswLzBdOiBzPTUgbj0wIHg9MCB2PTIKKFhFTikgICAgICAgMjggWzAvMF06IHM9
NCBuPTAgeD0wIHA9OCBpPTgKKFhFTikgICAgICAgMjkgWzAvMF06IHM9NCBuPTAgeD0wIHA9Mjc4
IGk9MjcKKFhFTikgICAgICAgMzAgWzAvMF06IHM9NCBuPTAgeD0wIHA9Mjc5IGk9MjYKKFhFTikg
ICAgICAgMzEgWzAvMF06IHM9NCBuPTAgeD0wIHA9Mjc3IGk9MjgKKFhFTikgICAgICAgMzIgWzAv
MF06IHM9NCBuPTAgeD0wIHA9MTYgaT0xNgooWEVOKSAgICAgICAzMyBbMC8wXTogcz00IG49MCB4
PTAgcD0yMyBpPTIzCihYRU4pICAgICAgIDM0IFsxLzBdOiBzPTQgbj0wIHg9MCBwPTI3NiBpPTI5
CihYRU4pICAgICAgIDM1IFswLzBdOiBzPTMgbj0wIHg9MCBkPTAgcD0yNQooWEVOKSAgICAgICAz
NiBbMC8wXTogcz01IG49MCB4PTAgdj0zCihYRU4pICAgICAgIDM3IFsxLzBdOiBzPTQgbj0wIHg9
MCBwPTI3NSBpPTMwCihYRU4pIFtnOiBwcmludCBncmFudCB0YWJsZSB1c2FnZV0KKFhFTikgZ250
dGFiX3VzYWdlX3ByaW50X2FsbCBbIGtleSAnZycgcHJlc3NlZAooWEVOKSAgICAgICAtLS0tLS0t
LSBhY3RpdmUgLS0tLS0tLS0gICAgICAgLS0tLS0tLS0gc2hhcmVkIC0tLS0tLS0tCihYRU4pIFty
ZWZdIGxvY2FsZG9tIG1mbiAgICAgIHBpbiAgICAgICAgICBsb2NhbGRvbSBnbWZuICAgICBmbGFn
cwooWEVOKSBncmFudC10YWJsZSBmb3IgcmVtb3RlIGRvbWFpbjogICAgMCAuLi4gbm8gYWN0aXZl
IGdyYW50IHRhYmxlIGVudHJpZXMKKFhFTikgZ250dGFiX3VzYWdlX3ByaW50X2FsbCBdIGRvbmUK
KFhFTikgW2k6IGR1bXAgaW50ZXJydXB0IGJpbmRpbmdzXQooWEVOKSBHdWVzdCBpbnRlcnJ1cHQg
aW5mb3JtYXRpb246CihYRU4pICAgIElSUTogICAwIGFmZmluaXR5OjAwMDEgdmVjOmYwIHR5cGU9
SU8tQVBJQy1lZGdlICAgIHN0YXR1cz0wMDAwMDAwMCBtYXBwZWQsIHVuYm91bmQKKFhFTikgICAg
SVJROiAgIDEgYWZmaW5pdHk6MDAwMSB2ZWM6MzggdHlwZT1JTy1BUElDLWVkZ2UgICAgc3RhdHVz
PTAwMDAwMDAyIG1hcHBlZCwgdW5ib3VuZAooWEVOKSAgICBJUlE6ICAgMiBhZmZpbml0eTpmZmZm
IHZlYzplMiB0eXBlPVhULVBJQyAgICAgICAgICBzdGF0dXM9MDAwMDAwMDAgbWFwcGVkLCB1bmJv
dW5kCihYRU4pICAgIElSUTogICAzIGFmZmluaXR5OjAwMDEgdmVjOjQwIHR5cGU9SU8tQVBJQy1l
ZGdlICAgIHN0YXR1cz0wMDAwMDAwMiBtYXBwZWQsIHVuYm91bmQKKFhFTikgICAgSVJROiAgIDQg
YWZmaW5pdHk6MDAwMSB2ZWM6NDggdHlwZT1JTy1BUElDLWVkZ2UgICAgc3RhdHVzPTAwMDAwMDAy
IG1hcHBlZCwgdW5ib3VuZAooWEVOKSAgICBJUlE6ICAgNSBhZmZpbml0eTowMDAxIHZlYzo1MCB0
eXBlPUlPLUFQSUMtZWRnZSAgICBzdGF0dXM9MDAwMDAwMDIgbWFwcGVkLCB1bmJvdW5kCihYRU4p
ICAgIElSUTogICA2IGFmZmluaXR5OjAwMDEgdmVjOjU4IHR5cGU9SU8tQVBJQy1lZGdlICAgIHN0
YXR1cz0wMDAwMDAwMiBtYXBwZWQsIHVuYm91bmQKKFhFTikgICAgSVJROiAgIDcgYWZmaW5pdHk6
MDAwMSB2ZWM6NjAgdHlwZT1JTy1BUElDLWVkZ2UgICAgc3RhdHVzPTAwMDAwMDAyIG1hcHBlZCwg
dW5ib3VuZAooWEVOKSAgICBJUlE6ICAgOCBhZmZpbml0eTowMDAxIHZlYzo2OCB0eXBlPUlPLUFQ
SUMtZWRnZSAgICBzdGF0dXM9MDAwMDAwMTAgaW4tZmxpZ2h0PTAgZG9tYWluLWxpc3Q9MDogIDgo
LVMtLSksCihYRU4pICAgIElSUTogICA5IGFmZmluaXR5OjAwMDEgdmVjOjcwIHR5cGU9SU8tQVBJ
Qy1sZXZlbCAgIHN0YXR1cz0wMDAwMDAxMCBpbi1mbGlnaHQ9MCBkb21haW4tbGlzdD0wOiAgOSgt
Uy0tKSwKKFhFTikgICAgSVJROiAgMTAgYWZmaW5pdHk6MDAwMSB2ZWM6NzggdHlwZT1JTy1BUElD
LWVkZ2UgICAgc3RhdHVzPTAwMDAwMDAyIG1hcHBlZCwgdW5ib3VuZAooWEVOKSAgICBJUlE6ICAx
MSBhZmZpbml0eTowMDAxIHZlYzo4OCB0eXBlPUlPLUFQSUMtZWRnZSAgICBzdGF0dXM9MDAwMDAw
MDIgbWFwcGVkLCB1bmJvdW5kCihYRU4pICAgIElSUTogIDEyIGFmZmluaXR5OjAwMDEgdmVjOjkw
IHR5cGU9SU8tQVBJQy1lZGdlICAgIHN0YXR1cz0wMDAwMDAwMiBtYXBwZWQsIHVuYm91bmQKKFhF
TikgICAgSVJROiAgMTMgYWZmaW5pdHk6MDAwZiB2ZWM6OTggdHlwZT1JTy1BUElDLWVkZ2UgICAg
c3RhdHVzPTAwMDAwMDAyIG1hcHBlZCwgdW5ib3VuZAooWEVOKSAgICBJUlE6ICAxNCBhZmZpbml0
eTowMDAxIHZlYzphMCB0eXBlPUlPLUFQSUMtZWRnZSAgICBzdGF0dXM9MDAwMDAwMDIgbWFwcGVk
LCB1bmJvdW5kCihYRU4pICAgIElSUTogIDE1IGFmZmluaXR5OjAwMDEgdmVjOmE4IHR5cGU9SU8t
QVBJQy1lZGdlICAgIHN0YXR1cz0wMDAwMDAwMiBtYXBwZWQsIHVuYm91bmQKKFhFTikgICAgSVJR
OiAgMTYgYWZmaW5pdHk6MDAwMSB2ZWM6YjAgdHlwZT1JTy1BUElDLWxldmVsICAgc3RhdHVzPTAw
MDAwMDEwIGluLWZsaWdodD0wIGRvbWFpbi1saXN0PTA6IDE2KC1TLS0pLAooWEVOKSAgICBJUlE6
ICAxOCBhZmZpbml0eTowMDBmIHZlYzpiOCB0eXBlPUlPLUFQSUMtbGV2ZWwgICBzdGF0dXM9MDAw
MDAwMDIgbWFwcGVkLCB1bmJvdW5kCihYRU4pICAgIElSUTogIDE5IGFmZmluaXR5OjAwMDEgdmVj
OmMwIHR5cGU9SU8tQVBJQy1sZXZlbCAgIHN0YXR1cz0wMDAwMDAwMiBtYXBwZWQsIHVuYm91bmQK
KFhFTikgICAgSVJROiAgMjAgYWZmaW5pdHk6MDAwZiB2ZWM6ZDggdHlwZT1JTy1BUElDLWxldmVs
ICAgc3RhdHVzPTAwMDAwMDAyIG1hcHBlZCwgdW5ib3VuZAooWEVOKSAgICBJUlE6ICAyMiBhZmZp
bml0eTowMDAxIHZlYzpkMCB0eXBlPUlPLUFQSUMtbGV2ZWwgICBzdGF0dXM9MDAwMDAwMDIgbWFw
cGVkLCB1bmJvdW5kCihYRU4pICAgIElSUTogIDIzIGFmZmluaXR5OjAwMDEgdmVjOmM4IHR5cGU9
SU8tQVBJQy1sZXZlbCAgIHN0YXR1cz0wMDAwMDAxMCBpbi1mbGlnaHQ9MCBkb21haW4tbGlzdD0w
OiAyMygtUy0tKSwKKFhFTikgICAgSVJROiAgMjQgYWZmaW5pdHk6MDAwMSB2ZWM6MjggdHlwZT1E
TUFfTVNJICAgICAgICAgc3RhdHVzPTAwMDAwMDAwIG1hcHBlZCwgdW5ib3VuZAooWEVOKSAgICBJ
UlE6ICAyNSBhZmZpbml0eTowMDAxIHZlYzozMCB0eXBlPURNQV9NU0kgICAgICAgICBzdGF0dXM9
MDAwMDAwMDAgbWFwcGVkLCB1bmJvdW5kCihYRU4pICAgIElSUTogIDI2IGFmZmluaXR5OjAwMDEg
dmVjOjY5IHR5cGU9UENJLU1TSSAgICAgICAgIHN0YXR1cz0wMDAwMDAxMCBpbi1mbGlnaHQ9MCBk
b21haW4tbGlzdD0wOjI3OSgtUy0tKSwKKFhFTikgICAgSVJROiAgMjcgYWZmaW5pdHk6MDAwMSB2
ZWM6MjkgdHlwZT1QQ0ktTVNJICAgICAgICAgc3RhdHVzPTAwMDAwMDEwIGluLWZsaWdodD0wIGRv
bWFpbi1saXN0PTA6Mjc4KC1TLS0pLAooWEVOKSAgICBJUlE6ICAyOCBhZmZpbml0eTowMDAxIHZl
YzozMSB0eXBlPVBDSS1NU0kgICAgICAgICBzdGF0dXM9MDAwMDAwMTAgaW4tZmxpZ2h0PTAgZG9t
YWluLWxpc3Q9MDoyNzcoLVMtLSksCihYRU4pICAgIElSUTogIDI5IGFmZmluaXR5OjAwMDEgdmVj
OjcxIHR5cGU9UENJLU1TSSAgICAgICAgIHN0YXR1cz0wMDAwMDAxMCBpbi1mbGlnaHQ9MSBkb21h
aW4tbGlzdD0wOjI3NihQUy1NKSwKKFhFTikgICAgSVJROiAgMzAgYWZmaW5pdHk6MDAwMSB2ZWM6
ODkgdHlwZT1QQ0ktTVNJICAgICAgICAgc3RhdHVzPTAwMDAwMDEwIGluLWZsaWdodD0wIGRvbWFp
bi1saXN0PTA6Mjc1KFBTLS0pLAooWEVOKSBJTy1BUElDIGludGVycnVwdCBpbmZvcm1hdGlvbjoK
KFhFTikgICAgIElSUSAgMCBWZWMyNDA6CihYRU4pICAgICAgIEFwaWMgMHgwMCwgUGluICAyOiB2
ZWM9ZjAgZGVsaXZlcnk9TG9QcmkgZGVzdD1MIHN0YXR1cz0wIHBvbGFyaXR5PTAgaXJyPTAgdHJp
Zz1FIG1hc2s9MCBkZXN0X2lkOjAKKFhFTikgICAgIElSUSAgMSBWZWMgNTY6CihYRU4pICAgICAg
IEFwaWMgMHgwMCwgUGluICAxOiB2ZWM9MzggZGVsaXZlcnk9TG9QcmkgZGVzdD1MIHN0YXR1cz0w
IHBvbGFyaXR5PTAgaXJyPTAgdHJpZz1FIG1hc2s9MCBkZXN0X2lkOjAKKFhFTikgICAgIElSUSAg
MyBWZWMgNjQ6CihYRU4pICAgICAgIEFwaWMgMHgwMCwgUGluICAzOiB2ZWM9NDAgZGVsaXZlcnk9
TG9QcmkgZGVzdD1MIHN0YXR1cz0wIHBvbGFyaXR5PTAgaXJyPTAgdHJpZz1FIG1hc2s9MCBkZXN0
X2lkOjAKKFhFTikgICAgIElSUSAgNCBWZWMgNzI6CihYRU4pICAgICAgIEFwaWMgMHgwMCwgUGlu
ICA0OiB2ZWM9NDggZGVsaXZlcnk9TG9QcmkgZGVzdD1MIHN0YXR1cz0wIHBvbGFyaXR5PTAgaXJy
PTAgdHJpZz1FIG1hc2s9MCBkZXN0X2lkOjAKKFhFTikgICAgIElSUSAgNSBWZWMgODA6CihYRU4p
ICAgICAgIEFwaWMgMHgwMCwgUGluICA1OiB2ZWM9NTAgZGVsaXZlcnk9TG9QcmkgZGVzdD1MIHN0
YXR1cz0wIHBvbGFyaXR5PTAgaXJyPTAgdHJpZz1FIG1hc2s9MCBkZXN0X2lkOjAKKFhFTikgICAg
IElSUSAgNiBWZWMgODg6CihYRU4pICAgICAgIEFwaWMgMHgwMCwgUGluICA2OiB2ZWM9NTggZGVs
aXZlcnk9TG9QcmkgZGVzdD1MIHN0YXR1cz0wIHBvbGFyaXR5PTAgaXJyPTAgdHJpZz1FIG1hc2s9
MCBkZXN0X2lkOjAKKFhFTikgICAgIElSUSAgNyBWZWMgOTY6CihYRU4pICAgICAgIEFwaWMgMHgw
MCwgUGluICA3OiB2ZWM9NjAgZGVsaXZlcnk9TG9QcmkgZGVzdD1MIHN0YXR1cz0wIHBvbGFyaXR5
PTAgaXJyPTAgdHJpZz1FIG1hc2s9MCBkZXN0X2lkOjAKKFhFTikgICAgIElSUSAgOCBWZWMxMDQ6
CihYRU4pICAgICAgIEFwaWMgMHgwMCwgUGluICA4OiB2ZWM9NjggZGVsaXZlcnk9TG9QcmkgZGVz
dD1MIHN0YXR1cz0wIHBvbGFyaXR5PTAgaXJyPTAgdHJpZz1FIG1hc2s9MCBkZXN0X2lkOjAKKFhF
TikgICAgIElSUSAgOSBWZWMxMTI6CihYRU4pICAgICAgIEFwaWMgMHgwMCwgUGluICA5OiB2ZWM9
NzAgZGVsaXZlcnk9TG9QcmkgZGVzdD1MIHN0YXR1cz0wIHBvbGFyaXR5PTAgaXJyPTAgdHJpZz1M
IG1hc2s9MCBkZXN0X2lkOjAKKFhFTikgICAgIElSUSAxMCBWZWMxMjA6CihYRU4pICAgICAgIEFw
aWMgMHgwMCwgUGluIDEwOiB2ZWM9NzggZGVsaXZlcnk9TG9QcmkgZGVzdD1MIHN0YXR1cz0wIHBv
bGFyaXR5PTAgaXJyPTAgdHJpZz1FIG1hc2s9MCBkZXN0X2lkOjAKKFhFTikgICAgIElSUSAxMSBW
ZWMxMzY6CihYRU4pICAgICAgIEFwaWMgMHgwMCwgUGluIDExOiB2ZWM9ODggZGVsaXZlcnk9TG9Q
cmkgZGVzdD1MIHN0YXR1cz0wIHBvbGFyaXR5PTAgaXJyPTAgdHJpZz1FIG1hc2s9MCBkZXN0X2lk
OjAKKFhFTikgICAgIElSUSAxMiBWZWMxNDQ6CihYRU4pICAgICAgIEFwaWMgMHgwMCwgUGluIDEy
OiB2ZWM9OTAgZGVsaXZlcnk9TG9QcmkgZGVzdD1MIHN0YXR1cz0wIHBvbGFyaXR5PTAgaXJyPTAg
dHJpZz1FIG1hc2s9MCBkZXN0X2lkOjAKKFhFTikgICAgIElSUSAxMyBWZWMxNTI6CihYRU4pICAg
ICAgIEFwaWMgMHgwMCwgUGluIDEzOiB2ZWM9OTggZGVsaXZlcnk9TG9QcmkgZGVzdD1MIHN0YXR1
cz0wIHBvbGFyaXR5PTAgaXJyPTAgdHJpZz1FIG1hc2s9MSBkZXN0X2lkOjAKKFhFTikgICAgIElS
USAxNCBWZWMxNjA6CihYRU4pICAgICAgIEFwaWMgMHgwMCwgUGluIDE0OiB2ZWM9YTAgZGVsaXZl
cnk9TG9QcmkgZGVzdD1MIHN0YXR1cz0wIHBvbGFyaXR5PTAgaXJyPTAgdHJpZz1FIG1hc2s9MCBk
ZXN0X2lkOjAKKFhFTikgICAgIElSUSAxNSBWZWMxNjg6CihYRU4pICAgICAgIEFwaWMgMHgwMCwg
UGluIDE1OiB2ZWM9YTggZGVsaXZlcnk9TG9QcmkgZGVzdD1MIHN0YXR1cz0wIHBvbGFyaXR5PTAg
aXJyPTAgdHJpZz1FIG1hc2s9MCBkZXN0X2lkOjAKKFhFTikgICAgIElSUSAxNiBWZWMxNzY6CihY
RU4pICAgICAgIEFwaWMgMHgwMCwgUGluIDE2OiB2ZWM9YjAgZGVsaXZlcnk9TG9QcmkgZGVzdD1M
IHN0YXR1cz0wIHBvbGFyaXR5PTEgaXJyPTAgdHJpZz1MIG1hc2s9MCBkZXN0X2lkOjAKKFhFTikg
ICAgIElSUSAxOCBWZWMxODQ6CihYRU4pICAgICAgIEFwaWMgMHgwMCwgUGluIDE4OiB2ZWM9Yjgg
ZGVsaXZlcnk9TG9QcmkgZGVzdD1MIHN0YXR1cz0wIHBvbGFyaXR5PTEgaXJyPTAgdHJpZz1MIG1h
c2s9MSBkZXN0X2lkOjAKKFhFTikgICAgIElSUSAxOSBWZWMxOTI6CihYRU4pICAgICAgIEFwaWMg
MHgwMCwgUGluIDE5OiB2ZWM9YzAgZGVsaXZlcnk9TG9QcmkgZGVzdD1MIHN0YXR1cz0wIHBvbGFy
aXR5PTEgaXJyPTAgdHJpZz1MIG1hc2s9MSBkZXN0X2lkOjAKKFhFTikgICAgIElSUSAyMCBWZWMy
MTY6CihYRU4pICAgICAgIEFwaWMgMHgwMCwgUGluIDIwOiB2ZWM9ZDggZGVsaXZlcnk9TG9Qcmkg
ZGVzdD1MIHN0YXR1cz0wIHBvbGFyaXR5PTEgaXJyPTAgdHJpZz1MIG1hc2s9MSBkZXN0X2lkOjAK
KFhFTikgICAgIElSUSAyMiBWZWMyMDg6CihYRU4pICAgICAgIEFwaWMgMHgwMCwgUGluIDIyOiB2
ZWM9ZDAgZGVsaXZlcnk9TG9QcmkgZGVzdD1MIHN0YXR1cz0wIHBvbGFyaXR5PTEgaXJyPTAgdHJp
Zz1MIG1hc2s9MSBkZXN0X2lkOjAKKFhFTikgICAgIElSUSAyMyBWZWMyMDA6CihYRU4pICAgICAg
IEFwaWMgMHgwMCwgUGluIDIzOiB2ZWM9YzggZGVsaXZlcnk9TG9QcmkgZGVzdD1MIHN0YXR1cz0w
IHBvbGFyaXR5PTEgaXJyPTAgdHJpZz1MIG1hc2s9MCBkZXN0X2lkOjAKKFhFTikgW206IG1lbW9y
eSBpbmZvXQooWEVOKSBQaHlzaWNhbCBtZW1vcnkgaW5mb3JtYXRpb246CihYRU4pICAgICBYZW4g
aGVhcDogMGtCIGZyZWUKKFhFTikgICAgIGhlYXBbMTRdOiA2NDUxMmtCIGZyZWUKKFhFTikgICAg
IGhlYXBbMTVdOiAxMzEwNzJrQiBmcmVlCihYRU4pICAgICBoZWFwWzE2XTogMjYyMTQ0a0IgZnJl
ZQooWEVOKSAgICAgaGVhcFsxN106IDUyMjIzNmtCIGZyZWUKKFhFTikgICAgIGhlYXBbMThdOiAx
MDQ4NTcya0IgZnJlZQooWEVOKSAgICAgaGVhcFsxOV06IDY5MTM0OGtCIGZyZWUKKFhFTikgICAg
IGhlYXBbMjBdOiA1MzY5MDBrQiBmcmVlCihYRU4pICAgICBEb20gaGVhcDogMzI1Njc4NGtCIGZy
ZWUKKFhFTikgW246IE5NSSBzdGF0aXN0aWNzXQooWEVOKSBDUFUJTk1JCihYRU4pICAgMAkgIDAK
KFhFTikgICAxCSAgMAooWEVOKSAgIDIJICAwCihYRU4pICAgMwkgIDAKKFhFTikgZG9tMCB2Y3B1
MDogTk1JIG5laXRoZXIgcGVuZGluZyBub3IgbWFza2VkCihYRU4pIFtxOiBkdW1wIGRvbWFpbiAo
YW5kIGd1ZXN0IGRlYnVnKSBpbmZvXQooWEVOKSAncScgcHJlc3NlZCAtPiBkdW1waW5nIGRvbWFp
biBpbmZvIChub3c9MHg0QjozQzcwMjgyQSkKKFhFTikgR2VuZXJhbCBpbmZvcm1hdGlvbiBmb3Ig
ZG9tYWluIDA6CihYRU4pICAgICByZWZjbnQ9MyBkeWluZz0wIHBhdXNlX2NvdW50PTAKKFhFTikg
ICAgIG5yX3BhZ2VzPTE4NzUzOSB4ZW5oZWFwX3BhZ2VzPTYgc2hhcmVkX3BhZ2VzPTAgcGFnZWRf
cGFnZXM9MCBkaXJ0eV9jcHVzPXsxLTN9IG1heF9wYWdlcz0xODgxNDcKKFhFTikgICAgIGhhbmRs
ZT0wMDAwMDAwMC0wMDAwLTAwMDAtMDAwMC0wMDAwMDAwMDAwMDAgdm1fYXNzaXN0PTAwMDAwMDBk
CihYRU4pIFJhbmdlc2V0cyBiZWxvbmdpbmcgdG8gZG9tYWluIDA6CihYRU4pICAgICBJL08gUG9y
dHMgIHsgMC0xZiwgMjItM2YsIDQ0LTYwLCA2Mi05ZiwgYTItNDA3LCA0MGMtY2ZiLCBkMDAtMjA0
ZiwgMjA1OC1mZmZmIH0KKFhFTikgICAgIEludGVycnVwdHMgeyAwLTI3OSB9CihYRU4pICAgICBJ
L08gTWVtb3J5IHsgMC1mZWJmZiwgZmVjMDEtZmVkZmYsIGZlZTAxLWZmZmZmZmZmZmZmZmZmZmYg
fQooWEVOKSBNZW1vcnkgcGFnZXMgYmVsb25naW5nIHRvIGRvbWFpbiAwOgooWEVOKSAgICAgRG9t
UGFnZSBsaXN0IHRvbyBsb25nIHRvIGRpc3BsYXkKKFhFTikgICAgIFhlblBhZ2UgMDAwMDAwMDAw
MDE0ODkxNzogY2FmPWMwMDAwMDAwMDAwMDAwMDIsIHRhZj03NDAwMDAwMDAwMDAwMDAyCihYRU4p
ICAgICBYZW5QYWdlIDAwMDAwMDAwMDAxNDg5MTY6IGNhZj1jMDAwMDAwMDAwMDAwMDAxLCB0YWY9
NzQwMDAwMDAwMDAwMDAwMQooWEVOKSAgICAgWGVuUGFnZSAwMDAwMDAwMDAwMTQ4OTE1OiBjYWY9
YzAwMDAwMDAwMDAwMDAwMSwgdGFmPTc0MDAwMDAwMDAwMDAwMDEKKFhFTikgICAgIFhlblBhZ2Ug
MDAwMDAwMDAwMDE0ODkxNDogY2FmPWMwMDAwMDAwMDAwMDAwMDEsIHRhZj03NDAwMDAwMDAwMDAw
MDAxCihYRU4pICAgICBYZW5QYWdlIDAwMDAwMDAwMDAwYWEwZmQ6IGNhZj1jMDAwMDAwMDAwMDAw
MDAyLCB0YWY9NzQwMDAwMDAwMDAwMDAwMgooWEVOKSAgICAgWGVuUGFnZSAwMDAwMDAwMDAwMTNm
NDI4OiBjYWY9YzAwMDAwMDAwMDAwMDAwMiwgdGFmPTc0MDAwMDAwMDAwMDAwMDIKKFhFTikgVkNQ
VSBpbmZvcm1hdGlvbiBhbmQgY2FsbGJhY2tzIGZvciBkb21haW4gMDoKKFhFTikgICAgIFZDUFUw
OiBDUFUwIFtoYXM9Rl0gcG9sbD0wIHVwY2FsbF9wZW5kID0gMDEsIHVwY2FsbF9tYXNrID0gMDAg
ZGlydHlfY3B1cz17fSBjcHVfYWZmaW5pdHk9ezB9CihYRU4pICAgICBwYXVzZV9jb3VudD0wIHBh
dXNlX2ZsYWdzPTAKKFhFTikgICAgIE5vIHBlcmlvZGljIHRpbWVyCihYRU4pICAgICBWQ1BVMTog
Q1BVMiBbaGFzPUZdIHBvbGw9MCB1cGNhbGxfcGVuZCA9IDAwLCB1cGNhbGxfbWFzayA9IDAwIGRp
cnR5X2NwdXM9ezJ9IGNwdV9hZmZpbml0eT17MC0xNX0KKFhFTikgICAgIHBhdXNlX2NvdW50PTAg
cGF1c2VfZmxhZ3M9MQooWEVOKSAgICAgTm8gcGVyaW9kaWMgdGltZXIKKFhFTikgICAgIFZDUFUy
OiBDUFUzIFtoYXM9Rl0gcG9sbD0wIHVwY2FsbF9wZW5kID0gMDAsIHVwY2FsbF9tYXNrID0gMDAg
ZGlydHlfY3B1cz17M30gY3B1X2FmZmluaXR5PXswLTE1fQooWEVOKSAgICAgcGF1c2VfY291bnQ9
MCBwYXVzZV9mbGFncz0xCihYRU4pICAgICBObyBwZXJpb2RpYyB0aW1lcgooWEVOKSAgICAgVkNQ
VTM6IENQVTEgW2hhcz1GXSBwb2xsPTAgdXBjYWxsX3BlbmQgPSAwMCwgdXBjYWxsX21hc2sgPSAw
MCBkaXJ0eV9jcHVzPXsxfSBjcHVfYWZmaW5pdHk9ezAtMTV9CihYRU4pICAgICBwYXVzZV9jb3Vu
dD0wIHBhdXNlX2ZsYWdzPTEKKFhFTikgICAgIE5vIHBlcmlvZGljIHRpbWVyCihYRU4pIE5vdGlm
eWluZyBndWVzdCAwOjAgKHZpcnEgMSwgcG9ydCA1LCBzdGF0IDAvMC8tMSkKKFhFTikgTm90aWZ5
aW5nIGd1ZXN0IDA6MSAodmlycSAxLCBwb3J0IDExLCBzdGF0IDAvMC8wKQooWEVOKSBOb3RpZnlp
bmcgZ3Vlc3QgMDoyICh2aXJxIDEsIHBvcnQgMTcsIHN0YXQgMC8wLzApCihYRU4pIE5vdGlmeWlu
ZyBndWVzdCAwOjMgKHZpcnEgMSwgcG9ydCAyMywgc3RhdCAwLzAvMCkKCihYRU4pIFNoYXJlZCBm
cmFtZXMgMCAtLSBTYXZlZCBmcmFtZXMgMApbICAzMjMuMzE0NzI2XSB2KFhFTikgW3I6IGR1bXAg
cnVuIHF1ZXVlc10KY3B1IDEKKFhFTikgc2NoZWRfc210X3Bvd2VyX3NhdmluZ3M6IGRpc2FibGVk
CihYRU4pIE5PVz0weDAwMDAwMDRCNDgzNzUwMUYKKFhFTikgSWRsZSBjcHVwb29sOgpbICAzMjMu
MzE0NzI2XSAgKFhFTikgU2NoZWR1bGVyOiBTTVAgQ3JlZGl0IFNjaGVkdWxlciAoY3JlZGl0KQog
KFhFTikgaW5mbzoKKFhFTikgCW5jcHVzICAgICAgICAgICAgICA9IDQKKFhFTikgCW1hc3RlciAg
ICAgICAgICAgICA9IDAKKFhFTikgCWNyZWRpdCAgICAgICAgICAgICA9IDQwMAooWEVOKSAJY3Jl
ZGl0IGJhbGFuY2UgICAgID0gLTYxCihYRU4pIAl3ZWlnaHQgICAgICAgICAgICAgPSAyNTYKKFhF
TikgCXJ1bnFfc29ydCAgICAgICAgICA9IDIyNTIKKFhFTikgCWRlZmF1bHQtd2VpZ2h0ICAgICA9
IDI1NgooWEVOKSAJdHNsaWNlICAgICAgICAgICAgID0gMTBtcwooWEVOKSAJcmF0ZWxpbWl0ICAg
ICAgICAgID0gMTAwMHVzCihYRU4pIAljcmVkaXRzIHBlciBtc2VjICAgPSAxMAooWEVOKSAJdGlj
a3MgcGVyIHRzbGljZSAgID0gMQooWEVOKSAJbWlncmF0aW9uIGRlbGF5ICAgID0gMHVzCjA6IG1h
c2tlZD0wIHBlbmQoWEVOKSBpZGxlcnM6IDAwMGEKKFhFTikgYWN0aXZlIHZjcHVzOgooWEVOKSAJ
ICAxOiBpbmc9MSBldmVudF9zZWwgWzAuMV0gcHJpPS0yIGZsYWdzPTAgY3B1PTIgY3JlZGl0PS02
NjMgW3c9MjU2XQowMDAwMDAwMDAwMDAwMDAxKFhFTikgQ3B1cG9vbCAwOgooWEVOKSBTY2hlZHVs
ZXI6IFNNUCBDcmVkaXQgU2NoZWR1bGVyIChjcmVkaXQpCgooWEVOKSBpbmZvOgooWEVOKSAJbmNw
dXMgICAgICAgICAgICAgID0gNAooWEVOKSAJbWFzdGVyICAgICAgICAgICAgID0gMAooWEVOKSAJ
Y3JlZGl0ICAgICAgICAgICAgID0gNDAwCihYRU4pIAljcmVkaXQgYmFsYW5jZSAgICAgPSAtNjEK
KFhFTikgCXdlaWdodCAgICAgICAgICAgICA9IDI1NgooWEVOKSAJcnVucV9zb3J0ICAgICAgICAg
ID0gMjI1MgooWEVOKSAJZGVmYXVsdC13ZWlnaHQgICAgID0gMjU2CihYRU4pIAl0c2xpY2UgICAg
ICAgICAgICAgPSAxMG1zCihYRU4pIAlyYXRlbGltaXQgICAgICAgICAgPSAxMDAwdXMKKFhFTikg
CWNyZWRpdHMgcGVyIG1zZWMgICA9IDEwCihYRU4pIAl0aWNrcyBwZXIgdHNsaWNlICAgPSAxCihY
RU4pIAltaWdyYXRpb24gZGVsYXkgICAgPSAwdXMKWyAgMzIzLjM0ODkwN10gIChYRU4pIGlkbGVy
czogMDAwYQooWEVOKSBhY3RpdmUgdmNwdXM6CihYRU4pIAkgIDE6IFswLjFdIHByaT0tMiBmbGFn
cz0wIGNwdT0yIGNyZWRpdD0tMTIyMiBbdz0yNTZdCiAoWEVOKSBDUFVbMDBdIDE6IG1hc2tlZD0w
IHBlbmQgc29ydD0yMjUyLCBzaWJsaW5nPTAwMDEsIGluZz0wIGV2ZW50X3NlbCBjb3JlPTAwMGYK
MDAwMDAwMDAwMDAwMDAwMChYRU4pIAlydW46IFszMjc2Ny4wXSBwcmk9MCBmbGFncz0wIGNwdT0w
CihYRU4pIAkgIDE6IFswLjBdIHByaT0wIGZsYWdzPTAgY3B1PTAgY3JlZGl0PTc2IFt3PTI1Nl0K
CihYRU4pIENQVVswMV0gWyAgMzIzLjQ1MTkxM10gICBzb3J0PTIyNTIsIHNpYmxpbmc9MDAwMiwg
IGNvcmU9MDAwZgooWEVOKSAJcnVuOiBbMzI3NjcuMV0gcHJpPS02NCBmbGFncz0wIGNwdT0xCjI6
IG1hc2tlZD0xIHBlbmQoWEVOKSBDUFVbMDJdICBzb3J0PTIyNTIsIHNpYmxpbmc9MDAwNCwgY29y
ZT0wMDBmCihYRU4pIAlydW46IFswLjFdIHByaT0tMiBmbGFncz0wIGNwdT0yIGNyZWRpdD0tMTYx
MSBbdz0yNTZdCihYRU4pIAkgIDE6IFszMjc2Ny4yXSBwcmk9LTY0IGZsYWdzPTAgY3B1PTIKKFhF
TikgQ1BVWzAzXSBpbmc9MSBldmVudF9zZWwgIHNvcnQ9MjI1Miwgc2libGluZz0wMDA4LCAwMDAw
MDAwMDAwMDAwMDAxY29yZT0wMDBmCihYRU4pIAlydW46IApbMzI3NjcuM10gcHJpPS02NCBmbGFn
cz0wIGNwdT0zClsgIDMyMy40NzUwOTJdICAoWEVOKSBbczogZHVtcCBzb2Z0dHNjIHN0YXRzXQog
KFhFTikgVFNDIG1hcmtlZCBhcyByZWxpYWJsZSwgd2FycCA9IDI0OTk0MDE1NzY4NSAoY291bnQ9
MykKMzogbWFza2VkPTEgcGVuZChYRU4pIE5vIGRvbWFpbnMgaGF2ZSBlbXVsYXRlZCBUU0MKaW5n
PTAgZXZlbnRfc2VsIChYRU4pIFt0OiBkaXNwbGF5IG11bHRpLWNwdSBjbG9jayBpbmZvXQowMDAw
MDAwMDAwMDAwMDAwCihYRU4pIFN5bmNlZCBzdGltZSBza2V3OiBtYXg9ODE5OW5zIGF2Zz00MTEy
bnMgc2FtcGxlcz0yIGN1cnJlbnQ9ODE5OW5zCihYRU4pIFN5bmNlZCBjeWNsZXMgc2tldzogbWF4
PTE2NCBhdmc9MTYyIHNhbXBsZXM9MiBjdXJyZW50PTE2MApbICAzMjMuNTMxNjEwXSAgKFhFTikg
W3U6IGR1bXAgbnVtYSBpbmZvXQogKFhFTikgJ3UnIHByZXNzZWQgLT4gZHVtcGluZyBudW1hIGlu
Zm8gKG5vdy0weDRCOjU1RjMxNjA1KQoKKFhFTikgaWR4MCAtPiBOT0RFMCBzdGFydC0+MCBzaXpl
LT4xMzY5NjAwIGZyZWUtPjgxNDE5NgpbICAzMjMuNTY0NjM4XSBwKFhFTikgcGh5c190b19uaWQo
MDAwMDAwMDAwMDAwMTAwMCkgLT4gMCBzaG91bGQgYmUgMAplbmRpbmc6CihYRU4pIENQVTAgLT4g
Tk9ERTAKKFhFTikgQ1BVMSAtPiBOT0RFMAooWEVOKSBDUFUyIC0+IE5PREUwCihYRU4pIENQVTMg
LT4gTk9ERTAKWyAgMzIzLjU2NDYzOV0gIChYRU4pIE1lbW9yeSBsb2NhdGlvbiBvZiBlYWNoIGRv
bWFpbjoKKFhFTikgRG9tYWluIDAgKHRvdGFsOiAxODc1MzkpOgogIDAwMDAwMDAwMDAwMDAwMDAo
WEVOKSAgICAgTm9kZSAwOiAxODc1MzkKIChYRU4pIFt2OiBkdW1wIEludGVsJ3MgVk1DU10KMDAw
MDAwMDAwMDAwMDAwMChYRU4pICoqKioqKioqKioqIFZNQ1MgQXJlYXMgKioqKioqKioqKioqKioK
KFhFTikgKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioKIChYRU4pIFt6OiBw
cmludCBpb2FwaWMgaW5mb10KMDAwMDAwMDAwMDAwMDAwMChYRU4pIG51bWJlciBvZiBNUCBJUlEg
c291cmNlczogMTUuCihYRU4pIG51bWJlciBvZiBJTy1BUElDICMyIHJlZ2lzdGVyczogMjQuCihY
RU4pIHRlc3RpbmcgdGhlIElPIEFQSUMuLi4uLi4uLi4uLi4uLi4uLi4uLi4uLgogKFhFTikgSU8g
QVBJQyAjMi4uLi4uLgooWEVOKSAuLi4uIHJlZ2lzdGVyICMwMDogMDIwMDAwMDAKKFhFTikgLi4u
Li4uLiAgICA6IHBoeXNpY2FsIEFQSUMgaWQ6IDAyCihYRU4pIC4uLi4uLi4gICAgOiBEZWxpdmVy
eSBUeXBlOiAwCihYRU4pIC4uLi4uLi4gICAgOiBMVFMgICAgICAgICAgOiAwCjAwMDAwMDAwMDAw
MDAwMDAoWEVOKSAuLi4uIHJlZ2lzdGVyICMwMTogMDAxNzAwMjAKKFhFTikgLi4uLi4uLiAgICAg
OiBtYXggcmVkaXJlY3Rpb24gZW50cmllczogMDAxNwooWEVOKSAuLi4uLi4uICAgICA6IFBSUSBp
bXBsZW1lbnRlZDogMAooWEVOKSAuLi4uLi4uICAgICA6IElPIEFQSUMgdmVyc2lvbjogMDAyMAoo
WEVOKSAuLi4uIElSUSByZWRpcmVjdGlvbiB0YWJsZToKKFhFTikgIE5SIExvZyBQaHkgTWFzayBU
cmlnIElSUiBQb2wgU3RhdCBEZXN0IERlbGkgVmVjdDogICAKIChYRU4pICAwMCAwMDAgMDAgIDEg
ICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAwICAgIDAwCjAwMDAwMDAwMDAwMDAwMDAoWEVOKSAg
MDEgMDAwIDAwICAgMCAgICAwICAgIDAgICAwICAgMCAgICAxICAgIDEgICAgMzgKMDAwMDAwMDAw
MDAwMDAwMChYRU4pICAwMiAwMDAgMDAgIDAgICAgMCAgICAwICAgMCAgIDAgICAgMSAgICAxICAg
IEYwCiAoWEVOKSAgMDMgMDAwIDAwICAwICAgIDAgICAgMCAgIDAgICAwICAgIDEgICAgMSAgICA0
MAowMDAwMDAwMDAwMDAwMDAwKFhFTikgIDA0IDAwMCAwMCAgMCAgICAwICAgIDAgICAwICAgMCAg
ICAxICAgIDEgICAgNDgKIChYRU4pICAwNSAwMDAgMDAgIDAgICAgMCAgICAwICAgMCAgIDAgICAg
MSAgICAxICAgIDUwCjAwMDAwMDAwMDAwMDAwMDAoWEVOKSAgMDYgMDAwIDAwICAwICAgIDAgICAg
MCAgIDAgICAwICAgIDEgICAgMSAgICA1OAoKKFhFTikgIDA3IDAwMCAwMCAgMCAgICAwICAgIDAg
ICAwICAgMCAgICAxICAgIDEgICAgNjAKWyAgMzIzLjcwMDEzMl0gIChYRU4pICAwOCAwMDAgMDAg
IDAgICAgMCAgICAwICAgMCAgIDAgICAgMSAgICAxICAgIDY4CiAgKFhFTikgIDA5IDAwMCAwMCAg
MCAgICAxICAgIDAgICAwICAgMCAgICAxICAgIDEgICAgNzAKMDAwMDAwMDAwMDAwMDAwMChYRU4p
ICAwYSAwMDAgMDAgIDAgICAgMCAgICAwICAgMCAgIDAgICAgMSAgICAxICAgIDc4CiAoWEVOKSAg
MGIgMDAwIDAwICAwICAgIDAgICAgMCAgIDAgICAwICAgIDEgICAgMSAgICA4OAowMDAwMDAwMDAw
MDAwMDAwKFhFTikgIDBjIDAwMCAwMCAgMCAgICAwICAgIDAgICAwICAgMCAgICAxICAgIDEgICAg
OTAKIChYRU4pICAwZCAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMSAgICAxICAgIDk4
CjAwMDAwMDAwMDAwMDAwMDAoWEVOKSAgMGUgMDAwIDAwICAwICAgIDAgICAgMCAgIDAgICAwICAg
IDEgICAgMSAgICBBMAogKFhFTikgIDBmIDAwMCAwMCAgMCAgICAwICAgIDAgICAwICAgMCAgICAx
ICAgIDEgICAgQTgKMDAwMDAwMDAwMDAwMDAwMChYRU4pICAxMCAwMDAgMDAgIDAgICAgMSAgICAw
ICAgMSAgIDAgICAgMSAgICAxICAgIEIwCiAoWEVOKSAgMTEgMDAwIDAwICAxICAgIDAgICAgMCAg
IDAgICAwICAgIDAgICAgMCAgICAwMAowMDAwMDAwMDAwMDAwMDAwKFhFTikgIDEyIDAwMCAwMCAg
MSAgICAxICAgIDAgICAxICAgMCAgICAxICAgIDEgICAgQjgKIChYRU4pICAxMyAwMDAgMDAgIDEg
ICAgMSAgICAwICAgMSAgIDAgICAgMSAgICAxICAgIEMwCjAwMDAwMDAwMDAwMDAwMDAoWEVOKSAg
MTQgMDAwIDAwICAxICAgIDEgICAgMCAgIDEgICAwICAgIDEgICAgMSAgICBEOAogKFhFTikgIDE1
IDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAgICAwICAgIDAgICAgMDAKMDAwMDAwMDAwMDAw
MDAwMChYRU4pICAxNiAwMDAgMDAgIDEgICAgMSAgICAwICAgMSAgIDAgICAgMSAgICAxICAgIEQw
CiAoWEVOKSAgMTcgMDAwIDAwICAwICAgIDEgICAgMCAgIDEgICAwICAgIDEgICAgMSAgICBDOAow
MDAwMDAwMDAwMDAwMDAwKFhFTikgVXNpbmcgdmVjdG9yLWJhc2VkIGluZGV4aW5nCihYRU4pIElS
USB0byBwaW4gbWFwcGluZ3M6CgooWEVOKSBJUlEyNDAgLT4gMDoyCihYRU4pIElSUTU2IC0+IDA6
MQooWEVOKSBJUlE2NCAtPiAwOjMKWyAgMzIzLjgwMjY5MV0gIChYRU4pIElSUTcyIC0+IDA6NAoo
WEVOKSBJUlE4MCAtPiAwOjUKKFhFTikgSVJRODggLT4gMDo2CihYRU4pIElSUTk2IC0+IDA6Nwoo
WEVOKSBJUlExMDQgLT4gMDo4CihYRU4pIElSUTExMiAtPiAwOjkKKFhFTikgSVJRMTIwIC0+IDA6
MTAKKFhFTikgSVJRMTM2IC0+IDA6MTEKKFhFTikgSVJRMTQ0IC0+IDA6MTIKKFhFTikgSVJRMTUy
IC0+IDA6MTMKKFhFTikgSVJRMTYwIC0+IDA6MTQKKFhFTikgSVJRMTY4IC0+IDA6MTUKKFhFTikg
SVJRMTc2IC0+IDA6MTYKKFhFTikgSVJRMTg0IC0+IDA6MTgKKFhFTikgSVJRMTkyIC0+IDA6MTkK
KFhFTikgSVJRMjE2IC0+IDA6MjAKICAoWEVOKSBJUlEyMDggLT4gMDoyMgooWEVOKSBJUlEyMDAg
LT4gMDoyMwooWEVOKSAuLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4gZG9uZS4K
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMyMy44NzE2OTFdICAgIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMApbICAzMjMuODg1NjUxXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzIzLjg5OTYx
Ml0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMyMy45MTM1NzNdICAgIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMApbICAzMjMuOTI3NTM0XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDI0MDAwMDIwMDIKWyAgMzIz
Ljk0MTQ5Nl0gICAgClsgIDMyMy45NDQ4MDddIGdsb2JhbCBtYXNrOgpbICAzMjMuOTQ0ODA3XSAg
ICBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZm
ZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYKWyAgMzIzLjk2MDAyMF0gICAgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZm
ZmZmZmZmClsgIDMyMy45NzM5ODJdICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZgpbICAzMjMuOTg3
OTQyXSAgICBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYKWyAgMzI0LjAwMTkwNF0gICAgZmZmZmZmZmZm
ZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmClsgIDMyNC4wMTU4NjVdICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZm
ZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZgpbICAz
MjQuMDI5ODI2XSAgICBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZm
ZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYKWyAgMzI0LjA0Mzc4Nl0gICAgZmZm
ZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZm
ZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZjMDAwMTA0MTA1ClsgIDMyNC4wNTc3NDddICAgIApbICAzMjQuMDYxMDU4XSBnbG9i
YWxseSB1bm1hc2tlZDoKWyAgMzI0LjA2MTA1OV0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsg
IDMyNC4wNzY4MDldICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMjQuMDkwNzcwXSAgICAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzI0LjEwNDczMF0gICAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwClsgIDMyNC4xMTg2OTFdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMjQuMTMyNjU1
XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzI0LjE0NjYxNV0gICAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwClsgIDMyNC4xNjA1NzVdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMjQwMDAwMjA4MgpbICAzMjQu
MTc0NTM1XSAgICAKWyAgMzI0LjE3Nzg0N10gbG9jYWwgY3B1MSBtYXNrOgpbICAzMjQuMTc3ODQ3
XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzI0LjE5MzQxOV0gICAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwClsgIDMyNC4yMDczODFdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMjQu
MjIxMzQxXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzI0LjIzNTMwMl0gICAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwClsgIDMyNC4yNDkyNjJdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApb
ICAzMjQuMjYzMjIzXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzI0LjI3NzE4NF0gICAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAxZjgwClsgIDMyNC4yOTExNDVdICAgIApbICAzMjQuMjk0NDU3XSBs
b2NhbGx5IHVubWFza2VkOgpbICAzMjQuMjk0NDU4XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAK
WyAgMzI0LjMxMDExOF0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMyNC4zMjQwODBdICAg
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMjQuMzM4MDQxXSAgICAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAKWyAgMzI0LjM1MjAwMV0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMyNC4zNjU5
NjJdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMjQuMzc5OTI0XSAgICAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAKWyAgMzI0LjM5Mzg4NF0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDgwClsgIDMy
NC40MDc4NDVdICAgIApbICAzMjQuNDExMTU2XSBwZW5kaW5nIGxpc3Q6ClsgIDMyNC40MTQyMDBd
ICAgMDogZXZlbnQgMSAtPiBpcnEgMjcyIGxvY2FsbHktbWFza2VkClsgIDMyNC40MTkyMTFdICAg
MTogZXZlbnQgNyAtPiBpcnEgMjc4ClsgIDMyNC40MjI4ODFdICAgMjogZXZlbnQgMTMgLT4gaXJx
IDI4NCBsb2NhbGx5LW1hc2tlZApbICAzMjQuNDI3OTgyXSAgIDM6IGV2ZW50IDE5IC0+IGlycSAy
OTAgbG9jYWxseS1tYXNrZWQKWyAgMzI0LjQzMzA4M10gICAwOiBldmVudCAzNCAtPiBpcnEgMzAx
IGxvY2FsbHktbWFza2VkClsgIDMyNC40MzgxODRdICAgMDogZXZlbnQgMzcgLT4gaXJxIDMwMiBs
b2NhbGx5LW1hc2tlZApbICAzMjQuNDQzMzA0XSAKWyAgMzI0LjQ0MzMwNl0gdmNwdSAwClsgIDMy
NC40NDMzMDddICAgMDogbWFza2VkPTAgcGVuZGluZz0wIGV2ZW50X3NlbCAwMDAwMDAwMDAwMDAw
MDAwClsgIDMyNC40NDg1NThdICAgMTogbWFza2VkPTAgcGVuZGluZz0wIGV2ZW50X3NlbCAwMDAw
MDAwMDAwMDAwMDAwClsgIDMyNC40NTQ2NDNdICAgMjogbWFza2VkPTEgcGVuZGluZz0xIGV2ZW50
X3NlbCAwMDAwMDAwMDAwMDAwMDAxClsgIDMyNC40NjA3MjldICAgMzogbWFza2VkPTEgcGVuZGlu
Zz0xIGV2ZW50X3NlbCAwMDAwMDAwMDAwMDAwMDAxClsgIDMyNC40NjY4MTRdICAgClsgIDMyNC40
NzI4OTldIHBlbmRpbmc6ClsgIDMyNC40NzI4OTldICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApb
ICAzMjQuNDg3NzU1XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzI0LjUwMTcxNl0gICAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMyNC41MTU2NzZdICAgIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMApbICAzMjQuNTI5NjM4XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzI0LjU0MzU5
OF0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMyNC41NTc1NjBdICAgIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMApbICAzMjQuNTcxNTIxXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDI0MDAyOGEwMDYKWyAgMzI0
LjU4NTQ4Ml0gICAgClsgIDMyNC41ODg3OTNdIGdsb2JhbCBtYXNrOgpbICAzMjQuNTg4NzkzXSAg
ICBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZm
ZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYKWyAgMzI0LjYwNDAwN10gICAgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZm
ZmZmZmZmClsgIDMyNC42MTc5NjhdICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZgpbICAzMjQuNjMx
OTI5XSAgICBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYKWyAgMzI0LjY0NTg5MF0gICAgZmZmZmZmZmZm
ZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmClsgIDMyNC42NTk4NTFdICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZm
ZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZgpbICAz
MjQuNjczODEyXSAgICBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZm
ZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYKWyAgMzI0LjY4Nzc3Ml0gICAgZmZm
ZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZm
ZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZjMDAwMTA0MTA1ClsgIDMyNC43MDE3MzRdICAgIApbICAzMjQuNzA1MDQ1XSBnbG9i
YWxseSB1bm1hc2tlZDoKWyAgMzI0LjcwNTA0Nl0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsg
IDMyNC43MjA3OTZdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMjQuNzM0NzU2XSAgICAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzI0Ljc0ODcxOF0gICAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwClsgIDMyNC43NjI2NzldICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMjQuNzc2NjQw
XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzI0Ljc5MDYwMF0gICAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwClsgIDMyNC44MDQ1NjJdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMjQwMDI4YTAwMgpbICAzMjQu
ODE4NTIyXSAgICAKWyAgMzI0LjgyMTgzNF0gbG9jYWwgY3B1MCBtYXNrOgpbICAzMjQuODIxODM0
XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzI0LjgzNzQwNl0gICAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwClsgIDMyNC44NTEzNjddICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMjQu
ODY1MzI3XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzI0Ljg3OTI4OV0gICAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwClsgIDMyNC44OTMyNDldICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApb
ICAzMjQuOTA3MjEwXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzI0LjkyMTE3Ml0gICAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCBmZmZmZmZmZmZlMDAwMDdmClsgIDMyNC45MzUxMzJdICAgIApbICAzMjQuOTM4NDQzXSBs
b2NhbGx5IHVubWFza2VkOgpbICAzMjQuOTM4NDQ0XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAK
WyAgMzI0Ljk1NDEwNV0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMyNC45NjgwNjZdICAg
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMjQuOTgyMDI2XSAgICAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAKWyAgMzI0Ljk5NTk4OF0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMyNS4wMDk5
NDldICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMjUuMDIzOTA5XSAgICAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAKWyAgMzI1LjAzNzg3MV0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAyNDAwMDAwMDAyClsgIDMy
NS4wNTE4MzFdICAgIApbICAzMjUuMDU1MTQzXSBwZW5kaW5nIGxpc3Q6ClsgIDMyNS4wNTgxOTFd
ICAgMDogZXZlbnQgMSAtPiBpcnEgMjcyIGwyLWNsZWFyClsgIDMyNS4wNjI2NjFdICAgMDogZXZl
bnQgMiAtPiBpcnEgMjczIGwyLWNsZWFyIGdsb2JhbGx5LW1hc2tlZApbICAzMjUuMDY4NTY4XSAg
IDI6IGV2ZW50IDEzIC0+IGlycSAyODQgbDItY2xlYXIgbG9jYWxseS1tYXNrZWQKWyAgMzI1LjA3
NDQ3NF0gICAyOiBldmVudCAxNSAtPiBpcnEgMjg2IGwyLWNsZWFyIGxvY2FsbHktbWFza2VkClsg
IDMyNS4wODAzODBdICAgMzogZXZlbnQgMTkgLT4gaXJxIDI5MCBsMi1jbGVhciBsb2NhbGx5LW1h
c2tlZApbICAzMjUuMDg2Mjg3XSAgIDM6IGV2ZW50IDIxIC0+IGlycSAyOTIgbDItY2xlYXIgbG9j
YWxseS1tYXNrZWQKWyAgMzI1LjA5MjE5NF0gICAwOiBldmVudCAzNCAtPiBpcnEgMzAxIGwyLWNs
ZWFyClsgIDMyNS4wOTY3NTddICAgMDogZXZlbnQgMzcgLT4gaXJxIDMwMiBsMi1jbGVhcgpbICAz
MjUuMTAxMzUwXSAKWyAgMzI1LjEwMTM1MV0gdmNwdSAyClsgIDMyNS4xMDEzNTJdICAgMDogbWFz
a2VkPTAgcGVuZGluZz0wIGV2ZW50X3NlbCAwMDAwMDAwMDAwMDAwMDAwClsgIDMyNS4xMDY2MTFd
ICAgMTogbWFza2VkPTAgcGVuZGluZz0wIGV2ZW50X3NlbCAwMDAwMDAwMDAwMDAwMDAwClsgIDMy
NS4xMTI2OTZdICAgMjogbWFza2VkPTAgcGVuZGluZz0xIGV2ZW50X3NlbCAwMDAwMDAwMDAwMDAw
MDAxClsgIDMyNS4xMTg3ODJdICAgMzogbWFza2VkPTEgcGVuZGluZz0xIGV2ZW50X3NlbCAwMDAw
MDAwMDAwMDAwMDAxClsgIDMyNS4xMjQ4NjddICAgClsgIDMyNS4xMzA5NTFdIHBlbmRpbmc6Clsg
IDMyNS4xMzA5NTJdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMjUuMTQ1ODA4XSAgICAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzI1LjE1OTc2OV0gICAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwClsgIDMyNS4xNzM3MjldICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMjUuMTg3Njkx
XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzI1LjIwMTY1Ml0gICAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwClsgIDMyNS4yMTU2MTJdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMjUu
MjI5NTc0XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAyOGUwMDQKWyAgMzI1LjI0MzUzNV0gICAgClsgIDMy
NS4yNDY4NDVdIGdsb2JhbCBtYXNrOgpbICAzMjUuMjQ2ODQ2XSAgICBmZmZmZmZmZmZmZmZmZmZm
IGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZm
ZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZm
ZmZmZmYKWyAgMzI1LjI2MjA1OV0gICAgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZm
IGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZm
ZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmClsgIDMyNS4yNzYw
MjFdICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZm
IGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZm
ZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZgpbICAzMjUuMjg5OTgyXSAgICBmZmZmZmZmZmZm
ZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZm
IGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZm
ZmZmZmZmZmZmZmYKWyAgMzI1LjMwMzk0Ml0gICAgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZm
ZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZm
IGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmClsgIDMy
NS4zMTc5MDNdICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZm
ZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZm
IGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZgpbICAzMjUuMzMxODY1XSAgICBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZm
ZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZm
IGZmZmZmZmZmZmZmZmZmZmYKWyAgMzI1LjM0NTgyNl0gICAgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZm
ZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZjMDAwMTA0MTA1
ClsgIDMyNS4zNTk3ODZdICAgIApbICAzMjUuMzYzMDk3XSBnbG9iYWxseSB1bm1hc2tlZDoKWyAg
MzI1LjM2MzA5N10gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMyNS4zNzg4NDhdICAgIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMjUuMzkyODA5XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAKWyAgMzI1LjQwNjc3MV0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMyNS40MjA3MzRd
ICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMjUuNDM0NjkzXSAgICAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAKWyAgMzI1LjQ0ODY1NF0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMyNS40
NjI2MTVdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDI4YTAwMApbICAzMjUuNDc2NTc2XSAgICAKWyAgMzI1
LjQ3OTg4N10gbG9jYWwgY3B1MiBtYXNrOgpbICAzMjUuNDc5ODg4XSAgICAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAKWyAgMzI1LjQ5NTQ1OV0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMyNS41
MDk0MjBdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMjUuNTIzMzgwXSAgICAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAKWyAgMzI1LjUzNzM0MV0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsg
IDMyNS41NTEzMDJdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMjUuNTY1MjYzXSAgICAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzI1LjU3OTIyNF0gICAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDdl
MDAwClsgIDMyNS41OTMxODhdICAgIApbICAzMjUuNTk2NDk3XSBsb2NhbGx5IHVubWFza2VkOgpb
ICAzMjUuNTk2NDk3XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzI1LjYxMjE1N10gICAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMyNS42MjYxMTldICAgIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMApbICAzMjUuNjQwMDgwXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzI1LjY1NDA0
MV0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMyNS42NjgwMDFdICAgIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMApbICAzMjUuNjgxOTYzXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzI1
LjY5NTkyM10gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDBhMDAwClsgIDMyNS43MDk4ODVdICAgIApbICAz
MjUuNzEzMTk2XSBwZW5kaW5nIGxpc3Q6ClsgIDMyNS43MTYyMzldICAgMDogZXZlbnQgMiAtPiBp
cnEgMjczIGdsb2JhbGx5LW1hc2tlZCBsb2NhbGx5LW1hc2tlZApbICAzMjUuNzIyNjgzXSAgIDI6
IGV2ZW50IDEzIC0+IGlycSAyODQKWyAgMzI1LjcyNjQ0MV0gICAyOiBldmVudCAxNCAtPiBpcnEg
Mjg1IGdsb2JhbGx5LW1hc2tlZApbICAzMjUuNzMxNjMyXSAgIDI6IGV2ZW50IDE1IC0+IGlycSAy
ODYKWyAgMzI1LjczNTM5MV0gICAzOiBldmVudCAxOSAtPiBpcnEgMjkwIGxvY2FsbHktbWFza2Vk
ClsgIDMyNS43NDA0OTFdICAgMzogZXZlbnQgMjEgLT4gaXJxIDI5MiBsb2NhbGx5LW1hc2tlZApb
ICAzMjUuNzQ1NjE1XSAKWyAgMzI1Ljc0NTYxNl0gdmNwdSAzClsgIDMyNS43NDU2MTZdICAgMDog
bWFza2VkPTEgcGVuZGluZz0wIGV2ZW50X3NlbCAwMDAwMDAwMDAwMDAwMDAwClsgIDMyNS43NTA4
NzNdICAgMTogbWFza2VkPTAgcGVuZGluZz0wIGV2ZW50X3NlbCAwMDAwMDAwMDAwMDAwMDAwClsg
IDMyNS43NTY5NTldICAgMjogbWFza2VkPTAgcGVuZGluZz0wIGV2ZW50X3NlbCAwMDAwMDAwMDAw
MDAwMDAwClsgIDMyNS43NjMwNDVdICAgMzogbWFza2VkPTAgcGVuZGluZz0xIGV2ZW50X3NlbCAw
MDAwMDAwMDAwMDAwMDAxClsgIDMyNS43NjkxMjldICAgClsgIDMyNS43NzUyMTRdIHBlbmRpbmc6
ClsgIDMyNS43NzUyMTVdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMjUuNzkwMDcxXSAg
ICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzI1LjgwNDAzMl0gICAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwClsgIDMyNS44MTc5OTNdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMjUuODMx
OTU0XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzI1Ljg0NTkxNF0gICAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwClsgIDMyNS44NTk4NzVdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAz
MjUuODczODM2XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAzODQwMDQKWyAgMzI1Ljg4Nzc5N10gICAgClsg
IDMyNS44OTExMDldIGdsb2JhbCBtYXNrOgpbICAzMjUuODkxMTA5XSAgICBmZmZmZmZmZmZmZmZm
ZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZm
ZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZm
ZmZmZmZmZmYKWyAgMzI1LjkwNjMyM10gICAgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZm
ZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZm
ZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmClsgIDMyNS45
MjAyODRdICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZm
ZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZm
ZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZgpbICAzMjUuOTM0MjQ1XSAgICBmZmZmZmZm
ZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZm
ZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZm
ZmZmZmZmZmZmZmZmZmYKWyAgMzI1Ljk0ODIwNl0gICAgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZm
ZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZm
ZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmClsg
IDMyNS45NjIxNjddICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZm
ZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZm
ZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZgpbICAzMjUuOTc2MTI4XSAgICBm
ZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZm
ZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZm
ZmZmIGZmZmZmZmZmZmZmZmZmZmYKWyAgMzI1Ljk5MDA4OF0gICAgZmZmZmZmZmZmZmZmZmZmZiBm
ZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZm
ZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZjMDAwMTA0
MTA1ClsgIDMyNi4wMDQwNDldICAgIApbICAzMjYuMDA3MzYwXSBnbG9iYWxseSB1bm1hc2tlZDoK
WyAgMzI2LjAwNzM2MV0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMyNi4wMjMxMTJdICAg
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMjYuMDM3MDcyXSAgICAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAKWyAgMzI2LjA1MTAzNF0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMyNi4wNjQ5
OTRdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMjYuMDc4OTU1XSAgICAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAKWyAgMzI2LjA5MjkxN10gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMy
Ni4xMDY4NzddICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDI4MDAwMApbICAzMjYuMTIwODM4XSAgICAKWyAg
MzI2LjEyNDE0OV0gbG9jYWwgY3B1MyBtYXNrOgpbICAzMjYuMTI0MTQ5XSAgICAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAKWyAgMzI2LjEzOTcyMV0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMy
Ni4xNTM2ODNdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMjYuMTY3NjQzXSAgICAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAKWyAgMzI2LjE4MTYwNF0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
ClsgIDMyNi4xOTU1NjVdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMjYuMjA5NTI2XSAg
ICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzI2LjIyMzQ4OF0gICAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAx
ZjgwMDAwClsgIDMyNi4yMzc0NDhdICAgIApbICAzMjYuMjQwNzU4XSBsb2NhbGx5IHVubWFza2Vk
OgpbICAzMjYuMjQwNzU5XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzI2LjI1NjQyMV0g
ICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMyNi4yNzAzODJdICAgIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMApbICAzMjYuMjg0MzQzXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzI2LjI5
ODMwNF0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMyNi4zMTIyNjRdICAgIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMApbICAzMjYuMzI2MjI1XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAg
MzI2LjM0MDE4Nl0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMjgwMDAwClsgIDMyNi4zNTQxNDhdICAgIApb
ICAzMjYuMzU3NDU5XSBwZW5kaW5nIGxpc3Q6ClsgIDMyNi4zNjA1MDNdICAgMDogZXZlbnQgMiAt
PiBpcnEgMjczIGdsb2JhbGx5LW1hc2tlZCBsb2NhbGx5LW1hc2tlZApbICAzMjYuMzY2OTQ2XSAg
IDI6IGV2ZW50IDE0IC0+IGlycSAyODUgZ2xvYmFsbHktbWFza2VkIGxvY2FsbHktbWFza2VkClsg
IDMyNi4zNzM0NzhdICAgMzogZXZlbnQgMTkgLT4gaXJxIDI5MApbICAzMjYuMzc3MjM3XSAgIDM6
IGV2ZW50IDIwIC0+IGlycSAyOTEgZ2xvYmFsbHktbWFza2VkClsgIDMyNi4zODI0MjddICAgMzog
ZXZlbnQgMjEgLT4gaXJxIDI5Mgo=
--14dae9399de136edb804c6d6c8a6
Content-Type: text/plain; charset=US-ASCII; name="xen-dump-pre-s3.txt"
Content-Disposition: attachment; filename="xen-dump-pre-s3.txt"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_h5nzo9g42

KFhFTikgJyonIHByZXNzZWQgLT4gZmlyaW5nIGFsbCBkaWFnbm9zdGljIGtleWhhbmRsZXJzCihY
RU4pIFtkOiBkdW1wIHJlZ2lzdGVyc10KKFhFTikgJ2QnIHByZXNzZWQgLT4gZHVtcGluZyByZWdp
c3RlcnMKKFhFTikgCihYRU4pICoqKiBEdW1waW5nIENQVTAgaG9zdCBzdGF0ZTogKioqCihYRU4p
IC0tLS1bIFhlbi00LjIuMC1yYzItcHJlICB4ODZfNjQgIGRlYnVnPXkgIFRhaW50ZWQ6ICAgIEMg
XS0tLS0KKFhFTikgQ1BVOiAgICAwCihYRU4pIFJJUDogICAgZTAwODpbPGZmZmY4MmM0ODAxM2Q3
N2U+XSBuczE2NTUwX3BvbGwrMHgyNy8weDMzCihYRU4pIFJGTEFHUzogMDAwMDAwMDAwMDAxMDI4
NiAgIENPTlRFWFQ6IGh5cGVydmlzb3IKKFhFTikgcmF4OiBmZmZmODJjNDgwMzAyNWEwICAgcmJ4
OiBmZmZmODJjNDgwMzAyNDgwICAgcmN4OiAwMDAwMDAwMDAwMDAwMDA0CihYRU4pIHJkeDogMDAw
MDAwMDAwMDAwMDAwMCAgIHJzaTogZmZmZjgyYzQ4MDJlMjVjOCAgIHJkaTogZmZmZjgyYzQ4MDI3
MTgwMAooWEVOKSByYnA6IGZmZmY4MmM0ODAyYjdlMzAgICByc3A6IGZmZmY4MmM0ODAyYjdlMzAg
ICByODogIDAwMDAwMDAwMDAwMDAwMDIKKFhFTikgcjk6ICBmZmZmODJjNDgwMmZlMjQwICAgcjEw
OiAwMDAwMDAxYzA4MzMzMTY5ICAgcjExOiAwMDAwMDAwMDAwMDAwMjQ2CihYRU4pIHIxMjogZmZm
ZjgyYzQ4MDI3MTgwMCAgIHIxMzogZmZmZjgyYzQ4MDEzZDc1NyAgIHIxNDogMDAwMDAwMWE1MmEx
Zjc2MgooWEVOKSByMTU6IGZmZmY4MmM0ODAzMDIzMDggICBjcjA6IDAwMDAwMDAwODAwNTAwM2Ig
ICBjcjQ6IDAwMDAwMDAwMDAxMDI2ZjAKKFhFTikgY3IzOiAwMDAwMDAwMTRjOGRlMDAwICAgY3Iy
OiBmZmZmZThmZmZmYzAwMjI4CihYRU4pIGRzOiAwMDAwICAgZXM6IDAwMDAgICBmczogMDAwMCAg
IGdzOiAwMDAwICAgc3M6IGUwMTAgICBjczogZTAwOAooWEVOKSBYZW4gc3RhY2sgdHJhY2UgZnJv
bSByc3A9ZmZmZjgyYzQ4MDJiN2UzMDoKKFhFTikgICAgZmZmZjgyYzQ4MDJiN2U2MCBmZmZmODJj
NDgwMTI4MTdmIDAwMDAwMDAwMDAwMDAwMDIgZmZmZjgyYzQ4MDJlMjVjOAooWEVOKSAgICBmZmZm
ODJjNDgwMzAyNDgwIGZmZmY4MzAxNDg5YjNkNDAgZmZmZjgyYzQ4MDJiN2ViMCBmZmZmODJjNDgw
MTI4MjgxCihYRU4pICAgIGZmZmY4MmM0ODAyYjdmMTggMDAwMDAwMDAwMDAwMDI0NiAwMDAwMDAw
MGRlYWRiZWVmIGZmZmY4MmM0ODAyZDg4ODAKKFhFTikgICAgZmZmZjgyYzQ4MDJkODg4MCBmZmZm
ODJjNDgwMmI3ZjE4IGZmZmZmZmZmZmZmZmZmZmYgZmZmZjgyYzQ4MDMwMjMwOAooWEVOKSAgICBm
ZmZmODJjNDgwMmI3ZWUwIGZmZmY4MmM0ODAxMjU0MDUgZmZmZjgyYzQ4MDJiN2YxOCBmZmZmODJj
NDgwMmI3ZjE4CihYRU4pICAgIDAwMDAwMDAwZmZmZmZmZmYgMDAwMDAwMDAwMDAwMDAwMiBmZmZm
ODJjNDgwMmI3ZWYwIGZmZmY4MmM0ODAxMjU0ODQKKFhFTikgICAgZmZmZjgyYzQ4MDJiN2YxMCBm
ZmZmODJjNDgwMTU4YzA1IGZmZmY4MzAwYWE1ODQwMDAgZmZmZjgzMDBhYTBmYzAwMAooWEVOKSAg
ICBmZmZmODJjNDgwMmI3ZGE4IDAwMDAwMDAwMDAwMDAwMDAgZmZmZmZmZmZmZmZmZmZmZiAwMDAw
MDAwMDAwMDAwMDAwCihYRU4pICAgIGZmZmZmZmZmODFhYWZkYTAgZmZmZmZmZmY4MWEwMWVlOCBm
ZmZmZmZmZjgxYTAxZmQ4IDAwMDAwMDAwMDAwMDAyNDYKKFhFTikgICAgMDAwMDAwMDAwMDAwMDAw
MSAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMAooWEVO
KSAgICBmZmZmZmZmZjgxMDAxM2FhIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDBkZWFkYmVlZiAw
MDAwMDAwMGRlYWRiZWVmCihYRU4pICAgIDAwMDAwMTAwMDAwMDAwMDAgZmZmZmZmZmY4MTAwMTNh
YSAwMDAwMDAwMDAwMDBlMDMzIDAwMDAwMDAwMDAwMDAyNDYKKFhFTikgICAgZmZmZmZmZmY4MWEw
MWVkMCAwMDAwMDAwMDAwMDBlMDJiIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMAoo
WEVOKSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCBmZmZmODMwMGFhNTg0MDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMAooWEVOKSBYZW4gY2FsbCB0cmFjZToKKFhFTikgICAgWzxmZmZmODJjNDgwMTNkNzdlPl0g
bnMxNjU1MF9wb2xsKzB4MjcvMHgzMwooWEVOKSAgICBbPGZmZmY4MmM0ODAxMjgxN2Y+XSBleGVj
dXRlX3RpbWVyKzB4NGUvMHg2YwooWEVOKSAgICBbPGZmZmY4MmM0ODAxMjgyODE+XSB0aW1lcl9z
b2Z0aXJxX2FjdGlvbisweGU0LzB4MjFhCihYRU4pICAgIFs8ZmZmZjgyYzQ4MDEyNTQwNT5dIF9f
ZG9fc29mdGlycSsweDk1LzB4YTAKKFhFTikgICAgWzxmZmZmODJjNDgwMTI1NDg0Pl0gZG9fc29m
dGlycSsweDI2LzB4MjgKKFhFTikgICAgWzxmZmZmODJjNDgwMTU4YzA1Pl0gaWRsZV9sb29wKzB4
NmYvMHg3MQooWEVOKSAgICAKKFhFTikgKioqIER1bXBpbmcgQ1BVMSBob3N0IHN0YXRlOiAqKioK
KFhFTikgLS0tLVsgWGVuLTQuMi4wLXJjMi1wcmUgIHg4Nl82NCAgZGVidWc9eSAgVGFpbnRlZDog
ICAgQyBdLS0tLQooWEVOKSBDUFU6ICAgIDEKKFhFTikgUklQOiAgICBlMDA4Ols8ZmZmZjgyYzQ4
MDE1ODNjND5dIGRlZmF1bHRfaWRsZSsweDk5LzB4OWUKKFhFTikgUkZMQUdTOiAwMDAwMDAwMDAw
MDAwMjQ2ICAgQ09OVEVYVDogaHlwZXJ2aXNvcgooWEVOKSByYXg6IGZmZmY4MmM0ODAzMDIzNzAg
ICByYng6IGZmZmY4MzAxNDg5OWZmMTggICByY3g6IDAwMDAwMDAwMDAwMDAwMDEKKFhFTikgcmR4
OiAwMDAwMDAzY2M4NmE4ZDgwICAgcnNpOiBmZmZmODMwMGE4M2ZkMGY4ICAgcmRpOiBmZmZmODMw
MGFhMGZlMDAwCihYRU4pIHJicDogZmZmZjgzMDE0ODk5ZmVmMCAgIHJzcDogZmZmZjgzMDE0ODk5
ZmVmMCAgIHI4OiAgMDAwMDAwMDAwMDAwMDAwMAooWEVOKSByOTogIDAwMDAwMDAwMDAwMDAwMDAg
ICByMTA6IDAwMDAwMDAwZGVhZGJlZWYgICByMTE6IDAwMDAwMDAwMDAwMDAyNDYKKFhFTikgcjEy
OiBmZmZmODMwMTQ4OTlmZjE4ICAgcjEzOiAwMDAwMDAwMGZmZmZmZmZmICAgcjE0OiAwMDAwMDAw
MDAwMDAwMDAyCihYRU4pIHIxNTogZmZmZjgzMDE0ODlhYjA4OCAgIGNyMDogMDAwMDAwMDA4MDA1
MDAzYiAgIGNyNDogMDAwMDAwMDAwMDEwMjZmMAooWEVOKSBjcjM6IDAwMDAwMDAxM2M2NGYwMDAg
ICBjcjI6IGZmZmY4ODAwMjZkMGU4MzAKKFhFTikgZHM6IDAwMmIgICBlczogMDAyYiAgIGZzOiAw
MDAwICAgZ3M6IDAwMDAgICBzczogZTAxMCAgIGNzOiBlMDA4CihYRU4pIFhlbiBzdGFjayB0cmFj
ZSBmcm9tIHJzcD1mZmZmODMwMTQ4OTlmZWYwOgooWEVOKSAgICBmZmZmODMwMTQ4OTlmZjEwIGZm
ZmY4MmM0ODAxNThiZjggZmZmZjgzMDBhYTBmZTAwMCBmZmZmODMwMGE4M2ZkMDAwCihYRU4pICAg
IGZmZmY4MzAxNDg5OWZkYTggMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDEKKFhFTikgICAgZmZmZmZmZmY4MWFhZmRhMCBmZmZmODgwMDI3ODZkZWUwIGZm
ZmY4ODAwMjc4NmRmZDggMDAwMDAwMDAwMDAwMDI0NgooWEVOKSAgICAwMDAwMDAwMDAwMDAwMDAx
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwCihYRU4p
ICAgIGZmZmZmZmZmODEwMDEzYWEgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMGRlYWRiZWVmIDAw
MDAwMDAwZGVhZGJlZWYKKFhFTikgICAgMDAwMDAxMDAwMDAwMDAwMCBmZmZmZmZmZjgxMDAxM2Fh
IDAwMDAwMDAwMDAwMGUwMzMgMDAwMDAwMDAwMDAwMDI0NgooWEVOKSAgICBmZmZmODgwMDI3ODZk
ZWM4IDAwMDAwMDAwMDAwMGUwMmIgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwCihY
RU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAx
IGZmZmY4MzAwYWEwZmUwMDAKKFhFTikgICAgMDAwMDAwM2NjODZhOGQ4MCAwMDAwMDAwMDAwMDAw
MDAwCihYRU4pIFhlbiBjYWxsIHRyYWNlOgooWEVOKSAgICBbPGZmZmY4MmM0ODAxNTgzYzQ+XSBk
ZWZhdWx0X2lkbGUrMHg5OS8weDllCihYRU4pICAgIFs8ZmZmZjgyYzQ4MDE1OGJmOD5dIGlkbGVf
bG9vcCsweDYyLzB4NzEKKFhFTikgICAgCihYRU4pICoqKiBEdW1waW5nIENQVTIgaG9zdCBzdGF0
ZTogKioqCihYRU4pIC0tLS1bIFhlbi00LjIuMC1yYzItcHJlICB4ODZfNjQgIGRlYnVnPXkgIFRh
aW50ZWQ6ICAgIEMgXS0tLS0KKFhFTikgQ1BVOiAgICAyCihYRU4pIFJJUDogICAgZTAwODpbPGZm
ZmY4MmM0ODAxNTgzYzQ+XSBkZWZhdWx0X2lkbGUrMHg5OS8weDllCihYRU4pIFJGTEFHUzogMDAw
MDAwMDAwMDAwMDI0NiAgIENPTlRFWFQ6IGh5cGVydmlzb3IKKFhFTikgcmF4OiBmZmZmODJjNDgw
MzAyMzcwICAgcmJ4OiBmZmZmODMwMTQ4OThmZjE4ICAgcmN4OiAwMDAwMDAwMDAwMDAwMDAyCihY
RU4pIHJkeDogMDAwMDAwM2NjODY5MmQ4MCAgIHJzaTogMDAwMDAwNDA5YzA3OGE5NCAgIHJkaTog
MDAwMDAwMDAwMDAwMDAwMgooWEVOKSByYnA6IGZmZmY4MzAxNDg5OGZlZjAgICByc3A6IGZmZmY4
MzAxNDg5OGZlZjAgICByODogIDAwMDAwMDAxMDU1YWRkNmMKKFhFTikgcjk6ICBmZmZmODMwMGE4
M2ZjMDYwICAgcjEwOiAwMDAwMDAwMGRlYWRiZWVmICAgcjExOiAwMDAwMDAwMDAwMDAwMjQ2CihY
RU4pIHIxMjogZmZmZjgzMDE0ODk4ZmYxOCAgIHIxMzogMDAwMDAwMDBmZmZmZmZmZiAgIHIxNDog
MDAwMDAwMDAwMDAwMDAwMgooWEVOKSByMTU6IGZmZmY4MzAxNDg5OTUwODggICBjcjA6IDAwMDAw
MDAwODAwNTAwM2IgICBjcjQ6IDAwMDAwMDAwMDAxMDI2ZjAKKFhFTikgY3IzOiAwMDAwMDAwMTNj
YTVmMDAwICAgY3IyOiBmZmZmODgwMDI1ODE3ZGYwCihYRU4pIGRzOiAwMDJiICAgZXM6IDAwMmIg
ICBmczogMDAwMCAgIGdzOiAwMDAwICAgc3M6IGUwMTAgICBjczogZTAwOAooWEVOKSBYZW4gc3Rh
Y2sgdHJhY2UgZnJvbSByc3A9ZmZmZjgzMDE0ODk4ZmVmMDoKKFhFTikgICAgZmZmZjgzMDE0ODk4
ZmYxMCBmZmZmODJjNDgwMTU4YmY4IGZmZmY4MzAwYTg1YzcwMDAgZmZmZjgzMDBhODNmYzAwMAoo
WEVOKSAgICBmZmZmODMwMTQ4OThmZGE4IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAyCihYRU4pICAgIGZmZmZmZmZmODFhYWZkYTAgZmZmZjg4MDAyNzg2
ZmVlMCBmZmZmODgwMDI3ODZmZmQ4IDAwMDAwMDAwMDAwMDAyNDYKKFhFTikgICAgMDAwMDAwMDAw
MDAwMDAwMSAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MAooWEVOKSAgICBmZmZmZmZmZjgxMDAxM2FhIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDBkZWFk
YmVlZiAwMDAwMDAwMGRlYWRiZWVmCihYRU4pICAgIDAwMDAwMTAwMDAwMDAwMDAgZmZmZmZmZmY4
MTAwMTNhYSAwMDAwMDAwMDAwMDBlMDMzIDAwMDAwMDAwMDAwMDAyNDYKKFhFTikgICAgZmZmZjg4
MDAyNzg2ZmVjOCAwMDAwMDAwMDAwMDBlMDJiIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMAooWEVOKSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMiBmZmZmODMwMGE4NWM3MDAwCihYRU4pICAgIDAwMDAwMDNjYzg2OTJkODAgMDAwMDAw
MDAwMDAwMDAwMAooWEVOKSBYZW4gY2FsbCB0cmFjZToKKFhFTikgICAgWzxmZmZmODJjNDgwMTU4
M2M0Pl0gZGVmYXVsdF9pZGxlKzB4OTkvMHg5ZQooWEVOKSAgICBbPGZmZmY4MmM0ODAxNThiZjg+
XSBpZGxlX2xvb3ArMHg2Mi8weDcxCihYRU4pICAgIAooWEVOKSAqKiogRHVtcGluZyBDUFUzIGhv
c3Qgc3RhdGU6ICoqKgooWEVOKSAtLS0tWyBYZW4tNC4yLjAtcmMyLXByZSAgeDg2XzY0ICBkZWJ1
Zz15ICBUYWludGVkOiAgICBDIF0tLS0tCihYRU4pIENQVTogICAgMwooWEVOKSBSSVA6ICAgIGUw
MDg6WzxmZmZmODJjNDgwMTU4M2M0Pl0gZGVmYXVsdF9pZGxlKzB4OTkvMHg5ZQooWEVOKSBSRkxB
R1M6IDAwMDAwMDAwMDAwMDAyNDYgICBDT05URVhUOiBoeXBlcnZpc29yCihYRU4pIHJheDogZmZm
ZjgyYzQ4MDMwMjM3MCAgIHJieDogZmZmZjgzMDE0ODkzZmYxOCAgIHJjeDogMDAwMDAwMDAwMDAw
MDAwMwooWEVOKSByZHg6IDAwMDAwMDNjYzg2ODRkODAgICByc2k6IDAwMDAwMDQwOWMwNzhhYjYg
ICByZGk6IDAwMDAwMDAwMDAwMDAwMDMKKFhFTikgcmJwOiBmZmZmODMwMTQ4OTNmZWYwICAgcnNw
OiBmZmZmODMwMTQ4OTNmZWYwICAgcjg6ICAwMDAwMDAwMTI4ODBkZjQ4CihYRU4pIHI5OiAgZmZm
ZjgzMDBhYTU4MzA2MCAgIHIxMDogMDAwMDAwMDBkZWFkYmVlZiAgIHIxMTogMDAwMDAwMDAwMDAw
MDI0NgooWEVOKSByMTI6IGZmZmY4MzAxNDg5M2ZmMTggICByMTM6IDAwMDAwMDAwZmZmZmZmZmYg
ICByMTQ6IDAwMDAwMDAwMDAwMDAwMDIKKFhFTikgcjE1OiBmZmZmODMwMTQ4OTg3MDg4ICAgY3Iw
OiAwMDAwMDAwMDgwMDUwMDNiICAgY3I0OiAwMDAwMDAwMDAwMTAyNmYwCihYRU4pIGNyMzogMDAw
MDAwMDEzZDk1ZjAwMCAgIGNyMjogZmZmZjg4MDAyNWU5MTI2MAooWEVOKSBkczogMDAyYiAgIGVz
OiAwMDJiICAgZnM6IDAwMDAgICBnczogMDAwMCAgIHNzOiBlMDEwICAgY3M6IGUwMDgKKFhFTikg
WGVuIHN0YWNrIHRyYWNlIGZyb20gcnNwPWZmZmY4MzAxNDg5M2ZlZjA6CihYRU4pICAgIGZmZmY4
MzAxNDg5M2ZmMTAgZmZmZjgyYzQ4MDE1OGJmOCBmZmZmODMwMGE4M2ZlMDAwIGZmZmY4MzAwYWE1
ODMwMDAKKFhFTikgICAgZmZmZjgzMDE0ODkzZmRhOCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMwooWEVOKSAgICBmZmZmZmZmZjgxYWFmZGEwIGZmZmY4
ODAwMjc4ODFlZTAgZmZmZjg4MDAyNzg4MWZkOCAwMDAwMDAwMDAwMDAwMjQ2CihYRU4pICAgIDAw
MDAwMDAwMDAwMDAwMDEgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAKKFhFTikgICAgZmZmZmZmZmY4MTAwMTNhYSAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwZGVhZGJlZWYgMDAwMDAwMDBkZWFkYmVlZgooWEVOKSAgICAwMDAwMDEwMDAwMDAwMDAwIGZm
ZmZmZmZmODEwMDEzYWEgMDAwMDAwMDAwMDAwZTAzMyAwMDAwMDAwMDAwMDAwMjQ2CihYRU4pICAg
IGZmZmY4ODAwMjc4ODFlYzggMDAwMDAwMDAwMDAwZTAyYiAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAKKFhFTikgICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDMgZmZmZjgzMDBhODNmZTAwMAooWEVOKSAgICAwMDAwMDAzY2M4Njg0ZDgw
IDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgWGVuIGNhbGwgdHJhY2U6CihYRU4pICAgIFs8ZmZmZjgy
YzQ4MDE1ODNjND5dIGRlZmF1bHRfaWRsZSsweDk5LzB4OWUKKFhFTikgICAgWzxmZmZmODJjNDgw
MTU4YmY4Pl0gaWRsZV9sb29wKzB4NjIvMHg3MQooWEVOKSAgICAKKFhFTikgWzA6IGR1bXAgRG9t
MCByZWdpc3RlcnNdCihYRU4pICcwJyBwcmVzc2VkIC0+IGR1bXBpbmcgRG9tMCdzIHJlZ2lzdGVy
cwooWEVOKSAqKiogRHVtcGluZyBEb20wIHZjcHUjMCBzdGF0ZTogKioqCihYRU4pIFJJUDogICAg
ZTAzMzpbPGZmZmZmZmZmODEwMDEzYWE+XQooWEVOKSBSRkxBR1M6IDAwMDAwMDAwMDAwMDAyNDYg
ICBFTTogMCAgIENPTlRFWFQ6IHB2IGd1ZXN0CihYRU4pIHJheDogMDAwMDAwMDAwMDAwMDAwMCAg
IHJieDogZmZmZmZmZmY4MWEwMWZkOCAgIHJjeDogZmZmZmZmZmY4MTAwMTNhYQooWEVOKSByZHg6
IDAwMDAwMDAwMDAwMDAwMDAgICByc2k6IDAwMDAwMDAwZGVhZGJlZWYgICByZGk6IDAwMDAwMDAw
ZGVhZGJlZWYKKFhFTikgcmJwOiBmZmZmZmZmZjgxYTAxZWU4ICAgcnNwOiBmZmZmZmZmZjgxYTAx
ZWQwICAgcjg6ICAwMDAwMDAwMDAwMDAwMDAwCihYRU4pIHI5OiAgMDAwMDAwMDAwMDAwMDAwMCAg
IHIxMDogMDAwMDAwMDAwMDAwMDAwMSAgIHIxMTogMDAwMDAwMDAwMDAwMDI0NgooWEVOKSByMTI6
IGZmZmZmZmZmODFhYWZkYTAgICByMTM6IDAwMDAwMDAwMDAwMDAwMDAgICByMTQ6IGZmZmZmZmZm
ZmZmZmZmZmYKKFhFTikgcjE1OiAwMDAwMDAwMDAwMDAwMDAwICAgY3IwOiAwMDAwMDAwMDAwMDAw
MDA4ICAgY3I0OiAwMDAwMDAwMDAwMDAyNjYwCihYRU4pIGNyMzogMDAwMDAwMDE0YzhkZTAwMCAg
IGNyMjogZmZmZmU4ZmZmZmMwMDIyOAooWEVOKSBkczogMDAwMCAgIGVzOiAwMDAwICAgZnM6IDAw
MDAgICBnczogMDAwMCAgIHNzOiBlMDJiICAgY3M6IGUwMzMKKFhFTikgR3Vlc3Qgc3RhY2sgdHJh
Y2UgZnJvbSByc3A9ZmZmZmZmZmY4MWEwMWVkMDoKKFhFTikgICAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMGZmZmZmZmZmIGZmZmZmZmZmODEwMGE1YzAgZmZmZmZmZmY4MWEwMWYxOAooWEVOKSAg
ICBmZmZmZmZmZjgxMDFjNjYzIGZmZmZmZmZmODFhMDFmZDggZmZmZmZmZmY4MWFhZmRhMCBmZmZm
ODgwMDJkZWUxYTAwCihYRU4pICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmY4MWEwMWY0OCBm
ZmZmZmZmZjgxMDEzMjM2IGZmZmZmZmZmZmZmZmZmZmYKKFhFTikgICAgYTNmYzk4NzMzOWVkMDEz
YiAwMDAwMDAwMDAwMDAwMDAwIGZmZmZmZmZmODFiMTUxNjAgZmZmZmZmZmY4MWEwMWY1OAooWEVO
KSAgICBmZmZmZmZmZjgxNTU0ZjVlIGZmZmZmZmZmODFhMDFmOTggZmZmZmZmZmY4MWFjY2JmNSBm
ZmZmZmZmZjgxYjE1MTYwCihYRU4pICAgIGU0YjE1OWJhM2VlYTA5NGMgMDAwMDAwMDAwMGNkZjAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAwMDAwMDAwMDAw
MDAwMCBmZmZmZmZmZjgxYTAxZmI4IGZmZmZmZmZmODFhY2MzNGIgZmZmZmZmZmY3ZmZmZmZmZgoo
WEVOKSAgICBmZmZmZmZmZjg0YjA0MDAwIGZmZmZmZmZmODFhMDFmZjggZmZmZmZmZmY4MWFjZmVj
YyAwMDAwMDAwMDAwMDAwMDAwCihYRU4pICAgIDAwMDAwMDAxMDAwMDAwMDAgMDAxMDA4MDAwMDAz
MDZhNCAxZmM5OGI3NWUzYjgyMjgzIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwCihYRU4pICoqKiBEdW1waW5nIERvbTAgdmNwdSMxIHN0
YXRlOiAqKioKKFhFTikgUklQOiAgICBlMDMzOls8ZmZmZmZmZmY4MTAwMTNhYT5dCihYRU4pIFJG
TEFHUzogMDAwMDAwMDAwMDAwMDI0NiAgIEVNOiAwICAgQ09OVEVYVDogcHYgZ3Vlc3QKKFhFTikg
cmF4OiAwMDAwMDAwMDAwMDAwMDAwICAgcmJ4OiBmZmZmODgwMDI3ODZkZmQ4ICAgcmN4OiBmZmZm
ZmZmZjgxMDAxM2FhCihYRU4pIHJkeDogMDAwMDAwMDAwMDAwMDAwMCAgIHJzaTogMDAwMDAwMDBk
ZWFkYmVlZiAgIHJkaTogMDAwMDAwMDBkZWFkYmVlZgooWEVOKSByYnA6IGZmZmY4ODAwMjc4NmRl
ZTAgICByc3A6IGZmZmY4ODAwMjc4NmRlYzggICByODogIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikg
cjk6ICAwMDAwMDAwMDAwMDAwMDAwICAgcjEwOiAwMDAwMDAwMDAwMDAwMDAxICAgcjExOiAwMDAw
MDAwMDAwMDAwMjQ2CihYRU4pIHIxMjogZmZmZmZmZmY4MWFhZmRhMCAgIHIxMzogMDAwMDAwMDAw
MDAwMDAwMSAgIHIxNDogMDAwMDAwMDAwMDAwMDAwMAooWEVOKSByMTU6IDAwMDAwMDAwMDAwMDAw
MDAgICBjcjA6IDAwMDAwMDAwODAwNTAwM2IgICBjcjQ6IDAwMDAwMDAwMDAwMDI2NjAKKFhFTikg
Y3IzOiAwMDAwMDAwMTNjNjRmMDAwICAgY3IyOiAwMDAwN2YzNTg4NTA2MjEwCihYRU4pIGRzOiAw
MDJiICAgZXM6IDAwMmIgICBmczogMDAwMCAgIGdzOiAwMDAwICAgc3M6IGUwMmIgICBjczogZTAz
MwooWEVOKSBHdWVzdCBzdGFjayB0cmFjZSBmcm9tIHJzcD1mZmZmODgwMDI3ODZkZWM4OgooWEVO
KSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwZmZmZmZmZmYgZmZmZmZmZmY4MTAwYTVjMCBm
ZmZmODgwMDI3ODZkZjEwCihYRU4pICAgIGZmZmZmZmZmODEwMWM2NjMgZmZmZjg4MDAyNzg2ZGZk
OCBmZmZmZmZmZjgxYWFmZGEwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAwMDAwMDAwMDAw
MDAwMCBmZmZmODgwMDI3ODZkZjQwIGZmZmZmZmZmODEwMTMyMzYgZmZmZmZmZmY4MTAwYWRlOQoo
WEVOKSAgICBhZGNmNDU4MDdjMmQwNGZiIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCBmZmZmODgwMDI3ODZkZjUwCihYRU4pICAgIGZmZmZmZmZmODE1NjM0MzggMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MAooWEVOKSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMAooWEVOKSAgICAwMDAwMDAwMDAwMDAwMDAwIGZmZmY4ODAwMjc4NmRmNTggMDAwMDAwMDAw
MDAwMDAwMAooWEVOKSAqKiogRHVtcGluZyBEb20wIHZjcHUjMiBzdGF0ZTogKioqCihYRU4pIFJJ
UDogICAgZTAzMzpbPGZmZmZmZmZmODEwMDEzYWE+XQooWEVOKSBSRkxBR1M6IDAwMDAwMDAwMDAw
MDAyNDYgICBFTTogMCAgIENPTlRFWFQ6IHB2IGd1ZXN0CihYRU4pIHJheDogMDAwMDAwMDAwMDAw
MDAwMCAgIHJieDogZmZmZjg4MDAyNzg2ZmZkOCAgIHJjeDogZmZmZmZmZmY4MTAwMTNhYQooWEVO
KSByZHg6IDAwMDAwMDAwMDAwMDAwMDAgICByc2k6IDAwMDAwMDAwZGVhZGJlZWYgICByZGk6IDAw
MDAwMDAwZGVhZGJlZWYKKFhFTikgcmJwOiBmZmZmODgwMDI3ODZmZWUwICAgcnNwOiBmZmZmODgw
MDI3ODZmZWM4ICAgcjg6ICAwMDAwMDAwMDAwMDAwMDAwCihYRU4pIHI5OiAgMDAwMDAwMDAwMDAw
MDAwMCAgIHIxMDogMDAwMDAwMDAwMDAwMDAwMSAgIHIxMTogMDAwMDAwMDAwMDAwMDI0NgooWEVO
KSByMTI6IGZmZmZmZmZmODFhYWZkYTAgICByMTM6IDAwMDAwMDAwMDAwMDAwMDIgICByMTQ6IDAw
MDAwMDAwMDAwMDAwMDAKKFhFTikgcjE1OiAwMDAwMDAwMDAwMDAwMDAwICAgY3IwOiAwMDAwMDAw
MDgwMDUwMDNiICAgY3I0OiAwMDAwMDAwMDAwMDAyNjYwCihYRU4pIGNyMzogMDAwMDAwMDEzY2E1
ZjAwMCAgIGNyMjogMDAwMDdmODE5YzNiZTAwMAooWEVOKSBkczogMDAyYiAgIGVzOiAwMDJiICAg
ZnM6IDAwMDAgICBnczogMDAwMCAgIHNzOiBlMDJiICAgY3M6IGUwMzMKKFhFTikgR3Vlc3Qgc3Rh
Y2sgdHJhY2UgZnJvbSByc3A9ZmZmZjg4MDAyNzg2ZmVjODoKKFhFTikgICAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMGZmZmZmZmZmIGZmZmZmZmZmODEwMGE1YzAgZmZmZjg4MDAyNzg2ZmYxMAoo
WEVOKSAgICBmZmZmZmZmZjgxMDFjNjYzIGZmZmY4ODAwMjc4NmZmZDggZmZmZmZmZmY4MWFhZmRh
MCAwMDAwMDAwMDAwMDAwMDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgZmZmZjg4MDAyNzg2
ZmY0MCBmZmZmZmZmZjgxMDEzMjM2IGZmZmZmZmZmODEwMGFkZTkKKFhFTikgICAgMWZlN2I1YTgy
MjE1MDQ5OSAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgZmZmZjg4MDAyNzg2ZmY1
MAooWEVOKSAgICBmZmZmZmZmZjgxNTYzNDM4IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMAooWEVOKSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAw
MDAwMDAwMDAwMDAwMCBmZmZmODgwMDI3ODZmZjU4IDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgKioq
IER1bXBpbmcgRG9tMCB2Y3B1IzMgc3RhdGU6ICoqKgooWEVOKSBSSVA6ICAgIGUwMzM6WzxmZmZm
ZmZmZjgxMDAxM2FhPl0KKFhFTikgUkZMQUdTOiAwMDAwMDAwMDAwMDAwMjQ2ICAgRU06IDAgICBD
T05URVhUOiBwdiBndWVzdAooWEVOKSByYXg6IDAwMDAwMDAwMDAwMDAwMDAgICByYng6IGZmZmY4
ODAwMjc4ODFmZDggICByY3g6IGZmZmZmZmZmODEwMDEzYWEKKFhFTikgcmR4OiAwMDAwMDAwMDAw
MDAwMDAwICAgcnNpOiAwMDAwMDAwMGRlYWRiZWVmICAgcmRpOiAwMDAwMDAwMGRlYWRiZWVmCihY
RU4pIHJicDogZmZmZjg4MDAyNzg4MWVlMCAgIHJzcDogZmZmZjg4MDAyNzg4MWVjOCAgIHI4OiAg
MDAwMDAwMDAwMDAwMDAwMAooWEVOKSByOTogIDAwMDAwMDAwMDAwMDAwMDAgICByMTA6IDAwMDAw
MDAwMDAwMDAwMDEgICByMTE6IDAwMDAwMDAwMDAwMDAyNDYKKFhFTikgcjEyOiBmZmZmZmZmZjgx
YWFmZGEwICAgcjEzOiAwMDAwMDAwMDAwMDAwMDAzICAgcjE0OiAwMDAwMDAwMDAwMDAwMDAwCihY
RU4pIHIxNTogMDAwMDAwMDAwMDAwMDAwMCAgIGNyMDogMDAwMDAwMDA4MDA1MDAzYiAgIGNyNDog
MDAwMDAwMDAwMDAwMjY2MAooWEVOKSBjcjM6IDAwMDAwMDAxM2RiNjIwMDAgICBjcjI6IDAwMDA3
ZmVhY2E2NGMwMDAKKFhFTikgZHM6IDAwMmIgICBlczogMDAyYiAgIGZzOiAwMDAwICAgZ3M6IDAw
MDAgICBzczogZTAyYiAgIGNzOiBlMDMzCihYRU4pIEd1ZXN0IHN0YWNrIHRyYWNlIGZyb20gcnNw
PWZmZmY4ODAwMjc4ODFlYzg6CihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDBmZmZm
ZmZmZiBmZmZmZmZmZjgxMDBhNWMwIGZmZmY4ODAwMjc4ODFmMTAKKFhFTikgICAgZmZmZmZmZmY4
MTAxYzY2MyBmZmZmODgwMDI3ODgxZmQ4IGZmZmZmZmZmODFhYWZkYTAgMDAwMDAwMDAwMDAwMDAw
MAooWEVOKSAgICAwMDAwMDAwMDAwMDAwMDAwIGZmZmY4ODAwMjc4ODFmNDAgZmZmZmZmZmY4MTAx
MzIzNiBmZmZmZmZmZjgxMDBhZGU5CihYRU4pICAgIDQ5ZGUxODMzZDEzZjJhMjYgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIGZmZmY4ODAwMjc4ODFmNTAKKFhFTikgICAgZmZmZmZm
ZmY4MTU2MzQzOCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMAooWEVOKSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMAooWEVOKSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgZmZm
Zjg4MDAyNzg4MWY1OCAwMDAwMDAwMDAwMDAwMDAwCihYRU4pIFtIOiBkdW1wIGhlYXAgaW5mb10K
KFhFTikgJ0gnIHByZXNzZWQgLT4gZHVtcGluZyBoZWFwIGluZm8gKG5vdy0weDFBOjlDNkUyMkMw
KQooWEVOKSBoZWFwW25vZGU9MF1bem9uZT0wXSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0w
XVt6b25lPTFdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9Ml0gLT4gMCBwYWdl
cwooWEVOKSBoZWFwW25vZGU9MF1bem9uZT0zXSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0w
XVt6b25lPTRdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9NV0gLT4gMCBwYWdl
cwooWEVOKSBoZWFwW25vZGU9MF1bem9uZT02XSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0w
XVt6b25lPTddIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9OF0gLT4gMCBwYWdl
cwooWEVOKSBoZWFwW25vZGU9MF1bem9uZT05XSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0w
XVt6b25lPTEwXSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTExXSAtPiAwIHBh
Z2VzCihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTEyXSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9k
ZT0wXVt6b25lPTEzXSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTE0XSAtPiAx
NjEyOCBwYWdlcwooWEVOKSBoZWFwW25vZGU9MF1bem9uZT0xNV0gLT4gMzI3NjggcGFnZXMKKFhF
TikgaGVhcFtub2RlPTBdW3pvbmU9MTZdIC0+IDY1NTM2IHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0w
XVt6b25lPTE3XSAtPiAxMzA1NTkgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MThdIC0+
IDI2MjE0MyBwYWdlcwooWEVOKSBoZWFwW25vZGU9MF1bem9uZT0xOV0gLT4gMTcyODQ1IHBhZ2Vz
CihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTIwXSAtPiAxMzQyMTggcGFnZXMKKFhFTikgaGVhcFtu
b2RlPTBdW3pvbmU9MjFdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MjJdIC0+
IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MjNdIC0+IDAgcGFnZXMKKFhFTikgaGVh
cFtub2RlPTBdW3pvbmU9MjRdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MjVd
IC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MjZdIC0+IDAgcGFnZXMKKFhFTikg
aGVhcFtub2RlPTBdW3pvbmU9MjddIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9
MjhdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MjldIC0+IDAgcGFnZXMKKFhF
TikgaGVhcFtub2RlPTBdW3pvbmU9MzBdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pv
bmU9MzFdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MzJdIC0+IDAgcGFnZXMK
KFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MzNdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBd
W3pvbmU9MzRdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MzVdIC0+IDAgcGFn
ZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MzZdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2Rl
PTBdW3pvbmU9MzddIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MzhdIC0+IDAg
cGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MzldIC0+IDAgcGFnZXMKKFhFTikgW0k6IGR1
bXAgSFZNIGlycSBpbmZvXQooWEVOKSAnSScgcHJlc3NlZCAtPiBkdW1waW5nIEhWTSBpcnEgaW5m
bwooWEVOKSBbTTogZHVtcCBNU0kgc3RhdGVdCihYRU4pIFBDSS1NU0kgaW50ZXJydXB0IGluZm9y
bWF0aW9uOgooWEVOKSAgTVNJICAgIDI2IHZlYz02MSBsb3dlc3QgIGVkZ2UgICBhc3NlcnQgIGxv
ZyBsb3dlc3QgZGVzdD0wMDAwMDAwMSBtYXNrPTAvMS8tMQooWEVOKSAgTVNJICAgIDI3IHZlYz0y
OSBsb3dlc3QgIGVkZ2UgICBhc3NlcnQgIGxvZyBsb3dlc3QgZGVzdD0wMDAwMDAwMSBtYXNrPTAv
MS8tMQooWEVOKSAgTVNJICAgIDI4IHZlYz0zMSBsb3dlc3QgIGVkZ2UgICBhc3NlcnQgIGxvZyBs
b3dlc3QgZGVzdD0wMDAwMDAwMSBtYXNrPTAvMS8tMQooWEVOKSAgTVNJICAgIDI5IHZlYz0zOSBs
b3dlc3QgIGVkZ2UgICBhc3NlcnQgIGxvZyBsb3dlc3QgZGVzdD0wMDAwMDAwMSBtYXNrPTAvMS8t
MQooWEVOKSAgTVNJICAgIDMwIHZlYz00MSBsb3dlc3QgIGVkZ2UgICBhc3NlcnQgIGxvZyBsb3dl
c3QgZGVzdD0wMDAwMDAwMSBtYXNrPTAvMS8tMQooWEVOKSBbUTogZHVtcCBQQ0kgZGV2aWNlc10K
KFhFTikgPT09PSBQQ0kgZGV2aWNlcyA9PT09CihYRU4pID09PT0gc2VnbWVudCAwMDAwID09PT0K
KFhFTikgMDAwMDowNTowMS4wIC0gZG9tIDAgICAtIE1TSXMgPCA+CihYRU4pIDAwMDA6MDQ6MDAu
MCAtIGRvbSAwICAgLSBNU0lzIDwgPgooWEVOKSAwMDAwOjAzOjAwLjAgLSBkb20gMCAgIC0gTVNJ
cyA8ID4KKFhFTikgMDAwMDowMjowMC4wIC0gZG9tIDAgICAtIE1TSXMgPCAzMCA+CihYRU4pIDAw
MDA6MDA6MWYuMyAtIGRvbSAwICAgLSBNU0lzIDwgPgooWEVOKSAwMDAwOjAwOjFmLjIgLSBkb20g
MCAgIC0gTVNJcyA8IDI3ID4KKFhFTikgMDAwMDowMDoxZi4wIC0gZG9tIDAgICAtIE1TSXMgPCA+
CihYRU4pIDAwMDA6MDA6MWUuMCAtIGRvbSAwICAgLSBNU0lzIDwgPgooWEVOKSAwMDAwOjAwOjFk
LjAgLSBkb20gMCAgIC0gTVNJcyA8ID4KKFhFTikgMDAwMDowMDoxYy43IC0gZG9tIDAgICAtIE1T
SXMgPCA+CihYRU4pIDAwMDA6MDA6MWMuNiAtIGRvbSAwICAgLSBNU0lzIDwgPgooWEVOKSAwMDAw
OjAwOjFjLjAgLSBkb20gMCAgIC0gTVNJcyA8ID4KKFhFTikgMDAwMDowMDoxYi4wIC0gZG9tIDAg
ICAtIE1TSXMgPCAyOSA+CihYRU4pIDAwMDA6MDA6MWEuMCAtIGRvbSAwICAgLSBNU0lzIDwgPgoo
WEVOKSAwMDAwOjAwOjE5LjAgLSBkb20gMCAgIC0gTVNJcyA8IDI2ID4KKFhFTikgMDAwMDowMDox
Ni4zIC0gZG9tIDAgICAtIE1TSXMgPCA+CihYRU4pIDAwMDA6MDA6MTYuMCAtIGRvbSAwICAgLSBN
U0lzIDwgPgooWEVOKSAwMDAwOjAwOjE0LjAgLSBkb20gMCAgIC0gTVNJcyA8ID4KKFhFTikgMDAw
MDowMDowMi4wIC0gZG9tIDAgICAtIE1TSXMgPCAyOCA+CihYRU4pIDAwMDA6MDA6MDAuMCAtIGRv
bSAwICAgLSBNU0lzIDwgPgooWEVOKSBbVjogZHVtcCBpb21tdSBpbmZvXQooWEVOKSAKKFhFTikg
aW9tbXUgMDogbnJfcHRfbGV2ZWxzID0gMy4KKFhFTikgICBRdWV1ZWQgSW52YWxpZGF0aW9uOiBz
dXBwb3J0ZWQgYW5kIGVuYWJsZWQuCihYRU4pICAgSW50ZXJydXB0IFJlbWFwcGluZzogc3VwcG9y
dGVkIGFuZCBlbmFibGVkLgooWEVOKSAgIEludGVycnVwdCByZW1hcHBpbmcgdGFibGUgKG5yX2Vu
dHJ5PTB4MTAwMDAuIE9ubHkgZHVtcCBQPTEgZW50cmllcyBoZXJlKToKKFhFTikgICAgICAgIFNW
VCAgU1EgICBTSUQgICAgICBEU1QgIFYgIEFWTCBETE0gVE0gUkggRE0gRlBEIFAKKFhFTikgICAw
MDAwOiAgMSAgIDAgIDAwMTAgMDAwMDAwMDEgMzEgICAgMCAgIDEgIDAgIDEgIDEgICAwIDEKKFhF
TikgCihYRU4pIGlvbW11IDE6IG5yX3B0X2xldmVscyA9IDMuCihYRU4pICAgUXVldWVkIEludmFs
aWRhdGlvbjogc3VwcG9ydGVkIGFuZCBlbmFibGVkLgooWEVOKSAgIEludGVycnVwdCBSZW1hcHBp
bmc6IHN1cHBvcnRlZCBhbmQgZW5hYmxlZC4KKFhFTikgICBJbnRlcnJ1cHQgcmVtYXBwaW5nIHRh
YmxlIChucl9lbnRyeT0weDEwMDAwLiBPbmx5IGR1bXAgUD0xIGVudHJpZXMgaGVyZSk6CihYRU4p
ICAgICAgICBTVlQgIFNRICAgU0lEICAgICAgRFNUICBWICBBVkwgRExNIFRNIFJIIERNIEZQRCBQ
CihYRU4pICAgMDAwMDogIDEgICAwICBmMGY4IDAwMDAwMDAxIDM4ICAgIDAgICAxICAwICAxICAx
ICAgMCAxCihYRU4pICAgMDAwMTogIDEgICAwICBmMGY4IDAwMDAwMDAxIGYwICAgIDAgICAxICAw
ICAxICAxICAgMCAxCihYRU4pICAgMDAwMjogIDEgICAwICBmMGY4IDAwMDAwMDAxIDQwICAgIDAg
ICAxICAwICAxICAxICAgMCAxCihYRU4pICAgMDAwMzogIDEgICAwICBmMGY4IDAwMDAwMDAxIDQ4
ICAgIDAgICAxICAwICAxICAxICAgMCAxCihYRU4pICAgMDAwNDogIDEgICAwICBmMGY4IDAwMDAw
MDAxIDUwICAgIDAgICAxICAwICAxICAxICAgMCAxCihYRU4pICAgMDAwNTogIDEgICAwICBmMGY4
IDAwMDAwMDAxIDU4ICAgIDAgICAxICAwICAxICAxICAgMCAxCihYRU4pICAgMDAwNjogIDEgICAw
ICBmMGY4IDAwMDAwMDAxIDYwICAgIDAgICAxICAwICAxICAxICAgMCAxCihYRU4pICAgMDAwNzog
IDEgICAwICBmMGY4IDAwMDAwMDAxIDY4ICAgIDAgICAxICAwICAxICAxICAgMCAxCihYRU4pICAg
MDAwODogIDEgICAwICBmMGY4IDAwMDAwMDAxIDcwICAgIDAgICAxICAxICAxICAxICAgMCAxCihY
RU4pICAgMDAwOTogIDEgICAwICBmMGY4IDAwMDAwMDAxIDc4ICAgIDAgICAxICAwICAxICAxICAg
MCAxCihYRU4pICAgMDAwYTogIDEgICAwICBmMGY4IDAwMDAwMDAxIDg4ICAgIDAgICAxICAwICAx
ICAxICAgMCAxCihYRU4pICAgMDAwYjogIDEgICAwICBmMGY4IDAwMDAwMDAxIDkwICAgIDAgICAx
ICAwICAxICAxICAgMCAxCihYRU4pICAgMDAwYzogIDEgICAwICBmMGY4IDAwMDAwMDAxIDk4ICAg
IDAgICAxICAwICAxICAxICAgMCAxCihYRU4pICAgMDAwZDogIDEgICAwICBmMGY4IDAwMDAwMDAx
IGEwICAgIDAgICAxICAwICAxICAxICAgMCAxCihYRU4pICAgMDAwZTogIDEgICAwICBmMGY4IDAw
MDAwMDAxIGE4ICAgIDAgICAxICAwICAxICAxICAgMCAxCihYRU4pICAgMDAwZjogIDEgICAwICBm
MGY4IDAwMDAwMDAxIGIwICAgIDAgICAxICAxICAxICAxICAgMCAxCihYRU4pICAgMDAxMDogIDEg
ICAwICBmMGY4IDAwMDAwMDAxIGI4ICAgIDAgICAxICAxICAxICAxICAgMCAxCihYRU4pICAgMDAx
MTogIDEgICAwICBmMGY4IDAwMDAwMDAxIGMwICAgIDAgICAxICAxICAxICAxICAgMCAxCihYRU4p
ICAgMDAxMjogIDEgICAwICBmMGY4IDAwMDAwMDAxIGM4ICAgIDAgICAxICAxICAxICAxICAgMCAx
CihYRU4pICAgMDAxMzogIDEgICAwICBmMGY4IDAwMDAwMDAxIGQwICAgIDAgICAxICAxICAxICAx
ICAgMCAxCihYRU4pICAgMDAxNDogIDEgICAwICBmMGY4IDAwMDAwMDAxIGQ4ICAgIDAgICAxICAx
ICAxICAxICAgMCAxCihYRU4pICAgMDAxNTogIDEgICAwICAwMGM4IDAwMDAwMDAxIDYxICAgIDAg
ICAxICAwICAxICAxICAgMCAxCihYRU4pICAgMDAxNjogIDEgICAwICAwMGZhIDAwMDAwMDAxIDI5
ICAgIDAgICAxICAwICAxICAxICAgMCAxCihYRU4pICAgMDAxNzogIDEgICAwICAwMGQ4IDAwMDAw
MDAxIDM5ICAgIDAgICAxICAwICAxICAxICAgMCAxCihYRU4pICAgMDAxODogIDEgICAwICAwMjAw
IDAwMDAwMDAxIDQxICAgIDAgICAxICAwICAxICAxICAgMCAxCihYRU4pIAooWEVOKSBSZWRpcmVj
dGlvbiB0YWJsZSBvZiBJT0FQSUMgMDoKKFhFTikgICAjZW50cnkgSURYIEZNVCBNQVNLIFRSSUcg
SVJSIFBPTCBTVEFUIERFTEkgIFZFQ1RPUgooWEVOKSAgICAwMTogIDAwMDAgICAxICAgIDAgICAw
ICAgMCAgIDAgICAgMCAgICAwICAgICAzOAooWEVOKSAgICAwMjogIDAwMDEgICAxICAgIDAgICAw
ICAgMCAgIDAgICAgMCAgICAwICAgICBmMAooWEVOKSAgICAwMzogIDAwMDIgICAxICAgIDAgICAw
ICAgMCAgIDAgICAgMCAgICAwICAgICA0MAooWEVOKSAgICAwNDogIDAwMDMgICAxICAgIDAgICAw
ICAgMCAgIDAgICAgMCAgICAwICAgICA0OAooWEVOKSAgICAwNTogIDAwMDQgICAxICAgIDAgICAw
ICAgMCAgIDAgICAgMCAgICAwICAgICA1MAooWEVOKSAgICAwNjogIDAwMDUgICAxICAgIDAgICAw
ICAgMCAgIDAgICAgMCAgICAwICAgICA1OAooWEVOKSAgICAwNzogIDAwMDYgICAxICAgIDAgICAw
ICAgMCAgIDAgICAgMCAgICAwICAgICA2MAooWEVOKSAgICAwODogIDAwMDcgICAxICAgIDAgICAw
ICAgMCAgIDAgICAgMCAgICAwICAgICA2OAooWEVOKSAgICAwOTogIDAwMDggICAxICAgIDAgICAx
ICAgMCAgIDAgICAgMCAgICAwICAgICA3MAooWEVOKSAgICAwYTogIDAwMDkgICAxICAgIDAgICAw
ICAgMCAgIDAgICAgMCAgICAwICAgICA3OAooWEVOKSAgICAwYjogIDAwMGEgICAxICAgIDAgICAw
ICAgMCAgIDAgICAgMCAgICAwICAgICA4OAooWEVOKSAgICAwYzogIDAwMGIgICAxICAgIDAgICAw
ICAgMCAgIDAgICAgMCAgICAwICAgICA5MAooWEVOKSAgICAwZDogIDAwMGMgICAxICAgIDEgICAw
ICAgMCAgIDAgICAgMCAgICAwICAgICA5OAooWEVOKSAgICAwZTogIDAwMGQgICAxICAgIDAgICAw
ICAgMCAgIDAgICAgMCAgICAwICAgICBhMAooWEVOKSAgICAwZjogIDAwMGUgICAxICAgIDAgICAw
ICAgMCAgIDAgICAgMCAgICAwICAgICBhOAooWEVOKSAgICAxMDogIDAwMGYgICAxICAgIDAgICAx
ICAgMCAgIDEgICAgMCAgICAwICAgICBiMAooWEVOKSAgICAxMjogIDAwMTAgICAxICAgIDEgICAx
ICAgMCAgIDEgICAgMCAgICAwICAgICBiOAooWEVOKSAgICAxMzogIDAwMTEgICAxICAgIDEgICAx
ICAgMCAgIDEgICAgMCAgICAwICAgICBjMAooWEVOKSAgICAxNDogIDAwMTQgICAxICAgIDEgICAx
ICAgMCAgIDEgICAgMCAgICAwICAgICBkOAooWEVOKSAgICAxNjogIDAwMTMgICAxICAgIDEgICAx
ICAgMCAgIDEgICAgMCAgICAwICAgICBkMAooWEVOKSAgICAxNzogIDAwMTIgICAxICAgIDAgICAx
ICAgMCAgIDEgICAgMCAgICAwICAgICBjOAooWEVOKSBbYTogZHVtcCB0aW1lciBxdWV1ZXNdCihY
RU4pIER1bXBpbmcgdGltZXIgcXVldWVzOgooWEVOKSBDUFUwMDoKKFhFTikgICBleD0gICAtMTcy
M3VzIHRpbWVyPWZmZmY4MmM0ODAyZTI1YzggY2I9ZmZmZjgyYzQ4MDEzZDc1NyhmZmZmODJjNDgw
MjcxODAwKSBuczE2NTUwX3BvbGwrMHgwLzB4MzMKKFhFTikgICBleD0gICAtMTcyMnVzIHRpbWVy
PWZmZmY4MzAxNGNhOTIzYjAgY2I9ZmZmZjgyYzQ4MDE2NjQxNihmZmZmODMwMTQ4OTQxZTgwKSBp
cnFfZ3Vlc3RfZW9pX3RpbWVyX2ZuKzB4MC8weDE1ZAooWEVOKSAgIGV4PSAgICA3Mjc4dXMgdGlt
ZXI9ZmZmZjgzMDE0ODk3M2VhOCBjYj1mZmZmODJjNDgwMTFhYWYwKDAwMDAwMDAwMDAwMDAwMDAp
IGNzY2hlZF90aWNrKzB4MC8weDMxNAooWEVOKSAgIGV4PSAgICA3Mjc4dXMgdGltZXI9ZmZmZjgz
MDE0ODk3MzFiOCBjYj1mZmZmODJjNDgwMTE5ZDcyKGZmZmY4MzAxNDg5NzMxOTApIGNzY2hlZF9h
Y2N0KzB4MC8weDQyYQooWEVOKSAgIGV4PSA1NDk0MDg5dXMgdGltZXI9ZmZmZjgyYzQ4MDMwMDU4
MCBjYj1mZmZmODJjNDgwMWE4ODUwKDAwMDAwMDAwMDAwMDAwMDApIG1jZV93b3JrX2ZuKzB4MC8w
eGE5CihYRU4pICAgZXg9MzUzODcyMjZ1cyB0aW1lcj1mZmZmODJjNDgwMmZlMjgwIGNiPWZmZmY4
MmM0ODAxODA3YzIoMDAwMDAwMDAwMDAwMDAwMCkgcGx0X292ZXJmbG93KzB4MC8weDEzMQooWEVO
KSAgIGV4PSAgOTk3MzIzdXMgdGltZXI9ZmZmZjgyYzQ4MDJmZTI0MCBjYj1mZmZmODJjNDgwMTgw
MmZlKDAwMDAwMDAwMDAwMDAwMDApIHRpbWVfY2FsaWJyYXRpb24rMHgwLzB4NWMKKFhFTikgQ1BV
MDE6CihYRU4pICAgZXg9ICAgNzYyNDd1cyB0aW1lcj1mZmZmODMwMTQ4OWIzYjk4IGNiPWZmZmY4
MmM0ODAxMWFhZjAoMDAwMDAwMDAwMDAwMDAwMSkgY3NjaGVkX3RpY2srMHgwLzB4MzE0CihYRU4p
ICAgZXg9ICAyMTQxMzR1cyB0aW1lcj1mZmZmODMwMGE4M2ZkMDYwIGNiPWZmZmY4MmM0ODAxMjFj
NmIoZmZmZjgzMDBhODNmZDAwMCkgdmNwdV9zaW5nbGVzaG90X3RpbWVyX2ZuKzB4MC8weGIKKFhF
TikgQ1BVMDI6CihYRU4pICAgZXg9ICAgOTY1NjJ1cyB0aW1lcj1mZmZmODMwMTQ4OTk0MDg4IGNi
PWZmZmY4MmM0ODAxMWFhZjAoMDAwMDAwMDAwMDAwMDAwMikgY3NjaGVkX3RpY2srMHgwLzB4MzE0
CihYRU4pICAgZXg9ICAxODQxMzl1cyB0aW1lcj1mZmZmODMwMGE4M2ZjMDYwIGNiPWZmZmY4MmM0
ODAxMjFjNmIoZmZmZjgzMDBhODNmYzAwMCkgdmNwdV9zaW5nbGVzaG90X3RpbWVyX2ZuKzB4MC8w
eGIKKFhFTikgQ1BVMDM6CihYRU4pICAgZXg9ICAxMTY4Nzd1cyB0aW1lcj1mZmZmODMwMTQ4OTk0
NTU4IGNiPWZmZmY4MmM0ODAxMWFhZjAoMDAwMDAwMDAwMDAwMDAwMykgY3NjaGVkX3RpY2srMHgw
LzB4MzE0CihYRU4pICAgZXg9ICAxNTQxNDh1cyB0aW1lcj1mZmZmODMwMGFhNTgzMDYwIGNiPWZm
ZmY4MmM0ODAxMjFjNmIoZmZmZjgzMDBhYTU4MzAwMCkgdmNwdV9zaW5nbGVzaG90X3RpbWVyX2Zu
KzB4MC8weGIKKFhFTikgW2M6IGR1bXAgQUNQSSBDeCBzdHJ1Y3R1cmVzXQooWEVOKSAnYycgcHJl
c3NlZCAtPiBwcmludGluZyBBQ1BJIEN4IHN0cnVjdHVyZXMKKFhFTikgPT1jcHUwPT0KKFhFTikg
YWN0aXZlIHN0YXRlOgkJQzI1NQooWEVOKSBtYXhfY3N0YXRlOgkJQzcKKFhFTikgc3RhdGVzOgoo
WEVOKSAgICAgQzE6CXR5cGVbQzFdIGxhdGVuY3lbMDAwXSB1c2FnZVswMDAwMDAwMF0gbWV0aG9k
WyBIQUxUXSBkdXJhdGlvblswXQooWEVOKSAgICAgQzA6CXVzYWdlWzAwMDAwMDAwXSBkdXJhdGlv
blsxMTUwNjA3MDIwMTZdCihYRU4pIFBDMlswXSBQQzNbMF0gUEM2WzBdIFBDN1swXQooWEVOKSBD
QzNbMF0gQ0M2WzBdIENDN1swXQooWEVOKSA9PWNwdTE9PQooWEVOKSBhY3RpdmUgc3RhdGU6CQlD
MjU1CihYRU4pIG1heF9jc3RhdGU6CQlDNwooWEVOKSBzdGF0ZXM6CihYRU4pICAgICBDMToJdHlw
ZVtDMV0gbGF0ZW5jeVswMDBdIHVzYWdlWzAwMDAwMDAwXSBtZXRob2RbIEhBTFRdIGR1cmF0aW9u
WzBdCihYRU4pICAgICBDMDoJdXNhZ2VbMDAwMDAwMDBdIGR1cmF0aW9uWzExNTA4NTQ5MjI5NF0K
KFhFTikgUEMyWzBdIFBDM1swXSBQQzZbMF0gUEM3WzBdCihYRU4pIENDM1swXSBDQzZbMTIxNDQ5
NzgxNDhdIENDN1swXQooWEVOKSA9PWNwdTI9PQooWEVOKSBhY3RpdmUgc3RhdGU6CQlDMjU1CihY
RU4pIG1heF9jc3RhdGU6CQlDNwooWEVOKSBzdGF0ZXM6CihYRU4pICAgICBDMToJdHlwZVtDMV0g
bGF0ZW5jeVswMDBdIHVzYWdlWzAwMDAwMDAwXSBtZXRob2RbIEhBTFRdIGR1cmF0aW9uWzBdCihY
RU4pICAgICBDMDoJdXNhZ2VbMDAwMDAwMDBdIGR1cmF0aW9uWzExNTExMTE3NjY3M10KKFhFTikg
UEMyWzBdIFBDM1swXSBQQzZbMF0gUEM3WzBdCihYRU4pIENDM1swXSBDQzZbMTIxNTA4ODEzOTdd
IENDN1swXQooWEVOKSA9PWNwdTM9PQooWEVOKSBhY3RpdmUgc3RhdGU6CQlDMjU1CihYRU4pIG1h
eF9jc3RhdGU6CQlDNwooWEVOKSBzdGF0ZXM6CihYRU4pICAgICBDMToJdHlwZVtDMV0gbGF0ZW5j
eVswMDBdIHVzYWdlWzAwMDAwMDAwXSBtZXRob2RbIEhBTFRdIGR1cmF0aW9uWzBdCihYRU4pICAg
ICBDMDoJdXNhZ2VbMDAwMDAwMDBdIGR1cmF0aW9uWzExNTEzNjg2MDU2N10KKFhFTikgUEMyWzBd
IFBDM1swXSBQQzZbMF0gUEM3WzBdCihYRU4pIENDM1swXSBDQzZbMTIxNTY4Nzg0MTZdIENDN1sw
XQooWEVOKSBbZTogZHVtcCBldnRjaG4gaW5mb10KKFhFTikgJ2UnIHByZXNzZWQgLT4gZHVtcGlu
ZyBldmVudC1jaGFubmVsIGluZm8KKFhFTikgRXZlbnQgY2hhbm5lbCBpbmZvcm1hdGlvbiBmb3Ig
ZG9tYWluIDA6CihYRU4pIFBvbGxpbmcgdkNQVXM6IHt9CihYRU4pICAgICBwb3J0IFtwL21dCihY
RU4pICAgICAgICAxIFsxLzBdOiBzPTUgbj0wIHg9MCB2PTAKKFhFTikgICAgICAgIDIgWzEvMV06
IHM9NiBuPTAgeD0wCihYRU4pICAgICAgICAzIFsxLzBdOiBzPTYgbj0wIHg9MAooWEVOKSAgICAg
ICAgNCBbMC8wXTogcz02IG49MCB4PTAKKFhFTikgICAgICAgIDUgWzAvMF06IHM9NSBuPTAgeD0w
IHY9MQooWEVOKSAgICAgICAgNiBbMC8wXTogcz02IG49MCB4PTAKKFhFTikgICAgICAgIDcgWzAv
MF06IHM9NSBuPTEgeD0wIHY9MAooWEVOKSAgICAgICAgOCBbMC8xXTogcz02IG49MSB4PTAKKFhF
TikgICAgICAgIDkgWzAvMF06IHM9NiBuPTEgeD0wCihYRU4pICAgICAgIDEwIFswLzBdOiBzPTYg
bj0xIHg9MAooWEVOKSAgICAgICAxMSBbMC8wXTogcz01IG49MSB4PTAgdj0xCihYRU4pICAgICAg
IDEyIFswLzBdOiBzPTYgbj0xIHg9MAooWEVOKSAgICAgICAxMyBbMC8wXTogcz01IG49MiB4PTAg
dj0wCihYRU4pICAgICAgIDE0IFsxLzFdOiBzPTYgbj0yIHg9MAooWEVOKSAgICAgICAxNSBbMC8w
XTogcz02IG49MiB4PTAKKFhFTikgICAgICAgMTYgWzAvMF06IHM9NiBuPTIgeD0wCihYRU4pICAg
ICAgIDE3IFswLzBdOiBzPTUgbj0yIHg9MCB2PTEKKFhFTikgICAgICAgMTggWzAvMF06IHM9NiBu
PTIgeD0wCihYRU4pICAgICAgIDE5IFswLzBdOiBzPTUgbj0zIHg9MCB2PTAKKFhFTikgICAgICAg
MjAgWzEvMV06IHM9NiBuPTMgeD0wCihYRU4pICAgICAgIDIxIFswLzBdOiBzPTYgbj0zIHg9MAoo
WEVOKSAgICAgICAyMiBbMC8wXTogcz02IG49MyB4PTAKKFhFTikgICAgICAgMjMgWzAvMF06IHM9
NSBuPTMgeD0wIHY9MQooWEVOKSAgICAgICAyNCBbMC8wXTogcz02IG49MyB4PTAKKFhFTikgICAg
ICAgMjUgWzAvMF06IHM9MyBuPTAgeD0wIGQ9MCBwPTM1CihYRU4pICAgICAgIDI2IFswLzBdOiBz
PTQgbj0wIHg9MCBwPTkgaT05CihYRU4pICAgICAgIDI3IFswLzBdOiBzPTUgbj0wIHg9MCB2PTIK
KFhFTikgICAgICAgMjggWzAvMF06IHM9NCBuPTAgeD0wIHA9OCBpPTgKKFhFTikgICAgICAgMjkg
WzAvMF06IHM9NCBuPTAgeD0wIHA9Mjc4IGk9MjcKKFhFTikgICAgICAgMzAgWzAvMF06IHM9NCBu
PTAgeD0wIHA9MTYgaT0xNgooWEVOKSAgICAgICAzMSBbMC8wXTogcz00IG49MCB4PTAgcD0yNzcg
aT0yOAooWEVOKSAgICAgICAzMiBbMC8wXTogcz00IG49MCB4PTAgcD0yMyBpPTIzCihYRU4pICAg
ICAgIDMzIFswLzBdOiBzPTQgbj0wIHg9MCBwPTI3NiBpPTI5CihYRU4pICAgICAgIDM0IFsxLzBd
OiBzPTQgbj0wIHg9MCBwPTI3NSBpPTMwCihYRU4pICAgICAgIDM1IFswLzBdOiBzPTMgbj0wIHg9
MCBkPTAgcD0yNQooWEVOKSAgICAgICAzNiBbMC8wXTogcz01IG49MCB4PTAgdj0zCihYRU4pICAg
ICAgIDM3IFsxLzBdOiBzPTQgbj0wIHg9MCBwPTI3OSBpPTI2CihYRU4pIFtnOiBwcmludCBncmFu
dCB0YWJsZSB1c2FnZV0KKFhFTikgZ250dGFiX3VzYWdlX3ByaW50X2FsbCBbIGtleSAnZycgcHJl
c3NlZAooWEVOKSAgICAgICAtLS0tLS0tLSBhY3RpdmUgLS0tLS0tLS0gICAgICAgLS0tLS0tLS0g
c2hhcmVkIC0tLS0tLS0tCihYRU4pIFtyZWZdIGxvY2FsZG9tIG1mbiAgICAgIHBpbiAgICAgICAg
ICBsb2NhbGRvbSBnbWZuICAgICBmbGFncwooWEVOKSBncmFudC10YWJsZSBmb3IgcmVtb3RlIGRv
bWFpbjogICAgMCAuLi4gbm8gYWN0aXZlIGdyYW50IHRhYmxlIGVudHJpZXMKKFhFTikgZ250dGFi
X3VzYWdlX3ByaW50X2FsbCBdIGRvbmUKKFhFTikgW2k6IGR1bXAgaW50ZXJydXB0IGJpbmRpbmdz
XQooWEVOKSBHdWVzdCBpbnRlcnJ1cHQgaW5mb3JtYXRpb246CihYRU4pICAgIElSUTogICAwIGFm
ZmluaXR5OjAwMDEgdmVjOmYwIHR5cGU9SU8tQVBJQy1lZGdlICAgIHN0YXR1cz0wMDAwMDAwMCBt
YXBwZWQsIHVuYm91bmQKKFhFTikgICAgSVJROiAgIDEgYWZmaW5pdHk6MDAwMSB2ZWM6MzggdHlw
ZT1JTy1BUElDLWVkZ2UgICAgc3RhdHVzPTAwMDAwMDAyIG1hcHBlZCwgdW5ib3VuZAooWEVOKSAg
ICBJUlE6ICAgMiBhZmZpbml0eTpmZmZmIHZlYzplMiB0eXBlPVhULVBJQyAgICAgICAgICBzdGF0
dXM9MDAwMDAwMDAgbWFwcGVkLCB1bmJvdW5kCihYRU4pICAgIElSUTogICAzIGFmZmluaXR5OjAw
MDEgdmVjOjQwIHR5cGU9SU8tQVBJQy1lZGdlICAgIHN0YXR1cz0wMDAwMDAwMiBtYXBwZWQsIHVu
Ym91bmQKKFhFTikgICAgSVJROiAgIDQgYWZmaW5pdHk6MDAwMSB2ZWM6NDggdHlwZT1JTy1BUElD
LWVkZ2UgICAgc3RhdHVzPTAwMDAwMDAyIG1hcHBlZCwgdW5ib3VuZAooWEVOKSAgICBJUlE6ICAg
NSBhZmZpbml0eTowMDAxIHZlYzo1MCB0eXBlPUlPLUFQSUMtZWRnZSAgICBzdGF0dXM9MDAwMDAw
MDIgbWFwcGVkLCB1bmJvdW5kCihYRU4pICAgIElSUTogICA2IGFmZmluaXR5OjAwMDEgdmVjOjU4
IHR5cGU9SU8tQVBJQy1lZGdlICAgIHN0YXR1cz0wMDAwMDAwMiBtYXBwZWQsIHVuYm91bmQKKFhF
TikgICAgSVJROiAgIDcgYWZmaW5pdHk6MDAwMSB2ZWM6NjAgdHlwZT1JTy1BUElDLWVkZ2UgICAg
c3RhdHVzPTAwMDAwMDAyIG1hcHBlZCwgdW5ib3VuZAooWEVOKSAgICBJUlE6ICAgOCBhZmZpbml0
eTowMDAxIHZlYzo2OCB0eXBlPUlPLUFQSUMtZWRnZSAgICBzdGF0dXM9MDAwMDAwMTAgaW4tZmxp
Z2h0PTAgZG9tYWluLWxpc3Q9MDogIDgoLVMtLSksCihYRU4pICAgIElSUTogICA5IGFmZmluaXR5
OjAwMDEgdmVjOjcwIHR5cGU9SU8tQVBJQy1sZXZlbCAgIHN0YXR1cz0wMDAwMDAxMCBpbi1mbGln
aHQ9MCBkb21haW4tbGlzdD0wOiAgOSgtUy0tKSwKKFhFTikgICAgSVJROiAgMTAgYWZmaW5pdHk6
MDAwMSB2ZWM6NzggdHlwZT1JTy1BUElDLWVkZ2UgICAgc3RhdHVzPTAwMDAwMDAyIG1hcHBlZCwg
dW5ib3VuZAooWEVOKSAgICBJUlE6ICAxMSBhZmZpbml0eTowMDAxIHZlYzo4OCB0eXBlPUlPLUFQ
SUMtZWRnZSAgICBzdGF0dXM9MDAwMDAwMDIgbWFwcGVkLCB1bmJvdW5kCihYRU4pICAgIElSUTog
IDEyIGFmZmluaXR5OjAwMDEgdmVjOjkwIHR5cGU9SU8tQVBJQy1lZGdlICAgIHN0YXR1cz0wMDAw
MDAwMiBtYXBwZWQsIHVuYm91bmQKKFhFTikgICAgSVJROiAgMTMgYWZmaW5pdHk6MDAwZiB2ZWM6
OTggdHlwZT1JTy1BUElDLWVkZ2UgICAgc3RhdHVzPTAwMDAwMDAyIG1hcHBlZCwgdW5ib3VuZAoo
WEVOKSAgICBJUlE6ICAxNCBhZmZpbml0eTowMDAxIHZlYzphMCB0eXBlPUlPLUFQSUMtZWRnZSAg
ICBzdGF0dXM9MDAwMDAwMDIgbWFwcGVkLCB1bmJvdW5kCihYRU4pICAgIElSUTogIDE1IGFmZmlu
aXR5OjAwMDEgdmVjOmE4IHR5cGU9SU8tQVBJQy1lZGdlICAgIHN0YXR1cz0wMDAwMDAwMiBtYXBw
ZWQsIHVuYm91bmQKKFhFTikgICAgSVJROiAgMTYgYWZmaW5pdHk6MDAwMSB2ZWM6YjAgdHlwZT1J
Ty1BUElDLWxldmVsICAgc3RhdHVzPTAwMDAwMDEwIGluLWZsaWdodD0wIGRvbWFpbi1saXN0PTA6
IDE2KC1TLS0pLAooWEVOKSAgICBJUlE6ICAxOCBhZmZpbml0eTowMDBmIHZlYzpiOCB0eXBlPUlP
LUFQSUMtbGV2ZWwgICBzdGF0dXM9MDAwMDAwMDIgbWFwcGVkLCB1bmJvdW5kCihYRU4pICAgIElS
UTogIDE5IGFmZmluaXR5OjAwMDEgdmVjOmMwIHR5cGU9SU8tQVBJQy1sZXZlbCAgIHN0YXR1cz0w
MDAwMDAwMiBtYXBwZWQsIHVuYm91bmQKKFhFTikgICAgSVJROiAgMjAgYWZmaW5pdHk6MDAwZiB2
ZWM6ZDggdHlwZT1JTy1BUElDLWxldmVsICAgc3RhdHVzPTAwMDAwMDAyIG1hcHBlZCwgdW5ib3Vu
ZAooWEVOKSAgICBJUlE6ICAyMiBhZmZpbml0eTowMDAxIHZlYzpkMCB0eXBlPUlPLUFQSUMtbGV2
ZWwgICBzdGF0dXM9MDAwMDAwMDIgbWFwcGVkLCB1bmJvdW5kCihYRU4pICAgIElSUTogIDIzIGFm
ZmluaXR5OjAwMDEgdmVjOmM4IHR5cGU9SU8tQVBJQy1sZXZlbCAgIHN0YXR1cz0wMDAwMDAxMCBp
bi1mbGlnaHQ9MCBkb21haW4tbGlzdD0wOiAyMygtUy0tKSwKKFhFTikgICAgSVJROiAgMjQgYWZm
aW5pdHk6MDAwMSB2ZWM6MjggdHlwZT1ETUFfTVNJICAgICAgICAgc3RhdHVzPTAwMDAwMDAwIG1h
cHBlZCwgdW5ib3VuZAooWEVOKSAgICBJUlE6ICAyNSBhZmZpbml0eTowMDAxIHZlYzozMCB0eXBl
PURNQV9NU0kgICAgICAgICBzdGF0dXM9MDAwMDAwMDAgbWFwcGVkLCB1bmJvdW5kCihYRU4pICAg
IElSUTogIDI2IGFmZmluaXR5OjAwMDEgdmVjOjYxIHR5cGU9UENJLU1TSSAgICAgICAgIHN0YXR1
cz0wMDAwMDAxMCBpbi1mbGlnaHQ9MCBkb21haW4tbGlzdD0wOjI3OShQUy0tKSwKKFhFTikgICAg
SVJROiAgMjcgYWZmaW5pdHk6MDAwMSB2ZWM6MjkgdHlwZT1QQ0ktTVNJICAgICAgICAgc3RhdHVz
PTAwMDAwMDEwIGluLWZsaWdodD0wIGRvbWFpbi1saXN0PTA6Mjc4KC1TLS0pLAooWEVOKSAgICBJ
UlE6ICAyOCBhZmZpbml0eTowMDAxIHZlYzozMSB0eXBlPVBDSS1NU0kgICAgICAgICBzdGF0dXM9
MDAwMDAwMTAgaW4tZmxpZ2h0PTAgZG9tYWluLWxpc3Q9MDoyNzcoLVMtLSksCihYRU4pICAgIElS
UTogIDI5IGFmZmluaXR5OjAwMDEgdmVjOjM5IHR5cGU9UENJLU1TSSAgICAgICAgIHN0YXR1cz0w
MDAwMDAxMCBpbi1mbGlnaHQ9MCBkb21haW4tbGlzdD0wOjI3NigtUy0tKSwKKFhFTikgICAgSVJR
OiAgMzAgYWZmaW5pdHk6MDAwMSB2ZWM6NDEgdHlwZT1QQ0ktTVNJICAgICAgICAgc3RhdHVzPTAw
MDAwMDEwIGluLWZsaWdodD0xIGRvbWFpbi1saXN0PTA6Mjc1KFBTLU0pLAooWEVOKSBJTy1BUElD
IGludGVycnVwdCBpbmZvcm1hdGlvbjoKKFhFTikgICAgIElSUSAgMCBWZWMyNDA6CihYRU4pICAg
ICAgIEFwaWMgMHgwMCwgUGluICAyOiB2ZWM9ZjAgZGVsaXZlcnk9TG9QcmkgZGVzdD1MIHN0YXR1
cz0wIHBvbGFyaXR5PTAgaXJyPTAgdHJpZz1FIG1hc2s9MCBkZXN0X2lkOjAKKFhFTikgICAgIElS
USAgMSBWZWMgNTY6CihYRU4pICAgICAgIEFwaWMgMHgwMCwgUGluICAxOiB2ZWM9MzggZGVsaXZl
cnk9TG9QcmkgZGVzdD1MIHN0YXR1cz0wIHBvbGFyaXR5PTAgaXJyPTAgdHJpZz1FIG1hc2s9MCBk
ZXN0X2lkOjAKKFhFTikgICAgIElSUSAgMyBWZWMgNjQ6CihYRU4pICAgICAgIEFwaWMgMHgwMCwg
UGluICAzOiB2ZWM9NDAgZGVsaXZlcnk9TG9QcmkgZGVzdD1MIHN0YXR1cz0wIHBvbGFyaXR5PTAg
aXJyPTAgdHJpZz1FIG1hc2s9MCBkZXN0X2lkOjAKKFhFTikgICAgIElSUSAgNCBWZWMgNzI6CihY
RU4pICAgICAgIEFwaWMgMHgwMCwgUGluICA0OiB2ZWM9NDggZGVsaXZlcnk9TG9QcmkgZGVzdD1M
IHN0YXR1cz0wIHBvbGFyaXR5PTAgaXJyPTAgdHJpZz1FIG1hc2s9MCBkZXN0X2lkOjAKKFhFTikg
ICAgIElSUSAgNSBWZWMgODA6CihYRU4pICAgICAgIEFwaWMgMHgwMCwgUGluICA1OiB2ZWM9NTAg
ZGVsaXZlcnk9TG9QcmkgZGVzdD1MIHN0YXR1cz0wIHBvbGFyaXR5PTAgaXJyPTAgdHJpZz1FIG1h
c2s9MCBkZXN0X2lkOjAKKFhFTikgICAgIElSUSAgNiBWZWMgODg6CihYRU4pICAgICAgIEFwaWMg
MHgwMCwgUGluICA2OiB2ZWM9NTggZGVsaXZlcnk9TG9QcmkgZGVzdD1MIHN0YXR1cz0wIHBvbGFy
aXR5PTAgaXJyPTAgdHJpZz1FIG1hc2s9MCBkZXN0X2lkOjAKKFhFTikgICAgIElSUSAgNyBWZWMg
OTY6CihYRU4pICAgICAgIEFwaWMgMHgwMCwgUGluICA3OiB2ZWM9NjAgZGVsaXZlcnk9TG9Qcmkg
ZGVzdD1MIHN0YXR1cz0wIHBvbGFyaXR5PTAgaXJyPTAgdHJpZz1FIG1hc2s9MCBkZXN0X2lkOjAK
KFhFTikgICAgIElSUSAgOCBWZWMxMDQ6CihYRU4pICAgICAgIEFwaWMgMHgwMCwgUGluICA4OiB2
ZWM9NjggZGVsaXZlcnk9TG9QcmkgZGVzdD1MIHN0YXR1cz0wIHBvbGFyaXR5PTAgaXJyPTAgdHJp
Zz1FIG1hc2s9MCBkZXN0X2lkOjAKKFhFTikgICAgIElSUSAgOSBWZWMxMTI6CihYRU4pICAgICAg
IEFwaWMgMHgwMCwgUGluICA5OiB2ZWM9NzAgZGVsaXZlcnk9TG9QcmkgZGVzdD1MIHN0YXR1cz0w
IHBvbGFyaXR5PTAgaXJyPTAgdHJpZz1MIG1hc2s9MCBkZXN0X2lkOjAKKFhFTikgICAgIElSUSAx
MCBWZWMxMjA6CihYRU4pICAgICAgIEFwaWMgMHgwMCwgUGluIDEwOiB2ZWM9NzggZGVsaXZlcnk9
TG9QcmkgZGVzdD1MIHN0YXR1cz0wIHBvbGFyaXR5PTAgaXJyPTAgdHJpZz1FIG1hc2s9MCBkZXN0
X2lkOjAKKFhFTikgICAgIElSUSAxMSBWZWMxMzY6CihYRU4pICAgICAgIEFwaWMgMHgwMCwgUGlu
IDExOiB2ZWM9ODggZGVsaXZlcnk9TG9QcmkgZGVzdD1MIHN0YXR1cz0wIHBvbGFyaXR5PTAgaXJy
PTAgdHJpZz1FIG1hc2s9MCBkZXN0X2lkOjAKKFhFTikgICAgIElSUSAxMiBWZWMxNDQ6CihYRU4p
ICAgICAgIEFwaWMgMHgwMCwgUGluIDEyOiB2ZWM9OTAgZGVsaXZlcnk9TG9QcmkgZGVzdD1MIHN0
YXR1cz0wIHBvbGFyaXR5PTAgaXJyPTAgdHJpZz1FIG1hc2s9MCBkZXN0X2lkOjAKKFhFTikgICAg
IElSUSAxMyBWZWMxNTI6CihYRU4pICAgICAgIEFwaWMgMHgwMCwgUGluIDEzOiB2ZWM9OTggZGVs
aXZlcnk9TG9QcmkgZGVzdD1MIHN0YXR1cz0wIHBvbGFyaXR5PTAgaXJyPTAgdHJpZz1FIG1hc2s9
MSBkZXN0X2lkOjAKKFhFTikgICAgIElSUSAxNCBWZWMxNjA6CihYRU4pICAgICAgIEFwaWMgMHgw
MCwgUGluIDE0OiB2ZWM9YTAgZGVsaXZlcnk9TG9QcmkgZGVzdD1MIHN0YXR1cz0wIHBvbGFyaXR5
PTAgaXJyPTAgdHJpZz1FIG1hc2s9MCBkZXN0X2lkOjAKKFhFTikgICAgIElSUSAxNSBWZWMxNjg6
CihYRU4pICAgICAgIEFwaWMgMHgwMCwgUGluIDE1OiB2ZWM9YTggZGVsaXZlcnk9TG9QcmkgZGVz
dD1MIHN0YXR1cz0wIHBvbGFyaXR5PTAgaXJyPTAgdHJpZz1FIG1hc2s9MCBkZXN0X2lkOjAKKFhF
TikgICAgIElSUSAxNiBWZWMxNzY6CihYRU4pICAgICAgIEFwaWMgMHgwMCwgUGluIDE2OiB2ZWM9
YjAgZGVsaXZlcnk9TG9QcmkgZGVzdD1MIHN0YXR1cz0wIHBvbGFyaXR5PTEgaXJyPTAgdHJpZz1M
IG1hc2s9MCBkZXN0X2lkOjAKKFhFTikgICAgIElSUSAxOCBWZWMxODQ6CihYRU4pICAgICAgIEFw
aWMgMHgwMCwgUGluIDE4OiB2ZWM9YjggZGVsaXZlcnk9TG9QcmkgZGVzdD1MIHN0YXR1cz0wIHBv
bGFyaXR5PTEgaXJyPTAgdHJpZz1MIG1hc2s9MSBkZXN0X2lkOjAKKFhFTikgICAgIElSUSAxOSBW
ZWMxOTI6CihYRU4pICAgICAgIEFwaWMgMHgwMCwgUGluIDE5OiB2ZWM9YzAgZGVsaXZlcnk9TG9Q
cmkgZGVzdD1MIHN0YXR1cz0wIHBvbGFyaXR5PTEgaXJyPTAgdHJpZz1MIG1hc2s9MSBkZXN0X2lk
OjAKKFhFTikgICAgIElSUSAyMCBWZWMyMTY6CihYRU4pICAgICAgIEFwaWMgMHgwMCwgUGluIDIw
OiB2ZWM9ZDggZGVsaXZlcnk9TG9QcmkgZGVzdD1MIHN0YXR1cz0wIHBvbGFyaXR5PTEgaXJyPTAg
dHJpZz1MIG1hc2s9MSBkZXN0X2lkOjAKKFhFTikgICAgIElSUSAyMiBWZWMyMDg6CihYRU4pICAg
ICAgIEFwaWMgMHgwMCwgUGluIDIyOiB2ZWM9ZDAgZGVsaXZlcnk9TG9QcmkgZGVzdD1MIHN0YXR1
cz0wIHBvbGFyaXR5PTEgaXJyPTAgdHJpZz1MIG1hc2s9MSBkZXN0X2lkOjAKKFhFTikgICAgIElS
USAyMyBWZWMyMDA6CihYRU4pICAgICAgIEFwaWMgMHgwMCwgUGluIDIzOiB2ZWM9YzggZGVsaXZl
cnk9TG9QcmkgZGVzdD1MIHN0YXR1cz0wIHBvbGFyaXR5PTEgaXJyPTAgdHJpZz1MIG1hc2s9MCBk
ZXN0X2lkOjAKKFhFTikgW206IG1lbW9yeSBpbmZvXQooWEVOKSBQaHlzaWNhbCBtZW1vcnkgaW5m
b3JtYXRpb246CihYRU4pICAgICBYZW4gaGVhcDogMGtCIGZyZWUKKFhFTikgICAgIGhlYXBbMTRd
OiA2NDUxMmtCIGZyZWUKKFhFTikgICAgIGhlYXBbMTVdOiAxMzEwNzJrQiBmcmVlCihYRU4pICAg
ICBoZWFwWzE2XTogMjYyMTQ0a0IgZnJlZQooWEVOKSAgICAgaGVhcFsxN106IDUyMjIzNmtCIGZy
ZWUKKFhFTikgICAgIGhlYXBbMThdOiAxMDQ4NTcya0IgZnJlZQooWEVOKSAgICAgaGVhcFsxOV06
IDY5MTM4MGtCIGZyZWUKKFhFTikgICAgIGhlYXBbMjBdOiA1MzY4NzJrQiBmcmVlCihYRU4pICAg
ICBEb20gaGVhcDogMzI1Njc4OGtCIGZyZWUKKFhFTikgW246IE5NSSBzdGF0aXN0aWNzXQooWEVO
KSBDUFUJTk1JCihYRU4pICAgMAkgIDAKKFhFTikgICAxCSAgMAooWEVOKSAgIDIJICAwCihYRU4p
ICAgMwkgIDAKKFhFTikgZG9tMCB2Y3B1MDogTk1JIG5laXRoZXIgcGVuZGluZyBub3IgbWFza2Vk
CihYRU4pIFtxOiBkdW1wIGRvbWFpbiAoYW5kIGd1ZXN0IGRlYnVnKSBpbmZvXQooWEVOKSAncScg
cHJlc3NlZCAtPiBkdW1waW5nIGRvbWFpbiBpbmZvIChub3c9MHgxQTpGQzE3NUYwNCkKKFhFTikg
R2VuZXJhbCBpbmZvcm1hdGlvbiBmb3IgZG9tYWluIDA6CihYRU4pICAgICByZWZjbnQ9MyBkeWlu
Zz0wIHBhdXNlX2NvdW50PTAKKFhFTikgICAgIG5yX3BhZ2VzPTE4NzUzOSB4ZW5oZWFwX3BhZ2Vz
PTYgc2hhcmVkX3BhZ2VzPTAgcGFnZWRfcGFnZXM9MCBkaXJ0eV9jcHVzPXsxLTN9IG1heF9wYWdl
cz0xODgxNDcKKFhFTikgICAgIGhhbmRsZT0wMDAwMDAwMC0wMDAwLTAwMDAtMDAwMC0wMDAwMDAw
MDAwMDAgdm1fYXNzaXN0PTAwMDAwMDBkCihYRU4pIFJhbmdlc2V0cyBiZWxvbmdpbmcgdG8gZG9t
YWluIDA6CihYRU4pICAgICBJL08gUG9ydHMgIHsgMC0xZiwgMjItM2YsIDQ0LTYwLCA2Mi05Ziwg
YTItNDA3LCA0MGMtY2ZiLCBkMDAtMjA0ZiwgMjA1OC1mZmZmIH0KKFhFTikgICAgIEludGVycnVw
dHMgeyAwLTI3OSB9CihYRU4pICAgICBJL08gTWVtb3J5IHsgMC1mZWJmZiwgZmVjMDEtZmVkZmYs
IGZlZTAxLWZmZmZmZmZmZmZmZmZmZmYgfQooWEVOKSBNZW1vcnkgcGFnZXMgYmVsb25naW5nIHRv
IGRvbWFpbiAwOgooWEVOKSAgICAgRG9tUGFnZSBsaXN0IHRvbyBsb25nIHRvIGRpc3BsYXkKKFhF
TikgICAgIFhlblBhZ2UgMDAwMDAwMDAwMDE0ODkxNzogY2FmPWMwMDAwMDAwMDAwMDAwMDIsIHRh
Zj03NDAwMDAwMDAwMDAwMDAyCihYRU4pICAgICBYZW5QYWdlIDAwMDAwMDAwMDAxNDg5MTY6IGNh
Zj1jMDAwMDAwMDAwMDAwMDAxLCB0YWY9NzQwMDAwMDAwMDAwMDAwMQooWEVOKSAgICAgWGVuUGFn
ZSAwMDAwMDAwMDAwMTQ4OTE1OiBjYWY9YzAwMDAwMDAwMDAwMDAwMSwgdGFmPTc0MDAwMDAwMDAw
MDAwMDEKKFhFTikgICAgIFhlblBhZ2UgMDAwMDAwMDAwMDE0ODkxNDogY2FmPWMwMDAwMDAwMDAw
MDAwMDEsIHRhZj03NDAwMDAwMDAwMDAwMDAxCihYRU4pICAgICBYZW5QYWdlIDAwMDAwMDAwMDAw
YWEwZmQ6IGNhZj1jMDAwMDAwMDAwMDAwMDAyLCB0YWY9NzQwMDAwMDAwMDAwMDAwMgooWEVOKSAg
ICAgWGVuUGFnZSAwMDAwMDAwMDAwMTNmNDI4OiBjYWY9YzAwMDAwMDAwMDAwMDAwMiwgdGFmPTc0
MDAwMDAwMDAwMDAwMDIKKFhFTikgVkNQVSBpbmZvcm1hdGlvbiBhbmQgY2FsbGJhY2tzIGZvciBk
b21haW4gMDoKKFhFTikgICAgIFZDUFUwOiBDUFUwIFtoYXM9Rl0gcG9sbD0wIHVwY2FsbF9wZW5k
ID0gMDEsIHVwY2FsbF9tYXNrID0gMDAgZGlydHlfY3B1cz17fSBjcHVfYWZmaW5pdHk9ezB9CihY
RU4pICAgICBwYXVzZV9jb3VudD0wIHBhdXNlX2ZsYWdzPTAKKFhFTikgICAgIE5vIHBlcmlvZGlj
IHRpbWVyCihYRU4pICAgICBWQ1BVMTogQ1BVMSBbaGFzPUZdIHBvbGw9MCB1cGNhbGxfcGVuZCA9
IDAwLCB1cGNhbGxfbWFzayA9IDAwIGRpcnR5X2NwdXM9ezF9IGNwdV9hZmZpbml0eT17MX0KKFhF
TikgICAgIHBhdXNlX2NvdW50PTAgcGF1c2VfZmxhZ3M9MQooWEVOKSAgICAgTm8gcGVyaW9kaWMg
dGltZXIKKFhFTikgICAgIFZDUFUyOiBDUFUyIFtoYXM9Rl0gcG9sbD0wIHVwY2FsbF9wZW5kID0g
MDAsIHVwY2FsbF9tYXNrID0gMDAgZGlydHlfY3B1cz17Mn0gY3B1X2FmZmluaXR5PXsyfQooWEVO
KSAgICAgcGF1c2VfY291bnQ9MCBwYXVzZV9mbGFncz0xCihYRU4pICAgICBObyBwZXJpb2RpYyB0
aW1lcgooWEVOKSAgICAgVkNQVTM6IENQVTMgW2hhcz1GXSBwb2xsPTAgdXBjYWxsX3BlbmQgPSAw
MCwgdXBjYWxsX21hc2sgPSAwMCBkaXJ0eV9jcHVzPXszfSBjcHVfYWZmaW5pdHk9ezN9CihYRU4p
ICAgICBwYXVzZV9jb3VudD0wIHBhdXNlX2ZsYWdzPTEKKFhFTikgICAgIE5vIHBlcmlvZGljIHRp
bWVyCihYRU4pIE5vdGlmeWluZyBndWVzdCAwOjAgKHZpcnEgMSwgcG9ydCA1LCBzdGF0IDAvMC8t
MSkKKFhFTikgTm90aWZ5aW5nIGd1ZXN0IDA6MSAodmlycSAxLCBwb3J0IDExLCBzdGF0IDAvMC8w
KQooWEVOKSBOb3RpZnlpbmcgZ3Vlc3QgMDoyICh2aXJxIDEsIHBvcnQgMTcsIHN0YXQgMC8wLzAp
CihYRU4pIE5vdGlmeWluZyBndWVzdCAwOjMgKHZpcnEgMSwgcG9ydCAyMywgc3RhdCAwLzAvMCkK
CihYRU4pIFNoYXJlZCBmcmFtZXMgMCAtLSBTYXZlZCBmcmFtZXMgMApbICAxMTYuMDc1OTIxXSB2
Y3B1IDEKKFhFTikgW3I6IGR1bXAgcnVuIHF1ZXVlc10KWyAgMTE2LjA3NTkyMl0gIChYRU4pIHNj
aGVkX3NtdF9wb3dlcl9zYXZpbmdzOiBkaXNhYmxlZAooWEVOKSBOT1c9MHgwMDAwMDAxQjA3RERD
RTAxCihYRU4pIElkbGUgY3B1cG9vbDoKKFhFTikgU2NoZWR1bGVyOiBTTVAgQ3JlZGl0IFNjaGVk
dWxlciAoY3JlZGl0KQogKFhFTikgaW5mbzoKKFhFTikgCW5jcHVzICAgICAgICAgICAgICA9IDQK
KFhFTikgCW1hc3RlciAgICAgICAgICAgICA9IDAKKFhFTikgCWNyZWRpdCAgICAgICAgICAgICA9
IDQwMAooWEVOKSAJY3JlZGl0IGJhbGFuY2UgICAgID0gLTEwCihYRU4pIAl3ZWlnaHQgICAgICAg
ICAgICAgPSAyNTYKKFhFTikgCXJ1bnFfc29ydCAgICAgICAgICA9IDEzMzMKKFhFTikgCWRlZmF1
bHQtd2VpZ2h0ICAgICA9IDI1NgooWEVOKSAJdHNsaWNlICAgICAgICAgICAgID0gMTBtcwooWEVO
KSAJcmF0ZWxpbWl0ICAgICAgICAgID0gMTAwMHVzCihYRU4pIAljcmVkaXRzIHBlciBtc2VjICAg
PSAxMAooWEVOKSAJdGlja3MgcGVyIHRzbGljZSAgID0gMQooWEVOKSAJbWlncmF0aW9uIGRlbGF5
ICAgID0gMHVzCjA6IG1hc2tlZD0wIHBlbmQoWEVOKSBpZGxlcnM6IDAwMGMKKFhFTikgYWN0aXZl
IHZjcHVzOgooWEVOKSAJICAxOiBpbmc9MSBldmVudF9zZWwgWzAuMV0gcHJpPS0yIGZsYWdzPTAg
Y3B1PTEgY3JlZGl0PS02MTIgW3c9MjU2XQowMDAwMDAwMDAwMDAwMDAxKFhFTikgQ3B1cG9vbCAw
OgooWEVOKSBTY2hlZHVsZXI6IFNNUCBDcmVkaXQgU2NoZWR1bGVyIChjcmVkaXQpCihYRU4pIGlu
Zm86CihYRU4pIAluY3B1cyAgICAgICAgICAgICAgPSA0CihYRU4pIAltYXN0ZXIgICAgICAgICAg
ICAgPSAwCihYRU4pIAljcmVkaXQgICAgICAgICAgICAgPSA0MDAKKFhFTikgCWNyZWRpdCBiYWxh
bmNlICAgICA9IC0xMAooWEVOKSAJd2VpZ2h0ICAgICAgICAgICAgID0gMjU2CihYRU4pIAlydW5x
X3NvcnQgICAgICAgICAgPSAxMzMzCihYRU4pIAlkZWZhdWx0LXdlaWdodCAgICAgPSAyNTYKKFhF
TikgCXRzbGljZSAgICAgICAgICAgICA9IDEwbXMKKFhFTikgCXJhdGVsaW1pdCAgICAgICAgICA9
IDEwMDB1cwooWEVOKSAJY3JlZGl0cyBwZXIgbXNlYyAgID0gMTAKKFhFTikgCXRpY2tzIHBlciB0
c2xpY2UgICA9IDEKKFhFTikgCW1pZ3JhdGlvbiBkZWxheSAgICA9IDB1cwoKKFhFTikgaWRsZXJz
OiAwMDBjCihYRU4pIGFjdGl2ZSB2Y3B1czoKKFhFTikgCSAgMTogWzAuMV0gcHJpPS0yIGZsYWdz
PTAgY3B1PTEgY3JlZGl0PS0xMTU3IFt3PTI1Nl0KWyAgMTE2LjExMDE0N10gIChYRU4pIENQVVsw
MF0gICBzb3J0PTEzMzMsIHNpYmxpbmc9MDAwMSwgMTogbWFza2VkPTAgcGVuZGNvcmU9MDAwZgoo
WEVOKSAJcnVuOiBbMzI3NjcuMF0gcHJpPTAgZmxhZ3M9MCBjcHU9MAooWEVOKSAJICAxOiBbMC4w
XSBwcmk9MCBmbGFncz0wIGNwdT0wIGNyZWRpdD0tNDExIFt3PTI1Nl0KaW5nPTAgZXZlbnRfc2Vs
IChYRU4pIENQVVswMV0gIHNvcnQ9MTMzMywgc2libGluZz0wMDAyLCBjb3JlPTAwMGYKKFhFTikg
CXJ1bjogWzAuMV0gcHJpPS0yIGZsYWdzPTAgY3B1PTEgY3JlZGl0PS0xNDI5IFt3PTI1Nl0KKFhF
TikgCSAgMTogWzMyNzY3LjFdIHByaT0tNjQgZmxhZ3M9MCBjcHU9MQooWEVOKSBDUFVbMDJdIDAw
MDAwMDAwMDAwMDAwMDAgc29ydD0xMzMzLCBzaWJsaW5nPTAwMDQsIApjb3JlPTAwMGYKKFhFTikg
CXJ1bjogWyAgMTE2LjIxNDQwNV0gIFszMjc2Ny4yXSBwcmk9LTY0IGZsYWdzPTAgY3B1PTIKIChY
RU4pIENQVVswM10gMjogbWFza2VkPTEgcGVuZCBzb3J0PTEzMzMsIHNpYmxpbmc9MDAwOCwgaW5n
PTEgZXZlbnRfc2VsIGNvcmU9MDAwZgooWEVOKSAJcnVuOiAwMDAwMDAwMDAwMDAwMDAxWzMyNzY3
LjNdIHByaT0tNjQgZmxhZ3M9MCBjcHU9MwoKKFhFTikgW3M6IGR1bXAgc29mdHRzYyBzdGF0c10K
WyAgMTE2LjI1NTU3Ml0gIChYRU4pIFRTQyBtYXJrZWQgYXMgcmVsaWFibGUsIHdhcnAgPSAwIChj
b3VudD0yKQogKFhFTikgTm8gZG9tYWlucyBoYXZlIGVtdWxhdGVkIFRTQwozOiBtYXNrZWQ9MSBw
ZW5kKFhFTikgW3Q6IGRpc3BsYXkgbXVsdGktY3B1IGNsb2NrIGluZm9dCmluZz0xIGV2ZW50X3Nl
bCAoWEVOKSBTeW5jZWQgc3RpbWUgc2tldzogbWF4PTI2bnMgYXZnPTI2bnMgc2FtcGxlcz0xIGN1
cnJlbnQ9MjZucwooWEVOKSBTeW5jZWQgY3ljbGVzIHNrZXc6IG1heD0xNjQgYXZnPTE2NCBzYW1w
bGVzPTEgY3VycmVudD0xNjQKMDAwMDAwMDAwMDAwMDAwMShYRU4pIFt1OiBkdW1wIG51bWEgaW5m
b10KCihYRU4pICd1JyBwcmVzc2VkIC0+IGR1bXBpbmcgbnVtYSBpbmZvIChub3ctMHgxQjoxNTYw
MzcyNCkKWyAgMTE2LjI5NzMyOV0gIChYRU4pIGlkeDAgLT4gTk9ERTAgc3RhcnQtPjAgc2l6ZS0+
MTM2OTYwMCBmcmVlLT44MTQxOTcKIChYRU4pIHBoeXNfdG9fbmlkKDAwMDAwMDAwMDAwMDEwMDAp
IC0+IDAgc2hvdWxkIGJlIDAKCihYRU4pIENQVTAgLT4gTk9ERTAKKFhFTikgQ1BVMSAtPiBOT0RF
MAooWEVOKSBDUFUyIC0+IE5PREUwCihYRU4pIENQVTMgLT4gTk9ERTAKKFhFTikgTWVtb3J5IGxv
Y2F0aW9uIG9mIGVhY2ggZG9tYWluOgooWEVOKSBEb21haW4gMCAodG90YWw6IDE4NzUzOSk6Clsg
IDExNi4zMzUwMDZdIHAoWEVOKSAgICAgTm9kZSAwOiAxODc1MzkKZW5kaW5nOgooWEVOKSBbdjog
ZHVtcCBJbnRlbCdzIFZNQ1NdClsgIDExNi4zMzUwMDZdICAoWEVOKSAqKioqKioqKioqKiBWTUNT
IEFyZWFzICoqKioqKioqKioqKioqCihYRU4pICoqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqCiAgKFhFTikgW3o6IHByaW50IGlvYXBpYyBpbmZvXQowMDAwMDAwMDAwMDAwMDAw
KFhFTikgbnVtYmVyIG9mIE1QIElSUSBzb3VyY2VzOiAxNS4KKFhFTikgbnVtYmVyIG9mIElPLUFQ
SUMgIzIgcmVnaXN0ZXJzOiAyNC4KKFhFTikgdGVzdGluZyB0aGUgSU8gQVBJQy4uLi4uLi4uLi4u
Li4uLi4uLi4uLi4uCiAoWEVOKSBJTyBBUElDICMyLi4uLi4uCihYRU4pIC4uLi4gcmVnaXN0ZXIg
IzAwOiAwMjAwMDAwMAooWEVOKSAuLi4uLi4uICAgIDogcGh5c2ljYWwgQVBJQyBpZDogMDIKKFhF
TikgLi4uLi4uLiAgICA6IERlbGl2ZXJ5IFR5cGU6IDAKKFhFTikgLi4uLi4uLiAgICA6IExUUyAg
ICAgICAgICA6IDAKMDAwMDAwMDAwMDAwMDAwMChYRU4pIC4uLi4gcmVnaXN0ZXIgIzAxOiAwMDE3
MDAyMAooWEVOKSAuLi4uLi4uICAgICA6IG1heCByZWRpcmVjdGlvbiBlbnRyaWVzOiAwMDE3CihY
RU4pIC4uLi4uLi4gICAgIDogUFJRIGltcGxlbWVudGVkOiAwCihYRU4pIC4uLi4uLi4gICAgIDog
SU8gQVBJQyB2ZXJzaW9uOiAwMDIwCihYRU4pIC4uLi4gSVJRIHJlZGlyZWN0aW9uIHRhYmxlOgoo
WEVOKSAgTlIgTG9nIFBoeSBNYXNrIFRyaWcgSVJSIFBvbCBTdGF0IERlc3QgRGVsaSBWZWN0OiAg
IAogKFhFTikgIDAwIDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAgICAwICAgIDAgICAgMDAK
MDAwMDAwMDAwMDAwMDAwMChYRU4pICAwMSAwMDAgMDAgIDAgICAgMCAgICAwICAgMCAgIDAgICAg
MSAgICAxICAgIDM4CiAoWEVOKSAgMDIgMDAwIDAwICAwICAgIDAgICAgMCAgIDAgICAwICAgIDEg
ICAgMSAgICBGMAowMDAwMDAwMDAwMDAwMDAwKFhFTikgIDAzIDAwMCAwMCAgMCAgICAwICAgIDAg
ICAwICAgMCAgICAxICAgIDEgICAgNDAKIChYRU4pICAwNCAwMDAgMDAgIDAgICAgMCAgICAwICAg
MCAgIDAgICAgMSAgICAxICAgIDQ4CjAwMDAwMDAwMDAwMDAwMDAoWEVOKSAgMDUgMDAwIDAwICAw
ICAgIDAgICAgMCAgIDAgICAwICAgIDEgICAgMSAgICA1MAogKFhFTikgIDA2IDAwMCAwMCAgMCAg
ICAwICAgIDAgICAwICAgMCAgICAxICAgIDEgICAgNTgKMDAwMDAwMDAwMDAwMDAwMChYRU4pICAw
NyAwMDAgMDAgIDAgICAgMCAgICAwICAgMCAgIDAgICAgMSAgICAxICAgIDYwCiAoWEVOKSAgMDgg
MDAwIDAwICAwICAgIDAgICAgMCAgIDAgICAwICAgIDEgICAgMSAgICA2OAowMDAwMDAwMDAwMDAw
MDAwKFhFTikgIDA5IDAwMCAwMCAgMCAgICAxICAgIDAgICAwICAgMCAgICAxICAgIDEgICAgNzAK
IChYRU4pICAwYSAwMDAgMDAgIDAgICAgMCAgICAwICAgMCAgIDAgICAgMSAgICAxICAgIDc4CjAw
MDAwMDAwMDAwMDAwMDAoWEVOKSAgMGIgMDAwIDAwICAwICAgIDAgICAgMCAgIDAgICAwICAgIDEg
ICAgMSAgICA4OAoKKFhFTikgIDBjIDAwMCAwMCAgMCAgICAwICAgIDAgICAwICAgMCAgICAxICAg
IDEgICAgOTAKWyAgMTE2LjQ4NDYzOV0gIChYRU4pICAwZCAwMDAgMDAgIDEgICAgMCAgICAwICAg
MCAgIDAgICAgMSAgICAxICAgIDk4CiAgKFhFTikgIDBlIDAwMCAwMCAgMCAgICAwICAgIDAgICAw
ICAgMCAgICAxICAgIDEgICAgQTAKMDAwMDAwMDAwMDAwMDAwMChYRU4pICAwZiAwMDAgMDAgIDAg
ICAgMCAgICAwICAgMCAgIDAgICAgMSAgICAxICAgIEE4CiAoWEVOKSAgMTAgMDAwIDAwICAwICAg
IDEgICAgMCAgIDEgICAwICAgIDEgICAgMSAgICBCMAowMDAwMDAwMDAwMDAwMDAwKFhFTikgIDEx
IDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAgICAwICAgIDAgICAgMDAKIChYRU4pICAxMiAw
MDAgMDAgIDEgICAgMSAgICAwICAgMSAgIDAgICAgMSAgICAxICAgIEI4CjAwMDAwMDAwMDAwMDAw
MDAoWEVOKSAgMTMgMDAwIDAwICAxICAgIDEgICAgMCAgIDEgICAwICAgIDEgICAgMSAgICBDMAog
KFhFTikgIDE0IDAwMCAwMCAgMSAgICAxICAgIDAgICAxICAgMCAgICAxICAgIDEgICAgRDgKMDAw
MDAwMDAwMDAwMDAwMChYRU4pICAxNSAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAg
ICAwICAgIDAwCiAoWEVOKSAgMTYgMDAwIDAwICAxICAgIDEgICAgMCAgIDEgICAwICAgIDEgICAg
MSAgICBEMAowMDAwMDAwMDAwMDAwMDAwKFhFTikgIDE3IDAwMCAwMCAgMCAgICAxICAgIDAgICAx
ICAgMCAgICAxICAgIDEgICAgQzgKIChYRU4pIFVzaW5nIHZlY3Rvci1iYXNlZCBpbmRleGluZwoo
WEVOKSBJUlEgdG8gcGluIG1hcHBpbmdzOgowMDAwMDAwMDAwMDAwMDAwKFhFTikgSVJRMjQwIC0+
IDA6MgooWEVOKSBJUlE1NiAtPiAwOjEKKFhFTikgSVJRNjQgLT4gMDozCihYRU4pIElSUTcyIC0+
IDA6NAooWEVOKSBJUlE4MCAtPiAwOjUKKFhFTikgSVJRODggLT4gMDo2CihYRU4pIElSUTk2IC0+
IDA6NwooWEVOKSBJUlExMDQgLT4gMDo4CihYRU4pIElSUTExMiAtPiAwOjkKKFhFTikgSVJRMTIw
IC0+IDA6MTAKKFhFTikgSVJRMTM2IC0+IDA6MTEKKFhFTikgSVJRMTQ0IC0+IDA6MTIKKFhFTikg
SVJRMTUyIC0+IDA6MTMKKFhFTikgSVJRMTYwIC0+IDA6MTQKKFhFTikgSVJRMTY4IC0+IDA6MTUK
IChYRU4pIElSUTE3NiAtPiAwOjE2CihYRU4pIElSUTE4NCAtPiAwOjE4CihYRU4pIElSUTE5MiAt
PiAwOjE5CihYRU4pIElSUTIxNiAtPiAwOjIwCihYRU4pIElSUTIwOCAtPiAwOjIyCihYRU4pIElS
UTIwMCAtPiAwOjIzCihYRU4pIC4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLiBk
b25lLgowMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTE2LjYxNzYyNF0gICAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDExNi42MzE1ODZdICAgIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMApbICAxMTYuNjQ1NTQ2XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTE2LjY1OTUw
N10gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDExNi42NzM0NjhdICAgIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMApbICAxMTYuNjg3NDI5XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDA0MDAwODIwODIKWyAgMTE2
LjcwMTM5MF0gICAgClsgIDExNi43MDQ3MDJdIGdsb2JhbCBtYXNrOgpbICAxMTYuNzA0NzAyXSAg
ICBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZm
ZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYKWyAgMTE2LjcxOTkxNV0gICAgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZm
ZmZmZmZmClsgIDExNi43MzM4NzZdICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZgpbICAxMTYuNzQ3
ODM4XSAgICBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYKWyAgMTE2Ljc2MTc5OF0gICAgZmZmZmZmZmZm
ZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmClsgIDExNi43NzU3NjBdICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZm
ZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZgpbICAx
MTYuNzg5NzIxXSAgICBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZm
ZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYKWyAgMTE2LjgwMzY4MV0gICAgZmZm
ZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZm
ZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZjMDAwMTA0MTA1ClsgIDExNi44MTc2NDJdICAgIApbICAxMTYuODIwOTU0XSBnbG9i
YWxseSB1bm1hc2tlZDoKWyAgMTE2LjgyMDk1NF0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsg
IDExNi44MzY3MDRdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxMTYuODUwNjY1XSAgICAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTE2Ljg2NDYyN10gICAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwClsgIDExNi44Nzg1ODddICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxMTYuODkyNTQ5
XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTE2LjkwNjUxMF0gICAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwClsgIDExNi45MjA0NzJdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDQwMDA4MjA4MgpbICAxMTYu
OTM0NDMxXSAgICAKWyAgMTE2LjkzNzc0M10gbG9jYWwgY3B1MSBtYXNrOgpbICAxMTYuOTM3NzQz
XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTE2Ljk1MzMxNF0gICAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwClsgIDExNi45NjcyNzVdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxMTYu
OTgxMjM2XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTE2Ljk5NTE5N10gICAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwClsgIDExNy4wMDkxNThdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApb
ICAxMTcuMDIzMTE5XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTE3LjAzNzA4MF0gICAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAxZjgwClsgIDExNy4wNTEwNDFdICAgIApbICAxMTcuMDU0MzUyXSBs
b2NhbGx5IHVubWFza2VkOgpbICAxMTcuMDU0MzUzXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAK
WyAgMTE3LjA3MDAxM10gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDExNy4wODM5NzRdICAg
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxMTcuMDk3OTcxXSAgICAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAKWyAgMTE3LjExMTkzM10gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDExNy4xMjU4
OTNdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxMTcuMTM5ODU1XSAgICAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAKWyAgMTE3LjE1MzgxNV0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDgwClsgIDEx
Ny4xNjc3NzZdICAgIApbICAxMTcuMTcxMDg3XSBwZW5kaW5nIGxpc3Q6ClsgIDExNy4xNzQxMzFd
ICAgMDogZXZlbnQgMSAtPiBpcnEgMjcyIGxvY2FsbHktbWFza2VkClsgIDExNy4xNzkxNDNdICAg
MTogZXZlbnQgNyAtPiBpcnEgMjc4ClsgIDExNy4xODI4MTFdICAgMjogZXZlbnQgMTMgLT4gaXJx
IDI4NCBsb2NhbGx5LW1hc2tlZApbICAxMTcuMTg3OTEyXSAgIDM6IGV2ZW50IDE5IC0+IGlycSAy
OTAgbG9jYWxseS1tYXNrZWQKWyAgMTE3LjE5MzAxNF0gICAwOiBldmVudCAzNCAtPiBpcnEgMzAy
IGxvY2FsbHktbWFza2VkClsgIDExNy4xOTgxMzVdIApbICAxMTcuMTk4MTM2XSB2Y3B1IDAKWyAg
MTE3LjE5ODEzN10gICAwOiBtYXNrZWQ9MCBwZW5kaW5nPTEgZXZlbnRfc2VsIDAwMDAwMDAwMDAw
MDAwMDEKWyAgMTE3LjIwMzM5N10gICAxOiBtYXNrZWQ9MCBwZW5kaW5nPTAgZXZlbnRfc2VsIDAw
MDAwMDAwMDAwMDAwMDAKWyAgMTE3LjIwOTQ4Ml0gICAyOiBtYXNrZWQ9MSBwZW5kaW5nPTEgZXZl
bnRfc2VsIDAwMDAwMDAwMDAwMDAwMDEKWyAgMTE3LjIxNTU2N10gICAzOiBtYXNrZWQ9MSBwZW5k
aW5nPTEgZXZlbnRfc2VsIDAwMDAwMDAwMDAwMDAwMDEKWyAgMTE3LjIyMTY1M10gICAKWyAgMTE3
LjIyNzczN10gcGVuZGluZzoKWyAgMTE3LjIyNzczN10gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
ClsgIDExNy4yNDI1OTNdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxMTcuMjU2NTU0XSAg
ICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTE3LjI3MDUxNF0gICAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwClsgIDExNy4yODQ0NzZdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxMTcuMjk4
NDM2XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTE3LjMxMjM5N10gICAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwClsgIDExNy4zMjYzNTldICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDQwMDI4YTAwZQpbICAx
MTcuMzQwMzIwXSAgICAKWyAgMTE3LjM0MzYzMV0gZ2xvYmFsIG1hc2s6ClsgIDExNy4zNDM2MzJd
ICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZm
ZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZm
ZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZgpbICAxMTcuMzU4ODQ1XSAgICBmZmZmZmZmZmZmZmZm
ZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZm
ZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZm
ZmZmZmZmZmYKWyAgMTE3LjM3MjgwNl0gICAgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZm
ZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZm
ZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmClsgIDExNy4z
ODY3NjddICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZm
ZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZm
ZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZgpbICAxMTcuNDAwNzI4XSAgICBmZmZmZmZm
ZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZm
ZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZm
ZmZmZmZmZmZmZmZmZmYKWyAgMTE3LjQxNDY4OF0gICAgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZm
ZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZm
ZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmClsg
IDExNy40Mjg2NTBdICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZm
ZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZm
ZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZgpbICAxMTcuNDQyNjEwXSAgICBm
ZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZm
ZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZm
ZmZmIGZmZmZmZmMwMDAxMDQxMDUKWyAgMTE3LjQ1NjU3Ml0gICAgClsgIDExNy40NTk4ODNdIGds
b2JhbGx5IHVubWFza2VkOgpbICAxMTcuNDU5ODg0XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAK
WyAgMTE3LjQ3NTYzM10gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDExNy40ODk1OTVdICAg
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxMTcuNTAzNTU2XSAgICAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAKWyAgMTE3LjUxNzUxN10gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDExNy41MzE0
NzddICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxMTcuNTQ1NDM4XSAgICAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAKWyAgMTE3LjU1OTQwMF0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwNDAwMjhhMDBhClsgIDEx
Ny41NzMzNjFdICAgIApbICAxMTcuNTc2NjcxXSBsb2NhbCBjcHUwIG1hc2s6ClsgIDExNy41NzY2
NzJdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxMTcuNTkyMjQzXSAgICAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAKWyAgMTE3LjYwNjIwNV0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDEx
Ny42MjAxNjZdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxMTcuNjM0MTI3XSAgICAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAKWyAgMTE3LjY0ODA4N10gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
ClsgIDExNy42NjIwNDldICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxMTcuNjc2MDA5XSAg
ICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIGZmZmZmZmZmZmUwMDAwN2YKWyAgMTE3LjY4OTk3MV0gICAgClsgIDExNy42OTMyODFd
IGxvY2FsbHkgdW5tYXNrZWQ6ClsgIDExNy42OTMyODJdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MApbICAxMTcuNzA4OTQyXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTE3LjcyMjkwNF0g
ICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDExNy43MzY4NjRdICAgIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMApbICAxMTcuNzUwODI2XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTE3Ljc2
NDc4N10gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDExNy43Nzg3NDddICAgIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMApbICAxMTcuNzkyNzA5XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDA0MDAwMDAwMGEKWyAg
MTE3LjgwNjY3MF0gICAgClsgIDExNy44MDk5ODFdIHBlbmRpbmcgbGlzdDoKWyAgMTE3LjgxMzAy
OV0gICAwOiBldmVudCAxIC0+IGlycSAyNzIKWyAgMTE3LjgxNjY5NF0gICAwOiBldmVudCAyIC0+
IGlycSAyNzMgZ2xvYmFsbHktbWFza2VkClsgIDExNy44MjE3OTRdICAgMDogZXZlbnQgMyAtPiBp
cnEgMjc0ClsgIDExNy44MjU0NjRdICAgMjogZXZlbnQgMTMgLT4gaXJxIDI4NCBsb2NhbGx5LW1h
c2tlZApbICAxMTcuODMwNTY2XSAgIDI6IGV2ZW50IDE1IC0+IGlycSAyODYgbG9jYWxseS1tYXNr
ZWQKWyAgMTE3LjgzNTY2Nl0gICAzOiBldmVudCAxOSAtPiBpcnEgMjkwIGxvY2FsbHktbWFza2Vk
ClsgIDExNy44NDA3NjZdICAgMzogZXZlbnQgMjEgLT4gaXJxIDI5MiBsb2NhbGx5LW1hc2tlZApb
ICAxMTcuODQ1ODY4XSAgIDA6IGV2ZW50IDM0IC0+IGlycSAzMDIKWyAgMTE3Ljg0OTY0N10gClsg
IDExNy44NDk2NDhdIHZjcHUgMgpbICAxMTcuODQ5NjQ4XSAgIDA6IG1hc2tlZD0wIHBlbmRpbmc9
MCBldmVudF9zZWwgMDAwMDAwMDAwMDAwMDAwMApbICAxMTcuODU0OTA4XSAgIDE6IG1hc2tlZD0w
IHBlbmRpbmc9MCBldmVudF9zZWwgMDAwMDAwMDAwMDAwMDAwMApbICAxMTcuODYwOTkyXSAgIDI6
IG1hc2tlZD0wIHBlbmRpbmc9MSBldmVudF9zZWwgMDAwMDAwMDAwMDAwMDAwMQpbICAxMTcuODY3
MDc5XSAgIDM6IG1hc2tlZD0xIHBlbmRpbmc9MSBldmVudF9zZWwgMDAwMDAwMDAwMDAwMDAwMQpb
ICAxMTcuODczMTYzXSAgIApbICAxMTcuODc5MjQ4XSBwZW5kaW5nOgpbICAxMTcuODc5MjQ5XSAg
ICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTE3Ljg5NDEwNV0gICAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwClsgIDExNy45MDgwNjZdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxMTcuOTIy
MDI2XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTE3LjkzNTk4OF0gICAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwClsgIDExNy45NDk5NDhdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAx
MTcuOTYzOTEwXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTE3Ljk3Nzg3MV0gICAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMjhlMDA0ClsgIDExNy45OTE4MzJdICAgIApbICAxMTcuOTk1MTQzXSBnbG9i
YWwgbWFzazoKWyAgMTE3Ljk5NTE0M10gICAgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZm
ZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZm
ZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmClsgIDExOC4w
MTAzNTddICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZm
ZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZm
ZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZgpbICAxMTguMDI0MzE4XSAgICBmZmZmZmZm
ZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZm
ZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZm
ZmZmZmZmZmZmZmZmZmYKWyAgMTE4LjAzODI3OV0gICAgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZm
ZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZm
ZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmClsg
IDExOC4wNTIyMzldICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZm
ZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZm
ZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZgpbICAxMTguMDY2MjAwXSAgICBm
ZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZm
ZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZm
ZmZmIGZmZmZmZmZmZmZmZmZmZmYKWyAgMTE4LjA4MDE2MV0gICAgZmZmZmZmZmZmZmZmZmZmZiBm
ZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZm
ZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZm
ZmZmClsgIDExOC4wOTQxNTldICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBm
ZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZm
ZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmYzAwMDEwNDEwNQpbICAxMTguMTA4MTE5
XSAgICAKWyAgMTE4LjExMTQzMV0gZ2xvYmFsbHkgdW5tYXNrZWQ6ClsgIDExOC4xMTE0MzJdICAg
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxMTguMTI3MTgyXSAgICAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAKWyAgMTE4LjE0MTE0M10gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDExOC4xNTUx
MDNdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxMTguMTY5MDY0XSAgICAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAKWyAgMTE4LjE4MzAyNl0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDEx
OC4xOTY5ODZdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxMTguMjEwOTQ3XSAgICAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAyOGEwMDAKWyAgMTE4LjIyNDkwOF0gICAgClsgIDExOC4yMjgyMjBdIGxvY2Fs
IGNwdTIgbWFzazoKWyAgMTE4LjIyODIyMV0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDEx
OC4yNDM3OTFdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxMTguMjU3NzUzXSAgICAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAKWyAgMTE4LjI3MTcxM10gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
ClsgIDExOC4yODU2NzRdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxMTguMjk5NjM2XSAg
ICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTE4LjMxMzU5Nl0gICAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwClsgIDExOC4zMjc1NThdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDA3ZTAwMApbICAxMTguMzQx
NTE4XSAgICAKWyAgMTE4LjM0NDgyOV0gbG9jYWxseSB1bm1hc2tlZDoKWyAgMTE4LjM0NDgyOV0g
ICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDExOC4zNjA0OTFdICAgIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMApbICAxMTguMzc0NDUxXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTE4LjM4
ODQxMl0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDExOC40MDIzNzNdICAgIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMApbICAxMTguNDE2MzM0XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAg
MTE4LjQzMDI5Nl0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDExOC40NDQyNTZdICAgIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwYTAwMApbICAxMTguNDU4MjE4XSAgICAKWyAgMTE4LjQ2MTUyOV0gcGVu
ZGluZyBsaXN0OgpbICAxMTguNDY0NTczXSAgIDA6IGV2ZW50IDIgLT4gaXJxIDI3MyBnbG9iYWxs
eS1tYXNrZWQgbG9jYWxseS1tYXNrZWQKWyAgMTE4LjQ3MTAxNV0gICAyOiBldmVudCAxMyAtPiBp
cnEgMjg0ClsgIDExOC40NzQ3NzVdICAgMjogZXZlbnQgMTQgLT4gaXJxIDI4NSBnbG9iYWxseS1t
YXNrZWQKWyAgMTE4LjQ3OTk2NV0gICAyOiBldmVudCAxNSAtPiBpcnEgMjg2ClsgIDExOC40ODM3
MjNdICAgMzogZXZlbnQgMTkgLT4gaXJxIDI5MCBsb2NhbGx5LW1hc2tlZApbICAxMTguNDg4ODI1
XSAgIDM6IGV2ZW50IDIxIC0+IGlycSAyOTIgbG9jYWxseS1tYXNrZWQKWyAgMTE4LjQ5Mzk0Nl0g
ClsgIDExOC40OTM5NDddIHZjcHUgMwpbICAxMTguNDkzOTQ4XSAgIDA6IG1hc2tlZD0wIHBlbmRp
bmc9MCBldmVudF9zZWwgMDAwMDAwMDAwMDAwMDAwMApbICAxMTguNDk5MjA3XSAgIDE6IG1hc2tl
ZD0wIHBlbmRpbmc9MCBldmVudF9zZWwgMDAwMDAwMDAwMDAwMDAwMApbICAxMTguNTA1MjkyXSAg
IDI6IG1hc2tlZD0wIHBlbmRpbmc9MCBldmVudF9zZWwgMDAwMDAwMDAwMDAwMDAwMApbICAxMTgu
NTExMzc3XSAgIDM6IG1hc2tlZD0wIHBlbmRpbmc9MSBldmVudF9zZWwgMDAwMDAwMDAwMDAwMDAw
MQpbICAxMTguNTE3NDYzXSAgIApbICAxMTguNTIzNTQ4XSBwZW5kaW5nOgpbICAxMTguNTIzNTQ5
XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTE4LjUzODQwM10gICAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwClsgIDExOC41NTIzNjVdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxMTgu
NTY2MzI2XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTE4LjU4MDI4Nl0gICAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwClsgIDExOC41OTQyNDhdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApb
ICAxMTguNjA4MjA4XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTE4LjYyMjE3MF0gICAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMzg0MDA0ClsgIDExOC42MzYxMzBdICAgIApbICAxMTguNjM5NDQyXSBn
bG9iYWwgbWFzazoKWyAgMTE4LjYzOTQ0M10gICAgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZm
ZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZm
IGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmClsgIDEx
OC42NTQ2NTZdICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZm
ZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZm
IGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZgpbICAxMTguNjY4NjE2XSAgICBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZm
ZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZm
IGZmZmZmZmZmZmZmZmZmZmYKWyAgMTE4LjY4MjU3N10gICAgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZm
ZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZm
ClsgIDExOC42OTY1MzhdICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZm
ZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZgpbICAxMTguNzEwNDk5XSAg
ICBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZm
ZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYKWyAgMTE4LjcyNDQ2MV0gICAgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZm
ZmZmZmZmClsgIDExOC43Mzg0MjFdICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmYzAwMDEwNDEwNQpbICAxMTguNzUy
MzgyXSAgICAKWyAgMTE4Ljc1NTY5NV0gZ2xvYmFsbHkgdW5tYXNrZWQ6ClsgIDExOC43NTU2OTZd
ICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxMTguNzcxNDQ0XSAgICAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAKWyAgMTE4Ljc4NTQwNV0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDExOC43
OTkzNjddICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxMTguODEzMzI4XSAgICAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAKWyAgMTE4LjgyNzI4OF0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsg
IDExOC44NDEyNTBdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxMTguODU1MjEwXSAgICAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAyODAwMDAKWyAgMTE4Ljg2OTE3Ml0gICAgClsgIDExOC44NzI0ODJdIGxv
Y2FsIGNwdTMgbWFzazoKWyAgMTE4Ljg3MjQ4M10gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsg
IDExOC44ODgwNTRdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxMTguOTAyMDE1XSAgICAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTE4LjkxNTk3N10gICAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwClsgIDExOC45Mjk5MzddICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxMTguOTQzODk5
XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTE4Ljk1Nzg1OV0gICAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwClsgIDExOC45NzE4MjBdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMWY4MDAwMApbICAxMTgu
OTg1NzgxXSAgICAKWyAgMTE4Ljk4OTA5M10gbG9jYWxseSB1bm1hc2tlZDoKWyAgMTE4Ljk4OTA5
M10gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDExOS4wMDQ3NTNdICAgIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMApbICAxMTkuMDE4NzE1XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTE5
LjAzMjY3Nl0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDExOS4wNDY2MzddICAgIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMApbICAxMTkuMDYwNTk3XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAK
WyAgMTE5LjA3NDU1OF0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDExOS4wODg1MTldICAg
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDI4MDAwMApbICAxMTkuMTAyNTE2XSAgICAKWyAgMTE5LjEwNTgyN10g
cGVuZGluZyBsaXN0OgpbICAxMTkuMTA4ODcwXSAgIDA6IGV2ZW50IDIgLT4gaXJxIDI3MyBnbG9i
YWxseS1tYXNrZWQgbG9jYWxseS1tYXNrZWQKWyAgMTE5LjExNTMxNF0gICAyOiBldmVudCAxNCAt
PiBpcnEgMjg1IGdsb2JhbGx5LW1hc2tlZCBsb2NhbGx5LW1hc2tlZApbICAxMTkuMTIxODQ3XSAg
IDM6IGV2ZW50IDE5IC0+IGlycSAyOTAKWyAgMTE5LjEyNTYwNV0gICAzOiBldmVudCAyMCAtPiBp
cnEgMjkxIGdsb2JhbGx5LW1hc2tlZApbICAxMTkuMTMwNzk1XSAgIDM6IGV2ZW50IDIxIC0+IGly
cSAyOTIK
--14dae9399de136edb804c6d6c8a6
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--14dae9399de136edb804c6d6c8a6--


From xen-devel-bounces@lists.xen.org Thu Aug 09 15:22:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 15:22:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzUYc-0007Wl-6i; Thu, 09 Aug 2012 15:21:54 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ben.guthro@gmail.com>) id 1SzUYZ-0007Wd-0y
	for xen-devel@lists.xen.org; Thu, 09 Aug 2012 15:21:51 +0000
X-Env-Sender: ben.guthro@gmail.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1344525691!2982079!1
X-Originating-IP: [209.85.213.45]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
  RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14957 invoked from network); 9 Aug 2012 15:21:33 -0000
Received: from mail-yw0-f45.google.com (HELO mail-yw0-f45.google.com)
	(209.85.213.45)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Aug 2012 15:21:33 -0000
Received: by yhpp34 with SMTP id p34so620849yhp.32
	for <xen-devel@lists.xen.org>; Thu, 09 Aug 2012 08:21:31 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=BR56oS6TKVGe4tGipfnpwE6CVgJZoTIosTSfBkpBDnc=;
	b=jgqzPJm8Jq5xi6OaPCkjHxsrd1qWU2CSwvouuhDTUh32wzWWlAznIAAHV7Qn9nxa6L
	dB69iyzJ8EQ740m+aWirHo67WE0IgewOHG6W6donHu6Hi5id6bM9IK9k1R1W1xXV71n2
	N9Wg2zHoNRsRyk4dtt7aiageiXymbXyPRTCB48KllbXfVLwhGuCs7ncxkGKYlPx3ngM0
	i8QR652AMGqbfioycLEVonVCA42Bnmn+7dgcFrd+e364pwrB4+xMPTPk8MZLtKlFNqTC
	VOD78I2nAHq/Yt1/Lxxpu78SIxfY4NCNOjDRTtc87595l14qU8O4A1BhcehMxlpBoT2d
	/0pg==
MIME-Version: 1.0
Received: by 10.50.159.135 with SMTP id xc7mr1490187igb.1.1344525691186; Thu,
	09 Aug 2012 08:21:31 -0700 (PDT)
Received: by 10.64.33.15 with HTTP; Thu, 9 Aug 2012 08:21:31 -0700 (PDT)
In-Reply-To: <CAOvdn6WN3kxC8Bb9g6MpfLULu47EGSGCtajPc-SFeJ-8VSUQDQ@mail.gmail.com>
References: <CAOvdn6WwgBD-2yf1uu3m9-16=Tb5KUrybXf2KibtnxKbP7YtwA@mail.gmail.com>
	<CAOvdn6UBW-yTOMd3YSf5M9JiHLRivyaKL-qzcKG8epszqaKmGg@mail.gmail.com>
	<20120807163320.GC15053@phenom.dumpdata.com>
	<CAOvdn6XcRGKtJxGwJO8J+ZBJr89wfTLc1woW5rhtMM540H9h1w@mail.gmail.com>
	<CAOvdn6XUoSeGoo7X=AJ1kpDxZB9HZEv+TJ4v-=FHcyR4=CHK-w@mail.gmail.com>
	<502240F302000078000937A6@nat28.tlf.novell.com>
	<CAOvdn6WN3kxC8Bb9g6MpfLULu47EGSGCtajPc-SFeJ-8VSUQDQ@mail.gmail.com>
Date: Thu, 9 Aug 2012 11:21:31 -0400
X-Google-Sender-Auth: d-ZaY-05EF6z53eHxW7sWz4c2AQ
Message-ID: <CAOvdn6Xk1inm1hU55i0phU2qvSWyJLNoyDdn43biBAY2OewPtw@mail.gmail.com>
From: Ben Guthro <ben@guthro.net>
To: Jan Beulich <JBeulich@suse.com>
Content-Type: multipart/mixed; boundary=14dae9399de136edb804c6d6c8a6
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Thomas Goetz <thomas.goetz@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen4.2 S3 regression?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--14dae9399de136edb804c6d6c8a6
Content-Type: text/plain; charset=ISO-8859-1

Attached is a new run for:
- a new boot (pre-s3)
- the first suspend/resume cycle (s3-first)
- the second (failing) suspend/resume cycle (s3-second)



To go into greater detail on the kernel used:

It is a 3.2.23 kernel based on the Ubuntu 12.04 git tree here
http://kernel.ubuntu.com/git?p=ubuntu/ubuntu-precise.git;a=summary

To that, I also have some of Konrad's branches - specifically
/devel/ioperm
/devel/acpi-s3.v7
/stable/misc  (mostly for the microcode fixes)
/stable/for-linus-fixes-3.3
/stable/for-linus-3.3
/devel/ttm.dma_pool.v2.9
/stable/for-x86

On top of that are some more patches specific to our operations; they
are not terribly interesting here, but I can provide them if necessary.


The 3.5 tree I tested with has a similar makeup, with somewhat fewer
branches from Konrad.


On Wed, Aug 8, 2012 at 6:39 AM, Ben Guthro <ben@guthro.net> wrote:
> Thanks for taking the time to reply.
>
> I'm out of the office today, so don't have direct access to the
> machine in question until tomorrow... but I'll do my best to answer
> (inline below) and I'll follow up tomorrow with concrete answers.
>
> On Wed, Aug 8, 2012 at 4:35 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>>>> On 07.08.12 at 22:14, Ben Guthro <ben@guthro.net> wrote:
>>> Any suggestions on how best to chase this down?
>>>
>>> The first S3 suspend/resume cycle works, but the second does not.
>>>
>>> On the second try, I never get any interrupts delivered to ahci.
>>> (at least according to /proc/interrupts)
>>>
>>>
>>> syslog traces from the first (good) and the second (bad) are attached,
>>> as well as the output from the "*" debug Ctrl+a handler in both cases.
>>
>> You should have provided this also for the state before the
>> first suspend. The state after the first resume already looks
>> corrupted (presumably just not as badly):
>
> I'll be able to send this tomorrow.
>
>>
>> (XEN) PCI-MSI interrupt information:
>> (XEN)  MSI    26 vec=71 lowest  edge   assert  log lowest dest=00000001 mask=0/1/-1
>> (XEN)  MSI    27 vec=00  fixed  edge deassert phys lowest dest=00000001 mask=0/1/-1
>>                      ^^
>> (XEN)  MSI    28 vec=29 lowest  edge   assert  log lowest dest=00000001 mask=0/1/-1
>> (XEN)  MSI    29 vec=79 lowest  edge   assert  log lowest dest=00000001 mask=0/1/-1
>> (XEN)  MSI    30 vec=81 lowest  edge   assert  log lowest dest=00000001 mask=0/1/-1
>> (XEN)  MSI    31 vec=99 lowest  edge   assert  log lowest dest=00000001 mask=0/1/-1
>>
>> so this is likely the reason for things falling apart on the second
>> iteration:
>>
>> (XEN)   Interrupt Remapping: supported and enabled.
>> (XEN)   Interrupt remapping table (nr_entry=0x10000. Only dump P=1 entries here):
>> (XEN)        SVT  SQ   SID      DST  V  AVL DLM TM RH DM FPD P
>> (XEN)   0000:  1   0  f0f8 00000001 38    0   1  0  1  1   0 1
>> ...
>> (XEN)   0014:  1   0  00d8 00000001 a1    0   1  0  1  1   0 1
>> (XEN)   0015:  1   0  00fa 00000001 00    0   0  0  0  0   0 1
>>                                               ^     ^  ^
>> (XEN)   0016:  1   0  f0f8 00000001 31    0   1  1  1  1   0 1
>> (XEN)   0017:  1   0  00a0 00000001 a9    0   1  0  1  1   0 1
>> (XEN)   0018:  1   0  0200 00000001 b1    0   1  0  1  1   0 1
>> (XEN)   0019:  1   0  00c8 00000001 c9    0   1  0  1  1   0 1
>>
>> Surprisingly in both cases we get (with the other vector fields varying
>> accordingly)
>>
>> (XEN)    IRQ:  26 affinity:0001 vec:71 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:279(-S--),
>> (XEN)    IRQ:  27 affinity:0001 vec:21 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:278(-S--),
>>                                     ^^
>> (XEN)    IRQ:  28 affinity:0001 vec:29 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:277(-S--),
>> (XEN)    IRQ:  29 affinity:0001 vec:79 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:276(-S--),
>> (XEN)    IRQ:  30 affinity:0001 vec:81 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:275(PS--),
>> (XEN)    IRQ:  31 affinity:0001 vec:99 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:274(PS--),
>>
>> The interrupt in question belongs to 0000:00:1f.2, i.e. the
>> AHCI controller.
>
> This would be consistent with what I've observed.
>
>>
>> Unfortunately I can't make sense of the kernel side config space
>> restore messages - an offset of 1 gets reported for the device in
>> question (and various other odd offsets exist), yet 3.5's
>> drivers/pci/pci.c:pci_restore_config_space_range() calls
>> pci_restore_config_dword() with an offset that's always divisible
>> by 4. Could you clarify which kernel version you were using here?
>> We first need to determine whether the kernel corrupts something
>> (after all, config space isn't protected from Dom0 modifications) -
>> if that's the case, we may need to understand why older Xen was
>> immune to that. If that's not the case, adding some extra
>> logging to Xen's pci_restore_msi_state() would seem the best
>> first step, plus (maybe) logging of Dom0 post-resume config space
>> accesses to the device in question.
>
> This particular failure is using linux-3.2.23 + some of Konrad's
> branches that haven't been merged into mainline (the s3 branches are
> probably the most appropriate here)
>
>>
>> The most likely thing happening (though unclear where) is that
>> the corresponding struct msi_msg instance gets cleared in the
>> course of the first resume (but after the corresponding interrupt
>> remapping entry already got restored).
>>
>> Jan
>>

--14dae9399de136edb804c6d6c8a6
Content-Type: text/plain; charset=US-ASCII; name="xen-dump-s3-second.txt"
Content-Disposition: attachment; filename="xen-dump-s3-second.txt"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_h5nzo9g00

KFhFTikgJyonIHByZXNzZWQgLT4gZmlyaW5nIGFsbCBkaWFnbm9zdGljIGtleWhhbmRsZXJzCihY
RU4pIFtkOiBkdW1wIHJlZ2lzdGVyc10KKFhFTikgJ2QnIHByZXNzZWQgLT4gZHVtcGluZyByZWdp
c3RlcnMKKFhFTikgCihYRU4pICoqKiBEdW1waW5nIENQVTAgaG9zdCBzdGF0ZTogKioqCihYRU4p
IC0tLS1bIFhlbi00LjIuMC1yYzItcHJlICB4ODZfNjQgIGRlYnVnPXkgIFRhaW50ZWQ6ICAgIEMg
XS0tLS0KKFhFTikgQ1BVOiAgICAwCihYRU4pIFJJUDogICAgZTAwODpbPGZmZmY4MmM0ODAxM2Q3
N2U+XSBuczE2NTUwX3BvbGwrMHgyNy8weDMzCihYRU4pIFJGTEFHUzogMDAwMDAwMDAwMDAxMDI4
NiAgIENPTlRFWFQ6IGh5cGVydmlzb3IKKFhFTikgcmF4OiBmZmZmODJjNDgwMzAyNWEwICAgcmJ4
OiBmZmZmODJjNDgwMzAyNDgwICAgcmN4OiAwMDAwMDAwMDAwMDAwMDAzCihYRU4pIHJkeDogMDAw
MDAwMDAwMDAwMDAwMCAgIHJzaTogZmZmZjgyYzQ4MDJlMjVjOCAgIHJkaTogZmZmZjgyYzQ4MDI3
MTgwMAooWEVOKSByYnA6IGZmZmY4MmM0ODAyYjdlMzAgICByc3A6IGZmZmY4MmM0ODAyYjdlMzAg
ICByODogIDAwMDAwMDVjNTEzZGRkMDAKKFhFTikgcjk6ICBmZmZmODJjNDgwMzAyNjAwICAgcjEw
OiAwMDAwMDA1YzUwNmQyMGY4ICAgcjExOiAwMDAwMDAwMDAwMDAwMjQ2CihYRU4pIHIxMjogZmZm
ZjgyYzQ4MDI3MTgwMCAgIHIxMzogZmZmZjgyYzQ4MDEzZDc1NyAgIHIxNDogMDAwMDAwNWM1MDFj
YmQzMgooWEVOKSByMTU6IGZmZmY4MmM0ODAzMDIzMDggICBjcjA6IDAwMDAwMDAwODAwNTAwM2Ig
ICBjcjQ6IDAwMDAwMDAwMDAxMDI2ZjAKKFhFTikgY3IzOiAwMDAwMDAwMTRkMDBmMDAwICAgY3Iy
OiAwMDAwMDAwMDAwMDAwMDAwCihYRU4pIGRzOiAwMDAwICAgZXM6IDAwMDAgICBmczogMDAwMCAg
IGdzOiAwMDAwICAgc3M6IGUwMTAgICBjczogZTAwOAooWEVOKSBYZW4gc3RhY2sgdHJhY2UgZnJv
bSByc3A9ZmZmZjgyYzQ4MDJiN2UzMDoKKFhFTikgICAgZmZmZjgyYzQ4MDJiN2U2MCBmZmZmODJj
NDgwMTI4MTdmIDAwMDAwMDAwMDAwMDAwMDIgZmZmZjgyYzQ4MDJlMjVjOAooWEVOKSAgICBmZmZm
ODJjNDgwMzAyNDgwIGZmZmY4MzAxNDg5YjNkNDAgZmZmZjgyYzQ4MDJiN2ViMCBmZmZmODJjNDgw
MTI4MjgxCihYRU4pICAgIGZmZmY4MmM0ODAyYjdmMTggMDAwMDAwMDAwMDAwMDI0NiAwMDAwMDA1
YzUwMWJmNGVlIGZmZmY4MmM0ODAyZDg4ODAKKFhFTikgICAgZmZmZjgyYzQ4MDJkODg4MCBmZmZm
ODJjNDgwMmI3ZjE4IGZmZmZmZmZmZmZmZmZmZmYgZmZmZjgyYzQ4MDMwMjMwOAooWEVOKSAgICBm
ZmZmODJjNDgwMmI3ZWUwIGZmZmY4MmM0ODAxMjU0MDUgZmZmZjgyYzQ4MDJiN2YxOCBmZmZmODJj
NDgwMmI3ZjE4CihYRU4pICAgIDAwMDAwMDAwZmZmZmZmZmYgMDAwMDAwMDAwMDAwMDAwMiBmZmZm
ODJjNDgwMmI3ZWYwIGZmZmY4MmM0ODAxMjU0ODQKKFhFTikgICAgZmZmZjgyYzQ4MDJiN2YxMCBm
ZmZmODJjNDgwMTU4YzA1IGZmZmY4MzAwYWE1ODQwMDAgZmZmZjgzMDBhYTBmYzAwMAooWEVOKSAg
ICBmZmZmODJjNDgwMmI3ZGE4IDAwMDAwMDAwMDAwMDAwMDAgZmZmZmZmZmZmZmZmZmZmZiAwMDAw
MDAwMDAwMDAwMDAwCihYRU4pICAgIGZmZmZmZmZmODFhYWZkYTAgZmZmZmZmZmY4MWEwMWVlOCBm
ZmZmZmZmZjgxYTAxZmQ4IDAwMDAwMDAwMDAwMDAyNDYKKFhFTikgICAgMDAwMDAwMDAwMDAwMDAw
MSAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMAooWEVO
KSAgICBmZmZmZmZmZjgxMDAxM2FhIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDBkZWFkYmVlZiAw
MDAwMDAwMGRlYWRiZWVmCihYRU4pICAgIDAwMDAwMTAwMDAwMDAwMDAgZmZmZmZmZmY4MTAwMTNh
YSAwMDAwMDAwMDAwMDBlMDMzIDAwMDAwMDAwMDAwMDAyNDYKKFhFTikgICAgZmZmZmZmZmY4MWEw
MWVkMCAwMDAwMDAwMDAwMDBlMDJiIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMAoo
WEVOKSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCBmZmZmODMwMGFhNTg0MDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMAooWEVOKSBYZW4gY2FsbCB0cmFjZToKKFhFTikgICAgWzxmZmZmODJjNDgwMTNkNzdlPl0g
bnMxNjU1MF9wb2xsKzB4MjcvMHgzMwooWEVOKSAgICBbPGZmZmY4MmM0ODAxMjgxN2Y+XSBleGVj
dXRlX3RpbWVyKzB4NGUvMHg2YwooWEVOKSAgICBbPGZmZmY4MmM0ODAxMjgyODE+XSB0aW1lcl9z
b2Z0aXJxX2FjdGlvbisweGU0LzB4MjFhCihYRU4pICAgIFs8ZmZmZjgyYzQ4MDEyNTQwNT5dIF9f
ZG9fc29mdGlycSsweDk1LzB4YTAKKFhFTikgICAgWzxmZmZmODJjNDgwMTI1NDg0Pl0gZG9fc29m
dGlycSsweDI2LzB4MjgKKFhFTikgICAgWzxmZmZmODJjNDgwMTU4YzA1Pl0gaWRsZV9sb29wKzB4
NmYvMHg3MQooWEVOKSAgICAKKFhFTikgKioqIER1bXBpbmcgQ1BVMSBob3N0IHN0YXRlOiAqKioK
KFhFTikgLS0tLVsgWGVuLTQuMi4wLXJjMi1wcmUgIHg4Nl82NCAgZGVidWc9eSAgVGFpbnRlZDog
ICAgQyBdLS0tLQooWEVOKSBDUFU6ICAgIDEKKFhFTikgUklQOiAgICBlMDA4Ols8ZmZmZjgyYzQ4
MDE1ODNjND5dIGRlZmF1bHRfaWRsZSsweDk5LzB4OWUKKFhFTikgUkZMQUdTOiAwMDAwMDAwMDAw
MDAwMjQ2ICAgQ09OVEVYVDogaHlwZXJ2aXNvcgooWEVOKSByYXg6IGZmZmY4MmM0ODAzMDIzNzAg
ICByYng6IGZmZmY4MzAxM2U2ZTdmMTggICByY3g6IDAwMDAwMDAwMDAwMDAwMDEKKFhFTikgcmR4
OiAwMDAwMDAzY2JkMWI1ZDgwICAgcnNpOiAwMDAwMDAwMDM1NmNkMzg2ICAgcmRpOiAwMDAwMDAw
MDAwMDAwMDAxCihYRU4pIHJicDogZmZmZjgzMDEzZTZlN2VmMCAgIHJzcDogZmZmZjgzMDEzZTZl
N2VmMCAgIHI4OiAgMDAwMDAwMTc2M2UxODBhYwooWEVOKSByOTogIDAwMDAwMDAwMDAwMDAwM2Ug
ICByMTA6IDAwMDAwMDAwZGVhZGJlZWYgICByMTE6IDAwMDAwMDAwMDAwMDAyNDYKKFhFTikgcjEy
OiBmZmZmODMwMTNlNmU3ZjE4ICAgcjEzOiAwMDAwMDAwMGZmZmZmZmZmICAgcjE0OiAwMDAwMDAw
MDAwMDAwMDAyCihYRU4pIHIxNTogZmZmZjgzMDEzZDRiODA4OCAgIGNyMDogMDAwMDAwMDA4MDA1
MDAzYiAgIGNyNDogMDAwMDAwMDAwMDEwMjZmMAooWEVOKSBjcjM6IDAwMDAwMDAxNGQwMGYwMDAg
ICBjcjI6IGZmZmY4ODAwMjVmYzZiOTgKKFhFTikgZHM6IDAwMmIgICBlczogMDAyYiAgIGZzOiAw
MDAwICAgZ3M6IDAwMDAgICBzczogZTAxMCAgIGNzOiBlMDA4CihYRU4pIFhlbiBzdGFjayB0cmFj
ZSBmcm9tIHJzcD1mZmZmODMwMTNlNmU3ZWYwOgooWEVOKSAgICBmZmZmODMwMTNlNmU3ZjEwIGZm
ZmY4MmM0ODAxNThiZjggZmZmZjgzMDBhYTBmZTAwMCBmZmZmODMwMGE4M2ZkMDAwCihYRU4pICAg
IGZmZmY4MzAxM2U2ZTdkYTggMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDEKKFhFTikgICAgZmZmZmZmZmY4MWFhZmRhMCBmZmZmODgwMDI3ODZkZWUwIGZm
ZmY4ODAwMjc4NmRmZDggMDAwMDAwMDAwMDAwMDI0NgooWEVOKSAgICAwMDAwMDAwMDAwMDAwMDAx
IDAwMDAwMDAwMDAwMDAwNDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwCihYRU4p
ICAgIGZmZmZmZmZmODEwMDEzYWEgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMGRlYWRiZWVmIDAw
MDAwMDAwZGVhZGJlZWYKKFhFTikgICAgMDAwMDAxMDAwMDAwMDAwMCBmZmZmZmZmZjgxMDAxM2Fh
IDAwMDAwMDAwMDAwMGUwMzMgMDAwMDAwMDAwMDAwMDI0NgooWEVOKSAgICBmZmZmODgwMDI3ODZk
ZWM4IDAwMDAwMDAwMDAwMGUwMmIgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwCihY
RU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAx
IGZmZmY4MzAwYWEwZmUwMDAKKFhFTikgICAgMDAwMDAwM2NiZDFiNWQ4MCAwMDAwMDAwMDAwMDAw
MDAwCihYRU4pIFhlbiBjYWxsIHRyYWNlOgooWEVOKSAgICBbPGZmZmY4MmM0ODAxNTgzYzQ+XSBk
ZWZhdWx0X2lkbGUrMHg5OS8weDllCihYRU4pICAgIFs8ZmZmZjgyYzQ4MDE1OGJmOD5dIGlkbGVf
bG9vcCsweDYyLzB4NzEKKFhFTikgICAgCihYRU4pICoqKiBEdW1waW5nIENQVTIgaG9zdCBzdGF0
ZTogKioqCihYRU4pIC0tLS1bIFhlbi00LjIuMC1yYzItcHJlICB4ODZfNjQgIGRlYnVnPXkgIFRh
aW50ZWQ6ICAgIEMgXS0tLS0KKFhFTikgQ1BVOiAgICAyCihYRU4pIFJJUDogICAgZTAwODpbPGZm
ZmY4MmM0ODAxNTgzYzQ+XSBkZWZhdWx0X2lkbGUrMHg5OS8weDllCihYRU4pIFJGTEFHUzogMDAw
MDAwMDAwMDAwMDI0NiAgIENPTlRFWFQ6IGh5cGVydmlzb3IKKFhFTikgcmF4OiBmZmZmODJjNDgw
MzAyMzcwICAgcmJ4OiBmZmZmODMwMTQ4OTlmZjE4ICAgcmN4OiAwMDAwMDAwMDAwMDAwMDAyCihY
RU4pIHJkeDogMDAwMDAwM2NiZTNlZWQ4MCAgIHJzaTogMDAwMDAwMDAzNjMyN2E1MiAgIHJkaTog
MDAwMDAwMDAwMDAwMDAwMgooWEVOKSByYnA6IGZmZmY4MzAxNDg5OWZlZjAgICByc3A6IGZmZmY4
MzAxNDg5OWZlZjAgICByODogIDAwMDAwMDE3ODY4M2Y0ZWMKKFhFTikgcjk6ICBmZmZmODMwMGE4
M2ZjMDYwICAgcjEwOiAwMDAwMDAwMGRlYWRiZWVmICAgcjExOiAwMDAwMDAwMDAwMDAwMjQ2CihY
RU4pIHIxMjogZmZmZjgzMDE0ODk5ZmYxOCAgIHIxMzogMDAwMDAwMDBmZmZmZmZmZiAgIHIxNDog
MDAwMDAwMDAwMDAwMDAwMgooWEVOKSByMTU6IGZmZmY4MzAxM2U2ZjEwODggICBjcjA6IDAwMDAw
MDAwODAwNTAwM2IgICBjcjQ6IDAwMDAwMDAwMDAxMDI2ZjAKKFhFTikgY3IzOiAwMDAwMDAwMTQx
YTA1MDAwICAgY3IyOiBmZmZmODgwMDI3OGQwMGY4CihYRU4pIGRzOiAwMDJiICAgZXM6IDAwMmIg
ICBmczogMDAwMCAgIGdzOiAwMDAwICAgc3M6IGUwMTAgICBjczogZTAwOAooWEVOKSBYZW4gc3Rh
Y2sgdHJhY2UgZnJvbSByc3A9ZmZmZjgzMDE0ODk5ZmVmMDoKKFhFTikgICAgZmZmZjgzMDE0ODk5
ZmYxMCBmZmZmODJjNDgwMTU4YmY4IGZmZmY4MzAwYTg1YzcwMDAgZmZmZjgzMDBhODNmYzAwMAoo
WEVOKSAgICBmZmZmODMwMTQ4OTlmZGE4IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAyCihYRU4pICAgIGZmZmZmZmZmODFhYWZkYTAgZmZmZjg4MDAyNzg2
ZmVlMCBmZmZmODgwMDI3ODZmZmQ4IDAwMDAwMDAwMDAwMDAyNDYKKFhFTikgICAgMDAwMDAwMDAw
MDAwMDAwMSAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MAooWEVOKSAgICBmZmZmZmZmZjgxMDAxM2FhIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDBkZWFk
YmVlZiAwMDAwMDAwMGRlYWRiZWVmCihYRU4pICAgIDAwMDAwMTAwMDAwMDAwMDAgZmZmZmZmZmY4
MTAwMTNhYSAwMDAwMDAwMDAwMDBlMDMzIDAwMDAwMDAwMDAwMDAyNDYKKFhFTikgICAgZmZmZjg4
MDAyNzg2ZmVjOCAwMDAwMDAwMDAwMDBlMDJiIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMAooWEVOKSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMiBmZmZmODMwMGE4NWM3MDAwCihYRU4pICAgIDAwMDAwMDNjYmUzZWVkODAgMDAwMDAw
MDAwMDAwMDAwMAooWEVOKSBYZW4gY2FsbCB0cmFjZToKKFhFTikgICAgWzxmZmZmODJjNDgwMTU4
M2M0Pl0gZGVmYXVsdF9pZGxlKzB4OTkvMHg5ZQooWEVOKSAgICBbPGZmZmY4MmM0ODAxNThiZjg+
XSBpZGxlX2xvb3ArMHg2Mi8weDcxCihYRU4pICAgIAooWEVOKSAqKiogRHVtcGluZyBDUFUzIGhv
c3Qgc3RhdGU6ICoqKgooWEVOKSAtLS0tWyBYZW4tNC4yLjAtcmMyLXByZSAgeDg2XzY0ICBkZWJ1
Zz15ICBUYWludGVkOiAgICBDIF0tLS0tCihYRU4pIENQVTogICAgMwooWEVOKSBSSVA6ICAgIGUw
MDg6WzxmZmZmODJjNDgwMTU4M2M0Pl0gZGVmYXVsdF9pZGxlKzB4OTkvMHg5ZQooWEVOKSBSRkxB
R1M6IDAwMDAwMDAwMDAwMDAyNDYgICBDT05URVhUOiBoeXBlcnZpc29yCihYRU4pIHJheDogZmZm
ZjgyYzQ4MDMwMjM3MCAgIHJieDogZmZmZjgzMDE0ODk4ZmYxOCAgIHJjeDogMDAwMDAwMDAwMDAw
MDAwMwooWEVOKSByZHg6IDAwMDAwMDNjYzg2OTJkODAgICByc2k6IDAwMDAwMDAwMzZmODMyZmUg
ICByZGk6IDAwMDAwMDAwMDAwMDAwMDMKKFhFTikgcmJwOiBmZmZmODMwMTQ4OThmZWYwICAgcnNw
OiBmZmZmODMwMTQ4OThmZWYwICAgcjg6ICAwMDAwMDAxN2E4ZGJmMzk0CihYRU4pIHI5OiAgMDAw
MDAwMDAwMDAwMDAzYyAgIHIxMDogMDAwMDAwMDBkZWFkYmVlZiAgIHIxMTogMDAwMDAwMDAwMDAw
MDI0NgooWEVOKSByMTI6IGZmZmY4MzAxNDg5OGZmMTggICByMTM6IDAwMDAwMDAwZmZmZmZmZmYg
ICByMTQ6IDAwMDAwMDAwMDAwMDAwMDIKKFhFTikgcjE1OiBmZmZmODMwMTQ4OTk1MDg4ICAgY3Iw
OiAwMDAwMDAwMDgwMDUwMDNiICAgY3I0OiAwMDAwMDAwMDAwMTAyNmYwCihYRU4pIGNyMzogMDAw
MDAwMDE0MWEwNTAwMCAgIGNyMjogMDAwMDAwMDAwMDAwMDAwMAooWEVOKSBkczogMDAyYiAgIGVz
OiAwMDJiICAgZnM6IDAwMDAgICBnczogMDAwMCAgIHNzOiBlMDEwICAgY3M6IGUwMDgKKFhFTikg
WGVuIHN0YWNrIHRyYWNlIGZyb20gcnNwPWZmZmY4MzAxNDg5OGZlZjA6CihYRU4pICAgIGZmZmY4
MzAxNDg5OGZmMTAgZmZmZjgyYzQ4MDE1OGJmOCBmZmZmODMwMGE4M2ZlMDAwIGZmZmY4MzAwYWE1
ODMwMDAKKFhFTikgICAgZmZmZjgzMDE0ODk4ZmRhOCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMwooWEVOKSAgICBmZmZmZmZmZjgxYWFmZGEwIGZmZmY4
ODAwMjc4ODFlZTAgZmZmZjg4MDAyNzg4MWZkOCAwMDAwMDAwMDAwMDAwMjQ2CihYRU4pICAgIDAw
MDAwMDAwMDAwMDAwMDEgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAKKFhFTikgICAgZmZmZmZmZmY4MTAwMTNhYSAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwZGVhZGJlZWYgMDAwMDAwMDBkZWFkYmVlZgooWEVOKSAgICAwMDAwMDEwMDAwMDAwMDAwIGZm
ZmZmZmZmODEwMDEzYWEgMDAwMDAwMDAwMDAwZTAzMyAwMDAwMDAwMDAwMDAwMjQ2CihYRU4pICAg
IGZmZmY4ODAwMjc4ODFlYzggMDAwMDAwMDAwMDAwZTAyYiAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAKKFhFTikgICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDMgZmZmZjgzMDBhODNmZTAwMAooWEVOKSAgICAwMDAwMDAzY2M4NjkyZDgw
IDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgWGVuIGNhbGwgdHJhY2U6CihYRU4pICAgIFs8ZmZmZjgy
YzQ4MDE1ODNjND5dIGRlZmF1bHRfaWRsZSsweDk5LzB4OWUKKFhFTikgICAgWzxmZmZmODJjNDgw
MTU4YmY4Pl0gaWRsZV9sb29wKzB4NjIvMHg3MQooWEVOKSAgICAKKFhFTikgWzA6IGR1bXAgRG9t
MCByZWdpc3RlcnNdCihYRU4pICcwJyBwcmVzc2VkIC0+IGR1bXBpbmcgRG9tMCdzIHJlZ2lzdGVy
cwooWEVOKSAqKiogRHVtcGluZyBEb20wIHZjcHUjMCBzdGF0ZTogKioqCihYRU4pIFJJUDogICAg
ZTAzMzpbPGZmZmZmZmZmODEwMDEzYWE+XQooWEVOKSBSRkxBR1M6IDAwMDAwMDAwMDAwMDAyNDYg
ICBFTTogMCAgIENPTlRFWFQ6IHB2IGd1ZXN0CihYRU4pIHJheDogMDAwMDAwMDAwMDAwMDAwMCAg
IHJieDogZmZmZmZmZmY4MWEwMWZkOCAgIHJjeDogZmZmZmZmZmY4MTAwMTNhYQooWEVOKSByZHg6
IDAwMDAwMDAwMDAwMDAwMDAgICByc2k6IDAwMDAwMDAwZGVhZGJlZWYgICByZGk6IDAwMDAwMDAw
ZGVhZGJlZWYKKFhFTikgcmJwOiBmZmZmZmZmZjgxYTAxZWU4ICAgcnNwOiBmZmZmZmZmZjgxYTAx
ZWQwICAgcjg6ICAwMDAwMDAwMDAwMDAwMDAwCihYRU4pIHI5OiAgMDAwMDAwMDAwMDAwMDAwMCAg
IHIxMDogMDAwMDAwMDAwMDAwMDAwMSAgIHIxMTogMDAwMDAwMDAwMDAwMDI0NgooWEVOKSByMTI6
IGZmZmZmZmZmODFhYWZkYTAgICByMTM6IDAwMDAwMDAwMDAwMDAwMDAgICByMTQ6IGZmZmZmZmZm
ZmZmZmZmZmYKKFhFTikgcjE1OiAwMDAwMDAwMDAwMDAwMDAwICAgY3IwOiAwMDAwMDAwMDAwMDAw
MDA4ICAgY3I0OiAwMDAwMDAwMDAwMDAyNjYwCihYRU4pIGNyMzogMDAwMDAwMDE0ZDAwZjAwMCAg
IGNyMjogMDAwMDdmOGQ5ZTdkMzNkMAooWEVOKSBkczogMDAwMCAgIGVzOiAwMDAwICAgZnM6IDAw
MDAgICBnczogMDAwMCAgIHNzOiBlMDJiICAgY3M6IGUwMzMKKFhFTikgR3Vlc3Qgc3RhY2sgdHJh
Y2UgZnJvbSByc3A9ZmZmZmZmZmY4MWEwMWVkMDoKKFhFTikgICAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMGZmZmZmZmZmIGZmZmZmZmZmODEwMGE1YzAgZmZmZmZmZmY4MWEwMWYxOAooWEVOKSAg
ICBmZmZmZmZmZjgxMDFjNjYzIGZmZmZmZmZmODFhMDFmZDggZmZmZmZmZmY4MWFhZmRhMCBmZmZm
ODgwMDJkZWUxYTAwCihYRU4pICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmY4MWEwMWY0OCBm
ZmZmZmZmZjgxMDEzMjM2IGZmZmZmZmZmZmZmZmZmZmYKKFhFTikgICAgYTNmYzk4NzMzOWVkMDEz
YiAwMDAwMDAwMDAwMDAwMDAwIGZmZmZmZmZmODFiMTUxNjAgZmZmZmZmZmY4MWEwMWY1OAooWEVO
KSAgICBmZmZmZmZmZjgxNTU0ZjVlIGZmZmZmZmZmODFhMDFmOTggZmZmZmZmZmY4MWFjY2JmNSBm
ZmZmZmZmZjgxYjE1MTYwCihYRU4pICAgIGU0YjE1OWJhM2VlYTA5NGMgMDAwMDAwMDAwMGNkZjAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAwMDAwMDAwMDAw
MDAwMCBmZmZmZmZmZjgxYTAxZmI4IGZmZmZmZmZmODFhY2MzNGIgZmZmZmZmZmY3ZmZmZmZmZgoo
WEVOKSAgICBmZmZmZmZmZjg0YjA0MDAwIGZmZmZmZmZmODFhMDFmZjggZmZmZmZmZmY4MWFjZmVj
YyAwMDAwMDAwMDAwMDAwMDAwCihYRU4pICAgIDAwMDAwMDAxMDAwMDAwMDAgMDAxMDA4MDAwMDAz
MDZhNCAxZmM5OGI3NWUzYjgyMjgzIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwCihYRU4pICoqKiBEdW1waW5nIERvbTAgdmNwdSMxIHN0
YXRlOiAqKioKKFhFTikgUklQOiAgICBlMDMzOls8ZmZmZmZmZmY4MTAwMTNhYT5dCihYRU4pIFJG
TEFHUzogMDAwMDAwMDAwMDAwMDI0NiAgIEVNOiAwICAgQ09OVEVYVDogcHYgZ3Vlc3QKKFhFTikg
cmF4OiAwMDAwMDAwMDAwMDAwMDAwICAgcmJ4OiBmZmZmODgwMDI3ODZkZmQ4ICAgcmN4OiBmZmZm
ZmZmZjgxMDAxM2FhCihYRU4pIHJkeDogMDAwMDAwMDAwMDAwMDAwMCAgIHJzaTogMDAwMDAwMDBk
ZWFkYmVlZiAgIHJkaTogMDAwMDAwMDBkZWFkYmVlZgooWEVOKSByYnA6IGZmZmY4ODAwMjc4NmRl
ZTAgICByc3A6IGZmZmY4ODAwMjc4NmRlYzggICByODogIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikg
cjk6ICAwMDAwMDAwMDAwMDAwMDQwICAgcjEwOiAwMDAwMDAwMDAwMDAwMDAxICAgcjExOiAwMDAw
MDAwMDAwMDAwMjQ2CihYRU4pIHIxMjogZmZmZmZmZmY4MWFhZmRhMCAgIHIxMzogMDAwMDAwMDAw
MDAwMDAwMSAgIHIxNDogMDAwMDAwMDAwMDAwMDAwMAooWEVOKSByMTU6IDAwMDAwMDAwMDAwMDAw
MDAgICBjcjA6IDAwMDAwMDAwODAwNTAwM2IgICBjcjQ6IDAwMDAwMDAwMDAwMDI2NjAKKFhFTikg
Y3IzOiAwMDAwMDAwMTRkMDBmMDAwICAgY3IyOiAwMDAwMDAwMDAxZmU5M2YwCihYRU4pIGRzOiAw
MDJiICAgZXM6IDAwMmIgICBmczogMDAwMCAgIGdzOiAwMDAwICAgc3M6IGUwMmIgICBjczogZTAz
MwooWEVOKSBHdWVzdCBzdGFjayB0cmFjZSBmcm9tIHJzcD1mZmZmODgwMDI3ODZkZWM4OgooWEVO
KSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwZmZmZmZmZmYgZmZmZmZmZmY4MTAwYTVjMCBm
ZmZmODgwMDI3ODZkZjEwCihYRU4pICAgIGZmZmZmZmZmODEwMWM2NjMgZmZmZjg4MDAyNzg2ZGZk
OCBmZmZmZmZmZjgxYWFmZGEwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAwMDAwMDAwMDAw
MDAwMCBmZmZmODgwMDI3ODZkZjQwIGZmZmZmZmZmODEwMTMyMzYgZmZmZmZmZmY4MTAwYWRlOQoo
WEVOKSAgICBhZGNmNDU4MDdjMmQwNGZiIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCBmZmZmODgwMDI3ODZkZjUwCihYRU4pICAgIGZmZmZmZmZmODE1NjM0MzggMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MAooWEVOKSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMAooWEVOKSAgICAwMDAwMDAwMDAwMDAwMDAwIGZmZmY4ODAwMjc4NmRmNTggMDAwMDAwMDAw
MDAwMDAwMAooWEVOKSAqKiogRHVtcGluZyBEb20wIHZjcHUjMiBzdGF0ZTogKioqCihYRU4pIFJJ
UDogICAgZTAzMzpbPGZmZmZmZmZmODEwMDEzYWE+XQooWEVOKSBSRkxBR1M6IDAwMDAwMDAwMDAw
MDAyNDYgICBFTTogMCAgIENPTlRFWFQ6IHB2IGd1ZXN0CihYRU4pIHJheDogMDAwMDAwMDAwMDAw
MDAwMCAgIHJieDogZmZmZjg4MDAyNzg2ZmZkOCAgIHJjeDogZmZmZmZmZmY4MTAwMTNhYQooWEVO
KSByZHg6IDAwMDAwMDAwMDAwMDAwMDAgICByc2k6IDAwMDAwMDAwZGVhZGJlZWYgICByZGk6IDAw
MDAwMDAwZGVhZGJlZWYKKFhFTikgcmJwOiBmZmZmODgwMDI3ODZmZWUwICAgcnNwOiBmZmZmODgw
MDI3ODZmZWM4ICAgcjg6ICAwMDAwMDAwMDAwMDAwMDAwCihYRU4pIHI5OiAgMDAwMDAwMDAwMDAw
MDAwMCAgIHIxMDogMDAwMDAwMDAwMDAwMDAwMSAgIHIxMTogMDAwMDAwMDAwMDAwMDI0NgooWEVO
KSByMTI6IGZmZmZmZmZmODFhYWZkYTAgICByMTM6IDAwMDAwMDAwMDAwMDAwMDIgICByMTQ6IDAw
MDAwMDAwMDAwMDAwMDAKKFhFTikgcjE1OiAwMDAwMDAwMDAwMDAwMDAwICAgY3IwOiAwMDAwMDAw
MDgwMDUwMDNiICAgY3I0OiAwMDAwMDAwMDAwMDAyNjYwCihYRU4pIGNyMzogMDAwMDAwMDE0MWEw
NTAwMCAgIGNyMjogMDAwMDdmODE4YmZmZWNkNgooWEVOKSBkczogMDAyYiAgIGVzOiAwMDJiICAg
ZnM6IDAwMDAgICBnczogMDAwMCAgIHNzOiBlMDJiICAgY3M6IGUwMzMKKFhFTikgR3Vlc3Qgc3Rh
Y2sgdHJhY2UgZnJvbSByc3A9ZmZmZjg4MDAyNzg2ZmVjODoKKFhFTikgICAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMGZmZmZmZmZmIGZmZmZmZmZmODEwMGE1YzAgZmZmZjg4MDAyNzg2ZmYxMAoo
WEVOKSAgICBmZmZmZmZmZjgxMDFjNjYzIGZmZmY4ODAwMjc4NmZmZDggZmZmZmZmZmY4MWFhZmRh
MCAwMDAwMDAwMDAwMDAwMDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgZmZmZjg4MDAyNzg2
ZmY0MCBmZmZmZmZmZjgxMDEzMjM2IGZmZmZmZmZmODEwMGFkZTkKKFhFTikgICAgMWZlN2I1YTgy
MjE1MDQ5OSAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgZmZmZjg4MDAyNzg2ZmY1
MAooWEVOKSAgICBmZmZmZmZmZjgxNTYzNDM4IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMAooWEVOKSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAw
MDAwMDAwMDAwMDAwMCBmZmZmODgwMDI3ODZmZjU4IDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgKioq
IER1bXBpbmcgRG9tMCB2Y3B1IzMgc3RhdGU6ICoqKgooWEVOKSBSSVA6ICAgIGUwMzM6WzxmZmZm
ZmZmZjgxMDAxM2FhPl0KKFhFTikgUkZMQUdTOiAwMDAwMDAwMDAwMDAwMjQ2ICAgRU06IDAgICBD
T05URVhUOiBwdiBndWVzdAooWEVOKSByYXg6IDAwMDAwMDAwMDAwMDAwMDAgICByYng6IGZmZmY4
ODAwMjc4ODFmZDggICByY3g6IGZmZmZmZmZmODEwMDEzYWEKKFhFTikgcmR4OiAwMDAwMDAwMDAw
MDAwMDAwICAgcnNpOiAwMDAwMDAwMGRlYWRiZWVmICAgcmRpOiAwMDAwMDAwMGRlYWRiZWVmCihY
RU4pIHJicDogZmZmZjg4MDAyNzg4MWVlMCAgIHJzcDogZmZmZjg4MDAyNzg4MWVjOCAgIHI4OiAg
MDAwMDAwMDAwMDAwMDAwMAooWEVOKSByOTogIDAwMDAwMDAwMDAwMDAwMDAgICByMTA6IDAwMDAw
MDAwMDAwMDAwMDEgICByMTE6IDAwMDAwMDAwMDAwMDAyNDYKKFhFTikgcjEyOiBmZmZmZmZmZjgx
YWFmZGEwICAgcjEzOiAwMDAwMDAwMDAwMDAwMDAzICAgcjE0OiAwMDAwMDAwMDAwMDAwMDAwCihY
RU4pIHIxNTogMDAwMDAwMDAwMDAwMDAwMCAgIGNyMDogMDAwMDAwMDA4MDA1MDAzYiAgIGNyNDog
MDAwMDAwMDAwMDAwMjY2MAooWEVOKSBjcjM6IDAwMDAwMDAxNDFhMDUwMDAgICBjcjI6IDAwMDA3
ZjhkOWU4MDBhMDAKKFhFTikgZHM6IDAwMmIgICBlczogMDAyYiAgIGZzOiAwMDAwICAgZ3M6IDAw
MDAgICBzczogZTAyYiAgIGNzOiBlMDMzCihYRU4pIEd1ZXN0IHN0YWNrIHRyYWNlIGZyb20gcnNw
PWZmZmY4ODAwMjc4ODFlYzg6CihYRU4pICAgIDAwMDAwMDAwMDAwMDAwNDAgMDAwMDAwMDBmZmZm
ZmZmZiBmZmZmZmZmZjgxMDBhNWMwIGZmZmY4ODAwMjc4ODFmMTAKKFhFTikgICAgZmZmZmZmZmY4
MTAxYzY2MyBmZmZmODgwMDI3ODgxZmQ4IGZmZmZmZmZmODFhYWZkYTAgMDAwMDAwMDAwMDAwMDAw
MAooWEVOKSAgICAwMDAwMDAwMDAwMDAwMDAwIGZmZmY4ODAwMjc4ODFmNDAgZmZmZmZmZmY4MTAx
MzIzNiBmZmZmZmZmZjgxMDBhZGU5CihYRU4pICAgIDQ5ZGUxODMzZDEzZjJhMjYgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIGZmZmY4ODAwMjc4ODFmNTAKKFhFTikgICAgZmZmZmZm
ZmY4MTU2MzQzOCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMAooWEVOKSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMAooWEVOKSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgZmZm
Zjg4MDAyNzg4MWY1OCAwMDAwMDAwMDAwMDAwMDAwCihYRU4pIFtIOiBkdW1wIGhlYXAgaW5mb10K
KFhFTikgJ0gnIHByZXNzZWQgLT4gZHVtcGluZyBoZWFwIGluZm8gKG5vdy0weDVDOjk5RTgyMTU1
KQooWEVOKSBoZWFwW25vZGU9MF1bem9uZT0wXSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0w
XVt6b25lPTFdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9Ml0gLT4gMCBwYWdl
cwooWEVOKSBoZWFwW25vZGU9MF1bem9uZT0zXSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0w
XVt6b25lPTRdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9NV0gLT4gMCBwYWdl
cwooWEVOKSBoZWFwW25vZGU9MF1bem9uZT02XSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0w
XVt6b25lPTddIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9OF0gLT4gMCBwYWdl
cwooWEVOKSBoZWFwW25vZGU9MF1bem9uZT05XSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0w
XVt6b25lPTEwXSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTExXSAtPiAwIHBh
Z2VzCihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTEyXSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9k
ZT0wXVt6b25lPTEzXSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTE0XSAtPiAx
NjEyOCBwYWdlcwooWEVOKSBoZWFwW25vZGU9MF1bem9uZT0xNV0gLT4gMzI3NjggcGFnZXMKKFhF
TikgaGVhcFtub2RlPTBdW3pvbmU9MTZdIC0+IDY1NTM2IHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0w
XVt6b25lPTE3XSAtPiAxMzA1NTkgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MThdIC0+
IDI2MjE0MyBwYWdlcwooWEVOKSBoZWFwW25vZGU9MF1bem9uZT0xOV0gLT4gMTcyODM3IHBhZ2Vz
CihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTIwXSAtPiAxMzQyMjUgcGFnZXMKKFhFTikgaGVhcFtu
b2RlPTBdW3pvbmU9MjFdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MjJdIC0+
IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MjNdIC0+IDAgcGFnZXMKKFhFTikgaGVh
cFtub2RlPTBdW3pvbmU9MjRdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MjVd
IC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MjZdIC0+IDAgcGFnZXMKKFhFTikg
aGVhcFtub2RlPTBdW3pvbmU9MjddIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9
MjhdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MjldIC0+IDAgcGFnZXMKKFhF
TikgaGVhcFtub2RlPTBdW3pvbmU9MzBdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pv
bmU9MzFdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MzJdIC0+IDAgcGFnZXMK
KFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MzNdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBd
W3pvbmU9MzRdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MzVdIC0+IDAgcGFn
ZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MzZdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2Rl
PTBdW3pvbmU9MzddIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MzhdIC0+IDAg
cGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MzldIC0+IDAgcGFnZXMKKFhFTikgW0k6IGR1
bXAgSFZNIGlycSBpbmZvXQooWEVOKSAnSScgcHJlc3NlZCAtPiBkdW1waW5nIEhWTSBpcnEgaW5m
bwooWEVOKSBbTTogZHVtcCBNU0kgc3RhdGVdCihYRU4pIFBDSS1NU0kgaW50ZXJydXB0IGluZm9y
bWF0aW9uOgooWEVOKSAgTVNJICAgIDI2IHZlYz05MSBsb3dlc3QgIGVkZ2UgICBhc3NlcnQgIGxv
ZyBsb3dlc3QgZGVzdD0wMDAwMDAwMSBtYXNrPTAvMS8tMQooWEVOKSAgTVNJICAgIDI3IHZlYz0w
MCAgZml4ZWQgIGVkZ2UgZGVhc3NlcnQgcGh5cyBsb3dlc3QgZGVzdD0wMDAwMDAwMSBtYXNrPTAv
MS8tMQooWEVOKSAgTVNJICAgIDI4IHZlYz0zMSBsb3dlc3QgIGVkZ2UgICBhc3NlcnQgIGxvZyBs
b3dlc3QgZGVzdD0wMDAwMDAwMSBtYXNrPTAvMS8tMQooWEVOKSBbUTogZHVtcCBQQ0kgZGV2aWNl
c10KKFhFTikgPT09PSBQQ0kgZGV2aWNlcyA9PT09CihYRU4pID09PT0gc2VnbWVudCAwMDAwID09
PT0KKFhFTikgMDAwMDowNTowMS4wIC0gZG9tIDAgICAtIE1TSXMgPCA+CihYRU4pIDAwMDA6MDQ6
MDAuMCAtIGRvbSAwICAgLSBNU0lzIDwgPgooWEVOKSAwMDAwOjAzOjAwLjAgLSBkb20gMCAgIC0g
TVNJcyA8ID4KKFhFTikgMDAwMDowMjowMC4wIC0gZG9tIDAgICAtIE1TSXMgPCA+CihYRU4pIDAw
MDA6MDA6MWYuMyAtIGRvbSAwICAgLSBNU0lzIDwgPgooWEVOKSAwMDAwOjAwOjFmLjIgLSBkb20g
MCAgIC0gTVNJcyA8IDI3ID4KKFhFTikgMDAwMDowMDoxZi4wIC0gZG9tIDAgICAtIE1TSXMgPCA+
CihYRU4pIDAwMDA6MDA6MWUuMCAtIGRvbSAwICAgLSBNU0lzIDwgPgooWEVOKSAwMDAwOjAwOjFk
LjAgLSBkb20gMCAgIC0gTVNJcyA8ID4KKFhFTikgMDAwMDowMDoxYy43IC0gZG9tIDAgICAtIE1T
SXMgPCA+CihYRU4pIDAwMDA6MDA6MWMuNiAtIGRvbSAwICAgLSBNU0lzIDwgPgooWEVOKSAwMDAw
OjAwOjFjLjAgLSBkb20gMCAgIC0gTVNJcyA8ID4KKFhFTikgMDAwMDowMDoxYi4wIC0gZG9tIDAg
ICAtIE1TSXMgPCAyNiA+CihYRU4pIDAwMDA6MDA6MWEuMCAtIGRvbSAwICAgLSBNU0lzIDwgPgoo
WEVOKSAwMDAwOjAwOjE5LjAgLSBkb20gMCAgIC0gTVNJcyA8ID4KKFhFTikgMDAwMDowMDoxNi4z
IC0gZG9tIDAgICAtIE1TSXMgPCA+CihYRU4pIDAwMDA6MDA6MTYuMCAtIGRvbSAwICAgLSBNU0lz
IDwgPgooWEVOKSAwMDAwOjAwOjE0LjAgLSBkb20gMCAgIC0gTVNJcyA8ID4KKFhFTikgMDAwMDow
MDowMi4wIC0gZG9tIDAgICAtIE1TSXMgPCAyOCA+CihYRU4pIDAwMDA6MDA6MDAuMCAtIGRvbSAw
ICAgLSBNU0lzIDwgPgooWEVOKSBbVjogZHVtcCBpb21tdSBpbmZvXQooWEVOKSAKKFhFTikgaW9t
bXUgMDogbnJfcHRfbGV2ZWxzID0gMy4KKFhFTikgICBRdWV1ZWQgSW52YWxpZGF0aW9uOiBzdXBw
b3J0ZWQgYW5kIGVuYWJsZWQuCihYRU4pICAgSW50ZXJydXB0IFJlbWFwcGluZzogc3VwcG9ydGVk
IGFuZCBlbmFibGVkLgooWEVOKSAgIEludGVycnVwdCByZW1hcHBpbmcgdGFibGUgKG5yX2VudHJ5
PTB4MTAwMDAuIE9ubHkgZHVtcCBQPTEgZW50cmllcyBoZXJlKToKKFhFTikgICAgICAgIFNWVCAg
U1EgICBTSUQgICAgICBEU1QgIFYgIEFWTCBETE0gVE0gUkggRE0gRlBEIFAKKFhFTikgICAwMDAw
OiAgMSAgIDAgIDAwMTAgMDAwMDAwMDEgMzEgICAgMCAgIDEgIDAgIDEgIDEgICAwIDEKKFhFTikg
CihYRU4pIGlvbW11IDE6IG5yX3B0X2xldmVscyA9IDMuCihYRU4pICAgUXVldWVkIEludmFsaWRh
dGlvbjogc3VwcG9ydGVkIGFuZCBlbmFibGVkLgooWEVOKSAgIEludGVycnVwdCBSZW1hcHBpbmc6
IHN1cHBvcnRlZCBhbmQgZW5hYmxlZC4KKFhFTikgICBJbnRlcnJ1cHQgcmVtYXBwaW5nIHRhYmxl
IChucl9lbnRyeT0weDEwMDAwLiBPbmx5IGR1bXAgUD0xIGVudHJpZXMgaGVyZSk6CihYRU4pICAg
ICAgICBTVlQgIFNRICAgU0lEICAgICAgRFNUICBWICBBVkwgRExNIFRNIFJIIERNIEZQRCBQCihY
RU4pICAgMDAwMDogIDEgICAwICBmMGY4IDAwMDAwMDAxIDM4ICAgIDAgICAxICAwICAxICAxICAg
MCAxCihYRU4pICAgMDAwMTogIDEgICAwICBmMGY4IDAwMDAwMDAxIGYwICAgIDAgICAxICAwICAx
ICAxICAgMCAxCihYRU4pICAgMDAwMjogIDEgICAwICBmMGY4IDAwMDAwMDAxIDQwICAgIDAgICAx
ICAwICAxICAxICAgMCAxCihYRU4pICAgMDAwMzogIDEgICAwICBmMGY4IDAwMDAwMDAxIDQ4ICAg
IDAgICAxICAwICAxICAxICAgMCAxCihYRU4pICAgMDAwNDogIDEgICAwICBmMGY4IDAwMDAwMDAx
IDUwICAgIDAgICAxICAwICAxICAxICAgMCAxCihYRU4pICAgMDAwNTogIDEgICAwICBmMGY4IDAw
MDAwMDAxIDU4ICAgIDAgICAxICAwICAxICAxICAgMCAxCihYRU4pICAgMDAwNjogIDEgICAwICBm
MGY4IDAwMDAwMDAxIDYwICAgIDAgICAxICAwICAxICAxICAgMCAxCihYRU4pICAgMDAwNzogIDEg
ICAwICBmMGY4IDAwMDAwMDAxIDY4ICAgIDAgICAxICAwICAxICAxICAgMCAxCihYRU4pICAgMDAw
ODogIDEgICAwICBmMGY4IDAwMDAwMDAxIDcwICAgIDAgICAxICAxICAxICAxICAgMCAxCihYRU4p
ICAgMDAwOTogIDEgICAwICBmMGY4IDAwMDAwMDAxIDc4ICAgIDAgICAxICAwICAxICAxICAgMCAx
CihYRU4pICAgMDAwYTogIDEgICAwICBmMGY4IDAwMDAwMDAxIDg4ICAgIDAgICAxICAwICAxICAx
ICAgMCAxCihYRU4pICAgMDAwYjogIDEgICAwICBmMGY4IDAwMDAwMDAxIDkwICAgIDAgICAxICAw
ICAxICAxICAgMCAxCihYRU4pICAgMDAwYzogIDEgICAwICBmMGY4IDAwMDAwMDAxIDk4ICAgIDAg
ICAxICAwICAxICAxICAgMCAxCihYRU4pICAgMDAwZDogIDEgICAwICBmMGY4IDAwMDAwMDAxIGEw
ICAgIDAgICAxICAwICAxICAxICAgMCAxCihYRU4pICAgMDAwZTogIDEgICAwICBmMGY4IDAwMDAw
MDAxIGE4ICAgIDAgICAxICAwICAxICAxICAgMCAxCihYRU4pICAgMDAwZjogIDEgICAwICBmMGY4
IDAwMDAwMDAxIGIwICAgIDAgICAxICAxICAxICAxICAgMCAxCihYRU4pICAgMDAxMDogIDEgICAw
ICBmMGY4IDAwMDAwMDAxIGI4ICAgIDAgICAxICAxICAxICAxICAgMCAxCihYRU4pICAgMDAxMTog
IDEgICAwICBmMGY4IDAwMDAwMDAxIGMwICAgIDAgICAxICAxICAxICAxICAgMCAxCihYRU4pICAg
MDAxMjogIDEgICAwICBmMGY4IDAwMDAwMDAxIGM4ICAgIDAgICAxICAxICAxICAxICAgMCAxCihY
RU4pICAgMDAxMzogIDEgICAwICBmMGY4IDAwMDAwMDAxIGQwICAgIDAgICAxICAxICAxICAxICAg
MCAxCihYRU4pICAgMDAxNDogIDEgICAwICBmMGY4IDAwMDAwMDAxIGQ4ICAgIDAgICAxICAxICAx
ICAxICAgMCAxCihYRU4pICAgMDAxNTogIDEgICAwICAwMGQ4IDAwMDAwMDAxIDkxICAgIDAgICAx
ICAwICAxICAxICAgMCAxCihYRU4pICAgMDAxNjogIDEgICAwICAwMGZhIDAwMDAwMDAxIDAwICAg
IDAgICAwICAwICAwICAwICAgMCAxCihYRU4pIAooWEVOKSBSZWRpcmVjdGlvbiB0YWJsZSBvZiBJ
T0FQSUMgMDoKKFhFTikgICAjZW50cnkgSURYIEZNVCBNQVNLIFRSSUcgSVJSIFBPTCBTVEFUIERF
TEkgIFZFQ1RPUgooWEVOKSAgICAwMTogIDAwMDAgICAxICAgIDAgICAwICAgMCAgIDAgICAgMCAg
ICAwICAgICAzOAooWEVOKSAgICAwMjogIDAwMDEgICAxICAgIDAgICAwICAgMCAgIDAgICAgMCAg
ICAwICAgICBmMAooWEVOKSAgICAwMzogIDAwMDIgICAxICAgIDAgICAwICAgMCAgIDAgICAgMCAg
ICAwICAgICA0MAooWEVOKSAgICAwNDogIDAwMDMgICAxICAgIDAgICAwICAgMCAgIDAgICAgMCAg
ICAwICAgICA0OAooWEVOKSAgICAwNTogIDAwMDQgICAxICAgIDAgICAwICAgMCAgIDAgICAgMCAg
ICAwICAgICA1MAooWEVOKSAgICAwNjogIDAwMDUgICAxICAgIDAgICAwICAgMCAgIDAgICAgMCAg
ICAwICAgICA1OAooWEVOKSAgICAwNzogIDAwMDYgICAxICAgIDAgICAwICAgMCAgIDAgICAgMCAg
ICAwICAgICA2MAooWEVOKSAgICAwODogIDAwMDcgICAxICAgIDAgICAwICAgMCAgIDAgICAgMCAg
ICAwICAgICA2OAooWEVOKSAgICAwOTogIDAwMDggICAxICAgIDAgICAxICAgMCAgIDAgICAgMCAg
ICAwICAgICA3MAooWEVOKSAgICAwYTogIDAwMDkgICAxICAgIDAgICAwICAgMCAgIDAgICAgMCAg
ICAwICAgICA3OAooWEVOKSAgICAwYjogIDAwMGEgICAxICAgIDAgICAwICAgMCAgIDAgICAgMCAg
ICAwICAgICA4OAooWEVOKSAgICAwYzogIDAwMGIgICAxICAgIDAgICAwICAgMCAgIDAgICAgMCAg
ICAwICAgICA5MAooWEVOKSAgICAwZDogIDAwMGMgICAxICAgIDEgICAwICAgMCAgIDAgICAgMCAg
ICAwICAgICA5OAooWEVOKSAgICAwZTogIDAwMGQgICAxICAgIDAgICAwICAgMCAgIDAgICAgMCAg
ICAwICAgICBhMAooWEVOKSAgICAwZjogIDAwMGUgICAxICAgIDAgICAwICAgMCAgIDAgICAgMCAg
ICAwICAgICBhOAooWEVOKSAgICAxMDogIDAwMGYgICAxICAgIDEgICAxICAgMCAgIDEgICAgMCAg
ICAwICAgICBiMAooWEVOKSAgICAxMjogIDAwMTAgICAxICAgIDEgICAxICAgMCAgIDEgICAgMCAg
ICAwICAgICBiOAooWEVOKSAgICAxMzogIDAwMTEgICAxICAgIDEgICAxICAgMCAgIDEgICAgMCAg
ICAwICAgICBjMAooWEVOKSAgICAxNDogIDAwMTQgICAxICAgIDEgICAxICAgMCAgIDEgICAgMCAg
ICAwICAgICBkOAooWEVOKSAgICAxNjogIDAwMTMgICAxICAgIDEgICAxICAgMCAgIDEgICAgMCAg
ICAwICAgICBkMAooWEVOKSAgICAxNzogIDAwMTIgICAxICAgIDEgICAxICAgMCAgIDEgICAgMCAg
ICAwICAgICBjOAooWEVOKSBbYTogZHVtcCB0aW1lciBxdWV1ZXNdCihYRU4pIER1bXBpbmcgdGlt
ZXIgcXVldWVzOgooWEVOKSBDUFUwMDoKKFhFTikgICBleD0gICAtMTY4MnVzIHRpbWVyPWZmZmY4
MmM0ODAyZTI1YzggY2I9ZmZmZjgyYzQ4MDEzZDc1NyhmZmZmODJjNDgwMjcxODAwKSBuczE2NTUw
X3BvbGwrMHgwLzB4MzMKKFhFTikgICBleD0gICAgNzMxN3VzIHRpbWVyPWZmZmY4MzAxNDg5NzMx
YjggY2I9ZmZmZjgyYzQ4MDExOWQ3MihmZmZmODMwMTQ4OTczMTkwKSBjc2NoZWRfYWNjdCsweDAv
MHg0MmEKKFhFTikgICBleD0gICAgLTk2NHVzIHRpbWVyPWZmZmY4MmM0ODAzMDI2MDAgY2I9ZmZm
ZjgyYzQ4MDEzZjZmOChmZmZmODJjNDgwMzAyNWMwKSBkb19kYnNfdGltZXIrMHgwLzB4MjFmCihY
RU4pICAgZXg9MTAwOTY5Mzl1cyB0aW1lcj1mZmZmODJjNDgwMzAwNTgwIGNiPWZmZmY4MmM0ODAx
YTg4NTAoMDAwMDAwMDAwMDAwMDAwMCkgbWNlX3dvcmtfZm4rMHgwLzB4YTkKKFhFTikgICBleD01
MTk1NDk2M3VzIHRpbWVyPWZmZmY4MmM0ODAyZmUyODAgY2I9ZmZmZjgyYzQ4MDE4MDdjMigwMDAw
MDAwMDAwMDAwMDAwKSBwbHRfb3ZlcmZsb3crMHgwLzB4MTMxCihYRU4pICAgZXg9ICAgIDczMTd1
cyB0aW1lcj1mZmZmODMwMTQ4OTczZWE4IGNiPWZmZmY4MmM0ODAxMWFhZjAoMDAwMDAwMDAwMDAw
MDAwMCkgY3NjaGVkX3RpY2srMHgwLzB4MzE0CihYRU4pIENQVTAxOgooWEVOKSAgIGV4PSAgIDY4
NjM2dXMgdGltZXI9ZmZmZjgzMDBhODNmZDA2MCBjYj1mZmZmODJjNDgwMTIxYzZiKGZmZmY4MzAw
YTgzZmQwMDApIHZjcHVfc2luZ2xlc2hvdF90aW1lcl9mbisweDAvMHhiCihYRU4pICAgZXg9ICAg
NzA4NjF1cyB0aW1lcj1mZmZmODMwMTRiMzI5MmM4IGNiPWZmZmY4MmM0ODAxMWFhZjAoMDAwMDAw
MDAwMDAwMDAwMSkgY3NjaGVkX3RpY2srMHgwLzB4MzE0CihYRU4pICAgZXg9ICAgNzkwMzV1cyB0
aW1lcj1mZmZmODMwMTNkNGI4MzgwIGNiPWZmZmY4MmM0ODAxM2Y2ZjgoZmZmZjgzMDEzZDRiODM0
MCkgZG9fZGJzX3RpbWVyKzB4MC8weDIxZgooWEVOKSBDUFUwMjoKKFhFTikgICBleD0gICA5MzUw
MnVzIHRpbWVyPWZmZmY4MzAxNDg5OTQ1ZjggY2I9ZmZmZjgyYzQ4MDExYWFmMCgwMDAwMDAwMDAw
MDAwMDAyKSBjc2NoZWRfdGljaysweDAvMHgzMTQKKFhFTikgICBleD0gICA5OTAzNXVzIHRpbWVy
PWZmZmY4MzAxM2U2ZjEzODAgY2I9ZmZmZjgyYzQ4MDEzZjZmOChmZmZmODMwMTNlNmYxMzQwKSBk
b19kYnNfdGltZXIrMHgwLzB4MjFmCihYRU4pICAgZXg9ICAgOTg2MzZ1cyB0aW1lcj1mZmZmODMw
MGE4M2ZjMDYwIGNiPWZmZmY4MmM0ODAxMjFjNmIoZmZmZjgzMDBhODNmYzAwMCkgdmNwdV9zaW5n
bGVzaG90X3RpbWVyX2ZuKzB4MC8weGIKKFhFTikgQ1BVMDM6CihYRU4pICAgZXg9ICAxMjM4NjJ1
cyB0aW1lcj1mZmZmODMwMTRiMzI5OTA4IGNiPWZmZmY4MmM0ODAxMWFhZjAoMDAwMDAwMDAwMDAw
MDAwMykgY3NjaGVkX3RpY2srMHgwLzB4MzE0CihYRU4pICAgZXg9IDMzNzM2ODB1cyB0aW1lcj1m
ZmZmODMwMGFhNTgzMDYwIGNiPWZmZmY4MmM0ODAxMjFjNmIoZmZmZjgzMDBhYTU4MzAwMCkgdmNw
dV9zaW5nbGVzaG90X3RpbWVyX2ZuKzB4MC8weGIKKFhFTikgICBleD0gIDEzOTAzNXVzIHRpbWVy
PWZmZmY4MzAxNDg5OTUzODAgY2I9ZmZmZjgyYzQ4MDEzZjZmOChmZmZmODMwMTQ4OTk1MzQwKSBk
b19kYnNfdGltZXIrMHgwLzB4MjFmCihYRU4pIFtjOiBkdW1wIEFDUEkgQ3ggc3RydWN0dXJlc10K
KFhFTikgJ2MnIHByZXNzZWQgLT4gcHJpbnRpbmcgQUNQSSBDeCBzdHJ1Y3R1cmVzCihYRU4pID09
Y3B1MD09CihYRU4pIGFjdGl2ZSBzdGF0ZToJCUMyNTUKKFhFTikgbWF4X2NzdGF0ZToJCUM3CihY
RU4pIHN0YXRlczoKKFhFTikgICAgIEMxOgl0eXBlW0MxXSBsYXRlbmN5WzAwMF0gdXNhZ2VbMDAw
MDAwMDBdIG1ldGhvZFsgSEFMVF0gZHVyYXRpb25bMF0KKFhFTikgICAgIEMwOgl1c2FnZVswMDAw
MDAwMF0gZHVyYXRpb25bMzk4NDc3MDg2NzY2XQooWEVOKSBQQzJbMF0gUEMzWzBdIFBDNlswXSBQ
QzdbMF0KKFhFTikgQ0MzWzBdIENDNlswXSBDQzdbMF0KKFhFTikgPT1jcHUxPT0KKFhFTikgYWN0
aXZlIHN0YXRlOgkJQzI1NQooWEVOKSBtYXhfY3N0YXRlOgkJQzcKKFhFTikgc3RhdGVzOgooWEVO
KSAgICAgQzE6CXR5cGVbQzFdIGxhdGVuY3lbMDAwXSB1c2FnZVswMDAwMDAwMF0gbWV0aG9kWyBI
QUxUXSBkdXJhdGlvblswXQooWEVOKSAgICAgQzA6CXVzYWdlWzAwMDAwMDAwXSBkdXJhdGlvblsz
OTg1MDE4NzY3NzddCihYRU4pIFBDMlswXSBQQzNbMF0gUEM2WzBdIFBDN1swXQooWEVOKSBDQzNb
MF0gQ0M2WzBdIENDN1swXQooWEVOKSA9PWNwdTI9PQooWEVOKSBhY3RpdmUgc3RhdGU6CQlDMjU1
CihYRU4pIG1heF9jc3RhdGU6CQlDNwooWEVOKSBzdGF0ZXM6CihYRU4pICAgICBDMToJdHlwZVtD
MV0gbGF0ZW5jeVswMDBdIHVzYWdlWzAwMDAwMDAwXSBtZXRob2RbIEhBTFRdIGR1cmF0aW9uWzBd
CihYRU4pICAgICBDMDoJdXNhZ2VbMDAwMDAwMDBdIGR1cmF0aW9uWzM5ODUyNjY2NTU2OF0KKFhF
TikgUEMyWzBdIFBDM1swXSBQQzZbMF0gUEM3WzBdCihYRU4pIENDM1swXSBDQzZbMF0gQ0M3WzBd
CihYRU4pID09Y3B1Mz09CihYRU4pIGFjdGl2ZSBzdGF0ZToJCUMyNTUKKFhFTikgbWF4X2NzdGF0
ZToJCUM3CihYRU4pIHN0YXRlczoKKFhFTikgICAgIEMxOgl0eXBlW0MxXSBsYXRlbmN5WzAwMF0g
dXNhZ2VbMDAwMDAwMDBdIG1ldGhvZFsgSEFMVF0gZHVyYXRpb25bMF0KKFhFTikgICAgIEMwOgl1
c2FnZVswMDAwMDAwMF0gZHVyYXRpb25bMzk4NTUxNDU1NTExXQooWEVOKSBQQzJbMF0gUEMzWzBd
IFBDNlswXSBQQzdbMF0KKFhFTikgQ0MzWzBdIENDNlswXSBDQzdbMF0KKFhFTikgW2U6IGR1bXAg
ZXZ0Y2huIGluZm9dCihYRU4pICdlJyBwcmVzc2VkIC0+IGR1bXBpbmcgZXZlbnQtY2hhbm5lbCBp
bmZvCihYRU4pIEV2ZW50IGNoYW5uZWwgaW5mb3JtYXRpb24gZm9yIGRvbWFpbiAwOgooWEVOKSBQ
b2xsaW5nIHZDUFVzOiB7fQooWEVOKSAgICAgcG9ydCBbcC9tXQooWEVOKSAgICAgICAgMSBbMS8w
XTogcz01IG49MCB4PTAgdj0wCihYRU4pICAgICAgICAyIFsxLzFdOiBzPTYgbj0wIHg9MAooWEVO
KSAgICAgICAgMyBbMS8wXTogcz02IG49MCB4PTAKKFhFTikgICAgICAgIDQgWzAvMF06IHM9NiBu
PTAgeD0wCihYRU4pICAgICAgICA1IFswLzBdOiBzPTUgbj0wIHg9MCB2PTEKKFhFTikgICAgICAg
IDYgWzAvMF06IHM9NiBuPTAgeD0wCihYRU4pICAgICAgICA3IFswLzBdOiBzPTUgbj0xIHg9MCB2
PTAKKFhFTikgICAgICAgIDggWzEvMV06IHM9NiBuPTEgeD0wCihYRU4pICAgICAgICA5IFswLzBd
OiBzPTYgbj0xIHg9MAooWEVOKSAgICAgICAxMCBbMC8wXTogcz02IG49MSB4PTAKKFhFTikgICAg
ICAgMTEgWzAvMF06IHM9NSBuPTEgeD0wIHY9MQooWEVOKSAgICAgICAxMiBbMC8wXTogcz02IG49
MSB4PTAKKFhFTikgICAgICAgMTMgWzAvMF06IHM9NSBuPTIgeD0wIHY9MAooWEVOKSAgICAgICAx
NCBbMC8xXTogcz02IG49MiB4PTAKKFhFTikgICAgICAgMTUgWzAvMF06IHM9NiBuPTIgeD0wCihY
RU4pICAgICAgIDE2IFswLzBdOiBzPTYgbj0yIHg9MAooWEVOKSAgICAgICAxNyBbMC8wXTogcz01
IG49MiB4PTAgdj0xCihYRU4pICAgICAgIDE4IFswLzBdOiBzPTYgbj0yIHg9MAooWEVOKSAgICAg
ICAxOSBbMC8wXTogcz01IG49MyB4PTAgdj0wCihYRU4pICAgICAgIDIwIFsxLzFdOiBzPTYgbj0z
IHg9MAooWEVOKSAgICAgICAyMSBbMC8wXTogcz02IG49MyB4PTAKKFhFTikgICAgICAgMjIgWzAv
MF06IHM9NiBuPTMgeD0wCihYRU4pICAgICAgIDIzIFswLzBdOiBzPTUgbj0zIHg9MCB2PTEKKFhF
TikgICAgICAgMjQgWzAvMF06IHM9NiBuPTMgeD0wCihYRU4pICAgICAgIDI1IFswLzBdOiBzPTMg
bj0wIHg9MCBkPTAgcD0zNQooWEVOKSAgICAgICAyNiBbMC8wXTogcz00IG49MCB4PTAgcD05IGk9
OQooWEVOKSAgICAgICAyNyBbMC8wXTogcz01IG49MCB4PTAgdj0yCihYRU4pICAgICAgIDI4IFsw
LzBdOiBzPTQgbj0wIHg9MCBwPTggaT04CihYRU4pICAgICAgIDI5IFswLzBdOiBzPTQgbj0wIHg9
MCBwPTI3OCBpPTI3CihYRU4pICAgICAgIDMwIFswLzBdOiBzPTQgbj0wIHg9MCBwPTI3OSBpPTI2
CihYRU4pICAgICAgIDMxIFswLzBdOiBzPTQgbj0wIHg9MCBwPTI3NyBpPTI4CihYRU4pICAgICAg
IDM1IFswLzBdOiBzPTMgbj0wIHg9MCBkPTAgcD0yNQooWEVOKSAgICAgICAzNiBbMC8wXTogcz01
IG49MCB4PTAgdj0zCihYRU4pIFtnOiBwcmludCBncmFudCB0YWJsZSB1c2FnZV0KKFhFTikgZ250
dGFiX3VzYWdlX3ByaW50X2FsbCBbIGtleSAnZycgcHJlc3NlZAooWEVOKSAgICAgICAtLS0tLS0t
LSBhY3RpdmUgLS0tLS0tLS0gICAgICAgLS0tLS0tLS0gc2hhcmVkIC0tLS0tLS0tCihYRU4pIFty
ZWZdIGxvY2FsZG9tIG1mbiAgICAgIHBpbiAgICAgICAgICBsb2NhbGRvbSBnbWZuICAgICBmbGFn
cwooWEVOKSBncmFudC10YWJsZSBmb3IgcmVtb3RlIGRvbWFpbjogICAgMCAuLi4gbm8gYWN0aXZl
IGdyYW50IHRhYmxlIGVudHJpZXMKKFhFTikgZ250dGFiX3VzYWdlX3ByaW50X2FsbCBdIGRvbmUK
KFhFTikgW2k6IGR1bXAgaW50ZXJydXB0IGJpbmRpbmdzXQooWEVOKSBHdWVzdCBpbnRlcnJ1cHQg
aW5mb3JtYXRpb246CihYRU4pICAgIElSUTogICAwIGFmZmluaXR5OjAwMDEgdmVjOmYwIHR5cGU9
SU8tQVBJQy1lZGdlICAgIHN0YXR1cz0wMDAwMDAwMCBtYXBwZWQsIHVuYm91bmQKKFhFTikgICAg
SVJROiAgIDEgYWZmaW5pdHk6MDAwMSB2ZWM6MzggdHlwZT1JTy1BUElDLWVkZ2UgICAgc3RhdHVz
PTAwMDAwMDAyIG1hcHBlZCwgdW5ib3VuZAooWEVOKSAgICBJUlE6ICAgMiBhZmZpbml0eTpmZmZm
IHZlYzplMiB0eXBlPVhULVBJQyAgICAgICAgICBzdGF0dXM9MDAwMDAwMDAgbWFwcGVkLCB1bmJv
dW5kCihYRU4pICAgIElSUTogICAzIGFmZmluaXR5OjAwMDEgdmVjOjQwIHR5cGU9SU8tQVBJQy1l
ZGdlICAgIHN0YXR1cz0wMDAwMDAwMiBtYXBwZWQsIHVuYm91bmQKKFhFTikgICAgSVJROiAgIDQg
YWZmaW5pdHk6MDAwMSB2ZWM6NDggdHlwZT1JTy1BUElDLWVkZ2UgICAgc3RhdHVzPTAwMDAwMDAy
IG1hcHBlZCwgdW5ib3VuZAooWEVOKSAgICBJUlE6ICAgNSBhZmZpbml0eTowMDAxIHZlYzo1MCB0
eXBlPUlPLUFQSUMtZWRnZSAgICBzdGF0dXM9MDAwMDAwMDIgbWFwcGVkLCB1bmJvdW5kCihYRU4p
ICAgIElSUTogICA2IGFmZmluaXR5OjAwMDEgdmVjOjU4IHR5cGU9SU8tQVBJQy1lZGdlICAgIHN0
YXR1cz0wMDAwMDAwMiBtYXBwZWQsIHVuYm91bmQKKFhFTikgICAgSVJROiAgIDcgYWZmaW5pdHk6
MDAwMSB2ZWM6NjAgdHlwZT1JTy1BUElDLWVkZ2UgICAgc3RhdHVzPTAwMDAwMDAyIG1hcHBlZCwg
dW5ib3VuZAooWEVOKSAgICBJUlE6ICAgOCBhZmZpbml0eTowMDAxIHZlYzo2OCB0eXBlPUlPLUFQ
SUMtZWRnZSAgICBzdGF0dXM9MDAwMDAwMTAgaW4tZmxpZ2h0PTAgZG9tYWluLWxpc3Q9MDogIDgo
LVMtLSksCihYRU4pICAgIElSUTogICA5IGFmZmluaXR5OjAwMDEgdmVjOjcwIHR5cGU9SU8tQVBJ
Qy1sZXZlbCAgIHN0YXR1cz0wMDAwMDAxMCBpbi1mbGlnaHQ9MCBkb21haW4tbGlzdD0wOiAgOSgt
Uy0tKSwKKFhFTikgICAgSVJROiAgMTAgYWZmaW5pdHk6MDAwMSB2ZWM6NzggdHlwZT1JTy1BUElD
LWVkZ2UgICAgc3RhdHVzPTAwMDAwMDAyIG1hcHBlZCwgdW5ib3VuZAooWEVOKSAgICBJUlE6ICAx
MSBhZmZpbml0eTowMDAxIHZlYzo4OCB0eXBlPUlPLUFQSUMtZWRnZSAgICBzdGF0dXM9MDAwMDAw
MDIgbWFwcGVkLCB1bmJvdW5kCihYRU4pICAgIElSUTogIDEyIGFmZmluaXR5OjAwMDEgdmVjOjkw
IHR5cGU9SU8tQVBJQy1lZGdlICAgIHN0YXR1cz0wMDAwMDAwMiBtYXBwZWQsIHVuYm91bmQKKFhF
TikgICAgSVJROiAgMTMgYWZmaW5pdHk6MDAwZiB2ZWM6OTggdHlwZT1JTy1BUElDLWVkZ2UgICAg
c3RhdHVzPTAwMDAwMDAyIG1hcHBlZCwgdW5ib3VuZAooWEVOKSAgICBJUlE6ICAxNCBhZmZpbml0
eTowMDAxIHZlYzphMCB0eXBlPUlPLUFQSUMtZWRnZSAgICBzdGF0dXM9MDAwMDAwMDIgbWFwcGVk
LCB1bmJvdW5kCihYRU4pICAgIElSUTogIDE1IGFmZmluaXR5OjAwMDEgdmVjOmE4IHR5cGU9SU8t
QVBJQy1lZGdlICAgIHN0YXR1cz0wMDAwMDAwMiBtYXBwZWQsIHVuYm91bmQKKFhFTikgICAgSVJR
OiAgMTYgYWZmaW5pdHk6MDAwMSB2ZWM6YjAgdHlwZT1JTy1BUElDLWxldmVsICAgc3RhdHVzPTAw
MDAwMDAyIG1hcHBlZCwgdW5ib3VuZAooWEVOKSAgICBJUlE6ICAxOCBhZmZpbml0eTowMDBmIHZl
YzpiOCB0eXBlPUlPLUFQSUMtbGV2ZWwgICBzdGF0dXM9MDAwMDAwMDIgbWFwcGVkLCB1bmJvdW5k
CihYRU4pICAgIElSUTogIDE5IGFmZmluaXR5OjAwMDEgdmVjOmMwIHR5cGU9SU8tQVBJQy1sZXZl
bCAgIHN0YXR1cz0wMDAwMDAwMiBtYXBwZWQsIHVuYm91bmQKKFhFTikgICAgSVJROiAgMjAgYWZm
aW5pdHk6MDAwZiB2ZWM6ZDggdHlwZT1JTy1BUElDLWxldmVsICAgc3RhdHVzPTAwMDAwMDAyIG1h
cHBlZCwgdW5ib3VuZAooWEVOKSAgICBJUlE6ICAyMiBhZmZpbml0eTowMDAxIHZlYzpkMCB0eXBl
PUlPLUFQSUMtbGV2ZWwgICBzdGF0dXM9MDAwMDAwMDIgbWFwcGVkLCB1bmJvdW5kCihYRU4pICAg
IElSUTogIDIzIGFmZmluaXR5OjAwMDEgdmVjOmM4IHR5cGU9SU8tQVBJQy1sZXZlbCAgIHN0YXR1
cz0wMDAwMDAwMiBtYXBwZWQsIHVuYm91bmQKKFhFTikgICAgSVJROiAgMjQgYWZmaW5pdHk6MDAw
MSB2ZWM6MjggdHlwZT1ETUFfTVNJICAgICAgICAgc3RhdHVzPTAwMDAwMDAwIG1hcHBlZCwgdW5i
b3VuZAooWEVOKSAgICBJUlE6ICAyNSBhZmZpbml0eTowMDAxIHZlYzozMCB0eXBlPURNQV9NU0kg
ICAgICAgICBzdGF0dXM9MDAwMDAwMDAgbWFwcGVkLCB1bmJvdW5kCihYRU4pICAgIElSUTogIDI2
IGFmZmluaXR5OjAwMDEgdmVjOjkxIHR5cGU9UENJLU1TSSAgICAgICAgIHN0YXR1cz0wMDAwMDAx
MCBpbi1mbGlnaHQ9MCBkb21haW4tbGlzdD0wOjI3OSgtUy0tKSwKKFhFTikgICAgSVJROiAgMjcg
YWZmaW5pdHk6MDAwMSB2ZWM6MjkgdHlwZT1QQ0ktTVNJICAgICAgICAgc3RhdHVzPTAwMDAwMDEw
IGluLWZsaWdodD0wIGRvbWFpbi1saXN0PTA6Mjc4KC1TLS0pLAooWEVOKSAgICBJUlE6ICAyOCBh
ZmZpbml0eTowMDAxIHZlYzozMSB0eXBlPVBDSS1NU0kgICAgICAgICBzdGF0dXM9MDAwMDAwMTAg
aW4tZmxpZ2h0PTAgZG9tYWluLWxpc3Q9MDoyNzcoLVMtLSksCihYRU4pIElPLUFQSUMgaW50ZXJy
dXB0IGluZm9ybWF0aW9uOgooWEVOKSAgICAgSVJRICAwIFZlYzI0MDoKKFhFTikgICAgICAgQXBp
YyAweDAwLCBQaW4gIDI6IHZlYz1mMCBkZWxpdmVyeT1Mb1ByaSBkZXN0PUwgc3RhdHVzPTAgcG9s
YXJpdHk9MCBpcnI9MCB0cmlnPUUgbWFzaz0wIGRlc3RfaWQ6MAooWEVOKSAgICAgSVJRICAxIFZl
YyA1NjoKKFhFTikgICAgICAgQXBpYyAweDAwLCBQaW4gIDE6IHZlYz0zOCBkZWxpdmVyeT1Mb1By
aSBkZXN0PUwgc3RhdHVzPTAgcG9sYXJpdHk9MCBpcnI9MCB0cmlnPUUgbWFzaz0wIGRlc3RfaWQ6
MAooWEVOKSAgICAgSVJRICAzIFZlYyA2NDoKKFhFTikgICAgICAgQXBpYyAweDAwLCBQaW4gIDM6
IHZlYz00MCBkZWxpdmVyeT1Mb1ByaSBkZXN0PUwgc3RhdHVzPTAgcG9sYXJpdHk9MCBpcnI9MCB0
cmlnPUUgbWFzaz0wIGRlc3RfaWQ6MAooWEVOKSAgICAgSVJRICA0IFZlYyA3MjoKKFhFTikgICAg
ICAgQXBpYyAweDAwLCBQaW4gIDQ6IHZlYz00OCBkZWxpdmVyeT1Mb1ByaSBkZXN0PUwgc3RhdHVz
PTAgcG9sYXJpdHk9MCBpcnI9MCB0cmlnPUUgbWFzaz0wIGRlc3RfaWQ6MAooWEVOKSAgICAgSVJR
ICA1IFZlYyA4MDoKKFhFTikgICAgICAgQXBpYyAweDAwLCBQaW4gIDU6IHZlYz01MCBkZWxpdmVy
eT1Mb1ByaSBkZXN0PUwgc3RhdHVzPTAgcG9sYXJpdHk9MCBpcnI9MCB0cmlnPUUgbWFzaz0wIGRl
c3RfaWQ6MAooWEVOKSAgICAgSVJRICA2IFZlYyA4ODoKKFhFTikgICAgICAgQXBpYyAweDAwLCBQ
aW4gIDY6IHZlYz01OCBkZWxpdmVyeT1Mb1ByaSBkZXN0PUwgc3RhdHVzPTAgcG9sYXJpdHk9MCBp
cnI9MCB0cmlnPUUgbWFzaz0wIGRlc3RfaWQ6MAooWEVOKSAgICAgSVJRICA3IFZlYyA5NjoKKFhF
TikgICAgICAgQXBpYyAweDAwLCBQaW4gIDc6IHZlYz02MCBkZWxpdmVyeT1Mb1ByaSBkZXN0PUwg
c3RhdHVzPTAgcG9sYXJpdHk9MCBpcnI9MCB0cmlnPUUgbWFzaz0wIGRlc3RfaWQ6MAooWEVOKSAg
ICAgSVJRICA4IFZlYzEwNDoKKFhFTikgICAgICAgQXBpYyAweDAwLCBQaW4gIDg6IHZlYz02OCBk
ZWxpdmVyeT1Mb1ByaSBkZXN0PUwgc3RhdHVzPTAgcG9sYXJpdHk9MCBpcnI9MCB0cmlnPUUgbWFz
az0wIGRlc3RfaWQ6MAooWEVOKSAgICAgSVJRICA5IFZlYzExMjoKKFhFTikgICAgICAgQXBpYyAw
eDAwLCBQaW4gIDk6IHZlYz03MCBkZWxpdmVyeT1Mb1ByaSBkZXN0PUwgc3RhdHVzPTAgcG9sYXJp
dHk9MCBpcnI9MCB0cmlnPUwgbWFzaz0wIGRlc3RfaWQ6MAooWEVOKSAgICAgSVJRIDEwIFZlYzEy
MDoKKFhFTikgICAgICAgQXBpYyAweDAwLCBQaW4gMTA6IHZlYz03OCBkZWxpdmVyeT1Mb1ByaSBk
ZXN0PUwgc3RhdHVzPTAgcG9sYXJpdHk9MCBpcnI9MCB0cmlnPUUgbWFzaz0wIGRlc3RfaWQ6MAoo
WEVOKSAgICAgSVJRIDExIFZlYzEzNjoKKFhFTikgICAgICAgQXBpYyAweDAwLCBQaW4gMTE6IHZl
Yz04OCBkZWxpdmVyeT1Mb1ByaSBkZXN0PUwgc3RhdHVzPTAgcG9sYXJpdHk9MCBpcnI9MCB0cmln
PUUgbWFzaz0wIGRlc3RfaWQ6MAooWEVOKSAgICAgSVJRIDEyIFZlYzE0NDoKKFhFTikgICAgICAg
QXBpYyAweDAwLCBQaW4gMTI6IHZlYz05MCBkZWxpdmVyeT1Mb1ByaSBkZXN0PUwgc3RhdHVzPTAg
cG9sYXJpdHk9MCBpcnI9MCB0cmlnPUUgbWFzaz0wIGRlc3RfaWQ6MAooWEVOKSAgICAgSVJRIDEz
IFZlYzE1MjoKKFhFTikgICAgICAgQXBpYyAweDAwLCBQaW4gMTM6IHZlYz05OCBkZWxpdmVyeT1M
b1ByaSBkZXN0PUwgc3RhdHVzPTAgcG9sYXJpdHk9MCBpcnI9MCB0cmlnPUUgbWFzaz0xIGRlc3Rf
aWQ6MAooWEVOKSAgICAgSVJRIDE0IFZlYzE2MDoKKFhFTikgICAgICAgQXBpYyAweDAwLCBQaW4g
MTQ6IHZlYz1hMCBkZWxpdmVyeT1Mb1ByaSBkZXN0PUwgc3RhdHVzPTAgcG9sYXJpdHk9MCBpcnI9
MCB0cmlnPUUgbWFzaz0wIGRlc3RfaWQ6MAooWEVOKSAgICAgSVJRIDE1IFZlYzE2ODoKKFhFTikg
ICAgICAgQXBpYyAweDAwLCBQaW4gMTU6IHZlYz1hOCBkZWxpdmVyeT1Mb1ByaSBkZXN0PUwgc3Rh
dHVzPTAgcG9sYXJpdHk9MCBpcnI9MCB0cmlnPUUgbWFzaz0wIGRlc3RfaWQ6MAooWEVOKSAgICAg
SVJRIDE2IFZlYzE3NjoKKFhFTikgICAgICAgQXBpYyAweDAwLCBQaW4gMTY6IHZlYz1iMCBkZWxp
dmVyeT1Mb1ByaSBkZXN0PUwgc3RhdHVzPTAgcG9sYXJpdHk9MSBpcnI9MCB0cmlnPUwgbWFzaz0x
IGRlc3RfaWQ6MAooWEVOKSAgICAgSVJRIDE4IFZlYzE4NDoKKFhFTikgICAgICAgQXBpYyAweDAw
LCBQaW4gMTg6IHZlYz1iOCBkZWxpdmVyeT1Mb1ByaSBkZXN0PUwgc3RhdHVzPTAgcG9sYXJpdHk9
MSBpcnI9MCB0cmlnPUwgbWFzaz0xIGRlc3RfaWQ6MAooWEVOKSAgICAgSVJRIDE5IFZlYzE5MjoK
KFhFTikgICAgICAgQXBpYyAweDAwLCBQaW4gMTk6IHZlYz1jMCBkZWxpdmVyeT1Mb1ByaSBkZXN0
PUwgc3RhdHVzPTAgcG9sYXJpdHk9MSBpcnI9MCB0cmlnPUwgbWFzaz0xIGRlc3RfaWQ6MAooWEVO
KSAgICAgSVJRIDIwIFZlYzIxNjoKKFhFTikgICAgICAgQXBpYyAweDAwLCBQaW4gMjA6IHZlYz1k
OCBkZWxpdmVyeT1Mb1ByaSBkZXN0PUwgc3RhdHVzPTAgcG9sYXJpdHk9MSBpcnI9MCB0cmlnPUwg
bWFzaz0xIGRlc3RfaWQ6MAooWEVOKSAgICAgSVJRIDIyIFZlYzIwODoKKFhFTikgICAgICAgQXBp
YyAweDAwLCBQaW4gMjI6IHZlYz1kMCBkZWxpdmVyeT1Mb1ByaSBkZXN0PUwgc3RhdHVzPTAgcG9s
YXJpdHk9MSBpcnI9MCB0cmlnPUwgbWFzaz0xIGRlc3RfaWQ6MAooWEVOKSAgICAgSVJRIDIzIFZl
YzIwMDoKKFhFTikgICAgICAgQXBpYyAweDAwLCBQaW4gMjM6IHZlYz1jOCBkZWxpdmVyeT1Mb1By
aSBkZXN0PUwgc3RhdHVzPTAgcG9sYXJpdHk9MSBpcnI9MCB0cmlnPUwgbWFzaz0xIGRlc3RfaWQ6
MAooWEVOKSBbbTogbWVtb3J5IGluZm9dCihYRU4pIFBoeXNpY2FsIG1lbW9yeSBpbmZvcm1hdGlv
bjoKKFhFTikgICAgIFhlbiBoZWFwOiAwa0IgZnJlZQooWEVOKSAgICAgaGVhcFsxNF06IDY0NTEy
a0IgZnJlZQooWEVOKSAgICAgaGVhcFsxNV06IDEzMTA3MmtCIGZyZWUKKFhFTikgICAgIGhlYXBb
MTZdOiAyNjIxNDRrQiBmcmVlCihYRU4pICAgICBoZWFwWzE3XTogNTIyMjM2a0IgZnJlZQooWEVO
KSAgICAgaGVhcFsxOF06IDEwNDg1NzJrQiBmcmVlCihYRU4pICAgICBoZWFwWzE5XTogNjkxMzQ4
a0IgZnJlZQooWEVOKSAgICAgaGVhcFsyMF06IDUzNjkwMGtCIGZyZWUKKFhFTikgICAgIERvbSBo
ZWFwOiAzMjU2Nzg0a0IgZnJlZQooWEVOKSBbbjogTk1JIHN0YXRpc3RpY3NdCihYRU4pIENQVQlO
TUkKKFhFTikgICAwCSAgMAooWEVOKSAgIDEJICAwCihYRU4pICAgMgkgIDAKKFhFTikgICAzCSAg
MAooWEVOKSBkb20wIHZjcHUwOiBOTUkgbmVpdGhlciBwZW5kaW5nIG5vciBtYXNrZWQKKFhFTikg
W3E6IGR1bXAgZG9tYWluIChhbmQgZ3Vlc3QgZGVidWcpIGluZm9dCihYRU4pICdxJyBwcmVzc2Vk
IC0+IGR1bXBpbmcgZG9tYWluIGluZm8gKG5vdz0weDVDOkY2NzNGRTE2KQooWEVOKSBHZW5lcmFs
IGluZm9ybWF0aW9uIGZvciBkb21haW4gMDoKKFhFTikgICAgIHJlZmNudD0zIGR5aW5nPTAgcGF1
c2VfY291bnQ9MAooWEVOKSAgICAgbnJfcGFnZXM9MTg3NTM5IHhlbmhlYXBfcGFnZXM9NiBzaGFy
ZWRfcGFnZXM9MCBwYWdlZF9wYWdlcz0wIGRpcnR5X2NwdXM9ezEtMn0gbWF4X3BhZ2VzPTE4ODE0
NwooWEVOKSAgICAgaGFuZGxlPTAwMDAwMDAwLTAwMDAtMDAwMC0wMDAwLTAwMDAwMDAwMDAwMCB2
bV9hc3Npc3Q9MDAwMDAwMGQKKFhFTikgUmFuZ2VzZXRzIGJlbG9uZ2luZyB0byBkb21haW4gMDoK
KFhFTikgICAgIEkvTyBQb3J0cyAgeyAwLTFmLCAyMi0zZiwgNDQtNjAsIDYyLTlmLCBhMi00MDcs
IDQwYy1jZmIsIGQwMC0yMDRmLCAyMDU4LWZmZmYgfQooWEVOKSAgICAgSW50ZXJydXB0cyB7IDAt
Mjc0LCAyNzctMjc5IH0KKFhFTikgICAgIEkvTyBNZW1vcnkgeyAwLWZlYmZmLCBmZWMwMS1mZWRm
ZiwgZmVlMDEtZmZmZmZmZmZmZmZmZmZmZiB9CihYRU4pIE1lbW9yeSBwYWdlcyBiZWxvbmdpbmcg
dG8gZG9tYWluIDA6CihYRU4pICAgICBEb21QYWdlIGxpc3QgdG9vIGxvbmcgdG8gZGlzcGxheQoo
WEVOKSAgICAgWGVuUGFnZSAwMDAwMDAwMDAwMTQ4OTE3OiBjYWY9YzAwMDAwMDAwMDAwMDAwMiwg
dGFmPTc0MDAwMDAwMDAwMDAwMDIKKFhFTikgICAgIFhlblBhZ2UgMDAwMDAwMDAwMDE0ODkxNjog
Y2FmPWMwMDAwMDAwMDAwMDAwMDEsIHRhZj03NDAwMDAwMDAwMDAwMDAxCihYRU4pICAgICBYZW5Q
YWdlIDAwMDAwMDAwMDAxNDg5MTU6IGNhZj1jMDAwMDAwMDAwMDAwMDAxLCB0YWY9NzQwMDAwMDAw
MDAwMDAwMQooWEVOKSAgICAgWGVuUGFnZSAwMDAwMDAwMDAwMTQ4OTE0OiBjYWY9YzAwMDAwMDAw
MDAwMDAwMSwgdGFmPTc0MDAwMDAwMDAwMDAwMDEKKFhFTikgICAgIFhlblBhZ2UgMDAwMDAwMDAw
MDBhYTBmZDogY2FmPWMwMDAwMDAwMDAwMDAwMDIsIHRhZj03NDAwMDAwMDAwMDAwMDAyCihYRU4p
ICAgICBYZW5QYWdlIDAwMDAwMDAwMDAxM2Y0Mjg6IGNhZj1jMDAwMDAwMDAwMDAwMDAyLCB0YWY9
NzQwMDAwMDAwMDAwMDAwMgooWEVOKSBWQ1BVIGluZm9ybWF0aW9uIGFuZCBjYWxsYmFja3MgZm9y
IGRvbWFpbiAwOgooWEVOKSAgICAgVkNQVTA6IENQVTAgW2hhcz1GXSBwb2xsPTAgdXBjYWxsX3Bl
bmQgPSAwMSwgdXBjYWxsX21hc2sgPSAwMCBkaXJ0eV9jcHVzPXt9IGNwdV9hZmZpbml0eT17MH0K
KFhFTikgICAgIHBhdXNlX2NvdW50PTAgcGF1c2VfZmxhZ3M9MAooWEVOKSAgICAgTm8gcGVyaW9k
aWMgdGltZXIKKFhFTikgICAgIFZDUFUxOiBDUFUxIFtoYXM9Rl0gcG9sbD0wIHVwY2FsbF9wZW5k
ID0gMDAsIHVwY2FsbF9tYXNrID0gMDAgZGlydHlfY3B1cz17MX0gY3B1X2FmZmluaXR5PXswLTE1
fQooWEVOKSAgICAgcGF1c2VfY291bnQ9MCBwYXVzZV9mbGFncz0xCihYRU4pICAgICBObyBwZXJp
b2RpYyB0aW1lcgooWEVOKSAgICAgVkNQVTI6IENQVTIgW2hhcz1GXSBwb2xsPTAgdXBjYWxsX3Bl
bmQgPSAwMCwgdXBjYWxsX21hc2sgPSAwMCBkaXJ0eV9jcHVzPXsyfSBjcHVfYWZmaW5pdHk9ezAt
MTV9CihYRU4pICAgICBwYXVzZV9jb3VudD0wIHBhdXNlX2ZsYWdzPTEKKFhFTikgICAgIE5vIHBl
cmlvZGljIHRpbWVyCihYRU4pICAgICBWQ1BVMzogQ1BVMyBbaGFzPUZdIHBvbGw9MCB1cGNhbGxf
cGVuZCA9IDAwLCB1cGNhbGxfbWFzayA9IDAwIGRpcnR5X2NwdXM9e30gY3B1X2FmZmluaXR5PXsw
LTE1fQooWEVOKSAgICAgcGF1c2VfY291bnQ9MCBwYXVzZV9mbGFncz0xCihYRU4pICAgICBObyBw
ZXJpb2RpYyB0aW1lcgooWEVOKSBOb3RpZnlpbmcgZ3Vlc3QgMDowICh2aXJxIDEsIHBvcnQgNSwg
c3RhdCAwLzAvLTEpCihYRU4pIE5vdGlmeWluZyBndWVzdCAwOjEgKHZpcnEgMSwgcG9ydCAxMSwg
c3RhdCAwLzAvMCkKKFhFTikgTm90aWZ5aW5nIGd1ZXN0IDA6MiAodmlycSAxLCBwb3J0IDE3LCBz
dGF0IDAvMC8wKQooWEVOKSBOb3RpZnlpbmcgZ3Vlc3QgMDozICh2aXJxIDEsIHBvcnQgMjMsIHN0
YXQgMC8wLzApCgooWEVOKSBTaGFyZWQgZnJhbWVzIDAgLS0gU2F2ZWQgZnJhbWVzIDAKWyAgMzk5
LjQ1MDY5N10gdihYRU4pIFtyOiBkdW1wIHJ1biBxdWV1ZXNdCmNwdSAxCihYRU4pIHNjaGVkX3Nt
dF9wb3dlcl9zYXZpbmdzOiBkaXNhYmxlZAooWEVOKSBOT1c9MHgwMDAwMDA1RDAyNDYxNTVBCihY
RU4pIElkbGUgY3B1cG9vbDoKKFhFTikgU2NoZWR1bGVyOiBTTVAgQ3JlZGl0IFNjaGVkdWxlciAo
Y3JlZGl0KQpbICAzOTkuNDUwNjk4XSAgKFhFTikgaW5mbzoKKFhFTikgCW5jcHVzICAgICAgICAg
ICAgICA9IDQKKFhFTikgCW1hc3RlciAgICAgICAgICAgICA9IDAKKFhFTikgCWNyZWRpdCAgICAg
ICAgICAgICA9IDQwMAooWEVOKSAJY3JlZGl0IGJhbGFuY2UgICAgID0gNjcKKFhFTikgCXdlaWdo
dCAgICAgICAgICAgICA9IDI1NgooWEVOKSAJcnVucV9zb3J0ICAgICAgICAgID0gMjgwMgooWEVO
KSAJZGVmYXVsdC13ZWlnaHQgICAgID0gMjU2CihYRU4pIAl0c2xpY2UgICAgICAgICAgICAgPSAx
MG1zCihYRU4pIAlyYXRlbGltaXQgICAgICAgICAgPSAxMDAwdXMKKFhFTikgCWNyZWRpdHMgcGVy
IG1zZWMgICA9IDEwCihYRU4pIAl0aWNrcyBwZXIgdHNsaWNlICAgPSAxCihYRU4pIAltaWdyYXRp
b24gZGVsYXkgICAgPSAwdXMKIChYRU4pIGlkbGVyczogMDAwYwooWEVOKSBhY3RpdmUgdmNwdXM6
CjA6IG1hc2tlZD0wIHBlbmQoWEVOKSAJICAxOiBpbmc9MSBldmVudF9zZWwgWzAuMV0gcHJpPS0x
IGZsYWdzPTAgY3B1PTEgY3JlZGl0PS01MjEgW3c9MjU2XQowMDAwMDAwMDAwMDAwMDAxKFhFTikg
Q3B1cG9vbCAwOgooWEVOKSBTY2hlZHVsZXI6IFNNUCBDcmVkaXQgU2NoZWR1bGVyIChjcmVkaXQp
CihYRU4pIGluZm86CihYRU4pIAluY3B1cyAgICAgICAgICAgICAgPSA0CihYRU4pIAltYXN0ZXIg
ICAgICAgICAgICAgPSAwCihYRU4pIAljcmVkaXQgICAgICAgICAgICAgPSA0MDAKKFhFTikgCWNy
ZWRpdCBiYWxhbmNlICAgICA9IDY3CihYRU4pIAl3ZWlnaHQgICAgICAgICAgICAgPSAyNTYKKFhF
TikgCXJ1bnFfc29ydCAgICAgICAgICA9IDI4MDIKKFhFTikgCWRlZmF1bHQtd2VpZ2h0ICAgICA9
IDI1NgooWEVOKSAJdHNsaWNlICAgICAgICAgICAgID0gMTBtcwooWEVOKSAJcmF0ZWxpbWl0ICAg
ICAgICAgID0gMTAwMHVzCihYRU4pIAljcmVkaXRzIHBlciBtc2VjICAgPSAxMAooWEVOKSAJdGlj
a3MgcGVyIHRzbGljZSAgID0gMQooWEVOKSAJbWlncmF0aW9uIGRlbGF5ICAgID0gMHVzCgooWEVO
KSBpZGxlcnM6IDAwMGMKKFhFTikgYWN0aXZlIHZjcHVzOgooWEVOKSAJICAxOiBbMC4xXSBwcmk9
LTEgZmxhZ3M9MCBjcHU9MSBjcmVkaXQ9LTEwNzkgW3c9MjU2XQpbICAzOTkuNTIxMjEzXSAgKFhF
TikgQ1BVWzAwXSAgIHNvcnQ9MjgwMiwgc2libGluZz0wMDAxLCAxOiBtYXNrZWQ9MCBwZW5kY29y
ZT0wMDBmCihYRU4pIAlydW46IFszMjc2Ny4wXSBwcmk9MCBmbGFncz0wIGNwdT0wCihYRU4pIAkg
IDE6IFswLjBdIHByaT0wIGZsYWdzPTAgY3B1PTAgY3JlZGl0PTg0IFt3PTI1Nl0KaW5nPTEgZXZl
bnRfc2VsIChYRU4pIENQVVswMV0gIHNvcnQ9MjgwMiwgc2libGluZz0wMDAyLCBjb3JlPTAwMGYK
KFhFTikgCXJ1bjogWzAuMV0gcHJpPS0xIGZsYWdzPTAgY3B1PTEgY3JlZGl0PS0xMzQ5IFt3PTI1
Nl0KKFhFTikgCSAgMTogWzMyNzY3LjFdIHByaT0tNjQgZmxhZ3M9MCBjcHU9MQooWEVOKSBDUFVb
MDJdIDAwMDAwMDAwMDAwMDAwMDEgc29ydD0yODAyLCBzaWJsaW5nPTAwMDQsIApjb3JlPTAwMGYK
WyAgMzk5LjU4ODk1N10gIChYRU4pIAlydW46ICBbMzI3NjcuMl0gcHJpPS02NCBmbGFncz0wIGNw
dT0yCjI6IG1hc2tlZD0xIHBlbmQoWEVOKSBDUFVbMDNdIGluZz0xIGV2ZW50X3NlbCAgc29ydD0y
ODAyLCBzaWJsaW5nPTAwMDgsIDAwMDAwMDAwMDAwMDAwMDFjb3JlPTAwMGYKKFhFTikgCXJ1bjog
ClszMjc2Ny4zXSBwcmk9LTY0IGZsYWdzPTAgY3B1PTMKWyAgMzk5LjYyNjk5Ml0gIChYRU4pIFtz
OiBkdW1wIHNvZnR0c2Mgc3RhdHNdCiAoWEVOKSBUU0MgbWFya2VkIGFzIHJlbGlhYmxlLCB3YXJw
ID0gMjQ5OTQwMTU3Njg1IChjb3VudD00KQozOiBtYXNrZWQ9MSBwZW5kKFhFTikgTm8gZG9tYWlu
cyBoYXZlIGVtdWxhdGVkIFRTQwppbmc9MCBldmVudF9zZWwgKFhFTikgW3Q6IGRpc3BsYXkgbXVs
dGktY3B1IGNsb2NrIGluZm9dCjAwMDAwMDAwMDAwMDAwMDAoWEVOKSBTeW5jZWQgc3RpbWUgc2tl
dzogbWF4PTgxOTlucyBhdmc9NDQxNm5zIHNhbXBsZXM9MyBjdXJyZW50PTUwMjVucwooWEVOKSBT
eW5jZWQgY3ljbGVzIHNrZXc6IG1heD0xNzMgYXZnPTE2NSBzYW1wbGVzPTMgY3VycmVudD0xNzMK
CihYRU4pIFt1OiBkdW1wIG51bWEgaW5mb10KWyAgMzk5LjY2NzQwMV0gIChYRU4pICd1JyBwcmVz
c2VkIC0+IGR1bXBpbmcgbnVtYSBpbmZvIChub3ctMHg1RDowRkU5NDJFMSkKIChYRU4pIGlkeDAg
LT4gTk9ERTAgc3RhcnQtPjAgc2l6ZS0+MTM2OTYwMCBmcmVlLT44MTQxOTYKKFhFTikgcGh5c190
b19uaWQoMDAwMDAwMDAwMDAwMTAwMCkgLT4gMCBzaG91bGQgYmUgMAoKKFhFTikgQ1BVMCAtPiBO
T0RFMAooWEVOKSBDUFUxIC0+IE5PREUwCihYRU4pIENQVTIgLT4gTk9ERTAKKFhFTikgQ1BVMyAt
PiBOT0RFMAooWEVOKSBNZW1vcnkgbG9jYXRpb24gb2YgZWFjaCBkb21haW46CihYRU4pIERvbWFp
biAwICh0b3RhbDogMTg3NTM5KToKWyAgMzk5LjcwNTc5OV0gcChYRU4pICAgICBOb2RlIDA6IDE4
NzUzOQplbmRpbmc6CihYRU4pIFt2OiBkdW1wIEludGVsJ3MgVk1DU10KWyAgMzk5LjcwNTgwMF0g
IChYRU4pICoqKioqKioqKioqIFZNQ1MgQXJlYXMgKioqKioqKioqKioqKioKICAoWEVOKSAqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKgowMDAwMDAwMDAwMDAwMDAwKFhFTikg
W3o6IHByaW50IGlvYXBpYyBpbmZvXQogKFhFTikgbnVtYmVyIG9mIE1QIElSUSBzb3VyY2VzOiAx
NS4KMDAwMDAwMDAwMDAwMDAwMChYRU4pIG51bWJlciBvZiBJTy1BUElDICMyIHJlZ2lzdGVyczog
MjQuCihYRU4pIHRlc3RpbmcgdGhlIElPIEFQSUMuLi4uLi4uLi4uLi4uLi4uLi4uLi4uLgogKFhF
TikgSU8gQVBJQyAjMi4uLi4uLgooWEVOKSAuLi4uIHJlZ2lzdGVyICMwMDogMDIwMDAwMDAKKFhF
TikgLi4uLi4uLiAgICA6IHBoeXNpY2FsIEFQSUMgaWQ6IDAyCihYRU4pIC4uLi4uLi4gICAgOiBE
ZWxpdmVyeSBUeXBlOiAwCihYRU4pIC4uLi4uLi4gICAgOiBMVFMgICAgICAgICAgOiAwCjAwMDAw
MDAwMDAwMDAwMDAoWEVOKSAuLi4uIHJlZ2lzdGVyICMwMTogMDAxNzAwMjAKKFhFTikgLi4uLi4u
LiAgICAgOiBtYXggcmVkaXJlY3Rpb24gZW50cmllczogMDAxNwooWEVOKSAuLi4uLi4uICAgICA6
IFBSUSBpbXBsZW1lbnRlZDogMAooWEVOKSAuLi4uLi4uICAgICA6IElPIEFQSUMgdmVyc2lvbjog
MDAyMAooWEVOKSAuLi4uIElSUSByZWRpcmVjdGlvbiB0YWJsZToKKFhFTikgIE5SIExvZyBQaHkg
TWFzayBUcmlnIElSUiBQb2wgU3RhdCBEZXN0IERlbGkgVmVjdDogICAKIChYRU4pICAwMCAwMDAg
MDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAwICAgIDAwCjAwMDAwMDAwMDAwMDAwMDAo
WEVOKSAgMDEgMDAwIDAwICAwICAgIDAgICAgMCAgIDAgICAwICAgIDEgICAgMSAgICAzOAogKFhF
TikgIDAyIDAwMCAwMCAgMCAgICAwICAgIDAgICAwICAgMCAgICAxICAgIDEgICAgRjAKMDAwMDAw
MDAwMDAwMDAwMChYRU4pICAwMyAwMDAgMDAgIDAgICAgMCAgICAwICAgMCAgIDAgICAgMSAgICAx
ICAgIDQwCiAoWEVOKSAgMDQgMDAwIDAwICAwICAgIDAgICAgMCAgIDAgICAwICAgIDEgICAgMSAg
ICA0OAowMDAwMDAwMDAwMDAwMDAwKFhFTikgIDA1IDAwMCAwMCAgMCAgICAwICAgIDAgICAwICAg
MCAgICAxICAgIDEgICAgNTAKIChYRU4pICAwNiAwMDAgMDAgIDAgICAgMCAgICAwICAgMCAgIDAg
ICAgMSAgICAxICAgIDU4CjAwMDAwMDAwMDAwMDAwMDAoWEVOKSAgMDcgMDAwIDAwICAwICAgIDAg
ICAgMCAgIDAgICAwICAgIDEgICAgMSAgICA2MAogKFhFTikgIDA4IDAwMCAwMCAgMCAgICAwICAg
IDAgICAwICAgMCAgICAxICAgIDEgICAgNjgKMDAwMDAwMDAwMDAwMDAwMChYRU4pICAwOSAwMDAg
MDAgIDAgICAgMSAgICAwICAgMCAgIDAgICAgMSAgICAxICAgIDcwCgooWEVOKSAgMGEgMDAwIDAw
ICAwICAgIDAgICAgMCAgIDAgICAwICAgIDEgICAgMSAgICA3OApbICAzOTkuODUwNjkwXSAgKFhF
TikgIDBiIDAwMCAwMCAgMCAgICAwICAgIDAgICAwICAgMCAgICAxICAgIDEgICAgODgKICAoWEVO
KSAgMGMgMDAwIDAwICAwICAgIDAgICAgMCAgIDAgICAwICAgIDEgICAgMSAgICA5MAowMDAwMDAw
MDAwMDAwMDAwKFhFTikgIDBkIDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAgICAxICAgIDEg
ICAgOTgKIChYRU4pICAwZSAwMDAgMDAgIDAgICAgMCAgICAwICAgMCAgIDAgICAgMSAgICAxICAg
IEEwCjAwMDAwMDAwMDAwMDAwMDAoWEVOKSAgMGYgMDAwIDAwICAwICAgIDAgICAgMCAgIDAgICAw
ICAgIDEgICAgMSAgICBBOAogKFhFTikgIDEwIDAwMCAwMCAgMSAgICAxICAgIDAgICAxICAgMCAg
ICAxICAgIDEgICAgQjAKMDAwMDAwMDAwMDAwMDAwMChYRU4pICAxMSAwMDAgMDAgIDEgICAgMCAg
ICAwICAgMCAgIDAgICAgMCAgICAwICAgIDAwCiAoWEVOKSAgMTIgMDAwIDAwICAxICAgIDEgICAg
MCAgIDEgICAwICAgIDEgICAgMSAgICBCOAowMDAwMDAwMDAwMDAwMDAwKFhFTikgIDEzIDAwMCAw
MCAgMSAgICAxICAgIDAgICAxICAgMCAgICAxICAgIDEgICAgQzAKIChYRU4pICAxNCAwMDAgMDAg
IDEgICAgMSAgICAwICAgMSAgIDAgICAgMSAgICAxICAgIEQ4CjAwMDAwMDAwMDAwMDAwMDAoWEVO
KSAgMTUgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAwMAogKFhFTikg
IDE2IDAwMCAwMCAgMSAgICAxICAgIDAgICAxICAgMCAgICAxICAgIDEgICAgRDAKMDAwMDAwMDAw
MDAwMDAwMChYRU4pICAxNyAwMDAgMDAgIDEgICAgMSAgICAwICAgMSAgIDAgICAgMSAgICAxICAg
IEM4CihYRU4pIFVzaW5nIHZlY3Rvci1iYXNlZCBpbmRleGluZwooWEVOKSBJUlEgdG8gcGluIG1h
cHBpbmdzOgogKFhFTikgSVJRMjQwIC0+IDA6MgooWEVOKSBJUlE1NiAtPiAwOjEKKFhFTikgSVJR
NjQgLT4gMDozCihYRU4pIElSUTcyIC0+IDA6NAooWEVOKSBJUlE4MCAtPiAwOjUKKFhFTikgSVJR
ODggLT4gMDo2CihYRU4pIElSUTk2IC0+IDA6NwooWEVOKSBJUlExMDQgLT4gMDo4CihYRU4pIElS
UTExMiAtPiAwOjkKKFhFTikgSVJRMTIwIC0+IDA6MTAKKFhFTikgSVJRMTM2IC0+IDA6MTEKKFhF
TikgSVJRMTQ0IC0+IDA6MTIKKFhFTikgSVJRMTUyIC0+IDA6MTMKKFhFTikgSVJRMTYwIC0+IDA6
MTQKKFhFTikgSVJRMTY4IC0+IDA6MTUKKFhFTikgSVJRMTc2IC0+IDA6MTYKKFhFTikgSVJRMTg0
IC0+IDA6MTgKKFhFTikgSVJRMTkyIC0+IDA6MTkKKFhFTikgSVJRMjE2IC0+IDA6MjAKKFhFTikg
SVJRMjA4IC0+IDA6MjIKKFhFTikgSVJRMjAwIC0+IDA6MjMKKFhFTikgLi4uLi4uLi4uLi4uLi4u
Li4uLi4uLi4uLi4uLi4uLi4uLi4uIGRvbmUuCjAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMApbICAzOTkuOTkzNTIwXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgNDAwLjAwNzQ4
MV0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDQwMC4wMjE0NDJdICAgIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMApbICA0MDAuMDM1NDAzXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgNDAw
LjA0OTM2NV0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDQwMC4wNjMzMjZdICAgIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMjE4MgpbICA0MDAuMDc3Mjg3XSAgICAKWyAgNDAwLjA4MDU5N10gZ2xvYmFs
IG1hc2s6ClsgIDQwMC4wODA1OThdICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZgpbICA0MDAuMDk1
ODEyXSAgICBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYKWyAgNDAwLjEwOTc3M10gICAgZmZmZmZmZmZm
ZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmClsgIDQwMC4xMjM3MzRdICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZm
ZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZgpbICA0
MDAuMTM3Njk0XSAgICBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZm
ZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYKWyAgNDAwLjE1MTY1NV0gICAgZmZm
ZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZm
ZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmClsgIDQwMC4xNjU2MTZdICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZm
ZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZm
ZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZgpbICA0MDAuMTc5NTc3XSAgICBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZm
ZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZm
ZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmU3MDAxMDQxMDUKWyAgNDAwLjE5MzUzOF0g
ICAgClsgIDQwMC4xOTY4NDldIGdsb2JhbGx5IHVubWFza2VkOgpbICA0MDAuMTk2ODUwXSAgICAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgNDAwLjIxMjYwMF0gICAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwClsgIDQwMC4yMjY1NjFdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICA0MDAuMjQwNTIz
XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgNDAwLjI1NDQ4M10gICAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwClsgIDQwMC4yNjg0NDRdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICA0MDAu
MjgyNDA1XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgNDAwLjI5NjM2NV0gICAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAyMDgyClsgIDQwMC4zMTAzMjddICAgIApbICA0MDAuMzEzNjM4XSBsb2NhbCBj
cHUxIG1hc2s6ClsgIDQwMC4zMTM2MzhdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICA0MDAu
MzI5MjEwXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgNDAwLjM0MzE3MV0gICAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwClsgIDQwMC4zNTcxMzJdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApb
ICA0MDAuMzcxMDkyXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgNDAwLjM4NTA1NF0gICAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDQwMC4zOTkwMTRdICAgIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMApbICA0MDAuNDEyOTc1XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDFmODAKWyAgNDAwLjQyNjkz
N10gICAgClsgIDQwMC40MzAyNDddIGxvY2FsbHkgdW5tYXNrZWQ6ClsgIDQwMC40MzAyNDhdICAg
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICA0MDAuNDQ1OTA5XSAgICAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAKWyAgNDAwLjQ1OTg3MF0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDQwMC40NzM4
MzFdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICA0MDAuNDg3NzkzXSAgICAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAKWyAgNDAwLjUwMTc1NF0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDQw
MC41MTU3MTRdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICA0MDAuNTI5Njc2XSAgICAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwODAKWyAgNDAwLjU0MzYzNl0gICAgClsgIDQwMC41NDY5NDddIHBlbmRp
bmcgbGlzdDoKWyAgNDAwLjU0OTk5MV0gICAwOiBldmVudCAxIC0+IGlycSAyNzIgbG9jYWxseS1t
YXNrZWQKWyAgNDAwLjU1NTAwMl0gICAxOiBldmVudCA3IC0+IGlycSAyNzgKWyAgNDAwLjU1ODY3
MV0gICAxOiBldmVudCA4IC0+IGlycSAyNzkgZ2xvYmFsbHktbWFza2VkClsgIDQwMC41NjM3NzJd
ICAgMjogZXZlbnQgMTMgLT4gaXJxIDI4NCBsb2NhbGx5LW1hc2tlZApbICA0MDAuNTY4ODk0XSAK
WyAgNDAwLjU2ODg5NF0gdmNwdSAwClsgIDQwMC41Njg4OTRdICAgMDogbWFza2VkPTAgcGVuZGlu
Zz0wIGV2ZW50X3NlbCAwMDAwMDAwMDAwMDAwMDAwClsgIDQwMC41NzQxNTBdICAgMTogbWFza2Vk
PTAgcGVuZGluZz0wIGV2ZW50X3NlbCAwMDAwMDAwMDAwMDAwMDAwClsgIDQwMC41ODAyMzVdICAg
MjogbWFza2VkPTEgcGVuZGluZz0xIGV2ZW50X3NlbCAwMDAwMDAwMDAwMDAwMDAxClsgIDQwMC41
ODYzMjFdICAgMzogbWFza2VkPTEgcGVuZGluZz0wIGV2ZW50X3NlbCAwMDAwMDAwMDAwMDAwMDAw
ClsgIDQwMC41OTI0MDZdICAgClsgIDQwMC41OTg0OTFdIHBlbmRpbmc6ClsgIDQwMC41OTg0OTJd
ICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICA0MDAuNjEzMzQ3XSAgICAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAKWyAgNDAwLjYyNzMwOF0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDQwMC42
NDEyNjldICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICA0MDAuNjU1MjMwXSAgICAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAKWyAgNDAwLjY2OTE5MV0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsg
IDQwMC42ODMxNTFdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICA0MDAuNjk3MTEyXSAgICAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDIxMDYKWyAgNDAwLjcxMTA3NF0gICAgClsgIDQwMC43MTQzODRdIGds
b2JhbCBtYXNrOgpbICA0MDAuNzE0Mzg1XSAgICBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZm
ZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYg
ZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYKWyAgNDAw
LjcyOTU5OF0gICAgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZm
ZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYg
ZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmClsgIDQwMC43NDM1NjBdICAgIGZmZmZm
ZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZm
ZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYg
ZmZmZmZmZmZmZmZmZmZmZgpbICA0MDAuNzU3NTIxXSAgICBmZmZmZmZmZmZmZmZmZmZmIGZmZmZm
ZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZm
ZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYK
WyAgNDAwLjc3MTQ4Ml0gICAgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZm
ZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZm
ZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmClsgIDQwMC43ODU0NDJdICAg
IGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZm
ZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZm
ZmZmZmYgZmZmZmZmZmZmZmZmZmZmZgpbICA0MDAuNzk5NDAzXSAgICBmZmZmZmZmZmZmZmZmZmZm
IGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZm
ZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZm
ZmZmZmYKWyAgNDAwLjgxMzM2NF0gICAgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZm
IGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZm
ZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZlNzAwMTA0MTA1ClsgIDQwMC44Mjcz
MjZdICAgIApbICA0MDAuODMwNjM2XSBnbG9iYWxseSB1bm1hc2tlZDoKWyAgNDAwLjgzMDYzNl0g
ICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDQwMC44NDYzODddICAgIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMApbICA0MDAuODYwMzQ4XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgNDAwLjg3
NDMxMF0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDQwMC44ODgyNzFdICAgIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMApbICA0MDAuOTAyMjMyXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAg
NDAwLjkxNjE5Ml0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDQwMC45MzAxNTNdICAgIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMjAwMgpbICA0MDAuOTQ0MTE0XSAgICAKWyAgNDAwLjk0NzQyNV0gbG9j
YWwgY3B1MCBtYXNrOgpbICA0MDAuOTQ3NDI1XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAg
NDAwLjk2Mjk5N10gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDQwMC45NzY5NThdICAgIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMApbICA0MDAuOTkwOTE5XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAKWyAgNDAxLjAwNDg3OV0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDQwMS4wMTg4NDFd
ICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICA0MDEuMDMyODAyXSAgICAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAKWyAgNDAxLjA0Njc2M10gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCBmZmZmZmZmZmZlMDAwMDdmClsgIDQwMS4w
NjA3MjRdICAgIApbICA0MDEuMDY0MDM1XSBsb2NhbGx5IHVubWFza2VkOgpbICA0MDEuMDY0MDM2
XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgNDAxLjA3OTY5N10gICAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwClsgIDQwMS4wOTM2NTddICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICA0MDEu
MTA3NjE4XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgNDAxLjEyMTU4MF0gICAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwClsgIDQwMS4xMzU1NDFdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApb
ICA0MDEuMTQ5NTAyXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgNDAxLjE2MzQ2Ml0gICAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAyClsgIDQwMS4xNzc0MjNdICAgIApbICA0MDEuMTgwNzM1XSBw
ZW5kaW5nIGxpc3Q6ClsgIDQwMS4xODM3ODJdICAgMDogZXZlbnQgMSAtPiBpcnEgMjcyIGwyLWNs
ZWFyClsgIDQwMS4xODgyNTNdICAgMDogZXZlbnQgMiAtPiBpcnEgMjczIGwyLWNsZWFyIGdsb2Jh
bGx5LW1hc2tlZApbICA0MDEuMTk0MTU5XSAgIDE6IGV2ZW50IDggLT4gaXJxIDI3OSBsMi1jbGVh
ciBnbG9iYWxseS1tYXNrZWQgbG9jYWxseS1tYXNrZWQKWyAgNDAxLjIwMTQxMl0gICAyOiBldmVu
dCAxMyAtPiBpcnEgMjg0IGwyLWNsZWFyIGxvY2FsbHktbWFza2VkClsgIDQwMS4yMDczNDFdIApb
ICA0MDEuMjA3MzQxXSB2Y3B1IDIKWyAgNDAxLjIwNzM0Ml0gICAwOiBtYXNrZWQ9MCBwZW5kaW5n
PTAgZXZlbnRfc2VsIDAwMDAwMDAwMDAwMDAwMDAKWyAgNDAxLjIxMjYwMV0gICAxOiBtYXNrZWQ9
MCBwZW5kaW5nPTAgZXZlbnRfc2VsIDAwMDAwMDAwMDAwMDAwMDAKWyAgNDAxLjIxODY4Nl0gICAy
OiBtYXNrZWQ9MCBwZW5kaW5nPTEgZXZlbnRfc2VsIDAwMDAwMDAwMDAwMDAwMDEKWyAgNDAxLjIy
NDc3MV0gICAzOiBtYXNrZWQ9MSBwZW5kaW5nPTAgZXZlbnRfc2VsIDAwMDAwMDAwMDAwMDAwMDAK
WyAgNDAxLjIzMDg1N10gICAKWyAgNDAxLjIzNjk0MV0gcGVuZGluZzoKWyAgNDAxLjIzNjk0Ml0g
ICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDQwMS4yNTE3OTddICAgIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMApbICA0MDEuMjY1NzU5XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgNDAxLjI3
OTcxOV0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDQwMS4yOTM2ODFdICAgIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMApbICA0MDEuMzA3NjQxXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAg
NDAxLjMyMTYwMl0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDQwMS4zMzU1NjNdICAgIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwZTEwNApbICA0MDEuMzQ5NTIzXSAgICAKWyAgNDAxLjM1MjgzNV0gZ2xv
YmFsIG1hc2s6ClsgIDQwMS4zNTI4MzVdICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZm
ZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBm
ZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZgpbICA0MDEu
MzY4MDQ5XSAgICBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZm
ZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBm
ZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYKWyAgNDAxLjM4MjAxMV0gICAgZmZmZmZm
ZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZm
ZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBm
ZmZmZmZmZmZmZmZmZmZmClsgIDQwMS4zOTU5NzFdICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZm
ZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZm
ZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZgpb
ICA0MDEuNDA5OTMyXSAgICBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZm
ZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZm
ZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYKWyAgNDAxLjQyMzg5M10gICAg
ZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZm
ZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZm
ZmZmZiBmZmZmZmZmZmZmZmZmZmZmClsgIDQwMS40Mzc4NTRdICAgIGZmZmZmZmZmZmZmZmZmZmYg
ZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZm
ZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZm
ZmZmZgpbICA0MDEuNDUxODE1XSAgICBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYg
ZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZm
ZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmU3MDAxMDQxMDUKWyAgNDAxLjQ2NTc3
Nl0gICAgClsgIDQwMS40NjkwODddIGdsb2JhbGx5IHVubWFza2VkOgpbICA0MDEuNDY5MDg4XSAg
ICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgNDAxLjQ4NDgzOF0gICAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwClsgIDQwMS40OTg3OTldICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICA0MDEuNTEy
NzYwXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgNDAxLjUyNjcyMV0gICAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwClsgIDQwMS41NDA2ODJdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICA0
MDEuNTU0NjQzXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgNDAxLjU2ODYwNF0gICAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDBhMDAwClsgIDQwMS41ODI1NjVdICAgIApbICA0MDEuNTg1ODc2XSBsb2Nh
bCBjcHUyIG1hc2s6ClsgIDQwMS41ODU4NzddICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICA0
MDEuNjAxNDQ3XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgNDAxLjYxNTQwOF0gICAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwClsgIDQwMS42MjkzNzBdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MApbICA0MDEuNjQzMzMwXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgNDAxLjY1NzI5MV0g
ICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDQwMS42NzEyNTNdICAgIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMApbICA0MDEuNjg1MjE0XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwN2UwMDAKWyAgNDAxLjY5
OTE3NF0gICAgClsgIDQwMS43MDI0ODZdIGxvY2FsbHkgdW5tYXNrZWQ6ClsgIDQwMS43MDI0ODdd
ICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICA0MDEuNzE4MTQ3XSAgICAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAKWyAgNDAxLjczMjEwOF0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDQwMS43
NDYwNjldICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICA0MDEuNzYwMDMwXSAgICAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAKWyAgNDAxLjc3Mzk5MV0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsg
IDQwMS43ODc5NTFdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICA0MDEuODAxOTEzXSAgICAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMGEwMDAKWyAgNDAxLjgxNTg3NF0gICAgClsgIDQwMS44MTkxODVdIHBl
bmRpbmcgbGlzdDoKWyAgNDAxLjgyMjIyOF0gICAwOiBldmVudCAyIC0+IGlycSAyNzMgZ2xvYmFs
bHktbWFza2VkIGxvY2FsbHktbWFza2VkClsgIDQwMS44Mjg2NzFdICAgMTogZXZlbnQgOCAtPiBp
cnEgMjc5IGdsb2JhbGx5LW1hc2tlZCBsb2NhbGx5LW1hc2tlZApbICA0MDEuODM1MTE1XSAgIDI6
IGV2ZW50IDEzIC0+IGlycSAyODQKWyAgNDAxLjgzODg3NF0gICAyOiBldmVudCAxNCAtPiBpcnEg
Mjg1IGdsb2JhbGx5LW1hc2tlZApbICA0MDEuODQ0MDY1XSAgIDI6IGV2ZW50IDE1IC0+IGlycSAy
ODYKWyAgNDAxLjg0NzgyM10gICAzOiBldmVudCAxOSAtPiBpcnEgMjkwIGxvY2FsbHktbWFza2Vk
ClsgIDQwMS44NTI5NDZdIApbICA0MDEuODUyOTQ3XSB2Y3B1IDMKWyAgNDAxLjg1Mjk0N10gICAw
OiBtYXNrZWQ9MCBwZW5kaW5nPTAgZXZlbnRfc2VsIDAwMDAwMDAwMDAwMDAwMDAKWyAgNDAxLjg1
ODIwOV0gICAxOiBtYXNrZWQ9MCBwZW5kaW5nPTAgZXZlbnRfc2VsIDAwMDAwMDAwMDAwMDAwMDAK
WyAgNDAxLjg1ODIxMV0gICAyOiBtYXNrZWQ9MCBwZW5kaW5nPTAgZXZlbnRfc2VsIDAwMDAwMDAw
MDAwMDAwMDAKWyAgNDAxLjg1ODIxM10gICAzOiBtYXNrZWQ9MCBwZW5kaW5nPTEgZXZlbnRfc2Vs
IDAwMDAwMDAwMDAwMDAwMDEKWyAgNDAxLjg1ODIxNV0gICAKWyAgNDAxLjg1ODIxNl0gcGVuZGlu
ZzoKWyAgNDAxLjg1ODIxN10gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDQwMS44NTgyMjNd
ICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICA0MDEuODU4MjMwXSAgICAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAKWyAgNDAxLjg1ODI0M10gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDQwMS44
NTgyNDddICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICA0MDEuODU4MjUwXSAgICAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAKWyAgNDAxLjg1ODI1M10gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsg
IDQwMS44NTgyNTZdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDM4NDEwNApbICA0MDEuODU4MjU5XSAgICAK
WyAgNDAxLjg1ODI2MF0gZ2xvYmFsIG1hc2s6ClsgIDQwMS44NTgyNjBdICAgIGZmZmZmZmZmZmZm
ZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYg
ZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZm
ZmZmZmZmZmZmZgpbICA0MDEuODU4MjY0XSAgICBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZm
ZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYg
ZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYKWyAgNDAx
Ljg1ODI2N10gICAgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZm
ZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYg
ZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmClsgIDQwMS44NTgyNzBdICAgIGZmZmZm
ZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZm
ZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYg
ZmZmZmZmZmZmZmZmZmZmZgpbICA0MDEuODU4Mjc0XSAgICBmZmZmZmZmZmZmZmZmZmZmIGZmZmZm
ZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZm
ZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYK
WyAgNDAxLjg1ODI3N10gICAgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZm
ZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZm
ZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmClsgIDQwMS44NTgyODFdICAg
IGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZm
ZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZm
ZmZmZmYgZmZmZmZmZmZmZmZmZmZmZgpbICA0MDEuODU4Mjg0XSAgICBmZmZmZmZmZmZmZmZmZmZm
IGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZm
ZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmU3MDAx
MDQxMDUKWyAgNDAxLjg1ODI4N10gICAgClsgIDQwMS44NTgyODhdIGdsb2JhbGx5IHVubWFza2Vk
OgpbICA0MDEuODU4Mjg4XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgNDAxLjg1ODI5MV0g
ICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDQwMS44NTgyOTVdICAgIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMApbICA0MDEuODU4Mjk4XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgNDAxLjg1
ODMwMV0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDQwMS44NTgzMDRdICAgIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMApbICA0MDEuODU4MzA3XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAg
NDAxLjg1ODMxMF0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMjgwMDAwClsgIDQwMS44NTgzMTNdICAgIApb
ICA0MDEuODU4MzE0XSBsb2NhbCBjcHUzIG1hc2s6ClsgIDQwMS44NTgzMTVdICAgIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMApbICA0MDEuODU4MzE4XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAg
NDAxLjg1ODMyMV0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDQwMS44NTgzMjRdICAgIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMApbICA0MDEuODU4MzI3XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAKWyAgNDAxLjg1ODMzMF0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDQwMS44NTgzMzRd
ICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICA0MDEuODU4MzM3XSAgICAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDFmODAwMDAKWyAgNDAxLjg1ODM0MF0gICAgClsgIDQwMS44NTgzNDFdIGxvY2FsbHkgdW5tYXNr
ZWQ6ClsgIDQwMS44NTgzNDFdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICA0MDEuODU4MzQ0
XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgNDAxLjg1ODM0N10gICAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwClsgIDQwMS44NTgzNTBdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICA0MDEu
ODU4MzU0XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgNDAxLjg1ODM1N10gICAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwClsgIDQwMS44NTgzNjBdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApb
ICA0MDEuODU4MzYzXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAyODAwMDAKWyAgNDAxLjg1ODM2Nl0gICAg
ClsgIDQwMS44NTgzNjddIHBlbmRpbmcgbGlzdDoKWyAgNDAxLjg1ODM2OF0gICAwOiBldmVudCAy
IC0+IGlycSAyNzMgZ2xvYmFsbHktbWFza2VkIGxvY2FsbHktbWFza2VkClsgIDQwMS44NTgzNzBd
ICAgMTogZXZlbnQgOCAtPiBpcnEgMjc5IGdsb2JhbGx5LW1hc2tlZCBsb2NhbGx5LW1hc2tlZApb
ICA0MDEuODU4MzcxXSAgIDI6IGV2ZW50IDE0IC0+IGlycSAyODUgZ2xvYmFsbHktbWFza2VkIGxv
Y2FsbHktbWFza2VkClsgIDQwMS44NTgzNzNdICAgMzogZXZlbnQgMTkgLT4gaXJxIDI5MApbICA0
MDEuODU4Mzc0XSAgIDM6IGV2ZW50IDIwIC0+IGlycSAyOTEgZ2xvYmFsbHktbWFza2VkClsgIDQw
MS44NTgzNzVdICAgMzogZXZlbnQgMjEgLT4gaXJxIDI5MgoK
--14dae9399de136edb804c6d6c8a6
Content-Type: text/plain; charset=US-ASCII; name="xen-dump-s3-first.txt"
Content-Disposition: attachment; filename="xen-dump-s3-first.txt"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_h5nzo9g31

KFhFTikgJyonIHByZXNzZWQgLT4gZmlyaW5nIGFsbCBkaWFnbm9zdGljIGtleWhhbmRsZXJzCihY
RU4pIFtkOiBkdW1wIHJlZ2lzdGVyc10KKFhFTikgJ2QnIHByZXNzZWQgLT4gZHVtcGluZyByZWdp
c3RlcnMKKFhFTikgCihYRU4pICoqKiBEdW1waW5nIENQVTAgaG9zdCBzdGF0ZTogKioqCihYRU4p
IC0tLS1bIFhlbi00LjIuMC1yYzItcHJlICB4ODZfNjQgIGRlYnVnPXkgIFRhaW50ZWQ6ICAgIEMg
XS0tLS0KKFhFTikgQ1BVOiAgICAwCihYRU4pIFJJUDogICAgZTAwODpbPGZmZmY4MmM0ODAxM2Q3
N2U+XSBuczE2NTUwX3BvbGwrMHgyNy8weDMzCihYRU4pIFJGTEFHUzogMDAwMDAwMDAwMDAxMDI4
NiAgIENPTlRFWFQ6IGh5cGVydmlzb3IKKFhFTikgcmF4OiBmZmZmODJjNDgwMzAyNWEwICAgcmJ4
OiBmZmZmODJjNDgwMzAyNDgwICAgcmN4OiAwMDAwMDAwMDAwMDAwMDAzCihYRU4pIHJkeDogMDAw
MDAwMDAwMDAwMDAwMCAgIHJzaTogZmZmZjgyYzQ4MDJlMjVjOCAgIHJkaTogZmZmZjgyYzQ4MDI3
MTgwMAooWEVOKSByYnA6IGZmZmY4MmM0ODAyYjdlMzAgICByc3A6IGZmZmY4MmM0ODAyYjdlMzAg
ICByODogIDAwMDAwMDAwMDAwMDAwMDEKKFhFTikgcjk6ICBmZmZmODMwMTQ4OTczZWE4ICAgcjEw
OiAwMDAwMDA0YTkzZjFmNDY3ICAgcjExOiAwMDAwMDAwMDAwMDAwMjQ2CihYRU4pIHIxMjogZmZm
ZjgyYzQ4MDI3MTgwMCAgIHIxMzogZmZmZjgyYzQ4MDEzZDc1NyAgIHIxNDogMDAwMDAwNGE5M2I5
Zjg1ZQooWEVOKSByMTU6IGZmZmY4MmM0ODAzMDIzMDggICBjcjA6IDAwMDAwMDAwODAwNTAwM2Ig
ICBjcjQ6IDAwMDAwMDAwMDAxMDI2ZjAKKFhFTikgY3IzOiAwMDAwMDAwMTNjNjU2MDAwICAgY3Iy
OiBmZmZmODgwMDI2OGI1MDQwCihYRU4pIGRzOiAwMDAwICAgZXM6IDAwMDAgICBmczogMDAwMCAg
IGdzOiAwMDAwICAgc3M6IGUwMTAgICBjczogZTAwOAooWEVOKSBYZW4gc3RhY2sgdHJhY2UgZnJv
bSByc3A9ZmZmZjgyYzQ4MDJiN2UzMDoKKFhFTikgICAgZmZmZjgyYzQ4MDJiN2U2MCBmZmZmODJj
NDgwMTI4MTdmIDAwMDAwMDAwMDAwMDAwMDIgZmZmZjgyYzQ4MDJlMjVjOAooWEVOKSAgICBmZmZm
ODJjNDgwMzAyNDgwIGZmZmY4MzAxNDg5YjNkNDAgZmZmZjgyYzQ4MDJiN2ViMCBmZmZmODJjNDgw
MTI4MjgxCihYRU4pICAgIGZmZmY4MmM0ODAyYjdmMTggMDAwMDAwMDAwMDAwMDI0NiAwMDAwMDA0
YTkzZjFmNDY3IGZmZmY4MmM0ODAyZDg4ODAKKFhFTikgICAgZmZmZjgyYzQ4MDJkODg4MCBmZmZm
ODJjNDgwMmI3ZjE4IGZmZmZmZmZmZmZmZmZmZmYgZmZmZjgyYzQ4MDMwMjMwOAooWEVOKSAgICBm
ZmZmODJjNDgwMmI3ZWUwIGZmZmY4MmM0ODAxMjU0MDUgZmZmZjgyYzQ4MDJiN2YxOCBmZmZmODJj
NDgwMmI3ZjE4CihYRU4pICAgIDAwMDAwMDAwZmZmZmZmZmYgMDAwMDAwMDAwMDAwMDAwMiBmZmZm
ODJjNDgwMmI3ZWYwIGZmZmY4MmM0ODAxMjU0ODQKKFhFTikgICAgZmZmZjgyYzQ4MDJiN2YxMCBm
ZmZmODJjNDgwMTU4YzA1IGZmZmY4MzAwYWE1ODQwMDAgZmZmZjgzMDBhYTBmYzAwMAooWEVOKSAg
ICBmZmZmODJjNDgwMmI3ZGE4IDAwMDAwMDAwMDAwMDAwMDAgZmZmZmZmZmZmZmZmZmZmZiAwMDAw
MDAwMDAwMDAwMDAwCihYRU4pICAgIGZmZmZmZmZmODFhYWZkYTAgZmZmZmZmZmY4MWEwMWVlOCBm
ZmZmZmZmZjgxYTAxZmQ4IDAwMDAwMDAwMDAwMDAyNDYKKFhFTikgICAgMDAwMDAwMDAwMDAwMDAw
MSAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMAooWEVO
KSAgICBmZmZmZmZmZjgxMDAxM2FhIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDBkZWFkYmVlZiAw
MDAwMDAwMGRlYWRiZWVmCihYRU4pICAgIDAwMDAwMTAwMDAwMDAwMDAgZmZmZmZmZmY4MTAwMTNh
YSAwMDAwMDAwMDAwMDBlMDMzIDAwMDAwMDAwMDAwMDAyNDYKKFhFTikgICAgZmZmZmZmZmY4MWEw
MWVkMCAwMDAwMDAwMDAwMDBlMDJiIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMAoo
WEVOKSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCBmZmZmODMwMGFhNTg0MDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMAooWEVOKSBYZW4gY2FsbCB0cmFjZToKKFhFTikgICAgWzxmZmZmODJjNDgwMTNkNzdlPl0g
bnMxNjU1MF9wb2xsKzB4MjcvMHgzMwooWEVOKSAgICBbPGZmZmY4MmM0ODAxMjgxN2Y+XSBleGVj
dXRlX3RpbWVyKzB4NGUvMHg2YwooWEVOKSAgICBbPGZmZmY4MmM0ODAxMjgyODE+XSB0aW1lcl9z
b2Z0aXJxX2FjdGlvbisweGU0LzB4MjFhCihYRU4pICAgIFs8ZmZmZjgyYzQ4MDEyNTQwNT5dIF9f
ZG9fc29mdGlycSsweDk1LzB4YTAKKFhFTikgICAgWzxmZmZmODJjNDgwMTI1NDg0Pl0gZG9fc29m
dGlycSsweDI2LzB4MjgKKFhFTikgICAgWzxmZmZmODJjNDgwMTU4YzA1Pl0gaWRsZV9sb29wKzB4
NmYvMHg3MQooWEVOKSAgICAKKFhFTikgKioqIER1bXBpbmcgQ1BVMSBob3N0IHN0YXRlOiAqKioK
KFhFTikgLS0tLVsgWGVuLTQuMi4wLXJjMi1wcmUgIHg4Nl82NCAgZGVidWc9eSAgVGFpbnRlZDog
ICAgQyBdLS0tLQooWEVOKSBDUFU6ICAgIDEKKFhFTikgUklQOiAgICBlMDA4Ols8ZmZmZjgyYzQ4
MDE1ODNjND5dIGRlZmF1bHRfaWRsZSsweDk5LzB4OWUKKFhFTikgUkZMQUdTOiAwMDAwMDAwMDAw
MDAwMjQ2ICAgQ09OVEVYVDogaHlwZXJ2aXNvcgooWEVOKSByYXg6IGZmZmY4MmM0ODAzMDIzNzAg
ICByYng6IGZmZmY4MzAxM2U2ZTdmMTggICByY3g6IDAwMDAwMDAwMDAwMDAwMDEKKFhFTikgcmR4
OiAwMDAwMDAzY2JhZjI1ZDgwICAgcnNpOiAwMDAwMDAwMDQxNjk1NmEyICAgcmRpOiAwMDAwMDAw
MDAwMDAwMDAxCihYRU4pIHJicDogZmZmZjgzMDEzZTZlN2VmMCAgIHJzcDogZmZmZjgzMDEzZTZl
N2VmMCAgIHI4OiAgMDAwMDAwMGMyMzE2MWZhMAooWEVOKSByOTogIGZmZmY4MzAwYWE1ODMwNjAg
ICByMTA6IDAwMDAwMDAwZGVhZGJlZWYgICByMTE6IDAwMDAwMDAwMDAwMDAyNDYKKFhFTikgcjEy
OiBmZmZmODMwMTNlNmU3ZjE4ICAgcjEzOiAwMDAwMDAwMGZmZmZmZmZmICAgcjE0OiAwMDAwMDAw
MDAwMDAwMDAyCihYRU4pIHIxNTogZmZmZjgzMDEzYjIyODA4OCAgIGNyMDogMDAwMDAwMDA4MDA1
MDAzYiAgIGNyNDogMDAwMDAwMDAwMDEwMjZmMAooWEVOKSBjcjM6IDAwMDAwMDAxM2M0OWEwMDAg
ICBjcjI6IGZmZmY4ODAwMjU4MTdkZjAKKFhFTikgZHM6IDAwMmIgICBlczogMDAyYiAgIGZzOiAw
MDAwICAgZ3M6IDAwMDAgICBzczogZTAxMCAgIGNzOiBlMDA4CihYRU4pIFhlbiBzdGFjayB0cmFj
ZSBmcm9tIHJzcD1mZmZmODMwMTNlNmU3ZWYwOgooWEVOKSAgICBmZmZmODMwMTNlNmU3ZjEwIGZm
ZmY4MmM0ODAxNThiZjggZmZmZjgzMDBhYTBmZTAwMCBmZmZmODMwMGFhNTgzMDAwCihYRU4pICAg
IGZmZmY4MzAxM2U2ZTdkYTggMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDMKKFhFTikgICAgZmZmZmZmZmY4MWFhZmRhMCBmZmZmODgwMDI3ODgxZWUwIGZm
ZmY4ODAwMjc4ODFmZDggMDAwMDAwMDAwMDAwMDI0NgooWEVOKSAgICAwMDAwMDAwMDAwMDAwMDAx
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwCihYRU4p
ICAgIGZmZmZmZmZmODEwMDEzYWEgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMGRlYWRiZWVmIDAw
MDAwMDAwZGVhZGJlZWYKKFhFTikgICAgMDAwMDAxMDAwMDAwMDAwMCBmZmZmZmZmZjgxMDAxM2Fh
IDAwMDAwMDAwMDAwMGUwMzMgMDAwMDAwMDAwMDAwMDI0NgooWEVOKSAgICBmZmZmODgwMDI3ODgx
ZWM4IDAwMDAwMDAwMDAwMGUwMmIgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwCihY
RU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAx
IGZmZmY4MzAwYWEwZmUwMDAKKFhFTikgICAgMDAwMDAwM2NiYWYyNWQ4MCAwMDAwMDAwMDAwMDAw
MDAwCihYRU4pIFhlbiBjYWxsIHRyYWNlOgooWEVOKSAgICBbPGZmZmY4MmM0ODAxNTgzYzQ+XSBk
ZWZhdWx0X2lkbGUrMHg5OS8weDllCihYRU4pICAgIFs8ZmZmZjgyYzQ4MDE1OGJmOD5dIGlkbGVf
bG9vcCsweDYyLzB4NzEKKFhFTikgICAgCihYRU4pICoqKiBEdW1waW5nIENQVTIgaG9zdCBzdGF0
ZTogKioqCihYRU4pIC0tLS1bIFhlbi00LjIuMC1yYzItcHJlICB4ODZfNjQgIGRlYnVnPXkgIFRh
aW50ZWQ6ICAgIEMgXS0tLS0KKFhFTikgQ1BVOiAgICAyCihYRU4pIFJJUDogICAgZTAwODpbPGZm
ZmY4MmM0ODAxNTgzYzQ+XSBkZWZhdWx0X2lkbGUrMHg5OS8weDllCihYRU4pIFJGTEFHUzogMDAw
MDAwMDAwMDAwMDI0NiAgIENPTlRFWFQ6IGh5cGVydmlzb3IKKFhFTikgcmF4OiBmZmZmODJjNDgw
MzAyMzcwICAgcmJ4OiBmZmZmODMwMTQ4OTlmZjE4ICAgcmN4OiAwMDAwMDAwMDAwMDAwMDAyCihY
RU4pIHJkeDogMDAwMDAwM2NiYjk2NmQ4MCAgIHJzaTogMDAwMDAwMDA0MjJlZDVjYSAgIHJkaTog
MDAwMDAwMDAwMDAwMDAwMgooWEVOKSByYnA6IGZmZmY4MzAxNDg5OWZlZjAgICByc3A6IGZmZmY4
MzAxNDg5OWZlZjAgICByODogIDAwMDAwMDBjNDU2NDhmZTAKKFhFTikgcjk6ICBmZmZmODMwMGE4
M2ZkMDYwICAgcjEwOiAwMDAwMDAwMGRlYWRiZWVmICAgcjExOiAwMDAwMDAwMDAwMDAwMjQ2CihY
RU4pIHIxMjogZmZmZjgzMDE0ODk5ZmYxOCAgIHIxMzogMDAwMDAwMDBmZmZmZmZmZiAgIHIxNDog
MDAwMDAwMDAwMDAwMDAwMgooWEVOKSByMTU6IGZmZmY4MzAxM2JjNjkwODggICBjcjA6IDAwMDAw
MDAwODAwNTAwM2IgICBjcjQ6IDAwMDAwMDAwMDAxMDI2ZjAKKFhFTikgY3IzOiAwMDAwMDAwMTNk
OTVmMDAwICAgY3IyOiBmZmZmODgwMDI1ZWE4YzEwCihYRU4pIGRzOiAwMDJiICAgZXM6IDAwMmIg
ICBmczogMDAwMCAgIGdzOiAwMDAwICAgc3M6IGUwMTAgICBjczogZTAwOAooWEVOKSBYZW4gc3Rh
Y2sgdHJhY2UgZnJvbSByc3A9ZmZmZjgzMDE0ODk5ZmVmMDoKKFhFTikgICAgZmZmZjgzMDE0ODk5
ZmYxMCBmZmZmODJjNDgwMTU4YmY4IGZmZmY4MzAwYTg1YzcwMDAgZmZmZjgzMDBhODNmZDAwMAoo
WEVOKSAgICBmZmZmODMwMTQ4OTlmZGE4IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAxCihYRU4pICAgIGZmZmZmZmZmODFhYWZkYTAgZmZmZjg4MDAyNzg2
ZGVlMCBmZmZmODgwMDI3ODZkZmQ4IDAwMDAwMDAwMDAwMDAyNDYKKFhFTikgICAgMDAwMDAwMDAw
MDAwMDAwMSAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MAooWEVOKSAgICBmZmZmZmZmZjgxMDAxM2FhIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDBkZWFk
YmVlZiAwMDAwMDAwMGRlYWRiZWVmCihYRU4pICAgIDAwMDAwMTAwMDAwMDAwMDAgZmZmZmZmZmY4
MTAwMTNhYSAwMDAwMDAwMDAwMDBlMDMzIDAwMDAwMDAwMDAwMDAyNDYKKFhFTikgICAgZmZmZjg4
MDAyNzg2ZGVjOCAwMDAwMDAwMDAwMDBlMDJiIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMAooWEVOKSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMiBmZmZmODMwMGE4NWM3MDAwCihYRU4pICAgIDAwMDAwMDNjYmI5NjZkODAgMDAwMDAw
MDAwMDAwMDAwMAooWEVOKSBYZW4gY2FsbCB0cmFjZToKKFhFTikgICAgWzxmZmZmODJjNDgwMTU4
M2M0Pl0gZGVmYXVsdF9pZGxlKzB4OTkvMHg5ZQooWEVOKSAgICBbPGZmZmY4MmM0ODAxNThiZjg+
XSBpZGxlX2xvb3ArMHg2Mi8weDcxCihYRU4pICAgIAooWEVOKSAqKiogRHVtcGluZyBDUFUzIGhv
c3Qgc3RhdGU6ICoqKgooWEVOKSAtLS0tWyBYZW4tNC4yLjAtcmMyLXByZSAgeDg2XzY0ICBkZWJ1
Zz15ICBUYWludGVkOiAgICBDIF0tLS0tCihYRU4pIENQVTogICAgMwooWEVOKSBSSVA6ICAgIGUw
MDg6WzxmZmZmODJjNDgwMTU4M2M0Pl0gZGVmYXVsdF9pZGxlKzB4OTkvMHg5ZQooWEVOKSBSRkxB
R1M6IDAwMDAwMDAwMDAwMDAyNDYgICBDT05URVhUOiBoeXBlcnZpc29yCihYRU4pIHJheDogZmZm
ZjgyYzQ4MDMwMjM3MCAgIHJieDogZmZmZjgzMDE0ODk4ZmYxOCAgIHJjeDogMDAwMDAwMDAwMDAw
MDAwMwooWEVOKSByZHg6IDAwMDAwMDNjYmU0YTVkODAgICByc2k6IDAwMDAwMDAwNDJmNDkzNTIg
ICByZGk6IDAwMDAwMDAwMDAwMDAwMDMKKFhFTikgcmJwOiBmZmZmODMwMTQ4OThmZWYwICAgcnNw
OiBmZmZmODMwMTQ4OThmZWYwICAgcjg6ICAwMDAwMDAwYzY4NGYxYWYwCihYRU4pIHI5OiAgZmZm
ZjgzMDBhODNmYzA2MCAgIHIxMDogMDAwMDAwMDBkZWFkYmVlZiAgIHIxMTogMDAwMDAwMDAwMDAw
MDI0NgooWEVOKSByMTI6IGZmZmY4MzAxNDg5OGZmMTggICByMTM6IDAwMDAwMDAwZmZmZmZmZmYg
ICByMTQ6IDAwMDAwMDAwMDAwMDAwMDIKKFhFTikgcjE1OiBmZmZmODMwMTNlN2E4MDg4ICAgY3Iw
OiAwMDAwMDAwMDgwMDUwMDNiICAgY3I0OiAwMDAwMDAwMDAwMTAyNmYwCihYRU4pIGNyMzogMDAw
MDAwMDEzZGI2MjAwMCAgIGNyMjogZmZmZjg4MDAyNWU5MTI2MAooWEVOKSBkczogMDAyYiAgIGVz
OiAwMDJiICAgZnM6IDAwMDAgICBnczogMDAwMCAgIHNzOiBlMDEwICAgY3M6IGUwMDgKKFhFTikg
WGVuIHN0YWNrIHRyYWNlIGZyb20gcnNwPWZmZmY4MzAxNDg5OGZlZjA6CihYRU4pICAgIGZmZmY4
MzAxNDg5OGZmMTAgZmZmZjgyYzQ4MDE1OGJmOCBmZmZmODMwMGE4M2ZlMDAwIGZmZmY4MzAwYTgz
ZmMwMDAKKFhFTikgICAgZmZmZjgzMDE0ODk4ZmRhOCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMgooWEVOKSAgICBmZmZmZmZmZjgxYWFmZGEwIGZmZmY4
ODAwMjc4NmZlZTAgZmZmZjg4MDAyNzg2ZmZkOCAwMDAwMDAwMDAwMDAwMjQ2CihYRU4pICAgIDAw
MDAwMDAwMDAwMDAwMDEgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAKKFhFTikgICAgZmZmZmZmZmY4MTAwMTNhYSAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwZGVhZGJlZWYgMDAwMDAwMDBkZWFkYmVlZgooWEVOKSAgICAwMDAwMDEwMDAwMDAwMDAwIGZm
ZmZmZmZmODEwMDEzYWEgMDAwMDAwMDAwMDAwZTAzMyAwMDAwMDAwMDAwMDAwMjQ2CihYRU4pICAg
IGZmZmY4ODAwMjc4NmZlYzggMDAwMDAwMDAwMDAwZTAyYiAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAKKFhFTikgICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDMgZmZmZjgzMDBhODNmZTAwMAooWEVOKSAgICAwMDAwMDAzY2JlNGE1ZDgw
IDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgWGVuIGNhbGwgdHJhY2U6CihYRU4pICAgIFs8ZmZmZjgy
YzQ4MDE1ODNjND5dIGRlZmF1bHRfaWRsZSsweDk5LzB4OWUKKFhFTikgICAgWzxmZmZmODJjNDgw
MTU4YmY4Pl0gaWRsZV9sb29wKzB4NjIvMHg3MQooWEVOKSAgICAKKFhFTikgWzA6IGR1bXAgRG9t
MCByZWdpc3RlcnNdCihYRU4pICcwJyBwcmVzc2VkIC0+IGR1bXBpbmcgRG9tMCdzIHJlZ2lzdGVy
cwooWEVOKSAqKiogRHVtcGluZyBEb20wIHZjcHUjMCBzdGF0ZTogKioqCihYRU4pIFJJUDogICAg
ZTAzMzpbPGZmZmZmZmZmODEwMDEzYWE+XQooWEVOKSBSRkxBR1M6IDAwMDAwMDAwMDAwMDAyNDYg
ICBFTTogMCAgIENPTlRFWFQ6IHB2IGd1ZXN0CihYRU4pIHJheDogMDAwMDAwMDAwMDAwMDAwMCAg
IHJieDogZmZmZmZmZmY4MWEwMWZkOCAgIHJjeDogZmZmZmZmZmY4MTAwMTNhYQooWEVOKSByZHg6
IDAwMDAwMDAwMDAwMDAwMDAgICByc2k6IDAwMDAwMDAwZGVhZGJlZWYgICByZGk6IDAwMDAwMDAw
ZGVhZGJlZWYKKFhFTikgcmJwOiBmZmZmZmZmZjgxYTAxZWU4ICAgcnNwOiBmZmZmZmZmZjgxYTAx
ZWQwICAgcjg6ICAwMDAwMDAwMDAwMDAwMDAwCihYRU4pIHI5OiAgMDAwMDAwMDAwMDAwMDAwMCAg
IHIxMDogMDAwMDAwMDAwMDAwMDAwMSAgIHIxMTogMDAwMDAwMDAwMDAwMDI0NgooWEVOKSByMTI6
IGZmZmZmZmZmODFhYWZkYTAgICByMTM6IDAwMDAwMDAwMDAwMDAwMDAgICByMTQ6IGZmZmZmZmZm
ZmZmZmZmZmYKKFhFTikgcjE1OiAwMDAwMDAwMDAwMDAwMDAwICAgY3IwOiAwMDAwMDAwMDAwMDAw
MDA4ICAgY3I0OiAwMDAwMDAwMDAwMDAyNjYwCihYRU4pIGNyMzogMDAwMDAwMDEzYzY1NjAwMCAg
IGNyMjogMDAwMDdmODE5MjYwODk0YwooWEVOKSBkczogMDAwMCAgIGVzOiAwMDAwICAgZnM6IDAw
MDAgICBnczogMDAwMCAgIHNzOiBlMDJiICAgY3M6IGUwMzMKKFhFTikgR3Vlc3Qgc3RhY2sgdHJh
Y2UgZnJvbSByc3A9ZmZmZmZmZmY4MWEwMWVkMDoKKFhFTikgICAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMGZmZmZmZmZmIGZmZmZmZmZmODEwMGE1YzAgZmZmZmZmZmY4MWEwMWYxOAooWEVOKSAg
ICBmZmZmZmZmZjgxMDFjNjYzIGZmZmZmZmZmODFhMDFmZDggZmZmZmZmZmY4MWFhZmRhMCBmZmZm
ODgwMDJkZWUxYTAwCihYRU4pICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmY4MWEwMWY0OCBm
ZmZmZmZmZjgxMDEzMjM2IGZmZmZmZmZmZmZmZmZmZmYKKFhFTikgICAgYTNmYzk4NzMzOWVkMDEz
YiAwMDAwMDAwMDAwMDAwMDAwIGZmZmZmZmZmODFiMTUxNjAgZmZmZmZmZmY4MWEwMWY1OAooWEVO
KSAgICBmZmZmZmZmZjgxNTU0ZjVlIGZmZmZmZmZmODFhMDFmOTggZmZmZmZmZmY4MWFjY2JmNSBm
ZmZmZmZmZjgxYjE1MTYwCihYRU4pICAgIGU0YjE1OWJhM2VlYTA5NGMgMDAwMDAwMDAwMGNkZjAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAwMDAwMDAwMDAw
MDAwMCBmZmZmZmZmZjgxYTAxZmI4IGZmZmZmZmZmODFhY2MzNGIgZmZmZmZmZmY3ZmZmZmZmZgoo
WEVOKSAgICBmZmZmZmZmZjg0YjA0MDAwIGZmZmZmZmZmODFhMDFmZjggZmZmZmZmZmY4MWFjZmVj
YyAwMDAwMDAwMDAwMDAwMDAwCihYRU4pICAgIDAwMDAwMDAxMDAwMDAwMDAgMDAxMDA4MDAwMDAz
MDZhNCAxZmM5OGI3NWUzYjgyMjgzIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwCihYRU4pICoqKiBEdW1waW5nIERvbTAgdmNwdSMxIHN0
YXRlOiAqKioKKFhFTikgUklQOiAgICBlMDMzOls8ZmZmZmZmZmY4MTAwMTNhYT5dCihYRU4pIFJG
TEFHUzogMDAwMDAwMDAwMDAwMDI0NiAgIEVNOiAwICAgQ09OVEVYVDogcHYgZ3Vlc3QKKFhFTikg
cmF4OiAwMDAwMDAwMDAwMDAwMDAwICAgcmJ4OiBmZmZmODgwMDI3ODZkZmQ4ICAgcmN4OiBmZmZm
ZmZmZjgxMDAxM2FhCihYRU4pIHJkeDogMDAwMDAwMDAwMDAwMDAwMCAgIHJzaTogMDAwMDAwMDBk
ZWFkYmVlZiAgIHJkaTogMDAwMDAwMDBkZWFkYmVlZgooWEVOKSByYnA6IGZmZmY4ODAwMjc4NmRl
ZTAgICByc3A6IGZmZmY4ODAwMjc4NmRlYzggICByODogIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikg
cjk6ICAwMDAwMDAwMDAwMDAwMDAwICAgcjEwOiAwMDAwMDAwMDAwMDAwMDAxICAgcjExOiAwMDAw
MDAwMDAwMDAwMjQ2CihYRU4pIHIxMjogZmZmZmZmZmY4MWFhZmRhMCAgIHIxMzogMDAwMDAwMDAw
MDAwMDAwMSAgIHIxNDogMDAwMDAwMDAwMDAwMDAwMAooWEVOKSByMTU6IDAwMDAwMDAwMDAwMDAw
MDAgICBjcjA6IDAwMDAwMDAwODAwNTAwM2IgICBjcjQ6IDAwMDAwMDAwMDAwMDI2NjAKKFhFTikg
Y3IzOiAwMDAwMDAwMTNkOTVmMDAwICAgY3IyOiAwMDAwMDAwMDAyMTgyM2U4CihYRU4pIGRzOiAw
MDJiICAgZXM6IDAwMmIgICBmczogMDAwMCAgIGdzOiAwMDAwICAgc3M6IGUwMmIgICBjczogZTAz
MwooWEVOKSBHdWVzdCBzdGFjayB0cmFjZSBmcm9tIHJzcD1mZmZmODgwMDI3ODZkZWM4OgooWEVO
KSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwZmZmZmZmZmYgZmZmZmZmZmY4MTAwYTVjMCBm
ZmZmODgwMDI3ODZkZjEwCihYRU4pICAgIGZmZmZmZmZmODEwMWM2NjMgZmZmZjg4MDAyNzg2ZGZk
OCBmZmZmZmZmZjgxYWFmZGEwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAwMDAwMDAwMDAw
MDAwMCBmZmZmODgwMDI3ODZkZjQwIGZmZmZmZmZmODEwMTMyMzYgZmZmZmZmZmY4MTAwYWRlOQoo
WEVOKSAgICBhZGNmNDU4MDdjMmQwNGZiIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCBmZmZmODgwMDI3ODZkZjUwCihYRU4pICAgIGZmZmZmZmZmODE1NjM0MzggMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MAooWEVOKSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMAooWEVOKSAgICAwMDAwMDAwMDAwMDAwMDAwIGZmZmY4ODAwMjc4NmRmNTggMDAwMDAwMDAw
MDAwMDAwMAooWEVOKSAqKiogRHVtcGluZyBEb20wIHZjcHUjMiBzdGF0ZTogKioqCihYRU4pIFJJ
UDogICAgZTAzMzpbPGZmZmZmZmZmODEwMDEzYWE+XQooWEVOKSBSRkxBR1M6IDAwMDAwMDAwMDAw
MDAyNDYgICBFTTogMCAgIENPTlRFWFQ6IHB2IGd1ZXN0CihYRU4pIHJheDogMDAwMDAwMDAwMDAw
MDAwMCAgIHJieDogZmZmZjg4MDAyNzg2ZmZkOCAgIHJjeDogZmZmZmZmZmY4MTAwMTNhYQooWEVO
KSByZHg6IDAwMDAwMDAwMDAwMDAwMDAgICByc2k6IDAwMDAwMDAwZGVhZGJlZWYgICByZGk6IDAw
MDAwMDAwZGVhZGJlZWYKKFhFTikgcmJwOiBmZmZmODgwMDI3ODZmZWUwICAgcnNwOiBmZmZmODgw
MDI3ODZmZWM4ICAgcjg6ICAwMDAwMDAwMDAwMDAwMDAwCihYRU4pIHI5OiAgMDAwMDAwMDAwMDAw
MDAwMCAgIHIxMDogMDAwMDAwMDAwMDAwMDAwMSAgIHIxMTogMDAwMDAwMDAwMDAwMDI0NgooWEVO
KSByMTI6IGZmZmZmZmZmODFhYWZkYTAgICByMTM6IDAwMDAwMDAwMDAwMDAwMDIgICByMTQ6IDAw
MDAwMDAwMDAwMDAwMDAKKFhFTikgcjE1OiAwMDAwMDAwMDAwMDAwMDAwICAgY3IwOiAwMDAwMDAw
MDgwMDUwMDNiICAgY3I0OiAwMDAwMDAwMDAwMDAyNjYwCihYRU4pIGNyMzogMDAwMDAwMDE0Yzhk
ZTAwMCAgIGNyMjogMDAwMDdmZWFjYTY0YzAwMAooWEVOKSBkczogMDAyYiAgIGVzOiAwMDJiICAg
ZnM6IDAwMDAgICBnczogMDAwMCAgIHNzOiBlMDJiICAgY3M6IGUwMzMKKFhFTikgR3Vlc3Qgc3Rh
Y2sgdHJhY2UgZnJvbSByc3A9ZmZmZjg4MDAyNzg2ZmVjODoKKFhFTikgICAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMGZmZmZmZmZmIGZmZmZmZmZmODEwMGE1YzAgZmZmZjg4MDAyNzg2ZmYxMAoo
WEVOKSAgICBmZmZmZmZmZjgxMDFjNjYzIGZmZmY4ODAwMjc4NmZmZDggZmZmZmZmZmY4MWFhZmRh
MCAwMDAwMDAwMDAwMDAwMDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgZmZmZjg4MDAyNzg2
ZmY0MCBmZmZmZmZmZjgxMDEzMjM2IGZmZmZmZmZmODEwMGFkZTkKKFhFTikgICAgMWZlN2I1YTgy
MjE1MDQ5OSAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgZmZmZjg4MDAyNzg2ZmY1
MAooWEVOKSAgICBmZmZmZmZmZjgxNTYzNDM4IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMAooWEVOKSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAw
MDAwMDAwMDAwMDAwMCBmZmZmODgwMDI3ODZmZjU4IDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgKioq
IER1bXBpbmcgRG9tMCB2Y3B1IzMgc3RhdGU6ICoqKgooWEVOKSBSSVA6ICAgIGUwMzM6WzxmZmZm
ZmZmZjgxMDAxM2FhPl0KKFhFTikgUkZMQUdTOiAwMDAwMDAwMDAwMDAwMjQ2ICAgRU06IDAgICBD
T05URVhUOiBwdiBndWVzdAooWEVOKSByYXg6IDAwMDAwMDAwMDAwMDAwMDAgICByYng6IGZmZmY4
ODAwMjc4ODFmZDggICByY3g6IGZmZmZmZmZmODEwMDEzYWEKKFhFTikgcmR4OiAwMDAwMDAwMDAw
MDAwMDAwICAgcnNpOiAwMDAwMDAwMGRlYWRiZWVmICAgcmRpOiAwMDAwMDAwMGRlYWRiZWVmCihY
RU4pIHJicDogZmZmZjg4MDAyNzg4MWVlMCAgIHJzcDogZmZmZjg4MDAyNzg4MWVjOCAgIHI4OiAg
MDAwMDAwMDAwMDAwMDAwMAooWEVOKSByOTogIDAwMDAwMDAwMDAwMDAwMDAgICByMTA6IDAwMDAw
MDAwMDAwMDAwMDEgICByMTE6IDAwMDAwMDAwMDAwMDAyNDYKKFhFTikgcjEyOiBmZmZmZmZmZjgx
YWFmZGEwICAgcjEzOiAwMDAwMDAwMDAwMDAwMDAzICAgcjE0OiAwMDAwMDAwMDAwMDAwMDAwCihY
RU4pIHIxNTogMDAwMDAwMDAwMDAwMDAwMCAgIGNyMDogMDAwMDAwMDA4MDA1MDAzYiAgIGNyNDog
MDAwMDAwMDAwMDAwMjY2MAooWEVOKSBjcjM6IDAwMDAwMDAxM2M0OWEwMDAgICBjcjI6IDAwMDA3
ZjgxOWMzYmUwMDAKKFhFTikgZHM6IDAwMmIgICBlczogMDAyYiAgIGZzOiAwMDAwICAgZ3M6IDAw
MDAgICBzczogZTAyYiAgIGNzOiBlMDMzCihYRU4pIEd1ZXN0IHN0YWNrIHRyYWNlIGZyb20gcnNw
PWZmZmY4ODAwMjc4ODFlYzg6CihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDBmZmZm
ZmZmZiBmZmZmZmZmZjgxMDBhNWMwIGZmZmY4ODAwMjc4ODFmMTAKKFhFTikgICAgZmZmZmZmZmY4
MTAxYzY2MyBmZmZmODgwMDI3ODgxZmQ4IGZmZmZmZmZmODFhYWZkYTAgMDAwMDAwMDAwMDAwMDAw
MAooWEVOKSAgICAwMDAwMDAwMDAwMDAwMDAwIGZmZmY4ODAwMjc4ODFmNDAgZmZmZmZmZmY4MTAx
MzIzNiBmZmZmZmZmZjgxMDBhZGU5CihYRU4pICAgIDQ5ZGUxODMzZDEzZjJhMjYgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIGZmZmY4ODAwMjc4ODFmNTAKKFhFTikgICAgZmZmZmZm
ZmY4MTU2MzQzOCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMAooWEVOKSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMAooWEVOKSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgZmZm
Zjg4MDAyNzg4MWY1OCAwMDAwMDAwMDAwMDAwMDAwCihYRU4pIFtIOiBkdW1wIGhlYXAgaW5mb10K
KFhFTikgJ0gnIHByZXNzZWQgLT4gZHVtcGluZyBoZWFwIGluZm8gKG5vdy0weDRBOkREODU2MDM4
KQooWEVOKSBoZWFwW25vZGU9MF1bem9uZT0wXSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0w
XVt6b25lPTFdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9Ml0gLT4gMCBwYWdl
cwooWEVOKSBoZWFwW25vZGU9MF1bem9uZT0zXSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0w
XVt6b25lPTRdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9NV0gLT4gMCBwYWdl
cwooWEVOKSBoZWFwW25vZGU9MF1bem9uZT02XSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0w
XVt6b25lPTddIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9OF0gLT4gMCBwYWdl
cwooWEVOKSBoZWFwW25vZGU9MF1bem9uZT05XSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0w
XVt6b25lPTEwXSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTExXSAtPiAwIHBh
Z2VzCihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTEyXSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9k
ZT0wXVt6b25lPTEzXSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTE0XSAtPiAx
NjEyOCBwYWdlcwooWEVOKSBoZWFwW25vZGU9MF1bem9uZT0xNV0gLT4gMzI3NjggcGFnZXMKKFhF
TikgaGVhcFtub2RlPTBdW3pvbmU9MTZdIC0+IDY1NTM2IHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0w
XVt6b25lPTE3XSAtPiAxMzA1NTkgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MThdIC0+
IDI2MjE0MyBwYWdlcwooWEVOKSBoZWFwW25vZGU9MF1bem9uZT0xOV0gLT4gMTcyODM3IHBhZ2Vz
CihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTIwXSAtPiAxMzQyMjUgcGFnZXMKKFhFTikgaGVhcFtu
b2RlPTBdW3pvbmU9MjFdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MjJdIC0+
IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MjNdIC0+IDAgcGFnZXMKKFhFTikgaGVh
cFtub2RlPTBdW3pvbmU9MjRdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MjVd
IC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MjZdIC0+IDAgcGFnZXMKKFhFTikg
aGVhcFtub2RlPTBdW3pvbmU9MjddIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9
MjhdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MjldIC0+IDAgcGFnZXMKKFhF
TikgaGVhcFtub2RlPTBdW3pvbmU9MzBdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pv
bmU9MzFdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MzJdIC0+IDAgcGFnZXMK
KFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MzNdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBd
W3pvbmU9MzRdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MzVdIC0+IDAgcGFn
ZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MzZdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2Rl
PTBdW3pvbmU9MzddIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MzhdIC0+IDAg
cGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MzldIC0+IDAgcGFnZXMKKFhFTikgW0k6IGR1
bXAgSFZNIGlycSBpbmZvXQooWEVOKSAnSScgcHJlc3NlZCAtPiBkdW1waW5nIEhWTSBpcnEgaW5m
bwooWEVOKSBbTTogZHVtcCBNU0kgc3RhdGVdCihYRU4pIFBDSS1NU0kgaW50ZXJydXB0IGluZm9y
bWF0aW9uOgooWEVOKSAgTVNJICAgIDI2IHZlYz02OSBsb3dlc3QgIGVkZ2UgICBhc3NlcnQgIGxv
ZyBsb3dlc3QgZGVzdD0wMDAwMDAwMSBtYXNrPTAvMS8tMQooWEVOKSAgTVNJICAgIDI3IHZlYz0w
MCAgZml4ZWQgIGVkZ2UgZGVhc3NlcnQgcGh5cyBsb3dlc3QgZGVzdD0wMDAwMDAwMSBtYXNrPTAv
MS8tMQooWEVOKSAgTVNJICAgIDI4IHZlYz0zMSBsb3dlc3QgIGVkZ2UgICBhc3NlcnQgIGxvZyBs
b3dlc3QgZGVzdD0wMDAwMDAwMSBtYXNrPTAvMS8tMQooWEVOKSAgTVNJICAgIDI5IHZlYz03MSBs
b3dlc3QgIGVkZ2UgICBhc3NlcnQgIGxvZyBsb3dlc3QgZGVzdD0wMDAwMDAwMSBtYXNrPTAvMS8t
MQooWEVOKSAgTVNJICAgIDMwIHZlYz04OSBsb3dlc3QgIGVkZ2UgICBhc3NlcnQgIGxvZyBsb3dl
c3QgZGVzdD0wMDAwMDAwMSBtYXNrPTAvMS8tMQooWEVOKSBbUTogZHVtcCBQQ0kgZGV2aWNlc10K
KFhFTikgPT09PSBQQ0kgZGV2aWNlcyA9PT09CihYRU4pID09PT0gc2VnbWVudCAwMDAwID09PT0K
KFhFTikgMDAwMDowNTowMS4wIC0gZG9tIDAgICAtIE1TSXMgPCA+CihYRU4pIDAwMDA6MDQ6MDAu
MCAtIGRvbSAwICAgLSBNU0lzIDwgPgooWEVOKSAwMDAwOjAzOjAwLjAgLSBkb20gMCAgIC0gTVNJ
cyA8ID4KKFhFTikgMDAwMDowMjowMC4wIC0gZG9tIDAgICAtIE1TSXMgPCAyOSA+CihYRU4pIDAw
MDA6MDA6MWYuMyAtIGRvbSAwICAgLSBNU0lzIDwgPgooWEVOKSAwMDAwOjAwOjFmLjIgLSBkb20g
MCAgIC0gTVNJcyA8IDI3ID4KKFhFTikgMDAwMDowMDoxZi4wIC0gZG9tIDAgICAtIE1TSXMgPCA+
CihYRU4pIDAwMDA6MDA6MWUuMCAtIGRvbSAwICAgLSBNU0lzIDwgPgooWEVOKSAwMDAwOjAwOjFk
LjAgLSBkb20gMCAgIC0gTVNJcyA8ID4KKFhFTikgMDAwMDowMDoxYy43IC0gZG9tIDAgICAtIE1T
SXMgPCA+CihYRU4pIDAwMDA6MDA6MWMuNiAtIGRvbSAwICAgLSBNU0lzIDwgPgooWEVOKSAwMDAw
OjAwOjFjLjAgLSBkb20gMCAgIC0gTVNJcyA8ID4KKFhFTikgMDAwMDowMDoxYi4wIC0gZG9tIDAg
ICAtIE1TSXMgPCAyNiA+CihYRU4pIDAwMDA6MDA6MWEuMCAtIGRvbSAwICAgLSBNU0lzIDwgPgoo
WEVOKSAwMDAwOjAwOjE5LjAgLSBkb20gMCAgIC0gTVNJcyA8IDMwID4KKFhFTikgMDAwMDowMDox
Ni4zIC0gZG9tIDAgICAtIE1TSXMgPCA+CihYRU4pIDAwMDA6MDA6MTYuMCAtIGRvbSAwICAgLSBN
U0lzIDwgPgooWEVOKSAwMDAwOjAwOjE0LjAgLSBkb20gMCAgIC0gTVNJcyA8ID4KKFhFTikgMDAw
MDowMDowMi4wIC0gZG9tIDAgICAtIE1TSXMgPCAyOCA+CihYRU4pIDAwMDA6MDA6MDAuMCAtIGRv
bSAwICAgLSBNU0lzIDwgPgooWEVOKSBbVjogZHVtcCBpb21tdSBpbmZvXQooWEVOKSAKKFhFTikg
aW9tbXUgMDogbnJfcHRfbGV2ZWxzID0gMy4KKFhFTikgICBRdWV1ZWQgSW52YWxpZGF0aW9uOiBz
dXBwb3J0ZWQgYW5kIGVuYWJsZWQuCihYRU4pICAgSW50ZXJydXB0IFJlbWFwcGluZzogc3VwcG9y
dGVkIGFuZCBlbmFibGVkLgooWEVOKSAgIEludGVycnVwdCByZW1hcHBpbmcgdGFibGUgKG5yX2Vu
dHJ5PTB4MTAwMDAuIE9ubHkgZHVtcCBQPTEgZW50cmllcyBoZXJlKToKKFhFTikgICAgICAgIFNW
VCAgU1EgICBTSUQgICAgICBEU1QgIFYgIEFWTCBETE0gVE0gUkggRE0gRlBEIFAKKFhFTikgICAw
MDAwOiAgMSAgIDAgIDAwMTAgMDAwMDAwMDEgMzEgICAgMCAgIDEgIDAgIDEgIDEgICAwIDEKKFhF
TikgCihYRU4pIGlvbW11IDE6IG5yX3B0X2xldmVscyA9IDMuCihYRU4pICAgUXVldWVkIEludmFs
aWRhdGlvbjogc3VwcG9ydGVkIGFuZCBlbmFibGVkLgooWEVOKSAgIEludGVycnVwdCBSZW1hcHBp
bmc6IHN1cHBvcnRlZCBhbmQgZW5hYmxlZC4KKFhFTikgICBJbnRlcnJ1cHQgcmVtYXBwaW5nIHRh
YmxlIChucl9lbnRyeT0weDEwMDAwLiBPbmx5IGR1bXAgUD0xIGVudHJpZXMgaGVyZSk6CihYRU4p
ICAgICAgICBTVlQgIFNRICAgU0lEICAgICAgRFNUICBWICBBVkwgRExNIFRNIFJIIERNIEZQRCBQ
CihYRU4pICAgMDAwMDogIDEgICAwICBmMGY4IDAwMDAwMDAxIDM4ICAgIDAgICAxICAwICAxICAx
ICAgMCAxCihYRU4pICAgMDAwMTogIDEgICAwICBmMGY4IDAwMDAwMDAxIGYwICAgIDAgICAxICAw
ICAxICAxICAgMCAxCihYRU4pICAgMDAwMjogIDEgICAwICBmMGY4IDAwMDAwMDAxIDQwICAgIDAg
ICAxICAwICAxICAxICAgMCAxCihYRU4pICAgMDAwMzogIDEgICAwICBmMGY4IDAwMDAwMDAxIDQ4
ICAgIDAgICAxICAwICAxICAxICAgMCAxCihYRU4pICAgMDAwNDogIDEgICAwICBmMGY4IDAwMDAw
MDAxIDUwICAgIDAgICAxICAwICAxICAxICAgMCAxCihYRU4pICAgMDAwNTogIDEgICAwICBmMGY4
IDAwMDAwMDAxIDU4ICAgIDAgICAxICAwICAxICAxICAgMCAxCihYRU4pICAgMDAwNjogIDEgICAw
ICBmMGY4IDAwMDAwMDAxIDYwICAgIDAgICAxICAwICAxICAxICAgMCAxCihYRU4pICAgMDAwNzog
IDEgICAwICBmMGY4IDAwMDAwMDAxIDY4ICAgIDAgICAxICAwICAxICAxICAgMCAxCihYRU4pICAg
MDAwODogIDEgICAwICBmMGY4IDAwMDAwMDAxIDcwICAgIDAgICAxICAxICAxICAxICAgMCAxCihY
RU4pICAgMDAwOTogIDEgICAwICBmMGY4IDAwMDAwMDAxIDc4ICAgIDAgICAxICAwICAxICAxICAg
MCAxCihYRU4pICAgMDAwYTogIDEgICAwICBmMGY4IDAwMDAwMDAxIDg4ICAgIDAgICAxICAwICAx
ICAxICAgMCAxCihYRU4pICAgMDAwYjogIDEgICAwICBmMGY4IDAwMDAwMDAxIDkwICAgIDAgICAx
ICAwICAxICAxICAgMCAxCihYRU4pICAgMDAwYzogIDEgICAwICBmMGY4IDAwMDAwMDAxIDk4ICAg
IDAgICAxICAwICAxICAxICAgMCAxCihYRU4pICAgMDAwZDogIDEgICAwICBmMGY4IDAwMDAwMDAx
IGEwICAgIDAgICAxICAwICAxICAxICAgMCAxCihYRU4pICAgMDAwZTogIDEgICAwICBmMGY4IDAw
MDAwMDAxIGE4ICAgIDAgICAxICAwICAxICAxICAgMCAxCihYRU4pICAgMDAwZjogIDEgICAwICBm
MGY4IDAwMDAwMDAxIGIwICAgIDAgICAxICAxICAxICAxICAgMCAxCihYRU4pICAgMDAxMDogIDEg
ICAwICBmMGY4IDAwMDAwMDAxIGI4ICAgIDAgICAxICAxICAxICAxICAgMCAxCihYRU4pICAgMDAx
MTogIDEgICAwICBmMGY4IDAwMDAwMDAxIGMwICAgIDAgICAxICAxICAxICAxICAgMCAxCihYRU4p
ICAgMDAxMjogIDEgICAwICBmMGY4IDAwMDAwMDAxIGM4ICAgIDAgICAxICAxICAxICAxICAgMCAx
CihYRU4pICAgMDAxMzogIDEgICAwICBmMGY4IDAwMDAwMDAxIGQwICAgIDAgICAxICAxICAxICAx
ICAgMCAxCihYRU4pICAgMDAxNDogIDEgICAwICBmMGY4IDAwMDAwMDAxIGQ4ICAgIDAgICAxICAx
ICAxICAxICAgMCAxCihYRU4pICAgMDAxNTogIDEgICAwICAwMGQ4IDAwMDAwMDAxIDY5ICAgIDAg
ICAxICAwICAxICAxICAgMCAxCihYRU4pICAgMDAxNjogIDEgICAwICAwMGZhIDAwMDAwMDAxIDI5
ICAgIDAgICAxICAwICAxICAxICAgMCAxCihYRU4pICAgMDAxNzogIDEgICAwICAwMjAwIDAwMDAw
MDAxIDcxICAgIDAgICAxICAwICAxICAxICAgMCAxCihYRU4pICAgMDAxODogIDEgICAwICAwMGM4
IDAwMDAwMDAxIDg5ICAgIDAgICAxICAwICAxICAxICAgMCAxCihYRU4pIAooWEVOKSBSZWRpcmVj
dGlvbiB0YWJsZSBvZiBJT0FQSUMgMDoKKFhFTikgICAjZW50cnkgSURYIEZNVCBNQVNLIFRSSUcg
SVJSIFBPTCBTVEFUIERFTEkgIFZFQ1RPUgooWEVOKSAgICAwMTogIDAwMDAgICAxICAgIDAgICAw
ICAgMCAgIDAgICAgMCAgICAwICAgICAzOAooWEVOKSAgICAwMjogIDAwMDEgICAxICAgIDAgICAw
ICAgMCAgIDAgICAgMCAgICAwICAgICBmMAooWEVOKSAgICAwMzogIDAwMDIgICAxICAgIDAgICAw
ICAgMCAgIDAgICAgMCAgICAwICAgICA0MAooWEVOKSAgICAwNDogIDAwMDMgICAxICAgIDAgICAw
ICAgMCAgIDAgICAgMCAgICAwICAgICA0OAooWEVOKSAgICAwNTogIDAwMDQgICAxICAgIDAgICAw
ICAgMCAgIDAgICAgMCAgICAwICAgICA1MAooWEVOKSAgICAwNjogIDAwMDUgICAxICAgIDAgICAw
ICAgMCAgIDAgICAgMCAgICAwICAgICA1OAooWEVOKSAgICAwNzogIDAwMDYgICAxICAgIDAgICAw
ICAgMCAgIDAgICAgMCAgICAwICAgICA2MAooWEVOKSAgICAwODogIDAwMDcgICAxICAgIDAgICAw
ICAgMCAgIDAgICAgMCAgICAwICAgICA2OAooWEVOKSAgICAwOTogIDAwMDggICAxICAgIDAgICAx
ICAgMCAgIDAgICAgMCAgICAwICAgICA3MAooWEVOKSAgICAwYTogIDAwMDkgICAxICAgIDAgICAw
ICAgMCAgIDAgICAgMCAgICAwICAgICA3OAooWEVOKSAgICAwYjogIDAwMGEgICAxICAgIDAgICAw
ICAgMCAgIDAgICAgMCAgICAwICAgICA4OAooWEVOKSAgICAwYzogIDAwMGIgICAxICAgIDAgICAw
ICAgMCAgIDAgICAgMCAgICAwICAgICA5MAooWEVOKSAgICAwZDogIDAwMGMgICAxICAgIDEgICAw
ICAgMCAgIDAgICAgMCAgICAwICAgICA5OAooWEVOKSAgICAwZTogIDAwMGQgICAxICAgIDAgICAw
ICAgMCAgIDAgICAgMCAgICAwICAgICBhMAooWEVOKSAgICAwZjogIDAwMGUgICAxICAgIDAgICAw
ICAgMCAgIDAgICAgMCAgICAwICAgICBhOAooWEVOKSAgICAxMDogIDAwMGYgICAxICAgIDAgICAx
ICAgMCAgIDEgICAgMCAgICAwICAgICBiMAooWEVOKSAgICAxMjogIDAwMTAgICAxICAgIDEgICAx
ICAgMCAgIDEgICAgMCAgICAwICAgICBiOAooWEVOKSAgICAxMzogIDAwMTEgICAxICAgIDEgICAx
ICAgMCAgIDEgICAgMCAgICAwICAgICBjMAooWEVOKSAgICAxNDogIDAwMTQgICAxICAgIDEgICAx
ICAgMCAgIDEgICAgMCAgICAwICAgICBkOAooWEVOKSAgICAxNjogIDAwMTMgICAxICAgIDEgICAx
ICAgMCAgIDEgICAgMCAgICAwICAgICBkMAooWEVOKSAgICAxNzogIDAwMTIgICAxICAgIDAgICAx
ICAgMCAgIDEgICAgMCAgICAwICAgICBjOAooWEVOKSBbYTogZHVtcCB0aW1lciBxdWV1ZXNdCihY
RU4pIER1bXBpbmcgdGltZXIgcXVldWVzOgooWEVOKSBDUFUwMDoKKFhFTikgICBleD0gICAtMTY4
MXVzIHRpbWVyPWZmZmY4MmM0ODAyZTI1YzggY2I9ZmZmZjgyYzQ4MDEzZDc1NyhmZmZmODJjNDgw
MjcxODAwKSBuczE2NTUwX3BvbGwrMHgwLzB4MzMKKFhFTikgICBleD0gICAtMTY3OXVzIHRpbWVy
PWZmZmY4MzAxNGNhOTI1OTAgY2I9ZmZmZjgyYzQ4MDE2NjQxNihmZmZmODMwMTQ4OTQxZDgwKSBp
cnFfZ3Vlc3RfZW9pX3RpbWVyX2ZuKzB4MC8weDE1ZAooWEVOKSAgIGV4PSAgICA3MzIwdXMgdGlt
ZXI9ZmZmZjgzMDE0ODk3MzFiOCBjYj1mZmZmODJjNDgwMTE5ZDcyKGZmZmY4MzAxNDg5NzMxOTAp
IGNzY2hlZF9hY2N0KzB4MC8weDQyYQooWEVOKSAgIGV4PTEyODEwMjkzMHVzIHRpbWVyPWZmZmY4
MmM0ODAyZmUyODAgY2I9ZmZmZjgyYzQ4MDE4MDdjMigwMDAwMDAwMDAwMDAwMDAwKSBwbHRfb3Zl
cmZsb3crMHgwLzB4MTMxCihYRU4pICAgZXg9IDYyNDQ2Mzl1cyB0aW1lcj1mZmZmODJjNDgwMzAw
NTgwIGNiPWZmZmY4MmM0ODAxYTg4NTAoMDAwMDAwMDAwMDAwMDAwMCkgbWNlX3dvcmtfZm4rMHgw
LzB4YTkKKFhFTikgICBleD0gICAgNzMyMHVzIHRpbWVyPWZmZmY4MzAxNDg5NzNlYTggY2I9ZmZm
ZjgyYzQ4MDExYWFmMCgwMDAwMDAwMDAwMDAwMDAwKSBjc2NoZWRfdGljaysweDAvMHgzMTQKKFhF
TikgQ1BVMDE6CihYRU4pICAgZXg9ICAgNjYyMDZ1cyB0aW1lcj1mZmZmODMwMTRiMzI5ZWI4IGNi
PWZmZmY4MmM0ODAxMWFhZjAoMDAwMDAwMDAwMDAwMDAwMSkgY3NjaGVkX3RpY2srMHgwLzB4MzE0
CihYRU4pICAgZXg9ICAgNzA5MDV1cyB0aW1lcj1mZmZmODMwMGFhNTgzMDYwIGNiPWZmZmY4MmM0
ODAxMjFjNmIoZmZmZjgzMDBhYTU4MzAwMCkgdmNwdV9zaW5nbGVzaG90X3RpbWVyX2ZuKzB4MC8w
eGIKKFhFTikgQ1BVMDI6CihYRU4pICAgZXg9ICAgODY1MjF1cyB0aW1lcj1mZmZmODMwMTQ4OTk0
NjU4IGNiPWZmZmY4MmM0ODAxMWFhZjAoMDAwMDAwMDAwMDAwMDAwMikgY3NjaGVkX3RpY2srMHgw
LzB4MzE0CihYRU4pICAgZXg9ICA3OTQyNjJ1cyB0aW1lcj1mZmZmODMwMGE4M2ZkMDYwIGNiPWZm
ZmY4MmM0ODAxMjFjNmIoZmZmZjgzMDBhODNmZDAwMCkgdmNwdV9zaW5nbGVzaG90X3RpbWVyX2Zu
KzB4MC8weGIKKFhFTikgQ1BVMDM6CihYRU4pICAgZXg9ICAxMTIwNTN1cyB0aW1lcj1mZmZmODMw
MTRiMzI5MGI4IGNiPWZmZmY4MmM0ODAxMWFhZjAoMDAwMDAwMDAwMDAwMDAwMykgY3NjaGVkX3Rp
Y2srMHgwLzB4MzE0CihYRU4pICAgZXg9ICAzMzIyNzN1cyB0aW1lcj1mZmZmODMwMGE4M2ZjMDYw
IGNiPWZmZmY4MmM0ODAxMjFjNmIoZmZmZjgzMDBhODNmYzAwMCkgdmNwdV9zaW5nbGVzaG90X3Rp
bWVyX2ZuKzB4MC8weGIKKFhFTikgW2M6IGR1bXAgQUNQSSBDeCBzdHJ1Y3R1cmVzXQooWEVOKSAn
YycgcHJlc3NlZCAtPiBwcmludGluZyBBQ1BJIEN4IHN0cnVjdHVyZXMKKFhFTikgPT1jcHUwPT0K
KFhFTikgYWN0aXZlIHN0YXRlOgkJQzI1NQooWEVOKSBtYXhfY3N0YXRlOgkJQzcKKFhFTikgc3Rh
dGVzOgooWEVOKSAgICAgQzE6CXR5cGVbQzFdIGxhdGVuY3lbMDAwXSB1c2FnZVswMDAwMDAwMF0g
bWV0aG9kWyBIQUxUXSBkdXJhdGlvblswXQooWEVOKSAgICAgQzA6CXVzYWdlWzAwMDAwMDAwXSBk
dXJhdGlvblszMjIzMDEzNzcxNjRdCihYRU4pIFBDMlswXSBQQzNbMF0gUEM2WzBdIFBDN1swXQoo
WEVOKSBDQzNbMF0gQ0M2WzBdIENDN1swXQooWEVOKSA9PWNwdTE9PQooWEVOKSBhY3RpdmUgc3Rh
dGU6CQlDMjU1CihYRU4pIG1heF9jc3RhdGU6CQlDNwooWEVOKSBzdGF0ZXM6CihYRU4pICAgICBD
MToJdHlwZVtDMV0gbGF0ZW5jeVswMDBdIHVzYWdlWzAwMDAwMDAwXSBtZXRob2RbIEhBTFRdIGR1
cmF0aW9uWzBdCihYRU4pICAgICBDMDoJdXNhZ2VbMDAwMDAwMDBdIGR1cmF0aW9uWzMyMjMyNjE2
NzA3N10KKFhFTikgUEMyWzBdIFBDM1swXSBQQzZbMF0gUEM3WzBdCihYRU4pIENDM1swXSBDQzZb
MF0gQ0M3WzBdCihYRU4pID09Y3B1Mj09CihYRU4pIGFjdGl2ZSBzdGF0ZToJCUMyNTUKKFhFTikg
bWF4X2NzdGF0ZToJCUM3CihYRU4pIHN0YXRlczoKKFhFTikgICAgIEMxOgl0eXBlW0MxXSBsYXRl
bmN5WzAwMF0gdXNhZ2VbMDAwMDAwMDBdIG1ldGhvZFsgSEFMVF0gZHVyYXRpb25bMF0KKFhFTikg
ICAgIEMwOgl1c2FnZVswMDAwMDAwMF0gZHVyYXRpb25bMzIyMzUwOTU3MjkzXQooWEVOKSBQQzJb
MF0gUEMzWzBdIFBDNlswXSBQQzdbMF0KKFhFTikgQ0MzWzBdIENDNlswXSBDQzdbMF0KKFhFTikg
PT1jcHUzPT0KKFhFTikgYWN0aXZlIHN0YXRlOgkJQzI1NQooWEVOKSBtYXhfY3N0YXRlOgkJQzcK
KFhFTikgc3RhdGVzOgooWEVOKSAgICAgQzE6CXR5cGVbQzFdIGxhdGVuY3lbMDAwXSB1c2FnZVsw
MDAwMDAwMF0gbWV0aG9kWyBIQUxUXSBkdXJhdGlvblswXQooWEVOKSAgICAgQzA6CXVzYWdlWzAw
MDAwMDAwXSBkdXJhdGlvblszMjIzNzU3NDY3MjBdCihYRU4pIFBDMlswXSBQQzNbMF0gUEM2WzBd
IFBDN1swXQooWEVOKSBDQzNbMF0gQ0M2WzBdIENDN1swXQooWEVOKSBbZTogZHVtcCBldnRjaG4g
aW5mb10KKFhFTikgJ2UnIHByZXNzZWQgLT4gZHVtcGluZyBldmVudC1jaGFubmVsIGluZm8KKFhF
TikgRXZlbnQgY2hhbm5lbCBpbmZvcm1hdGlvbiBmb3IgZG9tYWluIDA6CihYRU4pIFBvbGxpbmcg
dkNQVXM6IHt9CihYRU4pICAgICBwb3J0IFtwL21dCihYRU4pICAgICAgICAxIFsxLzBdOiBzPTUg
bj0wIHg9MCB2PTAKKFhFTikgICAgICAgIDIgWzAvMV06IHM9NiBuPTAgeD0wCihYRU4pICAgICAg
ICAzIFsxLzBdOiBzPTYgbj0wIHg9MAooWEVOKSAgICAgICAgNCBbMC8wXTogcz02IG49MCB4PTAK
KFhFTikgICAgICAgIDUgWzAvMF06IHM9NSBuPTAgeD0wIHY9MQooWEVOKSAgICAgICAgNiBbMC8w
XTogcz02IG49MCB4PTAKKFhFTikgICAgICAgIDcgWzAvMF06IHM9NSBuPTEgeD0wIHY9MAooWEVO
KSAgICAgICAgOCBbMC8xXTogcz02IG49MSB4PTAKKFhFTikgICAgICAgIDkgWzAvMF06IHM9NiBu
PTEgeD0wCihYRU4pICAgICAgIDEwIFswLzBdOiBzPTYgbj0xIHg9MAooWEVOKSAgICAgICAxMSBb
MC8wXTogcz01IG49MSB4PTAgdj0xCihYRU4pICAgICAgIDEyIFswLzBdOiBzPTYgbj0xIHg9MAoo
WEVOKSAgICAgICAxMyBbMC8wXTogcz01IG49MiB4PTAgdj0wCihYRU4pICAgICAgIDE0IFswLzFd
OiBzPTYgbj0yIHg9MAooWEVOKSAgICAgICAxNSBbMC8wXTogcz02IG49MiB4PTAKKFhFTikgICAg
ICAgMTYgWzAvMF06IHM9NiBuPTIgeD0wCihYRU4pICAgICAgIDE3IFswLzBdOiBzPTUgbj0yIHg9
MCB2PTEKKFhFTikgICAgICAgMTggWzAvMF06IHM9NiBuPTIgeD0wCihYRU4pICAgICAgIDE5IFsw
LzBdOiBzPTUgbj0zIHg9MCB2PTAKKFhFTikgICAgICAgMjAgWzEvMV06IHM9NiBuPTMgeD0wCihY
RU4pICAgICAgIDIxIFswLzBdOiBzPTYgbj0zIHg9MAooWEVOKSAgICAgICAyMiBbMC8wXTogcz02
IG49MyB4PTAKKFhFTikgICAgICAgMjMgWzAvMF06IHM9NSBuPTMgeD0wIHY9MQooWEVOKSAgICAg
ICAyNCBbMC8wXTogcz02IG49MyB4PTAKKFhFTikgICAgICAgMjUgWzAvMF06IHM9MyBuPTAgeD0w
IGQ9MCBwPTM1CihYRU4pICAgICAgIDI2IFswLzBdOiBzPTQgbj0wIHg9MCBwPTkgaT05CihYRU4p
ICAgICAgIDI3IFswLzBdOiBzPTUgbj0wIHg9MCB2PTIKKFhFTikgICAgICAgMjggWzAvMF06IHM9
NCBuPTAgeD0wIHA9OCBpPTgKKFhFTikgICAgICAgMjkgWzAvMF06IHM9NCBuPTAgeD0wIHA9Mjc4
IGk9MjcKKFhFTikgICAgICAgMzAgWzAvMF06IHM9NCBuPTAgeD0wIHA9Mjc5IGk9MjYKKFhFTikg
ICAgICAgMzEgWzAvMF06IHM9NCBuPTAgeD0wIHA9Mjc3IGk9MjgKKFhFTikgICAgICAgMzIgWzAv
MF06IHM9NCBuPTAgeD0wIHA9MTYgaT0xNgooWEVOKSAgICAgICAzMyBbMC8wXTogcz00IG49MCB4
PTAgcD0yMyBpPTIzCihYRU4pICAgICAgIDM0IFsxLzBdOiBzPTQgbj0wIHg9MCBwPTI3NiBpPTI5
CihYRU4pICAgICAgIDM1IFswLzBdOiBzPTMgbj0wIHg9MCBkPTAgcD0yNQooWEVOKSAgICAgICAz
NiBbMC8wXTogcz01IG49MCB4PTAgdj0zCihYRU4pICAgICAgIDM3IFsxLzBdOiBzPTQgbj0wIHg9
MCBwPTI3NSBpPTMwCihYRU4pIFtnOiBwcmludCBncmFudCB0YWJsZSB1c2FnZV0KKFhFTikgZ250
dGFiX3VzYWdlX3ByaW50X2FsbCBbIGtleSAnZycgcHJlc3NlZAooWEVOKSAgICAgICAtLS0tLS0t
LSBhY3RpdmUgLS0tLS0tLS0gICAgICAgLS0tLS0tLS0gc2hhcmVkIC0tLS0tLS0tCihYRU4pIFty
ZWZdIGxvY2FsZG9tIG1mbiAgICAgIHBpbiAgICAgICAgICBsb2NhbGRvbSBnbWZuICAgICBmbGFn
cwooWEVOKSBncmFudC10YWJsZSBmb3IgcmVtb3RlIGRvbWFpbjogICAgMCAuLi4gbm8gYWN0aXZl
IGdyYW50IHRhYmxlIGVudHJpZXMKKFhFTikgZ250dGFiX3VzYWdlX3ByaW50X2FsbCBdIGRvbmUK
KFhFTikgW2k6IGR1bXAgaW50ZXJydXB0IGJpbmRpbmdzXQooWEVOKSBHdWVzdCBpbnRlcnJ1cHQg
aW5mb3JtYXRpb246CihYRU4pICAgIElSUTogICAwIGFmZmluaXR5OjAwMDEgdmVjOmYwIHR5cGU9
SU8tQVBJQy1lZGdlICAgIHN0YXR1cz0wMDAwMDAwMCBtYXBwZWQsIHVuYm91bmQKKFhFTikgICAg
SVJROiAgIDEgYWZmaW5pdHk6MDAwMSB2ZWM6MzggdHlwZT1JTy1BUElDLWVkZ2UgICAgc3RhdHVz
PTAwMDAwMDAyIG1hcHBlZCwgdW5ib3VuZAooWEVOKSAgICBJUlE6ICAgMiBhZmZpbml0eTpmZmZm
IHZlYzplMiB0eXBlPVhULVBJQyAgICAgICAgICBzdGF0dXM9MDAwMDAwMDAgbWFwcGVkLCB1bmJv
dW5kCihYRU4pICAgIElSUTogICAzIGFmZmluaXR5OjAwMDEgdmVjOjQwIHR5cGU9SU8tQVBJQy1l
ZGdlICAgIHN0YXR1cz0wMDAwMDAwMiBtYXBwZWQsIHVuYm91bmQKKFhFTikgICAgSVJROiAgIDQg
YWZmaW5pdHk6MDAwMSB2ZWM6NDggdHlwZT1JTy1BUElDLWVkZ2UgICAgc3RhdHVzPTAwMDAwMDAy
IG1hcHBlZCwgdW5ib3VuZAooWEVOKSAgICBJUlE6ICAgNSBhZmZpbml0eTowMDAxIHZlYzo1MCB0
eXBlPUlPLUFQSUMtZWRnZSAgICBzdGF0dXM9MDAwMDAwMDIgbWFwcGVkLCB1bmJvdW5kCihYRU4p
ICAgIElSUTogICA2IGFmZmluaXR5OjAwMDEgdmVjOjU4IHR5cGU9SU8tQVBJQy1lZGdlICAgIHN0
YXR1cz0wMDAwMDAwMiBtYXBwZWQsIHVuYm91bmQKKFhFTikgICAgSVJROiAgIDcgYWZmaW5pdHk6
MDAwMSB2ZWM6NjAgdHlwZT1JTy1BUElDLWVkZ2UgICAgc3RhdHVzPTAwMDAwMDAyIG1hcHBlZCwg
dW5ib3VuZAooWEVOKSAgICBJUlE6ICAgOCBhZmZpbml0eTowMDAxIHZlYzo2OCB0eXBlPUlPLUFQ
SUMtZWRnZSAgICBzdGF0dXM9MDAwMDAwMTAgaW4tZmxpZ2h0PTAgZG9tYWluLWxpc3Q9MDogIDgo
LVMtLSksCihYRU4pICAgIElSUTogICA5IGFmZmluaXR5OjAwMDEgdmVjOjcwIHR5cGU9SU8tQVBJ
Qy1sZXZlbCAgIHN0YXR1cz0wMDAwMDAxMCBpbi1mbGlnaHQ9MCBkb21haW4tbGlzdD0wOiAgOSgt
Uy0tKSwKKFhFTikgICAgSVJROiAgMTAgYWZmaW5pdHk6MDAwMSB2ZWM6NzggdHlwZT1JTy1BUElD
LWVkZ2UgICAgc3RhdHVzPTAwMDAwMDAyIG1hcHBlZCwgdW5ib3VuZAooWEVOKSAgICBJUlE6ICAx
MSBhZmZpbml0eTowMDAxIHZlYzo4OCB0eXBlPUlPLUFQSUMtZWRnZSAgICBzdGF0dXM9MDAwMDAw
MDIgbWFwcGVkLCB1bmJvdW5kCihYRU4pICAgIElSUTogIDEyIGFmZmluaXR5OjAwMDEgdmVjOjkw
IHR5cGU9SU8tQVBJQy1lZGdlICAgIHN0YXR1cz0wMDAwMDAwMiBtYXBwZWQsIHVuYm91bmQKKFhF
TikgICAgSVJROiAgMTMgYWZmaW5pdHk6MDAwZiB2ZWM6OTggdHlwZT1JTy1BUElDLWVkZ2UgICAg
c3RhdHVzPTAwMDAwMDAyIG1hcHBlZCwgdW5ib3VuZAooWEVOKSAgICBJUlE6ICAxNCBhZmZpbml0
eTowMDAxIHZlYzphMCB0eXBlPUlPLUFQSUMtZWRnZSAgICBzdGF0dXM9MDAwMDAwMDIgbWFwcGVk
LCB1bmJvdW5kCihYRU4pICAgIElSUTogIDE1IGFmZmluaXR5OjAwMDEgdmVjOmE4IHR5cGU9SU8t
QVBJQy1lZGdlICAgIHN0YXR1cz0wMDAwMDAwMiBtYXBwZWQsIHVuYm91bmQKKFhFTikgICAgSVJR
OiAgMTYgYWZmaW5pdHk6MDAwMSB2ZWM6YjAgdHlwZT1JTy1BUElDLWxldmVsICAgc3RhdHVzPTAw
MDAwMDEwIGluLWZsaWdodD0wIGRvbWFpbi1saXN0PTA6IDE2KC1TLS0pLAooWEVOKSAgICBJUlE6
ICAxOCBhZmZpbml0eTowMDBmIHZlYzpiOCB0eXBlPUlPLUFQSUMtbGV2ZWwgICBzdGF0dXM9MDAw
MDAwMDIgbWFwcGVkLCB1bmJvdW5kCihYRU4pICAgIElSUTogIDE5IGFmZmluaXR5OjAwMDEgdmVj
OmMwIHR5cGU9SU8tQVBJQy1sZXZlbCAgIHN0YXR1cz0wMDAwMDAwMiBtYXBwZWQsIHVuYm91bmQK
KFhFTikgICAgSVJROiAgMjAgYWZmaW5pdHk6MDAwZiB2ZWM6ZDggdHlwZT1JTy1BUElDLWxldmVs
ICAgc3RhdHVzPTAwMDAwMDAyIG1hcHBlZCwgdW5ib3VuZAooWEVOKSAgICBJUlE6ICAyMiBhZmZp
bml0eTowMDAxIHZlYzpkMCB0eXBlPUlPLUFQSUMtbGV2ZWwgICBzdGF0dXM9MDAwMDAwMDIgbWFw
cGVkLCB1bmJvdW5kCihYRU4pICAgIElSUTogIDIzIGFmZmluaXR5OjAwMDEgdmVjOmM4IHR5cGU9
SU8tQVBJQy1sZXZlbCAgIHN0YXR1cz0wMDAwMDAxMCBpbi1mbGlnaHQ9MCBkb21haW4tbGlzdD0w
OiAyMygtUy0tKSwKKFhFTikgICAgSVJROiAgMjQgYWZmaW5pdHk6MDAwMSB2ZWM6MjggdHlwZT1E
TUFfTVNJICAgICAgICAgc3RhdHVzPTAwMDAwMDAwIG1hcHBlZCwgdW5ib3VuZAooWEVOKSAgICBJ
UlE6ICAyNSBhZmZpbml0eTowMDAxIHZlYzozMCB0eXBlPURNQV9NU0kgICAgICAgICBzdGF0dXM9
MDAwMDAwMDAgbWFwcGVkLCB1bmJvdW5kCihYRU4pICAgIElSUTogIDI2IGFmZmluaXR5OjAwMDEg
dmVjOjY5IHR5cGU9UENJLU1TSSAgICAgICAgIHN0YXR1cz0wMDAwMDAxMCBpbi1mbGlnaHQ9MCBk
b21haW4tbGlzdD0wOjI3OSgtUy0tKSwKKFhFTikgICAgSVJROiAgMjcgYWZmaW5pdHk6MDAwMSB2
ZWM6MjkgdHlwZT1QQ0ktTVNJICAgICAgICAgc3RhdHVzPTAwMDAwMDEwIGluLWZsaWdodD0wIGRv
bWFpbi1saXN0PTA6Mjc4KC1TLS0pLAooWEVOKSAgICBJUlE6ICAyOCBhZmZpbml0eTowMDAxIHZl
YzozMSB0eXBlPVBDSS1NU0kgICAgICAgICBzdGF0dXM9MDAwMDAwMTAgaW4tZmxpZ2h0PTAgZG9t
YWluLWxpc3Q9MDoyNzcoLVMtLSksCihYRU4pICAgIElSUTogIDI5IGFmZmluaXR5OjAwMDEgdmVj
OjcxIHR5cGU9UENJLU1TSSAgICAgICAgIHN0YXR1cz0wMDAwMDAxMCBpbi1mbGlnaHQ9MSBkb21h
aW4tbGlzdD0wOjI3NihQUy1NKSwKKFhFTikgICAgSVJROiAgMzAgYWZmaW5pdHk6MDAwMSB2ZWM6
ODkgdHlwZT1QQ0ktTVNJICAgICAgICAgc3RhdHVzPTAwMDAwMDEwIGluLWZsaWdodD0wIGRvbWFp
bi1saXN0PTA6Mjc1KFBTLS0pLAooWEVOKSBJTy1BUElDIGludGVycnVwdCBpbmZvcm1hdGlvbjoK
KFhFTikgICAgIElSUSAgMCBWZWMyNDA6CihYRU4pICAgICAgIEFwaWMgMHgwMCwgUGluICAyOiB2
ZWM9ZjAgZGVsaXZlcnk9TG9QcmkgZGVzdD1MIHN0YXR1cz0wIHBvbGFyaXR5PTAgaXJyPTAgdHJp
Zz1FIG1hc2s9MCBkZXN0X2lkOjAKKFhFTikgICAgIElSUSAgMSBWZWMgNTY6CihYRU4pICAgICAg
IEFwaWMgMHgwMCwgUGluICAxOiB2ZWM9MzggZGVsaXZlcnk9TG9QcmkgZGVzdD1MIHN0YXR1cz0w
IHBvbGFyaXR5PTAgaXJyPTAgdHJpZz1FIG1hc2s9MCBkZXN0X2lkOjAKKFhFTikgICAgIElSUSAg
MyBWZWMgNjQ6CihYRU4pICAgICAgIEFwaWMgMHgwMCwgUGluICAzOiB2ZWM9NDAgZGVsaXZlcnk9
TG9QcmkgZGVzdD1MIHN0YXR1cz0wIHBvbGFyaXR5PTAgaXJyPTAgdHJpZz1FIG1hc2s9MCBkZXN0
X2lkOjAKKFhFTikgICAgIElSUSAgNCBWZWMgNzI6CihYRU4pICAgICAgIEFwaWMgMHgwMCwgUGlu
ICA0OiB2ZWM9NDggZGVsaXZlcnk9TG9QcmkgZGVzdD1MIHN0YXR1cz0wIHBvbGFyaXR5PTAgaXJy
PTAgdHJpZz1FIG1hc2s9MCBkZXN0X2lkOjAKKFhFTikgICAgIElSUSAgNSBWZWMgODA6CihYRU4p
ICAgICAgIEFwaWMgMHgwMCwgUGluICA1OiB2ZWM9NTAgZGVsaXZlcnk9TG9QcmkgZGVzdD1MIHN0
YXR1cz0wIHBvbGFyaXR5PTAgaXJyPTAgdHJpZz1FIG1hc2s9MCBkZXN0X2lkOjAKKFhFTikgICAg
IElSUSAgNiBWZWMgODg6CihYRU4pICAgICAgIEFwaWMgMHgwMCwgUGluICA2OiB2ZWM9NTggZGVs
aXZlcnk9TG9QcmkgZGVzdD1MIHN0YXR1cz0wIHBvbGFyaXR5PTAgaXJyPTAgdHJpZz1FIG1hc2s9
MCBkZXN0X2lkOjAKKFhFTikgICAgIElSUSAgNyBWZWMgOTY6CihYRU4pICAgICAgIEFwaWMgMHgw
MCwgUGluICA3OiB2ZWM9NjAgZGVsaXZlcnk9TG9QcmkgZGVzdD1MIHN0YXR1cz0wIHBvbGFyaXR5
PTAgaXJyPTAgdHJpZz1FIG1hc2s9MCBkZXN0X2lkOjAKKFhFTikgICAgIElSUSAgOCBWZWMxMDQ6
CihYRU4pICAgICAgIEFwaWMgMHgwMCwgUGluICA4OiB2ZWM9NjggZGVsaXZlcnk9TG9QcmkgZGVz
dD1MIHN0YXR1cz0wIHBvbGFyaXR5PTAgaXJyPTAgdHJpZz1FIG1hc2s9MCBkZXN0X2lkOjAKKFhF
TikgICAgIElSUSAgOSBWZWMxMTI6CihYRU4pICAgICAgIEFwaWMgMHgwMCwgUGluICA5OiB2ZWM9
NzAgZGVsaXZlcnk9TG9QcmkgZGVzdD1MIHN0YXR1cz0wIHBvbGFyaXR5PTAgaXJyPTAgdHJpZz1M
IG1hc2s9MCBkZXN0X2lkOjAKKFhFTikgICAgIElSUSAxMCBWZWMxMjA6CihYRU4pICAgICAgIEFw
aWMgMHgwMCwgUGluIDEwOiB2ZWM9NzggZGVsaXZlcnk9TG9QcmkgZGVzdD1MIHN0YXR1cz0wIHBv
bGFyaXR5PTAgaXJyPTAgdHJpZz1FIG1hc2s9MCBkZXN0X2lkOjAKKFhFTikgICAgIElSUSAxMSBW
ZWMxMzY6CihYRU4pICAgICAgIEFwaWMgMHgwMCwgUGluIDExOiB2ZWM9ODggZGVsaXZlcnk9TG9Q
cmkgZGVzdD1MIHN0YXR1cz0wIHBvbGFyaXR5PTAgaXJyPTAgdHJpZz1FIG1hc2s9MCBkZXN0X2lk
OjAKKFhFTikgICAgIElSUSAxMiBWZWMxNDQ6CihYRU4pICAgICAgIEFwaWMgMHgwMCwgUGluIDEy
OiB2ZWM9OTAgZGVsaXZlcnk9TG9QcmkgZGVzdD1MIHN0YXR1cz0wIHBvbGFyaXR5PTAgaXJyPTAg
dHJpZz1FIG1hc2s9MCBkZXN0X2lkOjAKKFhFTikgICAgIElSUSAxMyBWZWMxNTI6CihYRU4pICAg
ICAgIEFwaWMgMHgwMCwgUGluIDEzOiB2ZWM9OTggZGVsaXZlcnk9TG9QcmkgZGVzdD1MIHN0YXR1
cz0wIHBvbGFyaXR5PTAgaXJyPTAgdHJpZz1FIG1hc2s9MSBkZXN0X2lkOjAKKFhFTikgICAgIElS
USAxNCBWZWMxNjA6CihYRU4pICAgICAgIEFwaWMgMHgwMCwgUGluIDE0OiB2ZWM9YTAgZGVsaXZl
cnk9TG9QcmkgZGVzdD1MIHN0YXR1cz0wIHBvbGFyaXR5PTAgaXJyPTAgdHJpZz1FIG1hc2s9MCBk
ZXN0X2lkOjAKKFhFTikgICAgIElSUSAxNSBWZWMxNjg6CihYRU4pICAgICAgIEFwaWMgMHgwMCwg
UGluIDE1OiB2ZWM9YTggZGVsaXZlcnk9TG9QcmkgZGVzdD1MIHN0YXR1cz0wIHBvbGFyaXR5PTAg
aXJyPTAgdHJpZz1FIG1hc2s9MCBkZXN0X2lkOjAKKFhFTikgICAgIElSUSAxNiBWZWMxNzY6CihY
RU4pICAgICAgIEFwaWMgMHgwMCwgUGluIDE2OiB2ZWM9YjAgZGVsaXZlcnk9TG9QcmkgZGVzdD1M
IHN0YXR1cz0wIHBvbGFyaXR5PTEgaXJyPTAgdHJpZz1MIG1hc2s9MCBkZXN0X2lkOjAKKFhFTikg
ICAgIElSUSAxOCBWZWMxODQ6CihYRU4pICAgICAgIEFwaWMgMHgwMCwgUGluIDE4OiB2ZWM9Yjgg
ZGVsaXZlcnk9TG9QcmkgZGVzdD1MIHN0YXR1cz0wIHBvbGFyaXR5PTEgaXJyPTAgdHJpZz1MIG1h
c2s9MSBkZXN0X2lkOjAKKFhFTikgICAgIElSUSAxOSBWZWMxOTI6CihYRU4pICAgICAgIEFwaWMg
MHgwMCwgUGluIDE5OiB2ZWM9YzAgZGVsaXZlcnk9TG9QcmkgZGVzdD1MIHN0YXR1cz0wIHBvbGFy
aXR5PTEgaXJyPTAgdHJpZz1MIG1hc2s9MSBkZXN0X2lkOjAKKFhFTikgICAgIElSUSAyMCBWZWMy
MTY6CihYRU4pICAgICAgIEFwaWMgMHgwMCwgUGluIDIwOiB2ZWM9ZDggZGVsaXZlcnk9TG9Qcmkg
ZGVzdD1MIHN0YXR1cz0wIHBvbGFyaXR5PTEgaXJyPTAgdHJpZz1MIG1hc2s9MSBkZXN0X2lkOjAK
KFhFTikgICAgIElSUSAyMiBWZWMyMDg6CihYRU4pICAgICAgIEFwaWMgMHgwMCwgUGluIDIyOiB2
ZWM9ZDAgZGVsaXZlcnk9TG9QcmkgZGVzdD1MIHN0YXR1cz0wIHBvbGFyaXR5PTEgaXJyPTAgdHJp
Zz1MIG1hc2s9MSBkZXN0X2lkOjAKKFhFTikgICAgIElSUSAyMyBWZWMyMDA6CihYRU4pICAgICAg
IEFwaWMgMHgwMCwgUGluIDIzOiB2ZWM9YzggZGVsaXZlcnk9TG9QcmkgZGVzdD1MIHN0YXR1cz0w
IHBvbGFyaXR5PTEgaXJyPTAgdHJpZz1MIG1hc2s9MCBkZXN0X2lkOjAKKFhFTikgW206IG1lbW9y
eSBpbmZvXQooWEVOKSBQaHlzaWNhbCBtZW1vcnkgaW5mb3JtYXRpb246CihYRU4pICAgICBYZW4g
aGVhcDogMGtCIGZyZWUKKFhFTikgICAgIGhlYXBbMTRdOiA2NDUxMmtCIGZyZWUKKFhFTikgICAg
IGhlYXBbMTVdOiAxMzEwNzJrQiBmcmVlCihYRU4pICAgICBoZWFwWzE2XTogMjYyMTQ0a0IgZnJl
ZQooWEVOKSAgICAgaGVhcFsxN106IDUyMjIzNmtCIGZyZWUKKFhFTikgICAgIGhlYXBbMThdOiAx
MDQ4NTcya0IgZnJlZQooWEVOKSAgICAgaGVhcFsxOV06IDY5MTM0OGtCIGZyZWUKKFhFTikgICAg
IGhlYXBbMjBdOiA1MzY5MDBrQiBmcmVlCihYRU4pICAgICBEb20gaGVhcDogMzI1Njc4NGtCIGZy
ZWUKKFhFTikgW246IE5NSSBzdGF0aXN0aWNzXQooWEVOKSBDUFUJTk1JCihYRU4pICAgMAkgIDAK
KFhFTikgICAxCSAgMAooWEVOKSAgIDIJICAwCihYRU4pICAgMwkgIDAKKFhFTikgZG9tMCB2Y3B1
MDogTk1JIG5laXRoZXIgcGVuZGluZyBub3IgbWFza2VkCihYRU4pIFtxOiBkdW1wIGRvbWFpbiAo
YW5kIGd1ZXN0IGRlYnVnKSBpbmZvXQooWEVOKSAncScgcHJlc3NlZCAtPiBkdW1waW5nIGRvbWFp
biBpbmZvIChub3c9MHg0QjozQzcwMjgyQSkKKFhFTikgR2VuZXJhbCBpbmZvcm1hdGlvbiBmb3Ig
ZG9tYWluIDA6CihYRU4pICAgICByZWZjbnQ9MyBkeWluZz0wIHBhdXNlX2NvdW50PTAKKFhFTikg
ICAgIG5yX3BhZ2VzPTE4NzUzOSB4ZW5oZWFwX3BhZ2VzPTYgc2hhcmVkX3BhZ2VzPTAgcGFnZWRf
cGFnZXM9MCBkaXJ0eV9jcHVzPXsxLTN9IG1heF9wYWdlcz0xODgxNDcKKFhFTikgICAgIGhhbmRs
ZT0wMDAwMDAwMC0wMDAwLTAwMDAtMDAwMC0wMDAwMDAwMDAwMDAgdm1fYXNzaXN0PTAwMDAwMDBk
CihYRU4pIFJhbmdlc2V0cyBiZWxvbmdpbmcgdG8gZG9tYWluIDA6CihYRU4pICAgICBJL08gUG9y
dHMgIHsgMC0xZiwgMjItM2YsIDQ0LTYwLCA2Mi05ZiwgYTItNDA3LCA0MGMtY2ZiLCBkMDAtMjA0
ZiwgMjA1OC1mZmZmIH0KKFhFTikgICAgIEludGVycnVwdHMgeyAwLTI3OSB9CihYRU4pICAgICBJ
L08gTWVtb3J5IHsgMC1mZWJmZiwgZmVjMDEtZmVkZmYsIGZlZTAxLWZmZmZmZmZmZmZmZmZmZmYg
fQooWEVOKSBNZW1vcnkgcGFnZXMgYmVsb25naW5nIHRvIGRvbWFpbiAwOgooWEVOKSAgICAgRG9t
UGFnZSBsaXN0IHRvbyBsb25nIHRvIGRpc3BsYXkKKFhFTikgICAgIFhlblBhZ2UgMDAwMDAwMDAw
MDE0ODkxNzogY2FmPWMwMDAwMDAwMDAwMDAwMDIsIHRhZj03NDAwMDAwMDAwMDAwMDAyCihYRU4p
ICAgICBYZW5QYWdlIDAwMDAwMDAwMDAxNDg5MTY6IGNhZj1jMDAwMDAwMDAwMDAwMDAxLCB0YWY9
NzQwMDAwMDAwMDAwMDAwMQooWEVOKSAgICAgWGVuUGFnZSAwMDAwMDAwMDAwMTQ4OTE1OiBjYWY9
YzAwMDAwMDAwMDAwMDAwMSwgdGFmPTc0MDAwMDAwMDAwMDAwMDEKKFhFTikgICAgIFhlblBhZ2Ug
MDAwMDAwMDAwMDE0ODkxNDogY2FmPWMwMDAwMDAwMDAwMDAwMDEsIHRhZj03NDAwMDAwMDAwMDAw
MDAxCihYRU4pICAgICBYZW5QYWdlIDAwMDAwMDAwMDAwYWEwZmQ6IGNhZj1jMDAwMDAwMDAwMDAw
MDAyLCB0YWY9NzQwMDAwMDAwMDAwMDAwMgooWEVOKSAgICAgWGVuUGFnZSAwMDAwMDAwMDAwMTNm
NDI4OiBjYWY9YzAwMDAwMDAwMDAwMDAwMiwgdGFmPTc0MDAwMDAwMDAwMDAwMDIKKFhFTikgVkNQ
VSBpbmZvcm1hdGlvbiBhbmQgY2FsbGJhY2tzIGZvciBkb21haW4gMDoKKFhFTikgICAgIFZDUFUw
OiBDUFUwIFtoYXM9Rl0gcG9sbD0wIHVwY2FsbF9wZW5kID0gMDEsIHVwY2FsbF9tYXNrID0gMDAg
ZGlydHlfY3B1cz17fSBjcHVfYWZmaW5pdHk9ezB9CihYRU4pICAgICBwYXVzZV9jb3VudD0wIHBh
dXNlX2ZsYWdzPTAKKFhFTikgICAgIE5vIHBlcmlvZGljIHRpbWVyCihYRU4pICAgICBWQ1BVMTog
Q1BVMiBbaGFzPUZdIHBvbGw9MCB1cGNhbGxfcGVuZCA9IDAwLCB1cGNhbGxfbWFzayA9IDAwIGRp
cnR5X2NwdXM9ezJ9IGNwdV9hZmZpbml0eT17MC0xNX0KKFhFTikgICAgIHBhdXNlX2NvdW50PTAg
cGF1c2VfZmxhZ3M9MQooWEVOKSAgICAgTm8gcGVyaW9kaWMgdGltZXIKKFhFTikgICAgIFZDUFUy
OiBDUFUzIFtoYXM9Rl0gcG9sbD0wIHVwY2FsbF9wZW5kID0gMDAsIHVwY2FsbF9tYXNrID0gMDAg
ZGlydHlfY3B1cz17M30gY3B1X2FmZmluaXR5PXswLTE1fQooWEVOKSAgICAgcGF1c2VfY291bnQ9
MCBwYXVzZV9mbGFncz0xCihYRU4pICAgICBObyBwZXJpb2RpYyB0aW1lcgooWEVOKSAgICAgVkNQ
VTM6IENQVTEgW2hhcz1GXSBwb2xsPTAgdXBjYWxsX3BlbmQgPSAwMCwgdXBjYWxsX21hc2sgPSAw
MCBkaXJ0eV9jcHVzPXsxfSBjcHVfYWZmaW5pdHk9ezAtMTV9CihYRU4pICAgICBwYXVzZV9jb3Vu
dD0wIHBhdXNlX2ZsYWdzPTEKKFhFTikgICAgIE5vIHBlcmlvZGljIHRpbWVyCihYRU4pIE5vdGlm
eWluZyBndWVzdCAwOjAgKHZpcnEgMSwgcG9ydCA1LCBzdGF0IDAvMC8tMSkKKFhFTikgTm90aWZ5
aW5nIGd1ZXN0IDA6MSAodmlycSAxLCBwb3J0IDExLCBzdGF0IDAvMC8wKQooWEVOKSBOb3RpZnlp
bmcgZ3Vlc3QgMDoyICh2aXJxIDEsIHBvcnQgMTcsIHN0YXQgMC8wLzApCihYRU4pIE5vdGlmeWlu
ZyBndWVzdCAwOjMgKHZpcnEgMSwgcG9ydCAyMywgc3RhdCAwLzAvMCkKCihYRU4pIFNoYXJlZCBm
cmFtZXMgMCAtLSBTYXZlZCBmcmFtZXMgMApbICAzMjMuMzE0NzI2XSB2KFhFTikgW3I6IGR1bXAg
cnVuIHF1ZXVlc10KY3B1IDEKKFhFTikgc2NoZWRfc210X3Bvd2VyX3NhdmluZ3M6IGRpc2FibGVk
CihYRU4pIE5PVz0weDAwMDAwMDRCNDgzNzUwMUYKKFhFTikgSWRsZSBjcHVwb29sOgpbICAzMjMu
MzE0NzI2XSAgKFhFTikgU2NoZWR1bGVyOiBTTVAgQ3JlZGl0IFNjaGVkdWxlciAoY3JlZGl0KQog
KFhFTikgaW5mbzoKKFhFTikgCW5jcHVzICAgICAgICAgICAgICA9IDQKKFhFTikgCW1hc3RlciAg
ICAgICAgICAgICA9IDAKKFhFTikgCWNyZWRpdCAgICAgICAgICAgICA9IDQwMAooWEVOKSAJY3Jl
ZGl0IGJhbGFuY2UgICAgID0gLTYxCihYRU4pIAl3ZWlnaHQgICAgICAgICAgICAgPSAyNTYKKFhF
TikgCXJ1bnFfc29ydCAgICAgICAgICA9IDIyNTIKKFhFTikgCWRlZmF1bHQtd2VpZ2h0ICAgICA9
IDI1NgooWEVOKSAJdHNsaWNlICAgICAgICAgICAgID0gMTBtcwooWEVOKSAJcmF0ZWxpbWl0ICAg
ICAgICAgID0gMTAwMHVzCihYRU4pIAljcmVkaXRzIHBlciBtc2VjICAgPSAxMAooWEVOKSAJdGlj
a3MgcGVyIHRzbGljZSAgID0gMQooWEVOKSAJbWlncmF0aW9uIGRlbGF5ICAgID0gMHVzCjA6IG1h
c2tlZD0wIHBlbmQoWEVOKSBpZGxlcnM6IDAwMGEKKFhFTikgYWN0aXZlIHZjcHVzOgooWEVOKSAJ
ICAxOiBpbmc9MSBldmVudF9zZWwgWzAuMV0gcHJpPS0yIGZsYWdzPTAgY3B1PTIgY3JlZGl0PS02
NjMgW3c9MjU2XQowMDAwMDAwMDAwMDAwMDAxKFhFTikgQ3B1cG9vbCAwOgooWEVOKSBTY2hlZHVs
ZXI6IFNNUCBDcmVkaXQgU2NoZWR1bGVyIChjcmVkaXQpCgooWEVOKSBpbmZvOgooWEVOKSAJbmNw
dXMgICAgICAgICAgICAgID0gNAooWEVOKSAJbWFzdGVyICAgICAgICAgICAgID0gMAooWEVOKSAJ
Y3JlZGl0ICAgICAgICAgICAgID0gNDAwCihYRU4pIAljcmVkaXQgYmFsYW5jZSAgICAgPSAtNjEK
KFhFTikgCXdlaWdodCAgICAgICAgICAgICA9IDI1NgooWEVOKSAJcnVucV9zb3J0ICAgICAgICAg
ID0gMjI1MgooWEVOKSAJZGVmYXVsdC13ZWlnaHQgICAgID0gMjU2CihYRU4pIAl0c2xpY2UgICAg
ICAgICAgICAgPSAxMG1zCihYRU4pIAlyYXRlbGltaXQgICAgICAgICAgPSAxMDAwdXMKKFhFTikg
CWNyZWRpdHMgcGVyIG1zZWMgICA9IDEwCihYRU4pIAl0aWNrcyBwZXIgdHNsaWNlICAgPSAxCihY
RU4pIAltaWdyYXRpb24gZGVsYXkgICAgPSAwdXMKWyAgMzIzLjM0ODkwN10gIChYRU4pIGlkbGVy
czogMDAwYQooWEVOKSBhY3RpdmUgdmNwdXM6CihYRU4pIAkgIDE6IFswLjFdIHByaT0tMiBmbGFn
cz0wIGNwdT0yIGNyZWRpdD0tMTIyMiBbdz0yNTZdCiAoWEVOKSBDUFVbMDBdIDE6IG1hc2tlZD0w
IHBlbmQgc29ydD0yMjUyLCBzaWJsaW5nPTAwMDEsIGluZz0wIGV2ZW50X3NlbCBjb3JlPTAwMGYK
MDAwMDAwMDAwMDAwMDAwMChYRU4pIAlydW46IFszMjc2Ny4wXSBwcmk9MCBmbGFncz0wIGNwdT0w
CihYRU4pIAkgIDE6IFswLjBdIHByaT0wIGZsYWdzPTAgY3B1PTAgY3JlZGl0PTc2IFt3PTI1Nl0K
CihYRU4pIENQVVswMV0gWyAgMzIzLjQ1MTkxM10gICBzb3J0PTIyNTIsIHNpYmxpbmc9MDAwMiwg
IGNvcmU9MDAwZgooWEVOKSAJcnVuOiBbMzI3NjcuMV0gcHJpPS02NCBmbGFncz0wIGNwdT0xCjI6
IG1hc2tlZD0xIHBlbmQoWEVOKSBDUFVbMDJdICBzb3J0PTIyNTIsIHNpYmxpbmc9MDAwNCwgY29y
ZT0wMDBmCihYRU4pIAlydW46IFswLjFdIHByaT0tMiBmbGFncz0wIGNwdT0yIGNyZWRpdD0tMTYx
MSBbdz0yNTZdCihYRU4pIAkgIDE6IFszMjc2Ny4yXSBwcmk9LTY0IGZsYWdzPTAgY3B1PTIKKFhF
TikgQ1BVWzAzXSBpbmc9MSBldmVudF9zZWwgIHNvcnQ9MjI1Miwgc2libGluZz0wMDA4LCAwMDAw
MDAwMDAwMDAwMDAxY29yZT0wMDBmCihYRU4pIAlydW46IApbMzI3NjcuM10gcHJpPS02NCBmbGFn
cz0wIGNwdT0zClsgIDMyMy40NzUwOTJdICAoWEVOKSBbczogZHVtcCBzb2Z0dHNjIHN0YXRzXQog
KFhFTikgVFNDIG1hcmtlZCBhcyByZWxpYWJsZSwgd2FycCA9IDI0OTk0MDE1NzY4NSAoY291bnQ9
MykKMzogbWFza2VkPTEgcGVuZChYRU4pIE5vIGRvbWFpbnMgaGF2ZSBlbXVsYXRlZCBUU0MKaW5n
PTAgZXZlbnRfc2VsIChYRU4pIFt0OiBkaXNwbGF5IG11bHRpLWNwdSBjbG9jayBpbmZvXQowMDAw
MDAwMDAwMDAwMDAwCihYRU4pIFN5bmNlZCBzdGltZSBza2V3OiBtYXg9ODE5OW5zIGF2Zz00MTEy
bnMgc2FtcGxlcz0yIGN1cnJlbnQ9ODE5OW5zCihYRU4pIFN5bmNlZCBjeWNsZXMgc2tldzogbWF4
PTE2NCBhdmc9MTYyIHNhbXBsZXM9MiBjdXJyZW50PTE2MApbICAzMjMuNTMxNjEwXSAgKFhFTikg
W3U6IGR1bXAgbnVtYSBpbmZvXQogKFhFTikgJ3UnIHByZXNzZWQgLT4gZHVtcGluZyBudW1hIGlu
Zm8gKG5vdy0weDRCOjU1RjMxNjA1KQoKKFhFTikgaWR4MCAtPiBOT0RFMCBzdGFydC0+MCBzaXpl
LT4xMzY5NjAwIGZyZWUtPjgxNDE5NgpbICAzMjMuNTY0NjM4XSBwKFhFTikgcGh5c190b19uaWQo
MDAwMDAwMDAwMDAwMTAwMCkgLT4gMCBzaG91bGQgYmUgMAplbmRpbmc6CihYRU4pIENQVTAgLT4g
Tk9ERTAKKFhFTikgQ1BVMSAtPiBOT0RFMAooWEVOKSBDUFUyIC0+IE5PREUwCihYRU4pIENQVTMg
LT4gTk9ERTAKWyAgMzIzLjU2NDYzOV0gIChYRU4pIE1lbW9yeSBsb2NhdGlvbiBvZiBlYWNoIGRv
bWFpbjoKKFhFTikgRG9tYWluIDAgKHRvdGFsOiAxODc1MzkpOgogIDAwMDAwMDAwMDAwMDAwMDAo
WEVOKSAgICAgTm9kZSAwOiAxODc1MzkKIChYRU4pIFt2OiBkdW1wIEludGVsJ3MgVk1DU10KMDAw
MDAwMDAwMDAwMDAwMChYRU4pICoqKioqKioqKioqIFZNQ1MgQXJlYXMgKioqKioqKioqKioqKioK
KFhFTikgKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioKIChYRU4pIFt6OiBw
cmludCBpb2FwaWMgaW5mb10KMDAwMDAwMDAwMDAwMDAwMChYRU4pIG51bWJlciBvZiBNUCBJUlEg
c291cmNlczogMTUuCihYRU4pIG51bWJlciBvZiBJTy1BUElDICMyIHJlZ2lzdGVyczogMjQuCihY
RU4pIHRlc3RpbmcgdGhlIElPIEFQSUMuLi4uLi4uLi4uLi4uLi4uLi4uLi4uLgogKFhFTikgSU8g
QVBJQyAjMi4uLi4uLgooWEVOKSAuLi4uIHJlZ2lzdGVyICMwMDogMDIwMDAwMDAKKFhFTikgLi4u
Li4uLiAgICA6IHBoeXNpY2FsIEFQSUMgaWQ6IDAyCihYRU4pIC4uLi4uLi4gICAgOiBEZWxpdmVy
eSBUeXBlOiAwCihYRU4pIC4uLi4uLi4gICAgOiBMVFMgICAgICAgICAgOiAwCjAwMDAwMDAwMDAw
MDAwMDAoWEVOKSAuLi4uIHJlZ2lzdGVyICMwMTogMDAxNzAwMjAKKFhFTikgLi4uLi4uLiAgICAg
OiBtYXggcmVkaXJlY3Rpb24gZW50cmllczogMDAxNwooWEVOKSAuLi4uLi4uICAgICA6IFBSUSBp
bXBsZW1lbnRlZDogMAooWEVOKSAuLi4uLi4uICAgICA6IElPIEFQSUMgdmVyc2lvbjogMDAyMAoo
WEVOKSAuLi4uIElSUSByZWRpcmVjdGlvbiB0YWJsZToKKFhFTikgIE5SIExvZyBQaHkgTWFzayBU
cmlnIElSUiBQb2wgU3RhdCBEZXN0IERlbGkgVmVjdDogICAKIChYRU4pICAwMCAwMDAgMDAgIDEg
ICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAwICAgIDAwCjAwMDAwMDAwMDAwMDAwMDAoWEVOKSAg
MDEgMDAwIDAwICAgMCAgICAwICAgIDAgICAwICAgMCAgICAxICAgIDEgICAgMzgKMDAwMDAwMDAw
MDAwMDAwMChYRU4pICAwMiAwMDAgMDAgIDAgICAgMCAgICAwICAgMCAgIDAgICAgMSAgICAxICAg
IEYwCiAoWEVOKSAgMDMgMDAwIDAwICAwICAgIDAgICAgMCAgIDAgICAwICAgIDEgICAgMSAgICA0
MAowMDAwMDAwMDAwMDAwMDAwKFhFTikgIDA0IDAwMCAwMCAgMCAgICAwICAgIDAgICAwICAgMCAg
ICAxICAgIDEgICAgNDgKIChYRU4pICAwNSAwMDAgMDAgIDAgICAgMCAgICAwICAgMCAgIDAgICAg
MSAgICAxICAgIDUwCjAwMDAwMDAwMDAwMDAwMDAoWEVOKSAgMDYgMDAwIDAwICAwICAgIDAgICAg
MCAgIDAgICAwICAgIDEgICAgMSAgICA1OAoKKFhFTikgIDA3IDAwMCAwMCAgMCAgICAwICAgIDAg
ICAwICAgMCAgICAxICAgIDEgICAgNjAKWyAgMzIzLjcwMDEzMl0gIChYRU4pICAwOCAwMDAgMDAg
IDAgICAgMCAgICAwICAgMCAgIDAgICAgMSAgICAxICAgIDY4CiAgKFhFTikgIDA5IDAwMCAwMCAg
MCAgICAxICAgIDAgICAwICAgMCAgICAxICAgIDEgICAgNzAKMDAwMDAwMDAwMDAwMDAwMChYRU4p
ICAwYSAwMDAgMDAgIDAgICAgMCAgICAwICAgMCAgIDAgICAgMSAgICAxICAgIDc4CiAoWEVOKSAg
MGIgMDAwIDAwICAwICAgIDAgICAgMCAgIDAgICAwICAgIDEgICAgMSAgICA4OAowMDAwMDAwMDAw
MDAwMDAwKFhFTikgIDBjIDAwMCAwMCAgMCAgICAwICAgIDAgICAwICAgMCAgICAxICAgIDEgICAg
OTAKIChYRU4pICAwZCAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMSAgICAxICAgIDk4
CjAwMDAwMDAwMDAwMDAwMDAoWEVOKSAgMGUgMDAwIDAwICAwICAgIDAgICAgMCAgIDAgICAwICAg
IDEgICAgMSAgICBBMAogKFhFTikgIDBmIDAwMCAwMCAgMCAgICAwICAgIDAgICAwICAgMCAgICAx
ICAgIDEgICAgQTgKMDAwMDAwMDAwMDAwMDAwMChYRU4pICAxMCAwMDAgMDAgIDAgICAgMSAgICAw
ICAgMSAgIDAgICAgMSAgICAxICAgIEIwCiAoWEVOKSAgMTEgMDAwIDAwICAxICAgIDAgICAgMCAg
IDAgICAwICAgIDAgICAgMCAgICAwMAowMDAwMDAwMDAwMDAwMDAwKFhFTikgIDEyIDAwMCAwMCAg
MSAgICAxICAgIDAgICAxICAgMCAgICAxICAgIDEgICAgQjgKIChYRU4pICAxMyAwMDAgMDAgIDEg
ICAgMSAgICAwICAgMSAgIDAgICAgMSAgICAxICAgIEMwCjAwMDAwMDAwMDAwMDAwMDAoWEVOKSAg
MTQgMDAwIDAwICAxICAgIDEgICAgMCAgIDEgICAwICAgIDEgICAgMSAgICBEOAogKFhFTikgIDE1
IDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAgICAwICAgIDAgICAgMDAKMDAwMDAwMDAwMDAw
MDAwMChYRU4pICAxNiAwMDAgMDAgIDEgICAgMSAgICAwICAgMSAgIDAgICAgMSAgICAxICAgIEQw
CiAoWEVOKSAgMTcgMDAwIDAwICAwICAgIDEgICAgMCAgIDEgICAwICAgIDEgICAgMSAgICBDOAow
MDAwMDAwMDAwMDAwMDAwKFhFTikgVXNpbmcgdmVjdG9yLWJhc2VkIGluZGV4aW5nCihYRU4pIElS
USB0byBwaW4gbWFwcGluZ3M6CgooWEVOKSBJUlEyNDAgLT4gMDoyCihYRU4pIElSUTU2IC0+IDA6
MQooWEVOKSBJUlE2NCAtPiAwOjMKWyAgMzIzLjgwMjY5MV0gIChYRU4pIElSUTcyIC0+IDA6NAoo
WEVOKSBJUlE4MCAtPiAwOjUKKFhFTikgSVJRODggLT4gMDo2CihYRU4pIElSUTk2IC0+IDA6Nwoo
WEVOKSBJUlExMDQgLT4gMDo4CihYRU4pIElSUTExMiAtPiAwOjkKKFhFTikgSVJRMTIwIC0+IDA6
MTAKKFhFTikgSVJRMTM2IC0+IDA6MTEKKFhFTikgSVJRMTQ0IC0+IDA6MTIKKFhFTikgSVJRMTUy
IC0+IDA6MTMKKFhFTikgSVJRMTYwIC0+IDA6MTQKKFhFTikgSVJRMTY4IC0+IDA6MTUKKFhFTikg
SVJRMTc2IC0+IDA6MTYKKFhFTikgSVJRMTg0IC0+IDA6MTgKKFhFTikgSVJRMTkyIC0+IDA6MTkK
KFhFTikgSVJRMjE2IC0+IDA6MjAKICAoWEVOKSBJUlEyMDggLT4gMDoyMgooWEVOKSBJUlEyMDAg
LT4gMDoyMwooWEVOKSAuLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4gZG9uZS4K
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMyMy44NzE2OTFdICAgIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMApbICAzMjMuODg1NjUxXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzIzLjg5OTYx
Ml0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMyMy45MTM1NzNdICAgIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMApbICAzMjMuOTI3NTM0XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDI0MDAwMDIwMDIKWyAgMzIz
Ljk0MTQ5Nl0gICAgClsgIDMyMy45NDQ4MDddIGdsb2JhbCBtYXNrOgpbICAzMjMuOTQ0ODA3XSAg
ICBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZm
ZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYKWyAgMzIzLjk2MDAyMF0gICAgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZm
ZmZmZmZmClsgIDMyMy45NzM5ODJdICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZgpbICAzMjMuOTg3
OTQyXSAgICBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYKWyAgMzI0LjAwMTkwNF0gICAgZmZmZmZmZmZm
ZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmClsgIDMyNC4wMTU4NjVdICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZm
ZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZgpbICAz
MjQuMDI5ODI2XSAgICBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZm
ZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYKWyAgMzI0LjA0Mzc4Nl0gICAgZmZm
ZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZm
ZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZjMDAwMTA0MTA1ClsgIDMyNC4wNTc3NDddICAgIApbICAzMjQuMDYxMDU4XSBnbG9i
YWxseSB1bm1hc2tlZDoKWyAgMzI0LjA2MTA1OV0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsg
IDMyNC4wNzY4MDldICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMjQuMDkwNzcwXSAgICAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzI0LjEwNDczMF0gICAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwClsgIDMyNC4xMTg2OTFdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMjQuMTMyNjU1
XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzI0LjE0NjYxNV0gICAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwClsgIDMyNC4xNjA1NzVdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMjQwMDAwMjA4MgpbICAzMjQu
MTc0NTM1XSAgICAKWyAgMzI0LjE3Nzg0N10gbG9jYWwgY3B1MSBtYXNrOgpbICAzMjQuMTc3ODQ3
XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzI0LjE5MzQxOV0gICAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwClsgIDMyNC4yMDczODFdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMjQu
MjIxMzQxXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzI0LjIzNTMwMl0gICAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwClsgIDMyNC4yNDkyNjJdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApb
ICAzMjQuMjYzMjIzXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzI0LjI3NzE4NF0gICAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAxZjgwClsgIDMyNC4yOTExNDVdICAgIApbICAzMjQuMjk0NDU3XSBs
b2NhbGx5IHVubWFza2VkOgpbICAzMjQuMjk0NDU4XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAK
WyAgMzI0LjMxMDExOF0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMyNC4zMjQwODBdICAg
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMjQuMzM4MDQxXSAgICAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAKWyAgMzI0LjM1MjAwMV0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMyNC4zNjU5
NjJdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMjQuMzc5OTI0XSAgICAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAKWyAgMzI0LjM5Mzg4NF0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDgwClsgIDMy
NC40MDc4NDVdICAgIApbICAzMjQuNDExMTU2XSBwZW5kaW5nIGxpc3Q6ClsgIDMyNC40MTQyMDBd
ICAgMDogZXZlbnQgMSAtPiBpcnEgMjcyIGxvY2FsbHktbWFza2VkClsgIDMyNC40MTkyMTFdICAg
MTogZXZlbnQgNyAtPiBpcnEgMjc4ClsgIDMyNC40MjI4ODFdICAgMjogZXZlbnQgMTMgLT4gaXJx
IDI4NCBsb2NhbGx5LW1hc2tlZApbICAzMjQuNDI3OTgyXSAgIDM6IGV2ZW50IDE5IC0+IGlycSAy
OTAgbG9jYWxseS1tYXNrZWQKWyAgMzI0LjQzMzA4M10gICAwOiBldmVudCAzNCAtPiBpcnEgMzAx
IGxvY2FsbHktbWFza2VkClsgIDMyNC40MzgxODRdICAgMDogZXZlbnQgMzcgLT4gaXJxIDMwMiBs
b2NhbGx5LW1hc2tlZApbICAzMjQuNDQzMzA0XSAKWyAgMzI0LjQ0MzMwNl0gdmNwdSAwClsgIDMy
NC40NDMzMDddICAgMDogbWFza2VkPTAgcGVuZGluZz0wIGV2ZW50X3NlbCAwMDAwMDAwMDAwMDAw
MDAwClsgIDMyNC40NDg1NThdICAgMTogbWFza2VkPTAgcGVuZGluZz0wIGV2ZW50X3NlbCAwMDAw
MDAwMDAwMDAwMDAwClsgIDMyNC40NTQ2NDNdICAgMjogbWFza2VkPTEgcGVuZGluZz0xIGV2ZW50
X3NlbCAwMDAwMDAwMDAwMDAwMDAxClsgIDMyNC40NjA3MjldICAgMzogbWFza2VkPTEgcGVuZGlu
Zz0xIGV2ZW50X3NlbCAwMDAwMDAwMDAwMDAwMDAxClsgIDMyNC40NjY4MTRdICAgClsgIDMyNC40
NzI4OTldIHBlbmRpbmc6ClsgIDMyNC40NzI4OTldICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApb
ICAzMjQuNDg3NzU1XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzI0LjUwMTcxNl0gICAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMyNC41MTU2NzZdICAgIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMApbICAzMjQuNTI5NjM4XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzI0LjU0MzU5
OF0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMyNC41NTc1NjBdICAgIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMApbICAzMjQuNTcxNTIxXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDI0MDAyOGEwMDYKWyAgMzI0
LjU4NTQ4Ml0gICAgClsgIDMyNC41ODg3OTNdIGdsb2JhbCBtYXNrOgpbICAzMjQuNTg4NzkzXSAg
ICBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZm
ZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYKWyAgMzI0LjYwNDAwN10gICAgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZm
ZmZmZmZmClsgIDMyNC42MTc5NjhdICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZgpbICAzMjQuNjMx
OTI5XSAgICBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYKWyAgMzI0LjY0NTg5MF0gICAgZmZmZmZmZmZm
ZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmClsgIDMyNC42NTk4NTFdICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZm
ZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZgpbICAz
MjQuNjczODEyXSAgICBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZm
ZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYKWyAgMzI0LjY4Nzc3Ml0gICAgZmZm
ZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZm
ZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZjMDAwMTA0MTA1ClsgIDMyNC43MDE3MzRdICAgIApbICAzMjQuNzA1MDQ1XSBnbG9i
YWxseSB1bm1hc2tlZDoKWyAgMzI0LjcwNTA0Nl0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsg
IDMyNC43MjA3OTZdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMjQuNzM0NzU2XSAgICAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzI0Ljc0ODcxOF0gICAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwClsgIDMyNC43NjI2NzldICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMjQuNzc2NjQw
XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzI0Ljc5MDYwMF0gICAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwClsgIDMyNC44MDQ1NjJdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMjQwMDI4YTAwMgpbICAzMjQu
ODE4NTIyXSAgICAKWyAgMzI0LjgyMTgzNF0gbG9jYWwgY3B1MCBtYXNrOgpbICAzMjQuODIxODM0
XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzI0LjgzNzQwNl0gICAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwClsgIDMyNC44NTEzNjddICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMjQu
ODY1MzI3XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzI0Ljg3OTI4OV0gICAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwClsgIDMyNC44OTMyNDldICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApb
ICAzMjQuOTA3MjEwXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzI0LjkyMTE3Ml0gICAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCBmZmZmZmZmZmZlMDAwMDdmClsgIDMyNC45MzUxMzJdICAgIApbICAzMjQuOTM4NDQzXSBs
b2NhbGx5IHVubWFza2VkOgpbICAzMjQuOTM4NDQ0XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAK
WyAgMzI0Ljk1NDEwNV0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMyNC45NjgwNjZdICAg
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMjQuOTgyMDI2XSAgICAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAKWyAgMzI0Ljk5NTk4OF0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMyNS4wMDk5
NDldICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMjUuMDIzOTA5XSAgICAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAKWyAgMzI1LjAzNzg3MV0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAyNDAwMDAwMDAyClsgIDMy
NS4wNTE4MzFdICAgIApbICAzMjUuMDU1MTQzXSBwZW5kaW5nIGxpc3Q6ClsgIDMyNS4wNTgxOTFd
ICAgMDogZXZlbnQgMSAtPiBpcnEgMjcyIGwyLWNsZWFyClsgIDMyNS4wNjI2NjFdICAgMDogZXZl
bnQgMiAtPiBpcnEgMjczIGwyLWNsZWFyIGdsb2JhbGx5LW1hc2tlZApbICAzMjUuMDY4NTY4XSAg
IDI6IGV2ZW50IDEzIC0+IGlycSAyODQgbDItY2xlYXIgbG9jYWxseS1tYXNrZWQKWyAgMzI1LjA3
NDQ3NF0gICAyOiBldmVudCAxNSAtPiBpcnEgMjg2IGwyLWNsZWFyIGxvY2FsbHktbWFza2VkClsg
IDMyNS4wODAzODBdICAgMzogZXZlbnQgMTkgLT4gaXJxIDI5MCBsMi1jbGVhciBsb2NhbGx5LW1h
c2tlZApbICAzMjUuMDg2Mjg3XSAgIDM6IGV2ZW50IDIxIC0+IGlycSAyOTIgbDItY2xlYXIgbG9j
YWxseS1tYXNrZWQKWyAgMzI1LjA5MjE5NF0gICAwOiBldmVudCAzNCAtPiBpcnEgMzAxIGwyLWNs
ZWFyClsgIDMyNS4wOTY3NTddICAgMDogZXZlbnQgMzcgLT4gaXJxIDMwMiBsMi1jbGVhcgpbICAz
MjUuMTAxMzUwXSAKWyAgMzI1LjEwMTM1MV0gdmNwdSAyClsgIDMyNS4xMDEzNTJdICAgMDogbWFz
a2VkPTAgcGVuZGluZz0wIGV2ZW50X3NlbCAwMDAwMDAwMDAwMDAwMDAwClsgIDMyNS4xMDY2MTFd
ICAgMTogbWFza2VkPTAgcGVuZGluZz0wIGV2ZW50X3NlbCAwMDAwMDAwMDAwMDAwMDAwClsgIDMy
NS4xMTI2OTZdICAgMjogbWFza2VkPTAgcGVuZGluZz0xIGV2ZW50X3NlbCAwMDAwMDAwMDAwMDAw
MDAxClsgIDMyNS4xMTg3ODJdICAgMzogbWFza2VkPTEgcGVuZGluZz0xIGV2ZW50X3NlbCAwMDAw
MDAwMDAwMDAwMDAxClsgIDMyNS4xMjQ4NjddICAgClsgIDMyNS4xMzA5NTFdIHBlbmRpbmc6Clsg
IDMyNS4xMzA5NTJdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMjUuMTQ1ODA4XSAgICAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzI1LjE1OTc2OV0gICAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwClsgIDMyNS4xNzM3MjldICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMjUuMTg3Njkx
XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzI1LjIwMTY1Ml0gICAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwClsgIDMyNS4yMTU2MTJdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMjUu
MjI5NTc0XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAyOGUwMDQKWyAgMzI1LjI0MzUzNV0gICAgClsgIDMy
NS4yNDY4NDVdIGdsb2JhbCBtYXNrOgpbICAzMjUuMjQ2ODQ2XSAgICBmZmZmZmZmZmZmZmZmZmZm
IGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZm
ZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZm
ZmZmZmYKWyAgMzI1LjI2MjA1OV0gICAgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZm
IGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZm
ZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmClsgIDMyNS4yNzYw
MjFdICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZm
IGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZm
ZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZgpbICAzMjUuMjg5OTgyXSAgICBmZmZmZmZmZmZm
ZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZm
IGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZm
ZmZmZmZmZmZmZmYKWyAgMzI1LjMwMzk0Ml0gICAgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZm
ZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZm
IGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmClsgIDMy
NS4zMTc5MDNdICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZm
ZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZm
IGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZgpbICAzMjUuMzMxODY1XSAgICBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZm
ZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZm
IGZmZmZmZmZmZmZmZmZmZmYKWyAgMzI1LjM0NTgyNl0gICAgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZm
ZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZjMDAwMTA0MTA1
ClsgIDMyNS4zNTk3ODZdICAgIApbICAzMjUuMzYzMDk3XSBnbG9iYWxseSB1bm1hc2tlZDoKWyAg
MzI1LjM2MzA5N10gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMyNS4zNzg4NDhdICAgIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMjUuMzkyODA5XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAKWyAgMzI1LjQwNjc3MV0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMyNS40MjA3MzRd
ICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMjUuNDM0NjkzXSAgICAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAKWyAgMzI1LjQ0ODY1NF0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMyNS40
NjI2MTVdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDI4YTAwMApbICAzMjUuNDc2NTc2XSAgICAKWyAgMzI1
LjQ3OTg4N10gbG9jYWwgY3B1MiBtYXNrOgpbICAzMjUuNDc5ODg4XSAgICAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAKWyAgMzI1LjQ5NTQ1OV0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMyNS41
MDk0MjBdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMjUuNTIzMzgwXSAgICAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAKWyAgMzI1LjUzNzM0MV0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsg
IDMyNS41NTEzMDJdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMjUuNTY1MjYzXSAgICAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzI1LjU3OTIyNF0gICAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDdl
MDAwClsgIDMyNS41OTMxODhdICAgIApbICAzMjUuNTk2NDk3XSBsb2NhbGx5IHVubWFza2VkOgpb
ICAzMjUuNTk2NDk3XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzI1LjYxMjE1N10gICAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMyNS42MjYxMTldICAgIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMApbICAzMjUuNjQwMDgwXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzI1LjY1NDA0
MV0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMyNS42NjgwMDFdICAgIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMApbICAzMjUuNjgxOTYzXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzI1
LjY5NTkyM10gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDBhMDAwClsgIDMyNS43MDk4ODVdICAgIApbICAz
MjUuNzEzMTk2XSBwZW5kaW5nIGxpc3Q6ClsgIDMyNS43MTYyMzldICAgMDogZXZlbnQgMiAtPiBp
cnEgMjczIGdsb2JhbGx5LW1hc2tlZCBsb2NhbGx5LW1hc2tlZApbICAzMjUuNzIyNjgzXSAgIDI6
IGV2ZW50IDEzIC0+IGlycSAyODQKWyAgMzI1LjcyNjQ0MV0gICAyOiBldmVudCAxNCAtPiBpcnEg
Mjg1IGdsb2JhbGx5LW1hc2tlZApbICAzMjUuNzMxNjMyXSAgIDI6IGV2ZW50IDE1IC0+IGlycSAy
ODYKWyAgMzI1LjczNTM5MV0gICAzOiBldmVudCAxOSAtPiBpcnEgMjkwIGxvY2FsbHktbWFza2Vk
ClsgIDMyNS43NDA0OTFdICAgMzogZXZlbnQgMjEgLT4gaXJxIDI5MiBsb2NhbGx5LW1hc2tlZApb
ICAzMjUuNzQ1NjE1XSAKWyAgMzI1Ljc0NTYxNl0gdmNwdSAzClsgIDMyNS43NDU2MTZdICAgMDog
bWFza2VkPTEgcGVuZGluZz0wIGV2ZW50X3NlbCAwMDAwMDAwMDAwMDAwMDAwClsgIDMyNS43NTA4
NzNdICAgMTogbWFza2VkPTAgcGVuZGluZz0wIGV2ZW50X3NlbCAwMDAwMDAwMDAwMDAwMDAwClsg
IDMyNS43NTY5NTldICAgMjogbWFza2VkPTAgcGVuZGluZz0wIGV2ZW50X3NlbCAwMDAwMDAwMDAw
MDAwMDAwClsgIDMyNS43NjMwNDVdICAgMzogbWFza2VkPTAgcGVuZGluZz0xIGV2ZW50X3NlbCAw
MDAwMDAwMDAwMDAwMDAxClsgIDMyNS43NjkxMjldICAgClsgIDMyNS43NzUyMTRdIHBlbmRpbmc6
ClsgIDMyNS43NzUyMTVdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMjUuNzkwMDcxXSAg
ICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzI1LjgwNDAzMl0gICAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwClsgIDMyNS44MTc5OTNdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMjUuODMx
OTU0XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzI1Ljg0NTkxNF0gICAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwClsgIDMyNS44NTk4NzVdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAz
MjUuODczODM2XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAzODQwMDQKWyAgMzI1Ljg4Nzc5N10gICAgClsg
IDMyNS44OTExMDldIGdsb2JhbCBtYXNrOgpbICAzMjUuODkxMTA5XSAgICBmZmZmZmZmZmZmZmZm
ZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZm
ZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZm
ZmZmZmZmZmYKWyAgMzI1LjkwNjMyM10gICAgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZm
ZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZm
ZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmClsgIDMyNS45
MjAyODRdICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZm
ZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZm
ZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZgpbICAzMjUuOTM0MjQ1XSAgICBmZmZmZmZm
ZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZm
ZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZm
ZmZmZmZmZmZmZmZmZmYKWyAgMzI1Ljk0ODIwNl0gICAgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZm
ZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZm
ZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmClsg
IDMyNS45NjIxNjddICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZm
ZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZm
ZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZgpbICAzMjUuOTc2MTI4XSAgICBm
ZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZm
ZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZm
ZmZmIGZmZmZmZmZmZmZmZmZmZmYKWyAgMzI1Ljk5MDA4OF0gICAgZmZmZmZmZmZmZmZmZmZmZiBm
ZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZm
ZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZjMDAwMTA0
MTA1ClsgIDMyNi4wMDQwNDldICAgIApbICAzMjYuMDA3MzYwXSBnbG9iYWxseSB1bm1hc2tlZDoK
WyAgMzI2LjAwNzM2MV0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMyNi4wMjMxMTJdICAg
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMjYuMDM3MDcyXSAgICAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAKWyAgMzI2LjA1MTAzNF0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMyNi4wNjQ5
OTRdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMjYuMDc4OTU1XSAgICAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAKWyAgMzI2LjA5MjkxN10gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMy
Ni4xMDY4NzddICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDI4MDAwMApbICAzMjYuMTIwODM4XSAgICAKWyAg
MzI2LjEyNDE0OV0gbG9jYWwgY3B1MyBtYXNrOgpbICAzMjYuMTI0MTQ5XSAgICAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAKWyAgMzI2LjEzOTcyMV0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMy
Ni4xNTM2ODNdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMjYuMTY3NjQzXSAgICAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAKWyAgMzI2LjE4MTYwNF0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
ClsgIDMyNi4xOTU1NjVdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAzMjYuMjA5NTI2XSAg
ICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzI2LjIyMzQ4OF0gICAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAx
ZjgwMDAwClsgIDMyNi4yMzc0NDhdICAgIApbICAzMjYuMjQwNzU4XSBsb2NhbGx5IHVubWFza2Vk
OgpbICAzMjYuMjQwNzU5XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzI2LjI1NjQyMV0g
ICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMyNi4yNzAzODJdICAgIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMApbICAzMjYuMjg0MzQzXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMzI2LjI5
ODMwNF0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDMyNi4zMTIyNjRdICAgIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMApbICAzMjYuMzI2MjI1XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAg
MzI2LjM0MDE4Nl0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMjgwMDAwClsgIDMyNi4zNTQxNDhdICAgIApb
ICAzMjYuMzU3NDU5XSBwZW5kaW5nIGxpc3Q6ClsgIDMyNi4zNjA1MDNdICAgMDogZXZlbnQgMiAt
PiBpcnEgMjczIGdsb2JhbGx5LW1hc2tlZCBsb2NhbGx5LW1hc2tlZApbICAzMjYuMzY2OTQ2XSAg
IDI6IGV2ZW50IDE0IC0+IGlycSAyODUgZ2xvYmFsbHktbWFza2VkIGxvY2FsbHktbWFza2VkClsg
IDMyNi4zNzM0NzhdICAgMzogZXZlbnQgMTkgLT4gaXJxIDI5MApbICAzMjYuMzc3MjM3XSAgIDM6
IGV2ZW50IDIwIC0+IGlycSAyOTEgZ2xvYmFsbHktbWFza2VkClsgIDMyNi4zODI0MjddICAgMzog
ZXZlbnQgMjEgLT4gaXJxIDI5Mgo=
--14dae9399de136edb804c6d6c8a6
Content-Type: text/plain; charset=US-ASCII; name="xen-dump-pre-s3.txt"
Content-Disposition: attachment; filename="xen-dump-pre-s3.txt"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_h5nzo9g42

KFhFTikgJyonIHByZXNzZWQgLT4gZmlyaW5nIGFsbCBkaWFnbm9zdGljIGtleWhhbmRsZXJzCihY
RU4pIFtkOiBkdW1wIHJlZ2lzdGVyc10KKFhFTikgJ2QnIHByZXNzZWQgLT4gZHVtcGluZyByZWdp
c3RlcnMKKFhFTikgCihYRU4pICoqKiBEdW1waW5nIENQVTAgaG9zdCBzdGF0ZTogKioqCihYRU4p
IC0tLS1bIFhlbi00LjIuMC1yYzItcHJlICB4ODZfNjQgIGRlYnVnPXkgIFRhaW50ZWQ6ICAgIEMg
XS0tLS0KKFhFTikgQ1BVOiAgICAwCihYRU4pIFJJUDogICAgZTAwODpbPGZmZmY4MmM0ODAxM2Q3
N2U+XSBuczE2NTUwX3BvbGwrMHgyNy8weDMzCihYRU4pIFJGTEFHUzogMDAwMDAwMDAwMDAxMDI4
NiAgIENPTlRFWFQ6IGh5cGVydmlzb3IKKFhFTikgcmF4OiBmZmZmODJjNDgwMzAyNWEwICAgcmJ4
OiBmZmZmODJjNDgwMzAyNDgwICAgcmN4OiAwMDAwMDAwMDAwMDAwMDA0CihYRU4pIHJkeDogMDAw
MDAwMDAwMDAwMDAwMCAgIHJzaTogZmZmZjgyYzQ4MDJlMjVjOCAgIHJkaTogZmZmZjgyYzQ4MDI3
MTgwMAooWEVOKSByYnA6IGZmZmY4MmM0ODAyYjdlMzAgICByc3A6IGZmZmY4MmM0ODAyYjdlMzAg
ICByODogIDAwMDAwMDAwMDAwMDAwMDIKKFhFTikgcjk6ICBmZmZmODJjNDgwMmZlMjQwICAgcjEw
OiAwMDAwMDAxYzA4MzMzMTY5ICAgcjExOiAwMDAwMDAwMDAwMDAwMjQ2CihYRU4pIHIxMjogZmZm
ZjgyYzQ4MDI3MTgwMCAgIHIxMzogZmZmZjgyYzQ4MDEzZDc1NyAgIHIxNDogMDAwMDAwMWE1MmEx
Zjc2MgooWEVOKSByMTU6IGZmZmY4MmM0ODAzMDIzMDggICBjcjA6IDAwMDAwMDAwODAwNTAwM2Ig
ICBjcjQ6IDAwMDAwMDAwMDAxMDI2ZjAKKFhFTikgY3IzOiAwMDAwMDAwMTRjOGRlMDAwICAgY3Iy
OiBmZmZmZThmZmZmYzAwMjI4CihYRU4pIGRzOiAwMDAwICAgZXM6IDAwMDAgICBmczogMDAwMCAg
IGdzOiAwMDAwICAgc3M6IGUwMTAgICBjczogZTAwOAooWEVOKSBYZW4gc3RhY2sgdHJhY2UgZnJv
bSByc3A9ZmZmZjgyYzQ4MDJiN2UzMDoKKFhFTikgICAgZmZmZjgyYzQ4MDJiN2U2MCBmZmZmODJj
NDgwMTI4MTdmIDAwMDAwMDAwMDAwMDAwMDIgZmZmZjgyYzQ4MDJlMjVjOAooWEVOKSAgICBmZmZm
ODJjNDgwMzAyNDgwIGZmZmY4MzAxNDg5YjNkNDAgZmZmZjgyYzQ4MDJiN2ViMCBmZmZmODJjNDgw
MTI4MjgxCihYRU4pICAgIGZmZmY4MmM0ODAyYjdmMTggMDAwMDAwMDAwMDAwMDI0NiAwMDAwMDAw
MGRlYWRiZWVmIGZmZmY4MmM0ODAyZDg4ODAKKFhFTikgICAgZmZmZjgyYzQ4MDJkODg4MCBmZmZm
ODJjNDgwMmI3ZjE4IGZmZmZmZmZmZmZmZmZmZmYgZmZmZjgyYzQ4MDMwMjMwOAooWEVOKSAgICBm
ZmZmODJjNDgwMmI3ZWUwIGZmZmY4MmM0ODAxMjU0MDUgZmZmZjgyYzQ4MDJiN2YxOCBmZmZmODJj
NDgwMmI3ZjE4CihYRU4pICAgIDAwMDAwMDAwZmZmZmZmZmYgMDAwMDAwMDAwMDAwMDAwMiBmZmZm
ODJjNDgwMmI3ZWYwIGZmZmY4MmM0ODAxMjU0ODQKKFhFTikgICAgZmZmZjgyYzQ4MDJiN2YxMCBm
ZmZmODJjNDgwMTU4YzA1IGZmZmY4MzAwYWE1ODQwMDAgZmZmZjgzMDBhYTBmYzAwMAooWEVOKSAg
ICBmZmZmODJjNDgwMmI3ZGE4IDAwMDAwMDAwMDAwMDAwMDAgZmZmZmZmZmZmZmZmZmZmZiAwMDAw
MDAwMDAwMDAwMDAwCihYRU4pICAgIGZmZmZmZmZmODFhYWZkYTAgZmZmZmZmZmY4MWEwMWVlOCBm
ZmZmZmZmZjgxYTAxZmQ4IDAwMDAwMDAwMDAwMDAyNDYKKFhFTikgICAgMDAwMDAwMDAwMDAwMDAw
MSAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMAooWEVO
KSAgICBmZmZmZmZmZjgxMDAxM2FhIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDBkZWFkYmVlZiAw
MDAwMDAwMGRlYWRiZWVmCihYRU4pICAgIDAwMDAwMTAwMDAwMDAwMDAgZmZmZmZmZmY4MTAwMTNh
YSAwMDAwMDAwMDAwMDBlMDMzIDAwMDAwMDAwMDAwMDAyNDYKKFhFTikgICAgZmZmZmZmZmY4MWEw
MWVkMCAwMDAwMDAwMDAwMDBlMDJiIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMAoo
WEVOKSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCBmZmZmODMwMGFhNTg0MDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMAooWEVOKSBYZW4gY2FsbCB0cmFjZToKKFhFTikgICAgWzxmZmZmODJjNDgwMTNkNzdlPl0g
bnMxNjU1MF9wb2xsKzB4MjcvMHgzMwooWEVOKSAgICBbPGZmZmY4MmM0ODAxMjgxN2Y+XSBleGVj
dXRlX3RpbWVyKzB4NGUvMHg2YwooWEVOKSAgICBbPGZmZmY4MmM0ODAxMjgyODE+XSB0aW1lcl9z
b2Z0aXJxX2FjdGlvbisweGU0LzB4MjFhCihYRU4pICAgIFs8ZmZmZjgyYzQ4MDEyNTQwNT5dIF9f
ZG9fc29mdGlycSsweDk1LzB4YTAKKFhFTikgICAgWzxmZmZmODJjNDgwMTI1NDg0Pl0gZG9fc29m
dGlycSsweDI2LzB4MjgKKFhFTikgICAgWzxmZmZmODJjNDgwMTU4YzA1Pl0gaWRsZV9sb29wKzB4
NmYvMHg3MQooWEVOKSAgICAKKFhFTikgKioqIER1bXBpbmcgQ1BVMSBob3N0IHN0YXRlOiAqKioK
KFhFTikgLS0tLVsgWGVuLTQuMi4wLXJjMi1wcmUgIHg4Nl82NCAgZGVidWc9eSAgVGFpbnRlZDog
ICAgQyBdLS0tLQooWEVOKSBDUFU6ICAgIDEKKFhFTikgUklQOiAgICBlMDA4Ols8ZmZmZjgyYzQ4
MDE1ODNjND5dIGRlZmF1bHRfaWRsZSsweDk5LzB4OWUKKFhFTikgUkZMQUdTOiAwMDAwMDAwMDAw
MDAwMjQ2ICAgQ09OVEVYVDogaHlwZXJ2aXNvcgooWEVOKSByYXg6IGZmZmY4MmM0ODAzMDIzNzAg
ICByYng6IGZmZmY4MzAxNDg5OWZmMTggICByY3g6IDAwMDAwMDAwMDAwMDAwMDEKKFhFTikgcmR4
OiAwMDAwMDAzY2M4NmE4ZDgwICAgcnNpOiBmZmZmODMwMGE4M2ZkMGY4ICAgcmRpOiBmZmZmODMw
MGFhMGZlMDAwCihYRU4pIHJicDogZmZmZjgzMDE0ODk5ZmVmMCAgIHJzcDogZmZmZjgzMDE0ODk5
ZmVmMCAgIHI4OiAgMDAwMDAwMDAwMDAwMDAwMAooWEVOKSByOTogIDAwMDAwMDAwMDAwMDAwMDAg
ICByMTA6IDAwMDAwMDAwZGVhZGJlZWYgICByMTE6IDAwMDAwMDAwMDAwMDAyNDYKKFhFTikgcjEy
OiBmZmZmODMwMTQ4OTlmZjE4ICAgcjEzOiAwMDAwMDAwMGZmZmZmZmZmICAgcjE0OiAwMDAwMDAw
MDAwMDAwMDAyCihYRU4pIHIxNTogZmZmZjgzMDE0ODlhYjA4OCAgIGNyMDogMDAwMDAwMDA4MDA1
MDAzYiAgIGNyNDogMDAwMDAwMDAwMDEwMjZmMAooWEVOKSBjcjM6IDAwMDAwMDAxM2M2NGYwMDAg
ICBjcjI6IGZmZmY4ODAwMjZkMGU4MzAKKFhFTikgZHM6IDAwMmIgICBlczogMDAyYiAgIGZzOiAw
MDAwICAgZ3M6IDAwMDAgICBzczogZTAxMCAgIGNzOiBlMDA4CihYRU4pIFhlbiBzdGFjayB0cmFj
ZSBmcm9tIHJzcD1mZmZmODMwMTQ4OTlmZWYwOgooWEVOKSAgICBmZmZmODMwMTQ4OTlmZjEwIGZm
ZmY4MmM0ODAxNThiZjggZmZmZjgzMDBhYTBmZTAwMCBmZmZmODMwMGE4M2ZkMDAwCihYRU4pICAg
IGZmZmY4MzAxNDg5OWZkYTggMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDEKKFhFTikgICAgZmZmZmZmZmY4MWFhZmRhMCBmZmZmODgwMDI3ODZkZWUwIGZm
ZmY4ODAwMjc4NmRmZDggMDAwMDAwMDAwMDAwMDI0NgooWEVOKSAgICAwMDAwMDAwMDAwMDAwMDAx
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwCihYRU4p
ICAgIGZmZmZmZmZmODEwMDEzYWEgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMGRlYWRiZWVmIDAw
MDAwMDAwZGVhZGJlZWYKKFhFTikgICAgMDAwMDAxMDAwMDAwMDAwMCBmZmZmZmZmZjgxMDAxM2Fh
IDAwMDAwMDAwMDAwMGUwMzMgMDAwMDAwMDAwMDAwMDI0NgooWEVOKSAgICBmZmZmODgwMDI3ODZk
ZWM4IDAwMDAwMDAwMDAwMGUwMmIgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwCihY
RU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAx
IGZmZmY4MzAwYWEwZmUwMDAKKFhFTikgICAgMDAwMDAwM2NjODZhOGQ4MCAwMDAwMDAwMDAwMDAw
MDAwCihYRU4pIFhlbiBjYWxsIHRyYWNlOgooWEVOKSAgICBbPGZmZmY4MmM0ODAxNTgzYzQ+XSBk
ZWZhdWx0X2lkbGUrMHg5OS8weDllCihYRU4pICAgIFs8ZmZmZjgyYzQ4MDE1OGJmOD5dIGlkbGVf
bG9vcCsweDYyLzB4NzEKKFhFTikgICAgCihYRU4pICoqKiBEdW1waW5nIENQVTIgaG9zdCBzdGF0
ZTogKioqCihYRU4pIC0tLS1bIFhlbi00LjIuMC1yYzItcHJlICB4ODZfNjQgIGRlYnVnPXkgIFRh
aW50ZWQ6ICAgIEMgXS0tLS0KKFhFTikgQ1BVOiAgICAyCihYRU4pIFJJUDogICAgZTAwODpbPGZm
ZmY4MmM0ODAxNTgzYzQ+XSBkZWZhdWx0X2lkbGUrMHg5OS8weDllCihYRU4pIFJGTEFHUzogMDAw
MDAwMDAwMDAwMDI0NiAgIENPTlRFWFQ6IGh5cGVydmlzb3IKKFhFTikgcmF4OiBmZmZmODJjNDgw
MzAyMzcwICAgcmJ4OiBmZmZmODMwMTQ4OThmZjE4ICAgcmN4OiAwMDAwMDAwMDAwMDAwMDAyCihY
RU4pIHJkeDogMDAwMDAwM2NjODY5MmQ4MCAgIHJzaTogMDAwMDAwNDA5YzA3OGE5NCAgIHJkaTog
MDAwMDAwMDAwMDAwMDAwMgooWEVOKSByYnA6IGZmZmY4MzAxNDg5OGZlZjAgICByc3A6IGZmZmY4
MzAxNDg5OGZlZjAgICByODogIDAwMDAwMDAxMDU1YWRkNmMKKFhFTikgcjk6ICBmZmZmODMwMGE4
M2ZjMDYwICAgcjEwOiAwMDAwMDAwMGRlYWRiZWVmICAgcjExOiAwMDAwMDAwMDAwMDAwMjQ2CihY
RU4pIHIxMjogZmZmZjgzMDE0ODk4ZmYxOCAgIHIxMzogMDAwMDAwMDBmZmZmZmZmZiAgIHIxNDog
MDAwMDAwMDAwMDAwMDAwMgooWEVOKSByMTU6IGZmZmY4MzAxNDg5OTUwODggICBjcjA6IDAwMDAw
MDAwODAwNTAwM2IgICBjcjQ6IDAwMDAwMDAwMDAxMDI2ZjAKKFhFTikgY3IzOiAwMDAwMDAwMTNj
YTVmMDAwICAgY3IyOiBmZmZmODgwMDI1ODE3ZGYwCihYRU4pIGRzOiAwMDJiICAgZXM6IDAwMmIg
ICBmczogMDAwMCAgIGdzOiAwMDAwICAgc3M6IGUwMTAgICBjczogZTAwOAooWEVOKSBYZW4gc3Rh
Y2sgdHJhY2UgZnJvbSByc3A9ZmZmZjgzMDE0ODk4ZmVmMDoKKFhFTikgICAgZmZmZjgzMDE0ODk4
ZmYxMCBmZmZmODJjNDgwMTU4YmY4IGZmZmY4MzAwYTg1YzcwMDAgZmZmZjgzMDBhODNmYzAwMAoo
WEVOKSAgICBmZmZmODMwMTQ4OThmZGE4IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAyCihYRU4pICAgIGZmZmZmZmZmODFhYWZkYTAgZmZmZjg4MDAyNzg2
ZmVlMCBmZmZmODgwMDI3ODZmZmQ4IDAwMDAwMDAwMDAwMDAyNDYKKFhFTikgICAgMDAwMDAwMDAw
MDAwMDAwMSAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MAooWEVOKSAgICBmZmZmZmZmZjgxMDAxM2FhIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDBkZWFk
YmVlZiAwMDAwMDAwMGRlYWRiZWVmCihYRU4pICAgIDAwMDAwMTAwMDAwMDAwMDAgZmZmZmZmZmY4
MTAwMTNhYSAwMDAwMDAwMDAwMDBlMDMzIDAwMDAwMDAwMDAwMDAyNDYKKFhFTikgICAgZmZmZjg4
MDAyNzg2ZmVjOCAwMDAwMDAwMDAwMDBlMDJiIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMAooWEVOKSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMiBmZmZmODMwMGE4NWM3MDAwCihYRU4pICAgIDAwMDAwMDNjYzg2OTJkODAgMDAwMDAw
MDAwMDAwMDAwMAooWEVOKSBYZW4gY2FsbCB0cmFjZToKKFhFTikgICAgWzxmZmZmODJjNDgwMTU4
M2M0Pl0gZGVmYXVsdF9pZGxlKzB4OTkvMHg5ZQooWEVOKSAgICBbPGZmZmY4MmM0ODAxNThiZjg+
XSBpZGxlX2xvb3ArMHg2Mi8weDcxCihYRU4pICAgIAooWEVOKSAqKiogRHVtcGluZyBDUFUzIGhv
c3Qgc3RhdGU6ICoqKgooWEVOKSAtLS0tWyBYZW4tNC4yLjAtcmMyLXByZSAgeDg2XzY0ICBkZWJ1
Zz15ICBUYWludGVkOiAgICBDIF0tLS0tCihYRU4pIENQVTogICAgMwooWEVOKSBSSVA6ICAgIGUw
MDg6WzxmZmZmODJjNDgwMTU4M2M0Pl0gZGVmYXVsdF9pZGxlKzB4OTkvMHg5ZQooWEVOKSBSRkxB
R1M6IDAwMDAwMDAwMDAwMDAyNDYgICBDT05URVhUOiBoeXBlcnZpc29yCihYRU4pIHJheDogZmZm
ZjgyYzQ4MDMwMjM3MCAgIHJieDogZmZmZjgzMDE0ODkzZmYxOCAgIHJjeDogMDAwMDAwMDAwMDAw
MDAwMwooWEVOKSByZHg6IDAwMDAwMDNjYzg2ODRkODAgICByc2k6IDAwMDAwMDQwOWMwNzhhYjYg
ICByZGk6IDAwMDAwMDAwMDAwMDAwMDMKKFhFTikgcmJwOiBmZmZmODMwMTQ4OTNmZWYwICAgcnNw
OiBmZmZmODMwMTQ4OTNmZWYwICAgcjg6ICAwMDAwMDAwMTI4ODBkZjQ4CihYRU4pIHI5OiAgZmZm
ZjgzMDBhYTU4MzA2MCAgIHIxMDogMDAwMDAwMDBkZWFkYmVlZiAgIHIxMTogMDAwMDAwMDAwMDAw
MDI0NgooWEVOKSByMTI6IGZmZmY4MzAxNDg5M2ZmMTggICByMTM6IDAwMDAwMDAwZmZmZmZmZmYg
ICByMTQ6IDAwMDAwMDAwMDAwMDAwMDIKKFhFTikgcjE1OiBmZmZmODMwMTQ4OTg3MDg4ICAgY3Iw
OiAwMDAwMDAwMDgwMDUwMDNiICAgY3I0OiAwMDAwMDAwMDAwMTAyNmYwCihYRU4pIGNyMzogMDAw
MDAwMDEzZDk1ZjAwMCAgIGNyMjogZmZmZjg4MDAyNWU5MTI2MAooWEVOKSBkczogMDAyYiAgIGVz
OiAwMDJiICAgZnM6IDAwMDAgICBnczogMDAwMCAgIHNzOiBlMDEwICAgY3M6IGUwMDgKKFhFTikg
WGVuIHN0YWNrIHRyYWNlIGZyb20gcnNwPWZmZmY4MzAxNDg5M2ZlZjA6CihYRU4pICAgIGZmZmY4
MzAxNDg5M2ZmMTAgZmZmZjgyYzQ4MDE1OGJmOCBmZmZmODMwMGE4M2ZlMDAwIGZmZmY4MzAwYWE1
ODMwMDAKKFhFTikgICAgZmZmZjgzMDE0ODkzZmRhOCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMwooWEVOKSAgICBmZmZmZmZmZjgxYWFmZGEwIGZmZmY4
ODAwMjc4ODFlZTAgZmZmZjg4MDAyNzg4MWZkOCAwMDAwMDAwMDAwMDAwMjQ2CihYRU4pICAgIDAw
MDAwMDAwMDAwMDAwMDEgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAKKFhFTikgICAgZmZmZmZmZmY4MTAwMTNhYSAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwZGVhZGJlZWYgMDAwMDAwMDBkZWFkYmVlZgooWEVOKSAgICAwMDAwMDEwMDAwMDAwMDAwIGZm
ZmZmZmZmODEwMDEzYWEgMDAwMDAwMDAwMDAwZTAzMyAwMDAwMDAwMDAwMDAwMjQ2CihYRU4pICAg
IGZmZmY4ODAwMjc4ODFlYzggMDAwMDAwMDAwMDAwZTAyYiAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAKKFhFTikgICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDMgZmZmZjgzMDBhODNmZTAwMAooWEVOKSAgICAwMDAwMDAzY2M4Njg0ZDgw
IDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgWGVuIGNhbGwgdHJhY2U6CihYRU4pICAgIFs8ZmZmZjgy
YzQ4MDE1ODNjND5dIGRlZmF1bHRfaWRsZSsweDk5LzB4OWUKKFhFTikgICAgWzxmZmZmODJjNDgw
MTU4YmY4Pl0gaWRsZV9sb29wKzB4NjIvMHg3MQooWEVOKSAgICAKKFhFTikgWzA6IGR1bXAgRG9t
MCByZWdpc3RlcnNdCihYRU4pICcwJyBwcmVzc2VkIC0+IGR1bXBpbmcgRG9tMCdzIHJlZ2lzdGVy
cwooWEVOKSAqKiogRHVtcGluZyBEb20wIHZjcHUjMCBzdGF0ZTogKioqCihYRU4pIFJJUDogICAg
ZTAzMzpbPGZmZmZmZmZmODEwMDEzYWE+XQooWEVOKSBSRkxBR1M6IDAwMDAwMDAwMDAwMDAyNDYg
ICBFTTogMCAgIENPTlRFWFQ6IHB2IGd1ZXN0CihYRU4pIHJheDogMDAwMDAwMDAwMDAwMDAwMCAg
IHJieDogZmZmZmZmZmY4MWEwMWZkOCAgIHJjeDogZmZmZmZmZmY4MTAwMTNhYQooWEVOKSByZHg6
IDAwMDAwMDAwMDAwMDAwMDAgICByc2k6IDAwMDAwMDAwZGVhZGJlZWYgICByZGk6IDAwMDAwMDAw
ZGVhZGJlZWYKKFhFTikgcmJwOiBmZmZmZmZmZjgxYTAxZWU4ICAgcnNwOiBmZmZmZmZmZjgxYTAx
ZWQwICAgcjg6ICAwMDAwMDAwMDAwMDAwMDAwCihYRU4pIHI5OiAgMDAwMDAwMDAwMDAwMDAwMCAg
IHIxMDogMDAwMDAwMDAwMDAwMDAwMSAgIHIxMTogMDAwMDAwMDAwMDAwMDI0NgooWEVOKSByMTI6
IGZmZmZmZmZmODFhYWZkYTAgICByMTM6IDAwMDAwMDAwMDAwMDAwMDAgICByMTQ6IGZmZmZmZmZm
ZmZmZmZmZmYKKFhFTikgcjE1OiAwMDAwMDAwMDAwMDAwMDAwICAgY3IwOiAwMDAwMDAwMDAwMDAw
MDA4ICAgY3I0OiAwMDAwMDAwMDAwMDAyNjYwCihYRU4pIGNyMzogMDAwMDAwMDE0YzhkZTAwMCAg
IGNyMjogZmZmZmU4ZmZmZmMwMDIyOAooWEVOKSBkczogMDAwMCAgIGVzOiAwMDAwICAgZnM6IDAw
MDAgICBnczogMDAwMCAgIHNzOiBlMDJiICAgY3M6IGUwMzMKKFhFTikgR3Vlc3Qgc3RhY2sgdHJh
Y2UgZnJvbSByc3A9ZmZmZmZmZmY4MWEwMWVkMDoKKFhFTikgICAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMGZmZmZmZmZmIGZmZmZmZmZmODEwMGE1YzAgZmZmZmZmZmY4MWEwMWYxOAooWEVOKSAg
ICBmZmZmZmZmZjgxMDFjNjYzIGZmZmZmZmZmODFhMDFmZDggZmZmZmZmZmY4MWFhZmRhMCBmZmZm
ODgwMDJkZWUxYTAwCihYRU4pICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmY4MWEwMWY0OCBm
ZmZmZmZmZjgxMDEzMjM2IGZmZmZmZmZmZmZmZmZmZmYKKFhFTikgICAgYTNmYzk4NzMzOWVkMDEz
YiAwMDAwMDAwMDAwMDAwMDAwIGZmZmZmZmZmODFiMTUxNjAgZmZmZmZmZmY4MWEwMWY1OAooWEVO
KSAgICBmZmZmZmZmZjgxNTU0ZjVlIGZmZmZmZmZmODFhMDFmOTggZmZmZmZmZmY4MWFjY2JmNSBm
ZmZmZmZmZjgxYjE1MTYwCihYRU4pICAgIGU0YjE1OWJhM2VlYTA5NGMgMDAwMDAwMDAwMGNkZjAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAwMDAwMDAwMDAw
MDAwMCBmZmZmZmZmZjgxYTAxZmI4IGZmZmZmZmZmODFhY2MzNGIgZmZmZmZmZmY3ZmZmZmZmZgoo
WEVOKSAgICBmZmZmZmZmZjg0YjA0MDAwIGZmZmZmZmZmODFhMDFmZjggZmZmZmZmZmY4MWFjZmVj
YyAwMDAwMDAwMDAwMDAwMDAwCihYRU4pICAgIDAwMDAwMDAxMDAwMDAwMDAgMDAxMDA4MDAwMDAz
MDZhNCAxZmM5OGI3NWUzYjgyMjgzIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwCihYRU4pICoqKiBEdW1waW5nIERvbTAgdmNwdSMxIHN0
YXRlOiAqKioKKFhFTikgUklQOiAgICBlMDMzOls8ZmZmZmZmZmY4MTAwMTNhYT5dCihYRU4pIFJG
TEFHUzogMDAwMDAwMDAwMDAwMDI0NiAgIEVNOiAwICAgQ09OVEVYVDogcHYgZ3Vlc3QKKFhFTikg
cmF4OiAwMDAwMDAwMDAwMDAwMDAwICAgcmJ4OiBmZmZmODgwMDI3ODZkZmQ4ICAgcmN4OiBmZmZm
ZmZmZjgxMDAxM2FhCihYRU4pIHJkeDogMDAwMDAwMDAwMDAwMDAwMCAgIHJzaTogMDAwMDAwMDBk
ZWFkYmVlZiAgIHJkaTogMDAwMDAwMDBkZWFkYmVlZgooWEVOKSByYnA6IGZmZmY4ODAwMjc4NmRl
ZTAgICByc3A6IGZmZmY4ODAwMjc4NmRlYzggICByODogIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikg
cjk6ICAwMDAwMDAwMDAwMDAwMDAwICAgcjEwOiAwMDAwMDAwMDAwMDAwMDAxICAgcjExOiAwMDAw
MDAwMDAwMDAwMjQ2CihYRU4pIHIxMjogZmZmZmZmZmY4MWFhZmRhMCAgIHIxMzogMDAwMDAwMDAw
MDAwMDAwMSAgIHIxNDogMDAwMDAwMDAwMDAwMDAwMAooWEVOKSByMTU6IDAwMDAwMDAwMDAwMDAw
MDAgICBjcjA6IDAwMDAwMDAwODAwNTAwM2IgICBjcjQ6IDAwMDAwMDAwMDAwMDI2NjAKKFhFTikg
Y3IzOiAwMDAwMDAwMTNjNjRmMDAwICAgY3IyOiAwMDAwN2YzNTg4NTA2MjEwCihYRU4pIGRzOiAw
MDJiICAgZXM6IDAwMmIgICBmczogMDAwMCAgIGdzOiAwMDAwICAgc3M6IGUwMmIgICBjczogZTAz
MwooWEVOKSBHdWVzdCBzdGFjayB0cmFjZSBmcm9tIHJzcD1mZmZmODgwMDI3ODZkZWM4OgooWEVO
KSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwZmZmZmZmZmYgZmZmZmZmZmY4MTAwYTVjMCBm
ZmZmODgwMDI3ODZkZjEwCihYRU4pICAgIGZmZmZmZmZmODEwMWM2NjMgZmZmZjg4MDAyNzg2ZGZk
OCBmZmZmZmZmZjgxYWFmZGEwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAwMDAwMDAwMDAw
MDAwMCBmZmZmODgwMDI3ODZkZjQwIGZmZmZmZmZmODEwMTMyMzYgZmZmZmZmZmY4MTAwYWRlOQoo
WEVOKSAgICBhZGNmNDU4MDdjMmQwNGZiIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCBmZmZmODgwMDI3ODZkZjUwCihYRU4pICAgIGZmZmZmZmZmODE1NjM0MzggMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MAooWEVOKSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMAooWEVOKSAgICAwMDAwMDAwMDAwMDAwMDAwIGZmZmY4ODAwMjc4NmRmNTggMDAwMDAwMDAw
MDAwMDAwMAooWEVOKSAqKiogRHVtcGluZyBEb20wIHZjcHUjMiBzdGF0ZTogKioqCihYRU4pIFJJ
UDogICAgZTAzMzpbPGZmZmZmZmZmODEwMDEzYWE+XQooWEVOKSBSRkxBR1M6IDAwMDAwMDAwMDAw
MDAyNDYgICBFTTogMCAgIENPTlRFWFQ6IHB2IGd1ZXN0CihYRU4pIHJheDogMDAwMDAwMDAwMDAw
MDAwMCAgIHJieDogZmZmZjg4MDAyNzg2ZmZkOCAgIHJjeDogZmZmZmZmZmY4MTAwMTNhYQooWEVO
KSByZHg6IDAwMDAwMDAwMDAwMDAwMDAgICByc2k6IDAwMDAwMDAwZGVhZGJlZWYgICByZGk6IDAw
MDAwMDAwZGVhZGJlZWYKKFhFTikgcmJwOiBmZmZmODgwMDI3ODZmZWUwICAgcnNwOiBmZmZmODgw
MDI3ODZmZWM4ICAgcjg6ICAwMDAwMDAwMDAwMDAwMDAwCihYRU4pIHI5OiAgMDAwMDAwMDAwMDAw
MDAwMCAgIHIxMDogMDAwMDAwMDAwMDAwMDAwMSAgIHIxMTogMDAwMDAwMDAwMDAwMDI0NgooWEVO
KSByMTI6IGZmZmZmZmZmODFhYWZkYTAgICByMTM6IDAwMDAwMDAwMDAwMDAwMDIgICByMTQ6IDAw
MDAwMDAwMDAwMDAwMDAKKFhFTikgcjE1OiAwMDAwMDAwMDAwMDAwMDAwICAgY3IwOiAwMDAwMDAw
MDgwMDUwMDNiICAgY3I0OiAwMDAwMDAwMDAwMDAyNjYwCihYRU4pIGNyMzogMDAwMDAwMDEzY2E1
ZjAwMCAgIGNyMjogMDAwMDdmODE5YzNiZTAwMAooWEVOKSBkczogMDAyYiAgIGVzOiAwMDJiICAg
ZnM6IDAwMDAgICBnczogMDAwMCAgIHNzOiBlMDJiICAgY3M6IGUwMzMKKFhFTikgR3Vlc3Qgc3Rh
Y2sgdHJhY2UgZnJvbSByc3A9ZmZmZjg4MDAyNzg2ZmVjODoKKFhFTikgICAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMGZmZmZmZmZmIGZmZmZmZmZmODEwMGE1YzAgZmZmZjg4MDAyNzg2ZmYxMAoo
WEVOKSAgICBmZmZmZmZmZjgxMDFjNjYzIGZmZmY4ODAwMjc4NmZmZDggZmZmZmZmZmY4MWFhZmRh
MCAwMDAwMDAwMDAwMDAwMDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgZmZmZjg4MDAyNzg2
ZmY0MCBmZmZmZmZmZjgxMDEzMjM2IGZmZmZmZmZmODEwMGFkZTkKKFhFTikgICAgMWZlN2I1YTgy
MjE1MDQ5OSAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgZmZmZjg4MDAyNzg2ZmY1
MAooWEVOKSAgICBmZmZmZmZmZjgxNTYzNDM4IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMAooWEVOKSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAw
MDAwMDAwMDAwMDAwMCBmZmZmODgwMDI3ODZmZjU4IDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgKioq
IER1bXBpbmcgRG9tMCB2Y3B1IzMgc3RhdGU6ICoqKgooWEVOKSBSSVA6ICAgIGUwMzM6WzxmZmZm
ZmZmZjgxMDAxM2FhPl0KKFhFTikgUkZMQUdTOiAwMDAwMDAwMDAwMDAwMjQ2ICAgRU06IDAgICBD
T05URVhUOiBwdiBndWVzdAooWEVOKSByYXg6IDAwMDAwMDAwMDAwMDAwMDAgICByYng6IGZmZmY4
ODAwMjc4ODFmZDggICByY3g6IGZmZmZmZmZmODEwMDEzYWEKKFhFTikgcmR4OiAwMDAwMDAwMDAw
MDAwMDAwICAgcnNpOiAwMDAwMDAwMGRlYWRiZWVmICAgcmRpOiAwMDAwMDAwMGRlYWRiZWVmCihY
RU4pIHJicDogZmZmZjg4MDAyNzg4MWVlMCAgIHJzcDogZmZmZjg4MDAyNzg4MWVjOCAgIHI4OiAg
MDAwMDAwMDAwMDAwMDAwMAooWEVOKSByOTogIDAwMDAwMDAwMDAwMDAwMDAgICByMTA6IDAwMDAw
MDAwMDAwMDAwMDEgICByMTE6IDAwMDAwMDAwMDAwMDAyNDYKKFhFTikgcjEyOiBmZmZmZmZmZjgx
YWFmZGEwICAgcjEzOiAwMDAwMDAwMDAwMDAwMDAzICAgcjE0OiAwMDAwMDAwMDAwMDAwMDAwCihY
RU4pIHIxNTogMDAwMDAwMDAwMDAwMDAwMCAgIGNyMDogMDAwMDAwMDA4MDA1MDAzYiAgIGNyNDog
MDAwMDAwMDAwMDAwMjY2MAooWEVOKSBjcjM6IDAwMDAwMDAxM2RiNjIwMDAgICBjcjI6IDAwMDA3
ZmVhY2E2NGMwMDAKKFhFTikgZHM6IDAwMmIgICBlczogMDAyYiAgIGZzOiAwMDAwICAgZ3M6IDAw
MDAgICBzczogZTAyYiAgIGNzOiBlMDMzCihYRU4pIEd1ZXN0IHN0YWNrIHRyYWNlIGZyb20gcnNw
PWZmZmY4ODAwMjc4ODFlYzg6CihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDBmZmZm
ZmZmZiBmZmZmZmZmZjgxMDBhNWMwIGZmZmY4ODAwMjc4ODFmMTAKKFhFTikgICAgZmZmZmZmZmY4
MTAxYzY2MyBmZmZmODgwMDI3ODgxZmQ4IGZmZmZmZmZmODFhYWZkYTAgMDAwMDAwMDAwMDAwMDAw
MAooWEVOKSAgICAwMDAwMDAwMDAwMDAwMDAwIGZmZmY4ODAwMjc4ODFmNDAgZmZmZmZmZmY4MTAx
MzIzNiBmZmZmZmZmZjgxMDBhZGU5CihYRU4pICAgIDQ5ZGUxODMzZDEzZjJhMjYgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIGZmZmY4ODAwMjc4ODFmNTAKKFhFTikgICAgZmZmZmZm
ZmY4MTU2MzQzOCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMAooWEVOKSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikgICAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMAooWEVOKSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwCihYRU4pICAgIDAwMDAwMDAwMDAwMDAwMDAgZmZm
Zjg4MDAyNzg4MWY1OCAwMDAwMDAwMDAwMDAwMDAwCihYRU4pIFtIOiBkdW1wIGhlYXAgaW5mb10K
KFhFTikgJ0gnIHByZXNzZWQgLT4gZHVtcGluZyBoZWFwIGluZm8gKG5vdy0weDFBOjlDNkUyMkMw
KQooWEVOKSBoZWFwW25vZGU9MF1bem9uZT0wXSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0w
XVt6b25lPTFdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9Ml0gLT4gMCBwYWdl
cwooWEVOKSBoZWFwW25vZGU9MF1bem9uZT0zXSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0w
XVt6b25lPTRdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9NV0gLT4gMCBwYWdl
cwooWEVOKSBoZWFwW25vZGU9MF1bem9uZT02XSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0w
XVt6b25lPTddIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9OF0gLT4gMCBwYWdl
cwooWEVOKSBoZWFwW25vZGU9MF1bem9uZT05XSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0w
XVt6b25lPTEwXSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTExXSAtPiAwIHBh
Z2VzCihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTEyXSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9k
ZT0wXVt6b25lPTEzXSAtPiAwIHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTE0XSAtPiAx
NjEyOCBwYWdlcwooWEVOKSBoZWFwW25vZGU9MF1bem9uZT0xNV0gLT4gMzI3NjggcGFnZXMKKFhF
TikgaGVhcFtub2RlPTBdW3pvbmU9MTZdIC0+IDY1NTM2IHBhZ2VzCihYRU4pIGhlYXBbbm9kZT0w
XVt6b25lPTE3XSAtPiAxMzA1NTkgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MThdIC0+
IDI2MjE0MyBwYWdlcwooWEVOKSBoZWFwW25vZGU9MF1bem9uZT0xOV0gLT4gMTcyODQ1IHBhZ2Vz
CihYRU4pIGhlYXBbbm9kZT0wXVt6b25lPTIwXSAtPiAxMzQyMTggcGFnZXMKKFhFTikgaGVhcFtu
b2RlPTBdW3pvbmU9MjFdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MjJdIC0+
IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MjNdIC0+IDAgcGFnZXMKKFhFTikgaGVh
cFtub2RlPTBdW3pvbmU9MjRdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MjVd
IC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MjZdIC0+IDAgcGFnZXMKKFhFTikg
aGVhcFtub2RlPTBdW3pvbmU9MjddIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9
MjhdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MjldIC0+IDAgcGFnZXMKKFhF
TikgaGVhcFtub2RlPTBdW3pvbmU9MzBdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pv
bmU9MzFdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MzJdIC0+IDAgcGFnZXMK
KFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MzNdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBd
W3pvbmU9MzRdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MzVdIC0+IDAgcGFn
ZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MzZdIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2Rl
PTBdW3pvbmU9MzddIC0+IDAgcGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MzhdIC0+IDAg
cGFnZXMKKFhFTikgaGVhcFtub2RlPTBdW3pvbmU9MzldIC0+IDAgcGFnZXMKKFhFTikgW0k6IGR1
bXAgSFZNIGlycSBpbmZvXQooWEVOKSAnSScgcHJlc3NlZCAtPiBkdW1waW5nIEhWTSBpcnEgaW5m
bwooWEVOKSBbTTogZHVtcCBNU0kgc3RhdGVdCihYRU4pIFBDSS1NU0kgaW50ZXJydXB0IGluZm9y
bWF0aW9uOgooWEVOKSAgTVNJICAgIDI2IHZlYz02MSBsb3dlc3QgIGVkZ2UgICBhc3NlcnQgIGxv
ZyBsb3dlc3QgZGVzdD0wMDAwMDAwMSBtYXNrPTAvMS8tMQooWEVOKSAgTVNJICAgIDI3IHZlYz0y
OSBsb3dlc3QgIGVkZ2UgICBhc3NlcnQgIGxvZyBsb3dlc3QgZGVzdD0wMDAwMDAwMSBtYXNrPTAv
MS8tMQooWEVOKSAgTVNJICAgIDI4IHZlYz0zMSBsb3dlc3QgIGVkZ2UgICBhc3NlcnQgIGxvZyBs
b3dlc3QgZGVzdD0wMDAwMDAwMSBtYXNrPTAvMS8tMQooWEVOKSAgTVNJICAgIDI5IHZlYz0zOSBs
b3dlc3QgIGVkZ2UgICBhc3NlcnQgIGxvZyBsb3dlc3QgZGVzdD0wMDAwMDAwMSBtYXNrPTAvMS8t
MQooWEVOKSAgTVNJICAgIDMwIHZlYz00MSBsb3dlc3QgIGVkZ2UgICBhc3NlcnQgIGxvZyBsb3dl
c3QgZGVzdD0wMDAwMDAwMSBtYXNrPTAvMS8tMQooWEVOKSBbUTogZHVtcCBQQ0kgZGV2aWNlc10K
KFhFTikgPT09PSBQQ0kgZGV2aWNlcyA9PT09CihYRU4pID09PT0gc2VnbWVudCAwMDAwID09PT0K
KFhFTikgMDAwMDowNTowMS4wIC0gZG9tIDAgICAtIE1TSXMgPCA+CihYRU4pIDAwMDA6MDQ6MDAu
MCAtIGRvbSAwICAgLSBNU0lzIDwgPgooWEVOKSAwMDAwOjAzOjAwLjAgLSBkb20gMCAgIC0gTVNJ
cyA8ID4KKFhFTikgMDAwMDowMjowMC4wIC0gZG9tIDAgICAtIE1TSXMgPCAzMCA+CihYRU4pIDAw
MDA6MDA6MWYuMyAtIGRvbSAwICAgLSBNU0lzIDwgPgooWEVOKSAwMDAwOjAwOjFmLjIgLSBkb20g
MCAgIC0gTVNJcyA8IDI3ID4KKFhFTikgMDAwMDowMDoxZi4wIC0gZG9tIDAgICAtIE1TSXMgPCA+
CihYRU4pIDAwMDA6MDA6MWUuMCAtIGRvbSAwICAgLSBNU0lzIDwgPgooWEVOKSAwMDAwOjAwOjFk
LjAgLSBkb20gMCAgIC0gTVNJcyA8ID4KKFhFTikgMDAwMDowMDoxYy43IC0gZG9tIDAgICAtIE1T
SXMgPCA+CihYRU4pIDAwMDA6MDA6MWMuNiAtIGRvbSAwICAgLSBNU0lzIDwgPgooWEVOKSAwMDAw
OjAwOjFjLjAgLSBkb20gMCAgIC0gTVNJcyA8ID4KKFhFTikgMDAwMDowMDoxYi4wIC0gZG9tIDAg
ICAtIE1TSXMgPCAyOSA+CihYRU4pIDAwMDA6MDA6MWEuMCAtIGRvbSAwICAgLSBNU0lzIDwgPgoo
WEVOKSAwMDAwOjAwOjE5LjAgLSBkb20gMCAgIC0gTVNJcyA8IDI2ID4KKFhFTikgMDAwMDowMDox
Ni4zIC0gZG9tIDAgICAtIE1TSXMgPCA+CihYRU4pIDAwMDA6MDA6MTYuMCAtIGRvbSAwICAgLSBN
U0lzIDwgPgooWEVOKSAwMDAwOjAwOjE0LjAgLSBkb20gMCAgIC0gTVNJcyA8ID4KKFhFTikgMDAw
MDowMDowMi4wIC0gZG9tIDAgICAtIE1TSXMgPCAyOCA+CihYRU4pIDAwMDA6MDA6MDAuMCAtIGRv
bSAwICAgLSBNU0lzIDwgPgooWEVOKSBbVjogZHVtcCBpb21tdSBpbmZvXQooWEVOKSAKKFhFTikg
aW9tbXUgMDogbnJfcHRfbGV2ZWxzID0gMy4KKFhFTikgICBRdWV1ZWQgSW52YWxpZGF0aW9uOiBz
dXBwb3J0ZWQgYW5kIGVuYWJsZWQuCihYRU4pICAgSW50ZXJydXB0IFJlbWFwcGluZzogc3VwcG9y
dGVkIGFuZCBlbmFibGVkLgooWEVOKSAgIEludGVycnVwdCByZW1hcHBpbmcgdGFibGUgKG5yX2Vu
dHJ5PTB4MTAwMDAuIE9ubHkgZHVtcCBQPTEgZW50cmllcyBoZXJlKToKKFhFTikgICAgICAgIFNW
VCAgU1EgICBTSUQgICAgICBEU1QgIFYgIEFWTCBETE0gVE0gUkggRE0gRlBEIFAKKFhFTikgICAw
MDAwOiAgMSAgIDAgIDAwMTAgMDAwMDAwMDEgMzEgICAgMCAgIDEgIDAgIDEgIDEgICAwIDEKKFhF
TikgCihYRU4pIGlvbW11IDE6IG5yX3B0X2xldmVscyA9IDMuCihYRU4pICAgUXVldWVkIEludmFs
aWRhdGlvbjogc3VwcG9ydGVkIGFuZCBlbmFibGVkLgooWEVOKSAgIEludGVycnVwdCBSZW1hcHBp
bmc6IHN1cHBvcnRlZCBhbmQgZW5hYmxlZC4KKFhFTikgICBJbnRlcnJ1cHQgcmVtYXBwaW5nIHRh
YmxlIChucl9lbnRyeT0weDEwMDAwLiBPbmx5IGR1bXAgUD0xIGVudHJpZXMgaGVyZSk6CihYRU4p
ICAgICAgICBTVlQgIFNRICAgU0lEICAgICAgRFNUICBWICBBVkwgRExNIFRNIFJIIERNIEZQRCBQ
CihYRU4pICAgMDAwMDogIDEgICAwICBmMGY4IDAwMDAwMDAxIDM4ICAgIDAgICAxICAwICAxICAx
ICAgMCAxCihYRU4pICAgMDAwMTogIDEgICAwICBmMGY4IDAwMDAwMDAxIGYwICAgIDAgICAxICAw
ICAxICAxICAgMCAxCihYRU4pICAgMDAwMjogIDEgICAwICBmMGY4IDAwMDAwMDAxIDQwICAgIDAg
ICAxICAwICAxICAxICAgMCAxCihYRU4pICAgMDAwMzogIDEgICAwICBmMGY4IDAwMDAwMDAxIDQ4
ICAgIDAgICAxICAwICAxICAxICAgMCAxCihYRU4pICAgMDAwNDogIDEgICAwICBmMGY4IDAwMDAw
MDAxIDUwICAgIDAgICAxICAwICAxICAxICAgMCAxCihYRU4pICAgMDAwNTogIDEgICAwICBmMGY4
IDAwMDAwMDAxIDU4ICAgIDAgICAxICAwICAxICAxICAgMCAxCihYRU4pICAgMDAwNjogIDEgICAw
ICBmMGY4IDAwMDAwMDAxIDYwICAgIDAgICAxICAwICAxICAxICAgMCAxCihYRU4pICAgMDAwNzog
IDEgICAwICBmMGY4IDAwMDAwMDAxIDY4ICAgIDAgICAxICAwICAxICAxICAgMCAxCihYRU4pICAg
MDAwODogIDEgICAwICBmMGY4IDAwMDAwMDAxIDcwICAgIDAgICAxICAxICAxICAxICAgMCAxCihY
RU4pICAgMDAwOTogIDEgICAwICBmMGY4IDAwMDAwMDAxIDc4ICAgIDAgICAxICAwICAxICAxICAg
MCAxCihYRU4pICAgMDAwYTogIDEgICAwICBmMGY4IDAwMDAwMDAxIDg4ICAgIDAgICAxICAwICAx
ICAxICAgMCAxCihYRU4pICAgMDAwYjogIDEgICAwICBmMGY4IDAwMDAwMDAxIDkwICAgIDAgICAx
ICAwICAxICAxICAgMCAxCihYRU4pICAgMDAwYzogIDEgICAwICBmMGY4IDAwMDAwMDAxIDk4ICAg
IDAgICAxICAwICAxICAxICAgMCAxCihYRU4pICAgMDAwZDogIDEgICAwICBmMGY4IDAwMDAwMDAx
IGEwICAgIDAgICAxICAwICAxICAxICAgMCAxCihYRU4pICAgMDAwZTogIDEgICAwICBmMGY4IDAw
MDAwMDAxIGE4ICAgIDAgICAxICAwICAxICAxICAgMCAxCihYRU4pICAgMDAwZjogIDEgICAwICBm
MGY4IDAwMDAwMDAxIGIwICAgIDAgICAxICAxICAxICAxICAgMCAxCihYRU4pICAgMDAxMDogIDEg
ICAwICBmMGY4IDAwMDAwMDAxIGI4ICAgIDAgICAxICAxICAxICAxICAgMCAxCihYRU4pICAgMDAx
MTogIDEgICAwICBmMGY4IDAwMDAwMDAxIGMwICAgIDAgICAxICAxICAxICAxICAgMCAxCihYRU4p
ICAgMDAxMjogIDEgICAwICBmMGY4IDAwMDAwMDAxIGM4ICAgIDAgICAxICAxICAxICAxICAgMCAx
CihYRU4pICAgMDAxMzogIDEgICAwICBmMGY4IDAwMDAwMDAxIGQwICAgIDAgICAxICAxICAxICAx
ICAgMCAxCihYRU4pICAgMDAxNDogIDEgICAwICBmMGY4IDAwMDAwMDAxIGQ4ICAgIDAgICAxICAx
ICAxICAxICAgMCAxCihYRU4pICAgMDAxNTogIDEgICAwICAwMGM4IDAwMDAwMDAxIDYxICAgIDAg
ICAxICAwICAxICAxICAgMCAxCihYRU4pICAgMDAxNjogIDEgICAwICAwMGZhIDAwMDAwMDAxIDI5
ICAgIDAgICAxICAwICAxICAxICAgMCAxCihYRU4pICAgMDAxNzogIDEgICAwICAwMGQ4IDAwMDAw
MDAxIDM5ICAgIDAgICAxICAwICAxICAxICAgMCAxCihYRU4pICAgMDAxODogIDEgICAwICAwMjAw
IDAwMDAwMDAxIDQxICAgIDAgICAxICAwICAxICAxICAgMCAxCihYRU4pIAooWEVOKSBSZWRpcmVj
dGlvbiB0YWJsZSBvZiBJT0FQSUMgMDoKKFhFTikgICAjZW50cnkgSURYIEZNVCBNQVNLIFRSSUcg
SVJSIFBPTCBTVEFUIERFTEkgIFZFQ1RPUgooWEVOKSAgICAwMTogIDAwMDAgICAxICAgIDAgICAw
ICAgMCAgIDAgICAgMCAgICAwICAgICAzOAooWEVOKSAgICAwMjogIDAwMDEgICAxICAgIDAgICAw
ICAgMCAgIDAgICAgMCAgICAwICAgICBmMAooWEVOKSAgICAwMzogIDAwMDIgICAxICAgIDAgICAw
ICAgMCAgIDAgICAgMCAgICAwICAgICA0MAooWEVOKSAgICAwNDogIDAwMDMgICAxICAgIDAgICAw
ICAgMCAgIDAgICAgMCAgICAwICAgICA0OAooWEVOKSAgICAwNTogIDAwMDQgICAxICAgIDAgICAw
ICAgMCAgIDAgICAgMCAgICAwICAgICA1MAooWEVOKSAgICAwNjogIDAwMDUgICAxICAgIDAgICAw
ICAgMCAgIDAgICAgMCAgICAwICAgICA1OAooWEVOKSAgICAwNzogIDAwMDYgICAxICAgIDAgICAw
ICAgMCAgIDAgICAgMCAgICAwICAgICA2MAooWEVOKSAgICAwODogIDAwMDcgICAxICAgIDAgICAw
ICAgMCAgIDAgICAgMCAgICAwICAgICA2OAooWEVOKSAgICAwOTogIDAwMDggICAxICAgIDAgICAx
ICAgMCAgIDAgICAgMCAgICAwICAgICA3MAooWEVOKSAgICAwYTogIDAwMDkgICAxICAgIDAgICAw
ICAgMCAgIDAgICAgMCAgICAwICAgICA3OAooWEVOKSAgICAwYjogIDAwMGEgICAxICAgIDAgICAw
ICAgMCAgIDAgICAgMCAgICAwICAgICA4OAooWEVOKSAgICAwYzogIDAwMGIgICAxICAgIDAgICAw
ICAgMCAgIDAgICAgMCAgICAwICAgICA5MAooWEVOKSAgICAwZDogIDAwMGMgICAxICAgIDEgICAw
ICAgMCAgIDAgICAgMCAgICAwICAgICA5OAooWEVOKSAgICAwZTogIDAwMGQgICAxICAgIDAgICAw
ICAgMCAgIDAgICAgMCAgICAwICAgICBhMAooWEVOKSAgICAwZjogIDAwMGUgICAxICAgIDAgICAw
ICAgMCAgIDAgICAgMCAgICAwICAgICBhOAooWEVOKSAgICAxMDogIDAwMGYgICAxICAgIDAgICAx
ICAgMCAgIDEgICAgMCAgICAwICAgICBiMAooWEVOKSAgICAxMjogIDAwMTAgICAxICAgIDEgICAx
ICAgMCAgIDEgICAgMCAgICAwICAgICBiOAooWEVOKSAgICAxMzogIDAwMTEgICAxICAgIDEgICAx
ICAgMCAgIDEgICAgMCAgICAwICAgICBjMAooWEVOKSAgICAxNDogIDAwMTQgICAxICAgIDEgICAx
ICAgMCAgIDEgICAgMCAgICAwICAgICBkOAooWEVOKSAgICAxNjogIDAwMTMgICAxICAgIDEgICAx
ICAgMCAgIDEgICAgMCAgICAwICAgICBkMAooWEVOKSAgICAxNzogIDAwMTIgICAxICAgIDAgICAx
ICAgMCAgIDEgICAgMCAgICAwICAgICBjOAooWEVOKSBbYTogZHVtcCB0aW1lciBxdWV1ZXNdCihY
RU4pIER1bXBpbmcgdGltZXIgcXVldWVzOgooWEVOKSBDUFUwMDoKKFhFTikgICBleD0gICAtMTcy
M3VzIHRpbWVyPWZmZmY4MmM0ODAyZTI1YzggY2I9ZmZmZjgyYzQ4MDEzZDc1NyhmZmZmODJjNDgw
MjcxODAwKSBuczE2NTUwX3BvbGwrMHgwLzB4MzMKKFhFTikgICBleD0gICAtMTcyMnVzIHRpbWVy
PWZmZmY4MzAxNGNhOTIzYjAgY2I9ZmZmZjgyYzQ4MDE2NjQxNihmZmZmODMwMTQ4OTQxZTgwKSBp
cnFfZ3Vlc3RfZW9pX3RpbWVyX2ZuKzB4MC8weDE1ZAooWEVOKSAgIGV4PSAgICA3Mjc4dXMgdGlt
ZXI9ZmZmZjgzMDE0ODk3M2VhOCBjYj1mZmZmODJjNDgwMTFhYWYwKDAwMDAwMDAwMDAwMDAwMDAp
IGNzY2hlZF90aWNrKzB4MC8weDMxNAooWEVOKSAgIGV4PSAgICA3Mjc4dXMgdGltZXI9ZmZmZjgz
MDE0ODk3MzFiOCBjYj1mZmZmODJjNDgwMTE5ZDcyKGZmZmY4MzAxNDg5NzMxOTApIGNzY2hlZF9h
Y2N0KzB4MC8weDQyYQooWEVOKSAgIGV4PSA1NDk0MDg5dXMgdGltZXI9ZmZmZjgyYzQ4MDMwMDU4
MCBjYj1mZmZmODJjNDgwMWE4ODUwKDAwMDAwMDAwMDAwMDAwMDApIG1jZV93b3JrX2ZuKzB4MC8w
eGE5CihYRU4pICAgZXg9MzUzODcyMjZ1cyB0aW1lcj1mZmZmODJjNDgwMmZlMjgwIGNiPWZmZmY4
MmM0ODAxODA3YzIoMDAwMDAwMDAwMDAwMDAwMCkgcGx0X292ZXJmbG93KzB4MC8weDEzMQooWEVO
KSAgIGV4PSAgOTk3MzIzdXMgdGltZXI9ZmZmZjgyYzQ4MDJmZTI0MCBjYj1mZmZmODJjNDgwMTgw
MmZlKDAwMDAwMDAwMDAwMDAwMDApIHRpbWVfY2FsaWJyYXRpb24rMHgwLzB4NWMKKFhFTikgQ1BV
MDE6CihYRU4pICAgZXg9ICAgNzYyNDd1cyB0aW1lcj1mZmZmODMwMTQ4OWIzYjk4IGNiPWZmZmY4
MmM0ODAxMWFhZjAoMDAwMDAwMDAwMDAwMDAwMSkgY3NjaGVkX3RpY2srMHgwLzB4MzE0CihYRU4p
ICAgZXg9ICAyMTQxMzR1cyB0aW1lcj1mZmZmODMwMGE4M2ZkMDYwIGNiPWZmZmY4MmM0ODAxMjFj
NmIoZmZmZjgzMDBhODNmZDAwMCkgdmNwdV9zaW5nbGVzaG90X3RpbWVyX2ZuKzB4MC8weGIKKFhF
TikgQ1BVMDI6CihYRU4pICAgZXg9ICAgOTY1NjJ1cyB0aW1lcj1mZmZmODMwMTQ4OTk0MDg4IGNi
PWZmZmY4MmM0ODAxMWFhZjAoMDAwMDAwMDAwMDAwMDAwMikgY3NjaGVkX3RpY2srMHgwLzB4MzE0
CihYRU4pICAgZXg9ICAxODQxMzl1cyB0aW1lcj1mZmZmODMwMGE4M2ZjMDYwIGNiPWZmZmY4MmM0
ODAxMjFjNmIoZmZmZjgzMDBhODNmYzAwMCkgdmNwdV9zaW5nbGVzaG90X3RpbWVyX2ZuKzB4MC8w
eGIKKFhFTikgQ1BVMDM6CihYRU4pICAgZXg9ICAxMTY4Nzd1cyB0aW1lcj1mZmZmODMwMTQ4OTk0
NTU4IGNiPWZmZmY4MmM0ODAxMWFhZjAoMDAwMDAwMDAwMDAwMDAwMykgY3NjaGVkX3RpY2srMHgw
LzB4MzE0CihYRU4pICAgZXg9ICAxNTQxNDh1cyB0aW1lcj1mZmZmODMwMGFhNTgzMDYwIGNiPWZm
ZmY4MmM0ODAxMjFjNmIoZmZmZjgzMDBhYTU4MzAwMCkgdmNwdV9zaW5nbGVzaG90X3RpbWVyX2Zu
KzB4MC8weGIKKFhFTikgW2M6IGR1bXAgQUNQSSBDeCBzdHJ1Y3R1cmVzXQooWEVOKSAnYycgcHJl
c3NlZCAtPiBwcmludGluZyBBQ1BJIEN4IHN0cnVjdHVyZXMKKFhFTikgPT1jcHUwPT0KKFhFTikg
YWN0aXZlIHN0YXRlOgkJQzI1NQooWEVOKSBtYXhfY3N0YXRlOgkJQzcKKFhFTikgc3RhdGVzOgoo
WEVOKSAgICAgQzE6CXR5cGVbQzFdIGxhdGVuY3lbMDAwXSB1c2FnZVswMDAwMDAwMF0gbWV0aG9k
WyBIQUxUXSBkdXJhdGlvblswXQooWEVOKSAgICAgQzA6CXVzYWdlWzAwMDAwMDAwXSBkdXJhdGlv
blsxMTUwNjA3MDIwMTZdCihYRU4pIFBDMlswXSBQQzNbMF0gUEM2WzBdIFBDN1swXQooWEVOKSBD
QzNbMF0gQ0M2WzBdIENDN1swXQooWEVOKSA9PWNwdTE9PQooWEVOKSBhY3RpdmUgc3RhdGU6CQlD
MjU1CihYRU4pIG1heF9jc3RhdGU6CQlDNwooWEVOKSBzdGF0ZXM6CihYRU4pICAgICBDMToJdHlw
ZVtDMV0gbGF0ZW5jeVswMDBdIHVzYWdlWzAwMDAwMDAwXSBtZXRob2RbIEhBTFRdIGR1cmF0aW9u
WzBdCihYRU4pICAgICBDMDoJdXNhZ2VbMDAwMDAwMDBdIGR1cmF0aW9uWzExNTA4NTQ5MjI5NF0K
KFhFTikgUEMyWzBdIFBDM1swXSBQQzZbMF0gUEM3WzBdCihYRU4pIENDM1swXSBDQzZbMTIxNDQ5
NzgxNDhdIENDN1swXQooWEVOKSA9PWNwdTI9PQooWEVOKSBhY3RpdmUgc3RhdGU6CQlDMjU1CihY
RU4pIG1heF9jc3RhdGU6CQlDNwooWEVOKSBzdGF0ZXM6CihYRU4pICAgICBDMToJdHlwZVtDMV0g
bGF0ZW5jeVswMDBdIHVzYWdlWzAwMDAwMDAwXSBtZXRob2RbIEhBTFRdIGR1cmF0aW9uWzBdCihY
RU4pICAgICBDMDoJdXNhZ2VbMDAwMDAwMDBdIGR1cmF0aW9uWzExNTExMTE3NjY3M10KKFhFTikg
UEMyWzBdIFBDM1swXSBQQzZbMF0gUEM3WzBdCihYRU4pIENDM1swXSBDQzZbMTIxNTA4ODEzOTdd
IENDN1swXQooWEVOKSA9PWNwdTM9PQooWEVOKSBhY3RpdmUgc3RhdGU6CQlDMjU1CihYRU4pIG1h
eF9jc3RhdGU6CQlDNwooWEVOKSBzdGF0ZXM6CihYRU4pICAgICBDMToJdHlwZVtDMV0gbGF0ZW5j
eVswMDBdIHVzYWdlWzAwMDAwMDAwXSBtZXRob2RbIEhBTFRdIGR1cmF0aW9uWzBdCihYRU4pICAg
ICBDMDoJdXNhZ2VbMDAwMDAwMDBdIGR1cmF0aW9uWzExNTEzNjg2MDU2N10KKFhFTikgUEMyWzBd
IFBDM1swXSBQQzZbMF0gUEM3WzBdCihYRU4pIENDM1swXSBDQzZbMTIxNTY4Nzg0MTZdIENDN1sw
XQooWEVOKSBbZTogZHVtcCBldnRjaG4gaW5mb10KKFhFTikgJ2UnIHByZXNzZWQgLT4gZHVtcGlu
ZyBldmVudC1jaGFubmVsIGluZm8KKFhFTikgRXZlbnQgY2hhbm5lbCBpbmZvcm1hdGlvbiBmb3Ig
ZG9tYWluIDA6CihYRU4pIFBvbGxpbmcgdkNQVXM6IHt9CihYRU4pICAgICBwb3J0IFtwL21dCihY
RU4pICAgICAgICAxIFsxLzBdOiBzPTUgbj0wIHg9MCB2PTAKKFhFTikgICAgICAgIDIgWzEvMV06
IHM9NiBuPTAgeD0wCihYRU4pICAgICAgICAzIFsxLzBdOiBzPTYgbj0wIHg9MAooWEVOKSAgICAg
ICAgNCBbMC8wXTogcz02IG49MCB4PTAKKFhFTikgICAgICAgIDUgWzAvMF06IHM9NSBuPTAgeD0w
IHY9MQooWEVOKSAgICAgICAgNiBbMC8wXTogcz02IG49MCB4PTAKKFhFTikgICAgICAgIDcgWzAv
MF06IHM9NSBuPTEgeD0wIHY9MAooWEVOKSAgICAgICAgOCBbMC8xXTogcz02IG49MSB4PTAKKFhF
TikgICAgICAgIDkgWzAvMF06IHM9NiBuPTEgeD0wCihYRU4pICAgICAgIDEwIFswLzBdOiBzPTYg
bj0xIHg9MAooWEVOKSAgICAgICAxMSBbMC8wXTogcz01IG49MSB4PTAgdj0xCihYRU4pICAgICAg
IDEyIFswLzBdOiBzPTYgbj0xIHg9MAooWEVOKSAgICAgICAxMyBbMC8wXTogcz01IG49MiB4PTAg
dj0wCihYRU4pICAgICAgIDE0IFsxLzFdOiBzPTYgbj0yIHg9MAooWEVOKSAgICAgICAxNSBbMC8w
XTogcz02IG49MiB4PTAKKFhFTikgICAgICAgMTYgWzAvMF06IHM9NiBuPTIgeD0wCihYRU4pICAg
ICAgIDE3IFswLzBdOiBzPTUgbj0yIHg9MCB2PTEKKFhFTikgICAgICAgMTggWzAvMF06IHM9NiBu
PTIgeD0wCihYRU4pICAgICAgIDE5IFswLzBdOiBzPTUgbj0zIHg9MCB2PTAKKFhFTikgICAgICAg
MjAgWzEvMV06IHM9NiBuPTMgeD0wCihYRU4pICAgICAgIDIxIFswLzBdOiBzPTYgbj0zIHg9MAoo
WEVOKSAgICAgICAyMiBbMC8wXTogcz02IG49MyB4PTAKKFhFTikgICAgICAgMjMgWzAvMF06IHM9
NSBuPTMgeD0wIHY9MQooWEVOKSAgICAgICAyNCBbMC8wXTogcz02IG49MyB4PTAKKFhFTikgICAg
ICAgMjUgWzAvMF06IHM9MyBuPTAgeD0wIGQ9MCBwPTM1CihYRU4pICAgICAgIDI2IFswLzBdOiBz
PTQgbj0wIHg9MCBwPTkgaT05CihYRU4pICAgICAgIDI3IFswLzBdOiBzPTUgbj0wIHg9MCB2PTIK
KFhFTikgICAgICAgMjggWzAvMF06IHM9NCBuPTAgeD0wIHA9OCBpPTgKKFhFTikgICAgICAgMjkg
WzAvMF06IHM9NCBuPTAgeD0wIHA9Mjc4IGk9MjcKKFhFTikgICAgICAgMzAgWzAvMF06IHM9NCBu
PTAgeD0wIHA9MTYgaT0xNgooWEVOKSAgICAgICAzMSBbMC8wXTogcz00IG49MCB4PTAgcD0yNzcg
aT0yOAooWEVOKSAgICAgICAzMiBbMC8wXTogcz00IG49MCB4PTAgcD0yMyBpPTIzCihYRU4pICAg
ICAgIDMzIFswLzBdOiBzPTQgbj0wIHg9MCBwPTI3NiBpPTI5CihYRU4pICAgICAgIDM0IFsxLzBd
OiBzPTQgbj0wIHg9MCBwPTI3NSBpPTMwCihYRU4pICAgICAgIDM1IFswLzBdOiBzPTMgbj0wIHg9
MCBkPTAgcD0yNQooWEVOKSAgICAgICAzNiBbMC8wXTogcz01IG49MCB4PTAgdj0zCihYRU4pICAg
ICAgIDM3IFsxLzBdOiBzPTQgbj0wIHg9MCBwPTI3OSBpPTI2CihYRU4pIFtnOiBwcmludCBncmFu
dCB0YWJsZSB1c2FnZV0KKFhFTikgZ250dGFiX3VzYWdlX3ByaW50X2FsbCBbIGtleSAnZycgcHJl
c3NlZAooWEVOKSAgICAgICAtLS0tLS0tLSBhY3RpdmUgLS0tLS0tLS0gICAgICAgLS0tLS0tLS0g
c2hhcmVkIC0tLS0tLS0tCihYRU4pIFtyZWZdIGxvY2FsZG9tIG1mbiAgICAgIHBpbiAgICAgICAg
ICBsb2NhbGRvbSBnbWZuICAgICBmbGFncwooWEVOKSBncmFudC10YWJsZSBmb3IgcmVtb3RlIGRv
bWFpbjogICAgMCAuLi4gbm8gYWN0aXZlIGdyYW50IHRhYmxlIGVudHJpZXMKKFhFTikgZ250dGFi
X3VzYWdlX3ByaW50X2FsbCBdIGRvbmUKKFhFTikgW2k6IGR1bXAgaW50ZXJydXB0IGJpbmRpbmdz
XQooWEVOKSBHdWVzdCBpbnRlcnJ1cHQgaW5mb3JtYXRpb246CihYRU4pICAgIElSUTogICAwIGFm
ZmluaXR5OjAwMDEgdmVjOmYwIHR5cGU9SU8tQVBJQy1lZGdlICAgIHN0YXR1cz0wMDAwMDAwMCBt
YXBwZWQsIHVuYm91bmQKKFhFTikgICAgSVJROiAgIDEgYWZmaW5pdHk6MDAwMSB2ZWM6MzggdHlw
ZT1JTy1BUElDLWVkZ2UgICAgc3RhdHVzPTAwMDAwMDAyIG1hcHBlZCwgdW5ib3VuZAooWEVOKSAg
ICBJUlE6ICAgMiBhZmZpbml0eTpmZmZmIHZlYzplMiB0eXBlPVhULVBJQyAgICAgICAgICBzdGF0
dXM9MDAwMDAwMDAgbWFwcGVkLCB1bmJvdW5kCihYRU4pICAgIElSUTogICAzIGFmZmluaXR5OjAw
MDEgdmVjOjQwIHR5cGU9SU8tQVBJQy1lZGdlICAgIHN0YXR1cz0wMDAwMDAwMiBtYXBwZWQsIHVu
Ym91bmQKKFhFTikgICAgSVJROiAgIDQgYWZmaW5pdHk6MDAwMSB2ZWM6NDggdHlwZT1JTy1BUElD
LWVkZ2UgICAgc3RhdHVzPTAwMDAwMDAyIG1hcHBlZCwgdW5ib3VuZAooWEVOKSAgICBJUlE6ICAg
NSBhZmZpbml0eTowMDAxIHZlYzo1MCB0eXBlPUlPLUFQSUMtZWRnZSAgICBzdGF0dXM9MDAwMDAw
MDIgbWFwcGVkLCB1bmJvdW5kCihYRU4pICAgIElSUTogICA2IGFmZmluaXR5OjAwMDEgdmVjOjU4
IHR5cGU9SU8tQVBJQy1lZGdlICAgIHN0YXR1cz0wMDAwMDAwMiBtYXBwZWQsIHVuYm91bmQKKFhF
TikgICAgSVJROiAgIDcgYWZmaW5pdHk6MDAwMSB2ZWM6NjAgdHlwZT1JTy1BUElDLWVkZ2UgICAg
c3RhdHVzPTAwMDAwMDAyIG1hcHBlZCwgdW5ib3VuZAooWEVOKSAgICBJUlE6ICAgOCBhZmZpbml0
eTowMDAxIHZlYzo2OCB0eXBlPUlPLUFQSUMtZWRnZSAgICBzdGF0dXM9MDAwMDAwMTAgaW4tZmxp
Z2h0PTAgZG9tYWluLWxpc3Q9MDogIDgoLVMtLSksCihYRU4pICAgIElSUTogICA5IGFmZmluaXR5
OjAwMDEgdmVjOjcwIHR5cGU9SU8tQVBJQy1sZXZlbCAgIHN0YXR1cz0wMDAwMDAxMCBpbi1mbGln
aHQ9MCBkb21haW4tbGlzdD0wOiAgOSgtUy0tKSwKKFhFTikgICAgSVJROiAgMTAgYWZmaW5pdHk6
MDAwMSB2ZWM6NzggdHlwZT1JTy1BUElDLWVkZ2UgICAgc3RhdHVzPTAwMDAwMDAyIG1hcHBlZCwg
dW5ib3VuZAooWEVOKSAgICBJUlE6ICAxMSBhZmZpbml0eTowMDAxIHZlYzo4OCB0eXBlPUlPLUFQ
SUMtZWRnZSAgICBzdGF0dXM9MDAwMDAwMDIgbWFwcGVkLCB1bmJvdW5kCihYRU4pICAgIElSUTog
IDEyIGFmZmluaXR5OjAwMDEgdmVjOjkwIHR5cGU9SU8tQVBJQy1lZGdlICAgIHN0YXR1cz0wMDAw
MDAwMiBtYXBwZWQsIHVuYm91bmQKKFhFTikgICAgSVJROiAgMTMgYWZmaW5pdHk6MDAwZiB2ZWM6
OTggdHlwZT1JTy1BUElDLWVkZ2UgICAgc3RhdHVzPTAwMDAwMDAyIG1hcHBlZCwgdW5ib3VuZAoo
WEVOKSAgICBJUlE6ICAxNCBhZmZpbml0eTowMDAxIHZlYzphMCB0eXBlPUlPLUFQSUMtZWRnZSAg
ICBzdGF0dXM9MDAwMDAwMDIgbWFwcGVkLCB1bmJvdW5kCihYRU4pICAgIElSUTogIDE1IGFmZmlu
aXR5OjAwMDEgdmVjOmE4IHR5cGU9SU8tQVBJQy1lZGdlICAgIHN0YXR1cz0wMDAwMDAwMiBtYXBw
ZWQsIHVuYm91bmQKKFhFTikgICAgSVJROiAgMTYgYWZmaW5pdHk6MDAwMSB2ZWM6YjAgdHlwZT1J
Ty1BUElDLWxldmVsICAgc3RhdHVzPTAwMDAwMDEwIGluLWZsaWdodD0wIGRvbWFpbi1saXN0PTA6
IDE2KC1TLS0pLAooWEVOKSAgICBJUlE6ICAxOCBhZmZpbml0eTowMDBmIHZlYzpiOCB0eXBlPUlP
LUFQSUMtbGV2ZWwgICBzdGF0dXM9MDAwMDAwMDIgbWFwcGVkLCB1bmJvdW5kCihYRU4pICAgIElS
UTogIDE5IGFmZmluaXR5OjAwMDEgdmVjOmMwIHR5cGU9SU8tQVBJQy1sZXZlbCAgIHN0YXR1cz0w
MDAwMDAwMiBtYXBwZWQsIHVuYm91bmQKKFhFTikgICAgSVJROiAgMjAgYWZmaW5pdHk6MDAwZiB2
ZWM6ZDggdHlwZT1JTy1BUElDLWxldmVsICAgc3RhdHVzPTAwMDAwMDAyIG1hcHBlZCwgdW5ib3Vu
ZAooWEVOKSAgICBJUlE6ICAyMiBhZmZpbml0eTowMDAxIHZlYzpkMCB0eXBlPUlPLUFQSUMtbGV2
ZWwgICBzdGF0dXM9MDAwMDAwMDIgbWFwcGVkLCB1bmJvdW5kCihYRU4pICAgIElSUTogIDIzIGFm
ZmluaXR5OjAwMDEgdmVjOmM4IHR5cGU9SU8tQVBJQy1sZXZlbCAgIHN0YXR1cz0wMDAwMDAxMCBp
bi1mbGlnaHQ9MCBkb21haW4tbGlzdD0wOiAyMygtUy0tKSwKKFhFTikgICAgSVJROiAgMjQgYWZm
aW5pdHk6MDAwMSB2ZWM6MjggdHlwZT1ETUFfTVNJICAgICAgICAgc3RhdHVzPTAwMDAwMDAwIG1h
cHBlZCwgdW5ib3VuZAooWEVOKSAgICBJUlE6ICAyNSBhZmZpbml0eTowMDAxIHZlYzozMCB0eXBl
PURNQV9NU0kgICAgICAgICBzdGF0dXM9MDAwMDAwMDAgbWFwcGVkLCB1bmJvdW5kCihYRU4pICAg
IElSUTogIDI2IGFmZmluaXR5OjAwMDEgdmVjOjYxIHR5cGU9UENJLU1TSSAgICAgICAgIHN0YXR1
cz0wMDAwMDAxMCBpbi1mbGlnaHQ9MCBkb21haW4tbGlzdD0wOjI3OShQUy0tKSwKKFhFTikgICAg
SVJROiAgMjcgYWZmaW5pdHk6MDAwMSB2ZWM6MjkgdHlwZT1QQ0ktTVNJICAgICAgICAgc3RhdHVz
PTAwMDAwMDEwIGluLWZsaWdodD0wIGRvbWFpbi1saXN0PTA6Mjc4KC1TLS0pLAooWEVOKSAgICBJ
UlE6ICAyOCBhZmZpbml0eTowMDAxIHZlYzozMSB0eXBlPVBDSS1NU0kgICAgICAgICBzdGF0dXM9
MDAwMDAwMTAgaW4tZmxpZ2h0PTAgZG9tYWluLWxpc3Q9MDoyNzcoLVMtLSksCihYRU4pICAgIElS
UTogIDI5IGFmZmluaXR5OjAwMDEgdmVjOjM5IHR5cGU9UENJLU1TSSAgICAgICAgIHN0YXR1cz0w
MDAwMDAxMCBpbi1mbGlnaHQ9MCBkb21haW4tbGlzdD0wOjI3NigtUy0tKSwKKFhFTikgICAgSVJR
OiAgMzAgYWZmaW5pdHk6MDAwMSB2ZWM6NDEgdHlwZT1QQ0ktTVNJICAgICAgICAgc3RhdHVzPTAw
MDAwMDEwIGluLWZsaWdodD0xIGRvbWFpbi1saXN0PTA6Mjc1KFBTLU0pLAooWEVOKSBJTy1BUElD
IGludGVycnVwdCBpbmZvcm1hdGlvbjoKKFhFTikgICAgIElSUSAgMCBWZWMyNDA6CihYRU4pICAg
ICAgIEFwaWMgMHgwMCwgUGluICAyOiB2ZWM9ZjAgZGVsaXZlcnk9TG9QcmkgZGVzdD1MIHN0YXR1
cz0wIHBvbGFyaXR5PTAgaXJyPTAgdHJpZz1FIG1hc2s9MCBkZXN0X2lkOjAKKFhFTikgICAgIElS
USAgMSBWZWMgNTY6CihYRU4pICAgICAgIEFwaWMgMHgwMCwgUGluICAxOiB2ZWM9MzggZGVsaXZl
cnk9TG9QcmkgZGVzdD1MIHN0YXR1cz0wIHBvbGFyaXR5PTAgaXJyPTAgdHJpZz1FIG1hc2s9MCBk
ZXN0X2lkOjAKKFhFTikgICAgIElSUSAgMyBWZWMgNjQ6CihYRU4pICAgICAgIEFwaWMgMHgwMCwg
UGluICAzOiB2ZWM9NDAgZGVsaXZlcnk9TG9QcmkgZGVzdD1MIHN0YXR1cz0wIHBvbGFyaXR5PTAg
aXJyPTAgdHJpZz1FIG1hc2s9MCBkZXN0X2lkOjAKKFhFTikgICAgIElSUSAgNCBWZWMgNzI6CihY
RU4pICAgICAgIEFwaWMgMHgwMCwgUGluICA0OiB2ZWM9NDggZGVsaXZlcnk9TG9QcmkgZGVzdD1M
IHN0YXR1cz0wIHBvbGFyaXR5PTAgaXJyPTAgdHJpZz1FIG1hc2s9MCBkZXN0X2lkOjAKKFhFTikg
ICAgIElSUSAgNSBWZWMgODA6CihYRU4pICAgICAgIEFwaWMgMHgwMCwgUGluICA1OiB2ZWM9NTAg
ZGVsaXZlcnk9TG9QcmkgZGVzdD1MIHN0YXR1cz0wIHBvbGFyaXR5PTAgaXJyPTAgdHJpZz1FIG1h
c2s9MCBkZXN0X2lkOjAKKFhFTikgICAgIElSUSAgNiBWZWMgODg6CihYRU4pICAgICAgIEFwaWMg
MHgwMCwgUGluICA2OiB2ZWM9NTggZGVsaXZlcnk9TG9QcmkgZGVzdD1MIHN0YXR1cz0wIHBvbGFy
aXR5PTAgaXJyPTAgdHJpZz1FIG1hc2s9MCBkZXN0X2lkOjAKKFhFTikgICAgIElSUSAgNyBWZWMg
OTY6CihYRU4pICAgICAgIEFwaWMgMHgwMCwgUGluICA3OiB2ZWM9NjAgZGVsaXZlcnk9TG9Qcmkg
ZGVzdD1MIHN0YXR1cz0wIHBvbGFyaXR5PTAgaXJyPTAgdHJpZz1FIG1hc2s9MCBkZXN0X2lkOjAK
KFhFTikgICAgIElSUSAgOCBWZWMxMDQ6CihYRU4pICAgICAgIEFwaWMgMHgwMCwgUGluICA4OiB2
ZWM9NjggZGVsaXZlcnk9TG9QcmkgZGVzdD1MIHN0YXR1cz0wIHBvbGFyaXR5PTAgaXJyPTAgdHJp
Zz1FIG1hc2s9MCBkZXN0X2lkOjAKKFhFTikgICAgIElSUSAgOSBWZWMxMTI6CihYRU4pICAgICAg
IEFwaWMgMHgwMCwgUGluICA5OiB2ZWM9NzAgZGVsaXZlcnk9TG9QcmkgZGVzdD1MIHN0YXR1cz0w
IHBvbGFyaXR5PTAgaXJyPTAgdHJpZz1MIG1hc2s9MCBkZXN0X2lkOjAKKFhFTikgICAgIElSUSAx
MCBWZWMxMjA6CihYRU4pICAgICAgIEFwaWMgMHgwMCwgUGluIDEwOiB2ZWM9NzggZGVsaXZlcnk9
TG9QcmkgZGVzdD1MIHN0YXR1cz0wIHBvbGFyaXR5PTAgaXJyPTAgdHJpZz1FIG1hc2s9MCBkZXN0
X2lkOjAKKFhFTikgICAgIElSUSAxMSBWZWMxMzY6CihYRU4pICAgICAgIEFwaWMgMHgwMCwgUGlu
IDExOiB2ZWM9ODggZGVsaXZlcnk9TG9QcmkgZGVzdD1MIHN0YXR1cz0wIHBvbGFyaXR5PTAgaXJy
PTAgdHJpZz1FIG1hc2s9MCBkZXN0X2lkOjAKKFhFTikgICAgIElSUSAxMiBWZWMxNDQ6CihYRU4p
ICAgICAgIEFwaWMgMHgwMCwgUGluIDEyOiB2ZWM9OTAgZGVsaXZlcnk9TG9QcmkgZGVzdD1MIHN0
YXR1cz0wIHBvbGFyaXR5PTAgaXJyPTAgdHJpZz1FIG1hc2s9MCBkZXN0X2lkOjAKKFhFTikgICAg
IElSUSAxMyBWZWMxNTI6CihYRU4pICAgICAgIEFwaWMgMHgwMCwgUGluIDEzOiB2ZWM9OTggZGVs
aXZlcnk9TG9QcmkgZGVzdD1MIHN0YXR1cz0wIHBvbGFyaXR5PTAgaXJyPTAgdHJpZz1FIG1hc2s9
MSBkZXN0X2lkOjAKKFhFTikgICAgIElSUSAxNCBWZWMxNjA6CihYRU4pICAgICAgIEFwaWMgMHgw
MCwgUGluIDE0OiB2ZWM9YTAgZGVsaXZlcnk9TG9QcmkgZGVzdD1MIHN0YXR1cz0wIHBvbGFyaXR5
PTAgaXJyPTAgdHJpZz1FIG1hc2s9MCBkZXN0X2lkOjAKKFhFTikgICAgIElSUSAxNSBWZWMxNjg6
CihYRU4pICAgICAgIEFwaWMgMHgwMCwgUGluIDE1OiB2ZWM9YTggZGVsaXZlcnk9TG9QcmkgZGVz
dD1MIHN0YXR1cz0wIHBvbGFyaXR5PTAgaXJyPTAgdHJpZz1FIG1hc2s9MCBkZXN0X2lkOjAKKFhF
TikgICAgIElSUSAxNiBWZWMxNzY6CihYRU4pICAgICAgIEFwaWMgMHgwMCwgUGluIDE2OiB2ZWM9
YjAgZGVsaXZlcnk9TG9QcmkgZGVzdD1MIHN0YXR1cz0wIHBvbGFyaXR5PTEgaXJyPTAgdHJpZz1M
IG1hc2s9MCBkZXN0X2lkOjAKKFhFTikgICAgIElSUSAxOCBWZWMxODQ6CihYRU4pICAgICAgIEFw
aWMgMHgwMCwgUGluIDE4OiB2ZWM9YjggZGVsaXZlcnk9TG9QcmkgZGVzdD1MIHN0YXR1cz0wIHBv
bGFyaXR5PTEgaXJyPTAgdHJpZz1MIG1hc2s9MSBkZXN0X2lkOjAKKFhFTikgICAgIElSUSAxOSBW
ZWMxOTI6CihYRU4pICAgICAgIEFwaWMgMHgwMCwgUGluIDE5OiB2ZWM9YzAgZGVsaXZlcnk9TG9Q
cmkgZGVzdD1MIHN0YXR1cz0wIHBvbGFyaXR5PTEgaXJyPTAgdHJpZz1MIG1hc2s9MSBkZXN0X2lk
OjAKKFhFTikgICAgIElSUSAyMCBWZWMyMTY6CihYRU4pICAgICAgIEFwaWMgMHgwMCwgUGluIDIw
OiB2ZWM9ZDggZGVsaXZlcnk9TG9QcmkgZGVzdD1MIHN0YXR1cz0wIHBvbGFyaXR5PTEgaXJyPTAg
dHJpZz1MIG1hc2s9MSBkZXN0X2lkOjAKKFhFTikgICAgIElSUSAyMiBWZWMyMDg6CihYRU4pICAg
ICAgIEFwaWMgMHgwMCwgUGluIDIyOiB2ZWM9ZDAgZGVsaXZlcnk9TG9QcmkgZGVzdD1MIHN0YXR1
cz0wIHBvbGFyaXR5PTEgaXJyPTAgdHJpZz1MIG1hc2s9MSBkZXN0X2lkOjAKKFhFTikgICAgIElS
USAyMyBWZWMyMDA6CihYRU4pICAgICAgIEFwaWMgMHgwMCwgUGluIDIzOiB2ZWM9YzggZGVsaXZl
cnk9TG9QcmkgZGVzdD1MIHN0YXR1cz0wIHBvbGFyaXR5PTEgaXJyPTAgdHJpZz1MIG1hc2s9MCBk
ZXN0X2lkOjAKKFhFTikgW206IG1lbW9yeSBpbmZvXQooWEVOKSBQaHlzaWNhbCBtZW1vcnkgaW5m
b3JtYXRpb246CihYRU4pICAgICBYZW4gaGVhcDogMGtCIGZyZWUKKFhFTikgICAgIGhlYXBbMTRd
OiA2NDUxMmtCIGZyZWUKKFhFTikgICAgIGhlYXBbMTVdOiAxMzEwNzJrQiBmcmVlCihYRU4pICAg
ICBoZWFwWzE2XTogMjYyMTQ0a0IgZnJlZQooWEVOKSAgICAgaGVhcFsxN106IDUyMjIzNmtCIGZy
ZWUKKFhFTikgICAgIGhlYXBbMThdOiAxMDQ4NTcya0IgZnJlZQooWEVOKSAgICAgaGVhcFsxOV06
IDY5MTM4MGtCIGZyZWUKKFhFTikgICAgIGhlYXBbMjBdOiA1MzY4NzJrQiBmcmVlCihYRU4pICAg
ICBEb20gaGVhcDogMzI1Njc4OGtCIGZyZWUKKFhFTikgW246IE5NSSBzdGF0aXN0aWNzXQooWEVO
KSBDUFUJTk1JCihYRU4pICAgMAkgIDAKKFhFTikgICAxCSAgMAooWEVOKSAgIDIJICAwCihYRU4p
ICAgMwkgIDAKKFhFTikgZG9tMCB2Y3B1MDogTk1JIG5laXRoZXIgcGVuZGluZyBub3IgbWFza2Vk
CihYRU4pIFtxOiBkdW1wIGRvbWFpbiAoYW5kIGd1ZXN0IGRlYnVnKSBpbmZvXQooWEVOKSAncScg
cHJlc3NlZCAtPiBkdW1waW5nIGRvbWFpbiBpbmZvIChub3c9MHgxQTpGQzE3NUYwNCkKKFhFTikg
R2VuZXJhbCBpbmZvcm1hdGlvbiBmb3IgZG9tYWluIDA6CihYRU4pICAgICByZWZjbnQ9MyBkeWlu
Zz0wIHBhdXNlX2NvdW50PTAKKFhFTikgICAgIG5yX3BhZ2VzPTE4NzUzOSB4ZW5oZWFwX3BhZ2Vz
PTYgc2hhcmVkX3BhZ2VzPTAgcGFnZWRfcGFnZXM9MCBkaXJ0eV9jcHVzPXsxLTN9IG1heF9wYWdl
cz0xODgxNDcKKFhFTikgICAgIGhhbmRsZT0wMDAwMDAwMC0wMDAwLTAwMDAtMDAwMC0wMDAwMDAw
MDAwMDAgdm1fYXNzaXN0PTAwMDAwMDBkCihYRU4pIFJhbmdlc2V0cyBiZWxvbmdpbmcgdG8gZG9t
YWluIDA6CihYRU4pICAgICBJL08gUG9ydHMgIHsgMC0xZiwgMjItM2YsIDQ0LTYwLCA2Mi05Ziwg
YTItNDA3LCA0MGMtY2ZiLCBkMDAtMjA0ZiwgMjA1OC1mZmZmIH0KKFhFTikgICAgIEludGVycnVw
dHMgeyAwLTI3OSB9CihYRU4pICAgICBJL08gTWVtb3J5IHsgMC1mZWJmZiwgZmVjMDEtZmVkZmYs
IGZlZTAxLWZmZmZmZmZmZmZmZmZmZmYgfQooWEVOKSBNZW1vcnkgcGFnZXMgYmVsb25naW5nIHRv
IGRvbWFpbiAwOgooWEVOKSAgICAgRG9tUGFnZSBsaXN0IHRvbyBsb25nIHRvIGRpc3BsYXkKKFhF
TikgICAgIFhlblBhZ2UgMDAwMDAwMDAwMDE0ODkxNzogY2FmPWMwMDAwMDAwMDAwMDAwMDIsIHRh
Zj03NDAwMDAwMDAwMDAwMDAyCihYRU4pICAgICBYZW5QYWdlIDAwMDAwMDAwMDAxNDg5MTY6IGNh
Zj1jMDAwMDAwMDAwMDAwMDAxLCB0YWY9NzQwMDAwMDAwMDAwMDAwMQooWEVOKSAgICAgWGVuUGFn
ZSAwMDAwMDAwMDAwMTQ4OTE1OiBjYWY9YzAwMDAwMDAwMDAwMDAwMSwgdGFmPTc0MDAwMDAwMDAw
MDAwMDEKKFhFTikgICAgIFhlblBhZ2UgMDAwMDAwMDAwMDE0ODkxNDogY2FmPWMwMDAwMDAwMDAw
MDAwMDEsIHRhZj03NDAwMDAwMDAwMDAwMDAxCihYRU4pICAgICBYZW5QYWdlIDAwMDAwMDAwMDAw
YWEwZmQ6IGNhZj1jMDAwMDAwMDAwMDAwMDAyLCB0YWY9NzQwMDAwMDAwMDAwMDAwMgooWEVOKSAg
ICAgWGVuUGFnZSAwMDAwMDAwMDAwMTNmNDI4OiBjYWY9YzAwMDAwMDAwMDAwMDAwMiwgdGFmPTc0
MDAwMDAwMDAwMDAwMDIKKFhFTikgVkNQVSBpbmZvcm1hdGlvbiBhbmQgY2FsbGJhY2tzIGZvciBk
b21haW4gMDoKKFhFTikgICAgIFZDUFUwOiBDUFUwIFtoYXM9Rl0gcG9sbD0wIHVwY2FsbF9wZW5k
ID0gMDEsIHVwY2FsbF9tYXNrID0gMDAgZGlydHlfY3B1cz17fSBjcHVfYWZmaW5pdHk9ezB9CihY
RU4pICAgICBwYXVzZV9jb3VudD0wIHBhdXNlX2ZsYWdzPTAKKFhFTikgICAgIE5vIHBlcmlvZGlj
IHRpbWVyCihYRU4pICAgICBWQ1BVMTogQ1BVMSBbaGFzPUZdIHBvbGw9MCB1cGNhbGxfcGVuZCA9
IDAwLCB1cGNhbGxfbWFzayA9IDAwIGRpcnR5X2NwdXM9ezF9IGNwdV9hZmZpbml0eT17MX0KKFhF
TikgICAgIHBhdXNlX2NvdW50PTAgcGF1c2VfZmxhZ3M9MQooWEVOKSAgICAgTm8gcGVyaW9kaWMg
dGltZXIKKFhFTikgICAgIFZDUFUyOiBDUFUyIFtoYXM9Rl0gcG9sbD0wIHVwY2FsbF9wZW5kID0g
MDAsIHVwY2FsbF9tYXNrID0gMDAgZGlydHlfY3B1cz17Mn0gY3B1X2FmZmluaXR5PXsyfQooWEVO
KSAgICAgcGF1c2VfY291bnQ9MCBwYXVzZV9mbGFncz0xCihYRU4pICAgICBObyBwZXJpb2RpYyB0
aW1lcgooWEVOKSAgICAgVkNQVTM6IENQVTMgW2hhcz1GXSBwb2xsPTAgdXBjYWxsX3BlbmQgPSAw
MCwgdXBjYWxsX21hc2sgPSAwMCBkaXJ0eV9jcHVzPXszfSBjcHVfYWZmaW5pdHk9ezN9CihYRU4p
ICAgICBwYXVzZV9jb3VudD0wIHBhdXNlX2ZsYWdzPTEKKFhFTikgICAgIE5vIHBlcmlvZGljIHRp
bWVyCihYRU4pIE5vdGlmeWluZyBndWVzdCAwOjAgKHZpcnEgMSwgcG9ydCA1LCBzdGF0IDAvMC8t
MSkKKFhFTikgTm90aWZ5aW5nIGd1ZXN0IDA6MSAodmlycSAxLCBwb3J0IDExLCBzdGF0IDAvMC8w
KQooWEVOKSBOb3RpZnlpbmcgZ3Vlc3QgMDoyICh2aXJxIDEsIHBvcnQgMTcsIHN0YXQgMC8wLzAp
CihYRU4pIE5vdGlmeWluZyBndWVzdCAwOjMgKHZpcnEgMSwgcG9ydCAyMywgc3RhdCAwLzAvMCkK
CihYRU4pIFNoYXJlZCBmcmFtZXMgMCAtLSBTYXZlZCBmcmFtZXMgMApbICAxMTYuMDc1OTIxXSB2
Y3B1IDEKKFhFTikgW3I6IGR1bXAgcnVuIHF1ZXVlc10KWyAgMTE2LjA3NTkyMl0gIChYRU4pIHNj
aGVkX3NtdF9wb3dlcl9zYXZpbmdzOiBkaXNhYmxlZAooWEVOKSBOT1c9MHgwMDAwMDAxQjA3RERD
RTAxCihYRU4pIElkbGUgY3B1cG9vbDoKKFhFTikgU2NoZWR1bGVyOiBTTVAgQ3JlZGl0IFNjaGVk
dWxlciAoY3JlZGl0KQogKFhFTikgaW5mbzoKKFhFTikgCW5jcHVzICAgICAgICAgICAgICA9IDQK
KFhFTikgCW1hc3RlciAgICAgICAgICAgICA9IDAKKFhFTikgCWNyZWRpdCAgICAgICAgICAgICA9
IDQwMAooWEVOKSAJY3JlZGl0IGJhbGFuY2UgICAgID0gLTEwCihYRU4pIAl3ZWlnaHQgICAgICAg
ICAgICAgPSAyNTYKKFhFTikgCXJ1bnFfc29ydCAgICAgICAgICA9IDEzMzMKKFhFTikgCWRlZmF1
bHQtd2VpZ2h0ICAgICA9IDI1NgooWEVOKSAJdHNsaWNlICAgICAgICAgICAgID0gMTBtcwooWEVO
KSAJcmF0ZWxpbWl0ICAgICAgICAgID0gMTAwMHVzCihYRU4pIAljcmVkaXRzIHBlciBtc2VjICAg
PSAxMAooWEVOKSAJdGlja3MgcGVyIHRzbGljZSAgID0gMQooWEVOKSAJbWlncmF0aW9uIGRlbGF5
ICAgID0gMHVzCjA6IG1hc2tlZD0wIHBlbmQoWEVOKSBpZGxlcnM6IDAwMGMKKFhFTikgYWN0aXZl
IHZjcHVzOgooWEVOKSAJICAxOiBpbmc9MSBldmVudF9zZWwgWzAuMV0gcHJpPS0yIGZsYWdzPTAg
Y3B1PTEgY3JlZGl0PS02MTIgW3c9MjU2XQowMDAwMDAwMDAwMDAwMDAxKFhFTikgQ3B1cG9vbCAw
OgooWEVOKSBTY2hlZHVsZXI6IFNNUCBDcmVkaXQgU2NoZWR1bGVyIChjcmVkaXQpCihYRU4pIGlu
Zm86CihYRU4pIAluY3B1cyAgICAgICAgICAgICAgPSA0CihYRU4pIAltYXN0ZXIgICAgICAgICAg
ICAgPSAwCihYRU4pIAljcmVkaXQgICAgICAgICAgICAgPSA0MDAKKFhFTikgCWNyZWRpdCBiYWxh
bmNlICAgICA9IC0xMAooWEVOKSAJd2VpZ2h0ICAgICAgICAgICAgID0gMjU2CihYRU4pIAlydW5x
X3NvcnQgICAgICAgICAgPSAxMzMzCihYRU4pIAlkZWZhdWx0LXdlaWdodCAgICAgPSAyNTYKKFhF
TikgCXRzbGljZSAgICAgICAgICAgICA9IDEwbXMKKFhFTikgCXJhdGVsaW1pdCAgICAgICAgICA9
IDEwMDB1cwooWEVOKSAJY3JlZGl0cyBwZXIgbXNlYyAgID0gMTAKKFhFTikgCXRpY2tzIHBlciB0
c2xpY2UgICA9IDEKKFhFTikgCW1pZ3JhdGlvbiBkZWxheSAgICA9IDB1cwoKKFhFTikgaWRsZXJz
OiAwMDBjCihYRU4pIGFjdGl2ZSB2Y3B1czoKKFhFTikgCSAgMTogWzAuMV0gcHJpPS0yIGZsYWdz
PTAgY3B1PTEgY3JlZGl0PS0xMTU3IFt3PTI1Nl0KWyAgMTE2LjExMDE0N10gIChYRU4pIENQVVsw
MF0gICBzb3J0PTEzMzMsIHNpYmxpbmc9MDAwMSwgMTogbWFza2VkPTAgcGVuZGNvcmU9MDAwZgoo
WEVOKSAJcnVuOiBbMzI3NjcuMF0gcHJpPTAgZmxhZ3M9MCBjcHU9MAooWEVOKSAJICAxOiBbMC4w
XSBwcmk9MCBmbGFncz0wIGNwdT0wIGNyZWRpdD0tNDExIFt3PTI1Nl0KaW5nPTAgZXZlbnRfc2Vs
IChYRU4pIENQVVswMV0gIHNvcnQ9MTMzMywgc2libGluZz0wMDAyLCBjb3JlPTAwMGYKKFhFTikg
CXJ1bjogWzAuMV0gcHJpPS0yIGZsYWdzPTAgY3B1PTEgY3JlZGl0PS0xNDI5IFt3PTI1Nl0KKFhF
TikgCSAgMTogWzMyNzY3LjFdIHByaT0tNjQgZmxhZ3M9MCBjcHU9MQooWEVOKSBDUFVbMDJdIDAw
MDAwMDAwMDAwMDAwMDAgc29ydD0xMzMzLCBzaWJsaW5nPTAwMDQsIApjb3JlPTAwMGYKKFhFTikg
CXJ1bjogWyAgMTE2LjIxNDQwNV0gIFszMjc2Ny4yXSBwcmk9LTY0IGZsYWdzPTAgY3B1PTIKIChY
RU4pIENQVVswM10gMjogbWFza2VkPTEgcGVuZCBzb3J0PTEzMzMsIHNpYmxpbmc9MDAwOCwgaW5n
PTEgZXZlbnRfc2VsIGNvcmU9MDAwZgooWEVOKSAJcnVuOiAwMDAwMDAwMDAwMDAwMDAxWzMyNzY3
LjNdIHByaT0tNjQgZmxhZ3M9MCBjcHU9MwoKKFhFTikgW3M6IGR1bXAgc29mdHRzYyBzdGF0c10K
WyAgMTE2LjI1NTU3Ml0gIChYRU4pIFRTQyBtYXJrZWQgYXMgcmVsaWFibGUsIHdhcnAgPSAwIChj
b3VudD0yKQogKFhFTikgTm8gZG9tYWlucyBoYXZlIGVtdWxhdGVkIFRTQwozOiBtYXNrZWQ9MSBw
ZW5kKFhFTikgW3Q6IGRpc3BsYXkgbXVsdGktY3B1IGNsb2NrIGluZm9dCmluZz0xIGV2ZW50X3Nl
bCAoWEVOKSBTeW5jZWQgc3RpbWUgc2tldzogbWF4PTI2bnMgYXZnPTI2bnMgc2FtcGxlcz0xIGN1
cnJlbnQ9MjZucwooWEVOKSBTeW5jZWQgY3ljbGVzIHNrZXc6IG1heD0xNjQgYXZnPTE2NCBzYW1w
bGVzPTEgY3VycmVudD0xNjQKMDAwMDAwMDAwMDAwMDAwMShYRU4pIFt1OiBkdW1wIG51bWEgaW5m
b10KCihYRU4pICd1JyBwcmVzc2VkIC0+IGR1bXBpbmcgbnVtYSBpbmZvIChub3ctMHgxQjoxNTYw
MzcyNCkKWyAgMTE2LjI5NzMyOV0gIChYRU4pIGlkeDAgLT4gTk9ERTAgc3RhcnQtPjAgc2l6ZS0+
MTM2OTYwMCBmcmVlLT44MTQxOTcKIChYRU4pIHBoeXNfdG9fbmlkKDAwMDAwMDAwMDAwMDEwMDAp
IC0+IDAgc2hvdWxkIGJlIDAKCihYRU4pIENQVTAgLT4gTk9ERTAKKFhFTikgQ1BVMSAtPiBOT0RF
MAooWEVOKSBDUFUyIC0+IE5PREUwCihYRU4pIENQVTMgLT4gTk9ERTAKKFhFTikgTWVtb3J5IGxv
Y2F0aW9uIG9mIGVhY2ggZG9tYWluOgooWEVOKSBEb21haW4gMCAodG90YWw6IDE4NzUzOSk6Clsg
IDExNi4zMzUwMDZdIHAoWEVOKSAgICAgTm9kZSAwOiAxODc1MzkKZW5kaW5nOgooWEVOKSBbdjog
ZHVtcCBJbnRlbCdzIFZNQ1NdClsgIDExNi4zMzUwMDZdICAoWEVOKSAqKioqKioqKioqKiBWTUNT
IEFyZWFzICoqKioqKioqKioqKioqCihYRU4pICoqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqCiAgKFhFTikgW3o6IHByaW50IGlvYXBpYyBpbmZvXQowMDAwMDAwMDAwMDAwMDAw
KFhFTikgbnVtYmVyIG9mIE1QIElSUSBzb3VyY2VzOiAxNS4KKFhFTikgbnVtYmVyIG9mIElPLUFQ
SUMgIzIgcmVnaXN0ZXJzOiAyNC4KKFhFTikgdGVzdGluZyB0aGUgSU8gQVBJQy4uLi4uLi4uLi4u
Li4uLi4uLi4uLi4uCiAoWEVOKSBJTyBBUElDICMyLi4uLi4uCihYRU4pIC4uLi4gcmVnaXN0ZXIg
IzAwOiAwMjAwMDAwMAooWEVOKSAuLi4uLi4uICAgIDogcGh5c2ljYWwgQVBJQyBpZDogMDIKKFhF
TikgLi4uLi4uLiAgICA6IERlbGl2ZXJ5IFR5cGU6IDAKKFhFTikgLi4uLi4uLiAgICA6IExUUyAg
ICAgICAgICA6IDAKMDAwMDAwMDAwMDAwMDAwMChYRU4pIC4uLi4gcmVnaXN0ZXIgIzAxOiAwMDE3
MDAyMAooWEVOKSAuLi4uLi4uICAgICA6IG1heCByZWRpcmVjdGlvbiBlbnRyaWVzOiAwMDE3CihY
RU4pIC4uLi4uLi4gICAgIDogUFJRIGltcGxlbWVudGVkOiAwCihYRU4pIC4uLi4uLi4gICAgIDog
SU8gQVBJQyB2ZXJzaW9uOiAwMDIwCihYRU4pIC4uLi4gSVJRIHJlZGlyZWN0aW9uIHRhYmxlOgoo
WEVOKSAgTlIgTG9nIFBoeSBNYXNrIFRyaWcgSVJSIFBvbCBTdGF0IERlc3QgRGVsaSBWZWN0OiAg
IAogKFhFTikgIDAwIDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAgICAwICAgIDAgICAgMDAK
MDAwMDAwMDAwMDAwMDAwMChYRU4pICAwMSAwMDAgMDAgIDAgICAgMCAgICAwICAgMCAgIDAgICAg
MSAgICAxICAgIDM4CiAoWEVOKSAgMDIgMDAwIDAwICAwICAgIDAgICAgMCAgIDAgICAwICAgIDEg
ICAgMSAgICBGMAowMDAwMDAwMDAwMDAwMDAwKFhFTikgIDAzIDAwMCAwMCAgMCAgICAwICAgIDAg
ICAwICAgMCAgICAxICAgIDEgICAgNDAKIChYRU4pICAwNCAwMDAgMDAgIDAgICAgMCAgICAwICAg
MCAgIDAgICAgMSAgICAxICAgIDQ4CjAwMDAwMDAwMDAwMDAwMDAoWEVOKSAgMDUgMDAwIDAwICAw
ICAgIDAgICAgMCAgIDAgICAwICAgIDEgICAgMSAgICA1MAogKFhFTikgIDA2IDAwMCAwMCAgMCAg
ICAwICAgIDAgICAwICAgMCAgICAxICAgIDEgICAgNTgKMDAwMDAwMDAwMDAwMDAwMChYRU4pICAw
NyAwMDAgMDAgIDAgICAgMCAgICAwICAgMCAgIDAgICAgMSAgICAxICAgIDYwCiAoWEVOKSAgMDgg
MDAwIDAwICAwICAgIDAgICAgMCAgIDAgICAwICAgIDEgICAgMSAgICA2OAowMDAwMDAwMDAwMDAw
MDAwKFhFTikgIDA5IDAwMCAwMCAgMCAgICAxICAgIDAgICAwICAgMCAgICAxICAgIDEgICAgNzAK
IChYRU4pICAwYSAwMDAgMDAgIDAgICAgMCAgICAwICAgMCAgIDAgICAgMSAgICAxICAgIDc4CjAw
MDAwMDAwMDAwMDAwMDAoWEVOKSAgMGIgMDAwIDAwICAwICAgIDAgICAgMCAgIDAgICAwICAgIDEg
ICAgMSAgICA4OAoKKFhFTikgIDBjIDAwMCAwMCAgMCAgICAwICAgIDAgICAwICAgMCAgICAxICAg
IDEgICAgOTAKWyAgMTE2LjQ4NDYzOV0gIChYRU4pICAwZCAwMDAgMDAgIDEgICAgMCAgICAwICAg
MCAgIDAgICAgMSAgICAxICAgIDk4CiAgKFhFTikgIDBlIDAwMCAwMCAgMCAgICAwICAgIDAgICAw
ICAgMCAgICAxICAgIDEgICAgQTAKMDAwMDAwMDAwMDAwMDAwMChYRU4pICAwZiAwMDAgMDAgIDAg
ICAgMCAgICAwICAgMCAgIDAgICAgMSAgICAxICAgIEE4CiAoWEVOKSAgMTAgMDAwIDAwICAwICAg
IDEgICAgMCAgIDEgICAwICAgIDEgICAgMSAgICBCMAowMDAwMDAwMDAwMDAwMDAwKFhFTikgIDEx
IDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAgICAwICAgIDAgICAgMDAKIChYRU4pICAxMiAw
MDAgMDAgIDEgICAgMSAgICAwICAgMSAgIDAgICAgMSAgICAxICAgIEI4CjAwMDAwMDAwMDAwMDAw
MDAoWEVOKSAgMTMgMDAwIDAwICAxICAgIDEgICAgMCAgIDEgICAwICAgIDEgICAgMSAgICBDMAog
KFhFTikgIDE0IDAwMCAwMCAgMSAgICAxICAgIDAgICAxICAgMCAgICAxICAgIDEgICAgRDgKMDAw
MDAwMDAwMDAwMDAwMChYRU4pICAxNSAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAg
ICAwICAgIDAwCiAoWEVOKSAgMTYgMDAwIDAwICAxICAgIDEgICAgMCAgIDEgICAwICAgIDEgICAg
MSAgICBEMAowMDAwMDAwMDAwMDAwMDAwKFhFTikgIDE3IDAwMCAwMCAgMCAgICAxICAgIDAgICAx
ICAgMCAgICAxICAgIDEgICAgQzgKIChYRU4pIFVzaW5nIHZlY3Rvci1iYXNlZCBpbmRleGluZwoo
WEVOKSBJUlEgdG8gcGluIG1hcHBpbmdzOgowMDAwMDAwMDAwMDAwMDAwKFhFTikgSVJRMjQwIC0+
IDA6MgooWEVOKSBJUlE1NiAtPiAwOjEKKFhFTikgSVJRNjQgLT4gMDozCihYRU4pIElSUTcyIC0+
IDA6NAooWEVOKSBJUlE4MCAtPiAwOjUKKFhFTikgSVJRODggLT4gMDo2CihYRU4pIElSUTk2IC0+
IDA6NwooWEVOKSBJUlExMDQgLT4gMDo4CihYRU4pIElSUTExMiAtPiAwOjkKKFhFTikgSVJRMTIw
IC0+IDA6MTAKKFhFTikgSVJRMTM2IC0+IDA6MTEKKFhFTikgSVJRMTQ0IC0+IDA6MTIKKFhFTikg
SVJRMTUyIC0+IDA6MTMKKFhFTikgSVJRMTYwIC0+IDA6MTQKKFhFTikgSVJRMTY4IC0+IDA6MTUK
IChYRU4pIElSUTE3NiAtPiAwOjE2CihYRU4pIElSUTE4NCAtPiAwOjE4CihYRU4pIElSUTE5MiAt
PiAwOjE5CihYRU4pIElSUTIxNiAtPiAwOjIwCihYRU4pIElSUTIwOCAtPiAwOjIyCihYRU4pIElS
UTIwMCAtPiAwOjIzCihYRU4pIC4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLiBk
b25lLgowMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTE2LjYxNzYyNF0gICAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDExNi42MzE1ODZdICAgIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMApbICAxMTYuNjQ1NTQ2XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTE2LjY1OTUw
N10gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDExNi42NzM0NjhdICAgIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMApbICAxMTYuNjg3NDI5XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDA0MDAwODIwODIKWyAgMTE2
LjcwMTM5MF0gICAgClsgIDExNi43MDQ3MDJdIGdsb2JhbCBtYXNrOgpbICAxMTYuNzA0NzAyXSAg
ICBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZm
ZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYKWyAgMTE2LjcxOTkxNV0gICAgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZm
ZmZmZmZmClsgIDExNi43MzM4NzZdICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZgpbICAxMTYuNzQ3
ODM4XSAgICBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYKWyAgMTE2Ljc2MTc5OF0gICAgZmZmZmZmZmZm
ZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmClsgIDExNi43NzU3NjBdICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZm
ZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZgpbICAx
MTYuNzg5NzIxXSAgICBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZm
ZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYKWyAgMTE2LjgwMzY4MV0gICAgZmZm
ZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZm
ZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZjMDAwMTA0MTA1ClsgIDExNi44MTc2NDJdICAgIApbICAxMTYuODIwOTU0XSBnbG9i
YWxseSB1bm1hc2tlZDoKWyAgMTE2LjgyMDk1NF0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsg
IDExNi44MzY3MDRdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxMTYuODUwNjY1XSAgICAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTE2Ljg2NDYyN10gICAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwClsgIDExNi44Nzg1ODddICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxMTYuODkyNTQ5
XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTE2LjkwNjUxMF0gICAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwClsgIDExNi45MjA0NzJdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDQwMDA4MjA4MgpbICAxMTYu
OTM0NDMxXSAgICAKWyAgMTE2LjkzNzc0M10gbG9jYWwgY3B1MSBtYXNrOgpbICAxMTYuOTM3NzQz
XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTE2Ljk1MzMxNF0gICAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwClsgIDExNi45NjcyNzVdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxMTYu
OTgxMjM2XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTE2Ljk5NTE5N10gICAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwClsgIDExNy4wMDkxNThdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApb
ICAxMTcuMDIzMTE5XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTE3LjAzNzA4MF0gICAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAxZjgwClsgIDExNy4wNTEwNDFdICAgIApbICAxMTcuMDU0MzUyXSBs
b2NhbGx5IHVubWFza2VkOgpbICAxMTcuMDU0MzUzXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAK
WyAgMTE3LjA3MDAxM10gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDExNy4wODM5NzRdICAg
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxMTcuMDk3OTcxXSAgICAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAKWyAgMTE3LjExMTkzM10gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDExNy4xMjU4
OTNdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxMTcuMTM5ODU1XSAgICAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAKWyAgMTE3LjE1MzgxNV0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDgwClsgIDEx
Ny4xNjc3NzZdICAgIApbICAxMTcuMTcxMDg3XSBwZW5kaW5nIGxpc3Q6ClsgIDExNy4xNzQxMzFd
ICAgMDogZXZlbnQgMSAtPiBpcnEgMjcyIGxvY2FsbHktbWFza2VkClsgIDExNy4xNzkxNDNdICAg
MTogZXZlbnQgNyAtPiBpcnEgMjc4ClsgIDExNy4xODI4MTFdICAgMjogZXZlbnQgMTMgLT4gaXJx
IDI4NCBsb2NhbGx5LW1hc2tlZApbICAxMTcuMTg3OTEyXSAgIDM6IGV2ZW50IDE5IC0+IGlycSAy
OTAgbG9jYWxseS1tYXNrZWQKWyAgMTE3LjE5MzAxNF0gICAwOiBldmVudCAzNCAtPiBpcnEgMzAy
IGxvY2FsbHktbWFza2VkClsgIDExNy4xOTgxMzVdIApbICAxMTcuMTk4MTM2XSB2Y3B1IDAKWyAg
MTE3LjE5ODEzN10gICAwOiBtYXNrZWQ9MCBwZW5kaW5nPTEgZXZlbnRfc2VsIDAwMDAwMDAwMDAw
MDAwMDEKWyAgMTE3LjIwMzM5N10gICAxOiBtYXNrZWQ9MCBwZW5kaW5nPTAgZXZlbnRfc2VsIDAw
MDAwMDAwMDAwMDAwMDAKWyAgMTE3LjIwOTQ4Ml0gICAyOiBtYXNrZWQ9MSBwZW5kaW5nPTEgZXZl
bnRfc2VsIDAwMDAwMDAwMDAwMDAwMDEKWyAgMTE3LjIxNTU2N10gICAzOiBtYXNrZWQ9MSBwZW5k
aW5nPTEgZXZlbnRfc2VsIDAwMDAwMDAwMDAwMDAwMDEKWyAgMTE3LjIyMTY1M10gICAKWyAgMTE3
LjIyNzczN10gcGVuZGluZzoKWyAgMTE3LjIyNzczN10gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
ClsgIDExNy4yNDI1OTNdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxMTcuMjU2NTU0XSAg
ICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTE3LjI3MDUxNF0gICAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwClsgIDExNy4yODQ0NzZdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxMTcuMjk4
NDM2XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTE3LjMxMjM5N10gICAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwClsgIDExNy4zMjYzNTldICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDQwMDI4YTAwZQpbICAx
MTcuMzQwMzIwXSAgICAKWyAgMTE3LjM0MzYzMV0gZ2xvYmFsIG1hc2s6ClsgIDExNy4zNDM2MzJd
ICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZm
ZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZm
ZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZgpbICAxMTcuMzU4ODQ1XSAgICBmZmZmZmZmZmZmZmZm
ZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZm
ZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZm
ZmZmZmZmZmYKWyAgMTE3LjM3MjgwNl0gICAgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZm
ZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZm
ZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmClsgIDExNy4z
ODY3NjddICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZm
ZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZm
ZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZgpbICAxMTcuNDAwNzI4XSAgICBmZmZmZmZm
ZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZm
ZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZm
ZmZmZmZmZmZmZmZmZmYKWyAgMTE3LjQxNDY4OF0gICAgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZm
ZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZm
ZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmClsg
IDExNy40Mjg2NTBdICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZm
ZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZm
ZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZgpbICAxMTcuNDQyNjEwXSAgICBm
ZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZm
ZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZm
ZmZmIGZmZmZmZmMwMDAxMDQxMDUKWyAgMTE3LjQ1NjU3Ml0gICAgClsgIDExNy40NTk4ODNdIGds
b2JhbGx5IHVubWFza2VkOgpbICAxMTcuNDU5ODg0XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAK
WyAgMTE3LjQ3NTYzM10gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDExNy40ODk1OTVdICAg
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxMTcuNTAzNTU2XSAgICAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAKWyAgMTE3LjUxNzUxN10gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDExNy41MzE0
NzddICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxMTcuNTQ1NDM4XSAgICAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAKWyAgMTE3LjU1OTQwMF0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwNDAwMjhhMDBhClsgIDEx
Ny41NzMzNjFdICAgIApbICAxMTcuNTc2NjcxXSBsb2NhbCBjcHUwIG1hc2s6ClsgIDExNy41NzY2
NzJdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxMTcuNTkyMjQzXSAgICAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAKWyAgMTE3LjYwNjIwNV0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDEx
Ny42MjAxNjZdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxMTcuNjM0MTI3XSAgICAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAKWyAgMTE3LjY0ODA4N10gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
ClsgIDExNy42NjIwNDldICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxMTcuNjc2MDA5XSAg
ICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIGZmZmZmZmZmZmUwMDAwN2YKWyAgMTE3LjY4OTk3MV0gICAgClsgIDExNy42OTMyODFd
IGxvY2FsbHkgdW5tYXNrZWQ6ClsgIDExNy42OTMyODJdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MApbICAxMTcuNzA4OTQyXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTE3LjcyMjkwNF0g
ICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDExNy43MzY4NjRdICAgIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMApbICAxMTcuNzUwODI2XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTE3Ljc2
NDc4N10gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDExNy43Nzg3NDddICAgIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMApbICAxMTcuNzkyNzA5XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDA0MDAwMDAwMGEKWyAg
MTE3LjgwNjY3MF0gICAgClsgIDExNy44MDk5ODFdIHBlbmRpbmcgbGlzdDoKWyAgMTE3LjgxMzAy
OV0gICAwOiBldmVudCAxIC0+IGlycSAyNzIKWyAgMTE3LjgxNjY5NF0gICAwOiBldmVudCAyIC0+
IGlycSAyNzMgZ2xvYmFsbHktbWFza2VkClsgIDExNy44MjE3OTRdICAgMDogZXZlbnQgMyAtPiBp
cnEgMjc0ClsgIDExNy44MjU0NjRdICAgMjogZXZlbnQgMTMgLT4gaXJxIDI4NCBsb2NhbGx5LW1h
c2tlZApbICAxMTcuODMwNTY2XSAgIDI6IGV2ZW50IDE1IC0+IGlycSAyODYgbG9jYWxseS1tYXNr
ZWQKWyAgMTE3LjgzNTY2Nl0gICAzOiBldmVudCAxOSAtPiBpcnEgMjkwIGxvY2FsbHktbWFza2Vk
ClsgIDExNy44NDA3NjZdICAgMzogZXZlbnQgMjEgLT4gaXJxIDI5MiBsb2NhbGx5LW1hc2tlZApb
ICAxMTcuODQ1ODY4XSAgIDA6IGV2ZW50IDM0IC0+IGlycSAzMDIKWyAgMTE3Ljg0OTY0N10gClsg
IDExNy44NDk2NDhdIHZjcHUgMgpbICAxMTcuODQ5NjQ4XSAgIDA6IG1hc2tlZD0wIHBlbmRpbmc9
MCBldmVudF9zZWwgMDAwMDAwMDAwMDAwMDAwMApbICAxMTcuODU0OTA4XSAgIDE6IG1hc2tlZD0w
IHBlbmRpbmc9MCBldmVudF9zZWwgMDAwMDAwMDAwMDAwMDAwMApbICAxMTcuODYwOTkyXSAgIDI6
IG1hc2tlZD0wIHBlbmRpbmc9MSBldmVudF9zZWwgMDAwMDAwMDAwMDAwMDAwMQpbICAxMTcuODY3
MDc5XSAgIDM6IG1hc2tlZD0xIHBlbmRpbmc9MSBldmVudF9zZWwgMDAwMDAwMDAwMDAwMDAwMQpb
ICAxMTcuODczMTYzXSAgIApbICAxMTcuODc5MjQ4XSBwZW5kaW5nOgpbICAxMTcuODc5MjQ5XSAg
ICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTE3Ljg5NDEwNV0gICAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwClsgIDExNy45MDgwNjZdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxMTcuOTIy
MDI2XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTE3LjkzNTk4OF0gICAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwClsgIDExNy45NDk5NDhdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAx
MTcuOTYzOTEwXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTE3Ljk3Nzg3MV0gICAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMjhlMDA0ClsgIDExNy45OTE4MzJdICAgIApbICAxMTcuOTk1MTQzXSBnbG9i
YWwgbWFzazoKWyAgMTE3Ljk5NTE0M10gICAgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZm
ZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZm
ZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmClsgIDExOC4w
MTAzNTddICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZm
ZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZm
ZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZgpbICAxMTguMDI0MzE4XSAgICBmZmZmZmZm
ZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZm
ZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmIGZm
ZmZmZmZmZmZmZmZmZmYKWyAgMTE4LjAzODI3OV0gICAgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZm
ZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZm
ZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmClsg
IDExOC4wNTIyMzldICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZm
ZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZm
ZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZgpbICAxMTguMDY2MjAwXSAgICBm
ZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZm
ZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZm
ZmZmIGZmZmZmZmZmZmZmZmZmZmYKWyAgMTE4LjA4MDE2MV0gICAgZmZmZmZmZmZmZmZmZmZmZiBm
ZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZm
ZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZm
ZmZmClsgIDExOC4wOTQxNTldICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBm
ZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZm
ZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmYzAwMDEwNDEwNQpbICAxMTguMTA4MTE5
XSAgICAKWyAgMTE4LjExMTQzMV0gZ2xvYmFsbHkgdW5tYXNrZWQ6ClsgIDExOC4xMTE0MzJdICAg
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxMTguMTI3MTgyXSAgICAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAKWyAgMTE4LjE0MTE0M10gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDExOC4xNTUx
MDNdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxMTguMTY5MDY0XSAgICAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDAKWyAgMTE4LjE4MzAyNl0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDEx
OC4xOTY5ODZdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxMTguMjEwOTQ3XSAgICAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAyOGEwMDAKWyAgMTE4LjIyNDkwOF0gICAgClsgIDExOC4yMjgyMjBdIGxvY2Fs
IGNwdTIgbWFzazoKWyAgMTE4LjIyODIyMV0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDEx
OC4yNDM3OTFdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxMTguMjU3NzUzXSAgICAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDAKWyAgMTE4LjI3MTcxM10gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
ClsgIDExOC4yODU2NzRdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxMTguMjk5NjM2XSAg
ICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTE4LjMxMzU5Nl0gICAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwClsgIDExOC4zMjc1NThdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDA3ZTAwMApbICAxMTguMzQx
NTE4XSAgICAKWyAgMTE4LjM0NDgyOV0gbG9jYWxseSB1bm1hc2tlZDoKWyAgMTE4LjM0NDgyOV0g
ICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDExOC4zNjA0OTFdICAgIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMApbICAxMTguMzc0NDUxXSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTE4LjM4
ODQxMl0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDExOC40MDIzNzNdICAgIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMApbICAxMTguNDE2MzM0XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAg
MTE4LjQzMDI5Nl0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDExOC40NDQyNTZdICAgIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwYTAwMApbICAxMTguNDU4MjE4XSAgICAKWyAgMTE4LjQ2MTUyOV0gcGVu
ZGluZyBsaXN0OgpbICAxMTguNDY0NTczXSAgIDA6IGV2ZW50IDIgLT4gaXJxIDI3MyBnbG9iYWxs
eS1tYXNrZWQgbG9jYWxseS1tYXNrZWQKWyAgMTE4LjQ3MTAxNV0gICAyOiBldmVudCAxMyAtPiBp
cnEgMjg0ClsgIDExOC40NzQ3NzVdICAgMjogZXZlbnQgMTQgLT4gaXJxIDI4NSBnbG9iYWxseS1t
YXNrZWQKWyAgMTE4LjQ3OTk2NV0gICAyOiBldmVudCAxNSAtPiBpcnEgMjg2ClsgIDExOC40ODM3
MjNdICAgMzogZXZlbnQgMTkgLT4gaXJxIDI5MCBsb2NhbGx5LW1hc2tlZApbICAxMTguNDg4ODI1
XSAgIDM6IGV2ZW50IDIxIC0+IGlycSAyOTIgbG9jYWxseS1tYXNrZWQKWyAgMTE4LjQ5Mzk0Nl0g
ClsgIDExOC40OTM5NDddIHZjcHUgMwpbICAxMTguNDkzOTQ4XSAgIDA6IG1hc2tlZD0wIHBlbmRp
bmc9MCBldmVudF9zZWwgMDAwMDAwMDAwMDAwMDAwMApbICAxMTguNDk5MjA3XSAgIDE6IG1hc2tl
ZD0wIHBlbmRpbmc9MCBldmVudF9zZWwgMDAwMDAwMDAwMDAwMDAwMApbICAxMTguNTA1MjkyXSAg
IDI6IG1hc2tlZD0wIHBlbmRpbmc9MCBldmVudF9zZWwgMDAwMDAwMDAwMDAwMDAwMApbICAxMTgu
NTExMzc3XSAgIDM6IG1hc2tlZD0wIHBlbmRpbmc9MSBldmVudF9zZWwgMDAwMDAwMDAwMDAwMDAw
MQpbICAxMTguNTE3NDYzXSAgIApbICAxMTguNTIzNTQ4XSBwZW5kaW5nOgpbICAxMTguNTIzNTQ5
XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTE4LjUzODQwM10gICAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwClsgIDExOC41NTIzNjVdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxMTgu
NTY2MzI2XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTE4LjU4MDI4Nl0gICAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwClsgIDExOC41OTQyNDhdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApb
ICAxMTguNjA4MjA4XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTE4LjYyMjE3MF0gICAg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMzg0MDA0ClsgIDExOC42MzYxMzBdICAgIApbICAxMTguNjM5NDQyXSBn
bG9iYWwgbWFzazoKWyAgMTE4LjYzOTQ0M10gICAgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZm
ZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZm
IGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZmClsgIDEx
OC42NTQ2NTZdICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZm
ZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZm
IGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZgpbICAxMTguNjY4NjE2XSAgICBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZm
ZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZm
IGZmZmZmZmZmZmZmZmZmZmYKWyAgMTE4LjY4MjU3N10gICAgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZm
ZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZmZmZmZmZm
ClsgIDExOC42OTY1MzhdICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZm
ZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZgpbICAxMTguNzEwNDk5XSAg
ICBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZm
ZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYKWyAgMTE4LjcyNDQ2MV0gICAgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZmZmZmZmZm
ZmZmZmZmClsgIDExOC43Mzg0MjFdICAgIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZm
ZiBmZmZmZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmZmZmZmZmZmZmZiBmZmZm
ZmZmZmZmZmZmZmZmIGZmZmZmZmZmZmZmZmZmZmYgZmZmZmZmYzAwMDEwNDEwNQpbICAxMTguNzUy
MzgyXSAgICAKWyAgMTE4Ljc1NTY5NV0gZ2xvYmFsbHkgdW5tYXNrZWQ6ClsgIDExOC43NTU2OTZd
ICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxMTguNzcxNDQ0XSAgICAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAKWyAgMTE4Ljc4NTQwNV0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsgIDExOC43
OTkzNjddICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxMTguODEzMzI4XSAgICAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDAKWyAgMTE4LjgyNzI4OF0gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsg
IDExOC44NDEyNTBdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxMTguODU1MjEwXSAgICAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAyODAwMDAKWyAgMTE4Ljg2OTE3Ml0gICAgClsgIDExOC44NzI0ODJdIGxv
Y2FsIGNwdTMgbWFzazoKWyAgMTE4Ljg3MjQ4M10gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwClsg
IDExOC44ODgwNTRdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxMTguOTAyMDE1XSAgICAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTE4LjkxNTk3N10gICAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwClsgIDExOC45Mjk5MzddICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMApbICAxMTguOTQzODk5
XSAgICAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAKWyAgMTE4Ljk1Nzg1OV0gICAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwClsgIDExOC45NzE4MjBdICAgIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMWY4MDAwMApbICAxMTgu
OTg1NzgxXSAgICAKWyAgMTE4Ljk4OTA5M10gbG9jYWxseSB1bm1hc2tlZDoKWyAgMTE4Ljk4OTA5
M10gICAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  119.004753]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  119.018715]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  119.032676]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  119.046637]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  119.060597]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  119.074558]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  119.088519]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000280000
[  119.102516]    
[  119.105827] pending list:
[  119.108870]   0: event 2 -> irq 273 globally-masked locally-masked
[  119.115314]   2: event 14 -> irq 285 globally-masked locally-masked
[  119.121847]   3: event 19 -> irq 290
[  119.125605]   3: event 20 -> irq 291 globally-masked
[  119.130795]   3: event 21 -> irq 292
--14dae9399de136edb804c6d6c8a6
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--14dae9399de136edb804c6d6c8a6--


From xen-devel-bounces@lists.xen.org Thu Aug 09 15:37:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 15:37:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzUnM-00082m-Q2; Thu, 09 Aug 2012 15:37:08 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SzUnK-00082f-SH
	for xen-devel@lists.xen.org; Thu, 09 Aug 2012 15:37:06 +0000
Received: from [85.158.138.51:7484] by server-1.bemta-3.messagelabs.com id
	57/B9-32745-129D3205; Thu, 09 Aug 2012 15:37:05 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-11.tower-174.messagelabs.com!1344526625!27454967!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7490 invoked from network); 9 Aug 2012 15:37:05 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-11.tower-174.messagelabs.com with SMTP;
	9 Aug 2012 15:37:05 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 09 Aug 2012 16:37:05 +0100
Message-Id: <5023F5400200007800093F4E@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Thu, 09 Aug 2012 16:37:04 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ben Guthro" <ben@guthro.net>
References: <CAOvdn6WwgBD-2yf1uu3m9-16=Tb5KUrybXf2KibtnxKbP7YtwA@mail.gmail.com>
	<CAOvdn6UBW-yTOMd3YSf5M9JiHLRivyaKL-qzcKG8epszqaKmGg@mail.gmail.com>
	<20120807163320.GC15053@phenom.dumpdata.com>
	<CAOvdn6XcRGKtJxGwJO8J+ZBJr89wfTLc1woW5rhtMM540H9h1w@mail.gmail.com>
	<CAOvdn6XUoSeGoo7X=AJ1kpDxZB9HZEv+TJ4v-=FHcyR4=CHK-w@mail.gmail.com>
	<502240F302000078000937A6@nat28.tlf.novell.com>
	<CAOvdn6WN3kxC8Bb9g6MpfLULu47EGSGCtajPc-SFeJ-8VSUQDQ@mail.gmail.com>
	<CAOvdn6Xk1inm1hU55i0phU2qvSWyJLNoyDdn43biBAY2OewPtw@mail.gmail.com>
In-Reply-To: <CAOvdn6Xk1inm1hU55i0phU2qvSWyJLNoyDdn43biBAY2OewPtw@mail.gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Thomas Goetz <thomas.goetz@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen4.2 S3 regression?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 09.08.12 at 17:21, Ben Guthro <ben@guthro.net> wrote:
> Attached is a new run for
> new boot (pre-s3)
> first suspend / resume cycle (s3-first)
> second (failing) suspend / resume cycle (s3-second)

That confirms that the corruption occurs during the first suspend,
but presumably towards its end (where MSI and/or interrupt
redirection stuff already got restored). There's nothing I can add
beyond my earlier recommendations on how to debug this.

One thing I think you didn't tell us so far is whether without
interrupt remapping (or the IOMMU turned off altogether) the
problem would also be observed.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 09 15:38:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 15:38:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzUoA-00086A-7y; Thu, 09 Aug 2012 15:37:58 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SzUo8-00085w-Hi
	for xen-devel@lists.xensource.com; Thu, 09 Aug 2012 15:37:56 +0000
Received: from [85.158.138.51:16358] by server-4.bemta-3.messagelabs.com id
	22/D7-06379-359D3205; Thu, 09 Aug 2012 15:37:55 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-5.tower-174.messagelabs.com!1344526674!27534885!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg2ODI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28158 invoked from network); 9 Aug 2012 15:37:55 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-5.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Aug 2012 15:37:55 -0000
X-IronPort-AV: E=Sophos;i="4.77,740,1336348800"; d="scan'208";a="13934946"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	09 Aug 2012 15:37:48 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 9 Aug 2012 16:37:48 +0100
Date: Thu, 9 Aug 2012 16:37:24 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Dave Martin <dave.martin@linaro.org>
In-Reply-To: <20120808124111.GB2134@linaro.org>
Message-ID: <alpine.DEB.2.02.1208091046330.21096@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
	<1344263246-28036-2-git-send-email-stefano.stabellini@eu.citrix.com>
	<20120808124111.GB2134@linaro.org>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linaro-dev@lists.linaro.org" <linaro-dev@lists.linaro.org>,
	Ian Campbell <Ian.Campbell@citrix.com>, "arnd@arndb.de" <arnd@arndb.de>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"catalin.marinas@arm.com" <catalin.marinas@arm.com>,
	"konrad.wilk@oracle.com" <konrad.wilk@oracle.com>,
	"Tim \(Xen.org\)" <tim@xen.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [PATCH v2 02/23] xen/arm: hypercalls
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 8 Aug 2012, Dave Martin wrote:
> On Mon, Aug 06, 2012 at 03:27:05PM +0100, Stefano Stabellini wrote:
> > Use r12 to pass the hypercall number to the hypervisor.
> > 
> > We need a register to pass the hypercall number because we might not
> > know it at compile time and HVC only takes an immediate argument.
> > 
> > Among the available registers r12 seems to be the best choice because it
> > is defined as "intra-procedure call scratch register".
> > 
> > Use the ISS to pass a hypervisor-specific tag.
> > 
> > Changes in v2:
> > - define a HYPERCALL macro for 5-argument hypercall wrappers, even if
> > at the moment it is unused;
> > - use ldm instead of pop;
> > - fix up comments.
> > 
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > ---
> >  arch/arm/include/asm/xen/hypercall.h |   50 ++++++++++++++++
> >  arch/arm/xen/Makefile                |    2 +-
> >  arch/arm/xen/hypercall.S             |  106 ++++++++++++++++++++++++++++++++++
> >  3 files changed, 157 insertions(+), 1 deletions(-)
> >  create mode 100644 arch/arm/include/asm/xen/hypercall.h
> >  create mode 100644 arch/arm/xen/hypercall.S
> 
> [...]
> 
> > diff --git a/arch/arm/xen/hypercall.S b/arch/arm/xen/hypercall.S
> > new file mode 100644
> > index 0000000..074f5ed
> > --- /dev/null
> > +++ b/arch/arm/xen/hypercall.S
> > @@ -0,0 +1,106 @@
> > +/******************************************************************************
> > + * hypercall.S
> > + *
> > + * Xen hypercall wrappers
> > + *
> > + * Stefano Stabellini <stefano.stabellini@eu.citrix.com>, Citrix, 2012
> > + *
> > + * This program is free software; you can redistribute it and/or
> > + * modify it under the terms of the GNU General Public License version 2
> > + * as published by the Free Software Foundation; or, when distributed
> > + * separately from the Linux kernel or incorporated into other
> > + * software packages, subject to the following license:
> > + *
> > + * Permission is hereby granted, free of charge, to any person obtaining a copy
> > + * of this source file (the "Software"), to deal in the Software without
> > + * restriction, including without limitation the rights to use, copy, modify,
> > + * merge, publish, distribute, sublicense, and/or sell copies of the Software,
> > + * and to permit persons to whom the Software is furnished to do so, subject to
> > + * the following conditions:
> > + *
> > + * The above copyright notice and this permission notice shall be included in
> > + * all copies or substantial portions of the Software.
> > + *
> > + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> > + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> > + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
> > + * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> > + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> > + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
> > + * IN THE SOFTWARE.
> > + */
> > +
> > +/*
> > + * The Xen hypercall calling convention is very similar to the ARM
> > + * procedure calling convention: the first parameter is passed in r0, the
> > + * second in r1, the third in r2 and the fourth in r3. Considering that
> > + * Xen hypercalls have 5 arguments at most, the fifth parameter is passed
> > + * in r4, unlike the procedure calling convention, which would use the
> > + * stack in that case.
> > + *
> > + * The hypercall number is passed in r12.
> > + *
> > + * The return value is in r0.
> > + *
> > + * The hvc ISS is required to be 0xEA1, which is the Xen-specific ARM
> > + * hypercall tag.
> > + */
> > +
> > +#include <linux/linkage.h>
> > +#include <asm/assembler.h>
> > +#include <xen/interface/xen.h>
> > +
> > +
> > +/* HVC 0xEA1 */
> > +#ifdef CONFIG_THUMB2_KERNEL
> > +#define xen_hvc .word 0xf7e08ea1
> > +#else
> > +#define xen_hvc .word 0xe140ea71
> > +#endif
> 
> Consider using my opcode injection helpers patch for this (see
> separate repost: [PATCH v2 REPOST 0/4] ARM: opcodes: Facilitate custom
> opcode injection), assuming that nobody objects to it.  This should mean
> that the right opcodes get generated when building a kernel for a big-
> endian target for example.
> 
> I believe the __HVC(imm) macro which I put in <asm/opcodes-virt.h> as an
> example should do what you need in this case.

Sure I can do that. Maybe I'll add another patch at the end of my series
to replace xen_hvc with __HVC(0xEA1), so that it remains independent
from your series.
I have learned through experience that avoiding cross-patch-series
dependencies helps to reduce the amount of headaches during merge windows
:)
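As a sanity check, the two hard-coded literals quoted above (0xe140ea71 for ARM, 0xf7e08ea1 for Thumb2, little-endian word layout) can be reproduced from the ARMv7 HVC immediate encodings. A standalone sketch; the helper names are illustrative and not part of the patch:

```python
def arm_hvc(imm16):
    # ARM encoding of HVC #imm16 (cond = AL):
    # 1110 0001 0100 imm12 0111 imm4
    return 0xE1400070 | ((imm16 & 0xFFF0) << 4) | (imm16 & 0xF)

def thumb2_hvc(imm16):
    # Thumb2 encoding of HVC #imm16:
    # hw1 = 1111 0111 1110 imm4, hw2 = 1000 imm12
    hw1 = 0xF7E0 | (imm16 >> 12)
    hw2 = 0x8000 | (imm16 & 0x0FFF)
    return (hw1 << 16) | hw2

# 0xEA1 is the Xen hypercall tag used by the patch:
assert arm_hvc(0xEA1) == 0xE140EA71
assert thumb2_hvc(0xEA1) == 0xF7E08EA1
```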


> > +
> > +#define HYPERCALL_SIMPLE(hypercall)		\
> > +ENTRY(HYPERVISOR_##hypercall)			\
> > +	mov r12, #__HYPERVISOR_##hypercall;	\
> > +	xen_hvc;							\
> > +	mov pc, lr;							\
> > +ENDPROC(HYPERVISOR_##hypercall)
> > +
> > +#define HYPERCALL0 HYPERCALL_SIMPLE
> > +#define HYPERCALL1 HYPERCALL_SIMPLE
> > +#define HYPERCALL2 HYPERCALL_SIMPLE
> > +#define HYPERCALL3 HYPERCALL_SIMPLE
> > +#define HYPERCALL4 HYPERCALL_SIMPLE
> > +
> > +#define HYPERCALL5(hypercall)			\
> > +ENTRY(HYPERVISOR_##hypercall)			\
> > +	stmdb sp!, {r4};					\
> > +	ldr r4, [sp, #4];					\
> > +	mov r12, #__HYPERVISOR_##hypercall;	\
> > +	xen_hvc;							\
> > +	ldm sp!, {r4};						\
> > +	mov pc, lr;							\
> > +ENDPROC(HYPERVISOR_##hypercall)
> > +
> > +                .text
> > +
> > +HYPERCALL2(xen_version);
> > +HYPERCALL3(console_io);
> > +HYPERCALL3(grant_table_op);
> > +HYPERCALL2(sched_op);
> > +HYPERCALL2(event_channel_op);
> > +HYPERCALL2(hvm_op);
> > +HYPERCALL2(memory_op);
> > +HYPERCALL2(physdev_op);
> > +
> > +ENTRY(privcmd_call)
> > +	stmdb sp!, {r4}
> > +	mov r12, r0
> > +	mov r0, r1
> > +	mov r1, r2
> > +	mov r2, r3
> > +	ldr r3, [sp, #8]
> > +	ldr r4, [sp, #4]
> > +	xen_hvc
> > +	ldm sp!, {r4}
> > +	mov pc, lr
> 
> Note that the preferred entry/exit sequences in such cases are:
> 
> 	stmfd	sp!, {r4,lr}
> 	...
> 	ldmfd	sp!, {r4,pc}
> 
> ...but it works either way.  I wouldn't bother to change it unless you
> have other changes to make too.

Wouldn't this needlessly save and restore one more register (lr) to the
stack?
I would try to keep the hypercall wrappers as small as possible...

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 09 15:46:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 15:46:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzUwD-0008OT-As; Thu, 09 Aug 2012 15:46:17 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ben.guthro@gmail.com>) id 1SzUwB-0008OO-LP
	for xen-devel@lists.xen.org; Thu, 09 Aug 2012 15:46:15 +0000
Received: from [85.158.139.83:3829] by server-3.bemta-5.messagelabs.com id
	E9/4C-31899-64BD3205; Thu, 09 Aug 2012 15:46:14 +0000
X-Env-Sender: ben.guthro@gmail.com
X-Msg-Ref: server-9.tower-182.messagelabs.com!1344527173!30412574!1
X-Originating-IP: [209.85.213.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26701 invoked from network); 9 Aug 2012 15:46:14 -0000
Received: from mail-yx0-f173.google.com (HELO mail-yx0-f173.google.com)
	(209.85.213.173)
	by server-9.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Aug 2012 15:46:14 -0000
Received: by yenm4 with SMTP id m4so658949yen.32
	for <xen-devel@lists.xen.org>; Thu, 09 Aug 2012 08:46:12 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=LM0b+DNV5HlVkkkfpt92jGBVgYRb9f+eqwX9Vz3sgGY=;
	b=dbEvpLBSJdq4PamGsgcAayhV32KQFEE1AJxyHdFtsDuYd3WYAsbbQDHEVUutqwDqJI
	1CwWlfSNBQiwMyDLfqFlsJqkiUBu75fYj0+Q3Vupd8ob5uwrjWJ+RDStlLkYC2oQp8Iu
	aNnjILR53vUfoCmCAQOBnXj8Gb2ygVUXKyvqGavA0r8mXMk+HTYte1S8W5cOenDy3POR
	xccnXq+IyMDIGXNAvbrjSozQ67Va5CtP16UG5T2mx8cC23YQQ1s7bKPUkX9sRfZzwV8R
	0i16XewR14PTyNxpjGe2zqkfraUH8wEjQ4cFyyUBfMpxUaaxjScuEOjIUFAEzj8Cwoqt
	2X5g==
MIME-Version: 1.0
Received: by 10.42.85.69 with SMTP id p5mr5463447icl.24.1344527172341; Thu, 09
	Aug 2012 08:46:12 -0700 (PDT)
Received: by 10.64.33.15 with HTTP; Thu, 9 Aug 2012 08:46:12 -0700 (PDT)
In-Reply-To: <5023F5400200007800093F4E@nat28.tlf.novell.com>
References: <CAOvdn6WwgBD-2yf1uu3m9-16=Tb5KUrybXf2KibtnxKbP7YtwA@mail.gmail.com>
	<CAOvdn6UBW-yTOMd3YSf5M9JiHLRivyaKL-qzcKG8epszqaKmGg@mail.gmail.com>
	<20120807163320.GC15053@phenom.dumpdata.com>
	<CAOvdn6XcRGKtJxGwJO8J+ZBJr89wfTLc1woW5rhtMM540H9h1w@mail.gmail.com>
	<CAOvdn6XUoSeGoo7X=AJ1kpDxZB9HZEv+TJ4v-=FHcyR4=CHK-w@mail.gmail.com>
	<502240F302000078000937A6@nat28.tlf.novell.com>
	<CAOvdn6WN3kxC8Bb9g6MpfLULu47EGSGCtajPc-SFeJ-8VSUQDQ@mail.gmail.com>
	<CAOvdn6Xk1inm1hU55i0phU2qvSWyJLNoyDdn43biBAY2OewPtw@mail.gmail.com>
	<5023F5400200007800093F4E@nat28.tlf.novell.com>
Date: Thu, 9 Aug 2012 11:46:12 -0400
X-Google-Sender-Auth: Y72w5Vx_ruCnGHUoxQ3Wn6DusOQ
Message-ID: <CAOvdn6U-f3C4U8xVPvDyRFkFFLkbfiCXg_n+9zgA9V5u+q5jfw@mail.gmail.com>
From: Ben Guthro <ben@guthro.net>
To: Jan Beulich <JBeulich@suse.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen4.2 S3 regression?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Aug 9, 2012 at 11:37 AM, Jan Beulich <JBeulich@suse.com> wrote:
> One thing I think you didn't tell us so far is whether without
> interrupt remapping (or the IOMMU turned off altogether) the
> problem would also be observed.

I assume you mean the xen cli param iommu=off for the second test here.
What parameter should I be flipping for the first?

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 09 15:52:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 15:52:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzV1h-00005W-3x; Thu, 09 Aug 2012 15:51:57 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SzV1f-00005Q-Ek
	for xen-devel@lists.xen.org; Thu, 09 Aug 2012 15:51:55 +0000
Received: from [85.158.138.51:49752] by server-2.bemta-3.messagelabs.com id
	5D/A3-29239-A9CD3205; Thu, 09 Aug 2012 15:51:54 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-15.tower-174.messagelabs.com!1344527513!25694926!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 990 invoked from network); 9 Aug 2012 15:51:54 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-15.tower-174.messagelabs.com with SMTP;
	9 Aug 2012 15:51:54 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 09 Aug 2012 16:51:53 +0100
Message-Id: <5023F8B80200007800093F8C@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Thu, 09 Aug 2012 16:51:52 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ben Guthro" <ben@guthro.net>
References: <CAOvdn6WwgBD-2yf1uu3m9-16=Tb5KUrybXf2KibtnxKbP7YtwA@mail.gmail.com>
	<CAOvdn6UBW-yTOMd3YSf5M9JiHLRivyaKL-qzcKG8epszqaKmGg@mail.gmail.com>
	<20120807163320.GC15053@phenom.dumpdata.com>
	<CAOvdn6XcRGKtJxGwJO8J+ZBJr89wfTLc1woW5rhtMM540H9h1w@mail.gmail.com>
	<CAOvdn6XUoSeGoo7X=AJ1kpDxZB9HZEv+TJ4v-=FHcyR4=CHK-w@mail.gmail.com>
	<502240F302000078000937A6@nat28.tlf.novell.com>
	<CAOvdn6WN3kxC8Bb9g6MpfLULu47EGSGCtajPc-SFeJ-8VSUQDQ@mail.gmail.com>
	<CAOvdn6Xk1inm1hU55i0phU2qvSWyJLNoyDdn43biBAY2OewPtw@mail.gmail.com>
	<5023F5400200007800093F4E@nat28.tlf.novell.com>
	<CAOvdn6U-f3C4U8xVPvDyRFkFFLkbfiCXg_n+9zgA9V5u+q5jfw@mail.gmail.com>
In-Reply-To: <CAOvdn6U-f3C4U8xVPvDyRFkFFLkbfiCXg_n+9zgA9V5u+q5jfw@mail.gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen4.2 S3 regression?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 09.08.12 at 17:46, Ben Guthro <ben@guthro.net> wrote:
> On Thu, Aug 9, 2012 at 11:37 AM, Jan Beulich <JBeulich@suse.com> wrote:
>> One thing I think you didn't tell us so far is whether without
>> interrupt remapping (or the IOMMU turned off altogether) the
>> problem would also be observed.
> 
> I assume you mean the xen cli param iommu=off for the second test here.
> What parameter should I be flipping for the first?

iommu=no-intremap

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

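[Archive editor's note: the `iommu=` options in the exchange above are Xen hypervisor command-line parameters, so they go on the hypervisor's boot line rather than the dom0 kernel's. A minimal sketch of how one would set them, assuming a Debian-style GRUB 2 layout where `/etc/grub.d/20_linux_xen` honors `GRUB_CMDLINE_XEN` in `/etc/default/grub`; the file paths and variable names are this note's assumptions, not part of the thread.]

```shell
# /etc/default/grub -- illustrative sketch only; layout assumes a
# Debian-style GRUB 2 with the 20_linux_xen helper script.
# GRUB_CMDLINE_XEN is passed to the Xen hypervisor itself;
# GRUB_CMDLINE_LINUX goes to the dom0 kernel and is not what you want here.

GRUB_CMDLINE_XEN="iommu=no-intremap"   # keep the IOMMU, disable interrupt remapping only
#GRUB_CMDLINE_XEN="iommu=off"          # or: disable the IOMMU altogether

# After editing, regenerate the GRUB config and reboot:
#   update-grub
```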
From xen-devel-bounces@lists.xen.org Thu Aug 09 16:05:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 16:05:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzVE9-0000hI-L9; Thu, 09 Aug 2012 16:04:49 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SzVE8-0000hD-Mn
	for xen-devel@lists.xen.org; Thu, 09 Aug 2012 16:04:48 +0000
Received: from [85.158.143.35:57145] by server-3.bemta-4.messagelabs.com id
	43/C1-31486-F9FD3205; Thu, 09 Aug 2012 16:04:47 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1344528284!4721294!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg2ODI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25969 invoked from network); 9 Aug 2012 16:04:45 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Aug 2012 16:04:45 -0000
X-IronPort-AV: E=Sophos;i="4.77,741,1336348800"; d="scan'208";a="13935485"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	09 Aug 2012 16:04:41 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Thu, 9 Aug 2012
	17:04:41 +0100
Message-ID: <1344528279.32142.141.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Thanos Makatos <thanos.makatos@citrix.com>
Date: Thu, 9 Aug 2012 17:04:39 +0100
In-Reply-To: <4B45B535F7F6BE4CB1C044ED5115CDDE0107F6C1D44C@LONPMAILBOX01.citrite.net>
References: <4B45B535F7F6BE4CB1C044ED5115CDDE0107F6C1D44C@LONPMAILBOX01.citrite.net>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] RFC: blktap3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-09 at 15:03 +0100, Thanos Makatos wrote:
> Hi,
>
> I’d like to introduce blktap3: essentially blktap2 without the need of
> blkback. This has been developed by Santosh Jodh, and I’ll maintain
> it.
>
> In this patch, blktap2 binaries are suffixed with “2”, so it’s not yet
> possible to use it along with blktap3.

I'll take a proper look at this when I get back from vacation but just a
few quick first comments:

I don't think renaming blktap2 is going to fly. For better or worse
blktap2 uses the names it uses and people (and xend) use them with those
names, so I think blktap3 needs to avoid the conflict by adding "3" as
appropriate.

I noticed that the README looks pretty blktap1 specific and references
xm etc. It would be nice to decruft the tree as part of this transition
rather than carrying across obsolete and out of date components from
blktap1 & 2 into the blktap3 stack. I bet there is other stuff which is
no longer used, e.g. the libaio-compat.h -- does the need for that still
exist or is libaio now up to date in distros (it reference 2.6.21, which
is quite a while ago now). I expect you also want to ditch
linux-blktap.h -- it looks like ioctl definitions for talking to the
(now non-existent) kernel driver. It'd be good to strip all this sort of
thing out before people start reviewing in depth -- there's not much
point in reviewing stuff which should just be removed and the patch is
big enough as is.

On a similar note if there are plugin modules which are not often used
in normal configurations perhaps they could be omitted from the initial
upstreaming? (I don't know what all of block-* etc are,  but maybe some
of them are not really used in practice?)


Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 09 16:09:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 16:09:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzVIS-0000sY-B6; Thu, 09 Aug 2012 16:09:16 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ben.guthro@gmail.com>) id 1SzVIR-0000sS-9w
	for xen-devel@lists.xen.org; Thu, 09 Aug 2012 16:09:15 +0000
Received: from [85.158.139.83:22493] by server-8.bemta-5.messagelabs.com id
	7A/23-05939-AA0E3205; Thu, 09 Aug 2012 16:09:14 +0000
X-Env-Sender: ben.guthro@gmail.com
X-Msg-Ref: server-15.tower-182.messagelabs.com!1344528551!31033106!1
X-Originating-IP: [209.85.161.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17377 invoked from network); 9 Aug 2012 16:09:12 -0000
Received: from mail-gg0-f173.google.com (HELO mail-gg0-f173.google.com)
	(209.85.161.173)
	by server-15.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Aug 2012 16:09:12 -0000
Received: by ggna5 with SMTP id a5so690649ggn.32
	for <xen-devel@lists.xen.org>; Thu, 09 Aug 2012 09:09:11 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=UIhhiJ57TYVa/qFyeGes+PNPgtynxhFkF8f55C0gmZU=;
	b=0GjMAr2sWqjbo+nQWlPMW7lKZlaIdJFcpNMSjW4SlceutY875b7VJkraahnRWGDu6c
	ksXjgEWXmG2pMc2iUJ9WV1k7YnAGPffv5dYppojSFU24Da2N83dAyePoFToG2A/SQo68
	qt66WijFsYmyrqSahJ18SI7oYcTLXJObyUE/85WUXwotBNWjX7+zqFU8NuQZFdD03be2
	axD2hgmlZC/8/1UYxCbIDoJVyK9oWjAIAKq6PjPhGkLmziMhvuEdR28TTueZWppRHHki
	HOnlq9eEAHkz/J1sdwx9urLY7g7WMH1OrKHxaQqfndT+MPl5GvD291Hce3DkHuemIPiW
	e64w==
MIME-Version: 1.0
Received: by 10.42.85.69 with SMTP id p5mr5546373icl.24.1344528550756; Thu, 09
	Aug 2012 09:09:10 -0700 (PDT)
Received: by 10.64.33.15 with HTTP; Thu, 9 Aug 2012 09:09:10 -0700 (PDT)
In-Reply-To: <5023F8B80200007800093F8C@nat28.tlf.novell.com>
References: <CAOvdn6WwgBD-2yf1uu3m9-16=Tb5KUrybXf2KibtnxKbP7YtwA@mail.gmail.com>
	<CAOvdn6UBW-yTOMd3YSf5M9JiHLRivyaKL-qzcKG8epszqaKmGg@mail.gmail.com>
	<20120807163320.GC15053@phenom.dumpdata.com>
	<CAOvdn6XcRGKtJxGwJO8J+ZBJr89wfTLc1woW5rhtMM540H9h1w@mail.gmail.com>
	<CAOvdn6XUoSeGoo7X=AJ1kpDxZB9HZEv+TJ4v-=FHcyR4=CHK-w@mail.gmail.com>
	<502240F302000078000937A6@nat28.tlf.novell.com>
	<CAOvdn6WN3kxC8Bb9g6MpfLULu47EGSGCtajPc-SFeJ-8VSUQDQ@mail.gmail.com>
	<CAOvdn6Xk1inm1hU55i0phU2qvSWyJLNoyDdn43biBAY2OewPtw@mail.gmail.com>
	<5023F5400200007800093F4E@nat28.tlf.novell.com>
	<CAOvdn6U-f3C4U8xVPvDyRFkFFLkbfiCXg_n+9zgA9V5u+q5jfw@mail.gmail.com>
	<5023F8B80200007800093F8C@nat28.tlf.novell.com>
Date: Thu, 9 Aug 2012 12:09:10 -0400
X-Google-Sender-Auth: n6CVZB-Kb23O1teORZZCyCJWNEc
Message-ID: <CAOvdn6XKHfete_+=Hkvt20G7_cJ-4i8+9j9YD30OxzQ16Ru-+g@mail.gmail.com>
From: Ben Guthro <ben@guthro.net>
To: Jan Beulich <JBeulich@suse.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen4.2 S3 regression?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

iommu=no-intremap
This seems to work around the issue on this platform, performing
multiple suspend/resume cycles, and ahci came back afterwards just
fine.

What is the downside to flipping this off?

iommu=off
This test behaved similarly to the above, also working around the issue.


On Thu, Aug 9, 2012 at 11:51 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>>> On 09.08.12 at 17:46, Ben Guthro <ben@guthro.net> wrote:
>> On Thu, Aug 9, 2012 at 11:37 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>> One thing I think you didn't tell us so far is whether without
>>> interrupt remapping (or the IOMMU turned off altogether) the
>>> problem would also be observed.
>>
>> I assume you mean the xen cli param iommu=off for the second test here.
>> What parameter should I be flipping for the first?
>
> iommu=no-intremap
>
> Jan
>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 09 16:09:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 16:09:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzVIS-0000sY-B6; Thu, 09 Aug 2012 16:09:16 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ben.guthro@gmail.com>) id 1SzVIR-0000sS-9w
	for xen-devel@lists.xen.org; Thu, 09 Aug 2012 16:09:15 +0000
Received: from [85.158.139.83:22493] by server-8.bemta-5.messagelabs.com id
	7A/23-05939-AA0E3205; Thu, 09 Aug 2012 16:09:14 +0000
X-Env-Sender: ben.guthro@gmail.com
X-Msg-Ref: server-15.tower-182.messagelabs.com!1344528551!31033106!1
X-Originating-IP: [209.85.161.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17377 invoked from network); 9 Aug 2012 16:09:12 -0000
Received: from mail-gg0-f173.google.com (HELO mail-gg0-f173.google.com)
	(209.85.161.173)
	by server-15.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Aug 2012 16:09:12 -0000
Received: by ggna5 with SMTP id a5so690649ggn.32
	for <xen-devel@lists.xen.org>; Thu, 09 Aug 2012 09:09:11 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=UIhhiJ57TYVa/qFyeGes+PNPgtynxhFkF8f55C0gmZU=;
	b=0GjMAr2sWqjbo+nQWlPMW7lKZlaIdJFcpNMSjW4SlceutY875b7VJkraahnRWGDu6c
	ksXjgEWXmG2pMc2iUJ9WV1k7YnAGPffv5dYppojSFU24Da2N83dAyePoFToG2A/SQo68
	qt66WijFsYmyrqSahJ18SI7oYcTLXJObyUE/85WUXwotBNWjX7+zqFU8NuQZFdD03be2
	axD2hgmlZC/8/1UYxCbIDoJVyK9oWjAIAKq6PjPhGkLmziMhvuEdR28TTueZWppRHHki
	HOnlq9eEAHkz/J1sdwx9urLY7g7WMH1OrKHxaQqfndT+MPl5GvD291Hce3DkHuemIPiW
	e64w==
MIME-Version: 1.0
Received: by 10.42.85.69 with SMTP id p5mr5546373icl.24.1344528550756; Thu, 09
	Aug 2012 09:09:10 -0700 (PDT)
Received: by 10.64.33.15 with HTTP; Thu, 9 Aug 2012 09:09:10 -0700 (PDT)
In-Reply-To: <5023F8B80200007800093F8C@nat28.tlf.novell.com>
References: <CAOvdn6WwgBD-2yf1uu3m9-16=Tb5KUrybXf2KibtnxKbP7YtwA@mail.gmail.com>
	<CAOvdn6UBW-yTOMd3YSf5M9JiHLRivyaKL-qzcKG8epszqaKmGg@mail.gmail.com>
	<20120807163320.GC15053@phenom.dumpdata.com>
	<CAOvdn6XcRGKtJxGwJO8J+ZBJr89wfTLc1woW5rhtMM540H9h1w@mail.gmail.com>
	<CAOvdn6XUoSeGoo7X=AJ1kpDxZB9HZEv+TJ4v-=FHcyR4=CHK-w@mail.gmail.com>
	<502240F302000078000937A6@nat28.tlf.novell.com>
	<CAOvdn6WN3kxC8Bb9g6MpfLULu47EGSGCtajPc-SFeJ-8VSUQDQ@mail.gmail.com>
	<CAOvdn6Xk1inm1hU55i0phU2qvSWyJLNoyDdn43biBAY2OewPtw@mail.gmail.com>
	<5023F5400200007800093F4E@nat28.tlf.novell.com>
	<CAOvdn6U-f3C4U8xVPvDyRFkFFLkbfiCXg_n+9zgA9V5u+q5jfw@mail.gmail.com>
	<5023F8B80200007800093F8C@nat28.tlf.novell.com>
Date: Thu, 9 Aug 2012 12:09:10 -0400
X-Google-Sender-Auth: n6CVZB-Kb23O1teORZZCyCJWNEc
Message-ID: <CAOvdn6XKHfete_+=Hkvt20G7_cJ-4i8+9j9YD30OxzQ16Ru-+g@mail.gmail.com>
From: Ben Guthro <ben@guthro.net>
To: Jan Beulich <JBeulich@suse.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen4.2 S3 regression?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

iommu=no-intremap
This seems to work around the issue on this platform: across multiple
suspend/resume cycles, ahci came back just fine afterwards.

What is the downside to flipping this off?

iommu=off
This test behaved similarly to the above, also working around the issue.
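For reference, both of the options tested above are Xen hypervisor
command-line parameters. A minimal sketch of where they would be set on a
GRUB2 system (assuming a Debian/Ubuntu-style /etc/default/grub; the exact
variable and file layout may differ by distribution):

```shell
# /etc/default/grub fragment (sketch only, not a tested recipe).
# Keep the IOMMU enabled but disable interrupt remapping:
GRUB_CMDLINE_XEN="iommu=no-intremap"
# ...or disable the IOMMU altogether (note: this also disables PCI passthrough):
# GRUB_CMDLINE_XEN="iommu=off"
# Then regenerate the boot configuration (update-grub) and reboot.
```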


On Thu, Aug 9, 2012 at 11:51 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>>> On 09.08.12 at 17:46, Ben Guthro <ben@guthro.net> wrote:
>> On Thu, Aug 9, 2012 at 11:37 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>> One thing I think you didn't tell us so far is whether without
>>> interrupt remapping (or the IOMMU turned off altogether) the
>>> problem would also be observed.
>>
>> I assume you mean the xen cli param iommu=off for the second test here.
>> What parameter should I be flipping for the first?
>
> iommu=no-intremap
>
> Jan
>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 09 16:30:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 16:30:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzVcz-00016E-Ap; Thu, 09 Aug 2012 16:30:29 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andres@lagarcavilla.org>) id 1SzVcx-000169-F9
	for xen-devel@lists.xen.org; Thu, 09 Aug 2012 16:30:27 +0000
Received: from [85.158.143.99:62860] by server-1.bemta-4.messagelabs.com id
	1A/AF-20198-2A5E3205; Thu, 09 Aug 2012 16:30:26 +0000
X-Env-Sender: andres@lagarcavilla.org
X-Msg-Ref: server-9.tower-216.messagelabs.com!1344529803!27429429!1
X-Originating-IP: [208.97.132.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiAyMDguOTcuMTMyLjgxID0+IDMxNDUx\n,sa_preprocessor: 
	QmFkIElQOiAyMDguOTcuMTMyLjgxID0+IDMxNDUx\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4635 invoked from network); 9 Aug 2012 16:30:03 -0000
Received: from caiajhbdcaib.dreamhost.com (HELO homiemail-a21.g.dreamhost.com)
	(208.97.132.81) by server-9.tower-216.messagelabs.com with SMTP;
	9 Aug 2012 16:30:03 -0000
Received: from homiemail-a21.g.dreamhost.com (localhost [127.0.0.1])
	by homiemail-a21.g.dreamhost.com (Postfix) with ESMTP id 831A3300074;
	Thu,  9 Aug 2012 09:30:01 -0700 (PDT)
DomainKey-Signature: a=rsa-sha1; c=nofws; d=lagarcavilla.org; h=message-id
	:in-reply-to:references:date:subject:from:to:cc:reply-to
	:mime-version:content-type:content-transfer-encoding; q=dns; s=
	lagarcavilla.org; b=JuF9kKNCiPX/W3egmQ8LPDFbYbU3lbaFBlPxbJC5I2Yf
	QtpAdzMobijyqndnUoGXDCbs/OyxxoI/FcAG2xEQ0fiC/Km6+BZNvrv+dH2z/dJ+
	eK51MQ2dSCyOTo+YW7FLPckBAFH/tstakiirhwqfw+YwMYeL/fsFKcJrCWnbll8=
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed; d=lagarcavilla.org; h=
	message-id:in-reply-to:references:date:subject:from:to:cc
	:reply-to:mime-version:content-type:content-transfer-encoding;
	s=lagarcavilla.org; bh=49vEYZigKUdMC9hRI5LLBDwlq4E=; b=UUOMu0nR
	IwkC/Fuspa3R889DfY87dcL6dbkXESPEpsczip1T9fA6V04hlZEeog9CD8oezD02
	E6pzPyjt2p7y/VIAHqCG2msIGOkPvt28iRxJ9iQv0FqG4n+Yw6rcd0wse3PuyUhW
	w+bXBpeG8doM4ETAgd+AFD/PDhKS7To3oa8=
Received: from webmail.lagarcavilla.org (caiajhbihbdd.dreamhost.com
	[208.97.187.133]) (Authenticated sender: andres@lagarcavilla.com)
	by homiemail-a21.g.dreamhost.com (Postfix) with ESMTPA id EAECD300072; 
	Thu,  9 Aug 2012 09:30:00 -0700 (PDT)
Received: from 206.223.182.18 (proxying for 206.223.182.18)
	(SquirrelMail authenticated user andres@lagarcavilla.com)
	by webmail.lagarcavilla.org with HTTP; Thu, 9 Aug 2012 09:30:12 -0700
Message-ID: <a70961e11bf7f64f32519d3d5ef85fc6.squirrel@webmail.lagarcavilla.org>
In-Reply-To: <mailman.10477.1344525712.1399.xen-devel@lists.xen.org>
References: <mailman.10477.1344525712.1399.xen-devel@lists.xen.org>
Date: Thu, 9 Aug 2012 09:30:12 -0700
From: "Andres Lagar-Cavilla" <andres@lagarcavilla.org>
To: xen-devel@lists.xen.org
User-Agent: SquirrelMail/1.4.21
MIME-Version: 1.0
Cc: ian.jackson@citrix.com, tim@xen.org, ian.campbell@citrix.com,
	security@xen.org
Subject: Re: [Xen-devel] Xen Security Advisory 11 (CVE-2012-3433) - HVM
 destroy	p2m host DoS (Xen.org security team)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: andres@lagarcavilla.org
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I realize Gridcentric is neither a service provider, nor a "big vendor",
and therefore not on the pre-disclosure list.

However, this is a bug on which we have first-hand knowledge and ability
to immediately mitigate. In fact, I wrote equivalent code for 4.2/unstable
months ago.

I ignored the xen-devel discussion on pre-disclosure list (my bad), but
understand now that there may be some use to Gridcentric being in that
list.

Thanks
Andres

>
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA1
>
>             Xen Security Advisory CVE-2012-3433 / XSA-11
>                           version 3
>
> 	HVM guest destroy p2m teardown host DoS vulnerability
>
> UPDATES IN VERSION 3
> ====================
>
> Embargo ended Thursday 2012-08-09 12:00:00 UTC.
>
> ISSUE DESCRIPTION
> =================
>
> An HVM guest is able to manipulate its physical address space such
> that tearing down the guest takes an extended period of time
> searching for shared pages.
>
> This causes the domain 0 VCPU which tears down the domain to be
> blocked in the destroy hypercall. This causes that domain 0 VCPU to
> become unavailable and may cause the domain 0 kernel to panic.
>
> There is no requirement for memory sharing to be in use.
>
> IMPACT
> ======
>
> A guest kernel can cause the host to become unresponsive for a period
> of time, potentially leading to a DoS.
>
> VULNERABLE SYSTEMS
> ==================
>
> All systems running HVM guests with untrusted guest kernels.
>
> This vulnerability affects only Xen 4.0 and 4.1. Xen 3.4 and earlier
> and xen-unstable are not vulnerable.
>
> MITIGATION
> ==========
>
> This issue can be mitigated by running PV (para-virtualised) guests
> only, or by ensuring (inside the guest) that the kernel is
> trustworthy.
>
> RESOLUTION
> ==========
>
> Applying the appropriate attached patch will resolve the issue.
>
> NOTE REGARDING CVE
> ==================
>
> We do not yet have a CVE Candidate number for this vulnerability.
>
> PATCH INFORMATION
> =================
>
> The attached patches resolve this issue:
>
>  Xen 4.1, 4.1.x                              xsa11-4.1.patch
>  Xen 4.0, 4.0.x                              xsa11-4.0.patch
>
> $ sha256sum xsa11-*.patch
> c8ab767d831b20a1b22c69a28127303c89cf0379cbf6f1ba3acfda6240aa2a89
> xsa11-4.0.patch
> 61c6424023a26a8b4ea591d0bff6969908091a1a1e1304567d0d910908f21e8d
> xsa11-4.1.patch
> -----BEGIN PGP SIGNATURE-----
> Version: GnuPG v1.4.10 (GNU/Linux)
>
> iQEcBAEBAgAGBQJQI8/0AAoJEIP+FMlX6CvZ+fIH/R8w3J9KUiLiIai/QaA4xOjp
> rkvdR40b0GzcllDQEy9bUCvRY3QPz7DRza90vLvxCL9R5OnbkRtGJxdmbxjwmoVX
> zF03FLaFCd5ypFsTGAcxaUcxtOrt6Ut6R0i8GZp5BCkOV+UkNvu/uaOxL6N3UZ3w
> HfCm88EAWsWeJuShiG5jY3BhgCeR7b3GV9uXP0vG5Pa7cwPGvMnx/E6OsC/zEMG2
> 7yTX0/AI4qKMT9XtiA024vloN1mMlRgN74ZIBqmPuDv5ggv1wLFseARWueYMBn8Y
> aUDi97nJf+YWXIx+YwAmD0XLmJ/5tTAYvaV3B4vjMrfFc/plMKDvOqohVB+hv08=
> =l4LY
> -----END PGP SIGNATURE-----



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 09 16:40:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 16:40:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzVmc-0001Mm-NI; Thu, 09 Aug 2012 16:40:26 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1SzVmb-0001Mh-7C
	for xen-devel@lists.xen.org; Thu, 09 Aug 2012 16:40:25 +0000
Received: from [85.158.138.51:16877] by server-5.bemta-3.messagelabs.com id
	16/98-27557-8F7E3205; Thu, 09 Aug 2012 16:40:24 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-11.tower-174.messagelabs.com!1344530423!27465460!1
X-Originating-IP: [74.125.83.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8858 invoked from network); 9 Aug 2012 16:40:23 -0000
Received: from mail-ee0-f45.google.com (HELO mail-ee0-f45.google.com)
	(74.125.83.45)
	by server-11.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Aug 2012 16:40:23 -0000
Received: by eeke53 with SMTP id e53so226622eek.32
	for <xen-devel@lists.xen.org>; Thu, 09 Aug 2012 09:40:23 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=hqvwPO6tNKUcWog17MppopW6CewdDzYPoI/em72nUhA=;
	b=DmNmZ8DVF/EUAsCHYNa+rCiZPFOc8guV91A13gSKMquwmJjLT6CyOKYUGKJ8CVKzug
	yH28NPgyJOIpc+7oZWNlI0Tu/wDd8dkLD3f4fQaQF+ryQTB7GlTnbKzHeluMZNPlWRtf
	SEfecWjwSBnmpeeGXNsZ2nfmvhLKkRCA87aUOlZFf3VJcqFpRZKYu50+52yTGAbvpXMg
	Uqn0Xg7Si5tcfZWY34RROoUPhfCISjqG6MFTe7SirrMRe2Nx7Ih+tHTNamCC9yFCN0J/
	TYb+vK6BHYBgyGDpDSIshY6WA2/ph2SvtGxfxV9T3mjQ+Y4A6QAgBRcp4V/DrkDLXFvS
	mHdQ==
MIME-Version: 1.0
Received: by 10.14.179.71 with SMTP id g47mr28720995eem.21.1344530423098; Thu,
	09 Aug 2012 09:40:23 -0700 (PDT)
Received: by 10.14.213.131 with HTTP; Thu, 9 Aug 2012 09:40:23 -0700 (PDT)
In-Reply-To: <a70961e11bf7f64f32519d3d5ef85fc6.squirrel@webmail.lagarcavilla.org>
References: <mailman.10477.1344525712.1399.xen-devel@lists.xen.org>
	<a70961e11bf7f64f32519d3d5ef85fc6.squirrel@webmail.lagarcavilla.org>
Date: Thu, 9 Aug 2012 17:40:23 +0100
X-Google-Sender-Auth: Mw9TJKybeMwb7mWtvPij7VKZINs
Message-ID: <CAFLBxZYLPn_ttqP6DDiqgrVDirjnKmroANHrF5HB1-XG7pbPPA@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: andres@lagarcavilla.org
Cc: ian.jackson@citrix.com, security@xen.org, tim@xen.org,
	ian.campbell@citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen Security Advisory 11 (CVE-2012-3433) - HVM
 destroy p2m host DoS (Xen.org security team)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Aug 9, 2012 at 5:30 PM, Andres Lagar-Cavilla
<andres@lagarcavilla.org> wrote:
> I realize Gridcentric is neither a service provider, nor a "big vendor",
> and therefore not on the pre-disclosure list.
>
> However, this is a bug on which we have first-hand knowledge and ability
> to immediately mitigate. In fact, I wrote equivalent code for 4.2/unstable
> months ago.

I don't quite understand -- are you saying you could have helped craft
a fix?  Or are you saying that you would like to be on the list for
your customers' sake?

> I ignored the xen-devel discussion on pre-disclosure list (my bad), but
> understand now that there may be some use to Gridcentric being in that
> list.

The discussion has not concluded yet; you can even still express your
voice in the "poll" here:

http://xen.org/polls/xen_dev_2012_security_process.html

It would probably be good to take a look at the discussion before
answering; at least my recent posts describing the various options and
the criteria to judge them by. :-)

Peace,
 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 09 16:41:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 16:41:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzVnf-0001PN-5c; Thu, 09 Aug 2012 16:41:31 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1SzVnd-0001P5-Nw
	for xen-devel@lists.xen.org; Thu, 09 Aug 2012 16:41:30 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1344530481!6402762!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjgxMDE3\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14149 invoked from network); 9 Aug 2012 16:41:22 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-14.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 9 Aug 2012 16:41:22 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q79GfFuF024403
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 9 Aug 2012 16:41:16 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q79GfEbh008620
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 9 Aug 2012 16:41:15 GMT
Received: from abhmt120.oracle.com (abhmt120.oracle.com [141.146.116.72])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q79GfEUo008054; Thu, 9 Aug 2012 11:41:14 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 09 Aug 2012 09:41:14 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 800D94211A; Thu,  9 Aug 2012 12:31:43 -0400 (EDT)
Date: Thu, 9 Aug 2012 12:31:43 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Matthew Dean <mcd40@cam.ac.uk>
Message-ID: <20120809163143.GB4540@phenom.dumpdata.com>
References: <50226909.9090804@cam.ac.uk>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50226909.9090804@cam.ac.uk>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] GPU passthrough with Xen 4.2 on Ubuntu 12.04
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Aug 08, 2012 at 02:26:33PM +0100, Matthew Dean wrote:
> I have been trying to setup GPU passthrough for a couple of days now
> with little luck.  I'm hoping someone can shed some light as to
> where I may be going wrong or at least identify some genuine bugs.
> Essentially pci passthrough works for me but gpu passthrough
> doesn't.
> 
> My system is currently configured as follows (please ask if you need
> further details)
> 
> Asrock Z77 e-Itx (stock bios, not sure the version but I can find
> out.  vt-d is enabled)
> i7 3770S
> AMD Radeon HD 7750
> 
> Dom0 - Ubuntu server 12.04 with desktop environment and the
> following extra packages installed before the build
> 
> build-essential zlib1g-dev python-dev libncurses5-dev libssl-dev
> libx11-dev uuid-dev libyajl-dev libaio-dev libglib2.0-dev pkg-config
> bridge-utils iproute udev bison flex gettext bin86 bcc iasl markdown
> ocaml-nox ocaml-findlib git gcc-multilib texinfo pciutils-dev gawk
> libcurl3 libcurl4-openssl-dev bzip2 module-init-tools transfig tgif
> texlive-latex-base texlive-latex-recommended texlive-fonts-extra
> texlive-fonts-recommended mercurial make gcc libc6-dev python
> python-twisted patch libsdl-dev libjpeg62-dev libbz2-dev
> e2fslibs-dev git-core xz-utils liblzma-dev liblzo2-dev
> libvncserver-dev
> 
> I've used the stock Ubuntu provided kernel which is version
> 3.2.0-27-generic.  Dom0 is connected to a display via the intel
> integrated graphics.
> 
> xen 4.2 has been built from the xen-unstable tag 4.2.0-rc1 changeset
> 25689 using the following commands
> 
> ./configure
> make world
> make install
> 
> I had to manually add a line to /etc/fstab to get /proc/xen to mount
> on startup
> Modules xen-evtchn, xen-gntdev and xen-pciback were set up to load on
> boot in /etc/modules
> I've then set up xencommons to start on boot: "update-rc.d xencommons
> defaults 19 18"
> 
> Grub2 has then been setup to automatically boot the xen kernel. I've
> also had to add the option xsave=0 to the xen boot command line to
> get things to boot
> 
> On restart everything looks good. "xl list" returns the Dom0 domain
> only.  "cat /proc/xen/capabilities" returns control_d.  I've created
> a windows 7 domU without any pci passthrough and successfully
> installed windows 7 ultimate.  The config file looks like:
> 
> builder='hvm'
> vcpus = 4
> memory = 8192
> shadow_memory = 48
> name = "xenwin7"
> vif = [ 'bridge=br0' ]
> acpi = 1
> apic = 1
> disk = [ 'file:/usr/local/xenImages/xenwin7.img,hda,w']
> boot="c"
> sdl=0
> vnc=1
> vncconsole=1
> vncpasswd=''
> viridian=1
> usb=1
> serial='pty'
> usbdevice='tablet'
> on_poweroff="destroy"
> on_reboot="restart"
> on_crash="destroy"
> 
> I would like to be able to pass through the HD 7750 and a USB
> controller to the VM.  I see from lspci there are three devices I
> need to pass through, two for the gpu (the card itself and the hd
> audio for hdmi device) and one for the USB controller.  The
> bus/device ids are:
> 
> 0000:01:00.0
> 0000:01:00.1
> 0000:00:14.0
> 
> There are other usb controllers but I would like to leave them with
> dom0.  I can successfully configure the devices for passthrough
> using "xl pci-assignable-add".  In the config I then add the line:
> 
>  pci=["01:00.0","01:00.1","00:14.0"]
> 
> When I boot the VM everything seems to work fine.  Windows picks up
> all three devices and I can install drivers for them.  The USB
> controller works flawlessly and I can use an attached mouse and
> keyboard.  The radeon card requires a restart after which an
> attached display comes to life and I have 3D acceleration.
> Restarting the VM seems to work OK.  If however I shut down the VM I
> have no end of problems.  On any subsequent startup the vm struggles
> to get past the windows splash screen, waiting much longer than
> usual.  During this period dom0 is sluggish regarding mouse and
> keyboard input even though cpu and memory usage are very low.  When
> windows finally loads I have no active display and I have to view
> the VM via VNC.  In device manage I find that windows has disabled
> the GPU saying there are not enough resources to run the card.  From
> this point onwards I can do nothing to get the gpu working again
> aside from removing the device, manually deleting the drivers, and
> starting again.
> 
> Note that at this point I am only trying to do secondary passthrough
> though I would ideally like to get to the point of doing primary
> passthrough.  Adding the line gfx_passthru=1 at any point in all
> this to the machine config however just prevents the VM from booting
> entirely; when I VNC in all I get is a qemu prompt and I never get


I am not sure if this the same problem I've been seeing is but every time I
restart a Windows VM with GPU passthrough it crashed. I found that if "Eject"
the GPU before shutting down it would work just great the next time I started
it.

Try that and see if that works. This is btw with an unmodified Xen 4.1 and
Fedora's 3.4.something kernel.

> any ouput to the real display.
> 
> Any suggestions as to how to get this to work would be greatly
> appreciated as I've hit a bit of a brick wall.  I should also say
> that I have managed to get secondary passthrough working using
> Debian Wheezy and the repository version of xen 4.1.  In that case
> though Dom0 didn't boot reliably.
> 
> Sincerely,
> 
> Matthew Dean

> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 09 16:41:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 16:41:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzVnf-0001PN-5c; Thu, 09 Aug 2012 16:41:31 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1SzVnd-0001P5-Nw
	for xen-devel@lists.xen.org; Thu, 09 Aug 2012 16:41:30 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1344530481!6402762!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjgxMDE3\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14149 invoked from network); 9 Aug 2012 16:41:22 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-14.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 9 Aug 2012 16:41:22 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q79GfFuF024403
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 9 Aug 2012 16:41:16 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q79GfEbh008620
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 9 Aug 2012 16:41:15 GMT
Received: from abhmt120.oracle.com (abhmt120.oracle.com [141.146.116.72])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q79GfEUo008054; Thu, 9 Aug 2012 11:41:14 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 09 Aug 2012 09:41:14 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 800D94211A; Thu,  9 Aug 2012 12:31:43 -0400 (EDT)
Date: Thu, 9 Aug 2012 12:31:43 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Matthew Dean <mcd40@cam.ac.uk>
Message-ID: <20120809163143.GB4540@phenom.dumpdata.com>
References: <50226909.9090804@cam.ac.uk>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50226909.9090804@cam.ac.uk>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] GPU passthrough with Xen 4.2 on Ubuntu 12.04
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Aug 08, 2012 at 02:26:33PM +0100, Matthew Dean wrote:
> I have been trying to set up GPU passthrough for a couple of days now
> with little luck.  I'm hoping someone can shed some light as to
> where I may be going wrong or at least identify some genuine bugs.
> Essentially pci passthrough works for me but gpu passthrough
> doesn't.
> 
> My system is currently configured as follows (please ask if you need
> further details)
> 
> Asrock Z77 e-Itx (stock bios, not sure the version but I can find
> out.  vt-d is enabled)
> i7 3770S
> AMD Radeon HD 7750
> 
> Dom0 - Ubuntu server 12.04 with desktop environment and the
> following extra packages installed before the build
> 
> build-essential zlib1g-dev python-dev libncurses5-dev libssl-dev
> libx11-dev uuid-dev libyajl-dev libaio-dev libglib2.0-dev pkg-config
> bridge-utils iproute udev bison flex gettext bin86 bcc iasl markdown
> ocaml-nox ocaml-findlib git gcc-multilib texinfo pciutils-dev gawk
> libcurl3 libcurl4-openssl-dev bzip2 module-init-tools transfig tgif
> texlive-latex-base texlive-latex-recommended texlive-fonts-extra
> texlive-fonts-recommended mercurial make gcc libc6-dev python
> python-twisted patch libsdl-dev libjpeg62-dev libbz2-dev
> e2fslibs-dev git-core xz-utils liblzma-dev liblzo2-dev
> libvncserver-dev
> 
> I've used the stock Ubuntu provided kernel which is version
> 3.2.0-27-generic.  Dom0 is connected to a display via the intel
> integrated graphics.
> 
> xen 4.2 has been built from xen unstable tag 4.2.0-rc1 changeset
> 25689 using the following commands
> 
> ./configure
> make world
> make install
> 
> I had to manually add a line to /etc/fstab to get /proc/xen to mount
> on startup.
> The modules xen-evtchn, xen-gntdev and xen-pciback were set up to load
> on boot in /etc/modules.
> I've then set up xencommons to start on boot: "update-rc.d xencommons
> defaults 19 18"
> 
> Grub2 has then been set up to automatically boot the Xen kernel. I've
> also had to add the option xsave=0 to the Xen boot command line to
> get things to boot.
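
[Editor's note: those boot-time pieces spelled out as a sketch; file locations are the standard Debian/Ubuntu ones, and the exact fstab fields are an assumption since the post doesn't quote its line.]

```shell
# /etc/fstab -- mount xenfs so /proc/xen exists at boot (assumed fields):
#   none  /proc/xen  xenfs  defaults  0  0

# /etc/modules -- backend modules to load at boot:
#   xen-evtchn
#   xen-gntdev
#   xen-pciback

# Init-script registration, then the hypervisor option:
#   update-rc.d xencommons defaults 19 18
#   update-grub   # after appending xsave=0 to the xen.gz line in grub
```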
> 
> On restart everything looks good. "xl list" returns the Dom0 domain
> only.  "cat /proc/xen/capabilities" returns control_d.  I've created
> a windows 7 domU without any pci passthrough and successfully
> installed windows 7 ultimate.  The config file looks like:
> 
> builder='hvm'
> vcpus = 4
> memory = 8192
> shadow_memory = 48
> name = "xenwin7"
> vif = [ 'bridge=br0' ]
> acpi = 1
> apic = 1
> disk = [ 'file:/usr/local/xenImages/xenwin7.img,hda,w']
> boot="c"
> sdl=0
> vnc=1
> vncconsole=1
> vncpasswd=''
> viridian=1
> usb=1
> serial='pty'
> usbdevice='tablet'
> on_poweroff="destroy"
> on_reboot="restart"
> on_crash="destroy"
> 
> I would like to be able to pass through the HD 7750 and a USB
> controller to the VM.  I see from lspci there are three devices I
> need to pass through, two for the gpu (the card itself and the hd
> audio for hdmi device) and one for the USB controller.  The
> bus/device ids are:
> 
> 0000:01:00.0
> 0000:01:00.1
> 0000:00:14.0
> 
> There are other usb controllers but I would like to leave them with
> dom0.  I can successfully configure the devices for passthrough
> using "xl pci-assignable-add".  In the config I then add the line:
> 
>  pci=["01:00.0","01:00.1","00:14.0"]
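
[Editor's note: the assignment steps as a sketch, plus a hypothetical helper that derives that config line from the full BDFs. The BDFs are from the post; the `xl pci-assignable-add` calls need root and a loaded xen-pciback, so they are only printed here.]

```shell
# Devices to hand over to pciback, as reported by lspci (full BDFs).
bdfs="0000:01:00.0 0000:01:00.1 0000:00:14.0"

# Commands to make each function assignable (printed, not executed here):
for bdf in $bdfs; do
    echo "xl pci-assignable-add $bdf"
done

# Derive the pci=[...] line for the guest config; xl also accepts the
# short form without the leading 0000: domain segment.
pci_line='pci=['
for bdf in $bdfs; do
    pci_line="$pci_line\"${bdf#0000:}\","
done
pci_line="${pci_line%,}]"
echo "$pci_line"
```

Checking `xl pci-assignable-list` afterwards confirms which functions pciback actually owns.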
> 
> When I boot the VM everything seems to work fine.  Windows picks up
> all three devices and I can install drivers for them.  The USB
> controller works flawlessly and I can use an attached mouse and
> keyboard.  The radeon card requires a restart after which an
> attached display comes to life and I have 3D acceleration.
> Restarting the VM seems to work OK.  If however I shut down the VM I
> have no end of problems.  On any subsequent startup the vm struggles
> to get past the windows splash screen, waiting much longer than
> usual.  During this period dom0 is sluggish regarding mouse and
> keyboard input even though cpu and memory usage are very low.  When
> windows finally loads I have no active display and I have to view
> the VM via VNC.  In device manager I find that windows has disabled
> the GPU saying there are not enough resources to run the card.  From
> this point onwards I can do nothing to get the gpu working again
> aside from removing the device, manually deleting the drivers, and
> starting again.
> 
> Note that at this point I am only trying to do secondary passthrough
> though I would ideally like to get to the point of doing primary
> passthrough.  Adding the line gfx_passthru=1 at any point in all
> this to the machine config however just prevents the VM from booting
> entirely; when I VNC in all I get is a qemu prompt and I never get


I am not sure if this is the same problem I've been seeing, but every time I
restart a Windows VM with GPU passthrough it crashes. I found that if I "Eject"
the GPU before shutting down, it works just fine the next time I start
it.

Try that and see if it works. This is btw with an unmodified Xen 4.1 and
Fedora's 3.4.something kernel.
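
[Editor's note: a scriptable stand-in for that manual in-guest Eject, assuming the guest tolerates PCI hot-unplug, would be detaching the functions with `xl pci-detach` before requesting shutdown. A sketch that only builds the command plan; domain name and BDFs taken from this thread.]

```shell
# Build the detach-then-shutdown plan; run the printed commands as root.
dom=xenwin7
plan=""
for bdf in 01:00.0 01:00.1 00:14.0; do
    plan="${plan}xl pci-detach $dom $bdf
"
done
plan="${plan}xl shutdown $dom"
printf '%s\n' "$plan"
```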

> any output to the real display.
> 
> Any suggestions as to how to get this to work would be greatly
> appreciated as I've hit a bit of a brick wall.  I should also say
> that I have managed to get secondary passthrough working using
> Debian Wheezy and the repository version of xen 4.1.  In that case
> though Dom0 didn't boot reliably.
> 
> Sincerely,
> 
> Matthew Dean


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 09 16:42:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 16:42:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzVoO-0001T6-JZ; Thu, 09 Aug 2012 16:42:16 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1SzVoM-0001Sl-On
	for xen-devel@lists.xen.org; Thu, 09 Aug 2012 16:42:14 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1344530527!9445862!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDY4ODYxMg==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11978 invoked from network); 9 Aug 2012 16:42:08 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-16.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 9 Aug 2012 16:42:08 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q79Gfs7c007917
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 9 Aug 2012 16:41:55 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q79GfrA4012420
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 9 Aug 2012 16:41:54 GMT
Received: from abhmt102.oracle.com (abhmt102.oracle.com [141.146.116.54])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q79Gfri6012291; Thu, 9 Aug 2012 11:41:53 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 09 Aug 2012 09:41:53 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 639F54211C; Thu,  9 Aug 2012 12:32:22 -0400 (EDT)
Date: Thu, 9 Aug 2012 12:32:22 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Pasi Kärkkäinen <pasik@iki.fi>
Message-ID: <20120809163222.GC4540@phenom.dumpdata.com>
References: <50226909.9090804@cam.ac.uk> <20120808134208.GI19851@reaktio.net>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20120808134208.GI19851@reaktio.net>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Matthew Dean <mcd40@cam.ac.uk>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] GPU passthrough with Xen 4.2 on Ubuntu 12.04
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Aug 08, 2012 at 04:42:08PM +0300, Pasi Kärkkäinen wrote:
> On Wed, Aug 08, 2012 at 02:26:33PM +0100, Matthew Dean wrote:
> >
> >    I've used the stock Ubuntu provided kernel which is version
> >    3.2.0-27-generic.  Dom0 is connected to a display via the intel
> >    integrated graphics.
> >
>
> You might want to try Linux 3.4.x or 3.5.x dom0 kernels as well.

The v3.5.x has some issues right now with PCI passthrough. Not sure
what - but trying to figure it out.

> I think there have been some fixes in xen-pciback.
>
> >
> >    When I boot the VM everything seems to work fine.  Windows picks up
> >    all three devices and I can install drivers for them.  The USB
> >    controller works flawlessly and I can use an attached mouse and
> >    keyboard.  The radeon card requires a restart after which an
> >    attached display comes to life and I have 3D acceleration.
> >    Restarting the VM seems to work OK.  If however I shut down the VM
> >    I have no end of problems.  On any subsequent startup the vm
> >    struggles to get past the windows splash screen, waiting much
> >    longer than usual.  During this period dom0 is sluggish regarding
> >    mouse and keyboard input even though cpu and memory usage are very
> >    low.  When windows finally loads I have no active display and I
> >    have to view the VM via VNC.  In device manager I find that windows
> >    has disabled the GPU saying there are not enough resources to run
> >    the card.  From this point onwards I can do nothing to get the gpu
> >    working again aside from removing the device, manually deleting the
> >    drivers, and starting again.
> >
>
> Do you get any errors from Xen (xl dmesg), or from dom0 kernel (dmesg)?
>
> Do you have a serial console?
>
> >    Note that at this point I am only trying to do secondary passthrough
> >    though I would ideally like to get to the point of doing primary
> >    passthrough.  Adding the line gfx_passthru=1 at any point in all
> >    this to the machine config however just prevents the VM from
> >    booting entirely; when I VNC in all I get is a qemu prompt and I
> >    never get any output to the real display.
> >
>
> AMD/ATI primary passthru requires extra patches to Xen qemu-dm,
> those are not included out-of-the-box yet.
>
> >    Any suggestions as to how to get this to work would be greatly
> >    appreciated as I've hit a bit of a brick wall.  I should also say
> >    that I have managed to get secondary passthrough working using
> >    Debian Wheezy and the repository version of xen 4.1.  In that case
> >    though Dom0 didn't boot reliably.
> >
>
> -- Pasi
>
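
[Editor's note: the information asked for above can be gathered along these lines. A sketch; the serial port settings are an assumption, though the option syntax is standard Xen command-line syntax.]

```shell
# After reproducing the slow boot / disabled-GPU state:
#   xl dmesg > xen-dmesg.txt     # hypervisor log (IOMMU/VT-d errors land here)
#   dmesg    > dom0-dmesg.txt    # dom0 kernel log (xen-pciback messages)
# For a serial console, add to the Xen command line, e.g.:
#   com1=115200,8n1 console=com1,vga
```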


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 09 16:45:12 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 16:45:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzVr0-0001gM-5D; Thu, 09 Aug 2012 16:44:58 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <andres@lagarcavilla.org>) id 1SzVqz-0001fr-CG
	for xen-devel@lists.xen.org; Thu, 09 Aug 2012 16:44:57 +0000
X-Env-Sender: andres@lagarcavilla.org
X-Msg-Ref: server-12.tower-27.messagelabs.com!1344530690!12390388!1
X-Originating-IP: [208.97.132.5]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA4Ljk3LjEzMi41ID0+IDI3Mzk2\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11101 invoked from network); 9 Aug 2012 16:44:51 -0000
Received: from mailbigip.dreamhost.com (HELO homiemail-a15.g.dreamhost.com)
	(208.97.132.5) by server-12.tower-27.messagelabs.com with SMTP;
	9 Aug 2012 16:44:51 -0000
Received: from homiemail-a15.g.dreamhost.com (localhost [127.0.0.1])
	by homiemail-a15.g.dreamhost.com (Postfix) with ESMTP id 4995C76C072;
	Thu,  9 Aug 2012 09:44:50 -0700 (PDT)
DomainKey-Signature: a=rsa-sha1; c=nofws; d=lagarcavilla.org; h=message-id
	:in-reply-to:references:date:subject:from:to:cc:reply-to
	:mime-version:content-type:content-transfer-encoding; q=dns; s=
	lagarcavilla.org; b=OOVLLz9vwP7CVyy/SJ6D+A+60/3LycSi8UI4DGAkVqzD
	EZpBbNcdt5EDHKvh14c6GcnNJNsjWc+LsQEyJ0K9r4K04NTXsNbjSxorhbyee+H8
	w5k5nkjHkXds/cnTywGvrbC9164hJwfKfkhAjGFq7w/K/KLggU/99ivo+WTFFo0=
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed; d=lagarcavilla.org; h=
	message-id:in-reply-to:references:date:subject:from:to:cc
	:reply-to:mime-version:content-type:content-transfer-encoding;
	s=lagarcavilla.org; bh=eylB8uYINp9F+ULVwyQ3C6b/S+g=; b=exWwcVIM
	wJI65Bd5YX4BzcigD43GnjpIxp9TcXQDzuK85tdJljo+UnwSH+i/B0Tq1SJhAyCT
	YBg6Q8Z7QzBZv2CdOipBAoWP2QqEUM1fVeqcvs62dLbVMZX8K8gmgCaU667Ifo0n
	4lnS5qQGAgeoZAynq+9XI0WEArXzs4mBVi0=
Received: from webmail.lagarcavilla.org (caiajhbihbdd.dreamhost.com
	[208.97.187.133]) (Authenticated sender: andres@lagarcavilla.com)
	by homiemail-a15.g.dreamhost.com (Postfix) with ESMTPA id E09CA76C06E; 
	Thu,  9 Aug 2012 09:44:49 -0700 (PDT)
Received: from 206.223.182.18 (proxying for 206.223.182.18)
	(SquirrelMail authenticated user andres@lagarcavilla.com)
	by webmail.lagarcavilla.org with HTTP; Thu, 9 Aug 2012 09:44:41 -0700
Message-ID: <734b125f2e6fa635641ffc418e8e555b.squirrel@webmail.lagarcavilla.org>
In-Reply-To: <CAFLBxZYLPn_ttqP6DDiqgrVDirjnKmroANHrF5HB1-XG7pbPPA@mail.gmail.com>
References: <mailman.10477.1344525712.1399.xen-devel@lists.xen.org>
	<a70961e11bf7f64f32519d3d5ef85fc6.squirrel@webmail.lagarcavilla.org>
	<CAFLBxZYLPn_ttqP6DDiqgrVDirjnKmroANHrF5HB1-XG7pbPPA@mail.gmail.com>
Date: Thu, 9 Aug 2012 09:44:41 -0700
From: "Andres Lagar-Cavilla" <andres@lagarcavilla.org>
To: "George Dunlap" <George.Dunlap@eu.citrix.com>
User-Agent: SquirrelMail/1.4.21
MIME-Version: 1.0
Cc: ian.jackson@citrix.com, security@xen.org, tim@xen.org,
	ian.campbell@citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen Security Advisory 11 (CVE-2012-3433) - HVM
 destroy p2m host DoS (Xen.org security team)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: andres@lagarcavilla.org
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> On Thu, Aug 9, 2012 at 5:30 PM, Andres Lagar-Cavilla
> <andres@lagarcavilla.org> wrote:
>> I realize Gridcentric is neither a service provider, nor a "big vendor",
>> and therefore not on the pre-disclosure list.
>>
>> However, this is a bug on which we have first-hand knowledge and ability
>> to immediately mitigate. In fact, I wrote equivalent code for
>> 4.2/unstable
>> months ago.
>
> I don't quite understand -- are you saying you could have helped craft
> a fix?  Or are you saying that you would like to be on the list for
> your customers' sake?

The former primarily. But ultimately both.

>
>> I ignored the xen-devel discussion on pre-disclosure list (my bad), but
>> understand now that there may be some use to Gridcentric being in that
>> list.
>
> The discussion has not concluded yet; you can even still express your
> voice in the "poll" here:
>
> http://xen.org/polls/xen_dev_2012_security_process.html
>
> It would probably be good to take a look at the discussion before
> answering; at least my recent posts describing the various options and
> the criteria to judge them by. :-)

Yes, that will take some serious grokking cycles. Thanks for the link.

Andres

>
> Peace,
>  -George
>



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

>> 4.2/unstable
>> months ago.
>
> I don't quite understand -- are you saying you could have helped craft
> a fix?  Or are you saying that you would like to be on the list for
> your customers' sake?

The former primarily. But ultimately both.

>
>> I ignored the xen-devel discussion on pre-disclosure list (my bad), but
>> understand now that there may be some use to Gridcentric being in that
>> list.
>
> The discussion has not concluded yet; you can even still express your
> voice in the "poll" here:
>
> http://xen.org/polls/xen_dev_2012_security_process.html
>
> It would probably be good to take a look at the discussion before
> answering; at least my recent posts describing the various options and
> the criteria to judge them by. :-)

Yes, that will take some serious grokking cycles. Thanks for the link.

Andres

>
> Peace,
>  -George
>



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 09 16:50:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 16:50:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzVwR-0001vP-UZ; Thu, 09 Aug 2012 16:50:35 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dave.martin@linaro.org>) id 1SzVwR-0001vH-2l
	for xen-devel@lists.xensource.com; Thu, 09 Aug 2012 16:50:35 +0000
X-Env-Sender: dave.martin@linaro.org
X-Msg-Ref: server-3.tower-27.messagelabs.com!1344531028!11752743!1
X-Originating-IP: [74.125.83.43]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15512 invoked from network); 9 Aug 2012 16:50:28 -0000
Received: from mail-ee0-f43.google.com (HELO mail-ee0-f43.google.com)
	(74.125.83.43)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Aug 2012 16:50:28 -0000
Received: by eekd4 with SMTP id d4so224418eek.30
	for <xen-devel@lists.xensource.com>;
	Thu, 09 Aug 2012 09:50:28 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=date:from:to:cc:subject:message-id:references:mime-version
	:content-type:content-disposition:in-reply-to:user-agent
	:x-gm-message-state;
	bh=sAxAH/vLWu9/ZVi4686ogYtHvkBUyF4Ag5O+24O13WE=;
	b=hGUMpx7VW35+7lf8Ny9vLL6+A/Gz2GO46MTvaasxxFyV6avKoYsfcmV/LeyoIlkq7s
	VSnK/O2+jhWernaKnNyb/l1EOjtX5beNoP33B05siAdOgzZ+nfMKsh8ukgkXQrlKEXX0
	YdysvnQNrIN+DvJZBTq1uggkxMzu/ug5k4LNn+h2Y+1QTVgHIlem7k6O4ghAS7243sBS
	eaPqcZNb2adCY+FTFfH00D6nCb/owLO7JVzbk6IZQa1A6QiXT0awgtftjrUfnMAZItbw
	fQu7dvEp+5rI/AbdNCv5W/Cy/FEfWOETor4oyRxB48HMQkA1mbpLw/8Qq4nYf8k/uPki
	CygA==
Received: by 10.14.209.129 with SMTP id s1mr18916885eeo.24.1344531027974;
	Thu, 09 Aug 2012 09:50:27 -0700 (PDT)
Received: from linaro.org (fw-lnat.cambridge.arm.com. [217.140.96.63])
	by mx.google.com with ESMTPS id 45sm4832664eed.17.2012.08.09.09.50.26
	(version=SSLv3 cipher=OTHER); Thu, 09 Aug 2012 09:50:27 -0700 (PDT)
Date: Thu, 9 Aug 2012 17:50:26 +0100
From: Dave Martin <dave.martin@linaro.org>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20120809165026.GC17588@linaro.org>
References: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
	<1344263246-28036-2-git-send-email-stefano.stabellini@eu.citrix.com>
	<20120808124111.GB2134@linaro.org>
	<alpine.DEB.2.02.1208091046330.21096@kaball.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.02.1208091046330.21096@kaball.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Gm-Message-State: ALoCoQlwgilL9Vl49NunBc1UIm7B2jFaOC1vcHpjzaCRSsELsoZvo06s/QINMVYQ9wE5VKZ+kr4E
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linaro-dev@lists.linaro.org" <linaro-dev@lists.linaro.org>,
	Ian Campbell <Ian.Campbell@citrix.com>, "arnd@arndb.de" <arnd@arndb.de>,
	"konrad.wilk@oracle.com" <konrad.wilk@oracle.com>,
	"catalin.marinas@arm.com" <catalin.marinas@arm.com>,
	"Tim \(Xen.org\)" <tim@xen.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [PATCH v2 02/23] xen/arm: hypercalls
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Aug 09, 2012 at 04:37:24PM +0100, Stefano Stabellini wrote:
> On Wed, 8 Aug 2012, Dave Martin wrote:
> > On Mon, Aug 06, 2012 at 03:27:05PM +0100, Stefano Stabellini wrote:
> > > Use r12 to pass the hypercall number to the hypervisor.
> > > 
> > > We need a register to pass the hypercall number because we might not
> > > know it at compile time and HVC only takes an immediate argument.
> > > 
> > > Among the available registers r12 seems to be the best choice because it
> > > is defined as "intra-procedure call scratch register".
> > > 
> > > Use the ISS to pass a hypervisor-specific tag.
> > > 
> > > Changes in v2:
> > > - define a HYPERCALL macro for 5-argument hypercall wrappers, even if
> > > at the moment it is unused;
> > > - use ldm instead of pop;
> > > - fix up comments.
> > > 
> > > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > > ---
> > >  arch/arm/include/asm/xen/hypercall.h |   50 ++++++++++++++++
> > >  arch/arm/xen/Makefile                |    2 +-
> > >  arch/arm/xen/hypercall.S             |  106 ++++++++++++++++++++++++++++++++++
> > >  3 files changed, 157 insertions(+), 1 deletions(-)
> > >  create mode 100644 arch/arm/include/asm/xen/hypercall.h
> > >  create mode 100644 arch/arm/xen/hypercall.S
> > 
> > [...]
> > 
> > > diff --git a/arch/arm/xen/hypercall.S b/arch/arm/xen/hypercall.S
> > > new file mode 100644
> > > index 0000000..074f5ed
> > > --- /dev/null
> > > +++ b/arch/arm/xen/hypercall.S
> > > @@ -0,0 +1,106 @@
> > > +/******************************************************************************
> > > + * hypercall.S
> > > + *
> > > + * Xen hypercall wrappers
> > > + *
> > > + * Stefano Stabellini <stefano.stabellini@eu.citrix.com>, Citrix, 2012
> > > + *
> > > + * This program is free software; you can redistribute it and/or
> > > + * modify it under the terms of the GNU General Public License version 2
> > > + * as published by the Free Software Foundation; or, when distributed
> > > + * separately from the Linux kernel or incorporated into other
> > > + * software packages, subject to the following license:
> > > + *
> > > + * Permission is hereby granted, free of charge, to any person obtaining a copy
> > > + * of this source file (the "Software"), to deal in the Software without
> > > + * restriction, including without limitation the rights to use, copy, modify,
> > > + * merge, publish, distribute, sublicense, and/or sell copies of the Software,
> > > + * and to permit persons to whom the Software is furnished to do so, subject to
> > > + * the following conditions:
> > > + *
> > > + * The above copyright notice and this permission notice shall be included in
> > > + * all copies or substantial portions of the Software.
> > > + *
> > > + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> > > + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> > > + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
> > > + * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> > > + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> > > + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
> > > + * IN THE SOFTWARE.
> > > + */
> > > +
> > > +/*
> > > + * The Xen hypercall calling convention is very similar to the ARM
> > > + * procedure calling convention: the first parameter is passed in r0, the
> > > + * second in r1, the third in r2 and the fourth in r3. Considering that
> > > + * Xen hypercalls have 5 arguments at most, the fifth parameter is passed
> > > + * in r4, differently from the procedure calling convention of using the
> > > + * stack for that case.
> > > + *
> > > + * The hypercall number is passed in r12.
> > > + *
> > > + * The return value is in r0.
> > > + *
> > > + * The hvc ISS is required to be 0xEA1, that is the Xen specific ARM
> > > + * hypercall tag.
> > > + */
> > > +
> > > +#include <linux/linkage.h>
> > > +#include <asm/assembler.h>
> > > +#include <xen/interface/xen.h>
> > > +
> > > +
> > > +/* HVC 0xEA1 */
> > > +#ifdef CONFIG_THUMB2_KERNEL
> > > +#define xen_hvc .word 0xf7e08ea1
> > > +#else
> > > +#define xen_hvc .word 0xe140ea71
> > > +#endif
> > 
> > Consider using my opcode injection helpers patch for this (see
> > separate repost: [PATCH v2 REPOST 0/4] ARM: opcodes: Facilitate custom
> > opcode injection), assuming that nobody objects to it.  This should mean
> > that the right opcodes get generated when building a kernel for a big-
> > endian target for example.
> > 
> > I believe the __HVC(imm) macro which I put in <asm/opcodes-virt.h> as an
> > example should do what you need in this case.
> 
> Sure I can do that. Maybe I'll add another patch at the end of my series
> to replace xen_hvc with __HVC(0xEA1), so that it remains independent
> from your series.
> I have learned through experience that avoiding cross-patch-series
> dependencies helps to reduce the amount of headaches during merge windows
> :)

I agree.  I'll let you know when my patch gets merged -- in the meantime,
it makes sense for you to keep your existing code.
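
For reference, the two magic .word constants in the quoted patch can be
cross-checked against the ARMv7 HVC instruction encodings (A1 for ARM state,
T1 for Thumb-2). A minimal sketch in Python, assuming the bit layouts from the
ARM Architecture Reference Manual; the helper names here are hypothetical:

```python
# Assumed bit layouts (ARM ARM, ARMv7-A virtualization extensions):
#   A1 (ARM state): 1110 0001 0100 imm12 0111 imm4
#   T1 (Thumb-2):   hw1 = 1111 0111 1110 imm4, hw2 = 1000 imm12
def hvc_arm(imm16):
    """Encode HVC #imm16 as a 32-bit A1 (ARM state) opcode."""
    return 0xE1400070 | ((imm16 >> 4) << 8) | (imm16 & 0xF)

def hvc_thumb2(imm16):
    """Encode HVC #imm16 as (hw1 << 16) | hw2, the T1 (Thumb-2) form."""
    hw1 = 0xF7E0 | (imm16 >> 12)       # high 4 bits of the immediate
    hw2 = 0x8000 | (imm16 & 0xFFF)     # low 12 bits of the immediate
    return (hw1 << 16) | hw2

# These reproduce the two constants as written in the patch for HVC #0xEA1:
assert hvc_arm(0xEA1) == 0xE140EA71
assert hvc_thumb2(0xEA1) == 0xF7E08EA1
```

Note that the Thumb-2 form is really two consecutive halfwords (0xF7E0, then
0x8EA1), so emitting it as a single little-endian .word glosses over halfword
ordering; that kind of subtlety is part of why a helper that understands
instruction endianness is attractive here.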

> 
> 
> > > +
> > > +#define HYPERCALL_SIMPLE(hypercall)		\
> > > +ENTRY(HYPERVISOR_##hypercall)			\
> > > +	mov r12, #__HYPERVISOR_##hypercall;	\
> > > +	xen_hvc;							\
> > > +	mov pc, lr;							\
> > > +ENDPROC(HYPERVISOR_##hypercall)
> > > +
> > > +#define HYPERCALL0 HYPERCALL_SIMPLE
> > > +#define HYPERCALL1 HYPERCALL_SIMPLE
> > > +#define HYPERCALL2 HYPERCALL_SIMPLE
> > > +#define HYPERCALL3 HYPERCALL_SIMPLE
> > > +#define HYPERCALL4 HYPERCALL_SIMPLE
> > > +
> > > +#define HYPERCALL5(hypercall)			\
> > > +ENTRY(HYPERVISOR_##hypercall)			\
> > > +	stmdb sp!, {r4}						\
> > > +	ldr r4, [sp, #4]					\
> > > +	mov r12, #__HYPERVISOR_##hypercall;	\
> > > +	xen_hvc								\
> > > +	ldm sp!, {r4}						\
> > > +	mov pc, lr							\
> > > +ENDPROC(HYPERVISOR_##hypercall)
> > > +
> > > +                .text
> > > +
> > > +HYPERCALL2(xen_version);
> > > +HYPERCALL3(console_io);
> > > +HYPERCALL3(grant_table_op);
> > > +HYPERCALL2(sched_op);
> > > +HYPERCALL2(event_channel_op);
> > > +HYPERCALL2(hvm_op);
> > > +HYPERCALL2(memory_op);
> > > +HYPERCALL2(physdev_op);
> > > +
> > > +ENTRY(privcmd_call)
> > > +	stmdb sp!, {r4}
> > > +	mov r12, r0
> > > +	mov r0, r1
> > > +	mov r1, r2
> > > +	mov r2, r3
> > > +	ldr r3, [sp, #8]
> > > +	ldr r4, [sp, #4]
> > > +	xen_hvc
> > > +	ldm sp!, {r4}
> > > +	mov pc, lr
> > 
> > Note that the preferred entry/exit sequences in such cases are:
> > 
> > 	stmfd	sp!, {r4,lr}
> > 	...
> > 	ldmfd	sp!, {r4,pc}
> > 
> > ...but it works either way.  I wouldn't bother to change it unless you
> > have other changes to make too.
> 
> Wouldn't this needlessly save and restore one more register (lr) to the
> stack?
> I would try to keep the hypercall wrappers as small as possible...

Argh, ignore me -- I was hallucinating for some reason that we actually
needed to save lr, but we don't.

Using the stmfd/ldmfd mnemonics might still be nicer than stmdb/ldm, since
the fd suffix makes the stack semantics more obvious, and the code
looks more symmetrical.  This was the conventional way to write these
mnemonics before the "push" and "pop" mnemonics existed.

That's purely cosmetic, though.

Cheers
---Dave

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 09 17:00:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 17:00:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzW5N-00026g-4j; Thu, 09 Aug 2012 16:59:49 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SzW5L-00026Z-AI
	for xen-devel@lists.xen.org; Thu, 09 Aug 2012 16:59:47 +0000
Received: from [85.158.143.99:13632] by server-1.bemta-4.messagelabs.com id
	37/7A-20198-28CE3205; Thu, 09 Aug 2012 16:59:46 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-8.tower-216.messagelabs.com!1344531584!17641806!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg2ODI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28094 invoked from network); 9 Aug 2012 16:59:45 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Aug 2012 16:59:45 -0000
X-IronPort-AV: E=Sophos;i="4.77,741,1336348800"; d="scan'208";a="13936258"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	09 Aug 2012 16:59:44 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 9 Aug 2012 17:59:44 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1SzW5I-0006Jj-GS; Thu, 09 Aug 2012 16:59:44 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1SzW5I-0006Bu-Aa;
	Thu, 09 Aug 2012 17:59:44 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20515.60544.166512.657044@mariner.uk.xensource.com>
Date: Thu, 9 Aug 2012 17:59:44 +0100
To: Andrew Cooper <andrew.cooper3@citrix.com>
In-Reply-To: <5022352D.7080607@citrix.com>
References: <501973F5.4030804@citrix.com>
	<20513.24085.195729.702722@mariner.uk.xensource.com>
	<5022352D.7080607@citrix.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] tools/makefile: Add build target
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Andrew Cooper writes ("Re: [Xen-devel] tools/makefile: Add build target"):
> The issue there is that app_main() is defined as __attribute__((weak)),
> and has two definitions in the code.
> 
> I have just tested and confirmed that minios (and therefore stubdom) is
> not -j safe.  As a result, I would argue that this build failure is not
> necessarily a barrier to entry for the patch itself; there are more
> issues which need fixing as well.

After struggling with testing a bit, -j appears to be a red herring.

The real bug is that if you say "make -C stubdom build" you end up
trying to run the "c-stubdom" target.  However that target (some kind
of test application?) is what breaks.  It's not built or used by
"make -C stubdom install" which is what you get if you say "make" at
the toplevel.

See patch below.

With that and your patch "make build" (and "make build -j4") works for
me.  I don't understand how it can have worked for you with just your
original patch; without mine, "make build" (without -j) fails in the
same way as I quote above.

Can you go into a bit more detail about the -j bug(s) in minios ?  I'd
be happy to take patches to add ".NOTPARALLEL:" in appropriate places.
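
For illustration, a minimal sketch of what ".NOTPARALLEL:" does, using a
hypothetical throwaway makefile rather than minios itself (assumes GNU make
is on the PATH):

```python
# Demo: GNU make's .NOTPARALLEL special target serializes a makefile
# even when make is invoked with -jN.
import pathlib
import subprocess
import tempfile

def run_demo():
    d = pathlib.Path(tempfile.mkdtemp())
    # Three targets whose recipes append their own name to order.log.
    (d / "Makefile").write_text(
        ".NOTPARALLEL:\n"
        "\n"
        "all: a b c\n"
        "\n"
        "a b c:\n"
        "\t@echo $@ >> order.log\n"
    )
    subprocess.run(["make", "-j8", "all"], cwd=d, check=True)
    return (d / "order.log").read_text().split()

print(run_demo())  # ['a', 'b', 'c'] in declaration order, despite -j8
```

One caveat: .NOTPARALLEL only affects the makefile it appears in; recursive
$(MAKE) invocations still inherit the parent's jobserver unless their own
makefiles carry the target too, so it would need to go in each non-parallel-safe
directory.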

> stubdom itself has further build issues regarding relative paths to
> configure scripts, which I was going to get around to fixing after some of
> my more important tasks.

Fixes welcome of course.

Ian.

# HG changeset patch
# Parent bcacd62f0460bd461f0df477e41f45144e19f2e8
stubdom: do not build "c-stubdom"

This does not build:

  /u/iwj/work/xen-unstable-tools.hg/stubdom/mini-os-x86_32-c/test.o: In function `app_main':
  /u/iwj/work/xen-unstable-tools.hg/extras/mini-os/test.c:441: multiple definition of `app_main'
  /u/iwj/work/xen-unstable-tools.hg/stubdom/mini-os-x86_32-c/main.o:/u/iwj/work/xen-unstable-tools.hg/extras/mini-os/main.c:187: first defined here

It's not built by the default toplevel "make all", which is how this
has gone unnoticed.  c-stubdom appears to be some kind of test
application; in any case it is not currently used.

Fixing this means that the contents of the stubdom build: target are
aligned with the contents of the install: target.  Now
"make -C stubdom build" only builds the things that would be built
(for installation) by "make -C stubdom install".

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>

diff -r bcacd62f0460 stubdom/Makefile
--- a/stubdom/Makefile	Tue Aug 07 19:14:31 2012 +0100
+++ b/stubdom/Makefile	Thu Aug 09 17:42:23 2012 +0100
@@ -81,7 +81,7 @@ CROSS_MAKE := $(MAKE) DESTDIR=
 .PHONY: all
 all: build
 ifeq ($(STUBDOM_SUPPORTED),1)
-build: genpath ioemu-stubdom c-stubdom pv-grub xenstore-stubdom
+build: genpath ioemu-stubdom pv-grub xenstore-stubdom
 else
 build: genpath
 endif

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 09 17:00:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 17:00:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzW5N-00026g-4j; Thu, 09 Aug 2012 16:59:49 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SzW5L-00026Z-AI
	for xen-devel@lists.xen.org; Thu, 09 Aug 2012 16:59:47 +0000
Received: from [85.158.143.99:13632] by server-1.bemta-4.messagelabs.com id
	37/7A-20198-28CE3205; Thu, 09 Aug 2012 16:59:46 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-8.tower-216.messagelabs.com!1344531584!17641806!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg2ODI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28094 invoked from network); 9 Aug 2012 16:59:45 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Aug 2012 16:59:45 -0000
X-IronPort-AV: E=Sophos;i="4.77,741,1336348800"; d="scan'208";a="13936258"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	09 Aug 2012 16:59:44 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 9 Aug 2012 17:59:44 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1SzW5I-0006Jj-GS; Thu, 09 Aug 2012 16:59:44 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1SzW5I-0006Bu-Aa;
	Thu, 09 Aug 2012 17:59:44 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20515.60544.166512.657044@mariner.uk.xensource.com>
Date: Thu, 9 Aug 2012 17:59:44 +0100
To: Andrew Cooper <andrew.cooper3@citrix.com>
In-Reply-To: <5022352D.7080607@citrix.com>
References: <501973F5.4030804@citrix.com>
	<20513.24085.195729.702722@mariner.uk.xensource.com>
	<5022352D.7080607@citrix.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] tools/makefile: Add build target
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Andrew Cooper writes ("Re: [Xen-devel] tools/makefile: Add build target"):
> The issue there is that app_main() is defined as __attribute__((weak)),
> and has two definitions in the code.
> 
> I have just tested and confirmed that minios (and therefore stubdom) is
> not -j safe.  As a result, I would argue that this build failure is not
> necessarily a barrier to entry for the patch itself; there are more
> issues which need fixing as well.

After struggling with testing a bit, -j appears to be a red herring.

The real bug is that if you say "make -C stubdom build" you end up
trying to run the "c-stubdom" target.  However that target (some kind
of test application?) is what breaks.  It's not built or used by
"make -C stubdom install" which is what you get if you say "make" at
the toplevel.

See patch below.

With that and your patch "make build" (and "make build -j4") works for
me.  I don't understand how it can have worked for you with just your
original patch; without mine, "make build" (without -j) fails in the
same way as I quote above.

Can you go into a bit more detail about the -j bug(s) in minios?  I'd
be happy to take patches to add ".NOTPARALLEL:" in appropriate places.
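
For reference, the special target in question is just the following GNU make fragment (shown here as an illustration, not as a patch against any particular mini-os Makefile):

```make
# GNU make special target: when this appears anywhere in a Makefile,
# that invocation of make runs its recipes serially even under -jN.
# Recursively invoked sub-makes still run in parallel unless their
# own Makefiles declare it too.
.NOTPARALLEL:
```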

> stubdom itself has further build issues regarding relative paths to
> configure scripts, which I was going to get around to fixing after some of
> my more important tasks.

Fixes welcome of course.

Ian.

# HG changeset patch
# Parent bcacd62f0460bd461f0df477e41f45144e19f2e8
stubdom: do not build "c-stubdom"

This does not build:

  /u/iwj/work/xen-unstable-tools.hg/stubdom/mini-os-x86_32-c/test.o: In function `app_main':
  /u/iwj/work/xen-unstable-tools.hg/extras/mini-os/test.c:441: multiple definition of `app_main'
  /u/iwj/work/xen-unstable-tools.hg/stubdom/mini-os-x86_32-c/main.o:/u/iwj/work/xen-unstable-tools.hg/extras/mini-os/main.c:187: first defined here

It's not built by the default toplevel "make all", which is how this
has gone unnoticed.  c-stubdom appears to be some kind of test
application; in any case it is not currently used.

Fixing this means that the contents of the stubdom build: target are
aligned with the contents of the install: target.  Now
"make -C stubdom build" only builds the things that would be built
(for installation) by "make -C stubdom install".

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>

diff -r bcacd62f0460 stubdom/Makefile
--- a/stubdom/Makefile	Tue Aug 07 19:14:31 2012 +0100
+++ b/stubdom/Makefile	Thu Aug 09 17:42:23 2012 +0100
@@ -81,7 +81,7 @@ CROSS_MAKE := $(MAKE) DESTDIR=
 .PHONY: all
 all: build
 ifeq ($(STUBDOM_SUPPORTED),1)
-build: genpath ioemu-stubdom c-stubdom pv-grub xenstore-stubdom
+build: genpath ioemu-stubdom pv-grub xenstore-stubdom
 else
 build: genpath
 endif

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 09 17:04:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 17:04:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzWA7-0002LC-Rd; Thu, 09 Aug 2012 17:04:43 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1SzWA7-0002L4-AP
	for xen-devel@lists.xensource.com; Thu, 09 Aug 2012 17:04:43 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1344531869!13130164!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjgxMDE3\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12311 invoked from network); 9 Aug 2012 17:04:31 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-2.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 9 Aug 2012 17:04:31 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q79H49Ku016130
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 9 Aug 2012 17:04:10 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q79H43lw008491
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 9 Aug 2012 17:04:04 GMT
Received: from abhmt115.oracle.com (abhmt115.oracle.com [141.146.116.67])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q79H3txg029684; Thu, 9 Aug 2012 12:03:55 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 09 Aug 2012 10:03:55 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 3A4D44211B; Thu,  9 Aug 2012 12:54:24 -0400 (EDT)
Date: Thu, 9 Aug 2012 12:54:24 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Mukesh Rathor <mukesh.rathor@oracle.com>
Message-ID: <20120809165424.GA7220@phenom.dumpdata.com>
References: <alpine.DEB.2.02.1208061428060.4645@kaball.uk.xensource.com>
	<1344263246-28036-10-git-send-email-stefano.stabellini@eu.citrix.com>
	<20120807182157.GN15053@phenom.dumpdata.com>
	<50216228.7010407@tycho.nsa.gov>
	<alpine.DEB.2.02.1208081743430.21096@kaball.uk.xensource.com>
	<50229B70.3090507@tycho.nsa.gov>
	<alpine.DEB.2.02.1208081805520.21096@kaball.uk.xensource.com>
	<5022A303.709@tycho.nsa.gov>
	<alpine.DEB.2.02.1208081836050.21096@kaball.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.02.1208081836050.21096@kaball.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linaro-dev@lists.linaro.org" <linaro-dev@lists.linaro.org>,
	Ian Campbell <Ian.Campbell@citrix.com>, "arnd@arndb.de" <arnd@arndb.de>,
	"catalin.marinas@arm.com" <catalin.marinas@arm.com>,
	"Tim \(Xen.org\)" <tim@xen.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [PATCH v2 10/23] xen/arm: compile and run xenbus
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> > Right, the original patch didn't break anything with PV domains. The case
> > it doesn't handle is an HVM initial domain with an already-running
> > Xenstore domain; I think this applies both to ARM and hybrid/PVH on x86.
> > In that case, usage would be set to LOCAL instead of HVM.
> 
> 
> Right, however if I am not mistaken there is no such thing as an HVM
> dom0 right now on x86 and hybrid/PVH is probably going to return true on
> xen_pv_domain() and false on xen_hvm_domain().

The other way around: for hybrid/PVH, xen_hvm_domain() should return
true and xen_pv_domain() false.

Mukesh, please correct me if I am wrong.
> 
> In the ARM case, given that we don't have a start_info page, we would
> need another way to figure out whether a xenstore stub domain is already
> running, so I think we can just postpone the solution of that problem
> for now.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 09 17:15:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 17:15:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzWKj-0002WF-09; Thu, 09 Aug 2012 17:15:41 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1SzWKi-0002WA-8p
	for xen-devel@lists.xen.org; Thu, 09 Aug 2012 17:15:40 +0000
Received: from [85.158.138.51:18088] by server-10.bemta-3.messagelabs.com id
	19/50-07905-B30F3205; Thu, 09 Aug 2012 17:15:39 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-5.tower-174.messagelabs.com!1344532538!27549866!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14751 invoked from network); 9 Aug 2012 17:15:38 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-5.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 9 Aug 2012 17:15:38 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1SzWKc-0005mG-S8; Thu, 09 Aug 2012 17:15:34 +0000
Date: Thu, 9 Aug 2012 18:15:34 +0100
From: Tim Deegan <tim@xen.org>
To: Andres Lagar-Cavilla <andres@lagarcavilla.org>
Message-ID: <20120809171534.GA20963@ocelot.phlegethon.org>
References: <mailman.10477.1344525712.1399.xen-devel@lists.xen.org>
	<a70961e11bf7f64f32519d3d5ef85fc6.squirrel@webmail.lagarcavilla.org>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <a70961e11bf7f64f32519d3d5ef85fc6.squirrel@webmail.lagarcavilla.org>
User-Agent: Mutt/1.4.2.1i
Cc: ian.jackson@citrix.com, security@xen.org, ian.campbell@citrix.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen Security Advisory 11 (CVE-2012-3433) - HVM
	destroy	p2m host DoS (Xen.org security team)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 09:30 -0700 on 09 Aug (1344504612), Andres Lagar-Cavilla wrote:
> I realize Gridcentric is neither a service provider, nor a "big vendor",
> and therefore not on the pre-disclosure list.
> 
> However, this is a bug on which we have first-hand knowledge and ability
> to immediately mitigate. In fact, I wrote equivalent code for 4.2/unstable
> months ago.

For which, thank you -- your patch, and the description of it at the
time, made drafting this response much easier!

> I ignored the xen-devel discussion on the pre-disclosure list (my bad), but
> understand now that there may be some use to Gridcentric being in that
> list.

If you mean helping draft a fix, being on the pre-disclosure list
wouldn't have made a difference (unless you see a problem with the
published fix), as that was all done before pre-disclosure.

As to whether GridCentric ought to be on the pre-disclosure list as a
downstream vendor, now is definitely the time to speak up in the
discussion of what the new policy should be.

Cheers,

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 09 18:14:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 18:14:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzXFR-0003GZ-8i; Thu, 09 Aug 2012 18:14:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1SzXFP-0003GU-Jc
	for xen-devel@lists.xen.org; Thu, 09 Aug 2012 18:14:15 +0000
Received: from [85.158.143.99:36948] by server-2.bemta-4.messagelabs.com id
	77/30-19021-6FDF3205; Thu, 09 Aug 2012 18:14:14 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1344536053!26857512!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDY4ODYxMg==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21613 invoked from network); 9 Aug 2012 18:14:14 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-3.tower-216.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 9 Aug 2012 18:14:14 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q79IE9Cv005826
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 9 Aug 2012 18:14:10 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q79IE8Zm018440
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 9 Aug 2012 18:14:08 GMT
Received: from abhmt114.oracle.com (abhmt114.oracle.com [141.146.116.66])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q79IE6nb019038; Thu, 9 Aug 2012 13:14:08 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 09 Aug 2012 11:14:06 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 3BB4D42120; Thu,  9 Aug 2012 14:04:33 -0400 (EDT)
Date: Thu, 9 Aug 2012 14:04:33 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Thanos Makatos <thanos.makatos@citrix.com>
Message-ID: <20120809180433.GA14457@phenom.dumpdata.com>
References: <4B45B535F7F6BE4CB1C044ED5115CDDE0107F6C1D44C@LONPMAILBOX01.citrite.net>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <4B45B535F7F6BE4CB1C044ED5115CDDE0107F6C1D44C@LONPMAILBOX01.citrite.net>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] RFC: blktap3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Aug 09, 2012 at 03:03:06PM +0100, Thanos Makatos wrote:
> Hi,
> 
> I'd like to introduce blktap3: essentially blktap2 without the need for blkback. This has been developed by Santosh Jodh, and I'll maintain it.
> 

So where is the source of this driver located?
> In this patch, blktap2 binaries are suffixed with "2", so it's not yet possible to use it along with blktap3.
> 
> An example configuration file I used is the following:
> name = "debian bktap3 without pygrub"
> memory = 256
> disk = ['backendtype=xenio,format=vhd,vdev=xvda,access=rw,target=/root/debian-blktap3.vhd']
> kernel = "vmlinuz-2.6.32-5-amd64"
> root = '/dev/xvda1'
> ramdisk = "initrd.img-2.6.32-5-amd64"
> cpu_weight=256
> vif=['bridge=xenbr0']
> 
> Before starting any blktap3 VM, the xenio daemon must be started.
> 
> I've tested it on changeset 472fc515a463 without pygrub.
> 
> Any comments are welcome :)
> 
> --
> Thanos Makatos
> 


> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 09 18:14:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 18:14:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzXFR-0003GZ-8i; Thu, 09 Aug 2012 18:14:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1SzXFP-0003GU-Jc
	for xen-devel@lists.xen.org; Thu, 09 Aug 2012 18:14:15 +0000
Received: from [85.158.143.99:36948] by server-2.bemta-4.messagelabs.com id
	77/30-19021-6FDF3205; Thu, 09 Aug 2012 18:14:14 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1344536053!26857512!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDY4ODYxMg==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21613 invoked from network); 9 Aug 2012 18:14:14 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-3.tower-216.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 9 Aug 2012 18:14:14 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q79IE9Cv005826
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 9 Aug 2012 18:14:10 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q79IE8Zm018440
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 9 Aug 2012 18:14:08 GMT
Received: from abhmt114.oracle.com (abhmt114.oracle.com [141.146.116.66])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q79IE6nb019038; Thu, 9 Aug 2012 13:14:08 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 09 Aug 2012 11:14:06 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 3BB4D42120; Thu,  9 Aug 2012 14:04:33 -0400 (EDT)
Date: Thu, 9 Aug 2012 14:04:33 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Thanos Makatos <thanos.makatos@citrix.com>
Message-ID: <20120809180433.GA14457@phenom.dumpdata.com>
References: <4B45B535F7F6BE4CB1C044ED5115CDDE0107F6C1D44C@LONPMAILBOX01.citrite.net>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <4B45B535F7F6BE4CB1C044ED5115CDDE0107F6C1D44C@LONPMAILBOX01.citrite.net>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] RFC: blktap3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Aug 09, 2012 at 03:03:06PM +0100, Thanos Makatos wrote:
> Hi,
> 
> I'd like to introduce blktap3: essentially blktap2 without the need for blkback. This has been developed by Santosh Jodh, and I'll maintain it.
> 

So where is the source of this driver located?
> In this patch, blktap2 binaries are suffixed with "2", so it's not yet possible to use it along with blktap3.
> 
> An example configuration file I used is the following:
> name = "debian bktap3 without pygrub"
> memory = 256
> disk = ['backendtype=xenio,format=vhd,vdev=xvda,access=rw,target=/root/debian-blktap3.vhd']
> kernel = "vmlinuz-2.6.32-5-amd64"
> root = '/dev/xvda1'
> ramdisk = "initrd.img-2.6.32-5-amd64"
> cpu_weight=256
> vif=['bridge=xenbr0']
> 
> Before starting any blktap3 VM, the xenio daemon must be started.
> 
> I've tested it on changeset 472fc515a463 without pygrub.
> 
> Any comments are welcome :)
> 
> --
> Thanos Makatos
> 




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 09 18:19:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 18:19:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzXKU-0003OC-1U; Thu, 09 Aug 2012 18:19:30 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SzXKS-0003O5-Oq
	for xen-devel@lists.xensource.com; Thu, 09 Aug 2012 18:19:29 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1344536362!6416154!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg2ODI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5936 invoked from network); 9 Aug 2012 18:19:22 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Aug 2012 18:19:22 -0000
X-IronPort-AV: E=Sophos;i="4.77,741,1336348800"; d="scan'208";a="13937255"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	09 Aug 2012 18:19:22 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 9 Aug 2012 19:19:22 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1SzXKM-0006uN-AD;
	Thu, 09 Aug 2012 18:19:22 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1SzXKM-0000Ja-4J;
	Thu, 09 Aug 2012 19:19:22 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13575-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 9 Aug 2012 19:19:22 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.0-testing test] 13575: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13575 xen-4.0-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13575/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386                    4 xen-build                 fail REGR. vs. 13528
 build-i386-oldkern            4 xen-build                 fail REGR. vs. 13528
 build-amd64-oldkern           4 xen-build                 fail REGR. vs. 13528
 build-amd64                   4 xen-build                 fail REGR. vs. 13528

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  228e6f382d5d
baseline version:
 xen                  6d7ae840463c

------------------------------------------------------------
People who touched revisions under test:
  David Vrabel <david.vrabel@citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Joe Jin <joe.jin@oracle.com>
  Keir Fraser <keir@xen.org>
  Olaf Hering <olaf@aepfle.de>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  fail    
 build-i386                                                   fail    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   21612:228e6f382d5d
tag:         tip
user:        Keir Fraser <keir@xen.org>
date:        Thu Aug 09 16:48:19 2012 +0100
    
    Added signature for changeset 8ea28053de39
    
    
changeset:   21611:abf8c57178aa
user:        Keir Fraser <keir@xen.org>
date:        Thu Aug 09 16:47:49 2012 +0100
    
    Added tag RELEASE-4.0.4 for changeset 8ea28053de39
    
    
changeset:   21610:8ea28053de39
tag:         RELEASE-4.0.4
user:        Keir Fraser <keir@xen.org>
date:        Thu Aug 09 16:47:23 2012 +0100
    
    Update Xen version to 4.0.4
    
    
changeset:   21609:2bd0027ba0d1
user:        David Vrabel <david.vrabel@citrix.com>
date:        Thu Aug 09 16:45:12 2012 +0100
    
    cpufreq: P state stats aren't available if there is no cpufreq driver
    
    If there is no cpufreq driver (e.g., with an AMD Opteron 8212) then
    reading the P state statistics causes a deadlock as an uninitialized
    spinlock is locked in do_get_pm_info(). The spinlock is initialized in
    cpufreq_statistic_init() which is not called if cpufreq_driver ==
    NULL.
    
    Signed-off-by: David Vrabel <david.vrabel@citrix.com>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset:   25706:7fd5facb6084
    xen-unstable date:        Fri Aug 03 09:50:28 2012 +0200
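    The fix described above amounts to never touching statistics state
    that was never initialized. A minimal C sketch of that guard pattern
    (hypothetical names and stand-in types, not the actual Xen code):

```c
#include <stddef.h>

/* Hypothetical stand-in for the driver descriptor; in Xen the guarded
 * state is a per-CPU statistics spinlock set up by
 * cpufreq_statistic_init(), which never runs when no driver exists. */
struct cpufreq_driver_t { const char *name; };
static struct cpufreq_driver_t *cpufreq_driver = NULL; /* no driver present */

/* Return -1 when no driver is registered, instead of locking an
 * uninitialized spinlock the way the buggy path did. */
static int get_pm_stats(void)
{
    if (cpufreq_driver == NULL)
        return -1;  /* stats were never initialized: bail out early */
    /* ... lock the statistics spinlock and copy out P-state data ... */
    return 0;
}
```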
    
    
changeset:   21608:a51c86b407d7
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Aug 09 15:47:19 2012 +0100
    
    xen: only check for shared pages while any exist on teardown
    
    Avoids worst-case behaviour when the guest has a large p2m.
    
    This is XSA-11 / CVE-2012-3433
    
    Signed-off-by: Tim Deegan <tim@xen.org>
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Tested-by: Olaf Hering <olaf@aepfle.de>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
changeset:   21607:6d7ae840463c
user:        Jan Beulich <jbeulich@suse.com>
date:        Mon Jul 30 13:39:47 2012 +0100
    
    x86: fix off-by-one in nr_irqs_gsi calculation
    
    highest_gsi() returns the last valid GSI, not a count.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Joe Jin <joe.jin@oracle.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset:   25688:e6266fc76d08
    xen-unstable date:        Fri Jul 27 12:22:13 2012 +0200
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 09 18:30:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 18:30:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzXUf-0003Z3-9V; Thu, 09 Aug 2012 18:30:01 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SzXUd-0003Yy-CO
	for xen-devel@lists.xensource.com; Thu, 09 Aug 2012 18:29:59 +0000
Received: from [85.158.138.51:43025] by server-5.bemta-3.messagelabs.com id
	27/E2-27557-6A104205; Thu, 09 Aug 2012 18:29:58 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-15.tower-174.messagelabs.com!1344536995!25716067!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg2ODI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5827 invoked from network); 9 Aug 2012 18:29:55 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Aug 2012 18:29:55 -0000
X-IronPort-AV: E=Sophos;i="4.77,741,1336348800"; d="scan'208";a="13937461"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	09 Aug 2012 18:29:54 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 9 Aug 2012 19:29:55 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1SzXUY-0006yD-JN;
	Thu, 09 Aug 2012 18:29:54 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1SzXUY-00074g-IN;
	Thu, 09 Aug 2012 19:29:54 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13576-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 9 Aug 2012 19:29:54 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 13576: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13576 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13576/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386                    4 xen-build                 fail REGR. vs. 13569
 build-i386-oldkern            4 xen-build                 fail REGR. vs. 13569
 build-amd64-oldkern           4 xen-build                 fail REGR. vs. 13569
 build-amd64                   4 xen-build                 fail REGR. vs. 13569

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  1225aff05dd2
baseline version:
 xen                  f8f8912b3de0

------------------------------------------------------------
People who touched revisions under test:
  David Vrabel <david.vrabel@citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Olaf Hering <olaf@aepfle.de>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  fail    
 build-i386                                                   fail    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23336:1225aff05dd2
tag:         tip
user:        Keir Fraser <keir@xen.org>
date:        Thu Aug 09 16:48:07 2012 +0100
    
    Added signature for changeset ce7195d2b80e
    
    
changeset:   23335:7cdd79dea62b
user:        Keir Fraser <keir@xen.org>
date:        Thu Aug 09 16:47:55 2012 +0100
    
    Added tag RELEASE-4.1.3 for changeset ce7195d2b80e
    
    
changeset:   23334:ce7195d2b80e
tag:         RELEASE-4.1.3
user:        Keir Fraser <keir@xen.org>
date:        Thu Aug 09 16:47:30 2012 +0100
    
    Update Xen version to 4.1.3
    
    
changeset:   23333:985fb467d180
user:        David Vrabel <david.vrabel@citrix.com>
date:        Thu Aug 09 16:44:51 2012 +0100
    
    cpufreq: P state stats aren't available if there is no cpufreq driver
    
    If there is no cpufreq driver (e.g., with an AMD Opteron 8212) then
    reading the P state statistics causes a deadlock as an uninitialized
    spinlock is locked in do_get_pm_info(). The spinlock is initialized in
    cpufreq_statistic_init() which is not called if cpufreq_driver ==
    NULL.
    
    Signed-off-by: David Vrabel <david.vrabel@citrix.com>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset:   25706:7fd5facb6084
    xen-unstable date:        Fri Aug 03 09:50:28 2012 +0200
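    [Editor's note: a minimal C sketch of the guard pattern this fix describes.
    The struct, variable, and function names are illustrative stand-ins, not
    Xen's actual definitions; the point is that the accessor must bail out
    before touching a lock that cpufreq_statistic_init() never initialized.]

    ```c
    #include <stdio.h>
    #include <stddef.h>

    /* Illustrative stand-ins for Xen's cpufreq state (names hypothetical). */
    struct cpufreq_statistic {
        int initialized;            /* the real struct holds a spinlock here */
    };

    static void *cpufreq_driver = NULL;                    /* NULL: no driver */
    static struct cpufreq_statistic *cpufreq_stats = NULL; /* never set up    */

    /* Guarded accessor: check the precondition before taking the lock.
     * Without this check, spin_lock() would be called on an uninitialized
     * spinlock, which is the deadlock the commit describes. */
    static int do_get_pm_info(void)
    {
        if (!cpufreq_driver || !cpufreq_stats)
            return -1;              /* -ENODEV in real code */
        /* spin_lock(&cpufreq_stats->lock); ... read stats ... */
        return 0;
    }

    int main(void)
    {
        printf("%d\n", do_get_pm_info());
        return 0;
    }
    ```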
    
    
changeset:   23332:859205b36fe9
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Aug 09 15:47:42 2012 +0100
    
    xen: only check for shared pages while any exist on teardown
    
    Avoids worst case behaviour when guest has a large p2m.
    
    This is XSA-11 / CVE-2012-3433
    
    Signed-off-by: Tim Deegan <tim@xen.org>
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Tested-by: Olaf Hering <olaf@aepfle.de>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
changeset:   23331:f8f8912b3de0
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Fri Aug 03 10:43:24 2012 +0100
    
    nestedhvm: fix nested page fault build error on 32-bit
    
        cc1: warnings being treated as errors
        hvm.c: In function 'hvm_hap_nested_page_fault':
        hvm.c:1282: error: passing argument 2 of
        'nestedhvm_hap_nested_page_fault' from incompatible pointer type
        /local/scratch/ianc/devel/xen-unstable.hg/xen/include/asm/hvm/nestedhvm.h:55:
        note: expected 'paddr_t *' but argument is of type 'long unsigned
        int *'
    
    hvm_hap_nested_page_fault takes an unsigned long gpa and passes &gpa
    to nestedhvm_hap_nested_page_fault which takes a paddr_t *. Since both
    of the callers of hvm_hap_nested_page_fault (svm_do_nested_pgfault and
    ept_handle_violation) actually have the gpa which they pass to
    hvm_hap_nested_page_fault as a paddr_t I think it makes sense to
    change the argument to hvm_hap_nested_page_fault.
    
    The other user of gpa in hvm_hap_nested_page_fault is a call to
    p2m_mem_access_check, which currently also takes a paddr_t gpa but I
    think a paddr_t is appropriate there too.
    
    Jan points out that this is also an issue for >4GB guests on the 32
    bit hypervisor.
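    [Editor's note: a minimal C sketch of the type fix described above. The
    function names mirror the ones in the commit message but the bodies are
    hypothetical; the point is that once gpa is a paddr_t (64-bit even on
    x86_32, unlike unsigned long), &gpa matches the callee's paddr_t * and
    addresses above 4GB are no longer truncated on a 32-bit build.]

    ```c
    #include <stdint.h>
    #include <inttypes.h>
    #include <stdio.h>

    typedef uint64_t paddr_t;  /* Xen's physical-address type */

    /* Callee writes through a paddr_t *, as nestedhvm_hap_nested_page_fault
     * does; here it just masks to a page boundary as an example. */
    static int nested_fault(paddr_t *gpa)
    {
        *gpa &= ~(paddr_t)0xfff;
        return 0;
    }

    /* After the fix: gpa is paddr_t, so &gpa has the type the callee
     * expects, and a >4GB address round-trips intact. */
    static void hvm_hap_nested_page_fault(paddr_t gpa)
    {
        nested_fault(&gpa);
        printf("0x%" PRIx64 "\n", (uint64_t)gpa);
    }

    int main(void)
    {
        hvm_hap_nested_page_fault((paddr_t)0x123456789abcULL);
        return 0;
    }
    ```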
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Committed-by: Ian Campbell <ian.campbell@citrix.com>
    xen-unstable changeset:   25724:612898732e66
    xen-unstable date:        Fri Aug 03 09:54:17 2012 +0100
    Backported-by: Keir Fraser <keir@xen.org>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 09 18:33:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 18:33:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzXXH-0003gX-80; Thu, 09 Aug 2012 18:32:43 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SzXXG-0003gQ-E0
	for xen-devel@lists.xensource.com; Thu, 09 Aug 2012 18:32:42 +0000
Received: from [85.158.143.99:38965] by server-1.bemta-4.messagelabs.com id
	B5/55-20198-94204205; Thu, 09 Aug 2012 18:32:41 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-8.tower-216.messagelabs.com!1344537161!17651630!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg2ODI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2495 invoked from network); 9 Aug 2012 18:32:41 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Aug 2012 18:32:41 -0000
X-IronPort-AV: E=Sophos;i="4.77,741,1336348800"; d="scan'208";a="13937485"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	09 Aug 2012 18:32:41 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 9 Aug 2012 19:32:41 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1SzXXE-0006z2-QH; Thu, 09 Aug 2012 18:32:40 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1SzXXE-0006Il-MP;
	Thu, 09 Aug 2012 19:32:40 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20516.584.592349.466383@mariner.uk.xensource.com>
Date: Thu, 9 Aug 2012 19:32:40 +0100
To: xen.org <ian.jackson@eu.citrix.com>
In-Reply-To: <osstest-13575-mainreport@xen.org>
References: <osstest-13575-mainreport@xen.org>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [xen-4.0-testing test] 13575: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

xen.org writes ("[xen-4.0-testing test] 13575: regressions - FAIL"):
> flight 13575 xen-4.0-testing real [real]
> http://www.chiark.greenend.org.uk/~xensrcts/logs/13575/
> 
> Regressions :-(
> 
> Tests which did not succeed and are blocking,
> including tests which could not be run:
>  build-i386                    4 xen-build            fail REGR. vs. 13528
>  build-i386-oldkern            4 xen-build             fail REGR. vs. 13528
>  build-amd64-oldkern           4 xen-build             fail REGR. vs. 13528
>  build-amd64                   4 xen-build             fail REGR. vs. 13528

This and the 4.1 failure are due to a miscommunication between Keir
and me about tagging the qemu trees.  I have tagged them now and all
is going to be fine, I hope ...

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 09 18:43:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 18:43:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzXhm-0003wO-EI; Thu, 09 Aug 2012 18:43:34 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1SzXhk-0003wJ-Qm
	for xen-devel@lists.xensource.com; Thu, 09 Aug 2012 18:43:33 +0000
Received: from [85.158.143.99:32149] by server-3.bemta-4.messagelabs.com id
	F1/85-31486-4D404205; Thu, 09 Aug 2012 18:43:32 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-2.tower-216.messagelabs.com!1344537811!22105554!1
X-Originating-IP: [74.125.82.43]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10457 invoked from network); 9 Aug 2012 18:43:31 -0000
Received: from mail-wg0-f43.google.com (HELO mail-wg0-f43.google.com)
	(74.125.82.43)
	by server-2.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Aug 2012 18:43:31 -0000
Received: by wgbdr1 with SMTP id dr1so607711wgb.24
	for <xen-devel@lists.xensource.com>;
	Thu, 09 Aug 2012 11:43:31 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:cc:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=h1bJOWx5/PrBbOf7hBRbCmoh8dbJ9oy8lxDyZBn/0Ak=;
	b=JzuuMespi2l++xgAl0T/hO+QCVwZClgw6BpCp5u6MT9ZEluT+5GvujSKtO+POB5gUz
	RRityqLZSIzpgcqwVVlO98b3nZN6DH94rstqAuSdob4HHm3YP7l33/F4IN9ovmckpiJA
	Bc2J4xD7NTGIpwypR8YvTnW3FUCo3DFjK4Eb5qdLwq5L5JWcqJ+9gddYHmhZjzPWeOKL
	vhJxSa26So59Fd2JghlatO+0F1YKNjd90FYnIU0shtTGvDCcZUKb7ODSbS9MzMKF7rn7
	4R1e3nJ1w2XBj0FCM6gIaFnn/4WTkvecN58NME0h7yVQhA9WPNDg1AriDJMfYzZyOfxS
	nCvA==
Received: by 10.216.237.193 with SMTP id y43mr141499weq.75.1344537811137;
	Thu, 09 Aug 2012 11:43:31 -0700 (PDT)
Received: from [192.168.1.68] (host86-178-46-238.range86-178.btcentralplus.com.
	[86.178.46.238])
	by mx.google.com with ESMTPS id fb20sm4660363wid.1.2012.08.09.11.43.28
	(version=SSLv3 cipher=OTHER); Thu, 09 Aug 2012 11:43:30 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.32.0.111121
Date: Thu, 09 Aug 2012 19:43:23 +0100
From: Keir Fraser <keir.xen@gmail.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Message-ID: <CC49C35B.3B364%keir.xen@gmail.com>
Thread-Topic: [Xen-devel] [xen-4.0-testing test] 13575: regressions - FAIL
Thread-Index: Ac12XtqXQzDaSWDEREWE/pSqW4Jn5w==
In-Reply-To: <20516.584.592349.466383@mariner.uk.xensource.com>
Mime-version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [xen-4.0-testing test] 13575: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 09/08/2012 19:32, "Ian Jackson" <Ian.Jackson@eu.citrix.com> wrote:

> xen.org writes ("[xen-4.0-testing test] 13575: regressions - FAIL"):
>> flight 13575 xen-4.0-testing real [real]
>> http://www.chiark.greenend.org.uk/~xensrcts/logs/13575/
>> 
>> Regressions :-(
>> 
>> Tests which did not succeed and are blocking,
>> including tests which could not be run:
>>  build-i386                    4 xen-build            fail REGR. vs. 13528
>>  build-i386-oldkern            4 xen-build             fail REGR. vs. 13528
>>  build-amd64-oldkern           4 xen-build             fail REGR. vs. 13528
>>  build-amd64                   4 xen-build             fail REGR. vs. 13528
> 
> This and the 4.1 failure are due to a miscommunication between Keir
> and me about tagging the qemu trees.  I have tagged them now and all
> is going to be fine, I hope ...

Thanks. :)

> Ian.
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 09 19:05:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 19:05:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzY33-0004CF-B3; Thu, 09 Aug 2012 19:05:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1SzY31-0004CA-66
	for xen-devel@lists.xen.org; Thu, 09 Aug 2012 19:05:31 +0000
Received: from [85.158.139.83:44801] by server-4.bemta-5.messagelabs.com id
	A8/9C-32474-AF904205; Thu, 09 Aug 2012 19:05:30 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-182.messagelabs.com!1344539128!31140493!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg2ODI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8571 invoked from network); 9 Aug 2012 19:05:29 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-12.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Aug 2012 19:05:29 -0000
X-IronPort-AV: E=Sophos;i="4.77,741,1336348800"; d="scan'208";a="13937828"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	09 Aug 2012 19:05:28 +0000
Received: from [127.0.0.1] (10.80.16.67) by smtprelay.citrix.com
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0; Thu, 9 Aug 2012
	20:05:28 +0100
Message-ID: <1344539127.11783.41.camel@dagon.hellion.org.uk>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Thu, 9 Aug 2012 20:05:27 +0100
In-Reply-To: <20120809180433.GA14457@phenom.dumpdata.com>
References: <4B45B535F7F6BE4CB1C044ED5115CDDE0107F6C1D44C@LONPMAILBOX01.citrite.net>
	<20120809180433.GA14457@phenom.dumpdata.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Thanos Makatos <thanos.makatos@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] RFC: blktap3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-09 at 19:04 +0100, Konrad Rzeszutek Wilk wrote:
> On Thu, Aug 09, 2012 at 03:03:06PM +0100, Thanos Makatos wrote:
> > Hi,
> > 
> > I'd like to introduce blktap3: essentially blktap2 without the need of blkback. This has been developed by Santosh Jodh, and I'll maintain it.
> > 
> 
> So where is the source of this driver located?

I suspect that Thanos meant the blktap kernel driver rather than
blkback.

This is a completely userspace implementation of blktap; the fact that
you don't need to layer blkback on top of it is largely incidental.

This certainly doesn't aim to replace blkback; the two serve different
use cases (essentially speed vs. flexibility).

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 09 21:03:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 21:03:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzZsm-00056k-Qy; Thu, 09 Aug 2012 21:03:04 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1SzZsl-00056b-Jm
	for xen-devel@lists.xen.org; Thu, 09 Aug 2012 21:03:03 +0000
Received: from [85.158.143.35:64303] by server-2.bemta-4.messagelabs.com id
	84/EE-19021-68524205; Thu, 09 Aug 2012 21:03:02 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-3.tower-21.messagelabs.com!1344546181!13239568!1
X-Originating-IP: [63.239.67.9]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 831 invoked from network); 9 Aug 2012 21:03:02 -0000
Received: from emvm-gh1-uea08.nsa.gov (HELO nsa.gov) (63.239.67.9)
	by server-3.tower-21.messagelabs.com with SMTP;
	9 Aug 2012 21:03:02 -0000
X-TM-IMSS-Message-ID: <8bdfacc300061a32@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov ([63.239.67.9])
	with ESMTP (TREND IMSS SMTP Service 7.1) id 8bdfacc300061a32 ;
	Thu, 9 Aug 2012 17:03:03 -0400
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id q79L2xlH018462; 
	Thu, 9 Aug 2012 17:02:59 -0400
Message-ID: <50242583.3010201@tycho.nsa.gov>
Date: Thu, 09 Aug 2012 17:02:59 -0400
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Organization: National Security Agency
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120717 Thunderbird/14.0
MIME-Version: 1.0
To: Ian Campbell <ian.campbell@citrix.com>
References: <1343657037-28495-1-git-send-email-ian.campbell@citrix.com>
In-Reply-To: <1343657037-28495-1-git-send-email-ian.campbell@citrix.com>
Cc: xen-devel@lists.xen.org
Subject: [Xen-devel] Guest knowledge of own domid [was: docs: initial
 documentation for xenstore paths]
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 07/30/2012 10:03 AM, Ian Campbell wrote:
> This is based upon my inspection of a system with a single PV domain running
> and is therefore very incomplete.
> 
> There are several things I'm not sure of here, mostly marked with XXX in the
> text.
> 
> In particular:
> 
>  - We seem to expose various things to the guest which really it has no need to
>    know (at least not via xenstore). e.g. its own domid, its device model pid,
>    the size of the video ram, store port and gref.

If the domid key is unneeded/removed, is there a recommended method for
a guest to query its own domid? I don't see a hypercall that returns it
directly, although there is one to return the guest's UUID - which seems
much less useful for a guest to know about itself.

While hypercalls are fairly consistent about accepting DOMID_SELF, a 
domain does occasionally need to know its own ID: xenstore permission
changes do not accept DOMID_SELF, and if two domains are attempting to
set up communication such as V4V or vchan, they need to be able to tell
their peer what domain ID to use.

It is possible for a domain to query its own domain ID indirectly, so it 
would be difficult to argue that a domain should not be able to obtain 
its own ID. One method for a domain to query its own ID is to create an
unbound event channel with remote_domid = DOMID_SELF, and then execute
evtchn_status on the event channel in order to see the resolved domain 
id. Querying Xenstore permissions on a newly-created key will show the
local domain as the first entry. Less reliably, the backend paths for
all xenbus devices contain the local and remote domain IDs.
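Of the indirect methods above, the backend-path one is easy to illustrate
in isolation. The sketch below is a hypothetical illustration in Python
(helper name invented here); it assumes the conventional backend path
layout /local/domain/<backend-domid>/backend/<type>/<frontend-domid>/<devid>:

```python
# Hypothetical sketch: recover the local (frontend) and remote (backend)
# domain IDs from a conventional xenbus backend path, e.g. the value of a
# frontend device's "backend" key. Assumes the layout
#   /local/domain/<backend-domid>/backend/<type>/<frontend-domid>/<devid>

def domids_from_backend_path(path):
    """Return (backend_domid, frontend_domid) parsed from a backend path."""
    parts = path.strip("/").split("/")
    # Expected: ["local", "domain", <backend>, "backend", <type>, <frontend>, <devid>]
    if (len(parts) < 7 or parts[0] != "local" or parts[1] != "domain"
            or parts[3] != "backend"):
        raise ValueError("not a conventional backend path: %s" % path)
    return int(parts[2]), int(parts[5])

# Example: a vif backend in dom0 serving frontend domain 7, device 0.
backend_domid, frontend_domid = domids_from_backend_path(
    "/local/domain/0/backend/vif/7/0")
```

As Daniel notes, this is the least reliable of the three methods, since it
only works once the guest has at least one xenbus device with a
conventionally laid-out backend path.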

-- 
Daniel De Graaf
National Security Agency

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 09 21:27:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 21:27:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzaFb-0005KE-0l; Thu, 09 Aug 2012 21:26:39 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jean.guyader@gmail.com>) id 1SzaFZ-0005K9-67
	for xen-devel@lists.xen.org; Thu, 09 Aug 2012 21:26:37 +0000
Received: from [85.158.138.51:21949] by server-1.bemta-3.messagelabs.com id
	F4/2D-32745-C0B24205; Thu, 09 Aug 2012 21:26:36 +0000
X-Env-Sender: jean.guyader@gmail.com
X-Msg-Ref: server-11.tower-174.messagelabs.com!1344547594!27494277!1
X-Originating-IP: [209.85.212.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14198 invoked from network); 9 Aug 2012 21:26:35 -0000
Received: from mail-vb0-f45.google.com (HELO mail-vb0-f45.google.com)
	(209.85.212.45)
	by server-11.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Aug 2012 21:26:35 -0000
Received: by vbip1 with SMTP id p1so1083282vbi.32
	for <xen-devel@lists.xen.org>; Thu, 09 Aug 2012 14:26:34 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type;
	bh=EajrWT/ihUkkjmB9S8U6pR3C14VXCs6gS+lBlzjNvbw=;
	b=wwY5BmDPe2SCifl6ItDqFDNgkUNDsPxmyfCS3WqjFMe4PUC+IljAkHH4XPSWENPHpc
	18dHfSGDNFD72lThfyb6jVfXfj0mB7TeTZgokFrd8NfYWbio3XOjXn4gLbRfKhKQoKWa
	Wc1/QhvZqyBOeotrrec0vxyNV8D0Nr2FdRDO/fuTc4+R9AV0BnoFWx6v7s7Gg6hBP397
	NUOQVRrftE3ixXIGwG7EDiGD6mrzxgxLDvR+3IrrZxtOZhEWNorNQh+f51y0jJvzaVfA
	1shyOe1xzkXxBymxOfO99npBQShlaLZwRA42UI0sQ86xKauGfzaPDe+HRQ4I66dKqHV/
	9meg==
Received: by 10.52.93.170 with SMTP id cv10mr523637vdb.78.1344547594215; Thu,
	09 Aug 2012 14:26:34 -0700 (PDT)
MIME-Version: 1.0
Received: by 10.58.79.175 with HTTP; Thu, 9 Aug 2012 14:26:14 -0700 (PDT)
In-Reply-To: <50242583.3010201@tycho.nsa.gov>
References: <1343657037-28495-1-git-send-email-ian.campbell@citrix.com>
	<50242583.3010201@tycho.nsa.gov>
From: Jean Guyader <jean.guyader@gmail.com>
Date: Thu, 9 Aug 2012 22:26:14 +0100
Message-ID: <CAEBdQ90LB9xvdAZC_QJGRYmBXBaM3ysDuAbG5LZr4AVe=GrA0w@mail.gmail.com>
To: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Cc: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Guest knowledge of own domid [was: docs: initial
 documentation for xenstore paths]
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 9 August 2012 22:02, Daniel De Graaf <dgdegra@tycho.nsa.gov> wrote:
> On 07/30/2012 10:03 AM, Ian Campbell wrote:
>> This is based upon my inspection of a system with a single PV domain running
>> and is therefore very incomplete.
>>
>> There are several things I'm not sure of here, mostly marked with XXX in the
>> text.
>>
>> In particular:
>>
>>  - We seem to expose various things to the guest which really it has no need to
>>    know (at least not via xenstore). e.g. its own domid, its device model pid,
>>    the size of the video ram, store port and gref.
>
> If the domid key is unneeded/removed, is there a recommended method for
> a guest to query its own domid? I don't see a hypercall that returns it
> directly, although there is one to return the guest's UUID - which seems
> much less useful for a guest to know about itself.
>
> While hypercalls are fairly consistent about accepting DOMID_SELF, a
> domain does occasionally need to know its own ID: xenstore permission
> changes do not accept DOMID_SELF, and if two domains are attempting to
> set up communication such as V4V or vchan, they need to be able to tell
> their peer what domain ID to use.
>

That is one way of doing it; another would be to use a name resolution
system, a bit like DNS. A system like that would need to live where the
VMs are created and destroyed (probably dom0 or a domain builder VM).
The server could use vchan, v4v or even a shared XenStore node, but I
think we need something like that.

In the long run it's much better to rely on a name instead of a domid,
because domids can change throughout the VM life cycle (reboot,
hibernate, save/restore, migration, ...).
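The registry Jean describes could, at its simplest, be little more than a
name-to-domid map maintained by whichever domain creates and destroys VMs.
A toy sketch (hypothetical; the class and method names are invented here,
and a real service would be queried over vchan, v4v or XenStore rather
than called in-process):

```python
# Toy sketch of the name-resolution idea: a registry living in dom0 (or a
# domain-builder VM) maps stable VM names to their current, changeable
# domids. Hypothetical illustration only.

class DomainNameRegistry:
    def __init__(self):
        self._by_name = {}

    def register(self, name, domid):
        # Called by the toolstack when a VM is created, or when it comes
        # back with a new domid after reboot/save-restore/migration.
        self._by_name[name] = domid

    def unregister(self, name):
        self._by_name.pop(name, None)

    def resolve(self, name):
        # Peers look up the current domid by stable name.
        return self._by_name.get(name)

registry = DomainNameRegistry()
registry.register("webserver", 7)
# After e.g. save/restore the same VM may reappear with a new domid:
registry.register("webserver", 12)
```

The point of the indirection is exactly the one Jean makes: peers keep a
stable name, and only the registry needs to track the changing domid.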

Jean

> It is possible for a domain to query its own domain ID indirectly, so it
> would be difficult to argue that a domain should not be able to obtain
> its own ID. One method for a domain to query its own ID is to create an
> unbound event channel with remote_domid = DOMID_SELF, and then execute
> evtchn_status on the event channel in order to see the resolved domain
> id. Querying Xenstore permissions on a newly-created key will show the
> local domain as the first entry. Less reliably, the backend paths for
> all xenbus devices contain the local and remote domain IDs.
>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 09 21:27:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 21:27:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzaFb-0005KE-0l; Thu, 09 Aug 2012 21:26:39 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jean.guyader@gmail.com>) id 1SzaFZ-0005K9-67
	for xen-devel@lists.xen.org; Thu, 09 Aug 2012 21:26:37 +0000
Received: from [85.158.138.51:21949] by server-1.bemta-3.messagelabs.com id
	F4/2D-32745-C0B24205; Thu, 09 Aug 2012 21:26:36 +0000
X-Env-Sender: jean.guyader@gmail.com
X-Msg-Ref: server-11.tower-174.messagelabs.com!1344547594!27494277!1
X-Originating-IP: [209.85.212.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14198 invoked from network); 9 Aug 2012 21:26:35 -0000
Received: from mail-vb0-f45.google.com (HELO mail-vb0-f45.google.com)
	(209.85.212.45)
	by server-11.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Aug 2012 21:26:35 -0000
Received: by vbip1 with SMTP id p1so1083282vbi.32
	for <xen-devel@lists.xen.org>; Thu, 09 Aug 2012 14:26:34 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type;
	bh=EajrWT/ihUkkjmB9S8U6pR3C14VXCs6gS+lBlzjNvbw=;
	b=wwY5BmDPe2SCifl6ItDqFDNgkUNDsPxmyfCS3WqjFMe4PUC+IljAkHH4XPSWENPHpc
	18dHfSGDNFD72lThfyb6jVfXfj0mB7TeTZgokFrd8NfYWbio3XOjXn4gLbRfKhKQoKWa
	Wc1/QhvZqyBOeotrrec0vxyNV8D0Nr2FdRDO/fuTc4+R9AV0BnoFWx6v7s7Gg6hBP397
	NUOQVRrftE3ixXIGwG7EDiGD6mrzxgxLDvR+3IrrZxtOZhEWNorNQh+f51y0jJvzaVfA
	1shyOe1xzkXxBymxOfO99npBQShlaLZwRA42UI0sQ86xKauGfzaPDe+HRQ4I66dKqHV/
	9meg==
Received: by 10.52.93.170 with SMTP id cv10mr523637vdb.78.1344547594215; Thu,
	09 Aug 2012 14:26:34 -0700 (PDT)
MIME-Version: 1.0
Received: by 10.58.79.175 with HTTP; Thu, 9 Aug 2012 14:26:14 -0700 (PDT)
In-Reply-To: <50242583.3010201@tycho.nsa.gov>
References: <1343657037-28495-1-git-send-email-ian.campbell@citrix.com>
	<50242583.3010201@tycho.nsa.gov>
From: Jean Guyader <jean.guyader@gmail.com>
Date: Thu, 9 Aug 2012 22:26:14 +0100
Message-ID: <CAEBdQ90LB9xvdAZC_QJGRYmBXBaM3ysDuAbG5LZr4AVe=GrA0w@mail.gmail.com>
To: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Cc: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Guest knowledge of own domid [was: docs: initial
 documentation for xenstore paths]
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 9 August 2012 22:02, Daniel De Graaf <dgdegra@tycho.nsa.gov> wrote:
> On 07/30/2012 10:03 AM, Ian Campbell wrote:
>> This is based upon my inspection of a system with a single PV domain running
>> and is therefore very incomplete.
>>
>> There are several things I'm not sure of here, mostly marked with XXX in the
>> text.
>>
>> In particular:
>>
>>  - We seem to expose various things to the guest which really it has no need to
>>    know (at least not via xenstore). e.g. its own domid, its device model pid,
>>    the size of the video ram, store port and gref.
>
> If the domid key is unneeded/removed, is there a recommended method for
> a guest to query its own domid? I don't see a hypercall that returns it
> directly, although there is one to return the guest's UUID - which seems
> much less useful for a guest to know about itself.
>
> While hypercalls are fairly consistent about accepting DOMID_SELF, a
> domain does occasionally need to know its own ID: xenstore permission
> changes do not accept DOMID_SELF, and if two domains are attempting to
> set up communication such as V4V or vchan, they need to be able to tell
> their peer what domain ID to use.
>

That is one way of doing it; another would be to use a name resolution
system, a bit like DNS. A system like that would need to live where the
VMs are created and destroyed (probably dom0 or a domain builder VM).
The server could use vchan, v4v or even a shared XenStore node, but I
think we need something like that.

In the long run it's much better to rely on a name instead of a domid,
because domids can change throughout the VM life cycle (reboot,
hibernate, save/restore, migration, ...).

Jean

> It is possible for a domain to query its own domain ID indirectly, so it
> would be difficult to argue that a domain should not be able to obtain
> its own ID. One method for a domain to query its own ID is to create an
> unbound event channel with remote_domid = DOMID_SELF, and then execute
> evtchn_status on the event channel in order to see the resolved domain
> id. Querying Xenstore permissions on a newly-created key will show the
> local domain as the first entry. Less reliably, the backend paths for
> all xenbus devices contain the local and remote domain IDs.
>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 09 21:45:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 21:45:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzaXT-0005Xc-NP; Thu, 09 Aug 2012 21:45:07 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Goncalo.Gomes@eu.citrix.com>) id 1SzaXT-0005XX-6s
	for xen-devel@lists.xen.org; Thu, 09 Aug 2012 21:45:07 +0000
Received: from [85.158.143.99:58737] by server-2.bemta-4.messagelabs.com id
	32/60-19021-26F24205; Thu, 09 Aug 2012 21:45:06 +0000
X-Env-Sender: Goncalo.Gomes@eu.citrix.com
X-Msg-Ref: server-2.tower-216.messagelabs.com!1344548705!22121644!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg3OTk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26414 invoked from network); 9 Aug 2012 21:45:06 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-2.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Aug 2012 21:45:06 -0000
X-IronPort-AV: E=Sophos;i="4.77,742,1336348800"; d="scan'208";a="13939898"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	09 Aug 2012 21:45:04 +0000
Received: from eire.uk.xensource.com (10.80.2.151) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 9 Aug 2012 22:45:04 +0100
Date: Thu, 9 Aug 2012 22:39:33 +0100
From: Goncalo Gomes <Goncalo.Gomes@EU.CITRIX.COM>
To: Thanos Makatos <thanos.makatos@citrix.com>
Message-ID: <20120809213933.GA9099@eire.uk.xensource.com>
References: <4B45B535F7F6BE4CB1C044ED5115CDDE0107F6C1D44C@LONPMAILBOX01.citrite.net>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <4B45B535F7F6BE4CB1C044ED5115CDDE0107F6C1D44C@LONPMAILBOX01.citrite.net>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] RFC: blktap3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

T24gVGh1LCAwOSBBdWcgMjAxMiwgVGhhbm9zIE1ha2F0b3Mgd3JvdGU6Cgo+IEhpLAo+IAo+IEni
gJlkIGxpa2UgdG8gaW50cm9kdWNlIGJsa3RhcDM6IGVzc2VudGlhbGx5IGJsa3RhcDIgd2l0aG91
dCB0aGUgbmVlZCBvZiBibGtiYWNrLiBUaGlzIGhhcyBiZWVuIGRldmVsb3BlZCBieSBTYW50b3No
IEpvZGgsIGFuZCBJ4oCZbGwgbWFpbnRhaW4gaXQuCgpXZWxjb21lISBwcmVjaXNlbHksIHhlbmlv
IChha2EgYmxrdGFwMykgd2FzIGRldmVsb3BlZCBieSBEYW5pZWwgClN0b2RkZW4gYW5kIHJlY2Vu
dGx5IGNvbnRpbnVlZC9pbXByb3ZlZCBieSBTYW50b3NoIEpvZGggOi0pCgpBcyBJIGhhdmUgYSBz
bGlnaHQgaW50ZXJlc3QgaW4gdGhpcyBhcmVhLCBJIHdhcyB3b25kZXJpbmcgd2hhdCBhcmUgdGhl
IAptYWluIGltcHJvdmVtZW50cyBvdmVyIGJsa3RhcDI/CgpHb25jYWxvCgo+IEluIHRoaXMgcGF0
Y2gsIGJsa3RhcDIgYmluYXJpZXMgYXJlIHN1ZmZpeGVkIHdpdGgg4oCcMuKAnSwgc28gaXTigJlz
IG5vdCB5ZXQgcG9zc2libGUgdG8gdXNlIGl0IGFsb25nIHdpdGggYmxrdGFwMy4KPiAKPiBBbiBl
eGFtcGxlIGNvbmZpZ3VyYXRpb24gZmlsZSBJIHVzZWQgaXMgdGhlIGZvbGxvd2luZzoKPiBuYW1l
ID0gImRlYmlhbiBia3RhcDMgd2l0aG91dCBweWdydWIiCj4gbWVtb3J5ID0gMjU2Cj4gZGlzayA9
IFsnYmFja2VuZHR5cGU9eGVuaW8sZm9ybWF0PXZoZCx2ZGV2PXh2ZGEsYWNjZXNzPXJ3LHRhcmdl
dD0vcm9vdC9kZWJpYW4tYmxrdGFwMy52aGQnXQo+IGtlcm5lbCA9ICJ2bWxpbnV6LTIuNi4zMi01
LWFtZDY0Igo+IHJvb3QgPSAnL2Rldi94dmRhMScKPiByYW1kaXNrID0gImluaXRyZC5pbWctMi42
LjMyLTUtYW1kNjQiCj4gY3B1X3dlaWdodD0yNTYKPiB2aWY9WydicmlkZ2U9eGVuYnIwJ10KPiAK
PiBCZWZvcmUgc3RhcnRpbmcgYW55IGJsa3RhcDMgVk0sIHRoZSB4ZW5pbyBkYWVtb24gbXVzdCBi
ZSBzdGFydGVkLgo+IAo+IEnigJl2ZSB0ZXN0ZWQgaXQgb24gY2hhbmdlIHNldCA0NzJmYzUxNWE0
NjMgd2l0aG91dCBweWdydWIuCj4gCj4gQW55IGNvbW1lbnRzIGFyZSB3ZWxjb21lIDopCj4gCj4g
LS0KPiBUaGFub3MgTWFrYXRvcwo+IAoKCj4gX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fX19fX19fX19fX19fX18KPiBYZW4tZGV2ZWwgbWFpbGluZyBsaXN0Cj4gWGVuLWRldmVsQGxp
c3RzLnhlbi5vcmcKPiBodHRwOi8vbGlzdHMueGVuLm9yZy94ZW4tZGV2ZWwKCgpfX19fX19fX19f
X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fXwpYZW4tZGV2ZWwgbWFpbGluZyBs
aXN0Clhlbi1kZXZlbEBsaXN0cy54ZW4ub3JnCmh0dHA6Ly9saXN0cy54ZW4ub3JnL3hlbi1kZXZl
bAo=

From xen-devel-bounces@lists.xen.org Thu Aug 09 23:25:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 23:25:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Szc5f-0006NO-PL; Thu, 09 Aug 2012 23:24:31 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jean.guyader@citrix.com>) id 1Szc5e-0006NJ-6M
	for xen-devel@lists.xen.org; Thu, 09 Aug 2012 23:24:30 +0000
Received: from [85.158.139.83:56780] by server-10.bemta-5.messagelabs.com id
	0C/4B-24472-DA644205; Thu, 09 Aug 2012 23:24:29 +0000
X-Env-Sender: jean.guyader@citrix.com
X-Msg-Ref: server-8.tower-182.messagelabs.com!1344554668!19886306!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg2ODI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 972 invoked from network); 9 Aug 2012 23:24:28 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Aug 2012 23:24:28 -0000
X-IronPort-AV: E=Sophos;i="4.77,743,1336348800"; d="scan'208";a="13941217"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	09 Aug 2012 23:24:27 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 10 Aug 2012 00:24:27 +0100
Received: from spongy.cam.xci-test.com ([10.80.248.53] helo=spongy)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<jean.guyader@citrix.com>)	id 1Szc5b-0000bH-5n;
	Thu, 09 Aug 2012 23:24:27 +0000
Received: by spongy (Postfix, from userid 2023)	id E408434040A; Fri, 10 Aug
	2012 00:25:47 +0100 (BST)
Date: Fri, 10 Aug 2012 00:25:47 +0100
From: Jean Guyader <jean.guyader@citrix.com>
To: Jean Guyader <jean.guyader@gmail.com>
Message-ID: <20120809232547.GA21925@spongy>
References: <1344023454-31425-1-git-send-email-jean.guyader@citrix.com>
	<1344023454-31425-5-git-send-email-jean.guyader@citrix.com>
	<20120809100605.GC16986@ocelot.phlegethon.org>
	<1344507826.32142.116.camel@zakaz.uk.xensource.com>
	<20120809103557.GA17503@ocelot.phlegethon.org>
	<CAEBdQ91rVN-wFwWBkfX1Ne133c4TDeXk+iktTDLTuM3StXdRFw@mail.gmail.com>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="FCuugMFkClbJLl1L"
Content-Disposition: inline
In-Reply-To: <CAEBdQ91rVN-wFwWBkfX1Ne133c4TDeXk+iktTDLTuM3StXdRFw@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: "Tim \(Xen.org\)" <tim@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 4/5] xen: events,
 exposes evtchn_alloc_unbound_domain
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--FCuugMFkClbJLl1L
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline

On 09/08 11:40, Jean Guyader wrote:
> On 9 August 2012 11:35, Tim Deegan <tim@xen.org> wrote:
> > At 11:23 +0100 on 09 Aug (1344511426), Ian Campbell wrote:
> >> On Thu, 2012-08-09 at 11:06 +0100, Tim Deegan wrote:
> >> > At 20:50 +0100 on 03 Aug (1344027053), Jean Guyader wrote:
> >> > >
> >> > > Exposes evtchn_alloc_unbound_domain to the rest of
> >> > > Xen so we can create allocated unbound evtchn within Xen.
> >> > >
> >> > > Signed-off-by: Jean Guyader <jean.guyader@citrix.com>
> >> >
> >> > > @@ -161,18 +163,18 @@ static long evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc)
> >> > >  {
> >> > >      struct evtchn *chn;
> >> > >      struct domain *d;
> >> > > -    int            port;
> >> > > +    evtchn_port_t  port;
> >> > >      domid_t        dom = alloc->dom;
> >> > > -    long           rc;
> >> > > +    int            rc;
> >> >
> >> > The function returns long; if you're tidying this up to be an int, might
> >> > as well change the return type too.
> >>
> >> I'm not sure if this is relevant but Jan just sent a patch to "make all
> >> (native) hypercalls consistently have "long" return type". I
> >> think/suspect this rc here turns into the result of the hypercall?
> >>
> >> Jan's patch was motivated by something to do with sign extension when a
> >> hypercall's int return is written to the long in the multicall arg
> >> struct which causes strangeness. Perhaps not totally relevant to
> >> evtchn_alloc which is unlikely to be in a MC.
> >
> > Yes, this eventually ends up in a hypercall handler, but s/long/int/
> > here doesn't cause problems because
> >  - rc is only ever set to an 'int' value here so we can't lose data
> >    from the type being too narrow; and
> >  - Those int values get cast up to long (either in here or in the
> >    caller) directly, which will sign-extend them.
> >
> > It really doesn't matter whether this function returns an int or a long,
> > but it's a bit untidy to change it half-way.
> >
> 
> The main reason why I changed it is that ERROR_EXIT_DOM expects an int,
> based on the format string. I guess I could cast the long to an int for
> the call to ERROR_EXIT_DOM, but that doesn't really look nice either.
> 

Hi,

Here is a new version that should address the comments from Tim and Jan.

Signed-off-by: Jean Guyader <jean.guyader@citrix.com>

Jean

--FCuugMFkClbJLl1L
Content-Type: text/x-diff; charset="us-ascii"
Content-Disposition: attachment;
	filename="evtchn_alloc_unbound_domain.patch"

commit c43dbcee9c4e9d65520f9a562b39e8e6455efc36
Author: Jean Guyader <jean.guyader@citrix.com>
Date:   Thu Aug 2 16:19:23 2012 +0100

    xen: events, exposes evtchn_alloc_unbound_domain
    
    Exposes evtchn_alloc_unbound_domain to the rest of
    Xen so we can create allocated unbound evtchn within Xen.

diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index 53777f8..880395e 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -159,9 +159,8 @@ static int get_free_port(struct domain *d)
 
 static long evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc)
 {
-    struct evtchn *chn;
     struct domain *d;
-    int            port;
+    evtchn_port_t  port;
     domid_t        dom = alloc->dom;
     long           rc;
 
@@ -169,26 +168,47 @@ static long evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc)
     if ( rc )
         return rc;
 
+    rc = evtchn_alloc_unbound_domain(d, &port,
+            alloc->remote_dom == DOMID_SELF ? current->domain->domain_id
+                                            : alloc->remote_dom);
+    if ( rc )
+        ERROR_EXIT_DOM((int)rc, d);
+
+    alloc->port = port;
+
+ out:
+    rcu_unlock_domain(d);
+
+    return rc;
+}
+
+int evtchn_alloc_unbound_domain(struct domain *d, evtchn_port_t *port,
+                                domid_t remote_domid)
+{
+    struct evtchn *chn;
+    int           rc;
+    int           free_port;
+
     spin_lock(&d->event_lock);
 
-    if ( (port = get_free_port(d)) < 0 )
-        ERROR_EXIT_DOM(port, d);
-    chn = evtchn_from_port(d, port);
+    rc = free_port = get_free_port(d);
+    if ( free_port < 0 )
+        goto out;
 
-    rc = xsm_evtchn_unbound(d, chn, alloc->remote_dom);
+    chn = evtchn_from_port(d, free_port);
+    rc = xsm_evtchn_unbound(d, chn, remote_domid);
     if ( rc )
         goto out;
 
     chn->state = ECS_UNBOUND;
-    if ( (chn->u.unbound.remote_domid = alloc->remote_dom) == DOMID_SELF )
-        chn->u.unbound.remote_domid = current->domain->domain_id;
+    chn->u.unbound.remote_domid = remote_domid;
 
-    alloc->port = port;
+    *port = free_port;
+    /* Everything is fine, return 0 */
+    rc = 0;
 
  out:
     spin_unlock(&d->event_lock);
-    rcu_unlock_domain(d);
-
     return rc;
 }
 
diff --git a/xen/include/xen/event.h b/xen/include/xen/event.h
index 71c3e92..1a0c832 100644
--- a/xen/include/xen/event.h
+++ b/xen/include/xen/event.h
@@ -69,6 +69,9 @@ int guest_enabled_event(struct vcpu *v, uint32_t virq);
 /* Notify remote end of a Xen-attached event channel.*/
 void notify_via_xen_event_channel(struct domain *ld, int lport);
 
+int evtchn_alloc_unbound_domain(struct domain *d, evtchn_port_t *port,
+                                domid_t remote_domid);
+
 /* Internal event channel object accessors */
 #define bucket_from_port(d,p) \
     ((d)->evtchn[(p)/EVTCHNS_PER_BUCKET])

--FCuugMFkClbJLl1L
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--FCuugMFkClbJLl1L--


From xen-devel-bounces@lists.xen.org Thu Aug 09 23:25:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Aug 2012 23:25:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Szc5f-0006NO-PL; Thu, 09 Aug 2012 23:24:31 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jean.guyader@citrix.com>) id 1Szc5e-0006NJ-6M
	for xen-devel@lists.xen.org; Thu, 09 Aug 2012 23:24:30 +0000
Received: from [85.158.139.83:56780] by server-10.bemta-5.messagelabs.com id
	0C/4B-24472-DA644205; Thu, 09 Aug 2012 23:24:29 +0000
X-Env-Sender: jean.guyader@citrix.com
X-Msg-Ref: server-8.tower-182.messagelabs.com!1344554668!19886306!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg2ODI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 972 invoked from network); 9 Aug 2012 23:24:28 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Aug 2012 23:24:28 -0000
X-IronPort-AV: E=Sophos;i="4.77,743,1336348800"; d="scan'208";a="13941217"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	09 Aug 2012 23:24:27 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 10 Aug 2012 00:24:27 +0100
Received: from spongy.cam.xci-test.com ([10.80.248.53] helo=spongy)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<jean.guyader@citrix.com>)	id 1Szc5b-0000bH-5n;
	Thu, 09 Aug 2012 23:24:27 +0000
Received: by spongy (Postfix, from userid 2023)	id E408434040A; Fri, 10 Aug
	2012 00:25:47 +0100 (BST)
Date: Fri, 10 Aug 2012 00:25:47 +0100
From: Jean Guyader <jean.guyader@citrix.com>
To: Jean Guyader <jean.guyader@gmail.com>
Message-ID: <20120809232547.GA21925@spongy>
References: <1344023454-31425-1-git-send-email-jean.guyader@citrix.com>
	<1344023454-31425-5-git-send-email-jean.guyader@citrix.com>
	<20120809100605.GC16986@ocelot.phlegethon.org>
	<1344507826.32142.116.camel@zakaz.uk.xensource.com>
	<20120809103557.GA17503@ocelot.phlegethon.org>
	<CAEBdQ91rVN-wFwWBkfX1Ne133c4TDeXk+iktTDLTuM3StXdRFw@mail.gmail.com>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="FCuugMFkClbJLl1L"
Content-Disposition: inline
In-Reply-To: <CAEBdQ91rVN-wFwWBkfX1Ne133c4TDeXk+iktTDLTuM3StXdRFw@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: "Tim \(Xen.org\)" <tim@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 4/5] xen: events,
 exposes evtchn_alloc_unbound_domain
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--FCuugMFkClbJLl1L
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline

On 09/08 11:40, Jean Guyader wrote:
> On 9 August 2012 11:35, Tim Deegan <tim@xen.org> wrote:
> > At 11:23 +0100 on 09 Aug (1344511426), Ian Campbell wrote:
> >> On Thu, 2012-08-09 at 11:06 +0100, Tim Deegan wrote:
> >> > At 20:50 +0100 on 03 Aug (1344027053), Jean Guyader wrote:
> >> > >
> >> > > Exposes evtchn_alloc_unbound_domain to the rest of
> >> > > Xen so we can create allocated unbound evtchn within Xen.
> >> > >
> >> > > Signed-off-by: Jean Guyader <jean.guyader@citrix.com>
> >> >
> >> > > @@ -161,18 +163,18 @@ static long evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc)
> >> > >  {
> >> > >      struct evtchn *chn;
> >> > >      struct domain *d;
> >> > > -    int            port;
> >> > > +    evtchn_port_t  port;
> >> > >      domid_t        dom = alloc->dom;
> >> > > -    long           rc;
> >> > > +    int            rc;
> >> >
> >> > The function returns long; if you're tidying this up to be an int, might
> >> > as well change the return type too.
> >>
> >> I'm not sure if this is relevant but Jan just sent a patch to "make all
> >> (native) hypercalls consistently have "long" return type". I
> >> think/suspect this rc here turns into the result of the hypercall?
> >>
> >> Jan's patch was motivated by something to do with sign extension when a
> >> hypercall's int return is written to the long in the multicall arg
> >> struct which causes strangeness. Perhaps not totally relevant to
> >> evtchn_alloc which is unlikely to be in a MC.
> >
> > Yes, this eventually ends up in a hypercall handler, but s/long/int/
> > here doesn't cause problems because
> >  - rc is only ever set to an 'int' value here so we can't lose data
> >    from the type being too narrow; and
> >  - Those int values get cast up to long (either in here or in the
> >    caller) directly, which will sign-extend the.
> >
> > It really doesn't matter whether this function returns an int or a long,
> > but it's a bit untidy to change it half-way.
> >
> 
> The main reason why I changed it only base ERROR_EXIT_DOM expects an int based
> on the format string. I guess I could cast the long in int for the
> call to ERROR_EXIT_DOM
> but that doesn't really look nice either.
> 

Hi,

Here is a new version that should address the comments from Tim and Jan.

Signed-off-by: Jean Guyader <jean.guyader@citrix.com>

Jean

--FCuugMFkClbJLl1L
Content-Type: text/x-diff; charset="us-ascii"
Content-Disposition: attachment;
	filename="evtchn_alloc_unbound_domain.patch"

commit c43dbcee9c4e9d65520f9a562b39e8e6455efc36
Author: Jean Guyader <jean.guyader@citrix.com>
Date:   Thu Aug 2 16:19:23 2012 +0100

    xen: events, exposes evtchn_alloc_unbound_domain
    
    Exposes evtchn_alloc_unbound_domain to the rest of
    Xen so we can create allocated unbound evtchn within Xen.

diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index 53777f8..880395e 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -159,9 +159,8 @@ static int get_free_port(struct domain *d)
 
 static long evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc)
 {
-    struct evtchn *chn;
     struct domain *d;
-    int            port;
+    evtchn_port_t  port;
     domid_t        dom = alloc->dom;
     long           rc;
 
@@ -169,26 +168,47 @@ static long evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc)
     if ( rc )
         return rc;
 
+    rc = evtchn_alloc_unbound_domain(d, &port,
+            alloc->remote_dom == DOMID_SELF ? current->domain->domain_id
+                                            : alloc->remote_dom);
+    if ( rc )
+        ERROR_EXIT_DOM((int)rc, d);
+
+    alloc->port = port;
+
+ out:
+    rcu_unlock_domain(d);
+
+    return rc;
+}
+
+int evtchn_alloc_unbound_domain(struct domain *d, evtchn_port_t *port,
+                                domid_t remote_domid)
+{
+    struct evtchn *chn;
+    int           rc;
+    int           free_port;
+
     spin_lock(&d->event_lock);
 
-    if ( (port = get_free_port(d)) < 0 )
-        ERROR_EXIT_DOM(port, d);
-    chn = evtchn_from_port(d, port);
+    rc = free_port = get_free_port(d);
+    if ( free_port < 0 )
+        goto out;
 
-    rc = xsm_evtchn_unbound(d, chn, alloc->remote_dom);
+    chn = evtchn_from_port(d, free_port);
+    rc = xsm_evtchn_unbound(d, chn, remote_domid);
     if ( rc )
         goto out;
 
     chn->state = ECS_UNBOUND;
-    if ( (chn->u.unbound.remote_domid = alloc->remote_dom) == DOMID_SELF )
-        chn->u.unbound.remote_domid = current->domain->domain_id;
+    chn->u.unbound.remote_domid = remote_domid;
 
-    alloc->port = port;
+    *port = free_port;
+    /* Everything is fine, return 0. */
+    rc = 0;
 
  out:
     spin_unlock(&d->event_lock);
-    rcu_unlock_domain(d);
-
     return rc;
 }
 
diff --git a/xen/include/xen/event.h b/xen/include/xen/event.h
index 71c3e92..1a0c832 100644
--- a/xen/include/xen/event.h
+++ b/xen/include/xen/event.h
@@ -69,6 +69,9 @@ int guest_enabled_event(struct vcpu *v, uint32_t virq);
 /* Notify remote end of a Xen-attached event channel.*/
 void notify_via_xen_event_channel(struct domain *ld, int lport);
 
+int evtchn_alloc_unbound_domain(struct domain *d, evtchn_port_t *port,
+                                domid_t remote_domid);
+
 /* Internal event channel object accessors */
 #define bucket_from_port(d,p) \
     ((d)->evtchn[(p)/EVTCHNS_PER_BUCKET])

--FCuugMFkClbJLl1L
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--FCuugMFkClbJLl1L--


From xen-devel-bounces@lists.xen.org Fri Aug 10 01:02:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 01:02:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Szdc3-0001fv-04; Fri, 10 Aug 2012 01:02:03 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Szdc1-00019e-AN
	for xen-devel@lists.xensource.com; Fri, 10 Aug 2012 01:02:01 +0000
Received: from [85.158.143.99:54154] by server-3.bemta-4.messagelabs.com id
	51/22-31486-78D54205; Fri, 10 Aug 2012 01:01:59 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-5.tower-216.messagelabs.com!1344560519!27125908!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg3OTk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20235 invoked from network); 10 Aug 2012 01:01:59 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-5.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Aug 2012 01:01:59 -0000
X-IronPort-AV: E=Sophos;i="4.77,743,1336348800"; d="scan'208";a="13942300"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Aug 2012 01:01:58 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 10 Aug 2012 02:01:58 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Szdbx-0001Bd-Um;
	Fri, 10 Aug 2012 01:01:57 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Szdbx-0003lk-S3;
	Fri, 10 Aug 2012 02:01:57 +0100
To: xen-devel@lists.xensource.com
Message-ID: <osstest-13580-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 10 Aug 2012 02:01:57 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.0-testing test] 13580: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13580 xen-4.0-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13580/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-i386-i386-xl             9 guest-start               fail REGR. vs. 13528

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-credit2    5 xen-boot                  fail REGR. vs. 13528
 test-amd64-amd64-xl-sedf     12 guest-saverestore.2       fail REGR. vs. 13524

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  8 debian-fixup                fail never pass
 test-amd64-i386-xl           15 guest-stop                   fail   never pass
 test-amd64-amd64-xl          15 guest-stop                   fail   never pass
 test-amd64-amd64-xl-sedf-pin 15 guest-stop                   fail   never pass
 test-amd64-i386-xl-multivcpu 15 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win7-amd64  8 guest-saverestore            fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64  8 guest-saverestore      fail never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-rhel6hvm-intel  7 redhat-install               fail never pass
 test-amd64-i386-qemuu-rhel6hvm-amd  7 redhat-install           fail never pass
 test-amd64-i386-rhel6hvm-amd  7 redhat-install               fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-i386-qemuu-rhel6hvm-intel  7 redhat-install         fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3    7 windows-install              fail   never pass
 test-i386-i386-xl-win         7 windows-install              fail   never pass
 test-amd64-i386-xl-win7-amd64  8 guest-saverestore            fail  never pass
 test-i386-i386-xl-qemuu-winxpsp3  7 windows-install            fail never pass
 test-amd64-amd64-xl-winxpsp3  7 windows-install              fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1  7 windows-install          fail never pass
 test-amd64-i386-xl-win-vcpus1  7 windows-install              fail  never pass
 test-amd64-amd64-xl-qemuu-winxpsp3  7 windows-install          fail never pass
 test-amd64-amd64-xl-win       7 windows-install              fail   never pass

version targeted for testing:
 xen                  228e6f382d5d
baseline version:
 xen                  6d7ae840463c

------------------------------------------------------------
People who touched revisions under test:
  David Vrabel <david.vrabel@citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Joe Jin <joe.jin@oracle.com>
  Keir Fraser <keir@xen.org>
  Olaf Hering <olaf@aepfle.de>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-i386-i386-xl                                            fail    
 test-amd64-i386-rhel6hvm-amd                                 fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   fail    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-i386-xl-multivcpu                                 fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   21612:228e6f382d5d
tag:         tip
user:        Keir Fraser <keir@xen.org>
date:        Thu Aug 09 16:48:19 2012 +0100
    
    Added signature for changeset 8ea28053de39
    
    
changeset:   21611:abf8c57178aa
user:        Keir Fraser <keir@xen.org>
date:        Thu Aug 09 16:47:49 2012 +0100
    
    Added tag RELEASE-4.0.4 for changeset 8ea28053de39
    
    
changeset:   21610:8ea28053de39
tag:         RELEASE-4.0.4
user:        Keir Fraser <keir@xen.org>
date:        Thu Aug 09 16:47:23 2012 +0100
    
    Update Xen version to 4.0.4
    
    
changeset:   21609:2bd0027ba0d1
user:        David Vrabel <david.vrabel@citrix.com>
date:        Thu Aug 09 16:45:12 2012 +0100
    
    cpufreq: P state stats aren't available if there is no cpufreq driver
    
    If there is no cpufreq driver (e.g., with an AMD Opteron 8212) then
    reading the P state statistics causes a deadlock as an uninitialized
    spinlock is locked in do_get_pm_info(). The spinlock is initialized in
    cpufreq_statistic_init() which is not called if cpufreq_driver ==
    NULL.
    
    Signed-off-by: David Vrabel <david.vrabel@citrix.com>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    xen-unstable changeset:   25706:7fd5facb6084
    xen-unstable date:        Fri Aug 03 09:50:28 2012 +0200
    
    
changeset:   21608:a51c86b407d7
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Aug 09 15:47:19 2012 +0100
    
    xen: only check for shared pages while any exist on teardown
    
    Avoids worst case behaviour when guest has a large p2m.
    
    This is XSA-11 / CVE-2012-3433
    
    Signed-off-by: Tim Deegan <tim@xen.org>
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Tested-by: Olaf Hering <olaf@aepfle.de>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
changeset:   21607:6d7ae840463c
user:        Jan Beulich <jbeulich@suse.com>
date:        Mon Jul 30 13:39:47 2012 +0100
    
    x86: fix off-by-one in nr_irqs_gsi calculation
    
    highest_gsi() returns the last valid GSI, not a count.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Joe Jin <joe.jin@oracle.com>
    Acked-by: Keir Fraser <keir@xen.org>
    xen-unstable changeset:   25688:e6266fc76d08
    xen-unstable date:        Fri Jul 27 12:22:13 2012 +0200
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 01:41:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 01:41:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzeEG-0003I8-8a; Fri, 10 Aug 2012 01:41:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Santosh.Jodh@citrix.com>) id 1SzeEE-0003I3-5v
	for xen-devel@lists.xen.org; Fri, 10 Aug 2012 01:41:30 +0000
Received: from [85.158.143.35:41576] by server-2.bemta-4.messagelabs.com id
	F4/81-19021-9C664205; Fri, 10 Aug 2012 01:41:29 +0000
X-Env-Sender: Santosh.Jodh@citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1344562886!11803888!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzAyNjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21558 invoked from network); 10 Aug 2012 01:41:28 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Aug 2012 01:41:28 -0000
X-IronPort-AV: E=Sophos;i="4.77,743,1336363200"; d="scan'208";a="204767737"
Received: from sjcpmailmx02.citrite.net ([10.216.14.75])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	09 Aug 2012 21:41:25 -0400
Received: from SJCPMAILBOX01.citrite.net ([10.216.4.72]) by
	SJCPMAILMX02.citrite.net ([10.216.14.75]) with mapi;
	Thu, 9 Aug 2012 18:41:24 -0700
From: Santosh Jodh <Santosh.Jodh@citrix.com>
To: Jan Beulich <JBeulich@suse.com>, "wei.wang2@amd.com" <wei.wang2@amd.com>
Date: Thu, 9 Aug 2012 18:41:06 -0700
Thread-Topic: [Xen-devel] [PATCH] dump_p2m_table: For IOMMU
Thread-Index: Ac12AEDG9mgMwO7sQ0+FKlZ6dQ0sQQAl6sqQ
Message-ID: <7914B38A4445B34AA16EB9F1352942F1012F0E1E9857@SJCPMAILBOX01.citrite.net>
References: <8deb7c7a25c4a3bc50d7.1344446255@REDBLD-XS.ad.xensource.com>
	<5023822E0200007800093D2A@nat28.tlf.novell.com>
In-Reply-To: <5023822E0200007800093D2A@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
MIME-Version: 1.0
Cc: "Tim \(Xen.org\)" <tim@xen.org>,
	"xiantao.zhang@intel.com" <xiantao.zhang@intel.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] dump_p2m_table: For IOMMU
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Thanks for the detailed feedback. Please see inline:


> +    int index, next_level, present;
> +    u32 *entry;
> +
> +    if ( level <= 1 )
> +        return;
> +
> +    table_vaddr = __map_domain_page(pg);
> +    if ( table_vaddr == NULL )
> +    {
> +        printk("Failed to map IOMMU domain page %016"PRIx64"\n", 
> + page_to_maddr(pg));

We specifically have PRIpaddr for this purpose.
[Santosh Jodh] Ah - one more format specifier :-). Will change it.
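PRIpaddr is a Xen-internal specifier for paddr_t; outside Xen the same idea is the standard <inttypes.h> PRIx64 family. A minimal sketch of the pattern (helper name is illustrative):

```c
#include <assert.h>
#include <inttypes.h>
#include <stdio.h>
#include <string.h>

/* Format a 64-bit machine address with a width-correct specifier.
 * In Xen the analogous specifier for paddr_t is PRIpaddr; here we
 * use the portable PRIx64 macro from <inttypes.h>. */
static int format_maddr(char *buf, size_t len, uint64_t maddr)
{
    return snprintf(buf, len, "%016" PRIx64, maddr);
}
```

Using a type-matched macro avoids the undefined behaviour of passing a 64-bit value to a `%x`/`%lx` conversion whose width differs by platform.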

> +        }
> +
> +        if ( present )
> +        {
> +            printk("gfn: %016"PRIx64"  mfn: %016"PRIx64"\n",
> +                   address >> PAGE_SHIFT, next_table_maddr >> 
> + PAGE_SHIFT);

I'd prefer you to use PFN_DOWN() here.
[Santosh Jodh] ok.
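PFN_DOWN() is Xen's address-to-frame-number shift; a sketch of the equivalence being asked for, with the macro reproduced here as an assumption (using the common 4K page shift):

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT_4K 12
/* Assumed to mirror Xen's PFN_DOWN(): physical address -> frame number. */
#define PFN_DOWN(x) ((x) >> PAGE_SHIFT_4K)

/* The macro form and the open-coded shift produce the same value;
 * the macro just states the intent. */
static uint64_t addr_to_gfn(uint64_t address)
{
    return PFN_DOWN(address);
}
```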

Also, depth first, as requested by Tim, to me doesn't mean recursing before printing. I think you really want to print first, then recurse. Otherwise how would the output be made sense of?
[Santosh Jodh] It gives a simple gfn -> mfn map. With nested printing, you get the PD and PT addresses printed - which are not super useful. Anyway, I will change it.
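The print-first-then-recurse order (plus the per-level indenting suggested later in the thread) can be sketched on a toy tree; the structure and names are illustrative, not Xen's page-table types:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

struct node {
    const char *name;
    struct node *child[2];
};

/* Pre-order walk: emit the current entry *before* descending, indented
 * by recursion depth, so the output reads as a nested table dump. */
static void dump_level(struct node *n, int depth, char *out, size_t len)
{
    size_t used;
    int i;

    if ( n == NULL )
        return;
    used = strlen(out);
    snprintf(out + used, len - used, "%*s%s\n", depth * 2, "", n->name);
    for ( i = 0; i < 2; i++ )
        dump_level(n->child[i], depth + 1, out, len);
}
```

Printing before recursing means each intermediate entry appears above its children, so the indentation alone conveys the table hierarchy.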

>  static void __init parse_iommu_param(char *s)  {
>      char *ss;
> @@ -119,6 +122,7 @@ void __init iommu_dom0_init(struct domai
>      if ( !iommu_enabled )
>          return;
>  
> +    setup_iommu_dump();
>      d->need_iommu = !!iommu_dom0_strict;
>      if ( need_iommu(d) )
>      {
>...
> +void __init setup_iommu_dump(void)
> +{
> +    register_keyhandler('o', &iommu_p2m_table); }

Furthermore, there's no real need for a separate function here anyway. Just call register_keyhandler() directly. Or alternatively this ought to match other code doing the same - using an initcall.
[Santosh Jodh] will just call register_keyhandler. I had to rearrange code in the file to avoid forward declarations. So much for trying to contain my changes to the end of the file :-)

> --- a/xen/drivers/passthrough/vtd/iommu.c	Tue Aug 07 18:37:31 2012 +0100
> +++ b/xen/drivers/passthrough/vtd/iommu.c	Wed Aug 08 09:56:50 2012 -0700
> +static void vtd_dump_p2m_table_level(u64 pt_maddr, int level, u64 
> +gpa) {
> +    u64 address;

Again, both gpa and address ought to be paddr_t, and the format specifiers should match.
[Santosh Jodh] will do.

> +    int i;
> +    struct dma_pte *pt_vaddr, *pte;
> +    int next_level;
> +
> +    if ( pt_maddr == 0 )
> +        return;
> +
> +    pt_vaddr = (struct dma_pte *)map_vtd_domain_page(pt_maddr);

Pointless cast.
[Santosh Jodh] yep - gone.

> +    if ( pt_vaddr == NULL )
> +    {
> +        printk("Failed to map VT-D domain page %016"PRIx64"\n", pt_maddr);
> +        return;
> +    }
> +
> +    next_level = level - 1;
> +    for ( i = 0; i < PTE_NUM; i++ )
> +    {
> +        if ( !(i % 2) )
> +            process_pending_softirqs();
> +
> +        pte = &pt_vaddr[i];
> +        if ( !dma_pte_present(*pte) )
> +            continue;
> +        
> +        address = gpa + offset_level_address(i, level);
> +        if ( next_level >= 1 )
> +            vtd_dump_p2m_table_level(dma_pte_addr(*pte), next_level, 
> + address);
> +
> +        if ( level == 1 )
> +            printk("gfn: %016"PRIx64" mfn: %016"PRIx64" superpage=%d\n", 
> +                    address >> PAGE_SHIFT_4K, pte->val >> 
> + PAGE_SHIFT_4K, dma_pte_superpage(*pte)? 1 : 0);

Why do you print leaf (level 1) tables here only?
[Santosh Jodh] as I said above - I was dumping gfn -> mfn. I will make it indent and print all levels.

And the last line certainly is above 80 chars, so needs breaking up.
[Santosh Jodh] yep - done.

(Also, just to avoid you needing to do another iteration: Don't switch to PFN_DOWN() here.)
[Santosh Jodh] ok

I further wonder whether "superpage" alone is enough - don't we have both 2M and 1G pages? Of course, that would become mute if higher levels got also dumped (as then this knowledge is implicit).

Which reminds me to ask that both here and in the AMD code the recursion level should probably be reflected by indenting the printed strings.
[Santosh Jodh] will print indented and all levels.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 01:41:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 01:41:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzeEG-0003I8-8a; Fri, 10 Aug 2012 01:41:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Santosh.Jodh@citrix.com>) id 1SzeEE-0003I3-5v
	for xen-devel@lists.xen.org; Fri, 10 Aug 2012 01:41:30 +0000
Received: from [85.158.143.35:41576] by server-2.bemta-4.messagelabs.com id
	F4/81-19021-9C664205; Fri, 10 Aug 2012 01:41:29 +0000
X-Env-Sender: Santosh.Jodh@citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1344562886!11803888!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzAyNjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21558 invoked from network); 10 Aug 2012 01:41:28 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Aug 2012 01:41:28 -0000
X-IronPort-AV: E=Sophos;i="4.77,743,1336363200"; d="scan'208";a="204767737"
Received: from sjcpmailmx02.citrite.net ([10.216.14.75])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	09 Aug 2012 21:41:25 -0400
Received: from SJCPMAILBOX01.citrite.net ([10.216.4.72]) by
	SJCPMAILMX02.citrite.net ([10.216.14.75]) with mapi;
	Thu, 9 Aug 2012 18:41:24 -0700
From: Santosh Jodh <Santosh.Jodh@citrix.com>
To: Jan Beulich <JBeulich@suse.com>, "wei.wang2@amd.com" <wei.wang2@amd.com>
Date: Thu, 9 Aug 2012 18:41:06 -0700
Thread-Topic: [Xen-devel] [PATCH] dump_p2m_table: For IOMMU
Thread-Index: Ac12AEDG9mgMwO7sQ0+FKlZ6dQ0sQQAl6sqQ
Message-ID: <7914B38A4445B34AA16EB9F1352942F1012F0E1E9857@SJCPMAILBOX01.citrite.net>
References: <8deb7c7a25c4a3bc50d7.1344446255@REDBLD-XS.ad.xensource.com>
	<5023822E0200007800093D2A@nat28.tlf.novell.com>
In-Reply-To: <5023822E0200007800093D2A@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
MIME-Version: 1.0
Cc: "Tim \(Xen.org\)" <tim@xen.org>,
	"xiantao.zhang@intel.com" <xiantao.zhang@intel.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] dump_p2m_table: For IOMMU
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Thanks for the detailed feedback. Please see inline:


> +    int index, next_level, present;
> +    u32 *entry;
> +
> +    if ( level <= 1 )
> +        return;
> +
> +    table_vaddr = __map_domain_page(pg);
> +    if ( table_vaddr == NULL )
> +    {
> +        printk("Failed to map IOMMU domain page %016"PRIx64"\n", 
> + page_to_maddr(pg));

We specifically have PRIpaddr for this purpose.
[Santosh Jodh] Ah - one more format specifier :-). Will change it.
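For readers outside the Xen tree: PRIpaddr plays the same role as the C99 PRIx64-style macros, but for paddr_t. A minimal stand-alone sketch (the typedef and the macro expansion here are assumptions for illustration; the real definitions are per-architecture in Xen's headers):

```c
#include <inttypes.h>
#include <stdio.h>

/* Stand-in definitions -- assumptions for this sketch; in Xen, paddr_t and
 * PRIpaddr are defined per architecture. */
typedef uint64_t paddr_t;
#define PRIpaddr "016" PRIx64

/* Format a machine address the way the error message in the patch does. */
static int print_map_failure(paddr_t maddr)
{
    return printf("Failed to map IOMMU domain page %" PRIpaddr "\n", maddr);
}
```

The point of the macro is that callers never hard-code the width of paddr_t, so the same printk works on 32- and 64-bit builds.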

> +        }
> +
> +        if ( present )
> +        {
> +            printk("gfn: %016"PRIx64"  mfn: %016"PRIx64"\n",
> +                   address >> PAGE_SHIFT, next_table_maddr >> 
> + PAGE_SHIFT);

I'd prefer you to use PFN_DOWN() here.
[Santosh Jodh] ok.
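PFN_DOWN() just names the byte-address-to-frame-number shift; a stand-alone sketch (PAGE_SHIFT fixed at 12 here for illustration, matching 4K pages):

```c
#include <stdint.h>

#define PAGE_SHIFT 12                     /* assumption: 4K pages */
/* Xen's PFN_DOWN(): byte address -> page frame number. */
#define PFN_DOWN(x) ((x) >> PAGE_SHIFT)

/* The two forms being compared produce the same value: */
static uint64_t by_shift(uint64_t a) { return a >> PAGE_SHIFT; }
static uint64_t by_macro(uint64_t a) { return PFN_DOWN(a); }
```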

Also, depth first, as requested by Tim, to me doesn't mean recursing before printing. I think you really want to print first, then recurse. Otherwise how would the output be made sense of?
[Santosh Jodh] it gives a simple gfn -> mfn map. With nested printing, you get the PD and PT addresses printed - which are not super useful. Anyway, I will change it.
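The print-first-then-recurse ordering discussed above can be sketched outside Xen with a hypothetical tree type (the node layout and 4-way fanout are illustration only, not the real page-table format):

```c
#include <stddef.h>
#include <stdio.h>

/* Hypothetical in-memory table node -- illustration only, not Xen code. */
struct node {
    unsigned long gfn, mfn;
    struct node *child[4];      /* next-level tables; NULL when absent */
};

/* Print-first, then recurse: each entry is emitted before its subtree,
 * indented by depth, so the dump reads top-down.  Returns the number of
 * entries printed so the traversal is easy to check. */
static int dump(const struct node *n, int depth)
{
    int i, printed = 1;

    if ( n == NULL )
        return 0;

    for ( i = 0; i < depth; i++ )
        printf("  ");
    printf("gfn: %lx mfn: %lx\n", n->gfn, n->mfn);

    for ( i = 0; i < 4; i++ )
        printed += dump(n->child[i], depth + 1);

    return printed;
}
```

With this ordering a parent entry always appears directly above its indented children, which is what makes the indented dump readable.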

>  static void __init parse_iommu_param(char *s)  {
>      char *ss;
> @@ -119,6 +122,7 @@ void __init iommu_dom0_init(struct domai
>      if ( !iommu_enabled )
>          return;
>  
> +    setup_iommu_dump();
>      d->need_iommu = !!iommu_dom0_strict;
>      if ( need_iommu(d) )
>      {
>...
> +void __init setup_iommu_dump(void)
> +{
> +    register_keyhandler('o', &iommu_p2m_table); }

Furthermore, there's no real need for a separate function here anyway. Just call register_keyhandler() directly. Or alternatively this ought to match other code doing the same - using an initcall.
[Santosh Jodh] will just call register_keyhandler. I had to rearrange code in the file to avoid forward declarations. So much for trying to contain my changes to the end of the file :-)

> --- a/xen/drivers/passthrough/vtd/iommu.c	Tue Aug 07 18:37:31 2012 +0100
> +++ b/xen/drivers/passthrough/vtd/iommu.c	Wed Aug 08 09:56:50 2012 -0700
> +static void vtd_dump_p2m_table_level(u64 pt_maddr, int level, u64 
> +gpa) {
> +    u64 address;

Again, both gpa and address ought to be paddr_t, and the format specifiers should match.
[Santosh Jodh] will do.

> +    int i;
> +    struct dma_pte *pt_vaddr, *pte;
> +    int next_level;
> +
> +    if ( pt_maddr == 0 )
> +        return;
> +
> +    pt_vaddr = (struct dma_pte *)map_vtd_domain_page(pt_maddr);

Pointless cast.
[Santosh Jodh] yep - gone.

> +    if ( pt_vaddr == NULL )
> +    {
> +        printk("Failed to map VT-D domain page %016"PRIx64"\n", pt_maddr);
> +        return;
> +    }
> +
> +    next_level = level - 1;
> +    for ( i = 0; i < PTE_NUM; i++ )
> +    {
> +        if ( !(i % 2) )
> +            process_pending_softirqs();
> +
> +        pte = &pt_vaddr[i];
> +        if ( !dma_pte_present(*pte) )
> +            continue;
> +        
> +        address = gpa + offset_level_address(i, level);
> +        if ( next_level >= 1 )
> +            vtd_dump_p2m_table_level(dma_pte_addr(*pte), next_level, 
> + address);
> +
> +        if ( level == 1 )
> +            printk("gfn: %016"PRIx64" mfn: %016"PRIx64" superpage=%d\n", 
> +                    address >> PAGE_SHIFT_4K, pte->val >> 
> + PAGE_SHIFT_4K, dma_pte_superpage(*pte)? 1 : 0);

Why do you print leaf (level 1) tables here only?
[Santosh Jodh] as I said above - I was dumping gfn -> mfn. I will make it indent and print all levels.

And the last line certainly is above 80 chars, so needs breaking up.
[Santosh Jodh] yep - done.

(Also, just to avoid you needing to do another iteration: Don't switch to PFN_DOWN() here.)
[Santosh Jodh] ok

I further wonder whether "superpage" alone is enough - don't we have both 2M and 1G pages? Of course, that would become moot if higher levels also got dumped (as then this knowledge is implicit).
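The level-to-size relationship that makes the superpage size implicit can be sketched with the same 9-bit stride the VT-d code uses (the patch's level_to_offset_bits(l) is 12 + (l - 1) * LEVEL_STRIDE):

```c
#include <stdint.h>

#define LEVEL_STRIDE 9   /* VT-d: 512 entries per page-table level */

/* Size mapped by a single entry at a given level; once the dump shows the
 * level of each entry, the superpage size is implicit:
 * level 1 -> 4K, level 2 -> 2M, level 3 -> 1G. */
static uint64_t level_to_page_size(int level)
{
    return (uint64_t)1 << (12 + (level - 1) * LEVEL_STRIDE);
}
```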

Which reminds me to ask that both here and in the AMD code the recursion level should probably be reflected by indenting the printed strings.
[Santosh Jodh] will print indented and all levels.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 01:43:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 01:43:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzeFu-0003MN-Oj; Fri, 10 Aug 2012 01:43:14 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Santosh.Jodh@citrix.com>) id 1SzeFt-0003MF-Ln
	for xen-devel@lists.xensource.com; Fri, 10 Aug 2012 01:43:14 +0000
Received: from [85.158.138.51:60686] by server-5.bemta-3.messagelabs.com id
	CF/0B-27557-03764205; Fri, 10 Aug 2012 01:43:12 +0000
X-Env-Sender: Santosh.Jodh@citrix.com
X-Msg-Ref: server-4.tower-174.messagelabs.com!1344562990!27482148!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzAyNjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30364 invoked from network); 10 Aug 2012 01:43:11 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Aug 2012 01:43:11 -0000
X-IronPort-AV: E=Sophos;i="4.77,743,1336363200"; d="scan'208";a="204767876"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	09 Aug 2012 21:43:10 -0400
Received: from REDBLD-XS.ad.xensource.com (10.232.6.25) by
	smtprelay.citrix.com (10.13.107.66) with Microsoft SMTP Server id
	8.3.213.0; Thu, 9 Aug 2012 21:43:10 -0400
MIME-Version: 1.0
X-Mercurial-Node: f687c55802629c72405d88b08669cd84ba3a34ec
Message-ID: <f687c55802629c72405d.1344562989@REDBLD-XS.ad.xensource.com>
User-Agent: Mercurial-patchbomb/1.6.4
Date: Thu, 9 Aug 2012 18:43:09 -0700
From: Santosh Jodh <santosh.jodh@citrix.com>
To: <xen-devel@lists.xensource.com>
Cc: wei.wang2@amd.com, tim@xen.org, xiantao.zhang@intel.com
Subject: [Xen-devel] [PATCH] dump_p2m_table: For IOMMU
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

New key handler 'o' to dump the IOMMU p2m table for each domain.
Skips dumping table for domain0.
Intel and AMD specific iommu_ops handler for dumping p2m table.

Signed-off-by: Santosh Jodh <santosh.jodh@citrix.com>

diff -r 472fc515a463 -r f687c5580262 xen/drivers/passthrough/amd/pci_amd_iommu.c
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c	Tue Aug 07 18:37:31 2012 +0100
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c	Thu Aug 09 18:31:32 2012 -0700
@@ -22,6 +22,7 @@
 #include <xen/pci.h>
 #include <xen/pci_regs.h>
 #include <xen/paging.h>
+#include <xen/softirq.h>
 #include <asm/hvm/iommu.h>
 #include <asm/amd-iommu.h>
 #include <asm/hvm/svm/amd-iommu-proto.h>
@@ -512,6 +513,81 @@ static int amd_iommu_group_id(u16 seg, u
 
 #include <asm/io_apic.h>
 
+static void amd_dump_p2m_table_level(struct page_info* pg, int level, 
+                                     paddr_t gpa, int *indent)
+{
+    paddr_t address;
+    void *table_vaddr, *pde;
+    paddr_t next_table_maddr;
+    int index, next_level, present;
+    u32 *entry;
+
+    if ( level <= 1 )
+        return;
+
+    table_vaddr = __map_domain_page(pg);
+    if ( table_vaddr == NULL )
+    {
+        printk("Failed to map IOMMU domain page %"PRIpaddr"\n", 
+                page_to_maddr(pg));
+        return;
+    }
+
+    for ( index = 0; index < PTE_PER_TABLE_SIZE; index++ )
+    {
+        if ( !(index % 2) )
+            process_pending_softirqs();
+
+        pde = table_vaddr + (index * IOMMU_PAGE_TABLE_ENTRY_SIZE);
+        next_table_maddr = amd_iommu_get_next_table_from_pte(pde);
+        entry = (u32*)pde;
+
+        next_level = get_field_from_reg_u32(entry[0],
+                                            IOMMU_PDE_NEXT_LEVEL_MASK,
+                                            IOMMU_PDE_NEXT_LEVEL_SHIFT);
+
+        present = get_field_from_reg_u32(entry[0],
+                                         IOMMU_PDE_PRESENT_MASK,
+                                         IOMMU_PDE_PRESENT_SHIFT);
+
+        address = gpa + amd_offset_level_address(index, level);
+        if ( present )
+        {
+            int i;
+
+            for ( i = 0; i < *indent; i++ )
+                printk("  ");
+
+            printk("gfn: %"PRIpaddr"  mfn: %"PRIpaddr"\n",
+                   PFN_DOWN(address), PFN_DOWN(next_table_maddr));
+
+            if ( (next_table_maddr != 0) && (next_level != 0) )
+            {
+                *indent += 1;
+                amd_dump_p2m_table_level(
+                    maddr_to_page(next_table_maddr), level - 1, 
+                    address, indent);
+
+                *indent -= 1;
+            }     
+        }
+    }
+
+    unmap_domain_page(table_vaddr);
+}
+
+static void amd_dump_p2m_table(struct domain *d)
+{
+    struct hvm_iommu *hd  = domain_hvm_iommu(d);
+    int indent;
+
+    if ( !hd->root_table ) 
+        return;
+
+    indent = 0;
+    amd_dump_p2m_table_level(hd->root_table, hd->paging_mode, 0, &indent);
+}
+
 const struct iommu_ops amd_iommu_ops = {
     .init = amd_iommu_domain_init,
     .dom0_init = amd_iommu_dom0_init,
@@ -531,4 +607,5 @@ const struct iommu_ops amd_iommu_ops = {
     .resume = amd_iommu_resume,
     .share_p2m = amd_iommu_share_p2m,
     .crash_shutdown = amd_iommu_suspend,
+    .dump_p2m_table = amd_dump_p2m_table,
 };
diff -r 472fc515a463 -r f687c5580262 xen/drivers/passthrough/iommu.c
--- a/xen/drivers/passthrough/iommu.c	Tue Aug 07 18:37:31 2012 +0100
+++ b/xen/drivers/passthrough/iommu.c	Thu Aug 09 18:31:32 2012 -0700
@@ -18,11 +18,13 @@
 #include <asm/hvm/iommu.h>
 #include <xen/paging.h>
 #include <xen/guest_access.h>
+#include <xen/keyhandler.h>
 #include <xen/softirq.h>
 #include <xsm/xsm.h>
 
 static void parse_iommu_param(char *s);
 static int iommu_populate_page_table(struct domain *d);
+static void iommu_dump_p2m_table(unsigned char key);
 
 /*
  * The 'iommu' parameter enables the IOMMU.  Optional comma separated
@@ -54,6 +56,12 @@ bool_t __read_mostly amd_iommu_perdev_in
 
 DEFINE_PER_CPU(bool_t, iommu_dont_flush_iotlb);
 
+static struct keyhandler iommu_p2m_table = {
+    .diagnostic = 0,
+    .u.fn = iommu_dump_p2m_table,
+    .desc = "dump iommu p2m table"
+};
+
 static void __init parse_iommu_param(char *s)
 {
     char *ss;
@@ -119,6 +127,7 @@ void __init iommu_dom0_init(struct domai
     if ( !iommu_enabled )
         return;
 
+    register_keyhandler('o', &iommu_p2m_table);
     d->need_iommu = !!iommu_dom0_strict;
     if ( need_iommu(d) )
     {
@@ -654,6 +663,34 @@ int iommu_do_domctl(
     return ret;
 }
 
+static void iommu_dump_p2m_table(unsigned char key)
+{
+    struct domain *d;
+    const struct iommu_ops *ops;
+
+    if ( !iommu_enabled )
+    {
+        printk("IOMMU not enabled!\n");
+        return;
+    }
+
+    ops = iommu_get_ops();
+    for_each_domain(d)
+    {
+        if ( !d->domain_id )
+            continue;
+
+        if ( iommu_use_hap_pt(d) )
+        {
+            printk("\ndomain%d IOMMU p2m table shared with MMU: \n", d->domain_id);
+            continue;
+        }
+
+        printk("\ndomain%d IOMMU p2m table: \n", d->domain_id);
+        ops->dump_p2m_table(d);
+    }
+}
+
 /*
  * Local variables:
  * mode: C
diff -r 472fc515a463 -r f687c5580262 xen/drivers/passthrough/vtd/iommu.c
--- a/xen/drivers/passthrough/vtd/iommu.c	Tue Aug 07 18:37:31 2012 +0100
+++ b/xen/drivers/passthrough/vtd/iommu.c	Thu Aug 09 18:31:32 2012 -0700
@@ -31,6 +31,7 @@
 #include <xen/pci.h>
 #include <xen/pci_regs.h>
 #include <xen/keyhandler.h>
+#include <xen/softirq.h>
 #include <asm/msi.h>
 #include <asm/irq.h>
 #if defined(__i386__) || defined(__x86_64__)
@@ -2365,6 +2366,71 @@ static void vtd_resume(void)
     }
 }
 
+static void vtd_dump_p2m_table_level(paddr_t pt_maddr, int level, paddr_t gpa, 
+                                     int *indent)
+{
+    paddr_t address;
+    int i;
+    struct dma_pte *pt_vaddr, *pte;
+    int next_level;
+
+    if ( pt_maddr == 0 )
+        return;
+
+    pt_vaddr = map_vtd_domain_page(pt_maddr);
+    if ( pt_vaddr == NULL )
+    {
+        printk("Failed to map VT-D domain page %"PRIpaddr"\n", pt_maddr);
+        return;
+    }
+
+    next_level = level - 1;
+    for ( i = 0; i < PTE_NUM; i++ )
+    {
+        if ( !(i % 2) )
+            process_pending_softirqs();
+
+        pte = &pt_vaddr[i];
+        if ( dma_pte_present(*pte) )
+        {
+            int j;
+
+            for ( j = 0; j < *indent; j++ )
+                printk("  ");
+
+            address = gpa + offset_level_address(i, level);
+            printk("gfn: %"PRIpaddr" mfn: %"PRIpaddr" super=%d rd=%d wr=%d\n",
+                    address >> PAGE_SHIFT_4K, pte->val >> PAGE_SHIFT_4K,
+                    dma_pte_superpage(*pte)? 1 : 0, dma_pte_read(*pte)? 1 : 0,
+                    dma_pte_write(*pte)? 1 : 0);
+
+            if ( next_level >= 1 ) {
+                *indent += 1;
+                vtd_dump_p2m_table_level(dma_pte_addr(*pte), next_level, 
+                                         address, indent);
+
+                *indent -= 1;
+            }
+        }
+    }
+
+    unmap_vtd_domain_page(pt_vaddr);
+}
+
+static void vtd_dump_p2m_table(struct domain *d)
+{
+    struct hvm_iommu *hd;
+    int indent;
+
+    if ( list_empty(&acpi_drhd_units) )
+        return;
+
+    hd = domain_hvm_iommu(d);
+    indent = 0;
+    vtd_dump_p2m_table_level(hd->pgd_maddr, agaw_to_level(hd->agaw), 
+                             0, &indent);
+}
+
 const struct iommu_ops intel_iommu_ops = {
     .init = intel_iommu_domain_init,
     .dom0_init = intel_iommu_dom0_init,
@@ -2387,6 +2453,7 @@ const struct iommu_ops intel_iommu_ops =
     .crash_shutdown = vtd_crash_shutdown,
     .iotlb_flush = intel_iommu_iotlb_flush,
     .iotlb_flush_all = intel_iommu_iotlb_flush_all,
+    .dump_p2m_table = vtd_dump_p2m_table,
 };
 
 /*
diff -r 472fc515a463 -r f687c5580262 xen/drivers/passthrough/vtd/iommu.h
--- a/xen/drivers/passthrough/vtd/iommu.h	Tue Aug 07 18:37:31 2012 +0100
+++ b/xen/drivers/passthrough/vtd/iommu.h	Thu Aug 09 18:31:32 2012 -0700
@@ -248,6 +248,8 @@ struct context_entry {
 #define level_to_offset_bits(l) (12 + (l - 1) * LEVEL_STRIDE)
 #define address_level_offset(addr, level) \
             ((addr >> level_to_offset_bits(level)) & LEVEL_MASK)
+#define offset_level_address(offset, level) \
+            ((u64)(offset) << level_to_offset_bits(level))
 #define level_mask(l) (((u64)(-1)) << level_to_offset_bits(l))
 #define level_size(l) (1 << level_to_offset_bits(l))
 #define align_to_level(addr, l) ((addr + level_size(l) - 1) & level_mask(l))
@@ -277,6 +279,9 @@ struct dma_pte {
 #define dma_set_pte_addr(p, addr) do {\
             (p).val |= ((addr) & PAGE_MASK_4K); } while (0)
 #define dma_pte_present(p) (((p).val & 3) != 0)
+#define dma_pte_superpage(p) (((p).val & (1<<7)) != 0)
+#define dma_pte_read(p) (((p).val & DMA_PTE_READ) != 0)
+#define dma_pte_write(p) (((p).val & DMA_PTE_WRITE) != 0)
 
 /* interrupt remap entry */
 struct iremap_entry {
diff -r 472fc515a463 -r f687c5580262 xen/include/asm-x86/hvm/svm/amd-iommu-defs.h
--- a/xen/include/asm-x86/hvm/svm/amd-iommu-defs.h	Tue Aug 07 18:37:31 2012 +0100
+++ b/xen/include/asm-x86/hvm/svm/amd-iommu-defs.h	Thu Aug 09 18:31:32 2012 -0700
@@ -38,6 +38,10 @@
 #define PTE_PER_TABLE_ALLOC(entries)	\
 	PAGE_SIZE * (PTE_PER_TABLE_ALIGN(entries) >> PTE_PER_TABLE_SHIFT)
 
+#define amd_offset_level_address(offset, level) \
+      	((u64)(offset) << ((PTE_PER_TABLE_SHIFT * \
+                             (level - IOMMU_PAGING_MODE_LEVEL_1))))
+
 #define PCI_MIN_CAP_OFFSET	0x40
 #define PCI_MAX_CAP_BLOCKS	48
 #define PCI_CAP_PTR_MASK	0xFC
diff -r 472fc515a463 -r f687c5580262 xen/include/xen/iommu.h
--- a/xen/include/xen/iommu.h	Tue Aug 07 18:37:31 2012 +0100
+++ b/xen/include/xen/iommu.h	Thu Aug 09 18:31:32 2012 -0700
@@ -141,6 +141,7 @@ struct iommu_ops {
     void (*crash_shutdown)(void);
     void (*iotlb_flush)(struct domain *d, unsigned long gfn, unsigned int page_count);
     void (*iotlb_flush_all)(struct domain *d);
+    void (*dump_p2m_table)(struct domain *d);
 };
 
 void iommu_update_ire_from_apic(unsigned int apic, unsigned int reg, unsigned int value);

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--- a/xen/include/asm-x86/hvm/svm/amd-iommu-defs.h	Tue Aug 07 18:37:31 2012 +0100
+++ b/xen/include/asm-x86/hvm/svm/amd-iommu-defs.h	Thu Aug 09 18:31:32 2012 -0700
@@ -38,6 +38,10 @@
 #define PTE_PER_TABLE_ALLOC(entries)	\
 	PAGE_SIZE * (PTE_PER_TABLE_ALIGN(entries) >> PTE_PER_TABLE_SHIFT)
 
+#define amd_offset_level_address(offset, level) \
+      	((u64)(offset) << ((PTE_PER_TABLE_SHIFT * \
+                             (level - IOMMU_PAGING_MODE_LEVEL_1))))
+
 #define PCI_MIN_CAP_OFFSET	0x40
 #define PCI_MAX_CAP_BLOCKS	48
 #define PCI_CAP_PTR_MASK	0xFC
diff -r 472fc515a463 -r f687c5580262 xen/include/xen/iommu.h
--- a/xen/include/xen/iommu.h	Tue Aug 07 18:37:31 2012 +0100
+++ b/xen/include/xen/iommu.h	Thu Aug 09 18:31:32 2012 -0700
@@ -141,6 +141,7 @@ struct iommu_ops {
     void (*crash_shutdown)(void);
     void (*iotlb_flush)(struct domain *d, unsigned long gfn, unsigned int page_count);
     void (*iotlb_flush_all)(struct domain *d);
+    void (*dump_p2m_table)(struct domain *d);
 };
 
 void iommu_update_ire_from_apic(unsigned int apic, unsigned int reg, unsigned int value);

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 03:11:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 03:11:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Szfck-0004Gs-Q1; Fri, 10 Aug 2012 03:10:54 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Szfcj-0004Gn-JW
	for xen-devel@lists.xensource.com; Fri, 10 Aug 2012 03:10:53 +0000
Received: from [85.158.143.99:55765] by server-3.bemta-4.messagelabs.com id
	EA/B4-31486-BBB74205; Fri, 10 Aug 2012 03:10:51 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-10.tower-216.messagelabs.com!1344568251!21078008!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg3OTk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27414 invoked from network); 10 Aug 2012 03:10:51 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-10.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Aug 2012 03:10:51 -0000
X-IronPort-AV: E=Sophos;i="4.77,743,1336348800"; d="scan'208";a="13943724"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Aug 2012 03:10:50 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 10 Aug 2012 04:10:50 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Szfcg-00024D-HJ;
	Fri, 10 Aug 2012 03:10:50 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Szfcg-0001Kz-BY;
	Fri, 10 Aug 2012 04:10:50 +0100
To: xen-devel@lists.xensource.com
Message-ID: <osstest-13581-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 10 Aug 2012 04:10:50 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 13581: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13581 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13581/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-pcipt-intel  8 debian-fixup                fail like 13569
 test-amd64-amd64-xl-sedf      5 xen-boot                     fail   like 13569

Tests which did not succeed, but are not blocking:
 test-amd64-i386-rhel6hvm-amd 11 leak-check/check             fail   never pass
 test-amd64-i386-qemuu-rhel6hvm-amd 11 leak-check/check         fail never pass
 test-amd64-i386-rhel6hvm-intel 11 leak-check/check             fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-i386-qemuu-rhel6hvm-intel 11 leak-check/check       fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  1225aff05dd2
baseline version:
 xen                  f8f8912b3de0

------------------------------------------------------------
People who touched revisions under test:
  David Vrabel <david.vrabel@citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Olaf Hering <olaf@aepfle.de>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-4.1-testing
+ revision=1225aff05dd2
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-4.1-testing 1225aff05dd2
+ branch=xen-4.1-testing
+ revision=1225aff05dd2
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-4.1-testing
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/staging/xen-4.1-testing.hg
++ : git://xenbits.xen.org/staging/qemu-xen-4.1-testing.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-4.1-testing
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-4.1-testing.git
++ : daily-cron.xen-4.1-testing
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-4.1-testing.git
+ info_linux_tree xen-4.1-testing
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-4.1-testing.hg
+ hg push -r 1225aff05dd2 ssh://xen@xenbits.xensource.com/HG/xen-4.1-testing.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-4.1-testing.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 5 changesets with 6 changes to 6 files

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 03:22:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 03:22:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Szfo0-0004ad-EA; Fri, 10 Aug 2012 03:22:32 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <aware.why@gmail.com>) id 1Szfnz-0004aY-Kx
	for xen-devel@lists.xensource.com; Fri, 10 Aug 2012 03:22:31 +0000
Received: from [85.158.139.83:20395] by server-11.bemta-5.messagelabs.com id
	E7/BB-11482-67E74205; Fri, 10 Aug 2012 03:22:30 +0000
X-Env-Sender: aware.why@gmail.com
X-Msg-Ref: server-3.tower-182.messagelabs.com!1344568948!29252438!1
X-Originating-IP: [74.125.83.43]
X-SpamReason: No, hits=0.6 required=7.0 tests=HTML_MESSAGE,
	MIME_BASE64_TEXT,ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11543 invoked from network); 10 Aug 2012 03:22:28 -0000
Received: from mail-ee0-f43.google.com (HELO mail-ee0-f43.google.com)
	(74.125.83.43)
	by server-3.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Aug 2012 03:22:28 -0000
Received: by eekd4 with SMTP id d4so310045eek.30
	for <xen-devel@lists.xensource.com>;
	Thu, 09 Aug 2012 20:22:28 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=E2sadaUn8RAJBm7RNOdRTRZ+YjLhQbPezqaaK9lHkIw=;
	b=XIq1Ral7qBI62i49J0b2ZTh6BYXWgPNA4QxdYOg9c4j+k1lqv1nrbvFJX9BiUAYmG2
	t3jUa7jTSzwRvFauNDPHwQ3DVkulHK/wjnCIUUW7Iu/inFuW4fOctrxNuScdgQnQgOqb
	jpkd5jg/wBBOugh/QDaTBxdwi23iAiWfsbbjAI1a4sOm/u6fzBnHzunQn4UViYyhh8hD
	i3nxIucJNh4kg8RROgUSW18wmKJxd6CEaFpobiQkGkKUNs90G2X2BAen+rG0BxO9H8dP
	1kWUagOFpSWtTlZGCbkyVptvLPbsSkAICD1VeBAFxhlkdHTeIH8/304QHvKJzRSLKVoD
	46IA==
MIME-Version: 1.0
Received: by 10.14.223.9 with SMTP id u9mr1454012eep.10.1344568948178; Thu, 09
	Aug 2012 20:22:28 -0700 (PDT)
Received: by 10.14.100.71 with HTTP; Thu, 9 Aug 2012 20:22:28 -0700 (PDT)
Date: Fri, 10 Aug 2012 11:22:28 +0800
Message-ID: <CA+ePHTDpXuggNAvD-una-APzx-F3k32a1HbhG8u2WCL9a-qP5A@mail.gmail.com>
From: =?UTF-8?B?6ams56OK?= <aware.why@gmail.com>
To: xen-devel@lists.xensource.com
Subject: [Xen-devel] [help] problem with watch_pipe in xenstore
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6723995004920893500=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============6723995004920893500==
Content-Type: multipart/alternative; boundary=047d7b670a9d883e9a04c6e0da58

--047d7b670a9d883e9a04c6e0da58
Content-Type: text/plain; charset=ISO-8859-1

Hi all,
      In src/tools/xenstore/xs.c, get_handle() says:

/* Watch pipe is allocated on demand in xs_fileno(). */
h->watch_pipe[0] = h->watch_pipe[1] = -1;

but in the xs_fileno() function:

int xs_fileno(struct xs_handle *h)
{
        char c = 0;

        mutex_lock(&h->watch_mutex);

        if ((h->watch_pipe[0] == -1) && (pipe(h->watch_pipe) != -1)) {
                /* Kick things off if the watch list is already
                 * non-empty. */
                if (!list_empty(&h->watch_list))
                        while (write(h->watch_pipe[1], &c, 1) != 1)
                                continue;
        }

        mutex_unlock(&h->watch_mutex);

        return h->watch_pipe[0];
}

When is a value actually assigned to watch_pipe[0], and what do
watch_pipe[0] and watch_pipe[1] stand for, respectively?

In addition, the two lines `while (read(h->watch_pipe[0], &c, 1) != 1)`
and `while (write(h->watch_pipe[1], &c, 1) != 1)` appear many times.
It seems odd that watch_pipe[1] is always written to but apparently
never read, and watch_pipe[0] is always read from but apparently never
written to.

Grep found no answer, and it's hard to search for such a basic
question on Google!

--047d7b670a9d883e9a04c6e0da58--


--===============6723995004920893500==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6723995004920893500==--


From xen-devel-bounces@lists.xen.org Fri Aug 10 03:22:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 03:22:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Szfo0-0004ad-EA; Fri, 10 Aug 2012 03:22:32 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <aware.why@gmail.com>) id 1Szfnz-0004aY-Kx
	for xen-devel@lists.xensource.com; Fri, 10 Aug 2012 03:22:31 +0000
Received: from [85.158.139.83:20395] by server-11.bemta-5.messagelabs.com id
	E7/BB-11482-67E74205; Fri, 10 Aug 2012 03:22:30 +0000
X-Env-Sender: aware.why@gmail.com
X-Msg-Ref: server-3.tower-182.messagelabs.com!1344568948!29252438!1
X-Originating-IP: [74.125.83.43]
X-SpamReason: No, hits=0.6 required=7.0 tests=HTML_MESSAGE,
	MIME_BASE64_TEXT,ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11543 invoked from network); 10 Aug 2012 03:22:28 -0000
Received: from mail-ee0-f43.google.com (HELO mail-ee0-f43.google.com)
	(74.125.83.43)
	by server-3.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Aug 2012 03:22:28 -0000
Received: by eekd4 with SMTP id d4so310045eek.30
	for <xen-devel@lists.xensource.com>;
	Thu, 09 Aug 2012 20:22:28 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=E2sadaUn8RAJBm7RNOdRTRZ+YjLhQbPezqaaK9lHkIw=;
	b=XIq1Ral7qBI62i49J0b2ZTh6BYXWgPNA4QxdYOg9c4j+k1lqv1nrbvFJX9BiUAYmG2
	t3jUa7jTSzwRvFauNDPHwQ3DVkulHK/wjnCIUUW7Iu/inFuW4fOctrxNuScdgQnQgOqb
	jpkd5jg/wBBOugh/QDaTBxdwi23iAiWfsbbjAI1a4sOm/u6fzBnHzunQn4UViYyhh8hD
	i3nxIucJNh4kg8RROgUSW18wmKJxd6CEaFpobiQkGkKUNs90G2X2BAen+rG0BxO9H8dP
	1kWUagOFpSWtTlZGCbkyVptvLPbsSkAICD1VeBAFxhlkdHTeIH8/304QHvKJzRSLKVoD
	46IA==
MIME-Version: 1.0
Received: by 10.14.223.9 with SMTP id u9mr1454012eep.10.1344568948178; Thu, 09
	Aug 2012 20:22:28 -0700 (PDT)
Received: by 10.14.100.71 with HTTP; Thu, 9 Aug 2012 20:22:28 -0700 (PDT)
Date: Fri, 10 Aug 2012 11:22:28 +0800
Message-ID: <CA+ePHTDpXuggNAvD-una-APzx-F3k32a1HbhG8u2WCL9a-qP5A@mail.gmail.com>
From: =?UTF-8?B?6ams56OK?= <aware.why@gmail.com>
To: xen-devel@lists.xensource.com
Subject: [Xen-devel] [help] problem with watch_pipe in xenstore
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6723995004920893500=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============6723995004920893500==
Content-Type: multipart/alternative; boundary=047d7b670a9d883e9a04c6e0da58

--047d7b670a9d883e9a04c6e0da58
Content-Type: text/plain; charset=ISO-8859-1

Hi all,
      In src/tools/xenstore/xs.c : get_handle(), it says

/* Watch pipe is allocated on demand in xs_fileno(). */

 h->watch_pipe[0] = h->watch_pipe[1] = -1;


but in xs_fileno() function:

int xs_fileno(struct xs_handle *h)
{
        char c = 0;

        mutex_lock(&h->watch_mutex);

        if ((h->watch_pipe[0] == -1) && (pipe(h->watch_pipe) != -1)) {
                /* Kick things off if the watch list is already non-empty. */
                if (!list_empty(&h->watch_list))
                        while (write(h->watch_pipe[1], &c, 1) != 1)
                                continue;
        }

        mutex_unlock(&h->watch_mutex);

        return h->watch_pipe[0];
}

When is a value assigned to watch_pipe[0], and what do watch_pipe[0] and
watch_pipe[1] stand for, respectively?

In addition, the two lines `while (read(h->watch_pipe[0], &c, 1) != 1)` and
`while (write(h->watch_pipe[1], &c, 1) != 1)` appear many times.
It seems odd: why does the code always write to watch_pipe[1] when nobody
reads from it, and always read from watch_pipe[0] when nobody writes to it?

Grep found no answer, and it's difficult to search for such a trivial
question on Google!
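For background (an illustration, not code from xs.c; the helper name
pipe_roundtrip is invented): POSIX pipe() fills its two-element array so that
fd[0] is the read end and fd[1] is the write end. A byte written into
watch_pipe[1] by one thread becomes readable on watch_pipe[0] by another
thread blocked in read() or select(), so the same handle is both written and
read, just from opposite ends. A minimal sketch:

```c
#include <assert.h>
#include <unistd.h>

/* pipe() gives fd[0] = READ end, fd[1] = WRITE end.  A byte pushed
 * into fd[1] comes out of fd[0] -- which is why the xenstore code
 * only ever writes to watch_pipe[1] and only ever reads from
 * watch_pipe[0]. */
int pipe_roundtrip(void)
{
    int fd[2];
    char c = 'x', back = 0;

    if (pipe(fd) == -1)
        return -1;

    while (write(fd[1], &c, 1) != 1)   /* same retry idiom as xs.c */
        continue;
    while (read(fd[0], &back, 1) != 1)
        continue;

    close(fd[0]);
    close(fd[1]);
    return back;                       /* the byte came back */
}
```

The retry loops are the same idiom xs.c uses to ride out short writes and
EINTR on the single-byte wakeup token.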

--047d7b670a9d883e9a04c6e0da58--


--===============6723995004920893500==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6723995004920893500==--


From xen-devel-bounces@lists.xen.org Fri Aug 10 04:40:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 04:40:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Szh0Z-000578-No; Fri, 10 Aug 2012 04:39:35 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zhenzhong.duan@oracle.com>) id 1Szh0Y-000573-1v
	for xen-devel@lists.xen.org; Fri, 10 Aug 2012 04:39:34 +0000
Received: from [85.158.143.99:38229] by server-1.bemta-4.messagelabs.com id
	81/8A-20198-58094205; Fri, 10 Aug 2012 04:39:33 +0000
X-Env-Sender: zhenzhong.duan@oracle.com
X-Msg-Ref: server-16.tower-216.messagelabs.com!1344573571!16272208!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDY4OTgyMg==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13367 invoked from network); 10 Aug 2012 04:39:32 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-16.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Aug 2012 04:39:32 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7A4dSoC032634
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 10 Aug 2012 04:39:29 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7A4dRgT007042
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 10 Aug 2012 04:39:27 GMT
Received: from abhmt105.oracle.com (abhmt105.oracle.com [141.146.116.57])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7A4dQn7022530; Thu, 9 Aug 2012 23:39:27 -0500
Received: from [10.191.10.122] (/10.191.10.122)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 09 Aug 2012 21:39:25 -0700
Message-ID: <502490A7.7020603@oracle.com>
Date: Fri, 10 Aug 2012 12:40:07 +0800
From: "zhenzhong.duan" <zhenzhong.duan@oracle.com>
Organization: oracle
User-Agent: Mozilla/5.0 (Windows NT 5.1;
	rv:12.0) Gecko/20120428 Thunderbird/12.0.1
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <5020C24A.3060604@oracle.com>
	<5020F003020000780009322C@nat28.tlf.novell.com>
	<502235E8.9040309@oracle.com>
	<50229B840200007800093A73@nat28.tlf.novell.com>
	<5023860E.7080908@oracle.com>
	<5023AE960200007800093DE8@nat28.tlf.novell.com>
In-Reply-To: <5023AE960200007800093DE8@nat28.tlf.novell.com>
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Feng Jin <joe.jin@oracle.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] kernel bootup slow issue on ovm3.1.1
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: zhenzhong.duan@oracle.com
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2070227487450461743=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.
--===============2070227487450461743==
Content-Type: multipart/alternative;
 boundary="------------090605070704070006080803"

This is a multi-part message in MIME format.
--------------090605070704070006080803
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: quoted-printable



On 2012-08-09 18:35, Jan Beulich wrote:
>>>> On 09.08.12 at 11:42, "zhenzhong.duan"<zhenzhong.duan@oracle.com> wrote:
>> On 2012-08-08 23:01, Jan Beulich wrote:
>>>>>> On 08.08.12 at 11:48, "zhenzhong.duan"<zhenzhong.duan@oracle.com> wrote:
>>>> On 2012-08-07 16:37, Jan Beulich wrote:
>>>> Some spin at stop_machine after finishing their job.
>>> And here you'd need to find out what they're waiting for,
>>> and what those CPUs are doing.
>> They are waiting for the vCPU calling generic_set_all(), and they spin at
>> set_atomicity_lock.
>> In fact, all of them are waiting for generic_set_all().
> I think we're moving in circles - what is the vCPU currently in
> generic_set_all() then doing?
After adding some debug prints: generic_set_all->prepare_set->write_cr0 takes
much time, while everything else is quick. set_atomicity_lock serializes this
process between CPUs, making it worse.
One iteration:
MTRR: CPU 2
prepare_set: before read_cr0
prepare_set: before write_cr0 ------*block here*
prepare_set: before wbinvd
prepare_set: before read_cr4
prepare_set: before write_cr4
prepare_set: before __flush_tlb
prepare_set: before rdmsr
prepare_set: before wrmsr
generic_set_all: before set_mtrr_state
generic_set_all: before pat_init
post_set: before wbinvd
post_set: before wrmsr
post_set: before write_cr0
post_set: before write_cr4
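The serialization effect can be illustrated with a toy pthread sketch (this is
not the kernel's set_atomicity_lock code; serialized_run and worker are
invented names, and usleep() stands in for the slow cache/MTRR update): with
one lock held across the slow step, at most one CPU makes progress at a time,
so the total cost grows linearly with the number of CPUs.

```c
#include <pthread.h>
#include <unistd.h>

#define NTHREADS 4

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int inside = 0;      /* threads currently in the critical section */
static int max_inside = 0;  /* maximum concurrency ever observed */
static int done = 0;        /* threads that finished */

static void *worker(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);
    inside++;
    if (inside > max_inside)
        max_inside = inside;
    usleep(1000);           /* stand-in for the slow write_cr0/wbinvd step */
    inside--;
    done++;
    pthread_mutex_unlock(&lock);
    return NULL;
}

/* Runs NTHREADS workers; returns the peak concurrency seen inside
 * the critical section (1 means the lock fully serialized the work). */
int serialized_run(void)
{
    pthread_t t[NTHREADS];
    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);
    return max_inside;
}
```

Peak concurrency comes back as 1: each thread pays the full delay while the
others wait, which mirrors the per-CPU cost observed above.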

>
>>> There's not that much being done in generic_set_all(), so the
>>> code should finish reasonably quickly. Are you perhaps having
>>> more vCPU-s in the guest than pCPU-s they can run on?
>> System env is an Exalogic node with 24 cores + 100G mem (2 sockets, 6
>> cores per socket, 2 HT threads per core).
>> Booting a PVHVM guest with 12 vCPUs (or 24) + 90 GB + a PCI passthrough
>> device.
> So you're indeed over-committing the system. How many vCPU-s
> does you Dom0 have? Are there any other VMs? Is there any
> vCPU pinning in effect?
dom0 boots with 24 vCPUs (same result with dom0_max_vcpus=4). No other VM
except dom0. All 24 vCPUs spin, according to the xentop result. Below is a
xentop clip.

      NAME  STATE   CPU(sec) CPU(%)     MEM(k) MEM(%)  MAXMEM(k) MAXMEM(%) VCPUS NETS NETTX(k) NETRX(k) VBDS VBD_OO VBD_RD VBD_WR VBD_RSECT VBD_WSECT SSID
  Domain-0 -----r      43072  158.8    2050560    2.0   no limit       n/a    24    0        0        0    0      0      0      0         0         0    0
VCPUs(sec):  0: 13649s   1:  6197s   2:  4254s   3:  2006s   4:  1409s
             5:   930s   6:   698s   7:   630s   8:   612s   9:  2038s
            10:   544s  11:   940s  12:   556s  13:   510s  14:   456s
            15:   591s  16:   438s  17:   508s  18:  3350s  19:   512s
            20:   544s  21:   529s  22:   547s  23:   610s

zduan_test -----r      13140 2234.4   92327920   91.7   92327936      91.7    24    1        0        0    1      0      0      0         0         0    0
VCPUs(sec):  0:   556s   1:   551s   2:   549s   3:   544s   4:   549s
             5:   545s   6:   545s   7:   547s   8:   545s   9:   548s
            10:   545s  11:   546s  12:   545s  13:   548s  14:   543s
            15:   544s  16:   551s  17:   545s  18:   547s  19:   551s
            20:   544s  21:   549s  22:   546s  23:   545s

>>>    Does
>>> your hardware support Pause-Loop-Exiting (or the AMD
>>> equivalent, don't recall their term right now)?
>> I have no access to the serial line; could I get the info with a command?
> "xl dmesg" run early enough (i.e. before the log buffer wraps).
Below is the xl dmesg result, for your reference. Thanks.
[root@scae02cn01 zduan]# xl dmesg
  __  __            _  _    ___   ____      _____     ____  __
  \ \/ /___ _ __   | || |  / _ \ |___ \    / _ \ \   / /  \/  |
   \  // _ \ '_ \  | || |_| | | |  __) |__| | | \ \ / /| |\/| |
   /  \  __/ | | | |__   _| |_| | / __/|__| |_| |\ V / | |  | |
  /_/\_\___|_| |_|    |_|(_)___(_)_____|   \___/  \_/  |_|  |_|

(XEN) Xen version 4.0.2-OVM (mockbuild@(none)) (gcc version 4.1.2=20
20080704 (Red Hat 4.1.2-48)) Fri Dec 23 17:00:16 EST 2011
(XEN) Latest ChangeSet: unavailable
(XEN) Bootloader: GNU GRUB 0.97
(XEN) Command line: dom0_mem=2G
(XEN) Video information:
(XEN)  VGA is text mode 80x25, font 8x16
(XEN)  VBE/DDC methods: none; EDID transfer time: 1 seconds
(XEN)  EDID info not retrieved because no DDC retrieval method detected
(XEN) Disc information:
(XEN)  Found 1 MBR signatures
(XEN)  Found 1 EDD information structures
(XEN) Xen-e820 RAM map:
(XEN)  0000000000000000 - 0000000000099400 (usable)
(XEN)  0000000000099400 - 00000000000a0000 (reserved)
(XEN)  00000000000e0000 - 0000000000100000 (reserved)
(XEN)  0000000000100000 - 000000007f780000 (usable)
(XEN)  000000007f78e000 - 000000007f790000 type 9
(XEN)  000000007f790000 - 000000007f79e000 (ACPI data)
(XEN)  000000007f79e000 - 000000007f7d0000 (ACPI NVS)
(XEN)  000000007f7d0000 - 000000007f7e0000 (reserved)
(XEN)  000000007f7ec000 - 0000000080000000 (reserved)
(XEN)  00000000e0000000 - 00000000f0000000 (reserved)
(XEN)  00000000fee00000 - 00000000fee01000 (reserved)
(XEN)  00000000ffc00000 - 0000000100000000 (reserved)
(XEN)  0000000100000000 - 0000001880000000 (usable)
(XEN) ACPI: RSDP 000FAA40, 0024 (r2 SUN   )
(XEN) ACPI: XSDT 7F790100, 0094 (r1 SUN    Xxx70    20111011 MSFT       97)
(XEN) ACPI: FACP 7F790290, 00F4 (r4 SUN    Xxx70    20111011 MSFT       97)
(XEN) ACPI: DSDT 7F7905C0, 5ECF (r2 SUN    Xxx70           1 INTL 20051117)
(XEN) ACPI: FACS 7F79E000, 0040
(XEN) ACPI: APIC 7F790390, 011E (r2 SUN    Xxx70    20111011 MSFT       97)
(XEN) ACPI: MCFG 7F790500, 003C (r1 SUN    Xxx70    20111011 MSFT       97)
(XEN) ACPI: SLIT 7F790540, 0030 (r1 SUN    Xxx70    20111011 MSFT       97)
(XEN) ACPI: SPMI 7F790570, 0041 (r5 SUN    Xxx70    20111011 MSFT       97)
(XEN) ACPI: OEMB 7F79E040, 00BE (r1 SUN    Xxx70    20111011 MSFT       97)
(XEN) ACPI: HPET 7F79A5C0, 0038 (r1 SUN    Xxx70    20111011 MSFT       97)
(XEN) ACPI: DMAR 7F79E100, 0130 (r1 SUN    Xxx70           1 MSFT       97)
(XEN) ACPI: SRAT 7F79A600, 0250 (r1 SUN    Xxx70           1 INTC        1)
(XEN) ACPI: SSDT 7F79EF60, 0363 (r1  SUN   Xxx70          12 INTL 20051117)
(XEN) ACPI: EINJ 7F79A850, 0130 (r1 SUN    Xxx70    20111011 MSFT       97)
(XEN) ACPI: BERT 7F79A9E0, 0030 (r1 SUN    Xxx70    20111011 MSFT       97)
(XEN) ACPI: ERST 7F79AA10, 01B0 (r1 SUN    Xxx70    20111011 MSFT       97)
(XEN) ACPI: HEST 7F79ABC0, 00A8 (r1 SUN    Xxx70    20111011 MSFT       97)
(XEN) System RAM: 98295MB (100654180kB)
(XEN) Domain heap initialised DMA width 32 bits
(XEN) Processor #0 6:12 APIC version 21
(XEN) Processor #2 6:12 APIC version 21
(XEN) Processor #4 6:12 APIC version 21
(XEN) Processor #16 6:12 APIC version 21
(XEN) Processor #18 6:12 APIC version 21
(XEN) Processor #20 6:12 APIC version 21
(XEN) Processor #32 6:12 APIC version 21
(XEN) Processor #34 6:12 APIC version 21
(XEN) Processor #36 6:12 APIC version 21
(XEN) Processor #48 6:12 APIC version 21
(XEN) Processor #50 6:12 APIC version 21
(XEN) Processor #52 6:12 APIC version 21
(XEN) Processor #1 6:12 APIC version 21
(XEN) Processor #3 6:12 APIC version 21
(XEN) Processor #5 6:12 APIC version 21
(XEN) Processor #17 6:12 APIC version 21
(XEN) Processor #19 6:12 APIC version 21
(XEN) Processor #21 6:12 APIC version 21
(XEN) Processor #33 6:12 APIC version 21
(XEN) Processor #35 6:12 APIC version 21
(XEN) Processor #37 6:12 APIC version 21
(XEN) Processor #49 6:12 APIC version 21
(XEN) Processor #51 6:12 APIC version 21
(XEN) Processor #53 6:12 APIC version 21
(XEN) IOAPIC[0]: apic_id 6, version 32, address 0xfec00000, GSI 0-23
(XEN) IOAPIC[1]: apic_id 7, version 32, address 0xfec8a000, GSI 24-47
(XEN) Enabling APIC mode:  Phys.  Using 2 I/O APICs
(XEN) Using scheduler: SMP Credit Scheduler (credit)
(XEN) Detected 2926.029 MHz processor.
(XEN) Initing memory sharing.
(XEN) VMX: Supported advanced features:
(XEN)  - APIC MMIO access virtualisation
(XEN)  - APIC TPR shadow
(XEN)  - Extended Page Tables (EPT)
(XEN)  - Virtual-Processor Identifiers (VPID)
(XEN)  - Virtual NMI
(XEN)  - MSR direct-access bitmap
(XEN)  - Unrestricted Guest
(XEN) EPT supports 2MB super page.
(XEN) HVM: ASIDs enabled.
(XEN) HVM: VMX enabled
(XEN) HVM: Hardware Assisted Paging detected.
(XEN) Intel VT-d Snoop Control enabled.
(XEN) Intel VT-d Dom0 DMA Passthrough not enabled.
(XEN) Intel VT-d Queued Invalidation enabled.
(XEN) Intel VT-d Interrupt Remapping enabled.
(XEN) I/O virtualisation enabled
(XEN)  - Dom0 mode: Relaxed
(XEN) Enabled directed EOI with ioapic_ack_old on!
(XEN) Total of 24 processors activated.
(XEN) ENABLING IO-APIC IRQs
(XEN)  -> Using old ACK method
(XEN) TSC is reliable, synchronization unnecessary
(XEN) Platform timer is 14.318MHz HPET
(XEN) Allocated console ring of 64 KiB.
(XEN) Brought up 24 CPUs
(XEN) *** LOADING DOMAIN 0 ***
(XEN)  Xen  kernel: 64-bit, lsb, compat32
(XEN)  Dom0 kernel: 64-bit, lsb, paddr 0x2000 -> 0x6d5000
(XEN) PHYSICAL MEMORY ARRANGEMENT:
(XEN)  Dom0 alloc.:   0000000835000000->0000000836000000 (520192 pages to be allocated)
(XEN) VIRTUAL MEMORY ARRANGEMENT:
(XEN)  Loaded kernel: ffffffff80002000->ffffffff806d5000
(XEN)  Init. ramdisk: ffffffff806d5000->ffffffff80ed7400
(XEN)  Phys-Mach map: ffffea0000000000->ffffea0000400000
(XEN)  Start info:    ffffffff80ed8000->ffffffff80ed84b4
(XEN)  Page tables:   ffffffff80ed9000->ffffffff80ee4000
(XEN)  Boot stack:    ffffffff80ee4000->ffffffff80ee5000
(XEN)  TOTAL:         ffffffff80000000->ffffffff81000000
(XEN)  ENTRY ADDRESS: ffffffff80002000
(XEN) Dom0 has maximum 24 VCPUs
(XEN) Scrubbing Free RAM: .................................................done.
(XEN) Xen trace buffers: disabled
(XEN) Std. Loglevel: Errors and warnings
(XEN) Guest Loglevel: Nothing (Rate-limited: Errors and warnings)
(XEN) Xen is relinquishing VGA console.
(XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen)
(XEN) Freed 168kB init memory.


--------------090605070704070006080803--


--===============2070227487450461743==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2070227487450461743==--


From xen-devel-bounces@lists.xen.org Fri Aug 10 04:40:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 04:40:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Szh0Z-000578-No; Fri, 10 Aug 2012 04:39:35 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zhenzhong.duan@oracle.com>) id 1Szh0Y-000573-1v
	for xen-devel@lists.xen.org; Fri, 10 Aug 2012 04:39:34 +0000
Received: from [85.158.143.99:38229] by server-1.bemta-4.messagelabs.com id
	81/8A-20198-58094205; Fri, 10 Aug 2012 04:39:33 +0000
X-Env-Sender: zhenzhong.duan@oracle.com
X-Msg-Ref: server-16.tower-216.messagelabs.com!1344573571!16272208!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDY4OTgyMg==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13367 invoked from network); 10 Aug 2012 04:39:32 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-16.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Aug 2012 04:39:32 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7A4dSoC032634
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 10 Aug 2012 04:39:29 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7A4dRgT007042
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 10 Aug 2012 04:39:27 GMT
Received: from abhmt105.oracle.com (abhmt105.oracle.com [141.146.116.57])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7A4dQn7022530; Thu, 9 Aug 2012 23:39:27 -0500
Received: from [10.191.10.122] (/10.191.10.122)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 09 Aug 2012 21:39:25 -0700
Message-ID: <502490A7.7020603@oracle.com>
Date: Fri, 10 Aug 2012 12:40:07 +0800
From: "zhenzhong.duan" <zhenzhong.duan@oracle.com>
Organization: oracle
User-Agent: Mozilla/5.0 (Windows NT 5.1;
	rv:12.0) Gecko/20120428 Thunderbird/12.0.1
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <5020C24A.3060604@oracle.com>
	<5020F003020000780009322C@nat28.tlf.novell.com>
	<502235E8.9040309@oracle.com>
	<50229B840200007800093A73@nat28.tlf.novell.com>
	<5023860E.7080908@oracle.com>
	<5023AE960200007800093DE8@nat28.tlf.novell.com>
In-Reply-To: <5023AE960200007800093DE8@nat28.tlf.novell.com>
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Feng Jin <joe.jin@oracle.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] kernel bootup slow issue on ovm3.1.1
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: zhenzhong.duan@oracle.com
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2070227487450461743=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.
--===============2070227487450461743==
Content-Type: multipart/alternative;
 boundary="------------090605070704070006080803"

This is a multi-part message in MIME format.
--------------090605070704070006080803
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: quoted-printable



On 2012-08-09 18:35, Jan Beulich wrote:
>>>> On 09.08.12 at 11:42, "zhenzhong.duan"<zhenzhong.duan@oracle.com>  wrote:
>> On 2012-08-08 23:01, Jan Beulich wrote:
>>>>>> On 08.08.12 at 11:48, "zhenzhong.duan"<zhenzhong.duan@oracle.com>   wrote:
>>>> On 2012-08-07 16:37, Jan Beulich wrote:
>>>> Some spin at stop_machine after finishing their job.
>>> And here you'd need to find out what they're waiting for,
>>> and what those CPUs are doing.
>> They are waiting for the vCPU calling generic_set_all, and those spin at
>> set_atomicity_lock.
>> In fact, all are waiting for generic_set_all.
> I think we're moving in circles - what is the vCPU currently in
> generic_set_all() then doing?
Adding some debug prints shows that generic_set_all->prepare_set->write_cr0
takes much of the time; everything else is quick. set_atomicity_lock
serializes this process between CPUs, making it worse.
One iteration:
MTRR: CPU 2
prepare_set: before read_cr0
prepare_set: before write_cr0 ------*block here*
prepare_set: before wbinvd
prepare_set: before read_cr4
prepare_set: before write_cr4
prepare_set: before __flush_tlb
prepare_set: before rdmsr
prepare_set: before wrmsr
generic_set_all: before set_mtrr_state
generic_set_all: before pat_init
post_set: before wbinvd
post_set: before wrmsr
post_set: before write_cr0
post_set: before write_cr4

>
>>> There's not that much being done in generic_set_all(), so the
>>> code should finish reasonably quickly. Are you perhaps having
>>> more vCPU-s in the guest than pCPU-s they can run on?
>> System env is an Exalogic node with 24 cores + 100G mem (2 sockets, 6
>> cores per socket, 2 HT threads per core).
>> Boot up a PVHVM guest with 12 vCPUs (or 24) + 90 GB + a PCI passthrough
>> device.
> So you're indeed over-committing the system. How many vCPU-s
> does your Dom0 have? Are there any other VMs? Is there any
> vCPU pinning in effect?
dom0 boots with 24 vCPUs (same result with dom0_max_vcpus=4). No other VM
except dom0. All 24 vCPUs spin according to the xentop result. Below is a
xentop clip.

      NAME  STATE   CPU(sec) CPU(%)     MEM(k) MEM(%)  MAXMEM(k) MAXMEM(%) VCPUS NETS NETTX(k) NETRX(k) VBDS   VBD_OO   VBD_RD   VBD_WR  VBD_RSECT  VBD_WSECT SSID
  Domain-0 -----r      43072  158.8    2050560    2.0   no limit        n/a    24    0        0        0    0        0        0        0          0          0    0
VCPUs(sec):   0:      13649s  1:       6197s  2:       4254s  3:       2006s  4:       1409s
              5:        930s  6:        698s  7:        630s  8:        612s  9:       2038s
             10:        544s 11:        940s 12:        556s 13:        510s 14:        456s
             15:        591s 16:        438s 17:        508s 18:       3350s 19:        512s
             20:        544s 21:        529s 22:        547s 23:        610s

zduan_test -----r      13140 2234.4   92327920   91.7   92327936       91.7    24    1        0        0    1        0        0        0          0          0    0
VCPUs(sec):   0:        556s  1:        551s  2:        549s  3:        544s  4:        549s
              5:        545s  6:        545s  7:        547s  8:        545s  9:        548s
             10:        545s 11:        546s 12:        545s 13:        548s 14:        543s
             15:        544s 16:        551s 17:        545s 18:        547s 19:        551s
             20:        544s 21:        549s 22:        546s 23:        545s

>>>    Does
>>> your hardware support Pause-Loop-Exiting (or the AMD
>>> equivalent, don't recall their term right now)?
>> I have no access to serial line, could I get the info by a command?
> "xl dmesg" run early enough (i.e. before the log buffer wraps).
Below is the xl dmesg result for your reference. Thanks.
[root@scae02cn01 zduan]# xl dmesg
  __  __            _  _    ___   ____      _____     ____  __
  \ \/ /___ _ __   | || |  / _ \ |___ \    / _ \ \   / /  \/  |
   \  // _ \ '_ \  | || |_| | | |  __) |__| | | \ \ / /| |\/| |
   /  \  __/ | | | |__   _| |_| | / __/|__| |_| |\ V / | |  | |
  /_/\_\___|_| |_|    |_|(_)___(_)_____|   \___/  \_/  |_|  |_|

(XEN) Xen version 4.0.2-OVM (mockbuild@(none)) (gcc version 4.1.2 20080704 (Red Hat 4.1.2-48)) Fri Dec 23 17:00:16 EST 2011
(XEN) Latest ChangeSet: unavailable
(XEN) Bootloader: GNU GRUB 0.97
(XEN) Command line: dom0_mem=2G
(XEN) Video information:
(XEN)  VGA is text mode 80x25, font 8x16
(XEN)  VBE/DDC methods: none; EDID transfer time: 1 seconds
(XEN)  EDID info not retrieved because no DDC retrieval method detected
(XEN) Disc information:
(XEN)  Found 1 MBR signatures
(XEN)  Found 1 EDD information structures
(XEN) Xen-e820 RAM map:
(XEN)  0000000000000000 - 0000000000099400 (usable)
(XEN)  0000000000099400 - 00000000000a0000 (reserved)
(XEN)  00000000000e0000 - 0000000000100000 (reserved)
(XEN)  0000000000100000 - 000000007f780000 (usable)
(XEN)  000000007f78e000 - 000000007f790000 type 9
(XEN)  000000007f790000 - 000000007f79e000 (ACPI data)
(XEN)  000000007f79e000 - 000000007f7d0000 (ACPI NVS)
(XEN)  000000007f7d0000 - 000000007f7e0000 (reserved)
(XEN)  000000007f7ec000 - 0000000080000000 (reserved)
(XEN)  00000000e0000000 - 00000000f0000000 (reserved)
(XEN)  00000000fee00000 - 00000000fee01000 (reserved)
(XEN)  00000000ffc00000 - 0000000100000000 (reserved)
(XEN)  0000000100000000 - 0000001880000000 (usable)
(XEN) ACPI: RSDP 000FAA40, 0024 (r2 SUN   )
(XEN) ACPI: XSDT 7F790100, 0094 (r1 SUN    Xxx70    20111011 MSFT       97)
(XEN) ACPI: FACP 7F790290, 00F4 (r4 SUN    Xxx70    20111011 MSFT       97)
(XEN) ACPI: DSDT 7F7905C0, 5ECF (r2 SUN    Xxx70           1 INTL 20051117)
(XEN) ACPI: FACS 7F79E000, 0040
(XEN) ACPI: APIC 7F790390, 011E (r2 SUN    Xxx70    20111011 MSFT       97)
(XEN) ACPI: MCFG 7F790500, 003C (r1 SUN    Xxx70    20111011 MSFT       97)
(XEN) ACPI: SLIT 7F790540, 0030 (r1 SUN    Xxx70    20111011 MSFT       97)
(XEN) ACPI: SPMI 7F790570, 0041 (r5 SUN    Xxx70    20111011 MSFT       97)
(XEN) ACPI: OEMB 7F79E040, 00BE (r1 SUN    Xxx70    20111011 MSFT       97)
(XEN) ACPI: HPET 7F79A5C0, 0038 (r1 SUN    Xxx70    20111011 MSFT       97)
(XEN) ACPI: DMAR 7F79E100, 0130 (r1 SUN    Xxx70           1 MSFT       97)
(XEN) ACPI: SRAT 7F79A600, 0250 (r1 SUN    Xxx70           1 INTC        1)
(XEN) ACPI: SSDT 7F79EF60, 0363 (r1  SUN   Xxx70          12 INTL 20051117)
(XEN) ACPI: EINJ 7F79A850, 0130 (r1 SUN    Xxx70    20111011 MSFT       97)
(XEN) ACPI: BERT 7F79A9E0, 0030 (r1 SUN    Xxx70    20111011 MSFT       97)
(XEN) ACPI: ERST 7F79AA10, 01B0 (r1 SUN    Xxx70    20111011 MSFT       97)
(XEN) ACPI: HEST 7F79ABC0, 00A8 (r1 SUN    Xxx70    20111011 MSFT       97)
(XEN) System RAM: 98295MB (100654180kB)
(XEN) Domain heap initialised DMA width 32 bits
(XEN) Processor #0 6:12 APIC version 21
(XEN) Processor #2 6:12 APIC version 21
(XEN) Processor #4 6:12 APIC version 21
(XEN) Processor #16 6:12 APIC version 21
(XEN) Processor #18 6:12 APIC version 21
(XEN) Processor #20 6:12 APIC version 21
(XEN) Processor #32 6:12 APIC version 21
(XEN) Processor #34 6:12 APIC version 21
(XEN) Processor #36 6:12 APIC version 21
(XEN) Processor #48 6:12 APIC version 21
(XEN) Processor #50 6:12 APIC version 21
(XEN) Processor #52 6:12 APIC version 21
(XEN) Processor #1 6:12 APIC version 21
(XEN) Processor #3 6:12 APIC version 21
(XEN) Processor #5 6:12 APIC version 21
(XEN) Processor #17 6:12 APIC version 21
(XEN) Processor #19 6:12 APIC version 21
(XEN) Processor #21 6:12 APIC version 21
(XEN) Processor #33 6:12 APIC version 21
(XEN) Processor #35 6:12 APIC version 21
(XEN) Processor #37 6:12 APIC version 21
(XEN) Processor #49 6:12 APIC version 21
(XEN) Processor #51 6:12 APIC version 21
(XEN) Processor #53 6:12 APIC version 21
(XEN) IOAPIC[0]: apic_id 6, version 32, address 0xfec00000, GSI 0-23
(XEN) IOAPIC[1]: apic_id 7, version 32, address 0xfec8a000, GSI 24-47
(XEN) Enabling APIC mode:  Phys.  Using 2 I/O APICs
(XEN) Using scheduler: SMP Credit Scheduler (credit)
(XEN) Detected 2926.029 MHz processor.
(XEN) Initing memory sharing.
(XEN) VMX: Supported advanced features:
(XEN)  - APIC MMIO access virtualisation
(XEN)  - APIC TPR shadow
(XEN)  - Extended Page Tables (EPT)
(XEN)  - Virtual-Processor Identifiers (VPID)
(XEN)  - Virtual NMI
(XEN)  - MSR direct-access bitmap
(XEN)  - Unrestricted Guest
(XEN) EPT supports 2MB super page.
(XEN) HVM: ASIDs enabled.
(XEN) HVM: VMX enabled
(XEN) HVM: Hardware Assisted Paging detected.
(XEN) Intel VT-d Snoop Control enabled.
(XEN) Intel VT-d Dom0 DMA Passthrough not enabled.
(XEN) Intel VT-d Queued Invalidation enabled.
(XEN) Intel VT-d Interrupt Remapping enabled.
(XEN) I/O virtualisation enabled
(XEN)  - Dom0 mode: Relaxed
(XEN) Enabled directed EOI with ioapic_ack_old on!
(XEN) Total of 24 processors activated.
(XEN) ENABLING IO-APIC IRQs
(XEN)  -> Using old ACK method
(XEN) TSC is reliable, synchronization unnecessary
(XEN) Platform timer is 14.318MHz HPET
(XEN) Allocated console ring of 64 KiB.
(XEN) Brought up 24 CPUs
(XEN) *** LOADING DOMAIN 0 ***
(XEN)  Xen  kernel: 64-bit, lsb, compat32
(XEN)  Dom0 kernel: 64-bit, lsb, paddr 0x2000 -> 0x6d5000
(XEN) PHYSICAL MEMORY ARRANGEMENT:
(XEN)  Dom0 alloc.:   0000000835000000->0000000836000000 (520192 pages to be allocated)
(XEN) VIRTUAL MEMORY ARRANGEMENT:
(XEN)  Loaded kernel: ffffffff80002000->ffffffff806d5000
(XEN)  Init. ramdisk: ffffffff806d5000->ffffffff80ed7400
(XEN)  Phys-Mach map: ffffea0000000000->ffffea0000400000
(XEN)  Start info:    ffffffff80ed8000->ffffffff80ed84b4
(XEN)  Page tables:   ffffffff80ed9000->ffffffff80ee4000
(XEN)  Boot stack:    ffffffff80ee4000->ffffffff80ee5000
(XEN)  TOTAL:         ffffffff80000000->ffffffff81000000
(XEN)  ENTRY ADDRESS: ffffffff80002000
(XEN) Dom0 has maximum 24 VCPUs
(XEN) Scrubbing Free RAM: ........................................done.
(XEN) Xen trace buffers: disabled
(XEN) Std. Loglevel: Errors and warnings
(XEN) Guest Loglevel: Nothing (Rate-limited: Errors and warnings)
(XEN) Xen is relinquishing VGA console.
(XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen)
(XEN) Freed 168kB init memory.


--------------090605070704070006080803
Content-Type: text/html; charset=UTF-8
Content-Transfer-Encoding: 8bit

<html>
  <head>
    <meta content="text/html; charset=UTF-8" http-equiv="Content-Type">
  </head>
  <body bgcolor="#FFFFFF" text="#000000">
    <br>
    <br>
    On 2012-08-09 18:35, Jan Beulich wrote:
    <blockquote cite="mid:5023AE960200007800093DE8@nat28.tlf.novell.com"
      type="cite">
      <blockquote type="cite">
        <blockquote type="cite">
          <blockquote type="cite">
            <pre wrap="">On 09.08.12 at 11:42, "zhenzhong.duan" <a class="moz-txt-link-rfc2396E" href="mailto:zhenzhong.duan@oracle.com">&lt;zhenzhong.duan@oracle.com&gt;</a> wrote:
</pre>
          </blockquote>
        </blockquote>
        <pre wrap="">On 2012-08-08 23:01, Jan Beulich wrote:
</pre>
        <blockquote type="cite">
          <blockquote type="cite">
            <blockquote type="cite">
              <blockquote type="cite">
                <pre wrap="">On 08.08.12 at 11:48, "zhenzhong.duan"<a class="moz-txt-link-rfc2396E" href="mailto:zhenzhong.duan@oracle.com">&lt;zhenzhong.duan@oracle.com&gt;</a>  wrote:
</pre>
              </blockquote>
            </blockquote>
                <pre wrap="">On 2012-08-07 16:37, Jan Beulich wrote:
Some CPUs spin at stop_machine after finishing their job.
</pre>
          </blockquote>
          <pre wrap="">And here you'd need to find out what they're waiting for,
and what those CPUs are doing.
</pre>
        </blockquote>
        <pre wrap="">They are waiting for the vCPU calling generic_set_all, and those spin at
set_atomicity_lock.
In fact, all of them are waiting on generic_set_all.
</pre>
      </blockquote>
      <pre wrap="">
I think we're moving in circles - what is the vCPU currently in
generic_set_all() then doing?</pre>
    </blockquote>
    <tt>I added some debug prints:
      generic_set_all-&gt;prepare_set-&gt;write_cr0 takes most of the time;<br>
      everything else is quick. set_atomicity_lock serializes this
      process across CPUs, making it worse.<br>
      One iteration:<br>
      MTRR: CPU 2<br>
      prepare_set: before read_cr0<br>
      prepare_set: before write_cr0     <small>------</small><b>blocks
        here</b><br>
      prepare_set: before wbinvd<br>
      prepare_set: before read_cr4<br>
      prepare_set: before write_cr4<br>
      prepare_set: before __flush_tlb<br>
      prepare_set: before rdmsr<br>
      prepare_set: before wrmsr<br>
      generic_set_all: before set_mtrr_state<br>
      generic_set_all: before pat_init<br>
      post_set: before wbinvd<br>
      post_set: before wrmsr<br>
      post_set: before write_cr0<br>
      post_set: before write_cr4<br>
      <br>
    </tt>
    <blockquote cite="mid:5023AE960200007800093DE8@nat28.tlf.novell.com"
      type="cite">
      <pre wrap="">

</pre>
      <blockquote type="cite">
        <blockquote type="cite">
          <pre wrap="">There's not that much being done in generic_set_all(), so the
code should finish reasonably quickly. Are you perhaps having
more vCPU-s in the guest than pCPU-s they can run on?
</pre>
        </blockquote>
        <pre wrap="">The system is an Exalogic node with 24 cores + 100G memory (2 sockets,
6 cores per socket, 2 HT threads per core).
I boot a PVHVM guest with 12 vCPUs (or 24) + 90 GB + a PCI passthrough device.
</pre>
      </blockquote>
      <pre wrap="">
So you're indeed over-committing the system. How many vCPU-s
does your Dom0 have? Are there any other VMs? Is there any
vCPU pinning in effect?</pre>
    </blockquote>
    <tt>Dom0 boots with 24 vCPUs (same result with dom0_max_vcpus=4). No
      other VM except dom0. All 24 vCPUs spin according to xentop. Below
      is a xentop clip.<br>
      <br>
      NAME  STATE  CPU(sec) CPU(%)  MEM(k) MEM(%)  MAXMEM(k) MAXMEM(%)  VCPUS NETS NETTX(k) NETRX(k) VBDS VBD_OO VBD_RD VBD_WR VBD_RSECT VBD_WSECT SSID<br>
      Domain-0   -----r  43072  158.8   2050560   2.0  no limit  n/a  24  0  0  0  0  0  0  0  0  0  0<br>
      VCPUs(sec):  0: 13649s  1: 6197s  2: 4254s  3: 2006s  4: 1409s<br>
      5: 930s  6: 698s  7: 630s  8: 612s  9: 2038s<br>
      10: 544s  11: 940s  12: 556s  13: 510s  14: 456s<br>
      15: 591s  16: 438s  17: 508s  18: 3350s  19: 512s<br>
      20: 544s  21: 529s  22: 547s  23: 610s<br>
      zduan_test -----r  13140  2234.4  92327920  91.7  92327936  91.7  24  1  0  0  1  0  0  0  0  0  0<br>
      VCPUs(sec):  0: 556s  1: 551s  2: 549s  3: 544s  4: 549s<br>
      5: 545s  6: 545s  7: 547s  8: 545s  9: 548s<br>
      10: 545s  11: 546s  12: 545s  13: 548s  14: 543s<br>
      15: 544s  16: 551s  17: 545s  18: 547s  19: 551s<br>
      20: 544s  21: 549s  22: 546s  23: 545s</tt><br>
    <blockquote cite="mid:5023AE960200007800093DE8@nat28.tlf.novell.com"
      type="cite">
      <blockquote type="cite">
        <blockquote type="cite">
          <pre wrap="">  Does
your hardware support Pause-Loop-Exiting (or the AMD
equivalent, don't recall their term right now)?
</pre>
        </blockquote>
        <pre wrap="">I have no access to the serial line; could I get the info with a command?
</pre>
      </blockquote>
      <pre wrap="">
"xl dmesg" run early enough (i.e. before the log buffer wraps).</pre>
    </blockquote>
    <tt>Below is the xl dmesg result for your reference. Thanks.</tt><br>
    <tt>[root@scae02cn01 zduan]# xl dmesg<br>
       __  __            _  _    ___   ____      _____     ____  __<br>
       \ \/ /___ _ __   | || |  / _ \ |___ \    / _ \ \  / /  \/  |<br>
        \  // _ \ '_ \  | || |_| | | |  __) |__| | | \ \ / /| |\/| |<br>
        /  \  __/ | | | |__   _| |_| | / __/|__| |_| |\ V / | |  | |<br>
       /_/\_\___|_| |_|    |_|(_)___(_)_____|   \___/  \_/  |_|  |_|<br>
      <br>
      (XEN) Xen version 4.0.2-OVM (mockbuild@(none)) (gcc version 4.1.2 20080704 (Red Hat 4.1.2-48)) Fri Dec 23 17:00:16 EST 2011<br>
      (XEN) Latest ChangeSet: unavailable<br>
      (XEN) Bootloader: GNU GRUB 0.97<br>
      (XEN) Command line: dom0_mem=2G<br>
      (XEN) Video information:<br>
      (XEN)  VGA is text mode 80x25, font 8x16<br>
      (XEN)  VBE/DDC methods: none; EDID transfer time: 1 seconds<br>
      (XEN)  EDID info not retrieved because no DDC retrieval method detected<br>
      (XEN) Disc information:<br>
      (XEN)  Found 1 MBR signatures<br>
      (XEN)  Found 1 EDD information structures<br>
      (XEN) Xen-e820 RAM map:<br>
      (XEN)  0000000000000000 - 0000000000099400 (usable)<br>
      (XEN)  0000000000099400 - 00000000000a0000 (reserved)<br>
      (XEN)  00000000000e0000 - 0000000000100000 (reserved)<br>
      (XEN)  0000000000100000 - 000000007f780000 (usable)<br>
      (XEN)  000000007f78e000 - 000000007f790000 type 9<br>
      (XEN)  000000007f790000 - 000000007f79e000 (ACPI data)<br>
      (XEN)  000000007f79e000 - 000000007f7d0000 (ACPI NVS)<br>
      (XEN)  000000007f7d0000 - 000000007f7e0000 (reserved)<br>
      (XEN)  000000007f7ec000 - 0000000080000000 (reserved)<br>
      (XEN)  00000000e0000000 - 00000000f0000000 (reserved)<br>
      (XEN)  00000000fee00000 - 00000000fee01000 (reserved)<br>
      (XEN)  00000000ffc00000 - 0000000100000000 (reserved)<br>
      (XEN)  0000000100000000 - 0000001880000000 (usable)<br>
      (XEN) ACPI: RSDP 000FAA40, 0024 (r2 SUN   )<br>
      (XEN) ACPI: XSDT 7F790100, 0094 (r1 SUN    Xxx70    20111011 MSFT       97)<br>
      (XEN) ACPI: FACP 7F790290, 00F4 (r4 SUN    Xxx70    20111011 MSFT       97)<br>
      (XEN) ACPI: DSDT 7F7905C0, 5ECF (r2 SUN    Xxx70           1 INTL 20051117)<br>
      (XEN) ACPI: FACS 7F79E000, 0040<br>
      (XEN) ACPI: APIC 7F790390, 011E (r2 SUN    Xxx70    20111011 MSFT       97)<br>
      (XEN) ACPI: MCFG 7F790500, 003C (r1 SUN    Xxx70    20111011 MSFT       97)<br>
      (XEN) ACPI: SLIT 7F790540, 0030 (r1 SUN    Xxx70    20111011 MSFT       97)<br>
      (XEN) ACPI: SPMI 7F790570, 0041 (r5 SUN    Xxx70    20111011 MSFT       97)<br>
      (XEN) ACPI: OEMB 7F79E040, 00BE (r1 SUN    Xxx70    20111011 MSFT       97)<br>
      (XEN) ACPI: HPET 7F79A5C0, 0038 (r1 SUN    Xxx70    20111011 MSFT       97)<br>
      (XEN) ACPI: DMAR 7F79E100, 0130 (r1 SUN    Xxx70           1 MSFT       97)<br>
      (XEN) ACPI: SRAT 7F79A600, 0250 (r1 SUN    Xxx70           1 INTC        1)<br>
      (XEN) ACPI: SSDT 7F79EF60, 0363 (r1  SUN   Xxx70          12 INTL 20051117)<br>
      (XEN) ACPI: EINJ 7F79A850, 0130 (r1 SUN    Xxx70    20111011 MSFT       97)<br>
      (XEN) ACPI: BERT 7F79A9E0, 0030 (r1 SUN    Xxx70    20111011 MSFT       97)<br>
      (XEN) ACPI: ERST 7F79AA10, 01B0 (r1 SUN    Xxx70    20111011 MSFT       97)<br>
      (XEN) ACPI: HEST 7F79ABC0, 00A8 (r1 SUN    Xxx70    20111011 MSFT       97)<br>
      (XEN) System RAM: 98295MB (100654180kB)<br>
      (XEN) Domain heap initialised DMA width 32 bits<br>
      (XEN) Processor #0 6:12 APIC version 21<br>
      (XEN) Processor #2 6:12 APIC version 21<br>
      (XEN) Processor #4 6:12 APIC version 21<br>
      (XEN) Processor #16 6:12 APIC version 21<br>
      (XEN) Processor #18 6:12 APIC version 21<br>
      (XEN) Processor #20 6:12 APIC version 21<br>
      (XEN) Processor #32 6:12 APIC version 21<br>
      (XEN) Processor #34 6:12 APIC version 21<br>
      (XEN) Processor #36 6:12 APIC version 21<br>
      (XEN) Processor #48 6:12 APIC version 21<br>
      (XEN) Processor #50 6:12 APIC version 21<br>
      (XEN) Processor #52 6:12 APIC version 21<br>
      (XEN) Processor #1 6:12 APIC version 21<br>
      (XEN) Processor #3 6:12 APIC version 21<br>
      (XEN) Processor #5 6:12 APIC version 21<br>
      (XEN) Processor #17 6:12 APIC version 21<br>
      (XEN) Processor #19 6:12 APIC version 21<br>
      (XEN) Processor #21 6:12 APIC version 21<br>
      (XEN) Processor #33 6:12 APIC version 21<br>
      (XEN) Processor #35 6:12 APIC version 21<br>
      (XEN) Processor #37 6:12 APIC version 21<br>
      (XEN) Processor #49 6:12 APIC version 21<br>
      (XEN) Processor #51 6:12 APIC version 21<br>
      (XEN) Processor #53 6:12 APIC version 21<br>
      (XEN) IOAPIC[0]: apic_id 6, version 32, address 0xfec00000, GSI 0-23<br>
      (XEN) IOAPIC[1]: apic_id 7, version 32, address 0xfec8a000, GSI 24-47<br>
      (XEN) Enabling APIC mode:  Phys.  Using 2 I/O APICs<br>
      (XEN) Using scheduler: SMP Credit Scheduler (credit)<br>
      (XEN) Detected 2926.029 MHz processor.<br>
      (XEN) Initing memory sharing.<br>
      (XEN) VMX: Supported advanced features:<br>
      (XEN)  - APIC MMIO access virtualisation<br>
      (XEN)  - APIC TPR shadow<br>
      (XEN)  - Extended Page Tables (EPT)<br>
      (XEN)  - Virtual-Processor Identifiers (VPID)<br>
      (XEN)  - Virtual NMI<br>
      (XEN)  - MSR direct-access bitmap<br>
      (XEN)  - Unrestricted Guest<br>
      (XEN) EPT supports 2MB super page.<br>
      (XEN) HVM: ASIDs enabled.<br>
      (XEN) HVM: VMX enabled<br>
      (XEN) HVM: Hardware Assisted Paging detected.<br>
      (XEN) Intel VT-d Snoop Control enabled.<br>
      (XEN) Intel VT-d Dom0 DMA Passthrough not enabled.<br>
      (XEN) Intel VT-d Queued Invalidation enabled.<br>
      (XEN) Intel VT-d Interrupt Remapping enabled.<br>
      (XEN) I/O virtualisation enabled<br>
      (XEN)  - Dom0 mode: Relaxed<br>
      (XEN) Enabled directed EOI with ioapic_ack_old on!<br>
      (XEN) Total of 24 processors activated.<br>
      (XEN) ENABLING IO-APIC IRQs<br>
      (XEN)  -&gt; Using old ACK method<br>
      (XEN) TSC is reliable, synchronization unnecessary<br>
      (XEN) Platform timer is 14.318MHz HPET<br>
      (XEN) Allocated console ring of 64 KiB.<br>
      (XEN) Brought up 24 CPUs<br>
      (XEN) *** LOADING DOMAIN 0 ***<br>
      (XEN)  Xen  kernel: 64-bit, lsb, compat32<br>
      (XEN)  Dom0 kernel: 64-bit, lsb, paddr 0x2000 -&gt; 0x6d5000<br>
      (XEN) PHYSICAL MEMORY ARRANGEMENT:<br>
      (XEN)  Dom0 alloc.:   0000000835000000-&gt;0000000836000000 (520192 pages to be allocated)<br>
      (XEN) VIRTUAL MEMORY ARRANGEMENT:<br>
      (XEN)  Loaded kernel: ffffffff80002000-&gt;ffffffff806d5000<br>
      (XEN)  Init. ramdisk: ffffffff806d5000-&gt;ffffffff80ed7400<br>
      (XEN)  Phys-Mach map: ffffea0000000000-&gt;ffffea0000400000<br>
      (XEN)  Start info:    ffffffff80ed8000-&gt;ffffffff80ed84b4<br>
      (XEN)  Page tables:   ffffffff80ed9000-&gt;ffffffff80ee4000<br>
      (XEN)  Boot stack:    ffffffff80ee4000-&gt;ffffffff80ee5000<br>
      (XEN)  TOTAL:         ffffffff80000000-&gt;ffffffff81000000<br>
      (XEN)  ENTRY ADDRESS: ffffffff80002000<br>
      (XEN) Dom0 has maximum 24 VCPUs<br>
      (XEN) Scrubbing Free RAM: ..........................................................done.<br>
      (XEN) Xen trace buffers: disabled<br>
      (XEN) Std. Loglevel: Errors and warnings<br>
      (XEN) Guest Loglevel: Nothing (Rate-limited: Errors and warnings)<br>
      (XEN) Xen is relinquishing VGA console.<br>
      (XEN) *** Serial input -&gt; DOM0 (type 'CTRL-a' three times to switch input to Xen)<br>
      (XEN) Freed 168kB init memory.<br>
      <br>
    </tt>
    <blockquote cite="mid:5023AE960200007800093DE8@nat28.tlf.novell.com"
      type="cite">
    </blockquote>
  </body>
</html>

--------------090605070704070006080803--


--===============2070227487450461743==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2070227487450461743==--


From xen-devel-bounces@lists.xen.org Fri Aug 10 05:28:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 05:28:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Szhls-0005ca-1p; Fri, 10 Aug 2012 05:28:28 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1Szhlr-0005cV-2U
	for xen-devel@lists.xensource.com; Fri, 10 Aug 2012 05:28:27 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1344576500!4788585!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzU2MTEy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.2; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28713 invoked from network); 10 Aug 2012 05:28:20 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-10.tower-27.messagelabs.com with SMTP;
	10 Aug 2012 05:28:20 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga102.jf.intel.com with ESMTP; 09 Aug 2012 22:28:19 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.77,744,1336374000"; d="scan'208";a="178428662"
Received: from fmsmsx106.amr.corp.intel.com ([10.19.9.37])
	by orsmga001.jf.intel.com with ESMTP; 09 Aug 2012 22:28:19 -0700
Received: from fmsmsx101.amr.corp.intel.com (10.19.9.52) by
	FMSMSX106.amr.corp.intel.com (10.19.9.37) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Thu, 9 Aug 2012 22:28:19 -0700
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	FMSMSX101.amr.corp.intel.com (10.19.9.52) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Thu, 9 Aug 2012 22:28:19 -0700
Received: from shsmsx102.ccr.corp.intel.com ([169.254.2.196]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.232]) with mapi id
	14.01.0355.002; Fri, 10 Aug 2012 13:28:17 +0800
From: "Xu, Dongxiao" <dongxiao.xu@intel.com>
To: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Thread-Topic: qemu-xen-traditional: NOCACHE or CACHE_WB to open disk images
	for IDE
Thread-Index: Ac12uPFOgBHlCEiHTr+5fqqpedpupQ==
Date: Fri, 10 Aug 2012 05:28:17 +0000
Message-ID: <40776A41FC278F40B59438AD47D147A90FE016B1@SHSMSX102.ccr.corp.intel.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "Zhang, Yang Z" <yang.z.zhang@intel.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] qemu-xen-traditional: NOCACHE or CACHE_WB to open disk
 images for IDE
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi list,

Recently I was debugging a slow L2 guest boot in a nested virtualization environment (both the L0 and L1 hypervisors are Xen).
Booting an L2 Linux guest (RHEL6u2) takes more than 3 minutes after GRUB is loaded. I did some profiling and saw that the guest spends the time doing disk operations through the int13 BIOS service.

Even without considering the nested case, there is a bug report that normal VMs boot slower than before (with both qcow and plain disk images), see:
http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1821
So I think the boot delay is simply much amplified in the L2 guest.

I root-caused this issue to a change in qemu, and I saw there was a lot of discussion on this topic. I didn't see a final decision, but the patch was later checked in. Could anybody revisit this commit and explain the final decision?
http://lists.xen.org/archives/html/xen-devel/2012-03/msg02072.html

Thanks,
Dongxiao

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 05:33:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 05:33:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Szhq6-0005k9-Nj; Fri, 10 Aug 2012 05:32:50 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1Szhq5-0005k4-6O
	for xen-devel@lists.xen.org; Fri, 10 Aug 2012 05:32:49 +0000
Received: from [85.158.143.35:59429] by server-3.bemta-4.messagelabs.com id
	64/1F-31486-00D94205; Fri, 10 Aug 2012 05:32:48 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1344576766!12648282!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzA3MjQ3\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30827 invoked from network); 10 Aug 2012 05:32:47 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-8.tower-21.messagelabs.com with SMTP;
	10 Aug 2012 05:32:47 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga101.jf.intel.com with ESMTP; 09 Aug 2012 22:32:46 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.77,744,1336374000"; d="scan'208";a="183983046"
Received: from fmsmsx105.amr.corp.intel.com ([10.19.9.36])
	by orsmga002.jf.intel.com with ESMTP; 09 Aug 2012 22:32:46 -0700
Received: from shsmsx101.ccr.corp.intel.com (10.239.4.153) by
	FMSMSX105.amr.corp.intel.com (10.19.9.36) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Thu, 9 Aug 2012 22:32:45 -0700
Received: from shsmsx102.ccr.corp.intel.com ([169.254.2.196]) by
	SHSMSX101.ccr.corp.intel.com ([169.254.1.82]) with mapi id
	14.01.0355.002; Fri, 10 Aug 2012 13:32:44 +0800
From: "Xu, Dongxiao" <dongxiao.xu@intel.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Thread-Topic: qemu-xen-traditional: NOCACHE or CACHE_WB to open disk images
	for IDE
Thread-Index: Ac12uZBjK3K3HmTeRfSjIG3Pa6SRcw==
Date: Fri, 10 Aug 2012 05:32:42 +0000
Message-ID: <40776A41FC278F40B59438AD47D147A90FE016D3@SHSMSX102.ccr.corp.intel.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "Zhang, Yang Z" <yang.z.zhang@intel.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] qemu-xen-traditional: NOCACHE or CACHE_WB to open disk
 images for IDE
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi list,

Recently I was debugging a slow L2 guest boot in a nested virtualization environment (both the L0 and L1 hypervisors are Xen).
Booting an L2 Linux guest (RHEL6u2) takes more than 3 minutes after GRUB is loaded. I did some profiling and saw that the guest spends the time doing disk operations through the int13 BIOS service.

Even without considering the nested case, there is a bug report that normal VMs boot slower than before (with both qcow and plain disk images), see:
http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1821
So I think the boot delay is simply much amplified in the L2 guest.

I root-caused this issue to a change in qemu, and I saw there was a lot of discussion on this topic. I didn't see a final decision, but the patch was later checked in. Could anybody revisit this commit and explain the final decision?
http://lists.xen.org/archives/html/xen-devel/2012-03/msg02072.html

Thanks,
Dongxiao

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 05:33:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 05:33:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Szhq6-0005k9-Nj; Fri, 10 Aug 2012 05:32:50 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1Szhq5-0005k4-6O
	for xen-devel@lists.xen.org; Fri, 10 Aug 2012 05:32:49 +0000
Received: from [85.158.143.35:59429] by server-3.bemta-4.messagelabs.com id
	64/1F-31486-00D94205; Fri, 10 Aug 2012 05:32:48 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1344576766!12648282!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzA3MjQ3\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30827 invoked from network); 10 Aug 2012 05:32:47 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-8.tower-21.messagelabs.com with SMTP;
	10 Aug 2012 05:32:47 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga101.jf.intel.com with ESMTP; 09 Aug 2012 22:32:46 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.77,744,1336374000"; d="scan'208";a="183983046"
Received: from fmsmsx105.amr.corp.intel.com ([10.19.9.36])
	by orsmga002.jf.intel.com with ESMTP; 09 Aug 2012 22:32:46 -0700
Received: from shsmsx101.ccr.corp.intel.com (10.239.4.153) by
	FMSMSX105.amr.corp.intel.com (10.19.9.36) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Thu, 9 Aug 2012 22:32:45 -0700
Received: from shsmsx102.ccr.corp.intel.com ([169.254.2.196]) by
	SHSMSX101.ccr.corp.intel.com ([169.254.1.82]) with mapi id
	14.01.0355.002; Fri, 10 Aug 2012 13:32:44 +0800
From: "Xu, Dongxiao" <dongxiao.xu@intel.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Thread-Topic: qemu-xen-traditional: NOCACHE or CACHE_WB to open disk images
	for IDE
Thread-Index: Ac12uZBjK3K3HmTeRfSjIG3Pa6SRcw==
Date: Fri, 10 Aug 2012 05:32:42 +0000
Message-ID: <40776A41FC278F40B59438AD47D147A90FE016D3@SHSMSX102.ccr.corp.intel.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "Zhang, Yang Z" <yang.z.zhang@intel.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] qemu-xen-traditional: NOCACHE or CACHE_WB to open disk
 images for IDE
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi list,

Recently I was debugging a slow L2 guest boot issue in a nested virtualization environment (both the L0 and L1 hypervisors are Xen).
Booting an L2 Linux guest (RHEL6u2) takes more than 3 minutes after grub loads. I did some profiling and saw that the guest is doing disk operations through the int13 BIOS service.

Even without considering the nested case, I saw there is a bug report that normal VMs boot slower than before (with both qcow and raw disk images), see:
http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1821
Therefore I think the boot delay is simply magnified in the L2 guest.

I root-caused this issue to a change in qemu, and I saw there was a lot of discussion on this topic. I didn't see a final decision, but the patch was later checked in. Could anybody help revisit this commit and explain the final decision?
http://lists.xen.org/archives/html/xen-devel/2012-03/msg02072.html

Thanks,
Dongxiao

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 06:51:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 06:51:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Szj3o-0006Gp-Om; Fri, 10 Aug 2012 06:51:04 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Szj3n-0006Gk-4N
	for xen-devel@lists.xen.org; Fri, 10 Aug 2012 06:51:03 +0000
Received: from [85.158.139.83:41613] by server-5.bemta-5.messagelabs.com id
	F7/B7-03096-55FA4205; Fri, 10 Aug 2012 06:51:01 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-14.tower-182.messagelabs.com!1344581460!23129131!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19379 invoked from network); 10 Aug 2012 06:51:00 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-14.tower-182.messagelabs.com with SMTP;
	10 Aug 2012 06:51:00 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 10 Aug 2012 07:50:58 +0100
Message-Id: <5024CB720200007800094124@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 10 Aug 2012 07:50:58 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ben Guthro" <ben@guthro.net>
References: <CAOvdn6WwgBD-2yf1uu3m9-16=Tb5KUrybXf2KibtnxKbP7YtwA@mail.gmail.com>
	<CAOvdn6UBW-yTOMd3YSf5M9JiHLRivyaKL-qzcKG8epszqaKmGg@mail.gmail.com>
	<20120807163320.GC15053@phenom.dumpdata.com>
	<CAOvdn6XcRGKtJxGwJO8J+ZBJr89wfTLc1woW5rhtMM540H9h1w@mail.gmail.com>
	<CAOvdn6XUoSeGoo7X=AJ1kpDxZB9HZEv+TJ4v-=FHcyR4=CHK-w@mail.gmail.com>
	<502240F302000078000937A6@nat28.tlf.novell.com>
	<CAOvdn6WN3kxC8Bb9g6MpfLULu47EGSGCtajPc-SFeJ-8VSUQDQ@mail.gmail.com>
	<CAOvdn6Xk1inm1hU55i0phU2qvSWyJLNoyDdn43biBAY2OewPtw@mail.gmail.com>
	<5023F5400200007800093F4E@nat28.tlf.novell.com>
	<CAOvdn6U-f3C4U8xVPvDyRFkFFLkbfiCXg_n+9zgA9V5u+q5jfw@mail.gmail.com>
	<5023F8B80200007800093F8C@nat28.tlf.novell.com>
	<CAOvdn6XKHfete_+=Hkvt20G7_cJ-4i8+9j9YD30OxzQ16Ru-+g@mail.gmail.com>
In-Reply-To: <CAOvdn6XKHfete_+=Hkvt20G7_cJ-4i8+9j9YD30OxzQ16Ru-+g@mail.gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen4.2 S3 regression?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 09.08.12 at 18:09, Ben Guthro <ben@guthro.net> wrote:
> iommu=no-intremap
> This seems to work around the issue on this platform, performing
> multiple suspend/resume cycles, and ahci came back afterwards just
> fine.
> 
> What is the downside to flipping this off?

Loss of security (against misbehaving/malicious guests). So we
certainly want/need to get to the bottom of this (especially if
more than one kind of system is affected).

> iommu=off
> This test behaved similarly to the above, also working around the issue.

Of course, this is a superset of the former.

This result, however, again makes it more likely that this is
indeed a Xen-side problem, not Dom0-induced corruption.
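For context, the workarounds tested above are Xen hypervisor boot
parameters, passed on the xen.gz line of the boot loader entry; a
hypothetical GRUB (legacy) entry might look like this (paths and
versions are made up for illustration):

```
title Xen 4.2
    root (hd0,0)
    kernel /boot/xen.gz iommu=no-intremap
    module /boot/vmlinuz-3.4.6 root=/dev/sda1 ro
    module /boot/initrd-3.4.6.img
```

Replacing `iommu=no-intremap` with `iommu=off` disables the IOMMU
entirely, which is the superset case discussed above.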

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 07:35:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 07:35:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Szjk8-0006la-OC; Fri, 10 Aug 2012 07:34:48 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1Szjk6-0006lS-11
	for xen-devel@lists.xen.org; Fri, 10 Aug 2012 07:34:47 +0000
Received: from [85.158.143.99:60525] by server-1.bemta-4.messagelabs.com id
	0D/89-20198-599B4205; Fri, 10 Aug 2012 07:34:45 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-15.tower-216.messagelabs.com!1344584084!27467858!1
X-Originating-IP: [74.125.82.51]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21977 invoked from network); 10 Aug 2012 07:34:44 -0000
Received: from mail-wg0-f51.google.com (HELO mail-wg0-f51.google.com)
	(74.125.82.51)
	by server-15.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Aug 2012 07:34:44 -0000
Received: by wgbed3 with SMTP id ed3so826875wgb.32
	for <xen-devel@lists.xen.org>; Fri, 10 Aug 2012 00:34:44 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:cc:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=/SxfeQXBTL+LkQMyRw0dpPRNGhuzMc9TVOajvLSk8/E=;
	b=NaEQa0I3kNx4cf+ZlgsNojsq8Z1keCfIp6ZXXjSLN4w7itNhPgpDMsDDYfXICYZYi9
	tLzl7VA8YszS8GbkVIertGuV2uQOGt1snp7cyPOMDQaSPDoPIbMfDkAeKgfrHKFsrekT
	M5gKwmrnBhP5i//CAZhGzK2nmpzd2d6SW051kiiePyABHtSOKLr549IQc+ZvoN8ePQpa
	d4iCw1FDBVMt5XslrQxTg6Q6qyyq84wZm3w7XUmF1Zx6/z8lArD+tQByvhKluRBFm5b1
	IxG8mZ95xpN5F3mSlIVTkJj4hSfrgyyAByemhUdT/zJ3Xyo2Q1Gl/Odt2xTe8b6sbj3U
	V2Mg==
Received: by 10.180.97.135 with SMTP id ea7mr3843952wib.11.1344584084148;
	Fri, 10 Aug 2012 00:34:44 -0700 (PDT)
Received: from [192.168.1.68] (host86-178-46-238.range86-178.btcentralplus.com.
	[86.178.46.238])
	by mx.google.com with ESMTPS id w7sm6215207wiz.0.2012.08.10.00.34.42
	(version=SSLv3 cipher=OTHER); Fri, 10 Aug 2012 00:34:43 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.32.0.111121
Date: Fri, 10 Aug 2012 08:34:38 +0100
From: Keir Fraser <keir.xen@gmail.com>
To: Jan Beulich <JBeulich@suse.com>,
	xen-devel <xen-devel@lists.xen.org>
Message-ID: <CC4A781E.3B4D6%keir.xen@gmail.com>
Thread-Topic: [Xen-devel] [PATCH] make all (native) hypercalls consistently
	have "long" return type
Thread-Index: Ac12ypikD0eqgRvaY0qcNSzZt8vHyA==
In-Reply-To: <5022AA540200007800093AF6@nat28.tlf.novell.com>
Mime-version: 1.0
Cc: dgdegra@tycho.nsa.gov
Subject: Re: [Xen-devel] [PATCH] make all (native) hypercalls consistently
 have "long" return type
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/08/2012 17:05, "Jan Beulich" <JBeulich@suse.com> wrote:

> for common and x86 ones at least, to address the problem of storing
> zero-extended values into the multicall result field otherwise.
> 
> Reported-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Keir Fraser <keir@xen.org>

Of course this should go in for the next 4.2 RC.

 -- Keir

> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -2973,7 +2973,7 @@ static inline void fixunmap_domain_page(
>  #define fixunmap_domain_page(ptr) ((void)(ptr))
>  #endif
>  
> -int do_mmuext_op(
> +long do_mmuext_op(
>      XEN_GUEST_HANDLE(mmuext_op_t) uops,
>      unsigned int count,
>      XEN_GUEST_HANDLE(uint) pdone,
> @@ -3437,7 +3437,7 @@ int do_mmuext_op(
>      return rc;
>  }
>  
> -int do_mmu_update(
> +long do_mmu_update(
>      XEN_GUEST_HANDLE(mmu_update_t) ureqs,
>      unsigned int count,
>      XEN_GUEST_HANDLE(uint) pdone,
> @@ -4285,15 +4285,15 @@ static int __do_update_va_mapping(
>      return rc;
>  }
>  
> -int do_update_va_mapping(unsigned long va, u64 val64,
> -                         unsigned long flags)
> +long do_update_va_mapping(unsigned long va, u64 val64,
> +                          unsigned long flags)
>  {
>      return __do_update_va_mapping(va, val64, flags, current->domain);
>  }
>  
> -int do_update_va_mapping_otherdomain(unsigned long va, u64 val64,
> -                                     unsigned long flags,
> -                                     domid_t domid)
> +long do_update_va_mapping_otherdomain(unsigned long va, u64 val64,
> +                                      unsigned long flags,
> +                                      domid_t domid)
>  {
>      struct domain *pg_owner;
>      int rc;
> --- a/xen/common/compat/xenoprof.c
> +++ b/xen/common/compat/xenoprof.c
> @@ -5,6 +5,7 @@
>  #include <compat/xenoprof.h>
>  
>  #define COMPAT
> +#define ret_t int
>  
>  #define do_xenoprof_op compat_xenoprof_op
>  
> --- a/xen/common/kexec.c
> +++ b/xen/common/kexec.c
> @@ -845,8 +845,8 @@ static int kexec_exec(XEN_GUEST_HANDLE(v
>      return -EINVAL; /* never reached */
>  }
>  
> -int do_kexec_op_internal(unsigned long op, XEN_GUEST_HANDLE(void) uarg,
> -                           int compat)
> +static int do_kexec_op_internal(unsigned long op, XEN_GUEST_HANDLE(void)
> uarg,
> +                                bool_t compat)
>  {
>      unsigned long flags;
>      int ret = -EINVAL;
> --- a/xen/common/xenoprof.c
> +++ b/xen/common/xenoprof.c
> @@ -607,6 +607,8 @@ static int xenoprof_op_init(XEN_GUEST_HA
>      return (copy_to_guest(arg, &xenoprof_init, 1) ? -EFAULT : 0);
>  }
>  
> +#define ret_t long
> +
>  #endif /* !COMPAT */
>  
>  static int xenoprof_op_get_buffer(XEN_GUEST_HANDLE(void) arg)
> @@ -660,7 +662,7 @@ static int xenoprof_op_get_buffer(XEN_GU
>                        || (op == XENOPROF_disable_virq)  \
>                        || (op == XENOPROF_get_buffer))
>   
> -int do_xenoprof_op(int op, XEN_GUEST_HANDLE(void) arg)
> +ret_t do_xenoprof_op(int op, XEN_GUEST_HANDLE(void) arg)
>  {
>      int ret = 0;
>      
> @@ -904,6 +906,7 @@ int do_xenoprof_op(int op, XEN_GUEST_HAN
>  }
>  
>  #if defined(CONFIG_COMPAT) && !defined(COMPAT)
> +#undef ret_t
>  #include "compat/xenoprof.c"
>  #endif
>  
> --- a/xen/include/asm-x86/hypercall.h
> +++ b/xen/include/asm-x86/hypercall.h
> @@ -24,7 +24,7 @@ extern long
>  do_set_trap_table(
>      XEN_GUEST_HANDLE(const_trap_info_t) traps);
>  
> -extern int
> +extern long
>  do_mmu_update(
>      XEN_GUEST_HANDLE(mmu_update_t) ureqs,
>      unsigned int count,
> @@ -62,7 +62,7 @@ do_update_descriptor(
>  extern long
>  do_mca(XEN_GUEST_HANDLE(xen_mc_t) u_xen_mc);
>  
> -extern int
> +extern long
>  do_update_va_mapping(
>      unsigned long va,
>      u64 val64,
> @@ -72,14 +72,14 @@ extern long
>  do_physdev_op(
>      int cmd, XEN_GUEST_HANDLE(void) arg);
>  
> -extern int
> +extern long
>  do_update_va_mapping_otherdomain(
>      unsigned long va,
>      u64 val64,
>      unsigned long flags,
>      domid_t domid);
>  
> -extern int
> +extern long
>  do_mmuext_op(
>      XEN_GUEST_HANDLE(mmuext_op_t) uops,
>      unsigned int count,
> @@ -90,10 +90,6 @@ extern unsigned long
>  do_iret(
>      void);
>  
> -extern int
> -do_kexec(
> -    unsigned long op, unsigned arg1, XEN_GUEST_HANDLE(void) uarg);
> -
>  #ifdef __x86_64__
>  
>  extern long
> --- a/xen/include/xen/hypercall.h
> +++ b/xen/include/xen/hypercall.h
> @@ -137,7 +137,7 @@ extern long
>  do_tmem_op(
>      XEN_GUEST_HANDLE(tmem_op_t) uops);
>  
> -extern int
> +extern long
>  do_xenoprof_op(int op, XEN_GUEST_HANDLE(void) arg);
>  
>  #ifdef CONFIG_COMPAT
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 07:35:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 07:35:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Szjka-0006me-4g; Fri, 10 Aug 2012 07:35:16 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SzjkY-0006mS-QB
	for xen-devel@lists.xen.org; Fri, 10 Aug 2012 07:35:14 +0000
Received: from [85.158.143.35:17645] by server-2.bemta-4.messagelabs.com id
	3F/64-19021-2B9B4205; Fri, 10 Aug 2012 07:35:14 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1344584112!12559770!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31513 invoked from network); 10 Aug 2012 07:35:13 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-21.messagelabs.com with SMTP;
	10 Aug 2012 07:35:13 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 10 Aug 2012 08:35:11 +0100
Message-Id: <5024D5D00200007800094134@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 10 Aug 2012 08:35:12 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Jean Guyader" <jean.guyader@citrix.com>,
	"Jean Guyader" <jean.guyader@gmail.com>
References: <1344023454-31425-1-git-send-email-jean.guyader@citrix.com>
	<1344023454-31425-5-git-send-email-jean.guyader@citrix.com>
	<20120809100605.GC16986@ocelot.phlegethon.org>
	<1344507826.32142.116.camel@zakaz.uk.xensource.com>
	<20120809103557.GA17503@ocelot.phlegethon.org>
	<CAEBdQ91rVN-wFwWBkfX1Ne133c4TDeXk+iktTDLTuM3StXdRFw@mail.gmail.com>
	<20120809232547.GA21925@spongy>
In-Reply-To: <20120809232547.GA21925@spongy>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "Tim \(Xen.org\)" <tim@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 4/5] xen: events,
 exposes evtchn_alloc_unbound_domain
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 10.08.12 at 01:25, Jean Guyader <jean.guyader@citrix.com> wrote:
>--- a/xen/common/event_channel.c
>+++ b/xen/common/event_channel.c
>@@ -159,9 +159,8 @@ static int get_free_port(struct domain *d)
> 
> static long evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc)
> {
>-    struct evtchn *chn;
>     struct domain *d;
>-    int            port;
>+    evtchn_port_t  port;
>     domid_t        dom = alloc->dom;
>     long           rc;
> 
>@@ -169,26 +168,47 @@ static long evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc)
>     if ( rc )
>         return rc;
> 
>+    rc = evtchn_alloc_unbound_domain(d, &port,

Any reason you can't pass &alloc->port here directly?

>+            alloc->remote_dom == DOMID_SELF ? current->domain->domain_id
>+                                            : alloc->remote_dom);

Any reason this can't/shouldn't be done in the called function?

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 07:42:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 07:42:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzjrG-00072X-0s; Fri, 10 Aug 2012 07:42:10 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1SzjrE-00072Q-VD
	for xen-devel@lists.xen.org; Fri, 10 Aug 2012 07:42:09 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-15.tower-27.messagelabs.com!1344584520!1679254!1
X-Originating-IP: [81.169.146.162]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MiA9PiAzNjQ5MjU=\n,sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MiA9PiAzNjQ5MjU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15061 invoked from network); 10 Aug 2012 07:42:01 -0000
Received: from mo-p00-ob.rzone.de (HELO mo-p00-ob.rzone.de) (81.169.146.162)
	by server-15.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Aug 2012 07:42:01 -0000
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+zrwiavkK6tmQaLfmwtM48/ll3c7oGjw=
X-RZG-CLASS-ID: mo00
Received: from probook.site
	(dslb-084-057-084-038.pools.arcor-ip.net [84.57.84.38])
	by smtp.strato.de (joses mo88) (RZmta 30.7 DYNA|AUTH)
	with (DHE-RSA-AES256-SHA encrypted) ESMTPA id X00c7eo7A669pX ;
	Fri, 10 Aug 2012 09:42:00 +0200 (CEST)
Received: by probook.site (Postfix, from userid 1000)
	id C042C18639; Fri, 10 Aug 2012 09:41:59 +0200 (CEST)
Date: Fri, 10 Aug 2012 09:41:59 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20120810074159.GA11792@aepfle.de>
References: <20120806173905.GA26336@aepfle.de>
	<1344318133.24794.16.camel@dagon.hellion.org.uk>
	<20120807152502.GA24503@aepfle.de>
	<1344353581.11339.105.camel@zakaz.uk.xensource.com>
	<20120808172809.GA22206@aepfle.de>
	<1344501152.32142.78.camel@zakaz.uk.xensource.com>
	<20120809143406.GA9317@aepfle.de>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20120809143406.GA9317@aepfle.de>
User-Agent: Mutt/1.5.21.rev5555 (2012-07-20)
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] vif backend configuration times out
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Aug 09, Olaf Hering wrote:

> Indeed, netback_probe is apparently never called in my case. I will
> check why that happens.

What I have seen so far is that with 4.2+xl the vif driver is not
registered, while with 4.1+xm a vif driver is registered. That is the
only difference I could spot so far.
I will post more results on why that happens later.

Olaf

root@satriani:~ # grep xen-backend xl-dmesg-3.0.34-sles11sp2_olh-xen.txt 
[    0.149879] bus: 'xen-backend': registered
[    0.149879] device: 'xen-backend': device_add
[    0.149879] PM: Adding info for No Bus:xen-backend
[   90.055240] bus: 'xen-backend': add device qdisk-1-768
[   90.055252] PM: Adding info for xen-backend:qdisk-1-768
[   90.064575] bus: 'xen-backend': add device qdisk-1-5632
[   90.064584] PM: Adding info for xen-backend:qdisk-1-5632
[   90.073196] bus: 'xen-backend': add device console-1-0
[   90.073205] PM: Adding info for xen-backend:console-1-0
[   90.081771] bus: 'xen-backend': add device vkbd-1-0
[   90.081776] PM: Adding info for xen-backend:vkbd-1-0
[   90.378494] bus: 'xen-backend': add device vif-1-0
[   90.378504] PM: Adding info for xen-backend:vif-1-0
[  100.401586] PM: Removing info for xen-backend:console-1-0
[  100.401596] bus: 'xen-backend': remove device console-1-0
[  102.400202] PM: Removing info for xen-backend:qdisk-1-768
[  102.400212] bus: 'xen-backend': remove device qdisk-1-768
[  102.406016] PM: Removing info for xen-backend:qdisk-1-5632
[  102.406025] bus: 'xen-backend': remove device qdisk-1-5632
[  102.411464] PM: Removing info for xen-backend:vkbd-1-0
[  102.411473] bus: 'xen-backend': remove device vkbd-1-0
[  110.410600] PM: Removing info for xen-backend:vif-1-0
[  110.410610] bus: 'xen-backend': remove device vif-1-0
root@satriani:~ # grep xen-backend xm-dmesg-3.0.34-sles11sp2_olh-xen.txt 
[    0.150119] bus: 'xen-backend': registered
[    0.150119] device: 'xen-backend': device_add
[    0.150119] PM: Adding info for No Bus:xen-backend
[   44.319441] bus: 'xen-backend': add driver tap
[   44.338383] bus: 'xen-backend': add driver vbd
[   44.367501] bus: 'xen-backend': add driver vif
[   44.378095] bus: 'xen-backend': add driver vusb
[  204.002506] bus: 'xen-backend': add device vfb-1-0
[  204.002514] PM: Adding info for xen-backend:vfb-1-0
[  204.017641] bus: 'xen-backend': add device vbd-1-768
[  204.017650] PM: Adding info for xen-backend:vbd-1-768
[  204.017663] bus: 'xen-backend': driver_probe_device: matched device vbd-1-768 with driver vbd
[  204.017667] bus: 'xen-backend': really_probe: probing driver vbd with device vbd-1-768
[  204.018903] bus: 'xen-backend': really_probe: bound device vbd-1-768 to driver vbd
[  204.032488] bus: 'xen-backend': add device vbd-1-5632
[  204.032494] PM: Adding info for xen-backend:vbd-1-5632
[  204.032502] bus: 'xen-backend': driver_probe_device: matched device vbd-1-5632 with driver vbd
[  204.032504] bus: 'xen-backend': really_probe: probing driver vbd with device vbd-1-5632
[  204.033534] bus: 'xen-backend': really_probe: bound device vbd-1-5632 to driver vbd
[  204.043973] bus: 'xen-backend': add device vif-1-0
[  204.043980] PM: Adding info for xen-backend:vif-1-0
[  204.043988] bus: 'xen-backend': driver_probe_device: matched device vif-1-0 with driver vif
[  204.043990] bus: 'xen-backend': really_probe: probing driver vif with device vif-1-0
[  204.049398] bus: 'xen-backend': really_probe: bound device vif-1-0 to driver vif
[  204.739981] bus: 'xen-backend': add device console-1-0
[  204.739993] PM: Adding info for xen-backend:console-1-0
[  340.548887] PM: Removing info for xen-backend:console-1-0
[  340.548902] bus: 'xen-backend': remove device console-1-0
[  340.570464] PM: Removing info for xen-backend:vfb-1-0
[  340.570470] bus: 'xen-backend': remove device vfb-1-0
[  340.577394] PM: Removing info for xen-backend:vbd-1-768
[  340.577403] bus: 'xen-backend': remove device vbd-1-768
[  340.578784] PM: Removing info for xen-backend:vbd-1-5632
[  340.578791] bus: 'xen-backend': remove device vbd-1-5632
[  340.581006] PM: Removing info for xen-backend:vif-1-0
[  340.581014] bus: 'xen-backend': remove device vif-1-0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 07:50:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 07:50:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Szjyc-0007D4-Ts; Fri, 10 Aug 2012 07:49:46 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Szjyb-0007Cz-Hr
	for xen-devel@lists.xensource.com; Fri, 10 Aug 2012 07:49:45 +0000
Received: from [85.158.143.99:6915] by server-3.bemta-4.messagelabs.com id
	2C/A6-31486-81DB4205; Fri, 10 Aug 2012 07:49:44 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-14.tower-216.messagelabs.com!1344584984!17864990!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22893 invoked from network); 10 Aug 2012 07:49:44 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-14.tower-216.messagelabs.com with SMTP;
	10 Aug 2012 07:49:44 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 10 Aug 2012 08:49:43 +0100
Message-Id: <5024D937020000780009414C@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 10 Aug 2012 08:49:43 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Santosh Jodh" <santosh.jodh@citrix.com>, <xen-devel@lists.xensource.com>
References: <f687c55802629c72405d.1344562989@REDBLD-XS.ad.xensource.com>
In-Reply-To: <f687c55802629c72405d.1344562989@REDBLD-XS.ad.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: wei.wang2@amd.com, tim@xen.org, xiantao.zhang@intel.com
Subject: Re: [Xen-devel] [PATCH] dump_p2m_table: For IOMMU
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 10.08.12 at 03:43, Santosh Jodh <santosh.jodh@citrix.com> wrote:
> New key handler 'o' to dump the IOMMU p2m table for each domain.
> Skips dumping table for domain0.
> Intel and AMD specific iommu_ops handler for dumping p2m table.

Looks pretty good now to me, just one minor comment below.
But it will of course need the ack of both the VT-d and AMD IOMMU
maintainers (who, while on Cc, haven't participated in the discussion
so far).

> +    amd_dump_p2m_table_level(hd->root_table, hd->paging_mode, 0, &indent);

Why do you pass a pointer here? Just pass 0, and in the recursive
call pass either +1 or +(level - next_level). (The inconsistency on
the use of next_level will need to be commented on by Wei anyway,
as already requested in an earlier reply.)
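
The by-value alternative can be sketched as a generic tree walk
(hypothetical structure and names, not the actual AMD IOMMU code):

```c
#include <stddef.h>

/* Hypothetical stand-in for one level of a page-table tree. */
struct level {
    struct level *next;
};

/* The depth is passed by value and the recursive call passes
 * indent + 1 (or indent + (level - next_level) when a level is
 * skipped); each frame gets its own copy, so no pointer to a
 * shared counter is needed. */
static int walk(const struct level *l, int indent)
{
    if (l == NULL)
        return indent;              /* depth reached at the leaf */
    return walk(l->next, indent + 1);
}
```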

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 07:50:18 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 07:50:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Szjyt-0007EM-Ek; Fri, 10 Aug 2012 07:50:03 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jean.guyader@citrix.com>) id 1Szjyr-0007EA-Up
	for xen-devel@lists.xen.org; Fri, 10 Aug 2012 07:50:02 +0000
Received: from [85.158.143.35:20826] by server-3.bemta-4.messagelabs.com id
	DC/07-31486-92DB4205; Fri, 10 Aug 2012 07:50:01 +0000
X-Env-Sender: jean.guyader@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1344584980!14043929!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg3OTk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24570 invoked from network); 10 Aug 2012 07:49:40 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Aug 2012 07:49:40 -0000
X-IronPort-AV: E=Sophos;i="4.77,744,1336348800"; d="scan'208";a="13946049"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Aug 2012 07:49:40 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 10 Aug 2012 08:49:40 +0100
Received: from spongy.cam.xci-test.com ([10.80.248.53] helo=spongy)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<jean.guyader@citrix.com>)	id 1SzjyV-0003oM-QP;
	Fri, 10 Aug 2012 07:49:39 +0000
Received: by spongy (Postfix, from userid 2023)	id 142BC34049D; Fri, 10 Aug
	2012 08:51:01 +0100 (BST)
Date: Fri, 10 Aug 2012 08:51:01 +0100
From: Jean Guyader <jean.guyader@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20120810075100.GA30606@spongy>
References: <1344023454-31425-1-git-send-email-jean.guyader@citrix.com>
	<1344023454-31425-5-git-send-email-jean.guyader@citrix.com>
	<20120809100605.GC16986@ocelot.phlegethon.org>
	<1344507826.32142.116.camel@zakaz.uk.xensource.com>
	<20120809103557.GA17503@ocelot.phlegethon.org>
	<CAEBdQ91rVN-wFwWBkfX1Ne133c4TDeXk+iktTDLTuM3StXdRFw@mail.gmail.com>
	<20120809232547.GA21925@spongy>
	<5024D5D00200007800094134@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="FL5UXtIhxfXey3p5"
Content-Disposition: inline
In-Reply-To: <5024D5D00200007800094134@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: "Tim \(Xen.org\)" <tim@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	Jean Guyader <jean.guyader@gmail.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 4/5] xen: events,
 exposes evtchn_alloc_unbound_domain
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--FL5UXtIhxfXey3p5
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline

On 10/08 08:35, Jan Beulich wrote:
> >>> On 10.08.12 at 01:25, Jean Guyader <jean.guyader@citrix.com> wrote:
> >--- a/xen/common/event_channel.c
> >+++ b/xen/common/event_channel.c
> >@@ -159,9 +159,8 @@ static int get_free_port(struct domain *d)
> > 
> > static long evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc)
> > {
> >-    struct evtchn *chn;
> >     struct domain *d;
> >-    int            port;
> >+    evtchn_port_t  port;
> >     domid_t        dom = alloc->dom;
> >     long           rc;
> > 
> >@@ -169,26 +168,47 @@ static long evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc)
> >     if ( rc )
> >         return rc;
> > 
> >+    rc = evtchn_alloc_unbound_domain(d, &port,
> 
> Any reason you can't pass &alloc->port here directly?
> 
> >+            alloc->remote_dom == DOMID_SELF ? current->domain->domain_id
> >+                                            : alloc->remote_dom);
> 
> Any reason this can't/shouldn't be done in the called function?
> 

No specific reason for either of those things. Here is a new version based
on your comments.

Thanks for reviewing,
Jean

--FL5UXtIhxfXey3p5
Content-Type: text/x-diff; charset="us-ascii"
Content-Disposition: attachment;
	filename="evtchn_alloc_unbound_domain.patch"

commit 208384d74852df9ae26294236d79e33967a75afa
Author: Jean Guyader <jean.guyader@citrix.com>
Date:   Thu Aug 2 16:19:23 2012 +0100

    xen: events, exposes evtchn_alloc_unbound_domain
    
    Expose evtchn_alloc_unbound_domain to the rest of
    Xen so that unbound event channels can be allocated from within Xen.

diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index 53777f8..fd626bf 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -159,36 +159,53 @@ static int get_free_port(struct domain *d)
 
 static long evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc)
 {
-    struct evtchn *chn;
     struct domain *d;
-    int            port;
-    domid_t        dom = alloc->dom;
     long           rc;
 
-    rc = rcu_lock_target_domain_by_id(dom, &d);
+    rc = rcu_lock_target_domain_by_id(alloc->dom, &d);
     if ( rc )
         return rc;
 
+    rc = evtchn_alloc_unbound_domain(d, &alloc->port, alloc->remote_dom);
+    if ( rc )
+        ERROR_EXIT_DOM((int)rc, d);
+
+ out:
+    rcu_unlock_domain(d);
+
+    return rc;
+}
+
+int evtchn_alloc_unbound_domain(struct domain *d, evtchn_port_t *port,
+                                domid_t remote_domid)
+{
+    struct evtchn *chn;
+    int           rc;
+    int           free_port;
+
     spin_lock(&d->event_lock);
 
-    if ( (port = get_free_port(d)) < 0 )
-        ERROR_EXIT_DOM(port, d);
-    chn = evtchn_from_port(d, port);
+    rc = free_port = get_free_port(d);
+    if ( free_port < 0 )
+        goto out;
 
-    rc = xsm_evtchn_unbound(d, chn, alloc->remote_dom);
+    chn = evtchn_from_port(d, free_port);
+    rc = xsm_evtchn_unbound(d, chn, remote_domid);
     if ( rc )
         goto out;
 
     chn->state = ECS_UNBOUND;
-    if ( (chn->u.unbound.remote_domid = alloc->remote_dom) == DOMID_SELF )
+    if ( (chn->u.unbound.remote_domid = remote_domid) == DOMID_SELF )
         chn->u.unbound.remote_domid = current->domain->domain_id;
 
-    alloc->port = port;
+    chn->u.unbound.remote_domid = remote_domid;
+
+    *port = free_port;
+    /* Everything is fine, returns 0 */
+    rc = 0;
 
  out:
     spin_unlock(&d->event_lock);
-    rcu_unlock_domain(d);
-
     return rc;
 }
 
diff --git a/xen/include/xen/event.h b/xen/include/xen/event.h
index 71c3e92..1a0c832 100644
--- a/xen/include/xen/event.h
+++ b/xen/include/xen/event.h
@@ -69,6 +69,9 @@ int guest_enabled_event(struct vcpu *v, uint32_t virq);
 /* Notify remote end of a Xen-attached event channel.*/
 void notify_via_xen_event_channel(struct domain *ld, int lport);
 
+int evtchn_alloc_unbound_domain(struct domain *d, evtchn_port_t *port,
+                                domid_t remote_domid);
+
 /* Internal event channel object accessors */
 #define bucket_from_port(d,p) \
     ((d)->evtchn[(p)/EVTCHNS_PER_BUCKET])

--FL5UXtIhxfXey3p5
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--FL5UXtIhxfXey3p5--


From xen-devel-bounces@lists.xen.org Fri Aug 10 07:50:18 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 07:50:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Szjyt-0007EM-Ek; Fri, 10 Aug 2012 07:50:03 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jean.guyader@citrix.com>) id 1Szjyr-0007EA-Up
	for xen-devel@lists.xen.org; Fri, 10 Aug 2012 07:50:02 +0000
Received: from [85.158.143.35:20826] by server-3.bemta-4.messagelabs.com id
	DC/07-31486-92DB4205; Fri, 10 Aug 2012 07:50:01 +0000
X-Env-Sender: jean.guyader@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1344584980!14043929!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg3OTk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24570 invoked from network); 10 Aug 2012 07:49:40 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Aug 2012 07:49:40 -0000
X-IronPort-AV: E=Sophos;i="4.77,744,1336348800"; d="scan'208";a="13946049"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Aug 2012 07:49:40 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 10 Aug 2012 08:49:40 +0100
Received: from spongy.cam.xci-test.com ([10.80.248.53] helo=spongy)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<jean.guyader@citrix.com>)	id 1SzjyV-0003oM-QP;
	Fri, 10 Aug 2012 07:49:39 +0000
Received: by spongy (Postfix, from userid 2023)	id 142BC34049D; Fri, 10 Aug
	2012 08:51:01 +0100 (BST)
Date: Fri, 10 Aug 2012 08:51:01 +0100
From: Jean Guyader <jean.guyader@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20120810075100.GA30606@spongy>
References: <1344023454-31425-1-git-send-email-jean.guyader@citrix.com>
	<1344023454-31425-5-git-send-email-jean.guyader@citrix.com>
	<20120809100605.GC16986@ocelot.phlegethon.org>
	<1344507826.32142.116.camel@zakaz.uk.xensource.com>
	<20120809103557.GA17503@ocelot.phlegethon.org>
	<CAEBdQ91rVN-wFwWBkfX1Ne133c4TDeXk+iktTDLTuM3StXdRFw@mail.gmail.com>
	<20120809232547.GA21925@spongy>
	<5024D5D00200007800094134@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="FL5UXtIhxfXey3p5"
Content-Disposition: inline
In-Reply-To: <5024D5D00200007800094134@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: "Tim \(Xen.org\)" <tim@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	Jean Guyader <jean.guyader@gmail.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 4/5] xen: events,
 exposes evtchn_alloc_unbound_domain
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--FL5UXtIhxfXey3p5
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline

On 10/08 08:35, Jan Beulich wrote:
> >>> On 10.08.12 at 01:25, Jean Guyader <jean.guyader@citrix.com> wrote:
> >--- a/xen/common/event_channel.c
> >+++ b/xen/common/event_channel.c
> >@@ -159,9 +159,8 @@ static int get_free_port(struct domain *d)
> > 
> > static long evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc)
> > {
> >-    struct evtchn *chn;
> >     struct domain *d;
> >-    int            port;
> >+    evtchn_port_t  port;
> >     domid_t        dom = alloc->dom;
> >     long           rc;
> > 
> >@@ -169,26 +168,47 @@ static long evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc)
> >     if ( rc )
> >         return rc;
> > 
> >+    rc = evtchn_alloc_unbound_domain(d, &port,
> 
> Any reason you can't pass &alloc->port here directly?
> 
> >+            alloc->remote_dom == DOMID_SELF ? current->domain->domain_id
> >+                                            : alloc->remote_dom);
> 
> Any reason this can't/shouldn't be done in the called function?
> 

No specific reason for either of those things. Here is a new version based
on your comments.

Thanks for reviewing,
Jean

--FL5UXtIhxfXey3p5
Content-Type: text/x-diff; charset="us-ascii"
Content-Disposition: attachment;
	filename="evtchn_alloc_unbound_domain.patch"

commit 208384d74852df9ae26294236d79e33967a75afa
Author: Jean Guyader <jean.guyader@citrix.com>
Date:   Thu Aug 2 16:19:23 2012 +0100

    xen: events, exposes evtchn_alloc_unbound_domain
    
    Expose evtchn_alloc_unbound_domain to the rest of
    Xen so that unbound event channels can be allocated from within Xen.

diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index 53777f8..fd626bf 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -159,36 +159,53 @@ static int get_free_port(struct domain *d)
 
 static long evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc)
 {
-    struct evtchn *chn;
     struct domain *d;
-    int            port;
-    domid_t        dom = alloc->dom;
     long           rc;
 
-    rc = rcu_lock_target_domain_by_id(dom, &d);
+    rc = rcu_lock_target_domain_by_id(alloc->dom, &d);
     if ( rc )
         return rc;
 
+    rc = evtchn_alloc_unbound_domain(d, &alloc->port, alloc->remote_dom);
+    if ( rc )
+        ERROR_EXIT_DOM((int)rc, d);
+
+ out:
+    rcu_unlock_domain(d);
+
+    return rc;
+}
+
+int evtchn_alloc_unbound_domain(struct domain *d, evtchn_port_t *port,
+                                domid_t remote_domid)
+{
+    struct evtchn *chn;
+    int           rc;
+    int           free_port;
+
     spin_lock(&d->event_lock);
 
-    if ( (port = get_free_port(d)) < 0 )
-        ERROR_EXIT_DOM(port, d);
-    chn = evtchn_from_port(d, port);
+    rc = free_port = get_free_port(d);
+    if ( free_port < 0 )
+        goto out;
 
-    rc = xsm_evtchn_unbound(d, chn, alloc->remote_dom);
+    chn = evtchn_from_port(d, free_port);
+    rc = xsm_evtchn_unbound(d, chn, remote_domid);
     if ( rc )
         goto out;
 
     chn->state = ECS_UNBOUND;
-    if ( (chn->u.unbound.remote_domid = alloc->remote_dom) == DOMID_SELF )
+    if ( (chn->u.unbound.remote_domid = remote_domid) == DOMID_SELF )
         chn->u.unbound.remote_domid = current->domain->domain_id;
 
-    alloc->port = port;
+    *port = free_port;
+    /* Everything went fine; return 0. */
+    rc = 0;
 
  out:
     spin_unlock(&d->event_lock);
-    rcu_unlock_domain(d);
-
     return rc;
 }
 
diff --git a/xen/include/xen/event.h b/xen/include/xen/event.h
index 71c3e92..1a0c832 100644
--- a/xen/include/xen/event.h
+++ b/xen/include/xen/event.h
@@ -69,6 +69,9 @@ int guest_enabled_event(struct vcpu *v, uint32_t virq);
 /* Notify remote end of a Xen-attached event channel.*/
 void notify_via_xen_event_channel(struct domain *ld, int lport);
 
+int evtchn_alloc_unbound_domain(struct domain *d, evtchn_port_t *port,
+                                domid_t remote_domid);
+
 /* Internal event channel object accessors */
 #define bucket_from_port(d,p) \
     ((d)->evtchn[(p)/EVTCHNS_PER_BUCKET])

--FL5UXtIhxfXey3p5
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--FL5UXtIhxfXey3p5--


From xen-devel-bounces@lists.xen.org Fri Aug 10 07:55:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 07:55:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Szk3q-0007VV-5W; Fri, 10 Aug 2012 07:55:10 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Szk3p-0007VO-Be
	for xen-devel@lists.xen.org; Fri, 10 Aug 2012 07:55:09 +0000
Received: from [85.158.143.35:46691] by server-2.bemta-4.messagelabs.com id
	1B/7D-19021-C5EB4205; Fri, 10 Aug 2012 07:55:08 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1344585281!13303067!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6213 invoked from network); 10 Aug 2012 07:54:42 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-3.tower-21.messagelabs.com with SMTP;
	10 Aug 2012 07:54:42 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 10 Aug 2012 08:54:41 +0100
Message-Id: <5024DA61020000780009414F@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 10 Aug 2012 08:54:41 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Keir Fraser" <keir.xen@gmail.com>
References: <5022AA540200007800093AF6@nat28.tlf.novell.com>
	<CC4A781E.3B4D6%keir.xen@gmail.com>
In-Reply-To: <CC4A781E.3B4D6%keir.xen@gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: dgdegra@tycho.nsa.gov, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] make all (native) hypercalls consistently
 have "long" return type
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 10.08.12 at 09:34, Keir Fraser <keir.xen@gmail.com> wrote:
> On 08/08/2012 17:05, "Jan Beulich" <JBeulich@suse.com> wrote:
> 
>> for common and x86 ones at least, to address the problem of storing
>> zero-extended values into the multicall result field otherwise.
>> 
>> Reported-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Acked-by: Keir Fraser <keir@xen.org>
> 
> Of course this should go in for the next 4.2 RC.

And I had hoped to also get this into 4.x.y, but I see that you
tagged the release already (and 4.1 also got pushed from
staging). So with 4.0.x now presumably dead, I'd just like to
ask to apply this to 4.1-testing then.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 07:57:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 07:57:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Szk5o-0007cS-Nl; Fri, 10 Aug 2012 07:57:12 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Szk5n-0007cK-02
	for xen-devel@lists.xen.org; Fri, 10 Aug 2012 07:57:11 +0000
Received: from [85.158.143.35:55459] by server-3.bemta-4.messagelabs.com id
	12/D0-31486-6DEB4205; Fri, 10 Aug 2012 07:57:10 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1344585429!13303527!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17113 invoked from network); 10 Aug 2012 07:57:09 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-3.tower-21.messagelabs.com with SMTP;
	10 Aug 2012 07:57:09 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 10 Aug 2012 08:57:09 +0100
Message-Id: <5024DAF60200007800094167@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 10 Aug 2012 08:57:10 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Jean Guyader" <jean.guyader@citrix.com>
References: <1344023454-31425-1-git-send-email-jean.guyader@citrix.com>
	<1344023454-31425-5-git-send-email-jean.guyader@citrix.com>
	<20120809100605.GC16986@ocelot.phlegethon.org>
	<1344507826.32142.116.camel@zakaz.uk.xensource.com>
	<20120809103557.GA17503@ocelot.phlegethon.org>
	<CAEBdQ91rVN-wFwWBkfX1Ne133c4TDeXk+iktTDLTuM3StXdRFw@mail.gmail.com>
	<20120809232547.GA21925@spongy>
	<5024D5D00200007800094134@nat28.tlf.novell.com>
	<20120810075100.GA30606@spongy>
In-Reply-To: <20120810075100.GA30606@spongy>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "Tim \(Xen.org\)" <tim@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	Jean Guyader <jean.guyader@gmail.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 4/5] xen: events,
 exposes evtchn_alloc_unbound_domain
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 10.08.12 at 09:51, Jean Guyader <jean.guyader@citrix.com> wrote:
> No specific reason for either of those things. Here is a new version based
> on your comments.

Reviewed-by: Jan Beulich <jbeulich@suse.com>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 08:36:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 08:36:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzkhI-0008No-Q5; Fri, 10 Aug 2012 08:35:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1SzkhG-0008Nj-VP
	for xen-devel@lists.xen.org; Fri, 10 Aug 2012 08:35:55 +0000
Received: from [85.158.143.99:63620] by server-3.bemta-4.messagelabs.com id
	2D/9F-31486-AE7C4205; Fri, 10 Aug 2012 08:35:54 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-10.tower-216.messagelabs.com!1344587728!21113367!1
X-Originating-IP: [209.85.215.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10530 invoked from network); 10 Aug 2012 08:35:28 -0000
Received: from mail-ey0-f173.google.com (HELO mail-ey0-f173.google.com)
	(209.85.215.173)
	by server-10.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Aug 2012 08:35:28 -0000
Received: by eaac13 with SMTP id c13so364154eaa.32
	for <xen-devel@lists.xen.org>; Fri, 10 Aug 2012 01:35:28 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:cc:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=M0rz6yA9KSOXkpfrQZQa7jRRQv1Amw8LncEXwwI6pwc=;
	b=Pim9EZkRWEZvOckZHA4s9+lx2mXzfM6t0sprv2wHhC0sAp0X1gwFMP4uVGz/KR56fP
	pRRUZPHUymj/HhAX6uQEzVdFhdUsnW3lOqvrQ4qA41EItUBfzKZsaEk8P01lPOecgfKS
	4siEQnWCjCswWV39cFcpovwhjV+uHAsqjl0x5JAYMvLxkwF5CB5qcDXhZfSu/ZuVsTtD
	sJxRnroTPTLQUetntR9FWBOAM/60nVY9nHuUqpBMjNPknjzaggsbmjhMeB8nhYCHWsnf
	yit96neFWGHBdBNg7DaWF79if++2/F+862rdlkBsOvvGPOpM1zPhY2z8mOEayjsNCAB+
	lGpQ==
Received: by 10.14.212.72 with SMTP id x48mr2155509eeo.40.1344587728280;
	Fri, 10 Aug 2012 01:35:28 -0700 (PDT)
Received: from [192.168.1.68] (host86-178-46-238.range86-178.btcentralplus.com.
	[86.178.46.238])
	by mx.google.com with ESMTPS id d48sm9557291eeo.10.2012.08.10.01.35.26
	(version=SSLv3 cipher=OTHER); Fri, 10 Aug 2012 01:35:27 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.32.0.111121
Date: Fri, 10 Aug 2012 09:35:24 +0100
From: Keir Fraser <keir.xen@gmail.com>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <CC4A865C.3B4E7%keir.xen@gmail.com>
Thread-Topic: [Xen-devel] [PATCH] make all (native) hypercalls consistently
	have "long" return type
Thread-Index: Ac120xXThMBPR6UKQk+2JhXz8/QhwQ==
In-Reply-To: <5024DA61020000780009414F@nat28.tlf.novell.com>
Mime-version: 1.0
Cc: dgdegra@tycho.nsa.gov, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] make all (native) hypercalls consistently
 have "long" return type
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 10/08/2012 08:54, "Jan Beulich" <JBeulich@suse.com> wrote:

>>>> On 10.08.12 at 09:34, Keir Fraser <keir.xen@gmail.com> wrote:
>> On 08/08/2012 17:05, "Jan Beulich" <JBeulich@suse.com> wrote:
>> 
>>> for common and x86 ones at least, to address the problem of storing
>>> zero-extended values into the multicall result field otherwise.
>>> 
>>> Reported-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> 
>> Acked-by: Keir Fraser <keir@xen.org>
>> 
>> Of course this should go in for the next 4.2 RC.
> 
> And I had hoped to also get this into 4.x.y, but I see that you
> tagged the release already (and 4.1 also got pushed from
> staging). So with 4.0.x now presumably dead, I'd just like to
> ask to apply this to 4.1-testing then.

I'm happy to apply it to both, with a 4.x.next-rc1-pre tag. We may never
actually do another release from the 4.0 branch, however!

Since there's no rush, it can sit in xen-4.2-rc for a short while and then
get backported as part of a batch.

 -- Keir

> Jan
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 08:36:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 08:36:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzkhI-0008No-Q5; Fri, 10 Aug 2012 08:35:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1SzkhG-0008Nj-VP
	for xen-devel@lists.xen.org; Fri, 10 Aug 2012 08:35:55 +0000
Received: from [85.158.143.99:63620] by server-3.bemta-4.messagelabs.com id
	2D/9F-31486-AE7C4205; Fri, 10 Aug 2012 08:35:54 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-10.tower-216.messagelabs.com!1344587728!21113367!1
X-Originating-IP: [209.85.215.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10530 invoked from network); 10 Aug 2012 08:35:28 -0000
Received: from mail-ey0-f173.google.com (HELO mail-ey0-f173.google.com)
	(209.85.215.173)
	by server-10.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Aug 2012 08:35:28 -0000
Received: by eaac13 with SMTP id c13so364154eaa.32
	for <xen-devel@lists.xen.org>; Fri, 10 Aug 2012 01:35:28 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:cc:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=M0rz6yA9KSOXkpfrQZQa7jRRQv1Amw8LncEXwwI6pwc=;
	b=Pim9EZkRWEZvOckZHA4s9+lx2mXzfM6t0sprv2wHhC0sAp0X1gwFMP4uVGz/KR56fP
	pRRUZPHUymj/HhAX6uQEzVdFhdUsnW3lOqvrQ4qA41EItUBfzKZsaEk8P01lPOecgfKS
	4siEQnWCjCswWV39cFcpovwhjV+uHAsqjl0x5JAYMvLxkwF5CB5qcDXhZfSu/ZuVsTtD
	sJxRnroTPTLQUetntR9FWBOAM/60nVY9nHuUqpBMjNPknjzaggsbmjhMeB8nhYCHWsnf
	yit96neFWGHBdBNg7DaWF79if++2/F+862rdlkBsOvvGPOpM1zPhY2z8mOEayjsNCAB+
	lGpQ==
Received: by 10.14.212.72 with SMTP id x48mr2155509eeo.40.1344587728280;
	Fri, 10 Aug 2012 01:35:28 -0700 (PDT)
Received: from [192.168.1.68] (host86-178-46-238.range86-178.btcentralplus.com.
	[86.178.46.238])
	by mx.google.com with ESMTPS id d48sm9557291eeo.10.2012.08.10.01.35.26
	(version=SSLv3 cipher=OTHER); Fri, 10 Aug 2012 01:35:27 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.32.0.111121
Date: Fri, 10 Aug 2012 09:35:24 +0100
From: Keir Fraser <keir.xen@gmail.com>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <CC4A865C.3B4E7%keir.xen@gmail.com>
Thread-Topic: [Xen-devel] [PATCH] make all (native) hypercalls consistently
	have "long" return type
Thread-Index: Ac120xXThMBPR6UKQk+2JhXz8/QhwQ==
In-Reply-To: <5024DA61020000780009414F@nat28.tlf.novell.com>
Mime-version: 1.0
Cc: dgdegra@tycho.nsa.gov, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] make all (native) hypercalls consistently
 have "long" return type
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 10/08/2012 08:54, "Jan Beulich" <JBeulich@suse.com> wrote:

>>>> On 10.08.12 at 09:34, Keir Fraser <keir.xen@gmail.com> wrote:
>> On 08/08/2012 17:05, "Jan Beulich" <JBeulich@suse.com> wrote:
>> 
>>> for common and x86 ones at least, to address the problem of storing
>>> zero-extended values into the multicall result field otherwise.
>>> 
>>> Reported-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> 
>> Acked-by: Keir Fraser <keir@xen.org>
>> 
>> Of course this should go in for the next 4.2 RC.
> 
> And I had hoped to also get this into 4.x.y, but I see that you
> tagged the release already (and 4.1 also got pushed from
> staging). So with 4.0.x now presumably dead, I'd just like to
> ask to apply this to 4.1-testing then.

I'm happy to apply it to both, with a 4.x.next-rc1-pre tag. We may never
actually do another release from the 4.0 branch, however!

Since there's no rush, it can sit in xen-4.2-rc for a short while and then
get backported as part of a batch.

 -- Keir

> Jan
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 08:38:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 08:38:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Szkiy-0008SL-9g; Fri, 10 Aug 2012 08:37:40 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jean.guyader@gmail.com>) id 1Szkiw-0008SB-QB
	for xen-devel@lists.xen.org; Fri, 10 Aug 2012 08:37:39 +0000
Received: from [85.158.139.83:53896] by server-3.bemta-5.messagelabs.com id
	FB/B1-31899-158C4205; Fri, 10 Aug 2012 08:37:37 +0000
X-Env-Sender: jean.guyader@gmail.com
X-Msg-Ref: server-10.tower-182.messagelabs.com!1344587856!27692247!1
X-Originating-IP: [209.85.212.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27108 invoked from network); 10 Aug 2012 08:37:37 -0000
Received: from mail-vb0-f45.google.com (HELO mail-vb0-f45.google.com)
	(209.85.212.45)
	by server-10.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Aug 2012 08:37:37 -0000
Received: by vbip1 with SMTP id p1so1621922vbi.32
	for <xen-devel@lists.xen.org>; Fri, 10 Aug 2012 01:37:35 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type;
	bh=/ZY+sZpixK219PVk9Y299kSH/3O8/4lqo1WuFy4IqZo=;
	b=h/dmYuquJup2tFTnQhRlfSbSae0qtmEZZaD3+ETBrorRrUek17+0oCKTUisqmaAWaF
	WJBSPpomzP9MY87FkI+Hf1KkqKabAcNMyKlDl5xizK3bLG19Xuw6H3hUZkr42ZveKX8a
	WnIpDP1K9lBlcqyC7AEFJOHI3kJ2C0OYEMDA9mY2kIRxKY6yU0Q4TiyTYj4/+iG2lTtX
	EXVp6fDj14PgFyrTk0wlyaGqHA/2r80rTJ/YynylxubDY6yunks/WDSaZW4C24Rr9Yr7
	u/1gZrXH9C9/ODDdbRKnCMwKitminxcljl2sTmUT7mPE97XmIpyHuv0t31NGP+U0TZmK
	TzqQ==
Received: by 10.52.70.163 with SMTP id n3mr1660766vdu.64.1344587855648; Fri,
	10 Aug 2012 01:37:35 -0700 (PDT)
MIME-Version: 1.0
Received: by 10.58.79.175 with HTTP; Fri, 10 Aug 2012 01:37:15 -0700 (PDT)
In-Reply-To: <20120806152815.GE8967@phenom.dumpdata.com>
References: <1344032660-1251-1-git-send-email-jean.guyader@citrix.com>
	<20120806152815.GE8967@phenom.dumpdata.com>
From: Jean Guyader <jean.guyader@gmail.com>
Date: Fri, 10 Aug 2012 09:37:15 +0100
Message-ID: <CAEBdQ90s11dBVsKCURwvZTNE+PE0nuG2WeoHsc7QGZcQ_9oWZQ@mail.gmail.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Jean Guyader <jean.guyader@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] RFC: V4V Linux Driver
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 6 August 2012 16:28, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> On Fri, Aug 03, 2012 at 11:24:20PM +0100, Jean Guyader wrote:
>> This is a Linux driver for the V4V inter VM communication system.
>>
>> I've posted the V4V Xen patches for comments; to find more info about
>> V4V you can check out this link:
>> http://osdir.com/ml/general/2012-08/msg05904.html
>>
>> This Linux driver exposes two char devices, one for TCP and one for UDP.
>> The interface exposed to userspace is made of IOCTLs, one per
>> network operation (listen, bind, accept, send, recv, ...).
>
> I haven't had a chance to take a look at this and won't until next
> week. But just a couple of quick questions:
>
>  - Is there a test application for this? If so where can I get it

I have a userspace library that talks to it; I'm in the process of
cleaning it up. I'll send a patch series today that adds it to xen/tools.

>  - Is there any code in the Xen repository that uses it.

The Xen support is being upstreamed right now, but because it needs some
userspace and kernel support to be useful it's a bit of a chicken-and-egg
problem, so I'm trying to upstream both at the same time.

You can find the latest version of the Xen patches here:
http://lists.xen.org/archives/html/xen-devel/2012-08/msg00385.html

>  - Who are the users?

Right now we use a close, but not compatible, version in XenClient.
Potential users would be anyone looking for an easy way to communicate
between VMs with the feel of TCP/UDP.

Some background info about V4V can be found here:
http://lists.xen.org/archives/html/xen-devel/2012-05/msg01866.html

>  - Why .. TCP and UDP ? Does that mean it masquarades as an Ethernet
>    device? Why the choice of using a char device?
>

Because of security concerns we didn't want to rely on the Linux
networking code: it would have been hard for us to prove that a V4V
packet could never end up on your network card. That said, we understand
that there is a need for a network-like driver, and we are working on a
version of the V4V driver that will use SKBs and expose itself as a new
socket type.

In fact we asked on the LKML whether it would be acceptable to add a new
socket type in Linux for inter-VM communication, but we are still
waiting for an answer.
http://comments.gmane.org/gmane.linux.kernel/1337472

The really nice feature of V4V is its ability to leverage all the
existing networking programs. We have a libc interposer library that
wraps all the networking functions. Here is an example of accessing an
ssh server running in another domain (domid=16):

LD_PRELOAD=/usr/lib/libv4v.so ssh 1.0.0.16

Thanks,
Jean

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 09:01:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 09:01:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Szl6C-0000eb-7x; Fri, 10 Aug 2012 09:01:40 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1Szl6B-0000eV-J1
	for xen-devel@lists.xensource.com; Fri, 10 Aug 2012 09:01:39 +0000
Received: from [85.158.143.99:22002] by server-2.bemta-4.messagelabs.com id
	9E/4C-19021-2FDC4205; Fri, 10 Aug 2012 09:01:38 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1344589297!26941238!1
X-Originating-IP: [74.125.82.171]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25164 invoked from network); 10 Aug 2012 09:01:37 -0000
Received: from mail-we0-f171.google.com (HELO mail-we0-f171.google.com)
	(74.125.82.171)
	by server-3.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Aug 2012 09:01:37 -0000
Received: by weyx43 with SMTP id x43so1055450wey.30
	for <xen-devel@lists.xensource.com>;
	Fri, 10 Aug 2012 02:01:37 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type
	:content-transfer-encoding;
	bh=JStqSV7CV1jInErSMJY4c/0J/gi8ttybGxUMYCyjG8s=;
	b=Cvh++CQCIT+czZadY3XHchbtYZzrJbkrYk+7xfdsdklENMgyp1nUErK2OsMy8U76Bj
	Di60X7gjVHxSUMrIXcIYhhWIfJP4Bby7QIvXlFfbuhErEqGdPv3uEvbJK5l870WjOGMy
	BsZZrLsILSq4wd70q41PtHFd8J3L3LuxC645OmIPqDh3RRar0e2U0k2zQ92b7Z4GbgIc
	OKD5lAXh+0hzRKzM1QWga5Vx2fzlvKqvtnX8WaKtt0sAjKu7DWlGVggui5mNNhVGE67r
	a6SAnLThQYMrUc5ajeVshQOdpbyVTelp60IHln0fE5s4xCoIBouNKSKTp/x7QVpp05Sh
	KYGQ==
MIME-Version: 1.0
Received: by 10.216.135.147 with SMTP id u19mr1265723wei.12.1344589297450;
	Fri, 10 Aug 2012 02:01:37 -0700 (PDT)
Received: by 10.223.81.66 with HTTP; Fri, 10 Aug 2012 02:01:37 -0700 (PDT)
In-Reply-To: <CA+ePHTDpXuggNAvD-una-APzx-F3k32a1HbhG8u2WCL9a-qP5A@mail.gmail.com>
References: <CA+ePHTDpXuggNAvD-una-APzx-F3k32a1HbhG8u2WCL9a-qP5A@mail.gmail.com>
Date: Fri, 10 Aug 2012 10:01:37 +0100
X-Google-Sender-Auth: K2fWvOu3RwwxwiAC_w2s5elFfoQ
Message-ID: <CAFLBxZY9BzcLQoKYDnfgWod9JuoDD-W6QtMb2nZe8MZSek=SWw@mail.gmail.com>
From: George Dunlap <dunlapg@umich.edu>
To: =?UTF-8?B?6ams56OK?= <aware.why@gmail.com>
Cc: xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [help] problem with watch_pipe in xenstore
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Aug 10, 2012 at 4:22 AM, 马磊 <aware.why@gmail.com> wrote:
> Hi all,
>       In src/tools/xenstore/xs.c : get_handle(), it says
>
> /* Watch pipe is allocated on demand in xs_fileno(). */
>  h->watch_pipe[0] = h->watch_pipe[1] = -1;
>
> but in the xs_fileno() function:
>
>  137-int xs_fileno(struct xs_handle *h)
>  138{
>  139        char c = 0;
>  140
>  141        mutex_lock(&h->watch_mutex);
>  142
>  143        if ((h->watch_pipe[0] == -1) && (pipe(h->watch_pipe) != -1)) {
>  144                /* Kick things off if the watch list is already
> non-empty. */
>  145                if (!list_empty(&h->watch_list))
>  146                        while (write(h->watch_pipe[1], &c, 1) != 1)
>  147                                continue;
>  148        }
>  149
>  150        mutex_unlock(&h->watch_mutex);
>  151
>  152        return h->watch_pipe[0];
>  153}
>
> When did it assign a value to watch_pipe[0], and what do watch_pipe[0]
> and watch_pipe[1] stand for respectively?
>
> In addition, the two lines `while (read(h->watch_pipe[0], &c, 1) != 1)` and
> `while (write(h->watch_pipe[1], &c, 1) != 1)` appear many times.
> It's odd: why is it always writing to watch_pipe[1] but nobody reads from it,
> and always reading from watch_pipe[0] but nobody writes to it?
>
> Grep found no answer, and it's difficult to search for such a trivial
> question by google!

"man pipe" can answer all your questions.

 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 09:29:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 09:29:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzlX0-0000wk-NG; Fri, 10 Aug 2012 09:29:22 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lists+xen@internecto.net>) id 1SzlWz-0000wf-1m
	for xen-devel@lists.xen.org; Fri, 10 Aug 2012 09:29:21 +0000
X-Env-Sender: lists+xen@internecto.net
X-Msg-Ref: server-8.tower-27.messagelabs.com!1344590954!8566404!1
X-Originating-IP: [176.9.245.29]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31277 invoked from network); 10 Aug 2012 09:29:14 -0000
Received: from polaris.internecto.net (HELO mx1.internecto.net) (176.9.245.29)
	by server-8.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Aug 2012 09:29:14 -0000
Received: from localhost (localhost [127.0.0.1])
	by mx1.internecto.net (Postfix) with ESMTP id 84A97A02F1;
	Fri, 10 Aug 2012 09:29:14 +0000 (UTC)
X-Virus-Scanned: Debian amavisd-new at mail.internecto.net
Received: from mx1.internecto.net ([127.0.0.1])
	by localhost (mail.polaris.internecto.net [127.0.0.1]) (amavisd-new,
	port 10024)
	with ESMTP id CtLKgLXGUAkp; Fri, 10 Aug 2012 09:29:14 +0000 (UTC)
Received: from internecto.net (5ED4FDEB.cm-7-5d.dynamic.ziggo.nl
	[94.212.253.235])
	(using TLSv1 with cipher DHE-RSA-AES128-SHA (128/128 bits))
	(Client did not present a certificate)
	(Authenticated sender: lists@internecto.net)
	by mx1.internecto.net (Postfix) with ESMTPSA id EE652A02EA;
	Fri, 10 Aug 2012 09:29:13 +0000 (UTC)
Date: Fri, 10 Aug 2012 11:29:12 +0200
From: Mark van Dijk <lists+xen@internecto.net>
To: Keir Fraser <keir@xen.org>
Message-ID: <20120810112912.6df40924@internecto.net>
In-Reply-To: <CC48557F.484FE%keir@xen.org>
References: <20120808174355.470746e0@internecto.net>
	<CC48557F.484FE%keir@xen.org>
Organization: Internecto SIS
X-Mailer: Claws Mail 3.8.0 (GTK+ 2.24.10; x86_64-pc-linux-gnu)
Mime-Version: 1.0
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Possible bug with huge unflushed console buffer
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Greetings,

> > The behaviour I encountered, i.e. a system lock, seems to suggest that
> > there was some kind of buffer overflow. And this can be achieved by
> > something as simple as 'iptables -I INPUT -j LOG'. Perhaps it would be
> > wise to consider an 'xl console -c' option to clear the console's history;
> > then at least I could exit the console and clear unsent messages...
> 
> Is your VM logging to its virtual serial line? You could just not do
> that...

Correct, it is logging to the virtual serial line and you're right, I
could just not do that. But isn't that just a workaround?

I just think that if you emulate something like a serial line, it should
be emulated properly. Why don't we just discard the data that is sent
to a detached serial line? That's what bare metal servers do too, right?

> I think this is due to the guest's qemu-dm process stuffing its
> transmitted serial data into a pty, to be consumed by 'xl console',
> and if that doesn't happen the buffers fill up, serial processing
> stops, and we back up all the way to the guest.

Right. So the issue is only with the guest, not with the hypervisor.
That's a relief. :)

-- 
Stay in touch,
Mark van Dijk.               ,--------------------------------
----------------------------'        Fri Aug 10 09:11 UTC 2012
Today is Boomtime, the 3rd day of Bureaucracy in the YOLD 3178

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 09:29:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 09:29:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzlX0-0000wk-NG; Fri, 10 Aug 2012 09:29:22 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lists+xen@internecto.net>) id 1SzlWz-0000wf-1m
	for xen-devel@lists.xen.org; Fri, 10 Aug 2012 09:29:21 +0000
X-Env-Sender: lists+xen@internecto.net
X-Msg-Ref: server-8.tower-27.messagelabs.com!1344590954!8566404!1
X-Originating-IP: [176.9.245.29]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31277 invoked from network); 10 Aug 2012 09:29:14 -0000
Received: from polaris.internecto.net (HELO mx1.internecto.net) (176.9.245.29)
	by server-8.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Aug 2012 09:29:14 -0000
Received: from localhost (localhost [127.0.0.1])
	by mx1.internecto.net (Postfix) with ESMTP id 84A97A02F1;
	Fri, 10 Aug 2012 09:29:14 +0000 (UTC)
X-Virus-Scanned: Debian amavisd-new at mail.internecto.net
Received: from mx1.internecto.net ([127.0.0.1])
	by localhost (mail.polaris.internecto.net [127.0.0.1]) (amavisd-new,
	port 10024)
	with ESMTP id CtLKgLXGUAkp; Fri, 10 Aug 2012 09:29:14 +0000 (UTC)
Received: from internecto.net (5ED4FDEB.cm-7-5d.dynamic.ziggo.nl
	[94.212.253.235])
	(using TLSv1 with cipher DHE-RSA-AES128-SHA (128/128 bits))
	(Client did not present a certificate)
	(Authenticated sender: lists@internecto.net)
	by mx1.internecto.net (Postfix) with ESMTPSA id EE652A02EA;
	Fri, 10 Aug 2012 09:29:13 +0000 (UTC)
Date: Fri, 10 Aug 2012 11:29:12 +0200
From: Mark van Dijk <lists+xen@internecto.net>
To: Keir Fraser <keir@xen.org>
Message-ID: <20120810112912.6df40924@internecto.net>
In-Reply-To: <CC48557F.484FE%keir@xen.org>
References: <20120808174355.470746e0@internecto.net>
	<CC48557F.484FE%keir@xen.org>
Organization: Internecto SIS
X-Mailer: Claws Mail 3.8.0 (GTK+ 2.24.10; x86_64-pc-linux-gnu)
Mime-Version: 1.0
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Possible bug with huge unflushed console buffer
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Greetings,

> > The behaviour I encountered, i.e. a system lock, seems to suggest
> > that there was some kind of buffer overflow, and it can be triggered
> > by something as simple as 'iptables -I INPUT -j LOG'. Perhaps it
> > would be wise to consider an 'xl console -c' option to clear the
> > console's history; then at least I could exit the console and clear
> > unsent messages...
> 
> Is your VM logging to its virtual serial line? You could just not do
> that...

Correct, it is logging to the virtual serial line, and you're right, I
could simply not do that. But isn't that just a workaround?

I just think that if you emulate something like a serial line, it
should be emulated properly. Why not simply discard data sent to a
detached serial line? That's what bare-metal servers do, right?

> I think this is due to the guest's qemu-dm process stuffing its
> transmitted serial data into a pty, to be consumed by 'xl console',
> and if that doesn't happen the buffers fill up, serial processing
> stops, and we back up all the way to the guest.

Right. So the issue is only with the guest, not with the hypervisor.
That's a relief. :)

-- 
Stay in touch,
Mark van Dijk.               ,--------------------------------
----------------------------'        Fri Aug 10 09:11 UTC 2012
Today is Boomtime, the 3rd day of Bureaucracy in the YOLD 3178

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 09:43:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 09:43:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzlkG-0001Cw-8a; Fri, 10 Aug 2012 09:43:04 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SzlkE-0001Cr-Pl
	for xen-devel@lists.xensource.com; Fri, 10 Aug 2012 09:43:03 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1344591767!8609043!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg4NDM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22063 invoked from network); 10 Aug 2012 09:42:48 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Aug 2012 09:42:48 -0000
X-IronPort-AV: E=Sophos;i="4.77,744,1336348800"; d="scan'208";a="13948740"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Aug 2012 09:42:47 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 10 Aug 2012 10:42:47 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Szljz-0004Wy-CV;
	Fri, 10 Aug 2012 09:42:47 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Szljx-0006cC-Km;
	Fri, 10 Aug 2012 10:42:46 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13587-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 10 Aug 2012 10:42:45 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13587: tolerable FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13587 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13587/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13574
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13574
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13574
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13574

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  d25406e25af4
baseline version:
 xen                  d25406e25af4

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 09:54:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 09:54:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Szlv0-0001Mw-F1; Fri, 10 Aug 2012 09:54:10 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lists+xen@internecto.net>) id 1Szluz-0001Mr-7k
	for xen-devel@lists.xen.org; Fri, 10 Aug 2012 09:54:09 +0000
Received: from [85.158.139.83:42226] by server-6.bemta-5.messagelabs.com id
	0B/B0-27759-04AD4205; Fri, 10 Aug 2012 09:54:08 +0000
X-Env-Sender: lists+xen@internecto.net
X-Msg-Ref: server-5.tower-182.messagelabs.com!1344592447!27468715!1
X-Originating-IP: [176.9.245.29]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20050 invoked from network); 10 Aug 2012 09:54:07 -0000
Received: from polaris.internecto.net (HELO mx1.internecto.net) (176.9.245.29)
	by server-5.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Aug 2012 09:54:07 -0000
Received: from localhost (localhost [127.0.0.1])
	by mx1.internecto.net (Postfix) with ESMTP id 5503FA02F1
	for <xen-devel@lists.xen.org>; Fri, 10 Aug 2012 09:54:07 +0000 (UTC)
X-Virus-Scanned: Debian amavisd-new at mail.internecto.net
Received: from mx1.internecto.net ([127.0.0.1])
	by localhost (mail.polaris.internecto.net [127.0.0.1]) (amavisd-new,
	port 10024) with ESMTP id F5qQwyst82X4 for <xen-devel@lists.xen.org>;
	Fri, 10 Aug 2012 09:54:06 +0000 (UTC)
Received: from internecto.net (5ED4FDEB.cm-7-5d.dynamic.ziggo.nl
	[94.212.253.235])
	(using TLSv1 with cipher DHE-RSA-AES128-SHA (128/128 bits))
	(Client did not present a certificate)
	(Authenticated sender: lists@internecto.net)
	by mx1.internecto.net (Postfix) with ESMTPSA id 779A0A02EA
	for <xen-devel@lists.xen.org>; Fri, 10 Aug 2012 09:54:06 +0000 (UTC)
Date: Fri, 10 Aug 2012 11:54:05 +0200
From: Mark van Dijk <lists+xen@internecto.net>
To: xen-devel@lists.xen.org
Message-ID: <20120810115405.05af653e@internecto.net>
Organization: Internecto SIS
X-Mailer: Claws Mail 3.8.0 (GTK+ 2.24.10; x86_64-pc-linux-gnu)
Mime-Version: 1.0
Subject: [Xen-devel] xen 4.2.0-rc3-pre: building failure on alpine linux /
	uclibc
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

After someone suggested that Alpine Linux makes a nice dom0 system, I
decided to give it a go. Unfortunately, the build process breaks:

---
  AR    i386-dm/libqemu.a
  LINK  i386-dm/qemu-dm
vl.o: In function `dynticks_stop_timer':
/adm/anderson/xen-unstable/src/xen-git-4.2.0/tools/qemu-xen-traditional-dir/vl.c:1603: undefined reference to `timer_delete'
vl.o: In function `dynticks_start_timer':
/adm/anderson/xen-unstable/src/xen-git-4.2.0/tools/qemu-xen-traditional-dir/vl.c:1585: undefined reference to `timer_create'
vl.o: In function `dynticks_rearm_timer':
/adm/anderson/xen-unstable/src/xen-git-4.2.0/tools/qemu-xen-traditional-dir/vl.c:1620: undefined reference to `timer_gettime'
/adm/anderson/xen-unstable/src/xen-git-4.2.0/tools/qemu-xen-traditional-dir/vl.c:1633: undefined reference to `timer_settime'
collect2: ld returned 1 exit status
make[4]: *** [qemu-dm] Error 1
make[4]: Leaving directory `/adm/anderson/xen-unstable/src/xen-git-4.2.0/tools/qemu-xen-traditional-dir-remote/i386-dm'
make[3]: *** [subdir-i386-dm] Error 2
make[3]: Leaving directory `/adm/anderson/xen-unstable/src/xen-git-4.2.0/tools/qemu-xen-traditional-dir-remote'
make[2]: *** [subdir-install-qemu-xen-traditional-dir] Error 2
make[2]: Leaving directory `/adm/anderson/xen-unstable/src/xen-git-4.2.0/tools'
make[1]: *** [subdirs-install] Error 2
make[1]: Leaving directory `/adm/anderson/xen-unstable/src/xen-git-4.2.0/tools'
make: *** [install-tools] Error 2
---

I found that this bug hit someone back in February '12, who posted the
following message and patch to the list:

http://lists.xen.org/archives/html/xen-devel/2012-02/msg01475.html

But either I am not patching the correct file or the patch does not
work. I tried a couple of different approaches just to see whether I
could get it to build, though I'll admit this feels like doing rocket
science as a third grader. This is what I tried in addition to the
patch named above:

http://pastebin.com/U39H38T7

But this is the end result:

---
  LINK  qemu-ga
  CC    libdis/i386-dis.o
  LINK  qemu-nbd
cutils.o: In function `strtosz_suffix_unit':
/adm/anderson/xen-unstable/src/xen-git-4.2.0/tools/qemu-xen-dir/cutils.c:354: undefined reference to `__isnan'
/adm/anderson/xen-unstable/src/xen-git-4.2.0/tools/qemu-xen-dir/cutils.c:357: undefined reference to `modf'
collect2: ld returned 1 exit status
make[3]: *** [qemu-ga] Error 1
make[3]: *** Waiting for unfinished jobs....
make[3]: Leaving directory
`/adm/anderson/xen-unstable/src/xen-git-4.2.0/tools/qemu-xen-dir-remote'
make[2]: *** [subdir-all-qemu-xen-dir] Error 2
make[2]: Leaving directory
`/adm/anderson/xen-unstable/src/xen-git-4.2.0/tools'
make[1]: *** [subdirs-install] Error 2
make[1]: Leaving directory
`/adm/anderson/xen-unstable/src/xen-git-4.2.0/tools'
make: *** [install-tools] Error 2
---

Where do I go from here?

-- 
Stay in touch,
Mark van Dijk.               ,--------------------------------
----------------------------'        Fri Aug 10 09:29 UTC 2012
Today is Boomtime, the 3rd day of Bureaucracy in the YOLD 3178

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 10:18:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 10:18:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzmI6-0001i8-OR; Fri, 10 Aug 2012 10:18:02 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SzmI4-0001i3-U5
	for xen-devel@lists.xen.org; Fri, 10 Aug 2012 10:18:01 +0000
Received: from [85.158.143.35:5968] by server-3.bemta-4.messagelabs.com id
	56/6A-31486-8DFD4205; Fri, 10 Aug 2012 10:18:00 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1344593879!11868837!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg4NDM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5281 invoked from network); 10 Aug 2012 10:17:59 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Aug 2012 10:17:59 -0000
X-IronPort-AV: E=Sophos;i="4.77,744,1336348800"; d="scan'208";a="13949438"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Aug 2012 10:17:59 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 10 Aug 2012 11:17:59 +0100
Date: Fri, 10 Aug 2012 11:17:33 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Mark van Dijk <lists+xen@internecto.net>
In-Reply-To: <20120810115405.05af653e@internecto.net>
Message-ID: <alpine.DEB.2.02.1208101110370.21096@kaball.uk.xensource.com>
References: <20120810115405.05af653e@internecto.net>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] xen 4.2.0-rc3-pre: building failure on alpine linux
 / uclibc
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Roger should have a better idea than I do, but he is currently on vacation.

On Fri, 10 Aug 2012, Mark van Dijk wrote:
> After someone suggested Alpine Linux makes a nice dom0 system, I decided
> to give it a go. Unfortunately the build process breaks:
> 
> ---
>   AR    i386-dm/libqemu.a
>   LINK  i386-dm/qemu-dm
> vl.o: In function `dynticks_stop_timer':
> /adm/anderson/xen-unstable/src/xen-git-4.2.0/tools/qemu-xen-traditional-dir/vl.c:1603:
> undefined reference to `timer_delete'
> vl.o: In function `dynticks_start_timer':
> /adm/anderson/xen-unstable/src/xen-git-4.2.0/tools/qemu-xen-traditional-dir/vl.c:1585:
> undefined reference to `timer_create'
> vl.o: In function `dynticks_rearm_timer':
> /adm/anderson/xen-unstable/src/xen-git-4.2.0/tools/qemu-xen-traditional-dir/vl.c:1620:
> undefined reference to `timer_gettime'
> /adm/anderson/xen-unstable/src/xen-git-4.2.0/tools/qemu-xen-traditional-dir/vl.c:1633:
> undefined reference to `timer_settime'
> collect2: ld returned 1 exit status
> make[4]: *** [qemu-dm] Error 1
> make[4]: Leaving directory
> `/adm/anderson/xen-unstable/src/xen-git-4.2.0/tools/qemu-xen-traditional-dir-remote/i386-dm'
> make[3]: *** [subdir-i386-dm] Error 2
> make[3]: Leaving directory
> `/adm/anderson/xen-unstable/src/xen-git-4.2.0/tools/qemu-xen-traditional-dir-remote'
> make[2]: *** [subdir-install-qemu-xen-traditional-dir] Error 2
> make[2]: Leaving directory
> `/adm/anderson/xen-unstable/src/xen-git-4.2.0/tools'
> make[1]: *** [subdirs-install] Error 2
> make[1]: Leaving directory
> `/adm/anderson/xen-unstable/src/xen-git-4.2.0/tools'
> make: *** [install-tools] Error 2
> ---
> 
> I found that this bug hit someone back in February '12, when the
> following message and patch were posted to the ML:
> 
> http://lists.xen.org/archives/html/xen-devel/2012-02/msg01475.html
> 
> But I am either not patching the correct file or this patch is not
> working. I tried a couple of different solutions just to see how I
> could get it to work, but I'll admit this is like doing rocket science
> as a third grader. This is what I tried in addition to the previously
> named patch:
> 
> http://pastebin.com/U39H38T7
> 
> But the end result:
> 
> ---
>   LINK  qemu-ga
>   CC    libdis/i386-dis.o
>   LINK  qemu-nbd
> cutils.o: In function `strtosz_suffix_unit':
> /adm/anderson/xen-unstable/src/xen-git-4.2.0/tools/qemu-xen-dir/cutils.c:354:
> undefined reference to `__isnan'
> /adm/anderson/xen-unstable/src/xen-git-4.2.0/tools/qemu-xen-dir/cutils.c:357:
> undefined reference to `modf'
> collect2: ld returned 1 exit status
> make[3]: *** [qemu-ga] Error 1
> make[3]: *** Waiting for unfinished jobs....
> make[3]: Leaving directory
> `/adm/anderson/xen-unstable/src/xen-git-4.2.0/tools/qemu-xen-dir-remote'
> make[2]: *** [subdir-all-qemu-xen-dir] Error 2
> make[2]: Leaving directory
> `/adm/anderson/xen-unstable/src/xen-git-4.2.0/tools'
> make[1]: *** [subdirs-install] Error 2
> make[1]: Leaving directory
> `/adm/anderson/xen-unstable/src/xen-git-4.2.0/tools'
> make: *** [install-tools] Error 2
> ---
> 
> Where do I go from here?

This failure is in upstream QEMU, not qemu-xen-traditional (see the
code path: qemu-xen-dir-remote instead of
qemu-xen-traditional-dir-remote). Moreover, it breaks while compiling
qemu-nbd, which we aren't currently using.
I would try the following change to the configure script:


diff --git a/configure b/configure
index 027a718..9cc8e19 100755
--- a/configure
+++ b/configure
@@ -2614,11 +2613,7 @@ cat > $TMPC <<EOF
 int main(void) { return clock_gettime(CLOCK_REALTIME, NULL); }
 EOF
 
-if compile_prog "" "" ; then
-  :
-elif compile_prog "" "-lrt" ; then
-  LIBS="-lrt $LIBS"
-fi
+LIBS="-lrt -lm $LIBS"
 
 if test "$darwin" != "yes" -a "$mingw32" != "yes" -a "$solaris" != yes -a \
         "$aix" != "yes" -a "$haiku" != "yes" ; then

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 10:33:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 10:33:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzmWd-0001w5-56; Fri, 10 Aug 2012 10:33:03 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <anthony.perard@citrix.com>) id 1SzmWb-0001vy-C7
	for xen-devel@lists.xen.org; Fri, 10 Aug 2012 10:33:01 +0000
Received: from [85.158.138.51:56095] by server-4.bemta-3.messagelabs.com id
	46/B9-06379-C53E4205; Fri, 10 Aug 2012 10:33:00 +0000
X-Env-Sender: anthony.perard@citrix.com
X-Msg-Ref: server-8.tower-174.messagelabs.com!1344594778!27543769!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzA1NDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3162 invoked from network); 10 Aug 2012 10:32:59 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Aug 2012 10:32:59 -0000
X-IronPort-AV: E=Sophos;i="4.77,744,1336363200"; d="scan'208";a="204794858"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Aug 2012 06:32:58 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 10 Aug 2012 06:32:58 -0400
Received: from [10.80.3.61]	by ukmail1.uk.xensource.com with esmtp (Exim 4.69)
	(envelope-from <anthony.perard@citrix.com>)	id 1SzmWX-0004KV-GL;
	Fri, 10 Aug 2012 11:32:57 +0100
Message-ID: <5024E367.1010509@citrix.com>
Date: Fri, 10 Aug 2012 11:33:11 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120717 Thunderbird/14.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <CANKx4w8GQnbw-1apA0Dk-0p=fBUaCM4fXA8nh+84RP97Di5K0Q@mail.gmail.com>
	<1344012563.21372.55.camel@zakaz.uk.xensource.com>
In-Reply-To: <1344012563.21372.55.camel@zakaz.uk.xensource.com>
Cc: David Erickson <halcyon1981@gmail.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] xen_platform_pci
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/03/12 17:49, Ian Campbell wrote:
> On Fri, 2012-08-03 at 17:39 +0100, David Erickson wrote:
>> I tried setting xen_platform_pci=0 on my test ubuntu 11.10 livecd VM
>> hoping it would run in HVM only mode, but the guest's logs showed it
>> detecting the Xen host and loaded the PV netfront driver (after
>> modprobe).  Is this expected behavior?  Is there no way to force HVM
>> only and not PVHVM or total PV?
>>
>> Setup:
>> Xen Unstable
>> Qemu Upstream
>
> It looks like libxl doesn't propagate the xen_platform_pci setting to
> upstream qemu (only does it for trad). Anthony -- is this something you
> can look into please.
>

I will. Both QEMU and libxl will need to be changed.
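For reference, this is a minimal xl domain-config fragment (illustrative
values; option names as in xl.cfg) showing the setting David used:

```
# HVM guest using upstream QEMU (illustrative fragment)
builder = "hvm"
device_model_version = "qemu-xen"
# Ask for no Xen platform PCI device to be exposed to the guest;
# at the time of this thread, libxl only honoured this setting for
# qemu-xen-traditional.
xen_platform_pci = 0
```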

-- 
Anthony PERARD

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 10:51:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 10:51:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzmoC-00026u-RX; Fri, 10 Aug 2012 10:51:12 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Wei.Wang2@amd.com>) id 1SzmoB-00026p-FJ
	for xen-devel@lists.xen.org; Fri, 10 Aug 2012 10:51:11 +0000
Received: from [85.158.143.99:22826] by server-1.bemta-4.messagelabs.com id
	05/73-20198-E97E4205; Fri, 10 Aug 2012 10:51:10 +0000
X-Env-Sender: Wei.Wang2@amd.com
X-Msg-Ref: server-9.tower-216.messagelabs.com!1344595867!27546594!1
X-Originating-IP: [216.32.180.189]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21691 invoked from network); 10 Aug 2012 10:51:09 -0000
Received: from co1ehsobe006.messaging.microsoft.com (HELO
	co1outboundpool.messaging.microsoft.com) (216.32.180.189)
	by server-9.tower-216.messagelabs.com with AES128-SHA encrypted SMTP;
	10 Aug 2012 10:51:09 -0000
Received: from mail187-co1-R.bigfish.com (10.243.78.232) by
	CO1EHSOBE015.bigfish.com (10.243.66.78) with Microsoft SMTP Server id
	14.1.225.23; Fri, 10 Aug 2012 10:51:07 +0000
Received: from mail187-co1 (localhost [127.0.0.1])	by
	mail187-co1-R.bigfish.com (Postfix) with ESMTP id 6903DC002C4;
	Fri, 10 Aug 2012 10:51:07 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:163.181.249.108; KIP:(null); UIP:(null);
	IPV:NLI; H:ausb3twp01.amd.com; RD:none; EFVD:NLI
X-SpamScore: -1
X-BigFish: VPS-1(zzbb2dI98dI9371I1432I1418I4015I78fbmzz1202hzzz2dh668h839hd25he5bhf0ah107ah)
Received: from mail187-co1 (localhost.localdomain [127.0.0.1]) by mail187-co1
	(MessageSwitch) id 1344595865313745_7872;
	Fri, 10 Aug 2012 10:51:05 +0000 (UTC)
Received: from CO1EHSMHS003.bigfish.com (unknown [10.243.78.232])	by
	mail187-co1.bigfish.com (Postfix) with ESMTP id 3FC6B6C0044;
	Fri, 10 Aug 2012 10:51:05 +0000 (UTC)
Received: from ausb3twp01.amd.com (163.181.249.108) by
	CO1EHSMHS003.bigfish.com (10.243.66.13) with Microsoft SMTP Server id
	14.1.225.23; Fri, 10 Aug 2012 10:51:05 +0000
X-WSS-ID: 0M8JCT2-01-1MG-02
X-M-MSG: 
Received: from sausexedgep02.amd.com (sausexedgep02-ext.amd.com
	[163.181.249.73])	(using TLSv1 with cipher AES128-SHA (128/128
	bits))	(No
	client certificate requested)	by ausb3twp01.amd.com (Axway MailGate
	3.8.1)
	with ESMTP id 2409C1028035;	Fri, 10 Aug 2012 05:51:01 -0500 (CDT)
Received: from SAUSEXDAG01.amd.com (163.181.55.1) by sausexedgep02.amd.com
	(163.181.36.59) with Microsoft SMTP Server (TLS) id 8.3.192.1;
	Fri, 10 Aug 2012 05:51:21 -0500
Received: from storexhtp01.amd.com (172.24.4.3) by sausexdag01.amd.com
	(163.181.55.1) with Microsoft SMTP Server (TLS) id 14.1.323.3;
	Fri, 10 Aug 2012 05:51:01 -0500
Received: from gwo.osrc.amd.com (165.204.16.204) by storexhtp01.amd.com
	(172.24.4.3) with Microsoft SMTP Server id 8.3.213.0; Fri, 10 Aug 2012
	06:51:00 -0400
Received: from [165.204.15.57] (gran.osrc.amd.com [165.204.15.57])	by
	gwo.osrc.amd.com (Postfix) with ESMTP id 6A4E849C20C; Fri, 10 Aug 2012
	11:50:59 +0100 (BST)
Message-ID: <5024E788.80300@amd.com>
Date: Fri, 10 Aug 2012 12:50:48 +0200
From: Wei Wang <wei.wang2@amd.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:12.0) Gecko/20120421 Thunderbird/12.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <8deb7c7a25c4a3bc50d7.1344446255@REDBLD-XS.ad.xensource.com>
	<5023822E0200007800093D2A@nat28.tlf.novell.com>
In-Reply-To: <5023822E0200007800093D2A@nat28.tlf.novell.com>
X-OriginatorOrg: amd.com
Cc: tim@xen.org, xiantao.zhang@intel.com,
	Santosh Jodh <santosh.jodh@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] dump_p2m_table: For IOMMU
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/09/2012 09:26 AM, Jan Beulich wrote:

> Wei - here I'm particularly worried about the use of "level - 1"
> instead of "next_level", which would similarly apply to the
> original function. If the way this is currently done is okay, then
> why is next_level being computed in the first place?

I think the recalculation is there to guarantee that this recursive 
function returns: it runs at most "paging_mode" times no matter what 
"next_level" says. But if we can assume that the next-level field in 
every PDE is correct, then using next_level is fine with me.

> (And similar
> to the issue Santosh has already fixed here - the original
> function pointlessly maps/unmaps the page when "level <= 1".
> Furthermore, iommu_map.c has nice helper functions
> iommu_next_level() and amd_iommu_is_pte_present() - why
> aren't they in a header instead, so they could be used here,
> avoiding the open coding of them?)

Maybe those helpers appeared after the original function. I could send a 
patch to clean these up:
* do not map/unmap if level <= 1
* move amd_iommu_is_pte_present() and iommu_next_level() to a header 
file, and use them in deallocate_next_page_table
* use next_level instead of recalculating (if requested)

Thanks,
Wei

>> +        }
>> +
>> +        if ( present )
>> +        {
>> +            printk("gfn: %016"PRIx64"  mfn: %016"PRIx64"\n",
>> +                   address >> PAGE_SHIFT, next_table_maddr >> PAGE_SHIFT);
>
> I'd prefer you to use PFN_DOWN() here.
>
> Also, depth first, as requested by Tim, to me doesn't mean
> recursing before printing. I think you really want to print first,
> then recurse. Otherwise how would the output be made sense
> of?
>
>> +        }
>> +    }
>> +
>> +    unmap_domain_page(table_vaddr);
>> +}
>> ...
>> --- a/xen/drivers/passthrough/iommu.c	Tue Aug 07 18:37:31 2012 +0100
>> +++ b/xen/drivers/passthrough/iommu.c	Wed Aug 08 09:56:50 2012 -0700
>> @@ -54,6 +55,8 @@ bool_t __read_mostly amd_iommu_perdev_in
>>
>>   DEFINE_PER_CPU(bool_t, iommu_dont_flush_iotlb);
>>
>> +void setup_iommu_dump(void);
>> +
>
> This is completely bogus. If the function was called from another
> source file, the declaration would belong into a header file. Since
> it's only used here, it ought to be static.
>
>>   static void __init parse_iommu_param(char *s)
>>   {
>>       char *ss;
>> @@ -119,6 +122,7 @@ void __init iommu_dom0_init(struct domai
>>       if ( !iommu_enabled )
>>           return;
>>
>> +    setup_iommu_dump();
>>       d->need_iommu = !!iommu_dom0_strict;
>>       if ( need_iommu(d) )
>>       {
>> ...
>> +void __init setup_iommu_dump(void)
>> +{
>> +    register_keyhandler('o',&iommu_p2m_table);
>> +}
>
> Furthermore, there's no real need for a separate function here
> anyway. Just call register_key_handler() directly. Or
> alternatively this ought to match other code doing the same -
> using an initcall.
>
>> --- a/xen/drivers/passthrough/vtd/iommu.c	Tue Aug 07 18:37:31 2012 +0100
>> +++ b/xen/drivers/passthrough/vtd/iommu.c	Wed Aug 08 09:56:50 2012 -0700
>> +static void vtd_dump_p2m_table_level(u64 pt_maddr, int level, u64 gpa)
>> +{
>> +    u64 address;
>
> Again, both gpa and address ought to be paddr_t, and the format
> specifiers should match.
>
>> +    int i;
>> +    struct dma_pte *pt_vaddr, *pte;
>> +    int next_level;
>> +
>> +    if ( pt_maddr == 0 )
>> +        return;
>> +
>> +    pt_vaddr = (struct dma_pte *)map_vtd_domain_page(pt_maddr);
>
> Pointless cast.
>
>> +    if ( pt_vaddr == NULL )
>> +    {
>> +        printk("Failed to map VT-D domain page %016"PRIx64"\n", pt_maddr);
>> +        return;
>> +    }
>> +
>> +    next_level = level - 1;
>> +    for ( i = 0; i < PTE_NUM; i++ )
>> +    {
>> +        if ( !(i % 2) )
>> +            process_pending_softirqs();
>> +
>> +        pte = &pt_vaddr[i];
>> +        if ( !dma_pte_present(*pte) )
>> +            continue;
>> +
>> +        address = gpa + offset_level_address(i, level);
>> +        if ( next_level >= 1 )
>> +            vtd_dump_p2m_table_level(dma_pte_addr(*pte), next_level, address);
>> +
>> +        if ( level == 1 )
>> +            printk("gfn: %016"PRIx64" mfn: %016"PRIx64" superpage=%d\n",
>> +                    address >> PAGE_SHIFT_4K, pte->val >> PAGE_SHIFT_4K, dma_pte_superpage(*pte) ? 1 : 0);
>
> Why do you print leaf (level 1) tables here only?
>
> And the last line certainly is above 80 chars, so needs breaking up.
>
> (Also, just to avoid you needing to do another iteration: Don't
> switch to PFN_DOWN() here.)
>
> I further wonder whether "superpage" alone is enough - don't we
> have both 2M and 1G pages? Of course, that would become moot
> if higher levels also got dumped (as then this knowledge is implicit).
>
> Which reminds me to ask that both here and in the AMD code the
> recursion level should probably be reflected by indenting the
> printed strings.
>
> Jan
>



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	Fri, 10 Aug 2012 05:51:21 -0500
Received: from storexhtp01.amd.com (172.24.4.3) by sausexdag01.amd.com
	(163.181.55.1) with Microsoft SMTP Server (TLS) id 14.1.323.3;
	Fri, 10 Aug 2012 05:51:01 -0500
Received: from gwo.osrc.amd.com (165.204.16.204) by storexhtp01.amd.com
	(172.24.4.3) with Microsoft SMTP Server id 8.3.213.0; Fri, 10 Aug 2012
	06:51:00 -0400
Received: from [165.204.15.57] (gran.osrc.amd.com [165.204.15.57])	by
	gwo.osrc.amd.com (Postfix) with ESMTP id 6A4E849C20C; Fri, 10 Aug 2012
	11:50:59 +0100 (BST)
Message-ID: <5024E788.80300@amd.com>
Date: Fri, 10 Aug 2012 12:50:48 +0200
From: Wei Wang <wei.wang2@amd.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:12.0) Gecko/20120421 Thunderbird/12.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <8deb7c7a25c4a3bc50d7.1344446255@REDBLD-XS.ad.xensource.com>
	<5023822E0200007800093D2A@nat28.tlf.novell.com>
In-Reply-To: <5023822E0200007800093D2A@nat28.tlf.novell.com>
X-OriginatorOrg: amd.com
Cc: tim@xen.org, xiantao.zhang@intel.com,
	Santosh Jodh <santosh.jodh@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] dump_p2m_table: For IOMMU
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/09/2012 09:26 AM, Jan Beulich wrote:

> Wei - here I'm particularly worried about the use of "level - 1"
> instead of "next_level", which would similarly apply to the
> original function. If the way this is currently done is okay, then
> why is next_level being computed in the first place?

I think the recalculation is there to guarantee that this recursive
function terminates: it runs at most "paging_mode" times no matter what
"next_level" says. But if we can assume that the next-level field in
every pde is correct, then using next_level is fine with me.

(And similar
> to the issue Santosh has already fixed here - the original
> function pointlessly maps/unmaps the page when "level <= 1".
> Furthermore, iommu_map.c has nice helper functions
> iommu_next_level() and amd_iommu_is_pte_present() - why
> aren't they in a header instead, so they could be used here,
> avoiding the open coding of them?)

Maybe those helpers appeared after the original function was written. I
could send a patch to clean these up:
* do not map/unmap if level <= 1
* move amd_iommu_is_pte_present() and iommu_next_level() to a header
file and use them in deallocate_next_page_table()
* use next_level instead of recalculating it (if requested)

Thanks,
Wei

>> +        }
>> +
>> +        if ( present )
>> +        {
>> +            printk("gfn: %016"PRIx64"  mfn: %016"PRIx64"\n",
>> +                   address >> PAGE_SHIFT, next_table_maddr >> PAGE_SHIFT);
>
> I'd prefer you to use PFN_DOWN() here.
>
> Also, depth first, as requested by Tim, to me doesn't mean
> recursing before printing. I think you really want to print first,
> then recurse. Otherwise how would the output be made sense
> of?
>
>> +        }
>> +    }
>> +
>> +    unmap_domain_page(table_vaddr);
>> +}
>> ...
>> --- a/xen/drivers/passthrough/iommu.c	Tue Aug 07 18:37:31 2012 +0100
>> +++ b/xen/drivers/passthrough/iommu.c	Wed Aug 08 09:56:50 2012 -0700
>> @@ -54,6 +55,8 @@ bool_t __read_mostly amd_iommu_perdev_in
>>
>>   DEFINE_PER_CPU(bool_t, iommu_dont_flush_iotlb);
>>
>> +void setup_iommu_dump(void);
>> +
>
> This is completely bogus. If the function was called from another
> source file, the declaration would belong into a header file. Since
> it's only used here, it ought to be static.
>
>>   static void __init parse_iommu_param(char *s)
>>   {
>>       char *ss;
>> @@ -119,6 +122,7 @@ void __init iommu_dom0_init(struct domai
>>       if ( !iommu_enabled )
>>           return;
>>
>> +    setup_iommu_dump();
>>       d->need_iommu = !!iommu_dom0_strict;
>>       if ( need_iommu(d) )
>>       {
>> ...
>> +void __init setup_iommu_dump(void)
>> +{
>> +    register_keyhandler('o', &iommu_p2m_table);
>> +}
>
> Furthermore, there's no real need for a separate function here
> anyway. Just call register_key_handler() directly. Or
> alternatively this ought to match other code doing the same -
> using an initcall.
>
>> --- a/xen/drivers/passthrough/vtd/iommu.c	Tue Aug 07 18:37:31 2012 +0100
>> +++ b/xen/drivers/passthrough/vtd/iommu.c	Wed Aug 08 09:56:50 2012 -0700
>> +static void vtd_dump_p2m_table_level(u64 pt_maddr, int level, u64 gpa)
>> +{
>> +    u64 address;
>
> Again, both gpa and address ought to be paddr_t, and the format
> specifiers should match.
>
>> +    int i;
>> +    struct dma_pte *pt_vaddr, *pte;
>> +    int next_level;
>> +
>> +    if ( pt_maddr == 0 )
>> +        return;
>> +
>> +    pt_vaddr = (struct dma_pte *)map_vtd_domain_page(pt_maddr);
>
> Pointless cast.
>
>> +    if ( pt_vaddr == NULL )
>> +    {
>> +        printk("Failed to map VT-D domain page %016"PRIx64"\n", pt_maddr);
>> +        return;
>> +    }
>> +
>> +    next_level = level - 1;
>> +    for ( i = 0; i < PTE_NUM; i++ )
>> +    {
>> +        if ( !(i % 2) )
>> +            process_pending_softirqs();
>> +
>> +        pte = &pt_vaddr[i];
>> +        if ( !dma_pte_present(*pte) )
>> +            continue;
>> +
>> +        address = gpa + offset_level_address(i, level);
>> +        if ( next_level >= 1 )
>> +            vtd_dump_p2m_table_level(dma_pte_addr(*pte), next_level, address);
>> +
>> +        if ( level == 1 )
>> +            printk("gfn: %016"PRIx64" mfn: %016"PRIx64" superpage=%d\n",
>> +                    address >> PAGE_SHIFT_4K, pte->val >> PAGE_SHIFT_4K, dma_pte_superpage(*pte) ? 1 : 0);
>
> Why do you print leaf (level 1) tables here only?
>
> And the last line certainly is above 80 chars, so needs breaking up.
>
> (Also, just to avoid you needing to do another iteration: Don't
> switch to PFN_DOWN() here.)
>
> I further wonder whether "superpage" alone is enough - don't we
> have both 2M and 1G pages? Of course, that would become moot
> if higher levels got also dumped (as then this knowledge is implicit).
>
> Which reminds me to ask that both here and in the AMD code the
> recursion level should probably be reflected by indenting the
> printed strings.
>
> Jan
>



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 11:03:22 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 11:03:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Szmzb-0002MZ-7G; Fri, 10 Aug 2012 11:02:59 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <thanos.makatos@citrix.com>) id 1SzmzZ-0002LW-Bl
	for xen-devel@lists.xen.org; Fri, 10 Aug 2012 11:02:57 +0000
Received: from [85.158.143.35:51840] by server-2.bemta-4.messagelabs.com id
	A6/92-19021-06AE4205; Fri, 10 Aug 2012 11:02:56 +0000
X-Env-Sender: thanos.makatos@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1344596576!14640711!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg4NDM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19566 invoked from network); 10 Aug 2012 11:02:56 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Aug 2012 11:02:56 -0000
X-IronPort-AV: E=Sophos;i="4.77,745,1336348800"; d="scan'208";a="13950495"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Aug 2012 11:02:56 +0000
Received: from LONPMAILBOX01.citrite.net ([10.30.224.160]) by
	LONPMAILMX01.citrite.net ([10.30.203.162]) with mapi; Fri, 10 Aug 2012
	12:02:56 +0100
From: Thanos Makatos <thanos.makatos@citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Fri, 10 Aug 2012 12:02:54 +0100
Thread-Topic: [Xen-devel] RFC: blktap3
Thread-Index: Ac12Wsgrrb8rpKAgTL6EQAKNCtNtpgAjE2Xg
Message-ID: <4B45B535F7F6BE4CB1C044ED5115CDDE0107F6C1D55E@LONPMAILBOX01.citrite.net>
References: <4B45B535F7F6BE4CB1C044ED5115CDDE0107F6C1D44C@LONPMAILBOX01.citrite.net>
	<20120809180433.GA14457@phenom.dumpdata.com>
In-Reply-To: <20120809180433.GA14457@phenom.dumpdata.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] RFC: blktap3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Konrad,

I'm not sure I understand your question. Blktap3 lives in tools/blktap3. The component that allows tapdisk3 to talk directly to blkfront is xenio; it lives in blktap3/tools/xenio. Since tapdisk3 can talk to blkfront via xenio, it doesn't interact with the blkback/blktap kernel drivers.

-----Original Message-----
From: Konrad Rzeszutek Wilk [mailto:konrad.wilk@oracle.com] 
Sent: 09 August 2012 19:05
To: Thanos Makatos
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] RFC: blktap3

On Thu, Aug 09, 2012 at 03:03:06PM +0100, Thanos Makatos wrote:
> Hi,
> 
> I'd like to introduce blktap3: essentially blktap2 without the need of blkback. This has been developed by Santosh Jodh, and I'll maintain it.
> 

So where is the source of this driver located?
> In this patch, blktap2 binaries are suffixed with "2", so it's not yet possible to use it along with blktap3.
> 
> An example configuration file I used is the following:
> name = "debian bktap3 without pygrub"
> memory = 256
> disk = ['backendtype=xenio,format=vhd,vdev=xvda,access=rw,target=/root/debian-blktap3.vhd']
> kernel = "vmlinuz-2.6.32-5-amd64"
> root = '/dev/xvda1'
> ramdisk = "initrd.img-2.6.32-5-amd64"
> cpu_weight=256
> vif=['bridge=xenbr0']
> 
> Before starting any blktap3 VM, the xenio daemon must be started.
> 
> I've tested it on change set 472fc515a463 without pygrub.
> 
> Any comments are welcome :)
> 
> --
> Thanos Makatos
> 


> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 11:06:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 11:06:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Szn2e-0002Vi-QH; Fri, 10 Aug 2012 11:06:08 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <thanos.makatos@citrix.com>) id 1Szn2d-0002Va-5e
	for xen-devel@lists.xen.org; Fri, 10 Aug 2012 11:06:07 +0000
Received: from [85.158.143.35:10313] by server-2.bemta-4.messagelabs.com id
	B6/78-19021-E1BE4205; Fri, 10 Aug 2012 11:06:06 +0000
X-Env-Sender: thanos.makatos@citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1344596762!4944186!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg4NDM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28019 invoked from network); 10 Aug 2012 11:06:03 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Aug 2012 11:06:03 -0000
X-IronPort-AV: E=Sophos;i="4.77,745,1336348800"; d="scan'208";a="13950579"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Aug 2012 11:06:01 +0000
Received: from LONPMAILBOX01.citrite.net ([10.30.224.160]) by
	LONPMAILMX01.citrite.net ([10.30.203.162]) with mapi; Fri, 10 Aug 2012
	12:06:02 +0100
From: Thanos Makatos <thanos.makatos@citrix.com>
To: Goncalo Gomes <Goncalo.Gomes@eu.citrix.com>
Date: Fri, 10 Aug 2012 12:06:01 +0100
Thread-Topic: [Xen-devel] RFC: blktap3
Thread-Index: Ac12eDxN8J1ddypFSJWtYdsap1Q5rwAb6FDQ
Message-ID: <4B45B535F7F6BE4CB1C044ED5115CDDE0107F6C1D561@LONPMAILBOX01.citrite.net>
References: <4B45B535F7F6BE4CB1C044ED5115CDDE0107F6C1D44C@LONPMAILBOX01.citrite.net>
	<20120809213933.GA9099@eire.uk.xensource.com>
In-Reply-To: <20120809213933.GA9099@eire.uk.xensource.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] RFC: blktap3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I haven't yet looked at the code in depth, but I suppose blktap3 can, in some cases, be faster than blktap2 since the dom0 kernel doesn't participate in the data path. At least that's what I'd expect
;-).

-----Original Message-----
From: Goncalo Gomes 
Sent: 09 August 2012 22:40
To: Thanos Makatos
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] RFC: blktap3

On Thu, 09 Aug 2012, Thanos Makatos wrote:

> Hi,
> 
> I’d like to introduce blktap3: essentially blktap2 without the need of blkback. This has been developed by Santosh Jodh, and I’ll maintain it.

Welcome! precisely, xenio (aka blktap3) was developed by Daniel Stodden and recently continued/improved by Santosh Jodh :-)

As I have a slight interest in this area, I was wondering what are the main improvements over blktap2?

Goncalo

> In this patch, blktap2 binaries are suffixed with “2”, so it’s not yet possible to use it along with blktap3.
> 
> An example configuration file I used is the following:
> name = "debian bktap3 without pygrub"
> memory = 256
> disk = 
> ['backendtype=xenio,format=vhd,vdev=xvda,access=rw,target=/root/debian
> -blktap3.vhd']
> kernel = "vmlinuz-2.6.32-5-amd64"
> root = '/dev/xvda1'
> ramdisk = "initrd.img-2.6.32-5-amd64"
> cpu_weight=256
> vif=['bridge=xenbr0']
> 
> Before starting any blktap3 VM, the xenio daemon must be started.
> 
> I’ve tested it on change set 472fc515a463 without pygrub.
> 
> Any comments are welcome :)
> 
> --
> Thanos Makatos
> 


> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 11:17:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 11:17:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SznDm-0002iP-13; Fri, 10 Aug 2012 11:17:38 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SznDl-0002iK-7T
	for xen-devel@lists.xen.org; Fri, 10 Aug 2012 11:17:37 +0000
Received: from [85.158.139.83:59166] by server-5.bemta-5.messagelabs.com id
	2F/B4-03096-0DDE4205; Fri, 10 Aug 2012 11:17:36 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-13.tower-182.messagelabs.com!1344597455!26908955!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg4NDM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13447 invoked from network); 10 Aug 2012 11:17:35 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-13.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Aug 2012 11:17:35 -0000
X-IronPort-AV: E=Sophos;i="4.77,745,1336348800"; d="scan'208";a="13950767"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Aug 2012 11:17:35 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 10 Aug 2012 12:17:35 +0100
Date: Fri, 10 Aug 2012 12:17:10 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Thanos Makatos <thanos.makatos@citrix.com>
In-Reply-To: <4B45B535F7F6BE4CB1C044ED5115CDDE0107F6C1D55E@LONPMAILBOX01.citrite.net>
Message-ID: <alpine.DEB.2.02.1208101216490.21096@kaball.uk.xensource.com>
References: <4B45B535F7F6BE4CB1C044ED5115CDDE0107F6C1D44C@LONPMAILBOX01.citrite.net>
	<20120809180433.GA14457@phenom.dumpdata.com>
	<4B45B535F7F6BE4CB1C044ED5115CDDE0107F6C1D55E@LONPMAILBOX01.citrite.net>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] RFC: blktap3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 10 Aug 2012, Thanos Makatos wrote:
> Hi Konrad,
> 
> I'm not sure I understand your question. Blktap3 lives in tools/blktap3. The component that allows tapdisk3 to talk directly to blkfront is xenio; it lives in blktap3/tools/xenio. Since tapdisk3 can talk to blkfront via xenio, it doesn't interact with the blkback/blktap kernel drivers.

Konrad, Blktap3 is purely userspace ;-)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 11:30:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 11:30:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SznPz-0002sq-9x; Fri, 10 Aug 2012 11:30:15 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <thanos.makatos@citrix.com>) id 1SznPx-0002si-W9
	for xen-devel@lists.xen.org; Fri, 10 Aug 2012 11:30:14 +0000
X-Env-Sender: thanos.makatos@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1344598206!1699250!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg4NDM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4288 invoked from network); 10 Aug 2012 11:30:07 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Aug 2012 11:30:07 -0000
X-IronPort-AV: E=Sophos;i="4.77,745,1336348800"; d="scan'208,217";a="13950973"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Aug 2012 11:30:06 +0000
Received: from LONPMAILBOX01.citrite.net ([10.30.224.160]) by
	LONPMAILMX01.citrite.net ([10.30.203.162]) with mapi; Fri, 10 Aug 2012
	12:30:06 +0100
From: Thanos Makatos <thanos.makatos@citrix.com>
To: Thanos Makatos <thanos.makatos@citrix.com>, "xen-devel@lists.xen.org"
	<xen-devel@lists.xen.org>
Date: Fri, 10 Aug 2012 12:30:04 +0100
Thread-Topic: RFC: blktap3
Thread-Index: Ac12N7M32j6sBcjRSBOZA6AeyHi9wgAsn+Ow
Message-ID: <4B45B535F7F6BE4CB1C044ED5115CDDE0107F6C1D56C@LONPMAILBOX01.citrite.net>
References: <4B45B535F7F6BE4CB1C044ED5115CDDE0107F6C1D44C@LONPMAILBOX01.citrite.net>
In-Reply-To: <4B45B535F7F6BE4CB1C044ED5115CDDE0107F6C1D44C@LONPMAILBOX01.citrite.net>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
MIME-Version: 1.0
Subject: Re: [Xen-devel] RFC: blktap3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7870953111948737229=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7870953111948737229==
Content-Language: en-US
Content-Type: multipart/alternative;
	boundary="_000_4B45B535F7F6BE4CB1C044ED5115CDDE0107F6C1D56CLONPMAILBOX_"

--_000_4B45B535F7F6BE4CB1C044ED5115CDDE0107F6C1D56CLONPMAILBOX_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable

I got some (internal) feedback: the best thing to do is to break blktap3
into individual patches to make it easier for people to review, which makes
perfect sense. Also, it's been suggested that documenting it would help.
I'll start by sending patches that affect the existing code in order to
minimize rebasing.

From: xen-devel-bounces@lists.xen.org [mailto:xen-devel-bounces@lists.xen.o=
rg] On Behalf Of Thanos Makatos
Sent: 09 August 2012 15:03
To: xen-devel@lists.xen.org
Subject: [Xen-devel] RFC: blktap3

Hi,

I'd like to introduce blktap3: essentially blktap2 without the need of blkb=
ack. This has been developed by Santosh Jodh, and I'll maintain it.

In this patch, blktap2 binaries are suffixed with "2", so it's not yet poss=
ible to use it along with blktap3.

An example configuration file I used is the following:
name =3D "debian blktap3 without pygrub"
memory =3D 256
disk =3D ['backendtype=3Dxenio,format=3Dvhd,vdev=3Dxvda,access=3Drw,target=
=3D/root/debian-blktap3.vhd']
kernel =3D "vmlinuz-2.6.32-5-amd64"
root =3D '/dev/xvda1'
ramdisk =3D "initrd.img-2.6.32-5-amd64"
cpu_weight=3D256
vif=3D['bridge=3Dxenbr0']

Before starting any blktap3 VM, the xenio daemon must be started.

I've tested it on change set 472fc515a463 without pygrub.

Any comments are welcome :)

--
Thanos Makatos


--_000_4B45B535F7F6BE4CB1C044ED5115CDDE0107F6C1D56CLONPMAILBOX_
Content-Type: text/html; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable

<html xmlns:v=3D"urn:schemas-microsoft-com:vml" xmlns:o=3D"urn:schemas-micr=
osoft-com:office:office" xmlns:w=3D"urn:schemas-microsoft-com:office:word" =
xmlns:m=3D"http://schemas.microsoft.com/office/2004/12/omml" xmlns=3D"http:=
//www.w3.org/TR/REC-html40"><head><meta http-equiv=3DContent-Type content=
=3D"text/html; charset=3Dus-ascii"><meta name=3DGenerator content=3D"Micros=
oft Word 12 (filtered medium)"><style><!--
/* Font Definitions */
@font-face
	{font-family:Wingdings;
	panose-1:5 0 0 0 0 0 0 0 0 0;}
@font-face
	{font-family:"Cambria Math";
	panose-1:2 4 5 3 5 4 6 3 2 4;}
@font-face
	{font-family:Calibri;
	panose-1:2 15 5 2 2 2 4 3 2 4;}
@font-face
	{font-family:Tahoma;
	panose-1:2 11 6 4 3 5 4 4 2 4;}
@font-face
	{font-family:Consolas;
	panose-1:2 11 6 9 2 2 4 3 2 4;}
/* Style Definitions */
p.MsoNormal, li.MsoNormal, div.MsoNormal
	{margin:0cm;
	margin-bottom:.0001pt;
	font-size:11.0pt;
	font-family:"Calibri","sans-serif";}
a:link, span.MsoHyperlink
	{mso-style-priority:99;
	color:blue;
	text-decoration:underline;}
a:visited, span.MsoHyperlinkFollowed
	{mso-style-priority:99;
	color:purple;
	text-decoration:underline;}
span.EmailStyle17
	{mso-style-type:personal;
	font-family:"Calibri","sans-serif";
	color:windowtext;}
span.EmailStyle18
	{mso-style-type:personal-reply;
	font-family:"Calibri","sans-serif";
	color:#1F497D;}
.MsoChpDefault
	{mso-style-type:export-only;
	font-size:10.0pt;}
@page WordSection1
	{size:612.0pt 792.0pt;
	margin:72.0pt 72.0pt 72.0pt 72.0pt;}
div.WordSection1
	{page:WordSection1;}
--></style><!--[if gte mso 9]><xml>
<o:shapedefaults v:ext=3D"edit" spidmax=3D"1026" />
</xml><![endif]--><!--[if gte mso 9]><xml>
<o:shapelayout v:ext=3D"edit">
<o:idmap v:ext=3D"edit" data=3D"1" />
</o:shapelayout></xml><![endif]--></head><body lang=3DEN-GB link=3Dblue vli=
nk=3Dpurple><div class=3DWordSection1><p class=3DMsoNormal><span style=3D'c=
olor:#1F497D'>I got some (internal) feedback: the best thing to do is to br=
eak blktap3 in individual patches to make it easier for people to review it=
, which makes perfect sense. Also, it&#8217;s been suggested that documenti=
ng it would help. I&#8217;ll start by sending patches that affect the existi=
ng code in order to minimize rebasing.<o:p></o:p></span></p><p class=3DMsoN=
ormal><span style=3D'color:#1F497D'><o:p>&nbsp;</o:p></span></p><div><div s=
tyle=3D'border:none;border-top:solid #B5C4DF 1.0pt;padding:3.0pt 0cm 0cm 0c=
m'><p class=3DMsoNormal><b><span lang=3DEN-US style=3D'font-size:10.0pt;fon=
t-family:"Tahoma","sans-serif"'>From:</span></b><span lang=3DEN-US style=3D=
'font-size:10.0pt;font-family:"Tahoma","sans-serif"'> xen-devel-bounces@lis=
ts.xen.org [mailto:xen-devel-bounces@lists.xen.org] <b>On Behalf Of </b>Tha=
nos Makatos<br><b>Sent:</b> 09 August 2012 15:03<br><b>To:</b> xen-devel@li=
sts.xen.org<br><b>Subject:</b> [Xen-devel] RFC: blktap3<o:p></o:p></span></=
p></div></div><p class=3DMsoNormal><o:p>&nbsp;</o:p></p><p class=3DMsoNorma=
l>Hi,<o:p></o:p></p><p class=3DMsoNormal><o:p>&nbsp;</o:p></p><p class=3DMs=
oNormal>I&#8217;d like to introduce blktap3: essentially blktap2 without th=
e need of blkback. This has been developed by Santosh Jodh, and I&#8217;ll =
maintain it.<o:p></o:p></p><p class=3DMsoNormal><o:p>&nbsp;</o:p></p><p cla=
ss=3DMsoNormal>In this patch, blktap2 binaries are suffixed with &#8220;2&#=
8221;, so it&#8217;s not yet possible to use it along with blktap3.<o:p></o=
:p></p><p class=3DMsoNormal><o:p>&nbsp;</o:p></p><p class=3DMsoNormal>An ex=
ample configuration file I used is the following:<o:p></o:p></p><p class=3D=
MsoNormal><span style=3D'font-family:Consolas'>name =3D &quot;debian bktap3=
 without pygrub&quot;<o:p></o:p></span></p><p class=3DMsoNormal><span style=
=3D'font-family:Consolas'>memory =3D 256<o:p></o:p></span></p><p class=3DMs=
oNormal><span style=3D'font-family:Consolas'>disk =3D ['backendtype=3Dxenio=
,format=3Dvhd,vdev=3Dxvda,access=3Drw,target=3D/root/debian-blktap3.vhd']<o=
:p></o:p></span></p><p class=3DMsoNormal><span style=3D'font-family:Consola=
s'>kernel =3D &quot;vmlinuz-2.6.32-5-amd64&quot;<o:p></o:p></span></p><p cl=
ass=3DMsoNormal><span style=3D'font-family:Consolas'>root =3D '/dev/xvda1'<=
o:p></o:p></span></p><p class=3DMsoNormal><span style=3D'font-family:Consol=
as'>ramdisk =3D &quot;initrd.img-2.6.32-5-amd64&quot;<o:p></o:p></span></p>=
<p class=3DMsoNormal><span style=3D'font-family:Consolas'>cpu_weight=3D256<=
o:p></o:p></span></p><p class=3DMsoNormal><span style=3D'font-family:Consol=
as'>vif=3D['bridge=3Dxenbr0']<o:p></o:p></span></p><p class=3DMsoNormal><sp=
an style=3D'font-family:Consolas'><o:p>&nbsp;</o:p></span></p><p class=3DMs=
oNormal>Before starting any blktap3 VM, the <span style=3D'font-family:Cons=
olas'>xenio</span> daemon must be started.<o:p></o:p></p><p class=3DMsoNorm=
al><o:p>&nbsp;</o:p></p><p class=3DMsoNormal>I&#8217;ve tested it on change=
 set 472fc515a463 without pygrub.<o:p></o:p></p><p class=3DMsoNormal><o:p>&=
nbsp;</o:p></p><p class=3DMsoNormal>Any comments are welcome <span style=3D=
'font-family:Wingdings'>J</span><o:p></o:p></p><p class=3DMsoNormal><o:p>&n=
bsp;</o:p></p><p class=3DMsoNormal>--<o:p></o:p></p><p class=3DMsoNormal>Th=
anos Makatos<o:p></o:p></p><p class=3DMsoNormal><o:p>&nbsp;</o:p></p></div>=
</body></html>=

--_000_4B45B535F7F6BE4CB1C044ED5115CDDE0107F6C1D56CLONPMAILBOX_--


--===============7870953111948737229==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7870953111948737229==--


From xen-devel-bounces@lists.xen.org Fri Aug 10 11:49:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 11:49:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzniJ-00038f-B9; Fri, 10 Aug 2012 11:49:11 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lars.kurth.xen@gmail.com>) id 1SzniI-00038a-95
	for xen-devel@lists.xen.org; Fri, 10 Aug 2012 11:49:10 +0000
Received: from [85.158.143.35:60495] by server-2.bemta-4.messagelabs.com id
	E3/D6-19021-535F4205; Fri, 10 Aug 2012 11:49:09 +0000
X-Env-Sender: lars.kurth.xen@gmail.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1344599348!13349209!1
X-Originating-IP: [209.85.215.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20469 invoked from network); 10 Aug 2012 11:49:09 -0000
Received: from mail-ey0-f173.google.com (HELO mail-ey0-f173.google.com)
	(209.85.215.173)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Aug 2012 11:49:09 -0000
Received: by eaac13 with SMTP id c13so418032eaa.32
	for <xen-devel@lists.xen.org>; Fri, 10 Aug 2012 04:49:08 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:date:from:reply-to:user-agent:mime-version:to
	:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=78Y6S690pC3I0T+BK2znj5qdNHD4OiDcYCIl0qHlErU=;
	b=DMamdca6KZEmsGpvKBD33qzehhopVLkKV0w6azVT7yF7heYbuTqLoEG8F2Jy+A/h9N
	G3BchZ8P4G2It6bBBKQYOC7M4IbSan3+yR+IT2W6/gv0tWDkJa5DR+0yPPRNCh1yTvLk
	a0b8d/+UXfsPTl3OmBSr6DxypnP4fGUNz7SOJDu5bkU/TwEoJnKVV8x2kPy7rHoqYcFC
	Sol4IHPBYNX9cjNwTRhuTkCIRm/bMM+ZXZs67zEBK9OKZY2tg3rcT3HKgk3cOxOxKhEo
	BJhpUbZWd4pNthUK98ouk42s2lXTTwJPyLfqE670SIEvkC1AIVkHkPrXJ47sVL0VYU9t
	PESw==
Received: by 10.14.5.78 with SMTP id 54mr3036071eek.1.1344599348610;
	Fri, 10 Aug 2012 04:49:08 -0700 (PDT)
Received: from [172.16.26.11] (b0fb6d57.bb.sky.com. [176.251.109.87])
	by mx.google.com with ESMTPS id u47sm10870834eeo.9.2012.08.10.04.49.06
	(version=SSLv3 cipher=OTHER); Fri, 10 Aug 2012 04:49:07 -0700 (PDT)
Message-ID: <5024F531.20509@xen.org>
Date: Fri, 10 Aug 2012 12:49:05 +0100
From: Lars Kurth <lars.kurth@xen.org>
User-Agent: Mozilla/5.0 (Windows NT 6.1;
	rv:14.0) Gecko/20120713 Thunderbird/14.0
MIME-Version: 1.0
To: xen-devel@lists.xen.org
References: <CC31DDD9.464E7%keir@xen.org>
In-Reply-To: <CC31DDD9.464E7%keir@xen.org>
Subject: Re: [Xen-devel] [ANNOUNCE] Third release candidates for 4.0.4 and
 4.1.3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: lars.kurth@xen.org
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Keir,
do we have a list of what has gone into these (worth highlighting)? I 
can't see any CVE or XSA numbers, and the check-in descriptions don't 
tell me an awful lot.
Lars

On 22/07/2012 16:42, Keir Fraser wrote:
> Folks,
>
> I have just tagged 3rd release candidates for 4.0.4 and 4.1.3:
>
> http://xenbits.xen.org/staging/xen-4.0-testing.hg (tag 4.0.4-rc3)
> http://xenbits.xen.org/staging/xen-4.1-testing.hg (tag 4.1.3-rc3)
>
> Please test! Assuming nothing crops up, we will make these the official
> point releases at the end of this month.
>
>   -- Keir
>
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 11:49:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 11:49:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzniJ-00038f-B9; Fri, 10 Aug 2012 11:49:11 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lars.kurth.xen@gmail.com>) id 1SzniI-00038a-95
	for xen-devel@lists.xen.org; Fri, 10 Aug 2012 11:49:10 +0000
Received: from [85.158.143.35:60495] by server-2.bemta-4.messagelabs.com id
	E3/D6-19021-535F4205; Fri, 10 Aug 2012 11:49:09 +0000
X-Env-Sender: lars.kurth.xen@gmail.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1344599348!13349209!1
X-Originating-IP: [209.85.215.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20469 invoked from network); 10 Aug 2012 11:49:09 -0000
Received: from mail-ey0-f173.google.com (HELO mail-ey0-f173.google.com)
	(209.85.215.173)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Aug 2012 11:49:09 -0000
Received: by eaac13 with SMTP id c13so418032eaa.32
	for <xen-devel@lists.xen.org>; Fri, 10 Aug 2012 04:49:08 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:date:from:reply-to:user-agent:mime-version:to
	:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=78Y6S690pC3I0T+BK2znj5qdNHD4OiDcYCIl0qHlErU=;
	b=DMamdca6KZEmsGpvKBD33qzehhopVLkKV0w6azVT7yF7heYbuTqLoEG8F2Jy+A/h9N
	G3BchZ8P4G2It6bBBKQYOC7M4IbSan3+yR+IT2W6/gv0tWDkJa5DR+0yPPRNCh1yTvLk
	a0b8d/+UXfsPTl3OmBSr6DxypnP4fGUNz7SOJDu5bkU/TwEoJnKVV8x2kPy7rHoqYcFC
	Sol4IHPBYNX9cjNwTRhuTkCIRm/bMM+ZXZs67zEBK9OKZY2tg3rcT3HKgk3cOxOxKhEo
	BJhpUbZWd4pNthUK98ouk42s2lXTTwJPyLfqE670SIEvkC1AIVkHkPrXJ47sVL0VYU9t
	PESw==
Received: by 10.14.5.78 with SMTP id 54mr3036071eek.1.1344599348610;
	Fri, 10 Aug 2012 04:49:08 -0700 (PDT)
Received: from [172.16.26.11] (b0fb6d57.bb.sky.com. [176.251.109.87])
	by mx.google.com with ESMTPS id u47sm10870834eeo.9.2012.08.10.04.49.06
	(version=SSLv3 cipher=OTHER); Fri, 10 Aug 2012 04:49:07 -0700 (PDT)
Message-ID: <5024F531.20509@xen.org>
Date: Fri, 10 Aug 2012 12:49:05 +0100
From: Lars Kurth <lars.kurth@xen.org>
User-Agent: Mozilla/5.0 (Windows NT 6.1;
	rv:14.0) Gecko/20120713 Thunderbird/14.0
MIME-Version: 1.0
To: xen-devel@lists.xen.org
References: <CC31DDD9.464E7%keir@xen.org>
In-Reply-To: <CC31DDD9.464E7%keir@xen.org>
Subject: Re: [Xen-devel] [ANNOUNCE] Third release candidates for 4.0.4 and
 4.1.3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: lars.kurth@xen.org
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Keir,
do we have a list of what has gone into these (worth highlighting)? I 
can't see any CVE or XSA numbers, and the check-in descriptions don't 
tell me an awful lot.
Lars

On 22/07/2012 16:42, Keir Fraser wrote:
> Folks,
>
> I have just tagged 3rd release candidates for 4.0.4 and 4.1.3:
>
> http://xenbits.xen.org/staging/xen-4.0-testing.hg (tag 4.0.4-rc3)
> http://xenbits.xen.org/staging/xen-4.1-testing.hg (tag 4.1.3-rc3)
>
> Please test! Assuming nothing crops up, we will make these the official
> point releases at the end of this month.
>
>   -- Keir
>
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 12:06:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 12:06:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Sznyu-0003XY-RH; Fri, 10 Aug 2012 12:06:20 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lists+xen@internecto.net>) id 1Sznyt-0003XE-Q8
	for xen-devel@lists.xen.org; Fri, 10 Aug 2012 12:06:19 +0000
X-Env-Sender: lists+xen@internecto.net
X-Msg-Ref: server-16.tower-27.messagelabs.com!1344600373!8420657!1
X-Originating-IP: [176.9.245.29]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26254 invoked from network); 10 Aug 2012 12:06:13 -0000
Received: from polaris.internecto.net (HELO mx1.internecto.net) (176.9.245.29)
	by server-16.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Aug 2012 12:06:13 -0000
Received: from localhost (localhost [127.0.0.1])
	by mx1.internecto.net (Postfix) with ESMTP id 2B80BA02F1;
	Fri, 10 Aug 2012 12:06:13 +0000 (UTC)
X-Virus-Scanned: Debian amavisd-new at mail.internecto.net
Received: from mx1.internecto.net ([127.0.0.1])
	by localhost (mail.polaris.internecto.net [127.0.0.1]) (amavisd-new,
	port 10024)
	with ESMTP id DpRrgR3W8rSn; Fri, 10 Aug 2012 12:06:12 +0000 (UTC)
Received: from internecto.net (5ED4FDEB.cm-7-5d.dynamic.ziggo.nl
	[94.212.253.235])
	(using TLSv1 with cipher DHE-RSA-AES128-SHA (128/128 bits))
	(Client did not present a certificate)
	(Authenticated sender: lists@internecto.net)
	by mx1.internecto.net (Postfix) with ESMTPSA id 96082A02EA;
	Fri, 10 Aug 2012 12:06:12 +0000 (UTC)
Date: Fri, 10 Aug 2012 14:06:11 +0200
From: Mark van Dijk <lists+xen@internecto.net>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20120810140611.4ca8a1fb@internecto.net>
In-Reply-To: <alpine.DEB.2.02.1208101110370.21096@kaball.uk.xensource.com>
References: <20120810115405.05af653e@internecto.net>
	<alpine.DEB.2.02.1208101110370.21096@kaball.uk.xensource.com>
Organization: Internecto SIS
X-Mailer: Claws Mail 3.8.0 (GTK+ 2.24.10; x86_64-pc-linux-gnu)
Mime-Version: 1.0
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] xen 4.2.0-rc3-pre: building failure on alpine linux
 / uclibc
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> This is upstream QEMU that is breaking, not qemu-xen-traditional (see
> the code path: qemu-xen-dir-remote instead of
> qemu-xen-traditional-dir-remote).

Ah, I didn't know; it's a little confusing. Would you like me to
submit a bug report with them?

> Moreover it is breaking compiling qemu-nbd that we aren't currently
> using. I would try out the following change to the configure script:
> (..snip..)

Yes, that works, thanks! But it gives a new error which I haven't
been able to solve yet:

---
LINK  qemu-nbd

cutils.o: In function `strtosz_suffix_unit':

tools/qemu-xen-dir/cutils.c:354: undefined reference to
`__isnan'

tools/qemu-xen-dir/cutils.c:357: undefined reference to `modf'
collect2: ld returned 1 exit status
---

Any idea there?

Also -- if we're not using qemu-nbd, could you suggest a
workaround? I'd prefer something that can be patched or applied
before I run the make process. (Right now I run make twice: if the
first run fails, I patch and run again, and if it fails again I
error out.)

Thanks so far,
Mark

-- 
Stay in touch,
Mark van Dijk.               ,--------------------------------
----------------------------'        Fri Aug 10 12:02 UTC 2012
Today is Boomtime, the 3rd day of Bureaucracy in the YOLD 3178

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 12:09:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 12:09:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Szo1z-0003ea-FZ; Fri, 10 Aug 2012 12:09:31 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Szo1x-0003eK-L8
	for xen-devel@lists.xensource.com; Fri, 10 Aug 2012 12:09:29 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1344600559!8556914!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg4NDM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29367 invoked from network); 10 Aug 2012 12:09:19 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Aug 2012 12:09:19 -0000
X-IronPort-AV: E=Sophos;i="4.77,745,1336348800"; d="scan'208";a="13951861"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Aug 2012 12:09:18 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 10 Aug 2012 13:09:18 +0100
Date: Fri, 10 Aug 2012 13:09:03 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Message-ID: <alpine.DEB.2.02.1208101222370.21096@kaball.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "Tim Deegan \(3P\)" <Tim.Deegan@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH v2 0/5] ARM hypercall ABI: 64 bit ready
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi all,
this patch series makes the necessary changes to make sure that the
current ARM hypercall ABI can be used as-is on 64 bit ARM platforms:

- it defines xen_ulong_t as uint64_t on ARM;
- it introduces a new macro to handle guest pointers, called
XEN_GUEST_HANDLE_PARAM (that has size 4 bytes on aarch and is going to
have size 8 bytes on aarch64);
- it replaces all the occurrences of XEN_GUEST_HANDLE in hypercall
parameters with XEN_GUEST_HANDLE_PARAM.


On x86 and ia64 things should stay exactly the same.

On ARM, all unsigned longs and guest pointers that are members of
a struct become 8 bytes in size (on both aarch and aarch64).
However, guest pointers passed as hypercall arguments in registers
are 4 bytes on aarch and 8 bytes on aarch64.


It is based on Ian's arm-for-4.3 branch. 


Changes in v2:

- do not use an anonymous union in struct xen_add_to_physmap; 
- do not replace the unsigned long in x86 specific calls;
- do not replace the unsigned long in multicall_entry;
- add missing include "xen.h" in version.h;
- use proper printf flag for xen_ulong_t in python/xen/lowlevel/xc/xc;
- add 2 missing #define _XEN_GUEST_HANDLE_PARAM for the compilation of
the compat code;
- add a patch to limit the maximum number of extents handled by
do_memory_op;
- remove the patch "introduce __lshrdi3 and __aeabi_llsr" that is
already in the for-4.3 branch.



Stefano Stabellini (5):
      xen: improve changes to xen_add_to_physmap
      xen: xen_ulong_t substitution
      xen: change the limit of nr_extents to UINT_MAX >> MEMOP_EXTENT_SHIFT
      xen: introduce XEN_GUEST_HANDLE_PARAM
      xen: replace XEN_GUEST_HANDLE with XEN_GUEST_HANDLE_PARAM when appropriate

 tools/firmware/hvmloader/pci.c           |    2 +-
 tools/python/xen/lowlevel/xc/xc.c        |    2 +-
 xen/arch/arm/domain.c                    |    2 +-
 xen/arch/arm/domctl.c                    |    2 +-
 xen/arch/arm/hvm.c                       |    2 +-
 xen/arch/arm/mm.c                        |    4 +-
 xen/arch/arm/physdev.c                   |    2 +-
 xen/arch/arm/sysctl.c                    |    2 +-
 xen/arch/x86/cpu/mcheck/mce.c            |    2 +-
 xen/arch/x86/domain.c                    |    2 +-
 xen/arch/x86/domctl.c                    |    2 +-
 xen/arch/x86/efi/runtime.c               |    2 +-
 xen/arch/x86/hvm/hvm.c                   |   26 +++++++-------
 xen/arch/x86/microcode.c                 |    2 +-
 xen/arch/x86/mm.c                        |   24 +++++++-------
 xen/arch/x86/mm/hap/hap.c                |    2 +-
 xen/arch/x86/mm/mem_event.c              |    2 +-
 xen/arch/x86/mm/paging.c                 |    2 +-
 xen/arch/x86/mm/shadow/common.c          |    2 +-
 xen/arch/x86/physdev.c                   |    2 +-
 xen/arch/x86/platform_hypercall.c        |    2 +-
 xen/arch/x86/sysctl.c                    |    2 +-
 xen/arch/x86/traps.c                     |    2 +-
 xen/arch/x86/x86_32/mm.c                 |    2 +-
 xen/arch/x86/x86_32/traps.c              |    2 +-
 xen/arch/x86/x86_64/compat/mm.c          |   14 ++++++--
 xen/arch/x86/x86_64/domain.c             |    2 +-
 xen/arch/x86/x86_64/mm.c                 |    2 +-
 xen/arch/x86/x86_64/platform_hypercall.c |    1 +
 xen/arch/x86/x86_64/traps.c              |    2 +-
 xen/common/compat/domain.c               |    2 +-
 xen/common/compat/grant_table.c          |    2 +-
 xen/common/compat/memory.c               |    2 +-
 xen/common/compat/multicall.c            |    1 +
 xen/common/domain.c                      |    2 +-
 xen/common/domctl.c                      |    2 +-
 xen/common/event_channel.c               |    2 +-
 xen/common/grant_table.c                 |   36 ++++++++++----------
 xen/common/kernel.c                      |    4 +-
 xen/common/kexec.c                       |   16 ++++----
 xen/common/memory.c                      |    6 ++--
 xen/common/multicall.c                   |    2 +-
 xen/common/schedule.c                    |    2 +-
 xen/common/sysctl.c                      |    2 +-
 xen/common/xenoprof.c                    |    8 ++--
 xen/drivers/acpi/pmstat.c                |    2 +-
 xen/drivers/char/console.c               |    6 ++--
 xen/drivers/passthrough/iommu.c          |    2 +-
 xen/include/asm-arm/guest_access.h       |    2 +-
 xen/include/asm-arm/hypercall.h          |    2 +-
 xen/include/asm-arm/mm.h                 |    2 +-
 xen/include/asm-x86/hap.h                |    2 +-
 xen/include/asm-x86/hypercall.h          |   24 +++++++-------
 xen/include/asm-x86/mem_event.h          |    2 +-
 xen/include/asm-x86/mm.h                 |    8 ++--
 xen/include/asm-x86/paging.h             |    2 +-
 xen/include/asm-x86/processor.h          |    2 +-
 xen/include/asm-x86/shadow.h             |    2 +-
 xen/include/asm-x86/xenoprof.h           |    6 ++--
 xen/include/public/arch-arm.h            |   30 +++++++++++++----
 xen/include/public/arch-ia64.h           |    9 +++++
 xen/include/public/arch-x86/xen.h        |    9 +++++
 xen/include/public/memory.h              |   11 ++++--
 xen/include/public/version.h             |    4 ++-
 xen/include/xen/acpi.h                   |    4 +-
 xen/include/xen/hypercall.h              |   52 +++++++++++++++---------------
 xen/include/xen/iommu.h                  |    2 +-
 xen/include/xen/tmem_xen.h               |    2 +-
 xen/include/xsm/xsm.h                    |    4 +-
 xen/xsm/dummy.c                          |    2 +-
 xen/xsm/flask/flask_op.c                 |    4 +-
 xen/xsm/flask/hooks.c                    |    2 +-
 xen/xsm/xsm_core.c                       |    2 +-
 73 files changed, 228 insertions(+), 181 deletions(-)

Cheers,

Stefano

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 12:10:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 12:10:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Szo36-0003lI-Vh; Fri, 10 Aug 2012 12:10:40 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Szo35-0003jS-Qk
	for xen-devel@lists.xensource.com; Fri, 10 Aug 2012 12:10:40 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1344600631!8581181!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjQwOTA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15851 invoked from network); 10 Aug 2012 12:10:32 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Aug 2012 12:10:32 -0000
X-IronPort-AV: E=Sophos;i="4.77,745,1336363200"; d="scan'208";a="34247665"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Aug 2012 08:10:29 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 10 Aug 2012 08:10:29 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Szo2p-0006BW-K0;
	Fri, 10 Aug 2012 13:10:23 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: xen-devel@lists.xensource.com
Date: Fri, 10 Aug 2012 13:10:12 +0100
Message-ID: <1344600612-10815-5-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208101222370.21096@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208101222370.21096@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: Tim.Deegan@citrix.com, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH v2 5/5] xen: replace XEN_GUEST_HANDLE with
	XEN_GUEST_HANDLE_PARAM when appropriate
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Note: these changes make no functional difference on x86 and ia64.

Replace XEN_GUEST_HANDLE with XEN_GUEST_HANDLE_PARAM when it is used as
a hypercall argument.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/arch/arm/domain.c              |    2 +-
 xen/arch/arm/domctl.c              |    2 +-
 xen/arch/arm/hvm.c                 |    2 +-
 xen/arch/arm/mm.c                  |    2 +-
 xen/arch/arm/physdev.c             |    2 +-
 xen/arch/arm/sysctl.c              |    2 +-
 xen/arch/x86/cpu/mcheck/mce.c      |    2 +-
 xen/arch/x86/domain.c              |    2 +-
 xen/arch/x86/domctl.c              |    2 +-
 xen/arch/x86/efi/runtime.c         |    2 +-
 xen/arch/x86/hvm/hvm.c             |   26 +++++++++---------
 xen/arch/x86/microcode.c           |    2 +-
 xen/arch/x86/mm.c                  |   14 +++++-----
 xen/arch/x86/mm/hap/hap.c          |    2 +-
 xen/arch/x86/mm/mem_event.c        |    2 +-
 xen/arch/x86/mm/paging.c           |    2 +-
 xen/arch/x86/mm/shadow/common.c    |    2 +-
 xen/arch/x86/physdev.c             |    2 +-
 xen/arch/x86/platform_hypercall.c  |    2 +-
 xen/arch/x86/sysctl.c              |    2 +-
 xen/arch/x86/traps.c               |    2 +-
 xen/arch/x86/x86_32/mm.c           |    2 +-
 xen/arch/x86/x86_32/traps.c        |    2 +-
 xen/arch/x86/x86_64/compat/mm.c    |    8 +++---
 xen/arch/x86/x86_64/domain.c       |    2 +-
 xen/arch/x86/x86_64/mm.c           |    2 +-
 xen/arch/x86/x86_64/traps.c        |    2 +-
 xen/common/compat/domain.c         |    2 +-
 xen/common/compat/grant_table.c    |    2 +-
 xen/common/compat/memory.c         |    2 +-
 xen/common/domain.c                |    2 +-
 xen/common/domctl.c                |    2 +-
 xen/common/event_channel.c         |    2 +-
 xen/common/grant_table.c           |   36 ++++++++++++------------
 xen/common/kernel.c                |    4 +-
 xen/common/kexec.c                 |   16 +++++-----
 xen/common/memory.c                |    4 +-
 xen/common/multicall.c             |    2 +-
 xen/common/schedule.c              |    2 +-
 xen/common/sysctl.c                |    2 +-
 xen/common/xenoprof.c              |    8 +++---
 xen/drivers/acpi/pmstat.c          |    2 +-
 xen/drivers/char/console.c         |    6 ++--
 xen/drivers/passthrough/iommu.c    |    2 +-
 xen/include/asm-arm/guest_access.h |    2 +-
 xen/include/asm-arm/hypercall.h    |    2 +-
 xen/include/asm-arm/mm.h           |    2 +-
 xen/include/asm-x86/hap.h          |    2 +-
 xen/include/asm-x86/hypercall.h    |   24 ++++++++--------
 xen/include/asm-x86/mem_event.h    |    2 +-
 xen/include/asm-x86/mm.h           |    8 +++---
 xen/include/asm-x86/paging.h       |    2 +-
 xen/include/asm-x86/processor.h    |    2 +-
 xen/include/asm-x86/shadow.h       |    2 +-
 xen/include/asm-x86/xenoprof.h     |    6 ++--
 xen/include/xen/acpi.h             |    4 +-
 xen/include/xen/hypercall.h        |   52 ++++++++++++++++++------------------
 xen/include/xen/iommu.h            |    2 +-
 xen/include/xen/tmem_xen.h         |    2 +-
 xen/include/xsm/xsm.h              |    4 +-
 xen/xsm/dummy.c                    |    2 +-
 xen/xsm/flask/flask_op.c           |    4 +-
 xen/xsm/flask/hooks.c              |    2 +-
 xen/xsm/xsm_core.c                 |    2 +-
 64 files changed, 160 insertions(+), 160 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index ee58d68..07b50e2 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -515,7 +515,7 @@ void arch_dump_domain_info(struct domain *d)
 {
 }
 
-long arch_do_vcpu_op(int cmd, struct vcpu *v, XEN_GUEST_HANDLE(void) arg)
+long arch_do_vcpu_op(int cmd, struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     return -ENOSYS;
 }
diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
index 1a5f79f..cf16791 100644
--- a/xen/arch/arm/domctl.c
+++ b/xen/arch/arm/domctl.c
@@ -11,7 +11,7 @@
 #include <public/domctl.h>
 
 long arch_do_domctl(struct xen_domctl *domctl,
-                    XEN_GUEST_HANDLE(xen_domctl_t) u_domctl)
+                    XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
     return -ENOSYS;
 }
diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
index c11378d..40f519e 100644
--- a/xen/arch/arm/hvm.c
+++ b/xen/arch/arm/hvm.c
@@ -11,7 +11,7 @@
 
 #include <asm/hypercall.h>
 
-long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE(void) arg)
+long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
 
 {
     long rc = 0;
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 2400e1c..3e8b6cc 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -541,7 +541,7 @@ static int xenmem_add_to_physmap(struct domain *d,
     return xenmem_add_to_physmap_once(d, xatp);
 }
 
-long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
+long arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     int rc;
 
diff --git a/xen/arch/arm/physdev.c b/xen/arch/arm/physdev.c
index bcf4337..0801e8c 100644
--- a/xen/arch/arm/physdev.c
+++ b/xen/arch/arm/physdev.c
@@ -11,7 +11,7 @@
 #include <asm/hypercall.h>
 
 
-int do_physdev_op(int cmd, XEN_GUEST_HANDLE(void) arg)
+int do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     printk("%s %d cmd=%d: not implemented yet\n", __func__, __LINE__, cmd);
     return -ENOSYS;
diff --git a/xen/arch/arm/sysctl.c b/xen/arch/arm/sysctl.c
index e8e1c0d..a286abe 100644
--- a/xen/arch/arm/sysctl.c
+++ b/xen/arch/arm/sysctl.c
@@ -13,7 +13,7 @@
 #include <public/sysctl.h>
 
 long arch_do_sysctl(struct xen_sysctl *sysctl,
-                    XEN_GUEST_HANDLE(xen_sysctl_t) u_sysctl)
+                    XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
 {
     return -ENOSYS;
 }
diff --git a/xen/arch/x86/cpu/mcheck/mce.c b/xen/arch/x86/cpu/mcheck/mce.c
index a89df6d..0f122b3 100644
--- a/xen/arch/x86/cpu/mcheck/mce.c
+++ b/xen/arch/x86/cpu/mcheck/mce.c
@@ -1357,7 +1357,7 @@ CHECK_mcinfo_recovery;
 #endif
 
 /* Machine Check Architecture Hypercall */
-long do_mca(XEN_GUEST_HANDLE(xen_mc_t) u_xen_mc)
+long do_mca(XEN_GUEST_HANDLE_PARAM(xen_mc_t) u_xen_mc)
 {
     long ret = 0;
     struct xen_mc curop, *op = &curop;
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 5bba4b9..13ff776 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1138,7 +1138,7 @@ map_vcpu_info(struct vcpu *v, unsigned long gfn, unsigned offset)
 
 long
 arch_do_vcpu_op(
-    int cmd, struct vcpu *v, XEN_GUEST_HANDLE(void) arg)
+    int cmd, struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     long rc = 0;
 
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 135ea6e..663bfe4 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -48,7 +48,7 @@ static int gdbsx_guest_mem_io(
 
 long arch_do_domctl(
     struct xen_domctl *domctl,
-    XEN_GUEST_HANDLE(xen_domctl_t) u_domctl)
+    XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
     long ret = 0;
 
diff --git a/xen/arch/x86/efi/runtime.c b/xen/arch/x86/efi/runtime.c
index 1dbe2db..b2ff495 100644
--- a/xen/arch/x86/efi/runtime.c
+++ b/xen/arch/x86/efi/runtime.c
@@ -184,7 +184,7 @@ int efi_get_info(uint32_t idx, union xenpf_efi_info *info)
     return 0;
 }
 
-static long gwstrlen(XEN_GUEST_HANDLE(CHAR16) str)
+static long gwstrlen(XEN_GUEST_HANDLE_PARAM(CHAR16) str)
 {
     unsigned long len;
 
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 7f8a025c..e2bf831 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -3047,14 +3047,14 @@ static int grant_table_op_is_allowed(unsigned int cmd)
 }
 
 static long hvm_grant_table_op(
-    unsigned int cmd, XEN_GUEST_HANDLE(void) uop, unsigned int count)
+    unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) uop, unsigned int count)
 {
     if ( !grant_table_op_is_allowed(cmd) )
         return -ENOSYS; /* all other commands need auditing */
     return do_grant_table_op(cmd, uop, count);
 }
 
-static long hvm_memory_op(int cmd, XEN_GUEST_HANDLE(void) arg)
+static long hvm_memory_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     long rc;
 
@@ -3072,7 +3072,7 @@ static long hvm_memory_op(int cmd, XEN_GUEST_HANDLE(void) arg)
     return do_memory_op(cmd, arg);
 }
 
-static long hvm_physdev_op(int cmd, XEN_GUEST_HANDLE(void) arg)
+static long hvm_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     switch ( cmd )
     {
@@ -3088,7 +3088,7 @@ static long hvm_physdev_op(int cmd, XEN_GUEST_HANDLE(void) arg)
 }
 
 static long hvm_vcpu_op(
-    int cmd, int vcpuid, XEN_GUEST_HANDLE(void) arg)
+    int cmd, int vcpuid, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     long rc;
 
@@ -3137,7 +3137,7 @@ static hvm_hypercall_t *hvm_hypercall32_table[NR_hypercalls] = {
 #else /* defined(__x86_64__) */
 
 static long hvm_grant_table_op_compat32(unsigned int cmd,
-                                        XEN_GUEST_HANDLE(void) uop,
+                                        XEN_GUEST_HANDLE_PARAM(void) uop,
                                         unsigned int count)
 {
     if ( !grant_table_op_is_allowed(cmd) )
@@ -3145,7 +3145,7 @@ static long hvm_grant_table_op_compat32(unsigned int cmd,
     return compat_grant_table_op(cmd, uop, count);
 }
 
-static long hvm_memory_op_compat32(int cmd, XEN_GUEST_HANDLE(void) arg)
+static long hvm_memory_op_compat32(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     int rc;
 
@@ -3164,7 +3164,7 @@ static long hvm_memory_op_compat32(int cmd, XEN_GUEST_HANDLE(void) arg)
 }
 
 static long hvm_vcpu_op_compat32(
-    int cmd, int vcpuid, XEN_GUEST_HANDLE(void) arg)
+    int cmd, int vcpuid, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     long rc;
 
@@ -3188,7 +3188,7 @@ static long hvm_vcpu_op_compat32(
 }
 
 static long hvm_physdev_op_compat32(
-    int cmd, XEN_GUEST_HANDLE(void) arg)
+    int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     switch ( cmd )
     {
@@ -3360,7 +3360,7 @@ void hvm_hypercall_page_initialise(struct domain *d,
 }
 
 static int hvmop_set_pci_intx_level(
-    XEN_GUEST_HANDLE(xen_hvm_set_pci_intx_level_t) uop)
+    XEN_GUEST_HANDLE_PARAM(xen_hvm_set_pci_intx_level_t) uop)
 {
     struct xen_hvm_set_pci_intx_level op;
     struct domain *d;
@@ -3525,7 +3525,7 @@ static void hvm_s3_resume(struct domain *d)
 }
 
 static int hvmop_set_isa_irq_level(
-    XEN_GUEST_HANDLE(xen_hvm_set_isa_irq_level_t) uop)
+    XEN_GUEST_HANDLE_PARAM(xen_hvm_set_isa_irq_level_t) uop)
 {
     struct xen_hvm_set_isa_irq_level op;
     struct domain *d;
@@ -3569,7 +3569,7 @@ static int hvmop_set_isa_irq_level(
 }
 
 static int hvmop_set_pci_link_route(
-    XEN_GUEST_HANDLE(xen_hvm_set_pci_link_route_t) uop)
+    XEN_GUEST_HANDLE_PARAM(xen_hvm_set_pci_link_route_t) uop)
 {
     struct xen_hvm_set_pci_link_route op;
     struct domain *d;
@@ -3602,7 +3602,7 @@ static int hvmop_set_pci_link_route(
 }
 
 static int hvmop_inject_msi(
-    XEN_GUEST_HANDLE(xen_hvm_inject_msi_t) uop)
+    XEN_GUEST_HANDLE_PARAM(xen_hvm_inject_msi_t) uop)
 {
     struct xen_hvm_inject_msi op;
     struct domain *d;
@@ -3686,7 +3686,7 @@ static int hvm_replace_event_channel(struct vcpu *v, domid_t remote_domid,
     return 0;
 }
 
-long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE(void) arg)
+long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
 
 {
     struct domain *curr_d = current->domain;
diff --git a/xen/arch/x86/microcode.c b/xen/arch/x86/microcode.c
index bdda3f5..1477481 100644
--- a/xen/arch/x86/microcode.c
+++ b/xen/arch/x86/microcode.c
@@ -192,7 +192,7 @@ static long do_microcode_update(void *_info)
     return error;
 }
 
-int microcode_update(XEN_GUEST_HANDLE(const_void) buf, unsigned long len)
+int microcode_update(XEN_GUEST_HANDLE_PARAM(const_void) buf, unsigned long len)
 {
     int ret;
     struct microcode_info *info;
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index f5c704e..4d72700 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -2914,7 +2914,7 @@ static void put_pg_owner(struct domain *pg_owner)
 }
 
 static inline int vcpumask_to_pcpumask(
-    struct domain *d, XEN_GUEST_HANDLE(const_void) bmap, cpumask_t *pmask)
+    struct domain *d, XEN_GUEST_HANDLE_PARAM(const_void) bmap, cpumask_t *pmask)
 {
     unsigned int vcpu_id, vcpu_bias, offs;
     unsigned long vmask;
@@ -2974,9 +2974,9 @@ static inline void fixunmap_domain_page(const void *ptr)
 #endif
 
 int do_mmuext_op(
-    XEN_GUEST_HANDLE(mmuext_op_t) uops,
+    XEN_GUEST_HANDLE_PARAM(mmuext_op_t) uops,
     unsigned int count,
-    XEN_GUEST_HANDLE(uint) pdone,
+    XEN_GUEST_HANDLE_PARAM(uint) pdone,
     unsigned int foreigndom)
 {
     struct mmuext_op op;
@@ -3438,9 +3438,9 @@ int do_mmuext_op(
 }
 
 int do_mmu_update(
-    XEN_GUEST_HANDLE(mmu_update_t) ureqs,
+    XEN_GUEST_HANDLE_PARAM(mmu_update_t) ureqs,
     unsigned int count,
-    XEN_GUEST_HANDLE(uint) pdone,
+    XEN_GUEST_HANDLE_PARAM(uint) pdone,
     unsigned int foreigndom)
 {
     struct mmu_update req;
@@ -4387,7 +4387,7 @@ long set_gdt(struct vcpu *v,
 }
 
 
-long do_set_gdt(XEN_GUEST_HANDLE(ulong) frame_list, unsigned int entries)
+long do_set_gdt(XEN_GUEST_HANDLE_PARAM(ulong) frame_list, unsigned int entries)
 {
     int nr_pages = (entries + 511) / 512;
     unsigned long frames[16];
@@ -4661,7 +4661,7 @@ static int xenmem_add_to_physmap(struct domain *d,
     return xenmem_add_to_physmap_once(d, xatp);
 }
 
-long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
+long arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     int rc;
 
diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 13b4be2..67e48a3 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -690,7 +690,7 @@ void hap_teardown(struct domain *d)
 }
 
 int hap_domctl(struct domain *d, xen_domctl_shadow_op_t *sc,
-               XEN_GUEST_HANDLE(void) u_domctl)
+               XEN_GUEST_HANDLE_PARAM(void) u_domctl)
 {
     int rc, preempted = 0;
 
diff --git a/xen/arch/x86/mm/mem_event.c b/xen/arch/x86/mm/mem_event.c
index d728889..d3dac14 100644
--- a/xen/arch/x86/mm/mem_event.c
+++ b/xen/arch/x86/mm/mem_event.c
@@ -512,7 +512,7 @@ void mem_event_cleanup(struct domain *d)
 }
 
 int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
-                     XEN_GUEST_HANDLE(void) u_domctl)
+                     XEN_GUEST_HANDLE_PARAM(void) u_domctl)
 {
     int rc;
 
diff --git a/xen/arch/x86/mm/paging.c b/xen/arch/x86/mm/paging.c
index ca879f9..ea44e39 100644
--- a/xen/arch/x86/mm/paging.c
+++ b/xen/arch/x86/mm/paging.c
@@ -654,7 +654,7 @@ void paging_vcpu_init(struct vcpu *v)
 
 
 int paging_domctl(struct domain *d, xen_domctl_shadow_op_t *sc,
-                  XEN_GUEST_HANDLE(void) u_domctl)
+                  XEN_GUEST_HANDLE_PARAM(void) u_domctl)
 {
     int rc;
 
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index dc245be..bd47f03 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -3786,7 +3786,7 @@ out:
 
 int shadow_domctl(struct domain *d, 
                   xen_domctl_shadow_op_t *sc,
-                  XEN_GUEST_HANDLE(void) u_domctl)
+                  XEN_GUEST_HANDLE_PARAM(void) u_domctl)
 {
     int rc, preempted = 0;
 
diff --git a/xen/arch/x86/physdev.c b/xen/arch/x86/physdev.c
index b0458fd..b6474ef 100644
--- a/xen/arch/x86/physdev.c
+++ b/xen/arch/x86/physdev.c
@@ -255,7 +255,7 @@ int physdev_unmap_pirq(domid_t domid, int pirq)
 }
 #endif /* COMPAT */
 
-ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE(void) arg)
+ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     int irq;
     ret_t ret;
diff --git a/xen/arch/x86/platform_hypercall.c b/xen/arch/x86/platform_hypercall.c
index 88880b0..a32e0a2 100644
--- a/xen/arch/x86/platform_hypercall.c
+++ b/xen/arch/x86/platform_hypercall.c
@@ -60,7 +60,7 @@ long cpu_down_helper(void *data);
 long core_parking_helper(void *data);
 uint32_t get_cur_idle_nums(void);
 
-ret_t do_platform_op(XEN_GUEST_HANDLE(xen_platform_op_t) u_xenpf_op)
+ret_t do_platform_op(XEN_GUEST_HANDLE_PARAM(xen_platform_op_t) u_xenpf_op)
 {
     ret_t ret = 0;
     struct xen_platform_op curop, *op = &curop;
diff --git a/xen/arch/x86/sysctl.c b/xen/arch/x86/sysctl.c
index 379f071..b84dd34 100644
--- a/xen/arch/x86/sysctl.c
+++ b/xen/arch/x86/sysctl.c
@@ -58,7 +58,7 @@ long cpu_down_helper(void *data)
 }
 
 long arch_do_sysctl(
-    struct xen_sysctl *sysctl, XEN_GUEST_HANDLE(xen_sysctl_t) u_sysctl)
+    struct xen_sysctl *sysctl, XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
 {
     long ret = 0;
 
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 767be86..281d9e7 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -3700,7 +3700,7 @@ int send_guest_trap(struct domain *d, uint16_t vcpuid, unsigned int trap_nr)
 }
 
 
-long do_set_trap_table(XEN_GUEST_HANDLE(const_trap_info_t) traps)
+long do_set_trap_table(XEN_GUEST_HANDLE_PARAM(const_trap_info_t) traps)
 {
     struct trap_info cur;
     struct vcpu *curr = current;
diff --git a/xen/arch/x86/x86_32/mm.c b/xen/arch/x86/x86_32/mm.c
index 37efa3c..f6448fb 100644
--- a/xen/arch/x86/x86_32/mm.c
+++ b/xen/arch/x86/x86_32/mm.c
@@ -203,7 +203,7 @@ void __init subarch_init_memory(void)
     }
 }
 
-long subarch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
+long subarch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct xen_machphys_mfn_list xmml;
     unsigned long mfn, last_mfn;
diff --git a/xen/arch/x86/x86_32/traps.c b/xen/arch/x86/x86_32/traps.c
index 8f68808..0c7c860 100644
--- a/xen/arch/x86/x86_32/traps.c
+++ b/xen/arch/x86/x86_32/traps.c
@@ -492,7 +492,7 @@ static long unregister_guest_callback(struct callback_unregister *unreg)
 }
 
 
-long do_callback_op(int cmd, XEN_GUEST_HANDLE(const_void) arg)
+long do_callback_op(int cmd, XEN_GUEST_HANDLE_PARAM(const_void) arg)
 {
     long ret;
 
diff --git a/xen/arch/x86/x86_64/compat/mm.c b/xen/arch/x86/x86_64/compat/mm.c
index 5bcd2fd..d24a324 100644
--- a/xen/arch/x86/x86_64/compat/mm.c
+++ b/xen/arch/x86/x86_64/compat/mm.c
@@ -5,7 +5,7 @@
 #include <asm/mem_event.h>
 #include <asm/mem_sharing.h>
 
-int compat_set_gdt(XEN_GUEST_HANDLE(uint) frame_list, unsigned int entries)
+int compat_set_gdt(XEN_GUEST_HANDLE_PARAM(uint) frame_list, unsigned int entries)
 {
     unsigned int i, nr_pages = (entries + 511) / 512;
     unsigned long frames[16];
@@ -44,7 +44,7 @@ int compat_update_descriptor(u32 pa_lo, u32 pa_hi, u32 desc_lo, u32 desc_hi)
                                 desc_lo | ((u64)desc_hi << 32));
 }
 
-int compat_arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
+int compat_arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct compat_machphys_mfn_list xmml;
     l2_pgentry_t l2e;
@@ -266,9 +266,9 @@ int compat_update_va_mapping_otherdomain(unsigned long va, u32 lo, u32 hi,
 
 DEFINE_XEN_GUEST_HANDLE(mmuext_op_compat_t);
 
-int compat_mmuext_op(XEN_GUEST_HANDLE(mmuext_op_compat_t) cmp_uops,
+int compat_mmuext_op(XEN_GUEST_HANDLE_PARAM(mmuext_op_compat_t) cmp_uops,
                      unsigned int count,
-                     XEN_GUEST_HANDLE(uint) pdone,
+                     XEN_GUEST_HANDLE_PARAM(uint) pdone,
                      unsigned int foreigndom)
 {
     unsigned int i, preempt_mask;
diff --git a/xen/arch/x86/x86_64/domain.c b/xen/arch/x86/x86_64/domain.c
index e746c89..144ca2d 100644
--- a/xen/arch/x86/x86_64/domain.c
+++ b/xen/arch/x86/x86_64/domain.c
@@ -23,7 +23,7 @@ CHECK_vcpu_get_physid;
 
 int
 arch_compat_vcpu_op(
-    int cmd, struct vcpu *v, XEN_GUEST_HANDLE(void) arg)
+    int cmd, struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     int rc = -ENOSYS;
 
diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index 635a499..17c46a1 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -1043,7 +1043,7 @@ void __init subarch_init_memory(void)
     }
 }
 
-long subarch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
+long subarch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct xen_machphys_mfn_list xmml;
     l3_pgentry_t l3e;
diff --git a/xen/arch/x86/x86_64/traps.c b/xen/arch/x86/x86_64/traps.c
index 806cf2e..6ead813 100644
--- a/xen/arch/x86/x86_64/traps.c
+++ b/xen/arch/x86/x86_64/traps.c
@@ -518,7 +518,7 @@ static long unregister_guest_callback(struct callback_unregister *unreg)
 }
 
 
-long do_callback_op(int cmd, XEN_GUEST_HANDLE(const_void) arg)
+long do_callback_op(int cmd, XEN_GUEST_HANDLE_PARAM(const_void) arg)
 {
     long ret;
 
diff --git a/xen/common/compat/domain.c b/xen/common/compat/domain.c
index 40a0287..e4c8ceb 100644
--- a/xen/common/compat/domain.c
+++ b/xen/common/compat/domain.c
@@ -15,7 +15,7 @@
 CHECK_vcpu_set_periodic_timer;
 #undef xen_vcpu_set_periodic_timer
 
-int compat_vcpu_op(int cmd, int vcpuid, XEN_GUEST_HANDLE(void) arg)
+int compat_vcpu_op(int cmd, int vcpuid, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct domain *d = current->domain;
     struct vcpu *v;
diff --git a/xen/common/compat/grant_table.c b/xen/common/compat/grant_table.c
index edd20c6..74a4733 100644
--- a/xen/common/compat/grant_table.c
+++ b/xen/common/compat/grant_table.c
@@ -52,7 +52,7 @@ CHECK_gnttab_swap_grant_ref;
 #undef xen_gnttab_swap_grant_ref
 
 int compat_grant_table_op(unsigned int cmd,
-                          XEN_GUEST_HANDLE(void) cmp_uop,
+                          XEN_GUEST_HANDLE_PARAM(void) cmp_uop,
                           unsigned int count)
 {
     int rc = 0;
diff --git a/xen/common/compat/memory.c b/xen/common/compat/memory.c
index e7257cc..8e311ff 100644
--- a/xen/common/compat/memory.c
+++ b/xen/common/compat/memory.c
@@ -13,7 +13,7 @@ CHECK_TYPE(domid);
 #undef compat_domid_t
 #undef xen_domid_t
 
-int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE(void) compat)
+int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) compat)
 {
     int rc, split, op = cmd & MEMOP_CMD_MASK;
     unsigned int start_extent = cmd >> MEMOP_EXTENT_SHIFT;
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 4c5d241..d7cd135 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -804,7 +804,7 @@ void vcpu_reset(struct vcpu *v)
 }
 
 
-long do_vcpu_op(int cmd, int vcpuid, XEN_GUEST_HANDLE(void) arg)
+long do_vcpu_op(int cmd, int vcpuid, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct domain *d = current->domain;
     struct vcpu *v;
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index 7ca6b08..527c5ad 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -238,7 +238,7 @@ void domctl_lock_release(void)
     spin_unlock(&current->domain->hypercall_deadlock_mutex);
 }
 
-long do_domctl(XEN_GUEST_HANDLE(xen_domctl_t) u_domctl)
+long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
     long ret = 0;
     struct xen_domctl curop, *op = &curop;
diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index 53777f8..a80a0d1 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -970,7 +970,7 @@ out:
 }
 
 
-long do_event_channel_op(int cmd, XEN_GUEST_HANDLE(void) arg)
+long do_event_channel_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     long rc;
 
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 9961e83..d780dc6 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -771,7 +771,7 @@ __gnttab_map_grant_ref(
 
 static long
 gnttab_map_grant_ref(
-    XEN_GUEST_HANDLE(gnttab_map_grant_ref_t) uop, unsigned int count)
+    XEN_GUEST_HANDLE_PARAM(gnttab_map_grant_ref_t) uop, unsigned int count)
 {
     int i;
     struct gnttab_map_grant_ref op;
@@ -1040,7 +1040,7 @@ __gnttab_unmap_grant_ref(
 
 static long
 gnttab_unmap_grant_ref(
-    XEN_GUEST_HANDLE(gnttab_unmap_grant_ref_t) uop, unsigned int count)
+    XEN_GUEST_HANDLE_PARAM(gnttab_unmap_grant_ref_t) uop, unsigned int count)
 {
     int i, c, partial_done, done = 0;
     struct gnttab_unmap_grant_ref op;
@@ -1102,7 +1102,7 @@ __gnttab_unmap_and_replace(
 
 static long
 gnttab_unmap_and_replace(
-    XEN_GUEST_HANDLE(gnttab_unmap_and_replace_t) uop, unsigned int count)
+    XEN_GUEST_HANDLE_PARAM(gnttab_unmap_and_replace_t) uop, unsigned int count)
 {
     int i, c, partial_done, done = 0;
     struct gnttab_unmap_and_replace op;
@@ -1254,7 +1254,7 @@ active_alloc_failed:
 
 static long 
 gnttab_setup_table(
-    XEN_GUEST_HANDLE(gnttab_setup_table_t) uop, unsigned int count)
+    XEN_GUEST_HANDLE_PARAM(gnttab_setup_table_t) uop, unsigned int count)
 {
     struct gnttab_setup_table op;
     struct domain *d;
@@ -1348,7 +1348,7 @@ gnttab_setup_table(
 
 static long 
 gnttab_query_size(
-    XEN_GUEST_HANDLE(gnttab_query_size_t) uop, unsigned int count)
+    XEN_GUEST_HANDLE_PARAM(gnttab_query_size_t) uop, unsigned int count)
 {
     struct gnttab_query_size op;
     struct domain *d;
@@ -1485,7 +1485,7 @@ gnttab_prepare_for_transfer(
 
 static long
 gnttab_transfer(
-    XEN_GUEST_HANDLE(gnttab_transfer_t) uop, unsigned int count)
+    XEN_GUEST_HANDLE_PARAM(gnttab_transfer_t) uop, unsigned int count)
 {
     struct domain *d = current->domain;
     struct domain *e;
@@ -2082,7 +2082,7 @@ __gnttab_copy(
 
 static long
 gnttab_copy(
-    XEN_GUEST_HANDLE(gnttab_copy_t) uop, unsigned int count)
+    XEN_GUEST_HANDLE_PARAM(gnttab_copy_t) uop, unsigned int count)
 {
     int i;
     struct gnttab_copy op;
@@ -2101,7 +2101,7 @@ gnttab_copy(
 }
 
 static long
-gnttab_set_version(XEN_GUEST_HANDLE(gnttab_set_version_t uop))
+gnttab_set_version(XEN_GUEST_HANDLE_PARAM(gnttab_set_version_t uop))
 {
     gnttab_set_version_t op;
     struct domain *d = current->domain;
@@ -2220,7 +2220,7 @@ out:
 }
 
 static long
-gnttab_get_status_frames(XEN_GUEST_HANDLE(gnttab_get_status_frames_t) uop,
+gnttab_get_status_frames(XEN_GUEST_HANDLE_PARAM(gnttab_get_status_frames_t) uop,
                          int count)
 {
     gnttab_get_status_frames_t op;
@@ -2289,7 +2289,7 @@ out1:
 }
 
 static long
-gnttab_get_version(XEN_GUEST_HANDLE(gnttab_get_version_t uop))
+gnttab_get_version(XEN_GUEST_HANDLE_PARAM(gnttab_get_version_t uop))
 {
     gnttab_get_version_t op;
     struct domain *d;
@@ -2368,7 +2368,7 @@ out:
 }
 
 static long
-gnttab_swap_grant_ref(XEN_GUEST_HANDLE(gnttab_swap_grant_ref_t uop),
+gnttab_swap_grant_ref(XEN_GUEST_HANDLE_PARAM(gnttab_swap_grant_ref_t uop),
                       unsigned int count)
 {
     int i;
@@ -2389,7 +2389,7 @@ gnttab_swap_grant_ref(XEN_GUEST_HANDLE(gnttab_swap_grant_ref_t uop),
 
 long
 do_grant_table_op(
-    unsigned int cmd, XEN_GUEST_HANDLE(void) uop, unsigned int count)
+    unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) uop, unsigned int count)
 {
     long rc;
     
@@ -2401,7 +2401,7 @@ do_grant_table_op(
     {
     case GNTTABOP_map_grant_ref:
     {
-        XEN_GUEST_HANDLE(gnttab_map_grant_ref_t) map =
+        XEN_GUEST_HANDLE_PARAM(gnttab_map_grant_ref_t) map =
             guest_handle_cast(uop, gnttab_map_grant_ref_t);
         if ( unlikely(!guest_handle_okay(map, count)) )
             goto out;
@@ -2415,7 +2415,7 @@ do_grant_table_op(
     }
     case GNTTABOP_unmap_grant_ref:
     {
-        XEN_GUEST_HANDLE(gnttab_unmap_grant_ref_t) unmap =
+        XEN_GUEST_HANDLE_PARAM(gnttab_unmap_grant_ref_t) unmap =
             guest_handle_cast(uop, gnttab_unmap_grant_ref_t);
         if ( unlikely(!guest_handle_okay(unmap, count)) )
             goto out;
@@ -2429,7 +2429,7 @@ do_grant_table_op(
     }
     case GNTTABOP_unmap_and_replace:
     {
-        XEN_GUEST_HANDLE(gnttab_unmap_and_replace_t) unmap =
+        XEN_GUEST_HANDLE_PARAM(gnttab_unmap_and_replace_t) unmap =
             guest_handle_cast(uop, gnttab_unmap_and_replace_t);
         if ( unlikely(!guest_handle_okay(unmap, count)) )
             goto out;
@@ -2453,7 +2453,7 @@ do_grant_table_op(
     }
     case GNTTABOP_transfer:
     {
-        XEN_GUEST_HANDLE(gnttab_transfer_t) transfer =
+        XEN_GUEST_HANDLE_PARAM(gnttab_transfer_t) transfer =
             guest_handle_cast(uop, gnttab_transfer_t);
         if ( unlikely(!guest_handle_okay(transfer, count)) )
             goto out;
@@ -2467,7 +2467,7 @@ do_grant_table_op(
     }
     case GNTTABOP_copy:
     {
-        XEN_GUEST_HANDLE(gnttab_copy_t) copy =
+        XEN_GUEST_HANDLE_PARAM(gnttab_copy_t) copy =
             guest_handle_cast(uop, gnttab_copy_t);
         if ( unlikely(!guest_handle_okay(copy, count)) )
             goto out;
@@ -2504,7 +2504,7 @@ do_grant_table_op(
     }
     case GNTTABOP_swap_grant_ref:
     {
-        XEN_GUEST_HANDLE(gnttab_swap_grant_ref_t) swap =
+        XEN_GUEST_HANDLE_PARAM(gnttab_swap_grant_ref_t) swap =
             guest_handle_cast(uop, gnttab_swap_grant_ref_t);
         if ( unlikely(!guest_handle_okay(swap, count)) )
             goto out;
diff --git a/xen/common/kernel.c b/xen/common/kernel.c
index c915bbc..55caff6 100644
--- a/xen/common/kernel.c
+++ b/xen/common/kernel.c
@@ -204,7 +204,7 @@ void __init do_initcalls(void)
  * Simple hypercalls.
  */
 
-DO(xen_version)(int cmd, XEN_GUEST_HANDLE(void) arg)
+DO(xen_version)(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     switch ( cmd )
     {
@@ -332,7 +332,7 @@ DO(xen_version)(int cmd, XEN_GUEST_HANDLE(void) arg)
     return -ENOSYS;
 }
 
-DO(nmi_op)(unsigned int cmd, XEN_GUEST_HANDLE(void) arg)
+DO(nmi_op)(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct xennmi_callback cb;
     long rc = 0;
diff --git a/xen/common/kexec.c b/xen/common/kexec.c
index 09a5624..03389eb 100644
--- a/xen/common/kexec.c
+++ b/xen/common/kexec.c
@@ -613,7 +613,7 @@ static int kexec_get_range_internal(xen_kexec_range_t *range)
     return ret;
 }
 
-static int kexec_get_range(XEN_GUEST_HANDLE(void) uarg)
+static int kexec_get_range(XEN_GUEST_HANDLE_PARAM(void) uarg)
 {
     xen_kexec_range_t range;
     int ret = -EINVAL;
@@ -629,7 +629,7 @@ static int kexec_get_range(XEN_GUEST_HANDLE(void) uarg)
     return ret;
 }
 
-static int kexec_get_range_compat(XEN_GUEST_HANDLE(void) uarg)
+static int kexec_get_range_compat(XEN_GUEST_HANDLE_PARAM(void) uarg)
 {
 #ifdef CONFIG_COMPAT
     xen_kexec_range_t range;
@@ -777,7 +777,7 @@ static int kexec_load_unload_internal(unsigned long op, xen_kexec_load_t *load)
     return ret;
 }
 
-static int kexec_load_unload(unsigned long op, XEN_GUEST_HANDLE(void) uarg)
+static int kexec_load_unload(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) uarg)
 {
     xen_kexec_load_t load;
 
@@ -788,7 +788,7 @@ static int kexec_load_unload(unsigned long op, XEN_GUEST_HANDLE(void) uarg)
 }
 
 static int kexec_load_unload_compat(unsigned long op,
-                                    XEN_GUEST_HANDLE(void) uarg)
+                                    XEN_GUEST_HANDLE_PARAM(void) uarg)
 {
 #ifdef CONFIG_COMPAT
     compat_kexec_load_t compat_load;
@@ -813,7 +813,7 @@ static int kexec_load_unload_compat(unsigned long op,
 #endif /* CONFIG_COMPAT */
 }
 
-static int kexec_exec(XEN_GUEST_HANDLE(void) uarg)
+static int kexec_exec(XEN_GUEST_HANDLE_PARAM(void) uarg)
 {
     xen_kexec_exec_t exec;
     xen_kexec_image_t *image;
@@ -845,7 +845,7 @@ static int kexec_exec(XEN_GUEST_HANDLE(void) uarg)
     return -EINVAL; /* never reached */
 }
 
-int do_kexec_op_internal(unsigned long op, XEN_GUEST_HANDLE(void) uarg,
+int do_kexec_op_internal(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) uarg,
                            int compat)
 {
     unsigned long flags;
@@ -886,13 +886,13 @@ int do_kexec_op_internal(unsigned long op, XEN_GUEST_HANDLE(void) uarg,
     return ret;
 }
 
-long do_kexec_op(unsigned long op, XEN_GUEST_HANDLE(void) uarg)
+long do_kexec_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) uarg)
 {
     return do_kexec_op_internal(op, uarg, 0);
 }
 
 #ifdef CONFIG_COMPAT
-int compat_kexec_op(unsigned long op, XEN_GUEST_HANDLE(void) uarg)
+int compat_kexec_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) uarg)
 {
     return do_kexec_op_internal(op, uarg, 1);
 }
diff --git a/xen/common/memory.c b/xen/common/memory.c
index 7e58cc4..a683954 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -277,7 +277,7 @@ static void decrease_reservation(struct memop_args *a)
     a->nr_done = i;
 }
 
-static long memory_exchange(XEN_GUEST_HANDLE(xen_memory_exchange_t) arg)
+static long memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t) arg)
 {
     struct xen_memory_exchange exch;
     PAGE_LIST_HEAD(in_chunk_list);
@@ -530,7 +530,7 @@ static long memory_exchange(XEN_GUEST_HANDLE(xen_memory_exchange_t) arg)
     return rc;
 }
 
-long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE(void) arg)
+long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct domain *d;
     int rc, op;
diff --git a/xen/common/multicall.c b/xen/common/multicall.c
index 6c1a9d7..5de5f8d 100644
--- a/xen/common/multicall.c
+++ b/xen/common/multicall.c
@@ -21,7 +21,7 @@ typedef long ret_t;
 
 ret_t
 do_multicall(
-    XEN_GUEST_HANDLE(multicall_entry_t) call_list, unsigned int nr_calls)
+    XEN_GUEST_HANDLE_PARAM(multicall_entry_t) call_list, unsigned int nr_calls)
 {
     struct mc_state *mcs = &current->mc_state;
     unsigned int     i;
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index 0854f55..c26eac4 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -836,7 +836,7 @@ typedef long ret_t;
 
 #endif /* !COMPAT */
 
-ret_t do_sched_op(int cmd, XEN_GUEST_HANDLE(void) arg)
+ret_t do_sched_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     ret_t ret = 0;
 
diff --git a/xen/common/sysctl.c b/xen/common/sysctl.c
index ea68278..47142f4 100644
--- a/xen/common/sysctl.c
+++ b/xen/common/sysctl.c
@@ -27,7 +27,7 @@
 #include <xsm/xsm.h>
 #include <xen/pmstat.h>
 
-long do_sysctl(XEN_GUEST_HANDLE(xen_sysctl_t) u_sysctl)
+long do_sysctl(XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
 {
     long ret = 0;
     struct xen_sysctl curop, *op = &curop;
diff --git a/xen/common/xenoprof.c b/xen/common/xenoprof.c
index e571fea..c001b38 100644
--- a/xen/common/xenoprof.c
+++ b/xen/common/xenoprof.c
@@ -404,7 +404,7 @@ static int add_active_list(domid_t domid)
     return 0;
 }
 
-static int add_passive_list(XEN_GUEST_HANDLE(void) arg)
+static int add_passive_list(XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct xenoprof_passive passive;
     struct domain *d;
@@ -585,7 +585,7 @@ void xenoprof_log_event(struct vcpu *vcpu, const struct cpu_user_regs *regs,
 
 
 
-static int xenoprof_op_init(XEN_GUEST_HANDLE(void) arg)
+static int xenoprof_op_init(XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct domain *d = current->domain;
     struct xenoprof_init xenoprof_init;
@@ -609,7 +609,7 @@ static int xenoprof_op_init(XEN_GUEST_HANDLE(void) arg)
 
 #endif /* !COMPAT */
 
-static int xenoprof_op_get_buffer(XEN_GUEST_HANDLE(void) arg)
+static int xenoprof_op_get_buffer(XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct xenoprof_get_buffer xenoprof_get_buffer;
     struct domain *d = current->domain;
@@ -660,7 +660,7 @@ static int xenoprof_op_get_buffer(XEN_GUEST_HANDLE(void) arg)
                       || (op == XENOPROF_disable_virq)  \
                       || (op == XENOPROF_get_buffer))
  
-int do_xenoprof_op(int op, XEN_GUEST_HANDLE(void) arg)
+int do_xenoprof_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     int ret = 0;
     
diff --git a/xen/drivers/acpi/pmstat.c b/xen/drivers/acpi/pmstat.c
index 698711e..f8d62f2 100644
--- a/xen/drivers/acpi/pmstat.c
+++ b/xen/drivers/acpi/pmstat.c
@@ -515,7 +515,7 @@ int do_pm_op(struct xen_sysctl_pm_op *op)
     return ret;
 }
 
-int acpi_set_pdc_bits(u32 acpi_id, XEN_GUEST_HANDLE(uint32) pdc)
+int acpi_set_pdc_bits(u32 acpi_id, XEN_GUEST_HANDLE_PARAM(uint32) pdc)
 {
     u32 bits[3];
     int ret;
diff --git a/xen/drivers/char/console.c b/xen/drivers/char/console.c
index e10bed5..b0f2334 100644
--- a/xen/drivers/char/console.c
+++ b/xen/drivers/char/console.c
@@ -182,7 +182,7 @@ static void putchar_console_ring(int c)
 
 long read_console_ring(struct xen_sysctl_readconsole *op)
 {
-    XEN_GUEST_HANDLE(char) str;
+    XEN_GUEST_HANDLE_PARAM(char) str;
     uint32_t idx, len, max, sofar, c;
 
     str   = guest_handle_cast(op->buffer, char),
@@ -320,7 +320,7 @@ static void notify_dom0_con_ring(unsigned long unused)
 static DECLARE_SOFTIRQ_TASKLET(notify_dom0_con_ring_tasklet,
                                notify_dom0_con_ring, 0);
 
-static long guest_console_write(XEN_GUEST_HANDLE(char) buffer, int count)
+static long guest_console_write(XEN_GUEST_HANDLE_PARAM(char) buffer, int count)
 {
     char kbuf[128], *kptr;
     int kcount;
@@ -358,7 +358,7 @@ static long guest_console_write(XEN_GUEST_HANDLE(char) buffer, int count)
     return 0;
 }
 
-long do_console_io(int cmd, int count, XEN_GUEST_HANDLE(char) buffer)
+long do_console_io(int cmd, int count, XEN_GUEST_HANDLE_PARAM(char) buffer)
 {
     long rc;
     unsigned int idx, len;
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 64f5fd1..396461f 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -518,7 +518,7 @@ void iommu_crash_shutdown(void)
 
 int iommu_do_domctl(
     struct xen_domctl *domctl,
-    XEN_GUEST_HANDLE(xen_domctl_t) u_domctl)
+    XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
     struct domain *d;
     u16 seg;
diff --git a/xen/include/asm-arm/guest_access.h b/xen/include/asm-arm/guest_access.h
index 7a955cb..bf5005b 100644
--- a/xen/include/asm-arm/guest_access.h
+++ b/xen/include/asm-arm/guest_access.h
@@ -30,7 +30,7 @@ unsigned long raw_clear_guest(void *to, unsigned len);
 /* Cast a guest handle to the specified type of handle. */
 #define guest_handle_cast(hnd, type) ({         \
     type *_x = (hnd).p;                         \
-    (XEN_GUEST_HANDLE(type)) { {_x } };            \
+    (XEN_GUEST_HANDLE_PARAM(type)) { _x };            \
 })
 
 #define guest_handle_from_ptr(ptr, type)        \
diff --git a/xen/include/asm-arm/hypercall.h b/xen/include/asm-arm/hypercall.h
index 454f02e..090e620 100644
--- a/xen/include/asm-arm/hypercall.h
+++ b/xen/include/asm-arm/hypercall.h
@@ -2,7 +2,7 @@
 #define __ASM_ARM_HYPERCALL_H__
 
 #include <public/domctl.h> /* for arch_do_domctl */
-int do_physdev_op(int cmd, XEN_GUEST_HANDLE(void) arg);
+int do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg);
 
 #endif /* __ASM_ARM_HYPERCALL_H__ */
 /*
diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
index b37bd35..8bf45ba 100644
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -267,7 +267,7 @@ static inline int relinquish_shared_pages(struct domain *d)
 
 
 /* Arch-specific portion of memory_op hypercall. */
-long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg);
+long arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg);
 
 int steal_page(
     struct domain *d, struct page_info *page, unsigned int memflags);
diff --git a/xen/include/asm-x86/hap.h b/xen/include/asm-x86/hap.h
index a2532a4..916a35b 100644
--- a/xen/include/asm-x86/hap.h
+++ b/xen/include/asm-x86/hap.h
@@ -51,7 +51,7 @@ hap_unmap_domain_page(void *p)
 /************************************************/
 void  hap_domain_init(struct domain *d);
 int   hap_domctl(struct domain *d, xen_domctl_shadow_op_t *sc,
-                 XEN_GUEST_HANDLE(void) u_domctl);
+                 XEN_GUEST_HANDLE_PARAM(void) u_domctl);
 int   hap_enable(struct domain *d, u32 mode);
 void  hap_final_teardown(struct domain *d);
 void  hap_teardown(struct domain *d);
diff --git a/xen/include/asm-x86/hypercall.h b/xen/include/asm-x86/hypercall.h
index 9e136c3..55b5ca2 100644
--- a/xen/include/asm-x86/hypercall.h
+++ b/xen/include/asm-x86/hypercall.h
@@ -18,22 +18,22 @@
 
 extern long
 do_event_channel_op_compat(
-    XEN_GUEST_HANDLE(evtchn_op_t) uop);
+    XEN_GUEST_HANDLE_PARAM(evtchn_op_t) uop);
 
 extern long
 do_set_trap_table(
-    XEN_GUEST_HANDLE(const_trap_info_t) traps);
+    XEN_GUEST_HANDLE_PARAM(const_trap_info_t) traps);
 
 extern int
 do_mmu_update(
-    XEN_GUEST_HANDLE(mmu_update_t) ureqs,
+    XEN_GUEST_HANDLE_PARAM(mmu_update_t) ureqs,
     unsigned int count,
-    XEN_GUEST_HANDLE(uint) pdone,
+    XEN_GUEST_HANDLE_PARAM(uint) pdone,
     unsigned int foreigndom);
 
 extern long
 do_set_gdt(
-    XEN_GUEST_HANDLE(ulong) frame_list,
+    XEN_GUEST_HANDLE_PARAM(ulong) frame_list,
     unsigned int entries);
 
 extern long
@@ -60,7 +60,7 @@ do_update_descriptor(
     u64 desc);
 
 extern long
-do_mca(XEN_GUEST_HANDLE(xen_mc_t) u_xen_mc);
+do_mca(XEN_GUEST_HANDLE_PARAM(xen_mc_t) u_xen_mc);
 
 extern int
 do_update_va_mapping(
@@ -70,7 +70,7 @@ do_update_va_mapping(
 
 extern long
 do_physdev_op(
-    int cmd, XEN_GUEST_HANDLE(void) arg);
+    int cmd, XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern int
 do_update_va_mapping_otherdomain(
@@ -81,9 +81,9 @@ do_update_va_mapping_otherdomain(
 
 extern int
 do_mmuext_op(
-    XEN_GUEST_HANDLE(mmuext_op_t) uops,
+    XEN_GUEST_HANDLE_PARAM(mmuext_op_t) uops,
     unsigned int count,
-    XEN_GUEST_HANDLE(uint) pdone,
+    XEN_GUEST_HANDLE_PARAM(uint) pdone,
     unsigned int foreigndom);
 
 extern unsigned long
@@ -92,7 +92,7 @@ do_iret(
 
 extern int
 do_kexec(
-    unsigned long op, unsigned arg1, XEN_GUEST_HANDLE(void) uarg);
+    unsigned long op, unsigned arg1, XEN_GUEST_HANDLE_PARAM(void) uarg);
 
 #ifdef __x86_64__
 
@@ -110,11 +110,11 @@ do_set_segment_base(
 extern int
 compat_physdev_op(
     int cmd,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern int
 arch_compat_vcpu_op(
-    int cmd, struct vcpu *v, XEN_GUEST_HANDLE(void) arg);
+    int cmd, struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg);
 
 #else
 
diff --git a/xen/include/asm-x86/mem_event.h b/xen/include/asm-x86/mem_event.h
index 23d71c1..e17f36b 100644
--- a/xen/include/asm-x86/mem_event.h
+++ b/xen/include/asm-x86/mem_event.h
@@ -65,7 +65,7 @@ int mem_event_get_response(struct domain *d, struct mem_event_domain *med,
 struct domain *get_mem_event_op_target(uint32_t domain, int *rc);
 int do_mem_event_op(int op, uint32_t domain, void *arg);
 int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
-                     XEN_GUEST_HANDLE(void) u_domctl);
+                     XEN_GUEST_HANDLE_PARAM(void) u_domctl);
 
 #endif /* __MEM_EVENT_H__ */
 
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index 4cba276..6373b3b 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -604,10 +604,10 @@ void *do_page_walk(struct vcpu *v, unsigned long addr);
 int __sync_local_execstate(void);
 
 /* Arch-specific portion of memory_op hypercall. */
-long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg);
-long subarch_memory_op(int op, XEN_GUEST_HANDLE(void) arg);
-int compat_arch_memory_op(int op, XEN_GUEST_HANDLE(void));
-int compat_subarch_memory_op(int op, XEN_GUEST_HANDLE(void));
+long arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg);
+long subarch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg);
+int compat_arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void));
+int compat_subarch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void));
 
 int steal_page(
     struct domain *d, struct page_info *page, unsigned int memflags);
diff --git a/xen/include/asm-x86/paging.h b/xen/include/asm-x86/paging.h
index c432a97..1cd0e3f 100644
--- a/xen/include/asm-x86/paging.h
+++ b/xen/include/asm-x86/paging.h
@@ -215,7 +215,7 @@ int paging_domain_init(struct domain *d, unsigned int domcr_flags);
  * and disable ephemeral shadow modes (test mode and log-dirty mode) and
  * manipulate the log-dirty bitmap. */
 int paging_domctl(struct domain *d, xen_domctl_shadow_op_t *sc,
-                  XEN_GUEST_HANDLE(void) u_domctl);
+                  XEN_GUEST_HANDLE_PARAM(void) u_domctl);
 
 /* Call when destroying a domain */
 void paging_teardown(struct domain *d);
diff --git a/xen/include/asm-x86/processor.h b/xen/include/asm-x86/processor.h
index 7164a50..efdbddd 100644
--- a/xen/include/asm-x86/processor.h
+++ b/xen/include/asm-x86/processor.h
@@ -598,7 +598,7 @@ int rdmsr_hypervisor_regs(uint32_t idx, uint64_t *val);
 int wrmsr_hypervisor_regs(uint32_t idx, uint64_t val);
 
 void microcode_set_module(unsigned int);
-int microcode_update(XEN_GUEST_HANDLE(const_void), unsigned long len);
+int microcode_update(XEN_GUEST_HANDLE_PARAM(const_void), unsigned long len);
 int microcode_resume_cpu(int cpu);
 
 unsigned long *get_x86_gpr(struct cpu_user_regs *regs, unsigned int modrm_reg);
diff --git a/xen/include/asm-x86/shadow.h b/xen/include/asm-x86/shadow.h
index 88a8cd2..2eb6efc 100644
--- a/xen/include/asm-x86/shadow.h
+++ b/xen/include/asm-x86/shadow.h
@@ -73,7 +73,7 @@ int shadow_track_dirty_vram(struct domain *d,
  * manipulate the log-dirty bitmap. */
 int shadow_domctl(struct domain *d, 
                   xen_domctl_shadow_op_t *sc,
-                  XEN_GUEST_HANDLE(void) u_domctl);
+                  XEN_GUEST_HANDLE_PARAM(void) u_domctl);
 
 /* Call when destroying a domain */
 void shadow_teardown(struct domain *d);
diff --git a/xen/include/asm-x86/xenoprof.h b/xen/include/asm-x86/xenoprof.h
index c03f8c8..3f5ea15 100644
--- a/xen/include/asm-x86/xenoprof.h
+++ b/xen/include/asm-x86/xenoprof.h
@@ -40,9 +40,9 @@ int xenoprof_arch_init(int *num_events, char *cpu_type);
 #define xenoprof_arch_disable_virq()            nmi_disable_virq()
 #define xenoprof_arch_release_counters()        nmi_release_counters()
 
-int xenoprof_arch_counter(XEN_GUEST_HANDLE(void) arg);
-int compat_oprof_arch_counter(XEN_GUEST_HANDLE(void) arg);
-int xenoprof_arch_ibs_counter(XEN_GUEST_HANDLE(void) arg);
+int xenoprof_arch_counter(XEN_GUEST_HANDLE_PARAM(void) arg);
+int compat_oprof_arch_counter(XEN_GUEST_HANDLE_PARAM(void) arg);
+int xenoprof_arch_ibs_counter(XEN_GUEST_HANDLE_PARAM(void) arg);
 
 struct vcpu;
 struct cpu_user_regs;
diff --git a/xen/include/xen/acpi.h b/xen/include/xen/acpi.h
index d7e2f94..8f3cdca 100644
--- a/xen/include/xen/acpi.h
+++ b/xen/include/xen/acpi.h
@@ -145,8 +145,8 @@ static inline unsigned int acpi_get_cstate_limit(void) { return 0; }
 static inline void acpi_set_cstate_limit(unsigned int new_limit) { return; }
 #endif
 
-#ifdef XEN_GUEST_HANDLE
-int acpi_set_pdc_bits(u32 acpi_id, XEN_GUEST_HANDLE(uint32));
+#ifdef XEN_GUEST_HANDLE_PARAM
+int acpi_set_pdc_bits(u32 acpi_id, XEN_GUEST_HANDLE_PARAM(uint32));
 #endif
 int arch_acpi_set_pdc_bits(u32 acpi_id, u32 *, u32 mask);
 
diff --git a/xen/include/xen/hypercall.h b/xen/include/xen/hypercall.h
index 73b1598..e335037 100644
--- a/xen/include/xen/hypercall.h
+++ b/xen/include/xen/hypercall.h
@@ -29,29 +29,29 @@ do_sched_op_compat(
 extern long
 do_sched_op(
     int cmd,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern long
 do_domctl(
-    XEN_GUEST_HANDLE(xen_domctl_t) u_domctl);
+    XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl);
 
 extern long
 arch_do_domctl(
     struct xen_domctl *domctl,
-    XEN_GUEST_HANDLE(xen_domctl_t) u_domctl);
+    XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl);
 
 extern long
 do_sysctl(
-    XEN_GUEST_HANDLE(xen_sysctl_t) u_sysctl);
+    XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl);
 
 extern long
 arch_do_sysctl(
     struct xen_sysctl *sysctl,
-    XEN_GUEST_HANDLE(xen_sysctl_t) u_sysctl);
+    XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl);
 
 extern long
 do_platform_op(
-    XEN_GUEST_HANDLE(xen_platform_op_t) u_xenpf_op);
+    XEN_GUEST_HANDLE_PARAM(xen_platform_op_t) u_xenpf_op);
 
 /*
  * To allow safe resume of do_memory_op() after preemption, we need to know
@@ -64,11 +64,11 @@ do_platform_op(
 extern long
 do_memory_op(
     unsigned long cmd,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern long
 do_multicall(
-    XEN_GUEST_HANDLE(multicall_entry_t) call_list,
+    XEN_GUEST_HANDLE_PARAM(multicall_entry_t) call_list,
     unsigned int nr_calls);
 
 extern long
@@ -77,23 +77,23 @@ do_set_timer_op(
 
 extern long
 do_event_channel_op(
-    int cmd, XEN_GUEST_HANDLE(void) arg);
+    int cmd, XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern long
 do_xen_version(
     int cmd,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern long
 do_console_io(
     int cmd,
     int count,
-    XEN_GUEST_HANDLE(char) buffer);
+    XEN_GUEST_HANDLE_PARAM(char) buffer);
 
 extern long
 do_grant_table_op(
     unsigned int cmd,
-    XEN_GUEST_HANDLE(void) uop,
+    XEN_GUEST_HANDLE_PARAM(void) uop,
     unsigned int count);
 
 extern long
@@ -105,72 +105,72 @@ extern long
 do_vcpu_op(
     int cmd,
     int vcpuid,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 struct vcpu;
 extern long
 arch_do_vcpu_op(int cmd,
     struct vcpu *v,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern long
 do_nmi_op(
     unsigned int cmd,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern long
 do_hvm_op(
     unsigned long op,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern long
 do_kexec_op(
     unsigned long op,
     int arg1,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern long
 do_xsm_op(
-    XEN_GUEST_HANDLE(xsm_op_t) u_xsm_op);
+    XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_xsm_op);
 
 extern long
 do_tmem_op(
-    XEN_GUEST_HANDLE(tmem_op_t) uops);
+    XEN_GUEST_HANDLE_PARAM(tmem_op_t) uops);
 
 extern int
-do_xenoprof_op(int op, XEN_GUEST_HANDLE(void) arg);
+do_xenoprof_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg);
 
 #ifdef CONFIG_COMPAT
 
 extern int
 compat_memory_op(
     unsigned int cmd,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern int
 compat_grant_table_op(
     unsigned int cmd,
-    XEN_GUEST_HANDLE(void) uop,
+    XEN_GUEST_HANDLE_PARAM(void) uop,
     unsigned int count);
 
 extern int
 compat_vcpu_op(
     int cmd,
     int vcpuid,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern int
-compat_xenoprof_op(int op, XEN_GUEST_HANDLE(void) arg);
+compat_xenoprof_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern int
 compat_xen_version(
     int cmd,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern int
 compat_sched_op(
     int cmd,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern int
 compat_set_timer_op(
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 6f7fbf7..bd19e23 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -155,7 +155,7 @@ void iommu_crash_shutdown(void);
 void iommu_set_dom0_mapping(struct domain *d);
 void iommu_share_p2m_table(struct domain *d);
 
-int iommu_do_domctl(struct xen_domctl *, XEN_GUEST_HANDLE(xen_domctl_t));
+int iommu_do_domctl(struct xen_domctl *, XEN_GUEST_HANDLE_PARAM(xen_domctl_t));
 
 void iommu_iotlb_flush(struct domain *d, unsigned long gfn, unsigned int page_count);
 void iommu_iotlb_flush_all(struct domain *d);
diff --git a/xen/include/xen/tmem_xen.h b/xen/include/xen/tmem_xen.h
index 4a35760..2e7199a 100644
--- a/xen/include/xen/tmem_xen.h
+++ b/xen/include/xen/tmem_xen.h
@@ -448,7 +448,7 @@ static inline void tmh_tze_copy_from_pfp(void *tva, pfp_t *pfp, pagesize_t len)
 typedef XEN_GUEST_HANDLE(void) cli_mfn_t;
 typedef XEN_GUEST_HANDLE(char) cli_va_t;
 */
-typedef XEN_GUEST_HANDLE(tmem_op_t) tmem_cli_op_t;
+typedef XEN_GUEST_HANDLE_PARAM(tmem_op_t) tmem_cli_op_t;
 
 static inline int tmh_get_tmemop_from_client(tmem_op_t *op, tmem_cli_op_t uops)
 {
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index bef79df..3e4a47f 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -139,7 +139,7 @@ struct xsm_operations {
     int (*cpupool_op)(void);
     int (*sched_op)(void);
 
-    long (*__do_xsm_op) (XEN_GUEST_HANDLE(xsm_op_t) op);
+    long (*__do_xsm_op) (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op);
 
 #ifdef CONFIG_X86
     int (*shadow_control) (struct domain *d, uint32_t op);
@@ -585,7 +585,7 @@ static inline int xsm_sched_op(void)
     return xsm_call(sched_op());
 }
 
-static inline long __do_xsm_op (XEN_GUEST_HANDLE(xsm_op_t) op)
+static inline long __do_xsm_op (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op)
 {
 #ifdef XSM_ENABLE
     return xsm_ops->__do_xsm_op(op);
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index 7027ee7..5ef6529 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -365,7 +365,7 @@ static int dummy_sched_op (void)
     return 0;
 }
 
-static long dummy___do_xsm_op(XEN_GUEST_HANDLE(xsm_op_t) op)
+static long dummy___do_xsm_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) op)
 {
     return -ENOSYS;
 }
diff --git a/xen/xsm/flask/flask_op.c b/xen/xsm/flask/flask_op.c
index bd4db37..23e7d34 100644
--- a/xen/xsm/flask/flask_op.c
+++ b/xen/xsm/flask/flask_op.c
@@ -71,7 +71,7 @@ static int domain_has_security(struct domain *d, u32 perms)
                         perms, NULL);
 }
 
-static int flask_copyin_string(XEN_GUEST_HANDLE(char) u_buf, char **buf, uint32_t size)
+static int flask_copyin_string(XEN_GUEST_HANDLE_PARAM(char) u_buf, char **buf, uint32_t size)
 {
     char *tmp = xmalloc_bytes(size + 1);
     if ( !tmp )
@@ -573,7 +573,7 @@ static int flask_get_peer_sid(struct xen_flask_peersid *arg)
     return rv;
 }
 
-long do_flask_op(XEN_GUEST_HANDLE(xsm_op_t) u_flask_op)
+long do_flask_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_flask_op)
 {
     xen_flask_op_t op;
     int rv;
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 23b84f3..0fc299c 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -1553,7 +1553,7 @@ static int flask_vcpuextstate (struct domain *d, uint32_t cmd)
 }
 #endif
 
-long do_flask_op(XEN_GUEST_HANDLE(xsm_op_t) u_flask_op);
+long do_flask_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_flask_op);
 
 static struct xsm_operations flask_ops = {
     .security_domaininfo = flask_security_domaininfo,
diff --git a/xen/xsm/xsm_core.c b/xen/xsm/xsm_core.c
index 96c8669..46287cb 100644
--- a/xen/xsm/xsm_core.c
+++ b/xen/xsm/xsm_core.c
@@ -111,7 +111,7 @@ int unregister_xsm(struct xsm_operations *ops)
 
 #endif
 
-long do_xsm_op (XEN_GUEST_HANDLE(xsm_op_t) op)
+long do_xsm_op (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op)
 {
     return __do_xsm_op(op);
 }
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 12:10:51 2012
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: xen-devel@lists.xensource.com
Date: Fri, 10 Aug 2012 13:10:12 +0100
Message-ID: <1344600612-10815-5-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208101222370.21096@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208101222370.21096@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: Tim.Deegan@citrix.com, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH v2 5/5] xen: replace XEN_GUEST_HANDLE with
	XEN_GUEST_HANDLE_PARAM when appropriate

Note: these changes don't make any difference on x86 and ia64.


Replace XEN_GUEST_HANDLE with XEN_GUEST_HANDLE_PARAM when it is used as
a hypercall argument.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/arch/arm/domain.c              |    2 +-
 xen/arch/arm/domctl.c              |    2 +-
 xen/arch/arm/hvm.c                 |    2 +-
 xen/arch/arm/mm.c                  |    2 +-
 xen/arch/arm/physdev.c             |    2 +-
 xen/arch/arm/sysctl.c              |    2 +-
 xen/arch/x86/cpu/mcheck/mce.c      |    2 +-
 xen/arch/x86/domain.c              |    2 +-
 xen/arch/x86/domctl.c              |    2 +-
 xen/arch/x86/efi/runtime.c         |    2 +-
 xen/arch/x86/hvm/hvm.c             |   26 +++++++++---------
 xen/arch/x86/microcode.c           |    2 +-
 xen/arch/x86/mm.c                  |   14 +++++-----
 xen/arch/x86/mm/hap/hap.c          |    2 +-
 xen/arch/x86/mm/mem_event.c        |    2 +-
 xen/arch/x86/mm/paging.c           |    2 +-
 xen/arch/x86/mm/shadow/common.c    |    2 +-
 xen/arch/x86/physdev.c             |    2 +-
 xen/arch/x86/platform_hypercall.c  |    2 +-
 xen/arch/x86/sysctl.c              |    2 +-
 xen/arch/x86/traps.c               |    2 +-
 xen/arch/x86/x86_32/mm.c           |    2 +-
 xen/arch/x86/x86_32/traps.c        |    2 +-
 xen/arch/x86/x86_64/compat/mm.c    |    8 +++---
 xen/arch/x86/x86_64/domain.c       |    2 +-
 xen/arch/x86/x86_64/mm.c           |    2 +-
 xen/arch/x86/x86_64/traps.c        |    2 +-
 xen/common/compat/domain.c         |    2 +-
 xen/common/compat/grant_table.c    |    2 +-
 xen/common/compat/memory.c         |    2 +-
 xen/common/domain.c                |    2 +-
 xen/common/domctl.c                |    2 +-
 xen/common/event_channel.c         |    2 +-
 xen/common/grant_table.c           |   36 ++++++++++++------------
 xen/common/kernel.c                |    4 +-
 xen/common/kexec.c                 |   16 +++++-----
 xen/common/memory.c                |    4 +-
 xen/common/multicall.c             |    2 +-
 xen/common/schedule.c              |    2 +-
 xen/common/sysctl.c                |    2 +-
 xen/common/xenoprof.c              |    8 +++---
 xen/drivers/acpi/pmstat.c          |    2 +-
 xen/drivers/char/console.c         |    6 ++--
 xen/drivers/passthrough/iommu.c    |    2 +-
 xen/include/asm-arm/guest_access.h |    2 +-
 xen/include/asm-arm/hypercall.h    |    2 +-
 xen/include/asm-arm/mm.h           |    2 +-
 xen/include/asm-x86/hap.h          |    2 +-
 xen/include/asm-x86/hypercall.h    |   24 ++++++++--------
 xen/include/asm-x86/mem_event.h    |    2 +-
 xen/include/asm-x86/mm.h           |    8 +++---
 xen/include/asm-x86/paging.h       |    2 +-
 xen/include/asm-x86/processor.h    |    2 +-
 xen/include/asm-x86/shadow.h       |    2 +-
 xen/include/asm-x86/xenoprof.h     |    6 ++--
 xen/include/xen/acpi.h             |    4 +-
 xen/include/xen/hypercall.h        |   52 ++++++++++++++++++------------------
 xen/include/xen/iommu.h            |    2 +-
 xen/include/xen/tmem_xen.h         |    2 +-
 xen/include/xsm/xsm.h              |    4 +-
 xen/xsm/dummy.c                    |    2 +-
 xen/xsm/flask/flask_op.c           |    4 +-
 xen/xsm/flask/hooks.c              |    2 +-
 xen/xsm/xsm_core.c                 |    2 +-
 64 files changed, 160 insertions(+), 160 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index ee58d68..07b50e2 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -515,7 +515,7 @@ void arch_dump_domain_info(struct domain *d)
 {
 }
 
-long arch_do_vcpu_op(int cmd, struct vcpu *v, XEN_GUEST_HANDLE(void) arg)
+long arch_do_vcpu_op(int cmd, struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     return -ENOSYS;
 }
diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
index 1a5f79f..cf16791 100644
--- a/xen/arch/arm/domctl.c
+++ b/xen/arch/arm/domctl.c
@@ -11,7 +11,7 @@
 #include <public/domctl.h>
 
 long arch_do_domctl(struct xen_domctl *domctl,
-                    XEN_GUEST_HANDLE(xen_domctl_t) u_domctl)
+                    XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
     return -ENOSYS;
 }
diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
index c11378d..40f519e 100644
--- a/xen/arch/arm/hvm.c
+++ b/xen/arch/arm/hvm.c
@@ -11,7 +11,7 @@
 
 #include <asm/hypercall.h>
 
-long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE(void) arg)
+long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
 
 {
     long rc = 0;
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 2400e1c..3e8b6cc 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -541,7 +541,7 @@ static int xenmem_add_to_physmap(struct domain *d,
     return xenmem_add_to_physmap_once(d, xatp);
 }
 
-long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
+long arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     int rc;
 
diff --git a/xen/arch/arm/physdev.c b/xen/arch/arm/physdev.c
index bcf4337..0801e8c 100644
--- a/xen/arch/arm/physdev.c
+++ b/xen/arch/arm/physdev.c
@@ -11,7 +11,7 @@
 #include <asm/hypercall.h>
 
 
-int do_physdev_op(int cmd, XEN_GUEST_HANDLE(void) arg)
+int do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     printk("%s %d cmd=%d: not implemented yet\n", __func__, __LINE__, cmd);
     return -ENOSYS;
diff --git a/xen/arch/arm/sysctl.c b/xen/arch/arm/sysctl.c
index e8e1c0d..a286abe 100644
--- a/xen/arch/arm/sysctl.c
+++ b/xen/arch/arm/sysctl.c
@@ -13,7 +13,7 @@
 #include <public/sysctl.h>
 
 long arch_do_sysctl(struct xen_sysctl *sysctl,
-                    XEN_GUEST_HANDLE(xen_sysctl_t) u_sysctl)
+                    XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
 {
     return -ENOSYS;
 }
diff --git a/xen/arch/x86/cpu/mcheck/mce.c b/xen/arch/x86/cpu/mcheck/mce.c
index a89df6d..0f122b3 100644
--- a/xen/arch/x86/cpu/mcheck/mce.c
+++ b/xen/arch/x86/cpu/mcheck/mce.c
@@ -1357,7 +1357,7 @@ CHECK_mcinfo_recovery;
 #endif
 
 /* Machine Check Architecture Hypercall */
-long do_mca(XEN_GUEST_HANDLE(xen_mc_t) u_xen_mc)
+long do_mca(XEN_GUEST_HANDLE_PARAM(xen_mc_t) u_xen_mc)
 {
     long ret = 0;
     struct xen_mc curop, *op = &curop;
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 5bba4b9..13ff776 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1138,7 +1138,7 @@ map_vcpu_info(struct vcpu *v, unsigned long gfn, unsigned offset)
 
 long
 arch_do_vcpu_op(
-    int cmd, struct vcpu *v, XEN_GUEST_HANDLE(void) arg)
+    int cmd, struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     long rc = 0;
 
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 135ea6e..663bfe4 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -48,7 +48,7 @@ static int gdbsx_guest_mem_io(
 
 long arch_do_domctl(
     struct xen_domctl *domctl,
-    XEN_GUEST_HANDLE(xen_domctl_t) u_domctl)
+    XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
     long ret = 0;
 
diff --git a/xen/arch/x86/efi/runtime.c b/xen/arch/x86/efi/runtime.c
index 1dbe2db..b2ff495 100644
--- a/xen/arch/x86/efi/runtime.c
+++ b/xen/arch/x86/efi/runtime.c
@@ -184,7 +184,7 @@ int efi_get_info(uint32_t idx, union xenpf_efi_info *info)
     return 0;
 }
 
-static long gwstrlen(XEN_GUEST_HANDLE(CHAR16) str)
+static long gwstrlen(XEN_GUEST_HANDLE_PARAM(CHAR16) str)
 {
     unsigned long len;
 
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 7f8a025c..e2bf831 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -3047,14 +3047,14 @@ static int grant_table_op_is_allowed(unsigned int cmd)
 }
 
 static long hvm_grant_table_op(
-    unsigned int cmd, XEN_GUEST_HANDLE(void) uop, unsigned int count)
+    unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) uop, unsigned int count)
 {
     if ( !grant_table_op_is_allowed(cmd) )
         return -ENOSYS; /* all other commands need auditing */
     return do_grant_table_op(cmd, uop, count);
 }
 
-static long hvm_memory_op(int cmd, XEN_GUEST_HANDLE(void) arg)
+static long hvm_memory_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     long rc;
 
@@ -3072,7 +3072,7 @@ static long hvm_memory_op(int cmd, XEN_GUEST_HANDLE(void) arg)
     return do_memory_op(cmd, arg);
 }
 
-static long hvm_physdev_op(int cmd, XEN_GUEST_HANDLE(void) arg)
+static long hvm_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     switch ( cmd )
     {
@@ -3088,7 +3088,7 @@ static long hvm_physdev_op(int cmd, XEN_GUEST_HANDLE(void) arg)
 }
 
 static long hvm_vcpu_op(
-    int cmd, int vcpuid, XEN_GUEST_HANDLE(void) arg)
+    int cmd, int vcpuid, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     long rc;
 
@@ -3137,7 +3137,7 @@ static hvm_hypercall_t *hvm_hypercall32_table[NR_hypercalls] = {
 #else /* defined(__x86_64__) */
 
 static long hvm_grant_table_op_compat32(unsigned int cmd,
-                                        XEN_GUEST_HANDLE(void) uop,
+                                        XEN_GUEST_HANDLE_PARAM(void) uop,
                                         unsigned int count)
 {
     if ( !grant_table_op_is_allowed(cmd) )
@@ -3145,7 +3145,7 @@ static long hvm_grant_table_op_compat32(unsigned int cmd,
     return compat_grant_table_op(cmd, uop, count);
 }
 
-static long hvm_memory_op_compat32(int cmd, XEN_GUEST_HANDLE(void) arg)
+static long hvm_memory_op_compat32(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     int rc;
 
@@ -3164,7 +3164,7 @@ static long hvm_memory_op_compat32(int cmd, XEN_GUEST_HANDLE(void) arg)
 }
 
 static long hvm_vcpu_op_compat32(
-    int cmd, int vcpuid, XEN_GUEST_HANDLE(void) arg)
+    int cmd, int vcpuid, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     long rc;
 
@@ -3188,7 +3188,7 @@ static long hvm_vcpu_op_compat32(
 }
 
 static long hvm_physdev_op_compat32(
-    int cmd, XEN_GUEST_HANDLE(void) arg)
+    int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     switch ( cmd )
     {
@@ -3360,7 +3360,7 @@ void hvm_hypercall_page_initialise(struct domain *d,
 }
 
 static int hvmop_set_pci_intx_level(
-    XEN_GUEST_HANDLE(xen_hvm_set_pci_intx_level_t) uop)
+    XEN_GUEST_HANDLE_PARAM(xen_hvm_set_pci_intx_level_t) uop)
 {
     struct xen_hvm_set_pci_intx_level op;
     struct domain *d;
@@ -3525,7 +3525,7 @@ static void hvm_s3_resume(struct domain *d)
 }
 
 static int hvmop_set_isa_irq_level(
-    XEN_GUEST_HANDLE(xen_hvm_set_isa_irq_level_t) uop)
+    XEN_GUEST_HANDLE_PARAM(xen_hvm_set_isa_irq_level_t) uop)
 {
     struct xen_hvm_set_isa_irq_level op;
     struct domain *d;
@@ -3569,7 +3569,7 @@ static int hvmop_set_isa_irq_level(
 }
 
 static int hvmop_set_pci_link_route(
-    XEN_GUEST_HANDLE(xen_hvm_set_pci_link_route_t) uop)
+    XEN_GUEST_HANDLE_PARAM(xen_hvm_set_pci_link_route_t) uop)
 {
     struct xen_hvm_set_pci_link_route op;
     struct domain *d;
@@ -3602,7 +3602,7 @@ static int hvmop_set_pci_link_route(
 }
 
 static int hvmop_inject_msi(
-    XEN_GUEST_HANDLE(xen_hvm_inject_msi_t) uop)
+    XEN_GUEST_HANDLE_PARAM(xen_hvm_inject_msi_t) uop)
 {
     struct xen_hvm_inject_msi op;
     struct domain *d;
@@ -3686,7 +3686,7 @@ static int hvm_replace_event_channel(struct vcpu *v, domid_t remote_domid,
     return 0;
 }
 
-long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE(void) arg)
+long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
 
 {
     struct domain *curr_d = current->domain;
diff --git a/xen/arch/x86/microcode.c b/xen/arch/x86/microcode.c
index bdda3f5..1477481 100644
--- a/xen/arch/x86/microcode.c
+++ b/xen/arch/x86/microcode.c
@@ -192,7 +192,7 @@ static long do_microcode_update(void *_info)
     return error;
 }
 
-int microcode_update(XEN_GUEST_HANDLE(const_void) buf, unsigned long len)
+int microcode_update(XEN_GUEST_HANDLE_PARAM(const_void) buf, unsigned long len)
 {
     int ret;
     struct microcode_info *info;
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index f5c704e..4d72700 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -2914,7 +2914,7 @@ static void put_pg_owner(struct domain *pg_owner)
 }
 
 static inline int vcpumask_to_pcpumask(
-    struct domain *d, XEN_GUEST_HANDLE(const_void) bmap, cpumask_t *pmask)
+    struct domain *d, XEN_GUEST_HANDLE_PARAM(const_void) bmap, cpumask_t *pmask)
 {
     unsigned int vcpu_id, vcpu_bias, offs;
     unsigned long vmask;
@@ -2974,9 +2974,9 @@ static inline void fixunmap_domain_page(const void *ptr)
 #endif
 
 int do_mmuext_op(
-    XEN_GUEST_HANDLE(mmuext_op_t) uops,
+    XEN_GUEST_HANDLE_PARAM(mmuext_op_t) uops,
     unsigned int count,
-    XEN_GUEST_HANDLE(uint) pdone,
+    XEN_GUEST_HANDLE_PARAM(uint) pdone,
     unsigned int foreigndom)
 {
     struct mmuext_op op;
@@ -3438,9 +3438,9 @@ int do_mmuext_op(
 }
 
 int do_mmu_update(
-    XEN_GUEST_HANDLE(mmu_update_t) ureqs,
+    XEN_GUEST_HANDLE_PARAM(mmu_update_t) ureqs,
     unsigned int count,
-    XEN_GUEST_HANDLE(uint) pdone,
+    XEN_GUEST_HANDLE_PARAM(uint) pdone,
     unsigned int foreigndom)
 {
     struct mmu_update req;
@@ -4387,7 +4387,7 @@ long set_gdt(struct vcpu *v,
 }
 
 
-long do_set_gdt(XEN_GUEST_HANDLE(ulong) frame_list, unsigned int entries)
+long do_set_gdt(XEN_GUEST_HANDLE_PARAM(ulong) frame_list, unsigned int entries)
 {
     int nr_pages = (entries + 511) / 512;
     unsigned long frames[16];
@@ -4661,7 +4661,7 @@ static int xenmem_add_to_physmap(struct domain *d,
     return xenmem_add_to_physmap_once(d, xatp);
 }
 
-long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
+long arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     int rc;
 
diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 13b4be2..67e48a3 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -690,7 +690,7 @@ void hap_teardown(struct domain *d)
 }
 
 int hap_domctl(struct domain *d, xen_domctl_shadow_op_t *sc,
-               XEN_GUEST_HANDLE(void) u_domctl)
+               XEN_GUEST_HANDLE_PARAM(void) u_domctl)
 {
     int rc, preempted = 0;
 
diff --git a/xen/arch/x86/mm/mem_event.c b/xen/arch/x86/mm/mem_event.c
index d728889..d3dac14 100644
--- a/xen/arch/x86/mm/mem_event.c
+++ b/xen/arch/x86/mm/mem_event.c
@@ -512,7 +512,7 @@ void mem_event_cleanup(struct domain *d)
 }
 
 int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
-                     XEN_GUEST_HANDLE(void) u_domctl)
+                     XEN_GUEST_HANDLE_PARAM(void) u_domctl)
 {
     int rc;
 
diff --git a/xen/arch/x86/mm/paging.c b/xen/arch/x86/mm/paging.c
index ca879f9..ea44e39 100644
--- a/xen/arch/x86/mm/paging.c
+++ b/xen/arch/x86/mm/paging.c
@@ -654,7 +654,7 @@ void paging_vcpu_init(struct vcpu *v)
 
 
 int paging_domctl(struct domain *d, xen_domctl_shadow_op_t *sc,
-                  XEN_GUEST_HANDLE(void) u_domctl)
+                  XEN_GUEST_HANDLE_PARAM(void) u_domctl)
 {
     int rc;
 
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index dc245be..bd47f03 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -3786,7 +3786,7 @@ out:
 
 int shadow_domctl(struct domain *d, 
                   xen_domctl_shadow_op_t *sc,
-                  XEN_GUEST_HANDLE(void) u_domctl)
+                  XEN_GUEST_HANDLE_PARAM(void) u_domctl)
 {
     int rc, preempted = 0;
 
diff --git a/xen/arch/x86/physdev.c b/xen/arch/x86/physdev.c
index b0458fd..b6474ef 100644
--- a/xen/arch/x86/physdev.c
+++ b/xen/arch/x86/physdev.c
@@ -255,7 +255,7 @@ int physdev_unmap_pirq(domid_t domid, int pirq)
 }
 #endif /* COMPAT */
 
-ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE(void) arg)
+ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     int irq;
     ret_t ret;
diff --git a/xen/arch/x86/platform_hypercall.c b/xen/arch/x86/platform_hypercall.c
index 88880b0..a32e0a2 100644
--- a/xen/arch/x86/platform_hypercall.c
+++ b/xen/arch/x86/platform_hypercall.c
@@ -60,7 +60,7 @@ long cpu_down_helper(void *data);
 long core_parking_helper(void *data);
 uint32_t get_cur_idle_nums(void);
 
-ret_t do_platform_op(XEN_GUEST_HANDLE(xen_platform_op_t) u_xenpf_op)
+ret_t do_platform_op(XEN_GUEST_HANDLE_PARAM(xen_platform_op_t) u_xenpf_op)
 {
     ret_t ret = 0;
     struct xen_platform_op curop, *op = &curop;
diff --git a/xen/arch/x86/sysctl.c b/xen/arch/x86/sysctl.c
index 379f071..b84dd34 100644
--- a/xen/arch/x86/sysctl.c
+++ b/xen/arch/x86/sysctl.c
@@ -58,7 +58,7 @@ long cpu_down_helper(void *data)
 }
 
 long arch_do_sysctl(
-    struct xen_sysctl *sysctl, XEN_GUEST_HANDLE(xen_sysctl_t) u_sysctl)
+    struct xen_sysctl *sysctl, XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
 {
     long ret = 0;
 
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 767be86..281d9e7 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -3700,7 +3700,7 @@ int send_guest_trap(struct domain *d, uint16_t vcpuid, unsigned int trap_nr)
 }
 
 
-long do_set_trap_table(XEN_GUEST_HANDLE(const_trap_info_t) traps)
+long do_set_trap_table(XEN_GUEST_HANDLE_PARAM(const_trap_info_t) traps)
 {
     struct trap_info cur;
     struct vcpu *curr = current;
diff --git a/xen/arch/x86/x86_32/mm.c b/xen/arch/x86/x86_32/mm.c
index 37efa3c..f6448fb 100644
--- a/xen/arch/x86/x86_32/mm.c
+++ b/xen/arch/x86/x86_32/mm.c
@@ -203,7 +203,7 @@ void __init subarch_init_memory(void)
     }
 }
 
-long subarch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
+long subarch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct xen_machphys_mfn_list xmml;
     unsigned long mfn, last_mfn;
diff --git a/xen/arch/x86/x86_32/traps.c b/xen/arch/x86/x86_32/traps.c
index 8f68808..0c7c860 100644
--- a/xen/arch/x86/x86_32/traps.c
+++ b/xen/arch/x86/x86_32/traps.c
@@ -492,7 +492,7 @@ static long unregister_guest_callback(struct callback_unregister *unreg)
 }
 
 
-long do_callback_op(int cmd, XEN_GUEST_HANDLE(const_void) arg)
+long do_callback_op(int cmd, XEN_GUEST_HANDLE_PARAM(const_void) arg)
 {
     long ret;
 
diff --git a/xen/arch/x86/x86_64/compat/mm.c b/xen/arch/x86/x86_64/compat/mm.c
index 5bcd2fd..d24a324 100644
--- a/xen/arch/x86/x86_64/compat/mm.c
+++ b/xen/arch/x86/x86_64/compat/mm.c
@@ -5,7 +5,7 @@
 #include <asm/mem_event.h>
 #include <asm/mem_sharing.h>
 
-int compat_set_gdt(XEN_GUEST_HANDLE(uint) frame_list, unsigned int entries)
+int compat_set_gdt(XEN_GUEST_HANDLE_PARAM(uint) frame_list, unsigned int entries)
 {
     unsigned int i, nr_pages = (entries + 511) / 512;
     unsigned long frames[16];
@@ -44,7 +44,7 @@ int compat_update_descriptor(u32 pa_lo, u32 pa_hi, u32 desc_lo, u32 desc_hi)
                                 desc_lo | ((u64)desc_hi << 32));
 }
 
-int compat_arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
+int compat_arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct compat_machphys_mfn_list xmml;
     l2_pgentry_t l2e;
@@ -266,9 +266,9 @@ int compat_update_va_mapping_otherdomain(unsigned long va, u32 lo, u32 hi,
 
 DEFINE_XEN_GUEST_HANDLE(mmuext_op_compat_t);
 
-int compat_mmuext_op(XEN_GUEST_HANDLE(mmuext_op_compat_t) cmp_uops,
+int compat_mmuext_op(XEN_GUEST_HANDLE_PARAM(mmuext_op_compat_t) cmp_uops,
                      unsigned int count,
-                     XEN_GUEST_HANDLE(uint) pdone,
+                     XEN_GUEST_HANDLE_PARAM(uint) pdone,
                      unsigned int foreigndom)
 {
     unsigned int i, preempt_mask;
diff --git a/xen/arch/x86/x86_64/domain.c b/xen/arch/x86/x86_64/domain.c
index e746c89..144ca2d 100644
--- a/xen/arch/x86/x86_64/domain.c
+++ b/xen/arch/x86/x86_64/domain.c
@@ -23,7 +23,7 @@ CHECK_vcpu_get_physid;
 
 int
 arch_compat_vcpu_op(
-    int cmd, struct vcpu *v, XEN_GUEST_HANDLE(void) arg)
+    int cmd, struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     int rc = -ENOSYS;
 
diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index 635a499..17c46a1 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -1043,7 +1043,7 @@ void __init subarch_init_memory(void)
     }
 }
 
-long subarch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
+long subarch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct xen_machphys_mfn_list xmml;
     l3_pgentry_t l3e;
diff --git a/xen/arch/x86/x86_64/traps.c b/xen/arch/x86/x86_64/traps.c
index 806cf2e..6ead813 100644
--- a/xen/arch/x86/x86_64/traps.c
+++ b/xen/arch/x86/x86_64/traps.c
@@ -518,7 +518,7 @@ static long unregister_guest_callback(struct callback_unregister *unreg)
 }
 
 
-long do_callback_op(int cmd, XEN_GUEST_HANDLE(const_void) arg)
+long do_callback_op(int cmd, XEN_GUEST_HANDLE_PARAM(const_void) arg)
 {
     long ret;
 
diff --git a/xen/common/compat/domain.c b/xen/common/compat/domain.c
index 40a0287..e4c8ceb 100644
--- a/xen/common/compat/domain.c
+++ b/xen/common/compat/domain.c
@@ -15,7 +15,7 @@
 CHECK_vcpu_set_periodic_timer;
 #undef xen_vcpu_set_periodic_timer
 
-int compat_vcpu_op(int cmd, int vcpuid, XEN_GUEST_HANDLE(void) arg)
+int compat_vcpu_op(int cmd, int vcpuid, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct domain *d = current->domain;
     struct vcpu *v;
diff --git a/xen/common/compat/grant_table.c b/xen/common/compat/grant_table.c
index edd20c6..74a4733 100644
--- a/xen/common/compat/grant_table.c
+++ b/xen/common/compat/grant_table.c
@@ -52,7 +52,7 @@ CHECK_gnttab_swap_grant_ref;
 #undef xen_gnttab_swap_grant_ref
 
 int compat_grant_table_op(unsigned int cmd,
-                          XEN_GUEST_HANDLE(void) cmp_uop,
+                          XEN_GUEST_HANDLE_PARAM(void) cmp_uop,
                           unsigned int count)
 {
     int rc = 0;
diff --git a/xen/common/compat/memory.c b/xen/common/compat/memory.c
index e7257cc..8e311ff 100644
--- a/xen/common/compat/memory.c
+++ b/xen/common/compat/memory.c
@@ -13,7 +13,7 @@ CHECK_TYPE(domid);
 #undef compat_domid_t
 #undef xen_domid_t
 
-int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE(void) compat)
+int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) compat)
 {
     int rc, split, op = cmd & MEMOP_CMD_MASK;
     unsigned int start_extent = cmd >> MEMOP_EXTENT_SHIFT;
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 4c5d241..d7cd135 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -804,7 +804,7 @@ void vcpu_reset(struct vcpu *v)
 }
 
 
-long do_vcpu_op(int cmd, int vcpuid, XEN_GUEST_HANDLE(void) arg)
+long do_vcpu_op(int cmd, int vcpuid, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct domain *d = current->domain;
     struct vcpu *v;
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index 7ca6b08..527c5ad 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -238,7 +238,7 @@ void domctl_lock_release(void)
     spin_unlock(&current->domain->hypercall_deadlock_mutex);
 }
 
-long do_domctl(XEN_GUEST_HANDLE(xen_domctl_t) u_domctl)
+long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
     long ret = 0;
     struct xen_domctl curop, *op = &curop;
diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index 53777f8..a80a0d1 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -970,7 +970,7 @@ out:
 }
 
 
-long do_event_channel_op(int cmd, XEN_GUEST_HANDLE(void) arg)
+long do_event_channel_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     long rc;
 
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 9961e83..d780dc6 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -771,7 +771,7 @@ __gnttab_map_grant_ref(
 
 static long
 gnttab_map_grant_ref(
-    XEN_GUEST_HANDLE(gnttab_map_grant_ref_t) uop, unsigned int count)
+    XEN_GUEST_HANDLE_PARAM(gnttab_map_grant_ref_t) uop, unsigned int count)
 {
     int i;
     struct gnttab_map_grant_ref op;
@@ -1040,7 +1040,7 @@ __gnttab_unmap_grant_ref(
 
 static long
 gnttab_unmap_grant_ref(
-    XEN_GUEST_HANDLE(gnttab_unmap_grant_ref_t) uop, unsigned int count)
+    XEN_GUEST_HANDLE_PARAM(gnttab_unmap_grant_ref_t) uop, unsigned int count)
 {
     int i, c, partial_done, done = 0;
     struct gnttab_unmap_grant_ref op;
@@ -1102,7 +1102,7 @@ __gnttab_unmap_and_replace(
 
 static long
 gnttab_unmap_and_replace(
-    XEN_GUEST_HANDLE(gnttab_unmap_and_replace_t) uop, unsigned int count)
+    XEN_GUEST_HANDLE_PARAM(gnttab_unmap_and_replace_t) uop, unsigned int count)
 {
     int i, c, partial_done, done = 0;
     struct gnttab_unmap_and_replace op;
@@ -1254,7 +1254,7 @@ active_alloc_failed:
 
 static long 
 gnttab_setup_table(
-    XEN_GUEST_HANDLE(gnttab_setup_table_t) uop, unsigned int count)
+    XEN_GUEST_HANDLE_PARAM(gnttab_setup_table_t) uop, unsigned int count)
 {
     struct gnttab_setup_table op;
     struct domain *d;
@@ -1348,7 +1348,7 @@ gnttab_setup_table(
 
 static long 
 gnttab_query_size(
-    XEN_GUEST_HANDLE(gnttab_query_size_t) uop, unsigned int count)
+    XEN_GUEST_HANDLE_PARAM(gnttab_query_size_t) uop, unsigned int count)
 {
     struct gnttab_query_size op;
     struct domain *d;
@@ -1485,7 +1485,7 @@ gnttab_prepare_for_transfer(
 
 static long
 gnttab_transfer(
-    XEN_GUEST_HANDLE(gnttab_transfer_t) uop, unsigned int count)
+    XEN_GUEST_HANDLE_PARAM(gnttab_transfer_t) uop, unsigned int count)
 {
     struct domain *d = current->domain;
     struct domain *e;
@@ -2082,7 +2082,7 @@ __gnttab_copy(
 
 static long
 gnttab_copy(
-    XEN_GUEST_HANDLE(gnttab_copy_t) uop, unsigned int count)
+    XEN_GUEST_HANDLE_PARAM(gnttab_copy_t) uop, unsigned int count)
 {
     int i;
     struct gnttab_copy op;
@@ -2101,7 +2101,7 @@ gnttab_copy(
 }
 
 static long
-gnttab_set_version(XEN_GUEST_HANDLE(gnttab_set_version_t uop))
+gnttab_set_version(XEN_GUEST_HANDLE_PARAM(gnttab_set_version_t) uop)
 {
     gnttab_set_version_t op;
     struct domain *d = current->domain;
@@ -2220,7 +2220,7 @@ out:
 }
 
 static long
-gnttab_get_status_frames(XEN_GUEST_HANDLE(gnttab_get_status_frames_t) uop,
+gnttab_get_status_frames(XEN_GUEST_HANDLE_PARAM(gnttab_get_status_frames_t) uop,
                          int count)
 {
     gnttab_get_status_frames_t op;
@@ -2289,7 +2289,7 @@ out1:
 }
 
 static long
-gnttab_get_version(XEN_GUEST_HANDLE(gnttab_get_version_t uop))
+gnttab_get_version(XEN_GUEST_HANDLE_PARAM(gnttab_get_version_t) uop)
 {
     gnttab_get_version_t op;
     struct domain *d;
@@ -2368,7 +2368,7 @@ out:
 }
 
 static long
-gnttab_swap_grant_ref(XEN_GUEST_HANDLE(gnttab_swap_grant_ref_t uop),
+gnttab_swap_grant_ref(XEN_GUEST_HANDLE_PARAM(gnttab_swap_grant_ref_t) uop,
                       unsigned int count)
 {
     int i;
@@ -2389,7 +2389,7 @@ gnttab_swap_grant_ref(XEN_GUEST_HANDLE(gnttab_swap_grant_ref_t uop),
 
 long
 do_grant_table_op(
-    unsigned int cmd, XEN_GUEST_HANDLE(void) uop, unsigned int count)
+    unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) uop, unsigned int count)
 {
     long rc;
     
@@ -2401,7 +2401,7 @@ do_grant_table_op(
     {
     case GNTTABOP_map_grant_ref:
     {
-        XEN_GUEST_HANDLE(gnttab_map_grant_ref_t) map =
+        XEN_GUEST_HANDLE_PARAM(gnttab_map_grant_ref_t) map =
             guest_handle_cast(uop, gnttab_map_grant_ref_t);
         if ( unlikely(!guest_handle_okay(map, count)) )
             goto out;
@@ -2415,7 +2415,7 @@ do_grant_table_op(
     }
     case GNTTABOP_unmap_grant_ref:
     {
-        XEN_GUEST_HANDLE(gnttab_unmap_grant_ref_t) unmap =
+        XEN_GUEST_HANDLE_PARAM(gnttab_unmap_grant_ref_t) unmap =
             guest_handle_cast(uop, gnttab_unmap_grant_ref_t);
         if ( unlikely(!guest_handle_okay(unmap, count)) )
             goto out;
@@ -2429,7 +2429,7 @@ do_grant_table_op(
     }
     case GNTTABOP_unmap_and_replace:
     {
-        XEN_GUEST_HANDLE(gnttab_unmap_and_replace_t) unmap =
+        XEN_GUEST_HANDLE_PARAM(gnttab_unmap_and_replace_t) unmap =
             guest_handle_cast(uop, gnttab_unmap_and_replace_t);
         if ( unlikely(!guest_handle_okay(unmap, count)) )
             goto out;
@@ -2453,7 +2453,7 @@ do_grant_table_op(
     }
     case GNTTABOP_transfer:
     {
-        XEN_GUEST_HANDLE(gnttab_transfer_t) transfer =
+        XEN_GUEST_HANDLE_PARAM(gnttab_transfer_t) transfer =
             guest_handle_cast(uop, gnttab_transfer_t);
         if ( unlikely(!guest_handle_okay(transfer, count)) )
             goto out;
@@ -2467,7 +2467,7 @@ do_grant_table_op(
     }
     case GNTTABOP_copy:
     {
-        XEN_GUEST_HANDLE(gnttab_copy_t) copy =
+        XEN_GUEST_HANDLE_PARAM(gnttab_copy_t) copy =
             guest_handle_cast(uop, gnttab_copy_t);
         if ( unlikely(!guest_handle_okay(copy, count)) )
             goto out;
@@ -2504,7 +2504,7 @@ do_grant_table_op(
     }
     case GNTTABOP_swap_grant_ref:
     {
-        XEN_GUEST_HANDLE(gnttab_swap_grant_ref_t) swap =
+        XEN_GUEST_HANDLE_PARAM(gnttab_swap_grant_ref_t) swap =
             guest_handle_cast(uop, gnttab_swap_grant_ref_t);
         if ( unlikely(!guest_handle_okay(swap, count)) )
             goto out;
diff --git a/xen/common/kernel.c b/xen/common/kernel.c
index c915bbc..55caff6 100644
--- a/xen/common/kernel.c
+++ b/xen/common/kernel.c
@@ -204,7 +204,7 @@ void __init do_initcalls(void)
  * Simple hypercalls.
  */
 
-DO(xen_version)(int cmd, XEN_GUEST_HANDLE(void) arg)
+DO(xen_version)(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     switch ( cmd )
     {
@@ -332,7 +332,7 @@ DO(xen_version)(int cmd, XEN_GUEST_HANDLE(void) arg)
     return -ENOSYS;
 }
 
-DO(nmi_op)(unsigned int cmd, XEN_GUEST_HANDLE(void) arg)
+DO(nmi_op)(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct xennmi_callback cb;
     long rc = 0;
diff --git a/xen/common/kexec.c b/xen/common/kexec.c
index 09a5624..03389eb 100644
--- a/xen/common/kexec.c
+++ b/xen/common/kexec.c
@@ -613,7 +613,7 @@ static int kexec_get_range_internal(xen_kexec_range_t *range)
     return ret;
 }
 
-static int kexec_get_range(XEN_GUEST_HANDLE(void) uarg)
+static int kexec_get_range(XEN_GUEST_HANDLE_PARAM(void) uarg)
 {
     xen_kexec_range_t range;
     int ret = -EINVAL;
@@ -629,7 +629,7 @@ static int kexec_get_range(XEN_GUEST_HANDLE(void) uarg)
     return ret;
 }
 
-static int kexec_get_range_compat(XEN_GUEST_HANDLE(void) uarg)
+static int kexec_get_range_compat(XEN_GUEST_HANDLE_PARAM(void) uarg)
 {
 #ifdef CONFIG_COMPAT
     xen_kexec_range_t range;
@@ -777,7 +777,7 @@ static int kexec_load_unload_internal(unsigned long op, xen_kexec_load_t *load)
     return ret;
 }
 
-static int kexec_load_unload(unsigned long op, XEN_GUEST_HANDLE(void) uarg)
+static int kexec_load_unload(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) uarg)
 {
     xen_kexec_load_t load;
 
@@ -788,7 +788,7 @@ static int kexec_load_unload(unsigned long op, XEN_GUEST_HANDLE(void) uarg)
 }
 
 static int kexec_load_unload_compat(unsigned long op,
-                                    XEN_GUEST_HANDLE(void) uarg)
+                                    XEN_GUEST_HANDLE_PARAM(void) uarg)
 {
 #ifdef CONFIG_COMPAT
     compat_kexec_load_t compat_load;
@@ -813,7 +813,7 @@ static int kexec_load_unload_compat(unsigned long op,
 #endif /* CONFIG_COMPAT */
 }
 
-static int kexec_exec(XEN_GUEST_HANDLE(void) uarg)
+static int kexec_exec(XEN_GUEST_HANDLE_PARAM(void) uarg)
 {
     xen_kexec_exec_t exec;
     xen_kexec_image_t *image;
@@ -845,7 +845,7 @@ static int kexec_exec(XEN_GUEST_HANDLE(void) uarg)
     return -EINVAL; /* never reached */
 }
 
-int do_kexec_op_internal(unsigned long op, XEN_GUEST_HANDLE(void) uarg,
+int do_kexec_op_internal(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) uarg,
                            int compat)
 {
     unsigned long flags;
@@ -886,13 +886,13 @@ int do_kexec_op_internal(unsigned long op, XEN_GUEST_HANDLE(void) uarg,
     return ret;
 }
 
-long do_kexec_op(unsigned long op, XEN_GUEST_HANDLE(void) uarg)
+long do_kexec_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) uarg)
 {
     return do_kexec_op_internal(op, uarg, 0);
 }
 
 #ifdef CONFIG_COMPAT
-int compat_kexec_op(unsigned long op, XEN_GUEST_HANDLE(void) uarg)
+int compat_kexec_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) uarg)
 {
     return do_kexec_op_internal(op, uarg, 1);
 }
diff --git a/xen/common/memory.c b/xen/common/memory.c
index 7e58cc4..a683954 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -277,7 +277,7 @@ static void decrease_reservation(struct memop_args *a)
     a->nr_done = i;
 }
 
-static long memory_exchange(XEN_GUEST_HANDLE(xen_memory_exchange_t) arg)
+static long memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t) arg)
 {
     struct xen_memory_exchange exch;
     PAGE_LIST_HEAD(in_chunk_list);
@@ -530,7 +530,7 @@ static long memory_exchange(XEN_GUEST_HANDLE(xen_memory_exchange_t) arg)
     return rc;
 }
 
-long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE(void) arg)
+long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct domain *d;
     int rc, op;
diff --git a/xen/common/multicall.c b/xen/common/multicall.c
index 6c1a9d7..5de5f8d 100644
--- a/xen/common/multicall.c
+++ b/xen/common/multicall.c
@@ -21,7 +21,7 @@ typedef long ret_t;
 
 ret_t
 do_multicall(
-    XEN_GUEST_HANDLE(multicall_entry_t) call_list, unsigned int nr_calls)
+    XEN_GUEST_HANDLE_PARAM(multicall_entry_t) call_list, unsigned int nr_calls)
 {
     struct mc_state *mcs = &current->mc_state;
     unsigned int     i;
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index 0854f55..c26eac4 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -836,7 +836,7 @@ typedef long ret_t;
 
 #endif /* !COMPAT */
 
-ret_t do_sched_op(int cmd, XEN_GUEST_HANDLE(void) arg)
+ret_t do_sched_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     ret_t ret = 0;
 
diff --git a/xen/common/sysctl.c b/xen/common/sysctl.c
index ea68278..47142f4 100644
--- a/xen/common/sysctl.c
+++ b/xen/common/sysctl.c
@@ -27,7 +27,7 @@
 #include <xsm/xsm.h>
 #include <xen/pmstat.h>
 
-long do_sysctl(XEN_GUEST_HANDLE(xen_sysctl_t) u_sysctl)
+long do_sysctl(XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
 {
     long ret = 0;
     struct xen_sysctl curop, *op = &curop;
diff --git a/xen/common/xenoprof.c b/xen/common/xenoprof.c
index e571fea..c001b38 100644
--- a/xen/common/xenoprof.c
+++ b/xen/common/xenoprof.c
@@ -404,7 +404,7 @@ static int add_active_list(domid_t domid)
     return 0;
 }
 
-static int add_passive_list(XEN_GUEST_HANDLE(void) arg)
+static int add_passive_list(XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct xenoprof_passive passive;
     struct domain *d;
@@ -585,7 +585,7 @@ void xenoprof_log_event(struct vcpu *vcpu, const struct cpu_user_regs *regs,
 
 
 
-static int xenoprof_op_init(XEN_GUEST_HANDLE(void) arg)
+static int xenoprof_op_init(XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct domain *d = current->domain;
     struct xenoprof_init xenoprof_init;
@@ -609,7 +609,7 @@ static int xenoprof_op_init(XEN_GUEST_HANDLE(void) arg)
 
 #endif /* !COMPAT */
 
-static int xenoprof_op_get_buffer(XEN_GUEST_HANDLE(void) arg)
+static int xenoprof_op_get_buffer(XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct xenoprof_get_buffer xenoprof_get_buffer;
     struct domain *d = current->domain;
@@ -660,7 +660,7 @@ static int xenoprof_op_get_buffer(XEN_GUEST_HANDLE(void) arg)
                       || (op == XENOPROF_disable_virq)  \
                       || (op == XENOPROF_get_buffer))
  
-int do_xenoprof_op(int op, XEN_GUEST_HANDLE(void) arg)
+int do_xenoprof_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     int ret = 0;
     
diff --git a/xen/drivers/acpi/pmstat.c b/xen/drivers/acpi/pmstat.c
index 698711e..f8d62f2 100644
--- a/xen/drivers/acpi/pmstat.c
+++ b/xen/drivers/acpi/pmstat.c
@@ -515,7 +515,7 @@ int do_pm_op(struct xen_sysctl_pm_op *op)
     return ret;
 }
 
-int acpi_set_pdc_bits(u32 acpi_id, XEN_GUEST_HANDLE(uint32) pdc)
+int acpi_set_pdc_bits(u32 acpi_id, XEN_GUEST_HANDLE_PARAM(uint32) pdc)
 {
     u32 bits[3];
     int ret;
diff --git a/xen/drivers/char/console.c b/xen/drivers/char/console.c
index e10bed5..b0f2334 100644
--- a/xen/drivers/char/console.c
+++ b/xen/drivers/char/console.c
@@ -182,7 +182,7 @@ static void putchar_console_ring(int c)
 
 long read_console_ring(struct xen_sysctl_readconsole *op)
 {
-    XEN_GUEST_HANDLE(char) str;
+    XEN_GUEST_HANDLE_PARAM(char) str;
     uint32_t idx, len, max, sofar, c;
 
     str   = guest_handle_cast(op->buffer, char),
@@ -320,7 +320,7 @@ static void notify_dom0_con_ring(unsigned long unused)
 static DECLARE_SOFTIRQ_TASKLET(notify_dom0_con_ring_tasklet,
                                notify_dom0_con_ring, 0);
 
-static long guest_console_write(XEN_GUEST_HANDLE(char) buffer, int count)
+static long guest_console_write(XEN_GUEST_HANDLE_PARAM(char) buffer, int count)
 {
     char kbuf[128], *kptr;
     int kcount;
@@ -358,7 +358,7 @@ static long guest_console_write(XEN_GUEST_HANDLE(char) buffer, int count)
     return 0;
 }
 
-long do_console_io(int cmd, int count, XEN_GUEST_HANDLE(char) buffer)
+long do_console_io(int cmd, int count, XEN_GUEST_HANDLE_PARAM(char) buffer)
 {
     long rc;
     unsigned int idx, len;
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 64f5fd1..396461f 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -518,7 +518,7 @@ void iommu_crash_shutdown(void)
 
 int iommu_do_domctl(
     struct xen_domctl *domctl,
-    XEN_GUEST_HANDLE(xen_domctl_t) u_domctl)
+    XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
     struct domain *d;
     u16 seg;
diff --git a/xen/include/asm-arm/guest_access.h b/xen/include/asm-arm/guest_access.h
index 7a955cb..bf5005b 100644
--- a/xen/include/asm-arm/guest_access.h
+++ b/xen/include/asm-arm/guest_access.h
@@ -30,7 +30,7 @@ unsigned long raw_clear_guest(void *to, unsigned len);
 /* Cast a guest handle to the specified type of handle. */
 #define guest_handle_cast(hnd, type) ({         \
     type *_x = (hnd).p;                         \
-    (XEN_GUEST_HANDLE(type)) { {_x } };            \
+    (XEN_GUEST_HANDLE_PARAM(type)) { _x };            \
 })
 
 #define guest_handle_from_ptr(ptr, type)        \
diff --git a/xen/include/asm-arm/hypercall.h b/xen/include/asm-arm/hypercall.h
index 454f02e..090e620 100644
--- a/xen/include/asm-arm/hypercall.h
+++ b/xen/include/asm-arm/hypercall.h
@@ -2,7 +2,7 @@
 #define __ASM_ARM_HYPERCALL_H__
 
 #include <public/domctl.h> /* for arch_do_domctl */
-int do_physdev_op(int cmd, XEN_GUEST_HANDLE(void) arg);
+int do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg);
 
 #endif /* __ASM_ARM_HYPERCALL_H__ */
 /*
diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
index b37bd35..8bf45ba 100644
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -267,7 +267,7 @@ static inline int relinquish_shared_pages(struct domain *d)
 
 
 /* Arch-specific portion of memory_op hypercall. */
-long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg);
+long arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg);
 
 int steal_page(
     struct domain *d, struct page_info *page, unsigned int memflags);
diff --git a/xen/include/asm-x86/hap.h b/xen/include/asm-x86/hap.h
index a2532a4..916a35b 100644
--- a/xen/include/asm-x86/hap.h
+++ b/xen/include/asm-x86/hap.h
@@ -51,7 +51,7 @@ hap_unmap_domain_page(void *p)
 /************************************************/
 void  hap_domain_init(struct domain *d);
 int   hap_domctl(struct domain *d, xen_domctl_shadow_op_t *sc,
-                 XEN_GUEST_HANDLE(void) u_domctl);
+                 XEN_GUEST_HANDLE_PARAM(void) u_domctl);
 int   hap_enable(struct domain *d, u32 mode);
 void  hap_final_teardown(struct domain *d);
 void  hap_teardown(struct domain *d);
diff --git a/xen/include/asm-x86/hypercall.h b/xen/include/asm-x86/hypercall.h
index 9e136c3..55b5ca2 100644
--- a/xen/include/asm-x86/hypercall.h
+++ b/xen/include/asm-x86/hypercall.h
@@ -18,22 +18,22 @@
 
 extern long
 do_event_channel_op_compat(
-    XEN_GUEST_HANDLE(evtchn_op_t) uop);
+    XEN_GUEST_HANDLE_PARAM(evtchn_op_t) uop);
 
 extern long
 do_set_trap_table(
-    XEN_GUEST_HANDLE(const_trap_info_t) traps);
+    XEN_GUEST_HANDLE_PARAM(const_trap_info_t) traps);
 
 extern int
 do_mmu_update(
-    XEN_GUEST_HANDLE(mmu_update_t) ureqs,
+    XEN_GUEST_HANDLE_PARAM(mmu_update_t) ureqs,
     unsigned int count,
-    XEN_GUEST_HANDLE(uint) pdone,
+    XEN_GUEST_HANDLE_PARAM(uint) pdone,
     unsigned int foreigndom);
 
 extern long
 do_set_gdt(
-    XEN_GUEST_HANDLE(ulong) frame_list,
+    XEN_GUEST_HANDLE_PARAM(ulong) frame_list,
     unsigned int entries);
 
 extern long
@@ -60,7 +60,7 @@ do_update_descriptor(
     u64 desc);
 
 extern long
-do_mca(XEN_GUEST_HANDLE(xen_mc_t) u_xen_mc);
+do_mca(XEN_GUEST_HANDLE_PARAM(xen_mc_t) u_xen_mc);
 
 extern int
 do_update_va_mapping(
@@ -70,7 +70,7 @@ do_update_va_mapping(
 
 extern long
 do_physdev_op(
-    int cmd, XEN_GUEST_HANDLE(void) arg);
+    int cmd, XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern int
 do_update_va_mapping_otherdomain(
@@ -81,9 +81,9 @@ do_update_va_mapping_otherdomain(
 
 extern int
 do_mmuext_op(
-    XEN_GUEST_HANDLE(mmuext_op_t) uops,
+    XEN_GUEST_HANDLE_PARAM(mmuext_op_t) uops,
     unsigned int count,
-    XEN_GUEST_HANDLE(uint) pdone,
+    XEN_GUEST_HANDLE_PARAM(uint) pdone,
     unsigned int foreigndom);
 
 extern unsigned long
@@ -92,7 +92,7 @@ do_iret(
 
 extern int
 do_kexec(
-    unsigned long op, unsigned arg1, XEN_GUEST_HANDLE(void) uarg);
+    unsigned long op, unsigned arg1, XEN_GUEST_HANDLE_PARAM(void) uarg);
 
 #ifdef __x86_64__
 
@@ -110,11 +110,11 @@ do_set_segment_base(
 extern int
 compat_physdev_op(
     int cmd,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern int
 arch_compat_vcpu_op(
-    int cmd, struct vcpu *v, XEN_GUEST_HANDLE(void) arg);
+    int cmd, struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg);
 
 #else
 
diff --git a/xen/include/asm-x86/mem_event.h b/xen/include/asm-x86/mem_event.h
index 23d71c1..e17f36b 100644
--- a/xen/include/asm-x86/mem_event.h
+++ b/xen/include/asm-x86/mem_event.h
@@ -65,7 +65,7 @@ int mem_event_get_response(struct domain *d, struct mem_event_domain *med,
 struct domain *get_mem_event_op_target(uint32_t domain, int *rc);
 int do_mem_event_op(int op, uint32_t domain, void *arg);
 int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
-                     XEN_GUEST_HANDLE(void) u_domctl);
+                     XEN_GUEST_HANDLE_PARAM(void) u_domctl);
 
 #endif /* __MEM_EVENT_H__ */
 
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index 4cba276..6373b3b 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -604,10 +604,10 @@ void *do_page_walk(struct vcpu *v, unsigned long addr);
 int __sync_local_execstate(void);
 
 /* Arch-specific portion of memory_op hypercall. */
-long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg);
-long subarch_memory_op(int op, XEN_GUEST_HANDLE(void) arg);
-int compat_arch_memory_op(int op, XEN_GUEST_HANDLE(void));
-int compat_subarch_memory_op(int op, XEN_GUEST_HANDLE(void));
+long arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg);
+long subarch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg);
+int compat_arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void));
+int compat_subarch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void));
 
 int steal_page(
     struct domain *d, struct page_info *page, unsigned int memflags);
diff --git a/xen/include/asm-x86/paging.h b/xen/include/asm-x86/paging.h
index c432a97..1cd0e3f 100644
--- a/xen/include/asm-x86/paging.h
+++ b/xen/include/asm-x86/paging.h
@@ -215,7 +215,7 @@ int paging_domain_init(struct domain *d, unsigned int domcr_flags);
  * and disable ephemeral shadow modes (test mode and log-dirty mode) and
  * manipulate the log-dirty bitmap. */
 int paging_domctl(struct domain *d, xen_domctl_shadow_op_t *sc,
-                  XEN_GUEST_HANDLE(void) u_domctl);
+                  XEN_GUEST_HANDLE_PARAM(void) u_domctl);
 
 /* Call when destroying a domain */
 void paging_teardown(struct domain *d);
diff --git a/xen/include/asm-x86/processor.h b/xen/include/asm-x86/processor.h
index 7164a50..efdbddd 100644
--- a/xen/include/asm-x86/processor.h
+++ b/xen/include/asm-x86/processor.h
@@ -598,7 +598,7 @@ int rdmsr_hypervisor_regs(uint32_t idx, uint64_t *val);
 int wrmsr_hypervisor_regs(uint32_t idx, uint64_t val);
 
 void microcode_set_module(unsigned int);
-int microcode_update(XEN_GUEST_HANDLE(const_void), unsigned long len);
+int microcode_update(XEN_GUEST_HANDLE_PARAM(const_void), unsigned long len);
 int microcode_resume_cpu(int cpu);
 
 unsigned long *get_x86_gpr(struct cpu_user_regs *regs, unsigned int modrm_reg);
diff --git a/xen/include/asm-x86/shadow.h b/xen/include/asm-x86/shadow.h
index 88a8cd2..2eb6efc 100644
--- a/xen/include/asm-x86/shadow.h
+++ b/xen/include/asm-x86/shadow.h
@@ -73,7 +73,7 @@ int shadow_track_dirty_vram(struct domain *d,
  * manipulate the log-dirty bitmap. */
 int shadow_domctl(struct domain *d, 
                   xen_domctl_shadow_op_t *sc,
-                  XEN_GUEST_HANDLE(void) u_domctl);
+                  XEN_GUEST_HANDLE_PARAM(void) u_domctl);
 
 /* Call when destroying a domain */
 void shadow_teardown(struct domain *d);
diff --git a/xen/include/asm-x86/xenoprof.h b/xen/include/asm-x86/xenoprof.h
index c03f8c8..3f5ea15 100644
--- a/xen/include/asm-x86/xenoprof.h
+++ b/xen/include/asm-x86/xenoprof.h
@@ -40,9 +40,9 @@ int xenoprof_arch_init(int *num_events, char *cpu_type);
 #define xenoprof_arch_disable_virq()            nmi_disable_virq()
 #define xenoprof_arch_release_counters()        nmi_release_counters()
 
-int xenoprof_arch_counter(XEN_GUEST_HANDLE(void) arg);
-int compat_oprof_arch_counter(XEN_GUEST_HANDLE(void) arg);
-int xenoprof_arch_ibs_counter(XEN_GUEST_HANDLE(void) arg);
+int xenoprof_arch_counter(XEN_GUEST_HANDLE_PARAM(void) arg);
+int compat_oprof_arch_counter(XEN_GUEST_HANDLE_PARAM(void) arg);
+int xenoprof_arch_ibs_counter(XEN_GUEST_HANDLE_PARAM(void) arg);
 
 struct vcpu;
 struct cpu_user_regs;
diff --git a/xen/include/xen/acpi.h b/xen/include/xen/acpi.h
index d7e2f94..8f3cdca 100644
--- a/xen/include/xen/acpi.h
+++ b/xen/include/xen/acpi.h
@@ -145,8 +145,8 @@ static inline unsigned int acpi_get_cstate_limit(void) { return 0; }
 static inline void acpi_set_cstate_limit(unsigned int new_limit) { return; }
 #endif
 
-#ifdef XEN_GUEST_HANDLE
-int acpi_set_pdc_bits(u32 acpi_id, XEN_GUEST_HANDLE(uint32));
+#ifdef XEN_GUEST_HANDLE_PARAM
+int acpi_set_pdc_bits(u32 acpi_id, XEN_GUEST_HANDLE_PARAM(uint32));
 #endif
 int arch_acpi_set_pdc_bits(u32 acpi_id, u32 *, u32 mask);
 
diff --git a/xen/include/xen/hypercall.h b/xen/include/xen/hypercall.h
index 73b1598..e335037 100644
--- a/xen/include/xen/hypercall.h
+++ b/xen/include/xen/hypercall.h
@@ -29,29 +29,29 @@ do_sched_op_compat(
 extern long
 do_sched_op(
     int cmd,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern long
 do_domctl(
-    XEN_GUEST_HANDLE(xen_domctl_t) u_domctl);
+    XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl);
 
 extern long
 arch_do_domctl(
     struct xen_domctl *domctl,
-    XEN_GUEST_HANDLE(xen_domctl_t) u_domctl);
+    XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl);
 
 extern long
 do_sysctl(
-    XEN_GUEST_HANDLE(xen_sysctl_t) u_sysctl);
+    XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl);
 
 extern long
 arch_do_sysctl(
     struct xen_sysctl *sysctl,
-    XEN_GUEST_HANDLE(xen_sysctl_t) u_sysctl);
+    XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl);
 
 extern long
 do_platform_op(
-    XEN_GUEST_HANDLE(xen_platform_op_t) u_xenpf_op);
+    XEN_GUEST_HANDLE_PARAM(xen_platform_op_t) u_xenpf_op);
 
 /*
  * To allow safe resume of do_memory_op() after preemption, we need to know
@@ -64,11 +64,11 @@ do_platform_op(
 extern long
 do_memory_op(
     unsigned long cmd,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern long
 do_multicall(
-    XEN_GUEST_HANDLE(multicall_entry_t) call_list,
+    XEN_GUEST_HANDLE_PARAM(multicall_entry_t) call_list,
     unsigned int nr_calls);
 
 extern long
@@ -77,23 +77,23 @@ do_set_timer_op(
 
 extern long
 do_event_channel_op(
-    int cmd, XEN_GUEST_HANDLE(void) arg);
+    int cmd, XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern long
 do_xen_version(
     int cmd,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern long
 do_console_io(
     int cmd,
     int count,
-    XEN_GUEST_HANDLE(char) buffer);
+    XEN_GUEST_HANDLE_PARAM(char) buffer);
 
 extern long
 do_grant_table_op(
     unsigned int cmd,
-    XEN_GUEST_HANDLE(void) uop,
+    XEN_GUEST_HANDLE_PARAM(void) uop,
     unsigned int count);
 
 extern long
@@ -105,72 +105,72 @@ extern long
 do_vcpu_op(
     int cmd,
     int vcpuid,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 struct vcpu;
 extern long
 arch_do_vcpu_op(int cmd,
     struct vcpu *v,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern long
 do_nmi_op(
     unsigned int cmd,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern long
 do_hvm_op(
     unsigned long op,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern long
 do_kexec_op(
     unsigned long op,
     int arg1,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern long
 do_xsm_op(
-    XEN_GUEST_HANDLE(xsm_op_t) u_xsm_op);
+    XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_xsm_op);
 
 extern long
 do_tmem_op(
-    XEN_GUEST_HANDLE(tmem_op_t) uops);
+    XEN_GUEST_HANDLE_PARAM(tmem_op_t) uops);
 
 extern int
-do_xenoprof_op(int op, XEN_GUEST_HANDLE(void) arg);
+do_xenoprof_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg);
 
 #ifdef CONFIG_COMPAT
 
 extern int
 compat_memory_op(
     unsigned int cmd,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern int
 compat_grant_table_op(
     unsigned int cmd,
-    XEN_GUEST_HANDLE(void) uop,
+    XEN_GUEST_HANDLE_PARAM(void) uop,
     unsigned int count);
 
 extern int
 compat_vcpu_op(
     int cmd,
     int vcpuid,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern int
-compat_xenoprof_op(int op, XEN_GUEST_HANDLE(void) arg);
+compat_xenoprof_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern int
 compat_xen_version(
     int cmd,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern int
 compat_sched_op(
     int cmd,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern int
 compat_set_timer_op(
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 6f7fbf7..bd19e23 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -155,7 +155,7 @@ void iommu_crash_shutdown(void);
 void iommu_set_dom0_mapping(struct domain *d);
 void iommu_share_p2m_table(struct domain *d);
 
-int iommu_do_domctl(struct xen_domctl *, XEN_GUEST_HANDLE(xen_domctl_t));
+int iommu_do_domctl(struct xen_domctl *, XEN_GUEST_HANDLE_PARAM(xen_domctl_t));
 
 void iommu_iotlb_flush(struct domain *d, unsigned long gfn, unsigned int page_count);
 void iommu_iotlb_flush_all(struct domain *d);
diff --git a/xen/include/xen/tmem_xen.h b/xen/include/xen/tmem_xen.h
index 4a35760..2e7199a 100644
--- a/xen/include/xen/tmem_xen.h
+++ b/xen/include/xen/tmem_xen.h
@@ -448,7 +448,7 @@ static inline void tmh_tze_copy_from_pfp(void *tva, pfp_t *pfp, pagesize_t len)
 typedef XEN_GUEST_HANDLE(void) cli_mfn_t;
 typedef XEN_GUEST_HANDLE(char) cli_va_t;
 */
-typedef XEN_GUEST_HANDLE(tmem_op_t) tmem_cli_op_t;
+typedef XEN_GUEST_HANDLE_PARAM(tmem_op_t) tmem_cli_op_t;
 
 static inline int tmh_get_tmemop_from_client(tmem_op_t *op, tmem_cli_op_t uops)
 {
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index bef79df..3e4a47f 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -139,7 +139,7 @@ struct xsm_operations {
     int (*cpupool_op)(void);
     int (*sched_op)(void);
 
-    long (*__do_xsm_op) (XEN_GUEST_HANDLE(xsm_op_t) op);
+    long (*__do_xsm_op) (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op);
 
 #ifdef CONFIG_X86
     int (*shadow_control) (struct domain *d, uint32_t op);
@@ -585,7 +585,7 @@ static inline int xsm_sched_op(void)
     return xsm_call(sched_op());
 }
 
-static inline long __do_xsm_op (XEN_GUEST_HANDLE(xsm_op_t) op)
+static inline long __do_xsm_op (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op)
 {
 #ifdef XSM_ENABLE
     return xsm_ops->__do_xsm_op(op);
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index 7027ee7..5ef6529 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -365,7 +365,7 @@ static int dummy_sched_op (void)
     return 0;
 }
 
-static long dummy___do_xsm_op(XEN_GUEST_HANDLE(xsm_op_t) op)
+static long dummy___do_xsm_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) op)
 {
     return -ENOSYS;
 }
diff --git a/xen/xsm/flask/flask_op.c b/xen/xsm/flask/flask_op.c
index bd4db37..23e7d34 100644
--- a/xen/xsm/flask/flask_op.c
+++ b/xen/xsm/flask/flask_op.c
@@ -71,7 +71,7 @@ static int domain_has_security(struct domain *d, u32 perms)
                         perms, NULL);
 }
 
-static int flask_copyin_string(XEN_GUEST_HANDLE(char) u_buf, char **buf, uint32_t size)
+static int flask_copyin_string(XEN_GUEST_HANDLE_PARAM(char) u_buf, char **buf, uint32_t size)
 {
     char *tmp = xmalloc_bytes(size + 1);
     if ( !tmp )
@@ -573,7 +573,7 @@ static int flask_get_peer_sid(struct xen_flask_peersid *arg)
     return rv;
 }
 
-long do_flask_op(XEN_GUEST_HANDLE(xsm_op_t) u_flask_op)
+long do_flask_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_flask_op)
 {
     xen_flask_op_t op;
     int rv;
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 23b84f3..0fc299c 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -1553,7 +1553,7 @@ static int flask_vcpuextstate (struct domain *d, uint32_t cmd)
 }
 #endif
 
-long do_flask_op(XEN_GUEST_HANDLE(xsm_op_t) u_flask_op);
+long do_flask_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_flask_op);
 
 static struct xsm_operations flask_ops = {
     .security_domaininfo = flask_security_domaininfo,
diff --git a/xen/xsm/xsm_core.c b/xen/xsm/xsm_core.c
index 96c8669..46287cb 100644
--- a/xen/xsm/xsm_core.c
+++ b/xen/xsm/xsm_core.c
@@ -111,7 +111,7 @@ int unregister_xsm(struct xsm_operations *ops)
 
 #endif
 
-long do_xsm_op (XEN_GUEST_HANDLE(xsm_op_t) op)
+long do_xsm_op (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op)
 {
     return __do_xsm_op(op);
 }
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 12:10:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 12:10:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Szo35-0003kd-Vb; Fri, 10 Aug 2012 12:10:39 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Szo34-0003jK-VD
	for xen-devel@lists.xensource.com; Fri, 10 Aug 2012 12:10:39 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1344600629!8581169!3
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjQwOTA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15832 invoked from network); 10 Aug 2012 12:10:32 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Aug 2012 12:10:32 -0000
X-IronPort-AV: E=Sophos;i="4.77,745,1336363200"; d="scan'208";a="34247664"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Aug 2012 08:10:29 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 10 Aug 2012 08:10:29 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Szo2p-0006BW-J6;
	Fri, 10 Aug 2012 13:10:23 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: xen-devel@lists.xensource.com
Date: Fri, 10 Aug 2012 13:10:10 +0100
Message-ID: <1344600612-10815-3-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208101222370.21096@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208101222370.21096@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: Tim.Deegan@citrix.com, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH v2 3/5] xen: change the limit of nr_extents to
	UINT_MAX >> MEMOP_EXTENT_SHIFT
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Currently do_memory_op has a different maximum limit for nr_extents on
32-bit and 64-bit builds.
Change the limit to UINT_MAX >> MEMOP_EXTENT_SHIFT, so that it is the
same in both cases.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/common/memory.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/xen/common/memory.c b/xen/common/memory.c
index 5d64cb6..7e58cc4 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -553,7 +553,7 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE(void) arg)
             return start_extent;
 
         /* Is size too large for us to encode a continuation? */
-        if ( reservation.nr_extents > (ULONG_MAX >> MEMOP_EXTENT_SHIFT) )
+        if ( reservation.nr_extents > (UINT_MAX >> MEMOP_EXTENT_SHIFT) )
             return start_extent;
 
         if ( unlikely(start_extent >= reservation.nr_extents) )
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 12:10:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 12:10:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Szo35-0003kT-Jc; Fri, 10 Aug 2012 12:10:39 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Szo34-0003jJ-U0
	for xen-devel@lists.xensource.com; Fri, 10 Aug 2012 12:10:39 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1344600629!8581169!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjQwOTA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15753 invoked from network); 10 Aug 2012 12:10:30 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Aug 2012 12:10:30 -0000
X-IronPort-AV: E=Sophos;i="4.77,745,1336363200"; d="scan'208";a="34247662"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Aug 2012 08:10:29 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 10 Aug 2012 08:10:29 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Szo2p-0006BW-HS;
	Fri, 10 Aug 2012 13:10:23 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: xen-devel@lists.xensource.com
Date: Fri, 10 Aug 2012 13:10:09 +0100
Message-ID: <1344600612-10815-2-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208101222370.21096@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208101222370.21096@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: Tim.Deegan@citrix.com, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH v2 2/5] xen: xen_ulong_t substitution
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

There is still an unwanted unsigned long in the xen public interface:
replace it with xen_ulong_t.

Also typedef xen_ulong_t to uint64_t on ARM.

Changes in v2:

- do not replace the unsigned long in x86 specific calls;
- do not replace the unsigned long in multicall_entry;
- add missing include "xen.h" in version.h;
- use proper printf flag for xen_ulong_t.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 tools/python/xen/lowlevel/xc/xc.c |    2 +-
 xen/include/public/arch-arm.h     |    4 ++--
 xen/include/public/version.h      |    4 +++-
 3 files changed, 6 insertions(+), 4 deletions(-)

diff --git a/tools/python/xen/lowlevel/xc/xc.c b/tools/python/xen/lowlevel/xc/xc.c
index 7c89756..e220f68 100644
--- a/tools/python/xen/lowlevel/xc/xc.c
+++ b/tools/python/xen/lowlevel/xc/xc.c
@@ -1439,7 +1439,7 @@ static PyObject *pyxc_xeninfo(XcObject *self)
     if ( xc_version(self->xc_handle, XENVER_commandline, &xen_commandline) != 0 )
         return pyxc_error_to_exception(self->xc_handle);
 
-    snprintf(str, sizeof(str), "virt_start=0x%lx", p_parms.virt_start);
+    snprintf(str, sizeof(str), "virt_start=0x%"PRI_xen_ulong, p_parms.virt_start);
 
     xen_pagesize = xc_version(self->xc_handle, XENVER_pagesize, NULL);
     if (xen_pagesize < 0 )
diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
index 14ad0ab..2ae6548 100644
--- a/xen/include/public/arch-arm.h
+++ b/xen/include/public/arch-arm.h
@@ -122,8 +122,8 @@ typedef uint64_t xen_pfn_t;
 /* Only one. All other VCPUS must use VCPUOP_register_vcpu_info */
 #define XEN_LEGACY_MAX_VCPUS 1
 
-typedef uint32_t xen_ulong_t;
-#define PRI_xen_ulong PRIx32
+typedef uint64_t xen_ulong_t;
+#define PRI_xen_ulong PRIx64
 
 struct vcpu_guest_context {
 #define _VGCF_online                   0
diff --git a/xen/include/public/version.h b/xen/include/public/version.h
index 8742c2b..c7e6f8c 100644
--- a/xen/include/public/version.h
+++ b/xen/include/public/version.h
@@ -28,6 +28,8 @@
 #ifndef __XEN_PUBLIC_VERSION_H__
 #define __XEN_PUBLIC_VERSION_H__
 
+#include "xen.h"
+
 /* NB. All ops return zero on success, except XENVER_{version,pagesize} */
 
 /* arg == NULL; returns major:minor (16:16). */
@@ -58,7 +60,7 @@ typedef char xen_changeset_info_t[64];
 
 #define XENVER_platform_parameters 5
 struct xen_platform_parameters {
-    unsigned long virt_start;
+    xen_ulong_t virt_start;
 };
 typedef struct xen_platform_parameters xen_platform_parameters_t;
 
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 
-    snprintf(str, sizeof(str), "virt_start=0x%lx", p_parms.virt_start);
+    snprintf(str, sizeof(str), "virt_start=0x%"PRI_xen_ulong, p_parms.virt_start);
 
     xen_pagesize = xc_version(self->xc_handle, XENVER_pagesize, NULL);
     if (xen_pagesize < 0 )
diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
index 14ad0ab..2ae6548 100644
--- a/xen/include/public/arch-arm.h
+++ b/xen/include/public/arch-arm.h
@@ -122,8 +122,8 @@ typedef uint64_t xen_pfn_t;
 /* Only one. All other VCPUS must use VCPUOP_register_vcpu_info */
 #define XEN_LEGACY_MAX_VCPUS 1
 
-typedef uint32_t xen_ulong_t;
-#define PRI_xen_ulong PRIx32
+typedef uint64_t xen_ulong_t;
+#define PRI_xen_ulong PRIx64
 
 struct vcpu_guest_context {
 #define _VGCF_online                   0
diff --git a/xen/include/public/version.h b/xen/include/public/version.h
index 8742c2b..c7e6f8c 100644
--- a/xen/include/public/version.h
+++ b/xen/include/public/version.h
@@ -28,6 +28,8 @@
 #ifndef __XEN_PUBLIC_VERSION_H__
 #define __XEN_PUBLIC_VERSION_H__
 
+#include "xen.h"
+
 /* NB. All ops return zero on success, except XENVER_{version,pagesize} */
 
 /* arg == NULL; returns major:minor (16:16). */
@@ -58,7 +60,7 @@ typedef char xen_changeset_info_t[64];
 
 #define XENVER_platform_parameters 5
 struct xen_platform_parameters {
-    unsigned long virt_start;
+    xen_ulong_t virt_start;
 };
 typedef struct xen_platform_parameters xen_platform_parameters_t;
 
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 12:10:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 12:10:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Szo32-0003jc-6r; Fri, 10 Aug 2012 12:10:36 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Szo30-0003jI-5a
	for xen-devel@lists.xensource.com; Fri, 10 Aug 2012 12:10:34 +0000
Received: from [85.158.138.51:56661] by server-7.bemta-3.messagelabs.com id
	AF/C9-04660-93AF4205; Fri, 10 Aug 2012 12:10:33 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-2.tower-174.messagelabs.com!1344600629!27630003!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzA1NDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6424 invoked from network); 10 Aug 2012 12:10:31 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Aug 2012 12:10:31 -0000
X-IronPort-AV: E=Sophos;i="4.77,745,1336363200"; d="scan'208";a="204801793"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Aug 2012 08:10:29 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 10 Aug 2012 08:10:29 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Szo2p-0006BW-JW;
	Fri, 10 Aug 2012 13:10:23 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: xen-devel@lists.xensource.com
Date: Fri, 10 Aug 2012 13:10:11 +0100
Message-ID: <1344600612-10815-4-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208101222370.21096@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208101222370.21096@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: Tim.Deegan@citrix.com, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH v2 4/5] xen: introduce XEN_GUEST_HANDLE_PARAM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Note: this change does not make any difference on x86 and ia64.

XEN_GUEST_HANDLE_PARAM is going to be used to distinguish guest pointers
stored in memory from guest pointers as hypercall parameters.

Changes in v2:

- add 2 missing #define _XEN_GUEST_HANDLE_PARAM for the compilation of
the compat code.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/arch/x86/x86_64/platform_hypercall.c |    1 +
 xen/common/compat/multicall.c            |    1 +
 xen/include/asm-arm/guest_access.h       |    2 +-
 xen/include/public/arch-arm.h            |   24 ++++++++++++++++++++----
 xen/include/public/arch-ia64.h           |    9 +++++++++
 xen/include/public/arch-x86/xen.h        |    9 +++++++++
 6 files changed, 41 insertions(+), 5 deletions(-)

diff --git a/xen/arch/x86/x86_64/platform_hypercall.c b/xen/arch/x86/x86_64/platform_hypercall.c
index 188aa37..f577761 100644
--- a/xen/arch/x86/x86_64/platform_hypercall.c
+++ b/xen/arch/x86/x86_64/platform_hypercall.c
@@ -38,6 +38,7 @@ CHECK_pf_pcpu_version;
 
 #define COMPAT
 #define _XEN_GUEST_HANDLE(t) XEN_GUEST_HANDLE(t)
+#define _XEN_GUEST_HANDLE_PARAM(t) XEN_GUEST_HANDLE(t)
 typedef int ret_t;
 
 #include "../platform_hypercall.c"
diff --git a/xen/common/compat/multicall.c b/xen/common/compat/multicall.c
index 0eb1212..72db213 100644
--- a/xen/common/compat/multicall.c
+++ b/xen/common/compat/multicall.c
@@ -24,6 +24,7 @@ DEFINE_XEN_GUEST_HANDLE(multicall_entry_compat_t);
 #define call                 compat_call
 #define do_multicall(l, n)   compat_multicall(_##l, n)
 #define _XEN_GUEST_HANDLE(t) XEN_GUEST_HANDLE(t)
+#define _XEN_GUEST_HANDLE_PARAM(t) XEN_GUEST_HANDLE(t)
 
 #include "../multicall.c"
 
diff --git a/xen/include/asm-arm/guest_access.h b/xen/include/asm-arm/guest_access.h
index 0fceae6..7a955cb 100644
--- a/xen/include/asm-arm/guest_access.h
+++ b/xen/include/asm-arm/guest_access.h
@@ -30,7 +30,7 @@ unsigned long raw_clear_guest(void *to, unsigned len);
 /* Cast a guest handle to the specified type of handle. */
 #define guest_handle_cast(hnd, type) ({         \
     type *_x = (hnd).p;                         \
-    (XEN_GUEST_HANDLE(type)) { _x };            \
+    (XEN_GUEST_HANDLE(type)) { {_x } };            \
 })
 
 #define guest_handle_from_ptr(ptr, type)        \
diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
index 2ae6548..9db3c81 100644
--- a/xen/include/public/arch-arm.h
+++ b/xen/include/public/arch-arm.h
@@ -51,18 +51,34 @@
 
 #define XEN_HYPERCALL_TAG   0XEA1
 
+#define uint64_aligned_t uint64_t __attribute__((aligned(8)))
 
 #ifndef __ASSEMBLY__
-#define ___DEFINE_XEN_GUEST_HANDLE(name, type) \
-    typedef struct { type *p; } __guest_handle_ ## name
+#define ___DEFINE_XEN_GUEST_HANDLE(name, type)                  \
+    typedef struct { type *p; }                                 \
+        __guest_handle_ ## name;                                \
+    typedef struct { union { type *p; uint64_aligned_t q; }; }  \
+        __guest_handle_64_ ## name;
 
+/*
+ * XEN_GUEST_HANDLE represents a guest pointer when passed as a field
+ * in a struct in memory. On ARM it is always 8 bytes in size and
+ * 8-byte aligned.
+ * XEN_GUEST_HANDLE_PARAM represents a guest pointer when passed as a
+ * hypercall argument. It is 4 bytes on 32-bit ARM and 8 bytes on aarch64.
+ */
 #define __DEFINE_XEN_GUEST_HANDLE(name, type) \
     ___DEFINE_XEN_GUEST_HANDLE(name, type);   \
     ___DEFINE_XEN_GUEST_HANDLE(const_##name, const type)
 #define DEFINE_XEN_GUEST_HANDLE(name)   __DEFINE_XEN_GUEST_HANDLE(name, name)
-#define __XEN_GUEST_HANDLE(name)        __guest_handle_ ## name
+#define __XEN_GUEST_HANDLE(name)        __guest_handle_64_ ## name
 #define XEN_GUEST_HANDLE(name)          __XEN_GUEST_HANDLE(name)
-#define set_xen_guest_handle_raw(hnd, val)  do { (hnd).p = val; } while (0)
+/* this is going to be changed on 64-bit ARM */
+#define XEN_GUEST_HANDLE_PARAM(name)    __guest_handle_ ## name
+#define set_xen_guest_handle_raw(hnd, val)                  \
+    do { if ( sizeof(hnd) == 8 ) *(uint64_t *)&(hnd) = 0;   \
+         (hnd).p = val;                                     \
+    } while ( 0 )
 #ifdef __XEN_TOOLS__
 #define get_xen_guest_handle(val, hnd)  do { val = (hnd).p; } while (0)
 #endif
diff --git a/xen/include/public/arch-ia64.h b/xen/include/public/arch-ia64.h
index c9da5d4..e4e9688 100644
--- a/xen/include/public/arch-ia64.h
+++ b/xen/include/public/arch-ia64.h
@@ -45,8 +45,17 @@
     ___DEFINE_XEN_GUEST_HANDLE(name, type);   \
     ___DEFINE_XEN_GUEST_HANDLE(const_##name, const type)
 
+/*
+ * XEN_GUEST_HANDLE represents a guest pointer when passed as a field
+ * in a struct in memory.
+ * XEN_GUEST_HANDLE_PARAM represents a guest pointer when passed as a
+ * hypercall argument.
+ * XEN_GUEST_HANDLE_PARAM and XEN_GUEST_HANDLE are the same on ia64 but
+ * they might not be on other architectures.
+ */
 #define DEFINE_XEN_GUEST_HANDLE(name)   __DEFINE_XEN_GUEST_HANDLE(name, name)
 #define XEN_GUEST_HANDLE(name)          __guest_handle_ ## name
+#define XEN_GUEST_HANDLE_PARAM(name)    XEN_GUEST_HANDLE(name)
 #define XEN_GUEST_HANDLE_64(name)       XEN_GUEST_HANDLE(name)
 #define uint64_aligned_t                uint64_t
 #define set_xen_guest_handle_raw(hnd, val)  do { (hnd).p = val; } while (0)
diff --git a/xen/include/public/arch-x86/xen.h b/xen/include/public/arch-x86/xen.h
index 1c186d7..0e10260 100644
--- a/xen/include/public/arch-x86/xen.h
+++ b/xen/include/public/arch-x86/xen.h
@@ -38,12 +38,21 @@
     typedef type * __guest_handle_ ## name
 #endif
 
+/*
+ * XEN_GUEST_HANDLE represents a guest pointer when passed as a field
+ * in a struct in memory.
+ * XEN_GUEST_HANDLE_PARAM represents a guest pointer when passed as a
+ * hypercall argument.
+ * XEN_GUEST_HANDLE_PARAM and XEN_GUEST_HANDLE are the same on X86 but
+ * they might not be on other architectures.
+ */
 #define __DEFINE_XEN_GUEST_HANDLE(name, type) \
     ___DEFINE_XEN_GUEST_HANDLE(name, type);   \
     ___DEFINE_XEN_GUEST_HANDLE(const_##name, const type)
 #define DEFINE_XEN_GUEST_HANDLE(name)   __DEFINE_XEN_GUEST_HANDLE(name, name)
 #define __XEN_GUEST_HANDLE(name)        __guest_handle_ ## name
 #define XEN_GUEST_HANDLE(name)          __XEN_GUEST_HANDLE(name)
+#define XEN_GUEST_HANDLE_PARAM(name)    XEN_GUEST_HANDLE(name)
 #define set_xen_guest_handle_raw(hnd, val)  do { (hnd).p = val; } while (0)
 #ifdef __XEN_TOOLS__
 #define get_xen_guest_handle(val, hnd)  do { val = (hnd).p; } while (0)
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 12:10:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 12:10:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Szo36-0003km-C6; Fri, 10 Aug 2012 12:10:40 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Szo34-0003jL-WA
	for xen-devel@lists.xensource.com; Fri, 10 Aug 2012 12:10:39 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1344600629!8581169!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjQwOTA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15801 invoked from network); 10 Aug 2012 12:10:31 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Aug 2012 12:10:31 -0000
X-IronPort-AV: E=Sophos;i="4.77,745,1336363200"; d="scan'208";a="34247663"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Aug 2012 08:10:29 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 10 Aug 2012 08:10:28 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Szo2p-0006BW-Gw;
	Fri, 10 Aug 2012 13:10:23 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: xen-devel@lists.xensource.com
Date: Fri, 10 Aug 2012 13:10:08 +0100
Message-ID: <1344600612-10815-1-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208101222370.21096@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208101222370.21096@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: Tim.Deegan@citrix.com, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH v2 1/5] xen: improve changes to
	xen_add_to_physmap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is an incremental patch on top of
c0bc926083b5987a3e9944eec2c12ad0580100e2: in order to retain binary
compatibility, it is better to introduce foreign_domid as part of a
union containing both size and foreign_domid.

Changes in v2:

- do not use an anonymous union.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 tools/firmware/hvmloader/pci.c  |    2 +-
 xen/arch/arm/mm.c               |    2 +-
 xen/arch/x86/mm.c               |   10 +++++-----
 xen/arch/x86/x86_64/compat/mm.c |    6 ++++++
 xen/include/public/memory.h     |   11 +++++++----
 5 files changed, 20 insertions(+), 11 deletions(-)

diff --git a/tools/firmware/hvmloader/pci.c b/tools/firmware/hvmloader/pci.c
index fd56e50..6375989 100644
--- a/tools/firmware/hvmloader/pci.c
+++ b/tools/firmware/hvmloader/pci.c
@@ -212,7 +212,7 @@ void pci_setup(void)
         xatp.space = XENMAPSPACE_gmfn_range;
         xatp.idx   = hvm_info->low_mem_pgend;
         xatp.gpfn  = hvm_info->high_mem_pgend;
-        xatp.size  = nr_pages;
+        xatp.u.size  = nr_pages;
         if ( hypercall_memory_op(XENMEM_add_to_physmap, &xatp) != 0 )
             BUG();
         hvm_info->high_mem_pgend += nr_pages;
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 08bc55b..2400e1c 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -506,7 +506,7 @@ static int xenmem_add_to_physmap_once(
         paddr_t maddr;
         struct domain *od;
 
-        rc = rcu_lock_target_domain_by_id(xatp->foreign_domid, &od);
+        rc = rcu_lock_target_domain_by_id(xatp->u.foreign_domid, &od);
         if ( rc < 0 )
             return rc;
         maddr = p2m_lookup(od, xatp->idx << PAGE_SHIFT);
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 9f63974..f5c704e 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4630,7 +4630,7 @@ static int xenmem_add_to_physmap(struct domain *d,
             this_cpu(iommu_dont_flush_iotlb) = 1;
 
         start_xatp = *xatp;
-        while ( xatp->size > 0 )
+        while ( xatp->u.size > 0 )
         {
             rc = xenmem_add_to_physmap_once(d, xatp);
             if ( rc < 0 )
@@ -4638,10 +4638,10 @@ static int xenmem_add_to_physmap(struct domain *d,
 
             xatp->idx++;
             xatp->gpfn++;
-            xatp->size--;
+            xatp->u.size--;
 
             /* Check for continuation if it's not the last iteration */
-            if ( xatp->size > 0 && hypercall_preempt_check() )
+            if ( xatp->u.size > 0 && hypercall_preempt_check() )
             {
                 rc = -EAGAIN;
                 break;
@@ -4651,8 +4651,8 @@ static int xenmem_add_to_physmap(struct domain *d,
         if ( need_iommu(d) )
         {
             this_cpu(iommu_dont_flush_iotlb) = 0;
-            iommu_iotlb_flush(d, start_xatp.idx, start_xatp.size - xatp->size);
-            iommu_iotlb_flush(d, start_xatp.gpfn, start_xatp.size - xatp->size);
+            iommu_iotlb_flush(d, start_xatp.idx, start_xatp.u.size - xatp->u.size);
+            iommu_iotlb_flush(d, start_xatp.gpfn, start_xatp.u.size - xatp->u.size);
         }
 
         return rc;
diff --git a/xen/arch/x86/x86_64/compat/mm.c b/xen/arch/x86/x86_64/compat/mm.c
index f497503..5bcd2fd 100644
--- a/xen/arch/x86/x86_64/compat/mm.c
+++ b/xen/arch/x86/x86_64/compat/mm.c
@@ -59,10 +59,16 @@ int compat_arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
     {
         struct compat_add_to_physmap cmp;
         struct xen_add_to_physmap *nat = COMPAT_ARG_XLAT_VIRT_BASE;
+        enum XLAT_add_to_physmap_u u;
 
         if ( copy_from_guest(&cmp, arg, 1) )
             return -EFAULT;
 
+        if ( cmp.space == XENMAPSPACE_gmfn_range )
+            u = XLAT_add_to_physmap_u_size;
+        if ( cmp.space == XENMAPSPACE_gmfn_foreign )
+            u = XLAT_add_to_physmap_u_foreign_domid;
+
         XLAT_add_to_physmap(nat, &cmp);
         rc = arch_memory_op(op, guest_handle_from_ptr(nat, void));
 
diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
index b2adfbe..7d4ee26 100644
--- a/xen/include/public/memory.h
+++ b/xen/include/public/memory.h
@@ -208,8 +208,12 @@ struct xen_add_to_physmap {
     /* Which domain to change the mapping for. */
     domid_t domid;
 
-    /* Number of pages to go through for gmfn_range */
-    uint16_t    size;
+    union {
+        /* Number of pages to go through for gmfn_range */
+        uint16_t    size;
+        /* IFF gmfn_foreign */
+        domid_t foreign_domid;
+    } u;
 
     /* Source mapping space. */
 #define XENMAPSPACE_shared_info  0 /* shared info page */
@@ -217,8 +221,7 @@ struct xen_add_to_physmap {
 #define XENMAPSPACE_gmfn         2 /* GMFN */
 #define XENMAPSPACE_gmfn_range   3 /* GMFN range */
 #define XENMAPSPACE_gmfn_foreign 4 /* GMFN from another guest */
-    uint16_t space;
-    domid_t foreign_domid; /* IFF gmfn_foreign */
+    unsigned int space;
 
 #define XENMAPIDX_grant_table_status 0x80000000
 
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is an incremental patch on top of
c0bc926083b5987a3e9944eec2c12ad0580100e2: in order to retain binary
compatibility, it is better to introduce foreign_domid as part of a
union containing both size and foreign_domid.

Changes in v2:

- do not use an anonymous union.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 tools/firmware/hvmloader/pci.c  |    2 +-
 xen/arch/arm/mm.c               |    2 +-
 xen/arch/x86/mm.c               |   10 +++++-----
 xen/arch/x86/x86_64/compat/mm.c |    6 ++++++
 xen/include/public/memory.h     |   11 +++++++----
 5 files changed, 20 insertions(+), 11 deletions(-)

diff --git a/tools/firmware/hvmloader/pci.c b/tools/firmware/hvmloader/pci.c
index fd56e50..6375989 100644
--- a/tools/firmware/hvmloader/pci.c
+++ b/tools/firmware/hvmloader/pci.c
@@ -212,7 +212,7 @@ void pci_setup(void)
         xatp.space = XENMAPSPACE_gmfn_range;
         xatp.idx   = hvm_info->low_mem_pgend;
         xatp.gpfn  = hvm_info->high_mem_pgend;
-        xatp.size  = nr_pages;
+        xatp.u.size  = nr_pages;
         if ( hypercall_memory_op(XENMEM_add_to_physmap, &xatp) != 0 )
             BUG();
         hvm_info->high_mem_pgend += nr_pages;
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 08bc55b..2400e1c 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -506,7 +506,7 @@ static int xenmem_add_to_physmap_once(
         paddr_t maddr;
         struct domain *od;
 
-        rc = rcu_lock_target_domain_by_id(xatp->foreign_domid, &od);
+        rc = rcu_lock_target_domain_by_id(xatp->u.foreign_domid, &od);
         if ( rc < 0 )
             return rc;
         maddr = p2m_lookup(od, xatp->idx << PAGE_SHIFT);
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 9f63974..f5c704e 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4630,7 +4630,7 @@ static int xenmem_add_to_physmap(struct domain *d,
             this_cpu(iommu_dont_flush_iotlb) = 1;
 
         start_xatp = *xatp;
-        while ( xatp->size > 0 )
+        while ( xatp->u.size > 0 )
         {
             rc = xenmem_add_to_physmap_once(d, xatp);
             if ( rc < 0 )
@@ -4638,10 +4638,10 @@ static int xenmem_add_to_physmap(struct domain *d,
 
             xatp->idx++;
             xatp->gpfn++;
-            xatp->size--;
+            xatp->u.size--;
 
             /* Check for continuation if it's not the last iteration */
-            if ( xatp->size > 0 && hypercall_preempt_check() )
+            if ( xatp->u.size > 0 && hypercall_preempt_check() )
             {
                 rc = -EAGAIN;
                 break;
@@ -4651,8 +4651,8 @@ static int xenmem_add_to_physmap(struct domain *d,
         if ( need_iommu(d) )
         {
             this_cpu(iommu_dont_flush_iotlb) = 0;
-            iommu_iotlb_flush(d, start_xatp.idx, start_xatp.size - xatp->size);
-            iommu_iotlb_flush(d, start_xatp.gpfn, start_xatp.size - xatp->size);
+            iommu_iotlb_flush(d, start_xatp.idx, start_xatp.u.size - xatp->u.size);
+            iommu_iotlb_flush(d, start_xatp.gpfn, start_xatp.u.size - xatp->u.size);
         }
 
         return rc;
diff --git a/xen/arch/x86/x86_64/compat/mm.c b/xen/arch/x86/x86_64/compat/mm.c
index f497503..5bcd2fd 100644
--- a/xen/arch/x86/x86_64/compat/mm.c
+++ b/xen/arch/x86/x86_64/compat/mm.c
@@ -59,10 +59,16 @@ int compat_arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
     {
         struct compat_add_to_physmap cmp;
         struct xen_add_to_physmap *nat = COMPAT_ARG_XLAT_VIRT_BASE;
+        enum XLAT_add_to_physmap_u u;
 
         if ( copy_from_guest(&cmp, arg, 1) )
             return -EFAULT;
 
+        if ( cmp.space == XENMAPSPACE_gmfn_range )
+            u = XLAT_add_to_physmap_u_size;
+        if ( cmp.space == XENMAPSPACE_gmfn_foreign )
+            u = XLAT_add_to_physmap_u_foreign_domid;
+
         XLAT_add_to_physmap(nat, &cmp);
         rc = arch_memory_op(op, guest_handle_from_ptr(nat, void));
 
diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
index b2adfbe..7d4ee26 100644
--- a/xen/include/public/memory.h
+++ b/xen/include/public/memory.h
@@ -208,8 +208,12 @@ struct xen_add_to_physmap {
     /* Which domain to change the mapping for. */
     domid_t domid;
 
-    /* Number of pages to go through for gmfn_range */
-    uint16_t    size;
+    union {
+        /* Number of pages to go through for gmfn_range */
+        uint16_t    size;
+        /* IFF gmfn_foreign */
+        domid_t foreign_domid;
+    } u;
 
     /* Source mapping space. */
 #define XENMAPSPACE_shared_info  0 /* shared info page */
@@ -217,8 +221,7 @@ struct xen_add_to_physmap {
 #define XENMAPSPACE_gmfn         2 /* GMFN */
 #define XENMAPSPACE_gmfn_range   3 /* GMFN range */
 #define XENMAPSPACE_gmfn_foreign 4 /* GMFN from another guest */
-    uint16_t space;
-    domid_t foreign_domid; /* IFF gmfn_foreign */
+    unsigned int space;
 
 #define XENMAPIDX_grant_table_status 0x80000000
 
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 12:12:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 12:12:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Szo52-0004RE-5j; Fri, 10 Aug 2012 12:12:40 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1Szo50-0004Qu-Tw
	for xen-devel@lists.xen.org; Fri, 10 Aug 2012 12:12:39 +0000
Received: from [85.158.143.99:14709] by server-2.bemta-4.messagelabs.com id
	2D/7C-19021-6BAF4205; Fri, 10 Aug 2012 12:12:38 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-14.tower-216.messagelabs.com!1344600757!17911132!1
X-Originating-IP: [209.85.212.169]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17821 invoked from network); 10 Aug 2012 12:12:37 -0000
Received: from mail-wi0-f169.google.com (HELO mail-wi0-f169.google.com)
	(209.85.212.169)
	by server-14.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Aug 2012 12:12:37 -0000
Received: by wibhm2 with SMTP id hm2so1149322wib.2
	for <xen-devel@lists.xen.org>; Fri, 10 Aug 2012 05:12:37 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:user-agent:date:subject:from:to:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=Vy5ZlK/gT/G/Zcwjzbpx9lf/0bFj8vQzVQ3kRzG0NCU=;
	b=iPCh7hTtFB+VuzKgkaTZEk9EDhCj7rS8QLvu7V6ock1AOATHfEzgoLwdkpw5eC4nCM
	ArON9q0kvbZe3bVg5AvVmo9fmIVuy3QZlIyxtw3FmfsBgDdKD1idsKGOsL+kGpKB9Ed0
	lXQYfWMwBYU9pq8djttVXSVBGXzmD8TLCTrBMGmlLolKl/FE47oYQy9BLzWEZjsYmaug
	reSZFUU3Z2mz0X07ulq5tqvH1pxNaqQiZbPLl5rHb/6OXjVLQoG/7NwY8HPdJKBuY5yH
	3DlVsskPsnIbRC8q1nKH5g9VCwov+7Y0D8xFIa5EBgzeNPDznyB/YxjwrDsjmWRiVHKQ
	1k8g==
Received: by 10.216.226.31 with SMTP id a31mr660471weq.41.1344600757509;
	Fri, 10 Aug 2012 05:12:37 -0700 (PDT)
Received: from [192.168.1.3] (host86-178-46-238.range86-178.btcentralplus.com.
	[86.178.46.238])
	by mx.google.com with ESMTPS id h9sm7424998wiz.1.2012.08.10.05.12.34
	(version=SSLv3 cipher=OTHER); Fri, 10 Aug 2012 05:12:36 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.33.0.120411
Date: Fri, 10 Aug 2012 13:12:26 +0100
From: Keir Fraser <keir@xen.org>
To: <lars.kurth@xen.org>,
	<xen-devel@lists.xen.org>
Message-ID: <CC4AB93A.48780%keir@xen.org>
Thread-Topic: [Xen-devel] [ANNOUNCE] Third release candidates for 4.0.4 and
	4.1.3
Thread-Index: Ac128WeLjTZEvG43bE6uYSF8KeC1Hg==
In-Reply-To: <5024F531.20509@xen.org>
Mime-version: 1.0
Subject: Re: [Xen-devel] [ANNOUNCE] Third release candidates for 4.0.4 and
 4.1.3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

All of XSA-7 through XSA-11 are fixed in 4.1.3 and 4.0.4, so there's a good
security argument for everyone to upgrade, quite apart from the various
other fixes (e.g., there are over 100 patches on the 4.1 branch since
4.1.2): libxl bug fixes, plus a bunch of hypervisor fixes for particular
systems or particular minority use cases. There's no broader generalisation
to be made really; each patch fixes some usually fairly minor issue, but
they add up to a big bag of small fixes!

XSA-7 to XSA-11 *are* called out explicitly in their respective changeset
comments in both 4.0 and 4.1 repositories by the way. If you can't see them
you may not be looking at the right thing. ;-)

 -- Keir

On 10/08/2012 12:49, "Lars Kurth" <lars.kurth@xen.org> wrote:

> Keir,
> do we have a list of what has gone into these (worth highlighting). I
> can't see any CVE and XSA numbers and the check-in descriptions don't
> tell me an awful lot
> Lars
> 
> On 22/07/2012 16:42, Keir Fraser wrote:
>> Folks,
>> 
>> I have just tagged 3rd release candidates for 4.0.4 and 4.1.3:
>> 
>> http://xenbits.xen.org/staging/xen-4.0-testing.hg (tag 4.0.4-rc3)
>> http://xenbits.xen.org/staging/xen-4.1-testing.hg (tag 4.1.3-rc3)
>> 
>> Please test! Assuming nothing crops up, we will make these the official
>> point releases at the end of this month.
>> 
>>   -- Keir
>> 
>> 
>> 
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 12:13:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 12:13:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Szo5D-0004U1-Jd; Fri, 10 Aug 2012 12:12:51 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Szo5C-0004Tb-BB
	for xen-devel@lists.xensource.com; Fri, 10 Aug 2012 12:12:50 +0000
Received: from [85.158.143.99:15489] by server-1.bemta-4.messagelabs.com id
	DA/70-20198-1CAF4205; Fri, 10 Aug 2012 12:12:49 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-4.tower-216.messagelabs.com!1344600768!22261814!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg4NDM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29640 invoked from network); 10 Aug 2012 12:12:49 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-4.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Aug 2012 12:12:49 -0000
X-IronPort-AV: E=Sophos;i="4.77,745,1336348800"; d="scan'208";a="13951957"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Aug 2012 12:12:25 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 10 Aug 2012 13:12:26 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Szo4n-0006LH-IC;
	Fri, 10 Aug 2012 12:12:25 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Szo4n-0007nD-Cm;
	Fri, 10 Aug 2012 13:12:25 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13588-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 10 Aug 2012 13:12:25 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.0-testing test] 13588: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13588 xen-4.0-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13588/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-sedf      7 debian-install              fail pass in 13580
 test-i386-i386-xl             9 guest-start        fail in 13580 pass in 13588
 test-amd64-i386-xl-credit2    5 xen-boot           fail in 13580 pass in 13588

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf  12 guest-saverestore.2 fail in 13580 REGR. vs. 13524

Tests which did not succeed, but are not blocking:
 test-i386-i386-xl            15 guest-stop                   fail   never pass
 test-amd64-amd64-xl-pcipt-intel  8 debian-fixup                fail never pass
 test-amd64-i386-xl           15 guest-stop                   fail   never pass
 test-amd64-amd64-xl          15 guest-stop                   fail   never pass
 test-amd64-amd64-xl-sedf-pin 15 guest-stop                   fail   never pass
 test-amd64-i386-xl-credit2   15 guest-stop                   fail   never pass
 test-amd64-i386-xl-multivcpu 15 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win7-amd64  8 guest-saverestore            fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64  8 guest-saverestore      fail never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-qemuu-rhel6hvm-amd  7 redhat-install           fail never pass
 test-amd64-i386-rhel6hvm-amd  7 redhat-install               fail   never pass
 test-amd64-i386-rhel6hvm-intel  7 redhat-install               fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-i386-qemuu-rhel6hvm-intel  7 redhat-install         fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3    7 windows-install              fail   never pass
 test-i386-i386-xl-win         7 windows-install              fail   never pass
 test-amd64-i386-xl-win7-amd64  8 guest-saverestore            fail  never pass
 test-i386-i386-xl-qemuu-winxpsp3  7 windows-install            fail never pass
 test-amd64-amd64-xl-winxpsp3  7 windows-install              fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1  7 windows-install          fail never pass
 test-amd64-i386-xl-win-vcpus1  7 windows-install              fail  never pass
 test-amd64-amd64-xl-qemuu-winxpsp3  7 windows-install          fail never pass
 test-amd64-amd64-xl-win       7 windows-install              fail   never pass

version targeted for testing:
 xen                  228e6f382d5d
baseline version:
 xen                  6d7ae840463c

------------------------------------------------------------
People who touched revisions under test:
  David Vrabel <david.vrabel@citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Joe Jin <joe.jin@oracle.com>
  Keir Fraser <keir@xen.org>
  Olaf Hering <olaf@aepfle.de>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-i386-i386-xl                                            fail    
 test-amd64-i386-rhel6hvm-amd                                 fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   fail    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-i386-xl-multivcpu                                 fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-4.0-testing
+ revision=228e6f382d5d
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-4.0-testing 228e6f382d5d
+ branch=xen-4.0-testing
+ revision=228e6f382d5d
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-4.0-testing
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/staging/xen-4.0-testing.hg
++ : git://xenbits.xen.org/staging/qemu-xen-4.0-testing.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-4.0-testing
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-4.0-testing.git
++ : daily-cron.xen-4.0-testing
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-4.0-testing.git
+ info_linux_tree xen-4.0-testing
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-4.0-testing.hg
+ hg push -r 228e6f382d5d ssh://xen@xenbits.xensource.com/HG/xen-4.0-testing.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-4.0-testing.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 5 changesets with 6 changes to 6 files

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-i386-xl-credit2    5 xen-boot           fail in 13580 pass in 13588

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf  12 guest-saverestore.2 fail in 13580 REGR. vs. 13524

Tests which did not succeed, but are not blocking:
 test-i386-i386-xl            15 guest-stop                   fail   never pass
 test-amd64-amd64-xl-pcipt-intel  8 debian-fixup                fail never pass
 test-amd64-i386-xl           15 guest-stop                   fail   never pass
 test-amd64-amd64-xl          15 guest-stop                   fail   never pass
 test-amd64-amd64-xl-sedf-pin 15 guest-stop                   fail   never pass
 test-amd64-i386-xl-credit2   15 guest-stop                   fail   never pass
 test-amd64-i386-xl-multivcpu 15 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win7-amd64  8 guest-saverestore            fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64  8 guest-saverestore      fail never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-qemuu-rhel6hvm-amd  7 redhat-install           fail never pass
 test-amd64-i386-rhel6hvm-amd  7 redhat-install               fail   never pass
 test-amd64-i386-rhel6hvm-intel  7 redhat-install               fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-i386-qemuu-rhel6hvm-intel  7 redhat-install         fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3    7 windows-install              fail   never pass
 test-i386-i386-xl-win         7 windows-install              fail   never pass
 test-amd64-i386-xl-win7-amd64  8 guest-saverestore            fail  never pass
 test-i386-i386-xl-qemuu-winxpsp3  7 windows-install            fail never pass
 test-amd64-amd64-xl-winxpsp3  7 windows-install              fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1  7 windows-install          fail never pass
 test-amd64-i386-xl-win-vcpus1  7 windows-install              fail  never pass
 test-amd64-amd64-xl-qemuu-winxpsp3  7 windows-install          fail never pass
 test-amd64-amd64-xl-win       7 windows-install              fail   never pass

version targeted for testing:
 xen                  228e6f382d5d
baseline version:
 xen                  6d7ae840463c

------------------------------------------------------------
People who touched revisions under test:
  David Vrabel <david.vrabel@citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Joe Jin <joe.jin@oracle.com>
  Keir Fraser <keir@xen.org>
  Olaf Hering <olaf@aepfle.de>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-i386-i386-xl                                            fail    
 test-amd64-i386-rhel6hvm-amd                                 fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   fail    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-i386-xl-multivcpu                                 fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-4.0-testing
+ revision=228e6f382d5d
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-4.0-testing 228e6f382d5d
+ branch=xen-4.0-testing
+ revision=228e6f382d5d
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-4.0-testing
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/staging/xen-4.0-testing.hg
++ : git://xenbits.xen.org/staging/qemu-xen-4.0-testing.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-4.0-testing
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-4.0-testing.git
++ : daily-cron.xen-4.0-testing
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-4.0-testing.git
+ info_linux_tree xen-4.0-testing
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-4.0-testing.hg
+ hg push -r 228e6f382d5d ssh://xen@xenbits.xensource.com/HG/xen-4.0-testing.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-4.0-testing.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 5 changesets with 6 changes to 6 files


From xen-devel-bounces@lists.xen.org Fri Aug 10 12:16:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 12:16:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Szo8u-00051f-8c; Fri, 10 Aug 2012 12:16:40 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1Szo8s-00051V-T6
	for xen-devel@lists.xen.org; Fri, 10 Aug 2012 12:16:39 +0000
Received: from [85.158.139.83:12019] by server-12.bemta-5.messagelabs.com id
	CE/66-22359-6ABF4205; Fri, 10 Aug 2012 12:16:38 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-6.tower-182.messagelabs.com!1344600997!23588199!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg4NDM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13346 invoked from network); 10 Aug 2012 12:16:37 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-6.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Aug 2012 12:16:37 -0000
X-IronPort-AV: E=Sophos;i="4.77,745,1336348800"; d="scan'208";a="13952057"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Aug 2012 12:16:37 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 10 Aug 2012 13:16:37 +0100
Date: Fri, 10 Aug 2012 13:16:12 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Mark van Dijk <lists+xen@internecto.net>
In-Reply-To: <20120810140611.4ca8a1fb@internecto.net>
Message-ID: <alpine.DEB.2.02.1208101313320.21096@kaball.uk.xensource.com>
References: <20120810115405.05af653e@internecto.net>
	<alpine.DEB.2.02.1208101110370.21096@kaball.uk.xensource.com>
	<20120810140611.4ca8a1fb@internecto.net>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Roger Pau Monne <roger.pau@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] xen 4.2.0-rc3-pre: building failure on alpine linux
 / uclibc
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 10 Aug 2012, Mark van Dijk wrote:
> > This is upstream QEMU that is breaking, not qemu-xen-traditional (see
> > the code path: qemu-xen-dir-remote instead of
> > qemu-xen-traditional-dir-remote).
> 
> Ah, I didn't know, it's a little bit confusing. Would you like me to
> submit a bug report with them?
> 
> > Moreover it is breaking compiling qemu-nbd that we aren't currently
> > using. I would try out the following change to the configure script:
> > (..snip..)
> 
> Yes, that works, thanks! But it gives a new error which I couldn't
> solve yet:
> 
> ---
> LINK  qemu-nbd
> 
> cutils.o: In function `strtosz_suffix_unit':
> 
> tools/qemu-xen-dir/cutils.c:354: undefined reference to
> `__isnan'
> 
> tools/qemu-xen-dir/cutils.c:357: undefined reference to `modf'
> collect2: ld returned 1 exit status
> ---
> 
> Any idea there?

It is another "-lm" missing somewhere.


> Also -- If we're not using qemu-nbd then could you suggest a
> workaround please? I'd prefer something that can be patched or
> issued before I run the make process. (I run the make process
> twice now - if the first run fails, patch, then run again and if it
> fails again error out)

You can disable qemu-nbd altogether with the following patch:


diff --git a/configure b/configure
index 027a718..f05d3c5 100755
--- a/configure
+++ b/configure
@@ -2993,12 +2988,6 @@ if test "$softmmu" = yes ; then
       virtfs=no
     fi
   fi
-  if [ "$linux" = "yes" -o "$bsd" = "yes" -o "$solaris" = "yes" ] ; then
-      tools="qemu-nbd\$(EXESUF) $tools"
-    if [ "$guest_agent" = "yes" ]; then
-      tools="qemu-ga\$(EXESUF) $tools"
-    fi
-  fi
 fi
 if test "$smartcard_nss" = "yes" ; then
   tools="vscclient\$(EXESUF) $tools"


From xen-devel-bounces@lists.xen.org Fri Aug 10 12:17:18 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 12:17:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Szo9N-00054s-Ly; Fri, 10 Aug 2012 12:17:09 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1Szo9L-00054R-Mv
	for xen-devel@lists.xen.org; Fri, 10 Aug 2012 12:17:07 +0000
Received: from [85.158.138.51:36369] by server-10.bemta-3.messagelabs.com id
	61/78-07905-2CBF4205; Fri, 10 Aug 2012 12:17:06 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-9.tower-174.messagelabs.com!1344601026!27657393!1
X-Originating-IP: [74.125.82.51]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14759 invoked from network); 10 Aug 2012 12:17:06 -0000
Received: from mail-wg0-f51.google.com (HELO mail-wg0-f51.google.com)
	(74.125.82.51)
	by server-9.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Aug 2012 12:17:06 -0000
Received: by wgbed3 with SMTP id ed3so966595wgb.32
	for <xen-devel@lists.xen.org>; Fri, 10 Aug 2012 05:17:06 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:user-agent:date:subject:from:to:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=S1dKjunUHlAqEV8AH42oakpQu8iihdJ1pb/SC5M1Zsw=;
	b=oTiuWcCwEuRVW2j2b8CMyJczvOL5XQO5+QfRCpP6wDn02tM6SVhraaJ4TNUz2+OR2F
	2yQW7UZ0yI9lSRiheZOg6t2+Wnu3EIvoPWegL52OGdMaoYW98j4uOn9vhBXVq3mAuMvv
	KrU/G4Sizmr3HkUDiQBv5egRD1+tfIHBlzZb15irSu4HukWbLFeoSPYl0Got030FCFbJ
	bnO4cpS5H3oQKiMXksDSpUhlt3mmhqhx/CBQcIMUbP/6NW3keYETcdperaNXNWwSpU+4
	FICFSgGE2cUAZZS31gvPMu4hUTTyBMRlo4CfovK3d49T/jHAW3KMM2zSq5kw/IvsOzm/
	EPNw==
Received: by 10.180.14.35 with SMTP id m3mr5563495wic.16.1344601026065;
	Fri, 10 Aug 2012 05:17:06 -0700 (PDT)
Received: from [192.168.1.3] (host86-178-46-238.range86-178.btcentralplus.com.
	[86.178.46.238])
	by mx.google.com with ESMTPS id cu1sm7420085wib.6.2012.08.10.05.17.04
	(version=SSLv3 cipher=OTHER); Fri, 10 Aug 2012 05:17:05 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.33.0.120411
Date: Fri, 10 Aug 2012 13:16:52 +0100
From: Keir Fraser <keir@xen.org>
To: Jan Beulich <JBeulich@suse.com>,
	xen-devel <xen-devel@lists.xen.org>
Message-ID: <CC4ABA44.487C2%keir@xen.org>
Thread-Topic: [Xen-devel] [PATCH] x86/cpuidle: clean up statistics reporting
	to user mode
Thread-Index: Ac128gYXlTWvhQV0EkKKQWS3sZ5o5g==
In-Reply-To: <5023EE340200007800093ED0@nat28.tlf.novell.com>
Mime-version: 1.0
Subject: Re: [Xen-devel] [PATCH] x86/cpuidle: clean up statistics reporting
 to user mode
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 09/08/2012 16:07, "Jan Beulich" <JBeulich@suse.com> wrote:

> First of all, when no ACPI Cx data was reported, make sure the usage
> count passed back to user mode is not random.
> 
> Besides that, fold a lot of redundant code.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

I don't know a great deal about this code, but this looks good to me, so for
what it's worth you can have my ack.

Acked-by: Keir Fraser <keir@xen.org>

> --- a/xen/arch/x86/acpi/cpu_idle.c
> +++ b/xen/arch/x86/acpi/cpu_idle.c
> @@ -1100,36 +1100,23 @@ int pmstat_get_cx_stat(uint32_t cpuid, s
>      }
>  
>      stat->last = power->last_state ? power->last_state->idx : 0;
> -    stat->nr = power->count;
>      stat->idle_time = get_cpu_idle_time(cpuid);
>  
>      /* mimic the stat when detail info hasn't been registered by dom0 */
>      if ( pm_idle_save == NULL )
>      {
> -        /* C1 */
> -        usage[1] = 1;
> -        res[1] = stat->idle_time;
> -
> -        /* C0 */
> -        res[0] = NOW() - res[1];
> -
> -        if ( copy_to_guest_offset(stat->triggers, 0, &usage[0], 2) ||
> -            copy_to_guest_offset(stat->residencies, 0, &res[0], 2) )
> -            return -EFAULT;
> -
> -        stat->pc2 = 0;
> -        stat->pc3 = 0;
> -        stat->pc6 = 0;
> -        stat->pc7 = 0;
> -        stat->cc3 = 0;
> -        stat->cc6 = 0;
> -        stat->cc7 = 0;
> -        return 0;
> -    }
> +        stat->nr = 2;
> +
> +        usage[1] = idle_usage = 1;
> +        res[1] = idle_res = stat->idle_time;
>  
> -    for ( i = power->count - 1; i >= 0; i-- )
> +        memset(&hw_res, 0, sizeof(hw_res));
> +    }
> +    else
>      {
> -        if ( i != 0 )
> +        stat->nr = power->count;
> +
> +        for ( i = 1; i < power->count; i++ )
>          {
>              spin_lock_irq(&power->stat_lock);
>              usage[i] = power->states[i].usage;
> @@ -1139,18 +1126,16 @@ int pmstat_get_cx_stat(uint32_t cpuid, s
>              idle_usage += usage[i];
>              idle_res += res[i];
>          }
> -        else
> -        {
> -            usage[i] = idle_usage;
> -            res[i] = NOW() - idle_res;
> -        }
> +
> +        get_hw_residencies(cpuid, &hw_res);
>      }
>  
> -    if ( copy_to_guest_offset(stat->triggers, 0, &usage[0], power->count) ||
> -        copy_to_guest_offset(stat->residencies, 0, &res[0], power->count) )
> -        return -EFAULT;
> +    usage[0] = idle_usage;
> +    res[0] = NOW() - idle_res;
>  
> -    get_hw_residencies(cpuid, &hw_res);
> +    if ( copy_to_guest(stat->triggers, usage, stat->nr) ||
> +         copy_to_guest(stat->residencies, res, stat->nr) )
> +        return -EFAULT;
>  
>      stat->pc2 = hw_res.pc2;
>      stat->pc3 = hw_res.pc3;
> 
> 
> 




http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 12:32:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 12:32:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzoNy-0005Mp-4T; Fri, 10 Aug 2012 12:32:14 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Wei.Wang2@amd.com>) id 1SzoNx-0005Mk-7I
	for xen-devel@lists.xensource.com; Fri, 10 Aug 2012 12:32:13 +0000
Received: from [85.158.143.99:9218] by server-2.bemta-4.messagelabs.com id
	50/0B-19021-C4FF4205; Fri, 10 Aug 2012 12:32:12 +0000
X-Env-Sender: Wei.Wang2@amd.com
X-Msg-Ref: server-12.tower-216.messagelabs.com!1344601885!22291737!1
X-Originating-IP: [216.32.180.187]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26911 invoked from network); 10 Aug 2012 12:31:28 -0000
Received: from co1ehsobe004.messaging.microsoft.com (HELO
	co1outboundpool.messaging.microsoft.com) (216.32.180.187)
	by server-12.tower-216.messagelabs.com with AES128-SHA encrypted SMTP;
	10 Aug 2012 12:31:28 -0000
Received: from mail150-co1-R.bigfish.com (10.243.78.230) by
	CO1EHSOBE009.bigfish.com (10.243.66.72) with Microsoft SMTP Server id
	14.1.225.23; Fri, 10 Aug 2012 12:31:24 +0000
Received: from mail150-co1 (localhost [127.0.0.1])	by
	mail150-co1-R.bigfish.com (Postfix) with ESMTP id BF122D40230;
	Fri, 10 Aug 2012 12:31:24 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:163.181.249.109; KIP:(null); UIP:(null);
	IPV:NLI; H:ausb3twp02.amd.com; RD:none; EFVD:NLI
X-SpamScore: 0
X-BigFish: VPS0(zzbb2dI98dI9371I1432I4015I78fbmzz1202hzz8275bhz2dh668h839hd25he5bhf0ah107ah)
Received: from mail150-co1 (localhost.localdomain [127.0.0.1]) by mail150-co1
	(MessageSwitch) id 1344601882771104_29303;
	Fri, 10 Aug 2012 12:31:22 +0000 (UTC)
Received: from CO1EHSMHS009.bigfish.com (unknown [10.243.78.249])	by
	mail150-co1.bigfish.com (Postfix) with ESMTP id B9EABD00044;
	Fri, 10 Aug 2012 12:31:22 +0000 (UTC)
Received: from ausb3twp02.amd.com (163.181.249.109) by
	CO1EHSMHS009.bigfish.com (10.243.66.19) with Microsoft SMTP Server id
	14.1.225.23; Fri, 10 Aug 2012 12:31:22 +0000
X-WSS-ID: 0M8JHG3-02-54W-02
X-M-MSG: 
Received: from sausexedgep01.amd.com (sausexedgep01-ext.amd.com
	[163.181.249.72])	(using TLSv1 with cipher AES128-SHA (128/128
	bits))	(No
	client certificate requested)	by ausb3twp02.amd.com (Axway MailGate
	3.8.1)
	with ESMTP id 2D2C8C8009;	Fri, 10 Aug 2012 07:31:15 -0500 (CDT)
Received: from SAUSEXDAG03.amd.com (163.181.55.3) by sausexedgep01.amd.com
	(163.181.36.54) with Microsoft SMTP Server (TLS) id 8.3.192.1;
	Fri, 10 Aug 2012 07:31:39 -0500
Received: from storexhtp02.amd.com (172.24.4.4) by sausexdag03.amd.com
	(163.181.55.3) with Microsoft SMTP Server (TLS) id 14.1.323.3;
	Fri, 10 Aug 2012 07:31:19 -0500
Received: from gwo.osrc.amd.com (165.204.16.204) by storexhtp02.amd.com
	(172.24.4.4) with Microsoft SMTP Server id 8.3.213.0; Fri, 10 Aug 2012
	08:31:18 -0400
Received: from [165.204.15.57] (gran.osrc.amd.com [165.204.15.57])	by
	gwo.osrc.amd.com (Postfix) with ESMTP id 2D54649C20C; Fri, 10 Aug 2012
	13:31:17 +0100 (BST)
Message-ID: <5024FF0A.6090006@amd.com>
Date: Fri, 10 Aug 2012 14:31:06 +0200
From: Wei Wang <wei.wang2@amd.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:12.0) Gecko/20120421 Thunderbird/12.0
MIME-Version: 1.0
To: Santosh Jodh <santosh.jodh@citrix.com>
References: <f687c55802629c72405d.1344562989@REDBLD-XS.ad.xensource.com>
In-Reply-To: <f687c55802629c72405d.1344562989@REDBLD-XS.ad.xensource.com>
X-OriginatorOrg: amd.com
Cc: xen-devel@lists.xensource.com, tim@xen.org, xiantao.zhang@intel.com
Subject: Re: [Xen-devel] [PATCH] dump_p2m_table: For IOMMU
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Santosh,

On 08/10/2012 03:43 AM, Santosh Jodh wrote:
> New key handler 'o' to dump the IOMMU p2m table for each domain.
> Skips dumping table for domain0.
> Intel and AMD specific iommu_ops handler for dumping p2m table.
>
> Signed-off-by: Santosh Jodh<santosh.jodh@citrix.com>
>
> diff -r 472fc515a463 -r f687c5580262 xen/drivers/passthrough/amd/pci_amd_iommu.c
> --- a/xen/drivers/passthrough/amd/pci_amd_iommu.c	Tue Aug 07 18:37:31 2012 +0100
> +++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c	Thu Aug 09 18:31:32 2012 -0700
> @@ -22,6 +22,7 @@
>   #include <xen/pci.h>
>   #include <xen/pci_regs.h>
>   #include <xen/paging.h>
> +#include <xen/softirq.h>
>   #include <asm/hvm/iommu.h>
>   #include <asm/amd-iommu.h>
>   #include <asm/hvm/svm/amd-iommu-proto.h>
> @@ -512,6 +513,81 @@ static int amd_iommu_group_id(u16 seg, u
>
>   #include <asm/io_apic.h>
>
> +static void amd_dump_p2m_table_level(struct page_info* pg, int level,
> +                                     paddr_t gpa, int *indent)
> +{
> +    paddr_t address;
> +    void *table_vaddr, *pde;
> +    paddr_t next_table_maddr;
> +    int index, next_level, present;
> +    u32 *entry;
> +
> +    if ( level <= 1 )
> +        return;

So the L1 page table is not printed; is that by design?


> +    table_vaddr = __map_domain_page(pg);
> +    if ( table_vaddr == NULL )
> +    {
> +        printk("Failed to map IOMMU domain page %"PRIpaddr"\n",
> +                page_to_maddr(pg));
> +        return;
> +    }
> +
> +    for ( index = 0; index < PTE_PER_TABLE_SIZE; index++ )
> +    {
> +        if ( !(index % 2) )
> +            process_pending_softirqs();
> +
> +        pde = table_vaddr + (index * IOMMU_PAGE_TABLE_ENTRY_SIZE);
> +        next_table_maddr = amd_iommu_get_next_table_from_pte(pde);
> +        entry = (u32*)pde;
> +
> +        next_level = get_field_from_reg_u32(entry[0],
> +                                            IOMMU_PDE_NEXT_LEVEL_MASK,
> +                                            IOMMU_PDE_NEXT_LEVEL_SHIFT);
> +
> +        present = get_field_from_reg_u32(entry[0],
> +                                         IOMMU_PDE_PRESENT_MASK,
> +                                         IOMMU_PDE_PRESENT_SHIFT);
> +
> +        address = gpa + amd_offset_level_address(index, level);
> +        if ( present )
> +        {
> +            int i;
> +
> +            for ( i = 0; i < *indent; i++ )
> +                printk("  ");
> +
> +            printk("gfn: %"PRIpaddr"  mfn: %"PRIpaddr"\n",
> +                   PFN_DOWN(address), PFN_DOWN(next_table_maddr));
> +
> +            if ( (next_table_maddr != 0) && (next_level != 0) )
> +            {
> +                *indent += 1;
> +                amd_dump_p2m_table_level(
> +                    maddr_to_page(next_table_maddr), level - 1,
> +                    address, indent);
> +
> +                *indent -= 1;
> +            }
> +        }
> +    }
> +
> +    unmap_domain_page(table_vaddr);
> +}
> +
> +static void amd_dump_p2m_table(struct domain *d)
> +{
> +    struct hvm_iommu *hd  = domain_hvm_iommu(d);
> +    int indent;
> +
> +    if ( !hd->root_table )
> +        return;
> +
> +    indent = 0;
> +    amd_dump_p2m_table_level(hd->root_table, hd->paging_mode, 0, &indent);
> +}
> +
>   const struct iommu_ops amd_iommu_ops = {
>       .init = amd_iommu_domain_init,
>       .dom0_init = amd_iommu_dom0_init,
> @@ -531,4 +607,5 @@ const struct iommu_ops amd_iommu_ops = {
>       .resume = amd_iommu_resume,
>       .share_p2m = amd_iommu_share_p2m,
>       .crash_shutdown = amd_iommu_suspend,
> +    .dump_p2m_table = amd_dump_p2m_table,
>   };


I tested the AMD part on an AMD IOMMU system and got output like this:

(XEN) domain1 IOMMU p2m table:
(XEN) gfn: 0000000000000000  mfn: 00000000001b1fb5
(XEN)   gfn: 0000000000000000  mfn: 000000000021f80b
(XEN)   gfn: 0000000000000000  mfn: 000000000023f400
(XEN)   gfn: 0000000000000000  mfn: 000000000010a600
(XEN)   gfn: 0000000000000000  mfn: 000000000023f200
(XEN)   gfn: 0000000000000000  mfn: 000000000010a400
(XEN)   gfn: 0000000000000000  mfn: 000000000023f000
(XEN)   gfn: 0000000000000000  mfn: 000000000010ae00
(XEN)   gfn: 0000000000000000  mfn: 000000000023ee00
(XEN)   gfn: 0000000000000001  mfn: 000000000010ac00
(XEN)   gfn: 0000000000000001  mfn: 000000000023ec00
(XEN)   gfn: 0000000000000001  mfn: 000000000010aa00
(XEN)   gfn: 0000000000000001  mfn: 000000000023ea00
(XEN)   gfn: 0000000000000001  mfn: 000000000010a800
(XEN)   gfn: 0000000000000001  mfn: 000000000023e800
(XEN)   gfn: 0000000000000001  mfn: 000000000010be00
(XEN)   gfn: 0000000000000001  mfn: 000000000023e600
(XEN)   gfn: 0000000000000002  mfn: 000000000010bc00
(XEN)   gfn: 0000000000000002  mfn: 000000000023e400
(XEN)   gfn: 0000000000000002  mfn: 000000000010ba00
(XEN)   gfn: 0000000000000002  mfn: 000000000023e200
(XEN)   gfn: 0000000000000002  mfn: 000000000010b800
(XEN)   gfn: 0000000000000002  mfn: 000000000023e000
(XEN)   gfn: 0000000000000002  mfn: 000000000010b600
(XEN)   gfn: 0000000000000002  mfn: 000000000023de00
(XEN)   gfn: 0000000000000003  mfn: 000000000010b400
(XEN)   gfn: 0000000000000003  mfn: 000000000023dc00
(XEN)   gfn: 0000000000000003  mfn: 000000000010b200
(XEN)   gfn: 0000000000000003  mfn: 000000000023da00
(XEN)   gfn: 0000000000000003  mfn: 000000000010b000
(XEN)   gfn: 0000000000000003  mfn: 000000000023d800
(XEN)   gfn: 0000000000000003  mfn: 000000000010fe00
(XEN)   gfn: 0000000000000003  mfn: 000000000023d600
(XEN)   gfn: 0000000000000004  mfn: 000000000010fc00
(XEN)   gfn: 0000000000000004  mfn: 000000000023d400
(XEN)   gfn: 0000000000000004  mfn: 000000000010fa00
(XEN)   gfn: 0000000000000004  mfn: 000000000023d200
(XEN)   gfn: 0000000000000004  mfn: 000000000010f800
(XEN)   gfn: 0000000000000004  mfn: 000000000023d000
(XEN)   gfn: 0000000000000004  mfn: 000000000010f600
(XEN)   gfn: 0000000000000004  mfn: 000000000023ce00
(XEN)   gfn: 0000000000000005  mfn: 000000000010f400
(XEN)   gfn: 0000000000000005  mfn: 000000000023cc00
(XEN)   gfn: 0000000000000005  mfn: 000000000010f200
(XEN)   gfn: 0000000000000005  mfn: 000000000023ca00
(XEN)   gfn: 0000000000000005  mfn: 000000000010f000
(XEN)   gfn: 0000000000000005  mfn: 000000000023c800
(XEN)   gfn: 0000000000000005  mfn: 000000000010ee00
(XEN)   gfn: 0000000000000005  mfn: 000000000023c600
(XEN)   gfn: 0000000000000006  mfn: 000000000010ec00
(XEN)   gfn: 0000000000000006  mfn: 000000000023c400
(XEN)   gfn: 0000000000000006  mfn: 000000000010ea00
(XEN)   gfn: 0000000000000006  mfn: 000000000023c200
(XEN)   gfn: 0000000000000006  mfn: 000000000010e800
(XEN)   gfn: 0000000000000006  mfn: 000000000023c000

It looks like the same gfn has been mapped to multiple mfns. Do you want 
to output only the gfn-to-mfn mappings, or also the addresses of the 
intermediate page tables? What do "gfn" and "mfn" stand for here?

Thanks,
Wei



> diff -r 472fc515a463 -r f687c5580262 xen/drivers/passthrough/iommu.c
> --- a/xen/drivers/passthrough/iommu.c	Tue Aug 07 18:37:31 2012 +0100
> +++ b/xen/drivers/passthrough/iommu.c	Thu Aug 09 18:31:32 2012 -0700
> @@ -18,11 +18,13 @@
>   #include <asm/hvm/iommu.h>
>   #include <xen/paging.h>
>   #include <xen/guest_access.h>
> +#include <xen/keyhandler.h>
>   #include <xen/softirq.h>
>   #include <xsm/xsm.h>
>
>   static void parse_iommu_param(char *s);
>   static int iommu_populate_page_table(struct domain *d);
> +static void iommu_dump_p2m_table(unsigned char key);
>
>   /*
>    * The 'iommu' parameter enables the IOMMU.  Optional comma separated
> @@ -54,6 +56,12 @@ bool_t __read_mostly amd_iommu_perdev_in
>
>   DEFINE_PER_CPU(bool_t, iommu_dont_flush_iotlb);
>
> +static struct keyhandler iommu_p2m_table = {
> +    .diagnostic = 0,
> +    .u.fn = iommu_dump_p2m_table,
> +    .desc = "dump iommu p2m table"
> +};
> +
>   static void __init parse_iommu_param(char *s)
>   {
>       char *ss;
> @@ -119,6 +127,7 @@ void __init iommu_dom0_init(struct domai
>       if ( !iommu_enabled )
>           return;
>
> +    register_keyhandler('o', &iommu_p2m_table);
>       d->need_iommu = !!iommu_dom0_strict;
>       if ( need_iommu(d) )
>       {
> @@ -654,6 +663,34 @@ int iommu_do_domctl(
>       return ret;
>   }
>
> +static void iommu_dump_p2m_table(unsigned char key)
> +{
> +    struct domain *d;
> +    const struct iommu_ops *ops;
> +
> +    if ( !iommu_enabled )
> +    {
> +        printk("IOMMU not enabled!\n");
> +        return;
> +    }
> +
> +    ops = iommu_get_ops();
> +    for_each_domain(d)
> +    {
> +        if ( !d->domain_id )
> +            continue;
> +
> +        if ( iommu_use_hap_pt(d) )
> +        {
> +            printk("\ndomain%d IOMMU p2m table shared with MMU: \n", d->domain_id);
> +            continue;
> +        }
> +
> +        printk("\ndomain%d IOMMU p2m table: \n", d->domain_id);
> +        ops->dump_p2m_table(d);
> +    }
> +}
> +
>   /*
>    * Local variables:
>    * mode: C
> diff -r 472fc515a463 -r f687c5580262 xen/drivers/passthrough/vtd/iommu.c
> --- a/xen/drivers/passthrough/vtd/iommu.c	Tue Aug 07 18:37:31 2012 +0100
> +++ b/xen/drivers/passthrough/vtd/iommu.c	Thu Aug 09 18:31:32 2012 -0700
> @@ -31,6 +31,7 @@
>   #include <xen/pci.h>
>   #include <xen/pci_regs.h>
>   #include <xen/keyhandler.h>
> +#include <xen/softirq.h>
>   #include<asm/msi.h>
>   #include<asm/irq.h>
>   #if defined(__i386__) || defined(__x86_64__)
> @@ -2365,6 +2366,71 @@ static void vtd_resume(void)
>       }
>   }
>
> +static void vtd_dump_p2m_table_level(paddr_t pt_maddr, int level, paddr_t gpa,
> +                                     int *indent)
> +{
> +    paddr_t address;
> +    int i;
> +    struct dma_pte *pt_vaddr, *pte;
> +    int next_level;
> +
> +    if ( pt_maddr == 0 )
> +        return;
> +
> +    pt_vaddr = map_vtd_domain_page(pt_maddr);
> +    if ( pt_vaddr == NULL )
> +    {
> +        printk("Failed to map VT-D domain page %"PRIpaddr"\n", pt_maddr);
> +        return;
> +    }
> +
> +    next_level = level - 1;
> +    for ( i = 0; i < PTE_NUM; i++ )
> +    {
> +        if ( !(i % 2) )
> +            process_pending_softirqs();
> +
> +        pte = &pt_vaddr[i];
> +        if ( dma_pte_present(*pte) )
> +        {
> +            int j;
> +
> +            for ( j = 0; j < *indent; j++ )
> +                printk("  ");
> +
> +            address = gpa + offset_level_address(i, level);
> +            printk("gfn: %"PRIpaddr" mfn: %"PRIpaddr" super=%d rd=%d wr=%d\n",
> +                    address >> PAGE_SHIFT_4K, pte->val >> PAGE_SHIFT_4K,
> +                    dma_pte_superpage(*pte) ? 1 : 0, dma_pte_read(*pte) ? 1 : 0,
> +                    dma_pte_write(*pte) ? 1 : 0);
> +
> +            if ( next_level >= 1 ) {
> +                *indent += 1;
> +                vtd_dump_p2m_table_level(dma_pte_addr(*pte), next_level,
> +                                         address, indent);
> +
> +                *indent -= 1;
> +            }
> +        }
> +    }
> +
> +    unmap_vtd_domain_page(pt_vaddr);
> +}
> +
> +static void vtd_dump_p2m_table(struct domain *d)
> +{
> +    struct hvm_iommu *hd;
> +    int indent;
> +
> +    if ( list_empty(&acpi_drhd_units) )
> +        return;
> +
> +    hd = domain_hvm_iommu(d);
> +    indent = 0;
> +    vtd_dump_p2m_table_level(hd->pgd_maddr, agaw_to_level(hd->agaw),
> +                             0, &indent);
> +}
> +
>   const struct iommu_ops intel_iommu_ops = {
>       .init = intel_iommu_domain_init,
>       .dom0_init = intel_iommu_dom0_init,
> @@ -2387,6 +2453,7 @@ const struct iommu_ops intel_iommu_ops =
>       .crash_shutdown = vtd_crash_shutdown,
>       .iotlb_flush = intel_iommu_iotlb_flush,
>       .iotlb_flush_all = intel_iommu_iotlb_flush_all,
> +    .dump_p2m_table = vtd_dump_p2m_table,
>   };
>
>   /*
> diff -r 472fc515a463 -r f687c5580262 xen/drivers/passthrough/vtd/iommu.h
> --- a/xen/drivers/passthrough/vtd/iommu.h	Tue Aug 07 18:37:31 2012 +0100
> +++ b/xen/drivers/passthrough/vtd/iommu.h	Thu Aug 09 18:31:32 2012 -0700
> @@ -248,6 +248,8 @@ struct context_entry {
>   #define level_to_offset_bits(l) (12 + (l - 1) * LEVEL_STRIDE)
>   #define address_level_offset(addr, level) \
>               ((addr >> level_to_offset_bits(level)) & LEVEL_MASK)
> +#define offset_level_address(offset, level) \
> +            ((u64)(offset) << level_to_offset_bits(level))
>   #define level_mask(l) (((u64)(-1)) << level_to_offset_bits(l))
>   #define level_size(l) (1 << level_to_offset_bits(l))
>   #define align_to_level(addr, l) ((addr + level_size(l) - 1) & level_mask(l))
> @@ -277,6 +279,9 @@ struct dma_pte {
>   #define dma_set_pte_addr(p, addr) do {\
>               (p).val |= ((addr) & PAGE_MASK_4K); } while (0)
>   #define dma_pte_present(p) (((p).val & 3) != 0)
> +#define dma_pte_superpage(p) (((p).val & (1<<7)) != 0)
> +#define dma_pte_read(p) (((p).val & DMA_PTE_READ) != 0)
> +#define dma_pte_write(p) (((p).val & DMA_PTE_WRITE) != 0)
>
>   /* interrupt remap entry */
>   struct iremap_entry {
> diff -r 472fc515a463 -r f687c5580262 xen/include/asm-x86/hvm/svm/amd-iommu-defs.h
> --- a/xen/include/asm-x86/hvm/svm/amd-iommu-defs.h	Tue Aug 07 18:37:31 2012 +0100
> +++ b/xen/include/asm-x86/hvm/svm/amd-iommu-defs.h	Thu Aug 09 18:31:32 2012 -0700
> @@ -38,6 +38,10 @@
>   #define PTE_PER_TABLE_ALLOC(entries)	\
>   	PAGE_SIZE * (PTE_PER_TABLE_ALIGN(entries) >> PTE_PER_TABLE_SHIFT)
>
> +#define amd_offset_level_address(offset, level) \
> +      	((u64)(offset) << ((PTE_PER_TABLE_SHIFT * \
> +                             (level - IOMMU_PAGING_MODE_LEVEL_1))))
> +
>   #define PCI_MIN_CAP_OFFSET	0x40
>   #define PCI_MAX_CAP_BLOCKS	48
>   #define PCI_CAP_PTR_MASK	0xFC
> diff -r 472fc515a463 -r f687c5580262 xen/include/xen/iommu.h
> --- a/xen/include/xen/iommu.h	Tue Aug 07 18:37:31 2012 +0100
> +++ b/xen/include/xen/iommu.h	Thu Aug 09 18:31:32 2012 -0700
> @@ -141,6 +141,7 @@ struct iommu_ops {
>       void (*crash_shutdown)(void);
>       void (*iotlb_flush)(struct domain *d, unsigned long gfn, unsigned int page_count);
>       void (*iotlb_flush_all)(struct domain *d);
> +    void (*dump_p2m_table)(struct domain *d);
>   };
>
>   void iommu_update_ire_from_apic(unsigned int apic, unsigned int reg, unsigned int value);
>



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
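Archive note: both `amd_dump_p2m_table_level()` and `vtd_dump_p2m_table_level()` in the patch above follow the same recursive walk, passing `indent` by pointer so siblings at one depth share a single counter. A self-contained toy sketch of that pattern (a fake two-level table with 4 entries per level and 2 index bits per level; all names are illustrative, not the real Xen structures):

```c
#include <assert.h>
#include <stdio.h>
#include <stdint.h>

#define ENTRIES 4

/* Toy PTE: absent, a leaf mapping, or a pointer to a lower-level table. */
struct toy_pte {
    int present;
    int next_level;   /* 0 = leaf mapping, >0 = points at a lower table */
    int table_idx;    /* which entry of tables[] the pointer refers to  */
    uint64_t mfn;     /* leaf target (unused for table pointers)        */
};

static struct toy_pte tables[2][ENTRIES];   /* [0] = root, [1] = leaf table */
static int leaves_seen, max_indent;

static void dump_level(const struct toy_pte *table, int level,
                       uint64_t gpa, int *indent)
{
    for (int i = 0; i < ENTRIES; i++) {
        const struct toy_pte *pte = &table[i];
        if (!pte->present)
            continue;

        /* guest address covered by this entry: 4K pages, 2 index bits
         * per level in this toy model */
        uint64_t address = gpa + ((uint64_t)i << (12 + 2 * (level - 1)));

        for (int j = 0; j < *indent; j++)
            printf("  ");
        printf("gfn: %llx\n", (unsigned long long)(address >> 12));

        if (pte->next_level > 0 && level > 1) {
            *indent += 1;                     /* deeper levels print indented */
            if (*indent > max_indent)
                max_indent = *indent;
            dump_level(tables[pte->table_idx], level - 1, address, indent);
            *indent -= 1;                     /* restore for the next sibling */
        } else {
            leaves_seen++;
        }
    }
}
```

This also shows why the output Wei pasted repeats a "gfn" for table-pointer entries: the intermediate entry and the leaves beneath it all print addresses from the same guest-address range, differing only in indentation.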

From xen-devel-bounces@lists.xen.org Fri Aug 10 12:32:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 12:32:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzoNy-0005Mp-4T; Fri, 10 Aug 2012 12:32:14 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Wei.Wang2@amd.com>) id 1SzoNx-0005Mk-7I
	for xen-devel@lists.xensource.com; Fri, 10 Aug 2012 12:32:13 +0000
Received: from [85.158.143.99:9218] by server-2.bemta-4.messagelabs.com id
	50/0B-19021-C4FF4205; Fri, 10 Aug 2012 12:32:12 +0000
X-Env-Sender: Wei.Wang2@amd.com
X-Msg-Ref: server-12.tower-216.messagelabs.com!1344601885!22291737!1
X-Originating-IP: [216.32.180.187]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26911 invoked from network); 10 Aug 2012 12:31:28 -0000
Received: from co1ehsobe004.messaging.microsoft.com (HELO
	co1outboundpool.messaging.microsoft.com) (216.32.180.187)
	by server-12.tower-216.messagelabs.com with AES128-SHA encrypted SMTP;
	10 Aug 2012 12:31:28 -0000
Received: from mail150-co1-R.bigfish.com (10.243.78.230) by
	CO1EHSOBE009.bigfish.com (10.243.66.72) with Microsoft SMTP Server id
	14.1.225.23; Fri, 10 Aug 2012 12:31:24 +0000
Received: from mail150-co1 (localhost [127.0.0.1])	by
	mail150-co1-R.bigfish.com (Postfix) with ESMTP id BF122D40230;
	Fri, 10 Aug 2012 12:31:24 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:163.181.249.109; KIP:(null); UIP:(null);
	IPV:NLI; H:ausb3twp02.amd.com; RD:none; EFVD:NLI
X-SpamScore: 0
X-BigFish: VPS0(zzbb2dI98dI9371I1432I4015I78fbmzz1202hzz8275bhz2dh668h839hd25he5bhf0ah107ah)
Received: from mail150-co1 (localhost.localdomain [127.0.0.1]) by mail150-co1
	(MessageSwitch) id 1344601882771104_29303;
	Fri, 10 Aug 2012 12:31:22 +0000 (UTC)
Received: from CO1EHSMHS009.bigfish.com (unknown [10.243.78.249])	by
	mail150-co1.bigfish.com (Postfix) with ESMTP id B9EABD00044;
	Fri, 10 Aug 2012 12:31:22 +0000 (UTC)
Received: from ausb3twp02.amd.com (163.181.249.109) by
	CO1EHSMHS009.bigfish.com (10.243.66.19) with Microsoft SMTP Server id
	14.1.225.23; Fri, 10 Aug 2012 12:31:22 +0000
X-WSS-ID: 0M8JHG3-02-54W-02
X-M-MSG: 
Received: from sausexedgep01.amd.com (sausexedgep01-ext.amd.com
	[163.181.249.72])	(using TLSv1 with cipher AES128-SHA (128/128
	bits))	(No
	client certificate requested)	by ausb3twp02.amd.com (Axway MailGate
	3.8.1)
	with ESMTP id 2D2C8C8009;	Fri, 10 Aug 2012 07:31:15 -0500 (CDT)
Received: from SAUSEXDAG03.amd.com (163.181.55.3) by sausexedgep01.amd.com
	(163.181.36.54) with Microsoft SMTP Server (TLS) id 8.3.192.1;
	Fri, 10 Aug 2012 07:31:39 -0500
Received: from storexhtp02.amd.com (172.24.4.4) by sausexdag03.amd.com
	(163.181.55.3) with Microsoft SMTP Server (TLS) id 14.1.323.3;
	Fri, 10 Aug 2012 07:31:19 -0500
Received: from gwo.osrc.amd.com (165.204.16.204) by storexhtp02.amd.com
	(172.24.4.4) with Microsoft SMTP Server id 8.3.213.0; Fri, 10 Aug 2012
	08:31:18 -0400
Received: from [165.204.15.57] (gran.osrc.amd.com [165.204.15.57])	by
	gwo.osrc.amd.com (Postfix) with ESMTP id 2D54649C20C; Fri, 10 Aug 2012
	13:31:17 +0100 (BST)
Message-ID: <5024FF0A.6090006@amd.com>
Date: Fri, 10 Aug 2012 14:31:06 +0200
From: Wei Wang <wei.wang2@amd.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:12.0) Gecko/20120421 Thunderbird/12.0
MIME-Version: 1.0
To: Santosh Jodh <santosh.jodh@citrix.com>
References: <f687c55802629c72405d.1344562989@REDBLD-XS.ad.xensource.com>
In-Reply-To: <f687c55802629c72405d.1344562989@REDBLD-XS.ad.xensource.com>
X-OriginatorOrg: amd.com
Cc: xen-devel@lists.xensource.com, tim@xen.org, xiantao.zhang@intel.com
Subject: Re: [Xen-devel] [PATCH] dump_p2m_table: For IOMMU
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Santosh,

On 08/10/2012 03:43 AM, Santosh Jodh wrote:
> New key handler 'o' to dump the IOMMU p2m table for each domain.
> Skips dumping table for domain0.
> Intel and AMD specific iommu_ops handler for dumping p2m table.
>
> Signed-off-by: Santosh Jodh<santosh.jodh@citrix.com>
>
> diff -r 472fc515a463 -r f687c5580262 xen/drivers/passthrough/amd/pci_amd_iommu.c
> --- a/xen/drivers/passthrough/amd/pci_amd_iommu.c	Tue Aug 07 18:37:31 2012 +0100
> +++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c	Thu Aug 09 18:31:32 2012 -0700
> @@ -22,6 +22,7 @@
>   #include<xen/pci.h>
>   #include<xen/pci_regs.h>
>   #include<xen/paging.h>
> +#include<xen/softirq.h>
>   #include<asm/hvm/iommu.h>
>   #include<asm/amd-iommu.h>
>   #include<asm/hvm/svm/amd-iommu-proto.h>
> @@ -512,6 +513,81 @@ static int amd_iommu_group_id(u16 seg, u
>
>   #include<asm/io_apic.h>
>
> +static void amd_dump_p2m_table_level(struct page_info* pg, int level,
> +                                     paddr_t gpa, int *indent)
> +{
> +    paddr_t address;
> +    void *table_vaddr, *pde;
> +    paddr_t next_table_maddr;
> +    int index, next_level, present;
> +    u32 *entry;
> +
> +    if ( level<= 1 )
> +        return;

So, the l1 page table is not printed, is it by design?


> +    table_vaddr = __map_domain_page(pg);
> +    if ( table_vaddr == NULL )
> +    {
> +        printk("Failed to map IOMMU domain page %"PRIpaddr"\n",
> +                page_to_maddr(pg));
> +        return;
> +    }
> +
> +    for ( index = 0; index<  PTE_PER_TABLE_SIZE; index++ )
> +    {
> +        if ( !(index % 2) )
> +            process_pending_softirqs();
> +
> +        pde = table_vaddr + (index * IOMMU_PAGE_TABLE_ENTRY_SIZE);
> +        next_table_maddr = amd_iommu_get_next_table_from_pte(pde);
> +        entry = (u32*)pde;
> +
> +        next_level = get_field_from_reg_u32(entry[0],
> +                                            IOMMU_PDE_NEXT_LEVEL_MASK,
> +                                            IOMMU_PDE_NEXT_LEVEL_SHIFT);
> +
> +        present = get_field_from_reg_u32(entry[0],
> +                                         IOMMU_PDE_PRESENT_MASK,
> +                                         IOMMU_PDE_PRESENT_SHIFT);
> +
> +        address = gpa + amd_offset_level_address(index, level);
> +        if ( present )
> +        {
> +            int i;
> +
> +            for ( i = 0; i<  *indent; i++ )
> +                printk("  ");
> +
> +            printk("gfn: %"PRIpaddr"  mfn: %"PRIpaddr"\n",
> +                   PFN_DOWN(address), PFN_DOWN(next_table_maddr));
> +
> +            if ( (next_table_maddr != 0)&&  (next_level != 0) )
> +            {
> +                *indent += 1;
> +                amd_dump_p2m_table_level(
> +                    maddr_to_page(next_table_maddr), level - 1,
> +                    address, indent);
> +
> +                *indent -= 1;
> +            }
> +        }
> +    }
> +
> +    unmap_domain_page(table_vaddr);
> +}
> +
> +static void amd_dump_p2m_table(struct domain *d)
> +{
> +    struct hvm_iommu *hd  = domain_hvm_iommu(d);
> +    int indent;
> +
> +    if ( !hd->root_table )
> +        return;
> +
> +    indent = 0;
> +    amd_dump_p2m_table_level(hd->root_table, hd->paging_mode, 0,&indent);
> +}
> +
>   const struct iommu_ops amd_iommu_ops = {
>       .init = amd_iommu_domain_init,
>       .dom0_init = amd_iommu_dom0_init,
> @@ -531,4 +607,5 @@ const struct iommu_ops amd_iommu_ops = {
>       .resume = amd_iommu_resume,
>       .share_p2m = amd_iommu_share_p2m,
>       .crash_shutdown = amd_iommu_suspend,
> +    .dump_p2m_table = amd_dump_p2m_table,
>   };


I tested the amd part with an amd iommu system and I got output like this:

XEN) domain1 IOMMU p2m table:
(XEN) gfn: 0000000000000000  mfn: 00000000001b1fb5
(XEN)   gfn: 0000000000000000  mfn: 000000000021f80b
(XEN)   gfn: 0000000000000000  mfn: 000000000023f400
(XEN)   gfn: 0000000000000000  mfn: 000000000010a600
(XEN)   gfn: 0000000000000000  mfn: 000000000023f200
(XEN)   gfn: 0000000000000000  mfn: 000000000010a400
(XEN)   gfn: 0000000000000000  mfn: 000000000023f000
(XEN)   gfn: 0000000000000000  mfn: 000000000010ae00
(XEN)   gfn: 0000000000000000  mfn: 000000000023ee00
(XEN)   gfn: 0000000000000001  mfn: 000000000010ac00
(XEN)   gfn: 0000000000000001  mfn: 000000000023ec00
(XEN)   gfn: 0000000000000001  mfn: 000000000010aa00
(XEN)   gfn: 0000000000000001  mfn: 000000000023ea00
(XEN)   gfn: 0000000000000001  mfn: 000000000010a800
(XEN)   gfn: 0000000000000001  mfn: 000000000023e800
(XEN)   gfn: 0000000000000001  mfn: 000000000010be00
(XEN)   gfn: 0000000000000001  mfn: 000000000023e600
(XEN)   gfn: 0000000000000002  mfn: 000000000010bc00
(XEN)   gfn: 0000000000000002  mfn: 000000000023e400
(XEN)   gfn: 0000000000000002  mfn: 000000000010ba00
(XEN)   gfn: 0000000000000002  mfn: 000000000023e200
(XEN)   gfn: 0000000000000002  mfn: 000000000010b800
(XEN)   gfn: 0000000000000002  mfn: 000000000023e000
(XEN)   gfn: 0000000000000002  mfn: 000000000010b600
(XEN)   gfn: 0000000000000002  mfn: 000000000023de00
(XEN)   gfn: 0000000000000003  mfn: 000000000010b400
(XEN)   gfn: 0000000000000003  mfn: 000000000023dc00
(XEN)   gfn: 0000000000000003  mfn: 000000000010b200
(XEN)   gfn: 0000000000000003  mfn: 000000000023da00
(XEN)   gfn: 0000000000000003  mfn: 000000000010b000
(XEN)   gfn: 0000000000000003  mfn: 000000000023d800
(XEN)   gfn: 0000000000000003  mfn: 000000000010fe00
(XEN)   gfn: 0000000000000003  mfn: 000000000023d600
(XEN)   gfn: 0000000000000004  mfn: 000000000010fc00
(XEN)   gfn: 0000000000000004  mfn: 000000000023d400
(XEN)   gfn: 0000000000000004  mfn: 000000000010fa00
(XEN)   gfn: 0000000000000004  mfn: 000000000023d200
(XEN)   gfn: 0000000000000004  mfn: 000000000010f800
(XEN)   gfn: 0000000000000004  mfn: 000000000023d000
(XEN)   gfn: 0000000000000004  mfn: 000000000010f600
(XEN)   gfn: 0000000000000004  mfn: 000000000023ce00
(XEN)   gfn: 0000000000000005  mfn: 000000000010f400
(XEN)   gfn: 0000000000000005  mfn: 000000000023cc00
(XEN)   gfn: 0000000000000005  mfn: 000000000010f200
(XEN)   gfn: 0000000000000005  mfn: 000000000023ca00
(XEN)   gfn: 0000000000000005  mfn: 000000000010f000
(XEN)   gfn: 0000000000000005  mfn: 000000000023c800
(XEN)   gfn: 0000000000000005  mfn: 000000000010ee00
(XEN)   gfn: 0000000000000005  mfn: 000000000023c600
(XEN)   gfn: 0000000000000006  mfn: 000000000010ec00
(XEN)   gfn: 0000000000000006  mfn: 000000000023c400
(XEN)   gfn: 0000000000000006  mfn: 000000000010ea00
(XEN)   gfn: 0000000000000006  mfn: 000000000023c200
(XEN)   gfn: 0000000000000006  mfn: 000000000010e800
(XEN)   gfn: 0000000000000006  mfn: 000000000023c000

It looks like the same gfn has been mapped to multiple mfns. Do you want 
to output only the gfn-to-mfn mapping, or do you also want to output the 
addresses of the intermediate page tables? What do "gfn" and "mfn" stand 
for here?

Thanks,
Wei



> diff -r 472fc515a463 -r f687c5580262 xen/drivers/passthrough/iommu.c
> --- a/xen/drivers/passthrough/iommu.c	Tue Aug 07 18:37:31 2012 +0100
> +++ b/xen/drivers/passthrough/iommu.c	Thu Aug 09 18:31:32 2012 -0700
> @@ -18,11 +18,13 @@
>   #include <asm/hvm/iommu.h>
>   #include <xen/paging.h>
>   #include <xen/guest_access.h>
> +#include <xen/keyhandler.h>
>   #include <xen/softirq.h>
>   #include <xsm/xsm.h>
>
>   static void parse_iommu_param(char *s);
>   static int iommu_populate_page_table(struct domain *d);
> +static void iommu_dump_p2m_table(unsigned char key);
>
>   /*
>    * The 'iommu' parameter enables the IOMMU.  Optional comma separated
> @@ -54,6 +56,12 @@ bool_t __read_mostly amd_iommu_perdev_in
>
>   DEFINE_PER_CPU(bool_t, iommu_dont_flush_iotlb);
>
> +static struct keyhandler iommu_p2m_table = {
> +    .diagnostic = 0,
> +    .u.fn = iommu_dump_p2m_table,
> +    .desc = "dump iommu p2m table"
> +};
> +
>   static void __init parse_iommu_param(char *s)
>   {
>       char *ss;
> @@ -119,6 +127,7 @@ void __init iommu_dom0_init(struct domai
>       if ( !iommu_enabled )
>           return;
>
> +    register_keyhandler('o', &iommu_p2m_table);
>       d->need_iommu = !!iommu_dom0_strict;
>       if ( need_iommu(d) )
>       {
> @@ -654,6 +663,34 @@ int iommu_do_domctl(
>       return ret;
>   }
>
> +static void iommu_dump_p2m_table(unsigned char key)
> +{
> +    struct domain *d;
> +    const struct iommu_ops *ops;
> +
> +    if ( !iommu_enabled )
> +    {
> +        printk("IOMMU not enabled!\n");
> +        return;
> +    }
> +
> +    ops = iommu_get_ops();
> +    for_each_domain(d)
> +    {
> +        if ( !d->domain_id )
> +            continue;
> +
> +        if ( iommu_use_hap_pt(d) )
> +        {
> +            printk("\ndomain%d IOMMU p2m table shared with MMU: \n", d->domain_id);
> +            continue;
> +        }
> +
> +        printk("\ndomain%d IOMMU p2m table: \n", d->domain_id);
> +        ops->dump_p2m_table(d);
> +    }
> +}
> +
>   /*
>    * Local variables:
>    * mode: C
> diff -r 472fc515a463 -r f687c5580262 xen/drivers/passthrough/vtd/iommu.c
> --- a/xen/drivers/passthrough/vtd/iommu.c	Tue Aug 07 18:37:31 2012 +0100
> +++ b/xen/drivers/passthrough/vtd/iommu.c	Thu Aug 09 18:31:32 2012 -0700
> @@ -31,6 +31,7 @@
>   #include <xen/pci.h>
>   #include <xen/pci_regs.h>
>   #include <xen/keyhandler.h>
> +#include <xen/softirq.h>
>   #include<asm/msi.h>
>   #include<asm/irq.h>
>   #if defined(__i386__) || defined(__x86_64__)
> @@ -2365,6 +2366,71 @@ static void vtd_resume(void)
>       }
>   }
>
> +static void vtd_dump_p2m_table_level(paddr_t pt_maddr, int level, paddr_t gpa,
> +                                     int *indent)
> +{
> +    paddr_t address;
> +    int i;
> +    struct dma_pte *pt_vaddr, *pte;
> +    int next_level;
> +
> +    if ( pt_maddr == 0 )
> +        return;
> +
> +    pt_vaddr = map_vtd_domain_page(pt_maddr);
> +    if ( pt_vaddr == NULL )
> +    {
> +        printk("Failed to map VT-D domain page %"PRIpaddr"\n", pt_maddr);
> +        return;
> +    }
> +
> +    next_level = level - 1;
> +    for ( i = 0; i < PTE_NUM; i++ )
> +    {
> +        if ( !(i % 2) )
> +            process_pending_softirqs();
> +
> +        pte = &pt_vaddr[i];
> +        if ( dma_pte_present(*pte) )
> +        {
> +            int j;
> +
> +            for ( j = 0; j < *indent; j++ )
> +                printk("  ");
> +
> +            address = gpa + offset_level_address(i, level);
> +            printk("gfn: %"PRIpaddr" mfn: %"PRIpaddr" super=%d rd=%d wr=%d\n",
> +                    address >> PAGE_SHIFT_4K, pte->val >> PAGE_SHIFT_4K,
> +                    dma_pte_superpage(*pte) ? 1 : 0, dma_pte_read(*pte) ? 1 : 0,
> +                    dma_pte_write(*pte) ? 1 : 0);
> +
> +            if ( next_level >= 1 ) {
> +                *indent += 1;
> +                vtd_dump_p2m_table_level(dma_pte_addr(*pte), next_level,
> +                                         address, indent);
> +
> +                *indent -= 1;
> +            }
> +        }
> +    }
> +
> +    unmap_vtd_domain_page(pt_vaddr);
> +}
> +
> +static void vtd_dump_p2m_table(struct domain *d)
> +{
> +    struct hvm_iommu *hd;
> +    int indent;
> +
> +    if ( list_empty(&acpi_drhd_units) )
> +        return;
> +
> +    hd = domain_hvm_iommu(d);
> +    indent = 0;
> +    vtd_dump_p2m_table_level(hd->pgd_maddr, agaw_to_level(hd->agaw),
> +                             0, &indent);
> +}
> +
>   const struct iommu_ops intel_iommu_ops = {
>       .init = intel_iommu_domain_init,
>       .dom0_init = intel_iommu_dom0_init,
> @@ -2387,6 +2453,7 @@ const struct iommu_ops intel_iommu_ops =
>       .crash_shutdown = vtd_crash_shutdown,
>       .iotlb_flush = intel_iommu_iotlb_flush,
>       .iotlb_flush_all = intel_iommu_iotlb_flush_all,
> +    .dump_p2m_table = vtd_dump_p2m_table,
>   };
>
>   /*
> diff -r 472fc515a463 -r f687c5580262 xen/drivers/passthrough/vtd/iommu.h
> --- a/xen/drivers/passthrough/vtd/iommu.h	Tue Aug 07 18:37:31 2012 +0100
> +++ b/xen/drivers/passthrough/vtd/iommu.h	Thu Aug 09 18:31:32 2012 -0700
> @@ -248,6 +248,8 @@ struct context_entry {
>   #define level_to_offset_bits(l) (12 + (l - 1) * LEVEL_STRIDE)
>   #define address_level_offset(addr, level) \
>               ((addr >> level_to_offset_bits(level)) & LEVEL_MASK)
> +#define offset_level_address(offset, level) \
> +            ((u64)(offset) << level_to_offset_bits(level))
>   #define level_mask(l) (((u64)(-1)) << level_to_offset_bits(l))
>   #define level_size(l) (1 << level_to_offset_bits(l))
>   #define align_to_level(addr, l) ((addr + level_size(l) - 1) & level_mask(l))
> @@ -277,6 +279,9 @@ struct dma_pte {
>   #define dma_set_pte_addr(p, addr) do { \
>               (p).val |= ((addr) & PAGE_MASK_4K); } while (0)
>   #define dma_pte_present(p) (((p).val & 3) != 0)
> +#define dma_pte_superpage(p) (((p).val & (1 << 7)) != 0)
> +#define dma_pte_read(p) (((p).val & DMA_PTE_READ) != 0)
> +#define dma_pte_write(p) (((p).val & DMA_PTE_WRITE) != 0)
>
>   /* interrupt remap entry */
>   struct iremap_entry {
> diff -r 472fc515a463 -r f687c5580262 xen/include/asm-x86/hvm/svm/amd-iommu-defs.h
> --- a/xen/include/asm-x86/hvm/svm/amd-iommu-defs.h	Tue Aug 07 18:37:31 2012 +0100
> +++ b/xen/include/asm-x86/hvm/svm/amd-iommu-defs.h	Thu Aug 09 18:31:32 2012 -0700
> @@ -38,6 +38,10 @@
>   #define PTE_PER_TABLE_ALLOC(entries)	\
>   	PAGE_SIZE * (PTE_PER_TABLE_ALIGN(entries) >> PTE_PER_TABLE_SHIFT)
>
> +#define amd_offset_level_address(offset, level) \
> +	((u64)(offset) << ((PTE_PER_TABLE_SHIFT * \
> +                             (level - IOMMU_PAGING_MODE_LEVEL_1))))
> +
>   #define PCI_MIN_CAP_OFFSET	0x40
>   #define PCI_MAX_CAP_BLOCKS	48
>   #define PCI_CAP_PTR_MASK	0xFC
> diff -r 472fc515a463 -r f687c5580262 xen/include/xen/iommu.h
> --- a/xen/include/xen/iommu.h	Tue Aug 07 18:37:31 2012 +0100
> +++ b/xen/include/xen/iommu.h	Thu Aug 09 18:31:32 2012 -0700
> @@ -141,6 +141,7 @@ struct iommu_ops {
>       void (*crash_shutdown)(void);
>       void (*iotlb_flush)(struct domain *d, unsigned long gfn, unsigned int page_count);
>       void (*iotlb_flush_all)(struct domain *d);
> +    void (*dump_p2m_table)(struct domain *d);
>   };
>
>   void iommu_update_ire_from_apic(unsigned int apic, unsigned int reg, unsigned int value);
>
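
As an aside on the offset_level_address() hunk above: the address arithmetic can be checked in isolation. The following is a standalone sketch (local names with trailing underscores, not the actual Xen definitions), assuming the usual 4K pages and 9-bit-per-level stride:

```c
#include <stdint.h>

#define LEVEL_STRIDE_        9   /* address bits mapped per page-table level */
#define PAGE_SHIFT_4K_      12

/* Local restatements of the quoted VT-d macros. */
#define level_to_offset_bits_(l)  (PAGE_SHIFT_4K_ + ((l) - 1) * LEVEL_STRIDE_)
#define offset_level_address_(offset, l) \
    ((uint64_t)(offset) << level_to_offset_bits_(l))

/* gfn covered by entry i of a level-l table whose range starts at gpa. */
static inline uint64_t entry_gfn(uint64_t gpa, int i, int l)
{
    return (gpa + offset_level_address_(i, l)) >> PAGE_SHIFT_4K_;
}
```

So at level 1 each entry advances the gfn by 1, at level 2 by 512, and so on, which matches the stride visible in the dump output quoted earlier in the thread.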



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 12:52:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 12:52:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Szohh-0005gB-6y; Fri, 10 Aug 2012 12:52:37 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Szohf-0005g6-QU
	for xen-devel@lists.xen.org; Fri, 10 Aug 2012 12:52:36 +0000
Received: from [85.158.143.35:16131] by server-2.bemta-4.messagelabs.com id
	F1/29-19021-31405205; Fri, 10 Aug 2012 12:52:35 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1344603154!12616079!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 570 invoked from network); 10 Aug 2012 12:52:34 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-21.messagelabs.com with SMTP;
	10 Aug 2012 12:52:34 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 10 Aug 2012 13:52:33 +0100
Message-Id: <50252031020000780009427B@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 10 Aug 2012 13:52:33 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Wei Wang" <wei.wang2@amd.com>
References: <8deb7c7a25c4a3bc50d7.1344446255@REDBLD-XS.ad.xensource.com>
	<5023822E0200007800093D2A@nat28.tlf.novell.com>
	<5024E788.80300@amd.com>
In-Reply-To: <5024E788.80300@amd.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: tim@xen.org, xiantao.zhang@intel.com,
	Santosh Jodh <santosh.jodh@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] dump_p2m_table: For IOMMU
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 10.08.12 at 12:50, Wei Wang <wei.wang2@amd.com> wrote:
> On 08/09/2012 09:26 AM, Jan Beulich wrote:
> 
>> Wei - here I'm particularly worried about the use of "level - 1"
>> instead of "next_level", which would similarly apply to the
>> original function. If the way this is currently done is okay, then
>> why is next_level being computed in the first place?
> 
> I think that recalculation is to guarantee that this recursive function 
> returns. It should run at most "paging_mode" times no matter what 
> "next_level" says. But if we can assume that the next-level field in 
> every pde is correct, then using next_level is fine with me.

Especially in the dumping function we shouldn't assume too
much. However, wasn't it the case that levels can be skipped in
your IOMMU implementation? That can't be handled correctly if
we always subtract 1.

>> (And similar
>> to the issue Santosh has already fixed here - the original
>> function pointlessly maps/unmaps the page when "level<= 1".
>> Furthermore, iommu_map.c has nice helper functions
>> iommu_next_level() and amd_iommu_is_pte_present() - why
>> aren't they in a header instead, so they could be used here,
>> avoiding the open coding of them?)
> 
> Maybe those helpers appeared after the original function. I could send a 
> patch to clean these up:
> * do not map/unmap if level <= 1
> * move amd_iommu_is_pte_present() and iommu_next_level() to a header 
> file, and use them in deallocate_next_page_table.
> * use next_level instead of recalculating (if requested)

Yes, please. As to using next_level - it depends, besides the above,
on how bad it is if this is really wrong; an ASSERT() or BUG_ON()
might be in order here.
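
A minimal sketch of such a check (hypothetical PDE bit layout and helper names, loosely modeled on the AMD walker under discussion, not actual Xen code): trust the PDE's own next-level field, since levels may legitimately be skipped, but assert that it actually descends so a corrupt table cannot cause unbounded recursion:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical PDE layout: next-level field in bits 11:9 (illustrative
 * only; check the real AMD IOMMU spec for the actual bit positions). */
static inline int pde_next_level(uint64_t pde)
{
    return (int)((pde >> 9) & 0x7);
}

/* Decide which level to recurse into: use the PDE's next-level field
 * (so skipped levels are handled), but fail loudly if it does not
 * descend, bounding the recursion depth. */
static inline int next_walk_level(uint64_t pde, int level)
{
    int next = pde_next_level(pde);
    assert(next < level);   /* next >= level would mean a corrupt table */
    return next;
}
```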

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 13:00:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 13:00:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Szoon-0005q5-34; Fri, 10 Aug 2012 12:59:57 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1Szool-0005q0-Ls
	for xen-devel@lists.xen.org; Fri, 10 Aug 2012 12:59:55 +0000
Received: from [85.158.143.35:57543] by server-1.bemta-4.messagelabs.com id
	88/FE-20198-BC505205; Fri, 10 Aug 2012 12:59:55 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-4.tower-21.messagelabs.com!1344603591!5477322!1
X-Originating-IP: [81.169.146.160]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MCA9PiAzOTAwMjg=\n,sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MCA9PiAzOTAwMjg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27384 invoked from network); 10 Aug 2012 12:59:52 -0000
Received: from mo-p00-ob.rzone.de (HELO mo-p00-ob.rzone.de) (81.169.146.160)
	by server-4.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Aug 2012 12:59:52 -0000
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+zrwiavkK6tmQaLfmwtM48/ll3c7oGjw=
X-RZG-CLASS-ID: mo00
Received: from probook.site
	(dslb-084-057-084-038.pools.arcor-ip.net [84.57.84.38])
	by smtp.strato.de (jored mo11) (RZmta 30.8 DYNA|AUTH)
	with (DHE-RSA-AES256-SHA encrypted) ESMTPA id 00085co7AB4jys ;
	Fri, 10 Aug 2012 14:59:51 +0200 (CEST)
Received: by probook.site (Postfix, from userid 1000)
	id 85A2618639; Fri, 10 Aug 2012 14:59:50 +0200 (CEST)
Date: Fri, 10 Aug 2012 14:59:50 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20120810125950.GA451@aepfle.de>
References: <20120806173905.GA26336@aepfle.de>
	<1344318133.24794.16.camel@dagon.hellion.org.uk>
	<20120807152502.GA24503@aepfle.de>
	<1344353581.11339.105.camel@zakaz.uk.xensource.com>
	<20120808172809.GA22206@aepfle.de>
	<1344501152.32142.78.camel@zakaz.uk.xensource.com>
	<20120809143406.GA9317@aepfle.de>
	<20120810074159.GA11792@aepfle.de>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20120810074159.GA11792@aepfle.de>
User-Agent: Mutt/1.5.21.rev5555 (2012-07-20)
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] vif backend configuration times out
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Aug 10, Olaf Hering wrote:

> On Thu, Aug 09, Olaf Hering wrote:
> 
> > Indeed, netback_probe is apparently never called in my case. I will
> > check why that happens.
> 
> What I have seen so far is that with 4.2+xl the vif driver is not
> registered, while with 4.1+xm a vif driver is registered. That's the
> only difference I have spotted so far.

Argh, I was expecting that the required kernel drivers are loaded when
needed. But that's not the case. There is a workaround or fix for pvops
in 25728:a6edbc39fc84, but this changeset misses at least netbk and
blkbk.

Any idea why that changeset is now needed?
Why did it work for everyone before?

Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 13:02:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 13:02:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzorP-0005yh-L9; Fri, 10 Aug 2012 13:02:39 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SzorO-0005ya-EQ
	for xen-devel@lists.xensource.com; Fri, 10 Aug 2012 13:02:38 +0000
Received: from [85.158.139.83:17635] by server-3.bemta-5.messagelabs.com id
	B9/F1-31899-D6605205; Fri, 10 Aug 2012 13:02:37 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-182.messagelabs.com!1344603756!27469331!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4859 invoked from network); 10 Aug 2012 13:02:37 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-182.messagelabs.com with SMTP;
	10 Aug 2012 13:02:37 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 10 Aug 2012 14:02:36 +0100
Message-Id: <5025228C0200007800094294@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 10 Aug 2012 14:02:36 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Santosh Jodh" <santosh.jodh@citrix.com>
References: <f687c55802629c72405d.1344562989@REDBLD-XS.ad.xensource.com>
	<5024FF0A.6090006@amd.com>
In-Reply-To: <5024FF0A.6090006@amd.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Wei Wang <wei.wang2@amd.com>, xen-devel@lists.xensource.com, tim@xen.org,
	xiantao.zhang@intel.com
Subject: Re: [Xen-devel] [PATCH] dump_p2m_table: For IOMMU
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 10.08.12 at 14:31, Wei Wang <wei.wang2@amd.com> wrote:
> On 08/10/2012 03:43 AM, Santosh Jodh wrote:
>> +    if ( level <= 1 )
>> +        return;
> 
> So the l1 page table is not printed - is that by design?

Yeah, that puzzled me too, but I wasn't sure what's right here.

> I tested the amd part with an amd iommu system and I got output like this:
> 
> (XEN) domain1 IOMMU p2m table:
> (XEN) gfn: 0000000000000000  mfn: 00000000001b1fb5
> (XEN)   gfn: 0000000000000000  mfn: 000000000021f80b
> (XEN)   gfn: 0000000000000000  mfn: 000000000023f400
> (XEN)   gfn: 0000000000000000  mfn: 000000000010a600
> (XEN)   gfn: 0000000000000000  mfn: 000000000023f200
> (XEN)   gfn: 0000000000000000  mfn: 000000000010a400
> (XEN)   gfn: 0000000000000000  mfn: 000000000023f000
> (XEN)   gfn: 0000000000000000  mfn: 000000000010ae00
> (XEN)   gfn: 0000000000000000  mfn: 000000000023ee00
> (XEN)   gfn: 0000000000000001  mfn: 000000000010ac00
> (XEN)   gfn: 0000000000000001  mfn: 000000000023ec00
> (XEN)   gfn: 0000000000000001  mfn: 000000000010aa00
> (XEN)   gfn: 0000000000000001  mfn: 000000000023ea00
> (XEN)   gfn: 0000000000000001  mfn: 000000000010a800
> (XEN)   gfn: 0000000000000001  mfn: 000000000023e800
> (XEN)   gfn: 0000000000000001  mfn: 000000000010be00
> (XEN)   gfn: 0000000000000001  mfn: 000000000023e600
> (XEN)   gfn: 0000000000000002  mfn: 000000000010bc00
> (XEN)   gfn: 0000000000000002  mfn: 000000000023e400
> (XEN)   gfn: 0000000000000002  mfn: 000000000010ba00
> (XEN)   gfn: 0000000000000002  mfn: 000000000023e200
> (XEN)   gfn: 0000000000000002  mfn: 000000000010b800
> (XEN)   gfn: 0000000000000002  mfn: 000000000023e000
> (XEN)   gfn: 0000000000000002  mfn: 000000000010b600
> (XEN)   gfn: 0000000000000002  mfn: 000000000023de00
> (XEN)   gfn: 0000000000000003  mfn: 000000000010b400
> (XEN)   gfn: 0000000000000003  mfn: 000000000023dc00
> (XEN)   gfn: 0000000000000003  mfn: 000000000010b200
> (XEN)   gfn: 0000000000000003  mfn: 000000000023da00
> (XEN)   gfn: 0000000000000003  mfn: 000000000010b000
> (XEN)   gfn: 0000000000000003  mfn: 000000000023d800
> (XEN)   gfn: 0000000000000003  mfn: 000000000010fe00
> (XEN)   gfn: 0000000000000003  mfn: 000000000023d600
> (XEN)   gfn: 0000000000000004  mfn: 000000000010fc00
> (XEN)   gfn: 0000000000000004  mfn: 000000000023d400
> (XEN)   gfn: 0000000000000004  mfn: 000000000010fa00
> (XEN)   gfn: 0000000000000004  mfn: 000000000023d200
> (XEN)   gfn: 0000000000000004  mfn: 000000000010f800
> (XEN)   gfn: 0000000000000004  mfn: 000000000023d000
> (XEN)   gfn: 0000000000000004  mfn: 000000000010f600
> (XEN)   gfn: 0000000000000004  mfn: 000000000023ce00
> (XEN)   gfn: 0000000000000005  mfn: 000000000010f400
> (XEN)   gfn: 0000000000000005  mfn: 000000000023cc00
> (XEN)   gfn: 0000000000000005  mfn: 000000000010f200
> (XEN)   gfn: 0000000000000005  mfn: 000000000023ca00
> (XEN)   gfn: 0000000000000005  mfn: 000000000010f000
> (XEN)   gfn: 0000000000000005  mfn: 000000000023c800
> (XEN)   gfn: 0000000000000005  mfn: 000000000010ee00
> (XEN)   gfn: 0000000000000005  mfn: 000000000023c600
> (XEN)   gfn: 0000000000000006  mfn: 000000000010ec00
> (XEN)   gfn: 0000000000000006  mfn: 000000000023c400
> (XEN)   gfn: 0000000000000006  mfn: 000000000010ea00
> (XEN)   gfn: 0000000000000006  mfn: 000000000023c200
> (XEN)   gfn: 0000000000000006  mfn: 000000000010e800
> (XEN)   gfn: 0000000000000006  mfn: 000000000023c000
> 
> It looks like the same gfn has been mapped to multiple mfns. Do you want 
> to output only the gfn-to-mfn mapping, or do you also want to output the 
> addresses of the intermediate page tables? What do "gfn" and "mfn" stand 
> for here?

Indeed, apart from the apparent brokenness, printing the GFN
with the intermediate levels makes no sense. Nor does it make
sense to print 16 digits when a non-zero digit beyond the 10th
will never be observed. I realize that that's a downside of
using PRIpaddr - I probably gave a bad recommendation here (or
at least one that wasn't applicable to all cases), and I'm sorry
for that; instead, once you convert to a [GM]FN, you can safely
cast to (and hence print as) unsigned long.
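
For illustration (generic C, not the patch itself; the type alias and helper name here are local stand-ins): once the address has been shifted down to a frame number, it fits in an unsigned long on x86-64, so the cast avoids the 16-digit PRIpaddr padding:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

typedef uint64_t paddr_t_;      /* stand-in for Xen's paddr_t */
#define PAGE_SHIFT_4K_ 12

/* Shift the physical address down to a frame number first, then cast:
 * the result always fits in an unsigned long on x86-64, and "%lx"
 * prints it without leading-zero padding. */
static int format_gfn(char *buf, size_t n, paddr_t_ addr)
{
    return snprintf(buf, n, "gfn: %lx",
                    (unsigned long)(addr >> PAGE_SHIFT_4K_));
}
```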

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 13:11:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 13:11:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Szozv-0006IC-LT; Fri, 10 Aug 2012 13:11:27 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1Szozt-0006I7-R2
	for xen-devel@lists.xensource.com; Fri, 10 Aug 2012 13:11:26 +0000
Received: from [85.158.139.83:44720] by server-8.bemta-5.messagelabs.com id
	3C/DC-05939-D7805205; Fri, 10 Aug 2012 13:11:25 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-12.tower-182.messagelabs.com!1344604284!27470780!1
X-Originating-IP: [74.125.83.43]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27591 invoked from network); 10 Aug 2012 13:11:24 -0000
Received: from mail-ee0-f43.google.com (HELO mail-ee0-f43.google.com)
	(74.125.83.43)
	by server-12.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Aug 2012 13:11:24 -0000
Received: by eekd4 with SMTP id d4so434939eek.30
	for <xen-devel@lists.xensource.com>;
	Fri, 10 Aug 2012 06:11:24 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:user-agent:date:subject:from:to:cc:message-id:thread-topic
	:thread-index:mime-version:content-type:content-transfer-encoding;
	bh=kXB/JShHcF6efqTASlKQV75+edceDGNBuaMSbMJELT0=;
	b=jyNgH4hgdG9qeU4vogfvSERZu+WuZnOweuXLBLazY+Ne+53rrMMVLDk7Cvw3AsJUa9
	Pg7V9AQmP8VLpEaC/qwQN+WvWBQ+EAXlBl0nKYlgiG6Hf/qQtL9nki4qvVExH8sDomJq
	d+psYtJkePIKUaJbd9sbNQknPkcx6S3vT6GnOq6SOWW1BVc2EPDmxVOT7oJhcXqALSSf
	oD3cbSb89REBDtJFI4sryLnZYVLk0Q0bQ1B9Prut2oDX5EeF6CowVhP+7iamInSg//0v
	FH2DSdUBQK31LOnMz+9jsY1ykKqNsC6g5EeNXiMHFqCI+HZZx2zGRwg3SdsfSBFaJPoZ
	hqWg==
Received: by 10.14.209.129 with SMTP id s1mr3188211eeo.24.1344604284088;
	Fri, 10 Aug 2012 06:11:24 -0700 (PDT)
Received: from [192.168.1.3] (host86-178-46-238.range86-178.btcentralplus.com.
	[86.178.46.238])
	by mx.google.com with ESMTPS id a48sm11424250eeo.1.2012.08.10.06.11.20
	(version=SSLv3 cipher=OTHER); Fri, 10 Aug 2012 06:11:23 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.33.0.120411
Date: Fri, 10 Aug 2012 14:11:18 +0100
From: Keir Fraser <keir@xen.org>
To: <xen-devel@lists.xensource.com>
Message-ID: <CC4AC706.487CB%keir@xen.org>
Thread-Topic: [ANNOUNCE] Xen 4.1.3 and 4.0.4 released
Thread-Index: Ac12+aDHpjS95s5Xkka8AHSLmpCSfQ==
Mime-version: 1.0
Cc: Lars Kurth <lars.kurth@xen.org>
Subject: [Xen-devel] [ANNOUNCE] Xen 4.1.3 and 4.0.4 released
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Folks,

I am pleased to announce the release of Xen 4.0.4 and 4.1.3. These are
available immediately from their respective mercurial repositories:
http://xenbits.xen.org/xen-4.0-testing.hg (tag RELEASE-4.0.4)
http://xenbits.xen.org/xen-4.1-testing.hg (tag RELEASE-4.1.3)

These fix the following critical vulnerabilities:
 * CVE-2012-0217 / XSA-7:
    PV guest privilege escalation vulnerability
 * CVE-2012-0218 / XSA-8:
    guest denial of service on syscall/sysenter exception generation
 * CVE-2012-2934 / XSA-9:
    PV guest host Denial of Service
 * CVE-2012-3432 / XSA-10:
    HVM guest user mode MMIO emulation DoS vulnerability
 * CVE-2012-3433 / XSA-11:
    HVM guest destroy p2m teardown host DoS vulnerability

We recommend that all users of the 4.0 and 4.1 stable series update to
these latest point releases.

Among the many bug fixes and improvements (over 100 since Xen 4.1.2) are:
 * Updates for the latest Intel/AMD CPU revisions
 * Bug fixes and improvements to the libxl tool stack
 * Bug fixes for IOMMU handling (device passthrough to HVM guests)
 * Bug fixes for host kexec/kdump

 Regards,
 Keir



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 13:12:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 13:12:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Szp0E-0006J1-2O; Fri, 10 Aug 2012 13:11:46 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Szp0C-0006Ip-Cw
	for xen-devel@lists.xen.org; Fri, 10 Aug 2012 13:11:44 +0000
Received: from [85.158.139.83:50252] by server-1.bemta-5.messagelabs.com id
	B3/83-14385-F8805205; Fri, 10 Aug 2012 13:11:43 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-13.tower-182.messagelabs.com!1344604302!26930128!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5831 invoked from network); 10 Aug 2012 13:11:42 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-13.tower-182.messagelabs.com with SMTP;
	10 Aug 2012 13:11:42 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 10 Aug 2012 14:11:42 +0100
Message-Id: <502524AC02000078000942A5@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 10 Aug 2012 14:11:40 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Thanos Makatos" <thanos.makatos@citrix.com>
References: <4B45B535F7F6BE4CB1C044ED5115CDDE0107F6C1D44C@LONPMAILBOX01.citrite.net>
	<20120809213933.GA9099@eire.uk.xensource.com>
	<4B45B535F7F6BE4CB1C044ED5115CDDE0107F6C1D561@LONPMAILBOX01.citrite.net>
In-Reply-To: <4B45B535F7F6BE4CB1C044ED5115CDDE0107F6C1D561@LONPMAILBOX01.citrite.net>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Goncalo Gomes <Goncalo.Gomes@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] RFC: blktap3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 10.08.12 at 13:06, Thanos Makatos <thanos.makatos@citrix.com> wrote:
> I haven't yet looked at the code in depth,

Why post this not insignificant amount of code, then? You ought
to be able to address review comments...

Jan

> but I suppose blktap3 can, in some 
> cases, be faster than blktap2 since the dom0 kernel doesn't participate in 
> the data path. At least that's what I'd expect
> ;-).



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 13:19:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 13:19:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Szp78-0006aT-V5; Fri, 10 Aug 2012 13:18:54 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Szp78-0006aO-2C
	for xen-devel@lists.xensource.com; Fri, 10 Aug 2012 13:18:54 +0000
Received: from [85.158.143.99:16358] by server-2.bemta-4.messagelabs.com id
	2F/BF-19021-D3A05205; Fri, 10 Aug 2012 13:18:53 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-14.tower-216.messagelabs.com!1344604732!17922616!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=UPPERCASE_25_50
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23145 invoked from network); 10 Aug 2012 13:18:52 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-14.tower-216.messagelabs.com with SMTP;
	10 Aug 2012 13:18:52 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 10 Aug 2012 14:18:52 +0100
Message-Id: <5025265C02000078000942B4@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 10 Aug 2012 14:18:52 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Stefano Stabellini" <stefano.stabellini@eu.citrix.com>,
	<xen-devel@lists.xensource.com>
References: <alpine.DEB.2.02.1208101222370.21096@kaball.uk.xensource.com>
	<1344600612-10815-4-git-send-email-stefano.stabellini@eu.citrix.com>
In-Reply-To: <1344600612-10815-4-git-send-email-stefano.stabellini@eu.citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Ian.Campbell@citrix.com, Tim.Deegan@citrix.com
Subject: Re: [Xen-devel] [PATCH v2 4/5] xen: introduce XEN_GUEST_HANDLE_PARAM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 10.08.12 at 14:10, Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
> --- a/xen/arch/x86/x86_64/platform_hypercall.c
> +++ b/xen/arch/x86/x86_64/platform_hypercall.c
> @@ -38,6 +38,7 @@ CHECK_pf_pcpu_version;
>  
>  #define COMPAT
>  #define _XEN_GUEST_HANDLE(t) XEN_GUEST_HANDLE(t)
> +#define _XEN_GUEST_HANDLE_PARAM(t) XEN_GUEST_HANDLE(t)

Was this ...

>  typedef int ret_t;
>  
>  #include "../platform_hypercall.c"
> --- a/xen/common/compat/multicall.c
> +++ b/xen/common/compat/multicall.c
> @@ -24,6 +24,7 @@ DEFINE_XEN_GUEST_HANDLE(multicall_entry_compat_t);
>  #define call                 compat_call
>  #define do_multicall(l, n)   compat_multicall(_##l, n)
>  #define _XEN_GUEST_HANDLE(t) XEN_GUEST_HANDLE(t)
> +#define _XEN_GUEST_HANDLE_PARAM(t) XEN_GUEST_HANDLE(t)


... and this merely added mechanically? Looking at the rest
of the patch, I don't see why these would be needed. Or do
these simply belong in the next patch?

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 13:22:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 13:22:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzpAo-0006iJ-JP; Fri, 10 Aug 2012 13:22:42 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1SzpAn-0006iC-JX
	for xen-devel@lists.xensource.com; Fri, 10 Aug 2012 13:22:41 +0000
Received: from [85.158.143.35:15904] by server-1.bemta-4.messagelabs.com id
	56/0A-20198-02B05205; Fri, 10 Aug 2012 13:22:40 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1344604960!11527198!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg4NDM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21797 invoked from network); 10 Aug 2012 13:22:40 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Aug 2012 13:22:40 -0000
X-IronPort-AV: E=Sophos;i="4.77,745,1336348800"; d="scan'208";a="13953857"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Aug 2012 13:22:40 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 10 Aug 2012 14:22:40 +0100
Date: Fri, 10 Aug 2012 14:22:14 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Jan Beulich <JBeulich@suse.com>
In-Reply-To: <5025265C02000078000942B4@nat28.tlf.novell.com>
Message-ID: <alpine.DEB.2.02.1208101420290.21096@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208101222370.21096@kaball.uk.xensource.com>
	<1344600612-10815-4-git-send-email-stefano.stabellini@eu.citrix.com>
	<5025265C02000078000942B4@nat28.tlf.novell.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "Tim Deegan \(3P\)" <Tim.Deegan@citrix.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH v2 4/5] xen: introduce XEN_GUEST_HANDLE_PARAM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 10 Aug 2012, Jan Beulich wrote:
> >>> On 10.08.12 at 14:10, Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
> > --- a/xen/arch/x86/x86_64/platform_hypercall.c
> > +++ b/xen/arch/x86/x86_64/platform_hypercall.c
> > @@ -38,6 +38,7 @@ CHECK_pf_pcpu_version;
> >  
> >  #define COMPAT
> >  #define _XEN_GUEST_HANDLE(t) XEN_GUEST_HANDLE(t)
> > +#define _XEN_GUEST_HANDLE_PARAM(t) XEN_GUEST_HANDLE(t)
> 
> Was this ...
> 
> >  typedef int ret_t;
> >  
> >  #include "../platform_hypercall.c"
> > --- a/xen/common/compat/multicall.c
> > +++ b/xen/common/compat/multicall.c
> > @@ -24,6 +24,7 @@ DEFINE_XEN_GUEST_HANDLE(multicall_entry_compat_t);
> >  #define call                 compat_call
> >  #define do_multicall(l, n)   compat_multicall(_##l, n)
> >  #define _XEN_GUEST_HANDLE(t) XEN_GUEST_HANDLE(t)
> > +#define _XEN_GUEST_HANDLE_PARAM(t) XEN_GUEST_HANDLE(t)
> 
> 
> ... and this merely added mechanically? Looking at the rest
> of the patch I don't see why these would be needed. Or would
> these simply belong into the next patch?

They do belong to the next patch, but if I put them in there they would
be lost in the middle of a very long series of otherwise mechanical
substitutions, so I thought it would be a better idea to put them in
this patch. And I can see it worked :)
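[Editor's note] For readers following the thread, a minimal, self-contained sketch of why aliasing _XEN_GUEST_HANDLE_PARAM(t) to XEN_GUEST_HANDLE(t) is enough for the compat build: the shared hypercall source names handles only through the macro, so a compat translation unit can redirect it before including the shared .c file. The DEFINE_HANDLE/xen_handle_* names below are illustrative toys, not the real Xen definitions.

```c
#include <assert.h>
#include <stdint.h>

/* Toy stand-ins for the real handle typedefs (names hypothetical). */
#define DEFINE_HANDLE(name, ptrtype) typedef struct { ptrtype p; } name

DEFINE_HANDLE(xen_handle_ulong, uint64_t);    /* native guest pointer  */
DEFINE_HANDLE(compat_handle_ulong, uint32_t); /* 32-bit compat pointer */

/* Native definition used by the shared source: */
#define XEN_GUEST_HANDLE(t)        xen_handle_##t

/* The shim from the quoted hunks: the compat translation unit maps the
 * underscored param macro back onto the plain handle macro. */
#define _XEN_GUEST_HANDLE_PARAM(t) XEN_GUEST_HANDLE(t)

/* After two rounds of expansion this declares a xen_handle_ulong. */
_XEN_GUEST_HANDLE_PARAM(ulong) example_arg;
```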

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 13:23:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 13:23:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzpBX-0006lh-1o; Fri, 10 Aug 2012 13:23:27 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SzpBV-0006lW-Lt
	for xen-devel@lists.xensource.com; Fri, 10 Aug 2012 13:23:25 +0000
Received: from [85.158.143.99:42034] by server-3.bemta-4.messagelabs.com id
	76/15-31486-B4B05205; Fri, 10 Aug 2012 13:23:23 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-4.tower-216.messagelabs.com!1344605003!22274937!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3923 invoked from network); 10 Aug 2012 13:23:23 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-4.tower-216.messagelabs.com with SMTP;
	10 Aug 2012 13:23:23 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 10 Aug 2012 14:23:23 +0100
Message-Id: <5025276B02000078000942BE@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 10 Aug 2012 14:23:23 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Stefano Stabellini" <stefano.stabellini@eu.citrix.com>,
	<xen-devel@lists.xensource.com>
References: <alpine.DEB.2.02.1208101222370.21096@kaball.uk.xensource.com>
	<1344600612-10815-5-git-send-email-stefano.stabellini@eu.citrix.com>
In-Reply-To: <1344600612-10815-5-git-send-email-stefano.stabellini@eu.citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Ian.Campbell@citrix.com, Tim.Deegan@citrix.com
Subject: Re: [Xen-devel] [PATCH v2 5/5] xen: replace XEN_GUEST_HANDLE with
 XEN_GUEST_HANDLE_PARAM when appropriate
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 10.08.12 at 14:10, Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
> Note: these changes don't make any difference on x86 and ia64.

I can see your point in doing this in the x86 files nevertheless for
cosmetic/consistency reasons, but I'm really uncertain about this
uglification when it's not really necessary (plus it would shrink the
patch quite a bit).

> Replace XEN_GUEST_HANDLE with XEN_GUEST_HANDLE_PARAM when it is used as
> a hypercall argument.

I didn't look in too close detail, as this isn't intended for the main
branch yet, but I still wasn't able to spot any conversion method
in at least one direction (so that internally these can be passed
around irrespective of their origin).
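[Editor's note] The missing piece Jan describes -- a conversion in at least one direction, so internal code can take handles of either origin -- could look roughly like the sketch below. The type layouts and the function name are hypothetical placeholders, not the API the series eventually grew.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative layouts only: both are opaque one-member wrappers
 * around a guest virtual address. */
typedef struct { uint64_t p; } xen_guest_handle;        /* internal form */
typedef struct { uint64_t p; } xen_guest_handle_param;  /* argument form */

/* Widening a parameter handle into the full internal handle is always
 * safe, so internal code can pass handles around irrespective of
 * whether they arrived as hypercall arguments. */
static xen_guest_handle
handle_from_param(xen_guest_handle_param h)
{
    xen_guest_handle out = { h.p };
    return out;
}
```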

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 13:25:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 13:25:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzpDd-0006yQ-Pg; Fri, 10 Aug 2012 13:25:37 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <daniel.kiper@oracle.com>) id 1SzpDd-0006yF-2c
	for xen-devel@lists.xensource.com; Fri, 10 Aug 2012 13:25:37 +0000
Received: from [85.158.139.83:22878] by server-7.bemta-5.messagelabs.com id
	01/B9-00857-0DB05205; Fri, 10 Aug 2012 13:25:36 +0000
X-Env-Sender: daniel.kiper@oracle.com
X-Msg-Ref: server-12.tower-182.messagelabs.com!1344605133!27473370!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDY5MTEzNA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7306 invoked from network); 10 Aug 2012 13:25:34 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-12.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Aug 2012 13:25:34 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7ADPPA1011095
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 10 Aug 2012 13:25:26 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7ADPOCe013178
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 10 Aug 2012 13:25:25 GMT
Received: from abhmt108.oracle.com (abhmt108.oracle.com [141.146.116.60])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7ADPN4P023909; Fri, 10 Aug 2012 08:25:23 -0500
Received: from host-192-168-1-59.local.net-space.pl (/89.174.63.77)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 10 Aug 2012 06:25:23 -0700
Date: Fri, 10 Aug 2012 15:23:57 +0200
From: Daniel Kiper <daniel.kiper@oracle.com>
To: anderson@redhat.com, konrad.wilk@oracle.com, andrew.cooper3@citrix.com,
	olaf@aepfle.de, jbeulich@suse.com, ptesarik@suse.cz,
	crash-utility@redhat.com, kexec@lists.infradead.org,
	xen-devel@lists.xensource.com
Message-ID: <20120810132357.GA2576@host-192-168-1-59.local.net-space.pl>
MIME-Version: 1.0
Content-Disposition: inline
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Subject: [Xen-devel] [PATCH v2 0/6] crash: Bundle of fixes for Xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi,

It looks like Xen support in crash has not been maintained
since 2009. I am trying to fix this. Here is a bundle of fixes:
  - xen: Always calculate max_cpus value,
  - xen: Read only crash notes for onlined CPUs,
  - x86/xen: Read variables from dynamically allocated per_cpu data,
  - xen: Get idle data from alternative source,
  - xen: Read data correctly from dynamically allocated console ring, too
    (fixed in this release),
  - xen: Add support for 3 level P2M tree
    (new patch in this release).

Daniel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 13:26:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 13:26:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzpEj-00074T-80; Fri, 10 Aug 2012 13:26:45 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <daniel.kiper@oracle.com>) id 1SzpEi-00074H-1c
	for xen-devel@lists.xensource.com; Fri, 10 Aug 2012 13:26:44 +0000
Received: from [85.158.143.99:60165] by server-2.bemta-4.messagelabs.com id
	63/01-19021-31C05205; Fri, 10 Aug 2012 13:26:43 +0000
X-Env-Sender: daniel.kiper@oracle.com
X-Msg-Ref: server-2.tower-216.messagelabs.com!1344605197!22236819!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjgzNTc0\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24240 invoked from network); 10 Aug 2012 13:26:38 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-2.tower-216.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Aug 2012 13:26:38 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7ADQVdC029526
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 10 Aug 2012 13:26:31 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7ADQU01014663
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 10 Aug 2012 13:26:30 GMT
Received: from abhmt112.oracle.com (abhmt112.oracle.com [141.146.116.64])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7ADQU0w018398; Fri, 10 Aug 2012 08:26:30 -0500
Received: from host-192-168-1-59.local.net-space.pl (/89.174.63.77)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 10 Aug 2012 06:26:29 -0700
Date: Fri, 10 Aug 2012 15:25:13 +0200
From: Daniel Kiper <daniel.kiper@oracle.com>
To: anderson@redhat.com, konrad.wilk@oracle.com, andrew.cooper3@citrix.com,
	olaf@aepfle.de, jbeulich@suse.com, ptesarik@suse.cz,
	crash-utility@redhat.com, kexec@lists.infradead.org,
	xen-devel@lists.xensource.com
Message-ID: <20120810132513.GB2576@host-192-168-1-59.local.net-space.pl>
MIME-Version: 1.0
Content-Disposition: inline
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Subject: [Xen-devel] [PATCH v2 1/6] xen: Always calculate max_cpus value
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

max_cpus has not been available since changeset 20374 (Miscellaneous
data placement adjustments): it was moved to the __initdata section,
which is freed after Xen initialization. Assume that max_cpus is always
equal to XEN_HYPER_SIZE(cpumask_t) * 8.

Signed-off-by: Daniel Kiper <daniel.kiper@oracle.com>

diff -Npru crash-6.0.8.orig/xen_hyper.c crash-6.0.8/xen_hyper.c
--- crash-6.0.8.orig/xen_hyper.c	2012-06-29 16:59:18.000000000 +0200
+++ crash-6.0.8/xen_hyper.c	2012-07-05 14:52:59.000000000 +0200
@@ -1879,11 +1879,9 @@ xen_hyper_get_cpu_info(void)
 	uint *cpu_idx;
 	int i, j, cpus;
 
-	get_symbol_data("max_cpus", sizeof(xht->max_cpus), &xht->max_cpus);
 	XEN_HYPER_STRUCT_SIZE_INIT(cpumask_t, "cpumask_t");
-	if (XEN_HYPER_SIZE(cpumask_t) * 8 > xht->max_cpus) {
-		xht->max_cpus = XEN_HYPER_SIZE(cpumask_t) * 8;
-	}
+	xht->max_cpus = XEN_HYPER_SIZE(cpumask_t) * 8;
+
 	if (xht->cpumask) {
 		free(xht->cpumask);
 	}
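[Editor's note] The replacement logic rests on one arithmetic fact: a cpumask_t of N bytes holds N * 8 CPU bits, one bit per CPU. A minimal sketch follows; the cpumask_t layout is assumed for illustration, whereas crash obtains the real size from the hypervisor's debug info via XEN_HYPER_STRUCT_SIZE_INIT.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Assumed example layout (4 x 64 bits = 256 CPU bits); the real size
 * comes from XEN_HYPER_STRUCT_SIZE_INIT at runtime. */
typedef struct { uint64_t bits[4]; } cpumask_t;

/* One bit per CPU, so the ceiling is size-in-bytes * 8. */
static unsigned int max_cpus_from_cpumask(size_t cpumask_size)
{
    return (unsigned int)(cpumask_size * 8);
}
```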

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 13:27:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 13:27:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzpFM-00079a-Ly; Fri, 10 Aug 2012 13:27:24 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <daniel.kiper@oracle.com>) id 1SzpFL-00079M-ED
	for xen-devel@lists.xensource.com; Fri, 10 Aug 2012 13:27:23 +0000
Received: from [85.158.143.35:36579] by server-3.bemta-4.messagelabs.com id
	CD/7B-31486-A3C05205; Fri, 10 Aug 2012 13:27:22 +0000
X-Env-Sender: daniel.kiper@oracle.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1344605240!13369855!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDY5MTEzNA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8721 invoked from network); 10 Aug 2012 13:27:22 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-3.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Aug 2012 13:27:22 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7ADRF62013100
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 10 Aug 2012 13:27:15 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7ADREgF015670
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 10 Aug 2012 13:27:14 GMT
Received: from abhmt105.oracle.com (abhmt105.oracle.com [141.146.116.57])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7ADRD43025057; Fri, 10 Aug 2012 08:27:13 -0500
Received: from host-192-168-1-59.local.net-space.pl (/89.174.63.77)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 10 Aug 2012 06:27:13 -0700
Date: Fri, 10 Aug 2012 15:25:56 +0200
From: Daniel Kiper <daniel.kiper@oracle.com>
To: anderson@redhat.com, konrad.wilk@oracle.com, andrew.cooper3@citrix.com,
	olaf@aepfle.de, jbeulich@suse.com, ptesarik@suse.cz,
	crash-utility@redhat.com, kexec@lists.infradead.org,
	xen-devel@lists.xensource.com
Message-ID: <20120810132556.GC2576@host-192-168-1-59.local.net-space.pl>
MIME-Version: 1.0
Content-Disposition: inline
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Subject: [Xen-devel] [PATCH v2 2/6] xen: Read only crash notes for onlined
	CPUs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Read crash notes only for CPUs that are online. Crash notes for CPUs that
are not running do not exist in the core file, so do not try to read them.

Signed-off-by: Daniel Kiper <daniel.kiper@oracle.com>

diff -Npru crash-6.0.8.orig/xen_hyper.c crash-6.0.8/xen_hyper.c
--- crash-6.0.8.orig/xen_hyper.c	2012-07-05 15:34:45.000000000 +0200
+++ crash-6.0.8/xen_hyper.c	2012-07-05 15:35:05.000000000 +0200
@@ -633,18 +633,18 @@ xen_hyper_dumpinfo_init(void)
 	}
 
 	/* allocate a context area */
-	size = sizeof(struct xen_hyper_dumpinfo_context) * XEN_HYPER_MAX_CPUS();
+	size = sizeof(struct xen_hyper_dumpinfo_context) * machdep->get_smp_cpus();
 	if((xhdit->context_array = malloc(size)) == NULL) {
 		error(FATAL, "cannot malloc dumpinfo table context space.\n");
 	}
 	BZERO(xhdit->context_array, size);
-	size = sizeof(struct xen_hyper_dumpinfo_context_xen_core) * XEN_HYPER_MAX_CPUS();
+	size = sizeof(struct xen_hyper_dumpinfo_context_xen_core) * machdep->get_smp_cpus();
 	if((xhdit->context_xen_core_array = malloc(size)) == NULL) {
 		error(FATAL, "cannot malloc dumpinfo table context_xen_core_array space.\n");
 	}
 	BZERO(xhdit->context_xen_core_array, size);
 	addr = symbol_value("per_cpu__crash_notes");
-	for (i = 0; i < XEN_HYPER_MAX_CPUS(); i++) {
+	for (i = 0; i < machdep->get_smp_cpus(); i++) {
 		ulong addr_notes;
 
 		addr_notes = xen_hyper_per_cpu(addr, i);

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 13:28:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 13:28:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzpG7-0007Gy-3j; Fri, 10 Aug 2012 13:28:11 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <daniel.kiper@oracle.com>) id 1SzpG6-0007Gl-I7
	for xen-devel@lists.xensource.com; Fri, 10 Aug 2012 13:28:10 +0000
Received: from [85.158.143.99:10213] by server-3.bemta-4.messagelabs.com id
	4F/6E-31486-96C05205; Fri, 10 Aug 2012 13:28:09 +0000
X-Env-Sender: daniel.kiper@oracle.com
X-Msg-Ref: server-8.tower-216.messagelabs.com!1344605286!17783578!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDY5MTEzNA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28598 invoked from network); 10 Aug 2012 13:28:08 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-8.tower-216.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Aug 2012 13:28:08 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7ADS1NV013926
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 10 Aug 2012 13:28:02 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7ADS0bk016652
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 10 Aug 2012 13:28:01 GMT
Received: from abhmt114.oracle.com (abhmt114.oracle.com [141.146.116.66])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7ADS0O4019275; Fri, 10 Aug 2012 08:28:00 -0500
Received: from host-192-168-1-59.local.net-space.pl (/89.174.63.77)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 10 Aug 2012 06:28:00 -0700
Date: Fri, 10 Aug 2012 15:26:41 +0200
From: Daniel Kiper <daniel.kiper@oracle.com>
To: anderson@redhat.com, konrad.wilk@oracle.com, andrew.cooper3@citrix.com,
	olaf@aepfle.de, jbeulich@suse.com, ptesarik@suse.cz,
	crash-utility@redhat.com, kexec@lists.infradead.org,
	xen-devel@lists.xensource.com
Message-ID: <20120810132641.GD2576@host-192-168-1-59.local.net-space.pl>
MIME-Version: 1.0
Content-Disposition: inline
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Subject: [Xen-devel] [PATCH v2 3/6] x86/xen: Read variables from dynamically
 allocated per_cpu data
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

per_cpu data has been dynamically allocated since changeset 21416 (x86:
Dynamically allocate percpu data area when a CPU comes online). Take this
into account and read variables from the correct address.

Signed-off-by: Daniel Kiper <daniel.kiper@oracle.com>

diff -Npru crash-6.0.8.orig/xen_hyper.c crash-6.0.8/xen_hyper.c
--- crash-6.0.8.orig/xen_hyper.c	2012-07-05 15:47:09.000000000 +0200
+++ crash-6.0.8/xen_hyper.c	2012-07-05 15:50:19.000000000 +0200
@@ -64,7 +64,6 @@ xen_hyper_init(void)
 	machdep->get_smp_cpus();
 	machdep->memory_size();
 
-#ifdef IA64
 	if (symbol_exists("__per_cpu_offset")) {
 		xht->flags |= XEN_HYPER_SMP;
 		if((xht->__per_cpu_offset = malloc(sizeof(ulong) * XEN_HYPER_MAX_CPUS())) == NULL) {
@@ -76,7 +75,6 @@ xen_hyper_init(void)
 			error(FATAL, "cannot read __per_cpu_offset.\n");
 		}
 	}
-#endif
 
 #if defined(X86) || defined(X86_64)
 	if (symbol_exists("__per_cpu_shift")) {
diff -Npru crash-6.0.8.orig/xen_hyper_defs.h crash-6.0.8/xen_hyper_defs.h
--- crash-6.0.8.orig/xen_hyper_defs.h	2012-06-29 16:59:18.000000000 +0200
+++ crash-6.0.8/xen_hyper_defs.h	2012-07-05 15:50:19.000000000 +0200
@@ -136,7 +136,13 @@ typedef uint32_t	Elf_Word;
 
 #if defined(X86) || defined(X86_64)
 #define xen_hyper_per_cpu(var, cpu)  \
-	((ulong)(var) + (((ulong)(cpu))<<xht->percpu_shift))
+	({ ulong __var_addr; \
+	   if (xht->__per_cpu_offset) \
+		__var_addr = (xht->flags & XEN_HYPER_SMP) ? \
+			((ulong)(var) + xht->__per_cpu_offset[cpu]) : (ulong)(var); \
+	   else \
+		__var_addr = (ulong)(var) + ((ulong)(cpu) << xht->percpu_shift); \
+	   __var_addr; })
 #elif defined(IA64)
 #define xen_hyper_per_cpu(var, cpu)  \
 	((xht->flags & XEN_HYPER_SMP) ? \

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 13:28:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 13:28:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzpGX-0007Ls-I3; Fri, 10 Aug 2012 13:28:37 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <thanos.makatos@citrix.com>) id 1SzpGW-0007Kc-07
	for xen-devel@lists.xen.org; Fri, 10 Aug 2012 13:28:36 +0000
X-Env-Sender: thanos.makatos@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1344605122!8612930!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg4NDM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28225 invoked from network); 10 Aug 2012 13:25:24 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Aug 2012 13:25:24 -0000
X-IronPort-AV: E=Sophos;i="4.77,745,1336348800"; d="scan'208";a="13953967"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Aug 2012 13:25:20 +0000
Received: from LONPMAILBOX01.citrite.net ([10.30.224.160]) by
	LONPMAILMX01.citrite.net ([10.30.203.162]) with mapi; Fri, 10 Aug 2012
	14:25:20 +0100
From: Thanos Makatos <thanos.makatos@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Fri, 10 Aug 2012 14:25:17 +0100
Thread-Topic: [Xen-devel] RFC: blktap3
Thread-Index: Ac12+bbpRxJwdpfvSkm8SeNj+fVwNQAAB17g
Message-ID: <4B45B535F7F6BE4CB1C044ED5115CDDE0107F6C1D5A8@LONPMAILBOX01.citrite.net>
References: <4B45B535F7F6BE4CB1C044ED5115CDDE0107F6C1D44C@LONPMAILBOX01.citrite.net>
	<20120809213933.GA9099@eire.uk.xensource.com>
	<4B45B535F7F6BE4CB1C044ED5115CDDE0107F6C1D561@LONPMAILBOX01.citrite.net>
	<502524AC02000078000942A5@nat28.tlf.novell.com>
In-Reply-To: <502524AC02000078000942A5@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
MIME-Version: 1.0
Cc: Goncalo Gomes <Goncalo.Gomes@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] RFC: blktap3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This code was not implemented by me; it was handed over to me to release and maintain. I would really like to have the time to read and understand it so that I can address review comments as you said, but my allocated time for blktap3 isn't enough for this. My big problem is that the code in libxl keeps evolving, which renders the blktap3 code out of sync, and it's a pain to rebase. So I wanted to see whether it was possible to incorporate it into xen-unstable ASAP, avoiding the rebase effort. I'll now divide the code into pieces and send them as individual patches; I believe I'll get a better understanding of the code during this process.

-----Original Message-----
From: Jan Beulich [mailto:JBeulich@suse.com] 
Sent: 10 August 2012 14:12
To: Thanos Makatos
Cc: Goncalo Gomes; xen-devel@lists.xen.org
Subject: Re: [Xen-devel] RFC: blktap3

>>> On 10.08.12 at 13:06, Thanos Makatos <thanos.makatos@citrix.com> wrote:
> I haven't yet looked at the code in depth,

How come you post this not insignificant amount of code then?
You ought to be able to address review comments...

Jan

> but I suppose blktap3 can, in some
> cases, be faster than blktap2 since the dom0 kernel doesn't 
> participate in the data path. At least that's what I'd expect ;-).



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 13:29:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 13:29:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzpGt-0007R4-Vo; Fri, 10 Aug 2012 13:28:59 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <daniel.kiper@oracle.com>) id 1SzpGs-0007Qg-Q5
	for xen-devel@lists.xensource.com; Fri, 10 Aug 2012 13:28:58 +0000
Received: from [85.158.143.35:47348] by server-1.bemta-4.messagelabs.com id
	0C/C4-20198-99C05205; Fri, 10 Aug 2012 13:28:57 +0000
X-Env-Sender: daniel.kiper@oracle.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1344605336!13386350!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjgzNTc0\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13857 invoked from network); 10 Aug 2012 13:28:57 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-16.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Aug 2012 13:28:57 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7ADSpJU032098
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 10 Aug 2012 13:28:52 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7ADSpdw000963
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 10 Aug 2012 13:28:51 GMT
Received: from abhmt118.oracle.com (abhmt118.oracle.com [141.146.116.70])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7ADSo7W019823; Fri, 10 Aug 2012 08:28:51 -0500
Received: from host-192-168-1-59.local.net-space.pl (/89.174.63.77)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 10 Aug 2012 06:28:50 -0700
Date: Fri, 10 Aug 2012 15:27:33 +0200
From: Daniel Kiper <daniel.kiper@oracle.com>
To: anderson@redhat.com, konrad.wilk@oracle.com, andrew.cooper3@citrix.com,
	olaf@aepfle.de, jbeulich@suse.com, ptesarik@suse.cz,
	crash-utility@redhat.com, kexec@lists.infradead.org,
	xen-devel@lists.xensource.com
Message-ID: <20120810132733.GE2576@host-192-168-1-59.local.net-space.pl>
MIME-Version: 1.0
Content-Disposition: inline
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Subject: [Xen-devel] [PATCH v2 4/6] xen: Get idle data from alternative
	source
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The idle member was removed from struct schedule_data by changeset 21422
(Fix CPU hotplug after percpu data handling changes). Get the idle data
from an alternative source instead.

Signed-off-by: Daniel Kiper <daniel.kiper@oracle.com>

diff -Npru crash-6.0.8.orig/xen_hyper.c crash-6.0.8/xen_hyper.c
--- crash-6.0.8.orig/xen_hyper.c	2012-07-05 16:05:31.000000000 +0200
+++ crash-6.0.8/xen_hyper.c	2012-07-05 16:08:52.000000000 +0200
@@ -397,7 +397,8 @@ xen_hyper_misc_init(void)
 	XEN_HYPER_STRUCT_SIZE_INIT(schedule_data, "schedule_data");
 	XEN_HYPER_MEMBER_OFFSET_INIT(schedule_data_schedule_lock, "schedule_data", "schedule_lock");
 	XEN_HYPER_MEMBER_OFFSET_INIT(schedule_data_curr, "schedule_data", "curr");
-	XEN_HYPER_MEMBER_OFFSET_INIT(schedule_data_idle, "schedule_data", "idle");
+	if (MEMBER_EXISTS("schedule_data", "idle"))
+		XEN_HYPER_MEMBER_OFFSET_INIT(schedule_data_idle, "schedule_data", "idle");
 	XEN_HYPER_MEMBER_OFFSET_INIT(schedule_data_sched_priv, "schedule_data", "sched_priv");
 	XEN_HYPER_MEMBER_OFFSET_INIT(schedule_data_s_timer, "schedule_data", "s_timer");
 	XEN_HYPER_MEMBER_OFFSET_INIT(schedule_data_tick, "schedule_data", "tick");
@@ -539,7 +540,10 @@ xen_hyper_schedule_init(void)
 		}
 		schc->cpu_id = cpuid;
 		schc->curr = ULONG(buf + XEN_HYPER_OFFSET(schedule_data_curr));
-		schc->idle = ULONG(buf + XEN_HYPER_OFFSET(schedule_data_idle));
+		if (MEMBER_EXISTS("schedule_data", "idle"))
+			schc->idle = ULONG(buf + XEN_HYPER_OFFSET(schedule_data_idle));
+		else
+			schc->idle = xht->idle_vcpu_array[cpuid];
 		schc->sched_priv =
 			ULONG(buf + XEN_HYPER_OFFSET(schedule_data_sched_priv));
 		if (XEN_HYPER_VALID_MEMBER(schedule_data_tick))

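The hunks above follow a common crash-utility pattern: probe the target's debuginfo for a struct member before using its offset, and fall back to another source when the member is gone. A minimal sketch of that fallback, with hypothetical stand-ins for the debuginfo lookup (member_offset() here simulates a post-21422 hypervisor where "idle" no longer exists; it is not a real crash API):

```c
#include <string.h>

/* Hypothetical stand-in for the crash utility's debuginfo lookup:
 * returns the byte offset of a member, or -1 if the member does not
 * exist in the target's struct.  Here we simulate a post-21422
 * hypervisor in which schedule_data.idle was removed. */
static long member_offset(const char *strct, const char *member)
{
    if (strcmp(strct, "schedule_data") == 0 && strcmp(member, "idle") == 0)
        return -1;
    return 0; /* pretend every other member sits at offset 0 */
}

/* Resolve the idle vcpu for a cpu: prefer the struct member when the
 * debuginfo still has it, otherwise fall back to idle_vcpu[cpuid],
 * mirroring the patch's use of xht->idle_vcpu_array. */
static unsigned long resolve_idle(const char *buf,
                                  const unsigned long *idle_vcpu_array,
                                  int cpuid)
{
    long off = member_offset("schedule_data", "idle");

    if (off >= 0) {
        unsigned long v;
        memcpy(&v, buf + off, sizeof(v));
        return v;
    }
    return idle_vcpu_array[cpuid];
}
```

Because the offset probe happens once per member, older and newer hypervisor builds are handled by the same binary without version checks.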
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 13:29:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 13:29:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzpHh-0007cU-DK; Fri, 10 Aug 2012 13:29:49 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <daniel.kiper@oracle.com>) id 1SzpHg-0007bz-Az
	for xen-devel@lists.xensource.com; Fri, 10 Aug 2012 13:29:48 +0000
Received: from [85.158.138.51:28608] by server-6.bemta-3.messagelabs.com id
	1D/4C-02321-BCC05205; Fri, 10 Aug 2012 13:29:47 +0000
X-Env-Sender: daniel.kiper@oracle.com
X-Msg-Ref: server-11.tower-174.messagelabs.com!1344605385!27612989!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjgzNTc0\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21076 invoked from network); 10 Aug 2012 13:29:46 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-11.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Aug 2012 13:29:46 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7ADTd7M000534
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 10 Aug 2012 13:29:40 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7ADTcJq022926
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 10 Aug 2012 13:29:39 GMT
Received: from abhmt103.oracle.com (abhmt103.oracle.com [141.146.116.55])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7ADTcMP008395; Fri, 10 Aug 2012 08:29:38 -0500
Received: from host-192-168-1-59.local.net-space.pl (/89.174.63.77)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 10 Aug 2012 06:29:38 -0700
Date: Fri, 10 Aug 2012 15:28:21 +0200
From: Daniel Kiper <daniel.kiper@oracle.com>
To: anderson@redhat.com, konrad.wilk@oracle.com, andrew.cooper3@citrix.com,
	olaf@aepfle.de, jbeulich@suse.com, ptesarik@suse.cz,
	crash-utility@redhat.com, kexec@lists.infradead.org,
	xen-devel@lists.xensource.com
Message-ID: <20120810132821.GF2576@host-192-168-1-59.local.net-space.pl>
MIME-Version: 1.0
Content-Disposition: inline
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Subject: [Xen-devel] [PATCH v2 5/6] xen: Read data correctly from
 dynamically allocated console ring, too
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The console ring has been dynamically allocated since changeset 19543
(New option conring_size= to allow larger console ring). Take that into
account and read the data from it correctly, too.

v2 - Dave Anderson suggestions/fixes:
   - check conring type before determining its value.

Signed-off-by: Daniel Kiper <daniel.kiper@oracle.com>

diff -Npru crash-6.0.8.orig/xen_hyper_command.c crash-6.0.8/xen_hyper_command.c
--- crash-6.0.8.orig/xen_hyper_command.c	2012-06-29 16:59:18.000000000 +0200
+++ crash-6.0.8/xen_hyper_command.c	2012-08-10 14:05:24.000000000 +0200
@@ -590,24 +590,35 @@ xen_hyper_dump_log(void)
 	ulong conring;
 	char *buf;
 	char last;
+	uint32_t conring_size;
+
+	if (get_symbol_type("conring", NULL, NULL) == TYPE_CODE_ARRAY)
+		conring = symbol_value("conring");
+	else
+		get_symbol_data("conring", sizeof(ulong), &conring);
 
-	conring = symbol_value("conring");
 	get_symbol_data("conringc", sizeof(uint), &conringc);
 	get_symbol_data("conringp", sizeof(uint), &conringp);
+
+	if (symbol_exists("conring_size"))
+		get_symbol_data("conring_size", sizeof(uint32_t), &conring_size);
+	else
+		conring_size = XEN_HYPER_CONRING_SIZE;
+
 	warp = FALSE;
-	if (conringp >= XEN_HYPER_CONRING_SIZE) {
-		if ((start = conringp & (XEN_HYPER_CONRING_SIZE - 1))) {
+	if (conringp >= conring_size) {
+		if ((start = conringp & (conring_size - 1))) {
 			warp = TRUE;
 		}
 	} else {
 		start = 0;
 	}
 
-	buf = GETBUF(XEN_HYPER_CONRING_SIZE);
-	readmem(conring, KVADDR, buf, XEN_HYPER_CONRING_SIZE,
+	buf = GETBUF(conring_size);
+	readmem(conring, KVADDR, buf, conring_size,
 		"conring contents", FAULT_ON_ERROR);
 	idx = start;
-	len = XEN_HYPER_CONRING_SIZE;
+	len = conring_size;
 	last = 0;
 
 wrap_around:

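The wraparound logic in the hunk above relies on conring_size being a power of two, so the oldest byte's position can be found with a mask rather than a modulo. A minimal standalone sketch of that read path (ring_linearize() is an illustrative helper, not part of crash; prod plays the role of Xen's free-running conringp):

```c
#include <stddef.h>
#include <string.h>

/* Copy the logical contents of a power-of-two sized console ring into
 * out[], oldest byte first.  prod is the free-running producer index:
 * once prod >= size the ring has wrapped and the oldest byte lives at
 * prod & (size - 1), exactly as in xen_hyper_dump_log(). */
static size_t ring_linearize(const char *ring, size_t size,
                             size_t prod, char *out)
{
    size_t start = (prod >= size) ? (prod & (size - 1)) : 0;
    size_t len   = (prod >= size) ? size : prod;

    if (start == 0) {
        memcpy(out, ring, len);
    } else {
        size_t tail = size - start;
        memcpy(out, ring + start, tail);       /* oldest part first */
        memcpy(out + tail, ring, start);       /* then the wrapped part */
    }
    return len;
}
```

Before the patch the mask used the compile-time XEN_HYPER_CONRING_SIZE; afterwards the runtime conring_size symbol is used when present, so enlarged rings are linearized correctly.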
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 13:30:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 13:30:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzpIg-0007oV-SH; Fri, 10 Aug 2012 13:30:50 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <daniel.kiper@oracle.com>) id 1SzpIf-0007oE-IY
	for xen-devel@lists.xensource.com; Fri, 10 Aug 2012 13:30:49 +0000
Received: from [85.158.139.83:60563] by server-11.bemta-5.messagelabs.com id
	8E/89-11482-80D05205; Fri, 10 Aug 2012 13:30:48 +0000
X-Env-Sender: daniel.kiper@oracle.com
X-Msg-Ref: server-11.tower-182.messagelabs.com!1344605446!20245831!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjgzNTc0\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22836 invoked from network); 10 Aug 2012 13:30:47 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-11.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Aug 2012 13:30:47 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7ADUeHQ001708
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 10 Aug 2012 13:30:41 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7ADUeXe005890
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 10 Aug 2012 13:30:40 GMT
Received: from abhmt117.oracle.com (abhmt117.oracle.com [141.146.116.69])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7ADUec3021156; Fri, 10 Aug 2012 08:30:40 -0500
Received: from host-192-168-1-59.local.net-space.pl (/89.174.63.77)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 10 Aug 2012 06:30:39 -0700
Date: Fri, 10 Aug 2012 15:29:22 +0200
From: Daniel Kiper <daniel.kiper@oracle.com>
To: anderson@redhat.com, konrad.wilk@oracle.com, andrew.cooper3@citrix.com,
	olaf@aepfle.de, jbeulich@suse.com, ptesarik@suse.cz,
	crash-utility@redhat.com, kexec@lists.infradead.org,
	xen-devel@lists.xensource.com
Message-ID: <20120810132922.GG2576@host-192-168-1-59.local.net-space.pl>
MIME-Version: 1.0
Content-Disposition: inline
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Subject: [Xen-devel] [PATCH v2 6/6] xen: Add support for 3 level P2M tree
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Linux kernel commit 58e05027b530ff081ecea68e38de8d59db8f87e0 (xen: convert
p2m to a 3 level tree) introduced a 3 level P2M tree. Add support for it.

Signed-off-by: Daniel Kiper <daniel.kiper@oracle.com>

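The new three-level walk recovers a pfn from tree coordinates (top index i, mid index j, leaf offset k) with the arithmetic p = i*MID*PER + j*PER + k. A minimal sketch of that index math and its inverse, using illustrative 4 KiB-page constants (512 entries per level on a 64-bit target, matching the XEN_P2M_*_PER_PAGE macros the patch adds; the helper names are hypothetical):

```c
/* Illustrative constants for a 64-bit target with 4 KiB pages: each
 * level of the tree holds PAGESIZE / sizeof(ptr-or-ulong) = 512
 * entries, as in the XEN_P2M_*_PER_PAGE macros added to defs.h. */
#define P2M_PER_PAGE      512UL  /* ulong entries per leaf page     */
#define P2M_MID_PER_PAGE  512UL  /* pointers per mid-level page     */

/* pfn recovered from tree coordinates (i, j, k) -- the same
 * arithmetic __xen_pvops_m2p_l3() performs after a leaf hit. */
static unsigned long p2m_coords_to_pfn(unsigned long i, unsigned long j,
                                       unsigned long k)
{
    return i * P2M_MID_PER_PAGE * P2M_PER_PAGE + j * P2M_PER_PAGE + k;
}

/* ...and the inverse, splitting a pfn back into (i, j, k). */
static void p2m_pfn_to_coords(unsigned long pfn, unsigned long *i,
                              unsigned long *j, unsigned long *k)
{
    *k = pfn % P2M_PER_PAGE;
    *j = (pfn / P2M_PER_PAGE) % P2M_MID_PER_PAGE;
    *i = pfn / (P2M_PER_PAGE * P2M_MID_PER_PAGE);
}
```

With these sizes one top page covers 512 * 512 * 512 pfns, which is why the walk can skip whole subtrees whenever a top or mid entry points at the shared p2m_mid_missing / p2m_missing pages.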
diff -Npru crash-6.0.8.orig/defs.h crash-6.0.8/defs.h
--- crash-6.0.8.orig/defs.h	2012-06-29 16:59:18.000000000 +0200
+++ crash-6.0.8/defs.h	2012-08-06 23:32:31.000000000 +0200
@@ -595,6 +595,10 @@ struct new_utsname {
 
 #define XEN_MACHADDR_NOT_FOUND   (~0ULL) 
 
+#define XEN_P2M_PER_PAGE	(PAGESIZE() / sizeof(unsigned long))
+#define XEN_P2M_MID_PER_PAGE	(PAGESIZE() / sizeof(unsigned long *))
+#define XEN_P2M_TOP_PER_PAGE	(PAGESIZE() / sizeof(unsigned long **))
+
 struct kernel_table {                   /* kernel data */
 	ulong flags;
 	ulong stext;
@@ -655,6 +659,7 @@ struct kernel_table {                   
 	struct pvops_xen_info {
 		int p2m_top_entries;
 		ulong p2m_top;
+		ulong p2m_mid_missing;
 		ulong p2m_missing;
 	} pvops_xen;
 	int highest_irq;
diff -Npru crash-6.0.8.orig/kernel.c crash-6.0.8/kernel.c
--- crash-6.0.8.orig/kernel.c	2012-06-29 16:59:18.000000000 +0200
+++ crash-6.0.8/kernel.c	2012-08-07 13:44:12.000000000 +0200
@@ -49,6 +49,8 @@ static void verify_namelist(void);
 static char *debug_kernel_version(char *);
 static int restore_stack(struct bt_info *);
 static ulong __xen_m2p(ulonglong, ulong);
+static ulong __xen_pvops_m2p_l2(ulonglong, ulong);
+static ulong __xen_pvops_m2p_l3(ulonglong, ulong);
 static int search_mapping_page(ulong, ulong *, ulong *, ulong *);
 static void read_in_kernel_config_err(int, char *);
 static void BUG_bytes_init(void);
@@ -147,9 +149,19 @@ kernel_init()
                 if ((kt->m2p_page = (char *)malloc(PAGESIZE())) == NULL)
                        	error(FATAL, "cannot malloc m2p page.");
 
-		kt->pvops_xen.p2m_top_entries = get_array_length("p2m_top", NULL, 0);
-		kt->pvops_xen.p2m_top = symbol_value("p2m_top");
-		kt->pvops_xen.p2m_missing = symbol_value("p2m_missing");
+		if (symbol_exists("p2m_mid_missing")) {
+			kt->pvops_xen.p2m_top_entries = XEN_P2M_TOP_PER_PAGE;
+			get_symbol_data("p2m_top", sizeof(ulong),
+						&kt->pvops_xen.p2m_top);
+			get_symbol_data("p2m_mid_missing", sizeof(ulong),
+						&kt->pvops_xen.p2m_mid_missing);
+			get_symbol_data("p2m_missing", sizeof(ulong),
+						&kt->pvops_xen.p2m_missing);
+		} else {
+			kt->pvops_xen.p2m_top_entries = get_array_length("p2m_top", NULL, 0);
+			kt->pvops_xen.p2m_top = symbol_value("p2m_top");
+			kt->pvops_xen.p2m_missing = symbol_value("p2m_missing");
+		}
 	}
 
 	if (symbol_exists("smp_num_cpus")) {
@@ -5044,6 +5056,8 @@ no_cpu_flags:
 	fprintf(fp, "              pvops_xen:\n");
 	fprintf(fp, "                    p2m_top: %lx\n", kt->pvops_xen.p2m_top);
 	fprintf(fp, "            p2m_top_entries: %d\n", kt->pvops_xen.p2m_top_entries);
+	if (symbol_exists("p2m_mid_missing"))
+		fprintf(fp, "            p2m_mid_missing: %lx\n", kt->pvops_xen.p2m_mid_missing);
 	fprintf(fp, "                p2m_missing: %lx\n", kt->pvops_xen.p2m_missing);
 }
 
@@ -7391,15 +7405,9 @@ xen_m2p(ulonglong machine)
 static ulong
 __xen_m2p(ulonglong machine, ulong mfn)
 {
-	ulong mapping, p2m, kmfn, pfn, p, i, e, c;
+	ulong c, i, kmfn, mapping, p, pfn;
 	ulong start, end;
-	ulong *mp;
-
-	mp = (ulong *)kt->m2p_page;
-	if (PVOPS_XEN())
-		mapping = UNINITIALIZED;
-	else
-		mapping = kt->phys_to_machine_mapping;
+	ulong *mp = (ulong *)kt->m2p_page;
 
 	/*
 	 *  Check the FIFO cache first.
@@ -7449,55 +7457,21 @@ __xen_m2p(ulonglong machine, ulong mfn)
 		 *  beginning of the p2m_top array, caching the contiguous
 		 *  range containing the found machine address.
 		 */
-		for (e = p = 0, p2m = kt->pvops_xen.p2m_top;
-		     e < kt->pvops_xen.p2m_top_entries; 
-		     e++, p += XEN_PFNS_PER_PAGE, p2m += sizeof(void *)) {
-
-			if (!readmem(p2m, KVADDR, &mapping,
-			    sizeof(void *), "p2m_top", RETURN_ON_ERROR))
-				error(FATAL, "cannot access p2m_top[] entry\n");
-
-			if (mapping != kt->last_mapping_read) {
-				if (mapping != kt->pvops_xen.p2m_missing) {
-					if (!readmem(mapping, KVADDR, mp, 
-					    PAGESIZE(), "p2m_top page", 
-					    RETURN_ON_ERROR))
-						error(FATAL, 
-				     	    	    "cannot access "
-						    "p2m_top[] page\n");
-					kt->last_mapping_read = mapping;
-				}
-			}
-
-			if (mapping == kt->pvops_xen.p2m_missing)
-				continue;
-
-			kt->p2m_pages_searched++;
+		if (symbol_exists("p2m_mid_missing"))
+			pfn = __xen_pvops_m2p_l3(machine, mfn);
+		else
+			pfn = __xen_pvops_m2p_l2(machine, mfn);
 
-			if (search_mapping_page(mfn, &i, &start, &end)) {
-				pfn = p + i;
-				if (CRASHDEBUG(1))
-				    console("pages: %d mfn: %lx (%llx) p: %ld"
-					" i: %ld pfn: %lx (%llx)\n",
-					(p/XEN_PFNS_PER_PAGE)+1, mfn, machine,
-					p, i, pfn, XEN_PFN_TO_PSEUDO(pfn));
-	
-				c = kt->p2m_cache_index;
-				kt->p2m_mapping_cache[c].start = start;
-				kt->p2m_mapping_cache[c].end = end;
-				kt->p2m_mapping_cache[c].mapping = mapping;
-				kt->p2m_mapping_cache[c].pfn = p;
-				kt->p2m_cache_index = (c+1) % P2M_MAPPING_CACHE;
-	
-				return pfn;
-			}
-		}
+		if (pfn != XEN_MFN_NOT_FOUND)
+			return pfn;
 	} else {
 		/*
 		 *  The machine address was not cached, so search from the
 		 *  beginning of the phys_to_machine_mapping array, caching
 		 *  the contiguous range containing the found machine address.
 		 */
+		mapping = kt->phys_to_machine_mapping;
+
 		for (p = 0; p < kt->p2m_table_size; p += XEN_PFNS_PER_PAGE) 
 		{
 			if (mapping != kt->last_mapping_read) {
@@ -7540,6 +7514,115 @@ __xen_m2p(ulonglong machine, ulong mfn)
 	return (XEN_MFN_NOT_FOUND);
 }
 
+static ulong
+__xen_pvops_m2p_l2(ulonglong machine, ulong mfn)
+{
+	ulong c, e, end, i, mapping, p, p2m, pfn, start;
+
+	for (e = p = 0, p2m = kt->pvops_xen.p2m_top;
+	     e < kt->pvops_xen.p2m_top_entries;
+	     e++, p += XEN_PFNS_PER_PAGE, p2m += sizeof(void *)) {
+
+		if (!readmem(p2m, KVADDR, &mapping, sizeof(void *),
+						"p2m_top", RETURN_ON_ERROR))
+			error(FATAL, "cannot access p2m_top[] entry\n");
+
+		if (mapping == kt->pvops_xen.p2m_missing)
+			continue;
+
+		if (mapping != kt->last_mapping_read) {
+			if (!readmem(mapping, KVADDR, (void *)kt->m2p_page,
+					PAGESIZE(), "p2m_top page", RETURN_ON_ERROR))
+				error(FATAL, "cannot access p2m_top[] page\n");
+
+			kt->last_mapping_read = mapping;
+		}
+
+		kt->p2m_pages_searched++;
+
+		if (search_mapping_page(mfn, &i, &start, &end)) {
+			pfn = p + i;
+			if (CRASHDEBUG(1))
+			    console("pages: %d mfn: %lx (%llx) p: %ld"
+				" i: %ld pfn: %lx (%llx)\n",
+				(p/XEN_PFNS_PER_PAGE)+1, mfn, machine,
+				p, i, pfn, XEN_PFN_TO_PSEUDO(pfn));
+
+			c = kt->p2m_cache_index;
+			kt->p2m_mapping_cache[c].start = start;
+			kt->p2m_mapping_cache[c].end = end;
+			kt->p2m_mapping_cache[c].mapping = mapping;
+			kt->p2m_mapping_cache[c].pfn = p;
+			kt->p2m_cache_index = (c+1) % P2M_MAPPING_CACHE;
+
+			return pfn;
+		}
+	}
+
+	return XEN_MFN_NOT_FOUND;
+}
+
+static ulong
From xen-devel-bounces@lists.xen.org Fri Aug 10 13:30:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 13:30:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzpIg-0007oV-SH; Fri, 10 Aug 2012 13:30:50 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <daniel.kiper@oracle.com>) id 1SzpIf-0007oE-IY
	for xen-devel@lists.xensource.com; Fri, 10 Aug 2012 13:30:49 +0000
Received: from [85.158.139.83:60563] by server-11.bemta-5.messagelabs.com id
	8E/89-11482-80D05205; Fri, 10 Aug 2012 13:30:48 +0000
X-Env-Sender: daniel.kiper@oracle.com
X-Msg-Ref: server-11.tower-182.messagelabs.com!1344605446!20245831!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjgzNTc0\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22836 invoked from network); 10 Aug 2012 13:30:47 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-11.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Aug 2012 13:30:47 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7ADUeHQ001708
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 10 Aug 2012 13:30:41 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7ADUeXe005890
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 10 Aug 2012 13:30:40 GMT
Received: from abhmt117.oracle.com (abhmt117.oracle.com [141.146.116.69])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7ADUec3021156; Fri, 10 Aug 2012 08:30:40 -0500
Received: from host-192-168-1-59.local.net-space.pl (/89.174.63.77)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 10 Aug 2012 06:30:39 -0700
Date: Fri, 10 Aug 2012 15:29:22 +0200
From: Daniel Kiper <daniel.kiper@oracle.com>
To: anderson@redhat.com, konrad.wilk@oracle.com, andrew.cooper3@citrix.com,
	olaf@aepfle.de, jbeulich@suse.com, ptesarik@suse.cz,
	crash-utility@redhat.com, kexec@lists.infradead.org,
	xen-devel@lists.xensource.com
Message-ID: <20120810132922.GG2576@host-192-168-1-59.local.net-space.pl>
MIME-Version: 1.0
Content-Disposition: inline
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Subject: [Xen-devel] [PATCH v2 6/6] xen: Add support for 3 level P2M tree
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Linux kernel commit 58e05027b530ff081ecea68e38de8d59db8f87e0 (xen: convert
p2m to a 3 level tree) introduced a 3-level P2M tree. Add support for it.

Signed-off-by: Daniel Kiper <daniel.kiper@oracle.com>

diff -Npru crash-6.0.8.orig/defs.h crash-6.0.8/defs.h
--- crash-6.0.8.orig/defs.h	2012-06-29 16:59:18.000000000 +0200
+++ crash-6.0.8/defs.h	2012-08-06 23:32:31.000000000 +0200
@@ -595,6 +595,10 @@ struct new_utsname {
 
 #define XEN_MACHADDR_NOT_FOUND   (~0ULL) 
 
+#define XEN_P2M_PER_PAGE	(PAGESIZE() / sizeof(unsigned long))
+#define XEN_P2M_MID_PER_PAGE	(PAGESIZE() / sizeof(unsigned long *))
+#define XEN_P2M_TOP_PER_PAGE	(PAGESIZE() / sizeof(unsigned long **))
+
 struct kernel_table {                   /* kernel data */
 	ulong flags;
 	ulong stext;
@@ -655,6 +659,7 @@ struct kernel_table {                   
 	struct pvops_xen_info {
 		int p2m_top_entries;
 		ulong p2m_top;
+		ulong p2m_mid_missing;
 		ulong p2m_missing;
 	} pvops_xen;
 	int highest_irq;
diff -Npru crash-6.0.8.orig/kernel.c crash-6.0.8/kernel.c
--- crash-6.0.8.orig/kernel.c	2012-06-29 16:59:18.000000000 +0200
+++ crash-6.0.8/kernel.c	2012-08-07 13:44:12.000000000 +0200
@@ -49,6 +49,8 @@ static void verify_namelist(void);
 static char *debug_kernel_version(char *);
 static int restore_stack(struct bt_info *);
 static ulong __xen_m2p(ulonglong, ulong);
+static ulong __xen_pvops_m2p_l2(ulonglong, ulong);
+static ulong __xen_pvops_m2p_l3(ulonglong, ulong);
 static int search_mapping_page(ulong, ulong *, ulong *, ulong *);
 static void read_in_kernel_config_err(int, char *);
 static void BUG_bytes_init(void);
@@ -147,9 +149,19 @@ kernel_init()
                 if ((kt->m2p_page = (char *)malloc(PAGESIZE())) == NULL)
                        	error(FATAL, "cannot malloc m2p page.");
 
-		kt->pvops_xen.p2m_top_entries = get_array_length("p2m_top", NULL, 0);
-		kt->pvops_xen.p2m_top = symbol_value("p2m_top");
-		kt->pvops_xen.p2m_missing = symbol_value("p2m_missing");
+		if (symbol_exists("p2m_mid_missing")) {
+			kt->pvops_xen.p2m_top_entries = XEN_P2M_TOP_PER_PAGE;
+			get_symbol_data("p2m_top", sizeof(ulong),
+						&kt->pvops_xen.p2m_top);
+			get_symbol_data("p2m_mid_missing", sizeof(ulong),
+						&kt->pvops_xen.p2m_mid_missing);
+			get_symbol_data("p2m_missing", sizeof(ulong),
+						&kt->pvops_xen.p2m_missing);
+		} else {
+			kt->pvops_xen.p2m_top_entries = get_array_length("p2m_top", NULL, 0);
+			kt->pvops_xen.p2m_top = symbol_value("p2m_top");
+			kt->pvops_xen.p2m_missing = symbol_value("p2m_missing");
+		}
 	}
 
 	if (symbol_exists("smp_num_cpus")) {
@@ -5044,6 +5056,8 @@ no_cpu_flags:
 	fprintf(fp, "              pvops_xen:\n");
 	fprintf(fp, "                    p2m_top: %lx\n", kt->pvops_xen.p2m_top);
 	fprintf(fp, "            p2m_top_entries: %d\n", kt->pvops_xen.p2m_top_entries);
+	if (symbol_exists("p2m_mid_missing"))
+		fprintf(fp, "            p2m_mid_missing: %lx\n", kt->pvops_xen.p2m_mid_missing);
 	fprintf(fp, "                p2m_missing: %lx\n", kt->pvops_xen.p2m_missing);
 }
 
@@ -7391,15 +7405,9 @@ xen_m2p(ulonglong machine)
 static ulong
 __xen_m2p(ulonglong machine, ulong mfn)
 {
-	ulong mapping, p2m, kmfn, pfn, p, i, e, c;
+	ulong c, i, kmfn, mapping, p, pfn;
 	ulong start, end;
-	ulong *mp;
-
-	mp = (ulong *)kt->m2p_page;
-	if (PVOPS_XEN())
-		mapping = UNINITIALIZED;
-	else
-		mapping = kt->phys_to_machine_mapping;
+	ulong *mp = (ulong *)kt->m2p_page;
 
 	/*
 	 *  Check the FIFO cache first.
@@ -7449,55 +7457,21 @@ __xen_m2p(ulonglong machine, ulong mfn)
 		 *  beginning of the p2m_top array, caching the contiguous
 		 *  range containing the found machine address.
 		 */
-		for (e = p = 0, p2m = kt->pvops_xen.p2m_top;
-		     e < kt->pvops_xen.p2m_top_entries; 
-		     e++, p += XEN_PFNS_PER_PAGE, p2m += sizeof(void *)) {
-
-			if (!readmem(p2m, KVADDR, &mapping,
-			    sizeof(void *), "p2m_top", RETURN_ON_ERROR))
-				error(FATAL, "cannot access p2m_top[] entry\n");
-
-			if (mapping != kt->last_mapping_read) {
-				if (mapping != kt->pvops_xen.p2m_missing) {
-					if (!readmem(mapping, KVADDR, mp, 
-					    PAGESIZE(), "p2m_top page", 
-					    RETURN_ON_ERROR))
-						error(FATAL, 
-				     	    	    "cannot access "
-						    "p2m_top[] page\n");
-					kt->last_mapping_read = mapping;
-				}
-			}
-
-			if (mapping == kt->pvops_xen.p2m_missing)
-				continue;
-
-			kt->p2m_pages_searched++;
+		if (symbol_exists("p2m_mid_missing"))
+			pfn = __xen_pvops_m2p_l3(machine, mfn);
+		else
+			pfn = __xen_pvops_m2p_l2(machine, mfn);
 
-			if (search_mapping_page(mfn, &i, &start, &end)) {
-				pfn = p + i;
-				if (CRASHDEBUG(1))
-				    console("pages: %d mfn: %lx (%llx) p: %ld"
-					" i: %ld pfn: %lx (%llx)\n",
-					(p/XEN_PFNS_PER_PAGE)+1, mfn, machine,
-					p, i, pfn, XEN_PFN_TO_PSEUDO(pfn));
-	
-				c = kt->p2m_cache_index;
-				kt->p2m_mapping_cache[c].start = start;
-				kt->p2m_mapping_cache[c].end = end;
-				kt->p2m_mapping_cache[c].mapping = mapping;
-				kt->p2m_mapping_cache[c].pfn = p;
-				kt->p2m_cache_index = (c+1) % P2M_MAPPING_CACHE;
-	
-				return pfn;
-			}
-		}
+		if (pfn != XEN_MFN_NOT_FOUND)
+			return pfn;
 	} else {
 		/*
 		 *  The machine address was not cached, so search from the
 		 *  beginning of the phys_to_machine_mapping array, caching
 		 *  the contiguous range containing the found machine address.
 		 */
+		mapping = kt->phys_to_machine_mapping;
+
 		for (p = 0; p < kt->p2m_table_size; p += XEN_PFNS_PER_PAGE) 
 		{
 			if (mapping != kt->last_mapping_read) {
@@ -7540,6 +7514,115 @@ __xen_m2p(ulonglong machine, ulong mfn)
 	return (XEN_MFN_NOT_FOUND);
 }
 
+static ulong
+__xen_pvops_m2p_l2(ulonglong machine, ulong mfn)
+{
+	ulong c, e, end, i, mapping, p, p2m, pfn, start;
+
+	for (e = p = 0, p2m = kt->pvops_xen.p2m_top;
+	     e < kt->pvops_xen.p2m_top_entries;
+	     e++, p += XEN_PFNS_PER_PAGE, p2m += sizeof(void *)) {
+
+		if (!readmem(p2m, KVADDR, &mapping, sizeof(void *),
+						"p2m_top", RETURN_ON_ERROR))
+			error(FATAL, "cannot access p2m_top[] entry\n");
+
+		if (mapping == kt->pvops_xen.p2m_missing)
+			continue;
+
+		if (mapping != kt->last_mapping_read) {
+			if (!readmem(mapping, KVADDR, (void *)kt->m2p_page,
+					PAGESIZE(), "p2m_top page", RETURN_ON_ERROR))
+				error(FATAL, "cannot access p2m_top[] page\n");
+
+			kt->last_mapping_read = mapping;
+		}
+
+		kt->p2m_pages_searched++;
+
+		if (search_mapping_page(mfn, &i, &start, &end)) {
+			pfn = p + i;
+			if (CRASHDEBUG(1))
+			    console("pages: %d mfn: %lx (%llx) p: %ld"
+				" i: %ld pfn: %lx (%llx)\n",
+				(p/XEN_PFNS_PER_PAGE)+1, mfn, machine,
+				p, i, pfn, XEN_PFN_TO_PSEUDO(pfn));
+
+			c = kt->p2m_cache_index;
+			kt->p2m_mapping_cache[c].start = start;
+			kt->p2m_mapping_cache[c].end = end;
+			kt->p2m_mapping_cache[c].mapping = mapping;
+			kt->p2m_mapping_cache[c].pfn = p;
+			kt->p2m_cache_index = (c+1) % P2M_MAPPING_CACHE;
+
+			return pfn;
+		}
+	}
+
+	return XEN_MFN_NOT_FOUND;
+}
+
+static ulong
+__xen_pvops_m2p_l3(ulonglong machine, ulong mfn)
+{
+	ulong c, end, i, j, k, mapping, p;
+	ulong p2m_mid, p2m_top, pfn, start;
+
+	p2m_top = kt->pvops_xen.p2m_top;
+
+	for (i = 0; i < XEN_P2M_TOP_PER_PAGE; ++i, p2m_top += sizeof(void *)) {
+		if (!readmem(p2m_top, KVADDR, &mapping,
+				sizeof(void *), "p2m_top", RETURN_ON_ERROR))
+			error(FATAL, "cannot access p2m_top[] entry\n");
+
+		if (mapping == kt->pvops_xen.p2m_mid_missing)
+			continue;
+
+		p2m_mid = mapping;
+
+		for (j = 0; j < XEN_P2M_MID_PER_PAGE; ++j, p2m_mid += sizeof(void *)) {
+			if (!readmem(p2m_mid, KVADDR, &mapping,
+					sizeof(void *), "p2m_mid", RETURN_ON_ERROR))
+				error(FATAL, "cannot access p2m_mid[] entry\n");
+
+			if (mapping == kt->pvops_xen.p2m_missing)
+				continue;
+
+			if (mapping != kt->last_mapping_read) {
+				if (!readmem(mapping, KVADDR, (void *)kt->m2p_page,
+						PAGESIZE(), "p2m_mid page", RETURN_ON_ERROR))
+					error(FATAL, "cannot access p2m_mid[] page\n");
+
+				kt->last_mapping_read = mapping;
+			}
+
+			if (!search_mapping_page(mfn, &k, &start, &end))
+				continue;
+
+			p = i * XEN_P2M_MID_PER_PAGE * XEN_P2M_PER_PAGE;
+			p += j * XEN_P2M_PER_PAGE;
+			pfn = p + k;
+
+			if (CRASHDEBUG(1))
+				console("pages: %d mfn: %lx (%llx) p: %ld"
+					" i: %ld j: %ld k: %ld pfn: %lx (%llx)\n",
+					(p / XEN_P2M_PER_PAGE) + 1, mfn, machine,
+					p, i, j, k, pfn, XEN_PFN_TO_PSEUDO(pfn));
+
+			c = kt->p2m_cache_index;
+			kt->p2m_mapping_cache[c].start = start;
+			kt->p2m_mapping_cache[c].end = end;
+			kt->p2m_mapping_cache[c].mapping = mapping;
+			kt->p2m_mapping_cache[c].pfn = p;
+			kt->p2m_cache_index = (c + 1) % P2M_MAPPING_CACHE;
+
+			return pfn;
+		}
+	}
+
+	return XEN_MFN_NOT_FOUND;
+}
+
 /*
  *  Search for an mfn in the current mapping page, and if found, 
  *  determine the range of contiguous mfns that it's contained
diff -Npru crash-6.0.8.orig/x86.c crash-6.0.8/x86.c
--- crash-6.0.8.orig/x86.c	2012-06-29 16:59:18.000000000 +0200
+++ crash-6.0.8/x86.c	2012-08-07 13:26:27.000000000 +0200
@@ -1024,6 +1024,8 @@ static void x86_init_kernel_pgd(void);
 static ulong xen_m2p_nonPAE(ulong);
 static int x86_xendump_p2m_create(struct xendump_data *);
 static int x86_pvops_xendump_p2m_create(struct xendump_data *);
+static int x86_pvops_xendump_p2m_l2_create(struct xendump_data *);
+static int x86_pvops_xendump_p2m_l3_create(struct xendump_data *);
 static void x86_debug_dump_page(FILE *, char *, char *);
 static int x86_xen_kdump_p2m_create(struct xen_kdump_data *);
 static char *x86_xen_kdump_load_page(ulong, char *);
@@ -4969,7 +4971,7 @@ x86_xendump_p2m_create(struct xendump_da
 static int 
 x86_pvops_xendump_p2m_create(struct xendump_data *xd)
 {
-	int i, p, idx;
+	int i;
 	ulong mfn, kvaddr, ctrlreg[8], ctrlreg_offset;
 	ulong *up;
 	ulonglong *ulp;
@@ -5040,21 +5042,29 @@ x86_pvops_xendump_p2m_create(struct xend
 	    malloc(xd->xc_core.p2m_frames * sizeof(int))) == NULL)
         	error(FATAL, "cannot malloc p2m_frame_index_list");
 
+	if (symbol_exists("p2m_mid_missing"))
+		return x86_pvops_xendump_p2m_l3_create(xd);
+	else
+		return x86_pvops_xendump_p2m_l2_create(xd);
+}
+
+static int x86_pvops_xendump_p2m_l2_create(struct xendump_data *xd)
+{
+	int i, idx, p;
+	ulong kvaddr, *up;
+
 	machdep->last_ptbl_read = BADADDR;
 	machdep->last_pmd_read = BADADDR;
+
 	kvaddr = symbol_value("p2m_top");
 
 	for (p = 0; p < xd->xc_core.p2m_frames; p += XEN_PFNS_PER_PAGE) {
 		if (!x86_xendump_load_page(kvaddr, xd->page))
 			return FALSE;
 
-		if ((idx = x86_xendump_page_index(kvaddr)) == MFN_NOT_FOUND)
-			return FALSE;
-
-		if (CRASHDEBUG(7)) {
-			x86_debug_dump_page(xd->ofp, xd->page,
-				"contents of page:");
-		}
+		if (CRASHDEBUG(7))
+			x86_debug_dump_page(xd->ofp, xd->page,
+				"contents of page:");
 
 		up = (ulong *)(xd->page);
 
@@ -5067,7 +5077,7 @@ x86_pvops_xendump_p2m_create(struct xend
 		}
 
 		kvaddr += PAGESIZE();
-        }
+	}
 
 	machdep->last_ptbl_read = 0;
 	machdep->last_pmd_read = 0;
@@ -5075,6 +5085,94 @@ x86_pvops_xendump_p2m_create(struct xend
 	return TRUE;
 }
 
+static int x86_pvops_xendump_p2m_l3_create(struct xendump_data *xd)
+{
+	int i, idx, j, p2m_frame, ret = FALSE;
+	ulong kvaddr, *p2m_mid, p2m_mid_missing, p2m_missing, *p2m_top;
+
+	machdep->last_ptbl_read = BADADDR;
+	machdep->last_pmd_read = BADADDR;
+
+	kvaddr = symbol_value("p2m_missing");
+
+	if (!x86_xendump_load_page(kvaddr, xd->page))
+		goto err;
+
+	p2m_missing = *(ulong *)(xd->page + PAGEOFFSET(kvaddr));
+
+	kvaddr = symbol_value("p2m_mid_missing");
+
+	if (!x86_xendump_load_page(kvaddr, xd->page))
+		goto err;
+
+	p2m_mid_missing = *(ulong *)(xd->page + PAGEOFFSET(kvaddr));
+
+	kvaddr = symbol_value("p2m_top");
+
+	if (!x86_xendump_load_page(kvaddr, xd->page))
+		goto err;
+
+	kvaddr = *(ulong *)(xd->page + PAGEOFFSET(kvaddr));
+
+	if (!x86_xendump_load_page(kvaddr, xd->page))
+		goto err;
+
+	if (CRASHDEBUG(7))
+		x86_debug_dump_page(xd->ofp, xd->page,
+					"contents of p2m_top page:");
+
+	p2m_top = malloc(PAGESIZE());
+
+	if (!p2m_top)
+		error(FATAL, "cannot malloc p2m_top");
+
+	memcpy(p2m_top, xd->page, PAGESIZE());
+
+	for (i = 0; i < XEN_P2M_TOP_PER_PAGE; ++i) {
+		p2m_frame = i * XEN_P2M_MID_PER_PAGE;
+
+		if (p2m_frame >= xd->xc_core.p2m_frames)
+			break;
+
+		if (p2m_top[i] == p2m_mid_missing)
+			continue;
+
+		if (!x86_xendump_load_page(p2m_top[i], xd->page))
+			goto err;
+
+		if (CRASHDEBUG(7))
+			x86_debug_dump_page(xd->ofp, xd->page,
+						"contents of p2m_mid page:");
+
+		p2m_mid = (ulong *)xd->page;
+
+		for (j = 0; j < XEN_P2M_MID_PER_PAGE; ++j, ++p2m_frame) {
+			if (p2m_frame >= xd->xc_core.p2m_frames)
+				break;
+
+			if (p2m_mid[j] == p2m_missing)
+				continue;
+
+			idx = x86_xendump_page_index(p2m_mid[j]);
+
+			if (idx == MFN_NOT_FOUND)
+				goto err;
+
+			xd->xc_core.p2m_frame_index_list[p2m_frame] = idx;
+		}
+	}
+
+	machdep->last_ptbl_read = 0;
+	machdep->last_pmd_read = 0;
+
+	ret = TRUE;
+
+err:
+	free(p2m_top);
+
+	return ret;
+}
+
 static void
 x86_debug_dump_page(FILE *ofp, char *page, char *name)
 {
diff -Npru crash-6.0.8.orig/x86_64.c crash-6.0.8/x86_64.c
--- crash-6.0.8.orig/x86_64.c	2012-06-29 16:59:18.000000000 +0200
+++ crash-6.0.8/x86_64.c	2012-08-07 13:32:34.000000000 +0200
@@ -91,6 +91,8 @@ static void x86_64_framepointer_init(voi
 static int x86_64_virt_phys_base(void);
 static int x86_64_xendump_p2m_create(struct xendump_data *);
 static int x86_64_pvops_xendump_p2m_create(struct xendump_data *);
+static int x86_64_pvops_xendump_p2m_l2_create(struct xendump_data *);
+static int x86_64_pvops_xendump_p2m_l3_create(struct xendump_data *);
 static char *x86_64_xendump_load_page(ulong, struct xendump_data *);
 static int x86_64_xendump_page_index(ulong, struct xendump_data *);
 static int x86_64_xen_kdump_p2m_create(struct xen_kdump_data *);
@@ -6078,7 +6080,7 @@ x86_64_xendump_p2m_create(struct xendump
 static int 
 x86_64_pvops_xendump_p2m_create(struct xendump_data *xd)
 {
-	int i, p, idx;
+	int i;
 	ulong mfn, kvaddr, ctrlreg[8], ctrlreg_offset;
 	ulong *up;
 	off_t offset; 
@@ -6138,20 +6140,28 @@ x86_64_pvops_xendump_p2m_create(struct x
 	    malloc(xd->xc_core.p2m_frames * sizeof(ulong))) == NULL)
         	error(FATAL, "cannot malloc p2m_frame_list");
 
+	if (symbol_exists("p2m_mid_missing"))
+		return x86_64_pvops_xendump_p2m_l3_create(xd);
+	else
+		return x86_64_pvops_xendump_p2m_l2_create(xd);
+}
+
+static int x86_64_pvops_xendump_p2m_l2_create(struct xendump_data *xd)
+{
+	int i, idx, p;
+	ulong kvaddr, *up;
+
 	machdep->last_ptbl_read = BADADDR;
+
 	kvaddr = symbol_value("p2m_top");
 
 	for (p = 0; p < xd->xc_core.p2m_frames; p += XEN_PFNS_PER_PAGE) {
 		if (!x86_64_xendump_load_page(kvaddr, xd))
 			return FALSE;
 
-		if ((idx = x86_64_xendump_page_index(kvaddr, xd)) == MFN_NOT_FOUND)
-			return FALSE;
-
-		if (CRASHDEBUG(7)) {
+		if (CRASHDEBUG(7))
  			x86_64_debug_dump_page(xd->ofp, xd->page,
                        		"contents of page:");
-		}
 
 		up = (ulong *)(xd->page);
 
@@ -6160,17 +6170,103 @@ x86_64_pvops_xendump_p2m_create(struct x
 				break;
 			if ((idx = x86_64_xendump_page_index(*up, xd)) == MFN_NOT_FOUND)
 				return FALSE;
-			xd->xc_core.p2m_frame_index_list[p+i] = idx; 
+			xd->xc_core.p2m_frame_index_list[p+i] = idx;
 		}
 
 		kvaddr += PAGESIZE();
 	}
-	
+
 	machdep->last_ptbl_read = 0;
 
 	return TRUE;
 }
 
+static int x86_64_pvops_xendump_p2m_l3_create(struct xendump_data *xd)
+{
+	int i, idx, j, p2m_frame, ret = FALSE;
+	ulong kvaddr, *p2m_mid, p2m_mid_missing, p2m_missing, *p2m_top;
+
+	machdep->last_ptbl_read = BADADDR;
+
+	kvaddr = symbol_value("p2m_missing");
+
+	if (!x86_64_xendump_load_page(kvaddr, xd))
+		goto err;
+
+	p2m_missing = *(ulong *)(xd->page + PAGEOFFSET(kvaddr));
+
+	kvaddr = symbol_value("p2m_mid_missing");
+
+	if (!x86_64_xendump_load_page(kvaddr, xd))
+		goto err;
+
+	p2m_mid_missing = *(ulong *)(xd->page + PAGEOFFSET(kvaddr));
+
+	kvaddr = symbol_value("p2m_top");
+
+	if (!x86_64_xendump_load_page(kvaddr, xd))
+		goto err;
+
+	kvaddr = *(ulong *)(xd->page + PAGEOFFSET(kvaddr));
+
+	if (!x86_64_xendump_load_page(kvaddr, xd))
+		goto err;
+
+	if (CRASHDEBUG(7))
+		x86_64_debug_dump_page(xd->ofp, xd->page,
+					"contents of p2m_top page:");
+
+	p2m_top = malloc(PAGESIZE());
+
+	if (!p2m_top)
+		error(FATAL, "cannot malloc p2m_top");
+
+	memcpy(p2m_top, xd->page, PAGESIZE());
+
+	for (i = 0; i < XEN_P2M_TOP_PER_PAGE; ++i) {
+		p2m_frame = i * XEN_P2M_MID_PER_PAGE;
+
+		if (p2m_frame >= xd->xc_core.p2m_frames)
+			break;
+
+		if (p2m_top[i] == p2m_mid_missing)
+			continue;
+
+		if (!x86_64_xendump_load_page(p2m_top[i], xd))
+			goto err;
+
+		if (CRASHDEBUG(7))
+			x86_64_debug_dump_page(xd->ofp, xd->page,
+						"contents of p2m_mid page:");
+
+		p2m_mid = (ulong *)xd->page;
+
+		for (j = 0; j < XEN_P2M_MID_PER_PAGE; ++j, ++p2m_frame) {
+			if (p2m_frame >= xd->xc_core.p2m_frames)
+				break;
+
+			if (p2m_mid[j] == p2m_missing)
+				continue;
+
+			idx = x86_64_xendump_page_index(p2m_mid[j], xd);
+
+			if (idx == MFN_NOT_FOUND)
+				goto err;
+
+			xd->xc_core.p2m_frame_index_list[p2m_frame] = idx;
+		}
+	}
+
+	machdep->last_ptbl_read = 0;
+
+	ret = TRUE;
+
+err:
+	free(p2m_top);
+
+	return ret;
+}
+
 static void
 x86_64_debug_dump_page(FILE *ofp, char *page, char *name)
 {

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 13:42:18 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 13:42:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzpTd-0008KT-8G; Fri, 10 Aug 2012 13:42:09 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Wei.Wang2@amd.com>) id 1SzpTb-0008KO-QE
	for xen-devel@lists.xen.org; Fri, 10 Aug 2012 13:42:08 +0000
Received: from [85.158.143.99:23328] by server-1.bemta-4.messagelabs.com id
	2C/98-20198-FAF05205; Fri, 10 Aug 2012 13:42:07 +0000
X-Env-Sender: Wei.Wang2@amd.com
X-Msg-Ref: server-14.tower-216.messagelabs.com!1344606124!17926957!1
X-Originating-IP: [216.32.180.187]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20359 invoked from network); 10 Aug 2012 13:42:06 -0000
Received: from co1ehsobe004.messaging.microsoft.com (HELO
	co1outboundpool.messaging.microsoft.com) (216.32.180.187)
	by server-14.tower-216.messagelabs.com with AES128-SHA encrypted SMTP;
	10 Aug 2012 13:42:06 -0000
Received: from mail27-co1-R.bigfish.com (10.243.78.226) by
	CO1EHSOBE002.bigfish.com (10.243.66.65) with Microsoft SMTP Server id
	14.1.225.23; Fri, 10 Aug 2012 13:42:03 +0000
Received: from mail27-co1 (localhost [127.0.0.1])	by mail27-co1-R.bigfish.com
	(Postfix) with ESMTP id 0200314012C;
	Fri, 10 Aug 2012 13:42:04 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:163.181.249.109; KIP:(null); UIP:(null);
	IPV:NLI; H:ausb3twp02.amd.com; RD:none; EFVD:NLI
X-SpamScore: 0
X-BigFish: VPS0(zzbb2dI98dI9371I1432I1418I225amzz1202hzz8275bhz2dh668h839hd25he5bhf0ah107ah)
Received: from mail27-co1 (localhost.localdomain [127.0.0.1]) by mail27-co1
	(MessageSwitch) id 1344606121495876_26627;
	Fri, 10 Aug 2012 13:42:01 +0000 (UTC)
Received: from CO1EHSMHS003.bigfish.com (unknown [10.243.78.233])	by
	mail27-co1.bigfish.com (Postfix) with ESMTP id 6CED1580057;
	Fri, 10 Aug 2012 13:42:01 +0000 (UTC)
Received: from ausb3twp02.amd.com (163.181.249.109) by
	CO1EHSMHS003.bigfish.com (10.243.66.13) with Microsoft SMTP Server id
	14.1.225.23; Fri, 10 Aug 2012 13:42:00 +0000
X-WSS-ID: 0M8JKPV-02-0DO-02
X-M-MSG: 
Received: from sausexedgep01.amd.com (sausexedgep01-ext.amd.com
	[163.181.249.72])	(using TLSv1 with cipher AES128-SHA (128/128
	bits))	(No
	client certificate requested)	by ausb3twp02.amd.com (Axway MailGate
	3.8.1)
	with ESMTP id 2B6AAC801A;	Fri, 10 Aug 2012 08:41:55 -0500 (CDT)
Received: from sausexhtp02.amd.com (163.181.3.152) by sausexedgep01.amd.com
	(163.181.36.54) with Microsoft SMTP Server (TLS) id 8.3.192.1;
	Fri, 10 Aug 2012 08:42:19 -0500
Received: from storexhtp01.amd.com (172.24.4.3) by sausexhtp02.amd.com
	(163.181.3.152) with Microsoft SMTP Server (TLS) id 8.3.213.0;
	Fri, 10 Aug 2012 08:41:59 -0500
Received: from gwo.osrc.amd.com (165.204.16.204) by storexhtp01.amd.com
	(172.24.4.3) with Microsoft SMTP Server id 8.3.213.0; Fri, 10 Aug 2012
	09:41:58 -0400
Received: from [165.204.15.57] (gran.osrc.amd.com [165.204.15.57])	by
	gwo.osrc.amd.com (Postfix) with ESMTP id B771949C20C; Fri, 10 Aug 2012
	14:41:57 +0100 (BST)
Message-ID: <50250F9B.9090702@amd.com>
Date: Fri, 10 Aug 2012 15:41:47 +0200
From: Wei Wang <wei.wang2@amd.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:12.0) Gecko/20120421 Thunderbird/12.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <8deb7c7a25c4a3bc50d7.1344446255@REDBLD-XS.ad.xensource.com>
	<5023822E0200007800093D2A@nat28.tlf.novell.com>
	<5024E788.80300@amd.com>
	<50252031020000780009427B@nat28.tlf.novell.com>
In-Reply-To: <50252031020000780009427B@nat28.tlf.novell.com>
X-OriginatorOrg: amd.com
Cc: tim@xen.org, xiantao.zhang@intel.com,
	Santosh Jodh <santosh.jodh@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] dump_p2m_table: For IOMMU
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/10/2012 02:52 PM, Jan Beulich wrote:
>>>> On 10.08.12 at 12:50, Wei Wang<wei.wang2@amd.com>  wrote:
>> On 08/09/2012 09:26 AM, Jan Beulich wrote:
>>
>>> Wei - here I'm particularly worried about the use of "level - 1"
>>> instead of "next_level", which would similarly apply to the
>>> original function. If the way this is currently done is okay, then
>>> why is next_level being computed in the first place?
>>
>> I think that recalculation is to guarantee that this recursive function
>> returns. It should run at most "paging_mode" times no matter what
>> "next_level" says. But if we could assume that next level field in every
>> pde is correct, then using next level is fine to me.
>
> Especially in the dumping function we shouldn't assume too
> much. However, wasn't it that one can skip levels in your
> IOMMU implementation? That can't be handled correctly if
> always subtracting 1.

We have no skip levels yet. But since it checks (next_level != 0) before 
calling itself, it should not deallocate pages unexpectedly. It will, 
however, waste some time in the loop if next_level == 0 but level > 1 
(e.g. if we have only l4, l2 and l1 tables). So, yes, using next_level 
with an ASSERT now also looks better to me.

>
>>> (And similar
>>> to the issue Santosh has already fixed here - the original
>>> function pointlessly maps/unmaps the page when "level <= 1".
>>> Furthermore, iommu_map.c has nice helper functions
>>> iommu_next_level() and amd_iommu_is_pte_present() - why
>>> aren't they in a header instead, so they could be used here,
>>> avoiding the open coding of them?)
>>
>> Maybe those helpers appeared after the original function. I could send a
>> patch to clean these up:
>> * do not map/unmap if level <= 1
>> * move amd_iommu_is_pte_present() and iommu_next_level() to a header
>>   file, and use them in deallocate_next_page_table
>> * use next_level instead of recalculating it (if requested)
>
> Yes, please. As to using next_level - it depends, besides the above,
> on how bad it is if this is really wrong; an ASSERT() or BUG_ON()
> might be in order here.

How about ASSERT(next_level <= (level - 1))?
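To make the shape of that concrete, a bounded walk of this kind might look like the sketch below. The types and names (struct pde, walk_table) are made up purely for illustration - this is not the actual Xen deallocate/dump code - but it shows how following the entry's own next_level field handles skipped levels, while the ASSERT keeps the recursion strictly descending (and therefore terminating) even on a corrupted table:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative model of a page directory entry; the real code would
 * extract the next-level field from the entry bits instead. */
struct pde {
    int present;
    int next_level;      /* 0 means a leaf/terminal entry */
    struct pde *table;   /* next-level table, if any */
    size_t nents;        /* number of entries in that table */
};

/* Returns the number of present entries seen.  The walk is guided by
 * each entry's own next_level (so a level can legitimately be skipped,
 * e.g. an l4 entry pointing straight at an l1 table), while the ASSERT
 * guarantees the level strictly decreases on every recursion, so the
 * walk terminates even if a corrupted entry claims a bogus level. */
static size_t walk_table(const struct pde *table, size_t nents, int level)
{
    size_t seen = 0;

    for (size_t i = 0; i < nents; i++) {
        const struct pde *pde = &table[i];

        if (!pde->present)
            continue;
        seen++;
        if (pde->next_level == 0)   /* leaf: nothing to recurse into */
            continue;
        assert(pde->next_level <= level - 1);
        seen += walk_table(pde->table, pde->nents, pde->next_level);
    }
    return seen;
}
```

A corrupted entry claiming next_level >= level trips the ASSERT here instead of sending the walk sideways or into an endless loop.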

> Jan
>
>



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 14:11:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 14:11:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzpvO-0000G3-Sa; Fri, 10 Aug 2012 14:10:50 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1SzpvN-0000Fv-RD
	for xen-devel@lists.xen.org; Fri, 10 Aug 2012 14:10:50 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-8.tower-27.messagelabs.com!1344607841!8621048!1
X-Originating-IP: [81.169.146.161]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MSA9PiAzOTQ1MDg=\n,sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MSA9PiAzOTQ1MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22211 invoked from network); 10 Aug 2012 14:10:41 -0000
Received: from mo-p00-ob.rzone.de (HELO mo-p00-ob.rzone.de) (81.169.146.161)
	by server-8.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Aug 2012 14:10:41 -0000
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+zrwiavkK6tmQaLfmwtM48/ll3c7oGjw=
X-RZG-CLASS-ID: mo00
Received: from probook.site
	(dslb-084-057-084-038.pools.arcor-ip.net [84.57.84.38])
	by smtp.strato.de (josoe mo61) (RZmta 30.8 DYNA|AUTH)
	with (DHE-RSA-AES256-SHA encrypted) ESMTPA id o0459bo7AE3Xc2
	for <xen-devel@lists.xen.org>; Fri, 10 Aug 2012 16:10:41 +0200 (CEST)
Received: by probook.site (Postfix, from userid 1000)
	id 8258118639; Fri, 10 Aug 2012 16:10:40 +0200 (CEST)
Date: Fri, 10 Aug 2012 16:10:40 +0200
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xen.org
Message-ID: <20120810141040.GA8388@aepfle.de>
MIME-Version: 1.0
Content-Disposition: inline
User-Agent: Mutt/1.5.21.rev5555 (2012-07-20)
Subject: [Xen-devel] How to load backend drivers in 4.2
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


The xencommons runlevel script has a few modprobe calls to load drivers
for dom0. Recently the backend drivers for vbd and vif were also added,
unfortunately without an explanation of why that is (suddenly) needed.

Now that I have been hitting such a missing backend driver issue as well
with a xenlinux-based dom0 kernel, I wonder how to handle the situation.

The pvops kernel has 3 backend drivers which have module aliases:
  xen-netback ; alias:          xen-backend:vif
  xen-blkback ; alias:          xen-backend:vbd
  xen-pciback ; alias:          xen-backend:pci
These aliases have been in the tree since at least 3.2, which is a long
time.

The sles11sp2 kernel has more backend drivers, but they have no alias:
  netbk
  blkbk
  xen-scsibk
  usbbk
  tpmbk
  pciback

I'm sure adding a MODULE_ALIAS() entry to netbk, blkbk and pciback to
match mainline is trivial.
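For what it's worth, the mainline drivers get those aliases from a single macro each, so the xenlinux side would presumably need no more than a line like the following per driver (a hypothetical sketch against the Linux module API; the placement is illustrative, not a tested patch):

```c
#include <linux/module.h>

/* Hypothetical one-line addition to the netbk driver source, mirroring
 * mainline xen-netback; blkbk and pciback would get "xen-backend:vbd"
 * and "xen-backend:pci" respectively. */
MODULE_ALIAS("xen-backend:vif");
```

With that in place, 'modprobe xen-backend:vif' would resolve to the driver via modules.alias once depmod has run.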

I wonder why libxl does not do a 'modprobe xen-backend:vif &>/dev/null' when
it is about to configure an interface for the first time. The current
error message when a backend driver is missing is just not helpful.

So what should be done in xencommons for 4.2? Add a dumb loop like this?

for m in xen-backend:{vif,vbd,pci} netbk blkbk pciback xen-scsibk usbbk tpmbk
do
        modprobe $m &> /dev/null
done

Or should rather libxl take care of loading these things?


Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 14:22:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 14:22:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Szq6Q-0000Q2-0T; Fri, 10 Aug 2012 14:22:14 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Szq6O-0000Pu-La
	for xen-devel@lists.xen.org; Fri, 10 Aug 2012 14:22:12 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1344608519!8623318!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2223 invoked from network); 10 Aug 2012 14:22:00 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-27.messagelabs.com with SMTP;
	10 Aug 2012 14:22:00 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 10 Aug 2012 15:21:59 +0100
Message-Id: <502535280200007800094322@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 10 Aug 2012 15:22:00 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: <zhenzhong.duan@oracle.com>
References: <5020C24A.3060604@oracle.com>
	<5020F003020000780009322C@nat28.tlf.novell.com>
	<502235E8.9040309@oracle.com>
	<50229B840200007800093A73@nat28.tlf.novell.com>
	<5023860E.7080908@oracle.com>
	<5023AE960200007800093DE8@nat28.tlf.novell.com>
	<502490A7.7020603@oracle.com>
In-Reply-To: <502490A7.7020603@oracle.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Feng Jin <joe.jin@oracle.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] kernel bootup slow issue on ovm3.1.1
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 10.08.12 at 06:40, "zhenzhong.duan" <zhenzhong.duan@oracle.com> wrote:
> On 2012-08-09 18:35, Jan Beulich wrote:
>>>>> On 09.08.12 at 11:42, "zhenzhong.duan" <zhenzhong.duan@oracle.com> wrote:
>>> On 2012-08-08 23:01, Jan Beulich wrote:
>>>>>>> On 08.08.12 at 11:48, "zhenzhong.duan" <zhenzhong.duan@oracle.com> wrote:
>>>>> On 2012-08-07 16:37, Jan Beulich wrote:
>>>>> Some spin at stop_machine after finishing their job.
>>>> And here you'd need to find out what they're waiting for,
>>>> and what those CPUs are doing.
>>> They are waiting for the vcpu calling generic_set_all, and those spin at
>>> set_atomicity_lock.
>>> In fact, all are waiting for generic_set_all.
>> I think we're moving in circles - what is the vCPU currently in
>> generic_set_all() then doing?
> Added some debug prints; generic_set_all->prepare_set->write_cr0 took much
> time, all else is quick. set_atomicity_lock serialized this process between
> cpus, making it worse.
> One iteration:
> MTRR: CPU 2
> prepare_set: before read_cr0
> prepare_set: before write_cr0 ------*block here*

Yeah, that CR0 write disables the caches, and that's pretty
expensive on EPT (not sure why NPT doesn't use/need the
same hook) when the guest has any active MMIO regions:
vmx_set_uc_mode(), when HAP is enabled, calls
ept_change_entry_emt_with_range(), which is a walk through
the entire guest page tables (i.e. it scales with guest size, or, to
be precise, with the highest populated GFN).

Going back to your original mail, I wonder however why this
gets done at all. You said it got there via

mtrr_aps_init()
 \-> set_mtrr()
     \-> mtrr_work_handler()

yet this isn't done unconditionally - see the comment before
checking mtrr_aps_delayed_init. Can you find out where the
obviously necessary call(s) to set_mtrr_aps_delayed_init()
come(s) from?

> prepare_set: before wbinvd
> prepare_set: before read_cr4
> prepare_set: before write_cr4
> prepare_set: before __flush_tlb
> prepare_set: before rdmsr
> prepare_set: before wrmsr
> generic_set_all: before set_mtrr_state
> generic_set_all: before pat_init
> post_set: before wbinvd
> post_set: before wrmsr
> post_set: before write_cr0
> post_set: before write_cr4
>
>>
>>>> There's not that much being done in generic_set_all(), so the
>>>> code should finish reasonably quickly. Are you perhaps having
>>>> more vCPU-s in the guest than pCPU-s they can run on?
>>> System env is an exalogic node with 24 cores + 100G mem (2 sockets, 6
>>> cores per socket, 2 HT threads per core).
>>> Booting up a pvhvm with 12 vcpus (or 24) + 90 GB + a pci passed-through
>>> device.
>> So you're indeed over-committing the system. How many vCPU-s
>> does your Dom0 have? Are there any other VMs? Is there any
>> vCPU pinning in effect?
> dom0 boots with 24 vcpus (same result with dom0_max_vcpus=4). No other vm
> except dom0. All 24 vcpus spin per the xentop result. Below is a xentop clip.

Yes, this way you do overcommit - 24 guest vCPU-s spinning, plus
anything Dom0 may need to do.

>>>> Does
>>>> your hardware support Pause-Loop-Exiting (or the AMD
>>>> equivalent, don't recall their term right now)?
>>> I have no access to a serial line; could I get the info by a command?
>> "xl dmesg" run early enough (i.e. before the log buffer wraps).
> Below is the xl dmesg result for your reference. thanks
>...
> (XEN) VMX: Supported advanced features:
> (XEN)  - APIC MMIO access virtualisation
> (XEN)  - APIC TPR shadow
> (XEN)  - Extended Page Tables (EPT)
> (XEN)  - Virtual-Processor Identifiers (VPID)
> (XEN)  - Virtual NMI
> (XEN)  - MSR direct-access bitmap
> (XEN)  - Unrestricted Guest

I'm sorry, I had expected this to be printed here, but it isn't.
Hence I can't tell for sure whether PLE is implemented there,
but given how long it has been available it ought to be when
"Unrestricted Guest" is there (which iirc got introduced much
later).

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 14:22:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 14:22:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Szq6Q-0000Q2-0T; Fri, 10 Aug 2012 14:22:14 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Szq6O-0000Pu-La
	for xen-devel@lists.xen.org; Fri, 10 Aug 2012 14:22:12 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1344608519!8623318!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2223 invoked from network); 10 Aug 2012 14:22:00 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-27.messagelabs.com with SMTP;
	10 Aug 2012 14:22:00 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 10 Aug 2012 15:21:59 +0100
Message-Id: <502535280200007800094322@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 10 Aug 2012 15:22:00 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: <zhenzhong.duan@oracle.com>
References: <5020C24A.3060604@oracle.com>
	<5020F003020000780009322C@nat28.tlf.novell.com>
	<502235E8.9040309@oracle.com>
	<50229B840200007800093A73@nat28.tlf.novell.com>
	<5023860E.7080908@oracle.com>
	<5023AE960200007800093DE8@nat28.tlf.novell.com>
	<502490A7.7020603@oracle.com>
In-Reply-To: <502490A7.7020603@oracle.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Feng Jin <joe.jin@oracle.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] kernel bootup slow issue on ovm3.1.1
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 10.08.12 at 06:40, "zhenzhong.duan" <zhenzhong.duan@oracle.com> wrote:
> On 2012-08-09 18:35, Jan Beulich wrote:
>>>>> On 09.08.12 at 11:42, "zhenzhong.duan" <zhenzhong.duan@oracle.com> wrote:
>>> On 2012-08-08 23:01, Jan Beulich wrote:
>>>>>>> On 08.08.12 at 11:48, "zhenzhong.duan" <zhenzhong.duan@oracle.com> wrote:
>>>>> On 2012-08-07 16:37, Jan Beulich wrote:
>>>>> Some spin at stop_machine after finishing their job.
>>>> And here you'd need to find out what they're waiting for,
>>>> and what those CPUs are doing.
>>> They are waiting for the vcpu calling generic_set_all, and those spin at
>>> set_atomicity_lock.
>>> In fact, all are waiting for generic_set_all.
>> I think we're moving in circles - what is the vCPU currently in
>> generic_set_all() then doing?
> Added some debug prints: generic_set_all->prepare_set->write_cr0 took much
> time, all else is quick. set_atomicity_lock serializes this process between
> cpus, making it worse.
> One iteration:
> MTRR: CPU 2
> prepare_set: before read_cr0
> prepare_set: before write_cr0 ------*blocks here*

Yeah, that CR0 write disables the caches, and that's pretty
expensive on EPT (not sure why NPT doesn't use/need the
same hook) when the guest has any active MMIO regions:
vmx_set_uc_mode(), when HAP is enabled, calls
ept_change_entry_emt_with_range(), which is a walk through
the entire guest page tables (i.e. it scales with guest size, or, to
be precise, with the highest populated GFN).

Going back to your original mail, I wonder however why this
gets done at all. You said it got there via

mtrr_aps_init()
 \-> set_mtrr()
     \-> mtrr_work_handler()

yet this isn't done unconditionally - see the comment before
the check of mtrr_aps_delayed_init. Can you find out where the
obviously necessary call(s) to set_mtrr_aps_delayed_init()
come(s) from?

> prepare_set: before wbinvd
> prepare_set: before read_cr4
> prepare_set: before write_cr4
> prepare_set: before __flush_tlb
> prepare_set: before rdmsr
> prepare_set: before wrmsr
> generic_set_all: before set_mtrr_state
> generic_set_all: before pat_init
> post_set: before wbinvd
> post_set: before wrmsr
> post_set: before write_cr0
> post_set: before write_cr4
>
>>
>>>> There's not that much being done in generic_set_all(), so the
>>>> code should finish reasonably quickly. Are you perhaps having
>>>> more vCPU-s in the guest than pCPU-s they can run on?
>>> System env is an Exalogic node with 24 cores + 100G mem (2 sockets, 6
>>> cores per socket, 2 HT threads per core).
>>> Booting up a pvhvm guest with 12 vcpus (or 24) + 90 GB + a PCI
>>> passed-through device.
>> So you're indeed over-committing the system. How many vCPU-s
>> does your Dom0 have? Are there any other VMs? Is there any
>> vCPU pinning in effect?
> dom0 boots with 24 vcpus (same result with dom0_max_vcpus=4). No other vm
> except dom0. All 24 vcpus spin, per the xentop result. Below is a xentop clip.

Yes, this way you do overcommit - 24 guest vCPU-s spinning, plus
anything Dom0 may need to do.

>>>> Does
>>>> your hardware support Pause-Loop-Exiting (or the AMD
>>>> equivalent, don't recall their term right now)?
>>> I have no access to the serial line; could I get the info by a command?
>> "xl dmesg" run early enough (i.e. before the log buffer wraps).
> Below is the xl dmesg result for your reference. Thanks.
>...
> (XEN) VMX: Supported advanced features:
> (XEN)  - APIC MMIO access virtualisation
> (XEN)  - APIC TPR shadow
> (XEN)  - Extended Page Tables (EPT)
> (XEN)  - Virtual-Processor Identifiers (VPID)
> (XEN)  - Virtual NMI
> (XEN)  - MSR direct-access bitmap
> (XEN)  - Unrestricted Guest

I'm sorry, I had expected this to be printed here, but it isn't.
Hence I can't tell for sure whether PLE is implemented there,
but given how long it has been available, it ought to be whenever
"Unrestricted Guest" is there (which iirc got introduced much
later).

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
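The bottleneck discussed in this message - every CPU funnelling one at a time through set_atomicity_lock around a slow, cache-disabling CR0 write, then spinning until the last CPU is through - can be illustrated with a small user-space toy model. This is not kernel code: NCPUS, mtrr_rendezvous() and the usleep() delay are stand-ins invented for illustration.

```c
#include <pthread.h>
#include <stdatomic.h>
#include <unistd.h>

#define NCPUS 4  /* simulated CPUs; the report above had 24 */

static pthread_mutex_t set_atomicity_lock = PTHREAD_MUTEX_INITIALIZER;
static atomic_int done_count;

static void *mtrr_work_handler(void *arg)
{
    (void)arg;
    /* Each CPU serializes on the lock and pays the full cost of the
     * (here simulated) cache-disabling write_cr0 while holding it. */
    pthread_mutex_lock(&set_atomicity_lock);
    usleep(1000);                       /* stand-in for the slow write_cr0 */
    atomic_fetch_add(&done_count, 1);
    pthread_mutex_unlock(&set_atomicity_lock);
    /* All CPUs then busy-wait for the last one through the lock - the
     * spinning that xentop reported as 100% CPU on every vCPU. */
    while (atomic_load(&done_count) < NCPUS)
        ;
    return NULL;
}

/* Runs the rendezvous; returns how many simulated CPUs completed it. */
int mtrr_rendezvous(void)
{
    pthread_t t[NCPUS];

    atomic_store(&done_count, 0);
    for (int i = 0; i < NCPUS; i++)
        pthread_create(&t[i], NULL, mtrr_work_handler, NULL);
    for (int i = 0; i < NCPUS; i++)
        pthread_join(t[i], NULL);
    return atomic_load(&done_count);
}
```

Total rendezvous time grows roughly as NCPUS times the per-CPU cost of the serialized section, which is why a per-CPU write_cr0 that is cheap on bare metal becomes painful with 24 vCPUs on EPT.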

From xen-devel-bounces@lists.xen.org Fri Aug 10 14:25:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 14:25:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Szq90-0000W0-IH; Fri, 10 Aug 2012 14:24:54 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Szq8y-0000Vo-MC
	for xen-devel@lists.xen.org; Fri, 10 Aug 2012 14:24:52 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1344608685!8558187!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20983 invoked from network); 10 Aug 2012 14:24:45 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-13.tower-27.messagelabs.com with SMTP;
	10 Aug 2012 14:24:45 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 10 Aug 2012 15:24:45 +0100
Message-Id: <502535CD0200007800094325@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 10 Aug 2012 15:24:45 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Wei Wang" <wei.wang2@amd.com>
References: <8deb7c7a25c4a3bc50d7.1344446255@REDBLD-XS.ad.xensource.com>
	<5023822E0200007800093D2A@nat28.tlf.novell.com>
	<5024E788.80300@amd.com>
	<50252031020000780009427B@nat28.tlf.novell.com>
	<50250F9B.9090702@amd.com>
In-Reply-To: <50250F9B.9090702@amd.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: tim@xen.org, xiantao.zhang@intel.com,
	Santosh Jodh <santosh.jodh@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] dump_p2m_table: For IOMMU
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 10.08.12 at 15:41, Wei Wang <wei.wang2@amd.com> wrote:
> On 08/10/2012 02:52 PM, Jan Beulich wrote:
>>>>> On 10.08.12 at 12:50, Wei Wang<wei.wang2@amd.com>  wrote:
>>> On 08/09/2012 09:26 AM, Jan Beulich wrote:
>>>> (And similar
>>>> to the issue Santosh has already fixed here - the original
>>>> function pointlessly maps/unmaps the page when "level <= 1".
>>>> Furthermore, iommu_map.c has nice helper functions
>>>> iommu_next_level() and amd_iommu_is_pte_present() - why
>>>> aren't they in a header instead, so they could be used here,
>>>> avoiding the open coding of them?)
>>>
>>> Maybe those helpers appeared after the original function. I could send a
>>> patch to clean these up:
>>> * do not map/unmap if level <= 1
>>> * move amd_iommu_is_pte_present() and iommu_next_level() to a header
>>> file, and use them in deallocate_next_page_table
>>> * use next_level instead of recalculating it (if requested)
>>
>> Yes, please. As to using next_level - it depends, besides the above,
>> on how bad it is if this is really wrong; an ASSERT() or BUG_ON()
>> might be in order here.
> 
> How about ASSERT(next_level <= (level - 1))?

But that would again mean levels can be skipped. If they can,
then using next_level is the way to go, and the assertion is fine.
If they can't, the assertion should say ==.
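The two choices contrasted above can be written out as follows; walk_one_level() and its arguments are hypothetical names chosen for illustration, not code from any Xen tree.

```c
#include <assert.h>

/* If the IOMMU page-table format allows skipping levels, only the weaker
 * <= check is valid; if every entry must descend exactly one level, the
 * stronger == check should be used instead. */
static unsigned int walk_one_level(unsigned int level, unsigned int next_level,
                                   int levels_can_be_skipped)
{
    if (levels_can_be_skipped)
        assert(next_level <= level - 1);   /* skipping allowed */
    else
        assert(next_level == level - 1);   /* strict one-level descent */
    return next_level;
}
```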

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 14:29:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 14:29:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzqCy-0000ia-7V; Fri, 10 Aug 2012 14:29:00 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <attilio.rao@citrix.com>) id 1SzqCw-0000iL-0t
	for xen-devel@lists.xen.org; Fri, 10 Aug 2012 14:28:58 +0000
X-Env-Sender: attilio.rao@citrix.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1344608931!901237!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg4NDM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22895 invoked from network); 10 Aug 2012 14:28:52 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Aug 2012 14:28:52 -0000
X-IronPort-AV: E=Sophos;i="4.77,745,1336348800"; d="scan'208";a="13955677"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Aug 2012 14:28:51 +0000
Received: from dhcp-3-145.uk.xensource.com (10.80.3.221) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 10 Aug 2012 15:28:51 +0100
From: Attilio Rao <attilio.rao@citrix.com>
To: <xen-devel@lists.xen.org>, Stefano Stabellini
	<Stefano.Stabellini@eu.citrix.com>, Konrad Rzeszutek Wilk
	<konrad.wilk@oracle.com>
Date: Fri, 10 Aug 2012 15:17:05 +0100
Message-ID: <1344608227-30910-1-git-send-email-attilio.rao@citrix.com>
X-Mailer: git-send-email 1.7.2.5
MIME-Version: 1.0
Cc: Attilio Rao <attilio.rao@citrix.com>
Subject: [Xen-devel] [PATCH 0/2] Document pagetable_reserve PVOP and enforce
	a better semantic
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

When looking into documenting the pagetable_reserve PVOP, I realized that it
assumes start == pgt_buf_start. I think this is not semantically right
(even if, with the current code, it should not be a problem in practice), and
what we really want is to extend the logic to do the RO -> RW
conversion also for the range [pgt_buf_start, start).
This series therefore implements the missing conversion, adds some smaller
cleanups, and finally provides documentation for the PVOP.
Please look at 2/2 for more details on how the comment is structured.
If we get this right, we will have a reference to use later on for other
PVOPs.
A preliminary version of this patch has already been reviewed by
Stefano Stabellini.
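The range arithmetic the cover letter describes can be sketched as a plain helper; pagetable_rw_leftover() is a hypothetical name, and the real code flips page protections rather than just counting pages.

```c
#include <stdint.h>

/* Given the reserved pagetable span [start, end) inside the buffer
 * [pgt_buf_start, pgt_buf_top), return how much of the buffer is left
 * over and must be converted back from RO to RW. */
uint64_t pagetable_rw_leftover(uint64_t pgt_buf_start, uint64_t start,
                               uint64_t end, uint64_t pgt_buf_top)
{
    uint64_t n = 0;

    if (start > pgt_buf_start)          /* [pgt_buf_start, start):      */
        n += start - pgt_buf_start;     /* the piece this series adds   */
    if (pgt_buf_top > end)              /* [end, pgt_buf_top):          */
        n += pgt_buf_top - end;         /* the piece already handled    */
    return n;
}
```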

Attilio Rao (2):
  XEN, X86: Improve semantic support for pagetable_reserve PVOP
  Document the semantic of the pagetable_reserve PVOP

 arch/x86/include/asm/x86_init.h |   19 +++++++++++++++++--
 arch/x86/mm/init.c              |    4 ++++
 arch/x86/xen/mmu.c              |   22 ++++++++++++++++++++--
 3 files changed, 41 insertions(+), 4 deletions(-)

-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 14:29:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 14:29:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzqD0-0000iv-Je; Fri, 10 Aug 2012 14:29:02 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <attilio.rao@citrix.com>) id 1SzqCz-0000iS-U0
	for xen-devel@lists.xen.org; Fri, 10 Aug 2012 14:29:02 +0000
X-Env-Sender: attilio.rao@citrix.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1344608931!901237!3
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg4NDM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23503 invoked from network); 10 Aug 2012 14:28:55 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Aug 2012 14:28:55 -0000
X-IronPort-AV: E=Sophos;i="4.77,745,1336348800"; d="scan'208";a="13955683"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Aug 2012 14:28:55 +0000
Received: from dhcp-3-145.uk.xensource.com (10.80.3.221) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 10 Aug 2012 15:28:55 +0100
From: Attilio Rao <attilio.rao@citrix.com>
To: <xen-devel@lists.xen.org>, Stefano Stabellini
	<Stefano.Stabellini@eu.citrix.com>, Konrad Rzeszutek Wilk
	<konrad.wilk@oracle.com>
Date: Fri, 10 Aug 2012 15:17:07 +0100
Message-ID: <1344608227-30910-3-git-send-email-attilio.rao@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1344608227-30910-1-git-send-email-attilio.rao@citrix.com>
References: <1344608227-30910-1-git-send-email-attilio.rao@citrix.com>
MIME-Version: 1.0
Cc: Attilio Rao <attilio.rao@citrix.com>
Subject: [Xen-devel] [PATCH 2/2] Document the semantic of the
	pagetable_reserve PVOP
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The information added to the hook's comment covers:
- Native behaviour
- Xen specific behaviour
- Logic behind the Xen specific behaviour
- PVOP semantic

Signed-off-by: Attilio Rao <attilio.rao@citrix.com>
---
 arch/x86/include/asm/x86_init.h |   19 +++++++++++++++++--
 1 files changed, 17 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/x86_init.h b/arch/x86/include/asm/x86_init.h
index 38155f6..b22093c 100644
--- a/arch/x86/include/asm/x86_init.h
+++ b/arch/x86/include/asm/x86_init.h
@@ -72,8 +72,23 @@ struct x86_init_oem {
  * struct x86_init_mapping - platform specific initial kernel pagetable setup
  * @pagetable_reserve:	reserve a range of addresses for kernel pagetable usage
  *
- * For more details on the purpose of this hook, look in
- * init_memory_mapping and the commit that added it.
+ * It does reserve a range of pages, to be used as pagetable pages.
+ * The start and end parameters are expected to be contained in the
+ * [pgt_buf_start, pgt_buf_top] range.
+ * The native implementation reserves the pages via the memblock_reserve()
+ * interface.
+ * The Xen implementation, besides reserving the range via memblock_reserve(),
+ * also sets RW the remaining pages contained in the ranges
+ * [pgt_buf_start, start) and [end, pgt_buf_top).
+ * This is needed because the range [pgt_buf_start, pgt_buf_top] was
+ * previously mapped read-only by xen_set_pte_init: when running
+ * on Xen all the pagetable pages need to be mapped read-only in order to
+ * avoid protection faults from the hypervisor. However, once the correct
+ * amount of pages is reserved for the pagetables, all the others contained
+ * in the range must be set to RW so that they can be correctly recycled by
+ * the VM subsystem.
+ * This operation is meant to be performed only during init_memory_mapping(),
+ * just after space for the kernel direct mapping tables is found.
  */
 struct x86_init_mapping {
 	void (*pagetable_reserve)(u64 start, u64 end);
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 14:29:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>

From xen-devel-bounces@lists.xen.org Fri Aug 10 14:29:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 14:29:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzqD0-0000j3-V6; Fri, 10 Aug 2012 14:29:02 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <attilio.rao@citrix.com>) id 1SzqCz-0000iQ-Mk
	for xen-devel@lists.xen.org; Fri, 10 Aug 2012 14:29:02 +0000
X-Env-Sender: attilio.rao@citrix.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1344608931!901237!2
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg4NDM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23396 invoked from network); 10 Aug 2012 14:28:54 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Aug 2012 14:28:54 -0000
X-IronPort-AV: E=Sophos;i="4.77,745,1336348800"; d="scan'208";a="13955680"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Aug 2012 14:28:54 +0000
Received: from dhcp-3-145.uk.xensource.com (10.80.3.221) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 10 Aug 2012 15:28:53 +0100
From: Attilio Rao <attilio.rao@citrix.com>
To: <xen-devel@lists.xen.org>, Stefano Stabellini
	<Stefano.Stabellini@eu.citrix.com>, Konrad Rzeszutek Wilk
	<konrad.wilk@oracle.com>
Date: Fri, 10 Aug 2012 15:17:06 +0100
Message-ID: <1344608227-30910-2-git-send-email-attilio.rao@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1344608227-30910-1-git-send-email-attilio.rao@citrix.com>
References: <1344608227-30910-1-git-send-email-attilio.rao@citrix.com>
MIME-Version: 1.0
Cc: Attilio Rao <attilio.rao@citrix.com>
Subject: [Xen-devel] [PATCH 1/2] XEN,
	X86: Improve semantic support for pagetable_reserve PVOP
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

- Allow xen_mapping_pagetable_reserve() to handle a start different from
  pgt_buf_start, as long as it is not smaller.
- Add checks to xen_mapping_pagetable_reserve() and native_pagetable_reserve()
  verifying that start and end fall within the range
  [pgt_buf_start, pgt_buf_top].
- In xen_mapping_pagetable_reserve(), change printk into pr_debug.
- In xen_mapping_pagetable_reserve(), print the diagnostic only when it is
  actually needed, i.e. when some pages are actually going to switch from
  RO to RW.

Signed-off-by: Attilio Rao <attilio.rao@citrix.com>
---
 arch/x86/mm/init.c |    4 ++++
 arch/x86/xen/mmu.c |   22 ++++++++++++++++++++--
 2 files changed, 24 insertions(+), 2 deletions(-)

diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index e0e6990..c5849b6 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -92,6 +92,10 @@ static void __init find_early_table_space(struct map_range *mr, unsigned long en
 
 void __init native_pagetable_reserve(u64 start, u64 end)
 {
+	if (start < PFN_PHYS(pgt_buf_start) || end > PFN_PHYS(pgt_buf_top))
+		panic("Invalid address range: [%llu - %llu] should be a subset of [%llu - %llu]\n",
+			start, end, PFN_PHYS(pgt_buf_start),
+			PFN_PHYS(pgt_buf_top));
 	memblock_reserve(start, end - start);
 }
 
diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index b65a761..8d943e0a 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -1180,12 +1180,30 @@ static void __init xen_pagetable_setup_start(pgd_t *base)
 
 static __init void xen_mapping_pagetable_reserve(u64 start, u64 end)
 {
+	u64 sh_start;
+
+	sh_start = PFN_PHYS(pgt_buf_start);
+
+	if (start < sh_start || end > PFN_PHYS(pgt_buf_top))
+		panic("Invalid address range: [%llu - %llu] should be a subset of [%llu - %llu]\n",
+			start, end, sh_start, PFN_PHYS(pgt_buf_top));
+
+	/* set RW the initial range */
+	if (start != sh_start)
+		pr_debug("xen: setting RW the range %llx - %llx\n",
+			sh_start, start);
+	while (sh_start < start) {
+		make_lowmem_page_readwrite(__va(sh_start));
+		sh_start += PAGE_SIZE;
+	}
+
 	/* reserve the range used */
 	native_pagetable_reserve(start, end);
 
 	/* set as RW the rest */
-	printk(KERN_DEBUG "xen: setting RW the range %llx - %llx\n", end,
-			PFN_PHYS(pgt_buf_top));
+	if (end != PFN_PHYS(pgt_buf_top))
+		pr_debug("xen: setting RW the range %llx - %llx\n",
+			end, PFN_PHYS(pgt_buf_top));
 	while (end < PFN_PHYS(pgt_buf_top)) {
 		make_lowmem_page_readwrite(__va(end));
 		end += PAGE_SIZE;
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 14:31:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 14:31:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzqFV-00012n-Nm; Fri, 10 Aug 2012 14:31:37 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SzqFU-00012X-1t
	for xen-devel@lists.xen.org; Fri, 10 Aug 2012 14:31:36 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1344609011!6169977!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14331 invoked from network); 10 Aug 2012 14:30:11 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-5.tower-27.messagelabs.com with SMTP;
	10 Aug 2012 14:30:11 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 10 Aug 2012 15:30:11 +0100
Message-Id: <502537130200007800094337@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 10 Aug 2012 15:30:10 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Olaf Hering" <olaf@aepfle.de>
References: <20120810141040.GA8388@aepfle.de>
In-Reply-To: <20120810141040.GA8388@aepfle.de>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] How to load backend drivers in 4.2
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 10.08.12 at 16:10, Olaf Hering <olaf@aepfle.de> wrote:
> The xencommons runlevel script has a few modprobe calls to load drivers
> for dom0. Recently the backend drivers for vbd and vif were also added,
> unfortunately without an explanation of why that is (suddenly) needed.
> 
> Now that I have been hitting such a missing backend driver issue as well
> with a xenlinux-based dom0 kernel, I wonder how to handle the situation.

See also http://lists.xen.org/archives/html/xen-devel/2012-08/msg00717.html.

> The pvops kernel has 3 backend drivers which have module aliases:
>   xen-netback ; alias:          xen-backend:vif
>   xen-blkback ; alias:          xen-backend:vbd
>   xen-pciback ; alias:          xen-backend:pci
> These aliases have been in the tree since at least 3.2, which is a long
> time.
> 
> The sles11sp2 kernel has more backend drivers, but they have no alias:
>   netbk
>   blkbk
>   xen-scsibk
>   usbbk
>   tpmbk
>   pciback
> 
> I'm sure adding a MODULE_ALIAS() entry to netbk, blkbk and pciback to
> match mainline is trivial.

They're just not there because they only got introduced in 3.1.
Adding them is of course a no-brainer.

> I wonder why libxl does not do a 'modprobe xen-backend:vif &>/dev/null'
> when it is about to configure an interface for the first time. The current
> error message when a backend driver is missing is just not helpful.
> 
> So what should be done in xencommons for 4.2? Add a dumb loop like this?
> 
> for m in xen-backend:{vif,vbd,pci} netbk blkbk pciback xen-scsibk usbbk tpmbk
> do
>         modprobe $m &> /dev/null
> done
> 
> Or should rather libxl take care of loading these things?

The outcome was that after 4.2 libxl should be doing this. For
4.2, we just need to get the list completed.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 14:39:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 14:39:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzqNC-0001RS-QF; Fri, 10 Aug 2012 14:39:34 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1SzqNB-0001RN-W1
	for xen-devel@lists.xen.org; Fri, 10 Aug 2012 14:39:34 +0000
Received: from [85.158.138.51:30865] by server-6.bemta-3.messagelabs.com id
	9A/AD-02321-52D15205; Fri, 10 Aug 2012 14:39:33 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-4.tower-174.messagelabs.com!1344609572!27595577!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5632 invoked from network); 10 Aug 2012 14:39:32 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-4.tower-174.messagelabs.com with SMTP;
	10 Aug 2012 14:39:32 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 10 Aug 2012 15:39:30 +0100
Message-Id: <502539410200007800094355@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 10 Aug 2012 15:39:29 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Daniel Kiper" <daniel.kiper@oracle.com>
References: <20120810132513.GB2576@host-192-168-1-59.local.net-space.pl>
In-Reply-To: <20120810132513.GB2576@host-192-168-1-59.local.net-space.pl>
Mime-Version: 1.0
Content-Disposition: inline
Cc: olaf@aepfle.de, konrad.wilk@oracle.com, andrew.cooper3@citrix.com,
	kexec@lists.infradead.org, ptesarik@suse.cz,
	xen-devel <xen-devel@lists.xen.org>, anderson@redhat.com,
	crash-utility@redhat.com
Subject: Re: [Xen-devel] [PATCH v2 1/6] xen: Always calculate max_cpus value
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 10.08.12 at 15:25, Daniel Kiper <daniel.kiper@oracle.com> wrote:
> max_cpus has not been available since changeset 20374 (Miscellaneous data
> placement adjustments), which moved it to the __initdata section. That
> section is freed after Xen initialization. Assume that max_cpus is always
> equal to XEN_HYPER_SIZE(cpumask_t) * 8.

Just to repeat my response to the original version of this patch,
to which I don't recall receiving any answer from you:

"Using nr_cpu_ids, when available, would seem a better fit. And
 I don't see why, on dumps from old hypervisors, you wouldn't
 want to continue using max_cpus. Oh, wait, I see - you would
 have to be able to tell whether it actually sits in .init.data, which
 might not be straightforward."

Jan

> Signed-off-by: Daniel Kiper <daniel.kiper@oracle.com>
> 
> diff -Npru crash-6.0.8.orig/xen_hyper.c crash-6.0.8/xen_hyper.c
> --- crash-6.0.8.orig/xen_hyper.c	2012-06-29 16:59:18.000000000 +0200
> +++ crash-6.0.8/xen_hyper.c	2012-07-05 14:52:59.000000000 +0200
> @@ -1879,11 +1879,9 @@ xen_hyper_get_cpu_info(void)
>  	uint *cpu_idx;
>  	int i, j, cpus;
>  
> -	get_symbol_data("max_cpus", sizeof(xht->max_cpus), &xht->max_cpus);
>  	XEN_HYPER_STRUCT_SIZE_INIT(cpumask_t, "cpumask_t");
> -	if (XEN_HYPER_SIZE(cpumask_t) * 8 > xht->max_cpus) {
> -		xht->max_cpus = XEN_HYPER_SIZE(cpumask_t) * 8;
> -	}
> +	xht->max_cpus = XEN_HYPER_SIZE(cpumask_t) * 8;
> +
>  	if (xht->cpumask) {
>  		free(xht->cpumask);
>  	}




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 14:48:24 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 14:48:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzqVZ-0001aw-Pk; Fri, 10 Aug 2012 14:48:13 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lists@internecto.net>) id 1SzqL8-0001Qt-O9
	for xen-devel@lists.xen.org; Fri, 10 Aug 2012 14:37:26 +0000
Received: from [85.158.143.99:13797] by server-3.bemta-4.messagelabs.com id
	0A/CA-31486-6AC15205; Fri, 10 Aug 2012 14:37:26 +0000
X-Env-Sender: lists@internecto.net
X-Msg-Ref: server-16.tower-216.messagelabs.com!1344609445!16365626!1
X-Originating-IP: [176.9.245.29]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19688 invoked from network); 10 Aug 2012 14:37:25 -0000
Received: from polaris.internecto.net (HELO mx1.internecto.net) (176.9.245.29)
	by server-16.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Aug 2012 14:37:25 -0000
Received: from localhost (localhost [127.0.0.1])
	by mx1.internecto.net (Postfix) with ESMTP id 09F7EA02F1;
	Fri, 10 Aug 2012 14:37:25 +0000 (UTC)
X-Virus-Scanned: Debian amavisd-new at mail.internecto.net
Received: from mx1.internecto.net ([127.0.0.1])
	by localhost (mail.polaris.internecto.net [127.0.0.1]) (amavisd-new,
	port 10024)
	with ESMTP id jXEJlqcfUpRt; Fri, 10 Aug 2012 14:37:24 +0000 (UTC)
Received: from internecto.net (5ED4FDEB.cm-7-5d.dynamic.ziggo.nl
	[94.212.253.235])
	(using TLSv1 with cipher DHE-RSA-AES128-SHA (128/128 bits))
	(Client did not present a certificate)
	(Authenticated sender: lists@internecto.net)
	by mx1.internecto.net (Postfix) with ESMTPSA id C0741A02EA;
	Fri, 10 Aug 2012 14:37:23 +0000 (UTC)
Date: Fri, 10 Aug 2012 16:37:22 +0200
From: Internecto List Subscriber <lists@internecto.net>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20120810163722.2feaadad@internecto.net>
In-Reply-To: <alpine.DEB.2.02.1208101313320.21096@kaball.uk.xensource.com>
References: <20120810115405.05af653e@internecto.net>
	<alpine.DEB.2.02.1208101110370.21096@kaball.uk.xensource.com>
	<20120810140611.4ca8a1fb@internecto.net>
	<alpine.DEB.2.02.1208101313320.21096@kaball.uk.xensource.com>
Organization: Internecto SIS
X-Mailer: Claws Mail 3.8.0 (GTK+ 2.24.10; x86_64-pc-linux-gnu)
Mime-Version: 1.0
X-Mailman-Approved-At: Fri, 10 Aug 2012 14:48:12 +0000
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	Mark van Dijk <lists+xen@internecto.net>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] xen 4.2.0-rc3-pre: building failure on alpine linux
 / uclibc
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 10 Aug 2012 13:16:12 +0100
Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:

> On Fri, 10 Aug 2012, Mark van Dijk wrote:
> > > This is upstream QEMU that is breaking, not qemu-xen-traditional
> > > (see the code path: qemu-xen-dir-remote instead of
> > > qemu-xen-traditional-dir-remote).
> > 
> > Ah, I didn't know; it's a bit confusing. Would you like me to
> > submit a bug report with them?
> > 
> > > Moreover it is breaking compiling qemu-nbd that we aren't
> > > currently using. I would try out the following change to the
> > > configure script: (..snip..)
> > 
> > Yes, that works, thanks! But it gives a new error that I haven't
> > been able to solve yet:
> > 
> > ---
> > LINK  qemu-nbd
> > 
> > cutils.o: In function `strtosz_suffix_unit':
> > 
> > tools/qemu-xen-dir/cutils.c:354: undefined reference to
> > `__isnan'
> > 
> > tools/qemu-xen-dir/cutils.c:357: undefined reference to `modf'
> > collect2: ld returned 1 exit status
> > ---
> > 
> > Any idea there?
> 
> It is another "-lm" missing somewhere.

OK, I'll leave that to the people who can actually make a proper
patch out of this.

> 
> 
> > Also -- if we're not using qemu-nbd, then could you suggest a
> > workaround please? I'd prefer something that can be patched or
> > issued before I run the make process. (I run the make process
> > twice now: if the first run fails, patch, then run again; if it
> > fails again, error out.)
> 
> You can disable qemu-nbd altogether with the following patch:
> (..snip..)

While I couldn't find the proper configure script for this (I even
grepped for stuff like 'virtfs=no' but got nothing), it was a good
starting point. So thanks for pointing me in the right direction :)

For now building unstable on Alpine Linux works with the following
patch:

http://pastebin.com/QU8XuM0a



-- 
Stay in touch,
Mark van Dijk.               ,--------------------------------
----------------------------'        Fri Aug 10 13:48 UTC 2012
Today is Boomtime, the 3rd day of Bureaucracy in the YOLD 3178

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Message-ID: <20120810163722.2feaadad@internecto.net>
In-Reply-To: <alpine.DEB.2.02.1208101313320.21096@kaball.uk.xensource.com>
References: <20120810115405.05af653e@internecto.net>
	<alpine.DEB.2.02.1208101110370.21096@kaball.uk.xensource.com>
	<20120810140611.4ca8a1fb@internecto.net>
	<alpine.DEB.2.02.1208101313320.21096@kaball.uk.xensource.com>
Organization: Internecto SIS
X-Mailer: Claws Mail 3.8.0 (GTK+ 2.24.10; x86_64-pc-linux-gnu)
Mime-Version: 1.0
X-Mailman-Approved-At: Fri, 10 Aug 2012 14:48:12 +0000
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	Mark van Dijk <lists+xen@internecto.net>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] xen 4.2.0-rc3-pre: building failure on alpine linux
 / uclibc
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 10 Aug 2012 13:16:12 +0100
Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:

> On Fri, 10 Aug 2012, Mark van Dijk wrote:
> > > This is upstream QEMU that is breaking, not qemu-xen-traditional
> > > (see the code path: qemu-xen-dir-remote instead of
> > > qemu-xen-traditional-dir-remote).
> > 
> > Ah, I didn't know, it's a little bit confusing. Would you like me to
> > submit a bug report with them?
> > 
> > > Moreover it is breaking compiling qemu-nbd that we aren't
> > > currently using. I would try out the following change to the
> > > configure script: (..snip..)
> > 
> > Yes, that works, thanks! But it gives a new error which I couldn't
> > solve yet:
> > 
> > ---
> > LINK  qemu-nbd
> > 
> > cutils.o: In function `strtosz_suffix_unit':
> > 
> > tools/qemu-xen-dir/cutils.c:354: undefined reference to
> > `__isnan'
> > 
> > tools/qemu-xen-dir/cutils.c:357: undefined reference to `modf'
> > collect2: ld returned 1 exit status
> > ---
> > 
> > Any idea there?
> 
> It is another "-lm" missing somewhere.

Ok, well I'll leave that to the people who can actually make a healthy
patch out of this.
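
For anyone hitting the same link failure: this is the kind of probe a
configure script typically uses to decide whether -lm must be added
explicitly. A sketch only -- the variable names ($CC, $LIBS) are
illustrative placeholders, not the actual qemu configure code:

```shell
# Try to link modf() without -lm; if that fails (as it does on uclibc,
# where modf/__isnan live only in libm), append -lm to the link flags.
cat > conftest.c <<'EOF'
#include <math.h>
int main(void) { double i; return (int)modf(1.5, &i); }
EOF
if ! ${CC:-cc} conftest.c -o conftest 2>/dev/null; then
    LIBS="$LIBS -lm"
fi
rm -f conftest conftest.c
```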

> 
> 
> > Also -- If we're not using qemu-nbd then could you suggest a
> > workaround please? I'd prefer something that can be patched or
> > issued before I run the make process. (I run the make process
> > twice now - if the first run fails, patch, then run again and if it
> > fails again error out)
> 
> You can disable qemu-nbd altogether with the following patch:
> (..snip..)

While I couldn't find the proper configure script for this (I even
grepped for stuff like 'virtfs=no' but got nothing), it was a good
starting point. So thanks for pointing me in the right direction :)

For now building unstable on Alpine Linux works with the following
patch:

http://pastebin.com/QU8XuM0a



-- 
Stay in touch,
Mark van Dijk.               ,--------------------------------
----------------------------'        Fri Aug 10 13:48 UTC 2012
Today is Boomtime, the 3rd day of Bureaucracy in the YOLD 3178

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 15:03:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 15:03:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Szqjw-0001ne-7w; Fri, 10 Aug 2012 15:03:04 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Santosh.Jodh@citrix.com>) id 1Szqju-0001nX-IU
	for xen-devel@lists.xensource.com; Fri, 10 Aug 2012 15:03:02 +0000
Received: from [85.158.143.99:9793] by server-1.bemta-4.messagelabs.com id
	18/B3-20198-5A225205; Fri, 10 Aug 2012 15:03:01 +0000
X-Env-Sender: Santosh.Jodh@citrix.com
X-Msg-Ref: server-10.tower-216.messagelabs.com!1344610980!21181645!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjQwOTA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30888 invoked from network); 10 Aug 2012 15:03:01 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Aug 2012 15:03:01 -0000
X-IronPort-AV: E=Sophos;i="4.77,745,1336363200"; d="scan'208";a="34269279"
Received: from sjcpmailmx01.citrite.net ([10.216.14.74])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Aug 2012 11:02:59 -0400
Received: from SJCPMAILBOX01.citrite.net ([10.216.4.72]) by
	SJCPMAILMX01.citrite.net ([10.216.14.74]) with mapi; Fri, 10 Aug 2012
	08:02:58 -0700
From: Santosh Jodh <Santosh.Jodh@citrix.com>
To: Wei Wang <wei.wang2@amd.com>
Date: Fri, 10 Aug 2012 08:02:42 -0700
Thread-Topic: [PATCH] dump_p2m_table: For IOMMU
Thread-Index: Ac129BB6fgCbMhIxQWSTxx8TDKX7GQAFKrrw
Message-ID: <7914B38A4445B34AA16EB9F1352942F1012F0E1E98BC@SJCPMAILBOX01.citrite.net>
References: <f687c55802629c72405d.1344562989@REDBLD-XS.ad.xensource.com>
	<5024FF0A.6090006@amd.com>
In-Reply-To: <5024FF0A.6090006@amd.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"Tim \(Xen.org\)" <tim@xen.org>,
	"xiantao.zhang@intel.com" <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] [PATCH] dump_p2m_table: For IOMMU
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Thanks Wei - please see inline.

> +static void amd_dump_p2m_table_level(struct page_info* pg, int level,
> +                                     paddr_t gpa, int *indent) {
> +    paddr_t address;
> +    void *table_vaddr, *pde;
> +    paddr_t next_table_maddr;
> +    int index, next_level, present;
> +    u32 *entry;
> +
> +    if ( level<= 1 )
> +        return;

So the l1 page table is not printed; is that by design?
[Santosh Jodh] That was a typo - it should be level < 1


I tested the AMD part on an AMD IOMMU system and got output like this:

(XEN) domain1 IOMMU p2m table:
(XEN) gfn: 0000000000000000  mfn: 00000000001b1fb5
(XEN)   gfn: 0000000000000000  mfn: 000000000021f80b
(XEN)   gfn: 0000000000000000  mfn: 000000000023f400
(XEN)   gfn: 0000000000000000  mfn: 000000000010a600
(XEN)   gfn: 0000000000000000  mfn: 000000000023f200
(XEN)   gfn: 0000000000000000  mfn: 000000000010a400
(XEN)   gfn: 0000000000000000  mfn: 000000000023f000
(XEN)   gfn: 0000000000000000  mfn: 000000000010ae00
(XEN)   gfn: 0000000000000000  mfn: 000000000023ee00
(XEN)   gfn: 0000000000000001  mfn: 000000000010ac00
(XEN)   gfn: 0000000000000001  mfn: 000000000023ec00
(XEN)   gfn: 0000000000000001  mfn: 000000000010aa00
(XEN)   gfn: 0000000000000001  mfn: 000000000023ea00
(XEN)   gfn: 0000000000000001  mfn: 000000000010a800
(XEN)   gfn: 0000000000000001  mfn: 000000000023e800
(XEN)   gfn: 0000000000000001  mfn: 000000000010be00
(XEN)   gfn: 0000000000000001  mfn: 000000000023e600
(XEN)   gfn: 0000000000000002  mfn: 000000000010bc00
(XEN)   gfn: 0000000000000002  mfn: 000000000023e400
(XEN)   gfn: 0000000000000002  mfn: 000000000010ba00
(XEN)   gfn: 0000000000000002  mfn: 000000000023e200
(XEN)   gfn: 0000000000000002  mfn: 000000000010b800
(XEN)   gfn: 0000000000000002  mfn: 000000000023e000
(XEN)   gfn: 0000000000000002  mfn: 000000000010b600
(XEN)   gfn: 0000000000000002  mfn: 000000000023de00
(XEN)   gfn: 0000000000000003  mfn: 000000000010b400
(XEN)   gfn: 0000000000000003  mfn: 000000000023dc00
(XEN)   gfn: 0000000000000003  mfn: 000000000010b200
(XEN)   gfn: 0000000000000003  mfn: 000000000023da00
(XEN)   gfn: 0000000000000003  mfn: 000000000010b000
(XEN)   gfn: 0000000000000003  mfn: 000000000023d800
(XEN)   gfn: 0000000000000003  mfn: 000000000010fe00
(XEN)   gfn: 0000000000000003  mfn: 000000000023d600
(XEN)   gfn: 0000000000000004  mfn: 000000000010fc00
(XEN)   gfn: 0000000000000004  mfn: 000000000023d400
(XEN)   gfn: 0000000000000004  mfn: 000000000010fa00
(XEN)   gfn: 0000000000000004  mfn: 000000000023d200
(XEN)   gfn: 0000000000000004  mfn: 000000000010f800
(XEN)   gfn: 0000000000000004  mfn: 000000000023d000
(XEN)   gfn: 0000000000000004  mfn: 000000000010f600
(XEN)   gfn: 0000000000000004  mfn: 000000000023ce00
(XEN)   gfn: 0000000000000005  mfn: 000000000010f400
(XEN)   gfn: 0000000000000005  mfn: 000000000023cc00
(XEN)   gfn: 0000000000000005  mfn: 000000000010f200
(XEN)   gfn: 0000000000000005  mfn: 000000000023ca00
(XEN)   gfn: 0000000000000005  mfn: 000000000010f000
(XEN)   gfn: 0000000000000005  mfn: 000000000023c800
(XEN)   gfn: 0000000000000005  mfn: 000000000010ee00
(XEN)   gfn: 0000000000000005  mfn: 000000000023c600
(XEN)   gfn: 0000000000000006  mfn: 000000000010ec00
(XEN)   gfn: 0000000000000006  mfn: 000000000023c400
(XEN)   gfn: 0000000000000006  mfn: 000000000010ea00
(XEN)   gfn: 0000000000000006  mfn: 000000000023c200
(XEN)   gfn: 0000000000000006  mfn: 000000000010e800
(XEN)   gfn: 0000000000000006  mfn: 000000000023c000

It looks like the same gfn has been mapped to multiple mfns. Do you want to output only the gfn-to-mfn mapping, or do you also want to output the addresses of the intermediate page tables? What do "gfn" and "mfn" stand for here?
[Santosh Jodh] Guest frame number and machine frame number. I had added the intermediate PD and PT prints after Jan Beulich suggested it - it's possible I misunderstood him. I will resend a new patch. Could you please try it, as I don't have access to an AMD IOMMU system?
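
A quick filter for the symptom discussed above: list any gfn that appears
with more than one mfn in the dump. The sample lines are taken verbatim
from the output earlier in this thread; the temp-file name is arbitrary.

```shell
#!/bin/sh
# Save a few lines of the dump (field 3 of each line is the gfn).
cat <<'EOF' > /tmp/p2m_dump.txt
(XEN)   gfn: 0000000000000000  mfn: 000000000023f400
(XEN)   gfn: 0000000000000000  mfn: 000000000010a600
(XEN)   gfn: 0000000000000001  mfn: 000000000010ac00
EOF
# Count occurrences per gfn and print any that shows up more than once.
awk '{ count[$3]++ }
     END { for (g in count) if (count[g] > 1) print g, count[g] }' \
    /tmp/p2m_dump.txt
```

On the sample above this prints the all-zero gfn with a count of 2,
confirming that distinct mfn lines share one gfn.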


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 15:05:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 15:05:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Szqln-0001yz-JC; Fri, 10 Aug 2012 15:04:59 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1Szqll-0001yp-VQ
	for xen-devel@lists.xensource.com; Fri, 10 Aug 2012 15:04:58 +0000
Received: from [85.158.139.83:29704] by server-11.bemta-5.messagelabs.com id
	1C/06-11482-91325205; Fri, 10 Aug 2012 15:04:57 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-12.tower-182.messagelabs.com!1344611096!27491909!1
X-Originating-IP: [81.169.146.161]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MSA9PiAzOTQ1MDg=\n,sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MSA9PiAzOTQ1MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12624 invoked from network); 10 Aug 2012 15:04:56 -0000
Received: from mo-p00-ob.rzone.de (HELO mo-p00-ob.rzone.de) (81.169.146.161)
	by server-12.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Aug 2012 15:04:56 -0000
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+zrwiavkK6tmQaLfmwtM48/ll3c7oGjw=
X-RZG-CLASS-ID: mo00
Received: from probook.site
	(dslb-084-057-084-038.pools.arcor-ip.net [84.57.84.38])
	by smtp.strato.de (joses mo83) (RZmta 30.8 DYNA|AUTH)
	with (DHE-RSA-AES256-SHA encrypted) ESMTPA id Y06660o7ACnsTN ;
	Fri, 10 Aug 2012 17:04:48 +0200 (CEST)
Received: by probook.site (Postfix, from userid 1000)
	id E860718639; Fri, 10 Aug 2012 17:04:47 +0200 (CEST)
Date: Fri, 10 Aug 2012 17:04:47 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20120810150447.GA13318@aepfle.de>
References: <4FC766F7.2030802@tiscali.it>
	<20434.1848.747678.259199@mariner.uk.xensource.com>
	<1343824081.27221.84.camel@zakaz.uk.xensource.com>
	<20507.45344.552468.930223@mariner.uk.xensource.com>
	<502137BA0200007800093442@nat28.tlf.novell.com>
	<20513.20168.993350.590876@mariner.uk.xensource.com>
	<50222C4A0200007800093711@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50222C4A0200007800093711@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.21.rev5555 (2012-07-20)
Cc: xen-devel <xen-devel@lists.xensource.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"fantonifabio@tiscali.it" <fantonifabio@tiscali.it>
Subject: Re: [Xen-devel] [PATCH] tools/hotplug/Linux/init.d/: added other
 xen kernel modules on xencommons start
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Aug 08, Jan Beulich wrote:

> >>> On 07.08.12 at 19:22, Ian Jackson <Ian.Jackson@eu.citrix.com> wrote:
> >> Jan Beulich writes ("Re: [Xen-devel] [PATCH] tools/hotplug/Linux/init.d/: 
> >> - not exclusively try the pv-ops kernel's module names.
> > 
> > Do you mean that 4.2 should try loading some bigger set of module
> > names ?  If so then do you have a list ? :-)
> 
> - xen-blkback's counterpart is blkbk
> - xen-netback's counterpart is netbk
> 
> xen-evtchn/evtchn and xen-gntdev/gntdev are already taken
> care of, albeit in a little strange a way (the two entries being
> separated by an increasing amount of other ones, when it is
> really pointless to load the second one if the first one's load
> succeeded).
> 
> To not needlessly try everything, it might additionally be worth
> a thought to
> - first try loading via module alias rather than module name (if
>   that succeeds for a carefully chosen module that got its alias
>   added late - according to our patches, the devname: aliases
>   got introduced in 2.6.35, and the xen-backend: ones in 3.1 -,
>   only try loading via module alias for all subsequent ones)
> - second try loading a (or all) pvops named module(s) (if that/
>   any of them succeed(s), there's no need to try _any_ non-
>   pvops names subsequently, i.e. including ones that don't even
>   exist in the legacy trees)
> - last try loading the traditional/forward-port named ones

I think it can be done like this, because I'm sure the xenlinux-based
dom0 kernels have the drivers compiled as modules. So if evtchn.ko
exists it's xenlinux based, otherwise it's pvops. I haven't
runtime-tested that patch yet.

Olaf

diff -r 7ce01c435f5a tools/hotplug/Linux/init.d/xencommons
--- a/tools/hotplug/Linux/init.d/xencommons
+++ b/tools/hotplug/Linux/init.d/xencommons
@@ -50,18 +50,39 @@ if test -f /proc/xen/capabilities && \
 	exit 0
 fi
 
+# Load all drivers in a xenlinux based dom0
+do_modprobe_xenlinux() {
+	for mod in gntdev netbk blkbk xen-scsibk usbbk tpmbk pciback
+	do
+		modprobe ${mod} 2>/dev/null
+	done
+}
+
+# Load all drivers in a pvops based dom0
+do_modprobe_pvops() {
+	for mod in evtchn gntdev gntalloc acpi-processor
+	do
+		modprobe xen-${mod} 2>/dev/null
+	done
+	for be in vbd vif pci
+	do
+		modprobe xen-backend:${be} 2>/dev/null
+	done
+}
+
 do_start () {
         local time=0
 	local timeout=30
 
-	modprobe xen-evtchn 2>/dev/null
-	modprobe xen-gntdev 2>/dev/null
-	modprobe xen-gntalloc 2>/dev/null
-	modprobe xen-blkback 2>/dev/null
-	modprobe xen-netback 2>/dev/null
-	modprobe evtchn 2>/dev/null
-	modprobe gntdev 2>/dev/null
-	modprobe xen-acpi-processor 2>/dev/null
+	# Check if dom0 is based on xenlinux; its drivers are all modules.
+	# If loading succeeds, assume it's xenlinux based; otherwise it's pvops based.
+	if modprobe evtchn 2>/dev/null
+	then
+		do_modprobe_xenlinux
+	else
+		do_modprobe_pvops
+	fi
+
 	mkdir -p /var/run/xen
 
 	if ! `xenstore-read -s / >/dev/null 2>&1`
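
The detection flow in the patch above can be exercised standalone. In
this sketch, "try_load" stands in for modprobe (an assumption, so the
logic runs anywhere); it pretends only pvops-named (xen-*) modules are
installed on the machine:

```shell
#!/bin/sh
# Fake modprobe: succeed only for pvops-style module names.
try_load() {
	case "$1" in
		xen-*) return 0 ;;
		*)     return 1 ;;
	esac
}

# xenlinux trees ship a module literally named "evtchn"; pvops trees
# only provide "xen-evtchn", so this single probe picks the driver set.
if try_load evtchn; then
	flavor=xenlinux
else
	flavor=pvops
fi
echo "detected: $flavor"
```

With the fake modprobe above, the probe for "evtchn" fails and the
script falls through to the pvops branch, matching the patch's logic.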

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 15:05:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 15:05:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Szqln-0001yz-JC; Fri, 10 Aug 2012 15:04:59 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1Szqll-0001yp-VQ
	for xen-devel@lists.xensource.com; Fri, 10 Aug 2012 15:04:58 +0000
Received: from [85.158.139.83:29704] by server-11.bemta-5.messagelabs.com id
	1C/06-11482-91325205; Fri, 10 Aug 2012 15:04:57 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-12.tower-182.messagelabs.com!1344611096!27491909!1
X-Originating-IP: [81.169.146.161]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MSA9PiAzOTQ1MDg=\n,sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MSA9PiAzOTQ1MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12624 invoked from network); 10 Aug 2012 15:04:56 -0000
Received: from mo-p00-ob.rzone.de (HELO mo-p00-ob.rzone.de) (81.169.146.161)
	by server-12.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Aug 2012 15:04:56 -0000
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+zrwiavkK6tmQaLfmwtM48/ll3c7oGjw=
X-RZG-CLASS-ID: mo00
Received: from probook.site
	(dslb-084-057-084-038.pools.arcor-ip.net [84.57.84.38])
	by smtp.strato.de (joses mo83) (RZmta 30.8 DYNA|AUTH)
	with (DHE-RSA-AES256-SHA encrypted) ESMTPA id Y06660o7ACnsTN ;
	Fri, 10 Aug 2012 17:04:48 +0200 (CEST)
Received: by probook.site (Postfix, from userid 1000)
	id E860718639; Fri, 10 Aug 2012 17:04:47 +0200 (CEST)
Date: Fri, 10 Aug 2012 17:04:47 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20120810150447.GA13318@aepfle.de>
References: <4FC766F7.2030802@tiscali.it>
	<20434.1848.747678.259199@mariner.uk.xensource.com>
	<1343824081.27221.84.camel@zakaz.uk.xensource.com>
	<20507.45344.552468.930223@mariner.uk.xensource.com>
	<502137BA0200007800093442@nat28.tlf.novell.com>
	<20513.20168.993350.590876@mariner.uk.xensource.com>
	<50222C4A0200007800093711@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50222C4A0200007800093711@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.21.rev5555 (2012-07-20)
Cc: xen-devel <xen-devel@lists.xensource.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"fantonifabio@tiscali.it" <fantonifabio@tiscali.it>
Subject: Re: [Xen-devel] [PATCH] tools/hotplug/Linux/init.d/: added other
 xen kernel modules on xencommons start
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Aug 08, Jan Beulich wrote:

> >>> On 07.08.12 at 19:22, Ian Jackson <Ian.Jackson@eu.citrix.com> wrote:
> >> Jan Beulich writes ("Re: [Xen-devel] [PATCH] tools/hotplug/Linux/init.d/: 
> >> - not exclusively try the pv-ops kernel's module names.
> > 
> > Do you mean that 4.2 should try loading some bigger set of module
> > names ?  If so then do you have a list ? :-)
> 
> - xen-blkback's counterpart is blkbk
> - xen-netback's counterpart is netbk
> 
> xen-evtchn/evtchn and xen-gntdev/gntdev are already taken
> care of, albeit in a little strange a way (the two entries being
> separated by an increasing amount of other ones, when it is
> really pointless to load the second one if the first one's load
> succeeded).
> 
> To not needlessly try everything, it might additionally be worth
> a thought to
> - first try loading via module alias rather than module name (if
>   that succeeds for a carefully chosen module that got its alias
>   added late - according to our patches, the devname: aliases
>   got introduced in 2.6.35, and the xen-backend: ones in 3.1 -,
>   only try loading via module alias for all subsequent ones)
> - second try loading a (or all) pvops named module(s) (if that/
>   any of them succeed(s), there's no need to try _any_ non-
>   pvops names subsequently, i.e. including ones that don't even
>   exist in the legacy trees)
> - last try loading the traditional/forward-port named ones

I think it can be done like this because I'm sure that the xenlinux
based dom0 kernels have the drivers compiled as modules. So if evtchn.ko
exists, it's xenlinux based; otherwise it's pvops. I haven't runtime tested
that patch yet.

Olaf

diff -r 7ce01c435f5a tools/hotplug/Linux/init.d/xencommons
--- a/tools/hotplug/Linux/init.d/xencommons
+++ b/tools/hotplug/Linux/init.d/xencommons
@@ -50,18 +50,39 @@ if test -f /proc/xen/capabilities && \
 	exit 0
 fi
 
+# Load all drivers in a xenlinux based dom0
+do_modprobe_xenlinux() {
+	for mod in gntdev netbk blkbk xen-scsibk usbbk tpmbk pciback
+	do
+		modprobe ${mod} 2>/dev/null
+	done
+}
+
+# Load all drivers in a pvops based dom0
+do_modprobe_pvops() {
+	for mod in evtchn gntdev gntalloc acpi-processor
+	do
+		modprobe xen-${mod} 2>/dev/null
+	done
+	for be in vbd vif pci
+	do
+		modprobe xen-backend:${be} 2>/dev/null
+	done
+}
+
 do_start () {
         local time=0
 	local timeout=30
 
-	modprobe xen-evtchn 2>/dev/null
-	modprobe xen-gntdev 2>/dev/null
-	modprobe xen-gntalloc 2>/dev/null
-	modprobe xen-blkback 2>/dev/null
-	modprobe xen-netback 2>/dev/null
-	modprobe evtchn 2>/dev/null
-	modprobe gntdev 2>/dev/null
-	modprobe xen-acpi-processor 2>/dev/null
+	# Check whether dom0 is xenlinux based: there, the drivers are all modules.
+	# If loading succeeds, assume it's xenlinux based; otherwise it's pvops based.
+	if modprobe evtchn 2>/dev/null
+	then
+		do_modprobe_xenlinux
+	else
+		do_modprobe_pvops
+	fi
+
 	mkdir -p /var/run/xen
 
 	if ! `xenstore-read -s / >/dev/null 2>&1`

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 15:08:39 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 15:08:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Szqp9-0002Fp-8e; Fri, 10 Aug 2012 15:08:27 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Szqp7-0002F4-Nh
	for xen-devel@lists.xensource.com; Fri, 10 Aug 2012 15:08:25 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1344611295!8566660!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17026 invoked from network); 10 Aug 2012 15:08:15 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-13.tower-27.messagelabs.com with SMTP;
	10 Aug 2012 15:08:15 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 10 Aug 2012 16:08:15 +0100
Message-Id: <50253FFF020000780009437F@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 10 Aug 2012 16:08:15 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Olaf Hering" <olaf@aepfle.de>
References: <4FC766F7.2030802@tiscali.it>
	<20434.1848.747678.259199@mariner.uk.xensource.com>
	<1343824081.27221.84.camel@zakaz.uk.xensource.com>
	<20507.45344.552468.930223@mariner.uk.xensource.com>
	<502137BA0200007800093442@nat28.tlf.novell.com>
	<20513.20168.993350.590876@mariner.uk.xensource.com>
	<50222C4A0200007800093711@nat28.tlf.novell.com>
	<20120810150447.GA13318@aepfle.de>
In-Reply-To: <20120810150447.GA13318@aepfle.de>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xensource.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	"fantonifabio@tiscali.it" <fantonifabio@tiscali.it>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [PATCH] tools/hotplug/Linux/init.d/: added other
 xen kernel modules on xencommons start
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 10.08.12 at 17:04, Olaf Hering <olaf@aepfle.de> wrote:
> On Wed, Aug 08, Jan Beulich wrote:
> 
>> >>> On 07.08.12 at 19:22, Ian Jackson <Ian.Jackson@eu.citrix.com> wrote:
>> >> Jan Beulich writes ("Re: [Xen-devel] [PATCH] tools/hotplug/Linux/init.d/: 
>> >> - not exclusively try the pv-ops kernel's module names.
>> > 
>> > Do you mean that 4.2 should try loading some bigger set of module
>> > names ?  If so then do you have a list ? :-)
>> 
>> - xen-blkback's counterpart is blkbk
>> - xen-netback's counterpart is netbk
>> 
>> xen-evtchn/evtchn and xen-gntdev/gntdev are already taken
>> care of, albeit in a somewhat strange way (the two entries are
>> separated by a growing number of other ones, even though it is
>> really pointless to load the second one if the first one's load
>> succeeded).
>> 
>> To not needlessly try everything, it might additionally be worth
>> a thought to
>> - first try loading via module alias rather than module name (if
>>   that succeeds for a carefully chosen module that got its alias
>>   added late - according to our patches, the devname: aliases
>>   got introduced in 2.6.35, and the xen-backend: ones in 3.1 -,
>>   only try loading via module alias for all subsequent ones)
>> - second try loading a (or all) pvops named module(s) (if that/
>>   any of them succeed(s), there's no need to try _any_ non-
>>   pvops names subsequently, i.e. including ones that don't even
>>   exist in the legacy trees)
>> - last try loading the traditional/forward-port named ones
> 
> I think it can be done like this because I'm sure that the xenlinux
> based dom0 kernels have the drivers compiled as modules. So if evtchn.ko
> exists, it's xenlinux based; otherwise it's pvops. I haven't runtime tested
> that patch yet.

That's the case for our kernels, but doesn't have to be for any
derived ones (and there are still a few people cloning our patches).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 15:10:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 15:10:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzqrM-0002Ve-QO; Fri, 10 Aug 2012 15:10:44 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1SzqrL-0002VY-Kf
	for xen-devel@lists.xensource.com; Fri, 10 Aug 2012 15:10:43 +0000
Received: from [85.158.143.99:14645] by server-3.bemta-4.messagelabs.com id
	D3/00-31486-37425205; Fri, 10 Aug 2012 15:10:43 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-3.tower-216.messagelabs.com!1344611441!27005285!1
X-Originating-IP: [81.169.146.160]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MCA9PiAzOTAwMjg=\n,sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MCA9PiAzOTAwMjg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17921 invoked from network); 10 Aug 2012 15:10:42 -0000
Received: from mo-p00-ob.rzone.de (HELO mo-p00-ob.rzone.de) (81.169.146.160)
	by server-3.tower-216.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Aug 2012 15:10:42 -0000
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+zrwiavkK6tmQaLfmwtM48/ll3c7oGjw=
X-RZG-CLASS-ID: mo00
Received: from probook.site
	(dslb-084-057-084-038.pools.arcor-ip.net [84.57.84.38])
	by smtp.strato.de (jorabe mo77) (RZmta 30.8 DYNA|AUTH)
	with (DHE-RSA-AES256-SHA encrypted) ESMTPA id a01b88o7ADCLDL ;
	Fri, 10 Aug 2012 17:10:34 +0200 (CEST)
Received: by probook.site (Postfix, from userid 1000)
	id 51F3D18639; Fri, 10 Aug 2012 17:10:34 +0200 (CEST)
Date: Fri, 10 Aug 2012 17:10:34 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20120810151033.GA14059@aepfle.de>
References: <4FC766F7.2030802@tiscali.it>
	<20434.1848.747678.259199@mariner.uk.xensource.com>
	<1343824081.27221.84.camel@zakaz.uk.xensource.com>
	<20507.45344.552468.930223@mariner.uk.xensource.com>
	<502137BA0200007800093442@nat28.tlf.novell.com>
	<20513.20168.993350.590876@mariner.uk.xensource.com>
	<50222C4A0200007800093711@nat28.tlf.novell.com>
	<20120810150447.GA13318@aepfle.de>
	<50253FFF020000780009437F@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50253FFF020000780009437F@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.21.rev5555 (2012-07-20)
Cc: xen-devel <xen-devel@lists.xensource.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	"fantonifabio@tiscali.it" <fantonifabio@tiscali.it>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [PATCH] tools/hotplug/Linux/init.d/: added other
 xen kernel modules on xencommons start
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Aug 10, Jan Beulich wrote:

> That's the case for our kernels, but doesn't have to be for any
> derived ones (and there are still a few people cloning our patches).

Do they build things into the kernel? 
Should some other module than evtchn be used to decide which branch to
take?

Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 15:12:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 15:12:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Szqt6-0002k6-Fb; Fri, 10 Aug 2012 15:12:32 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Szqt4-0002jg-DA
	for xen-devel@lists.xensource.com; Fri, 10 Aug 2012 15:12:30 +0000
Received: from [85.158.138.51:62905] by server-4.bemta-3.messagelabs.com id
	5C/E4-06379-DD425205; Fri, 10 Aug 2012 15:12:29 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-13.tower-174.messagelabs.com!1344611548!8946621!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg4NDM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18584 invoked from network); 10 Aug 2012 15:12:29 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-13.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Aug 2012 15:12:29 -0000
X-IronPort-AV: E=Sophos;i="4.77,745,1336348800"; d="scan'208";a="13956744"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Aug 2012 15:12:27 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 10 Aug 2012 16:12:27 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Szqt1-0007mU-J3; Fri, 10 Aug 2012 15:12:27 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Szqt1-0006wL-Dd;
	Fri, 10 Aug 2012 16:12:27 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20517.9435.326052.118897@mariner.uk.xensource.com>
Date: Fri, 10 Aug 2012 16:12:27 +0100
To: Olaf Hering <olaf@aepfle.de>
In-Reply-To: <20120810150447.GA13318@aepfle.de>
References: <4FC766F7.2030802@tiscali.it>
	<20434.1848.747678.259199@mariner.uk.xensource.com>
	<1343824081.27221.84.camel@zakaz.uk.xensource.com>
	<20507.45344.552468.930223@mariner.uk.xensource.com>
	<502137BA0200007800093442@nat28.tlf.novell.com>
	<20513.20168.993350.590876@mariner.uk.xensource.com>
	<50222C4A0200007800093711@nat28.tlf.novell.com>
	<20120810150447.GA13318@aepfle.de>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	xen-devel <xen-devel@lists.xensource.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, Jan Beulich <JBeulich@suse.com>,
	"fantonifabio@tiscali.it" <fantonifabio@tiscali.it>
Subject: Re: [Xen-devel] [PATCH] tools/hotplug/Linux/init.d/: added other
 xen kernel modules on xencommons start
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Olaf Hering writes ("Re: [Xen-devel] [PATCH] tools/hotplug/Linux/init.d/: added other xen kernel modules on xencommons start"):
> I think it can be done like this because I'm sure that the xenlinux
> based dom0 kernels have the drivers compiled as modules. So if evtchn.ko
> exists, it's xenlinux based; otherwise it's pvops. I haven't runtime tested
> that patch yet.

I was under the impression that there is no harm in attempting to load
nonexistent modules.  So it would surely be better just to expand the
list that we try.
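
Ian's "just expand the list" approach could be sketched like this. The function is hypothetical and the exact name list is an assumption pieced together from module names mentioned in the thread; failures are deliberately ignored, matching the existing xencommons style of `modprobe ... 2>/dev/null`.

```shell
# Attempt every known module name, pvops and legacy alike, suppressing
# errors; missing modules simply fail to load, which is harmless.
try_all_modules() {
	local mod
	for mod in xen-evtchn evtchn xen-gntdev gntdev xen-gntalloc \
	           xen-blkback blkbk xen-netback netbk xen-acpi-processor
	do
		modprobe "$mod" 2>/dev/null || :
	done
	return 0
}
```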

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 15:15:39 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 15:15:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Szqvv-00032h-2S; Fri, 10 Aug 2012 15:15:27 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1Szqvs-000324-TI
	for xen-devel@lists.xensource.com; Fri, 10 Aug 2012 15:15:25 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1344611718!6178448!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11992 invoked from network); 10 Aug 2012 15:15:18 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-5.tower-27.messagelabs.com with SMTP;
	10 Aug 2012 15:15:18 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 10 Aug 2012 16:15:18 +0100
Message-Id: <502541A60200007800094396@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 10 Aug 2012 16:15:18 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Olaf Hering" <olaf@aepfle.de>
References: <4FC766F7.2030802@tiscali.it>
	<20434.1848.747678.259199@mariner.uk.xensource.com>
	<1343824081.27221.84.camel@zakaz.uk.xensource.com>
	<20507.45344.552468.930223@mariner.uk.xensource.com>
	<502137BA0200007800093442@nat28.tlf.novell.com>
	<20513.20168.993350.590876@mariner.uk.xensource.com>
	<50222C4A0200007800093711@nat28.tlf.novell.com>
	<20120810150447.GA13318@aepfle.de>
	<50253FFF020000780009437F@nat28.tlf.novell.com>
	<20120810151033.GA14059@aepfle.de>
In-Reply-To: <20120810151033.GA14059@aepfle.de>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xensource.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	"fantonifabio@tiscali.it" <fantonifabio@tiscali.it>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [PATCH] tools/hotplug/Linux/init.d/: added other
 xen kernel modules on xencommons start
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 10.08.12 at 17:10, Olaf Hering <olaf@aepfle.de> wrote:
> On Fri, Aug 10, Jan Beulich wrote:
> 
>> That's the case for our kernels, but doesn't have to be for any
>> derived ones (and there are still a few people cloning our patches).
> 
> Do they build things into the kernel? 
> Should some other module than evtchn be used to decide which branch to
> take?

There are people who build in everything. So you would only
ever be able to derive results from being able to load some
specific module; not being able to load a certain module doesn't
allow drawing any conclusion.

Jan
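
Editor's note: the asymmetry Jan describes (a successful module load tells you something; a failed load proves nothing, because the driver may be built in) can be sketched as a hotplug-script helper. This is an illustrative sketch only, not the actual xencommons code; the helper name `mod_available` and the sysfs-directory parameter are assumptions for the example.

```shell
# Sketch of the check Jan's argument allows (hypothetical helper, not
# the real xencommons script). Only a positive result is meaningful:
# a built-in driver without parameters leaves no /sys/module entry,
# so a failed modprobe must not be treated as "feature absent".
mod_available() {
    sysdir="$1"   # normally /sys/module
    mod="$2"
    # Present in sysfs: already loaded, or built in with parameters.
    [ -d "$sysdir/$mod" ] && return 0
    # Loadable as a module right now.
    modprobe "$mod" 2>/dev/null && return 0
    # Inconclusive: may still be built into the kernel.
    return 1
}
```

A script can therefore key decisions off a module it did manage to load, but drawing the opposite conclusion from a modprobe failure is exactly what Jan warns against.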


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 15:19:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 15:19:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Szqzd-0003HD-RW; Fri, 10 Aug 2012 15:19:17 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <tupeng212@gmail.com>) id 1Szqzc-0003H0-9c
	for xen-devel@lists.xen.org; Fri, 10 Aug 2012 15:19:16 +0000
X-Env-Sender: tupeng212@gmail.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1344611947!6179093!1
X-Originating-IP: [209.85.160.45]
X-SpamReason: No, hits=2.5 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_50_60,HTML_MESSAGE,MAILTO_TO_SPAM_ADDR,MANY_EXCLAMATIONS,
	MIME_BASE64_TEXT, MIME_BOUND_NEXTPART, ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP, spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29009 invoked from network); 10 Aug 2012 15:19:08 -0000
Received: from mail-pb0-f45.google.com (HELO mail-pb0-f45.google.com)
	(209.85.160.45)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Aug 2012 15:19:08 -0000
Received: by pbbrp12 with SMTP id rp12so2892304pbb.32
	for <xen-devel@lists.xen.org>; Fri, 10 Aug 2012 08:19:06 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=date:from:to:cc:reply-to:subject:references:x-priority:x-guid
	:x-has-attach:x-mailer:mime-version:message-id:content-type;
	bh=8zGlm2NVOf217ZK4JiOo4mB3iXyPrQMaEjqW76JIHGs=;
	b=J6hK26y4mSk59Et9ljUC+oS2GY5YI5jqQtP3GIZeFpYP3p0cj06D+SsCVgOQJQa6j4
	vZDZdKTzGcldyvIlf4nvK8Tyz2yXNRj6Gs0eFVzuPB3c0PvLdtxzGGrdqx1eJYLCuNXX
	VwSgDsvSJTwzAAEdZFmbl97jhAguqVX0MHX0+ZEUfHJ1Tdlv4HbiOY0Up1ij0OIBhbIv
	LeII3DJdWB9OetCd2qPygVeYoBTd6pGtAiOd0xldBS5B++K7z347eL5L7GBvY4GVlsr1
	/68LJcX2/3e+ZnWNp6EVPmlM0r+DwOB6m2PwW81JxEJQHAqcfw9bE52Wiqtz/rDZOsNI
	v6Hg==
Received: by 10.68.235.236 with SMTP id up12mr13299570pbc.79.1344611946411;
	Fri, 10 Aug 2012 08:19:06 -0700 (PDT)
Received: from root ([115.199.255.118])
	by mx.google.com with ESMTPS id rz10sm3519994pbc.32.2012.08.10.08.17.59
	(version=SSLv3 cipher=OTHER); Fri, 10 Aug 2012 08:19:05 -0700 (PDT)
Date: Fri, 10 Aug 2012 23:17:59 +0800
From: tupeng212 <tupeng212@gmail.com>
To: "Jan Beulich" <JBeulich@suse.com>
References: <201208070018394210381@gmail.com>, 
	<50224B7402000078000937DA@nat28.tlf.novell.com>
X-Priority: 3
X-GUID: EA6B0DE3-8EB2-44C6-BEFC-11C58EB28B8B
X-Has-Attach: no
X-Mailer: Foxmail 7.0.1.87[cn]
Mime-Version: 1.0
Message-ID: <2012081023124696835343@gmail.com>
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Big Bug:Time in VM goes slower;
	foud Solution but demand Judgement! A Interesting Story!
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: tupeng212 <tupeng212@gmail.com>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4979082272455034970=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.

--===============4979082272455034970==
Content-Type: multipart/alternative;
	boundary="----=_001_NextPart462848748572_=----"

This is a multi-part message in MIME format.

------=_001_NextPart462848748572_=----
Content-Type: text/plain;
	charset="gb2312"
Content-Transfer-Encoding: 8bit

Dear Jan Beulich:
Thanks for the reply! You are a very clever man; you immediately sensed
something that I only found later. Please forgive my late reply!

1 Why does the JVM set the same rate down so frequently?
The first result I will show is the action in the JVM, and the reason,
which I found by debugging the disassembly. It looks to me like this
in the JVM:
==================== 1 what happened in the JVM ====================
while (loop)  // or a frequent call
{
    timeBeginPeriod()  -->  NtSetTimerResolution(1 (enable))
    rc = WaitForMultipleObjects(5, 0x2222222, 0, 1);  // the last parameter demands 1ms timer resolution
    if (rc == TIMEOUT) {
        break;
    } else {
        call 0x44444444;
    }
    timeEndPeriod()  -->  NtSetTimerResolution(0 (disable))
}
====================================================================
So its behavior is entirely legitimate: it demands a higher timer
resolution (here 1ms), so it calls NtSetTimerResolution to improve the
resolution. It is not "inaccurate" as I had guessed.

I also wrote the tester below, which confirms my supposition. If you
are interested, you can build it with Microsoft's compiler; after the
tester has run in the VM for about 1 minute, the VM's time will slow
down.
========= 2 a test which triggers the slowing of the VM's inner time =========
#include <stdio.h>
#include <windows.h>

typedef int (__stdcall *NTSETTIMER)(IN ULONG RequestedResolution, IN BOOLEAN Set, OUT PULONG ActualResolution);
typedef int (__stdcall *NTQUERYTIMER)(OUT PULONG MinimumResolution, OUT PULONG MaximumResolution, OUT PULONG CurrentResolution);

int main()
{
    DWORD min_res = 0, max_res = 0, cur_res = 0, ret = 0;
    HMODULE hdll = NULL;
    NTSETTIMER AddrNtSetTimer = 0;
    NTQUERYTIMER AddrNtQueryTimer = 0;

    hdll = GetModuleHandle("ntdll.dll");
    AddrNtSetTimer = (NTSETTIMER)GetProcAddress(hdll, "NtSetTimerResolution");
    AddrNtQueryTimer = (NTQUERYTIMER)GetProcAddress(hdll, "NtQueryTimerResolution");

    while (1)
    {
        ret = AddrNtQueryTimer(&min_res, &max_res, &cur_res);
        printf("min_res = %d, max_res = %d, cur_res = %d\n", min_res, max_res, cur_res);
        Sleep(10);
        ret = AddrNtSetTimer(10000, 1, &cur_res);
        Sleep(10);
        ret = AddrNtSetTimer(10000, 0, &cur_res);
        Sleep(10);
    }
    return 0;
}
==============================================================================

2 Bug in Xen
The JVM is OK, so that leaves the bug to Xen, and I have found both the
reason and a solution. As Jan suggested, avoiding the call to
create_periodic_time makes things much better. So I modified it like
this: if the pt timer was created before, writing RegA again only
changes the period value, so I do nothing except store the new period
in pt's field. That works!

I thought pt->scheduled is responsible for the accuracy of
pt_process_missed_ticks, so we should not interfere with it from any
outside intrusion or interruption. So I let create_periodic_time change
everything but preserved the single field pt->scheduled as previously
set, and I am very happy to see the bug disappear. After rechecking
your mail I found you are really a very smart man; you had predicted
something!

Did you further check whether the adjustments done to the
scheduled time in create_periodic_time() are responsible for this
conclusion of the JVM (could be effectively doubling the first
interval if HVM_PARAM_VPT_ALIGN is set, and with the high rate
of re-sets this could certainly have a more visible effect than
intended)?

After I tracked pt->scheduled, more and more of the truth surfaced. I
will show you the RTC's tick spots as I observed them.
The normal spots look like this:

0               1               2               3               4               5
.               .               .               .               .               .      (normal spots)
                                |(pt->scheduled at 2)

When create_periodic_time interferes with pt->scheduled at NOW:

0               1               2               3               4               5
.               .               .               .               .               .
                    |(NOW)                          | (new pt->scheduled is moved to 3 after ALIGN)

So the real spots look like this:

.               .                               .               .               .
0               1               2               3               4               5
                                |(here we missed one spot at 2)

The original pt->scheduled was at 2 before create_periodic_time
executed, but afterwards pt->scheduled is moved to 3, so one spot was
missed for the Windows guest.


3 Who is wrong?
I suspect align_timer deserves the blame:

s_time_t align_timer(s_time_t firsttick, uint64_t period)
{
    if ( !period )
        return firsttick;

    return firsttick + (period - 1) - ((firsttick - 1) % period); // in xen 4.x
    return firsttick - ((firsttick - 1) % period); // it should be aligned like this, I think
}

I have also found another place that updates the RTC's RegA, in
tools/ioemu-qemu-xen/hw/mc146818rtc.c:

    if (period_code != 0 && (s->cmos_data[RTC_REG_B] & REG_B_PIE)) {
#endif
        if (period_code <= 2)
            period_code += 7;
        /* period in 32 Khz cycles */
        period = 1 << (period_code - 1);
#ifdef IRQ_COALESCE_HACK
        if(period != s->period)
            s->irq_coalesced = (s->irq_coalesced * s->period) / period;
        s->period = period;
#endif
        /* compute 32 khz clock */
        cur_clock = muldiv64(current_time, 32768, ticks_per_sec); // here I don't understand it ......
        next_irq_clock = (cur_clock & ~(period - 1)) + period;
        s->next_periodic_time = muldiv64(next_irq_clock, ticks_per_sec, 32768) + 1;
        qemu_mod_timer(s->periodic_timer, s->next_periodic_time);

I don't know what happens in a real RTC. The most popular RTC chip is
the MC146818; I have checked its datasheet but found nothing I wanted.
What I want to know is: when a real outside write through port 0x71
comes down to set the RTC's RegA, what changes or places the next
periodic interrupt time? In my case the period doesn't change, yet one
spot is missed; and even if the period does change, how does the chip
place the next scheduled time?
I need someone who knows in depth how a real RTC takes an update of
RegA into effect and makes the transition smooth.

I have also been anxious about another aspect: in our virtual
environment, at create_periodic_time, NOW() may be much later than the
previously set pt->scheduled, and at that point the old pt->scheduled
may lag far behind the new one computed by create_periodic_time. The
interval between the two, which pt_process_missed_ticks should have
accounted against the former pt->scheduled's period, is now all thrown
away as if nothing had been missed. So I think how to handle the
interval between the two pt->scheduled values deserves consideration
in create_periodic_time.

Thanks!


tupeng212

From: Jan Beulich
Date: 2012-08-08 17:20
To: tupeng212
CC: xen-devel
Subject: Re: [Xen-devel] Big Bug: Time in VM running on xen goes slower
>>> On 07.08.12 at 17:44, tupeng212 <tupeng212@gmail.com> wrote:
> 2 Xen
> vmx_vmexit_handler --> ......... --> handle_rtc_io --> rtc_ioport_write -->
> rtc_timer_update --> set RTC's REG_A to a high rate --> create_periodic_time (disable
> the former timer, and init a new one)
> Win7 is installed in the VM. This calling path is executed so frequently that
> it may come down to setting the RTC's REG_A hundreds of times every second, but
> with the same rate (976.562us (1024Hz)); it is very abnormal to me to see such
> behavior.

_If_ the problem is merely with the high rate of calls to
create_periodic_time(), I think this could be taken care of by
avoiding the call (and perhaps the call to rtc_timer_update() in
the first place) by checking whether anything actually changes
due to the current write. I don't think, however, that this would
help much, as the high rate of port accesses (and hence traps
into the hypervisor) would remain. It might, nevertheless, get
your immediate problem of the time slowing down taken care of
if that is caused inside Xen (but the cause here may as well be in
the Windows kernel).

> 3 OS
> I have tried to find out why Win7 set the RTC's RegA so frequently, and finally
> got the result: it all comes from one function: NtSetTimerResolution -->
> 0x70, 0x71
> When I attached windbg to the guest OS, I also found the source; they are
> all called from an upper system call that comes from the JVM (Java Virtual
> Machine).

Getting Windows to be a little smarter and avoid the port I/O when
doing redundant writes would of course be even better, but is
clearly a difficult to achieve goal.

> 4 JVM
> I don't know why the JVM calls NtSetTimerResolution to set the same RTC rate
> (976.562us (1024Hz)) so frequently.
> But I found something useful: in the Java source code there are many places
> using Timer.scheduleAtFixedRate(), and information from the Internet told me
> that scheduleAtFixedRate demands a higher time resolution. So I guess the
> whole process may be this:
> Java wants a higher-resolution timer, so it changes the RTC's rate from
> 15.625ms (64Hz) to 976.562us (1024Hz); after that, it reconfirms whether the
> timing is as accurate as expected, but unfortunately it finds the timing is
> still not accurate. So it sets the RTC's rate from 15.625ms (64Hz) to
> 976.562us (1024Hz) again and again..., which at last results in a slow system
> timer in the VM.

Now that's really the fundamental thing to find out - what makes it
think the clock isn't accurate? Is this an artifact of scheduling (as
the scheduler tick certainly is several milliseconds, whereas
"accurate" here appears to require below 1ms granularity), perhaps
as a result of the box being overloaded (i.e. the VM not being able
to get scheduled quickly enough when the timer expires)? For that,
did you try lowering the scheduler time slice and/or its rate limit
(possible via command line option)? Of course doing so may have
other undesirable side effects, but it would be worth a try.

Did you further check whether the adjustments done to the
scheduled time in create_periodic_time() are responsible for this
conclusion of the JVM (could be effectively doubling the first
interval if HVM_PARAM_VPT_ALIGN is set, and with the high rate
of re-sets this could certainly have a more visible effect than
intended)?

Jan

------=_001_NextPart462848748572_=----
Content-Type: text/html;
	charset="gb2312"
Content-Transfer-Encoding: quoted-printable

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<HTML><HEAD>
<META http-equiv=3DContent-Type content=3D"text/html; charset=3Dgb2312">
<STYLE>
BLOCKQUOTE {
	MARGIN-TOP: 0px; MARGIN-BOTTOM: 0px; MARGIN-LEFT: 2em
}
OL {
	MARGIN-TOP: 0px; MARGIN-BOTTOM: 0px
}
UL {
	MARGIN-TOP: 0px; MARGIN-BOTTOM: 0px
}
P {
	MARGIN-TOP: 0px; MARGIN-BOTTOM: 0px
}
BODY {
	FONT-SIZE: 10.5pt; COLOR: #000080; LINE-HEIGHT: 1.5; FONT-FAMILY: =CB=CE=
=CC=E5
}
</STYLE>

<META content=3D"MSHTML 6.00.2900.5512" name=3DGENERATOR></HEAD>
<BODY style=3D"MARGIN: 10px">
<DIV>Dear Jan Beulich:</DIV>
<DIV style=3D"TEXT-INDENT: 2em">Thanks for reply! You are a very clever ma=
n, you=20
have sensed some thing immediately&nbsp;as I found lately. Please forgive =
my so=20
lately reply!</DIV>
<DIV>&nbsp;</DIV>
<DIV>1 why JVM set the same rate down so frequently&nbsp;?</DIV>
<DIV>the first achievement I will show is I found the action in JVM and th=
e=20
reason by debugging disassembly code.</DIV>
<DIV>it seems to me like this in JVM:</DIV>
<DIV>=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=
=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D 1 what happened in JVM=
=20
=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=
=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D</DIV>
<DIV>
<DIV>while&nbsp;(loop)//or a frequent call</DIV>
<DIV>{</DIV>
<BLOCKQUOTE dir=3Dltr style=3D"MARGIN-RIGHT: 0px">
  <DIV>timeBeginPeriod()&nbsp;--&gt;&nbsp;NtSetTimerResolution(1(enble))</=
DIV>
  <DIV></DIV>
  <DIV>rc&nbsp;=3D&nbsp;WaitForMultipleObjects(5,&nbsp;0x2222222,&nbsp;0,&=
nbsp;1);&nbsp;//the&nbsp;last&nbsp;parameter&nbsp;demands&nbsp;1ms&nbsp;ti=
mer&nbsp;resolution</DIV>
  <DIV>if&nbsp;(rc&nbsp;=3D&nbsp;TIMEOUT){</DIV>
  <BLOCKQUOTE dir=3Dltr style=3D"MARGIN-RIGHT: 0px">
    <DIV>break;</DIV></BLOCKQUOTE>
  <DIV>}</DIV>
  <DIV>else{</DIV>
  <BLOCKQUOTE dir=3Dltr style=3D"MARGIN-RIGHT: 0px">
    <DIV>call&nbsp;0x44444444;</DIV></BLOCKQUOTE>
  <DIV>}</DIV>
  <DIV></DIV>
  <DIV>timeEndPeriod()&nbsp;--&gt;&nbsp;NtSetTimerResolution(0(disable))</=
DIV></BLOCKQUOTE>
<DIV>}</DIV>
<DIV>=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=
=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=
=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=
=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=
=3D=3D=3D=3D=3D</DIV></DIV>
<DIV>so its behavior is totally legal, for it demands higher timer resolut=
ion=20
(here 1ms), so it calls NtSetTimerResolution to improve the=20
resolution.&nbsp;</DIV>
<DIV>it is not as I guessed "unaccurate"<BR></DIV>
<DIV>I also wrote a tester below, which confirms my suppose. if you are=20
interested in it , you can build it by MS's compiler, and after running th=
e=20
tester in VM about 1 minutes, VM's time will slow.</DIV>
<DIV>=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=
=3D=3D=3D=3D 2 a test which will lead to trigger slowing=20
VM's inner time=3D=3D=3D=3D=3D=3D=3D=3D</DIV>
<DIV>
<DIV>#include&nbsp;&lt;stdio.h&gt;</DIV>
<DIV>#include&nbsp;&lt;windows.h&gt;</DIV>
<DIV>typedef&nbsp;int&nbsp;(__stdcall&nbsp;*NTSETTIMER)(IN&nbsp;ULONG&nbsp=
;RequestedResolution,&nbsp;IN&nbsp;BOOLEAN&nbsp;Set,&nbsp;OUT&nbsp;PULONG&=
nbsp;ActualResolution&nbsp;);</DIV>
<DIV>typedef&nbsp;int&nbsp;(__stdcall&nbsp;*NTQUERYTIMER)(OUT&nbsp;PULONG&=
nbsp;&nbsp;&nbsp;MinimumResolution,&nbsp;OUT&nbsp;PULONG&nbsp;MaximumResol=
ution,&nbsp;OUT&nbsp;PULONG&nbsp;CurrentResolution&nbsp;);</DIV>
<DIV>int&nbsp;main()</DIV>
<DIV>{</DIV>
<BLOCKQUOTE dir=3Dltr style=3D"MARGIN-RIGHT: 0px">
  <DIV>DWORD&nbsp;min_res&nbsp;=3D&nbsp;0,&nbsp;max_res&nbsp;=3D&nbsp;0,&n=
bsp;cur_res&nbsp;=3D&nbsp;0,&nbsp;ret&nbsp;=3D&nbsp;0;</DIV>
  <DIV>HMODULE&nbsp;&nbsp;hdll&nbsp;=3D&nbsp;NULL;</DIV>
  <DIV>hdll&nbsp;=3D&nbsp;GetModuleHandle("ntdll.dll");</DIV>
  <DIV>NTSETTIMER&nbsp;AddrNtSetTimer&nbsp;=3D&nbsp;0;</DIV>
  <DIV>NTQUERYTIMER&nbsp;AddrNtQueyTimer&nbsp;=3D&nbsp;0;</DIV>
  <DIV></DIV>
  <DIV>AddrNtSetTimer&nbsp;=3D&nbsp;(NTSETTIMER)&nbsp;GetProcAddress(hdll,=
&nbsp;"NtSetTimerResolution");</DIV>
  <DIV>AddrNtQueyTimer&nbsp;=3D&nbsp;(NTQUERYTIMER)GetProcAddress(hdll,&nb=
sp;"NtQueryTimerResolution");</DIV>
  <DIV>&nbsp;</DIV>
  <DIV>while&nbsp;(1)</DIV>
  <DIV>{</DIV>
  <BLOCKQUOTE dir=3Dltr style=3D"MARGIN-RIGHT: 0px">
    <DIV>ret&nbsp;=3D&nbsp;AddrNtQueyTimer(&amp;min_res,&nbsp;&amp;max_res=
,&nbsp;&amp;cur_res);</DIV>
    <DIV>printf("min_res&nbsp;=3D&nbsp;%d,&nbsp;max_res&nbsp;=3D&nbsp;%d,&=
nbsp;cur_res&nbsp;=3D&nbsp;%d\n",min_res,&nbsp;max_res,&nbsp;cur_res);</DI=
V>
    <DIV>Sleep(10);</DIV>
    <DIV>ret&nbsp;=3D&nbsp;AddrNtSetTimer(10000,&nbsp;1,&nbsp;&amp;cur_res=
);</DIV>
    <DIV>Sleep(10);</DIV>
    <DIV>ret&nbsp;=3D&nbsp;AddrNtSetTimer(10000,&nbsp;0,&nbsp;&amp;cur_res=
);</DIV>
    <DIV>Sleep(10);</DIV></BLOCKQUOTE>
  <DIV>}</DIV>
  <DIV>return&nbsp;0;</DIV></BLOCKQUOTE>
<DIV>}</DIV></DIV>
<DIV>=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=
=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=
=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=
=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D</DIV=
>
<DIV>&nbsp;</DIV>
<DIV>2 Bug in xen</DIV>
<DIV>JVM is OK, so left the bug to xen, I have found both the reason and=20
solution. As Jan mentioned avoiding call create_periodic_time, it got much=
=20
better. so I modified it&nbsp;like this,&nbsp;if the pt timer is created b=
efore,=20
setting RegA down is just changing the period value, so I do nothing excep=
t just=20
setting period to pt's field. it is OK!<BR></DIV>
<DIV>I thought pt-&gt;scheduled is responsible&nbsp;for <SPAN class=3Dshor=
t_text=20
lang=3Den id=3Dresult_box f=3D"4" a=3D"undefined" closure_uid_v1xnhe=3D"94=
"><SPAN class=3D""=20
closure_uid_v1xnhe=3D"322">Accuracy</SPAN></SPAN>&nbsp;of pt_process_misse=
d_ticks,=20
so we should not interfere it at any outer invading or interruption, so I =
let=20
create_periodic_time changed everything but reserved only one=20
filed&nbsp;pt-&gt;scheduled as setted before, I am very happy to find the =
bug=20
disappear. After I rechecked your mail I found you are really a very smart=
 man,=20
you have predicted something!</DIV>
<DIV>&nbsp;</DIV>
<DIV>
<DIV>Did&nbsp;you&nbsp;further&nbsp;check&nbsp;whether&nbsp;the&nbsp;adjus=
tments&nbsp;done&nbsp;to&nbsp;the</DIV>
<DIV>scheduled&nbsp;time&nbsp;in&nbsp;create_periodic_time()&nbsp;are&nbsp=
;responsible&nbsp;for&nbsp;this</DIV>
<DIV>conclusion&nbsp;of&nbsp;the&nbsp;JVM&nbsp;(could&nbsp;be&nbsp;effecti=
vely&nbsp;doubling&nbsp;the&nbsp;first</DIV>
<DIV>interval&nbsp;if&nbsp;HVM_PARAM_VPT_ALIGN&nbsp;is&nbsp;set,&nbsp;and&=
nbsp;with&nbsp;the&nbsp;high&nbsp;rate</DIV>
<DIV>of&nbsp;re-sets&nbsp;this&nbsp;could&nbsp;certainly&nbsp;have&nbsp;a&=
nbsp;more&nbsp;visible&nbsp;effect&nbsp;than</DIV>
<DIV>intended)?</DIV>
<DIV>&nbsp;</DIV>
<DIV>After I tracked pt-&gt;scheduled, more and more truth surfaced. I wil=
l show=20
you the RTC's spotting as I observed.</DIV>
<DIV>normal spot is like this:</DIV>
<DIV>0&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
bsp;&nbsp;&nbsp;=20
1&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&=
nbsp;&nbsp;&nbsp;2&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
bsp;&nbsp;&nbsp;&nbsp;&nbsp;=20
3&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&=
nbsp;=20
&nbsp;4&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&=
nbsp;&nbsp;&nbsp;&nbsp;5</DIV>
<DIV>.&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; &nbsp;&nbsp;&nbsp;&nbsp;=20
.               .               .               .               .      (normal spot)</DIV>
<DIV>                                |(pt-&gt;scheduled at 2)</DIV>
<DIV>&nbsp;</DIV>
<DIV>when create_periodic_time interferes with pt-&gt;scheduled at NOW:</DIV>
<DIV>0               1               2               3               4               5</DIV>
<DIV>.               .               .               .               .               .</DIV>
<DIV>                    |(NOW)                      | (new pt-&gt;scheduled is moved to 3 after ALIGN)</DIV>
<DIV>&nbsp;</DIV>
<DIV>so the real spots are like this:</DIV>
<DIV>.               .                               .               .               .</DIV>
<DIV>0               1               2               3               4               5</DIV>
<DIV>                                |(here we missed one spot at 2)</DIV></DIV>
<DIV>&nbsp;</DIV>
<DIV>The original pt-&gt;scheduled was at 2 before create_periodic_time was executed; afterwards pt-&gt;scheduled has been moved to 3, so one tick to the Windows guest is missed.</DIV>
<DIV>&nbsp;</DIV>
<DIV>&nbsp;</DIV>
<DIV>3 Who is wrong?</DIV>
<DIV>I suspect align_timer is worth examining:</DIV>
<DIV>
<DIV>s_time_t align_timer(s_time_t firsttick, uint64_t period)</DIV>
<DIV>{</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;if ( !period )</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;return firsttick;</DIV>
<DIV>&nbsp;</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;return firsttick + (period - 1) - ((firsttick - 1) % period); /* in xen4.x */</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;return firsttick - ((firsttick - 1) % period); /* it should be aligned like this */</DIV>
<DIV>}</DIV>
<DIV>&nbsp;</DIV>
<DIV>I have also found another place that updates the RTC's RegA, in tools\ioemu-qemu-xen\hw\mc146818rtc.c:</DIV>
<DIV>&nbsp;</DIV>
<DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;if (period_code != 0 &amp;&amp; (s-&gt;cmos_data[RTC_REG_B] &amp; REG_B_PIE)) {</DIV>
<DIV>#endif</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;if (period_code &lt;= 2)</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;period_code += 7;</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;/* period in 32 Khz cycles */</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;period = 1 &lt;&lt; (period_code - 1);</DIV>
<DIV>#ifdef IRQ_COALESCE_HACK</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;if(period != s-&gt;period)</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;s-&gt;irq_coalesced = (s-&gt;irq_coalesced * s-&gt;period) / period;</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;s-&gt;period = period;</DIV>
<DIV>#endif</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;/* compute 32 khz clock */</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;cur_clock = muldiv64(current_time, 32768, ticks_per_sec); /* this part I don't understand ...... */</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;next_irq_clock = (cur_clock &amp; ~(period - 1)) + period;</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;s-&gt;next_periodic_time = muldiv64(next_irq_clock, ticks_per_sec, 32768) + 1;</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;qemu_mod_timer(s-&gt;periodic_timer, s-&gt;next_periodic_time);</DIV></DIV>
<DIV>&nbsp;</DIV>
<DIV>&nbsp;</DIV>
<DIV>&nbsp;</DIV></DIV>
<DIV>I don't know what happens in a real RTC. The most popular RTC chip is the MC146818; I have checked its datasheet but found nothing relevant. What I want to know is: when a real write to port 0x71 sets the RTC's RegA, what changes, and how is the next periodic interrupt time determined? In my case the period doesn't change, yet one tick is missed; and even if the period did change, how would the next scheduled time be chosen?</DIV>
<DIV>I need someone who knows in depth how a real RTC takes an update to RegA into effect and makes the transition smoothly.</DIV>
<DIV>&nbsp;</DIV>
<DIV>Another aspect has been worrying me: in our virtual environment, at create_periodic_time, NOW() may be far later than the previously set pt-&gt;scheduled, so the new pt-&gt;scheduled may land far beyond the old one. The interval between the two, which pt_process_missed_ticks would have accounted against the former pt-&gt;scheduled's period, is now all thrown away as if nothing had been missed. So I think how to handle the interval between the two pt-&gt;scheduled values deserves consideration in create_periodic_time.</DIV>
<DIV>&nbsp;</DIV>
<DIV>Thanks!</DIV>
<DIV>&nbsp;</DIV>
<HR style="WIDTH: 210px; HEIGHT: 1px" align=left color=#b5c4df SIZE=1>

<DIV><SPAN>tupeng212</SPAN></DIV>
<DIV>&nbsp;</DIV>
<DIV style="BORDER-RIGHT: medium none; PADDING-RIGHT: 0cm; BORDER-TOP: #b5c4df 1pt solid; PADDING-LEFT: 0cm; PADDING-BOTTOM: 0cm; BORDER-LEFT: medium none; PADDING-TOP: 3pt; BORDER-BOTTOM: medium none">
<DIV style="PADDING-RIGHT: 8px; PADDING-LEFT: 8px; FONT-SIZE: 12px; BACKGROUND: #efefef; PADDING-BOTTOM: 8px; COLOR: #000000; PADDING-TOP: 8px">
<DIV><B>From:</B> <A href="mailto:JBeulich@suse.com">Jan Beulich</A></DIV>
<DIV><B>Date:</B> 2012-08-08 17:20</DIV>
<DIV><B>To:</B> <A href="mailto:tupeng212@gmail.com">tupeng212</A></DIV>
<DIV><B>CC:</B> <A href="mailto:xen-devel@lists.xen.org">xen-devel</A></DIV>
<DIV><B>Subject:</B> Re: [Xen-devel] Big Bug:Time in VM running on xen goes slower</DIV></DIV></DIV>
<DIV>
<DIV>&gt;&gt;&gt; On 07.08.12 at 17:44, tupeng212 &lt;tupeng212@gmail.com&gt; wrote:</DIV>
<DIV>&gt; 2 Xen</DIV>
<DIV>&gt; vmx_vmexit_handler --&gt; ......... --&gt; handle_rtc_io --&gt; rtc_ioport_write --&gt;</DIV>
<DIV>&gt; rtc_timer_update --&gt; set RTC's REG_A to a high rate --&gt; create_periodic_time (disable</DIV>
<DIV>&gt; the former timer, and init a new one)</DIV>
<DIV>&gt; Win7 is installed in the vm. This calling path is executed so frequently that it</DIV>
<DIV>&gt; may come down to set the RTC's REG_A hundreds of times every second, but with</DIV>
<DIV>&gt; the same rate (976.562us (1024Hz)); it is abnormal to me to see such</DIV>
<DIV>&gt; behavior.</DIV>
<DIV>&nbsp;</DIV>
<DIV>_If_ the problem is merely with the high rate of calls to</DIV>
<DIV>create_periodic_time(), I think this could be taken care of by</DIV>
<DIV>avoiding the call (and perhaps the call to rtc_timer_update() in</DIV>
<DIV>the first place) by checking whether anything actually changes</DIV>
<DIV>due to the current write. I don't think, however, that this would</DIV>
<DIV>help much, as the high rate of port accesses (and hence traps</DIV>
<DIV>into the hypervisor) would remain. It might, nevertheless, get</DIV>
<DIV>your immediate problem of the time slowing down taken care of</DIV>
<DIV>if that is caused inside Xen (but the cause here may as well be in</DIV>
<DIV>the Windows kernel).</DIV>
<DIV>&nbsp;</DIV>
<DIV>&gt; 3 OS</DIV>
<DIV>&gt; I have tried to find out why Win7 set the RTC's RegA so frequently, and finally</DIV>
<DIV>&gt; got the result: it all comes from one function: NtSetTimerResolution --&gt;</DIV>
<DIV>&gt; 0x70,0x71</DIV>
<DIV>&gt; when I attached windbg to the guest OS, I also found the source: they are</DIV>
<DIV>&gt; all called from an upper system call that comes from the JVM (Java Virtual</DIV>
<DIV>&gt; Machine).</DIV>
<DIV>&nbsp;</DIV>
<DIV>Getting Windows to be a little smarter and avoid the port I/O when</DIV>
<DIV>doing redundant writes would of course be even better, but is</DIV>
<DIV>clearly a difficult to achieve goal.</DIV>
<DIV>&nbsp;</DIV>
<DIV>&gt; 4 JVM</DIV>
<DIV>&gt; I don't know why the JVM calls NtSetTimerResolution to set the same RTC rate</DIV>
<DIV>&gt; (976.562us (1024Hz)) so frequently.</DIV>
<DIV>&gt; But I found something useful: in the Java source code there are many places</DIV>
<DIV>&gt; using timer.scheduleAtFixedRate(), and information from the Internet told me</DIV>
<DIV>&gt; that scheduleAtFixedRate demands a higher time resolution. So I</DIV>
<DIV>&gt; guess the whole process may be this:</DIV>
<DIV>&gt; Java wants a higher-resolution timer, so it changes the RTC's rate from</DIV>
<DIV>&gt; 15.625ms (64Hz) to 976.562us (1024Hz); after that it reconfirms whether the</DIV>
<DIV>&gt; time is as accurate as expected, but unfortunately it notices it is still not</DIV>
<DIV>&gt; accurate, so it sets the RTC's rate from 15.625ms (64Hz) to</DIV>
<DIV>&gt; 976.562us (1024Hz) again and again..., which at last results in a slow system</DIV>
<DIV>&gt; timer in the vm.</DIV>
<DIV>&nbsp;</DIV>
<DIV>Now that's really the fundamental thing to find out - what makes it</DIV>
<DIV>think the clock isn't accurate? Is this an artifact of scheduling (as</DIV>
<DIV>the scheduler tick certainly is several milliseconds, whereas</DIV>
<DIV>"accurate" here appears to require below 1ms granularity), perhaps</DIV>
<DIV>as a result of the box being overloaded (i.e. the VM not being able</DIV>
<DIV>to get scheduled quickly enough when the timer expires)? For that,</DIV>
<DIV>did you try lowering the scheduler time slice and/or its rate limit</DIV>
<DIV>(possible via command line option)? Of course doing so may have</DIV>
<DIV>other undesirable side effects, but it would be worth a try.</DIV>
<DIV>&nbsp;</DIV>
<DIV>Did you further check whether the adjustments done to the</DIV>
<DIV>scheduled time in create_periodic_time() are responsible for this</DIV>
<DIV>conclusion of the JVM (could be effectively doubling the first</DIV>
<DIV>interval if HVM_PARAM_VPT_ALIGN is set, and with the high rate</DIV>
<DIV>of re-sets this could certainly have a more visible effect than</DIV>
<DIV>intended)?</DIV>
<DIV>&nbsp;</DIV>
<DIV>Jan</DIV>
<DIV>&nbsp;</DIV></DIV></BODY></HTML>

------=_001_NextPart462848748572_=------



--===============4979082272455034970==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4979082272455034970==--



From xen-devel-bounces@lists.xen.org Fri Aug 10 15:19:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 15:19:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Szqzd-0003HD-RW; Fri, 10 Aug 2012 15:19:17 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <tupeng212@gmail.com>) id 1Szqzc-0003H0-9c
	for xen-devel@lists.xen.org; Fri, 10 Aug 2012 15:19:16 +0000
X-Env-Sender: tupeng212@gmail.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1344611947!6179093!1
X-Originating-IP: [209.85.160.45]
X-SpamReason: No, hits=2.5 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_50_60,HTML_MESSAGE,MAILTO_TO_SPAM_ADDR,MANY_EXCLAMATIONS,
	MIME_BASE64_TEXT, MIME_BOUND_NEXTPART, ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP, spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29009 invoked from network); 10 Aug 2012 15:19:08 -0000
Received: from mail-pb0-f45.google.com (HELO mail-pb0-f45.google.com)
	(209.85.160.45)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Aug 2012 15:19:08 -0000
Received: by pbbrp12 with SMTP id rp12so2892304pbb.32
	for <xen-devel@lists.xen.org>; Fri, 10 Aug 2012 08:19:06 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=date:from:to:cc:reply-to:subject:references:x-priority:x-guid
	:x-has-attach:x-mailer:mime-version:message-id:content-type;
	bh=8zGlm2NVOf217ZK4JiOo4mB3iXyPrQMaEjqW76JIHGs=;
	b=J6hK26y4mSk59Et9ljUC+oS2GY5YI5jqQtP3GIZeFpYP3p0cj06D+SsCVgOQJQa6j4
	vZDZdKTzGcldyvIlf4nvK8Tyz2yXNRj6Gs0eFVzuPB3c0PvLdtxzGGrdqx1eJYLCuNXX
	VwSgDsvSJTwzAAEdZFmbl97jhAguqVX0MHX0+ZEUfHJ1Tdlv4HbiOY0Up1ij0OIBhbIv
	LeII3DJdWB9OetCd2qPygVeYoBTd6pGtAiOd0xldBS5B++K7z347eL5L7GBvY4GVlsr1
	/68LJcX2/3e+ZnWNp6EVPmlM0r+DwOB6m2PwW81JxEJQHAqcfw9bE52Wiqtz/rDZOsNI
	v6Hg==
Received: by 10.68.235.236 with SMTP id up12mr13299570pbc.79.1344611946411;
	Fri, 10 Aug 2012 08:19:06 -0700 (PDT)
Received: from root ([115.199.255.118])
	by mx.google.com with ESMTPS id rz10sm3519994pbc.32.2012.08.10.08.17.59
	(version=SSLv3 cipher=OTHER); Fri, 10 Aug 2012 08:19:05 -0700 (PDT)
Date: Fri, 10 Aug 2012 23:17:59 +0800
From: tupeng212 <tupeng212@gmail.com>
To: "Jan Beulich" <JBeulich@suse.com>
References: <201208070018394210381@gmail.com>, 
	<50224B7402000078000937DA@nat28.tlf.novell.com>
X-Priority: 3
X-GUID: EA6B0DE3-8EB2-44C6-BEFC-11C58EB28B8B
X-Has-Attach: no
X-Mailer: Foxmail 7.0.1.87[cn]
Mime-Version: 1.0
Message-ID: <2012081023124696835343@gmail.com>
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Big Bug:Time in VM goes slower;
	foud Solution but demand Judgement! A Interesting Story!
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: tupeng212 <tupeng212@gmail.com>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4979082272455034970=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.

--===============4979082272455034970==
Content-Type: multipart/alternative;
	boundary="----=_001_NextPart462848748572_=----"

This is a multi-part message in MIME format.

------=_001_NextPart462848748572_=----
Content-Type: text/plain;
	charset="gb2312"
Content-Transfer-Encoding: 8bit

Dear Jan Beulich:
Thanks for the reply! You sensed immediately what I only found later. Please forgive my late reply!

1 Why did the JVM set the same rate down so frequently?
The first result I will show is that I found the action, and the reason, in the JVM by debugging its disassembly. It seems to work like this in the JVM:
======================================== 1 what happened in JVM ========================================
while (loop) // or a frequent call
{
    timeBeginPeriod() --> NtSetTimerResolution(1 (enable))
    rc = WaitForMultipleObjects(5, 0x2222222, 0, 1); // the last parameter demands 1ms timer resolution
    if (rc == TIMEOUT) {
        break;
    } else {
        call 0x44444444;
    }
    timeEndPeriod() --> NtSetTimerResolution(0 (disable))
}
========================================================================================================
So its behavior is totally legal: it demands a higher timer resolution (here 1ms), so it calls NtSetTimerResolution to improve the resolution.
It is not "inaccurate" as I had guessed.

I also wrote a tester, below, which confirms my supposition. If you are interested, you can build it with MS's compiler; after running the tester in the VM for about 1 minute, the VM's time will slow down.
============================ 2 a test which triggers slowing of the VM's inner time ====================
#include <stdio.h>
#include <windows.h>
typedef int (__stdcall *NTSETTIMER)(IN ULONG RequestedResolution, IN BOOLEAN Set, OUT PULONG ActualResolution);
typedef int (__stdcall *NTQUERYTIMER)(OUT PULONG MinimumResolution, OUT PULONG MaximumResolution, OUT PULONG CurrentResolution);
int main()
{
    DWORD min_res = 0, max_res = 0, cur_res = 0, ret = 0;
    HMODULE hdll = NULL;
    hdll = GetModuleHandle("ntdll.dll");
    NTSETTIMER AddrNtSetTimer = 0;
    NTQUERYTIMER AddrNtQueryTimer = 0;
    AddrNtSetTimer = (NTSETTIMER)GetProcAddress(hdll, "NtSetTimerResolution");
    AddrNtQueryTimer = (NTQUERYTIMER)GetProcAddress(hdll, "NtQueryTimerResolution");

    while (1)
    {
        ret = AddrNtQueryTimer(&min_res, &max_res, &cur_res);
        printf("min_res = %d, max_res = %d, cur_res = %d\n", min_res, max_res, cur_res);
        Sleep(10);
        ret = AddrNtSetTimer(10000, 1, &cur_res);
        Sleep(10);
        ret = AddrNtSetTimer(10000, 0, &cur_res);
        Sleep(10);
    }
    return 0;
}
========================================================================================================

2 Bug in Xen
The JVM is OK, so that leaves the bug to Xen, and I have found both the reason and a solution. As Jan suggested, avoiding the call to create_periodic_time makes it much better. I modified it like this: if the pt timer was created before, writing RegA again only restates the period, so I do nothing except store the period into pt's field. That works!

I thought pt->scheduled is responsible for the accuracy of pt_process_missed_ticks, so we should not interfere with it on any outside intervention or interruption. So I let create_periodic_time change everything but preserved the one field pt->scheduled as previously set, and I am very happy to find the bug disappears. After I rechecked your mail I found you had really predicted something!

Did you further check whether the adjustments done to the
scheduled time in create_periodic_time() are responsible for this
conclusion of the JVM (could be effectively doubling the first
interval if HVM_PARAM_VPT_ALIGN is set, and with the high rate
of re-sets this could certainly have a more visible effect than
intended)?

After I tracked pt->scheduled, more and more of the truth surfaced. I will show you the RTC's tick spots as I observed them.
The normal spots are like this:
0               1               2               3               4               5
.               .               .               .               .               .      (normal spot)
                                |(pt->scheduled at 2)

When create_periodic_time interferes with pt->scheduled at NOW:
0               1               2               3               4               5
.               .               .               .               .               .
                    |(NOW)                      | (new pt->scheduled is moved to 3 after ALIGN)

So the real spots are like this:
.               .                               .               .               .
0               1               2               3               4               5
                                |(here we missed one spot at 2)

The original pt->scheduled was at 2 before create_periodic_time was executed; afterwards pt->scheduled has been moved to 3, so one tick to the Windows guest is missed.


3 Who is wrong?
I suspect align_timer is worth examining:
s_time_t align_timer(s_time_t firsttick, uint64_t period)
{
    if ( !period )
        return firsttick;

    return firsttick + (period - 1) - ((firsttick - 1) % period); // in xen4.x
    return firsttick - ((firsttick - 1) % period); // it should be aligned like this
}

I have also found another place that updates the RTC's RegA, in tools\ioemu-qemu-xen\hw\mc146818rtc.c:

    if (period_code != 0 && (s->cmos_data[RTC_REG_B] & REG_B_PIE)) {
#endif
        if (period_code <= 2)
            period_code += 7;
        /* period in 32 Khz cycles */
        period = 1 << (period_code - 1);
#ifdef IRQ_COALESCE_HACK
        if(period != s->period)
            s->irq_coalesced = (s->irq_coalesced * s->period) / period;
        s->period = period;
#endif
        /* compute 32 khz clock */
        cur_clock = muldiv64(current_time, 32768, ticks_per_sec); // this part I don't understand ......
        next_irq_clock = (cur_clock & ~(period - 1)) + period;
        s->next_periodic_time = muldiv64(next_irq_clock, ticks_per_sec, 32768) + 1;
        qemu_mod_timer(s->periodic_timer, s->next_periodic_time);



I don't know what happens in a real RTC. The most popular RTC chip is the MC146818; I have checked its datasheet but found nothing relevant. What I want to know is: when a real write to port 0x71 sets the RTC's RegA, what changes, and how is the next periodic interrupt time determined? In my case the period doesn't change, yet one tick is missed; and even if the period did change, how would the next scheduled time be chosen?
I need someone who knows in depth how a real RTC takes an update to RegA into effect and makes the transition smoothly.

Another aspect has been worrying me: in our virtual environment, at create_periodic_time, NOW() may be far later than the previously set pt->scheduled, so the new pt->scheduled may land far beyond the old one. The interval between the two, which pt_process_missed_ticks would have accounted against the former pt->scheduled's period, is now all thrown away as if nothing had been missed. So I think how to handle the interval between the two pt->scheduled values deserves consideration in create_periodic_time.

Thanks!




tupeng212

From: Jan Beulich
Date: 2012-08-08 17:20
To: tupeng212
CC: xen-devel
Subject: Re: [Xen-devel] Big Bug:Time in VM running on xen goes slower
>>> On 07.08.12 at 17:44, tupeng212 <tupeng212@gmail.com> wrote:
> 2 Xen
> vmx_vmexit_handler --> ......... --> handle_rtc_io --> rtc_ioport_write -->
> rtc_timer_update --> set RTC's REG_A to a high rate --> create_periodic_time (disable
> the former timer, and init a new one)
> Win7 is installed in the vm. This calling path is executed so frequently that it
> may come down to set the RTC's REG_A hundreds of times every second, but with
> the same rate (976.562us (1024Hz)); it is abnormal to me to see such
> behavior.

_If_ the problem is merely with the high rate of calls to
create_periodic_time(), I think this could be taken care of by
avoiding the call (and perhaps the call to rtc_timer_update() in
the first place) by checking whether anything actually changes
due to the current write. I don't think, however, that this would
help much, as the high rate of port accesses (and hence traps
into the hypervisor) would remain. It might, nevertheless, get
your immediate problem of the time slowing down taken care of
if that is caused inside Xen (but the cause here may as well be in
the Windows kernel).

> 3 OS
> I have tried to find out why Win7 set the RTC's RegA so frequently, and finally
> got the result: it all comes from one function: NtSetTimerResolution -->
> 0x70,0x71
> when I attached windbg to the guest OS, I also found the source: they are
> all called from an upper system call that comes from the JVM (Java Virtual
> Machine).

Getting Windows to be a little smarter and avoid the port I/O when
doing redundant writes would of course be even better, but is
clearly a difficult to achieve goal.

> 4 JVM
> I don't know why the JVM calls NtSetTimerResolution to set the same RTC rate
> (976.562us (1024Hz)) so frequently.
> But I found something useful: in the Java source code there are many places
> using timer.scheduleAtFixedRate(), and information from the Internet told me
> that scheduleAtFixedRate demands a higher time resolution. So I
> guess the whole process may be this:
> Java wants a higher-resolution timer, so it changes the RTC's rate from
> 15.625ms (64Hz) to 976.562us (1024Hz); after that it reconfirms whether the
> time is as accurate as expected, but unfortunately it notices it is still not
> accurate, so it sets the RTC's rate from 15.625ms (64Hz) to
> 976.562us (1024Hz) again and again..., which at last results in a slow system
> timer in the vm.

Now that's really the fundamental thing to find out - what makes it
think the clock isn't accurate? Is this an artifact of scheduling (as
the scheduler tick certainly is several milliseconds, whereas
"accurate" here appears to require below 1ms granularity), perhaps
as a result of the box being overloaded (i.e. the VM not being able
to get scheduled quickly enough when the timer expires)? For that,
did you try lowering the scheduler time slice and/or its rate limit
(possible via command line option)? Of course doing so may have
other undesirable side effects, but it would be worth a try.

Did you further check whether the adjustments done to the
scheduled time in create_periodic_time() are responsible for this
conclusion of the JVM (could be effectively doubling the first
interval if HVM_PARAM_VPT_ALIGN is set, and with the high rate
of re-sets this could certainly have a more visible effect than
intended)?

Jan

Dear Jan Beulich:

Thanks for the reply! You are a very clever man - you sensed immediately what I only found later. Please forgive my late reply!

1 Why does the JVM set the same rate down so frequently?

The first thing I will show is the action I found in the JVM, and the reason for it, found by debugging the disassembled code. It looks like this in the JVM:

======================== 1 what happened in JVM ========================
while (loop)    // or a frequent call
{
    timeBeginPeriod()  -->  NtSetTimerResolution(1 (enable))

    rc = WaitForMultipleObjects(5, 0x2222222, 0, 1);  // the last parameter demands 1ms timer resolution
    if (rc == TIMEOUT) {
        break;
    }
    else {
        call 0x44444444;
    }

    timeEndPeriod()  -->  NtSetTimerResolution(0 (disable))
}
========================================================================

So its behavior is entirely legal: it demands a higher timer resolution (here 1ms), so it calls NtSetTimerResolution to improve the resolution. It is not "inaccurate" as I guessed before.

I also wrote the tester below, which confirms my supposition. If you are interested in it, you can build it with Microsoft's compiler; after running the tester in the VM for about one minute, the VM's time will slow down.
====== 2 a test which will trigger the slowing of the VM's inner time ======
#include <stdio.h>
#include <windows.h>

typedef int (__stdcall *NTSETTIMER)(IN ULONG RequestedResolution, IN BOOLEAN Set, OUT PULONG ActualResolution);
typedef int (__stdcall *NTQUERYTIMER)(OUT PULONG MinimumResolution, OUT PULONG MaximumResolution, OUT PULONG CurrentResolution);

int main()
{
    DWORD min_res = 0, max_res = 0, cur_res = 0, ret = 0;
    HMODULE hdll = NULL;
    hdll = GetModuleHandle("ntdll.dll");
    NTSETTIMER AddrNtSetTimer = 0;
    NTQUERYTIMER AddrNtQueryTimer = 0;

    AddrNtSetTimer = (NTSETTIMER)GetProcAddress(hdll, "NtSetTimerResolution");
    AddrNtQueryTimer = (NTQUERYTIMER)GetProcAddress(hdll, "NtQueryTimerResolution");

    while (1)
    {
        ret = AddrNtQueryTimer(&min_res, &max_res, &cur_res);
        printf("min_res = %d, max_res = %d, cur_res = %d\n", min_res, max_res, cur_res);
        Sleep(10);
        ret = AddrNtSetTimer(10000, 1, &cur_res);   /* 10000 x 100ns = 1ms */
        Sleep(10);
        ret = AddrNtSetTimer(10000, 0, &cur_res);
        Sleep(10);
    }
    return 0;
}
============================================================================

2 Bug in Xen

The JVM is OK, so the bug is left to Xen, and I have found both the cause and a solution. As Jan suggested, avoiding the call to create_periodic_time() made things much better. So I modified it like this: if the pt timer has already been created, setting RegA down again only changes the period value, so I do nothing except store the new period in the pt's field. That works!

I thought pt->scheduled is responsible for the accuracy of pt_process_missed_ticks(), so we should not interfere with it from outside. Therefore I let create_periodic_time() change everything except the single field pt->scheduled, which keeps the value set before, and I was very happy to see the bug disappear. After rereading your mail I found you are really a very smart man - you had predicted it:

> Did you further check whether the adjustments done to the
> scheduled time in create_periodic_time() are responsible for this
> conclusion of the JVM (could be effectively doubling the first
> interval if HVM_PARAM_VPT_ALIGN is set, and with the high rate
> of re-sets this could certainly have a more visible effect than
> intended)?

After I tracked pt->scheduled, more and more of the truth surfaced. I will show you the RTC's interrupt spots as I observed them.

The normal spots look like this:

0        1        2        3        4        5
.        .        .        .        .        .    (normal spots)
                  |(pt->scheduled at 2)

When create_periodic_time() interferes with pt->scheduled at NOW:

0        1        2        3        4        5
.        .        .        .        .        .
             |(NOW)        |(new pt->scheduled is moved to 3 after ALIGN)

So the real spots are like this:

.        .                 .        .        .
0        1        2        3        4        5
                  |(here we missed one spot at 2)

The original pt->scheduled was at 2 before create_periodic_time() executed, but afterwards pt->scheduled has been moved to 3, so one spot for the Windows guest is missed.

3 Who is wrong?

I suspect align_timer() deserves a closer look:

s_time_t align_timer(s_time_t firsttick, uint64_t period)
{
    if ( !period )
        return firsttick;

    return firsttick + (period - 1) - ((firsttick - 1) % period);  /* in xen4.x */
    return firsttick - ((firsttick - 1) % period);                 /* it should be aligned like this, I think */
}

I have also found another place that updates on the RTC's RegA, in tools/ioemu-qemu-xen/hw/mc146818rtc.c:

    if (period_code != 0 && (s->cmos_data[RTC_REG_B] & REG_B_PIE)) {
#endif
        if (period_code <= 2)
            period_code += 7;
        /* period in 32 Khz cycles */
        period = 1 << (period_code - 1);
#ifdef IRQ_COALESCE_HACK
        if (period != s->period)
            s->irq_coalesced = (s->irq_coalesced * s->period) / period;
        s->period = period;
#endif
        /* compute 32 khz clock */
        cur_clock = muldiv64(current_time, 32768, ticks_per_sec);  /* here I don't fully understand it ... */
        next_irq_clock = (cur_clock & ~(period - 1)) + period;
        s->next_periodic_time = muldiv64(next_irq_clock, ticks_per_sec, 32768) + 1;
        qemu_mod_timer(s->periodic_timer, s->next_periodic_time);

I don't know what happens inside a real RTC. The most popular RTC chip is the MC146818; I have checked its datasheet but found nothing of what I want. What I want to know is: when a real outside write to port 0x71 sets the RTC's RegA, what changes or places the next periodic interrupt time? In my case the period doesn't change, yet one spot is missed; and even if the period did change, how would the chip place the next scheduled time?

I need someone who knows in detail how a real RTC takes an update of RegA into effect and makes the transition smoothly.

I have also been anxious about another aspect: in our virtual environment, when create_periodic_time() runs, NOW() may be far later than the pt->scheduled that was set before, and at that point the new pt->scheduled may end up far beyond the old one. The interval between the two, which should have been processed by pt_process_missed_ticks() at the former pt->scheduled's period, is now all thrown away as if nothing had been missed. So I think how create_periodic_time() handles the interval between the two pt->scheduled values deserves consideration.

Thanks!

----------------------------------------
tupeng212

From: Jan Beulich <JBeulich@suse.com>
Date: 2012-08-08 17:20
To: tupeng212 <tupeng212@gmail.com>
CC: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Big Bug:Time in VM running on xen goes slower

>>> On 07.08.12 at 17:44, tupeng212 <tupeng212@gmail.com> wrote:
> 2 Xen
> vmx_vmexit_handler --> ......... --> handle_rtc_io --> rtc_ioport_write -->
> rtc_timer_update --> set RTC's REG_A to a high rate --> create_periodic_time(disable
> the former timer, and init a new one)
> Win7 is installed in the vm. This calling path is executed so frequent that
> may come down to set the RTC's REG_A hundreds of times every second but with
> the same rate(976.562us(1024HZ)), it is so abnormal to me to see such
> behavior.

_If_ the problem is merely with the high rate of calls to
create_periodic_time(), I think this could be taken care of by
avoiding the call (and perhaps the call to rtc_timer_update() in
the first place) by checking whether anything actually changes
due to the current write. I don't think, however, that this would
help much, as the high rate of port accesses (and hence traps
into the hypervisor) would remain. It might, nevertheless, get
your immediate problem of the time slowing down taken care of
if that is caused inside Xen (but the cause here may as well be in
the Windows kernel).

> 3 OS
> I have tried to find why the win7 setted RTC's regA so frequently. finally
> got the result, all that comes from a function: NtSetTimerResolution -->
> 0x70,0x71
> when I attached windbg into the guest OS, I also found the source, they are
> all called from a upper system call that comes from JVM(Java Virtual
> Machine).

Getting Windows to be a little smarter and avoid the port I/O when
doing redundant writes would of course be even better, but is
clearly a difficult to achieve goal.

> 4 JVM
> I don't know why JVM calls NtSetTimerResolution to set the same RTC's rate
> down (976.562us(1024HZ)) so frequently.
> But found something useful, in the java source code, I found many palaces
> written with time.scheduleAtFixedRate(), Informations from Internet told me
> this function scheduleAtFixedRate demands a higher time resolution. so I
> guess the whole process may be this:
> java wants a higher time resolution timer, so it changes the RTC's rate from
> 15.625ms(64HZ) to 976.562us(1024HZ), after that, it reconfirms whether the
> time is accurate as expected, but it's sorry to get some notice it 's not
> accurate either. so it sets the RTC's rate from 15.625ms(64HZ) to
> 976.562us(1024HZ) again and again..., at last, results in a slow system timer
> in vm.

Now that's really the fundamental thing to find out - what makes it
think the clock isn't accurate? Is this an artifact of scheduling (as
the scheduler tick certainly is several milliseconds, whereas
"accurate" here appears to require below 1ms granularity), perhaps
as a result of the box being overloaded (i.e. the VM not being able
to get scheduled quickly enough when the timer expires)? For that,
did you try lowering the scheduler time slice and/or its rate limit
(possible via command line option)? Of course doing so may have
other undesirable side effects, but it would be worth a try.

Did you further check whether the adjustments done to the
scheduled time in create_periodic_time() are responsible for this
conclusion of the JVM (could be effectively doubling the first
interval if HVM_PARAM_VPT_ALIGN is set, and with the high rate
of re-sets this could certainly have a more visible effect than
intended)?

Jan



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel




From xen-devel-bounces@lists.xen.org Fri Aug 10 16:05:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 16:05:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Szri5-0004XJ-SD; Fri, 10 Aug 2012 16:05:13 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1Szri4-0004XB-IP
	for xen-devel@lists.xensource.com; Fri, 10 Aug 2012 16:05:12 +0000
Received: from [85.158.138.51:38840] by server-10.bemta-3.messagelabs.com id
	09/23-07905-73135205; Fri, 10 Aug 2012 16:05:11 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-10.tower-174.messagelabs.com!1344614709!23602207!1
X-Originating-IP: [81.169.146.160]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MCA9PiAzOTAwMjg=\n,sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MCA9PiAzOTAwMjg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29071 invoked from network); 10 Aug 2012 16:05:09 -0000
Received: from mo-p00-ob.rzone.de (HELO mo-p00-ob.rzone.de) (81.169.146.160)
	by server-10.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Aug 2012 16:05:09 -0000
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+zrwiavkK6tmQaLfmwtM48/ll3c7oGjw=
X-RZG-CLASS-ID: mo00
Received: from probook.site
	(dslb-084-057-084-038.pools.arcor-ip.net [84.57.84.38])
	by smtp.strato.de (jored mo85) (RZmta 30.8 DYNA|AUTH)
	with (DHE-RSA-AES256-SHA encrypted) ESMTPA id k020ddo7ADVTCN ;
	Fri, 10 Aug 2012 18:05:01 +0200 (CEST)
Received: by probook.site (Postfix, from userid 1000)
	id 0D36518639; Fri, 10 Aug 2012 18:05:00 +0200 (CEST)
Date: Fri, 10 Aug 2012 18:05:00 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20120810160500.GA21459@aepfle.de>
References: <20434.1848.747678.259199@mariner.uk.xensource.com>
	<1343824081.27221.84.camel@zakaz.uk.xensource.com>
	<20507.45344.552468.930223@mariner.uk.xensource.com>
	<502137BA0200007800093442@nat28.tlf.novell.com>
	<20513.20168.993350.590876@mariner.uk.xensource.com>
	<50222C4A0200007800093711@nat28.tlf.novell.com>
	<20120810150447.GA13318@aepfle.de>
	<50253FFF020000780009437F@nat28.tlf.novell.com>
	<20120810151033.GA14059@aepfle.de>
	<502541A60200007800094396@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <502541A60200007800094396@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.21.rev5555 (2012-07-20)
Cc: xen-devel <xen-devel@lists.xensource.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	"fantonifabio@tiscali.it" <fantonifabio@tiscali.it>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [PATCH] tools/hotplug/Linux/init.d/: added other
 xen kernel modules on xencommons start
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Aug 10, Jan Beulich wrote:

> >>> On 10.08.12 at 17:10, Olaf Hering <olaf@aepfle.de> wrote:
> > On Fri, Aug 10, Jan Beulich wrote:
> > 
> >> That's the case for our kernels, but doesn't have to be for any
> >> derived ones (and there are still a few people cloning our patches).
> > 
> > Do they build things into the kernel? 
> > Should some other module than evtchn be used to decide which branch to
> > take?
> 
> There are people who build in everything. So you would only
> ever be able to derive results from being able to load some
> specific module; not being able to load a certain module doesn't
> allow drawing any conclusion.

If attempting to load pvops modules in a xenlinux based kernel is an
issue, then there needs to be another way to tell them apart. A lame
test would be `test -f /proc/xen/balloon`; that file doesn't seem to
exist in a pvops dom0.

If attempting to load non-existent modules is not an issue then I will
prepare a patch which adds "netbk blkbk xen-scsibk usbbk pciback"
to the list.

Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 16:50:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 16:50:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzsPH-0004x7-8x; Fri, 10 Aug 2012 16:49:51 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jean.guyader@citrix.com>) id 1SzsPG-0004x2-9I
	for xen-devel@lists.xen.org; Fri, 10 Aug 2012 16:49:50 +0000
Received: from [85.158.138.51:22912] by server-5.bemta-3.messagelabs.com id
	79/81-27557-DAB35205; Fri, 10 Aug 2012 16:49:49 +0000
X-Env-Sender: jean.guyader@citrix.com
X-Msg-Ref: server-15.tower-174.messagelabs.com!1344617388!25886620!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg4NDM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15712 invoked from network); 10 Aug 2012 16:49:49 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Aug 2012 16:49:49 -0000
X-IronPort-AV: E=Sophos;i="4.77,747,1336348800"; d="scan'208";a="13958326"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Aug 2012 16:49:48 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 10 Aug 2012 17:49:48 +0100
Received: from spongy.cam.xci-test.com ([10.80.248.53] helo=spongy)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<jean.guyader@citrix.com>)	id 1SzsPE-000091-1c;
	Fri, 10 Aug 2012 16:49:48 +0000
Received: by spongy (Postfix, from userid 2023)	id DDFC434049D; Fri, 10 Aug
	2012 17:51:09 +0100 (BST)
Date: Fri, 10 Aug 2012 17:51:09 +0100
From: Jean Guyader <jean.guyader@citrix.com>
To: Tim Deegan <tim@xen.org>
Message-ID: <20120810165109.GA19429@spongy>
References: <1344023454-31425-1-git-send-email-jean.guyader@citrix.com>
	<1344023454-31425-6-git-send-email-jean.guyader@citrix.com>
	<20120809103840.GD16986@ocelot.phlegethon.org>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20120809103840.GD16986@ocelot.phlegethon.org>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 5/5] xen: Add V4V implementation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 09/08 11:38, Tim Deegan wrote:
> Hi,
> 
> This looks pretty good; I think you've addressed almost all my comments
> except for one, which is really a design decision rather than an
> implementation one.  As I said last time: 
> 
> ] And what about protocol?  Protocol seems to have ended up as a bit of a
> ] second-class citizen in v4v; it's defined, and indeed required, but not
> ] used for routing or for access control, so all traffic to a given port
> ] _on every protocol_ ends up on the same ring. 
> ] 
> ] This is the inverse of the TCP/IP namespace that you're copying, where
> ] protocol demux happens before port demux.  And I think it will bite
> ] someone if you ever, for example, want to send ICMP or GRE over a v4v
> ] channel.
> 

The protocol field is used to indicate the type of a message on the ring.

Right now we use two protocols in our linux driver: V4V_PROTO_DGRAM and
V4V_PROTO_STREAM. In the future that could probably be extended with new
protocols, such as V4V_PROTO_ICMP for instance.

The demultiplexing happens at the other end: the driver can look at the
message and decide what to do with it based on the protocol field.

Jean

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 17:15:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 17:15:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Szsnh-0005f9-6c; Fri, 10 Aug 2012 17:15:05 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Szsnf-0005f3-Pr
	for xen-devel@lists.xensource.com; Fri, 10 Aug 2012 17:15:03 +0000
Received: from [85.158.143.99:25849] by server-2.bemta-4.messagelabs.com id
	C4/7A-31966-79145205; Fri, 10 Aug 2012 17:15:03 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-8.tower-216.messagelabs.com!1344618902!17816944!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg4NDM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17118 invoked from network); 10 Aug 2012 17:15:02 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Aug 2012 17:15:02 -0000
X-IronPort-AV: E=Sophos;i="4.77,747,1336348800"; d="scan'208";a="13958602"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Aug 2012 17:15:02 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 10 Aug 2012 18:15:02 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1Szsnd-0000QQ-Uo; Fri, 10 Aug 2012 17:15:02 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Szsnd-00071Y-Oa;
	Fri, 10 Aug 2012 18:15:01 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20517.16789.665860.721279@mariner.uk.xensource.com>
Date: Fri, 10 Aug 2012 18:15:01 +0100
To: Olaf Hering <olaf@aepfle.de>
In-Reply-To: <20120810160500.GA21459@aepfle.de>
References: <20434.1848.747678.259199@mariner.uk.xensource.com>
	<1343824081.27221.84.camel@zakaz.uk.xensource.com>
	<20507.45344.552468.930223@mariner.uk.xensource.com>
	<502137BA0200007800093442@nat28.tlf.novell.com>
	<20513.20168.993350.590876@mariner.uk.xensource.com>
	<50222C4A0200007800093711@nat28.tlf.novell.com>
	<20120810150447.GA13318@aepfle.de>
	<50253FFF020000780009437F@nat28.tlf.novell.com>
	<20120810151033.GA14059@aepfle.de>
	<502541A60200007800094396@nat28.tlf.novell.com>
	<20120810160500.GA21459@aepfle.de>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: "fantonifabio@tiscali.it" <fantonifabio@tiscali.it>,
	xen-devel <xen-devel@lists.xensource.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, Jan Beulich <JBeulich@suse.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [PATCH] tools/hotplug/Linux/init.d/: added other
 xen kernel modules on xencommons start
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Olaf Hering writes ("Re: [Xen-devel] [PATCH] tools/hotplug/Linux/init.d/: added other xen kernel modules on xencommons start"):
> If attempting to load non-existent modules is not an issue then I will
> prepare a patch which adds "netbk blkbk xen-scsibk usbbk pciback"
> to the list.

The existing scattershot list of modprobes is based on the assumption
that loading non-existent modules is harmless.  So yes please.
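[Editorial note: the harmless-failure pattern Ian describes can be sketched as below. The function wrapper and the injectable loader command are illustrative additions so the loop can be exercised without a real modprobe; the actual xencommons script simply runs the modprobes directly.]

```shell
#!/bin/sh
# Sketch of the scattershot loading style discussed above: each module
# load is allowed to fail quietly, on the assumption that asking for a
# module the kernel doesn't ship is harmless.  $1 lets a stand-in for
# modprobe be injected for testing; it defaults to the real modprobe.
load_backend_modules () {
    loader="${1:-modprobe}"
    for m in netbk blkbk xen-scsibk usbbk pciback; do
        "$loader" "$m" >/dev/null 2>&1 || true  # failure is harmless
    done
    return 0
}
```

The `|| true` is what makes the list safe to extend: a name that matches no module on a given kernel costs nothing and never aborts the script.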

Thanks,
Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 17:49:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 17:49:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SztL1-00060e-I9; Fri, 10 Aug 2012 17:49:31 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SztKz-00060Z-7U
	for xen-devel@lists.xensource.com; Fri, 10 Aug 2012 17:49:29 +0000
Received: from [85.158.138.51:43151] by server-6.bemta-3.messagelabs.com id
	A6/41-32013-8A945205; Fri, 10 Aug 2012 17:49:28 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-8.tower-174.messagelabs.com!1344620966!27618219!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg4NDM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8997 invoked from network); 10 Aug 2012 17:49:26 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Aug 2012 17:49:26 -0000
X-IronPort-AV: E=Sophos;i="4.77,747,1336348800"; d="scan'208";a="13959013"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Aug 2012 17:49:19 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 10 Aug 2012 18:49:19 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1SztKo-0000hI-Sx;
	Fri, 10 Aug 2012 17:49:18 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1SztKo-0008UK-HF;
	Fri, 10 Aug 2012 18:49:18 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13591-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 10 Aug 2012 18:49:18 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13591: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13591 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13591/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13587
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13587
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13587
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13587

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  47080c965937
baseline version:
 xen                  d25406e25af4

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=47080c965937
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable 47080c965937
+ branch=xen-unstable
+ revision=47080c965937
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/staging/xen-unstable.hg
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-unstable.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-unstable.hg
+ hg push -r 47080c965937 ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 1 changesets with 6 changes to 6 files

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 17:49:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 17:49:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SztL1-00060e-I9; Fri, 10 Aug 2012 17:49:31 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SztKz-00060Z-7U
	for xen-devel@lists.xensource.com; Fri, 10 Aug 2012 17:49:29 +0000
Received: from [85.158.138.51:43151] by server-6.bemta-3.messagelabs.com id
	A6/41-32013-8A945205; Fri, 10 Aug 2012 17:49:28 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-8.tower-174.messagelabs.com!1344620966!27618219!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg4NDM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8997 invoked from network); 10 Aug 2012 17:49:26 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Aug 2012 17:49:26 -0000
X-IronPort-AV: E=Sophos;i="4.77,747,1336348800"; d="scan'208";a="13959013"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Aug 2012 17:49:19 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 10 Aug 2012 18:49:19 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1SztKo-0000hI-Sx;
	Fri, 10 Aug 2012 17:49:18 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1SztKo-0008UK-HF;
	Fri, 10 Aug 2012 18:49:18 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13591-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 10 Aug 2012 18:49:18 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13591: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13591 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13591/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13587
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13587
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13587
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13587

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  47080c965937
baseline version:
 xen                  d25406e25af4

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=47080c965937
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable 47080c965937
+ branch=xen-unstable
+ revision=47080c965937
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/staging/xen-unstable.hg
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-unstable.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-unstable.hg
+ hg push -r 47080c965937 ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 1 changesets with 6 changes to 6 files

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 19:12:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 19:12:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Szud4-00077s-PN; Fri, 10 Aug 2012 19:12:14 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <anderson@redhat.com>) id 1Szud2-00077k-Uk
	for xen-devel@lists.xensource.com; Fri, 10 Aug 2012 19:12:13 +0000
X-Env-Sender: anderson@redhat.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1344625926!1774905!1
X-Originating-IP: [209.132.183.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjUgPT4gODI4ODI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25452 invoked from network); 10 Aug 2012 19:12:06 -0000
Received: from mx4-phx2.redhat.com (HELO mx4-phx2.redhat.com) (209.132.183.25)
	by server-14.tower-27.messagelabs.com with SMTP;
	10 Aug 2012 19:12:06 -0000
Received: from zmail15.collab.prod.int.phx2.redhat.com
	(zmail15.collab.prod.int.phx2.redhat.com [10.5.83.17])
	by mx4-phx2.redhat.com (8.13.8/8.13.8) with ESMTP id q7AJBvmW020868;
	Fri, 10 Aug 2012 15:11:58 -0400
Date: Fri, 10 Aug 2012 15:11:57 -0400 (EDT)
From: Dave Anderson <anderson@redhat.com>
To: Daniel Kiper <daniel.kiper@oracle.com>
Message-ID: <1568927206.9177777.1344625917898.JavaMail.root@redhat.com>
In-Reply-To: <20120810132357.GA2576@host-192-168-1-59.local.net-space.pl>
MIME-Version: 1.0
X-Originating-IP: [10.16.185.59]
X-Mailer: Zimbra 7.2.0_GA_2669 (ZimbraWebClient - FF3.0 (Linux)/7.2.0_GA_2669)
Cc: olaf@aepfle.de, xen-devel@lists.xensource.com,
	konrad wilk <konrad.wilk@oracle.com>,
	andrew cooper3 <andrew.cooper3@citrix.com>, ptesarik@suse.cz,
	jbeulich@suse.com, kexec@lists.infradead.org, crash-utility@redhat.com
Subject: Re: [Xen-devel] [PATCH v2 0/6] crash: Bundle of fixes for Xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



----- Original Message -----
> Hi,
> 
> It looks like Xen support in crash has not been maintained
> since 2009. I am trying to fix this. Here is a bundle of fixes:
>   - xen: Always calculate max_cpus value,
>   - xen: Read only crash notes for onlined CPUs,
>   - x86/xen: Read variables from dynamically allocated per_cpu data,
>   - xen: Get idle data from alternative source,
>   - xen: Read data correctly from dynamically allocated console ring, too
>     (fixed in this release),
>   - xen: Add support for 3 level P2M tree (new patch in this release).
> 
> Daniel
> 

Hi Daniel,

The original 5 updates specific to the Xen hypervisor look OK,
but new patch 6/6 is going to take some studying/testing to
alleviate my backwards-compatibility worries.  Can I ask whether
you fully tested it with older 2-level P2M tree kernels? 

Thanks,
  Dave

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 19:15:24 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 19:15:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Szufj-0007DL-Bs; Fri, 10 Aug 2012 19:14:59 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Santosh.Jodh@citrix.com>) id 1Szufi-0007DD-GJ
	for xen-devel@lists.xensource.com; Fri, 10 Aug 2012 19:14:58 +0000
Received: from [85.158.143.99:50316] by server-1.bemta-4.messagelabs.com id
	C4/3D-07754-1BD55205; Fri, 10 Aug 2012 19:14:57 +0000
X-Env-Sender: Santosh.Jodh@citrix.com
X-Msg-Ref: server-13.tower-216.messagelabs.com!1344626095!26693758!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzA1NDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24124 invoked from network); 10 Aug 2012 19:14:56 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Aug 2012 19:14:56 -0000
X-IronPort-AV: E=Sophos;i="4.77,747,1336363200"; d="scan'208";a="204849460"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Aug 2012 15:14:55 -0400
Received: from REDBLD-XS.ad.xensource.com (10.232.6.25) by
	smtprelay.citrix.com (10.13.107.66) with Microsoft SMTP Server id
	8.3.213.0; Fri, 10 Aug 2012 15:14:55 -0400
MIME-Version: 1.0
X-Mercurial-Node: 9c7609a4fbc117b1600f8d515f261223acdc3d06
Message-ID: <9c7609a4fbc117b1600f.1344626094@REDBLD-XS.ad.xensource.com>
User-Agent: Mercurial-patchbomb/1.6.4
Date: Fri, 10 Aug 2012 12:14:54 -0700
From: Santosh Jodh <santosh.jodh@citrix.com>
To: <xen-devel@lists.xensource.com>
Cc: wei.wang2@amd.com, tim@xen.org, xiantao.zhang@intel.com
Subject: [Xen-devel] [PATCH] dump_p2m_table: For IOMMU
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add a new key handler 'o' that dumps the IOMMU p2m table for each domain.
Dumping is skipped for domain0.
Intel- and AMD-specific iommu_ops handlers perform the actual table walk.

Signed-off-by: Santosh Jodh <santosh.jodh@citrix.com>

diff -r 472fc515a463 -r 9c7609a4fbc1 xen/drivers/passthrough/amd/pci_amd_iommu.c
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c	Tue Aug 07 18:37:31 2012 +0100
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c	Fri Aug 10 08:19:58 2012 -0700
@@ -22,6 +22,7 @@
 #include <xen/pci.h>
 #include <xen/pci_regs.h>
 #include <xen/paging.h>
+#include <xen/softirq.h>
 #include <asm/hvm/iommu.h>
 #include <asm/amd-iommu.h>
 #include <asm/hvm/svm/amd-iommu-proto.h>
@@ -512,6 +513,80 @@ static int amd_iommu_group_id(u16 seg, u
 
 #include <asm/io_apic.h>
 
+static void amd_dump_p2m_table_level(struct page_info* pg, int level, 
+                                     paddr_t gpa, int indent)
+{
+    paddr_t address;
+    void *table_vaddr, *pde;
+    paddr_t next_table_maddr;
+    int index, next_level, present;
+    u32 *entry;
+
+    if ( level < 1 )
+        return;
+
+    table_vaddr = __map_domain_page(pg);
+    if ( table_vaddr == NULL )
+    {
+        printk("Failed to map IOMMU domain page %"PRIpaddr"\n", 
+                page_to_maddr(pg));
+        return;
+    }
+
+    for ( index = 0; index < PTE_PER_TABLE_SIZE; index++ )
+    {
+        if ( !(index % 2) )
+            process_pending_softirqs();
+
+        pde = table_vaddr + (index * IOMMU_PAGE_TABLE_ENTRY_SIZE);
+        next_table_maddr = amd_iommu_get_next_table_from_pte(pde);
+        entry = (u32*)pde;
+
+        present = get_field_from_reg_u32(entry[0],
+                                         IOMMU_PDE_PRESENT_MASK,
+                                         IOMMU_PDE_PRESENT_SHIFT);
+
+        if ( !present )
+            continue;
+
+        next_level = get_field_from_reg_u32(entry[0],
+                                            IOMMU_PDE_NEXT_LEVEL_MASK,
+                                            IOMMU_PDE_NEXT_LEVEL_SHIFT);
+
+        address = gpa + amd_offset_level_address(index, level);
+        if ( (next_table_maddr != 0) && (next_level != 0) )
+        {
+            amd_dump_p2m_table_level(
+                maddr_to_page(next_table_maddr), level - 1, 
+                address, indent + 1);
+        }
+        else 
+        {
+            int i;
+
+            for ( i = 0; i < indent; i++ )
+                printk("  ");
+
+            printk("gfn: %08lx  mfn: %08lx\n",
+                   (unsigned long)PFN_DOWN(address), 
+                   (unsigned long)PFN_DOWN(next_table_maddr));
+        }
+    }
+
+    unmap_domain_page(table_vaddr);
+}
+
+static void amd_dump_p2m_table(struct domain *d)
+{
+    struct hvm_iommu *hd  = domain_hvm_iommu(d);
+
+    if ( !hd->root_table ) 
+        return;
+
+    printk("p2m table has %d levels\n", hd->paging_mode);
+    amd_dump_p2m_table_level(hd->root_table, hd->paging_mode, 0, 0);
+}
+
 const struct iommu_ops amd_iommu_ops = {
     .init = amd_iommu_domain_init,
     .dom0_init = amd_iommu_dom0_init,
@@ -531,4 +606,5 @@ const struct iommu_ops amd_iommu_ops = {
     .resume = amd_iommu_resume,
     .share_p2m = amd_iommu_share_p2m,
     .crash_shutdown = amd_iommu_suspend,
+    .dump_p2m_table = amd_dump_p2m_table,
 };
diff -r 472fc515a463 -r 9c7609a4fbc1 xen/drivers/passthrough/iommu.c
--- a/xen/drivers/passthrough/iommu.c	Tue Aug 07 18:37:31 2012 +0100
+++ b/xen/drivers/passthrough/iommu.c	Fri Aug 10 08:19:58 2012 -0700
@@ -18,11 +18,13 @@
 #include <asm/hvm/iommu.h>
 #include <xen/paging.h>
 #include <xen/guest_access.h>
+#include <xen/keyhandler.h>
 #include <xen/softirq.h>
 #include <xsm/xsm.h>
 
 static void parse_iommu_param(char *s);
 static int iommu_populate_page_table(struct domain *d);
+static void iommu_dump_p2m_table(unsigned char key);
 
 /*
  * The 'iommu' parameter enables the IOMMU.  Optional comma separated
@@ -54,6 +56,12 @@ bool_t __read_mostly amd_iommu_perdev_in
 
 DEFINE_PER_CPU(bool_t, iommu_dont_flush_iotlb);
 
+static struct keyhandler iommu_p2m_table = {
+    .diagnostic = 0,
+    .u.fn = iommu_dump_p2m_table,
+    .desc = "dump iommu p2m table"
+};
+
 static void __init parse_iommu_param(char *s)
 {
     char *ss;
@@ -119,6 +127,7 @@ void __init iommu_dom0_init(struct domai
     if ( !iommu_enabled )
         return;
 
+    register_keyhandler('o', &iommu_p2m_table);
     d->need_iommu = !!iommu_dom0_strict;
     if ( need_iommu(d) )
     {
@@ -654,6 +663,34 @@ int iommu_do_domctl(
     return ret;
 }
 
+static void iommu_dump_p2m_table(unsigned char key)
+{
+    struct domain *d;
+    const struct iommu_ops *ops;
+
+    if ( !iommu_enabled )
+    {
+        printk("IOMMU not enabled!\n");
+        return;
+    }
+
+    ops = iommu_get_ops();
+    for_each_domain(d)
+    {
+        if ( !d->domain_id )
+            continue;
+
+        if ( iommu_use_hap_pt(d) )
+        {
+            printk("\ndomain%d IOMMU p2m table shared with MMU: \n", d->domain_id);
+            continue;
+        }
+
+        printk("\ndomain%d IOMMU p2m table: \n", d->domain_id);
+        ops->dump_p2m_table(d);
+    }
+}
+
 /*
  * Local variables:
  * mode: C
diff -r 472fc515a463 -r 9c7609a4fbc1 xen/drivers/passthrough/vtd/iommu.c
--- a/xen/drivers/passthrough/vtd/iommu.c	Tue Aug 07 18:37:31 2012 +0100
+++ b/xen/drivers/passthrough/vtd/iommu.c	Fri Aug 10 08:19:58 2012 -0700
@@ -31,6 +31,7 @@
 #include <xen/pci.h>
 #include <xen/pci_regs.h>
 #include <xen/keyhandler.h>
+#include <xen/softirq.h>
 #include <asm/msi.h>
 #include <asm/irq.h>
 #if defined(__i386__) || defined(__x86_64__)
@@ -2365,6 +2366,71 @@ static void vtd_resume(void)
     }
 }
 
+static void vtd_dump_p2m_table_level(paddr_t pt_maddr, int level, paddr_t gpa, 
+                                     int indent)
+{
+    paddr_t address;
+    int i;
+    struct dma_pte *pt_vaddr, *pte;
+    int next_level;
+
+    if ( pt_maddr == 0 )
+        return;
+
+    pt_vaddr = map_vtd_domain_page(pt_maddr);
+    if ( pt_vaddr == NULL )
+    {
+        printk("Failed to map VT-D domain page %"PRIpaddr"\n", pt_maddr);
+        return;
+    }
+
+    next_level = level - 1;
+    for ( i = 0; i < PTE_NUM; i++ )
+    {
+        if ( !(i % 2) )
+            process_pending_softirqs();
+
+        pte = &pt_vaddr[i];
+        if ( !dma_pte_present(*pte) )
+            continue;
+
+        address = gpa + offset_level_address(i, level);
+        if ( next_level >= 1 ) 
+        {
From xen-devel-bounces@lists.xen.org Fri Aug 10 19:15:24 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 19:15:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Szufj-0007DL-Bs; Fri, 10 Aug 2012 19:14:59 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Santosh.Jodh@citrix.com>) id 1Szufi-0007DD-GJ
	for xen-devel@lists.xensource.com; Fri, 10 Aug 2012 19:14:58 +0000
Received: from [85.158.143.99:50316] by server-1.bemta-4.messagelabs.com id
	C4/3D-07754-1BD55205; Fri, 10 Aug 2012 19:14:57 +0000
X-Env-Sender: Santosh.Jodh@citrix.com
X-Msg-Ref: server-13.tower-216.messagelabs.com!1344626095!26693758!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzA1NDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24124 invoked from network); 10 Aug 2012 19:14:56 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Aug 2012 19:14:56 -0000
X-IronPort-AV: E=Sophos;i="4.77,747,1336363200"; d="scan'208";a="204849460"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Aug 2012 15:14:55 -0400
Received: from REDBLD-XS.ad.xensource.com (10.232.6.25) by
	smtprelay.citrix.com (10.13.107.66) with Microsoft SMTP Server id
	8.3.213.0; Fri, 10 Aug 2012 15:14:55 -0400
MIME-Version: 1.0
X-Mercurial-Node: 9c7609a4fbc117b1600f8d515f261223acdc3d06
Message-ID: <9c7609a4fbc117b1600f.1344626094@REDBLD-XS.ad.xensource.com>
User-Agent: Mercurial-patchbomb/1.6.4
Date: Fri, 10 Aug 2012 12:14:54 -0700
From: Santosh Jodh <santosh.jodh@citrix.com>
To: <xen-devel@lists.xensource.com>
Cc: wei.wang2@amd.com, tim@xen.org, xiantao.zhang@intel.com
Subject: [Xen-devel] [PATCH] dump_p2m_table: For IOMMU
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add a new key handler 'o' to dump the IOMMU p2m table for each domain,
skipping the table for domain 0.
Add Intel- and AMD-specific iommu_ops handlers for dumping the p2m table.

Signed-off-by: Santosh Jodh <santosh.jodh@citrix.com>

diff -r 472fc515a463 -r 9c7609a4fbc1 xen/drivers/passthrough/amd/pci_amd_iommu.c
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c	Tue Aug 07 18:37:31 2012 +0100
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c	Fri Aug 10 08:19:58 2012 -0700
@@ -22,6 +22,7 @@
 #include <xen/pci.h>
 #include <xen/pci_regs.h>
 #include <xen/paging.h>
+#include <xen/softirq.h>
 #include <asm/hvm/iommu.h>
 #include <asm/amd-iommu.h>
 #include <asm/hvm/svm/amd-iommu-proto.h>
@@ -512,6 +513,80 @@ static int amd_iommu_group_id(u16 seg, u
 
 #include <asm/io_apic.h>
 
+static void amd_dump_p2m_table_level(struct page_info* pg, int level, 
+                                     paddr_t gpa, int indent)
+{
+    paddr_t address;
+    void *table_vaddr, *pde;
+    paddr_t next_table_maddr;
+    int index, next_level, present;
+    u32 *entry;
+
+    if ( level < 1 )
+        return;
+
+    table_vaddr = __map_domain_page(pg);
+    if ( table_vaddr == NULL )
+    {
+        printk("Failed to map IOMMU domain page %"PRIpaddr"\n", 
+                page_to_maddr(pg));
+        return;
+    }
+
+    for ( index = 0; index < PTE_PER_TABLE_SIZE; index++ )
+    {
+        if ( !(index % 2) )
+            process_pending_softirqs();
+
+        pde = table_vaddr + (index * IOMMU_PAGE_TABLE_ENTRY_SIZE);
+        next_table_maddr = amd_iommu_get_next_table_from_pte(pde);
+        entry = (u32*)pde;
+
+        present = get_field_from_reg_u32(entry[0],
+                                         IOMMU_PDE_PRESENT_MASK,
+                                         IOMMU_PDE_PRESENT_SHIFT);
+
+        if ( !present )
+            continue;
+
+        next_level = get_field_from_reg_u32(entry[0],
+                                            IOMMU_PDE_NEXT_LEVEL_MASK,
+                                            IOMMU_PDE_NEXT_LEVEL_SHIFT);
+
+        address = gpa + amd_offset_level_address(index, level);
+        if ( (next_table_maddr != 0) && (next_level != 0) )
+        {
+            amd_dump_p2m_table_level(
+                maddr_to_page(next_table_maddr), level - 1, 
+                address, indent + 1);
+        }
+        else 
+        {
+            int i;
+
+            for ( i = 0; i < indent; i++ )
+                printk("  ");
+
+            printk("gfn: %08lx  mfn: %08lx\n",
+                   (unsigned long)PFN_DOWN(address), 
+                   (unsigned long)PFN_DOWN(next_table_maddr));
+        }
+    }
+
+    unmap_domain_page(table_vaddr);
+}
+
+static void amd_dump_p2m_table(struct domain *d)
+{
+    struct hvm_iommu *hd  = domain_hvm_iommu(d);
+
+    if ( !hd->root_table ) 
+        return;
+
+    printk("p2m table has %d levels\n", hd->paging_mode);
+    amd_dump_p2m_table_level(hd->root_table, hd->paging_mode, 0, 0);
+}
+
 const struct iommu_ops amd_iommu_ops = {
     .init = amd_iommu_domain_init,
     .dom0_init = amd_iommu_dom0_init,
@@ -531,4 +606,5 @@ const struct iommu_ops amd_iommu_ops = {
     .resume = amd_iommu_resume,
     .share_p2m = amd_iommu_share_p2m,
     .crash_shutdown = amd_iommu_suspend,
+    .dump_p2m_table = amd_dump_p2m_table,
 };
diff -r 472fc515a463 -r 9c7609a4fbc1 xen/drivers/passthrough/iommu.c
--- a/xen/drivers/passthrough/iommu.c	Tue Aug 07 18:37:31 2012 +0100
+++ b/xen/drivers/passthrough/iommu.c	Fri Aug 10 08:19:58 2012 -0700
@@ -18,11 +18,13 @@
 #include <asm/hvm/iommu.h>
 #include <xen/paging.h>
 #include <xen/guest_access.h>
+#include <xen/keyhandler.h>
 #include <xen/softirq.h>
 #include <xsm/xsm.h>
 
 static void parse_iommu_param(char *s);
 static int iommu_populate_page_table(struct domain *d);
+static void iommu_dump_p2m_table(unsigned char key);
 
 /*
  * The 'iommu' parameter enables the IOMMU.  Optional comma separated
@@ -54,6 +56,12 @@ bool_t __read_mostly amd_iommu_perdev_in
 
 DEFINE_PER_CPU(bool_t, iommu_dont_flush_iotlb);
 
+static struct keyhandler iommu_p2m_table = {
+    .diagnostic = 0,
+    .u.fn = iommu_dump_p2m_table,
+    .desc = "dump iommu p2m table"
+};
+
 static void __init parse_iommu_param(char *s)
 {
     char *ss;
@@ -119,6 +127,7 @@ void __init iommu_dom0_init(struct domai
     if ( !iommu_enabled )
         return;
 
+    register_keyhandler('o', &iommu_p2m_table);
     d->need_iommu = !!iommu_dom0_strict;
     if ( need_iommu(d) )
     {
@@ -654,6 +663,34 @@ int iommu_do_domctl(
     return ret;
 }
 
+static void iommu_dump_p2m_table(unsigned char key)
+{
+    struct domain *d;
+    const struct iommu_ops *ops;
+
+    if ( !iommu_enabled )
+    {
+        printk("IOMMU not enabled!\n");
+        return;
+    }
+
+    ops = iommu_get_ops();
+    for_each_domain(d)
+    {
+        if ( !d->domain_id )
+            continue;
+
+        if ( iommu_use_hap_pt(d) )
+        {
+            printk("\ndomain%d IOMMU p2m table shared with MMU: \n", d->domain_id);
+            continue;
+        }
+
+        printk("\ndomain%d IOMMU p2m table: \n", d->domain_id);
+        ops->dump_p2m_table(d);
+    }
+}
+
 /*
  * Local variables:
  * mode: C
diff -r 472fc515a463 -r 9c7609a4fbc1 xen/drivers/passthrough/vtd/iommu.c
--- a/xen/drivers/passthrough/vtd/iommu.c	Tue Aug 07 18:37:31 2012 +0100
+++ b/xen/drivers/passthrough/vtd/iommu.c	Fri Aug 10 08:19:58 2012 -0700
@@ -31,6 +31,7 @@
 #include <xen/pci.h>
 #include <xen/pci_regs.h>
 #include <xen/keyhandler.h>
+#include <xen/softirq.h>
 #include <asm/msi.h>
 #include <asm/irq.h>
 #if defined(__i386__) || defined(__x86_64__)
@@ -2365,6 +2366,71 @@ static void vtd_resume(void)
     }
 }
 
+static void vtd_dump_p2m_table_level(paddr_t pt_maddr, int level, paddr_t gpa, 
+                                     int indent)
+{
+    paddr_t address;
+    int i;
+    struct dma_pte *pt_vaddr, *pte;
+    int next_level;
+
+    if ( pt_maddr == 0 )
+        return;
+
+    pt_vaddr = map_vtd_domain_page(pt_maddr);
+    if ( pt_vaddr == NULL )
+    {
+        printk("Failed to map VT-D domain page %"PRIpaddr"\n", pt_maddr);
+        return;
+    }
+
+    next_level = level - 1;
+    for ( i = 0; i < PTE_NUM; i++ )
+    {
+        if ( !(i % 2) )
+            process_pending_softirqs();
+
+        pte = &pt_vaddr[i];
+        if ( !dma_pte_present(*pte) )
+            continue;
+
+        address = gpa + offset_level_address(i, level);
+        if ( next_level >= 1 ) 
+        {
+            vtd_dump_p2m_table_level(dma_pte_addr(*pte), next_level, 
+                                     address, indent + 1);
+        }
+        else
+        {
+            int j;
+
+            for ( j = 0; j < indent; j++ )
+                printk("  ");
+
+            printk("gfn: %08lx mfn: %08lx super=%d rd=%d wr=%d\n",
+                   (unsigned long)(address >> PAGE_SHIFT_4K),
+                   (unsigned long)(pte->val >> PAGE_SHIFT_4K),
+                   dma_pte_superpage(*pte)? 1 : 0,
+                   dma_pte_read(*pte)? 1 : 0,
+                   dma_pte_write(*pte)? 1 : 0);
+        }
+    }
+
+    unmap_vtd_domain_page(pt_vaddr);
+}
+
+static void vtd_dump_p2m_table(struct domain *d)
+{
+    struct hvm_iommu *hd;
+
+    if ( list_empty(&acpi_drhd_units) )
+        return;
+
+    hd = domain_hvm_iommu(d);
+    printk("p2m table has %d levels\n", agaw_to_level(hd->agaw));
+    vtd_dump_p2m_table_level(hd->pgd_maddr, agaw_to_level(hd->agaw), 0, 0);
+}
+
 const struct iommu_ops intel_iommu_ops = {
     .init = intel_iommu_domain_init,
     .dom0_init = intel_iommu_dom0_init,
@@ -2387,6 +2453,7 @@ const struct iommu_ops intel_iommu_ops =
     .crash_shutdown = vtd_crash_shutdown,
     .iotlb_flush = intel_iommu_iotlb_flush,
     .iotlb_flush_all = intel_iommu_iotlb_flush_all,
+    .dump_p2m_table = vtd_dump_p2m_table,
 };
 
 /*
diff -r 472fc515a463 -r 9c7609a4fbc1 xen/drivers/passthrough/vtd/iommu.h
--- a/xen/drivers/passthrough/vtd/iommu.h	Tue Aug 07 18:37:31 2012 +0100
+++ b/xen/drivers/passthrough/vtd/iommu.h	Fri Aug 10 08:19:58 2012 -0700
@@ -248,6 +248,8 @@ struct context_entry {
 #define level_to_offset_bits(l) (12 + (l - 1) * LEVEL_STRIDE)
 #define address_level_offset(addr, level) \
             ((addr >> level_to_offset_bits(level)) & LEVEL_MASK)
+#define offset_level_address(offset, level) \
+            ((u64)(offset) << level_to_offset_bits(level))
 #define level_mask(l) (((u64)(-1)) << level_to_offset_bits(l))
 #define level_size(l) (1 << level_to_offset_bits(l))
 #define align_to_level(addr, l) ((addr + level_size(l) - 1) & level_mask(l))
@@ -277,6 +279,9 @@ struct dma_pte {
 #define dma_set_pte_addr(p, addr) do {\
             (p).val |= ((addr) & PAGE_MASK_4K); } while (0)
 #define dma_pte_present(p) (((p).val & 3) != 0)
+#define dma_pte_superpage(p) (((p).val & (1<<7)) != 0)
+#define dma_pte_read(p) (((p).val & DMA_PTE_READ) != 0)
+#define dma_pte_write(p) (((p).val & DMA_PTE_WRITE) != 0)
 
 /* interrupt remap entry */
 struct iremap_entry {
diff -r 472fc515a463 -r 9c7609a4fbc1 xen/include/asm-x86/hvm/svm/amd-iommu-defs.h
--- a/xen/include/asm-x86/hvm/svm/amd-iommu-defs.h	Tue Aug 07 18:37:31 2012 +0100
+++ b/xen/include/asm-x86/hvm/svm/amd-iommu-defs.h	Fri Aug 10 08:19:58 2012 -0700
@@ -38,6 +38,10 @@
 #define PTE_PER_TABLE_ALLOC(entries)	\
 	PAGE_SIZE * (PTE_PER_TABLE_ALIGN(entries) >> PTE_PER_TABLE_SHIFT)
 
+#define amd_offset_level_address(offset, level) \
+      	((u64)(offset) << ((PTE_PER_TABLE_SHIFT * \
+                             (level - IOMMU_PAGING_MODE_LEVEL_1))))
+
 #define PCI_MIN_CAP_OFFSET	0x40
 #define PCI_MAX_CAP_BLOCKS	48
 #define PCI_CAP_PTR_MASK	0xFC
diff -r 472fc515a463 -r 9c7609a4fbc1 xen/include/xen/iommu.h
--- a/xen/include/xen/iommu.h	Tue Aug 07 18:37:31 2012 +0100
+++ b/xen/include/xen/iommu.h	Fri Aug 10 08:19:58 2012 -0700
@@ -141,6 +141,7 @@ struct iommu_ops {
     void (*crash_shutdown)(void);
     void (*iotlb_flush)(struct domain *d, unsigned long gfn, unsigned int page_count);
     void (*iotlb_flush_all)(struct domain *d);
+    void (*dump_p2m_table)(struct domain *d);
 };
 
 void iommu_update_ire_from_apic(unsigned int apic, unsigned int reg, unsigned int value);

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 19:16:24 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 19:16:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Szugm-0007IN-09; Fri, 10 Aug 2012 19:16:04 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ben.guthro@gmail.com>) id 1Szugk-0007Hx-Bf
	for xen-devel@lists.xen.org; Fri, 10 Aug 2012 19:16:02 +0000
X-Env-Sender: ben.guthro@gmail.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1344626152!8490067!1
X-Originating-IP: [209.85.213.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22775 invoked from network); 10 Aug 2012 19:15:53 -0000
Received: from mail-yw0-f45.google.com (HELO mail-yw0-f45.google.com)
	(209.85.213.45)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Aug 2012 19:15:53 -0000
Received: by yhpp34 with SMTP id p34so2166886yhp.32
	for <xen-devel@lists.xen.org>; Fri, 10 Aug 2012 12:15:52 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=b6yyEahjdk2hUkwyuoWQTe8hSX4cku5aC/MU8c5XuJc=;
	b=Vp/rZCIG8/pb9ZjGs0KeL1B9um8dbzGi4BdkI40sgORgko38cI5qmDexnoM4musg5E
	1RZKdjTyXCK1fkeXLEc1XcoP4AaeUr4oZHFlVjMM585LhfdBj49nIFMhRYO8SzjRtEmM
	/9cSRHBzfWlOx4lp40A2ggETIr91GkanjC/bn6/bJhF0jpxK6VUbTCGz94YFTuVnz7yK
	2uXiMigfSAMI69p078JlRuOP2fzHLrQcs/eh3PeIqtkYtkl5cJwjUwRpymXrfTo3Xuo/
	VKRBiN05GoE4qt8W6YT+fZ7r1wYogCC/j0yOm+go1sLfF5b7wGUB2N6ktNuYtQWDIDhu
	y3Wg==
MIME-Version: 1.0
Received: by 10.50.154.225 with SMTP id vr1mr2756444igb.70.1344626151999; Fri,
	10 Aug 2012 12:15:51 -0700 (PDT)
Received: by 10.64.33.15 with HTTP; Fri, 10 Aug 2012 12:15:51 -0700 (PDT)
In-Reply-To: <5024CB720200007800094124@nat28.tlf.novell.com>
References: <CAOvdn6WwgBD-2yf1uu3m9-16=Tb5KUrybXf2KibtnxKbP7YtwA@mail.gmail.com>
	<CAOvdn6UBW-yTOMd3YSf5M9JiHLRivyaKL-qzcKG8epszqaKmGg@mail.gmail.com>
	<20120807163320.GC15053@phenom.dumpdata.com>
	<CAOvdn6XcRGKtJxGwJO8J+ZBJr89wfTLc1woW5rhtMM540H9h1w@mail.gmail.com>
	<CAOvdn6XUoSeGoo7X=AJ1kpDxZB9HZEv+TJ4v-=FHcyR4=CHK-w@mail.gmail.com>
	<502240F302000078000937A6@nat28.tlf.novell.com>
	<CAOvdn6WN3kxC8Bb9g6MpfLULu47EGSGCtajPc-SFeJ-8VSUQDQ@mail.gmail.com>
	<CAOvdn6Xk1inm1hU55i0phU2qvSWyJLNoyDdn43biBAY2OewPtw@mail.gmail.com>
	<5023F5400200007800093F4E@nat28.tlf.novell.com>
	<CAOvdn6U-f3C4U8xVPvDyRFkFFLkbfiCXg_n+9zgA9V5u+q5jfw@mail.gmail.com>
	<5023F8B80200007800093F8C@nat28.tlf.novell.com>
	<CAOvdn6XKHfete_+=Hkvt20G7_cJ-4i8+9j9YD30OxzQ16Ru-+g@mail.gmail.com>
	<5024CB720200007800094124@nat28.tlf.novell.com>
Date: Fri, 10 Aug 2012 15:15:51 -0400
X-Google-Sender-Auth: jD7-yn-GquSMQfL4NsfpSQ8YJRc
Message-ID: <CAOvdn6VVJdhWVYa-gNVt3euE4AbGi1TijUCUd1wGzv0SCWEUYA@mail.gmail.com>
From: Ben Guthro <ben@guthro.net>
To: Jan Beulich <JBeulich@suse.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen4.2 S3 regression?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I'll continue to investigate, as my schedule allows, but haven't found
any smoking gun just yet.

This happens to be on an Ivy Bridge system; I have other Sandy Bridge
systems that go to sleep but never wake up at all, forcing a hard
power cycle.

I tested these iommu= parameters on one of these machines, to no
effect. Every time they go into S3, a hard reset is necessary to get
them to come out of it.
On Fri, Aug 10, 2012 at 2:50 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>>> On 09.08.12 at 18:09, Ben Guthro <ben@guthro.net> wrote:
>> iommu=no-intremap
>> This seems to work around the issue on this platform, performing
>> multiple suspend/resume cycles, and ahci came back afterwards just
>> fine.
>>
>> What is the downside to flipping this off?
>
> Loss of security (against misbehaving/malicious guests). So we
> certainly want/need to get to the bottom of this (especially if
> this is not only one kind of system that's affected).
>
>> iommu=off
>> This test behaved similarly to the above, also working around the issue.
>
> Of course, this is a superset of the former.
>
> This result, however, makes it more likely again to indeed be a
> Xen side problem, not Dom0 induced corruption.
>
> Jan
>
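
[For context, the two workarounds being compared are Xen hypervisor
command-line options. On a Debian-style dom0 they would typically be set via
GRUB2 as below; the file path and variable name are assumptions about the
tester's setup, and this shows the workaround under test, not a recommended
configuration:]

```shell
# /etc/default/grub (assumed Debian-style layout)

# Disable only interrupt remapping -- the workaround that let S3 resume
# succeed above, at the cost of protection against misbehaving/malicious
# guests injecting interrupts:
GRUB_CMDLINE_XEN_DEFAULT="iommu=no-intremap"

# Alternatively, iommu=off disables the IOMMU entirely -- a superset of
# the above that also gives up DMA isolation for passthrough devices.

# Regenerate grub.cfg afterwards:
#   update-grub
```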

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 19:20:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 19:20:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Szukm-0007XS-MV; Fri, 10 Aug 2012 19:20:12 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Szukk-0007XC-SJ
	for xen-devel@lists.xensource.com; Fri, 10 Aug 2012 19:20:11 +0000
Received: from [85.158.143.35:45709] by server-2.bemta-4.messagelabs.com id
	79/6B-31966-AEE55205; Fri, 10 Aug 2012 19:20:10 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1344626408!5025692!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg4NDM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23038 invoked from network); 10 Aug 2012 19:20:09 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Aug 2012 19:20:09 -0000
X-IronPort-AV: E=Sophos;i="4.77,747,1336348800"; d="scan'208";a="13960009"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Aug 2012 19:20:08 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 10 Aug 2012 20:20:08 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Szuki-0001Hc-82;
	Fri, 10 Aug 2012 19:20:08 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Szukh-0004M0-Ie;
	Fri, 10 Aug 2012 20:20:08 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13592-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 10 Aug 2012 20:20:08 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.0 test] 13592: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13592 linux-3.0 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13592/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf      9 guest-start                  fail   like 13537
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13537
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13537
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13537
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13537

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  8 debian-fixup                fail never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass

version targeted for testing:
 linux                b09b34258046c4555e535a279e29032303a932f8
baseline version:
 linux                f351a1d7efda2edd52c23a150b07b8380c47b6c0

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=linux-3.0
+ revision=b09b34258046c4555e535a279e29032303a932f8
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push linux-3.0 b09b34258046c4555e535a279e29032303a932f8
+ branch=linux-3.0
+ revision=b09b34258046c4555e535a279e29032303a932f8
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=linux
+ xenbranch=xen-unstable
+ '[' xlinux = xlinux ']'
+ linuxbranch=linux-3.0
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/staging/xen-unstable.hg
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.linux-3.0
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.linux-3.0
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-unstable.git
+ info_linux_tree linux-3.0
+ case $1 in
+ : git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
+ : linux-3.0.y
+ : linux-3.0.y
+ : git
+ : git
+ : git://xenbits.xen.org/linux-pvops.git
+ : xen@xenbits.xensource.com:git/linux-pvops.git
+ : tested/linux-3.0
+ : tested/linux-3.0
+ return 0
+ cd /export/home/osstest/repos/linux
+ git push xen@xenbits.xensource.com:git/linux-pvops.git b09b34258046c4555e535a279e29032303a932f8:tested/linux-3.0
Counting objects: 400, done.
Compressing objects: 100% (55/55), done.
Writing objects: 100% (288/288), 50.80 KiB, done.
Total 288 (delta 230), reused 288 (delta 230)
To xen@xenbits.xensource.com:git/linux-pvops.git
   f351a1d..b09b342  b09b34258046c4555e535a279e29032303a932f8 -> tested/linux-3.0
+ exit 0

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 19:20:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 19:20:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Szukm-0007XS-MV; Fri, 10 Aug 2012 19:20:12 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1Szukk-0007XC-SJ
	for xen-devel@lists.xensource.com; Fri, 10 Aug 2012 19:20:11 +0000
Received: from [85.158.143.35:45709] by server-2.bemta-4.messagelabs.com id
	79/6B-31966-AEE55205; Fri, 10 Aug 2012 19:20:10 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1344626408!5025692!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg4NDM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23038 invoked from network); 10 Aug 2012 19:20:09 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Aug 2012 19:20:09 -0000
X-IronPort-AV: E=Sophos;i="4.77,747,1336348800"; d="scan'208";a="13960009"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	10 Aug 2012 19:20:08 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 10 Aug 2012 20:20:08 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Szuki-0001Hc-82;
	Fri, 10 Aug 2012 19:20:08 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Szukh-0004M0-Ie;
	Fri, 10 Aug 2012 20:20:08 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13592-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 10 Aug 2012 20:20:08 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.0 test] 13592: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13592 linux-3.0 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13592/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf      9 guest-start                  fail   like 13537
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13537
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13537
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13537
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13537

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  8 debian-fixup                fail never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass

version targeted for testing:
 linux                b09b34258046c4555e535a279e29032303a932f8
baseline version:
 linux                f351a1d7efda2edd52c23a150b07b8380c47b6c0

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=linux-3.0
+ revision=b09b34258046c4555e535a279e29032303a932f8
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push linux-3.0 b09b34258046c4555e535a279e29032303a932f8
+ branch=linux-3.0
+ revision=b09b34258046c4555e535a279e29032303a932f8
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=linux
+ xenbranch=xen-unstable
+ '[' xlinux = xlinux ']'
+ linuxbranch=linux-3.0
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/staging/xen-unstable.hg
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.linux-3.0
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.linux-3.0
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-unstable.git
+ info_linux_tree linux-3.0
+ case $1 in
+ : git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
+ : linux-3.0.y
+ : linux-3.0.y
+ : git
+ : git
+ : git://xenbits.xen.org/linux-pvops.git
+ : xen@xenbits.xensource.com:git/linux-pvops.git
+ : tested/linux-3.0
+ : tested/linux-3.0
+ return 0
+ cd /export/home/osstest/repos/linux
+ git push xen@xenbits.xensource.com:git/linux-pvops.git b09b34258046c4555e535a279e29032303a932f8:tested/linux-3.0
Counting objects: 400, done.
Compressing objects: 100% (55/55), done.
Writing objects: 100% (288/288), 50.80 KiB, done.
Total 288 (delta 230), reused 288 (delta 230)
To xen@xenbits.xensource.com:git/linux-pvops.git
   f351a1d..b09b342  b09b34258046c4555e535a279e29032303a932f8 -> tested/linux-3.0
+ exit 0

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 19:45:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 19:45:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Szv9I-0007rr-V8; Fri, 10 Aug 2012 19:45:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1Szv9H-0007rm-Hb
	for xen-devel@lists.xen.org; Fri, 10 Aug 2012 19:45:31 +0000
Received: from [85.158.143.35:3211] by server-2.bemta-4.messagelabs.com id
	9E/C5-31966-AD465205; Fri, 10 Aug 2012 19:45:30 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-7.tower-21.messagelabs.com!1344627930!11959003!1
X-Originating-IP: [81.169.146.161]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MSA9PiAzOTQ1MDg=\n,sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MSA9PiAzOTQ1MDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20336 invoked from network); 10 Aug 2012 19:45:30 -0000
Received: from mo-p00-ob.rzone.de (HELO mo-p00-ob.rzone.de) (81.169.146.161)
	by server-7.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Aug 2012 19:45:30 -0000
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+zrwiavkK6tmQaLfmwtM48/ll3c7oGjw=
X-RZG-CLASS-ID: mo00
Received: from probook.site
	(dslb-084-057-084-038.pools.arcor-ip.net [84.57.84.38])
	by smtp.strato.de (jorabe mo18) (RZmta 30.8 DYNA|AUTH)
	with (DHE-RSA-AES256-SHA encrypted) ESMTPA id 500bdbo7AHGnkh
	for <xen-devel@lists.xen.org>; Fri, 10 Aug 2012 21:45:30 +0200 (CEST)
Received: from probook.site (localhost [IPv6:::1])
	by probook.site (Postfix) with ESMTP id 666A018638
	for <xen-devel@lists.xen.org>; Fri, 10 Aug 2012 21:45:29 +0200 (CEST)
MIME-Version: 1.0
X-Mercurial-Node: 25849858146fabf682b8dbc88c33fe48c4522a30
Message-Id: <25849858146fabf682b8.1344627928@probook.site>
User-Agent: Mercurial-patchbomb/1.7.5
Date: Fri, 10 Aug 2012 21:45:28 +0200
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH] tools: init.d/Linux/xencommons: load all known
	backend drivers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

# HG changeset patch
# User Olaf Hering <olaf@aepfle.de>
# Date 1344622797 -7200
# Node ID 25849858146fabf682b8dbc88c33fe48c4522a30
# Parent  7ce01c435f5a8b22da8469bc3947bfd32dd4a2f9
tools: init.d/Linux/xencommons: load all known backend drivers

Load all known backend drivers from xenlinux and pvops based dom0
kernels.  There is currently no code in xend or libxl to load these
drivers on demand, and libxl also currently has no helpful error
message if a backend driver is missing.

Signed-off-by: Olaf Hering <olaf@aepfle.de>

diff -r 7ce01c435f5a -r 25849858146f tools/hotplug/Linux/init.d/xencommons
--- a/tools/hotplug/Linux/init.d/xencommons
+++ b/tools/hotplug/Linux/init.d/xencommons
@@ -59,8 +59,14 @@ do_start () {
 	modprobe xen-gntalloc 2>/dev/null
 	modprobe xen-blkback 2>/dev/null
 	modprobe xen-netback 2>/dev/null
+	modprobe xen-pciback 2>/dev/null
 	modprobe evtchn 2>/dev/null
 	modprobe gntdev 2>/dev/null
+	modprobe netbk 2>/dev/null
+	modprobe blkbk 2>/dev/null
+	modprobe xen-scsibk 2>/dev/null
+	modprobe usbbk 2>/dev/null
+	modprobe pciback 2>/dev/null
 	modprobe xen-acpi-processor 2>/dev/null
 	mkdir -p /var/run/xen
 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

# HG changeset patch
# User Olaf Hering <olaf@aepfle.de>
# Date 1344622797 -7200
# Node ID 25849858146fabf682b8dbc88c33fe48c4522a30
# Parent  7ce01c435f5a8b22da8469bc3947bfd32dd4a2f9
tools: init.d/Linux/xencommons: load all known backend drivers

Load all known backend drivers from xenlinux and pvops based dom0
kernels.  There is currently no code in xend or libxl to load these
drivers on demand, and libxl currently prints no helpful error message
if a backend driver is missing.

Signed-off-by: Olaf Hering <olaf@aepfle.de>

diff -r 7ce01c435f5a -r 25849858146f tools/hotplug/Linux/init.d/xencommons
--- a/tools/hotplug/Linux/init.d/xencommons
+++ b/tools/hotplug/Linux/init.d/xencommons
@@ -59,8 +59,14 @@ do_start () {
 	modprobe xen-gntalloc 2>/dev/null
 	modprobe xen-blkback 2>/dev/null
 	modprobe xen-netback 2>/dev/null
+	modprobe xen-pciback 2>/dev/null
 	modprobe evtchn 2>/dev/null
 	modprobe gntdev 2>/dev/null
+	modprobe netbk 2>/dev/null
+	modprobe blkbk 2>/dev/null
+	modprobe xen-scsibk 2>/dev/null
+	modprobe usbbk 2>/dev/null
+	modprobe pciback 2>/dev/null
 	modprobe xen-acpi-processor 2>/dev/null
 	mkdir -p /var/run/xen
 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 10 22:57:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Aug 2012 22:57:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Szy8Z-0000kq-ES; Fri, 10 Aug 2012 22:56:59 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <torushikeshj@gmail.com>)
	id 1Szy8Y-0000kb-53; Fri, 10 Aug 2012 22:56:58 +0000
X-Env-Sender: torushikeshj@gmail.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1344639409!8644643!1
X-Originating-IP: [74.125.83.43]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_20_30,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31265 invoked from network); 10 Aug 2012 22:56:50 -0000
Received: from mail-ee0-f43.google.com (HELO mail-ee0-f43.google.com)
	(74.125.83.43)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Aug 2012 22:56:50 -0000
Received: by eekd4 with SMTP id d4so567857eek.30
	for <multiple recipients>; Fri, 10 Aug 2012 15:56:49 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=SoNoVNKqDxPOyzGJ1wp2uNZ0pGCsuBrm94ls4I1lzeQ=;
	b=wxmsgjlio5OwSTJDyx5gQS1wemmraWLFQ/zt/rD/733UdHK+caKfWejQyjk0dKCHSy
	fwjuDkszIz6MRCN2A/9YYOy+IhwYiv8sgTLIsY5M11MXYZJTpjJrhuVKMmh7y0oO7Iw2
	5zcygk1+Oca6e4Ve/GPQbV3ixTwfmlJZ0RD+xMAgCdgbxhI3/5GjUZ+QK+ExS/SQa2iM
	JKjrjsZNikhiObzYe9H3Nlo35H9bFeDWmeOWO6jMdkI1EN1kyLNkaHSvOjdWeYSdwPD/
	Kjf+ISrWZMJ5Mc5tNNnHi2opTj0M1v08YHIuXRawQ2g9eL5xdvveGrBzGYBPlE5LSO5e
	bGZw==
MIME-Version: 1.0
Received: by 10.14.172.193 with SMTP id t41mr956641eel.25.1344639409818; Fri,
	10 Aug 2012 15:56:49 -0700 (PDT)
Received: by 10.14.213.72 with HTTP; Fri, 10 Aug 2012 15:56:49 -0700 (PDT)
Date: Sat, 11 Aug 2012 04:26:49 +0530
Message-ID: <CAO14VsOkki7FUZFJ65RRXkXfQBY_nU-pb=brHDL+Yiwh_42YLA@mail.gmail.com>
From: R J <torushikeshj@gmail.com>
To: xen-api@lists.xensource.com, xen-devel@lists.xensource.com
Subject: [Xen-devel] want to write a difference copy program to sync two
	VHDs in blktap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7314219350928194178=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7314219350928194178==
Content-Type: multipart/alternative; boundary=047d7b603ef25f8da004c6f1429e

--047d7b603ef25f8da004c6f1429e
Content-Type: text/plain; charset=ISO-8859-1

Hello List,

I'm using blktap on XCP 1.1 and developing a difference copy program for
easy backup and remote archival.

The idea is to read the BAT of VHD1 and sync it to the BAT of VHD2,
along with the bitmap and the data blocks.
Another way would be to add a SHA1 signature to each data block and
compare it with the remote VHD.

I tried to get some information from the programs below, but could not
find out how to add a SHA1 signature to a data block while writing it.
Similar to ZFS, could this also be used to verify file integrity?
Maybe someone can guide me on how to inject data into a data block of a
VHD while it is being written.

Please advise.

-- Rishi

On Mon, Aug 6, 2012 at 9:26 PM, R J <torushikeshj@gmail.com> wrote:

> Hello List,
>
> I would like to know details of tapdisk-diff and tapdisk-stream.
> My aim is to find differences between two VHDs and, if possible, sync
> them for redundancy via a difference-copy program.
>
> The idea is:
> - find the journal difference of src and dest VHD
> - copy the journal and modified blocks from src VHD to dest VHD for backup.
>
> In my case both are NFS VHDs.
>
> Could anyone please provide more info on the above two programs?
>
> - Rishi
>
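The block-by-block comparison described above can be sketched in shell. This is a plain sketch over raw image data under the assumption of a fixed block size (2 MiB is the default VHD data-block size); it does not parse the actual BAT or sector bitmaps, which a real tool would walk to skip unallocated blocks:

```shell
# Hypothetical sketch: report which fixed-size blocks of two images
# differ, by comparing SHA1 hashes block by block. This operates on raw
# data only; a real tool would consult the VHD BAT to skip holes.
block_diff () {
	src=$1 dst=$2
	bs=${3:-$((2 * 1024 * 1024))}   # VHD data blocks default to 2 MiB
	size=$(stat -c %s "$src")
	blocks=$(( (size + bs - 1) / bs ))
	i=0
	while [ "$i" -lt "$blocks" ]; do
		h1=$(dd if="$src" bs="$bs" skip="$i" count=1 2>/dev/null | sha1sum)
		h2=$(dd if="$dst" bs="$bs" skip="$i" count=1 2>/dev/null | sha1sum)
		if [ "$h1" != "$h2" ]; then
			echo "block $i differs"
		fi
		i=$((i + 1))
	done
}
```

Only the differing block indices would then need to be copied to the destination image.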

--047d7b603ef25f8da004c6f1429e
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

Hello List,<br><br>I&#39;m using blktap on XCP 1.1 and developing a differe=
nce copy program for easy backup and remote archival.<br><br>The idea is to=
 read the BAT of VHD1 and sync it to BAT of VHD2 with BITMAP and the data b=
locks.<br>
Other way would be by adding a sha1 signature in each data block and compar=
ing it with remote VHD.<br><br>I tried to get some info from below programs=
 but could not get required info about adding a sha1 sign of data block whi=
le writing it.<br>
Similar to ZFS this can be used to verify file integrity ?<br>May be if som=
eone can guide me about how to inject data while writing it in data block o=
f VHD.<br><br>Please advice.<br><br>-- Rishi<br><br><div class=3D"gmail_quo=
te">
On Mon, Aug 6, 2012 at 9:26 PM, R J <span dir=3D"ltr">&lt;<a href=3D"mailto=
:torushikeshj@gmail.com" target=3D"_blank">torushikeshj@gmail.com</a>&gt;</=
span> wrote:<br><blockquote class=3D"gmail_quote" style=3D"margin:0pt 0pt 0=
pt 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
Hello List,<br><br>I would like to know details of tapdisk-diff and tapdisk=
-stream<br>My aim is to find differences in two vhds and if possible sync t=
hem for redundancy via a difference-copy program<br><br>The idea is<br>
- find the journel difference of src and dest VHD<br>
- copy journel and modified blocks from src VHD to dest VHD for backup.<br>=
<br>in my case both are nfs vhds.<br><br>Could anyone please provide me mor=
e info on above two programs ?<br><br>- Rishi<br>
</blockquote></div><br>

--047d7b603ef25f8da004c6f1429e--


--===============7314219350928194178==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7314219350928194178==--


From xen-devel-bounces@lists.xen.org Sat Aug 11 00:04:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 11 Aug 2012 00:04:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1SzzBq-0001ry-NR; Sat, 11 Aug 2012 00:04:26 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1SzzBo-0001rt-My
	for xen-devel@lists.xensource.com; Sat, 11 Aug 2012 00:04:24 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1344643458!8650089!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg4MTM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22382 invoked from network); 11 Aug 2012 00:04:18 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Aug 2012 00:04:18 -0000
X-IronPort-AV: E=Sophos;i="4.77,749,1336348800"; d="scan'208";a="13962441"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	11 Aug 2012 00:04:17 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Sat, 11 Aug 2012 01:04:17 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1SzzBh-0003S6-BL;
	Sat, 11 Aug 2012 00:04:17 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1SzzBg-000137-V5;
	Sat, 11 Aug 2012 01:04:17 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13593-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 11 Aug 2012 01:04:17 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13593: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13593 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13593/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13591
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13591
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13591
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13591
 test-amd64-amd64-xl-sedf     14 guest-localmigrate/x10    fail REGR. vs. 13591

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  dc4970af48a0
baseline version:
 xen                  47080c965937

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=dc4970af48a0
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable dc4970af48a0
+ branch=xen-unstable
+ revision=dc4970af48a0
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/staging/xen-unstable.hg
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-unstable.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-unstable.hg
+ hg push -r dc4970af48a0 ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 1 changesets with 1 changes to 1 files

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=dc4970af48a0
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable dc4970af48a0
+ branch=xen-unstable
+ revision=dc4970af48a0
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/staging/xen-unstable.hg
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-unstable.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-unstable.hg
+ hg push -r dc4970af48a0 ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 1 changesets with 1 changes to 1 files

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Aug 11 01:34:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 11 Aug 2012 01:34:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T00aZ-0006OQ-1c; Sat, 11 Aug 2012 01:34:03 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1T00aX-0006OL-OQ
	for xen-devel@lists.xensource.com; Sat, 11 Aug 2012 01:34:01 +0000
Received: from [85.158.143.99:51221] by server-3.bemta-4.messagelabs.com id
	2D/94-09529-986B5205; Sat, 11 Aug 2012 01:34:01 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-9.tower-216.messagelabs.com!1344648839!27647018!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjgzNTc0\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27344 invoked from network); 11 Aug 2012 01:34:00 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-9.tower-216.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 11 Aug 2012 01:34:00 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7B1Xp2E005385
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Sat, 11 Aug 2012 01:33:52 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7B1XoCw015736
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Sat, 11 Aug 2012 01:33:50 GMT
Received: from abhmt104.oracle.com (abhmt104.oracle.com [141.146.116.56])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7B1XoMc030791; Fri, 10 Aug 2012 20:33:50 -0500
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 10 Aug 2012 18:33:49 -0700
Date: Fri, 10 Aug 2012 18:33:48 -0700
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20120810183348.25e1c973@mantra.us.oracle.com>
In-Reply-To: <1344262325-26598-1-git-send-email-stefano.stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
	<1344262325-26598-1-git-send-email-stefano.stabellini@eu.citrix.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.7.6 (GTK+ 2.18.9; x86_64-redhat-linux-gnu)
Mime-Version: 1.0
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: xen-devel@lists.xensource.com, Tim.Deegan@citrix.com,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, Ian.Campbell@citrix.com
Subject: Re: [Xen-devel] [PATCH 1/5] xen: improve changes to
 xen_add_to_physmap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 6 Aug 2012 15:12:01 +0100
Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:

> This is an incremental patch on top of
> c0bc926083b5987a3e9944eec2c12ad0580100e2: in order to retain binary
> compatibility, it is better to introduce foreign_domid as part of a
> union containing both size and foreign_domid.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> ---
>  xen/include/public/memory.h |   11 +++++++----
>  1 files changed, 7 insertions(+), 4 deletions(-)
> 
> diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
> index b2adfbe..b0af2fd 100644
> --- a/xen/include/public/memory.h
> +++ b/xen/include/public/memory.h
> @@ -208,8 +208,12 @@ struct xen_add_to_physmap {
>      /* Which domain to change the mapping for. */
>      domid_t domid;
>  
> -    /* Number of pages to go through for gmfn_range */
> -    uint16_t    size;
> +    union {
> +        /* Number of pages to go through for gmfn_range */
> +        uint16_t    size;
> +        /* IFF gmfn_foreign */
> +        domid_t foreign_domid;
> +    };
>  
>      /* Source mapping space. */
>  #define XENMAPSPACE_shared_info  0 /* shared info page */
> @@ -217,8 +221,7 @@ struct xen_add_to_physmap {
>  #define XENMAPSPACE_gmfn         2 /* GMFN */
>  #define XENMAPSPACE_gmfn_range   3 /* GMFN range */
>  #define XENMAPSPACE_gmfn_foreign 4 /* GMFN from another guest */
> -    uint16_t space;
> -    domid_t foreign_domid; /* IFF gmfn_foreign */
> +    unsigned int space;
>  
>  #define XENMAPIDX_grant_table_status 0x80000000
>  

Is this the final version? I don't see it in today's xen unstable tree!
I have this in my tree:

struct xen_add_to_physmap {
    /* Which domain to change the mapping for. */
    domid_t domid;

    /* Number of pages to go through for gmfn_range */
    uint16_t    size;

    /* Source mapping space. */
#define XENMAPSPACE_shared_info 0 /* shared info page */
#define XENMAPSPACE_grant_table 1 /* grant table page */
#define XENMAPSPACE_gmfn        2 /* GMFN */
#define XENMAPSPACE_gmfn_range  3 /* GMFN range */
#define XENMAPSPACE_gmfn_foreign 4 /* GMFN from another guest */
    uint16_t space;
    domid_t foreign_domid;         /* IFF XENMAPSPACE_gmfn_foreign */
};


thanks,
Mukesh


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Aug 11 07:39:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 11 Aug 2012 07:39:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T06HK-0000y4-Re; Sat, 11 Aug 2012 07:38:34 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T06HJ-0000xz-Bt
	for xen-devel@lists.xensource.com; Sat, 11 Aug 2012 07:38:33 +0000
Received: from [85.158.143.35:58989] by server-2.bemta-4.messagelabs.com id
	BF/84-31966-8FB06205; Sat, 11 Aug 2012 07:38:32 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1344670711!12841632!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg4MTM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20713 invoked from network); 11 Aug 2012 07:38:31 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Aug 2012 07:38:31 -0000
X-IronPort-AV: E=Sophos;i="4.77,749,1336348800"; d="scan'208";a="13963793"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	11 Aug 2012 07:38:30 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Sat, 11 Aug 2012 08:38:30 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1T06HG-0005yv-35;
	Sat, 11 Aug 2012 07:38:30 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1T06HF-0003KF-Pj;
	Sat, 11 Aug 2012 08:38:29 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13594-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 11 Aug 2012 08:38:29 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13594: tolerable FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13594 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13594/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13593
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13593
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13593
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13593

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  dc4970af48a0
baseline version:
 xen                  dc4970af48a0

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Aug 12 06:33:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 12 Aug 2012 06:33:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0Rj7-0001eG-J8; Sun, 12 Aug 2012 06:32:41 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T0Rj5-0001dw-Ne
	for xen-devel@lists.xensource.com; Sun, 12 Aug 2012 06:32:39 +0000
Received: from [85.158.139.83:3147] by server-3.bemta-5.messagelabs.com id
	3C/19-27237-60E47205; Sun, 12 Aug 2012 06:32:38 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-15.tower-182.messagelabs.com!1344753157!27595013!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg4MTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22024 invoked from network); 12 Aug 2012 06:32:38 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Aug 2012 06:32:38 -0000
X-IronPort-AV: E=Sophos;i="4.77,754,1336348800"; d="scan'208";a="13967951"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	12 Aug 2012 06:32:36 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Sun, 12 Aug 2012 07:32:36 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1T0Rj2-0005ph-Dv;
	Sun, 12 Aug 2012 06:32:36 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1T0Rj1-0005xD-R1;
	Sun, 12 Aug 2012 07:32:36 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13595-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 12 Aug 2012 07:32:36 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13595: tolerable FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13595 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13595/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13594
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13594
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13594
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13594

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  dc4970af48a0
baseline version:
 xen                  dc4970af48a0

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Aug 12 18:04:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 12 Aug 2012 18:04:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0cVv-0001Ge-FJ; Sun, 12 Aug 2012 18:03:47 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <m.a.young@durham.ac.uk>) id 1T0cVu-0001GW-0s
	for xen-devel@lists.xen.org; Sun, 12 Aug 2012 18:03:46 +0000
Received: from [85.158.143.35:59069] by server-1.bemta-4.messagelabs.com id
	97/B6-07754-100F7205; Sun, 12 Aug 2012 18:03:45 +0000
X-Env-Sender: m.a.young@durham.ac.uk
X-Msg-Ref: server-16.tower-21.messagelabs.com!1344794624!13658104!1
X-Originating-IP: [129.234.248.2]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTI5LjIzNC4yNDguMiA9PiA4OTI0Nw==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23899 invoked from network); 12 Aug 2012 18:03:44 -0000
Received: from hermes2.dur.ac.uk (HELO hermes2.dur.ac.uk) (129.234.248.2)
	by server-16.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 12 Aug 2012 18:03:44 -0000
Received: from smtphost4.dur.ac.uk (smtphost4.dur.ac.uk [129.234.252.4])
	by hermes2.dur.ac.uk (8.13.8/8.13.8) with ESMTP id q7CI3GiO006713;
	Sun, 12 Aug 2012 19:03:21 +0100
Received: from vega-a.dur.ac.uk (vega-a.dur.ac.uk [129.234.250.133])
	by smtphost4.dur.ac.uk (8.13.8/8.13.7) with ESMTP id q7CI34QN031655
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Sun, 12 Aug 2012 19:03:04 +0100
Received: from vega-a.dur.ac.uk (localhost [127.0.0.1])
	by vega-a.dur.ac.uk (8.14.3/8.11.1) with ESMTP id q7CI34ab019586;
	Sun, 12 Aug 2012 19:03:04 +0100
Received: from localhost (dcl0may@localhost)
	by vega-a.dur.ac.uk (8.14.3/8.14.3/Submit) with ESMTP id q7CI34hi019582;
	Sun, 12 Aug 2012 19:03:04 +0100
Date: Sun, 12 Aug 2012 19:03:00 +0100 (BST)
From: M A Young <m.a.young@durham.ac.uk>
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1343205815.18971.43.camel@zakaz.uk.xensource.com>
Message-ID: <alpine.DEB.2.00.1208121853070.7313@vega-a.dur.ac.uk>
References: <alpine.DEB.2.00.1207241956230.14506@vega-c.dur.ac.uk>
	<20120724193604.GB29124@phenom.dumpdata.com>
	<1343205815.18971.43.camel@zakaz.uk.xensource.com>
User-Agent: Alpine 2.00 (DEB 1167 2008-08-23)
MIME-Version: 1.0
Content-Type: MULTIPART/MIXED; BOUNDARY="8323329-138916489-1344794584=:7313"
X-DurhamAcUk-MailScanner: Found to be clean, Found to be clean
X-DurhamAcUk-MailScanner-ID: q7CI3GiO006713
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH] Re:  remove dependency on PyXML from xen?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-138916489-1344794584=:7313
Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed

On Wed, 25 Jul 2012, Ian Campbell wrote:

> On Tue, 2012-07-24 at 20:36 +0100, Konrad Rzeszutek Wilk wrote:
>> On Tue, Jul 24, 2012 at 08:04:30PM +0100, M A Young wrote:
>>> Fedora is keen to stop using PyXML and I have been sent a bug report
>>> https://bugzilla.redhat.com/show_bug.cgi?id=842843 which includes a
>>> patch to remove the use of XMLPrettyPrint in
>>> tools/python/xen/xm/create.py . I am going to try this in the Fedora
>>> build, but I was wondering if it makes sense to do this in the
>>> official xen releases as well.
>>
>> Yes.
>
> Agreed.
>
> According to the bug we've already moved from PyXML to lxml
> (22235:b8cc53d22545 from the looks of it) and simply missed this one use
> of PyXML.

Here is the patch from that bug (trivially) rebased to 4.2.
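The substitution the patch makes can be sketched in isolation like this (a minimal illustrative example using the stdlib's `xml.dom.minidom`; the `doc` built here is a toy document, not the actual object `create.py` holds in its xmldryrun path):

```python
from xml.dom import minidom

# Stand-in for the DOM document that create.py would have built.
doc = minidom.parseString("<domain><name>guest1</name></domain>")

# PyXML's xml.dom.ext.PrettyPrint(doc) writes an indented rendering to
# stdout; the stdlib equivalent is the Document's own toprettyxml() method.
print(doc.toprettyxml(indent="  "))
```

Since `toprettyxml()` exists on any `xml.dom.minidom.Document`, no external dependency is needed, which is the point of the patch.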

 	Michael Young
--8323329-138916489-1344794584=:7313
Content-Type: TEXT/PLAIN; charset=US-ASCII; name=remove.PyXML.patch
Content-Transfer-Encoding: BASE64
Content-Description: 
Content-Disposition: attachment; filename=remove.PyXML.patch

UmVwbGFjZSB0aGUgdXNlIG9mIFhNTFByZXR0eVByaW50IGZyb20gUHlYTUwg
aW4geG0gd2l0aCBzdGRsaWIgZnVuY3Rpb25hbGl0eS4NClRoaXMgd2FzIHJl
cG9ydGVkIGJ5IFRvc2hpbyBFcm5pZSBLdXJhdG9taSBhdA0KaHR0cHM6Ly9i
dWd6aWxsYS5yZWRoYXQuY29tL3Nob3dfYnVnLmNnaT9pZD04NDI4NDMNCg0K
U2lnbmVkLW9mZi1ieTogTWljaGFlbCBZb3VuZyA8bS5hLnlvdW5nQGR1cmhh
bS5hYy51az4NCi0tLSB4ZW4tNC4yLjAvdG9vbHMvcHl0aG9uL3hlbi94bS9j
cmVhdGUucHkub3JpZwkyMDEyLTA1LTEyIDE2OjQwOjQ4LjAwMDAwMDAwMCAr
MDEwMA0KKysrIHhlbi00LjIuMC90b29scy9weXRob24veGVuL3htL2NyZWF0
ZS5weQkyMDEyLTA4LTEyIDE3OjU5OjU2LjQ2NDI3MDcwNyArMDEwMA0KQEAg
LTE1NDMsOCArMTU0Myw3IEBADQogICAgICAgICAgICAgU1hQUHJldHR5UHJp
bnQucHJldHR5cHJpbnQoY29uZmlnKQ0KIA0KICAgICAgICAgaWYgb3B0cy52
YWxzLnhtbGRyeXJ1biBhbmQgc2VydmVyVHlwZSA9PSBTRVJWRVJfWEVOX0FQ
SToNCi0gICAgICAgICAgICBmcm9tIHhtbC5kb20uZXh0IGltcG9ydCBQcmV0
dHlQcmludCBhcyBYTUxQcmV0dHlQcmludA0KLSAgICAgICAgICAgIFhNTFBy
ZXR0eVByaW50KGRvYykNCisgICAgICAgICAgICBwcmludCBkb2MudG9wcmV0
dHl4bWwoKQ0KIA0KICAgICBpZiBvcHRzLnZhbHMuZHJ5cnVuIG9yIG9wdHMu
dmFscy54bWxkcnlydW46DQogICAgICAgICByZXR1cm4gICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIA0K

--8323329-138916489-1344794584=:7313
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--8323329-138916489-1344794584=:7313--


From xen-devel-bounces@lists.xen.org Mon Aug 13 00:13:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 00:13:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0iHL-0003jf-GC; Mon, 13 Aug 2012 00:13:07 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Siva.Palagummi@ca.com>) id 1T0iHJ-0003ja-VE
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 00:13:06 +0000
Received: from [85.158.138.51:58344] by server-9.bemta-3.messagelabs.com id
	8B/73-23952-19648205; Mon, 13 Aug 2012 00:13:05 +0000
X-Env-Sender: Siva.Palagummi@ca.com
X-Msg-Ref: server-16.tower-174.messagelabs.com!1344816781!27803693!1
X-Originating-IP: [74.125.149.142]
X-SpamReason: No, hits=0.0 required=7.0 tests=HTML_MESSAGE
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10217 invoked from network); 13 Aug 2012 00:13:03 -0000
Received: from na3sys009aog129.obsmtp.com (HELO na3sys009aog129.obsmtp.com)
	(74.125.149.142)
	by server-16.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Aug 2012 00:13:03 -0000
Received: from INHYMS191.ca.com ([155.35.46.48]) (using TLSv1) by
	na3sys009aob129.postini.com ([74.125.148.12]) with SMTP
	ID DSNKUChGjGGXlwM4eLq8WliRqjPkLiamlh7w@postini.com;
	Sun, 12 Aug 2012 17:13:03 PDT
Received: from INHYMS171.ca.com (155.35.35.45) by INHYMS191.ca.com
	(155.35.46.48) with Microsoft SMTP Server (TLS) id 14.1.355.2;
	Mon, 13 Aug 2012 05:42:58 +0530
Received: from INHYMS111B.ca.com ([169.254.4.84]) by INHYMS171.ca.com
	([155.35.35.45]) with mapi id 14.01.0355.002;
	Mon, 13 Aug 2012 05:42:58 +0530
From: "Palagummi, Siva" <Siva.Palagummi@ca.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Thread-Topic: [PATCH RFC] xen/netback: Count ring slots properly when larger
	MTU sizes are used
Thread-Index: Ac146Cc2rIyVmg4PT4CbrdsCnnwOsQ==
Date: Mon, 13 Aug 2012 00:12:56 +0000
Message-ID: <7D7C26B1462EB14CB0E7246697A18C1310F8E2@INHYMS111B.ca.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: yes
X-MS-TNEF-Correlator: 
x-originating-ip: [10.134.16.218]
Content-Type: multipart/mixed;
	boundary="_004_7D7C26B1462EB14CB0E7246697A18C1310F8E2INHYMS111Bcacom_"
MIME-Version: 1.0
Subject: [Xen-devel] [PATCH RFC] xen/netback: Count ring slots properly when
 larger MTU sizes are used
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--_004_7D7C26B1462EB14CB0E7246697A18C1310F8E2INHYMS111Bcacom_
Content-Type: multipart/alternative;
	boundary="_000_7D7C26B1462EB14CB0E7246697A18C1310F8E2INHYMS111Bcacom_"

--_000_7D7C26B1462EB14CB0E7246697A18C1310F8E2INHYMS111Bcacom_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable

Hi,

I ran into an issue where the netback driver crashes with BUG_ON(npo.meta_prod > ARRAY_SIZE(netbk->meta)).
It happens on an Intel 10Gbps network when larger MTU values are used. The problem seems to be the way the
ring slots are counted. After applying this patch, things ran fine in my environment. Please validate my
changes.

Thanks
Siva


--_000_7D7C26B1462EB14CB0E7246697A18C1310F8E2INHYMS111Bcacom_
Content-Type: text/html; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable

<html xmlns:v=3D"urn:schemas-microsoft-com:vml" xmlns:o=3D"urn:schemas-micr=
osoft-com:office:office" xmlns:w=3D"urn:schemas-microsoft-com:office:word" =
xmlns:m=3D"http://schemas.microsoft.com/office/2004/12/omml" xmlns=3D"http:=
//www.w3.org/TR/REC-html40">
<head>
<meta http-equiv=3D"Content-Type" content=3D"text/html; charset=3Dus-ascii"=
>
<meta name=3D"Generator" content=3D"Microsoft Word 12 (filtered medium)">
<style>
<!--
 /* Font Definitions */
 @font-face
	{font-family:Calibri;
	panose-1:2 15 5 2 2 2 4 3 2 4;}
 /* Style Definitions */
 p.MsoNormal, li.MsoNormal, div.MsoNormal
	{margin:0in;
	margin-bottom:.0001pt;
	font-size:11.0pt;
	font-family:"Calibri","sans-serif";}
a:link, span.MsoHyperlink
	{mso-style-priority:99;
	color:blue;
	text-decoration:underline;}
a:visited, span.MsoHyperlinkFollowed
	{mso-style-priority:99;
	color:purple;
	text-decoration:underline;}
span.EmailStyle17
	{mso-style-type:personal-compose;
	font-family:"Calibri","sans-serif";
	color:windowtext;}
.MsoChpDefault
	{mso-style-type:export-only;}
@page WordSection1
	{size:8.5in 11.0in;
	margin:1.0in 1.0in 1.0in 1.0in;}
div.WordSection1
	{page:WordSection1;}
-->
</style><!--[if gte mso 9]><xml>
 <o:shapedefaults v:ext=3D"edit" spidmax=3D"1026" />
</xml><![endif]--><!--[if gte mso 9]><xml>
 <o:shapelayout v:ext=3D"edit">
  <o:idmap v:ext=3D"edit" data=3D"1" />
 </o:shapelayout></xml><![endif]-->
</head>
<body lang=3D"EN-US" link=3D"blue" vlink=3D"purple">
<div class=3D"WordSection1">
<p class=3D"MsoNormal">Hi,<o:p></o:p></p>
<p class=3D"MsoNormal"><o:p>&nbsp;</o:p></p>
<p class=3D"MsoNormal">I ran into an issue where netback driver is crashing=
 with <span style=3D"font-size:10.0pt;font-family:&quot;Courier New&quot;">
BUG_ON(npo.meta_prod &gt; ARRAY_SIZE(netbk-&gt;meta)).</span> It is happeni=
ng in Intel 10Gbps network when larger mtu values &nbsp;are used. The probl=
em seems to be&nbsp; the way the slots are counted. After applying this pat=
ch things ran fine in my environment. I request
 to validate my changes.<o:p></o:p></p>
<p class=3D"MsoNormal"><o:p>&nbsp;</o:p></p>
<p class=3D"MsoNormal">Thanks<o:p></o:p></p>
<p class=3D"MsoNormal">Siva<o:p></o:p></p>
<p class=3D"MsoNormal"><o:p>&nbsp;</o:p></p>
</div>
</body>
</html>

--_000_7D7C26B1462EB14CB0E7246697A18C1310F8E2INHYMS111Bcacom_--

--_004_7D7C26B1462EB14CB0E7246697A18C1310F8E2INHYMS111Bcacom_
Content-Type: application/octet-stream; name="netback_slots_counting.patch"
Content-Description: netback_slots_counting.patch
Content-Disposition: attachment; filename="netback_slots_counting.patch";
	size=1637; creation-date="Sun, 12 Aug 2012 22:08:52 GMT";
	modification-date="Mon, 13 Aug 2012 00:02:56 GMT"
Content-Transfer-Encoding: base64

RnJvbTogU2l2YSBQYWxhZ3VtbWkgPFNpdmEuUGFsYWd1bW1pQGNhLmNvbT4NCg0KY291bnQgdmFy
aWFibGUgaW4geGVuX25ldGJrX3J4X2FjdGlvbiBuZWVkIHRvIGJlIGluY3JlbWVudGVkDQpjb3Jy
ZWN0bHkgdG8gdGFrZSBpbnRvIGFjY291bnQgb2YgZXh0cmEgc2xvdHMgcmVxdWlyZWQgd2hlbiBz
a2JfaGVhZGxlbiBpcyANCmdyZWF0ZXIgdGhhbiBQQUdFX1NJWkUgd2hlbiBsYXJnZXIgTVRVIHZh
bHVlcyBhcmUgdXNlZC4gV2l0aG91dCB0aGlzIGNoYW5nZSANCkJVR19PTihucG8ubWV0YV9wcm9k
ID4gQVJSQVlfU0laRShuZXRiay0+bWV0YSkpIGlzIGNhdXNpbmcgDQpuZXRiYWNrIHRocmVhZCB0
byBleGl0Lg0KDQpXaGlsZSBpbnNwZWN0aW5nIHRoZSBjb2RlLCBpdCBsb29rZWQgbGlrZSB3ZSBh
bHNvIG5lZWQgdG8gdGFrZSBjYXJlIG9mDQppbmNyZW1lbnRpbmcgdGhlIGNvdW50IHZhcmlhYmxl
IHdoZW4gZ3NvX3NpemUgaXMgbm9uIHplcm8uDQoNClRoZSBwcm9ibGVtIGlzIHNlZW4gd2l0aCBs
aW51eCAzLjIuMiBrZXJuZWwgb24gSW50ZWwgMTBHYnBzIG5ldHdvcmsuDQoNCg0KU2lnbmVkLW9m
Zi1ieTogU2l2YSBQYWxhZ3VtbWkgPFNpdmEuUGFsYWd1bW1pQGNhLmNvbT4NCi0tLQ0KDQpkaWZm
IC11cHJOIGEvZHJpdmVycy9uZXQveGVuLW5ldGJhY2svbmV0YmFjay5jIGIvZHJpdmVycy9uZXQv
eGVuLW5ldGJhY2svbmV0YmFjay5jDQotLS0gYS9kcml2ZXJzL25ldC94ZW4tbmV0YmFjay9uZXRi
YWNrLmMJMjAxMi0wMS0yNSAxOTozOTozMi4wMDAwMDAwMDAgLTA1MDANCisrKyBiL2RyaXZlcnMv
bmV0L3hlbi1uZXRiYWNrL25ldGJhY2suYwkyMDEyLTA4LTEyIDE1OjUwOjUwLjAwMDAwMDAwMCAt
MDQwMA0KQEAgLTYyMyw2ICs2MjMsMjQgQEAgc3RhdGljIHZvaWQgeGVuX25ldGJrX3J4X2FjdGlv
bihzdHJ1Y3QgeA0KIA0KIAkJY291bnQgKz0gbnJfZnJhZ3MgKyAxOw0KIA0KKwkJLyoNCisJCSAq
IFRoZSBsb2dpYyBoZXJlIHNob3VsZCBiZSBzb21ld2hhdCBzaW1pbGFyIHRvDQorCQkgKiB4ZW5f
bmV0YmtfY291bnRfc2tiX3Nsb3RzLiBJbiBjYXNlIG9mIGxhcmdlciBNVFUgc2l6ZSwNCisJCSAq
IHNrYiBoZWFkIGxlbmd0aCBtYXkgYmUgbW9yZSB0aGFuIGEgUEFHRV9TSVpFLiBXZSBuZWVkIHRv
DQorCQkgKiBjb25zaWRlciByaW5nIHNsb3RzIGNvbnN1bWVkIGJ5IHRoYXQgZGF0YS4gSWYgd2Ug
ZG8gbm90LA0KKwkJICogdGhlbiB3aXRoaW4gdGhpcyBsb29wIGl0c2VsZiB3ZSBlbmQgdXAgY29u
c3VtaW5nIG1vcmUgbWV0YQ0KKwkJICogc2xvdHMgdHVybmluZyB0aGUgQlVHX09OIGJlbG93LiBX
aXRoIHRoaXMgZml4IHdlIG1heSBlbmQgdXANCisJCSAqIGl0ZXJhdGluZyB0aHJvdWdoIHhlbl9u
ZXRia19yeF9hY3Rpb24gbXVsdGlwbGUgdGltZXMNCisJCSAqIGluc3RlYWQgb2YgY3Jhc2hpbmcg
bmV0YmFjayB0aHJlYWQuDQorCQkgKi8NCisNCisNCisJCWNvdW50ICs9IERJVl9ST1VORF9VUChz
a2JfaGVhZGxlbihza2IpLCBQQUdFX1NJWkUpOw0KKw0KKwkJaWYgKHNrYl9zaGluZm8oc2tiKS0+
Z3NvX3NpemUpDQorCQkJY291bnQrKzsNCisNCiAJCV9fc2tiX3F1ZXVlX3RhaWwoJnJ4cSwgc2ti
KTsNCiANCiAJCS8qIEZpbGxlZCB0aGUgYmF0Y2ggcXVldWU/ICovDQo=

--_004_7D7C26B1462EB14CB0E7246697A18C1310F8E2INHYMS111Bcacom_
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--_004_7D7C26B1462EB14CB0E7246697A18C1310F8E2INHYMS111Bcacom_--
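[Archive note: the base64 attachment above decodes to the RFC patch for xen_netbk_rx_action. Its slot arithmetic can be sketched outside the kernel as follows; the function and parameter names are illustrative stand-ins for the skb fields, not kernel identifiers, and a 4 KiB page size is assumed:]

```python
# Hedged sketch of the patch's accounting: one slot per fragment plus
# one, plus one slot per PAGE_SIZE page of linear (head) data, plus one
# extra slot when the packet carries GSO metadata.
PAGE_SIZE = 4096

def div_round_up(n, d):
    # Ceiling division, equivalent to the kernel's DIV_ROUND_UP macro.
    return -(-n // d)

def count_slots(headlen, nr_frags, gso_size):
    count = nr_frags + 1
    # Extra slots for a large-MTU head spanning multiple pages; omitting
    # this is what let count underestimate and trip the BUG_ON.
    count += div_round_up(headlen, PAGE_SIZE)
    if gso_size:
        count += 1  # one extra slot for the GSO extra-info segment
    return count

# A 9000-byte MTU head spans three 4 KiB pages:
print(count_slots(9000, 2, 0))  # -> 6  (2 frags + 1 + 3 head pages)
print(count_slots(1500, 1, 1))  # -> 4  (1 frag + 1 + 1 head page + GSO)
```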


From xen-devel-bounces@lists.xen.org Mon Aug 13 06:12:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 06:12:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0ns9-0002RQ-SF; Mon, 13 Aug 2012 06:11:29 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T0ns8-0002RL-HZ
	for xen-devel@lists.xensource.com; Mon, 13 Aug 2012 06:11:28 +0000
Received: from [85.158.139.83:14411] by server-1.bemta-5.messagelabs.com id
	35/FA-09980-F8A98205; Mon, 13 Aug 2012 06:11:27 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-4.tower-182.messagelabs.com!1344838285!25064065!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg4Njg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30990 invoked from network); 13 Aug 2012 06:11:26 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-4.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Aug 2012 06:11:26 -0000
X-IronPort-AV: E=Sophos;i="4.77,757,1336348800"; d="scan'208";a="13972631"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	13 Aug 2012 06:11:24 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Mon, 13 Aug 2012 07:11:23 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1T0ns3-0008Hk-RF;
	Mon, 13 Aug 2012 06:11:23 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1T0ns3-00029a-K3;
	Mon, 13 Aug 2012 07:11:23 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13596-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 13 Aug 2012 07:11:23 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13596: tolerable FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13596 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13596/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-sedf-pin  5 xen-boot                    fail pass in 13595

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13595
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13595
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13595
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore     fail in 13595 like 13594

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  dc4970af48a0
baseline version:
 xen                  dc4970af48a0

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 07:12:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 07:12:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0oox-00038K-6v; Mon, 13 Aug 2012 07:12:15 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T0oov-00038F-U6
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 07:12:14 +0000
Received: from [85.158.139.83:23227] by server-11.bemta-5.messagelabs.com id
	7E/8B-29296-DC8A8205; Mon, 13 Aug 2012 07:12:13 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-15.tower-182.messagelabs.com!1344841932!27708025!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14687 invoked from network); 13 Aug 2012 07:12:12 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-15.tower-182.messagelabs.com with SMTP;
	13 Aug 2012 07:12:12 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 13 Aug 2012 08:12:08 +0100
Message-Id: <5028C34C02000078000945FC@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Mon, 13 Aug 2012 08:05:16 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>
References: <1344243491.11339.11.camel@zakaz.uk.xensource.com>
In-Reply-To: <1344243491.11339.11.camel@zakaz.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Keir Fraser <keir@xen.org>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] 4.2 TODO / Release Plan
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 06.08.12 at 10:58, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> hypervisor, nice to have:

- fix S3 regression(s?) reported by Ben Guthro
- address PoD problems with early host side accesses to guest
  address space (draft patch for 4.0.x exists, needs to be ported
  over to -unstable, which I expect to get to today)
- fix high change rate to CMOS RTC periodic interrupt causing
  guest wall clock time to lag (possible fix outlined, needs to be
  put in patch form and thoroughly reviewed/tested for unwanted
  side effects)

(some of these may need to be considered blockers)

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 07:17:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 07:17:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0otZ-0003FM-Tc; Mon, 13 Aug 2012 07:17:01 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <james.harper@bendigoit.com.au>) id 1T0otZ-0003FF-59
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 07:17:01 +0000
Received: from [85.158.143.35:43915] by server-3.bemta-4.messagelabs.com id
	7C/2C-09529-CE9A8205; Mon, 13 Aug 2012 07:17:00 +0000
X-Env-Sender: james.harper@bendigoit.com.au
X-Msg-Ref: server-15.tower-21.messagelabs.com!1344842208!14444042!1
X-Originating-IP: [203.16.207.99]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19241 invoked from network); 13 Aug 2012 07:16:51 -0000
Received: from mail.bendigoit.com.au (HELO smtp2.bendigoit.com.au)
	(203.16.207.99)
	by server-15.tower-21.messagelabs.com with AES256-SHA encrypted SMTP;
	13 Aug 2012 07:16:51 -0000
Received: from trantor.int.sbss.com.au ([192.168.200.206]
	helo=mail.bendigoit.com.au)
	by smtp2.bendigoit.com.au with esmtp (Exim 4.72)
	(envelope-from <james.harper@bendigoit.com.au>) id 1T0otK-00045N-DI
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 17:16:46 +1000
Received: from BITCOM1.int.sbss.com.au ([192.168.200.237]) by
	mail.bendigoit.com.au with Microsoft SMTPSVC(6.0.3790.4675); 
	Mon, 13 Aug 2012 17:16:46 +1000
Received: from BITCOM1.int.sbss.com.au ([fe80::a5ca:4fd3:14f:ad5d]) by
	BITCOM1.int.sbss.com.au ([fe80::a5ca:4fd3:14f:ad5d%12]) with mapi id
	14.01.0355.002; Mon, 13 Aug 2012 17:16:45 +1000
From: James Harper <james.harper@bendigoit.com.au>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Thread-Topic: blkback and bcache
Thread-Index: Ac15GvPQ4p9LsRz2TPWwWFq0dSWp1g==
Date: Mon, 13 Aug 2012 07:16:44 +0000
Message-ID: <6035A0D088A63A46850C3988ED045A4B299F67FA@BITCOM1.int.sbss.com.au>
Accept-Language: en-AU, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [192.168.200.58]
x-tm-as-product-ver: SMEX-10.2.0.1135-7.000.1014-19108.005
x-tm-as-result: No--35.271900-0.000000-31
x-tm-as-user-approved-sender: Yes
x-tm-as-user-blocked-sender: No
MIME-Version: 1.0
X-OriginalArrivalTime: 13 Aug 2012 07:16:46.0214 (UTC)
	FILETIME=[990C2A60:01CD7923]
X-Really-From-Bendigo-IT: magichashvalue
Subject: [Xen-devel] blkback and bcache
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I'm having trouble using blkback under gplpv when the disk is on top of a bcache device. My devices are layered as follows:

/dev/sd[ab]
md0 (RAID1)
bcache
lvm

It seems that bcache presents a 4K sector size to Linux, which is then reflected by lvm and in turn blkback.

Obviously GPLPV isn't handling 4K sectors correctly... any suggestions as to what I might need to do to make this work properly? As a last resort I should be able to fake 512 byte sectors to Windows but would prefer that Windows knew it was dealing with a device with 4K sectors underneath.
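
A quick way to confirm which layer first advertises the 4K logical size is to read each device's queue limits out of sysfs. A minimal sketch (the helper name and the bcache0/dm-0 device names are illustrative; paths follow the standard Linux sysfs block layout):

```python
from pathlib import Path

def queue_limits(dev, sysfs="/sys/block"):
    """Return the logical/physical block sizes a block device advertises.

    blkback exports the logical size to the frontend, so whatever value
    appears here for the backing device is what the PV drivers will see.
    """
    q = Path(sysfs) / dev / "queue"
    return {name: int((q / name).read_text())
            for name in ("logical_block_size", "physical_block_size")}

# Walk the stack bottom-up to see where 4096 first appears, e.g.:
# for dev in ("sda", "md0", "bcache0", "dm-0"):
#     print(dev, queue_limits(dev))
```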

Thanks

James

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 07:35:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 07:35:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0pBK-0003fq-6J; Mon, 13 Aug 2012 07:35:22 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <joseph.glanville@orionvm.com.au>) id 1T0pBI-0003fk-LR
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 07:35:20 +0000
Received: from [85.158.143.35:25009] by server-3.bemta-4.messagelabs.com id
	42/97-09529-73EA8205; Mon, 13 Aug 2012 07:35:19 +0000
X-Env-Sender: joseph.glanville@orionvm.com.au
X-Msg-Ref: server-8.tower-21.messagelabs.com!1344843316!13076040!1
X-Originating-IP: [209.85.214.173]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
  RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2774 invoked from network); 13 Aug 2012 07:35:18 -0000
Received: from mail-ob0-f173.google.com (HELO mail-ob0-f173.google.com)
	(209.85.214.173)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Aug 2012 07:35:18 -0000
Received: by obbta14 with SMTP id ta14so8494870obb.32
	for <xen-devel@lists.xen.org>; Mon, 13 Aug 2012 00:35:16 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=mime-version:x-originating-ip:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type
	:content-transfer-encoding:x-gm-message-state;
	bh=rS2aRLFq+XkqHoLgOaP+sIGSbCn56C5F1hbe1eyTSzo=;
	b=ZM6WBIJ9bFt8IUiD7Hl4b8KrWUQWLjjlEKKfA/EzWfAYIfrlshFBNVdpToSGKbC2g+
	blACJ4Wa0vtQcu+wA9Yl+MP+c7K2mZGxHx9fwY2hRlvj3TPzrJvfbGBKEFxPfp/0m/vg
	v4mgDaoo4i9CjAW842L7/XnZW5AV+1W75QRwxA5DArbGX6ddBIhZeGoIwoKZlrQ6Pwhe
	/2q+ZfI/U2gb7oTOvzXrhLv083Gdu66xoHqdZGPWiMySu8w+vPZi1gV1nbN0FzhqKH9L
	ewFhzTeGGRA2yaDqAIARVp/H5DuFJ7i9hid1B0nNjraT+7+LWNvfNA3LK9V7sw1Z1sDO
	8fag==
MIME-Version: 1.0
Received: by 10.182.14.36 with SMTP id m4mr9882296obc.71.1344843316393; Mon,
	13 Aug 2012 00:35:16 -0700 (PDT)
Received: by 10.182.80.200 with HTTP; Mon, 13 Aug 2012 00:35:16 -0700 (PDT)
X-Originating-IP: [121.44.126.22]
In-Reply-To: <6035A0D088A63A46850C3988ED045A4B299F67FA@BITCOM1.int.sbss.com.au>
References: <6035A0D088A63A46850C3988ED045A4B299F67FA@BITCOM1.int.sbss.com.au>
Date: Mon, 13 Aug 2012 17:35:16 +1000
Message-ID: <CAOzFzEhna3CaBE28aHVX_ZoNLDEa6AhArHPcB9240Ni4jh9PYA@mail.gmail.com>
From: Joseph Glanville <joseph.glanville@orionvm.com.au>
To: James Harper <james.harper@bendigoit.com.au>
X-Gm-Message-State: ALoCoQmSA0dVA1HjaH97LrpTBZqPXS+9mVcKFo/lddTRKT76R+FXWDmKXge0LcB0/LupHcOPZFBc
Cc: linux-bcache@vger.kernel.org, Kent Overstreet <koverstreet@google.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] blkback and bcache
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 13 August 2012 17:16, James Harper <james.harper@bendigoit.com.au> wrote:
> I'm having trouble using blkback under gplpv when the disk is on top of a bcache device. My devices are layered as follows:
>
> /dev/sd[ab]
> md0 (RAID1)
> bcache
> lvm
>
> It seems that bcache presents a 4K sector size to Linux, which is then reflected by lvm and in turn blkback.
>
> Obviously GPLPV isn't handling 4K sectors correctly... any suggestions as to what I might need to do to make this work properly? As a last resort I should be able to fake 512 byte sectors to Windows but would prefer that Windows knew it was dealing with a device with 4K sectors underneath.
>
> Thanks
>
> James
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

This could very well be the issue I was having. I haven't been able to
pull the latest bcache code for a few days (repo down?), but if I can
help debug, let me know.

-- 
CTO | Orion Virtualisation Solutions | www.orionvm.com.au
Phone: 1300 56 99 52 | Mobile: 0428 754 846

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 07:40:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 07:40:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0pFa-0003vG-56; Mon, 13 Aug 2012 07:39:46 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <daniel.kiper@oracle.com>) id 1T0pFY-0003ur-Uj
	for xen-devel@lists.xensource.com; Mon, 13 Aug 2012 07:39:45 +0000
X-Env-Sender: daniel.kiper@oracle.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1344843577!8780420!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDY5Mzc0MA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23673 invoked from network); 13 Aug 2012 07:39:38 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-16.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 13 Aug 2012 07:39:38 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7D7dRJR005158
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 13 Aug 2012 07:39:28 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7D7dQF7011896
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 13 Aug 2012 07:39:27 GMT
Received: from abhmt105.oracle.com (abhmt105.oracle.com [141.146.116.57])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7D7dO8h017142; Mon, 13 Aug 2012 02:39:24 -0500
Received: from host-192-168-1-59.local.net-space.pl (/89.174.63.77)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 13 Aug 2012 00:39:24 -0700
Date: Mon, 13 Aug 2012 09:37:54 +0200
From: Daniel Kiper <daniel.kiper@oracle.com>
To: Dave Anderson <anderson@redhat.com>
Message-ID: <20120813073754.GA2482@host-192-168-1-59.local.net-space.pl>
References: <20120810132357.GA2576@host-192-168-1-59.local.net-space.pl>
	<1568927206.9177777.1344625917898.JavaMail.root@redhat.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1568927206.9177777.1344625917898.JavaMail.root@redhat.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: olaf@aepfle.de, xen-devel@lists.xensource.com,
	konrad wilk <konrad.wilk@oracle.com>,
	andrew cooper3 <andrew.cooper3@citrix.com>, ptesarik@suse.cz,
	jbeulich@suse.com, kexec@lists.infradead.org, crash-utility@redhat.com
Subject: Re: [Xen-devel] [PATCH v2 0/6] crash: Bundle of fixes for Xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Aug 10, 2012 at 03:11:57PM -0400, Dave Anderson wrote:
>
>
> ----- Original Message -----
> > Hi,
> >
> > It looks like Xen support for crash has not been maintained
> > since 2009. I am trying to fix this. Here is a bundle of fixes:
> >   - xen: Always calculate max_cpus value,
> >   - xen: Read only crash notes for onlined CPUs,
> >   - x86/xen: Read variables from dynamically allocated per_cpu data,
> >   - xen: Get idle data from alternative source,
> >   - xen: Read data correctly from dynamically allocated console ring, too
> >     (fixed in this release),
> >   - xen: Add support for 3 level P2M tree (new patch in this release).
> >
> > Daniel
>
> Hi Daniel,
>
> The original 5 updates specific to the Xen hypervisor look OK,
> but new patch 6/6 is going to take some studying/testing to
> alleviate my backwards-compatibility worries.  Can I ask whether
> you fully tested it with older 2-level P2M tree kernels?

As you asked earlier, I have tested all patches on Xen 3.1 and 4.1
with Linux kernels 2.6.18 (P2M array), 2.6.36 (2-level P2M tree)
and 2.6.39 (3-level P2M tree). Additionally, some internal tests
were done by others in my company.

Daniel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 07:55:19 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 07:55:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0pUE-0004C8-Kq; Mon, 13 Aug 2012 07:54:54 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T0pUD-0004C3-0E
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 07:54:53 +0000
Received: from [85.158.143.35:38800] by server-1.bemta-4.messagelabs.com id
	1B/4C-07754-CC2B8205; Mon, 13 Aug 2012 07:54:52 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1344844491!13717847!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30972 invoked from network); 13 Aug 2012 07:54:51 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-3.tower-21.messagelabs.com with SMTP;
	13 Aug 2012 07:54:51 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 13 Aug 2012 08:54:52 +0100
Message-Id: <5028CEE70200007800094623@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Mon, 13 Aug 2012 08:54:47 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: <konrad@darnok.org>
References: <1343745804-28028-1-git-send-email-konrad.wilk@oracle.com>
	<20120801155040.GB15812@phenom.dumpdata.com>
	<501A5EF7020000780009219C@nat28.tlf.novell.com>
	<20120802141710.GF16749@phenom.dumpdata.com>
	<20120802160403.02de484e@mantra.us.oracle.com>
	<20120803133001.GA13750@andromeda.dapyr.net>
	<501BF44602000078000928B4@nat28.tlf.novell.com>
	<CAPbh3rsXaqQS9WQQmJ2uQ46LZdyFzkbSodUabGDAyFS+qTEwUg@mail.gmail.com>
In-Reply-To: <CAPbh3rsXaqQS9WQQmJ2uQ46LZdyFzkbSodUabGDAyFS+qTEwUg@mail.gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Boot PV guests with more than 128GB (v2)
 for 3.7
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 03.08.12 at 16:46, Konrad Rzeszutek Wilk <konrad@darnok.org> wrote:
> Didn't get to it yet. Sorry for top posting. If you have a patch ready I
> can test it on Monday - travelling now.

So here's what I was thinking of (compile tested only).

Jan

--- a/tools/libxc/xc_dom_x86.c
+++ b/tools/libxc/xc_dom_x86.c
@@ -241,7 +241,7 @@ static int setup_pgtables_x86_32_pae(str
     l3_pgentry_64_t *l3tab;
     l2_pgentry_64_t *l2tab = NULL;
     l1_pgentry_64_t *l1tab = NULL;
-    unsigned long l3off, l2off, l1off;
+    unsigned long l3off, l2off = 0, l1off;
     xen_vaddr_t addr;
     xen_pfn_t pgpfn;
     xen_pfn_t l3mfn = xc_dom_p2m_guest(dom, l3pfn);
@@ -283,8 +283,6 @@ static int setup_pgtables_x86_32_pae(str
             l2off = l2_table_offset_pae(addr);
             l2tab[l2off] =
                 pfn_to_paddr(xc_dom_p2m_guest(dom, l1pfn)) | L2_PROT;
-            if ( l2off == (L2_PAGETABLE_ENTRIES_PAE - 1) )
-                l2tab = NULL;
             l1pfn++;
         }
 
@@ -296,8 +294,13 @@ static int setup_pgtables_x86_32_pae(str
         if ( (addr >= dom->pgtables_seg.vstart) &&
              (addr < dom->pgtables_seg.vend) )
             l1tab[l1off] &= ~_PAGE_RW; /* page tables are r/o */
+
         if ( l1off == (L1_PAGETABLE_ENTRIES_PAE - 1) )
+        {
             l1tab = NULL;
+            if ( l2off == (L2_PAGETABLE_ENTRIES_PAE - 1) )
+                l2tab = NULL;
+        }
     }
 
     if ( dom->virt_pgtab_end <= 0xc0000000 )
@@ -340,7 +343,7 @@ static int setup_pgtables_x86_64(struct 
     l3_pgentry_64_t *l3tab = NULL;
     l2_pgentry_64_t *l2tab = NULL;
     l1_pgentry_64_t *l1tab = NULL;
-    uint64_t l4off, l3off, l2off, l1off;
+    uint64_t l4off, l3off = 0, l2off = 0, l1off;
     uint64_t addr;
     xen_pfn_t pgpfn;
 
@@ -364,8 +367,6 @@ static int setup_pgtables_x86_64(struct 
             l3off = l3_table_offset_x86_64(addr);
             l3tab[l3off] =
                 pfn_to_paddr(xc_dom_p2m_guest(dom, l2pfn)) | L3_PROT;
-            if ( l3off == (L3_PAGETABLE_ENTRIES_X86_64 - 1) )
-                l3tab = NULL;
             l2pfn++;
         }
 
@@ -376,8 +377,6 @@ static int setup_pgtables_x86_64(struct 
             l2off = l2_table_offset_x86_64(addr);
             l2tab[l2off] =
                 pfn_to_paddr(xc_dom_p2m_guest(dom, l1pfn)) | L2_PROT;
-            if ( l2off == (L2_PAGETABLE_ENTRIES_X86_64 - 1) )
-                l2tab = NULL;
             l1pfn++;
         }
 
@@ -389,8 +388,17 @@ static int setup_pgtables_x86_64(struct 
         if ( (addr >= dom->pgtables_seg.vstart) && 
              (addr < dom->pgtables_seg.vend) )
             l1tab[l1off] &= ~_PAGE_RW; /* page tables are r/o */
+
         if ( l1off == (L1_PAGETABLE_ENTRIES_X86_64 - 1) )
+        {
             l1tab = NULL;
+            if ( l2off == (L2_PAGETABLE_ENTRIES_X86_64 - 1) )
+            {
+                l2tab = NULL;
+                if ( l3off == (L3_PAGETABLE_ENTRIES_X86_64 - 1) )
+                    l3tab = NULL;
+            }
+        }
     }
     return 0;
 }



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 07:58:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 07:58:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0pXD-0004IE-7a; Mon, 13 Aug 2012 07:57:59 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zhenzhong.duan@oracle.com>) id 1T0pXC-0004I8-Cq
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 07:57:58 +0000
Received: from [85.158.138.51:53550] by server-7.bemta-3.messagelabs.com id
	CB/14-01906-583B8205; Mon, 13 Aug 2012 07:57:57 +0000
X-Env-Sender: zhenzhong.duan@oracle.com
X-Msg-Ref: server-6.tower-174.messagelabs.com!1344844675!19913522!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDY5Mzc0MA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8177 invoked from network); 13 Aug 2012 07:57:56 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-6.tower-174.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 13 Aug 2012 07:57:56 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7D7vqId020728
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 13 Aug 2012 07:57:53 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7D7vqkE003412
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 13 Aug 2012 07:57:52 GMT
Received: from abhmt117.oracle.com (abhmt117.oracle.com [141.146.116.69])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7D7vpsj012027; Mon, 13 Aug 2012 02:57:51 -0500
Received: from [10.191.15.64] (/10.191.15.64)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 13 Aug 2012 00:57:51 -0700
Message-ID: <5028B3AB.7060705@oracle.com>
Date: Mon, 13 Aug 2012 15:58:35 +0800
From: "zhenzhong.duan" <zhenzhong.duan@oracle.com>
Organization: oracle
User-Agent: Mozilla/5.0 (Windows NT 5.1;
	rv:12.0) Gecko/20120428 Thunderbird/12.0.1
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <5020C24A.3060604@oracle.com>
	<5020F003020000780009322C@nat28.tlf.novell.com>
	<502235E8.9040309@oracle.com>
	<50229B840200007800093A73@nat28.tlf.novell.com>
	<5023860E.7080908@oracle.com>
	<5023AE960200007800093DE8@nat28.tlf.novell.com>
	<502490A7.7020603@oracle.com>
	<502535280200007800094322@nat28.tlf.novell.com>
In-Reply-To: <502535280200007800094322@nat28.tlf.novell.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Satish Kantheti <satish.kantheti@oracle.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Feng Jin <joe.jin@oracle.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] kernel bootup slow issue on ovm3.1.1
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: zhenzhong.duan@oracle.com
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset="utf-8"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

CCing satish, who first found this issue.

On 2012-08-10 22:22, Jan Beulich wrote:
>>>> On 10.08.12 at 06:40, "zhenzhong.duan"<zhenzhong.duan@oracle.com>  wrote:
>> On 2012-08-09 18:35, Jan Beulich wrote:
>>>>>> On 09.08.12 at 11:42, "zhenzhong.duan"<zhenzhong.duan@oracle.com>   wrote:
>>>> On 2012-08-08 23:01, Jan Beulich wrote:
>>>>>>>> On 08.08.12 at 11:48, "zhenzhong.duan"<zhenzhong.duan@oracle.com>    wrote:
>>>>>> On 2012-08-07 16:37, Jan Beulich wrote:
>>>>>> Some spin at stop_machine after finishing their job.
>>>>> And here you'd need to find out what they're waiting for,
>>>>> and what those CPUs are doing.
>>>> They are waiting for the vcpu calling generic_set_all, and those spin at
>>>> set_atomicity_lock.
>>>> In fact, all are waiting for generic_set_all.
>>> I think we're moving in circles - what is the vCPU currently in
>>> generic_set_all() then doing?
>> Adding some debug printing: generic_set_all->prepare_set->write_cr0 took much
>> time; all else is quick. set_atomicity_lock serialized this process between
>> cpus, making it worse.
>> One iteration:
>> MTRR: CPU 2
>> prepare_set: before read_cr0
>> prepare_set: before write_cr0 ------*blocks here*
>
> Yeah, that CR0 write disables the caches, and that's pretty
> expensive on EPT (not sure why NPT doesn't use/need the
> same hook) when the guest has any active MMIO regions:
> vmx_set_uc_mode(), when HAP is enabled, calls
> ept_change_entry_emt_with_range(), which is a walk through
> the entire guest page tables (i.e. scales with guest size, or, to
> be precise, with the highest populated GFN).
>
> Going back to your original mail, I wonder however why this
> gets done at all. You said it got there via
>
> mtrr_aps_init()
>   \->  set_mtrr()
>       \->  mtrr_work_handler()
>
> yet this isn't done unconditionally - see the comment before
> checking mtrr_aps_delayed_init. Can you find out where the
> obviously necessary call(s) to set_mtrr_aps_delayed_init()
> come(s) from?
At the bootup stage, set_mtrr_aps_delayed_init is called by native_smp_prepare_cpus.
mtrr_aps_delayed_init is always set to true for Intel processors in upstream code.

>>>>>     Does
>>>>> your hardware support Pause-Loop-Exiting (or the AMD
>>>>> equivalent, don't recall their term right now)?
>>>> I have no access to a serial line; could I get the info by a command?
>>> "xl dmesg" run early enough (i.e. before the log buffer wraps).
>> Below is the xl dmesg result for your reference. Thanks.
>> ...
>> (XEN) VMX: Supported advanced features:
>> (XEN)  - APIC MMIO access virtualisation
>> (XEN)  - APIC TPR shadow
>> (XEN)  - Extended Page Tables (EPT)
>> (XEN)  - Virtual-Processor Identifiers (VPID)
>> (XEN)  - Virtual NMI
>> (XEN)  - MSR direct-access bitmap
>> (XEN)  - Unrestricted Guest
>
> I'm sorry, I had expected this to be printed here, but it isn't.
> Hence I can't tell for sure whether PLE is implemented there,
> but given how long it has been available it ought to be when
> "Unrestricted Guest" is there (which iirc got introduced much
> later).
 From the VMCS dump, it looks like PAUSE exiting is 0 and PLE is 1.
(XEN) *** Control State ***
(XEN) PinBased=0000003f CPUBased=b6a065fe SecondaryExec=000004eb

zduan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

cm9tIFZNQ1MgZHVtcCwgbG9va3MgUEFVU0UgZXhpdGluZyBpcyAwLCBQTEUgaXMgMS4KKFhFTikg
KioqIENvbnRyb2wgU3RhdGUgKioqCihYRU4pIFBpbkJhc2VkPTAwMDAwMDNmIENQVUJhc2VkPWI2
YTA2NWZlIFNlY29uZGFyeUV4ZWM9MDAwMDA0ZWIKCnpkdWFuCgpfX19fX19fX19fX19fX19fX19f
X19fX19fX19fX19fX19fX19fX19fX19fX19fXwpYZW4tZGV2ZWwgbWFpbGluZyBsaXN0Clhlbi1k
ZXZlbEBsaXN0cy54ZW4ub3JnCmh0dHA6Ly9saXN0cy54ZW4ub3JnL3hlbi1kZXZlbAo=

From xen-devel-bounces@lists.xen.org Mon Aug 13 08:09:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 08:09:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0piT-0004wh-7z; Mon, 13 Aug 2012 08:09:37 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <cmkim@core.kaist.ac.kr>) id 1T0piR-0004wZ-Kz
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 08:09:35 +0000
Received: from [85.158.143.35:61759] by server-1.bemta-4.messagelabs.com id
	44/16-07754-F36B8205; Mon, 13 Aug 2012 08:09:35 +0000
X-Env-Sender: cmkim@core.kaist.ac.kr
X-Msg-Ref: server-15.tower-21.messagelabs.com!1344845317!14454399!1
X-Originating-IP: [143.248.147.118]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13862 invoked from network); 13 Aug 2012 08:08:40 -0000
Received: from core.kaist.ac.kr (HELO core.kaist.ac.kr) (143.248.147.118)
	by server-15.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 13 Aug 2012 08:08:40 -0000
Received: from [143.248.165.115] (az.kaist.ac.kr [143.248.165.115])
	by core.kaist.ac.kr (8.14.4/8.14.4) with ESMTP id q7D8AKMg032462
	for <xen-devel@lists.xen.org>; Mon, 13 Aug 2012 17:10:21 +0900
Message-ID: <5028B603.9010306@core.kaist.ac.kr>
Date: Mon, 13 Aug 2012 17:08:35 +0900
From: Chulmin Kim <cmkim@core.kaist.ac.kr>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:14.0) Gecko/20120713 Thunderbird/14.0
MIME-Version: 1.0
To: xen-devel@lists.xen.org
References: <008601cd5f45$6eff7530$4cfe5f90$@core.kaist.ac.kr>
In-Reply-To: <008601cd5f45$6eff7530$4cfe5f90$@core.kaist.ac.kr>
Subject: Re: [Xen-devel] maximum memory size allocated by _xmalloc
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset="utf-8"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Though the issue was posted one month ago,
I am following up with my own experience of it.

Unfortunately, it was not a matter of the requested memory size or
the xmalloc function.
The problem was due to the free memory scrubbing.

I had placed my xmalloc call in the middle of Xen's bootup code, after
the free memory scrubbing function.
But that placement was wrong.

After I relocated the call to before the scrubbing function,
it worked perfectly.

Thanks for your help!


On 2012-07-11 6:13 PM, Chulmin Kim wrote:
> Hi all,
>
> I'm currently inserting my own code to adjust the several existing memory
> ballooning works.
>
> To accomplish it, I manage some kind of statistics in Xen memory area.
>
> Using _xmalloc, I've allocated certain size of memory chunk for the data
> structure. ( I varied it from 10kb to 24 MB.)
>
> When the size is equal to 24 MB, xen won't boot anymore.  (stuck during the
> xmalloc, according to my debugging. _xmalloc returns NULL.)
> There was no problem when the size is below 12MB.
>
> Is there any limitation such as max memory size for _xmalloc?
>
> I suspected xen heap size, but, it is no longer adjustable. Right?
>
> I hope somebody can give me a clue.  Thanks.
>
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 08:14:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 08:14:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0pmw-0005Fk-Hu; Mon, 13 Aug 2012 08:14:14 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <daniel.kiper@oracle.com>) id 1T0pmv-0005FD-08
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 08:14:13 +0000
X-Env-Sender: daniel.kiper@oracle.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1344845644!3774005!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDY5Mzc0MA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28945 invoked from network); 13 Aug 2012 08:14:05 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-10.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 13 Aug 2012 08:14:05 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7D8DvVA002721
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 13 Aug 2012 08:13:58 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7D8DtKO003116
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 13 Aug 2012 08:13:55 GMT
Received: from abhmt117.oracle.com (abhmt117.oracle.com [141.146.116.69])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7D8DssF020781; Mon, 13 Aug 2012 03:13:54 -0500
Received: from host-192-168-1-59.local.net-space.pl (/89.174.63.77)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 13 Aug 2012 01:13:54 -0700
Date: Mon, 13 Aug 2012 10:12:35 +0200
From: Daniel Kiper <daniel.kiper@oracle.com>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20120813081235.GB2482@host-192-168-1-59.local.net-space.pl>
References: <20120810132513.GB2576@host-192-168-1-59.local.net-space.pl>
	<502539410200007800094355@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <502539410200007800094355@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: olaf@aepfle.de, konrad.wilk@oracle.com, andrew.cooper3@citrix.com,
	kexec@lists.infradead.org, ptesarik@suse.cz,
	xen-devel <xen-devel@lists.xen.org>, anderson@redhat.com,
	crash-utility@redhat.com
Subject: Re: [Xen-devel] [PATCH v2 1/6] xen: Always calculate max_cpus value
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Aug 10, 2012 at 03:39:29PM +0100, Jan Beulich wrote:
> >>> On 10.08.12 at 15:25, Daniel Kiper <daniel.kiper@oracle.com> wrote:
> > max_cpus is not available since 20374 changeset (Miscellaneous data
> > placement adjustments). It was moved to __initdata section. This section
> > is freed after Xen initialization. Assume that max_cpus is always
> > equal to XEN_HYPER_SIZE(cpumask_t) * 8.
>
> Just to repeat my response to the original version of this patch,
> which I don't recall having got any answer from you:
>
> "Using nr_cpu_ids, when available, would seem a better fit. And
>  I don't see why, on dumps from old hypervisors, you wouldn't
>  want to continue using max_cpus. Oh, wait, I see - you would
>  have to be able to tell whether it actually sits in .init.data, which
>  might not be straightforward."

As I promised earlier, I thought about that. The simplest way
to do it is to check which section max_cpus resides in. There
is some instrumentation in the crash tool to do that, but sadly
it does not differentiate between the .data and .init.data sections.
I could write something from scratch to do that, but I think
it would cost more than the potential gains. Let's leave it
as is for now; the current approximation is not so bad.
However, if an opportunity appears (some function that can
differentiate between the .data and .init.data sections) then
I could fix this.

Sorry, I should have attached the above description to the original message.

Daniel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 08:18:19 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 08:18:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0pqZ-0005fG-Vo; Mon, 13 Aug 2012 08:17:59 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <james.harper@bendigoit.com.au>) id 1T0pqZ-0005f2-4P
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 08:17:59 +0000
Received: from [85.158.143.99:58181] by server-1.bemta-4.messagelabs.com id
	C7/37-07754-638B8205; Mon, 13 Aug 2012 08:17:58 +0000
X-Env-Sender: james.harper@bendigoit.com.au
X-Msg-Ref: server-14.tower-216.messagelabs.com!1344845874!18242225!1
X-Originating-IP: [203.16.224.4]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32423 invoked from network); 13 Aug 2012 08:17:57 -0000
Received: from smtp1.bendigoit.com.au (HELO smtp1.bendigoit.com.au)
	(203.16.224.4)
	by server-14.tower-216.messagelabs.com with AES256-SHA encrypted SMTP;
	13 Aug 2012 08:17:57 -0000
Received: from mail.bendigoit.com.au ([203.16.207.99])
	by smtp1.bendigoit.com.au with esmtp (Exim 4.69)
	(envelope-from <james.harper@bendigoit.com.au>)
	id 1T0pqM-0002rY-Ut; Mon, 13 Aug 2012 18:17:47 +1000
Received: from BITCOM1.int.sbss.com.au ([192.168.200.237]) by
	mail.bendigoit.com.au with Microsoft SMTPSVC(6.0.3790.4675); 
	Mon, 13 Aug 2012 18:17:45 +1000
Received: from BITCOM1.int.sbss.com.au ([fe80::a5ca:4fd3:14f:ad5d]) by
	BITCOM1.int.sbss.com.au ([fe80::a5ca:4fd3:14f:ad5d%12]) with mapi id
	14.01.0355.002; Mon, 13 Aug 2012 18:17:44 +1000
From: James Harper <james.harper@bendigoit.com.au>
To: Joseph Glanville <joseph.glanville@orionvm.com.au>
Thread-Topic: [Xen-devel] blkback and bcache
Thread-Index: Ac15GvPQ4p9LsRz2TPWwWFq0dSWp1v//btIA//9NMFA=
Date: Mon, 13 Aug 2012 08:17:43 +0000
Message-ID: <6035A0D088A63A46850C3988ED045A4B299F6A9B@BITCOM1.int.sbss.com.au>
References: <6035A0D088A63A46850C3988ED045A4B299F67FA@BITCOM1.int.sbss.com.au>
	<CAOzFzEhna3CaBE28aHVX_ZoNLDEa6AhArHPcB9240Ni4jh9PYA@mail.gmail.com>
In-Reply-To: <CAOzFzEhna3CaBE28aHVX_ZoNLDEa6AhArHPcB9240Ni4jh9PYA@mail.gmail.com>
Accept-Language: en-AU, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [172.16.3.132]
x-tm-as-product-ver: SMEX-10.2.0.1135-7.000.1014-19108.005
x-tm-as-result: No--24.524500-0.000000-31
x-tm-as-user-approved-sender: Yes
x-tm-as-user-blocked-sender: No
MIME-Version: 1.0
X-OriginalArrivalTime: 13 Aug 2012 08:17:45.0007 (UTC)
	FILETIME=[1DDBABF0:01CD792C]
X-Really-From-Bendigo-IT: magichashvalue
Cc: "linux-bcache@vger.kernel.org" <linux-bcache@vger.kernel.org>,
	Kent Overstreet <koverstreet@google.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] blkback and bcache
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> 
> This could very well be the issue I was having, I haven't been able to pull the
> latest bcache code for a few days (repo down?) but if I can help debug let me
> know.
> 

Is it Windows or Linux giving you problems? I've only tested Windows so far.

James

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 08:28:39 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 08:28:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0q0f-0006B7-7e; Mon, 13 Aug 2012 08:28:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T0q0d-0006B0-Uc
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 08:28:24 +0000
Received: from [85.158.143.35:24334] by server-1.bemta-4.messagelabs.com id
	85/11-07754-7AAB8205; Mon, 13 Aug 2012 08:28:23 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1344846480!15026091!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16487 invoked from network); 13 Aug 2012 08:28:00 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-13.tower-21.messagelabs.com with SMTP;
	13 Aug 2012 08:28:00 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 13 Aug 2012 09:28:07 +0100
Message-Id: <5028D6AC0200007800094651@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Mon, 13 Aug 2012 09:27:56 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Siva Palagummi" <Siva.Palagummi@ca.com>
References: <7D7C26B1462EB14CB0E7246697A18C1310F8E2@INHYMS111B.ca.com>
In-Reply-To: <7D7C26B1462EB14CB0E7246697A18C1310F8E2@INHYMS111B.ca.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH RFC] xen/netback: Count ring slots properly
 when larger MTU sizes are used
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 13.08.12 at 02:12, "Palagummi, Siva" <Siva.Palagummi@ca.com> wrote:
>--- a/drivers/net/xen-netback/netback.c	2012-01-25 19:39:32.000000000 -0500
>+++ b/drivers/net/xen-netback/netback.c	2012-08-12 15:50:50.000000000 -0400
>@@ -623,6 +623,24 @@ static void xen_netbk_rx_action(struct x
> 
> 		count += nr_frags + 1;
> 
>+		/*
>+		 * The logic here should be somewhat similar to
>+		 * xen_netbk_count_skb_slots. In case of larger MTU size,

Is there a reason why you can't simply use that function then?
Afaict it's being used on the very same skb before it gets put on
rx_queue already anyway.

>+		 * skb head length may be more than a PAGE_SIZE. We need to
>+		 * consider ring slots consumed by that data. If we do not,
>+		 * then within this loop itself we end up consuming more meta
>+		 * slots, triggering the BUG_ON below. With this fix we may end up
>+		 * iterating through xen_netbk_rx_action multiple times
>+		 * instead of crashing the netback thread.
>+		 */
>+
>+
>+		count += DIV_ROUND_UP(skb_headlen(skb), PAGE_SIZE);

This now over-accounts by one I think (due to the "+ 1" above;
the calculation here really is to replace that increment).

Jan

>+
>+		if (skb_shinfo(skb)->gso_size)
>+			count++;
>+
> 		__skb_queue_tail(&rxq, skb);
> 
> 		/* Filled the batch queue? */




From xen-devel-bounces@lists.xen.org Mon Aug 13 08:48:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 08:48:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0qKI-0006Xe-4L; Mon, 13 Aug 2012 08:48:42 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1T0qKG-0006XF-2v
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 08:48:40 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-13.tower-27.messagelabs.com!1344847694!8902800!1
X-Originating-IP: [81.169.146.160]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MCA9PiAzOTE3MDY=\n,sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MCA9PiAzOTE3MDY=\n,UPPERCASE_25_50
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5460 invoked from network); 13 Aug 2012 08:48:16 -0000
Received: from mo-p00-ob.rzone.de (HELO mo-p00-ob.rzone.de) (81.169.146.160)
	by server-13.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 13 Aug 2012 08:48:16 -0000
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+zrwiavkK6tmQaLfmztM8TOFGjC0PE0pk
X-RZG-CLASS-ID: mo00
Received: from probook.site
	(dslb-088-065-082-107.pools.arcor-ip.net [88.65.82.107])
	by smtp.strato.de (jorabe mo76) (RZmta 30.9 DYNA|AUTH)
	with (DHE-RSA-AES256-SHA encrypted) ESMTPA id j01fcfo7D6TGKI
	for <xen-devel@lists.xen.org>; Mon, 13 Aug 2012 10:48:13 +0200 (CEST)
Received: from probook.site (localhost [IPv6:::1])
	by probook.site (Postfix) with ESMTP id 612E51836D
	for <xen-devel@lists.xen.org>; Mon, 13 Aug 2012 10:48:06 +0200 (CEST)
MIME-Version: 1.0
X-Mercurial-Node: 26e3b184658352d71b1b4b06b26dbe5d0b46336b
Message-Id: <26e3b184658352d71b1b.1344847685@probook.site>
User-Agent: Mercurial-patchbomb/1.7.5
Date: Mon, 13 Aug 2012 10:48:05 +0200
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH] stubdom: fix parallel build by expanding
	CROSS_MAKE
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

# HG changeset patch
# User Olaf Hering <olaf@aepfle.de>
# Date 1344847636 -7200
# Node ID 26e3b184658352d71b1b4b06b26dbe5d0b46336b
# Parent  dc4970af48a0a2d7a3e54233bc1aa5e0da0fe44a
stubdom: fix parallel build by expanding CROSS_MAKE

Recently I changed my rpm xen.spec file from doing
'make -C tools -j N && make stubdom' to 'make -j N stubdom' because
stubdom depends on tools, so both get built.
The result was the failure below.

....
mkdir -p grub-x86_64
CPPFLAGS="-isystem /home/abuild/rpmbuild/BUILD/xen-4.2.25602/non-dbg/stubdom/../extras/mini-os/include -D__MINIOS__ -DHAVE_LIBC -isystem /home/abuild/rpmbuild/BUILD/xen-4.2.25602/non-dbg/stubdom/../extras/mini-os/include/posix -isystem /home/abuild/rpmbuild/BUILD/xen-4.2.25602/non-dbg/stubdom/../tools/xenstore  -isystem /home/abuild/rpmbuild/BUILD/xen-4.2.25602/non-dbg/stubdom/../extras/mini-os/include/x86 -isystem /home/abuild/rpmbuild/BUILD/xen-4.2.25602/non-dbg/stubdom/../extras/mini-os/include/x86/x86_64 -U __linux__ -U __FreeBSD__ -U __sun__ -nostdinc -isystem /home/abuild/rpmbuild/BUILD/xen-4.2.25602/non-dbg/stubdom/../extras/mini-os/include/posix -isystem /home/abuild/rpmbuild/BUILD/xen-4.2.25602/non-dbg/stubdom/cross-root-x86_64/x86_64-xen-elf/include -isystem /usr/lib64/gcc/x86_64-suse-linux/4.7/include -isystem /home/abuild/rpmbuild/BUILD/xen-4.2.25602/non-dbg/stubdom/lwip-x86_64/src/include -isystem /home/abuild/rpmbuild/BUILD/xen-4.2.25602/non-dbg/stubdom/lwip-x86_64/src/include/ipv4 -I/home/abuild/rpmbuild/BUILD/xen-4.2.25602/non-dbg/stubdom/include -I/home/abuild/rpmbuild/BUILD/xen-4.2.25602/non-dbg/stubdom/../xen/include" CFLAGS="-mno-red-zone -O1 -fno-omit-frame-pointer  -m64 -mno-red-zone -fno-reorder-blocks -fno-asynchronous-unwind-tables -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -Wno-unused-but-set-variable   -fno-stack-protector -fno-exceptions" make DESTDIR= -C grub OBJ_DIR=/home/abuild/rpmbuild/BUILD/xen-4.2.25602/non-dbg/stubdom/grub-x86_64
make[2]: Entering directory `/home/abuild/rpmbuild/BUILD/xen-4.2.25602/non-dbg/stubdom/grub'
make[2]: warning: jobserver unavailable: using -j1.  Add `+' to parent make rule.
make[2]: *** INTERNAL: readdir: Bad file descriptor
.  Stop.
make[2]: Makefile: Field 'stem' not cached: Makefile

make[2]: Leaving directory `/home/abuild/rpmbuild/BUILD/xen-4.2.25602/non-dbg/stubdom/grub'
make[1]: *** [grub] Error 2
[ -d mini-os-x86_64-xenstore ] || \
for i in $(cd /home/abuild/rpmbuild/BUILD/xen-4.2.25602/non-dbg/stubdom/../extras/mini-os ; find . -type d) ; do \
                mkdir -p mini-os-x86_64-xenstore/$i ; \
done
....

Expanding every occurrence of CROSS_MAKE avoids this error. It also has
the nice side effect of actually enabling parallel builds for stubdom.
According to the GNU make documentation $(MAKE) gets its special meaning
only if it appears directly in the recipe:

http://www.gnu.org/software/make/manual/html_node/MAKE-Variable.html
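[Editorial note: the cited rule can be sketched with a hypothetical two-target fragment; the `sub` directory and the `good`/`bad` target names are made up and not part of the patch.]

```make
# GNU make only treats a command as a recursive make invocation when the
# literal token $(MAKE) (or ${MAKE}) appears in the recipe line itself.
good:
	$(MAKE) DESTDIR= -C sub    # jobserver FDs are passed down; -j N works

# The recipe below does not contain the literal token $(MAKE), only a
# variable that expands to it, so make does not recognize the sub-make
# and the child falls back to
# "warning: jobserver unavailable: using -j1".
CROSS_MAKE := $(MAKE) DESTDIR=
bad:
	$(CROSS_MAKE) -C sub
```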

Signed-off-by: Olaf Hering <olaf@aepfle.de>

diff -r dc4970af48a0 -r 26e3b1846583 stubdom/Makefile
--- a/stubdom/Makefile
+++ b/stubdom/Makefile
@@ -76,8 +76,6 @@ TARGET_LDFLAGS += -nostdlib -L$(CROSS_PR
 
 TARGETS=ioemu c caml grub xenstore
 
-CROSS_MAKE := $(MAKE) DESTDIR=
-
 .PHONY: all
 all: build
 ifeq ($(STUBDOM_SUPPORTED),1)
@@ -113,8 +111,8 @@ cross-newlib: $(NEWLIB_STAMPFILE)
 	mkdir -p newlib-$(XEN_TARGET_ARCH)
 	( cd newlib-$(XEN_TARGET_ARCH) && \
 	  CC_FOR_TARGET="$(CC) $(TARGET_CPPFLAGS) $(TARGET_CFLAGS) $(NEWLIB_CFLAGS)" AR_FOR_TARGET=$(AR) LD_FOR_TARGET=$(LD) RANLIB_FOR_TARGET=$(RANLIB) ../newlib-$(NEWLIB_VERSION)/configure --prefix=$(CROSS_PREFIX) --verbose --target=$(GNU_TARGET_ARCH)-xen-elf --enable-newlib-io-long-long --disable-multilib && \
-	  $(CROSS_MAKE) && \
-	  $(CROSS_MAKE) install )
+	  $(MAKE) DESTDIR= && \
+	  $(MAKE) DESTDIR= install )
 
 ############
 # Cross-zlib
@@ -133,8 +131,8 @@ cross-zlib: $(ZLIB_STAMPFILE)
 $(ZLIB_STAMPFILE): zlib-$(XEN_TARGET_ARCH) $(NEWLIB_STAMPFILE)
 	( cd $< && \
 	  CFLAGS="$(TARGET_CPPFLAGS) $(TARGET_CFLAGS)" CC=$(CC) ./configure --prefix=$(CROSS_PREFIX)/$(GNU_TARGET_ARCH)-xen-elf && \
-	  $(CROSS_MAKE) libz.a && \
-	  $(CROSS_MAKE) install )
+	  $(MAKE) DESTDIR= libz.a && \
+	  $(MAKE) DESTDIR= install )
 
 ##############
 # Cross-libpci
@@ -158,7 +156,7 @@ cross-libpci: $(LIBPCI_STAMPFILE)
 	  chmod u+w lib/config.h && \
 	  echo '#define PCILIB_VERSION "$(LIBPCI_VERSION)"' >> lib/config.h && \
 	  ln -sf ../../libpci.config.mak lib/config.mk && \
-	  $(CROSS_MAKE) CC="$(CC) $(TARGET_CPPFLAGS) $(TARGET_CFLAGS) -I$(call realpath,$(MINI_OS)/include)" lib/libpci.a && \
+	  $(MAKE) DESTDIR= CC="$(CC) $(TARGET_CPPFLAGS) $(TARGET_CFLAGS) -I$(call realpath,$(MINI_OS)/include)" lib/libpci.a && \
 	  $(INSTALL_DATA) lib/libpci.a $(CROSS_PREFIX)/$(GNU_TARGET_ARCH)-xen-elf/lib/ && \
 	  $(INSTALL_DIR) $(CROSS_PREFIX)/$(GNU_TARGET_ARCH)-xen-elf/include/pci && \
 	  $(INSTALL_DATA) lib/config.h lib/header.h lib/pci.h lib/types.h $(CROSS_PREFIX)/$(GNU_TARGET_ARCH)-xen-elf/include/pci/ \
@@ -203,8 +201,8 @@ cross-ocaml: $(OCAML_STAMPFILE)
 		-no-pthread -no-shared-libs -no-tk -no-curses \
 		-cc "$(CC) -U_FORTIFY_SOURCE -fno-stack-protector -mno-red-zone"
 	$(foreach i,$(MINIOS_HASNOT),sed -i 's,^\(#define HAS_$(i)\),//\1,' ocaml-$(XEN_TARGET_ARCH)/config/s.h ; )
-	$(CROSS_MAKE) -C ocaml-$(XEN_TARGET_ARCH) world
-	$(CROSS_MAKE) -C ocaml-$(XEN_TARGET_ARCH) opt
+	$(MAKE) DESTDIR= -C ocaml-$(XEN_TARGET_ARCH) world
+	$(MAKE) DESTDIR= -C ocaml-$(XEN_TARGET_ARCH) opt
 	$(MAKE) -C ocaml-$(XEN_TARGET_ARCH) install
 	touch $@
 
@@ -219,7 +217,7 @@ QEMU_ROOT := $(shell if [ -d "$(CONFIG_Q
 
 ifeq ($(QEMU_ROOT),.)
 $(XEN_ROOT)/tools/qemu-xen-traditional-dir:
-	$(CROSS_MAKE) -C $(XEN_ROOT)/tools qemu-xen-traditional-dir-find
+	$(MAKE) DESTDIR= -C $(XEN_ROOT)/tools qemu-xen-traditional-dir-find
 
 ioemu/linkfarm.stamp: $(XEN_ROOT)/tools/qemu-xen-traditional-dir
 	mkdir -p ioemu
@@ -250,7 +248,7 @@ mk-headers-$(XEN_TARGET_ARCH): ioemu/lin
           ( [ -h include/xen/libelf ] || ln -sf $(XEN_ROOT)/tools/include/xen/libelf include/xen/libelf ) && \
 	  mkdir -p include/xen-foreign && \
 	  ln -sf $(wildcard $(XEN_ROOT)/tools/include/xen-foreign/*) include/xen-foreign/ && \
-	  $(CROSS_MAKE) -C include/xen-foreign/ && \
+	  $(MAKE) DESTDIR= -C include/xen-foreign/ && \
 	  ( [ -h include/xen/foreign ] || ln -sf ../xen-foreign include/xen/foreign )
 	mkdir -p libxc-$(XEN_TARGET_ARCH)
 	[ -h libxc-$(XEN_TARGET_ARCH)/Makefile ] || ( cd libxc-$(XEN_TARGET_ARCH) && \
@@ -267,7 +265,7 @@ mk-headers-$(XEN_TARGET_ARCH): ioemu/lin
 	  ln -sf $(XEN_ROOT)/tools/xenstore/*.c . && \
 	  ln -sf $(XEN_ROOT)/tools/xenstore/*.h . && \
 	  ln -sf $(XEN_ROOT)/tools/xenstore/Makefile . )
-	$(CROSS_MAKE) -C $(MINI_OS) links
+	$(MAKE) DESTDIR= -C $(MINI_OS) links
 	touch mk-headers-$(XEN_TARGET_ARCH)
 
 TARGETS_MINIOS=$(addprefix mini-os-$(XEN_TARGET_ARCH)-,$(TARGETS))
@@ -284,7 +282,7 @@ TARGETS_MINIOS=$(addprefix mini-os-$(XEN
 .PHONY: libxc
 libxc: libxc-$(XEN_TARGET_ARCH)/libxenctrl.a libxc-$(XEN_TARGET_ARCH)/libxenguest.a
 libxc-$(XEN_TARGET_ARCH)/libxenctrl.a: cross-zlib
-	CPPFLAGS="$(TARGET_CPPFLAGS)" CFLAGS="$(TARGET_CFLAGS)" $(CROSS_MAKE) -C libxc-$(XEN_TARGET_ARCH)
+	CPPFLAGS="$(TARGET_CPPFLAGS)" CFLAGS="$(TARGET_CFLAGS)" $(MAKE) DESTDIR= -C libxc-$(XEN_TARGET_ARCH)
 
  libxc-$(XEN_TARGET_ARCH)/libxenguest.a: libxc-$(XEN_TARGET_ARCH)/libxenctrl.a
 
@@ -302,7 +300,7 @@ ioemu: cross-zlib cross-libpci libxc
 	    TARGET_CFLAGS="$(TARGET_CFLAGS)" \
 	    TARGET_LDFLAGS="$(TARGET_LDFLAGS)" \
 	    $(QEMU_ROOT)/xen-setup-stubdom )
-	$(CROSS_MAKE) -C ioemu -f $(QEMU_ROOT)/Makefile
+	$(MAKE) DESTDIR= -C ioemu -f $(QEMU_ROOT)/Makefile
 
 ######
 # caml
@@ -310,7 +308,7 @@ ioemu: cross-zlib cross-libpci libxc
 
 .PHONY: caml
 caml: $(CROSS_ROOT)
-	CPPFLAGS="$(TARGET_CPPFLAGS)" CFLAGS="$(TARGET_CFLAGS)" $(CROSS_MAKE) -C $@ LWIPDIR=$(CURDIR)/lwip-$(XEN_TARGET_ARCH) OCAMLC_CROSS_PREFIX=$(CROSS_PREFIX)/$(GNU_TARGET_ARCH)-xen-elf/bin/
+	CPPFLAGS="$(TARGET_CPPFLAGS)" CFLAGS="$(TARGET_CFLAGS)" $(MAKE) DESTDIR= -C $@ LWIPDIR=$(CURDIR)/lwip-$(XEN_TARGET_ARCH) OCAMLC_CROSS_PREFIX=$(CROSS_PREFIX)/$(GNU_TARGET_ARCH)-xen-elf/bin/
 
 ###
 # C
@@ -318,7 +316,7 @@ caml: $(CROSS_ROOT)
 
 .PHONY: c
 c: $(CROSS_ROOT)
-	CPPFLAGS="$(TARGET_CPPFLAGS)" CFLAGS="$(TARGET_CFLAGS)" $(CROSS_MAKE) -C $@ LWIPDIR=$(CURDIR)/lwip-$(XEN_TARGET_ARCH) 
+	CPPFLAGS="$(TARGET_CPPFLAGS)" CFLAGS="$(TARGET_CFLAGS)" $(MAKE) DESTDIR= -C $@ LWIPDIR=$(CURDIR)/lwip-$(XEN_TARGET_ARCH) 
 
 ######
 # Grub
@@ -337,7 +335,7 @@ grub-upstream: grub-$(GRUB_VERSION).tar.
 .PHONY: grub
 grub: grub-upstream $(CROSS_ROOT)
 	mkdir -p grub-$(XEN_TARGET_ARCH)
-	CPPFLAGS="$(TARGET_CPPFLAGS)" CFLAGS="$(TARGET_CFLAGS)" $(CROSS_MAKE) -C $@ OBJ_DIR=$(CURDIR)/grub-$(XEN_TARGET_ARCH)
+	CPPFLAGS="$(TARGET_CPPFLAGS)" CFLAGS="$(TARGET_CFLAGS)" $(MAKE) DESTDIR= -C $@ OBJ_DIR=$(CURDIR)/grub-$(XEN_TARGET_ARCH)
 
 ##########
 # xenstore
@@ -345,7 +343,7 @@ grub: grub-upstream $(CROSS_ROOT)
 
 .PHONY: xenstore
 xenstore: $(CROSS_ROOT)
-	CPPFLAGS="$(TARGET_CPPFLAGS)" CFLAGS="$(TARGET_CFLAGS)" $(CROSS_MAKE) -C $@ xenstored.a CONFIG_STUBDOM=y
+	CPPFLAGS="$(TARGET_CPPFLAGS)" CFLAGS="$(TARGET_CFLAGS)" $(MAKE) DESTDIR= -C $@ xenstored.a CONFIG_STUBDOM=y
 
 ########
 # minios
@@ -354,23 +352,23 @@ xenstore: $(CROSS_ROOT)
 .PHONY: ioemu-stubdom
 ioemu-stubdom: APP_OBJS=$(CURDIR)/ioemu/i386-stubdom/qemu.a $(CURDIR)/ioemu/i386-stubdom/libqemu.a $(CURDIR)/ioemu/libqemu_common.a
 ioemu-stubdom: mini-os-$(XEN_TARGET_ARCH)-ioemu lwip-$(XEN_TARGET_ARCH) libxc ioemu
-	DEF_CPPFLAGS="$(TARGET_CPPFLAGS)" DEF_CFLAGS="$(TARGET_CFLAGS)" DEF_LDFLAGS="$(TARGET_LDFLAGS)" MINIOS_CONFIG="$(CURDIR)/ioemu-minios.cfg" $(CROSS_MAKE) -C $(MINI_OS) OBJ_DIR=$(CURDIR)/$< LWIPDIR=$(CURDIR)/lwip-$(XEN_TARGET_ARCH) APP_OBJS="$(APP_OBJS)"
+	DEF_CPPFLAGS="$(TARGET_CPPFLAGS)" DEF_CFLAGS="$(TARGET_CFLAGS)" DEF_LDFLAGS="$(TARGET_LDFLAGS)" MINIOS_CONFIG="$(CURDIR)/ioemu-minios.cfg" $(MAKE) DESTDIR= -C $(MINI_OS) OBJ_DIR=$(CURDIR)/$< LWIPDIR=$(CURDIR)/lwip-$(XEN_TARGET_ARCH) APP_OBJS="$(APP_OBJS)"
 
 .PHONY: caml-stubdom
 caml-stubdom: mini-os-$(XEN_TARGET_ARCH)-caml lwip-$(XEN_TARGET_ARCH) libxc cross-ocaml caml
-	DEF_CPPFLAGS="$(TARGET_CPPFLAGS)" DEF_CFLAGS="$(TARGET_CFLAGS)" DEF_LDFLAGS="$(TARGET_LDFLAGS)" MINIOS_CONFIG="$(CURDIR)/caml/minios.cfg" $(CROSS_MAKE) -C $(MINI_OS) OBJ_DIR=$(CURDIR)/$< LWIPDIR=$(CURDIR)/lwip-$(XEN_TARGET_ARCH) APP_OBJS="$(CURDIR)/caml/main-caml.o $(CURDIR)/caml/caml.o $(CAMLLIB)/libasmrun.a"
+	DEF_CPPFLAGS="$(TARGET_CPPFLAGS)" DEF_CFLAGS="$(TARGET_CFLAGS)" DEF_LDFLAGS="$(TARGET_LDFLAGS)" MINIOS_CONFIG="$(CURDIR)/caml/minios.cfg" $(MAKE) DESTDIR= -C $(MINI_OS) OBJ_DIR=$(CURDIR)/$< LWIPDIR=$(CURDIR)/lwip-$(XEN_TARGET_ARCH) APP_OBJS="$(CURDIR)/caml/main-caml.o $(CURDIR)/caml/caml.o $(CAMLLIB)/libasmrun.a"
 
 .PHONY: c-stubdom
 c-stubdom: mini-os-$(XEN_TARGET_ARCH)-c lwip-$(XEN_TARGET_ARCH) libxc c
-	DEF_CPPFLAGS="$(TARGET_CPPFLAGS)" DEF_CFLAGS="$(TARGET_CFLAGS)" DEF_LDFLAGS="$(TARGET_LDFLAGS)" MINIOS_CONFIG="$(CURDIR)/c/minios.cfg" $(CROSS_MAKE) -C $(MINI_OS) OBJ_DIR=$(CURDIR)/$< LWIPDIR=$(CURDIR)/lwip-$(XEN_TARGET_ARCH) APP_OBJS=$(CURDIR)/c/main.a
+	DEF_CPPFLAGS="$(TARGET_CPPFLAGS)" DEF_CFLAGS="$(TARGET_CFLAGS)" DEF_LDFLAGS="$(TARGET_LDFLAGS)" MINIOS_CONFIG="$(CURDIR)/c/minios.cfg" $(MAKE) DESTDIR= -C $(MINI_OS) OBJ_DIR=$(CURDIR)/$< LWIPDIR=$(CURDIR)/lwip-$(XEN_TARGET_ARCH) APP_OBJS=$(CURDIR)/c/main.a
 
 .PHONY: pv-grub
 pv-grub: mini-os-$(XEN_TARGET_ARCH)-grub libxc grub
-	DEF_CPPFLAGS="$(TARGET_CPPFLAGS)" DEF_CFLAGS="$(TARGET_CFLAGS)" DEF_LDFLAGS="$(TARGET_LDFLAGS)" MINIOS_CONFIG="$(CURDIR)/grub/minios.cfg" $(CROSS_MAKE) -C $(MINI_OS) OBJ_DIR=$(CURDIR)/$< APP_OBJS=$(CURDIR)/grub-$(XEN_TARGET_ARCH)/main.a
+	DEF_CPPFLAGS="$(TARGET_CPPFLAGS)" DEF_CFLAGS="$(TARGET_CFLAGS)" DEF_LDFLAGS="$(TARGET_LDFLAGS)" MINIOS_CONFIG="$(CURDIR)/grub/minios.cfg" $(MAKE) DESTDIR= -C $(MINI_OS) OBJ_DIR=$(CURDIR)/$< APP_OBJS=$(CURDIR)/grub-$(XEN_TARGET_ARCH)/main.a
 
 .PHONY: xenstore-stubdom
 xenstore-stubdom: mini-os-$(XEN_TARGET_ARCH)-xenstore libxc xenstore
-	DEF_CPPFLAGS="$(TARGET_CPPFLAGS)" DEF_CFLAGS="$(TARGET_CFLAGS)" DEF_LDFLAGS="$(TARGET_LDFLAGS)" MINIOS_CONFIG="$(CURDIR)/xenstore-minios.cfg" $(CROSS_MAKE) -C $(MINI_OS) OBJ_DIR=$(CURDIR)/$< APP_OBJS=$(CURDIR)/xenstore/xenstored.a
+	DEF_CPPFLAGS="$(TARGET_CPPFLAGS)" DEF_CFLAGS="$(TARGET_CFLAGS)" DEF_LDFLAGS="$(TARGET_LDFLAGS)" MINIOS_CONFIG="$(CURDIR)/xenstore-minios.cfg" $(MAKE) DESTDIR= -C $(MINI_OS) OBJ_DIR=$(CURDIR)/$< APP_OBJS=$(CURDIR)/xenstore/xenstored.a
 
 #########
 # install
@@ -412,13 +410,13 @@ clean:
 	rm -fr mini-os-$(XEN_TARGET_ARCH)-caml
 	rm -fr mini-os-$(XEN_TARGET_ARCH)-grub
 	rm -fr mini-os-$(XEN_TARGET_ARCH)-xenstore
-	$(CROSS_MAKE) -C caml clean
-	$(CROSS_MAKE) -C c clean
+	$(MAKE) DESTDIR= -C caml clean
+	$(MAKE) DESTDIR= -C c clean
 	rm -fr grub-$(XEN_TARGET_ARCH)
 	rm -f $(STUBDOMPATH)
-	[ ! -d libxc-$(XEN_TARGET_ARCH) ] || $(CROSS_MAKE) -C libxc-$(XEN_TARGET_ARCH) clean
-	-[ ! -d ioemu ] || $(CROSS_MAKE) -C ioemu clean
-	-[ ! -d xenstore ] || $(CROSS_MAKE) -C xenstore clean
+	[ ! -d libxc-$(XEN_TARGET_ARCH) ] || $(MAKE) DESTDIR= -C libxc-$(XEN_TARGET_ARCH) clean
+	-[ ! -d ioemu ] || $(MAKE) DESTDIR= -C ioemu clean
+	-[ ! -d xenstore ] || $(MAKE) DESTDIR= -C xenstore clean
 
 # clean the cross-compilation result
 .PHONY: crossclean


 # Grub
@@ -337,7 +335,7 @@ grub-upstream: grub-$(GRUB_VERSION).tar.
 .PHONY: grub
 grub: grub-upstream $(CROSS_ROOT)
 	mkdir -p grub-$(XEN_TARGET_ARCH)
-	CPPFLAGS="$(TARGET_CPPFLAGS)" CFLAGS="$(TARGET_CFLAGS)" $(CROSS_MAKE) -C $@ OBJ_DIR=$(CURDIR)/grub-$(XEN_TARGET_ARCH)
+	CPPFLAGS="$(TARGET_CPPFLAGS)" CFLAGS="$(TARGET_CFLAGS)" $(MAKE) DESTDIR= -C $@ OBJ_DIR=$(CURDIR)/grub-$(XEN_TARGET_ARCH)
 
 ##########
 # xenstore
@@ -345,7 +343,7 @@ grub: grub-upstream $(CROSS_ROOT)
 
 .PHONY: xenstore
 xenstore: $(CROSS_ROOT)
-	CPPFLAGS="$(TARGET_CPPFLAGS)" CFLAGS="$(TARGET_CFLAGS)" $(CROSS_MAKE) -C $@ xenstored.a CONFIG_STUBDOM=y
+	CPPFLAGS="$(TARGET_CPPFLAGS)" CFLAGS="$(TARGET_CFLAGS)" $(MAKE) DESTDIR= -C $@ xenstored.a CONFIG_STUBDOM=y
 
 ########
 # minios
@@ -354,23 +352,23 @@ xenstore: $(CROSS_ROOT)
 .PHONY: ioemu-stubdom
 ioemu-stubdom: APP_OBJS=$(CURDIR)/ioemu/i386-stubdom/qemu.a $(CURDIR)/ioemu/i386-stubdom/libqemu.a $(CURDIR)/ioemu/libqemu_common.a
 ioemu-stubdom: mini-os-$(XEN_TARGET_ARCH)-ioemu lwip-$(XEN_TARGET_ARCH) libxc ioemu
-	DEF_CPPFLAGS="$(TARGET_CPPFLAGS)" DEF_CFLAGS="$(TARGET_CFLAGS)" DEF_LDFLAGS="$(TARGET_LDFLAGS)" MINIOS_CONFIG="$(CURDIR)/ioemu-minios.cfg" $(CROSS_MAKE) -C $(MINI_OS) OBJ_DIR=$(CURDIR)/$< LWIPDIR=$(CURDIR)/lwip-$(XEN_TARGET_ARCH) APP_OBJS="$(APP_OBJS)"
+	DEF_CPPFLAGS="$(TARGET_CPPFLAGS)" DEF_CFLAGS="$(TARGET_CFLAGS)" DEF_LDFLAGS="$(TARGET_LDFLAGS)" MINIOS_CONFIG="$(CURDIR)/ioemu-minios.cfg" $(MAKE) DESTDIR= -C $(MINI_OS) OBJ_DIR=$(CURDIR)/$< LWIPDIR=$(CURDIR)/lwip-$(XEN_TARGET_ARCH) APP_OBJS="$(APP_OBJS)"
 
 .PHONY: caml-stubdom
 caml-stubdom: mini-os-$(XEN_TARGET_ARCH)-caml lwip-$(XEN_TARGET_ARCH) libxc cross-ocaml caml
-	DEF_CPPFLAGS="$(TARGET_CPPFLAGS)" DEF_CFLAGS="$(TARGET_CFLAGS)" DEF_LDFLAGS="$(TARGET_LDFLAGS)" MINIOS_CONFIG="$(CURDIR)/caml/minios.cfg" $(CROSS_MAKE) -C $(MINI_OS) OBJ_DIR=$(CURDIR)/$< LWIPDIR=$(CURDIR)/lwip-$(XEN_TARGET_ARCH) APP_OBJS="$(CURDIR)/caml/main-caml.o $(CURDIR)/caml/caml.o $(CAMLLIB)/libasmrun.a"
+	DEF_CPPFLAGS="$(TARGET_CPPFLAGS)" DEF_CFLAGS="$(TARGET_CFLAGS)" DEF_LDFLAGS="$(TARGET_LDFLAGS)" MINIOS_CONFIG="$(CURDIR)/caml/minios.cfg" $(MAKE) DESTDIR= -C $(MINI_OS) OBJ_DIR=$(CURDIR)/$< LWIPDIR=$(CURDIR)/lwip-$(XEN_TARGET_ARCH) APP_OBJS="$(CURDIR)/caml/main-caml.o $(CURDIR)/caml/caml.o $(CAMLLIB)/libasmrun.a"
 
 .PHONY: c-stubdom
 c-stubdom: mini-os-$(XEN_TARGET_ARCH)-c lwip-$(XEN_TARGET_ARCH) libxc c
-	DEF_CPPFLAGS="$(TARGET_CPPFLAGS)" DEF_CFLAGS="$(TARGET_CFLAGS)" DEF_LDFLAGS="$(TARGET_LDFLAGS)" MINIOS_CONFIG="$(CURDIR)/c/minios.cfg" $(CROSS_MAKE) -C $(MINI_OS) OBJ_DIR=$(CURDIR)/$< LWIPDIR=$(CURDIR)/lwip-$(XEN_TARGET_ARCH) APP_OBJS=$(CURDIR)/c/main.a
+	DEF_CPPFLAGS="$(TARGET_CPPFLAGS)" DEF_CFLAGS="$(TARGET_CFLAGS)" DEF_LDFLAGS="$(TARGET_LDFLAGS)" MINIOS_CONFIG="$(CURDIR)/c/minios.cfg" $(MAKE) DESTDIR= -C $(MINI_OS) OBJ_DIR=$(CURDIR)/$< LWIPDIR=$(CURDIR)/lwip-$(XEN_TARGET_ARCH) APP_OBJS=$(CURDIR)/c/main.a
 
 .PHONY: pv-grub
 pv-grub: mini-os-$(XEN_TARGET_ARCH)-grub libxc grub
-	DEF_CPPFLAGS="$(TARGET_CPPFLAGS)" DEF_CFLAGS="$(TARGET_CFLAGS)" DEF_LDFLAGS="$(TARGET_LDFLAGS)" MINIOS_CONFIG="$(CURDIR)/grub/minios.cfg" $(CROSS_MAKE) -C $(MINI_OS) OBJ_DIR=$(CURDIR)/$< APP_OBJS=$(CURDIR)/grub-$(XEN_TARGET_ARCH)/main.a
+	DEF_CPPFLAGS="$(TARGET_CPPFLAGS)" DEF_CFLAGS="$(TARGET_CFLAGS)" DEF_LDFLAGS="$(TARGET_LDFLAGS)" MINIOS_CONFIG="$(CURDIR)/grub/minios.cfg" $(MAKE) DESTDIR= -C $(MINI_OS) OBJ_DIR=$(CURDIR)/$< APP_OBJS=$(CURDIR)/grub-$(XEN_TARGET_ARCH)/main.a
 
 .PHONY: xenstore-stubdom
 xenstore-stubdom: mini-os-$(XEN_TARGET_ARCH)-xenstore libxc xenstore
-	DEF_CPPFLAGS="$(TARGET_CPPFLAGS)" DEF_CFLAGS="$(TARGET_CFLAGS)" DEF_LDFLAGS="$(TARGET_LDFLAGS)" MINIOS_CONFIG="$(CURDIR)/xenstore-minios.cfg" $(CROSS_MAKE) -C $(MINI_OS) OBJ_DIR=$(CURDIR)/$< APP_OBJS=$(CURDIR)/xenstore/xenstored.a
+	DEF_CPPFLAGS="$(TARGET_CPPFLAGS)" DEF_CFLAGS="$(TARGET_CFLAGS)" DEF_LDFLAGS="$(TARGET_LDFLAGS)" MINIOS_CONFIG="$(CURDIR)/xenstore-minios.cfg" $(MAKE) DESTDIR= -C $(MINI_OS) OBJ_DIR=$(CURDIR)/$< APP_OBJS=$(CURDIR)/xenstore/xenstored.a
 
 #########
 # install
@@ -412,13 +410,13 @@ clean:
 	rm -fr mini-os-$(XEN_TARGET_ARCH)-caml
 	rm -fr mini-os-$(XEN_TARGET_ARCH)-grub
 	rm -fr mini-os-$(XEN_TARGET_ARCH)-xenstore
-	$(CROSS_MAKE) -C caml clean
-	$(CROSS_MAKE) -C c clean
+	$(MAKE) DESTDIR= -C caml clean
+	$(MAKE) DESTDIR= -C c clean
 	rm -fr grub-$(XEN_TARGET_ARCH)
 	rm -f $(STUBDOMPATH)
-	[ ! -d libxc-$(XEN_TARGET_ARCH) ] || $(CROSS_MAKE) -C libxc-$(XEN_TARGET_ARCH) clean
-	-[ ! -d ioemu ] || $(CROSS_MAKE) -C ioemu clean
-	-[ ! -d xenstore ] || $(CROSS_MAKE) -C xenstore clean
+	[ ! -d libxc-$(XEN_TARGET_ARCH) ] || $(MAKE) DESTDIR= -C libxc-$(XEN_TARGET_ARCH) clean
+	-[ ! -d ioemu ] || $(MAKE) DESTDIR= -C ioemu clean
+	-[ ! -d xenstore ] || $(MAKE) DESTDIR= -C xenstore clean
 
 # clean the cross-compilation result
 .PHONY: crossclean
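[Editorial note: the substitution above, `$(CROSS_MAKE)` becoming `$(MAKE) DESTDIR=`, relies on standard GNU make precedence: a variable assignment on the sub-make's command line overrides a value inherited from the environment. A minimal self-contained sketch (hypothetical throwaway Makefiles, not part of the Xen tree) demonstrating the behaviour:]

```shell
# Recreate the DESTDIR= trick in miniature: an environment DESTDIR
# leaks into a plain recursive make, but an explicit DESTDIR= on the
# sub-make's command line overrides it for that sub-make.
demo=$(mktemp -d)
mkdir -p "$demo/sub"
printf 'all:\n\t@echo "sub sees DESTDIR=[$(DESTDIR)]"\n' > "$demo/sub/Makefile"
printf 'inherited:\n\t@$(MAKE) -s -C sub\ncleared:\n\t@$(MAKE) -s DESTDIR= -C sub\n' > "$demo/Makefile"

cd "$demo"
DESTDIR=/stage make -s inherited   # prints: sub sees DESTDIR=[/stage]
DESTDIR=/stage make -s cleared     # prints: sub sees DESTDIR=[]
```

[This is why the stubdom build clears DESTDIR when recursing: the cross-built sub-trees must not install into a staging directory inherited from the top-level build.]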

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 08:59:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 08:59:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0qUb-0006xx-8c; Mon, 13 Aug 2012 08:59:21 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T0qUY-0006xr-UR
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 08:59:19 +0000
Received: from [85.158.139.83:35694] by server-11.bemta-5.messagelabs.com id
	CA/A4-29296-6E1C8205; Mon, 13 Aug 2012 08:59:18 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-15.tower-182.messagelabs.com!1344848357!27729487!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10891 invoked from network); 13 Aug 2012 08:59:17 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-15.tower-182.messagelabs.com with SMTP;
	13 Aug 2012 08:59:17 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 13 Aug 2012 09:59:52 +0100
Message-Id: <5028DE020200007800094662@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Mon, 13 Aug 2012 09:59:14 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Santosh Jodh" <santosh.jodh@citrix.com>
References: <9c7609a4fbc117b1600f.1344626094@REDBLD-XS.ad.xensource.com>
In-Reply-To: <9c7609a4fbc117b1600f.1344626094@REDBLD-XS.ad.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: wei.wang2@amd.com, tim@xen.org, xiantao.zhang@intel.com,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] dump_p2m_table: For IOMMU
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 10.08.12 at 21:14, Santosh Jodh <santosh.jodh@citrix.com> wrote:
> --- a/xen/drivers/passthrough/amd/pci_amd_iommu.c	Tue Aug 07 18:37:31 2012 +0100
> +++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c	Fri Aug 10 08:19:58 2012 -0700
> @@ -512,6 +513,80 @@ static int amd_iommu_group_id(u16 seg, u
>  
>  #include <asm/io_apic.h>
>  
> +static void amd_dump_p2m_table_level(struct page_info* pg, int level, 
> +                                     paddr_t gpa, int indent)
> +{
> +    paddr_t address;
> +    void *table_vaddr, *pde;
> +    paddr_t next_table_maddr;
> +    int index, next_level, present;
> +    u32 *entry;
> +
> +    if ( level < 1 )
> +        return;
> +
> +    table_vaddr = __map_domain_page(pg);
> +    if ( table_vaddr == NULL )
> +    {
> +        printk("Failed to map IOMMU domain page %"PRIpaddr"\n", 
> +                page_to_maddr(pg));
> +        return;
> +    }
> +
> +    for ( index = 0; index < PTE_PER_TABLE_SIZE; index++ )
> +    {
> +        if ( !(index % 2) )
> +            process_pending_softirqs();
> +
> +        pde = table_vaddr + (index * IOMMU_PAGE_TABLE_ENTRY_SIZE);
> +        next_table_maddr = amd_iommu_get_next_table_from_pte(pde);
> +        entry = (u32*)pde;
> +
> +        present = get_field_from_reg_u32(entry[0],
> +                                         IOMMU_PDE_PRESENT_MASK,
> +                                         IOMMU_PDE_PRESENT_SHIFT);
> +
> +        if ( !present )
> +            continue;
> +
> +        next_level = get_field_from_reg_u32(entry[0],
> +                                            IOMMU_PDE_NEXT_LEVEL_MASK,
> +                                            IOMMU_PDE_NEXT_LEVEL_SHIFT);
> +
> +        address = gpa + amd_offset_level_address(index, level);
> +        if ( (next_table_maddr != 0) && (next_level != 0) )

Why do you do this differently than for VT-d here? There
you don't check next_table_maddr (and I see no reason you
would need to). Oh, I see, there's a similar check in a different
place there. But this needs to be functionally similar here then.
Specifically, ...

> +        {
> +            amd_dump_p2m_table_level(
> +                maddr_to_page(next_table_maddr), level - 1, 
> +                address, indent + 1);
> +        }
> +        else 

... you'd get into the else's body if next_table_maddr was zero,
which is wrong afaict. So I think flow like

    if ( next_level )
        print
    else if ( next_table_maddr )
        recurse

would be the preferable way to go if you feel that these zero
checks are necessary (and if you do then, because this being
the case is really a bug, this shouldn't go through silently).

> +        {
> +            int i;
> +
> +            for ( i = 0; i < indent; i++ )
> +                printk("  ");

printk("%*s...", indent, "", ...);

> +
> +            printk("gfn: %08lx  mfn: %08lx\n",
> +                   (unsigned long)PFN_DOWN(address), 
> +                   (unsigned long)PFN_DOWN(next_table_maddr));
> +        }
> +    }
> +
> +    unmap_domain_page(table_vaddr);
> +}
>...
> --- a/xen/drivers/passthrough/vtd/iommu.c	Tue Aug 07 18:37:31 2012 +0100
> +++ b/xen/drivers/passthrough/vtd/iommu.c	Fri Aug 10 08:19:58 2012 -0700
> @@ -2365,6 +2366,71 @@ static void vtd_resume(void)
>      }
>  }
>  
> +static void vtd_dump_p2m_table_level(paddr_t pt_maddr, int level, paddr_t gpa, 
> +                                     int indent)
> +{
> +    paddr_t address;
> +    int i;
> +    struct dma_pte *pt_vaddr, *pte;
> +    int next_level;
> +
> +    if ( pt_maddr == 0 )
> +        return;
> +
> +    pt_vaddr = map_vtd_domain_page(pt_maddr);
> +    if ( pt_vaddr == NULL )
> +    {
> +        printk("Failed to map VT-D domain page %"PRIpaddr"\n", pt_maddr);
> +        return;
> +    }
> +
> +    next_level = level - 1;
> +    for ( i = 0; i < PTE_NUM; i++ )
> +    {
> +        if ( !(i % 2) )
> +            process_pending_softirqs();
> +
> +        pte = &pt_vaddr[i];
> +        if ( !dma_pte_present(*pte) )
> +            continue;
> +
> +        address = gpa + offset_level_address(i, level);
> +        if ( next_level >= 1 ) 
> +        {
> +            vtd_dump_p2m_table_level(dma_pte_addr(*pte), next_level, 
> +                                     address, indent + 1);
> +        }
> +        else
> +        {
> +            int j;
> +
> +            for ( j = 0; j < indent; j++ )
> +                printk("  ");

See above.

Jan

> +
> +            printk("gfn: %08lx mfn: %08lx super=%d rd=%d wr=%d\n",
> +                   (unsigned long)(address >> PAGE_SHIFT_4K),
> +                   (unsigned long)(pte->val >> PAGE_SHIFT_4K),
> +                   dma_pte_superpage(*pte)? 1 : 0,
> +                   dma_pte_read(*pte)? 1 : 0,
> +                   dma_pte_write(*pte)? 1 : 0);
> +        }
> +    }
> +
> +    unmap_vtd_domain_page(pt_vaddr);
> +}


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 09:08:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 09:08:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0qd2-0007IH-EQ; Mon, 13 Aug 2012 09:08:04 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1T0qd1-0007IC-GE
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 09:08:03 +0000
Received: from [85.158.139.83:64672] by server-10.bemta-5.messagelabs.com id
	F0/43-13125-2F3C8205; Mon, 13 Aug 2012 09:08:02 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-16.tower-182.messagelabs.com!1344848881!20465334!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3889 invoked from network); 13 Aug 2012 09:08:02 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-16.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Aug 2012 09:08:02 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1T0qcw-000Jhd-0J; Mon, 13 Aug 2012 09:07:58 +0000
Date: Mon, 13 Aug 2012 10:07:57 +0100
From: Tim Deegan <tim@xen.org>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20120813090757.GA75552@ocelot.phlegethon.org>
References: <5020C24A.3060604@oracle.com>
	<5020F003020000780009322C@nat28.tlf.novell.com>
	<502235E8.9040309@oracle.com>
	<50229B840200007800093A73@nat28.tlf.novell.com>
	<5023860E.7080908@oracle.com>
	<5023AE960200007800093DE8@nat28.tlf.novell.com>
	<502490A7.7020603@oracle.com>
	<502535280200007800094322@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <502535280200007800094322@nat28.tlf.novell.com>
User-Agent: Mutt/1.4.2.1i
Cc: xen-devel <xen-devel@lists.xen.org>, Feng Jin <joe.jin@oracle.com>,
	zhenzhong.duan@oracle.com, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] kernel bootup slow issue on ovm3.1.1
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 15:22 +0100 on 10 Aug (1344612120), Jan Beulich wrote:
> Yeah, that CR0 write disables the caches, and that's pretty
> expensive on EPT (not sure why NPT doesn't use/need the
> same hook) when the guest has any active MMIO regions:
> vmx_set_uc_mode(), when HAP is enabled, calls
> ept_change_entry_emt_with_range(), which is a walk through
> the entire guest page tables (i.e. scales with guest size, or, to
> be precise, with the highest populated GFN).

:( That's not so great.  It can definitely be done more efficiently than
with that for() loop, and I wonder whether there isn't some better way
involving flipping a global flag somewhere.

If no EPT maintainers have commented on this by Thursday I'll look into
it then.

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 09:11:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 09:11:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0qgI-0007P4-1r; Mon, 13 Aug 2012 09:11:26 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T0qgG-0007On-8E
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 09:11:24 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1344849022!2106152!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21217 invoked from network); 13 Aug 2012 09:10:23 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-15.tower-27.messagelabs.com with SMTP;
	13 Aug 2012 09:10:23 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 13 Aug 2012 10:11:52 +0100
Message-Id: <5028E09B0200007800094691@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Mon, 13 Aug 2012 10:10:19 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Daniel Kiper" <daniel.kiper@oracle.com>
References: <20120810132513.GB2576@host-192-168-1-59.local.net-space.pl>
	<502539410200007800094355@nat28.tlf.novell.com>
	<20120813081235.GB2482@host-192-168-1-59.local.net-space.pl>
In-Reply-To: <20120813081235.GB2482@host-192-168-1-59.local.net-space.pl>
Mime-Version: 1.0
Content-Disposition: inline
Cc: olaf@aepfle.de, konrad.wilk@oracle.com, andrew.cooper3@citrix.com,
	kexec@lists.infradead.org, ptesarik@suse.cz,
	xen-devel <xen-devel@lists.xen.org>, anderson@redhat.com,
	crash-utility@redhat.com
Subject: Re: [Xen-devel] [PATCH v2 1/6] xen: Always calculate max_cpus value
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 13.08.12 at 10:12, Daniel Kiper <daniel.kiper@oracle.com> wrote:
> On Fri, Aug 10, 2012 at 03:39:29PM +0100, Jan Beulich wrote:
>> >>> On 10.08.12 at 15:25, Daniel Kiper <daniel.kiper@oracle.com> wrote:
>> > max_cpus is not available since 20374 changeset (Miscellaneous data
>> > placement adjustments). It was moved to __initdata section. This section
>> > is freed after Xen initialization. Assume that max_cpus is always
>> > equal to XEN_HYPER_SIZE(cpumask_t) * 8.
>>
>> Just to repeat my response to the original version of this patch,
>> which I don't recall having got any answer from you:
>>
>> "Using nr_cpu_ids, when available, would seem a better fit. And
>>  I don't see why, on dumps from old hypervisors, you wouldn't
>>  want to continue using max_cpus. Oh, wait, I see - you would
>>  have to be able to tell whether it actually sits in .init.data, which
>>  might not be straightforward."
> 
> As I promised earlier, I thought about that. The simplest way
> to do that is to check in which section max_cpus resides. There
> is some instrumentation in the crash tool to do that. However, sadly
> it does not differentiate between the .data and .init.data sections.
> I could write something from scratch which could do that, but
> I think it would have larger costs than potential gains.
> Let's leave it as is for now. The current approximation is not so bad.
> However, if any opportunity appears (some functions that could
> differentiate between the .data and .init.data sections), then
> I could fix this.

But minimally you should be using nr_cpu_ids when available.
You just have to be prepared for bits beyond that value within
any cpumask_t instance to have random contents.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 09:38:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 09:38:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0r5o-0007iA-8t; Mon, 13 Aug 2012 09:37:48 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T0r5l-0007i5-Ri
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 09:37:46 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1344850201!8917484!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22231 invoked from network); 13 Aug 2012 09:30:01 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-27.messagelabs.com with SMTP;
	13 Aug 2012 09:30:01 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 13 Aug 2012 10:32:03 +0100
Message-Id: <5028E53202000078000946B1@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Mon, 13 Aug 2012 10:29:54 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: <zhenzhong.duan@oracle.com>
References: <5020C24A.3060604@oracle.com>
	<5020F003020000780009322C@nat28.tlf.novell.com>
	<502235E8.9040309@oracle.com>
	<50229B840200007800093A73@nat28.tlf.novell.com>
	<5023860E.7080908@oracle.com>
	<5023AE960200007800093DE8@nat28.tlf.novell.com>
	<502490A7.7020603@oracle.com>
	<502535280200007800094322@nat28.tlf.novell.com>
	<5028B3AB.7060705@oracle.com>
In-Reply-To: <5028B3AB.7060705@oracle.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Satish Kantheti <satish.kantheti@oracle.com>,
	xen-devel <xen-devel@lists.xen.org>, Feng Jin <joe.jin@oracle.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] kernel bootup slow issue on ovm3.1.1
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 13.08.12 at 09:58, "zhenzhong.duan" <zhenzhong.duan@oracle.com> wrote:
> On 2012-08-10 22:22, Jan Beulich wrote:
>> Going back to your original mail, I wonder however why this
>> gets done at all. You said it got there via
>>
>> mtrr_aps_init()
>>   \->  set_mtrr()
>>       \->  mtrr_work_handler()
>>
>> yet this isn't done unconditionally - see the comment before
>> checking mtrr_aps_delayed_init. Can you find out where the
>> obviously necessary call(s) to set_mtrr_aps_delayed_init()
>> come(s) from?
> At bootup stage, set_mtrr_aps_delayed_init is called by
> native_smp_prepare_cpus.
> mtrr_aps_delayed_init is always set to true for Intel processors in
> upstream code.

Indeed, and that (in one form or another) has been done
virtually forever in Linux. I wonder why the problem wasn't
noticed (or looked into, if it was noticed) so far.

As it's going to be rather difficult to convince the Linux folks
to change their code (plus this wouldn't help with existing
kernels anyway), we'll need to find a way to improve this in
the hypervisor.

One seemingly orthogonal thing would presumably help quite
a bit on the guest side nevertheless - para-virtualized spin
locks. I have no idea why this is only being done when running
pv, but not for pvhvm. Konrad, Stefano?

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 09:38:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 09:38:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0r6H-0007kF-QI; Mon, 13 Aug 2012 09:38:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1T0r6G-0007jv-1M
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 09:38:16 +0000
Received: from [85.158.143.99:20377] by server-2.bemta-4.messagelabs.com id
	AB/1E-31966-70BC8205; Mon, 13 Aug 2012 09:38:15 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-15.tower-216.messagelabs.com!1344850694!27867400!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9231 invoked from network); 13 Aug 2012 09:38:14 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-15.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Aug 2012 09:38:14 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1T0r6B-000JmK-RY; Mon, 13 Aug 2012 09:38:11 +0000
Date: Mon, 13 Aug 2012 10:38:11 +0100
From: Tim Deegan <tim@xen.org>
To: Jean Guyader <jean.guyader@citrix.com>
Message-ID: <20120813093811.GB75552@ocelot.phlegethon.org>
References: <1344023454-31425-1-git-send-email-jean.guyader@citrix.com>
	<1344023454-31425-6-git-send-email-jean.guyader@citrix.com>
	<20120809103840.GD16986@ocelot.phlegethon.org>
	<20120810165109.GA19429@spongy>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20120810165109.GA19429@spongy>
User-Agent: Mutt/1.4.2.1i
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 5/5] xen: Add V4V implementation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 17:51 +0100 on 10 Aug (1344621069), Jean Guyader wrote:
> On 09/08 11:38, Tim Deegan wrote:
> > Hi,
> > 
> > This looks pretty good; I think you've addressed almost all my comments
> > except for one, which is really a design decision rather than an
> > implementation one.  As I said last time: 
> > 
> > ] And what about protocol?  Protocol seems to have ended up as a bit of a
> > ] second-class citizen in v4v; it's defined, and indeed required, but not
> > ] used for routing or for access control, so all traffic to a given port
> > ] _on every protocol_ ends up on the same ring. 
> > ] 
> > ] This is the inverse of the TCP/IP namespace that you're copying, where
> > ] protocol demux happens before port demux.  And I think it will bite
> > ] someone if you ever, for example, want to send ICMP or GRE over a v4v
> > ] channel.
> > 
> 
> The protocol field is used to indicate the type of a message on the ring.
> 
> Right now we use two protocols in our Linux driver: V4V_PROTO_DGRAM and
> V4V_PROTO_STREAM. In the future that could probably be extended to new
> protocols like V4V_PROTO_ICMP, for instance.
> 
> The demultiplexing will happen at the other end; the driver can look at the
> message and decide what to do with it based on the protocol field.

Yes, I understand all that - what I'm saying is that it seems like a
design flaw to me.  The namespace in V4V, as proposed, looks like this:

 Protocol
 Port
 Domain

and it would be more sensible to do (like the IP stack):

 Port
 Protocol
 Domain.

Or at the very least the protocol should be made part of the endpoint
address, and not just part of the packet header.  As it stands:

 - The handlers for port X in _all_ protocols _have_ to share a
   ring.  That seems kind of plausible because the IANA port assignments
   never give the same port number to different services on TCP and UDP,
   but will it make sense for every new protocol?  Is it sensible to
   require, say, an L2TP service to make its connection IDs not clash
   with V4V_PROTO_DGRAM and V4V_PROTO_STREAM users?

   It may not even make sense in existing protocols.  It's common enough
   for DNS servers to use different ACLs (and indeed different servers)
   for TCP and UDP.

 - Relatedly, every protocol _has_ to have port numbers.  How would you
   register an ICMP listener, for example?  You'd have to do something
   gross like declare a particular port to be the ICMP port so that you
   could demux it, or indeed send it in the first place.

You say:

> The demultiplexing will happen at the other end; the driver can look at the
> message and decide what to do with it based on the protocol field.

I'm willing to accept that argument, but only if we extend it to ports
too, get rid of all the namespace and ACL code in Xen and leave each
domain with a single RX ring that the (single) guest driver must demux. :P

Cheers,

Tim.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 10:26:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 10:26:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0rqs-0008IA-NB; Mon, 13 Aug 2012 10:26:26 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mgorman@suse.de>) id 1T0rqr-0008I5-7v
	for xen-devel@lists.xensource.com; Mon, 13 Aug 2012 10:26:25 +0000
Received: from [85.158.143.35:12209] by server-3.bemta-4.messagelabs.com id
	D4/FB-09529-056D8205; Mon, 13 Aug 2012 10:26:24 +0000
X-Env-Sender: mgorman@suse.de
X-Msg-Ref: server-3.tower-21.messagelabs.com!1344853573!13754313!1
X-Originating-IP: [195.135.220.15]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17100 invoked from network); 13 Aug 2012 10:26:14 -0000
Received: from cantor2.suse.de (HELO mx2.suse.de) (195.135.220.15)
	by server-3.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 13 Aug 2012 10:26:14 -0000
Received: from relay1.suse.de (unknown [195.135.220.254])
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(No client certificate requested)
	by mx2.suse.de (Postfix) with ESMTP id 1B529A37E0;
	Mon, 13 Aug 2012 12:26:07 +0200 (CEST)
Date: Mon, 13 Aug 2012 11:26:04 +0100
From: Mel Gorman <mgorman@suse.de>
To: David Miller <davem@davemloft.net>
Message-ID: <20120813102604.GC4177@suse.de>
References: <20120807085554.GF29814@suse.de>
	<20120808.155046.820543563969484712.davem@davemloft.net>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20120808.155046.820543563969484712.davem@davemloft.net>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: xen-devel@lists.xensource.com, Jeremy Fitzhardinge <jeremy@xensource.com>,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	Ian.Campbell@eu.citrix.com, linux-mm@kvack.org,
	konrad@darnok.org, akpm@linux-foundation.org
Subject: Re: [Xen-devel] [PATCH] netvm: check for page == NULL when
 propogating the skb->pfmemalloc flag
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Aug 08, 2012 at 03:50:46PM -0700, David Miller wrote:
> From: Mel Gorman <mgorman@suse.de>
> Date: Tue, 7 Aug 2012 09:55:55 +0100
> 
> > Commit [c48a11c7: netvm: propagate page->pfmemalloc to skb] is responsible
> > for the following bug triggered by a xen network driver
>  ...
> > The problem is that the xenfront driver is passing a NULL page to
> > __skb_fill_page_desc() which was unexpected. This patch checks that
> > there is a page before dereferencing.
> > 
> > Reported-and-Tested-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > Signed-off-by: Mel Gorman <mgorman@suse.de>
> 
> That call to __skb_fill_page_desc() in xen-netfront.c looks completely bogus.
> It's the only driver passing NULL here.
> 
> That whole song and dance figuring out what to do with the head
> fragment page, depending upon whether the length is greater than the
> RX_COPY_THRESHOLD, is completely unnecessary.
> 
> Just use something like a call to __pskb_pull_tail(skb, len) and all
> that other crap around that area can simply be deleted.

I looked at this for a while, but I did not see how __pskb_pull_tail()
could be used sensibly; then again, I'm simply not familiar with writing
network device drivers or Xen.

This messing with RX_COPY_THRESHOLD seems to be related to how the frontend
and backend communicate (maybe some fixed limitation of the xenbus). The
existing code looks like it is trying to take the fragments received and
pass them straight to the backend without copying. I worry that if I try
converting this to __pskb_pull_tail() it would either hit that limitation
of xenbus or introduce copying where it is not wanted.

I'm going to have to punt this to Jeremy and the other Xen folk as I'm not
sure what the original intention was and I don't have a Xen setup anywhere
to test any patch. Jeremy, xen folk? 

-- 
Mel Gorman
SUSE Labs

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 10:32:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 10:32:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0rwW-0008Qp-Fu; Mon, 13 Aug 2012 10:32:16 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Wei.Wang2@amd.com>) id 1T0rwU-0008Qj-T9
	for xen-devel@lists.xensource.com; Mon, 13 Aug 2012 10:32:15 +0000
Received: from [85.158.143.35:46861] by server-3.bemta-4.messagelabs.com id
	C5/89-09529-EA7D8205; Mon, 13 Aug 2012 10:32:14 +0000
X-Env-Sender: Wei.Wang2@amd.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1344853927!15057514!1
X-Originating-IP: [216.32.181.186]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18936 invoked from network); 13 Aug 2012 10:32:08 -0000
Received: from ch1ehsobe006.messaging.microsoft.com (HELO
	ch1outboundpool.messaging.microsoft.com) (216.32.181.186)
	by server-13.tower-21.messagelabs.com with AES128-SHA encrypted SMTP;
	13 Aug 2012 10:32:08 -0000
Received: from mail187-ch1-R.bigfish.com (10.43.68.250) by
	CH1EHSOBE003.bigfish.com (10.43.70.53) with Microsoft SMTP Server id
	14.1.225.23; Mon, 13 Aug 2012 10:32:06 +0000
Received: from mail187-ch1 (localhost [127.0.0.1])	by
	mail187-ch1-R.bigfish.com (Postfix) with ESMTP id 9C8753C02BE;
	Mon, 13 Aug 2012 10:32:06 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:163.181.249.109; KIP:(null); UIP:(null);
	IPV:NLI; H:ausb3twp02.amd.com; RD:none; EFVD:NLI
X-SpamScore: 1
X-BigFish: VPS1(zzbb2dI98dI9371I1432I78fbmzz1202hzz8275bhz2dh668h839hd25he5bhf0ah107ah)
Received: from mail187-ch1 (localhost.localdomain [127.0.0.1]) by mail187-ch1
	(MessageSwitch) id 1344853924315619_26425;
	Mon, 13 Aug 2012 10:32:04 +0000 (UTC)
Received: from CH1EHSMHS030.bigfish.com (snatpool1.int.messaging.microsoft.com
	[10.43.68.243])	by mail187-ch1.bigfish.com (Postfix) with ESMTP id
	40FDBE0102;	Mon, 13 Aug 2012 10:32:04 +0000 (UTC)
Received: from ausb3twp02.amd.com (163.181.249.109) by
	CH1EHSMHS030.bigfish.com (10.43.70.30) with Microsoft SMTP Server id
	14.1.225.23; Mon, 13 Aug 2012 10:32:02 +0000
X-WSS-ID: 0M8OVXA-02-AI5-02
X-M-MSG: 
Received: from sausexedgep02.amd.com (sausexedgep02-ext.amd.com
	[163.181.249.73])	(using TLSv1 with cipher AES128-SHA (128/128
	bits))	(No
	client certificate requested)	by ausb3twp02.amd.com (Axway MailGate
	3.8.1)
	with ESMTP id 27E1FC8052;	Mon, 13 Aug 2012 05:31:57 -0500 (CDT)
Received: from SAUSEXDAG04.amd.com (163.181.55.4) by sausexedgep02.amd.com
	(163.181.36.59) with Microsoft SMTP Server (TLS) id 8.3.192.1;
	Mon, 13 Aug 2012 05:32:05 -0500
Received: from storexhtp01.amd.com (172.24.4.3) by sausexdag04.amd.com
	(163.181.55.4) with Microsoft SMTP Server (TLS) id 14.1.323.3;
	Mon, 13 Aug 2012 05:32:01 -0500
Received: from gwo.osrc.amd.com (165.204.16.204) by storexhtp01.amd.com
	(172.24.4.3) with Microsoft SMTP Server id 8.3.213.0; Mon, 13 Aug 2012
	06:31:59 -0400
Received: from [165.204.15.57] (gran.osrc.amd.com [165.204.15.57])	by
	gwo.osrc.amd.com (Postfix) with ESMTP id EE0BC49C69F; Mon, 13 Aug 2012
	11:31:58 +0100 (BST)
Message-ID: <5028D79D.3030009@amd.com>
Date: Mon, 13 Aug 2012 12:31:57 +0200
From: Wei Wang <wei.wang2@amd.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:12.0) Gecko/20120421 Thunderbird/12.0
MIME-Version: 1.0
To: Santosh Jodh <santosh.jodh@citrix.com>
References: <9c7609a4fbc117b1600f.1344626094@REDBLD-XS.ad.xensource.com>
In-Reply-To: <9c7609a4fbc117b1600f.1344626094@REDBLD-XS.ad.xensource.com>
X-OriginatorOrg: amd.com
Cc: xen-devel@lists.xensource.com, tim@xen.org, xiantao.zhang@intel.com
Subject: Re: [Xen-devel] [PATCH] dump_p2m_table: For IOMMU
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Santosh,
Please see some outputs below; the gfn still seems incorrect.
Thanks,
Wei


(XEN)     gfn: 000000f0  mfn: 001023ac
(XEN)     gfn: 000000f0  mfn: 0023f83d
(XEN)     gfn: 000000f0  mfn: 001023ab
(XEN)     gfn: 000000f0  mfn: 0023f83c
(XEN)     gfn: 000000f0  mfn: 001023aa
(XEN)     gfn: 000000f0  mfn: 0023f83b
(XEN)     gfn: 000000f0  mfn: 001023a9
(XEN)     gfn: 000000f0  mfn: 0023f83a
(XEN)     gfn: 000000f0  mfn: 001023a8
(XEN)     gfn: 000000f0  mfn: 0023f839
(XEN)     gfn: 000000f0  mfn: 001023a7
(XEN)     gfn: 000000f0  mfn: 0023f838
(XEN)     gfn: 000000f0  mfn: 001023a6
(XEN)     gfn: 000000f0  mfn: 0023f837
(XEN)     gfn: 000000f0  mfn: 001023a5
(XEN)     gfn: 000000f0  mfn: 0023f836
(XEN)     gfn: 000000f0  mfn: 001023a4
(XEN)     gfn: 000000f0  mfn: 0023f835
(XEN)     gfn: 000000f0  mfn: 001023a3
(XEN)     gfn: 000000f0  mfn: 0023f834
(XEN)     gfn: 000000f0  mfn: 001023a2
(XEN)     gfn: 000000f0  mfn: 0023f833
(XEN)     gfn: 000000f0  mfn: 001023a1
(XEN)     gfn: 000000f0  mfn: 0023f832
(XEN)     gfn: 000000f0  mfn: 001023a0
(XEN)     gfn: 000000f0  mfn: 0023f831
(XEN)     gfn: 000000f0  mfn: 0010239f
(XEN)     gfn: 000000f0  mfn: 0023f830
(XEN)     gfn: 000000f0  mfn: 0010239e
(XEN)     gfn: 000000f0  mfn: 0023f82f
(XEN)     gfn: 000000f0  mfn: 0010239d
(XEN)     gfn: 000000f0  mfn: 0023f82e
(XEN)     gfn: 000000f0  mfn: 0010239c
(XEN)     gfn: 000000f0  mfn: 0023f82d
(XEN)     gfn: 000000f0  mfn: 0010239b
(XEN)     gfn: 000000f0  mfn: 0023f82c
(XEN)     gfn: 000000f0  mfn: 0010239a
(XEN)     gfn: 000000f0  mfn: 0023f82b
(XEN)     gfn: 000000f0  mfn: 00102399
(XEN)     gfn: 000000f0  mfn: 0023f82a
(XEN)     gfn: 000000f0  mfn: 00102398
(XEN)     gfn: 000000f0  mfn: 0023f829
(XEN)     gfn: 000000f0  mfn: 00102397
(XEN)     gfn: 000000f0  mfn: 0023f828
(XEN)     gfn: 000000f0  mfn: 00102396
(XEN)     gfn: 000000f0  mfn: 0023f827
(XEN)     gfn: 000000f0  mfn: 00102395
(XEN)     gfn: 000000f0  mfn: 0023f826
(XEN)     gfn: 000000f0  mfn: 00102394
(XEN)     gfn: 000000f0  mfn: 0023f825
(XEN)     gfn: 000000f0  mfn: 00102393
(XEN)     gfn: 000000f0  mfn: 0023f824
(XEN)     gfn: 000000f0  mfn: 00102392
(XEN)     gfn: 000000f0  mfn: 0023f823
(XEN)     gfn: 000000f0  mfn: 00102391
(XEN)     gfn: 000000f0  mfn: 0023f822
(XEN)     gfn: 000000f0  mfn: 00102390
(XEN)     gfn: 000000f0  mfn: 0023f821
(XEN)     gfn: 000000f0  mfn: 0010238f
(XEN)     gfn: 000000f0  mfn: 0023f820
(XEN)     gfn: 000000f0  mfn: 0010238e
(XEN)     gfn: 000000f0  mfn: 0023f81f
(XEN)     gfn: 000000f0  mfn: 0010238d
(XEN)     gfn: 000000f0  mfn: 0023f81e
(XEN)     gfn: 000000f0  mfn: 0010238c
(XEN)     gfn: 000000f0  mfn: 0023f81d
(XEN)     gfn: 000000f0  mfn: 0010238b
(XEN)     gfn: 000000f0  mfn: 0023f81c
(XEN)     gfn: 000000f0  mfn: 0010238a
(XEN)     gfn: 000000f0  mfn: 0023f81b
(XEN)     gfn: 000000f0  mfn: 00102389
(XEN)     gfn: 000000f0  mfn: 0023f81a
(XEN)     gfn: 000000f0  mfn: 00102388
(XEN)     gfn: 000000f0  mfn: 0023f819
(XEN)     gfn: 000000f0  mfn: 00102387
(XEN)     gfn: 000000f0  mfn: 0023f818
(XEN)     gfn: 000000f0  mfn: 00102386
(XEN)     gfn: 000000f0  mfn: 0023f817
(XEN)     gfn: 000000f0  mfn: 00102385
(XEN)     gfn: 000000f0  mfn: 0023f816
(XEN)     gfn: 000000f0  mfn: 00102384
(XEN)     gfn: 000000f0  mfn: 0023f815
(XEN)     gfn: 000000f0  mfn: 00102383
(XEN)     gfn: 000000f0  mfn: 0023f814
(XEN)     gfn: 000000f0  mfn: 00102382
(XEN)     gfn: 000000f0  mfn: 0023f813
(XEN)     gfn: 000000f0  mfn: 00102381
(XEN)     gfn: 000000f0  mfn: 0023f812
(XEN)     gfn: 000000f0  mfn: 00102380
(XEN)     gfn: 000000f0  mfn: 0023f811
(XEN)     gfn: 000000f0  mfn: 0010217f
(XEN)     gfn: 000000f0  mfn: 0023f810
(XEN)     gfn: 000000f0  mfn: 0010217e
(XEN)     gfn: 000000f0  mfn: 0023f80f
(XEN)     gfn: 000000f0  mfn: 0010217d
(XEN)     gfn: 000000f0  mfn: 0023f80e
(XEN)     gfn: 000000f0  mfn: 0010217c
(XEN)     gfn: 000000f0  mfn: 0023f80d
(XEN)     gfn: 000000f0  mfn: 0010217b
(XEN)     gfn: 000000f0  mfn: 0023f80c
(XEN)     gfn: 000000f0  mfn: 0010217a
(XEN)     gfn: 000000f0  mfn: 0023f80b
(XEN)     gfn: 000000f0  mfn: 00102179
(XEN)     gfn: 000000fc  mfn: 0023f806
(XEN)     gfn: 000000fc  mfn: 0023f809
(XEN)     gfn: 000000fc  mfn: 001025f1
(XEN)     gfn: 000000fc  mfn: 0023f807
(XEN)     gfn: 000000fc  mfn: 001025f0
(XEN)     gfn: 000000fc  mfn: 001025ef
(XEN)     gfn: 000000fc  mfn: 0023f805
(XEN)     gfn: 000000fc  mfn: 001025ee
(XEN)     gfn: 000000fc  mfn: 0023f804
(XEN)     gfn: 000000fc  mfn: 001025ed
(XEN)     gfn: 000000fc  mfn: 0023f803
(XEN)     gfn: 000000fc  mfn: 001025ec
(XEN)     gfn: 000000fc  mfn: 0023f802
(XEN)     gfn: 000000fc  mfn: 001025eb
(XEN)     gfn: 000000fc  mfn: 0023f801
(XEN)     gfn: 000000fc  mfn: 001025ea
(XEN)     gfn: 000000fc  mfn: 0023f800
(XEN)     gfn: 000000fe  mfn: 000812b0
(XEN)     gfn: 000000fe  mfn: 0010a085
(XEN)     gfn: 000000fe  mfn: 0021f60c
(XEN)     gfn: 000000fe  mfn: 0010a084
(XEN)     gfn: 000000fe  mfn: 0021f60b
(XEN)     gfn: 000000fe  mfn: 0010a083



On 08/10/2012 09:14 PM, Santosh Jodh wrote:
> New key handler 'o' to dump the IOMMU p2m table for each domain.
> Skips dumping table for domain0.
> Intel and AMD specific iommu_ops handler for dumping p2m table.
>
> Signed-off-by: Santosh Jodh <santosh.jodh@citrix.com>
>
> diff -r 472fc515a463 -r 9c7609a4fbc1 xen/drivers/passthrough/amd/pci_amd_iommu.c
> --- a/xen/drivers/passthrough/amd/pci_amd_iommu.c	Tue Aug 07 18:37:31 2012 +0100
> +++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c	Fri Aug 10 08:19:58 2012 -0700
> @@ -22,6 +22,7 @@
>   #include <xen/pci.h>
>   #include <xen/pci_regs.h>
>   #include <xen/paging.h>
> +#include <xen/softirq.h>
>   #include <asm/hvm/iommu.h>
>   #include <asm/amd-iommu.h>
>   #include <asm/hvm/svm/amd-iommu-proto.h>
> @@ -512,6 +513,80 @@ static int amd_iommu_group_id(u16 seg, u
>
>   #include<asm/io_apic.h>
>
> +static void amd_dump_p2m_table_level(struct page_info* pg, int level,
> +                                     paddr_t gpa, int indent)
> +{
> +    paddr_t address;
> +    void *table_vaddr, *pde;
> +    paddr_t next_table_maddr;
> +    int index, next_level, present;
> +    u32 *entry;
> +
> +    if ( level < 1 )
> +        return;
> +
> +    table_vaddr = __map_domain_page(pg);
> +    if ( table_vaddr == NULL )
> +    {
> +        printk("Failed to map IOMMU domain page %"PRIpaddr"\n",
> +                page_to_maddr(pg));
> +        return;
> +    }
> +
> +    for ( index = 0; index < PTE_PER_TABLE_SIZE; index++ )
> +    {
> +        if ( !(index % 2) )
> +            process_pending_softirqs();
> +
> +        pde = table_vaddr + (index * IOMMU_PAGE_TABLE_ENTRY_SIZE);
> +        next_table_maddr = amd_iommu_get_next_table_from_pte(pde);
> +        entry = (u32*)pde;
> +
> +        present = get_field_from_reg_u32(entry[0],
> +                                         IOMMU_PDE_PRESENT_MASK,
> +                                         IOMMU_PDE_PRESENT_SHIFT);
> +
> +        if ( !present )
> +            continue;
> +
> +        next_level = get_field_from_reg_u32(entry[0],
> +                                            IOMMU_PDE_NEXT_LEVEL_MASK,
> +                                            IOMMU_PDE_NEXT_LEVEL_SHIFT);
> +
> +        address = gpa + amd_offset_level_address(index, level);
> +        if ( (next_table_maddr != 0) && (next_level != 0) )
> +        {
> +            amd_dump_p2m_table_level(
> +                maddr_to_page(next_table_maddr), level - 1,
> +                address, indent + 1);
> +        }
> +        else
> +        {
> +            int i;
> +
> +            for ( i = 0; i < indent; i++ )
> +                printk("  ");
> +
> +            printk("gfn: %08lx  mfn: %08lx\n",
> +                   (unsigned long)PFN_DOWN(address),
> +                   (unsigned long)PFN_DOWN(next_table_maddr));
> +        }
> +    }
> +
> +    unmap_domain_page(table_vaddr);
> +}
> +
> +static void amd_dump_p2m_table(struct domain *d)
> +{
> +    struct hvm_iommu *hd  = domain_hvm_iommu(d);
> +
> +    if ( !hd->root_table )
> +        return;
> +
> +    printk("p2m table has %d levels\n", hd->paging_mode);
> +    amd_dump_p2m_table_level(hd->root_table, hd->paging_mode, 0, 0);
> +}
> +
>   const struct iommu_ops amd_iommu_ops = {
>       .init = amd_iommu_domain_init,
>       .dom0_init = amd_iommu_dom0_init,
> @@ -531,4 +606,5 @@ const struct iommu_ops amd_iommu_ops = {
>       .resume = amd_iommu_resume,
>       .share_p2m = amd_iommu_share_p2m,
>       .crash_shutdown = amd_iommu_suspend,
> +    .dump_p2m_table = amd_dump_p2m_table,
>   };
> diff -r 472fc515a463 -r 9c7609a4fbc1 xen/drivers/passthrough/iommu.c
> --- a/xen/drivers/passthrough/iommu.c	Tue Aug 07 18:37:31 2012 +0100
> +++ b/xen/drivers/passthrough/iommu.c	Fri Aug 10 08:19:58 2012 -0700
> @@ -18,11 +18,13 @@
>   #include <asm/hvm/iommu.h>
>   #include <xen/paging.h>
>   #include <xen/guest_access.h>
> +#include <xen/keyhandler.h>
>   #include <xen/softirq.h>
>   #include <xsm/xsm.h>
>
>   static void parse_iommu_param(char *s);
>   static int iommu_populate_page_table(struct domain *d);
> +static void iommu_dump_p2m_table(unsigned char key);
>
>   /*
>    * The 'iommu' parameter enables the IOMMU.  Optional comma separated
> @@ -54,6 +56,12 @@ bool_t __read_mostly amd_iommu_perdev_in
>
>   DEFINE_PER_CPU(bool_t, iommu_dont_flush_iotlb);
>
> +static struct keyhandler iommu_p2m_table = {
> +    .diagnostic = 0,
> +    .u.fn = iommu_dump_p2m_table,
> +    .desc = "dump iommu p2m table"
> +};
> +
>   static void __init parse_iommu_param(char *s)
>   {
>       char *ss;
> @@ -119,6 +127,7 @@ void __init iommu_dom0_init(struct domai
>       if ( !iommu_enabled )
>           return;
>
> +    register_keyhandler('o', &iommu_p2m_table);
>       d->need_iommu = !!iommu_dom0_strict;
>       if ( need_iommu(d) )
>       {
> @@ -654,6 +663,34 @@ int iommu_do_domctl(
>       return ret;
>   }
>
> +static void iommu_dump_p2m_table(unsigned char key)
> +{
> +    struct domain *d;
> +    const struct iommu_ops *ops;
> +
> +    if ( !iommu_enabled )
> +    {
> +        printk("IOMMU not enabled!\n");
> +        return;
> +    }
> +
> +    ops = iommu_get_ops();
> +    for_each_domain(d)
> +    {
> +        if ( !d->domain_id )
> +            continue;
> +
> +        if ( iommu_use_hap_pt(d) )
> +        {
> +            printk("\ndomain%d IOMMU p2m table shared with MMU: \n", d->domain_id);
> +            continue;
> +        }
> +
> +        printk("\ndomain%d IOMMU p2m table: \n", d->domain_id);
> +        ops->dump_p2m_table(d);
> +    }
> +}
> +
>   /*
>    * Local variables:
>    * mode: C
> diff -r 472fc515a463 -r 9c7609a4fbc1 xen/drivers/passthrough/vtd/iommu.c
> --- a/xen/drivers/passthrough/vtd/iommu.c	Tue Aug 07 18:37:31 2012 +0100
> +++ b/xen/drivers/passthrough/vtd/iommu.c	Fri Aug 10 08:19:58 2012 -0700
> @@ -31,6 +31,7 @@
>   #include <xen/pci.h>
>   #include <xen/pci_regs.h>
>   #include <xen/keyhandler.h>
> +#include <xen/softirq.h>
>   #include <asm/msi.h>
>   #include <asm/irq.h>
>   #if defined(__i386__) || defined(__x86_64__)
> @@ -2365,6 +2366,71 @@ static void vtd_resume(void)
>       }
>   }
>
> +static void vtd_dump_p2m_table_level(paddr_t pt_maddr, int level, paddr_t gpa,
> +                                     int indent)
> +{
> +    paddr_t address;
> +    int i;
> +    struct dma_pte *pt_vaddr, *pte;
> +    int next_level;
> +
> +    if ( pt_maddr == 0 )
> +        return;
> +
> +    pt_vaddr = map_vtd_domain_page(pt_maddr);
> +    if ( pt_vaddr == NULL )
> +    {
> +        printk("Failed to map VT-D domain page %"PRIpaddr"\n", pt_maddr);
> +        return;
> +    }
> +
> +    next_level = level - 1;
> +    for ( i = 0; i < PTE_NUM; i++ )
> +    {
> +        if ( !(i % 2) )
> +            process_pending_softirqs();
> +
> +        pte = &pt_vaddr[i];
> +        if ( !dma_pte_present(*pte) )
> +            continue;
> +
> +        address = gpa + offset_level_address(i, level);
> +        if ( next_level >= 1 )
> +        {
> +            vtd_dump_p2m_table_level(dma_pte_addr(*pte), next_level,
> +                                     address, indent + 1);
> +        }
> +        else
> +        {
> +            int j;
> +
> +            for ( j = 0; j < indent; j++ )
> +                printk("  ");
> +
> +            printk("gfn: %08lx mfn: %08lx super=%d rd=%d wr=%d\n",
> +                   (unsigned long)(address >> PAGE_SHIFT_4K),
> +                   (unsigned long)(pte->val >> PAGE_SHIFT_4K),
> +                   dma_pte_superpage(*pte) ? 1 : 0,
> +                   dma_pte_read(*pte) ? 1 : 0,
> +                   dma_pte_write(*pte) ? 1 : 0);
> +        }
> +    }
> +
> +    unmap_vtd_domain_page(pt_vaddr);
> +}
> +
> +static void vtd_dump_p2m_table(struct domain *d)
> +{
> +    struct hvm_iommu *hd;
> +
> +    if ( list_empty(&acpi_drhd_units) )
> +        return;
> +
> +    hd = domain_hvm_iommu(d);
> +    printk("p2m table has %d levels\n", agaw_to_level(hd->agaw));
> +    vtd_dump_p2m_table_level(hd->pgd_maddr, agaw_to_level(hd->agaw), 0, 0);
> +}
> +
>   const struct iommu_ops intel_iommu_ops = {
>       .init = intel_iommu_domain_init,
>       .dom0_init = intel_iommu_dom0_init,
> @@ -2387,6 +2453,7 @@ const struct iommu_ops intel_iommu_ops =
>       .crash_shutdown = vtd_crash_shutdown,
>       .iotlb_flush = intel_iommu_iotlb_flush,
>       .iotlb_flush_all = intel_iommu_iotlb_flush_all,
> +    .dump_p2m_table = vtd_dump_p2m_table,
>   };
>
>   /*
> diff -r 472fc515a463 -r 9c7609a4fbc1 xen/drivers/passthrough/vtd/iommu.h
> --- a/xen/drivers/passthrough/vtd/iommu.h	Tue Aug 07 18:37:31 2012 +0100
> +++ b/xen/drivers/passthrough/vtd/iommu.h	Fri Aug 10 08:19:58 2012 -0700
> @@ -248,6 +248,8 @@ struct context_entry {
>   #define level_to_offset_bits(l) (12 + (l - 1) * LEVEL_STRIDE)
>   #define address_level_offset(addr, level) \
>               ((addr >> level_to_offset_bits(level)) & LEVEL_MASK)
> +#define offset_level_address(offset, level) \
> +            ((u64)(offset) << level_to_offset_bits(level))
>   #define level_mask(l) (((u64)(-1)) << level_to_offset_bits(l))
>   #define level_size(l) (1 << level_to_offset_bits(l))
>   #define align_to_level(addr, l) ((addr + level_size(l) - 1) & level_mask(l))
> @@ -277,6 +279,9 @@ struct dma_pte {
>   #define dma_set_pte_addr(p, addr) do {\
>               (p).val |= ((addr) & PAGE_MASK_4K); } while (0)
>   #define dma_pte_present(p) (((p).val & 3) != 0)
> +#define dma_pte_superpage(p) (((p).val & (1 << 7)) != 0)
> +#define dma_pte_read(p) (((p).val & DMA_PTE_READ) != 0)
> +#define dma_pte_write(p) (((p).val & DMA_PTE_WRITE) != 0)
>
>   /* interrupt remap entry */
>   struct iremap_entry {
> diff -r 472fc515a463 -r 9c7609a4fbc1 xen/include/asm-x86/hvm/svm/amd-iommu-defs.h
> --- a/xen/include/asm-x86/hvm/svm/amd-iommu-defs.h	Tue Aug 07 18:37:31 2012 +0100
> +++ b/xen/include/asm-x86/hvm/svm/amd-iommu-defs.h	Fri Aug 10 08:19:58 2012 -0700
> @@ -38,6 +38,10 @@
>   #define PTE_PER_TABLE_ALLOC(entries)	\
>   	PAGE_SIZE * (PTE_PER_TABLE_ALIGN(entries) >> PTE_PER_TABLE_SHIFT)
>
> +#define amd_offset_level_address(offset, level) \
> +      	((u64)(offset) << ((PTE_PER_TABLE_SHIFT * \
> +                             (level - IOMMU_PAGING_MODE_LEVEL_1))))
> +
>   #define PCI_MIN_CAP_OFFSET	0x40
>   #define PCI_MAX_CAP_BLOCKS	48
>   #define PCI_CAP_PTR_MASK	0xFC
> diff -r 472fc515a463 -r 9c7609a4fbc1 xen/include/xen/iommu.h
> --- a/xen/include/xen/iommu.h	Tue Aug 07 18:37:31 2012 +0100
> +++ b/xen/include/xen/iommu.h	Fri Aug 10 08:19:58 2012 -0700
> @@ -141,6 +141,7 @@ struct iommu_ops {
>       void (*crash_shutdown)(void);
>       void (*iotlb_flush)(struct domain *d, unsigned long gfn, unsigned int page_count);
>       void (*iotlb_flush_all)(struct domain *d);
> +    void (*dump_p2m_table)(struct domain *d);
>   };
>
>   void iommu_update_ire_from_apic(unsigned int apic, unsigned int reg, unsigned int value);
>



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

(XEN)     gfn: 000000f0  mfn: 0023f81a
(XEN)     gfn: 000000f0  mfn: 00102388
(XEN)     gfn: 000000f0  mfn: 0023f819
(XEN)     gfn: 000000f0  mfn: 00102387
(XEN)     gfn: 000000f0  mfn: 0023f818
(XEN)     gfn: 000000f0  mfn: 00102386
(XEN)     gfn: 000000f0  mfn: 0023f817
(XEN)     gfn: 000000f0  mfn: 00102385
(XEN)     gfn: 000000f0  mfn: 0023f816
(XEN)     gfn: 000000f0  mfn: 00102384
(XEN)     gfn: 000000f0  mfn: 0023f815
(XEN)     gfn: 000000f0  mfn: 00102383
(XEN)     gfn: 000000f0  mfn: 0023f814
(XEN)     gfn: 000000f0  mfn: 00102382
(XEN)     gfn: 000000f0  mfn: 0023f813
(XEN)     gfn: 000000f0  mfn: 00102381
(XEN)     gfn: 000000f0  mfn: 0023f812
(XEN)     gfn: 000000f0  mfn: 00102380
(XEN)     gfn: 000000f0  mfn: 0023f811
(XEN)     gfn: 000000f0  mfn: 0010217f
(XEN)     gfn: 000000f0  mfn: 0023f810
(XEN)     gfn: 000000f0  mfn: 0010217e
(XEN)     gfn: 000000f0  mfn: 0023f80f
(XEN)     gfn: 000000f0  mfn: 0010217d
(XEN)     gfn: 000000f0  mfn: 0023f80e
(XEN)     gfn: 000000f0  mfn: 0010217c
(XEN)     gfn: 000000f0  mfn: 0023f80d
(XEN)     gfn: 000000f0  mfn: 0010217b
(XEN)     gfn: 000000f0  mfn: 0023f80c
(XEN)     gfn: 000000f0  mfn: 0010217a
(XEN)     gfn: 000000f0  mfn: 0023f80b
(XEN)     gfn: 000000f0  mfn: 00102179
(XEN)     gfn: 000000fc  mfn: 0023f806
(XEN)     gfn: 000000fc  mfn: 0023f809
(XEN)     gfn: 000000fc  mfn: 001025f1
(XEN)     gfn: 000000fc  mfn: 0023f807
(XEN)     gfn: 000000fc  mfn: 001025f0
(XEN)     gfn: 000000fc  mfn: 001025ef
(XEN)     gfn: 000000fc  mfn: 0023f805
(XEN)     gfn: 000000fc  mfn: 001025ee
(XEN)     gfn: 000000fc  mfn: 0023f804
(XEN)     gfn: 000000fc  mfn: 001025ed
(XEN)     gfn: 000000fc  mfn: 0023f803
(XEN)     gfn: 000000fc  mfn: 001025ec
(XEN)     gfn: 000000fc  mfn: 0023f802
(XEN)     gfn: 000000fc  mfn: 001025eb
(XEN)     gfn: 000000fc  mfn: 0023f801
(XEN)     gfn: 000000fc  mfn: 001025ea
(XEN)     gfn: 000000fc  mfn: 0023f800
(XEN)     gfn: 000000fe  mfn: 000812b0
(XEN)     gfn: 000000fe  mfn: 0010a085
(XEN)     gfn: 000000fe  mfn: 0021f60c
(XEN)     gfn: 000000fe  mfn: 0010a084
(XEN)     gfn: 000000fe  mfn: 0021f60b
(XEN)     gfn: 000000fe  mfn: 0010a083
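[Editorial note] One hypothesis for why the gfn column collapses like this (my assumption, not confirmed in the thread): at the leaf level the patch computes the address via amd_offset_level_address(index, level), which shifts the index by PTE_PER_TABLE_SHIFT * (level - 1). At level 1 that shift is zero, so the page-table index is added to the gpa as a byte offset without a PAGE_SHIFT term, and PFN_DOWN() then maps every entry of a leaf table onto the same gfn. A quick sketch (constants and the "fixed" variant are assumptions, not code from the patch):

```python
PAGE_SHIFT = 12
PTE_PER_TABLE_SHIFT = 9  # assumed: 512 entries per table

def offset_buggy(index, level):
    # as in the posted patch: no PAGE_SHIFT term at the leaf level
    return index << (PTE_PER_TABLE_SHIFT * (level - 1))

def offset_fixed(index, level):
    # hypothetical fix: scale the index by the page size as well
    return index << (PAGE_SHIFT + PTE_PER_TABLE_SHIFT * (level - 1))

gpa = 0xf0 << PAGE_SHIFT  # leaf table starting at gfn 0xf0, as in the dump
buggy = {(gpa + offset_buggy(i, 1)) >> PAGE_SHIFT for i in range(512)}
fixed = {(gpa + offset_fixed(i, 1)) >> PAGE_SHIFT for i in range(512)}
print(len(buggy), len(fixed))  # 1 512: all 512 entries collapse to gfn 0xf0
```

This matches the dump above, where the mfn varies per entry while the gfn stays at 000000f0.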



On 08/10/2012 09:14 PM, Santosh Jodh wrote:
> New key handler 'o' to dump the IOMMU p2m table for each domain.
> Skips dumping table for domain0.
> Intel and AMD specific iommu_ops handler for dumping p2m table.
>
> Signed-off-by: Santosh Jodh <santosh.jodh@citrix.com>
>
> diff -r 472fc515a463 -r 9c7609a4fbc1 xen/drivers/passthrough/amd/pci_amd_iommu.c
> --- a/xen/drivers/passthrough/amd/pci_amd_iommu.c	Tue Aug 07 18:37:31 2012 +0100
> +++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c	Fri Aug 10 08:19:58 2012 -0700
> @@ -22,6 +22,7 @@
>   #include <xen/pci.h>
>   #include <xen/pci_regs.h>
>   #include <xen/paging.h>
> +#include <xen/softirq.h>
>   #include <asm/hvm/iommu.h>
>   #include <asm/amd-iommu.h>
>   #include <asm/hvm/svm/amd-iommu-proto.h>
> @@ -512,6 +513,80 @@ static int amd_iommu_group_id(u16 seg, u
>
>   #include <asm/io_apic.h>
>
> +static void amd_dump_p2m_table_level(struct page_info* pg, int level,
> +                                     paddr_t gpa, int indent)
> +{
> +    paddr_t address;
> +    void *table_vaddr, *pde;
> +    paddr_t next_table_maddr;
> +    int index, next_level, present;
> +    u32 *entry;
> +
> +    if ( level < 1 )
> +        return;
> +
> +    table_vaddr = __map_domain_page(pg);
> +    if ( table_vaddr == NULL )
> +    {
> +        printk("Failed to map IOMMU domain page %"PRIpaddr"\n",
> +                page_to_maddr(pg));
> +        return;
> +    }
> +
> +    for ( index = 0; index < PTE_PER_TABLE_SIZE; index++ )
> +    {
> +        if ( !(index % 2) )
> +            process_pending_softirqs();
> +
> +        pde = table_vaddr + (index * IOMMU_PAGE_TABLE_ENTRY_SIZE);
> +        next_table_maddr = amd_iommu_get_next_table_from_pte(pde);
> +        entry = (u32*)pde;
> +
> +        present = get_field_from_reg_u32(entry[0],
> +                                         IOMMU_PDE_PRESENT_MASK,
> +                                         IOMMU_PDE_PRESENT_SHIFT);
> +
> +        if ( !present )
> +            continue;
> +
> +        next_level = get_field_from_reg_u32(entry[0],
> +                                            IOMMU_PDE_NEXT_LEVEL_MASK,
> +                                            IOMMU_PDE_NEXT_LEVEL_SHIFT);
> +
> +        address = gpa + amd_offset_level_address(index, level);
> +        if ( (next_table_maddr != 0) && (next_level != 0) )
> +        {
> +            amd_dump_p2m_table_level(
> +                maddr_to_page(next_table_maddr), level - 1,
> +                address, indent + 1);
> +        }
> +        else
> +        {
> +            int i;
> +
> +            for ( i = 0; i < indent; i++ )
> +                printk("  ");
> +
> +            printk("gfn: %08lx  mfn: %08lx\n",
> +                   (unsigned long)PFN_DOWN(address),
> +                   (unsigned long)PFN_DOWN(next_table_maddr));
> +        }
> +    }
> +
> +    unmap_domain_page(table_vaddr);
> +}
> +
> +static void amd_dump_p2m_table(struct domain *d)
> +{
> +    struct hvm_iommu *hd  = domain_hvm_iommu(d);
> +
> +    if ( !hd->root_table )
> +        return;
> +
> +    printk("p2m table has %d levels\n", hd->paging_mode);
> +    amd_dump_p2m_table_level(hd->root_table, hd->paging_mode, 0, 0);
> +}
> +
>   const struct iommu_ops amd_iommu_ops = {
>       .init = amd_iommu_domain_init,
>       .dom0_init = amd_iommu_dom0_init,
> @@ -531,4 +606,5 @@ const struct iommu_ops amd_iommu_ops = {
>       .resume = amd_iommu_resume,
>       .share_p2m = amd_iommu_share_p2m,
>       .crash_shutdown = amd_iommu_suspend,
> +    .dump_p2m_table = amd_dump_p2m_table,
>   };
> diff -r 472fc515a463 -r 9c7609a4fbc1 xen/drivers/passthrough/iommu.c
> --- a/xen/drivers/passthrough/iommu.c	Tue Aug 07 18:37:31 2012 +0100
> +++ b/xen/drivers/passthrough/iommu.c	Fri Aug 10 08:19:58 2012 -0700
> @@ -18,11 +18,13 @@
>   #include <asm/hvm/iommu.h>
>   #include <xen/paging.h>
>   #include <xen/guest_access.h>
> +#include <xen/keyhandler.h>
>   #include <xen/softirq.h>
>   #include <xsm/xsm.h>
>
>   static void parse_iommu_param(char *s);
>   static int iommu_populate_page_table(struct domain *d);
> +static void iommu_dump_p2m_table(unsigned char key);
>
>   /*
>    * The 'iommu' parameter enables the IOMMU.  Optional comma separated
> @@ -54,6 +56,12 @@ bool_t __read_mostly amd_iommu_perdev_in
>
>   DEFINE_PER_CPU(bool_t, iommu_dont_flush_iotlb);
>
> +static struct keyhandler iommu_p2m_table = {
> +    .diagnostic = 0,
> +    .u.fn = iommu_dump_p2m_table,
> +    .desc = "dump iommu p2m table"
> +};
> +
>   static void __init parse_iommu_param(char *s)
>   {
>       char *ss;
> @@ -119,6 +127,7 @@ void __init iommu_dom0_init(struct domai
>       if ( !iommu_enabled )
>           return;
>
> +    register_keyhandler('o', &iommu_p2m_table);
>       d->need_iommu = !!iommu_dom0_strict;
>       if ( need_iommu(d) )
>       {
> @@ -654,6 +663,34 @@ int iommu_do_domctl(
>       return ret;
>   }
>
> +static void iommu_dump_p2m_table(unsigned char key)
> +{
> +    struct domain *d;
> +    const struct iommu_ops *ops;
> +
> +    if ( !iommu_enabled )
> +    {
> +        printk("IOMMU not enabled!\n");
> +        return;
> +    }
> +
> +    ops = iommu_get_ops();
> +    for_each_domain(d)
> +    {
> +        if ( !d->domain_id )
> +            continue;
> +
> +        if ( iommu_use_hap_pt(d) )
> +        {
> +            printk("\ndomain%d IOMMU p2m table shared with MMU: \n", d->domain_id);
> +            continue;
> +        }
> +
> +        printk("\ndomain%d IOMMU p2m table: \n", d->domain_id);
> +        ops->dump_p2m_table(d);
> +    }
> +}
> +
>   /*
>    * Local variables:
>    * mode: C
> diff -r 472fc515a463 -r 9c7609a4fbc1 xen/drivers/passthrough/vtd/iommu.c
> --- a/xen/drivers/passthrough/vtd/iommu.c	Tue Aug 07 18:37:31 2012 +0100
> +++ b/xen/drivers/passthrough/vtd/iommu.c	Fri Aug 10 08:19:58 2012 -0700
> @@ -31,6 +31,7 @@
>   #include <xen/pci.h>
>   #include <xen/pci_regs.h>
>   #include <xen/keyhandler.h>
> +#include <xen/softirq.h>
>   #include<asm/msi.h>
>   #include<asm/irq.h>
>   #if defined(__i386__) || defined(__x86_64__)
> @@ -2365,6 +2366,71 @@ static void vtd_resume(void)
>       }
>   }
>
> +static void vtd_dump_p2m_table_level(paddr_t pt_maddr, int level, paddr_t gpa,
> +                                     int indent)
> +{
> +    paddr_t address;
> +    int i;
> +    struct dma_pte *pt_vaddr, *pte;
> +    int next_level;
> +
> +    if ( pt_maddr == 0 )
> +        return;
> +
> +    pt_vaddr = map_vtd_domain_page(pt_maddr);
> +    if ( pt_vaddr == NULL )
> +    {
> +        printk("Failed to map VT-D domain page %"PRIpaddr"\n", pt_maddr);
> +        return;
> +    }
> +
> +    next_level = level - 1;
> +    for ( i = 0; i < PTE_NUM; i++ )
> +    {
> +        if ( !(i % 2) )
> +            process_pending_softirqs();
> +
> +        pte = &pt_vaddr[i];
> +        if ( !dma_pte_present(*pte) )
> +            continue;
> +
> +        address = gpa + offset_level_address(i, level);
> +        if ( next_level >= 1 )
> +        {
> +            vtd_dump_p2m_table_level(dma_pte_addr(*pte), next_level,
> +                                     address, indent + 1);
> +        }
> +        else
> +        {
> +            int j;
> +
> +            for ( j = 0; j < indent; j++ )
> +                printk("  ");
> +
> +            printk("gfn: %08lx mfn: %08lx super=%d rd=%d wr=%d\n",
> +                   (unsigned long)(address >> PAGE_SHIFT_4K),
> +                   (unsigned long)(pte->val >> PAGE_SHIFT_4K),
> +                   dma_pte_superpage(*pte) ? 1 : 0,
> +                   dma_pte_read(*pte) ? 1 : 0,
> +                   dma_pte_write(*pte) ? 1 : 0);
> +        }
> +    }
> +
> +    unmap_vtd_domain_page(pt_vaddr);
> +}
> +
> +static void vtd_dump_p2m_table(struct domain *d)
> +{
> +    struct hvm_iommu *hd;
> +
> +    if ( list_empty(&acpi_drhd_units) )
> +        return;
> +
> +    hd = domain_hvm_iommu(d);
> +    printk("p2m table has %d levels\n", agaw_to_level(hd->agaw));
> +    vtd_dump_p2m_table_level(hd->pgd_maddr, agaw_to_level(hd->agaw), 0, 0);
> +}
> +
>   const struct iommu_ops intel_iommu_ops = {
>       .init = intel_iommu_domain_init,
>       .dom0_init = intel_iommu_dom0_init,
> @@ -2387,6 +2453,7 @@ const struct iommu_ops intel_iommu_ops =
>       .crash_shutdown = vtd_crash_shutdown,
>       .iotlb_flush = intel_iommu_iotlb_flush,
>       .iotlb_flush_all = intel_iommu_iotlb_flush_all,
> +    .dump_p2m_table = vtd_dump_p2m_table,
>   };
>
>   /*
> diff -r 472fc515a463 -r 9c7609a4fbc1 xen/drivers/passthrough/vtd/iommu.h
> --- a/xen/drivers/passthrough/vtd/iommu.h	Tue Aug 07 18:37:31 2012 +0100
> +++ b/xen/drivers/passthrough/vtd/iommu.h	Fri Aug 10 08:19:58 2012 -0700
> @@ -248,6 +248,8 @@ struct context_entry {
>   #define level_to_offset_bits(l) (12 + (l - 1) * LEVEL_STRIDE)
>   #define address_level_offset(addr, level) \
>               ((addr >> level_to_offset_bits(level)) & LEVEL_MASK)
> +#define offset_level_address(offset, level) \
> +            ((u64)(offset) << level_to_offset_bits(level))
>   #define level_mask(l) (((u64)(-1)) << level_to_offset_bits(l))
>   #define level_size(l) (1 << level_to_offset_bits(l))
>   #define align_to_level(addr, l) ((addr + level_size(l) - 1) & level_mask(l))
> @@ -277,6 +279,9 @@ struct dma_pte {
>   #define dma_set_pte_addr(p, addr) do {\
>               (p).val |= ((addr) & PAGE_MASK_4K); } while (0)
>   #define dma_pte_present(p) (((p).val & 3) != 0)
> +#define dma_pte_superpage(p) (((p).val & (1 << 7)) != 0)
> +#define dma_pte_read(p) (((p).val & DMA_PTE_READ) != 0)
> +#define dma_pte_write(p) (((p).val & DMA_PTE_WRITE) != 0)
>
>   /* interrupt remap entry */
>   struct iremap_entry {
> diff -r 472fc515a463 -r 9c7609a4fbc1 xen/include/asm-x86/hvm/svm/amd-iommu-defs.h
> --- a/xen/include/asm-x86/hvm/svm/amd-iommu-defs.h	Tue Aug 07 18:37:31 2012 +0100
> +++ b/xen/include/asm-x86/hvm/svm/amd-iommu-defs.h	Fri Aug 10 08:19:58 2012 -0700
> @@ -38,6 +38,10 @@
>   #define PTE_PER_TABLE_ALLOC(entries)	\
>   	PAGE_SIZE * (PTE_PER_TABLE_ALIGN(entries) >> PTE_PER_TABLE_SHIFT)
>
> +#define amd_offset_level_address(offset, level) \
> +      	((u64)(offset) << ((PTE_PER_TABLE_SHIFT * \
> +                             (level - IOMMU_PAGING_MODE_LEVEL_1))))
> +
>   #define PCI_MIN_CAP_OFFSET	0x40
>   #define PCI_MAX_CAP_BLOCKS	48
>   #define PCI_CAP_PTR_MASK	0xFC
> diff -r 472fc515a463 -r 9c7609a4fbc1 xen/include/xen/iommu.h
> --- a/xen/include/xen/iommu.h	Tue Aug 07 18:37:31 2012 +0100
> +++ b/xen/include/xen/iommu.h	Fri Aug 10 08:19:58 2012 -0700
> @@ -141,6 +141,7 @@ struct iommu_ops {
>       void (*crash_shutdown)(void);
>       void (*iotlb_flush)(struct domain *d, unsigned long gfn, unsigned int page_count);
>       void (*iotlb_flush_all)(struct domain *d);
> +    void (*dump_p2m_table)(struct domain *d);
>   };
>
>   void iommu_update_ire_from_apic(unsigned int apic, unsigned int reg, unsigned int value);
>



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 10:44:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 10:44:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0s7r-0000Eu-Mk; Mon, 13 Aug 2012 10:43:59 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T0s7q-0000Ep-8r
	for xen-devel@lists.xensource.com; Mon, 13 Aug 2012 10:43:58 +0000
Received: from [85.158.143.35:55163] by server-2.bemta-4.messagelabs.com id
	09/44-31966-D6AD8205; Mon, 13 Aug 2012 10:43:57 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1344854636!13118843!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg4Njk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30370 invoked from network); 13 Aug 2012 10:43:57 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Aug 2012 10:43:57 -0000
X-IronPort-AV: E=Sophos;i="4.77,759,1336348800"; d="scan'208";a="13977998"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	13 Aug 2012 10:43:56 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Mon, 13 Aug 2012 11:43:56 +0100
Date: Mon, 13 Aug 2012 11:43:28 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Mukesh Rathor <mukesh.rathor@oracle.com>
In-Reply-To: <20120810183348.25e1c973@mantra.us.oracle.com>
Message-ID: <alpine.DEB.2.02.1208131141340.21096@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208061447130.4645@kaball.uk.xensource.com>
	<1344262325-26598-1-git-send-email-stefano.stabellini@eu.citrix.com>
	<20120810183348.25e1c973@mantra.us.oracle.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"Tim Deegan \(3P\)" <Tim.Deegan@citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 1/5] xen: improve changes to
 xen_add_to_physmap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sat, 11 Aug 2012, Mukesh Rathor wrote:
> On Mon, 6 Aug 2012 15:12:01 +0100
> Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
> 
> > This is an incremental patch on top of
> > c0bc926083b5987a3e9944eec2c12ad0580100e2: in order to retain binary
> > compatibility, it is better to introduce foreign_domid as part of a
> > union containing both size and foreign_domid.
> > 
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > ---
> >  xen/include/public/memory.h |   11 +++++++----
> >  1 files changed, 7 insertions(+), 4 deletions(-)
> > 
> > diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
> > index b2adfbe..b0af2fd 100644
> > --- a/xen/include/public/memory.h
> > +++ b/xen/include/public/memory.h
> > @@ -208,8 +208,12 @@ struct xen_add_to_physmap {
> >      /* Which domain to change the mapping for. */
> >      domid_t domid;
> >  
> > -    /* Number of pages to go through for gmfn_range */
> > -    uint16_t    size;
> > +    union {
> > +        /* Number of pages to go through for gmfn_range */
> > +        uint16_t    size;
> > +        /* IFF gmfn_foreign */
> > +        domid_t foreign_domid;
> > +    };
> >  
> >      /* Source mapping space. */
> >  #define XENMAPSPACE_shared_info  0 /* shared info page */
> > @@ -217,8 +221,7 @@ struct xen_add_to_physmap {
> >  #define XENMAPSPACE_gmfn         2 /* GMFN */
> >  #define XENMAPSPACE_gmfn_range   3 /* GMFN range */
> >  #define XENMAPSPACE_gmfn_foreign 4 /* GMFN from another guest */
> > -    uint16_t space;
> > -    domid_t foreign_domid; /* IFF gmfn_foreign */
> > +    unsigned int space;
> >  
> >  #define XENMAPIDX_grant_table_status 0x80000000
> >  
> 
> Is this the final version? I don't see it in today's xen unstable tree!
> I have this in my tree:
> 
> struct xen_add_to_physmap {
>     /* Which domain to change the mapping for. */
>     domid_t domid;
> 
>     /* Number of pages to go through for gmfn_range */
>     uint16_t    size;
> 
>     /* Source mapping space. */
> #define XENMAPSPACE_shared_info 0 /* shared info page */
> #define XENMAPSPACE_grant_table 1 /* grant table page */
> #define XENMAPSPACE_gmfn        2 /* GMFN */
> #define XENMAPSPACE_gmfn_range  3 /* GMFN range */
> #define XENMAPSPACE_gmfn_foreign 4 /* GMFN from another guest */
>     uint16_t space;
>     domid_t foreign_domid;         /* IFF XENMAPSPACE_gmfn_foreign */

We are still discussing what the final version should actually look like,
but the last patch I sent is this one (for the for-4.3 tree):

http://marc.info/?l=xen-devel&m=134460098420461

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 10:48:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 10:48:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0sBq-0000O6-KU; Mon, 13 Aug 2012 10:48:06 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mgorman@suse.de>) id 1T0sBp-0000O1-9y
	for xen-devel@lists.xensource.com; Mon, 13 Aug 2012 10:48:05 +0000
Received: from [85.158.138.51:31251] by server-2.bemta-3.messagelabs.com id
	13/38-17748-46BD8205; Mon, 13 Aug 2012 10:48:04 +0000
X-Env-Sender: mgorman@suse.de
X-Msg-Ref: server-11.tower-174.messagelabs.com!1344854883!27981239!1
X-Originating-IP: [195.135.220.15]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27508 invoked from network); 13 Aug 2012 10:48:03 -0000
Received: from cantor2.suse.de (HELO mx2.suse.de) (195.135.220.15)
	by server-11.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Aug 2012 10:48:03 -0000
Received: from relay1.suse.de (unknown [195.135.220.254])
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(No client certificate requested)
	by mx2.suse.de (Postfix) with ESMTP id 5EFB5A3D8B;
	Mon, 13 Aug 2012 12:47:56 +0200 (CEST)
Date: Mon, 13 Aug 2012 11:47:45 +0100
From: Mel Gorman <mgorman@suse.de>
To: Jeremy Fitzhardinge <jeremy@goop.org>
Message-ID: <20120813104745.GE4177@suse.de>
References: <20120807085554.GF29814@suse.de>
	<20120808.155046.820543563969484712.davem@davemloft.net>
	<20120813102604.GC4177@suse.de>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20120813102604.GC4177@suse.de>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: xen-devel@lists.xensource.com, netdev@vger.kernel.org,
	linux-kernel@vger.kernel.org, Ian.Campbell@eu.citrix.com,
	linux-mm@kvack.org, konrad@darnok.org, akpm@linux-foundation.org,
	David Miller <davem@davemloft.net>
Subject: Re: [Xen-devel] [PATCH] netvm: check for page == NULL when
 propogating the skb->pfmemalloc flag
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Resending to correct Jeremy's address.

On Wed, Aug 08, 2012 at 03:50:46PM -0700, David Miller wrote:
> From: Mel Gorman <mgorman@suse.de>
> Date: Tue, 7 Aug 2012 09:55:55 +0100
> 
> > Commit [c48a11c7: netvm: propagate page->pfmemalloc to skb] is responsible
> > for the following bug triggered by a xen network driver
>  ...
> > The problem is that the xenfront driver is passing a NULL page to
> > __skb_fill_page_desc() which was unexpected. This patch checks that
> > there is a page before dereferencing.
> > 
> > Reported-and-Tested-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > Signed-off-by: Mel Gorman <mgorman@suse.de>
> 
> That call to __skb_fill_page_desc() in xen-netfront.c looks completely bogus.
> It's the only driver passing NULL here.
> 
> That whole song and dance figuring out what to do with the head
> fragment page, depending upon whether the length is greater than the
> RX_COPY_THRESHOLD, is completely unnecessary.
> 
> Just use something like a call to __pskb_pull_tail(skb, len) and all
> that other crap around that area can simply be deleted.

I looked at this for a while but did not see how __pskb_pull_tail()
could be used sensibly; then again, I'm simply not familiar with writing
network device drivers or with Xen.

This messing with RX_COPY_THRESHOLD seems to be related to how the frontend
and backend communicate (maybe some fixed limitation of the xenbus). The
existing code looks like it is trying to take the received fragments and
pass them straight to the backend without copying. I worry that if I try
converting this to __pskb_pull_tail() it would either hit that xenbus
limitation or introduce copying where it is not wanted.

I'm going to have to punt this to Jeremy and the other Xen folk as I'm not
sure what the original intention was and I don't have a Xen setup anywhere
to test any patch. Jeremy, xen folk? 


-- 
Mel Gorman
SUSE Labs

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 10:54:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 10:54:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0sHM-0000Ys-Cz; Mon, 13 Aug 2012 10:53:48 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <joseph.glanville@orionvm.com.au>) id 1T0sHL-0000Yn-Kc
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 10:53:47 +0000
Received: from [85.158.143.35:53541] by server-1.bemta-4.messagelabs.com id
	F9/5A-07754-BBCD8205; Mon, 13 Aug 2012 10:53:47 +0000
X-Env-Sender: joseph.glanville@orionvm.com.au
X-Msg-Ref: server-3.tower-21.messagelabs.com!1344855224!13760940!1
X-Originating-IP: [209.85.214.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15298 invoked from network); 13 Aug 2012 10:53:46 -0000
Received: from mail-ob0-f173.google.com (HELO mail-ob0-f173.google.com)
	(209.85.214.173)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Aug 2012 10:53:46 -0000
Received: by obbta14 with SMTP id ta14so8799305obb.32
	for <xen-devel@lists.xen.org>; Mon, 13 Aug 2012 03:53:44 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=mime-version:x-originating-ip:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type:x-gm-message-state;
	bh=tE7yJdq3VLaNa4zkKq9dW/VrwpLjba89F56ncZo9HBc=;
	b=BBTJXHRKd+xLhCq7X1lRIm1tDaHjy962GDtcl+J6nshTJveLIv2JQwEgJa5wVZ6DoA
	yjT2ROW6TSLf8F+8Vzc2MFso/n2xBtNDCsOSe4onaxCjA2aO3nhcvLGDXg5kWH7OKRe7
	PBfiYPEG5EmtHxbP150IXLSup1DOCKs3DDFWkMA9Gjz5H3ooPG1zY8vGRJKzmLY0xM/D
	vAaSh8o+gVYVWuadjNc/m6QA4V8Y83JI3nPFFhDCUSVo1jI4TsGPEJ7sEqNrSAjDJVcN
	u3bQhssfFM9wTHAjnercC1kpQyHQxtxkFZMgH1pXJNKx5DiwxQ14aT2end6u8KbKZX9F
	8UKA==
MIME-Version: 1.0
Received: by 10.182.164.40 with SMTP id yn8mr10578153obb.40.1344855224494;
	Mon, 13 Aug 2012 03:53:44 -0700 (PDT)
Received: by 10.182.80.200 with HTTP; Mon, 13 Aug 2012 03:53:44 -0700 (PDT)
X-Originating-IP: [121.44.126.22]
In-Reply-To: <6035A0D088A63A46850C3988ED045A4B299F6A9B@BITCOM1.int.sbss.com.au>
References: <6035A0D088A63A46850C3988ED045A4B299F67FA@BITCOM1.int.sbss.com.au>
	<CAOzFzEhna3CaBE28aHVX_ZoNLDEa6AhArHPcB9240Ni4jh9PYA@mail.gmail.com>
	<6035A0D088A63A46850C3988ED045A4B299F6A9B@BITCOM1.int.sbss.com.au>
Date: Mon, 13 Aug 2012 20:53:44 +1000
Message-ID: <CAOzFzEhF+Pb89PNoibBc-9_Db6OUiyG5R8mKkAPi5pns4+CsAg@mail.gmail.com>
From: Joseph Glanville <joseph.glanville@orionvm.com.au>
To: James Harper <james.harper@bendigoit.com.au>
X-Gm-Message-State: ALoCoQn7hyv5uBkv/2z+FrTBc0YCvYjRkKI8qlKQYHVukkTGypu0Xg1l4xGwv5PfEv/cxXaU7QwL
Cc: "linux-bcache@vger.kernel.org" <linux-bcache@vger.kernel.org>,
	Kent Overstreet <koverstreet@google.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] blkback and bcache
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 13 August 2012 18:17, James Harper <james.harper@bendigoit.com.au> wrote:
>>
>> This could very well be the issue I was having, I haven't been able to pull the
>> latest bcache code for a few days (repo down?) but if I can help debug let me
>> know.
>>
>
> Is it Windows or Linux giving you problems? I've only tested Windows so far.
>
> James
> --
> To unsubscribe from this list: send the line "unsubscribe linux-bcache" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html

Both, I wasn't able to get either to work correctly with blkback.

-- 
CTO | Orion Virtualisation Solutions | www.orionvm.com.au
Phone: 1300 56 99 52 | Mobile: 0428 754 846

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 11:09:19 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 11:09:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0sVw-0000qP-QL; Mon, 13 Aug 2012 11:08:52 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T0sVv-0000qK-MT
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 11:08:51 +0000
Received: from [85.158.143.35:28288] by server-1.bemta-4.messagelabs.com id
	47/8F-07754-240E8205; Mon, 13 Aug 2012 11:08:50 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1344856129!13764508!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg4Njk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 477 invoked from network); 13 Aug 2012 11:08:50 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Aug 2012 11:08:50 -0000
X-IronPort-AV: E=Sophos;i="4.77,759,1336348800"; d="scan'208";a="13978572"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	13 Aug 2012 11:08:49 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Mon, 13 Aug 2012 12:08:49 +0100
Date: Mon, 13 Aug 2012 12:08:21 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Jan Beulich <JBeulich@suse.com>
In-Reply-To: <5028E53202000078000946B1@nat28.tlf.novell.com>
Message-ID: <alpine.DEB.2.02.1208131200480.21096@kaball.uk.xensource.com>
References: <5020C24A.3060604@oracle.com>
	<5020F003020000780009322C@nat28.tlf.novell.com>
	<502235E8.9040309@oracle.com>
	<50229B840200007800093A73@nat28.tlf.novell.com>
	<5023860E.7080908@oracle.com>
	<5023AE960200007800093DE8@nat28.tlf.novell.com>
	<502490A7.7020603@oracle.com>
	<502535280200007800094322@nat28.tlf.novell.com>
	<5028B3AB.7060705@oracle.com>
	<5028E53202000078000946B1@nat28.tlf.novell.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Content-Type: multipart/mixed;
	boundary="1342847746-504235113-1344855878=:21096"
Content-ID: <alpine.DEB.2.02.1208131205180.21096@kaball.uk.xensource.com>
Cc: Satish Kantheti <satish.kantheti@oracle.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	Feng Jin <joe.jin@oracle.com>,
	"zhenzhong.duan@oracle.com" <zhenzhong.duan@oracle.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] kernel bootup slow issue on ovm3.1.1
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--1342847746-504235113-1344855878=:21096
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8BIT
Content-ID: <alpine.DEB.2.02.1208131205181.21096@kaball.uk.xensource.com>

On Mon, 13 Aug 2012, Jan Beulich wrote:
> >>> On 13.08.12 at 09:58, "zhenzhong.duan" <zhenzhong.duan@oracle.com> wrote:
> > On 2012-08-10 22:22, Jan Beulich wrote:
> >> Going back to your original mail, I wonder however why this
> >> gets done at all. You said it got there via
> >>
> >> mtrr_aps_init()
> >>   \->  set_mtrr()
> >>       \->  mtrr_work_handler()
> >>
> >> yet this isn't done unconditionally - see the comment before
> >> checking mtrr_aps_delayed_init. Can you find out where the
> >> obviously necessary call(s) to set_mtrr_aps_delayed_init()
> >> come(s) from?
> > At bootup stage, set_mtrr_aps_delayed_init is called by 
> > native_smp_prepare_cpus.
> > mtrr_aps_delayed_init is always set to true for Intel processors in upstream
> > code.
> 
> Indeed, and that (in one form or another) has been done
> virtually forever in Linux. I wonder why the problem wasn't
> noticed (or looked into, if it was noticed) so far.
> 
> As it's going to be rather difficult to convince the Linux folks
> to change their code (plus this wouldn't help with existing
> kernels anyway), we'll need to find a way to improve this in
> the hypervisor.
> 
> One seemingly orthogonal thing would presumably help quite
> a bit on the guest side nevertheless - para-virtualized spin
> locks. I have no idea why this is only being done when running
> pv, but not for pvhvm. Konrad, Stefano?

I tried to use PV spinlocks on PV on HVM guests but I found that:

commit f10cd522c5fbfec9ae3cc01967868c9c2401ed23
Author: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date:   Tue Sep 6 17:41:47 2011 +0100

    xen: disable PV spinlocks on HVM
    
    PV spinlocks cannot possibly work with the current code because they are
    enabled after pvops patching has already been done, and because PV
    spinlocks use a different data structure than native spinlocks so we
    cannot switch between them dynamically. A spinlock that has been taken
    once by the native code (__ticket_spin_lock) cannot be taken by
    __xen_spin_lock even after it has been released.
    
    Reported-and-Tested-by: Stefan Bader <stefan.bader@canonical.com>
    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
    Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>


at that time Jeremy was finishing off his PV ticket locks series, which
has the nice side effect of making it much easier to implement PV on HVM
spin locks, so I decided to wait and append the following patch to his
series:

http://marc.info/?l=xen-devel&m=131846828430409&w=2

that clearly never went upstream.
--1342847746-504235113-1344855878=:21096
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--1342847746-504235113-1344855878=:21096--


From xen-devel-bounces@lists.xen.org Mon Aug 13 11:27:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 11:27:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0snF-000117-IU; Mon, 13 Aug 2012 11:26:45 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lars.kurth.xen@gmail.com>) id 1T0snE-00010y-4x
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 11:26:44 +0000
Received: from [85.158.139.83:21067] by server-1.bemta-5.messagelabs.com id
	34/F9-09980-374E8205; Mon, 13 Aug 2012 11:26:43 +0000
X-Env-Sender: lars.kurth.xen@gmail.com
X-Msg-Ref: server-3.tower-182.messagelabs.com!1344857202!27839605!1
X-Originating-IP: [74.125.83.45]
X-SpamReason: No, hits=0.5 required=7.0 tests=RCVD_BY_IP,RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1345 invoked from network); 13 Aug 2012 11:26:43 -0000
Received: from mail-ee0-f45.google.com (HELO mail-ee0-f45.google.com)
	(74.125.83.45)
	by server-3.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Aug 2012 11:26:43 -0000
Received: by eeke53 with SMTP id e53so1007003eek.32
	for <multiple recipients>; Mon, 13 Aug 2012 04:26:42 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:date:from:reply-to:user-agent:mime-version:to
	:subject:content-type:content-transfer-encoding;
	bh=x8Awr2vA/EcOYv3NP5GGQy+Dk/yNe9dwJVgoXE9ZyjI=;
	b=Uc8kaApRC1ui6DKBaylgxp27uFDtlyXRC6yMTbWd6llv81nICKehX9U5QCoU4lIYrp
	MerVKP1vkmcI2xsNGwuNKns4sM9esz3jB4PkVpKbsmmM/SacSYQeJeqJLnfDMY5cbDZk
	Ocd676t+GM4rhWjRKAvWjdEsbhBXRmkKkTAvrh8UEo87NzR9vIAv9Ko37apuPztqIpD3
	IlzbyPdRNTjeou3f34X62SWcXVCjzgRPwCEdheMHrYAMuKUnhf/qI7cV/CabopWGPlHv
	2/GC/Ee+PdCheBYVzhXYmc0xyot3LgI7KlYPxfQQC51C6b1W7LtzS1de7FvUE3TKiHgl
	UxEA==
Received: by 10.14.181.132 with SMTP id l4mr9400846eem.17.1344857202759;
	Mon, 13 Aug 2012 04:26:42 -0700 (PDT)
Received: from [172.16.26.11] (027fe822.bb.sky.com. [2.127.232.34])
	by mx.google.com with ESMTPS id l42sm15948928eep.1.2012.08.13.04.26.41
	(version=SSLv3 cipher=OTHER); Mon, 13 Aug 2012 04:26:42 -0700 (PDT)
Message-ID: <5028E470.4070105@xen.org>
Date: Mon, 13 Aug 2012 12:26:40 +0100
From: Lars Kurth <lars.kurth@xen.org>
User-Agent: Mozilla/5.0 (Windows NT 6.1;
	rv:14.0) Gecko/20120713 Thunderbird/14.0
MIME-Version: 1.0
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>, 
	xen-users@lists.xen.org
Subject: [Xen-devel] Reminder: Xen Test Day on #xentest tomorrow
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: lars.kurth@xen.org
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi everybody,
just a quick reminder that our first Xen Test Day is on the #xentest IRC 
channel tomorrow. For more details see:
* http://wiki.xen.org/wiki/Xen_Test_Days
* http://wiki.xen.org/wiki/Xen_4.2_RC2_test_instructions
Hope to see you on IRC tomorrow.
Regards
Lars

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 11:27:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 11:27:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0snF-000117-IU; Mon, 13 Aug 2012 11:26:45 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lars.kurth.xen@gmail.com>) id 1T0snE-00010y-4x
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 11:26:44 +0000
Received: from [85.158.139.83:21067] by server-1.bemta-5.messagelabs.com id
	34/F9-09980-374E8205; Mon, 13 Aug 2012 11:26:43 +0000
X-Env-Sender: lars.kurth.xen@gmail.com
X-Msg-Ref: server-3.tower-182.messagelabs.com!1344857202!27839605!1
X-Originating-IP: [74.125.83.45]
X-SpamReason: No, hits=0.5 required=7.0 tests=RCVD_BY_IP,RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1345 invoked from network); 13 Aug 2012 11:26:43 -0000
Received: from mail-ee0-f45.google.com (HELO mail-ee0-f45.google.com)
	(74.125.83.45)
	by server-3.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Aug 2012 11:26:43 -0000
Received: by eeke53 with SMTP id e53so1007003eek.32
	for <multiple recipients>; Mon, 13 Aug 2012 04:26:42 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:date:from:reply-to:user-agent:mime-version:to
	:subject:content-type:content-transfer-encoding;
	bh=x8Awr2vA/EcOYv3NP5GGQy+Dk/yNe9dwJVgoXE9ZyjI=;
	b=Uc8kaApRC1ui6DKBaylgxp27uFDtlyXRC6yMTbWd6llv81nICKehX9U5QCoU4lIYrp
	MerVKP1vkmcI2xsNGwuNKns4sM9esz3jB4PkVpKbsmmM/SacSYQeJeqJLnfDMY5cbDZk
	Ocd676t+GM4rhWjRKAvWjdEsbhBXRmkKkTAvrh8UEo87NzR9vIAv9Ko37apuPztqIpD3
	IlzbyPdRNTjeou3f34X62SWcXVCjzgRPwCEdheMHrYAMuKUnhf/qI7cV/CabopWGPlHv
	2/GC/Ee+PdCheBYVzhXYmc0xyot3LgI7KlYPxfQQC51C6b1W7LtzS1de7FvUE3TKiHgl
	UxEA==
Received: by 10.14.181.132 with SMTP id l4mr9400846eem.17.1344857202759;
	Mon, 13 Aug 2012 04:26:42 -0700 (PDT)
Received: from [172.16.26.11] (027fe822.bb.sky.com. [2.127.232.34])
	by mx.google.com with ESMTPS id l42sm15948928eep.1.2012.08.13.04.26.41
	(version=SSLv3 cipher=OTHER); Mon, 13 Aug 2012 04:26:42 -0700 (PDT)
Message-ID: <5028E470.4070105@xen.org>
Date: Mon, 13 Aug 2012 12:26:40 +0100
From: Lars Kurth <lars.kurth@xen.org>
User-Agent: Mozilla/5.0 (Windows NT 6.1;
	rv:14.0) Gecko/20120713 Thunderbird/14.0
MIME-Version: 1.0
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>, 
	xen-users@lists.xen.org
Subject: [Xen-devel] Reminder: Xen Test Day on #xentest tomorrow
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: lars.kurth@xen.org
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi everybody,
just a quick reminder that our first Xen Test Day is on the #xentest IRC 
channel tomorrow. For more details see:
* http://wiki.xen.org/wiki/Xen_Test_Days
* http://wiki.xen.org/wiki/Xen_4.2_RC2_test_instructions
Hope to see you on IRC tomorrow.
Regards
Lars

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 11:30:47 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 11:30:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0sqp-0001Ib-Rz; Mon, 13 Aug 2012 11:30:27 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T0sqp-0001IS-8N
	for xen-devel@lists.xensource.com; Mon, 13 Aug 2012 11:30:27 +0000
Received: from [85.158.138.51:64820] by server-2.bemta-3.messagelabs.com id
	FA/C2-17748-255E8205; Mon, 13 Aug 2012 11:30:26 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-9.tower-174.messagelabs.com!1344857425!28045572!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg4Njk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9690 invoked from network); 13 Aug 2012 11:30:25 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-9.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Aug 2012 11:30:25 -0000
X-IronPort-AV: E=Sophos;i="4.77,759,1336348800"; d="scan'208";a="13978920"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	13 Aug 2012 11:25:22 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Mon, 13 Aug 2012 12:25:22 +0100
Date: Mon, 13 Aug 2012 12:24:54 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Jan Beulich <JBeulich@suse.com>
In-Reply-To: <5025276B02000078000942BE@nat28.tlf.novell.com>
Message-ID: <alpine.DEB.2.02.1208131210170.21096@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208101222370.21096@kaball.uk.xensource.com>
	<1344600612-10815-5-git-send-email-stefano.stabellini@eu.citrix.com>
	<5025276B02000078000942BE@nat28.tlf.novell.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "Tim Deegan \(3P\)" <Tim.Deegan@citrix.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH v2 5/5] xen: replace XEN_GUEST_HANDLE with
 XEN_GUEST_HANDLE_PARAM when appropriate
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 10 Aug 2012, Jan Beulich wrote:
> >>> On 10.08.12 at 14:10, Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
> > Note: these changes don't make any difference on x86 and ia64.
> 
> I can see your point in doing this in the x86 files nevertheless for
> cosmetic/consistency reasons, but I'm really uncertain about this
> uglification when it's not really necessary (plus it would shrink the
> patch quite a bit).

I don't have a strong opinion on this.
However, it is going to be difficult to enforce XEN_GUEST_HANDLE_PARAM on
parameters anyway (because we don't have a simple way to make it fail at
compile time); if we make the change only in xen/common, it is going to be
even harder.


> > Replace XEN_GUEST_HANDLE with XEN_GUEST_HANDLE_PARAM when it is used as
> > an hypercall argument.
> 
> I didn't look in too close detail, as this isn't intended for the main
> branch yet, but I still wasn't able to spot any conversion method
> in at least one direction (so that internally these can be passed
> around irrespective of their origin).

I thought that guest_handle_cast was supposed to be used to cast
XEN_GUEST_HANDLE_PARAMs into proper structs.
Also copy_from_guest can be used to get the struct from memory.

On the other hand if you mean casting a XEN_GUEST_HANDLE_PARAM into a
XEN_GUEST_HANDLE to pass it to other internal functions, I don't think
there is any point in that because the other internal functions should
also have XEN_GUEST_HANDLE_PARAMs as parameters.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 11:32:19 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 11:32:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0ssP-0001QD-Bn; Mon, 13 Aug 2012 11:32:05 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1T0ssO-0001Q2-EJ
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 11:32:04 +0000
Received: from [85.158.143.99:14508] by server-3.bemta-4.messagelabs.com id
	52/43-09529-3B5E8205; Mon, 13 Aug 2012 11:32:03 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-7.tower-216.messagelabs.com!1344857522!24539333!1
X-Originating-IP: [209.85.215.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8044 invoked from network); 13 Aug 2012 11:32:03 -0000
Received: from mail-ey0-f173.google.com (HELO mail-ey0-f173.google.com)
	(209.85.215.173)
	by server-7.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Aug 2012 11:32:03 -0000
Received: by eaac13 with SMTP id c13so1009283eaa.32
	for <multiple recipients>; Mon, 13 Aug 2012 04:32:02 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:content-type;
	bh=ciieYFeCHWnNm58mz8uXHSn72JZvJOhGEHGrSxdLt3s=;
	b=y2rlWwYyNDuI5lA7AvhVyXnxAM2JTuTM3cFNyvzA6BAVvnDMqd8bCQIyjgN1RPknlS
	RqxgELl4NPGkmRvUsfoeheWAcZaEMoriPoxuB/4k41JdP6qx04Pv3hcXYw0bJWpaVRR7
	pyRbVIR0EEGdAkPE+eMV4isW2XY+4xz7VpEtQRsbw+CSJ1Hnd/AlFdf5pqgXAuD6YRGV
	C4C/sUm9bjC+qklLlFYQBKIxLswj6Vyp86kyoL0nBDsNEsU4pGR/Gle0iDO7Cr+243jD
	MS+rZa2vMqnycBqj48M8DAswERTz1SQluoDGf9EQ+XbRWBbu4umUowTR+kzMquztIA26
	nT6w==
MIME-Version: 1.0
Received: by 10.14.4.201 with SMTP id 49mr3984424eej.0.1344857522837; Mon, 13
	Aug 2012 04:32:02 -0700 (PDT)
Received: by 10.14.215.195 with HTTP; Mon, 13 Aug 2012 04:32:02 -0700 (PDT)
In-Reply-To: <CAFLBxZaM+NrphF2eQ5v8+DVQue5F7CgQh_Yi-byv5dpQva1TMw@mail.gmail.com>
References: <CAFLBxZaM+NrphF2eQ5v8+DVQue5F7CgQh_Yi-byv5dpQva1TMw@mail.gmail.com>
Date: Mon, 13 Aug 2012 12:32:02 +0100
X-Google-Sender-Auth: tbUdu0VYFVWBJMNrij2NJKTCq6I
Message-ID: <CAFLBxZYJ5KLyNhBgKEz7bmJiJXCopg9w4sPo0q4f-N8gMyA2pQ@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: xen-devel@lists.xen.org, xen-users@lists.xen.org
Subject: Re: [Xen-devel] Security discussion poll
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Just a follow-up for those who were on holiday last week: This poll
will be open for another week (until 20 August).  Thank you to
everyone who has already participated.

 -George

On Mon, Aug 6, 2012 at 3:18 PM, George Dunlap
<George.Dunlap@eu.citrix.com> wrote:
> As promised, here is the poll for the security discussion.  As a
> reminder, the purpose of this poll is mainly to see where people's
> attitudes are with respect to the various options, so that we can move
> the discussion forward towards a conclusion.  If you have any
> interest at all in the outcome, please make your voice heard.  I
> have CC'd everyone who has participated in the discussion so far.
>
> The poll will not be secret.  You may fill out the poll anonymously,
> but if you do, your vote will be given less weight (to avoid ballot
> stuffing).  We don't necessarily plan on publishing the individual
> poll responses, but we may do so if we think it would be helpful.
>
> Because of the summer holidays, we will keep the poll open for two
> weeks; we will tabulate the results Monday, August 20.
>
> The poll can be found here:
>
> http://xen.org/polls/xen_dev_2012_security_process.html
>
> Thank you for your time.
>
>  -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 11:42:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 11:42:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0t1k-00023s-2P; Mon, 13 Aug 2012 11:41:44 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1T0t1i-00023l-BL
	for xen-devel@lists.xensource.com; Mon, 13 Aug 2012 11:41:42 +0000
Received: from [85.158.138.51:20290] by server-10.bemta-3.messagelabs.com id
	91/30-20518-5F7E8205; Mon, 13 Aug 2012 11:41:41 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-3.tower-174.messagelabs.com!1344858101!19908673!1
X-Originating-IP: [209.85.215.171]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12012 invoked from network); 13 Aug 2012 11:41:41 -0000
Received: from mail-ey0-f171.google.com (HELO mail-ey0-f171.google.com)
	(209.85.215.171)
	by server-3.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Aug 2012 11:41:41 -0000
Received: by eaah11 with SMTP id h11so971288eaa.30
	for <xen-devel@lists.xensource.com>;
	Mon, 13 Aug 2012 04:41:40 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:user-agent:date:subject:from:to:cc:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=uzvR1QC8UsMfbqnsVNhepFjsBgL+iyu4yULphFeR1P4=;
	b=c53tgMa3QKp8wNhsG4ikqKQAUkirS52gcMH4Im0kED+xutNrS1IOau1mHIo3YfCba0
	2obJ5CvjNTtK/C44hCl+cu3baCOIV0bzaUFjIHopM7rlOQ7nZTXbrNjFArqrK7mANnp9
	YGujMqjbqn45fsDzU6uSvbenEhq/ZlAzZVsc6fMOsIuhPqRlbFKoWhc6gI2SZYqyBzJA
	vCa2LzbtlizM68atCBDZXqLERM4a/GSRu8l6fgTHUr3Hykywanc+6gqyWnsodKYLsxMg
	RfKJNFxTd9vGc7h4/X6lTR+rSTV6D9zsG3xtpUpQ/1sCz9Qe0+elqDf4y8AQSQ60oJ3T
	b5mA==
Received: by 10.14.216.198 with SMTP id g46mr212348eep.32.1344858100753;
	Mon, 13 Aug 2012 04:41:40 -0700 (PDT)
Received: from [192.168.1.3] (host86-149-87-102.range86-149.btcentralplus.com.
	[86.149.87.102])
	by mx.google.com with ESMTPS id y1sm18271869eel.0.2012.08.13.04.41.37
	(version=SSLv3 cipher=OTHER); Mon, 13 Aug 2012 04:41:39 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.33.0.120411
Date: Mon, 13 Aug 2012 12:41:30 +0100
From: Keir Fraser <keir@xen.org>
To: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	Jan Beulich <JBeulich@suse.com>
Message-ID: <CC4EA67A.48A23%keir@xen.org>
Thread-Topic: [Xen-devel] [PATCH v2 5/5] xen: replace XEN_GUEST_HANDLE with
	XEN_GUEST_HANDLE_PARAM when appropriate
Thread-Index: Ac15SJSF91YOFtV0WUaymnT5KeFaSw==
In-Reply-To: <alpine.DEB.2.02.1208131210170.21096@kaball.uk.xensource.com>
Mime-version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"Tim Deegan \(3P\)" <Tim.Deegan@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH v2 5/5] xen: replace XEN_GUEST_HANDLE with
 XEN_GUEST_HANDLE_PARAM when appropriate
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 13/08/2012 12:24, "Stefano Stabellini" <Stefano.Stabellini@eu.citrix.com>
wrote:

>> I can see your point in doing this in the x86 files nevertheless for
>> cosmetic/consistency reasons, but I'm really uncertain about this
>> uglification when it's not really necessary (plus it would shrink the
>> patch quite a bit).
> 
> I don't have a strong opinion on this.
> However it is going to be difficult to enforce XEN_GUEST_HANDLE_PARAM on
> parameters anyway (because we don't have a simple way to make it fail at
> compile time), if we make the change only in xen/commons, it is going to be
> even harder.

It has to be done everywhere, if at all. Else it's an ugly hack job.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 11:42:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 11:42:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0t1k-00023s-2P; Mon, 13 Aug 2012 11:41:44 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1T0t1i-00023l-BL
	for xen-devel@lists.xensource.com; Mon, 13 Aug 2012 11:41:42 +0000
Received: from [85.158.138.51:20290] by server-10.bemta-3.messagelabs.com id
	91/30-20518-5F7E8205; Mon, 13 Aug 2012 11:41:41 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-3.tower-174.messagelabs.com!1344858101!19908673!1
X-Originating-IP: [209.85.215.171]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12012 invoked from network); 13 Aug 2012 11:41:41 -0000
Received: from mail-ey0-f171.google.com (HELO mail-ey0-f171.google.com)
	(209.85.215.171)
	by server-3.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Aug 2012 11:41:41 -0000
Received: by eaah11 with SMTP id h11so971288eaa.30
	for <xen-devel@lists.xensource.com>;
	Mon, 13 Aug 2012 04:41:40 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:user-agent:date:subject:from:to:cc:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=uzvR1QC8UsMfbqnsVNhepFjsBgL+iyu4yULphFeR1P4=;
	b=c53tgMa3QKp8wNhsG4ikqKQAUkirS52gcMH4Im0kED+xutNrS1IOau1mHIo3YfCba0
	2obJ5CvjNTtK/C44hCl+cu3baCOIV0bzaUFjIHopM7rlOQ7nZTXbrNjFArqrK7mANnp9
	YGujMqjbqn45fsDzU6uSvbenEhq/ZlAzZVsc6fMOsIuhPqRlbFKoWhc6gI2SZYqyBzJA
	vCa2LzbtlizM68atCBDZXqLERM4a/GSRu8l6fgTHUr3Hykywanc+6gqyWnsodKYLsxMg
	RfKJNFxTd9vGc7h4/X6lTR+rSTV6D9zsG3xtpUpQ/1sCz9Qe0+elqDf4y8AQSQ60oJ3T
	b5mA==
Received: by 10.14.216.198 with SMTP id g46mr212348eep.32.1344858100753;
	Mon, 13 Aug 2012 04:41:40 -0700 (PDT)
Received: from [192.168.1.3] (host86-149-87-102.range86-149.btcentralplus.com.
	[86.149.87.102])
	by mx.google.com with ESMTPS id y1sm18271869eel.0.2012.08.13.04.41.37
	(version=SSLv3 cipher=OTHER); Mon, 13 Aug 2012 04:41:39 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.33.0.120411
Date: Mon, 13 Aug 2012 12:41:30 +0100
From: Keir Fraser <keir@xen.org>
To: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	Jan Beulich <JBeulich@suse.com>
Message-ID: <CC4EA67A.48A23%keir@xen.org>
Thread-Topic: [Xen-devel] [PATCH v2 5/5] xen: replace XEN_GUEST_HANDLE with
	XEN_GUEST_HANDLE_PARAM when appropriate
Thread-Index: Ac15SJSF91YOFtV0WUaymnT5KeFaSw==
In-Reply-To: <alpine.DEB.2.02.1208131210170.21096@kaball.uk.xensource.com>
Mime-version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"Tim Deegan \(3P\)" <Tim.Deegan@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH v2 5/5] xen: replace XEN_GUEST_HANDLE with
 XEN_GUEST_HANDLE_PARAM when appropriate
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 13/08/2012 12:24, "Stefano Stabellini" <Stefano.Stabellini@eu.citrix.com>
wrote:

>> I can see your point in doing this in the x86 files nevertheless for
>> cosmetic/consistency reasons, but I'm really uncertain about this
>> uglification when it's not really necessary (plus it would shrink the
>> patch quite a bit).
> 
> I don't have a strong opinion on this.
> However, it is going to be difficult to enforce XEN_GUEST_HANDLE_PARAM on
> parameters anyway (because we don't have a simple way to make it fail at
> compile time); if we make the change only in xen/common, it is going to be
> even harder.

It has to be done everywhere, if at all. Else it's an ugly hack job.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 12:05:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 12:05:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0tOJ-0002ZO-Qx; Mon, 13 Aug 2012 12:05:03 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T0tOI-0002ZI-Vg
	for xen-devel@lists.xensource.com; Mon, 13 Aug 2012 12:05:03 +0000
Received: from [85.158.139.83:19108] by server-12.bemta-5.messagelabs.com id
	F0/55-22359-E6DE8205; Mon, 13 Aug 2012 12:05:02 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-182.messagelabs.com!1344859501!16586851!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12739 invoked from network); 13 Aug 2012 12:05:01 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-182.messagelabs.com with SMTP;
	13 Aug 2012 12:05:01 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 13 Aug 2012 13:05:01 +0100
Message-Id: <5029098A020000780009471C@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Mon, 13 Aug 2012 13:04:58 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Stefano Stabellini" <stefano.stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1208101222370.21096@kaball.uk.xensource.com>
	<1344600612-10815-5-git-send-email-stefano.stabellini@eu.citrix.com>
	<5025276B02000078000942BE@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208131210170.21096@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1208131210170.21096@kaball.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	"Tim Deegan \(3P\)" <Tim.Deegan@citrix.com>
Subject: Re: [Xen-devel] [PATCH v2 5/5] xen: replace XEN_GUEST_HANDLE with
 XEN_GUEST_HANDLE_PARAM when appropriate
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 13.08.12 at 13:24, Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
> On the other hand if you mean casting a XEN_GUEST_HANDLE_PARAM into a
> XEN_GUEST_HANDLE to pass it to other internal functions, I don't think
> there is any point in that because the other internal functions should
> also have XEN_GUEST_HANDLE_PARAMs as parameters.

So you obviously need a cast from "normal" to _PARAM (so that
you can pass embedded fields to functions).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 12:11:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 12:11:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0tUd-0002oQ-LO; Mon, 13 Aug 2012 12:11:35 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T0tUb-0002oK-Ov
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 12:11:33 +0000
Received: from [85.158.138.51:61406] by server-3.bemta-3.messagelabs.com id
	F5/3F-13809-4FEE8205; Mon, 13 Aug 2012 12:11:32 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-174.messagelabs.com!1344859891!28026322!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11266 invoked from network); 13 Aug 2012 12:11:32 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-174.messagelabs.com with SMTP;
	13 Aug 2012 12:11:32 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 13 Aug 2012 13:11:31 +0100
Message-Id: <50290B0C0200007800094750@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Mon, 13 Aug 2012 13:11:24 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part023377FC.2__="
Cc: George Dunlap <George.Dunlap@eu.citrix.com>, Tim Deegan <tim@xen.org>,
	Andres Lagar-Cavilla <andres@lagarcavilla.org>
Subject: [Xen-devel] [PATCH] x86/PoD: fix (un)locking after
	24772:28edc2b31a9b
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part023377FC.2__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

That c/s introduced a double unlock on the out-of-memory error path of
p2m_pod_demand_populate().

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -1075,6 +1075,7 @@ out_of_memory:
     printk("%s: Out of populate-on-demand memory! tot_pages %" PRIu32 " =
pod_entries %" PRIi32 "\n",
            __func__, d->tot_pages, p2m->pod.entry_count);
     domain_crash(d);
+    return -1;
 out_fail:
     pod_unlock(p2m);
     return -1;




--=__Part023377FC.2__=
Content-Type: text/plain; name="x86-PoD-locking-fix-24772.patch"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment; filename="x86-PoD-locking-fix-24772.patch"

x86/PoD: fix (un)locking after 24772:28edc2b31a9b=0A=0AThat c/s introduced =
a double unlock on the out-of-memory error path of=0Ap2m_pod_demand_populat=
e().=0A=0ASigned-off-by: Jan Beulich <jbeulich@suse.com>=0A=0A--- =
a/xen/arch/x86/mm/p2m-pod.c=0A+++ b/xen/arch/x86/mm/p2m-pod.c=0A@@ -1075,6 =
+1075,7 @@ out_of_memory:=0A     printk("%s: Out of populate-on-demand =
memory! tot_pages %" PRIu32 " pod_entries %" PRIi32 "\n",=0A            =
__func__, d->tot_pages, p2m->pod.entry_count);=0A     domain_crash(d);=0A+ =
   return -1;=0A out_fail:=0A     pod_unlock(p2m);=0A     return -1;=0A
--=__Part023377FC.2__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part023377FC.2__=--


From xen-devel-bounces@lists.xen.org Mon Aug 13 12:26:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 12:26:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0tiX-000334-Bi; Mon, 13 Aug 2012 12:25:57 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lars.kurth.xen@gmail.com>) id 1T0tiW-00032z-5k
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 12:25:56 +0000
Received: from [85.158.143.99:2864] by server-3.bemta-4.messagelabs.com id
	E5/91-09529-352F8205; Mon, 13 Aug 2012 12:25:55 +0000
X-Env-Sender: lars.kurth.xen@gmail.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1344860752!20890788!1
X-Originating-IP: [74.125.83.45]
X-SpamReason: No, hits=0.5 required=7.0 tests=RCVD_BY_IP,RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4711 invoked from network); 13 Aug 2012 12:25:53 -0000
Received: from mail-ee0-f45.google.com (HELO mail-ee0-f45.google.com)
	(74.125.83.45)
	by server-6.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Aug 2012 12:25:53 -0000
Received: by eeke53 with SMTP id e53so1027961eek.32
	for <xen-devel@lists.xen.org>; Mon, 13 Aug 2012 05:25:52 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:date:from:reply-to:user-agent:mime-version:to
	:subject:content-type:content-transfer-encoding;
	bh=xinV56fKW8ZQnOXXcjP8V4t9O6xAO/6GJe1SCGxa4Ds=;
	b=kw2klPnYQ/yz1WZUkK0QQOpppbL/dFBWFDzTpm4ujjNUZBF53Fz9CamIuGNWaToCG1
	Piiin8LJrbA4qiRnDGd1DUk73G6cG4mibnrNqubt3JCGQtdot6pLHa0k+7JMLPv05qwm
	LBDTh/N0QlEwMK5P34VN2/Xf1NsLEU+1f81RNjzwUJRwpPxFjvfA7vIty1FADfUScoVn
	Kwni6AykCP3WC0yZyCuTnNuZuDw40pGWCNRRk1q2iZ6wYjN6UpYX6M8x2A6/rJp6E65a
	eFK8R/EkLJjDk3fdQYZludxqZBhRCzgJugIWlzvjC1ZjPpajrjGwXAX/LHtYtbcILrbA
	Hznw==
Received: by 10.14.198.65 with SMTP id u41mr627400een.22.1344860752468;
	Mon, 13 Aug 2012 05:25:52 -0700 (PDT)
Received: from [172.16.26.11] (027fe822.bb.sky.com. [2.127.232.34])
	by mx.google.com with ESMTPS id k41sm18546841eep.13.2012.08.13.05.25.48
	(version=SSLv3 cipher=OTHER); Mon, 13 Aug 2012 05:25:50 -0700 (PDT)
Message-ID: <5028F24A.7090307@xen.org>
Date: Mon, 13 Aug 2012 13:25:46 +0100
From: Lars Kurth <lars.kurth@xen.org>
User-Agent: Mozilla/5.0 (Windows NT 6.1;
	rv:14.0) Gecko/20120713 Thunderbird/14.0
MIME-Version: 1.0
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: [Xen-devel] Reminder: Xen Dev Meeting before XenSummit
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: lars.kurth@xen.org
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi,

just a quick reminder that we have the Xen Dev Meeting on the Sunday 
before XenSummit. The event is invite-only, but you can request an 
invite. For more information, see 
http://wiki.xen.org/wiki/Xen_Maintainer,_Committer_and_Developer_Meeting/XenSummit_NA_2012

We are almost full, but can accommodate another 5 or so, 10 at most.

Regarding the agenda: I originally planned to propose one, but attendees 
gave feedback that this is not the right approach. Instead, today I will 
go through the proposed topics and create a survey for attendees. Each 
attendee can choose 10 topics in the survey (about as many as we can 
handle in the time we have). Based on the results, I will create an 
ordered list of topics and we will work through them in order.

Best Regards
Lars



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 12:47:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 12:47:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0u3G-0003ZC-7I; Mon, 13 Aug 2012 12:47:22 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@eu.citrix.com>) id 1T0u3E-0003Z3-J3
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 12:47:20 +0000
Received: from [85.158.139.83:18090] by server-4.bemta-5.messagelabs.com id
	A7/7A-12386-757F8205; Mon, 13 Aug 2012 12:47:19 +0000
X-Env-Sender: George.Dunlap@eu.citrix.com
X-Msg-Ref: server-16.tower-182.messagelabs.com!1344862037!20512658!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzA5NDA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11936 invoked from network); 13 Aug 2012 12:47:19 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Aug 2012 12:47:19 -0000
X-IronPort-AV: E=Sophos;i="4.77,759,1336363200"; d="scan'208";a="204991321"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	13 Aug 2012 08:46:03 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Mon, 13 Aug 2012 08:45:31 -0400
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1T0u1S-0004Gx-Og;
	Mon, 13 Aug 2012 13:45:30 +0100
Message-ID: <5028F61F.6080403@eu.citrix.com>
Date: Mon, 13 Aug 2012 13:42:07 +0100
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120714 Thunderbird/14.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <50290B0C0200007800094750@nat28.tlf.novell.com>
In-Reply-To: <50290B0C0200007800094750@nat28.tlf.novell.com>
Cc: "Tim \(Xen.org\)" <tim@xen.org>,
	Andres Lagar-Cavilla <andres@lagarcavilla.org>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] x86/PoD: fix (un)locking after
	24772:28edc2b31a9b
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 13/08/12 13:11, Jan Beulich wrote:
> That c/s introduced a double unlock on the out-of-memory error path of
> p2m_pod_demand_populate().
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
Good catch.

Acked-by: George Dunlap <george.dunlap@eu.citrix.com>
>
> --- a/xen/arch/x86/mm/p2m-pod.c
> +++ b/xen/arch/x86/mm/p2m-pod.c
> @@ -1075,6 +1075,7 @@ out_of_memory:
>       printk("%s: Out of populate-on-demand memory! tot_pages %" PRIu32 " pod_entries %" PRIi32 "\n",
>              __func__, d->tot_pages, p2m->pod.entry_count);
>       domain_crash(d);
> +    return -1;
>   out_fail:
>       pod_unlock(p2m);
>       return -1;
>
>
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 12:47:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 12:47:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0u3G-0003ZC-7I; Mon, 13 Aug 2012 12:47:22 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@eu.citrix.com>) id 1T0u3E-0003Z3-J3
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 12:47:20 +0000
Received: from [85.158.139.83:18090] by server-4.bemta-5.messagelabs.com id
	A7/7A-12386-757F8205; Mon, 13 Aug 2012 12:47:19 +0000
X-Env-Sender: George.Dunlap@eu.citrix.com
X-Msg-Ref: server-16.tower-182.messagelabs.com!1344862037!20512658!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzA5NDA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11936 invoked from network); 13 Aug 2012 12:47:19 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Aug 2012 12:47:19 -0000
X-IronPort-AV: E=Sophos;i="4.77,759,1336363200"; d="scan'208";a="204991321"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	13 Aug 2012 08:46:03 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Mon, 13 Aug 2012 08:45:31 -0400
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1T0u1S-0004Gx-Og;
	Mon, 13 Aug 2012 13:45:30 +0100
Message-ID: <5028F61F.6080403@eu.citrix.com>
Date: Mon, 13 Aug 2012 13:42:07 +0100
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120714 Thunderbird/14.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <50290B0C0200007800094750@nat28.tlf.novell.com>
In-Reply-To: <50290B0C0200007800094750@nat28.tlf.novell.com>
Cc: "Tim \(Xen.org\)" <tim@xen.org>,
	Andres Lagar-Cavilla <andres@lagarcavilla.org>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] x86/PoD: fix (un)locking after
	24772:28edc2b31a9b
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 13/08/12 13:11, Jan Beulich wrote:
> That c/s introduced a double unlock on the out-of-memory error path of
> p2m_pod_demand_populate().
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
Good catch.

Acked-by: George Dunlap <george.dunlap@eu.citrix.com>
>
> --- a/xen/arch/x86/mm/p2m-pod.c
> +++ b/xen/arch/x86/mm/p2m-pod.c
> @@ -1075,6 +1075,7 @@ out_of_memory:
>       printk("%s: Out of populate-on-demand memory! tot_pages %" PRIu32 " pod_entries %" PRIi32 "\n",
>              __func__, d->tot_pages, p2m->pod.entry_count);
>       domain_crash(d);
> +    return -1;
>   out_fail:
>       pod_unlock(p2m);
>       return -1;
>
>
>
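To make the bug concrete, the control flow the patch fixes can be sketched in isolation. The names and the simulated lock below are hypothetical, not the actual p2m-pod code: on the out-of-memory path the lock has already been dropped before the goto, so falling through into out_fail would unlock a second time.

```c
#include <assert.h>
#include <stdio.h>

/* Simulated lock so the balance can be checked; not the real p2m lock. */
static int lock_depth;
static void pod_lock(void)   { lock_depth++; }
static void pod_unlock(void) { lock_depth--; }

static int demand_populate(int fail_alloc, int fail_other)
{
    pod_lock();

    if ( fail_alloc )
    {
        pod_unlock();          /* lock released before reporting */
        goto out_of_memory;
    }
    if ( fail_other )
        goto out_fail;

    pod_unlock();
    return 0;

out_of_memory:
    fprintf(stderr, "%s: out of populate-on-demand memory!\n", __func__);
    return -1;                 /* the added return; without it we would
                                  fall through and unlock twice */
out_fail:
    pod_unlock();
    return -1;
}
```

On every path, lock_depth returns to zero; delete the `return -1` after the fprintf and the out_of_memory path drives it to -1, which is the double unlock the patch removes.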


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 12:49:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 12:49:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0u4u-0003ez-O8; Mon, 13 Aug 2012 12:49:04 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jean.guyader@gmail.com>) id 1T0u4t-0003eo-Ii
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 12:49:03 +0000
Received: from [85.158.143.99:44213] by server-3.bemta-4.messagelabs.com id
	A1/1B-09529-EB7F8205; Mon, 13 Aug 2012 12:49:02 +0000
X-Env-Sender: jean.guyader@gmail.com
X-Msg-Ref: server-8.tower-216.messagelabs.com!1344862141!18153777!1
X-Originating-IP: [209.85.212.45]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27244 invoked from network); 13 Aug 2012 12:49:02 -0000
Received: from mail-vb0-f45.google.com (HELO mail-vb0-f45.google.com)
	(209.85.212.45)
	by server-8.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Aug 2012 12:49:02 -0000
Received: by vbip1 with SMTP id p1so4264660vbi.32
	for <xen-devel@lists.xen.org>; Mon, 13 Aug 2012 05:49:00 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type;
	bh=12FIUlyLVgn+UIV8rRPmW/GLubnDk3QnDcxoBOYOmIQ=;
	b=tB0nD4zV9HdZXZr0z0CVb9OcmhO0G5mAjThHWKWhiJTU9hDCTRV4UJxZTBYQtVK6QF
	jm2K9zFjk7s+/NxmmNe01Awx6qmyrKDV0btPdFY8ABTbxn7YfrXa8kzjb/+FKAAR0djo
	7kI/q5KN2bX/BcaZZTP8AxIa7nC7yRuDSVGWbK9l97YKjR7x/WXIdF1OlMlTYrpxPUm+
	HOTmC0D0KBm7+nfkFtaP7bVtxkBAgtghnct/j3eeX3M7fVmaKWHBz66sr1uKu2zyW7TP
	QnTOC191SvIsIWeba99aU2NtmXX9BSIvOWvL8yJg6Is3DNvioudS6LahiO5H++8sKMKJ
	gcbA==
Received: by 10.221.0.138 with SMTP id nm10mr8146775vcb.38.1344862140689; Mon,
	13 Aug 2012 05:49:00 -0700 (PDT)
MIME-Version: 1.0
Received: by 10.58.79.175 with HTTP; Mon, 13 Aug 2012 05:43:23 -0700 (PDT)
In-Reply-To: <20120813093811.GB75552@ocelot.phlegethon.org>
References: <1344023454-31425-1-git-send-email-jean.guyader@citrix.com>
	<1344023454-31425-6-git-send-email-jean.guyader@citrix.com>
	<20120809103840.GD16986@ocelot.phlegethon.org>
	<20120810165109.GA19429@spongy>
	<20120813093811.GB75552@ocelot.phlegethon.org>
From: Jean Guyader <jean.guyader@gmail.com>
Date: Mon, 13 Aug 2012 13:43:23 +0100
Message-ID: <CAEBdQ93L5WW+b=C+YkZiZccv+5zUwr573sibjLHLSk2qZmxxYg@mail.gmail.com>
To: Tim Deegan <tim@xen.org>
Cc: Jean Guyader <jean.guyader@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 5/5] xen: Add V4V implementation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 13 August 2012 10:38, Tim Deegan <tim@xen.org> wrote:
> At 17:51 +0100 on 10 Aug (1344621069), Jean Guyader wrote:
>> On 09/08 11:38, Tim Deegan wrote:
>> > Hi,
>> >
>> > This looks pretty good; I think you've addressed almost all my comments
>> > except for one, which is really a design decision rather than an
>> > implementation one.  As I said last time:
>> >
>> > ] And what about protocol?  Protocol seems to have ended up as a bit of a
>> > ] second-class citizen in v4v; it's defined, and indeed required, but not
>> > ] used for routing or for access control, so all traffic to a given port
>> > ] _on every protocol_ ends up on the same ring.
>> > ]
>> > ] This is the inverse of the TCP/IP namespace that you're copying, where
>> > ] protocol demux happens before port demux.  And I think it will bite
>> > ] someone if you ever, for example, want to send ICMP or GRE over a v4v
>> > ] channel.
>> >
>>
>> The protocol field is used to indicate the type of a message on the ring.
>>
>> Right now we use two protocols in our linux driver: V4V_PROTO_DGRAM and
>> V4V_PROTO_STREAM. In the future that could probably be extended to new
>> protocols like V4V_PROTO_ICMP, for instance.
>>
>> The demultiplexing will happen at the other end; the driver can look at the
>> message and decide what to do with it based on the protocol field.
>
> Yes, I understand all that - what I'm saying is that it seems like a
> design flaw to me.  The namespace in V4V, as proposed, looks like this:
>
>  Protocol
>  Port
>  Domain
>

Protocol isn't part of the namespace - I think that's
where the confusion arises. The namespace is exclusively
Port/Domain. Protocol is there to describe the content of
_a particular message_ in the ring. It is included in the
hypercalls (rather than, say, being the first n bytes of
all messages) to force all senders to use it.

The other key point here is confusion as to what V4V is.
V4V is _not_ a TCP or UDP clone. V4V exists to provide
a mechanism to transfer messages (which are arbitrary
strings of bytes, labeled with a "protocol") between
endpoints (labeled by a domain and port).

An example use of this facility uses two v4v rings to
implement something which looks quite like TCP. In this
case, to distinguish the TCP-like messages from other
messages that may or may not be present on the ring,
the messages are labeled with a protocol field that
indicates which upper layer should handle them.

One can easily imagine circumstances where messages of
many different protocols are present on the ring. An
obvious example might be a message type implementing
some sort of flow control, which quenched or restarted
transmission on a partner ring, regardless of whatever
other messages were being sent on the rings.
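As a sketch of that distinction (the types, names, and layout below are illustrative, not taken from the patch series): routing uses only the (domain, port) pair, while the per-message protocol field is consulted only by the receiving driver when it demuxes the ring.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical, simplified types -- illustrative only. */
typedef struct {
    uint16_t domain;   /* addressing: part of the namespace */
    uint16_t port;     /* addressing: part of the namespace */
} v4v_addr;

enum {                 /* NOT part of the namespace: labels a message */
    V4V_PROTO_DGRAM  = 1,
    V4V_PROTO_STREAM = 2,
};

typedef struct {
    v4v_addr src;
    uint32_t proto;    /* message-type label carried per message */
    uint32_t len;
} v4v_msg_hdr;

/* Receiver-side demux: routing already used only (domain, port);
 * the driver looks at proto to pick the upper layer for the payload. */
static const char *demux(const v4v_msg_hdr *h)
{
    switch ( h->proto )
    {
    case V4V_PROTO_DGRAM:  return "dgram";
    case V4V_PROTO_STREAM: return "stream";
    default:               return "unknown";
    }
}
```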

> and it would be more sensible to do (like the IP stack):
>
>  Port
>  Protocol
>  Domain.
>
> Or at the very least the protocol should be made part of the endpoint
> address, and not just part of the packet header.  As it stands:
>
>  - The handlers for port X in _all_ protocols _have_ to share a
>    ring.  That seems kind of plausible because the IANA port assignments
>    never give the same port number to different services on TCP and UDP,
>    but will it make sense for every new protocol?  Is it sensible to
>    require, say, an L2TP service to make its connection IDs not clash
>    with V4V_PROTO_DGRAM and V4V_PROTO_STREAM users?
>
>    It may not even make sense in existing protocols.  It's common enough
>    for DNS servers to use different ACLs (and indeed different servers)
>    for TCP and UDP.
>

V4V isn't IP, but you raise a valid point. We should
define two ranges of 16-bit V4V port numbers: one
"well known" for TCP encapsulation, and one "well
known" for UDP encapsulation.  That makes the concept
you think protocol should be part of the endpoint
address, and it leaves the thing I misleadingly
called "protocol" free to be the message-type label.
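Purely as an illustration of that proposal (the range boundaries below are invented, not part of any spec): reserving two sub-ranges of the 16-bit port space would let a receiver classify an endpoint by its port alone.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative only: the boundaries are made up.  The proposal just
 * says two "well known" sub-ranges of the 16-bit port space should be
 * reserved, one for TCP encapsulation and one for UDP encapsulation. */
enum port_class { PORT_TCP_ENCAP, PORT_UDP_ENCAP, PORT_FREE };

static enum port_class classify_port(uint16_t port)
{
    if ( port < 0x4000 )      /* hypothetical TCP-encapsulation range */
        return PORT_TCP_ENCAP;
    if ( port < 0x8000 )      /* hypothetical UDP-encapsulation range */
        return PORT_UDP_ENCAP;
    return PORT_FREE;         /* free for other message types */
}
```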

>  - Relatedly, every protocol _has_ to have port numbers.  How would you
>    register an ICMP listener, for example?  You'd have to do something
>    gross like declare a particular port to be the ICMP port so that you
>    could demux it, or indeed send it in the first place.
>
> You say:
>
>> The demultiplexing will happen at the other end; the driver can look at the
>> message and decide what to do with it based on the protocol field.
>
> I'm willing to accept that argument, but only if we extend it to ports
> too, get rid of all the namespace and ACL code in Xen and leave each
> domain with a single RX ring that the (single) guest driver must demux. :P
>

Cheers,
Jean

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 13:12:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 13:12:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0uRX-0004BH-0m; Mon, 13 Aug 2012 13:12:27 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T0uRV-0004BC-IV
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 13:12:25 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1344863499!8954825!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg4Njk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20318 invoked from network); 13 Aug 2012 13:11:40 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Aug 2012 13:11:40 -0000
X-IronPort-AV: E=Sophos;i="4.77,759,1336348800"; d="scan'208";a="13981376"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	13 Aug 2012 13:11:30 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Mon, 13 Aug 2012 14:11:30 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1T0uQc-0002wj-2W; Mon, 13 Aug 2012 13:11:30 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1T0uQb-0000fE-S1;
	Mon, 13 Aug 2012 14:11:29 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20520.64769.516166.833844@mariner.uk.xensource.com>
Date: Mon, 13 Aug 2012 14:11:29 +0100
To: Olaf Hering <olaf@aepfle.de>
Newsgroups: chiark.mail.xen.devel
In-Reply-To: <25849858146fabf682b8.1344627928@probook.site>
References: <25849858146fabf682b8.1344627928@probook.site>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] tools: init.d/Linux/xencommons: load all
	known	backend drivers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Olaf Hering writes ("[Xen-devel] [PATCH] tools: init.d/Linux/xencommons: load all known backend drivers"):
> tools: init.d/Linux/xencommons: load all known backend drivers
> 
> Load all known backend drivers from xenlinux and pvops based dom0
> kernels.  There is currently no code in xend or libxl to load these
> drivers on demand, and libxl currently has no helpful error message
> if a backend driver is missing.
> 
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 13:17:12 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 13:17:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0uVp-0004Iv-Nk; Mon, 13 Aug 2012 13:16:53 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <james.harper@bendigoit.com.au>) id 1T0uVo-0004Iq-Gh
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 13:16:52 +0000
Received: from [85.158.143.99:9753] by server-1.bemta-4.messagelabs.com id
	5F/4C-07754-34EF8205; Mon, 13 Aug 2012 13:16:51 +0000
X-Env-Sender: james.harper@bendigoit.com.au
X-Msg-Ref: server-3.tower-216.messagelabs.com!1344863807!27367302!1
X-Originating-IP: [203.16.207.99]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9935 invoked from network); 13 Aug 2012 13:16:50 -0000
Received: from smtp2.bendigoit.com.au (HELO smtp2.bendigoit.com.au)
	(203.16.207.99)
	by server-3.tower-216.messagelabs.com with AES256-SHA encrypted SMTP;
	13 Aug 2012 13:16:50 -0000
Received: from trantor.int.sbss.com.au ([192.168.200.206]
	helo=mail.bendigoit.com.au)
	by smtp2.bendigoit.com.au with esmtp (Exim 4.72)
	(envelope-from <james.harper@bendigoit.com.au>) id 1T0uVh-0005Kj-DJ
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 23:16:45 +1000
Received: from BITCOM1.int.sbss.com.au ([192.168.200.237]) by
	mail.bendigoit.com.au with Microsoft SMTPSVC(6.0.3790.4675); 
	Mon, 13 Aug 2012 23:16:45 +1000
Received: from BITCOM1.int.sbss.com.au ([fe80::a5ca:4fd3:14f:ad5d]) by
	BITCOM1.int.sbss.com.au ([fe80::a5ca:4fd3:14f:ad5d%12]) with mapi id
	14.01.0355.002; Mon, 13 Aug 2012 23:16:44 +1000
From: James Harper <james.harper@bendigoit.com.au>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Thread-Topic: Xen and 4K Sectors (was blkback and bcache)
Thread-Index: Ac15VCC+bhqsYX5gQ8yq88xiueHFpA==
Date: Mon, 13 Aug 2012 13:16:43 +0000
Message-ID: <6035A0D088A63A46850C3988ED045A4B299F737C@BITCOM1.int.sbss.com.au>
Accept-Language: en-AU, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [172.16.3.132]
x-tm-as-product-ver: SMEX-10.2.0.1135-7.000.1014-19108.007
x-tm-as-result: No--37.269500-0.000000-31
x-tm-as-user-approved-sender: Yes
x-tm-as-user-blocked-sender: No
MIME-Version: 1.0
X-OriginalArrivalTime: 13 Aug 2012 13:16:45.0206 (UTC)
	FILETIME=[E30CCB60:01CD7955]
X-Really-From-Bendigo-IT: magichashvalue
Subject: [Xen-devel] Xen and 4K Sectors (was blkback and bcache)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I think the problem is definitely related to the 4K sector issue.

qemu appears to always present 512 byte sectors, so the guest will only boot from a partition table laid out for 512 byte sectors. Once the PV drivers take over, though, it all falls down, because they are passed a 4K sector size and nothing matches up anymore.

What is the solution here? Do I just tell windows that we have a 512 byte sector and put up with the potential loss of performance?
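For reference, the alignment side of the problem can be sketched as follows (helper names are hypothetical): a request addressed in 512-byte sectors maps cleanly onto a 4K-sector device only when both its start and its length are multiples of eight (4096 / 512), otherwise the backend has to read-modify-write.

```c
#include <assert.h>
#include <stdint.h>

#define SECT_512  512u
#define SECT_4K   4096u

/* Does a request expressed in 512-byte sectors fall on 4K boundaries? */
static int maps_cleanly(uint64_t start_512, uint64_t nr_512)
{
    const uint64_t ratio = SECT_4K / SECT_512;   /* 8 */
    return (start_512 % ratio == 0) && (nr_512 % ratio == 0);
}

/* Translate an aligned 512-byte-sector start address to a 4K sector. */
static uint64_t to_4k_sector(uint64_t start_512)
{
    return start_512 / (SECT_4K / SECT_512);
}
```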

Thanks

James

> -----Original Message-----
> From: xen-devel-bounces@lists.xen.org [mailto:xen-devel-
> bounces@lists.xen.org] On Behalf Of James Harper
> Sent: Monday, 13 August 2012 5:17 PM
> To: xen-devel@lists.xen.org
> Subject: [Xen-devel] blkback and bcache
> 
> I'm having trouble using blkback under gplpv when the disk is on top of a
> bcache device. My devices are layered as follows:
> 
> /dev/sd[ab]
> md0 (RAID1)
> bcache
> lvm
> 
> It seems that bcache presents a 4K sector size to Linux, which is then
> reflected by lvm and in turn blkback.
> 
> Obviously GPLPV isn't handling 4K sectors correctly... any suggestions as to
> what I might need to do to make this work properly? As a last resort I should
> be able to fake 512 byte sectors to Windows but would prefer that Windows
> knew it was dealing with a device with 4K sectors underneath.
> 
> Thanks
> 
> James
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I think the problem is definitely related to the 4K sector issue.

qemu appears to always present 512-byte sectors, so it will only boot from a partition table written for 512-byte sectors. Once the PV drivers take over, though, it all falls down: the PV drivers are passed a 4K sector size and nothing matches up any more.

What is the solution here? Do I just tell Windows that we have 512-byte sectors and put up with the potential loss of performance?
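[Editorial aside: the potential performance loss from presenting 512-byte sectors on a 4K device comes down to alignment. A minimal Python sketch of the idea follows; the constants and function are illustrative, not part of any Xen or GPLPV code.]

```python
# Sketch: why faking 512-byte sectors on a 4K-native device can cost
# performance. A write that is not aligned to the physical sector size
# forces a read-modify-write of the whole 4K physical sector underneath.

LOGICAL = 512     # sector size presented to the guest
PHYSICAL = 4096   # sector size of the underlying device

def needs_rmw(offset_bytes: int, length_bytes: int) -> bool:
    """True if a write at this offset/length is misaligned and so triggers
    a read-modify-write of the containing physical sector(s)."""
    return offset_bytes % PHYSICAL != 0 or length_bytes % PHYSICAL != 0

# A 4K-aligned 4K write passes straight through.
assert not needs_rmw(8192, 4096)
# A lone 512-byte write costs a read-modify-write of its 4K sector.
assert needs_rmw(512, 512)
```

As long as the guest filesystem keeps its partitions and clusters 4K-aligned, the penalty of the 512-byte facade is largely avoided, which is why advertising the true physical size (when the OS supports it) matters.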

Thanks

James

> -----Original Message-----
> From: xen-devel-bounces@lists.xen.org [mailto:xen-devel-
> bounces@lists.xen.org] On Behalf Of James Harper
> Sent: Monday, 13 August 2012 5:17 PM
> To: xen-devel@lists.xen.org
> Subject: [Xen-devel] blkback and bcache
> 
> I'm having trouble using blkback under gplpv when the disk is on top of a
> bcache device. My devices are layered as follows:
> 
> /dev/sd[ab]
> md0 (RAID1)
> bcache
> lvm
> 
> It seems that bcache presents a 4K sector size to Linux, which is then
> reflected by lvm and in turn blkback.
> 
> Obviously GPLPV isn't handling 4K sectors correctly... any suggestions as to
> what I might need to do to make this work properly? As a last resort I should
> be able to fake 512 byte sectors to Windows but would prefer that Windows
> knew it was dealing with a device with 4K sectors underneath.
> 
> Thanks
> 
> James
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 13:25:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 13:25:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0udy-0004Tl-Nm; Mon, 13 Aug 2012 13:25:18 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <james.harper@bendigoit.com.au>) id 1T0udw-0004Tg-TP
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 13:25:17 +0000
Received: from [85.158.143.35:59427] by server-3.bemta-4.messagelabs.com id
	BF/AE-09529-C3009205; Mon, 13 Aug 2012 13:25:16 +0000
X-Env-Sender: james.harper@bendigoit.com.au
X-Msg-Ref: server-15.tower-21.messagelabs.com!1344864311!14526361!1
X-Originating-IP: [203.16.224.4]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30947 invoked from network); 13 Aug 2012 13:25:14 -0000
Received: from smtp1.bendigoit.com.au (HELO smtp1.bendigoit.com.au)
	(203.16.224.4)
	by server-15.tower-21.messagelabs.com with AES256-SHA encrypted SMTP;
	13 Aug 2012 13:25:14 -0000
Received: from mail.bendigoit.com.au ([203.16.207.99])
	by smtp1.bendigoit.com.au with esmtp (Exim 4.69)
	(envelope-from <james.harper@bendigoit.com.au>) id 1T0udo-0003z1-Fg
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 23:25:08 +1000
Received: from BITCOM1.int.sbss.com.au ([192.168.200.237]) by
	mail.bendigoit.com.au with Microsoft SMTPSVC(6.0.3790.4675); 
	Mon, 13 Aug 2012 23:25:08 +1000
Received: from BITCOM1.int.sbss.com.au ([fe80::a5ca:4fd3:14f:ad5d]) by
	BITCOM1.int.sbss.com.au ([fe80::a5ca:4fd3:14f:ad5d%12]) with mapi id
	14.01.0355.002; Mon, 13 Aug 2012 23:25:08 +1000
From: James Harper <james.harper@bendigoit.com.au>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Thread-Topic: Xen and 4K Sectors (was blkback and bcache)
Thread-Index: Ac15VCC+bhqsYX5gQ8yq88xiueHFpAAArbBg
Date: Mon, 13 Aug 2012 13:25:07 +0000
Message-ID: <6035A0D088A63A46850C3988ED045A4B299F73D1@BITCOM1.int.sbss.com.au>
References: <6035A0D088A63A46850C3988ED045A4B299F737C@BITCOM1.int.sbss.com.au>
In-Reply-To: <6035A0D088A63A46850C3988ED045A4B299F737C@BITCOM1.int.sbss.com.au>
Accept-Language: en-AU, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [172.16.3.132]
x-tm-as-product-ver: SMEX-10.2.0.1135-7.000.1014-19108.007
x-tm-as-result: No--25.068100-0.000000-31
x-tm-as-user-approved-sender: Yes
x-tm-as-user-blocked-sender: No
MIME-Version: 1.0
X-OriginalArrivalTime: 13 Aug 2012 13:25:08.0520 (UTC)
	FILETIME=[0F0C6A80:01CD7957]
X-Really-From-Bendigo-IT: magichashvalue
Subject: Re: [Xen-devel] Xen and 4K Sectors (was blkback and bcache)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> 
> I think the problem is definitely related to the 4K sector issue.
> 
> qemu appears to always present 512 byte sectors, thus only booting from a
> 512 byte sector partition table. Once the PV drivers take over though it all
> falls down because PV drivers are passed a 4K sector size and nothing
> matches up anymore.
> 
> What is the solution here? Do I just tell windows that we have a 512 byte
> sector and put up with the potential loss of performance?
> 

I just came across this...

http://support.microsoft.com/kb/982018

I guess Windows doesn't support 4K-native disks after all, so I'll emulate 512-byte sectors and fake whatever SCSI interface is required to tell Windows that the disk is physically 4K but emulating 512-byte sectors.
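[Editorial aside: the "4K physical, 512-byte logical" (512e) arrangement described above is advertised at the SCSI level in the READ CAPACITY (16) parameter data, whose "logical blocks per physical block exponent" field tells the OS how many logical sectors share one physical sector. The sketch below packs such a response; the field layout follows SBC-3, but the function and values are illustrative, not code from any real driver.]

```python
# Sketch: READ CAPACITY (16) parameter data for a 512e disk.
# Bytes 0-7: last LBA; bytes 8-11: logical block length;
# byte 13 (low nibble): logical-blocks-per-physical-block exponent.
import struct

def read_capacity_16(total_logical_blocks: int) -> bytes:
    last_lba = total_logical_blocks - 1
    block_len = 512                  # logical sector size presented
    exponent = 3                     # 2**3 = 8 logical sectors per 4K physical
    data = struct.pack(">QI", last_lba, block_len)
    data += bytes([0x00, exponent])  # byte 12: prot info; byte 13: exponent
    data += bytes(18)                # lowest aligned LBA = 0, reserved
    return data

resp = read_capacity_16(2 * 1024 * 1024)   # a 1 GiB disk in 512-byte LBAs
assert len(resp) == 32
assert struct.unpack(">I", resp[8:12])[0] == 512
assert resp[13] & 0x0F == 3                # i.e. 4096-byte physical sectors
```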

James

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 13:27:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 13:27:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0ugE-0004at-DL; Mon, 13 Aug 2012 13:27:38 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T0ugC-0004al-JH
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 13:27:36 +0000
Received: from [85.158.138.51:34811] by server-2.bemta-3.messagelabs.com id
	B4/89-17748-7C009205; Mon, 13 Aug 2012 13:27:35 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-11.tower-174.messagelabs.com!1344864455!28015080!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg4Njk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14343 invoked from network); 13 Aug 2012 13:27:35 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-11.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Aug 2012 13:27:35 -0000
X-IronPort-AV: E=Sophos;i="4.77,759,1336348800"; d="scan'208";a="13981758"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	13 Aug 2012 13:27:34 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Mon, 13 Aug 2012 14:27:34 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1T0ugA-00035W-Dh; Mon, 13 Aug 2012 13:27:34 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1T0ugA-0000gH-A1;
	Mon, 13 Aug 2012 14:27:34 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20521.197.984373.290800@mariner.uk.xensource.com>
Date: Mon, 13 Aug 2012 14:27:33 +0100
To: Olaf Hering <olaf@aepfle.de>
Newsgroups: chiark.mail.xen.devel
In-Reply-To: <26e3b184658352d71b1b.1344847685@probook.site>
References: <26e3b184658352d71b1b.1344847685@probook.site>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] stubdom: fix parallel build by
	expanding	CROSS_MAKE
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Olaf Hering writes ("[Xen-devel] [PATCH] stubdom: fix parallel build by expanding CROSS_MAKE"):
> stubdom: fix parallel build by expanding CROSS_MAKE
> 
> Recently I changed my rpm xen.spec file from doing
> 'make -C tools -j N && make stubdom' to 'make -j N stubdom' because
> stubdom depends on tools, so both get built.
> The result was the failure below.

This looks like a good change to me but I don't want to enable
parallel build for stubdom at this stage of the release, since that's
likely to expose parallelism bugs in the stubdom makefiles (and we've
indeed had reports of such bugs).

So I think this should be revisited when 4.3 opens.

Thanks,
Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 13:33:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 13:33:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0ulZ-0004nx-5b; Mon, 13 Aug 2012 13:33:09 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T0ulY-0004no-04
	for xen-devel@lists.xensource.com; Mon, 13 Aug 2012 13:33:08 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1344864777!6577786!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg4Njk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4735 invoked from network); 13 Aug 2012 13:32:57 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Aug 2012 13:32:57 -0000
X-IronPort-AV: E=Sophos;i="4.77,759,1336348800"; d="scan'208";a="13981856"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	13 Aug 2012 13:32:28 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Mon, 13 Aug 2012 14:32:28 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1T0uku-00037D-4v; Mon, 13 Aug 2012 13:32:28 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1T0uku-0000gz-0z;
	Mon, 13 Aug 2012 14:32:28 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20521.491.811399.356551@mariner.uk.xensource.com>
Date: Mon, 13 Aug 2012 14:32:27 +0100
To: Frediano Ziglio <frediano.ziglio@citrix.com>, Anthony Perard
	<anthony.perard@citrix.com>, "xen-devel@lists.xensource.com"
	<xen-devel@lists.xensource.com>, Stefano Stabellini
	<Stefano.Stabellini@eu.citrix.com>, Keir Fraser <keir.xen@gmail.com>
Newsgroups: chiark.mail.xen.devel
In-Reply-To: <20513.19940.781946.155910@mariner.uk.xensource.com>
References: <7CE799CC0E4DE04B88D5FDF226E18AC2CDFFB08D17@LONPMAILBOX01.citrite.net>
	<20513.19940.781946.155910@mariner.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Subject: Re: [Xen-devel] [PATCH] Fix invalidate if memory requested
	was	not	bucket aligned
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Jackson writes ("Re: [Xen-devel] [PATCH] Fix invalidate if memory requested was not bucket aligned"):
> Frediano Ziglio writes ("[Xen-devel] [PATCH] Fix invalidate if memory requested was not bucket aligned"):
> > When memory is mapped in qemu_map_cache with lock != 0 a reverse mapping
> > is created pointing to the virtual address of location requested.
> > The cached mapped entry is saved in last_address_vaddr with the memory
> > location of the base virtual address (without bucket offset).
> > However, when this entry is invalidated, the virtual address saved in
> > the reverse mapping is used. This causes the mapping to be freed while
> > last_address_vaddr is not reset.
> > 
> > Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
> 
> Thanks for this!
> 
> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
> Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
> 
> I think that this is a good candidate for a backport, after it has
> been given a workout in -unstable.

Is someone up for doing the backport for 4.1 ?  It doesn't apply
cleanly or I would have just done it.

Keir, is 4.0 closed now ?  If it's not _entirely_ closed then this
bugfix is a very good candidate for inclusion in a future 4.0.5.

Ian.
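
[Editorial aside: the bug class described in the quoted commit message — a one-entry lookup memo left dangling after invalidation — can be sketched as follows. The names echo the commit message, but the code is illustrative Python, not qemu's actual implementation.]

```python
# Sketch: a map cache with a last-lookup memo (cf. last_address_vaddr).
# If invalidate() frees an entry without clearing the memo, the next
# lookup can return a dangling mapping. The fix is the memo reset below.

class MapCache:
    def __init__(self):
        self.entries = {}        # bucket base address -> mapping
        self.last_vaddr = None   # memo of the most recent lookup

    def map(self, base, mapping):
        self.entries[base] = mapping
        self.last_vaddr = base

    def lookup(self, base):
        # Fast path via the memo, falling back to the full table.
        if self.last_vaddr == base and base in self.entries:
            return self.entries[base]
        return self.entries.get(base)

    def invalidate(self, base):
        self.entries.pop(base, None)
        # The fix: reset the memo when the entry it refers to is freed.
        if self.last_vaddr == base:
            self.last_vaddr = None

c = MapCache()
c.map(0x1000, "mapping-A")
c.invalidate(0x1000)
assert c.last_vaddr is None    # without the reset, the memo would dangle
assert c.lookup(0x1000) is None
```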

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 13:44:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 13:44:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0uvn-000532-9E; Mon, 13 Aug 2012 13:43:43 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andres@lagarcavilla.org>) id 1T0uvl-00052x-I2
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 13:43:41 +0000
Received: from [85.158.138.51:2744] by server-2.bemta-3.messagelabs.com id
	4F/F4-17748-C8409205; Mon, 13 Aug 2012 13:43:40 +0000
X-Env-Sender: andres@lagarcavilla.org
X-Msg-Ref: server-15.tower-174.messagelabs.com!1344865419!26257971!1
X-Originating-IP: [208.97.132.5]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA4Ljk3LjEzMi41ID0+IDI3Mzky\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11473 invoked from network); 13 Aug 2012 13:43:39 -0000
Received: from mailbigip.dreamhost.com (HELO homiemail-a11.g.dreamhost.com)
	(208.97.132.5) by server-15.tower-174.messagelabs.com with SMTP;
	13 Aug 2012 13:43:39 -0000
Received: from homiemail-a11.g.dreamhost.com (localhost [127.0.0.1])
	by homiemail-a11.g.dreamhost.com (Postfix) with ESMTP id 5A7576E06F;
	Mon, 13 Aug 2012 06:43:38 -0700 (PDT)
DomainKey-Signature: a=rsa-sha1; c=nofws; d=lagarcavilla.org; h=message-id
	:in-reply-to:references:date:subject:from:to:cc:reply-to
	:mime-version:content-type:content-transfer-encoding; q=dns; s=
	lagarcavilla.org; b=mvelJI/vh8Ms51otDRDY6e1fFob1OCFfZ2jaimNRgjOi
	tkjdlWNMwcfGhqgd/9bFBLPrJa+sLHX+05uu0d8Ep8G1FYt5i7eXPgsRifm0RjIC
	JpBqzZqhoCch5U5SYQqJLcM0FrS5wXnscAIm0BIVWlRqD1X0xgWSrKIBdAjsGgg=
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed; d=lagarcavilla.org; h=
	message-id:in-reply-to:references:date:subject:from:to:cc
	:reply-to:mime-version:content-type:content-transfer-encoding;
	s=lagarcavilla.org; bh=i+8QrDJZRf6I/7fV5qqnxKj50PI=; b=SgOIX/kS
	5kjUTuX+Dj5w04BLmiMsD5QW6fBDDBWUK0t9TXnM6YZFUgbXmgzJlFbtsRpCFYOk
	1GMOg1UTfZxg1P683adh725LPuhcPO/NrQBKIlX+U+fvj5vj/tlXptwnAK0v6or8
	rDtnLBriUksluUNJqDgyIyyrN0VrhffN+rw=
Received: from webmail.lagarcavilla.org (caiajhbihbdd.dreamhost.com
	[208.97.187.133]) (Authenticated sender: andres@lagarcavilla.com)
	by homiemail-a11.g.dreamhost.com (Postfix) with ESMTPA id E8B156E06C;
	Mon, 13 Aug 2012 06:43:37 -0700 (PDT)
Received: from 206.223.182.18 (proxying for 206.223.182.18)
	(SquirrelMail authenticated user andres@lagarcavilla.com)
	by webmail.lagarcavilla.org with HTTP;
	Mon, 13 Aug 2012 06:43:38 -0700
Message-ID: <9e36dafa402ecf37afbe71ffd8834a00.squirrel@webmail.lagarcavilla.org>
In-Reply-To: <50290B0C0200007800094750@nat28.tlf.novell.com>
References: <50290B0C0200007800094750@nat28.tlf.novell.com>
Date: Mon, 13 Aug 2012 06:43:38 -0700
From: "Andres Lagar-Cavilla" <andres@lagarcavilla.org>
To: "Jan Beulich" <JBeulich@suse.com>
User-Agent: SquirrelMail/1.4.21
MIME-Version: 1.0
Cc: George Dunlap <george.dunlap@eu.citrix.com>, Tim Deegan <tim@xen.org>,
	Andres Lagar-Cavilla <andres@lagarcavilla.org>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] x86/PoD: fix (un)locking after
	24772:28edc2b31a9b
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: andres@lagarcavilla.org
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> That c/s introduced a double unlock on the out-of-memory error path of
> p2m_pod_demand_populate().
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>

Thanks
Andres
>
> --- a/xen/arch/x86/mm/p2m-pod.c
> +++ b/xen/arch/x86/mm/p2m-pod.c
> @@ -1075,6 +1075,7 @@ out_of_memory:
>      printk("%s: Out of populate-on-demand memory! tot_pages %" PRIu32 "
> pod_entries %" PRIi32 "\n",
>             __func__, d->tot_pages, p2m->pod.entry_count);
>      domain_crash(d);
> +    return -1;
>  out_fail:
>      pod_unlock(p2m);
>      return -1;
>
>
>
>



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 13:44:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 13:44:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0uwZ-00056i-N9; Mon, 13 Aug 2012 13:44:31 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T0uwY-00056J-04
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 13:44:30 +0000
Received: from [85.158.139.83:5462] by server-8.bemta-5.messagelabs.com id
	13/A3-02481-CB409205; Mon, 13 Aug 2012 13:44:28 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-15.tower-182.messagelabs.com!1344865467!27792720!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg4Njk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 884 invoked from network); 13 Aug 2012 13:44:27 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Aug 2012 13:44:27 -0000
X-IronPort-AV: E=Sophos;i="4.77,759,1336348800"; d="scan'208";a="13982171"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	13 Aug 2012 13:44:27 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Mon, 13 Aug 2012 14:44:27 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1T0uwU-0003CD-T8; Mon, 13 Aug 2012 13:44:26 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1T0uwU-0000hi-Pi;
	Mon, 13 Aug 2012 14:44:26 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20521.1210.708532.71249@mariner.uk.xensource.com>
Date: Mon, 13 Aug 2012 14:44:26 +0100
To: "Jan Beulich" <JBeulich@suse.com>
Newsgroups: chiark.mail.xen.devel
In-Reply-To: <50225ACD020000780009380F@nat28.tlf.novell.com>
References: <50225ACD020000780009380F@nat28.tlf.novell.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] libxl: fix build for gcc prior to 4.3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jan Beulich writes ("[Xen-devel] [PATCH] libxl: fix build for gcc prior to 4.3"):
> So far all we (explicitly) require is gcc 3.4 or better, so we
> shouldn't be unconditionally using features supported only by much
> newer versions.

Sorry about that.

> Short of a proper replacement, use the "deprecated" attribute instead:
> It also produces a warning (thus causing the build to fail due to
> -Werror), and is at least getting close to the intention here.

I think it would be fine to simply drop this check for earlier
versions of gcc.  That would make the compatibility ifdeffery simpler,
so I would prefer that.

Thanks,
Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 13:51:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 13:51:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0v2T-0005L9-Gg; Mon, 13 Aug 2012 13:50:37 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T0v2R-0005L1-DF
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 13:50:35 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1344865803!8861231!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg4Njk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3342 invoked from network); 13 Aug 2012 13:50:04 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Aug 2012 13:50:04 -0000
X-IronPort-AV: E=Sophos;i="4.77,759,1336348800"; d="scan'208";a="13982333"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	13 Aug 2012 13:50:02 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Mon, 13 Aug 2012 14:50:02 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1T0v1t-0003EE-Uk; Mon, 13 Aug 2012 13:50:02 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1T0v1t-0000iF-Qr;
	Mon, 13 Aug 2012 14:50:01 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20521.1545.734558.263730@mariner.uk.xensource.com>
Date: Mon, 13 Aug 2012 14:50:01 +0100
To: Jan Beulich <JBeulich@suse.com>
Newsgroups: chiark.mail.xen.devel
In-Reply-To: <5022AB240200007800093B0E@nat28.tlf.novell.com>
References: <5022933B0200007800093A1F@nat28.tlf.novell.com>
	<20120808151037.GE5592@US-SEA-R8XVZTX>
	<5022A7980200007800093AD9@nat28.tlf.novell.com>
	<20120808160606.GA9048@US-SEA-R8XVZTX>
	<5022AB240200007800093B0E@nat28.tlf.novell.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: Ian Campbell <ian.campbell@citrix.com>, Matt Wilson <msw@amazon.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] tools: don't expand prefix and exec_prefix
 too early
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jan Beulich writes ("Re: [Xen-devel] [PATCH] tools: don't expand prefix and exec_prefix too early"):
> >>> On 08.08.12 at 18:06, Matt Wilson <msw@amazon.com> wrote:
> > Would you like me to take another pass at fixing this the "right" way,
> > by removing this bit altogether and adding lowercase variables to
> > tools/Config.mk?
> 
> Oh, yes, if you can do this in a more complete fashion, removing
> the suspicious default_dir.m4 altogether, that would be wonderful.
> Not sure what the tools maintainers think of this, though.

That would be fine by me.  Before throwing it into 4.2 I will do a few
build tests and check that differences in the output file layout are
as expected.

Thanks,
Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 13:56:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 13:56:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0v8B-0005TS-Cz; Mon, 13 Aug 2012 13:56:31 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Christoph.Egger@amd.com>) id 1T0v89-0005TN-My
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 13:56:29 +0000
Received: from [85.158.143.99:18647] by server-1.bemta-4.messagelabs.com id
	84/20-07754-D8709205; Mon, 13 Aug 2012 13:56:29 +0000
X-Env-Sender: Christoph.Egger@amd.com
X-Msg-Ref: server-9.tower-216.messagelabs.com!1344866188!27962649!1
X-Originating-IP: [213.199.154.142]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24789 invoked from network); 13 Aug 2012 13:56:28 -0000
Received: from db3ehsobe004.messaging.microsoft.com (HELO
	db3outboundpool.messaging.microsoft.com) (213.199.154.142)
	by server-9.tower-216.messagelabs.com with AES128-SHA encrypted SMTP;
	13 Aug 2012 13:56:28 -0000
Received: from mail19-db3-R.bigfish.com (10.3.81.232) by
	DB3EHSOBE001.bigfish.com (10.3.84.21) with Microsoft SMTP Server id
	14.1.225.23; Mon, 13 Aug 2012 13:56:06 +0000
Received: from mail19-db3 (localhost [127.0.0.1])	by mail19-db3-R.bigfish.com
	(Postfix) with ESMTP id 917C12C02FA;
	Mon, 13 Aug 2012 13:56:06 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:163.181.249.109; KIP:(null); UIP:(null);
	IPV:NLI; H:ausb3twp02.amd.com; RD:none; EFVD:NLI
X-SpamScore: -3
X-BigFish: VPS-3(zzbb2dI98dI1432Izz1202hzzz2dh668h839hd25he5bhf0ah107ah)
Received: from mail19-db3 (localhost.localdomain [127.0.0.1]) by mail19-db3
	(MessageSwitch) id 1344866163995778_692; Mon, 13 Aug 2012 13:56:03 +0000
	(UTC)
Received: from DB3EHSMHS014.bigfish.com (unknown [10.3.81.235])	by
	mail19-db3.bigfish.com (Postfix) with ESMTP id F0A282A004A;
	Mon, 13 Aug 2012 13:56:03 +0000 (UTC)
Received: from ausb3twp02.amd.com (163.181.249.109) by
	DB3EHSMHS014.bigfish.com (10.3.87.114) with Microsoft SMTP Server id
	14.1.225.23; Mon, 13 Aug 2012 13:56:03 +0000
X-WSS-ID: 0M8P5D8-02-KYH-02
X-M-MSG: 
Received: from sausexedgep01.amd.com (sausexedgep01-ext.amd.com
	[163.181.249.72])	(using TLSv1 with cipher AES128-SHA (128/128
	bits))	(No
	client certificate requested)	by ausb3twp02.amd.com (Axway MailGate
	3.8.1)
	with ESMTP id 24D38C805C;	Mon, 13 Aug 2012 08:55:56 -0500 (CDT)
Received: from SAUSEXDAG05.amd.com (163.181.55.6) by sausexedgep01.amd.com
	(163.181.36.54) with Microsoft SMTP Server (TLS) id 8.3.192.1;
	Mon, 13 Aug 2012 08:56:31 -0500
Received: from storexhtp02.amd.com (172.24.4.4) by sausexdag05.amd.com
	(163.181.55.6) with Microsoft SMTP Server (TLS) id 14.1.323.3;
	Mon, 13 Aug 2012 08:55:59 -0500
Received: from rhodium.osrc.amd.com (165.204.15.173) by storexhtp02.amd.com
	(172.24.4.4) with Microsoft SMTP Server id 8.3.213.0; Mon, 13 Aug 2012
	09:55:59 -0400
Message-ID: <5029076D.2090803@amd.com>
Date: Mon, 13 Aug 2012 15:55:57 +0200
From: Christoph Egger <Christoph.Egger@amd.com>
User-Agent: Mozilla/5.0 (X11; NetBSD amd64;
	rv:11.0) Gecko/20120404 Thunderbird/11.0
MIME-Version: 1.0
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
References: <501BDF23.50409@amd.com>
	<1344005133.21372.54.camel@zakaz.uk.xensource.com>
	<501BEAD8.3040300@amd.com>
	<1344244422.11339.17.camel@zakaz.uk.xensource.com>
	<20511.54333.44714.694390@mariner.uk.xensource.com>
In-Reply-To: <20511.54333.44714.694390@mariner.uk.xensource.com>
X-OriginatorOrg: amd.com
Cc: Ian Campbell <Ian.Campbell@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] xl segfault when starting a guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/06/12 16:27, Ian Jackson wrote:

> Ian Campbell writes ("Re: [Xen-devel] xl segfault when starting a guest"):
>> It looks like some bits of my original patch got missed during the
>> application of 25727:a8d708fcb347, specifically the changes to the
>> iscsi/nbd/enbd prefix handling rule.
> 
> This was because:
>  - I mistakenly used the copy of the patch that Ian C had CC'd to me,
>    rather than the copy I got via the mailing list.  The former went
>    via the Citrix corporate email system which mangles things, which
>    is why this is a bad idea.  The latter does not.
>  - When I tried to apply it, it produced a bunch of rejects in the
>    autogenerated files.  However buried in those messages was a reject
>    in the .l source file, which I didn't spot.
>  - I therefore tried to regenerate the flex source (perhaps not with
>    complete success) and committed the result. 
> Sorry for messing this up.
> 
> I have backed out 25727:a8d708fcb347 and reapplied what I think is a
> non-mangled version as 25733:353bc0801b11.


Works for me. Thanks.

Christoph

-- 
---to satisfy European Law for business letters:
Advanced Micro Devices GmbH
Einsteinring 24, 85689 Dornach b. Muenchen
Geschaeftsfuehrer: Alberto Bozzo
Sitz: Dornach, Gemeinde Aschheim, Landkreis Muenchen
Registergericht Muenchen, HRB Nr. 43632


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 14:01:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 14:01:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0vCo-0005hN-3L; Mon, 13 Aug 2012 14:01:18 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T0vCm-0005hI-Qy
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 14:01:17 +0000
Received: from [85.158.143.99:47799] by server-2.bemta-4.messagelabs.com id
	68/0B-31966-CA809205; Mon, 13 Aug 2012 14:01:16 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-16.tower-216.messagelabs.com!1344866474!16737218!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg4Njk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11514 invoked from network); 13 Aug 2012 14:01:15 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Aug 2012 14:01:15 -0000
X-IronPort-AV: E=Sophos;i="4.77,759,1336348800"; d="scan'208";a="13982616"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	13 Aug 2012 14:01:14 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Mon, 13 Aug 2012 15:01:14 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1T0vCk-0003J8-Dq; Mon, 13 Aug 2012 14:01:14 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1T0vCk-0000jI-9w;
	Mon, 13 Aug 2012 15:01:14 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20521.2218.8505.913476@mariner.uk.xensource.com>
Date: Mon, 13 Aug 2012 15:01:14 +0100
To: Tamas Lengyel <tamas.k.lengyel@gmail.com>
Newsgroups: chiark.mail.xen.devel
In-Reply-To: <CABfawhmrJ=Cb37AqGgyEEXXmtyyTyui0h6-29iqEybvbNVsXxQ@mail.gmail.com>
References: <CABfawh=yoidWLbcYqs4JOD+b30vxYrrT1Q7a2QBNttwx4U9=Ug@mail.gmail.com>
	<CABfawh=1NC-VypsYLNr-J6EkvRS8PBXO5spF8w9GQdaUaso+jQ@mail.gmail.com>
	<1343809281.27221.14.camel@zakaz.uk.xensource.com>
	<CABfawhmrJ=Cb37AqGgyEEXXmtyyTyui0h6-29iqEybvbNVsXxQ@mail.gmail.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: Ian Campbell <Ian.Campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] libxl config datastructures
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Tamas Lengyel writes ("Re: [Xen-devel] libxl config datastructures"):
> Hi Ian,
> > The parsing of xl/xm style configuration files is specific to the
> > toolstack and therefore belongs in xl.
>
> While I understand the design decision now for keeping config formats
> general, since the code is already written for xl, might as well let
> other developers access it through libxlu when it's convenient for
> them. It would make (at least my) life easier.

I don't understand why you aren't happy to use just the config parser
supplied in libxlu.  It's there specifically for the use of people who
want to do things based on the xl config file format (or subsets of
it).

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 14:14:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 14:14:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0vOx-00068i-W7; Mon, 13 Aug 2012 14:13:51 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <james.harper@bendigoit.com.au>) id 1T0vOw-00068b-CD
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 14:13:50 +0000
Received: from [85.158.143.99:60579] by server-2.bemta-4.messagelabs.com id
	94/21-31966-D9B09205; Mon, 13 Aug 2012 14:13:49 +0000
X-Env-Sender: james.harper@bendigoit.com.au
X-Msg-Ref: server-3.tower-216.messagelabs.com!1344867181!27377838!1
X-Originating-IP: [203.16.207.99]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14932 invoked from network); 13 Aug 2012 14:13:04 -0000
Received: from mail.bendigoit.com.au (HELO smtp2.bendigoit.com.au)
	(203.16.207.99)
	by server-3.tower-216.messagelabs.com with AES256-SHA encrypted SMTP;
	13 Aug 2012 14:13:04 -0000
Received: from trantor.int.sbss.com.au ([192.168.200.206]
	helo=mail.bendigoit.com.au)
	by smtp2.bendigoit.com.au with esmtp (Exim 4.72)
	(envelope-from <james.harper@bendigoit.com.au>) id 1T0vO7-0005XM-Ud
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 00:12:59 +1000
Received: from BITCOM1.int.sbss.com.au ([192.168.200.237]) by
	mail.bendigoit.com.au with Microsoft SMTPSVC(6.0.3790.4675); 
	Tue, 14 Aug 2012 00:12:59 +1000
Received: from BITCOM1.int.sbss.com.au ([fe80::a5ca:4fd3:14f:ad5d]) by
	BITCOM1.int.sbss.com.au ([fe80::a5ca:4fd3:14f:ad5d%12]) with mapi id
	14.01.0355.002; Tue, 14 Aug 2012 00:12:59 +1000
From: James Harper <james.harper@bendigoit.com.au>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Thread-Topic: bug when using 4K sectors?
Thread-Index: Ac15XJ9mOEWrLpZlQnymzekcRdWXAQ==
Date: Mon, 13 Aug 2012 14:12:58 +0000
Message-ID: <6035A0D088A63A46850C3988ED045A4B299F74F8@BITCOM1.int.sbss.com.au>
Accept-Language: en-AU, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [172.16.3.132]
x-tm-as-product-ver: SMEX-10.2.0.1135-7.000.1014-19108.007
x-tm-as-result: No--26.514400-0.000000-31
x-tm-as-user-approved-sender: Yes
x-tm-as-user-blocked-sender: No
MIME-Version: 1.0
X-OriginalArrivalTime: 13 Aug 2012 14:12:59.0763 (UTC)
	FILETIME=[BE715430:01CD795D]
X-Really-From-Bendigo-IT: magichashvalue
Subject: [Xen-devel] bug when using 4K sectors?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I notice this code in drivers/block/xen-blkback/common.h:

#define vbd_sz(_v)      ((_v)->bdev->bd_part ? \
                         (_v)->bdev->bd_part->nr_sects : \
                          get_capacity((_v)->bdev->bd_disk))

Is the value returned by vbd_sz(_v) the number of sectors in the Linux device (e.g. size / 4096), or the number of 512-byte sectors? I suspect the former, which would cause block requests beyond 1/8th of the size of the device to fail (assuming 4K sectors are expected to work at all; I can't quite get my head around how that would be expected to work. Does Linux do the read-modify-write if required?)

I can't test until tomorrow AEDT, but maybe someone here knows the answer already?

James

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 14:17:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 14:17:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0vSH-0006K4-Ph; Mon, 13 Aug 2012 14:17:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T0vSG-0006Jw-6O
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 14:17:16 +0000
Received: from [85.158.143.35:11502] by server-2.bemta-4.messagelabs.com id
	CC/97-31966-B6C09205; Mon, 13 Aug 2012 14:17:15 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1344867416!14536914!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg4Njk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19615 invoked from network); 13 Aug 2012 14:16:56 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Aug 2012 14:16:56 -0000
X-IronPort-AV: E=Sophos;i="4.77,759,1336348800"; d="scan'208";a="13983171"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	13 Aug 2012 14:16:55 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Mon, 13 Aug 2012 15:16:55 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1T0vRv-0003Pp-5H; Mon, 13 Aug 2012 14:16:55 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1T0vRv-0000kk-1k;
	Mon, 13 Aug 2012 15:16:55 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20521.3158.965869.397652@mariner.uk.xensource.com>
Date: Mon, 13 Aug 2012 15:16:54 +0100
To: Matt Wilson <msw@amazon.com>
Newsgroups: chiark.mail.xen.devel
In-Reply-To: <20120807175938.GB5592@US-SEA-R8XVZTX>
References: <ef1271aef866effe07aa.1343676828@kaos-source-31003.sea31.amazon.com>
	<1343723343.15432.63.camel@zakaz.uk.xensource.com>
	<20120731153459.GD8228@US-SEA-R8XVZTX>
	<20120807032246.GA4324@US-SEA-R8XVZTX>
	<1344327782.11339.65.camel@zakaz.uk.xensource.com>
	<20120807175938.GB5592@US-SEA-R8XVZTX>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH DOCDAY] use lynx to produce better formatted
 text documentation from markdown
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Matt Wilson writes ("Re: [Xen-devel] [PATCH DOCDAY] use lynx to produce better formatted text documentation from markdown"):
> I'm certainly not suggesting that we add a web browser to the build
> dependencies. If lynx isn't installed, the current behavior of copying
> the markdown file is maintained.
> 
> If a packager wishes to produce prettier text documentation, they can
> elect to add lynx to their build dependencies. Today doing this
> post-build from build control files is a bit tricky, since we drop the
> semantic information conveyed by the .markdown suffix by calling the
> final file .txt.
> 
> 4.2 will be the first release with markdown documentation. I think
> that making it well formatted, just as the previous .txt
> documentation, will be a better experience for the user.

I tend to agree with this line of argument.  But I have orthogonal
queries:

Firstly, why lynx and not w3m?  Is lynx even maintained upstream any
more?  Secondly, shouldn't we do the testing for the presence of
markdown and the html formatter in configure?

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 14:17:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 14:17:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0vSH-0006K4-Ph; Mon, 13 Aug 2012 14:17:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T0vSG-0006Jw-6O
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 14:17:16 +0000
Received: from [85.158.143.35:11502] by server-2.bemta-4.messagelabs.com id
	CC/97-31966-B6C09205; Mon, 13 Aug 2012 14:17:15 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1344867416!14536914!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg4Njk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19615 invoked from network); 13 Aug 2012 14:16:56 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Aug 2012 14:16:56 -0000
X-IronPort-AV: E=Sophos;i="4.77,759,1336348800"; d="scan'208";a="13983171"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	13 Aug 2012 14:16:55 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Mon, 13 Aug 2012 15:16:55 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1T0vRv-0003Pp-5H; Mon, 13 Aug 2012 14:16:55 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1T0vRv-0000kk-1k;
	Mon, 13 Aug 2012 15:16:55 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20521.3158.965869.397652@mariner.uk.xensource.com>
Date: Mon, 13 Aug 2012 15:16:54 +0100
To: Matt Wilson <msw@amazon.com>
Newsgroups: chiark.mail.xen.devel
In-Reply-To: <20120807175938.GB5592@US-SEA-R8XVZTX>
References: <ef1271aef866effe07aa.1343676828@kaos-source-31003.sea31.amazon.com>
	<1343723343.15432.63.camel@zakaz.uk.xensource.com>
	<20120731153459.GD8228@US-SEA-R8XVZTX>
	<20120807032246.GA4324@US-SEA-R8XVZTX>
	<1344327782.11339.65.camel@zakaz.uk.xensource.com>
	<20120807175938.GB5592@US-SEA-R8XVZTX>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH DOCDAY] use lynx to produce better formatted
 text documentation from markdown
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Matt Wilson writes ("Re: [Xen-devel] [PATCH DOCDAY] use lynx to produce better formatted text documentation from markdown"):
> I'm certainly not suggesting that we add a web browser to the build
> dependencies. If lynx isn't installed, the current behavior of copying
> the markdown file is maintained.
> 
> If a packager wishes to produce prettier text documentation, they can
> elect to add lynx to their build dependencies. Today doing this
> post-build from build control files is a bit tricky, since we drop the
> semantic information conveyed by the .markdown suffix by calling the
> final file .txt.
> 
> 4.2 will be the first release with markdown documentation. I think
> that making it well formatted, just as the previous .txt
> documentation, will be a better experience for the user.

I tend to agree with this line of argument.  But I have two orthogonal
queries:

Firstly, why lynx and not w3m?  Is lynx even maintained upstream any
more?  Secondly, shouldn't we test for the presence of markdown and the
html formatter in configure?

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 14:35:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 14:35:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0vjR-0006ix-Ep; Mon, 13 Aug 2012 14:35:01 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T0vjQ-0006is-A0
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 14:35:00 +0000
Received: from [85.158.139.83:4391] by server-6.bemta-5.messagelabs.com id
	FD/2E-22415-39019205; Mon, 13 Aug 2012 14:34:59 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-6.tower-182.messagelabs.com!1344868496!24016133!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjg2NTIy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2639 invoked from network); 13 Aug 2012 14:34:57 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-6.tower-182.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 13 Aug 2012 14:34:57 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7DEYpV3017245
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 13 Aug 2012 14:34:52 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7DEYo1q015008
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 13 Aug 2012 14:34:51 GMT
Received: from abhmt106.oracle.com (abhmt106.oracle.com [141.146.116.58])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7DEYoq1007099; Mon, 13 Aug 2012 09:34:50 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 13 Aug 2012 07:34:50 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 6D596402A4; Mon, 13 Aug 2012 10:25:11 -0400 (EDT)
Date: Mon, 13 Aug 2012 10:25:11 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20120813142511.GD14666@phenom.dumpdata.com>
References: <4B45B535F7F6BE4CB1C044ED5115CDDE0107F6C1D44C@LONPMAILBOX01.citrite.net>
	<20120809180433.GA14457@phenom.dumpdata.com>
	<4B45B535F7F6BE4CB1C044ED5115CDDE0107F6C1D55E@LONPMAILBOX01.citrite.net>
	<alpine.DEB.2.02.1208101216490.21096@kaball.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.02.1208101216490.21096@kaball.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Thanos Makatos <thanos.makatos@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] RFC: blktap3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Aug 10, 2012 at 12:17:10PM +0100, Stefano Stabellini wrote:
> On Fri, 10 Aug 2012, Thanos Makatos wrote:
> > Hi Konrad,
> > 
> > I'm not sure I understand your question. Blktap3 lives in tools/blktap3. The component that allows tapdisk3 to talk directly to blkfront is xenio, it lives in blktap3/tools/xenio. Since tapdisk3 can talk to blkfront via xenio, it doesn't interact with blkback/blktap kernel drivers.
> 
> Konrad, Blktap3 is purely userspace ;-)

I somehow managed to miss the blktap3.bz2 file in the original thread and then
started looking for a git tree or hg tree and couldn't find it. So the answer
to my question is:  http://lists.xen.org/archives/html/xen-devel/2012-08/binRhvyhiESWY.bin

:-)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 15:10:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 15:10:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0wHm-00077Q-AZ; Mon, 13 Aug 2012 15:10:30 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T0wHk-00077L-Sj
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 15:10:29 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1344870622!3866054!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7106 invoked from network); 13 Aug 2012 15:10:22 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-10.tower-27.messagelabs.com with SMTP;
	13 Aug 2012 15:10:22 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 13 Aug 2012 15:50:10 +0100
Message-Id: <5029303F020000780009480B@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Mon, 13 Aug 2012 15:50:07 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "George Dunlap" <George.Dunlap@eu.citrix.com>,
	"Andres Lagar-Cavilla" <andres@lagarcavilla.org>,
	"Tim Deegan" <tim@xen.org>
References: <50290B0C0200007800094750@nat28.tlf.novell.com>
In-Reply-To: <50290B0C0200007800094750@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] x86/PoD: fix (un)locking after
 24772:28edc2b31a9b
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 13.08.12 at 14:11, "Jan Beulich" <JBeulich@suse.com> wrote:
> That c/s introduced a double unlock on the out-of-memory error path of
> p2m_pod_demand_populate().

I also wonder how correct that changeset's elimination of the page
alloc lock in a number of places is - p2m_pod_set_mem_target()'s
calculations, for example, involve d->tot_pages, which can change
under its feet while that lock is not held.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 15:21:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 15:21:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0wSC-0007Gu-G6; Mon, 13 Aug 2012 15:21:16 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T0wSB-0007Gp-1k
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 15:21:15 +0000
Received: from [85.158.143.99:58494] by server-3.bemta-4.messagelabs.com id
	A5/56-09529-A6B19205; Mon, 13 Aug 2012 15:21:14 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-13.tower-216.messagelabs.com!1344871272!27051170!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=1.1 required=7.0 tests=MAILTO_TO_SPAM_ADDR,
	MANY_EXCLAMATIONS
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13175 invoked from network); 13 Aug 2012 15:21:12 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-13.tower-216.messagelabs.com with SMTP;
	13 Aug 2012 15:21:12 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 13 Aug 2012 16:21:11 +0100
Message-Id: <50293785020000780009484C@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Mon, 13 Aug 2012 16:21:09 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "tupeng212" <tupeng212@gmail.com>
References: <201208070018394210381@gmail.com>,
	<50224B7402000078000937DA@nat28.tlf.novell.com>
	<2012081023124696835343@gmail.com>
In-Reply-To: <2012081023124696835343@gmail.com>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part7F4E0B75.1__="
Cc: Tim Deegan <tim@xen.org>, Keir Fraser <keir@xen.org>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Big Bug:Time in VM goes slower;
 foud Solution but demand Judgement! A Interesting Story!
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part7F4E0B75.1__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

>>> On 10.08.12 at 17:17, tupeng212 <tupeng212@gmail.com> wrote:
> 2 Bug in xen
> JVM is OK, so left the bug to xen, I have found both the reason and
> solution. As Jan mentioned avoiding call create_periodic_time, it got much
> better. so I modified it like this, if the pt timer is created before,
> setting RegA down is just changing the period value, so I do nothing except

What you describe doesn't sound accurate (i.e. I'm getting the
impression that you might have suppressed the call in cases
where you shouldn't).

Below/attached a first draft of a patch to fix not only this issue,
but a few more with the RTC emulation. Would you give this a
try?

Keir, Tim, others - the change to xen/arch/x86/hvm/vpt.c really
looks more like a hack than a solution, but I don't see another
way without much more intrusive changes. The point is that we
want the RTC code to decide whether to generate an interrupt
(so that RTC_PF can become set correctly even without RTC_PIE
getting enabled by the guest).

Jan

x86/HVM: assorted RTC emulation adjustments

- don't call rtc_timer_update() on REG_A writes when the value didn't
  change (doing the call always was reported to cause wall clock time
  lagging with the JVM under Windows)
- in the same spirit, don't call rtc_timer_update() or
  alarm_timer_update() on REG_B writes when the respective RTC_xIE bit
  didn't change
- raise the RTC IRQ not only when RTC_UIE gets set while RTC_UF was
  already set, but generalize this to alarm and periodic interrupts as
  well
- properly handle RTC_PF when the guest is not also setting RTC_PIE
- also handle the two other clock bases

--- a/xen/arch/x86/hvm/rtc.c
+++ b/xen/arch/x86/hvm/rtc.c
@@ -50,11 +50,24 @@ static void rtc_set_time(RTCState *s);
 static inline int from_bcd(RTCState *s, int a);
 static inline int convert_hour(RTCState *s, int hour);
 
-static void rtc_periodic_cb(struct vcpu *v, void *opaque)
+static void rtc_toggle_irq(RTCState *s)
+{
+    struct domain *d = vrtc_domain(s);
+
+    ASSERT(spin_is_locked(&s->lock));
+    s->hw.cmos_data[RTC_REG_C] |= RTC_IRQF;
+    hvm_isa_irq_deassert(d, RTC_IRQ);
+    hvm_isa_irq_assert(d, RTC_IRQ);
+}
+
+void rtc_periodic_interrupt(void *opaque)
 {
     RTCState *s = opaque;
+
     spin_lock(&s->lock);
-    s->hw.cmos_data[RTC_REG_C] |= 0xc0;
+    s->hw.cmos_data[RTC_REG_C] |= RTC_PF;
+    if ( s->hw.cmos_data[RTC_REG_B] & RTC_PIE )
+        rtc_toggle_irq(s);
     spin_unlock(&s->lock);
 }
 
@@ -68,19 +81,25 @@ static void rtc_timer_update(RTCState *s
     ASSERT(spin_is_locked(&s->lock));
 
     period_code = s->hw.cmos_data[RTC_REG_A] & RTC_RATE_SELECT;
-    if ( (period_code != 0) && (s->hw.cmos_data[RTC_REG_B] & RTC_PIE) )
+    switch ( s->hw.cmos_data[RTC_REG_A] & RTC_DIV_CTL )
     {
-        if ( period_code <= 2 )
+    case RTC_REF_CLCK_32KHZ:
+        if ( (period_code != 0) && (period_code <= 2) )
             period_code += 7;
-
-        period = 1 << (period_code - 1); /* period in 32 Khz cycles */
-        period = DIV_ROUND((period * 1000000000ULL), 32768); /* period in ns */
-        create_periodic_time(v, &s->pt, period, period, RTC_IRQ,
-                             rtc_periodic_cb, s);
-    }
-    else
-    {
+        /* fall through */
+    case RTC_REF_CLCK_1MHZ:
+    case RTC_REF_CLCK_4MHZ:
+        if ( period_code != 0 )
+        {
+            period = 1 << (period_code - 1); /* period in 32 Khz cycles */
+            period = DIV_ROUND(period * 1000000000ULL, 32768); /* in ns */
+            create_periodic_time(v, &s->pt, period, period, RTC_IRQ, NULL, s);
+            break;
+        }
+        /* fall through */
+    default:
         destroy_periodic_time(&s->pt);
+        break;
     }
 }
 
@@ -144,7 +163,6 @@ static void rtc_update_timer(void *opaqu
 static void rtc_update_timer2(void *opaque)
 {
     RTCState *s = opaque;
-    struct domain *d = vrtc_domain(s);
 
     spin_lock(&s->lock);
     if (!(s->hw.cmos_data[RTC_REG_B] & RTC_SET))
@@ -152,11 +170,7 @@ static void rtc_update_timer2(void *opaq
         s->hw.cmos_data[RTC_REG_C] |= RTC_UF;
         s->hw.cmos_data[RTC_REG_A] &= ~RTC_UIP;
         if ((s->hw.cmos_data[RTC_REG_B] & RTC_UIE))
-        {
-            s->hw.cmos_data[RTC_REG_C] |= RTC_IRQF;
-            hvm_isa_irq_deassert(d, RTC_IRQ);
-            hvm_isa_irq_assert(d, RTC_IRQ);
-        }
+            rtc_toggle_irq(s);
         check_update_timer(s);
     }
     spin_unlock(&s->lock);
@@ -343,7 +357,6 @@ static void alarm_timer_update(RTCState 
 static void rtc_alarm_cb(void *opaque)
 {
     RTCState *s = opaque;
-    struct domain *d = vrtc_domain(s);
 
     spin_lock(&s->lock);
     if (!(s->hw.cmos_data[RTC_REG_B] & RTC_SET))
@@ -351,11 +364,7 @@ static void rtc_alarm_cb(void *opaque)
         s->hw.cmos_data[RTC_REG_C] |= RTC_AF;
         /* alarm interrupt */
         if (s->hw.cmos_data[RTC_REG_B] & RTC_AIE)
-        {
-            s->hw.cmos_data[RTC_REG_C] |= RTC_IRQF;
-            hvm_isa_irq_deassert(d, RTC_IRQ);
-            hvm_isa_irq_assert(d, RTC_IRQ);
-        }
+            rtc_toggle_irq(s);
         alarm_timer_update(s);
     }
     spin_unlock(&s->lock);
@@ -365,6 +374,7 @@ static int rtc_ioport_write(void *opaque
 {
     RTCState *s = opaque;
     struct domain *d = vrtc_domain(s);
+    uint32_t orig, mask;
 
     spin_lock(&s->lock);
 
@@ -382,6 +392,7 @@ static int rtc_ioport_write(void *opaque
         return 0;
     }
 
+    orig = s->hw.cmos_data[s->hw.cmos_index];
     switch ( s->hw.cmos_index )
     {
     case RTC_SECONDS_ALARM:
@@ -405,9 +416,9 @@ static int rtc_ioport_write(void *opaque
         break;
     case RTC_REG_A:
         /* UIP bit is read only */
-        s->hw.cmos_data[RTC_REG_A] = (data & ~RTC_UIP) |
-            (s->hw.cmos_data[RTC_REG_A] & RTC_UIP);
-        rtc_timer_update(s);
+        s->hw.cmos_data[RTC_REG_A] = (data & ~RTC_UIP) | (orig & RTC_UIP);
+        if ( (data ^ orig) & (RTC_RATE_SELECT | RTC_DIV_CTL) )
+            rtc_timer_update(s);
         break;
     case RTC_REG_B:
         if ( data & RTC_SET )
@@ -415,7 +426,7 @@ static int rtc_ioport_write(void *opaque
             /* set mode: reset UIP mode */
             s->hw.cmos_data[RTC_REG_A] &= ~RTC_UIP;
             /* adjust cmos before stopping */
-            if (!(s->hw.cmos_data[RTC_REG_B] & RTC_SET))
+            if (!(orig & RTC_SET))
             {
                 s->current_tm = gmtime(get_localtime(d));
                 rtc_copy_date(s);
@@ -424,21 +435,27 @@ static int rtc_ioport_write(void *opaque
         else
         {
             /* if disabling set mode, update the time */
-            if ( s->hw.cmos_data[RTC_REG_B] & RTC_SET )
+            if ( orig & RTC_SET )
                 rtc_set_time(s);
         }
-        /* if the interrupt is already set when the interrupt become
-         * enabled, raise an interrupt immediately*/
-        if ((data & RTC_UIE) && !(s->hw.cmos_data[RTC_REG_B] & RTC_UIE))
-            if (s->hw.cmos_data[RTC_REG_C] & RTC_UF)
+        /*
+         * If the interrupt is already set when the interrupt becomes
+         * enabled, raise an interrupt immediately.
+         * NB: RTC_{A,P,U}IE == RTC_{A,P,U}F respectively.
+         */
+        for ( mask = RTC_UIE; mask <= RTC_PIE; mask <<= 1 )
+            if ( (data & mask) && !(orig & mask) &&
+                 (s->hw.cmos_data[RTC_REG_C] & mask) )
             {
-                hvm_isa_irq_deassert(d, RTC_IRQ);
-                hvm_isa_irq_assert(d, RTC_IRQ);
+                rtc_toggle_irq(s);
+                break;
             }
         s->hw.cmos_data[RTC_REG_B] = data;
-        rtc_timer_update(s);
+        if ( (data ^ orig) & RTC_PIE )
+            rtc_timer_update(s);
         check_update_timer(s);
-        alarm_timer_update(s);
+        if ( (data ^ orig) & RTC_AIE )
+            alarm_timer_update(s);
         break;
     case RTC_REG_C:
     case RTC_REG_D:
--- a/xen/arch/x86/hvm/vpt.c
+++ b/xen/arch/x86/hvm/vpt.c
@@ -22,6 +22,7 @@
 #include <asm/hvm/vpt.h>
 #include <asm/event.h>
 #include <asm/apic.h>
+#include <asm/mc146818rtc.h>
 
 #define mode_is(d, name) \
     ((d)->arch.hvm_domain.params[HVM_PARAM_TIMER_MODE] == HVMPTM_##name)
@@ -218,6 +219,7 @@ void pt_update_irq(struct vcpu *v)
     struct periodic_time *pt, *temp, *earliest_pt = NULL;
     uint64_t max_lag = -1ULL;
     int irq, is_lapic;
+    void *pt_priv;
 
     spin_lock(&v->arch.hvm_vcpu.tm_lock);
 
@@ -251,13 +253,14 @@ void pt_update_irq(struct vcpu *v)
     earliest_pt->irq_issued = 1;
     irq = earliest_pt->irq;
     is_lapic = (earliest_pt->source == PTSRC_lapic);
+    pt_priv = earliest_pt->priv;
 
     spin_unlock(&v->arch.hvm_vcpu.tm_lock);
 
     if ( is_lapic )
-    {
         vlapic_set_irq(vcpu_vlapic(v), irq, 0);
-    }
+    else if ( irq == RTC_IRQ )
+        rtc_periodic_interrupt(pt_priv);
     else
     {
         hvm_isa_irq_deassert(v->domain, irq);
--- a/xen/include/asm-x86/hvm/vpt.h
+++ b/xen/include/asm-x86/hvm/vpt.h
@@ -181,6 +181,7 @@ void rtc_migrate_timers(struct vcpu *v);
 void rtc_deinit(struct domain *d);
 void rtc_reset(struct domain *d);
 void rtc_update_clock(struct domain *d);
+void rtc_periodic_interrupt(void *);
 
 void pmtimer_init(struct vcpu *v);
 void pmtimer_deinit(struct domain *d);


        s->hw.cmos_data[RTC_REG_A] =3D (data & ~RTC_UIP) | (orig & =
RTC_UIP);=0A+        if ( (data ^ orig) & (RTC_RATE_SELECT | RTC_DIV_CTL) =
)=0A+            rtc_timer_update(s);=0A         break;=0A     case =
RTC_REG_B:=0A         if ( data & RTC_SET )=0A@@ -415,7 +426,7 @@ static =
int rtc_ioport_write(void *opaque=0A             /* set mode: reset UIP =
mode */=0A             s->hw.cmos_data[RTC_REG_A] &=3D ~RTC_UIP;=0A        =
     /* adjust cmos before stopping */=0A-            if (!(s->hw.cmos_data=
[RTC_REG_B] & RTC_SET))=0A+            if (!(orig & RTC_SET))=0A           =
  {=0A                 s->current_tm =3D gmtime(get_localtime(d));=0A      =
           rtc_copy_date(s);=0A@@ -424,21 +435,27 @@ static int rtc_ioport_=
write(void *opaque=0A         else=0A         {=0A             /* if =
disabling set mode, update the time */=0A-            if ( s->hw.cmos_data[=
RTC_REG_B] & RTC_SET )=0A+            if ( orig & RTC_SET )=0A             =
    rtc_set_time(s);=0A         }=0A-        /* if the interrupt is =
already set when the interrupt become=0A-         * enabled, raise an =
interrupt immediately*/=0A-        if ((data & RTC_UIE) && !(s->hw.cmos_dat=
a[RTC_REG_B] & RTC_UIE))=0A-            if (s->hw.cmos_data[RTC_REG_C] & =
RTC_UF)=0A+        /*=0A+         * If the interrupt is already set when =
the interrupt becomes=0A+         * enabled, raise an interrupt immediately=
.=0A+         * NB: RTC_{A,P,U}IE =3D=3D RTC_{A,P,U}F respectively.=0A+    =
     */=0A+        for ( mask =3D RTC_UIE; mask <=3D RTC_PIE; mask <<=3D 1 =
)=0A+            if ( (data & mask) && !(orig & mask) &&=0A+               =
  (s->hw.cmos_data[RTC_REG_C] & mask) )=0A             {=0A-               =
 hvm_isa_irq_deassert(d, RTC_IRQ);=0A-                hvm_isa_irq_assert(d,=
 RTC_IRQ);=0A+                rtc_toggle_irq(s);=0A+                =
break;=0A             }=0A         s->hw.cmos_data[RTC_REG_B] =3D =
data;=0A-        rtc_timer_update(s);=0A+        if ( (data ^ orig) & =
RTC_PIE )=0A+            rtc_timer_update(s);=0A         check_update_timer=
(s);=0A-        alarm_timer_update(s);=0A+        if ( (data ^ orig) & =
RTC_AIE )=0A+            alarm_timer_update(s);=0A         break;=0A     =
case RTC_REG_C:=0A     case RTC_REG_D:=0A--- a/xen/arch/x86/hvm/vpt.c=0A+++=
 b/xen/arch/x86/hvm/vpt.c=0A@@ -22,6 +22,7 @@=0A #include <asm/hvm/vpt.h>=
=0A #include <asm/event.h>=0A #include <asm/apic.h>=0A+#include <asm/mc1468=
18rtc.h>=0A =0A #define mode_is(d, name) \=0A     ((d)->arch.hvm_domain.par=
ams[HVM_PARAM_TIMER_MODE] =3D=3D HVMPTM_##name)=0A@@ -218,6 +219,7 @@ void =
pt_update_irq(struct vcpu *v)=0A     struct periodic_time *pt, *temp, =
*earliest_pt =3D NULL;=0A     uint64_t max_lag =3D -1ULL;=0A     int irq, =
is_lapic;=0A+    void *pt_priv;=0A =0A     spin_lock(&v->arch.hvm_vcpu.tm_l=
ock);=0A =0A@@ -251,13 +253,14 @@ void pt_update_irq(struct vcpu *v)=0A    =
 earliest_pt->irq_issued =3D 1;=0A     irq =3D earliest_pt->irq;=0A     =
is_lapic =3D (earliest_pt->source =3D=3D PTSRC_lapic);=0A+    pt_priv =3D =
earliest_pt->priv;=0A =0A     spin_unlock(&v->arch.hvm_vcpu.tm_lock);=0A =
=0A     if ( is_lapic )=0A-    {=0A         vlapic_set_irq(vcpu_vlapic(v), =
irq, 0);=0A-    }=0A+    else if ( irq =3D=3D RTC_IRQ )=0A+        =
rtc_periodic_interrupt(pt_priv);=0A     else=0A     {=0A         hvm_isa_ir=
q_deassert(v->domain, irq);=0A--- a/xen/include/asm-x86/hvm/vpt.h=0A+++ =
b/xen/include/asm-x86/hvm/vpt.h=0A@@ -181,6 +181,7 @@ void rtc_migrate_time=
rs(struct vcpu *v);=0A void rtc_deinit(struct domain *d);=0A void =
rtc_reset(struct domain *d);=0A void rtc_update_clock(struct domain =
*d);=0A+void rtc_periodic_interrupt(void *);=0A =0A void pmtimer_init(struc=
t vcpu *v);=0A void pmtimer_deinit(struct domain *d);=0A
--=__Part7F4E0B75.1__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part7F4E0B75.1__=--


From xen-devel-bounces@lists.xen.org Mon Aug 13 15:21:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 15:21:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0wSC-0007Gu-G6; Mon, 13 Aug 2012 15:21:16 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T0wSB-0007Gp-1k
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 15:21:15 +0000
Received: from [85.158.143.99:58494] by server-3.bemta-4.messagelabs.com id
	A5/56-09529-A6B19205; Mon, 13 Aug 2012 15:21:14 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-13.tower-216.messagelabs.com!1344871272!27051170!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=1.1 required=7.0 tests=MAILTO_TO_SPAM_ADDR,
	MANY_EXCLAMATIONS
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13175 invoked from network); 13 Aug 2012 15:21:12 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-13.tower-216.messagelabs.com with SMTP;
	13 Aug 2012 15:21:12 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 13 Aug 2012 16:21:11 +0100
Message-Id: <50293785020000780009484C@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Mon, 13 Aug 2012 16:21:09 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "tupeng212" <tupeng212@gmail.com>
References: <201208070018394210381@gmail.com>,
	<50224B7402000078000937DA@nat28.tlf.novell.com>
	<2012081023124696835343@gmail.com>
In-Reply-To: <2012081023124696835343@gmail.com>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part7F4E0B75.1__="
Cc: Tim Deegan <tim@xen.org>, Keir Fraser <keir@xen.org>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Big Bug:Time in VM goes slower;
 foud Solution but demand Judgement! A Interesting Story!
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part7F4E0B75.1__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

>>> On 10.08.12 at 17:17, tupeng212 <tupeng212@gmail.com> wrote:
> 2 Bug in xen
> JVM is OK, so left the bug to xen, I have found both the reason and=20
> solution. As Jan mentioned avoiding call create_periodic_time, it got =
much=20
> better. so I modified it like this, if the pt timer is created before,=20=

> setting RegA down is just changing the period value, so I do nothing =
except=20

What you describe doesn't sound accurate (i.e. I'm getting the
impression that you might have suppressed the call in cases
where you shouldn't).

Below/attached a first draft of a patch to fix not only this issue,
but a few more with the RTC emulation. Would you give this a
try?

Keir, Tim, others - the change to xen/arch/x86/hvm/vpt.c really
looks more like a hack than a solution, but I don't see another
way without much more intrusive changes. The point is that we
want the RTC code to decide whether to generate an interrupt
(so that RTC_PF can become set correctly even without RTC_PIE
getting enabled by the guest).
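
To make the intended behaviour concrete, here is a toy model (my own
stand-alone sketch, not code from the patch or from Xen; toy_rtc,
toy_periodic_tick() and toy_read_reg_c() are invented names): RTC_PF
must latch in register C on every periodic tick regardless of RTC_PIE,
while the IRQ is only raised when RTC_PIE is set.

```c
#include <assert.h>
#include <stdint.h>

#define RTC_PF   0x40  /* periodic flag, register C */
#define RTC_PIE  0x40  /* periodic interrupt enable, register B */
#define RTC_IRQF 0x80  /* interrupt request flag, register C */

struct toy_rtc {
    uint8_t reg_b, reg_c;
    int irqs_raised;
};

/* One periodic tick: latch PF unconditionally, raise the IRQ only
 * when the guest has enabled periodic interrupts. */
void toy_periodic_tick(struct toy_rtc *s)
{
    s->reg_c |= RTC_PF;
    if (s->reg_b & RTC_PIE) {
        s->reg_c |= RTC_IRQF;
        s->irqs_raised++;
    }
}

/* Reading register C returns its value and clears it, as on real hw. */
uint8_t toy_read_reg_c(struct toy_rtc *s)
{
    uint8_t v = s->reg_c;
    s->reg_c = 0;
    return v;
}
```

A guest that merely polls register C therefore still observes PF even
though it never enabled the interrupt; that is the property the vpt.c
change is meant to preserve.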

Jan

x86/HVM: assorted RTC emulation adjustments

- don't call rtc_timer_update() on REG_A writes when the value didn't
  change (doing the call always was reported to cause wall clock time
  lagging with the JVM under Windows)
- in the same spirit, don't call rtc_timer_update() or
  alarm_timer_update() on REG_B writes when the respective RTC_xIE bit
  didn't change
- raise the RTC IRQ not only when RTC_UIE gets set while RTC_UF was
  already set, but generalize this to alarm and periodic interrupts as
  well
- properly handle RTC_PF when the guest is not also setting RTC_PIE
- also handle the two other clock bases
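
For reference, the rates selected by the low four bits of REG_A work out
as follows; this is a stand-alone sketch of the arithmetic done in
rtc_timer_update() (rtc_period_ns() is an invented helper name, and
DIV_ROUND is reproduced here as the usual round-to-nearest division):

```c
#include <assert.h>
#include <stdint.h>

#define DIV_ROUND(x, y) (((x) + (y) / 2) / (y))

/* Period in ns for a given REG_A rate-select code, with the 32.768 kHz
 * time base.  Codes 1 and 2 alias codes 8 and 9; code 0 means the
 * periodic timer is off (returned here as 0). */
uint64_t rtc_period_ns(unsigned int period_code)
{
    uint64_t period;

    if (period_code == 0)
        return 0;
    if (period_code <= 2)
        period_code += 7;
    period = 1ULL << (period_code - 1);        /* in 32 kHz cycles */
    return DIV_ROUND(period * 1000000000ULL, 32768);
}
```

Code 6 gives the common 1024 Hz rate (976563 ns), code 15 the slowest
2 Hz rate, matching the MC146818 rate table.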

--- a/xen/arch/x86/hvm/rtc.c
+++ b/xen/arch/x86/hvm/rtc.c
@@ -50,11 +50,24 @@ static void rtc_set_time(RTCState *s);
 static inline int from_bcd(RTCState *s, int a);
 static inline int convert_hour(RTCState *s, int hour);
 
-static void rtc_periodic_cb(struct vcpu *v, void *opaque)
+static void rtc_toggle_irq(RTCState *s)
+{
+    struct domain *d = vrtc_domain(s);
+
+    ASSERT(spin_is_locked(&s->lock));
+    s->hw.cmos_data[RTC_REG_C] |= RTC_IRQF;
+    hvm_isa_irq_deassert(d, RTC_IRQ);
+    hvm_isa_irq_assert(d, RTC_IRQ);
+}
+
+void rtc_periodic_interrupt(void *opaque)
 {
     RTCState *s = opaque;
+
     spin_lock(&s->lock);
-    s->hw.cmos_data[RTC_REG_C] |= 0xc0;
+    s->hw.cmos_data[RTC_REG_C] |= RTC_PF;
+    if ( s->hw.cmos_data[RTC_REG_B] & RTC_PIE )
+        rtc_toggle_irq(s);
     spin_unlock(&s->lock);
 }
 
@@ -68,19 +81,25 @@ static void rtc_timer_update(RTCState *s
     ASSERT(spin_is_locked(&s->lock));
 
     period_code = s->hw.cmos_data[RTC_REG_A] & RTC_RATE_SELECT;
-    if ( (period_code != 0) && (s->hw.cmos_data[RTC_REG_B] & RTC_PIE) )
+    switch ( s->hw.cmos_data[RTC_REG_A] & RTC_DIV_CTL )
     {
-        if ( period_code <= 2 )
+    case RTC_REF_CLCK_32KHZ:
+        if ( (period_code != 0) && (period_code <= 2) )
             period_code += 7;
-
-        period = 1 << (period_code - 1); /* period in 32 Khz cycles */
-        period = DIV_ROUND((period * 1000000000ULL), 32768); /* period in ns */
-        create_periodic_time(v, &s->pt, period, period, RTC_IRQ,
-                             rtc_periodic_cb, s);
-    }
-    else
-    {
+        /* fall through */
+    case RTC_REF_CLCK_1MHZ:
+    case RTC_REF_CLCK_4MHZ:
+        if ( period_code != 0 )
+        {
+            period = 1 << (period_code - 1); /* period in 32 Khz cycles */
+            period = DIV_ROUND(period * 1000000000ULL, 32768); /* in ns */
+            create_periodic_time(v, &s->pt, period, period, RTC_IRQ, NULL, s);
+            break;
+        }
+        /* fall through */
+    default:
         destroy_periodic_time(&s->pt);
+        break;
     }
 }
 
@@ -144,7 +163,6 @@ static void rtc_update_timer(void *opaqu
 static void rtc_update_timer2(void *opaque)
 {
     RTCState *s = opaque;
-    struct domain *d = vrtc_domain(s);
 
     spin_lock(&s->lock);
     if (!(s->hw.cmos_data[RTC_REG_B] & RTC_SET))
@@ -152,11 +170,7 @@ static void rtc_update_timer2(void *opaq
         s->hw.cmos_data[RTC_REG_C] |= RTC_UF;
         s->hw.cmos_data[RTC_REG_A] &= ~RTC_UIP;
         if ((s->hw.cmos_data[RTC_REG_B] & RTC_UIE))
-        {
-            s->hw.cmos_data[RTC_REG_C] |= RTC_IRQF;
-            hvm_isa_irq_deassert(d, RTC_IRQ);
-            hvm_isa_irq_assert(d, RTC_IRQ);
-        }
+            rtc_toggle_irq(s);
         check_update_timer(s);
     }
     spin_unlock(&s->lock);
@@ -343,7 +357,6 @@ static void alarm_timer_update(RTCState 
 static void rtc_alarm_cb(void *opaque)
 {
     RTCState *s = opaque;
-    struct domain *d = vrtc_domain(s);
 
     spin_lock(&s->lock);
     if (!(s->hw.cmos_data[RTC_REG_B] & RTC_SET))
@@ -351,11 +364,7 @@ static void rtc_alarm_cb(void *opaque)
         s->hw.cmos_data[RTC_REG_C] |= RTC_AF;
         /* alarm interrupt */
         if (s->hw.cmos_data[RTC_REG_B] & RTC_AIE)
-        {
-            s->hw.cmos_data[RTC_REG_C] |= RTC_IRQF;
-            hvm_isa_irq_deassert(d, RTC_IRQ);
-            hvm_isa_irq_assert(d, RTC_IRQ);
-        }
+            rtc_toggle_irq(s);
         alarm_timer_update(s);
     }
     spin_unlock(&s->lock);
@@ -365,6 +374,7 @@ static int rtc_ioport_write(void *opaque
 {
     RTCState *s = opaque;
     struct domain *d = vrtc_domain(s);
+    uint32_t orig, mask;
 
     spin_lock(&s->lock);
 
@@ -382,6 +392,7 @@ static int rtc_ioport_write(void *opaque
         return 0;
     }
 
+    orig = s->hw.cmos_data[s->hw.cmos_index];
     switch ( s->hw.cmos_index )
     {
     case RTC_SECONDS_ALARM:
@@ -405,9 +416,9 @@ static int rtc_ioport_write(void *opaque
         break;
     case RTC_REG_A:
         /* UIP bit is read only */
-        s->hw.cmos_data[RTC_REG_A] = (data & ~RTC_UIP) |
-            (s->hw.cmos_data[RTC_REG_A] & RTC_UIP);
-        rtc_timer_update(s);
+        s->hw.cmos_data[RTC_REG_A] = (data & ~RTC_UIP) | (orig & RTC_UIP);
+        if ( (data ^ orig) & (RTC_RATE_SELECT | RTC_DIV_CTL) )
+            rtc_timer_update(s);
         break;
     case RTC_REG_B:
         if ( data & RTC_SET )
@@ -415,7 +426,7 @@ static int rtc_ioport_write(void *opaque
             /* set mode: reset UIP mode */
             s->hw.cmos_data[RTC_REG_A] &= ~RTC_UIP;
             /* adjust cmos before stopping */
-            if (!(s->hw.cmos_data[RTC_REG_B] & RTC_SET))
+            if (!(orig & RTC_SET))
             {
                 s->current_tm = gmtime(get_localtime(d));
                 rtc_copy_date(s);
@@ -424,21 +435,27 @@ static int rtc_ioport_write(void *opaque
         else
         {
             /* if disabling set mode, update the time */
-            if ( s->hw.cmos_data[RTC_REG_B] & RTC_SET )
+            if ( orig & RTC_SET )
                 rtc_set_time(s);
         }
-        /* if the interrupt is already set when the interrupt become
-         * enabled, raise an interrupt immediately*/
-        if ((data & RTC_UIE) && !(s->hw.cmos_data[RTC_REG_B] & RTC_UIE))
-            if (s->hw.cmos_data[RTC_REG_C] & RTC_UF)
+        /*
+         * If the interrupt is already set when the interrupt becomes
+         * enabled, raise an interrupt immediately.
+         * NB: RTC_{A,P,U}IE == RTC_{A,P,U}F respectively.
+         */
+        for ( mask = RTC_UIE; mask <= RTC_PIE; mask <<= 1 )
+            if ( (data & mask) && !(orig & mask) &&
+                 (s->hw.cmos_data[RTC_REG_C] & mask) )
             {
-                hvm_isa_irq_deassert(d, RTC_IRQ);
-                hvm_isa_irq_assert(d, RTC_IRQ);
+                rtc_toggle_irq(s);
+                break;
             }
         s->hw.cmos_data[RTC_REG_B] = data;
-        rtc_timer_update(s);
+        if ( (data ^ orig) & RTC_PIE )
+            rtc_timer_update(s);
         check_update_timer(s);
-        alarm_timer_update(s);
+        if ( (data ^ orig) & RTC_AIE )
+            alarm_timer_update(s);
         break;
     case RTC_REG_C:
     case RTC_REG_D:
--- a/xen/arch/x86/hvm/vpt.c
+++ b/xen/arch/x86/hvm/vpt.c
@@ -22,6 +22,7 @@
 #include <asm/hvm/vpt.h>
 #include <asm/event.h>
 #include <asm/apic.h>
+#include <asm/mc146818rtc.h>
 
 #define mode_is(d, name) \
     ((d)->arch.hvm_domain.params[HVM_PARAM_TIMER_MODE] == HVMPTM_##name)
@@ -218,6 +219,7 @@ void pt_update_irq(struct vcpu *v)
     struct periodic_time *pt, *temp, *earliest_pt = NULL;
     uint64_t max_lag = -1ULL;
     int irq, is_lapic;
+    void *pt_priv;
 
     spin_lock(&v->arch.hvm_vcpu.tm_lock);
 
@@ -251,13 +253,14 @@ void pt_update_irq(struct vcpu *v)
     earliest_pt->irq_issued = 1;
     irq = earliest_pt->irq;
     is_lapic = (earliest_pt->source == PTSRC_lapic);
+    pt_priv = earliest_pt->priv;
 
     spin_unlock(&v->arch.hvm_vcpu.tm_lock);
 
     if ( is_lapic )
-    {
         vlapic_set_irq(vcpu_vlapic(v), irq, 0);
-    }
+    else if ( irq == RTC_IRQ )
+        rtc_periodic_interrupt(pt_priv);
     else
     {
         hvm_isa_irq_deassert(v->domain, irq);
--- a/xen/include/asm-x86/hvm/vpt.h
+++ b/xen/include/asm-x86/hvm/vpt.h
@@ -181,6 +181,7 @@ void rtc_migrate_timers(struct vcpu *v);
 void rtc_deinit(struct domain *d);
 void rtc_reset(struct domain *d);
 void rtc_update_clock(struct domain *d);
+void rtc_periodic_interrupt(void *);
 
 void pmtimer_init(struct vcpu *v);
 void pmtimer_deinit(struct domain *d);


--=__Part7F4E0B75.1__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part7F4E0B75.1__=--


From xen-devel-bounces@lists.xen.org Mon Aug 13 15:22:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 15:22:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0wTP-0007Kn-40; Mon, 13 Aug 2012 15:22:31 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andres@lagarcavilla.org>) id 1T0wTN-0007KX-L2
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 15:22:29 +0000
Received: from [85.158.143.99:64340] by server-3.bemta-4.messagelabs.com id
	F8/08-09529-5BB19205; Mon, 13 Aug 2012 15:22:29 +0000
X-Env-Sender: andres@lagarcavilla.org
X-Msg-Ref: server-3.tower-216.messagelabs.com!1344871347!27390663!1
X-Originating-IP: [208.97.132.119]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA4Ljk3LjEzMi4xMTkgPT4gMzEwODA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5313 invoked from network); 13 Aug 2012 15:22:28 -0000
Received: from caiajhbdcbbj.dreamhost.com (HELO homiemail-a19.g.dreamhost.com)
	(208.97.132.119) by server-3.tower-216.messagelabs.com with SMTP;
	13 Aug 2012 15:22:28 -0000
Received: from homiemail-a19.g.dreamhost.com (localhost [127.0.0.1])
	by homiemail-a19.g.dreamhost.com (Postfix) with ESMTP id 4EEB1604069;
	Mon, 13 Aug 2012 08:22:27 -0700 (PDT)
DomainKey-Signature: a=rsa-sha1; c=nofws; d=lagarcavilla.org; h=message-id
	:in-reply-to:references:date:subject:from:to:cc:reply-to
	:mime-version:content-type:content-transfer-encoding; q=dns; s=
	lagarcavilla.org; b=Ek9ueTg6ZtKLDpR/FGvE1xWP9RTy7LIg2D0gmKfNOD+r
	ywYkkS8+8DmYek6InCE2TtuqWhoqBofhKz7Tozsf3gBHAMgWjchXGljGsaQWXpMc
From xen-devel-bounces@lists.xen.org Mon Aug 13 15:22:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 15:22:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0wTP-0007Kn-40; Mon, 13 Aug 2012 15:22:31 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andres@lagarcavilla.org>) id 1T0wTN-0007KX-L2
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 15:22:29 +0000
Received: from [85.158.143.99:64340] by server-3.bemta-4.messagelabs.com id
	F8/08-09529-5BB19205; Mon, 13 Aug 2012 15:22:29 +0000
X-Env-Sender: andres@lagarcavilla.org
X-Msg-Ref: server-3.tower-216.messagelabs.com!1344871347!27390663!1
X-Originating-IP: [208.97.132.119]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA4Ljk3LjEzMi4xMTkgPT4gMzEwODA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5313 invoked from network); 13 Aug 2012 15:22:28 -0000
Received: from caiajhbdcbbj.dreamhost.com (HELO homiemail-a19.g.dreamhost.com)
	(208.97.132.119) by server-3.tower-216.messagelabs.com with SMTP;
	13 Aug 2012 15:22:28 -0000
Received: from homiemail-a19.g.dreamhost.com (localhost [127.0.0.1])
	by homiemail-a19.g.dreamhost.com (Postfix) with ESMTP id 4EEB1604069;
	Mon, 13 Aug 2012 08:22:27 -0700 (PDT)
DomainKey-Signature: a=rsa-sha1; c=nofws; d=lagarcavilla.org; h=message-id
	:in-reply-to:references:date:subject:from:to:cc:reply-to
	:mime-version:content-type:content-transfer-encoding; q=dns; s=
	lagarcavilla.org; b=Ek9ueTg6ZtKLDpR/FGvE1xWP9RTy7LIg2D0gmKfNOD+r
	ywYkkS8+8DmYek6InCE2TtuqWhoqBofhKz7Tozsf3gBHAMgWjchXGljGsaQWXpMc
	GdeBBEMqn9Gsv4bW/CG46BxdNqOLJZ3NbdIhGvZXNz3Z+uins/7BxQNUN79zzxs=
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed; d=lagarcavilla.org; h=
	message-id:in-reply-to:references:date:subject:from:to:cc
	:reply-to:mime-version:content-type:content-transfer-encoding;
	s=lagarcavilla.org; bh=cIqObX1SyMRfSzy8UMLs7Al5hBk=; b=iBOsl49u
	JzK41w4xZ2k+kGjNpZRY4x87rGBS52se03UD8cZA95sLrrOvXa/oxUrdW5BC/M2/
	trcKBHWmI8uwqObpJROM53gO9/m0IPnSDFzEoKAcCi8ERWfEAjOIgzOo9LsXYFOG
	p9QDhWG2p/15bctfwOyAm3EBAvvTPZba+9U=
Received: from webmail.lagarcavilla.org (caiajhbihbdd.dreamhost.com
	[208.97.187.133]) (Authenticated sender: andres@lagarcavilla.com)
	by homiemail-a19.g.dreamhost.com (Postfix) with ESMTPA id A3FBF604061; 
	Mon, 13 Aug 2012 08:22:26 -0700 (PDT)
Received: from 206.223.182.18 (proxying for 206.223.182.18)
	(SquirrelMail authenticated user andres@lagarcavilla.com)
	by webmail.lagarcavilla.org with HTTP;
	Mon, 13 Aug 2012 08:22:22 -0700
Message-ID: <2a0c4a6bbbb7050073e8d5de103ef129.squirrel@webmail.lagarcavilla.org>
In-Reply-To: <5029303F020000780009480B@nat28.tlf.novell.com>
References: <50290B0C0200007800094750@nat28.tlf.novell.com>
	<5029303F020000780009480B@nat28.tlf.novell.com>
Date: Mon, 13 Aug 2012 08:22:22 -0700
From: "Andres Lagar-Cavilla" <andres@lagarcavilla.org>
To: "Jan Beulich" <JBeulich@suse.com>
User-Agent: SquirrelMail/1.4.21
MIME-Version: 1.0
Cc: George Dunlap <george.dunlap@eu.citrix.com>, Tim Deegan <tim@xen.org>,
	Andres Lagar-Cavilla <andres@lagarcavilla.org>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] x86/PoD: fix (un)locking after
 24772:28edc2b31a9b
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: andres@lagarcavilla.org
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>>> On 13.08.12 at 14:11, "Jan Beulich" <JBeulich@suse.com> wrote:
>> That c/s introduced a double unlock on the out-of-memory error path of
>> p2m_pod_demand_populate().
>
> I also wonder how correct that changeset's elimination of the page
> alloc lock in a number of places here is - p2m_pod_set_mem_target()'s
> calculations, for example, involve d->tot_pages, which with that lock
> not held can change under its feet.

AFAICT, access to d->tot_pages was not protected by the page_alloc lock
even prior to 24772.

Back then, I thought those unprotected tot_pages accesses should either be
locked or converted to atomic_read(). It slipped through the cracks.

Andres

>
> Jan
>
>



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 15:45:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 15:45:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0wpY-00083E-OM; Mon, 13 Aug 2012 15:45:24 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T0wpW-000839-KE
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 15:45:22 +0000
Received: from [85.158.139.83:55640] by server-6.bemta-5.messagelabs.com id
	7E/F3-22415-11129205; Mon, 13 Aug 2012 15:45:21 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-15.tower-182.messagelabs.com!1344872720!27813954!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12024 invoked from network); 13 Aug 2012 15:45:20 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-15.tower-182.messagelabs.com with SMTP;
	13 Aug 2012 15:45:20 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 13 Aug 2012 16:45:19 +0100
Message-Id: <50293D2E0200007800094884@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Mon, 13 Aug 2012 16:45:18 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Jackson" <Ian.Jackson@eu.citrix.com>
References: <50225ACD020000780009380F@nat28.tlf.novell.com>
	<20521.1210.708532.71249@mariner.uk.xensource.com>
In-Reply-To: <20521.1210.708532.71249@mariner.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] libxl: fix build for gcc prior to 4.3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 13.08.12 at 15:44, Ian Jackson <Ian.Jackson@eu.citrix.com> wrote:
> Jan Beulich writes ("[Xen-devel] [PATCH] libxl: fix build for gcc prior to 4.3"):
>> Short of a proper replacement, use the "deprecated" attribute instead:
>> It also produces a warning (thus causing the build to fail due to
>> -Werror), and is at least getting close to the intention here.
> 
> I think it would be fine to simply drop this check for earlier gcc's.
> That would make the compatibility ifdeffery simpler so I would prefer
> that.

While I think the other version was reasonable, here you go.

Jan

libxl: fix build for gcc prior to 4.3

So far all we (explicitly) require is gcc 3.4 or better, so we
shouldn't be unconditionally using features supported only by much
newer versions.

Short of a proper replacement, use the "deprecated" attribute instead:
It also produces a warning (thus causing the build to fail due to
-Werror), and is at least getting close to the intention here.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -55,8 +55,10 @@
 #ifdef LIBXL_H
 # error libxl.h should be included via libxl_internal.h, not separately
 #endif
-#define LIBXL_EXTERNAL_CALLERS_ONLY \
+#if __GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 3)
+# define LIBXL_EXTERNAL_CALLERS_ONLY \
     __attribute__((warning("may not be called from within libxl")))
+#endif
 
 #include "libxl.h"
 #include "_paths.h"




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 15:52:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 15:52:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0wvo-0008CQ-Iv; Mon, 13 Aug 2012 15:51:52 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T0wvn-0008CK-9x
	for xen-devel@lists.xensource.com; Mon, 13 Aug 2012 15:51:51 +0000
Received: from [85.158.143.35:63968] by server-2.bemta-4.messagelabs.com id
	FE/E5-31966-69229205; Mon, 13 Aug 2012 15:51:50 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1344873107!5432683!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjg2NTIy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20953 invoked from network); 13 Aug 2012 15:51:49 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-2.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 13 Aug 2012 15:51:49 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7DFpSeQ011904
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 13 Aug 2012 15:51:29 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7DFpQBK009797
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 13 Aug 2012 15:51:27 GMT
Received: from abhmt117.oracle.com (abhmt117.oracle.com [141.146.116.69])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7DFpOw8021157; Mon, 13 Aug 2012 10:51:26 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 13 Aug 2012 08:51:24 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id ADA4F402BA; Mon, 13 Aug 2012 11:41:44 -0400 (EDT)
Date: Mon, 13 Aug 2012 11:41:44 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: David Miller <davem@davemloft.net>,
	Ian Campbell <Ian.Campbell@eu.citrix.com>
Message-ID: <20120813154144.GA24868@phenom.dumpdata.com>
References: <20120807085554.GF29814@suse.de>
	<20120808.155046.820543563969484712.davem@davemloft.net>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20120808.155046.820543563969484712.davem@davemloft.net>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: xen-devel@lists.xensource.com, netdev@vger.kernel.org,
	linux-kernel@vger.kernel.org, Ian.Campbell@eu.citrix.com,
	linux-mm@kvack.org, mgorman@suse.de, konrad@darnok.org,
	akpm@linux-foundation.org
Subject: Re: [Xen-devel] [PATCH] netvm: check for page == NULL when
 propogating the skb->pfmemalloc flag
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Aug 08, 2012 at 03:50:46PM -0700, David Miller wrote:
> From: Mel Gorman <mgorman@suse.de>
> Date: Tue, 7 Aug 2012 09:55:55 +0100
> 
> > Commit [c48a11c7: netvm: propagate page->pfmemalloc to skb] is responsible
> > for the following bug triggered by a xen network driver
>  ...
> > The problem is that the xenfront driver is passing a NULL page to
> > __skb_fill_page_desc() which was unexpected. This patch checks that
> > there is a page before dereferencing.
> > 
> > Reported-and-Tested-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > Signed-off-by: Mel Gorman <mgorman@suse.de>
> 
> That call to __skb_fill_page_desc() in xen-netfront.c looks completely bogus.
> It's the only driver passing NULL here.

It looks to be passing a valid page pointer (at least from reading
the code), so I am not sure how it got turned into a NULL.

But let me double-check by instrumenting the driver...
> 
> That whole song and dance figuring out what to do with the head
> fragment page, depending upon whether the length is greater than the
> RX_COPY_THRESHOLD, is completely unnecessary.
> 
> Just use something like a call to __pskb_pull_tail(skb, len) and all
> that other crap around that area can simply be deleted.

It looks like overkill - it does a lot more than just allocate an SKB
and a page.

Deleting the extra code would be nice - however, I am not going to be able
to do that for the next two weeks, sadly, as my plate is full debugging
some other stuff.

Let's see if Ian has some time.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 15:52:19 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 15:52:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0ww2-0008DX-CQ; Mon, 13 Aug 2012 15:52:06 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Wei.Wang2@amd.com>) id 1T0ww0-0008D4-D9
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 15:52:04 +0000
Received: from [85.158.138.51:39563] by server-12.bemta-3.messagelabs.com id
	72/3D-04073-3A229205; Mon, 13 Aug 2012 15:52:03 +0000
X-Env-Sender: Wei.Wang2@amd.com
X-Msg-Ref: server-6.tower-174.messagelabs.com!1344873121!20010722!1
X-Originating-IP: [216.32.180.185]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28261 invoked from network); 13 Aug 2012 15:52:03 -0000
Received: from co1ehsobe002.messaging.microsoft.com (HELO
	co1outboundpool.messaging.microsoft.com) (216.32.180.185)
	by server-6.tower-174.messagelabs.com with AES128-SHA encrypted SMTP;
	13 Aug 2012 15:52:03 -0000
Received: from mail143-co1-R.bigfish.com (10.243.78.230) by
	CO1EHSOBE011.bigfish.com (10.243.66.74) with Microsoft SMTP Server id
	14.1.225.23; Mon, 13 Aug 2012 15:52:00 +0000
Received: from mail143-co1 (localhost [127.0.0.1])	by
	mail143-co1-R.bigfish.com (Postfix) with ESMTP id B56AE3003E7;
	Mon, 13 Aug 2012 15:52:00 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:163.181.249.109; KIP:(null); UIP:(null);
	IPV:NLI; H:ausb3twp02.amd.com; RD:none; EFVD:NLI
X-SpamScore: 0
X-BigFish: VPS0(zzzz1202hzzz2dh668h839h944hd24hf0ah107ah)
Received: from mail143-co1 (localhost.localdomain [127.0.0.1]) by mail143-co1
	(MessageSwitch) id 1344873117734510_20044;
	Mon, 13 Aug 2012 15:51:57 +0000 (UTC)
Received: from CO1EHSMHS001.bigfish.com (unknown [10.243.78.246])	by
	mail143-co1.bigfish.com (Postfix) with ESMTP id A6FF48C0132;
	Mon, 13 Aug 2012 15:51:57 +0000 (UTC)
Received: from ausb3twp02.amd.com (163.181.249.109) by
	CO1EHSMHS001.bigfish.com (10.243.66.11) with Microsoft SMTP Server id
	14.1.225.23; Mon, 13 Aug 2012 15:51:56 +0000
X-WSS-ID: 0M8PAQH-02-386-02
X-M-MSG: 
Received: from sausexedgep01.amd.com (sausexedgep01-ext.amd.com
	[163.181.249.72])	(using TLSv1 with cipher AES128-SHA (128/128
	bits))	(No
	client certificate requested)	by ausb3twp02.amd.com (Axway MailGate
	3.8.1)
	with ESMTP id 20DCBC8058;	Mon, 13 Aug 2012 10:51:53 -0500 (CDT)
Received: from SAUSEXDAG06.amd.com (163.181.55.7) by sausexedgep01.amd.com
	(163.181.36.54) with Microsoft SMTP Server (TLS) id 8.3.192.1;
	Mon, 13 Aug 2012 10:52:24 -0500
Received: from storexhtp02.amd.com (172.24.4.4) by sausexdag06.amd.com
	(163.181.55.7) with Microsoft SMTP Server (TLS) id 14.1.323.3;
	Mon, 13 Aug 2012 10:51:52 -0500
Received: from gwo.osrc.amd.com (165.204.16.204) by storexhtp02.amd.com
	(172.24.4.4) with Microsoft SMTP Server id 8.3.213.0; Mon, 13 Aug 2012
	11:51:52 -0400
Received: from mail.osrc.amd.com (aluminium.osrc.amd.com [165.204.15.141])	by
	gwo.osrc.amd.com (Postfix) with ESMTP id 68CA349C69F;
	Mon, 13 Aug 2012 16:51:51 +0100 (BST)
Received: from gran.amd.com (gran.osrc.amd.com [165.204.15.57])	by
	mail.osrc.amd.com (Postfix) with ESMTP id 4B3181C8001; Mon, 13 Aug 2012
	17:51:51 +0200 (CEST)
MIME-Version: 1.0
Message-ID: <patchbomb.1344872764@gran.amd.com>
User-Agent: Mercurial-patchbomb/2.0.2
Date: Mon, 13 Aug 2012 17:46:04 +0200
From: Wei Wang <wei.wang2@amd.com>
To: <JBeulich@suse.com>, <xen-devel@lists.xen.org>
X-OriginatorOrg: amd.com
Subject: [Xen-devel] [PATCH 0 of 3] amd iommu: Clean up iommu page table
	deallocation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Jan, this patch series cleans up the deallocate_next_page_table() function. Please take a look.

Thanks
Wei


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Jan, this patch series cleans up the deallocate_next_page_table() function. Please take a look.

Thanks
Wei


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 15:52:19 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 15:52:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0ww2-0008Di-Q7; Mon, 13 Aug 2012 15:52:06 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Wei.Wang2@amd.com>) id 1T0ww0-0008D5-Fi
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 15:52:04 +0000
Received: from [85.158.139.83:24755] by server-2.bemta-5.messagelabs.com id
	05/6F-10142-3A229205; Mon, 13 Aug 2012 15:52:03 +0000
X-Env-Sender: Wei.Wang2@amd.com
X-Msg-Ref: server-12.tower-182.messagelabs.com!1344873121!27899102!1
X-Originating-IP: [216.32.181.182]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8385 invoked from network); 13 Aug 2012 15:52:03 -0000
Received: from ch1ehsobe002.messaging.microsoft.com (HELO
	ch1outboundpool.messaging.microsoft.com) (216.32.181.182)
	by server-12.tower-182.messagelabs.com with AES128-SHA encrypted SMTP;
	13 Aug 2012 15:52:03 -0000
Received: from mail255-ch1-R.bigfish.com (10.43.68.250) by
	CH1EHSOBE010.bigfish.com (10.43.70.60) with Microsoft SMTP Server id
	14.1.225.23; Mon, 13 Aug 2012 15:52:01 +0000
Received: from mail255-ch1 (localhost [127.0.0.1])	by
	mail255-ch1-R.bigfish.com (Postfix) with ESMTP id 6E44620031A;
	Mon, 13 Aug 2012 15:52:01 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:163.181.249.108; KIP:(null); UIP:(null);
	IPV:NLI; H:ausb3twp01.amd.com; RD:none; EFVD:NLI
X-SpamScore: 0
X-BigFish: VPS0(zzzz1202hzz8275bhz2dh668h839h944hd24hf0ah107ah)
Received: from mail255-ch1 (localhost.localdomain [127.0.0.1]) by mail255-ch1
	(MessageSwitch) id 1344873119310409_24520;
	Mon, 13 Aug 2012 15:51:59 +0000 (UTC)
Received: from CH1EHSMHS004.bigfish.com (snatpool2.int.messaging.microsoft.com
	[10.43.68.235])	by mail255-ch1.bigfish.com (Postfix) with ESMTP id
	3FB2E1DC0045;	Mon, 13 Aug 2012 15:51:59 +0000 (UTC)
Received: from ausb3twp01.amd.com (163.181.249.108) by
	CH1EHSMHS004.bigfish.com (10.43.70.4) with Microsoft SMTP Server id
	14.1.225.23; Mon, 13 Aug 2012 15:51:56 +0000
X-WSS-ID: 0M8PAQK-01-68P-02
X-M-MSG: 
Received: from sausexedgep02.amd.com (sausexedgep02-ext.amd.com
	[163.181.249.73])	(using TLSv1 with cipher AES128-SHA (128/128
	bits))	(No
	client certificate requested)	by ausb3twp01.amd.com (Axway MailGate
	3.8.1)
	with ESMTP id 27103102803C;	Mon, 13 Aug 2012 10:51:55 -0500 (CDT)
Received: from SAUSEXDAG06.amd.com (163.181.55.7) by sausexedgep02.amd.com
	(163.181.36.59) with Microsoft SMTP Server (TLS) id 8.3.192.1;
	Mon, 13 Aug 2012 10:52:00 -0500
Received: from storexhtp02.amd.com (172.24.4.4) by sausexdag06.amd.com
	(163.181.55.7) with Microsoft SMTP Server (TLS) id 14.1.323.3;
	Mon, 13 Aug 2012 10:51:54 -0500
Received: from gwo.osrc.amd.com (165.204.16.204) by storexhtp02.amd.com
	(172.24.4.4) with Microsoft SMTP Server id 8.3.213.0; Mon, 13 Aug 2012
	11:51:52 -0400
Received: from mail.osrc.amd.com (aluminium.osrc.amd.com [165.204.15.141])	by
	gwo.osrc.amd.com (Postfix) with ESMTP id 6F9B349C6A1;
	Mon, 13 Aug 2012 16:51:51 +0100 (BST)
Received: from gran.amd.com (gran.osrc.amd.com [165.204.15.57])	by
	mail.osrc.amd.com (Postfix) with ESMTP id 5E7621C8002; Mon, 13 Aug 2012
	17:51:51 +0200 (CEST)
MIME-Version: 1.0
X-Mercurial-Node: b6ca536658ac712c5f03b5ffedce3bb61d55adaf
Message-ID: <b6ca536658ac712c5f03.1344872765@gran.amd.com>
In-Reply-To: <patchbomb.1344872764@gran.amd.com>
References: <patchbomb.1344872764@gran.amd.com>
User-Agent: Mercurial-patchbomb/2.0.2
Date: Mon, 13 Aug 2012 17:46:05 +0200
From: Wei Wang <wei.wang2@amd.com>
To: <JBeulich@suse.com>, <xen-devel@lists.xen.org>
X-OriginatorOrg: amd.com
Subject: [Xen-devel] [PATCH 1 of 3] amd iommu: Add 2 helper functions:
 iommu_is_pte_present and iommu_next_level
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

# HG changeset patch
# User Wei Wang <wei.wang2@amd.com>
# Date 1344872310 -7200
# Node ID b6ca536658ac712c5f03b5ffedce3bb61d55adaf
# Parent  47080c96593702acd4145c5a1175b7d7f8f0679d
amd iommu: Add 2 helper functions: iommu_is_pte_present and iommu_next_level.

Signed-off-by: Wei Wang <wei.wang2@amd.com>

diff -r 47080c965937 -r b6ca536658ac xen/drivers/passthrough/amd/iommu_map.c
--- a/xen/drivers/passthrough/amd/iommu_map.c	Fri Aug 10 09:51:01 2012 +0200
+++ b/xen/drivers/passthrough/amd/iommu_map.c	Mon Aug 13 17:38:30 2012 +0200
@@ -306,20 +306,6 @@ u64 amd_iommu_get_next_table_from_pte(u3
     return ptr;
 }
 
-static unsigned int iommu_next_level(u32 *entry)
-{
-    return get_field_from_reg_u32(entry[0],
-                                  IOMMU_PDE_NEXT_LEVEL_MASK,
-                                  IOMMU_PDE_NEXT_LEVEL_SHIFT);
-}
-
-static int amd_iommu_is_pte_present(u32 *entry)
-{
-    return get_field_from_reg_u32(entry[0],
-                                  IOMMU_PDE_PRESENT_MASK,
-                                  IOMMU_PDE_PRESENT_SHIFT);
-}
-
 /* For each pde, We use ignored bits (bit 1 - bit 8 and bit 63)
  * to save pde count, pde count = 511 is a candidate of page coalescing.
  */
@@ -489,7 +475,7 @@ static int iommu_pde_from_gfn(struct dom
                          >> PAGE_SHIFT;
 
         /* Split super page frame into smaller pieces.*/
-        if ( amd_iommu_is_pte_present((u32*)pde) &&
+        if ( iommu_is_pte_present((u32*)pde) &&
              (iommu_next_level((u32*)pde) == 0) &&
              next_table_mfn != 0 )
         {
@@ -526,7 +512,7 @@ static int iommu_pde_from_gfn(struct dom
         }
 
         /* Install lower level page table for non-present entries */
-        else if ( !amd_iommu_is_pte_present((u32*)pde) )
+        else if ( !iommu_is_pte_present((u32*)pde) )
         {
             if ( next_table_mfn == 0 )
             {
diff -r 47080c965937 -r b6ca536658ac xen/drivers/passthrough/amd/pci_amd_iommu.c
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c	Fri Aug 10 09:51:01 2012 +0200
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c	Mon Aug 13 17:38:30 2012 +0200
@@ -392,8 +392,7 @@ static void deallocate_next_page_table(s
 {
     void *table_vaddr, *pde;
     u64 next_table_maddr;
-    int index, next_level, present;
-    u32 *entry;
+    int index, next_level;
 
     table_vaddr = __map_domain_page(pg);
 
@@ -403,18 +402,11 @@ static void deallocate_next_page_table(s
         {
             pde = table_vaddr + (index * IOMMU_PAGE_TABLE_ENTRY_SIZE);
             next_table_maddr = amd_iommu_get_next_table_from_pte(pde);
-            entry = (u32*)pde;
 
-            next_level = get_field_from_reg_u32(entry[0],
-                                                IOMMU_PDE_NEXT_LEVEL_MASK,
-                                                IOMMU_PDE_NEXT_LEVEL_SHIFT);
-
-            present = get_field_from_reg_u32(entry[0],
-                                             IOMMU_PDE_PRESENT_MASK,
-                                             IOMMU_PDE_PRESENT_SHIFT);
+            next_level = iommu_next_level((u32*)pde);
 
             if ( (next_table_maddr != 0) && (next_level != 0)
-                && present )
+                && iommu_is_pte_present((u32*)pde) )
             {
                 deallocate_next_page_table(
                     maddr_to_page(next_table_maddr), level - 1);
diff -r 47080c965937 -r b6ca536658ac xen/include/asm-x86/hvm/svm/amd-iommu-proto.h
--- a/xen/include/asm-x86/hvm/svm/amd-iommu-proto.h	Fri Aug 10 09:51:01 2012 +0200
+++ b/xen/include/asm-x86/hvm/svm/amd-iommu-proto.h	Mon Aug 13 17:38:30 2012 +0200
@@ -257,4 +257,18 @@ static inline void iommu_set_addr_hi_to_
                          IOMMU_REG_BASE_ADDR_HIGH_SHIFT, reg);
 }
 
+static inline int iommu_is_pte_present(u32 *entry)
+{
+    return get_field_from_reg_u32(entry[0],
+                                  IOMMU_PDE_PRESENT_MASK,
+                                  IOMMU_PDE_PRESENT_SHIFT);
+}
+
+static inline unsigned int iommu_next_level(u32 *entry)
+{
+    return get_field_from_reg_u32(entry[0],
+                                  IOMMU_PDE_NEXT_LEVEL_MASK,
+                                  IOMMU_PDE_NEXT_LEVEL_SHIFT);
+}
+
 #endif /* _ASM_X86_64_AMD_IOMMU_PROTO_H */


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 15:52:19 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 15:52:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0ww0-0008DK-Vo; Mon, 13 Aug 2012 15:52:04 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Wei.Wang2@amd.com>) id 1T0wvz-0008D3-Tx
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 15:52:04 +0000
Received: from [85.158.139.83:24717] by server-11.bemta-5.messagelabs.com id
	35/DD-29296-3A229205; Mon, 13 Aug 2012 15:52:03 +0000
X-Env-Sender: Wei.Wang2@amd.com
X-Msg-Ref: server-12.tower-182.messagelabs.com!1344873120!27899097!1
X-Originating-IP: [216.32.180.185]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8296 invoked from network); 13 Aug 2012 15:52:02 -0000
Received: from co1ehsobe002.messaging.microsoft.com (HELO
	co1outboundpool.messaging.microsoft.com) (216.32.180.185)
	by server-12.tower-182.messagelabs.com with AES128-SHA encrypted SMTP;
	13 Aug 2012 15:52:02 -0000
Received: from mail138-co1-R.bigfish.com (10.243.78.226) by
	CO1EHSOBE012.bigfish.com (10.243.66.75) with Microsoft SMTP Server id
	14.1.225.23; Mon, 13 Aug 2012 15:52:00 +0000
Received: from mail138-co1 (localhost [127.0.0.1])	by
	mail138-co1-R.bigfish.com (Postfix) with ESMTP id 0E3CFA80211;
	Mon, 13 Aug 2012 15:52:00 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:163.181.249.109; KIP:(null); UIP:(null);
	IPV:NLI; H:ausb3twp02.amd.com; RD:none; EFVD:NLI
X-SpamScore: 0
X-BigFish: VPS0(zzzz1202hzz8275bhz2dh668h839h944hd24hf0ah107ah)
Received: from mail138-co1 (localhost.localdomain [127.0.0.1]) by mail138-co1
	(MessageSwitch) id 1344873117500008_30347;
	Mon, 13 Aug 2012 15:51:57 +0000 (UTC)
Received: from CO1EHSMHS026.bigfish.com (unknown [10.243.78.238])	by
	mail138-co1.bigfish.com (Postfix) with ESMTP id 6EAE8340044;
	Mon, 13 Aug 2012 15:51:57 +0000 (UTC)
Received: from ausb3twp02.amd.com (163.181.249.109) by
	CO1EHSMHS026.bigfish.com (10.243.66.36) with Microsoft SMTP Server id
	14.1.225.23; Mon, 13 Aug 2012 15:51:56 +0000
X-WSS-ID: 0M8PAQI-02-389-02
X-M-MSG: 
Received: from sausexedgep01.amd.com (sausexedgep01-ext.amd.com
	[163.181.249.72])	(using TLSv1 with cipher AES128-SHA (128/128
	bits))	(No
	client certificate requested)	by ausb3twp02.amd.com (Axway MailGate
	3.8.1)
	with ESMTP id 2EB58C8052;	Mon, 13 Aug 2012 10:51:54 -0500 (CDT)
Received: from SAUSEXDAG06.amd.com (163.181.55.7) by sausexedgep01.amd.com
	(163.181.36.54) with Microsoft SMTP Server (TLS) id 8.3.192.1;
	Mon, 13 Aug 2012 10:52:25 -0500
Received: from storexhtp02.amd.com (172.24.4.4) by sausexdag06.amd.com
	(163.181.55.7) with Microsoft SMTP Server (TLS) id 14.1.323.3;
	Mon, 13 Aug 2012 10:51:53 -0500
Received: from gwo.osrc.amd.com (165.204.16.204) by storexhtp02.amd.com
	(172.24.4.4) with Microsoft SMTP Server id 8.3.213.0; Mon, 13 Aug 2012
	11:51:52 -0400
Received: from mail.osrc.amd.com (aluminium.osrc.amd.com [165.204.15.141])	by
	gwo.osrc.amd.com (Postfix) with ESMTP id A4C6649C6DA;
	Mon, 13 Aug 2012 16:51:51 +0100 (BST)
Received: from gran.amd.com (gran.osrc.amd.com [165.204.15.57])	by
	mail.osrc.amd.com (Postfix) with ESMTP id 868A01C8004; Mon, 13 Aug 2012
	17:51:51 +0200 (CEST)
MIME-Version: 1.0
X-Mercurial-Node: 076df9db4c273e9786ea373bda092716015d9403
Message-ID: <076df9db4c273e9786ea.1344872767@gran.amd.com>
In-Reply-To: <patchbomb.1344872764@gran.amd.com>
References: <patchbomb.1344872764@gran.amd.com>
User-Agent: Mercurial-patchbomb/2.0.2
Date: Mon, 13 Aug 2012 17:46:07 +0200
From: Wei Wang <wei.wang2@amd.com>
To: <JBeulich@suse.com>, <xen-devel@lists.xen.org>
X-OriginatorOrg: amd.com
Subject: [Xen-devel] [PATCH 3 of 3] amd iommu: Remove unnecessary map/unmap
 for l1 page tables
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

# HG changeset patch
# User Wei Wang <wei.wang2@amd.com>
# Date 1344872316 -7200
# Node ID 076df9db4c273e9786ea373bda092716015d9403
# Parent  273471c6dedd1e66caab7e4eede72130e4e0c00f
amd iommu: Remove unnecessary map/unmap for l1 page tables

Signed-off-by: Wei Wang <wei.wang2@amd.com>

diff -r 273471c6dedd -r 076df9db4c27 xen/drivers/passthrough/amd/pci_amd_iommu.c
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c	Mon Aug 13 17:38:33 2012 +0200
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c	Mon Aug 13 17:38:36 2012 +0200
@@ -394,25 +394,27 @@ static void deallocate_next_page_table(s
     u64 next_table_maddr;
     int index, next_level;
 
+    if ( level <= 1 )
+    {
+        free_amd_iommu_pgtable(pg);
+        return;
+    }
+
     table_vaddr = __map_domain_page(pg);
 
-    if ( level > 1 )
+    for ( index = 0; index < PTE_PER_TABLE_SIZE; index++ )
     {
-        for ( index = 0; index < PTE_PER_TABLE_SIZE; index++ )
+        pde = table_vaddr + (index * IOMMU_PAGE_TABLE_ENTRY_SIZE);
+        next_table_maddr = amd_iommu_get_next_table_from_pte(pde);
+        next_level = iommu_next_level((u32*)pde);
+
+        if ( (next_table_maddr != 0) && (next_level != 0)
+             && iommu_is_pte_present((u32*)pde) )
         {
-            pde = table_vaddr + (index * IOMMU_PAGE_TABLE_ENTRY_SIZE);
-            next_table_maddr = amd_iommu_get_next_table_from_pte(pde);
-
-            next_level = iommu_next_level((u32*)pde);
-
-            if ( (next_table_maddr != 0) && (next_level != 0)
-                && iommu_is_pte_present((u32*)pde) )
-            {
-                /* We do not support skip level yet */
-                ASSERT( next_level == level - 1 );
-                deallocate_next_page_table(
-                    maddr_to_page(next_table_maddr), next_level);
-            }
+            /* We do not support skip levels yet */
+            ASSERT( next_level == level - 1 );
+            deallocate_next_page_table(maddr_to_page(next_table_maddr), 
+                                       next_level);
         }
     }
 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

         {
-            pde = table_vaddr + (index * IOMMU_PAGE_TABLE_ENTRY_SIZE);
-            next_table_maddr = amd_iommu_get_next_table_from_pte(pde);
-
-            next_level = iommu_next_level((u32*)pde);
-
-            if ( (next_table_maddr != 0) && (next_level != 0)
-                && iommu_is_pte_present((u32*)pde) )
-            {
-                /* We do not support skip level yet */
-                ASSERT( next_level == level - 1 );
-                deallocate_next_page_table(
-                    maddr_to_page(next_table_maddr), next_level);
-            }
+            /* We do not support skip levels yet */
+            ASSERT( next_level == level - 1 );
+            deallocate_next_page_table(maddr_to_page(next_table_maddr), 
+                                       next_level);
         }
     }
 

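[Editor's illustration] The new control flow in the diff above can be sketched as a standalone toy model. The types, the 4-entry fanout, and the `make_table()` helper below are hypothetical simplifications, not the actual Xen code (which walks 512-entry IOMMU tables via `__map_domain_page()` and frees with `free_amd_iommu_pgtable()`); the sketch only shows the point of the patch: l1 tables are freed up front, before any map/unmap of table contents happens.

```c
#include <assert.h>
#include <stdlib.h>

/* Toy stand-in for an IOMMU page-table page; 4 entries instead of 512. */
#define ENTRIES 4

struct pt_page {
    struct pt_page *next[ENTRIES]; /* stands in for the present PDEs */
    int level;
};

static int freed; /* counts frees so the control flow can be checked */

/* Hypothetical helper: allocate a zeroed table at a given level. */
static struct pt_page *make_table(int level)
{
    struct pt_page *pg = calloc(1, sizeof(*pg));
    pg->level = level;
    return pg;
}

static void deallocate_next_page_table(struct pt_page *pg, int level)
{
    int index;

    /* l1 tables have no child tables: free immediately and return,
     * skipping the (modelled-away) mapping of the table contents. */
    if (level <= 1) {
        free(pg);
        freed++;
        return;
    }

    /* In the real code this is where __map_domain_page() runs; the
     * patch's point is that l1 tables never reach this path. */
    for (index = 0; index < ENTRIES; index++) {
        struct pt_page *child = pg->next[index];
        if (child != NULL) {
            /* No skip levels: children sit exactly one level down. */
            assert(child->level == level - 1);
            deallocate_next_page_table(child, child->level);
        }
    }
    free(pg);
    freed++;
}
```

Recursing on a small two-level tree frees every table exactly once, with the leaf tables taking the early-return path.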

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 15:52:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 15:52:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0wwW-0008LQ-Ct; Mon, 13 Aug 2012 15:52:36 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Wei.Wang2@amd.com>) id 1T0wwU-0008Km-Mo
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 15:52:34 +0000
Received: from [85.158.143.35:6885] by server-1.bemta-4.messagelabs.com id
	DF/33-07754-1C229205; Mon, 13 Aug 2012 15:52:33 +0000
X-Env-Sender: Wei.Wang2@amd.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1344873123!14554861!1
X-Originating-IP: [216.32.180.189]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1455 invoked from network); 13 Aug 2012 15:52:05 -0000
Received: from co1ehsobe006.messaging.microsoft.com (HELO
	co1outboundpool.messaging.microsoft.com) (216.32.180.189)
	by server-15.tower-21.messagelabs.com with AES128-SHA encrypted SMTP;
	13 Aug 2012 15:52:05 -0000
Received: from mail197-co1-R.bigfish.com (10.243.78.236) by
	CO1EHSOBE001.bigfish.com (10.243.66.64) with Microsoft SMTP Server id
	14.1.225.23; Mon, 13 Aug 2012 15:52:01 +0000
Received: from mail197-co1 (localhost [127.0.0.1])	by
	mail197-co1-R.bigfish.com (Postfix) with ESMTP id 5AB862003A6;
	Mon, 13 Aug 2012 15:52:01 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:163.181.249.108; KIP:(null); UIP:(null);
	IPV:NLI; H:ausb3twp01.amd.com; RD:none; EFVD:NLI
X-SpamScore: 0
X-BigFish: VPS0(zzzz1202hzz8275bhz2dh668h839h944hd24hf0ah107ah)
Received: from mail197-co1 (localhost.localdomain [127.0.0.1]) by mail197-co1
	(MessageSwitch) id 134487311951789_25167;
	Mon, 13 Aug 2012 15:51:59 +0000 (UTC)
Received: from CO1EHSMHS020.bigfish.com (unknown [10.243.78.249])	by
	mail197-co1.bigfish.com (Postfix) with ESMTP id 08EC1300044;
	Mon, 13 Aug 2012 15:51:59 +0000 (UTC)
Received: from ausb3twp01.amd.com (163.181.249.108) by
	CO1EHSMHS020.bigfish.com (10.243.66.30) with Microsoft SMTP Server id
	14.1.225.23; Mon, 13 Aug 2012 15:51:56 +0000
X-WSS-ID: 0M8PAQI-01-68N-02
X-M-MSG: 
Received: from sausexedgep02.amd.com (sausexedgep02-ext.amd.com
	[163.181.249.73])	(using TLSv1 with cipher AES128-SHA (128/128
	bits))	(No
	client certificate requested)	by ausb3twp01.amd.com (Axway MailGate
	3.8.1)
	with ESMTP id 230681028010;	Mon, 13 Aug 2012 10:51:54 -0500 (CDT)
Received: from SAUSEXDAG06.amd.com (163.181.55.7) by sausexedgep02.amd.com
	(163.181.36.59) with Microsoft SMTP Server (TLS) id 8.3.192.1;
	Mon, 13 Aug 2012 10:51:59 -0500
Received: from storexhtp02.amd.com (172.24.4.4) by sausexdag06.amd.com
	(163.181.55.7) with Microsoft SMTP Server (TLS) id 14.1.323.3;
	Mon, 13 Aug 2012 10:51:53 -0500
Received: from gwo.osrc.amd.com (165.204.16.204) by storexhtp02.amd.com
	(172.24.4.4) with Microsoft SMTP Server id 8.3.213.0; Mon, 13 Aug 2012
	11:51:52 -0400
Received: from mail.osrc.amd.com (aluminium.osrc.amd.com [165.204.15.141])	by
	gwo.osrc.amd.com (Postfix) with ESMTP id 8642A49C6D8;
	Mon, 13 Aug 2012 16:51:51 +0100 (BST)
Received: from gran.amd.com (gran.osrc.amd.com [165.204.15.57])	by
	mail.osrc.amd.com (Postfix) with ESMTP id 6FBAA1C8003; Mon, 13 Aug 2012
	17:51:51 +0200 (CEST)
MIME-Version: 1.0
X-Mercurial-Node: 273471c6dedd1e66caab7e4eede72130e4e0c00f
Message-ID: <273471c6dedd1e66caab.1344872766@gran.amd.com>
In-Reply-To: <patchbomb.1344872764@gran.amd.com>
References: <patchbomb.1344872764@gran.amd.com>
User-Agent: Mercurial-patchbomb/2.0.2
Date: Mon, 13 Aug 2012 17:46:06 +0200
From: Wei Wang <wei.wang2@amd.com>
To: <JBeulich@suse.com>, <xen-devel@lists.xen.org>
X-OriginatorOrg: amd.com
Subject: [Xen-devel] [PATCH 2 of 3] amd iommu: Use next_level instead of
	recalculating it
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

# HG changeset patch
# User Wei Wang <wei.wang2@amd.com>
# Date 1344872313 -7200
# Node ID 273471c6dedd1e66caab7e4eede72130e4e0c00f
# Parent  b6ca536658ac712c5f03b5ffedce3bb61d55adaf
amd iommu: Use next_level instead of recalculating it.

Signed-off-by: Wei Wang <wei.wang2@amd.com>

diff -r b6ca536658ac -r 273471c6dedd xen/drivers/passthrough/amd/pci_amd_iommu.c
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c	Mon Aug 13 17:38:30 2012 +0200
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c	Mon Aug 13 17:38:33 2012 +0200
@@ -408,8 +408,10 @@ static void deallocate_next_page_table(s
             if ( (next_table_maddr != 0) && (next_level != 0)
                 && iommu_is_pte_present((u32*)pde) )
             {
+                /* We do not support skip level yet */
+                ASSERT( next_level == level - 1 );
                 deallocate_next_page_table(
-                    maddr_to_page(next_table_maddr), level - 1);
+                    maddr_to_page(next_table_maddr), next_level);
             }
         }
     }


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 16:10:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 16:10:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0xDI-00013D-28; Mon, 13 Aug 2012 16:09:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T0xDG-000138-KS
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 16:09:54 +0000
Received: from [85.158.143.35:2591] by server-2.bemta-4.messagelabs.com id
	34/BC-31966-1D629205; Mon, 13 Aug 2012 16:09:53 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1344874193!15128078!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4964 invoked from network); 13 Aug 2012 16:09:53 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-13.tower-21.messagelabs.com with SMTP;
	13 Aug 2012 16:09:53 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 13 Aug 2012 17:09:52 +0100
Message-Id: <502942ED02000078000948CE@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Mon, 13 Aug 2012 17:09:49 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Wei Wang" <wei.wang2@amd.com>
References: <patchbomb.1344872764@gran.amd.com>
In-Reply-To: <patchbomb.1344872764@gran.amd.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 0 of 3] amd iommu: Clean up iommu page table
 deallocation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 13.08.12 at 17:46, Wei Wang <wei.wang2@amd.com> wrote:
> Hi Jan, this patch cleans up deallocate_next_page_table() function. Please 
> take a look.

Hi Wei,

looks okay, but unless I'm mistaken this is only cleanup, so I'd
like to postpone this until after 4.2 unless you have a rather
strong opinion towards getting it in now.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 17:10:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 17:10:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0y95-0001vi-BQ; Mon, 13 Aug 2012 17:09:39 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lars.kurth.xen@gmail.com>) id 1T0y93-0001vd-OE
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 17:09:37 +0000
Received: from [85.158.139.83:12861] by server-8.bemta-5.messagelabs.com id
	5E/CF-02481-0D439205; Mon, 13 Aug 2012 17:09:36 +0000
X-Env-Sender: lars.kurth.xen@gmail.com
X-Msg-Ref: server-15.tower-182.messagelabs.com!1344877776!27825813!1
X-Originating-IP: [74.125.83.45]
X-SpamReason: No, hits=0.5 required=7.0 tests=RCVD_BY_IP,RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10461 invoked from network); 13 Aug 2012 17:09:36 -0000
Received: from mail-ee0-f45.google.com (HELO mail-ee0-f45.google.com)
	(74.125.83.45)
	by server-15.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Aug 2012 17:09:36 -0000
Received: by eeke53 with SMTP id e53so1136564eek.32
	for <xen-devel@lists.xen.org>; Mon, 13 Aug 2012 10:09:36 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:date:from:reply-to:user-agent:mime-version:to
	:subject:content-type:content-transfer-encoding;
	bh=y9bMInqhAASTaKPIyqY4wosXsrRvKVIzVW5SJeuEdp8=;
	b=aScgIBeeTG2q2RgmYnsG+mP8thMBLcrHq1OAJ2x03BF3O2nVh9nJ/Ptkjr6T2xwmwR
	xXU0KpdmICPVNQbEo7fLH05fXqIw1qtfsZnZy/0rPed9he87ZfkF//XogM9zd7dkETUD
	TIxNArAZcUgDwiLf36Q4lmhaK4xFb9o2jCpGRlSadV0KND9Cu2XwpvEEjkyOe28uQYm1
	VuPT1jlHc8hRdppfXI7lJLvhFjn4blrXdpOT7WfgBqyo4ip9zUUMBKOUMJNwztaS2LwT
	UgBjkgBl8gKFLOfugaqMQrH2zBrdqviv9gHlmF8MyQnqsOWPsJtCK/gXvvsPcT2NjUTf
	rmrA==
Received: by 10.14.211.3 with SMTP id v3mr10781231eeo.43.1344877776096;
	Mon, 13 Aug 2012 10:09:36 -0700 (PDT)
Received: from [172.16.26.11] (027fe822.bb.sky.com. [2.127.232.34])
	by mx.google.com with ESMTPS id m45sm9841963eep.16.2012.08.13.10.09.35
	(version=SSLv3 cipher=OTHER); Mon, 13 Aug 2012 10:09:35 -0700 (PDT)
Message-ID: <502934CB.7060409@xen.org>
Date: Mon, 13 Aug 2012 18:09:31 +0100
From: Lars Kurth <lars.kurth@xen.org>
User-Agent: Mozilla/5.0 (Windows NT 6.1;
	rv:14.0) Gecko/20120713 Thunderbird/14.0
MIME-Version: 1.0
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: [Xen-devel] Xen 4.2 Limits (needs review)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: lars.kurth@xen.org
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi,

I added a page on Xen 4.2 limits (based on info I know) at 
http://wiki.xen.org/wiki/Xen_4.2_Limits based on some information that 
Jan sent me a few months back. What I don't know is whether there is a 
difference for 64 bit and 32 bit guests. Also I may be missing 
some important figures/stuff that people want to know.

Feel free to reply to the thread or add to 
http://wiki.xen.org/wiki/Talk:Xen_4.2_Limits

Regards
Lars


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 17:10:19 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 17:10:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0y9Q-0001wi-Nk; Mon, 13 Aug 2012 17:10:00 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T0y9P-0001wJ-E4
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 17:09:59 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1344877793!9068923!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg4Njk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29781 invoked from network); 13 Aug 2012 17:09:53 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Aug 2012 17:09:53 -0000
X-IronPort-AV: E=Sophos;i="4.77,761,1336348800"; d="scan'208";a="13986897"
From xen-devel-bounces@lists.xen.org Mon Aug 13 17:10:19 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 17:10:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0y9Q-0001wi-Nk; Mon, 13 Aug 2012 17:10:00 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T0y9P-0001wJ-E4
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 17:09:59 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1344877793!9068923!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg4Njk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29781 invoked from network); 13 Aug 2012 17:09:53 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Aug 2012 17:09:53 -0000
X-IronPort-AV: E=Sophos;i="4.77,761,1336348800"; d="scan'208";a="13986897"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	13 Aug 2012 17:09:00 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Mon, 13 Aug 2012 18:09:00 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1T0y8S-0004iU-Dz; Mon, 13 Aug 2012 17:09:00 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1T0y8S-0007RY-96;
	Mon, 13 Aug 2012 18:09:00 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20521.13484.181539.827170@mariner.uk.xensource.com>
Date: Mon, 13 Aug 2012 18:09:00 +0100
To: Jan Beulich <JBeulich@suse.com>
In-Reply-To: <50293D2E0200007800094884@nat28.tlf.novell.com>
References: <50225ACD020000780009380F@nat28.tlf.novell.com>
	<20521.1210.708532.71249@mariner.uk.xensource.com>
	<50293D2E0200007800094884@nat28.tlf.novell.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] libxl: fix build for gcc prior to 4.3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jan Beulich writes ("Re: [Xen-devel] [PATCH] libxl: fix build for gcc prior to 4.3"):
> While I think the other version was reasonable, here you go.

Thanks,

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>

I removed the now-redundant paragraph about ((deprecated)) from the
commit message.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 17:15:07 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 17:15:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0yE5-0002Cv-GL; Mon, 13 Aug 2012 17:14:49 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T0yE3-0002Cl-Or
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 17:14:47 +0000
Received: from [85.158.138.51:21049] by server-6.bemta-3.messagelabs.com id
	9B/E3-32013-60639205; Mon, 13 Aug 2012 17:14:46 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-2.tower-174.messagelabs.com!1344878086!28080216!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg4Njk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32493 invoked from network); 13 Aug 2012 17:14:46 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-2.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Aug 2012 17:14:46 -0000
X-IronPort-AV: E=Sophos;i="4.77,761,1336348800"; d="scan'208";a="13986952"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	13 Aug 2012 17:14:11 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Mon, 13 Aug 2012 18:14:11 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1T0yDT-0004lK-Ds; Mon, 13 Aug 2012 17:14:11 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1T0yDT-0003ui-9f;
	Mon, 13 Aug 2012 18:14:11 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20521.13795.167543.739021@mariner.uk.xensource.com>
Date: Mon, 13 Aug 2012 18:14:11 +0100
To: Andrew Cooper <andrew.cooper3@citrix.com>
Newsgroups: chiark.mail.xen.devel
In-Reply-To: <6db5c184a77782717a89.1343749919@andrewcoop.uk.xensource.com>
References: <patchbomb.1343749916@andrewcoop.uk.xensource.com>
	<6db5c184a77782717a89.1343749919@andrewcoop.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: Ian Campbell <Ian.Campbell@eu.citrix.com>, Keir Fraser <keir@xen.org>,
	Jan Beulich <jbeulich@suse.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 3 of 5] config: Split debug build from
	debug	symbols
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Andrew Cooper writes ("[Xen-devel] [PATCH 3 of 5] config: Split debug build from debug symbols"):
> RPM based packaging systems expect binaries to have debug symbols which get
> placed in a separate debuginfo RPM.
> 
> Split the concept of a debug build up so that binaries can be built with
> debugging symbols without having the other gubbins which $(debug) implies, most
> notably frame pointers.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

I build tested this and it seemed not to break.
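
The split the patch describes can be sketched as two independent knobs: a "debug" build implies extra machinery (frame pointers, lower optimisation) and also symbols, while a symbols-only build just adds -g. This is an illustrative sketch; the variable and flag names are not Xen's actual Config.mk contents.

```shell
#!/bin/sh
# Hedged sketch: "debug" turns on the extra build machinery (frame
# pointers, lower optimisation) and implies symbols; "debug_symbols"
# on its own only controls -g. Names are illustrative, not Xen's.
debug=n
debug_symbols=y

CFLAGS=""
if [ "$debug" = "y" ]; then
    # a full debug build keeps frame pointers and implies symbols
    CFLAGS="$CFLAGS -O1 -fno-omit-frame-pointer"
    debug_symbols=y
else
    CFLAGS="$CFLAGS -O2 -fomit-frame-pointer"
fi
if [ "$debug_symbols" = "y" ]; then
    # symbols alone: suitable for splitting into a debuginfo package
    CFLAGS="$CFLAGS -g"
fi
echo "$CFLAGS"
```

With debug=n and debug_symbols=y, as above, the result is an optimised binary that still carries the symbols an RPM debuginfo split needs.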

Tested-by: Ian Jackson <ian.jackson@eu.citrix.com>
Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 17:18:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 17:18:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0yH0-0002KN-2g; Mon, 13 Aug 2012 17:17:50 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <msw@amazon.com>) id 1T0yGy-0002KC-24
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 17:17:48 +0000
Received: from [85.158.139.83:45677] by server-9.bemta-5.messagelabs.com id
	20/C1-26123-BB639205; Mon, 13 Aug 2012 17:17:47 +0000
X-Env-Sender: msw@amazon.com
X-Msg-Ref: server-7.tower-182.messagelabs.com!1344878264!23960585!1
X-Originating-IP: [207.171.189.228]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA3LjE3MS4xODkuMjI4ID0+IDc0Mjc4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25404 invoked from network); 13 Aug 2012 17:17:46 -0000
Received: from smtp-fw-33001.amazon.com (HELO smtp-fw-33001.amazon.com)
	(207.171.189.228)
	by server-7.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Aug 2012 17:17:46 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=amazon.com; i=msw@amazon.com; q=dns/txt; s=rte02;
	t=1344878266; x=1376414266;
	h=date:from:to:cc:subject:message-id:references:
	mime-version:in-reply-to;
	bh=LFgck8GAGkylCJkRgKXNQ3KISp3sagqJw6b1LQKkuRA=;
	b=aR+4C7UasvgFgWtj6Cuw25H2T0/sOg+2H+0k8em1BUv18+PoLQQfJFaz
	3Ax+iq6pfzV95ensoryqWK0eakULiw==;
X-IronPort-AV: E=Sophos;i="4.77,761,1336348800"; d="scan'208";a="344571196"
Received: from smtp-in-6002.iad6.amazon.com ([10.195.76.108])
	by smtp-border-fw-out-33001.sea14.amazon.com with
	ESMTP/TLS/DHE-RSA-AES256-SHA; 13 Aug 2012 17:17:43 +0000
Received: from ex10-hub-9002.ant.amazon.com (ex10-hub-9002.ant.amazon.com
	[10.185.137.130])
	by smtp-in-6002.iad6.amazon.com (8.13.8/8.13.8) with ESMTP id
	q7DHHejY017836
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=FAIL);
	Mon, 13 Aug 2012 17:17:41 GMT
Received: from US-SEA-R8XVZTX (10.224.80.43) by ex10-hub-9002.ant.amazon.com
	(10.185.137.130) with Microsoft SMTP Server id 14.2.247.3;
	Mon, 13 Aug 2012 10:17:39 -0700
Received: by US-SEA-R8XVZTX (sSMTP sendmail emulation); Mon, 13 Aug 2012
	10:17:39 -0700
Date: Mon, 13 Aug 2012 10:17:39 -0700
From: Matt Wilson <msw@amazon.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Message-ID: <20120813171739.GA1540@US-SEA-R8XVZTX>
References: <ef1271aef866effe07aa.1343676828@kaos-source-31003.sea31.amazon.com>
	<1343723343.15432.63.camel@zakaz.uk.xensource.com>
	<20120731153459.GD8228@US-SEA-R8XVZTX>
	<20120807032246.GA4324@US-SEA-R8XVZTX>
	<1344327782.11339.65.camel@zakaz.uk.xensource.com>
	<20120807175938.GB5592@US-SEA-R8XVZTX>
	<20521.3158.965869.397652@mariner.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20521.3158.965869.397652@mariner.uk.xensource.com>
User-Agent: Mutt/1.5.20 (2009-12-10)
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH DOCDAY] use lynx to produce better formatted
 text documentation from markdown
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Aug 13, 2012 at 07:16:54AM -0700, Ian Jackson wrote:
> Matt Wilson writes ("Re: [Xen-devel] [PATCH DOCDAY] use lynx to produce better formatted text documentation from markdown"):
> > I'm certainly not suggesting that we add a web browser to the build
> > dependencies. If lynx isn't installed, the current behavior of copying
> > the markdown file is maintained.
> > 
> > If a packager wishes to produce prettier text documentation, they can
> > elect to add lynx to their build dependencies. Today doing this
> > post-build from build control files is a bit tricky, since we drop the
> > semantic information conveyed by the .markdown suffix by calling the
> > final file .txt.
> > 
> > 4.2 will be the first release with markdown documentation. I think
> > that making it well formatted, just as the previous .txt
> > documentation, will be a better experience for the user.
> 
> I tend to agree with this line of argument.  But I have orthogonal
> queries:
> 
> Firstly, why lynx and not w3m ?  Is lynx even maintained upstream any
> more ?  Secondly, shouldn't we do the testing for the presence of
> markdown and the html formatter in configure ?

I suppose links or w3m work equally well. I tend to personally like
links' formatting over w3m. The last lynx development release was in
2010, whereas the latest links release was June 26, 2012. The latest
w3m release was in January 2011.

Adding these checks in configure is a good idea. It's a good method of
communicating the tools that can be or must be present for a
successful build of the auxiliary pieces of Xen.
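
The fallback behaviour under discussion, as I understand it, amounts to probing for a text-mode browser and copying the markdown file when none is found. A minimal sketch, assuming hypothetical file names and a candidate list of lynx, links, and w3m:

```shell
#!/bin/sh
# Hedged sketch of the build-time fallback: find the first available
# text-mode browser to render the docs, otherwise just copy the
# .markdown file. Candidate and file names are illustrative.
find_converter() {
    # print the first tool from the argument list that is on $PATH
    for tool in "$@"; do
        if command -v "$tool" >/dev/null 2>&1; then
            echo "$tool"
            return 0
        fi
    done
    return 1
}

if conv=$(find_converter lynx links w3m); then
    echo "render doc.markdown to doc.txt with: $conv"
else
    echo "no converter found; copy doc.markdown to doc.txt"
fi
```

An autoconf check would make the same probe at configure time instead of at each build.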

Unfortunately I lost the build environment I set up that has the
correct version of autoconf, etc. so it might take a few days before I
can cook up a new version that adds autoconf checks.

Thanks for the suggestions.

Matt


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 17:22:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 17:22:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0yL2-0002cl-CU; Mon, 13 Aug 2012 17:22:00 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <msw@amazon.com>) id 1T0yKz-0002ca-PK
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 17:21:59 +0000
Received: from [85.158.143.35:35803] by server-2.bemta-4.messagelabs.com id
	D9/A7-31966-5B739205; Mon, 13 Aug 2012 17:21:57 +0000
X-Env-Sender: msw@amazon.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1344878511!5445524!1
X-Originating-IP: [72.21.196.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNzIuMjEuMTk2LjI1ID0+IDEwOTYxNg==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1062 invoked from network); 13 Aug 2012 17:21:52 -0000
Received: from smtp-fw-2101.amazon.com (HELO smtp-fw-2101.amazon.com)
	(72.21.196.25)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Aug 2012 17:21:52 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=amazon.com; i=msw@amazon.com; q=dns/txt; s=rte02;
	t=1344878512; x=1376414512;
	h=date:from:to:cc:subject:message-id:references:
	mime-version:in-reply-to;
	bh=1rFeZQ+nLmSd3S6+sy0R9cpIplQg8FYc9ekK+GRZpn0=;
	b=DJyoZy7PloRCXSQ7M02Po2ZRGPxVMBQpvJNtpNRXSk+8DiSdR/Pm+E3/
	n5zf26grrTcE9DyP9gVtWWn9yNvPTQ==;
X-IronPort-AV: E=Sophos;i="4.77,761,1336348800"; d="scan'208";a="422738154"
Received: from smtp-in-0191.sea3.amazon.com ([10.224.12.28])
	by smtp-border-fw-out-2101.iad2.amazon.com with
	ESMTP/TLS/DHE-RSA-AES256-SHA; 13 Aug 2012 17:21:43 +0000
Received: from ex10-hub-36002.ant.amazon.com (ex10-hub-36002.sea32.amazon.com
	[10.250.1.7])
	by smtp-in-0191.sea3.amazon.com (8.13.8/8.13.8) with ESMTP id
	q7DHLgrP028214
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=FAIL);
	Mon, 13 Aug 2012 17:21:42 GMT
Received: from US-SEA-R8XVZTX (10.224.80.41) by ex10-hub-36002.ant.amazon.com
	(10.250.1.7) with Microsoft SMTP Server id 14.2.247.3;
	Mon, 13 Aug 2012 10:21:37 -0700
Received: by US-SEA-R8XVZTX (sSMTP sendmail emulation); Mon, 13 Aug 2012
	10:21:37 -0700
Date: Mon, 13 Aug 2012 10:21:37 -0700
From: Matt Wilson <msw@amazon.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Message-ID: <20120813172137.GB1540@US-SEA-R8XVZTX>
References: <5022933B0200007800093A1F@nat28.tlf.novell.com>
	<20120808151037.GE5592@US-SEA-R8XVZTX>
	<5022A7980200007800093AD9@nat28.tlf.novell.com>
	<20120808160606.GA9048@US-SEA-R8XVZTX>
	<5022AB240200007800093B0E@nat28.tlf.novell.com>
	<20521.1545.734558.263730@mariner.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20521.1545.734558.263730@mariner.uk.xensource.com>
User-Agent: Mutt/1.5.20 (2009-12-10)
Cc: Ian Campbell <ian.campbell@citrix.com>, Jan Beulich <JBeulich@suse.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] tools: don't expand prefix and exec_prefix
 too early
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Aug 13, 2012 at 06:50:01AM -0700, Ian Jackson wrote:
> Jan Beulich writes ("Re: [Xen-devel] [PATCH] tools: don't expand prefix and exec_prefix too early"):
> > >>> On 08.08.12 at 18:06, Matt Wilson <msw@amazon.com> wrote:
> > > Would you like me to take another pass at fixing this the "right" way,
> > > by removing this bit altogether and adding lowercase variables to
> > > tools/Config.mk?
> > 
> > Oh, yes, if you can do this in a more complete fashion, removing
> > the suspicious default_dir.m4 altogether, that would be wonderful.
> > Not sure what the tools maintainers think of this, though.
> 
> That would be fine by me.  Before throwing it into 4.2 I will do a few
> build tests and check that differences in the output file layout are
> as expected.

OK, I'll get my development environment set up again with the right
version of autoconf and work on implementing this in the next few
days.

Matt

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> > > Would you like me to take another pass at fixing this the "right" way,
> > > by removing this bit altogether and adding lowercase variables to
> > > tools/Config.mk?
> > 
> > Oh, yes, if you can do this in a more complete fashion, removing
> > the suspicious default_dir.m4 altogether, that would be wonderful.
> > Not sure what the tools maintainers think of this, though.
> 
> That would be fine by me.  Before throwing it into 4.2 I will do a few
> build tests and check that differences in the output file layout are
> as expected.

OK, I'll get my development environment set up again with the right
version of autoconf and work on implementing this in the next few
days.

Matt
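The "too early" expansion in the patch title can be sketched in plain shell. The variable names follow the GNU install-directory convention; this is only an illustration of the general mechanism, not the actual contents of tools/Config.mk or default_dir.m4:

```shell
# GNU-style install directories: exec_prefix and bindir are stored with
# '${prefix}' left unexpanded, so a later override of prefix still
# propagates. Expanding at configure time would bake in the old value.
prefix='/usr/local'
exec_prefix='${prefix}'
bindir='${exec_prefix}/bin'

prefix='/opt/xen'                  # overridden later (e.g. at make time)
step1=$(eval echo "$bindir")       # one level of expansion: ${prefix}/bin
final=$(eval echo "$step1")        # fully expanded: /opt/xen/bin
echo "$final"
```

Because exec_prefix and bindir keep '${prefix}' unexpanded, overriding prefix later still reaches every derived directory; expanding at configure time would have frozen /usr/local into bindir.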

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 17:26:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 17:26:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0yPg-0002sW-5J; Mon, 13 Aug 2012 17:26:48 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ben.guthro@gmail.com>) id 1T0yPe-0002sB-71
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 17:26:46 +0000
X-Env-Sender: ben.guthro@gmail.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1344878796!6621205!1
X-Originating-IP: [209.85.213.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5701 invoked from network); 13 Aug 2012 17:26:37 -0000
Received: from mail-yw0-f45.google.com (HELO mail-yw0-f45.google.com)
	(209.85.213.45)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Aug 2012 17:26:37 -0000
Received: by yhpp34 with SMTP id p34so3822160yhp.32
	for <xen-devel@lists.xen.org>; Mon, 13 Aug 2012 10:26:35 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=qTA3kybmhdfiEWT0ufJGpe1rzpP7usunlrCgXdoOfXs=;
	b=GVbx5qaxSCmxj0nStHWz6WBE/F6l5pyrrsONffX9i5NTao/lqyGsNHwmFdCwM3ry/L
	OikS+uR5H1IPKwTATwCdIConI0fztPqdJhLBfBARAaNIkORW0r3v6Vzb8M5bdDQJARXC
	UGpCb6l6Wl3V69PP+9svAN5LW84YewPTYHb+W+G+XCTLJbedYIHH4qVXpVCYx+C32zC+
	rxzHX6PKDPEnAizT2HXCsuytlm2/I95iZmcmJWoBbSRQrOFtpJkeSmG8yF2d6V8XcS5/
	oJaVwO3njjX4XKwcFPH0T5jDqq3jfo+kkoJwBDYyzm9YQaZDt0RD7gB4CtIsWzX+ieoy
	S1DQ==
MIME-Version: 1.0
Received: by 10.50.87.198 with SMTP id ba6mr6628119igb.22.1344878795388; Mon,
	13 Aug 2012 10:26:35 -0700 (PDT)
Received: by 10.64.33.15 with HTTP; Mon, 13 Aug 2012 10:26:35 -0700 (PDT)
In-Reply-To: <5028C34C02000078000945FC@nat28.tlf.novell.com>
References: <1344243491.11339.11.camel@zakaz.uk.xensource.com>
	<5028C34C02000078000945FC@nat28.tlf.novell.com>
Date: Mon, 13 Aug 2012 13:26:35 -0400
X-Google-Sender-Auth: T2tZxfDSOcCuCuJV65D0WWV0OsA
Message-ID: <CAOvdn6Vezt5gtsykUPxUdf_0-ubqjAJ5ygnV9hPiON1Geqq9Ag@mail.gmail.com>
From: Ben Guthro <ben@guthro.net>
To: Jan Beulich <JBeulich@suse.com>
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] 4.2 TODO / Release Plan
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Aug 13, 2012 at 3:05 AM, Jan Beulich <JBeulich@suse.com> wrote:
> - fix S3 regression(s?) reported by Ben Guthro

I am continuing to try to root-cause this, but it is slow going, as I
keep getting pulled off to look at other things.

I have not found a smoking gun yet.

Is this exclusive to my setup? Does S3 work elsewhere with 4.2?
It seems to fail 100% of the time on 100% of x86 machines I have tried.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 18:43:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 18:43:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0zbv-0004md-1U; Mon, 13 Aug 2012 18:43:31 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1T0zbt-0004mQ-Ig
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 18:43:29 +0000
Received: from [85.158.143.35:41617] by server-1.bemta-4.messagelabs.com id
	43/4E-07754-0DA49205; Mon, 13 Aug 2012 18:43:28 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-15.tower-21.messagelabs.com!1344883407!14577608!1
X-Originating-IP: [192.89.123.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjg5LjEyMy4yNSA9PiA0NzYzMTc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3416 invoked from network); 13 Aug 2012 18:43:28 -0000
Received: from smtp.tele.fi (HELO smtp.tele.fi) (192.89.123.25)
	by server-15.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 13 Aug 2012 18:43:28 -0000
X-Originating-Ip: [194.89.68.22]
Received: from ydin.reaktio.net (reaktio.net [194.89.68.22])
	by smtp.tele.fi (Postfix) with ESMTP id 677131201;
	Mon, 13 Aug 2012 21:43:06 +0300 (EEST)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id 87B8F2005D; Mon, 13 Aug 2012 21:43:06 +0300 (EEST)
Date: Mon, 13 Aug 2012 21:43:06 +0300
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: James Harper <james.harper@bendigoit.com.au>
Message-ID: <20120813184306.GP19851@reaktio.net>
References: <6035A0D088A63A46850C3988ED045A4B299F737C@BITCOM1.int.sbss.com.au>
	<6035A0D088A63A46850C3988ED045A4B299F73D1@BITCOM1.int.sbss.com.au>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <6035A0D088A63A46850C3988ED045A4B299F73D1@BITCOM1.int.sbss.com.au>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen and 4K Sectors (was blkback and bcache)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Aug 13, 2012 at 01:25:07PM +0000, James Harper wrote:
> > 
> > I think the problem is definitely related to the 4K sector issue.
> > 
> > qemu appears to always present 512 byte sectors, thus only booting from a
> > 512 byte sector partition table. Once the PV drivers take over though it all
> > falls down because PV drivers are passed a 4K sector size and nothing
> > matches up anymore.
> > 
> > What is the solution here? Do I just tell windows that we have a 512 byte
> > sector and put up with the potential loss of performance?
> > 
> 
> I just came across this...
> 
> http://support.microsoft.com/kb/982018
> 
> I guess Windows doesn't support native 4K disks after all, so I'll emulate 512 byte sectors and fake whatever SCSI interface is required to tell Windows that the disk is 4K but is emulating 512-byte sectors.
> 

At least Linux reports:
/sys/block/<disk>/queue/logical_block_size=512
/sys/block/<disk>/queue/physical_block_size=4096

So I assume you need to set those somehow...

-- Pasi
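Those two sysfs values are how Linux describes a "512e" drive: 4K physical sectors addressed through 512-byte logical ones. A throwaway mock of that layout, in a scratch directory rather than the real /sys, shows the shape a backend would need to advertise (the paths and disk name below are placeholders):

```shell
# Mock the sysfs layout for a 512e disk in a scratch directory.
# Real systems expose this under /sys/block/<disk>/queue/.
q=/tmp/mock-sysfs/block/sda/queue      # scratch path, not the real sysfs
mkdir -p "$q"
echo 512  > "$q/logical_block_size"    # the unit the disk addresses I/O in
echo 4096 > "$q/physical_block_size"   # the real media sector size
cat "$q/logical_block_size" "$q/physical_block_size"
```

On a real system, `blockdev --getss` and `blockdev --getpbsz` from util-linux report the same two values for a given device.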


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 18:54:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 18:54:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0zm0-0005B1-5M; Mon, 13 Aug 2012 18:53:56 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <peter.maloney@brockmann-consult.de>)
	id 1T0zly-0005Aw-Kz
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 18:53:54 +0000
Received: from [85.158.139.83:19594] by server-9.bemta-5.messagelabs.com id
	83/A2-26123-14D49205; Mon, 13 Aug 2012 18:53:53 +0000
X-Env-Sender: peter.maloney@brockmann-consult.de
X-Msg-Ref: server-12.tower-182.messagelabs.com!1344884033!27921138!1
X-Originating-IP: [212.227.17.10]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjEyLjIyNy4xNy4xMCA9PiA3MTA3Mg==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12430 invoked from network); 13 Aug 2012 18:53:53 -0000
Received: from moutng.kundenserver.de (HELO moutng.kundenserver.de)
	(212.227.17.10) by server-12.tower-182.messagelabs.com with SMTP;
	13 Aug 2012 18:53:53 -0000
Received: from [192.168.179.34] (hmbg-5f7603cb.pool.mediaWays.net
	[95.118.3.203])
	by mrelayeu.kundenserver.de (node=mrbap1) with ESMTP (Nemesis)
	id 0MNvyt-1T3Top3khH-0078Sn; Mon, 13 Aug 2012 20:53:53 +0200
Message-ID: <50294D4B.9050405@brockmann-consult.de>
Date: Mon, 13 Aug 2012 20:54:03 +0200
From: Peter Maloney <peter.maloney@brockmann-consult.de>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:13.0) Gecko/20120601 Thunderbird/13.0
MIME-Version: 1.0
To: xen-devel@lists.xen.org
References: <501F98A1.4070806@brockmann-consult.de>
	<501FB9060200007800092D4E@nat28.tlf.novell.com>
	<5020C2DE.304@brockmann-consult.de>
In-Reply-To: <5020C2DE.304@brockmann-consult.de>
X-Provags-ID: V02:K0:IqKRyx7WxX6ftv7f+yhNIk5jTega049STgfGQhSk3nZ
	85tPlPH8VtI7OXAtS2B5MdnulHMIz2J51TJa1gCDzXgKkSQbdf
	JX8IdlxlH1FUKGFv62zjGb8BWbecpCS1rr1RUa1+PS7IAnhY1h
	iPjaiIiUi9MDSNJ2y5e9YY4bMpeAXLbSuonQtcFDhc/X3A4M/x
	XjKUu+tM/+3/ydjN89lfw6tGMuMECL6DpMvIqbh270aE2iaZ61
	mELRZ4S165Whbqtg5Oo7hebKmcCGr3bC7KIVCR1cHmlQNznnf9
	R27sfiVDUktlK3cpW4LxVngrwQj4zWisQ8mxdf2aaltoXCFlEe
	InrgAaU9jDeG/QF7K4EOebEIEIMLxKlODWZshET/2vT1xuHUlk
	SVOrUYzoNIh2g==
Subject: Re: [Xen-devel] 4.1.2 very slow without upstream patches,
 but fast with them, also 4.2 very slow
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

So... I did my 4.2-unstable test, using a fresh pull from yesterday.
dom0 is normally fast (unlike in previous tests), while domU is ultra
slow but does actually boot. Graphics card passthrough works without any
patches, as does the USB keyboard, but USB mouse passthrough doesn't work.


On 08/07/2012 09:25 AM, Peter Maloney wrote:
>> That still won't tell us which patches you did apply.
> I applied no patches and tested, and the result was slow. And then
> applied all patches, and it was fast. I didn't try figuring out which
> one it was.
>
>
> So I guess I'll try:
> - the latest unstable 4.2
> - the 4.1.3-rc (Which includes the patch Malcolm suggested)
> - and my rpm source with half patches, 3/4 of them, etc. binary search
> style to see which patch(es) changed the performance. But this means I
> won't be able to narrow it down to a single patch, but only the point in
> the long list where the most dramatic change happens, possibly depending
> on many previous patches.
>
> Thanks so far, guys.
>
>
> On 08/06/2012 12:31 PM, Jan Beulich wrote:
>>>>> On 06.08.12 at 12:12, Peter Maloney <peter.maloney@brockmann-consult.de> wrote:
>>> my AMD FX-8150 system with vanilla source code is super slow, both the
>>> dom0 and domUs. However, after I merge the upstream patches I found in
>>> the openSUSE rpm, it runs normally.
>> I'd be very surprised if you really just took the upstream patches,
>> and the result was better than 4.2-rc1. After all, what upstream
>> means is that they were taken from -unstable.
>>
>>> I tried 4.2-unstable and it was the same. There was no rc1 when I tested
>>> it about 1.5 weeks ago. And 4.2 has the same horrible performance, and
>>> obviously those patches won't work any more since the 4.2 code looks
>>> completely reorganized, so I'm stuck with 4.1.2
>> Obviously the upstream patches can't be applied to something
>> that already has all those changes. Other patches, of which we
>> unfortunately have quite a few, would be a different story.
>>
>>> Here is the rpm I was using at the time:
>>> http://download.opensuse.org/update/12.1/src/xen-4.1.2_16-1.7.1.src.rpm 
>>>
>>> To see the list of the patches and what order to apply them, see the
>>> spec file.
>> That still won't tell us which patches you did apply.
>>
>>> Please make sure this performance issue is fixed for the 4.2 release.
>>> And I would be happy to test whatever files you send me.
>> The sort of report you're doing isn't that helpful. What would
>> help is if you could narrow down which patch(es) it is that
>> make things so much better. Giving 4.1.3-rc a try might also
>> be worthwhile, albeit I would hope we don't have a regression
>> in 4.2.0-rc compared to 4.1.3-rc...
>>
>> Jan
>>
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel
>
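The binary-search plan in the quoted text is an ordinary bisection loop over the patch series. A sketch in shell, where is_fast is a hypothetical stand-in for "apply the first N patches, build, and benchmark" (and, as the quoted text notes, interdependent patches mean the flip point need not be a single culprit):

```shell
# Bisect a patch series for the point where performance flips.
# is_fast N would normally build with the first N patches applied and
# benchmark the result; here it pretends patch 7 is the one that helps.
is_fast() { [ "$1" -ge 7 ]; }
lo=0                 # invariant: first $lo patches are still slow
hi=10                # invariant: first $hi patches are fast
while [ $((hi - lo)) -gt 1 ]; do
  mid=$(((lo + hi) / 2))
  if is_fast "$mid"; then hi=$mid; else lo=$mid; fi
done
echo "first fast prefix: $hi patches"   # -> first fast prefix: 7 patches
```

Each iteration halves the interval, so a series of N patches needs about log2(N) build-and-benchmark runs rather than N.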



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 18:57:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 18:57:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T0zow-0005I7-O8; Mon, 13 Aug 2012 18:56:58 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jeremy@goop.org>) id 1T0zov-0005I2-1G
	for xen-devel@lists.xensource.com; Mon, 13 Aug 2012 18:56:57 +0000
Received: from [85.158.139.83:30214] by server-5.bemta-5.messagelabs.com id
	AB/6A-31019-8FD49205; Mon, 13 Aug 2012 18:56:56 +0000
X-Env-Sender: jeremy@goop.org
X-Msg-Ref: server-11.tower-182.messagelabs.com!1344884213!20694714!1
X-Originating-IP: [74.207.240.146]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18167 invoked from network); 13 Aug 2012 18:56:55 -0000
Received: from claw.goop.org (HELO claw.goop.org) (74.207.240.146)
	by server-11.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Aug 2012 18:56:55 -0000
Received: from saboo.goop.org (unknown
	[IPv6:2001:470:1f05:899:942b:66ff:fe0f:12b0])
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(No client certificate requested) (Authenticated sender: smtp-saboo)
	by claw.goop.org (Postfix) with ESMTPSA id 9DE7F965E;
	Mon, 13 Aug 2012 11:56:52 -0700 (PDT)
Received: from saboo.goop.org (localhost [IPv6:::1])
	by saboo.goop.org (Postfix) with ESMTP id AF62A20C15;
	Mon, 13 Aug 2012 11:56:48 -0700 (PDT)
Message-ID: <50294DF0.8040206@goop.org>
Date: Mon, 13 Aug 2012 11:56:48 -0700
From: Jeremy Fitzhardinge <jeremy@goop.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120717 Thunderbird/14.0
MIME-Version: 1.0
To: Mel Gorman <mgorman@suse.de>
References: <20120807085554.GF29814@suse.de>
	<20120808.155046.820543563969484712.davem@davemloft.net>
	<20120813102604.GC4177@suse.de> <20120813104745.GE4177@suse.de>
In-Reply-To: <20120813104745.GE4177@suse.de>
X-Enigmail-Version: 1.4.3
Cc: xen-devel@lists.xensource.com, netdev@vger.kernel.org,
	linux-kernel@vger.kernel.org, Ian.Campbell@eu.citrix.com,
	linux-mm@kvack.org, konrad@darnok.org, akpm@linux-foundation.org,
	David Miller <davem@davemloft.net>
Subject: Re: [Xen-devel] [PATCH] netvm: check for page == NULL when
 propogating the skb->pfmemalloc flag
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/13/2012 03:47 AM, Mel Gorman wrote:
> Resending to correct Jeremy's address.
>
> On Wed, Aug 08, 2012 at 03:50:46PM -0700, David Miller wrote:
>> From: Mel Gorman <mgorman@suse.de>
>> Date: Tue, 7 Aug 2012 09:55:55 +0100
>>
>>> Commit [c48a11c7: netvm: propagate page->pfmemalloc to skb] is responsible
>>> for the following bug triggered by a xen network driver
>>  ...
>>> The problem is that the xenfront driver is passing a NULL page to
>>> __skb_fill_page_desc() which was unexpected. This patch checks that
>>> there is a page before dereferencing.
>>>
>>> Reported-and-Tested-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
>>> Signed-off-by: Mel Gorman <mgorman@suse.de>
>> That call to __skb_fill_page_desc() in xen-netfront.c looks completely bogus.
>> It's the only driver passing NULL here.
>>
>> That whole song and dance figuring out what to do with the head
>> fragment page, depending upon whether the length is greater than the
>> RX_COPY_THRESHOLD, is completely unnecessary.
>>
>> Just use something like a call to __pskb_pull_tail(skb, len) and all
>> that other crap around that area can simply be deleted.
> I looked at this for a while but I did not see how __pskb_pull_tail()
> could be used sensibly, but I'm simply not familiar with writing network
> device drivers or Xen.
>
> This messing with RX_COPY_THRESHOLD seems to be related to how the frontend
> and backend communicate (maybe some fixed limitation of the xenbus). The
> existing code looks like it is trying to take the fragments received and
> pass them straight to the backend without copying. I worry that if I try
> converting this to __pskb_pull_tail() it would either hit that xenbus
> limitation or introduce copying where it is not wanted.
>
> I'm going to have to punt this to Jeremy and the other Xen folk as I'm not
> sure what the original intention was and I don't have a Xen setup anywhere
> to test any patch. Jeremy, xen folk? 

It's been a while since I've looked at that stuff, but as I remember,
the issue is that the packet ring memory is shared with another domain
which may be untrustworthy, so we want to make copies of the headers
before making any decisions based on them; otherwise the other domain
could change them after header processing but before they're actually
sent.  (The packet payload is considered less important, but of course
the same issue applies if you're using some kind of content-aware packet
filter.)

So that's the rationale for always copying RX_COPY_THRESHOLD, even if
the packet is larger than that amount.  As far as I know, changing this
behaviour wouldn't break the ring protocol, but it does introduce a
potential security issue.

    J


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 19:17:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 19:17:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T108T-0005hF-Ic; Mon, 13 Aug 2012 19:17:09 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <anderson@redhat.com>) id 1T108S-0005hA-6n
	for xen-devel@lists.xensource.com; Mon, 13 Aug 2012 19:17:08 +0000
Received: from [85.158.143.35:38657] by server-2.bemta-4.messagelabs.com id
	E3/43-31966-3B259205; Mon, 13 Aug 2012 19:17:07 +0000
X-Env-Sender: anderson@redhat.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1344885424!12008296!1
X-Originating-IP: [209.132.183.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjUgPT4gODMwODY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12570 invoked from network); 13 Aug 2012 19:17:05 -0000
Received: from mx4-phx2.redhat.com (HELO mx4-phx2.redhat.com) (209.132.183.25)
	by server-11.tower-21.messagelabs.com with SMTP;
	13 Aug 2012 19:17:05 -0000
Received: from zmail15.collab.prod.int.phx2.redhat.com
	(zmail15.collab.prod.int.phx2.redhat.com [10.5.83.17])
	by mx4-phx2.redhat.com (8.13.8/8.13.8) with ESMTP id q7DJGrAY007658;
	Mon, 13 Aug 2012 15:16:53 -0400
Date: Mon, 13 Aug 2012 15:16:53 -0400 (EDT)
From: Dave Anderson <anderson@redhat.com>
To: Daniel Kiper <daniel.kiper@oracle.com>
Message-ID: <1827880283.12350529.1344885413402.JavaMail.root@redhat.com>
In-Reply-To: <20120813073754.GA2482@host-192-168-1-59.local.net-space.pl>
MIME-Version: 1.0
X-Originating-IP: [10.16.185.59]
X-Mailer: Zimbra 7.2.0_GA_2669 (ZimbraWebClient - FF3.0 (Linux)/7.2.0_GA_2669)
Cc: olaf@aepfle.de, xen-devel@lists.xensource.com,
	konrad wilk <konrad.wilk@oracle.com>,
	andrew cooper3 <andrew.cooper3@citrix.com>, ptesarik@suse.cz,
	jbeulich@suse.com, kexec@lists.infradead.org, crash-utility@redhat.com
Subject: Re: [Xen-devel] [PATCH v2 0/6] crash: Bundle of fixes for Xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



----- Original Message -----
> On Fri, Aug 10, 2012 at 03:11:57PM -0400, Dave Anderson wrote:
> >
> >
> > ----- Original Message -----
> > > Hi,
> > >
> > > It looks like Xen support in crash has not been maintained
> > > since 2009. I am trying to fix this. Here is a bundle of fixes:
> > >   - xen: Always calculate max_cpus value,
> > >   - xen: Read only crash notes for onlined CPUs,
> > >   - x86/xen: Read variables from dynamically allocated per_cpu data,
> > >   - xen: Get idle data from alternative source,
> > >   - xen: Read data correctly from dynamically allocated console ring, too
> > >     (fixed in this release),
> > >   - xen: Add support for 3 level P2M tree (new patch in this release).
> > >
> > > Daniel
> >
> > Hi Daniel,
> >
> > The original 5 updates specific to the Xen hypervisor look OK,
> > but new patch 6/6 is going to take some studying/testing to
> > alleviate my backwards-compatibility worries.  Can I ask whether
> > you fully tested it with older 2-level P2M tree kernels?
> 
> As you asked me earlier, I have tested all patches on Xen 3.1 and 4.1,
> and on Linux kernel versions 2.6.18 (P2M array), 2.6.36 (2-level P2M tree)
> and 2.6.39 (3-level P2M tree). Additionally, there were some internal
> tests done by others in my company.
> 
> Daniel

OK good.  It tests OK on a few older pvops kernels that I have on hand.

The only thing I've changed is to handle compiler warnings in x86_64.c and 
x86.c by initializing p2m_top to NULL in x86_64_pvops_xendump_p2m_l3_create()
and x86_pvops_xendump_p2m_l3_create().  I also used GETBUF() in those two
functions to avoid having to add the malloc-failure line.

Queued for crash-6.0.9.

Thanks,
  Dave
  

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 19:28:07 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 19:28:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T10Ip-0005us-3I; Mon, 13 Aug 2012 19:27:51 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <tamas.k.lengyel@gmail.com>) id 1T10Hf-0005ra-6Q
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 19:26:39 +0000
Received: from [85.158.143.35:37290] by server-3.bemta-4.messagelabs.com id
	0F/3C-09529-EE459205; Mon, 13 Aug 2012 19:26:38 +0000
X-Env-Sender: tamas.k.lengyel@gmail.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1344885980!13213142!1
X-Originating-IP: [209.85.213.173]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_20_30,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 645 invoked from network); 13 Aug 2012 19:26:22 -0000
Received: from mail-yx0-f173.google.com (HELO mail-yx0-f173.google.com)
	(209.85.213.173)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Aug 2012 19:26:22 -0000
Received: by yenm4 with SMTP id m4so3988696yen.32
	for <xen-devel@lists.xen.org>; Mon, 13 Aug 2012 12:26:20 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=from:reply-to:to:cc:subject:x-mailer:references:in-reply-to
	:content-type:date:message-id:mime-version;
	bh=ySRqDwedE32sg6Vev+FFIn7k7eOSBVjy+keTBo0Oa+s=;
	b=pPO3LQWfJ4rxO/VsylRh0fhy0RV4Qnce0bYISCxXfRSsgYt3DIFYuCcslz2V4zZpLG
	fFBYic/xN5qrKLv1iUMFoUMGqR0kpPPsE2Wps0Q/Fzj9xLLwBuv4hZk/Fhfk3KXAhFoJ
	dIVuK2faWihMtH7UIRRwsbSlMPGnjwIe2RQrBXTlhH2XuL0lRjx1Q+5y+mEZcLlIRY3k
	p4djcHPhfGgfb6acX7gZXVLXr+BRxIguk+ZIAxCP+F1NLTWvT4OQvRgwjEw4KU/+kRtn
	tY5UrSG+kv8RgbhWJGbPSVY+h8VZu/tlvmMieB9S7qYNdZu+PyaSNhyKVQd7dXrwZRKi
	v1xQ==
Received: by 10.101.6.24 with SMTP id j24mr3870482ani.5.1344885980254;
	Mon, 13 Aug 2012 12:26:20 -0700 (PDT)
Received: from [100.248.21.86] (me62436d0.tmodns.net. [208.54.36.230])
	by mx.google.com with ESMTPS id n5sm397535ang.18.2012.08.13.12.26.17
	(version=SSLv3 cipher=OTHER); Mon, 13 Aug 2012 12:26:19 -0700 (PDT)
From: tamas.k.lengyel@gmail.com
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
X-Mailer: Modest 3.2
References: <CABfawh=yoidWLbcYqs4JOD+b30vxYrrT1Q7a2QBNttwx4U9=Ug@mail.gmail.com>
	<CABfawh=1NC-VypsYLNr-J6EkvRS8PBXO5spF8w9GQdaUaso+jQ@mail.gmail.com>
	<1343809281.27221.14.camel@zakaz.uk.xensource.com>
	<CABfawhmrJ=Cb37AqGgyEEXXmtyyTyui0h6-29iqEybvbNVsXxQ@mail.gmail.com>
	<20521.2218.8505.913476@mariner.uk.xensource.com>
In-Reply-To: <20521.2218.8505.913476@mariner.uk.xensource.com>
Date: Mon, 13 Aug 2012 15:26:12 -0400
Message-Id: <1344885972.2726.5.camel@Nokia-N900>
Mime-Version: 1.0
X-Mailman-Approved-At: Mon, 13 Aug 2012 19:27:49 +0000
Cc: Ian Campbell <Ian.Campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] libxl config datastructures
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: tamas.k.lengyel@gmail.com
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8094514820432413847=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--===============8094514820432413847==
Content-Type: multipart/alternative; boundary="=-eyuMXOzLguC8hC9/7d9N"


--=-eyuMXOzLguC8hC9/7d9N
Content-Type: text/plain; charset=utf-8
Content-ID: <1344885971.2726.2.camel@Nokia-N900>
Content-Transfer-Encoding: 8bit

Hi Ian,

> I don't understand why you aren't happy to use just the config parser supplied in libxlu.

I'm OK with the config parser in Xlu; my problem is the conversion between XLU_Config and the datastructure the libxl domain creation functions expect. Right now the only options would be to duplicate the code in xl that does the transition, which I don't want to do, or to call xl itself instead of using libxl, which I find very ugly and hack-ish.

A further problem with XLU_Config is that the datastructure is opaque. It would be fine if there were more accessor functions for it, such that it could be saved again as a file or dumped into a char *. The ability to change elements of an XLU_ConfigList would also be required.

Currently, the only way I was able to do this is by unmasking XLU_Config and XLU_ConfigList and interacting with their elements directly. That way I can change the config, export it back to a file, and then, by forking and calling xl itself (!), create the VM with the modified config. It works, but it's a hack and I don't like it.

I would be happy to send you patches that would enable me to make the XLU_Config modifications I need without unmasking it.

As for libxl, without the datastructure conversion I won't be able
to use it, and will have to do the ugly fork-and-call-xl instead.

Tamas
 

--=-eyuMXOzLguC8hC9/7d9N--



--===============8094514820432413847==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8094514820432413847==--



From xen-devel-bounces@lists.xen.org Mon Aug 13 19:28:07 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 19:28:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T10Ip-0005us-3I; Mon, 13 Aug 2012 19:27:51 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <tamas.k.lengyel@gmail.com>) id 1T10Hf-0005ra-6Q
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 19:26:39 +0000
Received: from [85.158.143.35:37290] by server-3.bemta-4.messagelabs.com id
	0F/3C-09529-EE459205; Mon, 13 Aug 2012 19:26:38 +0000
X-Env-Sender: tamas.k.lengyel@gmail.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1344885980!13213142!1
X-Originating-IP: [209.85.213.173]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_20_30,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 645 invoked from network); 13 Aug 2012 19:26:22 -0000
Received: from mail-yx0-f173.google.com (HELO mail-yx0-f173.google.com)
	(209.85.213.173)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Aug 2012 19:26:22 -0000
Received: by yenm4 with SMTP id m4so3988696yen.32
	for <xen-devel@lists.xen.org>; Mon, 13 Aug 2012 12:26:20 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=from:reply-to:to:cc:subject:x-mailer:references:in-reply-to
	:content-type:date:message-id:mime-version;
	bh=ySRqDwedE32sg6Vev+FFIn7k7eOSBVjy+keTBo0Oa+s=;
	b=pPO3LQWfJ4rxO/VsylRh0fhy0RV4Qnce0bYISCxXfRSsgYt3DIFYuCcslz2V4zZpLG
	fFBYic/xN5qrKLv1iUMFoUMGqR0kpPPsE2Wps0Q/Fzj9xLLwBuv4hZk/Fhfk3KXAhFoJ
	dIVuK2faWihMtH7UIRRwsbSlMPGnjwIe2RQrBXTlhH2XuL0lRjx1Q+5y+mEZcLlIRY3k
	p4djcHPhfGgfb6acX7gZXVLXr+BRxIguk+ZIAxCP+F1NLTWvT4OQvRgwjEw4KU/+kRtn
	tY5UrSG+kv8RgbhWJGbPSVY+h8VZu/tlvmMieB9S7qYNdZu+PyaSNhyKVQd7dXrwZRKi
	v1xQ==
Received: by 10.101.6.24 with SMTP id j24mr3870482ani.5.1344885980254;
	Mon, 13 Aug 2012 12:26:20 -0700 (PDT)
Received: from [100.248.21.86] (me62436d0.tmodns.net. [208.54.36.230])
	by mx.google.com with ESMTPS id n5sm397535ang.18.2012.08.13.12.26.17
	(version=SSLv3 cipher=OTHER); Mon, 13 Aug 2012 12:26:19 -0700 (PDT)
From: tamas.k.lengyel@gmail.com
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
X-Mailer: Modest 3.2
References: <CABfawh=yoidWLbcYqs4JOD+b30vxYrrT1Q7a2QBNttwx4U9=Ug@mail.gmail.com>
	<CABfawh=1NC-VypsYLNr-J6EkvRS8PBXO5spF8w9GQdaUaso+jQ@mail.gmail.com>
	<1343809281.27221.14.camel@zakaz.uk.xensource.com>
	<CABfawhmrJ=Cb37AqGgyEEXXmtyyTyui0h6-29iqEybvbNVsXxQ@mail.gmail.com>
	<20521.2218.8505.913476@mariner.uk.xensource.com>
In-Reply-To: <20521.2218.8505.913476@mariner.uk.xensource.com>
Date: Mon, 13 Aug 2012 15:26:12 -0400
Message-Id: <1344885972.2726.5.camel@Nokia-N900>
Mime-Version: 1.0
X-Mailman-Approved-At: Mon, 13 Aug 2012 19:27:49 +0000
Cc: Ian Campbell <Ian.Campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] libxl config datastructures
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: tamas.k.lengyel@gmail.com
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8094514820432413847=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--===============8094514820432413847==
Content-Type: multipart/alternative; boundary="=-eyuMXOzLguC8hC9/7d9N"


--=-eyuMXOzLguC8hC9/7d9N
Content-Type: text/plain; charset=utf-8
Content-ID: <1344885971.2726.2.camel@Nokia-N900>
Content-Transfer-Encoding: 8bit

Hi Ian,

> I don't understand why you aren't happy to use just the config parser supplied in libxlu.

I'm OK with the config parser in libxlu; my problem is the conversion between XLU_Config and the datastructure that the libxl domain creation functions expect. Right now the only options would be to duplicate the code in xl that does the transition, which I don't want to do, or to call xl itself instead of using libxl, which I find very ugly and hackish.

A further problem with XLU_Config is that the datastructure is opaque. It would be fine if there were more accessor functions for it, so that a config could be saved back to a file or dumped into a char *. The ability to change elements of an XLU_ConfigList would also be needed.

Currently, the only way I have been able to do this is by unmasking XLU_Config and XLU_ConfigList and interacting with their elements directly. That way I can change the config, export it back to a file, and then, by forking and calling xl itself (!), create the VM with the modified config. It works, but it's a hack, and I don't like it.
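
The fork-and-exec step described here can be sketched in plain POSIX as follows. This is only an illustration of the workaround, not libxl API: run_cmd and run_xl_create are hypothetical helper names, and the "xl create <cfgfile>" invocation is an assumption about how xl would be called.

```c
/* Sketch of the fork-and-exec workaround: write the modified config to
 * a file, then spawn "xl create <file>" and wait for it to finish.
 * run_cmd/run_xl_create are hypothetical names, not libxl functions. */
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int run_cmd(const char *path, char *const argv[])
{
    pid_t pid = fork();
    if (pid < 0)
        return -1;                /* fork failed */
    if (pid == 0) {
        execvp(path, argv);       /* child: replace image with the command */
        _exit(127);               /* only reached if exec failed */
    }
    int status;
    if (waitpid(pid, &status, 0) < 0)
        return -1;
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}

/* Hypothetical wrapper around "xl create <cfgfile>". */
int run_xl_create(const char *cfgfile)
{
    char *argv[] = { "xl", "create", (char *)cfgfile, NULL };
    return run_cmd("xl", argv);
}
```

The parent gets xl's exit status back, but nothing else: no structured error information, no handle on the created domain, which is part of why this is uglier than calling libxl directly.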

I would be happy to send you patches that would enable me to do the XLU_Config modifications I need to do without needing to unmask it.

As for libxl: without the datastructure conversion I won't be able
to use it, and will have to do the ugly fork-and-xl call instead.

Tamas
 

--=-eyuMXOzLguC8hC9/7d9N
Content-Type: text/html; charset=utf-8
Content-ID: <1344885971.2726.4.camel@Nokia-N900>
Content-Transfer-Encoding: 7bit

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html><head>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
    <meta name="generator" content="Osso Notes">
    <title></title></head>
<body>
<p>Hi Ian,
<br>
<br>&gt; I don't understand why you aren't happy to use just the config parser supplied in libxlu.&nbsp;
<br>
<br>I'm OK with the config parser in libxlu; my problem is the conversion between XLU_Config and the datastructure that the libxl domain creation functions expect. Right now the only options would be to duplicate the code in xl that does the transition, which I don't want to do, or to call xl itself instead of using libxl, which I find very ugly and hackish.
<br>
<br>A further problem with XLU_Config is that the datastructure is opaque. It would be fine if there were more accessor functions for it, so that a config could be saved back to a file or dumped into a char *. The ability to change elements of an XLU_ConfigList would also be needed.
<br>
<br>Currently, the only way I have been able to do this is by unmasking XLU_Config and XLU_ConfigList and interacting with their elements directly. That way I can change the config, export it back to a file, and then, by forking and calling xl itself (!), create the VM with the modified config. It works, but it's a hack, and I don't like it.
<br>
<br>I would be happy to send you patches that would enable me to do the XLU_Config modifications I need to do without needing to unmask it.
<br>
<br>As for libxl: without the datastructure conversion I won't be able
<br>to use it, and will have to do the ugly fork-and-xl call instead.
<br>
<br>Tamas
<br>&#32;<br></p>
</body>
</html>

--=-eyuMXOzLguC8hC9/7d9N--



--===============8094514820432413847==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8094514820432413847==--



From xen-devel-bounces@lists.xen.org Mon Aug 13 19:44:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 19:44:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T10YE-0006Fv-K9; Mon, 13 Aug 2012 19:43:46 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T10YC-0006Fn-VF
	for xen-devel@lists.xensource.com; Mon, 13 Aug 2012 19:43:45 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1344887016!9072804!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg4Njk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28921 invoked from network); 13 Aug 2012 19:43:36 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Aug 2012 19:43:36 -0000
X-IronPort-AV: E=Sophos;i="4.77,761,1336348800"; d="scan'208";a="13988582"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	13 Aug 2012 19:43:35 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Mon, 13 Aug 2012 20:43:35 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1T10Y3-0005f1-3j;
	Mon, 13 Aug 2012 19:43:35 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1T10Y3-0001Pl-3H;
	Mon, 13 Aug 2012 20:43:35 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13597-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 13 Aug 2012 20:43:35 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13597: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13597 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13597/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13595
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13596
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13596
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13596

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  14788c9cb645
baseline version:
 xen                  dc4970af48a0

------------------------------------------------------------
People who touched revisions under test:
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Olaf Hering <olaf@aepfle.de>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=14788c9cb645
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable 14788c9cb645
+ branch=xen-unstable
+ revision=14788c9cb645
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/staging/xen-unstable.hg
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-unstable.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-unstable.hg
+ hg push -r 14788c9cb645 ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 1 changesets with 1 changes to 1 files

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 20:40:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 20:40:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T11QY-0007Qi-71; Mon, 13 Aug 2012 20:39:54 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1T11QX-0007Qd-22
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 20:39:53 +0000
Received: from [85.158.138.51:59811] by server-11.bemta-3.messagelabs.com id
	80/1E-23152-81669205; Mon, 13 Aug 2012 20:39:52 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-14.tower-174.messagelabs.com!1344890391!21735253!1
X-Originating-IP: [192.89.123.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjg5LjEyMy4yNSA9PiA0NzYzMTc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6506 invoked from network); 13 Aug 2012 20:39:51 -0000
Received: from smtp.tele.fi (HELO smtp.tele.fi) (192.89.123.25)
	by server-14.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Aug 2012 20:39:51 -0000
X-Originating-Ip: [194.89.68.22]
Received: from ydin.reaktio.net (reaktio.net [194.89.68.22])
	by smtp.tele.fi (Postfix) with ESMTP id 99F171070
	for <xen-devel@lists.xen.org>; Mon, 13 Aug 2012 23:39:50 +0300 (EEST)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id 5EDBA2005D; Mon, 13 Aug 2012 23:39:50 +0300 (EEST)
Date: Mon, 13 Aug 2012 23:39:50 +0300
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: xen-devel@lists.xen.org
Message-ID: <20120813203950.GQ19851@reaktio.net>
MIME-Version: 1.0
Content-Disposition: inline
User-Agent: Mutt/1.5.20 (2009-06-14)
Subject: [Xen-devel] Xen 4.2.0-rc2: make tools fails on Fedora 17,
	ipxe isa.c problem
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello,

I just grabbed http://bits.xensource.com/oss-xen/release/4.2.0-rc2/xen-4.2.0-rc2.tar.gz
and tried to build it on Fedora 17 x86_64 host (gcc 4.7.0):

make tools:

..
make[5]: Entering directory `/root/xen/xen-4.2.0-rc2/tools/firmware'
make -C etherboot all
make[6]: Entering directory `/root/xen/xen-4.2.0-rc2/tools/firmware/etherboot'
make -C ipxe/src bin/rtl8139.rom
make[7]: Entering directory `/root/xen/xen-4.2.0-rc2/tools/firmware/etherboot/ipxe/src'
  [BUILD] bin/isa.o
drivers/bus/isa.c: In function 'isabus_probe':
drivers/bus/isa.c:112:18: error: array subscript is above array bounds [-Werror=array-bounds]
cc1: all warnings being treated as errors
make[7]: *** [bin/isa.o] Error 1
make[7]: Leaving directory `/root/xen/xen-4.2.0-rc2/tools/firmware/etherboot/ipxe/src'
make[6]: *** [ipxe/src/bin/rtl8139.rom] Error 2
make[6]: Leaving directory `/root/xen/xen-4.2.0-rc2/tools/firmware/etherboot'
make[5]: *** [subdir-all-etherboot] Error 2
make[5]: Leaving directory `/root/xen/xen-4.2.0-rc2/tools/firmware'
make[4]: *** [subdirs-all] Error 2
make[4]: Leaving directory `/root/xen/xen-4.2.0-rc2/tools/firmware'
make[3]: *** [all] Error 2
make[3]: Leaving directory `/root/xen/xen-4.2.0-rc2/tools/firmware'
make[2]: *** [subdir-install-firmware] Error 2
make[2]: Leaving directory `/root/xen/xen-4.2.0-rc2/tools'
make[1]: *** [subdirs-install] Error 2
make[1]: Leaving directory `/root/xen/xen-4.2.0-rc2/tools'
make: *** [install-tools] Error 2
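
For context, gcc's -Warray-bounds (enabled with -Wall at -O2, and fatal here because the tree builds with -Werror) fires when the compiler can prove a constant subscript reads past the end of a fixed-size array. A minimal illustration of that diagnostic class follows; it is not the actual isa.c code, and the names and values are made up:

```c
/* Minimal illustration of the "-Warray-bounds" diagnostic class: a
 * fixed-size table walked with a visible loop bound.  Names/values
 * are illustrative only, not taken from drivers/bus/isa.c. */
#include <stddef.h>

#define NADDRS 4
static const unsigned int isa_ioaddrs[NADDRS] = { 0x220, 0x240, 0x260, 0x280 };

unsigned int sum_ioaddrs(void)
{
    unsigned int sum = 0;
    /* Correct bound: i < NADDRS.  An off-by-one bound (i <= NADDRS)
     * would read isa_ioaddrs[NADDRS], one element past the end, which
     * gcc reports as "array subscript is above array bounds". */
    for (size_t i = 0; i < NADDRS; i++)
        sum += isa_ioaddrs[i];
    return sum;
}
```

Whether the warning in isa.c is a real overrun or a gcc 4.7 false positive would need inspection of the iPXE source in question.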


Did someone already notice/fix this issue? 


-- Pasi


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 20:40:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 20:40:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T11QY-0007Qi-71; Mon, 13 Aug 2012 20:39:54 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1T11QX-0007Qd-22
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 20:39:53 +0000
Received: from [85.158.138.51:59811] by server-11.bemta-3.messagelabs.com id
	80/1E-23152-81669205; Mon, 13 Aug 2012 20:39:52 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-14.tower-174.messagelabs.com!1344890391!21735253!1
X-Originating-IP: [192.89.123.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjg5LjEyMy4yNSA9PiA0NzYzMTc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6506 invoked from network); 13 Aug 2012 20:39:51 -0000
Received: from smtp.tele.fi (HELO smtp.tele.fi) (192.89.123.25)
	by server-14.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Aug 2012 20:39:51 -0000
X-Originating-Ip: [194.89.68.22]
Received: from ydin.reaktio.net (reaktio.net [194.89.68.22])
	by smtp.tele.fi (Postfix) with ESMTP id 99F171070
	for <xen-devel@lists.xen.org>; Mon, 13 Aug 2012 23:39:50 +0300 (EEST)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id 5EDBA2005D; Mon, 13 Aug 2012 23:39:50 +0300 (EEST)
Date: Mon, 13 Aug 2012 23:39:50 +0300
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: xen-devel@lists.xen.org
Message-ID: <20120813203950.GQ19851@reaktio.net>
MIME-Version: 1.0
Content-Disposition: inline
User-Agent: Mutt/1.5.20 (2009-06-14)
Subject: [Xen-devel] Xen 4.2.0-rc2: make tools fails on Fedora 17,
	ipxe isa.c problem
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello,

I just grabbed http://bits.xensource.com/oss-xen/release/4.2.0-rc2/xen-4.2.0-rc2.tar.gz
and tried to build it on a Fedora 17 x86_64 host (gcc 4.7.0):

make tools:

..
make[5]: Entering directory `/root/xen/xen-4.2.0-rc2/tools/firmware'
make -C etherboot all
make[6]: Entering directory `/root/xen/xen-4.2.0-rc2/tools/firmware/etherboot'
make -C ipxe/src bin/rtl8139.rom
make[7]: Entering directory `/root/xen/xen-4.2.0-rc2/tools/firmware/etherboot/ipxe/src'
  [BUILD] bin/isa.o
drivers/bus/isa.c: In function 'isabus_probe':
drivers/bus/isa.c:112:18: error: array subscript is above array bounds [-Werror=array-bounds]
cc1: all warnings being treated as errors
make[7]: *** [bin/isa.o] Error 1
make[7]: Leaving directory `/root/xen/xen-4.2.0-rc2/tools/firmware/etherboot/ipxe/src'
make[6]: *** [ipxe/src/bin/rtl8139.rom] Error 2
make[6]: Leaving directory `/root/xen/xen-4.2.0-rc2/tools/firmware/etherboot'
make[5]: *** [subdir-all-etherboot] Error 2
make[5]: Leaving directory `/root/xen/xen-4.2.0-rc2/tools/firmware'
make[4]: *** [subdirs-all] Error 2
make[4]: Leaving directory `/root/xen/xen-4.2.0-rc2/tools/firmware'
make[3]: *** [all] Error 2
make[3]: Leaving directory `/root/xen/xen-4.2.0-rc2/tools/firmware'
make[2]: *** [subdir-install-firmware] Error 2
make[2]: Leaving directory `/root/xen/xen-4.2.0-rc2/tools'
make[1]: *** [subdirs-install] Error 2
make[1]: Leaving directory `/root/xen/xen-4.2.0-rc2/tools'
make: *** [install-tools] Error 2


Has anyone already noticed/fixed this issue?


-- Pasi


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 20:52:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 20:52:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T11cN-0007aQ-FR; Mon, 13 Aug 2012 20:52:07 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T11cL-0007aL-Uq
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 20:52:06 +0000
Received: from [85.158.139.83:50776] by server-6.bemta-5.messagelabs.com id
	58/21-22415-5F869205; Mon, 13 Aug 2012 20:52:05 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-2.tower-182.messagelabs.com!1344891123!27919528!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjg2NTIy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7152 invoked from network); 13 Aug 2012 20:52:04 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-2.tower-182.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 13 Aug 2012 20:52:04 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7DKpxfF014780
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 13 Aug 2012 20:51:59 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7DKpwej028383
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 13 Aug 2012 20:51:58 GMT
Received: from abhmt106.oracle.com (abhmt106.oracle.com [141.146.116.58])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7DKpv91006409; Mon, 13 Aug 2012 15:51:57 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 13 Aug 2012 13:51:57 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id F13F2402BA; Mon, 13 Aug 2012 16:42:18 -0400 (EDT)
Date: Mon, 13 Aug 2012 16:42:18 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Attilio Rao <attilio.rao@citrix.com>
Message-ID: <20120813204218.GB12550@phenom.dumpdata.com>
References: <1344608227-30910-1-git-send-email-attilio.rao@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1344608227-30910-1-git-send-email-attilio.rao@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 0/2] Document pagetable_reserve PVOP and
 enforce a better semantic
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Aug 10, 2012 at 03:17:05PM +0100, Attilio Rao wrote:
> While looking into documenting the pagetable_reserve PVOP, I realized that it
> assumes start == pgt_buf_start. I think this is not semantically right
> (even if with the current code this should not be a problem in practice), and
> what we really want is to extend the logic to also do the RO -> RW
> conversion for the range [pgt_buf_start, start).
> This patch therefore implements the missing conversion, adds some smaller
> cleanups and finally provides documentation for the PVOP.
> Please look at 2/2 for more details on how the comment is structured.
> If we get this right we will have a reference to be used later on for other
> PVOPs.
> A preliminary version of this patch has already been reviewed by
> Stefano Stabellini.
> 
> Attilio Rao (2):
>   XEN, X86: Improve semantic support for pagetable_reserve PVOP
>   Document the semantic of the pagetable_reserve PVOP

The titles need a prefix, like 'x86' or 'xen/x86'.

s/PVOP/PVOPS/

> 
>  arch/x86/include/asm/x86_init.h |   19 +++++++++++++++++--
>  arch/x86/mm/init.c              |    4 ++++
>  arch/x86/xen/mmu.c              |   22 ++++++++++++++++++++--
>  3 files changed, 41 insertions(+), 4 deletions(-)
> 
> -- 
> 1.7.2.5

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 20:54:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 20:54:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T11du-0007es-Uq; Mon, 13 Aug 2012 20:53:42 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T11dt-0007ed-LT
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 20:53:41 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1344891214!9092643!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjg2NTIy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3150 invoked from network); 13 Aug 2012 20:53:35 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-8.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 13 Aug 2012 20:53:35 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7DKrUVG019773
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 13 Aug 2012 20:53:31 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7DKrT8d002358
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 13 Aug 2012 20:53:30 GMT
Received: from abhmt110.oracle.com (abhmt110.oracle.com [141.146.116.62])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7DKrT9O004575; Mon, 13 Aug 2012 15:53:29 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 13 Aug 2012 13:53:29 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 6C3D7402BA; Mon, 13 Aug 2012 16:43:50 -0400 (EDT)
Date: Mon, 13 Aug 2012 16:43:50 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Attilio Rao <attilio.rao@citrix.com>
Message-ID: <20120813204350.GC12550@phenom.dumpdata.com>
References: <1344608227-30910-1-git-send-email-attilio.rao@citrix.com>
	<1344608227-30910-2-git-send-email-attilio.rao@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1344608227-30910-2-git-send-email-attilio.rao@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 1/2] XEN,
 X86: Improve semantic support for pagetable_reserve PVOP
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Aug 10, 2012 at 03:17:06PM +0100, Attilio Rao wrote:
> - Allow xen_mapping_pagetable_reserve() to handle a start different from
>   pgt_buf_start, but still bigger than it.
> - Add checks to xen_mapping_pagetable_reserve() and native_pagetable_reserve()
>   for verifying start and end are contained in the range
>   [pgt_buf_start, pgt_buf_top].
> - In xen_mapping_pagetable_reserve(), change printk into pr_debug.
> - In xen_mapping_pagetable_reserve(), print out diagnostic only if there is
>   an actual need to do that (or, in other words, if there are actually some
>   pages going to switch from RO to RW).
> 
> Signed-off-by: Attilio Rao <attilio.rao@citrix.com>
> ---
>  arch/x86/mm/init.c |    4 ++++
>  arch/x86/xen/mmu.c |   22 ++++++++++++++++++++--
>  2 files changed, 24 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
> index e0e6990..c5849b6 100644
> --- a/arch/x86/mm/init.c
> +++ b/arch/x86/mm/init.c
> @@ -92,6 +92,10 @@ static void __init find_early_table_space(struct map_range *mr, unsigned long en
>  
>  void __init native_pagetable_reserve(u64 start, u64 end)
>  {
> +	if (start < PFN_PHYS(pgt_buf_start) || end > PFN_PHYS(pgt_buf_top))
> +		panic("Invalid address range: [%llu - %llu] should be a subset of [%llu - %llu]\n",
> +			start, end, PFN_PHYS(pgt_buf_start),
> +			PFN_PHYS(pgt_buf_top));
>  	memblock_reserve(start, end - start);
>  }
>  
> diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
> index b65a761..8d943e0a 100644
> --- a/arch/x86/xen/mmu.c
> +++ b/arch/x86/xen/mmu.c
> @@ -1180,12 +1180,30 @@ static void __init xen_pagetable_setup_start(pgd_t *base)
>  
>  static __init void xen_mapping_pagetable_reserve(u64 start, u64 end)
>  {
> +	u64 sh_start;

sh... shared? shell? Can it be just 'begin' or 'pgt_buf_start_phys' ?
Or 'start_phys' ?
Perhaps 'orig_start' ? 'early_start'? '_start'?

> +
> +	sh_start = PFN_PHYS(pgt_buf_start);
> +
> +	if (start < sh_start || end > PFN_PHYS(pgt_buf_top))
> +		panic("Invalid address range: [%llu - %llu] should be a subset of [%llu - %llu]\n",
> +			start, end, sh_start, PFN_PHYS(pgt_buf_top));
> +
> +	/* set RW the initial range */
> +	if (start != sh_start)
> +		pr_debug("xen: setting RW the range %llx - %llx\n",
> +			sh_start, start);
> +	while (sh_start < start) {
> +		make_lowmem_page_readwrite(__va(sh_start));
> +		sh_start += PAGE_SIZE;
> +	}
> +
>  	/* reserve the range used */
>  	native_pagetable_reserve(start, end);
>  
>  	/* set as RW the rest */
> -	printk(KERN_DEBUG "xen: setting RW the range %llx - %llx\n", end,
> -			PFN_PHYS(pgt_buf_top));
> +	if (end != PFN_PHYS(pgt_buf_top))
> +		pr_debug("xen: setting RW the range %llx - %llx\n",
> +			end, PFN_PHYS(pgt_buf_top));
>  	while (end < PFN_PHYS(pgt_buf_top)) {
>  		make_lowmem_page_readwrite(__va(end));
>  		end += PAGE_SIZE;
> -- 
> 1.7.2.5

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 21:00:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 21:00:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T11jg-0007rt-Oh; Mon, 13 Aug 2012 20:59:40 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <peter.maloney@brockmann-consult.de>)
	id 1T11jf-0007ro-2x
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 20:59:39 +0000
Received: from [85.158.138.51:63130] by server-6.bemta-3.messagelabs.com id
	64/16-32013-ABA69205; Mon, 13 Aug 2012 20:59:38 +0000
X-Env-Sender: peter.maloney@brockmann-consult.de
X-Msg-Ref: server-2.tower-174.messagelabs.com!1344891576!28104199!1
X-Originating-IP: [212.227.126.171]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiAyMTIuMjI3LjEyNi4xNzEgPT4gNTcwOTY=\n,sa_preprocessor: 
	QmFkIElQOiAyMTIuMjI3LjEyNi4xNzEgPT4gNTcwOTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21665 invoked from network); 13 Aug 2012 20:59:36 -0000
Received: from moutng.kundenserver.de (HELO moutng.kundenserver.de)
	(212.227.126.171) by server-2.tower-174.messagelabs.com with SMTP;
	13 Aug 2012 20:59:36 -0000
Received: from [192.168.179.34] (hmbg-5f7603cb.pool.mediaWays.net
	[95.118.3.203])
	by mrelayeu.kundenserver.de (node=mrbap3) with ESMTP (Nemesis)
	id 0MeBiQ-1TMPxu2ADW-00Pv0y; Mon, 13 Aug 2012 22:59:36 +0200
Message-ID: <50296AC2.80600@brockmann-consult.de>
Date: Mon, 13 Aug 2012 22:59:46 +0200
From: Peter Maloney <peter.maloney@brockmann-consult.de>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:13.0) Gecko/20120601 Thunderbird/13.0
MIME-Version: 1.0
To: xen-devel@lists.xen.org
References: <501F98A1.4070806@brockmann-consult.de>
	<501FB9060200007800092D4E@nat28.tlf.novell.com>
	<5020C2DE.304@brockmann-consult.de>
	<50294D4B.9050405@brockmann-consult.de>
In-Reply-To: <50294D4B.9050405@brockmann-consult.de>
X-Provags-ID: V02:K0:cUzyQI1QT+RA8Gp0uMHAGx+dRYxv2TUHQniSzw4+m+1
	pWlTZ6yMXzFRr6VlKqFXIk9j6gGIETqzL4WeNU7ZazMjZC6QCT
	D4JiYd/tYn59+kgatSsMY+NAsnz4xeRATKt6yhN/2+c6lOB+kl
	AvhPKdbfB4gtCmwXotAo0Iik67beOe0lLajNOFdCNBNJIsJ+pm
	RM2oh0su2injp/NO5g53C8OjfAybrnEh3aF1/DJ6ghsY4h/4qv
	uQ1vkEb43WYGz46i6f7pxKUcYU4cs+idAK7VZPglkQptUKX9tu
	C+EKLFPLch93MEVb4oxmK0XnTHKAftqlfchiyzjdEFqu8IXIOt
	9DRRYYf//L2b+wQNZIAHU7f94GVu58gC0VnQ7N3FGra+SX9UUs
	VCUXrCizRKJrQ==
Subject: Re: [Xen-devel] 4.1.2 very slow without upstream patches,
 but fast with them, also 4.2 very slow
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I also tested 4.1.3, which is fast, and both USB and graphics
passthrough work, but "xl create" gave this message the first time I
started the vm (but not the second):

libxl: error: libxl_pci.c:750:libxl_device_pci_reset The kernel doesn't
support reset from sysfs for PCI device 0000:00:12.0


0000:00:12.0 is a USB device, which works in the VM.

peter:/opt # lspci -v | grep 00:12.0
00:12.0 USB Controller: ATI Technologies Inc SB7x0/SB8x0/SB9x0 USB OHCI0
Controller (prog-if 10 [OHCI])


On 08/13/2012 08:54 PM, Peter Maloney wrote:
> So... I did my 4.2-unstable test, using a fresh pull from yesterday; dom0
> is normally fast (unlike previous tests), and domU is ultra slow, but
> actually boots, and graphics card passthrough works without any patches,
> and so does the USB keyboard, but USB mouse passthrough doesn't work.
>
>
> On 08/07/2012 09:25 AM, Peter Maloney wrote:
>>> That still won't tell us which patches you did apply.
>> I applied no patches and tested, and the result was slow. And then
>> applied all patches, and it was fast. I didn't try figuring out which
>> one it was.
>>
>>
>> So I guess I'll try:
>> - the latest unstable 4.2
>> - the 4.1.3-rc (Which includes the patch Malcolm suggested)
>> - and my rpm source with half patches, 3/4 of them, etc. binary search
>> style to see which patch(es) changed the performance. But this means I
>> won't be able to narrow it down to a single patch, but only the point in
>> the long list where the most dramatic change happens, possibly depending
>> on many previous patches.
>>
>> Thanks so far, guys.
>>
>>
>> On 08/06/2012 12:31 PM, Jan Beulich wrote:
>>>>>> On 06.08.12 at 12:12, Peter Maloney <peter.maloney@brockmann-consult.de> wrote:
>>>> my AMD FX-8150 system with vanilla source code is super slow, both the
>>>> dom0 and domUs. However, after I merge the upstream patches I found in
>>>> the openSUSE rpm, it runs normally.
>>> I'd be very surprised if you really just took the upstream patches,
>>> and the result was better than 4.2-rc1. After all, what upstream
>>> means is that they were taken from -unstable.
>>>
>>>> I tried 4.2-unstable and it was the same. There was no rc1 when I tested
>>>> it about 1.5 weeks ago. And 4.2 has the same horrible performance, and
>>>> obviously those patches won't work any more since the 4.2 code looks
>>>> completely reorganized, so I'm stuck with 4.1.2
>>> Obviously the upstream patches can't be applied to something
>>> that already has all those changes. Other patches, of which we
>>> unfortunately have quite a few, would be a different story.
>>>
>>>> Here is the rpm I was using at the time:
>>>> http://download.opensuse.org/update/12.1/src/xen-4.1.2_16-1.7.1.src.rpm 
>>>>
>>>> To see the list of the patches and what order to apply them, see the
>>>> spec file.
>>> That still won't tell us which patches you did apply.
>>>
>>>> Please make sure this performance issue is fixed for the 4.2 release.
>>>> And I would be happy to test whatever files you send me.
>>> The sort of report you're doing isn't that helpful. What would
>>> help is if you could narrow down which patch(es) it is that
>>> make things so much better. Giving 4.1.3-rc a try might also
>>> be worthwhile, albeit I would hope we don't have a regression
>>> in 4.2.0-rc compared to 4.1.3-rc...
>>>
>>> Jan
>>>
>>>
>>> _______________________________________________
>>> Xen-devel mailing list
>>> Xen-devel@lists.xen.org
>>> http://lists.xen.org/xen-devel
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 21:00:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 21:00:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T11jg-0007rt-Oh; Mon, 13 Aug 2012 20:59:40 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <peter.maloney@brockmann-consult.de>)
	id 1T11jf-0007ro-2x
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 20:59:39 +0000
Received: from [85.158.138.51:63130] by server-6.bemta-3.messagelabs.com id
	64/16-32013-ABA69205; Mon, 13 Aug 2012 20:59:38 +0000
X-Env-Sender: peter.maloney@brockmann-consult.de
X-Msg-Ref: server-2.tower-174.messagelabs.com!1344891576!28104199!1
X-Originating-IP: [212.227.126.171]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiAyMTIuMjI3LjEyNi4xNzEgPT4gNTcwOTY=\n,sa_preprocessor: 
	QmFkIElQOiAyMTIuMjI3LjEyNi4xNzEgPT4gNTcwOTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21665 invoked from network); 13 Aug 2012 20:59:36 -0000
Received: from moutng.kundenserver.de (HELO moutng.kundenserver.de)
	(212.227.126.171) by server-2.tower-174.messagelabs.com with SMTP;
	13 Aug 2012 20:59:36 -0000
Received: from [192.168.179.34] (hmbg-5f7603cb.pool.mediaWays.net
	[95.118.3.203])
	by mrelayeu.kundenserver.de (node=mrbap3) with ESMTP (Nemesis)
	id 0MeBiQ-1TMPxu2ADW-00Pv0y; Mon, 13 Aug 2012 22:59:36 +0200
Message-ID: <50296AC2.80600@brockmann-consult.de>
Date: Mon, 13 Aug 2012 22:59:46 +0200
From: Peter Maloney <peter.maloney@brockmann-consult.de>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:13.0) Gecko/20120601 Thunderbird/13.0
MIME-Version: 1.0
To: xen-devel@lists.xen.org
References: <501F98A1.4070806@brockmann-consult.de>
	<501FB9060200007800092D4E@nat28.tlf.novell.com>
	<5020C2DE.304@brockmann-consult.de>
	<50294D4B.9050405@brockmann-consult.de>
In-Reply-To: <50294D4B.9050405@brockmann-consult.de>
X-Provags-ID: V02:K0:cUzyQI1QT+RA8Gp0uMHAGx+dRYxv2TUHQniSzw4+m+1
	pWlTZ6yMXzFRr6VlKqFXIk9j6gGIETqzL4WeNU7ZazMjZC6QCT
	D4JiYd/tYn59+kgatSsMY+NAsnz4xeRATKt6yhN/2+c6lOB+kl
	AvhPKdbfB4gtCmwXotAo0Iik67beOe0lLajNOFdCNBNJIsJ+pm
	RM2oh0su2injp/NO5g53C8OjfAybrnEh3aF1/DJ6ghsY4h/4qv
	uQ1vkEb43WYGz46i6f7pxKUcYU4cs+idAK7VZPglkQptUKX9tu
	C+EKLFPLch93MEVb4oxmK0XnTHKAftqlfchiyzjdEFqu8IXIOt
	9DRRYYf//L2b+wQNZIAHU7f94GVu58gC0VnQ7N3FGra+SX9UUs
	VCUXrCizRKJrQ==
Subject: Re: [Xen-devel] 4.1.2 very slow without upstream patches,
 but fast with them, also 4.2 very slow
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I also tested 4.1.3, which is fast, and both USB and graphics
passthrough work, but "xl create" gave this message the first time I
started the vm (but not the second):

libxl: error: libxl_pci.c:750:libxl_device_pci_reset The kernel doesn't support reset from sysfs for PCI device 0000:00:12.0


0000:00:12.0 is a USB device, which works in the VM.

peter:/opt # lspci -v | grep 00:12.0
00:12.0 USB Controller: ATI Technologies Inc SB7x0/SB8x0/SB9x0 USB OHCI0 Controller (prog-if 10 [OHCI])


On 08/13/2012 08:54 PM, Peter Maloney wrote:
> So... did my 4.2-unstable test, using a fresh pull from yesterday; dom0
> is normal fast (unlike previous tests), and domU is ultra slow, but
> actually boots, and graphics card passthrough works without any patches,
> and so does the USB keyboard, but USB mouse passthrough doesn't work.
>
>
> On 08/07/2012 09:25 AM, Peter Maloney wrote:
>>> That still won't tell us which patches you did apply.
>> I applied no patches and tested, and the result was slow. And then
>> applied all patches, and it was fast. I didn't try figuring out which
>> one it was.
>>
>>
>> So I guess I'll try:
>> - the latest unstable 4.2
>> - the 4.1.3-rc (Which includes the patch Malcolm suggested)
>> - and my rpm source with half patches, 3/4 of them, etc. binary search
>> style to see which patch(es) changed the performance. But this means I
>> won't be able to narrow it down to a single patch, but only the point in
>> the long list where the most dramatic change happens, possibly depending
>> on many previous patches.
>>
>> Thanks so far, guys.
>>
>>
>> On 08/06/2012 12:31 PM, Jan Beulich wrote:
>>>>>> On 06.08.12 at 12:12, Peter Maloney <peter.maloney@brockmann-consult.de> wrote:
>>>> my AMD FX-8150 system with vanilla source code is super slow, both the
>>>> dom0 and domUs. However, after I merge the upstream patches I found in
>>>> the openSUSE rpm, it runs normally.
>>> I'd be very surprised if you really just took the upstream patches,
>>> and the result was better than 4.2-rc1. After all, what upstream
>>> means is that they were taken from -unstable.
>>>
>>>> I tried 4.2-unstable and it was the same. There was no rc1 when I tested
>>>> it about 1.5 weeks ago. And 4.2 has the same horrible performance, and
>>>> obviously those patches won't work any more since the 4.2 code looks
>>>> completely reorganized, so I'm stuck with 4.1.2
>>> Obviously the upstream patches can't be applied to something
>>> that already has all those changes. Other patches, of which we
>>> unfortunately have quite a few, would be a different story.
>>>
>>>> Here is the rpm I was using at the time:
>>>> http://download.opensuse.org/update/12.1/src/xen-4.1.2_16-1.7.1.src.rpm 
>>>>
>>>> To see the list of the patches and what order to apply them, see the
>>>> spec file.
>>> That still won't tell us which patches you did apply.
>>>
>>>> Please make sure this performance issue is fixed for the 4.2 release.
>>>> And I would be happy to test whatever files you send me.
>>> The sort of report you're doing isn't that helpful. What would
>>> help is if you could narrow down which patch(es) it is that
>>> make things so much better. Giving 4.1.3-rc a try might also
>>> be worthwhile, albeit I would hope we don't have a regression
>>> in 4.2.0-rc compared to 4.1.3-rc...
>>>
>>> Jan
>>>
>>>
>>> _______________________________________________
>>> Xen-devel mailing list
>>> Xen-devel@lists.xen.org
>>> http://lists.xen.org/xen-devel
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 21:36:24 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 21:36:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T12Is-0008Pq-6y; Mon, 13 Aug 2012 21:36:02 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1T12Iq-0008Pl-T0
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 21:36:01 +0000
Received: from [85.158.138.51:25598] by server-11.bemta-3.messagelabs.com id
	F2/62-23152-F3379205; Mon, 13 Aug 2012 21:35:59 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-16.tower-174.messagelabs.com!1344893759!27987613!1
X-Originating-IP: [192.89.123.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjg5LjEyMy4yNSA9PiA0NzYzMTc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3332 invoked from network); 13 Aug 2012 21:35:59 -0000
Received: from smtp.tele.fi (HELO smtp.tele.fi) (192.89.123.25)
	by server-16.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Aug 2012 21:35:59 -0000
X-Originating-Ip: [194.89.68.22]
Received: from ydin.reaktio.net (reaktio.net [194.89.68.22])
	by smtp.tele.fi (Postfix) with ESMTP id BE6892788;
	Tue, 14 Aug 2012 00:35:58 +0300 (EEST)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id 876752005D; Tue, 14 Aug 2012 00:35:58 +0300 (EEST)
Date: Tue, 14 Aug 2012 00:35:58 +0300
From: Pasi Kärkkäinen <pasik@iki.fi>
To: xen-devel@lists.xen.org
Message-ID: <20120813213558.GR19851@reaktio.net>
References: <20120813203950.GQ19851@reaktio.net>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20120813203950.GQ19851@reaktio.net>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: olaf@aepfle.de
Subject: Re: [Xen-devel] Xen 4.2.0-rc2: make tools fails on Fedora 17,
 ipxe isa.c problem
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Aug 13, 2012 at 11:39:50PM +0300, Pasi Kärkkäinen wrote:
> Hello,
>
> I just grabbed http://bits.xensource.com/oss-xen/release/4.2.0-rc2/xen-4.2.0-rc2.tar.gz
> and tried to build it on Fedora 17 x86_64 host (gcc 4.7.0):
>
> make tools:
>
> ..
> make[5]: Entering directory `/root/xen/xen-4.2.0-rc2/tools/firmware'
> make -C etherboot all
> make[6]: Entering directory `/root/xen/xen-4.2.0-rc2/tools/firmware/etherboot'
> make -C ipxe/src bin/rtl8139.rom
> make[7]: Entering directory `/root/xen/xen-4.2.0-rc2/tools/firmware/etherboot/ipxe/src'
>   [BUILD] bin/isa.o
> drivers/bus/isa.c: In function 'isabus_probe':
> drivers/bus/isa.c:112:18: error: array subscript is above array bounds [-Werror=array-bounds]
> cc1: all warnings being treated as errors
> make[7]: *** [bin/isa.o] Error 1
> make[7]: Leaving directory `/root/xen/xen-4.2.0-rc2/tools/firmware/etherboot/ipxe/src'
>

Ok the patch from Olaf is here: http://lists.ipxe.org/pipermail/ipxe-devel/2012-March/001279.html

Should we apply that patch to xen-unstable for Xen 4.2 ?

-- Pasi


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 21:39:24 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 21:39:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T12Lq-0008VZ-QM; Mon, 13 Aug 2012 21:39:06 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1T12Lp-0008VT-HQ
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 21:39:05 +0000
Received: from [85.158.139.83:41426] by server-11.bemta-5.messagelabs.com id
	1F/D8-29296-8F379205; Mon, 13 Aug 2012 21:39:04 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-13.tower-182.messagelabs.com!1344893943!27396699!1
X-Originating-IP: [192.89.123.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjg5LjEyMy4yNSA9PiA0NzcwMjg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3736 invoked from network); 13 Aug 2012 21:39:04 -0000
Received: from smtp.tele.fi (HELO smtp.tele.fi) (192.89.123.25)
	by server-13.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Aug 2012 21:39:04 -0000
X-Originating-Ip: [194.89.68.22]
Received: from ydin.reaktio.net (reaktio.net [194.89.68.22])
	by smtp.tele.fi (Postfix) with ESMTP id 533AB27C3;
	Tue, 14 Aug 2012 00:39:03 +0300 (EEST)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id 335E52005D; Tue, 14 Aug 2012 00:39:03 +0300 (EEST)
Date: Tue, 14 Aug 2012 00:39:03 +0300
From: Pasi Kärkkäinen <pasik@iki.fi>
To: xen-devel@lists.xen.org
Message-ID: <20120813213903.GS19851@reaktio.net>
References: <20120813203950.GQ19851@reaktio.net>
	<20120813213558.GR19851@reaktio.net>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20120813213558.GR19851@reaktio.net>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: olaf@aepfle.de
Subject: Re: [Xen-devel] Xen 4.2.0-rc2: make tools fails on Fedora 17,
 ipxe isa.c problem
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Aug 14, 2012 at 12:35:58AM +0300, Pasi Kärkkäinen wrote:
> On Mon, Aug 13, 2012 at 11:39:50PM +0300, Pasi Kärkkäinen wrote:
> > Hello,
> >
> > I just grabbed http://bits.xensource.com/oss-xen/release/4.2.0-rc2/xen-4.2.0-rc2.tar.gz
> > and tried to build it on Fedora 17 x86_64 host (gcc 4.7.0):
> >
> > make tools:
> >
> > ..
> > make[5]: Entering directory `/root/xen/xen-4.2.0-rc2/tools/firmware'
> > make -C etherboot all
> > make[6]: Entering directory `/root/xen/xen-4.2.0-rc2/tools/firmware/etherboot'
> > make -C ipxe/src bin/rtl8139.rom
> > make[7]: Entering directory `/root/xen/xen-4.2.0-rc2/tools/firmware/etherboot/ipxe/src'
> >   [BUILD] bin/isa.o
> > drivers/bus/isa.c: In function 'isabus_probe':
> > drivers/bus/isa.c:112:18: error: array subscript is above array bounds [-Werror=array-bounds]
> > cc1: all warnings being treated as errors
> > make[7]: *** [bin/isa.o] Error 1
> > make[7]: Leaving directory `/root/xen/xen-4.2.0-rc2/tools/firmware/etherboot/ipxe/src'
> >
>
> Ok the patch from Olaf is here: http://lists.ipxe.org/pipermail/ipxe-devel/2012-March/001279.html
>
> Should we apply that patch to xen-unstable for Xen 4.2 ?
>

And then there's the next build problem:

  [BUILD] bin/myri10ge.o
drivers/net/myri10ge.c: In function 'myri10ge_command':
drivers/net/myri10ge.c:308:3: error: dereferencing type-punned pointer will break strict-aliasing rules [-Werror=strict-aliasing]
drivers/net/myri10ge.c:310:2: error: dereferencing type-punned pointer will break strict-aliasing rules [-Werror=strict-aliasing]
cc1: all warnings being treated as errors
make[7]: *** [bin/myri10ge.o] Error 1
make[7]: Leaving directory `/root/xen/xen-4.2.0-rc2/tools/firmware/etherboot/ipxe/src'


And patch from Olaf here: http://lists.ipxe.org/pipermail/ipxe-devel/2012-March/001280.html

-- Pasi


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 21:45:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 21:45:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T12RR-0000GH-JY; Mon, 13 Aug 2012 21:44:53 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1T12RQ-0000GC-Fd
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 21:44:52 +0000
Received: from [85.158.139.83:57420] by server-2.bemta-5.messagelabs.com id
	39/A7-10142-35579205; Mon, 13 Aug 2012 21:44:51 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-4.tower-182.messagelabs.com!1344894290!25219610!1
X-Originating-IP: [192.89.123.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjg5LjEyMy4yNSA9PiA0NzcwMjg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19999 invoked from network); 13 Aug 2012 21:44:51 -0000
Received: from smtp.tele.fi (HELO smtp.tele.fi) (192.89.123.25)
	by server-4.tower-182.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 13 Aug 2012 21:44:51 -0000
X-Originating-Ip: [194.89.68.22]
Received: from ydin.reaktio.net (reaktio.net [194.89.68.22])
	by smtp.tele.fi (Postfix) with ESMTP id 406901C55;
	Tue, 14 Aug 2012 00:44:50 +0300 (EEST)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id 071B52005D; Tue, 14 Aug 2012 00:44:50 +0300 (EEST)
Date: Tue, 14 Aug 2012 00:44:49 +0300
From: Pasi Kärkkäinen <pasik@iki.fi>
To: xen-devel@lists.xen.org
Message-ID: <20120813214449.GT19851@reaktio.net>
References: <20120813203950.GQ19851@reaktio.net>
	<20120813213558.GR19851@reaktio.net>
	<20120813213903.GS19851@reaktio.net>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20120813213903.GS19851@reaktio.net>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: olaf@aepfle.de
Subject: Re: [Xen-devel] Xen 4.2.0-rc2: make tools fails on Fedora 17,
 ipxe isa.c problem
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Aug 14, 2012 at 12:39:03AM +0300, Pasi Kärkkäinen wrote:
> On Tue, Aug 14, 2012 at 12:35:58AM +0300, Pasi Kärkkäinen wrote:
> > On Mon, Aug 13, 2012 at 11:39:50PM +0300, Pasi Kärkkäinen wrote:
> > > Hello,
> > >
> > > I just grabbed http://bits.xensource.com/oss-xen/release/4.2.0-rc2/xen-4.2.0-rc2.tar.gz
> > > and tried to build it on Fedora 17 x86_64 host (gcc 4.7.0):
> > >
> > > make tools:
> > >
> > > ..
> > > make[5]: Entering directory `/root/xen/xen-4.2.0-rc2/tools/firmware'
> > > make -C etherboot all
> > > make[6]: Entering directory `/root/xen/xen-4.2.0-rc2/tools/firmware/etherboot'
> > > make -C ipxe/src bin/rtl8139.rom
> > > make[7]: Entering directory `/root/xen/xen-4.2.0-rc2/tools/firmware/etherboot/ipxe/src'
> > >   [BUILD] bin/isa.o
> > > drivers/bus/isa.c: In function 'isabus_probe':
> > > drivers/bus/isa.c:112:18: error: array subscript is above array bounds [-Werror=array-bounds]
> > > cc1: all warnings being treated as errors
> > > make[7]: *** [bin/isa.o] Error 1
> > > make[7]: Leaving directory `/root/xen/xen-4.2.0-rc2/tools/firmware/etherboot/ipxe/src'
> > >
> >
> > Ok the patch from Olaf is here: http://lists.ipxe.org/pipermail/ipxe-devel/2012-March/001279.html
> >
> > Should we apply that patch to xen-unstable for Xen 4.2 ?
> >
>
> And then there's the next build problem:
>
>   [BUILD] bin/myri10ge.o
> drivers/net/myri10ge.c: In function 'myri10ge_command':
> drivers/net/myri10ge.c:308:3: error: dereferencing type-punned pointer will break strict-aliasing rules [-Werror=strict-aliasing]
> drivers/net/myri10ge.c:310:2: error: dereferencing type-punned pointer will break strict-aliasing rules [-Werror=strict-aliasing]
> cc1: all warnings being treated as errors
> make[7]: *** [bin/myri10ge.o] Error 1
> make[7]: Leaving directory `/root/xen/xen-4.2.0-rc2/tools/firmware/etherboot/ipxe/src'
>
>
> And patch from Olaf here: http://lists.ipxe.org/pipermail/ipxe-devel/2012-March/001280.html
>

And the third build problem:

  [BUILD] bin/qib7322.o
drivers/infiniband/qib7322.c: In function 'qib7322_probe':
drivers/infiniband/qib7322.c:2141:28: error: 'old_value' may be used uninitialized in this function [-Werror=maybe-uninitialized]
drivers/infiniband/qib7322.c:2123:11: note: 'old_value' was declared here
cc1: all warnings being treated as errors
make[7]: *** [bin/qib7322.o] Error 1
make[7]: Leaving directory `/root/xen/xen-4.2.0-rc2/tools/firmware/etherboot/ipxe/src'


And patch from Christian Hesse here: http://permalink.gmane.org/gmane.network.ipxe.devel/1216


-- Pasi


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Aug 14, 2012 at 12:39:03AM +0300, Pasi Kärkkäinen wrote:
> On Tue, Aug 14, 2012 at 12:35:58AM +0300, Pasi Kärkkäinen wrote:
> > On Mon, Aug 13, 2012 at 11:39:50PM +0300, Pasi Kärkkäinen wrote:
> > > Hello,
> > >
> > > I just grabbed http://bits.xensource.com/oss-xen/release/4.2.0-rc2/xen-4.2.0-rc2.tar.gz
> > > and tried to build it on Fedora 17 x86_64 host (gcc 4.7.0):
> > >
> > > make tools:
> > >
> > > ..
> > > make[5]: Entering directory `/root/xen/xen-4.2.0-rc2/tools/firmware'
> > > make -C etherboot all
> > > make[6]: Entering directory `/root/xen/xen-4.2.0-rc2/tools/firmware/etherboot'
> > > make -C ipxe/src bin/rtl8139.rom
> > > make[7]: Entering directory `/root/xen/xen-4.2.0-rc2/tools/firmware/etherboot/ipxe/src'
> > >   [BUILD] bin/isa.o
> > > drivers/bus/isa.c: In function 'isabus_probe':
> > > drivers/bus/isa.c:112:18: error: array subscript is above array bounds [-Werror=array-bounds]
> > > cc1: all warnings being treated as errors
> > > make[7]: *** [bin/isa.o] Error 1
> > > make[7]: Leaving directory `/root/xen/xen-4.2.0-rc2/tools/firmware/etherboot/ipxe/src'
> > >
> >
> > Ok the patch from Olaf is here: http://lists.ipxe.org/pipermail/ipxe-devel/2012-March/001279.html
> >
> > Should we apply that patch to xen-unstable for Xen 4.2 ?
> >
>
> And then there's the next build problem:
>
>   [BUILD] bin/myri10ge.o
> drivers/net/myri10ge.c: In function 'myri10ge_command':
> drivers/net/myri10ge.c:308:3: error: dereferencing type-punned pointer will break strict-aliasing rules [-Werror=strict-aliasing]
> drivers/net/myri10ge.c:310:2: error: dereferencing type-punned pointer will break strict-aliasing rules [-Werror=strict-aliasing]
> cc1: all warnings being treated as errors
> make[7]: *** [bin/myri10ge.o] Error 1
> make[7]: Leaving directory `/root/xen/xen-4.2.0-rc2/tools/firmware/etherboot/ipxe/src'
>
>
> And patch from Olaf here: http://lists.ipxe.org/pipermail/ipxe-devel/2012-March/001280.html
>

And the third build problem:

  [BUILD] bin/qib7322.o
drivers/infiniband/qib7322.c: In function 'qib7322_probe':
drivers/infiniband/qib7322.c:2141:28: error: 'old_value' may be used uninitialized in this function [-Werror=maybe-uninitialized]
drivers/infiniband/qib7322.c:2123:11: note: 'old_value' was declared here
cc1: all warnings being treated as errors
make[7]: *** [bin/qib7322.o] Error 1
make[7]: Leaving directory `/root/xen/xen-4.2.0-rc2/tools/firmware/etherboot/ipxe/src'


And patch from Christian Hesse here: http://permalink.gmane.org/gmane.network.ipxe.devel/1216


-- Pasi


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 22:07:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 22:07:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T12nS-0000yU-D7; Mon, 13 Aug 2012 22:07:38 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1T12nR-0000yP-E0
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 22:07:37 +0000
Received: from [85.158.139.83:31893] by server-11.bemta-5.messagelabs.com id
	82/4C-29296-8AA79205; Mon, 13 Aug 2012 22:07:36 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-5.tower-182.messagelabs.com!1344895655!27977250!1
X-Originating-IP: [192.89.123.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjg5LjEyMy4yNSA9PiA0NzYzMTc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26534 invoked from network); 13 Aug 2012 22:07:35 -0000
Received: from smtp.tele.fi (HELO smtp.tele.fi) (192.89.123.25)
	by server-5.tower-182.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 13 Aug 2012 22:07:35 -0000
X-Originating-Ip: [194.89.68.22]
Received: from ydin.reaktio.net (reaktio.net [194.89.68.22])
	by smtp.tele.fi (Postfix) with ESMTP id 8CCC11F74;
	Tue, 14 Aug 2012 01:07:34 +0300 (EEST)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id 6AC4D2005D; Tue, 14 Aug 2012 01:07:34 +0300 (EEST)
Date: Tue, 14 Aug 2012 01:07:34 +0300
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: xen-devel@lists.xen.org
Message-ID: <20120813220734.GU19851@reaktio.net>
References: <20120813203950.GQ19851@reaktio.net>
	<20120813213558.GR19851@reaktio.net>
	<20120813213903.GS19851@reaktio.net>
	<20120813214449.GT19851@reaktio.net>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20120813214449.GT19851@reaktio.net>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: olaf@aepfle.de, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] Xen 4.2.0-rc2: make tools fails on Fedora 17,
 ipxe build problems with gcc 4.7
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello again,

So, to be able to build Xen 4.2.0-rc2 on Fedora 17 with gcc 4.7, I had to apply these three patches to ipxe:

http://lists.ipxe.org/pipermail/ipxe-devel/2012-March/001279.html
http://lists.ipxe.org/pipermail/ipxe-devel/2012-March/001280.html
http://permalink.gmane.org/gmane.network.ipxe.devel/1216


-- Pasi


On Tue, Aug 14, 2012 at 12:44:49AM +0300, Pasi Kärkkäinen wrote:
> On Tue, Aug 14, 2012 at 12:39:03AM +0300, Pasi Kärkkäinen wrote:
> > On Tue, Aug 14, 2012 at 12:35:58AM +0300, Pasi Kärkkäinen wrote:
> > > On Mon, Aug 13, 2012 at 11:39:50PM +0300, Pasi Kärkkäinen wrote:
> > > > Hello,
> > > >
> > > > I just grabbed http://bits.xensource.com/oss-xen/release/4.2.0-rc2/xen-4.2.0-rc2.tar.gz
> > > > and tried to build it on Fedora 17 x86_64 host (gcc 4.7.0):
> > > >
> > > > make tools:
> > > >
> > > > ..
> > > > make[5]: Entering directory `/root/xen/xen-4.2.0-rc2/tools/firmware'
> > > > make -C etherboot all
> > > > make[6]: Entering directory `/root/xen/xen-4.2.0-rc2/tools/firmware/etherboot'
> > > > make -C ipxe/src bin/rtl8139.rom
> > > > make[7]: Entering directory `/root/xen/xen-4.2.0-rc2/tools/firmware/etherboot/ipxe/src'
> > > >   [BUILD] bin/isa.o
> > > > drivers/bus/isa.c: In function 'isabus_probe':
> > > > drivers/bus/isa.c:112:18: error: array subscript is above array bounds [-Werror=array-bounds]
> > > > cc1: all warnings being treated as errors
> > > > make[7]: *** [bin/isa.o] Error 1
> > > > make[7]: Leaving directory `/root/xen/xen-4.2.0-rc2/tools/firmware/etherboot/ipxe/src'
> > > >
> > >
> > > Ok the patch from Olaf is here: http://lists.ipxe.org/pipermail/ipxe-devel/2012-March/001279.html
> > >
> > > Should we apply that patch to xen-unstable for Xen 4.2 ?
> > >
> >
> > And then there's the next build problem:
> >
> >   [BUILD] bin/myri10ge.o
> > drivers/net/myri10ge.c: In function 'myri10ge_command':
> > drivers/net/myri10ge.c:308:3: error: dereferencing type-punned pointer will break strict-aliasing rules [-Werror=strict-aliasing]
> > drivers/net/myri10ge.c:310:2: error: dereferencing type-punned pointer will break strict-aliasing rules [-Werror=strict-aliasing]
> > cc1: all warnings being treated as errors
> > make[7]: *** [bin/myri10ge.o] Error 1
> > make[7]: Leaving directory `/root/xen/xen-4.2.0-rc2/tools/firmware/etherboot/ipxe/src'
> >
> >
> > And patch from Olaf here: http://lists.ipxe.org/pipermail/ipxe-devel/2012-March/001280.html
> >
>
> And the third build problem:
>
>   [BUILD] bin/qib7322.o
> drivers/infiniband/qib7322.c: In function 'qib7322_probe':
> drivers/infiniband/qib7322.c:2141:28: error: 'old_value' may be used uninitialized in this function [-Werror=maybe-uninitialized]
> drivers/infiniband/qib7322.c:2123:11: note: 'old_value' was declared here
> cc1: all warnings being treated as errors
> make[7]: *** [bin/qib7322.o] Error 1
> make[7]: Leaving directory `/root/xen/xen-4.2.0-rc2/tools/firmware/etherboot/ipxe/src'
>
>
> And patch from Christian Hesse here: http://permalink.gmane.org/gmane.network.ipxe.devel/1216
>
>
> -- Pasi
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 22:15:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 22:15:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T12ua-00017s-9q; Mon, 13 Aug 2012 22:15:00 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1T12uY-00017n-IU
	for Xen-devel@lists.xensource.com; Mon, 13 Aug 2012 22:14:58 +0000
Received: from [85.158.143.35:42686] by server-1.bemta-4.messagelabs.com id
	03/88-07754-16C79205; Mon, 13 Aug 2012 22:14:57 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1344896095!13869952!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjg3Nzcy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4226 invoked from network); 13 Aug 2012 22:14:56 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-3.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 13 Aug 2012 22:14:56 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7DMEn53022116
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 13 Aug 2012 22:14:50 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7DMEmuZ014797
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 13 Aug 2012 22:14:49 GMT
Received: from abhmt105.oracle.com (abhmt105.oracle.com [141.146.116.57])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7DMEmNq021390; Mon, 13 Aug 2012 17:14:48 -0500
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 13 Aug 2012 15:14:48 -0700
Date: Mon, 13 Aug 2012 15:14:46 -0700
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: George Dunlap <george.dunlap@eu.citrix.com>
Message-ID: <20120813151446.22ae85b5@mantra.us.oracle.com>
In-Reply-To: <501A4E0C.1090509@eu.citrix.com>
References: <20120626181707.4203d336@mantra.us.oracle.com>
	<CAFLBxZagm=53bZ=KaWmd7XQkkAduD+znECJRsaJUqrGJDZgkQQ@mail.gmail.com>
	<20120801153439.3f81c923@mantra.us.oracle.com>
	<501A4E0C.1090509@eu.citrix.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.7.6 (GTK+ 2.18.9; x86_64-redhat-linux-gnu)
Mime-Version: 1.0
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Keir Fraser <keir.xen@gmail.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [HYBRID]: status update...
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2 Aug 2012 10:53:16 +0100
George Dunlap <george.dunlap@eu.citrix.com> wrote:

> On 01/08/12 23:34, Mukesh Rathor wrote:
> > On Wed, 1 Aug 2012 16:25:01 +0100
> > George Dunlap <George.Dunlap@eu.citrix.com> wrote:
> >
> >> I hope this isn't bikeshedding; but I don't like "Hybrid" as a name
> >> for this feature, mainly for "marketing" reasons.  I think it will
> >> probably give people the wrong idea about what the technology does.
> >> PV domains are one of Xen's really distinct advantages -- much
> >> simpler interface, lighter-weight (no qemu, legacy boot), &c &c.
> >> As I understand it, the mode you've been calling "hybrid" still
> >> has all of these advantages -- it just uses some of the HVM
> >> hardware extensions to make the interface even simpler / faster.
> >> I'm afraid "hybrid" may be seen as, "Even Xen has had to give up
> >> on PV."
> >>
> >> Can I suggest something like "PVH" instead?  That (at least to me)
> >> makes it clear that PV domains are still fully PV, but just use
> >> some HVM extensions.
> >>
> >> Thoughts?
> > Hi George,
> >
> > We gave it some thought when looking for a name. I figured pure PV
> > will be around for a while at least. So there's PV on one side and
> > HVM on the other, with hybrid somewhere in between.
> I understand the idea, but I think it's not very accurate.  I would
> call Stefano's "PVHVM" stuff hybrid -- it has the legacy boot and
> emulated devices, but uses the PV interfaces for event delivery
> extensively.  The mode you're working on is too far towards the "PV"
> side to be called "hybrid".  (And as we've seen, the term has already
> confused people, who interpreted it as basically PVHVM.)
> >
> > The issue with PV in HVM is that it limits PV to HVM container
> > only. The vision I had was that hybrid, a PV ops kernel that is
> > somewhere in between PV and HVM, could be configured with options.
> > So, one could run hybrid with, say, EPT off (although that won't be
> > supported anymore). But a generic name like hybrid allows it to stay
> > flexible in the future, instead of being confined to a specific mode.
> > I suppose a PV guest could just be started with various options.
> In general, I think "PV" should mean, "Doesn't use legacy boot,
> doesn't need emulated devices".  So I don't think "PVH" places any
> limitations on what particular subset of HVM hardware you use.  For
> things that specifically depend on knowing whether guest PTs are
> using mfns or gpfns, I think we should have checks for specific
> things -- for instance, "xen_mm_translate()" or something like that.
> 
> Also, don't confuse EPT (which is Intel-specific) with HAP (which is
> the generic term for either EPT or RVI); and don't confuse either of
> those with what is called "translate" mode.  Translate mode (where
> Xen translates the guest PTs from gpfns to mfns) can be done either
> with HAP or with shadow; and given the performance issues HAP has
> with certain workloads, we need to make sure that the HVM container
> mode can use both.
> 
> > As for name in code, 'pvh' was confusing, as PVHVM is now routinely
> > used to refer to HVM with PV drivers. 'hpv' for HVM/hybrid PV,
> > well, that's a certain virus ;). So I just used hybrid in the code
> > to refer to PV guest that runs in HVM container. I suppose I could
> > change the flag to pv_in_hvm or something.
> But is "pvhvm" ever actually used in the code?  If not, it's not a
> problem.
> 
> Actually, perhaps it would be better in any case, rather than having 
> checks for "pvh" mode, to have checks for specific things -- e.g., is 
> translation on or off (i.e., are we running in classic PV mode, or with
> HAP)?  I'm not sure about the other things you're doing with HVM, but it
> should be possible to come up with a descriptive name to use in the
> code for those options, rather than limiting to a specific mode.
> 
> In ancient days, there used to be options, both within Xen and within 
> the classic Xen kernel, to run a PV guest in fully-translated mode 
> (i.e., the guest PTs contained gpfns, not mfns), and
> "shadow_translate" was a mode used across guest types (both PV and
> HVM) to determine whether this was the case or not.
> 
>   -George



Ok, I changed all code references from xen_hybrid_domain to xen_pvh_domain
in Linux. I'm changing the Xen code too. So it's PVH now.

thanks,
Mukesh


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 22:15:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 22:15:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T12ua-00017s-9q; Mon, 13 Aug 2012 22:15:00 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1T12uY-00017n-IU
	for Xen-devel@lists.xensource.com; Mon, 13 Aug 2012 22:14:58 +0000
Received: from [85.158.143.35:42686] by server-1.bemta-4.messagelabs.com id
	03/88-07754-16C79205; Mon, 13 Aug 2012 22:14:57 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1344896095!13869952!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjg3Nzcy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4226 invoked from network); 13 Aug 2012 22:14:56 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-3.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 13 Aug 2012 22:14:56 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7DMEn53022116
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 13 Aug 2012 22:14:50 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7DMEmuZ014797
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 13 Aug 2012 22:14:49 GMT
Received: from abhmt105.oracle.com (abhmt105.oracle.com [141.146.116.57])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7DMEmNq021390; Mon, 13 Aug 2012 17:14:48 -0500
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 13 Aug 2012 15:14:48 -0700
Date: Mon, 13 Aug 2012 15:14:46 -0700
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: George Dunlap <george.dunlap@eu.citrix.com>
Message-ID: <20120813151446.22ae85b5@mantra.us.oracle.com>
In-Reply-To: <501A4E0C.1090509@eu.citrix.com>
References: <20120626181707.4203d336@mantra.us.oracle.com>
	<CAFLBxZagm=53bZ=KaWmd7XQkkAduD+znECJRsaJUqrGJDZgkQQ@mail.gmail.com>
	<20120801153439.3f81c923@mantra.us.oracle.com>
	<501A4E0C.1090509@eu.citrix.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.7.6 (GTK+ 2.18.9; x86_64-redhat-linux-gnu)
Mime-Version: 1.0
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Keir Fraser <keir.xen@gmail.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [HYBRID]: status update...
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2 Aug 2012 10:53:16 +0100
George Dunlap <george.dunlap@eu.citrix.com> wrote:

> On 01/08/12 23:34, Mukesh Rathor wrote:
> > On Wed, 1 Aug 2012 16:25:01 +0100
> > George Dunlap <George.Dunlap@eu.citrix.com> wrote:
> >
> >> I hope this isn't bikeshedding; but I don't like "Hybrid" as a name
> >> for this feature, mainly for "marketing" reasons.  I think it will
> >> probably give people the wrong idea about what the technology does.
> >> PV domains is one of Xen's really distinct advantages -- much
> >> simpler interface, lighter-weight (no qemu, legacy boot), &c &c.
> >> As I understand it, the mode you've been calling "hybrid" still
> >> has all of these advantages -- it just uses some of the HVM
> >> hardware extensions to make the interface even simpler / faster.
> >> I'm afraid "hybrid" may be seen as, "Even Xen has had to give up
> >> on PV."
> >>
> >> Can I suggest something like "PVH" instead?  That (at least to me)
> >> makes it clear that PV domains are still fully PV, but just use
> >> some HVM extensions.
> >>
> >> Thoughts?
> > Hi George,
> >
> > We gave some thought looking for name. I figured pure PV will be
> > around for a while at least. So there's PV on one side and HVM on
> > the other, hybrid somewhere in between.
> I understand the idea, but I think it's not very accurate.  I would
> call Stefano's "PVHVM" stuff hybrid -- it has the legacy boot and
> emulated devices, but uses the PV interfaces for event delivery
> extensively.  The mode you're working on is too far towards the "PV"
> side to be called "hybrid".  (And as we've seen, the term has already
> confused people, who interpreted it as basically PVHVM.)
> >
> > The issue with PV in HVM is that it limits PV to HVM container
> > only. The vision I had was that hybrid, a PV ops kernel that is
> > somewhere in between PV and HVM, could be configured with options.
> > So, one could run hybrid with say EPT off (altho, won't be
> > supported this anymore). But generic name like hybrid allows it in
> > future to be flexible, instead of confined to a specific. I suppose
> > a PV guest could just be started with various options.
> In general, I think "PV" should mean, "Doesn't use legacy boot,
> doesn't need emulated devices".  So I don't think "PVH" places any
> limitations on what particular subset of HVM hardware you use.  For
> things that specifically depend on knowing whether guest PTs are
> using mfns or gpfns, I think we should have checks for specific
> things -- for instance, "xen_mm_translate()" or something like that.
> 
> Also, don't confuse EPT (which is Intel-specific) with HAP (which is
> the generic term for either EPT or RVI); and don't confuse either of
> those with what is called "translate" mode.  Translate mode (where
> Xen translates the guest PTs from gpfns to mfns) can be done either
> with HAP or with shadow; and given the performance issues HAP has
> with certain workloads, we need to make sure that the HVM container
> mode can use both.
> 
> > As for the name in code, 'pvh' was confusing, as PVHVM is now
> > routinely used to refer to HVM with PV drivers. 'hpv' for
> > HVM/hybrid PV, well, that's a certain virus ;). So I just used
> > hybrid in the code to refer to a PV guest that runs in an HVM
> > container. I suppose I could change the flag to pv_in_hvm or
> > something.
> But is "pvhvm" ever actually used in the code?  If not, it's not a
> problem.
> 
> Actually, perhaps it would be better in any case, rather than having 
> checks for "pvh" mode, to have checks for specific things -- e.g., is 
> translation on or off (i.e., are we running in classic PV mode, or with 
> HAP)?  I'm not sure what other things you're doing with HVM, but it 
> should be possible to come up with a descriptive name to use in the
> code for those options, rather than limiting them to a specific mode.
> 
> In ancient days, there used to be options, both within Xen and within 
> the classic Xen kernel, to run a PV guest in fully-translated mode 
> (i.e., the guest PTs contained gpfns, not mfns), and
> "shadow_translate" was a mode used across guest types (both PV and
> HVM) to determine whether this was the case or not.
> 
>   -George



Ok, I changed all code references from xen_hybrid_domain to xen_pvh_domain
in Linux. Changing the Xen code too. So it's PVH now.

thanks,
Mukesh
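George's suggestion above, gating code on the specific property each call
site depends on (e.g. whether guest page tables are translated) rather than
on a single "pvh" mode flag, can be sketched as follows. This is an
illustrative Python sketch, not actual Xen or Linux code; all names here
(`GuestFeatures`, `pte_frame_kind`) are hypothetical.

```python
# Illustrative sketch (hypothetical names, not real Xen/Linux code):
# describe each guest by the capabilities it has, and let call sites
# test the one capability they actually care about.

class GuestFeatures:
    def __init__(self, translated_pagetables, legacy_boot, emulated_devices):
        # True when Xen translates guest page tables (entries hold gpfns,
        # not mfns), as in translate mode under HAP or shadow.
        self.translated_pagetables = translated_pagetables
        self.legacy_boot = legacy_boot
        self.emulated_devices = emulated_devices

CLASSIC_PV = GuestFeatures(False, False, False)
PVH        = GuestFeatures(True,  False, False)  # PV interface, HVM container
HVM        = GuestFeatures(True,  True,  True)

def pte_frame_kind(features):
    """What a page-table entry holds for this kind of guest."""
    return "gpfn" if features.translated_pagetables else "mfn"
```

The point of the design is visible in the data: PVH keeps the PV
advantages (no legacy boot, no emulated devices) while sharing the
translated-pagetable property with HVM, so a mode flag would conflate
independent questions.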


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 13 22:19:34 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 22:19:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T12yi-0001F5-Vq; Mon, 13 Aug 2012 22:19:16 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <m.a.young@durham.ac.uk>) id 1T12yg-0001Eu-Or
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 22:19:14 +0000
X-Env-Sender: m.a.young@durham.ac.uk
X-Msg-Ref: server-16.tower-27.messagelabs.com!1344896348!8928401!1
X-Originating-IP: [129.234.248.2]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTI5LjIzNC4yNDguMiA9PiA4OTMzNw==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15393 invoked from network); 13 Aug 2012 22:19:08 -0000
Received: from hermes2.dur.ac.uk (HELO hermes2.dur.ac.uk) (129.234.248.2)
	by server-16.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 13 Aug 2012 22:19:08 -0000
Received: from smtphost1.dur.ac.uk (smtphost1.dur.ac.uk [129.234.252.1])
	by hermes2.dur.ac.uk (8.13.8/8.13.8) with ESMTP id q7DMIepI000657;
	Mon, 13 Aug 2012 23:18:45 +0100
Received: from vega-c.dur.ac.uk (vega-c.dur.ac.uk [129.234.250.135])
	by smtphost1.dur.ac.uk (8.13.8/8.13.7) with ESMTP id q7DMIJ6I020166
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 13 Aug 2012 23:18:19 +0100
Received: from vega-c.dur.ac.uk (localhost [127.0.0.1])
	by vega-c.dur.ac.uk (8.14.3/8.11.1) with ESMTP id q7DMIJGf013662;
	Mon, 13 Aug 2012 23:18:19 +0100
Received: from localhost (dcl0may@localhost)
	by vega-c.dur.ac.uk (8.14.3/8.14.3/Submit) with ESMTP id q7DMIJFb013658;
	Mon, 13 Aug 2012 23:18:19 +0100
Date: Mon, 13 Aug 2012 23:18:16 +0100 (BST)
From: M A Young <m.a.young@durham.ac.uk>
To: =?ISO-8859-15?Q?Pasi_K=E4rkk=E4inen?= <pasik@iki.fi>
In-Reply-To: <20120813213558.GR19851@reaktio.net>
Message-ID: <alpine.DEB.2.00.1208132313000.7786@vega-c.dur.ac.uk>
References: <20120813203950.GQ19851@reaktio.net>
	<20120813213558.GR19851@reaktio.net>
User-Agent: Alpine 2.00 (DEB 1167 2008-08-23)
MIME-Version: 1.0
Content-Type: MULTIPART/MIXED; BOUNDARY="8323329-994482624-1344896299=:7786"
X-DurhamAcUk-MailScanner: Found to be clean, Found to be clean
X-DurhamAcUk-MailScanner-ID: q7DMIepI000657
Cc: olaf@aepfle.de, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen 4.2.0-rc2: make tools fails on Fedora 17,
 ipxe isa.c problem
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-994482624-1344896299=:7786
Content-Type: TEXT/PLAIN; charset=iso-8859-1; format=flowed
Content-Transfer-Encoding: 8BIT

On Mon, 13 Aug 2012, Pasi Kärkkäinen wrote:

> On Mon, Aug 13, 2012 at 11:39:50PM +0300, Pasi Kärkkäinen wrote:
>> Hello,
>>
>> I just grabbed http://bits.xensource.com/oss-xen/release/4.2.0-rc2/xen-4.2.0-rc2.tar.gz
>> and tried to build it on Fedora 17 x86_64 host (gcc 4.7.0):
>>
>> make tools:
>>
>> ..
>> make[5]: Entering directory `/root/xen/xen-4.2.0-rc2/tools/firmware'
>> make -C etherboot all
>> make[6]: Entering directory `/root/xen/xen-4.2.0-rc2/tools/firmware/etherboot'
>> make -C ipxe/src bin/rtl8139.rom
>> make[7]: Entering directory `/root/xen/xen-4.2.0-rc2/tools/firmware/etherboot/ipxe/src'
>>   [BUILD] bin/isa.o
>> drivers/bus/isa.c: In function 'isabus_probe':
>> drivers/bus/isa.c:112:18: error: array subscript is above array bounds [-Werror=array-bounds]
>> cc1: all warnings being treated as errors
>> make[7]: *** [bin/isa.o] Error 1
>> make[7]: Leaving directory `/root/xen/xen-4.2.0-rc2/tools/firmware/etherboot/ipxe/src'
>>
>
> Ok the patch from Olaf is here: http://lists.ipxe.org/pipermail/ipxe-devel/2012-March/001279.html
>
> Should we apply that patch to xen-unstable for Xen 4.2 ?

It may simply be that Xen needs to check out the ipxe tree at a later 
point. For Fedora an alternative approach is not to build ipxe at all, and 
to take the ROMs instead from the ipxe-roms-qemu package.

 	Michael Young
--8323329-994482624-1344896299=:7786
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--8323329-994482624-1344896299=:7786--


From xen-devel-bounces@lists.xen.org Mon Aug 13 23:30:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Aug 2012 23:30:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T145Y-0001sy-Gu; Mon, 13 Aug 2012 23:30:24 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <james.harper@bendigoit.com.au>) id 1T145W-0001st-Pa
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 23:30:23 +0000
Received: from [85.158.139.83:8127] by server-1.bemta-5.messagelabs.com id
	1F/04-09980-E0E89205; Mon, 13 Aug 2012 23:30:22 +0000
X-Env-Sender: james.harper@bendigoit.com.au
X-Msg-Ref: server-14.tower-182.messagelabs.com!1344900618!23676779!1
X-Originating-IP: [203.16.224.4]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7172 invoked from network); 13 Aug 2012 23:30:21 -0000
Received: from smtp1.bendigoit.com.au (HELO smtp1.bendigoit.com.au)
	(203.16.224.4)
	by server-14.tower-182.messagelabs.com with AES256-SHA encrypted SMTP;
	13 Aug 2012 23:30:21 -0000
Received: from mail.bendigoit.com.au ([203.16.207.99])
	by smtp1.bendigoit.com.au with esmtp (Exim 4.69)
	(envelope-from <james.harper@bendigoit.com.au>)
	id 1T145K-0006oE-UH; Tue, 14 Aug 2012 09:30:10 +1000
Received: from BITCOM1.int.sbss.com.au ([192.168.200.237]) by
	mail.bendigoit.com.au with Microsoft SMTPSVC(6.0.3790.4675); 
	Tue, 14 Aug 2012 09:30:11 +1000
Received: from BITCOM1.int.sbss.com.au ([fe80::a5ca:4fd3:14f:ad5d]) by
	BITCOM1.int.sbss.com.au ([fe80::a5ca:4fd3:14f:ad5d%12]) with mapi id
	14.01.0355.002; Tue, 14 Aug 2012 09:30:10 +1000
From: James Harper <james.harper@bendigoit.com.au>
To: Kent Overstreet <koverstreet@google.com>, Joseph Glanville
	<joseph.glanville@orionvm.com.au>
Thread-Topic: [Xen-devel] blkback and bcache
Thread-Index: Ac15GvPQ4p9LsRz2TPWwWFq0dSWp1v//btIA//9NMFCAAOpDAIAAsyWA//84rcA=
Date: Mon, 13 Aug 2012 23:30:09 +0000
Message-ID: <6035A0D088A63A46850C3988ED045A4B299F80E1@BITCOM1.int.sbss.com.au>
References: <6035A0D088A63A46850C3988ED045A4B299F67FA@BITCOM1.int.sbss.com.au>
	<CAOzFzEhna3CaBE28aHVX_ZoNLDEa6AhArHPcB9240Ni4jh9PYA@mail.gmail.com>
	<6035A0D088A63A46850C3988ED045A4B299F6A9B@BITCOM1.int.sbss.com.au>
	<CAOzFzEhF+Pb89PNoibBc-9_Db6OUiyG5R8mKkAPi5pns4+CsAg@mail.gmail.com>
	<20120813213455.GC6887@google.com>
In-Reply-To: <20120813213455.GC6887@google.com>
Accept-Language: en-AU, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [2001:388:e000:712:adff:8f5:49d0:6c03]
x-tm-as-product-ver: SMEX-10.2.0.1135-7.000.1014-19112.001
x-tm-as-result: No--29.933700-0.000000-31
x-tm-as-user-approved-sender: Yes
x-tm-as-user-blocked-sender: No
MIME-Version: 1.0
X-OriginalArrivalTime: 13 Aug 2012 23:30:11.0010 (UTC)
	FILETIME=[9504CA20:01CD79AB]
X-Really-From-Bendigo-IT: magichashvalue
Cc: "linux-bcache@vger.kernel.org" <linux-bcache@vger.kernel.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] blkback and bcache
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> 
> Just mentioned this in the other thread, but if this is due to the 4k blocksize -
> that's easy to fix: just format with 512 byte blocksize
> 
> make-bcache --block 512
> 
> Maybe I should change the default.

I suggest making the default 512, but also printing a warning if the user didn't explicitly set it, e.g. "Block size not set - defaulting to 512 bytes".

James
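The mismatch behind this thread can be illustrated numerically: blkback
issues guest I/O in 512-byte sectors, while a bcache device formatted with
a 4k block size can only satisfy requests aligned to that block size. A
minimal sketch (the numbers and the `fits_block_size` helper are
illustrative, not bcache internals):

```python
SECTOR = 512  # blkback request granularity, in bytes

def fits_block_size(start_sector, nsectors, block_size):
    """True if a sector-granular request is aligned to the device block size."""
    start_bytes = start_sector * SECTOR
    length_bytes = nsectors * SECTOR
    return start_bytes % block_size == 0 and length_bytes % block_size == 0
```

With a 4k block size, any request that starts or ends off a 4k boundary
(e.g. a single 512-byte sector) cannot be served directly; formatting with
`make-bcache --block 512` makes every sector-granular request fit.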

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 00:04:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 00:04:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T14bl-0002ie-Cj; Tue, 14 Aug 2012 00:03:41 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pmoody@google.com>) id 1T14bj-0002iZ-Kk
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 00:03:39 +0000
Received: from [85.158.138.51:29123] by server-7.bemta-3.messagelabs.com id
	D9/C3-01906-AD599205; Tue, 14 Aug 2012 00:03:38 +0000
X-Env-Sender: pmoody@google.com
X-Msg-Ref: server-9.tower-174.messagelabs.com!1344902617!28146421!1
X-Originating-IP: [209.85.215.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25443 invoked from network); 14 Aug 2012 00:03:37 -0000
Received: from mail-ey0-f173.google.com (HELO mail-ey0-f173.google.com)
	(209.85.215.173)
	by server-9.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 00:03:37 -0000
Received: by eaac13 with SMTP id c13so1225301eaa.32
	for <xen-devel@lists.xen.org>; Mon, 13 Aug 2012 17:03:37 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20120113;
	h=mime-version:from:date:message-id:subject:to:content-type
	:x-system-of-record;
	bh=GzJJKK+HC1xnO485thhUq9tFOVI6n90ANB3GVF3av20=;
	b=fUB4RLehnBmFwKTyVYLZZyUcyM9G5MR7opHBxos2iwEZij7VaQYxZXFyrKwiBTgtCF
	HB0gWq27a9LOT2/2xdVvxTvkBGnZRSfx9FW1IGATFbdonQhYq521rr7sVyvzHnUKV6yk
	AnvDSvAGF0SkyuTWBqGLbLJfS5+S0x2Uijgy7OCW2Hxd+vjCA+6RV3Cumiv/jLyc0hWw
	yByAKX3uFFb/ZtWHX71cKhfjeKOF9E3aN7339yDIt66JJ0Dri/o1HlJNikpeCA3I3YoL
	e+/xfFjiSzDHjZk4J0NHIERZymMYazjkR8kGQttJZBpUaqF1AImHUJ04d5lO4oH8/1a5
	HkAQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=mime-version:from:date:message-id:subject:to:content-type
	:x-system-of-record:x-gm-message-state;
	bh=GzJJKK+HC1xnO485thhUq9tFOVI6n90ANB3GVF3av20=;
	b=cf2ySrFuSEc7EXj00991PH/5uUjrGQunhNscZb9Y1WtVxuQeVZYY8MRf7WRRGohmIo
	ponJZ/loRaMOU7+dMHLFQlIPh5vP5JZzPM+7zWpDRYXfdvEi8xJ3fdii/YADN2Ki9gHC
	t1h7stdAaX7trvvLyYeco6H96GiL4VeN52l7yldacACS/EFd3W9+KgGASxEoKvJnPVoK
	MPFwPTGYp/f1+Sg79nGzF7fUpbojd1zk6Df3b2n2Pi6Jow2IzY51Qv06djpSwwV3ptr1
	zsSbqJtYZKcFNnGVUN9+fMDQ6vtCaKlxGmKgwHNml3xRXBgjQm/x/MeqG7Hlnu+BZhrW
	wFHw==
Received: by 10.14.218.5 with SMTP id j5mr16642474eep.16.1344902617370;
	Mon, 13 Aug 2012 17:03:37 -0700 (PDT)
Received: by 10.14.218.5 with SMTP id j5mr16642464eep.16.1344902617234; Mon,
	13 Aug 2012 17:03:37 -0700 (PDT)
MIME-Version: 1.0
Received: by 10.14.22.3 with HTTP; Mon, 13 Aug 2012 17:03:06 -0700 (PDT)
From: Peter Moody <pmoody@google.com>
Date: Mon, 13 Aug 2012 17:03:06 -0700
Message-ID: <CALnj_=4U+6n2-KSaLgbS5OpP3mNKwPredFYjG-0dEUNQpSSwxg@mail.gmail.com>
To: xen-devel@lists.xen.org
Content-Type: multipart/mixed; boundary=047d7b5dbe7ec1fe1104c72e8a75
X-System-Of-Record: true
X-Gm-Message-State: ALoCoQnBuvdh8iV1bqAe9mzWE5Brw+Qe9f0XZR0fdEaEBSxCFnHXGCgepU8a2a1IcZwzzjyxZ1g7zn0QzBBxvPLtcRWsoY9K6OJhh3NkWX+wj/NuiKY422GxkEaT+/+2MXKZjlu8+YMp5tVLbjng42zCeTbkS/q3iQ4MFfRz2NtzeJzaEcFRKVioTNl6Q+nR3ZEFqJ35cgIG
Subject: [Xen-devel] 100% reliable Oops on xen 4.0.1
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--047d7b5dbe7ec1fe1104c72e8a75
Content-Type: text/plain; charset=ISO-8859-1

This seems to be some combination of Xen and the audit subsystem, but
the attached program crashes my machine 100% of the time.

steps to reproduce the crash:

 *  1) compile with gcc -m32
 *  2) start auditd, install any rule (I've only tested syscall
auditing, but any syscall seems to work).
 *     /etc/init.d/auditd start ; auditctl -D ; auditctl -a
exit,always -F arch=64 -S chmod
 *  3) run'n wait (this only loops twice for me before dying)
 *     ./a.out
 *  4) bask in instantaneous kernel oops.
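The attached crasher.c (base64-encoded below) follows the pattern the steps
describe: create a directory, write a file into it, unlink the file, remove
the directory, and repeat forever. A bounded Python sketch of the same
churn loop (paths and iteration count are illustrative; the original is a C
program built with gcc -m32 that loops indefinitely):

```python
import os
import tempfile

def churn(iterations):
    """Repeat the create/write/unlink/rmdir cycle from the repro steps."""
    completed = 0
    base = tempfile.mkdtemp()          # stand-in for /usr/local/tmp/crasher
    for _ in range(iterations):
        kill_dir = os.path.join(base, "kill_dir")
        os.mkdir(kill_dir)
        path = os.path.join(kill_dir, "file")
        with open(path, "w") as f:
            f.write("nothing to see here")
        os.unlink(path)
        os.rmdir(kill_dir)
        completed += 1
    os.rmdir(base)
    return completed
```

The workload is pure inode/dentry churn, which is consistent with the
audit_syscall_exit path in the stack trace firing on every pass.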

here's xm info from dom0

[xen2.atl] root@gntb1:~# xm info
host                   : gntb1.atl.corp.google.com
release                : 3.2.13-ganeti-rx6-xen0
version                : #1 SMP Thu Jun 7 12:59:40 CEST 2012
machine                : x86_64
nr_cpus                : 12
nr_nodes               : 2
cores_per_socket       : 6
threads_per_core       : 1
cpu_mhz                : 2660
hw_caps                :
bfebfbff:2c100800:00000000:00001f40:029ee3ff:00000000:00000001:00000000
virt_caps              : hvm
total_memory           : 32755
free_memory            : 22665
node_to_cpu            : node0:0,2,4,6,8,10
                         node1:1,3,5,7,9,11
node_to_memory         : node0:13083
                         node1:9582
node_to_dma32_mem      : node0:0
                         node1:3235
max_node_id            : 1
xen_major              : 4
xen_minor              : 0
xen_extra              : .1
xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32
hvm-3.0-x86_32p hvm-3.0-x86_64
xen_scheduler          : credit
xen_pagesize           : 4096
platform_params        : virt_start=0xffff800000000000
xen_changeset          : unavailable
xen_commandline        : placeholder dom0_mem=1024M loglvl=all
com1=115200,8n1 console=com1 iommu=0
cc_compiler            : gcc version 4.4.3 (Ubuntu 4.4.3-4ubuntu5)
cc_compile_by          : pmacedo
cc_compile_domain      : google.com
cc_compile_date        : Wed Mar 16 15:24:06 UTC 2011
xend_config_format     : 4

I'm not sure what you need from the domU. It's running 2.6.38.8 (but
I've seen this bug all the way up to 3.5.0-rc7, the latest I've
tested). It's a fairly beefy setup, 32G memory and 6 cpus.

I suspect xen as opposed to auditd because:

 a) this only happens on our xen machines (though not all of them)
 b) one of my stack traces started with

[172577.560441]  [<ffffffff810065ad>] ? xen_force_evtchn_callback+0xd/0x10

Anyone have any idea what's going on?

Cheers,
peter

-- 
Peter Moody      Google    1.650.253.7306
Security Engineer  pgp:0xC3410038

--047d7b5dbe7ec1fe1104c72e8a75
Content-Type: text/x-csrc; charset=US-ASCII; name="crasher.c"
Content-Disposition: attachment; filename="crasher.c"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_h5u806430

LyoKICogc3RlcHM6CiAqICAxKSBjb21waWxlIHdpdGggZ2NjIC1tMzIKICogIDIpIHN0YXJ0IGF1
ZGl0ZCwgaW5zdGFsbCBhbnkgcnVsZSAoSSd2ZSBvbmx5IHRlc3RlZCBzeXNjYWxsIGF1ZGl0aW5n
LCBidXQgYW55IHN5c2NhbGwgc2VlbXMgdG8gd29yaykuCiAqICAgICAvZXRjL2luaXQuZC9hdWRp
dGQgc3RhcnQgOyBhdWRpdGN0bCAtRCA7IGF1ZGl0Y3RsIC1hIGV4aXQsYWx3YXlzIC1GIGFyY2g9
NjQgLVMgY2htb2QKICogIDMpIHJ1biduIHdhaXQgKHRoaXMgb25seSBsb29wcyB0d2ljZSBmb3Ig
bWUgYmVmb3JlIGR5aW5nKQogKiAgICAgLi9hLm91dAogKiAgNCkgYmFzayBpbiBpbnN0YW50YW5l
b3VzIGtlcm5lbCBvb3BzLgogWyAgNTcxLjI4Mjc3N10gLS0tLS0tLS0tLS0tWyBjdXQgaGVyZSBd
LS0tLS0tLS0tLS0tCiBbICA1NzEuMjgyNzg2XSBrZXJuZWwgQlVHIGF0IGZzL2J1ZmZlci5jOjEy
NjMhCiBbICA1NzEuMjgyNzkwXSBpbnZhbGlkIG9wY29kZTogMDAwMCBbIzFdIFNNUAogWyAgNTcx
LjI4Mjc5NV0gbGFzdCBzeXNmcyBmaWxlOiAvc3lzL2RldmljZXMvc3lzdGVtL2NwdS9zY2hlZF9t
Y19wb3dlcl9zYXZpbmdzCiBbICA1NzEuMjgyNzk4XSBDUFUgMAogWyAgNTcxLjI4MjgwMl0gUGlk
OiA3NDU3LCBjb21tOiBhLm91dCBOb3QgdGFpbnRlZCAyLjYuMzguOC1nZzg2OC1nYW5ldGl4ZW51
ICMxCiBbICA1NzEuMjgyODA4XSBSSVA6IGUwMzA6WzxmZmZmZmZmZjgxMTUzODUzPl0gIFs8ZmZm
ZmZmZmY4MTE1Mzg1Mz5dIF9fZmluZF9nZXRfYmxvY2srMHgxZjMvMHgyMDAKIFsgIDU3MS4yODI4
MTldIFJTUDogZTAyYjpmZmZmODgwNzliN2RkYzc4ICBFRkxBR1M6IDAwMDEwMDQ2CiBbICA1NzEu
MjgyODIyXSBSQVg6IGZmZmY4ODA3YmMyOTAwMDAgUkJYOiBmZmZmODgwNmQ5YmI5YTk4IFJDWDog
MDAwMDAwMDAwMjNkYzE3YwogWyAgNTcxLjI4MjgyNl0gUkRYOiAwMDAwMDAwMDAwMDAxMDAwIFJT
STogMDAwMDAwMDAwMjNkYzE3YyBSREk6IGZmZmY4ODA3ZmVjMjlhMDAKIFsgIDU3MS4yODI4MzBd
IFJCUDogZmZmZjg4MDc5YjdkZGNkOCBSMDg6IDAwMDAwMDAwMDAwMDAwMDEgUjA5OiBmZmZmODgw
NmQ5YmI5OWMwCiBbICA1NzEuMjgyODM0XSBSMTA6IDAwMDAwMDAwMDAwMDAwMDAgUjExOiAwMDAw
MDAwMDAwMDAwMDAwIFIxMjogZmZmZjg4MDZkOWJiOTljNAogWyAgNTcxLjI4MjgzOV0gUjEzOiBm
ZmZmODgwNmQ5YmI5OWYwIFIxNDogZmZmZjg4MDdmZWZmOTA2MCBSMTU6IDAwMDAwMDAwMDIzZGMx
N2MKIFsgIDU3MS4yODI4NDVdIEZTOiAgMDAwMDdmOGY2YTc2YTdjMCgwMDAwKSBHUzpmZmZmODgw
N2ZmZjI2MDAwKDAwNjMpIGtubEdTOjAwMDAwMDAwMDAwMDAwMDAKIFsgIDU3MS4yODI4NDldIENT
OiAgZTAzMyBEUzogMDAyYiBFUzogMDAyYiBDUjA6IDAwMDAwMDAwODAwNTAwM2IKIFsgIDU3MS4y
ODI4NTNdIENSMjogMDAwMDAwMDBmNzZjNjk3MCBDUjM6IDAwMDAwMDA3YTI1MGIwMDAgQ1I0OiAw
MDAwMDAwMDAwMDAyNjYwCiBbICA1NzEuMjgyODU3XSBEUjA6IDAwMDAwMDAwMDAwMDAwMDAgRFIx
OiAwMDAwMDAwMDAwMDAwMDAwIERSMjogMDAwMDAwMDAwMDAwMDAwMAogWyAgNTcxLjI4Mjg2MV0g
RFIzOiAwMDAwMDAwMDAwMDAwMDAwIERSNjogMDAwMDAwMDBmZmZmMGZmMCBEUjc6IDAwMDAwMDAw
MDAwMDA0MDAKIFsgIDU3MS4yODI4NjZdIFByb2Nlc3MgYS5vdXQgKHBpZDogNzQ1NywgdGhyZWFk
aW5mbyBmZmZmODgwNzliN2RjMDAwLCB0YXNrIGZmZmY4ODA3Nzg2ODQzZTApCiBbICA1NzEuMjgy
ODcwXSBTdGFjazoKIFsgIDU3MS4yODI4NzJdICBmZmZmODgwNzliN2RkYzk4IGZmZmZmZmZmODE2
NTRjZDEgZmZmZjg4MDc5YjdkZGNhOCBmZmZmODgwNmQ5YmJhNDQwCiBbICA1NzEuMjgyODc5XSAg
ZmZmZjg4MDc5YjdkZGQwOCBmZmZmZmZmZjgxMWM5Mjk0IGZmZmY4ODA3ZmZmZmZmYzMgMDAwMDAw
MDAwMDAwMDAxNAogWyAgNTcxLjI4Mjg4N10gIGZmZmY4ODA2ZDliYjlhOTggZmZmZjg4MDZkOWJi
OTljNCBmZmZmODgwNmQ5YmI5OWYwIGZmZmY4ODA3ZmVmZjkwNjAKIFsgIDU3MS4yODI4OTVdIENh
bGwgVHJhY2U6CiBbICA1NzEuMjgyOTAxXSAgWzxmZmZmZmZmZjgxNjU0Y2QxPl0gPyBkb3duX3Jl
YWQrMHgxMS8weDMwCiBbICA1NzEuMjgyOTA3XSAgWzxmZmZmZmZmZjgxMWM5Mjk0Pl0gPyBleHQz
X3hhdHRyX2dldCsweGY0LzB4MmIwCiBbICA1NzEuMjgyOTEzXSAgWzxmZmZmZmZmZjgxMWJhZjg4
Pl0gZXh0M19jbGVhcl9ibG9ja3MrMHgxMjgvMHgxOTAKIFsgIDU3MS4yODI5MThdICBbPGZmZmZm
ZmZmODExYmIxMDQ+XSBleHQzX2ZyZWVfZGF0YSsweDExNC8weDE2MAogWyAgNTcxLjI4MjkyM10g
IFs8ZmZmZmZmZmY4MTFiYmMwYT5dIGV4dDNfdHJ1bmNhdGUrMHg4N2EvMHg5NTAKIFsgIDU3MS4y
ODI5MjhdICBbPGZmZmZmZmZmODEyMTMzZjU+XSA/IGpvdXJuYWxfc3RhcnQrMHhiNS8weDEwMAog
WyAgNTcxLjI4MjkzM10gIFs8ZmZmZmZmZmY4MTFiYzg0MD5dIGV4dDNfZXZpY3RfaW5vZGUrMHgx
ODAvMHgxYTAKIFsgIDU3MS4yODI5MzhdICBbPGZmZmZmZmZmODExNDA2NWY+XSBldmljdCsweDFm
LzB4YjAKIFsgIDU3MS4yODI5NDVdICBbPGZmZmZmZmZmODEwMDZkNTI+XSA/IGNoZWNrX2V2ZW50
cysweDEyLzB4MjAKIFsgIDU3MS4yODI5NDldICBbPGZmZmZmZmZmODExNDBjMTQ+XSBpcHV0KzB4
MWE0LzB4MjkwCiBbICA1NzEuMjgyOTU1XSAgWzxmZmZmZmZmZjgxMTNlZDA1Pl0gZHB1dCsweDI2
NS8weDMxMAogWyAgNTcxLjI4Mjk1OV0gIFs8ZmZmZmZmZmY4MTEzMjQzNT5dIHBhdGhfcHV0KzB4
MTUvMHgzMAogWyAgNTcxLjI4Mjk2NV0gIFs8ZmZmZmZmZmY4MTBhNWQzMT5dIGF1ZGl0X3N5c2Nh
bGxfZXhpdCsweDE3MS8weDI2MAogWyAgNTcxLjI4Mjk3MV0gIFs8ZmZmZmZmZmY4MTAzZWQ5YT5d
IHN5c2V4aXRfYXVkaXQrMHgyMS8weDVmCiBbICA1NzEuMjgyOTc0XSBDb2RlOiA4MiAwMCAwNSAw
MSAwMCA4NSBjMCA3NSBkZSA2NSA0OCA4OSAxYyAyNSAwMCAwNSAwMSAwMCBlOSA4NyBmZSBmZiBm
ZiA0OCA4OSBkZiBlOCBlOSBmYyBmZiBmZiA0YyA4OSBmNyBlOSAwMiBmZiBmZiBmZiAwZiAwYiBl
YiBmZSA8MGY+IDBiIGViIGZlIDBmIDBiIGViIGZlIDBmIDFmIDQ0IDAwIDAwIDU1IDQ4IDg5IGU1
IDQxIDU3IDQ5IDg5CiBbICA1NzEuMjgzMDI3XSBSSVAgIFs8ZmZmZmZmZmY4MTE1Mzg1Mz5dIF9f
ZmluZF9nZXRfYmxvY2srMHgxZjMvMHgyMDAKIFsgIDU3MS4yODMwMzNdICBSU1AgPGZmZmY4ODA3
OWI3ZGRjNzg+CiBbICA1NzEuMjgzMDM2XSAtLS1bIGVuZCB0cmFjZSA1OTc1ZmZlMjA4MDhlY2Qy
IF0tLS0KICoKICovCgojaW5jbHVkZSA8c3RkaW8uaD4KI2luY2x1ZGUgPHN5cy9zdGF0Lmg+CiNp
bmNsdWRlIDxzeXMvdHlwZXMuaD4KI2luY2x1ZGUgPHVuaXN0ZC5oPgoKI2RlZmluZSBLSUxMRElS
ICIvdXNyL2xvY2FsL3RtcC9jcmFzaGVyL2tpbGxfZGlyIgoKaW50IG1haW4odm9pZCkgewogIEZJ
TEUgKmY7CiAgY2hhciBmdWxscGF0aFs1MTJdOwogIGludCBpOwoKICB3aGlsZSAoMSkgewogICAg
ZnByaW50ZihzdGRlcnIsICIlZCAiLCBpKyspOwogICAgbWtkaXIoS0lMTERJUiwgMDc3Nyk7CiAg
ICBjaGRpcihLSUxMRElSKTsKICAgIHNwcmludGYoZnVsbHBhdGgsICIlcy9maWxlIiwgS0lMTERJ
Uik7CiAgICBmID0gZm9wZW4oZnVsbHBhdGgsICJ3KyIpOwogICAgZnByaW50ZihmLCAibm90aGlu
ZyB0byBzZWUgaGVyZSIpOwogICAgZmNsb3NlKGYpOwogICAgdW5saW5rKCIvdXNyL2xvY2FsL3Rt
cC9jcmFzaGVyL2tpbGxfZGlyL2ZpbGUiKTsKICAgIHJtZGlyKEtJTExESVIpOwogIH0KICByZXR1
cm4gMDsKfQo=
--047d7b5dbe7ec1fe1104c72e8a75
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--047d7b5dbe7ec1fe1104c72e8a75--


From xen-devel-bounces@lists.xen.org Tue Aug 14 00:04:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 00:04:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T14bl-0002ie-Cj; Tue, 14 Aug 2012 00:03:41 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pmoody@google.com>) id 1T14bj-0002iZ-Kk
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 00:03:39 +0000
Received: from [85.158.138.51:29123] by server-7.bemta-3.messagelabs.com id
	D9/C3-01906-AD599205; Tue, 14 Aug 2012 00:03:38 +0000
X-Env-Sender: pmoody@google.com
X-Msg-Ref: server-9.tower-174.messagelabs.com!1344902617!28146421!1
X-Originating-IP: [209.85.215.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25443 invoked from network); 14 Aug 2012 00:03:37 -0000
Received: from mail-ey0-f173.google.com (HELO mail-ey0-f173.google.com)
	(209.85.215.173)
	by server-9.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 00:03:37 -0000
Received: by eaac13 with SMTP id c13so1225301eaa.32
	for <xen-devel@lists.xen.org>; Mon, 13 Aug 2012 17:03:37 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20120113;
	h=mime-version:from:date:message-id:subject:to:content-type
	:x-system-of-record;
	bh=GzJJKK+HC1xnO485thhUq9tFOVI6n90ANB3GVF3av20=;
	b=fUB4RLehnBmFwKTyVYLZZyUcyM9G5MR7opHBxos2iwEZij7VaQYxZXFyrKwiBTgtCF
	HB0gWq27a9LOT2/2xdVvxTvkBGnZRSfx9FW1IGATFbdonQhYq521rr7sVyvzHnUKV6yk
	AnvDSvAGF0SkyuTWBqGLbLJfS5+S0x2Uijgy7OCW2Hxd+vjCA+6RV3Cumiv/jLyc0hWw
	yByAKX3uFFb/ZtWHX71cKhfjeKOF9E3aN7339yDIt66JJ0Dri/o1HlJNikpeCA3I3YoL
	e+/xfFjiSzDHjZk4J0NHIERZymMYazjkR8kGQttJZBpUaqF1AImHUJ04d5lO4oH8/1a5
	HkAQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=mime-version:from:date:message-id:subject:to:content-type
	:x-system-of-record:x-gm-message-state;
	bh=GzJJKK+HC1xnO485thhUq9tFOVI6n90ANB3GVF3av20=;
	b=cf2ySrFuSEc7EXj00991PH/5uUjrGQunhNscZb9Y1WtVxuQeVZYY8MRf7WRRGohmIo
	ponJZ/loRaMOU7+dMHLFQlIPh5vP5JZzPM+7zWpDRYXfdvEi8xJ3fdii/YADN2Ki9gHC
	t1h7stdAaX7trvvLyYeco6H96GiL4VeN52l7yldacACS/EFd3W9+KgGASxEoKvJnPVoK
	MPFwPTGYp/f1+Sg79nGzF7fUpbojd1zk6Df3b2n2Pi6Jow2IzY51Qv06djpSwwV3ptr1
	zsSbqJtYZKcFNnGVUN9+fMDQ6vtCaKlxGmKgwHNml3xRXBgjQm/x/MeqG7Hlnu+BZhrW
	wFHw==
Received: by 10.14.218.5 with SMTP id j5mr16642474eep.16.1344902617370;
	Mon, 13 Aug 2012 17:03:37 -0700 (PDT)
Received: by 10.14.218.5 with SMTP id j5mr16642464eep.16.1344902617234; Mon,
	13 Aug 2012 17:03:37 -0700 (PDT)
MIME-Version: 1.0
Received: by 10.14.22.3 with HTTP; Mon, 13 Aug 2012 17:03:06 -0700 (PDT)
From: Peter Moody <pmoody@google.com>
Date: Mon, 13 Aug 2012 17:03:06 -0700
Message-ID: <CALnj_=4U+6n2-KSaLgbS5OpP3mNKwPredFYjG-0dEUNQpSSwxg@mail.gmail.com>
To: xen-devel@lists.xen.org
Content-Type: multipart/mixed; boundary=047d7b5dbe7ec1fe1104c72e8a75
X-System-Of-Record: true
X-Gm-Message-State: ALoCoQnBuvdh8iV1bqAe9mzWE5Brw+Qe9f0XZR0fdEaEBSxCFnHXGCgepU8a2a1IcZwzzjyxZ1g7zn0QzBBxvPLtcRWsoY9K6OJhh3NkWX+wj/NuiKY422GxkEaT+/+2MXKZjlu8+YMp5tVLbjng42zCeTbkS/q3iQ4MFfRz2NtzeJzaEcFRKVioTNl6Q+nR3ZEFqJ35cgIG
Subject: [Xen-devel] 100% reliable Oops on xen 4.0.1
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--047d7b5dbe7ec1fe1104c72e8a75
Content-Type: text/plain; charset=ISO-8859-1

This seems to be triggered by some combination of Xen and the audit
subsystem: the attached program crashes my machine 100% of the time.

steps to reproduce the crash:

 *  1) compile with gcc -m32
 *  2) start auditd, install any rule (I've only tested syscall auditing, but any syscall seems to work).
 *     /etc/init.d/auditd start ; auditctl -D ; auditctl -a exit,always -F arch=64 -S chmod
 *  3) run'n wait (this only loops twice for me before dying)
 *     ./a.out
 *  4) bask in instantaneous kernel oops.
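
For reference, the base64 attachment (crasher.c) below decodes to a short C
program that endlessly creates a directory, writes a file into it, then
unlinks and rmdirs it. A bounded sketch of that loop follows; note the
iteration limit and the /tmp-based KILLDIR are changes from the original,
which loops forever on "/usr/local/tmp/crasher/kill_dir":

```c
#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

/* Bounded sketch of the attached crasher.c.  The original defines
 * KILLDIR as "/usr/local/tmp/crasher/kill_dir" and runs while(1);
 * the /tmp path and the iteration limit here are local changes so
 * the sketch is safe to run standalone. */
#define KILLDIR "/tmp/crasher_kill_dir"

int churn(int iterations) {
  char fullpath[512];
  for (int i = 0; i < iterations; i++) {
    mkdir(KILLDIR, 0777);                  /* (re)create the directory  */
    snprintf(fullpath, sizeof fullpath, "%s/file", KILLDIR);
    FILE *f = fopen(fullpath, "w+");       /* create a file inside it   */
    if (f == NULL)
      return -1;
    fprintf(f, "nothing to see here");
    fclose(f);
    unlink(fullpath);                      /* then tear it all down     */
    rmdir(KILLDIR);
  }
  return 0;
}
```

Each pass forces an inode eviction on the unlink/rmdir, which matches the
path in the oops trace below (ext3_evict_inode reached via iput from
audit_syscall_exit).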

here's xm info from dom0

[xen2.atl] root@gntb1:~# xm info
host                   : gntb1.atl.corp.google.com
release                : 3.2.13-ganeti-rx6-xen0
version                : #1 SMP Thu Jun 7 12:59:40 CEST 2012
machine                : x86_64
nr_cpus                : 12
nr_nodes               : 2
cores_per_socket       : 6
threads_per_core       : 1
cpu_mhz                : 2660
hw_caps                : bfebfbff:2c100800:00000000:00001f40:029ee3ff:00000000:00000001:00000000
virt_caps              : hvm
total_memory           : 32755
free_memory            : 22665
node_to_cpu            : node0:0,2,4,6,8,10
                         node1:1,3,5,7,9,11
node_to_memory         : node0:13083
                         node1:9582
node_to_dma32_mem      : node0:0
                         node1:3235
max_node_id            : 1
xen_major              : 4
xen_minor              : 0
xen_extra              : .1
xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
xen_scheduler          : credit
xen_pagesize           : 4096
platform_params        : virt_start=0xffff800000000000
xen_changeset          : unavailable
xen_commandline        : placeholder dom0_mem=1024M loglvl=all com1=115200,8n1 console=com1 iommu=0
cc_compiler            : gcc version 4.4.3 (Ubuntu 4.4.3-4ubuntu5)
cc_compile_by          : pmacedo
cc_compile_domain      : google.com
cc_compile_date        : Wed Mar 16 15:24:06 UTC 2011
xend_config_format     : 4

I'm not sure what you need from the domU. It's running 2.6.38.8 (but
I've seen this bug all the way up to 3.5.0-rc7, the latest I've
tested). It's a fairly beefy setup: 32G of memory and 6 CPUs.

I suspect Xen rather than auditd because:

 a) this only happens on our xen machines (though not all of them)
 b) one of my stack traces started with

[172577.560441]  [<ffffffff810065ad>] ? xen_force_evtchn_callback+0xd/0x10

Anyone have any idea what's going on?

Cheers,
peter

-- 
Peter Moody      Google    1.650.253.7306
Security Engineer  pgp:0xC3410038

--047d7b5dbe7ec1fe1104c72e8a75
Content-Type: text/x-csrc; charset=US-ASCII; name="crasher.c"
Content-Disposition: attachment; filename="crasher.c"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_h5u806430

LyoKICogc3RlcHM6CiAqICAxKSBjb21waWxlIHdpdGggZ2NjIC1tMzIKICogIDIpIHN0YXJ0IGF1
ZGl0ZCwgaW5zdGFsbCBhbnkgcnVsZSAoSSd2ZSBvbmx5IHRlc3RlZCBzeXNjYWxsIGF1ZGl0aW5n
LCBidXQgYW55IHN5c2NhbGwgc2VlbXMgdG8gd29yaykuCiAqICAgICAvZXRjL2luaXQuZC9hdWRp
dGQgc3RhcnQgOyBhdWRpdGN0bCAtRCA7IGF1ZGl0Y3RsIC1hIGV4aXQsYWx3YXlzIC1GIGFyY2g9
NjQgLVMgY2htb2QKICogIDMpIHJ1biduIHdhaXQgKHRoaXMgb25seSBsb29wcyB0d2ljZSBmb3Ig
bWUgYmVmb3JlIGR5aW5nKQogKiAgICAgLi9hLm91dAogKiAgNCkgYmFzayBpbiBpbnN0YW50YW5l
b3VzIGtlcm5lbCBvb3BzLgogWyAgNTcxLjI4Mjc3N10gLS0tLS0tLS0tLS0tWyBjdXQgaGVyZSBd
LS0tLS0tLS0tLS0tCiBbICA1NzEuMjgyNzg2XSBrZXJuZWwgQlVHIGF0IGZzL2J1ZmZlci5jOjEy
NjMhCiBbICA1NzEuMjgyNzkwXSBpbnZhbGlkIG9wY29kZTogMDAwMCBbIzFdIFNNUAogWyAgNTcx
LjI4Mjc5NV0gbGFzdCBzeXNmcyBmaWxlOiAvc3lzL2RldmljZXMvc3lzdGVtL2NwdS9zY2hlZF9t
Y19wb3dlcl9zYXZpbmdzCiBbICA1NzEuMjgyNzk4XSBDUFUgMAogWyAgNTcxLjI4MjgwMl0gUGlk
OiA3NDU3LCBjb21tOiBhLm91dCBOb3QgdGFpbnRlZCAyLjYuMzguOC1nZzg2OC1nYW5ldGl4ZW51
ICMxCiBbICA1NzEuMjgyODA4XSBSSVA6IGUwMzA6WzxmZmZmZmZmZjgxMTUzODUzPl0gIFs8ZmZm
ZmZmZmY4MTE1Mzg1Mz5dIF9fZmluZF9nZXRfYmxvY2srMHgxZjMvMHgyMDAKIFsgIDU3MS4yODI4
MTldIFJTUDogZTAyYjpmZmZmODgwNzliN2RkYzc4ICBFRkxBR1M6IDAwMDEwMDQ2CiBbICA1NzEu
MjgyODIyXSBSQVg6IGZmZmY4ODA3YmMyOTAwMDAgUkJYOiBmZmZmODgwNmQ5YmI5YTk4IFJDWDog
MDAwMDAwMDAwMjNkYzE3YwogWyAgNTcxLjI4MjgyNl0gUkRYOiAwMDAwMDAwMDAwMDAxMDAwIFJT
STogMDAwMDAwMDAwMjNkYzE3YyBSREk6IGZmZmY4ODA3ZmVjMjlhMDAKIFsgIDU3MS4yODI4MzBd
IFJCUDogZmZmZjg4MDc5YjdkZGNkOCBSMDg6IDAwMDAwMDAwMDAwMDAwMDEgUjA5OiBmZmZmODgw
NmQ5YmI5OWMwCiBbICA1NzEuMjgyODM0XSBSMTA6IDAwMDAwMDAwMDAwMDAwMDAgUjExOiAwMDAw
MDAwMDAwMDAwMDAwIFIxMjogZmZmZjg4MDZkOWJiOTljNAogWyAgNTcxLjI4MjgzOV0gUjEzOiBm
ZmZmODgwNmQ5YmI5OWYwIFIxNDogZmZmZjg4MDdmZWZmOTA2MCBSMTU6IDAwMDAwMDAwMDIzZGMx
N2MKIFsgIDU3MS4yODI4NDVdIEZTOiAgMDAwMDdmOGY2YTc2YTdjMCgwMDAwKSBHUzpmZmZmODgw
N2ZmZjI2MDAwKDAwNjMpIGtubEdTOjAwMDAwMDAwMDAwMDAwMDAKIFsgIDU3MS4yODI4NDldIENT
OiAgZTAzMyBEUzogMDAyYiBFUzogMDAyYiBDUjA6IDAwMDAwMDAwODAwNTAwM2IKIFsgIDU3MS4y
ODI4NTNdIENSMjogMDAwMDAwMDBmNzZjNjk3MCBDUjM6IDAwMDAwMDA3YTI1MGIwMDAgQ1I0OiAw
MDAwMDAwMDAwMDAyNjYwCiBbICA1NzEuMjgyODU3XSBEUjA6IDAwMDAwMDAwMDAwMDAwMDAgRFIx
OiAwMDAwMDAwMDAwMDAwMDAwIERSMjogMDAwMDAwMDAwMDAwMDAwMAogWyAgNTcxLjI4Mjg2MV0g
RFIzOiAwMDAwMDAwMDAwMDAwMDAwIERSNjogMDAwMDAwMDBmZmZmMGZmMCBEUjc6IDAwMDAwMDAw
MDAwMDA0MDAKIFsgIDU3MS4yODI4NjZdIFByb2Nlc3MgYS5vdXQgKHBpZDogNzQ1NywgdGhyZWFk
aW5mbyBmZmZmODgwNzliN2RjMDAwLCB0YXNrIGZmZmY4ODA3Nzg2ODQzZTApCiBbICA1NzEuMjgy
ODcwXSBTdGFjazoKIFsgIDU3MS4yODI4NzJdICBmZmZmODgwNzliN2RkYzk4IGZmZmZmZmZmODE2
NTRjZDEgZmZmZjg4MDc5YjdkZGNhOCBmZmZmODgwNmQ5YmJhNDQwCiBbICA1NzEuMjgyODc5XSAg
ZmZmZjg4MDc5YjdkZGQwOCBmZmZmZmZmZjgxMWM5Mjk0IGZmZmY4ODA3ZmZmZmZmYzMgMDAwMDAw
MDAwMDAwMDAxNAogWyAgNTcxLjI4Mjg4N10gIGZmZmY4ODA2ZDliYjlhOTggZmZmZjg4MDZkOWJi
OTljNCBmZmZmODgwNmQ5YmI5OWYwIGZmZmY4ODA3ZmVmZjkwNjAKIFsgIDU3MS4yODI4OTVdIENh
bGwgVHJhY2U6CiBbICA1NzEuMjgyOTAxXSAgWzxmZmZmZmZmZjgxNjU0Y2QxPl0gPyBkb3duX3Jl
YWQrMHgxMS8weDMwCiBbICA1NzEuMjgyOTA3XSAgWzxmZmZmZmZmZjgxMWM5Mjk0Pl0gPyBleHQz
X3hhdHRyX2dldCsweGY0LzB4MmIwCiBbICA1NzEuMjgyOTEzXSAgWzxmZmZmZmZmZjgxMWJhZjg4
Pl0gZXh0M19jbGVhcl9ibG9ja3MrMHgxMjgvMHgxOTAKIFsgIDU3MS4yODI5MThdICBbPGZmZmZm
ZmZmODExYmIxMDQ+XSBleHQzX2ZyZWVfZGF0YSsweDExNC8weDE2MAogWyAgNTcxLjI4MjkyM10g
IFs8ZmZmZmZmZmY4MTFiYmMwYT5dIGV4dDNfdHJ1bmNhdGUrMHg4N2EvMHg5NTAKIFsgIDU3MS4y
ODI5MjhdICBbPGZmZmZmZmZmODEyMTMzZjU+XSA/IGpvdXJuYWxfc3RhcnQrMHhiNS8weDEwMAog
WyAgNTcxLjI4MjkzM10gIFs8ZmZmZmZmZmY4MTFiYzg0MD5dIGV4dDNfZXZpY3RfaW5vZGUrMHgx
ODAvMHgxYTAKIFsgIDU3MS4yODI5MzhdICBbPGZmZmZmZmZmODExNDA2NWY+XSBldmljdCsweDFm
LzB4YjAKIFsgIDU3MS4yODI5NDVdICBbPGZmZmZmZmZmODEwMDZkNTI+XSA/IGNoZWNrX2V2ZW50
cysweDEyLzB4MjAKIFsgIDU3MS4yODI5NDldICBbPGZmZmZmZmZmODExNDBjMTQ+XSBpcHV0KzB4
MWE0LzB4MjkwCiBbICA1NzEuMjgyOTU1XSAgWzxmZmZmZmZmZjgxMTNlZDA1Pl0gZHB1dCsweDI2
NS8weDMxMAogWyAgNTcxLjI4Mjk1OV0gIFs8ZmZmZmZmZmY4MTEzMjQzNT5dIHBhdGhfcHV0KzB4
MTUvMHgzMAogWyAgNTcxLjI4Mjk2NV0gIFs8ZmZmZmZmZmY4MTBhNWQzMT5dIGF1ZGl0X3N5c2Nh
bGxfZXhpdCsweDE3MS8weDI2MAogWyAgNTcxLjI4Mjk3MV0gIFs8ZmZmZmZmZmY4MTAzZWQ5YT5d
IHN5c2V4aXRfYXVkaXQrMHgyMS8weDVmCiBbICA1NzEuMjgyOTc0XSBDb2RlOiA4MiAwMCAwNSAw
MSAwMCA4NSBjMCA3NSBkZSA2NSA0OCA4OSAxYyAyNSAwMCAwNSAwMSAwMCBlOSA4NyBmZSBmZiBm
ZiA0OCA4OSBkZiBlOCBlOSBmYyBmZiBmZiA0YyA4OSBmNyBlOSAwMiBmZiBmZiBmZiAwZiAwYiBl
YiBmZSA8MGY+IDBiIGViIGZlIDBmIDBiIGViIGZlIDBmIDFmIDQ0IDAwIDAwIDU1IDQ4IDg5IGU1
IDQxIDU3IDQ5IDg5CiBbICA1NzEuMjgzMDI3XSBSSVAgIFs8ZmZmZmZmZmY4MTE1Mzg1Mz5dIF9f
ZmluZF9nZXRfYmxvY2srMHgxZjMvMHgyMDAKIFsgIDU3MS4yODMwMzNdICBSU1AgPGZmZmY4ODA3
OWI3ZGRjNzg+CiBbICA1NzEuMjgzMDM2XSAtLS1bIGVuZCB0cmFjZSA1OTc1ZmZlMjA4MDhlY2Qy
IF0tLS0KICoKICovCgojaW5jbHVkZSA8c3RkaW8uaD4KI2luY2x1ZGUgPHN5cy9zdGF0Lmg+CiNp
bmNsdWRlIDxzeXMvdHlwZXMuaD4KI2luY2x1ZGUgPHVuaXN0ZC5oPgoKI2RlZmluZSBLSUxMRElS
ICIvdXNyL2xvY2FsL3RtcC9jcmFzaGVyL2tpbGxfZGlyIgoKaW50IG1haW4odm9pZCkgewogIEZJ
TEUgKmY7CiAgY2hhciBmdWxscGF0aFs1MTJdOwogIGludCBpOwoKICB3aGlsZSAoMSkgewogICAg
ZnByaW50ZihzdGRlcnIsICIlZCAiLCBpKyspOwogICAgbWtkaXIoS0lMTERJUiwgMDc3Nyk7CiAg
ICBjaGRpcihLSUxMRElSKTsKICAgIHNwcmludGYoZnVsbHBhdGgsICIlcy9maWxlIiwgS0lMTERJ
Uik7CiAgICBmID0gZm9wZW4oZnVsbHBhdGgsICJ3KyIpOwogICAgZnByaW50ZihmLCAibm90aGlu
ZyB0byBzZWUgaGVyZSIpOwogICAgZmNsb3NlKGYpOwogICAgdW5saW5rKCIvdXNyL2xvY2FsL3Rt
cC9jcmFzaGVyL2tpbGxfZGlyL2ZpbGUiKTsKICAgIHJtZGlyKEtJTExESVIpOwogIH0KICByZXR1
cm4gMDsKfQo=
--047d7b5dbe7ec1fe1104c72e8a75
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--047d7b5dbe7ec1fe1104c72e8a75--


From xen-devel-bounces@lists.xen.org Tue Aug 14 01:01:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 01:01:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T15Un-0007yj-8a; Tue, 14 Aug 2012 01:00:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T15Ul-0007bh-MS
	for xen-devel@lists.xensource.com; Tue, 14 Aug 2012 01:00:32 +0000
Received: from [85.158.139.83:3760] by server-12.bemta-5.messagelabs.com id
	F9/72-22359-E23A9205; Tue, 14 Aug 2012 01:00:30 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-4.tower-182.messagelabs.com!1344906030!25235422!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDg4Njk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32182 invoked from network); 14 Aug 2012 01:00:30 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-4.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 01:00:30 -0000
X-IronPort-AV: E=Sophos;i="4.77,763,1336348800"; d="scan'208";a="13991174"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Aug 2012 01:00:29 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 14 Aug 2012 02:00:29 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1T15Uj-0007MT-2k;
	Tue, 14 Aug 2012 01:00:29 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1T15Ui-0004g8-Oj;
	Tue, 14 Aug 2012 02:00:28 +0100
To: xen-devel@lists.xensource.com
Message-ID: <osstest-13598-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 14 Aug 2012 02:00:28 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13598: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13598 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13598/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13597
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13597
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13597
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13597

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  33d596f46521
baseline version:
 xen                  14788c9cb645

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Olaf Hering <olaf@aepfle.de>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=33d596f46521
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable 33d596f46521
+ branch=xen-unstable
+ revision=33d596f46521
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/staging/xen-unstable.hg
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-unstable.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-unstable.hg
+ hg push -r 33d596f46521 ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 2 changesets with 2 changes to 2 files

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/staging/xen-unstable.hg
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-unstable.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-unstable.hg
+ hg push -r 33d596f46521 ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 2 changesets with 2 changes to 2 files

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 01:18:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 01:18:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T15li-0007N6-25; Tue, 14 Aug 2012 01:18:02 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1T15lg-0007Mv-Ui
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 01:18:01 +0000
Received: from [85.158.143.35:57178] by server-2.bemta-4.messagelabs.com id
	04/F6-31966-847A9205; Tue, 14 Aug 2012 01:18:00 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1344907079!12042122!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzU3MTA1\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18927 invoked from network); 14 Aug 2012 01:17:59 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-11.tower-21.messagelabs.com with SMTP;
	14 Aug 2012 01:17:59 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga102.jf.intel.com with ESMTP; 13 Aug 2012 18:17:58 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.77,763,1336374000"; d="scan'208";a="185978263"
Received: from fmsmsx104.amr.corp.intel.com ([10.19.9.35])
	by orsmga002.jf.intel.com with ESMTP; 13 Aug 2012 18:17:57 -0700
Received: from fmsmsx102.amr.corp.intel.com (10.19.9.53) by
	FMSMSX104.amr.corp.intel.com (10.19.9.35) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Mon, 13 Aug 2012 18:17:57 -0700
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	FMSMSX102.amr.corp.intel.com (10.19.9.53) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Mon, 13 Aug 2012 18:17:57 -0700
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.82]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.232]) with mapi id
	14.01.0355.002; Tue, 14 Aug 2012 09:17:56 +0800
From: "Zhang, Yang Z" <yang.z.zhang@intel.com>
To: "Zhou, Chao" <chao.zhou@intel.com>, Ian Campbell
	<Ian.Campbell@citrix.com>, Stefano Stabellini
	<Stefano.Stabellini@eu.citrix.com>
Thread-Topic: [Xen-devel] error when pass through device to guest with
	qemu-xen-dir-remote
Thread-Index: AQHNcWQHo7KALelyBkWancTZpqx1/5dREAmQgAeA1jA=
Date: Tue, 14 Aug 2012 01:17:55 +0000
Message-ID: <A9667DDFB95DB7438FA9D7D576C3D87E207A21@SHSMSX101.ccr.corp.intel.com>
References: <A9667DDFB95DB7438FA9D7D576C3D87E203D54@SHSMSX101.ccr.corp.intel.com>
	<alpine.DEB.2.02.1208031122170.4645@kaball.uk.xensource.com>
	<1343990187.21372.48.camel@zakaz.uk.xensource.com>
	<40352EBA8B4DF841A9907B883F22B59B0FDBCE1E@SHSMSX102.ccr.corp.intel.com>
In-Reply-To: <40352EBA8B4DF841A9907B883F22B59B0FDBCE1E@SHSMSX102.ccr.corp.intel.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: Anthony Perard <anthony.perard@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] error when pass through device to guest
	with	qemu-xen-dir-remote
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Zhou, Chao wrote on 2012-08-09:
> I rebuilt the upstream QEMU according to the wiki, but static device assignment
> doesn't work: no lspci output in the guest. However, hotplug & unplug work fine.

Hi Anthony,

We cannot see the device (via lspci) after the guest boots up with upstream QEMU. Did you see the same problem?
We followed the steps from the wiki to build upstream QEMU and did the device assignment the old way (1. hide the device, 2. set the BDF in the config file).
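
For reference, the "old way" described above is usually: hide the device from dom0 via pciback, then list its BDF in the guest config. A minimal sketch, assuming an xl toolstack and using the placeholder BDF 08:00.0 (substitute the real device):

```
# dom0: load pciback and mark the device assignable
modprobe xen-pciback
xl pci-assignable-add 08:00.0

# guest config file: pass the hidden device through by BDF
pci = [ '08:00.0' ]
```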

> -----Original Message-----
> From: xen-devel-bounces@lists.xen.org [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Ian Campbell
> Sent: Friday, August 03, 2012 6:36 PM
> To: Stefano Stabellini
> Cc: Zhang, Yang Z; Anthony Perard; xen-devel
> Subject: Re: [Xen-devel] error when pass through device to guest with qemu-xen-dir-remote
> 
> On Fri, 2012-08-03 at 11:29 +0100, Stefano Stabellini wrote:
>> On Fri, 3 Aug 2012, Zhang, Yang Z wrote:
>>> When creating a guest with a device assigned, it shows this error and the
>>> device wasn't able to work inside the guest: libxl: error:
>>> libxl_qmp.c:288:qmp_handle_error_response: received an error message
>>> from QMP server: Parameter 'driver' expects a driver name
>>> 
>>> It only fails with qemu-xen-dir-remote (is this tree closer to
>>> upstream qemu?). I don't see the error with the traditional qemu. I
>>> also tried qemu-upstream, but it fails when I try to enable pci
>>> pass-through
> for xen. I think Anthony's patch to add pci pass-through support for Xen was
> accepted by qemu-upstream, am I right?
>> 
>> Yes, it was accepted, but it is present only in upstream QEMU (from
>> git://git.qemu.org/qemu.git), not the tree we are currently using in
>> xen-unstable for development
>> (git://xenbits.xensource.com/qemu-upstream-unstable.git).
>> Make sure you are using the right tree!
> 
> http://wiki.xen.org/wiki/QEMU_Upstream has some notes on how to use the
> upstream qemu tree instead of our stable branch of upstream.
> 
>> 
>> Anthony is currently on vacation and is going to be back in about a
>> week.
>> 
>>> Another question:
>>> Now I am trying to add some features (relevant to device pass-through) to
> qemu; which tree should I use? Since traditional qemu is greatly different from
> qemu-upstream, it is too old to develop patches against. But besides the old one,
> I cannot find a working qemu.
>> 
>> You should use upstream QEMU, I am going to rebase our tree on that
>> early on in the 4.3 release cycle.
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


Best regards,
Yang



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 05:52:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 05:52:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1A2T-0001nL-Hs; Tue, 14 Aug 2012 05:51:37 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <koverstreet@google.com>) id 1T12Ht-0008PN-NK
	for xen-devel@lists.xen.org; Mon, 13 Aug 2012 21:35:01 +0000
Received: from [85.158.139.83:14059] by server-8.bemta-5.messagelabs.com id
	8A/22-02481-40379205; Mon, 13 Aug 2012 21:35:00 +0000
X-Env-Sender: koverstreet@google.com
X-Msg-Ref: server-8.tower-182.messagelabs.com!1344893699!16669865!1
X-Originating-IP: [209.85.213.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4215 invoked from network); 13 Aug 2012 21:35:00 -0000
Received: from mail-yw0-f45.google.com (HELO mail-yw0-f45.google.com)
	(209.85.213.45)
	by server-8.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Aug 2012 21:35:00 -0000
Received: by yhpp34 with SMTP id p34so4160262yhp.32
	for <xen-devel@lists.xen.org>; Mon, 13 Aug 2012 14:34:59 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20120113;
	h=date:from:to:cc:subject:message-id:references:mime-version
	:content-type:content-disposition:in-reply-to:user-agent;
	bh=vDaq5XomB02L8GhzKytC5os+1lEyG6FtMpR0raITz5U=;
	b=gIQiZN31kBCxnwBvhVX8U19Q7gd0S3mraDkS25ZhLjq4GslQo6+982G5mpbEfEBCoR
	LQ4aTklX6zVCCsWK7eaTAT8RyU4mPYJnLzSFkbFWOUnSDA0MCRRvT67nIRiHBRJ+j785
	iyDjr27Mssd0RZm0nTa1sGsqILJnNjJxhIeS/9GuSvj1k+vVOCqq38RsBFkcA7BRTYPk
	sTQZUBTd43amVAJnKw6WJp2nUlRa34QhHs89TIr2BhcK4cjwLhpqzAOX9CFI1seIT7qI
	XOlpDriM4v2EdhWcsL/5Bi6+CrMEXiaEmQcffBIX5+dvn8SsZYyGN7RDzjbThkVI2XWy
	PrXg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=date:from:to:cc:subject:message-id:references:mime-version
	:content-type:content-disposition:in-reply-to:user-agent
	:x-gm-message-state;
	bh=vDaq5XomB02L8GhzKytC5os+1lEyG6FtMpR0raITz5U=;
	b=R0GIBEc5ulIcXnO56LxH3WOwr/ZTJvyS2bwW8J+fiqU5zDjoqNzQTM2bLoFyz1KU/w
	XEPr7L+Qbgm1m7sSI4Fhd57DdjC8Ya+lrLI6xG6xRh8k3rqPvgVrxC50C3e8HSsPxeTc
	TaxQzJUWFVkxnTYXahN8arHnQfNzOCOgGAeTqPfNE+mw/WlfKpIlypSUf2CC/QXIsrZA
	IS+S93yD0unNRteJYtINP41Ry8xBBIRZSCWJmArBk5iEutRwGI/odBDbWP76wr1eq3U1
	+H5RVSLWLUIucs3rvWLPB43ary7Nd1yZ0gbQgATddUpb8KYBkml1JOw6JV9gOrFlan/y
	B3tQ==
Received: by 10.50.173.71 with SMTP id bi7mr7428742igc.8.1344893698796;
	Mon, 13 Aug 2012 14:34:58 -0700 (PDT)
Received: by 10.50.173.71 with SMTP id bi7mr7428718igc.8.1344893698529;
	Mon, 13 Aug 2012 14:34:58 -0700 (PDT)
Received: from google.com ([2620:0:1000:2300:be30:5bff:fed1:fc17])
	by mx.google.com with ESMTPS id xm2sm6429642igb.3.2012.08.13.14.34.57
	(version=SSLv3 cipher=OTHER); Mon, 13 Aug 2012 14:34:57 -0700 (PDT)
Date: Mon, 13 Aug 2012 14:34:55 -0700
From: Kent Overstreet <koverstreet@google.com>
To: Joseph Glanville <joseph.glanville@orionvm.com.au>
Message-ID: <20120813213455.GC6887@google.com>
References: <6035A0D088A63A46850C3988ED045A4B299F67FA@BITCOM1.int.sbss.com.au>
	<CAOzFzEhna3CaBE28aHVX_ZoNLDEa6AhArHPcB9240Ni4jh9PYA@mail.gmail.com>
	<6035A0D088A63A46850C3988ED045A4B299F6A9B@BITCOM1.int.sbss.com.au>
	<CAOzFzEhF+Pb89PNoibBc-9_Db6OUiyG5R8mKkAPi5pns4+CsAg@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAOzFzEhF+Pb89PNoibBc-9_Db6OUiyG5R8mKkAPi5pns4+CsAg@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Gm-Message-State: ALoCoQl+vT3PagR61snpGqVnkzsInKbSTz0vWJOWhWA5z5443RqLVENmuCvCBQEtAk9fAK/vk9fNQasCssj0Dlo7axuC3xvgrkNgjBrLX1OeGgj074dZ+nn8qep2x1RjdDWQmkfFVscmezivHLncXeQmSVsNSs+3vie3P9+FF1RJNe6XrSrrbNk32T6Obh2zkb81WH42RlZa
X-Mailman-Approved-At: Tue, 14 Aug 2012 05:51:36 +0000
Cc: "linux-bcache@vger.kernel.org" <linux-bcache@vger.kernel.org>,
	James Harper <james.harper@bendigoit.com.au>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] blkback and bcache
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Aug 13, 2012 at 08:53:44PM +1000, Joseph Glanville wrote:
> On 13 August 2012 18:17, James Harper <james.harper@bendigoit.com.au> wrote:
> >>
> >> This could very well be the issue I was having, I haven't been able to pull the
> >> latest bcache code for a few days (repo down?) but if I can help debug let me
> >> know.
> >>
> >
> > Is it Windows or Linux giving you problems? I've only tested Windows so far.
> >
> > James
> > --
> > To unsubscribe from this list: send the line "unsubscribe linux-bcache" in
> > the body of a message to majordomo@vger.kernel.org
> > More majordomo info at  http://vger.kernel.org/majordomo-info.html
> 
> Both, I wasn't able to get either to work correctly with blkback.

I just mentioned this in the other thread, but if this is due to the 4k
blocksize, that's easy to fix: just format with a 512-byte blocksize:

make-bcache --block 512

Maybe I should change the default.
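
The suggested workaround can be captured as a tiny shell sketch. It only prints the make-bcache invocation (running it would reformat the device); /dev/sdb is a placeholder backing device, and --block / -B are assumed to be the block-size and backing-device options of the era's make-bcache:

```shell
#!/bin/sh
# Placeholder backing device; adjust before use.
BDEV=/dev/sdb
# The thread suggests blkback mishandles 4k-block bcache devices,
# so format with 512-byte blocks instead of the 4k default.
BLOCK=512

# Print rather than execute: make-bcache destroys existing data on $BDEV.
echo "make-bcache --block ${BLOCK} -B ${BDEV}"
```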

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 06:16:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 06:16:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1AQ5-0002Ls-MU; Tue, 14 Aug 2012 06:16:01 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <daniel.kiper@oracle.com>) id 1T1AQ5-0002Ln-0R
	for xen-devel@lists.xensource.com; Tue, 14 Aug 2012 06:16:01 +0000
Received: from [85.158.139.83:51175] by server-9.bemta-5.messagelabs.com id
	63/82-26123-02DE9205; Tue, 14 Aug 2012 06:16:00 +0000
X-Env-Sender: daniel.kiper@oracle.com
X-Msg-Ref: server-4.tower-182.messagelabs.com!1344924958!25263444!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjg3Nzcy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12650 invoked from network); 14 Aug 2012 06:15:59 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-4.tower-182.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 14 Aug 2012 06:15:59 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7E6FnMu002247
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 14 Aug 2012 06:15:50 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7E6FmdQ001496
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 14 Aug 2012 06:15:49 GMT
Received: from abhmt104.oracle.com (abhmt104.oracle.com [141.146.116.56])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7E6FlsL004118; Tue, 14 Aug 2012 01:15:47 -0500
Received: from host-192-168-1-59.local.net-space.pl (/89.174.63.77)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 13 Aug 2012 23:15:46 -0700
Date: Tue, 14 Aug 2012 08:14:08 +0200
From: Daniel Kiper <daniel.kiper@oracle.com>
To: Dave Anderson <anderson@redhat.com>
Message-ID: <20120814061408.GA2471@host-192-168-1-59.local.net-space.pl>
References: <20120813073754.GA2482@host-192-168-1-59.local.net-space.pl>
	<1827880283.12350529.1344885413402.JavaMail.root@redhat.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1827880283.12350529.1344885413402.JavaMail.root@redhat.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: olaf@aepfle.de, xen-devel@lists.xensource.com,
	konrad wilk <konrad.wilk@oracle.com>,
	andrew cooper3 <andrew.cooper3@citrix.com>, ptesarik@suse.cz,
	jbeulich@suse.com, kexec@lists.infradead.org, crash-utility@redhat.com
Subject: Re: [Xen-devel] [PATCH v2 0/6] crash: Bundle of fixes for Xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Aug 13, 2012 at 03:16:53PM -0400, Dave Anderson wrote:

[...]

> OK good.  It tests OK on a few older pvops kernels that I have on hand.
>
> The only thing I've changed is to handle compiler warnings in x86_64.c and
> x86.c by initializing p2m_top to NULL in x86_64_pvops_xendump_p2m_l3_create()
> and x86_pvops_xendump_p2m_l3_create().  I also used GETBUF() in those two
> functions to avoid having to add the malloc-failure line.

No problem. However, I am a bit surprised that you have seen some warnings.
I have not seen any. Did you compile crash with extra options or something
like that? Or maybe there is a difference in our compilers (mine
is gcc version 4.1.2 20061115 (prerelease) (Debian 4.1.1-21) - quite ancient).

> Queued for crash-6.0.9.

Thanks.

Daniel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 06:19:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 06:19:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1ATC-0002TG-9d; Tue, 14 Aug 2012 06:19:14 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <daniel.kiper@oracle.com>) id 1T1ATB-0002TA-Gm
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 06:19:13 +0000
Received: from [85.158.138.51:36595] by server-12.bemta-3.messagelabs.com id
	6E/2B-04073-0EDE9205; Tue, 14 Aug 2012 06:19:12 +0000
X-Env-Sender: daniel.kiper@oracle.com
X-Msg-Ref: server-15.tower-174.messagelabs.com!1344925150!26362636!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjg3Nzcy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13804 invoked from network); 14 Aug 2012 06:19:12 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-15.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 14 Aug 2012 06:19:12 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7E6J4D6004693
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 14 Aug 2012 06:19:05 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7E6J3iL005035
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 14 Aug 2012 06:19:03 GMT
Received: from abhmt113.oracle.com (abhmt113.oracle.com [141.146.116.65])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7E6J3ox001941; Tue, 14 Aug 2012 01:19:03 -0500
Received: from host-192-168-1-59.local.net-space.pl (/89.174.63.77)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 13 Aug 2012 23:19:02 -0700
Date: Tue, 14 Aug 2012 08:17:42 +0200
From: Daniel Kiper <daniel.kiper@oracle.com>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20120814061742.GB2471@host-192-168-1-59.local.net-space.pl>
References: <20120810132513.GB2576@host-192-168-1-59.local.net-space.pl>
	<502539410200007800094355@nat28.tlf.novell.com>
	<20120813081235.GB2482@host-192-168-1-59.local.net-space.pl>
	<5028E09B0200007800094691@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <5028E09B0200007800094691@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: olaf@aepfle.de, konrad.wilk@oracle.com, andrew.cooper3@citrix.com,
	kexec@lists.infradead.org, ptesarik@suse.cz,
	xen-devel <xen-devel@lists.xen.org>, anderson@redhat.com,
	crash-utility@redhat.com
Subject: Re: [Xen-devel] [PATCH v2 1/6] xen: Always calculate max_cpus value
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Aug 13, 2012 at 10:10:19AM +0100, Jan Beulich wrote:
> >>> On 13.08.12 at 10:12, Daniel Kiper <daniel.kiper@oracle.com> wrote:
> > On Fri, Aug 10, 2012 at 03:39:29PM +0100, Jan Beulich wrote:
> >> >>> On 10.08.12 at 15:25, Daniel Kiper <daniel.kiper@oracle.com> wrote:
> >> > max_cpus is not available since 20374 changeset (Miscellaneous data
> >> > placement adjustments). It was moved to __initdata section. This section
> >> > is freed after Xen initialization. Assume that max_cpus is always
> >> > equal to XEN_HYPER_SIZE(cpumask_t) * 8.
> >>
> >> Just to repeat my response to the original version of this patch,
> >> which I don't recall having got any answer from you:
> >>
> >> "Using nr_cpu_ids, when available, would seem a better fit. And
> >>  I don't see why, on dumps from old hypervisors, you wouldn't
> >>  want to continue using max_cpus. Oh, wait, I see - you would
> >>  have to be able to tell whether it actually sits in .init.data, which
> >>  might not be straightforward."
> >
> > As I promised earlier I thought about that. The simplest way
> > to do that is to check in which section max_cpus resides. There
> > is some instrumentation in crash tool to do that. However, sadly
> > it does not differentiate between .data and .init.data section.
> > I could write something from scratch which could do that but
> > I think it could have larger costs than potential gains.
> > Let's leave it as is now. Current approximation is not so bad.
> > However, if any opportunity appears (some functions could
> > differentiate between .data and .init.data section) then
> > I could fix this.
>
> But minimally you should be using nr_cpu_ids when available.
> You just have to be prepared for bits beyond that value within
> any cpumask_t instance to have random contents.

I will try to improve that after my vacation (in September).

Daniel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 06:27:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 06:27:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1Ab7-0002lk-JA; Tue, 14 Aug 2012 06:27:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T1Ab5-0002lf-Tw
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 06:27:24 +0000
Received: from [85.158.143.99:42272] by server-2.bemta-4.messagelabs.com id
	0E/3A-31966-BCFE9205; Tue, 14 Aug 2012 06:27:23 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-216.messagelabs.com!1344925642!18268867!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=1.1 required=7.0 tests=MAILTO_TO_SPAM_ADDR,
	MANY_EXCLAMATIONS
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29544 invoked from network); 14 Aug 2012 06:27:22 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-216.messagelabs.com with SMTP;
	14 Aug 2012 06:27:22 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 14 Aug 2012 07:29:07 +0100
Message-Id: <502A0BE50200007800094A4E@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Tue, 14 Aug 2012 07:27:17 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "tupeng212" <tupeng212@gmail.com>
References: <201208070018394210381@gmail.com>,
	<50224B7402000078000937DA@nat28.tlf.novell.com>
	<2012081023124696835343@gmail.com>,
	<50293785020000780009484C@nat28.tlf.novell.com>
	<201208140024353598835@gmail.com>
In-Reply-To: <201208140024353598835@gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Big Bug:Time in VM goes slower;
 foud Solution but demand Judgement! A Interesting Story!
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

(re-adding xen-devel to Cc)

>>> On 13.08.12 at 18:24, tupeng212 <tupeng212@gmail.com> wrote:
> I am at home writing you this letter, so I can give the details:
> I only compared the rate being set against the one already in RegA; if they
> are the same, I don't call create_periodic_time, and that works.
> But after a while it changed the rate to 152... (the normal rate), then
> changed back to 97... (the high rate), and
> this action repeated; at that point the VM's time slowed again. So blocking
> only the setting of an identical rate doesn't work.

But in this case the hypervisor has no choice - it has to update
the periodic timer. I don't think we could reliably guess that the
guest might plan on quickly toggling between two rates.

> But when I keep pt->scheduled static (unchanged) when RegA is set,
> even across the rate switching I noted above,
> it works very well; the time-slowing bug never appears again.

Yeah, likely that solves the problem for you, but as it's not
correct, it's likely to cause problems for others.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 06:37:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 06:37:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1Akz-0003Bk-Uj; Tue, 14 Aug 2012 06:37:37 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dkiper@net-space.pl>) id 1T1Aky-0003BZ-AO
	for xen-devel@lists.xensource.com; Tue, 14 Aug 2012 06:37:36 +0000
Received: from [85.158.143.99:27910] by server-3.bemta-4.messagelabs.com id
	9A/DC-09529-F22F9205; Tue, 14 Aug 2012 06:37:35 +0000
X-Env-Sender: dkiper@net-space.pl
X-Msg-Ref: server-10.tower-216.messagelabs.com!1344926254!21659143!1
X-Originating-IP: [89.174.63.77]
X-SpamReason: No, hits=0.0 required=7.0 tests=UNPARSEABLE_RELAY
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18395 invoked from network); 14 Aug 2012 06:37:35 -0000
Received: from router-fw.net-space.pl (HELO router-fw.net-space.pl)
	(89.174.63.77)
	by server-10.tower-216.messagelabs.com with EDH-RSA-DES-CBC3-SHA
	encrypted SMTP; 14 Aug 2012 06:37:35 -0000
Received: (from localhost user: 'dkiper' uid#4000 fake: STDIN
	(dkiper@router-fw.net-space.pl)) by router-fw-old.local.net-space.pl
	id S1596072Ab2HNGhT (ORCPT <rfc822;xen-devel@lists.xensource.com>);
	Tue, 14 Aug 2012 08:37:19 +0200
Date: Tue, 14 Aug 2012 08:37:19 +0200
From: Daniel Kiper <dkiper@net-space.pl>
To: weijia wang <aawwjaa@gmail.com>
Message-ID: <20120814063719.GA30600@router-fw-old.local.net-space.pl>
References: <CAF41zFy-TF1yzCracNPW_G33UpvfseqhvX-mVaDjwvmN=Dk8hQ@mail.gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAF41zFy-TF1yzCracNPW_G33UpvfseqhvX-mVaDjwvmN=Dk8hQ@mail.gmail.com>
User-Agent: Mutt/1.3.28i
Cc: xen-devel@lists.xensource.com, kexec@lists.infradead.org
Subject: Re: [Xen-devel] kexec in DomU of xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Aug 10, 2012 at 05:29:18PM +0200, weijia wang wrote:
> Hi,
>
>    I am wondering whether kexec supports the DomU of Xen,
>    because I failed to use kexec to reload a kernel in DomU.

Not yet. It is a work in progress. First I am going to publish support for
dom0; it contains the changes required on the domU Linux kernel side, too.
However, there are also some changes required in Xen itself
and kexec-tools. Stay tuned.

Daniel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 06:37:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 06:37:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1Akz-0003Bk-Uj; Tue, 14 Aug 2012 06:37:37 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dkiper@net-space.pl>) id 1T1Aky-0003BZ-AO
	for xen-devel@lists.xensource.com; Tue, 14 Aug 2012 06:37:36 +0000
Received: from [85.158.143.99:27910] by server-3.bemta-4.messagelabs.com id
	9A/DC-09529-F22F9205; Tue, 14 Aug 2012 06:37:35 +0000
X-Env-Sender: dkiper@net-space.pl
X-Msg-Ref: server-10.tower-216.messagelabs.com!1344926254!21659143!1
X-Originating-IP: [89.174.63.77]
X-SpamReason: No, hits=0.0 required=7.0 tests=UNPARSEABLE_RELAY
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18395 invoked from network); 14 Aug 2012 06:37:35 -0000
Received: from router-fw.net-space.pl (HELO router-fw.net-space.pl)
	(89.174.63.77)
	by server-10.tower-216.messagelabs.com with EDH-RSA-DES-CBC3-SHA
	encrypted SMTP; 14 Aug 2012 06:37:35 -0000
Received: (from localhost user: 'dkiper' uid#4000 fake: STDIN
	(dkiper@router-fw.net-space.pl)) by router-fw-old.local.net-space.pl
	id S1596072Ab2HNGhT (ORCPT <rfc822;xen-devel@lists.xensource.com>);
	Tue, 14 Aug 2012 08:37:19 +0200
Date: Tue, 14 Aug 2012 08:37:19 +0200
From: Daniel Kiper <dkiper@net-space.pl>
To: weijia wang <aawwjaa@gmail.com>
Message-ID: <20120814063719.GA30600@router-fw-old.local.net-space.pl>
References: <CAF41zFy-TF1yzCracNPW_G33UpvfseqhvX-mVaDjwvmN=Dk8hQ@mail.gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAF41zFy-TF1yzCracNPW_G33UpvfseqhvX-mVaDjwvmN=Dk8hQ@mail.gmail.com>
User-Agent: Mutt/1.3.28i
Cc: xen-devel@lists.xensource.com, kexec@lists.infradead.org
Subject: Re: [Xen-devel] kexec in DomU of xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Aug 10, 2012 at 05:29:18PM +0200, weijia wang wrote:
> Hi,
>
>    I am wondering whether kexec supports the DomU of Xen,
>    because I failed when using kexec to reload a kernel in a DomU.

Not yet; it is a work in progress. First I am going to publish
support for dom0, which also contains the changes required on the
domU Linux kernel side. However, some changes are also needed in
Xen itself and in kexec-tools. Stay tuned.

Daniel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 06:38:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 06:38:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1AlK-0003Es-Ba; Tue, 14 Aug 2012 06:37:58 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T1AlI-0003EZ-Vb
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 06:37:57 +0000
Received: from [85.158.143.35:37476] by server-2.bemta-4.messagelabs.com id
	6C/C4-31966-442F9205; Tue, 14 Aug 2012 06:37:56 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1344926275!5524746!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=1.3 required=7.0 tests=BODY_RANDOM_LONG,
	MANY_EXCLAMATIONS
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19055 invoked from network); 14 Aug 2012 06:37:55 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-21.messagelabs.com with SMTP;
	14 Aug 2012 06:37:55 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 14 Aug 2012 07:39:40 +0100
Message-Id: <502A0E610200007800094A58@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Tue, 14 Aug 2012 07:37:53 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
References: <201208070018394210381@gmail.com>,
	<50224B7402000078000937DA@nat28.tlf.novell.com>
	<2012081023124696835343@gmail.com>
	<50293785020000780009484C@nat28.tlf.novell.com>
In-Reply-To: <50293785020000780009484C@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: tupeng212 <tupeng212@gmail.com>, Tim Deegan <tim@xen.org>,
	Keir Fraser <keir@xen.org>
Subject: Re: [Xen-devel] Big Bug:Time in VM goes slower;
 foud Solution but demand Judgement! A Interesting Story!
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 13.08.12 at 17:21, "Jan Beulich" <JBeulich@suse.com> wrote:
> - don't call rtc_timer_update() on REG_A writes when the value didn't
>   change (doing the call always was reported to cause wall clock time
>   lagging with the JVM under Windows)
> - in the same spirit, don't call rtc_timer_update() or
>   alarm_timer_update() on REG_B writes when the respective RTC_xIE bit
>   didn't change

Actually, this didn't go far enough yet: REG_B writes should
never cause any timers to get updated when merely one of the
xIE bits changes, as those bits shouldn't control the timers'
activity (and as a result, the eventual setting of the xF bits in
REG_C).

> - raise the RTC IRQ not only when RTC_UIE gets set while RTC_UF was
>   already set, but generalize this to alarm and periodic interrupts as
>   well
> - properly handle RTC_PF when the guest is not also setting RTC_PIE

In line with the above, this ought to also be done for AF and UF
(it may be the case for UF already).

Jan

> - also handle the two other clock bases




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 06:38:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 06:38:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1Alx-0003Lz-QT; Tue, 14 Aug 2012 06:38:37 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1T1Alw-0003LY-HV
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 06:38:36 +0000
Received: from [85.158.143.35:43711] by server-1.bemta-4.messagelabs.com id
	14/EF-07754-A62F9205; Tue, 14 Aug 2012 06:38:34 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1344926313!15802835!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzU3MTA1\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18069 invoked from network); 14 Aug 2012 06:38:34 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-14.tower-21.messagelabs.com with SMTP;
	14 Aug 2012 06:38:34 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga102.jf.intel.com with ESMTP; 13 Aug 2012 23:38:32 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.77,764,1336374000"; d="scan'208";a="180341169"
Received: from fmsmsx105.amr.corp.intel.com ([10.19.9.36])
	by orsmga001.jf.intel.com with ESMTP; 13 Aug 2012 23:38:32 -0700
Received: from fmsmsx151.amr.corp.intel.com (10.19.17.220) by
	FMSMSX105.amr.corp.intel.com (10.19.9.36) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Mon, 13 Aug 2012 23:38:31 -0700
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	FMSMSX151.amr.corp.intel.com (10.19.17.220) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Mon, 13 Aug 2012 23:38:32 -0700
Received: from shsmsx102.ccr.corp.intel.com ([169.254.2.196]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.232]) with mapi id
	14.01.0355.002; Tue, 14 Aug 2012 14:38:30 +0800
From: "Xu, Dongxiao" <dongxiao.xu@intel.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Thread-Topic: [Xen-devel] qemu-xen-traditional: NOCACHE or CACHE_WB to open
	disk images for IDE
Thread-Index: Ac12uZBjK3K3HmTeRfSjIG3Pa6SRcwDLYPSQ
Date: Tue, 14 Aug 2012 06:38:29 +0000
Message-ID: <40776A41FC278F40B59438AD47D147A90FE048FB@SHSMSX102.ccr.corp.intel.com>
References: <40776A41FC278F40B59438AD47D147A90FE016D3@SHSMSX102.ccr.corp.intel.com>
In-Reply-To: <40776A41FC278F40B59438AD47D147A90FE016D3@SHSMSX102.ccr.corp.intel.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "Zhang, Yang Z" <yang.z.zhang@intel.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] qemu-xen-traditional: NOCACHE or CACHE_WB to open
 disk images for IDE
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I have attached the original patch here, which impacts performance.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
---
 xenstore.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/xenstore.c b/xenstore.c
index 4c483e2..ac90366 100644
--- a/xenstore.c
+++ b/xenstore.c
@@ -643,7 +643,7 @@ void xenstore_parse_domain_config(int hvm_domid)
            }
             pstrcpy(bs->filename, sizeof(bs->filename), params);
 
-            flags = BDRV_O_CACHE_WB; /* snapshot and write-back */
+            flags = BDRV_O_NOCACHE;
             is_readonly = 0;
             if (pasprintf(&buf, "%s/mode", bpath) == -1)
                 continue;
-- 
1.7.2.5

Could somebody tell me why this was eventually checked in despite the performance impact?

Thanks,
Dongxiao

> -----Original Message-----
> From: xen-devel-bounces@lists.xen.org
> [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Xu, Dongxiao
> Sent: Friday, August 10, 2012 1:33 PM
> To: xen-devel@lists.xen.org
> Cc: Zhang, Yang Z; Ian Jackson; Ian Campbell; Stefano Stabellini
> Subject: [Xen-devel] qemu-xen-traditional: NOCACHE or CACHE_WB to open
> disk images for IDE
> 
> Hi list,
> 
> Recently I was debugging a slow-boot issue for L2 guests in a nested
> virtualization environment (both the L0 and L1 hypervisors are Xen).
> Booting an L2 Linux guest (RHEL6u2) takes more than 3 minutes after
> GRUB has loaded. I did some profiling and saw the guest performing
> disk operations through the int13 BIOS procedure.
> 
> Even without considering the nested case, there is a bug report of
> normal VMs booting slower than before (with both qcow and disk
> images), see:
> http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1821
> Therefore I think the boot delay is simply much amplified in an L2 guest.
> 
> I root-caused this issue to a change in qemu, and I saw there was a lot
> of discussion on this topic. I didn't see a final decision, but the
> patch was later checked in. Could anybody revisit this commit and
> explain the final decision?
> http://lists.xen.org/archives/html/xen-devel/2012-03/msg02072.html
> 
> Thanks,
> Dongxiao
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 06:46:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 06:46:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1Atc-0003zu-Ov; Tue, 14 Aug 2012 06:46:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1T1Ata-0003zk-V5
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 06:46:31 +0000
Received: from [85.158.143.35:16724] by server-2.bemta-4.messagelabs.com id
	3C/CD-31966-644F9205; Tue, 14 Aug 2012 06:46:30 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-13.tower-21.messagelabs.com!1344926789!15218285!1
X-Originating-IP: [192.89.123.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjg5LjEyMy4yNSA9PiA0NzcwMjg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26331 invoked from network); 14 Aug 2012 06:46:29 -0000
Received: from smtp.tele.fi (HELO smtp.tele.fi) (192.89.123.25)
	by server-13.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 14 Aug 2012 06:46:29 -0000
X-Originating-Ip: [194.89.68.22]
Received: from ydin.reaktio.net (reaktio.net [194.89.68.22])
	by smtp.tele.fi (Postfix) with ESMTP id B59D51255;
	Tue, 14 Aug 2012 09:46:28 +0300 (EEST)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id 6746B2005D; Tue, 14 Aug 2012 09:46:28 +0300 (EEST)
Date: Tue, 14 Aug 2012 09:46:28 +0300
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: Peter Moody <pmoody@google.com>
Message-ID: <20120814064628.GV19851@reaktio.net>
References: <CALnj_=4U+6n2-KSaLgbS5OpP3mNKwPredFYjG-0dEUNQpSSwxg@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CALnj_=4U+6n2-KSaLgbS5OpP3mNKwPredFYjG-0dEUNQpSSwxg@mail.gmail.com>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] 100% reliable Oops on xen 4.0.1
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Aug 13, 2012 at 05:03:06PM -0700, Peter Moody wrote:
> This seems to be some combination of Xen and the audit subsystem, but
> the attached program crashes my machine 100% of the time.
> 

Did you try a later Xen version? 4.0.1 is quite old.
For example, the latest in the Xen 4.0.x series, which is 4.0.4, or Xen 4.1.3?

-- Pasi

> steps to reproduce the crash:
> 
>  *  1) compile with gcc -m32
>  *  2) start auditd, install any rule (I've only tested syscall
> auditing, but any syscall seems to work).
>  *     /etc/init.d/auditd start ; auditctl -D ; auditctl -a
> exit,always -F arch=64 -S chmod
>  *  3) run'n wait (this only loops twice for me before dying)
>  *     ./a.out
>  *  4) bask in instantaneous kernel oops.
> 
> here's xm info from dom0
> 
> [xen2.atl] root@gntb1:~# xm info
> host                   : gntb1.atl.corp.google.com
> release                : 3.2.13-ganeti-rx6-xen0
> version                : #1 SMP Thu Jun 7 12:59:40 CEST 2012
> machine                : x86_64
> nr_cpus                : 12
> nr_nodes               : 2
> cores_per_socket       : 6
> threads_per_core       : 1
> cpu_mhz                : 2660
> hw_caps                :
> bfebfbff:2c100800:00000000:00001f40:029ee3ff:00000000:00000001:00000000
> virt_caps              : hvm
> total_memory           : 32755
> free_memory            : 22665
> node_to_cpu            : node0:0,2,4,6,8,10
>                          node1:1,3,5,7,9,11
> node_to_memory         : node0:13083
>                          node1:9582
> node_to_dma32_mem      : node0:0
>                          node1:3235
> max_node_id            : 1
> xen_major              : 4
> xen_minor              : 0
> xen_extra              : .1
> xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32
> hvm-3.0-x86_32p hvm-3.0-x86_64
> xen_scheduler          : credit
> xen_pagesize           : 4096
> platform_params        : virt_start=0xffff800000000000
> xen_changeset          : unavailable
> xen_commandline        : placeholder dom0_mem=1024M loglvl=all
> com1=115200,8n1 console=com1 iommu=0
> cc_compiler            : gcc version 4.4.3 (Ubuntu 4.4.3-4ubuntu5)
> cc_compile_by          : pmacedo
> cc_compile_domain      : google.com
> cc_compile_date        : Wed Mar 16 15:24:06 UTC 2011
> xend_config_format     : 4
> 
> I'm not sure what you need from the domU. It's running 2.6.38.8 (but
> I've seen this bug all the way up to 3.5.0-rc7, the latest I've
> tested). It's a fairly beefy setup, 32G memory and 6 cpus.
> 
> I suspect xen as opposed to auditd because:
> 
>  a) this only happens on our xen machines (though not all of them)
>  b) one of my stack traces started with
> 
> [172577.560441]  [<ffffffff810065ad>] ? xen_force_evtchn_callback+0xd/0x10
> 
> Any one have any idea what's going on?
> 
> Cheers,
> peter
> 
> -- 
> Peter Moody      Google    1.650.253.7306
> Security Engineer  pgp:0xC3410038

> /*
>  * steps:
>  *  1) compile with gcc -m32
>  *  2) start auditd, install any rule (I've only tested syscall auditing, but any syscall seems to work).
>  *     /etc/init.d/auditd start ; auditctl -D ; auditctl -a exit,always -F arch=64 -S chmod
>  *  3) run'n wait (this only loops twice for me before dying)
>  *     ./a.out
>  *  4) bask in instantaneous kernel oops.
>  [  571.282777] ------------[ cut here ]------------
>  [  571.282786] kernel BUG at fs/buffer.c:1263!
>  [  571.282790] invalid opcode: 0000 [#1] SMP
>  [  571.282795] last sysfs file: /sys/devices/system/cpu/sched_mc_power_savings
>  [  571.282798] CPU 0
>  [  571.282802] Pid: 7457, comm: a.out Not tainted 2.6.38.8-gg868-ganetixenu #1
>  [  571.282808] RIP: e030:[<ffffffff81153853>]  [<ffffffff81153853>] __find_get_block+0x1f3/0x200
>  [  571.282819] RSP: e02b:ffff88079b7ddc78  EFLAGS: 00010046
>  [  571.282822] RAX: ffff8807bc290000 RBX: ffff8806d9bb9a98 RCX: 00000000023dc17c
>  [  571.282826] RDX: 0000000000001000 RSI: 00000000023dc17c RDI: ffff8807fec29a00
>  [  571.282830] RBP: ffff88079b7ddcd8 R08: 0000000000000001 R09: ffff8806d9bb99c0
>  [  571.282834] R10: 0000000000000000 R11: 0000000000000000 R12: ffff8806d9bb99c4
>  [  571.282839] R13: ffff8806d9bb99f0 R14: ffff8807feff9060 R15: 00000000023dc17c
>  [  571.282845] FS:  00007f8f6a76a7c0(0000) GS:ffff8807fff26000(0063) knlGS:0000000000000000
>  [  571.282849] CS:  e033 DS: 002b ES: 002b CR0: 000000008005003b
>  [  571.282853] CR2: 00000000f76c6970 CR3: 00000007a250b000 CR4: 0000000000002660
>  [  571.282857] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
>  [  571.282861] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
>  [  571.282866] Process a.out (pid: 7457, threadinfo ffff88079b7dc000, task ffff8807786843e0)
>  [  571.282870] Stack:
>  [  571.282872]  ffff88079b7ddc98 ffffffff81654cd1 ffff88079b7ddca8 ffff8806d9bba440
>  [  571.282879]  ffff88079b7ddd08 ffffffff811c9294 ffff8807ffffffc3 0000000000000014
>  [  571.282887]  ffff8806d9bb9a98 ffff8806d9bb99c4 ffff8806d9bb99f0 ffff8807feff9060
>  [  571.282895] Call Trace:
>  [  571.282901]  [<ffffffff81654cd1>] ? down_read+0x11/0x30
>  [  571.282907]  [<ffffffff811c9294>] ? ext3_xattr_get+0xf4/0x2b0
>  [  571.282913]  [<ffffffff811baf88>] ext3_clear_blocks+0x128/0x190
>  [  571.282918]  [<ffffffff811bb104>] ext3_free_data+0x114/0x160
>  [  571.282923]  [<ffffffff811bbc0a>] ext3_truncate+0x87a/0x950
>  [  571.282928]  [<ffffffff812133f5>] ? journal_start+0xb5/0x100
>  [  571.282933]  [<ffffffff811bc840>] ext3_evict_inode+0x180/0x1a0
>  [  571.282938]  [<ffffffff8114065f>] evict+0x1f/0xb0
>  [  571.282945]  [<ffffffff81006d52>] ? check_events+0x12/0x20
>  [  571.282949]  [<ffffffff81140c14>] iput+0x1a4/0x290
>  [  571.282955]  [<ffffffff8113ed05>] dput+0x265/0x310
>  [  571.282959]  [<ffffffff81132435>] path_put+0x15/0x30
>  [  571.282965]  [<ffffffff810a5d31>] audit_syscall_exit+0x171/0x260
>  [  571.282971]  [<ffffffff8103ed9a>] sysexit_audit+0x21/0x5f
>  [  571.282974] Code: 82 00 05 01 00 85 c0 75 de 65 48 89 1c 25 00 05 01 00 e9 87 fe ff ff 48 89 df e8 e9 fc ff ff 4c 89 f7 e9 02 ff ff ff 0f 0b eb fe <0f> 0b eb fe 0f 0b eb fe 0f 1f 44 00 00 55 48 89 e5 41 57 49 89
>  [  571.283027] RIP  [<ffffffff81153853>] __find_get_block+0x1f3/0x200
>  [  571.283033]  RSP <ffff88079b7ddc78>
>  [  571.283036] ---[ end trace 5975ffe20808ecd2 ]---
>  *
>  */
> 
> #include <stdio.h>
> #include <sys/stat.h>
> #include <sys/types.h>
> #include <unistd.h>
> 
> #define KILLDIR "/usr/local/tmp/crasher/kill_dir"
> 
> int main(void) {
>   FILE *f;
>   char fullpath[512];
>   int i = 0;  /* initialize: i was uninitialized, so printing i++ was undefined */
> 
>   while (1) {
>     fprintf(stderr, "%d ", i++);
>     mkdir(KILLDIR, 0777);
>     chdir(KILLDIR);
>     sprintf(fullpath, "%s/file", KILLDIR);
>     f = fopen(fullpath, "w+");
>     if (f != NULL) {  /* guard against fopen failure before writing */
>       fprintf(f, "nothing to see here");
>       fclose(f);
>     }
>     unlink("/usr/local/tmp/crasher/kill_dir/file");
>     rmdir(KILLDIR);
>   }
>   return 0;
> }

> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 06:46:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 06:46:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1Atc-0003zu-Ov; Tue, 14 Aug 2012 06:46:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1T1Ata-0003zk-V5
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 06:46:31 +0000
Received: from [85.158.143.35:16724] by server-2.bemta-4.messagelabs.com id
	3C/CD-31966-644F9205; Tue, 14 Aug 2012 06:46:30 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-13.tower-21.messagelabs.com!1344926789!15218285!1
X-Originating-IP: [192.89.123.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjg5LjEyMy4yNSA9PiA0NzcwMjg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26331 invoked from network); 14 Aug 2012 06:46:29 -0000
Received: from smtp.tele.fi (HELO smtp.tele.fi) (192.89.123.25)
	by server-13.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 14 Aug 2012 06:46:29 -0000
X-Originating-Ip: [194.89.68.22]
Received: from ydin.reaktio.net (reaktio.net [194.89.68.22])
	by smtp.tele.fi (Postfix) with ESMTP id B59D51255;
	Tue, 14 Aug 2012 09:46:28 +0300 (EEST)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id 6746B2005D; Tue, 14 Aug 2012 09:46:28 +0300 (EEST)
Date: Tue, 14 Aug 2012 09:46:28 +0300
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: Peter Moody <pmoody@google.com>
Message-ID: <20120814064628.GV19851@reaktio.net>
References: <CALnj_=4U+6n2-KSaLgbS5OpP3mNKwPredFYjG-0dEUNQpSSwxg@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CALnj_=4U+6n2-KSaLgbS5OpP3mNKwPredFYjG-0dEUNQpSSwxg@mail.gmail.com>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] 100% reliable Oops on xen 4.0.1
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Aug 13, 2012 at 05:03:06PM -0700, Peter Moody wrote:
> This seems to be some combination of Xen and the audit subsystem, but
> the attached program crashes my machine 100% of the time.
> 

Did you try with a later Xen version? 4.0.1 is quite old.
For example, the latest in the Xen 4.0.x series, which is 4.0.4, or Xen 4.1.3?

-- Pasi

> steps to reproduce the crash:
> 
>  *  1) compile with gcc -m32
>  *  2) start auditd, install any rule (I've only tested syscall
> auditing, but any syscall seems to work).
>  *     /etc/init.d/auditd start ; auditctl -D ; auditctl -a
> exit,always -F arch=64 -S chmod
>  *  3) run'n wait (this only loops twice for me before dying)
>  *     ./a.out
>  *  4) bask in instantaneous kernel oops.
> 
> here's xm info from dom0
> 
> [xen2.atl] root@gntb1:~# xm info
> host                   : gntb1.atl.corp.google.com
> release                : 3.2.13-ganeti-rx6-xen0
> version                : #1 SMP Thu Jun 7 12:59:40 CEST 2012
> machine                : x86_64
> nr_cpus                : 12
> nr_nodes               : 2
> cores_per_socket       : 6
> threads_per_core       : 1
> cpu_mhz                : 2660
> hw_caps                :
> bfebfbff:2c100800:00000000:00001f40:029ee3ff:00000000:00000001:00000000
> virt_caps              : hvm
> total_memory           : 32755
> free_memory            : 22665
> node_to_cpu            : node0:0,2,4,6,8,10
>                          node1:1,3,5,7,9,11
> node_to_memory         : node0:13083
>                          node1:9582
> node_to_dma32_mem      : node0:0
>                          node1:3235
> max_node_id            : 1
> xen_major              : 4
> xen_minor              : 0
> xen_extra              : .1
> xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32
> hvm-3.0-x86_32p hvm-3.0-x86_64
> xen_scheduler          : credit
> xen_pagesize           : 4096
> platform_params        : virt_start=0xffff800000000000
> xen_changeset          : unavailable
> xen_commandline        : placeholder dom0_mem=1024M loglvl=all
> com1=115200,8n1 console=com1 iommu=0
> cc_compiler            : gcc version 4.4.3 (Ubuntu 4.4.3-4ubuntu5)
> cc_compile_by          : pmacedo
> cc_compile_domain      : google.com
> cc_compile_date        : Wed Mar 16 15:24:06 UTC 2011
> xend_config_format     : 4
> 
> I'm not sure what you need from the domU. It's running 2.6.38.8 (but
> I've seen this bug all the way up to 3.5.0-rc7, the latest I've
> tested). It's a fairly beefy setup, 32G memory and 6 cpus.
> 
> I suspect xen as opposed to auditd because:
> 
>  a) this only happens on our xen machines (though not all of them)
>  b) one of my stack traces started with
> 
> [172577.560441]  [<ffffffff810065ad>] ? xen_force_evtchn_callback+0xd/0x10
> 
> Anyone have any idea what's going on?
> 
> Cheers,
> peter
> 
> -- 
> Peter Moody      Google    1.650.253.7306
> Security Engineer  pgp:0xC3410038

> /*
>  * steps:
>  *  1) compile with gcc -m32
>  *  2) start auditd, install any rule (I've only tested syscall auditing, but any syscall seems to work).
>  *     /etc/init.d/auditd start ; auditctl -D ; auditctl -a exit,always -F arch=64 -S chmod
>  *  3) run'n wait (this only loops twice for me before dying)
>  *     ./a.out
>  *  4) bask in instantaneous kernel oops.
>  [  571.282777] ------------[ cut here ]------------
>  [  571.282786] kernel BUG at fs/buffer.c:1263!
>  [  571.282790] invalid opcode: 0000 [#1] SMP
>  [  571.282795] last sysfs file: /sys/devices/system/cpu/sched_mc_power_savings
>  [  571.282798] CPU 0
>  [  571.282802] Pid: 7457, comm: a.out Not tainted 2.6.38.8-gg868-ganetixenu #1
>  [  571.282808] RIP: e030:[<ffffffff81153853>]  [<ffffffff81153853>] __find_get_block+0x1f3/0x200
>  [  571.282819] RSP: e02b:ffff88079b7ddc78  EFLAGS: 00010046
>  [  571.282822] RAX: ffff8807bc290000 RBX: ffff8806d9bb9a98 RCX: 00000000023dc17c
>  [  571.282826] RDX: 0000000000001000 RSI: 00000000023dc17c RDI: ffff8807fec29a00
>  [  571.282830] RBP: ffff88079b7ddcd8 R08: 0000000000000001 R09: ffff8806d9bb99c0
>  [  571.282834] R10: 0000000000000000 R11: 0000000000000000 R12: ffff8806d9bb99c4
>  [  571.282839] R13: ffff8806d9bb99f0 R14: ffff8807feff9060 R15: 00000000023dc17c
>  [  571.282845] FS:  00007f8f6a76a7c0(0000) GS:ffff8807fff26000(0063) knlGS:0000000000000000
>  [  571.282849] CS:  e033 DS: 002b ES: 002b CR0: 000000008005003b
>  [  571.282853] CR2: 00000000f76c6970 CR3: 00000007a250b000 CR4: 0000000000002660
>  [  571.282857] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
>  [  571.282861] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
>  [  571.282866] Process a.out (pid: 7457, threadinfo ffff88079b7dc000, task ffff8807786843e0)
>  [  571.282870] Stack:
>  [  571.282872]  ffff88079b7ddc98 ffffffff81654cd1 ffff88079b7ddca8 ffff8806d9bba440
>  [  571.282879]  ffff88079b7ddd08 ffffffff811c9294 ffff8807ffffffc3 0000000000000014
>  [  571.282887]  ffff8806d9bb9a98 ffff8806d9bb99c4 ffff8806d9bb99f0 ffff8807feff9060
>  [  571.282895] Call Trace:
>  [  571.282901]  [<ffffffff81654cd1>] ? down_read+0x11/0x30
>  [  571.282907]  [<ffffffff811c9294>] ? ext3_xattr_get+0xf4/0x2b0
>  [  571.282913]  [<ffffffff811baf88>] ext3_clear_blocks+0x128/0x190
>  [  571.282918]  [<ffffffff811bb104>] ext3_free_data+0x114/0x160
>  [  571.282923]  [<ffffffff811bbc0a>] ext3_truncate+0x87a/0x950
>  [  571.282928]  [<ffffffff812133f5>] ? journal_start+0xb5/0x100
>  [  571.282933]  [<ffffffff811bc840>] ext3_evict_inode+0x180/0x1a0
>  [  571.282938]  [<ffffffff8114065f>] evict+0x1f/0xb0
>  [  571.282945]  [<ffffffff81006d52>] ? check_events+0x12/0x20
>  [  571.282949]  [<ffffffff81140c14>] iput+0x1a4/0x290
>  [  571.282955]  [<ffffffff8113ed05>] dput+0x265/0x310
>  [  571.282959]  [<ffffffff81132435>] path_put+0x15/0x30
>  [  571.282965]  [<ffffffff810a5d31>] audit_syscall_exit+0x171/0x260
>  [  571.282971]  [<ffffffff8103ed9a>] sysexit_audit+0x21/0x5f
>  [  571.282974] Code: 82 00 05 01 00 85 c0 75 de 65 48 89 1c 25 00 05 01 00 e9 87 fe ff ff 48 89 df e8 e9 fc ff ff 4c 89 f7 e9 02 ff ff ff 0f 0b eb fe <0f> 0b eb fe 0f 0b eb fe 0f 1f 44 00 00 55 48 89 e5 41 57 49 89
>  [  571.283027] RIP  [<ffffffff81153853>] __find_get_block+0x1f3/0x200
>  [  571.283033]  RSP <ffff88079b7ddc78>
>  [  571.283036] ---[ end trace 5975ffe20808ecd2 ]---
>  *
>  */
> 
> #include <stdio.h>
> #include <sys/stat.h>
> #include <sys/types.h>
> #include <unistd.h>
> 
> #define KILLDIR "/usr/local/tmp/crasher/kill_dir"
> 
> int main(void) {
>   FILE *f;
>   char fullpath[512];
>   int i = 0;  /* must be initialized before the i++ below */
> 
>   while (1) {
>     fprintf(stderr, "%d ", i++);
>     mkdir(KILLDIR, 0777);
>     chdir(KILLDIR);
>     sprintf(fullpath, "%s/file", KILLDIR);
>     f = fopen(fullpath, "w+");
>     if (f != NULL) {
>       fprintf(f, "nothing to see here");
>       fclose(f);
>     }
>     unlink("/usr/local/tmp/crasher/kill_dir/file");
>     rmdir(KILLDIR);
>   }
>   return 0;
> }
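For anyone wanting to exercise the same mkdir/chdir/create/unlink/rmdir sequence without the unbounded loop (e.g. on a kernel where the underlying bug is fixed), the reproducer can be restated as a bounded, error-checked helper. This is a sketch only: the /tmp path, the `run_cycles` name, the loop bound, and the early-return error handling are choices made here, not part of the original program.

```c
/* Bounded, error-checked variant of the reproducer above.
 * The /tmp path, loop bound, and function name are illustrative
 * choices for this sketch; they are not from the original report. */
#include <errno.h>
#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

#define KILLDIR "/tmp/crasher_kill_dir"

/* Perform the same mkdir/chdir/create/unlink/rmdir cycle n times.
 * Returns 0 on success, -1 on the first failing call. */
int run_cycles(int n)
{
    char fullpath[512];
    int i;

    for (i = 0; i < n; i++) {
        if (mkdir(KILLDIR, 0777) != 0 && errno != EEXIST)
            return -1;
        if (chdir(KILLDIR) != 0)
            return -1;
        snprintf(fullpath, sizeof fullpath, "%s/file", KILLDIR);
        FILE *f = fopen(fullpath, "w+");
        if (f == NULL)
            return -1;
        fprintf(f, "nothing to see here");
        if (fclose(f) != 0)
            return -1;
        if (unlink(fullpath) != 0)
            return -1;
        if (chdir("/tmp") != 0)   /* step out before removing the directory */
            return -1;
        if (rmdir(KILLDIR) != 0)
            return -1;
    }
    return 0;
}
```

Calling run_cycles(2) from a main() mirrors the two iterations the reporter saw before the oops; on an unaffected kernel it simply returns 0.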



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 07:28:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 07:28:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1BXj-0004gX-6s; Tue, 14 Aug 2012 07:27:59 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T1BXh-0004gS-5T
	for xen-devel@lists.xensource.com; Tue, 14 Aug 2012 07:27:57 +0000
Received: from [85.158.138.51:47236] by server-3.bemta-3.messagelabs.com id
	70/43-13809-CFDF9205; Tue, 14 Aug 2012 07:27:56 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-5.tower-174.messagelabs.com!1344929275!28218166!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkwMzI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26440 invoked from network); 14 Aug 2012 07:27:55 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-5.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 07:27:55 -0000
X-IronPort-AV: E=Sophos;i="4.77,764,1336348800"; d="scan'208";a="13994996"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Aug 2012 07:27:54 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 14 Aug 2012 08:27:54 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1T1BXe-0001Tm-7p;
	Tue, 14 Aug 2012 07:27:54 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1T1BXd-0007uD-Oc;
	Tue, 14 Aug 2012 08:27:54 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13599-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 14 Aug 2012 08:27:53 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13599: tolerable trouble:
	broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13599 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13599/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-i386-i386-pv             3 host-install(3)           broken pass in 13598
 test-amd64-amd64-pv           3 host-install(3)           broken pass in 13598

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13598
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13598
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13598
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13598

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  33d596f46521
baseline version:
 xen                  33d596f46521

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          broken  
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            broken  
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 07:38:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 07:38:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1Bha-0004vF-IU; Tue, 14 Aug 2012 07:38:10 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T1BhZ-0004v8-Et
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 07:38:09 +0000
Received: from [85.158.143.99:57822] by server-1.bemta-4.messagelabs.com id
	D3/1E-07754-0600A205; Tue, 14 Aug 2012 07:38:08 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-4.tower-216.messagelabs.com!1344929887!22772090!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 811 invoked from network); 14 Aug 2012 07:38:07 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-4.tower-216.messagelabs.com with SMTP;
	14 Aug 2012 07:38:07 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 14 Aug 2012 08:38:06 +0100
Message-Id: <502A1C7B0200007800094A8F@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Tue, 14 Aug 2012 08:38:03 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Jackson" <Ian.Jackson@eu.citrix.com>,
	"Ben Guthro" <ben@guthro.net>,"GabrielX Wu" <gabrielx.wu@intel.com>,
	"Yongjie Ren" <yongjie.ren@intel.com>
References: <1344243491.11339.11.camel@zakaz.uk.xensource.com>
	<5028C34C02000078000945FC@nat28.tlf.novell.com>
	<CAOvdn6Vezt5gtsykUPxUdf_0-ubqjAJ5ygnV9hPiON1Geqq9Ag@mail.gmail.com>
In-Reply-To: <CAOvdn6Vezt5gtsykUPxUdf_0-ubqjAJ5ygnV9hPiON1Geqq9Ag@mail.gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] 4.2 TODO / Release Plan
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 13.08.12 at 19:26, Ben Guthro <ben@guthro.net> wrote:
> Is this exclusive to my setup? Does S3 work elsewhere with 4.2?
> It seems to fail 100% of the time on 100% of x86 machines I have tried.

Don't know. The systems I have tried S3 on have problems
even with native Linux, so there's not much point playing
with Xen on them.

Ian(J), I don't suppose this is part of the regression tests?

Gabriel, Yongjie - ISTR this is part of your regular VMX testing,
and the most recent report doesn't mention any problem.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 07:41:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 07:41:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1BkN-00052I-4H; Tue, 14 Aug 2012 07:41:03 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T1BkL-00052C-QS
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 07:41:01 +0000
Received: from [85.158.143.35:7700] by server-1.bemta-4.messagelabs.com id
	40/23-07754-D010A205; Tue, 14 Aug 2012 07:41:01 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1344930045!15813224!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13921 invoked from network); 14 Aug 2012 07:40:46 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-14.tower-21.messagelabs.com with SMTP;
	14 Aug 2012 07:40:46 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 14 Aug 2012 08:40:45 +0100
Message-Id: <502A1D1B0200007800094A9B@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Tue, 14 Aug 2012 08:40:43 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Jackson" <Ian.Jackson@eu.citrix.com>
References: <50225ACD020000780009380F@nat28.tlf.novell.com>
	<20521.1210.708532.71249@mariner.uk.xensource.com>
	<50293D2E0200007800094884@nat28.tlf.novell.com>
	<20521.13484.181539.827170@mariner.uk.xensource.com>
In-Reply-To: <20521.13484.181539.827170@mariner.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] libxl: fix build for gcc prior to 4.3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 13.08.12 at 19:09, Ian Jackson <Ian.Jackson@eu.citrix.com> wrote:
> Jan Beulich writes ("Re: [Xen-devel] [PATCH] libxl: fix build for gcc prior to 
> 4.3"):
>> While I think the other version was reasonable, here you go.
> 
> Thanks,
> 
> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
> Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
> 
> I removed the now-redundant paragraph about ((deprecated)) from the
> commit message.

Oh, thanks and sorry.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 08:28:26 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 08:28:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1CTq-00065M-VU; Tue, 14 Aug 2012 08:28:02 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T1CTp-000659-G0
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 08:28:01 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1344932853!1751934!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5908 invoked from network); 14 Aug 2012 08:27:33 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-15.tower-27.messagelabs.com with SMTP;
	14 Aug 2012 08:27:33 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 14 Aug 2012 09:27:32 +0100
Message-Id: <502A28130200007800094ADE@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Tue, 14 Aug 2012 09:27:31 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Peter Moody" <pmoody@google.com>
References: <CALnj_=4U+6n2-KSaLgbS5OpP3mNKwPredFYjG-0dEUNQpSSwxg@mail.gmail.com>
In-Reply-To: <CALnj_=4U+6n2-KSaLgbS5OpP3mNKwPredFYjG-0dEUNQpSSwxg@mail.gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] 100% reliable Oops on xen 4.0.1
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 14.08.12 at 02:03, Peter Moody <pmoody@google.com> wrote:
> I'm not sure what you need from the domU. It's running 2.6.38.8 (but
> I've seen this bug all the way up to 3.5.0-rc7, the latest I've
> tested). It's a fairly beefy setup, 32G memory and 6 cpus.

Do these kernel versions refer to plain upstream ones?

Is the subject's reference to 4.0.1 in any way meaningful? I.e.
does the problem not occur with other Xen versions?

> I suspect xen as opposed to auditd because:
> 
>  a) this only happens on our xen machines (though not all of them)
>  b) one of my stack traces started with
> 
> [172577.560441]  [<ffffffff810065ad>] ? xen_force_evtchn_callback+0xd/0x10

This is a weak indication of a problem with Xen, but could just
as well indicate a problem that only surfaces under Xen. It would
certainly help if you included the full oops message (or multiple
of them if they're meaningfully different).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 08:45:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 08:45:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1CkN-0006YY-Nk; Tue, 14 Aug 2012 08:45:07 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T1CkM-0006YT-OO
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 08:45:06 +0000
Received: from [85.158.143.99:35621] by server-2.bemta-4.messagelabs.com id
	23/C6-31966-1101A205; Tue, 14 Aug 2012 08:45:05 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-216.messagelabs.com!1344933905!22751397!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5049 invoked from network); 14 Aug 2012 08:45:05 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-216.messagelabs.com with SMTP;
	14 Aug 2012 08:45:05 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 14 Aug 2012 09:45:04 +0100
Message-Id: <502A2C2E0200007800094AEC@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Tue, 14 Aug 2012 09:45:02 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
Mime-Version: 1.0
Content-Disposition: inline
Subject: [Xen-devel] [PATCH 0/2] x86: PoD adjustments
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

1: x86/PoD: prevent guest from being destroyed upon early access to its memory
2: x86/PoD: clean up types

The meat of it is the first one; the second one is just addressing
inconsistencies I noticed while putting together the former.

Signed-off-by: Jan Beulich <jbeulich@suse.com>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 08:46:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 08:46:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1Cl7-0006av-4i; Tue, 14 Aug 2012 08:45:53 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T1Cl5-0006an-Pu
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 08:45:52 +0000
Received: from [85.158.143.99:14230] by server-2.bemta-4.messagelabs.com id
	70/78-31966-F301A205; Tue, 14 Aug 2012 08:45:51 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-15.tower-216.messagelabs.com!1344933949!28047634!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4851 invoked from network); 14 Aug 2012 08:45:49 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-15.tower-216.messagelabs.com with SMTP;
	14 Aug 2012 08:45:49 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 14 Aug 2012 09:45:48 +0100
Message-Id: <502A2C590200007800094AF0@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Tue, 14 Aug 2012 09:45:45 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part2B1A5C29.0__="
Cc: Juergen Gross <juergen.gross@ts.fujitsu.com>
Subject: [Xen-devel] [PATCH 1/2] x86/PoD: prevent guest from being destroyed
 upon early access to its memory
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part2B1A5C29.0__=
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Content-Disposition: inline

x86/PoD: prevent guest from being destroyed upon early access to its memory

When an external agent (e.g. a monitoring daemon) happens to access the
memory of a PoD guest before the PoD target has been set, that access
must fail, since there is not yet any page in the PoD cache and only the
space above the low 2Mb gets scanned for victim pages (while only the
low 2Mb has had real pages populated so far).

To accommodate this
- set the PoD target first
- do all physmap population in PoD mode (i.e. not just large [2Mb or
  1Gb] pages)
- slightly lift the restrictions enforced by p2m_pod_set_mem_target()
  to accommodate the changed tools behavior

Tested-by: Jürgen Groß <juergen.gross@ts.fujitsu.com>
           (in a 4.0.x based incarnation)
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/tools/libxc/xc_hvm_build_x86.c
+++ b/tools/libxc/xc_hvm_build_x86.c
@@ -160,7 +160,7 @@ static int setup_guest(xc_interface *xch
     int pod_mode = 0;
 
     if ( nr_pages > target_pages )
-        pod_mode = 1;
+        pod_mode = XENMEMF_populate_on_demand;
 
     memset(&elf, 0, sizeof(elf));
     if ( elf_init(&elf, image, image_size) != 0 )
@@ -197,6 +197,22 @@ static int setup_guest(xc_interface *xch
     for ( i = mmio_start >> PAGE_SHIFT; i < nr_pages; i++ )
         page_array[i] += mmio_size >> PAGE_SHIFT;
 
+    if ( pod_mode )
+    {
+        /*
+         * Subtract 0x20 from target_pages for the VGA "hole".  Xen will
+         * adjust the PoD cache size so that domain tot_pages will be
+         * target_pages - 0x20 after this call.
+         */
+        rc = xc_domain_set_pod_target(xch, dom, target_pages - 0x20,
+                                      NULL, NULL, NULL);
+        if ( rc != 0 )
+        {
+            PERROR("Could not set PoD target for HVM guest.\n");
+            goto error_out;
+        }
+    }
+
     /*
      * Allocate memory for HVM guest, skipping VGA hole 0xA0000-0xC0000.
      *
@@ -208,7 +224,7 @@ static int setup_guest(xc_interface *xch
      * ensure that we can be preempted and hence dom0 remains responsive.
      */
     rc = xc_domain_populate_physmap_exact(
-        xch, dom, 0xa0, 0, 0, &page_array[0x00]);
+        xch, dom, 0xa0, 0, pod_mode, &page_array[0x00]);
     cur_pages = 0xc0;
     stat_normal_pages = 0xc0;
     while ( (rc == 0) && (nr_pages > cur_pages) )
@@ -247,8 +263,7 @@ static int setup_guest(xc_interface *xch
                 sp_extents[i] = page_array[cur_pages+(i<<SUPERPAGE_1GB_SHIFT)];
 
             done = xc_domain_populate_physmap(xch, dom, nr_extents, SUPERPAGE_1GB_SHIFT,
-                                              pod_mode ? XENMEMF_populate_on_demand : 0,
-                                              sp_extents);
+                                              pod_mode, sp_extents);
 
             if ( done > 0 )
             {
@@ -285,8 +300,7 @@ static int setup_guest(xc_interface *xch
                     sp_extents[i] = page_array[cur_pages+(i<<SUPERPAGE_2MB_SHIFT)];
 
                 done = xc_domain_populate_physmap(xch, dom, nr_extents, SUPERPAGE_2MB_SHIFT,
-                                                  pod_mode ? XENMEMF_populate_on_demand : 0,
-                                                  sp_extents);
+                                                  pod_mode, sp_extents);
 
                 if ( done > 0 )
                 {
@@ -302,19 +316,12 @@ static int setup_guest(xc_interface *xch
         if ( count != 0 )
         {
             rc = xc_domain_populate_physmap_exact(
-                xch, dom, count, 0, 0, &page_array[cur_pages]);
+                xch, dom, count, 0, pod_mode, &page_array[cur_pages]);
             cur_pages += count;
             stat_normal_pages += count;
         }
     }
 
-    /* Subtract 0x20 from target_pages for the VGA "hole".  Xen will
-     * adjust the PoD cache size so that domain tot_pages will be
-     * target_pages - 0x20 after this call. */
-    if ( pod_mode )
-        rc = xc_domain_set_pod_target(xch, dom, target_pages - 0x20,
-                                      NULL, NULL, NULL);
-
     if ( rc != 0 )
     {
         PERROR("Could not allocate memory for HVM guest.");
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -344,8 +344,9 @@ p2m_pod_set_mem_target(struct domain *d,
 
     pod_lock(p2m);
 
-    /* P == B: Nothing to do. */
-    if ( p2m->pod.entry_count == 0 )
+    /* P == B: Nothing to do (unless the guest is being created). */
+    populated = d->tot_pages - p2m->pod.count;
+    if ( populated > 0 && p2m->pod.entry_count == 0 )
         goto out;
 
     /* Don't do anything if the domain is being torn down */
@@ -357,13 +358,11 @@ p2m_pod_set_mem_target(struct domain *d,
     if ( target < d->tot_pages )
         goto out;
 
-    populated  = d->tot_pages - p2m->pod.count;
-
     pod_target = target - populated;
 
     /* B < T': Set the cache size equal to # of outstanding entries,
      * let the balloon driver fill in the rest. */
-    if ( pod_target > p2m->pod.entry_count )
+    if ( populated > 0 && pod_target > p2m->pod.entry_count )
         pod_target = p2m->pod.entry_count;
 
     ASSERT( pod_target >= p2m->pod.count );



--=__Part2B1A5C29.0__=
Content-Type: text/plain; name="x86-PoD-early-access.patch"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment; filename="x86-PoD-early-access.patch"

x86/PoD: prevent guest from being destroyed upon early access to its =
memory=0A=0AWhen an external agent (e.g. a monitoring daemon) happens to =
access the=0Amemory of a PoD guest prior to setting the PoD target, that =
access must=0Afail for there not being any page in the PoD cache, and only =
the space=0Aabove the low 2Mb gets scanned for victim pages (while only =
the low 2Mb=0Agot real pages populated so far).=0A=0ATo accomodate for =
this=0A- set the PoD target first=0A- do all physmap population in PoD =
mode (i.e. not just large [2Mb or=0A  1Gb] pages)=0A- slightly lift the =
restrictions enforced by p2m_pod_set_mem_target()=0A  to accomodate for =
the changed tools behavior=0A=0ATested-by: J=FCrgen Gro=DF <juergen.gross@t=
s.fujitsu.com>=0A           (in a 4.0.x based incarnation)=0ASigned-off-by:=
 Jan Beulich <jbeulich@suse.com>=0A=0A--- a/tools/libxc/xc_hvm_build_x86.c=
=0A+++ b/tools/libxc/xc_hvm_build_x86.c=0A@@ -160,7 +160,7 @@ static int =
setup_guest(xc_interface *xch=0A     int pod_mode =3D 0;=0A =0A     if ( =
nr_pages > target_pages )=0A-        pod_mode =3D 1;=0A+        pod_mode =
=3D XENMEMF_populate_on_demand;=0A =0A     memset(&elf, 0, sizeof(elf));=0A=
     if ( elf_init(&elf, image, image_size) !=3D 0 )=0A@@ -197,6 +197,22 =
@@ static int setup_guest(xc_interface *xch=0A     for ( i =3D mmio_start =
>> PAGE_SHIFT; i < nr_pages; i++ )=0A         page_array[i] +=3D mmio_size =
>> PAGE_SHIFT;=0A =0A+    if ( pod_mode )=0A+    {=0A+        /*=0A+       =
  * Subtract 0x20 from target_pages for the VGA "hole".  Xen will=0A+      =
   * adjust the PoD cache size so that domain tot_pages will be=0A+        =
 * target_pages - 0x20 after this call.=0A+         */=0A+        rc =3D =
xc_domain_set_pod_target(xch, dom, target_pages - 0x20,=0A+                =
                      NULL, NULL, NULL);=0A+        if ( rc !=3D 0 )=0A+   =

From xen-devel-bounces@lists.xen.org Tue Aug 14 08:46:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 08:46:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1Cl7-0006av-4i; Tue, 14 Aug 2012 08:45:53 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T1Cl5-0006an-Pu
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 08:45:52 +0000
Received: from [85.158.143.99:14230] by server-2.bemta-4.messagelabs.com id
	70/78-31966-F301A205; Tue, 14 Aug 2012 08:45:51 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-15.tower-216.messagelabs.com!1344933949!28047634!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4851 invoked from network); 14 Aug 2012 08:45:49 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-15.tower-216.messagelabs.com with SMTP;
	14 Aug 2012 08:45:49 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 14 Aug 2012 09:45:48 +0100
Message-Id: <502A2C590200007800094AF0@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Tue, 14 Aug 2012 09:45:45 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part2B1A5C29.0__="
Cc: Juergen Gross <juergen.gross@ts.fujitsu.com>
Subject: [Xen-devel] [PATCH 1/2] x86/PoD: prevent guest from being destroyed
 upon early access to its memory
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part2B1A5C29.0__=
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Content-Disposition: inline

x86/PoD: prevent guest from being destroyed upon early access to its memory

When an external agent (e.g. a monitoring daemon) happens to access the
memory of a PoD guest before the PoD target has been set, that access
must fail, since there is not yet any page in the PoD cache and only the
space above the low 2Mb gets scanned for victim pages (while only the
low 2Mb has had real pages populated so far).

To accommodate this
- set the PoD target first
- do all physmap population in PoD mode (i.e. not just large [2Mb or
  1Gb] pages)
- slightly lift the restrictions enforced by p2m_pod_set_mem_target()
  to accommodate the changed tools behavior

Tested-by: Jürgen Groß <juergen.gross@ts.fujitsu.com>
           (in a 4.0.x based incarnation)
Signed-off-by: Jan Beulich <jbeulich@suse.com>
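
The magic numbers 0xa0, 0xc0, and the 0x20 subtracted from target_pages in the
diff below all derive from the VGA hole at 0xA0000-0xC0000. As a quick sanity
check, here is a small standalone sketch (illustrative only; the helper names
are mine, PAGE_SHIFT assumed 12 as on x86):

```c
#include <assert.h>

#define PAGE_SHIFT 12 /* 4KiB pages on x86 */

/* 4KiB page counts behind the literals in setup_guest(): pages below
 * the VGA hole, the hole's size (the 0x20 subtracted from the PoD
 * target), and the page index where population resumes. */
unsigned long pages_below_vga_hole(void)  { return 0xA0000 >> PAGE_SHIFT; }
unsigned long vga_hole_pages(void)        { return (0xC0000 - 0xA0000) >> PAGE_SHIFT; }
unsigned long first_page_above_hole(void) { return 0xC0000 >> PAGE_SHIFT; }
```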

--- a/tools/libxc/xc_hvm_build_x86.c
+++ b/tools/libxc/xc_hvm_build_x86.c
@@ -160,7 +160,7 @@ static int setup_guest(xc_interface *xch
     int pod_mode = 0;
 
     if ( nr_pages > target_pages )
-        pod_mode = 1;
+        pod_mode = XENMEMF_populate_on_demand;
 
     memset(&elf, 0, sizeof(elf));
     if ( elf_init(&elf, image, image_size) != 0 )
@@ -197,6 +197,22 @@ static int setup_guest(xc_interface *xch
     for ( i = mmio_start >> PAGE_SHIFT; i < nr_pages; i++ )
         page_array[i] += mmio_size >> PAGE_SHIFT;
 
+    if ( pod_mode )
+    {
+        /*
+         * Subtract 0x20 from target_pages for the VGA "hole".  Xen will
+         * adjust the PoD cache size so that domain tot_pages will be
+         * target_pages - 0x20 after this call.
+         */
+        rc = xc_domain_set_pod_target(xch, dom, target_pages - 0x20,
+                                      NULL, NULL, NULL);
+        if ( rc != 0 )
+        {
+            PERROR("Could not set PoD target for HVM guest.\n");
+            goto error_out;
+        }
+    }
+
     /*
      * Allocate memory for HVM guest, skipping VGA hole 0xA0000-0xC0000.
      *
@@ -208,7 +224,7 @@ static int setup_guest(xc_interface *xch
      * ensure that we can be preempted and hence dom0 remains responsive.
      */
     rc = xc_domain_populate_physmap_exact(
-        xch, dom, 0xa0, 0, 0, &page_array[0x00]);
+        xch, dom, 0xa0, 0, pod_mode, &page_array[0x00]);
     cur_pages = 0xc0;
     stat_normal_pages = 0xc0;
     while ( (rc == 0) && (nr_pages > cur_pages) )
@@ -247,8 +263,7 @@ static int setup_guest(xc_interface *xch
                 sp_extents[i] = page_array[cur_pages+(i<<SUPERPAGE_1GB_SHIFT)];
 
             done = xc_domain_populate_physmap(xch, dom, nr_extents, SUPERPAGE_1GB_SHIFT,
-                                              pod_mode ? XENMEMF_populate_on_demand : 0,
-                                              sp_extents);
+                                              pod_mode, sp_extents);
 
             if ( done > 0 )
             {
@@ -285,8 +300,7 @@ static int setup_guest(xc_interface *xch
                     sp_extents[i] = page_array[cur_pages+(i<<SUPERPAGE_2MB_SHIFT)];
 
                 done = xc_domain_populate_physmap(xch, dom, nr_extents, SUPERPAGE_2MB_SHIFT,
-                                                  pod_mode ? XENMEMF_populate_on_demand : 0,
-                                                  sp_extents);
+                                                  pod_mode, sp_extents);
 
                 if ( done > 0 )
                 {
@@ -302,19 +316,12 @@ static int setup_guest(xc_interface *xch
         if ( count != 0 )
         {
             rc = xc_domain_populate_physmap_exact(
-                xch, dom, count, 0, 0, &page_array[cur_pages]);
+                xch, dom, count, 0, pod_mode, &page_array[cur_pages]);
             cur_pages += count;
             stat_normal_pages += count;
         }
     }
 
-    /* Subtract 0x20 from target_pages for the VGA "hole".  Xen will
-     * adjust the PoD cache size so that domain tot_pages will be
-     * target_pages - 0x20 after this call. */
-    if ( pod_mode )
-        rc = xc_domain_set_pod_target(xch, dom, target_pages - 0x20,
-                                      NULL, NULL, NULL);
-
     if ( rc != 0 )
     {
         PERROR("Could not allocate memory for HVM guest.");
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -344,8 +344,9 @@ p2m_pod_set_mem_target(struct domain *d,
 
     pod_lock(p2m);
 
-    /* P == B: Nothing to do. */
-    if ( p2m->pod.entry_count == 0 )
+    /* P == B: Nothing to do (unless the guest is being created). */
+    populated = d->tot_pages - p2m->pod.count;
+    if ( populated > 0 && p2m->pod.entry_count == 0 )
         goto out;
 
     /* Don't do anything if the domain is being torn down */
@@ -357,13 +358,11 @@ p2m_pod_set_mem_target(struct domain *d,
     if ( target < d->tot_pages )
         goto out;
 
-    populated  = d->tot_pages - p2m->pod.count;
-
     pod_target = target - populated;
 
     /* B < T': Set the cache size equal to # of outstanding entries,
      * let the balloon driver fill in the rest. */
-    if ( pod_target > p2m->pod.entry_count )
+    if ( populated > 0 && pod_target > p2m->pod.entry_count )
         pod_target = p2m->pod.entry_count;
 
     ASSERT( pod_target >= p2m->pod.count );
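
The hypervisor-side hunks above can be summarized by a standalone model of the
p2m_pod_set_mem_target() clamping arithmetic after this patch. This is a
simplification (plain longs, no locking, no struct domain; the function name
and parameters are illustrative, not Xen API):

```c
#include <assert.h>

/* Model of the post-patch p2m_pod_set_mem_target() arithmetic.
 * tot_pages, pod_count and entry_count stand in for d->tot_pages,
 * p2m->pod.count and p2m->pod.entry_count. Returns the new cache
 * target, or the unchanged pod_count when there is nothing to do. */
long pod_cache_target(long tot_pages, long pod_count,
                      long entry_count, long target)
{
    long populated = tot_pages - pod_count;
    long pod_target;

    /* P == B: nothing to do -- unless the guest is still being
     * created (populated == 0), the case this patch newly allows. */
    if (populated > 0 && entry_count == 0)
        return pod_count;

    /* Refuse to shrink below the current allocation. */
    if (target < tot_pages)
        return pod_count;

    pod_target = target - populated;

    /* B < T': clamp the cache to the outstanding PoD entries and
     * let the balloon driver fill in the rest. */
    if (populated > 0 && pod_target > entry_count)
        pod_target = entry_count;

    return pod_target;
}
```

During guest creation (nothing populated yet, populated == 0) the whole target
becomes PoD cache, which is what lets the tools set the target before
populating the physmap.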



--=__Part2B1A5C29.0__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part2B1A5C29.0__=--


From xen-devel-bounces@lists.xen.org Tue Aug 14 08:46:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 08:46:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1Clg-0006eC-If; Tue, 14 Aug 2012 08:46:28 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T1Clf-0006dy-Am
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 08:46:27 +0000
Received: from [85.158.143.99:20799] by server-1.bemta-4.messagelabs.com id
	10/64-07754-2601A205; Tue, 14 Aug 2012 08:46:26 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1344933985!27500259!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11164 invoked from network); 14 Aug 2012 08:46:25 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-3.tower-216.messagelabs.com with SMTP;
	14 Aug 2012 08:46:25 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 14 Aug 2012 09:46:25 +0100
Message-Id: <502A2C7D0200007800094AF4@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Tue, 14 Aug 2012 09:46:21 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part4F7E384D.0__="
Subject: [Xen-devel] [PATCH 2/2] x86/PoD: clean up types
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part4F7E384D.0__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

GMFN values must undoubtedly be "unsigned long". "count" and
"entry_count", being signed, should also be "long", as otherwise they
cannot hold all values representable in "d->tot_pages" (which is
currently "uint32_t").

Beyond that, the patch doesn't convert everything to "long", since in
many places it is clear that "int" suffices. Where "long" is already in
partial use, however, the change is made.

Furthermore, page order values have no need to be "long".

Finally, in the course of updating a few printk messages anyway, some
also get slightly shortened (to focus on the relevant information).

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -66,7 +66,7 @@ static inline void unlock_page_alloc(str
 static int
 p2m_pod_cache_add(struct p2m_domain *p2m,
                   struct page_info *page,
-                  unsigned long order)
+                  unsigned int order)
 {
     int i;
     struct page_info *p;
@@ -80,7 +80,7 @@ p2m_pod_cache_add(struct p2m_domain *p2m
     /* Check to make sure this is a contiguous region */
     if( mfn_x(mfn) & ((1 << order) - 1) )
     {
-        printk("%s: mfn %lx not aligned order %lu! (mask %lx)\n",
+        printk("%s: mfn %lx not aligned order %u! (mask %lx)\n",
                __func__, mfn_x(mfn), order, ((1UL << order) - 1));
         return -1;
     }
@@ -146,7 +146,7 @@ p2m_pod_cache_add(struct p2m_domain *p2m
  * down 2-meg pages into singleton pages automatically.  Returns null if
  * a superpage is requested and no superpages are available. */
 static struct page_info * p2m_pod_cache_get(struct p2m_domain *p2m,
-                                            unsigned long order)
+                                            unsigned int order)
 {
     struct page_info *p = NULL;
     int i;
@@ -234,7 +234,7 @@ p2m_pod_set_cache_target(struct p2m_doma
                 goto retry;
             }
 
-            printk("%s: Unable to allocate domheap page for pod cache.  target %lu cachesize %d\n",
+            printk("%s: Unable to allocate page for PoD cache (target=%lu cache=%ld)\n",
                    __func__, pod_target, p2m->pod.count);
             ret = -ENOMEM;
             goto out;
@@ -337,10 +337,9 @@ out:
 int
 p2m_pod_set_mem_target(struct domain *d, unsigned long target)
 {
-    unsigned pod_target;
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
     int ret = 0;
-    unsigned long populated;
+    unsigned long populated, pod_target;
 
     pod_lock(p2m);
 
@@ -633,7 +632,8 @@ out_unlock:
 void p2m_pod_dump_data(struct domain *d)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
-    printk("    PoD entries=%d cachesize=%d\n",
+
+    printk("    PoD entries=%ld cachesize=%ld\n",
            p2m->pod.entry_count, p2m->pod.count);
 }
 
@@ -1071,8 +1071,9 @@ p2m_pod_demand_populate(struct p2m_domai
 out_of_memory:
     pod_unlock(p2m);
 
-    printk("%s: Out of populate-on-demand memory! tot_pages %" PRIu32 " pod_entries %" PRIi32 "\n",
-           __func__, d->tot_pages, p2m->pod.entry_count);
+    printk("%s: Dom%d out of PoD memory! (tot=%"PRIu32" ents=%ld dom%d)\n",
+           __func__, d->domain_id, d->tot_pages, p2m->pod.entry_count,
+           current->domain->domain_id);
     domain_crash(d);
     return -1;
 out_fail:
@@ -1111,10 +1112,9 @@ guest_physmap_mark_populate_on_demand(st
                                       unsigned int order)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
-    unsigned long i;
+    unsigned long i, pod_count = 0;
     p2m_type_t ot;
     mfn_t omfn;
-    int pod_count = 0;
     int rc = 0;
 
     BUG_ON(!paging_mode_translate(d));
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -965,8 +965,7 @@ static void p2m_change_type_global(struc
 #if P2M_AUDIT
 long p2m_pt_audit_p2m(struct p2m_domain *p2m)
 {
-    int entry_count = 0;
-    unsigned long pmbad = 0;
+    unsigned long entry_count = 0, pmbad = 0;
     unsigned long mfn, gfn, m2pfn;
     int test_linear;
     struct domain *d = p2m->domain;
@@ -1126,7 +1125,7 @@ long p2m_pt_audit_p2m(struct p2m_domain 
 
     if ( entry_count != p2m->pod.entry_count )
     {
-        printk("%s: refcounted entry count %d, audit count %d!\n",
+        printk("%s: refcounted entry count %ld, audit count %lu!\n",
                __func__,
                p2m->pod.entry_count,
                entry_count);
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -282,10 +282,10 @@ struct p2m_domain {
     struct {
         struct page_list_head super,   /* List of superpages                */
                          single;       /* Non-super lists                   */
-        int              count,        /* # of pages in cache lists         */
+        long             count,        /* # of pages in cache lists         */
                          entry_count;  /* # of pages in p2m marked pod      */
-        unsigned         reclaim_single; /* Last gpfn of a scan */
-        unsigned         max_guest;    /* gpfn of max guest demand-populate */
+        unsigned long    reclaim_single; /* Last gpfn of a scan */
+        unsigned long    max_guest;    /* gpfn of max guest demand-populate */
 #define POD_HISTORY_MAX 128
         /* gpfn of last guest superpage demand-populated */
         unsigned long    last_populated[POD_HISTORY_MAX];
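
The range argument in the description can be verified directly. This sketch
assumes an LP64 target (as for x86-64 Xen), where int is 32-bit and long is
64-bit; it is illustrative only and not part of the patch:

```c
#include <assert.h>
#include <limits.h>
#include <stdint.h>

/* Can a signed type of the given maximum hold every uint32_t value?
 * On LP64, INT_MAX (2^31-1) cannot cover UINT32_MAX, while LONG_MAX
 * (2^63-1) can -- hence count/entry_count become "long". */
int int_holds_all_u32(void)  { return (uint64_t)INT_MAX  >= (uint64_t)UINT32_MAX; }
int long_holds_all_u32(void) { return (uint64_t)LONG_MAX >= (uint64_t)UINT32_MAX; }
```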



--=__Part4F7E384D.0__=--


From xen-devel-bounces@lists.xen.org Tue Aug 14 08:46:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 08:46:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1Clg-0006eC-If; Tue, 14 Aug 2012 08:46:28 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T1Clf-0006dy-Am
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 08:46:27 +0000
Received: from [85.158.143.99:20799] by server-1.bemta-4.messagelabs.com id
	10/64-07754-2601A205; Tue, 14 Aug 2012 08:46:26 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1344933985!27500259!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11164 invoked from network); 14 Aug 2012 08:46:25 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-3.tower-216.messagelabs.com with SMTP;
	14 Aug 2012 08:46:25 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 14 Aug 2012 09:46:25 +0100
Message-Id: <502A2C7D0200007800094AF4@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Tue, 14 Aug 2012 09:46:21 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part4F7E384D.0__="
Subject: [Xen-devel] [PATCH 2/2] x86/PoD: clean up types
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part4F7E384D.0__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

GMFN values must undoubtedly be "unsigned long". "count" and
"entry_count", since they are signed types, should also be "long" as
otherwise they can't fit all values that can fit into "d->tot_pages"
(which currently is "uint32_t").

Beyond that, the patch doesn't convert everything to "long", as in many
places it is clear that "int" suffices. However, in places where "long"
is already partially in use, the conversion is carried out.

Furthermore, page order values have no need to be "long".

Finally, in the course of updating a few printk messages anyway, some
also get slightly shortened (to focus on the relevant information).

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -66,7 +66,7 @@ static inline void unlock_page_alloc(str
 static int
 p2m_pod_cache_add(struct p2m_domain *p2m,
                   struct page_info *page,
-                  unsigned long order)
+                  unsigned int order)
 {
     int i;
     struct page_info *p;
@@ -80,7 +80,7 @@ p2m_pod_cache_add(struct p2m_domain *p2m
     /* Check to make sure this is a contiguous region */
     if( mfn_x(mfn) & ((1 << order) - 1) )
     {
-        printk("%s: mfn %lx not aligned order %lu! (mask %lx)\n",
+        printk("%s: mfn %lx not aligned order %u! (mask %lx)\n",
                __func__, mfn_x(mfn), order, ((1UL << order) - 1));
         return -1;
     }
@@ -146,7 +146,7 @@ p2m_pod_cache_add(struct p2m_domain *p2m
  * down 2-meg pages into singleton pages automatically.  Returns null if
  * a superpage is requested and no superpages are available. */
 static struct page_info * p2m_pod_cache_get(struct p2m_domain *p2m,
-                                            unsigned long order)
+                                            unsigned int order)
 {
     struct page_info *p = NULL;
     int i;
@@ -234,7 +234,7 @@ p2m_pod_set_cache_target(struct p2m_doma
                 goto retry;
             }
 
-            printk("%s: Unable to allocate domheap page for pod cache.  target %lu cachesize %d\n",
+            printk("%s: Unable to allocate page for PoD cache (target=%lu cache=%ld)\n",
                    __func__, pod_target, p2m->pod.count);
             ret = -ENOMEM;
             goto out;
@@ -337,10 +337,9 @@ out:
 int
 p2m_pod_set_mem_target(struct domain *d, unsigned long target)
 {
-    unsigned pod_target;
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
     int ret = 0;
-    unsigned long populated;
+    unsigned long populated, pod_target;
 
     pod_lock(p2m);
 
@@ -633,7 +632,8 @@ out_unlock:
 void p2m_pod_dump_data(struct domain *d)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
-    printk("    PoD entries=%d cachesize=%d\n",
+
+    printk("    PoD entries=%ld cachesize=%ld\n",
            p2m->pod.entry_count, p2m->pod.count);
 }
 
@@ -1071,8 +1071,9 @@ p2m_pod_demand_populate(struct p2m_domai
 out_of_memory:
     pod_unlock(p2m);
 
-    printk("%s: Out of populate-on-demand memory! tot_pages %" PRIu32 " pod_entries %" PRIi32 "\n",
-           __func__, d->tot_pages, p2m->pod.entry_count);
+    printk("%s: Dom%d out of PoD memory! (tot=%"PRIu32" ents=%ld dom%d)\n",
+           __func__, d->domain_id, d->tot_pages, p2m->pod.entry_count,
+           current->domain->domain_id);
     domain_crash(d);
     return -1;
 out_fail:
@@ -1111,10 +1112,9 @@ guest_physmap_mark_populate_on_demand(st
                                       unsigned int order)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
-    unsigned long i;
+    unsigned long i, pod_count = 0;
     p2m_type_t ot;
     mfn_t omfn;
-    int pod_count = 0;
     int rc = 0;
 
     BUG_ON(!paging_mode_translate(d));
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -965,8 +965,7 @@ static void p2m_change_type_global(struc
 #if P2M_AUDIT
 long p2m_pt_audit_p2m(struct p2m_domain *p2m)
 {
-    int entry_count = 0;
-    unsigned long pmbad = 0;
+    unsigned long entry_count = 0, pmbad = 0;
     unsigned long mfn, gfn, m2pfn;
     int test_linear;
     struct domain *d = p2m->domain;
@@ -1126,7 +1125,7 @@ long p2m_pt_audit_p2m(struct p2m_domain
 
     if ( entry_count != p2m->pod.entry_count )
     {
-        printk("%s: refcounted entry count %d, audit count %d!\n",
+        printk("%s: refcounted entry count %ld, audit count %lu!\n",
                __func__,
                p2m->pod.entry_count,
                entry_count);
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -282,10 +282,10 @@ struct p2m_domain {
     struct {
         struct page_list_head super,   /* List of superpages                 */
                          single;       /* Non-super lists                    */
-        int              count,        /* # of pages in cache lists          */
+        long             count,        /* # of pages in cache lists          */
                          entry_count;  /* # of pages in p2m marked pod       */
-        unsigned         reclaim_single; /* Last gpfn of a scan */
-        unsigned         max_guest;    /* gpfn of max guest demand-populate */
+        unsigned long    reclaim_single; /* Last gpfn of a scan */
+        unsigned long    max_guest;    /* gpfn of max guest demand-populate */
 #define POD_HISTORY_MAX 128
         /* gpfn of last guest superpage demand-populated */
         unsigned long    last_populated[POD_HISTORY_MAX];



--=__Part4F7E384D.0__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part4F7E384D.0__=--


From xen-devel-bounces@lists.xen.org Tue Aug 14 09:03:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 09:03:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1D26-0007B4-Fi; Tue, 14 Aug 2012 09:03:26 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yongjie.ren@intel.com>) id 1T1D25-0007Az-4z
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 09:03:25 +0000
Received: from [85.158.143.99:27253] by server-3.bemta-4.messagelabs.com id
	1F/12-09529-C541A205; Tue, 14 Aug 2012 09:03:24 +0000
X-Env-Sender: yongjie.ren@intel.com
X-Msg-Ref: server-15.tower-216.messagelabs.com!1344935002!28051305!1
X-Originating-IP: [192.55.52.88]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjg4ID0+IDMwOTMxNw==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26486 invoked from network); 14 Aug 2012 09:03:23 -0000
Received: from mga01.intel.com (HELO mga01.intel.com) (192.55.52.88)
	by server-15.tower-216.messagelabs.com with SMTP;
	14 Aug 2012 09:03:23 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga101.fm.intel.com with ESMTP; 14 Aug 2012 02:03:20 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.77,765,1336374000"; d="scan'208";a="200506420"
Received: from fmsmsx103.amr.corp.intel.com ([10.19.9.34])
	by fmsmga001.fm.intel.com with ESMTP; 14 Aug 2012 02:03:01 -0700
Received: from shsmsx102.ccr.corp.intel.com (10.239.4.154) by
	FMSMSX103.amr.corp.intel.com (10.19.9.34) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Tue, 14 Aug 2012 02:03:01 -0700
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.82]) by
	SHSMSX102.ccr.corp.intel.com ([169.254.2.196]) with mapi id
	14.01.0355.002; Tue, 14 Aug 2012 17:02:59 +0800
From: "Ren, Yongjie" <yongjie.ren@intel.com>
To: Jan Beulich <JBeulich@suse.com>, Ian Jackson <Ian.Jackson@eu.citrix.com>, 
	Ben Guthro <ben@guthro.net>, "Wu, GabrielX" <gabrielx.wu@intel.com>
Thread-Topic: [Xen-devel] 4.2 TODO / Release Plan
Thread-Index: AQHNee/LQWENY/xaNkWdPia9JLgun5dZArpg
Date: Tue, 14 Aug 2012 09:02:59 +0000
Message-ID: <1B4B44D9196EFF41AE41FDA404FC0A10142A81@SHSMSX101.ccr.corp.intel.com>
References: <1344243491.11339.11.camel@zakaz.uk.xensource.com>
	<5028C34C02000078000945FC@nat28.tlf.novell.com>
	<CAOvdn6Vezt5gtsykUPxUdf_0-ubqjAJ5ygnV9hPiON1Geqq9Ag@mail.gmail.com>
	<502A1C7B0200007800094A8F@nat28.tlf.novell.com>
In-Reply-To: <502A1C7B0200007800094A8F@nat28.tlf.novell.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] 4.2 TODO / Release Plan
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: Tuesday, August 14, 2012 3:38 PM
> To: Ian Jackson; Ben Guthro; Wu, GabrielX; Ren, Yongjie
> Cc: Ian Campbell; xen-devel; Keir Fraser
> Subject: Re: [Xen-devel] 4.2 TODO / Release Plan
> 
> >>> On 13.08.12 at 19:26, Ben Guthro <ben@guthro.net> wrote:
> > Is this exclusive to my setup? Does S3 work elsewhere with 4.2?
> > It seems to fail 100% of the time on 100% of x86 machines I have tried.
> 
> Don't know. The systems I have tried S3 on have problems
> even with native Linux, so there's not much point playing
> with Xen on them.
> 
> Ian(J), I don't suppose this is part of the regression tests?
> 
> Gabriel, Yongjie - ISTR this is part of your regular VMX testing,
> and the most recent report doesn't mention any problem.
> 
I'll double check the S3 issue, and give update later.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Tue Aug 14 09:03:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 09:03:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1D2I-0007CK-SD; Tue, 14 Aug 2012 09:03:38 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T1D2H-0007C9-8f
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 09:03:37 +0000
Received: from [85.158.143.35:36973] by server-2.bemta-4.messagelabs.com id
	35/ED-31966-8641A205; Tue, 14 Aug 2012 09:03:36 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-21.messagelabs.com!1344934975!10366579!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkwNzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23720 invoked from network); 14 Aug 2012 09:02:55 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-10.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 09:02:55 -0000
X-IronPort-AV: E=Sophos;i="4.77,765,1336348800"; d="scan'208";a="13997092"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Aug 2012 09:02:55 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Tue, 14 Aug 2012 10:02:55 +0100
Message-ID: <1344934973.5926.3.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Tue, 14 Aug 2012 10:02:53 +0100
In-Reply-To: <5028C34C02000078000945FC@nat28.tlf.novell.com>
References: <1344243491.11339.11.camel@zakaz.uk.xensource.com>
	<5028C34C02000078000945FC@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "Keir \(Xen.org\)" <keir@xen.org>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] 4.2 TODO / Release Plan
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2012-08-13 at 08:05 +0100, Jan Beulich wrote:
> - address PoD problems with early host side accesses to guest
>   address space (draft patch for 4.0.x exists, needs to be ported
>   over to -unstable, which I'll expect to get to today)

Is this the same as one of the two existing PoD entries? Expecting not,
I'll include it separately today (I'm going to post the update very
shortly) and we can reconcile any duplication next week.

Hypervisor:
    * [BUG(?)] Under certain conditions, the p2m_pod_sweep code will
      stop halfway through searching, causing a guest to crash even if
      there was zeroed memory available.  This is NOT a regression
      from 4.1, and is a very rare case, so probably shouldn't be a
      blocker.  (In fact, I'd be open to the idea that it should wait
      until after the release to get more testing.)
      	    (George Dunlap)

Tools:
    * [BUG(?)] If domain 0 attempts to access a guest's memory before
      it is finished being built, and it is being built in PoD mode,
      this may cause the guest to crash.  Again, this is NOT a
      regression from 4.1.  Furthermore, it's only been reported
      (AIUI) by a customer of SuSE; so it shouldn't be a blocker.
      (Again, I'd be open to the idea that it should wait until after
      the release to get more testing.)
      	  (George Dunlap / Jan Beulich)

> (some of these may need considering to be put under blockers)

I'll make them all nice to have for now, let me know if you decide some
of them should be blockers.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Tue Aug 14 09:09:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 09:09:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1D7B-0007YU-Ki; Tue, 14 Aug 2012 09:08:41 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T1D7A-0007Wi-1e
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 09:08:40 +0000
Received: from [85.158.143.35:37976] by server-1.bemta-4.messagelabs.com id
	5D/28-07754-7951A205; Tue, 14 Aug 2012 09:08:39 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1344935143!14669374!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkwNzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17534 invoked from network); 14 Aug 2012 09:05:43 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 09:05:43 -0000
X-IronPort-AV: E=Sophos;i="4.77,765,1336348800"; d="scan'208";a="13997185"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Aug 2012 09:05:42 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Tue, 14 Aug 2012 10:05:43 +0100
Message-ID: <1344935141.5926.6.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: xen-devel <xen-devel@lists.xen.org>, xen-users <xen-users@lists.xen.org>
Date: Tue, 14 Aug 2012 10:05:41 +0100
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Subject: [Xen-devel] Xen 4.2 TODO / Release Plan
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Including xen-users@ as well as xen-devel@ now that we are into the rc
phases. Also see below for a reminder about the Xen Test Day which is
happening right now.

Plan for a 4.2 release:
http://lists.xen.org/archives/html/xen-devel/2012-03/msg00793.html

The timeline is as follows:

19 March        -- TODO list locked down
2 April         -- Feature Freeze
30 July         -- First release candidate
Weekly          -- RCN+1 until release          << WE ARE HERE

We released RC2 last week (tag "4.2.0-rc2" in xen-unstable.hg). The
first Xen test day is happening right now on the #xentest channel on
Freenode. The aim of this first test day is to test the xl
toolstack. See http://wiki.xen.org/wiki/Xen_Test_Days for more details.

The updated TODO list follows.

hypervisor, blockers:

    * None

tools, blockers:

    * libxl stable API -- we would like 4.2 to define a stable API
      which downstreams can start to rely on not changing. Aspects of
      this are:

        * None known

    * xl compatibility with xm:

        * No known issues

    * [CHECK] More formally deprecate xm/xend. Manpage patches already
      in tree. Needs release noting and communication around -rc1 to
      remind people to test xl.

    * [CHECK] Confirm that migration from Xen 4.1 -> 4.2 works.

    * Bump library SONAMEs as necessary.
      <20502.39440.969619.824976@mariner.uk.xensource.com>

hypervisor, nice to have:

    * vMCE save/restore changes, to simplify migration 4.2->4.3 with
      new vMCE in 4.3. (Jinsong Liu, Jan Beulich, DONE for 4.2)

    * [BUG(?)] Under certain conditions, the p2m_pod_sweep code will
      stop halfway through searching, causing a guest to crash even if
      there was zeroed memory available.  This is NOT a regression
      from 4.1, and is a very rare case, so probably shouldn't be a
      blocker.  (In fact, I'd be open to the idea that it should wait
      until after the release to get more testing.)
            (George Dunlap)

    * S3 regression(s?) reported by Ben Guthro (Ben & Jan Beulich)

    * address PoD problems with early host-side accesses to guest
      address space (draft patch for 4.0.x exists, needs to be ported
      over to -unstable, which I expect to get to today, Jan
      Beulich)

    * fix high change rate of the CMOS RTC periodic interrupt causing
      guest wall clock time to lag (possible fix outlined, needs to be
      put in patch form and thoroughly reviewed/tested for unwanted
      side effects, Jan Beulich)

tools, nice to have:

    * xl compatibility with xm:

        * None

    * xl.cfg(5) documentation patch for qemu-upstream
      videoram/videomem support:
      http://lists.xen.org/archives/html/xen-devel/2012-05/msg00250.html
      qemu-upstream doesn't support specifying videomem size for the
      HVM guest cirrus/stdvga (but this works with
      qemu-xen-traditional). (Pasi Kärkkäinen)

    * [BUG] long stall during the guest boot process with qcow image,
      reported by Intel: http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1821

    * [BUG] vcpu-set doesn't take effect on the guest, reported by Intel:
      http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1822

    * Load the blktap driver from the xencommons initscript if available, thread at:
      <db614e92faf743e20b3f.1337096977@kodo2>. To be fixed more
      properly in 4.3. (Patch posted, discussed; plan to take the simple
      xencommons patch for 4.2 and revisit for 4.3. Ping sent)

    * [BUG] xl allows the same PCI device to be assigned to multiple
      guests. http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1826
      (<E4558C0C96688748837EB1B05BEED75A0FD5574A@SHSMSX102.ccr.corp.intel.com>)

    * [BUG(?)] If domain 0 attempts to access a guest's memory before
      it is finished being built, and it is being built in PoD mode,
      this may cause the guest to crash.  Again, this is NOT a
      regression from 4.1.  Furthermore, it's only been reported
      (AIUI) by a customer of SuSE, so it shouldn't be a blocker.
      (Again, I'd be open to the idea that it should wait until after
      the release to get more testing.)
          (George Dunlap / Jan Beulich)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 09:10:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 09:10:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1D8g-0007rA-O8; Tue, 14 Aug 2012 09:10:14 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Wei.Wang2@amd.com>) id 1T1D8f-0007qb-DF
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 09:10:13 +0000
X-Env-Sender: Wei.Wang2@amd.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1344935326!3995084!1
X-Originating-IP: [213.199.154.204]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7708 invoked from network); 14 Aug 2012 09:08:48 -0000
Received: from am1ehsobe001.messaging.microsoft.com (HELO
	am1outboundpool.messaging.microsoft.com) (213.199.154.204)
	by server-10.tower-27.messagelabs.com with AES128-SHA encrypted SMTP;
	14 Aug 2012 09:08:48 -0000
Received: from mail62-am1-R.bigfish.com (10.3.201.238) by
	AM1EHSOBE004.bigfish.com (10.3.204.24) with Microsoft SMTP Server id
	14.1.225.23; Tue, 14 Aug 2012 09:08:46 +0000
Received: from mail62-am1 (localhost [127.0.0.1])	by mail62-am1-R.bigfish.com
	(Postfix) with ESMTP id 839801600CA;
	Tue, 14 Aug 2012 09:08:46 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:163.181.249.108; KIP:(null); UIP:(null);
	IPV:NLI; H:ausb3twp01.amd.com; RD:none; EFVD:NLI
X-SpamScore: -4
X-BigFish: VPS-4(zzbb2dI98dI9371I1432Izz1202hzz8275bhz2dh668h839hd25he5bhf0ah107ah1155h)
Received: from mail62-am1 (localhost.localdomain [127.0.0.1]) by mail62-am1
	(MessageSwitch) id 1344935325188872_920; Tue, 14 Aug 2012 09:08:45 +0000
	(UTC)
Received: from AM1EHSMHS009.bigfish.com (unknown [10.3.201.247])	by
	mail62-am1.bigfish.com (Postfix) with ESMTP id 2C55312005D;
	Tue, 14 Aug 2012 09:08:45 +0000 (UTC)
Received: from ausb3twp01.amd.com (163.181.249.108) by
	AM1EHSMHS009.bigfish.com (10.3.207.109) with Microsoft SMTP Server id
	14.1.225.23; Tue, 14 Aug 2012 09:08:44 +0000
X-WSS-ID: 0M8QMQI-01-B0C-02
X-M-MSG: 
Received: from sausexedgep01.amd.com (sausexedgep01-ext.amd.com
	[163.181.249.72])	(using TLSv1 with cipher AES128-SHA (128/128
	bits))	(No
	client certificate requested)	by ausb3twp01.amd.com (Axway MailGate
	3.8.1)
	with ESMTP id 2690B1028007;	Tue, 14 Aug 2012 04:08:41 -0500 (CDT)
Received: from sausexhtp01.amd.com (163.181.3.165) by sausexedgep01.amd.com
	(163.181.36.54) with Microsoft SMTP Server (TLS) id 8.3.192.1;
	Tue, 14 Aug 2012 04:09:15 -0500
Received: from storexhtp01.amd.com (172.24.4.3) by sausexhtp01.amd.com
	(163.181.3.165) with Microsoft SMTP Server (TLS) id 8.3.213.0;
	Tue, 14 Aug 2012 04:08:40 -0500
Received: from gwo.osrc.amd.com (165.204.16.204) by storexhtp01.amd.com
	(172.24.4.3) with Microsoft SMTP Server id 8.3.213.0; Tue, 14 Aug 2012
	05:08:39 -0400
Received: from [165.204.15.57] (gran.osrc.amd.com [165.204.15.57])	by
	gwo.osrc.amd.com (Postfix) with ESMTP id C3C4049C1E6; Tue, 14 Aug 2012
	10:08:38 +0100 (BST)
Message-ID: <502A1598.7000803@amd.com>
Date: Tue, 14 Aug 2012 11:08:40 +0200
From: Wei Wang <wei.wang2@amd.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:12.0) Gecko/20120421 Thunderbird/12.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <patchbomb.1344872764@gran.amd.com>
	<502942ED02000078000948CE@nat28.tlf.novell.com>
In-Reply-To: <502942ED02000078000948CE@nat28.tlf.novell.com>
X-OriginatorOrg: amd.com
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 0 of 3] amd iommu: Clean up iommu page table
	deallocation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/13/2012 06:09 PM, Jan Beulich wrote:
>>>> On 13.08.12 at 17:46, Wei Wang<wei.wang2@amd.com>  wrote:
>> Hi Jan, this patch cleans up deallocate_next_page_table() function. Please
>> take a look.
>
> Hi Wei,
>
> looks okay, but unless I'm mistaken this is only cleanup, so I'd
> like to postpone this until after 4.2 unless you have a rather
> strong opinion towards getting it in now.
>
> Jan
>
>

Yes, it's just cleanup. After 4.2 is OK with me.
Thanks,
Wei


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 09:12:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 09:12:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1DAF-00080F-7t; Tue, 14 Aug 2012 09:11:51 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <iustin@google.com>) id 1T1DAD-000800-MM
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 09:11:49 +0000
Received: from [85.158.138.51:26712] by server-11.bemta-3.messagelabs.com id
	81/CB-23152-4561A205; Tue, 14 Aug 2012 09:11:48 +0000
X-Env-Sender: iustin@google.com
X-Msg-Ref: server-10.tower-174.messagelabs.com!1344935508!24121168!1
X-Originating-IP: [74.125.83.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5379 invoked from network); 14 Aug 2012 09:11:48 -0000
Received: from mail-ee0-f45.google.com (HELO mail-ee0-f45.google.com)
	(74.125.83.45)
	by server-10.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 09:11:48 -0000
Received: by eeke53 with SMTP id e53so58535eek.32
	for <xen-devel@lists.xen.org>; Tue, 14 Aug 2012 02:11:48 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20120113;
	h=date:from:to:cc:subject:message-id:references:mime-version
	:content-type:content-disposition:content-transfer-encoding
	:in-reply-to:user-agent;
	bh=Wd3Ya94GwCmWh4A0RmVxt1LH3+F71Xkiq0AmS/b/KR4=;
	b=SkbOTtD48HbvPBowM7WbbBAad7doppN9H+lNV3cXuZhf5YYr3ofoPkszYZWjalc7Yp
	SMYQ1zJc4a5tLXk+qll3inI2PLJHkagOFLoSXK+IkIUj5L55HFM+dIcjHeAy6J5io46n
	Ssu3hyMQTJOoOgUK9pSKBNT+qsbATMoHhBoLZ08niOkqPNK/w9WuOYgtUQQ1z1TkiUCt
	Y0QeioMm+AKvQKcJsVSTWa13q6T5v5pKZ03wDsZxdxyGezlCZC5fobjLYiiLH8Ur0DVW
	Butj51rWjGR7kfdeAHpfZ4tOOE1O8Hk2oMcHWlDlCeXPhWE1dbyVnAgRTzmFiWsPnlSw
	qb4w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=date:from:to:cc:subject:message-id:references:mime-version
	:content-type:content-disposition:content-transfer-encoding
	:in-reply-to:user-agent:x-gm-message-state;
	bh=Wd3Ya94GwCmWh4A0RmVxt1LH3+F71Xkiq0AmS/b/KR4=;
	b=Yc8y6+pR03eOicyVxLF8NgdJr4Y3IsJXam9+5mtlmNmcoLNfrlhDqg06FqIBCVmv6Z
	VrtMi9isG1G+6vM9vtN/+ODaBWB0Ex4H5JaJO4p86O/XTdVZkG043GJHstksa0m6Ur7t
	md4efsJuvN5ZIkwLvbqKPkf7iC8Bnh2TlY43bLwfKe0rKVr/hLsqzB+HxLTwaWWB3KqI
	XfdxyrwaCAdxlD7NlXoYF8OAWrfBXV0mVH7nnTrIxV5P4/Z7fc0SJ5GuRnkfVmOg/lpK
	Qm47euSJnMcyaCgGQKsx7LQMl4ebj0iDoSO5cISk3ONEtur1hho0+6A3HBLomOCFPtzC
	AabQ==
Received: by 10.14.194.198 with SMTP id m46mr18310084een.13.1344935508175;
	Tue, 14 Aug 2012 02:11:48 -0700 (PDT)
Received: by 10.14.194.198 with SMTP id m46mr18310073een.13.1344935508057;
	Tue, 14 Aug 2012 02:11:48 -0700 (PDT)
Received: from google.com ([2620:0:105f:5:226:b9ff:fe84:a92e])
	by mx.google.com with ESMTPS id h42sm5053817eem.5.2012.08.14.02.11.47
	(version=SSLv3 cipher=OTHER); Tue, 14 Aug 2012 02:11:47 -0700 (PDT)
Date: Tue, 14 Aug 2012 11:11:46 +0200
From: Iustin Pop <iustin@google.com>
To: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
Message-ID: <20120814091145.GR11580@google.com>
References: <CALnj_=4U+6n2-KSaLgbS5OpP3mNKwPredFYjG-0dEUNQpSSwxg@mail.gmail.com>
	<20120814064628.GV19851@reaktio.net>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20120814064628.GV19851@reaktio.net>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Gm-Message-State: ALoCoQnE+e4PWSB4sPNC+B1F63e2G8c/QKscaLbLZlb8mdP7ep57K1AELsNDk/axvwG7JWbJkPDgvIZwhzBRyEaOWLo3cQVYUuwydEJRlrDu7/uTzw5Cz42T5Z9wjQGxZnADQdgRNtdrz74uLzVuqGmvugahDdzQq9ht6XTWpU0YZeMCTP7iYAb5eC08or4jaZ6bQi4VpXyZ
Cc: Peter Moody <pmoody@google.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] 100% reliable Oops on xen 4.0.1
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Aug 14, 2012 at 09:46:28AM +0300, Pasi Kärkkäinen wrote:
> On Mon, Aug 13, 2012 at 05:03:06PM -0700, Peter Moody wrote:
> > This seems to be some combination of Xen and the audit subsystem, but
> > the attached program crashes my machine 100% of the time.
> 
> Did you try with a later Xen version? 4.0.1 is quite old.
> For example the latest in the Xen 4.0.x series, which is 4.0.4? Or Xen 4.1.3?

This is 4.0.1 from Debian, so it has at least all the CVE fixes applied.
We haven't tried a newer Xen yet, though.

regards,
iustin

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 09:13:24 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 09:13:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1DBb-0008BV-NL; Tue, 14 Aug 2012 09:13:15 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <iustin@google.com>) id 1T1DBa-0008BD-GI
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 09:13:14 +0000
Received: from [85.158.143.35:49352] by server-1.bemta-4.messagelabs.com id
	7A/85-07754-9A61A205; Tue, 14 Aug 2012 09:13:13 +0000
X-Env-Sender: iustin@google.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1344935574!13950098!1
X-Originating-IP: [209.85.215.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23612 invoked from network); 14 Aug 2012 09:12:54 -0000
Received: from mail-ey0-f173.google.com (HELO mail-ey0-f173.google.com)
	(209.85.215.173)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 09:12:54 -0000
Received: by eaac13 with SMTP id c13so58262eaa.32
	for <xen-devel@lists.xen.org>; Tue, 14 Aug 2012 02:12:54 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20120113;
	h=date:from:to:cc:subject:message-id:references:mime-version
	:content-type:content-disposition:in-reply-to:user-agent;
	bh=aBO0VIF29Y91MzV4jatHm4jx9nKa7JkfSQW9Blw5bp8=;
	b=Tnab/twUe2CGOG01fbvPGgeU1v80qoHTALuJErJ1+Uy5l8PGEyjcTG1f6C/BNOBSPi
	0dZA1a9T9lKGlGyQD4oZJVWSUBGkkZX9GyRq4GiDCO61TfmqdVTWRFMJZZeOTYZYr46V
	mMTeTAQLfYXPo583272TRqFwWwxVaZlE0NfBomBkJ3Zgzkkm1ofU0WesDS0/lF4wsBEN
	OZNrha2SqkIXqcmEVqZjMqR2S7BEHoeu61lZjV/WbrjmeyDL+EVBfd4ND7zFbWUxBgzZ
	TW4YFw3RAWdYUii5dWrqmxA/2HkJ9cB8WdMoQSubkGea/xay0AvtD0GiZEBrI8AJH9Ob
	18Iw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=date:from:to:cc:subject:message-id:references:mime-version
	:content-type:content-disposition:in-reply-to:user-agent
	:x-gm-message-state;
	bh=aBO0VIF29Y91MzV4jatHm4jx9nKa7JkfSQW9Blw5bp8=;
	b=l3P2lbJM9hstWoS0WZAaACbI9z+fNjPgoppuHEyXmtRYjsywefG/sALWIdbhD8A0Ab
	D//iKCZ1JOZBUSwBRl0BYaUyC4OQDcGXtK91eVmmou3sifDFCiUyLmNZpablET+o9bqo
	sd94SkM0f4M7JMQAYzqKAwvspP65ytuRrha9Mz4Zf3HUN5DuH+bkSiaX+zVgPlEnXPDo
	eCGHGsiPs/ephsyCwlNslPow1vhoLiVgGj6LelhiQB/ps/O5LUJXofptV7v0gS9/2KvQ
	WypweDJTI/i2wznBz2x8uwEf9ZctdOj+/CGDDlBmZLtghZDPgGaE3YcM1Qks7nCdVI8E
	tHbg==
Received: by 10.14.179.200 with SMTP id h48mr18484186eem.12.1344935574380;
	Tue, 14 Aug 2012 02:12:54 -0700 (PDT)
Received: by 10.14.179.200 with SMTP id h48mr18484179eem.12.1344935574220;
	Tue, 14 Aug 2012 02:12:54 -0700 (PDT)
Received: from google.com ([2620:0:105f:5:226:b9ff:fe84:a92e])
	by mx.google.com with ESMTPS id u8sm5034636eel.11.2012.08.14.02.12.53
	(version=SSLv3 cipher=OTHER); Tue, 14 Aug 2012 02:12:53 -0700 (PDT)
Date: Tue, 14 Aug 2012 11:12:52 +0200
From: Iustin Pop <iustin@google.com>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20120814091251.GS11580@google.com>
References: <CALnj_=4U+6n2-KSaLgbS5OpP3mNKwPredFYjG-0dEUNQpSSwxg@mail.gmail.com>
	<502A28130200007800094ADE@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <502A28130200007800094ADE@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Gm-Message-State: ALoCoQm7sH3DqxGmkRyPvBWvMZVnGceNMU5sZei79Pm8b/32x314D2d9ZlRohxewimvEZfRoSBZgYlTfnfGikLGELXyMscSid+Dv+6vjl/Ht9fxI3x6f+z1bonF6GpzEaXpD/qTgDmCLltVRlf3IvBx4FVol2NbcvD8x0y04KLf3Zb+BNSa7bqBCjn0xgOBTuGOVBYZUynEu
Cc: Peter Moody <pmoody@google.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] 100% reliable Oops on xen 4.0.1
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Aug 14, 2012 at 09:27:31AM +0100, Jan Beulich wrote:
> >>> On 14.08.12 at 02:03, Peter Moody <pmoody@google.com> wrote:
> > I'm not sure what you need from the domU. It's running 2.6.38.8 (but
> > I've seen this bug all the way up to 3.5.0-rc7, the latest I've
> > tested). It's a fairly beefy setup, 32G memory and 6 cpus.
> 
> Do these kernel versions refer to plain upstream ones?

They are mostly Ubuntu kernels, so not vanilla.
> 
> Is the subject referring to 4.0.1 in any way meaningful? I.e.
> does the problem not occur with other Xen versions?

I believe this refers only to the version we run, not that it's
fixed in other Xen versions. We will try to test with newer Xen releases to see.

regards,
iustin

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 09:20:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 09:20:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1DI3-000068-IG; Tue, 14 Aug 2012 09:19:55 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T1DI2-00005x-Cm
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 09:19:54 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1344935952!6730786!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkwNzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2845 invoked from network); 14 Aug 2012 09:19:12 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 09:19:12 -0000
X-IronPort-AV: E=Sophos;i="4.77,765,1336348800"; d="scan'208";a="13997549"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Aug 2012 09:19:12 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Tue, 14 Aug 2012 10:19:12 +0100
Message-ID: <1344935951.5926.9.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Peter Moody <pmoody@google.com>
Date: Tue, 14 Aug 2012 10:19:11 +0100
In-Reply-To: <CALnj_=4U+6n2-KSaLgbS5OpP3mNKwPredFYjG-0dEUNQpSSwxg@mail.gmail.com>
References: <CALnj_=4U+6n2-KSaLgbS5OpP3mNKwPredFYjG-0dEUNQpSSwxg@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] 100% reliable Oops on xen 4.0.1
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-08-14 at 01:03 +0100, Peter Moody wrote:
> This seems to be some combination of Xen and the audit subsystem, but
> the attached program crashes my machine 100% of the time.
> 
> steps to reproduce the crash:
> 
>  *  1) compile with gcc -m32
>  *  2) start auditd, install any rule (I've only tested syscall
> auditing, but any syscall seems to work).
>  *     /etc/init.d/auditd start ; auditctl -D ; auditctl -a
> exit,always -F arch=64 -S chmod
>  *  3) run'n wait (this only loops twice for me before dying)
>  *     ./a.out
>  *  4) bask in instantaneous kernel oops.
> 
> here's xm info from dom0
> 
> [xen2.atl] root@gntb1:~# xm info
> host                   : gntb1.atl.corp.google.com
> release                : 3.2.13-ganeti-rx6-xen0
> version                : #1 SMP Thu Jun 7 12:59:40 CEST 2012
> machine                : x86_64
> nr_cpus                : 12
> nr_nodes               : 2
> cores_per_socket       : 6
> threads_per_core       : 1
> cpu_mhz                : 2660
> hw_caps                :
> bfebfbff:2c100800:00000000:00001f40:029ee3ff:00000000:00000001:00000000
> virt_caps              : hvm
> total_memory           : 32755
> free_memory            : 22665
> node_to_cpu            : node0:0,2,4,6,8,10
>                          node1:1,3,5,7,9,11
> node_to_memory         : node0:13083
>                          node1:9582
> node_to_dma32_mem      : node0:0
>                          node1:3235
> max_node_id            : 1
> xen_major              : 4
> xen_minor              : 0
> xen_extra              : .1
> xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32
> hvm-3.0-x86_32p hvm-3.0-x86_64
> xen_scheduler          : credit
> xen_pagesize           : 4096
> platform_params        : virt_start=0xffff800000000000
> xen_changeset          : unavailable
> xen_commandline        : placeholder dom0_mem=1024M loglvl=all
> com1=115200,8n1 console=com1 iommu=0
> cc_compiler            : gcc version 4.4.3 (Ubuntu 4.4.3-4ubuntu5)
> cc_compile_by          : pmacedo
> cc_compile_domain      : google.com
> cc_compile_date        : Wed Mar 16 15:24:06 UTC 2011
> xend_config_format     : 4
> 
> I'm not sure what you need from the domU. It's running 2.6.38.8 (but
> I've seen this bug all the way up to 3.5.0-rc7, the latest I've
> tested). It's a fairly beefy setup, 32G memory and 6 cpus.
> 
> I suspect xen as opposed to auditd because:
> 
>  a) this only happens on our xen machines (though not all of them)
>  b) one of my stack traces started with
> 
> [172577.560441]  [<ffffffff810065ad>] ? xen_force_evtchn_callback+0xd/0x10

This is likely to be a coincidence IMHO since this function forces a
call to the hypervisor to trigger the (re)injection of any pending
interrupts (typically after reenabling interrupts), so it is not unusual
for it to be at the bottom of any stack trace which happens in interrupt
context.

The example stack trace in crasher.c doesn't involve Xen -- can you post
any examples of ones which do?

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 09:32:07 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 09:32:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1DTZ-0000NM-Vi; Tue, 14 Aug 2012 09:31:49 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <adrian.wenl@gmail.com>) id 1T1DTX-0000NH-NC
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 09:31:47 +0000
Received: from [85.158.143.99:57089] by server-1.bemta-4.messagelabs.com id
	D8/B4-07754-20B1A205; Tue, 14 Aug 2012 09:31:46 +0000
X-Env-Sender: adrian.wenl@gmail.com
X-Msg-Ref: server-16.tower-216.messagelabs.com!1344936704!16873557!1
X-Originating-IP: [209.85.214.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1974 invoked from network); 14 Aug 2012 09:31:46 -0000
Received: from mail-ob0-f173.google.com (HELO mail-ob0-f173.google.com)
	(209.85.214.173)
	by server-16.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 09:31:46 -0000
Received: by obbta14 with SMTP id ta14so334708obb.32
	for <xen-devel@lists.xen.org>; Tue, 14 Aug 2012 02:31:44 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=v6pze3ZIU6VatOFdZeqkUISN1cP3NzecOUmFodUzhII=;
	b=M0FpIeJpO3lNqnmRi2ab1dESNwqfxblH4Yp2cO4HBpohm3FUMrGcH2d5ikdDM1CObG
	f+kSAE3TPNejN/KZTXWG43a8+ylCujBkZYEpKphEMJIaasNGuuGjedi6sATnuLNlD6GX
	UyttslTEMyXSWRAVCLH44BKLunnm8uRUPqHeMOSo/7R0Ia/v49e07u2C5fS3Yo7qGDsL
	dBBZVwoJ71Zvg44Dedrg/6a6jsf53so86n4z1fUXuhsgfaZxDKi/reEgaXTx7lnpkBxu
	SZm9rjz9n/rp6K4RIidyVGufkSspFPH+MnCB9JPXJ3503qrxk80lE/2+ZRdWBj3ucZLc
	2fbw==
MIME-Version: 1.0
Received: by 10.60.28.101 with SMTP id a5mr18819641oeh.69.1344936704490; Tue,
	14 Aug 2012 02:31:44 -0700 (PDT)
Received: by 10.182.53.138 with HTTP; Tue, 14 Aug 2012 02:31:44 -0700 (PDT)
Date: Tue, 14 Aug 2012 17:31:44 +0800
Message-ID: <CALZhoSRTS9HrqS_Z__vC2iY_DUbi4DCBemRkpm8w5H5=7F9-8g@mail.gmail.com>
From: Lei Wen <adrian.wenl@gmail.com>
To: xen-devel@lists.xen.org
Subject: [Xen-devel] Can xen support nommu cpu virtualization?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi,

I am a newbie to the Xen area and curious to know whether current Xen
can support a nommu arch, or only an MPU.
If it doesn't currently support this, does Xen have any plan or road map to
add this support in the future?

Thanks,
Lei

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 09:35:12 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 09:35:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1DWe-0000U0-J4; Tue, 14 Aug 2012 09:35:00 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T1DWd-0000Tv-HN
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 09:34:59 +0000
Received: from [85.158.138.51:28398] by server-10.bemta-3.messagelabs.com id
	CA/A4-20518-2CB1A205; Tue, 14 Aug 2012 09:34:58 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-9.tower-174.messagelabs.com!1344936898!28219696!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21265 invoked from network); 14 Aug 2012 09:34:58 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-9.tower-174.messagelabs.com with SMTP;
	14 Aug 2012 09:34:58 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 14 Aug 2012 10:34:57 +0100
Message-Id: <502A37DE0200007800094B3D@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Tue, 14 Aug 2012 10:34:54 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>
References: <1344243491.11339.11.camel@zakaz.uk.xensource.com>
	<5028C34C02000078000945FC@nat28.tlf.novell.com>
	<1344934973.5926.3.camel@zakaz.uk.xensource.com>
In-Reply-To: <1344934973.5926.3.camel@zakaz.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "Keir \(Xen.org\)" <keir@xen.org>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] 4.2 TODO / Release Plan
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 14.08.12 at 11:02, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Mon, 2012-08-13 at 08:05 +0100, Jan Beulich wrote:
>> - address PoD problems with early host side accesses to guest
>>   address space (draft patch for 4.0.x exists, needs to be ported
>>   over to -unstable, which I'll expect to get to today)
> 
> Is this the same as one of the two existing PoD entries? Expecting not,
> I'll include it separately today (I'm going to post the update very
> shortly) and we can reconcile any duplication for next week.
> 
> Hypervisor:
>     * [BUG(?)] Under certain conditions, the p2m_pod_sweep code will
>       stop halfway through searching, causing a guest to crash even if
>       there was zeroed memory available.  This is NOT a regression
>       from 4.1, and is a very rare case, so probably shouldn't be a
>       blocker.  (In fact, I'd be open to the idea that it should wait
>       until after the release to get more testing.)
>       	    (George Dunlap)
> 
> Tools:
>     * [BUG(?)] If domain 0 attempts to access a guest's memory before
>       it is finished being built, and it is being built in PoD mode,
>       this may cause the guest to crash.  Again, this is NOT a
>       regression from 4.1.  Furthermore, it's only been reported
>       (AIUI) by a customer of SuSE; so it shouldn't be a blocker.
>       (Again, I'd be open to the idea that it should wait until after
>       the release to get more testing.)
>       	  (George Dunlap / Jan Beulich)

It's the same as this second entry, albeit the fix is not limited to
the tools. Patch posted a few minutes ago.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 09:51:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 09:51:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1Dmc-0000pb-9v; Tue, 14 Aug 2012 09:51:30 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T1Dma-0000pW-Nh
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 09:51:29 +0000
Received: from [85.158.143.99:19483] by server-2.bemta-4.messagelabs.com id
	D5/A6-31966-0AF1A205; Tue, 14 Aug 2012 09:51:28 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-15.tower-216.messagelabs.com!1344937886!28061722!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19901 invoked from network); 14 Aug 2012 09:51:26 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-15.tower-216.messagelabs.com with SMTP;
	14 Aug 2012 09:51:26 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 14 Aug 2012 10:51:26 +0100
Message-Id: <502A3BBC0200007800094B68@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Tue, 14 Aug 2012 10:51:24 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Yang Z Zhang" <yang.z.zhang@intel.com>,
	"Keir Fraser" <keir@xen.org>,"Tim Deegan" <tim@xen.org>
Mime-Version: 1.0
Content-Disposition: inline
Cc: tupeng212 <tupeng212@gmail.com>, xen-devel <xen-devel@lists.xen.org>
Subject: [Xen-devel] [PATCH,
 RFC v2] x86/HVM: assorted RTC emulation adjustments (was Re: Big
 Bug:Time in VM goes slower...)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Below/attached is a second draft of a patch fixing not only this
issue but a few more in the RTC emulation.

Keir, Tim, Yang, others - the change to xen/arch/x86/hvm/vpt.c really
looks more like a hack than a solution, but I don't see another
way without much more intrusive changes. The point is that we
want the RTC code to decide whether to generate an interrupt
(so that RTC_PF can become set correctly even without RTC_PIE
getting enabled by the guest).

Additionally I wonder whether alarm_timer_update() shouldn't
bail on non-conforming RTC_*_ALARM values (as those would
never match the values they get compared against, whereas
with the current way of handling this they would appear to
match - i.e. set RTC_AF and possibly generate an interrupt - at
some other point in time). I realize the behavior here may not
be precisely specified, but the specification saying "the current
time has matched the alarm time" means to me a value by value
comparison, which implies that non-conforming values would
never match (since non-conforming current time values could
get replaced at any time by the hardware due to overflow
detection).

Jan

- don't call rtc_timer_update() on REG_A writes when the value didn't
  change (always doing the call was reported to cause wall clock time
  to lag with the JVM running on Windows)
- don't call rtc_timer_update() on REG_B writes at all
- only call alarm_timer_update() on REG_B writes when relevant bits
  change
- only call check_update_timer() on REG_B writes when SET changes
- instead properly handle AF and PF when the guest is not also setting
  AIE/PIE respectively (for UF this was already the case, only a
  comment was slightly inaccurate)
- raise the RTC IRQ not only when UIE gets set while UF was already
  set, but generalize this to cover AIE and PIE as well
- properly mask off bit 7 when retrieving the hour values in
  alarm_timer_update(), and properly use RTC_HOURS_ALARM's bit 7 when
  converting from 12- to 24-hour value
- also handle the two other possible clock bases
- use RTC_* names in a couple of places where literal numbers were used
  so far

--- a/xen/arch/x86/hvm/rtc.c
+++ b/xen/arch/x86/hvm/rtc.c
@@ -50,11 +50,24 @@ static void rtc_set_time(RTCState *s);
 static inline int from_bcd(RTCState *s, int a);
 static inline int convert_hour(RTCState *s, int hour);
 
-static void rtc_periodic_cb(struct vcpu *v, void *opaque)
+static void rtc_toggle_irq(RTCState *s)
+{
+    struct domain *d = vrtc_domain(s);
+
+    ASSERT(spin_is_locked(&s->lock));
+    s->hw.cmos_data[RTC_REG_C] |= RTC_IRQF;
+    hvm_isa_irq_deassert(d, RTC_IRQ);
+    hvm_isa_irq_assert(d, RTC_IRQ);
+}
+
+void rtc_periodic_interrupt(void *opaque)
 {
     RTCState *s = opaque;
+
     spin_lock(&s->lock);
-    s->hw.cmos_data[RTC_REG_C] |= 0xc0;
+    s->hw.cmos_data[RTC_REG_C] |= RTC_PF;
+    if ( s->hw.cmos_data[RTC_REG_B] & RTC_PIE )
+        rtc_toggle_irq(s);
     spin_unlock(&s->lock);
 }
 
@@ -68,19 +81,25 @@ static void rtc_timer_update(RTCState *s
     ASSERT(spin_is_locked(&s->lock));
 
     period_code = s->hw.cmos_data[RTC_REG_A] & RTC_RATE_SELECT;
-    if ( (period_code != 0) && (s->hw.cmos_data[RTC_REG_B] & RTC_PIE) )
+    switch ( s->hw.cmos_data[RTC_REG_A] & RTC_DIV_CTL )
     {
-        if ( period_code <= 2 )
+    case RTC_REF_CLCK_32KHZ:
+        if ( (period_code != 0) && (period_code <= 2) )
             period_code += 7;
-
-        period = 1 << (period_code - 1); /* period in 32 Khz cycles */
-        period = DIV_ROUND((period * 1000000000ULL), 32768); /* period in ns */
-        create_periodic_time(v, &s->pt, period, period, RTC_IRQ,
-                             rtc_periodic_cb, s);
-    }
-    else
-    {
+        /* fall through */
+    case RTC_REF_CLCK_1MHZ:
+    case RTC_REF_CLCK_4MHZ:
+        if ( period_code != 0 )
+        {
+            period = 1 << (period_code - 1); /* period in 32 Khz cycles */
+            period = DIV_ROUND(period * 1000000000ULL, 32768); /* in ns */
+            create_periodic_time(v, &s->pt, period, period, RTC_IRQ, NULL, s);
+            break;
+        }
+        /* fall through */
+    default:
         destroy_periodic_time(&s->pt);
+        break;
     }
 }
 
@@ -102,7 +121,7 @@ static void check_update_timer(RTCState 
         guest_usec = get_localtime_us(d) % USEC_PER_SEC;
         if (guest_usec >= (USEC_PER_SEC - 244))
         {
-            /* RTC is in update cycle when enabling UIE */
+            /* RTC is in update cycle */
             s->hw.cmos_data[RTC_REG_A] |= RTC_UIP;
             next_update_time = (USEC_PER_SEC - guest_usec) * NS_PER_USEC;
             expire_time = NOW() + next_update_time;
@@ -144,7 +163,6 @@ static void rtc_update_timer(void *opaqu
 static void rtc_update_timer2(void *opaque)
 {
     RTCState *s = opaque;
-    struct domain *d = vrtc_domain(s);
 
     spin_lock(&s->lock);
     if (!(s->hw.cmos_data[RTC_REG_B] & RTC_SET))
@@ -152,11 +170,7 @@ static void rtc_update_timer2(void *opaq
         s->hw.cmos_data[RTC_REG_C] |= RTC_UF;
         s->hw.cmos_data[RTC_REG_A] &= ~RTC_UIP;
         if ((s->hw.cmos_data[RTC_REG_B] & RTC_UIE))
-        {
-            s->hw.cmos_data[RTC_REG_C] |= RTC_IRQF;
-            hvm_isa_irq_deassert(d, RTC_IRQ);
-            hvm_isa_irq_assert(d, RTC_IRQ);
-        }
+            rtc_toggle_irq(s);
         check_update_timer(s);
     }
     spin_unlock(&s->lock);
@@ -175,21 +189,18 @@ static void alarm_timer_update(RTCState 
 
     stop_timer(&s->alarm_timer);
 
-    if ((s->hw.cmos_data[RTC_REG_B] & RTC_AIE) &&
-            !(s->hw.cmos_data[RTC_REG_B] & RTC_SET))
+    if ( !(s->hw.cmos_data[RTC_REG_B] & RTC_SET) )
     {
         s->current_tm = gmtime(get_localtime(d));
         rtc_copy_date(s);
 
         alarm_sec = from_bcd(s, s->hw.cmos_data[RTC_SECONDS_ALARM]);
         alarm_min = from_bcd(s, s->hw.cmos_data[RTC_MINUTES_ALARM]);
-        alarm_hour = from_bcd(s, s->hw.cmos_data[RTC_HOURS_ALARM]);
-        alarm_hour = convert_hour(s, alarm_hour);
+        alarm_hour = convert_hour(s, s->hw.cmos_data[RTC_HOURS_ALARM]);
 
         cur_sec = from_bcd(s, s->hw.cmos_data[RTC_SECONDS]);
         cur_min = from_bcd(s, s->hw.cmos_data[RTC_MINUTES]);
-        cur_hour = from_bcd(s, s->hw.cmos_data[RTC_HOURS]);
-        cur_hour = convert_hour(s, cur_hour);
+        cur_hour = convert_hour(s, s->hw.cmos_data[RTC_HOURS]);
 
         next_update_time = USEC_PER_SEC - (get_localtime_us(d) % USEC_PER_SEC);
         next_update_time = next_update_time * NS_PER_USEC + NOW();
@@ -343,7 +354,6 @@ static void alarm_timer_update(RTCState 
 static void rtc_alarm_cb(void *opaque)
 {
     RTCState *s = opaque;
-    struct domain *d = vrtc_domain(s);
 
     spin_lock(&s->lock);
     if (!(s->hw.cmos_data[RTC_REG_B] & RTC_SET))
@@ -351,11 +361,7 @@ static void rtc_alarm_cb(void *opaque)
         s->hw.cmos_data[RTC_REG_C] |= RTC_AF;
         /* alarm interrupt */
         if (s->hw.cmos_data[RTC_REG_B] & RTC_AIE)
-        {
-            s->hw.cmos_data[RTC_REG_C] |= RTC_IRQF;
-            hvm_isa_irq_deassert(d, RTC_IRQ);
-            hvm_isa_irq_assert(d, RTC_IRQ);
-        }
+            rtc_toggle_irq(s);
         alarm_timer_update(s);
     }
     spin_unlock(&s->lock);
@@ -365,6 +371,7 @@ static int rtc_ioport_write(void *opaque
 {
     RTCState *s = opaque;
     struct domain *d = vrtc_domain(s);
+    uint32_t orig, mask;
 
     spin_lock(&s->lock);
 
@@ -382,6 +389,7 @@ static int rtc_ioport_write(void *opaque
         return 0;
     }
 
+    orig = s->hw.cmos_data[s->hw.cmos_index];
     switch ( s->hw.cmos_index )
     {
     case RTC_SECONDS_ALARM:
@@ -405,9 +413,9 @@ static int rtc_ioport_write(void *opaque
         break;
     case RTC_REG_A:
         /* UIP bit is read only */
-        s->hw.cmos_data[RTC_REG_A] = (data & ~RTC_UIP) |
-            (s->hw.cmos_data[RTC_REG_A] & RTC_UIP);
-        rtc_timer_update(s);
+        s->hw.cmos_data[RTC_REG_A] = (data & ~RTC_UIP) | (orig & RTC_UIP);
+        if ( (data ^ orig) & (RTC_RATE_SELECT | RTC_DIV_CTL) )
+            rtc_timer_update(s);
         break;
     case RTC_REG_B:
         if ( data & RTC_SET )
@@ -415,7 +423,7 @@ static int rtc_ioport_write(void *opaque
             /* set mode: reset UIP mode */
             s->hw.cmos_data[RTC_REG_A] &= ~RTC_UIP;
             /* adjust cmos before stopping */
-            if (!(s->hw.cmos_data[RTC_REG_B] & RTC_SET))
+            if (!(orig & RTC_SET))
             {
                 s->current_tm = gmtime(get_localtime(d));
                 rtc_copy_date(s);
@@ -424,21 +432,26 @@ static int rtc_ioport_write(void *opaque
         else
         {
             /* if disabling set mode, update the time */
-            if ( s->hw.cmos_data[RTC_REG_B] & RTC_SET )
+            if ( orig & RTC_SET )
                 rtc_set_time(s);
         }
-        /* if the interrupt is already set when the interrupt become
-         * enabled, raise an interrupt immediately*/
-        if ((data & RTC_UIE) && !(s->hw.cmos_data[RTC_REG_B] & RTC_UIE))
-            if (s->hw.cmos_data[RTC_REG_C] & RTC_UF)
+        /*
+         * If the interrupt is already set when the interrupt becomes
+         * enabled, raise an interrupt immediately.
+         * NB: RTC_{A,P,U}IE == RTC_{A,P,U}F respectively.
+         */
+        for ( mask = RTC_UIE; mask <= RTC_PIE; mask <<= 1 )
+            if ( (data & mask) && !(orig & mask) &&
+                 (s->hw.cmos_data[RTC_REG_C] & mask) )
             {
-                hvm_isa_irq_deassert(d, RTC_IRQ);
-                hvm_isa_irq_assert(d, RTC_IRQ);
+                rtc_toggle_irq(s);
+                break;
             }
         s->hw.cmos_data[RTC_REG_B] = data;
-        rtc_timer_update(s);
-        check_update_timer(s);
-        alarm_timer_update(s);
+        if ( (data ^ orig) & RTC_SET )
+            check_update_timer(s);
+        if ( (data ^ orig) & (RTC_24H | RTC_DM_BINARY | RTC_SET) )
+            alarm_timer_update(s);
         break;
     case RTC_REG_C:
     case RTC_REG_D:
@@ -453,7 +466,7 @@ static int rtc_ioport_write(void *opaque
 
 static inline int to_bcd(RTCState *s, int a)
 {
-    if ( s->hw.cmos_data[RTC_REG_B] & 0x04 )
+    if ( s->hw.cmos_data[RTC_REG_B] & RTC_DM_BINARY )
         return a;
     else
         return ((a / 10) << 4) | (a % 10);
@@ -461,7 +474,7 @@ static inline int to_bcd(RTCState *s, in
 
 static inline int from_bcd(RTCState *s, int a)
 {
-    if ( s->hw.cmos_data[RTC_REG_B] & 0x04 )
+    if ( s->hw.cmos_data[RTC_REG_B] & RTC_DM_BINARY )
         return a;
     else
         return ((a >> 4) * 10) + (a & 0x0f);
@@ -469,12 +482,14 @@ static inline int from_bcd(RTCState *s, 
 
 /* Hours in 12 hour mode are in 1-12 range, not 0-11.
  * So we need convert it before using it*/
-static inline int convert_hour(RTCState *s, int hour)
+static inline int convert_hour(RTCState *s, int raw)
 {
+    int hour = from_bcd(s, raw & 0x7f);
+
     if (!(s->hw.cmos_data[RTC_REG_B] & RTC_24H))
     {
         hour %= 12;
-        if (s->hw.cmos_data[RTC_HOURS] & 0x80)
+        if (raw & 0x80)
             hour += 12;
     }
     return hour;
@@ -493,8 +508,7 @@ static void rtc_set_time(RTCState *s)
     
     tm->tm_sec = from_bcd(s, s->hw.cmos_data[RTC_SECONDS]);
     tm->tm_min = from_bcd(s, s->hw.cmos_data[RTC_MINUTES]);
-    tm->tm_hour = from_bcd(s, s->hw.cmos_data[RTC_HOURS] & 0x7f);
-    tm->tm_hour = convert_hour(s, tm->tm_hour);
+    tm->tm_hour = convert_hour(s, s->hw.cmos_data[RTC_HOURS]);
     tm->tm_wday = from_bcd(s, s->hw.cmos_data[RTC_DAY_OF_WEEK]);
     tm->tm_mday = from_bcd(s, s->hw.cmos_data[RTC_DAY_OF_MONTH]);
     tm->tm_mon = from_bcd(s, s->hw.cmos_data[RTC_MONTH]) - 1;
--- a/xen/arch/x86/hvm/vpt.c
+++ b/xen/arch/x86/hvm/vpt.c
@@ -22,6 +22,7 @@
 #include <asm/hvm/vpt.h>
 #include <asm/event.h>
 #include <asm/apic.h>
+#include <asm/mc146818rtc.h>
 
 #define mode_is(d, name) \
     ((d)->arch.hvm_domain.params[HVM_PARAM_TIMER_MODE] == HVMPTM_##name)
@@ -218,6 +219,7 @@ void pt_update_irq(struct vcpu *v)
     struct periodic_time *pt, *temp, *earliest_pt = NULL;
     uint64_t max_lag = -1ULL;
     int irq, is_lapic;
+    void *pt_priv;
 
     spin_lock(&v->arch.hvm_vcpu.tm_lock);
 
@@ -251,13 +253,14 @@ void pt_update_irq(struct vcpu *v)
     earliest_pt->irq_issued = 1;
     irq = earliest_pt->irq;
     is_lapic = (earliest_pt->source == PTSRC_lapic);
+    pt_priv = earliest_pt->priv;
 
     spin_unlock(&v->arch.hvm_vcpu.tm_lock);
 
     if ( is_lapic )
-    {
         vlapic_set_irq(vcpu_vlapic(v), irq, 0);
-    }
+    else if ( irq == RTC_IRQ )
+        rtc_periodic_interrupt(pt_priv);
     else
     {
         hvm_isa_irq_deassert(v->domain, irq);
--- a/xen/include/asm-x86/hvm/vpt.h
+++ b/xen/include/asm-x86/hvm/vpt.h
@@ -181,6 +181,7 @@ void rtc_migrate_timers(struct vcpu *v);
 void rtc_deinit(struct domain *d);
 void rtc_reset(struct domain *d);
 void rtc_update_clock(struct domain *d);
+void rtc_periodic_interrupt(void *);
 
 void pmtimer_init(struct vcpu *v);
 void pmtimer_deinit(struct domain *d);


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

-        create_periodic_time(v, &s->pt, period, period, RTC_IRQ,
-                             rtc_periodic_cb, s);
-    }
-    else
-    {
+        /* fall through */
+    case RTC_REF_CLCK_1MHZ:
+    case RTC_REF_CLCK_4MHZ:
+        if ( period_code != 0 )
+        {
+            period = 1 << (period_code - 1); /* period in 32 Khz cycles */
+            period = DIV_ROUND(period * 1000000000ULL, 32768); /* in ns */
+            create_periodic_time(v, &s->pt, period, period, RTC_IRQ, NULL, s);
+            break;
+        }
+        /* fall through */
+    default:
         destroy_periodic_time(&s->pt);
+        break;
     }
 }
 
@@ -102,7 +121,7 @@ static void check_update_timer(RTCState 
         guest_usec = get_localtime_us(d) % USEC_PER_SEC;
         if (guest_usec >= (USEC_PER_SEC - 244))
         {
-            /* RTC is in update cycle when enabling UIE */
+            /* RTC is in update cycle */
             s->hw.cmos_data[RTC_REG_A] |= RTC_UIP;
             next_update_time = (USEC_PER_SEC - guest_usec) * NS_PER_USEC;
             expire_time = NOW() + next_update_time;
@@ -144,7 +163,6 @@ static void rtc_update_timer(void *opaqu
 static void rtc_update_timer2(void *opaque)
 {
     RTCState *s = opaque;
-    struct domain *d = vrtc_domain(s);
 
     spin_lock(&s->lock);
     if (!(s->hw.cmos_data[RTC_REG_B] & RTC_SET))
@@ -152,11 +170,7 @@ static void rtc_update_timer2(void *opaq
         s->hw.cmos_data[RTC_REG_C] |= RTC_UF;
         s->hw.cmos_data[RTC_REG_A] &= ~RTC_UIP;
         if ((s->hw.cmos_data[RTC_REG_B] & RTC_UIE))
-        {
-            s->hw.cmos_data[RTC_REG_C] |= RTC_IRQF;
-            hvm_isa_irq_deassert(d, RTC_IRQ);
-            hvm_isa_irq_assert(d, RTC_IRQ);
-        }
+            rtc_toggle_irq(s);
         check_update_timer(s);
     }
     spin_unlock(&s->lock);
@@ -175,21 +189,18 @@ static void alarm_timer_update(RTCState 
 
     stop_timer(&s->alarm_timer);
 
-    if ((s->hw.cmos_data[RTC_REG_B] & RTC_AIE) &&
-            !(s->hw.cmos_data[RTC_REG_B] & RTC_SET))
+    if ( !(s->hw.cmos_data[RTC_REG_B] & RTC_SET) )
     {
         s->current_tm = gmtime(get_localtime(d));
         rtc_copy_date(s);
 
         alarm_sec = from_bcd(s, s->hw.cmos_data[RTC_SECONDS_ALARM]);
         alarm_min = from_bcd(s, s->hw.cmos_data[RTC_MINUTES_ALARM]);
-        alarm_hour = from_bcd(s, s->hw.cmos_data[RTC_HOURS_ALARM]);
-        alarm_hour = convert_hour(s, alarm_hour);
+        alarm_hour = convert_hour(s, s->hw.cmos_data[RTC_HOURS_ALARM]);
 
         cur_sec = from_bcd(s, s->hw.cmos_data[RTC_SECONDS]);
         cur_min = from_bcd(s, s->hw.cmos_data[RTC_MINUTES]);
-        cur_hour = from_bcd(s, s->hw.cmos_data[RTC_HOURS]);
-        cur_hour = convert_hour(s, cur_hour);
+        cur_hour = convert_hour(s, s->hw.cmos_data[RTC_HOURS]);
 
         next_update_time = USEC_PER_SEC - (get_localtime_us(d) % USEC_PER_SEC);
         next_update_time = next_update_time * NS_PER_USEC + NOW();
@@ -343,7 +354,6 @@ static void alarm_timer_update(RTCState 
 static void rtc_alarm_cb(void *opaque)
 {
     RTCState *s = opaque;
-    struct domain *d = vrtc_domain(s);
 
     spin_lock(&s->lock);
     if (!(s->hw.cmos_data[RTC_REG_B] & RTC_SET))
@@ -351,11 +361,7 @@ static void rtc_alarm_cb(void *opaque)
         s->hw.cmos_data[RTC_REG_C] |= RTC_AF;
         /* alarm interrupt */
         if (s->hw.cmos_data[RTC_REG_B] & RTC_AIE)
-        {
-            s->hw.cmos_data[RTC_REG_C] |= RTC_IRQF;
-            hvm_isa_irq_deassert(d, RTC_IRQ);
-            hvm_isa_irq_assert(d, RTC_IRQ);
-        }
+            rtc_toggle_irq(s);
         alarm_timer_update(s);
     }
     spin_unlock(&s->lock);
@@ -365,6 +371,7 @@ static int rtc_ioport_write(void *opaque
 {
     RTCState *s = opaque;
     struct domain *d = vrtc_domain(s);
+    uint32_t orig, mask;
 
     spin_lock(&s->lock);
 
@@ -382,6 +389,7 @@ static int rtc_ioport_write(void *opaque
         return 0;
     }
 
+    orig = s->hw.cmos_data[s->hw.cmos_index];
     switch ( s->hw.cmos_index )
     {
     case RTC_SECONDS_ALARM:
@@ -405,9 +413,9 @@ static int rtc_ioport_write(void *opaque
         break;
     case RTC_REG_A:
         /* UIP bit is read only */
-        s->hw.cmos_data[RTC_REG_A] = (data & ~RTC_UIP) |
-            (s->hw.cmos_data[RTC_REG_A] & RTC_UIP);
-        rtc_timer_update(s);
+        s->hw.cmos_data[RTC_REG_A] = (data & ~RTC_UIP) | (orig & RTC_UIP);
+        if ( (data ^ orig) & (RTC_RATE_SELECT | RTC_DIV_CTL) )
+            rtc_timer_update(s);
         break;
     case RTC_REG_B:
         if ( data & RTC_SET )
@@ -415,7 +423,7 @@ static int rtc_ioport_write(void *opaque
             /* set mode: reset UIP mode */
             s->hw.cmos_data[RTC_REG_A] &= ~RTC_UIP;
             /* adjust cmos before stopping */
-            if (!(s->hw.cmos_data[RTC_REG_B] & RTC_SET))
+            if (!(orig & RTC_SET))
             {
                 s->current_tm = gmtime(get_localtime(d));
                 rtc_copy_date(s);
@@ -424,21 +432,26 @@ static int rtc_ioport_write(void *opaque
         else
         {
             /* if disabling set mode, update the time */
-            if ( s->hw.cmos_data[RTC_REG_B] & RTC_SET )
+            if ( orig & RTC_SET )
                 rtc_set_time(s);
         }
-        /* if the interrupt is already set when the interrupt become
-         * enabled, raise an interrupt immediately*/
-        if ((data & RTC_UIE) && !(s->hw.cmos_data[RTC_REG_B] & RTC_UIE))
-            if (s->hw.cmos_data[RTC_REG_C] & RTC_UF)
+        /*
+         * If the interrupt is already set when the interrupt becomes
+         * enabled, raise an interrupt immediately.
+         * NB: RTC_{A,P,U}IE == RTC_{A,P,U}F respectively.
+         */
+        for ( mask = RTC_UIE; mask <= RTC_PIE; mask <<= 1 )
+            if ( (data & mask) && !(orig & mask) &&
+                 (s->hw.cmos_data[RTC_REG_C] & mask) )
             {
-                hvm_isa_irq_deassert(d, RTC_IRQ);
-                hvm_isa_irq_assert(d, RTC_IRQ);
+                rtc_toggle_irq(s);
+                break;
             }
         s->hw.cmos_data[RTC_REG_B] = data;
-        rtc_timer_update(s);
-        check_update_timer(s);
-        alarm_timer_update(s);
+        if ( (data ^ orig) & RTC_SET )
+            check_update_timer(s);
+        if ( (data ^ orig) & (RTC_24H | RTC_DM_BINARY | RTC_SET) )
+            alarm_timer_update(s);
         break;
     case RTC_REG_C:
     case RTC_REG_D:
@@ -453,7 +466,7 @@ static int rtc_ioport_write(void *opaque
 
 static inline int to_bcd(RTCState *s, int a)
 {
-    if ( s->hw.cmos_data[RTC_REG_B] & 0x04 )
+    if ( s->hw.cmos_data[RTC_REG_B] & RTC_DM_BINARY )
         return a;
     else
         return ((a / 10) << 4) | (a % 10);
@@ -461,7 +474,7 @@ static inline int to_bcd(RTCState *s, in
 
 static inline int from_bcd(RTCState *s, int a)
 {
-    if ( s->hw.cmos_data[RTC_REG_B] & 0x04 )
+    if ( s->hw.cmos_data[RTC_REG_B] & RTC_DM_BINARY )
         return a;
     else
         return ((a >> 4) * 10) + (a & 0x0f);
@@ -469,12 +482,14 @@ static inline int from_bcd(RTCState *s, 
 
 /* Hours in 12 hour mode are in 1-12 range, not 0-11.
  * So we need convert it before using it*/
-static inline int convert_hour(RTCState *s, int hour)
+static inline int convert_hour(RTCState *s, int raw)
 {
+    int hour = from_bcd(s, raw & 0x7f);
+
     if (!(s->hw.cmos_data[RTC_REG_B] & RTC_24H))
     {
         hour %= 12;
-        if (s->hw.cmos_data[RTC_HOURS] & 0x80)
+        if (raw & 0x80)
             hour += 12;
     }
     return hour;
@@ -493,8 +508,7 @@ static void rtc_set_time(RTCState *s)
     
     tm->tm_sec = from_bcd(s, s->hw.cmos_data[RTC_SECONDS]);
     tm->tm_min = from_bcd(s, s->hw.cmos_data[RTC_MINUTES]);
-    tm->tm_hour = from_bcd(s, s->hw.cmos_data[RTC_HOURS] & 0x7f);
-    tm->tm_hour = convert_hour(s, tm->tm_hour);
+    tm->tm_hour = convert_hour(s, s->hw.cmos_data[RTC_HOURS]);
     tm->tm_wday = from_bcd(s, s->hw.cmos_data[RTC_DAY_OF_WEEK]);
     tm->tm_mday = from_bcd(s, s->hw.cmos_data[RTC_DAY_OF_MONTH]);
     tm->tm_mon = from_bcd(s, s->hw.cmos_data[RTC_MONTH]) - 1;
--- a/xen/arch/x86/hvm/vpt.c
+++ b/xen/arch/x86/hvm/vpt.c
@@ -22,6 +22,7 @@
 #include <asm/hvm/vpt.h>
 #include <asm/event.h>
 #include <asm/apic.h>
+#include <asm/mc146818rtc.h>
 
 #define mode_is(d, name) \
     ((d)->arch.hvm_domain.params[HVM_PARAM_TIMER_MODE] == HVMPTM_##name)
@@ -218,6 +219,7 @@ void pt_update_irq(struct vcpu *v)
     struct periodic_time *pt, *temp, *earliest_pt = NULL;
     uint64_t max_lag = -1ULL;
     int irq, is_lapic;
+    void *pt_priv;
 
     spin_lock(&v->arch.hvm_vcpu.tm_lock);
 
@@ -251,13 +253,14 @@ void pt_update_irq(struct vcpu *v)
     earliest_pt->irq_issued = 1;
     irq = earliest_pt->irq;
     is_lapic = (earliest_pt->source == PTSRC_lapic);
+    pt_priv = earliest_pt->priv;
 
     spin_unlock(&v->arch.hvm_vcpu.tm_lock);
 
     if ( is_lapic )
-    {
         vlapic_set_irq(vcpu_vlapic(v), irq, 0);
-    }
+    else if ( irq == RTC_IRQ )
+        rtc_periodic_interrupt(pt_priv);
     else
     {
         hvm_isa_irq_deassert(v->domain, irq);
--- a/xen/include/asm-x86/hvm/vpt.h
+++ b/xen/include/asm-x86/hvm/vpt.h
@@ -181,6 +181,7 @@ void rtc_migrate_timers(struct vcpu *v);
 void rtc_deinit(struct domain *d);
 void rtc_reset(struct domain *d);
 void rtc_update_clock(struct domain *d);
+void rtc_periodic_interrupt(void *);
 
 void pmtimer_init(struct vcpu *v);
 void pmtimer_deinit(struct domain *d);


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 09:52:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 09:52:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1Dne-0000sq-P0; Tue, 14 Aug 2012 09:52:34 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T1Dnc-0000sZ-Hd
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 09:52:32 +0000
Received: from [85.158.143.99:28316] by server-1.bemta-4.messagelabs.com id
	D8/DD-07754-FDF1A205; Tue, 14 Aug 2012 09:52:31 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-11.tower-216.messagelabs.com!1344937951!20553639!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3664 invoked from network); 14 Aug 2012 09:52:31 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-11.tower-216.messagelabs.com with SMTP;
	14 Aug 2012 09:52:31 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 14 Aug 2012 10:52:30 +0100
Message-Id: <502A3BFD0200007800094B6B@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Tue, 14 Aug 2012 10:52:29 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>,
	"xen-devel" <xen-devel@lists.xen.org>,
	"xen-users" <xen-users@lists.xen.org>
References: <1344935141.5926.6.camel@zakaz.uk.xensource.com>
In-Reply-To: <1344935141.5926.6.camel@zakaz.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Subject: Re: [Xen-devel] Xen 4.2 TODO / Release Plan
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 14.08.12 at 11:05, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>     * fix high change rate to CMOS RTC periodic interrupt causing
>       guest wall clock time to lag (possible fix outlined, needs to be
>       put in patch form and thoroughly reviewed/tested for unwanted
>       side effects, Jan Beulich)

Second draft of a patch posted; no test results so far for first draft.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 09:55:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 09:55:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1Dpw-00017r-VV; Tue, 14 Aug 2012 09:54:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T1Dpv-00017c-8X
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 09:54:55 +0000
Received: from [85.158.143.35:15728] by server-3.bemta-4.messagelabs.com id
	0F/EC-09529-E602A205; Tue, 14 Aug 2012 09:54:54 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1344938092!5425166!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkwNzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16478 invoked from network); 14 Aug 2012 09:54:52 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 09:54:52 -0000
X-IronPort-AV: E=Sophos;i="4.77,765,1336348800"; d="scan'208";a="13998449"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Aug 2012 09:54:52 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Tue, 14 Aug 2012 10:54:52 +0100
Message-ID: <1344938090.5926.10.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Tue, 14 Aug 2012 10:54:50 +0100
In-Reply-To: <502A37DE0200007800094B3D@nat28.tlf.novell.com>
References: <1344243491.11339.11.camel@zakaz.uk.xensource.com>
	<5028C34C02000078000945FC@nat28.tlf.novell.com>
	<1344934973.5926.3.camel@zakaz.uk.xensource.com>
	<502A37DE0200007800094B3D@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "Keir \(Xen.org\)" <keir@xen.org>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] 4.2 TODO / Release Plan
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-08-14 at 10:34 +0100, Jan Beulich wrote:
> >>> On 14.08.12 at 11:02, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > On Mon, 2012-08-13 at 08:05 +0100, Jan Beulich wrote:
> >> - address PoD problems with early host side accesses to guest
> >>   address space (draft patch for 4.0.x exists, needs to be ported
> >>   over to -unstable, which I'll expect to get to today)
> > 
> > Is this the same as one of the two existing PoD entries? Expecting not
> > I'll include it separately today (I'm going to post the update very
> > shortly) and we can reconcile any duplication for next week.
> > 
> > Hypervisor:
> >     * [BUG(?)] Under certain conditions, the p2m_pod_sweep code will
> >       stop halfway through searching, causing a guest to crash even if
> >       there was zeroed memory available.  This is NOT a regression
> >       from 4.1, and is a very rare case, so probably shouldn't be a
> >       blocker.  (In fact, I'd be open to the idea that it should wait
> >       until after the release to get more testing.)
> >       	    (George Dunlap)
> > 
> > Tools:
> >     * [BUG(?)] If domain 0 attempts to access a guests' memory before
> >       it is finished being built, and it is being built in PoD mode,
> >       this may cause the guest to crash.  Again, this is NOT a
> >       regression from 4.1.  Furthermore, it's only been reported
> >       (AIUI) by a customer of SuSE; so it shouldn't be a blocker.
> >       (Again, I'd be open to the idea that it should wait until after
> >       the release to get more testing.)
> >       	  (George Dunlap / Jan Beulich)
> 
> It's the same as this second entry, albeit the fix is not limited to
> the tools. Patch posted a few minutes ago.

Thanks, I collapsed both entries into 
    * address PoD problems with early host side accesses to guest
      address space (Jan Beulich, patch posted)

although I expect it will be DONE by the time I repost next week...

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 10:05:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 10:05:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1E0E-0001cf-2g; Tue, 14 Aug 2012 10:05:34 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mgorman@suse.de>) id 1T1E0C-0001cX-1f
	for xen-devel@lists.xensource.com; Tue, 14 Aug 2012 10:05:32 +0000
Received: from [85.158.139.83:26087] by server-2.bemta-5.messagelabs.com id
	06/2A-10142-BE22A205; Tue, 14 Aug 2012 10:05:31 +0000
X-Env-Sender: mgorman@suse.de
X-Msg-Ref: server-11.tower-182.messagelabs.com!1344938730!20802587!1
X-Originating-IP: [195.135.220.15]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14075 invoked from network); 14 Aug 2012 10:05:30 -0000
Received: from cantor2.suse.de (HELO mx2.suse.de) (195.135.220.15)
	by server-11.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 14 Aug 2012 10:05:30 -0000
Received: from relay2.suse.de (unknown [195.135.220.254])
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(No client certificate requested)
	by mx2.suse.de (Postfix) with ESMTP id BA6BEA39D2;
	Tue, 14 Aug 2012 12:05:28 +0200 (CEST)
Date: Tue, 14 Aug 2012 11:05:22 +0100
From: Mel Gorman <mgorman@suse.de>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Message-ID: <20120814100522.GL4177@suse.de>
References: <20120807085554.GF29814@suse.de>
	<20120808.155046.820543563969484712.davem@davemloft.net>
	<20120813154144.GA24868@phenom.dumpdata.com>
MIME-Version: 1.0
Content-Disposition: inline
From xen-devel-bounces@lists.xen.org Tue Aug 14 10:05:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 10:05:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1E0E-0001cf-2g; Tue, 14 Aug 2012 10:05:34 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mgorman@suse.de>) id 1T1E0C-0001cX-1f
	for xen-devel@lists.xensource.com; Tue, 14 Aug 2012 10:05:32 +0000
Received: from [85.158.139.83:26087] by server-2.bemta-5.messagelabs.com id
	06/2A-10142-BE22A205; Tue, 14 Aug 2012 10:05:31 +0000
X-Env-Sender: mgorman@suse.de
X-Msg-Ref: server-11.tower-182.messagelabs.com!1344938730!20802587!1
X-Originating-IP: [195.135.220.15]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14075 invoked from network); 14 Aug 2012 10:05:30 -0000
Received: from cantor2.suse.de (HELO mx2.suse.de) (195.135.220.15)
	by server-11.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 14 Aug 2012 10:05:30 -0000
Received: from relay2.suse.de (unknown [195.135.220.254])
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(No client certificate requested)
	by mx2.suse.de (Postfix) with ESMTP id BA6BEA39D2;
	Tue, 14 Aug 2012 12:05:28 +0200 (CEST)
Date: Tue, 14 Aug 2012 11:05:22 +0100
From: Mel Gorman <mgorman@suse.de>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Message-ID: <20120814100522.GL4177@suse.de>
References: <20120807085554.GF29814@suse.de>
	<20120808.155046.820543563969484712.davem@davemloft.net>
	<20120813154144.GA24868@phenom.dumpdata.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20120813154144.GA24868@phenom.dumpdata.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: xen-devel@lists.xensource.com, netdev@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	Ian Campbell <Ian.Campbell@eu.citrix.com>, linux-mm@kvack.org,
	konrad@darnok.org, akpm@linux-foundation.org,
	David Miller <davem@davemloft.net>
Subject: Re: [Xen-devel] [PATCH] netvm: check for page == NULL when
 propogating the skb->pfmemalloc flag
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Aug 13, 2012 at 11:41:44AM -0400, Konrad Rzeszutek Wilk wrote:
> On Wed, Aug 08, 2012 at 03:50:46PM -0700, David Miller wrote:
> > From: Mel Gorman <mgorman@suse.de>
> > Date: Tue, 7 Aug 2012 09:55:55 +0100
> > 
> > > Commit [c48a11c7: netvm: propagate page->pfmemalloc to skb] is responsible
> > > for the following bug triggered by a xen network driver
> >  ...
> > > The problem is that the xenfront driver is passing a NULL page to
> > > __skb_fill_page_desc() which was unexpected. This patch checks that
> > > there is a page before dereferencing.
> > > 
> > > Reported-and-Tested-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > > Signed-off-by: Mel Gorman <mgorman@suse.de>
> > 
> > That call to __skb_fill_page_desc() in xen-netfront.c looks completely bogus.
> > It's the only driver passing NULL here.
> 
> It looks to be passing a valid page pointer (at least by looking
> at the code) so I am not sure how it got turned into a NULL.
> 

Are we looking at different code bases? I see this and I was assuming it
was the source of the bug.

	__skb_fill_page_desc(skb, 0, NULL, 0, 0);
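
The defensive check being discussed can be sketched in isolation. This is a
hypothetical mock (mock_page/mock_skb are stand-ins, not the real kernel
structures): the pfmemalloc flag is only propagated when a page is actually
supplied, so a NULL page (which xen-netfront passes to clear a fragment slot)
is no longer dereferenced.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-ins for struct page and struct sk_buff,
 * used only to illustrate the control flow of the fix. */
struct mock_page { bool pfmemalloc; };
struct mock_skb  { bool pfmemalloc; };

/* Mirrors the shape of the patch: dereference the page (and propagate
 * page->pfmemalloc into the skb) only when page != NULL. The unguarded
 * version oopsed when xen-netfront passed page == NULL. */
static void mock_fill_page_desc(struct mock_skb *skb, struct mock_page *page)
{
    if (page && page->pfmemalloc)
        skb->pfmemalloc = true;
}
```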

-- 
Mel Gorman
SUSE Labs

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 10:18:34 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 10:18:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1ECZ-0002GH-89; Tue, 14 Aug 2012 10:18:19 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mgorman@suse.de>) id 1T1ECX-0002GA-Ia
	for xen-devel@lists.xensource.com; Tue, 14 Aug 2012 10:18:17 +0000
Received: from [85.158.143.35:30883] by server-1.bemta-4.messagelabs.com id
	94/85-07754-8E52A205; Tue, 14 Aug 2012 10:18:16 +0000
X-Env-Sender: mgorman@suse.de
X-Msg-Ref: server-13.tower-21.messagelabs.com!1344939495!15268444!1
X-Originating-IP: [195.135.220.15]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21550 invoked from network); 14 Aug 2012 10:18:16 -0000
Received: from cantor2.suse.de (HELO mx2.suse.de) (195.135.220.15)
	by server-13.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 14 Aug 2012 10:18:16 -0000
Received: from relay1.suse.de (unknown [195.135.220.254])
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(No client certificate requested)
	by mx2.suse.de (Postfix) with ESMTP id 079CAA3A49;
	Tue, 14 Aug 2012 12:18:11 +0200 (CEST)
Date: Tue, 14 Aug 2012 11:18:07 +0100
From: Mel Gorman <mgorman@suse.de>
To: Jeremy Fitzhardinge <jeremy@goop.org>
Message-ID: <20120814101807.GM4177@suse.de>
References: <20120807085554.GF29814@suse.de>
	<20120808.155046.820543563969484712.davem@davemloft.net>
	<20120813102604.GC4177@suse.de> <20120813104745.GE4177@suse.de>
	<50294DF0.8040206@goop.org>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50294DF0.8040206@goop.org>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: xen-devel@lists.xensource.com, netdev@vger.kernel.org,
	linux-kernel@vger.kernel.org, Ian.Campbell@eu.citrix.com,
	linux-mm@kvack.org, konrad@darnok.org, akpm@linux-foundation.org,
	David Miller <davem@davemloft.net>
Subject: Re: [Xen-devel] [PATCH] netvm: check for page == NULL when
 propogating the skb->pfmemalloc flag
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Aug 13, 2012 at 11:56:48AM -0700, Jeremy Fitzhardinge wrote:
> On 08/13/2012 03:47 AM, Mel Gorman wrote:
> > Resending to correct Jeremy's address.
> >
> > On Wed, Aug 08, 2012 at 03:50:46PM -0700, David Miller wrote:
> >> From: Mel Gorman <mgorman@suse.de>
> >> Date: Tue, 7 Aug 2012 09:55:55 +0100
> >>
> >>> Commit [c48a11c7: netvm: propagate page->pfmemalloc to skb] is responsible
> >>> for the following bug triggered by a xen network driver
> >>  ...
> >>> The problem is that the xenfront driver is passing a NULL page to
> >>> __skb_fill_page_desc() which was unexpected. This patch checks that
> >>> there is a page before dereferencing.
> >>>
> >>> Reported-and-Tested-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> >>> Signed-off-by: Mel Gorman <mgorman@suse.de>
> >> That call to __skb_fill_page_desc() in xen-netfront.c looks completely bogus.
> >> It's the only driver passing NULL here.
> >>
> >> That whole song and dance figuring out what to do with the head
> >> fragment page, depending upon whether the length is greater than the
> >> RX_COPY_THRESHOLD, is completely unnecessary.
> >>
> >> Just use something like a call to __pskb_pull_tail(skb, len) and all
> >> that other crap around that area can simply be deleted.
> > I looked at this for a while but I did not see how __pskb_pull_tail()
> > could be used sensibly, but I'm simply not familiar with writing network
> > device drivers or Xen.
> >
> > This messing with RX_COPY_THRESHOLD seems to be related to how the frontend
> > and backend communicate (maybe some fixed limitation of the xenbus). The
> > existing code looks like it is trying to take the fragments received and
> > pass them straight through without copying. I worry that if I try converting
> > this to __pskb_pull_tail() it would either hit the limitation of xenbus or
> > introduce copying where it is not wanted.
> >
> > I'm going to have to punt this to Jeremy and the other Xen folk as I'm not
> > sure what the original intention was and I don't have a Xen setup anywhere
> > to test any patch. Jeremy, xen folk? 
> 
> It's been a while since I've looked at that stuff, but as I remember,
> the issue is that since the packet ring memory is shared with another
> domain which may be untrustworthy, we want to make copies of the headers
> before making any decisions based on them so that the other domain can't
> change them after header processing but before they're actually sent. 
> (The packet payload is considered less important, but of course the same
> issue applies if you're using some kind of content-aware packet filter.)
> 
> So that's the rationale for always copying RX_COPY_THRESHOLD, even if
> the packet is larger than that amount.  As far as I know, changing this
> behaviour wouldn't break the ring protocol, but it does introduce a
> potential security issue.
> 

David,

This leaves us somewhat in a pickle. If I'm reading this right (which I may
not be) it means that using __pskb_pull_tail() will work in the ideal case
but potentially introduces a subtle issue in the future. This bug could be
"fixed" in the driver by partially reverting [01c68026: xen: netfront:
convert to SKB paged frag API.]. That could be viewed as sweeping the problem
under the carpet, but it does at least contain the problem to the xen-netfront
driver. A new helper like __skb_clear_page_desc() could be created, but that
is overkill for one driver and feels just as ugly.
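
For illustration, the copy-before-parse rationale Jeremy describes can be
sketched like this (invented names, not the actual netfront code; the
threshold value is illustrative): data arriving in ring memory shared with an
untrusted domain is copied into driver-private memory before any header
parsing, so the peer cannot change the headers between validation and use.

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>
#include <string.h>

#define RX_COPY_THRESHOLD 256  /* illustrative value only */

/* shared_ring points at memory another (untrusted) domain can still
 * write to; priv is driver-private. Copying the first
 * RX_COPY_THRESHOLD bytes before any header parsing means the peer
 * cannot rewrite the headers after they have been checked (a TOCTOU
 * race), even when the packet is larger than the threshold. */
static size_t copy_headers(uint8_t *priv, const uint8_t *shared_ring,
                           size_t pkt_len)
{
    size_t n = pkt_len < RX_COPY_THRESHOLD ? pkt_len : RX_COPY_THRESHOLD;
    memcpy(priv, shared_ring, n);  /* parse only from priv after this */
    return n;
}
```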

-- 
Mel Gorman
SUSE Labs

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 10:19:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 10:19:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1ED9-0002Ip-Ld; Tue, 14 Aug 2012 10:18:55 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xensource.com@bloms.de>) id 1T1E1k-0001kD-85
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 10:07:11 +0000
Received: from [85.158.138.51:7332] by server-5.bemta-3.messagelabs.com id
	98/50-08865-B432A205; Tue, 14 Aug 2012 10:07:07 +0000
X-Env-Sender: xensource.com@bloms.de
X-Msg-Ref: server-12.tower-174.messagelabs.com!1344938826!20234622!1
X-Originating-IP: [84.200.248.35]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8442 invoked from network); 14 Aug 2012 10:07:06 -0000
Received: from smtp.bloms.de (HELO smtp.bloms.de) (84.200.248.35)
	by server-12.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 14 Aug 2012 10:07:06 -0000
Received: from smtp.bloms.de (localhost [127.0.0.1])
	by smtp.bloms.de (Postfix) with ESMTP id 018EA1C140CF;
	Tue, 14 Aug 2012 12:07:06 +0200 (CEST)
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed; d=bloms.de; h=date:from:to
	:subject:message-id:references:mime-version:content-type
	:in-reply-to; s=selector1; bh=45BvRtlSS8f2g4kF6g6wcO9YuXA=; b=M4
	qn0sFKHGxQy+gG+jpSfEgL7FS8EUZMO7UMQ3G86cowbEiZpI6spL9WPf7pQ1dWWi
	x3rNnQ4YMo8JfAP0yQxSFobCpC+/qHllNtk8TowO2MAAczGrV34F7Se3SZzoJfof
	cN/kUr4XB8b7MQkCQ3RmxVXndvXZB6j6eVdu9Fk1sNsPTuTQI3FKZiO0gOkd9qk+
	TOpWn3isk+a8nJTRhHwM6cgGA6Rgkuek28bFkUQCQ2ywzjoDQ/2EnW3Lp/ecSfIJ
	+xt3H7OQl6q1urlBrkHHtFU3+1WLKvw+SZLPSCEkg2bjO10yU1S8qLNijAFek/2c
	5DCFr7SJgqUT9XvBdwGHx2Ewk00uI8wTX1n+8LasqoHEXKiUlHNvYFF8tYsVfsZZ
	FL+9cK9ONO5VLtkUtWd0nlQXGVx7Mb/HTau1j1QSd2bNTXCbKU1GCyUtJmlQXTqN
	czeJcS9spvN/27eiSiWJqoQMgbhD33ZKhmAlVEu5rL5jVT1LmWFUkA7cnJwVO6Kt
	236yOnd+euEHtBtntny+nRT/fFdPCfkLIJRVdULyN0gVfLFojCecX0cJ4twKwl0b
	4JEeDmEpaM7+bqrUoz+0PDanmPuClJlNRTSQ3oVMBU7xqmvZQA/UvnHPT9QvepkY
	47/pLuNUzHPzNv6/iswSpxqN/BoQ5Hq4hvT56Y2z4=
Received: by smtp.bloms.de (Postfix, from userid 1000)
	id D1A731C140E9; Tue, 14 Aug 2012 12:07:05 +0200 (CEST)
Date: Tue, 14 Aug 2012 12:07:05 +0200
From: Dieter Bloms <xensource.com@bloms.de>
To: xen-devel@lists.xen.org, xen-users@lists.xen.org
Message-ID: <20120814100704.GA19704@bloms.de>
References: <1344935141.5926.6.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1344935141.5926.6.camel@zakaz.uk.xensource.com>
User-Agent: Mutt/1.5.20 (2009-06-14)
X-Mailman-Approved-At: Tue, 14 Aug 2012 10:18:54 +0000
Subject: [Xen-devel] Xen 4.2 TODO (io and irq parameter are not evaluated by
	xl)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi,

On Tue, Aug 14, Ian Campbell wrote:

> tools, blockers:
> 
>     * xl compatibility with xm:
> 
>         * No known issues

The io and irq parameters in domU config files are not evaluated by xl,
so it is not possible to pass through a parallel port for my printer to
a domU when I start the domU with the xl command.
With xm I have no such issue.
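
For reference, this is the kind of legacy xm/xend config stanza being
discussed (a hedged sketch from memory; the exact option spelling may differ
between xend versions, while 0x378-0x37f and IRQ 7 are the standard legacy
parallel-port resources):

```
# Hypothetical sketch of the xm domU config options in question;
# verify the exact syntax against your xend version.
ioports = [ '378-37f' ]   # legacy parallel port I/O range
irq     = [ 7 ]           # legacy parallel port IRQ
```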

-- 
Best regards

  Dieter Bloms

--
I do not get viruses because I do not use MS software.
If you use Outlook then please do not put my email address in your
address-book so that WHEN you get a virus it won't use my address in the
>From field.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 10:19:12 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 10:19:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1EDF-0002JY-1Z; Tue, 14 Aug 2012 10:19:01 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>)
	id 1T1EDD-0002Ij-Aq; Tue, 14 Aug 2012 10:18:59 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-9.tower-27.messagelabs.com!1344939504!9150915!1
X-Originating-IP: [192.89.123.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjg5LjEyMy4yNSA9PiA0Nzc1OTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2734 invoked from network); 14 Aug 2012 10:18:25 -0000
Received: from smtp.tele.fi (HELO smtp.tele.fi) (192.89.123.25)
	by server-9.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 14 Aug 2012 10:18:25 -0000
X-Originating-Ip: [194.89.68.22]
Received: from ydin.reaktio.net (reaktio.net [194.89.68.22])
	by smtp.tele.fi (Postfix) with ESMTP id D03251C3B;
	Tue, 14 Aug 2012 13:18:23 +0300 (EEST)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id 300D42005D; Tue, 14 Aug 2012 13:18:23 +0300 (EEST)
Date: Tue, 14 Aug 2012 13:18:22 +0300
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20120814101822.GX19851@reaktio.net>
References: <1344935141.5926.6.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1344935141.5926.6.camel@zakaz.uk.xensource.com>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: xen-users <xen-users@lists.xen.org>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.2 TODO / Release Plan / ipxe gcc 4.7
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Aug 14, 2012 at 10:05:41AM +0100, Ian Campbell wrote:
> 
> tools, nice to have:
> 

* fix ipxe build problems with gcc 4.7 (Fedora 17).
  The following files fail to build:
	- ipxe/src/drivers/bus/isa.c
	- ipxe/src/drivers/net/myri10ge.c
	- ipxe/src/drivers/infiniband/qib7322.c
  Patches have been posted to the ipxe-devel mailing list,
  so we need to update our ipxe version or grab the patches.


-- Pasi


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 10:19:12 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 10:19:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1EDF-0002JY-1Z; Tue, 14 Aug 2012 10:19:01 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>)
	id 1T1EDD-0002Ij-Aq; Tue, 14 Aug 2012 10:18:59 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-9.tower-27.messagelabs.com!1344939504!9150915!1
X-Originating-IP: [192.89.123.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjg5LjEyMy4yNSA9PiA0Nzc1OTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2734 invoked from network); 14 Aug 2012 10:18:25 -0000
Received: from smtp.tele.fi (HELO smtp.tele.fi) (192.89.123.25)
	by server-9.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 14 Aug 2012 10:18:25 -0000
X-Originating-Ip: [194.89.68.22]
Received: from ydin.reaktio.net (reaktio.net [194.89.68.22])
	by smtp.tele.fi (Postfix) with ESMTP id D03251C3B;
	Tue, 14 Aug 2012 13:18:23 +0300 (EEST)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id 300D42005D; Tue, 14 Aug 2012 13:18:23 +0300 (EEST)
Date: Tue, 14 Aug 2012 13:18:22 +0300
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20120814101822.GX19851@reaktio.net>
References: <1344935141.5926.6.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1344935141.5926.6.camel@zakaz.uk.xensource.com>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: xen-users <xen-users@lists.xen.org>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.2 TODO / Release Plan / ipxe gcc 4.7
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Aug 14, 2012 at 10:05:41AM +0100, Ian Campbell wrote:
> 
> tools, nice to have:
> 

* fix ipxe build problems with gcc 4.7 (fedora 17).
  The following files fail to build:
	- ipxe/src/drivers/bus/isa.c
	- ipxe/src/drivers/net/myri10ge.c
	- ipxe/src/drivers/infiniband/qib7322.c
  Patches have been posted to the ipxe-devel mailing list,
  so we need to update our ipxe version or grab the patches.


-- Pasi


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 10:19:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 10:19:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1EDy-0002Uq-Ab; Tue, 14 Aug 2012 10:19:46 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T1EDw-0002Tl-JA
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 10:19:44 +0000
Received: from [85.158.143.99:38128] by server-2.bemta-4.messagelabs.com id
	94/38-31966-F362A205; Tue, 14 Aug 2012 10:19:43 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-216.messagelabs.com!1344939583!22807430!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkwNzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12029 invoked from network); 14 Aug 2012 10:19:43 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-4.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 10:19:43 -0000
X-IronPort-AV: E=Sophos;i="4.77,765,1336348800"; d="scan'208";a="13999028"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Aug 2012 10:18:47 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Tue, 14 Aug 2012 11:18:47 +0100
Message-ID: <1344939526.5926.14.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Olaf Hering <olaf@aepfle.de>
Date: Tue, 14 Aug 2012 11:18:46 +0100
In-Reply-To: <20120810125950.GA451@aepfle.de>
References: <20120806173905.GA26336@aepfle.de>
	<1344318133.24794.16.camel@dagon.hellion.org.uk>
	<20120807152502.GA24503@aepfle.de>
	<1344353581.11339.105.camel@zakaz.uk.xensource.com>
	<20120808172809.GA22206@aepfle.de>
	<1344501152.32142.78.camel@zakaz.uk.xensource.com>
	<20120809143406.GA9317@aepfle.de> <20120810074159.GA11792@aepfle.de>
	<20120810125950.GA451@aepfle.de>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] vif backend configuration times out
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-08-10 at 13:59 +0100, Olaf Hering wrote:
> On Fri, Aug 10, Olaf Hering wrote:
> 
> > On Thu, Aug 09, Olaf Hering wrote:
> > 
> > > Indeed, netback_probe is apparently never called in my case. I will
> > > check why that happens.
> > 
> > What I have seen so far is that in 4.2+xl the vif driver is not
> > registered, while in 4.1+xm there is a vif driver registered. That's
> > the only difference I could spot so far.
> 
> Argh, I was expecting the required kernel drivers to be loaded when
> needed, but that's not the case. There is a workaround or fix for pvops
> in 25728:a6edbc39fc84, but this changeset misses at least netbk and
> blkbk.
> 
> Any idea why that changeset is now needed?
> Why did it work for everyone before?

Backend driver autoloading is relatively new in pvops kernels at least
(I don't know if it was ever a feature of the older kernels). Perhaps
the SLES kernels used to build those drivers into the kernel statically?
(That was quite common in the classic-Xen kernel days, but with pvops a
modular build is becoming more common.)

None of which answers your questions as to why it used to work for you
though.

I think we would accept an update to 25728:a6edbc39fc84 to add some new
aliases; it was discussed at the time, but I think the thread petered out
after a short discussion about what the correct names were.

Ian.
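For context, the autoloading mechanism under discussion works by having each
backend module declare a xenbus alias that udev/modprobe can match when the
toolstack creates a backend device in xenstore. A sketch of the pattern — the
alias strings below are assumptions based on mainline Linux's xen-netback and
xen-blkback drivers, not taken from this thread, and this is a kernel-module
fragment rather than standalone code:

```c
/* Kernel-module fragment (illustrative): each pvops backend declares an
 * alias of the form "xen-backend:<devicetype>" so that udev can modprobe
 * the right module when the toolstack writes the backend entry. */
#include <linux/module.h>

/* in the network backend: */
MODULE_ALIAS("xen-backend:vif");

/* in the block backend: */
MODULE_ALIAS("xen-backend:vbd");
```

With such aliases in place, `modprobe xen-backend:vif` (or the equivalent
udev event) resolves to the backend module without it being built in.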


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 10:20:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 10:20:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1EEt-0002gs-Pv; Tue, 14 Aug 2012 10:20:43 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1T1EEs-0002gS-Ix
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 10:20:42 +0000
Received: from [85.158.143.99:41357] by server-3.bemta-4.messagelabs.com id
	63/59-09529-9762A205; Tue, 14 Aug 2012 10:20:41 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-4.tower-216.messagelabs.com!1344939640!22807666!1
X-Originating-IP: [192.89.123.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjg5LjEyMy4yNSA9PiA0Nzc1OTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18700 invoked from network); 14 Aug 2012 10:20:41 -0000
Received: from smtp.tele.fi (HELO smtp.tele.fi) (192.89.123.25)
	by server-4.tower-216.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 14 Aug 2012 10:20:41 -0000
X-Originating-Ip: [194.89.68.22]
Received: from ydin.reaktio.net (reaktio.net [194.89.68.22])
	by smtp.tele.fi (Postfix) with ESMTP id B4B8E2F8E;
	Tue, 14 Aug 2012 13:20:39 +0300 (EEST)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id 8FCD22005D; Tue, 14 Aug 2012 13:20:39 +0300 (EEST)
Date: Tue, 14 Aug 2012 13:20:39 +0300
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20120814102039.GY19851@reaktio.net>
References: <1344935141.5926.6.camel@zakaz.uk.xensource.com>
	<20120814101822.GX19851@reaktio.net>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20120814101822.GX19851@reaktio.net>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: xen-users <xen-users@lists.xen.org>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.2 TODO / Release Plan / ipxe gcc 4.7
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Aug 14, 2012 at 01:18:22PM +0300, Pasi Kärkkäinen wrote:
> On Tue, Aug 14, 2012 at 10:05:41AM +0100, Ian Campbell wrote:
> > 
> > tools, nice to have:
> > 
> 
> * fix ipxe build problems with gcc 4.7 (fedora 17).
>   The following files fail to build:
> 	- ipxe/src/drivers/bus/isa.c
> 	- ipxe/src/drivers/net/myri10ge.c
> 	- ipxe/src/drivers/infiniband/qib7322.c
>   Patches have been posted to the ipxe-devel mailing list,
>   so we need to update our ipxe version or grab the patches.
> 

And the needed patches are listed here: http://lists.xen.org/archives/html/xen-devel/2012-08/msg01048.html

-- Pasi


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 10:36:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 10:36:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1ETm-000443-6t; Tue, 14 Aug 2012 10:36:06 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1T1ETk-00043K-Vl
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 10:36:05 +0000
Received: from [85.158.139.83:64301] by server-10.bemta-5.messagelabs.com id
	89/50-13125-41A2A205; Tue, 14 Aug 2012 10:36:04 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-11.tower-182.messagelabs.com!1344940562!20808706!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkwNzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19502 invoked from network); 14 Aug 2012 10:36:03 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-11.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 10:36:03 -0000
X-IronPort-AV: E=Sophos;i="4.77,765,1336348800"; d="scan'208";a="13999440"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Aug 2012 10:35:15 +0000
Received: from dhcp-3-120.uk.xensource.com (10.80.3.120) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 14 Aug 2012 11:35:15 +0100
Message-ID: <502A2A23.4050205@citrix.com>
Date: Tue, 14 Aug 2012 11:36:19 +0100
From: Roger Pau Monne <roger.pau@citrix.com>
User-Agent: Postbox 3.0.4 (Macintosh/20120616)
MIME-Version: 1.0
To: Internecto List Subscriber <lists@internecto.net>
References: <20120810115405.05af653e@internecto.net>
	<alpine.DEB.2.02.1208101110370.21096@kaball.uk.xensource.com>
	<20120810140611.4ca8a1fb@internecto.net>
	<alpine.DEB.2.02.1208101313320.21096@kaball.uk.xensource.com>
	<20120810163722.2feaadad@internecto.net>
In-Reply-To: <20120810163722.2feaadad@internecto.net>
X-Enigmail-Version: 1.2.2
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Mark van Dijk <lists+xen@internecto.net>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] xen 4.2.0-rc3-pre: building failure on alpine linux
 / uclibc
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Internecto List Subscriber wrote:
> On Fri, 10 Aug 2012 13:16:12 +0100
> Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
> 
>> On Fri, 10 Aug 2012, Mark van Dijk wrote:
>>>> This is upstream QEMU that is breaking, not qemu-xen-traditional
>>>> (see the code path: qemu-xen-dir-remote instead of
>>>> qemu-xen-traditional-dir-remote).
>>> Ah, I didn't know; it's a little confusing. Would you like me to
>>> submit a bug report with them?
>>>
>>>> Moreover it is breaking compiling qemu-nbd that we aren't
>>>> currently using. I would try out the following change to the
>>>> configure script: (..snip..)
>>> Yes, that works, thanks! But it gives a new error which I couldn't
>>> solve yet:
>>>
>>> ---
>>> LINK  qemu-nbd
>>>
>>> cutils.o: In function `strtosz_suffix_unit':
>>>
>>> tools/qemu-xen-dir/cutils.c:354: undefined reference to
>>> `__isnan'
>>>
>>> tools/qemu-xen-dir/cutils.c:357: undefined reference to `modf'
>>> collect2: ld returned 1 exit status
>>> ---
>>>
>>> Any idea there?
>> It is another "-lm" missing somewhere.
> 
> Ok, well I'll leave that to the people who can actually make a healthy
> patch out of this.
> 
>>
>>> Also -- If we're not using qemu-nbd then could you suggest a
>>> workaround please? I'd prefer something that can be patched or
>>> issued before I run the make process. (I run the make process
>>> twice now - if the first run fails, patch, then run again and if it
>>> fails again error out)
>> You can disable qemu-nbd altogether with the following patch:
>> (..snip..)
> 
> While I couldn't find the proper configure script for this (I even
> grepped for stuff like 'virtfs=no' but got nothing), it was a good
> starting point. So thanks for pointing me in the right direction :)
> 
> For now building unstable on Alpine Linux works with the following
> patch:
> 
> http://pastebin.com/QU8XuM0a

Natanael Copa sent a patch to Qemu-devel some months ago to fix the
build of Qemu on uClibc, but it seems like it was ignored:

http://lists.gnu.org/archive/html/qemu-devel/2012-06/msg02388.html

Could you check whether that still applies and fixes your problems?

Roger.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 10:39:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 10:39:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1EWz-0004X7-Pe; Tue, 14 Aug 2012 10:39:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T1EWy-0004Wt-6q
	for xen-devel@lists.xensource.com; Tue, 14 Aug 2012 10:39:24 +0000
Received: from [85.158.143.99:50860] by server-1.bemta-4.messagelabs.com id
	0F/FE-07754-BDA2A205; Tue, 14 Aug 2012 10:39:23 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-13.tower-216.messagelabs.com!1344940761!27188094!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkwNzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11616 invoked from network); 14 Aug 2012 10:39:21 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-13.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 10:39:21 -0000
X-IronPort-AV: E=Sophos;i="4.77,765,1336348800"; d="scan'208";a="13999512"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Aug 2012 10:39:21 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 14 Aug 2012 11:39:21 +0100
Date: Tue, 14 Aug 2012 11:38:52 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Jan Beulich <JBeulich@suse.com>
In-Reply-To: <5029098A020000780009471C@nat28.tlf.novell.com>
Message-ID: <alpine.DEB.2.02.1208131541460.21096@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208101222370.21096@kaball.uk.xensource.com>
	<1344600612-10815-5-git-send-email-stefano.stabellini@eu.citrix.com>
	<5025276B02000078000942BE@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208131210170.21096@kaball.uk.xensource.com>
	<5029098A020000780009471C@nat28.tlf.novell.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "Tim Deegan \(3P\)" <Tim.Deegan@citrix.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH v2 5/5] xen: replace XEN_GUEST_HANDLE with
 XEN_GUEST_HANDLE_PARAM when appropriate
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 13 Aug 2012, Jan Beulich wrote:
> >>> On 13.08.12 at 13:24, Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
> > On the other hand if you mean casting a XEN_GUEST_HANDLE_PARAM into a
> > XEN_GUEST_HANDLE to pass it to other internal functions, I don't think
> > there is any point in that because the other internal functions should
> > also have XEN_GUEST_HANDLE_PARAMs as parameters.
> 
> So you obviously need a cast from "normal" to _PARAM (so that
> you can pass embedded fields to functions).

In practice embedded fields are in guest memory, so the first thing Xen
does is call copy_from_guest and then work with the struct pointer
directly from that point on.

However I do see how a function like that might make the distinction
between XEN_GUEST_HANDLE and XEN_GUEST_HANDLE_PARAMs clearer, so I'll
add one.

The implementation is going to be identical to guest_handle_cast though.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 10:41:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 10:41:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1EYr-0004ka-CK; Tue, 14 Aug 2012 10:41:21 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T1EYp-0004kJ-Pf
	for xen-devel@lists.xensource.com; Tue, 14 Aug 2012 10:41:20 +0000
Received: from [85.158.138.51:48064] by server-7.bemta-3.messagelabs.com id
	21/FC-01906-E4B2A205; Tue, 14 Aug 2012 10:41:18 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-13.tower-174.messagelabs.com!1344940876!9494878!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkwNzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17757 invoked from network); 14 Aug 2012 10:41:16 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-13.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 10:41:16 -0000
X-IronPort-AV: E=Sophos;i="4.77,765,1336348800"; d="scan'208";a="13999570"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Aug 2012 10:41:09 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 14 Aug 2012 11:41:09 +0100
Date: Tue, 14 Aug 2012 11:40:41 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
In-Reply-To: <alpine.DEB.2.02.1208131541460.21096@kaball.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1208141140060.21096@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208101222370.21096@kaball.uk.xensource.com>
	<1344600612-10815-5-git-send-email-stefano.stabellini@eu.citrix.com>
	<5025276B02000078000942BE@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208131210170.21096@kaball.uk.xensource.com>
	<5029098A020000780009471C@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208131541460.21096@kaball.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, Jan Beulich <JBeulich@suse.com>,
	"Tim Deegan \(3P\)" <Tim.Deegan@citrix.com>
Subject: Re: [Xen-devel] [PATCH v2 5/5] xen: replace XEN_GUEST_HANDLE with
 XEN_GUEST_HANDLE_PARAM when appropriate
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 14 Aug 2012, Stefano Stabellini wrote:
> On Mon, 13 Aug 2012, Jan Beulich wrote:
> > >>> On 13.08.12 at 13:24, Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
> > > On the other hand if you mean casting a XEN_GUEST_HANDLE_PARAM into a
> > > XEN_GUEST_HANDLE to pass it to other internal functions, I don't think
> > > there is any point in that because the other internal functions should
> > > also have XEN_GUEST_HANDLE_PARAMs as parameters.
> > 
> > So you obviously need a cast from "normal" to _PARAM (so that
> > you can pass embedded fields to functions).
> 
> In practice embedded fields are in guest memory, so the first thing Xen
> does is call copy_from_guest and then work with the struct pointer
> directly from that point on.
> 
> However I do see how a function like that might make the distinction
> between XEN_GUEST_HANDLE and XEN_GUEST_HANDLE_PARAMs clearer, so I'll
> add one.
> 
> The implementation is going to be identical to guest_handle_cast though.
> 

Actually it is probably better to add a good comment on top of
guest_handle_cast explaining that it can be used with both
XEN_GUEST_HANDLE and XEN_GUEST_HANDLE_PARAM as parameters.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 10:45:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 10:45:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1Eci-00057u-8H; Tue, 14 Aug 2012 10:45:20 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T1Ech-00057k-3U
	for Xen-devel@lists.xensource.com; Tue, 14 Aug 2012 10:45:19 +0000
Received: from [85.158.139.83:25758] by server-11.bemta-5.messagelabs.com id
	DB/5D-29296-E3C2A205; Tue, 14 Aug 2012 10:45:18 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-11.tower-182.messagelabs.com!1344941113!20810448!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkwNzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21988 invoked from network); 14 Aug 2012 10:45:17 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-11.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 10:45:17 -0000
X-IronPort-AV: E=Sophos;i="4.77,765,1336348800"; d="scan'208";a="13999669"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Aug 2012 10:45:13 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 14 Aug 2012 11:45:13 +0100
Date: Tue, 14 Aug 2012 11:44:44 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Mukesh Rathor <mukesh.rathor@oracle.com>
In-Reply-To: <20120813151446.22ae85b5@mantra.us.oracle.com>
Message-ID: <alpine.DEB.2.02.1208141144070.21096@kaball.uk.xensource.com>
References: <20120626181707.4203d336@mantra.us.oracle.com>
	<CAFLBxZagm=53bZ=KaWmd7XQkkAduD+znECJRsaJUqrGJDZgkQQ@mail.gmail.com>
	<20120801153439.3f81c923@mantra.us.oracle.com>
	<501A4E0C.1090509@eu.citrix.com>
	<20120813151446.22ae85b5@mantra.us.oracle.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	"Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Keir Fraser <keir.xen@gmail.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [HYBRID]: status update...
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 13 Aug 2012, Mukesh Rathor wrote:
> Ok, I changed all code references from xen_hybrid_domain to xen_pvh_domain
> in linux. Changing xen code too. So it's PVH now.

What would xen_pv_domain() and xen_hvm_domain() return in a hybrid
guest?

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 10:57:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 10:57:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1Ent-0005f8-5O; Tue, 14 Aug 2012 10:56:53 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T1Enr-0005f3-1s
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 10:56:51 +0000
Received: from [85.158.139.83:11204] by server-8.bemta-5.messagelabs.com id
	B9/EC-02481-2FE2A205; Tue, 14 Aug 2012 10:56:50 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-182.messagelabs.com!1344941809!20812908!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkwNzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8725 invoked from network); 14 Aug 2012 10:56:49 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-11.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 10:56:49 -0000
X-IronPort-AV: E=Sophos;i="4.77,765,1336348800"; d="scan'208";a="13999962"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Aug 2012 10:56:49 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Tue, 14 Aug 2012 11:56:49 +0100
Message-ID: <1344941807.5926.25.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "lars.kurth@xen.org" <lars.kurth@xen.org>
Date: Tue, 14 Aug 2012 11:56:47 +0100
In-Reply-To: <502934CB.7060409@xen.org>
References: <502934CB.7060409@xen.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.2 Limits (needs review)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2012-08-13 at 18:09 +0100, Lars Kurth wrote:
> Hi,
> 
> I added a page on Xen 4.2 limits (based on info I know) at 
> http://wiki.xen.org/wiki/Xen_4.2_Limits based on some information that 
> Jan sent me a few months back. What I don't know is whether there is a 
> difference for 64 bit and 32 bit guests. Also I may be missing 
> some important figures/stuff that people want to know.
> 
> Feel free to reply to the thread or add to 
> http://wiki.xen.org/wiki/Talk:Xen_4.2_Limits

Is the intention to merge this into
http://wiki.xen.org/wiki/Xen_Release_Features when the release happens?

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 11:32:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 11:32:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1FMU-0006BW-9A; Tue, 14 Aug 2012 11:32:38 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1T1FMT-0006BR-1z
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 11:32:37 +0000
Received: from [85.158.143.35:25116] by server-1.bemta-4.messagelabs.com id
	61/84-07754-4573A205; Tue, 14 Aug 2012 11:32:36 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-15.tower-21.messagelabs.com!1344943853!14701211!1
X-Originating-IP: [81.169.146.162]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MiA9PiAzNjkzMTE=\n,sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MiA9PiAzNjkzMTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 779 invoked from network); 14 Aug 2012 11:30:54 -0000
Received: from mo-p00-ob.rzone.de (HELO mo-p00-ob.rzone.de) (81.169.146.162)
	by server-15.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 14 Aug 2012 11:30:54 -0000
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+zrwiavkK6tmQaLfmztM8TOFGiy0MEnhN
X-RZG-CLASS-ID: mo00
Received: from probook.site
	(dslb-088-065-085-213.pools.arcor-ip.net [88.65.85.213])
	by smtp.strato.de (jorabe mo61) (RZmta 30.9 DYNA|AUTH)
	with (DHE-RSA-AES256-SHA encrypted) ESMTPA id o0194bo7EBBDBE ;
	Tue, 14 Aug 2012 13:30:53 +0200 (CEST)
Received: by probook.site (Postfix, from userid 1000)
	id 7D50B1836E; Tue, 14 Aug 2012 13:30:52 +0200 (CEST)
Date: Tue, 14 Aug 2012 13:30:52 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20120814113052.GA31244@aepfle.de>
References: <20120806173905.GA26336@aepfle.de>
	<1344318133.24794.16.camel@dagon.hellion.org.uk>
	<20120807152502.GA24503@aepfle.de>
	<1344353581.11339.105.camel@zakaz.uk.xensource.com>
	<20120808172809.GA22206@aepfle.de>
	<1344501152.32142.78.camel@zakaz.uk.xensource.com>
	<20120809143406.GA9317@aepfle.de>
	<20120810074159.GA11792@aepfle.de> <20120810125950.GA451@aepfle.de>
	<1344939526.5926.14.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1344939526.5926.14.camel@zakaz.uk.xensource.com>
User-Agent: Mutt/1.5.21.rev5555 (2012-07-20)
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] vif backend configuration times out
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Aug 14, Ian Campbell wrote:

> I think we would accept an update to 25728:a6edbc39fc84 to add some new
> aliases, it was discussed at the time but I think it petered out after a
> short discussion about what the correct names were.

Perhaps the kernel should do a request_module(xen-backend:$type), but
that does not solve it for older kernels.
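
In userspace terms, request_module() ends up invoking modprobe, which resolves strings like "xen-backend:vif" through modules.alias. A rough sketch of that lookup follows; the alias entries and file path are illustrative examples for this sketch, not taken from a real system:

```shell
# Simulate how modprobe would resolve a "xen-backend:$type" alias
# via modules.alias. The alias entries below are illustrative only.
cat > /tmp/modules.alias.demo <<'EOF'
alias xen-backend:vif xen_netback
alias xen-backend:vbd xen_blkback
EOF

type=vif
# Find the module name mapped to the requested backend type.
mod=$(awk -v a="xen-backend:$type" '$1 == "alias" && $2 == a { print $3 }' /tmp/modules.alias.demo)
echo "$mod"
```

With MODULE_ALIAS("xen-backend:vif") declared in the backend driver, this kind of lookup is what lets the toolstack (or kernel) load the right module on demand, which is why the discussion above centers on picking the correct alias names.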

Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 11:40:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 11:40:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1FTT-0006UK-4o; Tue, 14 Aug 2012 11:39:51 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1T1FTR-0006UE-4I
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 11:39:49 +0000
Received: from [85.158.138.51:57163] by server-12.bemta-3.messagelabs.com id
	44/6F-04073-4093A205; Tue, 14 Aug 2012 11:39:48 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-7.tower-174.messagelabs.com!1344944387!19291033!1
X-Originating-IP: [74.125.83.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8381 invoked from network); 14 Aug 2012 11:39:47 -0000
Received: from mail-ee0-f45.google.com (HELO mail-ee0-f45.google.com)
	(74.125.83.45)
	by server-7.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 11:39:47 -0000
Received: by eeke53 with SMTP id e53so113249eek.32
	for <xen-devel@lists.xen.org>; Tue, 14 Aug 2012 04:39:47 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:date:x-google-sender-auth:message-id:subject
	:from:to:content-type;
	bh=1xcm8WAqGGfNJ64si0XcfJpMWiJvCSg9hdq3H3AcmqY=;
	b=eCSZSTceyvavNkt7V/yjK/YV9VbsTW6b2VVknSUGxXVmMfVGHyntbtXmZuGlC77Zu9
	YLgvL2b2I2eekC/XC6hMYP5I4SH/RIT/ztvIn9atMb4FTyqIZhJjkKckVEYdzrYg2eJt
	ZFHV4p3i7g9zU5BtKp9HbxB89uSIumYtSyHGoNvQkyYbLke9vm5/8z9rJRUSqJA9SkXZ
	sI0VcmyAR8eiffL1mhMCIwL446xYx9oDClyAJROSgytVwiZRtzwJo01NFfPPyFDyT07L
	OZQ5+QVEjMEA13JKpMuqloXhgairMClaIszodPyl3vD69w24FjWhtHdcnjQmMTVZwU/Y
	t2PA==
MIME-Version: 1.0
Received: by 10.14.212.72 with SMTP id x48mr18680254eeo.40.1344944387367; Tue,
	14 Aug 2012 04:39:47 -0700 (PDT)
Received: by 10.14.215.195 with HTTP; Tue, 14 Aug 2012 04:39:47 -0700 (PDT)
Date: Tue, 14 Aug 2012 12:39:47 +0100
X-Google-Sender-Auth: efE0L6imsRYAVJIBb2_KQOvGDQU
Message-ID: <CAFLBxZaniKKKH0Snpsqs_m-bQbu7moBS4GcnT2b2M23c9PYFYw@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: xen-devel@lists.xen.org
Subject: [Xen-devel] [TESTDAY] Compiling on 64-bit Ubuntu systems
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

It appears that we have a build dependency on 32-bit headers which
isn't checked during ./configure, leading to a strange-looking error:

---- snip ----

    make -C tcgbios all
    make[10]: Entering directory
`/home/xenuser/hg/xen-unstable.hg/tools/firmware/rombios/32bit/tcgbios'
    gcc   -O1 -fno-omit-frame-pointer -m32 -march=i686 -g
-fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes
-Wdeclaration-after-statement -Wno-unused-but-set-variable
-D__XEN_TOOLS__ -MMD -MF .tcgbios.o.d  -D_LARGEFILE_SOURCE
-D_LARGEFILE64_SOURCE -fno-optimize-sibling-calls
-mno-tls-direct-seg-refs  -Werror -fno-stack-protector -fno-exceptions
-fno-builtin -msoft-float
-I/home/xenuser/hg/xen-unstable.hg/tools/firmware/rombios/32bit/tcgbios/../../../../../tools/include
-I.. -I../..  -c -o tcgbios.o tcgbios.c
    In file included from /usr/include/stdint.h:26:0,
                     from /usr/lib/gcc/x86_64-linux-gnu/4.6/include/stdint.h:3,
                     from ../../../hvmloader/acpi/acpi2_0.h:21,
                     from ../util.h:4,
                     from tcgbios.c:27:
    /usr/include/features.h:324:26: fatal error: bits/predefs.h: No
such file or directory
    compilation terminated.
    gcc   -O1 -fno-omit-frame-pointer -m32 -march=i686 -g
-fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes
-Wdeclaration-after-statement -Wno-unused-but-set-variable
-D__XEN_TOOLS__ -MMD -MF .tpm_drivers.o.d  -D_LARGEFILE_SOURCE
-D_LARGEFILE64_SOURCE -fno-optimize-sibling-calls
-mno-tls-direct-seg-refs  -Werror -fno-stack-protector -fno-exceptions
-fno-builtin -msoft-float
-I/home/xenuser/hg/xen-unstable.hg/tools/firmware/rombios/32bit/tcgbios/../../../../../tools/include
-I.. -I../..  -c -o tpm_drivers.o tpm_drivers.c
    In file included from /usr/include/stdint.h:26:0,
                     from /usr/lib/gcc/x86_64-linux-gnu/4.6/include/stdint.h:3,
                     from ../../../hvmloader/acpi/acpi2_0.h:21,
                     from ../util.h:4,
                     from tpm_drivers.c:25:
    /usr/include/features.h:324:26: fatal error: bits/predefs.h: No
such file or directory
    compilation terminated.

---- snip ----

Doing the following solved the problem (and Google was very helpful):

sudo apt-get install libc6-dev-i386

But it would be nice if the error message happened earlier and were a
bit friendlier.
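
A configure-time probe along these lines could catch the missing headers up front. This is only a sketch, not the actual xen-unstable configure logic: it assumes gcc is on the PATH, and the install hint is Debian/Ubuntu-specific:

```shell
# Hedged sketch of a configure-style probe for 32-bit libc headers:
# try to compile a trivial program with -m32 and report a friendly
# error instead of failing deep inside the rombios build.
workdir=$(mktemp -d)
echo 'int main(void) { return 0; }' > "$workdir/conftest.c"
if gcc -m32 -o "$workdir/conftest" "$workdir/conftest.c" 2>/dev/null; then
    result="32-bit headers: ok"
else
    result="error: 32-bit libc headers missing (try: sudo apt-get install libc6-dev-i386)"
fi
echo "$result"
rm -rf "$workdir"
```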

 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 11:44:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 11:44:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1FXl-0006bb-Qg; Tue, 14 Aug 2012 11:44:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lars.kurth.xen@gmail.com>) id 1T1FXk-0006bW-5h
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 11:44:16 +0000
Received: from [85.158.143.35:41280] by server-2.bemta-4.messagelabs.com id
	59/1D-31966-F0A3A205; Tue, 14 Aug 2012 11:44:15 +0000
X-Env-Sender: lars.kurth.xen@gmail.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1344944654!15664731!1
X-Originating-IP: [209.85.215.173]
X-SpamReason: No, hits=0.5 required=7.0 tests=RCVD_BY_IP,RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11636 invoked from network); 14 Aug 2012 11:44:15 -0000
Received: from mail-ey0-f173.google.com (HELO mail-ey0-f173.google.com)
	(209.85.215.173)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 11:44:15 -0000
Received: by eaac13 with SMTP id c13so113360eaa.32
	for <xen-devel@lists.xen.org>; Tue, 14 Aug 2012 04:44:14 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:date:from:reply-to:user-agent:mime-version:cc
	:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=5LIGt+RsRy3u/3gNli+PsyiFeS7M09sYG05NjPIrhPI=;
	b=Sota03gQkyKiOik55Gm8zAAnpHJLmoNtP4Mj/2HIrbD6n5ayvJGcgyYT9g4WcCVtmf
	/qgW3aUJGAoUsf30z7fmF7jMPaZJ8U84V9d3wt/M0kUCMXwp9ggPtAYJUcFlh2enaM6y
	cD7g2z5kXVQahCdqI1FX7WcilKB98dRrSy0K3FA/CRi/pmP77Q9lyxM7LxgF2RKiIiIS
	NPtornCZHYkVDfCB/uncZhnXdrrZ/EEgtWeWIWYD15Yg2V+PjkaZMoZjqOlyRyXAqEVr
	Rw8CLFWNe3Apg9yX4X5h2+QLhK28LU6JXCmRDU/KLUzZrvMs4RFiAGeJfm4dtikAV1bU
	a1Vw==
Received: by 10.14.212.72 with SMTP id x48mr18698758eeo.40.1344944654809;
	Tue, 14 Aug 2012 04:44:14 -0700 (PDT)
Received: from [172.16.26.11] (027fe822.bb.sky.com. [2.127.232.34])
	by mx.google.com with ESMTPS id e7sm6193086eep.2.2012.08.14.04.44.13
	(version=SSLv3 cipher=OTHER); Tue, 14 Aug 2012 04:44:13 -0700 (PDT)
Message-ID: <502A3A0B.6000401@xen.org>
Date: Tue, 14 Aug 2012 12:44:11 +0100
From: Lars Kurth <lars.kurth@xen.org>
User-Agent: Mozilla/5.0 (Windows NT 6.1;
	rv:14.0) Gecko/20120713 Thunderbird/14.0
MIME-Version: 1.0
CC: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
References: <502934CB.7060409@xen.org>
	<1344941807.5926.25.camel@zakaz.uk.xensource.com>
In-Reply-To: <1344941807.5926.25.camel@zakaz.uk.xensource.com>
Subject: Re: [Xen-devel] Xen 4.2 Limits (needs review)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: lars.kurth@xen.org
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 14/08/2012 11:56, Ian Campbell wrote:
> On Mon, 2012-08-13 at 18:09 +0100, Lars Kurth wrote:
>> Feel free to reply to the thread or add to
>> http://wiki.xen.org/wiki/Talk:Xen_4.2_Limits
> Is the intention to merge this into
> http://wiki.xen.org/wiki/Xen_Release_Features when the release happens?
Ian, we could merge it, or we could have a line in the matrix linking
to the release limits.
No strong preference.
Lars

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 11:56:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 11:56:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1Fix-0006n8-1T; Tue, 14 Aug 2012 11:55:51 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T1Fiw-0006n3-8v
	for xen-devel@lists.xensource.com; Tue, 14 Aug 2012 11:55:50 +0000
Received: from [85.158.143.35:20194] by server-3.bemta-4.messagelabs.com id
	25/7C-09529-5CC3A205; Tue, 14 Aug 2012 11:55:49 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1344945330!13229723!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17055 invoked from network); 14 Aug 2012 11:55:30 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-21.messagelabs.com with SMTP;
	14 Aug 2012 11:55:30 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 14 Aug 2012 12:55:29 +0100
Message-Id: <502A58CF0200007800094BE4@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Tue, 14 Aug 2012 12:55:27 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Stefano Stabellini" <stefano.stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1208101222370.21096@kaball.uk.xensource.com>
	<1344600612-10815-5-git-send-email-stefano.stabellini@eu.citrix.com>
	<5025276B02000078000942BE@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208131210170.21096@kaball.uk.xensource.com>
	<5029098A020000780009471C@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208131541460.21096@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1208131541460.21096@kaball.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	"Tim Deegan \(3P\)" <Tim.Deegan@citrix.com>
Subject: Re: [Xen-devel] [PATCH v2 5/5] xen: replace XEN_GUEST_HANDLE with
 XEN_GUEST_HANDLE_PARAM when appropriate
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 14.08.12 at 12:38, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
wrote:
> On Mon, 13 Aug 2012, Jan Beulich wrote:
>> >>> On 13.08.12 at 13:24, Stefano Stabellini <stefano.stabellini@eu.citrix.com> 
> wrote:
>> > On the other hand if you mean casting a XEN_GUEST_HANDLE_PARAM into a
>> > XEN_GUEST_HANDLE to pass it to other internal functions, I don't think
>> > there is any point in that because the other internal functions should
>> > also have XEN_GUEST_HANDLE_PARAMs as parameters.
>> 
>> So you obviously need a cast from "normal" to _PARAM (so that
>> you can pass embedded fields to functions).
> 
> In practice embedded fields are in guest memory, so the first thing Xen
> does is calling copy_from_guest and work with the struct pointer
> directly from that point on.

Perhaps we have a different understanding of embedded fields:
I'm thinking of structure field having XEN_GUEST_HANDLE() type.
An example would be struct mmuext_op's vcpumask field, which
is being passed to vcpumask_to_pcpumask(). This must remain to
be possible (and not just in x86-specific code, where it's mere luck
that both are really identical).

Jan



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 11:56:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 11:56:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1Fix-0006n8-1T; Tue, 14 Aug 2012 11:55:51 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T1Fiw-0006n3-8v
	for xen-devel@lists.xensource.com; Tue, 14 Aug 2012 11:55:50 +0000
Received: from [85.158.143.35:20194] by server-3.bemta-4.messagelabs.com id
	25/7C-09529-5CC3A205; Tue, 14 Aug 2012 11:55:49 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1344945330!13229723!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17055 invoked from network); 14 Aug 2012 11:55:30 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-21.messagelabs.com with SMTP;
	14 Aug 2012 11:55:30 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 14 Aug 2012 12:55:29 +0100
Message-Id: <502A58CF0200007800094BE4@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Tue, 14 Aug 2012 12:55:27 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Stefano Stabellini" <stefano.stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1208101222370.21096@kaball.uk.xensource.com>
	<1344600612-10815-5-git-send-email-stefano.stabellini@eu.citrix.com>
	<5025276B02000078000942BE@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208131210170.21096@kaball.uk.xensource.com>
	<5029098A020000780009471C@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208131541460.21096@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1208131541460.21096@kaball.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	"Tim Deegan \(3P\)" <Tim.Deegan@citrix.com>
Subject: Re: [Xen-devel] [PATCH v2 5/5] xen: replace XEN_GUEST_HANDLE with
 XEN_GUEST_HANDLE_PARAM when appropriate
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 14.08.12 at 12:38, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
wrote:
> On Mon, 13 Aug 2012, Jan Beulich wrote:
>> >>> On 13.08.12 at 13:24, Stefano Stabellini <stefano.stabellini@eu.citrix.com> 
> wrote:
>> > On the other hand if you mean casting a XEN_GUEST_HANDLE_PARAM into a
>> > XEN_GUEST_HANDLE to pass it to other internal functions, I don't think
>> > there is any point in that because the other internal functions should
>> > also have XEN_GUEST_HANDLE_PARAMs as parameters.
>> 
>> So you obviously need a cast from "normal" to _PARAM (so that
>> you can pass embedded fields to functions).
> 
> In practice embedded fields are in guest memory, so the first thing Xen
> does is calling copy_from_guest and work with the struct pointer
> directly from that point on.

Perhaps we have a different understanding of embedded fields:
I'm thinking of a structure field having XEN_GUEST_HANDLE() type.
An example would be struct mmuext_op's vcpumask field, which
is passed to vcpumask_to_pcpumask(). This must remain
possible (and not just in x86-specific code, where it's mere luck
that both are really identical).

Jan



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 12:18:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 12:18:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1G4N-0007g4-U4; Tue, 14 Aug 2012 12:17:59 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <waldi@debian.org>) id 1T1G4M-0007fy-Vp
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 12:17:59 +0000
Received: from [85.158.143.35:3943] by server-2.bemta-4.messagelabs.com id
	26/AD-31966-4F14A205; Tue, 14 Aug 2012 12:17:56 +0000
X-Env-Sender: waldi@debian.org
X-Msg-Ref: server-13.tower-21.messagelabs.com!1344946675!15292690!1
X-Originating-IP: [82.139.201.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8034 invoked from network); 14 Aug 2012 12:17:55 -0000
Received: from wavehammer.waldi.eu.org (HELO wavehammer.waldi.eu.org)
	(82.139.201.20)
	by server-13.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 14 Aug 2012 12:17:55 -0000
Received: by wavehammer.waldi.eu.org (Postfix, from userid 1000)
	id C0C1F5424A; Tue, 14 Aug 2012 14:17:52 +0200 (CEST)
Date: Tue, 14 Aug 2012 14:17:52 +0200
From: Bastian Blank <waldi@debian.org>
To: xen-devel@lists.xen.org
Message-ID: <20120814121741.GA10214@wavehammer.waldi.eu.org>
Mail-Followup-To: Bastian Blank <waldi@debian.org>, xen-devel@lists.xen.org
MIME-Version: 1.0
Content-Disposition: inline
User-Agent: Mutt/1.5.21 (2010-09-15)
Subject: [Xen-devel] xl list -l produces invalid output
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

xl list -l changed output recently. From SXP, often broken, it switched
to JSON---or better: kind-of-JSON.

JSON is defined in RFC 4627. It includes the definition:
| JSON-text = object / array
| object = begin-object […] end-object
| array = begin-array […] end-array

The output of xl list -l does not dump as one object. Instead it
consists of lines, each with one object. No real parser will ever read
this.

Bastian

-- 
Conquest is easy. Control is not.
		-- Kirk, "Mirror, Mirror", stardate unknown

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 12:37:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 12:37:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1GMg-000800-Ah; Tue, 14 Aug 2012 12:36:54 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <attilio.rao@citrix.com>) id 1T1GMe-0007zX-Ep
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 12:36:52 +0000
X-Env-Sender: attilio.rao@citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1344947803!1813066!3
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkwNzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14233 invoked from network); 14 Aug 2012 12:36:46 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 12:36:46 -0000
X-IronPort-AV: E=Sophos;i="4.77,766,1336348800"; d="scan'208";a="14002296"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Aug 2012 12:36:46 +0000
Received: from dhcp-3-145.uk.xensource.com (10.80.3.221) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 14 Aug 2012 13:36:45 +0100
From: Attilio Rao <attilio.rao@citrix.com>
To: <xen-devel@lists.xen.org>, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, 
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Date: Tue, 14 Aug 2012 13:24:22 +0100
Message-ID: <1344947062-4796-3-git-send-email-attilio.rao@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1344947062-4796-1-git-send-email-attilio.rao@citrix.com>
References: <1344947062-4796-1-git-send-email-attilio.rao@citrix.com>
MIME-Version: 1.0
Cc: Attilio Rao <attilio.rao@citrix.com>
Subject: [Xen-devel] [PATCH v2 2/2] Xen: Document the semantic of the
	pagetable_reserve PVOPS
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The information added to the hook's documentation covers:
- Native behaviour
- Xen-specific behaviour
- The logic behind the Xen-specific behaviour
- PVOPS semantics

Signed-off-by: Attilio Rao <attilio.rao@citrix.com>
---
 arch/x86/include/asm/x86_init.h |   19 +++++++++++++++++--
 1 files changed, 17 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/x86_init.h b/arch/x86/include/asm/x86_init.h
index 38155f6..b22093c 100644
--- a/arch/x86/include/asm/x86_init.h
+++ b/arch/x86/include/asm/x86_init.h
@@ -72,8 +72,23 @@ struct x86_init_oem {
  * struct x86_init_mapping - platform specific initial kernel pagetable setup
  * @pagetable_reserve:	reserve a range of addresses for kernel pagetable usage
  *
- * For more details on the purpose of this hook, look in
- * init_memory_mapping and the commit that added it.
+ * It reserves a range of pages to be used as pagetable pages.
+ * The start and end parameters are expected to be contained in the
+ * [pgt_buf_start, pgt_buf_top] range.
+ * The native implementation reserves the pages via the memblock_reserve()
+ * interface.
+ * The Xen implementation, besides reserving the range via memblock_reserve(),
+ * also sets RW the remaining pages contained in the ranges
+ * [pgt_buf_start, start) and [end, pgt_buf_top).
+ * This is needed because the range [pgt_buf_start, pgt_buf_top] was
+ * previously mapped read-only by xen_set_pte_init: when running
+ * on Xen, all pagetable pages need to be mapped read-only in order to
+ * avoid protection faults from the hypervisor. However, once the correct
+ * number of pages is reserved for the pagetables, all the others contained
+ * in the range must be set to RW so that they can be correctly recycled by
+ * the VM subsystem.
+ * This operation is meant to be performed only during init_memory_mapping(),
+ * just after space for the kernel direct mapping tables is found.
  */
 struct x86_init_mapping {
 	void (*pagetable_reserve)(u64 start, u64 end);
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 12:37:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 12:37:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1GMf-0007zt-VC; Tue, 14 Aug 2012 12:36:53 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <attilio.rao@citrix.com>) id 1T1GMe-0007zW-6U
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 12:36:52 +0000
X-Env-Sender: attilio.rao@citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1344947803!1813066!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkwNzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13977 invoked from network); 14 Aug 2012 12:36:44 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 12:36:44 -0000
X-IronPort-AV: E=Sophos;i="4.77,766,1336348800"; d="scan'208";a="14002294"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Aug 2012 12:36:43 +0000
Received: from dhcp-3-145.uk.xensource.com (10.80.3.221) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 14 Aug 2012 13:36:42 +0100
From: Attilio Rao <attilio.rao@citrix.com>
To: <xen-devel@lists.xen.org>, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, 
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Date: Tue, 14 Aug 2012 13:24:20 +0100
Message-ID: <1344947062-4796-1-git-send-email-attilio.rao@citrix.com>
X-Mailer: git-send-email 1.7.2.5
MIME-Version: 1.0
Cc: Attilio Rao <attilio.rao@citrix.com>
Subject: [Xen-devel] [PATCH v2 0/2] XEN/X86: Document pagetable_reserve
	PVOPS and enforce a better semantic
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

While looking into documenting the pagetable_reserve PVOPS, I realized that it
assumes start == pgt_buf_start. I think this is not semantically right
(even if, with the current code, this should not be a problem in practice);
what we really want is to extend the logic in order to do the RO -> RW
conversion also for the range [pgt_buf_start, start).
This series therefore implements the missing conversion, adds some smaller
cleanups and finally provides documentation for the PVOPS.
Please look at 2/2 for more details on how the comment is structured.
If we get this right, we will have a reference to be used later on for other
PVOPS.

The difference from v1 is that the sh_start local variable in
xen_mapping_pagetable_reserve() has been renamed to begin. Also, the commit
messages have been tweaked.

Attilio Rao (2):
  XEN, X86: Improve semantic support for pagetable_reserve PVOPS
  Xen: Document the semantic of the pagetable_reserve PVOPS

 arch/x86/include/asm/x86_init.h |   19 +++++++++++++++++--
 arch/x86/mm/init.c              |    4 ++++
 arch/x86/xen/mmu.c              |   22 ++++++++++++++++++++--
 3 files changed, 41 insertions(+), 4 deletions(-)

-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 12:37:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 12:37:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1GMe-0007zi-Ib; Tue, 14 Aug 2012 12:36:52 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <attilio.rao@citrix.com>) id 1T1GMd-0007zV-4F
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 12:36:51 +0000
X-Env-Sender: attilio.rao@citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1344947803!1813066!2
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkwNzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14045 invoked from network); 14 Aug 2012 12:36:44 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 12:36:44 -0000
X-IronPort-AV: E=Sophos;i="4.77,766,1336348800"; d="scan'208";a="14002295"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Aug 2012 12:36:44 +0000
Received: from dhcp-3-145.uk.xensource.com (10.80.3.221) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 14 Aug 2012 13:36:44 +0100
From: Attilio Rao <attilio.rao@citrix.com>
To: <xen-devel@lists.xen.org>, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, 
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Date: Tue, 14 Aug 2012 13:24:21 +0100
Message-ID: <1344947062-4796-2-git-send-email-attilio.rao@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1344947062-4796-1-git-send-email-attilio.rao@citrix.com>
References: <1344947062-4796-1-git-send-email-attilio.rao@citrix.com>
MIME-Version: 1.0
Cc: Attilio Rao <attilio.rao@citrix.com>
Subject: [Xen-devel] [PATCH v2 1/2] XEN,
	X86: Improve semantic support for pagetable_reserve PVOPS
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

- Allow xen_mapping_pagetable_reserve() to handle a start different from
  pgt_buf_start, but still bigger than it.
- Add checks to xen_mapping_pagetable_reserve() and native_pagetable_reserve()
  verifying that start and end are contained in the range
  [pgt_buf_start, pgt_buf_top].
- In xen_mapping_pagetable_reserve(), change the printk into a pr_debug.
- In xen_mapping_pagetable_reserve(), print out diagnostics only if there is
  an actual need to do so (in other words, if some pages are actually going
  to switch from RO to RW).

Signed-off-by: Attilio Rao <attilio.rao@citrix.com>
---
 arch/x86/mm/init.c |    4 ++++
 arch/x86/xen/mmu.c |   22 ++++++++++++++++++++--
 2 files changed, 24 insertions(+), 2 deletions(-)

diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index e0e6990..c5849b6 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -92,6 +92,10 @@ static void __init find_early_table_space(struct map_range *mr, unsigned long en
 
 void __init native_pagetable_reserve(u64 start, u64 end)
 {
+	if (start < PFN_PHYS(pgt_buf_start) || end > PFN_PHYS(pgt_buf_top))
+		panic("Invalid address range: [%llu - %llu] should be a subset of [%llu - %llu]\n",
+			start, end, PFN_PHYS(pgt_buf_start),
+			PFN_PHYS(pgt_buf_top));
 	memblock_reserve(start, end - start);
 }
 
diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index b65a761..66d73a2 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -1180,12 +1180,30 @@ static void __init xen_pagetable_setup_start(pgd_t *base)
 
 static __init void xen_mapping_pagetable_reserve(u64 start, u64 end)
 {
+	u64 begin;
+
+	begin = PFN_PHYS(pgt_buf_start);
+
+	if (start < begin || end > PFN_PHYS(pgt_buf_top))
+		panic("Invalid address range: [%llu - %llu] should be a subset of [%llu - %llu]\n",
+			start, end, begin, PFN_PHYS(pgt_buf_top));
+
+	/* set RW the initial range */
+	if (start != begin)
+		pr_debug("xen: setting RW the range %llx - %llx\n",
+			begin, start);
+	while (begin < start) {
+		make_lowmem_page_readwrite(__va(begin));
+		begin += PAGE_SIZE;
+	}
+
 	/* reserve the range used */
 	native_pagetable_reserve(start, end);
 
 	/* set as RW the rest */
-	printk(KERN_DEBUG "xen: setting RW the range %llx - %llx\n", end,
-			PFN_PHYS(pgt_buf_top));
+	if (end != PFN_PHYS(pgt_buf_top))
+		pr_debug("xen: setting RW the range %llx - %llx\n",
+			end, PFN_PHYS(pgt_buf_top));
 	while (end < PFN_PHYS(pgt_buf_top)) {
 		make_lowmem_page_readwrite(__va(end));
 		end += PAGE_SIZE;
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 12:37:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 12:37:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1GMe-0007zi-Ib; Tue, 14 Aug 2012 12:36:52 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <attilio.rao@citrix.com>) id 1T1GMd-0007zV-4F
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 12:36:51 +0000
X-Env-Sender: attilio.rao@citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1344947803!1813066!2
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkwNzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14045 invoked from network); 14 Aug 2012 12:36:44 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 12:36:44 -0000
X-IronPort-AV: E=Sophos;i="4.77,766,1336348800"; d="scan'208";a="14002295"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Aug 2012 12:36:44 +0000
Received: from dhcp-3-145.uk.xensource.com (10.80.3.221) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 14 Aug 2012 13:36:44 +0100
From: Attilio Rao <attilio.rao@citrix.com>
To: <xen-devel@lists.xen.org>, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, 
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Date: Tue, 14 Aug 2012 13:24:21 +0100
Message-ID: <1344947062-4796-2-git-send-email-attilio.rao@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1344947062-4796-1-git-send-email-attilio.rao@citrix.com>
References: <1344947062-4796-1-git-send-email-attilio.rao@citrix.com>
MIME-Version: 1.0
Cc: Attilio Rao <attilio.rao@citrix.com>
Subject: [Xen-devel] [PATCH v2 1/2] XEN,
	X86: Improve semantic support for pagetable_reserve PVOPS
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

- Allow xen_mapping_pagetable_reserve() to handle a start that differs from
  pgt_buf_start but is still greater than it.
- Add checks to xen_mapping_pagetable_reserve() and native_pagetable_reserve()
  for verifying start and end are contained in the range
  [pgt_buf_start, pgt_buf_top].
- In xen_mapping_pagetable_reserve(), change printk into pr_debug.
- In xen_mapping_pagetable_reserve(), print the diagnostic only when it is
  actually needed, i.e. when some pages are really going to switch from RO
  to RW.

Signed-off-by: Attilio Rao <attilio.rao@citrix.com>
---
 arch/x86/mm/init.c |    4 ++++
 arch/x86/xen/mmu.c |   22 ++++++++++++++++++++--
 2 files changed, 24 insertions(+), 2 deletions(-)

diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index e0e6990..c5849b6 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -92,6 +92,10 @@ static void __init find_early_table_space(struct map_range *mr, unsigned long en
 
 void __init native_pagetable_reserve(u64 start, u64 end)
 {
+	if (start < PFN_PHYS(pgt_buf_start) || end > PFN_PHYS(pgt_buf_top))
+	panic("Invalid address range: [%llu - %llu] should be a subset of [%llu - %llu]\n",
+			start, end, PFN_PHYS(pgt_buf_start),
+			PFN_PHYS(pgt_buf_top));
 	memblock_reserve(start, end - start);
 }
 
diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index b65a761..66d73a2 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -1180,12 +1180,30 @@ static void __init xen_pagetable_setup_start(pgd_t *base)
 
 static __init void xen_mapping_pagetable_reserve(u64 start, u64 end)
 {
+	u64 begin;
+
+	begin = PFN_PHYS(pgt_buf_start);
+
+	if (start < begin || end > PFN_PHYS(pgt_buf_top))
+	panic("Invalid address range: [%llu - %llu] should be a subset of [%llu - %llu]\n",
+			start, end, begin, PFN_PHYS(pgt_buf_top));
+
+	/* set RW the initial range */
+	if (start != begin)
+		pr_debug("xen: setting RW the range %llx - %llx\n",
+			begin, start);
+	while (begin < start) {
+		make_lowmem_page_readwrite(__va(begin));
+		begin += PAGE_SIZE;
+	}
+
 	/* reserve the range used */
 	native_pagetable_reserve(start, end);
 
 	/* set as RW the rest */
-	printk(KERN_DEBUG "xen: setting RW the range %llx - %llx\n", end,
-			PFN_PHYS(pgt_buf_top));
+	if (end != PFN_PHYS(pgt_buf_top))
+		pr_debug("xen: setting RW the range %llx - %llx\n",
+			end, PFN_PHYS(pgt_buf_top));
 	while (end < PFN_PHYS(pgt_buf_top)) {
 		make_lowmem_page_readwrite(__va(end));
 		end += PAGE_SIZE;
-- 
1.7.2.5
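[Editorial note: the containment invariant enforced by the two hunks above can be restated as a small standalone sketch. This is illustrative plain C with hypothetical names, not kernel code; the real functions panic() instead of returning a flag.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define PAGE_SIZE 4096ULL

/* Illustrative restatement of the check the patch adds to
 * native_pagetable_reserve() and xen_mapping_pagetable_reserve():
 * the reserved range [start, end) must be a subset of the
 * pagetable buffer [buf_start, buf_top). */
static bool range_is_subset(uint64_t start, uint64_t end,
                            uint64_t buf_start, uint64_t buf_top)
{
    return start >= buf_start && end <= buf_top;
}

/* Pages the Xen variant flips back from RO to RW: everything in the
 * buffer below start plus everything at or above end. */
static uint64_t pages_made_rw(uint64_t start, uint64_t end,
                              uint64_t buf_start, uint64_t buf_top)
{
    return (start - buf_start) / PAGE_SIZE + (buf_top - end) / PAGE_SIZE;
}
```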


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 12:57:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 12:57:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1Gg2-0008Ur-6I; Tue, 14 Aug 2012 12:56:54 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T1Gg0-0008Um-ND
	for xen-devel@lists.xensource.com; Tue, 14 Aug 2012 12:56:52 +0000
Received: from [85.158.143.35:15376] by server-3.bemta-4.messagelabs.com id
	94/5C-09529-21B4A205; Tue, 14 Aug 2012 12:56:50 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1344949004!15681519!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkwNzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10704 invoked from network); 14 Aug 2012 12:56:44 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 12:56:44 -0000
X-IronPort-AV: E=Sophos;i="4.77,766,1336348800"; d="scan'208";a="14002730"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Aug 2012 12:56:44 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 14 Aug 2012 13:56:44 +0100
Date: Tue, 14 Aug 2012 13:56:14 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Jan Beulich <JBeulich@suse.com>
In-Reply-To: <502A58CF0200007800094BE4@nat28.tlf.novell.com>
Message-ID: <alpine.DEB.2.02.1208141350080.21096@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208101222370.21096@kaball.uk.xensource.com>
	<1344600612-10815-5-git-send-email-stefano.stabellini@eu.citrix.com>
	<5025276B02000078000942BE@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208131210170.21096@kaball.uk.xensource.com>
	<5029098A020000780009471C@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208131541460.21096@kaball.uk.xensource.com>
	<502A58CF0200007800094BE4@nat28.tlf.novell.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "Tim Deegan \(3P\)" <Tim.Deegan@citrix.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH v2 5/5] xen: replace XEN_GUEST_HANDLE with
 XEN_GUEST_HANDLE_PARAM when appropriate
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 14 Aug 2012, Jan Beulich wrote:
> >>> On 14.08.12 at 12:38, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> wrote:
> > On Mon, 13 Aug 2012, Jan Beulich wrote:
> >> >>> On 13.08.12 at 13:24, Stefano Stabellini <stefano.stabellini@eu.citrix.com> 
> > wrote:
> >> > On the other hand if you mean casting a XEN_GUEST_HANDLE_PARAM into a
> >> > XEN_GUEST_HANDLE to pass it to other internal functions, I don't think
> >> > there is any point in that because the other internal functions should
> >> > also have XEN_GUEST_HANDLE_PARAMs as parameters.
> >> 
> >> So you obviously need a cast from "normal" to _PARAM (so that
> >> you can pass embedded fields to functions).
> > 
> > In practice embedded fields are in guest memory, so the first thing Xen
> > does is calling copy_from_guest and work with the struct pointer
> > directly from that point on.
> 
> Perhaps we have a different understanding of embedded fields:
> I'm thinking of structure field having XEN_GUEST_HANDLE() type.
> An example would be struct mmuext_op's vcpumask field, which
> is being passed to vcpumask_to_pcpumask(). This must remain to
> be possible (and not just in x86-specific code, where it's mere luck
> that both are really identical).

Thanks for the concrete example; glancing through the common code I
didn't find any examples like this.
As I wrote in the follow-up email, guest_handle_cast is just what we
need:

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 4d72700..70ffa58 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -3198,7 +3198,9 @@ int do_mmuext_op(
         {
             cpumask_t pmask;
 
-            if ( unlikely(vcpumask_to_pcpumask(d, op.arg2.vcpumask, &pmask)) )
+            if ( unlikely(vcpumask_to_pcpumask(d,
+                            guest_handle_cast(op.arg2.vcpumask, const_void),
+                            &pmask)) )
             {
                 okay = 0;
                 break;

Unfortunately that means that I have missed some substitutions from my
original "replace XEN_GUEST_HANDLE with XEN_GUEST_HANDLE_PARAM when
appropriate" patch.
I'll try to catch them in the next version.
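[Editorial note: a toy model of why such a cast is needed at all. These are NOT the real Xen macros; XEN_GUEST_HANDLE() (embedded in ABI structs) and XEN_GUEST_HANDLE_PARAM() (hypercall parameters) are modeled here as two distinct wrapper types, so passing an embedded field to a function taking the parameter type requires an explicit conversion, which is what guest_handle_cast provides in the real code.]

```c
#include <assert.h>

/* Toy stand-ins for the two handle flavours under discussion:
 * a wide handle embedded in guest-visible structures, and a
 * possibly narrower handle used for hypercall parameters. */
typedef struct { unsigned long long ptr; } guest_handle_t;        /* embedded */
typedef struct { unsigned long ptr; } guest_handle_param_t;       /* parameter */

/* Stand-in for guest_handle_cast: explicit conversion from the
 * embedded type to the parameter type. */
static guest_handle_param_t to_param(guest_handle_t h)
{
    guest_handle_param_t p = { (unsigned long)h.ptr };
    return p;
}
```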

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 12:58:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 12:58:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1Ghg-00007K-Lr; Tue, 14 Aug 2012 12:58:36 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T1Ghf-000077-Dt
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 12:58:35 +0000
Received: from [85.158.143.99:26465] by server-1.bemta-4.messagelabs.com id
	40/DF-07754-A7B4A205; Tue, 14 Aug 2012 12:58:34 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-216.messagelabs.com!1344949112!16916815!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkwNzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18784 invoked from network); 14 Aug 2012 12:58:32 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 12:58:32 -0000
X-IronPort-AV: E=Sophos;i="4.77,766,1336348800"; d="scan'208";a="14002763"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Aug 2012 12:57:57 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Tue, 14 Aug 2012 13:57:57 +0100
Message-ID: <1344949075.5926.29.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Dieter Bloms <xensource.com@bloms.de>
Date: Tue, 14 Aug 2012 13:57:55 +0100
In-Reply-To: <20120814100704.GA19704@bloms.de>
References: <1344935141.5926.6.camel@zakaz.uk.xensource.com>
	<20120814100704.GA19704@bloms.de>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-users@lists.xen.org" <xen-users@lists.xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Xen-users] Xen 4.2 TODO (io and irq parameter are
 not evaluated by xl)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-08-14 at 11:07 +0100, Dieter Bloms wrote:
> Hi,
> 
> On Tue, Aug 14, Ian Campbell wrote:
> 
> > tools, blockers:
> > 
> >     * xl compatibility with xm:
> > 
> >         * No known issues
> 
> the parameters io and irq in domU config files are not evaluated by xl,
> so it is not possible to pass the parallel port for my printer through
> to the domU when I start the domU with the xl command.
> With xm I have no issue.

Thanks, I have added this as a nice-to-have issue. I'll try to look at
it soon unless someone else gets there first.

Ian.
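[Editorial note: for context, the xm-era configuration the report refers to looks roughly like this. It is a hypothetical fragment (xm/xl domU config files are Python-syntax); the port range and IRQ number depend entirely on the hardware.]

```python
# Hypothetical domU config fragment: pass the first parallel port
# through to the guest (values illustrative).
ioports = ["0378-037a"]
irq = [7]
```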




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 12:59:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 12:59:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1GiA-0000AW-N6; Tue, 14 Aug 2012 12:59:06 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T1Gi9-0000AE-93
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 12:59:05 +0000
Received: from [85.158.139.83:4816] by server-2.bemta-5.messagelabs.com id
	CC/10-10142-89B4A205; Tue, 14 Aug 2012 12:59:04 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-182.messagelabs.com!1344949142!16797666!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkwNzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17262 invoked from network); 14 Aug 2012 12:59:02 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 12:59:02 -0000
X-IronPort-AV: E=Sophos;i="4.77,766,1336348800"; d="scan'208";a="14002807"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Aug 2012 12:59:02 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Tue, 14 Aug 2012 13:59:02 +0100
Message-ID: <1344949140.5926.30.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Pasi =?ISO-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
Date: Tue, 14 Aug 2012 13:59:00 +0100
In-Reply-To: <20120814102039.GY19851@reaktio.net>
References: <1344935141.5926.6.camel@zakaz.uk.xensource.com>
	<20120814101822.GX19851@reaktio.net>
	<20120814102039.GY19851@reaktio.net>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Keir Fraser <keir@xen.org>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.2 TODO / Release Plan / ipxe gcc 4.7
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

(adding Keir and dropping xne-users@)

On Tue, 2012-08-14 at 11:20 +0100, Pasi Kärkkäinen wrote:
> On Tue, Aug 14, 2012 at 01:18:22PM +0300, Pasi Kärkkäinen wrote:
> > On Tue, Aug 14, 2012 at 10:05:41AM +0100, Ian Campbell wrote:
> > > 
> > > tools, nice to have:
> > > 
> > 
> > * fix ipxe build problems with gcc 4.7 (fedora 17).
> >   The following files fail to build:
> > 	- ipxe/src/drivers/bus/isa.c
> > 	- ipxe/src/drivers/net/myri10ge.c
> > 	- ipxe/src/drivers/infiniband/qib7322.c
> >   Patches have been posted to ipxe-devel mailinglist,
> >   so we need to update our ipxe version or grab the patches.
> > 
> 
> And the needed patches are listed here: http://lists.xen.org/archives/html/xen-devel/2012-08/msg01048.html

I think taking the individual patches would be more sensible than a
wholesale refresh at this stage, Keir?

NB: I didn't actually look at the patches...

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 12:59:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 12:59:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1GiA-0000AW-N6; Tue, 14 Aug 2012 12:59:06 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T1Gi9-0000AE-93
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 12:59:05 +0000
Received: from [85.158.139.83:4816] by server-2.bemta-5.messagelabs.com id
	CC/10-10142-89B4A205; Tue, 14 Aug 2012 12:59:04 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-182.messagelabs.com!1344949142!16797666!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkwNzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17262 invoked from network); 14 Aug 2012 12:59:02 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 12:59:02 -0000
X-IronPort-AV: E=Sophos;i="4.77,766,1336348800"; d="scan'208";a="14002807"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Aug 2012 12:59:02 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Tue, 14 Aug 2012 13:59:02 +0100
Message-ID: <1344949140.5926.30.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Pasi =?ISO-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
Date: Tue, 14 Aug 2012 13:59:00 +0100
In-Reply-To: <20120814102039.GY19851@reaktio.net>
References: <1344935141.5926.6.camel@zakaz.uk.xensource.com>
	<20120814101822.GX19851@reaktio.net>
	<20120814102039.GY19851@reaktio.net>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Keir Fraser <keir@xen.org>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.2 TODO / Release Plan / ipxe gcc 4.7
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

(adding Keir and dropping xne-users@)

On Tue, 2012-08-14 at 11:20 +0100, Pasi Kärkkäinen wrote:
> On Tue, Aug 14, 2012 at 01:18:22PM +0300, Pasi Kärkkäinen wrote:
> > On Tue, Aug 14, 2012 at 10:05:41AM +0100, Ian Campbell wrote:
> > > 
> > > tools, nice to have:
> > > 
> > 
> > * fix ipxe build problems with gcc 4.7 (fedora 17).
> >   The following files fail to build:
> > 	- ipxe/src/drivers/bus/isa.c
> > 	- ipxe/src/drivers/net/myri10ge.c
> > 	- ipxe/src/drivers/infiniband/qib7322.c
> >   Patches have been posted to ipxe-devel mailinglist,
> >   so we need to update our ipxe version or grab the patches.
> > 
> 
> And the needed patches are listed here: http://lists.xen.org/archives/html/xen-devel/2012-08/msg01048.html

I think taking the individual patches would be more sensible than a
wholesale refresh at this stage, Keir?

NB: I didn't actually look at the patches...

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 13:06:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 13:06:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1Got-0000uB-JF; Tue, 14 Aug 2012 13:06:03 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T1Gor-0000u6-GC
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 13:06:01 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1344949524!8768097!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkwNzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14409 invoked from network); 14 Aug 2012 13:05:24 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 13:05:24 -0000
X-IronPort-AV: E=Sophos;i="4.77,766,1336348800"; d="scan'208";a="14002993"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Aug 2012 13:04:24 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Tue, 14 Aug 2012 14:04:24 +0100
Message-ID: <1344949463.5926.32.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Olaf Hering <olaf@aepfle.de>
Date: Tue, 14 Aug 2012 14:04:23 +0100
In-Reply-To: <20120814113052.GA31244@aepfle.de>
References: <20120806173905.GA26336@aepfle.de>
	<1344318133.24794.16.camel@dagon.hellion.org.uk>
	<20120807152502.GA24503@aepfle.de>
	<1344353581.11339.105.camel@zakaz.uk.xensource.com>
	<20120808172809.GA22206@aepfle.de>
	<1344501152.32142.78.camel@zakaz.uk.xensource.com>
	<20120809143406.GA9317@aepfle.de> <20120810074159.GA11792@aepfle.de>
	<20120810125950.GA451@aepfle.de>
	<1344939526.5926.14.camel@zakaz.uk.xensource.com>
	<20120814113052.GA31244@aepfle.de>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] vif backend configuration times out
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-08-14 at 12:30 +0100, Olaf Hering wrote:
> On Tue, Aug 14, Ian Campbell wrote:
> 
> > I think we would accept an update to 25728:a6edbc39fc84 to add some new
> > aliases, it was discussed at the time but I think it petered out after a
> > short discussion about what the correct names were.
> 
> Perhaps the kernel should do a request_module(xen-backend:$type), but
> that does not solve it for older kernels.

Modern kernels already do (effectively) that. That's the "Backend driver
autoloading is relatively new in pvops kernels" I referred to.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 13:07:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 13:07:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1Gpd-0000x1-0V; Tue, 14 Aug 2012 13:06:49 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T1Gpa-0000we-RG
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 13:06:47 +0000
Received: from [85.158.139.83:5080] by server-9.bemta-5.messagelabs.com id
	F1/29-26123-56D4A205; Tue, 14 Aug 2012 13:06:45 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-182.messagelabs.com!1344949605!25351927!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkwNzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18947 invoked from network); 14 Aug 2012 13:06:45 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-4.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 13:06:45 -0000
X-IronPort-AV: E=Sophos;i="4.77,766,1336348800"; d="scan'208";a="14003036"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Aug 2012 13:06:45 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Tue, 14 Aug 2012 14:06:45 +0100
Message-ID: <1344949603.5926.35.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Lei Wen <adrian.wenl@gmail.com>
Date: Tue, 14 Aug 2012 14:06:43 +0100
In-Reply-To: <CALZhoSRTS9HrqS_Z__vC2iY_DUbi4DCBemRkpm8w5H5=7F9-8g@mail.gmail.com>
References: <CALZhoSRTS9HrqS_Z__vC2iY_DUbi4DCBemRkpm8w5H5=7F9-8g@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Can xen support nommu cpu virtualization?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-08-14 at 10:31 +0100, Lei Wen wrote:
> Hi,
> 
> I am a newbie for the XEN area and curious to know whether current XEN
> could support nommu arch, or only MPU?
> If current it doesn't support, does XEN have any plan or road map to
> add this support in future?

You should be able to run a NOMMU guest OS as an HVM guest today (or at
least I can't think why it wouldn't work).

There are no plans to make Xen itself usable on a no-MMU system; use of
the MMU is pretty firmly baked into Xen's architecture.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 13:24:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 13:24:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1H6B-0001Ks-Oe; Tue, 14 Aug 2012 13:23:55 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <adrian.wenl@gmail.com>) id 1T1H6A-0001Kn-H5
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 13:23:54 +0000
Received: from [85.158.138.51:56953] by server-4.bemta-3.messagelabs.com id
	A5/C9-04276-9615A205; Tue, 14 Aug 2012 13:23:53 +0000
X-Env-Sender: adrian.wenl@gmail.com
X-Msg-Ref: server-5.tower-174.messagelabs.com!1344950631!28298125!1
X-Originating-IP: [209.85.214.173]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
	ML_RADAR_SPEW_LINKS_14, ML_RADAR_SPEW_LINKS_18, RCVD_BY_IP,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9014 invoked from network); 14 Aug 2012 13:23:52 -0000
Received: from mail-ob0-f173.google.com (HELO mail-ob0-f173.google.com)
	(209.85.214.173)
	by server-5.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 13:23:52 -0000
Received: by obbta14 with SMTP id ta14so679057obb.32
	for <xen-devel@lists.xen.org>; Tue, 14 Aug 2012 06:23:51 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=8drFu1Xb7FdcaQDwvb55TNsQWSdMizMq9kqygqb9c6M=;
	b=pFhrKCG5LbDjQE3ys2LgTivQ5paQ7fFsmlx4/NZy3HQXrPxwnTUiDVWSaj1njHg9YB
	hzWMySCrqWKkepoxzFTUYLXFlyFXFDHQ/Hz7t0R6sfICYmFezw8ypv+1Bkm4WOa1OmdG
	aAk4pnL7IU/xgSn2uQp2SRHl5nd0ww0qBI67Mw4z4ilHAR2mezoEuJ82n0R9C+v38aas
	xjOMAsocEWGch7+kGIfgNyBWMn+FfMzPzM1oizbuZF6AAXl/0ISi9dVvKCq2Uv6GYCcF
	mdVW/sNzlOJYrWxsGcSwjd8jhoJCm52PQho0DxAqnbq3/DdQv0DeQ0FQ5t7a32HQlVQJ
	HXiA==
MIME-Version: 1.0
Received: by 10.182.226.41 with SMTP id rp9mr18662358obc.22.1344950631193;
	Tue, 14 Aug 2012 06:23:51 -0700 (PDT)
Received: by 10.182.53.138 with HTTP; Tue, 14 Aug 2012 06:23:51 -0700 (PDT)
In-Reply-To: <1344949603.5926.35.camel@zakaz.uk.xensource.com>
References: <CALZhoSRTS9HrqS_Z__vC2iY_DUbi4DCBemRkpm8w5H5=7F9-8g@mail.gmail.com>
	<1344949603.5926.35.camel@zakaz.uk.xensource.com>
Date: Tue, 14 Aug 2012 21:23:51 +0800
Message-ID: <CALZhoSRm=2+ZDn7PUWxtDcJa-hoP_M7Tto1=ZR9S99xiXPHAyw@mail.gmail.com>
From: Lei Wen <adrian.wenl@gmail.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Can xen support nommu cpu virtualization?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Ian,

On Tue, Aug 14, 2012 at 9:06 PM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Tue, 2012-08-14 at 10:31 +0100, Lei Wen wrote:
>> Hi,
>>
>> I am a newbie for the XEN area and curious to know whether current XEN
>> could support nommu arch, or only MPU?
>> If current it doesn't support, does XEN have any plan or road map to
>> add this support in future?
>
> You should be able to run a NOMMU guest OS as an HVM guest today (or at
> least I can't think why it wouldn't work).
>
> There are no plans to make Xen itself usable on a no-MMU system, use of
> the MMU is pretty firmly baked into Xens architecture.

Yep, I get your point. However, on modern embedded systems there is still
a place for no-MMU arches, like the ARM Cortex-R7 processor:
http://www.arm.com/products/processors/cortex-r/cortex-r7.php

So virtualising such platforms would have a business benefit, since it
could reduce the total embedded product BOM price.

Do you have any guidelines or ideas that could make no-MMU support
possible? :) I would like to take the chance to investigate whether we
could enable it on Xen.

Thanks,
Lei

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 13:35:34 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 13:35:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1HHB-0001dg-U5; Tue, 14 Aug 2012 13:35:17 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T1HHA-0001db-6R
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 13:35:16 +0000
Received: from [85.158.138.51:42766] by server-3.bemta-3.messagelabs.com id
	54/28-13809-3145A205; Tue, 14 Aug 2012 13:35:15 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-174.messagelabs.com!1344951314!19315884!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkwNzc=\n,
	ML_RADAR_SPEW_LINKS_18,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6303 invoked from network); 14 Aug 2012 13:35:14 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-7.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 13:35:14 -0000
X-IronPort-AV: E=Sophos;i="4.77,766,1336348800"; d="scan'208";a="14003831"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Aug 2012 13:35:14 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Tue, 14 Aug 2012 14:35:14 +0100
Message-ID: <1344951312.5926.53.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Lei Wen <adrian.wenl@gmail.com>
Date: Tue, 14 Aug 2012 14:35:12 +0100
In-Reply-To: <CALZhoSRm=2+ZDn7PUWxtDcJa-hoP_M7Tto1=ZR9S99xiXPHAyw@mail.gmail.com>
References: <CALZhoSRTS9HrqS_Z__vC2iY_DUbi4DCBemRkpm8w5H5=7F9-8g@mail.gmail.com>
	<1344949603.5926.35.camel@zakaz.uk.xensource.com>
	<CALZhoSRm=2+ZDn7PUWxtDcJa-hoP_M7Tto1=ZR9S99xiXPHAyw@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Can xen support nommu cpu virtualization?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-08-14 at 14:23 +0100, Lei Wen wrote:
> Hi Ian,
> 
> On Tue, Aug 14, 2012 at 9:06 PM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > On Tue, 2012-08-14 at 10:31 +0100, Lei Wen wrote:
> >> Hi,
> >>
> >> I am a newbie for the XEN area and curious to know whether current XEN
> >> could support nommu arch, or only MPU?
> >> If current it doesn't support, does XEN have any plan or road map to
> >> add this support in future?
> >
> > You should be able to run a NOMMU guest OS as an HVM guest today (or at
> > least I can't think why it wouldn't work).
> >
> > There are no plans to make Xen itself usable on a no-MMU system, use of
> > the MMU is pretty firmly baked into Xens architecture.
> 
> Yep, I get your point. However, on modern embedded system, there are still
> place for no-mmu arches, like the ARM cortex-r7 processor:
> http://www.arm.com/products/processors/cortex-r/cortex-r7.php

OK, you should have made it clear you were talking about ARM.

The ARM port which I am involved with[0] requires the virtualisation
extensions, which AFAIK are ARMv7-A only, and an MMU is always assumed
(its presence is pretty much baked into the extensions in various ways).
We have no plans to support non-MMU ports of this variant of Xen.

The other ARM port (the PV port[1]) does not require the virtualisation
extensions; however, the correct list for questions/discussion about that
port is the xen-arm@ list.

[0] http://wiki.xen.org/wiki/Xen_ARMv7_with_Virtualization_Extensions
[1] http://wiki.xen.org/wiki/Xen_ARM_%28PV%29

> So making such platform virtualised would have the business benefit, since
> could reduce total embedded product BOM price.
> 
> Do you have any guideline or idea that could make nommu possible? :)

Even the ARM PV port makes extensive use of the MMU to provide isolation
between guests (at least I presume so). If you want to work on no-MMU
systems, then it seems this would be the first problem you would have to
consider.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-08-14 at 14:23 +0100, Lei Wen wrote:
> Hi Ian,
> 
> On Tue, Aug 14, 2012 at 9:06 PM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > On Tue, 2012-08-14 at 10:31 +0100, Lei Wen wrote:
> >> Hi,
> >>
> >> I am a newbie in the XEN area and curious to know whether current XEN
> >> could support a nommu arch, or only an MPU?
> >> If it currently doesn't, does XEN have any plan or road map to
> >> add this support in the future?
> >
> > You should be able to run a NOMMU guest OS as an HVM guest today (or at
> > least I can't think why it wouldn't work).
> >
> > There are no plans to make Xen itself usable on a no-MMU system; use of
> > the MMU is pretty firmly baked into Xen's architecture.
> 
> Yep, I get your point. However, on modern embedded systems, there is still
> a place for no-mmu arches, like the ARM Cortex-R7 processor:
> http://www.arm.com/products/processors/cortex-r/cortex-r7.php

OK, you should have made it clear you were talking about ARM.

The ARM port which I am involved with[0] requires the virtualisation
extensions, which AFAIK are ARMv7-A only, and an MMU is always assumed
(its presence is pretty much baked into the extensions in various ways).
We have no plans to support non-mmu ports of this variant of Xen.

The other ARM port (the PV port[1]) does not require virtualisation
extensions, however the correct list for questions/discussion about that
port is the xen-arm@ list.

[0] http://wiki.xen.org/wiki/Xen_ARMv7_with_Virtualization_Extensions
[1] http://wiki.xen.org/wiki/Xen_ARM_%28PV%29

> So making such a platform virtualised would have a business benefit, since
> it could reduce the total embedded product BOM price.
> 
> Do you have any guideline or idea that could make nommu possible? :)

Even the ARM PV port makes extensive use of the MMU to provide isolation
between guests (at least I presume so). If you want to work on nommu
systems then it seems like this would be the first problem you would
have to consider.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 13:38:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 13:38:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1HJz-0001js-Ga; Tue, 14 Aug 2012 13:38:11 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T1HJx-0001jm-VV
	for xen-devel@lists.xensource.com; Tue, 14 Aug 2012 13:38:10 +0000
Received: from [85.158.139.83:31336] by server-11.bemta-5.messagelabs.com id
	AF/0B-29296-1C45A205; Tue, 14 Aug 2012 13:38:09 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-15.tower-182.messagelabs.com!1344951485!27991530!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDY5NjU3Ng==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30023 invoked from network); 14 Aug 2012 13:38:07 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-15.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 14 Aug 2012 13:38:07 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7EDblSX006245
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 14 Aug 2012 13:37:47 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7EDbkTs005868
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 14 Aug 2012 13:37:46 GMT
Received: from abhmt116.oracle.com (abhmt116.oracle.com [141.146.116.68])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7EDbjZg028511; Tue, 14 Aug 2012 08:37:45 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 14 Aug 2012 06:37:45 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id E7399402BE; Tue, 14 Aug 2012 09:28:03 -0400 (EDT)
Date: Tue, 14 Aug 2012 09:28:03 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Mel Gorman <mgorman@suse.de>
Message-ID: <20120814132803.GC10880@phenom.dumpdata.com>
References: <20120807085554.GF29814@suse.de>
	<20120808.155046.820543563969484712.davem@davemloft.net>
	<20120813154144.GA24868@phenom.dumpdata.com>
	<20120814100522.GL4177@suse.de>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20120814100522.GL4177@suse.de>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: xen-devel@lists.xensource.com, netdev@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	Ian Campbell <Ian.Campbell@eu.citrix.com>, linux-mm@kvack.org,
	konrad@darnok.org, akpm@linux-foundation.org,
	David Miller <davem@davemloft.net>
Subject: Re: [Xen-devel] [PATCH] netvm: check for page == NULL when
 propogating the skb->pfmemalloc flag
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Aug 14, 2012 at 11:05:22AM +0100, Mel Gorman wrote:
> On Mon, Aug 13, 2012 at 11:41:44AM -0400, Konrad Rzeszutek Wilk wrote:
> > On Wed, Aug 08, 2012 at 03:50:46PM -0700, David Miller wrote:
> > > From: Mel Gorman <mgorman@suse.de>
> > > Date: Tue, 7 Aug 2012 09:55:55 +0100
> > > 
> > > > Commit [c48a11c7: netvm: propagate page->pfmemalloc to skb] is responsible
> > > > for the following bug triggered by a xen network driver
> > >  ...
> > > > The problem is that the xenfront driver is passing a NULL page to
> > > > __skb_fill_page_desc() which was unexpected. This patch checks that
> > > > there is a page before dereferencing.
> > > > 
> > > > Reported-and-Tested-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > > > Signed-off-by: Mel Gorman <mgorman@suse.de>
> > > 
> > > That call to __skb_fill_page_desc() in xen-netfront.c looks completely bogus.
> > > It's the only driver passing NULL here.
> > 
> > It looks to be passing a valid page pointer (at least by looking
> > at the code) so I am not sure how it got turned into a NULL.
> > 
> 
> Are we looking at different code bases? I see this and I was assuming it
> was the source of the bug.
> 
> 	__skb_fill_page_desc(skb, 0, NULL, 0, 0);

Yes! Well, that is embarrassing. I was looking at the first invocation of 
__skb_fill_page_desc (which is in xennet_alloc_rx_buffers) <sigh>

> 
> -- 
> Mel Gorman
> SUSE Labs

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 13:40:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 13:40:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1HLl-0001rV-3z; Tue, 14 Aug 2012 13:40:01 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T1HLj-0001r0-UJ
	for xen-devel@lists.xensource.com; Tue, 14 Aug 2012 13:40:00 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1344951577!9198397!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23603 invoked from network); 14 Aug 2012 13:39:38 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-27.messagelabs.com with SMTP;
	14 Aug 2012 13:39:38 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 14 Aug 2012 14:39:37 +0100
Message-Id: <502A71370200007800094C5F@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Tue, 14 Aug 2012 14:39:35 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Stefano Stabellini" <stefano.stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1208101222370.21096@kaball.uk.xensource.com>
	<1344600612-10815-5-git-send-email-stefano.stabellini@eu.citrix.com>
	<5025276B02000078000942BE@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208131210170.21096@kaball.uk.xensource.com>
	<5029098A020000780009471C@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208131541460.21096@kaball.uk.xensource.com>
	<502A58CF0200007800094BE4@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208141350080.21096@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1208141350080.21096@kaball.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	"Tim Deegan \(3P\)" <Tim.Deegan@citrix.com>
Subject: Re: [Xen-devel] [PATCH v2 5/5] xen: replace XEN_GUEST_HANDLE with
 XEN_GUEST_HANDLE_PARAM when appropriate
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 14.08.12 at 14:56, Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
> On Tue, 14 Aug 2012, Jan Beulich wrote:
>> Perhaps we have a different understanding of embedded fields:
>> I'm thinking of a structure field having XEN_GUEST_HANDLE() type.
>> An example would be struct mmuext_op's vcpumask field, which
>> is being passed to vcpumask_to_pcpumask(). This must remain
>> possible (and not just in x86-specific code, where it's mere luck
>> that both are really identical).
> 
> Thanks for the concrete example; glancing through the common code I
> didn't find any examples like this.
> As I wrote in the follow up email, guest_handle_cast is just what we
> need:
> 
> diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
> index 4d72700..70ffa58 100644
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -3198,7 +3198,9 @@ int do_mmuext_op(
>          {
>              cpumask_t pmask;
>  
> -            if ( unlikely(vcpumask_to_pcpumask(d, op.arg2.vcpumask, &pmask)) )
> +            if ( unlikely(vcpumask_to_pcpumask(d,
> +                            guest_handle_cast(op.arg2.vcpumask, const_void),

No, the conversion should explicitly _not_ require specification
of the type, i.e. this should not be a true cast. Type safety
(checked by the compiler) can only be achieved if no intermediate
cast is involved.

Jan

> +                            &pmask)) )
>              {
>                  okay = 0;
>                  break;



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 13:40:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 13:40:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1HLy-0001sj-Gw; Tue, 14 Aug 2012 13:40:14 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <adrian.wenl@gmail.com>) id 1T1HLx-0001sR-Cy
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 13:40:13 +0000
Received: from [85.158.139.83:52187] by server-2.bemta-5.messagelabs.com id
	7C/89-10142-C355A205; Tue, 14 Aug 2012 13:40:12 +0000
X-Env-Sender: adrian.wenl@gmail.com
X-Msg-Ref: server-13.tower-182.messagelabs.com!1344951610!27538586!1
X-Originating-IP: [209.85.214.173]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
	ML_RADAR_SPEW_LINKS_14, ML_RADAR_SPEW_LINKS_18, RCVD_BY_IP,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6826 invoked from network); 14 Aug 2012 13:40:11 -0000
Received: from mail-ob0-f173.google.com (HELO mail-ob0-f173.google.com)
	(209.85.214.173)
	by server-13.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 13:40:11 -0000
Received: by obbta14 with SMTP id ta14so701865obb.32
	for <xen-devel@lists.xen.org>; Tue, 14 Aug 2012 06:40:09 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=6eKt5UE6MG/VXvcHPLoTYwAAUUgZD3rwo/Yq5PZBXjc=;
	b=OKSZOfArC4cxnkfBOfc4WqOtfeMwaTsGmxO9G6WptIvtA21aUJ/hrabdFSwGd5Qusf
	fJ2AasqhU0ek0UKNqmozM4Nh3oOvSIPDaizv5FhOpuHkWdUBK+wwnoHg9iCZ5IlCNVpl
	R/AjWCjV45F7v6+Na8XfiuVl/hU3KXe+xubyoaS0L2RazGkC2HAaIxtygxY+VK24i20y
	bz2WxYEfVYXugJxv8WsiQfBhUYM0J2eiQmRLZqiZ7ewLQAJwhqdYrtMShmnMCdbuMKLh
	Skl6qBnlEIc28sZ174d8sdUOmW7Adkq0G3a5mEvTwSLV1Av7ShCV4DIhcySN2ZQhSRMp
	4/hw==
MIME-Version: 1.0
Received: by 10.182.44.68 with SMTP id c4mr18962480obm.27.1344951609648; Tue,
	14 Aug 2012 06:40:09 -0700 (PDT)
Received: by 10.182.53.138 with HTTP; Tue, 14 Aug 2012 06:40:09 -0700 (PDT)
In-Reply-To: <1344951312.5926.53.camel@zakaz.uk.xensource.com>
References: <CALZhoSRTS9HrqS_Z__vC2iY_DUbi4DCBemRkpm8w5H5=7F9-8g@mail.gmail.com>
	<1344949603.5926.35.camel@zakaz.uk.xensource.com>
	<CALZhoSRm=2+ZDn7PUWxtDcJa-hoP_M7Tto1=ZR9S99xiXPHAyw@mail.gmail.com>
	<1344951312.5926.53.camel@zakaz.uk.xensource.com>
Date: Tue, 14 Aug 2012 21:40:09 +0800
Message-ID: <CALZhoSQ9=my0K=1_yskxy8rNAuX12v5zQm2AQiXADbmFJZFgbQ@mail.gmail.com>
From: Lei Wen <adrian.wenl@gmail.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Can xen support nommu cpu virtualization?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Aug 14, 2012 at 9:35 PM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Tue, 2012-08-14 at 14:23 +0100, Lei Wen wrote:
>> Hi Ian,
>>
>> On Tue, Aug 14, 2012 at 9:06 PM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>> > On Tue, 2012-08-14 at 10:31 +0100, Lei Wen wrote:
>> >> Hi,
>> >>
>> >> I am a newbie in the XEN area and curious to know whether current XEN
>> >> could support a nommu arch, or only an MPU?
>> >> If it currently doesn't, does XEN have any plan or road map to
>> >> add this support in the future?
>> >
>> > You should be able to run a NOMMU guest OS as an HVM guest today (or at
>> > least I can't think why it wouldn't work).
>> >
>> > There are no plans to make Xen itself usable on a no-MMU system; use of
>> > the MMU is pretty firmly baked into Xen's architecture.
>>
>> Yep, I get your point. However, on modern embedded systems, there is still
>> a place for no-mmu arches, like the ARM Cortex-R7 processor:
>> http://www.arm.com/products/processors/cortex-r/cortex-r7.php
>
> OK, you should have made it clear you were talking about ARM.
>
> The ARM port which I am involved with[0] requires the virtualisation
> extensions, which AFAIK are ARMv7-A only, and an MMU is always assumed
> (its presence is pretty much baked into the extensions in various ways).
> We have no plans to support non-mmu ports of this variant of Xen.
>
> The other ARM port (the PV port[1]) does not require virtualisation
> extensions, however the correct list for questions/discussion about that
> port is the xen-arm@ list.
>
> [0] http://wiki.xen.org/wiki/Xen_ARMv7_with_Virtualization_Extensions
> [1] http://wiki.xen.org/wiki/Xen_ARM_%28PV%29
>
>> So making such a platform virtualised would have a business benefit, since
>> it could reduce the total embedded product BOM price.
>>
>> Do you have any guideline or idea that could make nommu possible? :)
>
> Even the ARM PV port makes extensive use of the MMU to provide isolation
> between guests (at least I presume so). If you want to work on nommu
> systems then it seems like this would be the first problem you would
> have to consider.
>

Understood. Thanks for your kind reply. :)

Thanks,
Lei

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Aug 14, 2012 at 9:35 PM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Tue, 2012-08-14 at 14:23 +0100, Lei Wen wrote:
>> Hi Ian,
>>
>> On Tue, Aug 14, 2012 at 9:06 PM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>> > On Tue, 2012-08-14 at 10:31 +0100, Lei Wen wrote:
>> >> Hi,
>> >>
>> >> I am a newbie for the XEN area and curious to know whether current XEN
>> >> could support nommu arch, or only MPU?
>> >> If current it doesn't support, does XEN have any plan or road map to
>> >> add this support in future?
>> >
>> > You should be able to run a NOMMU guest OS as an HVM guest today (or at
>> > least I can't think why it wouldn't work).
>> >
>> > There are no plans to make Xen itself usable on a no-MMU system, use of
>> > the MMU is pretty firmly baked into Xens architecture.
>>
>> Yep, I get your point. However, on modern embedded system, there are still
>> place for no-mmu arches, like the ARM cortex-r7 processor:
>> http://www.arm.com/products/processors/cortex-r/cortex-r7.php
>
> OK, you should have made it clear you were talking about ARM.
>
> The ARM port which I am involved with[0] requires the virtualisation
> extensions, which AFAIK are ARMv7-A only, and an MMU is always assumed
> (its presence is pretty much baked into the extensions in various ways).
> We have no plans to support non-mmu ports of this variant of Xen.
>
> The other ARM port (the PV port[1]) does not require virtualisation
> extensions, however the correct list for questions/discussion about that
> port is the xen-arm@ list.
>
> [0] http://wiki.xen.org/wiki/Xen_ARMv7_with_Virtualization_Extensions
> [1] http://wiki.xen.org/wiki/Xen_ARM_%28PV%29
>
>> So making such a platform virtualised would have a business benefit, since
>> it could reduce the total embedded product BOM price.
>>
>> Do you have any guideline or idea that could make nommu possible? :)
>
> Even the ARM PV port makes extensive use of the MMU to provide isolation
> between guests (at least I presume so). If you want to work on nommu
> systems then it seems like this would be the first problem you would
> have to consider.
>

Understood. Thanks for your kind reply. :)

Thanks,
Lei

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 13:52:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 13:52:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1HXg-0002Hv-Pi; Tue, 14 Aug 2012 13:52:20 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1T1HXe-0002Hq-Vd
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 13:52:19 +0000
Received: from [85.158.138.51:22792] by server-11.bemta-3.messagelabs.com id
	3C/A3-23152-2185A205; Tue, 14 Aug 2012 13:52:18 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-5.tower-174.messagelabs.com!1344952335!28304820!1
X-Originating-IP: [63.239.67.9]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23634 invoked from network); 14 Aug 2012 13:52:16 -0000
Received: from emvm-gh1-uea08.nsa.gov (HELO nsa.gov) (63.239.67.9)
	by server-5.tower-174.messagelabs.com with SMTP;
	14 Aug 2012 13:52:16 -0000
X-TM-IMSS-Message-ID: <a414e05f00085738@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov ([63.239.67.9])
	with ESMTP (TREND IMSS SMTP Service 7.1) id a414e05f00085738 ;
	Tue, 14 Aug 2012 09:52:02 -0400
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id q7EDq7ei017701; 
	Tue, 14 Aug 2012 09:52:07 -0400
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: Jan Beulich <jbeulich@suse.com>
Date: Tue, 14 Aug 2012 09:52:06 -0400
Message-Id: <1344952326-28209-1-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.2
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>, Keir Fraser <keir@xen.org>,
	xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH] x86-64/EFI: add -fno-stack-protector to EFI
	build
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Otherwise, the build fails due to a missing __stack_chk_fail symbol.

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 xen/arch/x86/efi/Makefile | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/efi/Makefile b/xen/arch/x86/efi/Makefile
index 005e3e0..b727757 100644
--- a/xen/arch/x86/efi/Makefile
+++ b/xen/arch/x86/efi/Makefile
@@ -1,11 +1,11 @@
-CFLAGS += -fshort-wchar
+CFLAGS += -fshort-wchar -fno-stack-protector
 
 obj-y += stub.o
 
 create = test -e $(1) || touch -t 199901010000 $(1)
 
 efi := $(filter y,$(x86_64)$(shell rm -f disabled))
-efi := $(if $(efi),$(shell $(CC) -c -Werror check.c 2>disabled && echo y))
+efi := $(if $(efi),$(shell $(CC) -fno-stack-protector -c -Werror check.c 2>disabled && echo y))
 efi := $(if $(efi),$(shell $(LD) -mi386pep --subsystem=10 -o check.efi check.o 2>disabled && echo y))
 efi := $(if $(efi),$(shell rm disabled)y,$(shell $(call create,boot.init.o); $(call create,runtime.o)))
 
-- 
1.7.11.2
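[Editorial note, not part of the original patch: an illustrative sketch of the failure mode the patch addresses. On toolchains where gcc enables the stack protector by default, compiled objects reference the external symbol __stack_chk_fail, which the freestanding EFI image has no runtime to satisfy. The file and function names below are made up; the demo assumes gcc and nm are available.]

```shell
# Sketch: show that the stack protector makes gcc emit a reference to
# __stack_chk_fail, an external symbol a freestanding EFI build cannot
# resolve at link time.
cat > sp_demo.c <<'EOF'
void fill(char *dst);
void uses_buffer(void) { char buf[64]; fill(buf); }
EOF

gcc -c -fstack-protector-all sp_demo.c -o sp_on.o
gcc -c -fno-stack-protector  sp_demo.c -o sp_off.o

nm sp_on.o    # expect an undefined (U) __stack_chk_fail on typical Linux/gcc
nm sp_off.o   # no stack-protector symbols
```

[Presumably the flag is added to the check.c probe as well so that the probe's follow-up $(LD) link of check.o does not fail on the same undefined symbol and spuriously disable the EFI build.]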


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 13:54:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 13:54:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1HZK-0002MZ-8w; Tue, 14 Aug 2012 13:54:02 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T1HZI-0002MQ-JY
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 13:54:00 +0000
Received: from [85.158.139.83:54738] by server-2.bemta-5.messagelabs.com id
	80/26-10142-7785A205; Tue, 14 Aug 2012 13:53:59 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-13.tower-182.messagelabs.com!1344952438!27541696!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkwNzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6116 invoked from network); 14 Aug 2012 13:53:59 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-13.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 13:53:59 -0000
X-IronPort-AV: E=Sophos;i="4.77,766,1336348800"; d="scan'208";a="14004308"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Aug 2012 13:53:58 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 14 Aug 2012 14:53:58 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1T1HZG-0004Lc-Dx; Tue, 14 Aug 2012 13:53:58 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1T1HZG-0004fr-84;
	Tue, 14 Aug 2012 14:53:58 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20522.22646.152006.874730@mariner.uk.xensource.com>
Date: Tue, 14 Aug 2012 14:53:58 +0100
To: "tamas.k.lengyel@gmail.com" <tamas.k.lengyel@gmail.com>
In-Reply-To: <1344885972.2726.5.camel@Nokia-N900>
References: <CABfawh=yoidWLbcYqs4JOD+b30vxYrrT1Q7a2QBNttwx4U9=Ug@mail.gmail.com>
	<CABfawh=1NC-VypsYLNr-J6EkvRS8PBXO5spF8w9GQdaUaso+jQ@mail.gmail.com>
	<1343809281.27221.14.camel@zakaz.uk.xensource.com>
	<CABfawhmrJ=Cb37AqGgyEEXXmtyyTyui0h6-29iqEybvbNVsXxQ@mail.gmail.com>
	<20521.2218.8505.913476@mariner.uk.xensource.com>
	<1344885972.2726.5.camel@Nokia-N900>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] libxl config datastructures
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

tamas.k.lengyel@gmail.com writes ("Re: [Xen-devel] libxl config datastructures"):
> I'm OK with the config parser in xlu; my problem is the conversion between XLU_Config and the data structure the libxl domain-creation functions expect. Right now the only ways would be to duplicate the code in xl that does the transition, which I don't want to do, or to call xl itself instead of using libxl, which I find very ugly and hackish.

Yes.

> A further problem with XLU_Config is the fact that the data structure is transparent. It would be fine if there were more accessor functions written for it, so that it could be saved again as a file or dumped into a char *. The ability to change elements of an XLU_ConfigList would also be required.

I have just gone and looked at the code and you mean
  parse_config_data
in xl_cmdimpl.c.

I think this should be in libxlu.  Sadly this is too late for Xen 4.2.

If you would like to send patches to move the code from
parse_config_data into libxlu, we'll consider them for 4.3.

Thanks,
Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 13:56:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 13:56:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1Haz-0002TK-P8; Tue, 14 Aug 2012 13:55:45 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T1Hay-0002T5-SY
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 13:55:44 +0000
Received: from [85.158.143.99:59854] by server-3.bemta-4.messagelabs.com id
	EA/65-09529-0E85A205; Tue, 14 Aug 2012 13:55:44 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-10.tower-216.messagelabs.com!1344952542!21746252!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkwNzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21618 invoked from network); 14 Aug 2012 13:55:43 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-10.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 13:55:43 -0000
X-IronPort-AV: E=Sophos;i="4.77,766,1336348800"; d="scan'208";a="14004352"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Aug 2012 13:55:42 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 14 Aug 2012 14:55:42 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1T1Haw-0004NA-DC; Tue, 14 Aug 2012 13:55:42 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1T1Haw-0004g4-9T;
	Tue, 14 Aug 2012 14:55:42 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20522.22750.190548.204595@mariner.uk.xensource.com>
Date: Tue, 14 Aug 2012 14:55:42 +0100
To: Jan Beulich <JBeulich@suse.com>
In-Reply-To: <502A1C7B0200007800094A8F@nat28.tlf.novell.com>
References: <1344243491.11339.11.camel@zakaz.uk.xensource.com>
	<5028C34C02000078000945FC@nat28.tlf.novell.com>
	<CAOvdn6Vezt5gtsykUPxUdf_0-ubqjAJ5ygnV9hPiON1Geqq9Ag@mail.gmail.com>
	<502A1C7B0200007800094A8F@nat28.tlf.novell.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: Yongjie Ren <yongjie.ren@intel.com>, "Keir \(Xen.org\)" <keir@xen.org>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	GabrielX Wu <gabrielx.wu@intel.com>,
	xen-devel <xen-devel@lists.xen.org>, Ben Guthro <ben@guthro.net>
Subject: Re: [Xen-devel] 4.2 TODO / Release Plan
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jan Beulich writes ("Re: [Xen-devel] 4.2 TODO / Release Plan"):
> >>> On 13.08.12 at 19:26, Ben Guthro <ben@guthro.net> wrote:
> > Is this exclusive to my setup? Does S3 work elsewhere with 4.2?
> > It seems to fail 100% of the time on 100% of x86 machines I have tried.
> 
> Don't know. The systems I have tried S3 on have problems
> even with native Linux, so there's not much point playing
> with Xen on them.
> 
> Ian(J), I don't suppose this is part of the regression tests?

No, I'm afraid not.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 13:59:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 13:59:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1He3-0002iq-IT; Tue, 14 Aug 2012 13:58:55 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1T1He1-0002iX-Qo
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 13:58:54 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1344952678!9165396!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzEyMjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5468 invoked from network); 14 Aug 2012 13:58:00 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 13:58:00 -0000
X-IronPort-AV: E=Sophos;i="4.77,766,1336363200"; d="scan'208";a="205116368"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Aug 2012 09:57:57 -0400
Received: from [10.80.2.76] (10.80.2.76) by FTLPMAILMX01.citrite.net
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0; Tue, 14 Aug 2012
	09:57:57 -0400
Message-ID: <502A5964.2080509@citrix.com>
Date: Tue, 14 Aug 2012 14:57:56 +0100
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20120428 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Attilio Rao <attilio.rao@citrix.com>
References: <1344947062-4796-1-git-send-email-attilio.rao@citrix.com>
	<1344947062-4796-3-git-send-email-attilio.rao@citrix.com>
In-Reply-To: <1344947062-4796-3-git-send-email-attilio.rao@citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2 2/2] Xen: Document the semantic of the
 pagetable_reserve PVOPS
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 14/08/12 13:24, Attilio Rao wrote:
> The information added to the hook is:
> - Native behaviour
> - Xen specific behaviour
> - Logic behind the Xen specific behaviour

These are implementation details and should be documented with the
implementations (if necessary).

> - PVOPS semantic

This is the interesting stuff.

This particular pvop seems a little odd really.  It might make more
sense if it took a third parameter for pgt_buf_top.

"@pagetable_reserve is used to reserve a range of PFNs used for the
kernel direct mapping page tables and cleans-up any PFNs that ended up
not being used for the tables.

It shall reserve the range (start, end] with memblock_reserve(). It
shall prepare PFNs in the range (end, pgt_buf_top] for general (non page
table) use.

It shall only be called in init_memory_mapping() after the direct
mapping tables have been constructed."

Having said that, I couldn't immediately see where pages in (end,
pgt_buf_top] were getting set RO.  Can you point me to where it's done?

David

> Signed-off-by: Attilio Rao <attilio.rao@citrix.com>
> ---
>  arch/x86/include/asm/x86_init.h |   19 +++++++++++++++++--
>  1 files changed, 17 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/x86/include/asm/x86_init.h b/arch/x86/include/asm/x86_init.h
> index 38155f6..b22093c 100644
> --- a/arch/x86/include/asm/x86_init.h
> +++ b/arch/x86/include/asm/x86_init.h
> @@ -72,8 +72,23 @@ struct x86_init_oem {
>   * struct x86_init_mapping - platform specific initial kernel pagetable setup
>   * @pagetable_reserve:	reserve a range of addresses for kernel pagetable usage
>   *
> - * For more details on the purpose of this hook, look in
> - * init_memory_mapping and the commit that added it.
> + * It does reserve a range of pages, to be used as pagetable pages.
> + * The start and end parameters are expected to be contained in the
> + * [pgt_buf_start, pgt_buf_top] range.
> + * The native implementation reserves the pages via the memblock_reserve()
> + * interface.
> + * The Xen implementation, besides reserving the range via memblock_reserve(),
> + * also sets RW the remaining pages contained in the ranges
> + * [pgt_buf_start, start) and [end, pgt_buf_top).
> + * This is needed because the range [pgt_buf_start, pgt_buf_top] was
> + * previously mapped read-only by xen_set_pte_init: when running
> + * on Xen all the pagetable pages need to be mapped read-only in order to
> + * avoid protection faults from the hypervisor. However, once the correct
> + * amount of pages is reserved for the pagetables, all the others contained
> + * in the range must be set to RW so that they can be correctly recycled by
> + * the VM subsystem.
> + * This operation is meant to be performed only during init_memory_mapping(),
> + * just after space for the kernel direct mapping tables is found.
>   */
>  struct x86_init_mapping {
>  	void (*pagetable_reserve)(u64 start, u64 end);


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 13:59:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 13:59:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1HeB-0002jh-Ur; Tue, 14 Aug 2012 13:59:03 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T1HeA-0002jP-L3
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 13:59:02 +0000
Received: from [85.158.139.83:40867] by server-11.bemta-5.messagelabs.com id
	E9/EA-29296-5A95A205; Tue, 14 Aug 2012 13:59:01 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-5.tower-182.messagelabs.com!1344952740!28120120!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkwNzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24658 invoked from network); 14 Aug 2012 13:59:00 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-5.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 13:59:00 -0000
X-IronPort-AV: E=Sophos;i="4.77,766,1336348800"; d="scan'208";a="14004418"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Aug 2012 13:59:00 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 14 Aug 2012 14:59:00 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1T1He7-0004ON-Rd; Tue, 14 Aug 2012 13:58:59 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1T1He7-0004gS-Ny;
	Tue, 14 Aug 2012 14:58:59 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20522.22946.930177.26680@mariner.uk.xensource.com>
Date: Tue, 14 Aug 2012 14:58:58 +0100
To: "Xu, Dongxiao" <dongxiao.xu@intel.com>
In-Reply-To: <40776A41FC278F40B59438AD47D147A90FE048FB@SHSMSX102.ccr.corp.intel.com>
References: <40776A41FC278F40B59438AD47D147A90FE016D3@SHSMSX102.ccr.corp.intel.com>
	<40776A41FC278F40B59438AD47D147A90FE048FB@SHSMSX102.ccr.corp.intel.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: "Zhang, Yang Z" <yang.z.zhang@intel.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] qemu-xen-traditional: NOCACHE or CACHE_WB to open
 disk images for IDE
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Xu, Dongxiao writes ("RE: [Xen-devel] qemu-xen-traditional: NOCACHE or CACHE_WB to open disk images for IDE"):
> I attached the original patch here, which impacts the performance.

Stefano, when you proposed what is now
1307e42a4b3c1102d75401bc0cffb4eb6c9b7a38, you described it as a
major performance improvement.

Would you care to comment?

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 14:04:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 14:04:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1HjY-0003CC-NJ; Tue, 14 Aug 2012 14:04:36 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <anderson@redhat.com>) id 1T1HjX-0003BO-DY
	for xen-devel@lists.xensource.com; Tue, 14 Aug 2012 14:04:35 +0000
X-Env-Sender: anderson@redhat.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1344953049!2357023!1
X-Originating-IP: [209.132.183.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjQgPT4gOTI2NTM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20661 invoked from network); 14 Aug 2012 14:04:10 -0000
Received: from mx3-phx2.redhat.com (HELO mx3-phx2.redhat.com) (209.132.183.24)
	by server-14.tower-27.messagelabs.com with SMTP;
	14 Aug 2012 14:04:10 -0000
Received: from zmail15.collab.prod.int.phx2.redhat.com
	(zmail15.collab.prod.int.phx2.redhat.com [10.5.83.17])
	by mx3-phx2.redhat.com (8.13.8/8.13.8) with ESMTP id q7EE4093010832;
	Tue, 14 Aug 2012 10:04:00 -0400
Date: Tue, 14 Aug 2012 10:04:00 -0400 (EDT)
From: Dave Anderson <anderson@redhat.com>
To: "Discussion list for crash utility usage,
	maintenance and development" <crash-utility@redhat.com>
Message-ID: <1543188889.13757694.1344953040454.JavaMail.root@redhat.com>
In-Reply-To: <20120814061408.GA2471@host-192-168-1-59.local.net-space.pl>
MIME-Version: 1.0
X-Originating-IP: [10.16.185.59]
X-Mailer: Zimbra 7.2.0_GA_2669 (ZimbraWebClient - FF3.0 (Linux)/7.2.0_GA_2669)
Cc: olaf@aepfle.de, xen-devel@lists.xensource.com,
	konrad wilk <konrad.wilk@oracle.com>,
	andrew cooper3 <andrew.cooper3@citrix.com>, ptesarik@suse.cz,
	jbeulich@suse.com, kexec@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH v2 0/6] crash: Bundle of fixes for Xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



----- Original Message -----
> On Mon, Aug 13, 2012 at 03:16:53PM -0400, Dave Anderson wrote:
> 
> [...]
> 
> > OK good.  It tests OK on a few older pvops kernels that I have on
> > hand.
> >
> > The only thing I've changed is to handle compiler warnings in x86_64.c and
> > x86.c by initializing p2m_top to NULL in x86_64_pvops_xendump_p2m_l3_create()
> > and x86_pvops_xendump_p2m_l3_create().  I also used GETBUF() in those two
> > functions to avoid having to add the malloc-failure line.
> 
> No problem. However, I am a bit surprised that you have seen some warnings.
> I have not seen any. Did you compile crash with extra options or something
> like that? Or maybe there is a difference in our compilers (mine is gcc version
>  4.1.2 20061115 (prerelease) (Debian 4.1.1-21) - quite ancient).

Extra options (also with gcc 4.1.2) -- try building with "make warn".  It adds:

 WARNING_OPTIONS=-Wall -O2 -Wstrict-prototypes -Wmissing-prototypes -fstack-protector

or "make Warn", which adds -Werror to the above.

In those two functions there are "goto err" statements prior to allocating
p2m_top, and so the free(p2m_top) could receive a random stack value.  Of
course even if it did get by the free() call in that highly unlikely case,
the crash session is just about to fatally shut itself down.

Thanks,
  Dave

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 14:10:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 14:10:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1HpP-0003QY-H2; Tue, 14 Aug 2012 14:10:39 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1T1HpO-0003QT-6V
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 14:10:38 +0000
Received: from [85.158.139.83:39201] by server-7.bemta-5.messagelabs.com id
	31/8E-32634-D5C5A205; Tue, 14 Aug 2012 14:10:37 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-11.tower-182.messagelabs.com!1344953433!20856557!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjQ3NjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30096 invoked from network); 14 Aug 2012 14:10:34 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 14:10:34 -0000
X-IronPort-AV: E=Sophos;i="4.77,766,1336363200"; d="scan'208";a="34585031"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Aug 2012 10:10:33 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Tue, 14 Aug 2012 10:10:32 -0400
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1T1HpI-00037e-D5;
	Tue, 14 Aug 2012 15:10:32 +0100
Message-ID: <502A5C58.5050001@citrix.com>
Date: Tue, 14 Aug 2012 15:10:32 +0100
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120714 Thunderbird/14.0
MIME-Version: 1.0
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>, 
	Ian Campbell <Ian.Campbell@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>
X-Enigmail-Version: 1.4.3
Content-Type: multipart/mixed; boundary="------------080105090505080107050005"
Subject: [Xen-devel] tools/python: Clean python correctly
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--------------080105090505080107050005
Content-Type: text/plain; charset="ISO-8859-1"
Content-Transfer-Encoding: 7bit

I discovered this with repeated calls to "make clean; make dist-tools"

An alternative, if you wish to remove the use of $(PYTHON) from the
clean path, is simply to `rm -rf build`; but using setup.py seems neater.

-- 
Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
T: +44 (0)1223 225 900, http://www.citrix.com


--------------080105090505080107050005
Content-Type: text/x-patch; name="tools-python-clean.patch"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="tools-python-clean.patch"

# HG changeset patch
# Parent 33d596f46521ea852e90cf6dbdbf3680d104134c
tools/python: Clean python correctly

Cleaning the python directory should call `$(PYTHON) setup.py clean`
which will clean the build/ subdirectory.  Otherwise, subsequent builds
may be short-circuited and a stale build installed.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

diff -r 33d596f46521 tools/python/Makefile
--- a/tools/python/Makefile
+++ b/tools/python/Makefile
@@ -33,6 +33,7 @@ test:
 
 .PHONY: clean
 clean:
+	$(PYTHON) setup.py clean
 	rm -f $(XENPATH)
 	rm -rf *.pyc *.pyo *.o *.a *~ xen/util/auxbin.pyc
 	rm -f xen/lowlevel/xl/_pyxl_types.h

--------------080105090505080107050005
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--------------080105090505080107050005--


User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120714 Thunderbird/14.0
MIME-Version: 1.0
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>, 
	Ian Campbell <Ian.Campbell@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>
X-Enigmail-Version: 1.4.3
Content-Type: multipart/mixed; boundary="------------080105090505080107050005"
Subject: [Xen-devel] tools/python: Clean python correctly
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--------------080105090505080107050005
Content-Type: text/plain; charset="ISO-8859-1"
Content-Transfer-Encoding: 7bit

I discovered this with repeated calls to "make clean; make dist-tools"

An alternative, if you wish to remove the use of $(PYTHON) from the
clean path, is to just `rm -rf build`, but using setup.py seems neater.

-- 
Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
T: +44 (0)1223 225 900, http://www.citrix.com


--------------080105090505080107050005
Content-Type: text/x-patch; name="tools-python-clean.patch"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="tools-python-clean.patch"

# HG changeset patch
# Parent 33d596f46521ea852e90cf6dbdbf3680d104134c
tools/python: Clean python correctly

Cleaning the python directory should call `$(PYTHON) setup.py clean`
which will clean the build/ subdirectory.  Otherwise, subsequent builds
may be short-circuited and a stale build installed.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

diff -r 33d596f46521 tools/python/Makefile
--- a/tools/python/Makefile
+++ b/tools/python/Makefile
@@ -33,6 +33,7 @@ test:
 
 .PHONY: clean
 clean:
+	$(PYTHON) setup.py clean
 	rm -f $(XENPATH)
 	rm -rf *.pyc *.pyo *.o *.a *~ xen/util/auxbin.pyc
 	rm -f xen/lowlevel/xl/_pyxl_types.h

--------------080105090505080107050005
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--------------080105090505080107050005--


From xen-devel-bounces@lists.xen.org Tue Aug 14 14:19:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 14:19:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1Hy5-0003a1-HN; Tue, 14 Aug 2012 14:19:37 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T1Hy3-0003Zw-SG
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 14:19:36 +0000
Received: from [85.158.138.51:3202] by server-8.bemta-3.messagelabs.com id
	3C/42-29583-67E5A205; Tue, 14 Aug 2012 14:19:34 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-15.tower-174.messagelabs.com!1344953974!26463440!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27427 invoked from network); 14 Aug 2012 14:19:34 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-15.tower-174.messagelabs.com with SMTP;
	14 Aug 2012 14:19:34 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 14 Aug 2012 15:19:34 +0100
Message-Id: <502A7A940200007800094CE1@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Tue, 14 Aug 2012 15:19:32 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Daniel De Graaf" <dgdegra@tycho.nsa.gov>
References: <1344952326-28209-1-git-send-email-dgdegra@tycho.nsa.gov>
In-Reply-To: <1344952326-28209-1-git-send-email-dgdegra@tycho.nsa.gov>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Keir Fraser <keir@xen.org>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] x86-64/EFI: add -fno-stack-protector to EFI
	build
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 14.08.12 at 15:52, Daniel De Graaf <dgdegra@tycho.nsa.gov> wrote:
> Otherwise, the build fails due to a missing __stack_chk_fail symbol.
> 
> Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
> ---
>  xen/arch/x86/efi/Makefile | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/arch/x86/efi/Makefile b/xen/arch/x86/efi/Makefile
> index 005e3e0..b727757 100644
> --- a/xen/arch/x86/efi/Makefile
> +++ b/xen/arch/x86/efi/Makefile
> @@ -1,11 +1,11 @@
> -CFLAGS += -fshort-wchar
> +CFLAGS += -fshort-wchar -fno-stack-protector

It shouldn't be needed here (or else the rest of the Xen build
should fail too). Or where would that symbol magically come
from?

>  obj-y += stub.o
>  
>  create = test -e $(1) || touch -t 199901010000 $(1)
>  
>  efi := $(filter y,$(x86_64)$(shell rm -f disabled))
> -efi := $(if $(efi),$(shell $(CC) -c -Werror check.c 2>disabled && echo y))
> +efi := $(if $(efi),$(shell $(CC) -fno-stack-protector -c -Werror check.c 2>disabled && echo y))

I can see why it might be needed here, although I'm curious why
this is not a problem for our builds: are you using a compiler
that was configured in a non-standard way? Plus
I first want to understand why this special option isn't needed
for the rest of the build tree.

Jan

>  efi := $(if $(efi),$(shell $(LD) -mi386pep --subsystem=10 -o check.efi check.o 2>disabled && echo y))
>  efi := $(if $(efi),$(shell rm disabled)y,$(shell $(call create,boot.init.o); $(call create,runtime.o)))
>  



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 14:25:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 14:25:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1I39-0003k2-DT; Tue, 14 Aug 2012 14:24:51 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <attilio.rao@citrix.com>) id 1T1I37-0003jt-3t
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 14:24:49 +0000
Received: from [85.158.138.51:15595] by server-3.bemta-3.messagelabs.com id
	D2/D5-13809-0BF5A205; Tue, 14 Aug 2012 14:24:48 +0000
X-Env-Sender: attilio.rao@citrix.com
X-Msg-Ref: server-15.tower-174.messagelabs.com!1344954287!26464626!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkwNzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18770 invoked from network); 14 Aug 2012 14:24:47 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 14:24:47 -0000
X-IronPort-AV: E=Sophos;i="4.77,766,1336348800"; d="scan'208";a="14005084"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Aug 2012 14:24:47 +0000
Received: from [10.80.3.221] (10.80.3.221) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Tue, 14 Aug 2012 15:24:47 +0100
Message-ID: <502A5CD0.8000201@citrix.com>
Date: Tue, 14 Aug 2012 15:12:32 +0100
From: Attilio Rao <attilio.rao@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US;
	rv:1.9.1.16) Gecko/20111110 Icedove/3.0.11
MIME-Version: 1.0
To: David Vrabel <david.vrabel@citrix.com>
References: <1344947062-4796-1-git-send-email-attilio.rao@citrix.com>
	<1344947062-4796-3-git-send-email-attilio.rao@citrix.com>
	<502A5964.2080509@citrix.com>
In-Reply-To: <502A5964.2080509@citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v2 2/2] Xen: Document the semantic of the
 pagetable_reserve PVOPS
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 14/08/12 14:57, David Vrabel wrote:
> On 14/08/12 13:24, Attilio Rao wrote:
>    
>> The informations added on the hook are:
>> - Native behaviour
>> - Xen specific behaviour
>> - Logic behind the Xen specific behaviour
>>      
> These are implementation details and should be documented with the
> implementations (if necessary).
>    

In this specific case, implementation details are very valuable for 
understanding the semantics of the operations, which is why I added them 
there. I think that, at least for this case, this is the best trade-off.

>> - PVOPS semantic
>>      
> This is the interesting stuff.
>
> This particular pvop seems a little odd really.  It might make more
> sense if it took a third parameter for pgt_buf_top.
>    

The point of this work (documenting PVOPS) is to help in 
understanding the logic behind some PVOPS, so that we can possibly 
improve or rework them. For this stage, I agreed with Konrad to keep the 
changes as small as possible. Once the documentation of the semantics is 
in place, we can think about ways to improve things more effectively 
(for example, in some cases we may want to rewrite the PVOP completely).

> "@pagetable_reserve is used to reserve a range of PFNs used for the
> kernel direct mapping page tables and cleans-up any PFNs that ended up
> not being used for the tables.
>
> It shall reserve the range (start, end] with memblock_reserve(). It
> shall prepare PFNs in the range (end, pgt_buf_top] for general (non page
> table) use.
>
> It shall only be called in init_memory_mapping() after the direct
> mapping tables have been constructed."
>
> Having said that, I couldn't immediately see where pages in (end,
> pgt_buf_top] was getting set RO.  Can you point me to where it's done?
>    

As mentioned in the comment, please look at xen_set_pte_init().

Attilio

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 14:25:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 14:25:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1I3P-0003lR-QW; Tue, 14 Aug 2012 14:25:07 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T1I3O-0003lE-Jn
	for xen-devel@lists.xensource.com; Tue, 14 Aug 2012 14:25:06 +0000
Received: from [85.158.138.51:21008] by server-10.bemta-3.messagelabs.com id
	0C/52-20518-1CF5A205; Tue, 14 Aug 2012 14:25:05 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-6.tower-174.messagelabs.com!1344954302!20196556!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjg5MTc4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15976 invoked from network); 14 Aug 2012 14:25:03 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-6.tower-174.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 14 Aug 2012 14:25:03 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7EEOwag003474
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 14 Aug 2012 14:24:59 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7EEOuVU009275
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 14 Aug 2012 14:24:58 GMT
Received: from abhmt118.oracle.com (abhmt118.oracle.com [141.146.116.70])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7EEOt98005037; Tue, 14 Aug 2012 09:24:55 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 14 Aug 2012 07:24:55 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 26724402EF; Tue, 14 Aug 2012 10:15:14 -0400 (EDT)
Date: Tue, 14 Aug 2012 10:15:14 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Linus Torvalds <torvalds@linux-foundation.org>,
	linux-kernel@vger.kernel.org
Message-ID: <20120814141513.GA17776@phenom.dumpdata.com>
MIME-Version: 1.0
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: xen-devel@lists.xensource.com
Subject: [Xen-devel] [GIT PULL] (xen) stable/for-linus-3.6-rc1-tag
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2932669423724541196=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--===============2932669423724541196==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="envbJBWh7q8WU6mo"
Content-Disposition: inline


--envbJBWh7q8WU6mo
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

Hey Linus,

Please pull this tag:

 git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git stable/for-linus-3.6-rc1-tag

which has just one tiny fix. Way back in v3.5 we added a mechanism to populate
back pages that were released (they overlapped with MMIO regions), but neglected to
reserve the proper amount of virtual space for extend_brk to work properly. Coincidentally,
some other commit aligned the _brk space to a larger area, so I didn't trigger this until
it was run on a machine with more than 2GB of MMIO space.

 arch/x86/xen/p2m.c |    5 -----
 1 files changed, 0 insertions(+), 5 deletions(-)

Konrad Rzeszutek Wilk (1):
      xen/p2m: Reserve 8MB of _brk space for P2M leafs when populating back.


--envbJBWh7q8WU6mo
Content-Type: application/pgp-signature

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iQEcBAEBAgAGBQJQKl1rAAoJEFjIrFwIi8fJIzQIAM7KCWmrmf3pMFClEoeRzDB/
Ywtut3KZvbP+h35LGJvHBLURishtGRrmuW1aubb/J1TyE+z++6p/IEzArYZ6BjOY
eNFqCCPV8atQYJe1kRpHW1pJggHjIsQ7w3KP7bM6+0ROG3EXtgUh2oBybKN5YfWX
vz7Ol6RteTq1ElbQ1TrhyxN/4EkWHof8pViR677ofWhjKajRdF2aP5MF3oggAqg1
C8tZFkD9D0tf+JQKO00HSVmIGcY63EsceOt9+RuARQzI8zecyS9me3/99gWGsHSx
mDBigjxzWGfSTZC9R84WUmyLL98pIjNutdLvvsyfSBz+akd8xaOqWMWLpvBdwLU=
=wRra
-----END PGP SIGNATURE-----

--envbJBWh7q8WU6mo--


--===============2932669423724541196==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2932669423724541196==--


From xen-devel-bounces@lists.xen.org Tue Aug 14 14:25:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 14:25:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1I3P-0003lR-QW; Tue, 14 Aug 2012 14:25:07 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T1I3O-0003lE-Jn
	for xen-devel@lists.xensource.com; Tue, 14 Aug 2012 14:25:06 +0000
Received: from [85.158.138.51:21008] by server-10.bemta-3.messagelabs.com id
	0C/52-20518-1CF5A205; Tue, 14 Aug 2012 14:25:05 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-6.tower-174.messagelabs.com!1344954302!20196556!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjg5MTc4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15976 invoked from network); 14 Aug 2012 14:25:03 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-6.tower-174.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 14 Aug 2012 14:25:03 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7EEOwag003474
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 14 Aug 2012 14:24:59 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7EEOuVU009275
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 14 Aug 2012 14:24:58 GMT
Received: from abhmt118.oracle.com (abhmt118.oracle.com [141.146.116.70])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7EEOt98005037; Tue, 14 Aug 2012 09:24:55 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 14 Aug 2012 07:24:55 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 26724402EF; Tue, 14 Aug 2012 10:15:14 -0400 (EDT)
Date: Tue, 14 Aug 2012 10:15:14 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Linus Torvalds <torvalds@linux-foundation.org>,
	linux-kernel@vger.kernel.org
Message-ID: <20120814141513.GA17776@phenom.dumpdata.com>
MIME-Version: 1.0
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: xen-devel@lists.xensource.com
Subject: [Xen-devel] [GIT PULL] (xen) stable/for-linus-3.6-rc1-tag
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2932669423724541196=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--===============2932669423724541196==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="envbJBWh7q8WU6mo"
Content-Disposition: inline


--envbJBWh7q8WU6mo
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

Hey Linus,

Please pull this tag:

 git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git stable/for-linus-3.6-rc1-tag

which has just one tiny fix - way back in v3.5 we added a mechanism to populate
back pages that were released (they overlapped with MMIO regions), but neglected to
reserve the proper amount of virtual space for extend_brk to work properly. Coincidentally,
some other commit aligned the _brk space to a larger area, so I didn't trigger this until
the code was run on a machine with more than 2GB of MMIO space.

 arch/x86/xen/p2m.c |    5 -----
 1 files changed, 0 insertions(+), 5 deletions(-)

Konrad Rzeszutek Wilk (1):
      xen/p2m: Reserve 8MB of _brk space for P2M leafs when populating back.


--envbJBWh7q8WU6mo
Content-Type: application/pgp-signature

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iQEcBAEBAgAGBQJQKl1rAAoJEFjIrFwIi8fJIzQIAM7KCWmrmf3pMFClEoeRzDB/
Ywtut3KZvbP+h35LGJvHBLURishtGRrmuW1aubb/J1TyE+z++6p/IEzArYZ6BjOY
eNFqCCPV8atQYJe1kRpHW1pJggHjIsQ7w3KP7bM6+0ROG3EXtgUh2oBybKN5YfWX
vz7Ol6RteTq1ElbQ1TrhyxN/4EkWHof8pViR677ofWhjKajRdF2aP5MF3oggAqg1
C8tZFkD9D0tf+JQKO00HSVmIGcY63EsceOt9+RuARQzI8zecyS9me3/99gWGsHSx
mDBigjxzWGfSTZC9R84WUmyLL98pIjNutdLvvsyfSBz+akd8xaOqWMWLpvBdwLU=
=wRra
-----END PGP SIGNATURE-----

--envbJBWh7q8WU6mo--


--===============2932669423724541196==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2932669423724541196==--


From xen-devel-bounces@lists.xen.org Tue Aug 14 14:27:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 14:27:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1I5C-0003v0-BJ; Tue, 14 Aug 2012 14:26:58 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1T1I5A-0003ua-R0
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 14:26:56 +0000
Received: from [85.158.143.99:19479] by server-3.bemta-4.messagelabs.com id
	E1/F4-09529-0306A205; Tue, 14 Aug 2012 14:26:56 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-11.tower-216.messagelabs.com!1344954415!20607447!1
X-Originating-IP: [74.125.83.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1756 invoked from network); 14 Aug 2012 14:26:56 -0000
Received: from mail-ee0-f45.google.com (HELO mail-ee0-f45.google.com)
	(74.125.83.45)
	by server-11.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 14:26:56 -0000
Received: by eeke53 with SMTP id e53so180902eek.32
	for <xen-devel@lists.xen.org>; Tue, 14 Aug 2012 07:26:55 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:user-agent:date:subject:from:to:cc:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=c//g6wABO+tzXOxhTWq24SBdGnN1VRsE1iI2Nt0FPnw=;
	b=y/l7nrTm3N5jlgi7K++B2Ya4bmaOr6c8VvxB09k5ZDbrTfjgnOX1HvpkovP4wUPTOH
	arKDDE5uu+Biy6HNMRuF3UhBG0jppbWPqKOXOWTTyila4uNm1lHYt+aKloX9B97Xl03a
	iOB4X+3fvzmVk1lMg3h+4PIGXh2VVFyB3Vs7iciC8KK4e5TNwTHP+v7Zr0u/921xGGf4
	CeDVjR0D4sI4DmrMAKuT4dm00q4E85NMl+WIah0XrogleWitYqI2ZkLU40+kMP9u+YrF
	3cYHF/p2p42eqLjFyWs4WNjT769v5pVXeMww2aqC2KpN4J3vL3fUGCAQMsofSzTWofsc
	Zg+A==
Received: by 10.14.179.200 with SMTP id h48mr19877670eem.12.1344954415831;
	Tue, 14 Aug 2012 07:26:55 -0700 (PDT)
Received: from [192.168.1.3] (host86-157-166-190.range86-157.btcentralplus.com.
	[86.157.166.190])
	by mx.google.com with ESMTPS id a7sm7296187eep.14.2012.08.14.07.26.54
	(version=SSLv3 cipher=OTHER); Tue, 14 Aug 2012 07:26:55 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.33.0.120411
Date: Tue, 14 Aug 2012 15:26:51 +0100
From: Keir Fraser <keir@xen.org>
To: Ian Campbell <Ian.Campbell@citrix.com>,
	Pasi =?ISO-8859-1?B?S+Rya2vkaW5lbg==?= <pasik@iki.fi>
Message-ID: <CC501EBB.48C47%keir@xen.org>
Thread-Topic: [Xen-devel] Xen 4.2 TODO / Release Plan / ipxe gcc 4.7
Thread-Index: Ac16KNhPy3azBAy8W0KZW/VVpzdOGg==
In-Reply-To: <1344949140.5926.30.camel@zakaz.uk.xensource.com>
Mime-version: 1.0
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.2 TODO / Release Plan / ipxe gcc 4.7
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 14/08/2012 13:59, "Ian Campbell" <Ian.Campbell@citrix.com> wrote:

>> And the needed patches are listed here:
>> http://lists.xen.org/archives/html/xen-devel/2012-08/msg01048.html
> 
> I think taking the individual patches would be more sensible than a
> wholesale refresh at this stage, Keir?
> 
> NB: I didn't actually look at the patches...

Yes, I will do the work to integrate the patches into the tree (simple job).

We can nudge ourselves up to the tip of ipxe after 4.2.

 -- Keir



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 14:30:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 14:30:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1I8c-0004Lg-TW; Tue, 14 Aug 2012 14:30:30 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1T1I8b-0004LM-IF
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 14:30:29 +0000
Received: from [85.158.143.35:61771] by server-1.bemta-4.messagelabs.com id
	42/C5-07754-4016A205; Tue, 14 Aug 2012 14:30:28 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-6.tower-21.messagelabs.com!1344954626!15704269!1
X-Originating-IP: [63.239.67.9]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15739 invoked from network); 14 Aug 2012 14:30:27 -0000
Received: from emvm-gh1-uea08.nsa.gov (HELO nsa.gov) (63.239.67.9)
	by server-6.tower-21.messagelabs.com with SMTP;
	14 Aug 2012 14:30:27 -0000
X-TM-IMSS-Message-ID: <a437e3c900086b17@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov ([63.239.67.9])
	with ESMTP (TREND IMSS SMTP Service 7.1) id a437e3c900086b17 ;
	Tue, 14 Aug 2012 10:30:17 -0400
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id q7EEUNcP021047; 
	Tue, 14 Aug 2012 10:30:23 -0400
Message-ID: <502A60FF.9060801@tycho.nsa.gov>
Date: Tue, 14 Aug 2012 10:30:23 -0400
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Organization: National Security Agency
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120717 Thunderbird/14.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1344952326-28209-1-git-send-email-dgdegra@tycho.nsa.gov>
	<502A7A940200007800094CE1@nat28.tlf.novell.com>
In-Reply-To: <502A7A940200007800094CE1@nat28.tlf.novell.com>
Cc: Keir Fraser <keir@xen.org>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] x86-64/EFI: add -fno-stack-protector to EFI
	build
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/14/2012 10:19 AM, Jan Beulich wrote:
>>>> On 14.08.12 at 15:52, Daniel De Graaf <dgdegra@tycho.nsa.gov> wrote:
>> Otherwise, the build fails due to a missing __stack_chk_fail symbol.
>>
>> Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
>> ---
>>  xen/arch/x86/efi/Makefile | 4 ++--
>>  1 file changed, 2 insertions(+), 2 deletions(-)
>>
>> diff --git a/xen/arch/x86/efi/Makefile b/xen/arch/x86/efi/Makefile
>> index 005e3e0..b727757 100644
>> --- a/xen/arch/x86/efi/Makefile
>> +++ b/xen/arch/x86/efi/Makefile
>> @@ -1,11 +1,11 @@
>> -CFLAGS += -fshort-wchar
>> +CFLAGS += -fshort-wchar -fno-stack-protector
> 
> It shouldn't be needed here (or else the rest of the Xen build
> should fail too). Or where would that symbol magically come
> from?

You are correct, this change is not needed. The Xen build already adds
-fno-stack-protector to CFLAGS, so this change would just duplicate it.

>>  obj-y += stub.o
>>  
>>  create = test -e $(1) || touch -t 199901010000 $(1)
>>  
>>  efi := $(filter y,$(x86_64)$(shell rm -f disabled))
>> -efi := $(if $(efi),$(shell $(CC) -c -Werror check.c 2>disabled && echo y))
>> +efi := $(if $(efi),$(shell $(CC) -fno-stack-protector -c -Werror check.c 2>disabled && echo y))
> 
> I can see why it might be needed here, although I'm curious why
> this is not a problem for our builds: are you using a compiler
> that has been configured in a non-standard way? Plus, I first
> want to understand why this special option isn't needed
> for the rest of the build tree.
> 
> Jan

This build was done on Gentoo using a hardened profile, which I assume
adds -fstack-protector to the default compiler flags. Compiler version
string is "gcc version 4.6.3 (Gentoo Hardened 4.6.3 p1.6, pie-0.5.2)"


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 14:37:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 14:37:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1IEp-0004o6-ON; Tue, 14 Aug 2012 14:36:55 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1T1IEo-0004o0-CS
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 14:36:54 +0000
Received: from [85.158.143.99:21906] by server-3.bemta-4.messagelabs.com id
	9D/08-09529-5826A205; Tue, 14 Aug 2012 14:36:53 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-2.tower-216.messagelabs.com!1344955012!22825207!1
X-Originating-IP: [81.169.146.161]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MSA9PiAzOTgwODE=\n,sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MSA9PiAzOTgwODE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31884 invoked from network); 14 Aug 2012 14:36:53 -0000
Received: from mo-p00-ob.rzone.de (HELO mo-p00-ob.rzone.de) (81.169.146.161)
	by server-2.tower-216.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 14 Aug 2012 14:36:53 -0000
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+zrwiavkK6tmQaLfmztM8TOFGiy0MEnhN
X-RZG-CLASS-ID: mo00
Received: from probook.site
	(dslb-088-065-085-213.pools.arcor-ip.net [88.65.85.213])
	by smtp.strato.de (josoe mo68) (RZmta 30.9 DYNA|AUTH)
	with (DHE-RSA-AES256-SHA encrypted) ESMTPA id 302e5do7EDoVWE ;
	Tue, 14 Aug 2012 16:36:50 +0200 (CEST)
Received: by probook.site (Postfix, from userid 1000)
	id B9D231836E; Tue, 14 Aug 2012 16:36:49 +0200 (CEST)
Date: Tue, 14 Aug 2012 16:36:49 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Pasi =?utf-8?B?S8Okcmtrw6RpbmVu?= <pasik@iki.fi>
Message-ID: <20120814143649.GA23135@aepfle.de>
References: <20120813203950.GQ19851@reaktio.net>
	<20120813213558.GR19851@reaktio.net>
	<20120813213903.GS19851@reaktio.net>
	<20120813214449.GT19851@reaktio.net>
	<20120813220734.GU19851@reaktio.net>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20120813220734.GU19851@reaktio.net>
User-Agent: Mutt/1.5.21.rev5555 (2012-07-20)
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen 4.2.0-rc2: make tools fails on Fedora 17,
 ipxe build problems with gcc 4.7
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Aug 14, Pasi Kärkkäinen wrote:

> Hello again,
>
> So to be able to build Xen 4.2.0-rc2 on Fedora 17 with gcc 4.7
> I had to apply these three patches to ipxe:
>
> http://lists.ipxe.org/pipermail/ipxe-devel/2012-March/001279.html
> http://lists.ipxe.org/pipermail/ipxe-devel/2012-March/001280.html
> http://permalink.gmane.org/gmane.network.ipxe.devel/1216

Sorry, I just realized that my xen-unstable rpm package still contains a
patches.tar.gz which is extracted in tools/firmware/etherboot. So I did
not notice the build errors in plain xen-unstable.

Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 14:40:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 14:40:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1IHy-0004wd-Dk; Tue, 14 Aug 2012 14:40:10 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T1IHw-0004wC-5A
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 14:40:08 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1344955201!9257621!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19546 invoked from network); 14 Aug 2012 14:40:01 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-27.messagelabs.com with SMTP;
	14 Aug 2012 14:40:01 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 14 Aug 2012 15:40:01 +0100
Message-Id: <502A7F5C0200007800094D24@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Tue, 14 Aug 2012 15:39:56 +0100
From: "Jan Beulich" <JBeulich@suse.com>
From xen-devel-bounces@lists.xen.org Tue Aug 14 14:40:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 14:40:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1IHy-0004wd-Dk; Tue, 14 Aug 2012 14:40:10 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T1IHw-0004wC-5A
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 14:40:08 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1344955201!9257621!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19546 invoked from network); 14 Aug 2012 14:40:01 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-27.messagelabs.com with SMTP;
	14 Aug 2012 14:40:01 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 14 Aug 2012 15:40:01 +0100
Message-Id: <502A7F5C0200007800094D24@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Tue, 14 Aug 2012 15:39:56 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Daniel De Graaf" <dgdegra@tycho.nsa.gov>
References: <1344952326-28209-1-git-send-email-dgdegra@tycho.nsa.gov>
	<502A7A940200007800094CE1@nat28.tlf.novell.com>
	<502A60FF.9060801@tycho.nsa.gov>
In-Reply-To: <502A60FF.9060801@tycho.nsa.gov>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Keir Fraser <keir@xen.org>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] x86-64/EFI: add -fno-stack-protector to EFI
 build
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 14.08.12 at 16:30, Daniel De Graaf <dgdegra@tycho.nsa.gov> wrote:
> On 08/14/2012 10:19 AM, Jan Beulich wrote:
>>>>> On 14.08.12 at 15:52, Daniel De Graaf <dgdegra@tycho.nsa.gov> wrote:
>>> Otherwise, the build fails due to a missing __stack_chk_fail symbol.
>>>
>>> Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
>>> ---
>>>  xen/arch/x86/efi/Makefile | 4 ++--
>>>  1 file changed, 2 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/xen/arch/x86/efi/Makefile b/xen/arch/x86/efi/Makefile
>>> index 005e3e0..b727757 100644
>>> --- a/xen/arch/x86/efi/Makefile
>>> +++ b/xen/arch/x86/efi/Makefile
>>> @@ -1,11 +1,11 @@
>>> -CFLAGS += -fshort-wchar
>>> +CFLAGS += -fshort-wchar -fno-stack-protector
>> 
>> It shouldn't be needed here (or else the rest of the Xen build
>> should fail too). Or where would that symbol magically come
>> from?
> 
> You are correct, this change is not needed. The Xen build already adds
> -fno-stack-protector to CFLAGS, so this change would just duplicate it.
> 
>>>  obj-y += stub.o
>>>  
>>>  create = test -e $(1) || touch -t 199901010000 $(1)
>>>  
>>>  efi := $(filter y,$(x86_64)$(shell rm -f disabled))
>>> -efi := $(if $(efi),$(shell $(CC) -c -Werror check.c 2>disabled && echo y))
>>> +efi := $(if $(efi),$(shell $(CC) -fno-stack-protector -c -Werror check.c 2>disabled && echo y))
>> 
>> I can see why it might be needed here, although I'm curious why
>> this is not a problem for our builds: are you using a compiler
>> that was configured in a non-standard way? Plus I first want to
>> understand why this special option isn't needed for the rest
>> of the build tree.
> 
> This build was done on Gentoo using a hardened profile, which I assume
> adds -fstack-protector to the default compiler flags. Compiler version
> string is "gcc version 4.6.3 (Gentoo Hardened 4.6.3 p1.6, pie-0.5.2)"

So I'd prefer adding $(EMBEDDED_EXTRA_CFLAGS) instead of
the explicit option then, if that's okay for you too. Mind re-
spinning the patch that way?

Jan
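[Editorial note: the failure mode discussed above is easy to reproduce outside the Xen tree. With the stack protector enabled, gcc plants a canary check that calls __stack_chk_fail, a symbol normally supplied by the C runtime, so a freestanding link such as the EFI check has nothing to resolve it with. A minimal sketch, assuming a host gcc and binutils' nm; the file names are illustrative:]

```shell
# Create a trivial translation unit with an on-stack buffer.
cat > sp_demo.c <<'EOF'
void f(void) { char buf[64]; buf[0] = 0; }
EOF

# Forcing the protector on makes gcc instrument every function; the
# object then carries an undefined (U) reference to __stack_chk_fail.
gcc -fstack-protector-all -c sp_demo.c -o sp_on.o
nm sp_on.o | grep __stack_chk_fail

# Disabling the protector removes the reference entirely.
gcc -fno-stack-protector -c sp_demo.c -o sp_off.o
nm sp_off.o | grep __stack_chk_fail || echo "no __stack_chk_fail reference"
```

[A hardened toolchain, as in Daniel's Gentoo case, behaves as if -fstack-protector were passed by default, which is why the probe fails there but not on stock compilers.]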


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 14:43:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 14:43:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1IL7-00057G-14; Tue, 14 Aug 2012 14:43:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pmoody@google.com>) id 1T1IL5-000576-MZ
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 14:43:23 +0000
Received: from [85.158.143.35:2390] by server-3.bemta-4.messagelabs.com id
	22/93-09529-B046A205; Tue, 14 Aug 2012 14:43:23 +0000
X-Env-Sender: pmoody@google.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1344955399!5540570!1
X-Originating-IP: [74.125.83.45]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
  RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6850 invoked from network); 14 Aug 2012 14:43:19 -0000
Received: from mail-ee0-f45.google.com (HELO mail-ee0-f45.google.com)
	(74.125.83.45)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 14:43:19 -0000
Received: by eeke53 with SMTP id e53so187189eek.32
	for <xen-devel@lists.xen.org>; Tue, 14 Aug 2012 07:43:19 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type:x-system-of-record;
	bh=uSC2zU6Y0+XNZN+1xllaHEB+R9Kn4n8FB+26Dpuobfk=;
	b=XdgEFTMZ1FJt8UdictsOtthUFWnuomcW+aZFr59y8sgxJArJHSXycWd7BpZSh+D3FW
	99UUID7WZ9FlD7jw9nJIBXVwLyppWt7u7XP7NHJf875poI1/598t1tgB3PDRbEB5TecN
	W9pfKz/2Xozpxg0jEAvcXCww9brtBms3AfToP9FHWHcfMVI+5ZvNglo6UvYxRnD3SA3T
	txwvpw8Zjq4ebYnBnkDRmBogPRfAFT67BRc7K2aAPh6pIvyQcPwxbx18notz0p9TmRJL
	w++4VsUB0z3Ssox+awmoTOLx8Tns7ItPI1ENi0VAW1hAN0KMOvCUc3ppUroUKFPEYj9+
	RzKw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type:x-system-of-record:x-gm-message-state;
	bh=uSC2zU6Y0+XNZN+1xllaHEB+R9Kn4n8FB+26Dpuobfk=;
	b=QG9AqtQrRHhrBtuunM46pfY9DAl4skir+QGsO4ddBmKNu40zULZxNvoVCZGjYP+FWH
	c9k0HGr4JSz39Bp5bvT7YQjuVl9qtsL93NhoQBZweCHi1tq7FjsehDh2iDHiXRfTfpWI
	kstaM825gChshyjs0PxB8r1HUFbh6GDPb1PwI+Fp7j+qAM8BEgCuKD8zFYoJWK5fYU0l
	JI6y8PDNjcMtPudIEvrYBbPXPHVTu8nEnySJ9qTMfHSR1nsaSSH8LW8vMeFc6aYjfEq0
	yY4y0xEY2BVtxOOl9cpDB5t9K817ervG+fO8ybTi7yQJ9Fh3HAn9QKUWtTwG8UxlZz3E
	mFsg==
Received: by 10.14.206.200 with SMTP id l48mr19736365eeo.41.1344955399262;
	Tue, 14 Aug 2012 07:43:19 -0700 (PDT)
Received: by 10.14.206.200 with SMTP id l48mr19736345eeo.41.1344955399077;
	Tue, 14 Aug 2012 07:43:19 -0700 (PDT)
MIME-Version: 1.0
Received: by 10.14.22.3 with HTTP; Tue, 14 Aug 2012 07:42:48 -0700 (PDT)
In-Reply-To: <1344935951.5926.9.camel@zakaz.uk.xensource.com>
References: <CALnj_=4U+6n2-KSaLgbS5OpP3mNKwPredFYjG-0dEUNQpSSwxg@mail.gmail.com>
	<1344935951.5926.9.camel@zakaz.uk.xensource.com>
From: Peter Moody <pmoody@google.com>
Date: Tue, 14 Aug 2012 07:42:48 -0700
Message-ID: <CALnj_=6WZmXO7Stu-B79Z30=hsSQW0FOQTLdHSH88OyQG6z5nw@mail.gmail.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
X-System-Of-Record: true
X-Gm-Message-State: ALoCoQmA+R4TDayv4JnGVy3X1Dj+tJsX13no0CEgSWWOT0O4ctk4wbb/hHydgTi6eDhre1GhI9iQylf0kaYeSeOjFwA8Dp6p9ybK9IN7OCdTUeOd4gzJe1RPQIBmcYVAHqx8FYF6YtwNWWPMoRy8XwIanY1ieTYnS+w2ZPJ3qm3tNiEkNcs7JWlhluNl8roY7Oa4RMpgAhbG
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] 100% reliable Oops on xen 4.0.1
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Aug 14, 2012 at 2:19 AM, Ian Campbell <Ian.Campbell@citrix.com> wrote:

>>
>>  a) this only happens on our xen machines (though not all of them)
>>  b) one of my stack traces started with
>>
>> [172577.560441]  [<ffffffff810065ad>] ? xen_force_evtchn_callback+0xd/0x10
>
> This is likely to be a coincidence IMHO since this function forces a
> call to the hypervisor to trigger the (re)injection of any pending
> interrupts (typically after reenabling interrupts), so it is not unusual
> for it to be at the bottom of any stack trace which happens in interrupt
> context.
>
> The example stack trace in crasher.c doesn't involve Xen -- can you post
> any examples of ones which do?

Hi Ian, here's the trace in question. I'm perfectly happy with this
not being a xen issue, if for no other reason than that it means I have
one less thing I need to look at. The python script in question was
essentially doing the same thing as crasher.c, though in the middle of
other, more productive activities.

Cheers,
peter

------------[ cut here ]------------
kernel BUG at fs/buffer.c:1263!
invalid opcode: 0000 [#1] SMP
last sysfs file: /sys/devices/system/cpu/online
CPU 3
Pid: 27277, comm: python2.6 Not tainted 2.6.38.8-gg868-ganetixenu #1
RIP: e030:[<ffffffff81153853>]  [<ffffffff81153853>]
__find_get_block+0x1f3/0x200
RSP: e02b:ffff880496cffc78  EFLAGS: 00010046
RAX: ffff8807b9480000 RBX: ffff88049f172de8 RCX: 000000000086dafd
RDX: 0000000000001000 RSI: 000000000086dafd RDI: ffff8807ba4dd380
RBP: ffff880496cffcd8 R08: 0000000000000001 R09: ffff88049f172d10
R10: 0000000000000000 R11: 0000000000000000 R12: ffff88049f172d14
R13: ffff88049f172d40 R14: ffff8807ba4b7228 R15: 000000000086dafd
FS:  00007f667a0ca700(0000) GS:ffff8807fff74000(0063) knlGS:0000000000000000
CS:  e033 DS: 002b ES: 002b CR0: 000000008005003b
CR2: 000000000a130260 CR3: 00000004e978c000 CR4: 0000000000002660
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Process python2.6 (pid: 27277, threadinfo ffff880496cfe000, task
ffff8804b5a72d40)
Stack:
 ffff880496cffc98 ffffffff81654cd1 ffff880496cffca8 ffff88062d8d2440
 ffff880496cffd08 ffffffff811c9294 ffff8804ffffffc3 0000000000000014
 ffff88049f172de8 ffff88049f172d14 ffff88049f172d40 ffff8807ba4b7228
Call Trace:
 [<ffffffff81654cd1>] ? down_read+0x11/0x30
 [<ffffffff811c9294>] ? ext3_xattr_get+0xf4/0x2b0
 [<ffffffff811baf88>] ext3_clear_blocks+0x128/0x190
 [<ffffffff811bb104>] ext3_free_data+0x114/0x160
 [<ffffffff811bbc0a>] ext3_truncate+0x87a/0x950
 [<ffffffff812133f5>] ? journal_start+0xb5/0x100
 [<ffffffff811bc840>] ext3_evict_inode+0x180/0x1a0
 [<ffffffff8114065f>] evict+0x1f/0xb0
 [<ffffffff81006d52>] ? check_events+0x12/0x20
 [<ffffffff81140c14>] iput+0x1a4/0x290
 [<ffffffff8113ed05>] dput+0x265/0x310
 [<ffffffff81132435>] path_put+0x15/0x30
 [<ffffffff810a5d31>] audit_syscall_exit+0x171/0x260
 [<ffffffff8103ed9a>] sysexit_audit+0x21/0x5f
 [<ffffffff810065ad>] ? xen_force_evtchn_callback+0xd/0x10
 [<ffffffff81006d52>] ? check_events+0x12/0x20
Code: 82 00 05 01 00 85 c0 75 de 65 48 89 1c 25 00 05 01 00 e9 87 fe
ff ff 48 89 df e8 e9 fc ff ff 4c 89 f7 e9 02 ff ff ff 0f 0b eb fe <0f>
0b eb fe 0f 0b eb fe 0f 1f 44 00 00 55 48 89 e5 41 57 49 89
RIP  [<ffffffff81153853>] __find_get_block+0x1f3/0x200
 RSP <ffff880496cffc78>
---[ end trace d45267c89c4e0548 ]---


-- 
Peter Moody      Google    1.650.253.7306
Security Engineer  pgp:0xC3410038

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 14:46:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 14:46:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1INX-0005FB-IP; Tue, 14 Aug 2012 14:45:55 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1T1INV-0005F6-Tl
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 14:45:54 +0000
Received: from [85.158.143.99:13216] by server-3.bemta-4.messagelabs.com id
	FC/F8-09529-1A46A205; Tue, 14 Aug 2012 14:45:53 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-2.tower-216.messagelabs.com!1344955551!22826615!1
X-Originating-IP: [63.239.67.10]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30256 invoked from network); 14 Aug 2012 14:45:52 -0000
Received: from emvm-gh1-uea09.nsa.gov (HELO nsa.gov) (63.239.67.10)
	by server-2.tower-216.messagelabs.com with SMTP;
	14 Aug 2012 14:45:52 -0000
X-TM-IMSS-Message-ID: <a445bfb60008ca46@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov
	([63.239.67.10]) with ESMTP (TREND IMSS SMTP Service 7.1) id
	a445bfb60008ca46 ; Tue, 14 Aug 2012 10:46:57 -0400
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id q7EEjjkT021959; 
	Tue, 14 Aug 2012 10:45:45 -0400
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: Jan Beulich <jbeulich@suse.com>
Date: Tue, 14 Aug 2012 10:45:44 -0400
Message-Id: <1344955544-30153-1-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.2
In-Reply-To: <502A7F5C0200007800094D24@nat28.tlf.novell.com>
References: <502A7F5C0200007800094D24@nat28.tlf.novell.com>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>, Keir Fraser <keir@xen.org>,
	xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH v2] x86-64/EFI: add embedded flags to check
	compile
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Without this, the compilation of check.c can fail when compiler
defaults such as -fstack-protector are enabled, causing a missing
__stack_chk_fail symbol error.

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 xen/arch/x86/efi/Makefile | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/x86/efi/Makefile b/xen/arch/x86/efi/Makefile
index 005e3e0..b32ea66 100644
--- a/xen/arch/x86/efi/Makefile
+++ b/xen/arch/x86/efi/Makefile
@@ -5,7 +5,7 @@ obj-y += stub.o
 create = test -e $(1) || touch -t 199901010000 $(1)
 
 efi := $(filter y,$(x86_64)$(shell rm -f disabled))
-efi := $(if $(efi),$(shell $(CC) -c -Werror check.c 2>disabled && echo y))
+efi := $(if $(efi),$(shell $(CC) $(EMBEDDED_EXTRA_CFLAGS) -c -Werror check.c 2>disabled && echo y))
 efi := $(if $(efi),$(shell $(LD) -mi386pep --subsystem=10 -o check.efi check.o 2>disabled && echo y))
 efi := $(if $(efi),$(shell rm disabled)y,$(shell $(call create,boot.init.o); $(call create,runtime.o)))
 
-- 
1.7.11.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 14:47:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 14:47:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1IP2-0005LI-1Y; Tue, 14 Aug 2012 14:47:28 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T1IOz-0005Kt-U3
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 14:47:26 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1344955639!9259079!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16273 invoked from network); 14 Aug 2012 14:47:19 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-27.messagelabs.com with SMTP;
	14 Aug 2012 14:47:19 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 14 Aug 2012 15:47:18 +0100
Message-Id: <502A81130200007800094D38@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Tue, 14 Aug 2012 15:47:15 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Peter Moody" <pmoody@google.com>
References: <CALnj_=4U+6n2-KSaLgbS5OpP3mNKwPredFYjG-0dEUNQpSSwxg@mail.gmail.com>
	<1344935951.5926.9.camel@zakaz.uk.xensource.com>
	<CALnj_=6WZmXO7Stu-B79Z30=hsSQW0FOQTLdHSH88OyQG6z5nw@mail.gmail.com>
In-Reply-To: <CALnj_=6WZmXO7Stu-B79Z30=hsSQW0FOQTLdHSH88OyQG6z5nw@mail.gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] 100% reliable Oops on xen 4.0.1
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 14.08.12 at 16:42, Peter Moody <pmoody@google.com> wrote:
> Hi Ian, here's the trace in question. I'm perfectly happy with this
> not being a xen issue if for no other reason than that it means I have
> one less thing I need to look at. The python script in question was
> essentially doing the same thing as crasher.c, though in the middle of
> other, more productive activities.
> ...
> Call Trace:
>  [<ffffffff81654cd1>] ? down_read+0x11/0x30
>  [<ffffffff811c9294>] ? ext3_xattr_get+0xf4/0x2b0
>  [<ffffffff811baf88>] ext3_clear_blocks+0x128/0x190
>  [<ffffffff811bb104>] ext3_free_data+0x114/0x160
>  [<ffffffff811bbc0a>] ext3_truncate+0x87a/0x950
>  [<ffffffff812133f5>] ? journal_start+0xb5/0x100
>  [<ffffffff811bc840>] ext3_evict_inode+0x180/0x1a0
>  [<ffffffff8114065f>] evict+0x1f/0xb0
>  [<ffffffff81006d52>] ? check_events+0x12/0x20
>  [<ffffffff81140c14>] iput+0x1a4/0x290
>  [<ffffffff8113ed05>] dput+0x265/0x310
>  [<ffffffff81132435>] path_put+0x15/0x30
>  [<ffffffff810a5d31>] audit_syscall_exit+0x171/0x260
>  [<ffffffff8103ed9a>] sysexit_audit+0x21/0x5f
>  [<ffffffff810065ad>] ? xen_force_evtchn_callback+0xd/0x10
>  [<ffffffff81006d52>] ? check_events+0x12/0x20

This obviously is just a leftover on the stack; one can see clearly
that we're in the middle of a syscall, which would never have
xen_force_evtchn_callback that deep in the stack (i.e. where we
just came from user mode).
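
In x86 Linux oops output, a "?" prefix marks an address the unwinder
found on the stack but could not verify as part of the active call
chain, which is what makes those entries mere leftovers. A minimal
sketch (illustrative only, not a real debugging tool; the counting
below is made up, the frames are taken from the trace above):

```shell
# Count the unverified ("?") frames in a fragment of the oops trace;
# these are stack leftovers, not part of the live call chain.
trace=' [<ffffffff810a5d31>] audit_syscall_exit+0x171/0x260
 [<ffffffff8103ed9a>] sysexit_audit+0x21/0x5f
 [<ffffffff810065ad>] ? xen_force_evtchn_callback+0xd/0x10
 [<ffffffff81006d52>] ? check_events+0x12/0x20'
stale=$(printf '%s\n' "$trace" | grep -c '\] ? ')
echo "stale (unverified) frames: $stale"
```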

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 14:47:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 14:47:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1IPM-0005OD-F0; Tue, 14 Aug 2012 14:47:48 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T1IPL-0005Nx-L7
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 14:47:47 +0000
Received: from [85.158.138.51:32086] by server-9.bemta-3.messagelabs.com id
	F0/F5-23952-2156A205; Tue, 14 Aug 2012 14:47:46 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-174.messagelabs.com!1344955666!20201236!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkwNzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14417 invoked from network); 14 Aug 2012 14:47:46 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-6.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 14:47:46 -0000
X-IronPort-AV: E=Sophos;i="4.77,766,1336348800"; d="scan'208";a="14005689"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Aug 2012 14:47:41 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Tue, 14 Aug 2012 15:47:41 +0100
Message-ID: <1344955660.5926.91.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "lars.kurth@xen.org" <lars.kurth@xen.org>
Date: Tue, 14 Aug 2012 15:47:40 +0100
In-Reply-To: <502A3A0B.6000401@xen.org>
References: <502934CB.7060409@xen.org>
	<1344941807.5926.25.camel@zakaz.uk.xensource.com>
	<502A3A0B.6000401@xen.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.2 Limits (needs review)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-08-14 at 12:44 +0100, Lars Kurth wrote:
> On 14/08/2012 11:56, Ian Campbell wrote:
> > On Mon, 2012-08-13 at 18:09 +0100, Lars Kurth wrote:
> >> Feel free to reply to the thread or add to
> >> http://wiki.xen.org/wiki/Talk:Xen_4.2_Limits
> > Is the intention to merge this into
> > http://wiki.xen.org/wiki/Xen_Release_Features when the release happens?
> Ian, we could merge it, or we could have a line in the matrix linking
> to the release limits.
> No strong preference.

I think we should either merge or delete the limits rows from the table
in http://wiki.xen.org/wiki/Xen_Release_Features. I'd be in favour of
merging.

Ian.





_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 14:51:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 14:51:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1IT0-0005hp-3j; Tue, 14 Aug 2012 14:51:34 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1T1ISy-0005hh-MW
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 14:51:32 +0000
Received: from [85.158.143.35:15162] by server-2.bemta-4.messagelabs.com id
	3B/AA-31966-3F56A205; Tue, 14 Aug 2012 14:51:31 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-6.tower-21.messagelabs.com!1344955890!15708458!1
X-Originating-IP: [192.89.123.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjg5LjEyMy4yNSA9PiA0Nzc1OTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4683 invoked from network); 14 Aug 2012 14:51:31 -0000
Received: from smtp.tele.fi (HELO smtp.tele.fi) (192.89.123.25)
	by server-6.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 14 Aug 2012 14:51:31 -0000
X-Originating-Ip: [194.89.68.22]
Received: from ydin.reaktio.net (reaktio.net [194.89.68.22])
	by smtp.tele.fi (Postfix) with ESMTP id A3DA11F03;
	Tue, 14 Aug 2012 17:51:30 +0300 (EEST)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id 6FD152005D; Tue, 14 Aug 2012 17:51:29 +0300 (EEST)
Date: Tue, 14 Aug 2012 17:51:29 +0300
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: George Dunlap <George.Dunlap@eu.citrix.com>
Message-ID: <20120814145129.GB19851@reaktio.net>
References: <CAFLBxZaniKKKH0Snpsqs_m-bQbu7moBS4GcnT2b2M23c9PYFYw@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAFLBxZaniKKKH0Snpsqs_m-bQbu7moBS4GcnT2b2M23c9PYFYw@mail.gmail.com>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [TESTDAY] Compiling on 64-bit Ubuntu systems
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Aug 14, 2012 at 12:39:47PM +0100, George Dunlap wrote:
> It appears that we have a build dependency on 32-bit headers which
> isn't checked during ./configure, leading to a strange-looking error:
> 
> ---- snip ----
> 
> 
> ---- snip ----
> 
> Doing the following solved the problem (and Google was very helpful):
> 
> sudo apt-get install libc6-dev-i386
> 
> But it would be nice if the error message happened earlier and were a
> bit more friendly.
> 

I also added libc6-dev-i386 to the Debian package list at:
http://wiki.xen.org/wiki/Xen_4.2_RC2_test_instructions

Thanks,

-- Pasi
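
A minimal sketch of such a configure-time check (hypothetical; this is
not the actual xen build system code): try to build a trivial program
with gcc -m32 and report a clear error when the 32-bit headers are
absent.

```shell
# Hypothetical probe, in the spirit of autoconf compile tests: if
# "gcc -m32" cannot build a trivial program, the 32-bit development
# headers (libc6-dev-i386 on Debian/Ubuntu) are missing.
cat > conftest.c <<'EOF'
int main(void) { return 0; }
EOF
if gcc -m32 conftest.c -o conftest 2>/dev/null; then
    echo "checking for 32-bit libc headers... yes"
else
    echo "error: 32-bit libc headers not found (try: sudo apt-get install libc6-dev-i386)" >&2
fi
rm -f conftest conftest.c
```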


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 14:53:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 14:53:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1IUK-0005pI-OG; Tue, 14 Aug 2012 14:52:56 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T1IUJ-0005p1-7h
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 14:52:55 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1344955950!2368238!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkwNzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12341 invoked from network); 14 Aug 2012 14:52:31 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 14:52:31 -0000
X-IronPort-AV: E=Sophos;i="4.77,766,1336348800"; d="scan'208";a="14005779"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Aug 2012 14:51:52 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 14 Aug 2012 15:51:52 +0100
Date: Tue, 14 Aug 2012 15:51:23 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
In-Reply-To: <20522.22946.930177.26680@mariner.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1208141510440.21096@kaball.uk.xensource.com>
References: <40776A41FC278F40B59438AD47D147A90FE016D3@SHSMSX102.ccr.corp.intel.com>
	<40776A41FC278F40B59438AD47D147A90FE048FB@SHSMSX102.ccr.corp.intel.com>
	<20522.22946.930177.26680@mariner.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "Zhang, Yang Z" <yang.z.zhang@intel.com>, "Xu,
	Dongxiao" <dongxiao.xu@intel.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] qemu-xen-traditional: NOCACHE or CACHE_WB to open
 disk images for IDE
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 14 Aug 2012, Ian Jackson wrote:
> Xu, Dongxiao writes ("RE: [Xen-devel] qemu-xen-traditional: NOCACHE or CACHE_WB to open disk images for IDE"):
> > I attached the original patch here, which impacts the performance.
> 
> Stefano, when you proposed what is now
> 1307e42a4b3c1102d75401bc0cffb4eb6c9b7a38, you described it as a
> major performance improvement.
> 
> Would you care to comment?

QDISK needs NOCACHE, that's why I wrote the original patch series.

I thought that IDE also needs NOCACHE for safety, but after a
lengthy discussion we came to the conclusion that WRITEBACK is
OK for IDE; see your message:

http://marc.info/?l=xen-devel&m=133311527009773

So I think we should revert 1307e42a4b3c1102d75401bc0cffb4eb6c9b7a38, I
don't know why it was committed.

Also it seems to me that libxl is not specifying cache=writeback for
upstream QEMU, which means it is going to default to writethrough.
I'll write a patch for that.
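
To summarize the mapping under discussion as a sketch (the helper below
is hypothetical, not libxl code; the qemu -drive cache= values are
real): QDISK wants cache=none (O_DIRECT), IDE is fine with
cache=writeback, and leaving cache= unset lets qemu fall back to
writethrough.

```shell
# Hypothetical helper mapping a disk backend to the qemu -drive cache
# option; qemu falls back to writethrough when no cache= is passed,
# which is the default the paragraph above wants libxl to override.
cache_opt_for() {
    case "$1" in
        qdisk) echo "cache=none" ;;       # NOCACHE: O_DIRECT, bypass host page cache
        ide)   echo "cache=writeback" ;;  # safe for emulated IDE per the linked discussion
        *)     echo "" ;;                 # unspecified -> qemu defaults to writethrough
    esac
}
cache_opt_for qdisk
cache_opt_for ide
```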

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 14:58:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 14:58:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1IZP-00062i-Fy; Tue, 14 Aug 2012 14:58:11 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T1IZO-00062d-7h
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 14:58:10 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1344956276!6804952!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkwNzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10780 invoked from network); 14 Aug 2012 14:57:56 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 14:57:56 -0000
X-IronPort-AV: E=Sophos;i="4.77,766,1336348800"; d="scan'208";a="14005910"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Aug 2012 14:57:49 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Tue, 14 Aug 2012 15:57:49 +0100
Message-ID: <1344956268.5926.98.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Pasi =?ISO-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
Date: Tue, 14 Aug 2012 15:57:48 +0100
In-Reply-To: <20120814145129.GB19851@reaktio.net>
References: <CAFLBxZaniKKKH0Snpsqs_m-bQbu7moBS4GcnT2b2M23c9PYFYw@mail.gmail.com>
	<20120814145129.GB19851@reaktio.net>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [TESTDAY] Compiling on 64-bit Ubuntu systems
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

T24gVHVlLCAyMDEyLTA4LTE0IGF0IDE1OjUxICswMTAwLCBQYXNpIEvDpHJra8OkaW5lbiB3cm90
ZToKPiBPbiBUdWUsIEF1ZyAxNCwgMjAxMiBhdCAxMjozOTo0N1BNICswMTAwLCBHZW9yZ2UgRHVu
bGFwIHdyb3RlOgo+ID4gSXQgYXBwZWFycyB0aGF0IHdlIGhhdmUgYSBidWlsZCBkZXBlbmRlbmN5
IG9uIDMyLWJpdCBoZWFkZXJzIHdoaWNoCj4gPiBpc24ndCBjaGVja2VkIGR1cmluZyAuL2NvbmZp
Z3VyZSwgbGVhZGluZyB0byBzdHJhbmdlLWxvb2tpbmcgZXJyb3I6Cj4gPiAKPiA+IC0tLS0gc25p
cCAtLS0tCj4gPiAKPiA+IAo+ID4gLS0tLSBzbmlwIC0tLS0KPiA+IAo+ID4gRG9pbmcgdGhlIGZv
bGxvd2luZyBzb2x2ZWQgdGhlIHByb2JsZW0gKGFuZCBHb29nbGUgd2FzIHZlcnkgaGVscGZ1bCk6
Cj4gPiAKPiA+IHN1ZG8gYXB0LWdldCBpbnN0YWxsIGxpYmM2LWRldi1pMzg2Cj4gPiAKPiA+IEJ1
dCBpdCB3b3VsZCBiZSBuaWNlIGlmIHRoZSBlcnJvciBtZXNzYWdlIGhhcHBlbiBlYXJsaWVyIGFu
ZCBiZSBhIGJpdAo+ID4gbW9yZSBmcmllbmRseS4KPiA+IAo+IAo+IEkgYWxzbyBhZGRlZCBsaWJj
Ni1kZXYtaTM4NiB0byBkZWJpYW4gcGFja2FnZSBsaXN0IGF0Ogo+IGh0dHA6Ly93aWtpLnhlbi5v
cmcvd2lraS9YZW5fNC4yX1JDMl90ZXN0X2luc3RydWN0aW9ucwoKQ2FuIHNvbWVvbmUgc2VuZCBh
IHBhdGNoIGZvciB0aGUgZGVwZW5kZW5jeSBsaXN0IGluIHRoZSBSRUFETUUgdG9vCnBsZWFzZS4K
ClRoYW5rcywKSWFuLgoKCj4gCj4gVGhhbmtzLAo+IAo+IC0tIFBhc2kKPiAKPiAKPiBfX19fX19f
X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fXwo+IFhlbi1kZXZlbCBtYWls
aW5nIGxpc3QKPiBYZW4tZGV2ZWxAbGlzdHMueGVuLm9yZwo+IGh0dHA6Ly9saXN0cy54ZW4ub3Jn
L3hlbi1kZXZlbAoKCgpfX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fXwpYZW4tZGV2ZWwgbWFpbGluZyBsaXN0Clhlbi1kZXZlbEBsaXN0cy54ZW4ub3JnCmh0dHA6
Ly9saXN0cy54ZW4ub3JnL3hlbi1kZXZlbAo=
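[Editorial note: the configure-time probe George asks for could look roughly like the sketch below. This is hypothetical, not Xen's actual configure test; the function name is invented.]

```shell
# Probe for usable 32-bit C headers (libc6-dev-i386 on Debian/Ubuntu)
# by trying to compile a trivial program with -m32.
# Hypothetical sketch of the check being requested in this thread.
check_m32_headers() {
    cat > conftest.c <<'EOF'
#include <stdio.h>
int main(void) { return 0; }
EOF
    if cc -m32 conftest.c -o conftest 2>/dev/null; then
        rm -f conftest conftest.c
        return 0
    fi
    rm -f conftest conftest.c
    echo "error: 32-bit C headers not found;" \
         "install libc6-dev-i386 (or your distro's equivalent)" >&2
    return 1
}

# Report the result without aborting, for demonstration purposes.
check_m32_headers && echo "32-bit headers present" || echo "32-bit headers missing"
```

Run early in configure, this would turn the "strange-looking error" deep in the build into an immediate, friendly message.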

From xen-devel-bounces@lists.xen.org Tue Aug 14 14:59:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 14:59:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1Ia6-00065Y-TU; Tue, 14 Aug 2012 14:58:54 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T1Ia5-00065K-5L
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 14:58:53 +0000
Received: from [85.158.138.51:34565] by server-7.bemta-3.messagelabs.com id
	2F/97-01906-CA76A205; Tue, 14 Aug 2012 14:58:52 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-15.tower-174.messagelabs.com!1344956331!26471327!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkwNzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15419 invoked from network); 14 Aug 2012 14:58:51 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 14:58:51 -0000
X-IronPort-AV: E=Sophos;i="4.77,766,1336348800"; d="scan'208";a="14005928"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Aug 2012 14:58:51 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 14 Aug 2012 15:58:51 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1T1Ia2-00054V-Lo; Tue, 14 Aug 2012 14:58:50 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1T1Ia2-0004rx-IG;
	Tue, 14 Aug 2012 15:58:50 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20522.26538.475622.352650@mariner.uk.xensource.com>
Date: Tue, 14 Aug 2012 15:58:50 +0100
To: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
In-Reply-To: <alpine.DEB.2.02.1208141510440.21096@kaball.uk.xensource.com>
References: <40776A41FC278F40B59438AD47D147A90FE016D3@SHSMSX102.ccr.corp.intel.com>
	<40776A41FC278F40B59438AD47D147A90FE048FB@SHSMSX102.ccr.corp.intel.com>
	<20522.22946.930177.26680@mariner.uk.xensource.com>
	<alpine.DEB.2.02.1208141510440.21096@kaball.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: "Zhang, Yang Z" <yang.z.zhang@intel.com>, "Xu,
	Dongxiao" <dongxiao.xu@intel.com>, Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] qemu-xen-traditional: NOCACHE or CACHE_WB to open
 disk images for IDE
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Stefano Stabellini writes ("RE: [Xen-devel] qemu-xen-traditional: NOCACHE or CACHE_WB to open disk images for IDE"):
> I also thought that IDE also needs NOCACHE for safety but after a
> lengthy discussion, we came up with the conclusion that WRITEBACK is
> OK for IDE, see your message:
> 
> http://marc.info/?l=xen-devel&m=133311527009773
> 
> So I think we should revert 1307e42a4b3c1102d75401bc0cffb4eb6c9b7a38, I
> don't know why it was committed.

I was probably confused.  Sorry.  I have reverted it.

> Also it seems to me that libxl is not specifying cache=writeback for
> upstream QEMU, that means it is going to default to writethrough.
> I'll write a patch for that.

Right, thanks.

Ian.
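[Editorial note: for reference, the cache mode travels on QEMU's -drive option. A minimal sketch of the option strings under discussion, with a hypothetical image path: writeback for emulated IDE, none for SCSI; per the thread, upstream QEMU defaults to writethrough when the option is omitted.]

```shell
# Compose the -drive arguments being discussed (sketch only; libxl
# builds these strings itself in libxl_dm.c).
img=/var/lib/xen/images/hvm.img   # hypothetical path
ide_drive="file=${img},if=ide,index=0,media=disk,format=raw,cache=writeback"
scsi_drive="file=${img},if=scsi,bus=0,unit=0,format=raw,cache=none"
echo "-drive ${ide_drive}"
echo "-drive ${scsi_drive}"
```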

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 15:05:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 15:05:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1Igd-0006O3-QX; Tue, 14 Aug 2012 15:05:39 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T1Igb-0006Ny-Pq
	for xen-devel@lists.xensource.com; Tue, 14 Aug 2012 15:05:38 +0000
Received: from [85.158.138.51:33383] by server-10.bemta-3.messagelabs.com id
	AC/8D-20518-0496A205; Tue, 14 Aug 2012 15:05:36 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-7.tower-174.messagelabs.com!1344956735!19333622!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkwNzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31132 invoked from network); 14 Aug 2012 15:05:36 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-7.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 15:05:36 -0000
X-IronPort-AV: E=Sophos;i="4.77,766,1336348800"; d="scan'208";a="14006073"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Aug 2012 15:05:35 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 14 Aug 2012 16:05:35 +0100
Date: Tue, 14 Aug 2012 16:05:16 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: xen-devel@lists.xensource.com
Message-ID: <alpine.DEB.2.02.1208141559360.21096@kaball.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Subject: [Xen-devel] [PATCH] libxl/qemu-xen: use cache=writeback for IDE and
 cache=none for SCSI
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
index 0c0084f..1c94e80 100644
--- a/tools/libxl/libxl_dm.c
+++ b/tools/libxl/libxl_dm.c
@@ -549,10 +549,10 @@ static char ** libxl__build_device_model_args_new(libxl__gc *gc,
             if (disks[i].is_cdrom) {
                 if (disks[i].format == LIBXL_DISK_FORMAT_EMPTY)
                     drive = libxl__sprintf
-                        (gc, "if=ide,index=%d,media=cdrom", disk);
+                        (gc, "if=ide,index=%d,media=cdrom,cache=writeback", disk);
                 else
                     drive = libxl__sprintf
-                        (gc, "file=%s,if=ide,index=%d,media=cdrom,format=%s",
+                        (gc, "file=%s,if=ide,index=%d,media=cdrom,format=%s,cache=writeback",
                          disks[i].pdev_path, disk, format);
             } else {
                 if (disks[i].format == LIBXL_DISK_FORMAT_EMPTY) {
@@ -575,11 +575,11 @@ static char ** libxl__build_device_model_args_new(libxl__gc *gc,
                  */
                 if (strncmp(disks[i].vdev, "sd", 2) == 0)
                     drive = libxl__sprintf
-                        (gc, "file=%s,if=scsi,bus=0,unit=%d,format=%s",
+                        (gc, "file=%s,if=scsi,bus=0,unit=%d,format=%s,cache=none",
                          disks[i].pdev_path, disk, format);
                 else if (disk < 4)
                     drive = libxl__sprintf
-                        (gc, "file=%s,if=ide,index=%d,media=disk,format=%s",
+                        (gc, "file=%s,if=ide,index=%d,media=disk,format=%s,cache=writeback",
                          disks[i].pdev_path, disk, format);
                 else
                     continue; /* Do not emulate this disk */

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 15:06:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 15:06:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1Ih2-0006Q1-6y; Tue, 14 Aug 2012 15:06:04 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1T1Ih0-0006Pg-Jo
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 15:06:02 +0000
Received: from [85.158.139.83:37267] by server-10.bemta-5.messagelabs.com id
	3D/37-13125-9596A205; Tue, 14 Aug 2012 15:06:01 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-13.tower-182.messagelabs.com!1344956761!27557708!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkwNzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17655 invoked from network); 14 Aug 2012 15:06:01 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-13.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 15:06:01 -0000
X-IronPort-AV: E=Sophos;i="4.77,766,1336348800"; d="scan'208";a="14006090"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Aug 2012 15:06:00 +0000
Received: from dhcp-3-120.uk.xensource.com.com (10.80.3.120) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 14 Aug 2012 16:06:00 +0100
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Tue, 14 Aug 2012 16:06:55 +0100
Message-ID: <1344956815-4957-1-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
MIME-Version: 1.0
Cc: Christoph Egger <Christoph.Egger@amd.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v3] hotplug/NetBSD: check type of file to attach
	from params
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

xend used to set the xenbus backend entry "type" to either "phy" or
"file", but libxl now sets it to "phy" for both files and block
devices. We therefore have to check the type of the file named in the
"params" field ourselves in order to detect whether we are trying to
attach a regular file or a block device.

Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Signed-off-by: Christoph Egger <Christoph.Egger@amd.com>
Signed-off-by: Roger Pau Monne <roger.pau@citrix.com>
---
Changes since v2:

 * Better error messages.

 * Check if params is empty.

 * Replace xenstore_write with xenstore-write in error function.

 * Add quotation marks to xparams when testing.

Changes since v1:

 * Check that file is either a block special file or a regular file
   and report error otherwise.
---
 tools/hotplug/NetBSD/block |   13 +++++++++++--
 1 files changed, 11 insertions(+), 2 deletions(-)

diff --git a/tools/hotplug/NetBSD/block b/tools/hotplug/NetBSD/block
index cf5ff3a..5ffc334 100644
--- a/tools/hotplug/NetBSD/block
+++ b/tools/hotplug/NetBSD/block
@@ -12,15 +12,24 @@ export PATH
 
 error() {
 	echo "$@" >&2
-	xenstore_write $xpath/hotplug-status error
+	xenstore-write $xpath/hotplug-status error
 	exit 1
 }
 	
 
 xpath=$1
 xstatus=$2
-xtype=$(xenstore-read "$xpath/type")
 xparams=$(xenstore-read "$xpath/params")
+if [ -b "$xparams" ]; then
+	xtype="phy"
+elif [ -f "$xparams" ]; then
+	xtype="file"
+elif [ -z "$xparams" ]; then
+	error "No image or block device found in $xpath/params"
+else
+	error "Invalid file type for block device." \
+	      "Only block and regular image files accepted."
+fi
 
 case $xstatus in
 6)
-- 
1.7.7.5 (Apple Git-26)
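
[Editorial note: the detection logic in the patch can be exercised on its own. A sketch with the xenstore-read call replaced by a plain argument; classify_params is an invented name.]

```shell
# Classify a backend "params" value the way the patched NetBSD block
# script does: block special file -> phy, regular file -> file,
# anything else is an error. Sketch only; the real script reads
# params via xenstore-read and reports errors through xenstore-write.
classify_params() {
    xparams=$1
    if [ -b "$xparams" ]; then
        echo phy
    elif [ -f "$xparams" ]; then
        echo file
    elif [ -z "$xparams" ]; then
        echo "error: no image or block device given" >&2
        return 1
    else
        echo "error: not a block special or regular file: $xparams" >&2
        return 1
    fi
}
```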


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 15:06:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 15:06:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1Ih2-0006Q1-6y; Tue, 14 Aug 2012 15:06:04 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1T1Ih0-0006Pg-Jo
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 15:06:02 +0000
Received: from [85.158.139.83:37267] by server-10.bemta-5.messagelabs.com id
	3D/37-13125-9596A205; Tue, 14 Aug 2012 15:06:01 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-13.tower-182.messagelabs.com!1344956761!27557708!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkwNzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17655 invoked from network); 14 Aug 2012 15:06:01 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-13.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 15:06:01 -0000
X-IronPort-AV: E=Sophos;i="4.77,766,1336348800"; d="scan'208";a="14006090"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Aug 2012 15:06:00 +0000
Received: from dhcp-3-120.uk.xensource.com.com (10.80.3.120) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 14 Aug 2012 16:06:00 +0100
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Tue, 14 Aug 2012 16:06:55 +0100
Message-ID: <1344956815-4957-1-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
MIME-Version: 1.0
Cc: Christoph Egger <Christoph.Egger@amd.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v3] hotplug/NetBSD: check type of file to attach
	from params
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

xend used to set the xenbus backend entry "type" to either "phy" or
"file", but now libxl sets it to "phy" for both files and block
devices. We have to manually check the type of the "params" field in
order to detect whether we are trying to attach a file or a block
device.

Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Signed-off-by: Christoph Egger <Christoph.Egger@amd.com>
Signed-off-by: Roger Pau Monne <roger.pau@citrix.com>
---
Changes since v2:

 * Better error messages.

 * Check if params is empty.

 * Replace xenstore_write with xenstore-write in error function.

 * Add quotation marks to xparams when testing.

Changes since v1:

 * Check that file is either a block special file or a regular file
   and report error otherwise.
---
 tools/hotplug/NetBSD/block |   13 +++++++++++--
 1 files changed, 11 insertions(+), 2 deletions(-)

diff --git a/tools/hotplug/NetBSD/block b/tools/hotplug/NetBSD/block
index cf5ff3a..5ffc334 100644
--- a/tools/hotplug/NetBSD/block
+++ b/tools/hotplug/NetBSD/block
@@ -12,15 +12,24 @@ export PATH
 
 error() {
 	echo "$@" >&2
-	xenstore_write $xpath/hotplug-status error
+	xenstore-write $xpath/hotplug-status error
 	exit 1
 }
 	
 
 xpath=$1
 xstatus=$2
-xtype=$(xenstore-read "$xpath/type")
 xparams=$(xenstore-read "$xpath/params")
+if [ -b "$xparams" ]; then
+	xtype="phy"
+elif [ -f "$xparams" ]; then
+	xtype="file"
+elif [ -z "$xparams" ]; then
+	error "No image or block device found in $xpath/params"
+else
+	error "Invalid file type for block device." \
+	      "Only block and regular image files accepted."
+fi
 
 case $xstatus in
 6)
-- 
1.7.7.5 (Apple Git-26)
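[Editor's note: the detection logic in the patch can be exercised on its own. The following is a minimal standalone sketch; check_params is a hypothetical helper name introduced here for illustration and is not part of the actual hotplug script, which writes errors to xenstore instead of returning.]

```shell
#!/bin/sh
# Standalone sketch of the params type detection from
# tools/hotplug/NetBSD/block. check_params is a hypothetical helper.
check_params() {
	xparams=$1
	if [ -b "$xparams" ]; then
		# Block special file: backed by a physical device.
		echo "phy"
	elif [ -f "$xparams" ]; then
		# Regular file: a disk image.
		echo "file"
	elif [ -z "$xparams" ]; then
		echo "error: empty params" >&2
		return 1
	else
		echo "error: $xparams is neither a block device nor a regular file" >&2
		return 1
	fi
}

# Example: a regular file is classified as "file".
tmp=$(mktemp)
check_params "$tmp"        # prints "file"
rm -f "$tmp"
```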


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 15:10:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 15:10:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1Il8-0006qr-Sl; Tue, 14 Aug 2012 15:10:18 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T1Il7-0006pq-HL
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 15:10:17 +0000
Received: from [85.158.138.51:39199] by server-5.bemta-3.messagelabs.com id
	50/A4-08865-85A6A205; Tue, 14 Aug 2012 15:10:16 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-174.messagelabs.com!1344957012!28204139!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkwNzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17176 invoked from network); 14 Aug 2012 15:10:13 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-4.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 15:10:13 -0000
X-IronPort-AV: E=Sophos;i="4.77,766,1336348800"; d="scan'208";a="14006166"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Aug 2012 15:10:12 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Tue, 14 Aug 2012 16:10:12 +0100
Message-ID: <1344957010.5926.103.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Roger Pau Monne <roger.pau@citrix.com>
Date: Tue, 14 Aug 2012 16:10:10 +0100
In-Reply-To: <1344956815-4957-1-git-send-email-roger.pau@citrix.com>
References: <1344956815-4957-1-git-send-email-roger.pau@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Christoph Egger <Christoph.Egger@amd.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v3] hotplug/NetBSD: check type of file to
 attach from params
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-08-14 at 16:06 +0100, Roger Pau Monne wrote:
> xend used to set the xenbus backend entry "type" to either "phy" or
> "file", but now libxl sets it to "phy" for both files and block
> devices. We have to manually check the type of the "params" field in
> order to detect whether we are trying to attach a file or a block
> device.
> 
> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> Signed-off-by: Christoph Egger <Christoph.Egger@amd.com>
> Signed-off-by: Roger Pau Monne <roger.pau@citrix.com>
> ---
> Changes since v2:
> 
>  * Better error messages.
> 
>  * Check if params is empty.
> 
>  * Replace xenstore_write with xenstore-write in error function.
> 
>  * Add quotation marks to xparams when testing.
> 
> Changes since v1:
> 
>  * Check that file is either a block special file or a regular file
>    and report error otherwise.
> ---
>  tools/hotplug/NetBSD/block |   13 +++++++++++--
>  1 files changed, 11 insertions(+), 2 deletions(-)
> 
> diff --git a/tools/hotplug/NetBSD/block b/tools/hotplug/NetBSD/block
> index cf5ff3a..5ffc334 100644
> --- a/tools/hotplug/NetBSD/block
> +++ b/tools/hotplug/NetBSD/block
> @@ -12,15 +12,24 @@ export PATH
>  
>  error() {
>  	echo "$@" >&2
> -	xenstore_write $xpath/hotplug-status error
> +	xenstore-write $xpath/hotplug-status error
>  	exit 1
>  }
>  	
>  
>  xpath=$1
>  xstatus=$2
> -xtype=$(xenstore-read "$xpath/type")
>  xparams=$(xenstore-read "$xpath/params")
> +if [ -b "$xparams" ]; then
> +	xtype="phy"
> +elif [ -f "$xparams" ]; then
> +	xtype="file"
> +elif [ -z "$xparams" ]; then
> +	error "No image or block device found in $xpath/params"
> +else
> +	error "Invalid file type for block device." \
> +	      "Only block and regular image files accepted."

Perhaps include $xparams in here somewhere? Perhaps $xpath too?

> +fi
>  
>  case $xstatus in
>  6)



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 15:23:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 15:23:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1Ixv-00075v-Cj; Tue, 14 Aug 2012 15:23:31 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1T1Ixt-00075q-Mz
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 15:23:29 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1344957798!9200895!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkwNzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31867 invoked from network); 14 Aug 2012 15:23:18 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 15:23:18 -0000
X-IronPort-AV: E=Sophos;i="4.77,766,1336348800"; d="scan'208";a="14006451"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Aug 2012 15:22:53 +0000
Received: from dhcp-3-120.uk.xensource.com (10.80.3.120) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 14 Aug 2012 16:22:53 +0100
Message-ID: <502A6D8D.1030309@citrix.com>
Date: Tue, 14 Aug 2012 16:23:57 +0100
From: Roger Pau Monne <roger.pau@citrix.com>
User-Agent: Postbox 3.0.4 (Macintosh/20120616)
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1344956815-4957-1-git-send-email-roger.pau@citrix.com>
	<1344957010.5926.103.camel@zakaz.uk.xensource.com>
In-Reply-To: <1344957010.5926.103.camel@zakaz.uk.xensource.com>
X-Enigmail-Version: 1.2.2
Cc: Christoph Egger <Christoph.Egger@amd.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v3] hotplug/NetBSD: check type of file to
 attach from params
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell wrote:
> On Tue, 2012-08-14 at 16:06 +0100, Roger Pau Monne wrote:
>> xend used to set the xenbus backend entry "type" to either "phy" or
>> "file", but now libxl sets it to "phy" for both files and block
>> devices. We have to manually check the type of the "params" field in
>> order to detect whether we are trying to attach a file or a block
>> device.
>>
>> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
>> Signed-off-by: Christoph Egger <Christoph.Egger@amd.com>
>> Signed-off-by: Roger Pau Monne <roger.pau@citrix.com>
>> ---
>> Changes since v2:
>>
>>  * Better error messages.
>>
>>  * Check if params is empty.
>>
>>  * Replace xenstore_write with xenstore-write in error function.
>>
>>  * Add quotation marks to xparams when testing.
>>
>> Changes since v1:
>>
>>  * Check that file is either a block special file or a regular file
>>    and report error otherwise.
>> ---
>>  tools/hotplug/NetBSD/block |   13 +++++++++++--
>>  1 files changed, 11 insertions(+), 2 deletions(-)
>>
>> diff --git a/tools/hotplug/NetBSD/block b/tools/hotplug/NetBSD/block
>> index cf5ff3a..5ffc334 100644
>> --- a/tools/hotplug/NetBSD/block
>> +++ b/tools/hotplug/NetBSD/block
>> @@ -12,15 +12,24 @@ export PATH
>>  
>>  error() {
>>  	echo "$@" >&2
>> -	xenstore_write $xpath/hotplug-status error
>> +	xenstore-write $xpath/hotplug-status error
>>  	exit 1
>>  }
>>  	
>>  
>>  xpath=$1
>>  xstatus=$2
>> -xtype=$(xenstore-read "$xpath/type")
>>  xparams=$(xenstore-read "$xpath/params")
>> +if [ -b "$xparams" ]; then
>> +	xtype="phy"
>> +elif [ -f "$xparams" ]; then
>> +	xtype="file"
>> +elif [ -z "$xparams" ]; then
>> +	error "No image or block device found in $xpath/params"
>> +else
>> +	error "Invalid file type for block device." \
>> +	      "Only block and regular image files accepted."
> 
> Perhaps include $xparams in here somewhere? Perhaps $xpath too?

Thanks for the review.

I think including $xparams should be enough (since it is not null).

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 15:44:19 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 15:44:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1JHl-0007UH-9E; Tue, 14 Aug 2012 15:44:01 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T1JHj-0007UC-LH
	for xen-devel@lists.xensource.com; Tue, 14 Aug 2012 15:43:59 +0000
Received: from [85.158.139.83:19272] by server-11.bemta-5.messagelabs.com id
	60/A0-29296-E327A205; Tue, 14 Aug 2012 15:43:58 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-3.tower-182.messagelabs.com!1344959036!28095314!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkwNzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31646 invoked from network); 14 Aug 2012 15:43:56 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-3.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 15:43:56 -0000
X-IronPort-AV: E=Sophos;i="4.77,766,1336348800"; d="scan'208";a="14006820"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Aug 2012 15:43:04 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 14 Aug 2012 16:43:04 +0100
Date: Tue, 14 Aug 2012 16:42:35 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Jan Beulich <JBeulich@suse.com>
In-Reply-To: <502A71370200007800094C5F@nat28.tlf.novell.com>
Message-ID: <alpine.DEB.2.02.1208141633420.21096@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208101222370.21096@kaball.uk.xensource.com>
	<1344600612-10815-5-git-send-email-stefano.stabellini@eu.citrix.com>
	<5025276B02000078000942BE@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208131210170.21096@kaball.uk.xensource.com>
	<5029098A020000780009471C@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208131541460.21096@kaball.uk.xensource.com>
	<502A58CF0200007800094BE4@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208141350080.21096@kaball.uk.xensource.com>
	<502A71370200007800094C5F@nat28.tlf.novell.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "Tim Deegan \(3P\)" <Tim.Deegan@citrix.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH v2 5/5] xen: replace XEN_GUEST_HANDLE with
 XEN_GUEST_HANDLE_PARAM when appropriate
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 14 Aug 2012, Jan Beulich wrote:
> >>> On 14.08.12 at 14:56, Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
> > On Tue, 14 Aug 2012, Jan Beulich wrote:
> >> Perhaps we have a different understanding of embedded fields:
> >> I'm thinking of structure field having XEN_GUEST_HANDLE() type.
> >> An example would be struct mmuext_op's vcpumask field, which
> >> is being passed to vcpumask_to_pcpumask(). This must remain to
> >> be possible (and not just in x86-specific code, where it's mere luck
> >> that both are really identical).
> > 
> > Thanks for the concrete example; glancing through the common code I
> > didn't find any examples like this.
> > As I wrote in the follow up email, guest_handle_cast is just what we
> > need:
> > 
> > diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
> > index 4d72700..70ffa58 100644
> > --- a/xen/arch/x86/mm.c
> > +++ b/xen/arch/x86/mm.c
> > @@ -3198,7 +3198,9 @@ int do_mmuext_op(
> >          {
> >              cpumask_t pmask;
> >  
> > -            if ( unlikely(vcpumask_to_pcpumask(d, op.arg2.vcpumask, &pmask)) )
> > +            if ( unlikely(vcpumask_to_pcpumask(d,
> > +                            guest_handle_cast(op.arg2.vcpumask, const_void),
> 
> No, the conversion should explicitly _not_ require specification
> of the type, i.e. this should not be a true cast. Type safety
> (checked by the compiler) can only be achieved if no intermediate
> cast is involved.

guest_handle_cast is implemented as:

#define guest_handle_cast(hnd, type) ({         \
    type *_x = (hnd).p;                         \
    (XEN_GUEST_HANDLE_PARAM(type)) { _x };      \
})

As you can see, there is actually no explicit cast involved.
If you specify the wrong type, the compiler will fail at:

type *_x = (hnd).p;

I think that having to specify the type as a parameter is acceptable if
it makes for simpler code overall.

The alternative would be something like the following:

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 4d72700..e6685c7 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -3197,8 +3197,11 @@ int do_mmuext_op(
         case MMUEXT_INVLPG_MULTI:
         {
             cpumask_t pmask;
+            XEN_GUEST_HANDLE_PARAM(const_void) param;
 
-            if ( unlikely(vcpumask_to_pcpumask(d, op.arg2.vcpumask, &pmask)) )
+            set_xen_guest_handle(param, op.arg2.vcpumask.p);
+
+            if ( unlikely(vcpumask_to_pcpumask(d, param, &pmask)) )
             {
                 okay = 0;
                 break;

but I think it makes the code worse.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 15:44:19 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 15:44:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1JHl-0007UH-9E; Tue, 14 Aug 2012 15:44:01 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T1JHj-0007UC-LH
	for xen-devel@lists.xensource.com; Tue, 14 Aug 2012 15:43:59 +0000
Received: from [85.158.139.83:19272] by server-11.bemta-5.messagelabs.com id
	60/A0-29296-E327A205; Tue, 14 Aug 2012 15:43:58 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-3.tower-182.messagelabs.com!1344959036!28095314!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkwNzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31646 invoked from network); 14 Aug 2012 15:43:56 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-3.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 15:43:56 -0000
X-IronPort-AV: E=Sophos;i="4.77,766,1336348800"; d="scan'208";a="14006820"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Aug 2012 15:43:04 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 14 Aug 2012 16:43:04 +0100
Date: Tue, 14 Aug 2012 16:42:35 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Jan Beulich <JBeulich@suse.com>
In-Reply-To: <502A71370200007800094C5F@nat28.tlf.novell.com>
Message-ID: <alpine.DEB.2.02.1208141633420.21096@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208101222370.21096@kaball.uk.xensource.com>
	<1344600612-10815-5-git-send-email-stefano.stabellini@eu.citrix.com>
	<5025276B02000078000942BE@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208131210170.21096@kaball.uk.xensource.com>
	<5029098A020000780009471C@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208131541460.21096@kaball.uk.xensource.com>
	<502A58CF0200007800094BE4@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208141350080.21096@kaball.uk.xensource.com>
	<502A71370200007800094C5F@nat28.tlf.novell.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "Tim Deegan \(3P\)" <Tim.Deegan@citrix.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH v2 5/5] xen: replace XEN_GUEST_HANDLE with
 XEN_GUEST_HANDLE_PARAM when appropriate
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 14 Aug 2012, Jan Beulich wrote:
> >>> On 14.08.12 at 14:56, Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
> > On Tue, 14 Aug 2012, Jan Beulich wrote:
> >> Perhaps we have a different understanding of embedded fields:
> >> I'm thinking of a structure field having XEN_GUEST_HANDLE() type.
> >> An example would be struct mmuext_op's vcpumask field, which
> >> is being passed to vcpumask_to_pcpumask(). This must remain
> >> possible (and not just in x86-specific code, where it's mere luck
> >> that the two handle types are really identical).
> > 
> > Thanks for the concrete example; glancing through the common code I
> > didn't find any examples like this.
> > As I wrote in the follow up email, guest_handle_cast is just what we
> > need:
> > 
> > diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
> > index 4d72700..70ffa58 100644
> > --- a/xen/arch/x86/mm.c
> > +++ b/xen/arch/x86/mm.c
> > @@ -3198,7 +3198,9 @@ int do_mmuext_op(
> >          {
> >              cpumask_t pmask;
> >  
> > -            if ( unlikely(vcpumask_to_pcpumask(d, op.arg2.vcpumask, &pmask)) )
> > +            if ( unlikely(vcpumask_to_pcpumask(d,
> > +                            guest_handle_cast(op.arg2.vcpumask, const_void),
> 
> No, the conversion should explicitly _not_ require specification
> of the type, i.e. this should not be a true cast. Type safety
> (checked by the compiler) can only be achieved if no intermediate
> cast is involved.

guest_handle_cast is implemented as:

#define guest_handle_cast(hnd, type) ({         \
    type *_x = (hnd).p;                         \
    (XEN_GUEST_HANDLE_PARAM(type)) { _x };      \
})

As you can see, there is actually no explicit cast involved.
If you specify the wrong type, the compiler fails at:

type *_x = (hnd).p;

I think that having to specify the type as a parameter is acceptable if
it makes for simpler code overall.

The alternative would be something like the following:

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 4d72700..e6685c7 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -3197,8 +3197,11 @@ int do_mmuext_op(
         case MMUEXT_INVLPG_MULTI:
         {
             cpumask_t pmask;
+            XEN_GUEST_HANDLE_PARAM(const_void) param;
 
-            if ( unlikely(vcpumask_to_pcpumask(d, op.arg2.vcpumask, &pmask)) )
+            set_xen_guest_handle(param, op.arg2.vcpumask.p);
+
+            if ( unlikely(vcpumask_to_pcpumask(d, param, &pmask)) )
             {
                 okay = 0;
                 break;

but I think it makes the code worse.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 15:56:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 15:56:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1JTb-0007g3-Lo; Tue, 14 Aug 2012 15:56:15 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pmoody@google.com>) id 1T1JTa-0007fy-4I
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 15:56:14 +0000
Received: from [85.158.138.51:64465] by server-10.bemta-3.messagelabs.com id
	40/7C-20518-D157A205; Tue, 14 Aug 2012 15:56:13 +0000
X-Env-Sender: pmoody@google.com
X-Msg-Ref: server-10.tower-174.messagelabs.com!1344959772!24203242!1
X-Originating-IP: [74.125.83.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24031 invoked from network); 14 Aug 2012 15:56:12 -0000
Received: from mail-ee0-f45.google.com (HELO mail-ee0-f45.google.com)
	(74.125.83.45)
	by server-10.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 15:56:12 -0000
Received: by eeke53 with SMTP id e53so216268eek.32
	for <xen-devel@lists.xen.org>; Tue, 14 Aug 2012 08:56:12 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type:x-system-of-record;
	bh=XM5Cd6Su1qvbOORWpqAilQdjcCibZOvA+RSsqnPuYdo=;
	b=YLNSS7Y38TNVcqJ4kPrT1uJkS3uyFv7fDzvPq/FppN7CyQQ3p3XdpWUiRilTmW2jas
	va8hoZrmiMKgax1Zq4sRGzFOqATVEipAjgDysfQgnuLWqs9mrl+B1yELMMckicYxn3gg
	ImtTVsk7VIQXNizTIiwFaEwc4vVOLgvUPv6xGJWFLN9y2+RzURrUV9s2rg7b/+RWg8VU
	FoB1VRXB1ETvX2h/HhqQPmLUDSI7SqQZ64dFgfvShbrvvPUS+3u1VevBwxgOM6JjVk/I
	9UmyKn6VUjaZrM41fM7mY78NMWwpNotQs8cksz5YsVY3ZqQVabIAQhq/+pWhDkIwL31/
	u7UQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type:x-system-of-record:x-gm-message-state;
	bh=XM5Cd6Su1qvbOORWpqAilQdjcCibZOvA+RSsqnPuYdo=;
	b=I0O7SYv9rRuEgcS1g2wWPZ5skWtwGJILQ+xHMd99g6M+jrMTWbJbawySTv8zadr+Nu
	zBqOSwfAhHEzybS4vsFlNTN0QWgBR7nn1AqGrfv1mRReAeW/51tiuPp/i4aTv4/8q4LJ
	bvtHH4ekSdhSZzgqDan611yRFE1GSwX7iDCtPiQlC4qJyqUua+ayCSZqr65/jdccMZ9Q
	S7kcZs2/EmCa5ZM9CvH0Hw1W2lqZSxozV1IQvBr4Znfg7JyEm3VxDwjNH6kxAaVY3Z7C
	SNW9FNJEWBAsNVOaVB0ps1VYwH2S18HkvCHQP2A3YpBCHwzcJ39stuYxXKfttn8uBnSn
	Pabw==
Received: by 10.14.224.193 with SMTP id x41mr18906292eep.46.1344959772158;
	Tue, 14 Aug 2012 08:56:12 -0700 (PDT)
Received: by 10.14.224.193 with SMTP id x41mr18906283eep.46.1344959772034;
	Tue, 14 Aug 2012 08:56:12 -0700 (PDT)
MIME-Version: 1.0
Received: by 10.14.22.3 with HTTP; Tue, 14 Aug 2012 08:55:41 -0700 (PDT)
In-Reply-To: <502A81130200007800094D38@nat28.tlf.novell.com>
References: <CALnj_=4U+6n2-KSaLgbS5OpP3mNKwPredFYjG-0dEUNQpSSwxg@mail.gmail.com>
	<1344935951.5926.9.camel@zakaz.uk.xensource.com>
	<CALnj_=6WZmXO7Stu-B79Z30=hsSQW0FOQTLdHSH88OyQG6z5nw@mail.gmail.com>
	<502A81130200007800094D38@nat28.tlf.novell.com>
From: Peter Moody <pmoody@google.com>
Date: Tue, 14 Aug 2012 08:55:41 -0700
Message-ID: <CALnj_=4cSr+Bz=o3jajHq2z8qMyKerYPwDOen-ivBNnHBLGq0Q@mail.gmail.com>
To: Jan Beulich <JBeulich@suse.com>
X-System-Of-Record: true
X-Gm-Message-State: ALoCoQlQF6NIk0CKv+TaVBFehgGuPH7W4PyD8j0/lPdymX4F4nNcjw2GcaX21dR/pSRd4Lu44l60KJCa/e+fSgB7rPJ6VFHjv/7CRH2rm3C5yLCEiXGMVfZR0dFXOdvLEVJvmWAQD/AjZ59icYmLelTBBBM/khQke6+E5DlPASrT+A4OkYKrN+0mQwlmIU5cXT1Hz0++TeiD
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] 100% reliable Oops on xen 4.0.1
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Aug 14, 2012 at 7:47 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>>> On 14.08.12 at 16:42, Peter Moody <pmoody@google.com> wrote:
>> Hi Ian, here's the trace in question. I'm perfectly happy with this
>> not being a xen issue if for no other reason than it means that I have
>> one less thing I need to look at. The python script in question was
>> essentially doing the same thing as crasher.c, though in the middle of
>> other, more productive activities.
>> ...
>> Call Trace:
>>  [<ffffffff81654cd1>] ? down_read+0x11/0x30
>>  [<ffffffff811c9294>] ? ext3_xattr_get+0xf4/0x2b0
>>  [<ffffffff811baf88>] ext3_clear_blocks+0x128/0x190
>>  [<ffffffff811bb104>] ext3_free_data+0x114/0x160
>>  [<ffffffff811bbc0a>] ext3_truncate+0x87a/0x950
>>  [<ffffffff812133f5>] ? journal_start+0xb5/0x100
>>  [<ffffffff811bc840>] ext3_evict_inode+0x180/0x1a0
>>  [<ffffffff8114065f>] evict+0x1f/0xb0
>>  [<ffffffff81006d52>] ? check_events+0x12/0x20
>>  [<ffffffff81140c14>] iput+0x1a4/0x290
>>  [<ffffffff8113ed05>] dput+0x265/0x310
>>  [<ffffffff81132435>] path_put+0x15/0x30
>>  [<ffffffff810a5d31>] audit_syscall_exit+0x171/0x260
>>  [<ffffffff8103ed9a>] sysexit_audit+0x21/0x5f
>>  [<ffffffff810065ad>] ? xen_force_evtchn_callback+0xd/0x10
>>  [<ffffffff81006d52>] ? check_events+0x12/0x20
>
> This obviously is just a leftover on the stack; one can see clearly
> that we're in the middle of a syscall, which would never have
> xen_force_evtchn_callback that deep in the stack (i.e. where
> we just came from user mode).

Interesting, thanks. Do you have any idea why something like this
would only be reproducible (thus far anyway, still trying to get my
hands on some other test systems) on xen? And not just xen, but on
this particular xen configuration (huge memory, lots of cpus, etc)? Is
this likely a race condition with the audit subsystem or some other
part of the kernel that this configuration somehow tickles?

Cheers,
peter

> Jan
>



-- 
Peter Moody      Google    1.650.253.7306
Security Engineer  pgp:0xC3410038

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 16:09:34 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 16:09:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1Jg8-0008Ro-VQ; Tue, 14 Aug 2012 16:09:12 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T1Jg7-0008Rj-5C
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 16:09:11 +0000
Received: from [85.158.138.51:34739] by server-3.bemta-3.messagelabs.com id
	94/A8-13809-6287A205; Tue, 14 Aug 2012 16:09:10 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-15.tower-174.messagelabs.com!1344960549!26483929!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18982 invoked from network); 14 Aug 2012 16:09:09 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-15.tower-174.messagelabs.com with SMTP;
	14 Aug 2012 16:09:09 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 14 Aug 2012 17:09:08 +0100
Message-Id: <502A946A0200007800094DE6@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Tue, 14 Aug 2012 17:09:46 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Peter Moody" <pmoody@google.com>
References: <CALnj_=4U+6n2-KSaLgbS5OpP3mNKwPredFYjG-0dEUNQpSSwxg@mail.gmail.com>
	<1344935951.5926.9.camel@zakaz.uk.xensource.com>
	<CALnj_=6WZmXO7Stu-B79Z30=hsSQW0FOQTLdHSH88OyQG6z5nw@mail.gmail.com>
	<502A81130200007800094D38@nat28.tlf.novell.com>
	<CALnj_=4cSr+Bz=o3jajHq2z8qMyKerYPwDOen-ivBNnHBLGq0Q@mail.gmail.com>
In-Reply-To: <CALnj_=4cSr+Bz=o3jajHq2z8qMyKerYPwDOen-ivBNnHBLGq0Q@mail.gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] 100% reliable Oops on xen 4.0.1
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 14.08.12 at 17:55, Peter Moody <pmoody@google.com> wrote:
> On Tue, Aug 14, 2012 at 7:47 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>>>> On 14.08.12 at 16:42, Peter Moody <pmoody@google.com> wrote:
>>> Hi Ian, here's the trace in question. I'm perfectly happy with this
>>> not being a xen issue if for no other reason than it means that I have
>>> one less thing I need to look at. The python script in question was
>>> essentially doing the same thing as crasher.c, though in the middle of
>>> other, more productive activities.
>>> ...
>>> Call Trace:
>>>  [<ffffffff81654cd1>] ? down_read+0x11/0x30
>>>  [<ffffffff811c9294>] ? ext3_xattr_get+0xf4/0x2b0
>>>  [<ffffffff811baf88>] ext3_clear_blocks+0x128/0x190
>>>  [<ffffffff811bb104>] ext3_free_data+0x114/0x160
>>>  [<ffffffff811bbc0a>] ext3_truncate+0x87a/0x950
>>>  [<ffffffff812133f5>] ? journal_start+0xb5/0x100
>>>  [<ffffffff811bc840>] ext3_evict_inode+0x180/0x1a0
>>>  [<ffffffff8114065f>] evict+0x1f/0xb0
>>>  [<ffffffff81006d52>] ? check_events+0x12/0x20
>>>  [<ffffffff81140c14>] iput+0x1a4/0x290
>>>  [<ffffffff8113ed05>] dput+0x265/0x310
>>>  [<ffffffff81132435>] path_put+0x15/0x30
>>>  [<ffffffff810a5d31>] audit_syscall_exit+0x171/0x260
>>>  [<ffffffff8103ed9a>] sysexit_audit+0x21/0x5f
>>>  [<ffffffff810065ad>] ? xen_force_evtchn_callback+0xd/0x10
>>>  [<ffffffff81006d52>] ? check_events+0x12/0x20
>>
>> This obviously is just a leftover on the stack; one can see clearly
>> that we're in the middle of a syscall, which would never have
>> xen_force_evtchn_callback that deep in the stack (i.e. where
>> we just came from user mode).
> 
> Interesting, thanks. Do you have any idea why something like this
> would only be reproducible (thus far anyway, still trying to get my
> hands on some other test systems) on xen? And not just xen, but on
> this particular xen configuration (huge memory, lots of cpus, etc)? Is
> this likely a race condition with the audit subsystem or some other
> part of the kernel that this configuration somehow tickles?

From the above, as well as from your indication that the
traces are highly variable between instances, I'd suppose
this is memory corruption of some sort, which can easily be
hidden by all sorts of factors.

Until you can find a pattern, I don't think much can be done
by anyone who doesn't have an affected system available for
debugging.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 16:13:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 16:13:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1JkI-00007c-L8; Tue, 14 Aug 2012 16:13:30 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T1JkG-00007W-O1
	for xen-devel@lists.xensource.com; Tue, 14 Aug 2012 16:13:28 +0000
Received: from [85.158.138.51:60768] by server-5.bemta-3.messagelabs.com id
	69/2C-08865-7297A205; Tue, 14 Aug 2012 16:13:27 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-13.tower-174.messagelabs.com!1344960805!9561843!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20560 invoked from network); 14 Aug 2012 16:13:25 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-13.tower-174.messagelabs.com with SMTP;
	14 Aug 2012 16:13:25 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 14 Aug 2012 17:13:25 +0100
Message-Id: <502A956C0200007800094DFE@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Tue, 14 Aug 2012 17:14:04 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Stefano Stabellini" <stefano.stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1208101222370.21096@kaball.uk.xensource.com>
	<1344600612-10815-5-git-send-email-stefano.stabellini@eu.citrix.com>
	<5025276B02000078000942BE@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208131210170.21096@kaball.uk.xensource.com>
	<5029098A020000780009471C@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208131541460.21096@kaball.uk.xensource.com>
	<502A58CF0200007800094BE4@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208141350080.21096@kaball.uk.xensource.com>
	<502A71370200007800094C5F@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208141633420.21096@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1208141633420.21096@kaball.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	"Tim Deegan \(3P\)" <Tim.Deegan@citrix.com>
Subject: Re: [Xen-devel] [PATCH v2 5/5] xen: replace XEN_GUEST_HANDLE with
 XEN_GUEST_HANDLE_PARAM when appropriate
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 14.08.12 at 17:42, Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
> On Tue, 14 Aug 2012, Jan Beulich wrote:
>> >>> On 14.08.12 at 14:56, Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
>> > On Tue, 14 Aug 2012, Jan Beulich wrote:
>> >> Perhaps we have a different understanding of embedded fields:
>> >> I'm thinking of a structure field having XEN_GUEST_HANDLE() type.
>> >> An example would be struct mmuext_op's vcpumask field, which
>> >> is being passed to vcpumask_to_pcpumask(). This must remain
>> >> possible (and not just in x86-specific code, where it's mere luck
>> >> that the two handle types are really identical).
>> > 
>> > Thanks for the concrete example; glancing through the common code I
>> > didn't find any examples like this.
>> > As I wrote in the follow up email, guest_handle_cast is just what we
>> > need:
>> > 
>> > diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
>> > index 4d72700..70ffa58 100644
>> > --- a/xen/arch/x86/mm.c
>> > +++ b/xen/arch/x86/mm.c
>> > @@ -3198,7 +3198,9 @@ int do_mmuext_op(
>> >          {
>> >              cpumask_t pmask;
>> >  
>> > -            if ( unlikely(vcpumask_to_pcpumask(d, op.arg2.vcpumask, &pmask)) )
>> > +            if ( unlikely(vcpumask_to_pcpumask(d,
>> > +                            guest_handle_cast(op.arg2.vcpumask, const_void),
>> 
>> No, the conversion should explicitly _not_ require specification
>> of the type, i.e. this should not be a true cast. Type safety
>> (checked by the compiler) can only be achieved if no intermediate
>> cast is involved.
> 
> guest_handle_cast is implemented as:
> 
> #define guest_handle_cast(hnd, type) ({         \
>     type *_x = (hnd).p;                         \
>     (XEN_GUEST_HANDLE_PARAM(type)) { _x };      \
> })
> 
> as you can see there is actually no explicit cast involved.
> If you specify the wrong type the compiler would fail at:
> 
> type *_x = (hnd).p;

Except if, as in your example, "type" is (a qualified version of)
"void"...

> I think that having to specify the type as parameter is acceptable if it
> makes up for simpler code over all.
> 
> The alternative would be something like the following:
> 
> diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
> index 4d72700..e6685c7 100644
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -3197,8 +3197,11 @@ int do_mmuext_op(
>          case MMUEXT_INVLPG_MULTI:
>          {
>              cpumask_t pmask;
> +            XEN_GUEST_HANDLE_PARAM(const_void) param;
>  
> -            if ( unlikely(vcpumask_to_pcpumask(d, op.arg2.vcpumask, &pmask)) )
> +            set_xen_guest_handle(param, op.arg2.vcpumask.p);
> +
> +            if ( unlikely(vcpumask_to_pcpumask(d, param, &pmask)) )

No, I would expect this to be possible without an intermediate
variable (i.e. similar to the guest_handle_cast() approach, just
without specifying the type). And in no case should there be
an open coded access to the "p" member.

>              {
>                  okay = 0;
>                  break;
> 
> but I think it makes the code worse.

Agreed, this variant definitely looked worse.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 16:13:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 16:13:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1JkI-00007c-L8; Tue, 14 Aug 2012 16:13:30 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T1JkG-00007W-O1
	for xen-devel@lists.xensource.com; Tue, 14 Aug 2012 16:13:28 +0000
Received: from [85.158.138.51:60768] by server-5.bemta-3.messagelabs.com id
	69/2C-08865-7297A205; Tue, 14 Aug 2012 16:13:27 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-13.tower-174.messagelabs.com!1344960805!9561843!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20560 invoked from network); 14 Aug 2012 16:13:25 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-13.tower-174.messagelabs.com with SMTP;
	14 Aug 2012 16:13:25 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 14 Aug 2012 17:13:25 +0100
Message-Id: <502A956C0200007800094DFE@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Tue, 14 Aug 2012 17:14:04 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Stefano Stabellini" <stefano.stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1208101222370.21096@kaball.uk.xensource.com>
	<1344600612-10815-5-git-send-email-stefano.stabellini@eu.citrix.com>
	<5025276B02000078000942BE@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208131210170.21096@kaball.uk.xensource.com>
	<5029098A020000780009471C@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208131541460.21096@kaball.uk.xensource.com>
	<502A58CF0200007800094BE4@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208141350080.21096@kaball.uk.xensource.com>
	<502A71370200007800094C5F@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208141633420.21096@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1208141633420.21096@kaball.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	"Tim Deegan \(3P\)" <Tim.Deegan@citrix.com>
Subject: Re: [Xen-devel] [PATCH v2 5/5] xen: replace XEN_GUEST_HANDLE with
 XEN_GUEST_HANDLE_PARAM when appropriate
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 14.08.12 at 17:42, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
wrote:
> On Tue, 14 Aug 2012, Jan Beulich wrote:
>> >>> On 14.08.12 at 14:56, Stefano Stabellini <stefano.stabellini@eu.citrix.com> 
> wrote:
>> > On Tue, 14 Aug 2012, Jan Beulich wrote:
>> >> Perhaps we have a different understanding of embedded fields:
>> >> I'm thinking of a structure field having XEN_GUEST_HANDLE() type.
>> >> An example would be struct mmuext_op's vcpumask field, which
>> >> is being passed to vcpumask_to_pcpumask(). This must remain
>> >> possible (and not just in x86-specific code, where it's mere luck
>> >> that both are really identical).
>> > 
>> > Thanks for the concrete example; glancing through the common code I
>> > didn't find any examples like this.
>> > As I wrote in the follow up email, guest_handle_cast is just what we
>> > need:
>> > 
>> > diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
>> > index 4d72700..70ffa58 100644
>> > --- a/xen/arch/x86/mm.c
>> > +++ b/xen/arch/x86/mm.c
>> > @@ -3198,7 +3198,9 @@ int do_mmuext_op(
>> >          {
>> >              cpumask_t pmask;
>> >  
>> > -            if ( unlikely(vcpumask_to_pcpumask(d, op.arg2.vcpumask, &pmask)) 
> )
>> > +            if ( unlikely(vcpumask_to_pcpumask(d,
>> > +                            guest_handle_cast(op.arg2.vcpumask, 
> const_void),
>> 
>> No, the conversion should explicitly _not_ require specification
>> of the type, i.e. this should not be a true cast. Type safety
>> (checked by the compiler) can only be achieved if no intermediate
>> cast is involved.
> 
> guest_handle_cast is implemented as:
> 
> #define guest_handle_cast(hnd, type) ({         \
>     type *_x = (hnd).p;                         \
>     (XEN_GUEST_HANDLE_PARAM(type)) { _x };      \
> })
> 
> As you can see, there is actually no explicit cast involved.
> If you specify the wrong type, the compiler will fail at:
> 
> type *_x = (hnd).p;

Except if, as in your example, "type" is (a qualified version of)
"void"...

> I think that having to specify the type as a parameter is acceptable if it
> makes for simpler code overall.
> 
> The alternative would be something like the following:
> 
> diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
> index 4d72700..e6685c7 100644
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -3197,8 +3197,11 @@ int do_mmuext_op(
>          case MMUEXT_INVLPG_MULTI:
>          {
>              cpumask_t pmask;
> +            XEN_GUEST_HANDLE_PARAM(const_void) param;
>  
> -            if ( unlikely(vcpumask_to_pcpumask(d, op.arg2.vcpumask, &pmask)) )
> +            set_xen_guest_handle(param, op.arg2.vcpumask.p);
> +
> +            if ( unlikely(vcpumask_to_pcpumask(d, param, &pmask)) )

No, I would expect this to be possible without an intermediate
variable (i.e. similar to the guest_handle_cast() approach, just
without specifying the type). And in no case should there be
an open-coded access to the "p" member.

>              {
>                  okay = 0;
>                  break;
> 
> but I think it makes the code worse.

Agreed, this variant definitely looked worse.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 16:16:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 16:16:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1JnJ-0000G4-7t; Tue, 14 Aug 2012 16:16:37 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pmoody@google.com>) id 1T1JnI-0000Fw-Bi
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 16:16:36 +0000
Received: from [85.158.139.83:64957] by server-10.bemta-5.messagelabs.com id
	69/8D-13125-3E97A205; Tue, 14 Aug 2012 16:16:35 +0000
X-Env-Sender: pmoody@google.com
X-Msg-Ref: server-3.tower-182.messagelabs.com!1344960994!28101628!1
X-Originating-IP: [209.85.215.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1710 invoked from network); 14 Aug 2012 16:16:35 -0000
Received: from mail-ey0-f173.google.com (HELO mail-ey0-f173.google.com)
	(209.85.215.173)
	by server-3.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 16:16:35 -0000
Received: by eaac13 with SMTP id c13so223392eaa.32
	for <xen-devel@lists.xen.org>; Tue, 14 Aug 2012 09:16:34 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type:x-system-of-record;
	bh=5/+vyV/LMNZL73scYMBehnjYdqij08Jsw9mBw09npOs=;
	b=ZRlX5cLJOs9nql0tlGRxBfINQrrSu0ENqQRsdbvRqM6DTlRwmUK6Irf6Gq/wZUWk9m
	PCA3GM7zuSiAkloVkJkfFVSkr7i736UqGkTEg5jzTcJ9NHAAyFPzFj7Emy/zBFGjmMyy
	0uuPGWfgviLz/BAfnKegHk+VXYWH4g5jleljzw8n1JKDuulmxdkwOm6CWIjilU8Hvhqi
	yx0g9ZbwbrNBsaKtpK17PqSuSgR7XONG7eqTQVL2+UQ9/gR0AjNw2vGwrqpvIL1y2QWe
	tsToll2yCiPAm482qw3f2JMWT+peXsV8MxOYb9W6jM88djfWCjF6lZDJuff7AQxuzxQJ
	9epw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type:x-system-of-record:x-gm-message-state;
	bh=5/+vyV/LMNZL73scYMBehnjYdqij08Jsw9mBw09npOs=;
	b=A4ui95HX5px1/2jpoaIzYP70lF7xnTJBvWrl5EKbmIe7bjetmqmwoRoX7ID4eOxLCX
	ejCB0vb9EPvrg3gLrY/+fa1bfn2KoK30PT8tYkumWEUuFBsRBpFPqT8zeGkmk6ioeR24
	B7ONzHGLYnQLfBDMLgGcY7JJbtAbxSetzNcFoxnug59pVd9L6TG8qv7479wZTB6x1OF6
	jNoyzynouZwqnQ9o5piD/AD9jtvEVOyQ0iXeQ6LMz+8TzzXeYvFXm461Twak75VWP1JG
	Wr/rYc8d0jKoMl1bRriNCv3gaUQn8dTIDlnrUCwESBKtpJSXEWm7SedqSPjpLM6COMEs
	Wu/Q==
Received: by 10.14.206.200 with SMTP id l48mr20143682eeo.41.1344960994833;
	Tue, 14 Aug 2012 09:16:34 -0700 (PDT)
Received: by 10.14.206.200 with SMTP id l48mr20143667eeo.41.1344960994700;
	Tue, 14 Aug 2012 09:16:34 -0700 (PDT)
MIME-Version: 1.0
Received: by 10.14.22.3 with HTTP; Tue, 14 Aug 2012 09:16:04 -0700 (PDT)
In-Reply-To: <502A946A0200007800094DE6@nat28.tlf.novell.com>
References: <CALnj_=4U+6n2-KSaLgbS5OpP3mNKwPredFYjG-0dEUNQpSSwxg@mail.gmail.com>
	<1344935951.5926.9.camel@zakaz.uk.xensource.com>
	<CALnj_=6WZmXO7Stu-B79Z30=hsSQW0FOQTLdHSH88OyQG6z5nw@mail.gmail.com>
	<502A81130200007800094D38@nat28.tlf.novell.com>
	<CALnj_=4cSr+Bz=o3jajHq2z8qMyKerYPwDOen-ivBNnHBLGq0Q@mail.gmail.com>
	<502A946A0200007800094DE6@nat28.tlf.novell.com>
From: Peter Moody <pmoody@google.com>
Date: Tue, 14 Aug 2012 09:16:04 -0700
Message-ID: <CALnj_=4V4veVndiVY5A085u0Bh0dzHo48zDf6gm24nFPQnGY+A@mail.gmail.com>
To: Jan Beulich <JBeulich@suse.com>
X-System-Of-Record: true
X-Gm-Message-State: ALoCoQnPjOhbwT2HQNDxLghHRZLHHKGyYeUA7hngH5SewaoVshirImeM9NotzhEHY3rkvQZ/zavs9UOqv4EHkU6zTxSJkRjyDIdFbiPtHeeOP8gmJfYxVBFWy5I2dBPldmQ8F0Q0pWQregzT8ssx0qk9rka6sR97EfW+fqWtOXuIlpvjLmPyBgcfYkVScMtk6dVGUx/G9Ack
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] 100% reliable Oops on xen 4.0.1
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Aug 14, 2012 at 9:09 AM, Jan Beulich <JBeulich@suse.com> wrote:

> From the above, as well as your indication that the
> traces are highly variable between instances, I'd suppose
> this is memory corruption of some sort, which can easily be
> hidden by all sorts of factors.
>
> Until you can find a pattern, I don't think much can be done
> by anyone not having an affected system available for
> debugging.

So I have such a system :).

Are there any pointers or tips you can give me to help me track down
the root cause? I realize that's a broad question, and a perfectly
justifiable answer is "read the memory management chapter of
Understanding Linux Device Drivers" but at this point basically any
advice you can give me is appreciated (and will most likely get me
closer to the solution).

Cheers,
peter

> Jan
>



-- 
Peter Moody      Google    1.650.253.7306
Security Engineer  pgp:0xC3410038

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 16:27:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 16:27:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1JxA-0000S9-BW; Tue, 14 Aug 2012 16:26:48 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T1Jx9-0000S4-9S
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 16:26:47 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1344961566!9232916!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12913 invoked from network); 14 Aug 2012 16:26:06 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-27.messagelabs.com with SMTP;
	14 Aug 2012 16:26:06 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 14 Aug 2012 17:26:05 +0100
Message-Id: <502A98640200007800094E1B@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Tue, 14 Aug 2012 17:26:44 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Peter Moody" <pmoody@google.com>
References: <CALnj_=4U+6n2-KSaLgbS5OpP3mNKwPredFYjG-0dEUNQpSSwxg@mail.gmail.com>
	<1344935951.5926.9.camel@zakaz.uk.xensource.com>
	<CALnj_=6WZmXO7Stu-B79Z30=hsSQW0FOQTLdHSH88OyQG6z5nw@mail.gmail.com>
	<502A81130200007800094D38@nat28.tlf.novell.com>
	<CALnj_=4cSr+Bz=o3jajHq2z8qMyKerYPwDOen-ivBNnHBLGq0Q@mail.gmail.com>
	<502A946A0200007800094DE6@nat28.tlf.novell.com>
	<CALnj_=4V4veVndiVY5A085u0Bh0dzHo48zDf6gm24nFPQnGY+A@mail.gmail.com>
In-Reply-To: <CALnj_=4V4veVndiVY5A085u0Bh0dzHo48zDf6gm24nFPQnGY+A@mail.gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] 100% reliable Oops on xen 4.0.1
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 14.08.12 at 18:16, Peter Moody <pmoody@google.com> wrote:
> On Tue, Aug 14, 2012 at 9:09 AM, Jan Beulich <JBeulich@suse.com> wrote:
> 
>> From the above, as well as your indication that the
>> traces are highly variable between instances, I'd suppose
>> this is memory corruption of some sort, which can easily be
>> hidden by all sorts of factors.
>>
>> Until you can find a pattern, I don't think much can be done
>> by anyone not having an affected system available for
>> debugging.
> 
> So I have such a system :).

That's what I implied.

> Are there any pointers or tips you can give me to help me track down
> the root cause? I realize that's a broad question, and a perfectly
> justifiable answer is "read the memory management chapter of
> Understanding Linux Device Drivers" but at this point basically any
> advice you can give me is appreciated (and will most likely get me
> closer to the solution).

As I said, figuring out a pattern in the crashes would likely help
with placing debug prints, breakpoints, or similar aids to detect
the presumed corruption earlier. Without a pattern, there's
regrettably not much I can suggest.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 16:38:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 16:38:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1K8Y-0000oX-Ij; Tue, 14 Aug 2012 16:38:34 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Boris.Ostrovsky@amd.com>) id 1T1K8X-0000oS-9S
	for xen-devel@lists.xensource.com; Tue, 14 Aug 2012 16:38:33 +0000
X-Env-Sender: Boris.Ostrovsky@amd.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1344962305!1864637!1
X-Originating-IP: [216.32.181.183]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10765 invoked from network); 14 Aug 2012 16:38:26 -0000
Received: from ch1ehsobe003.messaging.microsoft.com (HELO
	ch1outboundpool.messaging.microsoft.com) (216.32.181.183)
	by server-15.tower-27.messagelabs.com with AES128-SHA encrypted SMTP;
	14 Aug 2012 16:38:26 -0000
Received: from mail100-ch1-R.bigfish.com (10.43.68.233) by
	CH1EHSOBE011.bigfish.com (10.43.70.61) with Microsoft SMTP Server id
	14.1.225.23; Tue, 14 Aug 2012 16:38:25 +0000
Received: from mail100-ch1 (localhost [127.0.0.1])	by
	mail100-ch1-R.bigfish.com (Postfix) with ESMTP id 6E1564E0325	for
	<xen-devel@lists.xensource.com>; Tue, 14 Aug 2012 16:38:25 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:163.181.249.109; KIP:(null); UIP:(null);
	IPV:NLI; H:ausb3twp02.amd.com; RD:none; EFVD:NLI
X-SpamScore: 0
X-BigFish: VPS0(zzzz1202hzz8275bhz2dh668h839h944hd24hf0ah)
Received: from mail100-ch1 (localhost.localdomain [127.0.0.1]) by mail100-ch1
	(MessageSwitch) id 1344962304704421_4354;
	Tue, 14 Aug 2012 16:38:24 +0000 (UTC)
Received: from CH1EHSMHS006.bigfish.com (snatpool1.int.messaging.microsoft.com
	[10.43.68.252])	by mail100-ch1.bigfish.com (Postfix) with ESMTP id
	AA75D44003F	for <xen-devel@lists.xensource.com>;
	Tue, 14 Aug 2012 16:38:24 +0000 (UTC)
Received: from ausb3twp02.amd.com (163.181.249.109) by
	CH1EHSMHS006.bigfish.com (10.43.70.6) with Microsoft SMTP Server id
	14.1.225.23; Tue, 14 Aug 2012 16:38:24 +0000
X-WSS-ID: 0M8R7JY-02-JP9-02
X-M-MSG: 
Received: from sausexedgep02.amd.com (sausexedgep02-ext.amd.com
	[163.181.249.73])	(using TLSv1 with cipher AES128-SHA (128/128
	bits))	(No
	client certificate requested)	by ausb3twp02.amd.com (Axway MailGate
	3.8.1) with ESMTP id 20652C805B	for <xen-devel@lists.xensource.com>;
	Tue, 14 Aug 2012 11:38:21 -0500 (CDT)
Received: from SAUSEXDAG03.amd.com (163.181.55.3) by sausexedgep02.amd.com
	(163.181.36.59) with Microsoft SMTP Server (TLS) id 8.3.192.1;
	Tue, 14 Aug 2012 11:38:31 -0500
Received: from storexhtp01.amd.com (172.24.4.3) by sausexdag03.amd.com
	(163.181.55.3) with Microsoft SMTP Server (TLS) id 14.1.323.3;
	Tue, 14 Aug 2012 11:38:21 -0500
Received: from gwo.osrc.amd.com (165.204.16.204) by storexhtp01.amd.com
	(172.24.4.3) with Microsoft SMTP Server id 8.3.213.0; Tue, 14 Aug 2012
	12:38:18 -0400
Received: from wotan.amd.com (wotan.osrc.amd.com [165.204.15.11])	by
	gwo.osrc.amd.com (Postfix) with ESMTP id 847EA49C1E6; Tue, 14 Aug 2012
	17:38:17 +0100 (BST)
Received: by wotan.amd.com (Postfix, from userid 41729)	id 7FA922D202B; Tue,
	14 Aug 2012 18:38:17 +0200 (CEST)
MIME-Version: 1.0
X-Mercurial-Node: 4ebf248d3aa1423da340d6900dd5f21072e519b3
Message-ID: <4ebf248d3aa1423da340.1344962277@localhost.localdomain>
User-Agent: Mercurial-patchbomb/1.8.2
Date: Tue, 14 Aug 2012 18:37:57 +0200
From: Boris Ostrovsky <boris.ostrovsky@amd.com>
To: <xen-devel@lists.xensource.com>
X-OriginatorOrg: amd.com
Cc: boris.ostrovsky@amd.com
Subject: [Xen-devel] [PATCH] acpi: Make sure valid CPU is passed to
	do_pm_op()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

# HG changeset patch
# User Boris Ostrovsky <boris.ostrovsky@amd.com>
# Date 1344962129 -7200
# Node ID 4ebf248d3aa1423da340d6900dd5f21072e519b3
# Parent  33d596f46521ea852e90cf6dbdbf3680d104134c
acpi: Make sure valid CPU is passed to do_pm_op()

Passing invalid CPU value to do_pm_op() will cause assertion
in cpu_online().

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@amd.com>

diff -r 33d596f46521 -r 4ebf248d3aa1 xen/drivers/acpi/pmstat.c
--- a/xen/drivers/acpi/pmstat.c	Mon Aug 13 18:09:33 2012 +0100
+++ b/xen/drivers/acpi/pmstat.c	Tue Aug 14 18:35:29 2012 +0200
@@ -419,7 +419,7 @@ int do_pm_op(struct xen_sysctl_pm_op *op
     int ret = 0;
     const struct processor_pminfo *pmpt;
 
-    if ( !op || !cpu_online(op->cpuid) )
+    if ( !op || op->cpuid >= nr_cpu_ids || !cpu_online(op->cpuid) )
         return -EINVAL;
     pmpt = processor_pminfo[op->cpuid];
 

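The one-line fix above relies on C's short-circuit `||`: the range test runs before `cpu_online()`, so the bitmap lookup never sees an out-of-range id. A minimal standalone sketch of the same guard, with `NR_CPU_IDS`, `online_map` and `assert()` as hypothetical stand-ins for Xen's `nr_cpu_ids`, `cpu_online()` and `ASSERT()`:

```c
#include <assert.h>
#include <stdint.h>

/* Standalone model of the guard added by the patch; these names are
 * stand-ins, not Xen's real symbols. */
#define NR_CPU_IDS 8
static const uint8_t online_map[NR_CPU_IDS] = { 1, 1, 0, 1, 0, 0, 0, 0 };

static int cpu_online(uint32_t cpu)
{
    assert(cpu < NR_CPU_IDS);   /* models the ASSERT that a bad id trips */
    return online_map[cpu];
}

/* Mirrors the patched condition: the range test comes first, and '||'
 * short-circuits, so cpu_online() is never called with a bad id. */
static int check_cpuid(uint32_t cpuid)
{
    if (cpuid >= NR_CPU_IDS || !cpu_online(cpuid))
        return -22;             /* stands in for -EINVAL */
    return 0;
}
```

With the unpatched condition, `check_cpuid(1000)` would reach the assertion inside `cpu_online()`; with the range test first it simply returns the `-EINVAL` stand-in.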

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 16:55:47 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 16:55:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1KOt-00010b-BA; Tue, 14 Aug 2012 16:55:27 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T1KOr-00010W-Vh
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 16:55:26 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1344963317!9282183!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1449 invoked from network); 14 Aug 2012 16:55:17 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-27.messagelabs.com with SMTP;
	14 Aug 2012 16:55:17 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 14 Aug 2012 17:55:17 +0100
Message-Id: <502A9F3C0200007800094E3E@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Tue, 14 Aug 2012 17:55:56 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: <boris.ostrovsky@amd.com>
References: <4ebf248d3aa1423da340.1344962277@localhost.localdomain>
In-Reply-To: <4ebf248d3aa1423da340.1344962277@localhost.localdomain>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part9DACEA0C.1__="
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] acpi: Make sure valid CPU is passed to
	do_pm_op()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part9DACEA0C.1__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

>>> On 14.08.12 at 18:37, Boris Ostrovsky <boris.ostrovsky@amd.com> wrote:
> acpi: Make sure valid CPU is passed to do_pm_op()
>=20
> Passing invalid CPU value to do_pm_op() will cause assertion
> in cpu_online().
>=20
> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@amd.com>

I'd like to propose the below extension to your change instead.

Jan

Subject: acpi: Make sure valid CPU is passed to do_pm_op()

Passing invalid CPU value to do_pm_op() will cause assertion
in cpu_online().

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@amd.com>

Such checks would, at a first glance, then also be missing at the top
of various helper functions, but these checks really were already
redundant with the check in do_pm_op(). Remove the redundant checks
for clarity and brevity.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/drivers/acpi/pmstat.c
+++ b/xen/drivers/acpi/pmstat.c
@@ -201,8 +201,6 @@ static int get_cpufreq_para(struct xen_s
     struct list_head *pos;
     uint32_t cpu, i, j =3D 0;
=20
-    if ( !op || !cpu_online(op->cpuid) )
-        return -EINVAL;
     pmpt =3D processor_pminfo[op->cpuid];
     policy =3D per_cpu(cpufreq_cpu_policy, op->cpuid);
=20
@@ -305,9 +303,6 @@ static int set_cpufreq_gov(struct xen_sy
 {
     struct cpufreq_policy new_policy, *old_policy;
=20
-    if ( !op || !cpu_online(op->cpuid) )
-        return -EINVAL;
-
     old_policy =3D per_cpu(cpufreq_cpu_policy, op->cpuid);
     if ( !old_policy )
         return -EINVAL;
@@ -326,8 +321,6 @@ static int set_cpufreq_para(struct xen_s
     int ret =3D 0;
     struct cpufreq_policy *policy;
=20
-    if ( !op || !cpu_online(op->cpuid) )
-        return -EINVAL;
     policy =3D per_cpu(cpufreq_cpu_policy, op->cpuid);
=20
     if ( !policy || !policy->governor )
@@ -404,22 +397,12 @@ static int set_cpufreq_para(struct xen_s
     return ret;
 }
=20
-static int get_cpufreq_avgfreq(struct xen_sysctl_pm_op *op)
-{
-    if ( !op || !cpu_online(op->cpuid) )
-        return -EINVAL;
-
-    op->u.get_avgfreq =3D cpufreq_driver_getavg(op->cpuid, USR_GETAVG);
-
-    return 0;
-}
-
 int do_pm_op(struct xen_sysctl_pm_op *op)
 {
     int ret =3D 0;
     const struct processor_pminfo *pmpt;
=20
-    if ( !op || !cpu_online(op->cpuid) )
+    if ( !op || op->cpuid >=3D nr_cpu_ids || !cpu_online(op->cpuid) )
         return -EINVAL;
     pmpt =3D processor_pminfo[op->cpuid];
=20
@@ -455,7 +438,7 @@ int do_pm_op(struct xen_sysctl_pm_op *op
=20
     case GET_CPUFREQ_AVGFREQ:
     {
-        ret =3D get_cpufreq_avgfreq(op);
+        op->u.get_avgfreq =3D cpufreq_driver_getavg(op->cpuid, USR_GETAVG)=
;
         break;
     }
=20


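The rework above follows a single-gate pattern: validate the request once in the public entry point (do_pm_op()) and let the static helpers it dispatches to assume an already-checked input, which is what makes the per-helper checks removable. A minimal sketch of that shape, with all names hypothetical:

```c
#include <stddef.h>

#define NR_IDS 4
static const int online[NR_IDS] = { 1, 0, 1, 1 };

struct pm_op { unsigned int id; int result; };

/* Helper with no re-validation: it is static and reachable only
 * through dispatch(), which has already vetted op and op->id. */
static int read_value(struct pm_op *op)
{
    op->result = (int)op->id * 100;  /* stands in for the real work */
    return 0;
}

/* The single entry point carries the full check, including the
 * range test before the online-map lookup. */
int dispatch(struct pm_op *op)
{
    if (op == NULL || op->id >= NR_IDS || !online[op->id])
        return -22;                  /* stands in for -EINVAL */
    return read_value(op);
}
```

Centralizing the check this way also means a later tightening (such as the added range test) happens in exactly one place instead of five.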


--=__Part9DACEA0C.1__=
Content-Type: text/plain; name="x86-acpi-pm-op-cpu-check.patch"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment; filename="x86-acpi-pm-op-cpu-check.patch"

acpi: Make sure valid CPU is passed to do_pm_op()=0A=0APassing invalid CPU =
value to do_pm_op() will cause assertion=0Ain cpu_online().=0A=0ASigned-off=
-by: Boris Ostrovsky <boris.ostrovsky@amd.com>=0A=0ASuch checks would, at =
a first glance, then also be missing at the top=0Aof various helper =
functions, but these check really were already=0Aredundant with the check =
in do_pm_op(). Remove the redundant checks=0Afor clarity and brevity.=0A=0A=
Signed-off-by: Jan Beulich <jbeulich@suse.com>=0A=0A--- a/xen/drivers/acpi/=
pmstat.c=0A+++ b/xen/drivers/acpi/pmstat.c=0A@@ -201,8 +201,6 @@ static =
int get_cpufreq_para(struct xen_s=0A     struct list_head *pos;=0A     =
uint32_t cpu, i, j =3D 0;=0A =0A-    if ( !op || !cpu_online(op->cpuid) =
)=0A-        return -EINVAL;=0A     pmpt =3D processor_pminfo[op->cpuid];=
=0A     policy =3D per_cpu(cpufreq_cpu_policy, op->cpuid);=0A =0A@@ -305,9 =
+303,6 @@ static int set_cpufreq_gov(struct xen_sy=0A {=0A     struct =
cpufreq_policy new_policy, *old_policy;=0A =0A-    if ( !op || !cpu_online(=
op->cpuid) )=0A-        return -EINVAL;=0A-=0A     old_policy =3D =
per_cpu(cpufreq_cpu_policy, op->cpuid);=0A     if ( !old_policy )=0A       =
  return -EINVAL;=0A@@ -326,8 +321,6 @@ static int set_cpufreq_para(struct =
xen_s=0A     int ret =3D 0;=0A     struct cpufreq_policy *policy;=0A =0A-  =
  if ( !op || !cpu_online(op->cpuid) )=0A-        return -EINVAL;=0A     =
policy =3D per_cpu(cpufreq_cpu_policy, op->cpuid);=0A =0A     if ( !policy =
|| !policy->governor )=0A@@ -404,22 +397,12 @@ static int set_cpufreq_para(=
struct xen_s=0A     return ret;=0A }=0A =0A-static int get_cpufreq_avgfreq(=
struct xen_sysctl_pm_op *op)=0A-{=0A-    if ( !op || !cpu_online(op->cpuid)=
 )=0A-        return -EINVAL;=0A-=0A-    op->u.get_avgfreq =3D cpufreq_driv=
er_getavg(op->cpuid, USR_GETAVG);=0A-=0A-    return 0;=0A-}=0A-=0A int =
do_pm_op(struct xen_sysctl_pm_op *op)=0A {=0A     int ret =3D 0;=0A     =
const struct processor_pminfo *pmpt;=0A =0A-    if ( !op || !cpu_online(op-=
>cpuid) )=0A+    if ( !op || op->cpuid >=3D nr_cpu_ids || !cpu_online(op->c=
puid) )=0A         return -EINVAL;=0A     pmpt =3D processor_pminfo[op->cpu=
id];=0A =0A@@ -455,7 +438,7 @@ int do_pm_op(struct xen_sysctl_pm_op *op=0A =
=0A     case GET_CPUFREQ_AVGFREQ:=0A     {=0A-        ret =3D get_cpufreq_a=
vgfreq(op);=0A+        op->u.get_avgfreq =3D cpufreq_driver_getavg(op->cpui=
d, USR_GETAVG);=0A         break;=0A     }=0A =0A
--=__Part9DACEA0C.1__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part9DACEA0C.1__=--


From xen-devel-bounces@lists.xen.org Tue Aug 14 17:01:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 17:01:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1KUP-0001BB-3y; Tue, 14 Aug 2012 17:01:09 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1T1KUN-0001B4-BH
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 17:01:07 +0000
Received: from [85.158.143.35:31117] by server-1.bemta-4.messagelabs.com id
	12/A3-07754-2548A205; Tue, 14 Aug 2012 17:01:06 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1344963651!12204145!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzEyMjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18413 invoked from network); 14 Aug 2012 17:00:52 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 17:00:52 -0000
X-IronPort-AV: E=Sophos;i="4.77,766,1336363200"; d="scan'208";a="205145596"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Aug 2012 13:00:50 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Tue, 14 Aug 2012 13:00:50 -0400
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1T1KOI-0006BG-VY;
	Tue, 14 Aug 2012 17:54:51 +0100
Message-ID: <502A82DA.7000205@citrix.com>
Date: Tue, 14 Aug 2012 17:54:50 +0100
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120714 Thunderbird/14.0
MIME-Version: 1.0
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>, 
	Ian Campbell <Ian.Campbell@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>
References: <502A5C58.5050001@citrix.com>
In-Reply-To: <502A5C58.5050001@citrix.com>
X-Enigmail-Version: 1.4.3
Content-Type: multipart/mixed; boundary="------------060407050907020200050608"
Subject: Re: [Xen-devel] (V2) tools/python: Clean python correctly
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--------------060407050907020200050608
Content-Type: text/plain; charset="ISO-8859-1"
Content-Transfer-Encoding: 7bit

Attached version 2, because it seems that despite my testing, "python
setup.py clean" still doesn't actually clean enough.

On 14/08/12 15:10, Andrew Cooper wrote:
> I discovered this with repeated calls to "make clean; make dist-tools"
>
> An alternative, if you wish to remove the use of $(PYTHON) from the
> clean path is to just `rm -rf build`, but using setup.py seems neater.
>

-- 
Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
T: +44 (0)1223 225 900, http://www.citrix.com


--------------060407050907020200050608
Content-Type: text/x-patch; name="tools-python-clean.patch"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="tools-python-clean.patch"

# HG changeset patch
# Parent 33d596f46521ea852e90cf6dbdbf3680d104134c
tools/python: Clean python correctly

Cleaning the python directory should call `$(PYTHON) setup.py clean`
which will clean the build/ subdirectory.  Otherwise, subsequent builds
may be short-circuited and a stale build installed.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

diff -r 33d596f46521 tools/python/Makefile
--- a/tools/python/Makefile
+++ b/tools/python/Makefile
@@ -33,6 +33,7 @@ test:
 
 .PHONY: clean
 clean:
+	$(PYTHON) setup.py clean --all
 	rm -f $(XENPATH)
 	rm -rf *.pyc *.pyo *.o *.a *~ xen/util/auxbin.pyc
 	rm -f xen/lowlevel/xl/_pyxl_types.h

--------------060407050907020200050608
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--------------060407050907020200050608--


From xen-devel-bounces@lists.xen.org Tue Aug 14 17:01:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 17:01:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1KUP-0001BB-3y; Tue, 14 Aug 2012 17:01:09 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1T1KUN-0001B4-BH
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 17:01:07 +0000
Received: from [85.158.143.35:31117] by server-1.bemta-4.messagelabs.com id
	12/A3-07754-2548A205; Tue, 14 Aug 2012 17:01:06 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1344963651!12204145!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzEyMjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18413 invoked from network); 14 Aug 2012 17:00:52 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 17:00:52 -0000
X-IronPort-AV: E=Sophos;i="4.77,766,1336363200"; d="scan'208";a="205145596"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Aug 2012 13:00:50 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Tue, 14 Aug 2012 13:00:50 -0400
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1T1KOI-0006BG-VY;
	Tue, 14 Aug 2012 17:54:51 +0100
Message-ID: <502A82DA.7000205@citrix.com>
Date: Tue, 14 Aug 2012 17:54:50 +0100
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120714 Thunderbird/14.0
MIME-Version: 1.0
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>, 
	Ian Campbell <Ian.Campbell@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>
References: <502A5C58.5050001@citrix.com>
In-Reply-To: <502A5C58.5050001@citrix.com>
X-Enigmail-Version: 1.4.3
Content-Type: multipart/mixed; boundary="------------060407050907020200050608"
Subject: Re: [Xen-devel] (V2) tools/python: Clean python correctly
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--------------060407050907020200050608
Content-Type: text/plain; charset="ISO-8859-1"
Content-Transfer-Encoding: 7bit

Attached version 2, because it seems that despite my testing, "python
setup.py clean" still doesn't actually clean enough.

On 14/08/12 15:10, Andrew Cooper wrote:
> I discovered this with repeated calls to "make clean; make dist-tools"
>
> An alternative, if you wish to remove the use of $(PYTHON) from the
> clean path, is to just `rm -rf build`, but using setup.py seems neater.
>
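The short-circuiting referred to in the patch description comes down to timestamp comparison: distutils only re-copies a file into build/ when the source looks newer than the build copy, so a stale build/ tree can be reused and installed unchanged. A minimal Python sketch of that decision, written in the spirit of distutils.dep_util.newer() but only as an illustration (not the actual distutils code):

```python
import os
import tempfile

def newer(source, target):
    """Rebuild test in the style of distutils.dep_util.newer():
    re-copy `source` over `target` only if `target` is missing or
    older.  A stale build/ copy whose timestamp looks up to date is
    silently reused -- which is why `setup.py clean` must remove it."""
    if not os.path.exists(target):
        return True
    return os.path.getmtime(source) > os.path.getmtime(target)

# Demonstrate with two files and explicit, deterministic timestamps.
d = tempfile.mkdtemp()
src = os.path.join(d, "module.py")
dst = os.path.join(d, "module.build.py")
open(src, "w").close()
open(dst, "w").close()
os.utime(src, (100, 100))  # source last modified at t=100
os.utime(dst, (200, 200))  # stale "build" copy looks newer
print(newer(src, dst))     # False: the copy step would be skipped
```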

-- 
Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
T: +44 (0)1223 225 900, http://www.citrix.com


--------------060407050907020200050608
Content-Type: text/x-patch; name="tools-python-clean.patch"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="tools-python-clean.patch"

# HG changeset patch
# Parent 33d596f46521ea852e90cf6dbdbf3680d104134c
tools/python: Clean python correctly

Cleaning the python directory should call `$(PYTHON) setup.py clean`
which will clean the build/ subdirectory.  Otherwise, subsequent builds
may be short-circuited and a stale build installed.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

diff -r 33d596f46521 tools/python/Makefile
--- a/tools/python/Makefile
+++ b/tools/python/Makefile
@@ -33,6 +33,7 @@ test:
 
 .PHONY: clean
 clean:
+	$(PYTHON) setup.py clean --all
 	rm -f $(XENPATH)
 	rm -rf *.pyc *.pyo *.o *.a *~ xen/util/auxbin.pyc
 	rm -f xen/lowlevel/xl/_pyxl_types.h

--------------060407050907020200050608
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--------------060407050907020200050608--


From xen-devel-bounces@lists.xen.org Tue Aug 14 17:04:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 17:04:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1KXY-0001Uo-Nm; Tue, 14 Aug 2012 17:04:24 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Boris.Ostrovsky@amd.com>) id 1T1KXX-0001Uc-7i
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 17:04:23 +0000
Received: from [85.158.138.51:27167] by server-6.bemta-3.messagelabs.com id
	F2/A0-32013-6158A205; Tue, 14 Aug 2012 17:04:22 +0000
X-Env-Sender: Boris.Ostrovsky@amd.com
X-Msg-Ref: server-4.tower-174.messagelabs.com!1344963860!28222066!1
X-Originating-IP: [216.32.181.185]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5926 invoked from network); 14 Aug 2012 17:04:21 -0000
Received: from ch1ehsobe005.messaging.microsoft.com (HELO
	ch1outboundpool.messaging.microsoft.com) (216.32.181.185)
	by server-4.tower-174.messagelabs.com with AES128-SHA encrypted SMTP;
	14 Aug 2012 17:04:21 -0000
Received: from mail94-ch1-R.bigfish.com (10.43.68.243) by
	CH1EHSOBE016.bigfish.com (10.43.70.66) with Microsoft SMTP Server id
	14.1.225.23; Tue, 14 Aug 2012 17:04:20 +0000
Received: from mail94-ch1 (localhost [127.0.0.1])	by mail94-ch1-R.bigfish.com
	(Postfix) with ESMTP id ECEBF2E0402;
	Tue, 14 Aug 2012 17:04:19 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:163.181.249.108; KIP:(null); UIP:(null);
	IPV:NLI; H:ausb3twp01.amd.com; RD:none; EFVD:NLI
X-SpamScore: 11
X-BigFish: VPS11(zzbb2dI98dI9371I1432Izz1202hzz8275bhz2dh668h839hd25he5bhf0ah107ah133w)
Received: from mail94-ch1 (localhost.localdomain [127.0.0.1]) by mail94-ch1
	(MessageSwitch) id 1344963857447637_17609;
	Tue, 14 Aug 2012 17:04:17 +0000 (UTC)
Received: from CH1EHSMHS013.bigfish.com (snatpool2.int.messaging.microsoft.com
	[10.43.68.239])	by mail94-ch1.bigfish.com (Postfix) with ESMTP id
	6159C2C0047;	Tue, 14 Aug 2012 17:04:17 +0000 (UTC)
Received: from ausb3twp01.amd.com (163.181.249.108) by
	CH1EHSMHS013.bigfish.com (10.43.70.13) with Microsoft SMTP Server id
	14.1.225.23; Tue, 14 Aug 2012 17:04:16 +0000
X-WSS-ID: 0M8R8R2-01-645-02
X-M-MSG: 
Received: from sausexedgep01.amd.com (sausexedgep01-ext.amd.com
	[163.181.249.72])	(using TLSv1 with cipher AES128-SHA (128/128
	bits))	(No
	client certificate requested)	by ausb3twp01.amd.com (Axway MailGate
	3.8.1)
	with ESMTP id 2B71E102800D;	Tue, 14 Aug 2012 12:04:14 -0500 (CDT)
Received: from SAUSEXDAG03.amd.com (163.181.55.3) by sausexedgep01.amd.com
	(163.181.36.54) with Microsoft SMTP Server (TLS) id 8.3.192.1;
	Tue, 14 Aug 2012 12:04:49 -0500
Received: from [10.234.222.82] (163.181.55.254) by sausexdag03.amd.com
	(163.181.55.3) with Microsoft SMTP Server id 14.1.323.3;
	Tue, 14 Aug 2012 12:04:13 -0500
Message-ID: <502A850D.8050304@amd.com>
Date: Tue, 14 Aug 2012 13:04:13 -0400
From: Boris Ostrovsky <boris.ostrovsky@amd.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120713 Thunderbird/14.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <4ebf248d3aa1423da340.1344962277@localhost.localdomain>
	<502A9F3C0200007800094E3E@nat28.tlf.novell.com>
In-Reply-To: <502A9F3C0200007800094E3E@nat28.tlf.novell.com>
X-OriginatorOrg: amd.com
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] acpi: Make sure valid CPU is passed to
	do_pm_op()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/14/2012 12:55 PM, Jan Beulich wrote:
>>>> On 14.08.12 at 18:37, Boris Ostrovsky <boris.ostrovsky@amd.com> wrote:
>> acpi: Make sure valid CPU is passed to do_pm_op()
>>
>> Passing an invalid CPU value to do_pm_op() will cause an assertion
>> failure in cpu_online().
>>
>> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@amd.com>
>
> I'd like to propose the below extension to your change instead.

Yes, thanks, I didn't notice these.

-boris


>
> Jan
>
> Subject: acpi: Make sure valid CPU is passed to do_pm_op()
>
> Passing an invalid CPU value to do_pm_op() will cause an assertion
> failure in cpu_online().
>
> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@amd.com>
>
> Such checks would, at first glance, then also be missing at the top
> of various helper functions, but these checks really were already
> redundant with the check in do_pm_op(). Remove the redundant checks
> for clarity and brevity.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>
> --- a/xen/drivers/acpi/pmstat.c
> +++ b/xen/drivers/acpi/pmstat.c
> @@ -201,8 +201,6 @@ static int get_cpufreq_para(struct xen_s
>       struct list_head *pos;
>       uint32_t cpu, i, j = 0;
>
> -    if ( !op || !cpu_online(op->cpuid) )
> -        return -EINVAL;
>       pmpt = processor_pminfo[op->cpuid];
>       policy = per_cpu(cpufreq_cpu_policy, op->cpuid);
>
> @@ -305,9 +303,6 @@ static int set_cpufreq_gov(struct xen_sy
>   {
>       struct cpufreq_policy new_policy, *old_policy;
>
> -    if ( !op || !cpu_online(op->cpuid) )
> -        return -EINVAL;
> -
>       old_policy = per_cpu(cpufreq_cpu_policy, op->cpuid);
>       if ( !old_policy )
>           return -EINVAL;
> @@ -326,8 +321,6 @@ static int set_cpufreq_para(struct xen_s
>       int ret = 0;
>       struct cpufreq_policy *policy;
>
> -    if ( !op || !cpu_online(op->cpuid) )
> -        return -EINVAL;
>       policy = per_cpu(cpufreq_cpu_policy, op->cpuid);
>
>       if ( !policy || !policy->governor )
> @@ -404,22 +397,12 @@ static int set_cpufreq_para(struct xen_s
>       return ret;
>   }
>
> -static int get_cpufreq_avgfreq(struct xen_sysctl_pm_op *op)
> -{
> -    if ( !op || !cpu_online(op->cpuid) )
> -        return -EINVAL;
> -
> -    op->u.get_avgfreq = cpufreq_driver_getavg(op->cpuid, USR_GETAVG);
> -
> -    return 0;
> -}
> -
>   int do_pm_op(struct xen_sysctl_pm_op *op)
>   {
>       int ret = 0;
>       const struct processor_pminfo *pmpt;
>
> -    if ( !op || !cpu_online(op->cpuid) )
> +    if ( !op || op->cpuid >= nr_cpu_ids || !cpu_online(op->cpuid) )
>           return -EINVAL;
>       pmpt = processor_pminfo[op->cpuid];
>
> @@ -455,7 +438,7 @@ int do_pm_op(struct xen_sysctl_pm_op *op
>
>       case GET_CPUFREQ_AVGFREQ:
>       {
> -        ret = get_cpufreq_avgfreq(op);
> +        op->u.get_avgfreq = cpufreq_driver_getavg(op->cpuid, USR_GETAVG);
>           break;
>       }
>
>
>
>
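The reshuffled patch above follows a validate-once pattern: the single entry point do_pm_op() rejects a missing op, an out-of-range cpuid, or an offline CPU before dispatching, so the helpers it calls no longer need checks of their own. A rough Python transliteration of that control flow (function and constant names mirror the patch, but this is only an illustrative sketch under assumed values, not Xen code):

```python
EINVAL = 22
NR_CPU_IDS = 4
online_cpus = {0, 1, 2}          # CPU 3 exists but is offline

def do_pm_op(op):
    """Single entry point: reject a missing op or an out-of-range /
    offline cpuid before any helper touches per-CPU state.  The bounds
    check matters in the C original because cpu_online() indexes a
    fixed-size bitmap and asserts on out-of-range IDs."""
    if op is None or op["cpuid"] >= NR_CPU_IDS or op["cpuid"] not in online_cpus:
        return -EINVAL
    # Helpers run only after validation, mirroring the checks the
    # patch removes from get_cpufreq_para() and friends.
    return get_cpufreq_para(op)

def get_cpufreq_para(op):
    # No revalidation here -- the dispatcher already vetted op.
    return 0

print(do_pm_op({"cpuid": 0}))    # 0: valid, online CPU
print(do_pm_op({"cpuid": 7}))    # -22: out of range, caught at the top
```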


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 17:31:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 17:31:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1KxT-000224-Mj; Tue, 14 Aug 2012 17:31:11 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <kumarsukhani@gmail.com>) id 1T1KxR-00021w-IY
	for xen-devel@lists.xensource.com; Tue, 14 Aug 2012 17:31:09 +0000
Received: from [85.158.138.51:15907] by server-2.bemta-3.messagelabs.com id
	A0/DD-17748-C5B8A205; Tue, 14 Aug 2012 17:31:08 +0000
X-Env-Sender: kumarsukhani@gmail.com
X-Msg-Ref: server-7.tower-174.messagelabs.com!1344965467!19356374!1
X-Originating-IP: [209.85.215.43]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17572 invoked from network); 14 Aug 2012 17:31:07 -0000
Received: from mail-lpp01m010-f43.google.com (HELO
	mail-lpp01m010-f43.google.com) (209.85.215.43)
	by server-7.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 17:31:07 -0000
Received: by lagk11 with SMTP id k11so429718lag.30
	for <xen-devel@lists.xensource.com>;
	Tue, 14 Aug 2012 10:31:06 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:cc:content-type;
	bh=8GVMnYI5C3qBBdPXq7CVehtQhT80+e+88h8oYCm/Qs0=;
	b=EE3tGgXloERx3QWGuP/BEgeHChZWNuSSGxDBmN0knQyYRRKgcW+QHjgi8nVNPMC9md
	NeVFO1YdxPG50PeVAhMD1SuMFEgsqypG8w6uT07ti+04LTq4rmsfu7U2TxI4JWmFVICq
	0P7i4jbzFGS5jSSOLwmsr9M41E04yUQpMJlgUn2t3U6t7BQaiDZBzQ59qQwxHBjsZAVw
	pVRG+Jp45k/Bc3v2FRlrpvOrnsYUVDN4rGy/76Za9SsQiXaGb4uulAg0Q81OUQGo08B3
	1uWTtPSrO0WRPZ+2/t7AzTEVnGQEgEZkChFjnyj2FzQo8Ne6OSJ/6VhQS802I/MZripK
	on/g==
MIME-Version: 1.0
Received: by 10.152.108.42 with SMTP id hh10mr16816341lab.9.1344965466779;
	Tue, 14 Aug 2012 10:31:06 -0700 (PDT)
Received: by 10.112.26.168 with HTTP; Tue, 14 Aug 2012 10:31:06 -0700 (PDT)
Date: Tue, 14 Aug 2012 23:01:06 +0530
Message-ID: <CALNeL8VenXBBGGd+MVQwh+iq8_QYk2jGSXVMvdE1tmWKCR9dvQ@mail.gmail.com>
From: Kumar Sukhani <kumarsukhani@gmail.com>
To: xen-devel@lists.xensource.com
Cc: Surabhi Mutha <muthasurabhi@gmail.com>,
	Saurabh Gadia <saurabh.gadia4@gmail.com>,
	Akash deep agrawal <akashagrawal14@gmail.com>
Subject: [Xen-devel] Introducing O.S. system level virtualization in Xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

We have gone through various virtualization techniques and found the
following advantages of operating-system-level virtualization, which we
would like to integrate with Xen.

Advantages of an O.S.-level virtualization platform:
1) Virtual servers share the same system call interface and do not
have any emulation overhead.
2) Processes within the virtual server run as regular processes on the
host system. This is somewhat more memory-efficient and I/O-efficient
than whole-system emulation.
3) Processes within the virtual server are queued on the same scheduler
as on the host, allowing guest processes to run concurrently on SMP
systems. This is not trivial to implement with whole-system emulation.
4) Networking is based on isolation rather than virtualization, so
there is no additional overhead for packets.

But operating-system-level virtualization has the disadvantage that we
cannot create a virtualization environment for a proprietary O.S. like
Windows; for that we have to use hardware-level virtualization like
Xen.

So we are planning to introduce features of O.S.-level virtualization
into Xen by proposing an integrated architecture[1] with operating-system
virtualization running inside one of Xen's VMs.
Please review our architecture and let us know whether it is worth
implementing.

[1] http://www.flickr.com/photos/84959360@N02/7782516274/in/photostream
--
Be Happy Always
+919579650250

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 17:31:47 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 17:31:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1Kxk-00022f-41; Tue, 14 Aug 2012 17:31:28 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ben.guthro@gmail.com>) id 1T1Kxi-00022Z-3a
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 17:31:26 +0000
Received: from [85.158.143.35:31597] by server-2.bemta-4.messagelabs.com id
	19/09-31966-D6B8A205; Tue, 14 Aug 2012 17:31:25 +0000
X-Env-Sender: ben.guthro@gmail.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1344965483!15946237!1
X-Originating-IP: [209.85.212.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24710 invoked from network); 14 Aug 2012 17:31:24 -0000
Received: from mail-vb0-f45.google.com (HELO mail-vb0-f45.google.com)
	(209.85.212.45)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 17:31:24 -0000
Received: by vbip1 with SMTP id p1so760028vbi.32
	for <xen-devel@lists.xen.org>; Tue, 14 Aug 2012 10:31:23 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=U4FWzwGUzm/59kSlVJgqF5dUn6hk7vUUdECIN+ltTkE=;
	b=nWpvLhjJ5x4YnCyK2J7/pIGuAnaZLJccF2n9R/ucOHvV7C5ucV1s3eyhIuIXnAxOZs
	ZDTrh35uJUdsOgETzUBNCCUVnmjSUQi4rq4x27G0v5EEv8Adl1pGVSSO6UDltpmOo3VX
	vhiEekNOdlZqvJc7Oo7jxUHjDF21dWqw7Y6U2TXzbNqk1KUUkrLV0b1RPpaBxgVmnx6J
	63bVYjDfBPw6kBXGmXo/S86nqqaEOvAbVKlVD7TNQGQ6bdpHtpoS6g7Qtpk/LEYa17Lf
	NioDGWAu3zWOBft/85IrFEEce5iR6XDZ+aujJWIXiusjzNR4qdFX7neCyO9aU22/gYXI
	Hmxw==
MIME-Version: 1.0
Received: by 10.43.46.194 with SMTP id up2mr11280596icb.22.1344965482774; Tue,
	14 Aug 2012 10:31:22 -0700 (PDT)
Received: by 10.64.33.15 with HTTP; Tue, 14 Aug 2012 10:31:22 -0700 (PDT)
In-Reply-To: <CAOvdn6VVJdhWVYa-gNVt3euE4AbGi1TijUCUd1wGzv0SCWEUYA@mail.gmail.com>
References: <CAOvdn6WwgBD-2yf1uu3m9-16=Tb5KUrybXf2KibtnxKbP7YtwA@mail.gmail.com>
	<CAOvdn6UBW-yTOMd3YSf5M9JiHLRivyaKL-qzcKG8epszqaKmGg@mail.gmail.com>
	<20120807163320.GC15053@phenom.dumpdata.com>
	<CAOvdn6XcRGKtJxGwJO8J+ZBJr89wfTLc1woW5rhtMM540H9h1w@mail.gmail.com>
	<CAOvdn6XUoSeGoo7X=AJ1kpDxZB9HZEv+TJ4v-=FHcyR4=CHK-w@mail.gmail.com>
	<502240F302000078000937A6@nat28.tlf.novell.com>
	<CAOvdn6WN3kxC8Bb9g6MpfLULu47EGSGCtajPc-SFeJ-8VSUQDQ@mail.gmail.com>
	<CAOvdn6Xk1inm1hU55i0phU2qvSWyJLNoyDdn43biBAY2OewPtw@mail.gmail.com>
	<5023F5400200007800093F4E@nat28.tlf.novell.com>
	<CAOvdn6U-f3C4U8xVPvDyRFkFFLkbfiCXg_n+9zgA9V5u+q5jfw@mail.gmail.com>
	<5023F8B80200007800093F8C@nat28.tlf.novell.com>
	<CAOvdn6XKHfete_+=Hkvt20G7_cJ-4i8+9j9YD30OxzQ16Ru-+g@mail.gmail.com>
	<5024CB720200007800094124@nat28.tlf.novell.com>
	<CAOvdn6VVJdhWVYa-gNVt3euE4AbGi1TijUCUd1wGzv0SCWEUYA@mail.gmail.com>
Date: Tue, 14 Aug 2012 13:31:22 -0400
X-Google-Sender-Auth: 2BN8l1ffvlOJZtW4sYjHgeyPbAU
Message-ID: <CAOvdn6USnG9Y4MNBhrDZmob0oJ8BgFfKTvFEhcKbXDxNmzfZ-Q@mail.gmail.com>
From: Ben Guthro <ben@guthro.net>
To: Jan Beulich <JBeulich@suse.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, john.baboval@citrix.com,
	Thomas Goetz <thomas.goetz@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen4.2 S3 regression?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I've bisected this issue, and it looks like it is a rather old problem.

The following changeset introduced it in the 4.1 development stream on
Jun 17, 2010:

http://xenbits.xen.org/hg/xen-unstable.hg/rev/0695a5cdcb42


Since we (XenClient Enterprise / NxTop) skipped over Xen-4.1, we
never ran into this until now.


Any thoughts as to a solution?


-Ben



On Fri, Aug 10, 2012 at 3:15 PM, Ben Guthro <ben@guthro.net> wrote:
> I'll continue to investigate, as my schedule allows, but haven't found
> any smoking gun just yet.
>
> This happens to be on an Ivybridge system - I have other Sandybridge
> systems that will go to sleep, but never wake up at all, forcing a
> hard power cycle.
>
> I tested these iommu= parameters on one of these machines, to no
> effect. Every time they go into S3, a hard reset is necessary to get
> them to come out of it.
>
>
>
>
> On Fri, Aug 10, 2012 at 2:50 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>>>> On 09.08.12 at 18:09, Ben Guthro <ben@guthro.net> wrote:
>>> iommu=no-intremap
>>> This seems to work around the issue on this platform, performing
>>> multiple suspend/resume cycles, and ahci came back afterwards just
>>> fine.
>>>
>>> What is the downside to flipping this off?
>>
>> Loss of security (against misbehaving/malicious guests). So we
>> certainly want/need to get to the bottom of this (especially if
>> this is not only one kind of system that's affected).
>>
>>> iommu=off
>>> This test behaved similarly to the above, also working around the issue.
>>
>> Of course, this is a superset of the former.
>>
>> This result, however, makes it more likely again to indeed be a
>> Xen side problem, not Dom0 induced corruption.
>>
>> Jan
>>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 17:34:19 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 17:34:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1L0H-0002Fn-Ld; Tue, 14 Aug 2012 17:34:05 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1T1L0G-0002FY-SU
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 17:34:05 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1344965638!1692304!1
X-Originating-IP: [74.125.83.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4395 invoked from network); 14 Aug 2012 17:33:58 -0000
Received: from mail-ee0-f45.google.com (HELO mail-ee0-f45.google.com)
	(74.125.83.45)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 17:33:58 -0000
Received: by eeke53 with SMTP id e53so248459eek.32
	for <xen-devel@lists.xen.org>; Tue, 14 Aug 2012 10:33:58 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:date:x-google-sender-auth:message-id:subject
	:from:to:content-type;
	bh=AhM6JFku5BGaTyqM8AxG3ubyydUE7THlljUQXD96Dsg=;
	b=BXuCx5qKF3U6dw4uXgpcWwlZMIehIU9Ee/VFjiM7+9O2vGc5N4LiXSkZVz6YCQXE6b
	9MrUOBjkA9cfpppMYQaGQiCsdkMWrBGpOTmJTH3DrzGpqdICSS7rph6MZbEjiA7udGg0
	PQqH+RlNt0iE78m3RuVZX53PD/wjs0P9hfsdzfDP9dhirkiLIME32WvMmVmttwm4VJoU
	R9bUYuL8c7N0swStCWCh+ErtXn4NflEVdEZX8v9B8Bl9As44SX7SY5fdA525TvCVSoho
	KeL4oPaL9EfhDD/xtGpOtLiJHWV2crHjvL2sDl6Ndnd0Q/fHayzc9ToN188GBL5IO/vs
	MerQ==
MIME-Version: 1.0
Received: by 10.14.225.200 with SMTP id z48mr20293396eep.39.1344965638537;
	Tue, 14 Aug 2012 10:33:58 -0700 (PDT)
Received: by 10.14.215.195 with HTTP; Tue, 14 Aug 2012 10:33:58 -0700 (PDT)
Date: Tue, 14 Aug 2012 18:33:58 +0100
X-Google-Sender-Auth: YjyF0trHiPAjLYR90psTFKaLue8
Message-ID: <CAFLBxZaEci0mOcDCgFX9zk=wh3z4Nf1LD5E5Fcy7Y3=ioDAM=g@mail.gmail.com>
From: George Dunlap <dunlapg@umich.edu>
To: xen-devel@lists.xen.org, Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [Xen-devel] [TESTDAY] xl cpupool-create segfaults if given invalid
	configuration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

# xl cpupool-create 'name="pool2" sched="credit2"'
command line:2: config parsing error near `sched': syntax error,
unexpected IDENT, expecting NEWLINE or ';'
Failed to parse config file: Invalid argument
*** glibc detected *** xl: free(): invalid pointer: 0x0000000001a79a10 ***
Segmentation fault (core dumped)

Looking at the code in xl_cmdimpl.c:main_cpupoolcreate(), it calls:
* xlu_cfg_init()
* xlu_cfg_readdata()

If readdata() fails, it calls xlu_cfg_destroy() before returning.

Other callers of readdata() seem to call exit(1) if readdata() fails.

So is readdata() leaving the config in an inconsistent state?  Or is
the config already cleaned up?
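
The glibc abort above is the classic double-free signature, and the ownership
question being asked can be illustrated with a minimal sketch. The cfg_* names
below are hypothetical stand-ins, not the real libxlutil code: if the read
step already tears the object down on a parse error, a caller that then calls
destroy frees the same pointers a second time.

```c
#include <stdlib.h>

/* Hypothetical stand-ins for xlu_cfg_init/readdata/destroy; a sketch of
 * the ownership question only, not the actual xlu_cfg implementation. */
typedef struct cfg { char *data; } cfg;

cfg *cfg_init(void) {
    cfg *c = calloc(1, sizeof *c);
    if (c) c->data = malloc(16);
    return c;
}

/* Non-owning failure: leave the object intact; the caller must destroy. */
int cfg_read_nonowning(cfg *c) {
    (void)c;
    return -1;                      /* simulate the parse error */
}

/* Owning failure: tear the object down; the caller must NOT destroy.
 * Pairing this with a caller-side cfg_destroy() is the double free. */
int cfg_read_owning(cfg *c) {
    free(c->data);
    free(c);
    return -1;
}

void cfg_destroy(cfg *c) {
    if (!c) return;
    free(c->data);
    free(c);
}
```

Whichever convention xlu_cfg_readdata() actually follows, the library and all
of its callers must agree on exactly one of the two; the segfault suggests
main_cpupoolcreate() and the other callers currently do not.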

 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 17:34:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 17:34:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1L0f-0002J6-7E; Tue, 14 Aug 2012 17:34:29 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T1L0d-0002IH-RS
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 17:34:28 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1344965661!1872539!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkwNzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18003 invoked from network); 14 Aug 2012 17:34:21 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 17:34:21 -0000
X-IronPort-AV: E=Sophos;i="4.77,766,1336348800"; d="scan'208";a="14009165"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Aug 2012 17:34:21 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 14 Aug 2012 18:34:21 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1T1L0X-0006MW-BC; Tue, 14 Aug 2012 17:34:21 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1T1L0X-00056i-7v;
	Tue, 14 Aug 2012 18:34:21 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20522.35865.492083.486374@mariner.uk.xensource.com>
Date: Tue, 14 Aug 2012 18:34:17 +0100
To: Andrew Cooper <andrew.cooper3@citrix.com>
In-Reply-To: <502A82DA.7000205@citrix.com>
References: <502A5C58.5050001@citrix.com>
	<502A82DA.7000205@citrix.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] (V2) tools/python: Clean python correctly
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Andrew Cooper writes ("Re: [Xen-devel] (V2) tools/python: Clean python correctly"):
> Attached version 2, because it seems that despite my testing, "python
> setup.py clean" still doesn't actually clean enough.
> 
> On 14/08/12 15:10, Andrew Cooper wrote:
> > I discovered this with repeated calls to "make clean; make dist-tools"
> >
> > An alternative, if you wish to remove the use of $(PYTHON) from the
> > clean path is to just `rm -rf build`, but using setup.py seems neater.

Perhaps removing the build directory would be better, faster, and
more reliable?
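
As a quick sanity check of that suggestion, the two approaches can be
compared in a throwaway directory. The paths below are illustrative; the
real tools/python tree keeps its distutils output under build/:

```shell
# Sketch: simulate a distutils build tree, then apply the two clean
# strategies under discussion (illustrative; the actual Makefile differs).
tmp=$(mktemp -d)
cd "$tmp"
mkdir -p build/lib.linux-x86_64-2.7
touch build/lib.linux-x86_64-2.7/xen.so

# Strategy in the patch:  $(PYTHON) setup.py clean
#   -- depends on python being present and on distutils' notion of "clean".
# Strategy Ian suggests: unconditionally remove the build directory.
rm -rf build

test ! -d build && echo "clean"
```

The rm -rf form needs no Python interpreter on the clean path, which is the
trade-off the thread is weighing against keeping setup.py authoritative.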

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 17:38:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 17:38:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1L4k-0002gn-Tx; Tue, 14 Aug 2012 17:38:42 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1T1L4i-0002gg-Qa
	for Xen-devel@lists.xensource.com; Tue, 14 Aug 2012 17:38:41 +0000
Received: from [85.158.138.51:55733] by server-5.bemta-3.messagelabs.com id
	E8/D6-08865-02D8A205; Tue, 14 Aug 2012 17:38:40 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-10.tower-174.messagelabs.com!1344965918!24218205!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDY5NjU3Ng==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5608 invoked from network); 14 Aug 2012 17:38:39 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-10.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 14 Aug 2012 17:38:39 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7EHcVTY032731
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 14 Aug 2012 17:38:31 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7EHcUcd028211
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 14 Aug 2012 17:38:31 GMT
Received: from abhmt118.oracle.com (abhmt118.oracle.com [141.146.116.70])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7EHcUx1019121; Tue, 14 Aug 2012 12:38:30 -0500
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 14 Aug 2012 10:38:30 -0700
Date: Tue, 14 Aug 2012 10:38:27 -0700
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20120814103827.7dcf55f1@mantra.us.oracle.com>
In-Reply-To: <alpine.DEB.2.02.1208141144070.21096@kaball.uk.xensource.com>
References: <20120626181707.4203d336@mantra.us.oracle.com>
	<CAFLBxZagm=53bZ=KaWmd7XQkkAduD+znECJRsaJUqrGJDZgkQQ@mail.gmail.com>
	<20120801153439.3f81c923@mantra.us.oracle.com>
	<501A4E0C.1090509@eu.citrix.com>
	<20120813151446.22ae85b5@mantra.us.oracle.com>
	<alpine.DEB.2.02.1208141144070.21096@kaball.uk.xensource.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.7.6 (GTK+ 2.18.9; x86_64-redhat-linux-gnu)
Mime-Version: 1.0
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	"Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Keir Fraser <keir.xen@gmail.com>, Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [HYBRID]: status update...
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 14 Aug 2012 11:44:44 +0100
Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:

> On Mon, 13 Aug 2012, Mukesh Rathor wrote:
> > Ok, I changed all code references from xen_hybrid_domain to
> > xen_pvh_domain in linux. Changing xen code too. So it's PVH now.
> 
> What would xen_pv_domain() and xen_hvm_domain() return in a hybrid
> guest?

xen_pv_domain() == TRUE
xen_hvm_domain() == FALSE
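
The behaviour Mukesh describes can be sketched as predicates over a
domain-type enum. The names and values below are illustrative, modelled on
but not copied from the Linux arch/x86/xen code:

```c
/* Sketch of domain-type predicates under the PVH scheme described above.
 * Hypothetical names and values, not the actual Linux definitions. */
enum xen_domain_kind { XEN_NATIVE, XEN_PV, XEN_HVM, XEN_PVH };

static enum xen_domain_kind xen_domain_kind = XEN_PVH;

/* Per the thread: a PVH guest reports as PV, not HVM. */
int xen_pv_domain(void)  { return xen_domain_kind == XEN_PV ||
                                  xen_domain_kind == XEN_PVH; }
int xen_hvm_domain(void) { return xen_domain_kind == XEN_HVM; }
int xen_pvh_domain(void) { return xen_domain_kind == XEN_PVH; }
```

Code guarded by xen_pv_domain() would therefore also run for PVH guests,
while xen_hvm_domain() paths would not, which is the distinction Stefano's
question probes.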


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 17:42:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 17:42:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1L7j-0002pH-HV; Tue, 14 Aug 2012 17:41:47 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1T1L7i-0002p7-Po
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 17:41:46 +0000
Received: from [85.158.143.35:3276] by server-3.bemta-4.messagelabs.com id
	BC/32-09529-ADD8A205; Tue, 14 Aug 2012 17:41:46 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1344966104!11859461!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjQ3NjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28467 invoked from network); 14 Aug 2012 17:41:45 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 17:41:45 -0000
X-IronPort-AV: E=Sophos;i="4.77,766,1336363200"; d="scan'208";a="34619512"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Aug 2012 13:41:43 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Tue, 14 Aug 2012 13:41:43 -0400
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1T1L7f-0006te-7h;
	Tue, 14 Aug 2012 18:41:43 +0100
Message-ID: <502A8DD7.3080504@citrix.com>
Date: Tue, 14 Aug 2012 18:41:43 +0100
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120714 Thunderbird/14.0
MIME-Version: 1.0
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
References: <502A5C58.5050001@citrix.com>	<502A82DA.7000205@citrix.com>
	<20522.35865.492083.486374@mariner.uk.xensource.com>
In-Reply-To: <20522.35865.492083.486374@mariner.uk.xensource.com>
X-Enigmail-Version: 1.4.3
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] (V2) tools/python: Clean python correctly
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 14/08/12 18:34, Ian Jackson wrote:
> Andrew Cooper writes ("Re: [Xen-devel] (V2) tools/python: Clean python correctly"):
>> Attached version 2, because it seems that despite my testing, "python
>> setup.py clean" still doesn't actually clean enough.
>>
>> On 14/08/12 15:10, Andrew Cooper wrote:
>>> I discovered this with repeated calls to "make clean; make dist-tools"
>>>
>>> An alternative, if you wish to remove the use of $(PYTHON) from the
>>> clean path is to just `rm -rf build`, but using setup.py seems neater.
> Perhaps removing the build directory would be better and faster and
> more reliable?
>
> Ian.

That would put the code in line with tools/pygrub.  Sec while I make a
v3 patch.

-- 
Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
T: +44 (0)1223 225 900, http://www.citrix.com


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 17:43:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 17:43:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1L9K-0002vz-2e; Tue, 14 Aug 2012 17:43:26 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T1L9I-0002vs-Vx
	for Xen-devel@lists.xensource.com; Tue, 14 Aug 2012 17:43:25 +0000
Received: from [85.158.138.51:10675] by server-3.bemta-3.messagelabs.com id
	43/09-13809-C3E8A205; Tue, 14 Aug 2012 17:43:24 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-15.tower-174.messagelabs.com!1344966203!26497652!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkwNzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2522 invoked from network); 14 Aug 2012 17:43:23 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 17:43:23 -0000
X-IronPort-AV: E=Sophos;i="4.77,766,1336348800"; d="scan'208";a="14009276"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Aug 2012 17:43:23 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 14 Aug 2012 18:43:23 +0100
Date: Tue, 14 Aug 2012 18:42:54 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Mukesh Rathor <mukesh.rathor@oracle.com>
In-Reply-To: <20120814103827.7dcf55f1@mantra.us.oracle.com>
Message-ID: <alpine.DEB.2.02.1208141842460.21096@kaball.uk.xensource.com>
References: <20120626181707.4203d336@mantra.us.oracle.com>
	<CAFLBxZagm=53bZ=KaWmd7XQkkAduD+znECJRsaJUqrGJDZgkQQ@mail.gmail.com>
	<20120801153439.3f81c923@mantra.us.oracle.com>
	<501A4E0C.1090509@eu.citrix.com>
	<20120813151446.22ae85b5@mantra.us.oracle.com>
	<alpine.DEB.2.02.1208141144070.21096@kaball.uk.xensource.com>
	<20120814103827.7dcf55f1@mantra.us.oracle.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	"Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Keir Fraser <keir.xen@gmail.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [HYBRID]: status update...
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 14 Aug 2012, Mukesh Rathor wrote:
> On Tue, 14 Aug 2012 11:44:44 +0100
> Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
> 
> > On Mon, 13 Aug 2012, Mukesh Rathor wrote:
> > > Ok, I changed all code references from xen_hybrid_domain to
> > > xen_pvh_domain in linux. Changing xen code too. So it's PVH now.
> > 
> > What would xen_pv_domain() and xen_hvm_domain() return in a hybrid
> > guest?
> 
> xen_pv_domain() == TRUE
> xen_hvm_domain() == FALSE
> 

good, just as I expected :)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 17:46:26 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 17:46:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1LC1-00036Y-M6; Tue, 14 Aug 2012 17:46:13 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1T1LC0-00036L-9U
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 17:46:12 +0000
Received: from [85.158.138.51:39137] by server-10.bemta-3.messagelabs.com id
	5B/B6-20518-3EE8A205; Tue, 14 Aug 2012 17:46:11 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-9.tower-174.messagelabs.com!1344966368!28314083!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzEyMjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32274 invoked from network); 14 Aug 2012 17:46:10 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 17:46:10 -0000
X-IronPort-AV: E=Sophos;i="4.77,766,1336363200"; d="scan'208";a="205152758"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Aug 2012 13:45:40 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Tue, 14 Aug 2012 13:45:40 -0400
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1T1LBT-0006x0-US;
	Tue, 14 Aug 2012 18:45:39 +0100
Message-ID: <502A8EC3.1020107@citrix.com>
Date: Tue, 14 Aug 2012 18:45:39 +0100
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120714 Thunderbird/14.0
MIME-Version: 1.0
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
References: <502A5C58.5050001@citrix.com>	<502A82DA.7000205@citrix.com>
	<20522.35865.492083.486374@mariner.uk.xensource.com>
	<502A8DD7.3080504@citrix.com>
In-Reply-To: <502A8DD7.3080504@citrix.com>
X-Enigmail-Version: 1.4.3
Content-Type: multipart/mixed; boundary="------------080008020204040007010304"
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] (V3) tools/python: Clean python correctly
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--------------080008020204040007010304
Content-Type: text/plain; charset="ISO-8859-1"
Content-Transfer-Encoding: 7bit

On 14/08/12 15:10, Andrew Cooper wrote:
>>>> I discovered this with repeated calls to "make clean; make dist-tools"
>>>>
>>>> An alternative, if you wish to remove the use of $(PYTHON) from the
>>>> clean path is to just `rm -rf build`, but using setup.py seems neater.
>> Perhaps removing the build directory would be better and faster and
>> more reliable?
>>
>> Ian.
>>
>>

Attached

-- 
Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
T: +44 (0)1223 225 900, http://www.citrix.com


--------------080008020204040007010304
Content-Type: text/x-patch; name="tools-python-clean.patch"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="tools-python-clean.patch"

# HG changeset patch
# Parent 33d596f46521ea852e90cf6dbdbf3680d104134c
tools/python: Clean python correctly

Cleaning the python directory should completely remove the build/
directory, otherwise subsequent builds may be short-circuited and a
stale build installed.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

diff -r 33d596f46521 tools/python/Makefile
--- a/tools/python/Makefile
+++ b/tools/python/Makefile
@@ -34,7 +34,7 @@ test:
 .PHONY: clean
 clean:
 	rm -f $(XENPATH)
-	rm -rf *.pyc *.pyo *.o *.a *~ xen/util/auxbin.pyc
+	rm -rf build/ *.pyc *.pyo *.o *.a *~ xen/util/auxbin.pyc
 	rm -f xen/lowlevel/xl/_pyxl_types.h
 	rm -f xen/lowlevel/xl/_pyxl_types.c
 	rm -f $(DEPS)

--------------080008020204040007010304
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--------------080008020204040007010304--


From xen-devel-bounces@lists.xen.org Tue Aug 14 17:47:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 17:47:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1LDI-0003Lr-5g; Tue, 14 Aug 2012 17:47:32 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T1LDH-0003La-4i
	for xen-devel@lists.xensource.com; Tue, 14 Aug 2012 17:47:31 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1344966443!9204623!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkwNzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11103 invoked from network); 14 Aug 2012 17:47:23 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 17:47:23 -0000
X-IronPort-AV: E=Sophos;i="4.77,766,1336348800"; d="scan'208";a="14009351"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Aug 2012 17:47:21 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 14 Aug 2012 18:47:21 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1T1LD7-0006Sa-0F;
	Tue, 14 Aug 2012 17:47:21 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1T1LD3-0007fS-UL;
	Tue, 14 Aug 2012 18:47:20 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13600-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 14 Aug 2012 18:47:17 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13600: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13600 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13600/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13599
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13599
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13599
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13599

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  dc56a9defa30
baseline version:
 xen                  33d596f46521

------------------------------------------------------------
People who touched revisions under test:
  Andres Lagar-Cavilla <andres@lagarcavilla.org>
  Andrew Cooper <andrew.cooper3@citrix.com>
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=dc56a9defa30
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable dc56a9defa30
+ branch=xen-unstable
+ revision=dc56a9defa30
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/staging/xen-unstable.hg
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-unstable.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-unstable.hg
+ hg push -r dc56a9defa30 ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 1 changesets with 1 changes to 1 files

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 17:47:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 17:47:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1LDI-0003Lr-5g; Tue, 14 Aug 2012 17:47:32 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T1LDH-0003La-4i
	for xen-devel@lists.xensource.com; Tue, 14 Aug 2012 17:47:31 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1344966443!9204623!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkwNzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11103 invoked from network); 14 Aug 2012 17:47:23 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 17:47:23 -0000
X-IronPort-AV: E=Sophos;i="4.77,766,1336348800"; d="scan'208";a="14009351"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Aug 2012 17:47:21 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 14 Aug 2012 18:47:21 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1T1LD7-0006Sa-0F;
	Tue, 14 Aug 2012 17:47:21 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1T1LD3-0007fS-UL;
	Tue, 14 Aug 2012 18:47:20 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13600-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 14 Aug 2012 18:47:17 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13600: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13600 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13600/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13599
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13599
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13599
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13599

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  dc56a9defa30
baseline version:
 xen                  33d596f46521

------------------------------------------------------------
People who touched revisions under test:
  Andres Lagar-Cavilla <andres@lagarcavilla.org>
  Andrew Cooper <andrew.cooper3@citrix.com>
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=dc56a9defa30
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable dc56a9defa30
+ branch=xen-unstable
+ revision=dc56a9defa30
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/staging/xen-unstable.hg
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-unstable.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-unstable.hg
+ hg push -r dc56a9defa30 ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 1 changesets with 1 changes to 1 files

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 17:52:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 17:52:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1LHn-0003Z1-Sb; Tue, 14 Aug 2012 17:52:11 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1T1LHm-0003Yq-G5
	for Xen-devel@lists.xensource.com; Tue, 14 Aug 2012 17:52:10 +0000
Received: from [85.158.139.83:5730] by server-10.bemta-5.messagelabs.com id
	54/43-13125-9409A205; Tue, 14 Aug 2012 17:52:09 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-12.tower-182.messagelabs.com!1344966727!28117854!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjg5MTc4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31780 invoked from network); 14 Aug 2012 17:52:08 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-12.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 14 Aug 2012 17:52:08 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7EHq0p4019354
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 14 Aug 2012 17:52:01 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7EHpxKP018210
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 14 Aug 2012 17:52:00 GMT
Received: from abhmt113.oracle.com (abhmt113.oracle.com [141.146.116.65])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7EHpxuX012790; Tue, 14 Aug 2012 12:51:59 -0500
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 14 Aug 2012 10:51:59 -0700
Date: Tue, 14 Aug 2012 10:51:57 -0700
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20120814105157.044a755e@mantra.us.oracle.com>
In-Reply-To: <alpine.DEB.2.02.1208141842460.21096@kaball.uk.xensource.com>
References: <20120626181707.4203d336@mantra.us.oracle.com>
	<CAFLBxZagm=53bZ=KaWmd7XQkkAduD+znECJRsaJUqrGJDZgkQQ@mail.gmail.com>
	<20120801153439.3f81c923@mantra.us.oracle.com>
	<501A4E0C.1090509@eu.citrix.com>
	<20120813151446.22ae85b5@mantra.us.oracle.com>
	<alpine.DEB.2.02.1208141144070.21096@kaball.uk.xensource.com>
	<20120814103827.7dcf55f1@mantra.us.oracle.com>
	<alpine.DEB.2.02.1208141842460.21096@kaball.uk.xensource.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.7.6 (GTK+ 2.18.9; x86_64-redhat-linux-gnu)
Mime-Version: 1.0
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	"Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Keir Fraser <keir.xen@gmail.com>, Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [HYBRID]: status update...
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 14 Aug 2012 18:42:54 +0100
Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:

> On Tue, 14 Aug 2012, Mukesh Rathor wrote:
> > On Tue, 14 Aug 2012 11:44:44 +0100
> > Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
> > 
> > > On Mon, 13 Aug 2012, Mukesh Rathor wrote:
> > > > Ok, I changed all code references from xen_hybrid_domain to
> > > > xen_pvh_domain in linux. Changing xen code too. So it's PVH now.
> > > 
> > > What would xen_pv_domain() and xen_hvm_domain() return in a
> > > hybrid guest?
> > 
> > xen_pv_domain() == TRUE
> > xen_hvm_domain() == FALSE
> > 
> 
> good, just as I expected :)

BTW, being the good hybrid that it is, it uses fields from both the
pv_domain and hvm_domain structs. Combining them into a union created
difficulties for me. I experimented with creating a new struct,
hyb_domain, and with adding the hvm-related fields to the pv_domain
struct for hybrid, but both involved way too much code change. So I'm
back to having them separate again. LMK if there are any objections to
undoing the union.

thanks,
Mukesh

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 18:08:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 18:08:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1LWv-00043s-L5; Tue, 14 Aug 2012 18:07:49 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <msw@amazon.com>) id 1T1LWu-00043n-LV
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 18:07:49 +0000
Received: from [85.158.143.35:29221] by server-2.bemta-4.messagelabs.com id
	92/E7-31966-3F39A205; Tue, 14 Aug 2012 18:07:47 +0000
X-Env-Sender: msw@amazon.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1344967665!14077443!1
X-Originating-IP: [72.21.196.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNzIuMjEuMTk2LjI1ID0+IDExMDY1Mw==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10789 invoked from network); 14 Aug 2012 18:07:46 -0000
Received: from smtp-fw-2101.amazon.com (HELO smtp-fw-2101.amazon.com)
	(72.21.196.25)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 18:07:46 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=amazon.com; i=msw@amazon.com; q=dns/txt; s=rte02;
	t=1344967666; x=1376503666;
	h=date:from:to:cc:subject:message-id:references:
	mime-version:in-reply-to;
	bh=khKEDzTA6FyNHxYeZklZw0b36OEl5C1FVhDBqQmOp4g=;
	b=C2RE+ENeFmDuNHEzB5Msh9594kN9Y1/kxehkOjaK6axaIjlVoPWWOzT2
	1ddmoMHL4X6hVs+CbNrFMum3oVgKcA==;
X-IronPort-AV: E=Sophos;i="4.77,768,1336348800"; d="scan'208";a="423329246"
Received: from smtp-in-0191.sea3.amazon.com ([10.224.12.28])
	by smtp-border-fw-out-2101.iad2.amazon.com with
	ESMTP/TLS/DHE-RSA-AES256-SHA; 14 Aug 2012 18:07:13 +0000
Received: from ex10-hub-31006.ant.amazon.com (ex10-hub-31006.sea31.amazon.com
	[10.185.176.13])
	by smtp-in-0191.sea3.amazon.com (8.13.8/8.13.8) with ESMTP id
	q7EI6jlN019697
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=FAIL);
	Tue, 14 Aug 2012 18:06:45 GMT
Received: from US-SEA-R8XVZTX (10.224.80.42) by ex10-hub-31006.ant.amazon.com
	(10.185.176.13) with Microsoft SMTP Server id 14.2.247.3;
	Tue, 14 Aug 2012 11:06:43 -0700
Received: by US-SEA-R8XVZTX (sSMTP sendmail emulation); Tue, 14 Aug 2012
	11:06:44 -0700
Date: Tue, 14 Aug 2012 11:06:43 -0700
From: Matt Wilson <msw@amazon.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Message-ID: <20120814180631.GA9984@US-SEA-R8XVZTX>
References: <1809175cdc9b3a606d75.1343749000@kaos-source-31003.sea31.amazon.com>
	<20120807032453.GB4324@US-SEA-R8XVZTX>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20120807032453.GB4324@US-SEA-R8XVZTX>
User-Agent: Mutt/1.5.20 (2009-12-10)
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [Xen-devel] [PATCH DOCDAY v3] docs: improve documentation of
 Xen command line parameters
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Aug 06, 2012 at 08:24:53PM -0700, Wilson, Matt wrote:
> On Tue, Jul 31, 2012 at 08:36:40AM -0700, Matt Wilson wrote:
> > This change improves documentation for several Xen command line
> > parameters. Some of the Itanium-specific options are now removed. A
> > more thorough check should be performed to remove any other remnants.
> >
> > I've reformatted some of the entries to fit in 80 column terminals.
> >
> > Options that are yet undocumented but accept standard boolean /
> > integer values are now annotated as such.
> >
> > The size suffixes have been corrected to use the binary prefixes
> > instead of decimal prefixes.
> >
> > Changes since v2:
> >  * Change *bi prefixes to GiB, MiB, KiB
> >
> > Signed-off-by: Matt Wilson <msw@amazon.com>
> > Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
> 
> George's concerns were addressed in this version, and Andrew gave an
> Ack. Anything else keeping this from landing in staging?

Ping?

Matt

> > diff -r bf922651da96 -r 1809175cdc9b docs/misc/xen-command-line.markdown
> > --- a/docs/misc/xen-command-line.markdown       Sat Jul 28 17:27:30 2012 +0000
> > +++ b/docs/misc/xen-command-line.markdown       Mon Jul 30 19:04:59 2012 +0000
> > @@ -46,9 +46,9 @@ if a leading `0` is present.
> >
> >  A size parameter may be any integer, with a size suffix
> >
> > -* `G` or `g`: Giga (2^30)
> > -* `M` or `m`: Mega (2^20)
> > -* `K` or `k`: Kilo (2^10)
> > +* `G` or `g`: GiB (2^30)
> > +* `M` or `m`: MiB (2^20)
> > +* `K` or `k`: KiB (2^10)
> >  * `B` or `b`: Bytes
> >
> >  Without a size suffix, the default will be kilo.
> > @@ -107,8 +107,10 @@ Specify which ACPI MADT table to parse f
> >  than one is present.
> >
> >  ### acpi\_pstate\_strict
> > +> `= <integer>`
> >
> >  ### acpi\_skip\_timer\_override
> > +> `= <boolean>`
> >
> >  Instruct Xen to ignore timer-interrupt override.
> >
> > @@ -117,6 +119,8 @@ the domain 0 kernel this option is autom
> >  domain 0 command line
> >
> >  ### acpi\_sleep
> > +> `= s3_bios | s3_mode`
> > +
> >  ### allowsuperpage
> >  > `= <boolean>`
> >
> > @@ -136,12 +140,12 @@ there are more than 8 CPUs, Xen will swi
> >
> >  > Default: `false`
> >
> > -Force boot on potentially unsafe systems. By default Xen will refuse to boot on
> > -systems with the following errata:
> > +Force boot on potentially unsafe systems. By default Xen will refuse
> > +to boot on systems with the following errata:
> >
> >  * AMD Erratum 121. Processors with this erratum are subject to a guest
> > -  triggerable Denial of Service. Override only if you trust all of your PV
> > -  guests.
> > +  triggerable Denial of Service. Override only if you trust all of
> > +  your PV guests.
> >
> >  ### apic\_verbosity
> >  > `= verbose | debug`
> > @@ -153,15 +157,16 @@ Increase the verbosity of the APIC code
> >
> >  > Default: `true`
> >
> > -Permits Xen to set up and use PCI Address Translation Services, which is required
> > -for PCI Passthrough.
> > +Permits Xen to set up and use PCI Address Translation Services, which
> > +is required for PCI Passthrough.
> >
> >  ### availmem
> >  > `= <size>`
> >
> >  > Default: `0` (no limit)
> >
> > -Specify a maximum amount of available memory, to which Xen will clamp the e820 table.
> > +Specify a maximum amount of available memory, to which Xen will clamp
> > +the e820 table.
> >
> >  ### badpage
> >  > `= List of [ <integer> | <integer>-<integer> ]`
> > @@ -176,8 +181,9 @@ Xen's command line.
> >
> >  > Default: `true`
> >
> > -Scrub free RAM during boot.  This is a safety feature to prevent accidentally leaking
> > -sensitive VM data into other VMs if Xen crashes and reboots.
> > +Scrub free RAM during boot.  This is a safety feature to prevent
> > +accidentally leaking sensitive VM data into other VMs if Xen crashes
> > +and reboots.
> >
> >  ### cachesize
> >  > `= <size>`
> > @@ -227,7 +233,6 @@ Both option `com1` and `com2` follow the
> >
> >  A typical setup for most situations might be `com1=115200,8n1`
> >
> > -
> >  ### conring\_size
> >  > `= <size>`
> >
> > @@ -300,25 +305,30 @@ Indicate where the responsibility for dr
> >  ### cpuid\_mask\_cpu (AMD only)
> >  > `= fam_0f_rev_c | fam_0f_rev_d | fam_0f_rev_e | fam_0f_rev_f | fam_0f_rev_g | fam_10_rev_b | fam_10_rev_c | fam_11_rev_b`
> >
> > -If the other **cpuid\_mask\_{,ext\_}e{c,d}x** options are fully set (unspecified
> > -on the command line), specify a pre-canned cpuid mask to mask the current
> > -processor down to appear as the specified processor.  It is important to ensure
> > -that all hosts in a pool appear the same to guests to allow successful live
> > -migration.
> > +If the other **cpuid\_mask\_{,ext\_}e{c,d}x** options are fully set
> > +(unspecified on the command line), specify a pre-canned cpuid mask to
> > +mask the current processor down to appear as the specified processor.
> > +It is important to ensure that all hosts in a pool appear the same to
> > +guests to allow successful live migration.
> >
> >  ### cpuid\_mask\_ ecx,edx,ext\_ecx,ext\_edx,xsave_eax
> >  > `= <integer>`
> >
> >  > Default: `~0` (all bits set)
> >
> > -These five command line parameters are used to specify cpuid masks to help with
> > -cpuid levelling across a pool of hosts.  Setting a bit in the mask indicates that
> > -the feature should be enabled, while clearing a bit in the mask indicates that
> > -the feature should be disabled.  It is important to ensure that all hosts in a
> > -pool appear the same to guests to allow successful live migration.
> > +These five command line parameters are used to specify cpuid masks to
> > +help with cpuid levelling across a pool of hosts.  Setting a bit in
> > +the mask indicates that the feature should be enabled, while clearing
> > +a bit in the mask indicates that the feature should be disabled.  It
> > +is important to ensure that all hosts in a pool appear the same to
> > +guests to allow successful live migration.
> >
> >  ### cpuidle
> > +> `= <boolean>`
> > +
> >  ### cpuinfo
> > +> `= <boolean>`
> > +
> >  ### crashinfo_maxaddr
> >  > `= <size>`
> >
> > @@ -328,17 +338,42 @@ Specify the maximum address to allocate
> >  combination with the `low_crashinfo` command line option.
> >
> >  ### crashkernel
> > +> `= <ramsize-range>:<size>[,...][@<offset>]`
> > +
> >  ### credit2\_balance\_over
> > +> `= <integer>`
> > +
> >  ### credit2\_balance\_under
> > +> `= <integer>`
> > +
> >  ### credit2\_load\_window\_shift
> > +> `= <integer>`
> > +
> >  ### debug\_stack\_lines
> > +> `= <integer>`
> > +
> > +> Default: `20`
> > +
> > +Limits the number of lines printed in Xen stack traces.
> > +
> >  ### debugtrace
> > +> `= <integer>`
> > +
> > +> Default: `128`
> > +
> > +Specify the size of the console debug trace buffer in KiB. The debug
> > +trace feature is only enabled in debugging builds of Xen.
> > +
> >  ### dma\_bits
> >  > `= <integer>`
> >
> >  Specify the bit width of the DMA heap.
> >
> >  ### dom0\_ioports\_disable
> > +> `= List of <hex>-<hex>`
> > +
> > +Specify a list of IO ports to be excluded from dom0 access.
> > +
> >  ### dom0\_max\_vcpus
> >  > `= <integer>`
> >
> > @@ -372,6 +407,8 @@ For example, to set dom0's initial memor
> >  allow it to balloon up as far as 1GB use `dom0_mem=512M,max:1G`
> >
> >  ### dom0\_shadow
> > +> `= <boolean>`
> > +
> >  ### dom0\_vcpus\_pin
> >  > `= <boolean>`
> >
> > @@ -379,10 +416,21 @@ allow it to balloon up as far as 1GB use
> >
> >  Pin dom0 vcpus to their respective pcpus
> >
> > -### dom0\_vhpt\_size\_log2
> > -### dom\_rid\_bits
> >  ### e820-mtrr-clip
> > +> `= <boolean>`
> > +
> > +Flag that specifies if RAM should be clipped to the highest cacheable
> > +MTRR.
> > +
> > +> Default: `true` on Intel CPUs, otherwise `false`
> > +
> >  ### e820-verbose
> > +> `= <boolean>`
> > +
> > +> Default: `false`
> > +
> > +Flag that enables verbose output when processing e820 information and
> > +applying clipping.
> >
> >  ### edd (x86)
> >  > `= off | on | skipmbr`
> > @@ -397,17 +445,32 @@ Either force retrieval of monitor EDID i
> >  disable it (edid=no). This option should not normally be required
> >  except for debugging purposes.
> >
> > -### efi\_print
> >  ### extra\_guest\_irqs
> >  > `= <number>`
> >
> >  Increase the number of PIRQs available for the guest. The default is 32.
> >
> >  ### flask\_enabled
> > +> `= <integer>`
> > +
> >  ### flask\_enforcing
> > +> `= <integer>`
> > +
> >  ### font
> > +> `= <height>` where height is `8x8 | 8x14 | 8x16`
> > +
> > +Specify the font size when using the VESA console driver.
> > +
> >  ### gdb
> > +> `= <baud>[/<clock_hz>][,DPS[,<io-base>[,<irq>[,<port-bdf>[,<bridge-bdf>]]]] | pci | amt ] `
> > +
> > +Specify the serial parameters for the GDB stub.
> > +
> >  ### gnttab\_max\_nr\_frames
> > +> `= <integer>`
> > +
> > +Specify the maximum number of frames per grant table operation.
> > +
> >  ### guest\_loglvl
> >  > `= <level>[/<rate-limited level>]` where level is `none | error | warning | info | debug | all`
> >
> > @@ -420,15 +483,41 @@ The optional `<rate-limited level>` opti
> >  should be rate limited.
> >
> >  ### hap\_1gb
> > +> `= <boolean>`
> > +
> > +> Default: `true`
> > +
> > +Flag to enable 1 GB host page table support for Hardware Assisted
> > +Paging (HAP).
> > +
> >  ### hap\_2mb
> > +> `= <boolean>`
> > +
> > +> Default: `true`
> > +
> > +Flag to enable 2 MB host page table support for Hardware Assisted
> > +Paging (HAP).
> > +
> >  ### hpetbroadcast
> > +> `= <boolean>`
> > +
> >  ### hvm\_debug
> > +> `= <integer>`
> > +
> >  ### hvm\_port80
> > +> `= <boolean>`
> > +
> >  ### idle\_latency\_factor
> > +> `= <integer>`
> > +
> >  ### ioapic\_ack
> >  ### iommu
> >  ### iommu\_inclusive\_mapping
> > +> `= <boolean>`
> > +
> >  ### irq\_ratelimit
> > +> `= <integer>`
> > +
> >  ### irq\_vector\_map
> >  ### lapic
> >
> > @@ -437,7 +526,11 @@ if left disabled by the BIOS.  This opti
> >  all.
> >
> >  ### lapic\_timer\_c2\_ok
> > +> `= <boolean>`
> > +
> >  ### ler
> > +> `= <boolean>`
> > +
> >  ### loglvl
> >  > `= <level>[/<rate-limited level>]` where level is `none | error | warning | info | debug | all`
> >
> > @@ -461,18 +554,38 @@ so the crash kernel may find find them.
> >  with **crashinfo_maxaddr**.
> >
> >  ### max\_cstate
> > +> `= <integer>`
> > +
> >  ### max\_gsi\_irqs
> > +> `= <integer>`
> > +
> >  ### maxcpus
> > +> `= <integer>`
> > +
> >  ### mce
> > +> `= <integer>`
> > +
> >  ### mce\_fb
> > +> `= <integer>`
> > +
> >  ### mce\_verbosity
> > +> `= verbose`
> > +
> > +Specify verbose machine check output.
> > +
> >  ### mem
> >  > `= <size>`
> >
> > -Specifies the maximum address of physical RAM.  Any RAM beyond this
> > +Specify the maximum address of physical RAM.  Any RAM beyond this
> >  limit is ignored by Xen.
> >
> >  ### mmcfg
> > +> `= <boolean>[,amd-fam10]`
> > +
> > +> Default: `1`
> > +
> > +Specify if the MMConfig space should be enabled.
> > +
> >  ### nmi
> >  > `= ignore | dom0 | fatal`
> >
> > @@ -493,6 +606,8 @@ domain 0 kernel this option is automatic
> >  0 command line.
> >
> >  ### nofxsr
> > +> `= <boolean>`
> > +
> >  ### noirqbalance
> >  > `= <boolean>`
> >
> > @@ -501,11 +616,15 @@ systems such as Dell 1850/2850 that have
> >  IRQ routing issues.
> >
> >  ### nolapic
> > +> `= <boolean>`
> > +
> > +> Default: `false`
> >
> >  Ignore the local APIC on a uniprocessor system, even if enabled by the
> >  BIOS.  This option will accept value.
> >
> >  ### no-real-mode (x86)
> > +> `= <boolean>`
> >
> >  Do not execute real-mode bootstrap code when booting Xen. This option
> >  should not be used except for debugging. It will effectively disable
> > @@ -519,6 +638,10 @@ catching debug output.  Defaults to auto
> >  seconds.
> >
> >  ### noserialnumber
> > +> `= <boolean>`
> > +
> > +Disable CPU serial number reporting.
> > +
> >  ### nosmp
> >  > `= <boolean>`
> >
> > @@ -526,11 +649,39 @@ Disable SMP support.  No secondary proce
> >  Defaults to booting secondary processors.
> >
> >  ### nr\_irqs
> > +> `= <integer>`
> > +
> >  ### numa
> > -### pervcpu\_vhpt
> > +> `= on | off | fake=<integer> | noacpi`
> > +
> > +> Default: `on`
> > +
> >  ### ple\_gap
> > +> `= <integer>`
> > +
> >  ### ple\_window
> > +> `= <integer>`
> > +
> >  ### reboot
> > +> `= b[ios] | t[riple] | k[bd] | n[o] [, [w]arm | [c]old]`
> > +
> > +> Default: `0`
> > +
> > +Specify the host reboot method.
> > +
> > +`warm` instructs Xen to not set the cold reboot flag.
> > +
> > +`cold` instructs Xen to set the cold reboot flag.
> > +
> > +`bios` instructs Xen to reboot the host by jumping to BIOS. This is
> > +only available on 32-bit x86 platforms.
> > +
> > +`triple` instructs Xen to reboot the host by causing a triple fault.
> > +
> > +`kbd` instructs Xen to reboot the host via the keyboard controller.
> > +
> > +`acpi` instructs Xen to reboot the host using RESET_REG in the ACPI FADT.
> > +
> >  ### sched
> >  > `= credit | credit2 | sedf | arinc653`
> >
> > @@ -539,10 +690,20 @@ Defaults to booting secondary processors
> >  Choose the default scheduler.
> >
> >  ### sched\_credit2\_migrate\_resist
> > +> `= <integer>`
> > +
> >  ### sched\_credit\_default\_yield
> > +> `= <boolean>`
> > +
> >  ### sched\_credit\_tslice\_ms
> > +> `= <integer>`
> > +
> >  ### sched\_ratelimit\_us
> > +> `= <integer>`
> > +
> >  ### sched\_smt\_power\_savings
> > +> `= <boolean>`
> > +
> >  ### serial\_tx\_buffer
> >  > `= <size>`
> >
> > @@ -551,7 +712,15 @@ Choose the default scheduler.
> >  Set the serial transmit buffer size.
> >
> >  ### smep
> > +> `= <boolean>`
> > +
> > +> Default: `true`
> > +
> > +Flag to enable Supervisor Mode Execution Protection.
> > +
> >  ### snb\_igd\_quirk
> > +> `= <boolean>`
> > +
> >  ### sync\_console
> >  > `= <boolean>`
> >
> > @@ -561,28 +730,80 @@ Flag to force synchronous console output
> >  not suitable for production environments due to incurred overhead.
> >
> >  ### tboot
> > +> `= 0x<phys_addr>`
> > +
> > +Specify the physical address of the trusted boot shared page.
> > +
> >  ### tbuf\_size
> >  > `= <integer>`
> >
> >  Specify the per-cpu trace buffer size in pages.
> >
> >  ### tdt
> > +> `= <boolean>`
> > +
> > +> Default: `true`
> > +
> > +Flag to enable TSC deadline as the APIC timer mode.
> > +
> >  ### tevt\_mask
> > +> `= <integer>`
> > +
> > +Specify a mask for Xen event tracing. This allows Xen tracing to be
> > +enabled at boot. Refer to the xentrace(8) documentation for a list of
> > +valid event mask values. In order to enable tracing, a buffer size (in
> > +pages) must also be specified via the tbuf\_size parameter.
> > +
> >  ### tickle\_one\_idle\_cpu
> > +> `= <boolean>`
> > +
> >  ### timer\_slop
> > +> `= <integer>`
> > +
> >  ### tmem
> > +> `= <boolean>`
> > +
> >  ### tmem\_compress
> > +> `= <boolean>`
> > +
> >  ### tmem\_dedup
> > +> `= <boolean>`
> > +
> >  ### tmem\_lock
> > +> `= <integer>`
> > +
> >  ### tmem\_shared\_auth
> > +> `= <boolean>`
> > +
> >  ### tmem\_tze
> > +> `= <integer>`
> > +
> >  ### tsc
> > +> `= unstable | skewed`
> > +
> >  ### ucode
> >  ### unrestricted\_guest
> > +> `= <boolean>`
> > +
> >  ### vcpu\_migration\_delay
> > +> `= <integer>`
> > +
> > +> Default: `0`
> > +
> > +Specify a delay, in microseconds, between migrations of a VCPU between
> > +PCPUs when using the credit1 scheduler. This prevents rapid fluttering
> > +of a VCPU between CPUs, and reduces the implicit overheads such as
> > +cache-warming. 1ms (1000) has been measured as a good value.
> > +
> >  ### vesa-map
> > +> `= <integer>`
> > +
> >  ### vesa-mtrr
> > +> `= <integer>`
> > +
> >  ### vesa-ram
> > +> `= <integer>`
> > +
> >  ### vga
> >  > `= ( ask | current | text-80x<rows> | gfx-<width>x<height>x<depth> | mode-<mode> )[,keep]`
> >
> >
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 18:08:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 18:08:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1LWv-00043s-L5; Tue, 14 Aug 2012 18:07:49 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <msw@amazon.com>) id 1T1LWu-00043n-LV
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 18:07:49 +0000
Received: from [85.158.143.35:29221] by server-2.bemta-4.messagelabs.com id
	92/E7-31966-3F39A205; Tue, 14 Aug 2012 18:07:47 +0000
X-Env-Sender: msw@amazon.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1344967665!14077443!1
X-Originating-IP: [72.21.196.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNzIuMjEuMTk2LjI1ID0+IDExMDY1Mw==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10789 invoked from network); 14 Aug 2012 18:07:46 -0000
Received: from smtp-fw-2101.amazon.com (HELO smtp-fw-2101.amazon.com)
	(72.21.196.25)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 18:07:46 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=amazon.com; i=msw@amazon.com; q=dns/txt; s=rte02;
	t=1344967666; x=1376503666;
	h=date:from:to:cc:subject:message-id:references:
	mime-version:in-reply-to;
	bh=khKEDzTA6FyNHxYeZklZw0b36OEl5C1FVhDBqQmOp4g=;
	b=C2RE+ENeFmDuNHEzB5Msh9594kN9Y1/kxehkOjaK6axaIjlVoPWWOzT2
	1ddmoMHL4X6hVs+CbNrFMum3oVgKcA==;
X-IronPort-AV: E=Sophos;i="4.77,768,1336348800"; d="scan'208";a="423329246"
Received: from smtp-in-0191.sea3.amazon.com ([10.224.12.28])
	by smtp-border-fw-out-2101.iad2.amazon.com with
	ESMTP/TLS/DHE-RSA-AES256-SHA; 14 Aug 2012 18:07:13 +0000
Received: from ex10-hub-31006.ant.amazon.com (ex10-hub-31006.sea31.amazon.com
	[10.185.176.13])
	by smtp-in-0191.sea3.amazon.com (8.13.8/8.13.8) with ESMTP id
	q7EI6jlN019697
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=FAIL);
	Tue, 14 Aug 2012 18:06:45 GMT
Received: from US-SEA-R8XVZTX (10.224.80.42) by ex10-hub-31006.ant.amazon.com
	(10.185.176.13) with Microsoft SMTP Server id 14.2.247.3;
	Tue, 14 Aug 2012 11:06:43 -0700
Received: by US-SEA-R8XVZTX (sSMTP sendmail emulation); Tue, 14 Aug 2012
	11:06:44 -0700
Date: Tue, 14 Aug 2012 11:06:43 -0700
From: Matt Wilson <msw@amazon.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Message-ID: <20120814180631.GA9984@US-SEA-R8XVZTX>
References: <1809175cdc9b3a606d75.1343749000@kaos-source-31003.sea31.amazon.com>
	<20120807032453.GB4324@US-SEA-R8XVZTX>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20120807032453.GB4324@US-SEA-R8XVZTX>
User-Agent: Mutt/1.5.20 (2009-12-10)
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [Xen-devel] [PATCH DOCDAY v3] docs: improve documentation of
 Xen command line parameters
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Aug 06, 2012 at 08:24:53PM -0700, Wilson, Matt wrote:
> On Tue, Jul 31, 2012 at 08:36:40AM -0700, Matt Wilson wrote:
> > This change improves documentation for several Xen command line
> > parameters. Some of the Itanium-specific options are now removed. A
> > more thorough check should be performed to remove any other remnants.
> >
> > I've reformatted some of the entries to fit in 80 column terminals.
> >
> > Options that are yet undocumented but accept standard boolean /
> > integer values are now annotated as such.
> >
> > The size suffixes have been corrected to use the binary prefixes
> > instead of decimal prefixes.
> >
> > Changes since v2:
> >  * Change *bi prefixes to GiB, MiB, KiB
> >
> > Signed-off-by: Matt Wilson <msw@amazon.com>
> > Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
> 
> George's concerns were addressed in this version, and Andrew gave an
> Ack. Anything else keeping this from landing in staging?

Ping?

Matt

> > diff -r bf922651da96 -r 1809175cdc9b docs/misc/xen-command-line.markdown
> > --- a/docs/misc/xen-command-line.markdown       Sat Jul 28 17:27:30 2012 +0000
> > +++ b/docs/misc/xen-command-line.markdown       Mon Jul 30 19:04:59 2012 +0000
> > @@ -46,9 +46,9 @@ if a leading `0` is present.
> >
> >  A size parameter may be any integer, with a size suffix
> >
> > -* `G` or `g`: Giga (2^30)
> > -* `M` or `m`: Mega (2^20)
> > -* `K` or `k`: Kilo (2^10)
> > +* `G` or `g`: GiB (2^30)
> > +* `M` or `m`: MiB (2^20)
> > +* `K` or `k`: KiB (2^10)
> >  * `B` or `b`: Bytes
> >
> >  Without a size suffix, the default will be kilo.
> > @@ -107,8 +107,10 @@ Specify which ACPI MADT table to parse f
> >  than one is present.
> >
> >  ### acpi\_pstate\_strict
> > +> `= <integer>`
> >
> >  ### acpi\_skip\_timer\_override
> > +> `= <boolean>`
> >
> >  Instruct Xen to ignore timer-interrupt override.
> >
> > @@ -117,6 +119,8 @@ the domain 0 kernel this option is autom
> >  domain 0 command line
> >
> >  ### acpi\_sleep
> > +> `= s3_bios | s3_mode`
> > +
> >  ### allowsuperpage
> >  > `= <boolean>`
> >
> > @@ -136,12 +140,12 @@ there are more than 8 CPUs, Xen will swi
> >
> >  > Default: `false`
> >
> > -Force boot on potentially unsafe systems. By default Xen will refuse to boot on
> > -systems with the following errata:
> > +Force boot on potentially unsafe systems. By default Xen will refuse
> > +to boot on systems with the following errata:
> >
> >  * AMD Erratum 121. Processors with this erratum are subject to a guest
> > -  triggerable Denial of Service. Override only if you trust all of your PV
> > -  guests.
> > +  triggerable Denial of Service. Override only if you trust all of
> > +  your PV guests.
> >
> >  ### apic\_verbosity
> >  > `= verbose | debug`
> > @@ -153,15 +157,16 @@ Increase the verbosity of the APIC code
> >
> >  > Default: `true`
> >
> > -Permits Xen to set up and use PCI Address Translation Services, which is required
> > -for PCI Passthrough.
> > +Permits Xen to set up and use PCI Address Translation Services, which
> > +is required for PCI Passthrough.
> >
> >  ### availmem
> >  > `= <size>`
> >
> >  > Default: `0` (no limit)
> >
> > -Specify a maximum amount of available memory, to which Xen will clamp the e820 table.
> > +Specify a maximum amount of available memory, to which Xen will clamp
> > +the e820 table.
> >
> >  ### badpage
> >  > `= List of [ <integer> | <integer>-<integer> ]`
> > @@ -176,8 +181,9 @@ Xen's command line.
> >
> >  > Default: `true`
> >
> > -Scrub free RAM during boot.  This is a safety feature to prevent accidentally leaking
> > -sensitive VM data into other VMs if Xen crashes and reboots.
> > +Scrub free RAM during boot.  This is a safety feature to prevent
> > +accidentally leaking sensitive VM data into other VMs if Xen crashes
> > +and reboots.
> >
> >  ### cachesize
> >  > `= <size>`
> > @@ -227,7 +233,6 @@ Both option `com1` and `com2` follow the
> >
> >  A typical setup for most situations might be `com1=115200,8n1`
> >
> > -
> >  ### conring\_size
> >  > `= <size>`
> >
> > @@ -300,25 +305,30 @@ Indicate where the responsibility for dr
> >  ### cpuid\_mask\_cpu (AMD only)
> >  > `= fam_0f_rev_c | fam_0f_rev_d | fam_0f_rev_e | fam_0f_rev_f | fam_0f_rev_g | fam_10_rev_b | fam_10_rev_c | fam_11_rev_b`
> >
> > -If the other **cpuid\_mask\_{,ext\_}e{c,d}x** options are fully set (unspecified
> > -on the command line), specify a pre-canned cpuid mask to mask the current
> > -processor down to appear as the specified processor.  It is important to ensure
> > -that all hosts in a pool appear the same to guests to allow successful live
> > -migration.
> > +If the other **cpuid\_mask\_{,ext\_}e{c,d}x** options are fully set
> > +(unspecified on the command line), specify a pre-canned cpuid mask to
> > +mask the current processor down to appear as the specified processor.
> > +It is important to ensure that all hosts in a pool appear the same to
> > +guests to allow successful live migration.
> >
> >  ### cpuid\_mask\_ ecx,edx,ext\_ecx,ext\_edx,xsave_eax
> >  > `= <integer>`
> >
> >  > Default: `~0` (all bits set)
> >
> > -These five command line parameters are used to specify cpuid masks to help with
> > -cpuid levelling across a pool of hosts.  Setting a bit in the mask indicates that
> > -the feature should be enabled, while clearing a bit in the mask indicates that
> > -the feature should be disabled.  It is important to ensure that all hosts in a
> > -pool appear the same to guests to allow successful live migration.
> > +These five command line parameters are used to specify cpuid masks to
> > +help with cpuid levelling across a pool of hosts.  Setting a bit in
> > +the mask indicates that the feature should be enabled, while clearing
> > +a bit in the mask indicates that the feature should be disabled.  It
> > +is important to ensure that all hosts in a pool appear the same to
> > +guests to allow successful live migration.
> >
> >  ### cpuidle
> > +> `= <boolean>`
> > +
> >  ### cpuinfo
> > +> `= <boolean>`
> > +
> >  ### crashinfo_maxaddr
> >  > `= <size>`
> >
> > @@ -328,17 +338,42 @@ Specify the maximum address to allocate
> >  combination with the `low_crashinfo` command line option.
> >
> >  ### crashkernel
> > +> `= <ramsize-range>:<size>[,...][@<offset>]`
> > +
> >  ### credit2\_balance\_over
> > +> `= <integer>`
> > +
> >  ### credit2\_balance\_under
> > +> `= <integer>`
> > +
> >  ### credit2\_load\_window\_shift
> > +> `= <integer>`
> > +
> >  ### debug\_stack\_lines
> > +> `= <integer>`
> > +
> > +> Default: `20`
> > +
> > +Limits the number of lines printed in Xen stack traces.
> > +
> >  ### debugtrace
> > +> `= <integer>`
> > +
> > +> Default: `128`
> > +
> > +Specify the size of the console debug trace buffer in KiB. The debug
> > +trace feature is only enabled in debugging builds of Xen.
> > +
> >  ### dma\_bits
> >  > `= <integer>`
> >
> >  Specify the bit width of the DMA heap.
> >
> >  ### dom0\_ioports\_disable
> > +> `= List of <hex>-<hex>`
> > +
> > +Specify a list of IO ports to be excluded from dom0 access.
> > +
> >  ### dom0\_max\_vcpus
> >  > `= <integer>`
> >
> > @@ -372,6 +407,8 @@ For example, to set dom0's initial memor
> >  allow it to balloon up as far as 1GB use `dom0_mem=512M,max:1G`
> >
> >  ### dom0\_shadow
> > +> `= <boolean>`
> > +
> >  ### dom0\_vcpus\_pin
> >  > `= <boolean>`
> >
> > @@ -379,10 +416,21 @@ allow it to balloon up as far as 1GB use
> >
> >  Pin dom0 vcpus to their respective pcpus
> >
> > -### dom0\_vhpt\_size\_log2
> > -### dom\_rid\_bits
> >  ### e820-mtrr-clip
> > +> `= <boolean>`
> > +
> > +Flag that specifies if RAM should be clipped to the highest cacheable
> > +MTRR.
> > +
> > +> Default: `true` on Intel CPUs, otherwise `false`
> > +
> >  ### e820-verbose
> > +> `= <boolean>`
> > +
> > +> Default: `false`
> > +
> > +Flag that enables verbose output when processing e820 information and
> > +applying clipping.
> >
> >  ### edd (x86)
> >  > `= off | on | skipmbr`
> > @@ -397,17 +445,32 @@ Either force retrieval of monitor EDID i
> >  disable it (edid=no). This option should not normally be required
> >  except for debugging purposes.
> >
> > -### efi\_print
> >  ### extra\_guest\_irqs
> >  > `= <number>`
> >
> >  Increase the number of PIRQs available for the guest. The default is 32.
> >
> >  ### flask\_enabled
> > +> `= <integer>`
> > +
> >  ### flask\_enforcing
> > +> `= <integer>`
> > +
> >  ### font
> > +> `= <height>` where height is `8x8 | 8x14 | 8x16`
> > +
> > +Specify the font size when using the VESA console driver.
> > +
> >  ### gdb
> > +> `= <baud>[/<clock_hz>][,DPS[,<io-base>[,<irq>[,<port-bdf>[,<bridge-bdf>]]]] | pci | amt ] `
> > +
> > +Specify the serial parameters for the GDB stub.
> > +
> >  ### gnttab\_max\_nr\_frames
> > +> `= <integer>`
> > +
> > +Specify the maximum number of frames per grant table operation.
> > +
> >  ### guest\_loglvl
> >  > `= <level>[/<rate-limited level>]` where level is `none | error | warning | info | debug | all`
> >
> > @@ -420,15 +483,41 @@ The optional `<rate-limited level>` opti
> >  should be rate limited.
> >
> >  ### hap\_1gb
> > +> `= <boolean>`
> > +
> > +> Default: `true`
> > +
> > +Flag to enable 1 GB host page table support for Hardware Assisted
> > +Paging (HAP).
> > +
> >  ### hap\_2mb
> > +> `= <boolean>`
> > +
> > +> Default: `true`
> > +
> > +Flag to enable 2 MB host page table support for Hardware Assisted
> > +Paging (HAP).
> > +
> >  ### hpetbroadcast
> > +> `= <boolean>`
> > +
> >  ### hvm\_debug
> > +> `= <integer>`
> > +
> >  ### hvm\_port80
> > +> `= <boolean>`
> > +
> >  ### idle\_latency\_factor
> > +> `= <integer>`
> > +
> >  ### ioapic\_ack
> >  ### iommu
> >  ### iommu\_inclusive\_mapping
> > +> `= <boolean>`
> > +
> >  ### irq\_ratelimit
> > +> `= <integer>`
> > +
> >  ### irq\_vector\_map
> >  ### lapic
> >
> > @@ -437,7 +526,11 @@ if left disabled by the BIOS.  This opti
> >  all.
> >
> >  ### lapic\_timer\_c2\_ok
> > +> `= <boolean>`
> > +
> >  ### ler
> > +> `= <boolean>`
> > +
> >  ### loglvl
> >  > `= <level>[/<rate-limited level>]` where level is `none | error | warning | info | debug | all`
> >
> > @@ -461,18 +554,38 @@ so the crash kernel may find find them.
> >  with **crashinfo_maxaddr**.
> >
> >  ### max\_cstate
> > +> `= <integer>`
> > +
> >  ### max\_gsi\_irqs
> > +> `= <integer>`
> > +
> >  ### maxcpus
> > +> `= <integer>`
> > +
> >  ### mce
> > +> `= <integer>`
> > +
> >  ### mce\_fb
> > +> `= <integer>`
> > +
> >  ### mce\_verbosity
> > +> `= verbose`
> > +
> > +Specify verbose machine check output.
> > +
> >  ### mem
> >  > `= <size>`
> >
> > -Specifies the maximum address of physical RAM.  Any RAM beyond this
> > +Specify the maximum address of physical RAM.  Any RAM beyond this
> >  limit is ignored by Xen.
> >
> >  ### mmcfg
> > +> `= <boolean>[,amd-fam10]`
> > +
> > +> Default: `1`
> > +
> > +Specify if the MMConfig space should be enabled.
> > +
> >  ### nmi
> >  > `= ignore | dom0 | fatal`
> >
> > @@ -493,6 +606,8 @@ domain 0 kernel this option is automatic
> >  0 command line.
> >
> >  ### nofxsr
> > +> `= <boolean>`
> > +
> >  ### noirqbalance
> >  > `= <boolean>`
> >
> > @@ -501,11 +616,15 @@ systems such as Dell 1850/2850 that have
> >  IRQ routing issues.
> >
> >  ### nolapic
> > +> `= <boolean>`
> > +
> > +> Default: `false`
> >
> >  Ignore the local APIC on a uniprocessor system, even if enabled by the
> >  BIOS.  This option will accept value.
> >
> >  ### no-real-mode (x86)
> > +> `= <boolean>`
> >
> >  Do not execute real-mode bootstrap code when booting Xen. This option
> >  should not be used except for debugging. It will effectively disable
> > @@ -519,6 +638,10 @@ catching debug output.  Defaults to auto
> >  seconds.
> >
> >  ### noserialnumber
> > +> `= <boolean>`
> > +
> > +Disable CPU serial number reporting.
> > +
> >  ### nosmp
> >  > `= <boolean>`
> >
> > @@ -526,11 +649,39 @@ Disable SMP support.  No secondary proce
> >  Defaults to booting secondary processors.
> >
> >  ### nr\_irqs
> > +> `= <integer>`
> > +
> >  ### numa
> > -### pervcpu\_vhpt
> > +> `= on | off | fake=<integer> | noacpi`
> > +
> > +> Default: `on`
> > +
> >  ### ple\_gap
> > +> `= <integer>`
> > +
> >  ### ple\_window
> > +> `= <integer>`
> > +
> >  ### reboot
> > +> `= b[ios] | t[riple] | k[bd] | n[o] [, [w]arm | [c]old]`
> > +
> > +> Default: `0`
> > +
> > +Specify the host reboot method.
> > +
> > +`warm` instructs Xen to not set the cold reboot flag.
> > +
> > +`cold` instructs Xen to set the cold reboot flag.
> > +
> > +`bios` instructs Xen to reboot the host by jumping to BIOS. This is
> > +only available on 32-bit x86 platforms.
> > +
> > +`triple` instructs Xen to reboot the host by causing a triple fault.
> > +
> > +`kbd` instructs Xen to reboot the host via the keyboard controller.
> > +
> > +`acpi` instructs Xen to reboot the host using RESET_REG in the ACPI FADT.
> > +
> >  ### sched
> >  > `= credit | credit2 | sedf | arinc653`
> >
> > @@ -539,10 +690,20 @@ Defaults to booting secondary processors
> >  Choose the default scheduler.
> >
> >  ### sched\_credit2\_migrate\_resist
> > +> `= <integer>`
> > +
> >  ### sched\_credit\_default\_yield
> > +> `= <boolean>`
> > +
> >  ### sched\_credit\_tslice\_ms
> > +> `= <integer>`
> > +
> >  ### sched\_ratelimit\_us
> > +> `= <integer>`
> > +
> >  ### sched\_smt\_power\_savings
> > +> `= <boolean>`
> > +
> >  ### serial\_tx\_buffer
> >  > `= <size>`
> >
> > @@ -551,7 +712,15 @@ Choose the default scheduler.
> >  Set the serial transmit buffer size.
> >
> >  ### smep
> > +> `= <boolean>`
> > +
> > +> Default: `true`
> > +
> > +Flag to enable Supervisor Mode Execution Protection.
> > +
> >  ### snb\_igd\_quirk
> > +> `= <boolean>`
> > +
> >  ### sync\_console
> >  > `= <boolean>`
> >
> > @@ -561,28 +730,80 @@ Flag to force synchronous console output
> >  not suitable for production environments due to incurred overhead.
> >
> >  ### tboot
> > +> `= 0x<phys_addr>`
> > +
> > +Specify the physical address of the trusted boot shared page.
> > +
> >  ### tbuf\_size
> >  > `= <integer>`
> >
> >  Specify the per-cpu trace buffer size in pages.
> >
> >  ### tdt
> > +> `= <boolean>`
> > +
> > +> Default: `true`
> > +
> > +Flag to enable TSC deadline as the APIC timer mode.
> > +
> >  ### tevt\_mask
> > +> `= <integer>`
> > +
> > +Specify a mask for Xen event tracing. This allows Xen tracing to be
> > +enabled at boot. Refer to the xentrace(8) documentation for a list of
> > +valid event mask values. In order to enable tracing, a buffer size (in
> > +pages) must also be specified via the tbuf\_size parameter.
> > +
> >  ### tickle\_one\_idle\_cpu
> > +> `= <boolean>`
> > +
> >  ### timer\_slop
> > +> `= <integer>`
> > +
> >  ### tmem
> > +> `= <boolean>`
> > +
> >  ### tmem\_compress
> > +> `= <boolean>`
> > +
> >  ### tmem\_dedup
> > +> `= <boolean>`
> > +
> >  ### tmem\_lock
> > +> `= <integer>`
> > +
> >  ### tmem\_shared\_auth
> > +> `= <boolean>`
> > +
> >  ### tmem\_tze
> > +> `= <integer>`
> > +
> >  ### tsc
> > +> `= unstable | skewed`
> > +
> >  ### ucode
> >  ### unrestricted\_guest
> > +> `= <boolean>`
> > +
> >  ### vcpu\_migration\_delay
> > +> `= <integer>`
> > +
> > +> Default: `0`
> > +
> > +Specify a delay, in microseconds, between migrations of a VCPU between
> > +PCPUs when using the credit1 scheduler. This prevents rapid fluttering
> > +of a VCPU between CPUs, and reduces the implicit overheads such as
> > +cache-warming. 1ms (1000) has been measured as a good value.
> > +
> >  ### vesa-map
> > +> `= <integer>`
> > +
> >  ### vesa-mtrr
> > +> `= <integer>`
> > +
> >  ### vesa-ram
> > +> `= <integer>`
> > +
> >  ### vga
> >  > `= ( ask | current | text-80x<rows> | gfx-<width>x<height>x<depth> | mode-<mode> )[,keep]`
> >
> >
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 18:09:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 18:09:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1LYB-00047v-3w; Tue, 14 Aug 2012 18:09:07 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1T1LY9-00047R-Hz
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 18:09:05 +0000
Received: from [85.158.143.35:33811] by server-2.bemta-4.messagelabs.com id
	5A/B8-31966-1449A205; Tue, 14 Aug 2012 18:09:05 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-10.tower-21.messagelabs.com!1344967744!10475786!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkwNzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15439 invoked from network); 14 Aug 2012 18:09:04 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-10.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 18:09:04 -0000
X-IronPort-AV: E=Sophos;i="4.77,768,1336348800"; d="scan'208";a="14009643"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Aug 2012 18:09:04 +0000
Received: from dhcp-3-120.uk.xensource.com.com (10.80.3.120) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 14 Aug 2012 19:09:04 +0100
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Tue, 14 Aug 2012 19:09:59 +0100
Message-ID: <1344967799-6646-1-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
MIME-Version: 1.0
Cc: Christoph Egger <Christoph.Egger@amd.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v4] hotplug/NetBSD: check type of file to attach
	from params
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

xend used to set the xenbus backend entry "type" to either "phy" or
"file", but now libxl sets it to "phy" for both files and block
devices. We have to check the type of the "params" field manually in
order to detect whether we are trying to attach a file or a block
device.
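
The classification rule the patch implements can be sketched on its own;
`classify_disk` is a hypothetical helper name used only for illustration
(the real hotplug script assigns `xtype` inline):

```shell
# Sketch of the patch's params check: a block special file maps to the
# "phy" backend type, a regular file to "file", and anything else
# (including an empty path) is rejected, as error() does in the script.
classify_disk() {
	path=$1
	if [ -b "$path" ]; then
		echo "phy"
	elif [ -f "$path" ]; then
		echo "file"
	else
		return 1
	fi
}

# A freshly created regular file classifies as "file".
img=$(mktemp) || exit 1
classify_disk "$img"
rm -f "$img"
```

Quoting `$path` in the tests matters here, matching the v2 change that
added quotation marks around xparams.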

Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Signed-off-by: Christoph Egger <Christoph.Egger@amd.com>
Signed-off-by: Roger Pau Monne <roger.pau@citrix.com>
---
Changes since v3:

 * Add $xparams (that contains the path to the disk file) to the error
   message.

Changes since v2:

 * Better error messages.

 * Check if params is empty.

 * Replace xenstore_write with xenstore-write in error function.

 * Add quotation marks to xparams when testing.

Changes since v1:

 * Check that file is either a block special file or a regular file
   and report error otherwise.
---
 tools/hotplug/NetBSD/block |   13 +++++++++++--
 1 files changed, 11 insertions(+), 2 deletions(-)

diff --git a/tools/hotplug/NetBSD/block b/tools/hotplug/NetBSD/block
index cf5ff3a..f849fe4 100644
--- a/tools/hotplug/NetBSD/block
+++ b/tools/hotplug/NetBSD/block
@@ -12,15 +12,24 @@ export PATH
 
 error() {
 	echo "$@" >&2
-	xenstore_write $xpath/hotplug-status error
+	xenstore-write $xpath/hotplug-status error
 	exit 1
 }
 	
 
 xpath=$1
 xstatus=$2
-xtype=$(xenstore-read "$xpath/type")
 xparams=$(xenstore-read "$xpath/params")
+if [ -b "$xparams" ]; then
+	xtype="phy"
+elif [ -f "$xparams" ]; then
+	xtype="file"
+elif [ -z "$xparams" ]; then
+	error "$xpath/params is empty, unable to attach block device."
+else
+	error "$xparams is not a valid file type to use as a block device." \
+	      "Only block and regular image files are accepted."
+fi
 
 case $xstatus in
 6)
-- 
1.7.7.5 (Apple Git-26)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 18:14:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 18:14:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1Ld2-0004LI-Rx; Tue, 14 Aug 2012 18:14:08 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1T1Ld1-0004L9-CE
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 18:14:07 +0000
Received: from [85.158.143.35:48516] by server-2.bemta-4.messagelabs.com id
	C9/DC-31966-E659A205; Tue, 14 Aug 2012 18:14:06 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1344968045!15360406!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkwNzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7991 invoked from network); 14 Aug 2012 18:14:06 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 18:14:06 -0000
X-IronPort-AV: E=Sophos;i="4.77,768,1336348800"; d="scan'208";a="14009678"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Aug 2012 18:14:05 +0000
Received: from dhcp-3-120.uk.xensource.com.com (10.80.3.120) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 14 Aug 2012 19:14:05 +0100
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Tue, 14 Aug 2012 19:15:06 +0100
Message-ID: <1344968106-6765-1-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
MIME-Version: 1.0
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH] libxl: fix usage of backend parameter and
	run_hotplug_scripts
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

vif interfaces allow the user to specify the domain that should run
the backend (also known as the driver domain) using the 'backend'
parameter. This is not compatible with run_hotplug_scripts=1, since
libxl can only run the hotplug scripts from domain 0.
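
For illustration, the combination the patch rejects looks roughly like
this (the driver domain name `drvdom` is hypothetical; the two fragments
live in separate files and are shown together only to highlight the
conflict):

```
# xl.conf: xl runs hotplug scripts itself, which it can only do in dom0
run_hotplug_scripts=1

# guest config: asks another domain to host the vif backend --
# with the setting above, libxl now fails the device setup and asks
# the user to set run_hotplug_scripts to 0
vif = [ 'bridge=xenbr0,backend=drvdom' ]
```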

Signed-off-by: Roger Pau Monne <roger.pau@citrix.com>
---
 docs/misc/xl-network-configuration.markdown |    6 ++++--
 tools/libxl/libxl.c                         |   14 ++++++++++++++
 2 files changed, 18 insertions(+), 2 deletions(-)

diff --git a/docs/misc/xl-network-configuration.markdown b/docs/misc/xl-network-configuration.markdown
index 650926c..5e2f049 100644
--- a/docs/misc/xl-network-configuration.markdown
+++ b/docs/misc/xl-network-configuration.markdown
@@ -122,8 +122,10 @@ specified IP address to be used by the guest (blocking all others).
 ### backend
 
 Specifies the backend domain which this device should attach to. This
-defaults to domain 0. Specifying another domain requires setting up a
-driver domain which is outside the scope of this document.
+defaults to domain 0. This option only works if `run_hotplug_scripts`
+is disabled in xl.conf (see the xl.conf(5) man page for more information
+on that option). Specifying another domain requires setting up a driver
+domain, which is outside the scope of this document.
 
 ### rate
 
diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index 8ea3478..6b85cdc 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -2474,6 +2474,8 @@ out:
 int libxl__device_nic_setdefault(libxl__gc *gc, libxl_device_nic *nic,
                                  uint32_t domid)
 {
+    int run_hotplug_scripts;
+
     if (!nic->mtu)
         nic->mtu = 1492;
     if (!nic->model) {
@@ -2503,6 +2505,18 @@ int libxl__device_nic_setdefault(libxl__gc *gc, libxl_device_nic *nic,
                                   libxl__xen_script_dir_path()) < 0 )
         return ERROR_FAIL;
 
+    run_hotplug_scripts = libxl__hotplug_settings(gc, XBT_NULL);
+    if (run_hotplug_scripts < 0) {
+        LOG(ERROR, "unable to get current hotplug scripts execution setting");
+        return run_hotplug_scripts;
+    }
+    if (nic->backend_domid != LIBXL_TOOLSTACK_DOMID && run_hotplug_scripts) {
+        LOG(ERROR, "the vif 'backend=' option cannot be used in conjunction "
+                   "with run_hotplug_scripts, please set run_hotplug_scripts "
+                   "to 0 in xl.conf");
+        return ERROR_FAIL;
+    }
+
     switch (libxl__domain_type(gc, domid)) {
     case LIBXL_DOMAIN_TYPE_HVM:
         if (!nic->nictype)
-- 
1.7.7.5 (Apple Git-26)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 19:32:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 19:32:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1MqN-000553-TA; Tue, 14 Aug 2012 19:31:59 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Santosh.Jodh@citrix.com>) id 1T1MqL-00054y-V3
	for xen-devel@lists.xensource.com; Tue, 14 Aug 2012 19:31:58 +0000
Received: from [85.158.143.99:31612] by server-3.bemta-4.messagelabs.com id
	11/89-09529-DA7AA205; Tue, 14 Aug 2012 19:31:57 +0000
X-Env-Sender: Santosh.Jodh@citrix.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1344972715!21147040!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzEyMjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9778 invoked from network); 14 Aug 2012 19:31:56 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 19:31:56 -0000
X-IronPort-AV: E=Sophos;i="4.77,768,1336363200"; d="scan'208";a="205171674"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Aug 2012 15:31:30 -0400
Received: from REDBLD-XS.ad.xensource.com (10.232.6.25) by
	smtprelay.citrix.com (10.13.107.66) with Microsoft SMTP Server id
	8.3.213.0; Tue, 14 Aug 2012 15:31:30 -0400
MIME-Version: 1.0
X-Mercurial-Node: 105dd8275da21b61a9c74a81df898b6c4b98ec39
Message-ID: <105dd8275da21b61a9c7.1344972689@REDBLD-XS.ad.xensource.com>
User-Agent: Mercurial-patchbomb/1.6.4
Date: Tue, 14 Aug 2012 12:31:29 -0700
From: Santosh Jodh <santosh.jodh@citrix.com>
To: <xen-devel@lists.xensource.com>
Cc: wei.wang2@amd.com, tim@xen.org, xiantao.zhang@intel.com
Subject: [Xen-devel] [PATCH] [mq]: santosh.diff
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

diff -r dc56a9defa30 -r 105dd8275da2 xen/drivers/passthrough/amd/pci_amd_iommu.c
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c	Tue Aug 14 10:28:14 2012 +0200
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c	Tue Aug 14 11:24:32 2012 -0700
@@ -22,6 +22,7 @@
 #include <xen/pci.h>
 #include <xen/pci_regs.h>
 #include <xen/paging.h>
+#include <xen/softirq.h>
 #include <asm/hvm/iommu.h>
 #include <asm/amd-iommu.h>
 #include <asm/hvm/svm/amd-iommu-proto.h>
@@ -512,6 +513,72 @@ static int amd_iommu_group_id(u16 seg, u
 
 #include <asm/io_apic.h>
 
+static void amd_dump_p2m_table_level(struct page_info* pg, int level, 
+                                     paddr_t gpa, int indent)
+{
+    paddr_t address;
+    void *table_vaddr, *pde;
+    paddr_t next_table_maddr;
+    int index, next_level, present;
+    u32 *entry;
+
+    if ( level < 1 )
+        return;
+
+    table_vaddr = __map_domain_page(pg);
+    if ( table_vaddr == NULL )
+    {
+        printk("Failed to map IOMMU domain page %"PRIpaddr"\n", 
+                page_to_maddr(pg));
+        return;
+    }
+
+    for ( index = 0; index < PTE_PER_TABLE_SIZE; index++ )
+    {
+        if ( !(index % 2) )
+            process_pending_softirqs();
+
+        pde = table_vaddr + (index * IOMMU_PAGE_TABLE_ENTRY_SIZE);
+        next_table_maddr = amd_iommu_get_next_table_from_pte(pde);
+        entry = (u32*)pde;
+
+        present = get_field_from_reg_u32(entry[0],
+                                         IOMMU_PDE_PRESENT_MASK,
+                                         IOMMU_PDE_PRESENT_SHIFT);
+
+        if ( !present )
+            continue;
+
+        next_level = get_field_from_reg_u32(entry[0],
+                                            IOMMU_PDE_NEXT_LEVEL_MASK,
+                                            IOMMU_PDE_NEXT_LEVEL_SHIFT);
+
+        address = gpa + amd_offset_level_address(index, level);
+        if ( next_level >= 1 )
+            amd_dump_p2m_table_level(
+                maddr_to_page(next_table_maddr), level - 1,
+                address, indent + 1);
+        else
+            printk("%*s" "gfn: %08lx  mfn: %08lx\n",
+                   indent, " ",
+                   (unsigned long)PFN_DOWN(address),
+                   (unsigned long)PFN_DOWN(next_table_maddr));
+    }
+
+    unmap_domain_page(table_vaddr);
+}
+
+static void amd_dump_p2m_table(struct domain *d)
+{
+    struct hvm_iommu *hd  = domain_hvm_iommu(d);
+
+    if ( !hd->root_table ) 
+        return;
+
+    printk("p2m table has %d levels\n", hd->paging_mode);
+    amd_dump_p2m_table_level(hd->root_table, hd->paging_mode, 0, 0);
+}
+
 const struct iommu_ops amd_iommu_ops = {
     .init = amd_iommu_domain_init,
     .dom0_init = amd_iommu_dom0_init,
@@ -531,4 +598,5 @@ const struct iommu_ops amd_iommu_ops = {
     .resume = amd_iommu_resume,
     .share_p2m = amd_iommu_share_p2m,
     .crash_shutdown = amd_iommu_suspend,
+    .dump_p2m_table = amd_dump_p2m_table,
 };
diff -r dc56a9defa30 -r 105dd8275da2 xen/drivers/passthrough/iommu.c
--- a/xen/drivers/passthrough/iommu.c	Tue Aug 14 10:28:14 2012 +0200
+++ b/xen/drivers/passthrough/iommu.c	Tue Aug 14 11:24:32 2012 -0700
@@ -19,10 +19,12 @@
 #include <xen/paging.h>
 #include <xen/guest_access.h>
 #include <xen/softirq.h>
+#include <xen/keyhandler.h>
 #include <xsm/xsm.h>
 
 static void parse_iommu_param(char *s);
 static int iommu_populate_page_table(struct domain *d);
+static void iommu_dump_p2m_table(unsigned char key);
 
 /*
  * The 'iommu' parameter enables the IOMMU.  Optional comma separated
@@ -54,6 +56,12 @@ bool_t __read_mostly amd_iommu_perdev_in
 
 DEFINE_PER_CPU(bool_t, iommu_dont_flush_iotlb);
 
+static struct keyhandler iommu_p2m_table = {
+    .diagnostic = 0,
+    .u.fn = iommu_dump_p2m_table,
+    .desc = "dump iommu p2m table"
+};
+
 static void __init parse_iommu_param(char *s)
 {
     char *ss;
@@ -119,6 +127,7 @@ void __init iommu_dom0_init(struct domai
     if ( !iommu_enabled )
         return;
 
+    register_keyhandler('o', &iommu_p2m_table);
     d->need_iommu = !!iommu_dom0_strict;
     if ( need_iommu(d) )
     {
@@ -654,6 +663,34 @@ int iommu_do_domctl(
     return ret;
 }
 
+static void iommu_dump_p2m_table(unsigned char key)
+{
+    struct domain *d;
+    const struct iommu_ops *ops;
+
+    if ( !iommu_enabled )
+    {
+        printk("IOMMU not enabled!\n");
+        return;
+    }
+
+    ops = iommu_get_ops();
+    for_each_domain(d)
+    {
+        if ( !d->domain_id )
+            continue;
+
+        if ( iommu_use_hap_pt(d) )
+        {
+            printk("\ndomain%d IOMMU p2m table shared with MMU: \n", d->domain_id);
+            continue;
+        }
+
+        printk("\ndomain%d IOMMU p2m table: \n", d->domain_id);
+        ops->dump_p2m_table(d);
+    }
+}
+
 /*
  * Local variables:
  * mode: C
diff -r dc56a9defa30 -r 105dd8275da2 xen/drivers/passthrough/vtd/iommu.c
--- a/xen/drivers/passthrough/vtd/iommu.c	Tue Aug 14 10:28:14 2012 +0200
+++ b/xen/drivers/passthrough/vtd/iommu.c	Tue Aug 14 11:24:32 2012 -0700
@@ -31,6 +31,7 @@
 #include <xen/pci.h>
 #include <xen/pci_regs.h>
 #include <xen/keyhandler.h>
+#include <xen/softirq.h>
 #include <asm/msi.h>
 #include <asm/irq.h>
 #if defined(__i386__) || defined(__x86_64__)
@@ -2365,6 +2366,63 @@ static void vtd_resume(void)
     }
 }
 
+static void vtd_dump_p2m_table_level(paddr_t pt_maddr, int level, paddr_t gpa, 
+                                     int indent)
+{
+    paddr_t address;
+    int i;
+    struct dma_pte *pt_vaddr, *pte;
+    int next_level;
+
+    if ( level < 1 )
+        return;
+
+    pt_vaddr = map_vtd_domain_page(pt_maddr);
+    if ( pt_vaddr == NULL )
+    {
+        printk("Failed to map VT-D domain page %"PRIpaddr"\n", pt_maddr);
+        return;
+    }
+
+    next_level = level - 1;
+    for ( i = 0; i < PTE_NUM; i++ )
+    {
+        if ( !(i % 2) )
+            process_pending_softirqs();
+
+        pte = &pt_vaddr[i];
+        if ( !dma_pte_present(*pte) )
+            continue;
+
+        address = gpa + offset_level_address(i, level);
+        if ( next_level >= 1 ) 
+            vtd_dump_p2m_table_level(dma_pte_addr(*pte), next_level, 
+                                     address, indent + 1);
+        else
+            printk("%*s" "gfn: %08lx mfn: %08lx super=%d rd=%d wr=%d\n",
+                   indent, " ",
+                   (unsigned long)(address >> PAGE_SHIFT_4K),
+                   (unsigned long)(pte->val >> PAGE_SHIFT_4K),
+                   dma_pte_superpage(*pte)? 1 : 0,
+                   dma_pte_read(*pte)? 1 : 0,
+                   dma_pte_write(*pte)? 1 : 0);
+    }
+
+    unmap_vtd_domain_page(pt_vaddr);
+}
+
+static void vtd_dump_p2m_table(struct domain *d)
+{
+    struct hvm_iommu *hd;
+
+    if ( list_empty(&acpi_drhd_units) )
+        return;
+
+    hd = domain_hvm_iommu(d);
+    printk("p2m table has %d levels\n", agaw_to_level(hd->agaw));
+    vtd_dump_p2m_table_level(hd->pgd_maddr, agaw_to_level(hd->agaw), 0, 0);
+}
+
 const struct iommu_ops intel_iommu_ops = {
     .init = intel_iommu_domain_init,
     .dom0_init = intel_iommu_dom0_init,
@@ -2387,6 +2445,7 @@ const struct iommu_ops intel_iommu_ops =
     .crash_shutdown = vtd_crash_shutdown,
     .iotlb_flush = intel_iommu_iotlb_flush,
     .iotlb_flush_all = intel_iommu_iotlb_flush_all,
+    .dump_p2m_table = vtd_dump_p2m_table,
 };
 
 /*
diff -r dc56a9defa30 -r 105dd8275da2 xen/drivers/passthrough/vtd/iommu.h
--- a/xen/drivers/passthrough/vtd/iommu.h	Tue Aug 14 10:28:14 2012 +0200
+++ b/xen/drivers/passthrough/vtd/iommu.h	Tue Aug 14 11:24:32 2012 -0700
@@ -248,6 +248,8 @@ struct context_entry {
 #define level_to_offset_bits(l) (12 + (l - 1) * LEVEL_STRIDE)
 #define address_level_offset(addr, level) \
             ((addr >> level_to_offset_bits(level)) & LEVEL_MASK)
+#define offset_level_address(offset, level) \
+            ((u64)(offset) << level_to_offset_bits(level))
 #define level_mask(l) (((u64)(-1)) << level_to_offset_bits(l))
 #define level_size(l) (1 << level_to_offset_bits(l))
 #define align_to_level(addr, l) ((addr + level_size(l) - 1) & level_mask(l))
@@ -277,6 +279,9 @@ struct dma_pte {
 #define dma_set_pte_addr(p, addr) do {\
             (p).val |= ((addr) & PAGE_MASK_4K); } while (0)
 #define dma_pte_present(p) (((p).val & 3) != 0)
+#define dma_pte_superpage(p) (((p).val & (1<<7)) != 0)
+#define dma_pte_read(p) (((p).val & DMA_PTE_READ) != 0)
+#define dma_pte_write(p) (((p).val & DMA_PTE_WRITE) != 0)
 
 /* interrupt remap entry */
 struct iremap_entry {
diff -r dc56a9defa30 -r 105dd8275da2 xen/include/asm-x86/hvm/svm/amd-iommu-defs.h
--- a/xen/include/asm-x86/hvm/svm/amd-iommu-defs.h	Tue Aug 14 10:28:14 2012 +0200
+++ b/xen/include/asm-x86/hvm/svm/amd-iommu-defs.h	Tue Aug 14 11:24:32 2012 -0700
@@ -38,6 +38,10 @@
 #define PTE_PER_TABLE_ALLOC(entries)	\
 	PAGE_SIZE * (PTE_PER_TABLE_ALIGN(entries) >> PTE_PER_TABLE_SHIFT)
 
+#define amd_offset_level_address(offset, level) \
+      	((u64)(offset) << (12 + (PTE_PER_TABLE_SHIFT * \
+                                (level - IOMMU_PAGING_MODE_LEVEL_1))))
+
 #define PCI_MIN_CAP_OFFSET	0x40
 #define PCI_MAX_CAP_BLOCKS	48
 #define PCI_CAP_PTR_MASK	0xFC
diff -r dc56a9defa30 -r 105dd8275da2 xen/include/xen/iommu.h
--- a/xen/include/xen/iommu.h	Tue Aug 14 10:28:14 2012 +0200
+++ b/xen/include/xen/iommu.h	Tue Aug 14 11:24:32 2012 -0700
@@ -141,6 +141,7 @@ struct iommu_ops {
     void (*crash_shutdown)(void);
     void (*iotlb_flush)(struct domain *d, unsigned long gfn, unsigned int page_count);
     void (*iotlb_flush_all)(struct domain *d);
+    void (*dump_p2m_table)(struct domain *d);
 };
 
 void iommu_update_ire_from_apic(unsigned int apic, unsigned int reg, unsigned int value);

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 19:32:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 19:32:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1MqN-000553-TA; Tue, 14 Aug 2012 19:31:59 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Santosh.Jodh@citrix.com>) id 1T1MqL-00054y-V3
	for xen-devel@lists.xensource.com; Tue, 14 Aug 2012 19:31:58 +0000
Received: from [85.158.143.99:31612] by server-3.bemta-4.messagelabs.com id
	11/89-09529-DA7AA205; Tue, 14 Aug 2012 19:31:57 +0000
X-Env-Sender: Santosh.Jodh@citrix.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1344972715!21147040!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzEyMjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9778 invoked from network); 14 Aug 2012 19:31:56 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 19:31:56 -0000
X-IronPort-AV: E=Sophos;i="4.77,768,1336363200"; d="scan'208";a="205171674"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Aug 2012 15:31:30 -0400
Received: from REDBLD-XS.ad.xensource.com (10.232.6.25) by
	smtprelay.citrix.com (10.13.107.66) with Microsoft SMTP Server id
	8.3.213.0; Tue, 14 Aug 2012 15:31:30 -0400
MIME-Version: 1.0
X-Mercurial-Node: 105dd8275da21b61a9c74a81df898b6c4b98ec39
Message-ID: <105dd8275da21b61a9c7.1344972689@REDBLD-XS.ad.xensource.com>
User-Agent: Mercurial-patchbomb/1.6.4
Date: Tue, 14 Aug 2012 12:31:29 -0700
From: Santosh Jodh <santosh.jodh@citrix.com>
To: <xen-devel@lists.xensource.com>
Cc: wei.wang2@amd.com, tim@xen.org, xiantao.zhang@intel.com
Subject: [Xen-devel] [PATCH] [mq]: santosh.diff
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

diff -r dc56a9defa30 -r 105dd8275da2 xen/drivers/passthrough/amd/pci_amd_iommu.c
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c	Tue Aug 14 10:28:14 2012 +0200
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c	Tue Aug 14 11:24:32 2012 -0700
@@ -22,6 +22,7 @@
 #include <xen/pci.h>
 #include <xen/pci_regs.h>
 #include <xen/paging.h>
+#include <xen/softirq.h>
 #include <asm/hvm/iommu.h>
 #include <asm/amd-iommu.h>
 #include <asm/hvm/svm/amd-iommu-proto.h>
@@ -512,6 +513,72 @@ static int amd_iommu_group_id(u16 seg, u
 
 #include <asm/io_apic.h>
 
+static void amd_dump_p2m_table_level(struct page_info* pg, int level, 
+                                     paddr_t gpa, int indent)
+{
+    paddr_t address;
+    void *table_vaddr, *pde;
+    paddr_t next_table_maddr;
+    int index, next_level, present;
+    u32 *entry;
+
+    if ( level < 1 )
+        return;
+
+    table_vaddr = __map_domain_page(pg);
+    if ( table_vaddr == NULL )
+    {
+        printk("Failed to map IOMMU domain page %"PRIpaddr"\n", 
+                page_to_maddr(pg));
+        return;
+    }
+
+    for ( index = 0; index < PTE_PER_TABLE_SIZE; index++ )
+    {
+        if ( !(index % 2) )
+            process_pending_softirqs();
+
+        pde = table_vaddr + (index * IOMMU_PAGE_TABLE_ENTRY_SIZE);
+        next_table_maddr = amd_iommu_get_next_table_from_pte(pde);
+        entry = (u32*)pde;
+
+        present = get_field_from_reg_u32(entry[0],
+                                         IOMMU_PDE_PRESENT_MASK,
+                                         IOMMU_PDE_PRESENT_SHIFT);
+
+        if ( !present )
+            continue;
+
+        next_level = get_field_from_reg_u32(entry[0],
+                                            IOMMU_PDE_NEXT_LEVEL_MASK,
+                                            IOMMU_PDE_NEXT_LEVEL_SHIFT);
+
+        address = gpa + amd_offset_level_address(index, level);
+        if ( next_level >= 1 )
+            amd_dump_p2m_table_level(
+                maddr_to_page(next_table_maddr), level - 1,
+                address, indent + 1);
+        else
+            printk("%*s" "gfn: %08lx  mfn: %08lx\n",
+                   indent, " ",
+                   (unsigned long)PFN_DOWN(address),
+                   (unsigned long)PFN_DOWN(next_table_maddr));
+    }
+
+    unmap_domain_page(table_vaddr);
+}
+
+static void amd_dump_p2m_table(struct domain *d)
+{
+    struct hvm_iommu *hd  = domain_hvm_iommu(d);
+
+    if ( !hd->root_table ) 
+        return;
+
+    printk("p2m table has %d levels\n", hd->paging_mode);
+    amd_dump_p2m_table_level(hd->root_table, hd->paging_mode, 0, 0);
+}
+
 const struct iommu_ops amd_iommu_ops = {
     .init = amd_iommu_domain_init,
     .dom0_init = amd_iommu_dom0_init,
@@ -531,4 +598,5 @@ const struct iommu_ops amd_iommu_ops = {
     .resume = amd_iommu_resume,
     .share_p2m = amd_iommu_share_p2m,
     .crash_shutdown = amd_iommu_suspend,
+    .dump_p2m_table = amd_dump_p2m_table,
 };
diff -r dc56a9defa30 -r 105dd8275da2 xen/drivers/passthrough/iommu.c
--- a/xen/drivers/passthrough/iommu.c	Tue Aug 14 10:28:14 2012 +0200
+++ b/xen/drivers/passthrough/iommu.c	Tue Aug 14 11:24:32 2012 -0700
@@ -19,10 +19,12 @@
 #include <xen/paging.h>
 #include <xen/guest_access.h>
 #include <xen/softirq.h>
+#include <xen/keyhandler.h>
 #include <xsm/xsm.h>
 
 static void parse_iommu_param(char *s);
 static int iommu_populate_page_table(struct domain *d);
+static void iommu_dump_p2m_table(unsigned char key);
 
 /*
  * The 'iommu' parameter enables the IOMMU.  Optional comma separated
@@ -54,6 +56,12 @@ bool_t __read_mostly amd_iommu_perdev_in
 
 DEFINE_PER_CPU(bool_t, iommu_dont_flush_iotlb);
 
+static struct keyhandler iommu_p2m_table = {
+    .diagnostic = 0,
+    .u.fn = iommu_dump_p2m_table,
+    .desc = "dump iommu p2m table"
+};
+
 static void __init parse_iommu_param(char *s)
 {
     char *ss;
@@ -119,6 +127,7 @@ void __init iommu_dom0_init(struct domai
     if ( !iommu_enabled )
         return;
 
+    register_keyhandler('o', &iommu_p2m_table);
     d->need_iommu = !!iommu_dom0_strict;
     if ( need_iommu(d) )
     {
@@ -654,6 +663,34 @@ int iommu_do_domctl(
     return ret;
 }
 
+static void iommu_dump_p2m_table(unsigned char key)
+{
+    struct domain *d;
+    const struct iommu_ops *ops;
+
+    if ( !iommu_enabled )
+    {
+        printk("IOMMU not enabled!\n");
+        return;
+    }
+
+    ops = iommu_get_ops();
+    for_each_domain(d)
+    {
+        if ( !d->domain_id )
+            continue;
+
+        if ( iommu_use_hap_pt(d) )
+        {
+            printk("\ndomain%d IOMMU p2m table shared with MMU: \n", d->domain_id);
+            continue;
+        }
+
+        printk("\ndomain%d IOMMU p2m table: \n", d->domain_id);
+        ops->dump_p2m_table(d);
+    }
+}
+
 /*
  * Local variables:
  * mode: C
diff -r dc56a9defa30 -r 105dd8275da2 xen/drivers/passthrough/vtd/iommu.c
--- a/xen/drivers/passthrough/vtd/iommu.c	Tue Aug 14 10:28:14 2012 +0200
+++ b/xen/drivers/passthrough/vtd/iommu.c	Tue Aug 14 11:24:32 2012 -0700
@@ -31,6 +31,7 @@
 #include <xen/pci.h>
 #include <xen/pci_regs.h>
 #include <xen/keyhandler.h>
+#include <xen/softirq.h>
 #include <asm/msi.h>
 #include <asm/irq.h>
 #if defined(__i386__) || defined(__x86_64__)
@@ -2365,6 +2366,63 @@ static void vtd_resume(void)
     }
 }
 
+static void vtd_dump_p2m_table_level(paddr_t pt_maddr, int level, paddr_t gpa, 
+                                     int indent)
+{
+    paddr_t address;
+    int i;
+    struct dma_pte *pt_vaddr, *pte;
+    int next_level;
+
+    if ( level < 1 )
+        return;
+
+    pt_vaddr = map_vtd_domain_page(pt_maddr);
+    if ( pt_vaddr == NULL )
+    {
+        printk("Failed to map VT-D domain page %"PRIpaddr"\n", pt_maddr);
+        return;
+    }
+
+    next_level = level - 1;
+    for ( i = 0; i < PTE_NUM; i++ )
+    {
+        if ( !(i % 2) )
+            process_pending_softirqs();
+
+        pte = &pt_vaddr[i];
+        if ( !dma_pte_present(*pte) )
+            continue;
+
+        address = gpa + offset_level_address(i, level);
+        if ( next_level >= 1 ) 
+            vtd_dump_p2m_table_level(dma_pte_addr(*pte), next_level, 
+                                     address, indent + 1);
+        else
+            printk("%*s" "gfn: %08lx mfn: %08lx super=%d rd=%d wr=%d\n",
+                   indent, " ",
+                   (unsigned long)(address >> PAGE_SHIFT_4K),
+                   (unsigned long)(pte->val >> PAGE_SHIFT_4K),
+                   dma_pte_superpage(*pte)? 1 : 0,
+                   dma_pte_read(*pte)? 1 : 0,
+                   dma_pte_write(*pte)? 1 : 0);
+    }
+
+    unmap_vtd_domain_page(pt_vaddr);
+}
+
+static void vtd_dump_p2m_table(struct domain *d)
+{
+    struct hvm_iommu *hd;
+
+    if ( list_empty(&acpi_drhd_units) )
+        return;
+
+    hd = domain_hvm_iommu(d);
+    printk("p2m table has %d levels\n", agaw_to_level(hd->agaw));
+    vtd_dump_p2m_table_level(hd->pgd_maddr, agaw_to_level(hd->agaw), 0, 0);
+}
+
 const struct iommu_ops intel_iommu_ops = {
     .init = intel_iommu_domain_init,
     .dom0_init = intel_iommu_dom0_init,
@@ -2387,6 +2445,7 @@ const struct iommu_ops intel_iommu_ops =
     .crash_shutdown = vtd_crash_shutdown,
     .iotlb_flush = intel_iommu_iotlb_flush,
     .iotlb_flush_all = intel_iommu_iotlb_flush_all,
+    .dump_p2m_table = vtd_dump_p2m_table,
 };
 
 /*
diff -r dc56a9defa30 -r 105dd8275da2 xen/drivers/passthrough/vtd/iommu.h
--- a/xen/drivers/passthrough/vtd/iommu.h	Tue Aug 14 10:28:14 2012 +0200
+++ b/xen/drivers/passthrough/vtd/iommu.h	Tue Aug 14 11:24:32 2012 -0700
@@ -248,6 +248,8 @@ struct context_entry {
 #define level_to_offset_bits(l) (12 + (l - 1) * LEVEL_STRIDE)
 #define address_level_offset(addr, level) \
             ((addr >> level_to_offset_bits(level)) & LEVEL_MASK)
+#define offset_level_address(offset, level) \
+            ((u64)(offset) << level_to_offset_bits(level))
 #define level_mask(l) (((u64)(-1)) << level_to_offset_bits(l))
 #define level_size(l) (1 << level_to_offset_bits(l))
 #define align_to_level(addr, l) ((addr + level_size(l) - 1) & level_mask(l))
@@ -277,6 +279,9 @@ struct dma_pte {
 #define dma_set_pte_addr(p, addr) do {\
             (p).val |= ((addr) & PAGE_MASK_4K); } while (0)
 #define dma_pte_present(p) (((p).val & 3) != 0)
+#define dma_pte_superpage(p) (((p).val & (1<<7)) != 0)
+#define dma_pte_read(p) (((p).val & DMA_PTE_READ) != 0)
+#define dma_pte_write(p) (((p).val & DMA_PTE_WRITE) != 0)
 
 /* interrupt remap entry */
 struct iremap_entry {
diff -r dc56a9defa30 -r 105dd8275da2 xen/include/asm-x86/hvm/svm/amd-iommu-defs.h
--- a/xen/include/asm-x86/hvm/svm/amd-iommu-defs.h	Tue Aug 14 10:28:14 2012 +0200
+++ b/xen/include/asm-x86/hvm/svm/amd-iommu-defs.h	Tue Aug 14 11:24:32 2012 -0700
@@ -38,6 +38,10 @@
 #define PTE_PER_TABLE_ALLOC(entries)	\
 	PAGE_SIZE * (PTE_PER_TABLE_ALIGN(entries) >> PTE_PER_TABLE_SHIFT)
 
+#define amd_offset_level_address(offset, level) \
+      	((u64)(offset) << (12 + (PTE_PER_TABLE_SHIFT * \
+                                (level - IOMMU_PAGING_MODE_LEVEL_1))))
+
 #define PCI_MIN_CAP_OFFSET	0x40
 #define PCI_MAX_CAP_BLOCKS	48
 #define PCI_CAP_PTR_MASK	0xFC
diff -r dc56a9defa30 -r 105dd8275da2 xen/include/xen/iommu.h
--- a/xen/include/xen/iommu.h	Tue Aug 14 10:28:14 2012 +0200
+++ b/xen/include/xen/iommu.h	Tue Aug 14 11:24:32 2012 -0700
@@ -141,6 +141,7 @@ struct iommu_ops {
     void (*crash_shutdown)(void);
     void (*iotlb_flush)(struct domain *d, unsigned long gfn, unsigned int page_count);
     void (*iotlb_flush_all)(struct domain *d);
+    void (*dump_p2m_table)(struct domain *d);
 };
 
 void iommu_update_ire_from_apic(unsigned int apic, unsigned int reg, unsigned int value);

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 19:35:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 19:35:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1MtL-0005PX-LE; Tue, 14 Aug 2012 19:35:03 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Santosh.Jodh@citrix.com>) id 1T1MtK-0005PR-0c
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 19:35:02 +0000
Received: from [85.158.143.99:38200] by server-3.bemta-4.messagelabs.com id
	1A/1C-09529-568AA205; Tue, 14 Aug 2012 19:35:01 +0000
X-Env-Sender: Santosh.Jodh@citrix.com
X-Msg-Ref: server-15.tower-216.messagelabs.com!1344972899!28160763!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzEyMjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25441 invoked from network); 14 Aug 2012 19:35:00 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 19:35:00 -0000
X-IronPort-AV: E=Sophos;i="4.77,768,1336363200"; d="scan'208";a="205172262"
Received: from sjcpmailmx02.citrite.net ([10.216.14.75])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Aug 2012 15:34:57 -0400
Received: from SJCPMAILBOX01.citrite.net ([10.216.4.72]) by
	SJCPMAILMX02.citrite.net ([10.216.14.75]) with mapi; Tue, 14 Aug 2012
	12:34:56 -0700
From: Santosh Jodh <Santosh.Jodh@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Tue, 14 Aug 2012 12:34:41 -0700
Thread-Topic: [Xen-devel] [PATCH] dump_p2m_table: For IOMMU
Thread-Index: Ac15Me3VPKrWffpqRwG4vIDCTvrcLQBIZahg
Message-ID: <7914B38A4445B34AA16EB9F1352942F1012F0E1E9F41@SJCPMAILBOX01.citrite.net>
References: <9c7609a4fbc117b1600f.1344626094@REDBLD-XS.ad.xensource.com>
	<5028DE020200007800094662@nat28.tlf.novell.com>
In-Reply-To: <5028DE020200007800094662@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
MIME-Version: 1.0
Cc: "wei.wang2@amd.com" <wei.wang2@amd.com>, "Tim \(Xen.org\)" <tim@xen.org>,
	"xiantao.zhang@intel.com" <xiantao.zhang@intel.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] dump_p2m_table: For IOMMU
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Why do you do this differently than for VT-d here? There you don't check next_table_maddr (and I see no reason you would need to). Oh, I see, there's a similar check in a different place there. But this needs to be functionally similar here then.
Specifically, ...

> +        {
> +            amd_dump_p2m_table_level(
> +                maddr_to_page(next_table_maddr), level - 1, 
> +                address, indent + 1);
> +        }
> +        else

... you'd get into the else's body if next_table_maddr was zero, which is wrong afaict. So I think flow like

    if ( !next_level )
        print
    else if ( next_table_maddr )
        recurse

would be the preferable way to go if you feel that these zero checks are necessary (and if you do then, because this being the case is really a bug, this shouldn't go through silently).
[Santosh Jodh] I was basing my code on existing code in the individual files. I was just being paranoid as this is debug code and I would not want to crash the system. Anyway, I am resending a patch that structures the code in the same way for both Intel and AMD.

> +        {
> +            int i;
> +
> +            for ( i = 0; i < indent; i++ )
> +                printk("  ");

printk("%*s...", indent, "", ...);
[Santosh Jodh] Cool - got it.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 19:38:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 19:38:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1Mvv-0005Wz-7D; Tue, 14 Aug 2012 19:37:43 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1T1Mvt-0005Wi-Dw
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 19:37:41 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-2.tower-27.messagelabs.com!1344973048!9236258!1
X-Originating-IP: [63.239.67.10]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15977 invoked from network); 14 Aug 2012 19:37:28 -0000
Received: from emvm-gh1-uea09.nsa.gov (HELO nsa.gov) (63.239.67.10)
	by server-2.tower-27.messagelabs.com with SMTP;
	14 Aug 2012 19:37:28 -0000
X-TM-IMSS-Message-ID: <a550bec200000c43@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov
	([63.239.67.10]) with ESMTP (TREND IMSS SMTP Service 7.1) id
	a550bec200000c43 ; Tue, 14 Aug 2012 15:38:35 -0400
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id q7EJbN0s008979; 
	Tue, 14 Aug 2012 15:37:23 -0400
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: Jan Beulich <jbeulich@suse.com>
Date: Tue, 14 Aug 2012 15:37:23 -0400
Message-Id: <1344973043-10752-1-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.2
In-Reply-To: <1344955544-30153-1-git-send-email-dgdegra@tycho.nsa.gov>
References: <1344955544-30153-1-git-send-email-dgdegra@tycho.nsa.gov>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>, Keir Fraser <keir@xen.org>,
	xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH v3] x86-64/EFI: add CFLAGS to check compile
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Without this, the compilation of check.c could fail due to compiler
features such as -fstack-protector being enabled, which causes a missing
__stack_chk_fail symbol error.

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---

Change from v2: EMBEDDED_EXTRA_CFLAGS may include flags not supported by
the compiler; CFLAGS includes a filtered version of these flags.

diff --git a/xen/arch/x86/efi/Makefile b/xen/arch/x86/efi/Makefile
index 005e3e0..1ba069d 100644
--- a/xen/arch/x86/efi/Makefile
+++ b/xen/arch/x86/efi/Makefile
@@ -5,7 +5,7 @@ obj-y += stub.o
 create = test -e $(1) || touch -t 199901010000 $(1)
 
 efi := $(filter y,$(x86_64)$(shell rm -f disabled))
-efi := $(if $(efi),$(shell $(CC) -c -Werror check.c 2>disabled && echo y))
+efi := $(if $(efi),$(shell $(CC) $(CFLAGS) -c -Werror check.c 2>disabled && echo y))
 efi := $(if $(efi),$(shell $(LD) -mi386pep --subsystem=10 -o check.efi check.o 2>disabled && echo y))
 efi := $(if $(efi),$(shell rm disabled)y,$(shell $(call create,boot.init.o); $(call create,runtime.o)))
 
-- 
1.7.11.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 19:55:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 19:55:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1NCu-0005le-Qg; Tue, 14 Aug 2012 19:55:16 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Santosh.Jodh@citrix.com>) id 1T1NCs-0005lZ-O8
	for xen-devel@lists.xensource.com; Tue, 14 Aug 2012 19:55:15 +0000
Received: from [85.158.138.51:64622] by server-1.bemta-3.messagelabs.com id
	F4/67-09327-22DAA205; Tue, 14 Aug 2012 19:55:14 +0000
X-Env-Sender: Santosh.Jodh@citrix.com
X-Msg-Ref: server-14.tower-174.messagelabs.com!1344974111!21936488!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjQ3NjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5579 invoked from network); 14 Aug 2012 19:55:12 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 19:55:12 -0000
X-IronPort-AV: E=Sophos;i="4.77,768,1336363200"; d="scan'208";a="34643871"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	14 Aug 2012 15:55:10 -0400
Received: from REDBLD-XS.ad.xensource.com (10.232.6.25) by
	smtprelay.citrix.com (10.13.107.66) with Microsoft SMTP Server id
	8.3.213.0; Tue, 14 Aug 2012 15:55:10 -0400
MIME-Version: 1.0
X-Mercurial-Node: 5357dccf4ba353d08e8e09ebf4eecf30dce4e38d
Message-ID: <5357dccf4ba353d08e8e.1344974109@REDBLD-XS.ad.xensource.com>
User-Agent: Mercurial-patchbomb/1.6.4
Date: Tue, 14 Aug 2012 12:55:09 -0700
From: Santosh Jodh <santosh.jodh@citrix.com>
To: <xen-devel@lists.xensource.com>
Cc: wei.wang2@amd.com, tim@xen.org, xiantao.zhang@intel.com
Subject: [Xen-devel] [PATCH] Dump IOMMU p2m table
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

New key handler 'o' to dump the IOMMU p2m table for each domain; dumping
is skipped for domain 0. Intel and AMD provide vendor-specific iommu_ops
handlers for dumping their p2m tables.

Signed-off-by: Santosh Jodh <santosh.jodh@citrix.com>

diff -r dc56a9defa30 -r 5357dccf4ba3 xen/drivers/passthrough/amd/pci_amd_iommu.c
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c	Tue Aug 14 10:28:14 2012 +0200
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c	Tue Aug 14 12:54:55 2012 -0700
@@ -22,6 +22,7 @@
 #include <xen/pci.h>
 #include <xen/pci_regs.h>
 #include <xen/paging.h>
+#include <xen/softirq.h>
 #include <asm/hvm/iommu.h>
 #include <asm/amd-iommu.h>
 #include <asm/hvm/svm/amd-iommu-proto.h>
@@ -512,6 +513,72 @@ static int amd_iommu_group_id(u16 seg, u
 
 #include <asm/io_apic.h>
 
+static void amd_dump_p2m_table_level(struct page_info* pg, int level, 
+                                     paddr_t gpa, int indent)
+{
+    paddr_t address;
+    void *table_vaddr, *pde;
+    paddr_t next_table_maddr;
+    int index, next_level, present;
+    u32 *entry;
+
+    if ( level < 1 )
+        return;
+
+    table_vaddr = __map_domain_page(pg);
+    if ( table_vaddr == NULL )
+    {
+        printk("Failed to map IOMMU domain page %"PRIpaddr"\n", 
+                page_to_maddr(pg));
+        return;
+    }
+
+    for ( index = 0; index < PTE_PER_TABLE_SIZE; index++ )
+    {
+        if ( !(index % 2) )
+            process_pending_softirqs();
+
+        pde = table_vaddr + (index * IOMMU_PAGE_TABLE_ENTRY_SIZE);
+        next_table_maddr = amd_iommu_get_next_table_from_pte(pde);
+        entry = (u32*)pde;
+
+        present = get_field_from_reg_u32(entry[0],
+                                         IOMMU_PDE_PRESENT_MASK,
+                                         IOMMU_PDE_PRESENT_SHIFT);
+
+        if ( !present )
+            continue;
+
+        next_level = get_field_from_reg_u32(entry[0],
+                                            IOMMU_PDE_NEXT_LEVEL_MASK,
+                                            IOMMU_PDE_NEXT_LEVEL_SHIFT);
+
+        address = gpa + amd_offset_level_address(index, level);
+        if ( next_level >= 1 )
+            amd_dump_p2m_table_level(
+                maddr_to_page(next_table_maddr), level - 1,
+                address, indent + 1);
+        else
+            printk("%*s" "gfn: %08lx  mfn: %08lx\n",
+                   indent, " ",
+                   (unsigned long)PFN_DOWN(address),
+                   (unsigned long)PFN_DOWN(next_table_maddr));
+    }
+
+    unmap_domain_page(table_vaddr);
+}
+
+static void amd_dump_p2m_table(struct domain *d)
+{
+    struct hvm_iommu *hd  = domain_hvm_iommu(d);
+
+    if ( !hd->root_table ) 
+        return;
+
+    printk("p2m table has %d levels\n", hd->paging_mode);
+    amd_dump_p2m_table_level(hd->root_table, hd->paging_mode, 0, 0);
+}
+
 const struct iommu_ops amd_iommu_ops = {
     .init = amd_iommu_domain_init,
     .dom0_init = amd_iommu_dom0_init,
@@ -531,4 +598,5 @@ const struct iommu_ops amd_iommu_ops = {
     .resume = amd_iommu_resume,
     .share_p2m = amd_iommu_share_p2m,
     .crash_shutdown = amd_iommu_suspend,
+    .dump_p2m_table = amd_dump_p2m_table,
 };
diff -r dc56a9defa30 -r 5357dccf4ba3 xen/drivers/passthrough/iommu.c
--- a/xen/drivers/passthrough/iommu.c	Tue Aug 14 10:28:14 2012 +0200
+++ b/xen/drivers/passthrough/iommu.c	Tue Aug 14 12:54:55 2012 -0700
@@ -19,10 +19,12 @@
 #include <xen/paging.h>
 #include <xen/guest_access.h>
 #include <xen/softirq.h>
+#include <xen/keyhandler.h>
 #include <xsm/xsm.h>
 
 static void parse_iommu_param(char *s);
 static int iommu_populate_page_table(struct domain *d);
+static void iommu_dump_p2m_table(unsigned char key);
 
 /*
  * The 'iommu' parameter enables the IOMMU.  Optional comma separated
@@ -54,6 +56,12 @@ bool_t __read_mostly amd_iommu_perdev_in
 
 DEFINE_PER_CPU(bool_t, iommu_dont_flush_iotlb);
 
+static struct keyhandler iommu_p2m_table = {
+    .diagnostic = 0,
+    .u.fn = iommu_dump_p2m_table,
+    .desc = "dump iommu p2m table"
+};
+
 static void __init parse_iommu_param(char *s)
 {
     char *ss;
@@ -119,6 +127,7 @@ void __init iommu_dom0_init(struct domai
     if ( !iommu_enabled )
         return;
 
+    register_keyhandler('o', &iommu_p2m_table);
     d->need_iommu = !!iommu_dom0_strict;
     if ( need_iommu(d) )
     {
@@ -654,6 +663,34 @@ int iommu_do_domctl(
     return ret;
 }
 
+static void iommu_dump_p2m_table(unsigned char key)
+{
+    struct domain *d;
+    const struct iommu_ops *ops;
+
+    if ( !iommu_enabled )
+    {
+        printk("IOMMU not enabled!\n");
+        return;
+    }
+
+    ops = iommu_get_ops();
+    for_each_domain(d)
+    {
+        if ( !d->domain_id )
+            continue;
+
+        if ( iommu_use_hap_pt(d) )
+        {
+            printk("\ndomain%d IOMMU p2m table shared with MMU: \n", d->domain_id);
+            continue;
+        }
+
+        printk("\ndomain%d IOMMU p2m table: \n", d->domain_id);
+        ops->dump_p2m_table(d);
+    }
+}
+
 /*
  * Local variables:
  * mode: C
diff -r dc56a9defa30 -r 5357dccf4ba3 xen/drivers/passthrough/vtd/iommu.c
--- a/xen/drivers/passthrough/vtd/iommu.c	Tue Aug 14 10:28:14 2012 +0200
+++ b/xen/drivers/passthrough/vtd/iommu.c	Tue Aug 14 12:54:55 2012 -0700
@@ -31,6 +31,7 @@
 #include <xen/pci.h>
 #include <xen/pci_regs.h>
 #include <xen/keyhandler.h>
+#include <xen/softirq.h>
 #include <asm/msi.h>
 #include <asm/irq.h>
 #if defined(__i386__) || defined(__x86_64__)
@@ -2365,6 +2366,63 @@ static void vtd_resume(void)
     }
 }
 
+static void vtd_dump_p2m_table_level(paddr_t pt_maddr, int level, paddr_t gpa, 
+                                     int indent)
+{
+    paddr_t address;
+    int i;
+    struct dma_pte *pt_vaddr, *pte;
+    int next_level;
+
+    if ( level < 1 )
+        return;
+
+    pt_vaddr = map_vtd_domain_page(pt_maddr);
+    if ( pt_vaddr == NULL )
+    {
+        printk("Failed to map VT-D domain page %"PRIpaddr"\n", pt_maddr);
+        return;
+    }
+
+    next_level = level - 1;
+    for ( i = 0; i < PTE_NUM; i++ )
+    {
+        if ( !(i % 2) )
+            process_pending_softirqs();
+
+        pte = &pt_vaddr[i];
+        if ( !dma_pte_present(*pte) )
+            continue;
+
+        address = gpa + offset_level_address(i, level);
+        if ( next_level >= 1 ) 
+            vtd_dump_p2m_table_level(dma_pte_addr(*pte), next_level, 
+                                     address, indent + 1);
+        else
+            printk("%*s" "gfn: %08lx mfn: %08lx super=%d rd=%d wr=%d\n",
+                   indent, " ",
+                   (unsigned long)(address >> PAGE_SHIFT_4K),
+                   (unsigned long)(pte->val >> PAGE_SHIFT_4K),
+                   dma_pte_superpage(*pte)? 1 : 0,
+                   dma_pte_read(*pte)? 1 : 0,
+                   dma_pte_write(*pte)? 1 : 0);
+    }
+
+    unmap_vtd_domain_page(pt_vaddr);
+}
+
+static void vtd_dump_p2m_table(struct domain *d)
+{
+    struct hvm_iommu *hd;
+
+    if ( list_empty(&acpi_drhd_units) )
+        return;
+
+    hd = domain_hvm_iommu(d);
+    printk("p2m table has %d levels\n", agaw_to_level(hd->agaw));
+    vtd_dump_p2m_table_level(hd->pgd_maddr, agaw_to_level(hd->agaw), 0, 0);
+}
+
 const struct iommu_ops intel_iommu_ops = {
     .init = intel_iommu_domain_init,
     .dom0_init = intel_iommu_dom0_init,
@@ -2387,6 +2445,7 @@ const struct iommu_ops intel_iommu_ops =
     .crash_shutdown = vtd_crash_shutdown,
     .iotlb_flush = intel_iommu_iotlb_flush,
     .iotlb_flush_all = intel_iommu_iotlb_flush_all,
+    .dump_p2m_table = vtd_dump_p2m_table,
 };
 
 /*
diff -r dc56a9defa30 -r 5357dccf4ba3 xen/drivers/passthrough/vtd/iommu.h
--- a/xen/drivers/passthrough/vtd/iommu.h	Tue Aug 14 10:28:14 2012 +0200
+++ b/xen/drivers/passthrough/vtd/iommu.h	Tue Aug 14 12:54:55 2012 -0700
@@ -248,6 +248,8 @@ struct context_entry {
 #define level_to_offset_bits(l) (12 + (l - 1) * LEVEL_STRIDE)
 #define address_level_offset(addr, level) \
             ((addr >> level_to_offset_bits(level)) & LEVEL_MASK)
+#define offset_level_address(offset, level) \
+            ((u64)(offset) << level_to_offset_bits(level))
 #define level_mask(l) (((u64)(-1)) << level_to_offset_bits(l))
 #define level_size(l) (1 << level_to_offset_bits(l))
 #define align_to_level(addr, l) ((addr + level_size(l) - 1) & level_mask(l))
@@ -277,6 +279,9 @@ struct dma_pte {
 #define dma_set_pte_addr(p, addr) do {\
             (p).val |= ((addr) & PAGE_MASK_4K); } while (0)
 #define dma_pte_present(p) (((p).val & 3) != 0)
+#define dma_pte_superpage(p) (((p).val & (1<<7)) != 0)
+#define dma_pte_read(p) (((p).val & DMA_PTE_READ) != 0)
+#define dma_pte_write(p) (((p).val & DMA_PTE_WRITE) != 0)
 
 /* interrupt remap entry */
 struct iremap_entry {
diff -r dc56a9defa30 -r 5357dccf4ba3 xen/include/asm-x86/hvm/svm/amd-iommu-defs.h
--- a/xen/include/asm-x86/hvm/svm/amd-iommu-defs.h	Tue Aug 14 10:28:14 2012 +0200
+++ b/xen/include/asm-x86/hvm/svm/amd-iommu-defs.h	Tue Aug 14 12:54:55 2012 -0700
@@ -38,6 +38,10 @@
 #define PTE_PER_TABLE_ALLOC(entries)	\
 	PAGE_SIZE * (PTE_PER_TABLE_ALIGN(entries) >> PTE_PER_TABLE_SHIFT)
 
+#define amd_offset_level_address(offset, level) \
+      	((u64)(offset) << (12 + (PTE_PER_TABLE_SHIFT * \
+                                (level - IOMMU_PAGING_MODE_LEVEL_1))))
+
 #define PCI_MIN_CAP_OFFSET	0x40
 #define PCI_MAX_CAP_BLOCKS	48
 #define PCI_CAP_PTR_MASK	0xFC
diff -r dc56a9defa30 -r 5357dccf4ba3 xen/include/xen/iommu.h
--- a/xen/include/xen/iommu.h	Tue Aug 14 10:28:14 2012 +0200
+++ b/xen/include/xen/iommu.h	Tue Aug 14 12:54:55 2012 -0700
@@ -141,6 +141,7 @@ struct iommu_ops {
     void (*crash_shutdown)(void);
     void (*iotlb_flush)(struct domain *d, unsigned long gfn, unsigned int page_count);
     void (*iotlb_flush_all)(struct domain *d);
+    void (*dump_p2m_table)(struct domain *d);
 };
 
 void iommu_update_ire_from_apic(unsigned int apic, unsigned int reg, unsigned int value);

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 20:21:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 20:21:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1NcE-0006EC-25; Tue, 14 Aug 2012 20:21:26 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <fajar@fajar.net>) id 1T1NcD-0006E7-3l
	for xen-devel@lists.xensource.com; Tue, 14 Aug 2012 20:21:25 +0000
X-Env-Sender: fajar@fajar.net
X-Msg-Ref: server-6.tower-27.messagelabs.com!1344975677!2868237!1
X-Originating-IP: [209.85.216.171]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8126 invoked from network); 14 Aug 2012 20:21:18 -0000
Received: from mail-qc0-f171.google.com (HELO mail-qc0-f171.google.com)
	(209.85.216.171)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 20:21:18 -0000
Received: by qcad1 with SMTP id d1so904010qca.30
	for <xen-devel@lists.xensource.com>;
	Tue, 14 Aug 2012 13:21:17 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type:x-gm-message-state;
	bh=YxcEBZ6hdcu4qm4aqOnMJcCMyKfB0FVGEDT9UAdOctU=;
	b=NbNewiizpGssvG8TnBZXcLf9hjfRCQl8GPF12m7zEmftTRQfjxauxbmrgW0sGvyAr2
	sfQ4h02BsGxlisOiwaZTvGyc52B5aXGhlk+acduw1BpnikCD+R/CtygxB52NTAjgmnd9
	DyeNZrBQ1dMyf3TRsMwua6ahhsg2rcH8YdAAtOD/71gpP/96od6kjguX28hPDluLPpyE
	RBvZdf02qVEMihOCvr3lymwURQ+XZE4vEfGnTPQpf6wyxEh4PM/uhPIIQBDeLltSKPTv
	885TkZfmrhOAJjKNmwKVtFAseELstJ+UFNB4pUo6NHUthwJmnTSd1gC2mPta3JyhprHo
	eDsg==
MIME-Version: 1.0
Received: by 10.229.136.197 with SMTP id s5mr9700144qct.42.1344975677084; Tue,
	14 Aug 2012 13:21:17 -0700 (PDT)
Received: by 10.229.136.199 with HTTP; Tue, 14 Aug 2012 13:21:17 -0700 (PDT)
In-Reply-To: <CALNeL8VenXBBGGd+MVQwh+iq8_QYk2jGSXVMvdE1tmWKCR9dvQ@mail.gmail.com>
References: <CALNeL8VenXBBGGd+MVQwh+iq8_QYk2jGSXVMvdE1tmWKCR9dvQ@mail.gmail.com>
Date: Wed, 15 Aug 2012 03:21:17 +0700
Message-ID: <CAG1y0se79GFRKt_E5hL-eN0ORQcH99tOtMtC5wv4DTViRMUPPw@mail.gmail.com>
From: "Fajar A. Nugraha" <list@fajar.net>
To: Kumar Sukhani <kumarsukhani@gmail.com>
X-Gm-Message-State: ALoCoQn1aj5enhwett2e52hkZPYNCvv+BX+U5fuWk8j5GTKC6Tdi2+TmVhuJYWXg60gNbIMNhnV2
Cc: Surabhi Mutha <muthasurabhi@gmail.com>, xen-devel@lists.xensource.com,
	Akash deep agrawal <akashagrawal14@gmail.com>,
	Saurabh Gadia <saurabh.gadia4@gmail.com>
Subject: Re: [Xen-devel] Introducing O.S. system level virtualization in Xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Aug 15, 2012 at 12:31 AM, Kumar Sukhani <kumarsukhani@gmail.com> wrote:

> So we are planning to introduce features of O.S. level virtualization
> in Xen, by proposing one integrated architecture[1] having Operating
> system virtualization over one of the VM of Xen.
> Please review our architecture and let us know whether it is worth to
> implement it or not.
>
> [1] http://www.flickr.com/photos/84959360@N02/7782516274/in/photostream


How is this "integrated"?

For example, what is the difference of that proposal with running lxc
on an Ubuntu domU (which you can already do)?

-- 
Fajar

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 20:21:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 20:21:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1NcE-0006EC-25; Tue, 14 Aug 2012 20:21:26 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <fajar@fajar.net>) id 1T1NcD-0006E7-3l
	for xen-devel@lists.xensource.com; Tue, 14 Aug 2012 20:21:25 +0000
X-Env-Sender: fajar@fajar.net
X-Msg-Ref: server-6.tower-27.messagelabs.com!1344975677!2868237!1
X-Originating-IP: [209.85.216.171]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8126 invoked from network); 14 Aug 2012 20:21:18 -0000
Received: from mail-qc0-f171.google.com (HELO mail-qc0-f171.google.com)
	(209.85.216.171)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Aug 2012 20:21:18 -0000
Received: by qcad1 with SMTP id d1so904010qca.30
	for <xen-devel@lists.xensource.com>;
	Tue, 14 Aug 2012 13:21:17 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type:x-gm-message-state;
	bh=YxcEBZ6hdcu4qm4aqOnMJcCMyKfB0FVGEDT9UAdOctU=;
	b=NbNewiizpGssvG8TnBZXcLf9hjfRCQl8GPF12m7zEmftTRQfjxauxbmrgW0sGvyAr2
	sfQ4h02BsGxlisOiwaZTvGyc52B5aXGhlk+acduw1BpnikCD+R/CtygxB52NTAjgmnd9
	DyeNZrBQ1dMyf3TRsMwua6ahhsg2rcH8YdAAtOD/71gpP/96od6kjguX28hPDluLPpyE
	RBvZdf02qVEMihOCvr3lymwURQ+XZE4vEfGnTPQpf6wyxEh4PM/uhPIIQBDeLltSKPTv
	885TkZfmrhOAJjKNmwKVtFAseELstJ+UFNB4pUo6NHUthwJmnTSd1gC2mPta3JyhprHo
	eDsg==
MIME-Version: 1.0
Received: by 10.229.136.197 with SMTP id s5mr9700144qct.42.1344975677084; Tue,
	14 Aug 2012 13:21:17 -0700 (PDT)
Received: by 10.229.136.199 with HTTP; Tue, 14 Aug 2012 13:21:17 -0700 (PDT)
In-Reply-To: <CALNeL8VenXBBGGd+MVQwh+iq8_QYk2jGSXVMvdE1tmWKCR9dvQ@mail.gmail.com>
References: <CALNeL8VenXBBGGd+MVQwh+iq8_QYk2jGSXVMvdE1tmWKCR9dvQ@mail.gmail.com>
Date: Wed, 15 Aug 2012 03:21:17 +0700
Message-ID: <CAG1y0se79GFRKt_E5hL-eN0ORQcH99tOtMtC5wv4DTViRMUPPw@mail.gmail.com>
From: "Fajar A. Nugraha" <list@fajar.net>
To: Kumar Sukhani <kumarsukhani@gmail.com>
X-Gm-Message-State: ALoCoQn1aj5enhwett2e52hkZPYNCvv+BX+U5fuWk8j5GTKC6Tdi2+TmVhuJYWXg60gNbIMNhnV2
Cc: Surabhi Mutha <muthasurabhi@gmail.com>, xen-devel@lists.xensource.com,
	Akash deep agrawal <akashagrawal14@gmail.com>,
	Saurabh Gadia <saurabh.gadia4@gmail.com>
Subject: Re: [Xen-devel] Introducing O.S. system level virtualization in Xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Aug 15, 2012 at 12:31 AM, Kumar Sukhani <kumarsukhani@gmail.com> wrote:

> So we are planning to introduce O.S.-level virtualization features
> in Xen, by proposing an integrated architecture[1] with operating
> system virtualization running on top of one of Xen's VMs.
> Please review our architecture and let us know whether it is worth
> implementing.
>
> [1] http://www.flickr.com/photos/84959360@N02/7782516274/in/photostream


How is this "integrated"?

For example, what is the difference between that proposal and running lxc
on an Ubuntu domU (which you can already do)?

-- 
Fajar

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 14 21:18:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Aug 2012 21:18:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1OUn-0006z3-Qg; Tue, 14 Aug 2012 21:17:49 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Siva.Palagummi@ca.com>) id 1T1OUl-0006yv-Ng
	for xen-devel@lists.xen.org; Tue, 14 Aug 2012 21:17:47 +0000
Received: from [85.158.143.35:62901] by server-2.bemta-4.messagelabs.com id
	05/32-31966-B70CA205; Tue, 14 Aug 2012 21:17:47 +0000
X-Env-Sender: Siva.Palagummi@ca.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1344979063!14026635!1
X-Originating-IP: [74.125.149.201]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13944 invoked from network); 14 Aug 2012 21:17:46 -0000
Received: from na3sys009aog109.obsmtp.com (HELO na3sys009aog109.obsmtp.com)
	(74.125.149.201)
	by server-3.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 14 Aug 2012 21:17:46 -0000
Received: from INHYMS190.ca.com ([155.35.46.47]) (using TLSv1) by
	na3sys009aob109.postini.com ([74.125.148.12]) with SMTP
	ID DSNKUCrAd6xPBCZBP3NJjXaZ6tqS3Yf3sR4O@postini.com;
	Tue, 14 Aug 2012 14:17:45 PDT
Received: from INHYMS173.ca.com (155.35.35.47) by INHYMS190.ca.com
	(155.35.46.47) with Microsoft SMTP Server (TLS) id 14.1.355.2;
	Wed, 15 Aug 2012 02:47:38 +0530
Received: from INHYMS111B.ca.com ([169.254.4.84]) by INHYMS173.ca.com
	([155.35.35.47]) with mapi id 14.01.0355.002;
	Wed, 15 Aug 2012 02:47:38 +0530
From: "Palagummi, Siva" <Siva.Palagummi@ca.com>
To: Jan Beulich <JBeulich@suse.com>
Thread-Topic: [Xen-devel] [PATCH RFC] xen/netback: Count ring slots properly
	when larger MTU sizes are used
Thread-Index: Ac146Cc2rIyVmg4PT4CbrdsCnnwOsQAF0kYAAFVDoZA=
Date: Tue, 14 Aug 2012 21:17:38 +0000
Message-ID: <7D7C26B1462EB14CB0E7246697A18C13119012@INHYMS111B.ca.com>
References: <7D7C26B1462EB14CB0E7246697A18C1310F8E2@INHYMS111B.ca.com>
	<5028D6AC0200007800094651@nat28.tlf.novell.com>
In-Reply-To: <5028D6AC0200007800094651@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-cr-puzzleid: {76E2D7C5-832B-4955-90A8-E81211A9E716}
x-cr-hashedpuzzle: Df0T Fivd GNZ/ IORf IYQo Ib7g KztB KzzE LfKD OzZX Srge
	U4jV VPNG VSYJ XpYE Yhkq; 2;
	agBiAGUAdQBsAGkAYwBoAEAAcwB1AHMAZQAuAGMAbwBtADsAeABlAG4ALQBkAGUAdgBlAGwAQABsAGkAcwB0AHMALgB4AGUAbgAuAG8AcgBnAA==;
	Sosha1_v1; 7; {76E2D7C5-832B-4955-90A8-E81211A9E716};
	cwBpAHYAYQAuAHAAYQBsAGEAZwB1AG0AbQBpAEAAYwBhAC4AYwBvAG0A; Tue,
	14 Aug 2012 21:15:46 GMT;
	UgBFADoAIABbAFgAZQBuAC0AZABlAHYAZQBsAF0AIABbAFAAQQBUAEMASAAgAFIARgBDAF0AIAB4AGUAbgAvAG4AZQB0AGIAYQBjAGsAOgAgAEMAbwB1AG4AdAAgAHIAaQBuAGcAIABzAGwAbwB0AHMAIABwAHIAbwBwAGUAcgBsAHkAIAB3AGgAZQBuACAAbABhAHIAZwBlAHIAIABNAFQAVQAgAHMAaQB6AGUAcwAgAGEAcgBlACAAdQBzAGUAZAA=
x-originating-ip: [10.134.16.218]
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH RFC] xen/netback: Count ring slots properly
 when larger MTU sizes are used
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: Monday, August 13, 2012 1:58 PM
> To: Palagummi, Siva
> Cc: xen-devel@lists.xen.org
> Subject: Re: [Xen-devel] [PATCH RFC] xen/netback: Count ring slots
> properly when larger MTU sizes are used
> 
> >>> On 13.08.12 at 02:12, "Palagummi, Siva" <Siva.Palagummi@ca.com>
> wrote:
> >--- a/drivers/net/xen-netback/netback.c	2012-01-25 19:39:32.000000000 -0500
> >+++ b/drivers/net/xen-netback/netback.c	2012-08-12 15:50:50.000000000 -0400
> >@@ -623,6 +623,24 @@ static void xen_netbk_rx_action(struct x
> >
> > 		count += nr_frags + 1;
> >
> >+		/*
> >+		 * The logic here should be somewhat similar to
> >+		 * xen_netbk_count_skb_slots. In case of larger MTU size,
> 
> Is there a reason why you can't simply use that function then?
> Afaict it's being used on the very same skb before it gets put on
> rx_queue already anyway.
> 

I did think about it. But that would mean iterating through a similar piece of code twice, with additional function calls; the netbk_gop_skb-->netbk_gop_frag_copy sequence already executes similar code. I was also not sure about any other implications, so I decided to fix it by adding a few lines of code inline.

> >+		 * skb head length may be more than a PAGE_SIZE. We need to
> >+		 * consider ring slots consumed by that data. If we do not,
> >+		 * then within this loop itself we end up consuming more meta
> >+		 * slots, triggering the BUG_ON below. With this fix we may end up
> >+		 * iterating through xen_netbk_rx_action multiple times
> >+		 * instead of crashing the netback thread.
> >+		 */
> >+
> >+
> >+		count += DIV_ROUND_UP(skb_headlen(skb), PAGE_SIZE);
> 
> This now over-accounts by one I think (due to the "+ 1" above;
> the calculation here really is to replace that increment).
> 
> Jan
> 
I also wasn't sure about the actual purpose of the "+1" above, i.e. whether it is meant to take care of skb_headlen, of the non-zero gso_size case, or of something else. That's why I left it in, so that the loop exits on the safe side. If someone who knows this area of the code can confirm that we do not need it, I will create a new patch. In my environment I did observe that with my patch "count" is always greater than the
actual number of meta slots produced, because of this additional "+1". When I took out the extra addition, count was always equal to the actual meta slots produced, and the loop exited safely while producing more meta slots under heavy traffic.

Thanks
Siva
 
> >+
> >+		if (skb_shinfo(skb)->gso_size)
> >+			count++;
> >+
> > 		__skb_queue_tail(&rxq, skb);
> >
> > 		/* Filled the batch queue? */
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 15 00:49:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 00:49:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1Rmi-00012i-IG; Wed, 15 Aug 2012 00:48:32 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T1Rmg-00012d-47
	for xen-devel@lists.xensource.com; Wed, 15 Aug 2012 00:48:30 +0000
Received: from [85.158.139.83:25797] by server-12.bemta-5.messagelabs.com id
	AB/1A-22359-DD1FA205; Wed, 15 Aug 2012 00:48:29 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-3.tower-182.messagelabs.com!1344991708!28154436!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkwODU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13261 invoked from network); 15 Aug 2012 00:48:28 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-3.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Aug 2012 00:48:28 -0000
X-IronPort-AV: E=Sophos;i="4.77,770,1336348800"; d="scan'208";a="14012499"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	15 Aug 2012 00:48:27 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 15 Aug 2012 01:48:27 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1T1Rmd-0000QV-61;
	Wed, 15 Aug 2012 00:48:27 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1T1Rmc-0003Rp-OG;
	Wed, 15 Aug 2012 01:48:26 +0100
To: xen-devel@lists.xensource.com
Message-ID: <osstest-13601-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 15 Aug 2012 01:48:26 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13601: regressions - trouble:
	broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13601 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13601/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-winxpsp3  3 host-install(3)         broken REGR. vs. 13600
 test-amd64-i386-pair          6 xen-install/dst_host      fail REGR. vs. 13600
 test-amd64-i386-pair          5 xen-install/src_host      fail REGR. vs. 13600
 test-amd64-amd64-pair        3 host-install/src_host(3) broken REGR. vs. 13600
 test-amd64-amd64-pair        4 host-install/dst_host(4) broken REGR. vs. 13600

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13600
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13600
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13600
 test-amd64-amd64-xl-pcipt-intel  3 host-install(3)      broken REGR. vs. 13600
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13600

Tests which did not succeed, but are not blocking:
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  af7143d97fa2
baseline version:
 xen                  dc56a9defa30

------------------------------------------------------------
People who touched revisions under test:
  Andres Lagar-Cavilla <andres@lagarcavilla.org>
  George Dunlap <george.dunlap@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              broken  
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        broken  
 test-amd64-i386-pair                                         fail    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 broken  
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   25750:af7143d97fa2
tag:         tip
user:        Ian Jackson <Ian.Jackson@eu.citrix.com>
date:        Tue Aug 14 15:59:38 2012 +0100
    
    QEMU_TAG update
    
    
changeset:   25749:dc56a9defa30
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Aug 14 10:28:14 2012 +0200
    
    x86/PoD: fix (un)locking after 24772:28edc2b31a9b
    
    That c/s introduced a double unlock on the out-of-memory error path of
    p2m_pod_demand_populate().
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: George Dunlap <george.dunlap@eu.citrix.com>
    Acked-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-amd64-xl-winxpsp3                                 broken  
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   25750:af7143d97fa2
tag:         tip
user:        Ian Jackson <Ian.Jackson@eu.citrix.com>
date:        Tue Aug 14 15:59:38 2012 +0100
    
    QEMU_TAG update
    
    
changeset:   25749:dc56a9defa30
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Aug 14 10:28:14 2012 +0200
    
    x86/PoD: fix (un)locking after 24772:28edc2b31a9b
    
    That c/s introduced a double unlock on the out-of-memory error path of
    p2m_pod_demand_populate().
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: George Dunlap <george.dunlap@eu.citrix.com>
    Acked-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 15 06:53:07 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 06:53:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1XSz-0006hY-Jr; Wed, 15 Aug 2012 06:52:33 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <xudong.hao@intel.com>) id 1T1XSy-0006hE-1h
	for xen-devel@lists.xen.org; Wed, 15 Aug 2012 06:52:32 +0000
X-Env-Sender: xudong.hao@intel.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1345013544!2929300!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzA3OTIz\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8107 invoked from network); 15 Aug 2012 06:52:24 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-6.tower-27.messagelabs.com with SMTP;
	15 Aug 2012 06:52:24 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga101.jf.intel.com with ESMTP; 14 Aug 2012 23:52:23 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.77,772,1336374000"; d="scan'208";a="186591382"
Received: from xhao-dev.sh.intel.com (HELO localhost.localdomain)
	([10.239.48.48])
	by orsmga002.jf.intel.com with ESMTP; 14 Aug 2012 23:52:22 -0700
From: Xudong Hao <xudong.hao@intel.com>
To: xen-devel@lists.xen.org
Date: Wed, 15 Aug 2012 14:54:41 +0800
Message-Id: <1345013682-20618-1-git-send-email-xudong.hao@intel.com>
X-Mailer: git-send-email 1.5.5
Cc: Xudong Hao <xudong.hao@intel.com>, ian.jackson@eu.citrix.com,
	Xiantao Zhang <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH 1/3] xen/tools: Add 64 bits big bar support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Currently hvmloader assumes that all PCI device BARs are mapped below the 4G
boundary. A device whose BAR is larger than 4G cannot fit there and must be
mapped at an address above 4G. This patch enables 64-bit ("big") BAR support
in hvmloader.

Signed-off-by: Xiantao Zhang <xiantao.zhang@intel.com>
Signed-off-by: Xudong Hao <xudong.hao@intel.com>
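
[Editorial note: the probe-and-mask sequence this patch uses to size a BAR
(write all-ones, read back, mask the flag bits, isolate the lowest set bit,
and fold in the upper dword for a 64-bit BAR) can be sketched as a
standalone helper. This is an illustrative sketch only; the function name
and parameters are hypothetical and not part of the patch.]

```c
#include <stdint.h>

/* Hypothetical helper (not hvmloader code): derive a BAR's size from the
 * values read back after writing all-ones to the BAR register(s).
 * lo/hi are the raw readbacks of the lower/upper BAR dwords; hi is
 * ignored unless is_64bar is set. */
uint64_t bar_size_from_readback(uint32_t lo, uint32_t hi,
                                int is_64bar, int is_mem)
{
    uint64_t sz = lo;

    if (is_64bar)
        sz |= (uint64_t)hi << 32;   /* fold in the upper dword */

    /* Mask off the flag bits: low 4 bits for memory BARs, low 2 bits
     * (within the 16-bit I/O decode) for I/O BARs. */
    sz &= is_mem ? 0xfffffffffffffff0ULL : 0xfffcULL;

    /* The lowest set bit of the size mask is the BAR size. */
    sz &= ~(sz - 1);
    return sz;
}
```

For example, an 8GiB 64-bit memory BAR reads back as lo=0x0000000c (flag
bits only) and hi=0xfffffffe, giving a size of 0x200000000.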

diff -r 663eb766cdde tools/firmware/hvmloader/config.h
--- a/tools/firmware/hvmloader/config.h	Tue Jul 24 17:02:04 2012 +0200
+++ b/tools/firmware/hvmloader/config.h	Thu Jul 26 15:40:01 2012 +0800
@@ -53,6 +53,10 @@ extern struct bios_config ovmf_config;
 /* MMIO hole: Hardcoded defaults, which can be dynamically expanded. */
 #define PCI_MEM_START       0xf0000000
 #define PCI_MEM_END         0xfc000000
+#define PCI_HIGH_MEM_START  0xa000000000ULL
+#define PCI_HIGH_MEM_END    0xf000000000ULL
+#define PCI_MIN_MMIO_ADDR   0x80000000
+
 extern unsigned long pci_mem_start, pci_mem_end;
 
 
diff -r 663eb766cdde tools/firmware/hvmloader/pci.c
--- a/tools/firmware/hvmloader/pci.c	Tue Jul 24 17:02:04 2012 +0200
+++ b/tools/firmware/hvmloader/pci.c	Thu Jul 26 15:40:01 2012 +0800
@@ -31,24 +31,33 @@
 unsigned long pci_mem_start = PCI_MEM_START;
 unsigned long pci_mem_end = PCI_MEM_END;
 
+uint64_t pci_high_mem_start = PCI_HIGH_MEM_START;
+uint64_t pci_high_mem_end = PCI_HIGH_MEM_END;
+
 enum virtual_vga virtual_vga = VGA_none;
 unsigned long igd_opregion_pgbase = 0;
 
 void pci_setup(void)
 {
-    uint32_t base, devfn, bar_reg, bar_data, bar_sz, cmd, mmio_total = 0;
+    uint8_t is_64bar, using_64bar, bar64_relocate = 0;
+    uint32_t devfn, bar_reg, cmd, bar_data, bar_data_upper;
+    uint64_t base, bar_sz, bar_sz_upper, mmio_total = 0;
     uint32_t vga_devfn = 256;
     uint16_t class, vendor_id, device_id;
     unsigned int bar, pin, link, isa_irq;
+    int64_t mmio_left;
 
     /* Resources assignable to PCI devices via BARs. */
     struct resource {
-        uint32_t base, max;
-    } *resource, mem_resource, io_resource;
+        uint64_t base, max;
+    } *resource, mem_resource, high_mem_resource, io_resource;
 
     /* Create a list of device BARs in descending order of size. */
     struct bars {
-        uint32_t devfn, bar_reg, bar_sz;
+        uint32_t is_64bar;
+        uint32_t devfn;
+        uint32_t bar_reg;
+        uint64_t bar_sz;
     } *bars = (struct bars *)scratch_start;
     unsigned int i, nr_bars = 0;
 
@@ -133,23 +142,35 @@ void pci_setup(void)
         /* Map the I/O memory and port resources. */
         for ( bar = 0; bar < 7; bar++ )
         {
+            bar_sz_upper = 0;
             bar_reg = PCI_BASE_ADDRESS_0 + 4*bar;
             if ( bar == 6 )
                 bar_reg = PCI_ROM_ADDRESS;
 
             bar_data = pci_readl(devfn, bar_reg);
+            is_64bar = !!((bar_data & (PCI_BASE_ADDRESS_SPACE |
+                         PCI_BASE_ADDRESS_MEM_TYPE_MASK)) ==
+                         (PCI_BASE_ADDRESS_SPACE_MEMORY |
+                         PCI_BASE_ADDRESS_MEM_TYPE_64));
             pci_writel(devfn, bar_reg, ~0);
             bar_sz = pci_readl(devfn, bar_reg);
             pci_writel(devfn, bar_reg, bar_data);
+
+            if (is_64bar) {
+                bar_data_upper = pci_readl(devfn, bar_reg + 4);
+                pci_writel(devfn, bar_reg + 4, ~0);
+                bar_sz_upper = pci_readl(devfn, bar_reg + 4);
+                pci_writel(devfn, bar_reg + 4, bar_data_upper);
+                bar_sz = (bar_sz_upper << 32) | bar_sz;
+            }
+            bar_sz &= (((bar_data & PCI_BASE_ADDRESS_SPACE) ==
+                        PCI_BASE_ADDRESS_SPACE_MEMORY) ?
+                       0xfffffffffffffff0 :
+                       (PCI_BASE_ADDRESS_IO_MASK & 0xffff));
+            bar_sz &= ~(bar_sz - 1);
             if ( bar_sz == 0 )
                 continue;
 
-            bar_sz &= (((bar_data & PCI_BASE_ADDRESS_SPACE) ==
-                        PCI_BASE_ADDRESS_SPACE_MEMORY) ?
-                       PCI_BASE_ADDRESS_MEM_MASK :
-                       (PCI_BASE_ADDRESS_IO_MASK & 0xffff));
-            bar_sz &= ~(bar_sz - 1);
-
             for ( i = 0; i < nr_bars; i++ )
                 if ( bars[i].bar_sz < bar_sz )
                     break;
@@ -157,6 +178,7 @@ void pci_setup(void)
             if ( i != nr_bars )
                 memmove(&bars[i+1], &bars[i], (nr_bars-i) * sizeof(*bars));
 
+            bars[i].is_64bar = is_64bar;
             bars[i].devfn   = devfn;
             bars[i].bar_reg = bar_reg;
             bars[i].bar_sz  = bar_sz;
@@ -167,11 +189,8 @@ void pci_setup(void)
 
             nr_bars++;
 
-            /* Skip the upper-half of the address for a 64-bit BAR. */
-            if ( (bar_data & (PCI_BASE_ADDRESS_SPACE |
-                              PCI_BASE_ADDRESS_MEM_TYPE_MASK)) == 
-                 (PCI_BASE_ADDRESS_SPACE_MEMORY | 
-                  PCI_BASE_ADDRESS_MEM_TYPE_64) )
+            /* The upper half has already been accounted for, so skip it. */
+            if (is_64bar)
                 bar++;
         }
 
@@ -193,10 +212,14 @@ void pci_setup(void)
         pci_writew(devfn, PCI_COMMAND, cmd);
     }
 
-    while ( (mmio_total > (pci_mem_end - pci_mem_start)) &&
-            ((pci_mem_start << 1) != 0) )
+    while ( mmio_total > (pci_mem_end - pci_mem_start) && pci_mem_start )
         pci_mem_start <<= 1;
 
+    if (!pci_mem_start) {
+        bar64_relocate = 1;
+        pci_mem_start = PCI_MIN_MMIO_ADDR;
+    }
+
     /* Relocate RAM that overlaps PCI space (in 64k-page chunks). */
     while ( (pci_mem_start >> PAGE_SHIFT) < hvm_info->low_mem_pgend )
     {
@@ -218,11 +241,15 @@ void pci_setup(void)
         hvm_info->high_mem_pgend += nr_pages;
     }
 
+    high_mem_resource.base = pci_high_mem_start; 
+    high_mem_resource.max = pci_high_mem_end;
     mem_resource.base = pci_mem_start;
     mem_resource.max = pci_mem_end;
     io_resource.base = 0xc000;
     io_resource.max = 0x10000;
 
+    mmio_left = pci_mem_end - pci_mem_start;
+
     /* Assign iomem and ioport resources in descending order of size. */
     for ( i = 0; i < nr_bars; i++ )
     {
@@ -230,13 +257,21 @@ void pci_setup(void)
         bar_reg = bars[i].bar_reg;
         bar_sz  = bars[i].bar_sz;
 
+        using_64bar = bars[i].is_64bar && bar64_relocate && (mmio_left < bar_sz);
         bar_data = pci_readl(devfn, bar_reg);
 
         if ( (bar_data & PCI_BASE_ADDRESS_SPACE) ==
              PCI_BASE_ADDRESS_SPACE_MEMORY )
         {
-            resource = &mem_resource;
-            bar_data &= ~PCI_BASE_ADDRESS_MEM_MASK;
+            if (using_64bar) {
+                resource = &high_mem_resource;
+                bar_data &= ~PCI_BASE_ADDRESS_MEM_MASK;
+            } 
+            else {
+                resource = &mem_resource;
+                bar_data &= ~PCI_BASE_ADDRESS_MEM_MASK;
+            }
+            mmio_left -= bar_sz;
         }
         else
         {
@@ -244,13 +279,14 @@ void pci_setup(void)
             bar_data &= ~PCI_BASE_ADDRESS_IO_MASK;
         }
 
-        base = (resource->base + bar_sz - 1) & ~(bar_sz - 1);
-        bar_data |= base;
+        base = (resource->base + bar_sz - 1) & ~(uint64_t)(bar_sz - 1);
+        bar_data |= (uint32_t)base;
+        bar_data_upper = (uint32_t)(base >> 32);
         base += bar_sz;
 
         if ( (base < resource->base) || (base > resource->max) )
         {
-            printf("pci dev %02x:%x bar %02x size %08x: no space for "
+            printf("pci dev %02x:%x bar %02x size %llx: no space for "
                    "resource!\n", devfn>>3, devfn&7, bar_reg, bar_sz);
             continue;
         }
@@ -258,7 +294,9 @@ void pci_setup(void)
         resource->base = base;
 
         pci_writel(devfn, bar_reg, bar_data);
-        printf("pci dev %02x:%x bar %02x size %08x: %08x\n",
+        if (using_64bar)
+            pci_writel(devfn, bar_reg + 4, bar_data_upper);
+        printf("pci dev %02x:%x bar %02x size %llx: %08x\n",
                devfn>>3, devfn&7, bar_reg, bar_sz, bar_data);
 
         /* Now enable the memory or I/O mapping. */

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 15 06:53:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 06:53:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1XSw-0006hK-6G; Wed, 15 Aug 2012 06:52:30 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xudong.hao@intel.com>) id 1T1XSu-0006hF-B6
	for xen-devel@lists.xen.org; Wed, 15 Aug 2012 06:52:28 +0000
Received: from [85.158.138.51:61953] by server-8.bemta-3.messagelabs.com id
	AF/5A-29583-B274B205; Wed, 15 Aug 2012 06:52:27 +0000
X-Env-Sender: xudong.hao@intel.com
X-Msg-Ref: server-4.tower-174.messagelabs.com!1345013545!28306125!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzA3OTIz\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30945 invoked from network); 15 Aug 2012 06:52:25 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-4.tower-174.messagelabs.com with SMTP;
	15 Aug 2012 06:52:25 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga101.jf.intel.com with ESMTP; 14 Aug 2012 23:52:24 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.77,772,1336374000"; d="scan'208";a="186591387"
Received: from xhao-dev.sh.intel.com (HELO localhost.localdomain)
	([10.239.48.48])
	by orsmga002.jf.intel.com with ESMTP; 14 Aug 2012 23:52:23 -0700
From: Xudong Hao <xudong.hao@intel.com>
To: xen-devel@lists.xen.org
Date: Wed, 15 Aug 2012 14:54:42 +0800
Message-Id: <1345013682-20618-2-git-send-email-xudong.hao@intel.com>
X-Mailer: git-send-email 1.5.5
In-Reply-To: <1345013682-20618-1-git-send-email-xudong.hao@intel.com>
References: <1345013682-20618-1-git-send-email-xudong.hao@intel.com>
Cc: Xudong Hao <xudong.hao@intel.com>, ian.jackson@eu.citrix.com,
	Xiantao Zhang <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH 3/3] qemu-xen: Add 64 bits big bar support on
	qemu xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Currently qemu-xen assumes that all PCI device BARs are mapped below the 4G
boundary. A device whose BAR is larger than 4G cannot fit there and must be
mapped at an address above 4G. This patch enables 64-bit ("big") BAR support
in qemu-xen.

Signed-off-by: Xiantao Zhang <xiantao.zhang@intel.com>
Signed-off-by: Xudong Hao <xudong.hao@intel.com>
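
[Editorial note: in this patch a 64-bit BAR occupies two consecutive 32-bit
I/O regions, and pt_iomem_map reconstructs the full guest address and size
by shifting the upper region's values into the high dword. The combining
step can be sketched as below; the struct and function names are
hypothetical stand-ins, not qemu-xen types.]

```c
#include <stdint.h>

/* Hypothetical stand-in for a 32-bit PCI I/O region slot. */
struct io_region {
    uint32_t addr;   /* lower or upper 32 bits, depending on the slot */
    uint32_t size;
};

/* Combine region i (lower half) with region i+1 (upper half) into the
 * full 64-bit BAR address, mirroring the e_phys64 computation. */
uint64_t combine_bar64(const struct io_region *lower,
                       const struct io_region *upper)
{
    return ((uint64_t)upper->addr << 32) | lower->addr;
}
```

For example, a lower-half address of 0 with an upper-half value of 0xa0
yields guest address 0xa000000000, inside the PCI_HIGH_MEM window that
patch 1/3 defines in hvmloader.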

diff --git a/hw/pass-through.c b/hw/pass-through.c
index 6e396e3..9087fa5 100644
--- a/hw/pass-through.c
+++ b/hw/pass-through.c
@@ -1117,13 +1117,13 @@ uint8_t pci_intx(struct pt_dev *ptdev)
 }
 
 static int _pt_iomem_helper(struct pt_dev *assigned_device, int i,
-                            uint32_t e_base, uint32_t e_size, int op)
+                            unsigned long e_base, unsigned long e_size, int op)
 {
     if ( has_msix_mapping(assigned_device, i) )
     {
-        uint32_t msix_last_pfn = (assigned_device->msix->mmio_base_addr - 1 +
+        unsigned long msix_last_pfn = (assigned_device->msix->mmio_base_addr - 1 +
             assigned_device->msix->total_entries * 16) >> XC_PAGE_SHIFT;
-        uint32_t bar_last_pfn = (e_base + e_size - 1) >> XC_PAGE_SHIFT;
+        unsigned long bar_last_pfn = (e_base + e_size - 1) >> XC_PAGE_SHIFT;
         int ret = 0;
 
         if ( assigned_device->msix->table_off )
@@ -1159,26 +1159,33 @@ static void pt_iomem_map(PCIDevice *d, int i, uint32_t e_phys, uint32_t e_size,
                          int type)
 {
     struct pt_dev *assigned_device  = (struct pt_dev *)d;
-    uint32_t old_ebase = assigned_device->bases[i].e_physbase;
+    uint64_t e_phys64 = e_phys, e_size64 = e_size, old_ebase = assigned_device->bases[i].e_physbase;
     int first_map = ( assigned_device->bases[i].e_size == 0 );
+    PCIIORegion *r = &d->io_regions[i];
     int ret = 0;
 
-    assigned_device->bases[i].e_physbase = e_phys;
-    assigned_device->bases[i].e_size= e_size;
-
-    PT_LOG("e_phys=%08x maddr=%lx type=%d len=%d index=%d first_map=%d\n",
-        e_phys, (unsigned long)assigned_device->bases[i].access.maddr,
-        type, e_size, i, first_map);
-
-    if ( e_size == 0 )
+    if ( assigned_device->bases[i + 1].bar_flag == PT_BAR_FLAG_UPPER) {
+        uint64_t upper_addr = (r + 1)->addr;
+        uint64_t upper_size = (r + 1)->size;
+        e_phys64 += upper_addr << 32;
+        e_size64 += upper_size << 32;
+    } 
+    PT_LOG("e_phys64=%lx maddr=%lx type=%d len=%lx index=%d first_map=%d\n",
+        e_phys64, (unsigned long)assigned_device->bases[i].access.maddr,
+        type, e_size64, i, first_map);
+   
+    if ( e_size64 == 0 || !valid_addr(e_phys64) )
         return;
 
+    assigned_device->bases[i].e_physbase = e_phys64;
+    assigned_device->bases[i].e_size= e_size64;
+
     if ( !first_map && old_ebase != -1 )
     {
         if ( has_msix_mapping(assigned_device, i) )
             unregister_iomem(assigned_device->msix->mmio_base_addr);
 
-        ret = _pt_iomem_helper(assigned_device, i, old_ebase, e_size,
+        ret = _pt_iomem_helper(assigned_device, i, old_ebase, e_size64,
                                DPCI_REMOVE_MAPPING);
         if ( ret != 0 )
         {
@@ -1188,7 +1195,7 @@ static void pt_iomem_map(PCIDevice *d, int i, uint32_t e_phys, uint32_t e_size,
     }
 
     /* map only valid guest address */
-    if (e_phys != -1)
+    if (e_phys64 != -1)
     {
         if ( has_msix_mapping(assigned_device, i) )
         {
@@ -1202,7 +1209,7 @@ static void pt_iomem_map(PCIDevice *d, int i, uint32_t e_phys, uint32_t e_size,
                  assigned_device->msix->mmio_index);
         }
 
-        ret = _pt_iomem_helper(assigned_device, i, e_phys, e_size,
+        ret = _pt_iomem_helper(assigned_device, i, e_phys64, e_size64,
                                DPCI_ADD_MAPPING);
         if ( ret != 0 )
         {
@@ -1210,7 +1217,7 @@ static void pt_iomem_map(PCIDevice *d, int i, uint32_t e_phys, uint32_t e_size,
             return;
         }
 
-        if ( old_ebase != e_phys && old_ebase != -1 )
+        if ( old_ebase != e_phys64 && old_ebase != -1 )
             pt_msix_update_remap(assigned_device, i);
     }
 }
@@ -1853,7 +1860,7 @@ exit:
 
 static void pt_libpci_fixup(struct pci_dev *dev)
 {
-#if !defined(PCI_LIB_VERSION) || PCI_LIB_VERSION < 0x030100
+#if !defined(PCI_LIB_VERSION) || PCI_LIB_VERSION <= 0x030100
     int i;
     FILE *fp;
     char path[PATH_MAX], buf[256];
@@ -1907,7 +1914,7 @@ static int pt_dev_is_virtfn(struct pci_dev *dev)
 
 static int pt_register_regions(struct pt_dev *assigned_device)
 {
-    int i = 0;
+    int i = 0, current_bar, bar_flag;
     uint32_t bar_data = 0;
     struct pci_dev *pci_dev = assigned_device->pci_dev;
     PCIDevice *d = &assigned_device->dev;
@@ -1916,6 +1923,7 @@ static int pt_register_regions(struct pt_dev *assigned_device)
     /* Register PIO/MMIO BARs */
     for ( i = 0; i < PCI_BAR_ENTRIES; i++ )
     {
+        current_bar = i;
         if ( pt_pci_base_addr(pci_dev->base_addr[i]) )
         {
             assigned_device->bases[i].e_physbase =
@@ -1928,18 +1936,26 @@ static int pt_register_regions(struct pt_dev *assigned_device)
                 pci_register_io_region((PCIDevice *)assigned_device, i,
                     (uint32_t)pci_dev->size[i], PCI_ADDRESS_SPACE_IO,
                     pt_ioport_map);
-            else if ( pci_dev->base_addr[i] & PCI_ADDRESS_SPACE_MEM_PREFETCH )
+            else if ( pci_dev->base_addr[i] & PCI_ADDRESS_SPACE_MEM_64BIT) {
+                bar_flag = pci_dev->base_addr[i] & 0xf;
                 pci_register_io_region((PCIDevice *)assigned_device, i,
-                    (uint32_t)pci_dev->size[i], PCI_ADDRESS_SPACE_MEM_PREFETCH,
+                    (uint32_t)pci_dev->size[i], bar_flag,
                     pt_iomem_map);
-            else
-                pci_register_io_region((PCIDevice *)assigned_device, i,
-                    (uint32_t)pci_dev->size[i], PCI_ADDRESS_SPACE_MEM,
+                pci_register_io_region((PCIDevice *)assigned_device, i + 1,
+                    (uint32_t)(pci_dev->size[i] >> 32), PCI_ADDRESS_SPACE_MEM,
                     pt_iomem_map);
-
-            PT_LOG("IO region registered (size=0x%08x base_addr=0x%08x)\n",
-                (uint32_t)(pci_dev->size[i]),
-                (uint32_t)(pci_dev->base_addr[i]));
+                /* skip upper half. */
+                i++;
+            } 
+            else {
+                bar_flag = pci_dev->base_addr[i] & 0xf;
+                pci_register_io_region((PCIDevice *)assigned_device, i,
+                        (uint32_t)(pci_dev->size[i]), bar_flag,
+                        pt_iomem_map);
+            }
+            PT_LOG("IO region registered (bar:%d, size=0x%lx base_addr=0x%lx)\n",
+                    current_bar, (pci_dev->size[current_bar]),
+                    (pci_dev->base_addr[current_bar]));
         }
     }
 
@@ -1984,7 +2000,7 @@ static void pt_unregister_regions(struct pt_dev *assigned_device)
 
         type = d->io_regions[i].type;
 
-        if ( type == PCI_ADDRESS_SPACE_MEM ||
+        if ( type == PCI_ADDRESS_SPACE_MEM || type == PCI_ADDRESS_SPACE_MEM_64BIT ||
              type == PCI_ADDRESS_SPACE_MEM_PREFETCH )
         {
             ret = _pt_iomem_helper(assigned_device, i,
@@ -2117,6 +2133,7 @@ int pt_pci_host_write(struct pci_dev *pci_dev, u32 addr, u32 val, int len)
     return ret;
 }
 
+static uint64_t pt_get_bar_size(PCIIORegion *r);
 /* parse BAR */
 static int pt_bar_reg_parse(
         struct pt_dev *ptdev, struct pt_reg_info_tbl *reg)
@@ -2145,7 +2162,7 @@ static int pt_bar_reg_parse(
 
     /* check unused BAR */
     r = &d->io_regions[index];
-    if (!r->size)
+    if (!pt_get_bar_size(r))
         goto out;
 
     /* for ExpROM BAR */
@@ -2165,6 +2182,86 @@ out:
     return bar_flag;
 }
 
+static bool is_64bit_bar(PCIIORegion *r)
+{
+    return !!(r->type & PCI_ADDRESS_SPACE_MEM_64BIT);
+}
+
+static uint64_t pt_get_bar_size(PCIIORegion *r)
+{
+    if (is_64bit_bar(r))
+    {
+        uint64_t size64;
+        size64 = (r + 1)->size; 
+        size64 <<= 32; 
+        size64 += r->size;
+        return size64; 
+    }
+    return r->size; 
+}
+
+static uint64_t pt_get_bar_base(PCIIORegion *r)
+{
+    if (is_64bit_bar(r))
+    {
+        uint64_t base64;
+
+        base64 = (r + 1)->addr; 
+        base64 <<= 32; 
+        base64 += r->addr;
+        return base64; 
+    }
+    return r->addr; 
+}
+
+int pt_chk_bar_overlap(PCIBus *bus, int devfn, uint64_t addr,
+                        uint64_t size, uint8_t type)
+{
+    PCIDevice *devices = NULL;
+    PCIIORegion *r;
+    int ret = 0;
+    int i, j;
+
+    /* check Overlapped to Base Address */
+    for (i=0; i<256; i++)
+    {
+        if ( !(devices = bus->devices[i]) )
+            continue;
+
+        /* skip itself */
+        if (devices->devfn == devfn)
+            continue;
+        
+        for (j=0; j<PCI_NUM_REGIONS; j++)
+        {
+            r = &devices->io_regions[j];
+
+            /* skip different resource type, but don't skip when
+             * prefetch and non-prefetch memory are compared.
+             */
+            if (type != r->type)
+            {
+                if (type == PCI_ADDRESS_SPACE_IO ||
+                    r->type == PCI_ADDRESS_SPACE_IO)
+                    continue;
+            }
+
+            if ((addr < (pt_get_bar_base(r) + pt_get_bar_size(r))) && ((addr + size) > pt_get_bar_base(r)))
+            {
+                printf("Overlapped to device[%02x:%02x.%x][Region:%d]"
+                    "[Address:%lxh][Size:%lxh]\n", bus->bus_num,
+                    (devices->devfn >> 3) & 0x1F, (devices->devfn & 0x7),
+                    j, pt_get_bar_base(r), pt_get_bar_size(r));
+                ret = 1;
+                goto out;
+            }
+        }
+    }
+
+out:
+    return ret;
+}
+
 /* mapping BAR */
 static void pt_bar_mapping_one(struct pt_dev *ptdev, int bar, int io_enable,
     int mem_enable)
@@ -2174,13 +2271,13 @@ static void pt_bar_mapping_one(struct pt_dev *ptdev, int bar, int io_enable,
     struct pt_reg_grp_tbl *reg_grp_entry = NULL;
     struct pt_reg_tbl *reg_entry = NULL;
     struct pt_region *base = NULL;
-    uint32_t r_size = 0, r_addr = -1;
+    uint64_t r_size = 0, r_addr = -1;
     int ret = 0;
 
     r = &dev->io_regions[bar];
-
+    
     /* check valid region */
-    if (!r->size)
+    if (!pt_get_bar_size(r))
         return;
 
     base = &ptdev->bases[bar];
@@ -2190,12 +2287,13 @@ static void pt_bar_mapping_one(struct pt_dev *ptdev, int bar, int io_enable,
            return;
 
     /* copy region address to temporary */
-    r_addr = r->addr;
+    r_addr = pt_get_bar_base(r);
 
     /* need unmapping in case I/O Space or Memory Space disable */
     if (((base->bar_flag == PT_BAR_FLAG_IO) && !io_enable ) ||
         ((base->bar_flag == PT_BAR_FLAG_MEM) && !mem_enable ))
         r_addr = -1;
+
     if ( (bar == PCI_ROM_SLOT) && (r_addr != -1) )
     {
         reg_grp_entry = pt_find_reg_grp(ptdev, PCI_ROM_ADDRESS);
@@ -2208,26 +2306,27 @@ static void pt_bar_mapping_one(struct pt_dev *ptdev, int bar, int io_enable,
     }
 
     /* prevent guest software mapping memory resource to 00000000h */
-    if ((base->bar_flag == PT_BAR_FLAG_MEM) && (r_addr == 0))
+    if ((base->bar_flag == PT_BAR_FLAG_MEM) && (pt_get_bar_base(r) == 0))
         r_addr = -1;
 
     /* align resource size (memory type only) */
-    r_size = r->size;
+    r_size = pt_get_bar_size(r);
     PT_GET_EMUL_SIZE(base->bar_flag, r_size);
 
     /* check overlapped address */
     ret = pt_chk_bar_overlap(dev->bus, dev->devfn,
                     r_addr, r_size, r->type);
     if (ret > 0)
-        PT_LOG_DEV(dev, "Warning: [Region:%d][Address:%08xh]"
-            "[Size:%08xh] is overlapped.\n", bar, r_addr, r_size);
+        PT_LOG("Warning: ptdev[%02x:%02x.%x][Region:%d][Address:%lxh]"
+            "[Size:%lxh] is overlapped.\n", pci_bus_num(dev->bus),
+             PCI_SLOT(dev->devfn), PCI_FUNC(dev->devfn), bar, r_addr, r_size);
 
     /* check whether we need to update the mapping or not */
     if (r_addr != ptdev->bases[bar].e_physbase)
     {
         /* mapping BAR */
-        r->map_func((PCIDevice *)ptdev, bar, r_addr,
-                     r_size, r->type);
+        r->map_func((PCIDevice *)ptdev, bar, (uint32_t)r_addr,
+                     (uint32_t)r_size, r->type);
     }
 }
 
@@ -2823,7 +2922,7 @@ static uint32_t pt_bar_reg_init(struct pt_dev *ptdev,
     }
 
     /* set initial guest physical base address to -1 */
-    ptdev->bases[index].e_physbase = -1;
+    ptdev->bases[index].e_physbase = -1UL;
 
     /* set BAR flag */
     ptdev->bases[index].bar_flag = pt_bar_reg_parse(ptdev, reg);
@@ -3506,7 +3605,10 @@ static int pt_bar_reg_write(struct pt_dev *ptdev,
     {
     case PT_BAR_FLAG_MEM:
         bar_emu_mask = PT_BAR_MEM_EMU_MASK;
-        bar_ro_mask = PT_BAR_MEM_RO_MASK | (r_size - 1);
+        if (!r_size)
+            bar_ro_mask = PT_BAR_ALLF;
+        else
+            bar_ro_mask = PT_BAR_MEM_RO_MASK | (r_size - 1);
         break;
     case PT_BAR_FLAG_IO:
         bar_emu_mask = PT_BAR_IO_EMU_MASK;
@@ -3514,7 +3616,10 @@ static int pt_bar_reg_write(struct pt_dev *ptdev,
         break;
     case PT_BAR_FLAG_UPPER:
         bar_emu_mask = PT_BAR_ALLF;
-        bar_ro_mask = 0;    /* all upper 32bit are R/W */
+        if (!r_size)
+            bar_ro_mask = 0; 
+        else
+            bar_ro_mask = r_size - 1;
         break;
     default:
         break;
@@ -3527,6 +3632,7 @@ static int pt_bar_reg_write(struct pt_dev *ptdev,
     /* check whether we need to update the virtual region address or not */
     switch (ptdev->bases[index].bar_flag)
     {
+    case PT_BAR_FLAG_UPPER:
     case PT_BAR_FLAG_MEM:
         /* nothing to do */
         break;
@@ -3550,42 +3656,6 @@ static int pt_bar_reg_write(struct pt_dev *ptdev,
             goto exit;
         }
         break;
-    case PT_BAR_FLAG_UPPER:
-        if (cfg_entry->data)
-        {
-            if (cfg_entry->data != (PT_BAR_ALLF & ~bar_ro_mask))
-            {
-                PT_LOG_DEV(d, "Warning: Guest attempt to set high MMIO Base Address. "
-                    "Ignore mapping. "
-                    "[Offset:%02xh][High Address:%08xh]\n",
-                    reg->offset, cfg_entry->data);
-            }
-            /* clear lower address */
-            d->io_regions[index-1].addr = -1;
-        }
-        else
-        {
-            /* find lower 32bit BAR */
-            prev_offset = (reg->offset - 4);
-            reg_grp_entry = pt_find_reg_grp(ptdev, prev_offset);
-            if (reg_grp_entry)
-            {
-                reg_entry = pt_find_reg(reg_grp_entry, prev_offset);
-                if (reg_entry)
-                    /* restore lower address */
-                    d->io_regions[index-1].addr = reg_entry->data;
-                else
-                    return -1;
-            }
-            else
-                return -1;
-        }
-
-        /* never mapping the 'empty' upper region,
-         * because we'll do it enough for the lower region.
-         */
-        r->addr = -1;
-        goto exit;
     default:
         break;
     }
@@ -3599,7 +3669,7 @@ static int pt_bar_reg_write(struct pt_dev *ptdev,
      * rather than mmio. Remapping this value to mmio should be prevented.
      */
 
-    if ( cfg_entry->data != writable_mask )
+    if ( cfg_entry->data != writable_mask || !cfg_entry->data )
         r->addr = cfg_entry->data;
 
 exit:
diff --git a/hw/pass-through.h b/hw/pass-through.h
index d7d837c..b651192 100644
--- a/hw/pass-through.h
+++ b/hw/pass-through.h
@@ -158,10 +158,13 @@ enum {
 #define PT_MERGE_VALUE(value, data, val_mask) \
     (((value) & (val_mask)) | ((data) & ~(val_mask)))
 
+#define valid_addr(addr) \
+    (((addr) >= 0x80000000) && !((addr) & 0xfff))
+
 struct pt_region {
     /* Virtual phys base & size */
-    uint32_t e_physbase;
-    uint32_t e_size;
+    uint64_t e_physbase;
+    uint64_t e_size;
     /* Index of region in qemu */
     uint32_t memory_index;
     /* BAR flag */
diff --git a/hw/pci.c b/hw/pci.c
index f051de1..839863d 100644
--- a/hw/pci.c
+++ b/hw/pci.c
@@ -39,24 +39,6 @@ extern int igd_passthru;
 
 //#define DEBUG_PCI
 
-struct PCIBus {
-    int bus_num;
-    int devfn_min;
-    pci_set_irq_fn set_irq;
-    pci_map_irq_fn map_irq;
-    uint32_t config_reg; /* XXX: suppress */
-    /* low level pic */
-    SetIRQFunc *low_set_irq;
-    qemu_irq *irq_opaque;
-    PCIDevice *devices[256];
-    PCIDevice *parent_dev;
-    PCIBus *next;
-    /* The bus IRQ state is the logical OR of the connected devices.
-       Keep a count of the number of devices with raised IRQs.  */
-    int nirq;
-    int irq_count[];
-};
-
 static void pci_update_mappings(PCIDevice *d);
 static void pci_set_irq(void *opaque, int irq_num, int level);
 
@@ -938,50 +920,3 @@ PCIBus *pci_bridge_init(PCIBus *bus, int devfn, uint16_t vid, uint16_t did,
     return s->bus;
 }
 
-int pt_chk_bar_overlap(PCIBus *bus, int devfn, uint32_t addr,
-                        uint32_t size, uint8_t type)
-{
-    PCIDevice *devices = NULL;
-    PCIIORegion *r;
-    int ret = 0;
-    int i, j;
-
-    /* check Overlapped to Base Address */
-    for (i=0; i<256; i++)
-    {
-        if ( !(devices = bus->devices[i]) )
-            continue;
-
-        /* skip itself */
-        if (devices->devfn == devfn)
-            continue;
-        
-        for (j=0; j<PCI_NUM_REGIONS; j++)
-        {
-            r = &devices->io_regions[j];
-
-            /* skip different resource type, but don't skip when
-             * prefetch and non-prefetch memory are compared.
-             */
-            if (type != r->type)
-            {
-                if (type == PCI_ADDRESS_SPACE_IO ||
-                    r->type == PCI_ADDRESS_SPACE_IO)
-                    continue;
-            }
-
-            if ((addr < (r->addr + r->size)) && ((addr + size) > r->addr))
-            {
-                printf("Overlapped to device[%02x:%02x.%x][Region:%d]"
-                    "[Address:%08xh][Size:%08xh]\n", bus->bus_num,
-                    (devices->devfn >> 3) & 0x1F, (devices->devfn & 0x7),
-                    j, r->addr, r->size);
-                ret = 1;
-                goto out;
-            }
-        }
-    }
-
-out:
-    return ret;
-}
diff --git a/hw/pci.h b/hw/pci.h
index edc58b6..a036cc3 100644
--- a/hw/pci.h
+++ b/hw/pci.h
@@ -137,6 +137,7 @@ typedef int PCIUnregisterFunc(PCIDevice *pci_dev);
 
 #define PCI_ADDRESS_SPACE_MEM		0x00
 #define PCI_ADDRESS_SPACE_IO		0x01
+#define PCI_ADDRESS_SPACE_MEM_64BIT     0x04
 #define PCI_ADDRESS_SPACE_MEM_PREFETCH	0x08
 
 typedef struct PCIIORegion {
@@ -240,8 +241,8 @@ void pci_register_io_region(PCIDevice *pci_dev, int region_num,
                             uint32_t size, int type,
                             PCIMapIORegionFunc *map_func);
 
-int pt_chk_bar_overlap(PCIBus *bus, int devfn, uint32_t addr,
-                       uint32_t size, uint8_t type);
+int pt_chk_bar_overlap(PCIBus *bus, int devfn, uint64_t addr,
+                       uint64_t size, uint8_t type);
 
 uint32_t pci_default_read_config(PCIDevice *d,
                                  uint32_t address, int len);
@@ -360,5 +361,23 @@ void pci_bridge_write_config(PCIDevice *d,
                              uint32_t address, uint32_t val, int len);
 PCIBus *pci_register_secondary_bus(PCIDevice *dev, pci_map_irq_fn map_irq);
 
+struct PCIBus {
+    int bus_num;
+    int devfn_min;
+    pci_set_irq_fn set_irq;
+    pci_map_irq_fn map_irq;
+    uint32_t config_reg; /* XXX: suppress */
+    /* low level pic */
+    SetIRQFunc *low_set_irq;
+    qemu_irq *irq_opaque;
+    PCIDevice *devices[256];
+    PCIDevice *parent_dev;
+    PCIBus *next;
+    /* The bus IRQ state is the logical OR of the connected devices.
+       Keep a count of the number of devices with raised IRQs.  */
+    int nirq;
+    int irq_count[];
+};
+
 
 #endif

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 15 06:53:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 06:53:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1XSw-0006hK-6G; Wed, 15 Aug 2012 06:52:30 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xudong.hao@intel.com>) id 1T1XSu-0006hF-B6
	for xen-devel@lists.xen.org; Wed, 15 Aug 2012 06:52:28 +0000
Received: from [85.158.138.51:61953] by server-8.bemta-3.messagelabs.com id
	AF/5A-29583-B274B205; Wed, 15 Aug 2012 06:52:27 +0000
X-Env-Sender: xudong.hao@intel.com
X-Msg-Ref: server-4.tower-174.messagelabs.com!1345013545!28306125!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzA3OTIz\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30945 invoked from network); 15 Aug 2012 06:52:25 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-4.tower-174.messagelabs.com with SMTP;
	15 Aug 2012 06:52:25 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga101.jf.intel.com with ESMTP; 14 Aug 2012 23:52:24 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.77,772,1336374000"; d="scan'208";a="186591387"
Received: from xhao-dev.sh.intel.com (HELO localhost.localdomain)
	([10.239.48.48])
	by orsmga002.jf.intel.com with ESMTP; 14 Aug 2012 23:52:23 -0700
From: Xudong Hao <xudong.hao@intel.com>
To: xen-devel@lists.xen.org
Date: Wed, 15 Aug 2012 14:54:42 +0800
Message-Id: <1345013682-20618-2-git-send-email-xudong.hao@intel.com>
X-Mailer: git-send-email 1.5.5
In-Reply-To: <1345013682-20618-1-git-send-email-xudong.hao@intel.com>
References: <1345013682-20618-1-git-send-email-xudong.hao@intel.com>
Cc: Xudong Hao <xudong.hao@intel.com>, ian.jackson@eu.citrix.com,
	Xiantao Zhang <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH 3/3] qemu-xen: Add 64 bits big bar support on
	qemu xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Currently it is assumed that a PCI device BAR is mapped below 4G. A
device whose BAR size is larger than 4G must be mapped at an address
above 4G. This patch enables 64-bit big BAR support in qemu-xen.

Signed-off-by: Xiantao Zhang <xiantao.zhang@intel.com>
Signed-off-by: Xudong Hao <xudong.hao@intel.com>

diff --git a/hw/pass-through.c b/hw/pass-through.c
index 6e396e3..9087fa5 100644
--- a/hw/pass-through.c
+++ b/hw/pass-through.c
@@ -1117,13 +1117,13 @@ uint8_t pci_intx(struct pt_dev *ptdev)
 }
 
 static int _pt_iomem_helper(struct pt_dev *assigned_device, int i,
-                            uint32_t e_base, uint32_t e_size, int op)
+                            unsigned long e_base, unsigned long e_size, int op)
 {
     if ( has_msix_mapping(assigned_device, i) )
     {
-        uint32_t msix_last_pfn = (assigned_device->msix->mmio_base_addr - 1 +
+        unsigned long msix_last_pfn = (assigned_device->msix->mmio_base_addr - 1 +
             assigned_device->msix->total_entries * 16) >> XC_PAGE_SHIFT;
-        uint32_t bar_last_pfn = (e_base + e_size - 1) >> XC_PAGE_SHIFT;
+        unsigned long bar_last_pfn = (e_base + e_size - 1) >> XC_PAGE_SHIFT;
         int ret = 0;
 
         if ( assigned_device->msix->table_off )
@@ -1159,26 +1159,33 @@ static void pt_iomem_map(PCIDevice *d, int i, uint32_t e_phys, uint32_t e_size,
                          int type)
 {
     struct pt_dev *assigned_device  = (struct pt_dev *)d;
-    uint32_t old_ebase = assigned_device->bases[i].e_physbase;
+    uint64_t e_phys64 = e_phys, e_size64 = e_size, old_ebase = assigned_device->bases[i].e_physbase;
     int first_map = ( assigned_device->bases[i].e_size == 0 );
+    PCIIORegion *r = &d->io_regions[i];
     int ret = 0;
 
-    assigned_device->bases[i].e_physbase = e_phys;
-    assigned_device->bases[i].e_size= e_size;
-
-    PT_LOG("e_phys=%08x maddr=%lx type=%d len=%d index=%d first_map=%d\n",
-        e_phys, (unsigned long)assigned_device->bases[i].access.maddr,
-        type, e_size, i, first_map);
-
-    if ( e_size == 0 )
+    if ( assigned_device->bases[i + 1].bar_flag == PT_BAR_FLAG_UPPER) {
+        uint64_t upper_addr = (r + 1)->addr;
+        uint64_t upper_size = (r + 1)->size;
+        e_phys64 += upper_addr << 32;
+        e_size64 += upper_size << 32;
+    }
+    PT_LOG("e_phys64=%lx maddr=%lx type=%d len=%lx index=%d first_map=%d\n",
+        e_phys64, (unsigned long)assigned_device->bases[i].access.maddr,
+        type, e_size64, i, first_map);
+
+    if ( e_size64 == 0 || !valid_addr(e_phys64) )
         return;
 
+    assigned_device->bases[i].e_physbase = e_phys64;
+    assigned_device->bases[i].e_size = e_size64;
+
     if ( !first_map && old_ebase != -1 )
     {
         if ( has_msix_mapping(assigned_device, i) )
             unregister_iomem(assigned_device->msix->mmio_base_addr);
 
-        ret = _pt_iomem_helper(assigned_device, i, old_ebase, e_size,
+        ret = _pt_iomem_helper(assigned_device, i, old_ebase, e_size64,
                                DPCI_REMOVE_MAPPING);
         if ( ret != 0 )
         {
@@ -1188,7 +1195,7 @@ static void pt_iomem_map(PCIDevice *d, int i, uint32_t e_phys, uint32_t e_size,
     }
 
     /* map only valid guest address */
-    if (e_phys != -1)
+    if (e_phys64 != -1)
     {
         if ( has_msix_mapping(assigned_device, i) )
         {
@@ -1202,7 +1209,7 @@ static void pt_iomem_map(PCIDevice *d, int i, uint32_t e_phys, uint32_t e_size,
                  assigned_device->msix->mmio_index);
         }
 
-        ret = _pt_iomem_helper(assigned_device, i, e_phys, e_size,
+        ret = _pt_iomem_helper(assigned_device, i, e_phys64, e_size64,
                                DPCI_ADD_MAPPING);
         if ( ret != 0 )
         {
@@ -1210,7 +1217,7 @@ static void pt_iomem_map(PCIDevice *d, int i, uint32_t e_phys, uint32_t e_size,
             return;
         }
 
-        if ( old_ebase != e_phys && old_ebase != -1 )
+        if ( old_ebase != e_phys64 && old_ebase != -1 )
             pt_msix_update_remap(assigned_device, i);
     }
 }
@@ -1853,7 +1860,7 @@ exit:
 
 static void pt_libpci_fixup(struct pci_dev *dev)
 {
-#if !defined(PCI_LIB_VERSION) || PCI_LIB_VERSION < 0x030100
+#if !defined(PCI_LIB_VERSION) || PCI_LIB_VERSION <= 0x030100
     int i;
     FILE *fp;
     char path[PATH_MAX], buf[256];
@@ -1907,7 +1914,7 @@ static int pt_dev_is_virtfn(struct pci_dev *dev)
 
 static int pt_register_regions(struct pt_dev *assigned_device)
 {
-    int i = 0;
+    int i = 0, current_bar, bar_flag;
     uint32_t bar_data = 0;
     struct pci_dev *pci_dev = assigned_device->pci_dev;
     PCIDevice *d = &assigned_device->dev;
@@ -1916,6 +1923,7 @@ static int pt_register_regions(struct pt_dev *assigned_device)
     /* Register PIO/MMIO BARs */
     for ( i = 0; i < PCI_BAR_ENTRIES; i++ )
     {
+        current_bar = i;
         if ( pt_pci_base_addr(pci_dev->base_addr[i]) )
         {
             assigned_device->bases[i].e_physbase =
@@ -1928,18 +1936,26 @@ static int pt_register_regions(struct pt_dev *assigned_device)
                 pci_register_io_region((PCIDevice *)assigned_device, i,
                     (uint32_t)pci_dev->size[i], PCI_ADDRESS_SPACE_IO,
                     pt_ioport_map);
-            else if ( pci_dev->base_addr[i] & PCI_ADDRESS_SPACE_MEM_PREFETCH )
+            else if ( pci_dev->base_addr[i] & PCI_ADDRESS_SPACE_MEM_64BIT) {
+                bar_flag = pci_dev->base_addr[i] & 0xf;
                 pci_register_io_region((PCIDevice *)assigned_device, i,
-                    (uint32_t)pci_dev->size[i], PCI_ADDRESS_SPACE_MEM_PREFETCH,
+                    (uint32_t)pci_dev->size[i], bar_flag,
                     pt_iomem_map);
-            else
-                pci_register_io_region((PCIDevice *)assigned_device, i,
-                    (uint32_t)pci_dev->size[i], PCI_ADDRESS_SPACE_MEM,
+                pci_register_io_region((PCIDevice *)assigned_device, i + 1,
+                    (uint32_t)(pci_dev->size[i] >> 32), PCI_ADDRESS_SPACE_MEM,
                     pt_iomem_map);
-
-            PT_LOG("IO region registered (size=0x%08x base_addr=0x%08x)\n",
-                (uint32_t)(pci_dev->size[i]),
-                (uint32_t)(pci_dev->base_addr[i]));
+                /* skip upper half. */
+                i++;
+            } 
+            else {
+                bar_flag = pci_dev->base_addr[i] & 0xf;
+                pci_register_io_region((PCIDevice *)assigned_device, i,
+                        (uint32_t)(pci_dev->size[i]), bar_flag,
+                        pt_iomem_map);
+            }
+            PT_LOG("IO region registered (bar:%d, size=0x%lx base_addr=0x%lx)\n",
+                    current_bar, (pci_dev->size[current_bar]),
+                    (pci_dev->base_addr[current_bar]));
         }
     }
 
@@ -1984,7 +2000,7 @@ static void pt_unregister_regions(struct pt_dev *assigned_device)
 
         type = d->io_regions[i].type;
 
-        if ( type == PCI_ADDRESS_SPACE_MEM ||
+        if ( type == PCI_ADDRESS_SPACE_MEM || type == PCI_ADDRESS_SPACE_MEM_64BIT ||
              type == PCI_ADDRESS_SPACE_MEM_PREFETCH )
         {
             ret = _pt_iomem_helper(assigned_device, i,
@@ -2117,6 +2133,7 @@ int pt_pci_host_write(struct pci_dev *pci_dev, u32 addr, u32 val, int len)
     return ret;
 }
 
+static uint64_t pt_get_bar_size(PCIIORegion *r);
 /* parse BAR */
 static int pt_bar_reg_parse(
         struct pt_dev *ptdev, struct pt_reg_info_tbl *reg)
@@ -2145,7 +2162,7 @@ static int pt_bar_reg_parse(
 
     /* check unused BAR */
     r = &d->io_regions[index];
-    if (!r->size)
+    if (!pt_get_bar_size(r))
         goto out;
 
     /* for ExpROM BAR */
@@ -2165,6 +2182,86 @@ out:
     return bar_flag;
 }
 
+static bool is_64bit_bar(PCIIORegion *r)
+{
+    return !!(r->type & PCI_ADDRESS_SPACE_MEM_64BIT);
+}
+
+static uint64_t pt_get_bar_size(PCIIORegion *r)
+{
+    if (is_64bit_bar(r))
+    {
+        uint64_t size64;
+        size64 = (r + 1)->size; 
+        size64 <<= 32; 
+        size64 += r->size;
+        return size64; 
+    }
+    return r->size; 
+}
+
+static uint64_t pt_get_bar_base(PCIIORegion *r)
+{
+    if (is_64bit_bar(r))
+    {
+        uint64_t base64;
+
+        base64 = (r + 1)->addr; 
+        base64 <<= 32; 
+        base64 += r->addr;
+        return base64; 
+    }
+    return r->addr; 
+}
+
+int pt_chk_bar_overlap(PCIBus *bus, int devfn, uint64_t addr,
+                        uint64_t size, uint8_t type)
+{
+    PCIDevice *devices = NULL;
+    PCIIORegion *r;
+    int ret = 0;
+    int i, j;
+
+    /* check Overlapped to Base Address */
+    for (i=0; i<256; i++)
+    {
+        if ( !(devices = bus->devices[i]) )
+            continue;
+
+        /* skip itself */
+        if (devices->devfn == devfn)
+            continue;
+        
+        for (j=0; j<PCI_NUM_REGIONS; j++)
+        {
+            r = &devices->io_regions[j];
+
+            /* skip different resource type, but don't skip when
+             * prefetch and non-prefetch memory are compared.
+             */
+            if (type != r->type)
+            {
+                if (type == PCI_ADDRESS_SPACE_IO ||
+                    r->type == PCI_ADDRESS_SPACE_IO)
+                    continue;
+            }
+
+            if ((addr < (pt_get_bar_base(r) + pt_get_bar_size(r))) && ((addr + size) > pt_get_bar_base(r)))
+            {
+                printf("Overlapped to device[%02x:%02x.%x][Region:%d]"
+                    "[Address:%lxh][Size:%lxh]\n", bus->bus_num,
+                    (devices->devfn >> 3) & 0x1F, (devices->devfn & 0x7),
+                    j, pt_get_bar_base(r), pt_get_bar_size(r));
+                ret = 1;
+                goto out;
+            }
+        }
+    }
+
+out:
+    return ret;
+}
+
 /* mapping BAR */
 static void pt_bar_mapping_one(struct pt_dev *ptdev, int bar, int io_enable,
     int mem_enable)
@@ -2174,13 +2271,13 @@ static void pt_bar_mapping_one(struct pt_dev *ptdev, int bar, int io_enable,
     struct pt_reg_grp_tbl *reg_grp_entry = NULL;
     struct pt_reg_tbl *reg_entry = NULL;
     struct pt_region *base = NULL;
-    uint32_t r_size = 0, r_addr = -1;
+    uint64_t r_size = 0, r_addr = -1;
     int ret = 0;
 
     r = &dev->io_regions[bar];
-
+    
     /* check valid region */
-    if (!r->size)
+    if (!pt_get_bar_size(r))
         return;
 
     base = &ptdev->bases[bar];
@@ -2190,12 +2287,13 @@ static void pt_bar_mapping_one(struct pt_dev *ptdev, int bar, int io_enable,
            return;
 
     /* copy region address to temporary */
-    r_addr = r->addr;
+    r_addr = pt_get_bar_base(r);
 
     /* need unmapping in case I/O Space or Memory Space disable */
     if (((base->bar_flag == PT_BAR_FLAG_IO) && !io_enable ) ||
         ((base->bar_flag == PT_BAR_FLAG_MEM) && !mem_enable ))
         r_addr = -1;
+
     if ( (bar == PCI_ROM_SLOT) && (r_addr != -1) )
     {
         reg_grp_entry = pt_find_reg_grp(ptdev, PCI_ROM_ADDRESS);
@@ -2208,26 +2306,27 @@ static void pt_bar_mapping_one(struct pt_dev *ptdev, int bar, int io_enable,
     }
 
     /* prevent guest software mapping memory resource to 00000000h */
-    if ((base->bar_flag == PT_BAR_FLAG_MEM) && (r_addr == 0))
+    if ((base->bar_flag == PT_BAR_FLAG_MEM) && (pt_get_bar_base(r) == 0))
         r_addr = -1;
 
     /* align resource size (memory type only) */
-    r_size = r->size;
+    r_size = pt_get_bar_size(r);
     PT_GET_EMUL_SIZE(base->bar_flag, r_size);
 
     /* check overlapped address */
     ret = pt_chk_bar_overlap(dev->bus, dev->devfn,
                     r_addr, r_size, r->type);
     if (ret > 0)
-        PT_LOG_DEV(dev, "Warning: [Region:%d][Address:%08xh]"
-            "[Size:%08xh] is overlapped.\n", bar, r_addr, r_size);
+        PT_LOG("Warning: ptdev[%02x:%02x.%x][Region:%d][Address:%lxh]"
+            "[Size:%lxh] is overlapped.\n", pci_bus_num(dev->bus),
+             PCI_SLOT(dev->devfn), PCI_FUNC(dev->devfn), bar, r_addr, r_size);
 
     /* check whether we need to update the mapping or not */
     if (r_addr != ptdev->bases[bar].e_physbase)
     {
         /* mapping BAR */
-        r->map_func((PCIDevice *)ptdev, bar, r_addr,
-                     r_size, r->type);
+        r->map_func((PCIDevice *)ptdev, bar, (uint32_t)r_addr,
+                     (uint32_t)r_size, r->type);
     }
 }
 
@@ -2823,7 +2922,7 @@ static uint32_t pt_bar_reg_init(struct pt_dev *ptdev,
     }
 
     /* set initial guest physical base address to -1 */
-    ptdev->bases[index].e_physbase = -1;
+    ptdev->bases[index].e_physbase = -1UL;
 
     /* set BAR flag */
     ptdev->bases[index].bar_flag = pt_bar_reg_parse(ptdev, reg);
@@ -3506,7 +3605,10 @@ static int pt_bar_reg_write(struct pt_dev *ptdev,
     {
     case PT_BAR_FLAG_MEM:
         bar_emu_mask = PT_BAR_MEM_EMU_MASK;
-        bar_ro_mask = PT_BAR_MEM_RO_MASK | (r_size - 1);
+        if (!r_size)
+            bar_ro_mask = PT_BAR_ALLF;
+        else
+            bar_ro_mask = PT_BAR_MEM_RO_MASK | (r_size - 1);
         break;
     case PT_BAR_FLAG_IO:
         bar_emu_mask = PT_BAR_IO_EMU_MASK;
@@ -3514,7 +3616,10 @@ static int pt_bar_reg_write(struct pt_dev *ptdev,
         break;
     case PT_BAR_FLAG_UPPER:
         bar_emu_mask = PT_BAR_ALLF;
-        bar_ro_mask = 0;    /* all upper 32bit are R/W */
+        if (!r_size)
+            bar_ro_mask = 0;    /* all upper 32bit are R/W */
+        else
+            bar_ro_mask = r_size - 1;
         break;
     default:
         break;
@@ -3527,6 +3632,7 @@ static int pt_bar_reg_write(struct pt_dev *ptdev,
     /* check whether we need to update the virtual region address or not */
     switch (ptdev->bases[index].bar_flag)
     {
+    case PT_BAR_FLAG_UPPER:
     case PT_BAR_FLAG_MEM:
         /* nothing to do */
         break;
@@ -3550,42 +3656,6 @@ static int pt_bar_reg_write(struct pt_dev *ptdev,
             goto exit;
         }
         break;
-    case PT_BAR_FLAG_UPPER:
-        if (cfg_entry->data)
-        {
-            if (cfg_entry->data != (PT_BAR_ALLF & ~bar_ro_mask))
-            {
-                PT_LOG_DEV(d, "Warning: Guest attempt to set high MMIO Base Address. "
-                    "Ignore mapping. "
-                    "[Offset:%02xh][High Address:%08xh]\n",
-                    reg->offset, cfg_entry->data);
-            }
-            /* clear lower address */
-            d->io_regions[index-1].addr = -1;
-        }
-        else
-        {
-            /* find lower 32bit BAR */
-            prev_offset = (reg->offset - 4);
-            reg_grp_entry = pt_find_reg_grp(ptdev, prev_offset);
-            if (reg_grp_entry)
-            {
-                reg_entry = pt_find_reg(reg_grp_entry, prev_offset);
-                if (reg_entry)
-                    /* restore lower address */
-                    d->io_regions[index-1].addr = reg_entry->data;
-                else
-                    return -1;
-            }
-            else
-                return -1;
-        }
-
-        /* never mapping the 'empty' upper region,
-         * because we'll do it enough for the lower region.
-         */
-        r->addr = -1;
-        goto exit;
     default:
         break;
     }
@@ -3599,7 +3669,7 @@ static int pt_bar_reg_write(struct pt_dev *ptdev,
      * rather than mmio. Remapping this value to mmio should be prevented.
      */
 
-    if ( cfg_entry->data != writable_mask )
+    if ( cfg_entry->data != writable_mask || !cfg_entry->data )
         r->addr = cfg_entry->data;
 
 exit:
diff --git a/hw/pass-through.h b/hw/pass-through.h
index d7d837c..b651192 100644
--- a/hw/pass-through.h
+++ b/hw/pass-through.h
@@ -158,10 +158,13 @@ enum {
 #define PT_MERGE_VALUE(value, data, val_mask) \
     (((value) & (val_mask)) | ((data) & ~(val_mask)))
 
+#define valid_addr(addr) \
+    ((addr) >= 0x80000000 && !((addr) & 0xfff))
+
 struct pt_region {
     /* Virtual phys base & size */
-    uint32_t e_physbase;
-    uint32_t e_size;
+    uint64_t e_physbase;
+    uint64_t e_size;
     /* Index of region in qemu */
     uint32_t memory_index;
     /* BAR flag */
diff --git a/hw/pci.c b/hw/pci.c
index f051de1..839863d 100644
--- a/hw/pci.c
+++ b/hw/pci.c
@@ -39,24 +39,6 @@ extern int igd_passthru;
 
 //#define DEBUG_PCI
 
-struct PCIBus {
-    int bus_num;
-    int devfn_min;
-    pci_set_irq_fn set_irq;
-    pci_map_irq_fn map_irq;
-    uint32_t config_reg; /* XXX: suppress */
-    /* low level pic */
-    SetIRQFunc *low_set_irq;
-    qemu_irq *irq_opaque;
-    PCIDevice *devices[256];
-    PCIDevice *parent_dev;
-    PCIBus *next;
-    /* The bus IRQ state is the logical OR of the connected devices.
-       Keep a count of the number of devices with raised IRQs.  */
-    int nirq;
-    int irq_count[];
-};
-
 static void pci_update_mappings(PCIDevice *d);
 static void pci_set_irq(void *opaque, int irq_num, int level);
 
@@ -938,50 +920,3 @@ PCIBus *pci_bridge_init(PCIBus *bus, int devfn, uint16_t vid, uint16_t did,
     return s->bus;
 }
 
-int pt_chk_bar_overlap(PCIBus *bus, int devfn, uint32_t addr,
-                        uint32_t size, uint8_t type)
-{
-    PCIDevice *devices = NULL;
-    PCIIORegion *r;
-    int ret = 0;
-    int i, j;
-
-    /* check Overlapped to Base Address */
-    for (i=0; i<256; i++)
-    {
-        if ( !(devices = bus->devices[i]) )
-            continue;
-
-        /* skip itself */
-        if (devices->devfn == devfn)
-            continue;
-        
-        for (j=0; j<PCI_NUM_REGIONS; j++)
-        {
-            r = &devices->io_regions[j];
-
-            /* skip different resource type, but don't skip when
-             * prefetch and non-prefetch memory are compared.
-             */
-            if (type != r->type)
-            {
-                if (type == PCI_ADDRESS_SPACE_IO ||
-                    r->type == PCI_ADDRESS_SPACE_IO)
-                    continue;
-            }
-
-            if ((addr < (r->addr + r->size)) && ((addr + size) > r->addr))
-            {
-                printf("Overlapped to device[%02x:%02x.%x][Region:%d]"
-                    "[Address:%08xh][Size:%08xh]\n", bus->bus_num,
-                    (devices->devfn >> 3) & 0x1F, (devices->devfn & 0x7),
-                    j, r->addr, r->size);
-                ret = 1;
-                goto out;
-            }
-        }
-    }
-
-out:
-    return ret;
-}
diff --git a/hw/pci.h b/hw/pci.h
index edc58b6..a036cc3 100644
--- a/hw/pci.h
+++ b/hw/pci.h
@@ -137,6 +137,7 @@ typedef int PCIUnregisterFunc(PCIDevice *pci_dev);
 
 #define PCI_ADDRESS_SPACE_MEM		0x00
 #define PCI_ADDRESS_SPACE_IO		0x01
+#define PCI_ADDRESS_SPACE_MEM_64BIT     0x04
 #define PCI_ADDRESS_SPACE_MEM_PREFETCH	0x08
 
 typedef struct PCIIORegion {
@@ -240,8 +241,8 @@ void pci_register_io_region(PCIDevice *pci_dev, int region_num,
                             uint32_t size, int type,
                             PCIMapIORegionFunc *map_func);
 
-int pt_chk_bar_overlap(PCIBus *bus, int devfn, uint32_t addr,
-                       uint32_t size, uint8_t type);
+int pt_chk_bar_overlap(PCIBus *bus, int devfn, uint64_t addr,
+                       uint64_t size, uint8_t type);
 
 uint32_t pci_default_read_config(PCIDevice *d,
                                  uint32_t address, int len);
@@ -360,5 +361,23 @@ void pci_bridge_write_config(PCIDevice *d,
                              uint32_t address, uint32_t val, int len);
 PCIBus *pci_register_secondary_bus(PCIDevice *dev, pci_map_irq_fn map_irq);
 
+struct PCIBus {
+    int bus_num;
+    int devfn_min;
+    pci_set_irq_fn set_irq;
+    pci_map_irq_fn map_irq;
+    uint32_t config_reg; /* XXX: suppress */
+    /* low level pic */
+    SetIRQFunc *low_set_irq;
+    qemu_irq *irq_opaque;
+    PCIDevice *devices[256];
+    PCIDevice *parent_dev;
+    PCIBus *next;
+    /* The bus IRQ state is the logical OR of the connected devices.
+       Keep a count of the number of devices with raised IRQs.  */
+    int nirq;
+    int irq_count[];
+};
+
 
 #endif

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 15 06:53:26 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 06:53:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1XTU-0006jx-5s; Wed, 15 Aug 2012 06:53:04 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xudong.hao@intel.com>) id 1T1XTT-0006jp-KA
	for xen-devel@lists.xen.org; Wed, 15 Aug 2012 06:53:03 +0000
Received: from [85.158.139.83:24101] by server-11.bemta-5.messagelabs.com id
	F3/57-29296-E474B205; Wed, 15 Aug 2012 06:53:02 +0000
X-Env-Sender: xudong.hao@intel.com
X-Msg-Ref: server-15.tower-182.messagelabs.com!1345013581!28111940!1
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDMzNjE0Nw==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7792 invoked from network); 15 Aug 2012 06:53:02 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-15.tower-182.messagelabs.com with SMTP;
	15 Aug 2012 06:53:02 -0000
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
	by fmsmga102.fm.intel.com with ESMTP; 14 Aug 2012 23:53:00 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.77,771,1336374000"; d="scan'208";a="208621457"
Received: from xhao-dev.sh.intel.com (HELO localhost.localdomain)
	([10.239.48.48])
	by fmsmga002.fm.intel.com with ESMTP; 14 Aug 2012 23:53:00 -0700
From: Xudong Hao <xudong.hao@intel.com>
To: xen-devel@lists.xen.org
Date: Wed, 15 Aug 2012 14:55:20 +0800
Message-Id: <1345013720-20640-1-git-send-email-xudong.hao@intel.com>
X-Mailer: git-send-email 1.5.5
Cc: Xudong Hao <xudong.hao@intel.com>, tim@xen.org,
	Xiantao Zhang <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH 2/3] xen/p2m: Using INVALID_MFN instead of
	mfn_valid
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

A 64-bit BAR's MMIO address may lie above the highest gfn, in which case
mfn_valid can fail even for a legitimate mapping, so compare against
INVALID_MFN instead.

Signed-off-by: Xiantao Zhang <xiantao.zhang@intel.com>
Signed-off-by: Xudong Hao <xudong.hao@intel.com>

diff -r 663eb766cdde xen/arch/x86/mm/p2m-ept.c
--- a/xen/arch/x86/mm/p2m-ept.c	Tue Jul 24 17:02:04 2012 +0200
+++ b/xen/arch/x86/mm/p2m-ept.c	Thu Jul 26 15:40:01 2012 +0800
@@ -428,7 +428,7 @@ ept_set_entry(struct p2m_domain *p2m, un
     }
 
     /* Track the highest gfn for which we have ever had a valid mapping */
-    if ( mfn_valid(mfn_x(mfn)) &&
+    if ( (mfn_x(mfn) != INVALID_MFN) &&
          (gfn + (1UL << order) - 1 > p2m->max_mapped_pfn) )
         p2m->max_mapped_pfn = gfn + (1UL << order) - 1;
 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 15 06:55:19 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 06:55:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1XVG-0006wq-Mz; Wed, 15 Aug 2012 06:54:54 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xudong.hao@intel.com>) id 1T1XVF-0006wg-Sp
	for xen-devel@lists.xen.org; Wed, 15 Aug 2012 06:54:53 +0000
Received: from [85.158.143.99:62081] by server-3.bemta-4.messagelabs.com id
	EB/DA-09529-DB74B205; Wed, 15 Aug 2012 06:54:53 +0000
X-Env-Sender: xudong.hao@intel.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1345013692!21210042!1
X-Originating-IP: [192.55.52.88]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjg4ID0+IDMwOTYxNw==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24366 invoked from network); 15 Aug 2012 06:54:52 -0000
Received: from mga01.intel.com (HELO mga01.intel.com) (192.55.52.88)
	by server-6.tower-216.messagelabs.com with SMTP;
	15 Aug 2012 06:54:52 -0000
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
	by fmsmga101.fm.intel.com with ESMTP; 14 Aug 2012 23:54:51 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.77,772,1336374000"; d="scan'208";a="208621859"
Received: from xhao-dev.sh.intel.com (HELO localhost.localdomain)
	([10.239.48.48])
	by fmsmga002.fm.intel.com with ESMTP; 14 Aug 2012 23:54:50 -0700
From: Xudong Hao <xudong.hao@intel.com>
To: xen-devel@lists.xen.org
Date: Wed, 15 Aug 2012 14:57:11 +0800
Message-Id: <1345013831-20662-1-git-send-email-xudong.hao@intel.com>
X-Mailer: git-send-email 1.5.5
Cc: Xudong Hao <xudong.hao@intel.com>, tim@xen.org,
	Xiantao Zhang <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH 2/3] xen/p2m: Using INVALID_MFN instead of
	mfn_valid
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

A 64-bit BAR's MMIO address may lie above the highest gfn, in which case
mfn_valid can fail even for a legitimate mapping, so compare against
INVALID_MFN instead.

Signed-off-by: Xiantao Zhang <xiantao.zhang@intel.com>
Signed-off-by: Xudong Hao <xudong.hao@intel.com>

diff -r 663eb766cdde xen/arch/x86/mm/p2m-ept.c
--- a/xen/arch/x86/mm/p2m-ept.c	Tue Jul 24 17:02:04 2012 +0200
+++ b/xen/arch/x86/mm/p2m-ept.c	Thu Jul 26 15:40:01 2012 +0800
@@ -428,7 +428,7 @@ ept_set_entry(struct p2m_domain *p2m, un
     }
 
     /* Track the highest gfn for which we have ever had a valid mapping */
-    if ( mfn_valid(mfn_x(mfn)) &&
+    if ( (mfn_x(mfn) != INVALID_MFN) &&
          (gfn + (1UL << order) - 1 > p2m->max_mapped_pfn) )
         p2m->max_mapped_pfn = gfn + (1UL << order) - 1;
 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 15 07:18:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 07:18:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1XrK-0007Sm-GJ; Wed, 15 Aug 2012 07:17:42 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T1XrI-0007Sh-16
	for xen-devel@lists.xensource.com; Wed, 15 Aug 2012 07:17:40 +0000
Received: from [85.158.139.83:57935] by server-2.bemta-5.messagelabs.com id
	0A/E0-10142-31D4B205; Wed, 15 Aug 2012 07:17:39 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-2.tower-182.messagelabs.com!1345015058!28183354!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkwODU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5421 invoked from network); 15 Aug 2012 07:17:38 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-2.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Aug 2012 07:17:38 -0000
X-IronPort-AV: E=Sophos;i="4.77,772,1336348800"; d="scan'208";a="14014583"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	15 Aug 2012 07:17:02 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 15 Aug 2012 08:17:02 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1T1Xqg-0002aG-DB;
	Wed, 15 Aug 2012 07:17:02 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1T1Xqg-0007yp-Ah;
	Wed, 15 Aug 2012 08:17:02 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13602-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 15 Aug 2012 08:17:02 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13602: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13602 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13602/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-pair         6 xen-install/dst_host      fail REGR. vs. 13600
 test-amd64-amd64-pair         5 xen-install/src_host      fail REGR. vs. 13600

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl           4 xen-install                 fail pass in 13601
 test-amd64-amd64-xl-winxpsp3  3 host-install(3)  broken in 13601 pass in 13602
 test-amd64-amd64-xl-pcipt-intel 3 host-install(3) broken in 13601 pass in 13602
 test-amd64-i386-pair        6 xen-install/dst_host fail in 13601 pass in 13602
 test-amd64-i386-pair        5 xen-install/src_host fail in 13601 pass in 13602
 test-amd64-amd64-pair 3 host-install/src_host(3) broken in 13601 pass in 13602
 test-amd64-amd64-pair 4 host-install/dst_host(4) broken in 13601 pass in 13602

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13600
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13600
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13600
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13600

Tests which did not succeed, but are not blocking:
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  af7143d97fa2
baseline version:
 xen                  dc56a9defa30

------------------------------------------------------------
People who touched revisions under test:
  Andres Lagar-Cavilla <andres@lagarcavilla.org>
  George Dunlap <george.dunlap@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   25750:af7143d97fa2
tag:         tip
user:        Ian Jackson <Ian.Jackson@eu.citrix.com>
date:        Tue Aug 14 15:59:38 2012 +0100
    
    QEMU_TAG update
    
    
changeset:   25749:dc56a9defa30
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Aug 14 10:28:14 2012 +0200
    
    x86/PoD: fix (un)locking after 24772:28edc2b31a9b
    
    That c/s introduced a double unlock on the out-of-memory error path of
    p2m_pod_demand_populate().
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: George Dunlap <george.dunlap@eu.citrix.com>
    Acked-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13602 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13602/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-pair         6 xen-install/dst_host      fail REGR. vs. 13600
 test-amd64-amd64-pair         5 xen-install/src_host      fail REGR. vs. 13600

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl           4 xen-install                 fail pass in 13601
 test-amd64-amd64-xl-winxpsp3  3 host-install(3)  broken in 13601 pass in 13602
 test-amd64-amd64-xl-pcipt-intel 3 host-install(3) broken in 13601 pass in 13602
 test-amd64-i386-pair        6 xen-install/dst_host fail in 13601 pass in 13602
 test-amd64-i386-pair        5 xen-install/src_host fail in 13601 pass in 13602
 test-amd64-amd64-pair 3 host-install/src_host(3) broken in 13601 pass in 13602
 test-amd64-amd64-pair 4 host-install/dst_host(4) broken in 13601 pass in 13602

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13600
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13600
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13600
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13600

Tests which did not succeed, but are not blocking:
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  af7143d97fa2
baseline version:
 xen                  dc56a9defa30

------------------------------------------------------------
People who touched revisions under test:
  Andres Lagar-Cavilla <andres@lagarcavilla.org>
  George Dunlap <george.dunlap@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   25750:af7143d97fa2
tag:         tip
user:        Ian Jackson <Ian.Jackson@eu.citrix.com>
date:        Tue Aug 14 15:59:38 2012 +0100
    
    QEMU_TAG update
    
    
changeset:   25749:dc56a9defa30
user:        Jan Beulich <jbeulich@suse.com>
date:        Tue Aug 14 10:28:14 2012 +0200
    
    x86/PoD: fix (un)locking after 24772:28edc2b31a9b
    
    That c/s introduced a double unlock on the out-of-memory error path of
    p2m_pod_demand_populate().
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: George Dunlap <george.dunlap@eu.citrix.com>
    Acked-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 15 07:26:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 07:26:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1XzJ-0007cD-Ef; Wed, 15 Aug 2012 07:25:57 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T1XzI-0007c8-D7
	for xen-devel@lists.xen.org; Wed, 15 Aug 2012 07:25:56 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1345015550!9307002!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29806 invoked from network); 15 Aug 2012 07:25:50 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-27.messagelabs.com with SMTP;
	15 Aug 2012 07:25:50 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 15 Aug 2012 08:25:49 +0100
Message-Id: <502B6B450200007800094F9E@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Wed, 15 Aug 2012 08:26:29 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Daniel De Graaf" <dgdegra@tycho.nsa.gov>
References: <1344955544-30153-1-git-send-email-dgdegra@tycho.nsa.gov>
	<1344973043-10752-1-git-send-email-dgdegra@tycho.nsa.gov>
In-Reply-To: <1344973043-10752-1-git-send-email-dgdegra@tycho.nsa.gov>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Keir Fraser <keir@xen.org>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v3] x86-64/EFI: add CFLAGS to check compile
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 14.08.12 at 21:37, Daniel De Graaf <dgdegra@tycho.nsa.gov> wrote:
> Change from v2: EMBEDDED_EXTRA_CFLAGS may include flags not supported by
> the compiler; CFLAGS includes a filtered version of these flags.

I noticed this too when trying it out before committing. I'll commit
a further adjusted version of it though: Using all of $(CFLAGS)
causes a stray file "..d" to be left in that directory, so I'll be
filtering out a number of things (specifically, I'll simply filter out
the whole set of things in $(CFLAGS-y)).
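
[A hedged illustration, not the actual xen Makefile: Jan's approach relies on GNU make's $(filter-out), which removes every word of $(CFLAGS-y) from $(CFLAGS). The flag values below are invented stand-ins, chosen only to reproduce the stray "..d" dependency file he mentions.]

```shell
# Hypothetical stand-in for the real Makefile, showing the filtering Jan
# describes: every word listed in $(CFLAGS-y) is dropped from $(CFLAGS),
# so the dependency-tracking flags that would create a stray "..d" file
# never reach the EFI check-compile.
cat > /tmp/filter-demo.mk <<'EOF'
CFLAGS   := -O2 -fno-strict-aliasing -MMD -MF ..d
CFLAGS-y := -MMD -MF ..d
$(info $(filter-out $(CFLAGS-y),$(CFLAGS)))
all: ;
EOF
make -s -f /tmp/filter-demo.mk
# prints: -O2 -fno-strict-aliasing
```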

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 15 08:11:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 08:11:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1YgT-0008Ig-58; Wed, 15 Aug 2012 08:10:33 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T1YgR-0008Ib-QI
	for xen-devel@lists.xen.org; Wed, 15 Aug 2012 08:10:32 +0000
Received: from [85.158.138.51:61909] by server-10.bemta-3.messagelabs.com id
	71/FE-20518-7795B205; Wed, 15 Aug 2012 08:10:31 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-4.tower-174.messagelabs.com!1345018228!28320596!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31154 invoked from network); 15 Aug 2012 08:10:29 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-4.tower-174.messagelabs.com with SMTP;
	15 Aug 2012 08:10:29 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 15 Aug 2012 09:10:27 +0100
Message-Id: <502B75BA0200007800094FD4@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Wed, 15 Aug 2012 09:11:06 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ben Guthro" <ben@guthro.net>
References: <CAOvdn6WwgBD-2yf1uu3m9-16=Tb5KUrybXf2KibtnxKbP7YtwA@mail.gmail.com>
	<CAOvdn6UBW-yTOMd3YSf5M9JiHLRivyaKL-qzcKG8epszqaKmGg@mail.gmail.com>
	<20120807163320.GC15053@phenom.dumpdata.com>
	<CAOvdn6XcRGKtJxGwJO8J+ZBJr89wfTLc1woW5rhtMM540H9h1w@mail.gmail.com>
	<CAOvdn6XUoSeGoo7X=AJ1kpDxZB9HZEv+TJ4v-=FHcyR4=CHK-w@mail.gmail.com>
	<502240F302000078000937A6@nat28.tlf.novell.com>
	<CAOvdn6WN3kxC8Bb9g6MpfLULu47EGSGCtajPc-SFeJ-8VSUQDQ@mail.gmail.com>
	<CAOvdn6Xk1inm1hU55i0phU2qvSWyJLNoyDdn43biBAY2OewPtw@mail.gmail.com>
	<5023F5400200007800093F4E@nat28.tlf.novell.com>
	<CAOvdn6U-f3C4U8xVPvDyRFkFFLkbfiCXg_n+9zgA9V5u+q5jfw@mail.gmail.com>
	<5023F8B80200007800093F8C@nat28.tlf.novell.com>
	<CAOvdn6XKHfete_+=Hkvt20G7_cJ-4i8+9j9YD30OxzQ16Ru-+g@mail.gmail.com>
	<5024CB720200007800094124@nat28.tlf.novell.com>
	<CAOvdn6VVJdhWVYa-gNVt3euE4AbGi1TijUCUd1wGzv0SCWEUYA@mail.gmail.com>
	<CAOvdn6USnG9Y4MNBhrDZmob0oJ8BgFfKTvFEhcKbXDxNmzfZ-Q@mail.gmail.com>
In-Reply-To: <CAOvdn6USnG9Y4MNBhrDZmob0oJ8BgFfKTvFEhcKbXDxNmzfZ-Q@mail.gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, john.baboval@citrix.com,
	Thomas Goetz <thomas.goetz@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen4.2 S3 regression?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 14.08.12 at 19:31, Ben Guthro <ben@guthro.net> wrote:
> I've bisected this issue - and it looks like it is a rather old problem.
> 
> The following changeset introduced it in the 4.1 development stream on
> Jun 17, 2010
> 
> http://xenbits.xen.org/hg/xen-unstable.hg/rev/0695a5cdcb42 

Interesting. That I wouldn't have suspected at all.

> Since we (XenClient Enterprise / NxTop) skipped over Xen-4.1 - we
> never ran into this until now.
> 
> Any thoughts as to a solution?

First try collectively removing the three calls to
evtchn_move_pirqs() in xen/common/schedule.c. If that helps,
see whether any smaller set also does. From the result of this,
I'll have to think further, perhaps handing you a debugging
patch.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 15 08:19:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 08:19:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1YoW-0008S1-47; Wed, 15 Aug 2012 08:18:52 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1T1YoU-0008Rw-C8
	for xen-devel@lists.xen.org; Wed, 15 Aug 2012 08:18:50 +0000
Received: from [85.158.138.51:35974] by server-12.bemta-3.messagelabs.com id
	BA/A5-04073-96B5B205; Wed, 15 Aug 2012 08:18:49 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-9.tower-174.messagelabs.com!1345018728!28409415!1
X-Originating-IP: [192.89.123.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjg5LjEyMy4yNSA9PiA0Nzg1MTM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18472 invoked from network); 15 Aug 2012 08:18:48 -0000
Received: from smtp.tele.fi (HELO smtp.tele.fi) (192.89.123.25)
	by server-9.tower-174.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 15 Aug 2012 08:18:48 -0000
X-Originating-Ip: [194.89.68.22]
Received: from ydin.reaktio.net (reaktio.net [194.89.68.22])
	by smtp.tele.fi (Postfix) with ESMTP id 3CBE71FFE;
	Wed, 15 Aug 2012 11:18:47 +0300 (EEST)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id 0FCBA2005D; Wed, 15 Aug 2012 11:18:47 +0300 (EEST)
Date: Wed, 15 Aug 2012 11:18:46 +0300
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: Xudong Hao <xudong.hao@intel.com>
Message-ID: <20120815081846.GC19851@reaktio.net>
References: <1345013682-20618-1-git-send-email-xudong.hao@intel.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1345013682-20618-1-git-send-email-xudong.hao@intel.com>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: ian.jackson@eu.citrix.com, Xiantao Zhang <xiantao.zhang@intel.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 1/3] xen/tools: Add 64 bits big bar support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Aug 15, 2012 at 02:54:41PM +0800, Xudong Hao wrote:
> Currently it is assumed that PCI device BARs are mapped below 4G. If a
> device has a BAR larger than 4G, it must be mapped at an address above 4G.
> This patch enables 64-bit big BAR support in hvmloader.
> 

Hello,

Do you have an example of such a PCI device with >4G BAR? 

Thanks,

-- Pasi

> Signed-off-by: Xiantao Zhang <xiantao.zhang@intel.com>
> Signed-off-by: Xudong Hao <xudong.hao@intel.com>
> 
> diff -r 663eb766cdde tools/firmware/hvmloader/config.h
> --- a/tools/firmware/hvmloader/config.h	Tue Jul 24 17:02:04 2012 +0200
> +++ b/tools/firmware/hvmloader/config.h	Thu Jul 26 15:40:01 2012 +0800
> @@ -53,6 +53,10 @@ extern struct bios_config ovmf_config;
>  /* MMIO hole: Hardcoded defaults, which can be dynamically expanded. */
>  #define PCI_MEM_START       0xf0000000
>  #define PCI_MEM_END         0xfc000000
> +#define PCI_HIGH_MEM_START  0xa000000000ULL
> +#define PCI_HIGH_MEM_END    0xf000000000ULL
> +#define PCI_MIN_MMIO_ADDR   0x80000000
> +
>  extern unsigned long pci_mem_start, pci_mem_end;
>  
>  
> diff -r 663eb766cdde tools/firmware/hvmloader/pci.c
> --- a/tools/firmware/hvmloader/pci.c	Tue Jul 24 17:02:04 2012 +0200
> +++ b/tools/firmware/hvmloader/pci.c	Thu Jul 26 15:40:01 2012 +0800
> @@ -31,24 +31,33 @@
>  unsigned long pci_mem_start = PCI_MEM_START;
>  unsigned long pci_mem_end = PCI_MEM_END;
>  
> +uint64_t pci_high_mem_start = PCI_HIGH_MEM_START;
> +uint64_t pci_high_mem_end = PCI_HIGH_MEM_END;
> +
>  enum virtual_vga virtual_vga = VGA_none;
>  unsigned long igd_opregion_pgbase = 0;
>  
>  void pci_setup(void)
>  {
> -    uint32_t base, devfn, bar_reg, bar_data, bar_sz, cmd, mmio_total = 0;
> +    uint8_t is_64bar, using_64bar, bar64_relocate = 0;
> +    uint32_t devfn, bar_reg, cmd, bar_data, bar_data_upper;
> +    uint64_t base, bar_sz, bar_sz_upper, mmio_total = 0;
>      uint32_t vga_devfn = 256;
>      uint16_t class, vendor_id, device_id;
>      unsigned int bar, pin, link, isa_irq;
> +    int64_t mmio_left;
>  
>      /* Resources assignable to PCI devices via BARs. */
>      struct resource {
> -        uint32_t base, max;
> -    } *resource, mem_resource, io_resource;
> +        uint64_t base, max;
> +    } *resource, mem_resource, high_mem_resource, io_resource;
>  
>      /* Create a list of device BARs in descending order of size. */
>      struct bars {
> -        uint32_t devfn, bar_reg, bar_sz;
> +        uint32_t is_64bar;
> +        uint32_t devfn;
> +        uint32_t bar_reg;
> +        uint64_t bar_sz;
>      } *bars = (struct bars *)scratch_start;
>      unsigned int i, nr_bars = 0;
>  
> @@ -133,23 +142,35 @@ void pci_setup(void)
>          /* Map the I/O memory and port resources. */
>          for ( bar = 0; bar < 7; bar++ )
>          {
> +            bar_sz_upper = 0;
>              bar_reg = PCI_BASE_ADDRESS_0 + 4*bar;
>              if ( bar == 6 )
>                  bar_reg = PCI_ROM_ADDRESS;
>  
>              bar_data = pci_readl(devfn, bar_reg);
> +            is_64bar = !!((bar_data & (PCI_BASE_ADDRESS_SPACE |
> +                         PCI_BASE_ADDRESS_MEM_TYPE_MASK)) ==
> +                         (PCI_BASE_ADDRESS_SPACE_MEMORY |
> +                         PCI_BASE_ADDRESS_MEM_TYPE_64));
>              pci_writel(devfn, bar_reg, ~0);
>              bar_sz = pci_readl(devfn, bar_reg);
>              pci_writel(devfn, bar_reg, bar_data);
> +
> +            if (is_64bar) {
> +                bar_data_upper = pci_readl(devfn, bar_reg + 4);
> +                pci_writel(devfn, bar_reg + 4, ~0);
> +                bar_sz_upper = pci_readl(devfn, bar_reg + 4);
> +                pci_writel(devfn, bar_reg + 4, bar_data_upper);
> +                bar_sz = (bar_sz_upper << 32) | bar_sz;
> +            }
> +            bar_sz &= (((bar_data & PCI_BASE_ADDRESS_SPACE) ==
> +                        PCI_BASE_ADDRESS_SPACE_MEMORY) ?
> +                       0xfffffffffffffff0 :
> +                       (PCI_BASE_ADDRESS_IO_MASK & 0xffff));
> +            bar_sz &= ~(bar_sz - 1);
>              if ( bar_sz == 0 )
>                  continue;
>  
> -            bar_sz &= (((bar_data & PCI_BASE_ADDRESS_SPACE) ==
> -                        PCI_BASE_ADDRESS_SPACE_MEMORY) ?
> -                       PCI_BASE_ADDRESS_MEM_MASK :
> -                       (PCI_BASE_ADDRESS_IO_MASK & 0xffff));
> -            bar_sz &= ~(bar_sz - 1);
> -
>              for ( i = 0; i < nr_bars; i++ )
>                  if ( bars[i].bar_sz < bar_sz )
>                      break;
> @@ -157,6 +178,7 @@ void pci_setup(void)
>              if ( i != nr_bars )
>                  memmove(&bars[i+1], &bars[i], (nr_bars-i) * sizeof(*bars));
>  
> +            bars[i].is_64bar = is_64bar;
>              bars[i].devfn   = devfn;
>              bars[i].bar_reg = bar_reg;
>              bars[i].bar_sz  = bar_sz;
> @@ -167,11 +189,8 @@ void pci_setup(void)
>  
>              nr_bars++;
>  
> -            /* Skip the upper-half of the address for a 64-bit BAR. */
> -            if ( (bar_data & (PCI_BASE_ADDRESS_SPACE |
> -                              PCI_BASE_ADDRESS_MEM_TYPE_MASK)) == 
> -                 (PCI_BASE_ADDRESS_SPACE_MEMORY | 
> -                  PCI_BASE_ADDRESS_MEM_TYPE_64) )
> +            /* The upper half is already calculated, skip it! */
> +            if (is_64bar)
>                  bar++;
>          }
>  
> @@ -193,10 +212,14 @@ void pci_setup(void)
>          pci_writew(devfn, PCI_COMMAND, cmd);
>      }
>  
> -    while ( (mmio_total > (pci_mem_end - pci_mem_start)) &&
> -            ((pci_mem_start << 1) != 0) )
> +    while ( mmio_total > (pci_mem_end - pci_mem_start) && pci_mem_start )
>          pci_mem_start <<= 1;
>  
> +    if (!pci_mem_start) {
> +        bar64_relocate = 1;
> +        pci_mem_start = PCI_MIN_MMIO_ADDR;
> +    }
> +
>      /* Relocate RAM that overlaps PCI space (in 64k-page chunks). */
>      while ( (pci_mem_start >> PAGE_SHIFT) < hvm_info->low_mem_pgend )
>      {
> @@ -218,11 +241,15 @@ void pci_setup(void)
>          hvm_info->high_mem_pgend += nr_pages;
>      }
>  
> +    high_mem_resource.base = pci_high_mem_start; 
> +    high_mem_resource.max = pci_high_mem_end;
>      mem_resource.base = pci_mem_start;
>      mem_resource.max = pci_mem_end;
>      io_resource.base = 0xc000;
>      io_resource.max = 0x10000;
>  
> +    mmio_left = pci_mem_end - pci_mem_start;
> +
>      /* Assign iomem and ioport resources in descending order of size. */
>      for ( i = 0; i < nr_bars; i++ )
>      {
> @@ -230,13 +257,21 @@ void pci_setup(void)
>          bar_reg = bars[i].bar_reg;
>          bar_sz  = bars[i].bar_sz;
>  
> +        using_64bar = bars[i].is_64bar && bar64_relocate && (mmio_left < bar_sz);
>          bar_data = pci_readl(devfn, bar_reg);
>  
>          if ( (bar_data & PCI_BASE_ADDRESS_SPACE) ==
>               PCI_BASE_ADDRESS_SPACE_MEMORY )
>          {
> -            resource = &mem_resource;
> -            bar_data &= ~PCI_BASE_ADDRESS_MEM_MASK;
> +            if (using_64bar) {
> +                resource = &high_mem_resource;
> +                bar_data &= ~PCI_BASE_ADDRESS_MEM_MASK;
> +            } 
> +            else {
> +                resource = &mem_resource;
> +                bar_data &= ~PCI_BASE_ADDRESS_MEM_MASK;
> +            }
> +            mmio_left -= bar_sz;
>          }
>          else
>          {
> @@ -244,13 +279,14 @@ void pci_setup(void)
>              bar_data &= ~PCI_BASE_ADDRESS_IO_MASK;
>          }
>  
> -        base = (resource->base + bar_sz - 1) & ~(bar_sz - 1);
> -        bar_data |= base;
> +        base = (resource->base  + bar_sz - 1) & ~(uint64_t)(bar_sz - 1);
> +        bar_data |= (uint32_t)base;
> +        bar_data_upper = (uint32_t)(base >> 32);
>          base += bar_sz;
>  
>          if ( (base < resource->base) || (base > resource->max) )
>          {
> -            printf("pci dev %02x:%x bar %02x size %08x: no space for "
> +            printf("pci dev %02x:%x bar %02x size %llx: no space for "
>                     "resource!\n", devfn>>3, devfn&7, bar_reg, bar_sz);
>              continue;
>          }
> @@ -258,7 +294,9 @@ void pci_setup(void)
>          resource->base = base;
>  
>          pci_writel(devfn, bar_reg, bar_data);
> -        printf("pci dev %02x:%x bar %02x size %08x: %08x\n",
> +        if (using_64bar)
> +            pci_writel(devfn, bar_reg + 4, bar_data_upper);
> +        printf("pci dev %02x:%x bar %02x size %llx: %08x\n",
>                 devfn>>3, devfn&7, bar_reg, bar_sz, bar_data);
>  
>          /* Now enable the memory or I/O mapping. */
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 15 08:19:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 08:19:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1YoW-0008S1-47; Wed, 15 Aug 2012 08:18:52 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1T1YoU-0008Rw-C8
	for xen-devel@lists.xen.org; Wed, 15 Aug 2012 08:18:50 +0000
Received: from [85.158.138.51:35974] by server-12.bemta-3.messagelabs.com id
	BA/A5-04073-96B5B205; Wed, 15 Aug 2012 08:18:49 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-9.tower-174.messagelabs.com!1345018728!28409415!1
X-Originating-IP: [192.89.123.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjg5LjEyMy4yNSA9PiA0Nzg1MTM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18472 invoked from network); 15 Aug 2012 08:18:48 -0000
Received: from smtp.tele.fi (HELO smtp.tele.fi) (192.89.123.25)
	by server-9.tower-174.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 15 Aug 2012 08:18:48 -0000
X-Originating-Ip: [194.89.68.22]
Received: from ydin.reaktio.net (reaktio.net [194.89.68.22])
	by smtp.tele.fi (Postfix) with ESMTP id 3CBE71FFE;
	Wed, 15 Aug 2012 11:18:47 +0300 (EEST)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id 0FCBA2005D; Wed, 15 Aug 2012 11:18:47 +0300 (EEST)
Date: Wed, 15 Aug 2012 11:18:46 +0300
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: Xudong Hao <xudong.hao@intel.com>
Message-ID: <20120815081846.GC19851@reaktio.net>
References: <1345013682-20618-1-git-send-email-xudong.hao@intel.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1345013682-20618-1-git-send-email-xudong.hao@intel.com>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: ian.jackson@eu.citrix.com, Xiantao Zhang <xiantao.zhang@intel.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 1/3] xen/tools: Add 64 bits big bar support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Aug 15, 2012 at 02:54:41PM +0800, Xudong Hao wrote:
> Currently it is assumed that PCI device BARs are mapped below 4G. If a
> device has a BAR larger than 4G, it must be mapped above the 4G boundary.
> This patch enables 64-bit big BAR support in hvmloader.
> 

Hello,

Do you have an example of such a PCI device with >4G BAR? 

Thanks,

-- Pasi

> Signed-off-by: Xiantao Zhang <xiantao.zhang@intel.com>
> Signed-off-by: Xudong Hao <xudong.hao@intel.com>
> 
> diff -r 663eb766cdde tools/firmware/hvmloader/config.h
> --- a/tools/firmware/hvmloader/config.h	Tue Jul 24 17:02:04 2012 +0200
> +++ b/tools/firmware/hvmloader/config.h	Thu Jul 26 15:40:01 2012 +0800
> @@ -53,6 +53,10 @@ extern struct bios_config ovmf_config;
>  /* MMIO hole: Hardcoded defaults, which can be dynamically expanded. */
>  #define PCI_MEM_START       0xf0000000
>  #define PCI_MEM_END         0xfc000000
> +#define PCI_HIGH_MEM_START  0xa000000000ULL
> +#define PCI_HIGH_MEM_END    0xf000000000ULL
> +#define PCI_MIN_MMIO_ADDR   0x80000000
> +
>  extern unsigned long pci_mem_start, pci_mem_end;
>  
>  
> diff -r 663eb766cdde tools/firmware/hvmloader/pci.c
> --- a/tools/firmware/hvmloader/pci.c	Tue Jul 24 17:02:04 2012 +0200
> +++ b/tools/firmware/hvmloader/pci.c	Thu Jul 26 15:40:01 2012 +0800
> @@ -31,24 +31,33 @@
>  unsigned long pci_mem_start = PCI_MEM_START;
>  unsigned long pci_mem_end = PCI_MEM_END;
>  
> +uint64_t pci_high_mem_start = PCI_HIGH_MEM_START;
> +uint64_t pci_high_mem_end = PCI_HIGH_MEM_END;
> +
>  enum virtual_vga virtual_vga = VGA_none;
>  unsigned long igd_opregion_pgbase = 0;
>  
>  void pci_setup(void)
>  {
> -    uint32_t base, devfn, bar_reg, bar_data, bar_sz, cmd, mmio_total = 0;
> +    uint8_t is_64bar, using_64bar, bar64_relocate = 0;
> +    uint32_t devfn, bar_reg, cmd, bar_data, bar_data_upper;
> +    uint64_t base, bar_sz, bar_sz_upper, mmio_total = 0;
>      uint32_t vga_devfn = 256;
>      uint16_t class, vendor_id, device_id;
>      unsigned int bar, pin, link, isa_irq;
> +    int64_t mmio_left;
>  
>      /* Resources assignable to PCI devices via BARs. */
>      struct resource {
> -        uint32_t base, max;
> -    } *resource, mem_resource, io_resource;
> +        uint64_t base, max;
> +    } *resource, mem_resource, high_mem_resource, io_resource;
>  
>      /* Create a list of device BARs in descending order of size. */
>      struct bars {
> -        uint32_t devfn, bar_reg, bar_sz;
> +        uint32_t is_64bar;
> +        uint32_t devfn;
> +        uint32_t bar_reg;
> +        uint64_t bar_sz;
>      } *bars = (struct bars *)scratch_start;
>      unsigned int i, nr_bars = 0;
>  
> @@ -133,23 +142,35 @@ void pci_setup(void)
>          /* Map the I/O memory and port resources. */
>          for ( bar = 0; bar < 7; bar++ )
>          {
> +            bar_sz_upper = 0;
>              bar_reg = PCI_BASE_ADDRESS_0 + 4*bar;
>              if ( bar == 6 )
>                  bar_reg = PCI_ROM_ADDRESS;
>  
>              bar_data = pci_readl(devfn, bar_reg);
> +            is_64bar = !!((bar_data & (PCI_BASE_ADDRESS_SPACE |
> +                         PCI_BASE_ADDRESS_MEM_TYPE_MASK)) ==
> +                         (PCI_BASE_ADDRESS_SPACE_MEMORY |
> +                         PCI_BASE_ADDRESS_MEM_TYPE_64));
>              pci_writel(devfn, bar_reg, ~0);
>              bar_sz = pci_readl(devfn, bar_reg);
>              pci_writel(devfn, bar_reg, bar_data);
> +
> +            if (is_64bar) {
> +                bar_data_upper = pci_readl(devfn, bar_reg + 4);
> +                pci_writel(devfn, bar_reg + 4, ~0);
> +                bar_sz_upper = pci_readl(devfn, bar_reg + 4);
> +                pci_writel(devfn, bar_reg + 4, bar_data_upper);
> +                bar_sz = (bar_sz_upper << 32) | bar_sz;
> +            }
> +            bar_sz &= (((bar_data & PCI_BASE_ADDRESS_SPACE) ==
> +                        PCI_BASE_ADDRESS_SPACE_MEMORY) ?
> +                       0xfffffffffffffff0 :
> +                       (PCI_BASE_ADDRESS_IO_MASK & 0xffff));
> +            bar_sz &= ~(bar_sz - 1);
>              if ( bar_sz == 0 )
>                  continue;
>  
> -            bar_sz &= (((bar_data & PCI_BASE_ADDRESS_SPACE) ==
> -                        PCI_BASE_ADDRESS_SPACE_MEMORY) ?
> -                       PCI_BASE_ADDRESS_MEM_MASK :
> -                       (PCI_BASE_ADDRESS_IO_MASK & 0xffff));
> -            bar_sz &= ~(bar_sz - 1);
> -
>              for ( i = 0; i < nr_bars; i++ )
>                  if ( bars[i].bar_sz < bar_sz )
>                      break;
> @@ -157,6 +178,7 @@ void pci_setup(void)
>              if ( i != nr_bars )
>                  memmove(&bars[i+1], &bars[i], (nr_bars-i) * sizeof(*bars));
>  
> +            bars[i].is_64bar = is_64bar;
>              bars[i].devfn   = devfn;
>              bars[i].bar_reg = bar_reg;
>              bars[i].bar_sz  = bar_sz;
> @@ -167,11 +189,8 @@ void pci_setup(void)
>  
>              nr_bars++;
>  
> -            /* Skip the upper-half of the address for a 64-bit BAR. */
> -            if ( (bar_data & (PCI_BASE_ADDRESS_SPACE |
> -                              PCI_BASE_ADDRESS_MEM_TYPE_MASK)) == 
> -                 (PCI_BASE_ADDRESS_SPACE_MEMORY | 
> -                  PCI_BASE_ADDRESS_MEM_TYPE_64) )
> +            /* The upper half is already calculated, skip it! */
> +            if (is_64bar)
>                  bar++;
>          }
>  
> @@ -193,10 +212,14 @@ void pci_setup(void)
>          pci_writew(devfn, PCI_COMMAND, cmd);
>      }
>  
> -    while ( (mmio_total > (pci_mem_end - pci_mem_start)) &&
> -            ((pci_mem_start << 1) != 0) )
> +    while ( mmio_total > (pci_mem_end - pci_mem_start) && pci_mem_start )
>          pci_mem_start <<= 1;
>  
> +    if (!pci_mem_start) {
> +        bar64_relocate = 1;
> +        pci_mem_start = PCI_MIN_MMIO_ADDR;
> +    }
> +
>      /* Relocate RAM that overlaps PCI space (in 64k-page chunks). */
>      while ( (pci_mem_start >> PAGE_SHIFT) < hvm_info->low_mem_pgend )
>      {
> @@ -218,11 +241,15 @@ void pci_setup(void)
>          hvm_info->high_mem_pgend += nr_pages;
>      }
>  
> +    high_mem_resource.base = pci_high_mem_start; 
> +    high_mem_resource.max = pci_high_mem_end;
>      mem_resource.base = pci_mem_start;
>      mem_resource.max = pci_mem_end;
>      io_resource.base = 0xc000;
>      io_resource.max = 0x10000;
>  
> +    mmio_left = pci_mem_end - pci_mem_start;
> +
>      /* Assign iomem and ioport resources in descending order of size. */
>      for ( i = 0; i < nr_bars; i++ )
>      {
> @@ -230,13 +257,21 @@ void pci_setup(void)
>          bar_reg = bars[i].bar_reg;
>          bar_sz  = bars[i].bar_sz;
>  
> +        using_64bar = bars[i].is_64bar && bar64_relocate && (mmio_left < bar_sz);
>          bar_data = pci_readl(devfn, bar_reg);
>  
>          if ( (bar_data & PCI_BASE_ADDRESS_SPACE) ==
>               PCI_BASE_ADDRESS_SPACE_MEMORY )
>          {
> -            resource = &mem_resource;
> -            bar_data &= ~PCI_BASE_ADDRESS_MEM_MASK;
> +            if (using_64bar) {
> +                resource = &high_mem_resource;
> +                bar_data &= ~PCI_BASE_ADDRESS_MEM_MASK;
> +            } 
> +            else {
> +                resource = &mem_resource;
> +                bar_data &= ~PCI_BASE_ADDRESS_MEM_MASK;
> +            }
> +            mmio_left -= bar_sz;
>          }
>          else
>          {
> @@ -244,13 +279,14 @@ void pci_setup(void)
>              bar_data &= ~PCI_BASE_ADDRESS_IO_MASK;
>          }
>  
> -        base = (resource->base + bar_sz - 1) & ~(bar_sz - 1);
> -        bar_data |= base;
> +        base = (resource->base  + bar_sz - 1) & ~(uint64_t)(bar_sz - 1);
> +        bar_data |= (uint32_t)base;
> +        bar_data_upper = (uint32_t)(base >> 32);
>          base += bar_sz;
>  
>          if ( (base < resource->base) || (base > resource->max) )
>          {
> -            printf("pci dev %02x:%x bar %02x size %08x: no space for "
> +            printf("pci dev %02x:%x bar %02x size %llx: no space for "
>                     "resource!\n", devfn>>3, devfn&7, bar_reg, bar_sz);
>              continue;
>          }
> @@ -258,7 +294,9 @@ void pci_setup(void)
>          resource->base = base;
>  
>          pci_writel(devfn, bar_reg, bar_data);
> -        printf("pci dev %02x:%x bar %02x size %08x: %08x\n",
> +        if (using_64bar)
> +            pci_writel(devfn, bar_reg + 4, bar_data_upper);
> +        printf("pci dev %02x:%x bar %02x size %llx: %08x\n",
>                 devfn>>3, devfn&7, bar_reg, bar_sz, bar_data);
>  
>          /* Now enable the memory or I/O mapping. */
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 15 08:19:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 08:19:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1Yp1-0008UH-MM; Wed, 15 Aug 2012 08:19:23 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T1Yp0-0008Th-Fm
	for Xen-devel@lists.xensource.com; Wed, 15 Aug 2012 08:19:22 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1345018742!9324390!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6207 invoked from network); 15 Aug 2012 08:19:02 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-9.tower-27.messagelabs.com with SMTP;
	15 Aug 2012 08:19:02 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 15 Aug 2012 09:19:01 +0100
Message-Id: <502B77BC0200007800094FF2@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Wed, 15 Aug 2012 09:19:40 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Mukesh Rathor" <mukesh.rathor@oracle.com>
References: <20120626181707.4203d336@mantra.us.oracle.com>
	<CAFLBxZagm=53bZ=KaWmd7XQkkAduD+znECJRsaJUqrGJDZgkQQ@mail.gmail.com>
	<20120801153439.3f81c923@mantra.us.oracle.com>
	<501A4E0C.1090509@eu.citrix.com>
	<20120813151446.22ae85b5@mantra.us.oracle.com>
	<alpine.DEB.2.02.1208141144070.21096@kaball.uk.xensource.com>
	<20120814103827.7dcf55f1@mantra.us.oracle.com>
	<alpine.DEB.2.02.1208141842460.21096@kaball.uk.xensource.com>
	<20120814105157.044a755e@mantra.us.oracle.com>
In-Reply-To: <20120814105157.044a755e@mantra.us.oracle.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	"Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Keir Fraser <keir.xen@gmail.com>, Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [HYBRID]: status update...
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 14.08.12 at 19:51, Mukesh Rathor <mukesh.rathor@oracle.com> wrote:
> BTW, being a good hybrid as it is, it uses fields from both pv_domain
> and hvm_domain structs. Combining into a union created difficulties for
> me. I experimented creating a new struct, hyb_domain, or adding hvm
> related fields to pv_domain struct for hybrid, but both involved way too
> much code change. So back to having them separated again. LMK if there
> any objections undoing the union.

I suppose there are going to be fields that are used exclusively
for PV or HVM, and if so I'd like them to be retained as a union
as far as possible. The main point of the change was to shrink
the size of struct vcpu (which is required to fit into a page,
which you will need to make sure continues to be the case even
with all sorts of debugging options turned on).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 15 08:42:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 08:42:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1ZBM-0000Ni-1E; Wed, 15 Aug 2012 08:42:28 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1T1ZBK-0000Nd-OY
	for xen-devel@lists.xen.org; Wed, 15 Aug 2012 08:42:26 +0000
Received: from [85.158.138.51:60115] by server-10.bemta-3.messagelabs.com id
	46/AD-20518-2F06B205; Wed, 15 Aug 2012 08:42:26 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-7.tower-174.messagelabs.com!1345020144!19457992!1
X-Originating-IP: [209.85.212.179]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32461 invoked from network); 15 Aug 2012 08:42:25 -0000
Received: from mail-wi0-f179.google.com (HELO mail-wi0-f179.google.com)
	(209.85.212.179)
	by server-7.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Aug 2012 08:42:25 -0000
Received: by wibhq4 with SMTP id hq4so992330wib.14
	for <xen-devel@lists.xen.org>; Wed, 15 Aug 2012 01:42:24 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:user-agent:date:subject:from:to:cc:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=/HtiRLB0HudyrUM6HAz7BDXPp2bLeoj3MgeVJY3YLI8=;
	b=P1hXM6g8wmmivYDF9mXxhCRAvLbP++VobpSpq6hlxir+zHebkMxWnEjmYdksv6UFe5
	Bd7MD7V/EOANNM7YRvIH4Q4mrpbaT4BHnWB5uVI0v93gYQuirOTgnU2uhJpmmTKESyEu
	n/xVTSHyGsZoUtOYvEbVM4X6Zv6G70SwZLKKzE+WFzVDzatwPTOYV6v7qlbqUkeITwHJ
	7Bl6AxzEwJe2G4aulSTV46RyGxJZkNPOgN1HR9gOOQgY6GKO7tBk3xNJ5rp52ywYwK11
	hNPBVFZS26WEQh+AqYQcbEqUVSyitD8Tr6cxdNckz3x0RPbqbRMSV1G9v3CPY/sObczI
	tSMw==
Received: by 10.180.86.133 with SMTP id p5mr35290690wiz.17.1345020144568;
	Wed, 15 Aug 2012 01:42:24 -0700 (PDT)
Received: from [192.168.1.3] (host86-157-166-190.range86-157.btcentralplus.com.
	[86.157.166.190])
	by mx.google.com with ESMTPS id q4sm27321059wix.9.2012.08.15.01.42.20
	(version=SSLv3 cipher=OTHER); Wed, 15 Aug 2012 01:42:22 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.33.0.120411
Date: Wed, 15 Aug 2012 09:42:08 +0100
From: Keir Fraser <keir@xen.org>
To: Pasi =?ISO-8859-1?B?S+Rya2vkaW5lbg==?= <pasik@iki.fi>,
	<xen-devel@lists.xen.org>
Message-ID: <CC511F70.490FB%keir@xen.org>
Thread-Topic: [Xen-devel] Xen 4.2.0-rc2: make tools fails on Fedora 17, ipxe
	build problems with gcc 4.7
Thread-Index: Ac16wdqyRwq/3Rh5lEycNFrC/XPcsw==
In-Reply-To: <20120813220734.GU19851@reaktio.net>
Mime-version: 1.0
Cc: olaf@aepfle.de, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] Xen 4.2.0-rc2: make tools fails on Fedora 17,
 ipxe build problems with gcc 4.7
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 13/08/2012 23:07, "Pasi K=E4rkk=E4inen" <pasik@iki.fi> wrote:

> Hello again,
> =

> So to be able to build Xen 4.2.0-rc2 on Fedora 17 with gcc 4.7
> I had to apply these three patches to ipxe:
> =

> http://lists.ipxe.org/pipermail/ipxe-devel/2012-March/001279.html
> http://lists.ipxe.org/pipermail/ipxe-devel/2012-March/001280.html
> http://permalink.gmane.org/gmane.network.ipxe.devel/1216

I've checked these into etherboot/patches/ in xen-unstable.hg.

 -- Keir

> -- Pasi
> =

> =

> On Tue, Aug 14, 2012 at 12:44:49AM +0300, Pasi K=E4rkk=E4inen wrote:
>> On Tue, Aug 14, 2012 at 12:39:03AM +0300, Pasi K=E4rkk=E4inen wrote:
>>> On Tue, Aug 14, 2012 at 12:35:58AM +0300, Pasi K=E4rkk=E4inen wrote:
>>>> On Mon, Aug 13, 2012 at 11:39:50PM +0300, Pasi K=E4rkk=E4inen wrote:
>>>>> Hello,
>>>>> =

>>>>> I just grabbed
>>>>> http://bits.xensource.com/oss-xen/release/4.2.0-rc2/xen-4.2.0-rc2.tar=
.gz
>>>>> and tried to build it on Fedora 17 x86_64 host (gcc 4.7.0):
>>>>> =

>>>>> make tools:
>>>>> =

>>>>> ..
>>>>> make[5]: Entering directory `/root/xen/xen-4.2.0-rc2/tools/firmware'
>>>>> make -C etherboot all
>>>>> make[6]: Entering directory
>>>>> `/root/xen/xen-4.2.0-rc2/tools/firmware/etherboot'
>>>>> make -C ipxe/src bin/rtl8139.rom
>>>>> make[7]: Entering directory
>>>>> `/root/xen/xen-4.2.0-rc2/tools/firmware/etherboot/ipxe/src'
>>>>>   [BUILD] bin/isa.o
>>>>> drivers/bus/isa.c: In function 'isabus_probe':
>>>>> drivers/bus/isa.c:112:18: error: array subscript is above array bounds
>>>>> [-Werror=3Darray-bounds]
>>>>> cc1: all warnings being treated as errors
>>>>> make[7]: *** [bin/isa.o] Error 1
>>>>> make[7]: Leaving directory
>>>>> `/root/xen/xen-4.2.0-rc2/tools/firmware/etherboot/ipxe/src'
>>>>> =

>>>> =

>>>> Ok the patch from Olaf is here:
>>>> http://lists.ipxe.org/pipermail/ipxe-devel/2012-March/001279.html
>>>> =

>>>> Should we apply that patch to xen-unstable for Xen 4.2 ?
>>>> =

>>> =

>>> And then there's the next build problem:
>>> =

>>>   [BUILD] bin/myri10ge.o
>>> drivers/net/myri10ge.c: In function 'myri10ge_command':
>>> drivers/net/myri10ge.c:308:3: error: dereferencing type-punned pointer =
will
>>> break strict-aliasing rules [-Werror=3Dstrict-aliasing]
>>> drivers/net/myri10ge.c:310:2: error: dereferencing type-punned pointer =
will
>>> break strict-aliasing rules [-Werror=3Dstrict-aliasing]
>>> cc1: all warnings being treated as errors
>>> make[7]: *** [bin/myri10ge.o] Error 1
>>> make[7]: Leaving directory
>>> `/root/xen/xen-4.2.0-rc2/tools/firmware/etherboot/ipxe/src'
>>> =

>>> =

>>> And patch from Olaf here:
>>> http://lists.ipxe.org/pipermail/ipxe-devel/2012-March/001280.html
>>> =

>> =

>> And the third build problem:
>> =

>>   [BUILD] bin/qib7322.o
>> drivers/infiniband/qib7322.c: In function 'qib7322_probe':
>> drivers/infiniband/qib7322.c:2141:28: error: 'old_value' may be used
>> uninitialized in this function [-Werror=3Dmaybe-uninitialized]
>> drivers/infiniband/qib7322.c:2123:11: note: 'old_value' was declared here
>> cc1: all warnings being treated as errors
>> make[7]: *** [bin/qib7322.o] Error 1
>> make[7]: Leaving directory
>> `/root/xen/xen-4.2.0-rc2/tools/firmware/etherboot/ipxe/src'
>> =

>> =

>> And patch from Christian Hesse here:
>> http://permalink.gmane.org/gmane.network.ipxe.devel/1216
>> =

>> =

>> -- Pasi
>> =

>> =

>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel
> =

> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	Wed, 15 Aug 2012 01:42:24 -0700 (PDT)
Received: from [192.168.1.3] (host86-157-166-190.range86-157.btcentralplus.com.
	[86.157.166.190])
	by mx.google.com with ESMTPS id q4sm27321059wix.9.2012.08.15.01.42.20
	(version=SSLv3 cipher=OTHER); Wed, 15 Aug 2012 01:42:22 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.33.0.120411
Date: Wed, 15 Aug 2012 09:42:08 +0100
From: Keir Fraser <keir@xen.org>
To: Pasi =?ISO-8859-1?B?S+Rya2vkaW5lbg==?= <pasik@iki.fi>,
	<xen-devel@lists.xen.org>
Message-ID: <CC511F70.490FB%keir@xen.org>
Thread-Topic: [Xen-devel] Xen 4.2.0-rc2: make tools fails on Fedora 17, ipxe
	build problems with gcc 4.7
Thread-Index: Ac16wdqyRwq/3Rh5lEycNFrC/XPcsw==
In-Reply-To: <20120813220734.GU19851@reaktio.net>
Mime-version: 1.0
Cc: olaf@aepfle.de, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] Xen 4.2.0-rc2: make tools fails on Fedora 17,
 ipxe build problems with gcc 4.7
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 13/08/2012 23:07, "Pasi Kärkkäinen" <pasik@iki.fi> wrote:

> Hello again,
>
> So to be able to build Xen 4.2.0-rc2 on Fedora 17 with gcc 4.7
> I had to apply these three patches to ipxe:
>
> http://lists.ipxe.org/pipermail/ipxe-devel/2012-March/001279.html
> http://lists.ipxe.org/pipermail/ipxe-devel/2012-March/001280.html
> http://permalink.gmane.org/gmane.network.ipxe.devel/1216

I've checked these into etherboot/patches/ in xen-unstable.hg.

 -- Keir

> -- Pasi
>
>
> On Tue, Aug 14, 2012 at 12:44:49AM +0300, Pasi Kärkkäinen wrote:
>> On Tue, Aug 14, 2012 at 12:39:03AM +0300, Pasi Kärkkäinen wrote:
>>> On Tue, Aug 14, 2012 at 12:35:58AM +0300, Pasi Kärkkäinen wrote:
>>>> On Mon, Aug 13, 2012 at 11:39:50PM +0300, Pasi Kärkkäinen wrote:
>>>>> Hello,
>>>>>
>>>>> I just grabbed
>>>>> http://bits.xensource.com/oss-xen/release/4.2.0-rc2/xen-4.2.0-rc2.tar.gz
>>>>> and tried to build it on Fedora 17 x86_64 host (gcc 4.7.0):
>>>>>
>>>>> make tools:
>>>>>
>>>>> ..
>>>>> make[5]: Entering directory `/root/xen/xen-4.2.0-rc2/tools/firmware'
>>>>> make -C etherboot all
>>>>> make[6]: Entering directory
>>>>> `/root/xen/xen-4.2.0-rc2/tools/firmware/etherboot'
>>>>> make -C ipxe/src bin/rtl8139.rom
>>>>> make[7]: Entering directory
>>>>> `/root/xen/xen-4.2.0-rc2/tools/firmware/etherboot/ipxe/src'
>>>>>   [BUILD] bin/isa.o
>>>>> drivers/bus/isa.c: In function 'isabus_probe':
>>>>> drivers/bus/isa.c:112:18: error: array subscript is above array bounds
>>>>> [-Werror=array-bounds]
>>>>> cc1: all warnings being treated as errors
>>>>> make[7]: *** [bin/isa.o] Error 1
>>>>> make[7]: Leaving directory
>>>>> `/root/xen/xen-4.2.0-rc2/tools/firmware/etherboot/ipxe/src'
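The class of code gcc 4.7's stricter -Werror=array-bounds rejects can be shown with a minimal standalone sketch; the names below are hypothetical and this is not the actual ipxe isa.c code, just the general pattern of a loop bound that provably runs past a fixed-size table:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical sketch, not the actual isa.c: gcc 4.7 proves that a loop
 * bound like "i <= 2" over this two-element table indexes past its end
 * and rejects it under -Werror=array-bounds.  Deriving the bound from
 * the array itself keeps every access in range. */
static const unsigned int probe_addrs[] = { 0x220, 0x300 };
#define NUM_PROBE_ADDRS (sizeof(probe_addrs) / sizeof(probe_addrs[0]))

unsigned int sum_probes(void)
{
    unsigned int sum = 0;
    size_t i;

    for (i = 0; i < NUM_PROBE_ADDRS; i++)  /* in-bounds: indices 0..1 */
        sum += probe_addrs[i];
    return sum;
}
```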
>>>>>
>>>>
>>>> Ok the patch from Olaf is here:
>>>> http://lists.ipxe.org/pipermail/ipxe-devel/2012-March/001279.html
>>>>
>>>> Should we apply that patch to xen-unstable for Xen 4.2 ?
>>>>
>>>
>>> And then there's the next build problem:
>>>
>>>   [BUILD] bin/myri10ge.o
>>> drivers/net/myri10ge.c: In function 'myri10ge_command':
>>> drivers/net/myri10ge.c:308:3: error: dereferencing type-punned pointer will
>>> break strict-aliasing rules [-Werror=strict-aliasing]
>>> drivers/net/myri10ge.c:310:2: error: dereferencing type-punned pointer will
>>> break strict-aliasing rules [-Werror=strict-aliasing]
>>> cc1: all warnings being treated as errors
>>> make[7]: *** [bin/myri10ge.o] Error 1
>>> make[7]: Leaving directory
>>> `/root/xen/xen-4.2.0-rc2/tools/firmware/etherboot/ipxe/src'
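Strict-aliasing errors like the two above come from reading memory through a pointer of an incompatible type; a minimal sketch of the pattern and its usual memcpy-based fix (illustrative names, not the actual myri10ge code):

```c
#include <stdint.h>
#include <string.h>

/* Illustrative sketch, not the actual myri10ge code: gcc 4.7's
 * -Werror=strict-aliasing flags dereferences like this cast. */
uint32_t load32_punned(const unsigned char *buf)
{
    return *(const uint32_t *)buf;  /* type-punned dereference: flagged */
}

/* The usual warning-free replacement goes through memcpy(), which the
 * compiler optimises down to a single load anyway: */
uint32_t load32_safe(const unsigned char *buf)
{
    uint32_t v;

    memcpy(&v, buf, sizeof v);
    return v;
}
```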
>>>
>>>
>>> And patch from Olaf here:
>>> http://lists.ipxe.org/pipermail/ipxe-devel/2012-March/001280.html
>>>
>>
>> And the third build problem:
>>
>>   [BUILD] bin/qib7322.o
>> drivers/infiniband/qib7322.c: In function 'qib7322_probe':
>> drivers/infiniband/qib7322.c:2141:28: error: 'old_value' may be used
>> uninitialized in this function [-Werror=maybe-uninitialized]
>> drivers/infiniband/qib7322.c:2123:11: note: 'old_value' was declared here
>> cc1: all warnings being treated as errors
>> make[7]: *** [bin/qib7322.o] Error 1
>> make[7]: Leaving directory
>> `/root/xen/xen-4.2.0-rc2/tools/firmware/etherboot/ipxe/src'
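The qib7322 warning is the classic shape where a variable is assigned on only one branch but read on all paths; a hedged sketch of the pattern (hypothetical function, not the driver's actual code) with the straightforward initialise-at-declaration fix applied:

```c
/* Hypothetical sketch, not the actual qib7322_probe() code: a variable
 * like 'old_value' assigned only under a condition yet read afterwards
 * triggers gcc 4.7's "may be used uninitialized".  Initialising it at
 * declaration keeps every path well-defined and silences the warning. */
unsigned int counter_delta(unsigned int current, int have_old,
                           unsigned int old_reading)
{
    unsigned int old_value = 0;  /* defined even when have_old == 0 */

    if (have_old)
        old_value = old_reading;

    return current - old_value;
}
```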
>>
>>
>> And patch from Christian Hesse here:
>> http://permalink.gmane.org/gmane.network.ipxe.devel/1216
>>
>>
>> -- Pasi
>>
>>



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 15 08:43:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 08:43:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1ZBn-0000PH-Dl; Wed, 15 Aug 2012 08:42:55 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Christoph.Egger@amd.com>) id 1T1ZBm-0000P4-AY
	for xen-devel@lists.xen.org; Wed, 15 Aug 2012 08:42:54 +0000
Received: from [85.158.138.51:9121] by server-2.bemta-3.messagelabs.com id
	6C/A0-17748-D016B205; Wed, 15 Aug 2012 08:42:53 +0000
X-Env-Sender: Christoph.Egger@amd.com
X-Msg-Ref: server-5.tower-174.messagelabs.com!1345020170!28442133!1
X-Originating-IP: [216.32.180.185]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21228 invoked from network); 15 Aug 2012 08:42:52 -0000
Received: from co1ehsobe002.messaging.microsoft.com (HELO
	co1outboundpool.messaging.microsoft.com) (216.32.180.185)
	by server-5.tower-174.messagelabs.com with AES128-SHA encrypted SMTP;
	15 Aug 2012 08:42:52 -0000
Received: from mail40-co1-R.bigfish.com (10.243.78.237) by
	CO1EHSOBE004.bigfish.com (10.243.66.67) with Microsoft SMTP Server id
	14.1.225.23; Wed, 15 Aug 2012 08:42:50 +0000
Received: from mail40-co1 (localhost [127.0.0.1])	by mail40-co1-R.bigfish.com
	(Postfix) with ESMTP id 12CCB80160;
	Wed, 15 Aug 2012 08:42:50 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:163.181.249.108; KIP:(null); UIP:(null);
	IPV:NLI; H:ausb3twp01.amd.com; RD:none; EFVD:NLI
X-SpamScore: -3
X-BigFish: VPS-3(zzbb2dI98dI1432Izz1202hzz8275bhz2dh668h839hd25he5bhf0ah107ah)
Received: from mail40-co1 (localhost.localdomain [127.0.0.1]) by mail40-co1
	(MessageSwitch) id 134502016894199_28054;
	Wed, 15 Aug 2012 08:42:48 +0000 (UTC)
Received: from CO1EHSMHS010.bigfish.com (unknown [10.243.78.254])	by
	mail40-co1.bigfish.com (Postfix) with ESMTP id 0B19320044;
	Wed, 15 Aug 2012 08:42:48 +0000 (UTC)
Received: from ausb3twp01.amd.com (163.181.249.108) by
	CO1EHSMHS010.bigfish.com (10.243.66.20) with Microsoft SMTP Server id
	14.1.225.23; Wed, 15 Aug 2012 08:42:47 +0000
X-WSS-ID: 0M8SG79-01-34G-02
X-M-MSG: 
Received: from sausexedgep02.amd.com (sausexedgep02-ext.amd.com
	[163.181.249.73])	(using TLSv1 with cipher AES128-SHA (128/128
	bits))	(No
	client certificate requested)	by ausb3twp01.amd.com (Axway MailGate
	3.8.1)
	with ESMTP id 2F8BC102801E;	Wed, 15 Aug 2012 03:42:44 -0500 (CDT)
Received: from SAUSEXDAG04.amd.com (163.181.55.4) by sausexedgep02.amd.com
	(163.181.36.59) with Microsoft SMTP Server (TLS) id 8.3.192.1;
	Wed, 15 Aug 2012 03:42:56 -0500
Received: from storexhtp02.amd.com (172.24.4.4) by sausexdag04.amd.com
	(163.181.55.4) with Microsoft SMTP Server (TLS) id 14.1.323.3;
	Wed, 15 Aug 2012 03:42:43 -0500
Received: from rhodium.osrc.amd.com (165.204.15.173) by storexhtp02.amd.com
	(172.24.4.4) with Microsoft SMTP Server id 8.3.213.0; Wed, 15 Aug 2012
	04:42:42 -0400
Message-ID: <502B60FD.2010603@amd.com>
Date: Wed, 15 Aug 2012 10:42:37 +0200
From: Christoph Egger <Christoph.Egger@amd.com>
User-Agent: Mozilla/5.0 (X11; NetBSD amd64;
	rv:11.0) Gecko/20120404 Thunderbird/11.0
MIME-Version: 1.0
To: Roger Pau Monne <roger.pau@citrix.com>
References: <1344967799-6646-1-git-send-email-roger.pau@citrix.com>
In-Reply-To: <1344967799-6646-1-git-send-email-roger.pau@citrix.com>
X-OriginatorOrg: amd.com
Cc: Ian Jackson <ian.jackson@eu.citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v4] hotplug/NetBSD: check type of file to
 attach from params
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Fine with me.
Christoph


On 08/14/12 20:09, Roger Pau Monne wrote:

> xend used to set the xenbus backend entry "type" to either "phy" or
> "file", but now libxl sets it to "phy" for both file and block device.
> We have to manually check for the type of the "param" field in order
> to detect if we are trying to attach a file or a block device.
> 
> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> Signed-off-by: Christoph Egger <Christoph.Egger@amd.com>
> Signed-off-by: Roger Pau Monne <roger.pau@citrix.com>
> ---
> Changes since v3:
> 
>  * Add $xparams (that contains the path to the disk file) to the error
>    message.
> 
> Changes since v2:
> 
>  * Better error messages.
> 
>  * Check if params is empty.
> 
>  * Replace xenstore_write with xenstore-write in error function.
> 
>  * Add quotation marks to xparams when testing.
> 
> Changes since v1:
> 
>  * Check that file is either a block special file or a regular file
>    and report error otherwise.
> ---
>  tools/hotplug/NetBSD/block |   13 +++++++++++--
>  1 files changed, 11 insertions(+), 2 deletions(-)
> 
> diff --git a/tools/hotplug/NetBSD/block b/tools/hotplug/NetBSD/block
> index cf5ff3a..f849fe4 100644
> --- a/tools/hotplug/NetBSD/block
> +++ b/tools/hotplug/NetBSD/block
> @@ -12,15 +12,24 @@ export PATH
>  
>  error() {
>  	echo "$@" >&2
> -	xenstore_write $xpath/hotplug-status error
> +	xenstore-write $xpath/hotplug-status error
>  	exit 1
>  }
>  	
>  
>  xpath=$1
>  xstatus=$2
> -xtype=$(xenstore-read "$xpath/type")
>  xparams=$(xenstore-read "$xpath/params")
> +if [ -b "$xparams" ]; then
> +	xtype="phy"
> +elif [ -f "$xparams" ]; then
> +	xtype="file"
> +elif [ -z "$xparams" ]; then
> +	error "$xpath/params is empty, unable to attach block device."
> +else
> +	error "$xparams is not a valid file type to use as block device." \
> +	      "Only block and regular image files accepted."
> +fi
>  
>  case $xstatus in
>  6)



-- 
---to satisfy European Law for business letters:
Advanced Micro Devices GmbH
Einsteinring 24, 85689 Dornach b. Muenchen
Geschaeftsfuehrer: Alberto Bozzo
Sitz: Dornach, Gemeinde Aschheim, Landkreis Muenchen
Registergericht Muenchen, HRB Nr. 43632


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 15 08:53:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 08:53:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1ZLx-0000fz-Jt; Wed, 15 Aug 2012 08:53:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T1ZLw-0000fu-Ib
	for xen-devel@lists.xen.org; Wed, 15 Aug 2012 08:53:24 +0000
Received: from [85.158.143.99:56767] by server-1.bemta-4.messagelabs.com id
	B1/76-07754-3836B205; Wed, 15 Aug 2012 08:53:23 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1345020803!27698808!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2054 invoked from network); 15 Aug 2012 08:53:23 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-3.tower-216.messagelabs.com with SMTP;
	15 Aug 2012 08:53:23 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 15 Aug 2012 09:53:21 +0100
Message-Id: <502B7FC9020000780009500F@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Wed, 15 Aug 2012 09:54:01 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Santosh Jodh" <santosh.jodh@citrix.com>
References: <5357dccf4ba353d08e8e.1344974109@REDBLD-XS.ad.xensource.com>
In-Reply-To: <5357dccf4ba353d08e8e.1344974109@REDBLD-XS.ad.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: wei.wang2@amd.com, tim@xen.org, xiantao.zhang@intel.com,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Dump IOMMU p2m table
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 14.08.12 at 21:55, Santosh Jodh <santosh.jodh@citrix.com> wrote:

Sorry to be picky; after this many rounds I would have
expected that no further comments would be needed.

> +static void amd_dump_p2m_table_level(struct page_info* pg, int level, 
> +                                     paddr_t gpa, int indent)
> +{
> +    paddr_t address;
> +    void *table_vaddr, *pde;
> +    paddr_t next_table_maddr;
> +    int index, next_level, present;
> +    u32 *entry;
> +
> +    if ( level < 1 )
> +        return;
> +
> +    table_vaddr = __map_domain_page(pg);
> +    if ( table_vaddr == NULL )
> +    {
> +        printk("Failed to map IOMMU domain page %"PRIpaddr"\n", 
> +                page_to_maddr(pg));
> +        return;
> +    }
> +
> +    for ( index = 0; index < PTE_PER_TABLE_SIZE; index++ )
> +    {
> +        if ( !(index % 2) )
> +            process_pending_softirqs();
> +
> +        pde = table_vaddr + (index * IOMMU_PAGE_TABLE_ENTRY_SIZE);
> +        next_table_maddr = amd_iommu_get_next_table_from_pte(pde);
> +        entry = (u32*)pde;
> +
> +        present = get_field_from_reg_u32(entry[0],
> +                                         IOMMU_PDE_PRESENT_MASK,
> +                                         IOMMU_PDE_PRESENT_SHIFT);
> +
> +        if ( !present )
> +            continue;
> +
> +        next_level = get_field_from_reg_u32(entry[0],
> +                                            IOMMU_PDE_NEXT_LEVEL_MASK,
> +                                            IOMMU_PDE_NEXT_LEVEL_SHIFT);
> +
> +        address = gpa + amd_offset_level_address(index, level);
> +        if ( next_level >= 1 )
> +            amd_dump_p2m_table_level(
> +                maddr_to_page(next_table_maddr), level - 1,

Did you see Wei's cleanup patches to the code you cloned from?
You should follow that route (replacing the ASSERT() with
printing of the inconsistency and _not_ recursing or doing the
normal printing), and using either "level" or "next_level"
consistently here.

> +                address, indent + 1);
> +        else
> +            printk("%*s" "gfn: %08lx  mfn: %08lx\n",
> +                   indent, " ",

            printk("%*sgfn: %08lx  mfn: %08lx\n",
                   indent, "",

I can vaguely see the point in splitting the two strings in the
first argument, but the extra space in the third argument is
definitely wrong - it'll make level 1 and level 2 indistinguishable.
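The field-width behaviour behind that remark is easy to demonstrate in isolation (a standalone sketch, not Xen's printk, though "%*s" has the same semantics in C's printf family):

```c
#include <stdio.h>

/* The '*' field width is a *minimum*: "%*s" with "" pads to exactly n
 * columns, while with " " it never prints fewer than one column, which
 * is what collapses the two smallest indent levels into the same output. */
int indent_width(int n, const char *pad)
{
    char buf[64];

    /* snprintf returns the number of characters produced */
    return snprintf(buf, sizeof buf, "%*s", n, pad);
}
/* indent_width(0, "")  -> 0,  indent_width(1, "")  -> 1   (distinct)
 * indent_width(0, " ") -> 1,  indent_width(1, " ") -> 1   (collapsed) */
```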

I also don't see how you addressed Wei's reporting of this still
not printing correctly. I may be overlooking something, but
without you making clear in the description what you changed
over the previous version that's also relatively easy to happen.

> +static void vtd_dump_p2m_table_level(paddr_t pt_maddr, int level, paddr_t gpa, 
> +                                     int indent)
> +{
> +    paddr_t address;
> +    int i;
> +    struct dma_pte *pt_vaddr, *pte;
> +    int next_level;
> +
> +    if ( level < 1 )
> +        return;
> +
> +    pt_vaddr = map_vtd_domain_page(pt_maddr);
> +    if ( pt_vaddr == NULL )
> +    {
> +        printk("Failed to map VT-D domain page %"PRIpaddr"\n", pt_maddr);
> +        return;
> +    }
> +
> +    next_level = level - 1;
> +    for ( i = 0; i < PTE_NUM; i++ )
> +    {
> +        if ( !(i % 2) )
> +            process_pending_softirqs();
> +
> +        pte = &pt_vaddr[i];
> +        if ( !dma_pte_present(*pte) )
> +            continue;
> +
> +        address = gpa + offset_level_address(i, level);
> +        if ( next_level >= 1 ) 
> +            vtd_dump_p2m_table_level(dma_pte_addr(*pte), next_level, 
> +                                     address, indent + 1);
> +        else
> +            printk("%*s" "gfn: %08lx mfn: %08lx super=%d rd=%d wr=%d\n",
> +                   indent, " ",

Same comment as above.

> +                   (unsigned long)(address >> PAGE_SHIFT_4K),
> +                   (unsigned long)(pte->val >> PAGE_SHIFT_4K),
> +                   dma_pte_superpage(*pte)? 1 : 0,
> +                   dma_pte_read(*pte)? 1 : 0,
> +                   dma_pte_write(*pte)? 1 : 0);

Missing spaces. Even worse - given your definitions of these
macros there's no point in using the conditional operators here
at all.

And, despite your claim in another response, this still isn't similar
to AMD's variant (which still doesn't print any of these three
attributes).

The printing of the superpage status is pretty pointless anyway,
given that there's no single use of dma_set_pte_superpage()
throughout the tree - validly so since superpages can be in use
currently only when the tables are shared with EPT, in which
case you don't print anything. Plus you'd need to detect the flag
_above_ level 1 (at leaf level the bit is ignored and hence just
confusing if printed) and print the entry instead of recursing. And
if you decide to indeed properly implement this (rather than just
dropping superpage support here), _I_ would expect you to
properly implement level skipping in the corresponding AMD code
too (which similarly isn't being used currently).

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 15 08:53:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 08:53:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1ZLx-0000fz-Jt; Wed, 15 Aug 2012 08:53:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T1ZLw-0000fu-Ib
	for xen-devel@lists.xen.org; Wed, 15 Aug 2012 08:53:24 +0000
Received: from [85.158.143.99:56767] by server-1.bemta-4.messagelabs.com id
	B1/76-07754-3836B205; Wed, 15 Aug 2012 08:53:23 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1345020803!27698808!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2054 invoked from network); 15 Aug 2012 08:53:23 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-3.tower-216.messagelabs.com with SMTP;
	15 Aug 2012 08:53:23 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 15 Aug 2012 09:53:21 +0100
Message-Id: <502B7FC9020000780009500F@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Wed, 15 Aug 2012 09:54:01 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Santosh Jodh" <santosh.jodh@citrix.com>
References: <5357dccf4ba353d08e8e.1344974109@REDBLD-XS.ad.xensource.com>
In-Reply-To: <5357dccf4ba353d08e8e.1344974109@REDBLD-XS.ad.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: wei.wang2@amd.com, tim@xen.org, xiantao.zhang@intel.com,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Dump IOMMU p2m table
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 14.08.12 at 21:55, Santosh Jodh <santosh.jodh@citrix.com> wrote:

Sorry to be picky; after this many rounds I would have
expected that no further comments would be needed.

> +static void amd_dump_p2m_table_level(struct page_info* pg, int level, 
> +                                     paddr_t gpa, int indent)
> +{
> +    paddr_t address;
> +    void *table_vaddr, *pde;
> +    paddr_t next_table_maddr;
> +    int index, next_level, present;
> +    u32 *entry;
> +
> +    if ( level < 1 )
> +        return;
> +
> +    table_vaddr = __map_domain_page(pg);
> +    if ( table_vaddr == NULL )
> +    {
> +        printk("Failed to map IOMMU domain page %"PRIpaddr"\n", 
> +                page_to_maddr(pg));
> +        return;
> +    }
> +
> +    for ( index = 0; index < PTE_PER_TABLE_SIZE; index++ )
> +    {
> +        if ( !(index % 2) )
> +            process_pending_softirqs();
> +
> +        pde = table_vaddr + (index * IOMMU_PAGE_TABLE_ENTRY_SIZE);
> +        next_table_maddr = amd_iommu_get_next_table_from_pte(pde);
> +        entry = (u32*)pde;
> +
> +        present = get_field_from_reg_u32(entry[0],
> +                                         IOMMU_PDE_PRESENT_MASK,
> +                                         IOMMU_PDE_PRESENT_SHIFT);
> +
> +        if ( !present )
> +            continue;
> +
> +        next_level = get_field_from_reg_u32(entry[0],
> +                                            IOMMU_PDE_NEXT_LEVEL_MASK,
> +                                            IOMMU_PDE_NEXT_LEVEL_SHIFT);
> +
> +        address = gpa + amd_offset_level_address(index, level);
> +        if ( next_level >= 1 )
> +            amd_dump_p2m_table_level(
> +                maddr_to_page(next_table_maddr), level - 1,

Did you see Wei's cleanup patches to the code you cloned from?
You should follow that route (replacing the ASSERT() with
printing of the inconsistency and _not_ recursing or doing the
normal printing), and using either "level" or "next_level"
consistently here.

> +                address, indent + 1);
> +        else
> +            printk("%*s" "gfn: %08lx  mfn: %08lx\n",
> +                   indent, " ",

            printk("%*sgfn: %08lx  mfn: %08lx\n",
                   indent, "",

I can vaguely see the point in splitting the two strings in the
first argument, but the extra space in the third argument is
definitely wrong - it'll make level 1 and level 2 indistinguishable.

I also don't see how you addressed Wei's report of this still
not printing correctly. I may be overlooking something, but
without the description making clear what you changed from
the previous version, that is easy to do.

> +static void vtd_dump_p2m_table_level(paddr_t pt_maddr, int level, paddr_t gpa, 
> +                                     int indent)
> +{
> +    paddr_t address;
> +    int i;
> +    struct dma_pte *pt_vaddr, *pte;
> +    int next_level;
> +
> +    if ( level < 1 )
> +        return;
> +
> +    pt_vaddr = map_vtd_domain_page(pt_maddr);
> +    if ( pt_vaddr == NULL )
> +    {
> +        printk("Failed to map VT-D domain page %"PRIpaddr"\n", pt_maddr);
> +        return;
> +    }
> +
> +    next_level = level - 1;
> +    for ( i = 0; i < PTE_NUM; i++ )
> +    {
> +        if ( !(i % 2) )
> +            process_pending_softirqs();
> +
> +        pte = &pt_vaddr[i];
> +        if ( !dma_pte_present(*pte) )
> +            continue;
> +
> +        address = gpa + offset_level_address(i, level);
> +        if ( next_level >= 1 ) 
> +            vtd_dump_p2m_table_level(dma_pte_addr(*pte), next_level, 
> +                                     address, indent + 1);
> +        else
> +            printk("%*s" "gfn: %08lx mfn: %08lx super=%d rd=%d wr=%d\n",
> +                   indent, " ",

Same comment as above.

> +                   (unsigned long)(address >> PAGE_SHIFT_4K),
> +                   (unsigned long)(pte->val >> PAGE_SHIFT_4K),
> +                   dma_pte_superpage(*pte)? 1 : 0,
> +                   dma_pte_read(*pte)? 1 : 0,
> +                   dma_pte_write(*pte)? 1 : 0);

Missing spaces. Even worse - given your definitions of these
macros there's no point in using the conditional operators here
at all.

And, despite your claim in another response, this still isn't similar
to AMD's variant (which still doesn't print any of these three
attributes).

The printing of the superpage status is pretty pointless anyway,
given that there's no single use of dma_set_pte_superpage()
throughout the tree - validly so since superpages can be in use
currently only when the tables are shared with EPT, in which
case you don't print anything. Plus you'd need to detect the flag
_above_ level 1 (at leaf level the bit is ignored and hence just
confusing if printed) and print the entry instead of recursing. And
if you decide to indeed properly implement this (rather than just
dropping superpage support here), _I_ would expect you to
properly implement level skipping in the corresponding AMD code
too (which similarly isn't being used currently).

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 15 08:55:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 08:55:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1ZNQ-0000kT-30; Wed, 15 Aug 2012 08:54:56 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T1ZNO-0000kN-4x
	for xen-devel@lists.xen.org; Wed, 15 Aug 2012 08:54:54 +0000
Received: from [85.158.138.51:48955] by server-7.bemta-3.messagelabs.com id
	82/8E-01906-DD36B205; Wed, 15 Aug 2012 08:54:53 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-174.messagelabs.com!1345020892!20279626!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkwODU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6708 invoked from network); 15 Aug 2012 08:54:52 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-3.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Aug 2012 08:54:52 -0000
X-IronPort-AV: E=Sophos;i="4.77,772,1336348800"; d="scan'208";a="14016169"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	15 Aug 2012 08:54:50 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 15 Aug 2012 09:54:50 +0100
Message-ID: <1345020888.5926.115.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Roger Pau Monne <roger.pau@citrix.com>
Date: Wed, 15 Aug 2012 09:54:48 +0100
In-Reply-To: <1344968106-6765-1-git-send-email-roger.pau@citrix.com>
References: <1344968106-6765-1-git-send-email-roger.pau@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] libxl: fix usage of backend parameter and
 run_hotplug_scripts
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-08-14 at 19:15 +0100, Roger Pau Monne wrote:
> vif interfaces allow the user to specify the domain that should run
> the backend (also known as the driver domain) using the 'backend'
> parameter. This is not compatible with run_hotplug_scripts=1, since
> libxl can only run the hotplug scripts from domain 0.
> 
> Signed-off-by: Roger Pau Monne <roger.pau@citrix.com>
> ---
>  docs/misc/xl-network-configuration.markdown |    6 ++++--
>  tools/libxl/libxl.c                         |   14 ++++++++++++++
>  2 files changed, 18 insertions(+), 2 deletions(-)
> 
> diff --git a/docs/misc/xl-network-configuration.markdown b/docs/misc/xl-network-configuration.markdown
> index 650926c..5e2f049 100644
> --- a/docs/misc/xl-network-configuration.markdown
> +++ b/docs/misc/xl-network-configuration.markdown
> @@ -122,8 +122,10 @@ specified IP address to be used by the guest (blocking all others).
>  ### backend
>  
>  Specifies the backend domain which this device should attach to. This
> -defaults to domain 0. Specifying another domain requires setting up a
> -driver domain which is outside the scope of this document.
> +defaults to domain 0. This option does not work if `run_hotplug_scripts`
> +is not disabled in xl.conf (see xl.conf(5) man page for more information
> +on this option). Specifying another domain requires setting up a driver
> +domain which is outside the scope of this document.
>  
>  ### rate
>  
> diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
> index 8ea3478..6b85cdc 100644
> --- a/tools/libxl/libxl.c
> +++ b/tools/libxl/libxl.c
> @@ -2474,6 +2474,8 @@ out:
>  int libxl__device_nic_setdefault(libxl__gc *gc, libxl_device_nic *nic,
>                                   uint32_t domid)
>  {
> +    int run_hotplug_scripts;
> +
>      if (!nic->mtu)
>          nic->mtu = 1492;
>      if (!nic->model) {
> @@ -2503,6 +2505,18 @@ int libxl__device_nic_setdefault(libxl__gc *gc, libxl_device_nic *nic,
>                                    libxl__xen_script_dir_path()) < 0 )
>          return ERROR_FAIL;
>  
> +    run_hotplug_scripts = libxl__hotplug_settings(gc, XBT_NULL);
> +    if (run_hotplug_scripts < 0) {
> +        LOG(ERROR, "unable to get current hotplug scripts execution setting");

Include the error value?

> +        return run_hotplug_scripts;
> +    }
> +    if (nic->backend_domid != LIBXL_TOOLSTACK_DOMID && run_hotplug_scripts) {
> +        LOG(ERROR, "the vif 'backend=' option cannot be used in conjunction "
> +                   "with run_hotplug_scripts, please set run_hotplug_scripts "
> +                   "to 0 in xl.conf");

This mention of xl.conf in libxl is a layering violation.

I think it would be fine for libxl to log something generic about
hotplug scripts and return an error, and to have a more specific check
at parse time in xl which errors out with a reference to the config option.

> +        return ERROR_FAIL;
> +    }
> +
>      switch (libxl__domain_type(gc, domid)) {
>      case LIBXL_DOMAIN_TYPE_HVM:
>          if (!nic->nictype)



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 15 09:16:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 09:16:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1ZhZ-00012s-4c; Wed, 15 Aug 2012 09:15:45 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T1ZhX-00012n-Eu
	for xen-devel@lists.xen.org; Wed, 15 Aug 2012 09:15:43 +0000
Received: from [85.158.143.35:15094] by server-3.bemta-4.messagelabs.com id
	0A/C9-09529-EB86B205; Wed, 15 Aug 2012 09:15:42 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1345022139!13405366!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26242 invoked from network); 15 Aug 2012 09:15:39 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-21.messagelabs.com with SMTP;
	15 Aug 2012 09:15:39 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 15 Aug 2012 10:15:38 +0100
Message-Id: <502B85020200007800095036@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Wed, 15 Aug 2012 10:16:18 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Xudong Hao" <xudong.hao@intel.com>
References: <1345013682-20618-1-git-send-email-xudong.hao@intel.com>
In-Reply-To: <1345013682-20618-1-git-send-email-xudong.hao@intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: ian.jackson@eu.citrix.com, Xiantao Zhang <xiantao.zhang@intel.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 1/3] xen/tools: Add 64 bits big bar support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 15.08.12 at 08:54, Xudong Hao <xudong.hao@intel.com> wrote:
> --- a/tools/firmware/hvmloader/config.h	Tue Jul 24 17:02:04 2012 +0200
> +++ b/tools/firmware/hvmloader/config.h	Thu Jul 26 15:40:01 2012 +0800
> @@ -53,6 +53,10 @@ extern struct bios_config ovmf_config;
>  /* MMIO hole: Hardcoded defaults, which can be dynamically expanded. */
>  #define PCI_MEM_START       0xf0000000
>  #define PCI_MEM_END         0xfc000000
> +#define PCI_HIGH_MEM_START  0xa000000000ULL
> +#define PCI_HIGH_MEM_END    0xf000000000ULL

With such hard coded values, this is hardly meant to be anything
more than an RFC, is it? These values should not exist in the first
place, and the variables below should be determined from VM
characteristics (best would presumably be to allocate them top
down from the end of physical address space, making sure you
don't run into RAM).

> +#define PCI_MIN_MMIO_ADDR   0x80000000
> +
>  extern unsigned long pci_mem_start, pci_mem_end;
>  
>  
> diff -r 663eb766cdde tools/firmware/hvmloader/pci.c
> --- a/tools/firmware/hvmloader/pci.c	Tue Jul 24 17:02:04 2012 +0200
> +++ b/tools/firmware/hvmloader/pci.c	Thu Jul 26 15:40:01 2012 +0800
> @@ -31,24 +31,33 @@
>  unsigned long pci_mem_start = PCI_MEM_START;
>  unsigned long pci_mem_end = PCI_MEM_END;
>  
> +uint64_t pci_high_mem_start = PCI_HIGH_MEM_START;
> +uint64_t pci_high_mem_end = PCI_HIGH_MEM_END;
> +
>  enum virtual_vga virtual_vga = VGA_none;
>  unsigned long igd_opregion_pgbase = 0;
>  
>  void pci_setup(void)
>  {
> -    uint32_t base, devfn, bar_reg, bar_data, bar_sz, cmd, mmio_total = 0;
> +    uint8_t is_64bar, using_64bar, bar64_relocate = 0;
> +    uint32_t devfn, bar_reg, cmd, bar_data, bar_data_upper;
> +    uint64_t base, bar_sz, bar_sz_upper, mmio_total = 0;
>      uint32_t vga_devfn = 256;
>      uint16_t class, vendor_id, device_id;
>      unsigned int bar, pin, link, isa_irq;
> +    int64_t mmio_left;
>  
>      /* Resources assignable to PCI devices via BARs. */
>      struct resource {
> -        uint32_t base, max;
> -    } *resource, mem_resource, io_resource;
> +        uint64_t base, max;
> +    } *resource, mem_resource, high_mem_resource, io_resource;
>  
>      /* Create a list of device BARs in descending order of size. */
>      struct bars {
> -        uint32_t devfn, bar_reg, bar_sz;
> +        uint32_t is_64bar;
> +        uint32_t devfn;
> +        uint32_t bar_reg;
> +        uint64_t bar_sz;
>      } *bars = (struct bars *)scratch_start;
>      unsigned int i, nr_bars = 0;
>  
> @@ -133,23 +142,35 @@ void pci_setup(void)
>          /* Map the I/O memory and port resources. */
>          for ( bar = 0; bar < 7; bar++ )
>          {
> +            bar_sz_upper = 0;
>              bar_reg = PCI_BASE_ADDRESS_0 + 4*bar;
>              if ( bar == 6 )
>                  bar_reg = PCI_ROM_ADDRESS;
>  
>              bar_data = pci_readl(devfn, bar_reg);
> +            is_64bar = !!((bar_data & (PCI_BASE_ADDRESS_SPACE |
> +                         PCI_BASE_ADDRESS_MEM_TYPE_MASK)) ==
> +                         (PCI_BASE_ADDRESS_SPACE_MEMORY |
> +                         PCI_BASE_ADDRESS_MEM_TYPE_64));
>              pci_writel(devfn, bar_reg, ~0);
>              bar_sz = pci_readl(devfn, bar_reg);
>              pci_writel(devfn, bar_reg, bar_data);
> +
> +            if (is_64bar) {
> +                bar_data_upper = pci_readl(devfn, bar_reg + 4);
> +                pci_writel(devfn, bar_reg + 4, ~0);
> +                bar_sz_upper = pci_readl(devfn, bar_reg + 4);
> +                pci_writel(devfn, bar_reg + 4, bar_data_upper);
> +                bar_sz = (bar_sz_upper << 32) | bar_sz;
> +            }
> +            bar_sz &= (((bar_data & PCI_BASE_ADDRESS_SPACE) ==
> +                        PCI_BASE_ADDRESS_SPACE_MEMORY) ?
> +                       0xfffffffffffffff0 :

This should be a proper constant (or the masking could be
done earlier, in which case you could continue to use the
existing PCI_BASE_ADDRESS_MEM_MASK).

> +                       (PCI_BASE_ADDRESS_IO_MASK & 0xffff));
> +            bar_sz &= ~(bar_sz - 1);
>              if ( bar_sz == 0 )
>                  continue;
>  
> -            bar_sz &= (((bar_data & PCI_BASE_ADDRESS_SPACE) ==
> -                        PCI_BASE_ADDRESS_SPACE_MEMORY) ?
> -                       PCI_BASE_ADDRESS_MEM_MASK :
> -                       (PCI_BASE_ADDRESS_IO_MASK & 0xffff));
> -            bar_sz &= ~(bar_sz - 1);
> -
>              for ( i = 0; i < nr_bars; i++ )
>                  if ( bars[i].bar_sz < bar_sz )
>                      break;
> @@ -157,6 +178,7 @@ void pci_setup(void)
>              if ( i != nr_bars )
>                  memmove(&bars[i+1], &bars[i], (nr_bars-i) * sizeof(*bars));
>  
> +            bars[i].is_64bar = is_64bar;
>              bars[i].devfn   = devfn;
>              bars[i].bar_reg = bar_reg;
>              bars[i].bar_sz  = bar_sz;
> @@ -167,11 +189,8 @@ void pci_setup(void)
>  
>              nr_bars++;
>  
> -            /* Skip the upper-half of the address for a 64-bit BAR. */
> -            if ( (bar_data & (PCI_BASE_ADDRESS_SPACE |
> -                              PCI_BASE_ADDRESS_MEM_TYPE_MASK)) == 
> -                 (PCI_BASE_ADDRESS_SPACE_MEMORY | 
> -                  PCI_BASE_ADDRESS_MEM_TYPE_64) )
> +            /*The upper half is already calculated, skip it! */
> +            if (is_64bar)
>                  bar++;
>          }
>  
> @@ -193,10 +212,14 @@ void pci_setup(void)
>          pci_writew(devfn, PCI_COMMAND, cmd);
>      }
>  
> -    while ( (mmio_total > (pci_mem_end - pci_mem_start)) &&
> -            ((pci_mem_start << 1) != 0) )
> +    while ( mmio_total > (pci_mem_end - pci_mem_start) && pci_mem_start )

The old code here could remain in place if ...

>          pci_mem_start <<= 1;
>  
> +    if (!pci_mem_start) {

.. the condition here would get changed to the one used in the
first part of the while above.

> +        bar64_relocate = 1;
> +        pci_mem_start = PCI_MIN_MMIO_ADDR;

Which would then also make this assignment (and the
constant) unnecessary.

> +    }
> +
>      /* Relocate RAM that overlaps PCI space (in 64k-page chunks). */
>      while ( (pci_mem_start >> PAGE_SHIFT) < hvm_info->low_mem_pgend )
>      {
> @@ -218,11 +241,15 @@ void pci_setup(void)
>          hvm_info->high_mem_pgend += nr_pages;
>      }
>  
> +    high_mem_resource.base = pci_high_mem_start; 
> +    high_mem_resource.max = pci_high_mem_end;
>      mem_resource.base = pci_mem_start;
>      mem_resource.max = pci_mem_end;
>      io_resource.base = 0xc000;
>      io_resource.max = 0x10000;
>  
> +    mmio_left = pci_mem_end - pci_mem_end;
> +
>      /* Assign iomem and ioport resources in descending order of size. */
>      for ( i = 0; i < nr_bars; i++ )
>      {
> @@ -230,13 +257,21 @@ void pci_setup(void)
>          bar_reg = bars[i].bar_reg;
>          bar_sz  = bars[i].bar_sz;
>  
> +        using_64bar = bars[i].is_64bar && bar64_relocate && (mmio_left < 
> bar_sz);
>          bar_data = pci_readl(devfn, bar_reg);
>  
>          if ( (bar_data & PCI_BASE_ADDRESS_SPACE) ==
>               PCI_BASE_ADDRESS_SPACE_MEMORY )
>          {
> -            resource = &mem_resource;
> -            bar_data &= ~PCI_BASE_ADDRESS_MEM_MASK;
> +            if (using_64bar) {
> +                resource = &high_mem_resource;
> +                bar_data &= ~PCI_BASE_ADDRESS_MEM_MASK;
> +            } 
> +            else {
> +                resource = &mem_resource;
> +                bar_data &= ~PCI_BASE_ADDRESS_MEM_MASK;
> +            }
> +            mmio_left -= bar_sz;
>          }
>          else
>          {
> @@ -244,13 +279,14 @@ void pci_setup(void)
>              bar_data &= ~PCI_BASE_ADDRESS_IO_MASK;
>          }
>  
> -        base = (resource->base + bar_sz - 1) & ~(bar_sz - 1);
> -        bar_data |= base;
> +        base = (resource->base  + bar_sz - 1) & ~(uint64_t)(bar_sz - 1);
> +        bar_data |= (uint32_t)base;
> +        bar_data_upper = (uint32_t)(base >> 32);
>          base += bar_sz;
>  
>          if ( (base < resource->base) || (base > resource->max) )
>          {
> -            printf("pci dev %02x:%x bar %02x size %08x: no space for "
> +            printf("pci dev %02x:%x bar %02x size %llx: no space for "
>                     "resource!\n", devfn>>3, devfn&7, bar_reg, bar_sz);
>              continue;
>          }
> @@ -258,7 +294,9 @@ void pci_setup(void)
>          resource->base = base;
>  
>          pci_writel(devfn, bar_reg, bar_data);
> -        printf("pci dev %02x:%x bar %02x size %08x: %08x\n",
> +        if (using_64bar)
> +            pci_writel(devfn, bar_reg + 4, bar_data_upper);
> +        printf("pci dev %02x:%x bar %02x size %llx: %08x\n",
>                 devfn>>3, devfn&7, bar_reg, bar_sz, bar_data);
>  
>          /* Now enable the memory or I/O mapping. */

Besides that, I'd encourage you to have an intermediate state
between not using BARs above 4Gb and forcing all 64-bit ones
beyond 4Gb for maximum compatibility - try fitting as many as
you can into the low 2Gb. Perhaps this would warrant an option
of some sort.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 15 09:16:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 09:16:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1ZhZ-00012s-4c; Wed, 15 Aug 2012 09:15:45 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T1ZhX-00012n-Eu
	for xen-devel@lists.xen.org; Wed, 15 Aug 2012 09:15:43 +0000
Received: from [85.158.143.35:15094] by server-3.bemta-4.messagelabs.com id
	0A/C9-09529-EB86B205; Wed, 15 Aug 2012 09:15:42 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1345022139!13405366!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26242 invoked from network); 15 Aug 2012 09:15:39 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-21.messagelabs.com with SMTP;
	15 Aug 2012 09:15:39 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 15 Aug 2012 10:15:38 +0100
Message-Id: <502B85020200007800095036@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Wed, 15 Aug 2012 10:16:18 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Xudong Hao" <xudong.hao@intel.com>
References: <1345013682-20618-1-git-send-email-xudong.hao@intel.com>
In-Reply-To: <1345013682-20618-1-git-send-email-xudong.hao@intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: ian.jackson@eu.citrix.com, Xiantao Zhang <xiantao.zhang@intel.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 1/3] xen/tools: Add 64 bits big bar support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 15.08.12 at 08:54, Xudong Hao <xudong.hao@intel.com> wrote:
> --- a/tools/firmware/hvmloader/config.h	Tue Jul 24 17:02:04 2012 +0200
> +++ b/tools/firmware/hvmloader/config.h	Thu Jul 26 15:40:01 2012 +0800
> @@ -53,6 +53,10 @@ extern struct bios_config ovmf_config;
>  /* MMIO hole: Hardcoded defaults, which can be dynamically expanded. */
>  #define PCI_MEM_START       0xf0000000
>  #define PCI_MEM_END         0xfc000000
> +#define PCI_HIGH_MEM_START  0xa000000000ULL
> +#define PCI_HIGH_MEM_END    0xf000000000ULL

With such hard-coded values, this is hardly meant to be anything
more than an RFC, is it? These values should not exist in the first
place; the variables below should be determined from VM
characteristics (best would presumably be to allocate them top
down from the end of the physical address space, making sure you
don't run into RAM).

> +#define PCI_MIN_MMIO_ADDR   0x80000000
> +
>  extern unsigned long pci_mem_start, pci_mem_end;
>  
>  
> diff -r 663eb766cdde tools/firmware/hvmloader/pci.c
> --- a/tools/firmware/hvmloader/pci.c	Tue Jul 24 17:02:04 2012 +0200
> +++ b/tools/firmware/hvmloader/pci.c	Thu Jul 26 15:40:01 2012 +0800
> @@ -31,24 +31,33 @@
>  unsigned long pci_mem_start = PCI_MEM_START;
>  unsigned long pci_mem_end = PCI_MEM_END;
>  
> +uint64_t pci_high_mem_start = PCI_HIGH_MEM_START;
> +uint64_t pci_high_mem_end = PCI_HIGH_MEM_END;
> +
>  enum virtual_vga virtual_vga = VGA_none;
>  unsigned long igd_opregion_pgbase = 0;
>  
>  void pci_setup(void)
>  {
> -    uint32_t base, devfn, bar_reg, bar_data, bar_sz, cmd, mmio_total = 0;
> +    uint8_t is_64bar, using_64bar, bar64_relocate = 0;
> +    uint32_t devfn, bar_reg, cmd, bar_data, bar_data_upper;
> +    uint64_t base, bar_sz, bar_sz_upper, mmio_total = 0;
>      uint32_t vga_devfn = 256;
>      uint16_t class, vendor_id, device_id;
>      unsigned int bar, pin, link, isa_irq;
> +    int64_t mmio_left;
>  
>      /* Resources assignable to PCI devices via BARs. */
>      struct resource {
> -        uint32_t base, max;
> -    } *resource, mem_resource, io_resource;
> +        uint64_t base, max;
> +    } *resource, mem_resource, high_mem_resource, io_resource;
>  
>      /* Create a list of device BARs in descending order of size. */
>      struct bars {
> -        uint32_t devfn, bar_reg, bar_sz;
> +        uint32_t is_64bar;
> +        uint32_t devfn;
> +        uint32_t bar_reg;
> +        uint64_t bar_sz;
>      } *bars = (struct bars *)scratch_start;
>      unsigned int i, nr_bars = 0;
>  
> @@ -133,23 +142,35 @@ void pci_setup(void)
>          /* Map the I/O memory and port resources. */
>          for ( bar = 0; bar < 7; bar++ )
>          {
> +            bar_sz_upper = 0;
>              bar_reg = PCI_BASE_ADDRESS_0 + 4*bar;
>              if ( bar == 6 )
>                  bar_reg = PCI_ROM_ADDRESS;
>  
>              bar_data = pci_readl(devfn, bar_reg);
> +            is_64bar = !!((bar_data & (PCI_BASE_ADDRESS_SPACE |
> +                         PCI_BASE_ADDRESS_MEM_TYPE_MASK)) ==
> +                         (PCI_BASE_ADDRESS_SPACE_MEMORY |
> +                         PCI_BASE_ADDRESS_MEM_TYPE_64));
>              pci_writel(devfn, bar_reg, ~0);
>              bar_sz = pci_readl(devfn, bar_reg);
>              pci_writel(devfn, bar_reg, bar_data);
> +
> +            if (is_64bar) {
> +                bar_data_upper = pci_readl(devfn, bar_reg + 4);
> +                pci_writel(devfn, bar_reg + 4, ~0);
> +                bar_sz_upper = pci_readl(devfn, bar_reg + 4);
> +                pci_writel(devfn, bar_reg + 4, bar_data_upper);
> +                bar_sz = (bar_sz_upper << 32) | bar_sz;
> +            }
> +            bar_sz &= (((bar_data & PCI_BASE_ADDRESS_SPACE) ==
> +                        PCI_BASE_ADDRESS_SPACE_MEMORY) ?
> +                       0xfffffffffffffff0 :

This should be a proper constant (or the masking could be
done earlier, in which case you could continue to use the
existing PCI_BASE_ADDRESS_MEM_MASK).

> +                       (PCI_BASE_ADDRESS_IO_MASK & 0xffff));
> +            bar_sz &= ~(bar_sz - 1);
>              if ( bar_sz == 0 )
>                  continue;
>  
> -            bar_sz &= (((bar_data & PCI_BASE_ADDRESS_SPACE) ==
> -                        PCI_BASE_ADDRESS_SPACE_MEMORY) ?
> -                       PCI_BASE_ADDRESS_MEM_MASK :
> -                       (PCI_BASE_ADDRESS_IO_MASK & 0xffff));
> -            bar_sz &= ~(bar_sz - 1);
> -
>              for ( i = 0; i < nr_bars; i++ )
>                  if ( bars[i].bar_sz < bar_sz )
>                      break;
> @@ -157,6 +178,7 @@ void pci_setup(void)
>              if ( i != nr_bars )
>                  memmove(&bars[i+1], &bars[i], (nr_bars-i) * sizeof(*bars));
>  
> +            bars[i].is_64bar = is_64bar;
>              bars[i].devfn   = devfn;
>              bars[i].bar_reg = bar_reg;
>              bars[i].bar_sz  = bar_sz;
> @@ -167,11 +189,8 @@ void pci_setup(void)
>  
>              nr_bars++;
>  
> -            /* Skip the upper-half of the address for a 64-bit BAR. */
> -            if ( (bar_data & (PCI_BASE_ADDRESS_SPACE |
> -                              PCI_BASE_ADDRESS_MEM_TYPE_MASK)) == 
> -                 (PCI_BASE_ADDRESS_SPACE_MEMORY | 
> -                  PCI_BASE_ADDRESS_MEM_TYPE_64) )
> +            /* The upper half was already read above; skip it. */
> +            if (is_64bar)
>                  bar++;
>          }
>  
> @@ -193,10 +212,14 @@ void pci_setup(void)
>          pci_writew(devfn, PCI_COMMAND, cmd);
>      }
>  
> -    while ( (mmio_total > (pci_mem_end - pci_mem_start)) &&
> -            ((pci_mem_start << 1) != 0) )
> +    while ( mmio_total > (pci_mem_end - pci_mem_start) && pci_mem_start )

The old code here could remain in place if ...

>          pci_mem_start <<= 1;
>  
> +    if (!pci_mem_start) {

... the condition here were changed to the one used in the
first part of the while loop above.

> +        bar64_relocate = 1;
> +        pci_mem_start = PCI_MIN_MMIO_ADDR;

Which would then also make this assignment (and the
constant) unnecessary.

> +    }
> +
>      /* Relocate RAM that overlaps PCI space (in 64k-page chunks). */
>      while ( (pci_mem_start >> PAGE_SHIFT) < hvm_info->low_mem_pgend )
>      {
> @@ -218,11 +241,15 @@ void pci_setup(void)
>          hvm_info->high_mem_pgend += nr_pages;
>      }
>  
> +    high_mem_resource.base = pci_high_mem_start; 
> +    high_mem_resource.max = pci_high_mem_end;
>      mem_resource.base = pci_mem_start;
>      mem_resource.max = pci_mem_end;
>      io_resource.base = 0xc000;
>      io_resource.max = 0x10000;
>  
> +    mmio_left = pci_mem_end - pci_mem_start;
> +
>      /* Assign iomem and ioport resources in descending order of size. */
>      for ( i = 0; i < nr_bars; i++ )
>      {
> @@ -230,13 +257,21 @@ void pci_setup(void)
>          bar_reg = bars[i].bar_reg;
>          bar_sz  = bars[i].bar_sz;
>  
> +        using_64bar = bars[i].is_64bar && bar64_relocate && (mmio_left < bar_sz);
>          bar_data = pci_readl(devfn, bar_reg);
>  
>          if ( (bar_data & PCI_BASE_ADDRESS_SPACE) ==
>               PCI_BASE_ADDRESS_SPACE_MEMORY )
>          {
> -            resource = &mem_resource;
> -            bar_data &= ~PCI_BASE_ADDRESS_MEM_MASK;
> +            if (using_64bar) {
> +                resource = &high_mem_resource;
> +                bar_data &= ~PCI_BASE_ADDRESS_MEM_MASK;
> +            } 
> +            else {
> +                resource = &mem_resource;
> +                bar_data &= ~PCI_BASE_ADDRESS_MEM_MASK;
> +            }
> +            mmio_left -= bar_sz;
>          }
>          else
>          {
> @@ -244,13 +279,14 @@ void pci_setup(void)
>              bar_data &= ~PCI_BASE_ADDRESS_IO_MASK;
>          }
>  
> -        base = (resource->base + bar_sz - 1) & ~(bar_sz - 1);
> -        bar_data |= base;
> +        base = (resource->base  + bar_sz - 1) & ~(uint64_t)(bar_sz - 1);
> +        bar_data |= (uint32_t)base;
> +        bar_data_upper = (uint32_t)(base >> 32);
>          base += bar_sz;
>  
>          if ( (base < resource->base) || (base > resource->max) )
>          {
> -            printf("pci dev %02x:%x bar %02x size %08x: no space for "
> +            printf("pci dev %02x:%x bar %02x size %llx: no space for "
>                     "resource!\n", devfn>>3, devfn&7, bar_reg, bar_sz);
>              continue;
>          }
> @@ -258,7 +294,9 @@ void pci_setup(void)
>          resource->base = base;
>  
>          pci_writel(devfn, bar_reg, bar_data);
> -        printf("pci dev %02x:%x bar %02x size %08x: %08x\n",
> +        if (using_64bar)
> +            pci_writel(devfn, bar_reg + 4, bar_data_upper);
> +        printf("pci dev %02x:%x bar %02x size %llx: %08x\n",
>                 devfn>>3, devfn&7, bar_reg, bar_sz, bar_data);
>  
>          /* Now enable the memory or I/O mapping. */

Besides that, I'd encourage you to have an intermediate state
between not using BARs above 4Gb at all and forcing all 64-bit ones
beyond 4Gb: for maximum compatibility, try fitting as many as
you can into the low 2Gb. Perhaps this would warrant an option
of some sort.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 15 09:18:18 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 09:18:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1Zji-00019o-SY; Wed, 15 Aug 2012 09:17:58 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xudong.hao@intel.com>) id 1T1Zjh-00019h-3T
	for xen-devel@lists.xen.org; Wed, 15 Aug 2012 09:17:57 +0000
Received: from [85.158.143.35:48782] by server-3.bemta-4.messagelabs.com id
	D6/ED-09529-4496B205; Wed, 15 Aug 2012 09:17:56 +0000
X-Env-Sender: xudong.hao@intel.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1345022274!5778454!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzU3ODEw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5929 invoked from network); 15 Aug 2012 09:17:55 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-2.tower-21.messagelabs.com with SMTP;
	15 Aug 2012 09:17:55 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga102.jf.intel.com with ESMTP; 15 Aug 2012 02:17:54 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.77,772,1336374000"; d="scan'208";a="180929943"
Received: from fmsmsx104.amr.corp.intel.com ([10.19.9.35])
	by orsmga001.jf.intel.com with ESMTP; 15 Aug 2012 02:17:53 -0700
Received: from fmsmsx153.amr.corp.intel.com (10.19.17.7) by
	FMSMSX104.amr.corp.intel.com (10.19.9.35) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Wed, 15 Aug 2012 02:17:53 -0700
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	FMSMSX153.amr.corp.intel.com (10.19.17.7) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Wed, 15 Aug 2012 02:17:53 -0700
Received: from shsmsx102.ccr.corp.intel.com ([169.254.2.196]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.232]) with mapi id
	14.01.0355.002; Wed, 15 Aug 2012 17:17:51 +0800
From: "Hao, Xudong" <xudong.hao@intel.com>
To: Pasi Kärkkäinen <pasik@iki.fi>
Thread-Topic: [Xen-devel] [PATCH 1/3] xen/tools: Add 64 bits big bar support
Thread-Index: AQHNerKKEOT7HWjBmke6+J0K3zDOkJdaAawAgACRS+A=
Date: Wed, 15 Aug 2012 09:17:51 +0000
Message-ID: <403610A45A2B5242BD291EDAE8B37D300FE89C5F@SHSMSX102.ccr.corp.intel.com>
References: <1345013682-20618-1-git-send-email-xudong.hao@intel.com>
	<20120815081846.GC19851@reaktio.net>
In-Reply-To: <20120815081846.GC19851@reaktio.net>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "ian.jackson@eu.citrix.com" <ian.jackson@eu.citrix.com>, "Zhang,
	Xiantao" <xiantao.zhang@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 1/3] xen/tools: Add 64 bits big bar support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


> -----Original Message-----
> From: Pasi Kärkkäinen [mailto:pasik@iki.fi]
> Sent: Wednesday, August 15, 2012 4:19 PM
> To: Hao, Xudong
> Cc: xen-devel@lists.xen.org; ian.jackson@eu.citrix.com; Zhang, Xiantao
> Subject: Re: [Xen-devel] [PATCH 1/3] xen/tools: Add 64 bits big bar support
>

> On Wed, Aug 15, 2012 at 02:54:41PM +0800, Xudong Hao wrote:
> > Currently it is assumed that PCI device BARs are accessed below 4G. If there is
> > such a device whose BAR size is larger than 4G, it must be accessed above the
> > 4G memory address.
> > This patch enables 64-bit big BAR support in hvmloader.
> >
>

> Hello,
>
> Do you have an example of such a PCI device with a >4G BAR?
>


We have a standard PCI-e device based on the Intel MIC (Many Integrated Core) architecture; this device has a 16G BAR. If we want to pass such a device through to a guest, current Xen can't support it, so we made this patch to add big BAR support.

For Intel MIC technology, you can refer to
http://openlab.web.cern.ch/sites/openlab.web.cern.ch/files/presentations/KNC_ISA_Overview_Apr2012_SJ_New_V4.pdf
or
http://openlab.web.cern.ch/sites/openlab.web.cern.ch/files/presentations/2012.04.25%20Andrzej%20Nowak%20-%20An%20overview%20of%20Intel%20MIC%20-%20technology,%20hardware%20and%20software%20v3.pdf

- Thanks
Xudong

> Thanks,
>
> -- Pasi
>

> > Signed-off-by: Xiantao Zhang <xiantao.zhang@intel.com>
> > Signed-off-by: Xudong Hao <xudong.hao@intel.com>
> >
> > diff -r 663eb766cdde tools/firmware/hvmloader/config.h
> > --- a/tools/firmware/hvmloader/config.h	Tue Jul 24 17:02:04 2012 +0200
> > +++ b/tools/firmware/hvmloader/config.h	Thu Jul 26 15:40:01 2012 +0800
> > @@ -53,6 +53,10 @@ extern struct bios_config ovmf_config;
> >  /* MMIO hole: Hardcoded defaults, which can be dynamically expanded. */
> >  #define PCI_MEM_START       0xf0000000
> >  #define PCI_MEM_END         0xfc000000
> > +#define PCI_HIGH_MEM_START  0xa000000000ULL
> > +#define PCI_HIGH_MEM_END    0xf000000000ULL
> > +#define PCI_MIN_MMIO_ADDR   0x80000000
> > +
> >  extern unsigned long pci_mem_start, pci_mem_end;
> >
> >
> > diff -r 663eb766cdde tools/firmware/hvmloader/pci.c
> > --- a/tools/firmware/hvmloader/pci.c	Tue Jul 24 17:02:04 2012 +0200
> > +++ b/tools/firmware/hvmloader/pci.c	Thu Jul 26 15:40:01 2012 +0800
> > @@ -31,24 +31,33 @@
> >  unsigned long pci_mem_start = PCI_MEM_START;
> >  unsigned long pci_mem_end = PCI_MEM_END;
> >
> > +uint64_t pci_high_mem_start = PCI_HIGH_MEM_START;
> > +uint64_t pci_high_mem_end = PCI_HIGH_MEM_END;
> > +
> >  enum virtual_vga virtual_vga = VGA_none;
> >  unsigned long igd_opregion_pgbase = 0;
> >
> >  void pci_setup(void)
> >  {
> > -    uint32_t base, devfn, bar_reg, bar_data, bar_sz, cmd, mmio_total = 0;
> > +    uint8_t is_64bar, using_64bar, bar64_relocate = 0;
> > +    uint32_t devfn, bar_reg, cmd, bar_data, bar_data_upper;
> > +    uint64_t base, bar_sz, bar_sz_upper, mmio_total = 0;
> >      uint32_t vga_devfn = 256;
> >      uint16_t class, vendor_id, device_id;
> >      unsigned int bar, pin, link, isa_irq;
> > +    int64_t mmio_left;
> >
> >      /* Resources assignable to PCI devices via BARs. */
> >      struct resource {
> > -        uint32_t base, max;
> > -    } *resource, mem_resource, io_resource;
> > +        uint64_t base, max;
> > +    } *resource, mem_resource, high_mem_resource, io_resource;
> >
> >      /* Create a list of device BARs in descending order of size. */
> >      struct bars {
> > -        uint32_t devfn, bar_reg, bar_sz;
> > +        uint32_t is_64bar;
> > +        uint32_t devfn;
> > +        uint32_t bar_reg;
> > +        uint64_t bar_sz;
> >      } *bars = (struct bars *)scratch_start;
> >      unsigned int i, nr_bars = 0;
> >
> > @@ -133,23 +142,35 @@ void pci_setup(void)
> >          /* Map the I/O memory and port resources. */
> >          for ( bar = 0; bar < 7; bar++ )
> >          {
> > +            bar_sz_upper = 0;
> >              bar_reg = PCI_BASE_ADDRESS_0 + 4*bar;
> >              if ( bar == 6 )
> >                  bar_reg = PCI_ROM_ADDRESS;
> >
> >              bar_data = pci_readl(devfn, bar_reg);
> > +            is_64bar = !!((bar_data & (PCI_BASE_ADDRESS_SPACE |
> > +                         PCI_BASE_ADDRESS_MEM_TYPE_MASK)) ==
> > +                         (PCI_BASE_ADDRESS_SPACE_MEMORY |
> > +                         PCI_BASE_ADDRESS_MEM_TYPE_64));
> >              pci_writel(devfn, bar_reg, ~0);
> >              bar_sz = pci_readl(devfn, bar_reg);
> >              pci_writel(devfn, bar_reg, bar_data);
> > +
> > +            if (is_64bar) {
> > +                bar_data_upper = pci_readl(devfn, bar_reg + 4);
> > +                pci_writel(devfn, bar_reg + 4, ~0);
> > +                bar_sz_upper = pci_readl(devfn, bar_reg + 4);
> > +                pci_writel(devfn, bar_reg + 4, bar_data_upper);
> > +                bar_sz = (bar_sz_upper << 32) | bar_sz;
> > +            }
> > +            bar_sz &= (((bar_data & PCI_BASE_ADDRESS_SPACE) ==
> > +                        PCI_BASE_ADDRESS_SPACE_MEMORY) ?
> > +                       0xfffffffffffffff0 :
> > +                       (PCI_BASE_ADDRESS_IO_MASK & 0xffff));
> > +            bar_sz &= ~(bar_sz - 1);
> >              if ( bar_sz == 0 )
> >                  continue;
> >
> > -            bar_sz &= (((bar_data & PCI_BASE_ADDRESS_SPACE) ==
> > -                        PCI_BASE_ADDRESS_SPACE_MEMORY) ?
> > -                       PCI_BASE_ADDRESS_MEM_MASK :
> > -                       (PCI_BASE_ADDRESS_IO_MASK & 0xffff));
> > -            bar_sz &= ~(bar_sz - 1);
> > -
> >              for ( i = 0; i < nr_bars; i++ )
> >                  if ( bars[i].bar_sz < bar_sz )
> >                      break;
> > @@ -157,6 +178,7 @@ void pci_setup(void)
> >              if ( i != nr_bars )
> >                  memmove(&bars[i+1], &bars[i], (nr_bars-i) * sizeof(*bars));
> >
> > +            bars[i].is_64bar = is_64bar;
> >              bars[i].devfn   = devfn;
> >              bars[i].bar_reg = bar_reg;
> >              bars[i].bar_sz  = bar_sz;
> > @@ -167,11 +189,8 @@ void pci_setup(void)
> >
> >              nr_bars++;
> >
> > -            /* Skip the upper-half of the address for a 64-bit BAR. */
> > -            if ( (bar_data & (PCI_BASE_ADDRESS_SPACE |
> > -                              PCI_BASE_ADDRESS_MEM_TYPE_MASK)) ==
> > -                 (PCI_BASE_ADDRESS_SPACE_MEMORY |
> > -                  PCI_BASE_ADDRESS_MEM_TYPE_64) )
> > +            /* The upper half was already read above; skip it. */
> > +            if (is_64bar)
> >                  bar++;
> >          }
> >
> > @@ -193,10 +212,14 @@ void pci_setup(void)
> >          pci_writew(devfn, PCI_COMMAND, cmd);
> >      }
> >
> > -    while ( (mmio_total > (pci_mem_end - pci_mem_start)) &&
> > -            ((pci_mem_start << 1) != 0) )
> > +    while ( mmio_total > (pci_mem_end - pci_mem_start) && pci_mem_start )
> >          pci_mem_start <<= 1;
> >
> > +    if (!pci_mem_start) {
> > +        bar64_relocate = 1;
> > +        pci_mem_start = PCI_MIN_MMIO_ADDR;
> > +    }
> > +
> >      /* Relocate RAM that overlaps PCI space (in 64k-page chunks). */
> >      while ( (pci_mem_start >> PAGE_SHIFT) < hvm_info->low_mem_pgend )
> >      {
> > @@ -218,11 +241,15 @@ void pci_setup(void)
> >          hvm_info->high_mem_pgend += nr_pages;
> >      }
> >
> > +    high_mem_resource.base = pci_high_mem_start;
> > +    high_mem_resource.max = pci_high_mem_end;
> >      mem_resource.base = pci_mem_start;
> >      mem_resource.max = pci_mem_end;
> >      io_resource.base = 0xc000;
> >      io_resource.max = 0x10000;
> >
> > +    mmio_left = pci_mem_end - pci_mem_start;
> > +
> >      /* Assign iomem and ioport resources in descending order of size. */
> >      for ( i = 0; i < nr_bars; i++ )
> >      {
> > @@ -230,13 +257,21 @@ void pci_setup(void)
> >          bar_reg = bars[i].bar_reg;
> >          bar_sz  = bars[i].bar_sz;
> >
> > +        using_64bar = bars[i].is_64bar && bar64_relocate && (mmio_left < bar_sz);
> >          bar_data = pci_readl(devfn, bar_reg);
> >
> >          if ( (bar_data & PCI_BASE_ADDRESS_SPACE) ==
> >               PCI_BASE_ADDRESS_SPACE_MEMORY )
> >          {
> > -            resource = &mem_resource;
> > -            bar_data &= ~PCI_BASE_ADDRESS_MEM_MASK;
> > +            if (using_64bar) {
> > +                resource = &high_mem_resource;
> > +                bar_data &= ~PCI_BASE_ADDRESS_MEM_MASK;
> > +            }
> > +            else {
> > +                resource = &mem_resource;
> > +                bar_data &= ~PCI_BASE_ADDRESS_MEM_MASK;
> > +            }
> > +            mmio_left -= bar_sz;
> >          }
> >          else
> >          {
> > @@ -244,13 +279,14 @@ void pci_setup(void)
> >              bar_data &= ~PCI_BASE_ADDRESS_IO_MASK;
> >          }
> >
> > -        base = (resource->base + bar_sz - 1) & ~(bar_sz - 1);
> > -        bar_data |= base;
> > +        base = (resource->base  + bar_sz - 1) & ~(uint64_t)(bar_sz - 1);
> > +        bar_data |= (uint32_t)base;
> > +        bar_data_upper = (uint32_t)(base >> 32);
> >          base += bar_sz;
> >
> >          if ( (base < resource->base) || (base > resource->max) )
> >          {
> > -            printf("pci dev %02x:%x bar %02x size %08x: no space for "
> > +            printf("pci dev %02x:%x bar %02x size %llx: no space for "
> >                     "resource!\n", devfn>>3, devfn&7, bar_reg, bar_sz);
> >              continue;
> >          }
> > @@ -258,7 +294,9 @@ void pci_setup(void)
> >          resource->base = base;
> >
> >          pci_writel(devfn, bar_reg, bar_data);
> > -        printf("pci dev %02x:%x bar %02x size %08x: %08x\n",
> > +        if (using_64bar)
> > +            pci_writel(devfn, bar_reg + 4, bar_data_upper);
> > +        printf("pci dev %02x:%x bar %02x size %llx: %08x\n",
> >                 devfn>>3, devfn&7, bar_reg, bar_sz, bar_data);
> >
> >          /* Now enable the memory or I/O mapping. */
> >

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> >      mem_resource.max =3D pci_mem_end;
> >      io_resource.base =3D 0xc000;
> >      io_resource.max =3D 0x10000;
> >
> > +    mmio_left = pci_mem_end - pci_mem_start;
> > +
> >      /* Assign iomem and ioport resources in descending order of size. */
> >      for ( i = 0; i < nr_bars; i++ )
> >      {
> > @@ -230,13 +257,21 @@ void pci_setup(void)
> >          bar_reg = bars[i].bar_reg;
> >          bar_sz  = bars[i].bar_sz;
> >
> > +        using_64bar = bars[i].is_64bar && bar64_relocate && (mmio_left < bar_sz);
> >          bar_data = pci_readl(devfn, bar_reg);
> >
> >          if ( (bar_data & PCI_BASE_ADDRESS_SPACE) ==
> >               PCI_BASE_ADDRESS_SPACE_MEMORY )
> >          {
> > -            resource = &mem_resource;
> > -            bar_data &= ~PCI_BASE_ADDRESS_MEM_MASK;
> > +            if (using_64bar) {
> > +                resource = &high_mem_resource;
> > +                bar_data &= ~PCI_BASE_ADDRESS_MEM_MASK;
> > +            }
> > +            else {
> > +                resource = &mem_resource;
> > +                bar_data &= ~PCI_BASE_ADDRESS_MEM_MASK;
> > +            }
> > +            mmio_left -= bar_sz;
> >          }
> >          else
> >          {
> > @@ -244,13 +279,14 @@ void pci_setup(void)
> >              bar_data &= ~PCI_BASE_ADDRESS_IO_MASK;
> >          }
> >
> > -        base = (resource->base + bar_sz - 1) & ~(bar_sz - 1);
> > -        bar_data |= base;
> > +        base = (resource->base  + bar_sz - 1) & ~(uint64_t)(bar_sz - 1);
> > +        bar_data |= (uint32_t)base;
> > +        bar_data_upper = (uint32_t)(base >> 32);
> >          base += bar_sz;
> >
> >          if ( (base < resource->base) || (base > resource->max) )
> >          {
> > -            printf("pci dev %02x:%x bar %02x size %08x: no space for "
> > +            printf("pci dev %02x:%x bar %02x size %llx: no space for "
> >                     "resource!\n", devfn>>3, devfn&7, bar_reg, bar_sz);
> >              continue;
> >          }
> > @@ -258,7 +294,9 @@ void pci_setup(void)
> >          resource->base = base;
> >
> >          pci_writel(devfn, bar_reg, bar_data);
> > -        printf("pci dev %02x:%x bar %02x size %08x: %08x\n",
> > +        if (using_64bar)
> > +            pci_writel(devfn, bar_reg + 4, bar_data_upper);
> > +        printf("pci dev %02x:%x bar %02x size %llx: %08x\n",
> >                 devfn>>3, devfn&7, bar_reg, bar_sz, bar_data);
> >
> >          /* Now enable the memory or I/O mapping. */
> >
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
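The sizing sequence in the patch above (save the BAR, write all-ones, read back the size mask, restore, then combine both halves of a 64-bit BAR and isolate the lowest set bit) can be sketched in plain C against a mocked device. Everything here is invented for illustration: `cfg_space`, `BAR_SIZE`, and these two-argument-free `pci_readl`/`pci_writel` stand in for hvmloader's real config-space accessors.

```c
#include <assert.h>
#include <stdint.h>

/* Mock 64-bit memory BAR decoding a 256 MiB region: when probed with
 * all-ones, the device hard-wires the size bits and returns the mask. */
static uint32_t cfg_space[2];            /* BAR low and high halves */
#define BAR_SIZE (256u << 20)

static uint32_t pci_readl(unsigned reg)
{
    if (cfg_space[reg] == ~0u)           /* currently holds the probe value */
        return reg == 0 ? ((~(BAR_SIZE - 1)) | 0x4u) /* 64-bit mem type */
                        : 0xffffffffu;   /* upper half: all mask bits set */
    return cfg_space[reg];
}

static void pci_writel(unsigned reg, uint32_t val) { cfg_space[reg] = val; }

/* Probe a 64-bit memory BAR and return its size, as the patch does. */
uint64_t probe_bar64_size(void)
{
    uint32_t lo = pci_readl(0), hi = pci_readl(1);
    uint64_t sz;

    pci_writel(0, ~0u);
    pci_writel(1, ~0u);
    sz = ((uint64_t)pci_readl(1) << 32) | pci_readl(0);
    pci_writel(0, lo);                   /* restore original contents */
    pci_writel(1, hi);

    sz &= 0xfffffffffffffff0ull;         /* strip memory-BAR flag bits */
    return sz & ~(sz - 1);               /* lowest set bit == BAR size */
}
```

Isolating the lowest set bit also quietly rounds an oddly wired mask down to a power of two, which is why the patch can move the `bar_sz == 0` check after the masking.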

From xen-devel-bounces@lists.xen.org Wed Aug 15 09:21:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 09:21:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1Zmg-0001Ij-FX; Wed, 15 Aug 2012 09:21:02 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T1Zme-0001IZ-AY
	for xen-devel@lists.xen.org; Wed, 15 Aug 2012 09:21:00 +0000
Received: from [85.158.143.35:6674] by server-3.bemta-4.messagelabs.com id
	2F/43-09529-BF96B205; Wed, 15 Aug 2012 09:20:59 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1345022458!12321234!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31086 invoked from network); 15 Aug 2012 09:20:58 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-11.tower-21.messagelabs.com with SMTP;
	15 Aug 2012 09:20:58 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 15 Aug 2012 10:20:58 +0100
Message-Id: <502B86420200007800095048@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Wed, 15 Aug 2012 10:21:38 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Xudong Hao" <xudong.hao@intel.com>
References: <1345013831-20662-1-git-send-email-xudong.hao@intel.com>
In-Reply-To: <1345013831-20662-1-git-send-email-xudong.hao@intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: tim@xen.org, Xiantao Zhang <xiantao.zhang@intel.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 2/3] xen/p2m: Using INVALID_MFN instead of
 mfn_valid
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 15.08.12 at 08:57, Xudong Hao <xudong.hao@intel.com> wrote:
> A big 64-bit BAR's MMIO address may lie beyond the highest gfn, in which
> case mfn_valid may fail, so check against INVALID_MFN instead.

Hmm, that can be true for 32-bit BARs too (on systems with less
than 4Gb).

> Signed-off-by: Xiantao Zhang <xiantao.zhang@intel.com>
> Signed-off-by: Xudong Hao <xudong.hao@intel.com>
> 
> diff -r 663eb766cdde xen/arch/x86/mm/p2m-ept.c
> --- a/xen/arch/x86/mm/p2m-ept.c	Tue Jul 24 17:02:04 2012 +0200
> +++ b/xen/arch/x86/mm/p2m-ept.c	Thu Jul 26 15:40:01 2012 +0800
> @@ -428,7 +428,7 @@ ept_set_entry(struct p2m_domain *p2m, un
>      }
>  
>      /* Track the highest gfn for which we have ever had a valid mapping */
> -    if ( mfn_valid(mfn_x(mfn)) &&
> +    if ( (mfn_x(mfn) != INVALID_MFN) &&
>           (gfn + (1UL << order) - 1 > p2m->max_mapped_pfn) )
>          p2m->max_mapped_pfn = gfn + (1UL << order) - 1;

Depending on how the above comment gets addressed (i.e.
whether MMIO MFNs are to be considered here at all), this
might need changing anyway, as a huge max_mapped_pfn
value likely wouldn't be very useful anymore.

Jan



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
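The hunk under discussion only swaps the guard on the bookkeeping: instead of requiring the MFN to be covered by the frame table (`mfn_valid`), it merely excludes the "no frame" sentinel. A minimal standalone model of that update, with `INVALID_MFN` defined locally as all-ones (as Xen does) and a toy `p2m` struct in place of Xen's, looks like:

```c
#include <assert.h>

#define INVALID_MFN (~0UL)

struct p2m {
    unsigned long max_mapped_pfn;   /* highest gfn ever validly mapped */
};

/* Track the highest gfn for which we have ever had a valid mapping.
 * MMIO MFNs above the host's top RAM frame fail mfn_valid() but are
 * still real mappings, so only the sentinel is skipped. */
void track_max_mapped(struct p2m *p2m, unsigned long gfn,
                      unsigned long mfn, unsigned int order)
{
    if (mfn != INVALID_MFN &&
        gfn + (1UL << order) - 1 > p2m->max_mapped_pfn)
        p2m->max_mapped_pfn = gfn + (1UL << order) - 1;
}
```

Jan's caveat above is visible in this model too: mapping a huge-gfn MMIO range drags `max_mapped_pfn` far past real RAM, which weakens it as a bound.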

From xen-devel-bounces@lists.xen.org Wed Aug 15 09:32:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 09:32:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1ZxH-0001Vf-KD; Wed, 15 Aug 2012 09:31:59 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@eu.citrix.com>) id 1T1ZxG-0001Va-CK
	for xen-devel@lists.xensource.com; Wed, 15 Aug 2012 09:31:58 +0000
X-Env-Sender: George.Dunlap@eu.citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1345023110!1983503!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzE1NDQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23943 invoked from network); 15 Aug 2012 09:31:52 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Aug 2012 09:31:52 -0000
X-IronPort-AV: E=Sophos;i="4.77,772,1336363200"; d="scan'208";a="205231570"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	15 Aug 2012 05:31:49 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 15 Aug 2012 05:31:49 -0400
Received: from elijah.uk.xensource.com ([10.80.2.24] helo=[127.0.1.1])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1T1Zx7-0004cK-9Z;
	Wed, 15 Aug 2012 10:31:49 +0100
MIME-Version: 1.0
X-Mercurial-Node: 0982bad392e4f96fb39a025d6528c33be32c6c04
Message-ID: <0982bad392e4f96fb39a.1345022903@elijah>
User-Agent: Mercurial-patchbomb/2.0.2
Date: Wed, 15 Aug 2012 10:28:23 +0100
From: George Dunlap <george.dunlap@eu.citrix.com>
To: xen-devel@lists.xensource.com
Cc: george.dunlap@eu.citrix.com
Subject: [Xen-devel] [PATCH] xl: Suppress spurious warning message for
	cpupool-list
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

# HG changeset patch
# User George Dunlap <george.dunlap@eu.citrix.com>
# Date 1345022863 -3600
# Node ID 0982bad392e4f96fb39a025d6528c33be32c6c04
# Parent  dc56a9defa30312a46cfb6ddb578e64cfbc6bc8b
xl: Suppress spurious warning message for cpupool-list

libxl_cpupool_list() enumerates the cpupools by "probing": calling
cpupool_info, starting at 0 and stopping when it gets an error. However,
cpupool_info will print an error when the call to xc_cpupool_getinfo() fails,
resulting in every xl command that uses libxl_list_cpupool (such as
cpupool-list) printing that error message spuriously.

This patch adds a "probe" argument to cpupool_info(). If set, it won't log
an error when xc_cpupool_getinfo() fails with ENOENT.

Signed-off-by: George Dunlap <george.dunlap@eu.citrix.com>

diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -583,7 +583,8 @@ int libxl_domain_info(libxl_ctx *ctx, li
 static int cpupool_info(libxl__gc *gc,
                         libxl_cpupoolinfo *info,
                         uint32_t poolid,
-                        bool exact /* exactly poolid or >= poolid */)
+                        bool exact /* exactly poolid or >= poolid */,
+                        bool probe /* Don't complain for non-existent pools */)
 {
     xc_cpupoolinfo_t *xcinfo;
     int rc = ERROR_FAIL;
@@ -591,7 +592,8 @@ static int cpupool_info(libxl__gc *gc,
     xcinfo = xc_cpupool_getinfo(CTX->xch, poolid);
     if (xcinfo == NULL)
     {
-        LOGE(ERROR, "failed to get info for cpupool%d\n", poolid);
+        if (!probe || errno != ENOENT)
+            LOGE(ERROR, "failed to get info for cpupool%d\n", poolid);
         return ERROR_FAIL;
     }
 
@@ -623,7 +625,7 @@ int libxl_cpupool_info(libxl_ctx *ctx,
                        libxl_cpupoolinfo *info, uint32_t poolid)
 {
     GC_INIT(ctx);
-    int rc = cpupool_info(gc, info, poolid, true);
+    int rc = cpupool_info(gc, info, poolid, true, false);
     GC_FREE;
     return rc;
 }
@@ -639,7 +641,7 @@ libxl_cpupoolinfo * libxl_list_cpupool(l
 
     poolid = 0;
     for (i = 0;; i++) {
-        if (cpupool_info(gc, &info, poolid, false))
+        if (cpupool_info(gc, &info, poolid, false, true))
             break;
         tmp = realloc(ptr, (i + 1) * sizeof(libxl_cpupoolinfo));
         if (!tmp) {

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
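The enumeration pattern the patch describes (ask for pool 0, 1, 2, ... until the lookup fails, treating ENOENT as the normal end-of-scan rather than an error) can be modelled with a stub lookup function. `fake_getinfo` and `NR_POOLS` are invented for the example; they stand in for `xc_cpupool_getinfo()` and the hypervisor's pool set.

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

#define NR_POOLS 3

/* Stub for xc_cpupool_getinfo(): pools 0..NR_POOLS-1 exist. */
static const int *fake_getinfo(unsigned poolid)
{
    static const int pools[NR_POOLS] = { 10, 11, 12 };
    if (poolid >= NR_POOLS) {
        errno = ENOENT;   /* "no such pool": expected during a probe */
        return NULL;
    }
    return &pools[poolid];
}

/* Enumerate by probing; ENOENT ends the scan silently, while any
 * other errno would be worth reporting (not modelled here). */
unsigned count_pools(void)
{
    unsigned n = 0;
    while (fake_getinfo(n) != NULL)
        n++;
    return n;
}
```

The probe flag in the real patch exists precisely because the terminating ENOENT here is expected, so logging it on every scan would be noise.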

From xen-devel-bounces@lists.xen.org Wed Aug 15 09:42:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 09:42:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1a7K-0001gl-6K; Wed, 15 Aug 2012 09:42:22 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T1a7I-0001gg-AH
	for Xen-devel@lists.xensource.com; Wed, 15 Aug 2012 09:42:20 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1345023733!6944560!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkxOTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22542 invoked from network); 15 Aug 2012 09:42:13 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Aug 2012 09:42:13 -0000
X-IronPort-AV: E=Sophos;i="4.77,772,1336348800"; d="scan'208";a="14017170"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	15 Aug 2012 09:42:13 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 15 Aug 2012 10:42:13 +0100
Message-ID: <1345023731.5926.149.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Wed, 15 Aug 2012 10:42:11 +0100
In-Reply-To: <502B77BC0200007800094FF2@nat28.tlf.novell.com>
References: <20120626181707.4203d336@mantra.us.oracle.com>
	<CAFLBxZagm=53bZ=KaWmd7XQkkAduD+znECJRsaJUqrGJDZgkQQ@mail.gmail.com>
	<20120801153439.3f81c923@mantra.us.oracle.com>
	<501A4E0C.1090509@eu.citrix.com>
	<20120813151446.22ae85b5@mantra.us.oracle.com>
	<alpine.DEB.2.02.1208141144070.21096@kaball.uk.xensource.com>
	<20120814103827.7dcf55f1@mantra.us.oracle.com>
	<alpine.DEB.2.02.1208141842460.21096@kaball.uk.xensource.com>
	<20120814105157.044a755e@mantra.us.oracle.com>
	<502B77BC0200007800094FF2@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	Keir Fraser <keir.xen@gmail.com>,
	"Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [HYBRID]: status update...
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-08-15 at 09:19 +0100, Jan Beulich wrote:
> >>> On 14.08.12 at 19:51, Mukesh Rathor <mukesh.rathor@oracle.com> wrote:
> > BTW, being a good hybrid as it is, it uses fields from both pv_domain
> > and hvm_domain structs. Combining into a union created difficulties for
> > me. I experimented creating a new struct, hyb_domain, or adding hvm
> > related fields to pv_domain struct for hybrid, but both involved way too
> > much code change. So back to having them separated again. LMK if there
> > are any objections to undoing the union.
> 
> I suppose there are going to be fields that are used exclusively
> for PV or HVM, and if so I'd like them to be retained as a union
> as far as possible.

I guess there will be some subset of fields which will be specific to
the use of VT-d/SVM generally but not specifically to the emulation of a
full PC ("HVM") or hybrid mode. I don't know what proportion that would
be but it might be worth splitting along those lines, e.g. pull HW state
out into its own sub-struct and have a union of the true pv-, hvm- and
pvh-only fields?

>  The main point of the change was to shrink
> the size of struct vcpu (which is required to fit into a page,
> which you will need to make sure continues to be the case even
> with all sorts of debugging options turned on).
> 
> Jan
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
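Ian's suggestion, pulling the shared hardware state out into its own sub-struct and keeping only the truly mode-exclusive fields in a union, can be sketched with invented names (`hw_state`, `pv`, `hvm`, `pvh` and their fields are illustrative, not Xen's actual layout):

```c
#include <assert.h>
#include <stdint.h>

/* Fields needed whenever VT-d/SVM hardware assistance is in use,
 * regardless of whether the guest is PV, HVM or hybrid/PVH. */
struct hw_state {
    uint64_t vmcs_addr;
    uint32_t asid;
};

/* Mode-exclusive fields share storage: a domain is only ever one of
 * PV, HVM or PVH, and the union is what keeps the structure small
 * (Jan's point about struct vcpu fitting into a page). */
struct arch_domain_sketch {
    struct hw_state hw;
    union {
        struct { uint64_t gdt_frames[16]; } pv;   /* PV-only state */
        struct { uint64_t ioreq_gfn; }      hvm;  /* HVM-only state */
        struct { uint64_t start_info; }     pvh;  /* PVH-only state */
    } u;
};
```

The union costs only as much as its largest member, so adding a small `pvh` arm is effectively free compared with duplicating fields side by side.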

From xen-devel-bounces@lists.xen.org Wed Aug 15 09:54:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 09:54:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1aIl-0001sJ-Co; Wed, 15 Aug 2012 09:54:11 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T1aIk-0001sB-6B
	for Xen-devel@lists.xensource.com; Wed, 15 Aug 2012 09:54:10 +0000
Received: from [85.158.143.99:47200] by server-2.bemta-4.messagelabs.com id
	8B/82-31966-1C17B205; Wed, 15 Aug 2012 09:54:09 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1345024448!21243342!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9147 invoked from network); 15 Aug 2012 09:54:09 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-6.tower-216.messagelabs.com with SMTP;
	15 Aug 2012 09:54:09 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 15 Aug 2012 10:54:08 +0100
Message-Id: <502B8E0802000078000950A9@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Wed, 15 Aug 2012 10:54:48 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>
References: <20120626181707.4203d336@mantra.us.oracle.com>
	<CAFLBxZagm=53bZ=KaWmd7XQkkAduD+znECJRsaJUqrGJDZgkQQ@mail.gmail.com>
	<20120801153439.3f81c923@mantra.us.oracle.com>
	<501A4E0C.1090509@eu.citrix.com>
	<20120813151446.22ae85b5@mantra.us.oracle.com>
	<alpine.DEB.2.02.1208141144070.21096@kaball.uk.xensource.com>
	<20120814103827.7dcf55f1@mantra.us.oracle.com>
	<alpine.DEB.2.02.1208141842460.21096@kaball.uk.xensource.com>
	<20120814105157.044a755e@mantra.us.oracle.com>
	<502B77BC0200007800094FF2@nat28.tlf.novell.com>
	<1345023731.5926.149.camel@zakaz.uk.xensource.com>
In-Reply-To: <1345023731.5926.149.camel@zakaz.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	Keir Fraser <keir.xen@gmail.com>,
	"Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [HYBRID]: status update...
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 15.08.12 at 11:42, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Wed, 2012-08-15 at 09:19 +0100, Jan Beulich wrote:
>> >>> On 14.08.12 at 19:51, Mukesh Rathor <mukesh.rathor@oracle.com> wrote:
>> > BTW, being a good hybrid as it is, it uses fields from both pv_domain
>> > and hvm_domain structs. Combining into a union created difficulties for
>> > me. I experimented creating a new struct, hyb_domain, or adding hvm
>> > related fields to pv_domain struct for hybrid, but both involved way too
>> > much code change. So back to having them separated again. LMK if there
>> > any objections undoing the union.
>> 
>> I suppose there are going to be fields that are used exclusively
>> for PV or HVM, and if so I'd like them to be retained as a union
>> as far as possible.
> 
> I guess there will be some subset of fields which will be specific to
> the use of VT-d/SVM generally but not specifically to the emulation of a
> full PC ("HVM") or hybrid mode. I don't know what proportion that would
> be but it might be worth splitting along those lines, e.g. pull HW state
> out into its own sub-struct and have a union of the true pv-, hvm- and
> pvh-only fields?

Yes.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 15 10:19:24 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 10:19:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1agk-00028E-IG; Wed, 15 Aug 2012 10:18:58 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1T1agi-000289-NI
	for xen-devel@lists.xen.org; Wed, 15 Aug 2012 10:18:57 +0000
Received: from [85.158.138.51:43883] by server-5.bemta-3.messagelabs.com id
	10/29-08865-F877B205; Wed, 15 Aug 2012 10:18:55 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-6.tower-174.messagelabs.com!1345025934!20350365!1
X-Originating-IP: [192.89.123.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjg5LjEyMy4yNSA9PiA0NzkwODU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32278 invoked from network); 15 Aug 2012 10:18:54 -0000
Received: from smtp.tele.fi (HELO smtp.tele.fi) (192.89.123.25)
	by server-6.tower-174.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 15 Aug 2012 10:18:54 -0000
X-Originating-Ip: [194.89.68.22]
Received: from ydin.reaktio.net (reaktio.net [194.89.68.22])
	by smtp.tele.fi (Postfix) with ESMTP id 7FF5B28DD;
	Wed, 15 Aug 2012 13:18:53 +0300 (EEST)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id 6052A2005D; Wed, 15 Aug 2012 13:18:53 +0300 (EEST)
Date: Wed, 15 Aug 2012 13:18:53 +0300
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: Keir Fraser <keir@xen.org>
Message-ID: <20120815101853.GD19851@reaktio.net>
References: <20120813220734.GU19851@reaktio.net> <CC511F70.490FB%keir@xen.org>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CC511F70.490FB%keir@xen.org>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: olaf@aepfle.de, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen 4.2.0-rc2: make tools fails on Fedora 17,
 ipxe build problems with gcc 4.7
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Aug 15, 2012 at 09:42:08AM +0100, Keir Fraser wrote:
> On 13/08/2012 23:07, "Pasi Kärkkäinen" <pasik@iki.fi> wrote:
> 
> > Hello again,
> > 
> > So to be able to build Xen 4.2.0-rc2 on Fedora 17 with gcc 4.7
> > I had to apply these three patches to ipxe:
> > 
> > http://lists.ipxe.org/pipermail/ipxe-devel/2012-March/001279.html
> > http://lists.ipxe.org/pipermail/ipxe-devel/2012-March/001280.html
> > http://permalink.gmane.org/gmane.network.ipxe.devel/1216
> 
> I've checked these into etherboot/patches/ in xen-unstable.hg.
> 
>  -- Keir
> 

Thanks!

-- Pasi


> > -- Pasi
> > 
> > 
> > On Tue, Aug 14, 2012 at 12:44:49AM +0300, Pasi Kärkkäinen wrote:
> >> On Tue, Aug 14, 2012 at 12:39:03AM +0300, Pasi Kärkkäinen wrote:
> >>> On Tue, Aug 14, 2012 at 12:35:58AM +0300, Pasi Kärkkäinen wrote:
> >>>> On Mon, Aug 13, 2012 at 11:39:50PM +0300, Pasi Kärkkäinen wrote:
> >>>>> Hello,
> >>>>> 
> >>>>> I just grabbed
> >>>>> http://bits.xensource.com/oss-xen/release/4.2.0-rc2/xen-4.2.0-rc2.tar.gz
> >>>>> and tried to build it on Fedora 17 x86_64 host (gcc 4.7.0):
> >>>>> 
> >>>>> make tools:
> >>>>> 
> >>>>> ..
> >>>>> make[5]: Entering directory `/root/xen/xen-4.2.0-rc2/tools/firmware'
> >>>>> make -C etherboot all
> >>>>> make[6]: Entering directory
> >>>>> `/root/xen/xen-4.2.0-rc2/tools/firmware/etherboot'
> >>>>> make -C ipxe/src bin/rtl8139.rom
> >>>>> make[7]: Entering directory
> >>>>> `/root/xen/xen-4.2.0-rc2/tools/firmware/etherboot/ipxe/src'
> >>>>>   [BUILD] bin/isa.o
> >>>>> drivers/bus/isa.c: In function 'isabus_probe':
> >>>>> drivers/bus/isa.c:112:18: error: array subscript is above array bounds
> >>>>> [-Werror=array-bounds]
> >>>>> cc1: all warnings being treated as errors
> >>>>> make[7]: *** [bin/isa.o] Error 1
> >>>>> make[7]: Leaving directory
> >>>>> `/root/xen/xen-4.2.0-rc2/tools/firmware/etherboot/ipxe/src'
> >>>>> 
> >>>> 
> >>>> Ok the patch from Olaf is here:
> >>>> http://lists.ipxe.org/pipermail/ipxe-devel/2012-March/001279.html
> >>>> 
> >>>> Should we apply that patch to xen-unstable for Xen 4.2 ?
> >>>> 
> >>> 
> >>> And then there's the next build problem:
> >>> 
> >>>   [BUILD] bin/myri10ge.o
> >>> drivers/net/myri10ge.c: In function 'myri10ge_command':
> >>> drivers/net/myri10ge.c:308:3: error: dereferencing type-punned pointer will
> >>> break strict-aliasing rules [-Werror=strict-aliasing]
> >>> drivers/net/myri10ge.c:310:2: error: dereferencing type-punned pointer will
> >>> break strict-aliasing rules [-Werror=strict-aliasing]
> >>> cc1: all warnings being treated as errors
> >>> make[7]: *** [bin/myri10ge.o] Error 1
> >>> make[7]: Leaving directory
> >>> `/root/xen/xen-4.2.0-rc2/tools/firmware/etherboot/ipxe/src'
> >>> 
> >>> 
> >>> And patch from Olaf here:
> >>> http://lists.ipxe.org/pipermail/ipxe-devel/2012-March/001280.html
> >>> 
> >> 
> >> And the third build problem:
> >> 
> >>   [BUILD] bin/qib7322.o
> >> drivers/infiniband/qib7322.c: In function 'qib7322_probe':
> >> drivers/infiniband/qib7322.c:2141:28: error: 'old_value' may be used
> >> uninitialized in this function [-Werror=maybe-uninitialized]
> >> drivers/infiniband/qib7322.c:2123:11: note: 'old_value' was declared here
> >> cc1: all warnings being treated as errors
> >> make[7]: *** [bin/qib7322.o] Error 1
> >> make[7]: Leaving directory
> >> `/root/xen/xen-4.2.0-rc2/tools/firmware/etherboot/ipxe/src'
> >> 
> >> 
> >> And patch from Christian Hesse here:
> >> http://permalink.gmane.org/gmane.network.ipxe.devel/1216
> >> 
> >> 
> >> -- Pasi
> >> 
> >> 
> >> _______________________________________________
> >> Xen-devel mailing list
> >> Xen-devel@lists.xen.org
> >> http://lists.xen.org/xen-devel
> > 
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 15 10:21:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 10:21:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1ajA-0002Fj-8w; Wed, 15 Aug 2012 10:21:28 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1T1aj8-0002FZ-Om
	for xen-devel@lists.xen.org; Wed, 15 Aug 2012 10:21:26 +0000
Received: from [85.158.139.83:11336] by server-7.bemta-5.messagelabs.com id
	DF/8C-32634-5287B205; Wed, 15 Aug 2012 10:21:25 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-6.tower-182.messagelabs.com!1345026084!24369682!1
X-Originating-IP: [192.89.123.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjg5LjEyMy4yNSA9PiA0NzkwODU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27456 invoked from network); 15 Aug 2012 10:21:25 -0000
Received: from smtp.tele.fi (HELO smtp.tele.fi) (192.89.123.25)
	by server-6.tower-182.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 15 Aug 2012 10:21:25 -0000
X-Originating-Ip: [194.89.68.22]
Received: from ydin.reaktio.net (reaktio.net [194.89.68.22])
	by smtp.tele.fi (Postfix) with ESMTP id 579A31B52;
	Wed, 15 Aug 2012 13:21:24 +0300 (EEST)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id 278722005D; Wed, 15 Aug 2012 13:21:24 +0300 (EEST)
Date: Wed, 15 Aug 2012 13:21:24 +0300
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: "Hao, Xudong" <xudong.hao@intel.com>
Message-ID: <20120815102124.GE19851@reaktio.net>
References: <1345013682-20618-1-git-send-email-xudong.hao@intel.com>
	<20120815081846.GC19851@reaktio.net>
	<403610A45A2B5242BD291EDAE8B37D300FE89C5F@SHSMSX102.ccr.corp.intel.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <403610A45A2B5242BD291EDAE8B37D300FE89C5F@SHSMSX102.ccr.corp.intel.com>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: "ian.jackson@eu.citrix.com" <ian.jackson@eu.citrix.com>, "Zhang,
	Xiantao" <xiantao.zhang@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 1/3] xen/tools: Add 64 bits big bar support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Aug 15, 2012 at 09:17:51AM +0000, Hao, Xudong wrote:
> 
> > -----Original Message-----
> > From: Pasi Kärkkäinen [mailto:pasik@iki.fi]
> > Sent: Wednesday, August 15, 2012 4:19 PM
> > To: Hao, Xudong
> > Cc: xen-devel@lists.xen.org; ian.jackson@eu.citrix.com; Zhang, Xiantao
> > Subject: Re: [Xen-devel] [PATCH 1/3] xen/tools: Add 64 bits big bar support
> > 
> > On Wed, Aug 15, 2012 at 02:54:41PM +0800, Xudong Hao wrote:
> > > Currently it is assumed PCI device BAR access < 4G memory. If there is such a
> > > device whose BAR size is larger than 4G, it must access > 4G memory
> > address.
> > > This patch enable the 64bits big BAR support on hvmloader.
> > >
> > 
> > Hello,
> > 
> > Do you have an example of such a PCI device with >4G BAR?
> > 
> 
> We have a standard PCE-e device which integrated Intel MIC program, this device has 16G size bar. If we want to passthrough such a device to guest, current Xen can't support it, so we want Xen has this big bar support and make this patch.
> 
> For Intel MIC program technology, you can refer to http://openlab.web.cern.ch/sites/openlab.web.cern.ch/files/presentations/KNC_ISA_Overview_Apr2012_SJ_New_V4.pdf  or 
> http://openlab.web.cern.ch/sites/openlab.web.cern.ch/files/presentations/2012.04.25%20Andrzej%20Nowak%20-%20An%20overview%20of%20Intel%20MIC%20-%20technology,%20hardware%20and%20software%20v3.pdf
> 

Interesting, thanks!

-- Pasi


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 15 10:21:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 10:21:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1ajA-0002Fq-Ky; Wed, 15 Aug 2012 10:21:28 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T1aj9-0002Fa-CP
	for xen-devel@lists.xen.org; Wed, 15 Aug 2012 10:21:27 +0000
Received: from [85.158.143.99:24869] by server-3.bemta-4.messagelabs.com id
	48/5D-09529-6287B205; Wed, 15 Aug 2012 10:21:26 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-15.tower-216.messagelabs.com!1345026084!28262549!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkxOTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30377 invoked from network); 15 Aug 2012 10:21:24 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Aug 2012 10:21:24 -0000
X-IronPort-AV: E=Sophos;i="4.77,772,1336348800"; d="scan'208";a="14017902"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	15 Aug 2012 10:20:47 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 15 Aug 2012 11:20:47 +0100
Date: Wed, 15 Aug 2012 11:20:32 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Xudong Hao <xudong.hao@intel.com>
In-Reply-To: <1345013682-20618-1-git-send-email-xudong.hao@intel.com>
Message-ID: <alpine.DEB.2.02.1208151118270.2278@kaball.uk.xensource.com>
References: <1345013682-20618-1-git-send-email-xudong.hao@intel.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Xiantao Zhang <xiantao.zhang@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 1/3] xen/tools: Add 64 bits big bar support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 15 Aug 2012, Xudong Hao wrote:
> Currently it is assumed that PCI device BARs are accessed below 4G. If there
> is a device whose BAR size is larger than 4G, it must be accessed above 4G.
> This patch enables 64-bit big BAR support in hvmloader.
> 
> Signed-off-by: Xiantao Zhang <xiantao.zhang@intel.com>
> Signed-off-by: Xudong Hao <xudong.hao@intel.com>

It is very good to see that this problem has been solved!

Considering that at this point it is too late for the 4.2 release cycle,
it might be worth spinning a version of these patches for SeaBIOS and
upstream QEMU, which now supports PCI passthrough.



> diff -r 663eb766cdde tools/firmware/hvmloader/config.h
> --- a/tools/firmware/hvmloader/config.h	Tue Jul 24 17:02:04 2012 +0200
> +++ b/tools/firmware/hvmloader/config.h	Thu Jul 26 15:40:01 2012 +0800
> @@ -53,6 +53,10 @@ extern struct bios_config ovmf_config;
>  /* MMIO hole: Hardcoded defaults, which can be dynamically expanded. */
>  #define PCI_MEM_START       0xf0000000
>  #define PCI_MEM_END         0xfc000000
> +#define PCI_HIGH_MEM_START  0xa000000000ULL
> +#define PCI_HIGH_MEM_END    0xf000000000ULL
> +#define PCI_MIN_MMIO_ADDR   0x80000000
> +
>  extern unsigned long pci_mem_start, pci_mem_end;
>  
>  
> diff -r 663eb766cdde tools/firmware/hvmloader/pci.c
> --- a/tools/firmware/hvmloader/pci.c	Tue Jul 24 17:02:04 2012 +0200
> +++ b/tools/firmware/hvmloader/pci.c	Thu Jul 26 15:40:01 2012 +0800
> @@ -31,24 +31,33 @@
>  unsigned long pci_mem_start = PCI_MEM_START;
>  unsigned long pci_mem_end = PCI_MEM_END;
>  
> +uint64_t pci_high_mem_start = PCI_HIGH_MEM_START;
> +uint64_t pci_high_mem_end = PCI_HIGH_MEM_END;
> +
>  enum virtual_vga virtual_vga = VGA_none;
>  unsigned long igd_opregion_pgbase = 0;
>  
>  void pci_setup(void)
>  {
> -    uint32_t base, devfn, bar_reg, bar_data, bar_sz, cmd, mmio_total = 0;
> +    uint8_t is_64bar, using_64bar, bar64_relocate = 0;
> +    uint32_t devfn, bar_reg, cmd, bar_data, bar_data_upper;
> +    uint64_t base, bar_sz, bar_sz_upper, mmio_total = 0;
>      uint32_t vga_devfn = 256;
>      uint16_t class, vendor_id, device_id;
>      unsigned int bar, pin, link, isa_irq;
> +    int64_t mmio_left;
>  
>      /* Resources assignable to PCI devices via BARs. */
>      struct resource {
> -        uint32_t base, max;
> -    } *resource, mem_resource, io_resource;
> +        uint64_t base, max;
> +    } *resource, mem_resource, high_mem_resource, io_resource;
>  
>      /* Create a list of device BARs in descending order of size. */
>      struct bars {
> -        uint32_t devfn, bar_reg, bar_sz;
> +        uint32_t is_64bar;
> +        uint32_t devfn;
> +        uint32_t bar_reg;
> +        uint64_t bar_sz;
>      } *bars = (struct bars *)scratch_start;
>      unsigned int i, nr_bars = 0;
>  
> @@ -133,23 +142,35 @@ void pci_setup(void)
>          /* Map the I/O memory and port resources. */
>          for ( bar = 0; bar < 7; bar++ )
>          {
> +            bar_sz_upper = 0;
>              bar_reg = PCI_BASE_ADDRESS_0 + 4*bar;
>              if ( bar == 6 )
>                  bar_reg = PCI_ROM_ADDRESS;
>  
>              bar_data = pci_readl(devfn, bar_reg);
> +            is_64bar = !!((bar_data & (PCI_BASE_ADDRESS_SPACE |
> +                         PCI_BASE_ADDRESS_MEM_TYPE_MASK)) ==
> +                         (PCI_BASE_ADDRESS_SPACE_MEMORY |
> +                         PCI_BASE_ADDRESS_MEM_TYPE_64));
>              pci_writel(devfn, bar_reg, ~0);
>              bar_sz = pci_readl(devfn, bar_reg);
>              pci_writel(devfn, bar_reg, bar_data);
> +
> +            if (is_64bar) {
> +                bar_data_upper = pci_readl(devfn, bar_reg + 4);
> +                pci_writel(devfn, bar_reg + 4, ~0);
> +                bar_sz_upper = pci_readl(devfn, bar_reg + 4);
> +                pci_writel(devfn, bar_reg + 4, bar_data_upper);
> +                bar_sz = (bar_sz_upper << 32) | bar_sz;
> +            }
> +            bar_sz &= (((bar_data & PCI_BASE_ADDRESS_SPACE) ==
> +                        PCI_BASE_ADDRESS_SPACE_MEMORY) ?
> +                       0xfffffffffffffff0 :
> +                       (PCI_BASE_ADDRESS_IO_MASK & 0xffff));
> +            bar_sz &= ~(bar_sz - 1);
>              if ( bar_sz == 0 )
>                  continue;
>  
> -            bar_sz &= (((bar_data & PCI_BASE_ADDRESS_SPACE) ==
> -                        PCI_BASE_ADDRESS_SPACE_MEMORY) ?
> -                       PCI_BASE_ADDRESS_MEM_MASK :
> -                       (PCI_BASE_ADDRESS_IO_MASK & 0xffff));
> -            bar_sz &= ~(bar_sz - 1);
> -
>              for ( i = 0; i < nr_bars; i++ )
>                  if ( bars[i].bar_sz < bar_sz )
>                      break;
> @@ -157,6 +178,7 @@ void pci_setup(void)
>              if ( i != nr_bars )
>                  memmove(&bars[i+1], &bars[i], (nr_bars-i) * sizeof(*bars));
>  
> +            bars[i].is_64bar = is_64bar;
>              bars[i].devfn   = devfn;
>              bars[i].bar_reg = bar_reg;
>              bars[i].bar_sz  = bar_sz;
> @@ -167,11 +189,8 @@ void pci_setup(void)
>  
>              nr_bars++;
>  
> -            /* Skip the upper-half of the address for a 64-bit BAR. */
> -            if ( (bar_data & (PCI_BASE_ADDRESS_SPACE |
> -                              PCI_BASE_ADDRESS_MEM_TYPE_MASK)) == 
> -                 (PCI_BASE_ADDRESS_SPACE_MEMORY | 
> -                  PCI_BASE_ADDRESS_MEM_TYPE_64) )
> +            /*The upper half is already calculated, skip it! */
> +            if (is_64bar)
>                  bar++;
>          }
>  
> @@ -193,10 +212,14 @@ void pci_setup(void)
>          pci_writew(devfn, PCI_COMMAND, cmd);
>      }
>  
> -    while ( (mmio_total > (pci_mem_end - pci_mem_start)) &&
> -            ((pci_mem_start << 1) != 0) )
> +    while ( mmio_total > (pci_mem_end - pci_mem_start) && pci_mem_start )
>          pci_mem_start <<= 1;
>  
> +    if (!pci_mem_start) {
> +        bar64_relocate = 1;
> +        pci_mem_start = PCI_MIN_MMIO_ADDR;
> +    }
> +
>      /* Relocate RAM that overlaps PCI space (in 64k-page chunks). */
>      while ( (pci_mem_start >> PAGE_SHIFT) < hvm_info->low_mem_pgend )
>      {
> @@ -218,11 +241,15 @@ void pci_setup(void)
>          hvm_info->high_mem_pgend += nr_pages;
>      }
>  
> +    high_mem_resource.base = pci_high_mem_start; 
> +    high_mem_resource.max = pci_high_mem_end;
>      mem_resource.base = pci_mem_start;
>      mem_resource.max = pci_mem_end;
>      io_resource.base = 0xc000;
>      io_resource.max = 0x10000;
>  
> +    mmio_left = pci_mem_end - pci_mem_start;
> +
>      /* Assign iomem and ioport resources in descending order of size. */
>      for ( i = 0; i < nr_bars; i++ )
>      {
> @@ -230,13 +257,21 @@ void pci_setup(void)
>          bar_reg = bars[i].bar_reg;
>          bar_sz  = bars[i].bar_sz;
>  
> +        using_64bar = bars[i].is_64bar && bar64_relocate && (mmio_left < bar_sz);
>          bar_data = pci_readl(devfn, bar_reg);
>  
>          if ( (bar_data & PCI_BASE_ADDRESS_SPACE) ==
>               PCI_BASE_ADDRESS_SPACE_MEMORY )
>          {
> -            resource = &mem_resource;
> -            bar_data &= ~PCI_BASE_ADDRESS_MEM_MASK;
> +            if (using_64bar) {
> +                resource = &high_mem_resource;
> +                bar_data &= ~PCI_BASE_ADDRESS_MEM_MASK;
> +            } 
> +            else {
> +                resource = &mem_resource;
> +                bar_data &= ~PCI_BASE_ADDRESS_MEM_MASK;
> +            }
> +            mmio_left -= bar_sz;
>          }
>          else
>          {
> @@ -244,13 +279,14 @@ void pci_setup(void)
>              bar_data &= ~PCI_BASE_ADDRESS_IO_MASK;
>          }
>  
> -        base = (resource->base + bar_sz - 1) & ~(bar_sz - 1);
> -        bar_data |= base;
> +        base = (resource->base  + bar_sz - 1) & ~(uint64_t)(bar_sz - 1);
> +        bar_data |= (uint32_t)base;
> +        bar_data_upper = (uint32_t)(base >> 32);
>          base += bar_sz;
>  
>          if ( (base < resource->base) || (base > resource->max) )
>          {
> -            printf("pci dev %02x:%x bar %02x size %08x: no space for "
> +            printf("pci dev %02x:%x bar %02x size %llx: no space for "
>                     "resource!\n", devfn>>3, devfn&7, bar_reg, bar_sz);
>              continue;
>          }
> @@ -258,7 +294,9 @@ void pci_setup(void)
>          resource->base = base;
>  
>          pci_writel(devfn, bar_reg, bar_data);
> -        printf("pci dev %02x:%x bar %02x size %08x: %08x\n",
> +        if (using_64bar)
> +            pci_writel(devfn, bar_reg + 4, bar_data_upper);
> +        printf("pci dev %02x:%x bar %02x size %llx: %08x\n",
>                 devfn>>3, devfn&7, bar_reg, bar_sz, bar_data);
>  
>          /* Now enable the memory or I/O mapping. */
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 15 10:33:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 10:33:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1auO-0002cK-Rv; Wed, 15 Aug 2012 10:33:04 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ben.guthro@gmail.com>) id 1T1auN-0002cD-Fe
	for xen-devel@lists.xen.org; Wed, 15 Aug 2012 10:33:03 +0000
Received: from [85.158.139.83:40979] by server-4.bemta-5.messagelabs.com id
	10/68-12386-EDA7B205; Wed, 15 Aug 2012 10:33:02 +0000
X-Env-Sender: ben.guthro@gmail.com
X-Msg-Ref: server-9.tower-182.messagelabs.com!1345026777!27549831!1
X-Originating-IP: [209.85.213.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18040 invoked from network); 15 Aug 2012 10:32:59 -0000
Received: from mail-yx0-f173.google.com (HELO mail-yx0-f173.google.com)
	(209.85.213.173)
	by server-9.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Aug 2012 10:32:59 -0000
Received: by yenm4 with SMTP id m4so1761039yen.32
	for <xen-devel@lists.xen.org>; Wed, 15 Aug 2012 03:32:57 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=Hsb5MjxID4XtMtNuxLkcQeLPzNIZBA8voK0+b0OEen4=;
	b=ebi84HwihCEjPHjD4lWZmpPXjvelRvQkDUJdCzSCr1lWcGCp4snNR+eNYhIL1UtDis
	CIHPO03g7RBw8i4fMECoJTS4NmeemP20HB7xi5ojCdJzbAamzd7ahlpyZog0me4F/bsy
	7zQyngRoycA4+/jFRdHveWbOTzLtjZsF4BIQXjEvsxHIsPxwB49nxSPA5QwUXOP5/L2c
	dZ3/G/jZMvepXbjUOB7mgtJLzvvf/xtI34po6sxUHNem8M5YbgTrcsg/cg5T9URu8wy0
	G0oZeMoPRzsyarYHE90S9B+9rUIKGFxtw2m/MaR/fiX2n6XmxlrkS3+3PvCsjD7nS+Fc
	Ho8A==
MIME-Version: 1.0
Received: by 10.43.46.194 with SMTP id up2mr14980454icb.22.1345026776864; Wed,
	15 Aug 2012 03:32:56 -0700 (PDT)
Received: by 10.64.33.15 with HTTP; Wed, 15 Aug 2012 03:32:56 -0700 (PDT)
In-Reply-To: <502B75BA0200007800094FD4@nat28.tlf.novell.com>
References: <CAOvdn6WwgBD-2yf1uu3m9-16=Tb5KUrybXf2KibtnxKbP7YtwA@mail.gmail.com>
	<CAOvdn6UBW-yTOMd3YSf5M9JiHLRivyaKL-qzcKG8epszqaKmGg@mail.gmail.com>
	<20120807163320.GC15053@phenom.dumpdata.com>
	<CAOvdn6XcRGKtJxGwJO8J+ZBJr89wfTLc1woW5rhtMM540H9h1w@mail.gmail.com>
	<CAOvdn6XUoSeGoo7X=AJ1kpDxZB9HZEv+TJ4v-=FHcyR4=CHK-w@mail.gmail.com>
	<502240F302000078000937A6@nat28.tlf.novell.com>
	<CAOvdn6WN3kxC8Bb9g6MpfLULu47EGSGCtajPc-SFeJ-8VSUQDQ@mail.gmail.com>
	<CAOvdn6Xk1inm1hU55i0phU2qvSWyJLNoyDdn43biBAY2OewPtw@mail.gmail.com>
	<5023F5400200007800093F4E@nat28.tlf.novell.com>
	<CAOvdn6U-f3C4U8xVPvDyRFkFFLkbfiCXg_n+9zgA9V5u+q5jfw@mail.gmail.com>
	<5023F8B80200007800093F8C@nat28.tlf.novell.com>
	<CAOvdn6XKHfete_+=Hkvt20G7_cJ-4i8+9j9YD30OxzQ16Ru-+g@mail.gmail.com>
	<5024CB720200007800094124@nat28.tlf.novell.com>
	<CAOvdn6VVJdhWVYa-gNVt3euE4AbGi1TijUCUd1wGzv0SCWEUYA@mail.gmail.com>
	<CAOvdn6USnG9Y4MNBhrDZmob0oJ8BgFfKTvFEhcKbXDxNmzfZ-Q@mail.gmail.com>
	<502B75BA0200007800094FD4@nat28.tlf.novell.com>
Date: Wed, 15 Aug 2012 06:32:56 -0400
X-Google-Sender-Auth: 93KSucfYQySznstAKVYb33CSIgY
Message-ID: <CAOvdn6Ww5oEj6Xg=XT4dWCYWUBuOdPqLz-CqNjz4HmCScE+==g@mail.gmail.com>
From: Ben Guthro <ben@guthro.net>
To: Jan Beulich <JBeulich@suse.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, john.baboval@citrix.com,
	Thomas Goetz <thomas.goetz@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen4.2 S3 regression?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Aug 15, 2012 at 4:11 AM, Jan Beulich <JBeulich@suse.com> wrote:

> First try collectively removing the three calls to
> evtchn_move_pirqs() in xen/common/schedule.c. If that helps,

Sadly, I tried this yesterday (against the tip) with no success.

I'm starting to question my bisecting results.

It is, of course, easy to know when it has failed; but since it doesn't
fail until sometime after the first, second, or third suspend/resume
cycle, it can be easy to misinterpret a failure that has not yet
occurred as a success.

I will retest this morning when I get to work with this changeset, and
its parent, to better verify my results.

I'll also try the evtchn_move_pirqs() removal against the changeset
above, to see if my results differ from when I did the same test
against the tip.

> see whether any smaller set also does. From the result of this,
> I'll have to think further, perhaps handing you a debugging
> patch.

I'm happy to test any debug patch.
I will report results from the tests above later this morning.

Ben

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	<5023F5400200007800093F4E@nat28.tlf.novell.com>
	<CAOvdn6U-f3C4U8xVPvDyRFkFFLkbfiCXg_n+9zgA9V5u+q5jfw@mail.gmail.com>
	<5023F8B80200007800093F8C@nat28.tlf.novell.com>
	<CAOvdn6XKHfete_+=Hkvt20G7_cJ-4i8+9j9YD30OxzQ16Ru-+g@mail.gmail.com>
	<5024CB720200007800094124@nat28.tlf.novell.com>
	<CAOvdn6VVJdhWVYa-gNVt3euE4AbGi1TijUCUd1wGzv0SCWEUYA@mail.gmail.com>
	<CAOvdn6USnG9Y4MNBhrDZmob0oJ8BgFfKTvFEhcKbXDxNmzfZ-Q@mail.gmail.com>
	<502B75BA0200007800094FD4@nat28.tlf.novell.com>
Date: Wed, 15 Aug 2012 06:32:56 -0400
X-Google-Sender-Auth: 93KSucfYQySznstAKVYb33CSIgY
Message-ID: <CAOvdn6Ww5oEj6Xg=XT4dWCYWUBuOdPqLz-CqNjz4HmCScE+==g@mail.gmail.com>
From: Ben Guthro <ben@guthro.net>
To: Jan Beulich <JBeulich@suse.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, john.baboval@citrix.com,
	Thomas Goetz <thomas.goetz@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen4.2 S3 regression?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Aug 15, 2012 at 4:11 AM, Jan Beulich <JBeulich@suse.com> wrote:

> First try collectively removing the three calls to
> evtchn_move_pirqs() in xen/common/schedule.c. If that helps,

Sadly, I tried this yesterday (against the tip) with no success.

I'm starting to question my bisecting results.

It is, of course, easy to know when it has failed - but since it
doesn't fail until sometime after the first, second, or third
suspend/resume cycle, it is easy to misinterpret a failure that simply
hasn't occurred yet as a success.

I will retest this morning when I get to work with this changeset, and
its parent, to better verify my results.

I'll also try the evtchn_move_pirqs() removal against the changeset
above, to see if my results differ from when I did the same test
against the tip.

> see whether any smaller set also does. From the result of this,
> I'll have to think further, perhaps handing you a debugging
> patch.

I'm happy to test any debug patch.
I will report results from the tests above later this morning.

Ben

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 15 10:40:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 10:40:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1b1X-0002ln-Rk; Wed, 15 Aug 2012 10:40:27 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Wei.Wang2@amd.com>) id 1T1b1U-0002li-Ts
	for xen-devel@lists.xen.org; Wed, 15 Aug 2012 10:40:25 +0000
X-Env-Sender: Wei.Wang2@amd.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1345027200!9411413!1
X-Originating-IP: [213.199.154.209]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14323 invoked from network); 15 Aug 2012 10:40:01 -0000
Received: from am1ehsobe006.messaging.microsoft.com (HELO
	am1outboundpool.messaging.microsoft.com) (213.199.154.209)
	by server-8.tower-27.messagelabs.com with AES128-SHA encrypted SMTP;
	15 Aug 2012 10:40:01 -0000
Received: from mail24-am1-R.bigfish.com (10.3.201.248) by
	AM1EHSOBE009.bigfish.com (10.3.204.29) with Microsoft SMTP Server id
	14.1.225.23; Wed, 15 Aug 2012 10:40:00 +0000
Received: from mail24-am1 (localhost [127.0.0.1])	by mail24-am1-R.bigfish.com
	(Postfix) with ESMTP id 68976220145;
	Wed, 15 Aug 2012 10:40:00 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:163.181.249.108; KIP:(null); UIP:(null);
	IPV:NLI; H:ausb3twp01.amd.com; RD:none; EFVD:NLI
X-SpamScore: -6
X-BigFish: VPS-6(zzbb2dI98dI9371Ic89bh146fI1432Ic857h4015Izz1202hzz8275bhz2dh668h839hd25he5bhf0ah107ah34h1155h)
Received: from mail24-am1 (localhost.localdomain [127.0.0.1]) by mail24-am1
	(MessageSwitch) id 1345027197479092_18147;
	Wed, 15 Aug 2012 10:39:57 +0000 (UTC)
Received: from AM1EHSMHS010.bigfish.com (unknown [10.3.201.238])	by
	mail24-am1.bigfish.com (Postfix) with ESMTP id 70471420044;
	Wed, 15 Aug 2012 10:39:57 +0000 (UTC)
Received: from ausb3twp01.amd.com (163.181.249.108) by
	AM1EHSMHS010.bigfish.com (10.3.207.110) with Microsoft SMTP Server id
	14.1.225.23; Wed, 15 Aug 2012 10:39:55 +0000
X-WSS-ID: 0M8SLME-01-0IA-02
X-M-MSG: 
Received: from sausexedgep01.amd.com (sausexedgep01-ext.amd.com
	[163.181.249.72])	(using TLSv1 with cipher AES128-SHA (128/128
	bits))	(No
	client certificate requested)	by ausb3twp01.amd.com (Axway MailGate
	3.8.1)
	with ESMTP id 2A10F1028014;	Wed, 15 Aug 2012 05:39:50 -0500 (CDT)
Received: from sausexhtp02.amd.com (163.181.3.152) by sausexedgep01.amd.com
	(163.181.36.54) with Microsoft SMTP Server (TLS) id 8.3.192.1;
	Wed, 15 Aug 2012 05:40:28 -0500
Received: from storexhtp02.amd.com (172.24.4.4) by sausexhtp02.amd.com
	(163.181.3.152) with Microsoft SMTP Server (TLS) id 8.3.213.0;
	Wed, 15 Aug 2012 05:39:50 -0500
Received: from gwo.osrc.amd.com (165.204.16.204) by storexhtp02.amd.com
	(172.24.4.4) with Microsoft SMTP Server id 8.3.213.0; Wed, 15 Aug 2012
	06:39:32 -0400
Received: from [165.204.15.57] (gran.osrc.amd.com [165.204.15.57])	by
	gwo.osrc.amd.com (Postfix) with ESMTP id 4F7EA49C0D5; Wed, 15 Aug 2012
	11:39:31 +0100 (BST)
Message-ID: <502B7C68.2090808@amd.com>
Date: Wed, 15 Aug 2012 12:39:36 +0200
From: Wei Wang <wei.wang2@amd.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:12.0) Gecko/20120421 Thunderbird/12.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <5357dccf4ba353d08e8e.1344974109@REDBLD-XS.ad.xensource.com>
	<502B7FC9020000780009500F@nat28.tlf.novell.com>
In-Reply-To: <502B7FC9020000780009500F@nat28.tlf.novell.com>
Content-Type: multipart/mixed; boundary="------------020406080607020004070107"
X-OriginatorOrg: amd.com
Cc: tim@xen.org, xiantao.zhang@intel.com,
	Santosh Jodh <santosh.jodh@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Dump IOMMU p2m table
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--------------020406080607020004070107
Content-Type: text/plain; charset="UTF-8"; format=flowed
Content-Transfer-Encoding: quoted-printable

On 08/15/2012 10:54 AM, Jan Beulich wrote:
>>>> On 14.08.12 at 21:55, Santosh Jodh <santosh.jodh@citrix.com> wrote:
>
> Sorry to be picky; after this many rounds I would have
> expected that no further comments would be needed.
>
>> +static void amd_dump_p2m_table_level(struct page_info* pg, int level,
>> +                                     paddr_t gpa, int indent)
>> +{
>> +    paddr_t address;
>> +    void *table_vaddr, *pde;
>> +    paddr_t next_table_maddr;
>> +    int index, next_level, present;
>> +    u32 *entry;
>> +
>> +    if ( level < 1 )
>> +        return;
>> +
>> +    table_vaddr = __map_domain_page(pg);
>> +    if ( table_vaddr == NULL )
>> +    {
>> +        printk("Failed to map IOMMU domain page %"PRIpaddr"\n",
>> +                page_to_maddr(pg));
>> +        return;
>> +    }
>> +
>> +    for ( index = 0; index < PTE_PER_TABLE_SIZE; index++ )
>> +    {
>> +        if ( !(index % 2) )
>> +            process_pending_softirqs();
>> +
>> +        pde = table_vaddr + (index * IOMMU_PAGE_TABLE_ENTRY_SIZE);
>> +        next_table_maddr = amd_iommu_get_next_table_from_pte(pde);
>> +        entry = (u32*)pde;
>> +
>> +        present = get_field_from_reg_u32(entry[0],
>> +                                         IOMMU_PDE_PRESENT_MASK,
>> +                                         IOMMU_PDE_PRESENT_SHIFT);
>> +
>> +        if ( !present )
>> +            continue;
>> +
>> +        next_level = get_field_from_reg_u32(entry[0],
>> +                                            IOMMU_PDE_NEXT_LEVEL_MASK,
>> +                                            IOMMU_PDE_NEXT_LEVEL_SHIFT);
>> +
>> +        address = gpa + amd_offset_level_address(index, level);
>> +        if ( next_level >= 1 )
>> +            amd_dump_p2m_table_level(
>> +                maddr_to_page(next_table_maddr), level - 1,
>
> Did you see Wei's cleanup patches to the code you cloned from?
> You should follow that route (replacing the ASSERT() with
> printing of the inconsistency and _not_ recursing or doing the
> normal printing), and using either "level" or "next_level"
> consistently here.

Hi, I tested the patch and the output looks much better now, please
see attachment. One thing I notice: there is a 1GB mapping in the
guest, but the format of it looks like other 2MB mappings:

(XEN)  gfn: 0003fa00  mfn: 0023b000
(XEN)  gfn: 0003fc00  mfn: 00136200
(XEN)  gfn: 0003fe00  mfn: 0023ae00
(XEN)  gfn: 00040000  mfn: 00040000 << 1GB here
(XEN)  gfn: 00080000  mfn: 0023ac00
(XEN)  gfn: 00080200  mfn: 00136000
(XEN)  gfn: 00080400  mfn: 0023aa00
(XEN)  gfn: 00080600  mfn: 00135e00
(XEN)  gfn: 00080800  mfn: 0023a800
(XEN)  gfn: 00080a00  mfn: 00135c00
(XEN)  gfn: 00080c00  mfn: 0023a600
(XEN)  gfn: 00080e00  mfn: 00135a00
(XEN)  gfn: 00081000  mfn: 0023a400
(XEN)  gfn: 00081200  mfn: 00135800
(XEN)  gfn: 00081400  mfn: 0023a200
(XEN)  gfn: 00081600  mfn: 00135600

Thanks,
Wei

>> +                address, indent + 1);
>> +        else
>> +            printk("%*s" "gfn: %08lx  mfn: %08lx\n",
>> +                   indent, " ",
>
>              printk("%*sgfn: %08lx  mfn: %08lx\n",
>                     indent, "",
>
> I can vaguely see the point in splitting the two strings in the
> first argument, but the extra space in the third argument is
> definitely wrong - it'll make level 1 and level 2 indistinguishable.
>
> I also don't see how you addressed Wei's reporting of this still
> not printing correctly. I may be overlooking something, but
> without you making clear in the description what you changed
> over the previous version that's also relatively easy to happen.
>
>> +static void vtd_dump_p2m_table_level(paddr_t pt_maddr, int level, paddr_t gpa,
>> +                                     int indent)
>> +{
>> +    paddr_t address;
>> +    int i;
>> +    struct dma_pte *pt_vaddr, *pte;
>> +    int next_level;
>> +
>> +    if ( level < 1 )
>> +        return;
>> +
>> +    pt_vaddr = map_vtd_domain_page(pt_maddr);
>> +    if ( pt_vaddr == NULL )
>> +    {
>> +        printk("Failed to map VT-D domain page %"PRIpaddr"\n", pt_maddr);
>> +        return;
>> +    }
>> +
>> +    next_level = level - 1;
>> +    for ( i = 0; i < PTE_NUM; i++ )
>> +    {
>> +        if ( !(i % 2) )
>> +            process_pending_softirqs();
>> +
>> +        pte = &pt_vaddr[i];
>> +        if ( !dma_pte_present(*pte) )
>> +            continue;
>> +
>> +        address = gpa + offset_level_address(i, level);
>> +        if ( next_level >= 1 )
>> +            vtd_dump_p2m_table_level(dma_pte_addr(*pte), next_level,
>> +                                     address, indent + 1);
>> +        else
>> +            printk("%*s" "gfn: %08lx mfn: %08lx super=%d rd=%d wr=%d\n",
>> +                   indent, " ",
>
> Same comment as above.
>
>> +                   (unsigned long)(address >> PAGE_SHIFT_4K),
>> +                   (unsigned long)(pte->val >> PAGE_SHIFT_4K),
>> +                   dma_pte_superpage(*pte)? 1 : 0,
>> +                   dma_pte_read(*pte)? 1 : 0,
>> +                   dma_pte_write(*pte)? 1 : 0);
>
> Missing spaces. Even worse - given your definitions of these
> macros there's no point in using the conditional operators here
> at all.
>
> And, despite your claim in another response, this still isn't similar
> to AMD's variant (which still doesn't print any of these three
> attributes).
>
> The printing of the superpage status is pretty pointless anyway,
> given that there's no single use of dma_set_pte_superpage()
> throughout the tree - validly so since superpages can be in use
> currently only when the tables are shared with EPT, in which
> case you don't print anything. Plus you'd need to detect the flag
> _above_ level 1 (at leaf level the bit is ignored and hence just
> confusing if printed) and print the entry instead of recursing. And
> if you decide to indeed properly implement this (rather than just
> dropping superpage support here), _I_ would expect you to
> properly implement level skipping in the corresponding AMD code
> too (which similarly isn't being used currently).
>
> Jan
>


--------------020406080607020004070107
Content-Type: text/plain; charset="UTF-8"; name="io_pt.dump"
Content-Transfer-Encoding: base64
Content-Disposition: attachment; filename="io_pt.dump"
Content-Description: io_pt.dump

CihYRU4pIEhWTTI6IGludDEzX2hhcmRkaXNrOiBmdW5jdGlvbiA0MSwgdW5tYXBwZWQgZGV2
aWNlIGZvciBFTERMPTgxCihYRU4pIEhWTTI6IGludDEzX2hhcmRkaXNrOiBmdW5jdGlvbiAw
OCwgdW5tYXBwZWQgZGV2aWNlIGZvciBFTERMPTgxCihYRU4pIEhWTTI6ICoqKiBpbnQgMTVo
IGZ1bmN0aW9uIEFYPTAwYzAsIEJYPTAwMDAgbm90IHlldCBzdXBwb3J0ZWQhCihYRU4pIAoo
WEVOKSBkb21haW4yIElPTU1VIHAybSB0YWJsZTogCihYRU4pIHAybSB0YWJsZSBoYXMgMyBs
ZXZlbHMKKFhFTikgICBnZm46IDAwMDAwMDAwICBtZm46IDAwMjE4ZjAwCihYRU4pICAgZ2Zu
OiAwMDAwMDAwMSAgbWZuOiAwMDEwMTllMQooWEVOKSAgIGdmbjogMDAwMDAwMDIgIG1mbjog
MDAyMThlZmYKKFhFTikgICBnZm46IDAwMDAwMDAzICBtZm46IDAwMTAyZTY2CihYRU4pICAg
Z2ZuOiAwMDAwMDAwNCAgbWZuOiAwMDIxOGVmZQooWEVOKSAgIGdmbjogMDAwMDAwMDUgIG1m
bjogMDAxMDJiZWEKKFhFTikgICBnZm46IDAwMDAwMDA2ICBtZm46IDAwMjE4ZWZkCihYRU4p
ICAgZ2ZuOiAwMDAwMDAwNyAgbWZuOiAwMDEwMTM1MgooWEVOKSAgIGdmbjogMDAwMDAwMDgg
IG1mbjogMDAyMThlZmMKKFhFTikgICBnZm46IDAwMDAwMDA5ICBtZm46IDAwMTAxYzhmCihY
RU4pICAgZ2ZuOiAwMDAwMDAwYSAgbWZuOiAwMDIxOGVmYgooWEVOKSAgIGdmbjogMDAwMDAw
MGIgIG1mbjogMDAxMDFhNWYKKFhFTikgICBnZm46IDAwMDAwMDBjICBtZm46IDAwMjE4ZWZh
CihYRU4pICAgZ2ZuOiAwMDAwMDAwZCAgbWZuOiAwMDEwMThmOQooWEVOKSAgIGdmbjogMDAw
MDAwMGUgIG1mbjogMDAyMThlZjkKKFhFTikgICBnZm46IDAwMDAwMDBmICBtZm46IDAwMTAw
NGM5CihYRU4pICAgZ2ZuOiAwMDAwMDAxMCAgbWZuOiAwMDIxOGVmOAooWEVOKSAgIGdmbjog
MDAwMDAwMTEgIG1mbjogMDAxMDI3MDgKKFhFTikgICBnZm46IDAwMDAwMDEyICBtZm46IDAw
MjE4ZWY3CihYRU4pICAgZ2ZuOiAwMDAwMDAxMyAgbWZuOiAwMDEwMTkwMAooWEVOKSAgIGdm
bjogMDAwMDAwMTQgIG1mbjogMDAyMThlZjYKKFhFTikgICBnZm46IDAwMDAwMDE1ICBtZm46
IDAwMTAyNTEwCihYRU4pICAgZ2ZuOiAwMDAwMDAxNiAgbWZuOiAwMDIxOGVmNQooWEVOKSAg
IGdmbjogMDAwMDAwMTcgIG1mbjogMDAxMDFkZDEKKFhFTikgICBnZm46IDAwMDAwMDE4ICBt
Zm46IDAwMjE4ZWY0CihYRU4pICAgZ2ZuOiAwMDAwMDAxOSAgbWZuOiAwMDEwMjcyNgooWEVO
KSAgIGdmbjogMDAwMDAwMWEgIG1mbjogMDAyMThlZjMKKFhFTikgICBnZm46IDAwMDAwMDFi
ICBtZm46IDAwMTAxOThhCihYRU4pICAgZ2ZuOiAwMDAwMDAxYyAgbWZuOiAwMDIxOGVmMgoo
WEVOKSAgIGdmbjogMDAwMDAwMWQgIG1mbjogMDAxMDI2ZTkKKFhFTikgICBnZm46IDAwMDAw
MDFlICBtZm46IDAwMjE4ZWYxCihYRU4pICAgZ2ZuOiAwMDAwMDAxZiAgbWZuOiAwMDEwMjk3
ZgooWEVOKSAgIGdmbjogMDAwMDAwMjAgIG1mbjogMDAyMThlZjAKKFhFTikgICBnZm46IDAw
MDAwMDIxICBtZm46IDAwMTAyZWIzCihYRU4pICAgZ2ZuOiAwMDAwMDAyMiAgbWZuOiAwMDIx
OGVlZgooWEVOKSAgIGdmbjogMDAwMDAwMjMgIG1mbjogMDAxMDA1MWYKKFhFTikgICBnZm46
IDAwMDAwMDI0ICBtZm46IDAwMjE4ZWVlCihYRU4pICAgZ2ZuOiAwMDAwMDAyNSAgbWZuOiAw
MDEwMjkzMQooWEVOKSAgIGdmbjogMDAwMDAwMjYgIG1mbjogMDAyMThlZWQKKFhFTikgICBn
Zm46IDAwMDAwMDI3ICBtZm46IDAwMTAxODkxCihYRU4pICAgZ2ZuOiAwMDAwMDAyOCAgbWZu
OiAwMDIxOGVlYwooWEVOKSAgIGdmbjogMDAwMDAwMjkgIG1mbjogMDAxMDEzZmYKKFhFTikg
ICBnZm46IDAwMDAwMDJhICBtZm46IDAwMjE4ZWViCihYRU4pICAgZ2ZuOiAwMDAwMDAyYiAg
bWZuOiAwMDEwMTRiMQooWEVOKSAgIGdmbjogMDAwMDAwMmMgIG1mbjogMDAyMThlZWEKKFhF
TikgICBnZm46IDAwMDAwMDJkICBtZm46IDAwMTAxYTc0CihYRU4pICAgZ2ZuOiAwMDAwMDAy
ZSAgbWZuOiAwMDIxOGVlOQooWEVOKSAgIGdmbjogMDAwMDAwMmYgIG1mbjogMDAxMDI3M2UK
KFhFTikgICBnZm46IDAwMDAwMDMwICBtZm46IDAwMjE4ZWU4CihYRU4pICAgZ2ZuOiAwMDAw
MDAzMSAgbWZuOiAwMDEwMjg0YgooWEVOKSAgIGdmbjogMDAwMDAwMzIgIG1mbjogMDAyMThl
ZTcKKFhFTikgICBnZm46IDAwMDAwMDMzICBtZm46IDAwMTAxNGZkCihYRU4pICAgZ2ZuOiAw
MDAwMDAzNCAgbWZuOiAwMDIxOGVlNgooWEVOKSAgIGdmbjogMDAwMDAwMzUgIG1mbjogMDAx
MDA1NWUKKFhFTikgICBnZm46IDAwMDAwMDM2ICBtZm46IDAwMjE4ZWU1CihYRU4pICAgZ2Zu
OiAwMDAwMDAzNyAgbWZuOiAwMDEwMTM0ZQooWEVOKSAgIGdmbjogMDAwMDAwMzggIG1mbjog
MDAyMThlZTQKKFhFTikgICBnZm46IDAwMDAwMDM5ICBtZm46IDAwMTAxNDg4CihYRU4pICAg
Z2ZuOiAwMDAwMDAzYSAgbWZuOiAwMDIxOGVlMwooWEVOKSAgIGdmbjogMDAwMDAwM2IgIG1m
bjogMDAxMDEzMTAKKFhFTikgICBnZm46IDAwMDAwMDNjICBtZm46IDAwMjE4ZWUyCihYRU4p
ICAgZ2ZuOiAwMDAwMDAzZCAgbWZuOiAwMDEwMTg4ZQooWEVOKSAgIGdmbjogMDAwMDAwM2Ug
IG1mbjogMDAyMThlZTEKKFhFTikgICBnZm46IDAwMDAwMDNmICBtZm46IDAwMTAyODZlCihY
RU4pICAgZ2ZuOiAwMDAwMDA0MCAgbWZuOiAwMDIxOGVlMAooWEVOKSAgIGdmbjogMDAwMDAw
NDEgIG1mbjogMDAxMDE4N2UKKFhFTikgICBnZm46IDAwMDAwMDQyICBtZm46IDAwMjE4ZWRm
CihYRU4pICAgZ2ZuOiAwMDAwMDA0MyAgbWZuOiAwMDEwMDMwZQooWEVOKSAgIGdmbjogMDAw
MDAwNDQgIG1mbjogMDAyMThlZGUKKFhFTikgICBnZm46IDAwMDAwMDQ1ICBtZm46IDAwMTAw
NDk4CihYRU4pICAgZ2ZuOiAwMDAwMDA0NiAgbWZuOiAwMDIxOGVkZAooWEVOKSAgIGdmbjog
MDAwMDAwNDcgIG1mbjogMDAxMDJkZDAKKFhFTikgICBnZm46IDAwMDAwMDQ4ICBtZm46IDAw
MjE4ZWRjCihYRU4pICAgZ2ZuOiAwMDAwMDA0OSAgbWZuOiAwMDEwMmRkYQooWEVOKSAgIGdm
bjogMDAwMDAwNGEgIG1mbjogMDAyMThlZGIKKFhFTikgICBnZm46IDAwMDAwMDRiICBtZm46
IDAwMTAxNzcwCihYRU4pICAgZ2ZuOiAwMDAwMDA0YyAgbWZuOiAwMDIxOGVkYQooWEVOKSAg
IGdmbjogMDAwMDAwNGQgIG1mbjogMDAxMDJjYjUKKFhFTikgICBnZm46IDAwMDAwMDRlICBt
Zm46IDAwMjE4ZWQ5CihYRU4pICAgZ2ZuOiAwMDAwMDA0ZiAgbWZuOiAwMDEwMmU5YQooWEVO
KSAgIGdmbjogMDAwMDAwNTAgIG1mbjogMDAyMThlZDgKKFhFTikgICBnZm46IDAwMDAwMDUx
ICBtZm46IDAwMTAxODI5CihYRU4pICAgZ2ZuOiAwMDAwMDA1MiAgbWZuOiAwMDIxOGVkNwoo
WEVOKSAgIGdmbjogMDAwMDAwNTMgIG1mbjogMDAxMDI5OTEKKFhFTikgICBnZm46IDAwMDAw
MDU0ICBtZm46IDAwMjE4ZWQ2CihYRU4pICAgZ2ZuOiAwMDAwMDA1NSAgbWZuOiAwMDEwMTU3
YwooWEVOKSAgIGdmbjogMDAwMDAwNTYgIG1mbjogMDAyMThlZDUKKFhFTikgICBnZm46IDAw
MDAwMDU3ICBtZm46IDAwMTAxN2MzCihYRU4pICAgZ2ZuOiAwMDAwMDA1OCAgbWZuOiAwMDIx
OGVkNAooWEVOKSAgIGdmbjogMDAwMDAwNTkgIG1mbjogMDAxMDJhNDIKKFhFTikgICBnZm46
IDAwMDAwMDVhICBtZm46IDAwMjE4ZWQzCihYRU4pICAgZ2ZuOiAwMDAwMDA1YiAgbWZuOiAw
MDEwMWE4ZgooWEVOKSAgIGdmbjogMDAwMDAwNWMgIG1mbjogMDAyMThlZDIKKFhFTikgICBn
Zm46IDAwMDAwMDVkICBtZm46IDAwMTAyYTY5CihYRU4pICAgZ2ZuOiAwMDAwMDA1ZSAgbWZu
OiAwMDIxOGVkMQooWEVOKSAgIGdmbjogMDAwMDAwNWYgIG1mbjogMDAxMDE4MWQKKFhFTikg
ICBnZm46IDAwMDAwMDYwICBtZm46IDAwMjE4ZWQwCihYRU4pICAgZ2ZuOiAwMDAwMDA2MSAg
bWZuOiAwMDEwMWFjNwooWEVOKSAgIGdmbjogMDAwMDAwNjIgIG1mbjogMDAyMThlY2YKKFhF
TikgICBnZm46IDAwMDAwMDYzICBtZm46IDAwMTAxNGRhCihYRU4pICAgZ2ZuOiAwMDAwMDA2
NCAgbWZuOiAwMDIxOGVjZQooWEVOKSAgIGdmbjogMDAwMDAwNjUgIG1mbjogMDAxMDFhMGMK
KFhFTikgICBnZm46IDAwMDAwMDY2ICBtZm46IDAwMjE4ZWNkCihYRU4pICAgZ2ZuOiAwMDAw
MDA2NyAgbWZuOiAwMDEwMjhjNgooWEVOKSAgIGdmbjogMDAwMDAwNjggIG1mbjogMDAyMThl
Y2MKKFhFTikgICBnZm46IDAwMDAwMDY5ICBtZm46IDAwMTAyYTFhCihYRU4pICAgZ2ZuOiAw
MDAwMDA2YSAgbWZuOiAwMDIxOGVjYgooWEVOKSAgIGdmbjogMDAwMDAwNmIgIG1mbjogMDAx
MDI0NDYKKFhFTikgICBnZm46IDAwMDAwMDZjICBtZm46IDAwMjE4ZWNhCihYRU4pICAgZ2Zu
OiAwMDAwMDA2ZCAgbWZuOiAwMDEwMTM4MwooWEVOKSAgIGdmbjogMDAwMDAwNmUgIG1mbjog
MDAyMThlYzkKKFhFTikgICBnZm46IDAwMDAwMDZmICBtZm46IDAwMTAyY2MyCihYRU4pICAg
Z2ZuOiAwMDAwMDA3MCAgbWZuOiAwMDIxOGVjOAooWEVOKSAgIGdmbjogMDAwMDAwNzEgIG1m
bjogMDAxMDFkZWIKKFhFTikgICBnZm46IDAwMDAwMDcyICBtZm46IDAwMjE4ZWM3CihYRU4p
ICAgZ2ZuOiAwMDAwMDA3MyAgbWZuOiAwMDEwMjFhOAooWEVOKSAgIGdmbjogMDAwMDAwNzQg
IG1mbjogMDAyMThlYzYKKFhFTikgICBnZm46IDAwMDAwMDc1ICBtZm46IDAwMTAwMmZiCihY
RU4pICAgZ2ZuOiAwMDAwMDA3NiAgbWZuOiAwMDIxOGVjNQooWEVOKSAgIGdmbjogMDAwMDAw
NzcgIG1mbjogMDAxMDE0Y2MKKFhFTikgICBnZm46IDAwMDAwMDc4ICBtZm46IDAwMjE4ZWM0
CihYRU4pICAgZ2ZuOiAwMDAwMDA3OSAgbWZuOiAwMDEwMTU0NAooWEVOKSAgIGdmbjogMDAw
MDAwN2EgIG1mbjogMDAyMThlYzMKKFhFTikgICBnZm46IDAwMDAwMDdiICBtZm46IDAwMTAx
ZDlmCihYRU4pICAgZ2ZuOiAwMDAwMDA3YyAgbWZuOiAwMDIxOGVjMgooWEVOKSAgIGdmbjog
MDAwMDAwN2QgIG1mbjogMDAxMDFhODgKKFhFTikgICBnZm46IDAwMDAwMDdlICBtZm46IDAw
MjE4ZWMxCihYRU4pICAgZ2ZuOiAwMDAwMDA3ZiAgbWZuOiAwMDEwMWE2MAooWEVOKSAgIGdm
bjogMDAwMDAwODAgIG1mbjogMDAyMThlYzAKKFhFTikgICBnZm46IDAwMDAwMDgxICBtZm46
IDAwMTAyOTE1CihYRU4pICAgZ2ZuOiAwMDAwMDA4MiAgbWZuOiAwMDIxOGViZgooWEVOKSAg
IGdmbjogMDAwMDAwODMgIG1mbjogMDAxMDEzMDEKKFhFTikgICBnZm46IDAwMDAwMDg0ICBt
Zm46IDAwMjE4ZWJlCihYRU4pICAgZ2ZuOiAwMDAwMDA4NSAgbWZuOiAwMDEwMTVkOAooWEVO
KSAgIGdmbjogMDAwMDAwODYgIG1mbjogMDAyMThlYmQKKFhFTikgICBnZm46IDAwMDAwMDg3
ICBtZm46IDAwMTAxOGFkCihYRU4pICAgZ2ZuOiAwMDAwMDA4OCAgbWZuOiAwMDIxOGViYwoo
WEVOKSAgIGdmbjogMDAwMDAwODkgIG1mbjogMDAxMDE1MzMKKFhFTikgICBnZm46IDAwMDAw
MDhhICBtZm46IDAwMjE4ZWJiCihYRU4pICAgZ2ZuOiAwMDAwMDA4YiAgbWZuOiAwMDEwMWVk
NAooWEVOKSAgIGdmbjogMDAwMDAwOGMgIG1mbjogMDAyMThlYmEKKFhFTikgICBnZm46IDAw
MDAwMDhkICBtZm46IDAwMTAxYTAxCihYRU4pICAgZ2ZuOiAwMDAwMDA4ZSAgbWZuOiAwMDIx
OGViOQooWEVOKSAgIGdmbjogMDAwMDAwOGYgIG1mbjogMDAxMDE1OWUKKFhFTikgICBnZm46
IDAwMDAwMDkwICBtZm46IDAwMjE4ZWI4CihYRU4pICAgZ2ZuOiAwMDAwMDA5MSAgbWZuOiAw
MDEwMjU3MQooWEVOKSAgIGdmbjogMDAwMDAwOTIgIG1mbjogMDAyMThlYjcKKFhFTikgICBn
Zm46IDAwMDAwMDkzICBtZm46IDAwMTAyY2ZlCihYRU4pICAgZ2ZuOiAwMDAwMDA5NCAgbWZu
OiAwMDIxOGViNgooWEVOKSAgIGdmbjogMDAwMDAwOTUgIG1mbjogMDAxMDA1NWQKKFhFTikg
ICBnZm46IDAwMDAwMDk2ICBtZm46IDAwMjE4ZWI1CihYRU4pICAgZ2ZuOiAwMDAwMDA5NyAg
bWZuOiAwMDEwMmQyNwooWEVOKSAgIGdmbjogMDAwMDAwOTggIG1mbjogMDAyMThlYjQKKFhF
TikgICBnZm46IDAwMDAwMDk5ICBtZm46IDAwMTAyN2M2CihYRU4pICAgZ2ZuOiAwMDAwMDA5
YSAgbWZuOiAwMDIxOGViMwooWEVOKSAgIGdmbjogMDAwMDAwOWIgIG1mbjogMDAxMDE0YWYK
KFhFTikgICBnZm46IDAwMDAwMDljICBtZm46IDAwMjE4ZWIyCihYRU4pICAgZ2ZuOiAwMDAw
MDA5ZCAgbWZuOiAwMDEwMmU3NgooWEVOKSAgIGdmbjogMDAwMDAwOWUgIG1mbjogMDAyMThl
YjEKKFhFTikgICBnZm46IDAwMDAwMDlmICBtZm46IDAwMTAyMzVlCihYRU4pICAgZ2ZuOiAw
MDAwMDEwMCAgbWZuOiAwMDIxOGU5MAooWEVOKSAgIGdmbjogMDAwMDAxMDEgIG1mbjogMDAx
MDE3MWUKKFhFTikgICBnZm46IDAwMDAwMTAyICBtZm46IDAwMjE4ZThmCihYRU4pICAgZ2Zu
OiAwMDAwMDEwMyAgbWZuOiAwMDEwMmViMQooWEVOKSAgIGdmbjogMDAwMDAxMDQgIG1mbjog
MDAyMThlOGUKKFhFTikgICBnZm46IDAwMDAwMTA1ICBtZm46IDAwMTAxOWQyCihYRU4pICAg
Z2ZuOiAwMDAwMDEwNiAgbWZuOiAwMDIxOGU4ZAooWEVOKSAgIGdmbjogMDAwMDAxMDcgIG1m
bjogMDAxMDAzMTAKKFhFTikgICBnZm46IDAwMDAwMTA4ICBtZm46IDAwMjE4ZThjCihYRU4p
ICAgZ2ZuOiAwMDAwMDEwOSAgbWZuOiAwMDEwMTgxNwooWEVOKSAgIGdmbjogMDAwMDAxMGEg
IG1mbjogMDAyMThlOGIKKFhFTikgICBnZm46IDAwMDAwMTBiICBtZm46IDAwMTAxODdiCihY
RU4pICAgZ2ZuOiAwMDAwMDEwYyAgbWZuOiAwMDIxOGU4YQooWEVOKSAgIGdmbjogMDAwMDAx
MGQgIG1mbjogMDAxMDFhNTMKKFhFTikgICBnZm46IDAwMDAwMTBlICBtZm46IDAwMjE4ZTg5
CihYRU4pICAgZ2ZuOiAwMDAwMDEwZiAgbWZuOiAwMDEwMjk5NwooWEVOKSAgIGdmbjogMDAw
MDAxMTAgIG1mbjogMDAyMThlODgKKFhFTikgICBnZm46IDAwMDAwMTExICBtZm46IDAwMTAx
NTA0CihYRU4pICAgZ2ZuOiAwMDAwMDExMiAgbWZuOiAwMDIxOGU4NwooWEVOKSAgIGdmbjog
MDAwMDAxMTMgIG1mbjogMDAxMDFhOTgKKFhFTikgICBnZm46IDAwMDAwMTE0ICBtZm46IDAw
MjE4ZTg2CihYRU4pICAgZ2ZuOiAwMDAwMDExNSAgbWZuOiAwMDEwMjgzMgooWEVOKSAgIGdm
bjogMDAwMDAxMTYgIG1mbjogMDAyMThlODUKKFhFTikgICBnZm46IDAwMDAwMTE3ICBtZm46
IDAwMTAxZWMzCihYRU4pICAgZ2ZuOiAwMDAwMDExOCAgbWZuOiAwMDIxOGU4NAooWEVOKSAg
IGdmbjogMDAwMDAxMTkgIG1mbjogMDAxMDJlMWYKKFhFTikgICBnZm46IDAwMDAwMTFhICBt
Zm46IDAwMjE4ZTgzCihYRU4pICAgZ2ZuOiAwMDAwMDExYiAgbWZuOiAwMDEwMDNhOAooWEVO
KSAgIGdmbjogMDAwMDAxMWMgIG1mbjogMDAyMThlODIKKFhFTikgICBnZm46IDAwMDAwMTFk
ICBtZm46IDAwMTAxZTczCihYRU4pICAgZ2ZuOiAwMDAwMDExZSAgbWZuOiAwMDIxOGU4MQoo
WEVOKSAgIGdmbjogMDAwMDAxMWYgIG1mbjogMDAxMDI2ZWQKKFhFTikgICBnZm46IDAwMDAw
MTIwICBtZm46IDAwMjE4ZTgwCihYRU4pICAgZ2ZuOiAwMDAwMDEyMSAgbWZuOiAwMDEwMDU2
NwooWEVOKSAgIGdmbjogMDAwMDAxMjIgIG1mbjogMDAyMThlN2YKKFhFTikgICBnZm46IDAw
MDAwMTIzICBtZm46IDAwMTAxNGYyCihYRU4pICAgZ2ZuOiAwMDAwMDEyNCAgbWZuOiAwMDIx
OGU3ZQooWEVOKSAgIGdmbjogMDAwMDAxMjUgIG1mbjogMDAxMDAyZWQKKFhFTikgICBnZm46
IDAwMDAwMTI2ICBtZm46IDAwMjE4ZTdkCihYRU4pICAgZ2ZuOiAwMDAwMDEyNyAgbWZuOiAw
MDEwMThiYQooWEVOKSAgIGdmbjogMDAwMDAxMjggIG1mbjogMDAyMThlN2MKKFhFTikgICBn
Zm46IDAwMDAwMTI5ICBtZm46IDAwMTAxMzc1CihYRU4pICAgZ2ZuOiAwMDAwMDEyYSAgbWZu
OiAwMDIxOGU3YgooWEVOKSAgIGdmbjogMDAwMDAxMmIgIG1mbjogMDAxMDEzZWEKKFhFTikg
ICBnZm46IDAwMDAwMTJjICBtZm46IDAwMjE4ZTdhCihYRU4pICAgZ2ZuOiAwMDAwMDEyZCAg
bWZuOiAwMDEwMThkZQooWEVOKSAgIGdmbjogMDAwMDAxMmUgIG1mbjogMDAyMThlNzkKKFhF
TikgICBnZm46IDAwMDAwMTJmICBtZm46IDAwMTAwNTcwCihYRU4pICAgZ2ZuOiAwMDAwMDEz
MCAgbWZuOiAwMDIxOGU3OAooWEVOKSAgIGdmbjogMDAwMDAxMzEgIG1mbjogMDAxMDE0M2UK
KFhFTikgICBnZm46IDAwMDAwMTMyICBtZm46IDAwMjE4ZTc3CihYRU4pICAgZ2ZuOiAwMDAw
MDEzMyAgbWZuOiAwMDEwMTM2YQooWEVOKSAgIGdmbjogMDAwMDAxMzQgIG1mbjogMDAyMThl
NzYKKFhFTikgICBnZm46IDAwMDAwMTM1ICBtZm46IDAwMTAxM2E1CihYRU4pICAgZ2ZuOiAw
MDAwMDEzNiAgbWZuOiAwMDIxOGU3NQooWEVOKSAgIGdmbjogMDAwMDAxMzcgIG1mbjogMDAx
MDI3ZjUKKFhFTikgICBnZm46IDAwMDAwMTM4ICBtZm46IDAwMjE4ZTc0CihYRU4pICAgZ2Zu
OiAwMDAwMDEzOSAgbWZuOiAwMDEwMmE0OQooWEVOKSAgIGdmbjogMDAwMDAxM2EgIG1mbjog
MDAyMThlNzMKKFhFTikgICBnZm46IDAwMDAwMTNiICBtZm46IDAwMTAxZGM3CihYRU4pICAg
Z2ZuOiAwMDAwMDEzYyAgbWZuOiAwMDIxOGU3MgooWEVOKSAgIGdmbjogMDAwMDAxM2QgIG1m
bjogMDAxMDJkNjAKKFhFTikgICBnZm46IDAwMDAwMTNlICBtZm46IDAwMjE4ZTcxCihYRU4p
ICAgZ2ZuOiAwMDAwMDEzZiAgbWZuOiAwMDEwMTVhMgooWEVOKSAgIGdmbjogMDAwMDAxNDAg
IG1mbjogMDAyMThlNzAKKFhFTikgICBnZm46IDAwMDAwMTQxICBtZm46IDAwMTAyNzdiCihY
RU4pICAgZ2ZuOiAwMDAwMDE0MiAgbWZuOiAwMDIxOGU2ZgooWEVOKSAgIGdmbjogMDAwMDAx
NDMgIG1mbjogMDAxMDI4YWYKKFhFTikgICBnZm46IDAwMDAwMTQ0ICBtZm46IDAwMjE4ZTZl
CihYRU4pICAgZ2ZuOiAwMDAwMDE0NSAgbWZuOiAwMDEwMTk0ZgooWEVOKSAgIGdmbjogMDAw
MDAxNDYgIG1mbjogMDAyMThlNmQKKFhFTikgICBnZm46IDAwMDAwMTQ3ICBtZm46IDAwMTAx
YTYzCihYRU4pICAgZ2ZuOiAwMDAwMDE0OCAgbWZuOiAwMDIxOGU2YwooWEVOKSAgIGdmbjog
MDAwMDAxNDkgIG1mbjogMDAxMDE1NDEKKFhFTikgICBnZm46IDAwMDAwMTRhICBtZm46IDAw
MjE4ZTZiCihYRU4pICAgZ2ZuOiAwMDAwMDE0YiAgbWZuOiAwMDEwMmRjYwooWEVOKSAgIGdm
bjogMDAwMDAxNGMgIG1mbjogMDAyMThlNmEKKFhFTikgICBnZm46IDAwMDAwMTRkICBtZm46
IDAwMTAyODFjCihYRU4pICAgZ2ZuOiAwMDAwMDE0ZSAgbWZuOiAwMDIxOGU2OQooWEVOKSAg
IGdmbjogMDAwMDAxNGYgIG1mbjogMDAxMDJlMzIKKFhFTikgICBnZm46IDAwMDAwMTUwICBt
Zm46IDAwMjE4ZTY4CihYRU4pICAgZ2ZuOiAwMDAwMDE1MSAgbWZuOiAwMDEwMDRjNAooWEVO
KSAgIGdmbjogMDAwMDAxNTIgIG1mbjogMDAyMThlNjcKKFhFTikgICBnZm46IDAwMDAwMTUz
ICBtZm46IDAwMTAyYzE4CihYRU4pICAgZ2ZuOiAwMDAwMDE1NCAgbWZuOiAwMDIxOGU2Ngoo
WEVOKSAgIGdmbjogMDAwMDAxNTUgIG1mbjogMDAxMDJjMTcKKFhFTikgICBnZm46IDAwMDAw
MTU2ICBtZm46IDAwMjE4ZTY1CihYRU4pICAgZ2ZuOiAwMDAwMDE1NyAgbWZuOiAwMDEwMWRk
OAooWEVOKSAgIGdmbjogMDAwMDAxNTggIG1mbjogMDAyMThlNjQKKFhFTikgICBnZm46IDAw
MDAwMTU5ICBtZm46IDAwMTAxZGQ3CihYRU4pICAgZ2ZuOiAwMDAwMDE1YSAgbWZuOiAwMDIx
OGU2MwooWEVOKSAgIGdmbjogMDAwMDAxNWIgIG1mbjogMDAxMDI3YmYKKFhFTikgICBnZm46
IDAwMDAwMTVjICBtZm46IDAwMjE4ZTYyCihYRU4pICAgZ2ZuOiAwMDAwMDE1ZCAgbWZuOiAw
MDEwMDUzNAooWEVOKSAgIGdmbjogMDAwMDAxNWUgIG1mbjogMDAyMThlNjEKKFhFTikgICBn
Zm46IDAwMDAwMTVmICBtZm46IDAwMTAyZTk2CihYRU4pICAgZ2ZuOiAwMDAwMDE2MCAgbWZu
OiAwMDIxOGU2MAooWEVOKSAgIGdmbjogMDAwMDAxNjEgIG1mbjogMDAxMDE0Y2EKKFhFTikg
ICBnZm46IDAwMDAwMTYyICBtZm46IDAwMjE4ZTVmCihYRU4pICAgZ2ZuOiAwMDAwMDE2MyAg
bWZuOiAwMDEwMTRjOQooWEVOKSAgIGdmbjogMDAwMDAxNjQgIG1mbjogMDAyMThlNWUKKFhF
TikgICBnZm46IDAwMDAwMTY1ICBtZm46IDAwMTAyZTJhCihYRU4pICAgZ2ZuOiAwMDAwMDE2
NiAgbWZuOiAwMDIxOGU1ZAooWEVOKSAgIGdmbjogMDAwMDAxNjcgIG1mbjogMDAxMDJlMjkK
KFhFTikgICBnZm46IDAwMDAwMTY4ICBtZm46IDAwMjE4ZTVjCihYRU4pICAgZ2ZuOiAwMDAw
MDE2OSAgbWZuOiAwMDEwMjQ2NQooWEVOKSAgIGdmbjogMDAwMDAxNmEgIG1mbjogMDAyMThl
NWIKKFhFTikgICBnZm46IDAwMDAwMTZiICBtZm46IDAwMTAyZTg0CihYRU4pICAgZ2ZuOiAw
MDAwMDE2YyAgbWZuOiAwMDIxOGU1YQooWEVOKSAgIGdmbjogMDAwMDAxNmQgIG1mbjogMDAx
MDJlODMKKFhFTikgICBnZm46IDAwMDAwMTZlICBtZm46IDAwMjE4ZTU5CihYRU4pICAgZ2Zu
OiAwMDAwMDE2ZiAgbWZuOiAwMDEwMjc1NwooWEVOKSAgIGdmbjogMDAwMDAxNzAgIG1mbjog
MDAyMThlNTgKKFhFTikgICBnZm46IDAwMDAwMTcxICBtZm46IDAwMTAxNDkyCihYRU4pICAg
[base64-encoded slice of the attached Xen console log; decoded, it continues the guest p2m (gfn -> mfn) dump:]

(XEN)   gfn: 00000172  mfn: 00218e57
(XEN)   gfn: 00000173  mfn: 00102a10
        [... one entry per 4 KiB page, alternating between the 00218exx and 0010xxxx mfn ranges, through gfn 000001ff ...]
(XEN)  gfn: 00000200  mfn: 00218c00
        [... 2 MiB superpage entries (gfn stride 0x200), including the identity mapping gfn 00040000 -> mfn 00040000, then resuming at gfn 00080000 after a gap ...]
(XEN)  gfn: 0008a800  mfn: [slice ends mid-entry]
IDAwMjM1ODAwCihYRU4pICBnZm46IDAwMDhhYTAwICBtZm46IDAwMTMwYzAwCihYRU4pICBn
Zm46IDAwMDhhYzAwICBtZm46IDAwMjM1NjAwCihYRU4pICBnZm46IDAwMDhhZTAwICBtZm46
IDAwMTMwYTAwCihYRU4pICBnZm46IDAwMDhiMDAwICBtZm46IDAwMjM1NDAwCihYRU4pICBn
Zm46IDAwMDhiMjAwICBtZm46IDAwMTMwODAwCihYRU4pICBnZm46IDAwMDhiNDAwICBtZm46
IDAwMjM1MjAwCihYRU4pICBnZm46IDAwMDhiNjAwICBtZm46IDAwMTMwNjAwCihYRU4pICBn
Zm46IDAwMDhiODAwICBtZm46IDAwMjM1MDAwCihYRU4pICBnZm46IDAwMDhiYTAwICBtZm46
IDAwMTMwNDAwCihYRU4pICBnZm46IDAwMDhiYzAwICBtZm46IDAwMjM0ZTAwCihYRU4pICBn
Zm46IDAwMDhiZTAwICBtZm46IDAwMTMwMjAwCihYRU4pICBnZm46IDAwMDhjMDAwICBtZm46
IDAwMjM0YzAwCihYRU4pICBnZm46IDAwMDhjMjAwICBtZm46IDAwMTMwMDAwCihYRU4pICBn
Zm46IDAwMDhjNDAwICBtZm46IDAwMjM0YTAwCihYRU4pICBnZm46IDAwMDhjNjAwICBtZm46
IDAwMTJmZTAwCihYRU4pICBnZm46IDAwMDhjODAwICBtZm46IDAwMjM0ODAwCihYRU4pICBn
Zm46IDAwMDhjYTAwICBtZm46IDAwMTJmYzAwCihYRU4pICBnZm46IDAwMDhjYzAwICBtZm46
IDAwMjM0NjAwCihYRU4pICBnZm46IDAwMDhjZTAwICBtZm46IDAwMTJmYTAwCihYRU4pICBn
Zm46IDAwMDhkMDAwICBtZm46IDAwMjM0NDAwCihYRU4pICBnZm46IDAwMDhkMjAwICBtZm46
IDAwMTJmODAwCihYRU4pICBnZm46IDAwMDhkNDAwICBtZm46IDAwMjM0MjAwCihYRU4pICBn
Zm46IDAwMDhkNjAwICBtZm46IDAwMTJmNjAwCihYRU4pICBnZm46IDAwMDhkODAwICBtZm46
IDAwMjM0MDAwCihYRU4pICBnZm46IDAwMDhkYTAwICBtZm46IDAwMTJmNDAwCihYRU4pICBn
Zm46IDAwMDhkYzAwICBtZm46IDAwMjMzZTAwCihYRU4pICBnZm46IDAwMDhkZTAwICBtZm46
IDAwMTJmMjAwCihYRU4pICBnZm46IDAwMDhlMDAwICBtZm46IDAwMjMzYzAwCihYRU4pICBn
Zm46IDAwMDhlMjAwICBtZm46IDAwMTJmMDAwCihYRU4pICBnZm46IDAwMDhlNDAwICBtZm46
IDAwMjMzYTAwCihYRU4pICBnZm46IDAwMDhlNjAwICBtZm46IDAwMTJlZTAwCihYRU4pICBn
Zm46IDAwMDhlODAwICBtZm46IDAwMjMzODAwCihYRU4pICBnZm46IDAwMDhlYTAwICBtZm46
IDAwMTJlYzAwCihYRU4pICBnZm46IDAwMDhlYzAwICBtZm46IDAwMjMzNjAwCihYRU4pICBn
Zm46IDAwMDhlZTAwICBtZm46IDAwMTJlYTAwCihYRU4pICBnZm46IDAwMDhmMDAwICBtZm46
IDAwMjMzNDAwCihYRU4pICBnZm46IDAwMDhmMjAwICBtZm46IDAwMTJlODAwCihYRU4pICBn
Zm46IDAwMDhmNDAwICBtZm46IDAwMjMzMjAwCihYRU4pICBnZm46IDAwMDhmNjAwICBtZm46
IDAwMTJlNjAwCihYRU4pICBnZm46IDAwMDhmODAwICBtZm46IDAwMjMzMDAwCihYRU4pICBn
Zm46IDAwMDhmYTAwICBtZm46IDAwMTJlNDAwCihYRU4pICBnZm46IDAwMDhmYzAwICBtZm46
IDAwMjMyZTAwCihYRU4pICBnZm46IDAwMDhmZTAwICBtZm46IDAwMTJlMjAwCihYRU4pICBn
Zm46IDAwMDkwMDAwICBtZm46IDAwMjMyYzAwCihYRU4pICBnZm46IDAwMDkwMjAwICBtZm46
IDAwMTJlMDAwCihYRU4pICBnZm46IDAwMDkwNDAwICBtZm46IDAwMjMyYTAwCihYRU4pICBn
Zm46IDAwMDkwNjAwICBtZm46IDAwMTJkZTAwCihYRU4pICBnZm46IDAwMDkwODAwICBtZm46
IDAwMjMyODAwCihYRU4pICBnZm46IDAwMDkwYTAwICBtZm46IDAwMTJkYzAwCihYRU4pICBn
Zm46IDAwMDkwYzAwICBtZm46IDAwMjMyNjAwCihYRU4pICBnZm46IDAwMDkwZTAwICBtZm46
IDAwMTJkYTAwCihYRU4pICBnZm46IDAwMDkxMDAwICBtZm46IDAwMjMyNDAwCihYRU4pICBn
Zm46IDAwMDkxMjAwICBtZm46IDAwMTJkODAwCihYRU4pICBnZm46IDAwMDkxNDAwICBtZm46
IDAwMjMyMjAwCihYRU4pICBnZm46IDAwMDkxNjAwICBtZm46IDAwMTJkNjAwCihYRU4pICBn
Zm46IDAwMDkxODAwICBtZm46IDAwMjMyMDAwCihYRU4pICBnZm46IDAwMDkxYTAwICBtZm46
IDAwMTJkNDAwCihYRU4pICBnZm46IDAwMDkxYzAwICBtZm46IDAwMjMxZTAwCihYRU4pICBn
Zm46IDAwMDkxZTAwICBtZm46IDAwMTJkMjAwCihYRU4pICBnZm46IDAwMDkyMDAwICBtZm46
IDAwMjMxYzAwCihYRU4pICBnZm46IDAwMDkyMjAwICBtZm46IDAwMTJkMDAwCihYRU4pICBn
Zm46IDAwMDkyNDAwICBtZm46IDAwMjMxYTAwCihYRU4pICBnZm46IDAwMDkyNjAwICBtZm46
IDAwMTJjZTAwCihYRU4pICBnZm46IDAwMDkyODAwICBtZm46IDAwMjMxODAwCihYRU4pICBn
Zm46IDAwMDkyYTAwICBtZm46IDAwMTJjYzAwCihYRU4pICBnZm46IDAwMDkyYzAwICBtZm46
IDAwMjMxNjAwCihYRU4pICBnZm46IDAwMDkyZTAwICBtZm46IDAwMTJjYTAwCihYRU4pICBn
Zm46IDAwMDkzMDAwICBtZm46IDAwMjMxNDAwCihYRU4pICBnZm46IDAwMDkzMjAwICBtZm46
IDAwMTJjODAwCihYRU4pICBnZm46IDAwMDkzNDAwICBtZm46IDAwMjMxMjAwCihYRU4pICBn
Zm46IDAwMDkzNjAwICBtZm46IDAwMTJjNjAwCihYRU4pICBnZm46IDAwMDkzODAwICBtZm46
IDAwMjMxMDAwCihYRU4pICBnZm46IDAwMDkzYTAwICBtZm46IDAwMTJjNDAwCihYRU4pICBn
Zm46IDAwMDkzYzAwICBtZm46IDAwMjMwZTAwCihYRU4pICBnZm46IDAwMDkzZTAwICBtZm46
IDAwMTJjMjAwCihYRU4pICBnZm46IDAwMDk0MDAwICBtZm46IDAwMjMwYzAwCihYRU4pICBn
Zm46IDAwMDk0MjAwICBtZm46IDAwMTJjMDAwCihYRU4pICBnZm46IDAwMDk0NDAwICBtZm46
IDAwMjMwYTAwCihYRU4pICBnZm46IDAwMDk0NjAwICBtZm46IDAwMTJiZTAwCihYRU4pICBn
Zm46IDAwMDk0ODAwICBtZm46IDAwMjMwODAwCihYRU4pICBnZm46IDAwMDk0YTAwICBtZm46
IDAwMTJiYzAwCihYRU4pICBnZm46IDAwMDk0YzAwICBtZm46IDAwMjMwNjAwCihYRU4pICBn
Zm46IDAwMDk0ZTAwICBtZm46IDAwMTJiYTAwCihYRU4pICBnZm46IDAwMDk1MDAwICBtZm46
IDAwMjMwNDAwCihYRU4pICBnZm46IDAwMDk1MjAwICBtZm46IDAwMTJiODAwCihYRU4pICBn
Zm46IDAwMDk1NDAwICBtZm46IDAwMjMwMjAwCihYRU4pICBnZm46IDAwMDk1NjAwICBtZm46
IDAwMTJiNjAwCihYRU4pICBnZm46IDAwMDk1ODAwICBtZm46IDAwMjMwMDAwCihYRU4pICBn
Zm46IDAwMDk1YTAwICBtZm46IDAwMTJiNDAwCihYRU4pICBnZm46IDAwMDk1YzAwICBtZm46
IDAwMjJmZTAwCihYRU4pICBnZm46IDAwMDk1ZTAwICBtZm46IDAwMTJiMjAwCihYRU4pICBn
Zm46IDAwMDk2MDAwICBtZm46IDAwMjJmYzAwCihYRU4pICBnZm46IDAwMDk2MjAwICBtZm46
IDAwMTJiMDAwCihYRU4pICBnZm46IDAwMDk2NDAwICBtZm46IDAwMjJmYTAwCihYRU4pICBn
Zm46IDAwMDk2NjAwICBtZm46IDAwMTJhZTAwCihYRU4pICBnZm46IDAwMDk2ODAwICBtZm46
IDAwMjJmODAwCihYRU4pICBnZm46IDAwMDk2YTAwICBtZm46IDAwMTJhYzAwCihYRU4pICBn
Zm46IDAwMDk2YzAwICBtZm46IDAwMjJmNjAwCihYRU4pICBnZm46IDAwMDk2ZTAwICBtZm46
IDAwMTJhYTAwCihYRU4pICBnZm46IDAwMDk3MDAwICBtZm46IDAwMjJmNDAwCihYRU4pICBn
Zm46IDAwMDk3MjAwICBtZm46IDAwMTJhODAwCihYRU4pICBnZm46IDAwMDk3NDAwICBtZm46
IDAwMjJmMjAwCihYRU4pICBnZm46IDAwMDk3NjAwICBtZm46IDAwMTJhNjAwCihYRU4pICBn
Zm46IDAwMDk3ODAwICBtZm46IDAwMjJmMDAwCihYRU4pICBnZm46IDAwMDk3YTAwICBtZm46
IDAwMTJhNDAwCihYRU4pICBnZm46IDAwMDk3YzAwICBtZm46IDAwMjJlZTAwCihYRU4pICBn
Zm46IDAwMDk3ZTAwICBtZm46IDAwMTJhMjAwCihYRU4pICBnZm46IDAwMDk4MDAwICBtZm46
IDAwMjJlYzAwCihYRU4pICBnZm46IDAwMDk4MjAwICBtZm46IDAwMTJhMDAwCihYRU4pICBn
Zm46IDAwMDk4NDAwICBtZm46IDAwMjJlYTAwCihYRU4pICBnZm46IDAwMDk4NjAwICBtZm46
IDAwMTI5ZTAwCihYRU4pICBnZm46IDAwMDk4ODAwICBtZm46IDAwMjJlODAwCihYRU4pICBn
Zm46IDAwMDk4YTAwICBtZm46IDAwMTI5YzAwCihYRU4pICBnZm46IDAwMDk4YzAwICBtZm46
IDAwMjJlNjAwCihYRU4pICBnZm46IDAwMDk4ZTAwICBtZm46IDAwMTI5YTAwCihYRU4pICBn
Zm46IDAwMDk5MDAwICBtZm46IDAwMjJlNDAwCihYRU4pICBnZm46IDAwMDk5MjAwICBtZm46
IDAwMTI5ODAwCihYRU4pICBnZm46IDAwMDk5NDAwICBtZm46IDAwMjJlMjAwCihYRU4pICBn
Zm46IDAwMDk5NjAwICBtZm46IDAwMTI5NjAwCihYRU4pICBnZm46IDAwMDk5ODAwICBtZm46
IDAwMjJlMDAwCihYRU4pICBnZm46IDAwMDk5YTAwICBtZm46IDAwMTI5NDAwCihYRU4pICBn
Zm46IDAwMDk5YzAwICBtZm46IDAwMjJkZTAwCihYRU4pICBnZm46IDAwMDk5ZTAwICBtZm46
IDAwMTI5MjAwCihYRU4pICBnZm46IDAwMDlhMDAwICBtZm46IDAwMjJkYzAwCihYRU4pICBn
Zm46IDAwMDlhMjAwICBtZm46IDAwMTI5MDAwCihYRU4pICBnZm46IDAwMDlhNDAwICBtZm46
IDAwMjJkYTAwCihYRU4pICBnZm46IDAwMDlhNjAwICBtZm46IDAwMTI4ZTAwCihYRU4pICBn
Zm46IDAwMDlhODAwICBtZm46IDAwMjJkODAwCihYRU4pICBnZm46IDAwMDlhYTAwICBtZm46
IDAwMTI4YzAwCihYRU4pICBnZm46IDAwMDlhYzAwICBtZm46IDAwMjJkNjAwCihYRU4pICBn
Zm46IDAwMDlhZTAwICBtZm46IDAwMTI4YTAwCihYRU4pICBnZm46IDAwMDliMDAwICBtZm46
IDAwMjJkNDAwCihYRU4pICBnZm46IDAwMDliMjAwICBtZm46IDAwMTI4ODAwCihYRU4pICBn
Zm46IDAwMDliNDAwICBtZm46IDAwMjJkMjAwCihYRU4pICBnZm46IDAwMDliNjAwICBtZm46
IDAwMTI4NjAwCihYRU4pICBnZm46IDAwMDliODAwICBtZm46IDAwMjJkMDAwCihYRU4pICBn
Zm46IDAwMDliYTAwICBtZm46IDAwMTI4NDAwCihYRU4pICBnZm46IDAwMDliYzAwICBtZm46
IDAwMjJjZTAwCihYRU4pICBnZm46IDAwMDliZTAwICBtZm46IDAwMTI4MjAwCihYRU4pICBn
Zm46IDAwMDljMDAwICBtZm46IDAwMjJjYzAwCihYRU4pICBnZm46IDAwMDljMjAwICBtZm46
IDAwMTI4MDAwCihYRU4pICBnZm46IDAwMDljNDAwICBtZm46IDAwMjJjYTAwCihYRU4pICBn
Zm46IDAwMDljNjAwICBtZm46IDAwMTI3ZTAwCihYRU4pICBnZm46IDAwMDljODAwICBtZm46
IDAwMjJjODAwCihYRU4pICBnZm46IDAwMDljYTAwICBtZm46IDAwMTI3YzAwCihYRU4pICBn
Zm46IDAwMDljYzAwICBtZm46IDAwMjJjNjAwCihYRU4pICBnZm46IDAwMDljZTAwICBtZm46
IDAwMTI3YTAwCihYRU4pICBnZm46IDAwMDlkMDAwICBtZm46IDAwMjJjNDAwCihYRU4pICBn
Zm46IDAwMDlkMjAwICBtZm46IDAwMTI3ODAwCihYRU4pICBnZm46IDAwMDlkNDAwICBtZm46
IDAwMjJjMjAwCihYRU4pICBnZm46IDAwMDlkNjAwICBtZm46IDAwMTI3NjAwCihYRU4pICBn
Zm46IDAwMDlkODAwICBtZm46IDAwMjJjMDAwCihYRU4pICBnZm46IDAwMDlkYTAwICBtZm46
IDAwMTI3NDAwCihYRU4pICBnZm46IDAwMDlkYzAwICBtZm46IDAwMjJiZTAwCihYRU4pICBn
Zm46IDAwMDlkZTAwICBtZm46IDAwMTI3MjAwCihYRU4pICBnZm46IDAwMDllMDAwICBtZm46
IDAwMjJiYzAwCihYRU4pICBnZm46IDAwMDllMjAwICBtZm46IDAwMTI3MDAwCihYRU4pICBn
Zm46IDAwMDllNDAwICBtZm46IDAwMjJiYTAwCihYRU4pICBnZm46IDAwMDllNjAwICBtZm46
IDAwMTI2ZTAwCihYRU4pICBnZm46IDAwMDllODAwICBtZm46IDAwMjJiODAwCihYRU4pICBn
Zm46IDAwMDllYTAwICBtZm46IDAwMTI2YzAwCihYRU4pICBnZm46IDAwMDllYzAwICBtZm46
IDAwMjJiNjAwCihYRU4pICBnZm46IDAwMDllZTAwICBtZm46IDAwMTI2YTAwCihYRU4pICBn
Zm46IDAwMDlmMDAwICBtZm46IDAwMjJiNDAwCihYRU4pICBnZm46IDAwMDlmMjAwICBtZm46
IDAwMTI2ODAwCihYRU4pICBnZm46IDAwMDlmNDAwICBtZm46IDAwMjJiMjAwCihYRU4pICBn
Zm46IDAwMDlmNjAwICBtZm46IDAwMTI2NjAwCihYRU4pICBnZm46IDAwMDlmODAwICBtZm46
IDAwMjJiMDAwCihYRU4pICBnZm46IDAwMDlmYTAwICBtZm46IDAwMTI2NDAwCihYRU4pICBn
Zm46IDAwMDlmYzAwICBtZm46IDAwMjJhZTAwCihYRU4pICBnZm46IDAwMDlmZTAwICBtZm46
IDAwMTI2MjAwCihYRU4pICBnZm46IDAwMGEwMDAwICBtZm46IDAwMjJhYzAwCihYRU4pICBn
Zm46IDAwMGEwMjAwICBtZm46IDAwMTI2MDAwCihYRU4pICBnZm46IDAwMGEwNDAwICBtZm46
IDAwMjJhYTAwCihYRU4pICBnZm46IDAwMGEwNjAwICBtZm46IDAwMTI1ZTAwCihYRU4pICBn
Zm46IDAwMGEwODAwICBtZm46IDAwMjJhODAwCihYRU4pICBnZm46IDAwMGEwYTAwICBtZm46
IDAwMTI1YzAwCihYRU4pICBnZm46IDAwMGEwYzAwICBtZm46IDAwMjJhNjAwCihYRU4pICBn
Zm46IDAwMGEwZTAwICBtZm46IDAwMTI1YTAwCihYRU4pICBnZm46IDAwMGExMDAwICBtZm46
IDAwMjJhNDAwCihYRU4pICBnZm46IDAwMGExMjAwICBtZm46IDAwMTI1ODAwCihYRU4pICBn
Zm46IDAwMGExNDAwICBtZm46IDAwMjJhMjAwCihYRU4pICBnZm46IDAwMGExNjAwICBtZm46
IDAwMTI1NjAwCihYRU4pICBnZm46IDAwMGExODAwICBtZm46IDAwMjJhMDAwCihYRU4pICBn
Zm46IDAwMGExYTAwICBtZm46IDAwMTI1NDAwCihYRU4pICBnZm46IDAwMGExYzAwICBtZm46
IDAwMjI5ZTAwCihYRU4pICBnZm46IDAwMGExZTAwICBtZm46IDAwMTI1MjAwCihYRU4pICBn
Zm46IDAwMGEyMDAwICBtZm46IDAwMjI5YzAwCihYRU4pICBnZm46IDAwMGEyMjAwICBtZm46
IDAwMTI1MDAwCihYRU4pICBnZm46IDAwMGEyNDAwICBtZm46IDAwMjI5YTAwCihYRU4pICBn
Zm46IDAwMGEyNjAwICBtZm46IDAwMTI0ZTAwCihYRU4pICBnZm46IDAwMGEyODAwICBtZm46
IDAwMjI5ODAwCihYRU4pICBnZm46IDAwMGEyYTAwICBtZm46IDAwMTI0YzAwCihYRU4pICBn
Zm46IDAwMGEyYzAwICBtZm46IDAwMjI5NjAwCihYRU4pICBnZm46IDAwMGEyZTAwICBtZitu
OiAwMDEyNGEwMAooWEVOKSAgZ2ZuOiAwMDBhMzAwMCAgbWZuOiAwMDIyOTQwMAooWEVOKSAg
Z2ZuOiAwMDBhMzIwMCAgbWZuOiAwMDEyNDgwMAooWEVOKSAgZ2ZuOiAwMDBhMzQwMCAgbWZu
OiAwMDIyOTIwMAooWEVOKSAgZ2ZuOiAwMDBhMzYwMCAgbWZuOiAwMDEyNDYwMAooWEVOKSAg
Z2ZuOiAwMDBhMzgwMCAgbWZuOiAwMDIyOTAwMAooWEVOKSAgZ2ZuOiAwMDBhM2EwMCAgbWZu
OiAwMDEyNDQwMAooWEVOKSAgZ2ZuOiAwMDBhM2MwMCAgbWZuOiAwMDIyOGUwMAooWEVOKSAg
Z2ZuOiAwMDBhM2UwMCAgbWZuOiAwMDEyNDIwMAooWEVOKSAgZ2ZuOiAwMDBhNDAwMCAgbWZu
OiAwMDIyOGMwMAooWEVOKSAgZ2ZuOiAwMDBhNDIwMCAgbWZuOiAwMDEyNDAwMAooWEVOKSAg
Z2ZuOiAwMDBhNDQwMCAgbWZuOiAwMDIyOGEwMAooWEVOKSAgZ2ZuOiAwMDBhNDYwMCAgbWZu
OiAwMDEyM2UwMAooWEVOKSAgZ2ZuOiAwMDBhNDgwMCAgbWZuOiAwMDIyODgwMAooWEVOKSAg
Z2ZuOiAwMDBhNGEwMCAgbWZuOiAwMDEyM2MwMAooWEVOKSAgZ2ZuOiAwMDBhNGMwMCAgbWZu
OiAwMDIyODYwMAooWEVOKSAgZ2ZuOiAwMDBhNGUwMCAgbWZuOiAwMDEyM2EwMAooWEVOKSAg
Z2ZuOiAwMDBhNTAwMCAgbWZuOiAwMDIyODQwMAooWEVOKSAgZ2ZuOiAwMDBhNTIwMCAgbWZu
OiAwMDEyMzgwMAooWEVOKSAgZ2ZuOiAwMDBhNTQwMCAgbWZuOiAwMDIyODIwMAooWEVOKSAg
Z2ZuOiAwMDBhNTYwMCAgbWZuOiAwMDEyMzYwMAooWEVOKSAgZ2ZuOiAwMDBhNTgwMCAgbWZu
OiAwMDIyODAwMAooWEVOKSAgZ2ZuOiAwMDBhNWEwMCAgbWZuOiAwMDEyMzQwMAooWEVOKSAg
Z2ZuOiAwMDBhNWMwMCAgbWZuOiAwMDIyN2UwMAooWEVOKSAgZ2ZuOiAwMDBhNWUwMCAgbWZu
OiAwMDEyMzIwMAooWEVOKSAgZ2ZuOiAwMDBhNjAwMCAgbWZuOiAwMDIyN2MwMAooWEVOKSAg
Z2ZuOiAwMDBhNjIwMCAgbWZuOiAwMDEyMzAwMAooWEVOKSAgZ2ZuOiAwMDBhNjQwMCAgbWZu
OiAwMDIyN2EwMAooWEVOKSAgZ2ZuOiAwMDBhNjYwMCAgbWZuOiAwMDEyMmUwMAooWEVOKSAg
Z2ZuOiAwMDBhNjgwMCAgbWZuOiAwMDIyNzgwMAooWEVOKSAgZ2ZuOiAwMDBhNmEwMCAgbWZu
OiAwMDEyMmMwMAooWEVOKSAgZ2ZuOiAwMDBhNmMwMCAgbWZuOiAwMDIyNzYwMAooWEVOKSAg
Z2ZuOiAwMDBhNmUwMCAgbWZuOiAwMDEyMmEwMAooWEVOKSAgZ2ZuOiAwMDBhNzAwMCAgbWZu
OiAwMDIyNzQwMAooWEVOKSAgZ2ZuOiAwMDBhNzIwMCAgbWZuOiAwMDEyMjgwMAooWEVOKSAg
Z2ZuOiAwMDBhNzQwMCAgbWZuOiAwMDIyNzIwMAooWEVOKSAgZ2ZuOiAwMDBhNzYwMCAgbWZu
OiAwMDEyMjYwMAooWEVOKSAgZ2ZuOiAwMDBhNzgwMCAgbWZuOiAwMDIyNzAwMAooWEVOKSAg
Z2ZuOiAwMDBhN2EwMCAgbWZuOiAwMDEyMjQwMAooWEVOKSAgZ2ZuOiAwMDBhN2MwMCAgbWZu
OiAwMDIyNmUwMAooWEVOKSAgZ2ZuOiAwMDBhN2UwMCAgbWZuOiAwMDEyMjIwMAooWEVOKSAg
Z2ZuOiAwMDBhODAwMCAgbWZuOiAwMDIyNmMwMAooWEVOKSAgZ2ZuOiAwMDBhODIwMCAgbWZu
OiAwMDEyMjAwMAooWEVOKSAgZ2ZuOiAwMDBhODQwMCAgbWZuOiAwMDIyNmEwMAooWEVOKSAg
Z2ZuOiAwMDBhODYwMCAgbWZuOiAwMDEyMWUwMAooWEVOKSAgZ2ZuOiAwMDBhODgwMCAgbWZu
OiAwMDIyNjgwMAooWEVOKSAgZ2ZuOiAwMDBhOGEwMCAgbWZuOiAwMDEyMWMwMAooWEVOKSAg
Z2ZuOiAwMDBhOGMwMCAgbWZuOiAwMDIyNjYwMAooWEVOKSAgZ2ZuOiAwMDBhOGUwMCAgbWZu
OiAwMDEyMWEwMAooWEVOKSAgZ2ZuOiAwMDBhOTAwMCAgbWZuOiAwMDIyNjQwMAooWEVOKSAg
Z2ZuOiAwMDBhOTIwMCAgbWZuOiAwMDEyMTgwMAooWEVOKSAgZ2ZuOiAwMDBhOTQwMCAgbWZu
OiAwMDIyNjIwMAooWEVOKSAgZ2ZuOiAwMDBhOTYwMCAgbWZuOiAwMDEyMTYwMAooWEVOKSAg
Z2ZuOiAwMDBhOTgwMCAgbWZuOiAwMDIyNjAwMAooWEVOKSAgZ2ZuOiAwMDBhOWEwMCAgbWZu
OiAwMDEyMTQwMAooWEVOKSAgZ2ZuOiAwMDBhOWMwMCAgbWZuOiAwMDIyNWUwMAooWEVOKSAg
Z2ZuOiAwMDBhOWUwMCAgbWZuOiAwMDEyMTIwMAooWEVOKSAgZ2ZuOiAwMDBhYTAwMCAgbWZu
OiAwMDIyNWMwMAooWEVOKSAgZ2ZuOiAwMDBhYTIwMCAgbWZuOiAwMDEyMTAwMAooWEVOKSAg
Z2ZuOiAwMDBhYTQwMCAgbWZuOiAwMDIyNWEwMAooWEVOKSAgZ2ZuOiAwMDBhYTYwMCAgbWZu
OiAwMDEyMGUwMAooWEVOKSAgZ2ZuOiAwMDBhYTgwMCAgbWZuOiAwMDIyNTgwMAooWEVOKSAg
Z2ZuOiAwMDBhYWEwMCAgbWZuOiAwMDEyMGMwMAooWEVOKSAgZ2ZuOiAwMDBhYWMwMCAgbWZu
OiAwMDIyNTYwMAooWEVOKSAgZ2ZuOiAwMDBhYWUwMCAgbWZuOiAwMDEyMGEwMAooWEVOKSAg
Z2ZuOiAwMDBhYjAwMCAgbWZuOiAwMDIyNTQwMAooWEVOKSAgZ2ZuOiAwMDBhYjIwMCAgbWZu
OiAwMDEyMDgwMAooWEVOKSAgZ2ZuOiAwMDBhYjQwMCAgbWZuOiAwMDIyNTIwMAooWEVOKSAg
Z2ZuOiAwMDBhYjYwMCAgbWZuOiAwMDEyMDYwMAooWEVOKSAgZ2ZuOiAwMDBhYjgwMCAgbWZu
OiAwMDIyNTAwMAooWEVOKSAgZ2ZuOiAwMDBhYmEwMCAgbWZuOiAwMDEyMDQwMAooWEVOKSAg
Z2ZuOiAwMDBhYmMwMCAgbWZuOiAwMDIyNGUwMAooWEVOKSAgZ2ZuOiAwMDBhYmUwMCAgbWZu
OiAwMDEyMDIwMAooWEVOKSAgZ2ZuOiAwMDBhYzAwMCAgbWZuOiAwMDIyNGMwMAooWEVOKSAg
Z2ZuOiAwMDBhYzIwMCAgbWZuOiAwMDEyMDAwMAooWEVOKSAgZ2ZuOiAwMDBhYzQwMCAgbWZu
OiAwMDIyNGEwMAooWEVOKSAgZ2ZuOiAwMDBhYzYwMCAgbWZuOiAwMDA4MDgwMAooWEVOKSAg
Z2ZuOiAwMDBhYzgwMCAgbWZuOiAwMDIyNDgwMAooWEVOKSAgZ2ZuOiAwMDBhY2EwMCAgbWZu
OiAwMDA4MTAwMAooWEVOKSAgZ2ZuOiAwMDBhY2MwMCAgbWZuOiAwMDIyNDYwMAooWEVOKSAg
Z2ZuOiAwMDBhY2UwMCAgbWZuOiAwMDA4MTYwMAooWEVOKSAgZ2ZuOiAwMDBhZDAwMCAgbWZu
OiAwMDIyNDQwMAooWEVOKSAgZ2ZuOiAwMDBhZDIwMCAgbWZuOiAwMDA4MTQwMAooWEVOKSAg
Z2ZuOiAwMDBhZDQwMCAgbWZuOiAwMDIyNDIwMAooWEVOKSAgZ2ZuOiAwMDBhZDYwMCAgbWZu
OiAwMDA4MDYwMAooWEVOKSAgZ2ZuOiAwMDBhZDgwMCAgbWZuOiAwMDIyNDAwMAooWEVOKSAg
Z2ZuOiAwMDBhZGEwMCAgbWZuOiAwMDA4MDQwMAooWEVOKSAgZ2ZuOiAwMDBhZGMwMCAgbWZu
OiAwMDIyM2UwMAooWEVOKSAgZ2ZuOiAwMDBhZGUwMCAgbWZuOiAwMDA4MDIwMAooWEVOKSAg
Z2ZuOiAwMDBhZTAwMCAgbWZuOiAwMDIyM2MwMAooWEVOKSAgZ2ZuOiAwMDBhZTIwMCAgbWZu
OiAwMDA4MDAwMAooWEVOKSAgZ2ZuOiAwMDBhZTQwMCAgbWZuOiAwMDIyM2EwMAooWEVOKSAg
Z2ZuOiAwMDBhZTYwMCAgbWZuOiAwMDA4MWUwMAooWEVOKSAgZ2ZuOiAwMDBhZTgwMCAgbWZu
OiAwMDIyMzgwMAooWEVOKSAgZ2ZuOiAwMDBhZWEwMCAgbWZuOiAwMDA4MWMwMAooWEVOKSAg
Z2ZuOiAwMDBhZWMwMCAgbWZuOiAwMDIyMzYwMAooWEVOKSAgZ2ZuOiAwMDBhZWUwMCAgbWZu
OiAwMDA4MWEwMAooWEVOKSAgZ2ZuOiAwMDBhZjAwMCAgbWZuOiAwMDIyMzQwMAooWEVOKSAg
Z2ZuOiAwMDBhZjIwMCAgbWZuOiAwMDA4MTgwMAooWEVOKSAgZ2ZuOiAwMDBhZjQwMCAgbWZu
OiAwMDIyMzIwMAooWEVOKSAgZ2ZuOiAwMDBhZjYwMCAgbWZuOiAwMDA4M2UwMAooWEVOKSAg
Z2ZuOiAwMDBhZjgwMCAgbWZuOiAwMDIyMzAwMAooWEVOKSAgZ2ZuOiAwMDBhZmEwMCAgbWZu
OiAwMDA4M2MwMAooWEVOKSAgZ2ZuOiAwMDBhZmMwMCAgbWZuOiAwMDIyMmUwMAooWEVOKSAg
Z2ZuOiAwMDBhZmUwMCAgbWZuOiAwMDA4M2EwMAooWEVOKSAgZ2ZuOiAwMDBiMDAwMCAgbWZu
OiAwMDIyMmMwMAooWEVOKSAgZ2ZuOiAwMDBiMDIwMCAgbWZuOiAwMDA4MzgwMAooWEVOKSAg
Z2ZuOiAwMDBiMDQwMCAgbWZuOiAwMDIyMmEwMAooWEVOKSAgZ2ZuOiAwMDBiMDYwMCAgbWZu
OiAwMDA4MzYwMAooWEVOKSAgZ2ZuOiAwMDBiMDgwMCAgbWZuOiAwMDIyMjgwMAooWEVOKSAg
Z2ZuOiAwMDBiMGEwMCAgbWZuOiAwMDA4MzQwMAooWEVOKSAgZ2ZuOiAwMDBiMGMwMCAgbWZu
OiAwMDIyMjYwMAooWEVOKSAgZ2ZuOiAwMDBiMGUwMCAgbWZuOiAwMDA4MzIwMAooWEVOKSAg
Z2ZuOiAwMDBiMTAwMCAgbWZuOiAwMDIyMjQwMAooWEVOKSAgZ2ZuOiAwMDBiMTIwMCAgbWZu
OiAwMDA4MzAwMAooWEVOKSAgZ2ZuOiAwMDBiMTQwMCAgbWZuOiAwMDIyMjIwMAooWEVOKSAg
Z2ZuOiAwMDBiMTYwMCAgbWZuOiAwMDA4MmUwMAooWEVOKSAgZ2ZuOiAwMDBiMTgwMCAgbWZu
OiAwMDIyMjAwMAooWEVOKSAgZ2ZuOiAwMDBiMWEwMCAgbWZuOiAwMDA4MmMwMAooWEVOKSAg
Z2ZuOiAwMDBiMWMwMCAgbWZuOiAwMDIyMWUwMAooWEVOKSAgZ2ZuOiAwMDBiMWUwMCAgbWZu
OiAwMDA4MmEwMAooWEVOKSAgZ2ZuOiAwMDBiMjAwMCAgbWZuOiAwMDIyMWMwMAooWEVOKSAg
Z2ZuOiAwMDBiMjIwMCAgbWZuOiAwMDA4MjgwMAooWEVOKSAgZ2ZuOiAwMDBiMjQwMCAgbWZu
OiAwMDIyMWEwMAooWEVOKSAgZ2ZuOiAwMDBiMjYwMCAgbWZuOiAwMDA4MjYwMAooWEVOKSAg
Z2ZuOiAwMDBiMjgwMCAgbWZuOiAwMDIyMTgwMAooWEVOKSAgZ2ZuOiAwMDBiMmEwMCAgbWZu
OiAwMDA4MjQwMAooWEVOKSAgZ2ZuOiAwMDBiMmMwMCAgbWZuOiAwMDIyMTYwMAooWEVOKSAg
Z2ZuOiAwMDBiMmUwMCAgbWZuOiAwMDA4MjIwMAooWEVOKSAgZ2ZuOiAwMDBiMzAwMCAgbWZu
OiAwMDIyMTQwMAooWEVOKSAgZ2ZuOiAwMDBiMzIwMCAgbWZuOiAwMDA4MjAwMAooWEVOKSAg
Z2ZuOiAwMDBiMzQwMCAgbWZuOiAwMDIyMTIwMAooWEVOKSAgZ2ZuOiAwMDBiMzYwMCAgbWZu
OiAwMDA4N2UwMAooWEVOKSAgZ2ZuOiAwMDBiMzgwMCAgbWZuOiAwMDIyMTAwMAooWEVOKSAg
Z2ZuOiAwMDBiM2EwMCAgbWZuOiAwMDA4N2MwMAooWEVOKSAgZ2ZuOiAwMDBiM2MwMCAgbWZu
OiAwMDIyMGUwMAooWEVOKSAgZ2ZuOiAwMDBiM2UwMCAgbWZuOiAwMDA4N2EwMAooWEVOKSAg
Z2ZuOiAwMDBiNDAwMCAgbWZuOiAwMDIyMGMwMAooWEVOKSAgZ2ZuOiAwMDBiNDIwMCAgbWZu
OiAwMDA4NzgwMAooWEVOKSAgZ2ZuOiAwMDBiNDQwMCAgbWZuOiAwMDIyMGEwMAooWEVOKSAg
Z2ZuOiAwMDBiNDYwMCAgbWZuOiAwMDA4NzYwMAooWEVOKSAgZ2ZuOiAwMDBiNDgwMCAgbWZu
OiAwMDIyMDgwMAooWEVOKSAgZ2ZuOiAwMDBiNGEwMCAgbWZuOiAwMDA4NzQwMAooWEVOKSAg
Z2ZuOiAwMDBiNGMwMCAgbWZuOiAwMDIyMDYwMAooWEVOKSAgZ2ZuOiAwMDBiNGUwMCAgbWZu
OiAwMDA4NzIwMAooWEVOKSAgZ2ZuOiAwMDBiNTAwMCAgbWZuOiAwMDIyMDQwMAooWEVOKSAg
Z2ZuOiAwMDBiNTIwMCAgbWZuOiAwMDA4NzAwMAooWEVOKSAgZ2ZuOiAwMDBiNTQwMCAgbWZu
OiAwMDIyMDIwMAooWEVOKSAgZ2ZuOiAwMDBiNTYwMCAgbWZuOiAwMDA4NmUwMAooWEVOKSAg
Z2ZuOiAwMDBiNTgwMCAgbWZuOiAwMDIyMDAwMAooWEVOKSAgZ2ZuOiAwMDBiNWEwMCAgbWZu
OiAwMDA4NmMwMAooWEVOKSAgZ2ZuOiAwMDBiNWMwMCAgbWZuOiAwMDFjMDgwMAooWEVOKSAg
Z2ZuOiAwMDBiNWUwMCAgbWZuOiAwMDA4NmEwMAooWEVOKSAgZ2ZuOiAwMDBiNjAwMCAgbWZu
OiAwMDFiMDQwMAooWEVOKSAgZ2ZuOiAwMDBiNjIwMCAgbWZuOiAwMDA4NjgwMAooWEVOKSAg
Z2ZuOiAwMDBiNjQwMCAgbWZuOiAwMDE2MmMwMAooWEVOKSAgZ2ZuOiAwMDBiNjYwMCAgbWZu
OiAwMDA4NjYwMAooWEVOKSAgZ2ZuOiAwMDBiNjgwMCAgbWZuOiAwMDFkMGUwMAooWEVOKSAg
Z2ZuOiAwMDBiNmEwMCAgbWZuOiAwMDA4NjQwMAooWEVOKSAgZ2ZuOiAwMDBiNmMwMCAgbWZu
OiAwMDFkMGMwMAooWEVOKSAgZ2ZuOiAwMDBiNmUwMCAgbWZuOiAwMDA4NjIwMAooWEVOKSAg
Z2ZuOiAwMDBiNzAwMCAgbWZuOiAwMDFiMWEwMAooWEVOKSAgZ2ZuOiAwMDBiNzIwMCAgbWZu
OiAwMDA4NjAwMAooWEVOKSAgZ2ZuOiAwMDBiNzQwMCAgbWZuOiAwMDFiMTgwMAooWEVOKSAg
Z2ZuOiAwMDBiNzYwMCAgbWZuOiAwMDA4NWUwMAooWEVOKSAgZ2ZuOiAwMDBiNzgwMCAgbWZu
OiAwMDFiMGUwMAooWEVOKSAgZ2ZuOiAwMDBiN2EwMCAgbWZuOiAwMDA4NWMwMAooWEVOKSAg
Z2ZuOiAwMDBiN2MwMCAgbWZuOiAwMDFiMGMwMAooWEVOKSAgZ2ZuOiAwMDBiN2UwMCAgbWZu
OiAwMDA4NWEwMAooWEVOKSAgZ2ZuOiAwMDBiODAwMCAgbWZuOiAwMDFiMDIwMAooWEVOKSAg
Z2ZuOiAwMDBiODIwMCAgbWZuOiAwMDA4NTgwMAooWEVOKSAgZ2ZuOiAwMDBiODQwMCAgbWZu
OiAwMDFiMDAwMAooWEVOKSAgZ2ZuOiAwMDBiODYwMCAgbWZuOiAwMDA4NTYwMAooWEVOKSAg
Z2ZuOiAwMDBiODgwMCAgbWZuOiAwMDE2MmEwMAooWEVOKSAgZ2ZuOiAwMDBiOGEwMCAgbWZu
OiAwMDA4NTQwMAooWEVOKSAgZ2ZuOiAwMDBiOGMwMCAgbWZuOiAwMDE2MjgwMAooWEVOKSAg
Z2ZuOiAwMDBiOGUwMCAgbWZuOiAwMDA4NTIwMAooWEVOKSAgZ2ZuOiAwMDBiOTAwMCAgbWZu
OiAwMDFjMDYwMAooWEVOKSAgZ2ZuOiAwMDBiOTIwMCAgbWZuOiAwMDA4NTAwMAooWEVOKSAg
Z2ZuOiAwMDBiOTQwMCAgbWZuOiAwMDFjMDQwMAooWEVOKSAgZ2ZuOiAwMDBiOTYwMCAgbWZu
OiAwMDA4NGUwMAooWEVOKSAgZ2ZuOiAwMDBiOTgwMCAgbWZuOiAwMDFjMDIwMAooWEVOKSAg
Z2ZuOiAwMDBiOWEwMCAgbWZuOiAwMDA4NGMwMAooWEVOKSAgZ2ZuOiAwMDBiOWMwMCAgbWZu
OiAwMDFjMDAwMAooWEVOKSAgZ2ZuOiAwMDBiOWUwMCAgbWZuOiAwMDA4NGEwMAooWEVOKSAg
Z2ZuOiAwMDBiYTAwMCAgbWZuOiAwMDFiMTYwMAooWEVOKSAgZ2ZuOiAwMDBiYTIwMCAgbWZu
OiAwMDA4NDgwMAooWEVOKSAgZ2ZuOiAwMDBiYTQwMCAgbWZuOiAwMDFiMTQwMAooWEVOKSAg
Z2ZuOiAwMDBiYTYwMCAgbWZuOiAwMDA4NDYwMAooWEVOKSAgZ2ZuOiAwMDBiYTgwMCAgbWZu
OiAwMDFiMTIwMAooWEVOKSAgZ2ZuOiAwMDBiYWEwMCAgbWZuOiAwMDA4NDQwMAooWEVOKSAg
Z2ZuOiAwMDBiYWMwMCAgbWZuOiAwMDFiMTAwMAooWEVOKSAgZ2ZuOiAwMDBiYWUwMCAgbWZu
OiAwMDA4NDIwMAooWEVOKSAgZ2ZuOiAwMDBiYjAwMCAgbWZuOiAwMDE2MjYwMAooWEVOKSAg
Z2ZuOiAwMDBiYjIwMCAgbWZuOiAwMDA4NDAwMAooWEVOKSAgZ2ZuOiAwMDBiYjQwMCAgbWZu
OiAwMDE2MjQwMAooWEVOKSAgZ2ZuOiAwMDBiYjYwMCAgbWZuOiAwMDA4ZmUwMAooWEVOKSAg
Z2ZuOiAwMDBiYjgwMCAgbWZuOiAwMDE2MjIwMAooWEVOKSAgZ2ZuOiAwMDBiYmEwMCAgbWZu
OiAwMDA4ZmMwMAooWEVOKSAgZ2ZuOiAwMDBiYmMwMCAgbWZuOiAwMDE2MjAwMAooWEVOKSAg
Z2ZuOiAwMDBiYmUwMCAgbWZuOiAwMDA4ZmEwMAooWEVOKSAgZ2ZuOiAwMDBiYzAwMCAgbWZu
OiAwMDFkMWUwMAooWEVOKSAgZ2ZuOiAwMDBiYzIwMCAgbWZuOiAwMDA4ZjgwMAooWEVOKSAg
Z2ZuOiAwMDBiYzQwMCAgbWZuOiAwMDFkMWMwMAooWEVOKSAgZ2ZuOiAwMDBiYzYwMCAgbWZu
OiAwMDA4ZjYwMAooWEVOKSAgZ2ZuOiAwMDBiYzgwMCAgbWZuOiAwMDFkMWEwMAooWEVOKSAg
Z2ZuOiAwMDBiY2EwMCAgbWZuOiAwMDA4ZjQwMAooWEVOKSAgZ2ZuOiAwMDBiY2MwMCAgbWZu
OiAwMDFkMTgwMAooWEVOKSAgZ2ZuOiAwMDBiY2UwMCAgbWZuOiAwMDA4ZjIwMAooWEVOKSAg
Z2ZuOiAwMDBiZDAwMCAgbWZuOiAwMDFkMTYwMAooWEVOKSAgZ2ZuOiAwMDBiZDIwMCAgbWZu
OiAwMDA4ZjAwMAooWEVOKSAgZ2ZuOiAwMDBiZDQwMCAgbWZuOiAwMDFkMTQwMAooWEVOKSAg
Z2ZuOiAwMDBiZDYwMCAgbWZuOiAwMDA4ZWUwMAooWEVOKSAgZ2ZuOiAwMDBiZDgwMCAgbWZu
OiAwMDFkMTIwMAooWEVOKSAgZ2ZuOiAwMDBiZGEwMCAgbWZuOiAwMDA4ZWMwMAooWEVOKSAg
Z2ZuOiAwMDBiZGMwMCAgbWZuOiAwMDFkMTAwMAooWEVOKSAgZ2ZuOiAwMDBiZGUwMCAgbWZu
OiAwMDA4ZWEwMAooWEVOKSAgZ2ZuOiAwMDBiZTAwMCAgbWZuOiAwMDFkM2UwMAooWEVOKSAg
Z2ZuOiAwMDBiZTIwMCAgbWZuOiAwMDA4ZTgwMAooWEVOKSAgZ2ZuOiAwMDBiZTQwMCAgbWZu
OiAwMDFkM2MwMAooWEVOKSAgZ2ZuOiAwMDBiZTYwMCAgbWZuOiAwMDA4ZTYwMAooWEVOKSAg
Z2ZuOiAwMDBiZTgwMCAgbWZuOiAwMDFkM2EwMAooWEVOKSAgZ2ZuOiAwMDBiZWEwMCAgbWZu
OiAwMDA4ZTQwMAooWEVOKSAgZ2ZuOiAwMDBiZWMwMCAgbWZuOiAwMDFkMzgwMAooWEVOKSAg
Z2ZuOiAwMDBiZWUwMCAgbWZuOiAwMDA4ZTIwMAooWEVOKSAgZ2ZuOiAwMDBiZjAwMCAgbWZu
OiAwMDFkMzYwMAooWEVOKSAgZ2ZuOiAwMDBiZjIwMCAgbWZuOiAwMDA4ZTAwMAooWEVOKSAg
Z2ZuOiAwMDBiZjQwMCAgbWZuOiAwMDFkMzQwMAooWEVOKSAgZ2ZuOiAwMDBiZjYwMCAgbWZu
OiAwMDA4ZGUwMAooWEVOKSAgZ2ZuOiAwMDBiZjgwMCAgbWZuOiAwMDFkMzIwMAooWEVOKSAg
Z2ZuOiAwMDBiZmEwMCAgbWZuOiAwMDA4ZGMwMAooWEVOKSAgZ2ZuOiAwMDBiZmMwMCAgbWZu
OiAwMDFkMzAwMAooWEVOKSAgZ2ZuOiAwMDBiZmUwMCAgbWZuOiAwMDA4ZGEwMAooWEVOKSAg
Z2ZuOiAwMDBjMDAwMCAgbWZuOiAwMDFkMmUwMAooWEVOKSAgZ2ZuOiAwMDBjMDIwMCAgbWZu
OiAwMDA4ZDgwMAooWEVOKSAgZ2ZuOiAwMDBjMDQwMCAgbWZuOiAwMDFkMmMwMAooWEVOKSAg
Z2ZuOiAwMDBjMDYwMCAgbWZuOiAwMDA4ZDYwMAooWEVOKSAgZ2ZuOiAwMDBjMDgwMCAgbWZu
OiAwMDFkMmEwMAooWEVOKSAgZ2ZuOiAwMDBjMGEwMCAgbWZuOiAwMDA4ZDQwMAooWEVOKSAg
Z2ZuOiAwMDBjMGMwMCAgbWZuOiAwMDFkMjgwMAooWEVOKSAgZ2ZuOiAwMDBjMGUwMCAgbWZu
OiAwMDA4ZDIwMAooWEVOKSAgZ2ZuOiAwMDBjMTAwMCAgbWZuOiAwMDFkMjYwMAooWEVOKSAg
Z2ZuOiAwMDBjMTIwMCAgbWZuOiAwMDA4ZDAwMAooWEVOKSAgZ2ZuOiAwMDBjMTQwMCAgbWZu
OiAwMDFkMjQwMAooWEVOKSAgZ2ZuOiAwMDBjMTYwMCAgbWZuOiAwMDA4Y2UwMAooWEVOKSAg
Z2ZuOiAwMDBjMTgwMCAgbWZuOiAwMDFkMjIwMAooWEVOKSAgZ2ZuOiAwMDBjMWEwMCAgbWZu
OiAwMDA4Y2MwMAooWEVOKSAgZ2ZuOiAwMDBjMWMwMCAgbWZuOiAwMDFkMjAwMAooWEVOKSAg
Z2ZuOiAwMDBjMWUwMCAgbWZuOiAwMDA4Y2EwMAooWEVOKSAgZ2ZuOiAwMDBjMjAwMCAgbWZu
OiAwMDFiM2UwMAooWEVOKSAgZ2ZuOiAwMDBjMjIwMCAgbWZuOiAwMDA4YzgwMAooWEVOKSAg
Z2ZuOiAwMDBjMjQwMCAgbWZuOiAwMDFiM2MwMAooWEVOKSAgZ2ZuOiAwMDBjMjYwMCAgbWZu
OiAwMDA4YzYwMAooWEVOKSAgZ2ZuOiAwMDBjMjgwMCAgbWZuOiAwMDFiM2EwMAooWEVOKSAg
Z2ZuOiAwMDBjMmEwMCAgbWZuOiAwMDA4YzQwMAooWEVOKSAgZ2ZuOiAwMDBjMmMwMCAgbWZu
OiAwMDFiMzgwMAooWEVOKSAgZ2ZuOiAwMDBjMmUwMCAgbWZuOiAwMDA4YzIwMAooWEVOKSAg
Z2ZuOiAwMDBjMzAwMCAgbWZuOiAwMDFiMzYwMAooWEVOKSAgZ2ZuOiAwMDBjMzIwMCAgbWZu
OiAwMDA4YzAwMAooWEVOKSAgZ2ZuOiAwMDBjMzQwMCAgbWZuOiAwMDFiMzQwMAooWEVOKSAg
Z2ZuOiAwMDBjMzYwMCAgbWZuOiAwMDA4YmUwMAooWEVOKSAgZ2ZuOiAwMDBjMzgwMCAgbWZu
OiAwMDFiMzIwMAooWEVOKSAgZ2ZuOiAwMDBjM2EwMCAgbWZuOiAwMDA4YmMwMAooWEVOKSAg
Z2ZuOiAwMDBjM2MwMCAgbWZuOiAwMDFiMzAwMAooWEVOKSAgZ2ZuOiAwMDBjM2UwMCAgbWZu
OiAwMDA4YmEwMAooWEVOKSAgZ2ZuOiAwMDBjNDAwMCAgbWZuOiAwMDFiMmUwMAooWEVOKSAg
Z2ZuOiAwMDBjNDIwMCAgbWZuOiAwMDA4YjgwMAooWEVOKSAgZ2ZuOiAwMDBjNDQwMCAgbWZu
OiAwMDFiMmMwMAooWEVOKSAgZ2ZuOiAwMDBjNDYwMCAgbWZuOiAwMDA4YjYwMAooWEVOKSAg
Z2ZuOiAwMDBjNDgwMCAgbWZuOiAwMDFiMmEwMAooWEVOKSAgZ2ZuOiAwMDBjNGEwMCAgbWZu
OiAwMDA4YjQwMAooWEVOKSAgZ2ZuOiAwMDBjNGMwMCAgbWZuOiAwMDFiMjgwMAooWEVOKSAg
Z2ZuOiAwMDBjNGUwMCAgbWZuOiAwMDA4YjIwMAooWEVOKSAgZ2ZuOiAwMDBjNTAwMCAgbWZu
OiAwMDFiMjYwMAooWEVOKSAgZ2ZuOiAwMDBjNTIwMCAgbWZuOiAwMDA4YjAwMAooWEVOKSAg
Z2ZuOiAwMDBjNTQwMCAgbWZuOiAwMDFiMjQwMAooWEVOKSAgZ2ZuOiAwMDBjNTYwMCAgbWZu
OiAwMDA4YWUwMAooWEVOKSAgZ2ZuOiAwMDBjNTgwMCAgbWZuOiAwMDFiMjIwMAooWEVOKSAg
Z2ZuOiAwMDBjNWEwMCAgbWZuOiAwMDA4YWMwMAooWEVOKSAgZ2ZuOiAwMDBjNWMwMCAgbWZu
OiAwMDFiMjAwMAooWEVOKSAgZ2ZuOiAwMDBjNWUwMCAgbWZuOiAwMDA4YWEwMAooWEVOKSAg
Z2ZuOiAwMDBjNjAwMCAgbWZuOiAwMDE2MWUwMAooWEVOKSAgZ2ZuOiAwMDBjNjIwMCAgbWZu
OiAwMDA4YTgwMAooWEVOKSAgZ2ZuOiAwMDBjNjQwMCAgbWZuOiAwMDE2MWMwMAooWEVOKSAg
Z2ZuOiAwMDBjNjYwMCAgbWZuOiAwMDA4YTYwMAooWEVOKSAgZ2ZuOiAwMDBjNjgwMCAgbWZu
OiAwMDE2MWEwMAooWEVOKSAgZ2ZuOiAwMDBjNmEwMCAgbWZuOiAwMDA4YTQwMAooWEVOKSAg
Z2ZuOiAwMDBjNmMwMCAgbWZuOiAwMDE2MTgwMAooWEVOKSAgZ2ZuOiAwMDBjNmUwMCAgbWZu
OiAwMDA4YTIwMAooWEVOKSAgZ2ZuOiAwMDBjNzAwMCAgbWZuOiAwMDE2MTYwMAooWEVOKSAg
Z2ZuOiAwMDBjNzIwMCAgbWZuOiAwMDA4YTAwMAooWEVOKSAgZ2ZuOiAwMDBjNzQwMCAgbWZu
OiAwMDE2MTQwMAooWEVOKSAgZ2ZuOiAwMDBjNzYwMCAgbWZuOiAwMDA4OWUwMAooWEVOKSAg
Z2ZuOiAwMDBjNzgwMCAgbWZuOiAwMDE2MTIwMAooWEVOKSAgZ2ZuOiAwMDBjN2EwMCAgbWZu
OiAwMDA4OWMwMAooWEVOKSAgZ2ZuOiAwMDBjN2MwMCAgbWZuOiAwMDE2MTAwMAooWEVOKSAg
Z2ZuOiAwMDBjN2UwMCAgbWZuOiAwMDA4OWEwMAooWEVOKSAgZ2ZuOiAwMDBjODAwMCAgbWZu
OiAwMDE2MGUwMAooWEVOKSAgZ2ZuOiAwMDBjODIwMCAgbWZuOiAwMDA4OTgwMAooWEVOKSAg
Z2ZuOiAwMDBjODQwMCAgbWZuOiAwMDE2MGMwMAooWEVOKSAgZ2ZuOiAwMDBjODYwMCAgbWZu
OiAwMDA4OTYwMAooWEVOKSAgZ2ZuOiAwMDBjODgwMCAgbWZuOiAwMDE2MGEwMAooWEVOKSAg
Z2ZuOiAwMDBjOGEwMCAgbWZuOiAwMDA4OTQwMAooWEVOKSAgZ2ZuOiAwMDBjOGMwMCAgbWZu
OiAwMDE2MDgwMAooWEVOKSAgZ2ZuOiAwMDBjOGUwMCAgbWZuOiAwMDA4OTIwMAooWEVOKSAg
Z2ZuOiAwMDBjOTAwMCAgbWZuOiAwMDE2MDYwMAooWEVOKSAgZ2ZuOiAwMDBjOTIwMCAgbWZu
OiAwMDA4OTAwMAooWEVOKSAgZ2ZuOiAwMDBjOTQwMCAgbWZuOiAwMDE2MDQwMAooWEVOKSAg
Z2ZuOiAwMDBjOTYwMCAgbWZuOiAwMDA4OGUwMAooWEVOKSAgZ2ZuOiAwMDBjOTgwMCAgbWZu
OiAwMDE2MDIwMAooWEVOKSAgZ2ZuOiAwMDBjOWEwMCAgbWZuOiAwMDA4OGMwMAooWEVOKSAg
Z2ZuOiAwMDBjOWMwMCAgbWZuOiAwMDE2MDAwMAooWEVOKSAgZ2ZuOiAwMDBjOWUwMCAgbWZu
OiAwMDA4OGEwMAooWEVOKSAgZ2ZuOiAwMDBjYTAwMCAgbWZuOiAwMDFkN2UwMAooWEVOKSAg
Z2ZuOiAwMDBjYTIwMCAgbWZuOiAwMDA4ODgwMAooWEVOKSAgZ2ZuOiAwMDBjYTQwMCAgbWZu
OiAwMDFkN2MwMAooWEVOKSAgZ2ZuOiAwMDBjYTYwMCAgbWZuOiAwMDA4ODYwMAooWEVOKSAg
Z2ZuOiAwMDBjYTgwMCAgbWZuOiAwMDFkN2EwMAooWEVOKSAgZ2ZuOiAwMDBjYWEwMCAgbWZu
OiAwMDA4ODQwMAooWEVOKSAgZ2ZuOiAwMDBjYWMwMCAgbWZuOiAwMDFkNzgwMAooWEVOKSAg
Z2ZuOiAwMDBjYWUwMCAgbWZuOiAwMDA4ODIwMAooWEVOKSAgZ2ZuOiAwMDBjYjAwMCAgbWZu
OiAwMDFkNzYwMAooWEVOKSAgZ2ZuOiAwMDBjYjIwMCAgbWZuOiAwMDA4ODAwMAooWEVOKSAg
Z2ZuOiAwMDBjYjQwMCAgbWZuOiAwMDFkNzQwMAooWEVOKSAgZ2ZuOiAwMDBjYjYwMCAgbWZu
OiAwMDA5ZmUwMAooWEVOKSAgZ2ZuOiAwMDBjYjgwMCAgbWZuOiAwMDFkNzIwMAooWEVOKSAg
Z2ZuOiAwMDBjYmEwMCAgbWZuOiAwMDA5ZmMwMAooWEVOKSAgZ2ZuOiAwMDBjYmMwMCAgbWZu
OiAwMDFkNzAwMAooWEVOKSAgZ2ZuOiAwMDBjYmUwMCAgbWZuOiAwMDA5ZmEwMAooWEVOKSAg
Z2ZuOiAwMDBjYzAwMCAgbWZuOiAwMDFkNmUwMAooWEVOKSAgZ2ZuOiAwMDBjYzIwMCAgbWZu
OiAwMDA5ZjgwMAooWEVOKSAgZ2ZuOiAwMDBjYzQwMCAgbWZuOiAwMDFkNmMwMAooWEVOKSAg
Z2ZuOiAwMDBjYzYwMCAgbWZuOiAwMDA5ZjYwMAooWEVOKSAgZ2ZuOiAwMDBjYzgwMCAgbWZu
OiAwMDFkNmEwMAooWEVOKSAgZ2ZuOiAwMDBjY2EwMCAgbWZuOiAwMDA5ZjQwMAooWEVOKSAg
Z2ZuOiAwMDBjY2MwMCAgbWZuOiAwMDFkNjgwMAooWEVOKSAgZ2ZuOiAwMDBjY2UwMCAgbWZu
OiAwMDA5ZjIwMAooWEVOKSAgZ2ZuOiAwMDBjZDAwMCAgbWZuOiAwMDFkNjYwMAooWEVOKSAg
Z2ZuOiAwMDBjZDIwMCAgbWZuOiAwMDA5ZjAwMAooWEVOKSAgZ2ZuOiAwMDBjZDQwMCAgbWZu
OiAwMDFkNjQwMAooWEVOKSAgZ2ZuOiAwMDBjZDYwMCAgbWZuOiAwMDA5ZWUwMAooWEVOKSAg
Z2ZuOiAwMDBjZDgwMCAgbWZuOiAwMDFkNjIwMAooWEVOKSAgZ2ZuOiAwMDBjZGEwMCAgbWZu
OiAwMDA5ZWMwMAooWEVOKSAgZ2ZuOiAwMDBjZGMwMCAgbWZuOiAwMDFkNjAwMAooWEVOKSAg
Z2ZuOiAwMDBjZGUwMCAgbWZuOiAwMDA5ZWEwMAooWEVOKSAgZ2ZuOiAwMDBjZTAwMCAgbWZu
OiAwMDFkNWUwMAooWEVOKSAgZ2ZuOiAwMDBjZTIwMCAgbWZuOiAwMDA5ZTgwMAooWEVOKSAg
Z2ZuOiAwMDBjZTQwMCAgbWZuOiAwMDFkNWMwMAooWEVOKSAgZ2ZuOiAwMDBjZTYwMCAgbWZu
OiAwMDA5ZTYwMAooWEVOKSAgZ2ZuOiAwMDBjZTgwMCAgbWZuOiAwMDFkNWEwMAooWEVOKSAg
Z2ZuOiAwMDBjZWEwMCAgbWZuOiAwMDA5ZTQwMAooWEVOKSAgZ2ZuOiAwMDBjZWMwMCAgbWZu
OiAwMDFkNTgwMAooWEVOKSAgZ2ZuOiAwMDBjZWUwMCAgbWZuOiAwMDA5ZTIwMAooWEVOKSAg
Z2ZuOiAwMDBjZjAwMCAgbWZuOiAwMDFkNTYwMAooWEVOKSAgZ2ZuOiAwMDBjZjIwMCAgbWZu
OiAwMDA5ZTAwMAooWEVOKSAgZ2ZuOiAwMDBjZjQwMCAgbWZuOiAwMDFkNTQwMAooWEVOKSAg
Z2ZuOiAwMDBjZjYwMCAgbWZuOiAwMDA5ZGUwMAooWEVOKSAgZ2ZuOiAwMDBjZjgwMCAgbWZu
OiAwMDFkNTIwMAooWEVOKSAgZ2ZuOiAwMDBjZmEwMCAgbWZuOiAwMDA5ZGMwMAooWEVOKSAg
Z2ZuOiAwMDBjZmMwMCAgbWZuOiAwMDFkNTAwMAooWEVOKSAgZ2ZuOiAwMDBjZmUwMCAgbWZu
OiAwMDA5ZGEwMAooWEVOKSAgZ2ZuOiAwMDBkMDAwMCAgbWZuOiAwMDFkNGUwMAooWEVOKSAg
Z2ZuOiAwMDBkMDIwMCAgbWZuOiAwMDA5ZDgwMAooWEVOKSAgZ2ZuOiAwMDBkMDQwMCAgbWZu
OiAwMDFkNGMwMAooWEVOKSAgZ2ZuOiAwMDBkMDYwMCAgbWZuOiAwMDA5ZDYwMAooWEVOKSAg
Z2ZuOiAwMDBkMDgwMCAgbWZuOiAwMDFkNGEwMAooWEVOKSAgZ2ZuOiAwMDBkMGEwMCAgbWZu
OiAwMDA5ZDQwMAooWEVOKSAgZ2ZuOiAwMDBkMGMwMCAgbWZuOiAwMDFkNDgwMAooWEVOKSAg
Z2ZuOiAwMDBkMGUwMCAgbWZuOiAwMDA5ZDIwMAooWEVOKSAgZ2ZuOiAwMDBkMTAwMCAgbWZu
OiAwMDFkNDYwMAooWEVOKSAgZ2ZuOiAwMDBkMTIwMCAgbWZuOiAwMDA5ZDAwMAooWEVOKSAg
Z2ZuOiAwMDBkMTQwMCAgbWZuOiAwMDFkNDQwMAooWEVOKSAgZ2ZuOiAwMDBkMTYwMCAgbWZu
OiAwMDA5Y2UwMAooWEVOKSAgZ2ZuOiAwMDBkMTgwMCAgbWZuOiAwMDFkNDIwMAooWEVOKSAg
Z2ZuOiAwMDBkMWEwMCAgbWZuOiAwMDA5Y2MwMAooWEVOKSAgZ2ZuOiAwMDBkMWMwMCAgbWZu
OiAwMDFkNDAwMAooWEVOKSAgZ2ZuOiAwMDBkMWUwMCAgbWZuOiAwMDA5Y2EwMAooWEVOKSAg
Z2ZuOiAwMDBkMjAwMCAgbWZuOiAwMDFiN2UwMAooWEVOKSAgZ2ZuOiAwMDBkMjIwMCAgbWZu
OiAwMDA5YzgwMAooWEVOKSAgZ2ZuOiAwMDBkMjQwMCAgbWZuOiAwMDFiN2MwMAooWEVOKSAg
Z2ZuOiAwMDBkMjYwMCAgbWZuOiAwMDA5YzYwMAooWEVOKSAgZ2ZuOiAwMDBkMjgwMCAgbWZu
OiAwMDFiN2EwMAooWEVOKSAgZ2ZuOiAwMDBkMmEwMCAgbWZuOiAwMDA5YzQwMAooWEVOKSAg
Z2ZuOiAwMDBkMmMwMCAgbWZuOiAwMDFiNzgwMAooWEVOKSAgZ2ZuOiAwMDBkMmUwMCAgbWZu
OiAwMDA5YzIwMAooWEVOKSAgZ2ZuOiAwMDBkMzAwMCAgbWZuOiAwMDFiNzYwMAooWEVOKSAg
Z2ZuOiAwMDBkMzIwMCAgbWZuOiAwMDA5YzAwMAooWEVOKSAgZ2ZuOiAwMDBkMzQwMCAgbWZu
OiAwMDFiNzQwMAooWEVOKSAgZ2ZuOiAwMDBkMzYwMCAgbWZuOiAwMDA5YmUwMAooWEVOKSAg
Z2ZuOiAwMDBkMzgwMCAgbWZuOiAwMDFiNzIwMAooWEVOKSAgZ2ZuOiAwMDBkM2EwMCAgbWZu
OiAwMDA5YmMwMAooWEVOKSAgZ2ZuOiAwMDBkM2MwMCAgbWZuOiAwMDFiNzAwMAooWEVOKSAg
Z2ZuOiAwMDBkM2UwMCAgbWZuOiAwMDA5YmEwMAooWEVOKSAgZ2ZuOiAwMDBkNDAwMCAgbWZu
OiAwMDFiNmUwMAooWEVOKSAgZ2ZuOiAwMDBkNDIwMCAgbWZuOiAwMDA5YjgwMAooWEVOKSAg
Z2ZuOiAwMDBkNDQwMCAgbWZuOiAwMDFiNmMwMAooWEVOKSAgZ2ZuOiAwMDBkNDYwMCAgbWZu
OiAwMDA5YjYwMAooWEVOKSAgZ2ZuOiAwMDBkNDgwMCAgbWZuOiAwMDFiNmEwMAooWEVOKSAg
Z2ZuOiAwMDBkNGEwMCAgbWZuOiAwMDA5YjQwMAooWEVOKSAgZ2ZuOiAwMDBkNGMwMCAgbWZu
OiAwMDFiNjgwMAooWEVOKSAgZ2ZuOiAwMDBkNGUwMCAgbWZuOiAwMDA5YjIwMAooWEVOKSAg
Z2ZuOiAwMDBkNTAwMCAgbWZuOiAwMDFiNjYwMAooWEVOKSAgZ2ZuOiAwMDBkNTIwMCAgbWZu
OiAwMDA5YjAwMAooWEVOKSAgZ2ZuOiAwMDBkNTQwMCAgbWZuOiAwMDFiNjQwMAooWEVOKSAg
Z2ZuOiAwMDBkNTYwMCAgbWZuOiAwMDA5YWUwMAooWEVOKSAgZ2ZuOiAwMDBkNTgwMCAgbWZu
OiAwMDFiNjIwMAooWEVOKSAgZ2ZuOiAwMDBkNWEwMCAgbWZuOiAwMDA5YWMwMAooWEVOKSAg
Z2ZuOiAwMDBkNWMwMCAgbWZuOiAwMDFiNjAwMAooWEVOKSAgZ2ZuOiAwMDBkNWUwMCAgbWZu
OiAwMDA5YWEwMAooWEVOKSAgZ2ZuOiAwMDBkNjAwMCAgbWZuOiAwMDFiNWUwMAooWEVOKSAg
Z2ZuOiAwMDBkNjIwMCAgbWZuOiAwMDA5YTgwMAooWEVOKSAgZ2ZuOiAwMDBkNjQwMCAgbWZu
OiAwMDFiNWMwMAooWEVOKSAgZ2ZuOiAwMDBkNjYwMCAgbWZuOiAwMDA5YTYwMAooWEVOKSAg
Z2ZuOiAwMDBkNjgwMCAgbWZuOiAwMDFiNWEwMAooWEVOKSAgZ2ZuOiAwMDBkNmEwMCAgbWZu
OiAwMDA5YTQwMAooWEVOKSAgZ2ZuOiAwMDBkNmMwMCAgbWZuOiAwMDFiNTgwMAooWEVOKSAg
Z2ZuOiAwMDBkNmUwMCAgbWZuOiAwMDA5YTIwMAooWEVOKSAgZ2ZuOiAwMDBkNzAwMCAgbWZu
OiAwMDFiNTYwMAooWEVOKSAgZ2ZuOiAwMDBkNzIwMCAgbWZuOiAwMDA5YTAwMAooWEVOKSAg
Z2ZuOiAwMDBkNzQwMCAgbWZuOiAwMDFiNTQwMAooWEVOKSAgZ2ZuOiAwMDBkNzYwMCAgbWZu
OiAwMDA5OWUwMAooWEVOKSAgZ2ZuOiAwMDBkNzgwMCAgbWZuOiAwMDFiNTIwMAooWEVOKSAg
Z2ZuOiAwMDBkN2EwMCAgbWZuOiAwMDA5OWMwMAooWEVOKSAgZ2ZuOiAwMDBkN2MwMCAgbWZu
OiAwMDFiNTAwMAooWEVOKSAgZ2ZuOiAwMDBkN2UwMCAgbWZuOiAwMDA5OWEwMAooWEVOKSAg
Z2ZuOiAwMDBkODAwMCAgbWZuOiAwMDFiNGUwMAooWEVOKSAgZ2ZuOiAwMDBkODIwMCAgbWZu
OiAwMDA5OTgwMAooWEVOKSAgZ2ZuOiAwMDBkODQwMCAgbWZuOiAwMDFiNGMwMAooWEVOKSAg
Z2ZuOiAwMDBkODYwMCAgbWZuOiAwMDA5OTYwMAooWEVOKSAgZ2ZuOiAwMDBkODgwMCAgbWZu
OiAwMDFiNGEwMAooWEVOKSAgZ2ZuOiAwMDBkOGEwMCAgbWZuOiAwMDA5OTQwMAooWEVOKSAg
Z2ZuOiAwMDBkOGMwMCAgbWZuOiAwMDFiNDgwMAooWEVOKSAgZ2ZuOiAwMDBkOGUwMCAgbWZu
OiAwMDA5OTIwMAooWEVOKSAgZ2ZuOiAwMDBkOTAwMCAgbWZuOiAwMDFiNDYwMAooWEVOKSAg
Z2ZuOiAwMDBkOTIwMCAgbWZuOiAwMDA5OTAwMAooWEVOKSAgZ2ZuOiAwMDBkOTQwMCAgbWZu
OiAwMDFiNDQwMAooWEVOKSAgZ2ZuOiAwMDBkOTYwMCAgbWZuOiAwMDA5OGUwMAooWEVOKSAg
Z2ZuOiAwMDBkOTgwMCAgbWZuOiAwMDFiNDIwMAooWEVOKSAgZ2ZuOiAwMDBkOWEwMCAgbWZu
OiAwMDA5OGMwMAooWEVOKSAgZ2ZuOiAwMDBkOWMwMCAgbWZuOiAwMDFiNDAwMAooWEVOKSAg
Z2ZuOiAwMDBkOWUwMCAgbWZuOiAwMDA5OGEwMAooWEVOKSAgZ2ZuOiAwMDBkYTAwMCAgbWZu
OiAwMDFkZmUwMAooWEVOKSAgZ2ZuOiAwMDBkYTIwMCAgbWZuOiAwMDA5ODgwMAooWEVOKSAg
Z2ZuOiAwMDBkYTQwMCAgbWZuOiAwMDFkZmMwMAooWEVOKSAgZ2ZuOiAwMDBkYTYwMCAgbWZu
OiAwMDA5ODYwMAooWEVOKSAgZ2ZuOiAwMDBkYTgwMCAgbWZuOiAwMDFkZmEwMAooWEVOKSAg
Z2ZuOiAwMDBkYWEwMCAgbWZuOiAwMDA5ODQwMAooWEVOKSAgZ2ZuOiAwMDBkYWMwMCAgbWZu
OiAwMDFkZjgwMAooWEVOKSAgZ2ZuOiAwMDBkYWUwMCAgbWZuOiAwMDA5ODIwMAooWEVOKSAg
Z2ZuOiAwMDBkYjAwMCAgbWZuOiAwMDFkZjYwMAooWEVOKSAgZ2ZuOiAwMDBkYjIwMCAgbWZu
OiAwMDA5ODAwMAooWEVOKSAgZ2ZuOiAwMDBkYjQwMCAgbWZuOiAwMDFkZjQwMAooWEVOKSAg
Z2ZuOiAwMDBkYjYwMCAgbWZuOiAwMDA5N2UwMAooWEVOKSAgZ2ZuOiAwMDBkYjgwMCAgbWZu
OiAwMDFkZjIwMAooWEVOKSAgZ2ZuOiAwMDBkYmEwMCAgbWZuOiAwMDA5N2MwMAooWEVOKSAg
Z2ZuOiAwMDBkYmMwMCAgbWZuOiAwKzAxZGYwMDAKKFhFTikgIGdmbjogMDAwZGJlMDAgIG1m
bjogMDAwOTdhMDAKKFhFTikgIGdmbjogMDAwZGMwMDAgIG1mbjogMDAxZGVlMDAKKFhFTikg
IGdmbjogMDAwZGMyMDAgIG1mbjogMDAwOTc4MDAKKFhFTikgc3RkdmdhLmM6MTUxOmQyIGxl
YXZpbmcgc3RkdmdhCihYRU4pICBnZm46IDAwMGRjNDAwICBtZm46IDAwMWRlYzAwCihYRU4p
ICBnZm46IDAwMGRjNjAwICBtZm46IDAwMDk3NjAwCihYRU4pICBnZm46IDAwMGRjODAwICBt
Zm46IDAwMWRlYTAwCihYRU4pICBnZm46IDAwMGRjYTAwICBtZm46IDAwMDk3NDAwCihYRU4p
ICBnZm46IDAwMGRjYzAwICBtZm46IDAwMWRlODAwCihYRU4pICBnZm46IDAwMGRjZTAwICBt
Zm46IDAwMDk3MjAwCihYRU4pICBnZm46IDAwMGRkMDAwICBtZm46IDAwMWRlNjAwCihYRU4p
ICBnZm46IDAwMGRkMjAwICBtZm46IDAwMDk3MDAwCihYRU4pICBnZm46IDAwMGRkNDAwICBt
Zm46IDAwMWRlNDAwCihYRU4pICBnZm46IDAwMGRkNjAwICBtZm46IDAwMDk2ZTAwCihYRU4p
ICBnZm46IDAwMGRkODAwICBtZm46IDAwMWRlMjAwCihYRU4pICBnZm46IDAwMGRkYTAwICBt
Zm46IDAwMDk2YzAwCihYRU4pICBnZm46IDAwMGRkYzAwICBtZm46IDAwMWRlMDAwCihYRU4p
ICBnZm46IDAwMGRkZTAwICBtZm46IDAwMDk2YTAwCihYRU4pICBnZm46IDAwMGRlMDAwICBt
Zm46IDAwMWRkZTAwCihYRU4pICBnZm46IDAwMGRlMjAwICBtZm46IDAwMDk2ODAwCihYRU4p
ICBnZm46IDAwMGRlNDAwICBtZm46IDAwMWRkYzAwCihYRU4pICBnZm46IDAwMGRlNjAwICBt
Zm46IDAwMDk2NjAwCihYRU4pICBnZm46IDAwMGRlODAwICBtZm46IDAwMWRkYTAwCihYRU4p
ICBnZm46IDAwMGRlYTAwICBtZm46IDAwMDk2NDAwCihYRU4pICBnZm46IDAwMGRlYzAwICBt
Zm46IDAwMWRkODAwCihYRU4pICBnZm46IDAwMGRlZTAwICBtZm46IDAwMDk2MjAwCihYRU4p
ICBnZm46IDAwMGRmMDAwICBtZm46IDAwMWRkNjAwCihYRU4pICBnZm46IDAwMGRmMjAwICBt
Zm46IDAwMDk2MDAwCihYRU4pICBnZm46IDAwMGRmNDAwICBtZm46IDAwMWRkNDAwCihYRU4p
ICBnZm46IDAwMGRmNjAwICBtZm46IDAwMDk1ZTAwCihYRU4pICBnZm46IDAwMGRmODAwICBt
Zm46IDAwMWRkMjAwCihYRU4pICBnZm46IDAwMGRmYTAwICBtZm46IDAwMDk1YzAwCihYRU4p
ICBnZm46IDAwMGRmYzAwICBtZm46IDAwMWRkMDAwCihYRU4pICBnZm46IDAwMGRmZTAwICBt
Zm46IDAwMDk1YTAwCihYRU4pICBnZm46IDAwMGUwMDAwICBtZm46IDAwMWRjZTAwCihYRU4p
ICBnZm46IDAwMGUwMjAwICBtZm46IDAwMDk1ODAwCihYRU4pICBnZm46IDAwMGUwNDAwICBt
Zm46IDAwMWRjYzAwCihYRU4pICBnZm46IDAwMGUwNjAwICBtZm46IDAwMDk1NjAwCihYRU4p
ICBnZm46IDAwMGUwODAwICBtZm46IDAwMWRjYTAwCihYRU4pICBnZm46IDAwMGUwYTAwICBt
Zm46IDAwMDk1NDAwCihYRU4pICBnZm46IDAwMGUwYzAwICBtZm46IDAwMWRjODAwCihYRU4p
ICBnZm46IDAwMGUwZTAwICBtZm46IDAwMDk1MjAwCihYRU4pICBnZm46IDAwMGUxMDAwICBt
Zm46IDAwMWRjNjAwCihYRU4pICBnZm46IDAwMGUxMjAwICBtZm46IDAwMDk1MDAwCihYRU4p
ICBnZm46IDAwMGUxNDAwICBtZm46IDAwMWRjNDAwCihYRU4pICBnZm46IDAwMGUxNjAwICBt
Zm46IDAwMDk0ZTAwCihYRU4pICBnZm46IDAwMGUxODAwICBtZm46IDAwMWRjMjAwCihYRU4p
ICBnZm46IDAwMGUxYTAwICBtZm46IDAwMDk0YzAwCihYRU4pICBnZm46IDAwMGUxYzAwICBt
Zm46IDAwMWRjMDAwCihYRU4pICBnZm46IDAwMGUxZTAwICBtZm46IDAwMDk0YTAwCihYRU4p
ICBnZm46IDAwMGUyMDAwICBtZm46IDAwMWRiZTAwCihYRU4pICBnZm46IDAwMGUyMjAwICBt
Zm46IDAwMDk0ODAwCihYRU4pICBnZm46IDAwMGUyNDAwICBtZm46IDAwMWRiYzAwCihYRU4p
ICBnZm46IDAwMGUyNjAwICBtZm46IDAwMDk0NjAwCihYRU4pICBnZm46IDAwMGUyODAwICBt
Zm46IDAwMWRiYTAwCihYRU4pICBnZm46IDAwMGUyYTAwICBtZm46IDAwMDk0NDAwCihYRU4p
ICBnZm46IDAwMGUyYzAwICBtZm46IDAwMWRiODAwCihYRU4pICBnZm46IDAwMGUyZTAwICBt
Zm46IDAwMDk0MjAwCihYRU4pICBnZm46IDAwMGUzMDAwICBtZm46IDAwMWRiNjAwCihYRU4p
ICBnZm46IDAwMGUzMjAwICBtZm46IDAwMDk0MDAwCihYRU4pICBnZm46IDAwMGUzNDAwICBt
Zm46IDAwMWRiNDAwCihYRU4pICBnZm46IDAwMGUzNjAwICBtZm46IDAwMDkzZTAwCihYRU4p
ICBnZm46IDAwMGUzODAwICBtZm46IDAwMWRiMjAwCihYRU4pICBnZm46IDAwMGUzYTAwICBt
Zm46IDAwMDkzYzAwCihYRU4pICBnZm46IDAwMGUzYzAwICBtZm46IDAwMWRiMDAwCihYRU4p
ICBnZm46IDAwMGUzZTAwICBtZm46IDAwMDkzYTAwCihYRU4pICBnZm46IDAwMGU0MDAwICBt
Zm46IDAwMWRhZTAwCihYRU4pICBnZm46IDAwMGU0MjAwICBtZm46IDAwMDkzODAwCihYRU4p
ICBnZm46IDAwMGU0NDAwICBtZm46IDAwMWRhYzAwCihYRU4pICBnZm46IDAwMGU0NjAwICBt
Zm46IDAwMDkzNjAwCihYRU4pICBnZm46IDAwMGU0ODAwICBtZm46IDAwMWRhYTAwCihYRU4p
ICBnZm46IDAwMGU0YTAwICBtZm46IDAwMDkzNDAwCihYRU4pICBnZm46IDAwMGU0YzAwICBt
Zm46IDAwMWRhODAwCihYRU4pICBnZm46IDAwMGU0ZTAwICBtZm46IDAwMDkzMjAwCihYRU4p
ICBnZm46IDAwMGU1MDAwICBtZm46IDAwMWRhNjAwCihYRU4pICBnZm46IDAwMGU1MjAwICBt
Zm46IDAwMDkzMDAwCihYRU4pICBnZm46IDAwMGU1NDAwICBtZm46IDAwMWRhNDAwCihYRU4p
ICBnZm46IDAwMGU1NjAwICBtZm46IDAwMDkyZTAwCihYRU4pICBnZm46IDAwMGU1ODAwICBt
Zm46IDAwMWRhMjAwCihYRU4pICBnZm46IDAwMGU1YTAwICBtZm46IDAwMDkyYzAwCihYRU4p
ICBnZm46IDAwMGU1YzAwICBtZm46IDAwMWRhMDAwCihYRU4pICBnZm46IDAwMGU1ZTAwICBt
Zm46IDAwMDkyYTAwCihYRU4pICBnZm46IDAwMGU2MDAwICBtZm46IDAwMWQ5ZTAwCihYRU4p
ICBnZm46IDAwMGU2MjAwICBtZm46IDAwMDkyODAwCihYRU4pICBnZm46IDAwMGU2NDAwICBt
Zm46IDAwMWQ5YzAwCihYRU4pICBnZm46IDAwMGU2NjAwICBtZm46IDAwMDkyNjAwCihYRU4p
ICBnZm46IDAwMGU2ODAwICBtZm46IDAwMWQ5YTAwCihYRU4pICBnZm46IDAwMGU2YTAwICBt
Zm46IDAwMDkyNDAwCihYRU4pICBnZm46IDAwMGU2YzAwICBtZm46IDAwMWQ5ODAwCihYRU4p
ICBnZm46IDAwMGU2ZTAwICBtZm46IDAwMDkyMjAwCihYRU4pICBnZm46IDAwMGU3MDAwICBt
Zm46IDAwMWQ5NjAwCihYRU4pICBnZm46IDAwMGU3MjAwICBtZm46IDAwMDkyMDAwCihYRU4p
ICBnZm46IDAwMGU3NDAwICBtZm46IDAwMWQ5NDAwCihYRU4pICBnZm46IDAwMGU3NjAwICBt
Zm46IDAwMDkxZTAwCihYRU4pICBnZm46IDAwMGU3ODAwICBtZm46IDAwMWQ5MjAwCihYRU4p
ICBnZm46IDAwMGU3YTAwICBtZm46IDAwMDkxYzAwCihYRU4pICBnZm46IDAwMGU3YzAwICBt
Zm46IDAwMWQ5MDAwCihYRU4pICBnZm46IDAwMGU3ZTAwICBtZm46IDAwMDkxYTAwCihYRU4p
ICBnZm46IDAwMGU4MDAwICBtZm46IDAwMWQ4ZTAwCihYRU4pICBnZm46IDAwMGU4MjAwICBt
Zm46IDAwMDkxODAwCihYRU4pICBnZm46IDAwMGU4NDAwICBtZm46IDAwMWQ4YzAwCihYRU4p
ICBnZm46IDAwMGU4NjAwICBtZm46IDAwMDkxNjAwCihYRU4pICBnZm46IDAwMGU4ODAwICBt
Zm46IDAwMWQ4YTAwCihYRU4pICBnZm46IDAwMGU4YTAwICBtZm46IDAwMDkxNDAwCihYRU4p
ICBnZm46IDAwMGU4YzAwICBtZm46IDAwMWQ4ODAwCihYRU4pICBnZm46IDAwMGU4ZTAwICBt
Zm46IDAwMDkxMjAwCihYRU4pICBnZm46IDAwMGU5MDAwICBtZm46IDAwMWQ4NjAwCihYRU4p
ICBnZm46IDAwMGU5MjAwICBtZm46IDAwMDkxMDAwCihYRU4pICBnZm46IDAwMGU5NDAwICBt
Zm46IDAwMWQ4NDAwCihYRU4pICBnZm46IDAwMGU5NjAwICBtZm46IDAwMDkwZTAwCihYRU4p
ICBnZm46IDAwMGU5ODAwICBtZm46IDAwMWQ4MjAwCihYRU4pICBnZm46IDAwMGU5YTAwICBt
Zm46IDAwMDkwYzAwCihYRU4pICBnZm46IDAwMGU5YzAwICBtZm46IDAwMWQ4MDAwCihYRU4p
ICBnZm46IDAwMGU5ZTAwICBtZm46IDAwMDkwYTAwCihYRU4pICBnZm46IDAwMGVhMDAwICBt
Zm46IDAwMWJmZTAwCihYRU4pICBnZm46IDAwMGVhMjAwICBtZm46IDAwMDkwODAwCihYRU4p
ICBnZm46IDAwMGVhNDAwICBtZm46IDAwMWJmYzAwCihYRU4pICBnZm46IDAwMGVhNjAwICBt
Zm46IDAwMDkwNjAwCihYRU4pICBnZm46IDAwMGVhODAwICBtZm46IDAwMWJmYTAwCihYRU4p
ICBnZm46IDAwMGVhYTAwICBtZm46IDAwMDkwNDAwCihYRU4pICBnZm46IDAwMGVhYzAwICBt
Zm46IDAwMWJmODAwCihYRU4pICBnZm46IDAwMGVhZTAwICBtZm46IDAwMDkwMjAwCihYRU4p
ICBnZm46IDAwMGViMDAwICBtZm46IDAwMWJmNjAwCihYRU4pICBnZm46IDAwMGViMjAwICBt
Zm46IDAwMDkwMDAwCihYRU4pICBnZm46IDAwMGViNDAwICBtZm46IDAwMWJmNDAwCihYRU4p
ICBnZm46IDAwMGViNjAwICBtZm46IDAwMWJmMjAwCihYRU4pICBnZm46IDAwMGViODAwICBt
Zm46IDAwMWJmMDAwCihYRU4pICBnZm46IDAwMGViYTAwICBtZm46IDAwMWJlZTAwCihYRU4p
ICBnZm46IDAwMGViYzAwICBtZm46IDAwMWJlYzAwCihYRU4pICBnZm46IDAwMGViZTAwICBt
Zm46IDAwMWJlYTAwCihYRU4pICBnZm46IDAwMGVjMDAwICBtZm46IDAwMWJlODAwCihYRU4p
ICBnZm46IDAwMGVjMjAwICBtZm46IDAwMWJlNjAwCihYRU4pICBnZm46IDAwMGVjNDAwICBt
Zm46IDAwMWJlNDAwCihYRU4pICBnZm46IDAwMGVjNjAwICBtZm46IDAwMWJlMjAwCihYRU4p
ICBnZm46IDAwMGVjODAwICBtZm46IDAwMWJlMDAwCihYRU4pICBnZm46IDAwMGVjYTAwICBt
Zm46IDAwMWJkZTAwCihYRU4pICBnZm46IDAwMGVjYzAwICBtZm46IDAwMWJkYzAwCihYRU4p
ICBnZm46IDAwMGVjZTAwICBtZm46IDAwMWJkYTAwCihYRU4pICBnZm46IDAwMGVkMDAwICBt
Zm46IDAwMWJkODAwCihYRU4pICBnZm46IDAwMGVkMjAwICBtZm46IDAwMWJkNjAwCihYRU4p
ICBnZm46IDAwMGVkNDAwICBtZm46IDAwMWJkNDAwCihYRU4pICBnZm46IDAwMGVkNjAwICBt
Zm46IDAwMWJkMjAwCihYRU4pICBnZm46IDAwMGVkODAwICBtZm46IDAwMWJkMDAwCihYRU4p
ICBnZm46IDAwMGVkYTAwICBtZm46IDAwMWJjZTAwCihYRU4pICBnZm46IDAwMGVkYzAwICBt
Zm46IDAwMWJjYzAwCihYRU4pICBnZm46IDAwMGVkZTAwICBtZm46IDAwMWJjYTAwCihYRU4p
ICBnZm46IDAwMGVlMDAwICBtZm46IDAwMWJjODAwCihYRU4pICBnZm46IDAwMGVlMjAwICBt
Zm46IDAwMWJjNjAwCihYRU4pICBnZm46IDAwMGVlNDAwICBtZm46IDAwMWJjNDAwCihYRU4p
ICBnZm46IDAwMGVlNjAwICBtZm46IDAwMWJjMjAwCihYRU4pICBnZm46IDAwMGVlODAwICBt
Zm46IDAwMWJjMDAwCihYRU4pICBnZm46IDAwMGVlYTAwICBtZm46IDAwMWJiZTAwCihYRU4p
ICBnZm46IDAwMGVlYzAwICBtZm46IDAwMWJiYzAwCihYRU4pICBnZm46IDAwMGVlZTAwICBt
Zm46IDAwMWJiYTAwCihYRU4pICBnZm46IDAwMGVmMDAwICBtZm46IDAwMWJiODAwCihYRU4p
ICBnZm46IDAwMGVmMjAwICBtZm46IDAwMWJiNjAwCihYRU4pICBnZm46IDAwMGVmNDAwICBt
Zm46IDAwMWJiNDAwCihYRU4pICBnZm46IDAwMGVmNjAwICBtZm46IDAwMWJiMjAwCihYRU4p
ICBnZm46IDAwMGVmODAwICBtZm46IDAwMWJiMDAwCihYRU4pICBnZm46IDAwMGVmYTAwICBt
Zm46IDAwMWJhZTAwCihYRU4pICBnZm46IDAwMGVmYzAwICBtZm46IDAwMWJhYzAwCihYRU4p
ICBnZm46IDAwMGVmZTAwICBtZm46IDAwMWJhYTAwCihYRU4pICAgZ2ZuOiAwMDBmMDAwMCAg
bWZuOiAwMDIxOGUwZQooWEVOKSAgIGdmbjogMDAwZjAwMDEgIG1mbjogMDAxMDFlMmEKKFhF
TikgICBnZm46IDAwMGYwMDAyICBtZm46IDAwMjE4ZTBkCihYRU4pICAgZ2ZuOiAwMDBmMDAw
MyAgbWZuOiAwMDEwMjg4ZgooWEVOKSAgIGdmbjogMDAwZjAwMDQgIG1mbjogMDAyMThlMGMK
KFhFTikgICBnZm46IDAwMGYwMDA1ICBtZm46IDAwMTAyODhlCihYRU4pICAgZ2ZuOiAwMDBm
MDAwNiAgbWZuOiAwMDIxOGUwYgooWEVOKSAgIGdmbjogMDAwZjAwMDcgIG1mbjogMDAxMDE1
M2IKKFhFTikgICBnZm46IDAwMGYwMDA4ICBtZm46IDAwMjE4ZTBhCihYRU4pICAgZ2ZuOiAw
MDBmMDAwOSAgbWZuOiAwMDEwMTUzYQooWEVOKSAgIGdmbjogMDAwZjAwMGEgIG1mbjogMDAy
MThlMDkKKFhFTikgICBnZm46IDAwMGYwMDBiICBtZm46IDAwMTAyODU3CihYRU4pICAgZ2Zu
OiAwMDBmMDAwYyAgbWZuOiAwMDIxOGUwOAooWEVOKSAgIGdmbjogMDAwZjAwMGQgIG1mbjog
MDAxMDI4NTYKKFhFTikgICBnZm46IDAwMGYwMDBlICBtZm46IDAwMjE4ZTA3CihYRU4pICAg
Z2ZuOiAwMDBmMDAwZiAgbWZuOiAwMDEwMTViNwooWEVOKSAgIGdmbjogMDAwZjAwMTAgIG1m
bjogMDAyMThlMDYKKFhFTikgICBnZm46IDAwMGYwMDExICBtZm46IDAwMTAxNWI2CihYRU4p
ICAgZ2ZuOiAwMDBmMDAxMiAgbWZuOiAwMDIxOGUwNQooWEVOKSAgIGdmbjogMDAwZjAwMTMg
IG1mbjogMDAxMDI4MjcKKFhFTikgICBnZm46IDAwMGYwMDE0ICBtZm46IDAwMjE4ZTA0CihY
RU4pICAgZ2ZuOiAwMDBmMDAxNSAgbWZuOiAwMDEwMjgyNgooWEVOKSAgIGdmbjogMDAwZjAw
MTYgIG1mbjogMDAyMThlMDMKKFhFTikgICBnZm46IDAwMGYwMDE3ICBtZm46IDAwMTAwNTJi
CihYRU4pICAgZ2ZuOiAwMDBmMDAxOCAgbWZuOiAwMDIxOGUwMgooWEVOKSAgIGdmbjogMDAw
ZjAwMTkgIG1mbjogMDAxMDA1MmEKKFhFTikgICBnZm46IDAwMGYwMDFhICBtZm46IDAwMjE4
ZTAxCihYRU4pICAgZ2ZuOiAwMDBmMDAxYiAgbWZuOiAwMDEwMjgxNwooWEVOKSAgIGdmbjog
MDAwZjAwMWMgIG1mbjogMDAyMThlMDAKKFhFTikgICBnZm46IDAwMGYwMDFkICBtZm46IDAw
MTAyODE2CihYRU4pICAgZ2ZuOiAwMDBmMDAxZSAgbWZuOiAwMDFiMWM0YQooWEVOKSAgIGdm
bjogMDAwZjAwMWYgIG1mbjogMDAxMDJjM2IKKFhFTikgICBnZm46IDAwMGYwMDIwICBtZm46
IDAwMWIxZjlhCihYRU4pICAgZ2ZuOiAwMDBmMDAyMSAgbWZuOiAwMDEwMmMzYQooWEVOKSAg
IGdmbjogMDAwZjAwMjIgIG1mbjogMDAxZDBiMTIKKFhFTikgICBnZm46IDAwMGYwMDIzICBt
Zm46IDAwMTAyYzhiCihYRU4pICAgZ2ZuOiAwMDBmMDAyNCAgbWZuOiAwMDFkMGIwYgooWEVO
KSAgIGdmbjogMDAwZjAwMjUgIG1mbjogMDAxMDJjOGEKKFhFTikgICBnZm46IDAwMGYwMDI2
ICBtZm46IDAwMWQwYjhkCihYRU4pICAgZ2ZuOiAwMDBmMDAyNyAgbWZuOiAwMDEwMjcwZgoo
WEVOKSAgIGdmbjogMDAwZjAwMjggIG1mbjogMDAxYjFmYzMKKFhFTikgICBnZm46IDAwMGYw
MDI5ICBtZm46IDAwMTAyNzBlCihYRU4pICAgZ2ZuOiAwMDBmMDAyYSAgbWZuOiAwMDFiMWMy
MwooWEVOKSAgIGdmbjogMDAwZjAwMmIgIG1mbjogMDAxMDFlYmIKKFhFTikgICBnZm46IDAw
MGYwMDJjICBtZm46IDAwMWQwYmI4CihYRU4pICAgZ2ZuOiAwMDBmMDAyZCAgbWZuOiAwMDEw
MWViYQooWEVOKSAgIGdmbjogMDAwZjAwMmUgIG1mbjogMDAxZDBiYmMKKFhFTikgICBnZm46
IDAwMGYwMDJmICBtZm46IDAwMTAyZTZiCihYRU4pICAgZ2ZuOiAwMDBmMDAzMCAgbWZuOiAw
MDFkMGI5YwooWEVOKSAgIGdmbjogMDAwZjAwMzEgIG1mbjogMDAxMDJlNmEKKFhFTikgICBn
Zm46IDAwMGYwMDMyICBtZm46IDAwMWQwYmM2CihYRU4pICAgZ2ZuOiAwMDBmMDAzMyAgbWZu
OiAwMDEwMmM1ZgooWEVOKSAgIGdmbjogMDAwZjAwMzQgIG1mbjogMDAxZDBiNDYKKFhFTikg
ICBnZm46IDAwMGYwMDM1ICBtZm46IDAwMTAyYzVlCihYRU4pICAgZ2ZuOiAwMDBmMDAzNiAg
bWZuOiAwMDFkMGI0NAooWEVOKSAgIGdmbjogMDAwZjAwMzcgIG1mbjogMDAxMDJlOTMKKFhF
TikgICBnZm46IDAwMGYwMDM4ICBtZm46IDAwMWQwYmFkCihYRU4pICAgZ2ZuOiAwMDBmMDAz
OSAgbWZuOiAwMDEwMmU5MgooWEVOKSAgIGdmbjogMDAwZjAwM2EgIG1mbjogMDAxZDBiYWUK
KFhFTikgICBnZm46IDAwMGYwMDNiICBtZm46IDAwMTAyODliCihYRU4pICAgZ2ZuOiAwMDBm
MDAzYyAgbWZuOiAwMDFkMGI0MQooWEVOKSAgIGdmbjogMDAwZjAwM2QgIG1mbjogMDAxMDI4
OWEKKFhFTikgICBnZm46IDAwMGYwMDNlICBtZm46IDAwMWIxZmVhCihYRU4pICAgZ2ZuOiAw
MDBmMDAzZiAgbWZuOiAwMDEwMmNjNwooWEVOKSAgIGdmbjogMDAwZjAwNDAgIG1mbjogMDAx
ZDBiYzEKKFhFTikgICBnZm46IDAwMGYwMDQxICBtZm46IDAwMTAyY2M2CihYRU4pICAgZ2Zu
OiAwMDBmMDA0MiAgbWZuOiAwMDFiMWZkZgooWEVOKSAgIGdmbjogMDAwZjAwNDMgIG1mbjog
MDAxMDI5NjMKKFhFTikgICBnZm46IDAwMGYwMDQ0ICBtZm46IDAwMWQwYjM3CihYRU4pICAg
Z2ZuOiAwMDBmMDA0NSAgbWZuOiAwMDEwMjk2MgooWEVOKSAgIGdmbjogMDAwZjAwNDYgIG1m
bjogMDAxZDBiOWUKKFhFTikgICBnZm46IDAwMGYwMDQ3ICBtZm46IDAwMTAyODg3CihYRU4p
ICAgZ2ZuOiAwMDBmMDA0OCAgbWZuOiAwMDFiMDZkOAooWEVOKSAgIGdmbjogMDAwZjAwNDkg
IG1mbjogMDAxMDI4ODYKKFhFTikgICBnZm46IDAwMGYwMDRhICBtZm46IDAwMTYyZWQ4CihY
RU4pICAgZ2ZuOiAwMDBmMDA0YiAgbWZuOiAwMDEwMTljNQooWEVOKSAgIGdmbjogMDAwZjAw
NGMgIG1mbjogMDAxNjMyZDgKKFhFTikgICBnZm46IDAwMGYwMDRkICBtZm46IDAwMTAxOWM0
CihYRU4pICAgZ2ZuOiAwMDBmMDA0ZSAgbWZuOiAwMDFjMGE2MQooWEVOKSAgIGdmbjogMDAw
ZjAwNGYgIG1mbjogMDAxMDE5YzMKKFhFTikgICBnZm46IDAwMGYwMDUwICBtZm46IDAwMWMw
YTYwCihYRU4pICAgZ2ZuOiAwMDBmMDA1MSAgbWZuOiAwMDEwMTljMgooWEVOKSAgIGdmbjog
MDAwZjAwNTIgIG1mbjogMDAxYjFjNDkKKFhFTikgICBnZm46IDAwMGYwMDUzICBtZm46IDAw
MTAxNTBmCihYRU4pICAgZ2ZuOiAwMDBmMDA1NCAgbWZuOiAwMDFiMWM0OAooWEVOKSAgIGdm
bjogMDAwZjAwNTUgIG1mbjogMDAxMDE1MGUKKFhFTikgICBnZm46IDAwMGYwMDU2ICBtZm46
IDAwMWIxZjk5CihYRU4pICAgZ2ZuOiAwMDBmMDA1NyAgbWZuOiAwMDEwMWFhOQooWEVOKSAg
IGdmbjogMDAwZjAwNTggIG1mbjogMDAxYjFmOTgKKFhFTikgICBnZm46IDAwMGYwMDU5ICBt
Zm46IDAwMTAxYWE4CihYRU4pICAgZ2ZuOiAwMDBmMDA1YSAgbWZuOiAwMDFkMGI2NQooWEVO
KSAgIGdmbjogMDAwZjAwNWIgIG1mbjogMDAxMDFhYTcKKFhFTikgICBnZm46IDAwMGYwMDVj
ICBtZm46IDAwMWQwYjY0CihYRU4pICAgZ2ZuOiAwMDBmMDA1ZCAgbWZuOiAwMDEwMWFhNgoo
WEVOKSAgIGdmbjogMDAwZjAwNWUgIG1mbjogMDAxZDBiMTEKKFhFTikgICBnZm46IDAwMGYw
MDVmICBtZm46IDAwMTAxOTg1CihYRU4pICAgZ2ZuOiAwMDBmMDA2MCAgbWZuOiAwMDFkMGIx
MAooWEVOKSAgIGdmbjogMDAwZjAwNjEgIG1mbjogMDAxMDE5ODQKKFhFTikgICBnZm46IDAw
MGYwMDYyICBtZm46IDAwMWQwYmM1CihYRU4pICAgZ2ZuOiAwMDBmMDA2MyAgbWZuOiAwMDEw
MTk4MwooWEVOKSAgIGdmbjogMDAwZjAwNjQgIG1mbjogMDAxZDBiYzQKKFhFTikgICBnZm46
IDAwMGYwMDY1ICBtZm46IDAwMTAxOTgyCihYRU4pICAgZ2ZuOiAwMDBmMDA2NiAgbWZuOiAw
MDFkMGI0MwooWEVOKSAgIGdmbjogMDAwZjAwNjcgIG1mbjogMDAxMDFlNmIKKFhFTikgICBn
Zm46IDAwMGYwMDY4ICBtZm46IDAwMWQwYjQyCihYRU4pICAgZ2ZuOiAwMDBmMDA2OSAgbWZu
OiAwMDEwMWU2YQooWEVOKSAgIGdmbjogMDAwZjAwNmEgIG1mbjogMDAxYjFmZTkKKFhFTikg
ICBnZm46IDAwMGYwMDZiICBtZm46IDAwMTAyNzYzCihYRU4pICAgZ2ZuOiAwMDBmMDA2YyAg
bWZuOiAwMDFiMWZlOAooWEVOKSAgIGdmbjogMDAwZjAwNmQgIG1mbjogMDAxMDI3NjIKKFhF
TikgICBnZm46IDAwMGYwMDZlICBtZm46IDAwMWQwYmMzCihYRU4pICAgZ2ZuOiAwMDBmMDA2
ZiAgbWZuOiAwMDEwMmQzZgooWEVOKSAgIGdmbjogMDAwZjAwNzAgIG1mbjogMDAxZDBiYzIK
KFhFTikgICBnZm46IDAwMGYwMDcxICBtZm46IDAwMTAyZDNlCihYRU4pICAgZ2ZuOiAwMDBm
MDA3MiAgbWZuOiAwMDFkMGJmZgooWEVOKSAgIGdmbjogMDAwZjAwNzMgIG1mbjogMDAxMDA0
NmIKKFhFTikgICBnZm46IDAwMGYwMDc0ICBtZm46IDAwMWQwYmZlCihYRU4pICAgZ2ZuOiAw
MDBmMDA3NSAgbWZuOiAwMDEwMDQ2YQooWEVOKSAgIGdmbjogMDAwZjAwNzYgIG1mbjogMDAx
ZDBiZmQKKFhFTikgICBnZm46IDAwMGYwMDc3ICBtZm46IDAwMTAwM2Q3CihYRU4pICAgZ2Zu
OiAwMDBmMDA3OCAgbWZuOiAwMDFkMGJmYwooWEVOKSAgIGdmbjogMDAwZjAwNzkgIG1mbjog
MDAxMDAzZDYKKFhFTikgICBnZm46IDAwMGYwMDdhICBtZm46IDAwMWQwYmNmCihYRU4pICAg
Z2ZuOiAwMDBmMDA3YiAgbWZuOiAwMDEwMDMxYgooWEVOKSAgIGdmbjogMDAwZjAwN2MgIG1m
bjogMDAxZDBiY2UKKFhFTikgICBnZm46IDAwMGYwMDdkICBtZm46IDAwMTAwMzFhCihYRU4p
ICAgZ2ZuOiAwMDBmMDA3ZSAgbWZuOiAwMDFkMGJjZAooWEVOKSAgIGdmbjogMDAwZjAwN2Yg
IG1mbjogMDAxMDJkYzcKKFhFTikgICBnZm46IDAwMGYwMDgwICBtZm46IDAwMWQwYmNjCihY
RU4pICAgZ2ZuOiAwMDBmMDA4MSAgbWZuOiAwMDEwMmRjNgooWEVOKSAgIGdmbjogMDAwZjAw
ODIgIG1mbjogMDAxZDBiMGYKKFhFTikgICBnZm46IDAwMGYwMDgzICBtZm46IDAwMTAxZDk3
CihYRU4pICAgZ2ZuOiAwMDBmMDA4NCAgbWZuOiAwMDFkMGIwZQooWEVOKSAgIGdmbjogMDAw
ZjAwODUgIG1mbjogMDAxMDFkOTYKKFhFTikgICBnZm46IDAwMGYwMDg2ICBtZm46IDAwMWQw
YjBkCihYRU4pICAgZ2ZuOiAwMDBmMDA4NyAgbWZuOiAwMDEwMjc2YgooWEVOKSAgIGdmbjog
MDAwZjAwODggIG1mbjogMDAxZDBiMGMKKFhFTikgICBnZm46IDAwMGYwMDg5ICBtZm46IDAw
MTAyNzZhCihYRU4pICAgZ2ZuOiAwMDBmMDA4YSAgbWZuOiAwMDE2MzBiZgooWEVOKSAgIGdm
bjogMDAwZjAwOGIgIG1mbjogMDAxMDFhMzMKKFhFTikgICBnZm46IDAwMGYwMDhjICBtZm46
IDAwMTYzMGJlCihYRU4pICAgZ2ZuOiAwMDBmMDA4ZCAgbWZuOiAwMDEwMWEzMgooWEVOKSAg
IGdmbjogMDAwZjAwOGUgIG1mbjogMDAxNjMwYmQKKFhFTikgICBnZm46IDAwMGYwMDhmICBt
Zm46IDAwMTAyNTNkCihYRU4pICAgZ2ZuOiAwMDBmMDA5MCAgbWZuOiAwMDE2MzBiYwooWEVO
KSAgIGdmbjogMDAwZjAwOTEgIG1mbjogMDAxMDI1M2MKKFhFTikgICBnZm46IDAwMGYwMDky
ICBtZm46IDAwMWIxYzQ3CihYRU4pICAgZ2ZuOiAwMDBmMDA5MyAgbWZuOiAwMDEwMjM2OQoo
WEVOKSAgIGdmbjogMDAwZjAwOTQgIG1mbjogMDAxYjFjNDYKKFhFTikgICBnZm46IDAwMGYw
MDk1ICBtZm46IDAwMTAyMzY4CihYRU4pICAgZ2ZuOiAwMDBmMDA5NiAgbWZuOiAwMDFiMWM0
NQooWEVOKSAgIGdmbjogMDAwZjAwOTcgIG1mbjogMDAxMDI4M2YKKFhFTikgICBnZm46IDAw
MGYwMDk4ICBtZm46IDAwMWIxYzQ0CihYRU4pICAgZ2ZuOiAwMDBmMDA5OSAgbWZuOiAwMDEw
MjgzZQooWEVOKSAgIGdmbjogMDAwZjAwOWEgIG1mbjogMDAxYjFjNDMKKFhFTikgICBnZm46
IDAwMGYwMDliICBtZm46IDAwMTAxNTgzCihYRU4pICAgZ2ZuOiAwMDBmMDA5YyAgbWZuOiAw
MDFiMWM0MgooWEVOKSAgIGdmbjogMDAwZjAwOWQgIG1mbjogMDAxMDE1ODIKKFhFTikgICBn
Zm46IDAwMGYwMDllICBtZm46IDAwMWIxYzQxCihYRU4pICAgZ2ZuOiAwMDBmMDA5ZiAgbWZu
OiAwMDEwMTgwYgooWEVOKSAgIGdmbjogMDAwZjAwYTAgIG1mbjogMDAxYjFjNDAKKFhFTikg
ICBnZm46IDAwMGYwMGExICBtZm46IDAwMTAxODBhCihYRU4pICAgZ2ZuOiAwMDBmMDBhMiAg
bWZuOiAwMDFiMWY5NwooWEVOKSAgIGdmbjogMDAwZjAwYTMgIG1mbjogMDAxMDI5NGYKKFhF
TikgICBnZm46IDAwMGYwMGE0ICBtZm46IDAwMWIxZjk2CihYRU4pICAgZ2ZuOiAwMDBmMDBh
NSAgbWZuOiAwMDEwMjk0ZQooWEVOKSAgIGdmbjogMDAwZjAwYTYgIG1mbjogMDAxYjFmOTUK
KFhFTikgICBnZm46IDAwMGYwMGE3ICBtZm46IDAwMTAyZDFmCihYRU4pICAgZ2ZuOiAwMDBm
MDBhOCAgbWZuOiAwMDFiMWY5NAooWEVOKSAgIGdmbjogMDAwZjAwYTkgIG1mbjogMDAxMDJk
MWUKKFhFTikgICBnZm46IDAwMGYwMGFhICBtZm46IDAwMWIxZjkzCihYRU4pICAgZ2ZuOiAw
MDBmMDBhYiAgbWZuOiAwMDEwMjM3OQooWEVOKSAgIGdmbjogMDAwZjAwYWMgIG1mbjogMDAx
YjFmOTIKKFhFTikgICBnZm46IDAwMGYwMGFkICBtZm46IDAwMTAyMzc4CihYRU4pICAgZ2Zu
OiAwMDBmMDBhZSAgbWZuOiAwMDFiMWY5MQooWEVOKSAgIGdmbjogMDAwZjAwYWYgIG1mbjog
MDAxMDJjYTMKKFhFTikgICBnZm46IDAwMGYwMGIwICBtZm46IDAwMWIxZjkwCihYRU4pICAg
Z2ZuOiAwMDBmMDBiMSAgbWZuOiAwMDEwMmNhMgooWEVOKSAgIGdmbjogMDAwZjAwYjIgIG1m
bjogMDAxYjA2ZDcKKFhFTikgICBnZm46IDAwMGYwMGIzICBtZm46IDAwMTAyODJmCihYRU4p
ICAgZ2ZuOiAwMDBmMDBiNCAgbWZuOiAwMDFiMDZkNgooWEVOKSAgIGdmbjogMDAwZjAwYjUg
IG1mbjogMDAxMDI4MmUKKFhFTikgICBnZm46IDAwMGYwMGI2ICBtZm46IDAwMWIwNmQ1CihY
RU4pICAgZ2ZuOiAwMDBmMDBiNyAgbWZuOiAwMDEwMmU1ZgooWEVOKSAgIGdmbjogMDAwZjAw
YjggIG1mbjogMDAxYjA2ZDQKKFhFTikgICBnZm46IDAwMGYwMGI5ICBtZm46IDAwMTAyZTVl
CihYRU4pICAgZ2ZuOiAwMDBmMDBiYSAgbWZuOiAwMDFiMDZkMwooWEVOKSAgIGdmbjogMDAw
ZjAwYmIgIG1mbjogMDAxMDI3NzcKKFhFTikgICBnZm46IDAwMGYwMGJjICBtZm46IDAwMWIw
NmQyCihYRU4pICAgZ2ZuOiAwMDBmMDBiZCAgbWZuOiAwMDEwMjc3NgooWEVOKSAgIGdmbjog
MDAwZjAwYmUgIG1mbjogMDAxYjA2ZDEKKFhFTikgICBnZm46IDAwMGYwMGJmICBtZm46IDAw
MTAxOWY3CihYRU4pICAgZ2ZuOiAwMDBmMDBjMCAgbWZuOiAwMDFiMDZkMAooWEVOKSAgIGdm
bjogMDAwZjAwYzEgIG1mbjogMDAxMDE5ZjYKKFhFTikgICBnZm46IDAwMGYwMGMyICBtZm46
IDAwMTYyZWQ3CihYRU4pICAgZ2ZuOiAwMDBmMDBjMyAgbWZuOiAwMDEwMWE5MwooWEVOKSAg
IGdmbjogMDAwZjAwYzQgIG1mbjogMDAxNjJlZDYKKFhFTikgICBnZm46IDAwMGYwMGM1ICBt
Zm46IDAwMTAxYTkyCihYRU4pICAgZ2ZuOiAwMDBmMDBjNiAgbWZuOiAwMDE2MmVkNQooWEVO
KSAgIGdmbjogMDAwZjAwYzcgIG1mbjogMDAxMDE0MWYKKFhFTikgICBnZm46IDAwMGYwMGM4
ICBtZm46IDAwMTYyZWQ0CihYRU4pICAgZ2ZuOiAwMDBmMDBjOSAgbWZuOiAwMDEwMTQxZQoo
WEVOKSAgIGdmbjogMDAwZjAwY2EgIG1mbjogMDAxNjJlZDMKKFhFTikgICBnZm46IDAwMGYw
MGNiICBtZm46IDAwMTAxNDBiCihYRU4pICAgZ2ZuOiAwMDBmMDBjYyAgbWZuOiAwMDE2MmVk
MgooWEVOKSAgIGdmbjogMDAwZjAwY2QgIG1mbjogMDAxMDE0MGEKKFhFTikgICBnZm46IDAw
MGYwMGNlICBtZm46IDAwMTYyZWQxCihYRU4pICAgZ2ZuOiAwMDBmMDBjZiAgbWZuOiAwMDEw
MmM2NwooWEVOKSAgIGdmbjogMDAwZjAwZDAgIG1mbjogMDAxNjJlZDAKKFhFTikgICBnZm46
IDAwMGYwMGQxICBtZm46IDAwMTAyYzY2CihYRU4pICAgZ2ZuOiAwMDBmMDBkMiAgbWZuOiAw
MDE2MzJkNwooWEVOKSAgIGdmbjogMDAwZjAwZDMgIG1mbjogMDAxMDI1MTcKKFhFTikgICBn
Zm46IDAwMGYwMGQ0ICBtZm46IDAwMTYzMmQ2CihYRU4pICAgZ2ZuOiAwMDBmMDBkNSAgbWZu
OiAwMDEwMjUxNgooWEVOKSAgIGdmbjogMDAwZjAwZDYgIG1mbjogMDAxNjMyZDUKKFhFTikg
ICBnZm46IDAwMGYwMGQ3ICBtZm46IDAwMTAyNzRmCihYRU4pICAgZ2ZuOiAwMDBmMDBkOCAg
bWZuOiAwMDE2MzJkNAooWEVOKSAgIGdmbjogMDAwZjAwZDkgIG1mbjogMDAxMDI3NGUKKFhF
TikgICBnZm46IDAwMGYwMGRhICBtZm46IDAwMTYzMmQzCihYRU4pICAgZ2ZuOiAwMDBmMDBk
YiAgbWZuOiAwMDEwMmEyNwooWEVOKSAgIGdmbjogMDAwZjAwZGMgIG1mbjogMDAxNjMyZDIK
KFhFTikgICBnZm46IDAwMGYwMGRkICBtZm46IDAwMTAyYTI2CihYRU4pICAgZ2ZuOiAwMDBm
MDBkZSAgbWZuOiAwMDE2MzJkMQooWEVOKSAgIGdmbjogMDAwZjAwZGYgIG1mbjogMDAxMDJl
YjcKKFhFTikgICBnZm46IDAwMGYwMGUwICBtZm46IDAwMTYzMmQwCihYRU4pICAgZ2ZuOiAw
MDBmMDBlMSAgbWZuOiAwMDEwMmViNgooWEVOKSAgIGdmbjogMDAwZjAwZTIgIG1mbjogMDAx
ZDBiZGYKKFhFTikgICBnZm46IDAwMGYwMGUzICBtZm46IDAwMTAyOTQzCihYRU4pICAgZ2Zu
OiAwMDBmMDBlNCAgbWZuOiAwMDFkMGJkZQooWEVOKSAgIGdmbjogMDAwZjAwZTUgIG1mbjog
MDAxMDI5NDIKKFhFTikgICBnZm46IDAwMGYwMGU2ICBtZm46IDAwMWQwYmRkCihYRU4pICAg
Z2ZuOiAwMDBmMDBlNyAgbWZuOiAwMDEwMjcxZgooWEVOKSAgIGdmbjogMDAwZjAwZTggIG1m
bjogMDAxZDBiZGMKKFhFTikgICBnZm46IDAwMGYwMGU5ICBtZm46IDAwMTAyNzFlCihYRU4p
ICAgZ2ZuOiAwMDBmMDBlYSAgbWZuOiAwMDFkMGJkYgooWEVOKSAgIGdmbjogMDAwZjAwZWIg
IG1mbjogMDAxMDI4MzcKKFhFTikgICBnZm46IDAwMGYwMGVjICBtZm46IDAwMWQwYmRhCihY
RU4pICAgZ2ZuOiAwMDBmMDBlZCAgbWZuOiAwMDEwMjgzNgooWEVOKSAgIGdmbjogMDAwZjAw
ZWUgIG1mbjogMDAxZDBiZDkKKFhFTikgICBnZm46IDAwMGYwMGVmICBtZm46IDAwMTAyZGEz
CihYRU4pICAgZ2ZuOiAwMDBmMDBmMCAgbWZuOiAwMDFkMGJkOAooWEVOKSAgIGdmbjogMDAw
ZjAwZjEgIG1mbjogMDAxMDJkYTIKKFhFTikgICBnZm46IDAwMGYwMGYyICBtZm46IDAwMWQw
YmQ3CihYRU4pICAgZ2ZuOiAwMDBmMDBmMyAgbWZuOiAwMDEwMTkyNwooWEVOKSAgIGdmbjog
MDAwZjAwZjQgIG1mbjogMDAxZDBiZDYKKFhFTikgICBnZm46IDAwMGYwMGY1ICBtZm46IDAw
MTAxOTI2CihYRU4pICAgZ2ZuOiAwMDBmMDBmNiAgbWZuOiAwMDFkMGJkNQooWEVOKSAgIGdm
bjogMDAwZjAwZjcgIG1mbjogMDAxMDJkYWIKKFhFTikgICBnZm46IDAwMGYwMGY4ICBtZm46
IDAwMWQwYmQ0CihYRU4pICAgZ2ZuOiAwMDBmMDBmOSAgbWZuOiAwMDEwMmRhYQooWEVOKSAg
IGdmbjogMDAwZjAwZmEgIG1mbjogMDAxZDBiZDMKKFhFTikgICBnZm46IDAwMGYwMGZiICBt
Zm46IDAwMTAyZTFiCihYRU4pICAgZ2ZuOiAwMDBmMDBmYyAgbWZuOiAwMDFkMGJkMgooWEVO
KSAgIGdmbjogMDAwZjAwZmQgIG1mbjogMDAxMDJlMWEKKFhFTikgICBnZm46IDAwMGYwMGZl
ICBtZm46IDAwMWQwYmQxCihYRU4pICAgZ2ZuOiAwMDBmMDBmZiAgbWZuOiAwMDEwMTk0Mwoo
WEVOKSAgIGdmbjogMDAwZjAxMDAgIG1mbjogMDAxZDBiZDAKKFhFTikgICBnZm46IDAwMGYw
MTAxICBtZm46IDAwMTAxOTQyCihYRU4pICAgZ2ZuOiAwMDBmMDEwMiAgbWZuOiAwMDFiMWY4
ZgooWEVOKSAgIGdmbjogMDAwZjAxMDMgIG1mbjogMDAxMDJiZWYKKFhFTikgICBnZm46IDAw
MGYwMTA0ICBtZm46IDAwMWIxZjhlCihYRU4pICAgZ2ZuOiAwMDBmMDEwNSAgbWZuOiAwMDEw
MmJlZQooWEVOKSAgIGdmbjogMDAwZjAxMDYgIG1mbjogMDAxYjFmOGQKKFhFTikgICBnZm46
IDAwMGYwMTA3ICBtZm46IDAwMTAxZGVmCihYRU4pICAgZ2ZuOiAwMDBmMDEwOCAgbWZuOiAw
MDFiMWY4YwooWEVOKSAgIGdmbjogMDAwZjAxMDkgIG1mbjogMDAxMDFkZWUKKFhFTikgICBn
Zm46IDAwMGYwMTBhICBtZm46IDAwMWIxZjhiCihYRU4pICAgZ2ZuOiAwMDBmMDEwYiAgbWZu
OiAwMDEwMjk3MwooWEVOKSAgIGdmbjogMDAwZjAxMGMgIG1mbjogMDAxYjFmOGEKKFhFTikg
ICBnZm46IDAwMGYwMTBkICBtZm46IDAwMTAyOTcyCihYRU4pICAgZ2ZuOiAwMDBmMDEwZSAg
bWZuOiAwMDFiMWY4OQooWEVOKSAgIGdmbjogMDAwZjAxMGYgIG1mbjogMDAxMDI5OGIKKFhF
TikgICBnZm46IDAwMGYwMTEwICBtZm46IDAwMWIxZjg4CihYRU4pICAgZ2ZuOiAwMDBmMDEx
MSAgbWZuOiAwMDEwMjk4YQooWEVOKSAgIGdmbjogMDAwZjAxMTIgIG1mbjogMDAxYjFmODcK
KFhFTikgICBnZm46IDAwMGYwMTEzICBtZm46IDAwMTAwMzAxCihYRU4pICAgZ2ZuOiAwMDBm
MDExNCAgbWZuOiAwMDFiMWY4NgooWEVOKSAgIGdmbjogMDAwZjAxMTUgIG1mbjogMDAxMDAz
MDAKKFhFTikgICBnZm46IDAwMGYwMTE2ICBtZm46IDAwMWIxZjg1CihYRU4pICAgZ2ZuOiAw
MDBmMDExNyAgbWZuOiAwMDEwMDJmZgooWEVOKSAgIGdmbjogMDAwZjAxMTggIG1mbjogMDAx
YjFmODQKKFhFTikgICBnZm46IDAwMGYwMTE5ICBtZm46IDAwMTAwMmZlCihYRU4pICAgZ2Zu
OiAwMDBmMDExYSAgbWZuOiAwMDFiMWY4MwooWEVOKSAgIGdmbjogMDAwZjAxMWIgIG1mbjog
MDAxMDI4MDMKKFhFTikgICBnZm46IDAwMGYwMTFjICBtZm46IDAwMWIxZjgrMgooWEVOKSAg
IGdmbjogMDAwZjAxMWQgIG1mbjogMDAxMDI4MDIKKFhFTikgICBnZm46IDAwMGYwMTFlICBt
Zm46IDAwMWIxZjgxCihYRU4pICAgZ2ZuOiAwMDBmMDExZiAgbWZuOiAwMDEwMDM4ZgooWEVO
KSAgIGdmbjogMDAwZjAxMjAgIG1mbjogMDAxYjFmODAKKFhFTikgICBnZm46IDAwMGYwMTIx
ICBtZm46IDAwMTAwMzhlCihYRU4pICAgZ2ZuOiAwMDBmMDEyMiAgbWZuOiAwMDFiMDZjZgoo
WEVOKSAgIGdmbjogMDAwZjAxMjMgIG1mbjogMDAxMDJlMjMKKFhFTikgICBnZm46IDAwMGYw
MTI0ICBtZm46IDAwMWIwNmNlCihYRU4pICAgZ2ZuOiAwMDBmMDEyNSAgbWZuOiAwMDEwMmUy
MgooWEVOKSAgIGdmbjogMDAwZjAxMjYgIG1mbjogMDAxYjA2Y2QKKFhFTikgICBnZm46IDAw
MGYwMTI3ICBtZm46IDAwMTAyNzliCihYRU4pICAgZ2ZuOiAwMDBmMDEyOCAgbWZuOiAwMDFi
MDZjYwooWEVOKSAgIGdmbjogMDAwZjAxMjkgIG1mbjogMDAxMDI3OWEKKFhFTikgICBnZm46
IDAwMGYwMTJhICBtZm46IDAwMWIwNmNiCihYRU4pICAgZ2ZuOiAwMDBmMDEyYiAgbWZuOiAw
MDEwMmMzMwooWEVOKSAgIGdmbjogMDAwZjAxMmMgIG1mbjogMDAxYjA2Y2EKKFhFTikgICBn
Zm46IDAwMGYwMTJkICBtZm46IDAwMTAyYzMyCihYRU4pICAgZ2ZuOiAwMDBmMDEyZSAgbWZu
OiAwMDFiMDZjOQooWEVOKSAgIGdmbjogMDAwZjAxMmYgIG1mbjogMDAxMDI4ZDcKKFhFTikg
ICBnZm46IDAwMGYwMTMwICBtZm46IDAwMWIwNmM4CihYRU4pICAgZ2ZuOiAwMDBmMDEzMSAg
bWZuOiAwMDEwMjhkNgooWEVOKSAgIGdmbjogMDAwZjAxMzIgIG1mbjogMDAxYjA2YzcKKFhF
TikgICBnZm46IDAwMGYwMTMzICBtZm46IDAwMTAyYzZmCihYRU4pICAgZ2ZuOiAwMDBmMDEz
NCAgbWZuOiAwMDFiMDZjNgooWEVOKSAgIGdmbjogMDAwZjAxMzUgIG1mbjogMDAxMDJjNmUK
KFhFTikgICBnZm46IDAwMGYwMTM2ICBtZm46IDAwMWIwNmM1CihYRU4pICAgZ2ZuOiAwMDBm
MDEzNyAgbWZuOiAwMDEwMmQ4ZgooWEVOKSAgIGdmbjogMDAwZjAxMzggIG1mbjogMDAxYjA2
YzQKKFhFTikgICBnZm46IDAwMGYwMTM5ICBtZm46IDAwMTAyZDhlCihYRU4pICAgZ2ZuOiAw
MDBmMDEzYSAgbWZuOiAwMDFiMDZjMwooWEVOKSAgIGdmbjogMDAwZjAxM2IgIG1mbjogMDAx
MDE4YmYKKFhFTikgICBnZm46IDAwMGYwMTNjICBtZm46IDAwMWIwNmMyCihYRU4pICAgZ2Zu
OiAwMDBmMDEzZCAgbWZuOiAwMDEwMThiZQooWEVOKSAgIGdmbjogMDAwZjAxM2UgIG1mbjog
MDAxYjA2YzEKKFhFTikgICBnZm46IDAwMGYwMTNmICBtZm46IDAwMTAyMzcxCihYRU4pICAg
Z2ZuOiAwMDBmMDE0MCAgbWZuOiAwMDFiMDZjMAooWEVOKSAgIGdmbjogMDAwZjAxNDEgIG1m
bjogMDAxMDIzNzAKKFhFTikgICBnZm46IDAwMGYwMTQyICBtZm46IDAwMTYyZWNmCihYRU4p
ICAgZ2ZuOiAwMDBmMDE0MyAgbWZuOiAwMDEwMmVjZgooWEVOKSAgIGdmbjogMDAwZjAxNDQg
IG1mbjogMDAxNjJlY2UKKFhFTikgICBnZm46IDAwMGYwMTQ1ICBtZm46IDAwMTAyZWNlCihY
RU4pICAgZ2ZuOiAwMDBmMDE0NiAgbWZuOiAwMDE2MmVjZAooWEVOKSAgIGdmbjogMDAwZjAx
NDcgIG1mbjogMDAxMDI3YTMKKFhFTikgICBnZm46IDAwMGYwMTQ4ICBtZm46IDAwMTYyZWNj
CihYRU4pICAgZ2ZuOiAwMDBmMDE0OSAgbWZuOiAwMDEwMjdhMgooWEVOKSAgIGdmbjogMDAw
ZjAxNGEgIG1mbjogMDAxNjJlY2IKKFhFTikgICBnZm46IDAwMGYwMTRiICBtZm46IDAwMTAx
ODRmCihYRU4pICAgZ2ZuOiAwMDBmMDE0YyAgbWZuOiAwMDE2MmVjYQooWEVOKSAgIGdmbjog
MDAwZjAxNGQgIG1mbjogMDAxMDE4NGUKKFhFTikgICBnZm46IDAwMGYwMTRlICBtZm46IDAw
MTYyZWM5CihYRU4pICAgZ2ZuOiAwMDBmMDE0ZiAgbWZuOiAwMDEwMTU2YgooWEVOKSAgIGdm
bjogMDAwZjAxNTAgIG1mbjogMDAxNjJlYzgKKFhFTikgICBnZm46IDAwMGYwMTUxICBtZm46
IDAwMTAxNTZhCihYRU4pICAgZ2ZuOiAwMDBmMDE1MiAgbWZuOiAwMDE2MmVjNwooWEVOKSAg
IGdmbjogMDAwZjAxNTMgIG1mbjogMDAxMDE3ODMKKFhFTikgICBnZm46IDAwMGYwMTU0ICBt
Zm46IDAwMTYyZWM2CihYRU4pICAgZ2ZuOiAwMDBmMDE1NSAgbWZuOiAwMDEwMTc4MgooWEVO
KSAgIGdmbjogMDAwZjAxNTYgIG1mbjogMDAxNjJlYzUKKFhFTikgICBnZm46IDAwMGYwMTU3
ICBtZm46IDAwMTAxMzY1CihYRU4pICAgZ2ZuOiAwMDBmMDE1OCAgbWZuOiAwMDE2MmVjNAoo
WEVOKSAgIGdmbjogMDAwZjAxNTkgIG1mbjogMDAxMDEzNjQKKFhFTikgICBnZm46IDAwMGYw
MTVhICBtZm46IDAwMTYyZWMzCihYRU4pICAgZ2ZuOiAwMDBmMDE1YiAgbWZuOiAwMDEwMTM2
MwooWEVOKSAgIGdmbjogMDAwZjAxNWMgIG1mbjogMDAxNjJlYzIKKFhFTikgICBnZm46IDAw
MGYwMTVkICBtZm46IDAwMTAxMzYyCihYRU4pICAgZ2ZuOiAwMDBmMDE1ZSAgbWZuOiAwMDE2
MmVjMQooWEVOKSAgIGdmbjogMDAwZjAxNWYgIG1mbjogMDAxMDEzNTcKKFhFTikgICBnZm46
IDAwMGYwMTYwICBtZm46IDAwMTYyZWMwCihYRU4pICAgZ2ZuOiAwMDBmMDE2MSAgbWZuOiAw
MDEwMTM1NgooWEVOKSAgIGdmbjogMDAwZjAxNjIgIG1mbjogMDAxNjMyY2YKKFhFTikgICBn
Zm46IDAwMGYwMTYzICBtZm46IDAwMTAyYmQ5CihYRU4pICAgZ2ZuOiAwMDBmMDE2NCAgbWZu
OiAwMDE2MzJjZQooWEVOKSAgIGdmbjogMDAwZjAxNjUgIG1mbjogMDAxMDJiZDgKKFhFTikg
ICBnZm46IDAwMGYwMTY2ICBtZm46IDAwMTYzMmNkCihYRU4pICAgZ2ZuOiAwMDBmMDE2NyAg
bWZuOiAwMDEwMDJkZgooWEVOKSAgIGdmbjogMDAwZjAxNjggIG1mbjogMDAxNjMyY2MKKFhF
TikgICBnZm46IDAwMGYwMTY5ICBtZm46IDAwMTAwMmRlCihYRU4pICAgZ2ZuOiAwMDBmMDE2
YSAgbWZuOiAwMDE2MzJjYgooWEVOKSAgIGdmbjogMDAwZjAxNmIgIG1mbjogMDAxMDJiY2IK
KFhFTikgICBnZm46IDAwMGYwMTZjICBtZm46IDAwMTYzMmNhCihYRU4pICAgZ2ZuOiAwMDBm
MDE2ZCAgbWZuOiAwMDEwMmJjYQooWEVOKSAgIGdmbjogMDAwZjAxNmUgIG1mbjogMDAxNjMy
YzkKKFhFTikgICBnZm46IDAwMGYwMTZmICBtZm46IDAwMTAyZTUzCihYRU4pICAgZ2ZuOiAw
MDBmMDE3MCAgbWZuOiAwMDE2MzJjOAooWEVOKSAgIGdmbjogMDAwZjAxNzEgIG1mbjogMDAx
MDJlNTIKKFhFTikgICBnZm46IDAwMGYwMTcyICBtZm46IDAwMTYzMmM3CihYRU4pICAgZ2Zu
OiAwMDBmMDE3MyAgbWZuOiAwMDEwMjUzYgooWEVOKSAgIGdmbjogMDAwZjAxNzQgIG1mbjog
MDAxNjMyYzYKKFhFTikgICBnZm46IDAwMGYwMTc1ICBtZm46IDAwMTAyNTNhCihYRU4pICAg
Z2ZuOiAwMDBmMDE3NiAgbWZuOiAwMDE2MzJjNQooWEVOKSAgIGdmbjogMDAwZjAxNzcgIG1m
bjogMDAxMDI1N2QKKFhFTikgICBnZm46IDAwMGYwMTc4ICBtZm46IDAwMTYzMmM0CihYRU4p
ICAgZ2ZuOiAwMDBmMDE3OSAgbWZuOiAwMDEwMjU3YwooWEVOKSAgIGdmbjogMDAwZjAxN2Eg
IG1mbjogMDAxNjMyYzMKKFhFTikgICBnZm46IDAwMGYwMTdiICBtZm46IDAwMTAyNTc5CihY
RU4pICAgZ2ZuOiAwMDBmMDE3YyAgbWZuOiAwMDE2MzJjMgooWEVOKSAgIGdmbjogMDAwZjAx
N2QgIG1mbjogMDAxMDI1NzgKKFhFTikgICBnZm46IDAwMGYwMTdlICBtZm46IDAwMTYzMmMx
CihYRU4pICAgZ2ZuOiAwMDBmMDE3ZiAgbWZuOiAwMDEwOTZkYgooWEVOKSAgIGdmbjogMDAw
ZjAxODAgIG1mbjogMDAxNjMyYzAKKFhFTikgICBnZm46IDAwMGYwMTgxICBtZm46IDAwMTA5
NmRhCihYRU4pICAgZ2ZuOiAwMDBmMDE4MiAgbWZuOiAwMDFiMDZiZgooWEVOKSAgIGdmbjog
MDAwZjAxODMgIG1mbjogMDAxMDllZGIKKFhFTikgICBnZm46IDAwMGYwMTg0ICBtZm46IDAw
MWIwNmJlCihYRU4pICAgZ2ZuOiAwMDBmMDE4NSAgbWZuOiAwMDEwOWVkYQooWEVOKSAgIGdm
bjogMDAwZjAxODYgIG1mbjogMDAxYjA2YmQKKFhFTikgICBnZm46IDAwMGYwMTg3ICBtZm46
IDAwMTAyYzliCihYRU4pICAgZ2ZuOiAwMDBmMDE4OCAgbWZuOiAwMDFiMDZiYwooWEVOKSAg
IGdmbjogMDAwZjAxODkgIG1mbjogMDAxMDJjOWEKKFhFTikgICBnZm46IDAwMGYwMThhICBt
Zm46IDAwMWIwNmJiCihYRU4pICAgZ2ZuOiAwMDBmMDE4YiAgbWZuOiAwMDEwMmQwNwooWEVO
KSAgIGdmbjogMDAwZjAxOGMgIG1mbjogMDAxYjA2YmEKKFhFTikgICBnZm46IDAwMGYwMThk
ICBtZm46IDAwMTAyZDA2CihYRU4pICAgZ2ZuOiAwMDBmMDE4ZSAgbWZuOiAwMDFiMDZiOQoo
WEVOKSAgIGdmbjogMDAwZjAxOGYgIG1mbjogMDAxMDJkMWIKKFhFTikgICBnZm46IDAwMGYw
MTkwICBtZm46IDAwMWIwNmI4CihYRU4pICAgZ2ZuOiAwMDBmMDE5MSAgbWZuOiAwMDEwMmQx
YQooWEVOKSAgIGdmbjogMDAwZjAxOTIgIG1mbjogMDAxYjA2YjcKKFhFTikgICBnZm46IDAw
MGYwMTkzICBtZm46IDAwMTAyZTJmCihYRU4pICAgZ2ZuOiAwMDBmMDE5NCAgbWZuOiAwMDFi
MDZiNgooWEVOKSAgIGdmbjogMDAwZjAxOTUgIG1mbjogMDAxMDJlMmUKKFhFTikgICBnZm46
IDAwMGYwMTk2ICBtZm46IDAwMWIwNmI1CihYRU4pICAgZ2ZuOiAwMDBmMDE5NyAgbWZuOiAw
MDEwMTMyOQooWEVOKSAgIGdmbjogMDAwZjAxOTggIG1mbjogMDAxYjA2YjQKKFhFTikgICBn
(XEN)   gfn: 000f0199  mfn: 00101328
(XEN)   gfn: 000f019a  mfn: 001b06b3
(XEN)   gfn: 000f019b  mfn: 00102d7b
[... several hundred further 4 KiB gfn/mfn entries snipped; the dump
 continues contiguously through gfn 000f03ff, then covers
 gfn 000fc000-000fc010 and gfn 000feffa-000fefff, and switches to
 2 MiB-aligned entries starting at gfn 00100000 (mfn 001ba800) ...]
(XEN)  gfn: 0010a800  mfn: 001a8000
(XEN)  gfn: 0010aa00  mfn: 001a7e00
(XEN)  gfn: 0010ac00  mfn: 001a7c00
KFhFTikgIGdmbjogMDAxMGFlMDAgIG1mbjogMDAxYTdhMDAKKFhFTikgIGdmbjogMDAxMGIw
MDAgIG1mbjogMDAxYTc4MDAKKFhFTikgIGdmbjogMDAxMGIyMDAgIG1mbjogMDAxYTc2MDAK
KFhFTikgIGdmbjogMDAxMGI0MDAgIG1mbjogMDAxYTc0MDAKKFhFTikgIGdmbjogMDAxMGI2
MDAgIG1mbjogMDAxYTcyMDAKKFhFTikgIGdmbjogMDAxMGI4MDAgIG1mbjogMDAxYTcwMDAK
KFhFTikgIGdmbjogMDAxMGJhMDAgIG1mbjogMDAxYTZlMDAKKFhFTikgIGdmbjogMDAxMGJj
MDAgIG1mbjogMDAxYTZjMDAKKFhFTikgIGdmbjogMDAxMGJlMDAgIG1mbjogMDAxYTZhMDAK
KFhFTikgIGdmbjogMDAxMGMwMDAgIG1mbjogMDAxYTY4MDAKKFhFTikgIGdmbjogMDAxMGMy
MDAgIG1mbjogMDAxYTY2MDAKKFhFTikgIGdmbjogMDAxMGM0MDAgIG1mbjogMDAxYTY0MDAK
KFhFTikgIGdmbjogMDAxMGM2MDAgIG1mbjogMDAxYTYyMDAKKFhFTikgIGdmbjogMDAxMGM4
MDAgIG1mbjogMDAxYTYwMDAKKFhFTikgIGdmbjogMDAxMGNhMDAgIG1mbjogMDAxYTVlMDAK
KFhFTikgIGdmbjogMDAxMGNjMDAgIG1mbjogMDAxYTVjMDAKKFhFTikgIGdmbjogMDAxMGNl
MDAgIG1mbjogMDAxYTVhMDAKKFhFTikgIGdmbjogMDAxMGQwMDAgIG1mbjogMDAxYTU4MDAK
KFhFTikgIGdmbjogMDAxMGQyMDAgIG1mbjogMDAxYTU2MDAKKFhFTikgIGdmbjogMDAxMGQ0
MDAgIG1mbjogMDAxYTU0MDAKKFhFTikgIGdmbjogMDAxMGQ2MDAgIG1mbjogMDAxYTUyMDAK
KFhFTikgIGdmbjogMDAxMGQ4MDAgIG1mbjogMDAxYTUwMDAKKFhFTikgIGdmbjogMDAxMGRh
MDAgIG1mbjogMDAxYTRlMDAKKFhFTikgIGdmbjogMDAxMGRjMDAgIG1mbjogMDAxYTRjMDAK
KFhFTikgIGdmbjogMDAxMGRlMDAgIG1mbjogMDAxYTRhMDAKKFhFTikgIGdmbjogMDAxMGUw
MDAgIG1mbjogMDAxYTQ4MDAKKFhFTikgIGdmbjogMDAxMGUyMDAgIG1mbjogMDAxYTQ2MDAK
KFhFTikgIGdmbjogMDAxMGU0MDAgIG1mbjogMDAxYTQ0MDAKKFhFTikgIGdmbjogMDAxMGU2
MDAgIG1mbjogMDAxYTQyMDAKKFhFTikgIGdmbjogMDAxMGU4MDAgIG1mbjogMDAxYTQwMDAK
KFhFTikgIGdmbjogMDAxMGVhMDAgIG1mbjogMDAxYTNlMDAKKFhFTikgIGdmbjogMDAxMGVj
MDAgIG1mbjogMDAxYTNjMDAKKFhFTikgIGdmbjogMDAxMGVlMDAgIG1mbjogMDAxYTNhMDAK
KFhFTikgIGdmbjogMDAxMGYwMDAgIG1mbjogMDAxYTM4MDAKKFhFTikgIGdmbjogMDAxMGYy
MDAgIG1mbjogMDAxYTM2MDAKKFhFTikgIGdmbjogMDAxMGY0MDAgIG1mbjogMDAxYTM0MDAK
KFhFTikgIGdmbjogMDAxMGY2MDAgIG1mbjogMDAxYTMyMDAKKFhFTikgJ0EnIHByZXNzZWQg
LT4gdXNpbmcgYWx0ZXJuYXRpdmUga2V5IGhhbmRsaW5nCihYRU4pIHN0ZHZnYS5jOjE0Nzpk
MiBlbnRlcmluZyBzdGR2Z2EgYW5kIGNhY2hpbmcgbW9kZXMKKw==
--------------020406080607020004070107
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--------------020406080607020004070107--


From xen-devel-bounces@lists.xen.org Wed Aug 15 10:40:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 10:40:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1b1X-0002ln-Rk; Wed, 15 Aug 2012 10:40:27 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Wei.Wang2@amd.com>) id 1T1b1U-0002li-Ts
	for xen-devel@lists.xen.org; Wed, 15 Aug 2012 10:40:25 +0000
X-Env-Sender: Wei.Wang2@amd.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1345027200!9411413!1
X-Originating-IP: [213.199.154.209]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14323 invoked from network); 15 Aug 2012 10:40:01 -0000
Received: from am1ehsobe006.messaging.microsoft.com (HELO
	am1outboundpool.messaging.microsoft.com) (213.199.154.209)
	by server-8.tower-27.messagelabs.com with AES128-SHA encrypted SMTP;
	15 Aug 2012 10:40:01 -0000
Received: from mail24-am1-R.bigfish.com (10.3.201.248) by
	AM1EHSOBE009.bigfish.com (10.3.204.29) with Microsoft SMTP Server id
	14.1.225.23; Wed, 15 Aug 2012 10:40:00 +0000
Received: from mail24-am1 (localhost [127.0.0.1])	by mail24-am1-R.bigfish.com
	(Postfix) with ESMTP id 68976220145;
	Wed, 15 Aug 2012 10:40:00 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:163.181.249.108; KIP:(null); UIP:(null);
	IPV:NLI; H:ausb3twp01.amd.com; RD:none; EFVD:NLI
X-SpamScore: -6
X-BigFish: VPS-6(zzbb2dI98dI9371Ic89bh146fI1432Ic857h4015Izz1202hzz8275bhz2dh668h839hd25he5bhf0ah107ah34h1155h)
Received: from mail24-am1 (localhost.localdomain [127.0.0.1]) by mail24-am1
	(MessageSwitch) id 1345027197479092_18147;
	Wed, 15 Aug 2012 10:39:57 +0000 (UTC)
Received: from AM1EHSMHS010.bigfish.com (unknown [10.3.201.238])	by
	mail24-am1.bigfish.com (Postfix) with ESMTP id 70471420044;
	Wed, 15 Aug 2012 10:39:57 +0000 (UTC)
Received: from ausb3twp01.amd.com (163.181.249.108) by
	AM1EHSMHS010.bigfish.com (10.3.207.110) with Microsoft SMTP Server id
	14.1.225.23; Wed, 15 Aug 2012 10:39:55 +0000
X-WSS-ID: 0M8SLME-01-0IA-02
X-M-MSG: 
Received: from sausexedgep01.amd.com (sausexedgep01-ext.amd.com
	[163.181.249.72])	(using TLSv1 with cipher AES128-SHA (128/128
	bits))	(No
	client certificate requested)	by ausb3twp01.amd.com (Axway MailGate
	3.8.1)
	with ESMTP id 2A10F1028014;	Wed, 15 Aug 2012 05:39:50 -0500 (CDT)
Received: from sausexhtp02.amd.com (163.181.3.152) by sausexedgep01.amd.com
	(163.181.36.54) with Microsoft SMTP Server (TLS) id 8.3.192.1;
	Wed, 15 Aug 2012 05:40:28 -0500
Received: from storexhtp02.amd.com (172.24.4.4) by sausexhtp02.amd.com
	(163.181.3.152) with Microsoft SMTP Server (TLS) id 8.3.213.0;
	Wed, 15 Aug 2012 05:39:50 -0500
Received: from gwo.osrc.amd.com (165.204.16.204) by storexhtp02.amd.com
	(172.24.4.4) with Microsoft SMTP Server id 8.3.213.0; Wed, 15 Aug 2012
	06:39:32 -0400
Received: from [165.204.15.57] (gran.osrc.amd.com [165.204.15.57])	by
	gwo.osrc.amd.com (Postfix) with ESMTP id 4F7EA49C0D5; Wed, 15 Aug 2012
	11:39:31 +0100 (BST)
Message-ID: <502B7C68.2090808@amd.com>
Date: Wed, 15 Aug 2012 12:39:36 +0200
From: Wei Wang <wei.wang2@amd.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:12.0) Gecko/20120421 Thunderbird/12.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <5357dccf4ba353d08e8e.1344974109@REDBLD-XS.ad.xensource.com>
	<502B7FC9020000780009500F@nat28.tlf.novell.com>
In-Reply-To: <502B7FC9020000780009500F@nat28.tlf.novell.com>
Content-Type: multipart/mixed; boundary="------------020406080607020004070107"
X-OriginatorOrg: amd.com
Cc: tim@xen.org, xiantao.zhang@intel.com,
	Santosh Jodh <santosh.jodh@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Dump IOMMU p2m table
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--------------020406080607020004070107
Content-Type: text/plain; charset="UTF-8"; format=flowed
Content-Transfer-Encoding: quoted-printable

On 08/15/2012 10:54 AM, Jan Beulich wrote:
>>>> On 14.08.12 at 21:55, Santosh Jodh <santosh.jodh@citrix.com> wrote:
>
> Sorry to be picky; after this many rounds I would have
> expected that no further comments would be needed.
>
>> +static void amd_dump_p2m_table_level(struct page_info* pg, int level,
>> +                                     paddr_t gpa, int indent)
>> +{
>> +    paddr_t address;
>> +    void *table_vaddr, *pde;
>> +    paddr_t next_table_maddr;
>> +    int index, next_level, present;
>> +    u32 *entry;
>> +
>> +    if ( level < 1 )
>> +        return;
>> +
>> +    table_vaddr = __map_domain_page(pg);
>> +    if ( table_vaddr == NULL )
>> +    {
>> +        printk("Failed to map IOMMU domain page %"PRIpaddr"\n",
>> +                page_to_maddr(pg));
>> +        return;
>> +    }
>> +
>> +    for ( index = 0; index < PTE_PER_TABLE_SIZE; index++ )
>> +    {
>> +        if ( !(index % 2) )
>> +            process_pending_softirqs();
>> +
>> +        pde = table_vaddr + (index * IOMMU_PAGE_TABLE_ENTRY_SIZE);
>> +        next_table_maddr = amd_iommu_get_next_table_from_pte(pde);
>> +        entry = (u32*)pde;
>> +
>> +        present = get_field_from_reg_u32(entry[0],
>> +                                         IOMMU_PDE_PRESENT_MASK,
>> +                                         IOMMU_PDE_PRESENT_SHIFT);
>> +
>> +        if ( !present )
>> +            continue;
>> +
>> +        next_level = get_field_from_reg_u32(entry[0],
>> +                                            IOMMU_PDE_NEXT_LEVEL_MASK,
>> +                                            IOMMU_PDE_NEXT_LEVEL_SHIFT);
>> +
>> +        address = gpa + amd_offset_level_address(index, level);
>> +        if ( next_level >= 1 )
>> +            amd_dump_p2m_table_level(
>> +                maddr_to_page(next_table_maddr), level - 1,
>
> Did you see Wei's cleanup patches to the code you cloned from?
> You should follow that route (replacing the ASSERT() with
> printing of the inconsistency and _not_ recursing or doing the
> normal printing), and using either "level" or "next_level"
> consistently here.

Hi, I tested the patch and the output looks much better now, please see the
attachment. One thing I noticed: there is a 1GB mapping in the guest, but
its format looks like the other 2MB mappings:

(XEN)  gfn: 0003fa00  mfn: 0023b000
(XEN)  gfn: 0003fc00  mfn: 00136200
(XEN)  gfn: 0003fe00  mfn: 0023ae00
(XEN)  gfn: 00040000  mfn: 00040000 << 1GB here
(XEN)  gfn: 00080000  mfn: 0023ac00
(XEN)  gfn: 00080200  mfn: 00136000
(XEN)  gfn: 00080400  mfn: 0023aa00
(XEN)  gfn: 00080600  mfn: 00135e00
(XEN)  gfn: 00080800  mfn: 0023a800
(XEN)  gfn: 00080a00  mfn: 00135c00
(XEN)  gfn: 00080c00  mfn: 0023a600
(XEN)  gfn: 00080e00  mfn: 00135a00
(XEN)  gfn: 00081000  mfn: 0023a400
(XEN)  gfn: 00081200  mfn: 00135800
(XEN)  gfn: 00081400  mfn: 0023a200
(XEN)  gfn: 00081600  mfn: 00135600

Thanks,
Wei

>> +                address, indent + 1);
>> +        else
>> +            printk("%*s" "gfn: %08lx  mfn: %08lx\n",
>> +                   indent, " ",
>
>              printk("%*sgfn: %08lx  mfn: %08lx\n",
>                     indent, "",
>
> I can vaguely see the point in splitting the two strings in the
> first argument, but the extra space in the third argument is
> definitely wrong - it'll make level 1 and level 2 indistinguishable.
>
> I also don't see how you addressed Wei's report of this still
> not printing correctly. I may be overlooking something, but
> without you making clear in the description what you changed
> from the previous version, that is also relatively easy to happen.
>
>> +static void vtd_dump_p2m_table_level(paddr_t pt_maddr, int level, paddr_t gpa,
>> +                                     int indent)
>> +{
>> +    paddr_t address;
>> +    int i;
>> +    struct dma_pte *pt_vaddr, *pte;
>> +    int next_level;
>> +
>> +    if ( level < 1 )
>> +        return;
>> +
>> +    pt_vaddr = map_vtd_domain_page(pt_maddr);
>> +    if ( pt_vaddr == NULL )
>> +    {
>> +        printk("Failed to map VT-D domain page %"PRIpaddr"\n", pt_maddr);
>> +        return;
>> +    }
>> +
>> +    next_level = level - 1;
>> +    for ( i = 0; i < PTE_NUM; i++ )
>> +    {
>> +    {
>> +        if ( !(i % 2) )
>> +            process_pending_softirqs();
>> +
>> +        pte = &pt_vaddr[i];
>> +        if ( !dma_pte_present(*pte) )
>> +            continue;
>> +
>> +        address = gpa + offset_level_address(i, level);
>> +        if ( next_level >= 1 )
>> +            vtd_dump_p2m_table_level(dma_pte_addr(*pte), next_level,
>> +                                     address, indent + 1);
>> +        else
>> +            printk("%*s" "gfn: %08lx mfn: %08lx super=%d rd=%d wr=%d\n",
>> +                   indent, " ",
>
> Same comment as above.
>
>> +                   (unsigned long)(address >> PAGE_SHIFT_4K),
>> +                   (unsigned long)(pte->val >> PAGE_SHIFT_4K),
>> +                   dma_pte_superpage(*pte) ? 1 : 0,
>> +                   dma_pte_read(*pte) ? 1 : 0,
>> +                   dma_pte_write(*pte) ? 1 : 0);
>
> Missing spaces. Even worse - given your definitions of these
> macros there's no point in using the conditional operators here
> at all.
>
> And, despite your claim in another response, this still isn't similar
> to AMD's variant (which still doesn't print any of these three
> attributes).
>
> The printing of the superpage status is pretty pointless anyway,
> given that there's no single use of dma_set_pte_superpage()
> throughout the tree - validly so since superpages can be in use
> currently only when the tables are shared with EPT, in which
> case you don't print anything. Plus you'd need to detect the flag
> _above_ level 1 (at leaf level the bit is ignored and hence just
> confusing if printed) and print the entry instead of recursing. And
> if you decide to indeed properly implement this (rather than just
> dropping superpage support here), _I_ would expect you to
> properly implement level skipping in the corresponding AMD code
> too (which similarly isn't being used currently).
>
> Jan
>


--------------020406080607020004070107
Content-Type: text/plain; charset="UTF-8"; name="io_pt.dump"
Content-Transfer-Encoding: base64
Content-Disposition: attachment; filename="io_pt.dump"
Content-Description: io_pt.dump

CihYRU4pIEhWTTI6IGludDEzX2hhcmRkaXNrOiBmdW5jdGlvbiA0MSwgdW5tYXBwZWQgZGV2
aWNlIGZvciBFTERMPTgxCihYRU4pIEhWTTI6IGludDEzX2hhcmRkaXNrOiBmdW5jdGlvbiAw
OCwgdW5tYXBwZWQgZGV2aWNlIGZvciBFTERMPTgxCihYRU4pIEhWTTI6ICoqKiBpbnQgMTVo
IGZ1bmN0aW9uIEFYPTAwYzAsIEJYPTAwMDAgbm90IHlldCBzdXBwb3J0ZWQhCihYRU4pIAoo
WEVOKSBkb21haW4yIElPTU1VIHAybSB0YWJsZTogCihYRU4pIHAybSB0YWJsZSBoYXMgMyBs
ZXZlbHMKKFhFTikgICBnZm46IDAwMDAwMDAwICBtZm46IDAwMjE4ZjAwCihYRU4pICAgZ2Zu
OiAwMDAwMDAwMSAgbWZuOiAwMDEwMTllMQooWEVOKSAgIGdmbjogMDAwMDAwMDIgIG1mbjog
MDAyMThlZmYKKFhFTikgICBnZm46IDAwMDAwMDAzICBtZm46IDAwMTAyZTY2CihYRU4pICAg
Z2ZuOiAwMDAwMDAwNCAgbWZuOiAwMDIxOGVmZQooWEVOKSAgIGdmbjogMDAwMDAwMDUgIG1m
bjogMDAxMDJiZWEKKFhFTikgICBnZm46IDAwMDAwMDA2ICBtZm46IDAwMjE4ZWZkCihYRU4p
ICAgZ2ZuOiAwMDAwMDAwNyAgbWZuOiAwMDEwMTM1MgooWEVOKSAgIGdmbjogMDAwMDAwMDgg
IG1mbjogMDAyMThlZmMKKFhFTikgICBnZm46IDAwMDAwMDA5ICBtZm46IDAwMTAxYzhmCihY
RU4pICAgZ2ZuOiAwMDAwMDAwYSAgbWZuOiAwMDIxOGVmYgooWEVOKSAgIGdmbjogMDAwMDAw
MGIgIG1mbjogMDAxMDFhNWYKKFhFTikgICBnZm46IDAwMDAwMDBjICBtZm46IDAwMjE4ZWZh
CihYRU4pICAgZ2ZuOiAwMDAwMDAwZCAgbWZuOiAwMDEwMThmOQooWEVOKSAgIGdmbjogMDAw
MDAwMGUgIG1mbjogMDAyMThlZjkKKFhFTikgICBnZm46IDAwMDAwMDBmICBtZm46IDAwMTAw
NGM5CihYRU4pICAgZ2ZuOiAwMDAwMDAxMCAgbWZuOiAwMDIxOGVmOAooWEVOKSAgIGdmbjog
MDAwMDAwMTEgIG1mbjogMDAxMDI3MDgKKFhFTikgICBnZm46IDAwMDAwMDEyICBtZm46IDAw
MjE4ZWY3CihYRU4pICAgZ2ZuOiAwMDAwMDAxMyAgbWZuOiAwMDEwMTkwMAooWEVOKSAgIGdm
bjogMDAwMDAwMTQgIG1mbjogMDAyMThlZjYKKFhFTikgICBnZm46IDAwMDAwMDE1ICBtZm46
IDAwMTAyNTEwCihYRU4pICAgZ2ZuOiAwMDAwMDAxNiAgbWZuOiAwMDIxOGVmNQooWEVOKSAg
IGdmbjogMDAwMDAwMTcgIG1mbjogMDAxMDFkZDEKKFhFTikgICBnZm46IDAwMDAwMDE4ICBt
Zm46IDAwMjE4ZWY0CihYRU4pICAgZ2ZuOiAwMDAwMDAxOSAgbWZuOiAwMDEwMjcyNgooWEVO
KSAgIGdmbjogMDAwMDAwMWEgIG1mbjogMDAyMThlZjMKKFhFTikgICBnZm46IDAwMDAwMDFi
ICBtZm46IDAwMTAxOThhCihYRU4pICAgZ2ZuOiAwMDAwMDAxYyAgbWZuOiAwMDIxOGVmMgoo
WEVOKSAgIGdmbjogMDAwMDAwMWQgIG1mbjogMDAxMDI2ZTkKKFhFTikgICBnZm46IDAwMDAw
MDFlICBtZm46IDAwMjE4ZWYxCihYRU4pICAgZ2ZuOiAwMDAwMDAxZiAgbWZuOiAwMDEwMjk3
ZgooWEVOKSAgIGdmbjogMDAwMDAwMjAgIG1mbjogMDAyMThlZjAKKFhFTikgICBnZm46IDAw
MDAwMDIxICBtZm46IDAwMTAyZWIzCihYRU4pICAgZ2ZuOiAwMDAwMDAyMiAgbWZuOiAwMDIx
OGVlZgooWEVOKSAgIGdmbjogMDAwMDAwMjMgIG1mbjogMDAxMDA1MWYKKFhFTikgICBnZm46
IDAwMDAwMDI0ICBtZm46IDAwMjE4ZWVlCihYRU4pICAgZ2ZuOiAwMDAwMDAyNSAgbWZuOiAw
MDEwMjkzMQooWEVOKSAgIGdmbjogMDAwMDAwMjYgIG1mbjogMDAyMThlZWQKKFhFTikgICBn
Zm46IDAwMDAwMDI3ICBtZm46IDAwMTAxODkxCihYRU4pICAgZ2ZuOiAwMDAwMDAyOCAgbWZu
OiAwMDIxOGVlYwooWEVOKSAgIGdmbjogMDAwMDAwMjkgIG1mbjogMDAxMDEzZmYKKFhFTikg
ICBnZm46IDAwMDAwMDJhICBtZm46IDAwMjE4ZWViCihYRU4pICAgZ2ZuOiAwMDAwMDAyYiAg
bWZuOiAwMDEwMTRiMQooWEVOKSAgIGdmbjogMDAwMDAwMmMgIG1mbjogMDAyMThlZWEKKFhF
TikgICBnZm46IDAwMDAwMDJkICBtZm46IDAwMTAxYTc0CihYRU4pICAgZ2ZuOiAwMDAwMDAy
ZSAgbWZuOiAwMDIxOGVlOQooWEVOKSAgIGdmbjogMDAwMDAwMmYgIG1mbjogMDAxMDI3M2UK
KFhFTikgICBnZm46IDAwMDAwMDMwICBtZm46IDAwMjE4ZWU4CihYRU4pICAgZ2ZuOiAwMDAw
MDAzMSAgbWZuOiAwMDEwMjg0YgooWEVOKSAgIGdmbjogMDAwMDAwMzIgIG1mbjogMDAyMThl
ZTcKKFhFTikgICBnZm46IDAwMDAwMDMzICBtZm46IDAwMTAxNGZkCihYRU4pICAgZ2ZuOiAw
MDAwMDAzNCAgbWZuOiAwMDIxOGVlNgooWEVOKSAgIGdmbjogMDAwMDAwMzUgIG1mbjogMDAx
MDA1NWUKKFhFTikgICBnZm46IDAwMDAwMDM2ICBtZm46IDAwMjE4ZWU1CihYRU4pICAgZ2Zu
OiAwMDAwMDAzNyAgbWZuOiAwMDEwMTM0ZQooWEVOKSAgIGdmbjogMDAwMDAwMzggIG1mbjog
MDAyMThlZTQKKFhFTikgICBnZm46IDAwMDAwMDM5ICBtZm46IDAwMTAxNDg4CihYRU4pICAg
Z2ZuOiAwMDAwMDAzYSAgbWZuOiAwMDIxOGVlMwooWEVOKSAgIGdmbjogMDAwMDAwM2IgIG1m
bjogMDAxMDEzMTAKKFhFTikgICBnZm46IDAwMDAwMDNjICBtZm46IDAwMjE4ZWUyCihYRU4p
ICAgZ2ZuOiAwMDAwMDAzZCAgbWZuOiAwMDEwMTg4ZQooWEVOKSAgIGdmbjogMDAwMDAwM2Ug
IG1mbjogMDAyMThlZTEKKFhFTikgICBnZm46IDAwMDAwMDNmICBtZm46IDAwMTAyODZlCihY
RU4pICAgZ2ZuOiAwMDAwMDA0MCAgbWZuOiAwMDIxOGVlMAooWEVOKSAgIGdmbjogMDAwMDAw
NDEgIG1mbjogMDAxMDE4N2UKKFhFTikgICBnZm46IDAwMDAwMDQyICBtZm46IDAwMjE4ZWRm
CihYRU4pICAgZ2ZuOiAwMDAwMDA0MyAgbWZuOiAwMDEwMDMwZQooWEVOKSAgIGdmbjogMDAw
MDAwNDQgIG1mbjogMDAyMThlZGUKKFhFTikgICBnZm46IDAwMDAwMDQ1ICBtZm46IDAwMTAw
NDk4CihYRU4pICAgZ2ZuOiAwMDAwMDA0NiAgbWZuOiAwMDIxOGVkZAooWEVOKSAgIGdmbjog
MDAwMDAwNDcgIG1mbjogMDAxMDJkZDAKKFhFTikgICBnZm46IDAwMDAwMDQ4ICBtZm46IDAw
MjE4ZWRjCihYRU4pICAgZ2ZuOiAwMDAwMDA0OSAgbWZuOiAwMDEwMmRkYQooWEVOKSAgIGdm
bjogMDAwMDAwNGEgIG1mbjogMDAyMThlZGIKKFhFTikgICBnZm46IDAwMDAwMDRiICBtZm46
IDAwMTAxNzcwCihYRU4pICAgZ2ZuOiAwMDAwMDA0YyAgbWZuOiAwMDIxOGVkYQooWEVOKSAg
IGdmbjogMDAwMDAwNGQgIG1mbjogMDAxMDJjYjUKKFhFTikgICBnZm46IDAwMDAwMDRlICBt
Zm46IDAwMjE4ZWQ5CihYRU4pICAgZ2ZuOiAwMDAwMDA0ZiAgbWZuOiAwMDEwMmU5YQooWEVO
KSAgIGdmbjogMDAwMDAwNTAgIG1mbjogMDAyMThlZDgKKFhFTikgICBnZm46IDAwMDAwMDUx
ICBtZm46IDAwMTAxODI5CihYRU4pICAgZ2ZuOiAwMDAwMDA1MiAgbWZuOiAwMDIxOGVkNwoo
WEVOKSAgIGdmbjogMDAwMDAwNTMgIG1mbjogMDAxMDI5OTEKKFhFTikgICBnZm46IDAwMDAw
MDU0ICBtZm46IDAwMjE4ZWQ2CihYRU4pICAgZ2ZuOiAwMDAwMDA1NSAgbWZuOiAwMDEwMTU3
YwooWEVOKSAgIGdmbjogMDAwMDAwNTYgIG1mbjogMDAyMThlZDUKKFhFTikgICBnZm46IDAw
MDAwMDU3ICBtZm46IDAwMTAxN2MzCihYRU4pICAgZ2ZuOiAwMDAwMDA1OCAgbWZuOiAwMDIx
OGVkNAooWEVOKSAgIGdmbjogMDAwMDAwNTkgIG1mbjogMDAxMDJhNDIKKFhFTikgICBnZm46
IDAwMDAwMDVhICBtZm46IDAwMjE4ZWQzCihYRU4pICAgZ2ZuOiAwMDAwMDA1YiAgbWZuOiAw
MDEwMWE4ZgooWEVOKSAgIGdmbjogMDAwMDAwNWMgIG1mbjogMDAyMThlZDIKKFhFTikgICBn
Zm46IDAwMDAwMDVkICBtZm46IDAwMTAyYTY5CihYRU4pICAgZ2ZuOiAwMDAwMDA1ZSAgbWZu
OiAwMDIxOGVkMQooWEVOKSAgIGdmbjogMDAwMDAwNWYgIG1mbjogMDAxMDE4MWQKKFhFTikg
ICBnZm46IDAwMDAwMDYwICBtZm46IDAwMjE4ZWQwCihYRU4pICAgZ2ZuOiAwMDAwMDA2MSAg
bWZuOiAwMDEwMWFjNwooWEVOKSAgIGdmbjogMDAwMDAwNjIgIG1mbjogMDAyMThlY2YKKFhF
TikgICBnZm46IDAwMDAwMDYzICBtZm46IDAwMTAxNGRhCihYRU4pICAgZ2ZuOiAwMDAwMDA2
NCAgbWZuOiAwMDIxOGVjZQooWEVOKSAgIGdmbjogMDAwMDAwNjUgIG1mbjogMDAxMDFhMGMK
KFhFTikgICBnZm46IDAwMDAwMDY2ICBtZm46IDAwMjE4ZWNkCihYRU4pICAgZ2ZuOiAwMDAw
MDA2NyAgbWZuOiAwMDEwMjhjNgooWEVOKSAgIGdmbjogMDAwMDAwNjggIG1mbjogMDAyMThl
Y2MKKFhFTikgICBnZm46IDAwMDAwMDY5ICBtZm46IDAwMTAyYTFhCihYRU4pICAgZ2ZuOiAw
MDAwMDA2YSAgbWZuOiAwMDIxOGVjYgooWEVOKSAgIGdmbjogMDAwMDAwNmIgIG1mbjogMDAx
MDI0NDYKKFhFTikgICBnZm46IDAwMDAwMDZjICBtZm46IDAwMjE4ZWNhCihYRU4pICAgZ2Zu
OiAwMDAwMDA2ZCAgbWZuOiAwMDEwMTM4MwooWEVOKSAgIGdmbjogMDAwMDAwNmUgIG1mbjog
MDAyMThlYzkKKFhFTikgICBnZm46IDAwMDAwMDZmICBtZm46IDAwMTAyY2MyCihYRU4pICAg
Z2ZuOiAwMDAwMDA3MCAgbWZuOiAwMDIxOGVjOAooWEVOKSAgIGdmbjogMDAwMDAwNzEgIG1m
bjogMDAxMDFkZWIKKFhFTikgICBnZm46IDAwMDAwMDcyICBtZm46IDAwMjE4ZWM3CihYRU4p
ICAgZ2ZuOiAwMDAwMDA3MyAgbWZuOiAwMDEwMjFhOAooWEVOKSAgIGdmbjogMDAwMDAwNzQg
IG1mbjogMDAyMThlYzYKKFhFTikgICBnZm46IDAwMDAwMDc1ICBtZm46IDAwMTAwMmZiCihY
RU4pICAgZ2ZuOiAwMDAwMDA3NiAgbWZuOiAwMDIxOGVjNQooWEVOKSAgIGdmbjogMDAwMDAw
NzcgIG1mbjogMDAxMDE0Y2MKKFhFTikgICBnZm46IDAwMDAwMDc4ICBtZm46IDAwMjE4ZWM0
CihYRU4pICAgZ2ZuOiAwMDAwMDA3OSAgbWZuOiAwMDEwMTU0NAooWEVOKSAgIGdmbjogMDAw
MDAwN2EgIG1mbjogMDAyMThlYzMKKFhFTikgICBnZm46IDAwMDAwMDdiICBtZm46IDAwMTAx
ZDlmCihYRU4pICAgZ2ZuOiAwMDAwMDA3YyAgbWZuOiAwMDIxOGVjMgooWEVOKSAgIGdmbjog
MDAwMDAwN2QgIG1mbjogMDAxMDFhODgKKFhFTikgICBnZm46IDAwMDAwMDdlICBtZm46IDAw
MjE4ZWMxCihYRU4pICAgZ2ZuOiAwMDAwMDA3ZiAgbWZuOiAwMDEwMWE2MAooWEVOKSAgIGdm
bjogMDAwMDAwODAgIG1mbjogMDAyMThlYzAKKFhFTikgICBnZm46IDAwMDAwMDgxICBtZm46
IDAwMTAyOTE1CihYRU4pICAgZ2ZuOiAwMDAwMDA4MiAgbWZuOiAwMDIxOGViZgooWEVOKSAg
IGdmbjogMDAwMDAwODMgIG1mbjogMDAxMDEzMDEKKFhFTikgICBnZm46IDAwMDAwMDg0ICBt
Zm46IDAwMjE4ZWJlCihYRU4pICAgZ2ZuOiAwMDAwMDA4NSAgbWZuOiAwMDEwMTVkOAooWEVO
KSAgIGdmbjogMDAwMDAwODYgIG1mbjogMDAyMThlYmQKKFhFTikgICBnZm46IDAwMDAwMDg3
ICBtZm46IDAwMTAxOGFkCihYRU4pICAgZ2ZuOiAwMDAwMDA4OCAgbWZuOiAwMDIxOGViYwoo
WEVOKSAgIGdmbjogMDAwMDAwODkgIG1mbjogMDAxMDE1MzMKKFhFTikgICBnZm46IDAwMDAw
MDhhICBtZm46IDAwMjE4ZWJiCihYRU4pICAgZ2ZuOiAwMDAwMDA4YiAgbWZuOiAwMDEwMWVk
NAooWEVOKSAgIGdmbjogMDAwMDAwOGMgIG1mbjogMDAyMThlYmEKKFhFTikgICBnZm46IDAw
MDAwMDhkICBtZm46IDAwMTAxYTAxCihYRU4pICAgZ2ZuOiAwMDAwMDA4ZSAgbWZuOiAwMDIx
OGViOQooWEVOKSAgIGdmbjogMDAwMDAwOGYgIG1mbjogMDAxMDE1OWUKKFhFTikgICBnZm46
IDAwMDAwMDkwICBtZm46IDAwMjE4ZWI4CihYRU4pICAgZ2ZuOiAwMDAwMDA5MSAgbWZuOiAw
MDEwMjU3MQooWEVOKSAgIGdmbjogMDAwMDAwOTIgIG1mbjogMDAyMThlYjcKKFhFTikgICBn
Zm46IDAwMDAwMDkzICBtZm46IDAwMTAyY2ZlCihYRU4pICAgZ2ZuOiAwMDAwMDA5NCAgbWZu
OiAwMDIxOGViNgooWEVOKSAgIGdmbjogMDAwMDAwOTUgIG1mbjogMDAxMDA1NWQKKFhFTikg
ICBnZm46IDAwMDAwMDk2ICBtZm46IDAwMjE4ZWI1CihYRU4pICAgZ2ZuOiAwMDAwMDA5NyAg
bWZuOiAwMDEwMmQyNwooWEVOKSAgIGdmbjogMDAwMDAwOTggIG1mbjogMDAyMThlYjQKKFhF
TikgICBnZm46IDAwMDAwMDk5ICBtZm46IDAwMTAyN2M2CihYRU4pICAgZ2ZuOiAwMDAwMDA5
YSAgbWZuOiAwMDIxOGViMwooWEVOKSAgIGdmbjogMDAwMDAwOWIgIG1mbjogMDAxMDE0YWYK
KFhFTikgICBnZm46IDAwMDAwMDljICBtZm46IDAwMjE4ZWIyCihYRU4pICAgZ2ZuOiAwMDAw
MDA5ZCAgbWZuOiAwMDEwMmU3NgooWEVOKSAgIGdmbjogMDAwMDAwOWUgIG1mbjogMDAyMThl
YjEKKFhFTikgICBnZm46IDAwMDAwMDlmICBtZm46IDAwMTAyMzVlCihYRU4pICAgZ2ZuOiAw
MDAwMDEwMCAgbWZuOiAwMDIxOGU5MAooWEVOKSAgIGdmbjogMDAwMDAxMDEgIG1mbjogMDAx
MDE3MWUKKFhFTikgICBnZm46IDAwMDAwMTAyICBtZm46IDAwMjE4ZThmCihYRU4pICAgZ2Zu
OiAwMDAwMDEwMyAgbWZuOiAwMDEwMmViMQooWEVOKSAgIGdmbjogMDAwMDAxMDQgIG1mbjog
MDAyMThlOGUKKFhFTikgICBnZm46IDAwMDAwMTA1ICBtZm46IDAwMTAxOWQyCihYRU4pICAg
Z2ZuOiAwMDAwMDEwNiAgbWZuOiAwMDIxOGU4ZAooWEVOKSAgIGdmbjogMDAwMDAxMDcgIG1m
bjogMDAxMDAzMTAKKFhFTikgICBnZm46IDAwMDAwMTA4ICBtZm46IDAwMjE4ZThjCihYRU4p
ICAgZ2ZuOiAwMDAwMDEwOSAgbWZuOiAwMDEwMTgxNwooWEVOKSAgIGdmbjogMDAwMDAxMGEg
IG1mbjogMDAyMThlOGIKKFhFTikgICBnZm46IDAwMDAwMTBiICBtZm46IDAwMTAxODdiCihY
RU4pICAgZ2ZuOiAwMDAwMDEwYyAgbWZuOiAwMDIxOGU4YQooWEVOKSAgIGdmbjogMDAwMDAx
MGQgIG1mbjogMDAxMDFhNTMKKFhFTikgICBnZm46IDAwMDAwMTBlICBtZm46IDAwMjE4ZTg5
CihYRU4pICAgZ2ZuOiAwMDAwMDEwZiAgbWZuOiAwMDEwMjk5NwooWEVOKSAgIGdmbjogMDAw
MDAxMTAgIG1mbjogMDAyMThlODgKKFhFTikgICBnZm46IDAwMDAwMTExICBtZm46IDAwMTAx
NTA0CihYRU4pICAgZ2ZuOiAwMDAwMDExMiAgbWZuOiAwMDIxOGU4NwooWEVOKSAgIGdmbjog
MDAwMDAxMTMgIG1mbjogMDAxMDFhOTgKKFhFTikgICBnZm46IDAwMDAwMTE0ICBtZm46IDAw
MjE4ZTg2CihYRU4pICAgZ2ZuOiAwMDAwMDExNSAgbWZuOiAwMDEwMjgzMgooWEVOKSAgIGdm
bjogMDAwMDAxMTYgIG1mbjogMDAyMThlODUKKFhFTikgICBnZm46IDAwMDAwMTE3ICBtZm46
IDAwMTAxZWMzCihYRU4pICAgZ2ZuOiAwMDAwMDExOCAgbWZuOiAwMDIxOGU4NAooWEVOKSAg
IGdmbjogMDAwMDAxMTkgIG1mbjogMDAxMDJlMWYKKFhFTikgICBnZm46IDAwMDAwMTFhICBt
Zm46IDAwMjE4ZTgzCihYRU4pICAgZ2ZuOiAwMDAwMDExYiAgbWZuOiAwMDEwMDNhOAooWEVO
KSAgIGdmbjogMDAwMDAxMWMgIG1mbjogMDAyMThlODIKKFhFTikgICBnZm46IDAwMDAwMTFk
ICBtZm46IDAwMTAxZTczCihYRU4pICAgZ2ZuOiAwMDAwMDExZSAgbWZuOiAwMDIxOGU4MQoo
WEVOKSAgIGdmbjogMDAwMDAxMWYgIG1mbjogMDAxMDI2ZWQKKFhFTikgICBnZm46IDAwMDAw
MTIwICBtZm46IDAwMjE4ZTgwCihYRU4pICAgZ2ZuOiAwMDAwMDEyMSAgbWZuOiAwMDEwMDU2
NwooWEVOKSAgIGdmbjogMDAwMDAxMjIgIG1mbjogMDAyMThlN2YKKFhFTikgICBnZm46IDAw
MDAwMTIzICBtZm46IDAwMTAxNGYyCihYRU4pICAgZ2ZuOiAwMDAwMDEyNCAgbWZuOiAwMDIx
OGU3ZQooWEVOKSAgIGdmbjogMDAwMDAxMjUgIG1mbjogMDAxMDAyZWQKKFhFTikgICBnZm46
IDAwMDAwMTI2ICBtZm46IDAwMjE4ZTdkCihYRU4pICAgZ2ZuOiAwMDAwMDEyNyAgbWZuOiAw
MDEwMThiYQooWEVOKSAgIGdmbjogMDAwMDAxMjggIG1mbjogMDAyMThlN2MKKFhFTikgICBn
Zm46IDAwMDAwMTI5ICBtZm46IDAwMTAxMzc1CihYRU4pICAgZ2ZuOiAwMDAwMDEyYSAgbWZu
OiAwMDIxOGU3YgooWEVOKSAgIGdmbjogMDAwMDAxMmIgIG1mbjogMDAxMDEzZWEKKFhFTikg
ICBnZm46IDAwMDAwMTJjICBtZm46IDAwMjE4ZTdhCihYRU4pICAgZ2ZuOiAwMDAwMDEyZCAg
bWZuOiAwMDEwMThkZQooWEVOKSAgIGdmbjogMDAwMDAxMmUgIG1mbjogMDAyMThlNzkKKFhF
TikgICBnZm46IDAwMDAwMTJmICBtZm46IDAwMTAwNTcwCihYRU4pICAgZ2ZuOiAwMDAwMDEz
MCAgbWZuOiAwMDIxOGU3OAooWEVOKSAgIGdmbjogMDAwMDAxMzEgIG1mbjogMDAxMDE0M2UK
KFhFTikgICBnZm46IDAwMDAwMTMyICBtZm46IDAwMjE4ZTc3CihYRU4pICAgZ2ZuOiAwMDAw
MDEzMyAgbWZuOiAwMDEwMTM2YQooWEVOKSAgIGdmbjogMDAwMDAxMzQgIG1mbjogMDAyMThl
NzYKKFhFTikgICBnZm46IDAwMDAwMTM1ICBtZm46IDAwMTAxM2E1CihYRU4pICAgZ2ZuOiAw
MDAwMDEzNiAgbWZuOiAwMDIxOGU3NQooWEVOKSAgIGdmbjogMDAwMDAxMzcgIG1mbjogMDAx
MDI3ZjUKKFhFTikgICBnZm46IDAwMDAwMTM4ICBtZm46IDAwMjE4ZTc0CihYRU4pICAgZ2Zu
OiAwMDAwMDEzOSAgbWZuOiAwMDEwMmE0OQooWEVOKSAgIGdmbjogMDAwMDAxM2EgIG1mbjog
MDAyMThlNzMKKFhFTikgICBnZm46IDAwMDAwMTNiICBtZm46IDAwMTAxZGM3CihYRU4pICAg
Z2ZuOiAwMDAwMDEzYyAgbWZuOiAwMDIxOGU3MgooWEVOKSAgIGdmbjogMDAwMDAxM2QgIG1m
bjogMDAxMDJkNjAKKFhFTikgICBnZm46IDAwMDAwMTNlICBtZm46IDAwMjE4ZTcxCihYRU4p
ICAgZ2ZuOiAwMDAwMDEzZiAgbWZuOiAwMDEwMTVhMgooWEVOKSAgIGdmbjogMDAwMDAxNDAg
IG1mbjogMDAyMThlNzAKKFhFTikgICBnZm46IDAwMDAwMTQxICBtZm46IDAwMTAyNzdiCihY
RU4pICAgZ2ZuOiAwMDAwMDE0MiAgbWZuOiAwMDIxOGU2ZgooWEVOKSAgIGdmbjogMDAwMDAx
NDMgIG1mbjogMDAxMDI4YWYKKFhFTikgICBnZm46IDAwMDAwMTQ0ICBtZm46IDAwMjE4ZTZl
CihYRU4pICAgZ2ZuOiAwMDAwMDE0NSAgbWZuOiAwMDEwMTk0ZgooWEVOKSAgIGdmbjogMDAw
MDAxNDYgIG1mbjogMDAyMThlNmQKKFhFTikgICBnZm46IDAwMDAwMTQ3ICBtZm46IDAwMTAx
YTYzCihYRU4pICAgZ2ZuOiAwMDAwMDE0OCAgbWZuOiAwMDIxOGU2YwooWEVOKSAgIGdmbjog
MDAwMDAxNDkgIG1mbjogMDAxMDE1NDEKKFhFTikgICBnZm46IDAwMDAwMTRhICBtZm46IDAw
MjE4ZTZiCihYRU4pICAgZ2ZuOiAwMDAwMDE0YiAgbWZuOiAwMDEwMmRjYwooWEVOKSAgIGdm
bjogMDAwMDAxNGMgIG1mbjogMDAyMThlNmEKKFhFTikgICBnZm46IDAwMDAwMTRkICBtZm46
IDAwMTAyODFjCihYRU4pICAgZ2ZuOiAwMDAwMDE0ZSAgbWZuOiAwMDIxOGU2OQooWEVOKSAg
IGdmbjogMDAwMDAxNGYgIG1mbjogMDAxMDJlMzIKKFhFTikgICBnZm46IDAwMDAwMTUwICBt
Zm46IDAwMjE4ZTY4CihYRU4pICAgZ2ZuOiAwMDAwMDE1MSAgbWZuOiAwMDEwMDRjNAooWEVO
KSAgIGdmbjogMDAwMDAxNTIgIG1mbjogMDAyMThlNjcKKFhFTikgICBnZm46IDAwMDAwMTUz
ICBtZm46IDAwMTAyYzE4CihYRU4pICAgZ2ZuOiAwMDAwMDE1NCAgbWZuOiAwMDIxOGU2Ngoo
WEVOKSAgIGdmbjogMDAwMDAxNTUgIG1mbjogMDAxMDJjMTcKKFhFTikgICBnZm46IDAwMDAw
MTU2ICBtZm46IDAwMjE4ZTY1CihYRU4pICAgZ2ZuOiAwMDAwMDE1NyAgbWZuOiAwMDEwMWRk
OAooWEVOKSAgIGdmbjogMDAwMDAxNTggIG1mbjogMDAyMThlNjQKKFhFTikgICBnZm46IDAw
MDAwMTU5ICBtZm46IDAwMTAxZGQ3CihYRU4pICAgZ2ZuOiAwMDAwMDE1YSAgbWZuOiAwMDIx
OGU2MwooWEVOKSAgIGdmbjogMDAwMDAxNWIgIG1mbjogMDAxMDI3YmYKKFhFTikgICBnZm46
IDAwMDAwMTVjICBtZm46IDAwMjE4ZTYyCihYRU4pICAgZ2ZuOiAwMDAwMDE1ZCAgbWZuOiAw
MDEwMDUzNAooWEVOKSAgIGdmbjogMDAwMDAxNWUgIG1mbjogMDAyMThlNjEKKFhFTikgICBn
Zm46IDAwMDAwMTVmICBtZm46IDAwMTAyZTk2CihYRU4pICAgZ2ZuOiAwMDAwMDE2MCAgbWZu
OiAwMDIxOGU2MAooWEVOKSAgIGdmbjogMDAwMDAxNjEgIG1mbjogMDAxMDE0Y2EKKFhFTikg
ICBnZm46IDAwMDAwMTYyICBtZm46IDAwMjE4ZTVmCihYRU4pICAgZ2ZuOiAwMDAwMDE2MyAg
bWZuOiAwMDEwMTRjOQooWEVOKSAgIGdmbjogMDAwMDAxNjQgIG1mbjogMDAyMThlNWUKKFhF
TikgICBnZm46IDAwMDAwMTY1ICBtZm46IDAwMTAyZTJhCihYRU4pICAgZ2ZuOiAwMDAwMDE2
NiAgbWZuOiAwMDIxOGU1ZAooWEVOKSAgIGdmbjogMDAwMDAxNjcgIG1mbjogMDAxMDJlMjkK
KFhFTikgICBnZm46IDAwMDAwMTY4ICBtZm46IDAwMjE4ZTVjCihYRU4pICAgZ2ZuOiAwMDAw
MDE2OSAgbWZuOiAwMDEwMjQ2NQooWEVOKSAgIGdmbjogMDAwMDAxNmEgIG1mbjogMDAyMThl
NWIKKFhFTikgICBnZm46IDAwMDAwMTZiICBtZm46IDAwMTAyZTg0CihYRU4pICAgZ2ZuOiAw
MDAwMDE2YyAgbWZuOiAwMDIxOGU1YQooWEVOKSAgIGdmbjogMDAwMDAxNmQgIG1mbjogMDAx
MDJlODMKKFhFTikgICBnZm46IDAwMDAwMTZlICBtZm46IDAwMjE4ZTU5CihYRU4pICAgZ2Zu
OiAwMDAwMDE2ZiAgbWZuOiAwMDEwMjc1NwooWEVOKSAgIGdmbjogMDAwMDAxNzAgIG1mbjog
MDAyMThlNTgKKFhFTikgICBnZm46IDAwMDAwMTcxICBtZm46IDAwMTAxNDkyCihYRU4pICAg
Z2ZuOiAwMDAwMDE3MiAgbWZuOiAwMDIxOGU1NwooWEVOKSAgIGdmbjogMDAwMDAxNzMgIG1m
bjogMDAxMDJhMTAKKFhFTikgICBnZm46IDAwMDAwMTc0ICBtZm46IDAwMjE4ZTU2CihYRU4p
ICAgZ2ZuOiAwMDAwMDE3NSAgbWZuOiAwMDEwMmEwZgooWEVOKSAgIGdmbjogMDAwMDAxNzYg
IG1mbjogMDAyMThlNTUKKFhFTikgICBnZm46IDAwMDAwMTc3ICBtZm46IDAwMTAyZWM0CihY
RU4pICAgZ2ZuOiAwMDAwMDE3OCAgbWZuOiAwMDIxOGU1NAooWEVOKSAgIGdmbjogMDAwMDAx
NzkgIG1mbjogMDAxMDJlYzMKKFhFTikgICBnZm46IDAwMDAwMTdhICBtZm46IDAwMjE4ZTUz
CihYRU4pICAgZ2ZuOiAwMDAwMDE3YiAgbWZuOiAwMDEwMmE1YQooWEVOKSAgIGdmbjogMDAw
MDAxN2MgIG1mbjogMDAyMThlNTIKKFhFTikgICBnZm46IDAwMDAwMTdkICBtZm46IDAwMTAy
YTU5CihYRU4pICAgZ2ZuOiAwMDAwMDE3ZSAgbWZuOiAwMDIxOGU1MQooWEVOKSAgIGdmbjog
MDAwMDAxN2YgIG1mbjogMDAxMDEzMGUKKFhFTikgICBnZm46IDAwMDAwMTgwICBtZm46IDAw
MjE4ZTUwCihYRU4pICAgZ2ZuOiAwMDAwMDE4MSAgbWZuOiAwMDEwMThiNAooWEVOKSAgIGdm
bjogMDAwMDAxODIgIG1mbjogMDAyMThlNGYKKFhFTikgICBnZm46IDAwMDAwMTgzICBtZm46
IDAwMTAxOGIzCihYRU4pICAgZ2ZuOiAwMDAwMDE4NCAgbWZuOiAwMDIxOGU0ZQooWEVOKSAg
IGdmbjogMDAwMDAxODUgIG1mbjogMDAxMDI0N2UKKFhFTikgICBnZm46IDAwMDAwMTg2ICBt
Zm46IDAwMjE4ZTRkCihYRU4pICAgZ2ZuOiAwMDAwMDE4NyAgbWZuOiAwMDEwMTM5NAooWEVO
KSAgIGdmbjogMDAwMDAxODggIG1mbjogMDAyMThlNGMKKFhFTikgICBnZm46IDAwMDAwMTg5
ICBtZm46IDAwMTAxMzkzCihYRU4pICAgZ2ZuOiAwMDAwMDE4YSAgbWZuOiAwMDIxOGU0Ygoo
WEVOKSAgIGdmbjogMDAwMDAxOGIgIG1mbjogMDAxMDJkZDYKKFhFTikgICBnZm46IDAwMDAw
MThjICBtK2ZuOiAwMDIxOGU0YQooWEVOKSAgIGdmbjogMDAwMDAxOGQgIG1mbjogMDAxMDJk
ZDUKKFhFTikgICBnZm46IDAwMDAwMThlICBtZm46IDAwMjE4ZTQ5CihYRU4pICAgZ2ZuOiAw
MDAwMDE4ZiAgbWZuOiAwMDEwMjk5NAooWEVOKSAgIGdmbjogMDAwMDAxOTAgIG1mbjogMDAy
MThlNDgKKFhFTikgICBnZm46IDAwMDAwMTkxICBtZm46IDAwMTAyOTkzCihYRU4pICAgZ2Zu
OiAwMDAwMDE5MiAgbWZuOiAwMDIxOGU0NwooWEVOKSAgIGdmbjogMDAwMDAxOTMgIG1mbjog
MDAxMDJkZTIKKFhFTikgICBnZm46IDAwMDAwMTk0ICBtZm46IDAwMjE4ZTQ2CihYRU4pICAg
Z2ZuOiAwMDAwMDE5NSAgbWZuOiAwMDEwMmRlMQooWEVOKSAgIGdmbjogMDAwMDAxOTYgIG1m
bjogMDAyMThlNDUKKFhFTikgICBnZm46IDAwMDAwMTk3ICBtZm46IDAwMTAxNzEzCihYRU4p
ICAgZ2ZuOiAwMDAwMDE5OCAgbWZuOiAwMDIxOGU0NAooWEVOKSAgIGdmbjogMDAwMDAxOTkg
IG1mbjogMDAxMDEzOWEKKFhFTikgICBnZm46IDAwMDAwMTlhICBtZm46IDAwMjE4ZTQzCihY
RU4pICAgZ2ZuOiAwMDAwMDE5YiAgbWZuOiAwMDEwMmMyNgooWEVOKSAgIGdmbjogMDAwMDAx
OWMgIG1mbjogMDAyMThlNDIKKFhFTikgICBnZm46IDAwMDAwMTlkICBtZm46IDAwMTAyYzI1
CihYRU4pICAgZ2ZuOiAwMDAwMDE5ZSAgbWZuOiAwMDIxOGU0MQooWEVOKSAgIGdmbjogMDAw
MDAxOWYgIG1mbjogMDAxMDE0NDgKKFhFTikgICBnZm46IDAwMDAwMWEwICBtZm46IDAwMjE4
ZTQwCihYRU4pICAgZ2ZuOiAwMDAwMDFhMSAgbWZuOiAwMDEwMTQ0NwooWEVOKSAgIGdmbjog
MDAwMDAxYTIgIG1mbjogMDAyMThlM2YKKFhFTikgICBnZm46IDAwMDAwMWEzICBtZm46IDAw
MTAxZTg0CihYRU4pICAgZ2ZuOiAwMDAwMDFhNCAgbWZuOiAwMDIxOGUzZQooWEVOKSAgIGdm
bjogMDAwMDAxYTUgIG1mbjogMDAxMDEzYTcKKFhFTikgICBnZm46IDAwMDAwMWE2ICBtZm46
IDAwMjE4ZTNkCihYRU4pICAgZ2ZuOiAwMDAwMDFhNyAgbWZuOiAwMDEwMmRlZgooWEVOKSAg
IGdmbjogMDAwMDAxYTggIG1mbjogMDAyMThlM2MKKFhFTikgICBnZm46IDAwMDAwMWE5ICBt
Zm46IDAwMTAxMmRiCihYRU4pICAgZ2ZuOiAwMDAwMDFhYSAgbWZuOiAwMDIxOGUzYgooWEVO
KSAgIGdmbjogMDAwMDAxYWIgIG1mbjogMDAxMDA1MjQKKFhFTikgICBnZm46IDAwMDAwMWFj
ICBtZm46IDAwMjE4ZTNhCihYRU4pICAgZ2ZuOiAwMDAwMDFhZCAgbWZuOiAwMDEwMDUyMwoo
WEVOKSAgIGdmbjogMDAwMDAxYWUgIG1mbjogMDAyMThlMzkKKFhFTikgICBnZm46IDAwMDAw
MWFmICBtZm46IDAwMTAyZTA4CihYRU4pICAgZ2ZuOiAwMDAwMDFiMCAgbWZuOiAwMDIxOGUz
OAooWEVOKSAgIGdmbjogMDAwMDAxYjEgIG1mbjogMDAxMDJlMDcKKFhFTikgICBnZm46IDAw
MDAwMWIyICBtZm46IDAwMjE4ZTM3CihYRU4pICAgZ2ZuOiAwMDAwMDFiMyAgbWZuOiAwMDEw
Mjg4MQooWEVOKSAgIGdmbjogMDAwMDAxYjQgIG1mbjogMDAyMThlMzYKKFhFTikgICBnZm46
IDAwMDAwMWI1ICBtZm46IDAwMTAyOTQ2CihYRU4pICAgZ2ZuOiAwMDAwMDFiNiAgbWZuOiAw
MDIxOGUzNQooWEVOKSAgIGdmbjogMDAwMDAxYjcgIG1mbjogMDAxMDAzNTcKKFhFTikgICBn
Zm46IDAwMDAwMWI4ICBtZm46IDAwMjE4ZTM0CihYRU4pICAgZ2ZuOiAwMDAwMDFiOSAgbWZu
OiAwMDEwMTg3NgooWEVOKSAgIGdmbjogMDAwMDAxYmEgIG1mbjogMDAyMThlMzMKKFhFTikg
ICBnZm46IDAwMDAwMWJiICBtZm46IDAwMTAxODc1CihYRU4pICAgZ2ZuOiAwMDAwMDFiYyAg
bWZuOiAwMDIxOGUzMgooWEVOKSAgIGdmbjogMDAwMDAxYmQgIG1mbjogMDAxMDE0OGMKKFhF
TikgICBnZm46IDAwMDAwMWJlICBtZm46IDAwMjE4ZTMxCihYRU4pICAgZ2ZuOiAwMDAwMDFi
ZiAgbWZuOiAwMDEwMTQ4YgooWEVOKSAgIGdmbjogMDAwMDAxYzAgIG1mbjogMDAyMThlMzAK
KFhFTikgICBnZm46IDAwMDAwMWMxICBtZm46IDAwMTAxODlhCihYRU4pICAgZ2ZuOiAwMDAw
MDFjMiAgbWZuOiAwMDIxOGUyZgooWEVOKSAgIGdmbjogMDAwMDAxYzMgIG1mbjogMDAxMDE4
OTkKKFhFTikgICBnZm46IDAwMDAwMWM0ICBtZm46IDAwMjE4ZTJlCihYRU4pICAgZ2ZuOiAw
MDAwMDFjNSAgbWZuOiAwMDEwMTg1OAooWEVOKSAgIGdmbjogMDAwMDAxYzYgIG1mbjogMDAy
MThlMmQKKFhFTikgICBnZm46IDAwMDAwMWM3ICBtZm46IDAwMTAxODU3CihYRU4pICAgZ2Zu
OiAwMDAwMDFjOCAgbWZuOiAwMDIxOGUyYwooWEVOKSAgIGdmbjogMDAwMDAxYzkgIG1mbjog
MDAxMDJkZTYKKFhFTikgICBnZm46IDAwMDAwMWNhICBtZm46IDAwMjE4ZTJiCihYRU4pICAg
Z2ZuOiAwMDAwMDFjYiAgbWZuOiAwMDEwMmRlNQooWEVOKSAgIGdmbjogMDAwMDAxY2MgIG1m
bjogMDAyMThlMmEKKFhFTikgICBnZm46IDAwMDAwMWNkICBtZm46IDAwMTAxNzVkCihYRU4p
ICAgZ2ZuOiAwMDAwMDFjZSAgbWZuOiAwMDIxOGUyOQooWEVOKSAgIGdmbjogMDAwMDAxY2Yg
IG1mbjogMDAxMDE3NWMKKFhFTikgICBnZm46IDAwMDAwMWQwICBtZm46IDAwMjE4ZTI4CihY
RU4pICAgZ2ZuOiAwMDAwMDFkMSAgbWZuOiAwMDEwMmM1MwooWEVOKSAgIGdmbjogMDAwMDAx
ZDIgIG1mbjogMDAyMThlMjcKKFhFTikgICBnZm46IDAwMDAwMWQzICBtZm46IDAwMTAyYzUy
CihYRU4pICAgZ2ZuOiAwMDAwMDFkNCAgbWZuOiAwMDIxOGUyNgooWEVOKSAgIGdmbjogMDAw
MDAxZDUgIG1mbjogMDAxMDE3NDEKKFhFTikgICBnZm46IDAwMDAwMWQ2ICBtZm46IDAwMjE4
ZTI1CihYRU4pICAgZ2ZuOiAwMDAwMDFkNyAgbWZuOiAwMDEwMTc0MAooWEVOKSAgIGdmbjog
MDAwMDAxZDggIG1mbjogMDAyMThlMjQKKFhFTikgICBnZm46IDAwMDAwMWQ5ICBtZm46IDAw
MTAyODZiCihYRU4pICAgZ2ZuOiAwMDAwMDFkYSAgbWZuOiAwMDIxOGUyMwooWEVOKSAgIGdm
bjogMDAwMDAxZGIgIG1mbjogMDAxMDI4NmEKKFhFTikgICBnZm46IDAwMDAwMWRjICBtZm46
IDAwMjE4ZTIyCihYRU4pICAgZ2ZuOiAwMDAwMDFkZCAgbWZuOiAwMDEwMTgwNQooWEVOKSAg
IGdmbjogMDAwMDAxZGUgIG1mbjogMDAyMThlMjEKKFhFTikgICBnZm46IDAwMDAwMWRmICBt
Zm46IDAwMTAxODA0CihYRU4pICAgZ2ZuOiAwMDAwMDFlMCAgbWZuOiAwMDIxOGUyMAooWEVO
KSAgIGdmbjogMDAwMDAxZTEgIG1mbjogMDAxMDE4MDMKKFhFTikgICBnZm46IDAwMDAwMWUy
ICBtZm46IDAwMjE4ZTFmCihYRU4pICAgZ2ZuOiAwMDAwMDFlMyAgbWZuOiAwMDEwMTgwMgoo
WEVOKSAgIGdmbjogMDAwMDAxZTQgIG1mbjogMDAyMThlMWUKKFhFTikgICBnZm46IDAwMDAw
MWU1ICBtZm46IDAwMTAyODdiCihYRU4pICAgZ2ZuOiAwMDAwMDFlNiAgbWZuOiAwMDIxOGUx
ZAooWEVOKSAgIGdmbjogMDAwMDAxZTcgIG1mbjogMDAxMDI4N2EKKFhFTikgICBnZm46IDAw
MDAwMWU4ICBtZm46IDAwMjE4ZTFjCihYRU4pICAgZ2ZuOiAwMDAwMDFlOSAgbWZuOiAwMDEw
MjdiYgooWEVOKSAgIGdmbjogMDAwMDAxZWEgIG1mbjogMDAyMThlMWIKKFhFTikgICBnZm46
IDAwMDAwMWViICBtZm46IDAwMTAyN2JhCihYRU4pICAgZ2ZuOiAwMDAwMDFlYyAgbWZuOiAw
MDIxOGUxYQooWEVOKSAgIGdmbjogMDAwMDAxZWQgIG1mbjogMDAxMDJjZmIKKFhFTikgICBn
Zm46IDAwMDAwMWVlICBtZm46IDAwMjE4ZTE5CihYRU4pICAgZ2ZuOiAwMDAwMDFlZiAgbWZu
OiAwMDEwMmNmYQooWEVOKSAgIGdmbjogMDAwMDAxZjAgIG1mbjogMDAyMThlMTgKKFhFTikg
ICBnZm46IDAwMDAwMWYxICBtZm46IDAwMTAxZTM3CihYRU4pICAgZ2ZuOiAwMDAwMDFmMiAg
bWZuOiAwMDIxOGUxNwooWEVOKSAgIGdmbjogMDAwMDAxZjMgIG1mbjogMDAxMDFlMzYKKFhF
TikgICBnZm46IDAwMDAwMWY0ICBtZm46IDAwMjE4ZTE2CihYRU4pICAgZ2ZuOiAwMDAwMDFm
NSAgbWZuOiAwMDEwMWFhZgooWEVOKSAgIGdmbjogMDAwMDAxZjYgIG1mbjogMDAyMThlMTUK
KFhFTikgICBnZm46IDAwMDAwMWY3ICBtZm46IDAwMTAxYWFlCihYRU4pICAgZ2ZuOiAwMDAw
MDFmOCAgbWZuOiAwMDIxOGUxNAooWEVOKSAgIGdmbjogMDAwMDAxZjkgIG1mbjogMDAxMDFh
NmYKKFhFTikgICBnZm46IDAwMDAwMWZhICBtZm46IDAwMjE4ZTEzCihYRU4pICAgZ2ZuOiAw
MDAwMDFmYiAgbWZuOiAwMDEwMWE2ZQooWEVOKSAgIGdmbjogMDAwMDAxZmMgIG1mbjogMDAy
MThlMTIKKFhFTikgICBnZm46IDAwMDAwMWZkICBtZm46IDAwMTAxYTQzCihYRU4pICAgZ2Zu
OiAwMDAwMDFmZSAgbWZuOiAwMDIxOGUxMQooWEVOKSAgIGdmbjogMDAwMDAxZmYgIG1mbjog
MDAxMDFhNDIKKFhFTikgIGdmbjogMDAwMDAyMDAgIG1mbjogMDAyMThjMDAKKFhFTikgIGdm
bjogMDAwMDA0MDAgIG1mbjogMDAxMGJlMDAKKFhFTikgIGdmbjogMDAwMDA2MDAgIG1mbjog
MDAyMThhMDAKKFhFTikgIGdmbjogMDAwMDA4MDAgIG1mbjogMDAxMGJjMDAKKFhFTikgIGdm
bjogMDAwMDBhMDAgIG1mbjogMDAyMTg4MDAKKFhFTikgIGdmbjogMDAwMDBjMDAgIG1mbjog
MDAxMGJhMDAKKFhFTikgIGdmbjogMDAwMDBlMDAgIG1mbjogMDAyMTg2MDAKKFhFTikgIGdm
bjogMDAwMDEwMDAgIG1mbjogMDAxMGI4MDAKKFhFTikgIGdmbjogMDAwMDEyMDAgIG1mbjog
MDAyMTg0MDAKKFhFTikgIGdmbjogMDAwMDE0MDAgIG1mbjogMDAxMGI2MDAKKFhFTikgIGdm
bjogMDAwMDE2MDAgIG1mbjogMDAyMTgyMDAKKFhFTikgIGdmbjogMDAwMDE4MDAgIG1mbjog
MDAxMGI0MDAKKFhFTikgIGdmbjogMDAwMDFhMDAgIG1mbjogMDAyMTgwMDAKKFhFTikgIGdm
bjogMDAwMDFjMDAgIG1mbjogMDAxMGIyMDAKKFhFTikgIGdmbjogMDAwMDFlMDAgIG1mbjog
MDAyMWZlMDAKKFhFTikgIGdmbjogMDAwMDIwMDAgIG1mbjogMDAxMGIwMDAKKFhFTikgIGdm
bjogMDAwMDIyMDAgIG1mbjogMDAyMWZjMDAKKFhFTikgIGdmbjogMDAwMDI0MDAgIG1mbjog
MDAxMGFlMDAKKFhFTikgIGdmbjogMDAwMDI2MDAgIG1mbjogMDAyMWZhMDAKKFhFTikgIGdm
bjogMDAwMDI4MDAgIG1mbjogMDAxMGFjMDAKKFhFTikgIGdmbjogMDAwMDJhMDAgIG1mbjog
MDAyMWY4MDAKKFhFTikgIGdmbjogMDAwMDJjMDAgIG1mbjogMDAxMGFhMDAKKFhFTikgIGdm
bjogMDAwMDJlMDAgIG1mbjogMDAyMWY2MDAKKFhFTikgIGdmbjogMDAwMDMwMDAgIG1mbjog
MDAxMGE4MDAKKFhFTikgIGdmbjogMDAwMDMyMDAgIG1mbjogMDAyMWY0MDAKKFhFTikgIGdm
bjogMDAwMDM0MDAgIG1mbjogMDAxMGE2MDAKKFhFTikgIGdmbjogMDAwMDM2MDAgIG1mbjog
MDAyMWYyMDAKKFhFTikgIGdmbjogMDAwMDM4MDAgIG1mbjogMDAxMGE0MDAKKFhFTikgIGdm
bjogMDAwMDNhMDAgIG1mbjogMDAyMWYwMDAKKFhFTikgIGdmbjogMDAwMDNjMDAgIG1mbjog
MDAxMGEyMDAKKFhFTikgIGdmbjogMDAwMDNlMDAgIG1mbjogMDAyMWVlMDAKKFhFTikgIGdm
bjogMDAwMDQwMDAgIG1mbjogMDAxMGEwMDAKKFhFTikgIGdmbjogMDAwMDQyMDAgIG1mbjog
MDAyMWVjMDAKKFhFTikgIGdmbjogMDAwMDQ0MDAgIG1mbjogMDAxMGZlMDAKKFhFTikgIGdm
bjogMDAwMDQ2MDAgIG1mbjogMDAyMWVhMDAKKFhFTikgIGdmbjogMDAwMDQ4MDAgIG1mbjog
MDAxMGZjMDAKKFhFTikgIGdmbjogMDAwMDRhMDAgIG1mbjogMDAyMWU4MDAKKFhFTikgIGdm
bjogMDAwMDRjMDAgIG1mbjogMDAxMGZhMDAKKFhFTikgIGdmbjogMDAwMDRlMDAgIG1mbjog
MDAyMWU2MDAKKFhFTikgIGdmbjogMDAwMDUwMDAgIG1mbjogMDAxMGY4MDAKKFhFTikgIGdm
bjogMDAwMDUyMDAgIG1mbjogMDAyMWU0MDAKKFhFTikgIGdmbjogMDAwMDU0MDAgIG1mbjog
MDAxMGY2MDAKKFhFTikgIGdmbjogMDAwMDU2MDAgIG1mbjogMDAyMWUyMDAKKFhFTikgIGdm
bjogMDAwMDU4MDAgIG1mbjogMDAxMGY0MDAKKFhFTikgIGdmbjogMDAwMDVhMDAgIG1mbjog
MDAyMWUwMDAKKFhFTikgIGdmbjogMDAwMDVjMDAgIG1mbjogMDAxMGYyMDAKKFhFTikgIGdm
bjogMDAwMDVlMDAgIG1mbjogMDAyMTdlMDAKKFhFTikgIGdmbjogMDAwMDYwMDAgIG1mbjog
MDAxMGYwMDAKKFhFTikgIGdmbjogMDAwMDYyMDAgIG1mbjogMDAyMTdjMDAKKFhFTikgIGdm
bjogMDAwMDY0MDAgIG1mbjogMDAxMGVlMDAKKFhFTikgIGdmbjogMDAwMDY2MDAgIG1mbjog
MDAyMTdhMDAKKFhFTikgIGdmbjogMDAwMDY4MDAgIG1mbjogMDAxMGVjMDAKKFhFTikgIGdm
bjogMDAwMDZhMDAgIG1mbjogMDAyMTc4MDAKKFhFTikgIGdmbjogMDAwMDZjMDAgIG1mbjog
MDAxMGVhMDAKKFhFTikgIGdmbjogMDAwMDZlMDAgIG1mbjogMDAyMTc2MDAKKFhFTikgIGdm
bjogMDAwMDcwMDAgIG1mbjogMDAxMGU4MDAKKFhFTikgIGdmbjogMDAwMDcyMDAgIG1mbjog
MDAyMTc0MDAKKFhFTikgIGdmbjogMDAwMDc0MDAgIG1mbjogMDAxMGU2MDAKKFhFTikgIGdm
bjogMDAwMDc2MDAgIG1mbjogMDAyMTcyMDAKKFhFTikgIGdmbjogMDAwMDc4MDAgIG1mbjog
MDAxMGU0MDAKKFhFTikgIGdmbjogMDAwMDdhMDAgIG1mbjogMDAyMTcwMDAKKFhFTikgIGdm
bjogMDAwMDdjMDAgIG1mbjogMDAxMGUyMDAKKFhFTikgIGdmbjogMDAwMDdlMDAgIG1mbjog
MDAyMTZlMDAKKFhFTikgIGdmbjogMDAwMDgwMDAgIG1mbjogMDAxMGUwMDAKKFhFTikgIGdm
bjogMDAwMDgyMDAgIG1mbjogMDAyMTZjMDAKKFhFTikgIGdmbjogMDAwMDg0MDAgIG1mbjog
MDAxMGRlMDAKKFhFTikgIGdmbjogMDAwMDg2MDAgIG1mbjogMDAyMTZhMDAKKFhFTikgIGdm
bjogMDAwMDg4MDAgIG1mbjogMDAxMGRjMDAKKFhFTikgIGdmbjogMDAwMDhhMDAgIG1mbjog
MDAyMTY4MDAKKFhFTikgIGdmbjogMDAwMDhjMDAgIG1mbjogMDAxMGRhMDAKKFhFTikgIGdm
bjogMDAwMDhlMDAgIG1mbjogMDAyMTY2MDAKKFhFTikgIGdmbjogMDAwMDkwMDAgIG1mbjog
MDAxMGQ4MDAKKFhFTikgIGdmbjogMDAwMDkyMDAgIG1mbjogMDAyMTY0MDAKKFhFTikgIGdm
bjogMDAwMDk0MDAgIG1mbjogMDAxMGQ2MDAKKFhFTikgIGdmbjogMDAwMDk2MDAgIG1mbjog
MDAyMTYyMDAKKFhFTikgIGdmbjogMDAwMDk4MDAgIG1mbjogMDAxMGQ0MDAKKFhFTikgIGdm
bjogMDAwMDlhMDAgIG1mbjogMDAyMTYwMDAKKFhFTikgIGdmbjogMDAwMDljMDAgIG1mbjog
MDAxMGQyMDAKKFhFTikgIGdmbjogMDAwMDllMDAgIG1mbjogMDAyMTVlMDAKKFhFTikgIGdm
bjogMDAwMGEwMDAgIG1mbjogMDAxMGQwMDAKKFhFTikgIGdmbjogMDAwMGEyMDAgIG1mbjog
MDAyMTVjMDAKKFhFTikgIGdmbjogMDAwMGE0MDAgIG1mbjogMDAxMGNlMDAKKFhFTikgIGdm
bjogMDAwMGE2MDAgIG1mbjogMDAyMTVhMDAKKFhFTikgIGdmbjogMDAwMGE4MDAgIG1mbjog
MDAxMGNjMDAKKFhFTikgIGdmbjogMDAwMGFhMDAgIG1mbjogMDAyMTU4MDAKKFhFTikgIGdm
bjogMDAwMGFjMDAgIG1mbjogMDAxMGNhMDAKKFhFTikgIGdmbjogMDAwMGFlMDAgIG1mbjog
MDAyMTU2MDAKKFhFTikgIGdmbjogMDAwMGIwMDAgIG1mbjogMDAxMGM4MDAKKFhFTikgIGdm
bjogMDAwMGIyMDAgIG1mbjogMDAyMTU0MDAKKFhFTikgIGdmbjogMDAwMGI0MDAgIG1mbjog
MDAxMGM2MDAKKFhFTikgIGdmbjogMDAwMGI2MDAgIG1mbjogMDAyMTUyMDAKKFhFTikgIGdm
bjogMDAwMGI4MDAgIG1mbjogMDAxMGM0MDAKKFhFTikgIGdmbjogMDAwMGJhMDAgIG1mbjog
MDAyMTUwMDAKKFhFTikgIGdmbjogMDAwMGJjMDAgIG1mbjogMDAxMGMyMDAKKFhFTikgIGdm
bjogMDAwMGJlMDAgIG1mbjogMDAyMTRlMDAKKFhFTikgIGdmbjogMDAwMGMwMDAgIG1mbjog
MDAxMGMwMDAKKFhFTikgIGdmbjogMDAwMGMyMDAgIG1mbjogMDAyMTRjMDAKKFhFTikgIGdm
bjogMDAwMGM0MDAgIG1mbjogMDAxMWZlMDAKKFhFTikgIGdmbjogMDAwMGM2MDAgIG1mbjog
MDAyMTRhMDAKKFhFTikgIGdmbjogMDAwMGM4MDAgIG1mbjogMDAxMWZjMDAKKFhFTikgIGdm
bjogMDAwMGNhMDAgIG1mbjogMDAyMTQ4MDAKKFhFTikgIGdmbjogMDAwMGNjMDAgIG1mbjog
MDAxMWZhMDAKKFhFTikgIGdmbjogMDAwMGNlMDAgIG1mbjogMDAyMTQ2MDAKKFhFTikgIGdm
bjogMDAwMGQwMDAgIG1mbjogMDAxMWY4MDAKKFhFTikgIGdmbjogMDAwMGQyMDAgIG1mbjog
MDAyMTQ0MDAKKFhFTikgIGdmbjogMDAwMGQ0MDAgIG1mbjogMDAxMWY2MDAKKFhFTikgIGdm
bjogMDAwMGQ2MDAgIG1mbjogMDAyMTQyMDAKKFhFTikgIGdmbjogMDAwMGQ4MDAgIG1mbjog
MDAxMWY0MDAKKFhFTikgIGdmbjogMDAwMGRhMDAgIG1mbjogMDAyMTQwMDAKKFhFTikgIGdm
bjogMDAwMGRjMDAgIG1mbjogMDAxMWYyMDAKKFhFTikgIGdmbjogMDAwMGRlMDAgIG1mbjog
MDAyMTNlMDAKKFhFTikgIGdmbjogMDAwMGUwMDAgIG1mbjogMDAxMWYwMDAKKFhFTikgIGdm
bjogMDAwMGUyMDAgIG1mbjogMDAyMTNjMDAKKFhFTikgIGdmbjogMDAwMGU0MDAgIG1mbjog
MDAxMWVlMDAKKFhFTikgIGdmbjogMDAwMGU2MDAgIG1mbjogMDAyMTNhMDAKKFhFTikgIGdm
bjogMDAwMGU4MDAgIG1mbjogMDAxMWVjMDAKKFhFTikgIGdmbjogMDAwMGVhMDAgIG1mbjog
MDAyMTM4MDAKKFhFTikgIGdmbjogMDAwMGVjMDAgIG1mbjogMDAxMWVhMDAKKFhFTikgIGdm
bjogMDAwMGVlMDAgIG1mbjogMDAyMTM2MDAKKFhFTikgIGdmbjogMDAwMGYwMDAgIG1mbjog
MDAxMWU4MDAKKFhFTikgIGdmbjogMDAwMGYyMDAgIG1mbjogMDAyMTM0MDAKKFhFTikgIGdm
bjogMDAwMGY0MDAgIG1mbjogMDAxMWU2MDAKKFhFTikgIGdmbjogMDAwMGY2MDAgIG1mbjog
MDAyMTMyMDAKKFhFTikgIGdmbjogMDAwMGY4MDAgIG1mbjogMDAxMWU0MDAKKFhFTikgIGdm
bjogMDAwMGZhMDAgIG1mbjogMDAyMTMwMDAKKFhFTikgIGdmbjogMDAwMGZjMDAgIG1mbjog
MDAxMWUyMDAKKFhFTikgIGdmbjogMDAwMGZlMDAgIG1mbjogMDAyMTJlMDAKKFhFTikgIGdm
bjogMDAwMTAwMDAgIG1mbjogMDAxMWUwMDAKKFhFTikgIGdmbjogMDAwMTAyMDAgIG1mbjog
MDAyMTJjMDAKKFhFTikgIGdmbjogMDAwMTA0MDAgIG1mbjogMDAxMWRlMDAKKFhFTikgIGdm
bjogMDAwMTA2MDAgIG1mbjogMDAyMTJhMDAKKFhFTikgIGdmbjogMDAwMTA4MDAgIG1mbjog
MDAxMWRjMDAKKFhFTikgIGdmbjogMDAwMTBhMDAgIG1mbjogMDAyMTI4MDAKKFhFTikgIGdm
bjogMDAwMTBjMDAgIG1mbjogMDAxMWRhMDAKKFhFTikgIGdmbjogMDAwMTBlMDAgIG1mbjog
MDAyMTI2MDAKKFhFTikgIGdmbjogMDAwMTEwMDAgIG1mbjogMDAxMWQ4MDAKKFhFTikgIGdm
bjogMDAwMTEyMDAgIG1mbjogMDAyMTI0MDAKKFhFTikgIGdmbjogMDAwMTE0MDAgIG1mbjog
MDAxMWQ2MDAKKFhFTikgIGdmbjogMDAwMTE2MDAgIG1mbjogMDAyMTIyMDAKKFhFTikgIGdm
bjogMDAwMTE4MDAgIG1mbjogMDAxMWQ0MDAKKFhFTikgIGdmbjogMDAwMTFhMDAgIG1mbjog
MDAyMTIwMDAKKFhFTikgIGdmbjogMDAwMTFjMDAgIG1mbjogMDAxMWQyMDAKKFhFTikgIGdm
bjogMDAwMTFlMDAgIG1mbjogMDAyMTFlMDAKKFhFTikgIGdmbjogMDAwMTIwMDAgIG1mbjog
MDAxMWQwMDAKKFhFTikgIGdmbjogMDAwMTIyMDAgIG1mbjogMDAyMTFjMDAKKFhFTikgIGdm
bjogMDAwMTI0MDAgIG1mbjogMDAxMWNlMDAKKFhFTikgIGdmbjogMDAwMTI2MDAgIG1mbjog
MDAyMTFhMDAKKFhFTikgIGdmbjogMDAwMTI4MDAgIG1mbjogMDAxMWNjMDAKKFhFTikgIGdm
bjogMDAwMTJhMDAgIG1mbjogMDAyMTE4MDAKKFhFTikgIGdmbjogMDAwMTJjMDAgIG1mbjog
MDAxMWNhMDAKKFhFTikgIGdmbjogMDAwMTJlMDAgIG1mbjogMDAyMTE2MDAKKFhFTikgIGdm
bjogMDAwMTMwMDAgIG1mbjogMDAxMWM4MDAKKFhFTikgIGdmbjogMDAwMTMyMDAgIG1mbjog
MDAyMTE0MDAKKFhFTikgIGdmbjogMDAwMTM0MDAgIG1mbjogMDAxMWM2MDAKKFhFTikgIGdm
bjogMDAwMTM2MDAgIG1mbjogMDAyMTEyMDAKKFhFTikgIGdmbjogMDAwMTM4MDAgIG1mbjog
MDAxMWM0MDAKKFhFTikgIGdmbjogMDAwMTNhMDAgIG1mbjogMDAyMTEwMDAKKFhFTikgIGdm
bjogMDAwMTNjMDAgIG1mbjogMDAxMWMyMDAKKFhFTikgIGdmbjogMDAwMTNlMDAgIG1mbjog
MDAyMTBlMDAKKFhFTikgIGdmbjogMDAwMTQwMDAgIG1mbjogMDAxMWMwMDAKKFhFTikgIGdm
bjogMDAwMTQyMDAgIG1mbjogMDAyMTBjMDAKKFhFTikgIGdmbjogMDAwMTQ0MDAgIG1mbjog
MDAxMWJlMDAKKFhFTikgIGdmbjogMDAwMTQ2MDAgIG1mbjogMDAyMTBhMDAKKFhFTikgIGdm
bjogMDAwMTQ4MDAgIG1mbjogMDAxMWJjMDAKKFhFTikgIGdmbjogMDAwMTRhMDAgIG1mbjog
MDAyMTA4MDAKKFhFTikgIGdmbjogMDAwMTRjMDAgIG1mbjogMDAxMWJhMDAKKFhFTikgIGdm
bjogMDAwMTRlMDAgIG1mbjogMDAyMTA2MDAKKFhFTikgIGdmbjogMDAwMTUwMDAgIG1mbjog
MDAxMWI4MDAKKFhFTikgIGdmbjogMDAwMTUyMDAgIG1mbjogMDAyMTA0MDAKKFhFTikgIGdm
bjogMDAwMTU0MDAgIG1mbjogMDAxMWI2MDAKKFhFTikgIGdmbjogMDAwMTU2MDAgIG1mbjog
MDAyMTAyMDAKKFhFTikgIGdmbjogMDAwMTU4MDAgIG1mbjogMDAxMWI0MDAKKFhFTikgIGdm
bjogMDAwMTVhMDAgIG1mbjogMDAyMTAwMDAKKFhFTikgIGdmbjogMDAwMTVjMDAgIG1mbjog
MDAxMWIyMDAKKFhFTikgIGdmbjogMDAwMTVlMDAgIG1mbjogMDAyMGZlMDAKKFhFTikgIGdm
bjogMDAwMTYwMDAgIG1mbjogMDAxMWIwMDAKKFhFTikgIGdmbjogMDAwMTYyMDAgIG1mbjog
MDAyMGZjMDAKKFhFTikgIGdmbjogMDAwMTY0MDAgIG1mbjogMDAxMWFlMDAKKFhFTikgIGdm
bjogMDAwMTY2MDAgIG1mbjogMDAyMGZhMDAKKFhFTikgIGdmbjogMDAwMTY4MDAgIG1mbjog
MDAxMWFjMDAKKFhFTikgIGdmbjogMDAwMTZhMDAgIG1mbjogMDAyMGY4MDAKKFhFTikgIGdm
bjogMDAwMTZjMDAgIG1mbjogMDAxMWFhMDAKKFhFTikgIGdmbjogMDAwMTZlMDAgIG1mbjog
MDAyMGY2MDAKKFhFTikgIGdmbjogMDAwMTcwMDAgIG1mbjogMDAxMWE4MDAKKFhFTikgIGdm
bjogMDAwMTcyMDAgIG1mbjogMDAyMGY0MDAKKFhFTikgIGdmbjogMDAwMTc0MDAgIG1mbjog
MDAxMWE2MDAKKFhFTikgIGdmbjogMDAwMTc2MDAgIG1mbjogMDAyMGYyMDAKKFhFTikgIGdm
bjogMDAwMTc4MDAgIG1mbjogMDAxMWE0MDAKKFhFTikgIGdmbjogMDAwMTdhMDAgIG1mbjog
MDAyMGYwMDAKKFhFTikgIGdmbjogMDAwMTdjMDAgIG1mbjogMDAxMWEyMDAKKFhFTikgIGdm
bjogMDAwMTdlMDAgIG1mbjogMDAyMGVlMDAKKFhFTikgIGdmbjogMDAwMTgwMDAgIG1mbjog
MDAxMWEwMDAKKFhFTikgIGdmbjogMDAwMTgyMDAgIG1mbjogMDAyMGVjMDAKKFhFTikgIGdm
bjogMDAwMTg0MDAgIG1mbjogMDAxMTllMDAKKFhFTikgIGdmbjogMDAwMTg2MDAgIG1mbjog
MDAyMGVhMDAKKFhFTikgIGdmbjogMDAwMTg4MDAgIG1mbjogMDAxMTljMDAKKFhFTikgIGdm
bjogMDAwMThhMDAgIG1mbjogMDAyMGU4MDAKKFhFTikgIGdmbjogMDAwMThjMDAgIG1mbjog
MDAxMTlhMDAKKFhFTikgIGdmbjogMDAwMThlMDAgIG1mbjogMDAyMGU2MDAKKFhFTikgIGdm
bjogMDAwMTkwMDAgIG1mbjogMDAxMTk4MDAKKFhFTikgIGdmbjogMDAwMTkyMDAgIG1mbjog
MDAyMGU0MDAKKFhFTikgIGdmbjogMDAwMTk0MDAgIG1mbjogMDAxMTk2MDAKKFhFTikgIGdm
bjogMDAwMTk2MDAgIG1mbjogMDAyMGUyMDAKKFhFTikgIGdmbjogMDAwMTk4MDAgIG1mbjog
MDAxMTk0MDAKKFhFTikgIGdmbjogMDAwMTlhMDAgIG1mbjogMDAyMGUwMDAKKFhFTikgIGdm
bjogMDAwMTljMDAgIG1mbjogMDAxMTkyMDAKKFhFTikgIGdmbjogMDAwMTllMDAgIG1mbjog
MDAyMGRlMDAKKFhFTikgIGdmbjogMDAwMWEwMDAgIG1mbjogMDAxMTkwMDAKKFhFTikgIGdm
bjogMDAwMWEyMDAgIG1mbjogMDAyMGRjMDAKKFhFTikgIGdmbjogMDAwMWE0MDAgIG1mbjog
MDAxMThlMDAKKFhFTikgIGdmbjogMDAwMWE2MDAgIG1mbjogMDAyMGRhMDAKKFhFTikgIGdm
bjogMDAwMWE4MDAgIG1mbjogMDAxMThjMDAKKFhFTikgIGdmbjogMDAwMWFhMDAgIG1mbjog
MDAyMGQ4MDAKKFhFTikgIGdmbjogMDAwMWFjMDAgIG1mbjogMDAxMThhMDAKKFhFTikgIGdm
bjogMDAwMWFlMDAgIG1mbjogMDAyMGQ2MDAKKFhFTikgIGdmbjogMDAwMWIwMDAgIG1mbjog
MDAxMTg4MDAKKFhFTikgIGdmbjogMDAwMWIyMDAgIG1mbjogMDAyMGQ0MDAKKFhFTikgIGdm
bjogMDAwMWI0MDAgIG1mbjogMDAxMTg2MDAKKFhFTikgIGdmbjogMDAwMWI2MDAgIG1mbjog
MDAyMGQyMDAKKFhFTikgIGdmbjogMDAwMWI4MDAgIG1mbjogMDAxMTg0MDAKKFhFTikgIGdm
bjogMDAwMWJhMDAgIG1mbjogMDAyMGQwMDAKKFhFTikgIGdmbjogMDAwMWJjMDAgIG1mbjog
MDAxMTgyMDAKKFhFTikgIGdmbjogMDAwMWJlMDAgIG1mbjogMDAyMGNlMDAKKFhFTikgIGdm
bjogMDAwMWMwMDAgIG1mbjogMDAxMTgwMDAKKFhFTikgIGdmbjogMDAwMWMyMDAgIG1mbjog
MDAyMGNjMDAKKFhFTikgIGdmbjogMDAwMWM0MDAgIG1mbjogMDAxMTdlMDAKKFhFTikgIGdm
bjogMDAwMWM2MDAgIG1mbjogMDAyMGNhMDAKKFhFTikgIGdmbjogMDAwMWM4MDAgIG1mbjog
MDAxMTdjMDAKKFhFTikgIGdmbjogMDAwMWNhMDAgIG1mbjogMDAyMGM4MDAKKFhFTikgIGdm
bjogMDAwMWNjMDAgIG1mbjogMDAxMTdhMDAKKFhFTikgIGdmbjogMDAwMWNlMDAgIG1mbjog
MDAyMGM2MDAKKFhFTikgIGdmbjogMDAwMWQwMDAgIG1mbjogMDAxMTc4MDAKKFhFTikgIGdm
bjogMDAwMWQyMDAgIG1mbjogMDAyMGM0MDAKKFhFTikgIGdmbjogMDAwMWQ0MDAgIG1mbjog
MDAxMTc2MDAKKFhFTikgIGdmbjogMDAwMWQ2MDAgIG1mbjogMDAyMGMyMDAKKFhFTikgIGdm
bjogMDAwMWQ4MDAgIG1mbjogMDAxMTc0MDAKKFhFTikgIGdmbjogMDAwMWRhMDAgIG1mbjog
MDAyMGMwMDAKKFhFTikgIGdmbjogMDAwMWRjMDAgIG1mbjogMDAxMTcyMDAKKFhFTikgIGdm
bjogMDAwMWRlMDAgIG1mbjogMDAyMGJlMDAKKFhFTikgIGdmbjogMDAwMWUwMDAgIG1mbjog
MDAxMTcwMDAKKFhFTikgIGdmbjogMDAwMWUyMDAgIG1mbjogMDAyMGJjMDAKKFhFTikgIGdm
bjogMDAwMWU0MDAgIG1mbjogMDAxMTZlMDAKKFhFTikgIGdmbjogMDAwMWU2MDAgIG1mbjog
MDAyMGJhMDAKKFhFTikgIGdmbjogMDAwMWU4MDAgIG1mbjogMDAxMTZjMDAKKFhFTikgIGdm
bjogMDAwMWVhMDAgIG1mbjogMDAyMGI4MDAKKFhFTikgIGdmbjogMDAwMWVjMDAgIG1mbjog
MDAxMTZhMDAKKFhFTikgIGdmbjogMDAwMWVlMDAgIG1mbjogMDAyMGI2MDAKKFhFTikgIGdm
bjogMDAwMWYwMDAgIG1mbjogMDAxMTY4MDAKKFhFTikgIGdmbjogMDAwMWYyMDAgIG1mbjog
MDAyMGI0MDAKKFhFTikgIGdmbjogMDAwMWY0MDAgIG1mbjogMDAxMTY2MDAKKFhFTikgIGdm
bjogMDAwMWY2MDAgIG1mbjogMDAyMGIyMDAKKFhFTikgIGdmbjogMDAwMWY4MDAgIG1mbjog
MDAxMTY0MDAKKFhFTikgIGdmbjogMDAwMWZhMDAgIG1mbjogMDAyMGIwMDAKKFhFTikgIGdm
bjogMDAwMWZjMDAgIG1mbjogMDAxMTYyMDAKKFhFTikgIGdmbjogMDAwMWZlMDAgIG1mbjog
MDAyMGFlMDAKKFhFTikgIGdmbjogMDAwMjAwMDAgIG1mbjogMDAxMTYwMDAKKFhFTikgIGdm
bjogMDAwMjAyMDAgIG1mbjogMDAyMGFjMDAKKFhFTikgIGdmbjogMDAwMjA0MDAgIG1mbjog
MDAxMTVlMDAKKFhFTikgIGdmbjogMDAwMjA2MDAgIG1mbjogMDAyMGFhMDAKKFhFTikgIGdm
bjogMDAwMjA4MDAgIG1mbjogMDAxMTVjMDAKKFhFTikgIGdmbjogMDAwMjBhMDAgIG1mbjog
MDAyMGE4MDAKKFhFTikgIGdmbjogMDAwMjBjMDAgIG1mbjogMDAxMTVhMDAKKFhFTikgIGdm
bjogMDAwMjBlMDAgIG1mbjogMDAyMGE2MDAKKFhFTikgIGdmbjogMDAwMjEwMDAgIG1mbjog
MDAxMTU4MDAKKFhFTikgIGdmbjogMDAwMjEyMDAgIG1mbjogMDAyMGE0MDAKKFhFTikgIGdm
bjogMDAwMjE0MDAgIG1mbjogMDAxMTU2MDAKKFhFTikgIGdmbjogMDAwMjE2MDAgIG1mbjog
MDAyMGEyMDAKKFhFTikgIGdmbjogMDAwMjE4MDAgIG1mbjogMDAxMTU0MDAKKFhFTikgIGdm
bjogMDAwMjFhMDAgIG1mbjogMDAyMGEwMDAKKFhFTikgIGdmbjogMDAwMjFjMDAgIG1mbjog
MDAxMTUyMDAKKFhFTikgIGdmbjogMDAwMjFlMDAgIG1mbjogMDAyMDllMDAKKFhFTikgIGdm
bjogMDAwMjIwMDAgIG1mbjogMDAxMTUwMDAKKFhFTikgIGdmbjogMDAwMjIyMDAgIG1mbjog
MDAyMDljMDAKKFhFTikgIGdmbjogMDAwMjI0MDAgIG1mbjogMDAxMTRlMDAKKFhFTikgIGdm
bjogMDAwMjI2MDAgIG1mbjogMDAyMDlhMDAKKFhFTikgIGdmbjogMDAwMjI4MDAgIG1mbjog
MDAxMTRjMDAKKFhFTikgIGdmbjogMDAwMjJhMDAgIG1mbjogMDAyMDk4MDAKKFhFTikgIGdm
bjogMDAwMjJjMDAgIG1mbjogMDAxMTRhMDAKKFhFTikgIGdmbjogMDAwMjJlMDAgIG1mbjog
MDAyMDk2MDAKKFhFTikgIGdmbjogMDAwMjMwMDAgIG1mbjogMDAxMTQ4MDAKKFhFTikgIGdm
bjogMDAwMjMyMDAgIG1mbjogMDAyMDk0MDAKKFhFTikgIGdmbjogMDAwMjM0MDAgIG1mbjog
MDAxMTQ2MDAKKFhFTikgIGdmbjogMDAwMjM2MDAgIG1mbjogMDAyMDkyMDAKKFhFTikgIGdm
bjogMDAwMjM4MDAgIG1mbjogMDAxMTQ0MDAKKFhFTikgIGdmbjogMDAwMjNhMDAgIG1mbjog
MDAyMDkwMDAKKFhFTikgIGdmbjogMDAwMjNjMDAgIG1mbjogMDAxMTQyMDAKKFhFTikgIGdm
bjogMDAwMjNlMDAgIG1mbjogMDAyMDhlMDAKKFhFTikgIGdmbjogMDAwMjQwMDAgIG1mbjog
MDAxMTQwMDAKKFhFTikgIGdmbjogMDAwMjQyMDAgIG1mbjogMDAyMDhjMDAKKFhFTikgIGdm
bjogMDAwMjQ0MDAgIG1mbjogMDAxMTNlMDAKKFhFTikgIGdmbjogMDAwMjQ2MDAgIG1mbjog
MDAyMDhhMDAKKFhFTikgIGdmbjogMDAwMjQ4MDAgIG1mbjogMDAxMTNjMDAKKFhFTikgIGdm
bjogMDAwMjRhMDAgIG1mbjogMDAyMDg4MDAKKFhFTikgIGdmbjogMDAwMjRjMDAgIG1mbjog
MDAxMTNhMDAKKFhFTikgIGdmbjogMDAwMjRlMDAgIG1mbjogMDAyMDg2MDAKKFhFTikgIGdm
bjogMDAwMjUwMDAgIG1mbjogMDAxMTM4MDAKKFhFTikgIGdmbjogMDAwMjUyMDAgIG1mbjog
MDAyMDg0MDAKKFhFTikgIGdmbjogMDAwMjU0MDAgIG1mbjogMDAxMTM2MDAKKFhFTikgIGdm
bjogMDAwMjU2MDAgIG1mbjogMDAyMDgyMDAKKFhFTikgIGdmbjogMDAwMjU4MDAgIG1mbjog
MDAxMTM0MDAKKFhFTikgIGdmbjogMDAwMjVhMDAgIG1mbjogMDAyMDgwMDAKKFhFTikgIGdm
bjogMDAwMjVjMDAgIG1mbjogMDAxMTMyMDAKKFhFTikgIGdmbjogMDAwMjVlMDAgIG1mbjog
MDAyMDdlMDAKKFhFTikgIGdmbjogMDAwMjYwMDAgIG1mbjogMDAxMTMwMDAKKFhFTikgIGdm
bjogMDAwMjYyMDAgIG1mbjogMDAyMDdjMDAKKFhFTikgIGdmbjogMDAwMjY0MDAgIG1mbjog
MDAxMTJlMDAKKFhFTikgIGdmbjogMDAwMjY2MDAgIG1mbjogMDAyMDdhMDAKKFhFTikgIGdm
bjogMDAwMjY4MDAgIG1mbjogMDAxMTJjMDAKKFhFTikgIGdmbjogMDAwMjZhMDAgIG1mbjog
MDAyMDc4MDAKKFhFTikgIGdmbjogMDAwMjZjMDAgIG1mbjogMDAxMTJhMDAKKFhFTikgIGdm
bjogMDAwMjZlMDAgIG1mbjogMDAyMDc2MDAKKFhFTikgIGdmbjogMDAwMjcwMDAgIG1mbjog
MDAxMTI4MDAKKFhFTikgIGdmbjogMDAwMjcyMDAgIG1mbjogMDAyMDc0MDAKKFhFTikgIGdm
bjogMDAwMjc0MDAgIG1mbjogMDAxMTI2MDAKKFhFTikgIGdmbjogMDAwMjc2MDAgIG1mbjog
MDAyMDcyMDAKKFhFTikgIGdmbjogMDAwMjc4MDAgIG1mbjogMDAxMTI0MDAKKFhFTikgIGdm
bjogMDAwMjdhMDAgIG1mbjogMDAyMDcwMDAKKFhFTikgIGdmbjogMDAwMjdjMDAgIG1mbjog
MDAxMTIyMDAKKFhFTikgIGdmbjogMDAwMjdlMDAgIG1mbjogMDAyMDZlMDAKKFhFTikgIGdm
bjogMDAwMjgwMDAgIG1mbjogMDAxMTIwMDAKKFhFTikgIGdmbjogMDAwMjgyMDAgIG1mbjog
MDAyMDZjMDAKKFhFTikgIGdmbjogMDAwMjg0MDAgIG1mbjogMDAxMTFlMDAKKFhFTikgIGdm
bjogMDAwMjg2MDAgIG1mbjogMDAyMDZhMDAKKFhFTikgIGdmbjogMDAwMjg4MDAgIG1mbjog
MDAxMTFjMDAKKFhFTikgIGdmbjogMDAwMjhhMDAgIG1mbjogMDAyMDY4MDAKKFhFTikgIGdm
bjogMDAwMjhjMDAgIG1mbjogMDAxMTFhMDAKKFhFTikgIGdmbjogMDAwMjhlMDAgIG1mbjog
MDAyMDY2MDAKKFhFTikgIGdmbjogMDAwMjkwMDAgIG1mbjogMDAxMTE4MDAKKFhFTikgIGdm
bjogMDAwMjkyMDAgIG1mbjogMDAyMDY0MDAKKFhFTikgIGdmbjogMDAwMjk0MDAgIG1mbjog
MDAxMTE2MDAKKFhFTikgIGdmbjogMDAwMjk2MDAgIG1mbjogMDAyMDYyMDAKKFhFTikgIGdm
bjogMDAwMjk4MDAgIG1mbjogMDAxMTE0MDAKKFhFTikgIGdmbjogMDAwMjlhMDAgIG1mbjog
MDAyMDYwMDAKKFhFTikgIGdmbjogMDAwMjljMDAgIG1mbjogMDAxMTEyMDAKKFhFTikgIGdm
bjogMDAwMjllMDAgIG1mbjogMDAyMDVlMDAKKFhFTikgIGdmbjogMDAwMmEwMDAgIG1mbjog
MDAxMTEwMDAKKFhFTikgIGdmbjogMDAwMmEyMDArICBtZm46IDAwMjA1YzAwCihYRU4pICBn
Zm46IDAwMDJhNDAwICBtZm46IDAwMTEwZTAwCihYRU4pICBnZm46IDAwMDJhNjAwICBtZm46
IDAwMjA1YTAwCihYRU4pICBnZm46IDAwMDJhODAwICBtZm46IDAwMTEwYzAwCihYRU4pICBn
Zm46IDAwMDJhYTAwICBtZm46IDAwMjA1ODAwCihYRU4pICBnZm46IDAwMDJhYzAwICBtZm46
IDAwMTEwYTAwCihYRU4pICBnZm46IDAwMDJhZTAwICBtZm46IDAwMjA1NjAwCihYRU4pICBn
Zm46IDAwMDJiMDAwICBtZm46IDAwMTEwODAwCihYRU4pICBnZm46IDAwMDJiMjAwICBtZm46
IDAwMjA1NDAwCihYRU4pICBnZm46IDAwMDJiNDAwICBtZm46IDAwMTEwNjAwCihYRU4pICBn
Zm46IDAwMDJiNjAwICBtZm46IDAwMjA1MjAwCihYRU4pICBnZm46IDAwMDJiODAwICBtZm46
IDAwMTEwNDAwCihYRU4pICBnZm46IDAwMDJiYTAwICBtZm46IDAwMjA1MDAwCihYRU4pICBn
Zm46IDAwMDJiYzAwICBtZm46IDAwMTEwMjAwCihYRU4pICBnZm46IDAwMDJiZTAwICBtZm46
IDAwMjA0ZTAwCihYRU4pICBnZm46IDAwMDJjMDAwICBtZm46IDAwMTEwMDAwCihYRU4pICBn
Zm46IDAwMDJjMjAwICBtZm46IDAwMjA0YzAwCihYRU4pICBnZm46IDAwMDJjNDAwICBtZm46
IDAwMTNmZTAwCihYRU4pICBnZm46IDAwMDJjNjAwICBtZm46IDAwMjA0YTAwCihYRU4pICBn
Zm46IDAwMDJjODAwICBtZm46IDAwMTNmYzAwCihYRU4pICBnZm46IDAwMDJjYTAwICBtZm46
IDAwMjA0ODAwCihYRU4pICBnZm46IDAwMDJjYzAwICBtZm46IDAwMTNmYTAwCihYRU4pICBn
Zm46IDAwMDJjZTAwICBtZm46IDAwMjA0NjAwCihYRU4pICBnZm46IDAwMDJkMDAwICBtZm46
IDAwMTNmODAwCihYRU4pICBnZm46IDAwMDJkMjAwICBtZm46IDAwMjA0NDAwCihYRU4pICBn
Zm46IDAwMDJkNDAwICBtZm46IDAwMTNmNjAwCihYRU4pICBnZm46IDAwMDJkNjAwICBtZm46
IDAwMjA0MjAwCihYRU4pICBnZm46IDAwMDJkODAwICBtZm46IDAwMTNmNDAwCihYRU4pICBn
Zm46IDAwMDJkYTAwICBtZm46IDAwMjA0MDAwCihYRU4pICBnZm46IDAwMDJkYzAwICBtZm46
IDAwMTNmMjAwCihYRU4pICBnZm46IDAwMDJkZTAwICBtZm46IDAwMjAzZTAwCihYRU4pICBn
Zm46IDAwMDJlMDAwICBtZm46IDAwMTNmMDAwCihYRU4pICBnZm46IDAwMDJlMjAwICBtZm46
IDAwMjAzYzAwCihYRU4pICBnZm46IDAwMDJlNDAwICBtZm46IDAwMTNlZTAwCihYRU4pICBn
Zm46IDAwMDJlNjAwICBtZm46IDAwMjAzYTAwCihYRU4pICBnZm46IDAwMDJlODAwICBtZm46
IDAwMTNlYzAwCihYRU4pICBnZm46IDAwMDJlYTAwICBtZm46IDAwMjAzODAwCihYRU4pICBn
Zm46IDAwMDJlYzAwICBtZm46IDAwMTNlYTAwCihYRU4pICBnZm46IDAwMDJlZTAwICBtZm46
IDAwMjAzNjAwCihYRU4pICBnZm46IDAwMDJmMDAwICBtZm46IDAwMTNlODAwCihYRU4pICBn
Zm46IDAwMDJmMjAwICBtZm46IDAwMjAzNDAwCihYRU4pICBnZm46IDAwMDJmNDAwICBtZm46
IDAwMTNlNjAwCihYRU4pICBnZm46IDAwMDJmNjAwICBtZm46IDAwMjAzMjAwCihYRU4pICBn
Zm46IDAwMDJmODAwICBtZm46IDAwMTNlNDAwCihYRU4pICBnZm46IDAwMDJmYTAwICBtZm46
IDAwMjAzMDAwCihYRU4pICBnZm46IDAwMDJmYzAwICBtZm46IDAwMTNlMjAwCihYRU4pICBn
Zm46IDAwMDJmZTAwICBtZm46IDAwMjAyZTAwCihYRU4pICBnZm46IDAwMDMwMDAwICBtZm46
IDAwMTNlMDAwCihYRU4pICBnZm46IDAwMDMwMjAwICBtZm46IDAwMjAyYzAwCihYRU4pICBn
Zm46IDAwMDMwNDAwICBtZm46IDAwMTNkZTAwCihYRU4pICBnZm46IDAwMDMwNjAwICBtZm46
IDAwMjAyYTAwCihYRU4pICBnZm46IDAwMDMwODAwICBtZm46IDAwMTNkYzAwCihYRU4pICBn
Zm46IDAwMDMwYTAwICBtZm46IDAwMjAyODAwCihYRU4pICBnZm46IDAwMDMwYzAwICBtZm46
IDAwMTNkYTAwCihYRU4pICBnZm46IDAwMDMwZTAwICBtZm46IDAwMjAyNjAwCihYRU4pICBn
Zm46IDAwMDMxMDAwICBtZm46IDAwMTNkODAwCihYRU4pICBnZm46IDAwMDMxMjAwICBtZm46
IDAwMjAyNDAwCihYRU4pICBnZm46IDAwMDMxNDAwICBtZm46IDAwMTNkNjAwCihYRU4pICBn
Zm46IDAwMDMxNjAwICBtZm46IDAwMjAyMjAwCihYRU4pICBnZm46IDAwMDMxODAwICBtZm46
IDAwMTNkNDAwCihYRU4pICBnZm46IDAwMDMxYTAwICBtZm46IDAwMjAyMDAwCihYRU4pICBn
Zm46IDAwMDMxYzAwICBtZm46IDAwMTNkMjAwCihYRU4pICBnZm46IDAwMDMxZTAwICBtZm46
IDAwMjAxZTAwCihYRU4pICBnZm46IDAwMDMyMDAwICBtZm46IDAwMTNkMDAwCihYRU4pICBn
Zm46IDAwMDMyMjAwICBtZm46IDAwMjAxYzAwCihYRU4pICBnZm46IDAwMDMyNDAwICBtZm46
IDAwMTNjZTAwCihYRU4pICBnZm46IDAwMDMyNjAwICBtZm46IDAwMjAxYTAwCihYRU4pICBn
Zm46IDAwMDMyODAwICBtZm46IDAwMTNjYzAwCihYRU4pICBnZm46IDAwMDMyYTAwICBtZm46
IDAwMjAxODAwCihYRU4pICBnZm46IDAwMDMyYzAwICBtZm46IDAwMTNjYTAwCihYRU4pICBn
Zm46IDAwMDMyZTAwICBtZm46IDAwMjAxNjAwCihYRU4pICBnZm46IDAwMDMzMDAwICBtZm46
IDAwMTNjODAwCihYRU4pICBnZm46IDAwMDMzMjAwICBtZm46IDAwMjAxNDAwCihYRU4pICBn
Zm46IDAwMDMzNDAwICBtZm46IDAwMTNjNjAwCihYRU4pICBnZm46IDAwMDMzNjAwICBtZm46
IDAwMjAxMjAwCihYRU4pICBnZm46IDAwMDMzODAwICBtZm46IDAwMTNjNDAwCihYRU4pICBn
Zm46IDAwMDMzYTAwICBtZm46IDAwMjAxMDAwCihYRU4pICBnZm46IDAwMDMzYzAwICBtZm46
IDAwMTNjMjAwCihYRU4pICBnZm46IDAwMDMzZTAwICBtZm46IDAwMjAwZTAwCihYRU4pICBn
Zm46IDAwMDM0MDAwICBtZm46IDAwMTNjMDAwCihYRU4pICBnZm46IDAwMDM0MjAwICBtZm46
IDAwMjAwYzAwCihYRU4pICBnZm46IDAwMDM0NDAwICBtZm46IDAwMTNiZTAwCihYRU4pICBn
Zm46IDAwMDM0NjAwICBtZm46IDAwMjAwYTAwCihYRU4pICBnZm46IDAwMDM0ODAwICBtZm46
IDAwMTNiYzAwCihYRU4pICBnZm46IDAwMDM0YTAwICBtZm46IDAwMjAwODAwCihYRU4pICBn
Zm46IDAwMDM0YzAwICBtZm46IDAwMTNiYTAwCihYRU4pICBnZm46IDAwMDM0ZTAwICBtZm46
IDAwMjAwNjAwCihYRU4pICBnZm46IDAwMDM1MDAwICBtZm46IDAwMTNiODAwCihYRU4pICBn
Zm46IDAwMDM1MjAwICBtZm46IDAwMjAwNDAwCihYRU4pICBnZm46IDAwMDM1NDAwICBtZm46
IDAwMTNiNjAwCihYRU4pICBnZm46IDAwMDM1NjAwICBtZm46IDAwMjAwMjAwCihYRU4pICBn
Zm46IDAwMDM1ODAwICBtZm46IDAwMTNiNDAwCihYRU4pICBnZm46IDAwMDM1YTAwICBtZm46
IDAwMjAwMDAwCihYRU4pICBnZm46IDAwMDM1YzAwICBtZm46IDAwMTNiMjAwCihYRU4pICBn
Zm46IDAwMDM1ZTAwICBtZm46IDAwMjNmZTAwCihYRU4pICBnZm46IDAwMDM2MDAwICBtZm46
IDAwMTNiMDAwCihYRU4pICBnZm46IDAwMDM2MjAwICBtZm46IDAwMjNmYzAwCihYRU4pICBn
Zm46IDAwMDM2NDAwICBtZm46IDAwMTNhZTAwCihYRU4pICBnZm46IDAwMDM2NjAwICBtZm46
IDAwMjNmYTAwCihYRU4pICBnZm46IDAwMDM2ODAwICBtZm46IDAwMTNhYzAwCihYRU4pICBn
Zm46IDAwMDM2YTAwICBtZm46IDAwMjNmODAwCihYRU4pICBnZm46IDAwMDM2YzAwICBtZm46
IDAwMTNhYTAwCihYRU4pICBnZm46IDAwMDM2ZTAwICBtZm46IDAwMjNmNjAwCihYRU4pICBn
Zm46IDAwMDM3MDAwICBtZm46IDAwMTNhODAwCihYRU4pICBnZm46IDAwMDM3MjAwICBtZm46
IDAwMjNmNDAwCihYRU4pICBnZm46IDAwMDM3NDAwICBtZm46IDAwMTNhNjAwCihYRU4pICBn
Zm46IDAwMDM3NjAwICBtZm46IDAwMjNmMjAwCihYRU4pICBnZm46IDAwMDM3ODAwICBtZm46
IDAwMTNhNDAwCihYRU4pICBnZm46IDAwMDM3YTAwICBtZm46IDAwMjNmMDAwCihYRU4pICBn
Zm46IDAwMDM3YzAwICBtZm46IDAwMTNhMjAwCihYRU4pICBnZm46IDAwMDM3ZTAwICBtZm46
IDAwMjNlZTAwCihYRU4pICBnZm46IDAwMDM4MDAwICBtZm46IDAwMTNhMDAwCihYRU4pICBn
Zm46IDAwMDM4MjAwICBtZm46IDAwMjNlYzAwCihYRU4pICBnZm46IDAwMDM4NDAwICBtZm46
IDAwMTM5ZTAwCihYRU4pICBnZm46IDAwMDM4NjAwICBtZm46IDAwMjNlYTAwCihYRU4pICBn
Zm46IDAwMDM4ODAwICBtZm46IDAwMTM5YzAwCihYRU4pICBnZm46IDAwMDM4YTAwICBtZm46
IDAwMjNlODAwCihYRU4pICBnZm46IDAwMDM4YzAwICBtZm46IDAwMTM5YTAwCihYRU4pICBn
Zm46IDAwMDM4ZTAwICBtZm46IDAwMjNlNjAwCihYRU4pICBnZm46IDAwMDM5MDAwICBtZm46
IDAwMTM5ODAwCihYRU4pICBnZm46IDAwMDM5MjAwICBtZm46IDAwMjNlNDAwCihYRU4pICBn
Zm46IDAwMDM5NDAwICBtZm46IDAwMTM5NjAwCihYRU4pICBnZm46IDAwMDM5NjAwICBtZm46
IDAwMjNlMjAwCihYRU4pICBnZm46IDAwMDM5ODAwICBtZm46IDAwMTM5NDAwCihYRU4pICBn
Zm46IDAwMDM5YTAwICBtZm46IDAwMjNlMDAwCihYRU4pICBnZm46IDAwMDM5YzAwICBtZm46
IDAwMTM5MjAwCihYRU4pICBnZm46IDAwMDM5ZTAwICBtZm46IDAwMjNkZTAwCihYRU4pICBn
Zm46IDAwMDNhMDAwICBtZm46IDAwMTM5MDAwCihYRU4pICBnZm46IDAwMDNhMjAwICBtZm46
IDAwMjNkYzAwCihYRU4pICBnZm46IDAwMDNhNDAwICBtZm46IDAwMTM4ZTAwCihYRU4pICBn
Zm46IDAwMDNhNjAwICBtZm46IDAwMjNkYTAwCihYRU4pICBnZm46IDAwMDNhODAwICBtZm46
IDAwMTM4YzAwCihYRU4pICBnZm46IDAwMDNhYTAwICBtZm46IDAwMjNkODAwCihYRU4pICBn
Zm46IDAwMDNhYzAwICBtZm46IDAwMTM4YTAwCihYRU4pICBnZm46IDAwMDNhZTAwICBtZm46
IDAwMjNkNjAwCihYRU4pICBnZm46IDAwMDNiMDAwICBtZm46IDAwMTM4ODAwCihYRU4pICBn
Zm46IDAwMDNiMjAwICBtZm46IDAwMjNkNDAwCihYRU4pICBnZm46IDAwMDNiNDAwICBtZm46
IDAwMTM4NjAwCihYRU4pICBnZm46IDAwMDNiNjAwICBtZm46IDAwMjNkMjAwCihYRU4pICBn
Zm46IDAwMDNiODAwICBtZm46IDAwMTM4NDAwCihYRU4pICBnZm46IDAwMDNiYTAwICBtZm46
IDAwMjNkMDAwCihYRU4pICBnZm46IDAwMDNiYzAwICBtZm46IDAwMTM4MjAwCihYRU4pICBn
Zm46IDAwMDNiZTAwICBtZm46IDAwMjNjZTAwCihYRU4pICBnZm46IDAwMDNjMDAwICBtZm46
IDAwMTM4MDAwCihYRU4pICBnZm46IDAwMDNjMjAwICBtZm46IDAwMjNjYzAwCihYRU4pICBn
Zm46IDAwMDNjNDAwICBtZm46IDAwMTM3ZTAwCihYRU4pICBnZm46IDAwMDNjNjAwICBtZm46
IDAwMjNjYTAwCihYRU4pICBnZm46IDAwMDNjODAwICBtZm46IDAwMTM3YzAwCihYRU4pICBn
Zm46IDAwMDNjYTAwICBtZm46IDAwMjNjODAwCihYRU4pICBnZm46IDAwMDNjYzAwICBtZm46
IDAwMTM3YTAwCihYRU4pICBnZm46IDAwMDNjZTAwICBtZm46IDAwMjNjNjAwCihYRU4pICBn
Zm46IDAwMDNkMDAwICBtZm46IDAwMTM3ODAwCihYRU4pICBnZm46IDAwMDNkMjAwICBtZm46
IDAwMjNjNDAwCihYRU4pICBnZm46IDAwMDNkNDAwICBtZm46IDAwMTM3NjAwCihYRU4pICBn
Zm46IDAwMDNkNjAwICBtZm46IDAwMjNjMjAwCihYRU4pICBnZm46IDAwMDNkODAwICBtZm46
IDAwMTM3NDAwCihYRU4pICBnZm46IDAwMDNkYTAwICBtZm46IDAwMjNjMDAwCihYRU4pICBn
Zm46IDAwMDNkYzAwICBtZm46IDAwMTM3MjAwCihYRU4pICBnZm46IDAwMDNkZTAwICBtZm46
IDAwMjNiZTAwCihYRU4pICBnZm46IDAwMDNlMDAwICBtZm46IDAwMTM3MDAwCihYRU4pICBn
Zm46IDAwMDNlMjAwICBtZm46IDAwMjNiYzAwCihYRU4pICBnZm46IDAwMDNlNDAwICBtZm46
IDAwMTM2ZTAwCihYRU4pICBnZm46IDAwMDNlNjAwICBtZm46IDAwMjNiYTAwCihYRU4pICBn
Zm46IDAwMDNlODAwICBtZm46IDAwMTM2YzAwCihYRU4pICBnZm46IDAwMDNlYTAwICBtZm46
IDAwMjNiODAwCihYRU4pICBnZm46IDAwMDNlYzAwICBtZm46IDAwMTM2YTAwCihYRU4pICBn
Zm46IDAwMDNlZTAwICBtZm46IDAwMjNiNjAwCihYRU4pICBnZm46IDAwMDNmMDAwICBtZm46
IDAwMTM2ODAwCihYRU4pICBnZm46IDAwMDNmMjAwICBtZm46IDAwMjNiNDAwCihYRU4pICBn
Zm46IDAwMDNmNDAwICBtZm46IDAwMTM2NjAwCihYRU4pICBnZm46IDAwMDNmNjAwICBtZm46
IDAwMjNiMjAwCihYRU4pICBnZm46IDAwMDNmODAwICBtZm46IDAwMTM2NDAwCihYRU4pICBn
Zm46IDAwMDNmYTAwICBtZm46IDAwMjNiMDAwCihYRU4pICBnZm46IDAwMDNmYzAwICBtZm46
IDAwMTM2MjAwCihYRU4pICBnZm46IDAwMDNmZTAwICBtZm46IDAwMjNhZTAwCihYRU4pICBn
Zm46IDAwMDQwMDAwICBtZm46IDAwMDQwMDAwCihYRU4pICBnZm46IDAwMDgwMDAwICBtZm46
IDAwMjNhYzAwCihYRU4pICBnZm46IDAwMDgwMjAwICBtZm46IDAwMTM2MDAwCihYRU4pICBn
Zm46IDAwMDgwNDAwICBtZm46IDAwMjNhYTAwCihYRU4pICBnZm46IDAwMDgwNjAwICBtZm46
IDAwMTM1ZTAwCihYRU4pICBnZm46IDAwMDgwODAwICBtZm46IDAwMjNhODAwCihYRU4pICBn
Zm46IDAwMDgwYTAwICBtZm46IDAwMTM1YzAwCihYRU4pICBnZm46IDAwMDgwYzAwICBtZm46
IDAwMjNhNjAwCihYRU4pICBnZm46IDAwMDgwZTAwICBtZm46IDAwMTM1YTAwCihYRU4pICBn
Zm46IDAwMDgxMDAwICBtZm46IDAwMjNhNDAwCihYRU4pICBnZm46IDAwMDgxMjAwICBtZm46
IDAwMTM1ODAwCihYRU4pICBnZm46IDAwMDgxNDAwICBtZm46IDAwMjNhMjAwCihYRU4pICBn
[base64-encoded attachment data continues: a Xen console dump of guest-frame to
machine-frame (p2m) mappings, decoding to repeated lines of the form

    (XEN)  gfn: 00081800  mfn: 0023a000

covering gfn 00081600 through gfn 000de000, with one interleaved message:

    (XEN) stdvga.c:151:d2 leaving stdvga
]
Zm46IDAwMWRkZTAwCihYRU4pICBnZm46IDAwMGRlMjAwICBtZm46IDAwMDk2ODAwCihYRU4p
ICBnZm46IDAwMGRlNDAwICBtZm46IDAwMWRkYzAwCihYRU4pICBnZm46IDAwMGRlNjAwICBt
Zm46IDAwMDk2NjAwCihYRU4pICBnZm46IDAwMGRlODAwICBtZm46IDAwMWRkYTAwCihYRU4p
ICBnZm46IDAwMGRlYTAwICBtZm46IDAwMDk2NDAwCihYRU4pICBnZm46IDAwMGRlYzAwICBt
Zm46IDAwMWRkODAwCihYRU4pICBnZm46IDAwMGRlZTAwICBtZm46IDAwMDk2MjAwCihYRU4p
ICBnZm46IDAwMGRmMDAwICBtZm46IDAwMWRkNjAwCihYRU4pICBnZm46IDAwMGRmMjAwICBt
Zm46IDAwMDk2MDAwCihYRU4pICBnZm46IDAwMGRmNDAwICBtZm46IDAwMWRkNDAwCihYRU4p
ICBnZm46IDAwMGRmNjAwICBtZm46IDAwMDk1ZTAwCihYRU4pICBnZm46IDAwMGRmODAwICBt
Zm46IDAwMWRkMjAwCihYRU4pICBnZm46IDAwMGRmYTAwICBtZm46IDAwMDk1YzAwCihYRU4p
ICBnZm46IDAwMGRmYzAwICBtZm46IDAwMWRkMDAwCihYRU4pICBnZm46IDAwMGRmZTAwICBt
Zm46IDAwMDk1YTAwCihYRU4pICBnZm46IDAwMGUwMDAwICBtZm46IDAwMWRjZTAwCihYRU4p
ICBnZm46IDAwMGUwMjAwICBtZm46IDAwMDk1ODAwCihYRU4pICBnZm46IDAwMGUwNDAwICBt
Zm46IDAwMWRjYzAwCihYRU4pICBnZm46IDAwMGUwNjAwICBtZm46IDAwMDk1NjAwCihYRU4p
ICBnZm46IDAwMGUwODAwICBtZm46IDAwMWRjYTAwCihYRU4pICBnZm46IDAwMGUwYTAwICBt
Zm46IDAwMDk1NDAwCihYRU4pICBnZm46IDAwMGUwYzAwICBtZm46IDAwMWRjODAwCihYRU4p
ICBnZm46IDAwMGUwZTAwICBtZm46IDAwMDk1MjAwCihYRU4pICBnZm46IDAwMGUxMDAwICBt
Zm46IDAwMWRjNjAwCihYRU4pICBnZm46IDAwMGUxMjAwICBtZm46IDAwMDk1MDAwCihYRU4p
ICBnZm46IDAwMGUxNDAwICBtZm46IDAwMWRjNDAwCihYRU4pICBnZm46IDAwMGUxNjAwICBt
Zm46IDAwMDk0ZTAwCihYRU4pICBnZm46IDAwMGUxODAwICBtZm46IDAwMWRjMjAwCihYRU4p
ICBnZm46IDAwMGUxYTAwICBtZm46IDAwMDk0YzAwCihYRU4pICBnZm46IDAwMGUxYzAwICBt
Zm46IDAwMWRjMDAwCihYRU4pICBnZm46IDAwMGUxZTAwICBtZm46IDAwMDk0YTAwCihYRU4p
ICBnZm46IDAwMGUyMDAwICBtZm46IDAwMWRiZTAwCihYRU4pICBnZm46IDAwMGUyMjAwICBt
Zm46IDAwMDk0ODAwCihYRU4pICBnZm46IDAwMGUyNDAwICBtZm46IDAwMWRiYzAwCihYRU4p
ICBnZm46IDAwMGUyNjAwICBtZm46IDAwMDk0NjAwCihYRU4pICBnZm46IDAwMGUyODAwICBt
Zm46IDAwMWRiYTAwCihYRU4pICBnZm46IDAwMGUyYTAwICBtZm46IDAwMDk0NDAwCihYRU4p
ICBnZm46IDAwMGUyYzAwICBtZm46IDAwMWRiODAwCihYRU4pICBnZm46IDAwMGUyZTAwICBt
Zm46IDAwMDk0MjAwCihYRU4pICBnZm46IDAwMGUzMDAwICBtZm46IDAwMWRiNjAwCihYRU4p
ICBnZm46IDAwMGUzMjAwICBtZm46IDAwMDk0MDAwCihYRU4pICBnZm46IDAwMGUzNDAwICBt
Zm46IDAwMWRiNDAwCihYRU4pICBnZm46IDAwMGUzNjAwICBtZm46IDAwMDkzZTAwCihYRU4p
ICBnZm46IDAwMGUzODAwICBtZm46IDAwMWRiMjAwCihYRU4pICBnZm46IDAwMGUzYTAwICBt
Zm46IDAwMDkzYzAwCihYRU4pICBnZm46IDAwMGUzYzAwICBtZm46IDAwMWRiMDAwCihYRU4p
ICBnZm46IDAwMGUzZTAwICBtZm46IDAwMDkzYTAwCihYRU4pICBnZm46IDAwMGU0MDAwICBt
Zm46IDAwMWRhZTAwCihYRU4pICBnZm46IDAwMGU0MjAwICBtZm46IDAwMDkzODAwCihYRU4p
ICBnZm46IDAwMGU0NDAwICBtZm46IDAwMWRhYzAwCihYRU4pICBnZm46IDAwMGU0NjAwICBt
Zm46IDAwMDkzNjAwCihYRU4pICBnZm46IDAwMGU0ODAwICBtZm46IDAwMWRhYTAwCihYRU4p
ICBnZm46IDAwMGU0YTAwICBtZm46IDAwMDkzNDAwCihYRU4pICBnZm46IDAwMGU0YzAwICBt
Zm46IDAwMWRhODAwCihYRU4pICBnZm46IDAwMGU0ZTAwICBtZm46IDAwMDkzMjAwCihYRU4p
ICBnZm46IDAwMGU1MDAwICBtZm46IDAwMWRhNjAwCihYRU4pICBnZm46IDAwMGU1MjAwICBt
Zm46IDAwMDkzMDAwCihYRU4pICBnZm46IDAwMGU1NDAwICBtZm46IDAwMWRhNDAwCihYRU4p
ICBnZm46IDAwMGU1NjAwICBtZm46IDAwMDkyZTAwCihYRU4pICBnZm46IDAwMGU1ODAwICBt
Zm46IDAwMWRhMjAwCihYRU4pICBnZm46IDAwMGU1YTAwICBtZm46IDAwMDkyYzAwCihYRU4p
ICBnZm46IDAwMGU1YzAwICBtZm46IDAwMWRhMDAwCihYRU4pICBnZm46IDAwMGU1ZTAwICBt
Zm46IDAwMDkyYTAwCihYRU4pICBnZm46IDAwMGU2MDAwICBtZm46IDAwMWQ5ZTAwCihYRU4p
ICBnZm46IDAwMGU2MjAwICBtZm46IDAwMDkyODAwCihYRU4pICBnZm46IDAwMGU2NDAwICBt
Zm46IDAwMWQ5YzAwCihYRU4pICBnZm46IDAwMGU2NjAwICBtZm46IDAwMDkyNjAwCihYRU4p
ICBnZm46IDAwMGU2ODAwICBtZm46IDAwMWQ5YTAwCihYRU4pICBnZm46IDAwMGU2YTAwICBt
Zm46IDAwMDkyNDAwCihYRU4pICBnZm46IDAwMGU2YzAwICBtZm46IDAwMWQ5ODAwCihYRU4p
ICBnZm46IDAwMGU2ZTAwICBtZm46IDAwMDkyMjAwCihYRU4pICBnZm46IDAwMGU3MDAwICBt
Zm46IDAwMWQ5NjAwCihYRU4pICBnZm46IDAwMGU3MjAwICBtZm46IDAwMDkyMDAwCihYRU4p
ICBnZm46IDAwMGU3NDAwICBtZm46IDAwMWQ5NDAwCihYRU4pICBnZm46IDAwMGU3NjAwICBt
Zm46IDAwMDkxZTAwCihYRU4pICBnZm46IDAwMGU3ODAwICBtZm46IDAwMWQ5MjAwCihYRU4p
ICBnZm46IDAwMGU3YTAwICBtZm46IDAwMDkxYzAwCihYRU4pICBnZm46IDAwMGU3YzAwICBt
Zm46IDAwMWQ5MDAwCihYRU4pICBnZm46IDAwMGU3ZTAwICBtZm46IDAwMDkxYTAwCihYRU4p
ICBnZm46IDAwMGU4MDAwICBtZm46IDAwMWQ4ZTAwCihYRU4pICBnZm46IDAwMGU4MjAwICBt
Zm46IDAwMDkxODAwCihYRU4pICBnZm46IDAwMGU4NDAwICBtZm46IDAwMWQ4YzAwCihYRU4p
ICBnZm46IDAwMGU4NjAwICBtZm46IDAwMDkxNjAwCihYRU4pICBnZm46IDAwMGU4ODAwICBt
Zm46IDAwMWQ4YTAwCihYRU4pICBnZm46IDAwMGU4YTAwICBtZm46IDAwMDkxNDAwCihYRU4p
ICBnZm46IDAwMGU4YzAwICBtZm46IDAwMWQ4ODAwCihYRU4pICBnZm46IDAwMGU4ZTAwICBt
Zm46IDAwMDkxMjAwCihYRU4pICBnZm46IDAwMGU5MDAwICBtZm46IDAwMWQ4NjAwCihYRU4p
ICBnZm46IDAwMGU5MjAwICBtZm46IDAwMDkxMDAwCihYRU4pICBnZm46IDAwMGU5NDAwICBt
Zm46IDAwMWQ4NDAwCihYRU4pICBnZm46IDAwMGU5NjAwICBtZm46IDAwMDkwZTAwCihYRU4p
ICBnZm46IDAwMGU5ODAwICBtZm46IDAwMWQ4MjAwCihYRU4pICBnZm46IDAwMGU5YTAwICBt
Zm46IDAwMDkwYzAwCihYRU4pICBnZm46IDAwMGU5YzAwICBtZm46IDAwMWQ4MDAwCihYRU4p
ICBnZm46IDAwMGU5ZTAwICBtZm46IDAwMDkwYTAwCihYRU4pICBnZm46IDAwMGVhMDAwICBt
Zm46IDAwMWJmZTAwCihYRU4pICBnZm46IDAwMGVhMjAwICBtZm46IDAwMDkwODAwCihYRU4p
ICBnZm46IDAwMGVhNDAwICBtZm46IDAwMWJmYzAwCihYRU4pICBnZm46IDAwMGVhNjAwICBt
Zm46IDAwMDkwNjAwCihYRU4pICBnZm46IDAwMGVhODAwICBtZm46IDAwMWJmYTAwCihYRU4p
ICBnZm46IDAwMGVhYTAwICBtZm46IDAwMDkwNDAwCihYRU4pICBnZm46IDAwMGVhYzAwICBt
Zm46IDAwMWJmODAwCihYRU4pICBnZm46IDAwMGVhZTAwICBtZm46IDAwMDkwMjAwCihYRU4p
ICBnZm46IDAwMGViMDAwICBtZm46IDAwMWJmNjAwCihYRU4pICBnZm46IDAwMGViMjAwICBt
Zm46IDAwMDkwMDAwCihYRU4pICBnZm46IDAwMGViNDAwICBtZm46IDAwMWJmNDAwCihYRU4p
ICBnZm46IDAwMGViNjAwICBtZm46IDAwMWJmMjAwCihYRU4pICBnZm46IDAwMGViODAwICBt
Zm46IDAwMWJmMDAwCihYRU4pICBnZm46IDAwMGViYTAwICBtZm46IDAwMWJlZTAwCihYRU4p
ICBnZm46IDAwMGViYzAwICBtZm46IDAwMWJlYzAwCihYRU4pICBnZm46IDAwMGViZTAwICBt
Zm46IDAwMWJlYTAwCihYRU4pICBnZm46IDAwMGVjMDAwICBtZm46IDAwMWJlODAwCihYRU4p
ICBnZm46IDAwMGVjMjAwICBtZm46IDAwMWJlNjAwCihYRU4pICBnZm46IDAwMGVjNDAwICBt
Zm46IDAwMWJlNDAwCihYRU4pICBnZm46IDAwMGVjNjAwICBtZm46IDAwMWJlMjAwCihYRU4p
ICBnZm46IDAwMGVjODAwICBtZm46IDAwMWJlMDAwCihYRU4pICBnZm46IDAwMGVjYTAwICBt
Zm46IDAwMWJkZTAwCihYRU4pICBnZm46IDAwMGVjYzAwICBtZm46IDAwMWJkYzAwCihYRU4p
ICBnZm46IDAwMGVjZTAwICBtZm46IDAwMWJkYTAwCihYRU4pICBnZm46IDAwMGVkMDAwICBt
Zm46IDAwMWJkODAwCihYRU4pICBnZm46IDAwMGVkMjAwICBtZm46IDAwMWJkNjAwCihYRU4p
ICBnZm46IDAwMGVkNDAwICBtZm46IDAwMWJkNDAwCihYRU4pICBnZm46IDAwMGVkNjAwICBt
Zm46IDAwMWJkMjAwCihYRU4pICBnZm46IDAwMGVkODAwICBtZm46IDAwMWJkMDAwCihYRU4p
ICBnZm46IDAwMGVkYTAwICBtZm46IDAwMWJjZTAwCihYRU4pICBnZm46IDAwMGVkYzAwICBt
Zm46IDAwMWJjYzAwCihYRU4pICBnZm46IDAwMGVkZTAwICBtZm46IDAwMWJjYTAwCihYRU4p
ICBnZm46IDAwMGVlMDAwICBtZm46IDAwMWJjODAwCihYRU4pICBnZm46IDAwMGVlMjAwICBt
Zm46IDAwMWJjNjAwCihYRU4pICBnZm46IDAwMGVlNDAwICBtZm46IDAwMWJjNDAwCihYRU4p
ICBnZm46IDAwMGVlNjAwICBtZm46IDAwMWJjMjAwCihYRU4pICBnZm46IDAwMGVlODAwICBt
Zm46IDAwMWJjMDAwCihYRU4pICBnZm46IDAwMGVlYTAwICBtZm46IDAwMWJiZTAwCihYRU4p
ICBnZm46IDAwMGVlYzAwICBtZm46IDAwMWJiYzAwCihYRU4pICBnZm46IDAwMGVlZTAwICBt
Zm46IDAwMWJiYTAwCihYRU4pICBnZm46IDAwMGVmMDAwICBtZm46IDAwMWJiODAwCihYRU4p
ICBnZm46IDAwMGVmMjAwICBtZm46IDAwMWJiNjAwCihYRU4pICBnZm46IDAwMGVmNDAwICBt
Zm46IDAwMWJiNDAwCihYRU4pICBnZm46IDAwMGVmNjAwICBtZm46IDAwMWJiMjAwCihYRU4p
ICBnZm46IDAwMGVmODAwICBtZm46IDAwMWJiMDAwCihYRU4pICBnZm46IDAwMGVmYTAwICBt
Zm46IDAwMWJhZTAwCihYRU4pICBnZm46IDAwMGVmYzAwICBtZm46IDAwMWJhYzAwCihYRU4p
ICBnZm46IDAwMGVmZTAwICBtZm46IDAwMWJhYTAwCihYRU4pICAgZ2ZuOiAwMDBmMDAwMCAg
bWZuOiAwMDIxOGUwZQooWEVOKSAgIGdmbjogMDAwZjAwMDEgIG1mbjogMDAxMDFlMmEKKFhF
TikgICBnZm46IDAwMGYwMDAyICBtZm46IDAwMjE4ZTBkCihYRU4pICAgZ2ZuOiAwMDBmMDAw
MyAgbWZuOiAwMDEwMjg4ZgooWEVOKSAgIGdmbjogMDAwZjAwMDQgIG1mbjogMDAyMThlMGMK
KFhFTikgICBnZm46IDAwMGYwMDA1ICBtZm46IDAwMTAyODhlCihYRU4pICAgZ2ZuOiAwMDBm
MDAwNiAgbWZuOiAwMDIxOGUwYgooWEVOKSAgIGdmbjogMDAwZjAwMDcgIG1mbjogMDAxMDE1
M2IKKFhFTikgICBnZm46IDAwMGYwMDA4ICBtZm46IDAwMjE4ZTBhCihYRU4pICAgZ2ZuOiAw
MDBmMDAwOSAgbWZuOiAwMDEwMTUzYQooWEVOKSAgIGdmbjogMDAwZjAwMGEgIG1mbjogMDAy
MThlMDkKKFhFTikgICBnZm46IDAwMGYwMDBiICBtZm46IDAwMTAyODU3CihYRU4pICAgZ2Zu
OiAwMDBmMDAwYyAgbWZuOiAwMDIxOGUwOAooWEVOKSAgIGdmbjogMDAwZjAwMGQgIG1mbjog
MDAxMDI4NTYKKFhFTikgICBnZm46IDAwMGYwMDBlICBtZm46IDAwMjE4ZTA3CihYRU4pICAg
Z2ZuOiAwMDBmMDAwZiAgbWZuOiAwMDEwMTViNwooWEVOKSAgIGdmbjogMDAwZjAwMTAgIG1m
bjogMDAyMThlMDYKKFhFTikgICBnZm46IDAwMGYwMDExICBtZm46IDAwMTAxNWI2CihYRU4p
ICAgZ2ZuOiAwMDBmMDAxMiAgbWZuOiAwMDIxOGUwNQooWEVOKSAgIGdmbjogMDAwZjAwMTMg
IG1mbjogMDAxMDI4MjcKKFhFTikgICBnZm46IDAwMGYwMDE0ICBtZm46IDAwMjE4ZTA0CihY
RU4pICAgZ2ZuOiAwMDBmMDAxNSAgbWZuOiAwMDEwMjgyNgooWEVOKSAgIGdmbjogMDAwZjAw
MTYgIG1mbjogMDAyMThlMDMKKFhFTikgICBnZm46IDAwMGYwMDE3ICBtZm46IDAwMTAwNTJi
CihYRU4pICAgZ2ZuOiAwMDBmMDAxOCAgbWZuOiAwMDIxOGUwMgooWEVOKSAgIGdmbjogMDAw
ZjAwMTkgIG1mbjogMDAxMDA1MmEKKFhFTikgICBnZm46IDAwMGYwMDFhICBtZm46IDAwMjE4
ZTAxCihYRU4pICAgZ2ZuOiAwMDBmMDAxYiAgbWZuOiAwMDEwMjgxNwooWEVOKSAgIGdmbjog
MDAwZjAwMWMgIG1mbjogMDAyMThlMDAKKFhFTikgICBnZm46IDAwMGYwMDFkICBtZm46IDAw
MTAyODE2CihYRU4pICAgZ2ZuOiAwMDBmMDAxZSAgbWZuOiAwMDFiMWM0YQooWEVOKSAgIGdm
bjogMDAwZjAwMWYgIG1mbjogMDAxMDJjM2IKKFhFTikgICBnZm46IDAwMGYwMDIwICBtZm46
IDAwMWIxZjlhCihYRU4pICAgZ2ZuOiAwMDBmMDAyMSAgbWZuOiAwMDEwMmMzYQooWEVOKSAg
IGdmbjogMDAwZjAwMjIgIG1mbjogMDAxZDBiMTIKKFhFTikgICBnZm46IDAwMGYwMDIzICBt
Zm46IDAwMTAyYzhiCihYRU4pICAgZ2ZuOiAwMDBmMDAyNCAgbWZuOiAwMDFkMGIwYgooWEVO
KSAgIGdmbjogMDAwZjAwMjUgIG1mbjogMDAxMDJjOGEKKFhFTikgICBnZm46IDAwMGYwMDI2
ICBtZm46IDAwMWQwYjhkCihYRU4pICAgZ2ZuOiAwMDBmMDAyNyAgbWZuOiAwMDEwMjcwZgoo
WEVOKSAgIGdmbjogMDAwZjAwMjggIG1mbjogMDAxYjFmYzMKKFhFTikgICBnZm46IDAwMGYw
MDI5ICBtZm46IDAwMTAyNzBlCihYRU4pICAgZ2ZuOiAwMDBmMDAyYSAgbWZuOiAwMDFiMWMy
MwooWEVOKSAgIGdmbjogMDAwZjAwMmIgIG1mbjogMDAxMDFlYmIKKFhFTikgICBnZm46IDAw
MGYwMDJjICBtZm46IDAwMWQwYmI4CihYRU4pICAgZ2ZuOiAwMDBmMDAyZCAgbWZuOiAwMDEw
MWViYQooWEVOKSAgIGdmbjogMDAwZjAwMmUgIG1mbjogMDAxZDBiYmMKKFhFTikgICBnZm46
IDAwMGYwMDJmICBtZm46IDAwMTAyZTZiCihYRU4pICAgZ2ZuOiAwMDBmMDAzMCAgbWZuOiAw
MDFkMGI5YwooWEVOKSAgIGdmbjogMDAwZjAwMzEgIG1mbjogMDAxMDJlNmEKKFhFTikgICBn
Zm46IDAwMGYwMDMyICBtZm46IDAwMWQwYmM2CihYRU4pICAgZ2ZuOiAwMDBmMDAzMyAgbWZu
OiAwMDEwMmM1ZgooWEVOKSAgIGdmbjogMDAwZjAwMzQgIG1mbjogMDAxZDBiNDYKKFhFTikg
ICBnZm46IDAwMGYwMDM1ICBtZm46IDAwMTAyYzVlCihYRU4pICAgZ2ZuOiAwMDBmMDAzNiAg
bWZuOiAwMDFkMGI0NAooWEVOKSAgIGdmbjogMDAwZjAwMzcgIG1mbjogMDAxMDJlOTMKKFhF
TikgICBnZm46IDAwMGYwMDM4ICBtZm46IDAwMWQwYmFkCihYRU4pICAgZ2ZuOiAwMDBmMDAz
OSAgbWZuOiAwMDEwMmU5MgooWEVOKSAgIGdmbjogMDAwZjAwM2EgIG1mbjogMDAxZDBiYWUK
KFhFTikgICBnZm46IDAwMGYwMDNiICBtZm46IDAwMTAyODliCihYRU4pICAgZ2ZuOiAwMDBm
MDAzYyAgbWZuOiAwMDFkMGI0MQooWEVOKSAgIGdmbjogMDAwZjAwM2QgIG1mbjogMDAxMDI4
OWEKKFhFTikgICBnZm46IDAwMGYwMDNlICBtZm46IDAwMWIxZmVhCihYRU4pICAgZ2ZuOiAw
MDBmMDAzZiAgbWZuOiAwMDEwMmNjNwooWEVOKSAgIGdmbjogMDAwZjAwNDAgIG1mbjogMDAx
ZDBiYzEKKFhFTikgICBnZm46IDAwMGYwMDQxICBtZm46IDAwMTAyY2M2CihYRU4pICAgZ2Zu
OiAwMDBmMDA0MiAgbWZuOiAwMDFiMWZkZgooWEVOKSAgIGdmbjogMDAwZjAwNDMgIG1mbjog
MDAxMDI5NjMKKFhFTikgICBnZm46IDAwMGYwMDQ0ICBtZm46IDAwMWQwYjM3CihYRU4pICAg
Z2ZuOiAwMDBmMDA0NSAgbWZuOiAwMDEwMjk2MgooWEVOKSAgIGdmbjogMDAwZjAwNDYgIG1m
bjogMDAxZDBiOWUKKFhFTikgICBnZm46IDAwMGYwMDQ3ICBtZm46IDAwMTAyODg3CihYRU4p
ICAgZ2ZuOiAwMDBmMDA0OCAgbWZuOiAwMDFiMDZkOAooWEVOKSAgIGdmbjogMDAwZjAwNDkg
IG1mbjogMDAxMDI4ODYKKFhFTikgICBnZm46IDAwMGYwMDRhICBtZm46IDAwMTYyZWQ4CihY
RU4pICAgZ2ZuOiAwMDBmMDA0YiAgbWZuOiAwMDEwMTljNQooWEVOKSAgIGdmbjogMDAwZjAw
NGMgIG1mbjogMDAxNjMyZDgKKFhFTikgICBnZm46IDAwMGYwMDRkICBtZm46IDAwMTAxOWM0
CihYRU4pICAgZ2ZuOiAwMDBmMDA0ZSAgbWZuOiAwMDFjMGE2MQooWEVOKSAgIGdmbjogMDAw
ZjAwNGYgIG1mbjogMDAxMDE5YzMKKFhFTikgICBnZm46IDAwMGYwMDUwICBtZm46IDAwMWMw
YTYwCihYRU4pICAgZ2ZuOiAwMDBmMDA1MSAgbWZuOiAwMDEwMTljMgooWEVOKSAgIGdmbjog
MDAwZjAwNTIgIG1mbjogMDAxYjFjNDkKKFhFTikgICBnZm46IDAwMGYwMDUzICBtZm46IDAw
MTAxNTBmCihYRU4pICAgZ2ZuOiAwMDBmMDA1NCAgbWZuOiAwMDFiMWM0OAooWEVOKSAgIGdm
bjogMDAwZjAwNTUgIG1mbjogMDAxMDE1MGUKKFhFTikgICBnZm46IDAwMGYwMDU2ICBtZm46
IDAwMWIxZjk5CihYRU4pICAgZ2ZuOiAwMDBmMDA1NyAgbWZuOiAwMDEwMWFhOQooWEVOKSAg
IGdmbjogMDAwZjAwNTggIG1mbjogMDAxYjFmOTgKKFhFTikgICBnZm46IDAwMGYwMDU5ICBt
Zm46IDAwMTAxYWE4CihYRU4pICAgZ2ZuOiAwMDBmMDA1YSAgbWZuOiAwMDFkMGI2NQooWEVO
KSAgIGdmbjogMDAwZjAwNWIgIG1mbjogMDAxMDFhYTcKKFhFTikgICBnZm46IDAwMGYwMDVj
ICBtZm46IDAwMWQwYjY0CihYRU4pICAgZ2ZuOiAwMDBmMDA1ZCAgbWZuOiAwMDEwMWFhNgoo
WEVOKSAgIGdmbjogMDAwZjAwNWUgIG1mbjogMDAxZDBiMTEKKFhFTikgICBnZm46IDAwMGYw
MDVmICBtZm46IDAwMTAxOTg1CihYRU4pICAgZ2ZuOiAwMDBmMDA2MCAgbWZuOiAwMDFkMGIx
MAooWEVOKSAgIGdmbjogMDAwZjAwNjEgIG1mbjogMDAxMDE5ODQKKFhFTikgICBnZm46IDAw
MGYwMDYyICBtZm46IDAwMWQwYmM1CihYRU4pICAgZ2ZuOiAwMDBmMDA2MyAgbWZuOiAwMDEw
MTk4MwooWEVOKSAgIGdmbjogMDAwZjAwNjQgIG1mbjogMDAxZDBiYzQKKFhFTikgICBnZm46
IDAwMGYwMDY1ICBtZm46IDAwMTAxOTgyCihYRU4pICAgZ2ZuOiAwMDBmMDA2NiAgbWZuOiAw
MDFkMGI0MwooWEVOKSAgIGdmbjogMDAwZjAwNjcgIG1mbjogMDAxMDFlNmIKKFhFTikgICBn
Zm46IDAwMGYwMDY4ICBtZm46IDAwMWQwYjQyCihYRU4pICAgZ2ZuOiAwMDBmMDA2OSAgbWZu
OiAwMDEwMWU2YQooWEVOKSAgIGdmbjogMDAwZjAwNmEgIG1mbjogMDAxYjFmZTkKKFhFTikg
ICBnZm46IDAwMGYwMDZiICBtZm46IDAwMTAyNzYzCihYRU4pICAgZ2ZuOiAwMDBmMDA2YyAg
bWZuOiAwMDFiMWZlOAooWEVOKSAgIGdmbjogMDAwZjAwNmQgIG1mbjogMDAxMDI3NjIKKFhF
TikgICBnZm46IDAwMGYwMDZlICBtZm46IDAwMWQwYmMzCihYRU4pICAgZ2ZuOiAwMDBmMDA2
ZiAgbWZuOiAwMDEwMmQzZgooWEVOKSAgIGdmbjogMDAwZjAwNzAgIG1mbjogMDAxZDBiYzIK
KFhFTikgICBnZm46IDAwMGYwMDcxICBtZm46IDAwMTAyZDNlCihYRU4pICAgZ2ZuOiAwMDBm
MDA3MiAgbWZuOiAwMDFkMGJmZgooWEVOKSAgIGdmbjogMDAwZjAwNzMgIG1mbjogMDAxMDA0
NmIKKFhFTikgICBnZm46IDAwMGYwMDc0ICBtZm46IDAwMWQwYmZlCihYRU4pICAgZ2ZuOiAw
MDBmMDA3NSAgbWZuOiAwMDEwMDQ2YQooWEVOKSAgIGdmbjogMDAwZjAwNzYgIG1mbjogMDAx
ZDBiZmQKKFhFTikgICBnZm46IDAwMGYwMDc3ICBtZm46IDAwMTAwM2Q3CihYRU4pICAgZ2Zu
OiAwMDBmMDA3OCAgbWZuOiAwMDFkMGJmYwooWEVOKSAgIGdmbjogMDAwZjAwNzkgIG1mbjog
MDAxMDAzZDYKKFhFTikgICBnZm46IDAwMGYwMDdhICBtZm46IDAwMWQwYmNmCihYRU4pICAg
Z2ZuOiAwMDBmMDA3YiAgbWZuOiAwMDEwMDMxYgooWEVOKSAgIGdmbjogMDAwZjAwN2MgIG1m
bjogMDAxZDBiY2UKKFhFTikgICBnZm46IDAwMGYwMDdkICBtZm46IDAwMTAwMzFhCihYRU4p
ICAgZ2ZuOiAwMDBmMDA3ZSAgbWZuOiAwMDFkMGJjZAooWEVOKSAgIGdmbjogMDAwZjAwN2Yg
IG1mbjogMDAxMDJkYzcKKFhFTikgICBnZm46IDAwMGYwMDgwICBtZm46IDAwMWQwYmNjCihY
RU4pICAgZ2ZuOiAwMDBmMDA4MSAgbWZuOiAwMDEwMmRjNgooWEVOKSAgIGdmbjogMDAwZjAw
ODIgIG1mbjogMDAxZDBiMGYKKFhFTikgICBnZm46IDAwMGYwMDgzICBtZm46IDAwMTAxZDk3
CihYRU4pICAgZ2ZuOiAwMDBmMDA4NCAgbWZuOiAwMDFkMGIwZQooWEVOKSAgIGdmbjogMDAw
ZjAwODUgIG1mbjogMDAxMDFkOTYKKFhFTikgICBnZm46IDAwMGYwMDg2ICBtZm46IDAwMWQw
YjBkCihYRU4pICAgZ2ZuOiAwMDBmMDA4NyAgbWZuOiAwMDEwMjc2YgooWEVOKSAgIGdmbjog
MDAwZjAwODggIG1mbjogMDAxZDBiMGMKKFhFTikgICBnZm46IDAwMGYwMDg5ICBtZm46IDAw
MTAyNzZhCihYRU4pICAgZ2ZuOiAwMDBmMDA4YSAgbWZuOiAwMDE2MzBiZgooWEVOKSAgIGdm
bjogMDAwZjAwOGIgIG1mbjogMDAxMDFhMzMKKFhFTikgICBnZm46IDAwMGYwMDhjICBtZm46
IDAwMTYzMGJlCihYRU4pICAgZ2ZuOiAwMDBmMDA4ZCAgbWZuOiAwMDEwMWEzMgooWEVOKSAg
IGdmbjogMDAwZjAwOGUgIG1mbjogMDAxNjMwYmQKKFhFTikgICBnZm46IDAwMGYwMDhmICBt
Zm46IDAwMTAyNTNkCihYRU4pICAgZ2ZuOiAwMDBmMDA5MCAgbWZuOiAwMDE2MzBiYwooWEVO
KSAgIGdmbjogMDAwZjAwOTEgIG1mbjogMDAxMDI1M2MKKFhFTikgICBnZm46IDAwMGYwMDky
ICBtZm46IDAwMWIxYzQ3CihYRU4pICAgZ2ZuOiAwMDBmMDA5MyAgbWZuOiAwMDEwMjM2OQoo
WEVOKSAgIGdmbjogMDAwZjAwOTQgIG1mbjogMDAxYjFjNDYKKFhFTikgICBnZm46IDAwMGYw
MDk1ICBtZm46IDAwMTAyMzY4CihYRU4pICAgZ2ZuOiAwMDBmMDA5NiAgbWZuOiAwMDFiMWM0
NQooWEVOKSAgIGdmbjogMDAwZjAwOTcgIG1mbjogMDAxMDI4M2YKKFhFTikgICBnZm46IDAw
MGYwMDk4ICBtZm46IDAwMWIxYzQ0CihYRU4pICAgZ2ZuOiAwMDBmMDA5OSAgbWZuOiAwMDEw
MjgzZQooWEVOKSAgIGdmbjogMDAwZjAwOWEgIG1mbjogMDAxYjFjNDMKKFhFTikgICBnZm46
IDAwMGYwMDliICBtZm46IDAwMTAxNTgzCihYRU4pICAgZ2ZuOiAwMDBmMDA5YyAgbWZuOiAw
MDFiMWM0MgooWEVOKSAgIGdmbjogMDAwZjAwOWQgIG1mbjogMDAxMDE1ODIKKFhFTikgICBn
Zm46IDAwMGYwMDllICBtZm46IDAwMWIxYzQxCihYRU4pICAgZ2ZuOiAwMDBmMDA5ZiAgbWZu
OiAwMDEwMTgwYgooWEVOKSAgIGdmbjogMDAwZjAwYTAgIG1mbjogMDAxYjFjNDAKKFhFTikg
ICBnZm46IDAwMGYwMGExICBtZm46IDAwMTAxODBhCihYRU4pICAgZ2ZuOiAwMDBmMDBhMiAg
bWZuOiAwMDFiMWY5NwooWEVOKSAgIGdmbjogMDAwZjAwYTMgIG1mbjogMDAxMDI5NGYKKFhF
TikgICBnZm46IDAwMGYwMGE0ICBtZm46IDAwMWIxZjk2CihYRU4pICAgZ2ZuOiAwMDBmMDBh
NSAgbWZuOiAwMDEwMjk0ZQooWEVOKSAgIGdmbjogMDAwZjAwYTYgIG1mbjogMDAxYjFmOTUK
KFhFTikgICBnZm46IDAwMGYwMGE3ICBtZm46IDAwMTAyZDFmCihYRU4pICAgZ2ZuOiAwMDBm
MDBhOCAgbWZuOiAwMDFiMWY5NAooWEVOKSAgIGdmbjogMDAwZjAwYTkgIG1mbjogMDAxMDJk
MWUKKFhFTikgICBnZm46IDAwMGYwMGFhICBtZm46IDAwMWIxZjkzCihYRU4pICAgZ2ZuOiAw
MDBmMDBhYiAgbWZuOiAwMDEwMjM3OQooWEVOKSAgIGdmbjogMDAwZjAwYWMgIG1mbjogMDAx
YjFmOTIKKFhFTikgICBnZm46IDAwMGYwMGFkICBtZm46IDAwMTAyMzc4CihYRU4pICAgZ2Zu
OiAwMDBmMDBhZSAgbWZuOiAwMDFiMWY5MQooWEVOKSAgIGdmbjogMDAwZjAwYWYgIG1mbjog
MDAxMDJjYTMKKFhFTikgICBnZm46IDAwMGYwMGIwICBtZm46IDAwMWIxZjkwCihYRU4pICAg
Z2ZuOiAwMDBmMDBiMSAgbWZuOiAwMDEwMmNhMgooWEVOKSAgIGdmbjogMDAwZjAwYjIgIG1m
bjogMDAxYjA2ZDcKKFhFTikgICBnZm46IDAwMGYwMGIzICBtZm46IDAwMTAyODJmCihYRU4p
ICAgZ2ZuOiAwMDBmMDBiNCAgbWZuOiAwMDFiMDZkNgooWEVOKSAgIGdmbjogMDAwZjAwYjUg
IG1mbjogMDAxMDI4MmUKKFhFTikgICBnZm46IDAwMGYwMGI2ICBtZm46IDAwMWIwNmQ1CihY
RU4pICAgZ2ZuOiAwMDBmMDBiNyAgbWZuOiAwMDEwMmU1ZgooWEVOKSAgIGdmbjogMDAwZjAw
YjggIG1mbjogMDAxYjA2ZDQKKFhFTikgICBnZm46IDAwMGYwMGI5ICBtZm46IDAwMTAyZTVl
CihYRU4pICAgZ2ZuOiAwMDBmMDBiYSAgbWZuOiAwMDFiMDZkMwooWEVOKSAgIGdmbjogMDAw
ZjAwYmIgIG1mbjogMDAxMDI3NzcKKFhFTikgICBnZm46IDAwMGYwMGJjICBtZm46IDAwMWIw
NmQyCihYRU4pICAgZ2ZuOiAwMDBmMDBiZCAgbWZuOiAwMDEwMjc3NgooWEVOKSAgIGdmbjog
MDAwZjAwYmUgIG1mbjogMDAxYjA2ZDEKKFhFTikgICBnZm46IDAwMGYwMGJmICBtZm46IDAw
MTAxOWY3CihYRU4pICAgZ2ZuOiAwMDBmMDBjMCAgbWZuOiAwMDFiMDZkMAooWEVOKSAgIGdm
bjogMDAwZjAwYzEgIG1mbjogMDAxMDE5ZjYKKFhFTikgICBnZm46IDAwMGYwMGMyICBtZm46
IDAwMTYyZWQ3CihYRU4pICAgZ2ZuOiAwMDBmMDBjMyAgbWZuOiAwMDEwMWE5MwooWEVOKSAg
IGdmbjogMDAwZjAwYzQgIG1mbjogMDAxNjJlZDYKKFhFTikgICBnZm46IDAwMGYwMGM1ICBt
Zm46IDAwMTAxYTkyCihYRU4pICAgZ2ZuOiAwMDBmMDBjNiAgbWZuOiAwMDE2MmVkNQooWEVO
KSAgIGdmbjogMDAwZjAwYzcgIG1mbjogMDAxMDE0MWYKKFhFTikgICBnZm46IDAwMGYwMGM4
ICBtZm46IDAwMTYyZWQ0CihYRU4pICAgZ2ZuOiAwMDBmMDBjOSAgbWZuOiAwMDEwMTQxZQoo
WEVOKSAgIGdmbjogMDAwZjAwY2EgIG1mbjogMDAxNjJlZDMKKFhFTikgICBnZm46IDAwMGYw
MGNiICBtZm46IDAwMTAxNDBiCihYRU4pICAgZ2ZuOiAwMDBmMDBjYyAgbWZuOiAwMDE2MmVk
MgooWEVOKSAgIGdmbjogMDAwZjAwY2QgIG1mbjogMDAxMDE0MGEKKFhFTikgICBnZm46IDAw
MGYwMGNlICBtZm46IDAwMTYyZWQxCihYRU4pICAgZ2ZuOiAwMDBmMDBjZiAgbWZuOiAwMDEw
MmM2NwooWEVOKSAgIGdmbjogMDAwZjAwZDAgIG1mbjogMDAxNjJlZDAKKFhFTikgICBnZm46
IDAwMGYwMGQxICBtZm46IDAwMTAyYzY2CihYRU4pICAgZ2ZuOiAwMDBmMDBkMiAgbWZuOiAw
MDE2MzJkNwooWEVOKSAgIGdmbjogMDAwZjAwZDMgIG1mbjogMDAxMDI1MTcKKFhFTikgICBn
Zm46IDAwMGYwMGQ0ICBtZm46IDAwMTYzMmQ2CihYRU4pICAgZ2ZuOiAwMDBmMDBkNSAgbWZu
OiAwMDEwMjUxNgooWEVOKSAgIGdmbjogMDAwZjAwZDYgIG1mbjogMDAxNjMyZDUKKFhFTikg
ICBnZm46IDAwMGYwMGQ3ICBtZm46IDAwMTAyNzRmCihYRU4pICAgZ2ZuOiAwMDBmMDBkOCAg
bWZuOiAwMDE2MzJkNAooWEVOKSAgIGdmbjogMDAwZjAwZDkgIG1mbjogMDAxMDI3NGUKKFhF
TikgICBnZm46IDAwMGYwMGRhICBtZm46IDAwMTYzMmQzCihYRU4pICAgZ2ZuOiAwMDBmMDBk
YiAgbWZuOiAwMDEwMmEyNwooWEVOKSAgIGdmbjogMDAwZjAwZGMgIG1mbjogMDAxNjMyZDIK
KFhFTikgICBnZm46IDAwMGYwMGRkICBtZm46IDAwMTAyYTI2CihYRU4pICAgZ2ZuOiAwMDBm
MDBkZSAgbWZuOiAwMDE2MzJkMQooWEVOKSAgIGdmbjogMDAwZjAwZGYgIG1mbjogMDAxMDJl
YjcKKFhFTikgICBnZm46IDAwMGYwMGUwICBtZm46IDAwMTYzMmQwCihYRU4pICAgZ2ZuOiAw
MDBmMDBlMSAgbWZuOiAwMDEwMmViNgooWEVOKSAgIGdmbjogMDAwZjAwZTIgIG1mbjogMDAx
ZDBiZGYKKFhFTikgICBnZm46IDAwMGYwMGUzICBtZm46IDAwMTAyOTQzCihYRU4pICAgZ2Zu
OiAwMDBmMDBlNCAgbWZuOiAwMDFkMGJkZQooWEVOKSAgIGdmbjogMDAwZjAwZTUgIG1mbjog
MDAxMDI5NDIKKFhFTikgICBnZm46IDAwMGYwMGU2ICBtZm46IDAwMWQwYmRkCihYRU4pICAg
Z2ZuOiAwMDBmMDBlNyAgbWZuOiAwMDEwMjcxZgooWEVOKSAgIGdmbjogMDAwZjAwZTggIG1m
bjogMDAxZDBiZGMKKFhFTikgICBnZm46IDAwMGYwMGU5ICBtZm46IDAwMTAyNzFlCihYRU4p
ICAgZ2ZuOiAwMDBmMDBlYSAgbWZuOiAwMDFkMGJkYgooWEVOKSAgIGdmbjogMDAwZjAwZWIg
IG1mbjogMDAxMDI4MzcKKFhFTikgICBnZm46IDAwMGYwMGVjICBtZm46IDAwMWQwYmRhCihY
RU4pICAgZ2ZuOiAwMDBmMDBlZCAgbWZuOiAwMDEwMjgzNgooWEVOKSAgIGdmbjogMDAwZjAw
ZWUgIG1mbjogMDAxZDBiZDkKKFhFTikgICBnZm46IDAwMGYwMGVmICBtZm46IDAwMTAyZGEz
CihYRU4pICAgZ2ZuOiAwMDBmMDBmMCAgbWZuOiAwMDFkMGJkOAooWEVOKSAgIGdmbjogMDAw
ZjAwZjEgIG1mbjogMDAxMDJkYTIKKFhFTikgICBnZm46IDAwMGYwMGYyICBtZm46IDAwMWQw
YmQ3CihYRU4pICAgZ2ZuOiAwMDBmMDBmMyAgbWZuOiAwMDEwMTkyNwooWEVOKSAgIGdmbjog
MDAwZjAwZjQgIG1mbjogMDAxZDBiZDYKKFhFTikgICBnZm46IDAwMGYwMGY1ICBtZm46IDAw
MTAxOTI2CihYRU4pICAgZ2ZuOiAwMDBmMDBmNiAgbWZuOiAwMDFkMGJkNQooWEVOKSAgIGdm
bjogMDAwZjAwZjcgIG1mbjogMDAxMDJkYWIKKFhFTikgICBnZm46IDAwMGYwMGY4ICBtZm46
IDAwMWQwYmQ0CihYRU4pICAgZ2ZuOiAwMDBmMDBmOSAgbWZuOiAwMDEwMmRhYQooWEVOKSAg
IGdmbjogMDAwZjAwZmEgIG1mbjogMDAxZDBiZDMKKFhFTikgICBnZm46IDAwMGYwMGZiICBt
Zm46IDAwMTAyZTFiCihYRU4pICAgZ2ZuOiAwMDBmMDBmYyAgbWZuOiAwMDFkMGJkMgooWEVO
KSAgIGdmbjogMDAwZjAwZmQgIG1mbjogMDAxMDJlMWEKKFhFTikgICBnZm46IDAwMGYwMGZl
ICBtZm46IDAwMWQwYmQxCihYRU4pICAgZ2ZuOiAwMDBmMDBmZiAgbWZuOiAwMDEwMTk0Mwoo
WEVOKSAgIGdmbjogMDAwZjAxMDAgIG1mbjogMDAxZDBiZDAKKFhFTikgICBnZm46IDAwMGYw
MTAxICBtZm46IDAwMTAxOTQyCihYRU4pICAgZ2ZuOiAwMDBmMDEwMiAgbWZuOiAwMDFiMWY4
ZgooWEVOKSAgIGdmbjogMDAwZjAxMDMgIG1mbjogMDAxMDJiZWYKKFhFTikgICBnZm46IDAw
MGYwMTA0ICBtZm46IDAwMWIxZjhlCihYRU4pICAgZ2ZuOiAwMDBmMDEwNSAgbWZuOiAwMDEw
MmJlZQooWEVOKSAgIGdmbjogMDAwZjAxMDYgIG1mbjogMDAxYjFmOGQKKFhFTikgICBnZm46
IDAwMGYwMTA3ICBtZm46IDAwMTAxZGVmCihYRU4pICAgZ2ZuOiAwMDBmMDEwOCAgbWZuOiAw
MDFiMWY4YwooWEVOKSAgIGdmbjogMDAwZjAxMDkgIG1mbjogMDAxMDFkZWUKKFhFTikgICBn
Zm46IDAwMGYwMTBhICBtZm46IDAwMWIxZjhiCihYRU4pICAgZ2ZuOiAwMDBmMDEwYiAgbWZu
OiAwMDEwMjk3MwooWEVOKSAgIGdmbjogMDAwZjAxMGMgIG1mbjogMDAxYjFmOGEKKFhFTikg
ICBnZm46IDAwMGYwMTBkICBtZm46IDAwMTAyOTcyCihYRU4pICAgZ2ZuOiAwMDBmMDEwZSAg
bWZuOiAwMDFiMWY4OQooWEVOKSAgIGdmbjogMDAwZjAxMGYgIG1mbjogMDAxMDI5OGIKKFhF
TikgICBnZm46IDAwMGYwMTEwICBtZm46IDAwMWIxZjg4CihYRU4pICAgZ2ZuOiAwMDBmMDEx
MSAgbWZuOiAwMDEwMjk4YQooWEVOKSAgIGdmbjogMDAwZjAxMTIgIG1mbjogMDAxYjFmODcK
KFhFTikgICBnZm46IDAwMGYwMTEzICBtZm46IDAwMTAwMzAxCihYRU4pICAgZ2ZuOiAwMDBm
MDExNCAgbWZuOiAwMDFiMWY4NgooWEVOKSAgIGdmbjogMDAwZjAxMTUgIG1mbjogMDAxMDAz
MDAKKFhFTikgICBnZm46IDAwMGYwMTE2ICBtZm46IDAwMWIxZjg1CihYRU4pICAgZ2ZuOiAw
MDBmMDExNyAgbWZuOiAwMDEwMDJmZgooWEVOKSAgIGdmbjogMDAwZjAxMTggIG1mbjogMDAx
YjFmODQKKFhFTikgICBnZm46IDAwMGYwMTE5ICBtZm46IDAwMTAwMmZlCihYRU4pICAgZ2Zu
OiAwMDBmMDExYSAgbWZuOiAwMDFiMWY4MwooWEVOKSAgIGdmbjogMDAwZjAxMWIgIG1mbjog
MDAxMDI4MDMKKFhFTikgICBnZm46IDAwMGYwMTFjICBtZm46IDAwMWIxZjgrMgooWEVOKSAg
IGdmbjogMDAwZjAxMWQgIG1mbjogMDAxMDI4MDIKKFhFTikgICBnZm46IDAwMGYwMTFlICBt
Zm46IDAwMWIxZjgxCihYRU4pICAgZ2ZuOiAwMDBmMDExZiAgbWZuOiAwMDEwMDM4ZgooWEVO
KSAgIGdmbjogMDAwZjAxMjAgIG1mbjogMDAxYjFmODAKKFhFTikgICBnZm46IDAwMGYwMTIx
ICBtZm46IDAwMTAwMzhlCihYRU4pICAgZ2ZuOiAwMDBmMDEyMiAgbWZuOiAwMDFiMDZjZgoo
WEVOKSAgIGdmbjogMDAwZjAxMjMgIG1mbjogMDAxMDJlMjMKKFhFTikgICBnZm46IDAwMGYw
MTI0ICBtZm46IDAwMWIwNmNlCihYRU4pICAgZ2ZuOiAwMDBmMDEyNSAgbWZuOiAwMDEwMmUy
MgooWEVOKSAgIGdmbjogMDAwZjAxMjYgIG1mbjogMDAxYjA2Y2QKKFhFTikgICBnZm46IDAw
MGYwMTI3ICBtZm46IDAwMTAyNzliCihYRU4pICAgZ2ZuOiAwMDBmMDEyOCAgbWZuOiAwMDFi
MDZjYwooWEVOKSAgIGdmbjogMDAwZjAxMjkgIG1mbjogMDAxMDI3OWEKKFhFTikgICBnZm46
IDAwMGYwMTJhICBtZm46IDAwMWIwNmNiCihYRU4pICAgZ2ZuOiAwMDBmMDEyYiAgbWZuOiAw
MDEwMmMzMwooWEVOKSAgIGdmbjogMDAwZjAxMmMgIG1mbjogMDAxYjA2Y2EKKFhFTikgICBn
Zm46IDAwMGYwMTJkICBtZm46IDAwMTAyYzMyCihYRU4pICAgZ2ZuOiAwMDBmMDEyZSAgbWZu
OiAwMDFiMDZjOQooWEVOKSAgIGdmbjogMDAwZjAxMmYgIG1mbjogMDAxMDI4ZDcKKFhFTikg
ICBnZm46IDAwMGYwMTMwICBtZm46IDAwMWIwNmM4CihYRU4pICAgZ2ZuOiAwMDBmMDEzMSAg
bWZuOiAwMDEwMjhkNgooWEVOKSAgIGdmbjogMDAwZjAxMzIgIG1mbjogMDAxYjA2YzcKKFhF
TikgICBnZm46IDAwMGYwMTMzICBtZm46IDAwMTAyYzZmCihYRU4pICAgZ2ZuOiAwMDBmMDEz
NCAgbWZuOiAwMDFiMDZjNgooWEVOKSAgIGdmbjogMDAwZjAxMzUgIG1mbjogMDAxMDJjNmUK
KFhFTikgICBnZm46IDAwMGYwMTM2ICBtZm46IDAwMWIwNmM1CihYRU4pICAgZ2ZuOiAwMDBm
MDEzNyAgbWZuOiAwMDEwMmQ4ZgooWEVOKSAgIGdmbjogMDAwZjAxMzggIG1mbjogMDAxYjA2
YzQKKFhFTikgICBnZm46IDAwMGYwMTM5ICBtZm46IDAwMTAyZDhlCihYRU4pICAgZ2ZuOiAw
MDBmMDEzYSAgbWZuOiAwMDFiMDZjMwooWEVOKSAgIGdmbjogMDAwZjAxM2IgIG1mbjogMDAx
MDE4YmYKKFhFTikgICBnZm46IDAwMGYwMTNjICBtZm46IDAwMWIwNmMyCihYRU4pICAgZ2Zu
OiAwMDBmMDEzZCAgbWZuOiAwMDEwMThiZQooWEVOKSAgIGdmbjogMDAwZjAxM2UgIG1mbjog
MDAxYjA2YzEKKFhFTikgICBnZm46IDAwMGYwMTNmICBtZm46IDAwMTAyMzcxCihYRU4pICAg
Z2ZuOiAwMDBmMDE0MCAgbWZuOiAwMDFiMDZjMAooWEVOKSAgIGdmbjogMDAwZjAxNDEgIG1m
bjogMDAxMDIzNzAKKFhFTikgICBnZm46IDAwMGYwMTQyICBtZm46IDAwMTYyZWNmCihYRU4p
ICAgZ2ZuOiAwMDBmMDE0MyAgbWZuOiAwMDEwMmVjZgooWEVOKSAgIGdmbjogMDAwZjAxNDQg
IG1mbjogMDAxNjJlY2UKKFhFTikgICBnZm46IDAwMGYwMTQ1ICBtZm46IDAwMTAyZWNlCihY
RU4pICAgZ2ZuOiAwMDBmMDE0NiAgbWZuOiAwMDE2MmVjZAooWEVOKSAgIGdmbjogMDAwZjAx
NDcgIG1mbjogMDAxMDI3YTMKKFhFTikgICBnZm46IDAwMGYwMTQ4ICBtZm46IDAwMTYyZWNj
CihYRU4pICAgZ2ZuOiAwMDBmMDE0OSAgbWZuOiAwMDEwMjdhMgooWEVOKSAgIGdmbjogMDAw
ZjAxNGEgIG1mbjogMDAxNjJlY2IKKFhFTikgICBnZm46IDAwMGYwMTRiICBtZm46IDAwMTAx
ODRmCihYRU4pICAgZ2ZuOiAwMDBmMDE0YyAgbWZuOiAwMDE2MmVjYQooWEVOKSAgIGdmbjog
MDAwZjAxNGQgIG1mbjogMDAxMDE4NGUKKFhFTikgICBnZm46IDAwMGYwMTRlICBtZm46IDAw
MTYyZWM5CihYRU4pICAgZ2ZuOiAwMDBmMDE0ZiAgbWZuOiAwMDEwMTU2YgooWEVOKSAgIGdm
bjogMDAwZjAxNTAgIG1mbjogMDAxNjJlYzgKKFhFTikgICBnZm46IDAwMGYwMTUxICBtZm46
IDAwMTAxNTZhCihYRU4pICAgZ2ZuOiAwMDBmMDE1MiAgbWZuOiAwMDE2MmVjNwooWEVOKSAg
IGdmbjogMDAwZjAxNTMgIG1mbjogMDAxMDE3ODMKKFhFTikgICBnZm46IDAwMGYwMTU0ICBt
Zm46IDAwMTYyZWM2CihYRU4pICAgZ2ZuOiAwMDBmMDE1NSAgbWZuOiAwMDEwMTc4MgooWEVO
KSAgIGdmbjogMDAwZjAxNTYgIG1mbjogMDAxNjJlYzUKKFhFTikgICBnZm46IDAwMGYwMTU3
ICBtZm46IDAwMTAxMzY1CihYRU4pICAgZ2ZuOiAwMDBmMDE1OCAgbWZuOiAwMDE2MmVjNAoo
WEVOKSAgIGdmbjogMDAwZjAxNTkgIG1mbjogMDAxMDEzNjQKKFhFTikgICBnZm46IDAwMGYw
MTVhICBtZm46IDAwMTYyZWMzCihYRU4pICAgZ2ZuOiAwMDBmMDE1YiAgbWZuOiAwMDEwMTM2
MwooWEVOKSAgIGdmbjogMDAwZjAxNWMgIG1mbjogMDAxNjJlYzIKKFhFTikgICBnZm46IDAw
MGYwMTVkICBtZm46IDAwMTAxMzYyCihYRU4pICAgZ2ZuOiAwMDBmMDE1ZSAgbWZuOiAwMDE2
MmVjMQooWEVOKSAgIGdmbjogMDAwZjAxNWYgIG1mbjogMDAxMDEzNTcKKFhFTikgICBnZm46
IDAwMGYwMTYwICBtZm46IDAwMTYyZWMwCihYRU4pICAgZ2ZuOiAwMDBmMDE2MSAgbWZuOiAw
MDEwMTM1NgooWEVOKSAgIGdmbjogMDAwZjAxNjIgIG1mbjogMDAxNjMyY2YKKFhFTikgICBn
Zm46IDAwMGYwMTYzICBtZm46IDAwMTAyYmQ5CihYRU4pICAgZ2ZuOiAwMDBmMDE2NCAgbWZu
OiAwMDE2MzJjZQooWEVOKSAgIGdmbjogMDAwZjAxNjUgIG1mbjogMDAxMDJiZDgKKFhFTikg
ICBnZm46IDAwMGYwMTY2ICBtZm46IDAwMTYzMmNkCihYRU4pICAgZ2ZuOiAwMDBmMDE2NyAg
bWZuOiAwMDEwMDJkZgooWEVOKSAgIGdmbjogMDAwZjAxNjggIG1mbjogMDAxNjMyY2MKKFhF
TikgICBnZm46IDAwMGYwMTY5ICBtZm46IDAwMTAwMmRlCihYRU4pICAgZ2ZuOiAwMDBmMDE2
YSAgbWZuOiAwMDE2MzJjYgooWEVOKSAgIGdmbjogMDAwZjAxNmIgIG1mbjogMDAxMDJiY2IK
KFhFTikgICBnZm46IDAwMGYwMTZjICBtZm46IDAwMTYzMmNhCihYRU4pICAgZ2ZuOiAwMDBm
MDE2ZCAgbWZuOiAwMDEwMmJjYQooWEVOKSAgIGdmbjogMDAwZjAxNmUgIG1mbjogMDAxNjMy
YzkKKFhFTikgICBnZm46IDAwMGYwMTZmICBtZm46IDAwMTAyZTUzCihYRU4pICAgZ2ZuOiAw
MDBmMDE3MCAgbWZuOiAwMDE2MzJjOAooWEVOKSAgIGdmbjogMDAwZjAxNzEgIG1mbjogMDAx
MDJlNTIKKFhFTikgICBnZm46IDAwMGYwMTcyICBtZm46IDAwMTYzMmM3CihYRU4pICAgZ2Zu
OiAwMDBmMDE3MyAgbWZuOiAwMDEwMjUzYgooWEVOKSAgIGdmbjogMDAwZjAxNzQgIG1mbjog
MDAxNjMyYzYKKFhFTikgICBnZm46IDAwMGYwMTc1ICBtZm46IDAwMTAyNTNhCihYRU4pICAg
Z2ZuOiAwMDBmMDE3NiAgbWZuOiAwMDE2MzJjNQooWEVOKSAgIGdmbjogMDAwZjAxNzcgIG1m
bjogMDAxMDI1N2QKKFhFTikgICBnZm46IDAwMGYwMTc4ICBtZm46IDAwMTYzMmM0CihYRU4p
ICAgZ2ZuOiAwMDBmMDE3OSAgbWZuOiAwMDEwMjU3YwooWEVOKSAgIGdmbjogMDAwZjAxN2Eg
IG1mbjogMDAxNjMyYzMKKFhFTikgICBnZm46IDAwMGYwMTdiICBtZm46IDAwMTAyNTc5CihY
RU4pICAgZ2ZuOiAwMDBmMDE3YyAgbWZuOiAwMDE2MzJjMgooWEVOKSAgIGdmbjogMDAwZjAx
N2QgIG1mbjogMDAxMDI1NzgKKFhFTikgICBnZm46IDAwMGYwMTdlICBtZm46IDAwMTYzMmMx
CihYRU4pICAgZ2ZuOiAwMDBmMDE3ZiAgbWZuOiAwMDEwOTZkYgooWEVOKSAgIGdmbjogMDAw
ZjAxODAgIG1mbjogMDAxNjMyYzAKKFhFTikgICBnZm46IDAwMGYwMTgxICBtZm46IDAwMTA5
NmRhCihYRU4pICAgZ2ZuOiAwMDBmMDE4MiAgbWZuOiAwMDFiMDZiZgooWEVOKSAgIGdmbjog
MDAwZjAxODMgIG1mbjogMDAxMDllZGIKKFhFTikgICBnZm46IDAwMGYwMTg0ICBtZm46IDAw
MWIwNmJlCihYRU4pICAgZ2ZuOiAwMDBmMDE4NSAgbWZuOiAwMDEwOWVkYQooWEVOKSAgIGdm
bjogMDAwZjAxODYgIG1mbjogMDAxYjA2YmQKKFhFTikgICBnZm46IDAwMGYwMTg3ICBtZm46
IDAwMTAyYzliCihYRU4pICAgZ2ZuOiAwMDBmMDE4OCAgbWZuOiAwMDFiMDZiYwooWEVOKSAg
IGdmbjogMDAwZjAxODkgIG1mbjogMDAxMDJjOWEKKFhFTikgICBnZm46IDAwMGYwMThhICBt
Zm46IDAwMWIwNmJiCihYRU4pICAgZ2ZuOiAwMDBmMDE4YiAgbWZuOiAwMDEwMmQwNwooWEVO
KSAgIGdmbjogMDAwZjAxOGMgIG1mbjogMDAxYjA2YmEKKFhFTikgICBnZm46IDAwMGYwMThk
ICBtZm46IDAwMTAyZDA2CihYRU4pICAgZ2ZuOiAwMDBmMDE4ZSAgbWZuOiAwMDFiMDZiOQoo
WEVOKSAgIGdmbjogMDAwZjAxOGYgIG1mbjogMDAxMDJkMWIKKFhFTikgICBnZm46IDAwMGYw
MTkwICBtZm46IDAwMWIwNmI4CihYRU4pICAgZ2ZuOiAwMDBmMDE5MSAgbWZuOiAwMDEwMmQx
YQooWEVOKSAgIGdmbjogMDAwZjAxOTIgIG1mbjogMDAxYjA2YjcKKFhFTikgICBnZm46IDAw
MGYwMTkzICBtZm46IDAwMTAyZTJmCihYRU4pICAgZ2ZuOiAwMDBmMDE5NCAgbWZuOiAwMDFi
MDZiNgooWEVOKSAgIGdmbjogMDAwZjAxOTUgIG1mbjogMDAxMDJlMmUKKFhFTikgICBnZm46
IDAwMGYwMTk2ICBtZm46IDAwMWIwNmI1CihYRU4pICAgZ2ZuOiAwMDBmMDE5NyAgbWZuOiAw
MDEwMTMyOQooWEVOKSAgIGdmbjogMDAwZjAxOTggIG1mbjogMDAxYjA2YjQKKFhFTikgICBn
Zm46IDAwMGYwMTk5ICBtZm46IDAwMTAxMzI4CihYRU4pICAgZ2ZuOiAwMDBmMDE5YSAgbWZu
OiAwMDFiMDZiMwooWEVOKSAgIGdmbjogMDAwZjAxOWIgIG1mbjogMDAxMDJkN2IKKFhFTikg
ICBnZm46IDAwMGYwMTljICBtZm46IDAwMWIwNmIyCihYRU4pICAgZ2ZuOiAwMDBmMDE5ZCAg
bWZuOiAwMDEwMmQ3YQooWEVOKSAgIGdmbjogMDAwZjAxOWUgIG1mbjogMDAxYjA2YjEKKFhF
TikgICBnZm46IDAwMGYwMTlmICBtZm46IDAwMTAyOTZiCihYRU4pICAgZ2ZuOiAwMDBmMDFh
MCAgbWZuOiAwMDFiMDZiMAooWEVOKSAgIGdmbjogMDAwZjAxYTEgIG1mbjogMDAxMDI5NmEK
KFhFTikgICBnZm46IDAwMGYwMWEyICBtZm46IDAwMWIwNmFmCihYRU4pICAgZ2ZuOiAwMDBm
MDFhMyAgbWZuOiAwMDEwMjg5MwooWEVOKSAgIGdmbjogMDAwZjAxYTQgIG1mbjogMDAxYjA2
YWUKKFhFTikgICBnZm46IDAwMGYwMWE1ICBtZm46IDAwMTAyODkyCihYRU4pICAgZ2ZuOiAw
MDBmMDFhNiAgbWZuOiAwMDFiMDZhZAooWEVOKSAgIGdmbjogMDAwZjAxYTcgIG1mbjogMDAx
MDE4OTcKKFhFTikgICBnZm46IDAwMGYwMWE4ICBtZm46IDAwMWIwNmFjCihYRU4pICAgZ2Zu
OiAwMDBmMDFhOSAgbWZuOiAwMDEwMTg5NgooWEVOKSAgIGdmbjogMDAwZjAxYWEgIG1mbjog
MDAxYjA2YWIKKFhFTikgICBnZm46IDAwMGYwMWFiICBtZm46IDAwMTAyNDZmCihYRU4pICAg
Z2ZuOiAwMDBmMDFhYyAgbWZuOiAwMDFiMDZhYQooWEVOKSAgIGdmbjogMDAwZjAxYWQgIG1m
bjogMDAxMDI0NmUKKFhFTikgICBnZm46IDAwMGYwMWFlICBtZm46IDAwMWIwNmE5CihYRU4p
ICAgZ2ZuOiAwMDBmMDFhZiAgbWZuOiAwMDEwMjkyOQooWEVOKSAgIGdmbjogMDAwZjAxYjAg
IG1mbjogMDAxYjA2YTgKKFhFTikgICBnZm46IDAwMGYwMWIxICBtZm46IDAwMTAyOTI4CihY
RU4pICAgZ2ZuOiAwMDBmMDFiMiAgbWZuOiAwMDFiMDZhNwooWEVOKSAgIGdmbjogMDAwZjAx
YjMgIG1mbjogMDAxMDJiZTEKKFhFTikgICBnZm46IDAwMGYwMWI0ICBtZm46IDAwMWIwNmE2
CihYRU4pICAgZ2ZuOiAwMDBmMDFiNSAgbWZuOiAwMDEwMmJlMAooWEVOKSAgIGdmbjogMDAw
ZjAxYjYgIG1mbjogMDAxYjA2YTUKKFhFTikgICBnZm46IDAwMGYwMWI3ICBtZm46IDAwMTAx
ODY5CihYRU4pICAgZ2ZuOiAwMDBmMDFiOCAgbWZuOiAwMDFiMDZhNAooWEVOKSAgIGdmbjog
MDAwZjAxYjkgIG1mbjogMDAxMDE4NjgKKFhFTikgICBnZm46IDAwMGYwMWJhICBtZm46IDAw
MWIwNmEzCihYRU4pICAgZ2ZuOiAwMDBmMDFiYiAgbWZuOiAwMDEwMTk1ZgooWEVOKSAgIGdm
bjogMDAwZjAxYmMgIG1mbjogMDAxYjA2YTIKKFhFTikgICBnZm46IDAwMGYwMWJkICBtZm46
IDAwMTAxOTVlCihYRU4pICAgZ2ZuOiAwMDBmMDFiZSAgbWZuOiAwMDFiMDZhMQooWEVOKSAg
IGdmbjogMDAwZjAxYmYgIG1mbjogMDAxMDI3NTEKKFhFTikgICBnZm46IDAwMGYwMWMwICBt
Zm46IDAwMWIwNmEwCihYRU4pICAgZ2ZuOiAwMDBmMDFjMSAgbWZuOiAwMDEwMjc1MAooWEVO
KSAgIGdmbjogMDAwZjAxYzIgIG1mbjogMDAxYjA2OWYKKFhFTikgICBnZm46IDAwMGYwMWMz
ICBtZm46IDAwMTAxNDdmCihYRU4pICAgZ2ZuOiAwMDBmMDFjNCAgbWZuOiAwMDFiMDY5ZQoo
WEVOKSAgIGdmbjogMDAwZjAxYzUgIG1mbjogMDAxMDE0N2UKKFhFTikgICBnZm46IDAwMGYw
MWM2ICBtZm46IDAwMWIwNjlkCihYRU4pICAgZ2ZuOiAwMDBmMDFjNyAgbWZuOiAwMDEwMTQ1
MwooWEVOKSAgIGdmbjogMDAwZjAxYzggIG1mbjogMDAxYjA2OWMKKFhFTikgICBnZm46IDAw
MGYwMWM5ICBtZm46IDAwMTAxNDUyCihYRU4pICAgZ2ZuOiAwMDBmMDFjYSAgbWZuOiAwMDFi
MDY5YgooWEVOKSAgIGdmbjogMDAwZjAxY2IgIG1mbjogMDAxMDE3NmQKKFhFTikgICBnZm46
IDAwMGYwMWNjICBtZm46IDAwMWIwNjlhCihYRU4pICAgZ2ZuOiAwMDBmMDFjZCAgbWZuOiAw
MDEwMTc2YwooWEVOKSAgIGdmbjogMDAwZjAxY2UgIG1mbjogMDAxYjA2OTkKKFhFTikgICBn
Zm46IDAwMGYwMWNmICBtZm46IDAwMTAxNTExCihYRU4pICAgZ2ZuOiAwMDBmMDFkMCAgbWZu
OiAwMDFiMDY5OAooWEVOKSAgIGdmbjogMDAwZjAxZDEgIG1mbjogMDAxMDE1MTAKKFhFTikg
ICBnZm46IDAwMGYwMWQyICBtZm46IDAwMWIwNjk3CihYRU4pICAgZ2ZuOiAwMDBmMDFkMyAg
bWZuOiAwMDEwMjhiNQooWEVOKSAgIGdmbjogMDAwZjAxZDQgIG1mbjogMDAxYjA2OTYKKFhF
TikgICBnZm46IDAwMGYwMWQ1ICBtZm46IDAwMTAyOGI0CihYRU4pICAgZ2ZuOiAwMDBmMDFk
NiAgbWZuOiAwMDFiMDY5NQooWEVOKSAgIGdmbjogMDAwZjAxZDcgIG1mbjogMDAxMDE0MTEK
KFhFTikgICBnZm46IDAwMGYwMWQ4ICBtZm46IDAwMWIwNjk0CihYRU4pICAgZ2ZuOiAwMDBm
MDFkOSAgbWZuOiAwMDEwMTQxMAooWEVOKSAgIGdmbjogMDAwZjAxZGEgIG1mbjogMDAxYjA2
OTMKKFhFTikgICBnZm46IDAwMGYwMWRiICBtZm46IDAwMTAxNWE3CihYRU4pICAgZ2ZuOiAw
MDBmMDFkYyAgbWZuOiAwMDFiMDY5MgooWEVOKSAgIGdmbjogMDAwZjAxZGQgIG1mbjogMDAx
MDE1YTYKKFhFTikgICBnZm46IDAwMGYwMWRlICBtZm46IDAwMWIwNjkxCihYRU4pICAgZ2Zu
OiAwMDBmMDFkZiAgbWZuOiAwMDEwMmNmNQooWEVOKSAgIGdmbjogMDAwZjAxZTAgIG1mbjog
MDAxYjA2OTAKKFhFTikgICBnZm46IDAwMGYwMWUxICBtZm46IDAwMTAyY2Y0CihYRU4pICAg
Z2ZuOiAwMDBmMDFlMiAgbWZuOiAwMDFiMDY4ZgooWEVOKSAgIGdmbjogMDAwZjAxZTMgIG1m
bjogMDAxMDI3YzEKKFhFTikgICBnZm46IDAwMGYwMWU0ICBtZm46IDAwMWIwNjhlCihYRU4p
ICAgZ2ZuOiAwMDBmMDFlNSAgbWZuOiAwMDEwMjdjMAooWEVOKSAgIGdmbjogMDAwZjAxZTYg
IG1mbjogMDAxYjA2OGQKKFhFTikgICBnZm46IDAwMGYwMWU3ICBtZm46IDAwMTAwNTMzCihY
RU4pICAgZ2ZuOiAwMDBmMDFlOCAgbWZuOiAwMDFiMDY4YwooWEVOKSAgIGdmbjogMDAwZjAx
ZTkgIG1mbjogMDAxMDA1MzIKKFhFTikgICBnZm46IDAwMGYwMWVhICBtZm46IDAwMWIwNjhi
CihYRU4pICAgZ2ZuOiAwMDBmMDFlYiAgbWZuOiAwMDEwMmU5NQooWEVOKSAgIGdmbjogMDAw
ZjAxZWMgIG1mbjogMDAxYjA2OGEKKFhFTikgICBnZm46IDAwMGYwMWVkICBtZm46IDAwMTAy
ZTk0CihYRU4pICAgZ2ZuOiAwMDBmMDFlZSAgbWZuOiAwMDFiMDY4OQooWEVOKSAgIGdmbjog
MDAwZjAxZWYgIG1mbjogMDAxMDE0OTEKKFhFTikgICBnZm46IDAwMGYwMWYwICBtZm46IDAw
MWIwNjg4CihYRU4pICAgZ2ZuOiAwMDBmMDFmMSAgbWZuOiAwMDEwMTQ5MAooWEVOKSAgIGdm
bjogMDAwZjAxZjIgIG1mbjogMDAxYjA2ODcKKFhFTikgICBnZm46IDAwMGYwMWYzICBtZm46
IDAwMTAxMzBkCihYRU4pICAgZ2ZuOiAwMDBmMDFmNCAgbWZuOiAwMDFiMDY4NgooWEVOKSAg
IGdmbjogMDAwZjAxZjUgIG1mbjogMDAxMDEzMGMKKFhFTikgICBnZm46IDAwMGYwMWY2ICBt
Zm46IDAwMWIwNjg1CihYRU4pICAgZ2ZuOiAwMDBmMDFmNyAgbWZuOiAwMDEwMTcxNQooWEVO
KSAgIGdmbjogMDAwZjAxZjggIG1mbjogMDAxYjA2ODQKKFhFTikgICBnZm46IDAwMGYwMWY5
ICBtZm46IDAwMTAxNzE0CihYRU4pICAgZ2ZuOiAwMDBmMDFmYSAgbWZuOiAwMDFiMDY4Mwoo
WEVOKSAgIGdmbjogMDAwZjAxZmIgIG1mbjogMDAxMDEzOTkKKFhFTikgICBnZm46IDAwMGYw
MWZjICBtZm46IDAwMWIwNjgyCihYRU4pICAgZ2ZuOiAwMDBmMDFmZCAgbWZuOiAwMDEwMTM5
OAooWEVOKSAgIGdmbjogMDAwZjAxZmUgIG1mbjogMDAxYjA2ODEKKFhFTikgICBnZm46IDAw
MGYwMWZmICBtZm46IDAwMTAxZTgzCihYRU4pICAgZ2ZuOiAwMDBmMDIwMCAgbWZuOiAwMDFi
MDY4MAooWEVOKSAgIGdmbjogMDAwZjAyMDEgIG1mbjogMDAxMDFlODIKKFhFTikgICBnZm46
IDAwMGYwMjAyICBtZm46IDAwMTYyZWJmCihYRU4pICAgZ2ZuOiAwMDBmMDIwMyAgbWZuOiAw
MDEwMTNhOQooWEVOKSAgIGdmbjogMDAwZjAyMDQgIG1mbjogMDAxNjJlYmUKKFhFTikgICBn
Zm46IDAwMGYwMjA1ICBtZm46IDAwMTAxM2E4CihYRU4pICAgZ2ZuOiAwMDBmMDIwNiAgbWZu
OiAwMDE2MmViZAooWEVOKSAgIGdmbjogMDAwZjAyMDcgIG1mbjogMDAxMDI4ODMKKFhFTikg
ICBnZm46IDAwMGYwMjA4ICBtZm46IDAwMTYyZWJjCihYRU4pICAgZ2ZuOiAwMDBmMDIwOSAg
bWZuOiAwMDEwMjg4MgooWEVOKSAgIGdmbjogMDAwZjAyMGEgIG1mbjogMDAxNjJlYmIKKFhF
TikgICBnZm46IDAwMGYwMjBiICBtZm46IDAwMTAyOTQ1CihYRU4pICAgZ2ZuOiAwMDBmMDIw
YyAgbWZuOiAwMDE2MmViYQooWEVOKSAgIGdmbjogMDAwZjAyMGQgIG1mbjogMDAxMDI5NDQK
KFhFTikgICBnZm46IDAwMGYwMjBlICBtZm46IDAwMTYyZWI5CihYRU4pICAgZ2ZuOiAwMDBm
MDIwZiAgbWZuOiAwMDEwMWFjMwooWEVOKSAgIGdmbjogMDAwZjAyMTAgIG1mbjogMDAxNjJl
YjgKKFhFTikgICBnZm46IDAwMGYwMjExICBtZm46IDAwMTAxYWMyCihYRU4pICAgZ2ZuOiAw
MDBmMDIxMiAgbWZuOiAwMDE2MmViNwooWEVOKSAgIGdmbjogMDAwZjAyMTMgIG1mbjogMDAx
MDFhYzEKKFhFTikgICBnZm46IDAwMGYwMjE0ICBtZm46IDAwMTYyZWI2CihYRU4pICAgZ2Zu
OiAwMDBmMDIxNSAgbWZuOiAwMDEwMWFjMAooWEVOKSAgIGdmbjogMDAwZjAyMTYgIG1mbjog
MDAxNjJlYjUKKFhFTikgICBnZm46IDAwMGYwMjE3ICBtZm46IDAwMTAxNzQ3CihYRU4pICAg
Z2ZuOiAwMDBmMDIxOCAgbWZuOiAwMDE2MmViNAooWEVOKSAgIGdmbjogMDAwZjAyMTkgIG1m
bjogMDAxMDE3NDYKKFhFTikgICBnZm46IDAwMGYwMjFhICBtZm46IDAwMTYyZWIzCihYRU4p
ICAgZ2ZuOiAwMDBmMDIxYiAgbWZuOiAwMDEwMTc0NQooWEVOKSAgIGdmbjogMDAwZjAyMWMg
IG1mbjogMDAxNjJlYjIKKFhFTikgICBnZm46IDAwMGYwMjFkICBtZm46IDAwMTAxNzQ0CihY
RU4pICAgZ2ZuOiAwMDBmMDIxZSAgbWZuOiAwMDE2MmViMQooWEVOKSAgIGdmbjogMDAwZjAy
MWYgIG1mbjogMDAxMDI5ZjcKKFhFTikgICBnZm46IDAwMGYwMjIwICBtZm46IDAwMTYyZWIw
CihYRU4pICAgZ2ZuOiAwMDBmMDIyMSAgbWZuOiAwMDEwMjlmNgooWEVOKSAgIGdmbjogMDAw
ZjAyMjIgIG1mbjogMDAxNjJlYWYKKFhFTikgICBnZm46IDAwMGYwMjIzICBtZm46IDAwMTAy
OWY1CihYRU4pICAgZ2ZuOiAwMDBmMDIyNCAgbWZuOiAwMDE2MmVhZQooWEVOKSAgIGdmbjog
MDAwZjAyMjUgIG1mbjogMDAxMDI5ZjQKKFhFTikgICBnZm46IDAwMGYwMjI2ICBtZm46IDAw
MTYyZWFkCihYRU4pICAgZ2ZuOiAwMDBmMDIyNyAgbWZuOiAwMDEwMmQzNwooWEVOKSAgIGdm
bjogMDAwZjAyMjggIG1mbjogMDAxNjJlYWMKKFhFTikgICBnZm46IDAwMGYwMjI5ICBtZm46
IDAwMTAyZDM2CihYRU4pICAgZ2ZuOiAwMDBmMDIyYSAgbWZuOiAwMDE2MmVhYgooWEVOKSAg
IGdmbjogMDAwZjAyMmIgIG1mbjogMDAxMDJkMzUKKFhFTikgICBnZm46IDAwMGYwMjJjICBt
Zm46IDAwMTYyZWFhCihYRU4pICAgZ2ZuOiAwMDBmMDIyZCAgbWZuOiAwMDEwMmQzNAooWEVO
KSAgIGdmbjogMDAwZjAyMmUgIG1mbjogMDAxNjJlYTkKKFhFTikgICBnZm46IDAwMGYwMjJm
ICBtZm46IDAwMTAyNzM3CihYRU4pICAgZ2ZuOiAwMDBmMDIzMCAgbWZuOiAwMDE2MmVhOAoo
WEVOKSAgIGdmbjogMDAwZjAyMzEgIG1mbjogMDAxMDI3MzYKKFhFTikgICBnZm46IDAwMGYw
MjMyICBtZm46IDAwMTYyZWE3CihYRU4pICAgZ2ZuOiAwMDBmMDIzMyAgbWZuOiAwMDEwMjcz
NQooWEVOKSAgIGdmbjogMDAwZjAyMzQgIG1mbjogMDAxNjJlYTYKKFhFTikgICBnZm46IDAw
MGYwMjM1ICBtZm46IDAwMTAyNzM0CihYRU4pICAgZ2ZuOiAwMDBmMDIzNiAgbWZuOiAwMDE2
MmVhNQooWEVOKSAgIGdmbjogMDAwZjAyMzcgIG1mbjogMDAxMDE5ZGIKKFhFTikgICBnZm46
IDAwMGYwMjM4ICBtZm46IDAwMTYyZWE0CihYRU4pICAgZ2ZuOiAwMDBmMDIzOSAgbWZuOiAw
MDEwMTlkYQooWEVOKSAgIGdmbjogMDAwZjAyM2EgIG1mbjogMDAxNjJlYTMKKFhFTikgICBn
Zm46IDAwMGYwMjNiICBtZm46IDAwMTAxOWQ5CihYRU4pICAgZ2ZuOiAwMDBmMDIzYyAgbWZu
OiAwMDE2MmVhMgooWEVOKSAgIGdmbjogMDAwZjAyM2QgIG1mbjogMDAxMDE5ZDgKKFhFTikg
ICBnZm46IDAwMGYwMjNlICBtZm46IDAwMTYyZWExCihYRU4pICAgZ2ZuOiAwMDBmMDIzZiAg
bWZuOiAwMDEwMWQ5YgooWEVOKSAgIGdmbjogMDAwZjAyNDAgIG1mbjogMDAxNjJlYTAKKFhF
TikgICBnZm46IDAwMGYwMjQxICBtZm46IDAwMTAxZDlhCihYRU4pICAgZ2ZuOiAwMDBmMDI0
MiAgbWZuOiAwMDE2MmU5ZgooWEVOKSAgIGdmbjogMDAwZjAyNDMgIG1mbjogMDAxMDFkOTkK
KFhFTikgICBnZm46IDAwMGYwMjQ0ICBtZm46IDAwMTYyZTllCihYRU4pICAgZ2ZuOiAwMDBm
MDI0NSAgbWZuOiAwMDEwMWQ5OAooWEVOKSAgIGdmbjogMDAwZjAyNDYgIG1mbjogMDAxNjJl
OWQKKFhFTikgICBnZm46IDAwMGYwMjQ3ICBtZm46IDAwMTAxNzc3CihYRU4pICAgZ2ZuOiAw
MDBmMDI0OCAgbWZuOiAwMDE2MmU5YwooWEVOKSAgIGdmbjogMDAwZjAyNDkgIG1mbjogMDAx
MDE3NzYKKFhFTikgICBnZm46IDAwMGYwMjRhICBtZm46IDAwMTYyZTliCihYRU4pICAgZ2Zu
OiAwMDBmMDI0YiAgbWZuOiAwMDEwMTc3NQooWEVOKSAgIGdmbjogMDAwZjAyNGMgIG1mbjog
MDAxNjJlOWEKKFhFTikgICBnZm46IDAwMGYwMjRkICBtZm46IDAwMTAxNzc0CihYRU4pICAg
Z2ZuOiAwMDBmMDI0ZSAgbWZuOiAwMDE2MmU5OQooWEVOKSAgIGdmbjogMDAwZjAyNGYgIG1m
bjogMDAxMDI5MjcKKFhFTikgICBnZm46IDAwMGYwMjUwICBtZm46IDAwMTYyZTk4CihYRU4p
ICAgZ2ZuOiAwMDBmMDI1MSAgbWZuOiAwMDEwMjkyNgooWEVOKSAgIGdmbjogMDAwZjAyNTIg
IG1mbjogMDAxNjJlOTcKKFhFTikgICBnZm46IDAwMGYwMjUzICBtZm46IDAwMTAyOTI1CihY
RU4pICAgZ2ZuOiAwMDBmMDI1NCAgbWZuOiAwMDE2MmU5NgooWEVOKSAgIGdmbjogMDAwZjAy
NTUgIG1mbjogMDAxMDI5MjQKKFhFTikgICBnZm46IDAwMGYwMjU2ICBtZm46IDAwMTYyZTk1
CihYRU4pICAgZ2ZuOiAwMDBmMDI1NyAgbWZuOiAwMDEwMTU2MwooWEVOKSAgIGdmbjogMDAw
ZjAyNTggIG1mbjogMDAxNjJlOTQKKFhFTikgICBnZm46IDAwMGYwMjU5ICBtZm46IDAwMTAx
NTYyCihYRU4pICAgZ2ZuOiAwMDBmMDI1YSAgbWZuOiAwMDE2MmU5MwooWEVOKSAgIGdmbjog
MDAwZjAyNWIgIG1mbjogMDAxMDE1NjEKKFhFTikgICBnZm46IDAwMGYwMjVjICBtZm46IDAw
MTYyZTkyCihYRU4pICAgZ2ZuOiAwMDBmMDI1ZCAgbWZuOiAwMDEwMTU2MAooWEVOKSAgIGdm
bjogMDAwZjAyNWUgIG1mbjogMDAxNjJlOTEKKFhFTikgICBnZm46IDAwMGYwMjVmICBtZm46
IDAwMTAyZDgzCihYRU4pICAgZ2ZuOiAwMDBmMDI2MCAgbWZuOiAwMDE2MmU5MAooWEVOKSAg
IGdmbjogMDAwZjAyNjEgIG1mbjogMDAxMDJkODIKKFhFTikgICBnZm46IDAwMGYwMjYyICBt
Zm46IDAwMTYyZThmCihYRU4pICAgZ2ZuOiAwMDBmMDI2MyAgbWZuOiAwMDEwMmQ4MQooWEVO
KSAgIGdmbjogMDAwZjAyNjQgIG1mbjogMDAxNjJlOGUKKFhFTikgICBnZm46IDAwMGYwMjY1
ICBtZm46IDAwMTAyZDgwCihYRU4pICAgZ2ZuOiAwMDBmMDI2NiAgbWZuOiAwMDE2MmU4ZAoo
WEVOKSAgIGdmbjogMDAwZjAyNjcgIG1mbjogMDAxMDJkN2YKKFhFTikgICBnZm46IDAwMGYw
MjY4ICBtZm46IDAwMTYyZThjCihYRU4pICAgZ2ZuOiAwMDBmMDI2OSAgbWZuOiAwMDEwMmQ3
ZQooWEVOKSAgIGdmbjogMDAwZjAyNmEgIG1mbjogMDAxNjJlOGIKKFhFTikgICBnZm46IDAw
MGYwMjZiICBtZm46IDAwMTAyZDdkCihYRU4pICAgZ2ZuOiAwMDBmMDI2YyAgbWZuOiAwMDE2
MmU4YQooWEVOKSAgIGdmbjogMDAwZjAyNmQgIG1mbjogMDAxMDJkN2MKKFhFTikgICBnZm46
IDAwMGYwMjZlICBtZm46IDAwMTYyZTg5CihYRU4pICAgZ2ZuOiAwMDBmMDI2ZiAgbWZuOiAw
MDEwMmNjZgooWEVOKSAgIGdmbjogMDAwZjAyNzAgIG1mbjogMDAxNjJlODgKKFhFTikgICBn
Zm46IDAwMGYwMjcxICBtZm46IDAwMTAyY2NlCihYRU4pICAgZ2ZuOiAwMDBmMDI3MiAgbWZu
OiAwMDE2MmU4NwooWEVOKSAgIGdmbjogMDAwZjAyNzMgIG1mbjogMDAxMDJjY2QKKFhFTikg
ICBnZm46IDAwMGYwMjc0ICBtZm46IDAwMTYyZTg2CihYRU4pICAgZ2ZuOiAwMDBmMDI3NSAg
bWZuOiAwMDEwMmNjYwooWEVOKSAgIGdmbjogMDAwZjAyNzYgIG1mbjogMDAxNjJlODUKKFhF
TikgICBnZm46IDAwMGYwMjc3ICBtZm46IDAwMTAyOWZiCihYRU4pICAgZ2ZuOiAwMDBmMDI3
OCAgbWZuOiAwMDE2MmU4NAooWEVOKSAgIGdmbjogMDAwZjAyNzkgIG1mbjogMDAxMDI5ZmEK
KFhFTikgICBnZm46IDAwMGYwMjdhICBtZm46IDAwMTYyZTgzCihYRU4pICAgZ2ZuOiAwMDBm
MDI3YiAgbWZuOiAwMDEwMjlmOQooWEVOKSAgIGdmbjogMDAwZjAyN2MgIG1mbjogMDAxNjJl
ODIKKFhFTikgICBnZm46IDAwMGYwMjdkICBtZm46IDAwMTAyOWY4CihYRU4pICAgZ2ZuOiAw
MDBmMDI3ZSAgbWZuOiAwMDE2MmU4MQooWEVOKSAgIGdmbjogMDAwZjAyN2YgIG1mbjogMDAx
MDA0YjcKKFhFTikgICBnZm46IDAwMGYwMjgwICBtZm46IDAwMTYyZTgwCihYRU4pICAgZ2Zu
OiAwMDBmMDI4MSAgbWZuOiAwMDEwMDRiNgooWEVOKSAgIGdmbjogMDAwZjAyODIgIG1mbjog
MDAxNjMyYmYKKFhFTikgICBnZm46IDAwMGYwMjgzICBtZm46IDAwMTAwNGI1CihYRU4pICAg
Z2ZuOiAwMDBmMDI4NCAgbWZuOiAwMDE2MzJiZQooWEVOKSAgIGdmbjogMDAwZjAyODUgIG1m
bjogMDAxMDA0YjQKKFhFTikgICBnZm46IDAwMGYwMjg2ICBtZm46IDAwMTYzMmJkCihYRU4p
ICAgZ2ZuOiAwMDBmMDI4NyAgbWZuOiAwMDEwMTMwNwooWEVOKSAgIGdmbjogMDAwZjAyODgg
IG1mbjogMDAxNjMyYmMKKFhFTikgICBnZm46IDAwMGYwMjg5ICBtZm46IDAwMTAxMzA2CihY
RU4pICAgZ2ZuOiAwMDBmMDI4YSAgbWZuOiAwMDE2MzJiYgooWEVOKSAgIGdmbjogMDAwZjAy
OGIgIG1mbjogMDAxMDEzMDUKKFhFTikgICBnZm46IDAwMGYwMjhjICBtZm46IDAwMTYzMmJh
CihYRU4pICAgZ2ZuOiAwMDBmMDI4ZCAgbWZuOiAwMDEwMTMwNAooWEVOKSAgIGdmbjogMDAw
ZjAyOGUgIG1mbjogMDAxNjMyYjkKKFhFTikgICBnZm46IDAwMGYwMjhmICBtZm46IDAwMTAx
MzBiCihYRU4pICAgZ2ZuOiAwMDBmMDI5MCAgbWZuOiAwMDE2MzJiOAooWEVOKSAgIGdmbjog
MDAwZjAyOTEgIG1mbjogMDAxMDEzMGEKKFhFTikgICBnZm46IDAwMGYwMjkyICBtZm46IDAw
MTYzMmI3CihYRU4pICAgZ2ZuOiAwMDBmMDI5MyAgbWZuOiAwMDEwMTMwOQooWEVOKSAgIGdm
bjogMDAwZjAyOTQgIG1mbjogMDAxNjMyYjYKKFhFTikgICBnZm46IDAwMGYwMjk1ICBtZm46
IDAwMTAxMzA4CihYRU4pICAgZ2ZuOiAwMDBmMDI5NiAgbWZuOiAwMDE2MzJiNQooWEVOKSAg
IGdmbjogMDAwZjAyOTcgIG1mbjogMDAxMDJlNTcKKFhFTikgICBnZm46IDAwMGYwMjk4ICBt
Zm46IDAwMTYzMmI0CihYRU4pICAgZ2ZuOiAwMDBmMDI5OSAgbWZuOiAwMDEwMmU1NgooWEVO
KSAgIGdmbjogMDAwZjAyOWEgIG1mbjogMDAxNjMyYjMKKFhFTikgICBnZm46IDAwMGYwMjli
ICBtZm46IDAwMTAyZTU1CihYRU4pICAgZ2ZuOiAwMDBmMDI5YyAgbWZuOiAwMDE2MzJiMgoo
WEVOKSAgIGdmbjogMDAwZjAyOWQgIG1mbjogMDAxMDJlNTQKKFhFTikgICBnZm46IDAwMGYw
MjllICBtZm46IDAwMTYzMmIxCihYRU4pICAgZ2ZuOiAwMDBmMDI5ZiAgbWZuOiAwMDEwMjY2
ZgooWEVOKSAgIGdmbjogMDAwZjAyYTAgIG1mbjogMDAxNjMyYjAKKFhFTikgICBnZm46IDAw
MGYwMmExICBtZm46IDAwMTAyNjZlCihYRU4pICAgZ2ZuOiAwMDBmMDJhMiAgbWZuOiAwMDE2
MzJhZgooWEVOKSAgIGdmbjogMDAwZjAyYTMgIG1mbjogMDAxMDI2NmQKKFhFTikgICBnZm46
IDAwMGYwMmE0ICBtZm46IDAwMTYzMmFlCihYRU4pICAgZ2ZuOiAwMDBmMDJhNSAgbWZuOiAw
MDEwMjY2YwooWEVOKSAgIGdmbjogMDAwZjAyYTYgIG1mbjogMDAxNjMyYWQKKFhFTikgICBn
Zm46IDAwMGYwMmE3ICBtZm46IDAwMTAxN2E3CihYRU4pICAgZ2ZuOiAwMDBmMDJhOCAgbWZu
OiAwMDE2MzJhYwooWEVOKSAgIGdmbjogMDAwZjAyYTkgIG1mbjogMDAxMDE3YTYKKFhFTikg
ICBnZm46IDAwMGYwMmFhICBtZm46IDAwMTYzMmFiCihYRU4pICAgZ2ZuOiAwMDBmMDJhYiAg
bWZuOiAwMDEwMTdhNQooWEVOKSAgIGdmbjogMDAwZjAyYWMgIG1mbjogMDAxNjMyYWEKKFhF
TikgICBnZm46IDAwMGYwMmFkICBtZm46IDAwMTAxN2E0CihYRU4pICAgZ2ZuOiAwMDBmMDJh
ZSAgbWZuOiAwMDE2MzJhOQooWEVOKSAgIGdmbjogMDAwZjAyYWYgIG1mbjogMDAxMDk2ZGYK
KFhFTikgICBnZm46IDAwMGYwMmIwICBtZm46IDAwMTYzMmE4CihYRU4pICAgZ2ZuOiAwMDBm
MDJiMSAgbWZuOiAwMDEwOTZkZQooWEVOKSAgIGdmbjogMDAwZjAyYjIgIG1mbjogMDAxNjMy
YTcKKFhFTikgICBnZm46IDAwMGYwMmIzICBtZm46IDAwMTA5NmRkCihYRU4pICAgZ2ZuOiAw
MDBmMDJiNCAgbWZuOiAwMDE2MzJhNgooWEVOKSAgIGdmbjogMDAwZjAyYjUgIG1mbjogMDAx
MDk2ZGMKKFhFTikgICBnZm46IDAwMGYwMmI2ICBtZm46IDAwMTYzMmE1CihYRU4pICAgZ2Zu
OiAwMDBmMDJiNyAgbWZuOiAwMDEwOWVkZgooWEVOKSAgIGdmbjogMDAwZjAyYjggIG1mbjog
MDAxNjMyYTQKKFhFTikgICBnZm46IDAwMGYwMmI5ICBtZm46IDAwMTA5ZWRlCihYRU4pICAg
Z2ZuOiAwMDBmMDJiYSAgbWZuOiAwMDE2MzJhMwooWEVOKSAgIGdmbjogMDAwZjAyYmIgIG1m
bjogMDAxMDllZGQKKFhFTikgICBnZm46IDAwMGYwMmJjICBtZm46IDAwMTYzMmEyCihYRU4p
ICAgZ2ZuOiAwMDBmMDJiZCAgbWZuOiAwMDEwOWVkYwooWEVOKSAgIGdmbjogMDAwZjAyYmUg
IG1mbjogMDAxNjMyYTEKKFhFTikgICBnZm46IDAwMGYwMmJmICBtZm46IDAwMTAyNWVmCihY
RU4pICAgZ2ZuOiAwMDBmMDJjMCAgbWZuOiAwMDE2MzJhMAooWEVOKSAgIGdmbjogMDAwZjAy
YzEgIG1mbjogMDAxMDI1ZWUKKFhFTikgICBnZm46IDAwMGYwMmMyICBtZm46IDAwMTYzMjlm
CihYRU4pICAgZ2ZuOiAwMDBmMDJjMyAgbWZuOiAwMDEwMjVlZAooWEVOKSAgIGdmbjogMDAw
ZjAyYzQgIG1mbjogMDAxNjMyOWUKKFhFTikgICBnZm46IDAwMGYwMmM1ICBtZm46IDAwMTAy
NWVjCihYRU4pICAgZ2ZuOiAwMDBmMDJjNiAgbWZuOiAwMDE2MzI5ZAooWEVOKSAgIGdmbjog
MDAwZjAyYzcgIG1mbjogMDAxMDJiZDcKKFhFTikgICBnZm46IDAwMGYwMmM4ICBtZm46IDAw
MTYzMjljCihYRU4pICAgZ2ZuOiAwMDBmMDJjOSAgbWZuOiAwMDEwMmJkNgooWEVOKSAgIGdm
bjogMDAwZjAyY2EgIG1mbjogMDAxNjMyOWIKKFhFTikgICBnZm46IDAwMGYwMmNiICBtZm46
IDAwMTAyYmQ1CihYRU4pICAgZ2ZuOiAwMDBmMDJjYyAgbWZuOiAwMDE2MzI5YQooWEVOKSAg
IGdmbjogMDAwZjAyY2QgIG1mbjogMDAxMDJiZDQKKFhFTikgICBnZm46IDAwMGYwMmNlICBt
Zm46IDAwMTYzMjk5CihYRU4pICAgZ2ZuOiAwMDBmMDJjZiAgbWZuOiAwMDEwMWU5NwooWEVO
KSAgIGdmbjogMDAwZjAyZDAgIG1mbjogMDAxNjMyOTgKKFhFTikgICBnZm46IDAwMGYwMmQx
ICBtZm46IDAwMTAxZTk2CihYRU4pICAgZ2ZuOiAwMDBmMDJkMiAgbWZuOiAwMDE2MzI5Nwoo
WEVOKSAgIGdmbjogMDAwZjAyZDMgIG1mbjogMDAxMDFlOTUKKFhFTikgICBnZm46IDAwMGYw
MmQ0ICBtZm46IDAwMTYzMjk2CihYRU4pICAgZ2ZuOiAwMDBmMDJkNSAgbWZuOiAwMDEwMWU5
NAooWEVOKSAgIGdmbjogMDAwZjAyZDYgIG1mbjogMDAxNjMyOTUKKFhFTikgICBnZm46IDAw
MGYwMmQ3ICBtZm46ICswMDEwMWU0ZgooWEVOKSAgIGdmbjogMDAwZjAyZDggIG1mbjogMDAx
NjMyOTQKKFhFTikgICBnZm46IDAwMGYwMmQ5ICBtZm46IDAwMTAxZTRlCihYRU4pICAgZ2Zu
OiAwMDBmMDJkYSAgbWZuOiAwMDE2MzI5MwooWEVOKSAgIGdmbjogMDAwZjAyZGIgIG1mbjog
MDAxMDFlNGQKKFhFTikgICBnZm46IDAwMGYwMmRjICBtZm46IDAwMTYzMjkyCihYRU4pICAg
Z2ZuOiAwMDBmMDJkZCAgbWZuOiAwMDEwMWU0YwooWEVOKSAgIGdmbjogMDAwZjAyZGUgIG1m
bjogMDAxNjMyOTEKKFhFTikgICBnZm46IDAwMGYwMmRmICBtZm46IDAwMTAxYWJmCihYRU4p
ICAgZ2ZuOiAwMDBmMDJlMCAgbWZuOiAwMDE2MzI5MAooWEVOKSAgIGdmbjogMDAwZjAyZTEg
IG1mbjogMDAxMDFhYmUKKFhFTikgICBnZm46IDAwMGYwMmUyICBtZm46IDAwMTYzMjhmCihY
RU4pICAgZ2ZuOiAwMDBmMDJlMyAgbWZuOiAwMDEwMWFiZAooWEVOKSAgIGdmbjogMDAwZjAy
ZTQgIG1mbjogMDAxNjMyOGUKKFhFTikgICBnZm46IDAwMGYwMmU1ICBtZm46IDAwMTAxYWJj
CihYRU4pICAgZ2ZuOiAwMDBmMDJlNiAgbWZuOiAwMDE2MzI4ZAooWEVOKSAgIGdmbjogMDAw
ZjAyZTcgIG1mbjogMDAxMDJiY2YKKFhFTikgICBnZm46IDAwMGYwMmU4ICBtZm46IDAwMTYz
MjhjCihYRU4pICAgZ2ZuOiAwMDBmMDJlOSAgbWZuOiAwMDEwMmJjZQooWEVOKSAgIGdmbjog
MDAwZjAyZWEgIG1mbjogMDAxNjMyOGIKKFhFTikgICBnZm46IDAwMGYwMmViICBtZm46IDAw
MTAyYmNkCihYRU4pICAgZ2ZuOiAwMDBmMDJlYyAgbWZuOiAwMDE2MzI4YQooWEVOKSAgIGdm
bjogMDAwZjAyZWQgIG1mbjogMDAxMDJiY2MKKFhFTikgICBnZm46IDAwMGYwMmVlICBtZm46
IDAwMTYzMjg5CihYRU4pICAgZ2ZuOiAwMDBmMDJlZiAgbWZuOiAwMDEwMmU3ZgooWEVOKSAg
IGdmbjogMDAwZjAyZjAgIG1mbjogMDAxNjMyODgKKFhFTikgICBnZm46IDAwMGYwMmYxICBt
Zm46IDAwMTAyZTdlCihYRU4pICAgZ2ZuOiAwMDBmMDJmMiAgbWZuOiAwMDE2MzI4NwooWEVO
KSAgIGdmbjogMDAwZjAyZjMgIG1mbjogMDAxMDJlN2QKKFhFTikgICBnZm46IDAwMGYwMmY0
ICBtZm46IDAwMTYzMjg2CihYRU4pICAgZ2ZuOiAwMDBmMDJmNSAgbWZuOiAwMDEwMmU3Ywoo
WEVOKSAgIGdmbjogMDAwZjAyZjYgIG1mbjogMDAxNjMyODUKKFhFTikgICBnZm46IDAwMGYw
MmY3ICBtZm46IDAwMTAxMzRiCihYRU4pICAgZ2ZuOiAwMDBmMDJmOCAgbWZuOiAwMDE2MzI4
NAooWEVOKSAgIGdmbjogMDAwZjAyZjkgIG1mbjogMDAxMDEzNGEKKFhFTikgICBnZm46IDAw
MGYwMmZhICBtZm46IDAwMTYzMjgzCihYRU4pICAgZ2ZuOiAwMDBmMDJmYiAgbWZuOiAwMDEw
MTM0OQooWEVOKSAgIGdmbjogMDAwZjAyZmMgIG1mbjogMDAxNjMyODIKKFhFTikgICBnZm46
IDAwMGYwMmZkICBtZm46IDAwMTAxMzQ4CihYRU4pICAgZ2ZuOiAwMDBmMDJmZSAgbWZuOiAw
MDE2MzI4MQooWEVOKSAgIGdmbjogMDAwZjAyZmYgIG1mbjogMDAxMDI3NWIKKFhFTikgICBn
Zm46IDAwMGYwMzAwICBtZm46IDAwMTYzMjgwCihYRU4pICAgZ2ZuOiAwMDBmMDMwMSAgbWZu
OiAwMDEwMjc1YQooWEVOKSAgIGdmbjogMDAwZjAzMDIgIG1mbjogMDAxNjMwZmYKKFhFTikg
ICBnZm46IDAwMGYwMzAzICBtZm46IDAwMTAyNzU5CihYRU4pICAgZ2ZuOiAwMDBmMDMwNCAg
bWZuOiAwMDE2MzBmZQooWEVOKSAgIGdmbjogMDAwZjAzMDUgIG1mbjogMDAxMDI3NTgKKFhF
TikgICBnZm46IDAwMGYwMzA2ICBtZm46IDAwMTYzMGZkCihYRU4pICAgZ2ZuOiAwMDBmMDMw
NyAgbWZuOiAwMDEwMmRmMwooWEVOKSAgIGdmbjogMDAwZjAzMDggIG1mbjogMDAxNjMwZmMK
KFhFTikgICBnZm46IDAwMGYwMzA5ICBtZm46IDAwMTAyZGYyCihYRU4pICAgZ2ZuOiAwMDBm
MDMwYSAgbWZuOiAwMDE2MzBmYgooWEVOKSAgIGdmbjogMDAwZjAzMGIgIG1mbjogMDAxMDJk
ZjEKKFhFTikgICBnZm46IDAwMGYwMzBjICBtZm46IDAwMTYzMGZhCihYRU4pICAgZ2ZuOiAw
MDBmMDMwZCAgbWZuOiAwMDEwMmRmMAooWEVOKSAgIGdmbjogMDAwZjAzMGUgIG1mbjogMDAx
NjMwZjkKKFhFTikgICBnZm46IDAwMGYwMzBmICBtZm46IDAwMTAxMmRmCihYRU4pICAgZ2Zu
OiAwMDBmMDMxMCAgbWZuOiAwMDE2MzBmOAooWEVOKSAgIGdmbjogMDAwZjAzMTEgIG1mbjog
MDAxMDEyZGUKKFhFTikgICBnZm46IDAwMGYwMzEyICBtZm46IDAwMTYzMGY3CihYRU4pICAg
Z2ZuOiAwMDBmMDMxMyAgbWZuOiAwMDEwMTJkZAooWEVOKSAgIGdmbjogMDAwZjAzMTQgIG1m
bjogMDAxNjMwZjYKKFhFTikgICBnZm46IDAwMGYwMzE1ICBtZm46IDAwMTAxMmRjCihYRU4p
ICAgZ2ZuOiAwMDBmMDMxNiAgbWZuOiAwMDE2MzBmNQooWEVOKSAgIGdmbjogMDAwZjAzMTcg
IG1mbjogMDAxMDAzNWIKKFhFTikgICBnZm46IDAwMGYwMzE4ICBtZm46IDAwMTYzMGY0CihY
RU4pICAgZ2ZuOiAwMDBmMDMxOSAgbWZuOiAwMDEwMDM1YQooWEVOKSAgIGdmbjogMDAwZjAz
MWEgIG1mbjogMDAxNjMwZjMKKFhFTikgICBnZm46IDAwMGYwMzFiICBtZm46IDAwMTAwMzU5
CihYRU4pICAgZ2ZuOiAwMDBmMDMxYyAgbWZuOiAwMDE2MzBmMgooWEVOKSAgIGdmbjogMDAw
ZjAzMWQgIG1mbjogMDAxMDAzNTgKKFhFTikgICBnZm46IDAwMGYwMzFlICBtZm46IDAwMTYz
MGYxCihYRU4pICAgZ2ZuOiAwMDBmMDMxZiAgbWZuOiAwMDEwMjVlNwooWEVOKSAgIGdmbjog
MDAwZjAzMjAgIG1mbjogMDAxNjMwZjAKKFhFTikgICBnZm46IDAwMGYwMzIxICBtZm46IDAw
MTAyNWU2CihYRU4pICAgZ2ZuOiAwMDBmMDMyMiAgbWZuOiAwMDE2MzBlZgooWEVOKSAgIGdm
bjogMDAwZjAzMjMgIG1mbjogMDAxMDI1ZTUKKFhFTikgICBnZm46IDAwMGYwMzI0ICBtZm46
IDAwMTYzMGVlCihYRU4pICAgZ2ZuOiAwMDBmMDMyNSAgbWZuOiAwMDEwMjVlNAooWEVOKSAg
IGdmbjogMDAwZjAzMjYgIG1mbjogMDAxNjMwZWQKKFhFTikgICBnZm46IDAwMGYwMzI3ICBt
Zm46IDAwMTAyNWUzCihYRU4pICAgZ2ZuOiAwMDBmMDMyOCAgbWZuOiAwMDE2MzBlYwooWEVO
KSAgIGdmbjogMDAwZjAzMjkgIG1mbjogMDAxMDI1ZTIKKFhFTikgICBnZm46IDAwMGYwMzJh
ICBtZm46IDAwMTYzMGViCihYRU4pICAgZ2ZuOiAwMDBmMDMyYiAgbWZuOiAwMDEwMjVlMQoo
WEVOKSAgIGdmbjogMDAwZjAzMmMgIG1mbjogMDAxNjMwZWEKKFhFTikgICBnZm46IDAwMGYw
MzJkICBtZm46IDAwMTAyNWUwCihYRU4pICAgZ2ZuOiAwMDBmMDMyZSAgbWZuOiAwMDE2MzBl
OQooWEVOKSAgIGdmbjogMDAwZjAzMmYgIG1mbjogMDAxMDI2NzcKKFhFTikgICBnZm46IDAw
MGYwMzMwICBtZm46IDAwMTYzMGU4CihYRU4pICAgZ2ZuOiAwMDBmMDMzMSAgbWZuOiAwMDEw
MjY3NgooWEVOKSAgIGdmbjogMDAwZjAzMzIgIG1mbjogMDAxNjMwZTcKKFhFTikgICBnZm46
IDAwMGYwMzMzICBtZm46IDAwMTAyNjc1CihYRU4pICAgZ2ZuOiAwMDBmMDMzNCAgbWZuOiAw
MDE2MzBlNgooWEVOKSAgIGdmbjogMDAwZjAzMzUgIG1mbjogMDAxMDI2NzQKKFhFTikgICBn
Zm46IDAwMGYwMzM2ICBtZm46IDAwMTYzMGU1CihYRU4pICAgZ2ZuOiAwMDBmMDMzNyAgbWZu
OiAwMDEwMjY3MwooWEVOKSAgIGdmbjogMDAwZjAzMzggIG1mbjogMDAxNjMwZTQKKFhFTikg
ICBnZm46IDAwMGYwMzM5ICBtZm46IDAwMTAyNjcyCihYRU4pICAgZ2ZuOiAwMDBmMDMzYSAg
bWZuOiAwMDE2MzBlMwooWEVOKSAgIGdmbjogMDAwZjAzM2IgIG1mbjogMDAxMDI2NzEKKFhF
TikgICBnZm46IDAwMGYwMzNjICBtZm46IDAwMTYzMGUyCihYRU4pICAgZ2ZuOiAwMDBmMDMz
ZCAgbWZuOiAwMDEwMjY3MAooWEVOKSAgIGdmbjogMDAwZjAzM2UgIG1mbjogMDAxNjMwZTEK
KFhFTikgICBnZm46IDAwMGYwMzNmICBtZm46IDAwMTAyMzQ3CihYRU4pICAgZ2ZuOiAwMDBm
MDM0MCAgbWZuOiAwMDE2MzBlMAooWEVOKSAgIGdmbjogMDAwZjAzNDEgIG1mbjogMDAxMDIz
NDYKKFhFTikgICBnZm46IDAwMGYwMzQyICBtZm46IDAwMTYzMGRmCihYRU4pICAgZ2ZuOiAw
MDBmMDM0MyAgbWZuOiAwMDEwMjM0NQooWEVOKSAgIGdmbjogMDAwZjAzNDQgIG1mbjogMDAx
NjMwZGUKKFhFTikgICBnZm46IDAwMGYwMzQ1ICBtZm46IDAwMTAyMzQ0CihYRU4pICAgZ2Zu
OiAwMDBmMDM0NiAgbWZuOiAwMDE2MzBkZAooWEVOKSAgIGdmbjogMDAwZjAzNDcgIG1mbjog
MDAxMDIzNDMKKFhFTikgICBnZm46IDAwMGYwMzQ4ICBtZm46IDAwMTYzMGRjCihYRU4pICAg
Z2ZuOiAwMDBmMDM0OSAgbWZuOiAwMDEwMjM0MgooWEVOKSAgIGdmbjogMDAwZjAzNGEgIG1m
bjogMDAxNjMwZGIKKFhFTikgICBnZm46IDAwMGYwMzRiICBtZm46IDAwMTAyMzQxCihYRU4p
ICAgZ2ZuOiAwMDBmMDM0YyAgbWZuOiAwMDE2MzBkYQooWEVOKSAgIGdmbjogMDAwZjAzNGQg
IG1mbjogMDAxMDIzNDAKKFhFTikgICBnZm46IDAwMGYwMzRlICBtZm46IDAwMTYzMGQ5CihY
RU4pICAgZ2ZuOiAwMDBmMDM0ZiAgbWZuOiAwMDEwOTgxNwooWEVOKSAgIGdmbjogMDAwZjAz
NTAgIG1mbjogMDAxNjMwZDgKKFhFTikgICBnZm46IDAwMGYwMzUxICBtZm46IDAwMTA5ODE2
CihYRU4pICAgZ2ZuOiAwMDBmMDM1MiAgbWZuOiAwMDE2MzBkNwooWEVOKSAgIGdmbjogMDAw
ZjAzNTMgIG1mbjogMDAxMDk4MTUKKFhFTikgICBnZm46IDAwMGYwMzU0ICBtZm46IDAwMTYz
MGQ2CihYRU4pICAgZ2ZuOiAwMDBmMDM1NSAgbWZuOiAwMDEwOTgxNAooWEVOKSAgIGdmbjog
MDAwZjAzNTYgIG1mbjogMDAxNjMwZDUKKFhFTikgICBnZm46IDAwMGYwMzU3ICBtZm46IDAw
MTA5ODEzCihYRU4pICAgZ2ZuOiAwMDBmMDM1OCAgbWZuOiAwMDE2MzBkNAooWEVOKSAgIGdm
bjogMDAwZjAzNTkgIG1mbjogMDAxMDk4MTIKKFhFTikgICBnZm46IDAwMGYwMzVhICBtZm46
IDAwMTYzMGQzCihYRU4pICAgZ2ZuOiAwMDBmMDM1YiAgbWZuOiAwMDEwOTgxMQooWEVOKSAg
IGdmbjogMDAwZjAzNWMgIG1mbjogMDAxNjMwZDIKKFhFTikgICBnZm46IDAwMGYwMzVkICBt
Zm46IDAwMTA5ODEwCihYRU4pICAgZ2ZuOiAwMDBmMDM1ZSAgbWZuOiAwMDE2MzBkMQooWEVO
KSAgIGdmbjogMDAwZjAzNWYgIG1mbjogMDAxMDIyNGYKKFhFTikgICBnZm46IDAwMGYwMzYw
ICBtZm46IDAwMTYzMGQwCihYRU4pICAgZ2ZuOiAwMDBmMDM2MSAgbWZuOiAwMDEwMjI0ZQoo
WEVOKSAgIGdmbjogMDAwZjAzNjIgIG1mbjogMDAxNjMwY2YKKFhFTikgICBnZm46IDAwMGYw
MzYzICBtZm46IDAwMTAyMjRkCihYRU4pICAgZ2ZuOiAwMDBmMDM2NCAgbWZuOiAwMDE2MzBj
ZQooWEVOKSAgIGdmbjogMDAwZjAzNjUgIG1mbjogMDAxMDIyNGMKKFhFTikgICBnZm46IDAw
MGYwMzY2ICBtZm46IDAwMTYzMGNkCihYRU4pICAgZ2ZuOiAwMDBmMDM2NyAgbWZuOiAwMDEw
MjI0YgooWEVOKSAgIGdmbjogMDAwZjAzNjggIG1mbjogMDAxNjMwY2MKKFhFTikgICBnZm46
IDAwMGYwMzY5ICBtZm46IDAwMTAyMjRhCihYRU4pICAgZ2ZuOiAwMDBmMDM2YSAgbWZuOiAw
MDE2MzBjYgooWEVOKSAgIGdmbjogMDAwZjAzNmIgIG1mbjogMDAxMDIyNDkKKFhFTikgICBn
Zm46IDAwMGYwMzZjICBtZm46IDAwMTYzMGNhCihYRU4pICAgZ2ZuOiAwMDBmMDM2ZCAgbWZu
OiAwMDEwMjI0OAooWEVOKSAgIGdmbjogMDAwZjAzNmUgIG1mbjogMDAxNjMwYzkKKFhFTikg
ICBnZm46IDAwMGYwMzZmICBtZm46IDAwMTAyMjQ3CihYRU4pICAgZ2ZuOiAwMDBmMDM3MCAg
bWZuOiAwMDE2MzBjOAooWEVOKSAgIGdmbjogMDAwZjAzNzEgIG1mbjogMDAxMDIyNDYKKFhF
TikgICBnZm46IDAwMGYwMzcyICBtZm46IDAwMTYzMGM3CihYRU4pICAgZ2ZuOiAwMDBmMDM3
MyAgbWZuOiAwMDEwMjI0NQooWEVOKSAgIGdmbjogMDAwZjAzNzQgIG1mbjogMDAxNjMwYzYK
KFhFTikgICBnZm46IDAwMGYwMzc1ICBtZm46IDAwMTAyMjQ0CihYRU4pICAgZ2ZuOiAwMDBm
MDM3NiAgbWZuOiAwMDE2MzBjNQooWEVOKSAgIGdmbjogMDAwZjAzNzcgIG1mbjogMDAxMDIy
NDMKKFhFTikgICBnZm46IDAwMGYwMzc4ICBtZm46IDAwMTYzMGM0CihYRU4pICAgZ2ZuOiAw
MDBmMDM3OSAgbWZuOiAwMDEwMjI0MgooWEVOKSAgIGdmbjogMDAwZjAzN2EgIG1mbjogMDAx
NjMwYzMKKFhFTikgICBnZm46IDAwMGYwMzdiICBtZm46IDAwMTAyMjQxCihYRU4pICAgZ2Zu
OiAwMDBmMDM3YyAgbWZuOiAwMDE2MzBjMgooWEVOKSAgIGdmbjogMDAwZjAzN2QgIG1mbjog
MDAxMDIyNDAKKFhFTikgICBnZm46IDAwMGYwMzdlICBtZm46IDAwMTYzMGMxCihYRU4pICAg
Z2ZuOiAwMDBmMDM3ZiAgbWZuOiAwMDEwOTgwZgooWEVOKSAgIGdmbjogMDAwZjAzODAgIG1m
bjogMDAxNjMwYzAKKFhFTikgICBnZm46IDAwMGYwMzgxICBtZm46IDAwMTA5ODBlCihYRU4p
ICAgZ2ZuOiAwMDBmMDM4MiAgbWZuOiAwMDFiMDY3ZgooWEVOKSAgIGdmbjogMDAwZjAzODMg
IG1mbjogMDAxMDk4MGQKKFhFTikgICBnZm46IDAwMGYwMzg0ICBtZm46IDAwMWIwNjdlCihY
RU4pICAgZ2ZuOiAwMDBmMDM4NSAgbWZuOiAwMDEwOTgwYwooWEVOKSAgIGdmbjogMDAwZjAz
ODYgIG1mbjogMDAxYjA2N2QKKFhFTikgICBnZm46IDAwMGYwMzg3ICBtZm46IDAwMTA5ODBi
CihYRU4pICAgZ2ZuOiAwMDBmMDM4OCAgbWZuOiAwMDFiMDY3YwooWEVOKSAgIGdmbjogMDAw
ZjAzODkgIG1mbjogMDAxMDk4MGEKKFhFTikgICBnZm46IDAwMGYwMzhhICBtZm46IDAwMWIw
NjdiCihYRU4pICAgZ2ZuOiAwMDBmMDM4YiAgbWZuOiAwMDEwOTgwOQooWEVOKSAgIGdmbjog
MDAwZjAzOGMgIG1mbjogMDAxYjA2N2EKKFhFTikgICBnZm46IDAwMGYwMzhkICBtZm46IDAw
MTA5ODA4CihYRU4pICAgZ2ZuOiAwMDBmMDM4ZSAgbWZuOiAwMDFiMDY3OQooWEVOKSAgIGdm
bjogMDAwZjAzOGYgIG1mbjogMDAxMDk4MDcKKFhFTikgICBnZm46IDAwMGYwMzkwICBtZm46
IDAwMWIwNjc4CihYRU4pICAgZ2ZuOiAwMDBmMDM5MSAgbWZuOiAwMDEwOTgwNgooWEVOKSAg
IGdmbjogMDAwZjAzOTIgIG1mbjogMDAxYjA2NzcKKFhFTikgICBnZm46IDAwMGYwMzkzICBt
Zm46IDAwMTA5ODA1CihYRU4pICAgZ2ZuOiAwMDBmMDM5NCAgbWZuOiAwMDFiMDY3NgooWEVO
KSAgIGdmbjogMDAwZjAzOTUgIG1mbjogMDAxMDk4MDQKKFhFTikgICBnZm46IDAwMGYwMzk2
ICBtZm46IDAwMWIwNjc1CihYRU4pICAgZ2ZuOiAwMDBmMDM5NyAgbWZuOiAwMDEwOTgwMwoo
WEVOKSAgIGdmbjogMDAwZjAzOTggIG1mbjogMDAxYjA2NzQKKFhFTikgICBnZm46IDAwMGYw
Mzk5ICBtZm46IDAwMTA5ODAyCihYRU4pICAgZ2ZuOiAwMDBmMDM5YSAgbWZuOiAwMDFiMDY3
MwooWEVOKSAgIGdmbjogMDAwZjAzOWIgIG1mbjogMDAxMDk4MDEKKFhFTikgICBnZm46IDAw
MGYwMzljICBtZm46IDAwMWIwNjcyCihYRU4pICAgZ2ZuOiAwMDBmMDM5ZCAgbWZuOiAwMDEw
OTgwMAooWEVOKSAgIGdmbjogMDAwZjAzOWUgIG1mbjogMDAxYjA2NzEKKFhFTikgICBnZm46
IDAwMGYwMzlmICBtZm46IDAwMTAyNWZmCihYRU4pICAgZ2ZuOiAwMDBmMDNhMCAgbWZuOiAw
MDFiMDY3MAooWEVOKSAgIGdmbjogMDAwZjAzYTEgIG1mbjogMDAxMDI1ZmUKKFhFTikgICBn
Zm46IDAwMGYwM2EyICBtZm46IDAwMWIwNjZmCihYRU4pICAgZ2ZuOiAwMDBmMDNhMyAgbWZu
OiAwMDEwMjVmZAooWEVOKSAgIGdmbjogMDAwZjAzYTQgIG1mbjogMDAxYjA2NmUKKFhFTikg
ICBnZm46IDAwMGYwM2E1ICBtZm46IDAwMTAyNWZjCihYRU4pICAgZ2ZuOiAwMDBmMDNhNiAg
bWZuOiAwMDFiMDY2ZAooWEVOKSAgIGdmbjogMDAwZjAzYTcgIG1mbjogMDAxMDI1ZmIKKFhF
TikgICBnZm46IDAwMGYwM2E4ICBtZm46IDAwMWIwNjZjCihYRU4pICAgZ2ZuOiAwMDBmMDNh
OSAgbWZuOiAwMDEwMjVmYQooWEVOKSAgIGdmbjogMDAwZjAzYWEgIG1mbjogMDAxYjA2NmIK
KFhFTikgICBnZm46IDAwMGYwM2FiICBtZm46IDAwMTAyNWY5CihYRU4pICAgZ2ZuOiAwMDBm
MDNhYyAgbWZuOiAwMDFiMDY2YQooWEVOKSAgIGdmbjogMDAwZjAzYWQgIG1mbjogMDAxMDI1
ZjgKKFhFTikgICBnZm46IDAwMGYwM2FlICBtZm46IDAwMWIwNjY5CihYRU4pICAgZ2ZuOiAw
MDBmMDNhZiAgbWZuOiAwMDEwMjVmNwooWEVOKSAgIGdmbjogMDAwZjAzYjAgIG1mbjogMDAx
YjA2NjgKKFhFTikgICBnZm46IDAwMGYwM2IxICBtZm46IDAwMTAyNWY2CihYRU4pICAgZ2Zu
OiAwMDBmMDNiMiAgbWZuOiAwMDFiMDY2NwooWEVOKSAgIGdmbjogMDAwZjAzYjMgIG1mbjog
MDAxMDI1ZjUKKFhFTikgICBnZm46IDAwMGYwM2I0ICBtZm46IDAwMWIwNjY2CihYRU4pICAg
Z2ZuOiAwMDBmMDNiNSAgbWZuOiAwMDEwMjVmNAooWEVOKSAgIGdmbjogMDAwZjAzYjYgIG1m
bjogMDAxYjA2NjUKKFhFTikgICBnZm46IDAwMGYwM2I3ICBtZm46IDAwMTAyNWYzCihYRU4p
ICAgZ2ZuOiAwMDBmMDNiOCAgbWZuOiAwMDFiMDY2NAooWEVOKSAgIGdmbjogMDAwZjAzYjkg
IG1mbjogMDAxMDI1ZjIKKFhFTikgICBnZm46IDAwMGYwM2JhICBtZm46IDAwMWIwNjYzCihY
RU4pICAgZ2ZuOiAwMDBmMDNiYiAgbWZuOiAwMDEwMjVmMQooWEVOKSAgIGdmbjogMDAwZjAz
YmMgIG1mbjogMDAxYjA2NjIKKFhFTikgICBnZm46IDAwMGYwM2JkICBtZm46IDAwMTAyNWYw
CihYRU4pICAgZ2ZuOiAwMDBmMDNiZSAgbWZuOiAwMDFiMDY2MQooWEVOKSAgIGdmbjogMDAw
ZjAzYmYgIG1mbjogMDAxMDI1ZGYKKFhFTikgICBnZm46IDAwMGYwM2MwICBtZm46IDAwMWIw
NjYwCihYRU4pICAgZ2ZuOiAwMDBmMDNjMSAgbWZuOiAwMDEwMjVkZQooWEVOKSAgIGdmbjog
MDAwZjAzYzIgIG1mbjogMDAxYjA2NWYKKFhFTikgICBnZm46IDAwMGYwM2MzICBtZm46IDAw
MTAyNWRkCihYRU4pICAgZ2ZuOiAwMDBmMDNjNCAgbWZuOiAwMDFiMDY1ZQooWEVOKSAgIGdm
bjogMDAwZjAzYzUgIG1mbjogMDAxMDI1ZGMKKFhFTikgICBnZm46IDAwMGYwM2M2ICBtZm46
IDAwMWIwNjVkCihYRU4pICAgZ2ZuOiAwMDBmMDNjNyAgbWZuOiAwMDEwMjVkYgooWEVOKSAg
IGdmbjogMDAwZjAzYzggIG1mbjogMDAxYjA2NWMKKFhFTikgICBnZm46IDAwMGYwM2M5ICBt
Zm46IDAwMTAyNWRhCihYRU4pICAgZ2ZuOiAwMDBmMDNjYSAgbWZuOiAwMDFiMDY1YgooWEVO
KSAgIGdmbjogMDAwZjAzY2IgIG1mbjogMDAxMDI1ZDkKKFhFTikgICBnZm46IDAwMGYwM2Nj
ICBtZm46IDAwMWIwNjVhCihYRU4pICAgZ2ZuOiAwMDBmMDNjZCAgbWZuOiAwMDEwMjVkOAoo
WEVOKSAgIGdmbjogMDAwZjAzY2UgIG1mbjogMDAxYjA2NTkKKFhFTikgICBnZm46IDAwMGYw
M2NmICBtZm46IDAwMTAyNWQ3CihYRU4pICAgZ2ZuOiAwMDBmMDNkMCAgbWZuOiAwMDFiMDY1
OAooWEVOKSAgIGdmbjogMDAwZjAzZDEgIG1mbjogMDAxMDI1ZDYKKFhFTikgICBnZm46IDAw
MGYwM2QyICBtZm46IDAwMWIwNjU3CihYRU4pICAgZ2ZuOiAwMDBmMDNkMyAgbWZuOiAwMDEw
MjVkNQooWEVOKSAgIGdmbjogMDAwZjAzZDQgIG1mbjogMDAxYjA2NTYKKFhFTikgICBnZm46
IDAwMGYwM2Q1ICBtZm46IDAwMTAyNWQ0CihYRU4pICAgZ2ZuOiAwMDBmMDNkNiAgbWZuOiAw
MDFiMDY1NQooWEVOKSAgIGdmbjogMDAwZjAzZDcgIG1mbjogMDAxMDI1ZDMKKFhFTikgICBn
Zm46IDAwMGYwM2Q4ICBtZm46IDAwMWIwNjU0CihYRU4pICAgZ2ZuOiAwMDBmMDNkOSAgbWZu
OiAwMDEwMjVkMgooWEVOKSAgIGdmbjogMDAwZjAzZGEgIG1mbjogMDAxYjA2NTMKKFhFTikg
ICBnZm46IDAwMGYwM2RiICBtZm46IDAwMTAyNWQxCihYRU4pICAgZ2ZuOiAwMDBmMDNkYyAg
bWZuOiAwMDFiMDY1MgooWEVOKSAgIGdmbjogMDAwZjAzZGQgIG1mbjogMDAxMDI1ZDAKKFhF
TikgICBnZm46IDAwMGYwM2RlICBtZm46IDAwMWIwNjUxCihYRU4pICAgZ2ZuOiAwMDBmMDNk
ZiAgbWZuOiAwMDEwMjVjZgooWEVOKSAgIGdmbjogMDAwZjAzZTAgIG1mbjogMDAxYjA2NTAK
KFhFTikgICBnZm46IDAwMGYwM2UxICBtZm46IDAwMTAyNWNlCihYRU4pICAgZ2ZuOiAwMDBm
MDNlMiAgbWZuOiAwMDFiMDY0ZgooWEVOKSAgIGdmbjogMDAwZjAzZTMgIG1mbjogMDAxMDI1
Y2QKKFhFTikgICBnZm46IDAwMGYwM2U0ICBtZm46IDAwMWIwNjRlCihYRU4pICAgZ2ZuOiAw
MDBmMDNlNSAgbWZuOiAwMDEwMjVjYwooWEVOKSAgIGdmbjogMDAwZjAzZTYgIG1mbjogMDAx
YjA2NGQKKFhFTikgICBnZm46IDAwMGYwM2U3ICBtZm46IDAwMTAyNWNiCihYRU4pICAgZ2Zu
OiAwMDBmMDNlOCAgbWZuOiAwMDFiMDY0YwooWEVOKSAgIGdmbjogMDAwZjAzZTkgIG1mbjog
MDAxMDI1Y2EKKFhFTikgICBnZm46IDAwMGYwM2VhICBtZm46IDAwMWIwNjRiCihYRU4pICAg
Z2ZuOiAwMDBmMDNlYiAgbWZuOiAwMDEwMjVjOQooWEVOKSAgIGdmbjogMDAwZjAzZWMgIG1m
bjogMDAxYjA2NGEKKFhFTikgICBnZm46IDAwMGYwM2VkICBtZm46IDAwMTAyNWM4CihYRU4p
ICAgZ2ZuOiAwMDBmMDNlZSAgbWZuOiAwMDFiMDY0OQooWEVOKSAgIGdmbjogMDAwZjAzZWYg
IG1mbjogMDAxMDI1YzcKKFhFTikgICBnZm46IDAwMGYwM2YwICBtZm46IDAwMWIwNjQ4CihY
RU4pICAgZ2ZuOiAwMDBmMDNmMSAgbWZuOiAwMDEwMjVjNgooWEVOKSAgIGdmbjogMDAwZjAz
ZjIgIG1mbjogMDAxYjA2NDcKKFhFTikgICBnZm46IDAwMGYwM2YzICBtZm46IDAwMTAyNWM1
CihYRU4pICAgZ2ZuOiAwMDBmMDNmNCAgbWZuOiAwMDFiMDY0NgooWEVOKSAgIGdmbjogMDAw
ZjAzZjUgIG1mbjogMDAxMDI1YzQKKFhFTikgICBnZm46IDAwMGYwM2Y2ICBtZm46IDAwMWIw
NjQ1CihYRU4pICAgZ2ZuOiAwMDBmMDNmNyAgbWZuOiAwMDEwMjVjMwooWEVOKSAgIGdmbjog
MDAwZjAzZjggIG1mbjogMDAxYjA2NDQKKFhFTikgICBnZm46IDAwMGYwM2Y5ICBtZm46IDAw
MTAyNWMyCihYRU4pICAgZ2ZuOiAwMDBmMDNmYSAgbWZuOiAwMDFiMDY0MwooWEVOKSAgIGdm
bjogMDAwZjAzZmIgIG1mbjogMDAxMDI1YzEKKFhFTikgICBnZm46IDAwMGYwM2ZjICBtZm46
IDAwMWIwNjQyCihYRU4pICAgZ2ZuOiAwMDBmMDNmZCAgbWZuOiAwMDEwMjVjMAooWEVOKSAg
IGdmbjogMDAwZjAzZmUgIG1mbjogMDAxYjA2NDEKKFhFTikgICBnZm46IDAwMGYwM2ZmICBt
Zm46IDAwMTAyMmZmCihYRU4pICAgZ2ZuOiAwMDBmYzAwMCAgbWZuOiAwMDFiMDYzZAooWEVO
KSAgIGdmbjogMDAwZmMwMDEgIG1mbjogMDAxYjA2NDAKKFhFTikgICBnZm46IDAwMGZjMDAy
ICBtZm46IDAwMTA5NmY1CihYRU4pICAgZ2ZuOiAwMDBmYzAwMyAgbWZuOiAwMDFiMDYzZQoo
WEVOKSAgIGdmbjogMDAwZmMwMDQgIG1mbjogMDAxMDk2ZjQKKFhFTikgICBnZm46IDAwMGZj
MDA1ICBtZm46IDAwMTA5NmYzCihYRU4pICAgZ2ZuOiAwMDBmYzAwNiAgbWZuOiAwMDFiMDYz
YwooWEVOKSAgIGdmbjogMDAwZmMwMDcgIG1mbjogMDAxMDk2ZjIKKFhFTikgICBnZm46IDAw
MGZjMDA4ICBtZm46IDAwMWIwNjNiCihYRU4pICAgZ2ZuOiAwMDBmYzAwOSAgbWZuOiAwMDEw
OTZmMQooWEVOKSAgIGdmbjogMDAwZmMwMGEgIG1mbjogMDAxYjA2M2EKKFhFTikgICBnZm46
IDAwMGZjMDBiICBtZm46IDAwMTA5NmYwCihYRU4pICAgZ2ZuOiAwMDBmYzAwYyAgbWZuOiAw
MDFiMDYzOQooWEVOKSAgIGdmbjogMDAwZmMwMGQgIG1mbjogMDAxMDk2ZWYKKFhFTikgICBn
Zm46IDAwMGZjMDBlICBtZm46IDAwMWIwNjM4CihYRU4pICAgZ2ZuOiAwMDBmYzAwZiAgbWZu
OiAwMDEwOTZlZQooWEVOKSAgIGdmbjogMDAwZmMwMTAgIG1mbjogMDAxYjA2MzcKKFhFTikg
ICBnZm46IDAwMGZlZmZhICBtZm46IDAwMDgxMmI5CihYRU4pICAgZ2ZuOiAwMDBmZWZmYiAg
bWZuOiAwMDEwMjc1ZAooWEVOKSAgIGdmbjogMDAwZmVmZmMgIG1mbjogMDAyMThlMTAKKFhF
TikgICBnZm46IDAwMGZlZmZkICBtZm46IDAwMTAyNzVjCihYRU4pICAgZ2ZuOiAwMDBmZWZm
ZSAgbWZuOiAwMDIxOGUwZgooWEVOKSAgIGdmbjogMDAwZmVmZmYgIG1mbjogMDAxMDA0OWYK
KFhFTikgIGdmbjogMDAxMDAwMDAgIG1mbjogMDAxYmE4MDAKKFhFTikgIGdmbjogMDAxMDAy
MDAgIG1mbjogMDAxYmE2MDAKKFhFTikgIGdmbjogMDAxMDA0MDAgIG1mbjogMDAxYmE0MDAK
KFhFTikgIGdmbjogMDAxMDA2MDAgIG1mbjogMDAxYmEyMDAKKFhFTikgIGdmbjogMDAxMDA4
MDAgIG1mbjogMDAxYmEwMDAKKFhFTikgIGdmbjogMDAxMDBhMDAgIG1mbjogMDAxYjllMDAK
KFhFTikgIGdmbjogMDAxMDBjMDAgIG1mbjogMDAxYjljMDAKKFhFTikgIGdmbjogMDAxMDBl
MDAgIG1mbjogMDAxYjlhMDAKKFhFTikgIGdmbjogMDAxMDEwMDAgIG1mbjogMDAxYjk4MDAK
KFhFTikgIGdmbjogMDAxMDEyMDAgIG1mbjogMDAxYjk2MDAKKFhFTikgIGdmbjogMDAxMDE0
MDAgIG1mbjogMDAxYjk0MDAKKFhFTikgIGdmbjogMDAxMDE2MDAgIG1mbjogMDAxYjkyMDAK
KFhFTikgIGdmbjogMDAxMDE4MDAgIG1mbjogMDAxYjkwMDAKKFhFTikgIGdmbjogMDAxMDFh
--------------020406080607020004070107
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--------------020406080607020004070107--


From xen-devel-bounces@lists.xen.org Wed Aug 15 11:20:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 11:20:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1bdg-0003dz-Jx; Wed, 15 Aug 2012 11:19:52 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1T1bde-0003dt-Mv
	for xen-devel@lists.xen.org; Wed, 15 Aug 2012 11:19:51 +0000
Received: from [85.158.139.83:32363] by server-12.bemta-5.messagelabs.com id
	DA/C3-22359-5D58B205; Wed, 15 Aug 2012 11:19:49 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-10.tower-182.messagelabs.com!1345029587!28499498!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzE1NDQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11324 invoked from network); 15 Aug 2012 11:19:49 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Aug 2012 11:19:49 -0000
X-IronPort-AV: E=Sophos;i="4.77,772,1336363200"; d="scan'208";a="205237994"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	15 Aug 2012 07:19:47 -0400
Received: from [10.80.2.76] (10.80.2.76) by FTLPMAILMX01.citrite.net
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0; Wed, 15 Aug 2012
	07:19:47 -0400
Message-ID: <502B85D1.8000606@citrix.com>
Date: Wed, 15 Aug 2012 12:19:45 +0100
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20120428 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Attilio Rao <attilio.rao@citrix.com>
References: <1344947062-4796-1-git-send-email-attilio.rao@citrix.com>
	<1344947062-4796-3-git-send-email-attilio.rao@citrix.com>
	<502A5964.2080509@citrix.com> <502A5CD0.8000201@citrix.com>
In-Reply-To: <502A5CD0.8000201@citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v2 2/2] Xen: Document the semantic of the
 pagetable_reserve PVOPS
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 14/08/12 15:12, Attilio Rao wrote:
> On 14/08/12 14:57, David Vrabel wrote:
>> On 14/08/12 13:24, Attilio Rao wrote:
>> 
>>> The information added on the hook covers: - Native behaviour - Xen 
>>> specific behaviour - Logic behind the Xen-specific behaviour
>>> 
>> These are implementation details and should be documented with the
>>  implementations (if necessary).
>> 
> 
> In this specific case, the implementation details are very valuable for 
> understanding the semantics of the operations, which is why I added them 
> there. I think, at least for this case, this is the best trade-off.

Documenting the implementation details will be useful for reviewing or
refactoring the pv-ops, but I don't think it is useful in the longer term
to include them in the upstream API documentation.

>>> - PVOPS semantic
>>> 
>> This is the interesting stuff.
>> 
>> This particular pvop seems a little odd really.  It might make more
>> sense if it took a third parameter for pgt_buf_top.
> 
> The thing is that this work (documenting PVOPS) should help in 
> understanding the logic behind some PVOPS and possibly
> improve/rework them. For this stage, I agreed with Konrad to keep the
> changes as small as possible. Once the documentation about the
> semantic is in place we can think about ways to improve things more
> effectively (for example, in some cases we may want to rewrite the
> PVOP completely).

After looking at it some more, I think this pv-op is unnecessary. How
about the following patch to just remove it completely?

I've only smoke-tested 32-bit and 64-bit dom0 but I think the reasoning
is sound.

>> "@pagetable_reserve is used to reserve a range of PFNs used for the
>> kernel direct mapping page tables and to clean up any PFNs that ended
>> up not being used for the tables.
>> 
>> It shall reserve the range (start, end] with memblock_reserve(). It
>> shall prepare PFNs in the range (end, pgt_buf_top] for general (non
>> page table) use.
>> 
>> It shall only be called in init_memory_mapping() after the direct 
>> mapping tables have been constructed."
>> 
>> Having said that, I couldn't immediately see where pages in (end, 
>> pgt_buf_top] were getting set RO.  Can you point me to where it's 
>> done?
>> 
> 
> As mentioned in the comment, please look at xen_set_pte_init().

xen_set_pte_init() only ensures it doesn't set the PTE as writable if it
is already present and read-only.

David

8<----------------------
x86: remove x86_init.mapping.pagetable_reserve paravirt op

The x86_init.mapping.pagetable_reserve paravirt op is used by Xen
guests to set the writable flag for the mapping of (pgt_buf_end,
pgt_buf_top].  This is not necessary: these pages are never set
read-only because they have never contained page tables.

When running as a Xen guest, the initial page tables are provided by
Xen (these are reserved with memblock_reserve() in
xen_setup_kernel_pagetable()) and constructed in brk space (for 32-bit
guests) or in the kernel's .data section (for 64-bit guests, see
head_64.S).

Since these are all marked as reserved, (pgt_buf_start, pgt_buf_top]
does not overlap with them and the mappings for these PFNs will be
read-write.

Since Xen doesn't need to change the mapping, its implementation
becomes the same as the native one and we can simply remove this pv-op
completely.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 arch/x86/include/asm/pgtable_types.h |    1 -
 arch/x86/include/asm/x86_init.h      |   12 ------------
 arch/x86/kernel/x86_init.c           |    4 ----
 arch/x86/mm/init.c                   |   22 +++-------------------
 arch/x86/xen/mmu.c                   |   19 ++-----------------
 5 files changed, 5 insertions(+), 53 deletions(-)

diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index 013286a..0a11293 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -301,7 +301,6 @@ int phys_mem_access_prot_allowed(struct file *file, unsigned long pfn,
 /* Install a pte for a particular vaddr in kernel space. */
 void set_pte_vaddr(unsigned long vaddr, pte_t pte);
 
-extern void native_pagetable_reserve(u64 start, u64 end);
 #ifdef CONFIG_X86_32
 extern void native_pagetable_setup_start(pgd_t *base);
 extern void native_pagetable_setup_done(pgd_t *base);
diff --git a/arch/x86/include/asm/x86_init.h b/arch/x86/include/asm/x86_init.h
index 38155f6..b527dd4 100644
--- a/arch/x86/include/asm/x86_init.h
+++ b/arch/x86/include/asm/x86_init.h
@@ -69,17 +69,6 @@ struct x86_init_oem {
 };
 
 /**
- * struct x86_init_mapping - platform specific initial kernel pagetable setup
- * @pagetable_reserve:	reserve a range of addresses for kernel pagetable usage
- *
- * For more details on the purpose of this hook, look in
- * init_memory_mapping and the commit that added it.
- */
-struct x86_init_mapping {
-	void (*pagetable_reserve)(u64 start, u64 end);
-};
-
-/**
  * struct x86_init_paging - platform specific paging functions
  * @pagetable_setup_start:	platform specific pre paging_init() call
  * @pagetable_setup_done:	platform specific post paging_init() call
@@ -135,7 +124,6 @@ struct x86_init_ops {
 	struct x86_init_mpparse		mpparse;
 	struct x86_init_irqs		irqs;
 	struct x86_init_oem		oem;
-	struct x86_init_mapping		mapping;
 	struct x86_init_paging		paging;
 	struct x86_init_timers		timers;
 	struct x86_init_iommu		iommu;
diff --git a/arch/x86/kernel/x86_init.c b/arch/x86/kernel/x86_init.c
index 9f3167e..040c05f 100644
--- a/arch/x86/kernel/x86_init.c
+++ b/arch/x86/kernel/x86_init.c
@@ -63,10 +63,6 @@ struct x86_init_ops x86_init __initdata = {
 		.banner			= default_banner,
 	},
 
-	.mapping = {
-		.pagetable_reserve		= native_pagetable_reserve,
-	},
-
 	.paging = {
 		.pagetable_setup_start	= native_pagetable_setup_start,
 		.pagetable_setup_done	= native_pagetable_setup_done,
diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index e0e6990..c449873 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -90,11 +90,6 @@ static void __init find_early_table_space(struct map_range *mr, unsigned long en
 		(pgt_buf_top << PAGE_SHIFT) - 1);
 }
 
-void __init native_pagetable_reserve(u64 start, u64 end)
-{
-	memblock_reserve(start, end - start);
-}
-
 #ifdef CONFIG_X86_32
 #define NR_RANGE_MR 3
 #else /* CONFIG_X86_64 */
@@ -283,22 +278,11 @@ unsigned long __init_refok init_memory_mapping(unsigned long start,
 
 	/*
 	 * Reserve the kernel pagetable pages we used (pgt_buf_start -
-	 * pgt_buf_end) and free the other ones (pgt_buf_end - pgt_buf_top)
-	 * so that they can be reused for other purposes.
-	 *
-	 * On native it just means calling memblock_reserve, on Xen it also
-	 * means marking RW the pagetable pages that we allocated before
-	 * but that haven't been used.
-	 *
-	 * In fact on xen we mark RO the whole range pgt_buf_start -
-	 * pgt_buf_top, because we have to make sure that when
-	 * init_memory_mapping reaches the pagetable pages area, it maps
-	 * RO all the pagetable pages, including the ones that are beyond
-	 * pgt_buf_end at that time.
+	 * pgt_buf_end).
 	 */
 	if (!after_bootmem && pgt_buf_end > pgt_buf_start)
-		x86_init.mapping.pagetable_reserve(PFN_PHYS(pgt_buf_start),
-				PFN_PHYS(pgt_buf_end));
+		memblock_reserve(PFN_PHYS(pgt_buf_start),
+				 PFN_PHYS(pgt_buf_end) - PFN_PHYS(pgt_buf_start));
 
 	if (!after_bootmem)
 		early_memtest(start, end);
diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index b65a761..e55dfc0 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -1178,20 +1178,6 @@ static void __init xen_pagetable_setup_start(pgd_t *base)
 {
 }
 
-static __init void xen_mapping_pagetable_reserve(u64 start, u64 end)
-{
-	/* reserve the range used */
-	native_pagetable_reserve(start, end);
-
-	/* set as RW the rest */
-	printk(KERN_DEBUG "xen: setting RW the range %llx - %llx\n", end,
-			PFN_PHYS(pgt_buf_top));
-	while (end < PFN_PHYS(pgt_buf_top)) {
-		make_lowmem_page_readwrite(__va(end));
-		end += PAGE_SIZE;
-	}
-}
-
 static void xen_post_allocator_init(void);
 
 static void __init xen_pagetable_setup_done(pgd_t *base)
@@ -2067,7 +2053,6 @@ static const struct pv_mmu_ops xen_mmu_ops __initconst = {
 
 void __init xen_init_mmu_ops(void)
 {
-	x86_init.mapping.pagetable_reserve = xen_mapping_pagetable_reserve;
 	x86_init.paging.pagetable_setup_start = xen_pagetable_setup_start;
 	x86_init.paging.pagetable_setup_done = xen_pagetable_setup_done;
 	pv_mmu_ops = xen_mmu_ops;
-- 
1.7.2.5



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 15 12:11:07 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 12:11:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1cQX-00047H-Ha; Wed, 15 Aug 2012 12:10:21 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T1cQW-00047C-Ec
	for xen-devel@lists.xensource.com; Wed, 15 Aug 2012 12:10:20 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1345032612!2016652!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkxOTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12972 invoked from network); 15 Aug 2012 12:10:12 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Aug 2012 12:10:12 -0000
X-IronPort-AV: E=Sophos;i="4.77,772,1336348800"; d="scan'208";a="14019816"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	15 Aug 2012 12:10:11 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 15 Aug 2012 13:10:12 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1T1cQN-0004dX-G6;
	Wed, 15 Aug 2012 12:10:11 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1T1cQN-0007WD-93;
	Wed, 15 Aug 2012 13:10:11 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13603-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 15 Aug 2012 13:10:11 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13603: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13603 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13603/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13600
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13600
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13600
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13600

Tests which did not succeed, but are not blocking:
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  af7143d97fa2
baseline version:
 xen                  dc56a9defa30

------------------------------------------------------------
People who touched revisions under test:
  Andres Lagar-Cavilla <andres@lagarcavilla.org>
  George Dunlap <george.dunlap@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=af7143d97fa2
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable af7143d97fa2
+ branch=xen-unstable
+ revision=af7143d97fa2
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/staging/xen-unstable.hg
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-unstable.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-unstable.hg
+ hg push -r af7143d97fa2 ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 1 changesets with 1 changes to 1 files
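The trace above shows the harness's take-the-lock-by-re-exec idiom: cri-lock-repos checks an environment marker and, if the repository lock is not yet held, re-execs the whole script under with-lock-ex so everything after that point runs with the lock held. A minimal sketch of the same idiom using flock(1) in place of with-lock-ex (all names here are illustrative, not osstest's own):

```shell
# Illustrative sketch of the re-exec-under-lock pattern; DEMO_LOCK_LOCKED
# and the paths are invented names, and flock(1) stands in for with-lock-ex.
cat > /tmp/demo-push.sh <<'EOF'
#!/bin/sh
LOCKFILE=/tmp/demo-repos.lock
if [ "x$DEMO_LOCK_LOCKED" != "x$LOCKFILE" ]; then
    # Not holding the lock yet: re-exec this script under flock(1),
    # with a marker in the environment so the child skips this branch.
    DEMO_LOCK_LOCKED=$LOCKFILE exec flock -x "$LOCKFILE" "$0" "$@"
fi
echo "lock held; pushing $1"
EOF
chmod +x /tmp/demo-push.sh
/tmp/demo-push.sh af7143d97fa2
```

The point of the marker variable is that the re-exec'd copy of the script is byte-identical to the original; only the environment tells it that the lock is already held, which is exactly the `OSSTEST_REPOS_LOCK_LOCKED` comparison visible in the trace.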

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 15 12:32:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 12:32:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1clr-0004KG-MD; Wed, 15 Aug 2012 12:32:23 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ben.guthro@gmail.com>) id 1T1clq-0004KB-3L
	for xen-devel@lists.xen.org; Wed, 15 Aug 2012 12:32:22 +0000
Received: from [85.158.143.35:63015] by server-1.bemta-4.messagelabs.com id
	D8/F3-07754-5D69B205; Wed, 15 Aug 2012 12:32:21 +0000
X-Env-Sender: ben.guthro@gmail.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1345033933!13566973!1
X-Originating-IP: [209.85.213.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25116 invoked from network); 15 Aug 2012 12:32:14 -0000
Received: from mail-yx0-f173.google.com (HELO mail-yx0-f173.google.com)
	(209.85.213.173)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Aug 2012 12:32:14 -0000
Received: by yenm4 with SMTP id m4so1873270yen.32
	for <xen-devel@lists.xen.org>; Wed, 15 Aug 2012 05:32:12 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=y9OoC22ViWo8YE1EuhLrLnh2XwwolVkwEqRK3ewGLkY=;
	b=RckG4YvK4VCXwhmniyyhokIYAZ+GzPotbNMfLwZMaku5pd48rw9vml68hPgd8OU2RE
	+4gkwagsZwtAf2lgOm41nUw6fY/zJR8R+t1HX+P/ZV8GKLk2ycGAX2p9KcXStn1T63+X
	3s9s3jXEAcv3BrFmy1T2VfajPji9ic5Bx77iMx2svMR2rDhTd/nXVaaOOzdZmLYVJ1ja
	MX4FE8UvsHismPM9R15a4Zdo2b6P0NgFLCbYJNrE1NNCrgKHn3GR5E/z+hOVWG3BwEzX
	XWeahLaAqYkd69lDgoPaIMIEdybXDfFm4NGFSS7KjoeY0LBpxg0DyI+xmr5LfTsLc8iO
	+6Hg==
MIME-Version: 1.0
Received: by 10.50.87.198 with SMTP id ba6mr18531090igb.22.1345033932451; Wed,
	15 Aug 2012 05:32:12 -0700 (PDT)
Received: by 10.64.33.15 with HTTP; Wed, 15 Aug 2012 05:32:12 -0700 (PDT)
In-Reply-To: <CAOvdn6Ww5oEj6Xg=XT4dWCYWUBuOdPqLz-CqNjz4HmCScE+==g@mail.gmail.com>
References: <CAOvdn6WwgBD-2yf1uu3m9-16=Tb5KUrybXf2KibtnxKbP7YtwA@mail.gmail.com>
	<CAOvdn6UBW-yTOMd3YSf5M9JiHLRivyaKL-qzcKG8epszqaKmGg@mail.gmail.com>
	<20120807163320.GC15053@phenom.dumpdata.com>
	<CAOvdn6XcRGKtJxGwJO8J+ZBJr89wfTLc1woW5rhtMM540H9h1w@mail.gmail.com>
	<CAOvdn6XUoSeGoo7X=AJ1kpDxZB9HZEv+TJ4v-=FHcyR4=CHK-w@mail.gmail.com>
	<502240F302000078000937A6@nat28.tlf.novell.com>
	<CAOvdn6WN3kxC8Bb9g6MpfLULu47EGSGCtajPc-SFeJ-8VSUQDQ@mail.gmail.com>
	<CAOvdn6Xk1inm1hU55i0phU2qvSWyJLNoyDdn43biBAY2OewPtw@mail.gmail.com>
	<5023F5400200007800093F4E@nat28.tlf.novell.com>
	<CAOvdn6U-f3C4U8xVPvDyRFkFFLkbfiCXg_n+9zgA9V5u+q5jfw@mail.gmail.com>
	<5023F8B80200007800093F8C@nat28.tlf.novell.com>
	<CAOvdn6XKHfete_+=Hkvt20G7_cJ-4i8+9j9YD30OxzQ16Ru-+g@mail.gmail.com>
	<5024CB720200007800094124@nat28.tlf.novell.com>
	<CAOvdn6VVJdhWVYa-gNVt3euE4AbGi1TijUCUd1wGzv0SCWEUYA@mail.gmail.com>
	<CAOvdn6USnG9Y4MNBhrDZmob0oJ8BgFfKTvFEhcKbXDxNmzfZ-Q@mail.gmail.com>
	<502B75BA0200007800094FD4@nat28.tlf.novell.com>
	<CAOvdn6Ww5oEj6Xg=XT4dWCYWUBuOdPqLz-CqNjz4HmCScE+==g@mail.gmail.com>
Date: Wed, 15 Aug 2012 08:32:12 -0400
X-Google-Sender-Auth: dJG7PMlrSWvrmNZrl9VCbvuPjsY
Message-ID: <CAOvdn6Wsjds_dmZ0dwDTYmD-o-OqjBH5-b9mitAuUADE+A4_ag@mail.gmail.com>
From: Ben Guthro <ben@guthro.net>
To: Jan Beulich <JBeulich@suse.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, john.baboval@citrix.com,
	Thomas Goetz <thomas.goetz@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen4.2 S3 regression?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Aug 15, 2012 at 6:32 AM, Ben Guthro <ben@guthro.net> wrote:
> I will retest this morning when I get to work with this changeset, and
> its parent, to better verify my results.

I retested this, and am more convinced now that this introduced a failure.
(see test results below)

> I'll also try the evtchn_move_pirqs() removal against the changeset
> above, to see if my results differ from when I did the same test
> against the tip.

21624:b9c541d9c138
12 successful suspend / resume cycles

21625:0695a5cdcb42
2 successful suspend / resume cycles - failure on 3rd (ahci)

21625:0695a5cdcb42 + evtchn_move_pirqs() removal:
12 successful suspend / resume cycles

This was encouraging, so I tried the same change against the tree
tip...unfortunately that didn't go as well.

tip + evtchn_move_pirqs() removal:
did not resume from the first suspend.
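For reference, cycle counts like the ones above can be driven by a simple soak loop. This is a hedged sketch, not the procedure actually used in this thread: on real hardware the suspend step would be something like `rtcwake -m mem -s 20`, which here is a parameter defaulting to `true` so the loop itself can run anywhere.

```shell
# Sketch of a suspend/resume soak loop (illustrative, not the actual
# test setup used above). The suspend command is a parameter so the
# loop can be exercised without hardware; on a real box it would be
# e.g. "rtcwake -m mem -s 20".
soak() {
    cycles=$1
    suspend_cmd=${2:-true}   # "true" stands in for the real suspend command
    i=1
    while [ "$i" -le "$cycles" ]; do
        $suspend_cmd || { echo "failure on cycle $i"; return 1; }
        i=$((i + 1))
    done
    echo "$cycles successful suspend / resume cycles"
}

soak 12   # with real suspend: soak 12 "rtcwake -m mem -s 20"
```

Stopping at the first failed cycle, as the loop does, matches how the failures above were reported ("failure on 3rd").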

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 15 12:40:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 12:40:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1ctR-0004Up-JY; Wed, 15 Aug 2012 12:40:13 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <joseph.glanville@orionvm.com.au>) id 1T1ctQ-0004Uk-Eb
	for xen-devel@lists.xen.org; Wed, 15 Aug 2012 12:40:12 +0000
Received: from [85.158.143.35:12349] by server-3.bemta-4.messagelabs.com id
	DE/06-09529-BA89B205; Wed, 15 Aug 2012 12:40:11 +0000
X-Env-Sender: joseph.glanville@orionvm.com.au
X-Msg-Ref: server-10.tower-21.messagelabs.com!1345034408!10623683!1
X-Originating-IP: [209.85.214.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12443 invoked from network); 15 Aug 2012 12:40:10 -0000
Received: from mail-ob0-f173.google.com (HELO mail-ob0-f173.google.com)
	(209.85.214.173)
	by server-10.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Aug 2012 12:40:10 -0000
Received: by obbta14 with SMTP id ta14so2645111obb.32
	for <xen-devel@lists.xen.org>; Wed, 15 Aug 2012 05:40:08 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=mime-version:x-originating-ip:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type:x-gm-message-state;
	bh=Qlk+VF/n8to3MFcyTEBkAMPd+uXptVnftKX0f79sOUE=;
	b=RJeBntMig7qSiAAzaKCiLNomQdkHep9fJ8tUjhkvLJuf4viTqxiJNhoCMsT4uDN20Q
	ggHbW0IChNFDiZqrLvPC5UEExqC66aXu92oM/EEeO0oRvArVB4XX7y/K8B8Z4qbLasRv
	EdYONsQk4LVo2H0RhfjEvtIzr1l532QRXMwwCyAKKOXQjjLFoqOf+6XPRwU6MvKQzLqi
	/lkk1K+G5qLITOA/wVGaHTHfjbWuCnUFiUoshOoYB+uFfAlI2gb9Du03l8w9BBa2u2YJ
	+r0+0ggi6VsfcfuePbVeRh20GG1EyFkRRvkMpu25KmWcPOCN1kSFoVqoXZ8EjCgJKZv5
	zp5w==
MIME-Version: 1.0
Received: by 10.182.40.6 with SMTP id t6mr1494029obk.100.1345034408336; Wed,
	15 Aug 2012 05:40:08 -0700 (PDT)
Received: by 10.182.80.200 with HTTP; Wed, 15 Aug 2012 05:40:08 -0700 (PDT)
X-Originating-IP: [59.167.234.130]
In-Reply-To: <6035A0D088A63A46850C3988ED045A4B299F80E1@BITCOM1.int.sbss.com.au>
References: <6035A0D088A63A46850C3988ED045A4B299F67FA@BITCOM1.int.sbss.com.au>
	<CAOzFzEhna3CaBE28aHVX_ZoNLDEa6AhArHPcB9240Ni4jh9PYA@mail.gmail.com>
	<6035A0D088A63A46850C3988ED045A4B299F6A9B@BITCOM1.int.sbss.com.au>
	<CAOzFzEhF+Pb89PNoibBc-9_Db6OUiyG5R8mKkAPi5pns4+CsAg@mail.gmail.com>
	<20120813213455.GC6887@google.com>
	<6035A0D088A63A46850C3988ED045A4B299F80E1@BITCOM1.int.sbss.com.au>
Date: Wed, 15 Aug 2012 22:40:08 +1000
Message-ID: <CAOzFzEitP95cy4LKJD+H1ffBLP_OjxWPTYCEd=XNkb-i5Mz39w@mail.gmail.com>
From: Joseph Glanville <joseph.glanville@orionvm.com.au>
To: James Harper <james.harper@bendigoit.com.au>
X-Gm-Message-State: ALoCoQn9d99B+bUxoXusKmZ2OfG+zBs9q5k1BBuxJTPLhVw0UOiPkkWCN42lrbxdzC991mZSpHAY
Cc: Kent Overstreet <koverstreet@google.com>,
	"linux-bcache@vger.kernel.org" <linux-bcache@vger.kernel.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] blkback and bcache
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi James, Kent.

I can confirm this is the issue I was seeing, thanks for locating the
blkback issue!
Is there anything xen-devel should be doing about this? I wouldn't
expect blkback to care about block size...

Joseph.

On 14 August 2012 09:30, James Harper <james.harper@bendigoit.com.au> wrote:
>>
>> Just mentioned this in the other thread, but if this is due to the 4k blocksize -
>> that's easy to fix: just format with 512 byte blocksize
>>
>> make-bcache --block 512
>>
>> Maybe I should change the default.
>
> I suggest making the default 512, but also printing a warning if the user didn't explicitly set it, e.g. "Block size not set - defaulting to 512 bytes"
>
> James
> --
> To unsubscribe from this list: send the line "unsubscribe linux-bcache" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
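[Editor's note: for reference, the workaround Kent describes above looks roughly like the following; the device paths are placeholders, and the claim that blkback trips over 4k logical block sizes is taken from this thread, not verified here.]

```shell
# Format the bcache devices with an explicit 512-byte block size so the
# resulting /dev/bcacheN presents 512-byte sectors, which the version of
# blkback discussed in this thread reportedly requires.
# /dev/sdb and /dev/nvme0n1 are illustrative placeholders.
make-bcache --block 512 -B /dev/sdb        # backing device
make-bcache --block 512 -C /dev/nvme0n1    # cache (SSD) device
```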



-- 
CTO | Orion Virtualisation Solutions | www.orionvm.com.au
Phone: 1300 56 99 52 | Mobile: 0428 754 846

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 15 12:57:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 12:57:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1dAA-0004ex-6x; Wed, 15 Aug 2012 12:57:30 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T1dA8-0004es-Fg
	for xen-devel@lists.xen.org; Wed, 15 Aug 2012 12:57:28 +0000
Received: from [85.158.139.83:46558] by server-3.bemta-5.messagelabs.com id
	A8/73-27237-7BC9B205; Wed, 15 Aug 2012 12:57:27 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-4.tower-182.messagelabs.com!1345035447!25550501!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5235 invoked from network); 15 Aug 2012 12:57:27 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-4.tower-182.messagelabs.com with SMTP;
	15 Aug 2012 12:57:27 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 15 Aug 2012 13:57:26 +0100
Message-Id: <502BB8FD02000078000951E0@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Wed, 15 Aug 2012 13:58:05 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ben Guthro" <ben@guthro.net>
References: <CAOvdn6WwgBD-2yf1uu3m9-16=Tb5KUrybXf2KibtnxKbP7YtwA@mail.gmail.com>
	<CAOvdn6UBW-yTOMd3YSf5M9JiHLRivyaKL-qzcKG8epszqaKmGg@mail.gmail.com>
	<20120807163320.GC15053@phenom.dumpdata.com>
	<CAOvdn6XcRGKtJxGwJO8J+ZBJr89wfTLc1woW5rhtMM540H9h1w@mail.gmail.com>
	<CAOvdn6XUoSeGoo7X=AJ1kpDxZB9HZEv+TJ4v-=FHcyR4=CHK-w@mail.gmail.com>
	<502240F302000078000937A6@nat28.tlf.novell.com>
	<CAOvdn6WN3kxC8Bb9g6MpfLULu47EGSGCtajPc-SFeJ-8VSUQDQ@mail.gmail.com>
	<CAOvdn6Xk1inm1hU55i0phU2qvSWyJLNoyDdn43biBAY2OewPtw@mail.gmail.com>
	<5023F5400200007800093F4E@nat28.tlf.novell.com>
	<CAOvdn6U-f3C4U8xVPvDyRFkFFLkbfiCXg_n+9zgA9V5u+q5jfw@mail.gmail.com>
	<5023F8B80200007800093F8C@nat28.tlf.novell.com>
	<CAOvdn6XKHfete_+=Hkvt20G7_cJ-4i8+9j9YD30OxzQ16Ru-+g@mail.gmail.com>
	<5024CB720200007800094124@nat28.tlf.novell.com>
	<CAOvdn6VVJdhWVYa-gNVt3euE4AbGi1TijUCUd1wGzv0SCWEUYA@mail.gmail.com>
	<CAOvdn6USnG9Y4MNBhrDZmob0oJ8BgFfKTvFEhcKbXDxNmzfZ-Q@mail.gmail.com>
	<502B75BA0200007800094FD4@nat28.tlf.novell.com>
	<CAOvdn6Ww5oEj6Xg=XT4dWCYWUBuOdPqLz-CqNjz4HmCScE+==g@mail.gmail.com>
	<CAOvdn6Wsjds_dmZ0dwDTYmD-o-OqjBH5-b9mitAuUADE+A4_ag@mail.gmail.com>
In-Reply-To: <CAOvdn6Wsjds_dmZ0dwDTYmD-o-OqjBH5-b9mitAuUADE+A4_ag@mail.gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, john.baboval@citrix.com,
	Thomas Goetz <thomas.goetz@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen4.2 S3 regression?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 15.08.12 at 14:32, Ben Guthro <ben@guthro.net> wrote:
> This was encouraging, so I tried the same change against the tree
> tip...unfortunately that didn't go as well.
> 
> tip + evtchn_move_pirqs() removal:
> did not resume from the first suspend.

Any logs of this (i.e. indications of what's going wrong - still
the same AHCI not working, but else apparently fine)?

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 15 13:12:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 13:12:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1dNv-0004rx-In; Wed, 15 Aug 2012 13:11:43 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ben.guthro@gmail.com>) id 1T1dNu-0004rs-5M
	for xen-devel@lists.xen.org; Wed, 15 Aug 2012 13:11:42 +0000
Received: from [85.158.138.51:18364] by server-2.bemta-3.messagelabs.com id
	97/F8-17748-D00AB205; Wed, 15 Aug 2012 13:11:41 +0000
X-Env-Sender: ben.guthro@gmail.com
X-Msg-Ref: server-13.tower-174.messagelabs.com!1345036299!9729417!1
X-Originating-IP: [209.85.216.52]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21898 invoked from network); 15 Aug 2012 13:11:40 -0000
Received: from mail-qa0-f52.google.com (HELO mail-qa0-f52.google.com)
	(209.85.216.52)
	by server-13.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Aug 2012 13:11:40 -0000
Received: by qabg14 with SMTP id g14so1351594qab.11
	for <xen-devel@lists.xen.org>; Wed, 15 Aug 2012 06:11:39 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=G+6dTVPuAq7Oqzfn38wxbqJjM3zMY5OFV4xnrfxSoOE=;
	b=ZDlOpmJnc1PSTeFbrVVAAdE02IytKpD7b/2sasKUEQ24pIO2L4jM/09vadNNhfWVdc
	lxIQeV+P10W8GWvNUw5qO2WnhbzNs5Gd8ZLp7Tksv4R5jpfWLTY9G0vHRLerS/uxMPtP
	IqxmpLlAUzUEe/Homo8CKR3Cp7sMPhz3tp4olx9msTkwVDKnxwY3L6Ci3veA83c3VEfw
	1iNEAOA/f5pnyyNN1wsQRnHMyNkrfktMDeOw7QZuks3VexfNeFq5LfJvQ+srnbeu/xR5
	4FNNyc74uToDW4WKGzWI1AGMSCld0o8fiNaURfrH7qQ7jhXh7o8yJfIsdTEqb7KqYkJA
	ik9w==
MIME-Version: 1.0
Received: by 10.50.186.196 with SMTP id fm4mr18538576igc.1.1345036299178; Wed,
	15 Aug 2012 06:11:39 -0700 (PDT)
Received: by 10.64.33.15 with HTTP; Wed, 15 Aug 2012 06:11:38 -0700 (PDT)
In-Reply-To: <502BB8FD02000078000951E0@nat28.tlf.novell.com>
References: <CAOvdn6WwgBD-2yf1uu3m9-16=Tb5KUrybXf2KibtnxKbP7YtwA@mail.gmail.com>
	<CAOvdn6UBW-yTOMd3YSf5M9JiHLRivyaKL-qzcKG8epszqaKmGg@mail.gmail.com>
	<20120807163320.GC15053@phenom.dumpdata.com>
	<CAOvdn6XcRGKtJxGwJO8J+ZBJr89wfTLc1woW5rhtMM540H9h1w@mail.gmail.com>
	<CAOvdn6XUoSeGoo7X=AJ1kpDxZB9HZEv+TJ4v-=FHcyR4=CHK-w@mail.gmail.com>
	<502240F302000078000937A6@nat28.tlf.novell.com>
	<CAOvdn6WN3kxC8Bb9g6MpfLULu47EGSGCtajPc-SFeJ-8VSUQDQ@mail.gmail.com>
	<CAOvdn6Xk1inm1hU55i0phU2qvSWyJLNoyDdn43biBAY2OewPtw@mail.gmail.com>
	<5023F5400200007800093F4E@nat28.tlf.novell.com>
	<CAOvdn6U-f3C4U8xVPvDyRFkFFLkbfiCXg_n+9zgA9V5u+q5jfw@mail.gmail.com>
	<5023F8B80200007800093F8C@nat28.tlf.novell.com>
	<CAOvdn6XKHfete_+=Hkvt20G7_cJ-4i8+9j9YD30OxzQ16Ru-+g@mail.gmail.com>
	<5024CB720200007800094124@nat28.tlf.novell.com>
	<CAOvdn6VVJdhWVYa-gNVt3euE4AbGi1TijUCUd1wGzv0SCWEUYA@mail.gmail.com>
	<CAOvdn6USnG9Y4MNBhrDZmob0oJ8BgFfKTvFEhcKbXDxNmzfZ-Q@mail.gmail.com>
	<502B75BA0200007800094FD4@nat28.tlf.novell.com>
	<CAOvdn6Ww5oEj6Xg=XT4dWCYWUBuOdPqLz-CqNjz4HmCScE+==g@mail.gmail.com>
	<CAOvdn6Wsjds_dmZ0dwDTYmD-o-OqjBH5-b9mitAuUADE+A4_ag@mail.gmail.com>
	<502BB8FD02000078000951E0@nat28.tlf.novell.com>
Date: Wed, 15 Aug 2012 09:11:38 -0400
X-Google-Sender-Auth: jpgNaXuhmLoG_P2u7oG7dUKU2yY
Message-ID: <CAOvdn6VD6bp13F-tvYdkBsj4hkhE7PTGZrapndtdHrDKeZqccg@mail.gmail.com>
From: Ben Guthro <ben@guthro.net>
To: Jan Beulich <JBeulich@suse.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, john.baboval@citrix.com,
	Thomas Goetz <thomas.goetz@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen4.2 S3 regression?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Aug 15, 2012 at 8:58 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>>> On 15.08.12 at 14:32, Ben Guthro <ben@guthro.net> wrote:
>> This was encouraging, so I tried the same change against the tree
>> tip...unfortunately that didn't go as well.
>>
>> tip + evtchn_move_pirqs() removal:
>> did not resume from the first suspend.
>
> Any logs of this (i.e. indications of what's going wrong - still
> the same AHCI not working, but else apparently fine)?

This is a bit strange, in that the observed behavior changes when I am
logging to the serial connection.

When I am logging to serial, the failure is the same as before -
The first suspend / resume works -
The second fails with AHCI not working

However, when I am not logging to serial - the system goes down, but
never comes back up. I cannot ssh in, and no graphics are displayed on
the screen. My only recourse is to hard power cycle the machine.

This, of course, makes collecting logs difficult.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 15 13:50:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 13:50:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1dzG-0005Pa-Bi; Wed, 15 Aug 2012 13:50:18 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1T1dzF-0005PR-43
	for xen-devel@lists.xen.org; Wed, 15 Aug 2012 13:50:17 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1345038609!9448680!1
X-Originating-IP: [74.125.83.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22238 invoked from network); 15 Aug 2012 13:50:10 -0000
Received: from mail-ee0-f45.google.com (HELO mail-ee0-f45.google.com)
	(74.125.83.45)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Aug 2012 13:50:10 -0000
Received: by eeke53 with SMTP id e53so497810eek.32
	for <xen-devel@lists.xen.org>; Wed, 15 Aug 2012 06:50:08 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type
	:content-transfer-encoding;
	bh=4HtTRmzVUmMolTjGyF94rarT2TaQk67RqOLIolvcuug=;
	b=iyo9e9UGlItdGsY+flCy0CKCLzQbu7XLXXPmmR8XB7gboRUtS6enMtgevXgZzv/9fV
	taq5qQyX6Bf4XOWFqxNgzO+esOu5MCTaY4K7X9MJg0p5o6vvpXPMDWXa2HDlgUjcjQ6A
	aCH4XP/qNkSp+CKFEbBbTNVDYDPEcTYVFdNdv88TKEcB5bEhmPojs3OdMDAEUzwhYCRH
	NNQ3u1qJjtvvY3QPu1DxyY24uD5qDmYBIvVYojlZ2qqUZG84YBlaLgeX/+VcsHWfVYcS
	b1pcVpqVRN04qb5BJpr6gMZ2WsCdhMVz2bUUOduSYVO14Odr/PE5sbaIeFn1CxmrfzE5
	EGTg==
MIME-Version: 1.0
Received: by 10.14.212.72 with SMTP id x48mr24264795eeo.40.1345038608549; Wed,
	15 Aug 2012 06:50:08 -0700 (PDT)
Received: by 10.14.215.195 with HTTP; Wed, 15 Aug 2012 06:50:08 -0700 (PDT)
In-Reply-To: <502A2C590200007800094AF0@nat28.tlf.novell.com>
References: <502A2C590200007800094AF0@nat28.tlf.novell.com>
Date: Wed, 15 Aug 2012 14:50:08 +0100
X-Google-Sender-Auth: u1ropV3zD_hzAw_EQsTtQc9qTxg
Message-ID: <CAFLBxZbB0YLYcbGEPrLab_RS2Y9yjAbTr287yYxh58-PDS24oQ@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Cc: Juergen Gross <juergen.gross@ts.fujitsu.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 1/2] x86/PoD: prevent guest from being
 destroyed upon early access to its memory
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Aug 14, 2012 at 9:45 AM, Jan Beulich <JBeulich@suse.com> wrote:
> x86/PoD: prevent guest from being destroyed upon early access to its memory
>
> When an external agent (e.g. a monitoring daemon) happens to access the
> memory of a PoD guest prior to setting the PoD target, that access must
> fail for there not being any page in the PoD cache, and only the space
> above the low 2Mb gets scanned for victim pages (while only the low 2Mb
> got real pages populated so far).
>
> To accommodate for this
> - set the PoD target first
> - do all physmap population in PoD mode (i.e. not just large [2Mb or
>   1Gb] pages)
> - slightly lift the restrictions enforced by p2m_pod_set_mem_target()
>   to accommodate for the changed tools behavior
>
> Tested-by: Jürgen Groß <juergen.gross@ts.fujitsu.com>
>            (in a 4.0.x based incarnation)
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: George Dunlap <george.dunlap@eu.citrix.com>

However, my "hg qpush" chokes on the German characters in Juergen's name...

 -George

>
> --- a/tools/libxc/xc_hvm_build_x86.c
> +++ b/tools/libxc/xc_hvm_build_x86.c
> @@ -160,7 +160,7 @@ static int setup_guest(xc_interface *xch
>      int pod_mode = 0;
>
>      if ( nr_pages > target_pages )
> -        pod_mode = 1;
> +        pod_mode = XENMEMF_populate_on_demand;
>
>      memset(&elf, 0, sizeof(elf));
>      if ( elf_init(&elf, image, image_size) != 0 )
> @@ -197,6 +197,22 @@ static int setup_guest(xc_interface *xch
>      for ( i = mmio_start >> PAGE_SHIFT; i < nr_pages; i++ )
>          page_array[i] += mmio_size >> PAGE_SHIFT;
>
> +    if ( pod_mode )
> +    {
> +        /*
> +         * Subtract 0x20 from target_pages for the VGA "hole".  Xen will
> +         * adjust the PoD cache size so that domain tot_pages will be
> +         * target_pages - 0x20 after this call.
> +         */
> +        rc = xc_domain_set_pod_target(xch, dom, target_pages - 0x20,
> +                                      NULL, NULL, NULL);
> +        if ( rc != 0 )
> +        {
> +            PERROR("Could not set PoD target for HVM guest.\n");
> +            goto error_out;
> +        }
> +    }
> +
>      /*
>       * Allocate memory for HVM guest, skipping VGA hole 0xA0000-0xC0000.
>       *
> @@ -208,7 +224,7 @@ static int setup_guest(xc_interface *xch
>       * ensure that we can be preempted and hence dom0 remains responsive.
>       */
>      rc = xc_domain_populate_physmap_exact(
> -        xch, dom, 0xa0, 0, 0, &page_array[0x00]);
> +        xch, dom, 0xa0, 0, pod_mode, &page_array[0x00]);
>      cur_pages = 0xc0;
>      stat_normal_pages = 0xc0;
>      while ( (rc == 0) && (nr_pages > cur_pages) )
> @@ -247,8 +263,7 @@ static int setup_guest(xc_interface *xch
>                  sp_extents[i] = page_array[cur_pages+(i<<SUPERPAGE_1GB_SHIFT)];
>
>              done = xc_domain_populate_physmap(xch, dom, nr_extents, SUPERPAGE_1GB_SHIFT,
> -                                              pod_mode ? XENMEMF_populate_on_demand : 0,
> -                                              sp_extents);
> +                                              pod_mode, sp_extents);
>
>              if ( done > 0 )
>              {
> @@ -285,8 +300,7 @@ static int setup_guest(xc_interface *xch
>                      sp_extents[i] = page_array[cur_pages+(i<<SUPERPAGE_2MB_SHIFT)];
>
>                  done = xc_domain_populate_physmap(xch, dom, nr_extents, SUPERPAGE_2MB_SHIFT,
> -                                                  pod_mode ? XENMEMF_populate_on_demand : 0,
> -                                                  sp_extents);
> +                                                  pod_mode, sp_extents);
>
>                  if ( done > 0 )
>                  {
> @@ -302,19 +316,12 @@ static int setup_guest(xc_interface *xch
>          if ( count != 0 )
>          {
>              rc = xc_domain_populate_physmap_exact(
> -                xch, dom, count, 0, 0, &page_array[cur_pages]);
> +                xch, dom, count, 0, pod_mode, &page_array[cur_pages]);
>              cur_pages += count;
>              stat_normal_pages += count;
>          }
>      }
>
> -    /* Subtract 0x20 from target_pages for the VGA "hole".  Xen will
> -     * adjust the PoD cache size so that domain tot_pages will be
> -     * target_pages - 0x20 after this call. */
> -    if ( pod_mode )
> -        rc = xc_domain_set_pod_target(xch, dom, target_pages - 0x20,
> -                                      NULL, NULL, NULL);
> -
>      if ( rc != 0 )
>      {
>          PERROR("Could not allocate memory for HVM guest.");
> --- a/xen/arch/x86/mm/p2m-pod.c
> +++ b/xen/arch/x86/mm/p2m-pod.c
> @@ -344,8 +344,9 @@ p2m_pod_set_mem_target(struct domain *d,
>
>      pod_lock(p2m);
>
> -    /* P =3D=3D B: Nothing to do. */
> -    if ( p2m->pod.entry_count =3D=3D 0 )
> +    /* P =3D=3D B: Nothing to do (unless the guest is being created). */
> +    populated =3D d->tot_pages - p2m->pod.count;
> +    if ( populated > 0 && p2m->pod.entry_count =3D=3D 0 )
>          goto out;
>
>      /* Don't do anything if the domain is being torn down */
> @@ -357,13 +358,11 @@ p2m_pod_set_mem_target(struct domain *d,
>      if ( target < d->tot_pages )
>          goto out;
>
> -    populated  =3D d->tot_pages - p2m->pod.count;
> -
>      pod_target =3D target - populated;
>
>      /* B < T': Set the cache size equal to # of outstanding entries,
>       * let the balloon driver fill in the rest. */
> -    if ( pod_target > p2m->pod.entry_count )
> +    if ( populated > 0 && pod_target > p2m->pod.entry_count )
>          pod_target =3D p2m->pod.entry_count;
>
>      ASSERT( pod_target >=3D p2m->pod.count );
>
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 15 13:50:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 13:50:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1dzG-0005Pa-Bi; Wed, 15 Aug 2012 13:50:18 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1T1dzF-0005PR-43
	for xen-devel@lists.xen.org; Wed, 15 Aug 2012 13:50:17 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1345038609!9448680!1
X-Originating-IP: [74.125.83.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22238 invoked from network); 15 Aug 2012 13:50:10 -0000
Received: from mail-ee0-f45.google.com (HELO mail-ee0-f45.google.com)
	(74.125.83.45)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Aug 2012 13:50:10 -0000
Received: by eeke53 with SMTP id e53so497810eek.32
	for <xen-devel@lists.xen.org>; Wed, 15 Aug 2012 06:50:08 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type
	:content-transfer-encoding;
	bh=4HtTRmzVUmMolTjGyF94rarT2TaQk67RqOLIolvcuug=;
	b=iyo9e9UGlItdGsY+flCy0CKCLzQbu7XLXXPmmR8XB7gboRUtS6enMtgevXgZzv/9fV
	taq5qQyX6Bf4XOWFqxNgzO+esOu5MCTaY4K7X9MJg0p5o6vvpXPMDWXa2HDlgUjcjQ6A
	aCH4XP/qNkSp+CKFEbBbTNVDYDPEcTYVFdNdv88TKEcB5bEhmPojs3OdMDAEUzwhYCRH
	NNQ3u1qJjtvvY3QPu1DxyY24uD5qDmYBIvVYojlZ2qqUZG84YBlaLgeX/+VcsHWfVYcS
	b1pcVpqVRN04qb5BJpr6gMZ2WsCdhMVz2bUUOduSYVO14Odr/PE5sbaIeFn1CxmrfzE5
	EGTg==
MIME-Version: 1.0
Received: by 10.14.212.72 with SMTP id x48mr24264795eeo.40.1345038608549; Wed,
	15 Aug 2012 06:50:08 -0700 (PDT)
Received: by 10.14.215.195 with HTTP; Wed, 15 Aug 2012 06:50:08 -0700 (PDT)
In-Reply-To: <502A2C590200007800094AF0@nat28.tlf.novell.com>
References: <502A2C590200007800094AF0@nat28.tlf.novell.com>
Date: Wed, 15 Aug 2012 14:50:08 +0100
X-Google-Sender-Auth: u1ropV3zD_hzAw_EQsTtQc9qTxg
Message-ID: <CAFLBxZbB0YLYcbGEPrLab_RS2Y9yjAbTr287yYxh58-PDS24oQ@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Cc: Juergen Gross <juergen.gross@ts.fujitsu.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 1/2] x86/PoD: prevent guest from being
 destroyed upon early access to its memory
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Aug 14, 2012 at 9:45 AM, Jan Beulich <JBeulich@suse.com> wrote:
> x86/PoD: prevent guest from being destroyed upon early access to its memory
>
> When an external agent (e.g. a monitoring daemon) happens to access the
> memory of a PoD guest before the PoD target has been set, that access
> must fail: there is not yet any page in the PoD cache, and only the
> space above the low 2Mb gets scanned for victim pages (while so far
> only the low 2Mb has had real pages populated).
>
> To accommodate this,
> - set the PoD target first
> - do all physmap population in PoD mode (i.e. not just large [2Mb or
>   1Gb] pages)
> - slightly lift the restrictions enforced by p2m_pod_set_mem_target()
>   to accommodate the changed tools behavior
>
> Tested-by: Jürgen Groß <juergen.gross@ts.fujitsu.com>
>            (in a 4.0.x based incarnation)
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: George Dunlap <george.dunlap@eu.citrix.com>

However, my "hg qpush" chokes on the German characters in Juergen's name...

 -George

>
> --- a/tools/libxc/xc_hvm_build_x86.c
> +++ b/tools/libxc/xc_hvm_build_x86.c
> @@ -160,7 +160,7 @@ static int setup_guest(xc_interface *xch
>      int pod_mode = 0;
>
>      if ( nr_pages > target_pages )
> -        pod_mode = 1;
> +        pod_mode = XENMEMF_populate_on_demand;
>
>      memset(&elf, 0, sizeof(elf));
>      if ( elf_init(&elf, image, image_size) != 0 )
> @@ -197,6 +197,22 @@ static int setup_guest(xc_interface *xch
>      for ( i = mmio_start >> PAGE_SHIFT; i < nr_pages; i++ )
>          page_array[i] += mmio_size >> PAGE_SHIFT;
>
> +    if ( pod_mode )
> +    {
> +        /*
> +         * Subtract 0x20 from target_pages for the VGA "hole".  Xen will
> +         * adjust the PoD cache size so that domain tot_pages will be
> +         * target_pages - 0x20 after this call.
> +         */
> +        rc = xc_domain_set_pod_target(xch, dom, target_pages - 0x20,
> +                                      NULL, NULL, NULL);
> +        if ( rc != 0 )
> +        {
> +            PERROR("Could not set PoD target for HVM guest.\n");
> +            goto error_out;
> +        }
> +    }
> +
>      /*
>       * Allocate memory for HVM guest, skipping VGA hole 0xA0000-0xC0000.
>       *
> @@ -208,7 +224,7 @@ static int setup_guest(xc_interface *xch
>       * ensure that we can be preempted and hence dom0 remains responsive.
>       */
>      rc = xc_domain_populate_physmap_exact(
> -        xch, dom, 0xa0, 0, 0, &page_array[0x00]);
> +        xch, dom, 0xa0, 0, pod_mode, &page_array[0x00]);
>      cur_pages = 0xc0;
>      stat_normal_pages = 0xc0;
>      while ( (rc == 0) && (nr_pages > cur_pages) )
> @@ -247,8 +263,7 @@ static int setup_guest(xc_interface *xch
>                  sp_extents[i] = page_array[cur_pages+(i<<SUPERPAGE_1GB_SHIFT)];
>
>              done = xc_domain_populate_physmap(xch, dom, nr_extents, SUPERPAGE_1GB_SHIFT,
> -                                              pod_mode ? XENMEMF_populate_on_demand : 0,
> -                                              sp_extents);
> +                                              pod_mode, sp_extents);
>
>              if ( done > 0 )
>              {
> @@ -285,8 +300,7 @@ static int setup_guest(xc_interface *xch
>                      sp_extents[i] = page_array[cur_pages+(i<<SUPERPAGE_2MB_SHIFT)];
>
>                  done = xc_domain_populate_physmap(xch, dom, nr_extents, SUPERPAGE_2MB_SHIFT,
> -                                                  pod_mode ? XENMEMF_populate_on_demand : 0,
> -                                                  sp_extents);
> +                                                  pod_mode, sp_extents);
>
>                  if ( done > 0 )
>                  {
> @@ -302,19 +316,12 @@ static int setup_guest(xc_interface *xch
>          if ( count != 0 )
>          {
>              rc = xc_domain_populate_physmap_exact(
> -                xch, dom, count, 0, 0, &page_array[cur_pages]);
> +                xch, dom, count, 0, pod_mode, &page_array[cur_pages]);
>              cur_pages += count;
>              stat_normal_pages += count;
>          }
>      }
>
> -    /* Subtract 0x20 from target_pages for the VGA "hole".  Xen will
> -     * adjust the PoD cache size so that domain tot_pages will be
> -     * target_pages - 0x20 after this call. */
> -    if ( pod_mode )
> -        rc = xc_domain_set_pod_target(xch, dom, target_pages - 0x20,
> -                                      NULL, NULL, NULL);
> -
>      if ( rc != 0 )
>      {
>          PERROR("Could not allocate memory for HVM guest.");
> --- a/xen/arch/x86/mm/p2m-pod.c
> +++ b/xen/arch/x86/mm/p2m-pod.c
> @@ -344,8 +344,9 @@ p2m_pod_set_mem_target(struct domain *d,
>
>      pod_lock(p2m);
>
> -    /* P == B: Nothing to do. */
> -    if ( p2m->pod.entry_count == 0 )
> +    /* P == B: Nothing to do (unless the guest is being created). */
> +    populated = d->tot_pages - p2m->pod.count;
> +    if ( populated > 0 && p2m->pod.entry_count == 0 )
>          goto out;
>
>      /* Don't do anything if the domain is being torn down */
> @@ -357,13 +358,11 @@ p2m_pod_set_mem_target(struct domain *d,
>      if ( target < d->tot_pages )
>          goto out;
>
> -    populated  = d->tot_pages - p2m->pod.count;
> -
>      pod_target = target - populated;
>
>      /* B < T': Set the cache size equal to # of outstanding entries,
>       * let the balloon driver fill in the rest. */
> -    if ( pod_target > p2m->pod.entry_count )
> +    if ( populated > 0 && pod_target > p2m->pod.entry_count )
>          pod_target = p2m->pod.entry_count;
>
>      ASSERT( pod_target >= p2m->pod.count );
>
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 15 13:53:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 13:53:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1e1y-0005Wf-2u; Wed, 15 Aug 2012 13:53:06 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1T1e1x-0005WU-6o
	for xen-devel@lists.xen.org; Wed, 15 Aug 2012 13:53:05 +0000
Received: from [85.158.138.51:45570] by server-7.bemta-3.messagelabs.com id
	9E/13-01906-DB9AB205; Wed, 15 Aug 2012 13:53:01 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-2.tower-174.messagelabs.com!1345038779!28451813!1
X-Originating-IP: [209.85.215.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4593 invoked from network); 15 Aug 2012 13:52:59 -0000
Received: from mail-ey0-f173.google.com (HELO mail-ey0-f173.google.com)
	(209.85.215.173)
	by server-2.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Aug 2012 13:52:59 -0000
Received: by eaac13 with SMTP id c13so497559eaa.32
	for <xen-devel@lists.xen.org>; Wed, 15 Aug 2012 06:52:59 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=w+gdlA/x2Z6YnerL/JC2sZHnlVI9dw+BUCF73khUHnU=;
	b=xaYwzo099IPWUICQQgNvhUouWLV9L8PCon8vSKxrEurjt5BQReuRw4+sB8GDCYxefB
	XkjfMWOX0+JATESren6yKtk8xOnGylHQUz5UXA0Ld/Nj9RrmUJDNvi0evSwayyRCnF0x
	StU5Y7Cto5k7NvHKXyB5A4/rtRP+pIn6Rt9m/G0TmUUXgcuChYDkfqdR32yDd7X9yWmN
	UgfpjIBcRnFGdAJq5uCMSepsVnbLEOByVGQubiTaStWDOODRxcp6uA1LRt2Ui53PjCWj
	B/0+wz7Tf6fBBvQlwxcydZmuVdZi49UxxlsLVI9L02xcn+zD5ryY/BXf+fuvzvhlo2hy
	waXw==
MIME-Version: 1.0
Received: by 10.14.225.200 with SMTP id z48mr24266054eep.39.1345038779205;
	Wed, 15 Aug 2012 06:52:59 -0700 (PDT)
Received: by 10.14.215.195 with HTTP; Wed, 15 Aug 2012 06:52:59 -0700 (PDT)
In-Reply-To: <502A2C7D0200007800094AF4@nat28.tlf.novell.com>
References: <502A2C7D0200007800094AF4@nat28.tlf.novell.com>
Date: Wed, 15 Aug 2012 14:52:59 +0100
X-Google-Sender-Auth: KYv_Rdeq4NbyhNid_ON_OMSK_uU
Message-ID: <CAFLBxZYOCcCPaWVQQii8OGEOwhB2j02a2iZ=W9U_7Sj+u8go5A@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 2/2] x86/PoD: clean up types
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Aug 14, 2012 at 9:46 AM, Jan Beulich <JBeulich@suse.com> wrote:
> GMFN values must undoubtedly be "unsigned long". "count" and
> "entry_count", since they are signed types, should also be "long" as
> otherwise they can't fit all values that can fit into "d->tot_pages"
> (which currently is "uint32_t").
>
> Beyond that, the patch doesn't convert everything to "long", as in many
> places it is clear that "int" suffices. Where "long" is already in
> partial use, however, the conversion is carried out.
>
> Furthermore, page order values have no need to be "long".
>
> Finally, in the course of updating a few printk messages anyway, some
> also get slightly shortened (to focus on the relevant information).
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Looks good, thanks.

Acked-by: George Dunlap <george.dunlap@eu.citrix.com>


>
> --- a/xen/arch/x86/mm/p2m-pod.c
> +++ b/xen/arch/x86/mm/p2m-pod.c
> @@ -66,7 +66,7 @@ static inline void unlock_page_alloc(str
>  static int
>  p2m_pod_cache_add(struct p2m_domain *p2m,
>                    struct page_info *page,
> -                  unsigned long order)
> +                  unsigned int order)
>  {
>      int i;
>      struct page_info *p;
> @@ -80,7 +80,7 @@ p2m_pod_cache_add(struct p2m_domain *p2m
>      /* Check to make sure this is a contiguous region */
>      if( mfn_x(mfn) & ((1 << order) - 1) )
>      {
> -        printk("%s: mfn %lx not aligned order %lu! (mask %lx)\n",
> +        printk("%s: mfn %lx not aligned order %u! (mask %lx)\n",
>                 __func__, mfn_x(mfn), order, ((1UL << order) - 1));
>          return -1;
>      }
> @@ -146,7 +146,7 @@ p2m_pod_cache_add(struct p2m_domain *p2m
>   * down 2-meg pages into singleton pages automatically.  Returns null if
>   * a superpage is requested and no superpages are available. */
>  static struct page_info * p2m_pod_cache_get(struct p2m_domain *p2m,
> -                                            unsigned long order)
> +                                            unsigned int order)
>  {
>      struct page_info *p = NULL;
>      int i;
> @@ -234,7 +234,7 @@ p2m_pod_set_cache_target(struct p2m_doma
>                  goto retry;
>              }
>
> -            printk("%s: Unable to allocate domheap page for pod cache.  target %lu cachesize %d\n",
> +            printk("%s: Unable to allocate page for PoD cache (target=%lu cache=%ld)\n",
>                     __func__, pod_target, p2m->pod.count);
>              ret = -ENOMEM;
>              goto out;
> @@ -337,10 +337,9 @@ out:
>  int
>  p2m_pod_set_mem_target(struct domain *d, unsigned long target)
>  {
> -    unsigned pod_target;
>      struct p2m_domain *p2m = p2m_get_hostp2m(d);
>      int ret = 0;
> -    unsigned long populated;
> +    unsigned long populated, pod_target;
>
>      pod_lock(p2m);
>
> @@ -633,7 +632,8 @@ out_unlock:
>  void p2m_pod_dump_data(struct domain *d)
>  {
>      struct p2m_domain *p2m = p2m_get_hostp2m(d);
> -    printk("    PoD entries=%d cachesize=%d\n",
> +
> +    printk("    PoD entries=%ld cachesize=%ld\n",
>             p2m->pod.entry_count, p2m->pod.count);
>  }
>
> @@ -1071,8 +1071,9 @@ p2m_pod_demand_populate(struct p2m_domai
>  out_of_memory:
>      pod_unlock(p2m);
>
> -    printk("%s: Out of populate-on-demand memory! tot_pages %" PRIu32 " pod_entries %" PRIi32 "\n",
> -           __func__, d->tot_pages, p2m->pod.entry_count);
> +    printk("%s: Dom%d out of PoD memory! (tot=%"PRIu32" ents=%ld dom%d)\n",
> +           __func__, d->domain_id, d->tot_pages, p2m->pod.entry_count,
> +           current->domain->domain_id);
>      domain_crash(d);
>      return -1;
>  out_fail:
> @@ -1111,10 +1112,9 @@ guest_physmap_mark_populate_on_demand(st
>                                        unsigned int order)
>  {
>      struct p2m_domain *p2m = p2m_get_hostp2m(d);
> -    unsigned long i;
> +    unsigned long i, pod_count = 0;
>      p2m_type_t ot;
>      mfn_t omfn;
> -    int pod_count = 0;
>      int rc = 0;
>
>      BUG_ON(!paging_mode_translate(d));
> --- a/xen/arch/x86/mm/p2m-pt.c
> +++ b/xen/arch/x86/mm/p2m-pt.c
> @@ -965,8 +965,7 @@ static void p2m_change_type_global(struc
>  #if P2M_AUDIT
>  long p2m_pt_audit_p2m(struct p2m_domain *p2m)
>  {
> -    int entry_count = 0;
> -    unsigned long pmbad = 0;
> +    unsigned long entry_count = 0, pmbad = 0;
>      unsigned long mfn, gfn, m2pfn;
>      int test_linear;
>      struct domain *d = p2m->domain;
> @@ -1126,7 +1125,7 @@ long p2m_pt_audit_p2m(struct p2m_domain
>
>      if ( entry_count != p2m->pod.entry_count )
>      {
> -        printk("%s: refcounted entry count %d, audit count %d!\n",
> +        printk("%s: refcounted entry count %ld, audit count %lu!\n",
>                 __func__,
>                 p2m->pod.entry_count,
>                 entry_count);
> --- a/xen/include/asm-x86/p2m.h
> +++ b/xen/include/asm-x86/p2m.h
> @@ -282,10 +282,10 @@ struct p2m_domain {
>      struct {
>          struct page_list_head super,   /* List of superpages                */
>                           single;       /* Non-super lists                   */
> -        int              count,        /* # of pages in cache lists         */
> +        long             count,        /* # of pages in cache lists         */
>                           entry_count;  /* # of pages in p2m marked pod      */
> -        unsigned         reclaim_single; /* Last gpfn of a scan */
> -        unsigned         max_guest;    /* gpfn of max guest demand-populate */
> +        unsigned long    reclaim_single; /* Last gpfn of a scan */
> +        unsigned long    max_guest;    /* gpfn of max guest demand-populate */
>  #define POD_HISTORY_MAX 128
>          /* gpfn of last guest superpage demand-populated */
>          unsigned long    last_populated[POD_HISTORY_MAX];
>
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 15 13:56:34 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 13:56:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1e4y-0005g8-MY; Wed, 15 Aug 2012 13:56:12 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T1e4x-0005fs-Qs
	for xen-devel@lists.xen.org; Wed, 15 Aug 2012 13:56:12 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1345038962!6996963!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkxOTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17318 invoked from network); 15 Aug 2012 13:56:03 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Aug 2012 13:56:03 -0000
X-IronPort-AV: E=Sophos;i="4.77,773,1336348800"; d="scan'208";a="14021752"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	15 Aug 2012 13:56:02 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 15 Aug 2012 14:56:02 +0100
Date: Wed, 15 Aug 2012 14:55:46 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: David Vrabel <david.vrabel@citrix.com>
In-Reply-To: <502B85D1.8000606@citrix.com>
Message-ID: <alpine.DEB.2.02.1208151418380.2278@kaball.uk.xensource.com>
References: <1344947062-4796-1-git-send-email-attilio.rao@citrix.com>
	<1344947062-4796-3-git-send-email-attilio.rao@citrix.com>
	<502A5964.2080509@citrix.com> <502A5CD0.8000201@citrix.com>
	<502B85D1.8000606@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Attilio Rao <attilio.rao@citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v2 2/2] Xen: Document the semantic of the
 pagetable_reserve PVOPS
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 15 Aug 2012, David Vrabel wrote:
> On 14/08/12 15:12, Attilio Rao wrote:
> > On 14/08/12 14:57, David Vrabel wrote:
> >> On 14/08/12 13:24, Attilio Rao wrote:
> >> 
> >>> The information added to the hook is: - Native behaviour - Xen
> >>> specific behaviour - Logic behind the Xen specific behaviour
> >>> 
> >> These are implementation details and should be documented with the
> >>  implementations (if necessary).
> >> 
> > 
> > In this specific case, implementation details are very valuable for
> > understanding the semantics of the operations, which is why I added
> > them there. I think, at least for this case, this is the best trade-off.
> 
> Documenting the implementation details will be useful for reviewing or
> refactoring the pv-ops, but I don't think it is useful in the longer
> term for it to be included in the API documentation upstream.
> 
> >>> - PVOPS semantic
> >>> 
> >> This is the interesting stuff.
> >> 
> >> This particular pvop seems a little odd really.  It might make more
> >> sense if it took a third parameter for pgt_buf_top.
> > 
> > The thing is that this work (documenting PVOPS) should help in
> > understanding the logic behind some PVOPS and possibly in
> > improving/reworking them. At this stage, I agreed with Konrad to keep
> > the changes as small as possible. Once the documentation about the
> > semantics is in place we can think about ways to improve things more
> > effectively (for example, in some cases we may want to rewrite the
> > PVOP completely).
> 
> After looking at it some more, I think this pv-op is unnecessary. How
> about the following patch to just remove it completely?
> 
> I've only smoke-tested 32-bit and 64-bit dom0 but I think the reasoning
> is sound.

Do you have more than 4G assigned to dom0 on those boxes?
It certainly fixed a serious crash at the time it was introduced, see
http://marc.info/?l=linux-kernel&m=129901609503574 and
http://marc.info/?l=linux-kernel&m=130133909408229. Unless something big
changed in kernel_physical_mapping_init, I think we still need it.
Depending on the e820 of your test box, the kernel could crash (or not),
possibly in different places.


> >> "@pagetable_reserve is used to reserve a range of PFNs used for the
> >> kernel direct mapping page tables and cleans-up any PFNs that ended
> >> up not being used for the tables.
> >> 
> >> It shall reserve the range (start, end] with memblock_reserve(). It
> >> shall prepare PFNs in the range (end, pgt_buf_top] for general (non
> >> page table) use.
> >> 
> >> It shall only be called in init_memory_mapping() after the direct 
> >> mapping tables have been constructed."
> >> 
> >> Having said that, I couldn't immediately see where pages in (end, 
> >> pgt_buf_top] were getting set RO.  Can you point me to where it's 
> >> done?
> >> 
> > 
> > As mentioned in the comment, please look at xen_set_pte_init().
> 
> xen_set_pte_init() only ensures it doesn't set the PTE as writable if it
> is already present and read-only.

Look at mask_rw_pte and read the threads linked above; unfortunately it
is not that simple.


> David
> 
> 8<----------------------
> x86: remove x86_init.mapping.pagetable_reserve paravirt op
> 
> The x86_init.mapping.pagetable_reserve paravirt op is used for Xen
> guests to set the writable flag for the mapping of (pgt_buf_end,
> pgt_buf_top].  This is not necessary as these pages are never set as
> read-only as they have never contained page tables.

Is this actually true? It can happen when pagetable pages are
allocated by alloc_low_page.


> When running as a Xen guest, the initial page tables are provided by
> Xen (these are reserved with memblock_reserve() in
> xen_setup_kernel_pagetable()) and constructed in brk space (for 32-bit
> guests) or in the kernel's .data section (for 64-bit guests, see
> head_64.S).
> 
> Since these are all marked as reserved, (pgt_buf_start, pgt_buf_top]
> does not overlap with them and the mappings for these PFNs will be
> read-write.

We are talking about pagetable pages built by
kernel_physical_mapping_init.


> Since Xen doesn't need to change the mapping, its implementation
> becomes the same as the native one and we can simply remove this pv-op
> completely.

I don't think so.



> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
> ---
>  arch/x86/include/asm/pgtable_types.h |    1 -
>  arch/x86/include/asm/x86_init.h      |   12 ------------
>  arch/x86/kernel/x86_init.c           |    4 ----
>  arch/x86/mm/init.c                   |   22 +++-------------------
>  arch/x86/xen/mmu.c                   |   19 ++-----------------
>  5 files changed, 5 insertions(+), 53 deletions(-)
> 
> diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
> index 013286a..0a11293 100644
> --- a/arch/x86/include/asm/pgtable_types.h
> +++ b/arch/x86/include/asm/pgtable_types.h
> @@ -301,7 +301,6 @@ int phys_mem_access_prot_allowed(struct file *file, unsigned long pfn,
>  /* Install a pte for a particular vaddr in kernel space. */
>  void set_pte_vaddr(unsigned long vaddr, pte_t pte);
>  
> -extern void native_pagetable_reserve(u64 start, u64 end);
>  #ifdef CONFIG_X86_32
>  extern void native_pagetable_setup_start(pgd_t *base);
>  extern void native_pagetable_setup_done(pgd_t *base);
> diff --git a/arch/x86/include/asm/x86_init.h b/arch/x86/include/asm/x86_init.h
> index 38155f6..b527dd4 100644
> --- a/arch/x86/include/asm/x86_init.h
> +++ b/arch/x86/include/asm/x86_init.h
> @@ -69,17 +69,6 @@ struct x86_init_oem {
>  };
>  
>  /**
> - * struct x86_init_mapping - platform specific initial kernel pagetable setup
> - * @pagetable_reserve:	reserve a range of addresses for kernel pagetable usage
> - *
> - * For more details on the purpose of this hook, look in
> - * init_memory_mapping and the commit that added it.
> - */
> -struct x86_init_mapping {
> -	void (*pagetable_reserve)(u64 start, u64 end);
> -};
> -
> -/**
>   * struct x86_init_paging - platform specific paging functions
>   * @pagetable_setup_start:	platform specific pre paging_init() call
>   * @pagetable_setup_done:	platform specific post paging_init() call
> @@ -135,7 +124,6 @@ struct x86_init_ops {
>  	struct x86_init_mpparse		mpparse;
>  	struct x86_init_irqs		irqs;
>  	struct x86_init_oem		oem;
> -	struct x86_init_mapping		mapping;
>  	struct x86_init_paging		paging;
>  	struct x86_init_timers		timers;
>  	struct x86_init_iommu		iommu;
> diff --git a/arch/x86/kernel/x86_init.c b/arch/x86/kernel/x86_init.c
> index 9f3167e..040c05f 100644
> --- a/arch/x86/kernel/x86_init.c
> +++ b/arch/x86/kernel/x86_init.c
> @@ -63,10 +63,6 @@ struct x86_init_ops x86_init __initdata = {
>  		.banner			= default_banner,
>  	},
>  
> -	.mapping = {
> -		.pagetable_reserve		= native_pagetable_reserve,
> -	},
> -
>  	.paging = {
>  		.pagetable_setup_start	= native_pagetable_setup_start,
>  		.pagetable_setup_done	= native_pagetable_setup_done,
> diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
> index e0e6990..c449873 100644
> --- a/arch/x86/mm/init.c
> +++ b/arch/x86/mm/init.c
> @@ -90,11 +90,6 @@ static void __init find_early_table_space(struct map_range *mr, unsigned long en
>  		(pgt_buf_top << PAGE_SHIFT) - 1);
>  }
>  
> -void __init native_pagetable_reserve(u64 start, u64 end)
> -{
> -	memblock_reserve(start, end - start);
> -}
> -
>  #ifdef CONFIG_X86_32
>  #define NR_RANGE_MR 3
>  #else /* CONFIG_X86_64 */
> @@ -283,22 +278,11 @@ unsigned long __init_refok init_memory_mapping(unsigned long start,
>  
>  	/*
>  	 * Reserve the kernel pagetable pages we used (pgt_buf_start -
> -	 * pgt_buf_end) and free the other ones (pgt_buf_end - pgt_buf_top)
> -	 * so that they can be reused for other purposes.
> -	 *
> -	 * On native it just means calling memblock_reserve, on Xen it also
> -	 * means marking RW the pagetable pages that we allocated before
> -	 * but that haven't been used.
> -	 *
> -	 * In fact on xen we mark RO the whole range pgt_buf_start -
> -	 * pgt_buf_top, because we have to make sure that when
> -	 * init_memory_mapping reaches the pagetable pages area, it maps
> -	 * RO all the pagetable pages, including the ones that are beyond
> -	 * pgt_buf_end at that time.
> +	 * pgt_buf_end).
>  	 */
>  	if (!after_bootmem && pgt_buf_end > pgt_buf_start)
> -		x86_init.mapping.pagetable_reserve(PFN_PHYS(pgt_buf_start),
> -				PFN_PHYS(pgt_buf_end));
> +		memblock_reserve(PFN_PHYS(pgt_buf_start),
> +				 PFN_PHYS(pgt_buf_end) - PFN_PHYS(pgt_buf_start));
>  
>  	if (!after_bootmem)
>  		early_memtest(start, end);
> diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
> index b65a761..e55dfc0 100644
> --- a/arch/x86/xen/mmu.c
> +++ b/arch/x86/xen/mmu.c
> @@ -1178,20 +1178,6 @@ static void __init xen_pagetable_setup_start(pgd_t *base)
>  {
>  }
>  
> -static __init void xen_mapping_pagetable_reserve(u64 start, u64 end)
> -{
> -	/* reserve the range used */
> -	native_pagetable_reserve(start, end);
> -
> -	/* set as RW the rest */
> -	printk(KERN_DEBUG "xen: setting RW the range %llx - %llx\n", end,
> -			PFN_PHYS(pgt_buf_top));
> -	while (end < PFN_PHYS(pgt_buf_top)) {
> -		make_lowmem_page_readwrite(__va(end));
> -		end += PAGE_SIZE;
> -	}
> -}
> -
>  static void xen_post_allocator_init(void);
>  
>  static void __init xen_pagetable_setup_done(pgd_t *base)
> @@ -2067,7 +2053,6 @@ static const struct pv_mmu_ops xen_mmu_ops __initconst = {
>  
>  void __init xen_init_mmu_ops(void)
>  {
> -	x86_init.mapping.pagetable_reserve = xen_mapping_pagetable_reserve;
>  	x86_init.paging.pagetable_setup_start = xen_pagetable_setup_start;
>  	x86_init.paging.pagetable_setup_done = xen_pagetable_setup_done;
>  	pv_mmu_ops = xen_mmu_ops;
> -- 
> 1.7.2.5
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 15 14:08:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 14:08:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1eGE-0006BN-IH; Wed, 15 Aug 2012 14:07:50 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <tupeng212@gmail.com>) id 1T1eGB-0006BE-RQ
	for xen-devel@lists.xen.org; Wed, 15 Aug 2012 14:07:48 +0000
Received: from [85.158.143.35:17312] by server-1.bemta-4.messagelabs.com id
	7E/8E-07754-33DAB205; Wed, 15 Aug 2012 14:07:47 +0000
X-Env-Sender: tupeng212@gmail.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1345039650!14242832!1
X-Originating-IP: [209.85.210.45]
X-SpamReason: No, hits=1.4 required=7.0 tests=HTML_60_70,HTML_MESSAGE,
	MAILTO_TO_SPAM_ADDR,MIME_BASE64_TEXT,MIME_BOUND_NEXTPART,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8199 invoked from network); 15 Aug 2012 14:07:31 -0000
Received: from mail-pz0-f45.google.com (HELO mail-pz0-f45.google.com)
	(209.85.210.45)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Aug 2012 14:07:31 -0000
Received: by dadn15 with SMTP id n15so180673dad.32
	for <xen-devel@lists.xen.org>; Wed, 15 Aug 2012 07:07:29 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=date:from:to:cc:reply-to:subject:references:x-priority:x-guid
	:x-has-attach:x-mailer:mime-version:message-id:content-type;
	bh=DPY/2h4FwX/0mUL+0IemDK0HkZtLqNn9iniJ/BYbQD0=;
	b=o/T4A6hV/E6p9zAg28V8Rm+hdqRZp/NLrs3p9xOSjeXYK0e7a0C5scEeXosSMqfWSg
	GwRwics5903Zk5+dbmpMfHNCmkgCns0VpTWGhregwaWIkC6nG+U1ytvUFN/+SVOZm5Cb
	7Os4m9686MtFUsXwG3v7HdR74glUtHFrDqQkpMRZpDxivoxgq663x0bvV9Tqgcfo7gwC
	/MN3rz+1QUIJJYi8R4fHuzNHNqDVJ8+kLgF/cnwhoYZOKzzAYIKTjTk8xm3ZmXeHclj4
	+FLVfHejkSpEi09rWvyUKm7HahAzzBZIpDt0rpoVxtLaEbxNyJUl/cjutmuAn+jNbOi4
	VdJQ==
Received: by 10.66.83.234 with SMTP id t10mr32322200pay.39.1345039649634;
	Wed, 15 Aug 2012 07:07:29 -0700 (PDT)
Received: from root ([115.199.253.245])
	by mx.google.com with ESMTPS id jz4sm616595pbc.17.2012.08.15.07.07.02
	(version=SSLv3 cipher=OTHER); Wed, 15 Aug 2012 07:07:27 -0700 (PDT)
Date: Wed, 15 Aug 2012 22:07:07 +0800
From: tupeng212 <tupeng212@gmail.com>
To: "Jan Beulich" <JBeulich@suse.com>, 
	"Yang Z Zhang" <yang.z.zhang@intel.com>, "Keir Fraser" <keir@xen.org>, 
	"Tim Deegan" <tim@xen.org>
References: <502A3BBC0200007800094B68@nat28.tlf.novell.com>
X-Priority: 3
X-GUID: EC02886C-1CA8-4587-83B3-CEE6627B4FFE
X-Has-Attach: yes
X-Mailer: Foxmail 7.0.1.87[cn]
Mime-Version: 1.0
Message-ID: <2012081522045495397713@gmail.com>
Content-Type: multipart/mixed; boundary="----=_001_NextPart215818845028_=----"
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH,
	RFC v2] x86/HVM: assorted RTC emulation adjustments (was Re: Big
	Bug:Time in VM goes slower...)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: tupeng212 <tupeng212@gmail.com>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.

------=_001_NextPart215818845028_=----
Content-Type: multipart/alternative;
	boundary="----=_002_NextPart316018485136_=----"


------=_002_NextPart316018485136_=----
Content-Type: text/plain;
	charset="us-ascii"
Content-Transfer-Encoding: 7bit

Hi, Jan:
I am sorry, I really don't have much time to run a test of your patch, and
it is not convenient for me to try it. The version I have been using is
xen4.0.x, and your patch is based on the latest version xen4.2.x (I have
never compiled the unstable one), so I merged your patch into my xen4.0.x,
but still couldn't find the two functions below:
 static void rtc_update_timer2(void *opaque)
 static void rtc_alarm_cb(void *opaque)
so I didn't merge the two functions which contain a rtc_toggle_irq().

The results for me were these:
1 In my real application environment, it worked very well for the first
5 minutes, much better than before, but eventually it lagged again. I don't
know whether that is down to the two missing functions; I lack the ability
to figure them out.

2 When I tested the test program I provided days before, it worked very
well; maybe the program doesn't emulate the real environment because of the
identical setting rate, so I modified the program as in the attachment.
If it is convenient for you, please have a look at it.
And I have a request: because our product is based on Xen 4.0.x, if you
have enough time, could you write another patch based on
http://xenbits.xen.org/hg/xen-4.0-testing.hg/ for me? Thank you very much!

3 I also wonder whether we could have some detection method in the code to
find the lagging time earlier and adjust the time back to the normal value?

best regards,

tupeng212

Second draft of a patch posted; no test results so far for first draft.
Jan

From: Jan Beulich
Date: 2012-08-14 17:51
To: Yang Z Zhang; Keir Fraser; Tim Deegan
CC: tupeng212; xen-devel
Subject: [Xen-devel] [PATCH, RFC v2] x86/HVM: assorted RTC emulation adjustments (was Re: Big Bug:Time in VM goes slower...)
Below/attached a second draft of a patch to fix not only this
issue, but a few more with the RTC emulation.

Keir, Tim, Yang, others - the change to xen/arch/x86/hvm/vpt.c really
looks more like a hack than a solution, but I don't see another
way without much more intrusive changes. The point is that we
want the RTC code to decide whether to generate an interrupt
(so that RTC_PF can become set correctly even without RTC_PIE
getting enabled by the guest).

Additionally I wonder whether alarm_timer_update() shouldn't
bail on non-conforming RTC_*_ALARM values (as those would
never match the values they get compared against, whereas
with the current way of handling this they would appear to
match - i.e. set RTC_AF and possibly generate an interrupt -
at some other point in time). I realize the behavior here may not
be precisely specified, but the specification saying "the current
time has matched the alarm time" means to me a value-by-value
comparison, which implies that non-conforming values would
never match (since non-conforming current time values could
get replaced at any time by the hardware due to overflow
detection).

Jan

- don't call rtc_timer_update() on REG_A writes when the value didn't
  change (doing the call always was reported to cause wall clock time
  lagging with the JVM running on Windows)
- don't call rtc_timer_update() on REG_B writes at all
- only call alarm_timer_update() on REG_B writes when relevant bits
  change
- only call check_update_timer() on REG_B writes when SET changes
- 
aW5zdGVhZCBwcm9wZXJseSBoYW5kbGUgQUYgYW5kIFBGIHdoZW4gdGhlIGd1ZXN0IGlzIG5vdCBh
bHNvIHNldHRpbmcNCiAgQUlFL1BJRSByZXNwZWN0aXZlbHkgKGZvciBVRiB0aGlzIHdhcyBhbHJl
YWR5IHRoZSBjYXNlLCBvbmx5IGENCiAgY29tbWVudCB3YXMgc2xpZ2h0bHkgaW5hY2N1cmF0ZSkN
Ci0gcmFpc2UgdGhlIFJUQyBJUlEgbm90IG9ubHkgd2hlbiBVSUUgZ2V0cyBzZXQgd2hpbGUgVUYg
d2FzIGFscmVhZHkNCiAgc2V0LCBidXQgZ2VuZXJhbGl6ZSB0aGlzIHRvIGNvdmVyIEFJRSBhbmQg
UElFIGFzIHdlbGwNCi0gcHJvcGVybHkgbWFzayBvZmYgYml0IDcgd2hlbiByZXRyaWV2aW5nIHRo
ZSBob3VyIHZhbHVlcyBpbg0KICBhbGFybV90aW1lcl91cGRhdGUoKSwgYW5kIHByb3Blcmx5IHVz
ZSBSVENfSE9VUlNfQUxBUk0ncyBiaXQgNyB3aGVuDQogIGNvbnZlcnRpbmcgZnJvbSAxMi0gdG8g
MjQtaG91ciB2YWx1ZQ0KLSBhbHNvIGhhbmRsZSB0aGUgdHdvIG90aGVyIHBvc3NpYmxlIGNsb2Nr
IGJhc2VzDQotIHVzZSBSVENfKiBuYW1lcyBpbiBhIGNvdXBsZSBvZiBwbGFjZXMgd2hlcmUgbGl0
ZXJhbCBudW1iZXJzIHdlcmUgdXNlZA0KICBzbyBmYXINCg0KLS0tIGEveGVuL2FyY2gveDg2L2h2
bS9ydGMuYw0KKysrIGIveGVuL2FyY2gveDg2L2h2bS9ydGMuYw0KQEAgLTUwLDExICs1MCwyNCBA
QCBzdGF0aWMgdm9pZCBydGNfc2V0X3RpbWUoUlRDU3RhdGUgKnMpOw0KIHN0YXRpYyBpbmxpbmUg
aW50IGZyb21fYmNkKFJUQ1N0YXRlICpzLCBpbnQgYSk7DQogc3RhdGljIGlubGluZSBpbnQgY29u
dmVydF9ob3VyKFJUQ1N0YXRlICpzLCBpbnQgaG91cik7DQoNCi1zdGF0aWMgdm9pZCBydGNfcGVy
aW9kaWNfY2Ioc3RydWN0IHZjcHUgKnYsIHZvaWQgKm9wYXF1ZSkNCitzdGF0aWMgdm9pZCBydGNf
dG9nZ2xlX2lycShSVENTdGF0ZSAqcykNCit7DQorICAgIHN0cnVjdCBkb21haW4gKmQgPSB2cnRj
X2RvbWFpbihzKTsNCisNCisgICAgQVNTRVJUKHNwaW5faXNfbG9ja2VkKCZzLT5sb2NrKSk7DQor
ICAgIHMtPmh3LmNtb3NfZGF0YVtSVENfUkVHX0NdIHw9IFJUQ19JUlFGOw0KKyAgICBodm1faXNh
X2lycV9kZWFzc2VydChkLCBSVENfSVJRKTsNCisgICAgaHZtX2lzYV9pcnFfYXNzZXJ0KGQsIFJU
Q19JUlEpOw0KK30NCisNCit2b2lkIHJ0Y19wZXJpb2RpY19pbnRlcnJ1cHQodm9pZCAqb3BhcXVl
KQ0KIHsNCiAgICAgUlRDU3RhdGUgKnMgPSBvcGFxdWU7DQorDQogICAgIHNwaW5fbG9jaygmcy0+
bG9jayk7DQotICAgIHMtPmh3LmNtb3NfZGF0YVtSVENfUkVHX0NdIHw9IDB4YzA7DQorICAgIHMt
Pmh3LmNtb3NfZGF0YVtSVENfUkVHX0NdIHw9IFJUQ19QRjsNCisgICAgaWYgKCBzLT5ody5jbW9z
X2RhdGFbUlRDX1JFR19CXSAmIFJUQ19QSUUgKQ0KKyAgICAgICAgcnRjX3RvZ2dsZV9pcnEocyk7
DQogICAgIHNwaW5fdW5sb2NrKCZzLT5sb2NrKTsNCiB9DQoNCkBAIC02OCwxOSArODEsMjUgQEAg
c3RhdGljIHZvaWQgcnRjX3RpbWVyX3VwZGF0ZShSVENTdGF0ZSAqcw0KICAgICBBU1NFUlQoc3Bp
bl9pc19sb2NrZWQoJnMtPmxvY2spKTsNCg0KICAgICBwZXJpb2RfY29kZSA9IHMtPmh3LmNtb3Nf
ZGF0YVtSVENfUkVHX0FdICYgUlRDX1JBVEVfU0VMRUNUOw0KLSAgICBpZiAoIChwZXJpb2RfY29k
ZSAhPSAwKSAmJiAocy0+aHcuY21vc19kYXRhW1JUQ19SRUdfQl0gJiBSVENfUElFKSApDQorICAg
IHN3aXRjaCAoIHMtPmh3LmNtb3NfZGF0YVtSVENfUkVHX0FdICYgUlRDX0RJVl9DVEwgKQ0KICAg
ICB7DQotICAgICAgICBpZiAoIHBlcmlvZF9jb2RlIDw9IDIgKQ0KKyAgICBjYXNlIFJUQ19SRUZf
Q0xDS18zMktIWjoNCisgICAgICAgIGlmICggKHBlcmlvZF9jb2RlICE9IDApICYmIChwZXJpb2Rf
Y29kZSA8PSAyKSApDQogICAgICAgICAgICAgcGVyaW9kX2NvZGUgKz0gNzsNCi0NCi0gICAgICAg
IHBlcmlvZCA9IDEgPDwgKHBlcmlvZF9jb2RlIC0gMSk7IC8qIHBlcmlvZCBpbiAzMiBLaHogY3lj
bGVzICovDQotICAgICAgICBwZXJpb2QgPSBESVZfUk9VTkQoKHBlcmlvZCAqIDEwMDAwMDAwMDBV
TEwpLCAzMjc2OCk7IC8qIHBlcmlvZCBpbiBucyAqLw0KLSAgICAgICAgY3JlYXRlX3BlcmlvZGlj
X3RpbWUodiwgJnMtPnB0LCBwZXJpb2QsIHBlcmlvZCwgUlRDX0lSUSwNCi0gICAgICAgICAgICAg
ICAgICAgICAgICAgICAgIHJ0Y19wZXJpb2RpY19jYiwgcyk7DQotICAgIH0NCi0gICAgZWxzZQ0K
LSAgICB7DQorICAgICAgICAvKiBmYWxsIHRocm91Z2ggKi8NCisgICAgY2FzZSBSVENfUkVGX0NM
Q0tfMU1IWjoNCisgICAgY2FzZSBSVENfUkVGX0NMQ0tfNE1IWjoNCisgICAgICAgIGlmICggcGVy
aW9kX2NvZGUgIT0gMCApDQorICAgICAgICB7DQorICAgICAgICAgICAgcGVyaW9kID0gMSA8PCAo
cGVyaW9kX2NvZGUgLSAxKTsgLyogcGVyaW9kIGluIDMyIEtoeiBjeWNsZXMgKi8NCisgICAgICAg
ICAgICBwZXJpb2QgPSBESVZfUk9VTkQocGVyaW9kICogMTAwMDAwMDAwMFVMTCwgMzI3NjgpOyAv
KiBpbiBucyAqLw0KKyAgICAgICAgICAgIGNyZWF0ZV9wZXJpb2RpY190aW1lKHYsICZzLT5wdCwg
cGVyaW9kLCBwZXJpb2QsIFJUQ19JUlEsIE5VTEwsIHMpOw0KKyAgICAgICAgICAgIGJyZWFrOw0K
KyAgICAgICAgfQ0KKyAgICAgICAgLyogZmFsbCB0aHJvdWdoICovDQorICAgIGRlZmF1bHQ6DQog
ICAgICAgICBkZXN0cm95X3BlcmlvZGljX3RpbWUoJnMtPnB0KTsNCisgICAgICAgIGJyZWFrOw0K
ICAgICB9DQogfQ0KDQpAQCAtMTAyLDcgKzEyMSw3IEBAIHN0YXRpYyB2b2lkIGNoZWNrX3VwZGF0
ZV90aW1lcihSVENTdGF0ZSANCiAgICAgICAgIGd1ZXN0X3VzZWMgPSBnZXRfbG9jYWx0aW1lX3Vz
KGQpICUgVVNFQ19QRVJfU0VDOw0KICAgICAgICAgaWYgKGd1ZXN0X3VzZWMgPj0gKFVTRUNfUEVS
X1NFQyAtIDI0NCkpDQogICAgICAgICB7DQotICAgICAgICAgICAgLyogUlRDIGlzIGluIHVwZGF0
ZSBjeWNsZSB3aGVuIGVuYWJsaW5nIFVJRSAqLw0KKyAgICAgICAgICAgIC8qIFJUQyBpcyBpbiB1
cGRhdGUgY3ljbGUgKi8NCiAgICAgICAgICAgICBzLT5ody5jbW9zX2RhdGFbUlRDX1JFR19BXSB8
PSBSVENfVUlQOw0KICAgICAgICAgICAgIG5leHRfdXBkYXRlX3RpbWUgPSAoVVNFQ19QRVJfU0VD
IC0gZ3Vlc3RfdXNlYykgKiBOU19QRVJfVVNFQzsNCiAgICAgICAgICAgICBleHBpcmVfdGltZSA9
IE5PVygpICsgbmV4dF91cGRhdGVfdGltZTsNCkBAIC0xNDQsNyArMTYzLDYgQEAgc3RhdGljIHZv
aWQgcnRjX3VwZGF0ZV90aW1lcih2b2lkICpvcGFxdQ0KIHN0YXRpYyB2b2lkIHJ0Y191cGRhdGVf
dGltZXIyKHZvaWQgKm9wYXF1ZSkNCiB7DQogICAgIFJUQ1N0YXRlICpzID0gb3BhcXVlOw0KLSAg
ICBzdHJ1Y3QgZG9tYWluICpkID0gdnJ0Y19kb21haW4ocyk7DQoNCiAgICAgc3Bpbl9sb2NrKCZz
LT5sb2NrKTsNCiAgICAgaWYgKCEocy0+aHcuY21vc19kYXRhW1JUQ19SRUdfQl0gJiBSVENfU0VU
KSkNCkBAIC0xNTIsMTEgKzE3MCw3IEBAIHN0YXRpYyB2b2lkIHJ0Y191cGRhdGVfdGltZXIyKHZv
aWQgKm9wYXENCiAgICAgICAgIHMtPmh3LmNtb3NfZGF0YVtSVENfUkVHX0NdIHw9IFJUQ19VRjsN
CiAgICAgICAgIHMtPmh3LmNtb3NfZGF0YVtSVENfUkVHX0FdICY9IH5SVENfVUlQOw0KICAgICAg
ICAgaWYgKChzLT5ody5jbW9zX2RhdGFbUlRDX1JFR19CXSAmIFJUQ19VSUUpKQ0KLSAgICAgICAg
ew0KLSAgICAgICAgICAgIHMtPmh3LmNtb3NfZGF0YVtSVENfUkVHX0NdIHw9IFJUQ19JUlFGOw0K
LSAgICAgICAgICAgIGh2bV9pc2FfaXJxX2RlYXNzZXJ0KGQsIFJUQ19JUlEpOw0KLSAgICAgICAg
ICAgIGh2bV9pc2FfaXJxX2Fzc2VydChkLCBSVENfSVJRKTsNCi0gICAgICAgIH0NCisgICAgICAg
ICAgICBydGNfdG9nZ2xlX2lycShzKTsNCiAgICAgICAgIGNoZWNrX3VwZGF0ZV90aW1lcihzKTsN
CiAgICAgfQ0KICAgICBzcGluX3VubG9jaygmcy0+bG9jayk7DQpAQCAtMTc1LDIxICsxODksMTgg
QEAgc3RhdGljIHZvaWQgYWxhcm1fdGltZXJfdXBkYXRlKFJUQ1N0YXRlIA0KDQogICAgIHN0b3Bf
dGltZXIoJnMtPmFsYXJtX3RpbWVyKTsNCg0KLSAgICBpZiAoKHMtPmh3LmNtb3NfZGF0YVtSVENf
UkVHX0JdICYgUlRDX0FJRSkgJiYNCi0gICAgICAgICAgICAhKHMtPmh3LmNtb3NfZGF0YVtSVENf
UkVHX0JdICYgUlRDX1NFVCkpDQorICAgIGlmICggIShzLT5ody5jbW9zX2RhdGFbUlRDX1JFR19C
XSAmIFJUQ19TRVQpICkNCiAgICAgew0KICAgICAgICAgcy0+Y3VycmVudF90bSA9IGdtdGltZShn
ZXRfbG9jYWx0aW1lKGQpKTsNCiAgICAgICAgIHJ0Y19jb3B5X2RhdGUocyk7DQoNCiAgICAgICAg
IGFsYXJtX3NlYyA9IGZyb21fYmNkKHMsIHMtPmh3LmNtb3NfZGF0YVtSVENfU0VDT05EU19BTEFS
TV0pOw0KICAgICAgICAgYWxhcm1fbWluID0gZnJvbV9iY2Qocywgcy0+aHcuY21vc19kYXRhW1JU
Q19NSU5VVEVTX0FMQVJNXSk7DQotICAgICAgICBhbGFybV9ob3VyID0gZnJvbV9iY2Qocywgcy0+
aHcuY21vc19kYXRhW1JUQ19IT1VSU19BTEFSTV0pOw0KLSAgICAgICAgYWxhcm1faG91ciA9IGNv
bnZlcnRfaG91cihzLCBhbGFybV9ob3VyKTsNCisgICAgICAgIGFsYXJtX2hvdXIgPSBjb252ZXJ0
X2hvdXIocywgcy0+aHcuY21vc19kYXRhW1JUQ19IT1VSU19BTEFSTV0pOw0KDQogICAgICAgICBj
dXJfc2VjID0gZnJvbV9iY2Qocywgcy0+aHcuY21vc19kYXRhW1JUQ19TRUNPTkRTXSk7DQogICAg
ICAgICBjdXJfbWluID0gZnJvbV9iY2Qocywgcy0+aHcuY21vc19kYXRhW1JUQ19NSU5VVEVTXSk7
DQotICAgICAgICBjdXJfaG91ciA9IGZyb21fYmNkKHMsIHMtPmh3LmNtb3NfZGF0YVtSVENfSE9V
UlNdKTsNCi0gICAgICAgIGN1cl9ob3VyID0gY29udmVydF9ob3VyKHMsIGN1cl9ob3VyKTsNCisg
ICAgICAgIGN1cl9ob3VyID0gY29udmVydF9ob3VyKHMsIHMtPmh3LmNtb3NfZGF0YVtSVENfSE9V
UlNdKTsNCg0KICAgICAgICAgbmV4dF91cGRhdGVfdGltZSA9IFVTRUNfUEVSX1NFQyAtIChnZXRf
bG9jYWx0aW1lX3VzKGQpICUgVVNFQ19QRVJfU0VDKTsNCiAgICAgICAgIG5leHRfdXBkYXRlX3Rp
bWUgPSBuZXh0X3VwZGF0ZV90aW1lICogTlNfUEVSX1VTRUMgKyBOT1coKTsNCkBAIC0zNDMsNyAr
MzU0LDYgQEAgc3RhdGljIHZvaWQgYWxhcm1fdGltZXJfdXBkYXRlKFJUQ1N0YXRlIA0KIHN0YXRp
YyB2b2lkIHJ0Y19hbGFybV9jYih2b2lkICpvcGFxdWUpDQogew0KICAgICBSVENTdGF0ZSAqcyA9
IG9wYXF1ZTsNCi0gICAgc3RydWN0IGRvbWFpbiAqZCA9IHZydGNfZG9tYWluKHMpOw0KDQogICAg
IHNwaW5fbG9jaygmcy0+bG9jayk7DQogICAgIGlmICghKHMtPmh3LmNtb3NfZGF0YVtSVENfUkVH
X0JdICYgUlRDX1NFVCkpDQpAQCAtMzUxLDExICszNjEsNyBAQCBzdGF0aWMgdm9pZCBydGNfYWxh
cm1fY2Iodm9pZCAqb3BhcXVlKQ0KICAgICAgICAgcy0+aHcuY21vc19kYXRhW1JUQ19SRUdfQ10g
fD0gUlRDX0FGOw0KICAgICAgICAgLyogYWxhcm0gaW50ZXJydXB0ICovDQogICAgICAgICBpZiAo
cy0+aHcuY21vc19kYXRhW1JUQ19SRUdfQl0gJiBSVENfQUlFKQ0KLSAgICAgICAgew0KLSAgICAg
ICAgICAgIHMtPmh3LmNtb3NfZGF0YVtSVENfUkVHX0NdIHw9IFJUQ19JUlFGOw0KLSAgICAgICAg
ICAgIGh2bV9pc2FfaXJxX2RlYXNzZXJ0KGQsIFJUQ19JUlEpOw0KLSAgICAgICAgICAgIGh2bV9p
c2FfaXJxX2Fzc2VydChkLCBSVENfSVJRKTsNCi0gICAgICAgIH0NCisgICAgICAgICAgICBydGNf
dG9nZ2xlX2lycShzKTsNCiAgICAgICAgIGFsYXJtX3RpbWVyX3VwZGF0ZShzKTsNCiAgICAgfQ0K
ICAgICBzcGluX3VubG9jaygmcy0+bG9jayk7DQpAQCAtMzY1LDYgKzM3MSw3IEBAIHN0YXRpYyBp
bnQgcnRjX2lvcG9ydF93cml0ZSh2b2lkICpvcGFxdWUNCiB7DQogICAgIFJUQ1N0YXRlICpzID0g
b3BhcXVlOw0KICAgICBzdHJ1Y3QgZG9tYWluICpkID0gdnJ0Y19kb21haW4ocyk7DQorICAgIHVp
bnQzMl90IG9yaWcsIG1hc2s7DQoNCiAgICAgc3Bpbl9sb2NrKCZzLT5sb2NrKTsNCg0KQEAgLTM4
Miw2ICszODksNyBAQCBzdGF0aWMgaW50IHJ0Y19pb3BvcnRfd3JpdGUodm9pZCAqb3BhcXVlDQog
ICAgICAgICByZXR1cm4gMDsNCiAgICAgfQ0KDQorICAgIG9yaWcgPSBzLT5ody5jbW9zX2RhdGFb
cy0+aHcuY21vc19pbmRleF07DQogICAgIHN3aXRjaCAoIHMtPmh3LmNtb3NfaW5kZXggKQ0KICAg
ICB7DQogICAgIGNhc2UgUlRDX1NFQ09ORFNfQUxBUk06DQpAQCAtNDA1LDkgKzQxMyw5IEBAIHN0
YXRpYyBpbnQgcnRjX2lvcG9ydF93cml0ZSh2b2lkICpvcGFxdWUNCiAgICAgICAgIGJyZWFrOw0K
ICAgICBjYXNlIFJUQ19SRUdfQToNCiAgICAgICAgIC8qIFVJUCBiaXQgaXMgcmVhZCBvbmx5ICov
DQotICAgICAgICBzLT5ody5jbW9zX2RhdGFbUlRDX1JFR19BXSA9IChkYXRhICYgflJUQ19VSVAp
IHwNCi0gICAgICAgICAgICAocy0+aHcuY21vc19kYXRhW1JUQ19SRUdfQV0gJiBSVENfVUlQKTsN
Ci0gICAgICAgIHJ0Y190aW1lcl91cGRhdGUocyk7DQorICAgICAgICBzLT5ody5jbW9zX2RhdGFb
UlRDX1JFR19BXSA9IChkYXRhICYgflJUQ19VSVApIHwgKG9yaWcgJiBSVENfVUlQKTsNCisgICAg
ICAgIGlmICggKGRhdGEgXiBvcmlnKSAmIChSVENfUkFURV9TRUxFQ1QgfCBSVENfRElWX0NUTCkg
KQ0KKyAgICAgICAgICAgIHJ0Y190aW1lcl91cGRhdGUocyk7DQogICAgICAgICBicmVhazsNCiAg
ICAgY2FzZSBSVENfUkVHX0I6DQogICAgICAgICBpZiAoIGRhdGEgJiBSVENfU0VUICkNCkBAIC00
MTUsNyArNDIzLDcgQEAgc3RhdGljIGludCBydGNfaW9wb3J0X3dyaXRlKHZvaWQgKm9wYXF1ZQ0K
ICAgICAgICAgICAgIC8qIHNldCBtb2RlOiByZXNldCBVSVAgbW9kZSAqLw0KICAgICAgICAgICAg
IHMtPmh3LmNtb3NfZGF0YVtSVENfUkVHX0FdICY9IH5SVENfVUlQOw0KICAgICAgICAgICAgIC8q
IGFkanVzdCBjbW9zIGJlZm9yZSBzdG9wcGluZyAqLw0KLSAgICAgICAgICAgIGlmICghKHMtPmh3
LmNtb3NfZGF0YVtSVENfUkVHX0JdICYgUlRDX1NFVCkpDQorICAgICAgICAgICAgaWYgKCEob3Jp
ZyAmIFJUQ19TRVQpKQ0KICAgICAgICAgICAgIHsNCiAgICAgICAgICAgICAgICAgcy0+Y3VycmVu
dF90bSA9IGdtdGltZShnZXRfbG9jYWx0aW1lKGQpKTsNCiAgICAgICAgICAgICAgICAgcnRjX2Nv
cHlfZGF0ZShzKTsNCkBAIC00MjQsMjEgKzQzMiwyNiBAQCBzdGF0aWMgaW50IHJ0Y19pb3BvcnRf
d3JpdGUodm9pZCAqb3BhcXVlDQogICAgICAgICBlbHNlDQogICAgICAgICB7DQogICAgICAgICAg
ICAgLyogaWYgZGlzYWJsaW5nIHNldCBtb2RlLCB1cGRhdGUgdGhlIHRpbWUgKi8NCi0gICAgICAg
ICAgICBpZiAoIHMtPmh3LmNtb3NfZGF0YVtSVENfUkVHX0JdICYgUlRDX1NFVCApDQorICAgICAg
ICAgICAgaWYgKCBvcmlnICYgUlRDX1NFVCApDQogICAgICAgICAgICAgICAgIHJ0Y19zZXRfdGlt
ZShzKTsNCiAgICAgICAgIH0NCi0gICAgICAgIC8qIGlmIHRoZSBpbnRlcnJ1cHQgaXMgYWxyZWFk
eSBzZXQgd2hlbiB0aGUgaW50ZXJydXB0IGJlY29tZQ0KLSAgICAgICAgICogZW5hYmxlZCwgcmFp
c2UgYW4gaW50ZXJydXB0IGltbWVkaWF0ZWx5Ki8NCi0gICAgICAgIGlmICgoZGF0YSAmIFJUQ19V
SUUpICYmICEocy0+aHcuY21vc19kYXRhW1JUQ19SRUdfQl0gJiBSVENfVUlFKSkNCi0gICAgICAg
ICAgICBpZiAocy0+aHcuY21vc19kYXRhW1JUQ19SRUdfQ10gJiBSVENfVUYpDQorICAgICAgICAv
Kg0KKyAgICAgICAgICogSWYgdGhlIGludGVycnVwdCBpcyBhbHJlYWR5IHNldCB3aGVuIHRoZSBp
bnRlcnJ1cHQgYmVjb21lcw0KKyAgICAgICAgICogZW5hYmxlZCwgcmFpc2UgYW4gaW50ZXJydXB0
IGltbWVkaWF0ZWx5Lg0KKyAgICAgICAgICogTkI6IFJUQ197QSxQLFV9SUUgPT0gUlRDX3tBLFAs
VX1GIHJlc3BlY3RpdmVseS4NCisgICAgICAgICAqLw0KKyAgICAgICAgZm9yICggbWFzayA9IFJU
Q19VSUU7IG1hc2sgPD0gUlRDX1BJRTsgbWFzayA8PD0gMSApDQorICAgICAgICAgICAgaWYgKCAo
ZGF0YSAmIG1hc2spICYmICEob3JpZyAmIG1hc2spICYmDQorICAgICAgICAgICAgICAgICAocy0+
aHcuY21vc19kYXRhW1JUQ19SRUdfQ10gJiBtYXNrKSApDQogICAgICAgICAgICAgew0KLSAgICAg
ICAgICAgICAgICBodm1faXNhX2lycV9kZWFzc2VydChkLCBSVENfSVJRKTsNCi0gICAgICAgICAg
ICAgICAgaHZtX2lzYV9pcnFfYXNzZXJ0KGQsIFJUQ19JUlEpOw0KKyAgICAgICAgICAgICAgICBy
dGNfdG9nZ2xlX2lycShzKTsNCisgICAgICAgICAgICAgICAgYnJlYWs7DQogICAgICAgICAgICAg
fQ0KICAgICAgICAgcy0+aHcuY21vc19kYXRhW1JUQ19SRUdfQl0gPSBkYXRhOw0KLSAgICAgICAg
cnRjX3RpbWVyX3VwZGF0ZShzKTsNCi0gICAgICAgIGNoZWNrX3VwZGF0ZV90aW1lcihzKTsNCi0g
ICAgICAgIGFsYXJtX3RpbWVyX3VwZGF0ZShzKTsNCisgICAgICAgIGlmICggKGRhdGEgXiBvcmln
KSAmIFJUQ19TRVQgKQ0KKyAgICAgICAgICAgIGNoZWNrX3VwZGF0ZV90aW1lcihzKTsNCisgICAg
ICAgIGlmICggKGRhdGEgXiBvcmlnKSAmIChSVENfMjRIIHwgUlRDX0RNX0JJTkFSWSB8IFJUQ19T
RVQpICkNCisgICAgICAgICAgICBhbGFybV90aW1lcl91cGRhdGUocyk7DQogICAgICAgICBicmVh
azsNCiAgICAgY2FzZSBSVENfUkVHX0M6DQogICAgIGNhc2UgUlRDX1JFR19EOg0KQEAgLTQ1Myw3
ICs0NjYsNyBAQCBzdGF0aWMgaW50IHJ0Y19pb3BvcnRfd3JpdGUodm9pZCAqb3BhcXVlDQoNCiBz
dGF0aWMgaW5saW5lIGludCB0b19iY2QoUlRDU3RhdGUgKnMsIGludCBhKQ0KIHsNCi0gICAgaWYg
KCBzLT5ody5jbW9zX2RhdGFbUlRDX1JFR19CXSAmIDB4MDQgKQ0KKyAgICBpZiAoIHMtPmh3LmNt
b3NfZGF0YVtSVENfUkVHX0JdICYgUlRDX0RNX0JJTkFSWSApDQogICAgICAgICByZXR1cm4gYTsN
CiAgICAgZWxzZQ0KICAgICAgICAgcmV0dXJuICgoYSAvIDEwKSA8PCA0KSB8IChhICUgMTApOw0K
QEAgLTQ2MSw3ICs0NzQsNyBAQCBzdGF0aWMgaW5saW5lIGludCB0b19iY2QoUlRDU3RhdGUgKnMs
IGluDQoNCiBzdGF0aWMgaW5saW5lIGludCBmcm9tX2JjZChSVENTdGF0ZSAqcywgaW50IGEpDQog
ew0KLSAgICBpZiAoIHMtPmh3LmNtb3NfZGF0YVtSVENfUkVHX0JdICYgMHgwNCApDQorICAgIGlm
ICggcy0+aHcuY21vc19kYXRhW1JUQ19SRUdfQl0gJiBSVENfRE1fQklOQVJZICkNCiAgICAgICAg
IHJldHVybiBhOw0KICAgICBlbHNlDQogICAgICAgICByZXR1cm4gKChhID4+IDQpICogMTApICsg
KGEgJiAweDBmKTsNCkBAIC00NjksMTIgKzQ4MiwxNCBAQCBzdGF0aWMgaW5saW5lIGludCBmcm9t
X2JjZChSVENTdGF0ZSAqcywgDQoNCiAvKiBIb3VycyBpbiAxMiBob3VyIG1vZGUgYXJlIGluIDEt
MTIgcmFuZ2UsIG5vdCAwLTExLg0KICAqIFNvIHdlIG5lZWQgY29udmVydCBpdCBiZWZvcmUgdXNp
bmcgaXQqLw0KLXN0YXRpYyBpbmxpbmUgaW50IGNvbnZlcnRfaG91cihSVENTdGF0ZSAqcywgaW50
IGhvdXIpDQorc3RhdGljIGlubGluZSBpbnQgY29udmVydF9ob3VyKFJUQ1N0YXRlICpzLCBpbnQg
cmF3KQ0KIHsNCisgICAgaW50IGhvdXIgPSBmcm9tX2JjZChzLCByYXcgJiAweDdmKTsNCisNCiAg
ICAgaWYgKCEocy0+aHcuY21vc19kYXRhW1JUQ19SRUdfQl0gJiBSVENfMjRIKSkNCiAgICAgew0K
ICAgICAgICAgaG91ciAlPSAxMjsNCi0gICAgICAgIGlmIChzLT5ody5jbW9zX2RhdGFbUlRDX0hP
VVJTXSAmIDB4ODApDQorICAgICAgICBpZiAocmF3ICYgMHg4MCkNCiAgICAgICAgICAgICBob3Vy
ICs9IDEyOw0KICAgICB9DQogICAgIHJldHVybiBob3VyOw0KQEAgLTQ5Myw4ICs1MDgsNyBAQCBz
dGF0aWMgdm9pZCBydGNfc2V0X3RpbWUoUlRDU3RhdGUgKnMpDQogICAgIA0KICAgICB0bS0+dG1f
c2VjID0gZnJvbV9iY2Qocywgcy0+aHcuY21vc19kYXRhW1JUQ19TRUNPTkRTXSk7DQogICAgIHRt
LT50bV9taW4gPSBmcm9tX2JjZChzLCBzLT5ody5jbW9zX2RhdGFbUlRDX01JTlVURVNdKTsNCi0g
ICAgdG0tPnRtX2hvdXIgPSBmcm9tX2JjZChzLCBzLT5ody5jbW9zX2RhdGFbUlRDX0hPVVJTXSAm
IDB4N2YpOw0KLSAgICB0bS0+dG1faG91ciA9IGNvbnZlcnRfaG91cihzLCB0bS0+dG1faG91cik7
DQorICAgIHRtLT50bV9ob3VyID0gY29udmVydF9ob3VyKHMsIHMtPmh3LmNtb3NfZGF0YVtSVENf
SE9VUlNdKTsNCiAgICAgdG0tPnRtX3dkYXkgPSBmcm9tX2JjZChzLCBzLT5ody5jbW9zX2RhdGFb
UlRDX0RBWV9PRl9XRUVLXSk7DQogICAgIHRtLT50bV9tZGF5ID0gZnJvbV9iY2Qocywgcy0+aHcu
Y21vc19kYXRhW1JUQ19EQVlfT0ZfTU9OVEhdKTsNCiAgICAgdG0tPnRtX21vbiA9IGZyb21fYmNk
KHMsIHMtPmh3LmNtb3NfZGF0YVtSVENfTU9OVEhdKSAtIDE7DQotLS0gYS94ZW4vYXJjaC94ODYv
aHZtL3ZwdC5jDQorKysgYi94ZW4vYXJjaC94ODYvaHZtL3ZwdC5jDQpAQCAtMjIsNiArMjIsNyBA
QA0KICNpbmNsdWRlIDxhc20vaHZtL3ZwdC5oPg0KICNpbmNsdWRlIDxhc20vZXZlbnQuaD4NCiAj
aW5jbHVkZSA8YXNtL2FwaWMuaD4NCisjaW5jbHVkZSA8YXNtL21jMTQ2ODE4cnRjLmg+DQoNCiAj
ZGVmaW5lIG1vZGVfaXMoZCwgbmFtZSkgXA0KICAgICAoKGQpLT5hcmNoLmh2bV9kb21haW4ucGFy
YW1zW0hWTV9QQVJBTV9USU1FUl9NT0RFXSA9PSBIVk1QVE1fIyNuYW1lKQ0KQEAgLTIxOCw2ICsy
MTksNyBAQCB2b2lkIHB0X3VwZGF0ZV9pcnEoc3RydWN0IHZjcHUgKnYpDQogICAgIHN0cnVjdCBw
ZXJpb2RpY190aW1lICpwdCwgKnRlbXAsICplYXJsaWVzdF9wdCA9IE5VTEw7DQogICAgIHVpbnQ2
NF90IG1heF9sYWcgPSAtMVVMTDsNCiAgICAgaW50IGlycSwgaXNfbGFwaWM7DQorICAgIHZvaWQg
KnB0X3ByaXY7DQoNCiAgICAgc3Bpbl9sb2NrKCZ2LT5hcmNoLmh2bV92Y3B1LnRtX2xvY2spOw0K
DQpAQCAtMjUxLDEzICsyNTMsMTQgQEAgdm9pZCBwdF91cGRhdGVfaXJxKHN0cnVjdCB2Y3B1ICp2
KQ0KICAgICBlYXJsaWVzdF9wdC0+aXJxX2lzc3VlZCA9IDE7DQogICAgIGlycSA9IGVhcmxpZXN0
X3B0LT5pcnE7DQogICAgIGlzX2xhcGljID0gKGVhcmxpZXN0X3B0LT5zb3VyY2UgPT0gUFRTUkNf
bGFwaWMpOw0KKyAgICBwdF9wcml2ID0gZWFybGllc3RfcHQtPnByaXY7DQoNCiAgICAgc3Bpbl91
bmxvY2soJnYtPmFyY2guaHZtX3ZjcHUudG1fbG9jayk7DQoNCiAgICAgaWYgKCBpc19sYXBpYyAp
DQotICAgIHsNCiAgICAgICAgIHZsYXBpY19zZXRfaXJxKHZjcHVfdmxhcGljKHYpLCBpcnEsIDAp
Ow0KLSAgICB9DQorICAgIGVsc2UgaWYgKCBpcnEgPT0gUlRDX0lSUSApDQorICAgICAgICBydGNf
cGVyaW9kaWNfaW50ZXJydXB0KHB0X3ByaXYpOw0KICAgICBlbHNlDQogICAgIHsNCiAgICAgICAg
IGh2bV9pc2FfaXJxX2RlYXNzZXJ0KHYtPmRvbWFpbiwgaXJxKTsNCi0tLSBhL3hlbi9pbmNsdWRl
L2FzbS14ODYvaHZtL3ZwdC5oDQorKysgYi94ZW4vaW5jbHVkZS9hc20teDg2L2h2bS92cHQuaA0K
QEAgLTE4MSw2ICsxODEsNyBAQCB2b2lkIHJ0Y19taWdyYXRlX3RpbWVycyhzdHJ1Y3QgdmNwdSAq
dik7DQogdm9pZCBydGNfZGVpbml0KHN0cnVjdCBkb21haW4gKmQpOw0KIHZvaWQgcnRjX3Jlc2V0
KHN0cnVjdCBkb21haW4gKmQpOw0KIHZvaWQgcnRjX3VwZGF0ZV9jbG9jayhzdHJ1Y3QgZG9tYWlu
ICpkKTsNCit2b2lkIHJ0Y19wZXJpb2RpY19pbnRlcnJ1cHQodm9pZCAqKTsNCg0KIHZvaWQgcG10
aW1lcl9pbml0KHN0cnVjdCB2Y3B1ICp2KTsNCiB2b2lkIHBtdGltZXJfZGVpbml0KHN0cnVjdCBk
b21haW4gKmQpOw0KDQoNCl9fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fX19fDQpYZW4tZGV2ZWwgbWFpbGluZyBsaXN0DQpYZW4tZGV2ZWxAbGlzdHMueGVuLm9yZw0K
aHR0cDovL2xpc3RzLnhlbi5vcmcveGVuLWRldmVs

ata[RTC_REG_A]&nbsp;&amp;=3D&nbsp;~RTC_UIP;</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;if&nbsp;((s-&gt=
;hw.cmos_data[RTC_REG_B]&nbsp;&amp;&nbsp;RTC_UIE))</DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;{</DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
bsp;s-&gt;hw.cmos_data[RTC_REG_C]&nbsp;|=3D&nbsp;RTC_IRQF;</DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
bsp;hvm_isa_irq_deassert(d,&nbsp;RTC_IRQ);</DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
bsp;hvm_isa_irq_assert(d,&nbsp;RTC_IRQ);</DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;}</DIV>
<DIV>+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
bsp;rtc_toggle_irq(s);</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;check_update_ti=
mer(s);</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;}</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;spin_unlock(&amp;s-&gt;lock);</DIV>
<DIV>@@&nbsp;-175,21&nbsp;+189,18&nbsp;@@&nbsp;static&nbsp;void&nbsp;alarm=
_timer_update(RTCState&nbsp;</DIV>
<DIV>&nbsp;</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;stop_timer(&amp;s-&gt;alarm_timer);</DI=
V>
<DIV>&nbsp;</DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;if&nbsp;((s-&gt;hw.cmos_data[RTC_REG_B]&nbsp=
;&amp;&nbsp;RTC_AIE)&nbsp;&amp;&amp;</DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
bsp;!(s-&gt;hw.cmos_data[RTC_REG_B]&nbsp;&amp;&nbsp;RTC_SET))</DIV>
<DIV>+&nbsp;&nbsp;&nbsp;&nbsp;if&nbsp;(&nbsp;!(s-&gt;hw.cmos_data[RTC_REG_=
B]&nbsp;&amp;&nbsp;RTC_SET)&nbsp;)</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;{</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;s-&gt;current_t=
m&nbsp;=3D&nbsp;gmtime(get_localtime(d));</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;rtc_copy_date(s=
);</DIV>
<DIV>&nbsp;</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;alarm_sec&nbsp;=
=3D&nbsp;from_bcd(s,&nbsp;s-&gt;hw.cmos_data[RTC_SECONDS_ALARM]);</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;alarm_min&nbsp;=
=3D&nbsp;from_bcd(s,&nbsp;s-&gt;hw.cmos_data[RTC_MINUTES_ALARM]);</DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;alarm_hour&nbsp;=3D&=
nbsp;from_bcd(s,&nbsp;s-&gt;hw.cmos_data[RTC_HOURS_ALARM]);</DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;alarm_hour&nbsp;=3D&=
nbsp;convert_hour(s,&nbsp;alarm_hour);</DIV>
<DIV>+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;alarm_hour&nbsp;=3D&=
nbsp;convert_hour(s,&nbsp;s-&gt;hw.cmos_data[RTC_HOURS_ALARM]);</DIV>
<DIV>&nbsp;</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;cur_sec&nbsp;=
=3D&nbsp;from_bcd(s,&nbsp;s-&gt;hw.cmos_data[RTC_SECONDS]);</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;cur_min&nbsp;=
=3D&nbsp;from_bcd(s,&nbsp;s-&gt;hw.cmos_data[RTC_MINUTES]);</DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;cur_hour&nbsp;=3D&nb=
sp;from_bcd(s,&nbsp;s-&gt;hw.cmos_data[RTC_HOURS]);</DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;cur_hour&nbsp;=3D&nb=
sp;convert_hour(s,&nbsp;cur_hour);</DIV>
<DIV>+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;cur_hour&nbsp;=3D&nb=
sp;convert_hour(s,&nbsp;s-&gt;hw.cmos_data[RTC_HOURS]);</DIV>
<DIV>&nbsp;</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;next_update_tim=
e&nbsp;=3D&nbsp;USEC_PER_SEC&nbsp;-&nbsp;(get_localtime_us(d)&nbsp;%&nbsp;=
USEC_PER_SEC);</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;next_update_tim=
e&nbsp;=3D&nbsp;next_update_time&nbsp;*&nbsp;NS_PER_USEC&nbsp;+&nbsp;NOW()=
;</DIV>
<DIV>@@&nbsp;-343,7&nbsp;+354,6&nbsp;@@&nbsp;static&nbsp;void&nbsp;alarm_t=
imer_update(RTCState&nbsp;</DIV>
<DIV>&nbsp;static&nbsp;void&nbsp;rtc_alarm_cb(void&nbsp;*opaque)</DIV>
<DIV>&nbsp;{</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;RTCState&nbsp;*s&nbsp;=3D&nbsp;opaque;<=
/DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;struct&nbsp;domain&nbsp;*d&nbsp;=3D&nbsp;vrt=
c_domain(s);</DIV>
<DIV>&nbsp;</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;spin_lock(&amp;s-&gt;lock);</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;if&nbsp;(!(s-&gt;hw.cmos_data[RTC_REG_B=
]&nbsp;&amp;&nbsp;RTC_SET))</DIV>
<DIV>@@&nbsp;-351,11&nbsp;+361,7&nbsp;@@&nbsp;static&nbsp;void&nbsp;rtc_al=
arm_cb(void&nbsp;*opaque)</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;s-&gt;hw.cmos_d=
ata[RTC_REG_C]&nbsp;|=3D&nbsp;RTC_AF;</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;/*&nbsp;alarm&n=
bsp;interrupt&nbsp;*/</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;if&nbsp;(s-&gt;=
hw.cmos_data[RTC_REG_B]&nbsp;&amp;&nbsp;RTC_AIE)</DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;{</DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
bsp;s-&gt;hw.cmos_data[RTC_REG_C]&nbsp;|=3D&nbsp;RTC_IRQF;</DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
bsp;hvm_isa_irq_deassert(d,&nbsp;RTC_IRQ);</DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
bsp;hvm_isa_irq_assert(d,&nbsp;RTC_IRQ);</DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;}</DIV>
<DIV>+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
bsp;rtc_toggle_irq(s);</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;alarm_timer_upd=
ate(s);</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;}</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;spin_unlock(&amp;s-&gt;lock);</DIV>
<DIV>@@&nbsp;-365,6&nbsp;+371,7&nbsp;@@&nbsp;static&nbsp;int&nbsp;rtc_iopo=
rt_write(void&nbsp;*opaque</DIV>
<DIV>&nbsp;{</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;RTCState&nbsp;*s&nbsp;=3D&nbsp;opaque;<=
/DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;struct&nbsp;domain&nbsp;*d&nbsp;=3D&nbs=
p;vrtc_domain(s);</DIV>
<DIV>+&nbsp;&nbsp;&nbsp;&nbsp;uint32_t&nbsp;orig,&nbsp;mask;</DIV>
<DIV>&nbsp;</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;spin_lock(&amp;s-&gt;lock);</DIV>
<DIV>&nbsp;</DIV>
<DIV>@@&nbsp;-382,6&nbsp;+389,7&nbsp;@@&nbsp;static&nbsp;int&nbsp;rtc_iopo=
rt_write(void&nbsp;*opaque</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;return&nbsp;0;<=
/DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;}</DIV>
<DIV>&nbsp;</DIV>
<DIV>+&nbsp;&nbsp;&nbsp;&nbsp;orig&nbsp;=3D&nbsp;s-&gt;hw.cmos_data[s-&gt;=
hw.cmos_index];</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;switch&nbsp;(&nbsp;s-&gt;hw.cmos_index&=
nbsp;)</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;{</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;case&nbsp;RTC_SECONDS_ALARM:</DIV>
<DIV>@@&nbsp;-405,9&nbsp;+413,9&nbsp;@@&nbsp;static&nbsp;int&nbsp;rtc_iopo=
rt_write(void&nbsp;*opaque</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;break;</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;case&nbsp;RTC_REG_A:</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;/*&nbsp;UIP&nbs=
p;bit&nbsp;is&nbsp;read&nbsp;only&nbsp;*/</DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;s-&gt;hw.cmos_data[R=
TC_REG_A]&nbsp;=3D&nbsp;(data&nbsp;&amp;&nbsp;~RTC_UIP)&nbsp;|</DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
bsp;(s-&gt;hw.cmos_data[RTC_REG_A]&nbsp;&amp;&nbsp;RTC_UIP);</DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;rtc_timer_update(s);=
</DIV>
<DIV>+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;s-&gt;hw.cmos_data[R=
TC_REG_A]&nbsp;=3D&nbsp;(data&nbsp;&amp;&nbsp;~RTC_UIP)&nbsp;|&nbsp;(orig&=
nbsp;&amp;&nbsp;RTC_UIP);</DIV>
<DIV>+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;if&nbsp;(&nbsp;(data=
&nbsp;^&nbsp;orig)&nbsp;&amp;&nbsp;(RTC_RATE_SELECT&nbsp;|&nbsp;RTC_DIV_CT=
L)&nbsp;)</DIV>
<DIV>+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
bsp;rtc_timer_update(s);</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;break;</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;case&nbsp;RTC_REG_B:</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;if&nbsp;(&nbsp;=
data&nbsp;&amp;&nbsp;RTC_SET&nbsp;)</DIV>
<DIV>@@&nbsp;-415,7&nbsp;+423,7&nbsp;@@&nbsp;static&nbsp;int&nbsp;rtc_iopo=
rt_write(void&nbsp;*opaque</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nb=
sp;&nbsp;/*&nbsp;set&nbsp;mode:&nbsp;reset&nbsp;UIP&nbsp;mode&nbsp;*/</DIV=
>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nb=
sp;&nbsp;s-&gt;hw.cmos_data[RTC_REG_A]&nbsp;&amp;=3D&nbsp;~RTC_UIP;</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nb=
sp;&nbsp;/*&nbsp;adjust&nbsp;cmos&nbsp;before&nbsp;stopping&nbsp;*/</DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
bsp;if&nbsp;(!(s-&gt;hw.cmos_data[RTC_REG_B]&nbsp;&amp;&nbsp;RTC_SET))</DI=
V>
<DIV>+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
bsp;if&nbsp;(!(orig&nbsp;&amp;&nbsp;RTC_SET))</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nb=
sp;&nbsp;{</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nb=
sp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;s-&gt;current_tm&nbsp;=3D&nbsp;gmtime(get=
_localtime(d));</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nb=
sp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;rtc_copy_date(s);</DIV>
<DIV>@@&nbsp;-424,21&nbsp;+432,26&nbsp;@@&nbsp;static&nbsp;int&nbsp;rtc_io=
port_write(void&nbsp;*opaque</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;else</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;{</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nb=
sp;&nbsp;/*&nbsp;if&nbsp;disabling&nbsp;set&nbsp;mode,&nbsp;update&nbsp;th=
e&nbsp;time&nbsp;*/</DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
bsp;if&nbsp;(&nbsp;s-&gt;hw.cmos_data[RTC_REG_B]&nbsp;&amp;&nbsp;RTC_SET&n=
bsp;)</DIV>
<DIV>+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
bsp;if&nbsp;(&nbsp;orig&nbsp;&amp;&nbsp;RTC_SET&nbsp;)</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nb=
sp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;rtc_set_time(s);</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;}</DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;/*&nbsp;if&nbsp;the&=
nbsp;interrupt&nbsp;is&nbsp;already&nbsp;set&nbsp;when&nbsp;the&nbsp;inter=
rupt&nbsp;become</DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;*&nbsp;enabled=
,&nbsp;raise&nbsp;an&nbsp;interrupt&nbsp;immediately*/</DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;if&nbsp;((data&nbsp;=
&amp;&nbsp;RTC_UIE)&nbsp;&amp;&amp;&nbsp;!(s-&gt;hw.cmos_data[RTC_REG_B]&n=
bsp;&amp;&nbsp;RTC_UIE))</DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
bsp;if&nbsp;(s-&gt;hw.cmos_data[RTC_REG_C]&nbsp;&amp;&nbsp;RTC_UF)</DIV>
<DIV>+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;/*</DIV>
<DIV>+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;*&nbsp;If&nbsp=
;the&nbsp;interrupt&nbsp;is&nbsp;already&nbsp;set&nbsp;when&nbsp;the&nbsp;=
interrupt&nbsp;becomes</DIV>
<DIV>+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;*&nbsp;enabled=
,&nbsp;raise&nbsp;an&nbsp;interrupt&nbsp;immediately.</DIV>
<DIV>+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;*&nbsp;NB:&nbs=
p;RTC_{A,P,U}IE&nbsp;=3D=3D&nbsp;RTC_{A,P,U}F&nbsp;respectively.</DIV>
<DIV>+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;*/</DIV>
<DIV>+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;for&nbsp;(&nbsp;mask=
&nbsp;=3D&nbsp;RTC_UIE;&nbsp;mask&nbsp;&lt;=3D&nbsp;RTC_PIE;&nbsp;mask&nbs=
p;&lt;&lt;=3D&nbsp;1&nbsp;)</DIV>
<DIV>+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
bsp;if&nbsp;(&nbsp;(data&nbsp;&amp;&nbsp;mask)&nbsp;&amp;&amp;&nbsp;!(orig=
&nbsp;&amp;&nbsp;mask)&nbsp;&amp;&amp;</DIV>
<DIV>+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
bsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;(s-&gt;hw.cmos_data[RTC_REG_C]&nbsp;&amp=
;&nbsp;mask)&nbsp;)</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nb=
sp;&nbsp;{</DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
bsp;&nbsp;&nbsp;&nbsp;&nbsp;hvm_isa_irq_deassert(d,&nbsp;RTC_IRQ);</DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
bsp;&nbsp;&nbsp;&nbsp;&nbsp;hvm_isa_irq_assert(d,&nbsp;RTC_IRQ);</DIV>
<DIV>+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
bsp;&nbsp;&nbsp;&nbsp;&nbsp;rtc_toggle_irq(s);</DIV>
<DIV>+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
bsp;&nbsp;&nbsp;&nbsp;&nbsp;break;</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nb=
sp;&nbsp;}</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;s-&gt;hw.cmos_d=
ata[RTC_REG_B]&nbsp;=3D&nbsp;data;</DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;rtc_timer_update(s);=
</DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;check_update_timer(s=
);</DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;alarm_timer_update(s=
);</DIV>
<DIV>+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;if&nbsp;(&nbsp;(data=
&nbsp;^&nbsp;orig)&nbsp;&amp;&nbsp;RTC_SET&nbsp;)</DIV>
<DIV>+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
bsp;check_update_timer(s);</DIV>
<DIV>+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;if&nbsp;(&nbsp;(data=
&nbsp;^&nbsp;orig)&nbsp;&amp;&nbsp;(RTC_24H&nbsp;|&nbsp;RTC_DM_BINARY&nbsp=
;|&nbsp;RTC_SET)&nbsp;)</DIV>
<DIV>+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
bsp;alarm_timer_update(s);</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;break;</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;case&nbsp;RTC_REG_C:</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;case&nbsp;RTC_REG_D:</DIV>
<DIV>@@&nbsp;-453,7&nbsp;+466,7&nbsp;@@&nbsp;static&nbsp;int&nbsp;rtc_iopo=
rt_write(void&nbsp;*opaque</DIV>
<DIV>&nbsp;</DIV>
<DIV>&nbsp;static&nbsp;inline&nbsp;int&nbsp;to_bcd(RTCState&nbsp;*s,&nbsp;=
int&nbsp;a)</DIV>
<DIV>&nbsp;{</DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;if&nbsp;(&nbsp;s-&gt;hw.cmos_data[RTC_REG_B]=
&nbsp;&amp;&nbsp;0x04&nbsp;)</DIV>
<DIV>+&nbsp;&nbsp;&nbsp;&nbsp;if&nbsp;(&nbsp;s-&gt;hw.cmos_data[RTC_REG_B]=
&nbsp;&amp;&nbsp;RTC_DM_BINARY&nbsp;)</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;return&nbsp;a;<=
/DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;else</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;return&nbsp;((a=
&nbsp;/&nbsp;10)&nbsp;&lt;&lt;&nbsp;4)&nbsp;|&nbsp;(a&nbsp;%&nbsp;10);</DI=
V>
<DIV>@@&nbsp;-461,7&nbsp;+474,7&nbsp;@@&nbsp;static&nbsp;inline&nbsp;int&n=
bsp;to_bcd(RTCState&nbsp;*s,&nbsp;in</DIV>
<DIV>&nbsp;</DIV>
<DIV>&nbsp;static&nbsp;inline&nbsp;int&nbsp;from_bcd(RTCState&nbsp;*s,&nbs=
p;int&nbsp;a)</DIV>
<DIV>&nbsp;{</DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;if&nbsp;(&nbsp;s-&gt;hw.cmos_data[RTC_REG_B]=
&nbsp;&amp;&nbsp;0x04&nbsp;)</DIV>
<DIV>+&nbsp;&nbsp;&nbsp;&nbsp;if&nbsp;(&nbsp;s-&gt;hw.cmos_data[RTC_REG_B]=
&nbsp;&amp;&nbsp;RTC_DM_BINARY&nbsp;)</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;return&nbsp;a;<=
/DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;else</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;return&nbsp;((a=
&nbsp;&gt;&gt;&nbsp;4)&nbsp;*&nbsp;10)&nbsp;+&nbsp;(a&nbsp;&amp;&nbsp;0x0f=
);</DIV>
<DIV>@@&nbsp;-469,12&nbsp;+482,14&nbsp;@@&nbsp;static&nbsp;inline&nbsp;int=
&nbsp;from_bcd(RTCState&nbsp;*s,&nbsp;</DIV>
<DIV>&nbsp;</DIV>
<DIV>&nbsp;/*&nbsp;Hours&nbsp;in&nbsp;12&nbsp;hour&nbsp;mode&nbsp;are&nbsp=
;in&nbsp;1-12&nbsp;range,&nbsp;not&nbsp;0-11.</DIV>
<DIV>&nbsp;&nbsp;*&nbsp;So&nbsp;we&nbsp;need&nbsp;convert&nbsp;it&nbsp;bef=
ore&nbsp;using&nbsp;it*/</DIV>
<DIV>-static&nbsp;inline&nbsp;int&nbsp;convert_hour(RTCState&nbsp;*s,&nbsp=
;int&nbsp;hour)</DIV>
<DIV>+static&nbsp;inline&nbsp;int&nbsp;convert_hour(RTCState&nbsp;*s,&nbsp=
;int&nbsp;raw)</DIV>
<DIV>&nbsp;{</DIV>
<DIV>+&nbsp;&nbsp;&nbsp;&nbsp;int&nbsp;hour&nbsp;=3D&nbsp;from_bcd(s,&nbsp=
;raw&nbsp;&amp;&nbsp;0x7f);</DIV>
<DIV>+</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;if&nbsp;(!(s-&gt;hw.cmos_data[RTC_REG_B=
]&nbsp;&amp;&nbsp;RTC_24H))</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;{</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;hour&nbsp;%=3D&=
nbsp;12;</DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;if&nbsp;(s-&gt;hw.cm=
os_data[RTC_HOURS]&nbsp;&amp;&nbsp;0x80)</DIV>
<DIV>+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;if&nbsp;(raw&nbsp;&a=
mp;&nbsp;0x80)</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nb=
sp;&nbsp;hour&nbsp;+=3D&nbsp;12;</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;}</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;return&nbsp;hour;</DIV>
<DIV>@@&nbsp;-493,8&nbsp;+508,7&nbsp;@@&nbsp;static&nbsp;void&nbsp;rtc_set=
_time(RTCState&nbsp;*s)</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;tm-&gt;tm_sec&nbsp;=3D&nbsp;from_bcd(s,=
&nbsp;s-&gt;hw.cmos_data[RTC_SECONDS]);</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;tm-&gt;tm_min&nbsp;=3D&nbsp;from_bcd(s,=
&nbsp;s-&gt;hw.cmos_data[RTC_MINUTES]);</DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;tm-&gt;tm_hour&nbsp;=3D&nbsp;from_bcd(s,&nbs=
p;s-&gt;hw.cmos_data[RTC_HOURS]&nbsp;&amp;&nbsp;0x7f);</DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;tm-&gt;tm_hour&nbsp;=3D&nbsp;convert_hour(s,=
&nbsp;tm-&gt;tm_hour);</DIV>
<DIV>+&nbsp;&nbsp;&nbsp;&nbsp;tm-&gt;tm_hour&nbsp;=3D&nbsp;convert_hour(s,=
&nbsp;s-&gt;hw.cmos_data[RTC_HOURS]);</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;tm-&gt;tm_wday&nbsp;=3D&nbsp;from_bcd(s=
,&nbsp;s-&gt;hw.cmos_data[RTC_DAY_OF_WEEK]);</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;tm-&gt;tm_mday&nbsp;=3D&nbsp;from_bcd(s=
,&nbsp;s-&gt;hw.cmos_data[RTC_DAY_OF_MONTH]);</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;tm-&gt;tm_mon&nbsp;=3D&nbsp;from_bcd(s,=
&nbsp;s-&gt;hw.cmos_data[RTC_MONTH])&nbsp;-&nbsp;1;</DIV>
<DIV>---&nbsp;a/xen/arch/x86/hvm/vpt.c</DIV>
<DIV>+++&nbsp;b/xen/arch/x86/hvm/vpt.c</DIV>
<DIV>@@&nbsp;-22,6&nbsp;+22,7&nbsp;@@</DIV>
<DIV>&nbsp;#include&nbsp;&lt;asm/hvm/vpt.h&gt;</DIV>
<DIV>&nbsp;#include&nbsp;&lt;asm/event.h&gt;</DIV>
<DIV>&nbsp;#include&nbsp;&lt;asm/apic.h&gt;</DIV>
<DIV>+#include&nbsp;&lt;asm/mc146818rtc.h&gt;</DIV>
<DIV>&nbsp;</DIV>
<DIV>&nbsp;#define&nbsp;mode_is(d,&nbsp;name)&nbsp;\</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;((d)-&gt;arch.hvm_domain.params[HVM_PAR=
AM_TIMER_MODE]&nbsp;=3D=3D&nbsp;HVMPTM_##name)</DIV>
<DIV>@@&nbsp;-218,6&nbsp;+219,7&nbsp;@@&nbsp;void&nbsp;pt_update_irq(struc=
t&nbsp;vcpu&nbsp;*v)</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;struct&nbsp;periodic_time&nbsp;*pt,&nbs=
p;*temp,&nbsp;*earliest_pt&nbsp;=3D&nbsp;NULL;</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;uint64_t&nbsp;max_lag&nbsp;=3D&nbsp;-1U=
LL;</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;int&nbsp;irq,&nbsp;is_lapic;</DIV>
<DIV>+&nbsp;&nbsp;&nbsp;&nbsp;void&nbsp;*pt_priv;</DIV>
<DIV>&nbsp;</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;spin_lock(&amp;v-&gt;arch.hvm_vcpu.tm_l=
ock);</DIV>
<DIV>&nbsp;</DIV>
<DIV>@@&nbsp;-251,13&nbsp;+253,14&nbsp;@@&nbsp;void&nbsp;pt_update_irq(str=
uct&nbsp;vcpu&nbsp;*v)</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;earliest_pt-&gt;irq_issued&nbsp;=3D&nbs=
p;1;</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;irq&nbsp;=3D&nbsp;earliest_pt-&gt;irq;<=
/DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;is_lapic&nbsp;=3D&nbsp;(earliest_pt-&gt=
;source&nbsp;=3D=3D&nbsp;PTSRC_lapic);</DIV>
<DIV>+&nbsp;&nbsp;&nbsp;&nbsp;pt_priv&nbsp;=3D&nbsp;earliest_pt-&gt;priv;<=
/DIV>
<DIV>&nbsp;</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;spin_unlock(&amp;v-&gt;arch.hvm_vcpu.tm=
_lock);</DIV>
<DIV>&nbsp;</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;if&nbsp;(&nbsp;is_lapic&nbsp;)</DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;{</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;vlapic_set_irq(=
vcpu_vlapic(v),&nbsp;irq,&nbsp;0);</DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;}</DIV>
<DIV>+&nbsp;&nbsp;&nbsp;&nbsp;else&nbsp;if&nbsp;(&nbsp;irq&nbsp;=3D=3D&nbs=
p;RTC_IRQ&nbsp;)</DIV>
<DIV>+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;rtc_periodic_interru=
pt(pt_priv);</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;else</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;{</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;hvm_isa_irq_dea=
ssert(v-&gt;domain,&nbsp;irq);</DIV>
<DIV>---&nbsp;a/xen/include/asm-x86/hvm/vpt.h</DIV>
<DIV>+++&nbsp;b/xen/include/asm-x86/hvm/vpt.h</DIV>
<DIV>@@&nbsp;-181,6&nbsp;+181,7&nbsp;@@&nbsp;void&nbsp;rtc_migrate_timers(=
struct&nbsp;vcpu&nbsp;*v);</DIV>
<DIV>&nbsp;void&nbsp;rtc_deinit(struct&nbsp;domain&nbsp;*d);</DIV>
<DIV>&nbsp;void&nbsp;rtc_reset(struct&nbsp;domain&nbsp;*d);</DIV>
<DIV>&nbsp;void&nbsp;rtc_update_clock(struct&nbsp;domain&nbsp;*d);</DIV>
<DIV>+void&nbsp;rtc_periodic_interrupt(void&nbsp;*);</DIV>
<DIV>&nbsp;</DIV>
<DIV>&nbsp;void&nbsp;pmtimer_init(struct&nbsp;vcpu&nbsp;*v);</DIV>
<DIV>&nbsp;void&nbsp;pmtimer_deinit(struct&nbsp;domain&nbsp;*d);</DIV>
<DIV>&nbsp;</DIV>
<DIV>&nbsp;</DIV>
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

------=_002_NextPart316018485136_=------

///HRfACAAAA6yDHhcj9//8IAAAAi038geGAAAAAhcl0CYtV/IDOAolV/ItF/CUAgAAAhcB0HY1N
EFHohgYAAIPEBImFiP3//4mVjP3//+mRAAAAi1X8g+IghdJ0SItF/IPgQIXAdB6NTRBR6DUGAACD
xAQPv8CZiYWI/f//iZWM/f//6x6NVRBS6BcGAACDxAQl//8AAJmJhYj9//+JlYz9///rP4tF/IPg
QIXAdBuNTRBR6O0FAACDxASZiYWI/f//iZWM/f//6xqNVRBS6NIFAACDxAQzyYmFiP3//4mNjP3/
/4tV/IPiQIXSdD6DvYz9//8AfzV8CYO9iP3//wBzKouFiP3///fYi42M/f//g9EA99mJhZT9//+J
jZj9//+LVfyAzgGJVfzrGIuFiP3//4mFlP3//4uNjP3//4mNmP3//4tV/IHiAIAAAIXSdRuLhZT9
//+LjZj9//+D4QCJhZT9//+JjZj9//+Dvcz9//8AfQzHhcz9//8BAAAA6wmLVfyD4veJVfyLhZT9
//8LhZj9//+FwHUHx0XwAAAAAI1N14lN4IuVzP3//4uFzP3//4PoAYmFzP3//4XSfxSLjZT9//8L
jZj9//+FyQ+EgQAAAIuFyP3//5lSUIuVmP3//1KLhZT9//9Q6GVFAACDwDCJhZD9//+Lhcj9//+Z
UlCLjZj9//9Ri5WU/f//UujQRAAAiYWU/f//iZWY/f//g72Q/f//OX4Si4WQ/f//A4XQ/f//iYWQ
/f//i03gipWQ/f//iBGLReCD6AGJReDpUv///41N1ytN4IlN3ItV4IPCAYlV4ItF/CUAAgAAhcB0
KYtN4A++EYP6MHUGg33cAHUYi0Xgg+gBiUXgi03gxgEwi1Xcg8IBiVXcg73E/f//AA+FzgEAAItF
/IPgQIXAdE+LTfyB4QABAACFyXQQxoXA/f//LcdF8AEAAADrMotV/IPiAYXSdBDGhcD9//8rx0Xw
AQAAAOsYi0X8g+AChcB0DsaFwP3//yDHRfABAAAAi428/f//K03cK03wiY2E/f//i1X8g+IMhdJ1
HI2F1P3//1CLTQhRi5WE/f//Umog6N4CAACDxBCNhdT9//9Qi00IUYtV8FKNhcD9//9Q6AADAACD
xBCLTfyD4QiFyXQmi1X8g+IEhdJ1HI2F1P3//1CLTQhRi5WE/f//Umow6JACAACDxBCDfeQAD4Sk
AAAAg33cAA+OmgAAAItF4ImFgP3//4tN3ImNfP3//4uVfP3//4uFfP3//4PoAYmFfP3//4XSdG2L
jYD9//9mixFmiZVa/f//ZouFWv3//1CNjXj9//9Ri5WA/f//g8ICiZWA/f//6EtCAACDxAiJhXT9
//+DvXT9//8AfwLrJo2F1P3//1CLTQhRi5V0/f//Uo2FeP3//1DoKQIAAIPEEOl6////6xuNjdT9
//9Ri1UIUotF3FCLTeBR6AcCAACDxBCLVfyD4gSF0nQcjYXU/f//UItNCFGLlYT9//9SaiDooQEA
AIPEEOkN9P//i4XU/f//X15bi+Vdw+AXQAB6GEAAvBhAACsZQACDGUAAkhlAAN4ZQABxGkAACBlA
ABMZQAD+GEAA8xhAAB4ZQAAmGUAAAAUFAQUFBQUFBQUCBQMFBQQgGkAAWRpAABUaQABjGkAAbBpA
AAAEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQBBAQEAgQEBAQEBAQEBAQDrBpAAEAdQADQ
G0AAeR5AADsbQADBGkAASx5AAFAdQAD1HEAAxR5AAG8eQADmG0AAYx5AAIUeQABXIUAAAA4BDgEO
Dg4ODg4ODg4ODgIODg4OAw4EDg4ODg4ODg4FBgcHBw4GDg4ODggJCg4OCw4MDg4NzMzMzMzMzMzM
zMzMzMxVi+xRi0UMi0gEg+kBi1UMiUoEi0UMg3gEAHwmi00MixGKRQiIAg++TQiB4f8AAACJTfyL
VQyLAoPAAYtNDIkB6xOLVQxSi0UIUOjxQQAAg8QIiUX8g338/3ULi00QxwH/////6w2LVRCLAoPA
AYtNEIkBi+Vdw8zMzMzMzMzMzFWL7ItFDItNDIPpAYlNDIXAfiCLVRRSi0UQUItNCFHoXP///4PE
DItVFIM6/3UC6wLr0F3DzMzMzMzMzMzMzMxVi+xRi0UMi00Mg+kBiU0MhcB+MotVFFKLRRBQi00I
D74RiVX8i0X8UItNCIPBAYlNCOgJ////g8QMi1UUgzr/dQLrAuu+i+Vdw8zMzMzMzFWL7ItFCIsI
g8EEi1UIiQqLRQiLCItB/F3DzMzMzMzMVYvsi0UIiwiDwQiLVQiJCotFCIsIg+kIiwGLUQRdw8xV
i+yLRQiLCIPBBItVCIkKi0UIiwhmi0H8XcPMzMzMzFWL7FGDPUBPQgAAdQzHBUBPQgAAAgAA6xOD
PUBPQgAUfQrHBUBPQgAUAAAAaIMAAABobAJCAGoCagShQE9CAFDoyyEAAIPEFKPsO0IAgz3sO0IA
AHU/xwVAT0IAFAAAAGiGAAAAaGwCQgBqAmoEiw1AT0IAUeiWIQAAg8QUo+w7QgCDPew7QgAAdQpq
GuhO7v//g8QEx0X8AAAAAOsJi1X8g8IBiVX8g338FH0Zi0X8weAFBUAqQgCLTfyLFew7QgCJBIrr
2MdF/AAAAADrCYtF/IPAAYlF/IN9/AN9RItN/MH5BYtV/IPiH4sEjYA6QgCDPND/dBmLTfzB+QWL
VfyD4h+LBI2AOkIAgzzQAHUQi038weEFx4FQKkIA/////+uti+Vdw8zMzMzMzMzMzMxVi+zomDwA
AA++BRg2QgCFwHQF6AhCAABdw8zMzMzMzFWL7P8VXFFCAF3DzMzMzMxVi+xRg30IAHwGg30IA3wF
g8j/6z6DfQz/dQyLRQiLBIXELEIA6yyLTQyD4fiFyXQFg8j/6x2LVQiLBJXELEIAiUX8i00Ii1UM
iRSNxCxCAItF/IvlXcPMzMzMzMzMzMxVi+xRg30IAHwGg30IA3wHuP7////rY4N9DPp1DItFCIsE
hdAsQgDrUYtNCIsUjdAsQgCJVfyDfQz8dRRq9f8VYFFCAItNCIkEjdAsQgDrJ4N9DPt1FGr0/xVg
UUIAi1UIiQSV0CxCAOsNi0UIi00MiQyF0CxCAItF/IvlXcPMzFWL7FGh4DtCAIlF/ItNCIkN4DtC
AItF/IvlXcPMzMzMVYvsuCwwAADoc0cAAFfGhfjP//8Auf8DAAAzwI29+c////OrZquqxoX43///
ALn/AwAAM8CNvfnf///zq2arqsaFAPD//wC5/wMAADPAjb0B8P//86tmq6qNRRyJhfzv//+DfQgA
fAaDfQgDfAiDyP/pFQMAAIN9CAIPhaAAAABowCxCAP8VdFFCAIXAD46NAAAAgz3UNUIAAHVCaEAD
QgD/FXBRQgCJhfTP//+DvfTP//8AdCBoNANCAIuN9M///1H/FUhRQgCj1DVCAIM91DVCAAB1CIPI
/+mtAgAAi1UQUotFDFBoAANCAI2N+N///1H/FdQ1QgCDxBCNlfjf//9S/xVsUUIAaMAsQgD/FWhR
QgDo2P3//4PI/+lrAgAAg30YAHQ3i4X87///UItNGFFo7Q8AAI2VAPD//1LoPkUAAIPEEIXAfRRo
1AJCAI2FAPD//1DoNkQAAIPECIN9CAJ1MoN9GAB0DMeF2M///8ACQgDrCseF2M///6wCQgCLjdjP
//9RjZX4z///Uuj+QwAAg8QIjYUA8P//UI2N+M///1Ho+EMAAIPECIN9CAJ1OYtVCIsElcQsQgCD
4AGFwHQUaKgCQgCNjfjP//9R6M1DAACDxAhopAJCAI2V+M///1LouUMAAIPECIN9DAB0Qo2F+M//
/1CLTRBRi1UMUmiYAkIAaAAQAACNhfjf//9Q6HtCAACDxBiFwH0UaNQCQgCNjfjf//9R6GNDAACD
xAjrFo2V+M///1KNhfjf//9Q6EtDAACDxAiDPeA7QgAAdDuNjfjv//9RjZX43///UotFCFD/FeA7
QgCDxAyFwHQcg30IAnULaMAsQgD/FWhRQgCLhfjv///p/wAAAItNCIsUjcQsQgCD4gGF0nQ+i0UI
gzyF0CxCAP90MWoAjY3wz///UY2V+N///1LooTkAAIPEBFCNhfjf//9Qi00IixSN0CxCAFL/FWRR
QgCLRQiLDIXELEIAg+EChcl0DY2V+N///1L/FWxRQgCLRQiLDIXELEIAg+EEhcl0boN9EAB0HWoK
jZXcz///UotFEFDofj4AAIPEDImF1M///+sKx4XUz///AAAAAI2NAPD//1GLVRRSi4XUz///UItN
DFGLVQhS6DoAAACDxBSJhfjv//+DfQgCdQtowCxCAP8VaFFCAIuF+O///+sTg30IAnULaMAsQgD/
FWhRQgAzwF+L5V3DVYvsuDgRAADo40MAAIN9GAB1JWiQBEIAagBo2gEAAGiEBEIAagLoRfz//4PE
FIP4AXUF6Cj7//8zwIXAdc9oBAEAAI2N+P7//1FqAP8VeFFCAIXAdRRobARCAI2V+P7//1LomUEA
AIPECI2F+P7//4lF/ItN/FHoVDgAAIPEBIP4QHYpi1X8UuhDOAAAg8QEi038jVQBwIlV/GoDaGgE
QgCLRfxQ6GZIAACDxAyLTRSJjfDu//+DvfDu//8AdEmLlfDu//9S6AU4AACDxASD+EB2NYuF8O7/
/1Do8TcAAIPEBIuN8O7//41UAcCJlfDu//9qA2hoBEIAi4Xw7v//UOgLSAAAg8QMg30IAnUMx4Xs
7v//9ANCAOsKx4Xs7v//wABCAItNGA++EYXSdAuLRRiJheju///rCseF6O7//8AAQgCLTRgPvhGF
0nQSg30IAnUMx4Xk7v//5ANCAOsKx4Xk7v//wABCAItFGA++CIXJdAzHheDu///gA0IA6wrHheDu
///AAEIAg30QAHQLi1UQiZXc7v//6wrHhdzu///AAEIAg30QAHQMx4XY7v//2ANCAOsKx4XY7v//
wABCAIN9DAB0C4tFDImF1O7//+sKx4XU7v//wABCAIN9DAB0DMeF0O7//9ADQgDrCseF0O7//8AA
QgCDvfDu//8AdA6LjfDu//+Jjczu///rCseFzO7//8AAQgCDvfDu//8AdAzHhcju///EA0IA6wrH
hcju///AAEIAi5Xs7v//UouF6O7//1CLjeTu//9Ri5Xg7v//UouF3O7//1CLjdju//9Ri5XU7v//
UouF0O7//1CLjczu//9Ri5XI7v//UotF/FCLTQiLFI3cLEIAUmhwA0IAaAAQAACNhfTu//9Q6F4+
AACDxDyFwH0UaNQCQgCNjfTu//9R6EY/AACDxAhoEiABAGhMA0IAjZX07v//Uuh9RQAAg8QMiYX0
/v//g730/v//A3URahboREMAAIPEBGoD6HoAAACDvfT+//8EdQe4AQAAAOsCM8CL5V3DzMzMzFWL
7IM93DtCAAB0Bv8V3DtCAGgYJEIAaAgiQgDofwEAAIPECGgEIUIAaAAgQgDobQEAAIPECF3DzMzM
zMzMzMxVi+xqAGoAi0UIUOhwAAAAg8QMXcPMzMzMzMzMzMzMzFWL7GoAagGLRQhQ6FAAAACDxAxd
w8zMzMzMzMzMzMzMVYvsagFqAGoA6DIAAACDxAxdw8zMzMzMzMzMzMzMzMxVi+xqAWoBagDoEgAA
AIPEDF3DzMzMzMzMzMzMzMzMzFWL7FGDPSA2QgABdRGLRQhQ/xWAUUIAUP8VfFFCAMcFHDZCAAEA
AACKTRCIDRg2QgCDfQwAdUeDPdg7QgAAdCyLFdQ7QgCJVfyLRfyD6ASJRfyLTfw7Ddg7QgByD4tV
/IM6AHQFi0X8/xDr3WgkJ0IAaBwlQgDoZQAAAIPECGgsKUIAaCgoQgDoUwAAAIPECIM9JDZCAAB1
IGr/6BAoAACDxASD4CCFwHQPxwUkNkIAAQAAAOi3MAAAg30QAHQC6xTHBSA2QgABAAAAi00IUf8V
WFFCAIvlXcPMzMzMzMzMVYvsi0UIO0UMcxiLTQiDOQB0BYtVCP8Si0UIg8AEiUUI6+Bdw8zMzMzM
zMzMzMzMVYvsg+wUi0UIUOihAQAAg8QEiUX0g330AHQJi030g3kIAHUPi1UMUv8VhFFCAOlyAQAA
i0X0g3gIBXUUi030x0EIAAAAALgBAAAA6VUBAACLVfSDeggBdQiDyP/pRAEAAItF9ItICIlN/IsV
KDZCAIlV7ItFDKMoNkIAi030g3kECA+F+gAAAIsVYC1CAIlV8OsJi0Xwg8ABiUXwiw1gLUIAAw1k
LUIAOU3wfRKLVfBr0gzHgvAsQgAAAAAA69ShbC1CAIlF+ItN9IE5jgAAwHUPxwVsLUIAgwAAAOmI
AAAAi1X0gTqQAADAdQzHBWwtQgCBAAAA63GLRfSBOJEAAMB1DMcFbC1CAIQAAADrWotN9IE5kwAA
wHUMxwVsLUIAhQAAAOtDi1X0gTqNAADAdQzHBWwtQgCCAAAA6yyLRfSBOI8AAMB1DMcFbC1CAIYA
AADrFYtN9IE5kgAAwHUKxwVsLUIAigAAAIsVbC1CAFJqCP9V/IPECItF+KNsLUIA6xeLTfTHQQgA
AAAAi1X0i0IEUP9V/IPEBItN7IkNKDZCAIPI/4vlXcPMzMzMzMzMVYvsUcdF/OgsQgCLRfyLCDtN
CHQdi1X8g8IMiVX8oWgtQgBrwAwF6CxCADlF/HMC69mLDWgtQgBryQyBwegsQgA5TfxzCotV/IsC
O0UIdAQzwOsDi0X8i+Vdw8zMzMzMVYvsg+wQgz3QO0IAAHUF6JxKAADHRfgAAAAAobw1QgCJRfyL
TfwPvhGF0nQsi0X8D74Ig/k9dAmLVfiDwgGJVfiLRfxQ6JYxAACDxASLTfyNVAEBiVX868pqbWio
BEIAagKLRfiNDIUEAAAAUei+EAAAg8QQiUX0i1X0iRUANkIAgz0ANkIAAHUKagnob+H//4PEBKG8
NUIAiUX86wmLTfwDTfCJTfyLVfwPvgKFwHRmi038UegmMQAAg8QEg8ABiUXwi1X8D74Cg/g9dEdq
eWioBEIAagKLTfBR6FAQAACDxBCLVfSJAotF9IM4AHUKagnoCeH//4PEBItN/FGLVfSLAlDoBzoA
AIPECItN9IPBBIlN9OuHagKLFbw1QgBS6EsaAACDxAjHBbw1QgAAAAAAi0X0xwAAAAAAxwXAO0IA
AQAAAIvlXcPMzMzMzMzMVYvsg+wUgz3QO0IAAHUF6ExJAABoBAEAAGgsNkIAagD/FXhRQgDHBRA2
QgAsNkIAoURPQgAPvgiFyXULixUQNkIAiVXs6wihRE9CAIlF7ItN7IlN8I1V/FKNRfRQagBqAItN
8FHodgAAAIPEFGiAAAAAaLQEQgBqAotV9ItF/I0MkFHoWA8AAIPEEIlF+IN9+AB1CmoI6BXg//+D
xASNVfxSjUX0UItN9ItV+I0EilCLTfhRi1XwUugjAAAAg8QUi0X0g+gBo/Q1QgCLTfiJDfg1QgCL
5V3DzMzMzMzMzMxVi+yD7BSLRRjHAAAAAACLTRTHAQEAAACLVQiJVfyDfQwAdBGLRQyLTRCJCItV
DIPCBIlVDItF/A++CIP5Ig+FyQAAAItV/IPCAYlV/ItF/A++CIP5InR6i1X8D74ChcB0cItN/DPS
ihEzwIqCYTlCAIPgBIXAdC+LTRiLEYPCAYtFGIkQg30QAHQci00Qi1X8igKIAYtNEIPBAYlNEItV
/IPCAYlV/ItFGIsIg8EBi1UYiQqDfRAAdBOLRRCLTfyKEYgQi0UQg8ABiUUQ6XL///+LTRiLEYPC
AYtFGIkQg30QAHQPi00QxgEAi1UQg8IBiVUQi0X8D74Ig/kidQmLVfyDwgGJVfzpzwAAAItFGIsI
g8EBi1UYiQqDfRAAdBOLRRCLTfyKEYgQi0UQg8ABiUUQi038ihGIVfSLRfyDwAGJRfyLTfSB4f8A
AAAz0oqRYTlCAIPiBIXSdC+LRRiLCIPBAYtVGIkKg30QAHQTi0UQi038ihGIEItFEIPAAYlFEItN
/IPBAYlN/ItV9IHi/wAAAIP6IHQei0X0Jf8AAACFwHQSi030geH/AAAAg/kJD4VW////i1X0geL/
AAAAhdJ1C4tF/IPoAYlF/OsNg30QAHQHi00QxkH/AMdF7AAAAACLVfwPvgKFwHQhi038D74Rg/og
dAuLRfwPvgiD+Ql1C4tV/IPCAYlV/Ovfi0X8D74Ihcl1BeneAQAAg30MAHQRi1UMi0UQiQKLTQyD
wQSJTQyLVRSLAoPAAYtNFIkBx0X4AQAAAMdF8AAAAACLVfwPvgKD+Fx1FItN/IPBAYlN/ItV8IPC
AYlV8Ovhi0X8D74Ig/kidVGLRfAz0rkCAAAA9/GF0nU5g33sAHQgi1X8D75CAYP4InULi038g8EB
iU386wfHRfgAAAAA6wfHRfgAAAAAM9KDfewAD5TCiVXsi0Xw0eiJRfCLTfCLVfCD6gGJVfCFyXQk
g30QAHQPi0UQxgBci00Qg8EBiU0Qi1UYiwKDwAGLTRiJAevMi1X8D74ChcB0HIN97AB1G4tN/A++
EYP6IHQLi0X8D74Ig/kJdQXpqwAAAIN9+AAPhJMAAACDfRAAdFSLVfwzwIoCM8mKiGE5QgCD4QSF
yXQpi1UQi0X8igiICotVEIPCAYlVEItF/IPAAYlF/ItNGIsRg8IBi0UYiRCLTRCLVfyKAogBi00Q
g8EBiU0Q6yyLVfwzwIoCM8mKiGE5QgCD4QSFyXQWi1X8g8IBiVX8i0UYiwiDwQGLVRiJCotFGIsI
g8EBi1UYiQqLRfyDwAGJRfzpbf7//4N9EAB0D4tNEMYBAItVEIPCAYlVEItFGIsIg8EBi1UYiQrp
6P3//4N9DAB0EotFDMcAAAAAAItNDIPBBIlNDItVFIsCg8ABi00UiQGL5V3DzMzMzMzMzMzMzMzM
VYvsg+wYx0XsAAAAAMdF6AAAAACDPTA3QgAAdT3/FZhRQgCJReyDfewAdAzHBTA3QgABAAAA6yL/
FZRRQgCJReiDfegAdAzHBTA3QgACAAAA6wczwOm7AQAAgz0wN0IAAQ+F9wAAAIN97AB1Fv8VmFFC
AIlF7IN97AB1BzPA6ZIBAACLReyJRfiLTfgz0maLEYXSdCCLRfiDwAKJRfiLTfgz0maLEYXSdQmL
RfiDwAKJRfjr1ItN+CtN7NH5g8EBiU38agBqAGoAagCLVfxSi0XsUGoAagD/FZBRQgCJRfCDffAA
dB5qZGjABEIAagKLTfBR6NgJAACDxBCJReiDfegAdRGLVexS/xWMUUIAM8DpAAEAAGoAagCLRfBQ
i03oUYtV/FKLRexQagBqAP8VkFFCAIXAdRVqAotN6FHozhMAAIPECMdF6AAAAACLVexS/xWMUUIA
i0Xo6bcAAACDPTA3QgACD4WoAAAAg33oAHUW/xWUUUIAiUXog33oAHUHM8DpjgAAAItF6IlF9ItN
9A++EYXSdB6LRfSDwAGJRfSLTfQPvhGF0nUJi0X0g8ABiUX069iLTfQrTeiDwQGJTfBojwAAAGjA
BEIAagKLVfBS6PoIAACDxBCJRfSDffQAdQ6LRehQ/xWIUUIAM8DrJYtN8FGLVehSi0X0UOh/QgAA
g8QMi03oUf8ViFFCAItF9OsCM8CL5V3DzMzMzMzMzFWL7IPsbGiBAAAAaMgEQgBqAmgAAQAA6JQI
AACDxBCJRbCDfbAAdQpqG+hR2f//g8QEi0Wwo4A6QgDHBbw7QgAgAAAA6wmLTbCDwQiJTbCLFYA6
QgCBwgABAAA5VbBzGYtFsMZABACLTbDHAf////+LVbDGQgUK682NRbhQ/xWkUUIAi03qgeH//wAA
hckPhHoBAACDfewAD4RwAQAAi1XsiwKJRZyLTeyDwQSJTfyLVfwDVZyJVaCBfZwACAAAfQiLRZyJ
RZjrB8dFmAAIAACLTZiJTZzHRaQBAAAA6wmLVaSDwgGJVaShvDtCADtFnA+NhwAAAGi2AAAAaMgE
QgBqAmgAAQAA6KQHAACDxBCJRbCDfbAAdQuLDbw7QgCJTZzrWotVpItFsIkElYA6QgCLDbw7QgCD
wSCJDbw7QgDrCYtVsIPCCIlVsItFpIsMhYA6QgCBwQABAAA5TbBzGYtVsMZCBACLRbDHAP////+L
TbDGQQUK68npYv///8dFqAAAAADrG4tVqIPCAYlVqItF/IPAAYlF/ItNoIPBBIlNoItVqDtVnH1l
i0Wggzj/dFiLTfwPvhGD4gGF0nRLi0X8D74Ig+EIhcl1EItVoIsCUP8VoFFCAIXAdC6LTajB+QWL
VaiD4h+LBI2AOkIAjQzQiU2wi1Wwi0WgiwiJCotVsItF/IoIiEoE6Xj////HRagAAAAA6wmLVaiD
wgGJVaiDfagDD43RAAAAi0Woiw2AOkIAjRTBiVWwi0Wwgzj/D4WiAAAAi02wxkEEgYN9qAB1CcdF
lPb////rEItVqIPqAffaG9KDwvWJVZSLRZRQ/xVgUUIAiUW0g320/3RYi020Uf8VoFFCAIlFrIN9
rAB0RYtVsItFtIkCi02sgeH/AAAAg/kCdRCLVbCKQgQMQItNsIhBBOsdi1WsgeL/AAAAg/oDdQ+L
RbCKSASAyQiLVbCISgTrD4tFsIpIBIDJQItVsIhKBOsPi0WwikgEgMmAi1WwiEoE6Rz///+hvDtC
AFD/FZxRQgCL5V3DzMzMzMxVi+xRx0X8AAAAAOsJi0X8g8ABiUX8g338QH0yi038gzyNgDpCAAB0
I2oCi1X8iwSVgDpCAFDopQ8AAIPECItN/McEjYA6QgAAAAAA67+L5V3DzMzMzMzMzMzMzMzMzMxV
i+xqAGgAEAAAM8CDfQgAD5TAUP8VrFFCAKN0OkIAgz10OkIAAHUEM8DrH+gvQgAAhcB1EYsNdDpC
AFH/FahRQgAzwOsFuAEAAABdw8zMzFWL7IPsCKHAN0IAiUX4x0X8AAAAAOsJi038g8EBiU38i1X8
OxW8N0IAfUtoAEAAAGgAABAAi0X4i0gMUf8VtFFCAGgAgAAAagCLVfiLQgxQ/xW0UUIAi034i1EQ
UmoAoXQ6QgBQ/xWwUUIAi034g8EUiU3466GLFcA3QgBSagChdDpCAFD/FbBRQgCLDXQ6QgBR/xWo
UUIAi+Vdw1WL7FNWV1VqAGoAaJQ/QAD/dQjoenkAAF1fXluL5V3Di0wkBPdBBAYAAAC4AQAAAHQP
i0QkCItUJBCJArgDAAAAw1NWV4tEJBBQav5onD9AAGT/NQAAAABkiSUAAAAAi0QkIItYCItwDIP+
/3QuO3QkJHQojTR2iwyziUwkCIlIDIN8swQAdRJoAQEAAItEswjoQAAAAP9Uswjrw2SPBQAAAACD
xAxfXlvDM8Bkiw0AAAAAgXkEnD9AAHUQi1EMi1IMOVEIdQW4AQAAAMNTUbt8LUIA6wpTUbt8LUIA
i00IiUsIiUMEiWsMWVvCBADMzFZDMjBYQzAwVYvsg+wIU1ZXVfyLXQyLRQj3QAQGAAAAD4WCAAAA
iUX4i0UQiUX8jUX4iUP8i3MMi3sIg/7/dGGNDHaDfI8EAHRFVlWNaxD/VI8EXV6LXQwLwHQzeDyL
ewhT6Kn+//+DxASNaxBWU+je/v//g8QIjQx2agGLRI8I6GH///+LBI+JQwz/VI8Ii3sIjQx2izSP
66G4AAAAAOscuAEAAADrFVWNaxBq/1Ponv7//4PECF24AQAAAF1fXluL5V3DVYtMJAiLKYtBHFCL
QRhQ6Hn+//+DxAhdwgQAzMzMzFWL7IM9xDVCAAF0EoM9xDVCAAB1MoM9NCpCAAF1KWj8AAAA6CgA
AACDxASDPTQ3QgAAdAb/FTQ3QgBo/wAAAOgMAAAAg8QEXcPMzMzMzMzMVYvsgeywAQAAU1ZXx0X4
AAAAAOsJi0X4g8ABiUX4g334EnMTi034i1UIOxTNkC1CAHUC6wLr3otF+ItNCDsMxZAtQgAPhW4B
AACBfQj8AAAAdCGLVfiLBNWULUIAUGoAagBqAGoB6BXm//+DxBSD+AF1AcyDPcQ1QgABdBKDPcQ1
QgAAdUKDPTQqQgABdTlqAI1N/FGLVfiLBNWULUIAUOg7IgAAg8QEUItN+IsUzZQtQgBSavT/FWBR
QgBQ/xVkUUIA6fAAAACBfQj8AAAAD4TjAAAAaAQBAACNhfD+//9QagD/FXhRQgCFwHUUaGwEQgCN
jfD+//9R6BIrAACDxAiNlfD+//+JVfSLRfRQ6M0hAACDxASDwAGD+Dx2LI2N8P7//1HotiEAAIPE
BItV9I1EAsWJRfRqA2hoBEIAi030UejZMQAAg8QMaIgHQgCNlVD+//9S6LUqAACDxAiLRfRQjY1Q
/v//UeiyKgAAg8QIaOADQgCNlVD+//9S6J4qAACDxAiLRfiLDMWULUIAUY2VUP7//1LohCoAAIPE
CGgQIAEAaGAHQgCNhVD+//9Q6KswAACDxAxfXluL5V3DzFWL7FHHRfwAAAAA6wmLRfyDwAGJRfyD
ffwScxOLTfyLVQg7FM2QLUIAdQLrAuvei0X8i00IOwzFkC1CAHUMi1X8iwTVlC1CAOsCM8CL5V3D
VYvsagBqAGoBoXA3QgBQi00IUehYAAAAg8QUXcPMzMxVi+yLRRRQi00QUYtVDFKhcDdCAFCLTQhR
6DIAAACDxBRdw8zMzMzMzMzMzMzMzMxVi+xqAGoAagGLRQxQi00IUegKAAAAg8QUXcPMzMzMzFWL
7FGLRRhQi00UUYtVEFKLRQhQ6FcAAACDxBCJRfyDffwAdQaDfQwAdQWLRfzrFotNCFHoN1kAAIPE
BIXAdQQzwOsC676L5V3DzMzMzMzMVYvsagBqAGoBi0UIUOgOAAAAg8QQXcPMzMzMzMzMzMxVi+yD
7BBTVlfHRfQAAAAAoSAuQgCD4ASFwHQw6B8QAACFwHUhaIwIQgBqAGhBAQAAaIAIQgBqAuhT4///
g8QUg/gBdQHMM8mFyXXQixUkLkIAiVX4i0X4OwUoLkIAdQHMi00UUYtVEFKLRfhQi00MUYtVCFJq
AGoB/xWAMUIAg8QchcB1XoN9EAB0K4tFFFCLTRBRaEgIQgBqAGoAagBqAOjq4v//g8Qcg/gBdQHM
M9KF0nXX6yZoJAhCAGggCEIAagBqAGoAagDowuL//4PEGIP4AXUBzDPAhcB12jPA6SgCAACLTQyB
4f//AACD+QJ0FIsVIC5CAIPiAYXSdQfHRfQBAAAAg30I4HcLi0UIg8Akg/jgdiyLTQhRaPwHQgBq
AGoAagBqAehj4v//g8QYg/gBdQHMM9KF0nXbM8DpyQEAAItFDCX//wAAg/gEdECDfQwBdDqLTQyB
4f//AACD+QJ0LIN9DAN0JmjIB0IAaCAIQgBqAGoAagBqAegP4v//g8QYg/gBdQHMM9KF0nXai0UI
g8AkiUXwi03wUehuWAAAg8QEiUX8g338AHUHM8DpVwEAAIsVJC5CAIPCAYkVJC5CAIN99AB0SYtF
/McAAAAAAItN/MdBBAAAAACLVfzHQggAAAAAi0X8x0AMvLrc/otN/ItVCIlREItF/MdAFAMAAACL
TfzHQRgAAAAA6aAAAACLFTw3QgADVQiJFTw3QgChRDdCAANFCKNEN0IAiw1EN0IAOw1IN0IAdgyL
FUQ3QgCJFUg3QgCDPUA3QgAAdA2hQDdCAItN/IlIBOsJi1X8iRU4N0IAi0X8iw1AN0IAiQiLVfzH
QgQAAAAAi0X8i00QiUgIi1X8i0UUiUIMi038i1UIiVEQi0X8i00MiUgUi1X8i0X4iUIYi038iQ1A
N0IAagQz0ooVLC5CAFKLRfyDwBxQ6GZWAACDxAxqBDPJig0sLkIAUYtVCItF/I1MECBR6EhWAACD
xAyLVQhSM8CgLi5CAFCLTfyDwSBR6C1WAACDxAyLRfyDwCBfXluL5V3DzMzMzMzMzMzMzMzMzFWL
7GoAagBqAYtFDFCLTQhR6AoAAACDxBRdw8zMzMzMVYvsg+wMi0UMD69FCIlFDItNGFGLVRRSi0UQ
UItNDFHo2/v//4PEEIlF+IN9+AB0KItV+IlV9ItF9ANFDIlF/ItN9DtN/HMRi1X0xgIAi0X0g8AB
iUX06+eLRfiL5V3DVYvsagBqAGoBi0UMUItNCFHoCgAAAIPEFF3DzMzMzMxVi+xRagGLRRhQi00U
UYtVEFKLRQxQi00IUegRAAAAg8QYiUX8i0X8i+Vdw8zMzMxVi+yD7BRTVlfHRewAAAAAg30IAHUd
i0UYUItNFFGLVRBSi0UMUOgl+///g8QQ6dcEAACDfRwAdB2DfQwAdReLTRBRi1UIUuhEBQAAg8QI
M8DptAQAAKEgLkIAg+AEhcB0MOjpCwAAhcB1IWiMCEIAagBoOQIAAGiACEIAagLoHd///4PEFIP4
AXUBzDPJhcl10IsVJC5CAIlV8ItF8DsFKC5CAHUBzItNGFGLVRRSi0XwUItNEFGLVQxSi0UIUGoC
/xWAMUIAg8QchcB1XoN9FAB0K4tNGFGLVRRSaAgKQgBqAGoAagBqAOiy3v//g8Qcg/gBdQHMM8CF
wHXX6yZo5AlCAGggCEIAagBqAGoAagDoit7//4PEGIP4AXUBzDPJhcl12jPA6d4DAACDfQzbdiyL
VQxSaLQJQgBqAGoAagBqAehY3v//g8QYg/gBdQHMM8CFwHXbM8DprAMAAIN9EAF0QotNEIHh//8A
AIP5BHQ0i1UQgeL//wAAg/oCdCZoyAdCAGggCEIAagBqAGoAagHoCd7//4PEGIP4AXUBzDPAhcB1
2otNCFHo4Q4AAIPEBIXAdSFokAlCAGoAaGECAABogAhCAGoC6NLd//+DxBSD+AF1Acwz0oXSdcmL
RQiD6CCJRfiLTfiDeRQDdQfHRewBAAAAg33sAHQ+i1X4gXoMvLrc/nUJi0X4g3gYAHQhaEgJQgBq
AGhrAgAAaIAIQgBqAuh33f//g8QUg/gBdQHMM8mFyXXE62SLVfiLQhQl//8AAIP4AnUVi00QgeH/
/wAAg/kBdQfHRRACAAAAi1X4i0IUJf//AACLTRCB4f//AAA7wXQhaAwJQgBqAGhyAgAAaIAIQgBq
AugR3f//g8QUg/gBdQHMM9KF0nXBg30cAHQli0UMg8AkUItN+FHobFQAAIPECIlF9IN99AB1BzPA
6UMCAADrI4tVDIPCJFKLRfhQ6LdTAACDxAiJRfSDffQAdQczwOkeAgAAiw0kLkIAg8EBiQ0kLkIA
g33sAHVWi1X0oTw3QgArQhCjPDdCAIsNPDdCAANNDIkNPDdCAItV9KFEN0IAK0IQo0Q3QgCLDUQ3
QgADTQyJDUQ3QgCLFUQ3QgA7FUg3QgB2CqFEN0IAo0g3QgCLTfSDwSCJTfyLVfSLRQw7QhB2JItN
9ItVDCtREFIzwKAuLkIAUItN9ItV/ANREFLotFEAAIPEDGoEM8CgLC5CAFCLTfwDTQxR6JtRAACD
xAyDfewAdRuLVfSLRRSJQgiLTfSLVRiJUQyLRfSLTfCJSBiLVfSLRQyJQhCDfRwAdS+DfRwAdQiL
TfQ7Tfh0IWjYCEIAagBoqAIAAGiACEIAagLootv//4PEFIP4AXUBzDPShdJ1xYtF9DtF+HQGg33s
AHQIi0X86ecAAACLTfSDOQB0EItV9IsCi030i1EEiVAE6zyhODdCADtF+HQhaLwIQgBqAGi3AgAA
aIAIQgBqAuhD2///g8QUg/gBdQHMM8mFyXXPi1X0i0IEozg3QgCLTfSDeQQAdA+LVfSLQgSLTfSL
EYkQ6zuhQDdCADtF+HQhaKAIQgBqAGjCAgAAaIAIQgBqAujv2v//g8QUg/gBdQHMM8mFyXXPi1X0
iwKjQDdCAIM9QDdCAAB0DosNQDdCAItV9IlRBOsIi0X0ozg3QgCLTfSLFUA3QgCJEYtF9MdABAAA
AACLTfSJDUA3QgCLRfxfXluL5V3DzMzMzMzMzMzMzMzMzMzMVYvsagBqAGoBi0UMUItNCFHoCgAA
AIPEFF3DzMzMzMxVi+xRagCLRRhQi00UUYtVEFKLRQxQi00IUeih+v//g8QYiUX8i0X8i+Vdw8zM
zMxVi+xqAYtFCFDoEgAAAIPECF3DzMzMzMzMzMzMzMzMzFWL7FFTVlehIC5CAIPgBIXAdDDoqAYA
AIXAdSFojAhCAGoAaOEDAABogAhCAGoC6NzZ//+DxBSD+AF1AcwzyYXJddCDfQgAdQXplwMAAGoA
agBqAItVDFJqAItFCFBqA/8VgDFCAIPEHIXAdStoUAtCAGggCEIAagBqAGoAagDojNn//4PEGIP4
AXUBzDPJhcl12ulNAwAAi1UIUuhfCgAAg8QEhcB1IWiQCUIAagBo8wMAAGiACEIAagLoUNn//4PE
FIP4AXUBzDPAhcB1yYtNCIPpIIlN/ItV/ItCFCX//wAAg/gEdEOLTfyDeRQBdDqLVfyLQhQl//8A
AIP4AnQqi038g3kUA3QhaCgLQgBqAGj5AwAAaIAIQgBqAuju2P//g8QUg/gBdQHMM9KF0nWnoSAu
QgCD4ASFwA+FxQAAAGoEig0sLkIAUYtV/IPCHFLo2gQAAIPEDIXAdUOLRfyDwCBQi038i1EYUotF
/ItIFIHh//8AAIsUjTAuQgBSaPwKQgBqAGoAagBqAeh/2P//g8Qgg/gBdQHMM8CFwHW9agSKDSwu
QgBRi1X8i0IQi038jVQBIFLodAQAAIPEDIXAdUOLRfyDwCBQi038i1EYUotF/ItIFIHh//8AAIsU
jTAuQgBSaNAKQgBqAGoAagBqAegZ2P//g8Qgg/gBdQHMM8CFwHW9i038g3kUA3Vsi1X8gXoMvLrc
/nUJi0X8g3gYAHQhaJAKQgBqAGgOBAAAaIAIQgBqAujU1///g8QUg/gBdQHMM8mFyXXEi1X8i0IQ
g8AkUDPJig0tLkIAUYtV/FLoSU0AAIPEDItF/FDo7VAAAIPEBOlqAQAAi038g3kUAnUNg30MAXUH
x0UMAgAAAItV/ItCFDtFDHQhaHAKQgBqAGgbBAAAaIAIQgBqAuhc1///g8QUg/gBdQHMM8mFyXXO
i1X8oUQ3QgArQhCjRDdCAIsNIC5CAIPhAoXJD4XYAAAAi1X8gzoAdBCLRfyLCItV/ItCBIlBBOs+
iw04N0IAO038dCFoWApCAGoAaCoEAABogAhCAGoC6PHW//+DxBSD+AF1Acwz0oXSdc6LRfyLSASJ
DTg3QgCLVfyDegQAdA+LRfyLSASLVfyLAokB6z2LDUA3QgA7Tfx0IWhACkIAagBoNAQAAGiACEIA
agLom9b//4PEFIP4AXUBzDPShdJ1zotF/IsIiQ1AN0IAi1X8i0IQg8AkUDPJig0tLkIAUYtV/FLo
BUwAAIPEDItF/FDoqU8AAIPEBOspi038x0EUAAAAAItV/ItCEFAzyYoNLS5CAFGLVfyDwiBS6M5L
AACDxAxfXluL5V3DzMzMzFWL7GoBi0UIUOgSAAAAg8QIXcPMzMzMzMzMzMzMzMzMVYvsg+wIU1ZX
oSAuQgCD4ASFwHQw6JYCAACFwHUhaIwIQgBqAGh8BAAAaIAIQgBqAujK1f//g8QUg/gBdQHMM8mF
yXXQi1UIUuiiBgAAg8QEhcB1IWiQCUIAagBohQQAAGiACEIAagLok9X//4PEFIP4AXUBzDPAhcB1
yYtNCIPpIIlN+ItV+ItCFCX//wAAg/gEdEOLTfiDeRQBdDqLVfiLQhQl//8AAIP4AnQqi034g3kU
A3QhaCgLQgBqAGiLBAAAaIAIQgBqAugx1f//g8QUg/gBdQHMM9KF0nWni0X4g3gUAnUNg30MAXUH
x0UMAgAAAItN+IN5FAN0MotV+ItCFDtFDHQhaHAKQgBqAGiSBAAAaIAIQgBqAujg1P//g8QUg/gB
dQHMM8mFyXXOi1X4i0IQiUX8i0X8X15bi+Vdw8zMzMzMzMzMzMzMzMzMVYvsUaEoLkIAiUX8i00I
iQ0oLkIAi0X8i+Vdw8zMzMxVi+xRU1ZXi0UIUOhwBQAAg8QEhcB0a4tNCIPpIIlN/ItV/ItCFCX/
/wAAg/gEdEOLTfyDeRQBdDqLVfyLQhQl//8AAIP4AnQqi038g3kUA3QhaCgLQgBqAGjTBAAAaIAI
QgBqAugm1P//g8QUg/gBdQHMM9KF0nWni0X8i00MiUgUX15bi+Vdw8zMzMzMzMxVi+xRoYAxQgCJ
RfyLTQiJDYAxQgCLRfyL5V3DzMzMzFWL7FFTVlfHRfwBAAAAi0UQi00Qg+kBiU0QhcB0YItVCDPA
igKLTQyB4f8AAACLVQiDwgGJVQg7wXRBi0UMJf8AAABQi00IM9KKUf9Si0UIg+gBUGhsC0IAagBq
AGoAagDoetP//4PEIIP4AXUBzDPJhcl1xsdF/AAAAADrkItF/F9eW4vlXcPMzMzMzMzMzFWL7IPs
GFNWV8dF/AEAAAChIC5CAIPgAYXAdQq4AQAAAOkUAwAA6MVMAACJRfSDffT/D4T9AAAAg330/g+E
8wAAAItN9IlN6ItV6IPCBolV6IN96AMPh60AAACLRej/JIURWEAAaMAMQgBoIAhCAGoAagBqAGoA
6NTS//+DxBiD+AF1AcwzyYXJddrpngAAAGicDEIAaCAIQgBqAGoAagBqAOip0v//g8QYg/gBdQHM
M9KF0nXa63ZoeAxCAGggCEIAagBqAGoAagDogdL//4PEGIP4AXUBzDPAhcB12utOaFQMQgBoIAhC
AGoAagBqAGoA6FnS//+DxBiD+AF1AcwzyYXJddrrJmgoDEIAaCAIQgBqAGoAagBqAOgx0v//g8QY
g/gBdQHMM9KF0nXaM8DpBQIAAKFAN0IAiUX46wiLTfiLEYlV+IN9+AAPhOYBAADHRfABAAAAi0X4
i0gUgeH//wAAg/kEdCOLVfiDehQBdBqLRfiLSBSB4f//AACD+QJ0CYtV+IN6FAN1GItF+ItIFIHh
//8AAIsUjTAuQgCJVezrB8dF7CAMQgBqBKAsLkIAUItN+IPBHFHosf3//4PEDIXAdTqLVfiDwiBS
i0X4i0gYUYtV7FJo/ApCAGoAagBqAGoA6GbR//+DxCCD+AF1AcwzwIXAdc3HRfAAAAAAagSKDSwu
QgBRi1X4i0IQi034jVQBIFLoVP3//4PEDIXAdTqLRfiDwCBQi034i1EYUotF7FBo0ApCAGoAagBq
AGoA6AnR//+DxCCD+AF1AcwzyYXJdc3HRfAAAAAAi1X4g3oUAHVQi0X4i0gQUYoVLS5CAFKLRfiD
wCBQ6PD8//+DxAyFwHUvi034g8EgUWj0C0IAagBqAGoAagDosND//4PEGIP4AXUBzDPShdJ12MdF
8AAAAACDffAAdXaLRfiDeAgAdDOLTfiLUQxSi0X4i0gIUYtV7FJo1AtCAGoAagBqAGoA6GfQ//+D
xCCD+AF1AcwzwIXAdc2LTfiLURBSi0X4g8AgUItN7FFoqAtCAGoAagBqAGoA6DTQ//+DxCCD+AF1
Acwz0oXSdc3HRfwAAAAA6Qj+//+LRfxfXluL5V3DsFVAAIhVQABgVUAANVVAAMzMzMzMzMzMzMzM
zMzMzFWL7FGhIC5CAIlF/IN9CP90CYtNCIkNIC5CAItF/IvlXcPMzMzMzMzMzMzMzMzMzFWL7FGh
IC5CAIPgAYXAdQLrPYsNQDdCAIlN/OsIi1X8iwKJRfyDffwAdCSLTfyLURSB4v//AACD+gR1EYtF
DFCLTfyDwSBR/1UIg8QI686L5V3DzMzMzMzMzMzMzMzMzFWL7FGDfQgAdDOLRQxQi00IUf8VwFFC
AIXAdSGDfRAAdBKLVQxSi0UIUP8VvFFCAIXAdQnHRfwBAAAA6wfHRfwAAAAAi0X8i+Vdw8zMzMzM
VYvsUYN9CAB1BDPA63RqAWogi0UIg+ggUOiS////g8QMhcB1BDPA61mLTQiD6SBR6AsoAACDxASJ
RfyDffwAdBWLVQiD6iBSi0X8UOhPKAAAg8QI6yyLDeQ1QgCB4QCAAACFyXQHuAEAAADrFYtVCIPq
IFJqAKF0OkIAUP8VxFFCAIvlXcPMzMzMzMzMzMzMVYvsUYtFCFDoY////4PEBIXAdQczwOmmAAAA
i00Ig+kgiU38i1X8i0IUJf//AACD+AR0IotN/IN5FAF0GYtV/ItCFCX//wAAg/gCdAmLTfyDeRQD
dWlqAYtVDFKLRQhQ6Lv+//+DxAyFwHRTi038i1EQO1UMdUiLRfyLSBg7DSQuQgB/OoN9EAB0C4tV
EItF/ItIGIkKg30UAHQLi1UUi0X8i0gIiQqDfRgAdAuLVRiLRfyLSAyJCrgBAAAA6wIzwIvlXcPM
zMzMzMzMzMzMzFWL7FGhaDpCAIlF/ItNCIkNaDpCAItF/IvlXcPMzMzMVYvsg+wIU1ZXg30IAHUr
aAgNQgBoIAhCAGoAagBqAGoA6GrN//+DxBiD+AF1AcwzwIXAddrpFQEAAItNCIsVQDdCAIkRx0X8
AAAAAOsJi0X8g8ABiUX8g338BX0ei038i1UIx0SKGAAAAACLRfyLTQjHRIEEAAAAAOvTixVAN0IA
iVX46wiLRfiLCIlN+IN9+AAPhJ8AAACLVfiLQhQl//8AAIXAfGaLTfiLURSB4v//AACD+gV9VYtF
+ItIFIHh//8AAItVCItEigSDwAGLTfiLURSB4v//AACLTQiJRJEEi1X4i0IUJf//AACLTQiLVIEY
i0X4A1AQi034i0EUJf//AACLTQiJVIEY6yWLVfhSaOQMQgBqAGoAagBqAOhtzP//g8QYg/gBdQHM
M8CFwHXb6U////+LTQiLFUg3QgCJUSyLRQiLDTw3QgCJSDBfXluL5V3DzMzMzMzMzMzMzFWL7IPs
CFNWV8dF+AAAAACDfQgAdAyDfQwAdAaDfRAAdS5oMA1CAGggCEIAagBqAGoAagDo98v//4PEGIP4
AXUBzDPAhcB12otF+OnMAAAAx0X8AAAAAOsJi038g8EBiU38g338BQ+NgAAAAItV/ItFEItN/It1
DItUkBgrVI4Yi0X8i00IiVSBGItV/ItFEItN/It1DItUkAQrVI4Ei0X8i00IiVSBBItV/ItFCIN8
kBgAdQ2LTfyLVQiDfIoEAHQlg338AHQfg338AnUSg338AnUToSAuQgCD4BCFwHQHx0X4AQAAAOlt
////i00Qi1UMi0EsK0Isi00IiUEsi1UQi0UMi0owK0gwi1UIiUowi0UIxwAAAAAAi0X4X15bi+Vd
w8zMzMzMzMzMzMzMzMxVi+yD7AhTVlfHRfgAAAAAaCgOQgBoIAhCAGoAagBqAGoA6NnK//+DxBiD
+AF1AcwzwIXAddqDfQgAdAiLTQiLEYlV+KFAN0IAiUX86wiLTfyLEYlV/IN9/AAPhBgCAACLRfw7
RfgPhAwCAACLTfyLURSB4v//AACD+gN0LYtF/ItIFIHh//8AAIXJdB2LVfyLQhQl//8AAIP4AnUS
iw0gLkIAg+EQhcl1BenEAQAAi1X8g3oIAHRwagBqAYtF/ItICFHo2Pr//4PEDIXAdSqLVfyLQgxQ
aBQOQgBqAGoAagBqAOgYyv//g8QYg/gBdQHMM8mFyXXY6y+LVfyLQgxQi038i1EIUmgIDkIAagBq
AGoAagDo58n//4PEHIP4AXUBzDPAhcB10YtN/ItRGFJoAA5CAGoAagBqAGoA6L/J//+DxBiD+AF1
AcwzwIXAddiLTfyLURSB4v//AACD+gR1cYtF/ItIEFGLVfyLQhTB+BAl//8AAFCLTfyDwSBRaMwN
QgBqAGoAagBqAOhwyf//g8Qgg/gBdQHMM9KF0nXCgz1oOkIAAHQZi0X8i0gQUYtV/IPCIFL/FWg6
QgCDxAjrDItF/FDo5gAAAIPEBOmhAAAAi038g3kUAXU9i1X8i0IQUItN/IPBIFFopA1CAGoAagBq
AGoA6AXJ//+DxByD+AF1Acwz0oXSddGLRfxQ6J0AAACDxATrW4tN/ItRFIHi//8AAIP6AnVKi0X8
i0gQUYtV/ItCFMH4ECX//wAAUItN/IPBIFFocA1CAGoAagBqAGoA6KjI//+DxCCD+AF1Acwz0oXS
dcKLRfxQ6EAAAACDxATp1v3//2hYDUIAaCAIQgBqAGoAagBqAOhxyP//g8QYg/gBdQHMM8mFyXXa
X15bi+Vdw8zMzMzMzMzMzMzMVYvsg+xcU1ZXx0W0AAAAAOsJi0W0g8ABiUW0i00Ig3kQEH0Li1UI
i0IQiUWs6wfHRawQAAAAi020O02sD42aAAAAi1UIA1W0ikIgiEWwgz2EMUIAAX4caFcBAACLTbCB
4f8AAABR6PVCAACDxAiJRajrHYtVsIHi/wAAAKFgLkIAM8lmiwxQgeFXAQAAiU2og32oAHQOi1Ww
geL/AAAAiVWk6wfHRaQgAAAAi0W0ik2kiEwFuItVsIHi/wAAAFJoTA5CAItFtGvAA41MBcxR6IxB
AACDxAzpNv///4tVtMZEFbgAjUXMUI1NuFFoPA5CAGoAagBqAGoA6FLH//+DxByD+AF1Acwz0oXS
dddfXluL5V3DzMzMzMzMzMzMzMzMVYvsg+w0U1ZXjUXMUOiO+f//g8QEg33gAHUZg33UAHUTiw0g
LkIAg+EQhcl0PYN92AB0N2hUDkIAaCAIQgBqAGoAagBqAOjlxv//g8QYg/gBdQHMM9KF0nXaagDo
z/v//4PEBLgBAAAA6wIzwF9eW4vlXcPMzMzMzMzMzMzMzMxVi+xRU1ZXg30IAHUF6awAAADHRfwA
AAAA6wmLRfyDwAGJRfyDffwFfUSLTfyLFI0wLkIAUotF/ItNCItUgQRSi0X8i00Ii1SBGFJosA5C
AGoAagBqAGoA6FPG//+DxCCD+AF1AcwzwIXAdb7rrYtNCItRLFJojA5CAGoAagBqAGoA6CnG//+D
xBiD+AF1AcwzwIXAddiLTQiLUTBSaGwOQgBqAGoAagBqAOgBxv//g8QYg/gBdQHMM8CFwHXYX15b
i+Vdw8zMzMzMzMzMzMzMVYvsi0UIOwW8O0IAcgQzwOsbi00IwfkFi1UIg+IfiwSNgDpCAA++RNAE
g+BAXcPMVYvsg30IAHUMagDoIAEAAIPEBOs8i0UIUOhCAAAAg8QEhcB0BYPI/+sni00Ii1EMgeIA
QAAAhdJ0FYtFCItIEFHoOkEAAIPEBPfYG8DrAjPAXcPMzMzMzMzMzMzMzMzMVYvsg+wMx0X8AAAA
AItFCIlF+ItN+ItRDIPiA4P6AnV6i0X4i0gMgeEIAQAAhcl0aotV+ItF+IsKK0gIiU30g330AH5W
i1X0UotF+ItICFGLVfiLQhBQ6HRBAACDxAw7RfR1IYtN+ItRDIHigAAAAIXSdA+LRfiLSAyD4f2L
VfiJSgzrFotF+ItIDIPJIItV+IlKDMdF/P////+LRfiLTfiLUQiJEItF+MdABAAAAACLRfyL5V3D
zMzMzMzMzMzMVYvsagHoBgAAAIPEBF3DzFWL7IPsDMdF/AAAAADHRfgAAAAAx0X0AAAAAOsJi0X0
g8ABiUX0i030Ow1AT0IAD42XAAAAi1X0oew7QgCDPJAAD4SAAAAAi030ixXsO0IAiwSKi0gMgeGD
AAAAhcl0Z4N9CAF1JItV9KHsO0IAiwyQUehZ/v//g8QEg/j/dAmLVfyDwgGJVfzrPYN9CAB1N4tF
9IsN7DtCAIsUgYtCDIPgAoXAdCGLTfSLFew7QgCLBIpQ6Bj+//+DxASD+P91B8dF+P/////pUf//
/4N9CAF1BYtF/OsDi0X4i+Vdw8zMi0wkBPfBAwAAAHQUigFBhMB0QPfBAwAAAHXxBQAAAACLAbr/
/v5+A9CD8P8zwoPBBKkAAQGBdOiLQfyEwHQyhOR0JKkAAP8AdBOpAAAA/3QC682NQf+LTCQEK8HD
jUH+i0wkBCvBw41B/YtMJAQrwcONQfyLTCQEK8HDzMzMzMxVi+yD7AiDfQgAdQczwOmHAAAAgz2A
N0IAAHUti0UMJf//AAA9/wAAAH4PxwXYNUIAKgAAAIPI/+tgi00IilUMiBG4AQAAAOtRx0X4AAAA
AI1F+FBqAIsNhDFCAFGLVQhSagGNRQxQaCACAACLDZA3QgBR/xWQUUIAiUX8g338AHQGg334AHQP
xwXYNUIAKgAAAIPI/+sDi0X8i+Vdw8zMU1aLRCQYC8B1GItMJBSLRCQQM9L38YvYi0QkDPfxi9Pr
QYvIi1wkFItUJBCLRCQM0enR29Hq0dgLyXX09/OL8PdkJBiLyItEJBT35gPRcg47VCQQdwhyBztE
JAx2AU4z0ovGXlvCEADMzMzMzMzMzFOLRCQUC8B1GItMJBCLRCQMM9L38YtEJAj38YvCM9LrUIvI
i1wkEItUJAyLRCQI0enR29Hq0dgLyXX09/OLyPdkJBSR92QkEAPRcg47VCQMdwhyDjtEJAh2CCtE
JBAbVCQUK0QkCBtUJAz32vfYg9oAW8IQAMzMzMzMzMzMzMzMVYvsg+wUU1ZXg30MAHUeaLgBQgBq
AGppaBAPQgBqAuhswf//g8QUg/gBdQHMM8CFwHXWi00MiU34i1X4i0IQiUXwi034i1EMgeKCAAAA
hdJ0DYtF+ItIDIPhQIXJdBaLVfiLQgwMIItN+IlBDIPI/+n2AQAAi1X4i0IMg+ABhcB0SotN+MdB
BAAAAACLVfiLQgyD4BCFwHQci034i1X4i0IIiQGLTfiLUQyD4v6LRfiJUAzrF4tN+ItRDIPKIItF
+IlQDIPI/+mfAQAAi034i1EMg8oCi0X4iVAMi034i1EMg+Lvi0X4iVAMi034x0EEAAAAAMdF/AAA
AACLVfyJVfSLRfiLSAyB4QwBAACFyXUugX34YCpCAHQJgX34gCpCAHUQi1XwUuiE+v//g8QEhcB1
DItF+FDohEAAAIPEBItN+ItRDIHiCAEAAIXSD4TWAAAAi0X4i034ixArUQiF0n0haNAOQgBqAGig
AAAAaBAPQgBqAugWwP//g8QUg/gBdQHMM8CFwHXKi034i1X4iwErQgiJRfyLTfiLUQiDwgGLRfiJ
EItN+ItRGIPqAYtF+IlQBIN9/AB+HItN/FGLVfiLQghQi03wUehCPAAAg8QMiUX060aDffD/dBuL
VfDB+gWLRfCD4B+LDJWAOkIAjRTBiVXs6wfHRexwLUIAi0XsD75IBIPhIIXJdBBqAmoAi1XwUui3
PgAAg8QMi0X4i0gIilUIiBHrHsdF/AEAAACLRfxQjU0IUYtV8FLozzsAAIPEDIlF9ItF9DtF/HQU
i034i1EMg8ogi0X4iVAMg8j/6wiLRQgl/wAAAF9eW4vlXcPMzMzMzMzMzMzMzMzMzFWL7IPsCMdF
/AAAAADHRfgDAAAA6wmLRfiDwAGJRfiLTfg7DUBPQgB9e4tV+KHsO0IAgzyQAHRoi034ixXsO0IA
iwSKi0gMgeGDAAAAhcl0IotV+KHsO0IAiwyQUeiuPwAAg8QEg/j/dAmLVfyDwgGJVfyDffgUfCdq
AotF+IsN7DtCAIsUgVLoc+T//4PECItF+IsN7DtCAMcEgQAAAADpcf///4tF/IvlXcPMzMzMVYvs
g30QCnUeg30IAH0YagGLRRBQi00MUYtVCFLoLgAAAIPEEOsWagCLRRBQi00MUYtVCFLoFgAAAIPE
EItFDF3DzMzMzMzMzMzMzMzMzMxVi+yD7BCLRQyJRfyDfRQAdBeLTfzGAS2LVfyDwgGJVfyLRQj3
2IlFCItN/IlN+ItFCDPS93UQiVX0i0UIM9L3dRCJRQiDffQJdhaLVfSDwleLRfyIEItN/IPBAYlN
/OsUi1X0g8Iwi0X8iBCLTfyDwQGJTfyDfQgAd7SLVfzGAgCLRfyD6AGJRfyLTfyKEYhV8ItF/ItN
+IoRiBCLRfiKTfCICItV/IPqAYlV/ItF+IPAAYlF+ItN+DtN/HLMi+Vdw8zMzMzMzMzMzMzMzMzM
VYvsUYN9EAp1D4N9CAB9CcdF/AEAAADrB8dF/AAAAACLRfxQi00QUYtVDFKLRQhQ6Pv+//+DxBCL
RQyL5V3DzFWL7GoAi0UQUItNDFGLVQhS6Nr+//+DxBCLRQxdw8zMVYvsUYN9FAp1F4N9DAB/EXwG
g30IAHMJx0X8AQAAAOsHx0X8AAAAAItF/FCLTRRRi1UQUotFDFCLTQhR6A8AAACLRRCL5V3DzMzM
zMzMzMxVi+yD7BCLRRCJRfyDfRgAdCKLTfzGAS2LVfyDwgGJVfyLRQj32ItNDIPRAPfZiUUIiU0M
i1X8iVX4i0UUM8lRUItVDFKLRQhQ6DL6//+JRfSLTRQz0lJRi0UMUItNCFHoq/n//4lFCIlVDIN9
9Al2FotV9IPCV4tF/IgQi038g8EBiU386xSLVfSDwjCLRfyIEItN/IPBAYlN/IN9DAB3mXIGg30I
AHeRi1X8xgIAi0X8g+gBiUX8i038ihGIVfCLRfyLTfiKEYgQi0X4ik3wiAiLVfyD6gGJVfyLRfiD
wAGJRfiLTfg7TfxyzIvlXcIUAMzMzMzMzMzMzMzMzMzMVYvsagCLRRRQi00QUYtVDFKLRQhQ6Ob+
//+LRRBdw8xVi+yD7DBTVleNReCJRdyNTRSJTdSDfQgAdR5oKA9CAGoAal1oHA9CAGoC6EC7//+D
xBSD+AF1Acwz0oXSddaDfRAAdR5ooABCAGoAal5oHA9CAGoC6Ba7//+DxBSD+AF1AcwzwIXAddaL
TdzHQQxCAAAAi1Xci0UIiUIIi03ci1UIiRGLRdyLTQyJSASLVdRSi0UQUItN3FHo0qn//4PEDIlF
2ItV3ItCBIPoAYtN3IlBBItV3IN6BAB8IotF3IsIxgEAM9KB4v8AAACJVdCLRdyLCIPBAYtV3IkK
6xGLRdxQagDo9/j//4PECIlF0ItF2F9eW4vlXcPMzMzMzMzMV4t8JAjrao2kJAAAAACL/4tMJARX
98EDAAAAdA+KAUGEwHQ798EDAAAAdfGLAbr//v5+A9CD8P8zwoPBBKkAAQGBdOiLQfyEwHQjhOR0
GqkAAP8AdA6pAAAA/3QC682Nef/rDY15/usIjXn96wONefyLTCQM98EDAAAAdBmKEUGE0nRkiBdH
98EDAAAAde7rBYkXg8cEuv/+/n6LAQPQg/D/M8KLEYPBBKkAAQGBdOGE0nQ0hPZ0J/fCAAD/AHQS
98IAAAD/dALrx4kXi0QkCF/DZokXi0QkCMZHAgBfw2aJF4tEJAhfw4gXi0QkCF/DVYvsg+wsU1ZX
jUXgiUXcg30IAHUeaCgPQgBqAGpaaDgPQgBqAuhWuf//g8QUg/gBdQHMM8mFyXXWg30QAHUeaKAA
QgBqAGpbaDgPQgBqAugsuf//g8QUg/gBdQHMM9KF0nXWi0Xcx0AMQgAAAItN3ItVCIlRCItF3ItN
CIkIi1Xci0UMiUIEi00UUYtVEFKLRdxQ6Oin//+DxAyJRdiLTdyLUQSD6gGLRdyJUASLTdyDeQQA
fCKLVdyLAsYAADPJgeH/AAAAiU3Ui1XciwKDwAGLTdyJAesRi1XcUmoA6A33//+DxAiJRdSLRdhf
XluL5V3DzMzMzMzMzMzMzMzMzFE9ABAAAI1MJAhyFIHpABAAAC0AEAAAhQE9ABAAAHPsK8iLxIUB
i+GLCItABFDDzFWL7IPsDIN9DAR0BoN9DAN1BelCAQAAg30IAnQWg30IFXQQg30IFnQKg30IDw+F
uAAAAIN9CAJ0BoN9CBV1N4M9XDdCAAB1LmoBaHBxQAD/FcxRQgCD+AF1DMcFXDdCAAEAAADrEP8V
yFFCAKPcNUIA6eMAAACLRQiJRfSLTfSD6QKJTfSDffQUd16LRfQz0oqQTnFAAP8klTpxQACLDUw3
QgCJTfiLVQyJFUw3QgDrOKFQN0IAiUX4i00MiQ1QN0IA6yWLFVQ3QgCJVfiLRQyjVDdCAOsSiw1Y
N0IAiU34i1UMiRVYN0IA62mDfQgIdA6DfQgEdAiDfQgLdALrWotFCFDoyAIAAIPEBIlF/IN9/AB1
AutDi038i1EIiVX4i0X8i0gEO00IdSqLVfyLRQyJQgiLTfyDwQyJTfyLFWgtQgBr0gyBwugsQgA5
VfxyAusC68uLRfjrDccF2DVCABYAAACDyP+L5V3DbXBAAKdwQACBcEAAlHBAALlwQAAABAQEBAQE
BAQEBAQEAQQEBAQEAgPMzMzMzMzMzMzMzMzMVYvsg+wMg30IAHUYx0X4TDdCAItF+IsIiU30x0X8
AgAAAOsWx0X4UDdCAItV+IsCiUX0x0X8FQAAAIN99AB1BDPA6x6DffQBdBOLTfjHAQAAAACLVfxS
/1X0g8QEuAEAAACL5V3CBADMzMzMzMzMzFWL7IPsGItFCIlF6ItN6IPpAolN6IN96BR3cotF6DPS
ipB8c0AA/ySVZHNAAMdF8Ew3QgCLTfCLEYlV7OtXx0XwUDdCAItF8IsIiU3s60bHRfBUN0IAi1Xw
iwKJRezrNcdF8Fg3QgCLTfCLEYlV7Oski0UIUOhGAQAAg8QEg8AIiUXwi03wixGJVezrCIPI/+nr
AAAAg33sAXUHM8Dp3gAAAIN97AB1B2oD6JG8//+DfQgIdAyDfQgLdAaDfQgEdSuhKDZCAIlF9McF
KDZCAAAAAACDfQgIdROLDWwtQgCJTfzHBWwtQgCMAAAAg30ICHU5ixVgLUIAiVX46wmLRfiDwAGJ
RfiLDWAtQgADDWQtQgA5Tfh9EotV+GvSDMeC8CxCAAAAAADr1OsJi0XwxwAAAAAAg30ICHURiw1s
LUIAUWoI/1Xsg8QI6wqLVQhS/1Xsg8QEg30ICHQMg30IC3QGg30IBHUXi0X0oyg2QgCDfQgIdQmL
TfyJDWwtQgAzwIvlXcMNckAAUXJAAEByQAAeckAAL3JAAG1yQAAABQEFBQUBBQUBBQUFAgUFBQUF
AwTMzMzMzMzMzMzMzMzMzMxVi+xRx0X86CxCAItF/ItIBDtNCHQdi1X8g8IMiVX8oWgtQgBrwAwF
6CxCADlF/HMC69iLDWgtQgBryQyBwegsQgA5TfxzEItV/ItCBDtFCHUFi0X86wIzwIvlXcPMzMxV
i+yD7AjHRfwAAAAAgz1gN0IAAHVdaEADQgD/FXBRQgCJRfiDffgAdB1oaA9CAItF+FD/FUhRQgCj
YDdCAIM9YDdCAAB1BDPA62xoWA9CAItN+FH/FUhRQgCjZDdCAGhED0IAi1X4Uv8VSFFCAKNoN0IA
gz1kN0IAAHQJ/xVkN0IAiUX8g338AHQWgz1oN0IAAHQNi0X8UP8VaDdCAIlF/ItNEFGLVQxSi0UI
UItN/FH/FWA3QgCL5V3DzMzMzMyLTCQMV4XJdHpWU4vZi3QkFPfGAwAAAIt8JBB1B8HpAnVv6yGK
BkaIB0dJdCWEwHQp98YDAAAAdeuL2cHpAnVRg+MDdA2KBkaIB0eEwHQvS3Xzi0QkEFteX8P3xwMA
AAB0EogHR0kPhIoAAAD3xwMAAAB17ovZwekCdWyIB0dLdfpbXotEJAhfw4kXg8cESXSvuv/+/n6L
BgPQg/D/M8KLFoPGBKkAAQGBdN6E0nQshPZ0HvfCAAD/AHQM98IAAAD/dcaJF+sYgeL//wAAiRfr
DoHi/wAAAIkX6wQz0okXg8cEM8BJdAozwIkHg8cESXX4g+MDdYWLRCQQW15fw8zMVYvsg+woi0UI
UOjxAgAAg8QEiUUIi00IOw3EN0IAdQczwOnTAgAAg30IAHUR6K4DAADoKQQAADPA6bwCAADHRfwA
AAAA6wmLVfyDwgGJVfyDffwFD4M9AQAAi0X8a8Awi4h4MEIAO00ID4UjAQAAx0XcAAAAAOsJi1Xc
g8IBiVXcgX3cAQEAAHMMi0XcxoBgOUIAAOvix0X0AAAAAOsJi030g8EBiU30g330BHN7i1X8a9Iw
i0X0jYzCiDBCAIlN+OsJi1X4g8ICiVX4i0X4M8mKCIXJdE2LVfgzwIpCAYXAdEGLTfgz0ooRiVXc
6wmLRdyDwAGJRdyLTfgz0opRATlV3Hcdi0Xci030ipBhOUIACpFwMEIAi0XciJBhOUIA683rn+l2
////i00IiQ3EN0IAxwVMOEIAAQAAAIsVxDdCAFLoGAIAAIPEBKNkOkIAx0X0AAAAAOsJi0X0g8AB
iUX0g330BnMei038a8kwi1X0i0X0ZouMQXwwQgBmiQxVQDhCAOvT6NUCAAAzwOloAQAA6bD+//+N
VeBSi0UIUP8V0FFCAIP4AQ+FMgEAAMdF3AAAAADrCYtN3IPBAYlN3IF93AEBAABzDItV3MaCYDlC
AADr4otFCKPEN0IAxwVkOkIAAAAAAIN94AEPhrUAAACNTeaJTdjrCYtV2IPCAolV2ItF2DPJigiF
yXRHi1XYM8CKQgGFwHQ7i03YM9KKEYlV3OsJi0Xcg8ABiUXci03YM9KKUQE5Vdx3F4tF3IqIYTlC
AIDJBItV3IiKYTlCAOvT66XHRdwBAAAA6wmLRdyDwAGJRdyBfdz/AAAAcxeLTdyKkWE5QgCAygiL
RdyIkGE5QgDr14sNxDdCAFHozgAAAIPEBKNkOkIAxwVMOEIAAQAAAOsKxwVMOEIAAAAAAMdF9AAA
AADrCYtV9IPCAYlV9IN99AZzD4tF9GbHBEVAOEIAAADr4uiEAQAAM8DrGoM9bDdCAAB0DujyAAAA
6G0BAAAzwOsDg8j/i+Vdw8zMVYvsxwVsN0IAAAAAAIN9CP51EscFbDdCAAEAAAD/FdhRQgDrMoN9
CP11EscFbDdCAAEAAAD/FdRRQgDrGoN9CPx1EccFbDdCAAEAAAChkDdCAOsDi0UIXcPMzMzMzMzM
VYvsUYtFCIlF/ItN/IHppAMAAIlN/IN9/BJ3LotF/DPSipCEeUAA/ySVcHlAALgRBAAA6xe4BAgA
AOsQuBIEAADrCbgEBAAA6wIzwIvlXcNOeUAAVXlAAFx5QABjeUAAanlAAAAEBAQBBAQEBAQEBAQE
BAQEAgPMzMzMzMzMzMxVi+xRx0X8AAAAAOsJi0X8g8ABiUX8gX38AQEAAH0Mi038xoFgOUIAAOvi
xwXEN0IAAAAAAMcFTDhCAAAAAADHBWQ6QgAAAAAAx0X8AAAAAOsJi1X8g8IBiVX8g338Bn0Pi0X8
ZscERUA4QgAAAOvii+Vdw8zMzMzMzMzMzMzMzFWL7IHsHAUAAI2F6Pz//1CLDcQ3QgBR/xXQUUIA
g/gBD4UTAgAAx4Xk+v//AAAAAOsPi5Xk+v//g8IBiZXk+v//gb3k+v//AAEAAHMVi4Xk+v//io3k
+v//iIwF/Pz//+vQxoX8/P//II2V7vz//4lV/OsJi0X8g8ACiUX8i038M9KKEYXSdECLRfwzyYoI
iY3k+v//6w+LleT6//+DwgGJleT6//+LRfwzyYpIATmN5Pr//3cQi5Xk+v//xoQV/Pz//yDr0eus
agChZDpCAFCLDcQ3QgBRjZX8/f//UmgAAQAAjYX8/P//UGoB6E8yAACDxBxqAIsNxDdCAFFoAAEA
AI2V6Pv//1JoAAEAAI2F/Pz//1BoAAEAAIsNZDpCAFHoui4AAIPEIGoAixXEN0IAUmgAAQAAjYXo
+v//UGgAAQAAjY38/P//UWgAAgAAixVkOkIAUuiFLgAAg8Qgx4Xk+v//AAAAAOsPi4Xk+v//g8AB
iYXk+v//gb3k+v//AAEAAA+DqwAAAIuN5Pr//zPSZouUTfz9//+D4gGF0nQ2i4Xk+v//iohhOUIA
gMkQi5Xk+v//iIphOUIAi4Xk+v//i43k+v//ipQN6Pv//4iQYDhCAOtZi4Xk+v//M8lmi4xF/P3/
/4PhAoXJdDWLleT6//+KgmE5QgAMIIuN5Pr//4iBYTlCAIuV5Pr//4uF5Pr//4qMBej6//+IimA4
QgDrDYuV5Pr//8aCYDhCAADpNv///+nFAAAAx4Xk+v//AAAAAOsPi4Xk+v//g8ABiYXk+v//gb3k
+v//AAEAAA+DmgAAAIO95Pr//0FyO4O95Pr//1p3MouN5Pr//4qRYTlCAIDKEIuF5Pr//4iQYTlC
AIuN5Pr//4PBIIuV5Pr//4iKYDhCAOtRg73k+v//YXI7g73k+v//encyi4Xk+v//iohhOUIAgMkg
i5Xk+v//iIphOUIAi4Xk+v//g+ggi43k+v//iIFgOEIA6w2LleT6///GgmA4QgAA6Uf///+L5V3D
zMzMzMzMzMzMzMzMzMxVi+yDPUw4QgAAdAehxDdCAOsCM8Bdw8zMzMzMzMzMzFWL7IM90DtCAAB1
FGr96F34//+DxATHBdA7QgABAAAAXcPMzMzMzMzMzMzMzMzMzFWL7FdWi3UMi00Qi30Ii8GL0QPG
O/52CDv4D4J4AQAA98cDAAAAdRTB6QKD4gOD+QhyKfOl/ySVyH5AAIvHugMAAACD6QRyDIPgAwPI
/ySF4H1AAP8kjdh+QACQ/ySNXH5AAJDwfUAAHH5AAEB+QAAj0YoGiAeKRgGIRwGKRgLB6QKIRwKD
xgODxwOD+QhyzPOl/ySVyH5AAI1JACPRigaIB4pGAcHpAohHAYPGAoPHAoP5CHKm86X/JJXIfkAA
kCPRigaIB0bB6QJHg/kIcozzpf8klch+QACNSQC/fkAArH5AAKR+QACcfkAAlH5AAIx+QACEfkAA
fH5AAItEjuSJRI/ki0SO6IlEj+iLRI7siUSP7ItEjvCJRI/wi0SO9IlEj/SLRI74iUSP+ItEjvyJ
RI/8jQSNAAAAAAPwA/j/JJXIfkAAi//YfkAA4H5AAOx+QAAAf0AAi0UIXl/Jw5CKBogHi0UIXl/J
w5CKBogHikYBiEcBi0UIXl/Jw41JAIoGiAeKRgGIRwGKRgKIRwKLRQheX8nDkI10MfyNfDn898cD
AAAAdSTB6QKD4gOD+QhyDf3zpfz/JJVggEAAi//32f8kjRCAQACNSQCLx7oDAAAAg/kEcgyD4AMr
yP8khWh/QAD/JI1ggEAAkHh/QACYf0AAwH9AAIpGAyPRiEcDTsHpAk+D+Qhytv3zpfz/JJVggEAA
jUkAikYDI9GIRwOKRgLB6QKIRwKD7gKD7wKD+QhyjP3zpfz/JJVggEAAkIpGAyPRiEcDikYCiEcC
ikYBwekCiEcBg+4Dg+8Dg/kID4Ja/////fOl/P8klWCAQACNSQAUgEAAHIBAACSAQAAsgEAANIBA
ADyAQABEgEAAV4BAAItEjhyJRI8ci0SOGIlEjxiLRI4UiUSPFItEjhCJRI8Qi0SODIlEjwyLRI4I
iUSPCItEjgSJRI8EjQSNAAAAAAPwA/j/JJVggEAAi/9wgEAAeIBAAIiAQACcgEAAi0UIXl/Jw5CK
RgOIRwOLRQheX8nDjUkAikYDiEcDikYCiEcCi0UIXl/Jw5CKRgOIRwOKRgKIRwKKRgGIRwGLRQhe
X8nDzMzMzMzMzMzMzMxVi+yhcDFCAF3DzMzMzMzMVYvsgX0I+AMAAHYEM8DrDYtFCKNwMUIAuAEA
AABdw8xVi+xoQAEAAGoAoXQ6QgBQ/xXcUUIAo8A3QgCDPcA3QgAAdQQzwOsviw3AN0IAiQ20N0IA
xwW4N0IAAAAAAMcFvDdCAAAAAADHBaA3QgAQAAAAuAEAAABdw8zMzMzMzMxVi+yD7AyhvDdCAGvA
FIsNwDdCAAPIiU30ixXAN0IAiVX4i0X4O0X0cyWLTfiLVQgrUQyJVfyBffwAABAAcwWLRfjrDYtF
+IPAFIlF+OvTM8CL5V3DzMzMzMzMzMzMzMxVi+yD7AyLRQiLTQwrSAyJTfiLVfjB6g+JVfy4AAAA
gItN/NPoi00Ii1EII9CF0nUgi0X4g+APhcB1FotN+IHh/w8AAIXJdAnHRfQBAAAA6wfHRfQAAAAA
i0X0i+Vdw8xVi+yD7DyLRQiLSBCJTcSLVQiLRQwrQgyJRfCLTfDB6Q+JTfyLVfxp0gQCAACLRcSN
jBBEAQAAiU34i1UMg+oEiVXki0XkiwiD6QGJTdCLVeQDVdCJVciLRciLCIlN7ItV5ItC/IlF9ItN
7IPhAYXJD4UiAQAAi1XswfoEg+oBiVXcg33cP3YHx0XcPwAAAItFyItNyItQBDtRCA+F0AAAAIN9
3CBzX7gAAACAi03c0+j30ItN/ItVxItMikQjyItV/ItFxIlMkESLTcQDTdyKUQSA6gGLRcQDRdyI
UASLTcQDTdwPvlEEhdJ1GLgAAACAi03c0+j30ItNCIsRI9CLRQiJEOtri03cg+kgugAAAIDT6vfS
i0X8i03Ei4SBxAAAACPCi038i1XEiYSKxAAAAItFxANF3IpIBIDpAYtVxANV3IhKBItFxANF3A++
SASFyXUdi03cg+kgugAAAIDT6vfSi0UIi0gEI8qLVQiJSgSLRciLSAiLVciLQgSJQQSLTciLUQSL
RciLSAiJSgiLVdADVeyJVdCLRdDB+ASD6AGJRdiDfdg/dgfHRdg/AAAAi030g+EBhckPhVYBAACL
VeQrVfSJVcyLRfTB+ASD6AGJRdSDfdQ/dgfHRdQ/AAAAi03QA030iU3Qi1XQwfoEg+oBiVXYg33Y
P3YHx0XYPwAAAItF1DtF2A+EAAEAAItNzItVzItBBDtCCA+F0AAAAIN91CBzX7oAAACAi03U0+r3
0otF/ItNxItEgUQjwotN/ItVxIlEikSLRcQDRdSKSASA6QGLVcQDVdSISgSLRcQDRdQPvkgEhcl1
GLoAAACAi03U0+r30otFCIsII8qLVQiJCutri03Ug+kguAAAAIDT6PfQi038i1XEi4yKxAAAACPI
i1X8i0XEiYyQxAAAAItNxANN1IpRBIDqAYtFxANF1IhQBItNxANN1A++UQSF0nUdi03Ug+kguAAA
AIDT6PfQi00Ii1EEI9CLRQiJUASLTcyLUQiLRcyLSASJSgSLVcyLQgSLTcyLUQiJUAiLRcyJReSL
TfSD4QGFyXUMi1XUO1XYD4QQAQAAi0XYi034jRTBiVXgi0Xki03gi1EEiVAEi0Xki03giUgIi1Xg
i0XkiUIEi03ki1EEi0XkiUIIi03ki1Xki0EEO0IID4XIAAAAg33YIHNbi03EA03YD75RBItFxANF
2IpIBIDBAYtFxANF2IhIBIXSdRa6AAAAgItN2NPqi0UIiwgLyotVCIkKuAAAAICLTdjT6ItN/ItV
xItMikQLyItV/ItFxIlMkETrZ4tNxANN2A++UQSLRcQDRdiKSASAwQGLRcQDRdiISASF0nUbi03Y
g+kgugAAAIDT6otFCItIBAvKi1UIiUoEi03Yg+kguAAAAIDT6ItN/ItVxIuMisQAAAALyItV/ItF
xImMkMQAAACLTeSLVdCJEYtF5ANF0ItN0IlI/ItV+IsCg+gBi034iQGLVfiDOgAPhWEBAACDPbg3
QgAAD4RDAQAAobA3QgDB4A+LDbg3QgCLUQwD0IlV6GgAQAAAaACAAACLRehQ/xW0UUIAugAAAICL
DbA3QgDT6qG4N0IAi0gIC8qLFbg3QgCJSgihuDdCAItIEIsVsDdCAMeEkcQAAAAAAAAAobg3QgCL
SBCKUUOA6gGhuDdCAItIEIhRQ4sVuDdCAItCEA++SEOFyXUUixW4N0IAi0IEJP6LDbg3QgCJQQSL
Fbg3QgCDegj/D4WSAAAAaACAAABqAKG4N0IAi0gMUf8VtFFCAIsVuDdCAItCEFBqAIsNdDpCAFH/
FbBRQgCLFbw3QgBr0hShwDdCAAPCiw24N0IAg8EUK8FQixW4N0IAg8IUUqG4N0IAUOiKJwAAg8QM
iw28N0IAg+kBiQ28N0IAi1UIOxW4N0IAdgmLRQiD6BSJRQiLDcA3QgCJDbQ3QgCLVQiJFbg3QgCL
RfyjsDdCAIvlXcPMzMxVi+yD7DhWobw3QgBrwBSLDcA3QgADyIlN1ItVCIPCF4Pi8IlV2ItF2MH4
BIPoAYlF4IN94CB9FIPK/4tN4NPqiVXcx0XM/////+sVx0XcAAAAAItN4IPpIIPI/9PoiUXMiw20
N0IAiU3oi1XoO1XUcySLReiLTdwjCItV6ItFzCNCBAvIhcl0AusLi03og8EUiU3o69SLVeg7VdQP
hdsAAAChwDdCAIlF6ItN6DsNtDdCAHMki1Xoi0XcIwKLTeiLVcwjUQQLwoXAdALrC4tF6IPAFIlF
6OvRi03oOw20N0IAD4WVAAAAi1XoO1XUcxaLReiDeAgAdALrC4tN6IPBFIlN6Ovii1XoO1XUdUmh
wDdCAIlF6ItN6DsNtDdCAHMWi1Xog3oIAHQC6wuLReiDwBSJRejr34tN6DsNtDdCAHUV6PkDAACJ
ReiDfegAdQczwOnaAwAAi1XoUujwBAAAg8QEi03oi1EQiQKLReiLSBCDOf91BzPA6bQDAACLVeiJ
FbQ3QgCLReiLSBCJTciLVciLAolF0IN90P90I4tN0ItVyItF3CNEikSLTdCLVciLdcwjtIrEAAAA
C8aFwHU1x0XQAAAAAItF0ItNyItV3CNUgUSLRdCLTciLdcwjtIHEAAAAC9aF0nULi1XQg8IBiVXQ
69KLRdBpwAQCAACLTciNlAFEAQAAiVX8x0XgAAAAAItF0ItNyItV3CNUgUSJVeSDfeQAdRrHReAg
AAAAi0XQi03Ii1XMI5SBxAAAAIlV5IN95AB8E4tF5NHgiUXki03gg8EBiU3g6+eLVeCLRfyLTNAE
iU3wi1XwiwIrRdiJRfiLTfjB+QSD6QGJTeyDfew/fgfHRew/AAAAi1XsO1XgD4QYAgAAi0Xwi03w
i1AEO1EID4XQAAAAg33gIH1fuAAAAICLTeDT6PfQi03Qi1XIi0yKRCPIi1XQi0XIiUyQRItNyANN
4IpRBIDqAYtFyANF4IhQBItNyANN4A++UQSF0nUYuAAAAICLTeDT6PfQi03oixEj0ItF6IkQ62uL
TeCD6SC6AAAAgNPq99KLRdCLTciLhIHEAAAAI8KLTdCLVciJhIrEAAAAi0XIA0XgikgEgOkBi1XI
A1XgiEoEi0XIA0XgD75IBIXJdR2LTeCD6SC6AAAAgNPq99KLReiLSAQjyotV6IlKBItF8ItICItV
8ItCBIlBBItN8ItRBItF8ItICIlKCIN9+AAPhA4BAACLVeyLRfyNDNCJTfSLVfCLRfSLSASJSgSL
VfCLRfSJQgiLTfSLVfCJUQSLRfCLSASLVfCJUQiLRfCLTfCLUAQ7UQgPhcYAAACDfewgfVqLRcgD
RewPvkgEi1XIA1XsikIEBAGLVcgDVeyIQgSFyXUWuAAAAICLTezT6ItN6IsRC9CLReiJELoAAACA
i03s0+qLRdCLTciLRIFEC8KLTdCLVciJRIpE62aLRcgDRewPvkgEi1XIA1XsikIEBAGLVcgDVeyI
QgSFyXUbi03sg+kguAAAAIDT6ItN6ItRBAvQi0XoiVAEi03sg+kgugAAAIDT6otF0ItNyIuEgcQA
AAALwotN0ItVyImEisQAAACDffgAdBSLRfCLTfiJCItV8ANV+ItF+IlC/ItN8ANN+IlN8ItV2IPC
AYtF8IkQi03Yg8EBi1XwA1XYiUr8i0X8iwiLVfyLAoPAAYtV/IkChcl1IItF6DsFuDdCAHUVi03Q
Ow2wN0IAdQrHBbg3QgAAAAAAi1XIi0XQiQKLRfCDwARei+Vdw8zMzMzMzMzMzMxVi+xRobw3QgA7
BaA3QgB1SosNoDdCAIPBEGvJFFGLFcA3QgBSagChdDpCAFD/FeRRQgCJRfyDffwAdQczwOnIAAAA
i038iQ3AN0IAixWgN0IAg8IQiRWgN0IAobw3QgBrwBSLDcA3QgADyIlN/GjEQQAAagiLFXQ6QgBS
/xXcUUIAi038iUEQi1X8g3oQAHUEM8DrdmoEaAAgAABoAAAQAGoA/xXgUUIAi038iUEMi1X8g3oM
AHUai0X8i0gQUWoAixV0OkIAUv8VsFFCADPA6zmLRfzHAAAAAACLTfzHQQQAAAAAi1X8x0II////
/6G8N0IAg8ABo7w3QgCLTfyLURDHAv////+LRfyL5V3DzFWL7IPsLItFCItIEIlN1ItVCItCCIlF
+MdF2AAAAACDffgAfBOLTfjR4YlN+ItV2IPCAYlV2Ovni0XYacAEAgAAi03UjZQBRAEAAIlV9MdF
4AAAAADrCYtF4IPAAYlF4IN94D99IItN4ItV9I0EyolF6ItN6ItV6IlRCItF6ItN6IlIBOvRi1XY
weIPi0UIi0gMA8qJTfBqBGgAEAAAaACAAACLVfBS/xXgUUIAhcB1CIPI/+kxAQAAi0XwBQBwAACJ
ReSLTfCJTfzrDItV/IHCABAAAIlV/ItF/DtF5Hddi038x0EI/////4tV/MeC/A8AAP////+LRfyD
wAyJReiLTejHAfAPAACLVeiBwgAQAACLReiJUASLTeiB6QAQAACLVeiJSgiLRegF7A8AAIlF3ItN
3McB8A8AAOuPi1X0gcL4AQAAiVXsi0Xwg8AMi03siUEEi1Xsi0IEiUXoi03oi1XsiVEIi0Xkg8AM
i03siUEIi1Xsi0IIiUXoi03oi1XsiVEEi0XYi03Ux0SBRAAAAACLVdiLRdTHhJDEAAAAAQAAAItN
1A++UUOLRdSKSEOAwQGLRdSISEOF0nUPi00Ii1EEg8oBi0UIiVAEugAAAICLTdjT6vfSi0UIi0gI
I8qLVQiJSgiLRdiL5V3DzMxVi+yD7DCLRRCDwBck8IlF5ItNCItREIlV0ItFCItNDCtIDIlN9ItV
9MHqD4lV/ItF/GnABAIAAItN0I2UAUQBAACJVfiLRQyD6ASJReyLTeyLEYPqAYlV2ItF7ANF2IlF
1ItN1IsRiVXwi0XkO0XYD46wAgAAi03wg+EBhcl1C4tV2ANV8DlV5H4HM8DpVQUAAItF8MH4BIPo
AYlF4IN94D92B8dF4D8AAACLTdSLVdSLQQQ7QggPhdAAAACDfeAgc1+6AAAAgItN4NPq99KLRfyL
TdCLRIFEI8KLTfyLVdCJRIpEi0XQA0XgikgEgOkBi1XQA1XgiEoEi0XQA0XgD75IBIXJdRi6AAAA
gItN4NPq99KLRQiLCCPKi1UIiQrra4tN4IPpILgAAACA0+j30ItN/ItV0IuMisQAAAAjyItV/ItF
0ImMkMQAAACLTdADTeCKUQSA6gGLRdADReCIUASLTdADTeAPvlEEhdJ1HYtN4IPpILgAAACA0+j3
0ItNCItRBCPQi0UIiVAEi03Ui1EIi0XUi0gEiUoEi1XUi0IEi03Ui1EIiVAIi0XYA0XwK0XkiUXw
g33wAA+ORgEAAItN7ANN5IlN1ItV8MH6BIPqAYlV4IN94D92B8dF4D8AAACLReCLTfiNFMGJVeiL
RdSLTeiLUQSJUASLRdSLTeiJSAiLVeiLRdSJQgSLTdSLUQSLRdSJQgiLTdSLVdSLQQQ7QggPhcgA
AACDfeAgc1uLTdADTeAPvlEEi0XQA0XgikgEgMEBi0XQA0XgiEgEhdJ1FroAAACAi03g0+qLRQiL
CAvKi1UIiQq4AAAAgItN4NPoi038i1XQi0yKRAvIi1X8i0XQiUyQROtni03QA03gD75RBItF0ANF
4IpIBIDBAYtF0ANF4IhIBIXSdRuLTeCD6SC6AAAAgNPqi0UIi0gEC8qLVQiJSgSLTeCD6SC4AAAA
gNPoi038i1XQi4yKxAAAAAvIi1X8i0XQiYyQxAAAAItN1ItV8IkRi0XUA0Xwi03wiUj8i1Xkg8IB
i0XsiRCLTeSDwQGLVewDVeSJSvzpvAIAAItF5DtF2A+NsAIAAItN5IPBAYtV7IkKi0Xkg8ABi03s
A03kiUH8i1XsA1XkiVXsi0XYK0XkiUXYi03YwfkEg+kBiU3cg33cP3YHx0XcPwAAAItV8IPiAYXS
D4U7AQAAi0XwwfgEg+gBiUXgg33gP3YHx0XgPwAAAItN1ItV1ItBBDtCCA+F0AAAAIN94CBzX7oA
AACAi03g0+r30otF/ItN0ItEgUQjwotN/ItV0IlEikSLRdADReCKSASA6QGLVdADVeCISgSLRdAD
ReAPvkgEhcl1GLoAAACAi03g0+r30otFCIsII8qLVQiJCutri03gg+kguAAAAIDT6PfQi038i1XQ
i4yKxAAAACPIi1X8i0XQiYyQxAAAAItN0ANN4IpRBIDqAYtF0ANF4IhQBItN0ANN4A++UQSF0nUd
i03gg+kguAAAAIDT6PfQi00Ii1EEI9CLRQiJUASLTdSLUQiLRdSLSASJSgSLVdSLQgSLTdSLUQiJ
UAiLRdgDRfCJRdiLTdjB+QSD6QGJTdyDfdw/dgfHRdw/AAAAi1Xci0X4jQzQiU3oi1Xsi0Xoi0gE
iUoEi1Xsi0XoiUIIi03oi1XsiVEEi0Xsi0gEi1XsiVEIi0Xsi03si1AEO1EID4XGAAAAg33cIHNa
i0XQA0XcD75IBItV0ANV3IpCBAQBi1XQA1XciEIEhcl1FrgAAACAi03c0+iLTQiLEQvQi0UIiRC6
AAAAgItN3NPqi0X8i03Qi0SBRAvCi038i1XQiUSKROtmi0XQA0XcD75IBItV0ANV3IpCBAQBi1XQ
A1XciEIEhcl1G4tN3IPpILgAAACA0+iLTQiLUQQL0ItFCIlQBItN3IPpILoAAACA0+qLRfyLTdCL
hIHEAAAAC8KLTfyLVdCJhIrEAAAAi0Xsi03YiQiLVewDVdiLRdiJQvy4AQAAAIvlXcPMzMzMzFWL
7FGDPbg3QgAAD4QbAQAAobA3QgDB4A+LDbg3QgCLUQwD0IlV/GgAQAAAaACAAACLRfxQ/xW0UUIA
ugAAAICLDbA3QgDT6qG4N0IAi0gIC8qLFbg3QgCJSgihuDdCAItIEIsVsDdCAMeEkcQAAAAAAAAA
obg3QgCLSBCKUUOA6gGhuDdCAItIEIhRQ4sVuDdCAItCEA++SEOFyXUUixW4N0IAi0IEJP6LDbg3
QgCJQQSLFbg3QgCDegj/dWSDPbw3QgABfluhuDdCAItIEFFqAIsVdDpCAFL/FbBRQgChvDdCAGvA
FIsNwDdCAAPIixW4N0IAg8IUK8pRobg3QgCDwBRQiw24N0IAUegAGAAAg8QMixW8N0IAg+oBiRW8
N0IAxwW4N0IAAAAAAIvlXcNVi+yB7GgBAAChvDdCAGvAFFCLDcA3QgBR/xW8UUIAhcB0CIPI/+nu
BQAAixXAN0IAiZXE/v//x4Xg/v//AAAAAOsPi4Xg/v//g8ABiYXg/v//i43g/v//Ow28N0IAD42z
BQAAi5XE/v//i0IQiYWg/v//aMRBAACLjaD+//9R/xW8UUIAhcB0Crj+////6YYFAACLlcT+//+L
QgyJhdj+//+LjaD+//+BwUQBAACJTeiLlcT+//+LQgiJRfzHhbz+//8AAAAAx4Wo/v//AAAAAMdF
9AAAAADrCYtN9IPBAYlN9IN99CAPje4EAADHheT+//8AAAAAx4Ww/v//AAAAAMeF1P7//wAAAADH
hbT+//8AAAAA6w+LlbT+//+DwgGJlbT+//+DvbT+//9AfROLhbT+///HhIXo/v//AAAAAOvVg338
AA+MMQQAAGgAgAAAi43Y/v//Uf8VvFFCAIXAdAq4/P///+mtBAAAi5XY/v//iVX4x4XA/v//AAAA
AOsPi4XA/v//g8ABiYXA/v//g73A/v//CA+NdwEAAItN+IPBDImN0P7//4uV0P7//4HC8A8AAImV
yP7//4uF0P7//4N4/P91C4uNyP7//4M5/3QKuPv////pPQQAAIuV0P7//4sCiYW4/v//i424/v//
iY2s/v//i5Ws/v//g+IBhdJ0NouFuP7//4PoAYmFuP7//4G9uP7//wAEAAB+Crj6////6fEDAACL
jdT+//+DwQGJjdT+///rQouVuP7//8H6BIPqAYmVtP7//4O9tP7//z9+CseFtP7//z8AAACLhbT+
//+LjIXo/v//g8EBi5W0/v//iYyV6P7//4O9uP7//xB8GYuFuP7//4PgD4XAdQyBvbj+///wDwAA
fgq4+f///+lyAwAAi43Q/v//A424/v//i1H8O5Ws/v//dAq4+P///+lRAwAAi4XQ/v//A4W4/v//
iYXQ/v//i43Q/v//O43I/v//D4Lw/v//i5XQ/v//O5XI/v//dAq4+P///+kVAwAAi0X4BQAQAACJ
Rfjpbf7//4tN6IsRO5XU/v//dAq49////+nuAgAAi0XoiYXM/v//x0XsAAAAAOsJi03sg8EBiU3s
g33sQA+NLQIAAMeFmP7//wAAAACLlcz+//+JldD+//+LhdD+//+LSASJjaT+//+LlaT+//87lcz+
//8PhCMBAACLReyLjZj+//87jIXo/v//D4QNAQAAi5Wk/v//O5XY/v//chOLhdj+//8FAIAAADmF
pP7//3IKuPb////pUQIAAIuNpP7//4HhAPD//4mNnP7//4uVnP7//4PCDIlV8ItF8AXwDwAAiYXc
/v//i03wO43c/v//dB+LVfA7laT+//91AusSi0XwiwiD4f6LVfAD0YlV8OvWi0XwO4Xc/v//dQq4
9f///+nmAQAAi42k/v//ixHB+gSD6gGJlbT+//+DvbT+//8/fgrHhbT+//8/AAAAi4W0/v//O0Xs
dAq49P///+mqAQAAi42k/v//i1EIO5XQ/v//dAq48////+mPAQAAi4Wk/v//iYXQ/v//i42Y/v//
g8EBiY2Y/v//6bz+//+DvZj+//8AdG6DfewgfTK6AAAAgItN7NPqi4Xk/v//C8KJheT+//+6AAAA
gItN7NPqi4W8/v//C8KJhbz+///rNotN7IPpILoAAACA0+qLhbD+//8LwomFsP7//4tN7IPpILoA
AACA0+qLhaj+//8LwomFqP7//4uN0P7//4tRBDuVzP7//3USi0Xsi42Y/v//O4yF6P7//3QKuPL/
///pywAAAIuVzP7//4tCCDuF0P7//3QKuPH////psAAAAIuNzP7//4PBCImNzP7//+nA/f//i1X0
i4Wg/v//i43k/v//O0yQRHUYi1X0i4Wg/v//i42w/v//O4yQxAAAAHQHuPD////raIuV2P7//4HC
AIAAAImV2P7//4tF6AUEAgAAiUXoi0380eGJTfzp//r//4uVxP7//4uFvP7//zsCdRGLjcT+//+L
laj+//87UQR0B7jv////6xaLhcT+//+DwBSJhcT+///pLPr//zPAi+Vdw8zMzFWL7FGhdDdCAIlF
/ItNCIkNdDdCAItF/IvlXcPMzMzMVYvsoXQ3QgBdw8zMzMzMzFWL7FGhdDdCAIlF/IN9/AB0DotN
CFH/VfyDxASFwHUEM8DrBbgBAAAAi+Vdw8zMzItUJAyLTCQEhdJ0RzPAikQkCFeL+YP6BHIt99mD
4QN0CCvRiAdHSXX6i8jB4AgDwYvIweAQA8GLyoPiA8HpAnQG86uF0nQGiAdHSnX6i0QkCF/Di0Qk
BMPMzMzMzMzMzFWL7KFwN0IAUItNCFHoDgAAAIPECF3DzMzMzMzMzMzMVYvsUYN9COB2BDPA60WD
fQjgdxGLRQhQ6EMAAACDxASJRfzrB8dF/AAAAACDffwAdQaDfQwAdQWLRfzrFotNCFHoCv///4PE
BIXAdQQzwOsC67uL5V3DzMzMzMzMzMzMVYvsUYtFCDsFcDFCAHcai00IUego6f//g8QEiUX8g338
AHQFi0X86yyDfQgAdQfHRQgBAAAAi1UIg8IPg+LwiVUIi0UIUGoAiw10OkIAUf8V3FFCAIvlXcPM
zMzMzMzMVYvsuAEAAABdw8zMzMzMzFWL7IPsCIN9DOB2BDPA63iLRQhQ6Cfi//+DxASJRfiDffgA
dDXHRfwAAAAAi00MOw1wMUIAdx6LVQxSi0UIUItN+FHoyPD//4PEDIXAdAaLVQiJVfyLRfzrLoN9
DAB1B8dFDAEAAACLRQyDwA8k8IlFDItNDFGLVQhSahChdDpCAFD/FeRRQgCL5V3DzMzMzFWL7IPs
FIN9CAB1EYtFDFDoa/7//4PEBOmrAQAAg30MAHUTi00IUeikAQAAg8QEM8DpkgEAAMdF+AAAAACD
fQzgD4dUAQAAi1UIUuhg4f//g8QEiUX0g330AA+ECAEAAItFDDsFcDFCAHd7i00MUYtVCFKLRfRQ
6ATw//+DxAyFwHQIi00IiU3461uLVQxS6Kzn//+DxASJRfiDffgAdEaLRQiLSPyD6QGJTfyLVfw7
VQxzCItF/IlF8OsGi00MiU3wi1XwUotFCFCLTfhR6A3d//+DxAyLVQhSi0X0UOiN4f//g8QIg334
AHV6g30MAHUHx0UMAQAAAItNDIPBD4Ph8IlNDItVDFJqAKF0OkIAUP8V3FFCAIlF+IN9+AB0RotN
CItR/IPqAYlV/ItF/DtFDHMIi038iU3s6waLVQyJVeyLRexQi00IUYtV+FLojdz//4PEDItFCFCL
TfRR6A3h//+DxAjrM4N9DAB1B8dFDAEAAACLVQyDwg+D4vCJVQyLRQxQi00IUWoAixV0OkIAUv8V
5FFCAIlF+IN9+AB1CYM9cDdCAAB1BYtF+OsZi0UMUOg4/P//g8QEhcB1BDPA6wXpbv7//4vlXcPM
zMzMVYvsUYN9CAB1Aus6i0UIUOjL3///g8QEiUX8g338AHQSi00IUYtV/FLocuD//4PECOsTi0UI
UGoAiw10OkIAUf8VsFFCAIvlXcPMzMzMzMxVi+xRx0X8/v///+hw9f//hcB9B8dF/Pz///9qAGoA
oXQ6QgBQ/xXEUUIAhcB1KP8VyFFCAIP4eHUWxwXcNUIAeAAAAMcF2DVCACgAAADrB8dF/Pz///+L
RfyL5V3DzMxVi+zomP///13DzMzMzMzMVYvsg+wwU1ZXjUXgiUXcjU0QiU3Ug30IAHUeaCgPQgBq
AGpdaBwPQgBqAujAhf//g8QUg/gBdQHMM9KF0nXWg30MAHUeaKAAQgBqAGpeaBwPQgBqAuiWhf//
g8QUg/gBdQHMM8CFwHXWi03cx0EMQgAAAItV3ItFCIlCCItN3ItVCIkRi0Xcx0AE////f4tN1FGL
VQxSi0XcUOhRdP//g8QMiUXYi03ci1EEg+oBi0XciVAEi03cg3kEAHwii1XciwLGAAAzyYHh/wAA
AIlN0ItV3IsCg8ABi03ciQHrEYtV3FJqAOh2w///g8QIiUXQi0XYX15bi+Vdw8zMzMzMzFWL7IPs
DItFCIPAAT0AAQAAdxeLTQiLFWAuQgAzwGaLBEojRQzpiQAAAItNCMH5CIHh/wAAAIHh/wAAAIsV
YC5CADPAZosESiUAgAAAhcB0IotNCMH5CIHh/wAAAIhN9IpVCIhV9cZF9gDHRfgCAAAA6xGKRQiI
RfTGRfUAx0X4AQAAAGoBagBqAI1N/FGLVfhSjUX0UGoB6JMJAACDxByFwHUEM8DrC4tF/CX//wAA
I0UMi+Vdw8zMzMzMzMzMzFWL7FGLRQg7Bbw7QgBzH4tNCMH5BYtVCIPiH4sEjYA6QgAPvkzQBIPh
AYXJdQ/HBdg1QgAJAAAAg8j/622LVQjB+gWLRQiD4B+LDJWAOkIAD75UwQSD4gGF0nQ6i0UIUOi7
EAAAg8QEUP8V6FFCAIXAdQv/FchRQgCJRfzrB8dF/AAAAACDffwAdQLrGotN/IkN3DVCAMcF2DVC
AAkAAADHRfz/////i0X8i+Vdw8zMVYvsgewgBAAAi0UIOwW8O0IAcx+LTQjB+QWLVQiD4h+LBI2A
OkIAD75M0ASD4QGFyXUcxwXYNUIACQAAAMcF3DVCAAAAAACDyP/pTwIAAMdF8AAAAACLVfCJleD7
//+DfRAAdQczwOkyAgAAi0UIwfgFi00Ig+EfixSFgDpCAA++RMoEg+AghcB0EGoCagCLTQhR6CgC
AACDxAyLVQjB+gWLRQiD4B+LDJWAOkIAD75UwQSB4oAAAACF0g+ECAEAAItFDIlF/MdF9AAAAACL
TfwrTQw7TRAPg+oAAACNlez7//+JVfiLRfiNjez7//8rwT0ABAAAfV+LVfwrVQw7VRBzVItF/IoI
iI3k+///i1X8g8IBiVX8D76F5Pv//4P4CnUei43g+///g8EBiY3g+///i1X4xgINi0X4g8ABiUX4
i034ipXk+///iBGLRfiDwAGJRfjrj2oAjY3o+///UYtV+I2F7Pv//yvQUo2N7Pv//1GLVQjB+gWL
RQiD4B+LDJWAOkIAixTBUv8VZFFCAIXAdCOLRfADhej7//+JRfCLTfiNlez7//8ryjmN6Pv//30C
6xLrC/8VyFFCAIlF9OsF6Qf////rTWoAjYXo+///UItNEFGLVQxSi0UIwfgFi00Ig+EfixSFgDpC
AIsEylD/FWRRQgCFwHQSx0X0AAAAAIuN6Pv//4lN8OsJ/xXIUUIAiUX0g33wAHV5g330AHQsg330
BXUVxwXYNUIACQAAAItV9IkV3DVCAOsMi0X0UOh6DwAAg8QEg8j/61CLTQjB+QWLVQiD4h+LBI2A
OkIAD75M0ASD4UCFyXQPi1UMD74Cg/gadQQzwOsixwXYNUIAHAAAAMcF3DVCAAAAAACDyP/rCYtF
8CuF4Pv//4vlXcPMzMzMzMzMzMzMzMzMzFWL7GoC6EZt//+DxARdw8xVi+yD7AyLRQg7Bbw7QgBz
H4tNCMH5BYtVCIPiH4sEjYA6QgAPvkzQBIPhAYXJdRzHBdg1QgAJAAAAxwXcNUIAAAAAAIPI/+me
AAAAi1UIUuhbDQAAg8QEiUX0g330/3UPxwXYNUIACQAAAIPI/+t6i0UQUGoAi00MUYtV9FL/FexR
QgCJRfiDffj/dQv/FchRQgCJRfzrB8dF/AAAAACDffwAdBGLRfxQ6FIOAACDxASDyP/rNItNCMH5
BYtVCIPiH4sEjYA6QgCKTNAEgOH9i1UIwfoFi0UIg+AfixSVgDpCAIhMwgSLRfiL5V3DzMxVi+xR
U1ZXg30IAHUeaLgBQgBqAGouaHQPQgBqAuiuf///g8QUg/gBdQHMM8CFwHXWiw3QNUIAg8EBiQ3Q
NUIAi1UIiVX8ajtodA9CAGoCaAAQAADoJ5v//4PEEItN/IlBCItV/IN6CAB0G4tF/ItIDIPJCItV
/IlKDItF/MdAGAAQAADrJYtN/ItRDIPKBItF/IlQDItN/IPBFItV/IlKCItF/MdAGAIAAACLTfyL
VfyLQgiJAYtN/MdBBAAAAABfXluL5V3DzMzMzMzMzMzMVYvsg+wIU1ZXx0X8/////4tFCIlF+ItN
+ItRDIPiQIXSdBKLRfjHQAwAAAAAg8j/6aEAAACDfQgAdR5ouAFCAGoAandogA9CAGoC6LB+//+D
xBSD+AF1AcwzyYXJddaLVfiLQgwlgwAAAIXAdFuLTfhR6Dm5//+DxASJRfyLVfhS6DoOAACDxASL
RfiLSBBR6DsNAACDxASFwH0Jx0X8/////+ski1X4g3ocAHQbagKLRfiLSBxR6DSk//+DxAiLVfjH
QhwAAAAAi0X4x0AMAAAAAItF/F9eW4vlXcPMzMxVi+xq/2iYD0IAaHRAQABkoQAAAABQZIklAAAA
AIPE3FNWV4ll6IM9mDdCAAB1V2oAagBqAWiQD0IAaAABAABqAP8V+FFCAIXAdAzHBZg3QgABAAAA
6y9qAGoAagFojA9CAGgAAQAAagD/FfRRQgCFwHQMxwWYN0IAAgAAAOsHM8DpawIAAIN9FAB+E4tF
FFCLTRBR6HcCAACDxAiJRRSDPZg3QgACdSOLVRxSi0UYUItNFFGLVRBSi0UMUItNCFH/FfRRQgDp
JgIAAIM9mDdCAAEPhRcCAACDfSAAdQmLFZA3QgCJVSBqAGoAi0UUUItNEFGLVST32hvSg+IIg8IB
UotFIFD/FfBRQgCJReSDfeQAdQczwOnWAQAAx0X8AAAAAItF5NHgg8ADJPzob8T//4ll0Ill6ItN
0IlN3MdF/P/////rF7gBAAAAw4tl6MdF3AAAAADHRfz/////g33cAHUHM8DphwEAAItV5FKLRdxQ
i00UUYtVEFJqAYtFIFD/FfBRQgCFwHUHM8DpYAEAAGoAagCLTeRRi1XcUotFDFCLTQhR/xX4UUIA
iUXYg33YAHUHM8DpNgEAAItVDIHiAAQAAIXSdEODfRwAdDiLRdg7RRx+BzPA6RQBAACLTRxRi1UY
UotF5FCLTdxRi1UMUotFCFD/FfhRQgCFwHUHM8Dp6wAAAOnfAAAAi03YiU3Ux0X8AQAAAItF1NHg
g8ADJPzoecP//4llzIll6ItVzIlV4MdF/P/////rF7gBAAAAw4tl6MdF4AAAAADHRfz/////g33g
AHUHM8DpkQAAAItF1FCLTeBRi1XkUotF3FCLTQxRi1UIUv8V+FFCAIXAdQQzwOtrg30cAHUuagBq
AGoAagCLRdRQi03gUWggAgAAi1UgUv8VkFFCAIlF2IN92AB1BDPA6znrMGoAagCLRRxQi00YUYtV
1FKLReBQaCACAACLTSBR/xWQUUIAiUXYg33YAHUEM8DrB4tF2OsCM8CNZcCLTfBkiQ0AAAAAX15b
i+Vdw8zMzMzMzMzMzMzMVYvsg+wIi0UMiUX4i00IiU38i1X4i0X4g+gBiUX4hdJ0FYtN/A++EYXS
dAuLRfyDwAGJRfzr24tN/A++EYXSdQiLRfwrRQjrA4tFDIvlXcNVi+xq/2iwD0IAaHRAQABkoQAA
AABQZIklAAAAAIPE5FNWV4ll6IM9nDdCAAB1T41F5FBqAWiQD0IAagH/FQBSQgCFwHQMxwWcN0IA
AQAAAOssjU3kUWoBaIwPQgBqAWoA/xX8UUIAhcB0DMcFnDdCAAIAAADrBzPA6SoBAACDPZw3QgAC
dS6DfRwAdQmLFYA3QgCJVRyLRRRQi00QUYtVDFKLRQhQi00cUf8V/FFCAOnzAAAAgz2cN0IAAQ+F
5AAAAIN9GAB1CYsVkDdCAIlVGGoAagCLRRBQi00MUYtVIPfaG9KD4giDwgFSi0UYUP8V8FFCAIlF
4IN94AB1BzPA6aMAAADHRfwAAAAAi0Xg0eCDwAMk/Oglwf//iWXUiWXoi03UiU3ci1Xg0eJSagCL
RdxQ6Cjv//+DxAzHRfz/////6xe4AQAAAMOLZejHRdwAAAAAx0X8/////4N93AB1BDPA60OLTeBR
i1XcUotFEFCLTQxRagGLVRhS/xXwUUIAiUXYg33YAHUEM8DrGotFFFCLTdhRi1XcUotFCFD/FQBS
QgDrAjPAjWXIi03wZIkNAAAAAF9eW4vlXcPMzMzMVYvsV1aLdQyLTRCLfQiLwYvRA8Y7/nYIO/gP
gngBAAD3xwMAAAB1FMHpAoPiA4P5CHIp86X/JJVosEAAi8e6AwAAAIPpBHIMg+ADA8j/JIWAr0AA
/ySNeLBAAJD/JI38r0AAkJCvQAC8r0AA4K9AACPRigaIB4pGAYhHAYpGAsHpAohHAoPGA4PHA4P5
CHLM86X/JJVosEAAjUkAI9GKBogHikYBwekCiEcBg8YCg8cCg/kIcqbzpf8klWiwQACQI9GKBogH
RsHpAkeD+QhyjPOl/ySVaLBAAI1JAF+wQABMsEAARLBAADywQAA0sEAALLBAACSwQAAcsEAAi0SO
5IlEj+SLRI7oiUSP6ItEjuyJRI/si0SO8IlEj/CLRI70iUSP9ItEjviJRI/4i0SO/IlEj/yNBI0A
AAAAA/AD+P8klWiwQACL/3iwQACAsEAAjLBAAKCwQACLRQheX8nDkIoGiAeLRQheX8nDkIoGiAeK
RgGIRwGLRQheX8nDjUkAigaIB4pGAYhHAYpGAohHAotFCF5fycOQjXQx/I18Ofz3xwMAAAB1JMHp
AoPiA4P5CHIN/fOl/P8klQCyQACL//fZ/ySNsLFAAI1JAIvHugMAAACD+QRyDIPgAyvI/ySFCLFA
AP8kjQCyQACQGLFAADixQABgsUAAikYDI9GIRwNOwekCT4P5CHK2/fOl/P8klQCyQACNSQCKRgMj
0YhHA4pGAsHpAohHAoPuAoPvAoP5CHKM/fOl/P8klQCyQACQikYDI9GIRwOKRgKIRwKKRgHB6QKI
RwGD7gOD7wOD+QgPglr////986X8/ySVALJAAI1JALSxQAC8sUAAxLFAAMyxQADUsUAA3LFAAOSx
QAD3sUAAi0SOHIlEjxyLRI4YiUSPGItEjhSJRI8Ui0SOEIlEjxCLRI4MiUSPDItEjgiJRI8Ii0SO
BIlEjwSNBI0AAAAAA/AD+P8klQCyQACL/xCyQAAYskAAKLJAADyyQACLRQheX8nDkIpGA4hHA4tF
CF5fycONSQCKRgOIRwOKRgKIRwKLRQheX8nDkIpGA4hHA4pGAohHAopGAYhHAYtFCF5fycPMzMzM
zMzMzMzMzFWL7IPsDMdF+P/////HRfQAAAAA6wmLRfSDwAGJRfSDffRAD439AAAAi030gzyNgDpC
AAB0b4tV9IsElYA6QgCJRfzrCYtN/IPBCIlN/ItV9IsElYA6QgAFAAEAADlF/HM2i038D75RBIPi
AYXSdSaLRfzHAP////+LTfTB4QWLVfSLRfwrBJWAOkIAwfgDA8iJTfjrAuutg334/3QF6YMAAADr
fGp5aLwPQgBqAmgAAQAA6LiQ//+DxBCJRfyDffwAdFuLTfSLVfyJFI2AOkIAobw7QgCDwCCjvDtC
AOsJi038g8EIiU38i1X0iwSVgDpCAAUAAQAAOUX8cxmLTfzGQQQAi1X8xwL/////i0X8xkAFCuvK
i030weEFiU346wXp8P7//4tF+IvlXcPMzMxVi+xRi0UIOwW8O0IAD4OBAAAAi00IwfkFi1UIg+If
iwSNgDpCAIM80P91aIM9NCpCAAF1QotNCIlN/IN9/AB0DoN9/AF0FoN9/AJ0Husoi1UMUmr2/xUE
UkIA6xqLRQxQavX/FQRSQgDrDItNDFFq9P8VBFJCAItVCMH6BYtFCIPgH4sMlYA6QgCLVQyJFMEz
wOsXxwXYNUIACQAAAMcF3DVCAAAAAACDyP+L5V3DzFWL7FGLRQg7Bbw7QgAPg5sAAACLTQjB+QWL
VQiD4h+LBI2AOkIAD75M0ASD4QGFyXR8i1UIwfoFi0UIg+AfiwyVgDpCAIM8wf90Y4M9NCpCAAF1
PItVCIlV/IN9/AB0DoN9/AF0FIN9/AJ0GusiagBq9v8VBFJCAOsWagBq9f8VBFJCAOsKagBq9P8V
BFJCAItFCMH4BYtNCIPhH4sUhYA6QgDHBMr/////M8DrF8cF2DVCAAkAAADHBdw1QgAAAAAAg8j/
i+Vdw8zMzMzMzMxVi+yLRQg7Bbw7QgBzN4tNCMH5BYtVCIPiH4sEjYA6QgAPvkzQBIPhAYXJdBiL
VQjB+gWLRQiD4B+LDJWAOkIAiwTB6xfHBdg1QgAJAAAAxwXcNUIAAAAAAIPI/13DzMxVi+yD7AzG
RfQAi0UMg+AIhcB0CYpN9IDJIIhN9ItVDIHiAEAAAIXSdAiKRfQMgIhF9ItNDIHhgAAAAIXJdAmK
VfSAyhCIVfSLRQhQ/xWgUUIAiUX8g338AHUU/xXIUUIAUOiJAAAAg8QEg8j/632DffwCdQuKTfSA
yUCITfTrD4N9/AN1CYpV9IDKCIhV9Ohc/P//iUX4g334/3UZxwXYNUIAGAAAAMcF3DVCAAAAAACD
yP/rNotFCFCLTfhR6F39//+DxAiKVfSAygGIVfSLRfjB+AWLTfiD4R+LFIWAOkIAikX0iETKBItF
+IvlXcNVi+xRi0UIo9w1QgDHRfwAAAAA6wmLTfyDwQGJTfyDffwtcyOLVfyLRQg7BNWQMUIAdRKL
TfyLFM2UMUIAiRXYNUIA60LrzoN9CBNyEoN9CCR3DMcF2DVCAA0AAADrKIF9CLwAAAByFYF9CMoA
AAB3DMcF2DVCAAgAAADrCscF2DVCABYAAACL5V3DzMzMzMxVi+xRVotFCDsFvDtCAHMfi00IwfkF
i1UIg+IfiwSNgDpCAA++TNAEg+EBhcl1HMcF2DVCAAkAAADHBdw1QgAAAAAAg8j/6Z0AAACLVQhS
6Mz9//+DxASD+P90PYN9CAF0BoN9CAJ1GmoB6LH9//+DxASL8GoC6KX9//+DxAQ78HQXi0UIUOiV
/f//g8QEUP8VCFJCAIXAdAnHRfwAAAAA6wn/FchRQgCJRfyLTQhR6Jz8//+DxASLVQjB+gWLRQiD
4B+LDJWAOkIAxkTBBACDffwAdBGLVfxS6JL+//+DxASDyP/rAjPAXovlXcPMzMxVi+xTVleDfQgA
dR5o1A9CAGoAajBoyA9CAGoC6B9w//+DxBSD+AF1AcwzwIXAddaLTQiLUQyB4oMAAACF0nRNi0UI
i0gMg+EIhcl0QGoCi1UIi0IIUOjVlf//g8QIi00Ii1EMgeL3+///i0UIiVAMi00IxwEAAAAAi1UI
x0IIAAAAAItFCMdABAAAAABfXltdw/8lUFFCAP8lVFFCAP8lWFFCAP8lXFFCAP8lYFFCAP8lZFFC
AP8laFFCAP8lbFFCAP8lcFFCAP8ldFFCAP8leFFCAP8lfFFCAP8lgFFCAP8lhFFCAP8liFFCAP8l
jFFCAP8lkFFCAP8llFFCAP8lmFFCAP8lnFFCAP8loFFCAP8lpFFCAP8lqFFCAP8lrFFCAP8lsFFC
AP8ltFFCAP8luFFCAP8lvFFCAP8lwFFCAP8lxFFCAP8lyFFCAP8lzFFCAP8l0FFCAP8l1FFCAP8l
2FFCAP8l3FFCAP8l4FFCAP8l5FFCAP8l6FFCAP8l7FFCAP8l8FFCAP8l9FFCAP8l+FFCAP8l/FFC
AP8lAFJCAP8lBFJCAP8lCFJCAMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAEakrUAAAAAACAAAALAAAAAAAAAAAYAIA
bWluX3JlcyA9ICVkLCBtYXhfcmVzID0gJWQsIGN1cl9yZXMgPSAlZAoAAAAAAAAAAAAAAE50UXVl
cnlUaW1lclJlc29sdXRpb24AAAAAAABOdFNldFRpbWVyUmVzb2x1dGlvbgAAAAAAAAAAbnRkbGwu
ZGxsAAAAcHJpbnRmLmMAAAAAZm9ybWF0ICE9IE5VTEwAAGkzODZcY2hrZXNwLmMAAAAAAAAAVGhl
IHZhbHVlIG9mIEVTUCB3YXMgbm90IHByb3Blcmx5IHNhdmVkIGFjcm9zcyBhIGZ1bmN0aW9uIGNh
bGwuICBUaGlzIGlzIHVzdWFsbHkgYSByZXN1bHQgb2YgY2FsbGluZyBhIGZ1bmN0aW9uIGRlY2xh
cmVkIHdpdGggb25lIGNhbGxpbmcgY29udmVudGlvbiB3aXRoIGEgZnVuY3Rpb24gcG9pbnRlciBk
ZWNsYXJlZCB3aXRoIGEgZGlmZmVyZW50IGNhbGxpbmcgY29udmVudGlvbi4gAP////9oFEAAgxRA
AF9zZnRidWYuYwAAAHN0ciAhPSBOVUxMAGZsYWcgPT0gMCB8fCBmbGFnID09IDEAAAYAAAYAAQAA
EAADBgAGAhAERUVFBQUFBQU1MABQAAAAACAoOFBYBwgANzAwV1AHAAAgIAgAAAAACGBoYGBgYAAA
cHB4eHh4CAcIAAAHAAgICAAACAAIAAcIAAAAKABuAHUAbABsACkAAAAAAChudWxsKQAAb3V0cHV0
LmMAAAAAY2ggIT0gX1QoJ1wwJykAAF9maWxlLmMAQXNzZXJ0aW9uIEZhaWxlZAAAAABFcnJvcgAA
AFdhcm5pbmcAJXMoJWQpIDogJXMACgAAAA0AAABBc3NlcnRpb24gZmFpbGVkIQAAAEFzc2VydGlv
biBmYWlsZWQ6IAAAX0NydERiZ1JlcG9ydDogU3RyaW5nIHRvbyBsb25nIG9yIElPIEVycm9yAABT
ZWNvbmQgQ2hhbmNlIEFzc2VydGlvbiBGYWlsZWQ6IEZpbGUgJXMsIExpbmUgJWQKAAAAd3Nwcmlu
dGZBAAAAdXNlcjMyLmRsbAAATWljcm9zb2Z0IFZpc3VhbCBDKysgRGVidWcgTGlicmFyeQAARGVi
dWcgJXMhCgpQcm9ncmFtOiAlcyVzJXMlcyVzJXMlcyVzJXMlcyVzCgooUHJlc3MgUmV0cnkgdG8g
ZGVidWcgdGhlIGFwcGxpY2F0aW9uKQAACk1vZHVsZTogAAAACkZpbGU6IAAKTGluZTogAAoKAABF
eHByZXNzaW9uOiAAAAAACgpGb3IgaW5mb3JtYXRpb24gb24gaG93IHlvdXIgcHJvZ3JhbSBjYW4g
Y2F1c2UgYW4gYXNzZXJ0aW9uCmZhaWx1cmUsIHNlZSB0aGUgVmlzdWFsIEMrKyBkb2N1bWVudGF0
aW9uIG9uIGFzc2VydHMuAAAuLi4APHByb2dyYW0gbmFtZSB1bmtub3duPgAAZGJncnB0LmMAAAAA
c3pVc2VyTWVzc2FnZSAhPSBOVUxMAAAAc3RkZW52cC5jAAAAc3RkYXJndi5jAAAAYV9lbnYuYwBp
b2luaXQuYwAAAABydW50aW1lIGVycm9yIAAADQoAAFRMT1NTIGVycm9yDQoAAABTSU5HIGVycm9y
DQoAAAAARE9NQUlOIGVycm9yDQoAAFI2MDI4DQotIHVuYWJsZSB0byBpbml0aWFsaXplIGhlYXAN
CgAAAABSNjAyNw0KLSBub3QgZW5vdWdoIHNwYWNlIGZvciBsb3dpbyBpbml0aWFsaXphdGlvbg0K
AAAAAFI2MDI2DQotIG5vdCBlbm91Z2ggc3BhY2UgZm9yIHN0ZGlvIGluaXRpYWxpemF0aW9uDQoA
AAAAUjYwMjUNCi0gcHVyZSB2aXJ0dWFsIGZ1bmN0aW9uIGNhbGwNCgAAAFI2MDI0DQotIG5vdCBl
bm91Z2ggc3BhY2UgZm9yIF9vbmV4aXQvYXRleGl0IHRhYmxlDQoAAAAAUjYwMTkNCi0gdW5hYmxl
IHRvIG9wZW4gY29uc29sZSBkZXZpY2UNCgAAAABSNjAxOA0KLSB1bmV4cGVjdGVkIGhlYXAgZXJy
b3INCgAAAABSNjAxNw0KLSB1bmV4cGVjdGVkIG11bHRpdGhyZWFkIGxvY2sgZXJyb3INCgAAAABS
NjAxNg0KLSBub3QgZW5vdWdoIHNwYWNlIGZvciB0aHJlYWQgZGF0YQ0KAA0KYWJub3JtYWwgcHJv
Z3JhbSB0ZXJtaW5hdGlvbg0KAAAAAFI2MDA5DQotIG5vdCBlbm91Z2ggc3BhY2UgZm9yIGVudmly
b25tZW50DQoAUjYwMDgNCi0gbm90IGVub3VnaCBzcGFjZSBmb3IgYXJndW1lbnRzDQoAAABSNjAw
Mg0KLSBmbG9hdGluZyBwb2ludCBub3QgbG9hZGVkDQoAAAAATWljcm9zb2Z0IFZpc3VhbCBDKysg
UnVudGltZSBMaWJyYXJ5AAAAAFJ1bnRpbWUgRXJyb3IhCgpQcm9ncmFtOiAAAABDbGllbnQAAEln
bm9yZQAAQ1JUAE5vcm1hbAAARnJlZQAAAABFcnJvcjogbWVtb3J5IGFsbG9jYXRpb246IGJhZCBt
ZW1vcnkgYmxvY2sgdHlwZS4KAAAASW52YWxpZCBhbGxvY2F0aW9uIHNpemU6ICV1IGJ5dGVzLgoA
JXMAAENsaWVudCBob29rIGFsbG9jYXRpb24gZmFpbHVyZS4KAAAAAENsaWVudCBob29rIGFsbG9j
YXRpb24gZmFpbHVyZSBhdCBmaWxlICVocyBsaW5lICVkLgoAAAAAZGJnaGVhcC5jAAAAX0NydENo
ZWNrTWVtb3J5KCkAAABfcEZpcnN0QmxvY2sgPT0gcE9sZEJsb2NrAAAAX3BMYXN0QmxvY2sgPT0g
cE9sZEJsb2NrAAAAAGZSZWFsbG9jIHx8ICghZlJlYWxsb2MgJiYgcE5ld0Jsb2NrID09IHBPbGRC
bG9jaykAAABfQkxPQ0tfVFlQRShwT2xkQmxvY2stPm5CbG9ja1VzZSk9PV9CTE9DS19UWVBFKG5C
bG9ja1VzZSkAAABwT2xkQmxvY2stPm5MaW5lID09IElHTk9SRV9MSU5FICYmIHBPbGRCbG9jay0+
bFJlcXVlc3QgPT0gSUdOT1JFX1JFUQAAAABfQ3J0SXNWYWxpZEhlYXBQb2ludGVyKHBVc2VyRGF0
YSkAAABBbGxvY2F0aW9uIHRvbyBsYXJnZSBvciBuZWdhdGl2ZTogJXUgYnl0ZXMuCgAAAABDbGll
bnQgaG9vayByZS1hbGxvY2F0aW9uIGZhaWx1cmUuCgBDbGllbnQgaG9vayByZS1hbGxvY2F0aW9u
IGZhaWx1cmUgYXQgZmlsZSAlaHMgbGluZSAlZC4KAF9wRmlyc3RCbG9jayA9PSBwSGVhZAAAAF9w
TGFzdEJsb2NrID09IHBIZWFkAAAAAHBIZWFkLT5uQmxvY2tVc2UgPT0gbkJsb2NrVXNlAAAAcEhl
YWQtPm5MaW5lID09IElHTk9SRV9MSU5FICYmIHBIZWFkLT5sUmVxdWVzdCA9PSBJR05PUkVfUkVR
AAAAAERBTUFHRTogYWZ0ZXIgJWhzIGJsb2NrICgjJWQpIGF0IDB4JTA4WC4KAAAAREFNQUdFOiBi
ZWZvcmUgJWhzIGJsb2NrICgjJWQpIGF0IDB4JTA4WC4KAABfQkxPQ0tfVFlQRV9JU19WQUxJRChw
SGVhZC0+bkJsb2NrVXNlKQAAQ2xpZW50IGhvb2sgZnJlZSBmYWlsdXJlLgoAAG1lbW9yeSBjaGVj
ayBlcnJvciBhdCAweCUwOFggPSAweCUwMlgsIHNob3VsZCBiZSAweCUwMlguCgAAACVocyBsb2Nh
dGVkIGF0IDB4JTA4WCBpcyAldSBieXRlcyBsb25nLgoAAAAAJWhzIGFsbG9jYXRlZCBhdCBmaWxl
ICVocyglZCkuCgBEQU1BR0U6IG9uIHRvcCBvZiBGcmVlIGJsb2NrIGF0IDB4JTA4WC4KAAAAAERB
TUFHRUQAX2hlYXBjaGsgZmFpbHMgd2l0aCB1bmtub3duIHJldHVybiB2YWx1ZSEKAABfaGVhcGNo
ayBmYWlscyB3aXRoIF9IRUFQQkFEUFRSLgoAAABfaGVhcGNoayBmYWlscyB3aXRoIF9IRUFQQkFE
RU5ELgoAAABfaGVhcGNoayBmYWlscyB3aXRoIF9IRUFQQkFETk9ERS4KAABfaGVhcGNoayBmYWls
cyB3aXRoIF9IRUFQQkFEQkVHSU4uCgBCYWQgbWVtb3J5IGJsb2NrIGZvdW5kIGF0IDB4JTA4WC4K
AABfQ3J0TWVtQ2hlY2tQb2ludDogTlVMTCBzdGF0ZSBwb2ludGVyLgoAX0NydE1lbURpZmZlcmVu
Y2U6IE5VTEwgc3RhdGUgcG9pbnRlci4KAE9iamVjdCBkdW1wIGNvbXBsZXRlLgoAAGNydCBibG9j
ayBhdCAweCUwOFgsIHN1YnR5cGUgJXgsICV1IGJ5dGVzIGxvbmcuCgAAAABub3JtYWwgYmxvY2sg
YXQgMHglMDhYLCAldSBieXRlcyBsb25nLgoAY2xpZW50IGJsb2NrIGF0IDB4JTA4WCwgc3VidHlw
ZSAleCwgJXUgYnl0ZXMgbG9uZy4KAHslbGR9IAAAJWhzKCVkKSA6IAAAI0ZpbGUgRXJyb3IjKCVk
KSA6IABEdW1waW5nIG9iamVjdHMgLT4KACBEYXRhOiA8JXM+ICVzCgAlLjJYIAAAAERldGVjdGVk
IG1lbW9yeSBsZWFrcyEKAFRvdGFsIGFsbG9jYXRpb25zOiAlbGQgYnl0ZXMuCgAATGFyZ2VzdCBu
dW1iZXIgdXNlZDogJWxkIGJ5dGVzLgoAAAAAJWxkIGJ5dGVzIGluICVsZCAlaHMgQmxvY2tzLgoA
AAAoImluY29uc2lzdGVudCBJT0IgZmllbGRzIiwgc3RyZWFtLT5fcHRyIC0gc3RyZWFtLT5fYmFz
ZSA+PSAwKQAAX2Zsc2J1Zi5jAAAAc3ByaW50Zi5jAAAAc3RyaW5nICE9IE5VTEwAAHZzcHJpbnRm
LmMAAEdldExhc3RBY3RpdmVQb3B1cAAAR2V0QWN0aXZlV2luZG93AE1lc3NhZ2VCb3hBAF9nZXRi
dWYuYwAAAGZjbG9zZS5jAAAAAAAAAAAAAAAAAAAAAP////9Gq0AATKtAAP////88rEAAQqxAAP//
//+krkAAqq5AAG9zZmluZm8uYwAAAF9mcmVlYnVmLmMAAHN0cmVhbSAhPSBOVUxMAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAMAlQABQ
fUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA8CZAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAg
L0AAAQAAAEgCQgA4AkIAQD9CAAAAAABAP0IAAQEAAAAAAAAAAAAAABAAAAAAAAAAAAAAAAAAAAAA
AAACAAAAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIAAAACAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAP////8CAAAABAAAAAQAAAD///////////////+Q
AkIAiAJCAHQCQgAFAADACwAAAAAAAAAdAADABAAAAAAAAACWAADABAAAAAAAAACNAADACAAAAAAA
AACOAADACAAAAAAAAACPAADACAAAAAAAAACQAADACAAAAAAAAACRAADACAAAAAAAAACSAADACAAA
AAAAAACTAADACAAAAAAAAAADAAAABwAAAAoAAACMAAAA/////wAKAAAQAAAAIAWTGQAAAAAAAAAA
AAAAAAAAAAACAAAAOAdCAAgAAAAMB0IACQAAAOAGQgAKAAAAvAZCABAAAACQBkIAEQAAAGAGQgAS
AAAAPAZCABMAAAAQBkIAGAAAANgFQgAZAAAAsAVCABoAAAB4BUIAGwAAAEAFQgAcAAAAGAVCAHgA
AAAIBUIAeQAAAPgEQgB6AAAA6ARCAPwAAADkBEIA/wAAANQEQgABAAAAAQAAAP/////93c0AwAdC
ALgHQgC0B0IArAdCAKQHQgBQp0AAUKdAAFCnQABQp0AAUKdAAFCnQAAAAAAAai5CAGouQgAAACAA
IAAgACAAIAAgACAAIAAgACgAKAAoACgAKAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAg
ACAAIABIABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQAIQAhACEAIQAhACEAIQAhACEAIQA
EAAQABAAEAAQABAAEACBAIEAgQCBAIEAgQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQAB
AAEAAQABAAEAEAAQABAAEAAQABAAggCCAIIAggCCAIIAAgACAAIAAgACAAIAAgACAAIAAgACAAIA
AgACAAIAAgACAAIAAgACABAAEAAQABAAIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAQIECAAAAACkAwAAYIJ5giEAAAAAAAAApt8AAAAAAAChpQAAAAAAAIGf4PwAAAAAQH6A/AAA
AACoAwAAwaPaoyAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIH+AAAAAAAAQP4AAAAAAAC1AwAAwaPa
oyAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIH+AAAAAAAAQf4AAAAAAAC2AwAAz6LkohoA5aLoolsA
AAAAAAAAAAAAAAAAAAAAAIH+AAAAAAAAQH6h/gAAAABRBQAAUdpe2iAAX9pq2jIAAAAAAAAAAAAA
AAAAAAAAAIHT2N7g+QAAMX6B/gAAAAAAAAAAAAAAAPgDAAAAAAAAAAAAAAAAAAAAn0AAAQAAAC4A
AAABAAAAAQAAABYAAAACAAAAAgAAAAMAAAACAAAABAAAABgAAAAFAAAADQAAAAYAAAAJAAAABwAA
AAwAAAAIAAAADAAAAAkAAAAMAAAACgAAAAcAAAALAAAACAAAAAwAAAAWAAAADQAAABYAAAAPAAAA
AgAAABAAAAANAAAAEQAAABIAAAASAAAAAgAAACEAAAANAAAANQAAAAIAAABBAAAADQAAAEMAAAAC
AAAAUAAAABEAAABSAAAADQAAAFMAAAANAAAAVwAAABYAAABZAAAACwAAAGwAAAANAAAAbQAAACAA
AABwAAAAHAAAAHIAAAAJAAAABgAAABYAAACAAAAACgAAAIEAAAAKAAAAggAAAAkAAACDAAAAFgAA
AIQAAAANAAAAkQAAACkAAACeAAAADQAAAKEAAAACAAAApAAAAAsAAACnAAAADQAAALcAAAARAAAA
zgAAAAIAAADXAAAACwAAABgHAAAMAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
[Remainder of base64-encoded binary attachment omitted: a Windows PE test executable whose decoded tail holds the KERNEL32.dll import table (Sleep, GetProcAddress, GetModuleHandleA, GetStdHandle, WriteFile, HeapAlloc, VirtualAlloc, etc.) and the debug record path D:\test\time\Debug\test.pdb.]

------=_001_NextPart215818845028_=----
Content-Type: application/octet-stream;
	name="test.cpp"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
	filename="test.cpp"

#include <stdio.h>
#include <windows.h>
typedef int (__stdcall *NTSETTIMER)(IN ULONG RequestedResolution, IN BOOLEAN Set, OUT PULONG ActualResolution);
typedef int (__stdcall *NTQUERYTIMER)(OUT PULONG MinimumResolution, OUT PULONG MaximumResolution, OUT PULONG CurrentResolution);

int main()
{
    DWORD min_res = 0, max_res = 0, cur_res = 0, ret = 0;
    HMODULE hdll = NULL;
    hdll = GetModuleHandle("ntdll.dll");
    NTSETTIMER AddrNtSetTimer = 0;
    NTQUERYTIMER AddrNtQueryTimer = 0;

    AddrNtSetTimer = (NTSETTIMER)GetProcAddress(hdll, "NtSetTimerResolution");
    AddrNtQueryTimer = (NTQUERYTIMER)GetProcAddress(hdll, "NtQueryTimerResolution");
    if (hdll == NULL || AddrNtSetTimer == NULL || AddrNtQueryTimer == NULL)
        return 1; /* lookup failed; don't call through a null pointer */

    while (1)
    {
        ret = AddrNtQueryTimer(&min_res, &max_res, &cur_res);
        printf("min_res = %d, max_res = %d, cur_res = %d\n", min_res, max_res, cur_res);
        Sleep(5);
        ret = AddrNtSetTimer(10000, 1, &cur_res);   /* request 1 ms (100 ns units) */
        Sleep(5);
        ret = AddrNtSetTimer(10000, 0, &cur_res);   /* release the request again */
        Sleep(5);
        ret = AddrNtSetTimer(50000, 1, &cur_res);   /* request 5 ms */
        Sleep(5);
        ret = AddrNtSetTimer(50000, 0, &cur_res);
        Sleep(5);
        ret = AddrNtSetTimer(100000, 1, &cur_res);  /* request 10 ms */
        Sleep(5);
        ret = AddrNtSetTimer(100000, 0, &cur_res);
        Sleep(5);
    }

    return 0;
}

------=_001_NextPart215818845028_=----
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

------=_001_NextPart215818845028_=------



From xen-devel-bounces@lists.xen.org Wed Aug 15 14:08:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 14:08:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1eGE-0006BN-IH; Wed, 15 Aug 2012 14:07:50 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <tupeng212@gmail.com>) id 1T1eGB-0006BE-RQ
	for xen-devel@lists.xen.org; Wed, 15 Aug 2012 14:07:48 +0000
Received: from [85.158.143.35:17312] by server-1.bemta-4.messagelabs.com id
	7E/8E-07754-33DAB205; Wed, 15 Aug 2012 14:07:47 +0000
X-Env-Sender: tupeng212@gmail.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1345039650!14242832!1
X-Originating-IP: [209.85.210.45]
X-SpamReason: No, hits=1.4 required=7.0 tests=HTML_60_70,HTML_MESSAGE,
	MAILTO_TO_SPAM_ADDR,MIME_BASE64_TEXT,MIME_BOUND_NEXTPART,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8199 invoked from network); 15 Aug 2012 14:07:31 -0000
Received: from mail-pz0-f45.google.com (HELO mail-pz0-f45.google.com)
	(209.85.210.45)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Aug 2012 14:07:31 -0000
Received: by dadn15 with SMTP id n15so180673dad.32
	for <xen-devel@lists.xen.org>; Wed, 15 Aug 2012 07:07:29 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=date:from:to:cc:reply-to:subject:references:x-priority:x-guid
	:x-has-attach:x-mailer:mime-version:message-id:content-type;
	bh=DPY/2h4FwX/0mUL+0IemDK0HkZtLqNn9iniJ/BYbQD0=;
	b=o/T4A6hV/E6p9zAg28V8Rm+hdqRZp/NLrs3p9xOSjeXYK0e7a0C5scEeXosSMqfWSg
	GwRwics5903Zk5+dbmpMfHNCmkgCns0VpTWGhregwaWIkC6nG+U1ytvUFN/+SVOZm5Cb
	7Os4m9686MtFUsXwG3v7HdR74glUtHFrDqQkpMRZpDxivoxgq663x0bvV9Tqgcfo7gwC
	/MN3rz+1QUIJJYi8R4fHuzNHNqDVJ8+kLgF/cnwhoYZOKzzAYIKTjTk8xm3ZmXeHclj4
	+FLVfHejkSpEi09rWvyUKm7HahAzzBZIpDt0rpoVxtLaEbxNyJUl/cjutmuAn+jNbOi4
	VdJQ==
Received: by 10.66.83.234 with SMTP id t10mr32322200pay.39.1345039649634;
	Wed, 15 Aug 2012 07:07:29 -0700 (PDT)
Received: from root ([115.199.253.245])
	by mx.google.com with ESMTPS id jz4sm616595pbc.17.2012.08.15.07.07.02
	(version=SSLv3 cipher=OTHER); Wed, 15 Aug 2012 07:07:27 -0700 (PDT)
Date: Wed, 15 Aug 2012 22:07:07 +0800
From: tupeng212 <tupeng212@gmail.com>
To: "Jan Beulich" <JBeulich@suse.com>, 
	"Yang Z Zhang" <yang.z.zhang@intel.com>, "Keir Fraser" <keir@xen.org>, 
	"Tim Deegan" <tim@xen.org>
References: <502A3BBC0200007800094B68@nat28.tlf.novell.com>
X-Priority: 3
X-GUID: EC02886C-1CA8-4587-83B3-CEE6627B4FFE
X-Has-Attach: yes
X-Mailer: Foxmail 7.0.1.87[cn]
Mime-Version: 1.0
Message-ID: <2012081522045495397713@gmail.com>
Content-Type: multipart/mixed; boundary="----=_001_NextPart215818845028_=----"
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH,
	RFC v2] x86/HVM: assorted RTC emulation adjustments (was Re: Big
	Bug:Time in VM goes slower...)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: tupeng212 <tupeng212@gmail.com>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.

------=_001_NextPart215818845028_=----
Content-Type: multipart/alternative;
	boundary="----=_002_NextPart316018485136_=----"


------=_002_NextPart316018485136_=----
Content-Type: text/plain;
	charset="gb2312"
Content-Transfer-Encoding: base64

SGksIEphbjoNCkkgYW0gc29ycnkgSSByZWFsbHkgZG9uJ3QgaGF2ZSBtdWNoIHRpbWUgdG8gdHJ5
IGEgdGVzdCBvZiB5b3VyIHBhdGNoLCBhbmQgaXQgaXMgbm90IGNvbnZlbmllbnQNCmZvciBtZSB0
byBoYXZlIGEgdHJ5LiBGb3IgdGhlIHZlcnNpb24gSSBoYXZlIGJlZW4gdXNpbmcgaXMgeGVuNC4w
LngsIGFuZCB5b3VyIHBhdGNoIGlzIGJhc2VkIG9uIA0KdGhlIGxhdGVzdCB2ZXJzaW9uIHhlbjQu
Mi54LihJIGhhdmUgbmV2ZXIgY29tcGxpZWQgdGhlIHVuc3RhYmxlIG9uZSksIHNvIEkgbWVyZ2Vk
IHlvdXIgcGF0Y2ggdG8gbXkgDQp4ZW40LjAueCwgc3RpbGwgY291bGRuJ3QgZmluZCB0aGUgdHdv
IGZ1bmN0aW9ucyBiZWxvdzoNCiBzdGF0aWMgdm9pZCBydGNfdXBkYXRlX3RpbWVyMih2b2lkICpv
cGFxdWUpIA0KIHN0YXRpYyB2b2lkIHJ0Y19hbGFybV9jYih2b2lkICpvcGFxdWUpIA0Kc28gSSBk
aWRuJ3QgbWVyZ2UgdGhlIHR3byBmdW5jdGlvbnMgd2hpY2ggY29udGFpbnMgYSBydGNfdG9nZ2xl
X2lycSgpIC4NCg0KVGhlIHJlc3VsdHMgZm9yIG1lIHdlcmUgdGhlc2U6DQoxIEluIG15IHJlYWwg
YXBwbGljYXRpb24gZW52aXJvbm1lbnQsIGl0IHdvcmtlZCB2ZXJ5IHdlbGwgaW4gdGhlIGZvcm1l
ciA1bWlucywgbXVjaCBiZXR0ZXIgdGhhbiBiZWZvcmUsDQogYnV0IGF0IGxhc3QgaXQgbGFnZ2Vk
IGFnYWluLiBJIGRvbid0IGtub3cgd2hldGhlciBpdCBiZWxvbmdzIHRvIHRoZSB0d28gbWlzc2Vk
IGZ1bmN0aW9ucy4gSSBsYWNrIHRoZSANCiBhYmlsaXR5IHRvIGZpZ3VyZSB0aGVtIG91dC4NCg0K
MiBXaGVuIEkgdGVzdGVkIG15IHRlc3QgcHJvZ3JhbSB3aGljaCBJIHByb3ZpZGVkIGRheXMgYmVm
b3JlLCBpdCB3b3JrZWQgdmVyeSB3ZWxsLCBtYXliZSB0aGUgcHJvZ3JhbSBkb2Vzbid0IA0KZW11
bGF0ZSB0aGUgcmVhbCBlbnZpcm9ubWVudCBkdWUgdG8gdGhlIHNhbWUgc2V0dGluZyByYXRlLCBz
byBJIG1vZGlmaWVkIHRoaXMgcHJvZ3JhbSBhcyB3aGljaCBpbiB0aGUgYXR0YWNobWVudC4NCmlm
IHlvdSBhcmUgbW9yZSBjb252ZW5pZW50LCB5b3UgY2FuIGhlbHAgbWUgdG8gaGF2ZSBhIGxvb2sg
b2YgaXQuDQpBbmQgSSBoYXZlIGEgb3BpbmlvbiwgYmVjYXVzZSBvdXIgcHJvZHVjdCBpcyBiYXNl
ZCBvbiBWZXJzaW9uIFhlbjQuMC54LCBpZiB5b3UgaGF2ZSBlbm91Z2ggdGltZSwgY2FuIHlvdSB3
cml0ZSANCmFub3RoZXIgcGF0Y2ggYmFzZWQgaHR0cDovL3hlbmJpdHMueGVuLm9yZy9oZy94ZW4t
NC4wLXRlc3RpbmcuaGcvIGZvciBtZSwgdGhhbmsgeW91IHZlcnkgbXVjaCENCg0KMyBJIGFsc28g
aGF2ZSBhIHRob3VnaHQgdGhhdCBjYW4gd2UgaGF2ZSBzb21lIGRldGVjdGluZyBtZXRob2RzIHRv
IGZpbmQgdGhlIGxhZ2dpbmcgdGltZSBlYXJsaWVyIHRvIGFkanVzdCB0aW1lDQpiYWNrIHRvIG5v
cm1hbCB2YWx1ZSBpbiB0aGUgY29kZT8NCg0KYmVzdCByZWdhcmRzLA0KDQoNCg0KDQp0dXBlbmcy
MTINCg0KU2Vjb25kIGRyYWZ0IG9mIGEgcGF0Y2ggcG9zdGVkOyBubyB0ZXN0IHJlc3VsdHMgc28g
ZmFyIGZvciBmaXJzdCBkcmFmdC4NCkphbg0KDQpGcm9tOiBKYW4gQmV1bGljaA0KRGF0ZTogMjAx
Mi0wOC0xNCAxNzo1MQ0KVG86IFlhbmcgWiBaaGFuZzsgS2VpciBGcmFzZXI7IFRpbSBEZWVnYW4N
CkNDOiB0dXBlbmcyMTI7IHhlbi1kZXZlbA0KU3ViamVjdDogW1hlbi1kZXZlbF0gW1BBVENILCBS
RkMgdjJdIHg4Ni9IVk06IGFzc29ydGVkIFJUQyBlbXVsYXRpb24gYWRqdXN0bWVudHMgKHdhcyBS
ZTogQmlnIEJ1ZzpUaW1lIGluIFZNIGdvZXMgc2xvd2VyLi4uKQ0KQmVsb3cvYXR0YWNoZWQgYSBz
ZWNvbmQgZHJhZnQgb2YgYSBwYXRjaCB0byBmaXggbm90IG9ubHkgdGhpcw0KaXNzdWUsIGJ1dCBh
IGZldyBtb3JlIHdpdGggdGhlIFJUQyBlbXVsYXRpb24uDQoNCktlaXIsIFRpbSwgWWFuZywgb3Ro
ZXJzIC0gdGhlIGNoYW5nZSB0byB4ZW4vYXJjaC94ODYvaHZtL3ZwdC5jIHJlYWxseQ0KbG9va3Mg
bW9yZSBsaWtlIGEgaGFjayB0aGFuIGEgc29sdXRpb24sIGJ1dCBJIGRvbid0IHNlZSBhbm90aGVy
DQp3YXkgd2l0aG91dCBtdWNoIG1vcmUgaW50cnVzaXZlIGNoYW5nZXMuIFRoZSBwb2ludCBpcyB0
aGF0IHdlDQp3YW50IHRoZSBSVEMgY29kZSB0byBkZWNpZGUgd2hldGhlciB0byBnZW5lcmF0ZSBh
biBpbnRlcnJ1cHQNCihzbyB0aGF0IFJUQ19QRiBjYW4gYmVjb21lIHNldCBjb3JyZWN0bHkgZXZl
biB3aXRob3V0IFJUQ19QSUUNCmdldHRpbmcgZW5hYmxlZCBieSB0aGUgZ3Vlc3QpLg0KDQpBZGRp
dGlvbmFsbHkgSSB3b25kZXIgd2hldGhlciBhbGFybV90aW1lcl91cGRhdGUoKSBzaG91bGRuJ3QN
CmJhaWwgb24gbm9uLWNvbmZvcm1pbmcgUlRDXypfQUxBUk0gdmFsdWVzIChhcyB0aG9zZSB3b3Vs
ZA0KbmV2ZXIgbWF0Y2ggdGhlIHZhbHVlcyB0aGV5IGdldCBjb21wYXJlZCBhZ2FpbnN0LCB3aGVy
ZWFzDQp3aXRoIHRoZSBjdXJyZW50IHdheSBvZiBoYW5kbGluZyB0aGlzIHRoZXkgd291bGQgYXBw
ZWFyIHRvDQptYXRjaCAtIGkuZS4gc2V0IFJUQ19BRiBhbmQgcG9zc2libHkgZ2VuZXJhdGUgYW4g
aW50ZXJydXB0IC0NCnNvbWUgb3RoZXIgcG9pbnQgaW4gdGltZSkuIEkgcmVhbGl6ZSB0aGUgYmVo
YXZpb3IgaGVyZSBtYXkgbm90DQpiZSBwcmVjaXNlbHkgc3BlY2lmaWVkLCBidXQgdGhlIHNwZWNp
ZmljYXRpb24gc2F5aW5nICJ0aGUgY3VycmVudA0KdGltZSBoYXMgbWF0Y2hlZCB0aGUgYWxhcm0g
dGltZSIgbWVhbnMgdG8gbWUgYSB2YWx1ZSBieSB2YWx1ZQ0KY29tcGFyaXNvbiwgd2hpY2ggaW1w
bGllcyB0aGF0IG5vbi1jb25mb3JtaW5nIHZhbHVlcyB3b3VsZA0KbmV2ZXIgbWF0Y2ggKHNpbmNl
IG5vbi1jb25mb3JtaW5nIGN1cnJlbnQgdGltZSB2YWx1ZXMgY291bGQNCmdldCByZXBsYWNlZCBh
dCBhbnkgdGltZSBieSB0aGUgaGFyZHdhcmUgZHVlIHRvIG92ZXJmbG93DQpkZXRlY3Rpb24pLg0K
DQpKYW4NCg0KLSBkb24ndCBjYWxsIHJ0Y190aW1lcl91cGRhdGUoKSBvbiBSRUdfQSB3cml0ZXMg
d2hlbiB0aGUgdmFsdWUgZGlkbid0DQogIGNoYW5nZSAoZG9pbmcgdGhlIGNhbGwgYWx3YXlzIHdh
cyByZXBvcnRlZCB0byBjYXVzZSB3YWxsIGNsb2NrIHRpbWUNCiAgbGFnZ2luZyB3aXRoIHRoZSBK
Vk0gcnVubmluZyBvbiBXaW5kb3dzKQ0KLSBkb24ndCBjYWxsIHJ0Y190aW1lcl91cGRhdGUoKSBv
biBSRUdfQiB3cml0ZXMgYXQgYWxsDQotIG9ubHkgY2FsbCBhbGFybV90aW1lcl91cGRhdGUoKSBv
biBSRUdfQiB3cml0ZXMgd2hlbiByZWxldmFudCBiaXRzDQogIGNoYW5nZQ0KLSBvbmx5IGNhbGwg
Y2hlY2tfdXBkYXRlX3RpbWVyKCkgb24gUkVHX0Igd3JpdGVzIHdoZW4gU0VUIGNoYW5nZXMNCi0g
aW5zdGVhZCBwcm9wZXJseSBoYW5kbGUgQUYgYW5kIFBGIHdoZW4gdGhlIGd1ZXN0IGlzIG5vdCBh
bHNvIHNldHRpbmcNCiAgQUlFL1BJRSByZXNwZWN0aXZlbHkgKGZvciBVRiB0aGlzIHdhcyBhbHJl
YWR5IHRoZSBjYXNlLCBvbmx5IGENCiAgY29tbWVudCB3YXMgc2xpZ2h0bHkgaW5hY2N1cmF0ZSkN
Ci0gcmFpc2UgdGhlIFJUQyBJUlEgbm90IG9ubHkgd2hlbiBVSUUgZ2V0cyBzZXQgd2hpbGUgVUYg
d2FzIGFscmVhZHkNCiAgc2V0LCBidXQgZ2VuZXJhbGl6ZSB0aGlzIHRvIGNvdmVyIEFJRSBhbmQg
UElFIGFzIHdlbGwNCi0gcHJvcGVybHkgbWFzayBvZmYgYml0IDcgd2hlbiByZXRyaWV2aW5nIHRo
ZSBob3VyIHZhbHVlcyBpbg0KICBhbGFybV90aW1lcl91cGRhdGUoKSwgYW5kIHByb3Blcmx5IHVz
ZSBSVENfSE9VUlNfQUxBUk0ncyBiaXQgNyB3aGVuDQogIGNvbnZlcnRpbmcgZnJvbSAxMi0gdG8g
MjQtaG91ciB2YWx1ZQ0KLSBhbHNvIGhhbmRsZSB0aGUgdHdvIG90aGVyIHBvc3NpYmxlIGNsb2Nr
IGJhc2VzDQotIHVzZSBSVENfKiBuYW1lcyBpbiBhIGNvdXBsZSBvZiBwbGFjZXMgd2hlcmUgbGl0
ZXJhbCBudW1iZXJzIHdlcmUgdXNlZA0KICBzbyBmYXINCg0KLS0tIGEveGVuL2FyY2gveDg2L2h2
bS9ydGMuYw0KKysrIGIveGVuL2FyY2gveDg2L2h2bS9ydGMuYw0KQEAgLTUwLDExICs1MCwyNCBA
QCBzdGF0aWMgdm9pZCBydGNfc2V0X3RpbWUoUlRDU3RhdGUgKnMpOw0KIHN0YXRpYyBpbmxpbmUg
aW50IGZyb21fYmNkKFJUQ1N0YXRlICpzLCBpbnQgYSk7DQogc3RhdGljIGlubGluZSBpbnQgY29u
dmVydF9ob3VyKFJUQ1N0YXRlICpzLCBpbnQgaG91cik7DQoNCi1zdGF0aWMgdm9pZCBydGNfcGVy
aW9kaWNfY2Ioc3RydWN0IHZjcHUgKnYsIHZvaWQgKm9wYXF1ZSkNCitzdGF0aWMgdm9pZCBydGNf
dG9nZ2xlX2lycShSVENTdGF0ZSAqcykNCit7DQorICAgIHN0cnVjdCBkb21haW4gKmQgPSB2cnRj
X2RvbWFpbihzKTsNCisNCisgICAgQVNTRVJUKHNwaW5faXNfbG9ja2VkKCZzLT5sb2NrKSk7DQor
ICAgIHMtPmh3LmNtb3NfZGF0YVtSVENfUkVHX0NdIHw9IFJUQ19JUlFGOw0KKyAgICBodm1faXNh
X2lycV9kZWFzc2VydChkLCBSVENfSVJRKTsNCisgICAgaHZtX2lzYV9pcnFfYXNzZXJ0KGQsIFJU
Q19JUlEpOw0KK30NCisNCit2b2lkIHJ0Y19wZXJpb2RpY19pbnRlcnJ1cHQodm9pZCAqb3BhcXVl
KQ0KIHsNCiAgICAgUlRDU3RhdGUgKnMgPSBvcGFxdWU7DQorDQogICAgIHNwaW5fbG9jaygmcy0+
bG9jayk7DQotICAgIHMtPmh3LmNtb3NfZGF0YVtSVENfUkVHX0NdIHw9IDB4YzA7DQorICAgIHMt
Pmh3LmNtb3NfZGF0YVtSVENfUkVHX0NdIHw9IFJUQ19QRjsNCisgICAgaWYgKCBzLT5ody5jbW9z
X2RhdGFbUlRDX1JFR19CXSAmIFJUQ19QSUUgKQ0KKyAgICAgICAgcnRjX3RvZ2dsZV9pcnEocyk7
DQogICAgIHNwaW5fdW5sb2NrKCZzLT5sb2NrKTsNCiB9DQoNCkBAIC02OCwxOSArODEsMjUgQEAg
c3RhdGljIHZvaWQgcnRjX3RpbWVyX3VwZGF0ZShSVENTdGF0ZSAqcw0KICAgICBBU1NFUlQoc3Bp
bl9pc19sb2NrZWQoJnMtPmxvY2spKTsNCg0KICAgICBwZXJpb2RfY29kZSA9IHMtPmh3LmNtb3Nf
ZGF0YVtSVENfUkVHX0FdICYgUlRDX1JBVEVfU0VMRUNUOw0KLSAgICBpZiAoIChwZXJpb2RfY29k
ZSAhPSAwKSAmJiAocy0+aHcuY21vc19kYXRhW1JUQ19SRUdfQl0gJiBSVENfUElFKSApDQorICAg
IHN3aXRjaCAoIHMtPmh3LmNtb3NfZGF0YVtSVENfUkVHX0FdICYgUlRDX0RJVl9DVEwgKQ0KICAg
ICB7DQotICAgICAgICBpZiAoIHBlcmlvZF9jb2RlIDw9IDIgKQ0KKyAgICBjYXNlIFJUQ19SRUZf
Q0xDS18zMktIWjoNCisgICAgICAgIGlmICggKHBlcmlvZF9jb2RlICE9IDApICYmIChwZXJpb2Rf
Y29kZSA8PSAyKSApDQogICAgICAgICAgICAgcGVyaW9kX2NvZGUgKz0gNzsNCi0NCi0gICAgICAg
IHBlcmlvZCA9IDEgPDwgKHBlcmlvZF9jb2RlIC0gMSk7IC8qIHBlcmlvZCBpbiAzMiBLaHogY3lj
bGVzICovDQotICAgICAgICBwZXJpb2QgPSBESVZfUk9VTkQoKHBlcmlvZCAqIDEwMDAwMDAwMDBV
TEwpLCAzMjc2OCk7IC8qIHBlcmlvZCBpbiBucyAqLw0KLSAgICAgICAgY3JlYXRlX3BlcmlvZGlj
X3RpbWUodiwgJnMtPnB0LCBwZXJpb2QsIHBlcmlvZCwgUlRDX0lSUSwNCi0gICAgICAgICAgICAg
ICAgICAgICAgICAgICAgIHJ0Y19wZXJpb2RpY19jYiwgcyk7DQotICAgIH0NCi0gICAgZWxzZQ0K
LSAgICB7DQorICAgICAgICAvKiBmYWxsIHRocm91Z2ggKi8NCisgICAgY2FzZSBSVENfUkVGX0NM
Q0tfMU1IWjoNCisgICAgY2FzZSBSVENfUkVGX0NMQ0tfNE1IWjoNCisgICAgICAgIGlmICggcGVy
aW9kX2NvZGUgIT0gMCApDQorICAgICAgICB7DQorICAgICAgICAgICAgcGVyaW9kID0gMSA8PCAo
cGVyaW9kX2NvZGUgLSAxKTsgLyogcGVyaW9kIGluIDMyIEtoeiBjeWNsZXMgKi8NCisgICAgICAg
ICAgICBwZXJpb2QgPSBESVZfUk9VTkQocGVyaW9kICogMTAwMDAwMDAwMFVMTCwgMzI3NjgpOyAv
KiBpbiBucyAqLw0KKyAgICAgICAgICAgIGNyZWF0ZV9wZXJpb2RpY190aW1lKHYsICZzLT5wdCwg
cGVyaW9kLCBwZXJpb2QsIFJUQ19JUlEsIE5VTEwsIHMpOw0KKyAgICAgICAgICAgIGJyZWFrOw0K
KyAgICAgICAgfQ0KKyAgICAgICAgLyogZmFsbCB0aHJvdWdoICovDQorICAgIGRlZmF1bHQ6DQog
ICAgICAgICBkZXN0cm95X3BlcmlvZGljX3RpbWUoJnMtPnB0KTsNCisgICAgICAgIGJyZWFrOw0K
ICAgICB9DQogfQ0KDQpAQCAtMTAyLDcgKzEyMSw3IEBAIHN0YXRpYyB2b2lkIGNoZWNrX3VwZGF0
ZV90aW1lcihSVENTdGF0ZSANCiAgICAgICAgIGd1ZXN0X3VzZWMgPSBnZXRfbG9jYWx0aW1lX3Vz
KGQpICUgVVNFQ19QRVJfU0VDOw0KICAgICAgICAgaWYgKGd1ZXN0X3VzZWMgPj0gKFVTRUNfUEVS
X1NFQyAtIDI0NCkpDQogICAgICAgICB7DQotICAgICAgICAgICAgLyogUlRDIGlzIGluIHVwZGF0
ZSBjeWNsZSB3aGVuIGVuYWJsaW5nIFVJRSAqLw0KKyAgICAgICAgICAgIC8qIFJUQyBpcyBpbiB1
cGRhdGUgY3ljbGUgKi8NCiAgICAgICAgICAgICBzLT5ody5jbW9zX2RhdGFbUlRDX1JFR19BXSB8
PSBSVENfVUlQOw0KICAgICAgICAgICAgIG5leHRfdXBkYXRlX3RpbWUgPSAoVVNFQ19QRVJfU0VD
IC0gZ3Vlc3RfdXNlYykgKiBOU19QRVJfVVNFQzsNCiAgICAgICAgICAgICBleHBpcmVfdGltZSA9
IE5PVygpICsgbmV4dF91cGRhdGVfdGltZTsNCkBAIC0xNDQsNyArMTYzLDYgQEAgc3RhdGljIHZv
aWQgcnRjX3VwZGF0ZV90aW1lcih2b2lkICpvcGFxdQ0KIHN0YXRpYyB2b2lkIHJ0Y191cGRhdGVf
dGltZXIyKHZvaWQgKm9wYXF1ZSkNCiB7DQogICAgIFJUQ1N0YXRlICpzID0gb3BhcXVlOw0KLSAg
ICBzdHJ1Y3QgZG9tYWluICpkID0gdnJ0Y19kb21haW4ocyk7DQoNCiAgICAgc3Bpbl9sb2NrKCZz
LT5sb2NrKTsNCiAgICAgaWYgKCEocy0+aHcuY21vc19kYXRhW1JUQ19SRUdfQl0gJiBSVENfU0VU
KSkNCkBAIC0xNTIsMTEgKzE3MCw3IEBAIHN0YXRpYyB2b2lkIHJ0Y191cGRhdGVfdGltZXIyKHZv
aWQgKm9wYXENCiAgICAgICAgIHMtPmh3LmNtb3NfZGF0YVtSVENfUkVHX0NdIHw9IFJUQ19VRjsN
CiAgICAgICAgIHMtPmh3LmNtb3NfZGF0YVtSVENfUkVHX0FdICY9IH5SVENfVUlQOw0KICAgICAg
ICAgaWYgKChzLT5ody5jbW9zX2RhdGFbUlRDX1JFR19CXSAmIFJUQ19VSUUpKQ0KLSAgICAgICAg
ew0KLSAgICAgICAgICAgIHMtPmh3LmNtb3NfZGF0YVtSVENfUkVHX0NdIHw9IFJUQ19JUlFGOw0K
LSAgICAgICAgICAgIGh2bV9pc2FfaXJxX2RlYXNzZXJ0KGQsIFJUQ19JUlEpOw0KLSAgICAgICAg
ICAgIGh2bV9pc2FfaXJxX2Fzc2VydChkLCBSVENfSVJRKTsNCi0gICAgICAgIH0NCisgICAgICAg
ICAgICBydGNfdG9nZ2xlX2lycShzKTsNCiAgICAgICAgIGNoZWNrX3VwZGF0ZV90aW1lcihzKTsN
CiAgICAgfQ0KICAgICBzcGluX3VubG9jaygmcy0+bG9jayk7DQpAQCAtMTc1LDIxICsxODksMTgg
QEAgc3RhdGljIHZvaWQgYWxhcm1fdGltZXJfdXBkYXRlKFJUQ1N0YXRlIA0KDQogICAgIHN0b3Bf
dGltZXIoJnMtPmFsYXJtX3RpbWVyKTsNCg0KLSAgICBpZiAoKHMtPmh3LmNtb3NfZGF0YVtSVENf
UkVHX0JdICYgUlRDX0FJRSkgJiYNCi0gICAgICAgICAgICAhKHMtPmh3LmNtb3NfZGF0YVtSVENf
UkVHX0JdICYgUlRDX1NFVCkpDQorICAgIGlmICggIShzLT5ody5jbW9zX2RhdGFbUlRDX1JFR19C
XSAmIFJUQ19TRVQpICkNCiAgICAgew0KICAgICAgICAgcy0+Y3VycmVudF90bSA9IGdtdGltZShn
ZXRfbG9jYWx0aW1lKGQpKTsNCiAgICAgICAgIHJ0Y19jb3B5X2RhdGUocyk7DQoNCiAgICAgICAg
IGFsYXJtX3NlYyA9IGZyb21fYmNkKHMsIHMtPmh3LmNtb3NfZGF0YVtSVENfU0VDT05EU19BTEFS
TV0pOw0KICAgICAgICAgYWxhcm1fbWluID0gZnJvbV9iY2Qocywgcy0+aHcuY21vc19kYXRhW1JU
Q19NSU5VVEVTX0FMQVJNXSk7DQotICAgICAgICBhbGFybV9ob3VyID0gZnJvbV9iY2Qocywgcy0+
aHcuY21vc19kYXRhW1JUQ19IT1VSU19BTEFSTV0pOw0KLSAgICAgICAgYWxhcm1faG91ciA9IGNv
bnZlcnRfaG91cihzLCBhbGFybV9ob3VyKTsNCisgICAgICAgIGFsYXJtX2hvdXIgPSBjb252ZXJ0
X2hvdXIocywgcy0+aHcuY21vc19kYXRhW1JUQ19IT1VSU19BTEFSTV0pOw0KDQogICAgICAgICBj
dXJfc2VjID0gZnJvbV9iY2Qocywgcy0+aHcuY21vc19kYXRhW1JUQ19TRUNPTkRTXSk7DQogICAg
ICAgICBjdXJfbWluID0gZnJvbV9iY2Qocywgcy0+aHcuY21vc19kYXRhW1JUQ19NSU5VVEVTXSk7
DQotICAgICAgICBjdXJfaG91ciA9IGZyb21fYmNkKHMsIHMtPmh3LmNtb3NfZGF0YVtSVENfSE9V
UlNdKTsNCi0gICAgICAgIGN1cl9ob3VyID0gY29udmVydF9ob3VyKHMsIGN1cl9ob3VyKTsNCisg
ICAgICAgIGN1cl9ob3VyID0gY29udmVydF9ob3VyKHMsIHMtPmh3LmNtb3NfZGF0YVtSVENfSE9V
UlNdKTsNCg0KICAgICAgICAgbmV4dF91cGRhdGVfdGltZSA9IFVTRUNfUEVSX1NFQyAtIChnZXRf
bG9jYWx0aW1lX3VzKGQpICUgVVNFQ19QRVJfU0VDKTsNCiAgICAgICAgIG5leHRfdXBkYXRlX3Rp
bWUgPSBuZXh0X3VwZGF0ZV90aW1lICogTlNfUEVSX1VTRUMgKyBOT1coKTsNCkBAIC0zNDMsNyAr
MzU0LDYgQEAgc3RhdGljIHZvaWQgYWxhcm1fdGltZXJfdXBkYXRlKFJUQ1N0YXRlIA0KIHN0YXRp
YyB2b2lkIHJ0Y19hbGFybV9jYih2b2lkICpvcGFxdWUpDQogew0KICAgICBSVENTdGF0ZSAqcyA9
IG9wYXF1ZTsNCi0gICAgc3RydWN0IGRvbWFpbiAqZCA9IHZydGNfZG9tYWluKHMpOw0KDQogICAg
IHNwaW5fbG9jaygmcy0+bG9jayk7DQogICAgIGlmICghKHMtPmh3LmNtb3NfZGF0YVtSVENfUkVH
X0JdICYgUlRDX1NFVCkpDQpAQCAtMzUxLDExICszNjEsNyBAQCBzdGF0aWMgdm9pZCBydGNfYWxh
cm1fY2Iodm9pZCAqb3BhcXVlKQ0KICAgICAgICAgcy0+aHcuY21vc19kYXRhW1JUQ19SRUdfQ10g
fD0gUlRDX0FGOw0KICAgICAgICAgLyogYWxhcm0gaW50ZXJydXB0ICovDQogICAgICAgICBpZiAo
cy0+aHcuY21vc19kYXRhW1JUQ19SRUdfQl0gJiBSVENfQUlFKQ0KLSAgICAgICAgew0KLSAgICAg
ICAgICAgIHMtPmh3LmNtb3NfZGF0YVtSVENfUkVHX0NdIHw9IFJUQ19JUlFGOw0KLSAgICAgICAg
ICAgIGh2bV9pc2FfaXJxX2RlYXNzZXJ0KGQsIFJUQ19JUlEpOw0KLSAgICAgICAgICAgIGh2bV9p
c2FfaXJxX2Fzc2VydChkLCBSVENfSVJRKTsNCi0gICAgICAgIH0NCisgICAgICAgICAgICBydGNf
dG9nZ2xlX2lycShzKTsNCiAgICAgICAgIGFsYXJtX3RpbWVyX3VwZGF0ZShzKTsNCiAgICAgfQ0K
ICAgICBzcGluX3VubG9jaygmcy0+bG9jayk7DQpAQCAtMzY1LDYgKzM3MSw3IEBAIHN0YXRpYyBp
bnQgcnRjX2lvcG9ydF93cml0ZSh2b2lkICpvcGFxdWUNCiB7DQogICAgIFJUQ1N0YXRlICpzID0g
b3BhcXVlOw0KICAgICBzdHJ1Y3QgZG9tYWluICpkID0gdnJ0Y19kb21haW4ocyk7DQorICAgIHVp
bnQzMl90IG9yaWcsIG1hc2s7DQoNCiAgICAgc3Bpbl9sb2NrKCZzLT5sb2NrKTsNCg0KQEAgLTM4
Miw2ICszODksNyBAQCBzdGF0aWMgaW50IHJ0Y19pb3BvcnRfd3JpdGUodm9pZCAqb3BhcXVlDQog
ICAgICAgICByZXR1cm4gMDsNCiAgICAgfQ0KDQorICAgIG9yaWcgPSBzLT5ody5jbW9zX2RhdGFb
cy0+aHcuY21vc19pbmRleF07DQogICAgIHN3aXRjaCAoIHMtPmh3LmNtb3NfaW5kZXggKQ0KICAg
ICB7DQogICAgIGNhc2UgUlRDX1NFQ09ORFNfQUxBUk06DQpAQCAtNDA1LDkgKzQxMyw5IEBAIHN0
YXRpYyBpbnQgcnRjX2lvcG9ydF93cml0ZSh2b2lkICpvcGFxdWUNCiAgICAgICAgIGJyZWFrOw0K
ICAgICBjYXNlIFJUQ19SRUdfQToNCiAgICAgICAgIC8qIFVJUCBiaXQgaXMgcmVhZCBvbmx5ICov
DQotICAgICAgICBzLT5ody5jbW9zX2RhdGFbUlRDX1JFR19BXSA9IChkYXRhICYgflJUQ19VSVAp
IHwNCi0gICAgICAgICAgICAocy0+aHcuY21vc19kYXRhW1JUQ19SRUdfQV0gJiBSVENfVUlQKTsN
Ci0gICAgICAgIHJ0Y190aW1lcl91cGRhdGUocyk7DQorICAgICAgICBzLT5ody5jbW9zX2RhdGFb
UlRDX1JFR19BXSA9IChkYXRhICYgflJUQ19VSVApIHwgKG9yaWcgJiBSVENfVUlQKTsNCisgICAg
ICAgIGlmICggKGRhdGEgXiBvcmlnKSAmIChSVENfUkFURV9TRUxFQ1QgfCBSVENfRElWX0NUTCkg
KQ0KKyAgICAgICAgICAgIHJ0Y190aW1lcl91cGRhdGUocyk7DQogICAgICAgICBicmVhazsNCiAg
ICAgY2FzZSBSVENfUkVHX0I6DQogICAgICAgICBpZiAoIGRhdGEgJiBSVENfU0VUICkNCkBAIC00
MTUsNyArNDIzLDcgQEAgc3RhdGljIGludCBydGNfaW9wb3J0X3dyaXRlKHZvaWQgKm9wYXF1ZQ0K
ICAgICAgICAgICAgIC8qIHNldCBtb2RlOiByZXNldCBVSVAgbW9kZSAqLw0KICAgICAgICAgICAg
IHMtPmh3LmNtb3NfZGF0YVtSVENfUkVHX0FdICY9IH5SVENfVUlQOw0KICAgICAgICAgICAgIC8q
IGFkanVzdCBjbW9zIGJlZm9yZSBzdG9wcGluZyAqLw0KLSAgICAgICAgICAgIGlmICghKHMtPmh3
LmNtb3NfZGF0YVtSVENfUkVHX0JdICYgUlRDX1NFVCkpDQorICAgICAgICAgICAgaWYgKCEob3Jp
ZyAmIFJUQ19TRVQpKQ0KICAgICAgICAgICAgIHsNCiAgICAgICAgICAgICAgICAgcy0+Y3VycmVu
dF90bSA9IGdtdGltZShnZXRfbG9jYWx0aW1lKGQpKTsNCiAgICAgICAgICAgICAgICAgcnRjX2Nv
cHlfZGF0ZShzKTsNCkBAIC00MjQsMjEgKzQzMiwyNiBAQCBzdGF0aWMgaW50IHJ0Y19pb3BvcnRf
d3JpdGUodm9pZCAqb3BhcXVlDQogICAgICAgICBlbHNlDQogICAgICAgICB7DQogICAgICAgICAg
ICAgLyogaWYgZGlzYWJsaW5nIHNldCBtb2RlLCB1cGRhdGUgdGhlIHRpbWUgKi8NCi0gICAgICAg
ICAgICBpZiAoIHMtPmh3LmNtb3NfZGF0YVtSVENfUkVHX0JdICYgUlRDX1NFVCApDQorICAgICAg
ICAgICAgaWYgKCBvcmlnICYgUlRDX1NFVCApDQogICAgICAgICAgICAgICAgIHJ0Y19zZXRfdGlt
ZShzKTsNCiAgICAgICAgIH0NCi0gICAgICAgIC8qIGlmIHRoZSBpbnRlcnJ1cHQgaXMgYWxyZWFk
eSBzZXQgd2hlbiB0aGUgaW50ZXJydXB0IGJlY29tZQ0KLSAgICAgICAgICogZW5hYmxlZCwgcmFp
c2UgYW4gaW50ZXJydXB0IGltbWVkaWF0ZWx5Ki8NCi0gICAgICAgIGlmICgoZGF0YSAmIFJUQ19V
SUUpICYmICEocy0+aHcuY21vc19kYXRhW1JUQ19SRUdfQl0gJiBSVENfVUlFKSkNCi0gICAgICAg
ICAgICBpZiAocy0+aHcuY21vc19kYXRhW1JUQ19SRUdfQ10gJiBSVENfVUYpDQorICAgICAgICAv
Kg0KKyAgICAgICAgICogSWYgdGhlIGludGVycnVwdCBpcyBhbHJlYWR5IHNldCB3aGVuIHRoZSBp
bnRlcnJ1cHQgYmVjb21lcw0KKyAgICAgICAgICogZW5hYmxlZCwgcmFpc2UgYW4gaW50ZXJydXB0
IGltbWVkaWF0ZWx5Lg0KKyAgICAgICAgICogTkI6IFJUQ197QSxQLFV9SUUgPT0gUlRDX3tBLFAs
VX1GIHJlc3BlY3RpdmVseS4NCisgICAgICAgICAqLw0KKyAgICAgICAgZm9yICggbWFzayA9IFJU
Q19VSUU7IG1hc2sgPD0gUlRDX1BJRTsgbWFzayA8PD0gMSApDQorICAgICAgICAgICAgaWYgKCAo
ZGF0YSAmIG1hc2spICYmICEob3JpZyAmIG1hc2spICYmDQorICAgICAgICAgICAgICAgICAocy0+
aHcuY21vc19kYXRhW1JUQ19SRUdfQ10gJiBtYXNrKSApDQogICAgICAgICAgICAgew0KLSAgICAg
ICAgICAgICAgICBodm1faXNhX2lycV9kZWFzc2VydChkLCBSVENfSVJRKTsNCi0gICAgICAgICAg
ICAgICAgaHZtX2lzYV9pcnFfYXNzZXJ0KGQsIFJUQ19JUlEpOw0KKyAgICAgICAgICAgICAgICBy
dGNfdG9nZ2xlX2lycShzKTsNCisgICAgICAgICAgICAgICAgYnJlYWs7DQogICAgICAgICAgICAg
fQ0KICAgICAgICAgcy0+aHcuY21vc19kYXRhW1JUQ19SRUdfQl0gPSBkYXRhOw0KLSAgICAgICAg
cnRjX3RpbWVyX3VwZGF0ZShzKTsNCi0gICAgICAgIGNoZWNrX3VwZGF0ZV90aW1lcihzKTsNCi0g
ICAgICAgIGFsYXJtX3RpbWVyX3VwZGF0ZShzKTsNCisgICAgICAgIGlmICggKGRhdGEgXiBvcmln
KSAmIFJUQ19TRVQgKQ0KKyAgICAgICAgICAgIGNoZWNrX3VwZGF0ZV90aW1lcihzKTsNCisgICAg
ICAgIGlmICggKGRhdGEgXiBvcmlnKSAmIChSVENfMjRIIHwgUlRDX0RNX0JJTkFSWSB8IFJUQ19T
RVQpICkNCisgICAgICAgICAgICBhbGFybV90aW1lcl91cGRhdGUocyk7DQogICAgICAgICBicmVh
azsNCiAgICAgY2FzZSBSVENfUkVHX0M6DQogICAgIGNhc2UgUlRDX1JFR19EOg0KQEAgLTQ1Myw3
ICs0NjYsNyBAQCBzdGF0aWMgaW50IHJ0Y19pb3BvcnRfd3JpdGUodm9pZCAqb3BhcXVlDQoNCiBz
dGF0aWMgaW5saW5lIGludCB0b19iY2QoUlRDU3RhdGUgKnMsIGludCBhKQ0KIHsNCi0gICAgaWYg
KCBzLT5ody5jbW9zX2RhdGFbUlRDX1JFR19CXSAmIDB4MDQgKQ0KKyAgICBpZiAoIHMtPmh3LmNt
b3NfZGF0YVtSVENfUkVHX0JdICYgUlRDX0RNX0JJTkFSWSApDQogICAgICAgICByZXR1cm4gYTsN
CiAgICAgZWxzZQ0KICAgICAgICAgcmV0dXJuICgoYSAvIDEwKSA8PCA0KSB8IChhICUgMTApOw0K
QEAgLTQ2MSw3ICs0NzQsNyBAQCBzdGF0aWMgaW5saW5lIGludCB0b19iY2QoUlRDU3RhdGUgKnMs
IGluDQoNCiBzdGF0aWMgaW5saW5lIGludCBmcm9tX2JjZChSVENTdGF0ZSAqcywgaW50IGEpDQog
ew0KLSAgICBpZiAoIHMtPmh3LmNtb3NfZGF0YVtSVENfUkVHX0JdICYgMHgwNCApDQorICAgIGlm
ICggcy0+aHcuY21vc19kYXRhW1JUQ19SRUdfQl0gJiBSVENfRE1fQklOQVJZICkNCiAgICAgICAg
IHJldHVybiBhOw0KICAgICBlbHNlDQogICAgICAgICByZXR1cm4gKChhID4+IDQpICogMTApICsg
KGEgJiAweDBmKTsNCkBAIC00NjksMTIgKzQ4MiwxNCBAQCBzdGF0aWMgaW5saW5lIGludCBmcm9t
X2JjZChSVENTdGF0ZSAqcywgDQoNCiAvKiBIb3VycyBpbiAxMiBob3VyIG1vZGUgYXJlIGluIDEt
MTIgcmFuZ2UsIG5vdCAwLTExLg0KICAqIFNvIHdlIG5lZWQgY29udmVydCBpdCBiZWZvcmUgdXNp
bmcgaXQqLw0KLXN0YXRpYyBpbmxpbmUgaW50IGNvbnZlcnRfaG91cihSVENTdGF0ZSAqcywgaW50
IGhvdXIpDQorc3RhdGljIGlubGluZSBpbnQgY29udmVydF9ob3VyKFJUQ1N0YXRlICpzLCBpbnQg
cmF3KQ0KIHsNCisgICAgaW50IGhvdXIgPSBmcm9tX2JjZChzLCByYXcgJiAweDdmKTsNCisNCiAg
ICAgaWYgKCEocy0+aHcuY21vc19kYXRhW1JUQ19SRUdfQl0gJiBSVENfMjRIKSkNCiAgICAgew0K
ICAgICAgICAgaG91ciAlPSAxMjsNCi0gICAgICAgIGlmIChzLT5ody5jbW9zX2RhdGFbUlRDX0hP
VVJTXSAmIDB4ODApDQorICAgICAgICBpZiAocmF3ICYgMHg4MCkNCiAgICAgICAgICAgICBob3Vy
ICs9IDEyOw0KICAgICB9DQogICAgIHJldHVybiBob3VyOw0KQEAgLTQ5Myw4ICs1MDgsNyBAQCBz
dGF0aWMgdm9pZCBydGNfc2V0X3RpbWUoUlRDU3RhdGUgKnMpDQogICAgIA0KICAgICB0bS0+dG1f
c2VjID0gZnJvbV9iY2Qocywgcy0+aHcuY21vc19kYXRhW1JUQ19TRUNPTkRTXSk7DQogICAgIHRt
LT50bV9taW4gPSBmcm9tX2JjZChzLCBzLT5ody5jbW9zX2RhdGFbUlRDX01JTlVURVNdKTsNCi0g
ICAgdG0tPnRtX2hvdXIgPSBmcm9tX2JjZChzLCBzLT5ody5jbW9zX2RhdGFbUlRDX0hPVVJTXSAm
IDB4N2YpOw0KLSAgICB0bS0+dG1faG91ciA9IGNvbnZlcnRfaG91cihzLCB0bS0+dG1faG91cik7
DQorICAgIHRtLT50bV9ob3VyID0gY29udmVydF9ob3VyKHMsIHMtPmh3LmNtb3NfZGF0YVtSVENf
SE9VUlNdKTsNCiAgICAgdG0tPnRtX3dkYXkgPSBmcm9tX2JjZChzLCBzLT5ody5jbW9zX2RhdGFb
UlRDX0RBWV9PRl9XRUVLXSk7DQogICAgIHRtLT50bV9tZGF5ID0gZnJvbV9iY2Qocywgcy0+aHcu
Y21vc19kYXRhW1JUQ19EQVlfT0ZfTU9OVEhdKTsNCiAgICAgdG0tPnRtX21vbiA9IGZyb21fYmNk
KHMsIHMtPmh3LmNtb3NfZGF0YVtSVENfTU9OVEhdKSAtIDE7DQotLS0gYS94ZW4vYXJjaC94ODYv
aHZtL3ZwdC5jDQorKysgYi94ZW4vYXJjaC94ODYvaHZtL3ZwdC5jDQpAQCAtMjIsNiArMjIsNyBA
QA0KICNpbmNsdWRlIDxhc20vaHZtL3ZwdC5oPg0KICNpbmNsdWRlIDxhc20vZXZlbnQuaD4NCiAj
aW5jbHVkZSA8YXNtL2FwaWMuaD4NCisjaW5jbHVkZSA8YXNtL21jMTQ2ODE4cnRjLmg+DQoNCiAj
ZGVmaW5lIG1vZGVfaXMoZCwgbmFtZSkgXA0KICAgICAoKGQpLT5hcmNoLmh2bV9kb21haW4ucGFy
YW1zW0hWTV9QQVJBTV9USU1FUl9NT0RFXSA9PSBIVk1QVE1fIyNuYW1lKQ0KQEAgLTIxOCw2ICsy
MTksNyBAQCB2b2lkIHB0X3VwZGF0ZV9pcnEoc3RydWN0IHZjcHUgKnYpDQogICAgIHN0cnVjdCBw
ZXJpb2RpY190aW1lICpwdCwgKnRlbXAsICplYXJsaWVzdF9wdCA9IE5VTEw7DQogICAgIHVpbnQ2
NF90IG1heF9sYWcgPSAtMVVMTDsNCiAgICAgaW50IGlycSwgaXNfbGFwaWM7DQorICAgIHZvaWQg
KnB0X3ByaXY7DQoNCiAgICAgc3Bpbl9sb2NrKCZ2LT5hcmNoLmh2bV92Y3B1LnRtX2xvY2spOw0K
DQpAQCAtMjUxLDEzICsyNTMsMTQgQEAgdm9pZCBwdF91cGRhdGVfaXJxKHN0cnVjdCB2Y3B1ICp2
KQ0KICAgICBlYXJsaWVzdF9wdC0+aXJxX2lzc3VlZCA9IDE7DQogICAgIGlycSA9IGVhcmxpZXN0
X3B0LT5pcnE7DQogICAgIGlzX2xhcGljID0gKGVhcmxpZXN0X3B0LT5zb3VyY2UgPT0gUFRTUkNf
bGFwaWMpOw0KKyAgICBwdF9wcml2ID0gZWFybGllc3RfcHQtPnByaXY7DQoNCiAgICAgc3Bpbl91
bmxvY2soJnYtPmFyY2guaHZtX3ZjcHUudG1fbG9jayk7DQoNCiAgICAgaWYgKCBpc19sYXBpYyAp
DQotICAgIHsNCiAgICAgICAgIHZsYXBpY19zZXRfaXJxKHZjcHVfdmxhcGljKHYpLCBpcnEsIDAp
Ow0KLSAgICB9DQorICAgIGVsc2UgaWYgKCBpcnEgPT0gUlRDX0lSUSApDQorICAgICAgICBydGNf
cGVyaW9kaWNfaW50ZXJydXB0KHB0X3ByaXYpOw0KICAgICBlbHNlDQogICAgIHsNCiAgICAgICAg
IGh2bV9pc2FfaXJxX2RlYXNzZXJ0KHYtPmRvbWFpbiwgaXJxKTsNCi0tLSBhL3hlbi9pbmNsdWRl
L2FzbS14ODYvaHZtL3ZwdC5oDQorKysgYi94ZW4vaW5jbHVkZS9hc20teDg2L2h2bS92cHQuaA0K
QEAgLTE4MSw2ICsxODEsNyBAQCB2b2lkIHJ0Y19taWdyYXRlX3RpbWVycyhzdHJ1Y3QgdmNwdSAq
dik7DQogdm9pZCBydGNfZGVpbml0KHN0cnVjdCBkb21haW4gKmQpOw0KIHZvaWQgcnRjX3Jlc2V0
KHN0cnVjdCBkb21haW4gKmQpOw0KIHZvaWQgcnRjX3VwZGF0ZV9jbG9jayhzdHJ1Y3QgZG9tYWlu
ICpkKTsNCit2b2lkIHJ0Y19wZXJpb2RpY19pbnRlcnJ1cHQodm9pZCAqKTsNCg0KIHZvaWQgcG10
aW1lcl9pbml0KHN0cnVjdCB2Y3B1ICp2KTsNCiB2b2lkIHBtdGltZXJfZGVpbml0KHN0cnVjdCBk
b21haW4gKmQpOw0KDQoNCl9fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fX19fDQpYZW4tZGV2ZWwgbWFpbGluZyBsaXN0DQpYZW4tZGV2ZWxAbGlzdHMueGVuLm9yZw0K
aHR0cDovL2xpc3RzLnhlbi5vcmcveGVuLWRldmVs

------=_002_NextPart316018485136_=----
Content-Type: text/html;
	charset="gb2312"
Content-Transfer-Encoding: quoted-printable

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<HTML><HEAD>
<META http-equiv=3DContent-Type content=3D"text/html; charset=3Dgb2312">
<STYLE>
BLOCKQUOTE {
	MARGIN-TOP: 0px; MARGIN-BOTTOM: 0px; MARGIN-LEFT: 2em
}
OL {
	MARGIN-TOP: 0px; MARGIN-BOTTOM: 0px
}
UL {
	MARGIN-TOP: 0px; MARGIN-BOTTOM: 0px
}
P {
	MARGIN-TOP: 0px; MARGIN-BOTTOM: 0px
}
BODY {
	FONT-SIZE: 10.5pt; COLOR: #000080; LINE-HEIGHT: 1.5; FONT-FAMILY: =CB=CE=
=CC=E5
}
</STYLE>

<META content=3D"MSHTML 6.00.2900.5512" name=3DGENERATOR></HEAD>
<BODY style=3D"MARGIN: 10px">
<DIV>Hi, Jan:</DIV>
<DIV>I am sorry, but I don't have much time to test your patch, and</DIV>
<DIV>it is not convenient for me to try it directly. The version I am</DIV>
<DIV>using is Xen 4.0.x, while your patch is based on the latest,</DIV>
<DIV>Xen 4.2.x (I have never compiled the unstable tree), so I merged</DIV>
<DIV>your patch into my Xen 4.0.x but still couldn't find the two</DIV>
<DIV>functions below:</DIV>
<DIV>&nbsp;static&nbsp;void&nbsp;rtc_update_timer2(void&nbsp;*opaque) </DI=
V>
<DIV>&nbsp;static&nbsp;void&nbsp;rtc_alarm_cb(void&nbsp;*opaque) </DIV>
<DIV>so I did not merge those two functions, which contain calls to</DIV>
<DIV>rtc_toggle_irq().</DIV>
<DIV>&nbsp;</DIV>
<DIV>The results for me were these:</DIV>
<DIV>1 In my real application environment it worked very well for the</DIV>
<DIV>&nbsp;first 5 minutes, much better than before, but eventually it</DIV>
<DIV>&nbsp;lagged again. I don't know whether this is caused by the two</DIV>
<DIV>&nbsp;missing functions; I lack the ability to figure that out.</DIV>
<DIV style=3D"TEXT-INDENT: 2em">&nbsp;</DIV>
<DIV>2 When I ran the test program I provided a few days ago, it</DIV>
<DIV>worked very well. Perhaps it doesn't emulate the real environment</DIV>
<DIV>because it always sets the same rate, so I modified it as in the</DIV>
<DIV>attachment. If convenient, please have a look at it.</DIV>
<DIV>Also, a request: since our product is based on Xen 4.0.x, if you</DIV>
<DIV>have enough time, could you write another patch based on <A=20
href=3D"http://xenbits.xen.org/hg/xen-4.0-testing.hg/">http://xenbits.xen.=
org/hg/xen-4.0-testing.hg/</A>&nbsp;for=20
me? Thank you very much!</DIV>
<DIV>&nbsp;</DIV>
<DIV>3 One more thought: could we add some detection in the code that</DIV>
<DIV>notices the lag earlier and adjusts the time back to normal?</DIV>
<DIV>&nbsp;</DIV>
<DIV>best regards,</DIV>
<DIV style=3D"TEXT-INDENT: 2em">&nbsp;</DIV>
<DIV style=3D"TEXT-INDENT: 2em">
<HR style=3D"WIDTH: 210px; HEIGHT: 1px" align=3Dleft color=3D#b5c4df SIZE=
=3D1>
</DIV>
<DIV><SPAN>tupeng212</SPAN></DIV>
<DIV>
<DIV>&nbsp;</DIV></DIV>
<DIV=20
style=3D"BORDER-RIGHT: medium none; PADDING-RIGHT: 0cm; BORDER-TOP: #b5c4d=
f 1pt solid; PADDING-LEFT: 0cm; PADDING-BOTTOM: 0cm; BORDER-LEFT: medium n=
one; PADDING-TOP: 3pt; BORDER-BOTTOM: medium none">
<DIV=20
style=3D"PADDING-RIGHT: 8px; PADDING-LEFT: 8px; FONT-SIZE: 12px; BACKGROUN=
D: #efefef; PADDING-BOTTOM: 8px; COLOR: #000000; PADDING-TOP: 8px">
<DIV>
<DIV>Second&nbsp;draft&nbsp;of&nbsp;a&nbsp;patch&nbsp;posted;&nbsp;no&nbsp=
;test&nbsp;results&nbsp;so&nbsp;far&nbsp;for&nbsp;first&nbsp;draft.</DIV>
<DIV>Jan</DIV></DIV>
<DIV><B></B>&nbsp;</DIV>
<DIV><B>From:</B>&nbsp;<A href=3D"mailto:JBeulich@suse.com">Jan Beulich</A=
></DIV>
<DIV><B>Date:</B>&nbsp;2012-08-14&nbsp;17:51</DIV>
<DIV><B>To:</B>&nbsp;<A href=3D"mailto:yang.z.zhang@intel.com">Yang Z Zhan=
g</A>;=20
<A href=3D"mailto:keir@xen.org">Keir Fraser</A>; <A href=3D"mailto:tim@xen=
.org">Tim=20
Deegan</A></DIV>
<DIV><B>CC:</B>&nbsp;<A href=3D"mailto:tupeng212@gmail.com">tupeng212</A>;=
 <A=20
href=3D"mailto:xen-devel@lists.xen.org">xen-devel</A></DIV>
<DIV><B>Subject:</B>&nbsp;[Xen-devel] [PATCH, RFC v2] x86/HVM: assorted RT=
C=20
emulation adjustments (was Re: Big Bug:Time in VM goes=20
slower...)</DIV></DIV></DIV>
<DIV>
<DIV>Below/attached&nbsp;a&nbsp;second&nbsp;draft&nbsp;of&nbsp;a&nbsp;patc=
h&nbsp;to&nbsp;fix&nbsp;not&nbsp;only&nbsp;this</DIV>
<DIV>issue,&nbsp;but&nbsp;a&nbsp;few&nbsp;more&nbsp;with&nbsp;the&nbsp;RTC=
&nbsp;emulation.</DIV>
<DIV>&nbsp;</DIV>
<DIV>Keir,&nbsp;Tim,&nbsp;Yang,&nbsp;others&nbsp;-&nbsp;the&nbsp;change&nb=
sp;to&nbsp;xen/arch/x86/hvm/vpt.c&nbsp;really</DIV>
<DIV>looks&nbsp;more&nbsp;like&nbsp;a&nbsp;hack&nbsp;than&nbsp;a&nbsp;solu=
tion,&nbsp;but&nbsp;I&nbsp;don't&nbsp;see&nbsp;another</DIV>
<DIV>way&nbsp;without&nbsp;much&nbsp;more&nbsp;intrusive&nbsp;changes.&nbs=
p;The&nbsp;point&nbsp;is&nbsp;that&nbsp;we</DIV>
<DIV>want&nbsp;the&nbsp;RTC&nbsp;code&nbsp;to&nbsp;decide&nbsp;whether&nbs=
p;to&nbsp;generate&nbsp;an&nbsp;interrupt</DIV>
<DIV>(so&nbsp;that&nbsp;RTC_PF&nbsp;can&nbsp;become&nbsp;set&nbsp;correctl=
y&nbsp;even&nbsp;without&nbsp;RTC_PIE</DIV>
<DIV>getting&nbsp;enabled&nbsp;by&nbsp;the&nbsp;guest).</DIV>
<DIV>&nbsp;</DIV>
<DIV>Additionally&nbsp;I&nbsp;wonder&nbsp;whether&nbsp;alarm_timer_update(=
)&nbsp;shouldn't</DIV>
<DIV>bail&nbsp;on&nbsp;non-conforming&nbsp;RTC_*_ALARM&nbsp;values&nbsp;(a=
s&nbsp;those&nbsp;would</DIV>
<DIV>never&nbsp;match&nbsp;the&nbsp;values&nbsp;they&nbsp;get&nbsp;compare=
d&nbsp;against,&nbsp;whereas</DIV>
<DIV>with&nbsp;the&nbsp;current&nbsp;way&nbsp;of&nbsp;handling&nbsp;this&n=
bsp;they&nbsp;would&nbsp;appear&nbsp;to</DIV>
<DIV>match&nbsp;-&nbsp;i.e.&nbsp;set&nbsp;RTC_AF&nbsp;and&nbsp;possibly&nb=
sp;generate&nbsp;an&nbsp;interrupt&nbsp;-</DIV>
<DIV>some&nbsp;other&nbsp;point&nbsp;in&nbsp;time).&nbsp;I&nbsp;realize&nb=
sp;the&nbsp;behavior&nbsp;here&nbsp;may&nbsp;not</DIV>
<DIV>be&nbsp;precisely&nbsp;specified,&nbsp;but&nbsp;the&nbsp;specificatio=
n&nbsp;saying&nbsp;"the&nbsp;current</DIV>
<DIV>time&nbsp;has&nbsp;matched&nbsp;the&nbsp;alarm&nbsp;time"&nbsp;means&=
nbsp;to&nbsp;me&nbsp;a&nbsp;value&nbsp;by&nbsp;value</DIV>
<DIV>comparison,&nbsp;which&nbsp;implies&nbsp;that&nbsp;non-conforming&nbs=
p;values&nbsp;would</DIV>
<DIV>never&nbsp;match&nbsp;(since&nbsp;non-conforming&nbsp;current&nbsp;ti=
me&nbsp;values&nbsp;could</DIV>
<DIV>get&nbsp;replaced&nbsp;at&nbsp;any&nbsp;time&nbsp;by&nbsp;the&nbsp;ha=
rdware&nbsp;due&nbsp;to&nbsp;overflow</DIV>
<DIV>detection).</DIV>
<DIV>&nbsp;</DIV>
<DIV>Jan</DIV>
<DIV>&nbsp;</DIV>
<DIV>-&nbsp;don't&nbsp;call&nbsp;rtc_timer_update()&nbsp;on&nbsp;REG_A&nbs=
p;writes&nbsp;when&nbsp;the&nbsp;value&nbsp;didn't</DIV>
<DIV>&nbsp;&nbsp;change&nbsp;(doing&nbsp;the&nbsp;call&nbsp;always&nbsp;wa=
s&nbsp;reported&nbsp;to&nbsp;cause&nbsp;wall&nbsp;clock&nbsp;time</DIV>
<DIV>&nbsp;&nbsp;lagging&nbsp;with&nbsp;the&nbsp;JVM&nbsp;running&nbsp;on&=
nbsp;Windows)</DIV>
<DIV>-&nbsp;don't&nbsp;call&nbsp;rtc_timer_update()&nbsp;on&nbsp;REG_B&nbs=
p;writes&nbsp;at&nbsp;all</DIV>
<DIV>-&nbsp;only&nbsp;call&nbsp;alarm_timer_update()&nbsp;on&nbsp;REG_B&nb=
sp;writes&nbsp;when&nbsp;relevant&nbsp;bits</DIV>
<DIV>&nbsp;&nbsp;change</DIV>
<DIV>-&nbsp;only&nbsp;call&nbsp;check_update_timer()&nbsp;on&nbsp;REG_B&nb=
sp;writes&nbsp;when&nbsp;SET&nbsp;changes</DIV>
<DIV>-&nbsp;instead&nbsp;properly&nbsp;handle&nbsp;AF&nbsp;and&nbsp;PF&nbs=
p;when&nbsp;the&nbsp;guest&nbsp;is&nbsp;not&nbsp;also&nbsp;setting</DIV>
<DIV>&nbsp;&nbsp;AIE/PIE&nbsp;respectively&nbsp;(for&nbsp;UF&nbsp;this&nbs=
p;was&nbsp;already&nbsp;the&nbsp;case,&nbsp;only&nbsp;a</DIV>
<DIV>&nbsp;&nbsp;comment&nbsp;was&nbsp;slightly&nbsp;inaccurate)</DIV>
<DIV>-&nbsp;raise&nbsp;the&nbsp;RTC&nbsp;IRQ&nbsp;not&nbsp;only&nbsp;when&=
nbsp;UIE&nbsp;gets&nbsp;set&nbsp;while&nbsp;UF&nbsp;was&nbsp;already</DIV>
<DIV>&nbsp;&nbsp;set,&nbsp;but&nbsp;generalize&nbsp;this&nbsp;to&nbsp;cove=
r&nbsp;AIE&nbsp;and&nbsp;PIE&nbsp;as&nbsp;well</DIV>
<DIV>-&nbsp;properly&nbsp;mask&nbsp;off&nbsp;bit&nbsp;7&nbsp;when&nbsp;ret=
rieving&nbsp;the&nbsp;hour&nbsp;values&nbsp;in</DIV>
<DIV>&nbsp;&nbsp;alarm_timer_update(),&nbsp;and&nbsp;properly&nbsp;use&nbs=
p;RTC_HOURS_ALARM's&nbsp;bit&nbsp;7&nbsp;when</DIV>
<DIV>&nbsp;&nbsp;converting&nbsp;from&nbsp;12-&nbsp;to&nbsp;24-hour&nbsp;v=
alue</DIV>
<DIV>-&nbsp;also&nbsp;handle&nbsp;the&nbsp;two&nbsp;other&nbsp;possible&nb=
sp;clock&nbsp;bases</DIV>
<DIV>-&nbsp;use&nbsp;RTC_*&nbsp;names&nbsp;in&nbsp;a&nbsp;couple&nbsp;of&n=
bsp;places&nbsp;where&nbsp;literal&nbsp;numbers&nbsp;were&nbsp;used</DIV>
<DIV>&nbsp;&nbsp;so&nbsp;far</DIV>
<DIV>&nbsp;</DIV>
<DIV>---&nbsp;a/xen/arch/x86/hvm/rtc.c</DIV>
<DIV>+++&nbsp;b/xen/arch/x86/hvm/rtc.c</DIV>
<DIV>@@&nbsp;-50,11&nbsp;+50,24&nbsp;@@&nbsp;static&nbsp;void&nbsp;rtc_set=
_time(RTCState&nbsp;*s);</DIV>
<DIV>&nbsp;static&nbsp;inline&nbsp;int&nbsp;from_bcd(RTCState&nbsp;*s,&nbs=
p;int&nbsp;a);</DIV>
<DIV>&nbsp;static&nbsp;inline&nbsp;int&nbsp;convert_hour(RTCState&nbsp;*s,=
&nbsp;int&nbsp;hour);</DIV>
<DIV>&nbsp;</DIV>
<DIV>-static&nbsp;void&nbsp;rtc_periodic_cb(struct&nbsp;vcpu&nbsp;*v,&nbsp=
;void&nbsp;*opaque)</DIV>
<DIV>+static&nbsp;void&nbsp;rtc_toggle_irq(RTCState&nbsp;*s)</DIV>
<DIV>+{</DIV>
<DIV>+&nbsp;&nbsp;&nbsp;&nbsp;struct&nbsp;domain&nbsp;*d&nbsp;=3D&nbsp;vrt=
c_domain(s);</DIV>
<DIV>+</DIV>
<DIV>+&nbsp;&nbsp;&nbsp;&nbsp;ASSERT(spin_is_locked(&amp;s-&gt;lock));</DI=
V>
<DIV>+&nbsp;&nbsp;&nbsp;&nbsp;s-&gt;hw.cmos_data[RTC_REG_C]&nbsp;|=3D&nbsp=
;RTC_IRQF;</DIV>
<DIV>+&nbsp;&nbsp;&nbsp;&nbsp;hvm_isa_irq_deassert(d,&nbsp;RTC_IRQ);</DIV>
<DIV>+&nbsp;&nbsp;&nbsp;&nbsp;hvm_isa_irq_assert(d,&nbsp;RTC_IRQ);</DIV>
<DIV>+}</DIV>
<DIV>+</DIV>
<DIV>+void&nbsp;rtc_periodic_interrupt(void&nbsp;*opaque)</DIV>
<DIV>&nbsp;{</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;RTCState&nbsp;*s&nbsp;=3D&nbsp;opaque;<=
/DIV>
<DIV>+</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;spin_lock(&amp;s-&gt;lock);</DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;s-&gt;hw.cmos_data[RTC_REG_C]&nbsp;|=3D&nbsp=
;0xc0;</DIV>
<DIV>+&nbsp;&nbsp;&nbsp;&nbsp;s-&gt;hw.cmos_data[RTC_REG_C]&nbsp;|=3D&nbsp=
;RTC_PF;</DIV>
<DIV>+&nbsp;&nbsp;&nbsp;&nbsp;if&nbsp;(&nbsp;s-&gt;hw.cmos_data[RTC_REG_B]=
&nbsp;&amp;&nbsp;RTC_PIE&nbsp;)</DIV>
<DIV>+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;rtc_toggle_irq(s);</=
DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;spin_unlock(&amp;s-&gt;lock);</DIV>
<DIV>&nbsp;}</DIV>
<DIV>&nbsp;</DIV>
<DIV>@@&nbsp;-68,19&nbsp;+81,25&nbsp;@@&nbsp;static&nbsp;void&nbsp;rtc_tim=
er_update(RTCState&nbsp;*s</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;ASSERT(spin_is_locked(&amp;s-&gt;lock))=
;</DIV>
<DIV>&nbsp;</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;period_code&nbsp;=3D&nbsp;s-&gt;hw.cmos=
_data[RTC_REG_A]&nbsp;&amp;&nbsp;RTC_RATE_SELECT;</DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;if&nbsp;(&nbsp;(period_code&nbsp;!=3D&nbsp;0=
)&nbsp;&amp;&amp;&nbsp;(s-&gt;hw.cmos_data[RTC_REG_B]&nbsp;&amp;&nbsp;RTC_=
PIE)&nbsp;)</DIV>
<DIV>+&nbsp;&nbsp;&nbsp;&nbsp;switch&nbsp;(&nbsp;s-&gt;hw.cmos_data[RTC_RE=
G_A]&nbsp;&amp;&nbsp;RTC_DIV_CTL&nbsp;)</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;{</DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;if&nbsp;(&nbsp;perio=
d_code&nbsp;&lt;=3D&nbsp;2&nbsp;)</DIV>
<DIV>+&nbsp;&nbsp;&nbsp;&nbsp;case&nbsp;RTC_REF_CLCK_32KHZ:</DIV>
<DIV>+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;if&nbsp;(&nbsp;(peri=
od_code&nbsp;!=3D&nbsp;0)&nbsp;&amp;&amp;&nbsp;(period_code&nbsp;&lt;=3D&n=
bsp;2)&nbsp;)</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nb=
sp;&nbsp;period_code&nbsp;+=3D&nbsp;7;</DIV>
<DIV>-</DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;period&nbsp;=3D&nbsp=
;1&nbsp;&lt;&lt;&nbsp;(period_code&nbsp;-&nbsp;1);&nbsp;/*&nbsp;period&nbs=
p;in&nbsp;32&nbsp;Khz&nbsp;cycles&nbsp;*/</DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;period&nbsp;=3D&nbsp=
;DIV_ROUND((period&nbsp;*&nbsp;1000000000ULL),&nbsp;32768);&nbsp;/*&nbsp;p=
eriod&nbsp;in&nbsp;ns&nbsp;*/</DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;create_periodic_time=
(v,&nbsp;&amp;s-&gt;pt,&nbsp;period,&nbsp;period,&nbsp;RTC_IRQ,</DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
bsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbs=
p;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;rtc_periodic_cb,&nbsp;s);</DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;}</DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;else</DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;{</DIV>
<DIV>+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;/*&nbsp;fall&nbsp;th=
rough&nbsp;*/</DIV>
<DIV>+&nbsp;&nbsp;&nbsp;&nbsp;case&nbsp;RTC_REF_CLCK_1MHZ:</DIV>
<DIV>+&nbsp;&nbsp;&nbsp;&nbsp;case&nbsp;RTC_REF_CLCK_4MHZ:</DIV>
<DIV>+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;if&nbsp;(&nbsp;perio=
d_code&nbsp;!=3D&nbsp;0&nbsp;)</DIV>
<DIV>+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;{</DIV>
<DIV>+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
bsp;period&nbsp;=3D&nbsp;1&nbsp;&lt;&lt;&nbsp;(period_code&nbsp;-&nbsp;1);=
&nbsp;/*&nbsp;period&nbsp;in&nbsp;32&nbsp;Khz&nbsp;cycles&nbsp;*/</DIV>
<DIV>+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
bsp;period&nbsp;=3D&nbsp;DIV_ROUND(period&nbsp;*&nbsp;1000000000ULL,&nbsp;=
32768);&nbsp;/*&nbsp;in&nbsp;ns&nbsp;*/</DIV>
<DIV>+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
bsp;create_periodic_time(v,&nbsp;&amp;s-&gt;pt,&nbsp;period,&nbsp;period,&=
nbsp;RTC_IRQ,&nbsp;NULL,&nbsp;s);</DIV>
<DIV>+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
bsp;break;</DIV>
<DIV>+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;}</DIV>
<DIV>+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;/*&nbsp;fall&nbsp;th=
rough&nbsp;*/</DIV>
<DIV>+&nbsp;&nbsp;&nbsp;&nbsp;default:</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;destroy_periodi=
c_time(&amp;s-&gt;pt);</DIV>
<DIV>+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;break;</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;}</DIV>
<DIV>&nbsp;}</DIV>
<DIV>&nbsp;</DIV>
<DIV>@@&nbsp;-102,7&nbsp;+121,7&nbsp;@@&nbsp;static&nbsp;void&nbsp;check_u=
pdate_timer(RTCState&nbsp;</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;guest_usec&nbsp=
;=3D&nbsp;get_localtime_us(d)&nbsp;%&nbsp;USEC_PER_SEC;</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;if&nbsp;(guest_=
usec&nbsp;&gt;=3D&nbsp;(USEC_PER_SEC&nbsp;-&nbsp;244))</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;{</DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
bsp;/*&nbsp;RTC&nbsp;is&nbsp;in&nbsp;update&nbsp;cycle&nbsp;when&nbsp;enab=
ling&nbsp;UIE&nbsp;*/</DIV>
<DIV>+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
bsp;/*&nbsp;RTC&nbsp;is&nbsp;in&nbsp;update&nbsp;cycle&nbsp;*/</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nb=
sp;&nbsp;s-&gt;hw.cmos_data[RTC_REG_A]&nbsp;|=3D&nbsp;RTC_UIP;</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nb=
sp;&nbsp;next_update_time&nbsp;=3D&nbsp;(USEC_PER_SEC&nbsp;-&nbsp;guest_us=
ec)&nbsp;*&nbsp;NS_PER_USEC;</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nb=
sp;&nbsp;expire_time&nbsp;=3D&nbsp;NOW()&nbsp;+&nbsp;next_update_time;</DI=
V>
<DIV>@@&nbsp;-144,7&nbsp;+163,6&nbsp;@@&nbsp;static&nbsp;void&nbsp;rtc_upd=
ate_timer(void&nbsp;*opaqu</DIV>
<DIV>&nbsp;static&nbsp;void&nbsp;rtc_update_timer2(void&nbsp;*opaque)</DIV=
>
<DIV>&nbsp;{</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;RTCState&nbsp;*s&nbsp;=3D&nbsp;opaque;<=
/DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;struct&nbsp;domain&nbsp;*d&nbsp;=3D&nbsp;vrt=
c_domain(s);</DIV>
<DIV>&nbsp;</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;spin_lock(&amp;s-&gt;lock);</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;if&nbsp;(!(s-&gt;hw.cmos_data[RTC_REG_B=
]&nbsp;&amp;&nbsp;RTC_SET))</DIV>
<DIV>@@&nbsp;-152,11&nbsp;+170,7&nbsp;@@&nbsp;static&nbsp;void&nbsp;rtc_up=
date_timer2(void&nbsp;*opaq</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;s-&gt;hw.cmos_d=
ata[RTC_REG_C]&nbsp;|=3D&nbsp;RTC_UF;</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;s-&gt;hw.cmos_d=
ata[RTC_REG_A]&nbsp;&amp;=3D&nbsp;~RTC_UIP;</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;if&nbsp;((s-&gt=
;hw.cmos_data[RTC_REG_B]&nbsp;&amp;&nbsp;RTC_UIE))</DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;{</DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
bsp;s-&gt;hw.cmos_data[RTC_REG_C]&nbsp;|=3D&nbsp;RTC_IRQF;</DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
bsp;hvm_isa_irq_deassert(d,&nbsp;RTC_IRQ);</DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
bsp;hvm_isa_irq_assert(d,&nbsp;RTC_IRQ);</DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;}</DIV>
<DIV>+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
bsp;rtc_toggle_irq(s);</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;check_update_ti=
mer(s);</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;}</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;spin_unlock(&amp;s-&gt;lock);</DIV>
<DIV>@@&nbsp;-175,21&nbsp;+189,18&nbsp;@@&nbsp;static&nbsp;void&nbsp;alarm=
_timer_update(RTCState&nbsp;</DIV>
<DIV>&nbsp;</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;stop_timer(&amp;s-&gt;alarm_timer);</DI=
V>
<DIV>&nbsp;</DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;if&nbsp;((s-&gt;hw.cmos_data[RTC_REG_B]&nbsp=
;&amp;&nbsp;RTC_AIE)&nbsp;&amp;&amp;</DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
bsp;!(s-&gt;hw.cmos_data[RTC_REG_B]&nbsp;&amp;&nbsp;RTC_SET))</DIV>
<DIV>+&nbsp;&nbsp;&nbsp;&nbsp;if&nbsp;(&nbsp;!(s-&gt;hw.cmos_data[RTC_REG_=
B]&nbsp;&amp;&nbsp;RTC_SET)&nbsp;)</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;{</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;s-&gt;current_t=
m&nbsp;=3D&nbsp;gmtime(get_localtime(d));</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;rtc_copy_date(s=
);</DIV>
<DIV>&nbsp;</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;alarm_sec&nbsp;=
=3D&nbsp;from_bcd(s,&nbsp;s-&gt;hw.cmos_data[RTC_SECONDS_ALARM]);</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;alarm_min&nbsp;=
=3D&nbsp;from_bcd(s,&nbsp;s-&gt;hw.cmos_data[RTC_MINUTES_ALARM]);</DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;alarm_hour&nbsp;=3D&=
nbsp;from_bcd(s,&nbsp;s-&gt;hw.cmos_data[RTC_HOURS_ALARM]);</DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;alarm_hour&nbsp;=3D&=
nbsp;convert_hour(s,&nbsp;alarm_hour);</DIV>
<DIV>+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;alarm_hour&nbsp;=3D&=
nbsp;convert_hour(s,&nbsp;s-&gt;hw.cmos_data[RTC_HOURS_ALARM]);</DIV>
<DIV>&nbsp;</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;cur_sec&nbsp;=
=3D&nbsp;from_bcd(s,&nbsp;s-&gt;hw.cmos_data[RTC_SECONDS]);</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;cur_min&nbsp;=
=3D&nbsp;from_bcd(s,&nbsp;s-&gt;hw.cmos_data[RTC_MINUTES]);</DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;cur_hour&nbsp;=3D&nb=
sp;from_bcd(s,&nbsp;s-&gt;hw.cmos_data[RTC_HOURS]);</DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;cur_hour&nbsp;=3D&nb=
sp;convert_hour(s,&nbsp;cur_hour);</DIV>
<DIV>+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;cur_hour&nbsp;=3D&nb=
sp;convert_hour(s,&nbsp;s-&gt;hw.cmos_data[RTC_HOURS]);</DIV>
<DIV>&nbsp;</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;next_update_tim=
e&nbsp;=3D&nbsp;USEC_PER_SEC&nbsp;-&nbsp;(get_localtime_us(d)&nbsp;%&nbsp;=
USEC_PER_SEC);</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;next_update_tim=
e&nbsp;=3D&nbsp;next_update_time&nbsp;*&nbsp;NS_PER_USEC&nbsp;+&nbsp;NOW()=
;</DIV>
<DIV>@@&nbsp;-343,7&nbsp;+354,6&nbsp;@@&nbsp;static&nbsp;void&nbsp;alarm_t=
imer_update(RTCState&nbsp;</DIV>
<DIV>&nbsp;static&nbsp;void&nbsp;rtc_alarm_cb(void&nbsp;*opaque)</DIV>
<DIV>&nbsp;{</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;RTCState&nbsp;*s&nbsp;=3D&nbsp;opaque;<=
/DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;struct&nbsp;domain&nbsp;*d&nbsp;=3D&nbsp;vrt=
c_domain(s);</DIV>
<DIV>&nbsp;</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;spin_lock(&amp;s-&gt;lock);</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;if&nbsp;(!(s-&gt;hw.cmos_data[RTC_REG_B=
]&nbsp;&amp;&nbsp;RTC_SET))</DIV>
<DIV>@@&nbsp;-351,11&nbsp;+361,7&nbsp;@@&nbsp;static&nbsp;void&nbsp;rtc_al=
arm_cb(void&nbsp;*opaque)</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;s-&gt;hw.cmos_d=
ata[RTC_REG_C]&nbsp;|=3D&nbsp;RTC_AF;</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;/*&nbsp;alarm&n=
bsp;interrupt&nbsp;*/</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;if&nbsp;(s-&gt;=
hw.cmos_data[RTC_REG_B]&nbsp;&amp;&nbsp;RTC_AIE)</DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;{</DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
bsp;s-&gt;hw.cmos_data[RTC_REG_C]&nbsp;|=3D&nbsp;RTC_IRQF;</DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
bsp;hvm_isa_irq_deassert(d,&nbsp;RTC_IRQ);</DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
bsp;hvm_isa_irq_assert(d,&nbsp;RTC_IRQ);</DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;}</DIV>
<DIV>+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
bsp;rtc_toggle_irq(s);</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;alarm_timer_upd=
ate(s);</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;}</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;spin_unlock(&amp;s-&gt;lock);</DIV>
<DIV>@@&nbsp;-365,6&nbsp;+371,7&nbsp;@@&nbsp;static&nbsp;int&nbsp;rtc_iopo=
rt_write(void&nbsp;*opaque</DIV>
<DIV>&nbsp;{</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;RTCState&nbsp;*s&nbsp;=3D&nbsp;opaque;<=
/DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;struct&nbsp;domain&nbsp;*d&nbsp;=3D&nbs=
p;vrtc_domain(s);</DIV>
<DIV>+&nbsp;&nbsp;&nbsp;&nbsp;uint32_t&nbsp;orig,&nbsp;mask;</DIV>
<DIV>&nbsp;</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;spin_lock(&amp;s-&gt;lock);</DIV>
<DIV>&nbsp;</DIV>
<DIV>@@&nbsp;-382,6&nbsp;+389,7&nbsp;@@&nbsp;static&nbsp;int&nbsp;rtc_iopo=
rt_write(void&nbsp;*opaque</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;return&nbsp;0;<=
/DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;}</DIV>
<DIV>&nbsp;</DIV>
<DIV>+&nbsp;&nbsp;&nbsp;&nbsp;orig&nbsp;=3D&nbsp;s-&gt;hw.cmos_data[s-&gt;=
hw.cmos_index];</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;switch&nbsp;(&nbsp;s-&gt;hw.cmos_index&=
nbsp;)</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;{</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;case&nbsp;RTC_SECONDS_ALARM:</DIV>
<DIV>@@&nbsp;-405,9&nbsp;+413,9&nbsp;@@&nbsp;static&nbsp;int&nbsp;rtc_iopo=
rt_write(void&nbsp;*opaque</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;break;</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;case&nbsp;RTC_REG_A:</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;/*&nbsp;UIP&nbs=
p;bit&nbsp;is&nbsp;read&nbsp;only&nbsp;*/</DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;s-&gt;hw.cmos_data[R=
TC_REG_A]&nbsp;=3D&nbsp;(data&nbsp;&amp;&nbsp;~RTC_UIP)&nbsp;|</DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
bsp;(s-&gt;hw.cmos_data[RTC_REG_A]&nbsp;&amp;&nbsp;RTC_UIP);</DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;rtc_timer_update(s);=
</DIV>
<DIV>+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;s-&gt;hw.cmos_data[R=
TC_REG_A]&nbsp;=3D&nbsp;(data&nbsp;&amp;&nbsp;~RTC_UIP)&nbsp;|&nbsp;(orig&=
nbsp;&amp;&nbsp;RTC_UIP);</DIV>
<DIV>+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;if&nbsp;(&nbsp;(data=
&nbsp;^&nbsp;orig)&nbsp;&amp;&nbsp;(RTC_RATE_SELECT&nbsp;|&nbsp;RTC_DIV_CT=
L)&nbsp;)</DIV>
<DIV>+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
bsp;rtc_timer_update(s);</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;break;</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;case&nbsp;RTC_REG_B:</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;if&nbsp;(&nbsp;=
data&nbsp;&amp;&nbsp;RTC_SET&nbsp;)</DIV>
<DIV>@@&nbsp;-415,7&nbsp;+423,7&nbsp;@@&nbsp;static&nbsp;int&nbsp;rtc_iopo=
rt_write(void&nbsp;*opaque</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nb=
sp;&nbsp;/*&nbsp;set&nbsp;mode:&nbsp;reset&nbsp;UIP&nbsp;mode&nbsp;*/</DIV=
>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nb=
sp;&nbsp;s-&gt;hw.cmos_data[RTC_REG_A]&nbsp;&amp;=3D&nbsp;~RTC_UIP;</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nb=
sp;&nbsp;/*&nbsp;adjust&nbsp;cmos&nbsp;before&nbsp;stopping&nbsp;*/</DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
bsp;if&nbsp;(!(s-&gt;hw.cmos_data[RTC_REG_B]&nbsp;&amp;&nbsp;RTC_SET))</DI=
V>
<DIV>+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
bsp;if&nbsp;(!(orig&nbsp;&amp;&nbsp;RTC_SET))</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nb=
sp;&nbsp;{</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nb=
sp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;s-&gt;current_tm&nbsp;=3D&nbsp;gmtime(get=
_localtime(d));</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nb=
sp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;rtc_copy_date(s);</DIV>
<DIV>@@&nbsp;-424,21&nbsp;+432,26&nbsp;@@&nbsp;static&nbsp;int&nbsp;rtc_io=
port_write(void&nbsp;*opaque</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;else</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;{</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nb=
sp;&nbsp;/*&nbsp;if&nbsp;disabling&nbsp;set&nbsp;mode,&nbsp;update&nbsp;th=
e&nbsp;time&nbsp;*/</DIV>
<DIV>-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
-            if ( s->hw.cmos_data[RTC_REG_B] & RTC_SET )
+            if ( orig & RTC_SET )
                 rtc_set_time(s);
         }
-        /* if the interrupt is already set when the interrupt become
-         * enabled, raise an interrupt immediately*/
-        if ((data & RTC_UIE) && !(s->hw.cmos_data[RTC_REG_B] & RTC_UIE))
-            if (s->hw.cmos_data[RTC_REG_C] & RTC_UF)
+        /*
+         * If the interrupt is already set when the interrupt becomes
+         * enabled, raise an interrupt immediately.
+         * NB: RTC_{A,P,U}IE == RTC_{A,P,U}F respectively.
+         */
+        for ( mask = RTC_UIE; mask <= RTC_PIE; mask <<= 1 )
+            if ( (data & mask) && !(orig & mask) &&
+                 (s->hw.cmos_data[RTC_REG_C] & mask) )
             {
-                hvm_isa_irq_deassert(d, RTC_IRQ);
-                hvm_isa_irq_assert(d, RTC_IRQ);
+                rtc_toggle_irq(s);
+                break;
             }
         s->hw.cmos_data[RTC_REG_B] = data;
-        rtc_timer_update(s);
-        check_update_timer(s);
-        alarm_timer_update(s);
+        if ( (data ^ orig) & RTC_SET )
+            check_update_timer(s);
+        if ( (data ^ orig) & (RTC_24H | RTC_DM_BINARY | RTC_SET) )
+            alarm_timer_update(s);
         break;
     case RTC_REG_C:
     case RTC_REG_D:
@@ -453,7 +466,7 @@ static int rtc_ioport_write(void *opaque
 
 static inline int to_bcd(RTCState *s, int a)
 {
-    if ( s->hw.cmos_data[RTC_REG_B] & 0x04 )
+    if ( s->hw.cmos_data[RTC_REG_B] & RTC_DM_BINARY )
         return a;
     else
         return ((a / 10) << 4) | (a % 10);
@@ -461,7 +474,7 @@ static inline int to_bcd(RTCState *s, in
 
 static inline int from_bcd(RTCState *s, int a)
 {
-    if ( s->hw.cmos_data[RTC_REG_B] & 0x04 )
+    if ( s->hw.cmos_data[RTC_REG_B] & RTC_DM_BINARY )
         return a;
     else
         return ((a >> 4) * 10) + (a & 0x0f);
@@ -469,12 +482,14 @@ static inline int from_bcd(RTCState *s, 
 
 /* Hours in 12 hour mode are in 1-12 range, not 0-11.
  * So we need convert it before using it*/
-static inline int convert_hour(RTCState *s, int hour)
+static inline int convert_hour(RTCState *s, int raw)
 {
+    int hour = from_bcd(s, raw & 0x7f);
+
     if (!(s->hw.cmos_data[RTC_REG_B] & RTC_24H))
     {
         hour %= 12;
-        if (s->hw.cmos_data[RTC_HOURS] & 0x80)
+        if (raw & 0x80)
             hour += 12;
     }
     return hour;
@@ -493,8 +508,7 @@ static void rtc_set_time(RTCState *s)
     
     tm->tm_sec = from_bcd(s, s->hw.cmos_data[RTC_SECONDS]);
     tm->tm_min = from_bcd(s, s->hw.cmos_data[RTC_MINUTES]);
-    tm->tm_hour = from_bcd(s, s->hw.cmos_data[RTC_HOURS] & 0x7f);
-    tm->tm_hour = convert_hour(s, tm->tm_hour);
+    tm->tm_hour = convert_hour(s, s->hw.cmos_data[RTC_HOURS]);
     tm->tm_wday = from_bcd(s, s->hw.cmos_data[RTC_DAY_OF_WEEK]);
     tm->tm_mday = from_bcd(s, s->hw.cmos_data[RTC_DAY_OF_MONTH]);
     tm->tm_mon = from_bcd(s, s->hw.cmos_data[RTC_MONTH]) - 1;
--- a/xen/arch/x86/hvm/vpt.c
+++ b/xen/arch/x86/hvm/vpt.c
@@ -22,6 +22,7 @@
 #include <asm/hvm/vpt.h>
 #include <asm/event.h>
 #include <asm/apic.h>
+#include <asm/mc146818rtc.h>
 
 #define mode_is(d, name) \
     ((d)->arch.hvm_domain.params[HVM_PARAM_TIMER_MODE] == HVMPTM_##name)
@@ -218,6 +219,7 @@ void pt_update_irq(struct vcpu *v)
     struct periodic_time *pt, *temp, *earliest_pt = NULL;
     uint64_t max_lag = -1ULL;
     int irq, is_lapic;
+    void *pt_priv;
 
     spin_lock(&v->arch.hvm_vcpu.tm_lock);
 
@@ -251,13 +253,14 @@ void pt_update_irq(struct vcpu *v)
     earliest_pt->irq_issued = 1;
     irq = earliest_pt->irq;
     is_lapic = (earliest_pt->source == PTSRC_lapic);
+    pt_priv = earliest_pt->priv;
 
     spin_unlock(&v->arch.hvm_vcpu.tm_lock);
 
     if ( is_lapic )
-    {
         vlapic_set_irq(vcpu_vlapic(v), irq, 0);
-    }
+    else if ( irq == RTC_IRQ )
+        rtc_periodic_interrupt(pt_priv);
     else
     {
         hvm_isa_irq_deassert(v->domain, irq);
--- a/xen/include/asm-x86/hvm/vpt.h
+++ b/xen/include/asm-x86/hvm/vpt.h
@@ -181,6 +181,7 @@ void rtc_migrate_timers(struct vcpu *v);
 void rtc_deinit(struct domain *d);
 void rtc_reset(struct domain *d);
 void rtc_update_clock(struct domain *d);
+void rtc_periodic_interrupt(void *);
 
 void pmtimer_init(struct vcpu *v);
 void pmtimer_deinit(struct domain *d);
 
 
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

FUQ3QgCJFUg3QgCDPUA3QgAAdA2hQDdCAItN/IlIBOsJi1X8iRU4N0IAi0X8iw1AN0IAiQiLVfzH
QgQAAAAAi0X8i00QiUgIi1X8i0UUiUIMi038i1UIiVEQi0X8i00MiUgUi1X8i0X4iUIYi038iQ1A
N0IAagQz0ooVLC5CAFKLRfyDwBxQ6GZWAACDxAxqBDPJig0sLkIAUYtVCItF/I1MECBR6EhWAACD
xAyLVQhSM8CgLi5CAFCLTfyDwSBR6C1WAACDxAyLRfyDwCBfXluL5V3DzMzMzMzMzMzMzMzMzFWL
7GoAagBqAYtFDFCLTQhR6AoAAACDxBRdw8zMzMzMVYvsg+wMi0UMD69FCIlFDItNGFGLVRRSi0UQ
UItNDFHo2/v//4PEEIlF+IN9+AB0KItV+IlV9ItF9ANFDIlF/ItN9DtN/HMRi1X0xgIAi0X0g8AB
iUX06+eLRfiL5V3DVYvsagBqAGoBi0UMUItNCFHoCgAAAIPEFF3DzMzMzMxVi+xRagGLRRhQi00U
UYtVEFKLRQxQi00IUegRAAAAg8QYiUX8i0X8i+Vdw8zMzMxVi+yD7BRTVlfHRewAAAAAg30IAHUd
i0UYUItNFFGLVRBSi0UMUOgl+///g8QQ6dcEAACDfRwAdB2DfQwAdReLTRBRi1UIUuhEBQAAg8QI
M8DptAQAAKEgLkIAg+AEhcB0MOjpCwAAhcB1IWiMCEIAagBoOQIAAGiACEIAagLoHd///4PEFIP4
AXUBzDPJhcl10IsVJC5CAIlV8ItF8DsFKC5CAHUBzItNGFGLVRRSi0XwUItNEFGLVQxSi0UIUGoC
/xWAMUIAg8QchcB1XoN9FAB0K4tNGFGLVRRSaAgKQgBqAGoAagBqAOiy3v//g8Qcg/gBdQHMM8CF
wHXX6yZo5AlCAGggCEIAagBqAGoAagDoit7//4PEGIP4AXUBzDPJhcl12jPA6d4DAACDfQzbdiyL
VQxSaLQJQgBqAGoAagBqAehY3v//g8QYg/gBdQHMM8CFwHXbM8DprAMAAIN9EAF0QotNEIHh//8A
AIP5BHQ0i1UQgeL//wAAg/oCdCZoyAdCAGggCEIAagBqAGoAagHoCd7//4PEGIP4AXUBzDPAhcB1
2otNCFHo4Q4AAIPEBIXAdSFokAlCAGoAaGECAABogAhCAGoC6NLd//+DxBSD+AF1Acwz0oXSdcmL
RQiD6CCJRfiLTfiDeRQDdQfHRewBAAAAg33sAHQ+i1X4gXoMvLrc/nUJi0X4g3gYAHQhaEgJQgBq
AGhrAgAAaIAIQgBqAuh33f//g8QUg/gBdQHMM8mFyXXE62SLVfiLQhQl//8AAIP4AnUVi00QgeH/
/wAAg/kBdQfHRRACAAAAi1X4i0IUJf//AACLTRCB4f//AAA7wXQhaAwJQgBqAGhyAgAAaIAIQgBq
AugR3f//g8QUg/gBdQHMM9KF0nXBg30cAHQli0UMg8AkUItN+FHobFQAAIPECIlF9IN99AB1BzPA
6UMCAADrI4tVDIPCJFKLRfhQ6LdTAACDxAiJRfSDffQAdQczwOkeAgAAiw0kLkIAg8EBiQ0kLkIA
g33sAHVWi1X0oTw3QgArQhCjPDdCAIsNPDdCAANNDIkNPDdCAItV9KFEN0IAK0IQo0Q3QgCLDUQ3
QgADTQyJDUQ3QgCLFUQ3QgA7FUg3QgB2CqFEN0IAo0g3QgCLTfSDwSCJTfyLVfSLRQw7QhB2JItN
9ItVDCtREFIzwKAuLkIAUItN9ItV/ANREFLotFEAAIPEDGoEM8CgLC5CAFCLTfwDTQxR6JtRAACD
xAyDfewAdRuLVfSLRRSJQgiLTfSLVRiJUQyLRfSLTfCJSBiLVfSLRQyJQhCDfRwAdS+DfRwAdQiL
TfQ7Tfh0IWjYCEIAagBoqAIAAGiACEIAagLootv//4PEFIP4AXUBzDPShdJ1xYtF9DtF+HQGg33s
AHQIi0X86ecAAACLTfSDOQB0EItV9IsCi030i1EEiVAE6zyhODdCADtF+HQhaLwIQgBqAGi3AgAA
aIAIQgBqAuhD2///g8QUg/gBdQHMM8mFyXXPi1X0i0IEozg3QgCLTfSDeQQAdA+LVfSLQgSLTfSL
EYkQ6zuhQDdCADtF+HQhaKAIQgBqAGjCAgAAaIAIQgBqAujv2v//g8QUg/gBdQHMM8mFyXXPi1X0
iwKjQDdCAIM9QDdCAAB0DosNQDdCAItV9IlRBOsIi0X0ozg3QgCLTfSLFUA3QgCJEYtF9MdABAAA
AACLTfSJDUA3QgCLRfxfXluL5V3DzMzMzMzMzMzMzMzMzMzMVYvsagBqAGoBi0UMUItNCFHoCgAA
AIPEFF3DzMzMzMxVi+xRagCLRRhQi00UUYtVEFKLRQxQi00IUeih+v//g8QYiUX8i0X8i+Vdw8zM
zMxVi+xqAYtFCFDoEgAAAIPECF3DzMzMzMzMzMzMzMzMzFWL7FFTVlehIC5CAIPgBIXAdDDoqAYA
AIXAdSFojAhCAGoAaOEDAABogAhCAGoC6NzZ//+DxBSD+AF1AcwzyYXJddCDfQgAdQXplwMAAGoA
agBqAItVDFJqAItFCFBqA/8VgDFCAIPEHIXAdStoUAtCAGggCEIAagBqAGoAagDojNn//4PEGIP4
AXUBzDPJhcl12ulNAwAAi1UIUuhfCgAAg8QEhcB1IWiQCUIAagBo8wMAAGiACEIAagLoUNn//4PE
FIP4AXUBzDPAhcB1yYtNCIPpIIlN/ItV/ItCFCX//wAAg/gEdEOLTfyDeRQBdDqLVfyLQhQl//8A
AIP4AnQqi038g3kUA3QhaCgLQgBqAGj5AwAAaIAIQgBqAuju2P//g8QUg/gBdQHMM9KF0nWnoSAu
QgCD4ASFwA+FxQAAAGoEig0sLkIAUYtV/IPCHFLo2gQAAIPEDIXAdUOLRfyDwCBQi038i1EYUotF
/ItIFIHh//8AAIsUjTAuQgBSaPwKQgBqAGoAagBqAeh/2P//g8Qgg/gBdQHMM8CFwHW9agSKDSwu
QgBRi1X8i0IQi038jVQBIFLodAQAAIPEDIXAdUOLRfyDwCBQi038i1EYUotF/ItIFIHh//8AAIsU
jTAuQgBSaNAKQgBqAGoAagBqAegZ2P//g8Qgg/gBdQHMM8CFwHW9i038g3kUA3Vsi1X8gXoMvLrc
/nUJi0X8g3gYAHQhaJAKQgBqAGgOBAAAaIAIQgBqAujU1///g8QUg/gBdQHMM8mFyXXEi1X8i0IQ
g8AkUDPJig0tLkIAUYtV/FLoSU0AAIPEDItF/FDo7VAAAIPEBOlqAQAAi038g3kUAnUNg30MAXUH
x0UMAgAAAItV/ItCFDtFDHQhaHAKQgBqAGgbBAAAaIAIQgBqAuhc1///g8QUg/gBdQHMM8mFyXXO
i1X8oUQ3QgArQhCjRDdCAIsNIC5CAIPhAoXJD4XYAAAAi1X8gzoAdBCLRfyLCItV/ItCBIlBBOs+
iw04N0IAO038dCFoWApCAGoAaCoEAABogAhCAGoC6PHW//+DxBSD+AF1Acwz0oXSdc6LRfyLSASJ
DTg3QgCLVfyDegQAdA+LRfyLSASLVfyLAokB6z2LDUA3QgA7Tfx0IWhACkIAagBoNAQAAGiACEIA
agLom9b//4PEFIP4AXUBzDPShdJ1zotF/IsIiQ1AN0IAi1X8i0IQg8AkUDPJig0tLkIAUYtV/FLo
BUwAAIPEDItF/FDoqU8AAIPEBOspi038x0EUAAAAAItV/ItCEFAzyYoNLS5CAFGLVfyDwiBS6M5L
AACDxAxfXluL5V3DzMzMzFWL7GoBi0UIUOgSAAAAg8QIXcPMzMzMzMzMzMzMzMzMVYvsg+wIU1ZX
oSAuQgCD4ASFwHQw6JYCAACFwHUhaIwIQgBqAGh8BAAAaIAIQgBqAujK1f//g8QUg/gBdQHMM8mF
yXXQi1UIUuiiBgAAg8QEhcB1IWiQCUIAagBohQQAAGiACEIAagLok9X//4PEFIP4AXUBzDPAhcB1
yYtNCIPpIIlN+ItV+ItCFCX//wAAg/gEdEOLTfiDeRQBdDqLVfiLQhQl//8AAIP4AnQqi034g3kU
A3QhaCgLQgBqAGiLBAAAaIAIQgBqAugx1f//g8QUg/gBdQHMM9KF0nWni0X4g3gUAnUNg30MAXUH
x0UMAgAAAItN+IN5FAN0MotV+ItCFDtFDHQhaHAKQgBqAGiSBAAAaIAIQgBqAujg1P//g8QUg/gB
dQHMM8mFyXXOi1X4i0IQiUX8i0X8X15bi+Vdw8zMzMzMzMzMzMzMzMzMVYvsUaEoLkIAiUX8i00I
iQ0oLkIAi0X8i+Vdw8zMzMxVi+xRU1ZXi0UIUOhwBQAAg8QEhcB0a4tNCIPpIIlN/ItV/ItCFCX/
/wAAg/gEdEOLTfyDeRQBdDqLVfyLQhQl//8AAIP4AnQqi038g3kUA3QhaCgLQgBqAGjTBAAAaIAI
QgBqAugm1P//g8QUg/gBdQHMM9KF0nWni0X8i00MiUgUX15bi+Vdw8zMzMzMzMxVi+xRoYAxQgCJ
RfyLTQiJDYAxQgCLRfyL5V3DzMzMzFWL7FFTVlfHRfwBAAAAi0UQi00Qg+kBiU0QhcB0YItVCDPA
igKLTQyB4f8AAACLVQiDwgGJVQg7wXRBi0UMJf8AAABQi00IM9KKUf9Si0UIg+gBUGhsC0IAagBq
AGoAagDoetP//4PEIIP4AXUBzDPJhcl1xsdF/AAAAADrkItF/F9eW4vlXcPMzMzMzMzMzFWL7IPs
GFNWV8dF/AEAAAChIC5CAIPgAYXAdQq4AQAAAOkUAwAA6MVMAACJRfSDffT/D4T9AAAAg330/g+E
8wAAAItN9IlN6ItV6IPCBolV6IN96AMPh60AAACLRej/JIURWEAAaMAMQgBoIAhCAGoAagBqAGoA
6NTS//+DxBiD+AF1AcwzyYXJddrpngAAAGicDEIAaCAIQgBqAGoAagBqAOip0v//g8QYg/gBdQHM
M9KF0nXa63ZoeAxCAGggCEIAagBqAGoAagDogdL//4PEGIP4AXUBzDPAhcB12utOaFQMQgBoIAhC
AGoAagBqAGoA6FnS//+DxBiD+AF1AcwzyYXJddrrJmgoDEIAaCAIQgBqAGoAagBqAOgx0v//g8QY
g/gBdQHMM9KF0nXaM8DpBQIAAKFAN0IAiUX46wiLTfiLEYlV+IN9+AAPhOYBAADHRfABAAAAi0X4
i0gUgeH//wAAg/kEdCOLVfiDehQBdBqLRfiLSBSB4f//AACD+QJ0CYtV+IN6FAN1GItF+ItIFIHh
//8AAIsUjTAuQgCJVezrB8dF7CAMQgBqBKAsLkIAUItN+IPBHFHosf3//4PEDIXAdTqLVfiDwiBS
i0X4i0gYUYtV7FJo/ApCAGoAagBqAGoA6GbR//+DxCCD+AF1AcwzwIXAdc3HRfAAAAAAagSKDSwu
QgBRi1X4i0IQi034jVQBIFLoVP3//4PEDIXAdTqLRfiDwCBQi034i1EYUotF7FBo0ApCAGoAagBq
AGoA6AnR//+DxCCD+AF1AcwzyYXJdc3HRfAAAAAAi1X4g3oUAHVQi0X4i0gQUYoVLS5CAFKLRfiD
wCBQ6PD8//+DxAyFwHUvi034g8EgUWj0C0IAagBqAGoAagDosND//4PEGIP4AXUBzDPShdJ12MdF
8AAAAACDffAAdXaLRfiDeAgAdDOLTfiLUQxSi0X4i0gIUYtV7FJo1AtCAGoAagBqAGoA6GfQ//+D
xCCD+AF1AcwzwIXAdc2LTfiLURBSi0X4g8AgUItN7FFoqAtCAGoAagBqAGoA6DTQ//+DxCCD+AF1
Acwz0oXSdc3HRfwAAAAA6Qj+//+LRfxfXluL5V3DsFVAAIhVQABgVUAANVVAAMzMzMzMzMzMzMzM
zMzMzFWL7FGhIC5CAIlF/IN9CP90CYtNCIkNIC5CAItF/IvlXcPMzMzMzMzMzMzMzMzMzFWL7FGh
IC5CAIPgAYXAdQLrPYsNQDdCAIlN/OsIi1X8iwKJRfyDffwAdCSLTfyLURSB4v//AACD+gR1EYtF
DFCLTfyDwSBR/1UIg8QI686L5V3DzMzMzMzMzMzMzMzMzFWL7FGDfQgAdDOLRQxQi00IUf8VwFFC
AIXAdSGDfRAAdBKLVQxSi0UIUP8VvFFCAIXAdQnHRfwBAAAA6wfHRfwAAAAAi0X8i+Vdw8zMzMzM
VYvsUYN9CAB1BDPA63RqAWogi0UIg+ggUOiS////g8QMhcB1BDPA61mLTQiD6SBR6AsoAACDxASJ
RfyDffwAdBWLVQiD6iBSi0X8UOhPKAAAg8QI6yyLDeQ1QgCB4QCAAACFyXQHuAEAAADrFYtVCIPq
IFJqAKF0OkIAUP8VxFFCAIvlXcPMzMzMzMzMzMzMVYvsUYtFCFDoY////4PEBIXAdQczwOmmAAAA
i00Ig+kgiU38i1X8i0IUJf//AACD+AR0IotN/IN5FAF0GYtV/ItCFCX//wAAg/gCdAmLTfyDeRQD
dWlqAYtVDFKLRQhQ6Lv+//+DxAyFwHRTi038i1EQO1UMdUiLRfyLSBg7DSQuQgB/OoN9EAB0C4tV
EItF/ItIGIkKg30UAHQLi1UUi0X8i0gIiQqDfRgAdAuLVRiLRfyLSAyJCrgBAAAA6wIzwIvlXcPM
zMzMzMzMzMzMzFWL7FGhaDpCAIlF/ItNCIkNaDpCAItF/IvlXcPMzMzMVYvsg+wIU1ZXg30IAHUr
aAgNQgBoIAhCAGoAagBqAGoA6GrN//+DxBiD+AF1AcwzwIXAddrpFQEAAItNCIsVQDdCAIkRx0X8
AAAAAOsJi0X8g8ABiUX8g338BX0ei038i1UIx0SKGAAAAACLRfyLTQjHRIEEAAAAAOvTixVAN0IA
iVX46wiLRfiLCIlN+IN9+AAPhJ8AAACLVfiLQhQl//8AAIXAfGaLTfiLURSB4v//AACD+gV9VYtF
+ItIFIHh//8AAItVCItEigSDwAGLTfiLURSB4v//AACLTQiJRJEEi1X4i0IUJf//AACLTQiLVIEY
i0X4A1AQi034i0EUJf//AACLTQiJVIEY6yWLVfhSaOQMQgBqAGoAagBqAOhtzP//g8QYg/gBdQHM
M8CFwHXb6U////+LTQiLFUg3QgCJUSyLRQiLDTw3QgCJSDBfXluL5V3DzMzMzMzMzMzMzFWL7IPs
CFNWV8dF+AAAAACDfQgAdAyDfQwAdAaDfRAAdS5oMA1CAGggCEIAagBqAGoAagDo98v//4PEGIP4
AXUBzDPAhcB12otF+OnMAAAAx0X8AAAAAOsJi038g8EBiU38g338BQ+NgAAAAItV/ItFEItN/It1
DItUkBgrVI4Yi0X8i00IiVSBGItV/ItFEItN/It1DItUkAQrVI4Ei0X8i00IiVSBBItV/ItFCIN8
kBgAdQ2LTfyLVQiDfIoEAHQlg338AHQfg338AnUSg338AnUToSAuQgCD4BCFwHQHx0X4AQAAAOlt
////i00Qi1UMi0EsK0Isi00IiUEsi1UQi0UMi0owK0gwi1UIiUowi0UIxwAAAAAAi0X4X15bi+Vd
w8zMzMzMzMzMzMzMzMxVi+yD7AhTVlfHRfgAAAAAaCgOQgBoIAhCAGoAagBqAGoA6NnK//+DxBiD
+AF1AcwzwIXAddqDfQgAdAiLTQiLEYlV+KFAN0IAiUX86wiLTfyLEYlV/IN9/AAPhBgCAACLRfw7
RfgPhAwCAACLTfyLURSB4v//AACD+gN0LYtF/ItIFIHh//8AAIXJdB2LVfyLQhQl//8AAIP4AnUS
iw0gLkIAg+EQhcl1BenEAQAAi1X8g3oIAHRwagBqAYtF/ItICFHo2Pr//4PEDIXAdSqLVfyLQgxQ
aBQOQgBqAGoAagBqAOgYyv//g8QYg/gBdQHMM8mFyXXY6y+LVfyLQgxQi038i1EIUmgIDkIAagBq
AGoAagDo58n//4PEHIP4AXUBzDPAhcB10YtN/ItRGFJoAA5CAGoAagBqAGoA6L/J//+DxBiD+AF1
AcwzwIXAddiLTfyLURSB4v//AACD+gR1cYtF/ItIEFGLVfyLQhTB+BAl//8AAFCLTfyDwSBRaMwN
QgBqAGoAagBqAOhwyf//g8Qgg/gBdQHMM9KF0nXCgz1oOkIAAHQZi0X8i0gQUYtV/IPCIFL/FWg6
QgCDxAjrDItF/FDo5gAAAIPEBOmhAAAAi038g3kUAXU9i1X8i0IQUItN/IPBIFFopA1CAGoAagBq
AGoA6AXJ//+DxByD+AF1Acwz0oXSddGLRfxQ6J0AAACDxATrW4tN/ItRFIHi//8AAIP6AnVKi0X8
i0gQUYtV/ItCFMH4ECX//wAAUItN/IPBIFFocA1CAGoAagBqAGoA6KjI//+DxCCD+AF1Acwz0oXS
dcKLRfxQ6EAAAACDxATp1v3//2hYDUIAaCAIQgBqAGoAagBqAOhxyP//g8QYg/gBdQHMM8mFyXXa
X15bi+Vdw8zMzMzMzMzMzMzMVYvsg+xcU1ZXx0W0AAAAAOsJi0W0g8ABiUW0i00Ig3kQEH0Li1UI
i0IQiUWs6wfHRawQAAAAi020O02sD42aAAAAi1UIA1W0ikIgiEWwgz2EMUIAAX4caFcBAACLTbCB
4f8AAABR6PVCAACDxAiJRajrHYtVsIHi/wAAAKFgLkIAM8lmiwxQgeFXAQAAiU2og32oAHQOi1Ww
geL/AAAAiVWk6wfHRaQgAAAAi0W0ik2kiEwFuItVsIHi/wAAAFJoTA5CAItFtGvAA41MBcxR6IxB
AACDxAzpNv///4tVtMZEFbgAjUXMUI1NuFFoPA5CAGoAagBqAGoA6FLH//+DxByD+AF1Acwz0oXS
dddfXluL5V3DzMzMzMzMzMzMzMzMVYvsg+w0U1ZXjUXMUOiO+f//g8QEg33gAHUZg33UAHUTiw0g
LkIAg+EQhcl0PYN92AB0N2hUDkIAaCAIQgBqAGoAagBqAOjlxv//g8QYg/gBdQHMM9KF0nXaagDo
z/v//4PEBLgBAAAA6wIzwF9eW4vlXcPMzMzMzMzMzMzMzMxVi+xRU1ZXg30IAHUF6awAAADHRfwA
AAAA6wmLRfyDwAGJRfyDffwFfUSLTfyLFI0wLkIAUotF/ItNCItUgQRSi0X8i00Ii1SBGFJosA5C
AGoAagBqAGoA6FPG//+DxCCD+AF1AcwzwIXAdb7rrYtNCItRLFJojA5CAGoAagBqAGoA6CnG//+D
xBiD+AF1AcwzwIXAddiLTQiLUTBSaGwOQgBqAGoAagBqAOgBxv//g8QYg/gBdQHMM8CFwHXYX15b
i+Vdw8zMzMzMzMzMzMzMVYvsi0UIOwW8O0IAcgQzwOsbi00IwfkFi1UIg+IfiwSNgDpCAA++RNAE
g+BAXcPMVYvsg30IAHUMagDoIAEAAIPEBOs8i0UIUOhCAAAAg8QEhcB0BYPI/+sni00Ii1EMgeIA
QAAAhdJ0FYtFCItIEFHoOkEAAIPEBPfYG8DrAjPAXcPMzMzMzMzMzMzMzMzMVYvsg+wMx0X8AAAA
AItFCIlF+ItN+ItRDIPiA4P6AnV6i0X4i0gMgeEIAQAAhcl0aotV+ItF+IsKK0gIiU30g330AH5W
i1X0UotF+ItICFGLVfiLQhBQ6HRBAACDxAw7RfR1IYtN+ItRDIHigAAAAIXSdA+LRfiLSAyD4f2L
VfiJSgzrFotF+ItIDIPJIItV+IlKDMdF/P////+LRfiLTfiLUQiJEItF+MdABAAAAACLRfyL5V3D
zMzMzMzMzMzMVYvsagHoBgAAAIPEBF3DzFWL7IPsDMdF/AAAAADHRfgAAAAAx0X0AAAAAOsJi0X0
g8ABiUX0i030Ow1AT0IAD42XAAAAi1X0oew7QgCDPJAAD4SAAAAAi030ixXsO0IAiwSKi0gMgeGD
AAAAhcl0Z4N9CAF1JItV9KHsO0IAiwyQUehZ/v//g8QEg/j/dAmLVfyDwgGJVfzrPYN9CAB1N4tF
9IsN7DtCAIsUgYtCDIPgAoXAdCGLTfSLFew7QgCLBIpQ6Bj+//+DxASD+P91B8dF+P/////pUf//
/4N9CAF1BYtF/OsDi0X4i+Vdw8zMi0wkBPfBAwAAAHQUigFBhMB0QPfBAwAAAHXxBQAAAACLAbr/
/v5+A9CD8P8zwoPBBKkAAQGBdOiLQfyEwHQyhOR0JKkAAP8AdBOpAAAA/3QC682NQf+LTCQEK8HD
jUH+i0wkBCvBw41B/YtMJAQrwcONQfyLTCQEK8HDzMzMzMxVi+yD7AiDfQgAdQczwOmHAAAAgz2A
N0IAAHUti0UMJf//AAA9/wAAAH4PxwXYNUIAKgAAAIPI/+tgi00IilUMiBG4AQAAAOtRx0X4AAAA
AI1F+FBqAIsNhDFCAFGLVQhSagGNRQxQaCACAACLDZA3QgBR/xWQUUIAiUX8g338AHQGg334AHQP
xwXYNUIAKgAAAIPI/+sDi0X8i+Vdw8zMU1aLRCQYC8B1GItMJBSLRCQQM9L38YvYi0QkDPfxi9Pr
QYvIi1wkFItUJBCLRCQM0enR29Hq0dgLyXX09/OL8PdkJBiLyItEJBT35gPRcg47VCQQdwhyBztE
JAx2AU4z0ovGXlvCEADMzMzMzMzMzFOLRCQUC8B1GItMJBCLRCQMM9L38YtEJAj38YvCM9LrUIvI
i1wkEItUJAyLRCQI0enR29Hq0dgLyXX09/OLyPdkJBSR92QkEAPRcg47VCQMdwhyDjtEJAh2CCtE
JBAbVCQUK0QkCBtUJAz32vfYg9oAW8IQAMzMzMzMzMzMzMzMVYvsg+wUU1ZXg30MAHUeaLgBQgBq
AGppaBAPQgBqAuhswf//g8QUg/gBdQHMM8CFwHXWi00MiU34i1X4i0IQiUXwi034i1EMgeKCAAAA
hdJ0DYtF+ItIDIPhQIXJdBaLVfiLQgwMIItN+IlBDIPI/+n2AQAAi1X4i0IMg+ABhcB0SotN+MdB
BAAAAACLVfiLQgyD4BCFwHQci034i1X4i0IIiQGLTfiLUQyD4v6LRfiJUAzrF4tN+ItRDIPKIItF
+IlQDIPI/+mfAQAAi034i1EMg8oCi0X4iVAMi034i1EMg+Lvi0X4iVAMi034x0EEAAAAAMdF/AAA
AACLVfyJVfSLRfiLSAyB4QwBAACFyXUugX34YCpCAHQJgX34gCpCAHUQi1XwUuiE+v//g8QEhcB1
DItF+FDohEAAAIPEBItN+ItRDIHiCAEAAIXSD4TWAAAAi0X4i034ixArUQiF0n0haNAOQgBqAGig
AAAAaBAPQgBqAugWwP//g8QUg/gBdQHMM8CFwHXKi034i1X4iwErQgiJRfyLTfiLUQiDwgGLRfiJ
EItN+ItRGIPqAYtF+IlQBIN9/AB+HItN/FGLVfiLQghQi03wUehCPAAAg8QMiUX060aDffD/dBuL
VfDB+gWLRfCD4B+LDJWAOkIAjRTBiVXs6wfHRexwLUIAi0XsD75IBIPhIIXJdBBqAmoAi1XwUui3
PgAAg8QMi0X4i0gIilUIiBHrHsdF/AEAAACLRfxQjU0IUYtV8FLozzsAAIPEDIlF9ItF9DtF/HQU
i034i1EMg8ogi0X4iVAMg8j/6wiLRQgl/wAAAF9eW4vlXcPMzMzMzMzMzMzMzMzMzFWL7IPsCMdF
/AAAAADHRfgDAAAA6wmLRfiDwAGJRfiLTfg7DUBPQgB9e4tV+KHsO0IAgzyQAHRoi034ixXsO0IA
iwSKi0gMgeGDAAAAhcl0IotV+KHsO0IAiwyQUeiuPwAAg8QEg/j/dAmLVfyDwgGJVfyDffgUfCdq
AotF+IsN7DtCAIsUgVLoc+T//4PECItF+IsN7DtCAMcEgQAAAADpcf///4tF/IvlXcPMzMzMVYvs
g30QCnUeg30IAH0YagGLRRBQi00MUYtVCFLoLgAAAIPEEOsWagCLRRBQi00MUYtVCFLoFgAAAIPE
EItFDF3DzMzMzMzMzMzMzMzMzMxVi+yD7BCLRQyJRfyDfRQAdBeLTfzGAS2LVfyDwgGJVfyLRQj3
2IlFCItN/IlN+ItFCDPS93UQiVX0i0UIM9L3dRCJRQiDffQJdhaLVfSDwleLRfyIEItN/IPBAYlN
/OsUi1X0g8Iwi0X8iBCLTfyDwQGJTfyDfQgAd7SLVfzGAgCLRfyD6AGJRfyLTfyKEYhV8ItF/ItN
+IoRiBCLRfiKTfCICItV/IPqAYlV/ItF+IPAAYlF+ItN+DtN/HLMi+Vdw8zMzMzMzMzMzMzMzMzM
VYvsUYN9EAp1D4N9CAB9CcdF/AEAAADrB8dF/AAAAACLRfxQi00QUYtVDFKLRQhQ6Pv+//+DxBCL
RQyL5V3DzFWL7GoAi0UQUItNDFGLVQhS6Nr+//+DxBCLRQxdw8zMVYvsUYN9FAp1F4N9DAB/EXwG
g30IAHMJx0X8AQAAAOsHx0X8AAAAAItF/FCLTRRRi1UQUotFDFCLTQhR6A8AAACLRRCL5V3DzMzM
zMzMzMxVi+yD7BCLRRCJRfyDfRgAdCKLTfzGAS2LVfyDwgGJVfyLRQj32ItNDIPRAPfZiUUIiU0M
i1X8iVX4i0UUM8lRUItVDFKLRQhQ6DL6//+JRfSLTRQz0lJRi0UMUItNCFHoq/n//4lFCIlVDIN9
9Al2FotV9IPCV4tF/IgQi038g8EBiU386xSLVfSDwjCLRfyIEItN/IPBAYlN/IN9DAB3mXIGg30I
AHeRi1X8xgIAi0X8g+gBiUX8i038ihGIVfCLRfyLTfiKEYgQi0X4ik3wiAiLVfyD6gGJVfyLRfiD
wAGJRfiLTfg7TfxyzIvlXcIUAMzMzMzMzMzMzMzMzMzMVYvsagCLRRRQi00QUYtVDFKLRQhQ6Ob+
//+LRRBdw8xVi+yD7DBTVleNReCJRdyNTRSJTdSDfQgAdR5oKA9CAGoAal1oHA9CAGoC6EC7//+D
xBSD+AF1Acwz0oXSddaDfRAAdR5ooABCAGoAal5oHA9CAGoC6Ba7//+DxBSD+AF1AcwzwIXAddaL
TdzHQQxCAAAAi1Xci0UIiUIIi03ci1UIiRGLRdyLTQyJSASLVdRSi0UQUItN3FHo0qn//4PEDIlF
2ItV3ItCBIPoAYtN3IlBBItV3IN6BAB8IotF3IsIxgEAM9KB4v8AAACJVdCLRdyLCIPBAYtV3IkK
6xGLRdxQagDo9/j//4PECIlF0ItF2F9eW4vlXcPMzMzMzMzMV4t8JAjrao2kJAAAAACL/4tMJARX
98EDAAAAdA+KAUGEwHQ798EDAAAAdfGLAbr//v5+A9CD8P8zwoPBBKkAAQGBdOiLQfyEwHQjhOR0
GqkAAP8AdA6pAAAA/3QC682Nef/rDY15/usIjXn96wONefyLTCQM98EDAAAAdBmKEUGE0nRkiBdH
98EDAAAAde7rBYkXg8cEuv/+/n6LAQPQg/D/M8KLEYPBBKkAAQGBdOGE0nQ0hPZ0J/fCAAD/AHQS
98IAAAD/dALrx4kXi0QkCF/DZokXi0QkCMZHAgBfw2aJF4tEJAhfw4gXi0QkCF/DVYvsg+wsU1ZX
jUXgiUXcg30IAHUeaCgPQgBqAGpaaDgPQgBqAuhWuf//g8QUg/gBdQHMM8mFyXXWg30QAHUeaKAA
QgBqAGpbaDgPQgBqAugsuf//g8QUg/gBdQHMM9KF0nXWi0Xcx0AMQgAAAItN3ItVCIlRCItF3ItN
CIkIi1Xci0UMiUIEi00UUYtVEFKLRdxQ6Oin//+DxAyJRdiLTdyLUQSD6gGLRdyJUASLTdyDeQQA
fCKLVdyLAsYAADPJgeH/AAAAiU3Ui1XciwKDwAGLTdyJAesRi1XcUmoA6A33//+DxAiJRdSLRdhf
XluL5V3DzMzMzMzMzMzMzMzMzFE9ABAAAI1MJAhyFIHpABAAAC0AEAAAhQE9ABAAAHPsK8iLxIUB
i+GLCItABFDDzFWL7IPsDIN9DAR0BoN9DAN1BelCAQAAg30IAnQWg30IFXQQg30IFnQKg30IDw+F
uAAAAIN9CAJ0BoN9CBV1N4M9XDdCAAB1LmoBaHBxQAD/FcxRQgCD+AF1DMcFXDdCAAEAAADrEP8V
yFFCAKPcNUIA6eMAAACLRQiJRfSLTfSD6QKJTfSDffQUd16LRfQz0oqQTnFAAP8klTpxQACLDUw3
QgCJTfiLVQyJFUw3QgDrOKFQN0IAiUX4i00MiQ1QN0IA6yWLFVQ3QgCJVfiLRQyjVDdCAOsSiw1Y
N0IAiU34i1UMiRVYN0IA62mDfQgIdA6DfQgEdAiDfQgLdALrWotFCFDoyAIAAIPEBIlF/IN9/AB1
AutDi038i1EIiVX4i0X8i0gEO00IdSqLVfyLRQyJQgiLTfyDwQyJTfyLFWgtQgBr0gyBwugsQgA5
VfxyAusC68uLRfjrDccF2DVCABYAAACDyP+L5V3DbXBAAKdwQACBcEAAlHBAALlwQAAABAQEBAQE
BAQEBAQEAQQEBAQEAgPMzMzMzMzMzMzMzMzMVYvsg+wMg30IAHUYx0X4TDdCAItF+IsIiU30x0X8
AgAAAOsWx0X4UDdCAItV+IsCiUX0x0X8FQAAAIN99AB1BDPA6x6DffQBdBOLTfjHAQAAAACLVfxS
/1X0g8QEuAEAAACL5V3CBADMzMzMzMzMzFWL7IPsGItFCIlF6ItN6IPpAolN6IN96BR3cotF6DPS
ipB8c0AA/ySVZHNAAMdF8Ew3QgCLTfCLEYlV7OtXx0XwUDdCAItF8IsIiU3s60bHRfBUN0IAi1Xw
iwKJRezrNcdF8Fg3QgCLTfCLEYlV7Oski0UIUOhGAQAAg8QEg8AIiUXwi03wixGJVezrCIPI/+nr
AAAAg33sAXUHM8Dp3gAAAIN97AB1B2oD6JG8//+DfQgIdAyDfQgLdAaDfQgEdSuhKDZCAIlF9McF
KDZCAAAAAACDfQgIdROLDWwtQgCJTfzHBWwtQgCMAAAAg30ICHU5ixVgLUIAiVX46wmLRfiDwAGJ
RfiLDWAtQgADDWQtQgA5Tfh9EotV+GvSDMeC8CxCAAAAAADr1OsJi0XwxwAAAAAAg30ICHURiw1s
LUIAUWoI/1Xsg8QI6wqLVQhS/1Xsg8QEg30ICHQMg30IC3QGg30IBHUXi0X0oyg2QgCDfQgIdQmL
TfyJDWwtQgAzwIvlXcMNckAAUXJAAEByQAAeckAAL3JAAG1yQAAABQEFBQUBBQUBBQUFAgUFBQUF
AwTMzMzMzMzMzMzMzMzMzMxVi+xRx0X86CxCAItF/ItIBDtNCHQdi1X8g8IMiVX8oWgtQgBrwAwF
6CxCADlF/HMC69iLDWgtQgBryQyBwegsQgA5TfxzEItV/ItCBDtFCHUFi0X86wIzwIvlXcPMzMxV
i+yD7AjHRfwAAAAAgz1gN0IAAHVdaEADQgD/FXBRQgCJRfiDffgAdB1oaA9CAItF+FD/FUhRQgCj
YDdCAIM9YDdCAAB1BDPA62xoWA9CAItN+FH/FUhRQgCjZDdCAGhED0IAi1X4Uv8VSFFCAKNoN0IA
gz1kN0IAAHQJ/xVkN0IAiUX8g338AHQWgz1oN0IAAHQNi0X8UP8VaDdCAIlF/ItNEFGLVQxSi0UI
UItN/FH/FWA3QgCL5V3DzMzMzMyLTCQMV4XJdHpWU4vZi3QkFPfGAwAAAIt8JBB1B8HpAnVv6yGK
BkaIB0dJdCWEwHQp98YDAAAAdeuL2cHpAnVRg+MDdA2KBkaIB0eEwHQvS3Xzi0QkEFteX8P3xwMA
AAB0EogHR0kPhIoAAAD3xwMAAAB17ovZwekCdWyIB0dLdfpbXotEJAhfw4kXg8cESXSvuv/+/n6L
BgPQg/D/M8KLFoPGBKkAAQGBdN6E0nQshPZ0HvfCAAD/AHQM98IAAAD/dcaJF+sYgeL//wAAiRfr
DoHi/wAAAIkX6wQz0okXg8cEM8BJdAozwIkHg8cESXX4g+MDdYWLRCQQW15fw8zMVYvsg+woi0UI
UOjxAgAAg8QEiUUIi00IOw3EN0IAdQczwOnTAgAAg30IAHUR6K4DAADoKQQAADPA6bwCAADHRfwA
AAAA6wmLVfyDwgGJVfyDffwFD4M9AQAAi0X8a8Awi4h4MEIAO00ID4UjAQAAx0XcAAAAAOsJi1Xc
g8IBiVXcgX3cAQEAAHMMi0XcxoBgOUIAAOvix0X0AAAAAOsJi030g8EBiU30g330BHN7i1X8a9Iw
i0X0jYzCiDBCAIlN+OsJi1X4g8ICiVX4i0X4M8mKCIXJdE2LVfgzwIpCAYXAdEGLTfgz0ooRiVXc
6wmLRdyDwAGJRdyLTfgz0opRATlV3Hcdi0Xci030ipBhOUIACpFwMEIAi0XciJBhOUIA683rn+l2
////i00IiQ3EN0IAxwVMOEIAAQAAAIsVxDdCAFLoGAIAAIPEBKNkOkIAx0X0AAAAAOsJi0X0g8AB
iUX0g330BnMei038a8kwi1X0i0X0ZouMQXwwQgBmiQxVQDhCAOvT6NUCAAAzwOloAQAA6bD+//+N
VeBSi0UIUP8V0FFCAIP4AQ+FMgEAAMdF3AAAAADrCYtN3IPBAYlN3IF93AEBAABzDItV3MaCYDlC
AADr4otFCKPEN0IAxwVkOkIAAAAAAIN94AEPhrUAAACNTeaJTdjrCYtV2IPCAolV2ItF2DPJigiF
yXRHi1XYM8CKQgGFwHQ7i03YM9KKEYlV3OsJi0Xcg8ABiUXci03YM9KKUQE5Vdx3F4tF3IqIYTlC
AIDJBItV3IiKYTlCAOvT66XHRdwBAAAA6wmLRdyDwAGJRdyBfdz/AAAAcxeLTdyKkWE5QgCAygiL
RdyIkGE5QgDr14sNxDdCAFHozgAAAIPEBKNkOkIAxwVMOEIAAQAAAOsKxwVMOEIAAAAAAMdF9AAA
AADrCYtV9IPCAYlV9IN99AZzD4tF9GbHBEVAOEIAAADr4uiEAQAAM8DrGoM9bDdCAAB0DujyAAAA
6G0BAAAzwOsDg8j/i+Vdw8zMVYvsxwVsN0IAAAAAAIN9CP51EscFbDdCAAEAAAD/FdhRQgDrMoN9
CP11EscFbDdCAAEAAAD/FdRRQgDrGoN9CPx1EccFbDdCAAEAAAChkDdCAOsDi0UIXcPMzMzMzMzM
VYvsUYtFCIlF/ItN/IHppAMAAIlN/IN9/BJ3LotF/DPSipCEeUAA/ySVcHlAALgRBAAA6xe4BAgA
AOsQuBIEAADrCbgEBAAA6wIzwIvlXcNOeUAAVXlAAFx5QABjeUAAanlAAAAEBAQBBAQEBAQEBAQE
BAQEAgPMzMzMzMzMzMxVi+xRx0X8AAAAAOsJi0X8g8ABiUX8gX38AQEAAH0Mi038xoFgOUIAAOvi
xwXEN0IAAAAAAMcFTDhCAAAAAADHBWQ6QgAAAAAAx0X8AAAAAOsJi1X8g8IBiVX8g338Bn0Pi0X8
ZscERUA4QgAAAOvii+Vdw8zMzMzMzMzMzMzMzFWL7IHsHAUAAI2F6Pz//1CLDcQ3QgBR/xXQUUIA
g/gBD4UTAgAAx4Xk+v//AAAAAOsPi5Xk+v//g8IBiZXk+v//gb3k+v//AAEAAHMVi4Xk+v//io3k
+v//iIwF/Pz//+vQxoX8/P//II2V7vz//4lV/OsJi0X8g8ACiUX8i038M9KKEYXSdECLRfwzyYoI
iY3k+v//6w+LleT6//+DwgGJleT6//+LRfwzyYpIATmN5Pr//3cQi5Xk+v//xoQV/Pz//yDr0eus
agChZDpCAFCLDcQ3QgBRjZX8/f//UmgAAQAAjYX8/P//UGoB6E8yAACDxBxqAIsNxDdCAFFoAAEA
AI2V6Pv//1JoAAEAAI2F/Pz//1BoAAEAAIsNZDpCAFHoui4AAIPEIGoAixXEN0IAUmgAAQAAjYXo
+v//UGgAAQAAjY38/P//UWgAAgAAixVkOkIAUuiFLgAAg8Qgx4Xk+v//AAAAAOsPi4Xk+v//g8AB
iYXk+v//gb3k+v//AAEAAA+DqwAAAIuN5Pr//zPSZouUTfz9//+D4gGF0nQ2i4Xk+v//iohhOUIA
gMkQi5Xk+v//iIphOUIAi4Xk+v//i43k+v//ipQN6Pv//4iQYDhCAOtZi4Xk+v//M8lmi4xF/P3/
/4PhAoXJdDWLleT6//+KgmE5QgAMIIuN5Pr//4iBYTlCAIuV5Pr//4uF5Pr//4qMBej6//+IimA4
QgDrDYuV5Pr//8aCYDhCAADpNv///+nFAAAAx4Xk+v//AAAAAOsPi4Xk+v//g8ABiYXk+v//gb3k
+v//AAEAAA+DmgAAAIO95Pr//0FyO4O95Pr//1p3MouN5Pr//4qRYTlCAIDKEIuF5Pr//4iQYTlC
AIuN5Pr//4PBIIuV5Pr//4iKYDhCAOtRg73k+v//YXI7g73k+v//encyi4Xk+v//iohhOUIAgMkg
i5Xk+v//iIphOUIAi4Xk+v//g+ggi43k+v//iIFgOEIA6w2LleT6///GgmA4QgAA6Uf///+L5V3D
zMzMzMzMzMzMzMzMzMxVi+yDPUw4QgAAdAehxDdCAOsCM8Bdw8zMzMzMzMzMzFWL7IM90DtCAAB1
FGr96F34//+DxATHBdA7QgABAAAAXcPMzMzMzMzMzMzMzMzMzFWL7FdWi3UMi00Qi30Ii8GL0QPG
O/52CDv4D4J4AQAA98cDAAAAdRTB6QKD4gOD+QhyKfOl/ySVyH5AAIvHugMAAACD6QRyDIPgAwPI
/ySF4H1AAP8kjdh+QACQ/ySNXH5AAJDwfUAAHH5AAEB+QAAj0YoGiAeKRgGIRwGKRgLB6QKIRwKD
xgODxwOD+QhyzPOl/ySVyH5AAI1JACPRigaIB4pGAcHpAohHAYPGAoPHAoP5CHKm86X/JJXIfkAA
kCPRigaIB0bB6QJHg/kIcozzpf8klch+QACNSQC/fkAArH5AAKR+QACcfkAAlH5AAIx+QACEfkAA
fH5AAItEjuSJRI/ki0SO6IlEj+iLRI7siUSP7ItEjvCJRI/wi0SO9IlEj/SLRI74iUSP+ItEjvyJ
RI/8jQSNAAAAAAPwA/j/JJXIfkAAi//YfkAA4H5AAOx+QAAAf0AAi0UIXl/Jw5CKBogHi0UIXl/J
w5CKBogHikYBiEcBi0UIXl/Jw41JAIoGiAeKRgGIRwGKRgKIRwKLRQheX8nDkI10MfyNfDn898cD
AAAAdSTB6QKD4gOD+QhyDf3zpfz/JJVggEAAi//32f8kjRCAQACNSQCLx7oDAAAAg/kEcgyD4AMr
yP8khWh/QAD/JI1ggEAAkHh/QACYf0AAwH9AAIpGAyPRiEcDTsHpAk+D+Qhytv3zpfz/JJVggEAA
jUkAikYDI9GIRwOKRgLB6QKIRwKD7gKD7wKD+QhyjP3zpfz/JJVggEAAkIpGAyPRiEcDikYCiEcC
ikYBwekCiEcBg+4Dg+8Dg/kID4Ja/////fOl/P8klWCAQACNSQAUgEAAHIBAACSAQAAsgEAANIBA
ADyAQABEgEAAV4BAAItEjhyJRI8ci0SOGIlEjxiLRI4UiUSPFItEjhCJRI8Qi0SODIlEjwyLRI4I
iUSPCItEjgSJRI8EjQSNAAAAAAPwA/j/JJVggEAAi/9wgEAAeIBAAIiAQACcgEAAi0UIXl/Jw5CK
RgOIRwOLRQheX8nDjUkAikYDiEcDikYCiEcCi0UIXl/Jw5CKRgOIRwOKRgKIRwKKRgGIRwGLRQhe
X8nDzMzMzMzMzMzMzMxVi+yhcDFCAF3DzMzMzMzMVYvsgX0I+AMAAHYEM8DrDYtFCKNwMUIAuAEA
AABdw8xVi+xoQAEAAGoAoXQ6QgBQ/xXcUUIAo8A3QgCDPcA3QgAAdQQzwOsviw3AN0IAiQ20N0IA
xwW4N0IAAAAAAMcFvDdCAAAAAADHBaA3QgAQAAAAuAEAAABdw8zMzMzMzMxVi+yD7AyhvDdCAGvA
FIsNwDdCAAPIiU30ixXAN0IAiVX4i0X4O0X0cyWLTfiLVQgrUQyJVfyBffwAABAAcwWLRfjrDYtF
+IPAFIlF+OvTM8CL5V3DzMzMzMzMzMzMzMxVi+yD7AyLRQiLTQwrSAyJTfiLVfjB6g+JVfy4AAAA
gItN/NPoi00Ii1EII9CF0nUgi0X4g+APhcB1FotN+IHh/w8AAIXJdAnHRfQBAAAA6wfHRfQAAAAA
i0X0i+Vdw8xVi+yD7DyLRQiLSBCJTcSLVQiLRQwrQgyJRfCLTfDB6Q+JTfyLVfxp0gQCAACLRcSN
jBBEAQAAiU34i1UMg+oEiVXki0XkiwiD6QGJTdCLVeQDVdCJVciLRciLCIlN7ItV5ItC/IlF9ItN
7IPhAYXJD4UiAQAAi1XswfoEg+oBiVXcg33cP3YHx0XcPwAAAItFyItNyItQBDtRCA+F0AAAAIN9
3CBzX7gAAACAi03c0+j30ItN/ItVxItMikQjyItV/ItFxIlMkESLTcQDTdyKUQSA6gGLRcQDRdyI
UASLTcQDTdwPvlEEhdJ1GLgAAACAi03c0+j30ItNCIsRI9CLRQiJEOtri03cg+kgugAAAIDT6vfS
i0X8i03Ei4SBxAAAACPCi038i1XEiYSKxAAAAItFxANF3IpIBIDpAYtVxANV3IhKBItFxANF3A++
SASFyXUdi03cg+kgugAAAIDT6vfSi0UIi0gEI8qLVQiJSgSLRciLSAiLVciLQgSJQQSLTciLUQSL
RciLSAiJSgiLVdADVeyJVdCLRdDB+ASD6AGJRdiDfdg/dgfHRdg/AAAAi030g+EBhckPhVYBAACL
VeQrVfSJVcyLRfTB+ASD6AGJRdSDfdQ/dgfHRdQ/AAAAi03QA030iU3Qi1XQwfoEg+oBiVXYg33Y
P3YHx0XYPwAAAItF1DtF2A+EAAEAAItNzItVzItBBDtCCA+F0AAAAIN91CBzX7oAAACAi03U0+r3
0otF/ItNxItEgUQjwotN/ItVxIlEikSLRcQDRdSKSASA6QGLVcQDVdSISgSLRcQDRdQPvkgEhcl1
GLoAAACAi03U0+r30otFCIsII8qLVQiJCutri03Ug+kguAAAAIDT6PfQi038i1XEi4yKxAAAACPI
i1X8i0XEiYyQxAAAAItNxANN1IpRBIDqAYtFxANF1IhQBItNxANN1A++UQSF0nUdi03Ug+kguAAA
AIDT6PfQi00Ii1EEI9CLRQiJUASLTcyLUQiLRcyLSASJSgSLVcyLQgSLTcyLUQiJUAiLRcyJReSL
TfSD4QGFyXUMi1XUO1XYD4QQAQAAi0XYi034jRTBiVXgi0Xki03gi1EEiVAEi0Xki03giUgIi1Xg
i0XkiUIEi03ki1EEi0XkiUIIi03ki1Xki0EEO0IID4XIAAAAg33YIHNbi03EA03YD75RBItFxANF
2IpIBIDBAYtFxANF2IhIBIXSdRa6AAAAgItN2NPqi0UIiwgLyotVCIkKuAAAAICLTdjT6ItN/ItV
xItMikQLyItV/ItFxIlMkETrZ4tNxANN2A++UQSLRcQDRdiKSASAwQGLRcQDRdiISASF0nUbi03Y
g+kgugAAAIDT6otFCItIBAvKi1UIiUoEi03Yg+kguAAAAIDT6ItN/ItVxIuMisQAAAALyItV/ItF
xImMkMQAAACLTeSLVdCJEYtF5ANF0ItN0IlI/ItV+IsCg+gBi034iQGLVfiDOgAPhWEBAACDPbg3
QgAAD4RDAQAAobA3QgDB4A+LDbg3QgCLUQwD0IlV6GgAQAAAaACAAACLRehQ/xW0UUIAugAAAICL
DbA3QgDT6qG4N0IAi0gIC8qLFbg3QgCJSgihuDdCAItIEIsVsDdCAMeEkcQAAAAAAAAAobg3QgCL
SBCKUUOA6gGhuDdCAItIEIhRQ4sVuDdCAItCEA++SEOFyXUUixW4N0IAi0IEJP6LDbg3QgCJQQSL
Fbg3QgCDegj/D4WSAAAAaACAAABqAKG4N0IAi0gMUf8VtFFCAIsVuDdCAItCEFBqAIsNdDpCAFH/
FbBRQgCLFbw3QgBr0hShwDdCAAPCiw24N0IAg8EUK8FQixW4N0IAg8IUUqG4N0IAUOiKJwAAg8QM
iw28N0IAg+kBiQ28N0IAi1UIOxW4N0IAdgmLRQiD6BSJRQiLDcA3QgCJDbQ3QgCLVQiJFbg3QgCL
RfyjsDdCAIvlXcPMzMxVi+yD7DhWobw3QgBrwBSLDcA3QgADyIlN1ItVCIPCF4Pi8IlV2ItF2MH4
BIPoAYlF4IN94CB9FIPK/4tN4NPqiVXcx0XM/////+sVx0XcAAAAAItN4IPpIIPI/9PoiUXMiw20
N0IAiU3oi1XoO1XUcySLReiLTdwjCItV6ItFzCNCBAvIhcl0AusLi03og8EUiU3o69SLVeg7VdQP
hdsAAAChwDdCAIlF6ItN6DsNtDdCAHMki1Xoi0XcIwKLTeiLVcwjUQQLwoXAdALrC4tF6IPAFIlF
6OvRi03oOw20N0IAD4WVAAAAi1XoO1XUcxaLReiDeAgAdALrC4tN6IPBFIlN6Ovii1XoO1XUdUmh
wDdCAIlF6ItN6DsNtDdCAHMWi1Xog3oIAHQC6wuLReiDwBSJRejr34tN6DsNtDdCAHUV6PkDAACJ
ReiDfegAdQczwOnaAwAAi1XoUujwBAAAg8QEi03oi1EQiQKLReiLSBCDOf91BzPA6bQDAACLVeiJ
FbQ3QgCLReiLSBCJTciLVciLAolF0IN90P90I4tN0ItVyItF3CNEikSLTdCLVciLdcwjtIrEAAAA
C8aFwHU1x0XQAAAAAItF0ItNyItV3CNUgUSLRdCLTciLdcwjtIHEAAAAC9aF0nULi1XQg8IBiVXQ
69KLRdBpwAQCAACLTciNlAFEAQAAiVX8x0XgAAAAAItF0ItNyItV3CNUgUSJVeSDfeQAdRrHReAg
AAAAi0XQi03Ii1XMI5SBxAAAAIlV5IN95AB8E4tF5NHgiUXki03gg8EBiU3g6+eLVeCLRfyLTNAE
iU3wi1XwiwIrRdiJRfiLTfjB+QSD6QGJTeyDfew/fgfHRew/AAAAi1XsO1XgD4QYAgAAi0Xwi03w
i1AEO1EID4XQAAAAg33gIH1fuAAAAICLTeDT6PfQi03Qi1XIi0yKRCPIi1XQi0XIiUyQRItNyANN
4IpRBIDqAYtFyANF4IhQBItNyANN4A++UQSF0nUYuAAAAICLTeDT6PfQi03oixEj0ItF6IkQ62uL
TeCD6SC6AAAAgNPq99KLRdCLTciLhIHEAAAAI8KLTdCLVciJhIrEAAAAi0XIA0XgikgEgOkBi1XI
A1XgiEoEi0XIA0XgD75IBIXJdR2LTeCD6SC6AAAAgNPq99KLReiLSAQjyotV6IlKBItF8ItICItV
8ItCBIlBBItN8ItRBItF8ItICIlKCIN9+AAPhA4BAACLVeyLRfyNDNCJTfSLVfCLRfSLSASJSgSL
VfCLRfSJQgiLTfSLVfCJUQSLRfCLSASLVfCJUQiLRfCLTfCLUAQ7UQgPhcYAAACDfewgfVqLRcgD
RewPvkgEi1XIA1XsikIEBAGLVcgDVeyIQgSFyXUWuAAAAICLTezT6ItN6IsRC9CLReiJELoAAACA
i03s0+qLRdCLTciLRIFEC8KLTdCLVciJRIpE62aLRcgDRewPvkgEi1XIA1XsikIEBAGLVcgDVeyI
QgSFyXUbi03sg+kguAAAAIDT6ItN6ItRBAvQi0XoiVAEi03sg+kgugAAAIDT6otF0ItNyIuEgcQA
AAALwotN0ItVyImEisQAAACDffgAdBSLRfCLTfiJCItV8ANV+ItF+IlC/ItN8ANN+IlN8ItV2IPC
AYtF8IkQi03Yg8EBi1XwA1XYiUr8i0X8iwiLVfyLAoPAAYtV/IkChcl1IItF6DsFuDdCAHUVi03Q
Ow2wN0IAdQrHBbg3QgAAAAAAi1XIi0XQiQKLRfCDwARei+Vdw8zMzMzMzMzMzMxVi+xRobw3QgA7
BaA3QgB1SosNoDdCAIPBEGvJFFGLFcA3QgBSagChdDpCAFD/FeRRQgCJRfyDffwAdQczwOnIAAAA
i038iQ3AN0IAixWgN0IAg8IQiRWgN0IAobw3QgBrwBSLDcA3QgADyIlN/GjEQQAAagiLFXQ6QgBS
/xXcUUIAi038iUEQi1X8g3oQAHUEM8DrdmoEaAAgAABoAAAQAGoA/xXgUUIAi038iUEMi1X8g3oM
AHUai0X8i0gQUWoAixV0OkIAUv8VsFFCADPA6zmLRfzHAAAAAACLTfzHQQQAAAAAi1X8x0II////
/6G8N0IAg8ABo7w3QgCLTfyLURDHAv////+LRfyL5V3DzFWL7IPsLItFCItIEIlN1ItVCItCCIlF
+MdF2AAAAACDffgAfBOLTfjR4YlN+ItV2IPCAYlV2Ovni0XYacAEAgAAi03UjZQBRAEAAIlV9MdF
4AAAAADrCYtF4IPAAYlF4IN94D99IItN4ItV9I0EyolF6ItN6ItV6IlRCItF6ItN6IlIBOvRi1XY
weIPi0UIi0gMA8qJTfBqBGgAEAAAaACAAACLVfBS/xXgUUIAhcB1CIPI/+kxAQAAi0XwBQBwAACJ
ReSLTfCJTfzrDItV/IHCABAAAIlV/ItF/DtF5Hddi038x0EI/////4tV/MeC/A8AAP////+LRfyD
wAyJReiLTejHAfAPAACLVeiBwgAQAACLReiJUASLTeiB6QAQAACLVeiJSgiLRegF7A8AAIlF3ItN
3McB8A8AAOuPi1X0gcL4AQAAiVXsi0Xwg8AMi03siUEEi1Xsi0IEiUXoi03oi1XsiVEIi0Xkg8AM
i03siUEIi1Xsi0IIiUXoi03oi1XsiVEEi0XYi03Ux0SBRAAAAACLVdiLRdTHhJDEAAAAAQAAAItN
1A++UUOLRdSKSEOAwQGLRdSISEOF0nUPi00Ii1EEg8oBi0UIiVAEugAAAICLTdjT6vfSi0UIi0gI
I8qLVQiJSgiLRdiL5V3DzMxVi+yD7DCLRRCDwBck8IlF5ItNCItREIlV0ItFCItNDCtIDIlN9ItV
9MHqD4lV/ItF/GnABAIAAItN0I2UAUQBAACJVfiLRQyD6ASJReyLTeyLEYPqAYlV2ItF7ANF2IlF
1ItN1IsRiVXwi0XkO0XYD46wAgAAi03wg+EBhcl1C4tV2ANV8DlV5H4HM8DpVQUAAItF8MH4BIPo
AYlF4IN94D92B8dF4D8AAACLTdSLVdSLQQQ7QggPhdAAAACDfeAgc1+6AAAAgItN4NPq99KLRfyL
TdCLRIFEI8KLTfyLVdCJRIpEi0XQA0XgikgEgOkBi1XQA1XgiEoEi0XQA0XgD75IBIXJdRi6AAAA
gItN4NPq99KLRQiLCCPKi1UIiQrra4tN4IPpILgAAACA0+j30ItN/ItV0IuMisQAAAAjyItV/ItF
0ImMkMQAAACLTdADTeCKUQSA6gGLRdADReCIUASLTdADTeAPvlEEhdJ1HYtN4IPpILgAAACA0+j3
0ItNCItRBCPQi0UIiVAEi03Ui1EIi0XUi0gEiUoEi1XUi0IEi03Ui1EIiVAIi0XYA0XwK0XkiUXw
g33wAA+ORgEAAItN7ANN5IlN1ItV8MH6BIPqAYlV4IN94D92B8dF4D8AAACLReCLTfiNFMGJVeiL
RdSLTeiLUQSJUASLRdSLTeiJSAiLVeiLRdSJQgSLTdSLUQSLRdSJQgiLTdSLVdSLQQQ7QggPhcgA
AACDfeAgc1uLTdADTeAPvlEEi0XQA0XgikgEgMEBi0XQA0XgiEgEhdJ1FroAAACAi03g0+qLRQiL
CAvKi1UIiQq4AAAAgItN4NPoi038i1XQi0yKRAvIi1X8i0XQiUyQROtni03QA03gD75RBItF0ANF
4IpIBIDBAYtF0ANF4IhIBIXSdRuLTeCD6SC6AAAAgNPqi0UIi0gEC8qLVQiJSgSLTeCD6SC4AAAA
gNPoi038i1XQi4yKxAAAAAvIi1X8i0XQiYyQxAAAAItN1ItV8IkRi0XUA0Xwi03wiUj8i1Xkg8IB
i0XsiRCLTeSDwQGLVewDVeSJSvzpvAIAAItF5DtF2A+NsAIAAItN5IPBAYtV7IkKi0Xkg8ABi03s
A03kiUH8i1XsA1XkiVXsi0XYK0XkiUXYi03YwfkEg+kBiU3cg33cP3YHx0XcPwAAAItV8IPiAYXS
D4U7AQAAi0XwwfgEg+gBiUXgg33gP3YHx0XgPwAAAItN1ItV1ItBBDtCCA+F0AAAAIN94CBzX7oA
AACAi03g0+r30otF/ItN0ItEgUQjwotN/ItV0IlEikSLRdADReCKSASA6QGLVdADVeCISgSLRdAD
ReAPvkgEhcl1GLoAAACAi03g0+r30otFCIsII8qLVQiJCutri03gg+kguAAAAIDT6PfQi038i1XQ
i4yKxAAAACPIi1X8i0XQiYyQxAAAAItN0ANN4IpRBIDqAYtF0ANF4IhQBItN0ANN4A++UQSF0nUd
i03gg+kguAAAAIDT6PfQi00Ii1EEI9CLRQiJUASLTdSLUQiLRdSLSASJSgSLVdSLQgSLTdSLUQiJ
UAiLRdgDRfCJRdiLTdjB+QSD6QGJTdyDfdw/dgfHRdw/AAAAi1Xci0X4jQzQiU3oi1Xsi0Xoi0gE
iUoEi1Xsi0XoiUIIi03oi1XsiVEEi0Xsi0gEi1XsiVEIi0Xsi03si1AEO1EID4XGAAAAg33cIHNa
i0XQA0XcD75IBItV0ANV3IpCBAQBi1XQA1XciEIEhcl1FrgAAACAi03c0+iLTQiLEQvQi0UIiRC6
AAAAgItN3NPqi0X8i03Qi0SBRAvCi038i1XQiUSKROtmi0XQA0XcD75IBItV0ANV3IpCBAQBi1XQ
A1XciEIEhcl1G4tN3IPpILgAAACA0+iLTQiLUQQL0ItFCIlQBItN3IPpILoAAACA0+qLRfyLTdCL
hIHEAAAAC8KLTfyLVdCJhIrEAAAAi0Xsi03YiQiLVewDVdiLRdiJQvy4AQAAAIvlXcPMzMzMzFWL
7FGDPbg3QgAAD4QbAQAAobA3QgDB4A+LDbg3QgCLUQwD0IlV/GgAQAAAaACAAACLRfxQ/xW0UUIA
ugAAAICLDbA3QgDT6qG4N0IAi0gIC8qLFbg3QgCJSgihuDdCAItIEIsVsDdCAMeEkcQAAAAAAAAA
obg3QgCLSBCKUUOA6gGhuDdCAItIEIhRQ4sVuDdCAItCEA++SEOFyXUUixW4N0IAi0IEJP6LDbg3
QgCJQQSLFbg3QgCDegj/dWSDPbw3QgABfluhuDdCAItIEFFqAIsVdDpCAFL/FbBRQgChvDdCAGvA
FIsNwDdCAAPIixW4N0IAg8IUK8pRobg3QgCDwBRQiw24N0IAUegAGAAAg8QMixW8N0IAg+oBiRW8
N0IAxwW4N0IAAAAAAIvlXcNVi+yB7GgBAAChvDdCAGvAFFCLDcA3QgBR/xW8UUIAhcB0CIPI/+nu
BQAAixXAN0IAiZXE/v//x4Xg/v//AAAAAOsPi4Xg/v//g8ABiYXg/v//i43g/v//Ow28N0IAD42z
BQAAi5XE/v//i0IQiYWg/v//aMRBAACLjaD+//9R/xW8UUIAhcB0Crj+////6YYFAACLlcT+//+L
QgyJhdj+//+LjaD+//+BwUQBAACJTeiLlcT+//+LQgiJRfzHhbz+//8AAAAAx4Wo/v//AAAAAMdF
9AAAAADrCYtN9IPBAYlN9IN99CAPje4EAADHheT+//8AAAAAx4Ww/v//AAAAAMeF1P7//wAAAADH
hbT+//8AAAAA6w+LlbT+//+DwgGJlbT+//+DvbT+//9AfROLhbT+///HhIXo/v//AAAAAOvVg338
AA+MMQQAAGgAgAAAi43Y/v//Uf8VvFFCAIXAdAq4/P///+mtBAAAi5XY/v//iVX4x4XA/v//AAAA
AOsPi4XA/v//g8ABiYXA/v//g73A/v//CA+NdwEAAItN+IPBDImN0P7//4uV0P7//4HC8A8AAImV
yP7//4uF0P7//4N4/P91C4uNyP7//4M5/3QKuPv////pPQQAAIuV0P7//4sCiYW4/v//i424/v//
iY2s/v//i5Ws/v//g+IBhdJ0NouFuP7//4PoAYmFuP7//4G9uP7//wAEAAB+Crj6////6fEDAACL
jdT+//+DwQGJjdT+///rQouVuP7//8H6BIPqAYmVtP7//4O9tP7//z9+CseFtP7//z8AAACLhbT+
//+LjIXo/v//g8EBi5W0/v//iYyV6P7//4O9uP7//xB8GYuFuP7//4PgD4XAdQyBvbj+///wDwAA
fgq4+f///+lyAwAAi43Q/v//A424/v//i1H8O5Ws/v//dAq4+P///+lRAwAAi4XQ/v//A4W4/v//
iYXQ/v//i43Q/v//O43I/v//D4Lw/v//i5XQ/v//O5XI/v//dAq4+P///+kVAwAAi0X4BQAQAACJ
Rfjpbf7//4tN6IsRO5XU/v//dAq49////+nuAgAAi0XoiYXM/v//x0XsAAAAAOsJi03sg8EBiU3s
g33sQA+NLQIAAMeFmP7//wAAAACLlcz+//+JldD+//+LhdD+//+LSASJjaT+//+LlaT+//87lcz+
//8PhCMBAACLReyLjZj+//87jIXo/v//D4QNAQAAi5Wk/v//O5XY/v//chOLhdj+//8FAIAAADmF
pP7//3IKuPb////pUQIAAIuNpP7//4HhAPD//4mNnP7//4uVnP7//4PCDIlV8ItF8AXwDwAAiYXc
/v//i03wO43c/v//dB+LVfA7laT+//91AusSi0XwiwiD4f6LVfAD0YlV8OvWi0XwO4Xc/v//dQq4
9f///+nmAQAAi42k/v//ixHB+gSD6gGJlbT+//+DvbT+//8/fgrHhbT+//8/AAAAi4W0/v//O0Xs
dAq49P///+mqAQAAi42k/v//i1EIO5XQ/v//dAq48////+mPAQAAi4Wk/v//iYXQ/v//i42Y/v//
g8EBiY2Y/v//6bz+//+DvZj+//8AdG6DfewgfTK6AAAAgItN7NPqi4Xk/v//C8KJheT+//+6AAAA
gItN7NPqi4W8/v//C8KJhbz+///rNotN7IPpILoAAACA0+qLhbD+//8LwomFsP7//4tN7IPpILoA
AACA0+qLhaj+//8LwomFqP7//4uN0P7//4tRBDuVzP7//3USi0Xsi42Y/v//O4yF6P7//3QKuPL/
///pywAAAIuVzP7//4tCCDuF0P7//3QKuPH////psAAAAIuNzP7//4PBCImNzP7//+nA/f//i1X0
i4Wg/v//i43k/v//O0yQRHUYi1X0i4Wg/v//i42w/v//O4yQxAAAAHQHuPD////raIuV2P7//4HC
AIAAAImV2P7//4tF6AUEAgAAiUXoi0380eGJTfzp//r//4uVxP7//4uFvP7//zsCdRGLjcT+//+L
laj+//87UQR0B7jv////6xaLhcT+//+DwBSJhcT+///pLPr//zPAi+Vdw8zMzFWL7FGhdDdCAIlF
/ItNCIkNdDdCAItF/IvlXcPMzMzMVYvsoXQ3QgBdw8zMzMzMzFWL7FGhdDdCAIlF/IN9/AB0DotN
CFH/VfyDxASFwHUEM8DrBbgBAAAAi+Vdw8zMzItUJAyLTCQEhdJ0RzPAikQkCFeL+YP6BHIt99mD
4QN0CCvRiAdHSXX6i8jB4AgDwYvIweAQA8GLyoPiA8HpAnQG86uF0nQGiAdHSnX6i0QkCF/Di0Qk
BMPMzMzMzMzMzFWL7KFwN0IAUItNCFHoDgAAAIPECF3DzMzMzMzMzMzMVYvsUYN9COB2BDPA60WD
fQjgdxGLRQhQ6EMAAACDxASJRfzrB8dF/AAAAACDffwAdQaDfQwAdQWLRfzrFotNCFHoCv///4PE
BIXAdQQzwOsC67uL5V3DzMzMzMzMzMzMVYvsUYtFCDsFcDFCAHcai00IUego6f//g8QEiUX8g338
AHQFi0X86yyDfQgAdQfHRQgBAAAAi1UIg8IPg+LwiVUIi0UIUGoAiw10OkIAUf8V3FFCAIvlXcPM
zMzMzMzMVYvsuAEAAABdw8zMzMzMzFWL7IPsCIN9DOB2BDPA63iLRQhQ6Cfi//+DxASJRfiDffgA
dDXHRfwAAAAAi00MOw1wMUIAdx6LVQxSi0UIUItN+FHoyPD//4PEDIXAdAaLVQiJVfyLRfzrLoN9
DAB1B8dFDAEAAACLRQyDwA8k8IlFDItNDFGLVQhSahChdDpCAFD/FeRRQgCL5V3DzMzMzFWL7IPs
FIN9CAB1EYtFDFDoa/7//4PEBOmrAQAAg30MAHUTi00IUeikAQAAg8QEM8DpkgEAAMdF+AAAAACD
fQzgD4dUAQAAi1UIUuhg4f//g8QEiUX0g330AA+ECAEAAItFDDsFcDFCAHd7i00MUYtVCFKLRfRQ
6ATw//+DxAyFwHQIi00IiU3461uLVQxS6Kzn//+DxASJRfiDffgAdEaLRQiLSPyD6QGJTfyLVfw7
VQxzCItF/IlF8OsGi00MiU3wi1XwUotFCFCLTfhR6A3d//+DxAyLVQhSi0X0UOiN4f//g8QIg334
AHV6g30MAHUHx0UMAQAAAItNDIPBD4Ph8IlNDItVDFJqAKF0OkIAUP8V3FFCAIlF+IN9+AB0RotN
CItR/IPqAYlV/ItF/DtFDHMIi038iU3s6waLVQyJVeyLRexQi00IUYtV+FLojdz//4PEDItFCFCL
TfRR6A3h//+DxAjrM4N9DAB1B8dFDAEAAACLVQyDwg+D4vCJVQyLRQxQi00IUWoAixV0OkIAUv8V
5FFCAIlF+IN9+AB1CYM9cDdCAAB1BYtF+OsZi0UMUOg4/P//g8QEhcB1BDPA6wXpbv7//4vlXcPM
zMzMVYvsUYN9CAB1Aus6i0UIUOjL3///g8QEiUX8g338AHQSi00IUYtV/FLocuD//4PECOsTi0UI
UGoAiw10OkIAUf8VsFFCAIvlXcPMzMzMzMxVi+xRx0X8/v///+hw9f//hcB9B8dF/Pz///9qAGoA
oXQ6QgBQ/xXEUUIAhcB1KP8VyFFCAIP4eHUWxwXcNUIAeAAAAMcF2DVCACgAAADrB8dF/Pz///+L
RfyL5V3DzMxVi+zomP///13DzMzMzMzMVYvsg+wwU1ZXjUXgiUXcjU0QiU3Ug30IAHUeaCgPQgBq
AGpdaBwPQgBqAujAhf//g8QUg/gBdQHMM9KF0nXWg30MAHUeaKAAQgBqAGpeaBwPQgBqAuiWhf//
g8QUg/gBdQHMM8CFwHXWi03cx0EMQgAAAItV3ItFCIlCCItN3ItVCIkRi0Xcx0AE////f4tN1FGL
VQxSi0XcUOhRdP//g8QMiUXYi03ci1EEg+oBi0XciVAEi03cg3kEAHwii1XciwLGAAAzyYHh/wAA
AIlN0ItV3IsCg8ABi03ciQHrEYtV3FJqAOh2w///g8QIiUXQi0XYX15bi+Vdw8zMzMzMzFWL7IPs
DItFCIPAAT0AAQAAdxeLTQiLFWAuQgAzwGaLBEojRQzpiQAAAItNCMH5CIHh/wAAAIHh/wAAAIsV
YC5CADPAZosESiUAgAAAhcB0IotNCMH5CIHh/wAAAIhN9IpVCIhV9cZF9gDHRfgCAAAA6xGKRQiI
RfTGRfUAx0X4AQAAAGoBagBqAI1N/FGLVfhSjUX0UGoB6JMJAACDxByFwHUEM8DrC4tF/CX//wAA
I0UMi+Vdw8zMzMzMzMzMzFWL7FGLRQg7Bbw7QgBzH4tNCMH5BYtVCIPiH4sEjYA6QgAPvkzQBIPh
AYXJdQ/HBdg1QgAJAAAAg8j/622LVQjB+gWLRQiD4B+LDJWAOkIAD75UwQSD4gGF0nQ6i0UIUOi7
EAAAg8QEUP8V6FFCAIXAdQv/FchRQgCJRfzrB8dF/AAAAACDffwAdQLrGotN/IkN3DVCAMcF2DVC
AAkAAADHRfz/////i0X8i+Vdw8zMVYvsgewgBAAAi0UIOwW8O0IAcx+LTQjB+QWLVQiD4h+LBI2A
OkIAD75M0ASD4QGFyXUcxwXYNUIACQAAAMcF3DVCAAAAAACDyP/pTwIAAMdF8AAAAACLVfCJleD7
//+DfRAAdQczwOkyAgAAi0UIwfgFi00Ig+EfixSFgDpCAA++RMoEg+AghcB0EGoCagCLTQhR6CgC
AACDxAyLVQjB+gWLRQiD4B+LDJWAOkIAD75UwQSB4oAAAACF0g+ECAEAAItFDIlF/MdF9AAAAACL
TfwrTQw7TRAPg+oAAACNlez7//+JVfiLRfiNjez7//8rwT0ABAAAfV+LVfwrVQw7VRBzVItF/IoI
iI3k+///i1X8g8IBiVX8D76F5Pv//4P4CnUei43g+///g8EBiY3g+///i1X4xgINi0X4g8ABiUX4
i034ipXk+///iBGLRfiDwAGJRfjrj2oAjY3o+///UYtV+I2F7Pv//yvQUo2N7Pv//1GLVQjB+gWL
RQiD4B+LDJWAOkIAixTBUv8VZFFCAIXAdCOLRfADhej7//+JRfCLTfiNlez7//8ryjmN6Pv//30C
6xLrC/8VyFFCAIlF9OsF6Qf////rTWoAjYXo+///UItNEFGLVQxSi0UIwfgFi00Ig+EfixSFgDpC
AIsEylD/FWRRQgCFwHQSx0X0AAAAAIuN6Pv//4lN8OsJ/xXIUUIAiUX0g33wAHV5g330AHQsg330
BXUVxwXYNUIACQAAAItV9IkV3DVCAOsMi0X0UOh6DwAAg8QEg8j/61CLTQjB+QWLVQiD4h+LBI2A
OkIAD75M0ASD4UCFyXQPi1UMD74Cg/gadQQzwOsixwXYNUIAHAAAAMcF3DVCAAAAAACDyP/rCYtF
8CuF4Pv//4vlXcPMzMzMzMzMzMzMzMzMzFWL7GoC6EZt//+DxARdw8xVi+yD7AyLRQg7Bbw7QgBz
H4tNCMH5BYtVCIPiH4sEjYA6QgAPvkzQBIPhAYXJdRzHBdg1QgAJAAAAxwXcNUIAAAAAAIPI/+me
AAAAi1UIUuhbDQAAg8QEiUX0g330/3UPxwXYNUIACQAAAIPI/+t6i0UQUGoAi00MUYtV9FL/FexR
QgCJRfiDffj/dQv/FchRQgCJRfzrB8dF/AAAAACDffwAdBGLRfxQ6FIOAACDxASDyP/rNItNCMH5
BYtVCIPiH4sEjYA6QgCKTNAEgOH9i1UIwfoFi0UIg+AfixSVgDpCAIhMwgSLRfiL5V3DzMxVi+xR
U1ZXg30IAHUeaLgBQgBqAGouaHQPQgBqAuiuf///g8QUg/gBdQHMM8CFwHXWiw3QNUIAg8EBiQ3Q
NUIAi1UIiVX8ajtodA9CAGoCaAAQAADoJ5v//4PEEItN/IlBCItV/IN6CAB0G4tF/ItIDIPJCItV
/IlKDItF/MdAGAAQAADrJYtN/ItRDIPKBItF/IlQDItN/IPBFItV/IlKCItF/MdAGAIAAACLTfyL
VfyLQgiJAYtN/MdBBAAAAABfXluL5V3DzMzMzMzMzMzMVYvsg+wIU1ZXx0X8/////4tFCIlF+ItN
+ItRDIPiQIXSdBKLRfjHQAwAAAAAg8j/6aEAAACDfQgAdR5ouAFCAGoAandogA9CAGoC6LB+//+D
xBSD+AF1AcwzyYXJddaLVfiLQgwlgwAAAIXAdFuLTfhR6Dm5//+DxASJRfyLVfhS6DoOAACDxASL
RfiLSBBR6DsNAACDxASFwH0Jx0X8/////+ski1X4g3ocAHQbagKLRfiLSBxR6DSk//+DxAiLVfjH
QhwAAAAAi0X4x0AMAAAAAItF/F9eW4vlXcPMzMxVi+xq/2iYD0IAaHRAQABkoQAAAABQZIklAAAA
AIPE3FNWV4ll6IM9mDdCAAB1V2oAagBqAWiQD0IAaAABAABqAP8V+FFCAIXAdAzHBZg3QgABAAAA
6y9qAGoAagFojA9CAGgAAQAAagD/FfRRQgCFwHQMxwWYN0IAAgAAAOsHM8DpawIAAIN9FAB+E4tF
FFCLTRBR6HcCAACDxAiJRRSDPZg3QgACdSOLVRxSi0UYUItNFFGLVRBSi0UMUItNCFH/FfRRQgDp
JgIAAIM9mDdCAAEPhRcCAACDfSAAdQmLFZA3QgCJVSBqAGoAi0UUUItNEFGLVST32hvSg+IIg8IB
UotFIFD/FfBRQgCJReSDfeQAdQczwOnWAQAAx0X8AAAAAItF5NHgg8ADJPzob8T//4ll0Ill6ItN
0IlN3MdF/P/////rF7gBAAAAw4tl6MdF3AAAAADHRfz/////g33cAHUHM8DphwEAAItV5FKLRdxQ
i00UUYtVEFJqAYtFIFD/FfBRQgCFwHUHM8DpYAEAAGoAagCLTeRRi1XcUotFDFCLTQhR/xX4UUIA
iUXYg33YAHUHM8DpNgEAAItVDIHiAAQAAIXSdEODfRwAdDiLRdg7RRx+BzPA6RQBAACLTRxRi1UY
UotF5FCLTdxRi1UMUotFCFD/FfhRQgCFwHUHM8Dp6wAAAOnfAAAAi03YiU3Ux0X8AQAAAItF1NHg
g8ADJPzoecP//4llzIll6ItVzIlV4MdF/P/////rF7gBAAAAw4tl6MdF4AAAAADHRfz/////g33g
AHUHM8DpkQAAAItF1FCLTeBRi1XkUotF3FCLTQxRi1UIUv8V+FFCAIXAdQQzwOtrg30cAHUuagBq
AGoAagCLRdRQi03gUWggAgAAi1UgUv8VkFFCAIlF2IN92AB1BDPA6znrMGoAagCLRRxQi00YUYtV
1FKLReBQaCACAACLTSBR/xWQUUIAiUXYg33YAHUEM8DrB4tF2OsCM8CNZcCLTfBkiQ0AAAAAX15b
i+Vdw8zMzMzMzMzMzMzMVYvsg+wIi0UMiUX4i00IiU38i1X4i0X4g+gBiUX4hdJ0FYtN/A++EYXS
dAuLRfyDwAGJRfzr24tN/A++EYXSdQiLRfwrRQjrA4tFDIvlXcNVi+xq/2iwD0IAaHRAQABkoQAA
AABQZIklAAAAAIPE5FNWV4ll6IM9nDdCAAB1T41F5FBqAWiQD0IAagH/FQBSQgCFwHQMxwWcN0IA
AQAAAOssjU3kUWoBaIwPQgBqAWoA/xX8UUIAhcB0DMcFnDdCAAIAAADrBzPA6SoBAACDPZw3QgAC
dS6DfRwAdQmLFYA3QgCJVRyLRRRQi00QUYtVDFKLRQhQi00cUf8V/FFCAOnzAAAAgz2cN0IAAQ+F
5AAAAIN9GAB1CYsVkDdCAIlVGGoAagCLRRBQi00MUYtVIPfaG9KD4giDwgFSi0UYUP8V8FFCAIlF
4IN94AB1BzPA6aMAAADHRfwAAAAAi0Xg0eCDwAMk/Oglwf//iWXUiWXoi03UiU3ci1Xg0eJSagCL
RdxQ6Cjv//+DxAzHRfz/////6xe4AQAAAMOLZejHRdwAAAAAx0X8/////4N93AB1BDPA60OLTeBR
i1XcUotFEFCLTQxRagGLVRhS/xXwUUIAiUXYg33YAHUEM8DrGotFFFCLTdhRi1XcUotFCFD/FQBS
QgDrAjPAjWXIi03wZIkNAAAAAF9eW4vlXcPMzMzMVYvsV1aLdQyLTRCLfQiLwYvRA8Y7/nYIO/gP
gngBAAD3xwMAAAB1FMHpAoPiA4P5CHIp86X/JJVosEAAi8e6AwAAAIPpBHIMg+ADA8j/JIWAr0AA
/ySNeLBAAJD/JI38r0AAkJCvQAC8r0AA4K9AACPRigaIB4pGAYhHAYpGAsHpAohHAoPGA4PHA4P5
CHLM86X/JJVosEAAjUkAI9GKBogHikYBwekCiEcBg8YCg8cCg/kIcqbzpf8klWiwQACQI9GKBogH
RsHpAkeD+QhyjPOl/ySVaLBAAI1JAF+wQABMsEAARLBAADywQAA0sEAALLBAACSwQAAcsEAAi0SO
5IlEj+SLRI7oiUSP6ItEjuyJRI/si0SO8IlEj/CLRI70iUSP9ItEjviJRI/4i0SO/IlEj/yNBI0A
AAAAA/AD+P8klWiwQACL/3iwQACAsEAAjLBAAKCwQACLRQheX8nDkIoGiAeLRQheX8nDkIoGiAeK
RgGIRwGLRQheX8nDjUkAigaIB4pGAYhHAYpGAohHAotFCF5fycOQjXQx/I18Ofz3xwMAAAB1JMHp
AoPiA4P5CHIN/fOl/P8klQCyQACL//fZ/ySNsLFAAI1JAIvHugMAAACD+QRyDIPgAyvI/ySFCLFA
AP8kjQCyQACQGLFAADixQABgsUAAikYDI9GIRwNOwekCT4P5CHK2/fOl/P8klQCyQACNSQCKRgMj
0YhHA4pGAsHpAohHAoPuAoPvAoP5CHKM/fOl/P8klQCyQACQikYDI9GIRwOKRgKIRwKKRgHB6QKI
RwGD7gOD7wOD+QgPglr////986X8/ySVALJAAI1JALSxQAC8sUAAxLFAAMyxQADUsUAA3LFAAOSx
QAD3sUAAi0SOHIlEjxyLRI4YiUSPGItEjhSJRI8Ui0SOEIlEjxCLRI4MiUSPDItEjgiJRI8Ii0SO
BIlEjwSNBI0AAAAAA/AD+P8klQCyQACL/xCyQAAYskAAKLJAADyyQACLRQheX8nDkIpGA4hHA4tF
CF5fycONSQCKRgOIRwOKRgKIRwKLRQheX8nDkIpGA4hHA4pGAohHAopGAYhHAYtFCF5fycPMzMzM
zMzMzMzMzFWL7IPsDMdF+P/////HRfQAAAAA6wmLRfSDwAGJRfSDffRAD439AAAAi030gzyNgDpC
AAB0b4tV9IsElYA6QgCJRfzrCYtN/IPBCIlN/ItV9IsElYA6QgAFAAEAADlF/HM2i038D75RBIPi
AYXSdSaLRfzHAP////+LTfTB4QWLVfSLRfwrBJWAOkIAwfgDA8iJTfjrAuutg334/3QF6YMAAADr
fGp5aLwPQgBqAmgAAQAA6LiQ//+DxBCJRfyDffwAdFuLTfSLVfyJFI2AOkIAobw7QgCDwCCjvDtC
AOsJi038g8EIiU38i1X0iwSVgDpCAAUAAQAAOUX8cxmLTfzGQQQAi1X8xwL/////i0X8xkAFCuvK
i030weEFiU346wXp8P7//4tF+IvlXcPMzMxVi+xRi0UIOwW8O0IAD4OBAAAAi00IwfkFi1UIg+If
iwSNgDpCAIM80P91aIM9NCpCAAF1QotNCIlN/IN9/AB0DoN9/AF0FoN9/AJ0Husoi1UMUmr2/xUE
UkIA6xqLRQxQavX/FQRSQgDrDItNDFFq9P8VBFJCAItVCMH6BYtFCIPgH4sMlYA6QgCLVQyJFMEz
wOsXxwXYNUIACQAAAMcF3DVCAAAAAACDyP+L5V3DzFWL7FGLRQg7Bbw7QgAPg5sAAACLTQjB+QWL
VQiD4h+LBI2AOkIAD75M0ASD4QGFyXR8i1UIwfoFi0UIg+AfiwyVgDpCAIM8wf90Y4M9NCpCAAF1
PItVCIlV/IN9/AB0DoN9/AF0FIN9/AJ0GusiagBq9v8VBFJCAOsWagBq9f8VBFJCAOsKagBq9P8V
BFJCAItFCMH4BYtNCIPhH4sUhYA6QgDHBMr/////M8DrF8cF2DVCAAkAAADHBdw1QgAAAAAAg8j/
i+Vdw8zMzMzMzMxVi+yLRQg7Bbw7QgBzN4tNCMH5BYtVCIPiH4sEjYA6QgAPvkzQBIPhAYXJdBiL
VQjB+gWLRQiD4B+LDJWAOkIAiwTB6xfHBdg1QgAJAAAAxwXcNUIAAAAAAIPI/13DzMxVi+yD7AzG
RfQAi0UMg+AIhcB0CYpN9IDJIIhN9ItVDIHiAEAAAIXSdAiKRfQMgIhF9ItNDIHhgAAAAIXJdAmK
VfSAyhCIVfSLRQhQ/xWgUUIAiUX8g338AHUU/xXIUUIAUOiJAAAAg8QEg8j/632DffwCdQuKTfSA
yUCITfTrD4N9/AN1CYpV9IDKCIhV9Ohc/P//iUX4g334/3UZxwXYNUIAGAAAAMcF3DVCAAAAAACD
yP/rNotFCFCLTfhR6F39//+DxAiKVfSAygGIVfSLRfjB+AWLTfiD4R+LFIWAOkIAikX0iETKBItF
+IvlXcNVi+xRi0UIo9w1QgDHRfwAAAAA6wmLTfyDwQGJTfyDffwtcyOLVfyLRQg7BNWQMUIAdRKL
TfyLFM2UMUIAiRXYNUIA60LrzoN9CBNyEoN9CCR3DMcF2DVCAA0AAADrKIF9CLwAAAByFYF9CMoA
AAB3DMcF2DVCAAgAAADrCscF2DVCABYAAACL5V3DzMzMzMxVi+xRVotFCDsFvDtCAHMfi00IwfkF
i1UIg+IfiwSNgDpCAA++TNAEg+EBhcl1HMcF2DVCAAkAAADHBdw1QgAAAAAAg8j/6Z0AAACLVQhS
6Mz9//+DxASD+P90PYN9CAF0BoN9CAJ1GmoB6LH9//+DxASL8GoC6KX9//+DxAQ78HQXi0UIUOiV
/f//g8QEUP8VCFJCAIXAdAnHRfwAAAAA6wn/FchRQgCJRfyLTQhR6Jz8//+DxASLVQjB+gWLRQiD
4B+LDJWAOkIAxkTBBACDffwAdBGLVfxS6JL+//+DxASDyP/rAjPAXovlXcPMzMxVi+xTVleDfQgA
dR5o1A9CAGoAajBoyA9CAGoC6B9w//+DxBSD+AF1AcwzwIXAddaLTQiLUQyB4oMAAACF0nRNi0UI
i0gMg+EIhcl0QGoCi1UIi0IIUOjVlf//g8QIi00Ii1EMgeL3+///i0UIiVAMi00IxwEAAAAAi1UI
x0IIAAAAAItFCMdABAAAAABfXltdw/8lUFFCAP8lVFFCAP8lWFFCAP8lXFFCAP8lYFFCAP8lZFFC
AP8laFFCAP8lbFFCAP8lcFFCAP8ldFFCAP8leFFCAP8lfFFCAP8lgFFCAP8lhFFCAP8liFFCAP8l
jFFCAP8lkFFCAP8llFFCAP8lmFFCAP8lnFFCAP8loFFCAP8lpFFCAP8lqFFCAP8lrFFCAP8lsFFC
AP8ltFFCAP8luFFCAP8lvFFCAP8lwFFCAP8lxFFCAP8lyFFCAP8lzFFCAP8l0FFCAP8l1FFCAP8l
2FFCAP8l3FFCAP8l4FFCAP8l5FFCAP8l6FFCAP8l7FFCAP8l8FFCAP8l9FFCAP8l+FFCAP8l/FFC
AP8lAFJCAP8lBFJCAP8lCFJCAMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzM
zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMwAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAMAlQABQ
fUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA8CZAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAg
L0AAAQAAAEgCQgA4AkIAQD9CAAAAAABAP0IAAQEAAAAAAAAAAAAAABAAAAAAAAAAAAAAAAAAAAAA
AAACAAAAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIAAAACAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAP////8CAAAABAAAAAQAAAD///////////////+Q
AkIAiAJCAHQCQgAFAADACwAAAAAAAAAdAADABAAAAAAAAACWAADABAAAAAAAAACNAADACAAAAAAA
AACOAADACAAAAAAAAACPAADACAAAAAAAAACQAADACAAAAAAAAACRAADACAAAAAAAAACSAADACAAA
AAAAAACTAADACAAAAAAAAAADAAAABwAAAAoAAACMAAAA/////wAKAAAQAAAAIAWTGQAAAAAAAAAA
AAAAAAAAAAACAAAAOAdCAAgAAAAMB0IACQAAAOAGQgAKAAAAvAZCABAAAACQBkIAEQAAAGAGQgAS
AAAAPAZCABMAAAAQBkIAGAAAANgFQgAZAAAAsAVCABoAAAB4BUIAGwAAAEAFQgAcAAAAGAVCAHgA
AAAIBUIAeQAAAPgEQgB6AAAA6ARCAPwAAADkBEIA/wAAANQEQgABAAAAAQAAAP/////93c0AwAdC
ALgHQgC0B0IArAdCAKQHQgBQp0AAUKdAAFCnQABQp0AAUKdAAFCnQAAAAAAAai5CAGouQgAAACAA
IAAgACAAIAAgACAAIAAgACgAKAAoACgAKAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAg
ACAAIABIABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQAIQAhACEAIQAhACEAIQAhACEAIQA
EAAQABAAEAAQABAAEACBAIEAgQCBAIEAgQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQAB
AAEAAQABAAEAEAAQABAAEAAQABAAggCCAIIAggCCAIIAAgACAAIAAgACAAIAAgACAAIAAgACAAIA
AgACAAIAAgACAAIAAgACABAAEAAQABAAIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAQIECAAAAACkAwAAYIJ5giEAAAAAAAAApt8AAAAAAAChpQAAAAAAAIGf4PwAAAAAQH6A/AAA
AACoAwAAwaPaoyAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIH+AAAAAAAAQP4AAAAAAAC1AwAAwaPa
oyAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIH+AAAAAAAAQf4AAAAAAAC2AwAAz6LkohoA5aLoolsA
AAAAAAAAAAAAAAAAAAAAAIH+AAAAAAAAQH6h/gAAAABRBQAAUdpe2iAAX9pq2jIAAAAAAAAAAAAA
AAAAAAAAAIHT2N7g+QAAMX6B/gAAAAAAAAAAAAAAAPgDAAAAAAAAAAAAAAAAAAAAn0AAAQAAAC4A
AAABAAAAAQAAABYAAAACAAAAAgAAAAMAAAACAAAABAAAABgAAAAFAAAADQAAAAYAAAAJAAAABwAA
AAwAAAAIAAAADAAAAAkAAAAMAAAACgAAAAcAAAALAAAACAAAAAwAAAAWAAAADQAAABYAAAAPAAAA
AgAAABAAAAANAAAAEQAAABIAAAASAAAAAgAAACEAAAANAAAANQAAAAIAAABBAAAADQAAAEMAAAAC
AAAAUAAAABEAAABSAAAADQAAAFMAAAANAAAAVwAAABYAAABZAAAACwAAAGwAAAANAAAAbQAAACAA
AABwAAAAHAAAAHIAAAAJAAAABgAAABYAAACAAAAACgAAAIEAAAAKAAAAggAAAAkAAACDAAAAFgAA
AIQAAAANAAAAkQAAACkAAACeAAAADQAAAKEAAAACAAAApAAAAAsAAACnAAAADQAAALcAAAARAAAA
zgAAAAIAAADXAAAACwAAABgHAAAMAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAKFAC
AAAAAAAAAAAAjlICAERRAgAAAAAAAAAAAAAAAAAAAAAAAAAAAGBSAgBoUgIAelICAJxSAgCuUgIA
vFICAMpSAgDYUgIA6FICAPRSAgAMUwIAIlMCADJTAgBKUwIAYFMCAHRTAgCIUwIApFMCAL5TAgDY
UwIA7lMCAAZUAgAgVAIAMlQCAEBUAgBSVAIAYFQCAG5UAgB6VAIAiFQCAJRUAgCkVAIAtFQCAMRU
AgDUVAIA7FQCAPhUAgACVQIADlUCABpVAgAqVQIAOFUCAExVAgBeVQIAdFUCAIRVAgCUVQIAplUC
ALhVAgDIVQIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAYFICAGhSAgB6UgIAnFICAK5SAgC8
UgIAylICANhSAgDoUgIA9FICAAxTAgAiUwIAMlMCAEpTAgBgUwIAdFMCAIhTAgCkUwIAvlMCANhT
AgDuUwIABlQCACBUAgAyVAIAQFQCAFJUAgBgVAIAblQCAHpUAgCIVAIAlFQCAKRUAgC0VAIAxFQC
ANRUAgDsVAIA+FQCAAJVAgAOVQIAGlUCACpVAgA4VQIATFUCAF5VAgB0VQIAhFUCAJRVAgCmVQIA
uFUCAMhVAgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACWAlNsZWVwAD4BR2V0UHJvY0FkZHJl
c3MAACYBR2V0TW9kdWxlSGFuZGxlQQAAS0VSTkVMMzIuZGxsAADKAEdldENvbW1hbmRMaW5lQQB0
AUdldFZlcnNpb24AAH0ARXhpdFByb2Nlc3MAUQBEZWJ1Z0JyZWFrAABSAUdldFN0ZEhhbmRsZQAA
3wJXcml0ZUZpbGUArQFJbnRlcmxvY2tlZERlY3JlbWVudAAA9QFPdXRwdXREZWJ1Z1N0cmluZ0EA
AMIBTG9hZExpYnJhcnlBAACwAUludGVybG9ja2VkSW5jcmVtZW50AAAkAUdldE1vZHVsZUZpbGVO
YW1lQQAAngJUZXJtaW5hdGVQcm9jZXNzAAD3AEdldEN1cnJlbnRQcm9jZXNzAK0CVW5oYW5kbGVk
RXhjZXB0aW9uRmlsdGVyAACyAEZyZWVFbnZpcm9ubWVudFN0cmluZ3NBALMARnJlZUVudmlyb25t
ZW50U3RyaW5nc1cA0gJXaWRlQ2hhclRvTXVsdGlCeXRlAAYBR2V0RW52aXJvbm1lbnRTdHJpbmdz
AAgBR2V0RW52aXJvbm1lbnRTdHJpbmdzVwAAbQJTZXRIYW5kbGVDb3VudAAAFQFHZXRGaWxlVHlw
ZQBQAUdldFN0YXJ0dXBJbmZvQQCdAUhlYXBEZXN0cm95AJsBSGVhcENyZWF0ZQAAnwFIZWFwRnJl
ZQAAvwJWaXJ0dWFsRnJlZQAvAlJ0bFVud2luZAC4AUlzQmFkV3JpdGVQdHIAtQFJc0JhZFJlYWRQ
dHIAAKcBSGVhcFZhbGlkYXRlAAAaAUdldExhc3RFcnJvcgAAQQJTZXRDb25zb2xlQ3RybEhhbmRs
ZXIAvwBHZXRDUEluZm8AuQBHZXRBQ1AAADEBR2V0T0VNQ1AAAJkBSGVhcEFsbG9jALsCVmlydHVh
bEFsbG9jAACiAUhlYXBSZUFsbG9jAKoARmx1c2hGaWxlQnVmZmVycwAAagJTZXRGaWxlUG9pbnRl
cgAA5AFNdWx0aUJ5dGVUb1dpZGVDaGFyAL8BTENNYXBTdHJpbmdBAADAAUxDTWFwU3RyaW5nVwAA
UwFHZXRTdHJpbmdUeXBlQQAAVgFHZXRTdHJpbmdUeXBlVwAAfAJTZXRTdGRIYW5kbGUAABsAQ2xv
c2VIYW5kbGUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQAACsAAAATjBU
MHMwfTCOMJgw2zDtMBgxQzFuMZkxxDHvMZQymjKgMsYyzzLqMgIzEjM/M0QzSzN2M3szmDOdM6Iz
rzO1M8EzxzPQM9Yz2zPoMxI0FzQhNDY0PDRCNEg0TzSlNMQ01TT0NBA1GTVWNWg1gDWJNak1szXM
NdY1EzZ0NoA2hzezN9w38Tc3OEM46DjvOAo6ETqhOqg6ZDs2PJ08zz3xPRM+AAAAIAAALAEAAEQz
SDNMM1AzVDNYM1wzYDNkM2gzbDNwM3QzeDONM5EzlTOZM50z0DPUM9gz3DPgM+Qz6DPsM/Az9DP4
M/wzADQENAg0xjXPNds15DXyNfs1CTYPNhg2JjYwNj42RDZ0Nn02rTbGNtg2+zYVN0E3XDdsN6M3
rzfAN8o32jfkN/M3BTgROKE4pzi1OL04wzjXOOQ46TjvOAc5FDkkOSk5LzlqOY85mznXOeM59zkg
Oj06ajqFOpc6nTqyOsM68Dr3OgE7FTsfO4Y7jDufO6U7xDvQO/47BzxNPKU8xDzQPO88CT0VPSk9
NT1QPWA9bD2HPZc9oz3EPdc94z03Pj0+Wj5zPsU+zj7TPtg+5T7qPoY/kz+aP6A/rT+5P8I/1z/t
P/I//z8AMAAA5AAAAAQwEjAsMEMwUTC6MPwwBzEaMS4xNDFFMVAxZDF+MZUxrDHDMdox8TH7MQwy
LjJHMl8yZzJ0Mn0yqDK8MvwyHTMjMzUzcTO7M8oz3TP4Mwo0EjQYNBw0ITQuNDg0YzS3NMA0PTUh
NhQ4aDgWOR85Ljk6OUk5XDlvOdk56TkKOi86UjpgOnM6xzroOgo7LDtWO1w7cTufOwk8HDw6PEw8
UjxbPHA88zwKPU89kT2kPRw+Iz5SPmE+dD6mPqs+sT7HPs4+5z4FPx4/Mj9AP0c/WD9gP2c/bT90
P4g/yT8AQAAACAEAADIwTDBVMFUxXjFnMX0xhjHNMeAx+TEWMh8yKDI7Mk8yWDJfMoUyjjLaMusy
EjMrM0UzhTOYM6QzujPgM6E0tjTCNN406jQLNSU1SDVNNYQ1rDX7NQA2RzZQNqU2rjazNrs2wTbH
Ns821TbbNuM29Db9Nj83STdjN4I31zjsOPg4FDkgOUM5XTmAOYU5tzkBOgY6NzpDOpI6njr4OgQ7
bjt3O4U7jTuTO5w7pDusO7I7uzvBO8c7zjvTO/g7FDxnPHM8vDzGPNI88zwQPRo9Jj1GPUw9VT1l
PW49gz0YPi0+OT5yPn4+gz65PsU+Gz8nP0I/VT+KP5A/tD/wP/Y/AAAAUAAAzAAAADUwQTBpMK0w
uTDXMN8w5TAOMRgxJDFGMWQxbjF6MZsxrTHhMSoyPzJLMnYygjLYMuQyKTM1M3UzgTPjM+8zJTQx
NJU04TQxNTY1OzVhNWY1iTWONbE1tjXZNd41BjZrNnc2fjapNtQ2Bjc/N183qDfbNxE4FTgZOB04
NThHOGU4dDjUOOw4aDmHOY45Hzp1OoE6oDqlOs86DTuiO8w72DsTPBg8xTwxPTY9ZT2/Pfc9KD5Q
Pp8+wT7YPgo/Zz+ZP54/AAAAYAAAaAAAACAwTTCQML0wEzElMSoxoDG8MeYxDjJIMmEyyzPZM+wz
CjQ1NEs0FTUtNVg1bjV1NYo1oDapNqQ3rTfzN/83eziKODQ5PjlNOWU5kDmlOcw81Tz2PP88tj6/
PuA+6T4AAABwAABwAQAAETAbMCEwLDA4MD0wYjBpMG8wezCCMI4wljChMKkwtTAOMRcxKzE6MT4x
QjFGMUoxfzGXMQIyCTIQMiEyMjJDMqIyqzK7MsQy1DLoMu4y/zIcM0szWjNkM2gzbDNwM3QzeDOn
M8AzyDPVM94zDzQXNB00KzQ1NDo0QDRMNFY0WzRgNGo0bzR1NH40jTSaNLM02jUiNk82ejbNNtM2
3DbuNvQ2/jYMNzg3QDdhN443mTefNwA4DDg0OEA4SDhWOFw4aDiPOKI4xTjVON847Tj3OAU5DjlD
OUo5cDl0OXg5fDmAOcQ5zTnXOeE5CDoyOjk66zryOhg7PDtNO3E7yDvXO/A7FTwjPDw8SjyePK08
wjziPPE8Bj0UPTU9PT1VPWg9uD3QPdc93z3kPeg97D0VPjs+VT5cPmA+ZD5oPmw+cD50Png+wj7I
Psw+0D7UPjo/RT9gP2c/bD9wP3Q/kT+7P+0/9D/4P/w/AIAAALgAAAAAMAQwCDAMMBAwWjBgMGQw
aDBsMMQw5DD7MAIxBzENMRoxIDEmMTAxOjFXMWAxazF6NoY2jzarNrY2vTbINtA22TbpNvc2AzcU
Nx83KDc+N0g3TjdaN2E3ZzdvN3c3gzeMN5s3pDetN743xDfNN9U36DfxN0M4gziPOL447zj7OBw5
YjnoPPM8+zwlPSs9Mz1APUg9Tz1oPW49dz18PYU9lz2ePcU94z3qPRA+GD7ZPgCQAABkAAAAFjYi
Nis2RzZSNlk2ZDZsNnU2hTaTNp82sDa7NsQ20DbYNuQ26zbwNvk2ATcMNxY3JTcuNzQ3SjdUN1s3
bTeaN8E3mjhlPXE9hD2VPSQ+qT7qPvE+QT+NP5Q/AAAAoAAA4AAAAAUwrDCzMC0xNDFDMasxsjHg
Mecx8TH8MQYyTDJVMnYyfzJIM24z+TMONCA0PDRbNGU0gjSINK40wzTVNN80GzVKNSI2LDZZNok2
kzavNso21zb9Nh43KDdrN4A3kjecN8M34DfvNyU4PzheOGc4gziMOJk4XDllOQY6CzooOjY6QzpN
Ol46azp1OqE6wjrNOuA6BzuCO6c79zt6PKc82TxmPWs9iD2WPZ49qD25PcM9zT3gPe89DD4XPio+
UT7dPgA/WD9wP3c/fz+EP4g/jD+1P9s/9T/8PwCwAAAYAQAAADAEMAgwDDAQMBQwGDBiMGgwbDBw
MHQw2jDlMAAxBzEMMRAxFDExMVsxjTGUMZgxnDGgMaQxqDGsMbAx+jEAMgQyCDIMMo8ynDK0Mugy
CDMtMzIzOjNPM5kzsjO+M+cz9TMDNBY0JjQwNEk0YjSBNI00tDTANMw03zTwNPo0GDUtNUw1VzVh
Nb41zTUPNhk2TjZoNo02mTafNrU20zbfNvo2DzchNys3gTeUN7Y37Tf2N3Q4ejiAOIY4jDiSOJg4
njikOKo4sDi2OLw4wjjIOM441DjaOOA45jjsOPI4+Dj+OAQ5CjkQORY5HDkiOSg5Ljk0OTo5QDlG
OUw5UjlYOV45ZDlqOXA5djl8OYI5iDkAAAIAGAAAAKQxqDGcP6A/qD+sP7Q/uD8AIAIAXAAAAAwz
EDMgNjA6ODo8OkA6SDrcPOA85DyUPZw9pD2sPbQ9vD3EPcw91D3cPeQ97D30Pfw9BD4MPhQ+HD4w
PjQ+OD48PkA+RD5IPkw+UD5UPlg+YD5kPgAwAgAMAAAAgDEAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABOQjEwAAAAABGpK1ACAAAARDpc
dGVzdFx0aW1lXERlYnVnXHRlc3QucGRiAA==

------=_001_NextPart215818845028_=----
Content-Type: text/plain;
	name="test.cpp"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
	filename="test.cpp"

#include <stdio.h>
#include <windows.h>
typedef int (__stdcall *NTSETTIMER)(IN ULONG RequestedResolution, IN BOOLEAN Set, OUT PULONG ActualResolution );
typedef int (__stdcall *NTQUERYTIMER)(OUT PULONG   MinimumResolution, OUT PULONG MaximumResolution, OUT PULONG CurrentResolution );

int main()
{
	DWORD min_res = 0, max_res = 0, cur_res = 0, ret = 0;
	HMODULE  hdll = NULL;
	hdll = GetModuleHandle("ntdll.dll");
	NTSETTIMER AddrNtSetTimer = 0;
	NTQUERYTIMER AddrNtQueyTimer = 0;
	
	AddrNtSetTimer = (NTSETTIMER) GetProcAddress(hdll, "NtSetTimerResolution");
	AddrNtQueyTimer = (NTQUERYTIMER)GetProcAddress(hdll, "NtQueryTimerResolution");

	while (1)
	{
		ret = AddrNtQueyTimer(&min_res, &max_res, &cur_res);
		printf("min_res = %d, max_res = %d, cur_res = %d\n",min_res, max_res, cur_res);
		Sleep(5);
		ret = AddrNtSetTimer(10000, 1, &cur_res);
		Sleep(5);
		ret = AddrNtSetTimer(10000, 0, &cur_res);
		Sleep(5);
		ret = AddrNtSetTimer(50000, 1, &cur_res);
		Sleep(5);
		ret = AddrNtSetTimer(50000, 0, &cur_res);
		Sleep(5);
		ret = AddrNtSetTimer(100000, 1, &cur_res);
		Sleep(5);
		ret = AddrNtSetTimer(100000, 0, &cur_res);
		Sleep(5);
	}

	return 0;
}

------=_001_NextPart215818845028_=----
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

------=_001_NextPart215818845028_=------



From xen-devel-bounces@lists.xen.org Wed Aug 15 14:12:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 14:12:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1eKr-0006NF-Nb; Wed, 15 Aug 2012 14:12:37 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <tupeng212@gmail.com>) id 1T1eKp-0006My-A9
	for xen-devel@lists.xen.org; Wed, 15 Aug 2012 14:12:35 +0000
X-Env-Sender: tupeng212@gmail.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1345039946!3018538!1
X-Originating-IP: [209.85.160.173]
X-SpamReason: No, hits=1.4 required=7.0 tests=HTML_60_70,HTML_MESSAGE,
	MAILTO_TO_SPAM_ADDR,MIME_BASE64_TEXT,MIME_BOUND_NEXTPART,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6598 invoked from network); 15 Aug 2012 14:12:27 -0000
Received: from mail-gh0-f173.google.com (HELO mail-gh0-f173.google.com)
	(209.85.160.173)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Aug 2012 14:12:27 -0000
Received: by ghrr17 with SMTP id r17so2009308ghr.32
	for <xen-devel@lists.xen.org>; Wed, 15 Aug 2012 07:12:25 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=date:from:to:cc:reply-to:subject:references:x-priority:x-guid
	:x-has-attach:x-mailer:mime-version:message-id:content-type;
	bh=vmCu4AMsEafAtSK52KcqpO0j+0U1kFV379rSnptYBPA=;
	b=vywdXz9bcIbRM4hyM5VuPhRa+c3i0jXCI+3hsk6jhEqYWiUQZVxCvP9WwW6npS55OY
	fRiqh97V5bDV5Cr0Ll5nPVf3QAF8DKcUixwfLZ9tHTliBHJDo01Qjpq/OxACTAOG0XmZ
	hyT0FutK4/p6WvJtlFR2YWi36lB9KDdHkD7wk5X+XpD5Q78mOy+dfM45nm/X33ePPhXz
	4clBeaRqr3bT/q/hVmsy8M/8bxb+25u/J9eeuL//EqvYAcAy/vAQXY8TDVIzryzrMlLF
	/Cp4MozTaeflywnLB6yuo4AdMmePDgUrePjgBjVSo2LRy31m3ybuXMXvC2W+hl0RDIUI
	6BFg==
Received: by 10.68.227.3 with SMTP id rw3mr40809131pbc.13.1345039945349;
	Wed, 15 Aug 2012 07:12:25 -0700 (PDT)
Received: from root ([115.199.253.245])
	by mx.google.com with ESMTPS id nv6sm610172pbc.42.2012.08.15.07.12.09
	(version=SSLv3 cipher=OTHER); Wed, 15 Aug 2012 07:12:24 -0700 (PDT)
Date: Wed, 15 Aug 2012 22:12:15 +0800
From: tupeng212 <tupeng212@gmail.com>
To: "Jan Beulich" <JBeulich@suse.com>, 
	"Yang Z Zhang" <yang.z.zhang@intel.com>, "Keir Fraser" <keir@xen.org>, 
	"Tim Deegan" <tim@xen.org>
References: <502A3BBC0200007800094B68@nat28.tlf.novell.com>, 
	<2012081522045495397713@gmail.com>
X-Priority: 3
X-GUID: 4EA20C5B-B549-4236-B43B-FA6EA6EE4F04
X-Has-Attach: no
X-Mailer: Foxmail 7.0.1.87[cn]
Mime-Version: 1.0
Message-ID: <2012081522121039050717@gmail.com>
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH,
	RFC v2] x86/HVM: assorted RTC emulation adjustments (was Re: Big
	Bug:Time in VM goes slower...)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: tupeng212 <tupeng212@gmail.com>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5636090078893645621=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.

--===============5636090078893645621==
Content-Type: multipart/alternative;
	boundary="----=_001_NextPart564768006800_=----"

This is a multi-part message in MIME format.

------=_001_NextPart564768006800_=----
Content-Type: text/plain;
	charset="gb2312"
Content-Transfer-Encoding: base64

SGksIEphbjoNCkkgYW0gc29ycnkgSSByZWFsbHkgZG9uJ3QgaGF2ZSBtdWNoIHRpbWUgdG8gdHJ5
IGEgdGVzdCBvZiB5b3VyIHBhdGNoLCBhbmQgaXQgaXMgbm90IGNvbnZlbmllbnQNCmZvciBtZSB0
byBoYXZlIGEgdHJ5LiBGb3IgdGhlIHZlcnNpb24gSSBoYXZlIGJlZW4gdXNpbmcgaXMgeGVuNC4w
LngsIGFuZCB5b3VyIHBhdGNoIGlzIGJhc2VkIG9uIA0KdGhlIGxhdGVzdCB2ZXJzaW9uIHhlbjQu
Mi54LihJIGhhdmUgbmV2ZXIgY29tcGxpZWQgdGhlIHVuc3RhYmxlIG9uZSksIHNvIEkgbWVyZ2Vk
IHlvdXIgcGF0Y2ggdG8gbXkgDQp4ZW40LjAueCwgc3RpbGwgY291bGRuJ3QgZmluZCB0aGUgdHdv
IGZ1bmN0aW9ucyBiZWxvdzoNCiBzdGF0aWMgdm9pZCBydGNfdXBkYXRlX3RpbWVyMih2b2lkICpv
cGFxdWUpIA0KIHN0YXRpYyB2b2lkIHJ0Y19hbGFybV9jYih2b2lkICpvcGFxdWUpIA0Kc28gSSBk
aWRuJ3QgbWVyZ2UgdGhlIHR3byBmdW5jdGlvbnMgd2hpY2ggY29udGFpbnMgYSBydGNfdG9nZ2xl
X2lycSgpIC4NCg0KVGhlIHJlc3VsdHMgZm9yIG1lIHdlcmUgdGhlc2U6DQoxIEluIG15IHJlYWwg
YXBwbGljYXRpb24gZW52aXJvbm1lbnQsIGl0IHdvcmtlZCB2ZXJ5IHdlbGwgaW4gdGhlIGZvcm1l
ciA1bWlucywgbXVjaCBiZXR0ZXIgdGhhbiBiZWZvcmUsDQogYnV0IGF0IGxhc3QgaXQgbGFnZ2Vk
IGFnYWluLiBJIGRvbid0IGtub3cgd2hldGhlciBpdCBiZWxvbmdzIHRvIHRoZSB0d28gbWlzc2Vk
IGZ1bmN0aW9ucy4gSSBsYWNrIHRoZSANCiBhYmlsaXR5IHRvIGZpZ3VyZSB0aGVtIG91dC4NCg0K
MiBXaGVuIEkgdGVzdGVkIG15IHRlc3QgcHJvZ3JhbSB3aGljaCBJIHByb3ZpZGVkIGRheXMgYmVm
b3JlLCBpdCB3b3JrZWQgdmVyeSB3ZWxsLCBtYXliZSB0aGUgcHJvZ3JhbSBkb2Vzbid0IA0KZW11
bGF0ZSB0aGUgcmVhbCBlbnZpcm9ubWVudCBkdWUgdG8gdGhlIHNhbWUgc2V0dGluZyByYXRlLCBz
byBJIG1vZGlmaWVkIHRoaXMgcHJvZ3JhbSBhcyB3aGljaCBpbiB0aGUgYXR0YWNobWVudC4NCmlm
IHlvdSBhcmUgbW9yZSBjb252ZW5pZW50LCB5b3UgY2FuIGhlbHAgbWUgdG8gaGF2ZSBhIGxvb2sg
b2YgaXQuDQotLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLQ0KI2luY2x1ZGUgPHN0ZGlvLmg+DQojaW5jbHVkZSA8d2luZG93
cy5oPg0KdHlwZWRlZiBpbnQgKF9fc3RkY2FsbCAqTlRTRVRUSU1FUikoSU4gVUxPTkcgUmVxdWVz
dGVkUmVzb2x1dGlvbiwgSU4gQk9PTEVBTiBTZXQsIE9VVCBQVUxPTkcgQWN0dWFsUmVzb2x1dGlv
biApOw0KdHlwZWRlZiBpbnQgKF9fc3RkY2FsbCAqTlRRVUVSWVRJTUVSKShPVVQgUFVMT05HICAg
TWluaW11bVJlc29sdXRpb24sIE9VVCBQVUxPTkcgTWF4aW11bVJlc29sdXRpb24sIE9VVCBQVUxP
TkcgQ3VycmVudFJlc29sdXRpb24gKTsNCg0KaW50IG1haW4oKQ0Kew0KRFdPUkQgbWluX3JlcyA9
IDAsIG1heF9yZXMgPSAwLCBjdXJfcmVzID0gMCwgcmV0ID0gMDsNCkhNT0RVTEUgIGhkbGwgPSBO
VUxMOw0KaGRsbCA9IEdldE1vZHVsZUhhbmRsZSgibnRkbGwuZGxsIik7DQpOVFNFVFRJTUVSIEFk
ZHJOdFNldFRpbWVyID0gMDsNCk5UUVVFUllUSU1FUiBBZGRyTnRRdWV5VGltZXIgPSAwOw0KQWRk
ck50U2V0VGltZXIgPSAoTlRTRVRUSU1FUikgR2V0UHJvY0FkZHJlc3MoaGRsbCwgIk50U2V0VGlt
ZXJSZXNvbHV0aW9uIik7DQpBZGRyTnRRdWV5VGltZXIgPSAoTlRRVUVSWVRJTUVSKUdldFByb2NB
ZGRyZXNzKGhkbGwsICJOdFF1ZXJ5VGltZXJSZXNvbHV0aW9uIik7DQoNCndoaWxlICgxKQ0Kew0K
cmV0ID0gQWRkck50UXVleVRpbWVyKCZtaW5fcmVzLCAmbWF4X3JlcywgJmN1cl9yZXMpOw0KcHJp
bnRmKCJtaW5fcmVzID0gJWQsIG1heF9yZXMgPSAlZCwgY3VyX3JlcyA9ICVkXG4iLG1pbl9yZXMs
IG1heF9yZXMsIGN1cl9yZXMpOw0KU2xlZXAoNSk7DQpyZXQgPSBBZGRyTnRTZXRUaW1lcigxMDAw
MCwgMSwgJmN1cl9yZXMpOw0KU2xlZXAoNSk7DQpyZXQgPSBBZGRyTnRTZXRUaW1lcigxMDAwMCwg
MCwgJmN1cl9yZXMpOw0KU2xlZXAoNSk7DQpyZXQgPSBBZGRyTnRTZXRUaW1lcig1MDAwMCwgMSwg
JmN1cl9yZXMpOw0KU2xlZXAoNSk7DQpyZXQgPSBBZGRyTnRTZXRUaW1lcig1MDAwMCwgMCwgJmN1
cl9yZXMpOw0KU2xlZXAoNSk7DQpyZXQgPSBBZGRyTnRTZXRUaW1lcigxMDAwMDAsIDEsICZjdXJf
cmVzKTsNClNsZWVwKDUpOw0KcmV0ID0gQWRkck50U2V0VGltZXIoMTAwMDAwLCAwLCAmY3VyX3Jl
cyk7DQpTbGVlcCg1KTsNCn0NCg0KcmV0dXJuIDA7DQp9DQotLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLQ0KQW5kIEkgaGF2
ZSBhIG9waW5pb24sIGJlY2F1c2Ugb3VyIHByb2R1Y3QgaXMgYmFzZWQgb24gVmVyc2lvbiBYZW40
LjAueCwgaWYgeW91IGhhdmUgZW5vdWdoIHRpbWUsIGNhbiB5b3Ugd3JpdGUgDQphbm90aGVyIHBh
dGNoIGJhc2VkIGh0dHA6Ly94ZW5iaXRzLnhlbi5vcmcvaGcveGVuLTQuMC10ZXN0aW5nLmhnLyBm
b3IgbWUsIHRoYW5rIHlvdSB2ZXJ5IG11Y2ghDQoNCjMgSSBhbHNvIGhhdmUgYSB0aG91Z2h0IHRo
YXQgY2FuIHdlIGhhdmUgc29tZSBkZXRlY3RpbmcgbWV0aG9kcyB0byBmaW5kIHRoZSBsYWdnaW5n
IHRpbWUgZWFybGllciB0byBhZGp1c3QgdGltZQ0KYmFjayB0byBub3JtYWwgdmFsdWUgaW4gdGhl
IGNvZGU/DQoNCmJlc3QgcmVnYXJkcywNCg0KDQoNCg0KdHVwZW5nMjEyDQoNClNlY29uZCBkcmFm
dCBvZiBhIHBhdGNoIHBvc3RlZDsgbm8gdGVzdCByZXN1bHRzIHNvIGZhciBmb3IgZmlyc3QgZHJh
ZnQuDQpKYW4NCg0KRnJvbTogSmFuIEJldWxpY2gNCkRhdGU6IDIwMTItMDgtMTQgMTc6NTENClRv
OiBZYW5nIFogWmhhbmc7IEtlaXIgRnJhc2VyOyBUaW0gRGVlZ2FuDQpDQzogdHVwZW5nMjEyOyB4
ZW4tZGV2ZWwNClN1YmplY3Q6IFtYZW4tZGV2ZWxdIFtQQVRDSCwgUkZDIHYyXSB4ODYvSFZNOiBh
c3NvcnRlZCBSVEMgZW11bGF0aW9uIGFkanVzdG1lbnRzICh3YXMgUmU6IEJpZyBCdWc6VGltZSBp
biBWTSBnb2VzIHNsb3dlci4uLikNCkJlbG93L2F0dGFjaGVkIGEgc2Vjb25kIGRyYWZ0IG9mIGEg
cGF0Y2ggdG8gZml4IG5vdCBvbmx5IHRoaXMNCmlzc3VlLCBidXQgYSBmZXcgbW9yZSB3aXRoIHRo
ZSBSVEMgZW11bGF0aW9uLg0KDQpLZWlyLCBUaW0sIFlhbmcsIG90aGVycyAtIHRoZSBjaGFuZ2Ug
dG8geGVuL2FyY2gveDg2L2h2bS92cHQuYyByZWFsbHkNCmxvb2tzIG1vcmUgbGlrZSBhIGhhY2sg
dGhhbiBhIHNvbHV0aW9uLCBidXQgSSBkb24ndCBzZWUgYW5vdGhlcg0Kd2F5IHdpdGhvdXQgbXVj
aCBtb3JlIGludHJ1c2l2ZSBjaGFuZ2VzLiBUaGUgcG9pbnQgaXMgdGhhdCB3ZQ0Kd2FudCB0aGUg
UlRDIGNvZGUgdG8gZGVjaWRlIHdoZXRoZXIgdG8gZ2VuZXJhdGUgYW4gaW50ZXJydXB0DQooc28g
dGhhdCBSVENfUEYgY2FuIGJlY29tZSBzZXQgY29ycmVjdGx5IGV2ZW4gd2l0aG91dCBSVENfUElF
DQpnZXR0aW5nIGVuYWJsZWQgYnkgdGhlIGd1ZXN0KS4NCg0KQWRkaXRpb25hbGx5IEkgd29uZGVy
IHdoZXRoZXIgYWxhcm1fdGltZXJfdXBkYXRlKCkgc2hvdWxkbid0DQpiYWlsIG9uIG5vbi1jb25m
b3JtaW5nIFJUQ18qX0FMQVJNIHZhbHVlcyAoYXMgdGhvc2Ugd291bGQNCm5ldmVyIG1hdGNoIHRo
ZSB2YWx1ZXMgdGhleSBnZXQgY29tcGFyZWQgYWdhaW5zdCwgd2hlcmVhcw0Kd2l0aCB0aGUgY3Vy
cmVudCB3YXkgb2YgaGFuZGxpbmcgdGhpcyB0aGV5IHdvdWxkIGFwcGVhciB0bw0KbWF0Y2ggLSBp
LmUuIHNldCBSVENfQUYgYW5kIHBvc3NpYmx5IGdlbmVyYXRlIGFuIGludGVycnVwdCAtDQpzb21l
IG90aGVyIHBvaW50IGluIHRpbWUpLiBJIHJlYWxpemUgdGhlIGJlaGF2aW9yIGhlcmUgbWF5IG5v
dA0KYmUgcHJlY2lzZWx5IHNwZWNpZmllZCwgYnV0IHRoZSBzcGVjaWZpY2F0aW9uIHNheWluZyAi
dGhlIGN1cnJlbnQNCnRpbWUgaGFzIG1hdGNoZWQgdGhlIGFsYXJtIHRpbWUiIG1lYW5zIHRvIG1l
IGEgdmFsdWUgYnkgdmFsdWUNCmNvbXBhcmlzb24sIHdoaWNoIGltcGxpZXMgdGhhdCBub24tY29u
Zm9ybWluZyB2YWx1ZXMgd291bGQNCm5ldmVyIG1hdGNoIChzaW5jZSBub24tY29uZm9ybWluZyBj
dXJyZW50IHRpbWUgdmFsdWVzIGNvdWxkDQpnZXQgcmVwbGFjZWQgYXQgYW55IHRpbWUgYnkgdGhl
IGhhcmR3YXJlIGR1ZSB0byBvdmVyZmxvdw0KZGV0ZWN0aW9uKS4NCg0KSmFuDQoNCi0gZG9uJ3Qg
Y2FsbCBydGNfdGltZXJfdXBkYXRlKCkgb24gUkVHX0Egd3JpdGVzIHdoZW4gdGhlIHZhbHVlIGRp
ZG4ndA0KICBjaGFuZ2UgKGRvaW5nIHRoZSBjYWxsIGFsd2F5cyB3YXMgcmVwb3J0ZWQgdG8gY2F1
c2Ugd2FsbCBjbG9jayB0aW1lDQogIGxhZ2dpbmcgd2l0aCB0aGUgSlZNIHJ1bm5pbmcgb24gV2lu
ZG93cykNCi0gZG9uJ3QgY2FsbCBydGNfdGltZXJfdXBkYXRlKCkgb24gUkVHX0Igd3JpdGVzIGF0
IGFsbA0KLSBvbmx5IGNhbGwgYWxhcm1fdGltZXJfdXBkYXRlKCkgb24gUkVHX0Igd3JpdGVzIHdo
ZW4gcmVsZXZhbnQgYml0cw0KICBjaGFuZ2UNCi0gb25seSBjYWxsIGNoZWNrX3VwZGF0ZV90aW1l
cigpIG9uIFJFR19CIHdyaXRlcyB3aGVuIFNFVCBjaGFuZ2VzDQotIGluc3RlYWQgcHJvcGVybHkg
aGFuZGxlIEFGIGFuZCBQRiB3aGVuIHRoZSBndWVzdCBpcyBub3QgYWxzbyBzZXR0aW5nDQogIEFJ
RS9QSUUgcmVzcGVjdGl2ZWx5IChmb3IgVUYgdGhpcyB3YXMgYWxyZWFkeSB0aGUgY2FzZSwgb25s
eSBhDQogIGNvbW1lbnQgd2FzIHNsaWdodGx5IGluYWNjdXJhdGUpDQotIHJhaXNlIHRoZSBSVEMg
SVJRIG5vdCBvbmx5IHdoZW4gVUlFIGdldHMgc2V0IHdoaWxlIFVGIHdhcyBhbHJlYWR5DQogIHNl
dCwgYnV0IGdlbmVyYWxpemUgdGhpcyB0byBjb3ZlciBBSUUgYW5kIFBJRSBhcyB3ZWxsDQotIHBy
b3Blcmx5IG1hc2sgb2ZmIGJpdCA3IHdoZW4gcmV0cmlldmluZyB0aGUgaG91ciB2YWx1ZXMgaW4N
CiAgYWxhcm1fdGltZXJfdXBkYXRlKCksIGFuZCBwcm9wZXJseSB1c2UgUlRDX0hPVVJTX0FMQVJN
J3MgYml0IDcgd2hlbg0KICBjb252ZXJ0aW5nIGZyb20gMTItIHRvIDI0LWhvdXIgdmFsdWUNCi0g
YWxzbyBoYW5kbGUgdGhlIHR3byBvdGhlciBwb3NzaWJsZSBjbG9jayBiYXNlcw0KLSB1c2UgUlRD
XyogbmFtZXMgaW4gYSBjb3VwbGUgb2YgcGxhY2VzIHdoZXJlIGxpdGVyYWwgbnVtYmVycyB3ZXJl
IHVzZWQNCiAgc28gZmFyDQoNCi0tLSBhL3hlbi9hcmNoL3g4Ni9odm0vcnRjLmMNCisrKyBiL3hl
bi9hcmNoL3g4Ni9odm0vcnRjLmMNCkBAIC01MCwxMSArNTAsMjQgQEAgc3RhdGljIHZvaWQgcnRj
X3NldF90aW1lKFJUQ1N0YXRlICpzKTsNCiBzdGF0aWMgaW5saW5lIGludCBmcm9tX2JjZChSVENT
dGF0ZSAqcywgaW50IGEpOw0KIHN0YXRpYyBpbmxpbmUgaW50IGNvbnZlcnRfaG91cihSVENTdGF0
ZSAqcywgaW50IGhvdXIpOw0KDQotc3RhdGljIHZvaWQgcnRjX3BlcmlvZGljX2NiKHN0cnVjdCB2
Y3B1ICp2LCB2b2lkICpvcGFxdWUpDQorc3RhdGljIHZvaWQgcnRjX3RvZ2dsZV9pcnEoUlRDU3Rh
dGUgKnMpDQorew0KKyAgICBzdHJ1Y3QgZG9tYWluICpkID0gdnJ0Y19kb21haW4ocyk7DQorDQor
ICAgIEFTU0VSVChzcGluX2lzX2xvY2tlZCgmcy0+bG9jaykpOw0KKyAgICBzLT5ody5jbW9zX2Rh
dGFbUlRDX1JFR19DXSB8PSBSVENfSVJRRjsNCisgICAgaHZtX2lzYV9pcnFfZGVhc3NlcnQoZCwg
UlRDX0lSUSk7DQorICAgIGh2bV9pc2FfaXJxX2Fzc2VydChkLCBSVENfSVJRKTsNCit9DQorDQor
dm9pZCBydGNfcGVyaW9kaWNfaW50ZXJydXB0KHZvaWQgKm9wYXF1ZSkNCiB7DQogICAgIFJUQ1N0
YXRlICpzID0gb3BhcXVlOw0KKw0KICAgICBzcGluX2xvY2soJnMtPmxvY2spOw0KLSAgICBzLT5o
dy5jbW9zX2RhdGFbUlRDX1JFR19DXSB8PSAweGMwOw0KKyAgICBzLT5ody5jbW9zX2RhdGFbUlRD
X1JFR19DXSB8PSBSVENfUEY7DQorICAgIGlmICggcy0+aHcuY21vc19kYXRhW1JUQ19SRUdfQl0g
JiBSVENfUElFICkNCisgICAgICAgIHJ0Y190b2dnbGVfaXJxKHMpOw0KICAgICBzcGluX3VubG9j
aygmcy0+bG9jayk7DQogfQ0KDQpAQCAtNjgsMTkgKzgxLDI1IEBAIHN0YXRpYyB2b2lkIHJ0Y190
aW1lcl91cGRhdGUoUlRDU3RhdGUgKnMNCiAgICAgQVNTRVJUKHNwaW5faXNfbG9ja2VkKCZzLT5s
b2NrKSk7DQoNCiAgICAgcGVyaW9kX2NvZGUgPSBzLT5ody5jbW9zX2RhdGFbUlRDX1JFR19BXSAm
IFJUQ19SQVRFX1NFTEVDVDsNCi0gICAgaWYgKCAocGVyaW9kX2NvZGUgIT0gMCkgJiYgKHMtPmh3
LmNtb3NfZGF0YVtSVENfUkVHX0JdICYgUlRDX1BJRSkgKQ0KKyAgICBzd2l0Y2ggKCBzLT5ody5j
bW9zX2RhdGFbUlRDX1JFR19BXSAmIFJUQ19ESVZfQ1RMICkNCiAgICAgew0KLSAgICAgICAgaWYg
KCBwZXJpb2RfY29kZSA8PSAyICkNCisgICAgY2FzZSBSVENfUkVGX0NMQ0tfMzJLSFo6DQorICAg
ICAgICBpZiAoIChwZXJpb2RfY29kZSAhPSAwKSAmJiAocGVyaW9kX2NvZGUgPD0gMikgKQ0KICAg
ICAgICAgICAgIHBlcmlvZF9jb2RlICs9IDc7DQotDQotICAgICAgICBwZXJpb2QgPSAxIDw8IChw
ZXJpb2RfY29kZSAtIDEpOyAvKiBwZXJpb2QgaW4gMzIgS2h6IGN5Y2xlcyAqLw0KLSAgICAgICAg
cGVyaW9kID0gRElWX1JPVU5EKChwZXJpb2QgKiAxMDAwMDAwMDAwVUxMKSwgMzI3NjgpOyAvKiBw
ZXJpb2QgaW4gbnMgKi8NCi0gICAgICAgIGNyZWF0ZV9wZXJpb2RpY190aW1lKHYsICZzLT5wdCwg
cGVyaW9kLCBwZXJpb2QsIFJUQ19JUlEsDQotICAgICAgICAgICAgICAgICAgICAgICAgICAgICBy
dGNfcGVyaW9kaWNfY2IsIHMpOw0KLSAgICB9DQotICAgIGVsc2UNCi0gICAgew0KKyAgICAgICAg
LyogZmFsbCB0aHJvdWdoICovDQorICAgIGNhc2UgUlRDX1JFRl9DTENLXzFNSFo6DQorICAgIGNh
c2UgUlRDX1JFRl9DTENLXzRNSFo6DQorICAgICAgICBpZiAoIHBlcmlvZF9jb2RlICE9IDAgKQ0K
KyAgICAgICAgew0KKyAgICAgICAgICAgIHBlcmlvZCA9IDEgPDwgKHBlcmlvZF9jb2RlIC0gMSk7
IC8qIHBlcmlvZCBpbiAzMiBLaHogY3ljbGVzICovDQorICAgICAgICAgICAgcGVyaW9kID0gRElW
X1JPVU5EKHBlcmlvZCAqIDEwMDAwMDAwMDBVTEwsIDMyNzY4KTsgLyogaW4gbnMgKi8NCisgICAg
ICAgICAgICBjcmVhdGVfcGVyaW9kaWNfdGltZSh2LCAmcy0+cHQsIHBlcmlvZCwgcGVyaW9kLCBS
VENfSVJRLCBOVUxMLCBzKTsNCisgICAgICAgICAgICBicmVhazsNCisgICAgICAgIH0NCisgICAg
ICAgIC8qIGZhbGwgdGhyb3VnaCAqLw0KKyAgICBkZWZhdWx0Og0KICAgICAgICAgZGVzdHJveV9w
ZXJpb2RpY190aW1lKCZzLT5wdCk7DQorICAgICAgICBicmVhazsNCiAgICAgfQ0KIH0NCg0KQEAg
LTEwMiw3ICsxMjEsNyBAQCBzdGF0aWMgdm9pZCBjaGVja191cGRhdGVfdGltZXIoUlRDU3RhdGUg
DQogICAgICAgICBndWVzdF91c2VjID0gZ2V0X2xvY2FsdGltZV91cyhkKSAlIFVTRUNfUEVSX1NF
QzsNCiAgICAgICAgIGlmIChndWVzdF91c2VjID49IChVU0VDX1BFUl9TRUMgLSAyNDQpKQ0KICAg
ICAgICAgew0KLSAgICAgICAgICAgIC8qIFJUQyBpcyBpbiB1cGRhdGUgY3ljbGUgd2hlbiBlbmFi
bGluZyBVSUUgKi8NCisgICAgICAgICAgICAvKiBSVEMgaXMgaW4gdXBkYXRlIGN5Y2xlICovDQog
ICAgICAgICAgICAgcy0+aHcuY21vc19kYXRhW1JUQ19SRUdfQV0gfD0gUlRDX1VJUDsNCiAgICAg
ICAgICAgICBuZXh0X3VwZGF0ZV90aW1lID0gKFVTRUNfUEVSX1NFQyAtIGd1ZXN0X3VzZWMpICog
TlNfUEVSX1VTRUM7DQogICAgICAgICAgICAgZXhwaXJlX3RpbWUgPSBOT1coKSArIG5leHRfdXBk
YXRlX3RpbWU7DQpAQCAtMTQ0LDcgKzE2Myw2IEBAIHN0YXRpYyB2b2lkIHJ0Y191cGRhdGVfdGlt
ZXIodm9pZCAqb3BhcXUNCiBzdGF0aWMgdm9pZCBydGNfdXBkYXRlX3RpbWVyMih2b2lkICpvcGFx
dWUpDQogew0KICAgICBSVENTdGF0ZSAqcyA9IG9wYXF1ZTsNCi0gICAgc3RydWN0IGRvbWFpbiAq
ZCA9IHZydGNfZG9tYWluKHMpOw0KDQogICAgIHNwaW5fbG9jaygmcy0+bG9jayk7DQogICAgIGlm
ICghKHMtPmh3LmNtb3NfZGF0YVtSVENfUkVHX0JdICYgUlRDX1NFVCkpDQpAQCAtMTUyLDExICsx
NzAsNyBAQCBzdGF0aWMgdm9pZCBydGNfdXBkYXRlX3RpbWVyMih2b2lkICpvcGFxDQogICAgICAg
ICBzLT5ody5jbW9zX2RhdGFbUlRDX1JFR19DXSB8PSBSVENfVUY7DQogICAgICAgICBzLT5ody5j
bW9zX2RhdGFbUlRDX1JFR19BXSAmPSB+UlRDX1VJUDsNCiAgICAgICAgIGlmICgocy0+aHcuY21v
c19kYXRhW1JUQ19SRUdfQl0gJiBSVENfVUlFKSkNCi0gICAgICAgIHsNCi0gICAgICAgICAgICBz
LT5ody5jbW9zX2RhdGFbUlRDX1JFR19DXSB8PSBSVENfSVJRRjsNCi0gICAgICAgICAgICBodm1f
aXNhX2lycV9kZWFzc2VydChkLCBSVENfSVJRKTsNCi0gICAgICAgICAgICBodm1faXNhX2lycV9h
c3NlcnQoZCwgUlRDX0lSUSk7DQotICAgICAgICB9DQorICAgICAgICAgICAgcnRjX3RvZ2dsZV9p
cnEocyk7DQogICAgICAgICBjaGVja191cGRhdGVfdGltZXIocyk7DQogICAgIH0NCiAgICAgc3Bp
bl91bmxvY2soJnMtPmxvY2spOw0KQEAgLTE3NSwyMSArMTg5LDE4IEBAIHN0YXRpYyB2b2lkIGFs
YXJtX3RpbWVyX3VwZGF0ZShSVENTdGF0ZSANCg0KICAgICBzdG9wX3RpbWVyKCZzLT5hbGFybV90
aW1lcik7DQoNCi0gICAgaWYgKChzLT5ody5jbW9zX2RhdGFbUlRDX1JFR19CXSAmIFJUQ19BSUUp
ICYmDQotICAgICAgICAgICAgIShzLT5ody5jbW9zX2RhdGFbUlRDX1JFR19CXSAmIFJUQ19TRVQp
KQ0KKyAgICBpZiAoICEocy0+aHcuY21vc19kYXRhW1JUQ19SRUdfQl0gJiBSVENfU0VUKSApDQog
ICAgIHsNCiAgICAgICAgIHMtPmN1cnJlbnRfdG0gPSBnbXRpbWUoZ2V0X2xvY2FsdGltZShkKSk7
DQogICAgICAgICBydGNfY29weV9kYXRlKHMpOw0KDQogICAgICAgICBhbGFybV9zZWMgPSBmcm9t
X2JjZChzLCBzLT5ody5jbW9zX2RhdGFbUlRDX1NFQ09ORFNfQUxBUk1dKTsNCiAgICAgICAgIGFs
YXJtX21pbiA9IGZyb21fYmNkKHMsIHMtPmh3LmNtb3NfZGF0YVtSVENfTUlOVVRFU19BTEFSTV0p
Ow0KLSAgICAgICAgYWxhcm1faG91ciA9IGZyb21fYmNkKHMsIHMtPmh3LmNtb3NfZGF0YVtSVENf
SE9VUlNfQUxBUk1dKTsNCi0gICAgICAgIGFsYXJtX2hvdXIgPSBjb252ZXJ0X2hvdXIocywgYWxh
cm1faG91cik7DQorICAgICAgICBhbGFybV9ob3VyID0gY29udmVydF9ob3VyKHMsIHMtPmh3LmNt
b3NfZGF0YVtSVENfSE9VUlNfQUxBUk1dKTsNCg0KICAgICAgICAgY3VyX3NlYyA9IGZyb21fYmNk
KHMsIHMtPmh3LmNtb3NfZGF0YVtSVENfU0VDT05EU10pOw0KICAgICAgICAgY3VyX21pbiA9IGZy
b21fYmNkKHMsIHMtPmh3LmNtb3NfZGF0YVtSVENfTUlOVVRFU10pOw0KLSAgICAgICAgY3VyX2hv
dXIgPSBmcm9tX2JjZChzLCBzLT5ody5jbW9zX2RhdGFbUlRDX0hPVVJTXSk7DQotICAgICAgICBj
dXJfaG91ciA9IGNvbnZlcnRfaG91cihzLCBjdXJfaG91cik7DQorICAgICAgICBjdXJfaG91ciA9
IGNvbnZlcnRfaG91cihzLCBzLT5ody5jbW9zX2RhdGFbUlRDX0hPVVJTXSk7DQoNCiAgICAgICAg
IG5leHRfdXBkYXRlX3RpbWUgPSBVU0VDX1BFUl9TRUMgLSAoZ2V0X2xvY2FsdGltZV91cyhkKSAl
IFVTRUNfUEVSX1NFQyk7DQogICAgICAgICBuZXh0X3VwZGF0ZV90aW1lID0gbmV4dF91cGRhdGVf
dGltZSAqIE5TX1BFUl9VU0VDICsgTk9XKCk7DQpAQCAtMzQzLDcgKzM1NCw2IEBAIHN0YXRpYyB2
b2lkIGFsYXJtX3RpbWVyX3VwZGF0ZShSVENTdGF0ZSANCiBzdGF0aWMgdm9pZCBydGNfYWxhcm1f
Y2Iodm9pZCAqb3BhcXVlKQ0KIHsNCiAgICAgUlRDU3RhdGUgKnMgPSBvcGFxdWU7DQotICAgIHN0
cnVjdCBkb21haW4gKmQgPSB2cnRjX2RvbWFpbihzKTsNCg0KICAgICBzcGluX2xvY2soJnMtPmxv
Y2spOw0KICAgICBpZiAoIShzLT5ody5jbW9zX2RhdGFbUlRDX1JFR19CXSAmIFJUQ19TRVQpKQ0K
QEAgLTM1MSwxMSArMzYxLDcgQEAgc3RhdGljIHZvaWQgcnRjX2FsYXJtX2NiKHZvaWQgKm9wYXF1
ZSkNCiAgICAgICAgIHMtPmh3LmNtb3NfZGF0YVtSVENfUkVHX0NdIHw9IFJUQ19BRjsNCiAgICAg
ICAgIC8qIGFsYXJtIGludGVycnVwdCAqLw0KICAgICAgICAgaWYgKHMtPmh3LmNtb3NfZGF0YVtS
VENfUkVHX0JdICYgUlRDX0FJRSkNCi0gICAgICAgIHsNCi0gICAgICAgICAgICBzLT5ody5jbW9z
X2RhdGFbUlRDX1JFR19DXSB8PSBSVENfSVJRRjsNCi0gICAgICAgICAgICBodm1faXNhX2lycV9k
ZWFzc2VydChkLCBSVENfSVJRKTsNCi0gICAgICAgICAgICBodm1faXNhX2lycV9hc3NlcnQoZCwg
UlRDX0lSUSk7DQotICAgICAgICB9DQorICAgICAgICAgICAgcnRjX3RvZ2dsZV9pcnEocyk7DQog
ICAgICAgICBhbGFybV90aW1lcl91cGRhdGUocyk7DQogICAgIH0NCiAgICAgc3Bpbl91bmxvY2so
JnMtPmxvY2spOw0KQEAgLTM2NSw2ICszNzEsNyBAQCBzdGF0aWMgaW50IHJ0Y19pb3BvcnRfd3Jp
dGUodm9pZCAqb3BhcXVlDQogew0KICAgICBSVENTdGF0ZSAqcyA9IG9wYXF1ZTsNCiAgICAgc3Ry
dWN0IGRvbWFpbiAqZCA9IHZydGNfZG9tYWluKHMpOw0KKyAgICB1aW50MzJfdCBvcmlnLCBtYXNr
Ow0KDQogICAgIHNwaW5fbG9jaygmcy0+bG9jayk7DQoNCkBAIC0zODIsNiArMzg5LDcgQEAgc3Rh
dGljIGludCBydGNfaW9wb3J0X3dyaXRlKHZvaWQgKm9wYXF1ZQ0KICAgICAgICAgcmV0dXJuIDA7
DQogICAgIH0NCg0KKyAgICBvcmlnID0gcy0+aHcuY21vc19kYXRhW3MtPmh3LmNtb3NfaW5kZXhd
Ow0KICAgICBzd2l0Y2ggKCBzLT5ody5jbW9zX2luZGV4ICkNCiAgICAgew0KICAgICBjYXNlIFJU
Q19TRUNPTkRTX0FMQVJNOg0KQEAgLTQwNSw5ICs0MTMsOSBAQCBzdGF0aWMgaW50IHJ0Y19pb3Bv
cnRfd3JpdGUodm9pZCAqb3BhcXVlDQogICAgICAgICBicmVhazsNCiAgICAgY2FzZSBSVENfUkVH
X0E6DQogICAgICAgICAvKiBVSVAgYml0IGlzIHJlYWQgb25seSAqLw0KLSAgICAgICAgcy0+aHcu
Y21vc19kYXRhW1JUQ19SRUdfQV0gPSAoZGF0YSAmIH5SVENfVUlQKSB8DQotICAgICAgICAgICAg
KHMtPmh3LmNtb3NfZGF0YVtSVENfUkVHX0FdICYgUlRDX1VJUCk7DQotICAgICAgICBydGNfdGlt
ZXJfdXBkYXRlKHMpOw0KKyAgICAgICAgcy0+aHcuY21vc19kYXRhW1JUQ19SRUdfQV0gPSAoZGF0
YSAmIH5SVENfVUlQKSB8IChvcmlnICYgUlRDX1VJUCk7DQorICAgICAgICBpZiAoIChkYXRhIF4g
b3JpZykgJiAoUlRDX1JBVEVfU0VMRUNUIHwgUlRDX0RJVl9DVEwpICkNCisgICAgICAgICAgICBy
dGNfdGltZXJfdXBkYXRlKHMpOw0KICAgICAgICAgYnJlYWs7DQogICAgIGNhc2UgUlRDX1JFR19C
Og0KICAgICAgICAgaWYgKCBkYXRhICYgUlRDX1NFVCApDQpAQCAtNDE1LDcgKzQyMyw3IEBAIHN0
YXRpYyBpbnQgcnRjX2lvcG9ydF93cml0ZSh2b2lkICpvcGFxdWUNCiAgICAgICAgICAgICAvKiBz
ZXQgbW9kZTogcmVzZXQgVUlQIG1vZGUgKi8NCiAgICAgICAgICAgICBzLT5ody5jbW9zX2RhdGFb
UlRDX1JFR19BXSAmPSB+UlRDX1VJUDsNCiAgICAgICAgICAgICAvKiBhZGp1c3QgY21vcyBiZWZv
cmUgc3RvcHBpbmcgKi8NCi0gICAgICAgICAgICBpZiAoIShzLT5ody5jbW9zX2RhdGFbUlRDX1JF
R19CXSAmIFJUQ19TRVQpKQ0KKyAgICAgICAgICAgIGlmICghKG9yaWcgJiBSVENfU0VUKSkNCiAg
ICAgICAgICAgICB7DQogICAgICAgICAgICAgICAgIHMtPmN1cnJlbnRfdG0gPSBnbXRpbWUoZ2V0
X2xvY2FsdGltZShkKSk7DQogICAgICAgICAgICAgICAgIHJ0Y19jb3B5X2RhdGUocyk7DQpAQCAt
NDI0LDIxICs0MzIsMjYgQEAgc3RhdGljIGludCBydGNfaW9wb3J0X3dyaXRlKHZvaWQgKm9wYXF1
ZQ0KICAgICAgICAgZWxzZQ0KICAgICAgICAgew0KICAgICAgICAgICAgIC8qIGlmIGRpc2FibGlu
ZyBzZXQgbW9kZSwgdXBkYXRlIHRoZSB0aW1lICovDQotICAgICAgICAgICAgaWYgKCBzLT5ody5j
bW9zX2RhdGFbUlRDX1JFR19CXSAmIFJUQ19TRVQgKQ0KKyAgICAgICAgICAgIGlmICggb3JpZyAm
IFJUQ19TRVQgKQ0KICAgICAgICAgICAgICAgICBydGNfc2V0X3RpbWUocyk7DQogICAgICAgICB9
DQotICAgICAgICAvKiBpZiB0aGUgaW50ZXJydXB0IGlzIGFscmVhZHkgc2V0IHdoZW4gdGhlIGlu
dGVycnVwdCBiZWNvbWUNCi0gICAgICAgICAqIGVuYWJsZWQsIHJhaXNlIGFuIGludGVycnVwdCBp
bW1lZGlhdGVseSovDQotICAgICAgICBpZiAoKGRhdGEgJiBSVENfVUlFKSAmJiAhKHMtPmh3LmNt
b3NfZGF0YVtSVENfUkVHX0JdICYgUlRDX1VJRSkpDQotICAgICAgICAgICAgaWYgKHMtPmh3LmNt
b3NfZGF0YVtSVENfUkVHX0NdICYgUlRDX1VGKQ0KKyAgICAgICAgLyoNCisgICAgICAgICAqIElm
IHRoZSBpbnRlcnJ1cHQgaXMgYWxyZWFkeSBzZXQgd2hlbiB0aGUgaW50ZXJydXB0IGJlY29tZXMN
CisgICAgICAgICAqIGVuYWJsZWQsIHJhaXNlIGFuIGludGVycnVwdCBpbW1lZGlhdGVseS4NCisg
ICAgICAgICAqIE5COiBSVENfe0EsUCxVfUlFID09IFJUQ197QSxQLFV9RiByZXNwZWN0aXZlbHku
DQorICAgICAgICAgKi8NCisgICAgICAgIGZvciAoIG1hc2sgPSBSVENfVUlFOyBtYXNrIDw9IFJU
Q19QSUU7IG1hc2sgPDw9IDEgKQ0KKyAgICAgICAgICAgIGlmICggKGRhdGEgJiBtYXNrKSAmJiAh
KG9yaWcgJiBtYXNrKSAmJg0KKyAgICAgICAgICAgICAgICAgKHMtPmh3LmNtb3NfZGF0YVtSVENf
UkVHX0NdICYgbWFzaykgKQ0KICAgICAgICAgICAgIHsNCi0gICAgICAgICAgICAgICAgaHZtX2lz
YV9pcnFfZGVhc3NlcnQoZCwgUlRDX0lSUSk7DQotICAgICAgICAgICAgICAgIGh2bV9pc2FfaXJx
X2Fzc2VydChkLCBSVENfSVJRKTsNCisgICAgICAgICAgICAgICAgcnRjX3RvZ2dsZV9pcnEocyk7
DQorICAgICAgICAgICAgICAgIGJyZWFrOw0KICAgICAgICAgICAgIH0NCiAgICAgICAgIHMtPmh3
LmNtb3NfZGF0YVtSVENfUkVHX0JdID0gZGF0YTsNCi0gICAgICAgIHJ0Y190aW1lcl91cGRhdGUo
cyk7DQotICAgICAgICBjaGVja191cGRhdGVfdGltZXIocyk7DQotICAgICAgICBhbGFybV90aW1l
cl91cGRhdGUocyk7DQorICAgICAgICBpZiAoIChkYXRhIF4gb3JpZykgJiBSVENfU0VUICkNCisg
ICAgICAgICAgICBjaGVja191cGRhdGVfdGltZXIocyk7DQorICAgICAgICBpZiAoIChkYXRhIF4g
b3JpZykgJiAoUlRDXzI0SCB8IFJUQ19ETV9CSU5BUlkgfCBSVENfU0VUKSApDQorICAgICAgICAg
ICAgYWxhcm1fdGltZXJfdXBkYXRlKHMpOw0KICAgICAgICAgYnJlYWs7DQogICAgIGNhc2UgUlRD
X1JFR19DOg0KICAgICBjYXNlIFJUQ19SRUdfRDoNCkBAIC00NTMsNyArNDY2LDcgQEAgc3RhdGlj
IGludCBydGNfaW9wb3J0X3dyaXRlKHZvaWQgKm9wYXF1ZQ0KDQogc3RhdGljIGlubGluZSBpbnQg
dG9fYmNkKFJUQ1N0YXRlICpzLCBpbnQgYSkNCiB7DQotICAgIGlmICggcy0+aHcuY21vc19kYXRh
W1JUQ19SRUdfQl0gJiAweDA0ICkNCisgICAgaWYgKCBzLT5ody5jbW9zX2RhdGFbUlRDX1JFR19C
XSAmIFJUQ19ETV9CSU5BUlkgKQ0KICAgICAgICAgcmV0dXJuIGE7DQogICAgIGVsc2UNCiAgICAg
ICAgIHJldHVybiAoKGEgLyAxMCkgPDwgNCkgfCAoYSAlIDEwKTsNCkBAIC00NjEsNyArNDc0LDcg
QEAgc3RhdGljIGlubGluZSBpbnQgdG9fYmNkKFJUQ1N0YXRlICpzLCBpbg0KDQogc3RhdGljIGlu
bGluZSBpbnQgZnJvbV9iY2QoUlRDU3RhdGUgKnMsIGludCBhKQ0KIHsNCi0gICAgaWYgKCBzLT5o
dy5jbW9zX2RhdGFbUlRDX1JFR19CXSAmIDB4MDQgKQ0KKyAgICBpZiAoIHMtPmh3LmNtb3NfZGF0
YVtSVENfUkVHX0JdICYgUlRDX0RNX0JJTkFSWSApDQogICAgICAgICByZXR1cm4gYTsNCiAgICAg
ZWxzZQ0KICAgICAgICAgcmV0dXJuICgoYSA+PiA0KSAqIDEwKSArIChhICYgMHgwZik7DQpAQCAt
NDY5LDEyICs0ODIsMTQgQEAgc3RhdGljIGlubGluZSBpbnQgZnJvbV9iY2QoUlRDU3RhdGUgKnMs
IA0KDQogLyogSG91cnMgaW4gMTIgaG91ciBtb2RlIGFyZSBpbiAxLTEyIHJhbmdlLCBub3QgMC0x
MS4NCiAgKiBTbyB3ZSBuZWVkIGNvbnZlcnQgaXQgYmVmb3JlIHVzaW5nIGl0Ki8NCi1zdGF0aWMg
aW5saW5lIGludCBjb252ZXJ0X2hvdXIoUlRDU3RhdGUgKnMsIGludCBob3VyKQ0KK3N0YXRpYyBp
bmxpbmUgaW50IGNvbnZlcnRfaG91cihSVENTdGF0ZSAqcywgaW50IHJhdykNCiB7DQorICAgIGlu
dCBob3VyID0gZnJvbV9iY2QocywgcmF3ICYgMHg3Zik7DQorDQogICAgIGlmICghKHMtPmh3LmNt
b3NfZGF0YVtSVENfUkVHX0JdICYgUlRDXzI0SCkpDQogICAgIHsNCiAgICAgICAgIGhvdXIgJT0g
MTI7DQotICAgICAgICBpZiAocy0+aHcuY21vc19kYXRhW1JUQ19IT1VSU10gJiAweDgwKQ0KKyAg
ICAgICAgaWYgKHJhdyAmIDB4ODApDQogICAgICAgICAgICAgaG91ciArPSAxMjsNCiAgICAgfQ0K
ICAgICByZXR1cm4gaG91cjsNCkBAIC00OTMsOCArNTA4LDcgQEAgc3RhdGljIHZvaWQgcnRjX3Nl
dF90aW1lKFJUQ1N0YXRlICpzKQ0KICAgICANCiAgICAgdG0tPnRtX3NlYyA9IGZyb21fYmNkKHMs
IHMtPmh3LmNtb3NfZGF0YVtSVENfU0VDT05EU10pOw0KICAgICB0bS0+dG1fbWluID0gZnJvbV9i
Y2Qocywgcy0+aHcuY21vc19kYXRhW1JUQ19NSU5VVEVTXSk7DQotICAgIHRtLT50bV9ob3VyID0g
ZnJvbV9iY2Qocywgcy0+aHcuY21vc19kYXRhW1JUQ19IT1VSU10gJiAweDdmKTsNCi0gICAgdG0t
PnRtX2hvdXIgPSBjb252ZXJ0X2hvdXIocywgdG0tPnRtX2hvdXIpOw0KKyAgICB0bS0+dG1faG91
ciA9IGNvbnZlcnRfaG91cihzLCBzLT5ody5jbW9zX2RhdGFbUlRDX0hPVVJTXSk7DQogICAgIHRt
LT50bV93ZGF5ID0gZnJvbV9iY2Qocywgcy0+aHcuY21vc19kYXRhW1JUQ19EQVlfT0ZfV0VFS10p
Ow0KICAgICB0bS0+dG1fbWRheSA9IGZyb21fYmNkKHMsIHMtPmh3LmNtb3NfZGF0YVtSVENfREFZ
X09GX01PTlRIXSk7DQogICAgIHRtLT50bV9tb24gPSBmcm9tX2JjZChzLCBzLT5ody5jbW9zX2Rh
dGFbUlRDX01PTlRIXSkgLSAxOw0KLS0tIGEveGVuL2FyY2gveDg2L2h2bS92cHQuYw0KKysrIGIv
eGVuL2FyY2gveDg2L2h2bS92cHQuYw0KQEAgLTIyLDYgKzIyLDcgQEANCiAjaW5jbHVkZSA8YXNt
L2h2bS92cHQuaD4NCiAjaW5jbHVkZSA8YXNtL2V2ZW50Lmg+DQogI2luY2x1ZGUgPGFzbS9hcGlj
Lmg+DQorI2luY2x1ZGUgPGFzbS9tYzE0NjgxOHJ0Yy5oPg0KDQogI2RlZmluZSBtb2RlX2lzKGQs
IG5hbWUpIFwNCiAgICAgKChkKS0+YXJjaC5odm1fZG9tYWluLnBhcmFtc1tIVk1fUEFSQU1fVElN
RVJfTU9ERV0gPT0gSFZNUFRNXyMjbmFtZSkNCkBAIC0yMTgsNiArMjE5LDcgQEAgdm9pZCBwdF91
cGRhdGVfaXJxKHN0cnVjdCB2Y3B1ICp2KQ0KICAgICBzdHJ1Y3QgcGVyaW9kaWNfdGltZSAqcHQs
ICp0ZW1wLCAqZWFybGllc3RfcHQgPSBOVUxMOw0KICAgICB1aW50NjRfdCBtYXhfbGFnID0gLTFV
TEw7DQogICAgIGludCBpcnEsIGlzX2xhcGljOw0KKyAgICB2b2lkICpwdF9wcml2Ow0KDQogICAg
IHNwaW5fbG9jaygmdi0+YXJjaC5odm1fdmNwdS50bV9sb2NrKTsNCg0KQEAgLTI1MSwxMyArMjUz
LDE0IEBAIHZvaWQgcHRfdXBkYXRlX2lycShzdHJ1Y3QgdmNwdSAqdikNCiAgICAgZWFybGllc3Rf
cHQtPmlycV9pc3N1ZWQgPSAxOw0KICAgICBpcnEgPSBlYXJsaWVzdF9wdC0+aXJxOw0KICAgICBp
c19sYXBpYyA9IChlYXJsaWVzdF9wdC0+c291cmNlID09IFBUU1JDX2xhcGljKTsNCisgICAgcHRf
cHJpdiA9IGVhcmxpZXN0X3B0LT5wcml2Ow0KDQogICAgIHNwaW5fdW5sb2NrKCZ2LT5hcmNoLmh2
bV92Y3B1LnRtX2xvY2spOw0KDQogICAgIGlmICggaXNfbGFwaWMgKQ0KLSAgICB7DQogICAgICAg
ICB2bGFwaWNfc2V0X2lycSh2Y3B1X3ZsYXBpYyh2KSwgaXJxLCAwKTsNCi0gICAgfQ0KKyAgICBl
bHNlIGlmICggaXJxID09IFJUQ19JUlEgKQ0KKyAgICAgICAgcnRjX3BlcmlvZGljX2ludGVycnVw
dChwdF9wcml2KTsNCiAgICAgZWxzZQ0KICAgICB7DQogICAgICAgICBodm1faXNhX2lycV9kZWFz
c2VydCh2LT5kb21haW4sIGlycSk7DQotLS0gYS94ZW4vaW5jbHVkZS9hc20teDg2L2h2bS92cHQu
aA0KKysrIGIveGVuL2luY2x1ZGUvYXNtLXg4Ni9odm0vdnB0LmgNCkBAIC0xODEsNiArMTgxLDcg
QEAgdm9pZCBydGNfbWlncmF0ZV90aW1lcnMoc3RydWN0IHZjcHUgKnYpOw0KIHZvaWQgcnRjX2Rl
aW5pdChzdHJ1Y3QgZG9tYWluICpkKTsNCiB2b2lkIHJ0Y19yZXNldChzdHJ1Y3QgZG9tYWluICpk
KTsNCiB2b2lkIHJ0Y191cGRhdGVfY2xvY2soc3RydWN0IGRvbWFpbiAqZCk7DQordm9pZCBydGNf
cGVyaW9kaWNfaW50ZXJydXB0KHZvaWQgKik7DQoNCiB2b2lkIHBtdGltZXJfaW5pdChzdHJ1Y3Qg
dmNwdSAqdik7DQogdm9pZCBwbXRpbWVyX2RlaW5pdChzdHJ1Y3QgZG9tYWluICpkKTsNCg0KDQpf
X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fXw0KWGVuLWRldmVs
IG1haWxpbmcgbGlzdA0KWGVuLWRldmVsQGxpc3RzLnhlbi5vcmcNCmh0dHA6Ly9saXN0cy54ZW4u
b3JnL3hlbi1kZXZlbA==

------=_001_NextPart564768006800_=----
Content-Type: text/html;
	charset="gb2312"
Content-Transfer-Encoding: quoted-printable

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<HTML><HEAD>
<META http-equiv=3DContent-Type content=3D"text/html; charset=3Dgb2312">
<STYLE>
BLOCKQUOTE {
	MARGIN-TOP: 0px; MARGIN-BOTTOM: 0px; MARGIN-LEFT: 2em
}
OL {
	MARGIN-TOP: 0px; MARGIN-BOTTOM: 0px
}
UL {
	MARGIN-TOP: 0px; MARGIN-BOTTOM: 0px
}
P {
	MARGIN-TOP: 0px; MARGIN-BOTTOM: 0px
}
DIV.FoxDiv20120815221113875709 {
	FONT-SIZE: 10.5pt; MARGIN: 10px; COLOR: #000080; LINE-HEIGHT: 1.5; FONT-F=
AMILY: =CB=CE=CC=E5
}
BODY {
	FONT-SIZE: 10.5pt; COLOR: #000080; LINE-HEIGHT: 1.5; FONT-FAMILY: =CB=CE=
=CC=E5
}
</STYLE>

<META content=3D"MSHTML 6.00.2900.5512" name=3DGENERATOR>
<STYLE>BLOCKQUOTE {
	MARGIN-TOP: 0px
}
OL {
	MARGIN-TOP: 0px
}
UL {
	MARGIN-TOP: 0px
}
</STYLE>
</HEAD>
<BODY style=3D"MARGIN: 10px">
<DIV>Hi, Jan:</DIV>
<DIV>
<DIV class=3DFoxDiv20120815221113875709>
<DIV>I am sorry I really don't have much time to try a test of your patch,=
 and=20
it is not convenient</DIV>
<DIV>for me to have a try. For the version I have been using is xen4.0.x, =
and=20
your patch is based on </DIV>
<DIV>the latest version xen4.2.x.(I have never complied the unstable one),=
 so I=20
merged your patch to my </DIV>
<DIV>xen4.0.x, still couldn't find the two functions below:</DIV>
<DIV>&nbsp;static&nbsp;void&nbsp;rtc_update_timer2(void&nbsp;*opaque) </DI=
V>
<DIV>&nbsp;static&nbsp;void&nbsp;rtc_alarm_cb(void&nbsp;*opaque) </DIV>
<DIV>so I didn't merge the two functions which contains a rtc_toggle_irq()=
=20
.</DIV>
<DIV>&nbsp;</DIV>
<DIV>The results for me were these:</DIV>
<DIV>1 In my real application environment, it worked very well in the form=
er=20
5mins, much better than before,</DIV>
<DIV>&nbsp;but at last it lagged again. I don't know whether it belongs to=
 the=20
two missed functions. I lack the </DIV>
<DIV>&nbsp;ability to figure them out.</DIV>
<DIV style=3D"TEXT-INDENT: 2em">&nbsp;</DIV>
<DIV>2 When I tested my test program which I provided days before, it work=
ed=20
very well, maybe the program doesn't </DIV>
<DIV>emulate the real environment due to&nbsp;the same setting rate,&nbsp;=
so I=20
modified this program as which in the attachment.</DIV>
<DIV>if you are more convenient, you can help me to have a look of it.</DI=
V>
<DIV>--------------------------------------------------------------------<=
/DIV>
<DIV>
<DIV>#include&nbsp;&lt;stdio.h&gt;</DIV>
<DIV>#include&nbsp;&lt;windows.h&gt;</DIV>
<DIV>typedef&nbsp;int&nbsp;(__stdcall&nbsp;*NTSETTIMER)(IN&nbsp;ULONG&nbsp=
;RequestedResolution,&nbsp;IN&nbsp;BOOLEAN&nbsp;Set,&nbsp;OUT&nbsp;PULONG&=
nbsp;ActualResolution&nbsp;);</DIV>
<DIV>typedef&nbsp;int&nbsp;(__stdcall&nbsp;*NTQUERYTIMER)(OUT&nbsp;PULONG&=
nbsp;&nbsp;&nbsp;MinimumResolution,&nbsp;OUT&nbsp;PULONG&nbsp;MaximumResol=
ution,&nbsp;OUT&nbsp;PULONG&nbsp;CurrentResolution&nbsp;);</DIV>
<DIV>&nbsp;</DIV>
<DIV>int&nbsp;main()</DIV>
<DIV>{</DIV>
<DIV>DWORD&nbsp;min_res&nbsp;=3D&nbsp;0,&nbsp;max_res&nbsp;=3D&nbsp;0,&nbs=
p;cur_res&nbsp;=3D&nbsp;0,&nbsp;ret&nbsp;=3D&nbsp;0;</DIV>
<DIV>HMODULE&nbsp;&nbsp;hdll&nbsp;=3D&nbsp;NULL;</DIV>
<DIV>hdll&nbsp;=3D&nbsp;GetModuleHandle("ntdll.dll");</DIV>
<DIV>NTSETTIMER&nbsp;AddrNtSetTimer&nbsp;=3D&nbsp;0;</DIV>
<DIV>NTQUERYTIMER&nbsp;AddrNtQueyTimer&nbsp;=3D&nbsp;0;</DIV>
<DIV></DIV>
<DIV>AddrNtSetTimer&nbsp;=3D&nbsp;(NTSETTIMER)&nbsp;GetProcAddress(hdll,&n=
bsp;"NtSetTimerResolution");</DIV>
<DIV>AddrNtQueyTimer&nbsp;=3D&nbsp;(NTQUERYTIMER)GetProcAddress(hdll,&nbsp=
;"NtQueryTimerResolution");</DIV>
<DIV>&nbsp;</DIV>
<DIV>while&nbsp;(1)</DIV>
<DIV>{</DIV>
<DIV>ret&nbsp;=3D&nbsp;AddrNtQueyTimer(&amp;min_res,&nbsp;&amp;max_res,&nb=
sp;&amp;cur_res);</DIV>
<DIV>printf("min_res&nbsp;=3D&nbsp;%d,&nbsp;max_res&nbsp;=3D&nbsp;%d,&nbsp=
;cur_res&nbsp;=3D&nbsp;%d\n",min_res,&nbsp;max_res,&nbsp;cur_res);</DIV>
<DIV>Sleep(5);</DIV>
<DIV>ret&nbsp;=3D&nbsp;AddrNtSetTimer(10000,&nbsp;1,&nbsp;&amp;cur_res);</=
DIV>
<DIV>Sleep(5);</DIV>
<DIV>ret&nbsp;=3D&nbsp;AddrNtSetTimer(10000,&nbsp;0,&nbsp;&amp;cur_res);</=
DIV>
<DIV>Sleep(5);</DIV>
<DIV>ret&nbsp;=3D&nbsp;AddrNtSetTimer(50000,&nbsp;1,&nbsp;&amp;cur_res);</=
DIV>
<DIV>Sleep(5);</DIV>
<DIV>ret&nbsp;=3D&nbsp;AddrNtSetTimer(50000,&nbsp;0,&nbsp;&amp;cur_res);</=
DIV>
<DIV>Sleep(5);</DIV>
<DIV>ret&nbsp;=3D&nbsp;AddrNtSetTimer(100000,&nbsp;1,&nbsp;&amp;cur_res);<=
/DIV>
<DIV>Sleep(5);</DIV>
<DIV>ret&nbsp;=3D&nbsp;AddrNtSetTimer(100000,&nbsp;0,&nbsp;&amp;cur_res);<=
/DIV>
<DIV>Sleep(5);</DIV>
<DIV>}</DIV>
<DIV>&nbsp;</DIV>
<DIV>return&nbsp;0;</DIV>
<DIV>}</DIV></DIV>
<DIV>--------------------------------------------------------------------<=
/DIV>
<DIV>And I have a opinion, because our product is based on Version Xen4.0.=
x, if=20
you have enough time, can you write </DIV>
<DIV>another patch based <A=20
href=3D"http://xenbits.xen.org/hg/xen-4.0-testing.hg/">http://xenbits.xen.=
org/hg/xen-4.0-testing.hg/</A>&nbsp;for=20
me, thank you very much!</DIV>
<DIV>&nbsp;</DIV>
<DIV>3 I also have a thought that can we have some detecting methods to fi=
nd the=20
lagging time earlier to adjust time</DIV>
<DIV>back to normal value in the code?</DIV>
<DIV>&nbsp;</DIV>
<DIV>best regards,</DIV>
<DIV style=3D"TEXT-INDENT: 2em">&nbsp;</DIV>
<DIV style=3D"TEXT-INDENT: 2em">
<HR style=3D"WIDTH: 210px; HEIGHT: 1px" align=3Dleft color=3D#b5c4df SIZE=
=3D1>
</DIV>
<DIV><SPAN>tupeng212</SPAN></DIV>
<DIV>
<DIV>&nbsp;</DIV></DIV>
<DIV=20
style=3D"BORDER-RIGHT: medium none; PADDING-RIGHT: 0cm; BORDER-TOP: #b5c4d=
f 1pt solid; PADDING-LEFT: 0cm; PADDING-BOTTOM: 0cm; BORDER-LEFT: medium n=
one; PADDING-TOP: 3pt; BORDER-BOTTOM: medium none">
<DIV=20
style=3D"PADDING-RIGHT: 8px; PADDING-LEFT: 8px; FONT-SIZE: 12px; BACKGROUN=
D: #efefef; PADDING-BOTTOM: 8px; COLOR: #000000; PADDING-TOP: 8px">
<DIV>
<DIV>Second&nbsp;draft&nbsp;of&nbsp;a&nbsp;patch&nbsp;posted;&nbsp;no&nbsp=
;test&nbsp;results&nbsp;so&nbsp;far&nbsp;for&nbsp;first&nbsp;draft.</DIV>
<DIV>Jan</DIV></DIV>
<DIV><B></B>&nbsp;</DIV>
<DIV><B>From:</B>&nbsp;<A href=3D"mailto:JBeulich@suse.com">Jan Beulich</A=
></DIV>
<DIV><B>Date:</B>&nbsp;2012-08-14&nbsp;17:51</DIV>
<DIV><B>To:</B>&nbsp;<A href=3D"mailto:yang.z.zhang@intel.com">Yang Z Zhan=
g</A>;=20
<A href=3D"mailto:keir@xen.org">Keir Fraser</A>; <A href=3D"mailto:tim@xen=
.org">Tim=20
Deegan</A></DIV>
<DIV><B>CC:</B>&nbsp;<A href=3D"mailto:tupeng212@gmail.com">tupeng212</A>;=
 <A=20
href=3D"mailto:xen-devel@lists.xen.org">xen-devel</A></DIV>
<DIV><B>Subject:</B>&nbsp;[Xen-devel] [PATCH, RFC v2] x86/HVM: assorted RT=
C=20
emulation adjustments (was Re: Big Bug:Time in VM goes=20
slower...)</DIV></DIV></DIV>
Below/attached is a second draft of a patch to fix not only this
issue, but a few more with the RTC emulation.

Keir, Tim, Yang, others - the change to xen/arch/x86/hvm/vpt.c really
looks more like a hack than a solution, but I don't see another way
without much more intrusive changes. The point is that we want the RTC
code to decide whether to generate an interrupt (so that RTC_PF can
become set correctly even without RTC_PIE getting enabled by the
guest).

Additionally I wonder whether alarm_timer_update() shouldn't bail on
non-conforming RTC_*_ALARM values (as those would never match the
values they get compared against, whereas with the current way of
handling this they would appear to match - i.e. set RTC_AF and
possibly generate an interrupt - at some other point in time). I
realize the behavior here may not be precisely specified, but the
specification saying "the current time has matched the alarm time"
means to me a value-by-value comparison, which implies that
non-conforming values would never match (since non-conforming current
time values could get replaced at any time by the hardware due to
overflow detection).

Jan

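To illustrate the value-by-value comparison concern: in BCD mode an alarm byte that is not valid BCD (e.g. 0x5A) can never equal any conforming current-time byte, so strict matching could reject it up front instead of decoding it into some unrelated binary value. A hypothetical sketch, not part of the patch (bcd_valid() and alarm_byte_matches() are made-up helpers; bcd_to_bin() mirrors from_bcd() in BCD mode):

```c
#include <stdbool.h>

/* Hypothetical helper: a byte is conforming BCD iff both nibbles are 0-9. */
static bool bcd_valid(unsigned int a)
{
    return ((a & 0x0f) <= 9) && (((a >> 4) & 0x0f) <= 9);
}

/* Mirrors from_bcd() when RTC_DM_BINARY is clear. */
static int bcd_to_bin(unsigned int a)
{
    return ((a >> 4) * 10) + (a & 0x0f);
}

/*
 * Strict value-by-value alarm comparison: a non-conforming byte bails
 * out instead of being decoded and compared as a binary value.
 */
static bool alarm_byte_matches(unsigned int alarm, unsigned int cur)
{
    if ( !bcd_valid(alarm) || !bcd_valid(cur) )
        return false;   /* would never match on real hardware */
    return bcd_to_bin(alarm) == bcd_to_bin(cur);
}
```

With the current decoding, 0x5A would silently become 60 and could appear to match an equally impossible current-time value.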
- don't call rtc_timer_update() on REG_A writes when the value didn't
  change (doing the call always was reported to cause wall clock time
  lagging with the JVM running on Windows)
- don't call rtc_timer_update() on REG_B writes at all
- only call alarm_timer_update() on REG_B writes when relevant bits
  change
- only call check_update_timer() on REG_B writes when SET changes
- instead properly handle AF and PF when the guest is not also setting
  AIE/PIE respectively (for UF this was already the case, only a
  comment was slightly inaccurate)
- raise the RTC IRQ not only when UIE gets set while UF was already
  set, but generalize this to cover AIE and PIE as well
- properly mask off bit 7 when retrieving the hour values in
  alarm_timer_update(), and properly use RTC_HOURS_ALARM's bit 7 when
  converting from 12- to 24-hour values
- also handle the two other possible clock bases
- use RTC_* names in a couple of places where literal numbers were used
  so far
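The generalization from UIE-only to all three enable bits relies on a property of the MC146818 register layout: the REG_B enable bits and the REG_C flag bits occupy the same positions (bit 4 = UF/UIE, bit 5 = AF/AIE, bit 6 = PF/PIE), so one mask can test both registers. A minimal standalone demonstration (the function and its register arguments are hypothetical, only the bit definitions are real):

```c
#include <stdbool.h>
#include <stdint.h>

/* MC146818 bit definitions, as in Xen's/Linux's mc146818rtc.h. */
#define RTC_UIE 0x10    /* REG_B: update-ended interrupt enable */
#define RTC_AIE 0x20    /* REG_B: alarm interrupt enable */
#define RTC_PIE 0x40    /* REG_B: periodic interrupt enable */
#define RTC_UF  0x10    /* REG_C: update-ended flag */
#define RTC_AF  0x20    /* REG_C: alarm flag */
#define RTC_PF  0x40    /* REG_C: periodic flag */

/*
 * Sketch of the patch's REG_B write loop: does writing 'data' newly
 * enable an interrupt whose flag is already pending in REG_C?
 */
static bool newly_enabled_pending(uint8_t data, uint8_t orig_b, uint8_t reg_c)
{
    unsigned int mask;

    for ( mask = RTC_UIE; mask <= RTC_PIE; mask <<= 1 )
        if ( (data & mask) && !(orig_b & mask) && (reg_c & mask) )
            return true;    /* the patch calls rtc_toggle_irq() here */
    return false;
}
```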
<DIV>&nbsp;</DIV>
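For reference, the period computation the patch moves into the 32 kHz case works as follows: rate-select codes 1 and 2 alias codes 8 and 9 (hence the += 7), the period is then 2^(code-1) cycles of the 32768 Hz time base, converted to nanoseconds with rounding. A standalone sketch of just this arithmetic (the function name is made up; DIV_ROUND is reproduced from Xen):

```c
#include <stdint.h>

/* As in Xen: divide with rounding to nearest. */
#define DIV_ROUND(x, y) (((x) + (y) / 2) / (y))

/*
 * Hypothetical standalone version of the REG_A rate-select -> period
 * mapping for the 32.768 kHz time base. Returns 0 for "no periodic
 * interrupt" (rate-select code 0).
 */
static uint64_t rtc_period_ns(unsigned int period_code)
{
    uint64_t period;

    if ( period_code == 0 )
        return 0;
    if ( period_code <= 2 )
        period_code += 7;                   /* codes 1/2 alias 8/9 */
    period = 1ULL << (period_code - 1);     /* period in 32 kHz cycles */
    return DIV_ROUND(period * 1000000000ULL, 32768);    /* in ns */
}
```

For example, code 15 (the slowest rate) gives 16384 cycles, i.e. 500 ms, while code 6 gives the canonical 1024 Hz tick.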
--- a/xen/arch/x86/hvm/rtc.c
+++ b/xen/arch/x86/hvm/rtc.c
@@ -50,11 +50,24 @@ static void rtc_set_time(RTCState *s);
 static inline int from_bcd(RTCState *s, int a);
 static inline int convert_hour(RTCState *s, int hour);
 
-static void rtc_periodic_cb(struct vcpu *v, void *opaque)
+static void rtc_toggle_irq(RTCState *s)
+{
+    struct domain *d = vrtc_domain(s);
+
+    ASSERT(spin_is_locked(&s->lock));
+    s->hw.cmos_data[RTC_REG_C] |= RTC_IRQF;
+    hvm_isa_irq_deassert(d, RTC_IRQ);
+    hvm_isa_irq_assert(d, RTC_IRQ);
+}
+
+void rtc_periodic_interrupt(void *opaque)
 {
     RTCState *s = opaque;
+
     spin_lock(&s->lock);
-    s->hw.cmos_data[RTC_REG_C] |= 0xc0;
+    s->hw.cmos_data[RTC_REG_C] |= RTC_PF;
+    if ( s->hw.cmos_data[RTC_REG_B] & RTC_PIE )
+        rtc_toggle_irq(s);
     spin_unlock(&s->lock);
 }
 
@@ -68,19 +81,25 @@ static void rtc_timer_update(RTCState *s
     ASSERT(spin_is_locked(&s->lock));
 
     period_code = s->hw.cmos_data[RTC_REG_A] & RTC_RATE_SELECT;
-    if ( (period_code != 0) && (s->hw.cmos_data[RTC_REG_B] & RTC_PIE) )
+    switch ( s->hw.cmos_data[RTC_REG_A] & RTC_DIV_CTL )
     {
-        if ( period_code <= 2 )
+    case RTC_REF_CLCK_32KHZ:
+        if ( (period_code != 0) && (period_code <= 2) )
             period_code += 7;
-
-        period = 1 << (period_code - 1); /* period in 32 Khz cycles */
-        period = DIV_ROUND((period * 1000000000ULL), 32768); /* period in ns */
-        create_periodic_time(v, &s->pt, period, period, RTC_IRQ,
-                             rtc_periodic_cb, s);
-    }
-    else
-    {
+        /* fall through */
+    case RTC_REF_CLCK_1MHZ:
+    case RTC_REF_CLCK_4MHZ:
+        if ( period_code != 0 )
+        {
+            period = 1 << (period_code - 1); /* period in 32 Khz cycles */
+            period = DIV_ROUND(period * 1000000000ULL, 32768); /* in ns */
+            create_periodic_time(v, &s->pt, period, period, RTC_IRQ, NULL, s);
+            break;
+        }
+        /* fall through */
+    default:
         destroy_periodic_time(&s->pt);
+        break;
     }
 }
 
@@ -102,7 +121,7 @@ static void check_update_timer(RTCState 
         guest_usec = get_localtime_us(d) % USEC_PER_SEC;
         if (guest_usec >= (USEC_PER_SEC - 244))
         {
-            /* RTC is in update cycle when enabling UIE */
+            /* RTC is in update cycle */
             s->hw.cmos_data[RTC_REG_A] |= RTC_UIP;
             next_update_time = (USEC_PER_SEC - guest_usec) * NS_PER_USEC;
             expire_time = NOW() + next_update_time;
@@ -144,7 +163,6 @@ static void rtc_update_timer(void *opaqu
 static void rtc_update_timer2(void *opaque)
 {
     RTCState *s = opaque;
-    struct domain *d = vrtc_domain(s);
 
     spin_lock(&s->lock);
     if (!(s->hw.cmos_data[RTC_REG_B] & RTC_SET))
@@ -152,11 +170,7 @@ static void rtc_update_timer2(void *opaq
         s->hw.cmos_data[RTC_REG_C] |= RTC_UF;
         s->hw.cmos_data[RTC_REG_A] &= ~RTC_UIP;
         if ((s->hw.cmos_data[RTC_REG_B] & RTC_UIE))
-        {
-            s->hw.cmos_data[RTC_REG_C] |= RTC_IRQF;
-            hvm_isa_irq_deassert(d, RTC_IRQ);
-            hvm_isa_irq_assert(d, RTC_IRQ);
-        }
+            rtc_toggle_irq(s);
         check_update_timer(s);
     }
     spin_unlock(&s->lock);
@@ -175,21 +189,18 @@ static void alarm_timer_update(RTCState 
 
     stop_timer(&s->alarm_timer);
 
-    if ((s->hw.cmos_data[RTC_REG_B] & RTC_AIE) &&
-            !(s->hw.cmos_data[RTC_REG_B] & RTC_SET))
+    if ( !(s->hw.cmos_data[RTC_REG_B] & RTC_SET) )
     {
         s->current_tm = gmtime(get_localtime(d));
         rtc_copy_date(s);
 
         alarm_sec = from_bcd(s, s->hw.cmos_data[RTC_SECONDS_ALARM]);
         alarm_min = from_bcd(s, s->hw.cmos_data[RTC_MINUTES_ALARM]);
-        alarm_hour = from_bcd(s, s->hw.cmos_data[RTC_HOURS_ALARM]);
-        alarm_hour = convert_hour(s, alarm_hour);
+        alarm_hour = convert_hour(s, s->hw.cmos_data[RTC_HOURS_ALARM]);
 
         cur_sec = from_bcd(s, s->hw.cmos_data[RTC_SECONDS]);
         cur_min = from_bcd(s, s->hw.cmos_data[RTC_MINUTES]);
-        cur_hour = from_bcd(s, s->hw.cmos_data[RTC_HOURS]);
-        cur_hour = convert_hour(s, cur_hour);
+        cur_hour = convert_hour(s, s->hw.cmos_data[RTC_HOURS]);
 
         next_update_time = USEC_PER_SEC - (get_localtime_us(d) % USEC_PER_SEC);
         next_update_time = next_update_time * NS_PER_USEC + NOW();
@@ -343,7 +354,6 @@ static void alarm_timer_update(RTCState 
 static void rtc_alarm_cb(void *opaque)
 {
     RTCState *s = opaque;
-    struct domain *d = vrtc_domain(s);
 
     spin_lock(&s->lock);
     if (!(s->hw.cmos_data[RTC_REG_B] & RTC_SET))
@@ -351,11 +361,7 @@ static void rtc_alarm_cb(void *opaque)
         s->hw.cmos_data[RTC_REG_C] |= RTC_AF;
         /* alarm interrupt */
         if (s->hw.cmos_data[RTC_REG_B] & RTC_AIE)
-        {
-            s->hw.cmos_data[RTC_REG_C] |= RTC_IRQF;
-            hvm_isa_irq_deassert(d, RTC_IRQ);
-            hvm_isa_irq_assert(d, RTC_IRQ);
-        }
+            rtc_toggle_irq(s);
         alarm_timer_update(s);
     }
     spin_unlock(&s->lock);
@@ -365,6 +371,7 @@ static int rtc_ioport_write(void *opaque
 {
     RTCState *s = opaque;
     struct domain *d = vrtc_domain(s);
+    uint32_t orig, mask;
 
     spin_lock(&s->lock);
 
@@ -382,6 +389,7 @@ static int rtc_ioport_write(void *opaque
         return 0;
     }
 
+    orig = s->hw.cmos_data[s->hw.cmos_index];
     switch ( s->hw.cmos_index )
     {
     case RTC_SECONDS_ALARM:
@@ -405,9 +413,9 @@ static int rtc_ioport_write(void *opaque
         break;
     case RTC_REG_A:
         /* UIP bit is read only */
-        s->hw.cmos_data[RTC_REG_A] = (data & ~RTC_UIP) |
-            (s->hw.cmos_data[RTC_REG_A] & RTC_UIP);
-        rtc_timer_update(s);
+        s->hw.cmos_data[RTC_REG_A] = (data & ~RTC_UIP) | (orig & RTC_UIP);
+        if ( (data ^ orig) & (RTC_RATE_SELECT | RTC_DIV_CTL) )
+            rtc_timer_update(s);
         break;
     case RTC_REG_B:
         if ( data & RTC_SET )
@@ -415,7 +423,7 @@ static int rtc_ioport_write(void *opaque
             /* set mode: reset UIP mode */
             s->hw.cmos_data[RTC_REG_A] &= ~RTC_UIP;
             /* adjust cmos before stopping */
-            if (!(s->hw.cmos_data[RTC_REG_B] & RTC_SET))
+            if (!(orig & RTC_SET))
             {
                 s->current_tm = gmtime(get_localtime(d));
                 rtc_copy_date(s);
@@ -424,21 +432,26 @@ static int rtc_ioport_write(void *opaque
         else
         {
             /* if disabling set mode, update the time */
-            if ( s->hw.cmos_data[RTC_REG_B] & RTC_SET )
+            if ( orig & RTC_SET )
                 rtc_set_time(s);
         }
-        /* if the interrupt is already set when the interrupt become
-         * enabled, raise an interrupt immediately*/
-        if ((data & RTC_UIE) && !(s->hw.cmos_data[RTC_REG_B] & RTC_UIE))
-            if (s->hw.cmos_data[RTC_REG_C] & RTC_UF)
+        /*
+         * If the interrupt is already set when the interrupt becomes
+         * enabled, raise an interrupt immediately.
+         * NB: RTC_{A,P,U}IE == RTC_{A,P,U}F respectively.
+         */
+        for ( mask = RTC_UIE; mask <= RTC_PIE; mask <<= 1 )
+            if ( (data & mask) && !(orig & mask) &&
+                 (s->hw.cmos_data[RTC_REG_C] & mask) )
             {
-                hvm_isa_irq_deassert(d, RTC_IRQ);
-                hvm_isa_irq_assert(d, RTC_IRQ);
+                rtc_toggle_irq(s);
+                break;
             }
         s->hw.cmos_data[RTC_REG_B] = data;
-        rtc_timer_update(s);
-        check_update_timer(s);
-        alarm_timer_update(s);
+        if ( (data ^ orig) & RTC_SET )
+            check_update_timer(s);
+        if ( (data ^ orig) & (RTC_24H | RTC_DM_BINARY | RTC_SET) )
+            alarm_timer_update(s);
         break;
     case RTC_REG_C:
     case RTC_REG_D:
@@ -453,7 +466,7 @@ static int rtc_ioport_write(void *opaque
 
 static inline int to_bcd(RTCState *s, int a)
 {
-    if ( s->hw.cmos_data[RTC_REG_B] & 0x04 )
+    if ( s->hw.cmos_data[RTC_REG_B] & RTC_DM_BINARY )
         return a;
     else
         return ((a / 10) << 4) | (a % 10);
@@ -461,7 +474,7 @@ static inline int to_bcd(RTCState *s, in
 
 static inline int from_bcd(RTCState *s, int a)
 {
-    if ( s->hw.cmos_data[RTC_REG_B] & 0x04 )
+    if ( s->hw.cmos_data[RTC_REG_B] & RTC_DM_BINARY )
         return a;
     else
         return ((a >> 4) * 10) + (a & 0x0f);
@@ -469,12 +482,14 @@ static inline int from_bcd(RTCState *s, 
 
 /* Hours in 12 hour mode are in 1-12 range, not 0-11.
  * So we need convert it before using it*/
-static inline int convert_hour(RTCState *s, int hour)
+static inline int convert_hour(RTCState *s, int raw)
 {
+    int hour = from_bcd(s, raw & 0x7f);
+
     if (!(s->hw.cmos_data[RTC_REG_B] & RTC_24H))
     {
         hour %= 12;
-        if (s->hw.cmos_data[RTC_HOURS] & 0x80)
+        if (raw & 0x80)
             hour += 12;
     }
     return hour;
@@ -493,8 +508,7 @@ static void rtc_set_time(RTCState *s)
 
     tm->tm_sec = from_bcd(s, s->hw.cmos_data[RTC_SECONDS]);
     tm->tm_min = from_bcd(s, s->hw.cmos_data[RTC_MINUTES]);
-    tm->tm_hour = from_bcd(s, s->hw.cmos_data[RTC_HOURS] & 0x7f);
-    tm->tm_hour = convert_hour(s, tm->tm_hour);
+    tm->tm_hour = convert_hour(s, s->hw.cmos_data[RTC_HOURS]);
     tm->tm_wday = from_bcd(s, s->hw.cmos_data[RTC_DAY_OF_WEEK]);
     tm->tm_mday = from_bcd(s, s->hw.cmos_data[RTC_DAY_OF_MONTH]);
     tm->tm_mon = from_bcd(s, s->hw.cmos_data[RTC_MONTH]) - 1;
--- a/xen/arch/x86/hvm/vpt.c
+++ b/xen/arch/x86/hvm/vpt.c
@@ -22,6 +22,7 @@
 #include <asm/hvm/vpt.h>
 #include <asm/event.h>
 #include <asm/apic.h>
+#include <asm/mc146818rtc.h>
 
 #define mode_is(d, name) \
     ((d)->arch.hvm_domain.params[HVM_PARAM_TIMER_MODE] == HVMPTM_##name)
@@ -218,6 +219,7 @@ void pt_update_irq(struct vcpu *v)
     struct periodic_time *pt, *temp, *earliest_pt = NULL;
     uint64_t max_lag = -1ULL;
     int irq, is_lapic;
+    void *pt_priv;
 
     spin_lock(&v->arch.hvm_vcpu.tm_lock);
 
@@ -251,13 +253,14 @@ void pt_update_irq(struct vcpu *v)
     earliest_pt->irq_issued = 1;
     irq = earliest_pt->irq;
     is_lapic = (earliest_pt->source == PTSRC_lapic);
+    pt_priv = earliest_pt->priv;
 
     spin_unlock(&v->arch.hvm_vcpu.tm_lock);
 
     if ( is_lapic )
-    {
         vlapic_set_irq(vcpu_vlapic(v), irq, 0);
-    }
+    else if ( irq == RTC_IRQ )
+        rtc_periodic_interrupt(pt_priv);
     else
     {
         hvm_isa_irq_deassert(v->domain, irq);
--- a/xen/include/asm-x86/hvm/vpt.h
+++ b/xen/include/asm-x86/hvm/vpt.h
@@ -181,6 +181,7 @@ void rtc_migrate_timers(struct vcpu *v);
 void rtc_deinit(struct domain *d);
 void rtc_reset(struct domain *d);
 void rtc_update_clock(struct domain *d);
+void rtc_periodic_interrupt(void *);
 
 void pmtimer_init(struct vcpu *v);
 void pmtimer_deinit(struct domain *d);


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel




From xen-devel-bounces@lists.xen.org Wed Aug 15 14:12:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 14:12:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1eKr-0006NF-Nb; Wed, 15 Aug 2012 14:12:37 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <tupeng212@gmail.com>) id 1T1eKp-0006My-A9
	for xen-devel@lists.xen.org; Wed, 15 Aug 2012 14:12:35 +0000
X-Env-Sender: tupeng212@gmail.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1345039946!3018538!1
X-Originating-IP: [209.85.160.173]
X-SpamReason: No, hits=1.4 required=7.0 tests=HTML_60_70,HTML_MESSAGE,
	MAILTO_TO_SPAM_ADDR,MIME_BASE64_TEXT,MIME_BOUND_NEXTPART,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6598 invoked from network); 15 Aug 2012 14:12:27 -0000
Received: from mail-gh0-f173.google.com (HELO mail-gh0-f173.google.com)
	(209.85.160.173)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Aug 2012 14:12:27 -0000
Received: by ghrr17 with SMTP id r17so2009308ghr.32
	for <xen-devel@lists.xen.org>; Wed, 15 Aug 2012 07:12:25 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=date:from:to:cc:reply-to:subject:references:x-priority:x-guid
	:x-has-attach:x-mailer:mime-version:message-id:content-type;
	bh=vmCu4AMsEafAtSK52KcqpO0j+0U1kFV379rSnptYBPA=;
	b=vywdXz9bcIbRM4hyM5VuPhRa+c3i0jXCI+3hsk6jhEqYWiUQZVxCvP9WwW6npS55OY
	fRiqh97V5bDV5Cr0Ll5nPVf3QAF8DKcUixwfLZ9tHTliBHJDo01Qjpq/OxACTAOG0XmZ
	hyT0FutK4/p6WvJtlFR2YWi36lB9KDdHkD7wk5X+XpD5Q78mOy+dfM45nm/X33ePPhXz
	4clBeaRqr3bT/q/hVmsy8M/8bxb+25u/J9eeuL//EqvYAcAy/vAQXY8TDVIzryzrMlLF
	/Cp4MozTaeflywnLB6yuo4AdMmePDgUrePjgBjVSo2LRy31m3ybuXMXvC2W+hl0RDIUI
	6BFg==
Received: by 10.68.227.3 with SMTP id rw3mr40809131pbc.13.1345039945349;
	Wed, 15 Aug 2012 07:12:25 -0700 (PDT)
Received: from root ([115.199.253.245])
	by mx.google.com with ESMTPS id nv6sm610172pbc.42.2012.08.15.07.12.09
	(version=SSLv3 cipher=OTHER); Wed, 15 Aug 2012 07:12:24 -0700 (PDT)
Date: Wed, 15 Aug 2012 22:12:15 +0800
From: tupeng212 <tupeng212@gmail.com>
To: "Jan Beulich" <JBeulich@suse.com>, 
	"Yang Z Zhang" <yang.z.zhang@intel.com>, "Keir Fraser" <keir@xen.org>, 
	"Tim Deegan" <tim@xen.org>
References: <502A3BBC0200007800094B68@nat28.tlf.novell.com>, 
	<2012081522045495397713@gmail.com>
X-Priority: 3
X-GUID: 4EA20C5B-B549-4236-B43B-FA6EA6EE4F04
X-Has-Attach: no
X-Mailer: Foxmail 7.0.1.87[cn]
Mime-Version: 1.0
Message-ID: <2012081522121039050717@gmail.com>
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH,
	RFC v2] x86/HVM: assorted RTC emulation adjustments (was Re: Big
	Bug:Time in VM goes slower...)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: tupeng212 <tupeng212@gmail.com>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5636090078893645621=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.

--===============5636090078893645621==
Content-Type: multipart/alternative;
	boundary="----=_001_NextPart564768006800_=----"

This is a multi-part message in MIME format.

------=_001_NextPart564768006800_=----
Content-Type: text/plain;
	charset="gb2312"
Content-Transfer-Encoding: base64

SGksIEphbjoNCkkgYW0gc29ycnkgSSByZWFsbHkgZG9uJ3QgaGF2ZSBtdWNoIHRpbWUgdG8gdHJ5
IGEgdGVzdCBvZiB5b3VyIHBhdGNoLCBhbmQgaXQgaXMgbm90IGNvbnZlbmllbnQNCmZvciBtZSB0
byBoYXZlIGEgdHJ5LiBGb3IgdGhlIHZlcnNpb24gSSBoYXZlIGJlZW4gdXNpbmcgaXMgeGVuNC4w
LngsIGFuZCB5b3VyIHBhdGNoIGlzIGJhc2VkIG9uIA0KdGhlIGxhdGVzdCB2ZXJzaW9uIHhlbjQu
Mi54LihJIGhhdmUgbmV2ZXIgY29tcGxpZWQgdGhlIHVuc3RhYmxlIG9uZSksIHNvIEkgbWVyZ2Vk
IHlvdXIgcGF0Y2ggdG8gbXkgDQp4ZW40LjAueCwgc3RpbGwgY291bGRuJ3QgZmluZCB0aGUgdHdv
IGZ1bmN0aW9ucyBiZWxvdzoNCiBzdGF0aWMgdm9pZCBydGNfdXBkYXRlX3RpbWVyMih2b2lkICpv
cGFxdWUpIA0KIHN0YXRpYyB2b2lkIHJ0Y19hbGFybV9jYih2b2lkICpvcGFxdWUpIA0Kc28gSSBk
aWRuJ3QgbWVyZ2UgdGhlIHR3byBmdW5jdGlvbnMgd2hpY2ggY29udGFpbnMgYSBydGNfdG9nZ2xl
X2lycSgpIC4NCg0KVGhlIHJlc3VsdHMgZm9yIG1lIHdlcmUgdGhlc2U6DQoxIEluIG15IHJlYWwg
YXBwbGljYXRpb24gZW52aXJvbm1lbnQsIGl0IHdvcmtlZCB2ZXJ5IHdlbGwgaW4gdGhlIGZvcm1l
ciA1bWlucywgbXVjaCBiZXR0ZXIgdGhhbiBiZWZvcmUsDQogYnV0IGF0IGxhc3QgaXQgbGFnZ2Vk
IGFnYWluLiBJIGRvbid0IGtub3cgd2hldGhlciBpdCBiZWxvbmdzIHRvIHRoZSB0d28gbWlzc2Vk
IGZ1bmN0aW9ucy4gSSBsYWNrIHRoZSANCiBhYmlsaXR5IHRvIGZpZ3VyZSB0aGVtIG91dC4NCg0K
MiBXaGVuIEkgdGVzdGVkIG15IHRlc3QgcHJvZ3JhbSB3aGljaCBJIHByb3ZpZGVkIGRheXMgYmVm
b3JlLCBpdCB3b3JrZWQgdmVyeSB3ZWxsLCBtYXliZSB0aGUgcHJvZ3JhbSBkb2Vzbid0IA0KZW11
bGF0ZSB0aGUgcmVhbCBlbnZpcm9ubWVudCBkdWUgdG8gdGhlIHNhbWUgc2V0dGluZyByYXRlLCBz
byBJIG1vZGlmaWVkIHRoaXMgcHJvZ3JhbSBhcyB3aGljaCBpbiB0aGUgYXR0YWNobWVudC4NCmlm
IHlvdSBhcmUgbW9yZSBjb252ZW5pZW50LCB5b3UgY2FuIGhlbHAgbWUgdG8gaGF2ZSBhIGxvb2sg
b2YgaXQuDQotLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLQ0KI2luY2x1ZGUgPHN0ZGlvLmg+DQojaW5jbHVkZSA8d2luZG93
cy5oPg0KdHlwZWRlZiBpbnQgKF9fc3RkY2FsbCAqTlRTRVRUSU1FUikoSU4gVUxPTkcgUmVxdWVz
dGVkUmVzb2x1dGlvbiwgSU4gQk9PTEVBTiBTZXQsIE9VVCBQVUxPTkcgQWN0dWFsUmVzb2x1dGlv
biApOw0KdHlwZWRlZiBpbnQgKF9fc3RkY2FsbCAqTlRRVUVSWVRJTUVSKShPVVQgUFVMT05HICAg
TWluaW11bVJlc29sdXRpb24sIE9VVCBQVUxPTkcgTWF4aW11bVJlc29sdXRpb24sIE9VVCBQVUxP
TkcgQ3VycmVudFJlc29sdXRpb24gKTsNCg0KaW50IG1haW4oKQ0Kew0KRFdPUkQgbWluX3JlcyA9
IDAsIG1heF9yZXMgPSAwLCBjdXJfcmVzID0gMCwgcmV0ID0gMDsNCkhNT0RVTEUgIGhkbGwgPSBO
VUxMOw0KaGRsbCA9IEdldE1vZHVsZUhhbmRsZSgibnRkbGwuZGxsIik7DQpOVFNFVFRJTUVSIEFk
ZHJOdFNldFRpbWVyID0gMDsNCk5UUVVFUllUSU1FUiBBZGRyTnRRdWV5VGltZXIgPSAwOw0KQWRk
ck50U2V0VGltZXIgPSAoTlRTRVRUSU1FUikgR2V0UHJvY0FkZHJlc3MoaGRsbCwgIk50U2V0VGlt
ZXJSZXNvbHV0aW9uIik7DQpBZGRyTnRRdWV5VGltZXIgPSAoTlRRVUVSWVRJTUVSKUdldFByb2NB
ZGRyZXNzKGhkbGwsICJOdFF1ZXJ5VGltZXJSZXNvbHV0aW9uIik7DQoNCndoaWxlICgxKQ0Kew0K
cmV0ID0gQWRkck50UXVleVRpbWVyKCZtaW5fcmVzLCAmbWF4X3JlcywgJmN1cl9yZXMpOw0KcHJp
bnRmKCJtaW5fcmVzID0gJWQsIG1heF9yZXMgPSAlZCwgY3VyX3JlcyA9ICVkXG4iLG1pbl9yZXMs
IG1heF9yZXMsIGN1cl9yZXMpOw0KU2xlZXAoNSk7DQpyZXQgPSBBZGRyTnRTZXRUaW1lcigxMDAw
MCwgMSwgJmN1cl9yZXMpOw0KU2xlZXAoNSk7DQpyZXQgPSBBZGRyTnRTZXRUaW1lcigxMDAwMCwg
MCwgJmN1cl9yZXMpOw0KU2xlZXAoNSk7DQpyZXQgPSBBZGRyTnRTZXRUaW1lcig1MDAwMCwgMSwg
JmN1cl9yZXMpOw0KU2xlZXAoNSk7DQpyZXQgPSBBZGRyTnRTZXRUaW1lcig1MDAwMCwgMCwgJmN1
cl9yZXMpOw0KU2xlZXAoNSk7DQpyZXQgPSBBZGRyTnRTZXRUaW1lcigxMDAwMDAsIDEsICZjdXJf
cmVzKTsNClNsZWVwKDUpOw0KcmV0ID0gQWRkck50U2V0VGltZXIoMTAwMDAwLCAwLCAmY3VyX3Jl
cyk7DQpTbGVlcCg1KTsNCn0NCg0KcmV0dXJuIDA7DQp9DQotLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLQ0KQW5kIEkgaGF2
ZSBhIG9waW5pb24sIGJlY2F1c2Ugb3VyIHByb2R1Y3QgaXMgYmFzZWQgb24gVmVyc2lvbiBYZW40
LjAueCwgaWYgeW91IGhhdmUgZW5vdWdoIHRpbWUsIGNhbiB5b3Ugd3JpdGUgDQphbm90aGVyIHBh
dGNoIGJhc2VkIGh0dHA6Ly94ZW5iaXRzLnhlbi5vcmcvaGcveGVuLTQuMC10ZXN0aW5nLmhnLyBm
b3IgbWUsIHRoYW5rIHlvdSB2ZXJ5IG11Y2ghDQoNCjMgSSBhbHNvIGhhdmUgYSB0aG91Z2h0IHRo
YXQgY2FuIHdlIGhhdmUgc29tZSBkZXRlY3RpbmcgbWV0aG9kcyB0byBmaW5kIHRoZSBsYWdnaW5n
IHRpbWUgZWFybGllciB0byBhZGp1c3QgdGltZQ0KYmFjayB0byBub3JtYWwgdmFsdWUgaW4gdGhl
IGNvZGU/DQoNCmJlc3QgcmVnYXJkcywNCg0KDQoNCg0KdHVwZW5nMjEyDQoNClNlY29uZCBkcmFm
dCBvZiBhIHBhdGNoIHBvc3RlZDsgbm8gdGVzdCByZXN1bHRzIHNvIGZhciBmb3IgZmlyc3QgZHJh
ZnQuDQpKYW4NCg0KRnJvbTogSmFuIEJldWxpY2gNCkRhdGU6IDIwMTItMDgtMTQgMTc6NTENClRv
OiBZYW5nIFogWmhhbmc7IEtlaXIgRnJhc2VyOyBUaW0gRGVlZ2FuDQpDQzogdHVwZW5nMjEyOyB4
ZW4tZGV2ZWwNClN1YmplY3Q6IFtYZW4tZGV2ZWxdIFtQQVRDSCwgUkZDIHYyXSB4ODYvSFZNOiBh
c3NvcnRlZCBSVEMgZW11bGF0aW9uIGFkanVzdG1lbnRzICh3YXMgUmU6IEJpZyBCdWc6VGltZSBp
biBWTSBnb2VzIHNsb3dlci4uLikNCkJlbG93L2F0dGFjaGVkIGEgc2Vjb25kIGRyYWZ0IG9mIGEg
cGF0Y2ggdG8gZml4IG5vdCBvbmx5IHRoaXMNCmlzc3VlLCBidXQgYSBmZXcgbW9yZSB3aXRoIHRo
ZSBSVEMgZW11bGF0aW9uLg0KDQpLZWlyLCBUaW0sIFlhbmcsIG90aGVycyAtIHRoZSBjaGFuZ2Ug
dG8geGVuL2FyY2gveDg2L2h2bS92cHQuYyByZWFsbHkNCmxvb2tzIG1vcmUgbGlrZSBhIGhhY2sg
dGhhbiBhIHNvbHV0aW9uLCBidXQgSSBkb24ndCBzZWUgYW5vdGhlcg0Kd2F5IHdpdGhvdXQgbXVj
aCBtb3JlIGludHJ1c2l2ZSBjaGFuZ2VzLiBUaGUgcG9pbnQgaXMgdGhhdCB3ZQ0Kd2FudCB0aGUg
UlRDIGNvZGUgdG8gZGVjaWRlIHdoZXRoZXIgdG8gZ2VuZXJhdGUgYW4gaW50ZXJydXB0DQooc28g
dGhhdCBSVENfUEYgY2FuIGJlY29tZSBzZXQgY29ycmVjdGx5IGV2ZW4gd2l0aG91dCBSVENfUElF
DQpnZXR0aW5nIGVuYWJsZWQgYnkgdGhlIGd1ZXN0KS4NCg0KQWRkaXRpb25hbGx5IEkgd29uZGVy
IHdoZXRoZXIgYWxhcm1fdGltZXJfdXBkYXRlKCkgc2hvdWxkbid0DQpiYWlsIG9uIG5vbi1jb25m
b3JtaW5nIFJUQ18qX0FMQVJNIHZhbHVlcyAoYXMgdGhvc2Ugd291bGQNCm5ldmVyIG1hdGNoIHRo
ZSB2YWx1ZXMgdGhleSBnZXQgY29tcGFyZWQgYWdhaW5zdCwgd2hlcmVhcw0Kd2l0aCB0aGUgY3Vy
cmVudCB3YXkgb2YgaGFuZGxpbmcgdGhpcyB0aGV5IHdvdWxkIGFwcGVhciB0bw0KbWF0Y2ggLSBp
LmUuIHNldCBSVENfQUYgYW5kIHBvc3NpYmx5IGdlbmVyYXRlIGFuIGludGVycnVwdCAtDQpzb21l
IG90aGVyIHBvaW50IGluIHRpbWUpLiBJIHJlYWxpemUgdGhlIGJlaGF2aW9yIGhlcmUgbWF5IG5v
dA0KYmUgcHJlY2lzZWx5IHNwZWNpZmllZCwgYnV0IHRoZSBzcGVjaWZpY2F0aW9uIHNheWluZyAi
dGhlIGN1cnJlbnQNCnRpbWUgaGFzIG1hdGNoZWQgdGhlIGFsYXJtIHRpbWUiIG1lYW5zIHRvIG1l
IGEgdmFsdWUgYnkgdmFsdWUNCmNvbXBhcmlzb24sIHdoaWNoIGltcGxpZXMgdGhhdCBub24tY29u
Zm9ybWluZyB2YWx1ZXMgd291bGQNCm5ldmVyIG1hdGNoIChzaW5jZSBub24tY29uZm9ybWluZyBj
dXJyZW50IHRpbWUgdmFsdWVzIGNvdWxkDQpnZXQgcmVwbGFjZWQgYXQgYW55IHRpbWUgYnkgdGhl
IGhhcmR3YXJlIGR1ZSB0byBvdmVyZmxvdw0KZGV0ZWN0aW9uKS4NCg0KSmFuDQoNCi0gZG9uJ3Qg
Y2FsbCBydGNfdGltZXJfdXBkYXRlKCkgb24gUkVHX0Egd3JpdGVzIHdoZW4gdGhlIHZhbHVlIGRp
ZG4ndA0KICBjaGFuZ2UgKGRvaW5nIHRoZSBjYWxsIGFsd2F5cyB3YXMgcmVwb3J0ZWQgdG8gY2F1
c2Ugd2FsbCBjbG9jayB0aW1lDQogIGxhZ2dpbmcgd2l0aCB0aGUgSlZNIHJ1bm5pbmcgb24gV2lu
ZG93cykNCi0gZG9uJ3QgY2FsbCBydGNfdGltZXJfdXBkYXRlKCkgb24gUkVHX0Igd3JpdGVzIGF0
IGFsbA0KLSBvbmx5IGNhbGwgYWxhcm1fdGltZXJfdXBkYXRlKCkgb24gUkVHX0Igd3JpdGVzIHdo
ZW4gcmVsZXZhbnQgYml0cw0KICBjaGFuZ2UNCi0gb25seSBjYWxsIGNoZWNrX3VwZGF0ZV90aW1l
cigpIG9uIFJFR19CIHdyaXRlcyB3aGVuIFNFVCBjaGFuZ2VzDQotIGluc3RlYWQgcHJvcGVybHkg
aGFuZGxlIEFGIGFuZCBQRiB3aGVuIHRoZSBndWVzdCBpcyBub3QgYWxzbyBzZXR0aW5nDQogIEFJ
RS9QSUUgcmVzcGVjdGl2ZWx5IChmb3IgVUYgdGhpcyB3YXMgYWxyZWFkeSB0aGUgY2FzZSwgb25s
eSBhDQogIGNvbW1lbnQgd2FzIHNsaWdodGx5IGluYWNjdXJhdGUpDQotIHJhaXNlIHRoZSBSVEMg
SVJRIG5vdCBvbmx5IHdoZW4gVUlFIGdldHMgc2V0IHdoaWxlIFVGIHdhcyBhbHJlYWR5DQogIHNl
dCwgYnV0IGdlbmVyYWxpemUgdGhpcyB0byBjb3ZlciBBSUUgYW5kIFBJRSBhcyB3ZWxsDQotIHBy
b3Blcmx5IG1hc2sgb2ZmIGJpdCA3IHdoZW4gcmV0cmlldmluZyB0aGUgaG91ciB2YWx1ZXMgaW4N
CiAgYWxhcm1fdGltZXJfdXBkYXRlKCksIGFuZCBwcm9wZXJseSB1c2UgUlRDX0hPVVJTX0FMQVJN
J3MgYml0IDcgd2hlbg0KICBjb252ZXJ0aW5nIGZyb20gMTItIHRvIDI0LWhvdXIgdmFsdWUNCi0g
YWxzbyBoYW5kbGUgdGhlIHR3byBvdGhlciBwb3NzaWJsZSBjbG9jayBiYXNlcw0KLSB1c2UgUlRD
XyogbmFtZXMgaW4gYSBjb3VwbGUgb2YgcGxhY2VzIHdoZXJlIGxpdGVyYWwgbnVtYmVycyB3ZXJl
IHVzZWQNCiAgc28gZmFyDQoNCi0tLSBhL3hlbi9hcmNoL3g4Ni9odm0vcnRjLmMNCisrKyBiL3hl
bi9hcmNoL3g4Ni9odm0vcnRjLmMNCkBAIC01MCwxMSArNTAsMjQgQEAgc3RhdGljIHZvaWQgcnRj
X3NldF90aW1lKFJUQ1N0YXRlICpzKTsNCiBzdGF0aWMgaW5saW5lIGludCBmcm9tX2JjZChSVENT
dGF0ZSAqcywgaW50IGEpOw0KIHN0YXRpYyBpbmxpbmUgaW50IGNvbnZlcnRfaG91cihSVENTdGF0
ZSAqcywgaW50IGhvdXIpOw0KDQotc3RhdGljIHZvaWQgcnRjX3BlcmlvZGljX2NiKHN0cnVjdCB2
Y3B1ICp2LCB2b2lkICpvcGFxdWUpDQorc3RhdGljIHZvaWQgcnRjX3RvZ2dsZV9pcnEoUlRDU3Rh
dGUgKnMpDQorew0KKyAgICBzdHJ1Y3QgZG9tYWluICpkID0gdnJ0Y19kb21haW4ocyk7DQorDQor
ICAgIEFTU0VSVChzcGluX2lzX2xvY2tlZCgmcy0+bG9jaykpOw0KKyAgICBzLT5ody5jbW9zX2Rh
dGFbUlRDX1JFR19DXSB8PSBSVENfSVJRRjsNCisgICAgaHZtX2lzYV9pcnFfZGVhc3NlcnQoZCwg
UlRDX0lSUSk7DQorICAgIGh2bV9pc2FfaXJxX2Fzc2VydChkLCBSVENfSVJRKTsNCit9DQorDQor
dm9pZCBydGNfcGVyaW9kaWNfaW50ZXJydXB0KHZvaWQgKm9wYXF1ZSkNCiB7DQogICAgIFJUQ1N0
YXRlICpzID0gb3BhcXVlOw0KKw0KICAgICBzcGluX2xvY2soJnMtPmxvY2spOw0KLSAgICBzLT5o
dy5jbW9zX2RhdGFbUlRDX1JFR19DXSB8PSAweGMwOw0KKyAgICBzLT5ody5jbW9zX2RhdGFbUlRD
X1JFR19DXSB8PSBSVENfUEY7DQorICAgIGlmICggcy0+aHcuY21vc19kYXRhW1JUQ19SRUdfQl0g
JiBSVENfUElFICkNCisgICAgICAgIHJ0Y190b2dnbGVfaXJxKHMpOw0KICAgICBzcGluX3VubG9j
aygmcy0+bG9jayk7DQogfQ0KDQpAQCAtNjgsMTkgKzgxLDI1IEBAIHN0YXRpYyB2b2lkIHJ0Y190
aW1lcl91cGRhdGUoUlRDU3RhdGUgKnMNCiAgICAgQVNTRVJUKHNwaW5faXNfbG9ja2VkKCZzLT5s
b2NrKSk7DQoNCiAgICAgcGVyaW9kX2NvZGUgPSBzLT5ody5jbW9zX2RhdGFbUlRDX1JFR19BXSAm
IFJUQ19SQVRFX1NFTEVDVDsNCi0gICAgaWYgKCAocGVyaW9kX2NvZGUgIT0gMCkgJiYgKHMtPmh3
LmNtb3NfZGF0YVtSVENfUkVHX0JdICYgUlRDX1BJRSkgKQ0KKyAgICBzd2l0Y2ggKCBzLT5ody5j
bW9zX2RhdGFbUlRDX1JFR19BXSAmIFJUQ19ESVZfQ1RMICkNCiAgICAgew0KLSAgICAgICAgaWYg
KCBwZXJpb2RfY29kZSA8PSAyICkNCisgICAgY2FzZSBSVENfUkVGX0NMQ0tfMzJLSFo6DQorICAg
ICAgICBpZiAoIChwZXJpb2RfY29kZSAhPSAwKSAmJiAocGVyaW9kX2NvZGUgPD0gMikgKQ0KICAg
ICAgICAgICAgIHBlcmlvZF9jb2RlICs9IDc7DQotDQotICAgICAgICBwZXJpb2QgPSAxIDw8IChw
ZXJpb2RfY29kZSAtIDEpOyAvKiBwZXJpb2QgaW4gMzIgS2h6IGN5Y2xlcyAqLw0KLSAgICAgICAg
cGVyaW9kID0gRElWX1JPVU5EKChwZXJpb2QgKiAxMDAwMDAwMDAwVUxMKSwgMzI3NjgpOyAvKiBw
ZXJpb2QgaW4gbnMgKi8NCi0gICAgICAgIGNyZWF0ZV9wZXJpb2RpY190aW1lKHYsICZzLT5wdCwg
cGVyaW9kLCBwZXJpb2QsIFJUQ19JUlEsDQotICAgICAgICAgICAgICAgICAgICAgICAgICAgICBy
dGNfcGVyaW9kaWNfY2IsIHMpOw0KLSAgICB9DQotICAgIGVsc2UNCi0gICAgew0KKyAgICAgICAg
LyogZmFsbCB0aHJvdWdoICovDQorICAgIGNhc2UgUlRDX1JFRl9DTENLXzFNSFo6DQorICAgIGNh
c2UgUlRDX1JFRl9DTENLXzRNSFo6DQorICAgICAgICBpZiAoIHBlcmlvZF9jb2RlICE9IDAgKQ0K
KyAgICAgICAgew0KKyAgICAgICAgICAgIHBlcmlvZCA9IDEgPDwgKHBlcmlvZF9jb2RlIC0gMSk7
IC8qIHBlcmlvZCBpbiAzMiBLaHogY3ljbGVzICovDQorICAgICAgICAgICAgcGVyaW9kID0gRElW
X1JPVU5EKHBlcmlvZCAqIDEwMDAwMDAwMDBVTEwsIDMyNzY4KTsgLyogaW4gbnMgKi8NCisgICAg
ICAgICAgICBjcmVhdGVfcGVyaW9kaWNfdGltZSh2LCAmcy0+cHQsIHBlcmlvZCwgcGVyaW9kLCBS
VENfSVJRLCBOVUxMLCBzKTsNCisgICAgICAgICAgICBicmVhazsNCisgICAgICAgIH0NCisgICAg
ICAgIC8qIGZhbGwgdGhyb3VnaCAqLw0KKyAgICBkZWZhdWx0Og0KICAgICAgICAgZGVzdHJveV9w
ZXJpb2RpY190aW1lKCZzLT5wdCk7DQorICAgICAgICBicmVhazsNCiAgICAgfQ0KIH0NCg0KQEAg
LTEwMiw3ICsxMjEsNyBAQCBzdGF0aWMgdm9pZCBjaGVja191cGRhdGVfdGltZXIoUlRDU3RhdGUg
DQogICAgICAgICBndWVzdF91c2VjID0gZ2V0X2xvY2FsdGltZV91cyhkKSAlIFVTRUNfUEVSX1NF
QzsNCiAgICAgICAgIGlmIChndWVzdF91c2VjID49IChVU0VDX1BFUl9TRUMgLSAyNDQpKQ0KICAg
ICAgICAgew0KLSAgICAgICAgICAgIC8qIFJUQyBpcyBpbiB1cGRhdGUgY3ljbGUgd2hlbiBlbmFi
bGluZyBVSUUgKi8NCisgICAgICAgICAgICAvKiBSVEMgaXMgaW4gdXBkYXRlIGN5Y2xlICovDQog
ICAgICAgICAgICAgcy0+aHcuY21vc19kYXRhW1JUQ19SRUdfQV0gfD0gUlRDX1VJUDsNCiAgICAg
ICAgICAgICBuZXh0X3VwZGF0ZV90aW1lID0gKFVTRUNfUEVSX1NFQyAtIGd1ZXN0X3VzZWMpICog
TlNfUEVSX1VTRUM7DQogICAgICAgICAgICAgZXhwaXJlX3RpbWUgPSBOT1coKSArIG5leHRfdXBk
YXRlX3RpbWU7DQpAQCAtMTQ0LDcgKzE2Myw2IEBAIHN0YXRpYyB2b2lkIHJ0Y191cGRhdGVfdGlt
ZXIodm9pZCAqb3BhcXUNCiBzdGF0aWMgdm9pZCBydGNfdXBkYXRlX3RpbWVyMih2b2lkICpvcGFx
dWUpDQogew0KICAgICBSVENTdGF0ZSAqcyA9IG9wYXF1ZTsNCi0gICAgc3RydWN0IGRvbWFpbiAq
ZCA9IHZydGNfZG9tYWluKHMpOw0KDQogICAgIHNwaW5fbG9jaygmcy0+bG9jayk7DQogICAgIGlm
ICghKHMtPmh3LmNtb3NfZGF0YVtSVENfUkVHX0JdICYgUlRDX1NFVCkpDQpAQCAtMTUyLDExICsx
NzAsNyBAQCBzdGF0aWMgdm9pZCBydGNfdXBkYXRlX3RpbWVyMih2b2lkICpvcGFxDQogICAgICAg
ICBzLT5ody5jbW9zX2RhdGFbUlRDX1JFR19DXSB8PSBSVENfVUY7DQogICAgICAgICBzLT5ody5j
bW9zX2RhdGFbUlRDX1JFR19BXSAmPSB+UlRDX1VJUDsNCiAgICAgICAgIGlmICgocy0+aHcuY21v
c19kYXRhW1JUQ19SRUdfQl0gJiBSVENfVUlFKSkNCi0gICAgICAgIHsNCi0gICAgICAgICAgICBz
LT5ody5jbW9zX2RhdGFbUlRDX1JFR19DXSB8PSBSVENfSVJRRjsNCi0gICAgICAgICAgICBodm1f
aXNhX2lycV9kZWFzc2VydChkLCBSVENfSVJRKTsNCi0gICAgICAgICAgICBodm1faXNhX2lycV9h
c3NlcnQoZCwgUlRDX0lSUSk7DQotICAgICAgICB9DQorICAgICAgICAgICAgcnRjX3RvZ2dsZV9p
cnEocyk7DQogICAgICAgICBjaGVja191cGRhdGVfdGltZXIocyk7DQogICAgIH0NCiAgICAgc3Bp
bl91bmxvY2soJnMtPmxvY2spOw0KQEAgLTE3NSwyMSArMTg5LDE4IEBAIHN0YXRpYyB2b2lkIGFs
YXJtX3RpbWVyX3VwZGF0ZShSVENTdGF0ZSANCg0KICAgICBzdG9wX3RpbWVyKCZzLT5hbGFybV90
aW1lcik7DQoNCi0gICAgaWYgKChzLT5ody5jbW9zX2RhdGFbUlRDX1JFR19CXSAmIFJUQ19BSUUp
ICYmDQotICAgICAgICAgICAgIShzLT5ody5jbW9zX2RhdGFbUlRDX1JFR19CXSAmIFJUQ19TRVQp
KQ0KKyAgICBpZiAoICEocy0+aHcuY21vc19kYXRhW1JUQ19SRUdfQl0gJiBSVENfU0VUKSApDQog
ICAgIHsNCiAgICAgICAgIHMtPmN1cnJlbnRfdG0gPSBnbXRpbWUoZ2V0X2xvY2FsdGltZShkKSk7
DQogICAgICAgICBydGNfY29weV9kYXRlKHMpOw0KDQogICAgICAgICBhbGFybV9zZWMgPSBmcm9t
X2JjZChzLCBzLT5ody5jbW9zX2RhdGFbUlRDX1NFQ09ORFNfQUxBUk1dKTsNCiAgICAgICAgIGFs
YXJtX21pbiA9IGZyb21fYmNkKHMsIHMtPmh3LmNtb3NfZGF0YVtSVENfTUlOVVRFU19BTEFSTV0p
Ow0KLSAgICAgICAgYWxhcm1faG91ciA9IGZyb21fYmNkKHMsIHMtPmh3LmNtb3NfZGF0YVtSVENf
SE9VUlNfQUxBUk1dKTsNCi0gICAgICAgIGFsYXJtX2hvdXIgPSBjb252ZXJ0X2hvdXIocywgYWxh
cm1faG91cik7DQorICAgICAgICBhbGFybV9ob3VyID0gY29udmVydF9ob3VyKHMsIHMtPmh3LmNt
b3NfZGF0YVtSVENfSE9VUlNfQUxBUk1dKTsNCg0KICAgICAgICAgY3VyX3NlYyA9IGZyb21fYmNk
KHMsIHMtPmh3LmNtb3NfZGF0YVtSVENfU0VDT05EU10pOw0KICAgICAgICAgY3VyX21pbiA9IGZy
b21fYmNkKHMsIHMtPmh3LmNtb3NfZGF0YVtSVENfTUlOVVRFU10pOw0KLSAgICAgICAgY3VyX2hv
dXIgPSBmcm9tX2JjZChzLCBzLT5ody5jbW9zX2RhdGFbUlRDX0hPVVJTXSk7DQotICAgICAgICBj
dXJfaG91ciA9IGNvbnZlcnRfaG91cihzLCBjdXJfaG91cik7DQorICAgICAgICBjdXJfaG91ciA9
IGNvbnZlcnRfaG91cihzLCBzLT5ody5jbW9zX2RhdGFbUlRDX0hPVVJTXSk7DQoNCiAgICAgICAg
IG5leHRfdXBkYXRlX3RpbWUgPSBVU0VDX1BFUl9TRUMgLSAoZ2V0X2xvY2FsdGltZV91cyhkKSAl
IFVTRUNfUEVSX1NFQyk7DQogICAgICAgICBuZXh0X3VwZGF0ZV90aW1lID0gbmV4dF91cGRhdGVf
dGltZSAqIE5TX1BFUl9VU0VDICsgTk9XKCk7DQpAQCAtMzQzLDcgKzM1NCw2IEBAIHN0YXRpYyB2
b2lkIGFsYXJtX3RpbWVyX3VwZGF0ZShSVENTdGF0ZSANCiBzdGF0aWMgdm9pZCBydGNfYWxhcm1f
Y2Iodm9pZCAqb3BhcXVlKQ0KIHsNCiAgICAgUlRDU3RhdGUgKnMgPSBvcGFxdWU7DQotICAgIHN0
cnVjdCBkb21haW4gKmQgPSB2cnRjX2RvbWFpbihzKTsNCg0KICAgICBzcGluX2xvY2soJnMtPmxv
Y2spOw0KICAgICBpZiAoIShzLT5ody5jbW9zX2RhdGFbUlRDX1JFR19CXSAmIFJUQ19TRVQpKQ0K
QEAgLTM1MSwxMSArMzYxLDcgQEAgc3RhdGljIHZvaWQgcnRjX2FsYXJtX2NiKHZvaWQgKm9wYXF1
ZSkNCiAgICAgICAgIHMtPmh3LmNtb3NfZGF0YVtSVENfUkVHX0NdIHw9IFJUQ19BRjsNCiAgICAg
ICAgIC8qIGFsYXJtIGludGVycnVwdCAqLw0KICAgICAgICAgaWYgKHMtPmh3LmNtb3NfZGF0YVtS
VENfUkVHX0JdICYgUlRDX0FJRSkNCi0gICAgICAgIHsNCi0gICAgICAgICAgICBzLT5ody5jbW9z
X2RhdGFbUlRDX1JFR19DXSB8PSBSVENfSVJRRjsNCi0gICAgICAgICAgICBodm1faXNhX2lycV9k
ZWFzc2VydChkLCBSVENfSVJRKTsNCi0gICAgICAgICAgICBodm1faXNhX2lycV9hc3NlcnQoZCwg
UlRDX0lSUSk7DQotICAgICAgICB9DQorICAgICAgICAgICAgcnRjX3RvZ2dsZV9pcnEocyk7DQog
ICAgICAgICBhbGFybV90aW1lcl91cGRhdGUocyk7DQogICAgIH0NCiAgICAgc3Bpbl91bmxvY2so
JnMtPmxvY2spOw0KQEAgLTM2NSw2ICszNzEsNyBAQCBzdGF0aWMgaW50IHJ0Y19pb3BvcnRfd3Jp
dGUodm9pZCAqb3BhcXVlDQogew0KICAgICBSVENTdGF0ZSAqcyA9IG9wYXF1ZTsNCiAgICAgc3Ry
dWN0IGRvbWFpbiAqZCA9IHZydGNfZG9tYWluKHMpOw0KKyAgICB1aW50MzJfdCBvcmlnLCBtYXNr
Ow0KDQogICAgIHNwaW5fbG9jaygmcy0+bG9jayk7DQoNCkBAIC0zODIsNiArMzg5LDcgQEAgc3Rh
dGljIGludCBydGNfaW9wb3J0X3dyaXRlKHZvaWQgKm9wYXF1ZQ0KICAgICAgICAgcmV0dXJuIDA7
DQogICAgIH0NCg0KKyAgICBvcmlnID0gcy0+aHcuY21vc19kYXRhW3MtPmh3LmNtb3NfaW5kZXhd
Ow0KICAgICBzd2l0Y2ggKCBzLT5ody5jbW9zX2luZGV4ICkNCiAgICAgew0KICAgICBjYXNlIFJU
Q19TRUNPTkRTX0FMQVJNOg0KQEAgLTQwNSw5ICs0MTMsOSBAQCBzdGF0aWMgaW50IHJ0Y19pb3Bv
cnRfd3JpdGUodm9pZCAqb3BhcXVlDQogICAgICAgICBicmVhazsNCiAgICAgY2FzZSBSVENfUkVH
X0E6DQogICAgICAgICAvKiBVSVAgYml0IGlzIHJlYWQgb25seSAqLw0KLSAgICAgICAgcy0+aHcu
Y21vc19kYXRhW1JUQ19SRUdfQV0gPSAoZGF0YSAmIH5SVENfVUlQKSB8DQotICAgICAgICAgICAg
KHMtPmh3LmNtb3NfZGF0YVtSVENfUkVHX0FdICYgUlRDX1VJUCk7DQotICAgICAgICBydGNfdGlt
ZXJfdXBkYXRlKHMpOw0KKyAgICAgICAgcy0+aHcuY21vc19kYXRhW1JUQ19SRUdfQV0gPSAoZGF0
YSAmIH5SVENfVUlQKSB8IChvcmlnICYgUlRDX1VJUCk7DQorICAgICAgICBpZiAoIChkYXRhIF4g
b3JpZykgJiAoUlRDX1JBVEVfU0VMRUNUIHwgUlRDX0RJVl9DVEwpICkNCisgICAgICAgICAgICBy
dGNfdGltZXJfdXBkYXRlKHMpOw0KICAgICAgICAgYnJlYWs7DQogICAgIGNhc2UgUlRDX1JFR19C
Og0KICAgICAgICAgaWYgKCBkYXRhICYgUlRDX1NFVCApDQpAQCAtNDE1LDcgKzQyMyw3IEBAIHN0
YXRpYyBpbnQgcnRjX2lvcG9ydF93cml0ZSh2b2lkICpvcGFxdWUNCiAgICAgICAgICAgICAvKiBz
ZXQgbW9kZTogcmVzZXQgVUlQIG1vZGUgKi8NCiAgICAgICAgICAgICBzLT5ody5jbW9zX2RhdGFb
UlRDX1JFR19BXSAmPSB+UlRDX1VJUDsNCiAgICAgICAgICAgICAvKiBhZGp1c3QgY21vcyBiZWZv
cmUgc3RvcHBpbmcgKi8NCi0gICAgICAgICAgICBpZiAoIShzLT5ody5jbW9zX2RhdGFbUlRDX1JF
R19CXSAmIFJUQ19TRVQpKQ0KKyAgICAgICAgICAgIGlmICghKG9yaWcgJiBSVENfU0VUKSkNCiAg
ICAgICAgICAgICB7DQogICAgICAgICAgICAgICAgIHMtPmN1cnJlbnRfdG0gPSBnbXRpbWUoZ2V0
X2xvY2FsdGltZShkKSk7DQogICAgICAgICAgICAgICAgIHJ0Y19jb3B5X2RhdGUocyk7DQpAQCAt
NDI0LDIxICs0MzIsMjYgQEAgc3RhdGljIGludCBydGNfaW9wb3J0X3dyaXRlKHZvaWQgKm9wYXF1
ZQ0KICAgICAgICAgZWxzZQ0KICAgICAgICAgew0KICAgICAgICAgICAgIC8qIGlmIGRpc2FibGlu
ZyBzZXQgbW9kZSwgdXBkYXRlIHRoZSB0aW1lICovDQotICAgICAgICAgICAgaWYgKCBzLT5ody5j
bW9zX2RhdGFbUlRDX1JFR19CXSAmIFJUQ19TRVQgKQ0KKyAgICAgICAgICAgIGlmICggb3JpZyAm
IFJUQ19TRVQgKQ0KICAgICAgICAgICAgICAgICBydGNfc2V0X3RpbWUocyk7DQogICAgICAgICB9
DQotICAgICAgICAvKiBpZiB0aGUgaW50ZXJydXB0IGlzIGFscmVhZHkgc2V0IHdoZW4gdGhlIGlu
dGVycnVwdCBiZWNvbWUNCi0gICAgICAgICAqIGVuYWJsZWQsIHJhaXNlIGFuIGludGVycnVwdCBp
bW1lZGlhdGVseSovDQotICAgICAgICBpZiAoKGRhdGEgJiBSVENfVUlFKSAmJiAhKHMtPmh3LmNt
b3NfZGF0YVtSVENfUkVHX0JdICYgUlRDX1VJRSkpDQotICAgICAgICAgICAgaWYgKHMtPmh3LmNt
b3NfZGF0YVtSVENfUkVHX0NdICYgUlRDX1VGKQ0KKyAgICAgICAgLyoNCisgICAgICAgICAqIElm
IHRoZSBpbnRlcnJ1cHQgaXMgYWxyZWFkeSBzZXQgd2hlbiB0aGUgaW50ZXJydXB0IGJlY29tZXMN
CisgICAgICAgICAqIGVuYWJsZWQsIHJhaXNlIGFuIGludGVycnVwdCBpbW1lZGlhdGVseS4NCisg
ICAgICAgICAqIE5COiBSVENfe0EsUCxVfUlFID09IFJUQ197QSxQLFV9RiByZXNwZWN0aXZlbHku
DQorICAgICAgICAgKi8NCisgICAgICAgIGZvciAoIG1hc2sgPSBSVENfVUlFOyBtYXNrIDw9IFJU
Q19QSUU7IG1hc2sgPDw9IDEgKQ0KKyAgICAgICAgICAgIGlmICggKGRhdGEgJiBtYXNrKSAmJiAh
KG9yaWcgJiBtYXNrKSAmJg0KKyAgICAgICAgICAgICAgICAgKHMtPmh3LmNtb3NfZGF0YVtSVENf
UkVHX0NdICYgbWFzaykgKQ0KICAgICAgICAgICAgIHsNCi0gICAgICAgICAgICAgICAgaHZtX2lz
YV9pcnFfZGVhc3NlcnQoZCwgUlRDX0lSUSk7DQotICAgICAgICAgICAgICAgIGh2bV9pc2FfaXJx
X2Fzc2VydChkLCBSVENfSVJRKTsNCisgICAgICAgICAgICAgICAgcnRjX3RvZ2dsZV9pcnEocyk7
DQorICAgICAgICAgICAgICAgIGJyZWFrOw0KICAgICAgICAgICAgIH0NCiAgICAgICAgIHMtPmh3
LmNtb3NfZGF0YVtSVENfUkVHX0JdID0gZGF0YTsNCi0gICAgICAgIHJ0Y190aW1lcl91cGRhdGUo
cyk7DQotICAgICAgICBjaGVja191cGRhdGVfdGltZXIocyk7DQotICAgICAgICBhbGFybV90aW1l
cl91cGRhdGUocyk7DQorICAgICAgICBpZiAoIChkYXRhIF4gb3JpZykgJiBSVENfU0VUICkNCisg
ICAgICAgICAgICBjaGVja191cGRhdGVfdGltZXIocyk7DQorICAgICAgICBpZiAoIChkYXRhIF4g
b3JpZykgJiAoUlRDXzI0SCB8IFJUQ19ETV9CSU5BUlkgfCBSVENfU0VUKSApDQorICAgICAgICAg
ICAgYWxhcm1fdGltZXJfdXBkYXRlKHMpOw0KICAgICAgICAgYnJlYWs7DQogICAgIGNhc2UgUlRD
X1JFR19DOg0KICAgICBjYXNlIFJUQ19SRUdfRDoNCkBAIC00NTMsNyArNDY2LDcgQEAgc3RhdGlj
IGludCBydGNfaW9wb3J0X3dyaXRlKHZvaWQgKm9wYXF1ZQ0KDQogc3RhdGljIGlubGluZSBpbnQg
dG9fYmNkKFJUQ1N0YXRlICpzLCBpbnQgYSkNCiB7DQotICAgIGlmICggcy0+aHcuY21vc19kYXRh
W1JUQ19SRUdfQl0gJiAweDA0ICkNCisgICAgaWYgKCBzLT5ody5jbW9zX2RhdGFbUlRDX1JFR19C
XSAmIFJUQ19ETV9CSU5BUlkgKQ0KICAgICAgICAgcmV0dXJuIGE7DQogICAgIGVsc2UNCiAgICAg
ICAgIHJldHVybiAoKGEgLyAxMCkgPDwgNCkgfCAoYSAlIDEwKTsNCkBAIC00NjEsNyArNDc0LDcg
QEAgc3RhdGljIGlubGluZSBpbnQgdG9fYmNkKFJUQ1N0YXRlICpzLCBpbg0KDQogc3RhdGljIGlu
bGluZSBpbnQgZnJvbV9iY2QoUlRDU3RhdGUgKnMsIGludCBhKQ0KIHsNCi0gICAgaWYgKCBzLT5o
dy5jbW9zX2RhdGFbUlRDX1JFR19CXSAmIDB4MDQgKQ0KKyAgICBpZiAoIHMtPmh3LmNtb3NfZGF0
YVtSVENfUkVHX0JdICYgUlRDX0RNX0JJTkFSWSApDQogICAgICAgICByZXR1cm4gYTsNCiAgICAg
ZWxzZQ0KICAgICAgICAgcmV0dXJuICgoYSA+PiA0KSAqIDEwKSArIChhICYgMHgwZik7DQpAQCAt
NDY5LDEyICs0ODIsMTQgQEAgc3RhdGljIGlubGluZSBpbnQgZnJvbV9iY2QoUlRDU3RhdGUgKnMs
IA0KDQogLyogSG91cnMgaW4gMTIgaG91ciBtb2RlIGFyZSBpbiAxLTEyIHJhbmdlLCBub3QgMC0x
MS4NCiAgKiBTbyB3ZSBuZWVkIGNvbnZlcnQgaXQgYmVmb3JlIHVzaW5nIGl0Ki8NCi1zdGF0aWMg
aW5saW5lIGludCBjb252ZXJ0X2hvdXIoUlRDU3RhdGUgKnMsIGludCBob3VyKQ0KK3N0YXRpYyBp
bmxpbmUgaW50IGNvbnZlcnRfaG91cihSVENTdGF0ZSAqcywgaW50IHJhdykNCiB7DQorICAgIGlu
dCBob3VyID0gZnJvbV9iY2QocywgcmF3ICYgMHg3Zik7DQorDQogICAgIGlmICghKHMtPmh3LmNt
b3NfZGF0YVtSVENfUkVHX0JdICYgUlRDXzI0SCkpDQogICAgIHsNCiAgICAgICAgIGhvdXIgJT0g
MTI7DQotICAgICAgICBpZiAocy0+aHcuY21vc19kYXRhW1JUQ19IT1VSU10gJiAweDgwKQ0KKyAg
ICAgICAgaWYgKHJhdyAmIDB4ODApDQogICAgICAgICAgICAgaG91ciArPSAxMjsNCiAgICAgfQ0K
ICAgICByZXR1cm4gaG91cjsNCkBAIC00OTMsOCArNTA4LDcgQEAgc3RhdGljIHZvaWQgcnRjX3Nl
dF90aW1lKFJUQ1N0YXRlICpzKQ0KICAgICANCiAgICAgdG0tPnRtX3NlYyA9IGZyb21fYmNkKHMs
IHMtPmh3LmNtb3NfZGF0YVtSVENfU0VDT05EU10pOw0KICAgICB0bS0+dG1fbWluID0gZnJvbV9i
Y2Qocywgcy0+aHcuY21vc19kYXRhW1JUQ19NSU5VVEVTXSk7DQotICAgIHRtLT50bV9ob3VyID0g
ZnJvbV9iY2Qocywgcy0+aHcuY21vc19kYXRhW1JUQ19IT1VSU10gJiAweDdmKTsNCi0gICAgdG0t
PnRtX2hvdXIgPSBjb252ZXJ0X2hvdXIocywgdG0tPnRtX2hvdXIpOw0KKyAgICB0bS0+dG1faG91
ciA9IGNvbnZlcnRfaG91cihzLCBzLT5ody5jbW9zX2RhdGFbUlRDX0hPVVJTXSk7DQogICAgIHRt
LT50bV93ZGF5ID0gZnJvbV9iY2Qocywgcy0+aHcuY21vc19kYXRhW1JUQ19EQVlfT0ZfV0VFS10p
Ow0KICAgICB0bS0+dG1fbWRheSA9IGZyb21fYmNkKHMsIHMtPmh3LmNtb3NfZGF0YVtSVENfREFZ
X09GX01PTlRIXSk7DQogICAgIHRtLT50bV9tb24gPSBmcm9tX2JjZChzLCBzLT5ody5jbW9zX2Rh
dGFbUlRDX01PTlRIXSkgLSAxOw0KLS0tIGEveGVuL2FyY2gveDg2L2h2bS92cHQuYw0KKysrIGIv
eGVuL2FyY2gveDg2L2h2bS92cHQuYw0KQEAgLTIyLDYgKzIyLDcgQEANCiAjaW5jbHVkZSA8YXNt
L2h2bS92cHQuaD4NCiAjaW5jbHVkZSA8YXNtL2V2ZW50Lmg+DQogI2luY2x1ZGUgPGFzbS9hcGlj
Lmg+DQorI2luY2x1ZGUgPGFzbS9tYzE0NjgxOHJ0Yy5oPg0KDQogI2RlZmluZSBtb2RlX2lzKGQs
IG5hbWUpIFwNCiAgICAgKChkKS0+YXJjaC5odm1fZG9tYWluLnBhcmFtc1tIVk1fUEFSQU1fVElN
RVJfTU9ERV0gPT0gSFZNUFRNXyMjbmFtZSkNCkBAIC0yMTgsNiArMjE5LDcgQEAgdm9pZCBwdF91
cGRhdGVfaXJxKHN0cnVjdCB2Y3B1ICp2KQ0KICAgICBzdHJ1Y3QgcGVyaW9kaWNfdGltZSAqcHQs
ICp0ZW1wLCAqZWFybGllc3RfcHQgPSBOVUxMOw0KICAgICB1aW50NjRfdCBtYXhfbGFnID0gLTFV
TEw7DQogICAgIGludCBpcnEsIGlzX2xhcGljOw0KKyAgICB2b2lkICpwdF9wcml2Ow0KDQogICAg
IHNwaW5fbG9jaygmdi0+YXJjaC5odm1fdmNwdS50bV9sb2NrKTsNCg0KQEAgLTI1MSwxMyArMjUz
LDE0IEBAIHZvaWQgcHRfdXBkYXRlX2lycShzdHJ1Y3QgdmNwdSAqdikNCiAgICAgZWFybGllc3Rf
cHQtPmlycV9pc3N1ZWQgPSAxOw0KICAgICBpcnEgPSBlYXJsaWVzdF9wdC0+aXJxOw0KICAgICBp
c19sYXBpYyA9IChlYXJsaWVzdF9wdC0+c291cmNlID09IFBUU1JDX2xhcGljKTsNCisgICAgcHRf
cHJpdiA9IGVhcmxpZXN0X3B0LT5wcml2Ow0KDQogICAgIHNwaW5fdW5sb2NrKCZ2LT5hcmNoLmh2
bV92Y3B1LnRtX2xvY2spOw0KDQogICAgIGlmICggaXNfbGFwaWMgKQ0KLSAgICB7DQogICAgICAg
ICB2bGFwaWNfc2V0X2lycSh2Y3B1X3ZsYXBpYyh2KSwgaXJxLCAwKTsNCi0gICAgfQ0KKyAgICBl
bHNlIGlmICggaXJxID09IFJUQ19JUlEgKQ0KKyAgICAgICAgcnRjX3BlcmlvZGljX2ludGVycnVw
dChwdF9wcml2KTsNCiAgICAgZWxzZQ0KICAgICB7DQogICAgICAgICBodm1faXNhX2lycV9kZWFz
c2VydCh2LT5kb21haW4sIGlycSk7DQotLS0gYS94ZW4vaW5jbHVkZS9hc20teDg2L2h2bS92cHQu
aA0KKysrIGIveGVuL2luY2x1ZGUvYXNtLXg4Ni9odm0vdnB0LmgNCkBAIC0xODEsNiArMTgxLDcg
QEAgdm9pZCBydGNfbWlncmF0ZV90aW1lcnMoc3RydWN0IHZjcHUgKnYpOw0KIHZvaWQgcnRjX2Rl
aW5pdChzdHJ1Y3QgZG9tYWluICpkKTsNCiB2b2lkIHJ0Y19yZXNldChzdHJ1Y3QgZG9tYWluICpk
KTsNCiB2b2lkIHJ0Y191cGRhdGVfY2xvY2soc3RydWN0IGRvbWFpbiAqZCk7DQordm9pZCBydGNf
cGVyaW9kaWNfaW50ZXJydXB0KHZvaWQgKik7DQoNCiB2b2lkIHBtdGltZXJfaW5pdChzdHJ1Y3Qg
dmNwdSAqdik7DQogdm9pZCBwbXRpbWVyX2RlaW5pdChzdHJ1Y3QgZG9tYWluICpkKTsNCg0KDQpf
X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fXw0KWGVuLWRldmVs
IG1haWxpbmcgbGlzdA0KWGVuLWRldmVsQGxpc3RzLnhlbi5vcmcNCmh0dHA6Ly9saXN0cy54ZW4u
b3JnL3hlbi1kZXZlbA==

------=_001_NextPart564768006800_=----
Content-Type: text/html;
	charset="gb2312"
Content-Transfer-Encoding: quoted-printable

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<HTML><HEAD>
<META http-equiv=3DContent-Type content=3D"text/html; charset=3Dgb2312">
<STYLE>
BLOCKQUOTE {
	MARGIN-TOP: 0px; MARGIN-BOTTOM: 0px; MARGIN-LEFT: 2em
}
OL {
	MARGIN-TOP: 0px; MARGIN-BOTTOM: 0px
}
UL {
	MARGIN-TOP: 0px; MARGIN-BOTTOM: 0px
}
P {
	MARGIN-TOP: 0px; MARGIN-BOTTOM: 0px
}
DIV.FoxDiv20120815221113875709 {
	FONT-SIZE: 10.5pt; MARGIN: 10px; COLOR: #000080; LINE-HEIGHT: 1.5; FONT-F=
AMILY: =CB=CE=CC=E5
}
BODY {
	FONT-SIZE: 10.5pt; COLOR: #000080; LINE-HEIGHT: 1.5; FONT-FAMILY: =CB=CE=
=CC=E5
}
</STYLE>

<META content=3D"MSHTML 6.00.2900.5512" name=3DGENERATOR>
<STYLE>BLOCKQUOTE {
	MARGIN-TOP: 0px
}
OL {
	MARGIN-TOP: 0px
}
UL {
	MARGIN-TOP: 0px
}
</STYLE>
</HEAD>
<BODY style=3D"MARGIN: 10px">
<DIV>Hi, Jan:</DIV>
<DIV>
<DIV class=3DFoxDiv20120815221113875709>
<DIV>I am sorry, but I really don't have much time to test your patch,
and it is not convenient for me to try it.</DIV>
<DIV>The version I have been using is xen4.0.x, and your patch is based
on the latest version, xen4.2.x (I have never compiled the unstable
one), so I merged your patch into my xen4.0.x but still could not find
the two functions below:</DIV>
<DIV>&nbsp;static&nbsp;void&nbsp;rtc_update_timer2(void&nbsp;*opaque) </DI=
V>
<DIV>&nbsp;static&nbsp;void&nbsp;rtc_alarm_cb(void&nbsp;*opaque) </DIV>
<DIV>so I didn't merge the two functions, each of which contains an
rtc_toggle_irq() call.</DIV>
<DIV>&nbsp;</DIV>
<DIV>The results for me were these:</DIV>
<DIV>1 In my real application environment, it worked very well for the
first 5 minutes, much better than before,</DIV>
<DIV>&nbsp;but eventually it lagged again. I don't know whether that is
due to the two missing functions; I lack the ability to figure them
out.</DIV>
<DIV style=3D"TEXT-INDENT: 2em">&nbsp;</DIV>
<DIV>2 When I tested the test program I provided days before, it worked
very well; maybe the program doesn't</DIV>
<DIV>emulate the real environment because it always uses the same
setting rate, so I modified the program as in the attachment.</DIV>
<DIV>If it is convenient for you, please have a look at it.</DIV>
<DIV>--------------------------------------------------------------------<=
/DIV>
<DIV>
<DIV>#include&nbsp;&lt;stdio.h&gt;</DIV>
<DIV>#include&nbsp;&lt;windows.h&gt;</DIV>
<DIV>typedef&nbsp;int&nbsp;(__stdcall&nbsp;*NTSETTIMER)(IN&nbsp;ULONG&nbsp=
;RequestedResolution,&nbsp;IN&nbsp;BOOLEAN&nbsp;Set,&nbsp;OUT&nbsp;PULONG&=
nbsp;ActualResolution&nbsp;);</DIV>
<DIV>typedef&nbsp;int&nbsp;(__stdcall&nbsp;*NTQUERYTIMER)(OUT&nbsp;PULONG&=
nbsp;&nbsp;&nbsp;MinimumResolution,&nbsp;OUT&nbsp;PULONG&nbsp;MaximumResol=
ution,&nbsp;OUT&nbsp;PULONG&nbsp;CurrentResolution&nbsp;);</DIV>
<DIV>&nbsp;</DIV>
<DIV>int&nbsp;main()</DIV>
<DIV>{</DIV>
<DIV>DWORD&nbsp;min_res&nbsp;=3D&nbsp;0,&nbsp;max_res&nbsp;=3D&nbsp;0,&nbs=
p;cur_res&nbsp;=3D&nbsp;0,&nbsp;ret&nbsp;=3D&nbsp;0;</DIV>
<DIV>HMODULE&nbsp;&nbsp;hdll&nbsp;=3D&nbsp;NULL;</DIV>
<DIV>hdll&nbsp;=3D&nbsp;GetModuleHandle("ntdll.dll");</DIV>
<DIV>NTSETTIMER&nbsp;AddrNtSetTimer&nbsp;=3D&nbsp;0;</DIV>
<DIV>NTQUERYTIMER&nbsp;AddrNtQueyTimer&nbsp;=3D&nbsp;0;</DIV>
<DIV></DIV>
<DIV>AddrNtSetTimer&nbsp;=3D&nbsp;(NTSETTIMER)&nbsp;GetProcAddress(hdll,&n=
bsp;"NtSetTimerResolution");</DIV>
<DIV>AddrNtQueyTimer&nbsp;=3D&nbsp;(NTQUERYTIMER)GetProcAddress(hdll,&nbsp=
;"NtQueryTimerResolution");</DIV>
<DIV>&nbsp;</DIV>
<DIV>while&nbsp;(1)</DIV>
<DIV>{</DIV>
<DIV>ret&nbsp;=3D&nbsp;AddrNtQueyTimer(&amp;min_res,&nbsp;&amp;max_res,&nb=
sp;&amp;cur_res);</DIV>
<DIV>printf("min_res&nbsp;=3D&nbsp;%d,&nbsp;max_res&nbsp;=3D&nbsp;%d,&nbsp=
;cur_res&nbsp;=3D&nbsp;%d\n",min_res,&nbsp;max_res,&nbsp;cur_res);</DIV>
<DIV>Sleep(5);</DIV>
<DIV>ret&nbsp;=3D&nbsp;AddrNtSetTimer(10000,&nbsp;1,&nbsp;&amp;cur_res);</=
DIV>
<DIV>Sleep(5);</DIV>
<DIV>ret&nbsp;=3D&nbsp;AddrNtSetTimer(10000,&nbsp;0,&nbsp;&amp;cur_res);</=
DIV>
<DIV>Sleep(5);</DIV>
<DIV>ret&nbsp;=3D&nbsp;AddrNtSetTimer(50000,&nbsp;1,&nbsp;&amp;cur_res);</=
DIV>
<DIV>Sleep(5);</DIV>
<DIV>ret&nbsp;=3D&nbsp;AddrNtSetTimer(50000,&nbsp;0,&nbsp;&amp;cur_res);</=
DIV>
<DIV>Sleep(5);</DIV>
<DIV>ret&nbsp;=3D&nbsp;AddrNtSetTimer(100000,&nbsp;1,&nbsp;&amp;cur_res);<=
/DIV>
<DIV>Sleep(5);</DIV>
<DIV>ret&nbsp;=3D&nbsp;AddrNtSetTimer(100000,&nbsp;0,&nbsp;&amp;cur_res);<=
/DIV>
<DIV>Sleep(5);</DIV>
<DIV>}</DIV>
<DIV>&nbsp;</DIV>
<DIV>return&nbsp;0;</DIV>
<DIV>}</DIV></DIV>
<DIV>--------------------------------------------------------------------<=
/DIV>
<DIV>And I have a request: because our product is based on Xen 4.0.x, if
you have enough time, can you write</DIV>
<DIV>another patch based on <A=20
href=3D"http://xenbits.xen.org/hg/xen-4.0-testing.hg/">http://xenbits.xen.=
org/hg/xen-4.0-testing.hg/</A>&nbsp;for=20
me? Thank you very much!</DIV>
<DIV>&nbsp;</DIV>
<DIV>3 I also wonder: can we add some detection methods in the code to
find the lagging time earlier and adjust the time</DIV>
<DIV>back to the correct value?</DIV>
<DIV>&nbsp;</DIV>
best regards,

------------------------------
tupeng212

Second draft of a patch posted; no test results so far for first draft.
Jan

From: Jan Beulich <JBeulich@suse.com>
Date: 2012-08-14 17:51
To: Yang Z Zhang <yang.z.zhang@intel.com>; Keir Fraser <keir@xen.org>; Tim Deegan <tim@xen.org>
CC: tupeng212 <tupeng212@gmail.com>; xen-devel <xen-devel@lists.xen.org>
Subject: [Xen-devel] [PATCH, RFC v2] x86/HVM: assorted RTC emulation adjustments (was Re: Big Bug:Time in VM goes slower...)
Below/attached a second draft of a patch to fix not only this issue,
but a few more with the RTC emulation.

Keir, Tim, Yang, others - the change to xen/arch/x86/hvm/vpt.c really
looks more like a hack than a solution, but I don't see another way
without much more intrusive changes. The point is that we want the
RTC code to decide whether to generate an interrupt (so that RTC_PF
can become set correctly even without RTC_PIE getting enabled by the
guest).

Additionally I wonder whether alarm_timer_update() shouldn't bail on
non-conforming RTC_*_ALARM values (as those would never match the
values they get compared against, whereas with the current way of
handling this they would appear to match - i.e. set RTC_AF and
possibly generate an interrupt - at some other point in time). I
realize the behavior here may not be precisely specified, but the
specification saying "the current time has matched the alarm time"
means to me a value-by-value comparison, which implies that
non-conforming values would never match (since non-conforming current
time values could get replaced at any time by the hardware due to
overflow detection).

Jan

- don't call rtc_timer_update() on REG_A writes when the value didn't
  change (doing the call always was reported to cause wall clock time
  lagging with the JVM running on Windows)
- don't call rtc_timer_update() on REG_B writes at all
- only call alarm_timer_update() on REG_B writes when relevant bits
  change
- only call check_update_timer() on REG_B writes when SET changes
- instead properly handle AF and PF when the guest is not also setting
  AIE/PIE respectively (for UF this was already the case, only a
  comment was slightly inaccurate)
- raise the RTC IRQ not only when UIE gets set while UF was already
  set, but generalize this to cover AIE and PIE as well
- properly mask off bit 7 when retrieving the hour values in
  alarm_timer_update(), and properly use RTC_HOURS_ALARM's bit 7 when
  converting from 12- to 24-hour value
- also handle the two other possible clock bases
- use RTC_* names in a couple of places where literal numbers were used
  so far

--- a/xen/arch/x86/hvm/rtc.c
+++ b/xen/arch/x86/hvm/rtc.c
@@ -50,11 +50,24 @@ static void rtc_set_time(RTCState *s);
 static inline int from_bcd(RTCState *s, int a);
 static inline int convert_hour(RTCState *s, int hour);
 
-static void rtc_periodic_cb(struct vcpu *v, void *opaque)
+static void rtc_toggle_irq(RTCState *s)
+{
+    struct domain *d = vrtc_domain(s);
+
+    ASSERT(spin_is_locked(&s->lock));
+    s->hw.cmos_data[RTC_REG_C] |= RTC_IRQF;
+    hvm_isa_irq_deassert(d, RTC_IRQ);
+    hvm_isa_irq_assert(d, RTC_IRQ);
+}
+
+void rtc_periodic_interrupt(void *opaque)
 {
     RTCState *s = opaque;
+
     spin_lock(&s->lock);
-    s->hw.cmos_data[RTC_REG_C] |= 0xc0;
+    s->hw.cmos_data[RTC_REG_C] |= RTC_PF;
+    if ( s->hw.cmos_data[RTC_REG_B] & RTC_PIE )
+        rtc_toggle_irq(s);
     spin_unlock(&s->lock);
 }
 
@@ -68,19 +81,25 @@ static void rtc_timer_update(RTCState *s
     ASSERT(spin_is_locked(&s->lock));
 
     period_code = s->hw.cmos_data[RTC_REG_A] & RTC_RATE_SELECT;
-    if ( (period_code != 0) && (s->hw.cmos_data[RTC_REG_B] & RTC_PIE) )
+    switch ( s->hw.cmos_data[RTC_REG_A] & RTC_DIV_CTL )
     {
-        if ( period_code <= 2 )
+    case RTC_REF_CLCK_32KHZ:
+        if ( (period_code != 0) && (period_code <= 2) )
             period_code += 7;
-
-        period = 1 << (period_code - 1); /* period in 32 Khz cycles */
-        period = DIV_ROUND((period * 1000000000ULL), 32768); /* period in ns */
-        create_periodic_time(v, &s->pt, period, period, RTC_IRQ,
-                             rtc_periodic_cb, s);
-    }
-    else
-    {
+        /* fall through */
+    case RTC_REF_CLCK_1MHZ:
+    case RTC_REF_CLCK_4MHZ:
+        if ( period_code != 0 )
+        {
+            period = 1 << (period_code - 1); /* period in 32 Khz cycles */
+            period = DIV_ROUND(period * 1000000000ULL, 32768); /* in ns */
+            create_periodic_time(v, &s->pt, period, period, RTC_IRQ, NULL, s);
+            break;
+        }
+        /* fall through */
+    default:
         destroy_periodic_time(&s->pt);
+        break;
     }
 }
 
@@ -102,7 +121,7 @@ static void check_update_timer(RTCState 
         guest_usec = get_localtime_us(d) % USEC_PER_SEC;
         if (guest_usec >= (USEC_PER_SEC - 244))
         {
-            /* RTC is in update cycle when enabling UIE */
+            /* RTC is in update cycle */
             s->hw.cmos_data[RTC_REG_A] |= RTC_UIP;
             next_update_time = (USEC_PER_SEC - guest_usec) * NS_PER_USEC;
             expire_time = NOW() + next_update_time;
@@ -144,7 +163,6 @@ static void rtc_update_timer(void *opaqu
 static void rtc_update_timer2(void *opaque)
 {
     RTCState *s = opaque;
-    struct domain *d = vrtc_domain(s);
 
     spin_lock(&s->lock);
     if (!(s->hw.cmos_data[RTC_REG_B] & RTC_SET))
@@ -152,11 +170,7 @@ static void rtc_update_timer2(void *opaq
         s->hw.cmos_data[RTC_REG_C] |= RTC_UF;
         s->hw.cmos_data[RTC_REG_A] &= ~RTC_UIP;
         if ((s->hw.cmos_data[RTC_REG_B] & RTC_UIE))
-        {
-            s->hw.cmos_data[RTC_REG_C] |= RTC_IRQF;
-            hvm_isa_irq_deassert(d, RTC_IRQ);
-            hvm_isa_irq_assert(d, RTC_IRQ);
-        }
+            rtc_toggle_irq(s);
         check_update_timer(s);
     }
     spin_unlock(&s->lock);
@@ -175,21 +189,18 @@ static void alarm_timer_update(RTCState 
 
     stop_timer(&s->alarm_timer);
 
-    if ((s->hw.cmos_data[RTC_REG_B] & RTC_AIE) &&
-            !(s->hw.cmos_data[RTC_REG_B] & RTC_SET))
+    if ( !(s->hw.cmos_data[RTC_REG_B] & RTC_SET) )
     {
         s->current_tm = gmtime(get_localtime(d));
         rtc_copy_date(s);
 
         alarm_sec = from_bcd(s, s->hw.cmos_data[RTC_SECONDS_ALARM]);
         alarm_min = from_bcd(s, s->hw.cmos_data[RTC_MINUTES_ALARM]);
-        alarm_hour = from_bcd(s, s->hw.cmos_data[RTC_HOURS_ALARM]);
-        alarm_hour = convert_hour(s, alarm_hour);
+        alarm_hour = convert_hour(s, s->hw.cmos_data[RTC_HOURS_ALARM]);
 
         cur_sec = from_bcd(s, s->hw.cmos_data[RTC_SECONDS]);
         cur_min = from_bcd(s, s->hw.cmos_data[RTC_MINUTES]);
-        cur_hour = from_bcd(s, s->hw.cmos_data[RTC_HOURS]);
-        cur_hour = convert_hour(s, cur_hour);
+        cur_hour = convert_hour(s, s->hw.cmos_data[RTC_HOURS]);
 
         next_update_time = USEC_PER_SEC - (get_localtime_us(d) % USEC_PER_SEC);
         next_update_time = next_update_time * NS_PER_USEC + NOW();
@@ -343,7 +354,6 @@ static void alarm_timer_update(RTCState 
 static void rtc_alarm_cb(void *opaque)
 {
     RTCState *s = opaque;
-    struct domain *d = vrtc_domain(s);
 
     spin_lock(&s->lock);
     if (!(s->hw.cmos_data[RTC_REG_B] & RTC_SET))
@@ -351,11 +361,7 @@ static void rtc_alarm_cb(void *opaque)
         s->hw.cmos_data[RTC_REG_C] |= RTC_AF;
         /* alarm interrupt */
         if (s->hw.cmos_data[RTC_REG_B] & RTC_AIE)
-        {
-            s->hw.cmos_data[RTC_REG_C] |= RTC_IRQF;
-            hvm_isa_irq_deassert(d, RTC_IRQ);
-            hvm_isa_irq_assert(d, RTC_IRQ);
-        }
+            rtc_toggle_irq(s);
         alarm_timer_update(s);
     }
     spin_unlock(&s->lock);
@@ -365,6 +371,7 @@ static int rtc_ioport_write(void *opaque
 {
     RTCState *s = opaque;
     struct domain *d = vrtc_domain(s);
+    uint32_t orig, mask;
 
     spin_lock(&s->lock);
 
@@ -382,6 +389,7 @@ static int rtc_ioport_write(void *opaque
         return 0;
     }
 
+    orig = s->hw.cmos_data[s->hw.cmos_index];
     switch ( s->hw.cmos_index )
     {
     case RTC_SECONDS_ALARM:
@@ -405,9 +413,9 @@ static int rtc_ioport_write(void *opaque
         break;
     case RTC_REG_A:
         /* UIP bit is read only */
-        s->hw.cmos_data[RTC_REG_A] = (data & ~RTC_UIP) |
-            (s->hw.cmos_data[RTC_REG_A] & RTC_UIP);
-        rtc_timer_update(s);
+        s->hw.cmos_data[RTC_REG_A] = (data & ~RTC_UIP) | (orig & RTC_UIP);
+        if ( (data ^ orig) & (RTC_RATE_SELECT | RTC_DIV_CTL) )
+            rtc_timer_update(s);
         break;
     case RTC_REG_B:
         if ( data & RTC_SET )
@@ -415,7 +423,7 @@ static int rtc_ioport_write(void *opaque
             /* set mode: reset UIP mode */
             s->hw.cmos_data[RTC_REG_A] &= ~RTC_UIP;
             /* adjust cmos before stopping */
-            if (!(s->hw.cmos_data[RTC_REG_B] & RTC_SET))
+            if (!(orig & RTC_SET))
             {
                 s->current_tm = gmtime(get_localtime(d));
                 rtc_copy_date(s);
@@ -424,21 +432,26 @@ static int rtc_ioport_write(void *opaque
         else
         {
             /* if disabling set mode, update the time */
-            if ( s->hw.cmos_data[RTC_REG_B] & RTC_SET )
+            if ( orig & RTC_SET )
                 rtc_set_time(s);
         }
-        /* if the interrupt is already set when the interrupt become
-         * enabled, raise an interrupt immediately*/
-        if ((data & RTC_UIE) && !(s->hw.cmos_data[RTC_REG_B] & RTC_UIE))
-            if (s->hw.cmos_data[RTC_REG_C] & RTC_UF)
+        /*
+         * If the interrupt is already set when the interrupt becomes
+         * enabled, raise an interrupt immediately.
+         * NB: RTC_{A,P,U}IE == RTC_{A,P,U}F respectively.
+         */
+        for ( mask = RTC_UIE; mask <= RTC_PIE; mask <<= 1 )
+            if ( (data & mask) && !(orig & mask) &&
+                 (s->hw.cmos_data[RTC_REG_C] & mask) )
            {
-                hvm_isa_irq_deassert(d, RTC_IRQ);
-                hvm_isa_irq_assert(d, RTC_IRQ);
+                rtc_toggle_irq(s);
+                break;
            }
         s->hw.cmos_data[RTC_REG_B] = data;
-        rtc_timer_update(s);
-        check_update_timer(s);
-        alarm_timer_update(s);
+        if ( (data ^ orig) & RTC_SET )
+            check_update_timer(s);
+        if ( (data ^ orig) & (RTC_24H | RTC_DM_BINARY | RTC_SET) )
+            alarm_timer_update(s);
         break;
     case RTC_REG_C:
     case RTC_REG_D:
@@ -453,7 +466,7 @@ static int rtc_ioport_write(void *opaque
 
 static inline int to_bcd(RTCState *s, int a)
 {
-    if ( s->hw.cmos_data[RTC_REG_B] & 0x04 )
+    if ( s->hw.cmos_data[RTC_REG_B] & RTC_DM_BINARY )
         return a;
     else
         return ((a / 10) << 4) | (a % 10);
@@ -461,7 +474,7 @@ static inline int to_bcd(RTCState *s, in
 
 static inline int from_bcd(RTCState *s, int a)
 {
-    if ( s->hw.cmos_data[RTC_REG_B] & 0x04 )
+    if ( s->hw.cmos_data[RTC_REG_B] & RTC_DM_BINARY )
         return a;
     else
         return ((a >> 4) * 10) + (a & 0x0f);
@@ -469,12 +482,14 @@ static inline int from_bcd(RTCState *s, 
 
 /* Hours in 12 hour mode are in 1-12 range, not 0-11.
  * So we need convert it before using it*/
-static inline int convert_hour(RTCState *s, int hour)
+static inline int convert_hour(RTCState *s, int raw)
 {
+    int hour = from_bcd(s, raw & 0x7f);
+
     if (!(s->hw.cmos_data[RTC_REG_B] & RTC_24H))
     {
         hour %= 12;
-        if (s->hw.cmos_data[RTC_HOURS] & 0x80)
+        if (raw & 0x80)
             hour += 12;
     }
     return hour;
@@ -493,8 +508,7 @@ static void rtc_set_time(RTCState *s)
     
     tm->tm_sec = from_bcd(s, s->hw.cmos_data[RTC_SECONDS]);
     tm->tm_min = from_bcd(s, s->hw.cmos_data[RTC_MINUTES]);
-    tm->tm_hour = from_bcd(s, s->hw.cmos_data[RTC_HOURS] & 0x7f);
-    tm->tm_hour = convert_hour(s, tm->tm_hour);
+    tm->tm_hour = convert_hour(s, s->hw.cmos_data[RTC_HOURS]);
     tm->tm_wday = from_bcd(s, s->hw.cmos_data[RTC_DAY_OF_WEEK]);
     tm->tm_mday = from_bcd(s, s->hw.cmos_data[RTC_DAY_OF_MONTH]);
     tm->tm_mon = from_bcd(s, s->hw.cmos_data[RTC_MONTH]) - 1;
--- a/xen/arch/x86/hvm/vpt.c
+++ b/xen/arch/x86/hvm/vpt.c
@@ -22,6 +22,7 @@
 #include <asm/hvm/vpt.h>
 #include <asm/event.h>
 #include <asm/apic.h>
+#include <asm/mc146818rtc.h>
 
 #define mode_is(d, name) \
     ((d)->arch.hvm_domain.params[HVM_PARAM_TIMER_MODE] == HVMPTM_##name)
@@ -218,6 +219,7 @@ void pt_update_irq(struct vcpu *v)
     struct periodic_time *pt, *temp, *earliest_pt = NULL;
     uint64_t max_lag = -1ULL;
     int irq, is_lapic;
+    void *pt_priv;
 
     spin_lock(&v->arch.hvm_vcpu.tm_lock);
 
@@ -251,13 +253,14 @@ void pt_update_irq(struct vcpu *v)
     earliest_pt->irq_issued = 1;
     irq = earliest_pt->irq;
     is_lapic = (earliest_pt->source == PTSRC_lapic);
+    pt_priv = earliest_pt->priv;
 
     spin_unlock(&v->arch.hvm_vcpu.tm_lock);
 
     if ( is_lapic )
-    {
         vlapic_set_irq(vcpu_vlapic(v), irq, 0);
-    }
+    else if ( irq == RTC_IRQ )
+        rtc_periodic_interrupt(pt_priv);
     else
     {
         hvm_isa_irq_deassert(v->domain, irq);
<DIV>---&nbsp;a/xen/include/asm-x86/hvm/vpt.h</DIV>
<DIV>+++&nbsp;b/xen/include/asm-x86/hvm/vpt.h</DIV>
<DIV>@@&nbsp;-181,6&nbsp;+181,7&nbsp;@@&nbsp;void&nbsp;rtc_migrate_timers(=
struct&nbsp;vcpu&nbsp;*v);</DIV>
<DIV>&nbsp;void&nbsp;rtc_deinit(struct&nbsp;domain&nbsp;*d);</DIV>
<DIV>&nbsp;void&nbsp;rtc_reset(struct&nbsp;domain&nbsp;*d);</DIV>
<DIV>&nbsp;void&nbsp;rtc_update_clock(struct&nbsp;domain&nbsp;*d);</DIV>
<DIV>+void&nbsp;rtc_periodic_interrupt(void&nbsp;*);</DIV>
<DIV>&nbsp;</DIV>
<DIV>&nbsp;void&nbsp;pmtimer_init(struct&nbsp;vcpu&nbsp;*v);</DIV>
<DIV>&nbsp;void&nbsp;pmtimer_deinit(struct&nbsp;domain&nbsp;*d);</DIV>
<DIV>&nbsp;</DIV>
<DIV>&nbsp;</DIV>
<DIV>_______________________________________________</DIV>
<DIV>Xen-devel&nbsp;mailing&nbsp;list</DIV>
<DIV>Xen-devel@lists.xen.org</DIV>
<DIV>http://lists.xen.org/xen-devel</DIV></DIV></DIV></DIV></BODY></HTML>

------=_001_NextPart564768006800_=------



--===============5636090078893645621==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5636090078893645621==--



From xen-devel-bounces@lists.xen.org Wed Aug 15 14:29:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 14:29:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1eae-0007Dr-Oc; Wed, 15 Aug 2012 14:28:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yongjie.ren@intel.com>) id 1T1eaa-0007Dh-Ka
	for xen-devel@lists.xen.org; Wed, 15 Aug 2012 14:28:55 +0000
Received: from [85.158.143.99:4289] by server-3.bemta-4.messagelabs.com id
	C2/DF-09529-022BB205; Wed, 15 Aug 2012 14:28:48 +0000
X-Env-Sender: yongjie.ren@intel.com
X-Msg-Ref: server-9.tower-216.messagelabs.com!1345040927!28351674!1
X-Originating-IP: [143.182.124.21]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMjEgPT4gMzE0NjE1\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28053 invoked from network); 15 Aug 2012 14:28:48 -0000
Received: from mga03.intel.com (HELO mga03.intel.com) (143.182.124.21)
	by server-9.tower-216.messagelabs.com with SMTP;
	15 Aug 2012 14:28:48 -0000
Received: from azsmga002.ch.intel.com ([10.2.17.35])
	by azsmga101.ch.intel.com with ESMTP; 15 Aug 2012 07:28:46 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.77,773,1336374000"; d="scan'208";a="134586483"
Received: from fmsmsx104.amr.corp.intel.com ([10.19.9.35])
	by AZSMGA002.ch.intel.com with ESMTP; 15 Aug 2012 07:28:46 -0700
Received: from fmsmsx102.amr.corp.intel.com (10.19.9.53) by
	FMSMSX104.amr.corp.intel.com (10.19.9.35) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Wed, 15 Aug 2012 07:28:46 -0700
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	FMSMSX102.amr.corp.intel.com (10.19.9.53) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Wed, 15 Aug 2012 07:28:45 -0700
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.82]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.232]) with mapi id
	14.01.0355.002; Wed, 15 Aug 2012 22:28:44 +0800
From: "Ren, Yongjie" <yongjie.ren@intel.com>
To: 'Jan Beulich' <JBeulich@suse.com>, 'Ian Jackson'
	<Ian.Jackson@eu.citrix.com>, 'Ben Guthro' <ben@guthro.net>, "Wu, GabrielX"
	<gabrielx.wu@intel.com>
Thread-Topic: [Xen-devel] 4.2 TODO / Release Plan
Thread-Index: AQHNee/LQWENY/xaNkWdPia9JLgun5dZArpggAHcTwA=
Date: Wed, 15 Aug 2012 14:28:43 +0000
Message-ID: <1B4B44D9196EFF41AE41FDA404FC0A101470D1@SHSMSX101.ccr.corp.intel.com>
References: <1344243491.11339.11.camel@zakaz.uk.xensource.com>
	<5028C34C02000078000945FC@nat28.tlf.novell.com>
	<CAOvdn6Vezt5gtsykUPxUdf_0-ubqjAJ5ygnV9hPiON1Geqq9Ag@mail.gmail.com>
	<502A1C7B0200007800094A8F@nat28.tlf.novell.com>
	<1B4B44D9196EFF41AE41FDA404FC0A10142A81@SHSMSX101.ccr.corp.intel.com>
In-Reply-To: <1B4B44D9196EFF41AE41FDA404FC0A10142A81@SHSMSX101.ccr.corp.intel.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: 'Keir Fraser' <keir@xen.org>, 'Ian Campbell' <Ian.Campbell@citrix.com>,
	'xen-devel' <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] 4.2 TODO / Release Plan
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Ren, Yongjie
> Sent: Tuesday, August 14, 2012 5:03 PM
> To: Jan Beulich; Ian Jackson; Ben Guthro; Wu, GabrielX
> Cc: Ian Campbell; xen-devel; Keir Fraser
> Subject: RE: [Xen-devel] 4.2 TODO / Release Plan
> > >>> On 13.08.12 at 19:26, Ben Guthro <ben@guthro.net> wrote:
> > > Is this exclusive to my setup? Does S3 work elsewhere with 4.2?
> > > It seems to fail 100% of the time on 100% of x86 machines I have tried.
> >
> > Don't know. The systems I have tried S3 on have problems
> > even with native Linux, so there's not much point playing
> > with Xen on them.
> >
> > Ian(J), I don't suppose this is part of the regression tests?
> >
> > Gabriel, Yongjie - ISTR this is part of your regular VMX testing,
> > and the most recent report doesn't mention any problem.
> >
> I'll double check the S3 issue, and give update later.
>
I can also reproduce the S3 issue on my hardware.
It can sleep to memory (S3) but can't resume after pressing the power button.
It's not a recent regression; it has existed for a long time.
We filed a bug for Dom0 S3 more than 1.5 years ago:
http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1707
(There are some comments in this bug.)
There are two reasons why we didn't list this bug in our recent report:
1. Our report is based on our automated testing, but Dom0 S3 is not included.
   It's difficult to automate such a case.
2. We simply missed the bug in our recent reports. Sorry.
   We'll add it to the old-issues list for tracking.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 15 14:50:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 14:50:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1eur-0007mz-Fl; Wed, 15 Aug 2012 14:49:49 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T1eup-0007mr-Hy
	for xen-devel@lists.xen.org; Wed, 15 Aug 2012 14:49:47 +0000
Received: from [85.158.139.83:37340] by server-10.bemta-5.messagelabs.com id
	18/4A-13125-A07BB205; Wed, 15 Aug 2012 14:49:46 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-15.tower-182.messagelabs.com!1345042186!28203937!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21904 invoked from network); 15 Aug 2012 14:49:46 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-15.tower-182.messagelabs.com with SMTP;
	15 Aug 2012 14:49:46 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 15 Aug 2012 15:49:45 +0100
Message-Id: <502BD3520200007800095291@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Wed, 15 Aug 2012 15:50:26 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ben Guthro" <ben@guthro.net>
References: <CAOvdn6WwgBD-2yf1uu3m9-16=Tb5KUrybXf2KibtnxKbP7YtwA@mail.gmail.com>
	<CAOvdn6UBW-yTOMd3YSf5M9JiHLRivyaKL-qzcKG8epszqaKmGg@mail.gmail.com>
	<20120807163320.GC15053@phenom.dumpdata.com>
	<CAOvdn6XcRGKtJxGwJO8J+ZBJr89wfTLc1woW5rhtMM540H9h1w@mail.gmail.com>
	<CAOvdn6XUoSeGoo7X=AJ1kpDxZB9HZEv+TJ4v-=FHcyR4=CHK-w@mail.gmail.com>
	<502240F302000078000937A6@nat28.tlf.novell.com>
	<CAOvdn6WN3kxC8Bb9g6MpfLULu47EGSGCtajPc-SFeJ-8VSUQDQ@mail.gmail.com>
	<CAOvdn6Xk1inm1hU55i0phU2qvSWyJLNoyDdn43biBAY2OewPtw@mail.gmail.com>
	<5023F5400200007800093F4E@nat28.tlf.novell.com>
	<CAOvdn6U-f3C4U8xVPvDyRFkFFLkbfiCXg_n+9zgA9V5u+q5jfw@mail.gmail.com>
	<5023F8B80200007800093F8C@nat28.tlf.novell.com>
	<CAOvdn6XKHfete_+=Hkvt20G7_cJ-4i8+9j9YD30OxzQ16Ru-+g@mail.gmail.com>
	<5024CB720200007800094124@nat28.tlf.novell.com>
	<CAOvdn6VVJdhWVYa-gNVt3euE4AbGi1TijUCUd1wGzv0SCWEUYA@mail.gmail.com>
	<CAOvdn6USnG9Y4MNBhrDZmob0oJ8BgFfKTvFEhcKbXDxNmzfZ-Q@mail.gmail.com>
	<502B75BA0200007800094FD4@nat28.tlf.novell.com>
	<CAOvdn6Ww5oEj6Xg=XT4dWCYWUBuOdPqLz-CqNjz4HmCScE+==g@mail.gmail.com>
	<CAOvdn6Wsjds_dmZ0dwDTYmD-o-OqjBH5-b9mitAuUADE+A4_ag@mail.gmail.com>
	<502BB8FD02000078000951E0@nat28.tlf.novell.com>
	<CAOvdn6VD6bp13F-tvYdkBsj4hkhE7PTGZrapndtdHrDKeZqccg@mail.gmail.com>
In-Reply-To: <CAOvdn6VD6bp13F-tvYdkBsj4hkhE7PTGZrapndtdHrDKeZqccg@mail.gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, john.baboval@citrix.com,
	Thomas Goetz <thomas.goetz@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen4.2 S3 regression?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 15.08.12 at 15:11, Ben Guthro <ben@guthro.net> wrote:
> On Wed, Aug 15, 2012 at 8:58 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>>>> On 15.08.12 at 14:32, Ben Guthro <ben@guthro.net> wrote:
>>> This was encouraging, so I tried the same change against the tree
>>> tip...unfortunately that didn't go as well.
>>>
>>> tip + evtchn_move_pirqs() removal:
>>> did not resume from the first suspend.
>>
>> Any logs of this (i.e. indications of what's going wrong - still
>> the same AHCI not working, but else apparently fine)?
> 
> This is a bit strange, in that the observed behavior changes when I am
> logging to the serial connection.
> 
> When I am logging to serial, the failure is the same as before -
> The first suspend / resume works -
> The second fails with AHCI not working
> 
> However, when I am not logging to serial - the system goes down, but
> never comes back up. I cannot ssh in, and no graphics are displayed on
> the screen. My only recourse is to hard power cycle the machine.
> 
> This, of course makes collecting logs difficult.

Indeed. Did you try using the serial driver in polling mode
(without IRQ that is)?

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 15 14:58:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 14:58:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1f2w-0007yZ-DL; Wed, 15 Aug 2012 14:58:10 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ben.guthro@gmail.com>) id 1T1f2u-0007yR-Th
	for xen-devel@lists.xen.org; Wed, 15 Aug 2012 14:58:09 +0000
Received: from [85.158.139.83:35626] by server-2.bemta-5.messagelabs.com id
	BC/66-10142-FF8BB205; Wed, 15 Aug 2012 14:58:07 +0000
X-Env-Sender: ben.guthro@gmail.com
X-Msg-Ref: server-3.tower-182.messagelabs.com!1345042685!28280776!1
X-Originating-IP: [209.85.210.173]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_50_60,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22565 invoked from network); 15 Aug 2012 14:58:07 -0000
Received: from mail-iy0-f173.google.com (HELO mail-iy0-f173.google.com)
	(209.85.210.173)
	by server-3.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Aug 2012 14:58:07 -0000
Received: by iakx26 with SMTP id x26so202355iak.32
	for <xen-devel@lists.xen.org>; Wed, 15 Aug 2012 07:58:05 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=6zki0UGlhML4btC3gVjgPNfoYNJ3+jKtmwDpGM5vXUA=;
	b=dvR5cmyoN+MJ3Ch+X0wB5huHPfaIdgLAjPoDQV6sgpXx7KweFc63gdSCf2O3iBwNPF
	CPEErdpSrRxltJjLKpFviYxw4MdBc6iS/DNcizJ+3LZSdEK3tCsp1Z/qVDpni3clIxfd
	Cd5SidjL5cMPABX5lS4uSBkIzWbKvBx0pOKR2kQt0S/LM7UvYtLtcSriMRFi0KiKdFL/
	J16n50Oj2sPPbu8EF8aeEZSfmsvp4q0cjJmKq3Wm5ZKxxMH50EY1vhmaKV5SnD/BwoZO
	z8spcpk/vhppY7m2wxPEX6Ts04W8FloIX5qlc+k0Bp/5r20UxJ6v6awckjioRLNycY7o
	MYGA==
MIME-Version: 1.0
Received: by 10.42.64.9 with SMTP id e9mr15179014ici.20.1345042685486; Wed, 15
	Aug 2012 07:58:05 -0700 (PDT)
Received: by 10.64.33.15 with HTTP; Wed, 15 Aug 2012 07:58:05 -0700 (PDT)
In-Reply-To: <502BD3520200007800095291@nat28.tlf.novell.com>
References: <CAOvdn6WwgBD-2yf1uu3m9-16=Tb5KUrybXf2KibtnxKbP7YtwA@mail.gmail.com>
	<CAOvdn6UBW-yTOMd3YSf5M9JiHLRivyaKL-qzcKG8epszqaKmGg@mail.gmail.com>
	<20120807163320.GC15053@phenom.dumpdata.com>
	<CAOvdn6XcRGKtJxGwJO8J+ZBJr89wfTLc1woW5rhtMM540H9h1w@mail.gmail.com>
	<CAOvdn6XUoSeGoo7X=AJ1kpDxZB9HZEv+TJ4v-=FHcyR4=CHK-w@mail.gmail.com>
	<502240F302000078000937A6@nat28.tlf.novell.com>
	<CAOvdn6WN3kxC8Bb9g6MpfLULu47EGSGCtajPc-SFeJ-8VSUQDQ@mail.gmail.com>
	<CAOvdn6Xk1inm1hU55i0phU2qvSWyJLNoyDdn43biBAY2OewPtw@mail.gmail.com>
	<5023F5400200007800093F4E@nat28.tlf.novell.com>
	<CAOvdn6U-f3C4U8xVPvDyRFkFFLkbfiCXg_n+9zgA9V5u+q5jfw@mail.gmail.com>
	<5023F8B80200007800093F8C@nat28.tlf.novell.com>
	<CAOvdn6XKHfete_+=Hkvt20G7_cJ-4i8+9j9YD30OxzQ16Ru-+g@mail.gmail.com>
	<5024CB720200007800094124@nat28.tlf.novell.com>
	<CAOvdn6VVJdhWVYa-gNVt3euE4AbGi1TijUCUd1wGzv0SCWEUYA@mail.gmail.com>
	<CAOvdn6USnG9Y4MNBhrDZmob0oJ8BgFfKTvFEhcKbXDxNmzfZ-Q@mail.gmail.com>
	<502B75BA0200007800094FD4@nat28.tlf.novell.com>
	<CAOvdn6Ww5oEj6Xg=XT4dWCYWUBuOdPqLz-CqNjz4HmCScE+==g@mail.gmail.com>
	<CAOvdn6Wsjds_dmZ0dwDTYmD-o-OqjBH5-b9mitAuUADE+A4_ag@mail.gmail.com>
	<502BB8FD02000078000951E0@nat28.tlf.novell.com>
	<CAOvdn6VD6bp13F-tvYdkBsj4hkhE7PTGZrapndtdHrDKeZqccg@mail.gmail.com>
	<502BD3520200007800095291@nat28.tlf.novell.com>
Date: Wed, 15 Aug 2012 10:58:05 -0400
X-Google-Sender-Auth: qphkvmaTzfuXKcHgl6Bn5kMYTMc
Message-ID: <CAOvdn6WgDSOj9siOGLY6hmofgw3Ht6nOd8fwT2zBQsyXm+KivA@mail.gmail.com>
From: Ben Guthro <ben@guthro.net>
To: Jan Beulich <JBeulich@suse.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, john.baboval@citrix.com,
	Thomas Goetz <thomas.goetz@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen4.2 S3 regression?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7062139833991625020=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7062139833991625020==
Content-Type: multipart/alternative; boundary=90e6ba3fd35b79e77d04c74f2724

--90e6ba3fd35b79e77d04c74f2724
Content-Type: text/plain; charset=ISO-8859-1

On Wed, Aug 15, 2012 at 10:50 AM, Jan Beulich <JBeulich@suse.com> wrote:

>
> > This, of course makes collecting logs difficult.
>
> Indeed. Did you try using the serial driver in polling mode
> (without IRQ that is)?
>
>
I'm not familiar with how to set this up, and a quick glance through
xen/drivers/char/ns16550.c didn't really shed much light.

Is there a README / wiki page, etc describing this?
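Polling mode is normally selected on the hypervisor command line rather than in the driver source: Xen's ns16550 `com1=` option takes an IRQ field, and an explicit IRQ of 0 makes the driver poll the UART from a timer instead of relying on interrupts. A sketch of such a boot entry (the 0x3f8 io-base is an assumption for a standard first serial port, and the exact grub stanza will vary per system):

```shell
# grub.cfg fragment: com1=<baud>,<data/parity/stop>,<io-base>,<irq>
# An IRQ of 0 forces the ns16550 driver into polling mode.
multiboot /boot/xen.gz com1=115200,8n1,0x3f8,0 console=com1 sync_console
```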

--90e6ba3fd35b79e77d04c74f2724--


--===============7062139833991625020==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7062139833991625020==--


	<502B75BA0200007800094FD4@nat28.tlf.novell.com>
	<CAOvdn6Ww5oEj6Xg=XT4dWCYWUBuOdPqLz-CqNjz4HmCScE+==g@mail.gmail.com>
	<CAOvdn6Wsjds_dmZ0dwDTYmD-o-OqjBH5-b9mitAuUADE+A4_ag@mail.gmail.com>
	<502BB8FD02000078000951E0@nat28.tlf.novell.com>
	<CAOvdn6VD6bp13F-tvYdkBsj4hkhE7PTGZrapndtdHrDKeZqccg@mail.gmail.com>
	<502BD3520200007800095291@nat28.tlf.novell.com>
Date: Wed, 15 Aug 2012 10:58:05 -0400
X-Google-Sender-Auth: qphkvmaTzfuXKcHgl6Bn5kMYTMc
Message-ID: <CAOvdn6WgDSOj9siOGLY6hmofgw3Ht6nOd8fwT2zBQsyXm+KivA@mail.gmail.com>
From: Ben Guthro <ben@guthro.net>
To: Jan Beulich <JBeulich@suse.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, john.baboval@citrix.com,
	Thomas Goetz <thomas.goetz@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen4.2 S3 regression?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7062139833991625020=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7062139833991625020==
Content-Type: multipart/alternative; boundary=90e6ba3fd35b79e77d04c74f2724

--90e6ba3fd35b79e77d04c74f2724
Content-Type: text/plain; charset=ISO-8859-1

On Wed, Aug 15, 2012 at 10:50 AM, Jan Beulich <JBeulich@suse.com> wrote:

>
> > This, of course makes collecting logs difficult.
>
> Indeed. Did you try using the serial driver in polling mode
> (without IRQ that is)?
>
>
I'm not familiar with how to set this up, and a quick glance through
xen/drivers/char/ns16550.c didn't really shed much light.

Is there a README / wiki page, etc describing this?

--90e6ba3fd35b79e77d04c74f2724
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

<br><br><div class=3D"gmail_quote">On Wed, Aug 15, 2012 at 10:50 AM, Jan Be=
ulich <span dir=3D"ltr">&lt;<a href=3D"mailto:JBeulich@suse.com" target=3D"=
_blank">JBeulich@suse.com</a>&gt;</span> wrote:<br><blockquote class=3D"gma=
il_quote" style=3D"margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-lef=
t:1ex">
<div class=3D"HOEnZb"><div class=3D"h5"><br>
&gt; This, of course makes collecting logs difficult.<br>
<br>
</div></div>Indeed. Did you try using the serial driver in polling mode<br>
(without IRQ that is)?<br>
<span class=3D"HOEnZb"><font color=3D"#888888"><br></font></span></blockquo=
te><div><br></div><div>I&#39;m not familiar with how to set this up, and a =
quick glance through xen/drivers/char/ns16550.c didn&#39;t really shed muc=
h light.</div>
<div><br></div><div>Is there a README / wiki page, etc describing this?</di=
v></div>

--90e6ba3fd35b79e77d04c74f2724--


--===============7062139833991625020==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7062139833991625020==--


From xen-devel-bounces@lists.xen.org Wed Aug 15 15:00:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 15:00:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1f4v-00085h-UE; Wed, 15 Aug 2012 15:00:13 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T1f4u-00085a-Sr
	for xen-devel@lists.xen.org; Wed, 15 Aug 2012 15:00:13 +0000
Received: from [85.158.138.51:47300] by server-12.bemta-3.messagelabs.com id
	DA/67-04073-C79BB205; Wed, 15 Aug 2012 15:00:12 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-13.tower-174.messagelabs.com!1345042811!9749670!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.3 required=7.0 tests=MAILTO_TO_SPAM_ADDR
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15772 invoked from network); 15 Aug 2012 15:00:11 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-13.tower-174.messagelabs.com with SMTP;
	15 Aug 2012 15:00:11 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 15 Aug 2012 16:00:11 +0100
Message-Id: <502BD5C302000078000952AD@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Wed, 15 Aug 2012 16:00:51 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "tupeng212" <tupeng212@gmail.com>
References: <502A3BBC0200007800094B68@nat28.tlf.novell.com>
	<2012081522045495397713@gmail.com>
In-Reply-To: <2012081522045495397713@gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Yang Z Zhang <yang.z.zhang@intel.com>, Tim Deegan <tim@xen.org>,
	Keir Fraser <keir@xen.org>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH,
 RFC v2] x86/HVM: assorted RTC emulation adjustments (was Re: Big
 Bug:Time in VM goes slower...)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 15.08.12 at 16:07, tupeng212 <tupeng212@gmail.com> wrote:
> Hi, Jan:
> I am sorry I really don't have much time to try a test of your patch, and it 
> is not convenient
> for me to have a try. For the version I have been using is xen4.0.x, and 
> your patch is based on 
> the latest version xen4.2.x.(I have never compiled the unstable one), so I 
> merged your patch to my 
> xen4.0.x, still couldn't find the two functions below:
>  static void rtc_update_timer2(void *opaque) 
>  static void rtc_alarm_cb(void *opaque) 
> so I didn't merge the two functions which contains a rtc_toggle_irq() .

Which looks quite right for the older tree.

> The results for me were these:
> 1 In my real application environment, it worked very well in the former 
> 5mins, much better than before,
>  but at last it lagged again. I don't know whether it belongs to the two 
> missed functions. I lack the 
>  ability to figure them out.

Unlikely.

> 2 When I tested my test program which I provided days before, it worked very 
> well, maybe the program doesn't 
> emulate the real environment due to the same setting rate, so I modified 
> this program as which in the attachment.

Okay, so we're at least moving in the right direction.

> if you are more convenient, you can help me to have a look of it.
> And I have a opinion, because our product is based on Version Xen4.0.x, if 
> you have enough time, can you write 
> another patch based http://xenbits.xen.org/hg/xen-4.0-testing.hg/ for me, 
> thank you very much!

You seem to have done the right thing, so I don't see an
immediate need.

> 3 I also have a thought that can we have some detecting methods to find the 
> lagging time earlier to adjust time
> back to normal value in the code?

For that we'd first need to understand what the reason for the
remaining mis-behavior is.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 15 15:01:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 15:01:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1f5f-00089M-CD; Wed, 15 Aug 2012 15:00:59 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1T1f5d-000899-T7
	for xen-devel@lists.xen.org; Wed, 15 Aug 2012 15:00:58 +0000
Received: from [85.158.143.99:21484] by server-2.bemta-4.messagelabs.com id
	FA/A8-31966-9A9BB205; Wed, 15 Aug 2012 15:00:57 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-11.tower-216.messagelabs.com!1345042854!20804214!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzE1NDQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 886 invoked from network); 15 Aug 2012 15:00:56 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Aug 2012 15:00:56 -0000
X-IronPort-AV: E=Sophos;i="4.77,773,1336363200"; 
	d="scan'208,217";a="205260190"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	15 Aug 2012 11:00:52 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 15 Aug 2012 11:00:51 -0400
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1T1f5X-0001es-8Y;
	Wed, 15 Aug 2012 16:00:51 +0100
Message-ID: <502BB9A3.5050908@citrix.com>
Date: Wed, 15 Aug 2012 16:00:51 +0100
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120714 Thunderbird/14.0
MIME-Version: 1.0
To: Ben Guthro <ben@guthro.net>
References: <CAOvdn6WwgBD-2yf1uu3m9-16=Tb5KUrybXf2KibtnxKbP7YtwA@mail.gmail.com>
	<CAOvdn6Xk1inm1hU55i0phU2qvSWyJLNoyDdn43biBAY2OewPtw@mail.gmail.com>
	<5023F5400200007800093F4E@nat28.tlf.novell.com>
	<CAOvdn6U-f3C4U8xVPvDyRFkFFLkbfiCXg_n+9zgA9V5u+q5jfw@mail.gmail.com>
	<5023F8B80200007800093F8C@nat28.tlf.novell.com>
	<CAOvdn6XKHfete_+=Hkvt20G7_cJ-4i8+9j9YD30OxzQ16Ru-+g@mail.gmail.com>
	<5024CB720200007800094124@nat28.tlf.novell.com>
	<CAOvdn6VVJdhWVYa-gNVt3euE4AbGi1TijUCUd1wGzv0SCWEUYA@mail.gmail.com>
	<CAOvdn6USnG9Y4MNBhrDZmob0oJ8BgFfKTvFEhcKbXDxNmzfZ-Q@mail.gmail.com>
	<502B75BA0200007800094FD4@nat28.tlf.novell.com>
	<CAOvdn6Ww5oEj6Xg=XT4dWCYWUBuOdPqLz-CqNjz4HmCScE+==g@mail.gmail.com>
	<CAOvdn6Wsjds_dmZ0dwDTYmD-o-OqjBH5-b9mitAuUADE+A4_ag@mail.gmail.com>
	<502BB8FD02000078000951E0@nat28.tlf.novell.com>
	<CAOvdn6VD6bp13F-tvYdkBsj4hkhE7PTGZrapndtdHrDKeZqccg@mail.gmail.com>
	<502BD3520200007800095291@nat28.tlf.novell.com>
	<CAOvdn6WgDSOj9siOGLY6hmofgw3Ht6nOd8fwT2zBQsyXm+KivA@mail.gmail.com>
In-Reply-To: <CAOvdn6WgDSOj9siOGLY6hmofgw3Ht6nOd8fwT2zBQsyXm+KivA@mail.gmail.com>
X-Enigmail-Version: 1.4.3
Cc: Thomas Goetz <thomas.goetz@citrix.com>, xen-devel <xen-devel@lists.xen.org>,
	John Baboval <john.baboval@citrix.com>, Jan Beulich <JBeulich@suse.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] Xen4.2 S3 regression?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6731689140469103217=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============6731689140469103217==
Content-Type: multipart/alternative;
	boundary="------------040100060804030608080506"

--------------040100060804030608080506
Content-Type: text/plain; charset="ISO-8859-1"
Content-Transfer-Encoding: 7bit


On 15/08/12 15:58, Ben Guthro wrote:
>
>
> On Wed, Aug 15, 2012 at 10:50 AM, Jan Beulich <JBeulich@suse.com
> <mailto:JBeulich@suse.com>> wrote:
>
>
>     > This, of course makes collecting logs difficult.
>
>     Indeed. Did you try using the serial driver in polling mode
>     (without IRQ that is)?
>
>
> I'm not familiar with how to set this up, and a quick glance through
> xen/drivers/char/ns16550.c didn't really shed much light.
>
> Is there a README / wiki page, etc describing this?

http://xenbits.xen.org/docs/unstable/misc/xen-command-line.html

You want the com1 entry

-- 
Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
T: +44 (0)1223 225 900, http://www.citrix.com


--------------040100060804030608080506
Content-Type: text/html; charset="ISO-8859-1"
Content-Transfer-Encoding: 7bit

<html>
  <head>
    <meta content="text/html; charset=ISO-8859-1"
      http-equiv="Content-Type">
  </head>
  <body bgcolor="#FFFFFF" text="#000000">
    <br>
    <div class="moz-cite-prefix">On 15/08/12 15:58, Ben Guthro wrote:<br>
    </div>
    <blockquote
cite="mid:CAOvdn6WgDSOj9siOGLY6hmofgw3Ht6nOd8fwT2zBQsyXm+KivA@mail.gmail.com"
      type="cite"><br>
      <br>
      <div class="gmail_quote">On Wed, Aug 15, 2012 at 10:50 AM, Jan
        Beulich <span dir="ltr">&lt;<a moz-do-not-send="true"
            href="mailto:JBeulich@suse.com" target="_blank">JBeulich@suse.com</a>&gt;</span>
        wrote:<br>
        <blockquote class="gmail_quote" style="margin:0 0 0
          .8ex;border-left:1px #ccc solid;padding-left:1ex">
          <div class="HOEnZb">
            <div class="h5"><br>
              &gt; This, of course makes collecting logs difficult.<br>
              <br>
            </div>
          </div>
          Indeed. Did you try using the serial driver in polling mode<br>
          (without IRQ that is)?<br>
          <span class="HOEnZb"><font color="#888888"><br>
            </font></span></blockquote>
        <div><br>
        </div>
        <div>I'm not familiar with how to set this up, and a quick
          glance through xen/drivers/char/ns16550.c didn't really shed
          much light.</div>
        <div><br>
        </div>
        <div>Is there a README / wiki page, etc describing this?</div>
      </div>
    </blockquote>
    <br>
    <a class="moz-txt-link-freetext" href="http://xenbits.xen.org/docs/unstable/misc/xen-command-line.html">http://xenbits.xen.org/docs/unstable/misc/xen-command-line.html</a><br>
    <br>
    You want the com1 entry<br>
    <pre class="moz-signature" cols="72">-- 
Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
T: +44 (0)1223 225 900, <a class="moz-txt-link-freetext" href="http://www.citrix.com">http://www.citrix.com</a></pre>
  </body>
</html>

--------------040100060804030608080506--


--===============6731689140469103217==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6731689140469103217==--


From xen-devel-bounces@lists.xen.org Wed Aug 15 15:06:34 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 15:06:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1fAt-0008Si-52; Wed, 15 Aug 2012 15:06:23 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T1fAr-0008SV-KT
	for xen-devel@lists.xen.org; Wed, 15 Aug 2012 15:06:21 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1345043174!1747505!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10427 invoked from network); 15 Aug 2012 15:06:14 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-7.tower-27.messagelabs.com with SMTP;
	15 Aug 2012 15:06:14 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 15 Aug 2012 16:06:13 +0100
Message-Id: <502BD72C02000078000952CF@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Wed, 15 Aug 2012 16:06:52 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ben Guthro" <ben@guthro.net>
References: <CAOvdn6WwgBD-2yf1uu3m9-16=Tb5KUrybXf2KibtnxKbP7YtwA@mail.gmail.com>
	<CAOvdn6UBW-yTOMd3YSf5M9JiHLRivyaKL-qzcKG8epszqaKmGg@mail.gmail.com>
	<20120807163320.GC15053@phenom.dumpdata.com>
	<CAOvdn6XcRGKtJxGwJO8J+ZBJr89wfTLc1woW5rhtMM540H9h1w@mail.gmail.com>
	<CAOvdn6XUoSeGoo7X=AJ1kpDxZB9HZEv+TJ4v-=FHcyR4=CHK-w@mail.gmail.com>
	<502240F302000078000937A6@nat28.tlf.novell.com>
	<CAOvdn6WN3kxC8Bb9g6MpfLULu47EGSGCtajPc-SFeJ-8VSUQDQ@mail.gmail.com>
	<CAOvdn6Xk1inm1hU55i0phU2qvSWyJLNoyDdn43biBAY2OewPtw@mail.gmail.com>
	<5023F5400200007800093F4E@nat28.tlf.novell.com>
	<CAOvdn6U-f3C4U8xVPvDyRFkFFLkbfiCXg_n+9zgA9V5u+q5jfw@mail.gmail.com>
	<5023F8B80200007800093F8C@nat28.tlf.novell.com>
	<CAOvdn6XKHfete_+=Hkvt20G7_cJ-4i8+9j9YD30OxzQ16Ru-+g@mail.gmail.com>
	<5024CB720200007800094124@nat28.tlf.novell.com>
	<CAOvdn6VVJdhWVYa-gNVt3euE4AbGi1TijUCUd1wGzv0SCWEUYA@mail.gmail.com>
	<CAOvdn6USnG9Y4MNBhrDZmob0oJ8BgFfKTvFEhcKbXDxNmzfZ-Q@mail.gmail.com>
	<502B75BA0200007800094FD4@nat28.tlf.novell.com>
	<CAOvdn6Ww5oEj6Xg=XT4dWCYWUBuOdPqLz-CqNjz4HmCScE+==g@mail.gmail.com>
	<CAOvdn6Wsjds_dmZ0dwDTYmD-o-OqjBH5-b9mitAuUADE+A4_ag@mail.gmail.com>
	<502BB8FD02000078000951E0@nat28.tlf.novell.com>
	<CAOvdn6VD6bp13F-tvYdkBsj4hkhE7PTGZrapndtdHrDKeZqccg@mail.gmail.com>
	<502BD3520200007800095291@nat28.tlf.novell.com>
	<CAOvdn6WgDSOj9siOGLY6hmofgw3Ht6nOd8fwT2zBQsyXm+KivA@mail.gmail.com>
In-Reply-To: <CAOvdn6WgDSOj9siOGLY6hmofgw3Ht6nOd8fwT2zBQsyXm+KivA@mail.gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, john.baboval@citrix.com,
	Thomas Goetz <thomas.goetz@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen4.2 S3 regression?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 15.08.12 at 16:58, Ben Guthro <ben@guthro.net> wrote:
> On Wed, Aug 15, 2012 at 10:50 AM, Jan Beulich <JBeulich@suse.com> wrote:
> 
>>
>> > This, of course makes collecting logs difficult.
>>
>> Indeed. Did you try using the serial driver in polling mode
>> (without IRQ that is)?
>>
>>
> I'm not familiar with how to set this up, and a quick glance through
> xen/drivers/char/ns16550.c didn't really shed much light.
> 
> Is there a README / wiki page, etc describing this?

There's docs/misc/xen-command-line.markdown, which describes
this. It's basically

com1=<baud>,8n1,<port>,<irq>

and you'd want to set <irq> to 0 (I have a patch pending for
post-4.2 that allows omitting all the fields that you don't really
care to change from their default values, but for now you'll
have to specify them).
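
To illustrate (the I/O port value here is an assumption for a standard
legacy COM1 UART, not something taken from this thread), a complete
polling-mode setting would look like

com1=115200,8n1,0x3f8,0 console=com1

i.e. the usual baud/format/port fields with the trailing <irq> field
forced to 0, which makes the ns16550 driver fall back to timer-driven
polling instead of waiting on an interrupt.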

Jan



From xen-devel-bounces@lists.xen.org Wed Aug 15 15:17:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 15:17:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1fLB-0000D9-He; Wed, 15 Aug 2012 15:17:01 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ben.guthro@gmail.com>) id 1T1fL9-0000D4-IA
	for xen-devel@lists.xen.org; Wed, 15 Aug 2012 15:16:59 +0000
X-Env-Sender: ben.guthro@gmail.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1345043812!3031642!1
X-Originating-IP: [74.125.82.173]
X-SpamReason: No, hits=0.9 required=7.0 tests=HTML_40_50,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 561 invoked from network); 15 Aug 2012 15:16:52 -0000
Received: from mail-we0-f173.google.com (HELO mail-we0-f173.google.com)
	(74.125.82.173)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Aug 2012 15:16:52 -0000
Received: by weyz53 with SMTP id z53so1278745wey.32
	for <xen-devel@lists.xen.org>; Wed, 15 Aug 2012 08:16:52 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=Ms9ANrpNpAlWmofpPAiO2/ZadsaJmRcTw7F7Fk1zXBs=;
	b=Krec9LeHG1/DqA+FJtfdE2yFhVplw4/PSDcolT21S5TvURc7L/G5LiBUJmFhTFOtwZ
	OiKSTbEiqUwMJEei2O9Uz4/OBnlrjBdmwQlbwTvoy6MfBF3PEW9ZFYUaJWess5C95fuD
	JfATGKCw6QWxnCef1bZFBdY10Zt9EwFo1MPq3w3bQt22VecssxJjXX8vQLSzowiQWQIQ
	T/cs4APBZjBohZYYErwSjRG1YXxy2jLxHCz3e/V2ZfneyTCN6Glti95aQh3B7LZR1mZC
	431q2y3C26va0YGK2JQ0Ty2ISbEoaQmF73cCoU8fI/PGmV9bbG/y3dYkCgGM+360uXfB
	P7ZQ==
MIME-Version: 1.0
Received: by 10.50.236.72 with SMTP id us8mr14755075igc.70.1345043811371; Wed,
	15 Aug 2012 08:16:51 -0700 (PDT)
Received: by 10.64.33.15 with HTTP; Wed, 15 Aug 2012 08:16:51 -0700 (PDT)
In-Reply-To: <502BD72C02000078000952CF@nat28.tlf.novell.com>
References: <CAOvdn6WwgBD-2yf1uu3m9-16=Tb5KUrybXf2KibtnxKbP7YtwA@mail.gmail.com>
	<CAOvdn6UBW-yTOMd3YSf5M9JiHLRivyaKL-qzcKG8epszqaKmGg@mail.gmail.com>
	<20120807163320.GC15053@phenom.dumpdata.com>
	<CAOvdn6XcRGKtJxGwJO8J+ZBJr89wfTLc1woW5rhtMM540H9h1w@mail.gmail.com>
	<CAOvdn6XUoSeGoo7X=AJ1kpDxZB9HZEv+TJ4v-=FHcyR4=CHK-w@mail.gmail.com>
	<502240F302000078000937A6@nat28.tlf.novell.com>
	<CAOvdn6WN3kxC8Bb9g6MpfLULu47EGSGCtajPc-SFeJ-8VSUQDQ@mail.gmail.com>
	<CAOvdn6Xk1inm1hU55i0phU2qvSWyJLNoyDdn43biBAY2OewPtw@mail.gmail.com>
	<5023F5400200007800093F4E@nat28.tlf.novell.com>
	<CAOvdn6U-f3C4U8xVPvDyRFkFFLkbfiCXg_n+9zgA9V5u+q5jfw@mail.gmail.com>
	<5023F8B80200007800093F8C@nat28.tlf.novell.com>
	<CAOvdn6XKHfete_+=Hkvt20G7_cJ-4i8+9j9YD30OxzQ16Ru-+g@mail.gmail.com>
	<5024CB720200007800094124@nat28.tlf.novell.com>
	<CAOvdn6VVJdhWVYa-gNVt3euE4AbGi1TijUCUd1wGzv0SCWEUYA@mail.gmail.com>
	<CAOvdn6USnG9Y4MNBhrDZmob0oJ8BgFfKTvFEhcKbXDxNmzfZ-Q@mail.gmail.com>
	<502B75BA0200007800094FD4@nat28.tlf.novell.com>
	<CAOvdn6Ww5oEj6Xg=XT4dWCYWUBuOdPqLz-CqNjz4HmCScE+==g@mail.gmail.com>
	<CAOvdn6Wsjds_dmZ0dwDTYmD-o-OqjBH5-b9mitAuUADE+A4_ag@mail.gmail.com>
	<502BB8FD02000078000951E0@nat28.tlf.novell.com>
	<CAOvdn6VD6bp13F-tvYdkBsj4hkhE7PTGZrapndtdHrDKeZqccg@mail.gmail.com>
	<502BD3520200007800095291@nat28.tlf.novell.com>
	<CAOvdn6WgDSOj9siOGLY6hmofgw3Ht6nOd8fwT2zBQsyXm+KivA@mail.gmail.com>
	<502BD72C02000078000952CF@nat28.tlf.novell.com>
Date: Wed, 15 Aug 2012 11:16:51 -0400
X-Google-Sender-Auth: 7B5wt7PfIwxCI5eIoALm_QQLaIE
Message-ID: <CAOvdn6V15gi3D5azegksTk4y=xxj3za7LaWxdHd56t7Ws_-Q3g@mail.gmail.com>
From: Ben Guthro <ben@guthro.net>
To: Jan Beulich <JBeulich@suse.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, john.baboval@citrix.com,
	Thomas Goetz <thomas.goetz@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen4.2 S3 regression?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2082431032509510735=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2082431032509510735==
Content-Type: multipart/alternative; boundary=14dae9340af39589c204c74f6ace

--14dae9340af39589c204c74f6ace
Content-Type: text/plain; charset=ISO-8859-1

OK, well, I tried it with the following boot config:

With serial:

multiboot /xen.gz com1=115200,8n1,pci,0 console=com1 dom0_mem=max:1024M
cpufreq=xen cpuidle sync_console loglvl=all xsave=0
module /vmlinuz-3.2.23-orc
root=/dev/mapper/NxVG--eb56f027--0aeb--4e9a--9233--678d57b9dc9e-NxDisk5 ro
ignore_loglevel no_console_suspend  xencons=tty console=hvc

I'm not sure it matters, but I'm making use of the renamed "magic" patch to
autodetect the PCI serial card.


Without serial:

multiboot /xen.gz console=null dom0_mem=max:1024M cpufreq=xen cpuidle
xsave=0
module /vmlinuz-3.2.23-orc dummy
root=/dev/mapper/NxVG--eb56f027--0aeb--4e9a--9233--678d57b9dc9e-NxDisk5 ro
module /initrd.img-3.2.23-orc



All of that said - this exhibited the same behavior as before, with the
presence of the serial connection changing the test behavior.





On Wed, Aug 15, 2012 at 11:06 AM, Jan Beulich <JBeulich@suse.com> wrote:

> >>> On 15.08.12 at 16:58, Ben Guthro <ben@guthro.net> wrote:
> > On Wed, Aug 15, 2012 at 10:50 AM, Jan Beulich <JBeulich@suse.com> wrote:
> >
> >>
> >> > This, of course makes collecting logs difficult.
> >>
> >> Indeed. Did you try using the serial driver in polling mode
> >> (without IRQ that is)?
> >>
> >>
> > I'm not familiar with how to set this up, and a quick glance through
> > xen/drivers/char/ns16550.c didn't really shed much light.
> >
> > Is there a README / wiki page, etc describing this?
>
> There's docs/misc/xen-command-line.markdown, which describes
> this. It's basically
>
> com1=<baud>,8n1,<port>,<irq>
>
> and you'd want to set <irq> to 0 (I have a patch pending for
> post-4.2 that allows omitting all the fields that you don't really
> care to change from their default values, but for now you'll
> have to specify them).
>
> Jan
>
>

--14dae9340af39589c204c74f6ace--



--===============2082431032509510735==--



From xen-devel-bounces@lists.xen.org Wed Aug 15 15:47:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 15:47:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1fo9-0000RG-29; Wed, 15 Aug 2012 15:46:57 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1T1fo7-0000R7-KD
	for xen-devel@lists.xen.org; Wed, 15 Aug 2012 15:46:55 +0000
Received: from [85.158.143.99:64617] by server-3.bemta-4.messagelabs.com id
	07/87-09529-E64CB205; Wed, 15 Aug 2012 15:46:54 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-7.tower-216.messagelabs.com!1345045613!24964250!1
X-Originating-IP: [209.85.215.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26872 invoked from network); 15 Aug 2012 15:46:54 -0000
Received: from mail-ey0-f173.google.com (HELO mail-ey0-f173.google.com)
	(209.85.215.173)
	by server-7.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Aug 2012 15:46:54 -0000
Received: by eaac13 with SMTP id c13so535564eaa.32
	for <multiple recipients>; Wed, 15 Aug 2012 08:46:53 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type
	:content-transfer-encoding;
	bh=GynLBU0JXm8OTAlwK2i8Nd7S/JzaCrvaqJWC06Feges=;
	b=cyYfD1F7sJ5GIE3dCoMNmBhzEvD7z4KOvrxptA3et7iUMGJ+djaUt7K6jbW3g6Hiln
	n6nLvnAzEjPT28THxQzepX8Xp8GAO8tRGiXTPJ7QpUB4sXTPby95cMMgEE9ABudrPpxs
	RMfWA0ZUYLDs9ebvdaJ5BIyEPTlR6ZT01JTgKILx1sp4mIaFRwheDhcSKsEl2/bAbABJ
	4TcA3TKe0tiC/GbjmUjiJjT8IsTEDM0UY/vBHiKTLBNyiUYXpnMGaPIzdjNxK7bVvZjP
	97nDCyZPDDDWDsukyFDFIOnswX+WwAIpE8x4dR6xscqI5mojTvuTopIgjViC03dEI1+u
	zs7w==
MIME-Version: 1.0
Received: by 10.14.4.201 with SMTP id 49mr15380785eej.0.1345045613717; Wed, 15
	Aug 2012 08:46:53 -0700 (PDT)
Received: by 10.14.215.195 with HTTP; Wed, 15 Aug 2012 08:46:53 -0700 (PDT)
In-Reply-To: <1344935141.5926.6.camel@zakaz.uk.xensource.com>
References: <1344935141.5926.6.camel@zakaz.uk.xensource.com>
Date: Wed, 15 Aug 2012 16:46:53 +0100
X-Google-Sender-Auth: IsYJOc8F5r5sFCPSxTruqYEq0WI
Message-ID: <CAFLBxZbP2d=0gj7DqAc9nXga3H8Rv+RZPPSHHNbrbUv_=wzORA@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: xen-users <xen-users@lists.xen.org>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.2 TODO / Release Plan
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Aug 14, 2012 at 10:05 AM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> tools, blockers:

    * qemu-traditional has 50% cpu utilization on an idle Windows
      system if USB is enabled.  Not 100% clear whether this is Xen
      or qemu.  George is seeing if any XenServer patches address
      the issue.

>
>     * libxl stable API -- we would like 4.2 to define a stable API
>       which downstreams can start to rely on not changing. Aspects of
>       this are:
>
>         * None known
>
>     * xl compatibility with xm:
>
>         * No known issues
>
>     * [CHECK] More formally deprecate xm/xend. Manpage patches already
>       in tree. Needs release noting and communication around -rc1 to
>       remind people to test xl.
>
>     * [CHECK] Confirm that migration from Xen 4.1 -> 4.2 works.
>
>     * Bump library SONAMES as necessary.
>       <20502.39440.969619.824976@mariner.uk.xensource.com>
>
> hypervisor, nice to have:
>
>     * vMCE save/restore changes, to simplify migration 4.2->4.3 with
>       new vMCE in 4.3. (Jinsong Liu, Jan Beulich, DONE for 4.2)
>
>     * [BUG(?)] Under certain conditions, the p2m_pod_sweep code will
>       stop halfway through searching, causing a guest to crash even if
>       there was zeroed memory available.  This is NOT a regression
>       from 4.1, and is a very rare case, so probably shouldn't be a
>       blocker.  (In fact, I'd be open to the idea that it should wait
>       until after the release to get more testing.)
>             (George Dunlap)
>
>     * S3 regression(s?) reported by Ben Guthro (Ben & Jan Beulich)
>
>     * address PoD problems with early host side accesses to guest
>       address space (draft patch for 4.0.x exists, needs to be ported
>       over to -unstable, which I'll expect to get to today, Jan
>       Beulich)
>
>     * fix high change rate to CMOS RTC periodic interrupt causing
>       guest wall clock time to lag (possible fix outlined, needs to be
>       put in patch form and thoroughly reviewed/tested for unwanted
>       side effects, Jan Beulich)
>
> tools, nice to have:
>
>     * xl compatibility with xm:
>
>         * None
>
>     * xl.cfg(5) documentation patch for qemu-upstream
>       videoram/videomem support:
>       http://lists.xen.org/archives/html/xen-devel/2012-05/msg00250.html
>       qemu-upstream doesn't support specifying videomem size for the
>       HVM guest cirrus/stdvga.  (but this works with
>       qemu-xen-traditional). (Pasi Kärkkäinen)
>
>     * [BUG] long stop during the guest boot process with qcow image,
>       reported by Intel: http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1821
>
>     * [BUG] vcpu-set doesn't take effect on guest, reported by Intel:
>       http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1822
>
>     * Load blktap driver from xencommons initscript if available, thread at:
>       <db614e92faf743e20b3f.1337096977@kodo2>. To be fixed more
>       properly in 4.3. (Patch posted, discussion, plan to take simple
>       xencommons patch for 4.2 and revisit for 4.3. Ping sent)
>
>     * [BUG] xl allows same PCI device to be assigned to multiple
>       guests. http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1826
>       (<E4558C0C96688748837EB1B05BEED75A0FD5574A@SHSMSX102.ccr.corp.intel.com>)
>
>     * [BUG(?)] If domain 0 attempts to access a guest's memory before
>       it is finished being built, and it is being built in PoD mode,
>       this may cause the guest to crash.  Again, this is NOT a
>       regression from 4.1.  Furthermore, it's only been reported
>       (AIUI) by a customer of SuSE; so it shouldn't be a blocker.
>       (Again, I'd be open to the idea that it should wait until after
>       the release to get more testing.)
>           (George Dunlap / Jan Beulich)
>
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 15 15:56:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 15:56:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1fwp-0000qU-Lg; Wed, 15 Aug 2012 15:55:55 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lars.kurth.xen@gmail.com>) id 1T1fwo-0000qK-5F
	for xen-devel@lists.xen.org; Wed, 15 Aug 2012 15:55:54 +0000
Received: from [85.158.138.51:54432] by server-7.bemta-3.messagelabs.com id
	01/65-01906-986CB205; Wed, 15 Aug 2012 15:55:53 +0000
X-Env-Sender: lars.kurth.xen@gmail.com
X-Msg-Ref: server-5.tower-174.messagelabs.com!1345046151!28527437!1
X-Originating-IP: [209.85.161.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17940 invoked from network); 15 Aug 2012 15:55:52 -0000
Received: from mail-gg0-f173.google.com (HELO mail-gg0-f173.google.com)
	(209.85.161.173)
	by server-5.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Aug 2012 15:55:52 -0000
Received: by ggna5 with SMTP id a5so2146733ggn.32
	for <multiple recipients>; Wed, 15 Aug 2012 08:55:50 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:date:from:reply-to:user-agent:mime-version:to
	:subject:content-type:content-transfer-encoding;
	bh=Y4TaIKILbLsUhBqPrHSvzvhPYQhVkJPAeb6YXCf7vb8=;
	b=QSpo4EEK2fu7V6AQG42CZzsJuVAeIw6wE1PEOJUsbAfVYqkxAuI15ogY7J92hiAR5F
	I1eby88Oi9bDDSuW0BLdztCwl+tAlle/l6aDuCKnEZWwpyqL9GngRLI4XyzU8Kcr+4vJ
	Kh3kllOi9JehC5sybaD0AQJq5BGHkzTq5EuBewMh1TtjHTc6LBUsHNvXIDbL6VKcYaUT
	XoyZRrZcH71NOBN6GFO3mIZJxPQkJ+eVul5bCCUORv61ZBc86ULzqGJmPsCclZ0DL1Vo
	6qa8oBZZmZgTWdiqaf2Q1g2i/qLwEN5qGPAUssbMQk44HcMTGkrUy22gVD+/F7O5TuuX
	SVRQ==
Received: by 10.236.110.209 with SMTP id u57mr20298533yhg.101.1345046150797;
	Wed, 15 Aug 2012 08:55:50 -0700 (PDT)
Received: from [172.16.25.10] (firewall.ctxuk.citrix.com. [62.200.22.2])
	by mx.google.com with ESMTPS id o25sm3274230yhm.14.2012.08.15.08.55.48
	(version=SSLv3 cipher=OTHER); Wed, 15 Aug 2012 08:55:49 -0700 (PDT)
Message-ID: <502BC683.6060407@xen.org>
Date: Wed, 15 Aug 2012 16:55:47 +0100
From: Lars Kurth <lars.kurth@xen.org>
User-Agent: Mozilla/5.0 (Windows NT 6.1;
	rv:14.0) Gecko/20120713 Thunderbird/14.0
MIME-Version: 1.0
To: xen-users@lists.xen.org, 
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: [Xen-devel] Xen Test Day - Thank You!
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: lars.kurth@xen.org
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi,

I wanted to thank everybody who joined yesterday's test day for 
contributing. At the peak we had about 30 people on the #xentest 
channel; a number of bugs were raised, and several items were added to 
the Xen 4.2 TODO list / Release Plan. There was also quite a bit of 
discussion on the channel.

It appears that measuring the success of a test day is a little harder 
than measuring that of a docs day. For example, I don't quite know how 
many tests were successful, as nobody posted success reports to the 
mailing list. This makes me think that maybe this is the wrong approach 
and that we need to put more preparation into test days.

Here are a number of ideas:
1) Fedora Test Days are much more prescriptive (i.e. test cases are 
pre-defined and results are recorded in a big spreadsheet)
2) Maybe we should guide testing a bit more than we have
3) Maybe reporting results should be easier (e.g. a simple web form or a 
google docs spreadsheet)

Anyway, suggestions and feedback are very welcome!

Best Regards
Lars

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 15 15:59:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 15:59:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1fzq-0001AP-2T; Wed, 15 Aug 2012 15:59:02 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T1fzo-0001AC-Nt
	for xen-devel@lists.xen.org; Wed, 15 Aug 2012 15:59:01 +0000
Received: from [85.158.138.51:15760] by server-7.bemta-3.messagelabs.com id
	51/B9-01906-347CB205; Wed, 15 Aug 2012 15:58:59 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-174.messagelabs.com!1345046337!24402921!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzE1NDQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28755 invoked from network); 15 Aug 2012 15:58:59 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Aug 2012 15:58:59 -0000
X-IronPort-AV: E=Sophos;i="4.77,773,1336363200"; d="scan'208";a="205269505"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	15 Aug 2012 11:58:57 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 15 Aug 2012 11:58:56 -0400
Received: from cosworth.uk.xensource.com ([10.80.16.52] ident=ianc)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1T1fzk-0002hB-Ct;
	Wed, 15 Aug 2012 16:58:56 +0100
MIME-Version: 1.0
X-Mercurial-Node: 7cec0543f67cefe3755bbad0c2262fa2e820d746
Message-ID: <7cec0543f67cefe3755b.1345046336@cosworth.uk.xensource.com>
User-Agent: Mercurial-patchbomb/1.6.4
Date: Wed, 15 Aug 2012 16:58:56 +0100
From: Ian Campbell <ian.campbell@citrix.com>
To: xen-devel@lists.xen.org
Cc: ian.jackson@citrix.com, Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH] libxl: make domain resume API asynchronous
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1345046301 -3600
# Node ID 7cec0543f67cefe3755bbad0c2262fa2e820d746
# Parent  30bf79cc14d932fbe6ff572d0438e5a432f69b0a
libxl: make domain resume API asynchronous

Although the current implementation has no asynchronous parts, I can
envisage it needing to do bits of create/destroy-like functionality
which may need async support in the future.

To do this, move the meat into an internal libxl__domain_resume
function in order to satisfy the no-internal-callers rule for the
async function.

Since I needed to touch the logging to s/ctx/CTX/ anyway switch to the
LOG* helper macros.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

diff -r 30bf79cc14d9 -r 7cec0543f67c tools/libxl/libxl.c
--- a/tools/libxl/libxl.c	Wed Aug 15 14:45:21 2012 +0100
+++ b/tools/libxl/libxl.c	Wed Aug 15 16:58:21 2012 +0100
@@ -396,15 +396,12 @@ int libxl_domain_rename(libxl_ctx *ctx, 
     return rc;
 }
 
-int libxl_domain_resume(libxl_ctx *ctx, uint32_t domid, int suspend_cancel)
+int libxl__domain_resume(libxl__gc *gc, uint32_t domid, int suspend_cancel)
 {
-    GC_INIT(ctx);
     int rc = 0;
 
-    if (xc_domain_resume(ctx->xch, domid, suspend_cancel)) {
-        LIBXL__LOG_ERRNO(ctx, LIBXL__LOG_ERROR,
-                        "xc_domain_resume failed for domain %u",
-                        domid);
+    if (xc_domain_resume(CTX->xch, domid, suspend_cancel)) {
+        LOGE(ERROR, "xc_domain_resume failed for domain %u", domid);
         rc = ERROR_FAIL;
         goto out;
     }
@@ -418,24 +415,29 @@ int libxl_domain_resume(libxl_ctx *ctx, 
     if (type == LIBXL_DOMAIN_TYPE_HVM) {
         rc = libxl__domain_resume_device_model(gc, domid);
         if (rc) {
-            LIBXL__LOG(ctx, LIBXL__LOG_ERROR,
-                       "failed to resume device model for domain %u:%d",
-                       domid, rc);
+            LOG(ERROR, "failed to resume device model for domain %u:%d",
+                domid, rc);
             goto out;
         }
     }
 
-    if (!xs_resume_domain(ctx->xsh, domid)) {
-        LIBXL__LOG_ERRNO(ctx, LIBXL__LOG_ERROR,
-                        "xs_resume_domain failed for domain %u",
-                        domid);
+    if (!xs_resume_domain(CTX->xsh, domid)) {
+        LOGE(ERROR, "xs_resume_domain failed for domain %u", domid);
         rc = ERROR_FAIL;
     }
 out:
-    GC_FREE;
     return rc;
 }
 
+int libxl_domain_resume(libxl_ctx *ctx, uint32_t domid, int suspend_cancel,
+                        const libxl_asyncop_how *ao_how)
+{
+    AO_CREATE(ctx, domid, ao_how);
+    int rc = libxl__domain_resume(gc, domid, suspend_cancel);
+    libxl__ao_complete(egc, ao, rc);
+    return AO_INPROGRESS;
+}
+
 /*
  * Preserves a domain but rewrites xenstore etc to make it unique so
  * that the domain can be restarted.
diff -r 30bf79cc14d9 -r 7cec0543f67c tools/libxl/libxl.h
--- a/tools/libxl/libxl.h	Wed Aug 15 14:45:21 2012 +0100
+++ b/tools/libxl/libxl.h	Wed Aug 15 16:58:21 2012 +0100
@@ -529,7 +529,9 @@ int libxl_domain_suspend(libxl_ctx *ctx,
  *   If this parameter is true, use co-operative resume. The guest
  *   must support this.
  */
-int libxl_domain_resume(libxl_ctx *ctx, uint32_t domid, int suspend_cancel);
+int libxl_domain_resume(libxl_ctx *ctx, uint32_t domid, int suspend_cancel,
+                        const libxl_asyncop_how *ao_how)
+                        LIBXL_EXTERNAL_CALLERS_ONLY;
 
 int libxl_domain_remus_start(libxl_ctx *ctx, libxl_domain_remus_info *info,
                              uint32_t domid, int send_fd, int recv_fd,
diff -r 30bf79cc14d9 -r 7cec0543f67c tools/libxl/libxl_dom.c
--- a/tools/libxl/libxl_dom.c	Wed Aug 15 14:45:21 2012 +0100
+++ b/tools/libxl/libxl_dom.c	Wed Aug 15 16:58:21 2012 +0100
@@ -1121,7 +1121,7 @@ static int libxl__remus_domain_resume_ca
     STATE_AO_GC(dss->ao);
 
     /* Resumes the domain and the device model */
-    if (libxl_domain_resume(CTX, dss->domid, /* Fast Suspend */1))
+    if (libxl__domain_resume(gc, dss->domid, /* Fast Suspend */1))
         return 0;
 
     /* REMUS TODO: Deal with disk. Start a new network output buffer */
diff -r 30bf79cc14d9 -r 7cec0543f67c tools/libxl/libxl_internal.h
--- a/tools/libxl/libxl_internal.h	Wed Aug 15 14:45:21 2012 +0100
+++ b/tools/libxl/libxl_internal.h	Wed Aug 15 16:58:21 2012 +0100
@@ -899,6 +899,9 @@ _hidden int libxl__domain_resume_device_
 
 _hidden void libxl__userdata_destroyall(libxl__gc *gc, uint32_t domid);
 
+_hidden int libxl__domain_resume(libxl__gc *gc, uint32_t domid,
+                                 int suspend_cancel);
+
 /* returns 0 or 1, or a libxl error code */
 _hidden int libxl__domain_pvcontrol_available(libxl__gc *gc, uint32_t domid);
 
diff -r 30bf79cc14d9 -r 7cec0543f67c tools/libxl/xl_cmdimpl.c
--- a/tools/libxl/xl_cmdimpl.c	Wed Aug 15 14:45:21 2012 +0100
+++ b/tools/libxl/xl_cmdimpl.c	Wed Aug 15 16:58:21 2012 +0100
@@ -2859,7 +2859,7 @@ static int save_domain(const char *p, co
     close(fd);
 
     if (checkpoint)
-        libxl_domain_resume(ctx, domid, 1);
+        libxl_domain_resume(ctx, domid, 1, 0);
     else
         libxl_domain_destroy(ctx, domid, 0);
 
@@ -3110,7 +3110,7 @@ static void migrate_domain(const char *d
         if (common_domname) {
             libxl_domain_rename(ctx, domid, away_domname, common_domname);
         }
-        rc = libxl_domain_resume(ctx, domid, 0);
+        rc = libxl_domain_resume(ctx, domid, 0, 0);
         if (!rc) fprintf(stderr, "migration sender: Resumed OK.\n");
 
         fprintf(stderr, "Migration failed due to problems at target.\n");
@@ -3132,7 +3132,7 @@ static void migrate_domain(const char *d
     close(send_fd);
     migration_child_report(recv_fd);
     fprintf(stderr, "Migration failed, resuming at sender.\n");
-    libxl_domain_resume(ctx, domid, 0);
+    libxl_domain_resume(ctx, domid, 0, 0);
     exit(-ERROR_FAIL);
 
  failed_badly:
@@ -6654,7 +6654,7 @@ int main_remus(int argc, char **argv)
         fprintf(stderr, "Failed to suspend domain at primary.\n");
     else {
         fprintf(stderr, "Remus: Backup failed? resuming domain at primary.\n");
-        libxl_domain_resume(ctx, domid, 1);
+        libxl_domain_resume(ctx, domid, 1, 0);
     }
 
     close(send_fd);
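
The shape of the change above — a thin public entry point that creates the
async operation, runs the internal worker synchronously, and completes the
operation immediately — can be sketched in miniature. Everything below
(mini_ao, MINI_ERROR_FAIL, the function bodies) is a hypothetical stand-in
for illustration, not the real libxl API:

```c
#include <assert.h>

/* Hypothetical stand-in for libxl's AO machinery: the real
 * AO_CREATE / libxl__ao_complete are macros over much richer state. */
typedef struct { int completed; int rc; } mini_ao;

#define MINI_ERROR_FAIL (-3)   /* stand-in for libxl's ERROR_FAIL */

/* Internal worker in the role of libxl__domain_resume: does the actual
 * (synchronous) resume work.  Here it only validates its input. */
int domain_resume_internal(unsigned int domid, int suspend_cancel)
{
    (void)suspend_cancel;
    if (domid == 0)            /* "resuming" dom0 makes no sense */
        return MINI_ERROR_FAIL;
    return 0;
}

/* Public entry point in the role of the new libxl_domain_resume:
 * create the async operation, call the worker, and complete the
 * operation right away.  The API is asynchronous in shape only,
 * leaving room to add genuinely async steps later. */
int domain_resume(mini_ao *ao, unsigned int domid, int suspend_cancel)
{
    ao->completed = 0;         /* AO_CREATE(ctx, domid, ao_how)    */
    ao->rc = domain_resume_internal(domid, suspend_cancel);
    ao->completed = 1;         /* libxl__ao_complete(egc, ao, rc)  */
    return 0;                  /* AO_INPROGRESS analogue           */
}
```

The real AO_CREATE also wires up the ao_how callback/poll plumbing; the
point of the sketch is only the split between the internal synchronous
worker (callable from other libxl internals such as the Remus path) and
the externally-callable wrapper that satisfies the no-internal-callers
rule.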

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 15 15:59:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 15:59:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1fzq-0001AP-2T; Wed, 15 Aug 2012 15:59:02 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T1fzo-0001AC-Nt
	for xen-devel@lists.xen.org; Wed, 15 Aug 2012 15:59:01 +0000
Received: from [85.158.138.51:15760] by server-7.bemta-3.messagelabs.com id
	51/B9-01906-347CB205; Wed, 15 Aug 2012 15:58:59 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-174.messagelabs.com!1345046337!24402921!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzE1NDQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28755 invoked from network); 15 Aug 2012 15:58:59 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Aug 2012 15:58:59 -0000
X-IronPort-AV: E=Sophos;i="4.77,773,1336363200"; d="scan'208";a="205269505"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	15 Aug 2012 11:58:57 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 15 Aug 2012 11:58:56 -0400
Received: from cosworth.uk.xensource.com ([10.80.16.52] ident=ianc)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1T1fzk-0002hB-Ct;
	Wed, 15 Aug 2012 16:58:56 +0100
MIME-Version: 1.0
X-Mercurial-Node: 7cec0543f67cefe3755bbad0c2262fa2e820d746
Message-ID: <7cec0543f67cefe3755b.1345046336@cosworth.uk.xensource.com>
User-Agent: Mercurial-patchbomb/1.6.4
Date: Wed, 15 Aug 2012 16:58:56 +0100
From: Ian Campbell <ian.campbell@citrix.com>
To: xen-devel@lists.xen.org
Cc: ian.jackson@citrix.com, Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH] libxl: make domain resume API asynchronous
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1345046301 -3600
# Node ID 7cec0543f67cefe3755bbad0c2262fa2e820d746
# Parent  30bf79cc14d932fbe6ff572d0438e5a432f69b0a
libxl: make domain resume API asynchronous

Although the current implementation has no asynchronous parts, I can
envisage it needing to do bits of create/destroy-like functionality
which may need async support in the future.

To do this, move the meat into an internal libxl__domain_resume
function, in order to satisfy the no-internal-callers rule for the
async function.

Since I needed to touch the logging to s/ctx/CTX/ anyway, switch to
the LOG* helper macros.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
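
The refactoring described above follows a simple shape: a thin public
entry point creates an asynchronous operation ("ao"), delegates to an
internal synchronous worker, and completes the ao immediately. The
following is a minimal Python sketch of that calling convention only;
AsyncOp, _domain_resume and domain_resume are illustrative names, not
libxl API.

```python
class AsyncOp:
    """Stand-in for libxl's 'ao': records a completion code."""
    def __init__(self):
        self.rc = None
        self.complete_called = False

    def complete(self, rc):
        self.complete_called = True
        self.rc = rc

def _domain_resume(domid, suspend_cancel):
    # Internal worker: performs the actual (currently synchronous)
    # resume steps. Internal callers use this directly, satisfying
    # the no-internal-callers rule for the public async entry point.
    return 0  # 0 == success, mirroring libxl's rc convention

def domain_resume(domid, suspend_cancel, ao_how=None):
    # Public entry point: wraps the worker in an async operation.
    # Today the work is synchronous, so the ao completes at once,
    # but callers already see the asynchronous calling convention.
    ao = AsyncOp()
    rc = _domain_resume(domid, suspend_cancel)
    ao.complete(rc)
    return ao
```

In libxl itself the wrapper uses AO_CREATE, libxl__ao_complete and
AO_INPROGRESS, as the patch below shows; the sketch only captures the
shape, where internal callers (such as the Remus resume path) call the
worker directly while external callers go through the ao wrapper.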

diff -r 30bf79cc14d9 -r 7cec0543f67c tools/libxl/libxl.c
--- a/tools/libxl/libxl.c	Wed Aug 15 14:45:21 2012 +0100
+++ b/tools/libxl/libxl.c	Wed Aug 15 16:58:21 2012 +0100
@@ -396,15 +396,12 @@ int libxl_domain_rename(libxl_ctx *ctx, 
     return rc;
 }
 
-int libxl_domain_resume(libxl_ctx *ctx, uint32_t domid, int suspend_cancel)
+int libxl__domain_resume(libxl__gc *gc, uint32_t domid, int suspend_cancel)
 {
-    GC_INIT(ctx);
     int rc = 0;
 
-    if (xc_domain_resume(ctx->xch, domid, suspend_cancel)) {
-        LIBXL__LOG_ERRNO(ctx, LIBXL__LOG_ERROR,
-                        "xc_domain_resume failed for domain %u",
-                        domid);
+    if (xc_domain_resume(CTX->xch, domid, suspend_cancel)) {
+        LOGE(ERROR, "xc_domain_resume failed for domain %u", domid);
         rc = ERROR_FAIL;
         goto out;
     }
@@ -418,24 +415,29 @@ int libxl_domain_resume(libxl_ctx *ctx, 
     if (type == LIBXL_DOMAIN_TYPE_HVM) {
         rc = libxl__domain_resume_device_model(gc, domid);
         if (rc) {
-            LIBXL__LOG(ctx, LIBXL__LOG_ERROR,
-                       "failed to resume device model for domain %u:%d",
-                       domid, rc);
+            LOG(ERROR, "failed to resume device model for domain %u:%d",
+                domid, rc);
             goto out;
         }
     }
 
-    if (!xs_resume_domain(ctx->xsh, domid)) {
-        LIBXL__LOG_ERRNO(ctx, LIBXL__LOG_ERROR,
-                        "xs_resume_domain failed for domain %u",
-                        domid);
+    if (!xs_resume_domain(CTX->xsh, domid)) {
+        LOGE(ERROR, "xs_resume_domain failed for domain %u", domid);
         rc = ERROR_FAIL;
     }
 out:
-    GC_FREE;
     return rc;
 }
 
+int libxl_domain_resume(libxl_ctx *ctx, uint32_t domid, int suspend_cancel,
+                        const libxl_asyncop_how *ao_how)
+{
+    AO_CREATE(ctx, domid, ao_how);
+    int rc = libxl__domain_resume(gc, domid, suspend_cancel);
+    libxl__ao_complete(egc, ao, rc);
+    return AO_INPROGRESS;
+}
+
 /*
  * Preserves a domain but rewrites xenstore etc to make it unique so
  * that the domain can be restarted.
diff -r 30bf79cc14d9 -r 7cec0543f67c tools/libxl/libxl.h
--- a/tools/libxl/libxl.h	Wed Aug 15 14:45:21 2012 +0100
+++ b/tools/libxl/libxl.h	Wed Aug 15 16:58:21 2012 +0100
@@ -529,7 +529,9 @@ int libxl_domain_suspend(libxl_ctx *ctx,
  *   If this parameter is true, use co-operative resume. The guest
  *   must support this.
  */
-int libxl_domain_resume(libxl_ctx *ctx, uint32_t domid, int suspend_cancel);
+int libxl_domain_resume(libxl_ctx *ctx, uint32_t domid, int suspend_cancel,
+                        const libxl_asyncop_how *ao_how)
+                        LIBXL_EXTERNAL_CALLERS_ONLY;
 
 int libxl_domain_remus_start(libxl_ctx *ctx, libxl_domain_remus_info *info,
                              uint32_t domid, int send_fd, int recv_fd,
diff -r 30bf79cc14d9 -r 7cec0543f67c tools/libxl/libxl_dom.c
--- a/tools/libxl/libxl_dom.c	Wed Aug 15 14:45:21 2012 +0100
+++ b/tools/libxl/libxl_dom.c	Wed Aug 15 16:58:21 2012 +0100
@@ -1121,7 +1121,7 @@ static int libxl__remus_domain_resume_ca
     STATE_AO_GC(dss->ao);
 
     /* Resumes the domain and the device model */
-    if (libxl_domain_resume(CTX, dss->domid, /* Fast Suspend */1))
+    if (libxl__domain_resume(gc, dss->domid, /* Fast Suspend */1))
         return 0;
 
     /* REMUS TODO: Deal with disk. Start a new network output buffer */
diff -r 30bf79cc14d9 -r 7cec0543f67c tools/libxl/libxl_internal.h
--- a/tools/libxl/libxl_internal.h	Wed Aug 15 14:45:21 2012 +0100
+++ b/tools/libxl/libxl_internal.h	Wed Aug 15 16:58:21 2012 +0100
@@ -899,6 +899,9 @@ _hidden int libxl__domain_resume_device_
 
 _hidden void libxl__userdata_destroyall(libxl__gc *gc, uint32_t domid);
 
+_hidden int libxl__domain_resume(libxl__gc *gc, uint32_t domid,
+                                 int suspend_cancel);
+
 /* returns 0 or 1, or a libxl error code */
 _hidden int libxl__domain_pvcontrol_available(libxl__gc *gc, uint32_t domid);
 
diff -r 30bf79cc14d9 -r 7cec0543f67c tools/libxl/xl_cmdimpl.c
--- a/tools/libxl/xl_cmdimpl.c	Wed Aug 15 14:45:21 2012 +0100
+++ b/tools/libxl/xl_cmdimpl.c	Wed Aug 15 16:58:21 2012 +0100
@@ -2859,7 +2859,7 @@ static int save_domain(const char *p, co
     close(fd);
 
     if (checkpoint)
-        libxl_domain_resume(ctx, domid, 1);
+        libxl_domain_resume(ctx, domid, 1, 0);
     else
         libxl_domain_destroy(ctx, domid, 0);
 
@@ -3110,7 +3110,7 @@ static void migrate_domain(const char *d
         if (common_domname) {
             libxl_domain_rename(ctx, domid, away_domname, common_domname);
         }
-        rc = libxl_domain_resume(ctx, domid, 0);
+        rc = libxl_domain_resume(ctx, domid, 0, 0);
         if (!rc) fprintf(stderr, "migration sender: Resumed OK.\n");
 
         fprintf(stderr, "Migration failed due to problems at target.\n");
@@ -3132,7 +3132,7 @@ static void migrate_domain(const char *d
     close(send_fd);
     migration_child_report(recv_fd);
     fprintf(stderr, "Migration failed, resuming at sender.\n");
-    libxl_domain_resume(ctx, domid, 0);
+    libxl_domain_resume(ctx, domid, 0, 0);
     exit(-ERROR_FAIL);
 
  failed_badly:
@@ -6654,7 +6654,7 @@ int main_remus(int argc, char **argv)
         fprintf(stderr, "Failed to suspend domain at primary.\n");
     else {
         fprintf(stderr, "Remus: Backup failed? resuming domain at primary.\n");
-        libxl_domain_resume(ctx, domid, 1);
+        libxl_domain_resume(ctx, domid, 1, 0);
     }
 
     close(send_fd);

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 15 16:03:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 16:03:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1g3s-0001qp-O5; Wed, 15 Aug 2012 16:03:12 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <topperxin@126.com>) id 1T1g3r-0001qX-1n
	for xen-devel@lists.xensource.com; Wed, 15 Aug 2012 16:03:11 +0000
Received: from [85.158.143.99:12713] by server-1.bemta-4.messagelabs.com id
	A1/F7-07754-E38CB205; Wed, 15 Aug 2012 16:03:10 +0000
X-Env-Sender: topperxin@126.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1345046575!27776977!1
X-Originating-IP: [220.181.15.48]
X-SpamReason: No, hits=0.3 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiAyMjAuMTgxLjE1LjQ4ID0+IDIwMTM2\n,sa_preprocessor: 
	QmFkIElQOiAyMjAuMTgxLjE1LjQ4ID0+IDIwMTM2\n,HTML_60_70,HTML_MESSAGE
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30643 invoked from network); 15 Aug 2012 16:03:07 -0000
Received: from m15-48.126.com (HELO m15-48.126.com) (220.181.15.48)
	by server-3.tower-216.messagelabs.com with SMTP;
	15 Aug 2012 16:03:07 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=126.com;
	s=s110527; h=Received:Date:From:To:Subject:Content-Type:
	MIME-Version:Message-ID; bh=HPhvoJPXpk1S7vIBL5MBDjLow1Tinc20A/Fb
	oPgPCRI=; b=L+orvdcJD9pYE3K8KRIKNvUzJF9Qb1VLxxarxpgoQtZDgjTf5qvl
	iFTnmCEMcXKaJ/xKWqzzO/Y8H53v/lnv28YpEh5AU+PENU8S8ZRYKBlnu28Ppcpn
	8aruZ1hBqFB4BY3/helFnnYgzUscguXLdEdzvFfUJeIqdNQyQw+22Do=
Received: from topperxin$126.com ( [211.157.141.3] ) by ajax-webmail-wmsvr48
	(Coremail) ; Thu, 16 Aug 2012 00:02:52 +0800 (CST)
X-Originating-IP: [211.157.141.3]
Date: Thu, 16 Aug 2012 00:02:52 +0800 (CST)
From: topperxin <topperxin@126.com>
To: xen-users <xen-users@lists.xensource.com>, 
	xen-devel <xen-devel@lists.xensource.com>
X-Priority: 3
X-Mailer: Coremail Webmail Server Version SP_ntes V3.5 build
	20120727(19208.4800.4773) Copyright (c) 2002-2012 www.mailtech.cn
	126com
X-CM-CTRLDATA: FmkaiGZvb3Rlcl9odG09NDI0Ojgx
MIME-Version: 1.0
Message-ID: <7c167b81.184e5.1392b05ed05.Coremail.topperxin@126.com>
X-CM-TRANSID: MMqowECJrUItyCtQoK4TAA--.1205W
X-CM-SenderInfo: xwrs1vhu0l0qqrswhudrp/1tbi4hEMDkkZsmF4xgACsO
X-Coremail-Antispam: 1U5529EdanIXcx71UUUUU7vcSsGvfC2KfnxnUU==
Subject: [Xen-devel] live migration with hvm type vm
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6120020075324871233=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============6120020075324871233==
Content-Type: multipart/alternative; 
	boundary="----=_Part_489935_2075035325.1345046572292"

------=_Part_489935_2075035325.1345046572292
Content-Type: text/plain; charset=GBK
Content-Transfer-Encoding: 7bit

hi all
      A quick question: can I do live migration with an HVM-type VM on open-source Xen?
      It seems that on XenServer, live migration can only be done with PV VMs.
      Why?
      Regards,
      Thanks a lot
------=_Part_489935_2075035325.1345046572292
Content-Type: text/html; charset=GBK
Content-Transfer-Encoding: 7bit

<div style="line-height:1.7;color:#000000;font-size:14px;font-family:arial">hi all<div>&nbsp; &nbsp; &nbsp; A quick question: Can I do the live migration with HVM type VM? on the open source xen?</div><div>&nbsp; &nbsp; &nbsp; Maybe, on the xenserver, live migration can only done on the PV vm.</div><div>&nbsp; &nbsp; &nbsp; Why?</div><div>&nbsp; &nbsp; &nbsp; Regards</div><div>&nbsp; &nbsp; &nbsp;Thanks a lot</div></div><br><br><span title="neteasefooter"><span id="netease_mail_footer"></span></span>
------=_Part_489935_2075035325.1345046572292--



--===============6120020075324871233==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6120020075324871233==--



From xen-devel-bounces@lists.xen.org Wed Aug 15 16:04:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 16:04:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1g4U-0001vN-Eq; Wed, 15 Aug 2012 16:03:50 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T1g4S-0001vA-VS
	for xen-devel@lists.xen.org; Wed, 15 Aug 2012 16:03:49 +0000
Received: from [85.158.143.99:16702] by server-3.bemta-4.messagelabs.com id
	09/7C-09529-468CB205; Wed, 15 Aug 2012 16:03:48 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-216.messagelabs.com!1345046626!28367589!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjQwMTk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18277 invoked from network); 15 Aug 2012 16:03:47 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Aug 2012 16:03:47 -0000
X-IronPort-AV: E=Sophos;i="4.77,773,1336363200"; d="scan'208";a="34743417"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	15 Aug 2012 12:03:35 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 15 Aug 2012 12:03:34 -0400
Received: from cosworth.uk.xensource.com ([10.80.16.52] ident=ianc)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1T1g4E-0002l4-Cy;
	Wed, 15 Aug 2012 17:03:34 +0100
MIME-Version: 1.0
X-Mercurial-Node: add1552b607e014717f9e48bff61606bc62ff269
Message-ID: <add1552b607e014717f9.1345046614@cosworth.uk.xensource.com>
User-Agent: Mercurial-patchbomb/1.6.4
Date: Wed, 15 Aug 2012 17:03:34 +0100
From: Ian Campbell <ian.campbell@citrix.com>
To: xen-devel@lists.xen.org
Cc: waldi@debian.org, ian.jackson@citrix.com
Subject: [Xen-devel] [PATCH] xl: make "xl list -l" proper JSON
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1345046605 -3600
# Node ID add1552b607e014717f9e48bff61606bc62ff269
# Parent  7cec0543f67cefe3755bbad0c2262fa2e820d746
xl: make "xl list -l" proper JSON

Bastian Blank reports that the output of this command is just multiple
JSON objects concatenated and is not a single properly formed JSON
object.

Fix this by wrapping in an array. This turned out to be a bit more
intrusive than I was expecting due to the requirement to keep
supporting the SXP output mode.

Python's json module is happy to parse the result...

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
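
The bug being fixed is easy to demonstrate: several JSON objects
concatenated back to back are not one well-formed JSON document,
whereas the same objects wrapped in an array are. A minimal
illustration with Python's json module (which the message above
mentions); the "domid" keys are made-up sample data:

```python
import json

concatenated = '{"domid": 1}{"domid": 2}'     # old "xl list -l" shape
wrapped      = '[{"domid": 1},{"domid": 2}]'  # new shape: one JSON array

try:
    json.loads(concatenated)
    parsed_old = True
except json.JSONDecodeError:
    parsed_old = False  # rejected: extra data after the first object

parsed_new = json.loads(wrapped)  # parses as a list of two objects
```

The patch below applies the same idea to "xl list -l": open one array,
emit each domain's config as an element, then close the array before
printing the buffer.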

diff -r 7cec0543f67c -r add1552b607e tools/libxl/xl_cmdimpl.c
--- a/tools/libxl/xl_cmdimpl.c	Wed Aug 15 16:58:21 2012 +0100
+++ b/tools/libxl/xl_cmdimpl.c	Wed Aug 15 17:03:25 2012 +0100
@@ -320,23 +320,10 @@ static void dolog(const char *file, int 
     free(s);
 }
 
-static void printf_info(enum output_format output_format,
-                        int domid,
-                        libxl_domain_config *d_config)
-{
-    if (output_format == OUTPUT_FORMAT_SXP)
-        return printf_info_sexp(domid, d_config);
-
-    const char *buf;
-    libxl_yajl_length len = 0;
+static yajl_gen_status printf_info_one_json(yajl_gen hand, int domid,
+                                            libxl_domain_config *d_config)
+{
     yajl_gen_status s;
-    yajl_gen hand;
-
-    hand = libxl_yajl_gen_alloc(NULL);
-    if (!hand) {
-        fprintf(stderr, "unable to allocate JSON generator\n");
-        return;
-    }
 
     s = yajl_gen_map_open(hand);
     if (s != yajl_gen_status_ok)
@@ -365,6 +352,31 @@ static void printf_info(enum output_form
     if (s != yajl_gen_status_ok)
         goto out;
 
+out:
+    return s;
+}
+static void printf_info(enum output_format output_format,
+                        int domid,
+                        libxl_domain_config *d_config)
+{
+    if (output_format == OUTPUT_FORMAT_SXP)
+        return printf_info_sexp(domid, d_config);
+
+    const char *buf;
+    libxl_yajl_length len = 0;
+    yajl_gen_status s;
+    yajl_gen hand;
+
+    hand = libxl_yajl_gen_alloc(NULL);
+    if (!hand) {
+        fprintf(stderr, "unable to allocate JSON generator\n");
+        return;
+    }
+
+    s = printf_info_one_json(hand, domid, d_config);
+    if (s != yajl_gen_status_ok)
+        goto out;
+
     s = yajl_gen_get_buf(hand, (const unsigned char **)&buf, &len);
     if (s != yajl_gen_status_ok)
         goto out;
@@ -2679,6 +2691,24 @@ static void list_domains_details(const l
     uint8_t *data;
     int i, len, rc;
 
+    yajl_gen hand;
+    yajl_gen_status s;
+    const char *buf;
+    libxl_yajl_length yajl_len = 0;
+
+    if (default_output_format == OUTPUT_FORMAT_JSON) {
+        hand = libxl_yajl_gen_alloc(NULL);
+        if (!hand) {
+            fprintf(stderr, "unable to allocate JSON generator\n");
+            return;
+        }
+
+        s = yajl_gen_array_open(hand);
+        if (s != yajl_gen_status_ok)
+            goto out;
+    } else
+        s = yajl_gen_status_ok;
+
     for (i = 0; i < nb_domain; i++) {
         /* no detailed info available on dom0 */
         if (info[i].domid == 0)
@@ -2689,10 +2719,35 @@ static void list_domains_details(const l
         CHK_ERRNO(asprintf(&config_source, "<domid %d data>", info[i].domid));
         libxl_domain_config_init(&d_config);
         parse_config_data(config_source, (char *)data, len, &d_config, NULL);
-        printf_info(default_output_format, info[i].domid, &d_config);
+        if (default_output_format == OUTPUT_FORMAT_SXP)
+            printf_info_sexp(info[i].domid, &d_config);
+        else
+            s = printf_info_one_json(hand, info[i].domid, &d_config);
         libxl_domain_config_dispose(&d_config);
         free(data);
         free(config_source);
+        if (s != yajl_gen_status_ok)
+            goto out;
+    }
+
+    if (default_output_format == OUTPUT_FORMAT_JSON) {
+        s = yajl_gen_array_close(hand);
+        if (s != yajl_gen_status_ok)
+            goto out;
+
+        s = yajl_gen_get_buf(hand, (const unsigned char **)&buf, &yajl_len);
+        if (s != yajl_gen_status_ok)
+            goto out;
+
+        puts(buf);
+    }
+
+out:
+    if (default_output_format == OUTPUT_FORMAT_JSON) {
+        yajl_gen_free(hand);
+        if (s != yajl_gen_status_ok)
+            fprintf(stderr,
+                    "unable to format domain config as JSON (YAJL:%d)\n", s);
     }
 }
 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 15 16:15:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 16:15:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1gFd-0002RP-AW; Wed, 15 Aug 2012 16:15:21 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <topperxin@126.com>)
	id 1T1gFb-0002Qi-Od; Wed, 15 Aug 2012 16:15:19 +0000
X-Env-Sender: topperxin@126.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1345047302!9013053!1
X-Originating-IP: [220.181.15.48]
X-SpamReason: No, hits=0.3 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiAyMjAuMTgxLjE1LjQ4ID0+IDIwMTM2\n,sa_preprocessor: 
	QmFkIElQOiAyMjAuMTgxLjE1LjQ4ID0+IDIwMTM2\n,HTML_60_70,HTML_MESSAGE
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24799 invoked from network); 15 Aug 2012 16:15:06 -0000
Received: from m15-48.126.com (HELO m15-48.126.com) (220.181.15.48)
	by server-3.tower-27.messagelabs.com with SMTP;
	15 Aug 2012 16:15:06 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=126.com;
	s=s110527; h=Received:Date:From:To:Subject:Content-Type:
	MIME-Version:Message-ID; bh=U4Md9QxpXkd+x0r8bIx+KYCzE6XAKefzmX2o
	42ukiCc=; b=EkvQJ/grwTYqyheQb5YK2VtHIlR1jYPvBSfwhcZN+/rO2jsILyl9
	4taqnW9EnwkRaoeviZHzNM+aRMlRZI7agSgRWeZNp2eTYON0TaANHach8uJ/qb53
	CDQZybHvMO692hE9xDFcR4ZCYQVfDOxiGXn2OjhfO5ra17Nw9TqO2ic=
Received: from topperxin$126.com ( [211.157.141.3] ) by ajax-webmail-wmsvr48
	(Coremail) ; Thu, 16 Aug 2012 00:15:00 +0800 (CST)
X-Originating-IP: [211.157.141.3]
Date: Thu, 16 Aug 2012 00:15:00 +0800 (CST)
From: topperxin <topperxin@126.com>
To: xen-devel <xen-devel@lists.xensource.com>, 
	xen-users <xen-users@lists.xensource.com>
X-Priority: 3
X-Mailer: Coremail Webmail Server Version SP_ntes V3.5 build
	20120727(19208.4800.4773) Copyright (c) 2002-2012 www.mailtech.cn
	126com
X-CM-CTRLDATA: N9ZqmGZvb3Rlcl9odG09NDU5Ojgx
MIME-Version: 1.0
Message-ID: <56fc1d2.1863b.1392b110a04.Coremail.topperxin@126.com>
X-CM-TRANSID: MMqowEAZPUIFyytQXLETAA--.72W
X-CM-SenderInfo: xwrs1vhu0l0qqrswhudrp/1tbiLQ4MDk6AOkVcowACst
X-Coremail-Antispam: 1U5529EdanIXcx71UUUUU7vcSsGvfC2KfnxnUU==
Subject: [Xen-devel] SR-IOV & HVM & PV
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1361660099610858777=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============1361660099610858777==
Content-Type: multipart/alternative; 
	boundary="----=_Part_492430_1826974434.1345047300612"

------=_Part_492430_1826974434.1345047300612
Content-Type: text/plain; charset=GBK
Content-Transfer-Encoding: 7bit

Hi all
       Are there any limitations on which VM types SR-IOV works with?
       Can an SR-IOV NIC work in a PV VM, or only an HVM VM, or only a pv-on-hvm VM?
       I'm really confused; please help me. Thank you very much.


       Regards.
------=_Part_492430_1826974434.1345047300612
Content-Type: text/html; charset=GBK
Content-Transfer-Encoding: 7bit

<div style="line-height:1.7;color:#000000;font-size:14px;font-family:arial">Hi all<div>&nbsp; &nbsp; &nbsp; &nbsp;Are there any&nbsp;limitations&nbsp;on which VM types SR-IOV works with?</div><div>&nbsp; &nbsp; &nbsp; &nbsp;Can an SR-IOV NIC work in a PV VM, or only an HVM VM, or only a pv-on-hvm VM?</div><div>&nbsp; &nbsp; &nbsp; &nbsp;I'm really confused; please help me. Thank you very much.</div><div><br></div><div>&nbsp; &nbsp; &nbsp; &nbsp;Regards.</div></div><br><br><span title="neteasefooter"><span id="netease_mail_footer"></span></span>
------=_Part_492430_1826974434.1345047300612--



--===============1361660099610858777==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1361660099610858777==--



From xen-devel-bounces@lists.xen.org Wed Aug 15 16:20:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 16:20:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1gK4-0002gt-7A; Wed, 15 Aug 2012 16:19:56 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T1gCz-0002HF-3Y
	for xen-devel@lists.xensource.com; Wed, 15 Aug 2012 16:12:37 +0000
Received: from [85.158.139.83:3615] by server-1.bemta-5.messagelabs.com id
	42/11-09980-37ACB205; Wed, 15 Aug 2012 16:12:35 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-182.messagelabs.com!1345047106!28286590!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkxOTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7380 invoked from network); 15 Aug 2012 16:12:01 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-2.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Aug 2012 16:12:01 -0000
X-IronPort-AV: E=Sophos;i="4.77,773,1336348800"; d="scan'208";a="14024197"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	15 Aug 2012 16:11:45 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 15 Aug 2012 17:11:45 +0100
Message-ID: <1345047104.5926.233.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: topperxin <topperxin@126.com>
Date: Wed, 15 Aug 2012 17:11:44 +0100
In-Reply-To: <7c167b81.184e5.1392b05ed05.Coremail.topperxin@126.com>
References: <7c167b81.184e5.1392b05ed05.Coremail.topperxin@126.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
X-Mailman-Approved-At: Wed, 15 Aug 2012 16:19:54 +0000
Cc: xen-users <xen-users@lists.xensource.com>
Subject: Re: [Xen-devel] live migration with hvm type vm
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Please don't cross-post. I have moved the development list to BCC since
this question isn't really on topic there.

On Wed, 2012-08-15 at 17:02 +0100, topperxin wrote:
> hi all
>       A quick question: Can I do the live migration with HVM type VM?
> on the open source xen?

Yes.

>       Maybe, on the xenserver, live migration can only be done on the PV
> vm.
>       Why?

Probably to reduce the size of the support envelope or something like
that.

If you have questions about XenServer then they are best addressed to
your Support rep or to the XenServer forums.

Ian.
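
[Editorial note: for reference, open-source Xen's xl toolstack invokes a live migration of an HVM guest the same way as for a PV guest. The domain and host names below are made up for illustration:

```
# Live-migrate the running domain "guest1" to the destination host
# "dst-host"; xl tunnels the migration stream over ssh by default.
xl migrate guest1 dst-host
```
]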


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 15 16:36:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 16:36:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1gZa-0002zK-SH; Wed, 15 Aug 2012 16:35:58 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T1gZZ-0002zF-5X
	for xen-devel@lists.xen.org; Wed, 15 Aug 2012 16:35:57 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1345048550!9417757!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkxOTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30506 invoked from network); 15 Aug 2012 16:35:51 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Aug 2012 16:35:51 -0000
X-IronPort-AV: E=Sophos;i="4.77,773,1336348800"; d="scan'208";a="14024464"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	15 Aug 2012 16:35:50 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 15 Aug 2012 17:35:50 +0100
Message-ID: <1345048547.5926.245.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: George Dunlap <dunlapg@umich.edu>
Date: Wed, 15 Aug 2012 17:35:47 +0100
In-Reply-To: <CAFLBxZaEci0mOcDCgFX9zk=wh3z4Nf1LD5E5Fcy7Y3=ioDAM=g@mail.gmail.com>
References: <CAFLBxZaEci0mOcDCgFX9zk=wh3z4Nf1LD5E5Fcy7Y3=ioDAM=g@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [TESTDAY] xl cpupool-create segfaults if given
 invalid configuration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-08-14 at 18:33 +0100, George Dunlap wrote:
> # xl cpupool-create 'name="pool2" sched="credit2"'
> command line:2: config parsing error near `sched': syntax error,
> unexpected IDENT, expecting NEWLINE or ';'
> Failed to parse config file: Invalid argument
> *** glibc detected *** xl: free(): invalid pointer: 0x0000000001a79a10 ***
> Segmentation fault (core dumped)
> 
> Looking at the code in xl_cmdimpl.c:main_cpupoolcreate() it calls:
> * xlu_cfg_init()
> * xlu_cfg_readdata()
> 
> if the readdata() fails, it calls xlu_cfg_destroy() before returning.

I think the issue is the parser has:
        %destructor { xlu__cfg_set_free($$); }  value valuelist values
which frees the current "setting" but does not remove it from the list
of settings.

xlu_cfg_destroy then walks the list of settings and frees it again.

This stuff is more of an Ian J thing, but I wonder whether, when we hit
the syntax error, $$ still refers to the last value parsed, which we
think we are finished with but actually aren't? i.e. we've already
stashed it in the cfg and the reference in $$ is now "stale".

IOW, I wonder if the patch at the end is the right thing to do. It seems
to work for me but I don't know if it is good bison practice or not.

(aside; I had to find and install the Lenny version of bison to make the
autogen diff readable. We should bump this to at least Squeeze in 4.3. I
also trimmed the diff to the generated files to just the relevant
looking bit -- in particular I trimmed a lot of stuff which appeared to
be semi-automated modifications touching the generated files, e.g. the
addition of emacs blocks and removal of page breaks ^L)

> Other callers to readdata() seem to call exit(1) if readdata() fails.

For 4.2 I would be happy with a patch making this caller do the same, as a
lower-risk fix than changing the parser.

Ian.

diff -r af7143d97fa2 tools/libxl/libxlu_cfg_y.c
--- a/tools/libxl/libxlu_cfg_y.c	Tue Aug 14 15:59:38 2012 +0100
+++ b/tools/libxl/libxlu_cfg_y.c	Wed Aug 15 17:34:25 2012 +0100
@@ -1418,7 +1418,7 @@ yyreduce:
     {
         case 4:
 #line 50 "libxlu_cfg_y.y"
-    { xlu__cfg_set_store(ctx,(yyvsp[(1) - (3)].string),(yyvsp[(3) - (3)].setting),(yylsp[(3) - (3)]).first_line); ;}
+    { xlu__cfg_set_store(ctx,(yyvsp[(1) - (3)].string),(yyvsp[(3) - (3)].setting),(yylsp[(3) - (3)]).first_line); (yyvsp[(3) - (3)].setting) = NULL; ;}
     break;
 
   case 10:
diff -r af7143d97fa2 tools/libxl/libxlu_cfg_y.y
--- a/tools/libxl/libxlu_cfg_y.y	Tue Aug 14 15:59:38 2012 +0100
+++ b/tools/libxl/libxlu_cfg_y.y	Wed Aug 15 17:34:25 2012 +0100
@@ -47,7 +47,7 @@
 file: /* empty */
  |     file setting
 
-setting: IDENT '=' value      { xlu__cfg_set_store(ctx,$1,$3,@3.first_line); }
+setting: IDENT '=' value      { xlu__cfg_set_store(ctx,$1,$3,@3.first_line); $3 = NULL; }
                      endstmt
  |      endstmt
  |      error NEWLINE



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 15 16:43:18 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 16:43:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1ggU-00038c-Ni; Wed, 15 Aug 2012 16:43:06 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1T1ggT-00038T-Lp
	for xen-devel@lists.xensource.com; Wed, 15 Aug 2012 16:43:05 +0000
Received: from [85.158.139.83:56533] by server-12.bemta-5.messagelabs.com id
	BB/5F-22359-891DB205; Wed, 15 Aug 2012 16:43:04 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-9.tower-182.messagelabs.com!1345048982!27615321!1
X-Originating-IP: [192.89.123.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjg5LjEyMy4yNSA9PiA0NzkwODU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9778 invoked from network); 15 Aug 2012 16:43:04 -0000
Received: from smtp.tele.fi (HELO smtp.tele.fi) (192.89.123.25)
	by server-9.tower-182.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 15 Aug 2012 16:43:04 -0000
X-Originating-Ip: [194.89.68.22]
Received: from ydin.reaktio.net (reaktio.net [194.89.68.22])
	by smtp.tele.fi (Postfix) with ESMTP id DE0B01A1B;
	Wed, 15 Aug 2012 19:43:00 +0300 (EEST)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id A87962005D; Wed, 15 Aug 2012 19:43:00 +0300 (EEST)
Date: Wed, 15 Aug 2012 19:43:00 +0300
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: topperxin <topperxin@126.com>
Message-ID: <20120815164300.GF19851@reaktio.net>
References: <56fc1d2.1863b.1392b110a04.Coremail.topperxin@126.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <56fc1d2.1863b.1392b110a04.Coremail.topperxin@126.com>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: xen-devel <xen-devel@lists.xensource.com>,
	xen-users <xen-users@lists.xensource.com>
Subject: Re: [Xen-devel] SR-IOV & HVM & PV
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Aug 16, 2012 at 12:15:00AM +0800, topperxin wrote:
>    Hi all
>

Hello,

>           Are there any limitations on which VM types SR-IOV works with?
>           Can an SR-IOV NIC work in a PV VM, or only an HVM VM, or only a pv-on-hvm VM?
>           I'm really confused; please help me. Thank you very much.
>

I have used SR-IOV NIC VF passthru with both Xen PV and HVM guests.
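
[Editorial note: in both cases the VF is assigned to the guest with xl's ordinary PCI passthrough syntax. A hypothetical guest-config fragment (the BDF 0000:03:10.0 is a made-up address; list the VFs actually present on the host with lspci after enabling SR-IOV on the PF):

```
# Pass one SR-IOV virtual function through to this guest by its PCI BDF.
pci = [ '0000:03:10.0' ]
```
]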

-- Pasi


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	xen-users <xen-users@lists.xensource.com>
Subject: Re: [Xen-devel] sr-iov & hvm & PV
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Aug 16, 2012 at 12:15:00AM +0800, topperxin wrote:
>    Hi all
>

Hello,

>           Are there any limitations on which VM types SR-IOV works with?
>           Can an SR-IOV NIC work in a PV VM, or only an HVM VM, or only a pv-on-hvm VM?
>           I'm really confused; please help me. Thank you very much.
>

I have used SR-IOV NIC VF passthru with both Xen PV and HVM guests.
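For reference, a VF is assigned to a guest the same way as any other PCI passthrough device. A minimal sketch of the dom0 side; the driver name, max_vfs value and the BDF 0000:04:10.0 are example values only, not taken from this thread:

```shell
# Hypothetical dom0 setup for SR-IOV VF passthrough (illustrative values):
#
#   modprobe igb max_vfs=2              # driver-specific option: create the VFs
#   lspci | grep "Virtual Function"     # note the VF's BDF
#   xm pci-list-assignable-devices      # confirm the VF is assignable
#
# Then, in the guest config file (for PV and HVM guests alike):
#
#   pci = [ '0000:04:10.0' ]
```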

-- Pasi


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 15 16:55:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 16:55:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1gsc-0003aw-MS; Wed, 15 Aug 2012 16:55:38 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T1gsb-0003aq-Tr
	for xen-devel@lists.xensource.com; Wed, 15 Aug 2012 16:55:38 +0000
Received: from [85.158.143.99:19049] by server-1.bemta-4.messagelabs.com id
	4F/6C-07754-984DB205; Wed, 15 Aug 2012 16:55:37 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-10.tower-216.messagelabs.com!1345049734!21968033!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkxOTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15530 invoked from network); 15 Aug 2012 16:55:34 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-10.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Aug 2012 16:55:34 -0000
X-IronPort-AV: E=Sophos;i="4.77,773,1336348800"; d="scan'208";a="14024657"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	15 Aug 2012 16:55:34 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 15 Aug 2012 17:55:34 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1T1gsX-0006ep-Vg;
	Wed, 15 Aug 2012 16:55:34 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1T1gsX-0001jG-OT;
	Wed, 15 Aug 2012 17:55:33 +0100
To: xen-devel@lists.xensource.com
Message-ID: <osstest-13604-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 15 Aug 2012 17:55:33 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13604: regressions - trouble:
	broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13604 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13604/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-i386-i386-xl-qemuu-winxpsp3  4 xen-install           fail REGR. vs. 13603
 test-amd64-amd64-xl-winxpsp3 12 guest-localmigrate/x10    fail REGR. vs. 13603
 test-i386-i386-win            4 xen-install               fail REGR. vs. 13603
 test-amd64-amd64-win          3 host-install(3)         broken REGR. vs. 13603
 test-amd64-amd64-xl-qemuu-win7-amd64  3 host-install(3) broken REGR. vs. 13603
 test-i386-i386-xl-win         4 xen-install               fail REGR. vs. 13603
 test-amd64-i386-pair         3 host-install/src_host(3) broken REGR. vs. 13603
 test-amd64-i386-pair         4 host-install/dst_host(4) broken REGR. vs. 13603
 test-amd64-amd64-pair        16 guest-start               fail REGR. vs. 13603

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13603
 test-amd64-amd64-xl-sedf-pin  3 host-install(3)         broken REGR. vs. 13603

Tests which did not succeed, but are not blocking:
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  6d56e31fe1e1
baseline version:
 xen                  af7143d97fa2

------------------------------------------------------------
People who touched revisions under test:
  Boris Ostrovsky <boris.ostrovsky@amd.com>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         broken  
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-i386-pair                                         broken  
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 broken  
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         broken  
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   25753:6d56e31fe1e1
tag:         tip
user:        Keir Fraser <keir@xen.org>
date:        Wed Aug 15 09:41:21 2012 +0100
    
    etherboot: Build fixes for gcc 4.7.
    
    Signed-off-by: Keir Fraser <keir@xen.org>
    
    
changeset:   25752:1df4fdbaade0
user:        Boris Ostrovsky <boris.ostrovsky@amd.com>
date:        Wed Aug 15 09:43:25 2012 +0200
    
    acpi: Make sure valid CPU is passed to do_pm_op()
    
    Passing an invalid CPU value to do_pm_op() will trigger an assertion
    in cpu_online().
    
    Signed-off-by: Boris Ostrovsky <boris.ostrovsky@amd.com>
    
    Such checks would, at first glance, then also appear to be missing at
    the top of various helper functions, but these checks really were
    already redundant with the check in do_pm_op(). Remove the redundant
    checks for clarity and brevity.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    
    
changeset:   25751:02b4d5fedb7b
user:        Daniel De Graaf <dgdegra@tycho.nsa.gov>
date:        Wed Aug 15 09:42:14 2012 +0200
    
    x86-64/EFI: add CFLAGS to check compile
    
    Without this, the compilation of check.c could fail due to compiler
    features such as -fstack-protector being enabled, which causes a
    missing __stack_chk_fail symbol error.
    
    Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
    
    Rather than using plain CFLAGS here, remove CFLAGS-y from them, in
    particular to get rid of the -MF argument referencing $(@F) (which is
    undefined here).
    
    Using CFLAGS at the same time allows dropping the explicit -Werror.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Committed-by: Jan Beulich <jbeulich@suse.com>
    
    
changeset:   25750:af7143d97fa2
user:        Ian Jackson <Ian.Jackson@eu.citrix.com>
date:        Tue Aug 14 15:59:38 2012 +0100
    
    QEMU_TAG update
    
    
========================================
commit effd5676225761abdab90becac519716515c3be4
Author: Ian Jackson <ian.jackson@eu.citrix.com>
Date:   Tue Aug 14 15:57:49 2012 +0100

    Revert "qemu-xen-traditional: use O_DIRECT to open disk images for IDE"
    
    This reverts commit 1307e42a4b3c1102d75401bc0cffb4eb6c9b7a38.
    
    In fact, after a lengthy discussion, we came to the conclusion
    that WRITEBACK is OK for IDE.
    See: http://marc.info/?l=xen-devel&m=133311527009773
    
    Therefore revert this change, which was committed in error.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 15 17:08:22 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 17:08:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1h4e-0003rV-6Y; Wed, 15 Aug 2012 17:08:04 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <tparker@cbnco.com>) id 1T1h4b-0003rQ-Gw
	for xen-devel@lists.xen.org; Wed, 15 Aug 2012 17:08:01 +0000
Received: from [85.158.139.83:38657] by server-7.bemta-5.messagelabs.com id
	81/BE-32634-077DB205; Wed, 15 Aug 2012 17:08:00 +0000
X-Env-Sender: tparker@cbnco.com
X-Msg-Ref: server-16.tower-182.messagelabs.com!1345050478!20956773!1
X-Originating-IP: [207.164.182.72]
X-SpamReason: No, hits=0.1 required=7.0 tests=HTML_30_40,HTML_MESSAGE
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30672 invoked from network); 15 Aug 2012 17:07:59 -0000
Received: from smtp.cbnco.com (HELO smtp.cbnco.com) (207.164.182.72)
	by server-16.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 15 Aug 2012 17:07:59 -0000
Received: from localhost (localhost [127.0.0.1])
	by smtp.cbnco.com (Postfix) with ESMTP id BC3E3108A7D7
	for <xen-devel@lists.xen.org>; Wed, 15 Aug 2012 13:07:57 -0400 (EDT)
Received: from smtp.cbnco.com ([127.0.0.1])
	by localhost (mail.cbnco.com [127.0.0.1]) (amavisd-new, port 10024)
	with ESMTP id 07786-03 for <xen-devel@lists.xen.org>;
	Wed, 15 Aug 2012 13:07:57 -0400 (EDT)
Received: from [172.16.32.130] (dmzgw2.cbnco.com [207.164.182.65])
	by smtp.cbnco.com (Postfix) with ESMTPSA id 74E3A108A7CC
	for <xen-devel@lists.xen.org>; Wed, 15 Aug 2012 13:07:57 -0400 (EDT)
Message-ID: <502BD75B.9040301@cbnco.com>
Date: Wed, 15 Aug 2012 13:07:39 -0400
From: Tom Parker <tparker@cbnco.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120713 Thunderbird/14.0
MIME-Version: 1.0
To: xen-devel@lists.xen.org
X-Virus-Scanned: amavisd-new at cbnco.com
Subject: [Xen-devel] PV USB Use Case for Xen 4.x
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2452941091161512089=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.
--===============2452941091161512089==
Content-Type: multipart/alternative;
 boundary="------------040300040600080202000602"

This is a multi-part message in MIME format.
--------------040300040600080202000602
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: 7bit

Good afternoon.  My colleague Stefan (sstan) was asked on the IRC
channel to provide our use case for PV USB in our environment.  PV USB
is possible with the current xm stack but not available with the xl stack.

Currently we use PVUSB to attach a USB smartcard reader through our dom0
(SLES 11 SP1), which runs on an HP blade server with the token mounted
on an internal USB port, to our domU CA server (SLES 11).

The config file syntax is broken, so we have to attach manually (I have
it scripted) whenever our hosts reboot (which is almost never).

On the dom0 server I have to do the following steps:

/usr/sbin/xm usb-list-assignable-devices
    (get the bus-id of the USB device)
/usr/sbin/xm usb-hc-create $Domain 2 2
    (create a USB 2.0 root hub with 2 ports in $Domain)
/usr/sbin/xm usb-attach $Domain $DevId $PortNumber $BusId
    (attach the bus-id found in step 1 to the hub created in step 2)
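The steps above can be sketched as a small re-attach script. The domain name, device id, port and bus id below are placeholder values, not taken from this message; substitute the ones reported by usb-list-assignable-devices:

```shell
#!/bin/sh
# Hypothetical PVUSB re-attach script for the steps above.  All four
# values are assumptions -- fill in your own before use.
DOMAIN=mgaca      # the domU that should see the smartcard reader
DEVID=0           # id of the virtual root hub created below
PORT=1            # port on that hub to plug the device into
BUSID=1-4         # bus id from 'xm usb-list-assignable-devices'
XM=/usr/sbin/xm

# Print each command instead of running it, so the sequence can be
# checked on a host without Xen; drop the echo to attach for real.
run() { echo "$@"; }

run "$XM" usb-hc-create "$DOMAIN" 2 2                    # USB 2.0 hub, 2 ports
run "$XM" usb-attach "$DOMAIN" "$DEVID" "$PORT" "$BUSID" # plug the device in
```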

On the domU, lsusb looks like this after the above (before, it returns
nothing):

mgaca:~ # lsusb
Bus 001 Device 002: ID 04e6:5116 SCM Microsystems, Inc. SCR331-LC1
SmartCard Reader
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

Once I have done this, I can use the USB device in the domU as if it
were directly connected.

Thanks for your time.

Tom Parker
Canadian Bank Note Company, Ltd.
tparker@cbnco.com

--------------040300040600080202000602--


--===============2452941091161512089==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2452941091161512089==--


From xen-devel-bounces@lists.xen.org Wed Aug 15 17:08:22 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 17:08:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1h4e-0003rV-6Y; Wed, 15 Aug 2012 17:08:04 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <tparker@cbnco.com>) id 1T1h4b-0003rQ-Gw
	for xen-devel@lists.xen.org; Wed, 15 Aug 2012 17:08:01 +0000
Received: from [85.158.139.83:38657] by server-7.bemta-5.messagelabs.com id
	81/BE-32634-077DB205; Wed, 15 Aug 2012 17:08:00 +0000
X-Env-Sender: tparker@cbnco.com
X-Msg-Ref: server-16.tower-182.messagelabs.com!1345050478!20956773!1
X-Originating-IP: [207.164.182.72]
X-SpamReason: No, hits=0.1 required=7.0 tests=HTML_30_40,HTML_MESSAGE
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30672 invoked from network); 15 Aug 2012 17:07:59 -0000
Received: from smtp.cbnco.com (HELO smtp.cbnco.com) (207.164.182.72)
	by server-16.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 15 Aug 2012 17:07:59 -0000
Received: from localhost (localhost [127.0.0.1])
	by smtp.cbnco.com (Postfix) with ESMTP id BC3E3108A7D7
	for <xen-devel@lists.xen.org>; Wed, 15 Aug 2012 13:07:57 -0400 (EDT)
Received: from smtp.cbnco.com ([127.0.0.1])
	by localhost (mail.cbnco.com [127.0.0.1]) (amavisd-new, port 10024)
	with ESMTP id 07786-03 for <xen-devel@lists.xen.org>;
	Wed, 15 Aug 2012 13:07:57 -0400 (EDT)
Received: from [172.16.32.130] (dmzgw2.cbnco.com [207.164.182.65])
	by smtp.cbnco.com (Postfix) with ESMTPSA id 74E3A108A7CC
	for <xen-devel@lists.xen.org>; Wed, 15 Aug 2012 13:07:57 -0400 (EDT)
Message-ID: <502BD75B.9040301@cbnco.com>
Date: Wed, 15 Aug 2012 13:07:39 -0400
From: Tom Parker <tparker@cbnco.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120713 Thunderbird/14.0
MIME-Version: 1.0
To: xen-devel@lists.xen.org
X-Virus-Scanned: amavisd-new at cbnco.com
Subject: [Xen-devel] PV USB Use Case for Xen 4.x
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2452941091161512089=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.
--===============2452941091161512089==
Content-Type: multipart/alternative;
 boundary="------------040300040600080202000602"

This is a multi-part message in MIME format.
--------------040300040600080202000602
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: 7bit

Good Afternoon.  My colleague Stefan (sstan) was asked on the IRC
channel to provide our use case for PV USB in our environment.  This is
possible with the current xm stack but not available with the xl stack.

Currently we use PVUSB to attach a USB Smartcard reader through our dom0
(SLES 11 SP1), running on an HP Blade Server with the token mounted on an
internal USB port, to our domU CA server (SLES 11).

The config file syntax is broken, so we have to manually attach (I have
it scripted) whenever our hosts reboot (which is almost never).

On the dom0 server I have to do the following steps:

*/usr/sbin/xm usb-list-assignable-devices* (get the bus-id of the USB
device)
*/usr/sbin/xm usb-hc-create $Domain 2 2* (Create a USB 2.0 Root Hub with
2 ports in $Domain)
*/usr/sbin/xm usb-attach $Domain $DevId $PortNumber $BusId* (Attach the
USB bus-id found in step 1 to the hub created in step 2)
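The author mentions having scripted these steps; a minimal sketch of such a script is below. The `XM` override, the function name, and the default device/port IDs are assumptions for illustration (the real bus-id must still be picked by hand from step 1's output):

```shell
#!/bin/sh
# Sketch of the three dom0 PVUSB steps quoted above.
XM="${XM:-/usr/sbin/xm}"   # override with XM=echo for a dry run

pvusb_attach() {
    domain="$1"       # target domU name
    busid="$2"        # bus-id chosen from step 1's listing
    devid="${3:-0}"   # virtual host controller ID created in step 2
    port="${4:-1}"    # port on that controller
    # Step 1: list assignable devices (the bus-id comes from this output)
    "$XM" usb-list-assignable-devices
    # Step 2: create a USB 2.0 root hub with 2 ports in the domain
    "$XM" usb-hc-create "$domain" 2 2
    # Step 3: attach the chosen bus-id to the hub
    "$XM" usb-attach "$domain" "$devid" "$port" "$busid"
}
```

For example, `pvusb_attach mgaca 1-4` would attach a (hypothetical) bus-id 1-4 to domain mgaca using the defaults.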

On the domU, lsusb output looks like this after the steps above (before
them it returns nothing):
*
mgaca:~ # lsusb
Bus 001 Device 002: ID 04e6:5116 SCM Microsystems, Inc. SCR331-LC1
SmartCard Reader
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub*

Once I have done this I can use the USB device in the domU as if it were
directly connected.

Thanks for your time.

Tom Parker
Canadian Bank Note Company, Ltd.
tparker@cbnco.com

--------------040300040600080202000602--


--===============2452941091161512089==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2452941091161512089==--


From xen-devel-bounces@lists.xen.org Wed Aug 15 17:20:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 17:20:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1hGV-00041v-KW; Wed, 15 Aug 2012 17:20:19 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <msw@amazon.com>) id 1T1hGM-00041n-TS
	for xen-devel@lists.xen.org; Wed, 15 Aug 2012 17:20:17 +0000
Received: from [85.158.143.35:42194] by server-3.bemta-4.messagelabs.com id
	CC/E6-09529-A4ADB205; Wed, 15 Aug 2012 17:20:10 +0000
X-Env-Sender: msw@amazon.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1345051208!15941420!1
X-Originating-IP: [72.21.198.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNzIuMjEuMTk4LjI1ID0+IDE2MzU5NQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11336 invoked from network); 15 Aug 2012 17:20:09 -0000
Received: from smtp-fw-4101.amazon.com (HELO smtp-fw-4101.amazon.com)
	(72.21.198.25)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Aug 2012 17:20:09 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=amazon.com; i=msw@amazon.com; q=dns/txt; s=rte02;
	t=1345051209; x=1376587209;
	h=date:from:to:cc:subject:message-id:references:
	mime-version:in-reply-to;
	bh=RZDM3wRxk3XMAHyLD7MIkg179Ip48lecWYX/3NPFFWY=;
	b=PVvCVSkRsH95Z31eJ14YNVZQuQfUG4ZFZh2L898moImvz325XX03CvR/
	+l8i4IHRis50I8C4i7MrgsDqC3nqrA==;
X-IronPort-AV: E=Sophos;i="4.77,774,1336348800"; d="scan'208";a="780532946"
Received: from smtp-in-1104.vdc.amazon.com ([10.140.10.25])
	by smtp-border-fw-out-4101.iad4.amazon.com with
	ESMTP/TLS/DHE-RSA-AES256-SHA; 15 Aug 2012 17:19:58 +0000
Received: from ex10-hub-31006.ant.amazon.com (ex10-hub-31006.sea31.amazon.com
	[10.185.176.13])
	by smtp-in-1104.vdc.amazon.com (8.13.8/8.13.8) with ESMTP id
	q7FHJvm4003609
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=FAIL);
	Wed, 15 Aug 2012 17:19:58 GMT
Received: from US-SEA-R8XVZTX (10.224.80.44) by ex10-hub-31006.ant.amazon.com
	(10.185.176.13) with Microsoft SMTP Server id 14.2.247.3;
	Wed, 15 Aug 2012 10:19:53 -0700
Received: by US-SEA-R8XVZTX (sSMTP sendmail emulation); Wed, 15 Aug 2012
	10:19:53 -0700
Date: Wed, 15 Aug 2012 10:19:53 -0700
From: Matt Wilson <msw@amazon.com>
To: Lars Kurth <lars.kurth@xen.org>
Message-ID: <20120815171953.GB9984@US-SEA-R8XVZTX>
References: <502934CB.7060409@xen.org>
	<1344941807.5926.25.camel@zakaz.uk.xensource.com>
	<502A3A0B.6000401@xen.org>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <502A3A0B.6000401@xen.org>
User-Agent: Mutt/1.5.20 (2009-12-10)
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.2 Limits (needs review)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Aug 14, 2012 at 04:44:11AM -0700, Lars Kurth wrote:
> On 14/08/2012 11:56, Ian Campbell wrote:
> > On Mon, 2012-08-13 at 18:09 +0100, Lars Kurth wrote:
> >> Feel free to reply to the thread or add to
> >> http://wiki.xen.org/wiki/Talk:Xen_4.2_Limits
> > Is the intention to merge this into
> > http://wiki.xen.org/wiki/Xen_Release_Features when the release happens?
> Ian, we could merge it, or we could have a line  in the matrix linking 
> to the release limits.
> No strong preference

Lars,

Some of the per-guest limits on http://wiki.xen.org/wiki/Xen_4.2_Limits 
do not match those on http://wiki.xen.org/wiki/Xen_Release_Features. For
example, the release features page says HVM guests have a limit of
1 TB (should be TiB) of RAM, but the 4.2 limits page says 512 or 256.

Matt

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 15 17:26:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 17:26:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1hM8-0004Av-E7; Wed, 15 Aug 2012 17:26:08 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T1hM7-0004Ap-ED
	for xen-devel@lists.xen.org; Wed, 15 Aug 2012 17:26:07 +0000
Received: from [85.158.139.83:32309] by server-10.bemta-5.messagelabs.com id
	2F/9C-13125-EABDB205; Wed, 15 Aug 2012 17:26:06 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-4.tower-182.messagelabs.com!1345051566!25595208!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkxOTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21747 invoked from network); 15 Aug 2012 17:26:06 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-4.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Aug 2012 17:26:06 -0000
X-IronPort-AV: E=Sophos;i="4.77,774,1336348800"; d="scan'208";a="14024949"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	15 Aug 2012 17:26:05 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 15 Aug 2012 18:26:05 +0100
Date: Wed, 15 Aug 2012 18:25:49 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Attilio Rao <attilio.rao@citrix.com>
In-Reply-To: <1344947062-4796-2-git-send-email-attilio.rao@citrix.com>
Message-ID: <alpine.DEB.2.02.1208151824250.2278@kaball.uk.xensource.com>
References: <1344947062-4796-1-git-send-email-attilio.rao@citrix.com>
	<1344947062-4796-2-git-send-email-attilio.rao@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v2 1/2] XEN,
 X86: Improve semantic support for pagetable_reserve PVOPS
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 14 Aug 2012, Attilio Rao wrote:
> - Allow xen_mapping_pagetable_reserve() to handle a start different from
>   pgt_buf_start, but still bigger than it.
> - Add checks to xen_mapping_pagetable_reserve() and native_pagetable_reserve()
>   for verifying start and end are contained in the range
>   [pgt_buf_start, pgt_buf_top].
> - In xen_mapping_pagetable_reserve(), change printk into pr_debug.
> - In xen_mapping_pagetable_reserve(), print out diagnostic only if there is
>   an actual need to do that (or, in other words, if there are actually some
>   pages going to switch from RO to RW).
> 
> Signed-off-by: Attilio Rao <attilio.rao@citrix.com>
> ---
>  arch/x86/mm/init.c |    4 ++++
>  arch/x86/xen/mmu.c |   22 ++++++++++++++++++++--
>  2 files changed, 24 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
> index e0e6990..c5849b6 100644
> --- a/arch/x86/mm/init.c
> +++ b/arch/x86/mm/init.c
> @@ -92,6 +92,10 @@ static void __init find_early_table_space(struct map_range *mr, unsigned long en
>  
>  void __init native_pagetable_reserve(u64 start, u64 end)
>  {
> +	if (start < PFN_PHYS(pgt_buf_start) || end > PFN_PHYS(pgt_buf_top))
> +		panic("Invalid address range: [%llu - %llu] should be a subset of [%llu - %llu]\n"

code style (you can check whether your patch breaks the code style with
scripts/checkpatch.pl)

> +			start, end, PFN_PHYS(pgt_buf_start),
> +			PFN_PHYS(pgt_buf_top));
>  	memblock_reserve(start, end - start);
>  }
>  
> diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
> index b65a761..66d73a2 100644
> --- a/arch/x86/xen/mmu.c
> +++ b/arch/x86/xen/mmu.c
> @@ -1180,12 +1180,30 @@ static void __init xen_pagetable_setup_start(pgd_t *base)
>  
>  static __init void xen_mapping_pagetable_reserve(u64 start, u64 end)
>  {
> +	u64 begin;
> +
> +	begin = PFN_PHYS(pgt_buf_start);
> +
> +	if (start < begin || end > PFN_PHYS(pgt_buf_top))
> +		panic("Invalid address range: [%llu - %llu] should be a subset of [%llu - %llu]\n"

code style

> +			start, end, begin, PFN_PHYS(pgt_buf_top));
> +
> +	/* set RW the initial range */
> +	if  (start != begin)
> +		pr_debug("xen: setting RW the range %llx - %llx\n",
> +			begin, start);
> +	while (begin < start) {
> +		make_lowmem_page_readwrite(__va(begin));
> +		begin += PAGE_SIZE;
> +	}
> +
>  	/* reserve the range used */
>  	native_pagetable_reserve(start, end);
>  
>  	/* set as RW the rest */
> -	printk(KERN_DEBUG "xen: setting RW the range %llx - %llx\n", end,
> -			PFN_PHYS(pgt_buf_top));
> +	if (end != PFN_PHYS(pgt_buf_top))
> +		pr_debug("xen: setting RW the range %llx - %llx\n",
> +			end, PFN_PHYS(pgt_buf_top));
>  	while (end < PFN_PHYS(pgt_buf_top)) {
>  		make_lowmem_page_readwrite(__va(end));
>  		end += PAGE_SIZE;
> -- 
> 1.7.2.5
> 
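[Editorial note: the range check the patch adds — start and end must lie within [pgt_buf_start, pgt_buf_top] after PFN_PHYS() conversion — can be illustrated with a self-contained mock. `PFN_PHYS`, the PFN values, and `range_is_valid` below are stand-ins for this sketch, not the kernel's actual code.]

```c
#include <stdint.h>

#define PAGE_SHIFT 12
/* Mock of the kernel's PFN_PHYS(): page frame number -> physical address. */
#define PFN_PHYS(pfn) ((uint64_t)(pfn) << PAGE_SHIFT)

/* Illustrative PFNs standing in for the real pgt_buf_start/pgt_buf_top. */
static const uint64_t pgt_buf_start = 0x100;
static const uint64_t pgt_buf_top   = 0x200;

/* Returns 1 exactly when the patch's new check would NOT panic(), i.e.
 * [start, end] is a subset of
 * [PFN_PHYS(pgt_buf_start), PFN_PHYS(pgt_buf_top)]. */
static int range_is_valid(uint64_t start, uint64_t end)
{
	return !(start < PFN_PHYS(pgt_buf_start) ||
		 end > PFN_PHYS(pgt_buf_top));
}
```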

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 15 17:28:34 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 17:28:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1hOF-0004G4-Um; Wed, 15 Aug 2012 17:28:19 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <attilio.rao@citrix.com>) id 1T1hOE-0004Fx-ND
	for xen-devel@lists.xen.org; Wed, 15 Aug 2012 17:28:18 +0000
Received: from [85.158.138.51:8203] by server-5.bemta-3.messagelabs.com id
	25/B7-08865-13CDB205; Wed, 15 Aug 2012 17:28:17 +0000
X-Env-Sender: attilio.rao@citrix.com
X-Msg-Ref: server-5.tower-174.messagelabs.com!1345051697!28539899!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkxOTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10067 invoked from network); 15 Aug 2012 17:28:17 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-5.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Aug 2012 17:28:17 -0000
X-IronPort-AV: E=Sophos;i="4.77,774,1336348800"; d="scan'208";a="14024975"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	15 Aug 2012 17:28:16 +0000
Received: from [10.80.3.221] (10.80.3.221) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 15 Aug 2012 18:28:16 +0100
Message-ID: <502BD943.8030701@citrix.com>
Date: Wed, 15 Aug 2012 18:15:47 +0100
From: Attilio Rao <attilio.rao@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US;
	rv:1.9.1.16) Gecko/20111110 Icedove/3.0.11
MIME-Version: 1.0
To: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
References: <1344947062-4796-1-git-send-email-attilio.rao@citrix.com>
	<1344947062-4796-2-git-send-email-attilio.rao@citrix.com>
	<alpine.DEB.2.02.1208151824250.2278@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1208151824250.2278@kaball.uk.xensource.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v2 1/2] XEN,
 X86: Improve semantic support for pagetable_reserve PVOPS
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 15/08/12 18:25, Stefano Stabellini wrote:
> On Tue, 14 Aug 2012, Attilio Rao wrote:
>    
>> - Allow xen_mapping_pagetable_reserve() to handle a start different from
>>    pgt_buf_start, but still bigger than it.
>> - Add checks to xen_mapping_pagetable_reserve() and native_pagetable_reserve()
>>    for verifying start and end are contained in the range
>>    [pgt_buf_start, pgt_buf_top].
>> - In xen_mapping_pagetable_reserve(), change printk into pr_debug.
>> - In xen_mapping_pagetable_reserve(), print out diagnostic only if there is
>>    an actual need to do that (or, in other words, if there are actually some
>>    pages going to switch from RO to RW).
>>
>> Signed-off-by: Attilio Rao<attilio.rao@citrix.com>
>> ---
>>   arch/x86/mm/init.c |    4 ++++
>>   arch/x86/xen/mmu.c |   22 ++++++++++++++++++++--
>>   2 files changed, 24 insertions(+), 2 deletions(-)
>>
>> diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
>> index e0e6990..c5849b6 100644
>> --- a/arch/x86/mm/init.c
>> +++ b/arch/x86/mm/init.c
>> @@ -92,6 +92,10 @@ static void __init find_early_table_space(struct map_range *mr, unsigned long en
>>
>>   void __init native_pagetable_reserve(u64 start, u64 end)
>>   {
>> +	if (start<  PFN_PHYS(pgt_buf_start) || end>  PFN_PHYS(pgt_buf_top))
>> +		panic("Invalid address range: [%llu - %llu] should be a subset of [%llu - %llu]\n"
>>      
> code style (you can check whether your patch breaks the code style with
> scripts/checkpatch.pl)
>    

I actually ran it before submitting; it reported 0 errors/warnings.
Do you have a handy link to a style guide for the Linux 
kernel? I tried to follow what other parts of the code do.

Attilio

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 15 17:34:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 17:34:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1hTd-0004Ty-Nj; Wed, 15 Aug 2012 17:33:53 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T1hTb-0004Tt-Dl
	for xen-devel@lists.xen.org; Wed, 15 Aug 2012 17:33:51 +0000
Received: from [85.158.143.99:32396] by server-3.bemta-4.messagelabs.com id
	4F/FF-09529-E7DDB205; Wed, 15 Aug 2012 17:33:50 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-4.tower-216.messagelabs.com!1345052028!23073282!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkxOTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14253 invoked from network); 15 Aug 2012 17:33:49 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-4.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Aug 2012 17:33:49 -0000
X-IronPort-AV: E=Sophos;i="4.77,774,1336348800"; d="scan'208";a="14025024"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	15 Aug 2012 17:33:48 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 15 Aug 2012 18:33:48 +0100
Date: Wed, 15 Aug 2012 18:33:33 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
In-Reply-To: <alpine.DEB.2.02.1208151824250.2278@kaball.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1208151833110.2278@kaball.uk.xensource.com>
References: <1344947062-4796-1-git-send-email-attilio.rao@citrix.com>
	<1344947062-4796-2-git-send-email-attilio.rao@citrix.com>
	<alpine.DEB.2.02.1208151824250.2278@kaball.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Attilio Rao <attilio.rao@citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v2 1/2] XEN,
 X86: Improve semantic support for pagetable_reserve PVOPS
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

You can add my Acked-by once you make these changes.

On Wed, 15 Aug 2012, Stefano Stabellini wrote:
> On Tue, 14 Aug 2012, Attilio Rao wrote:
> > - Allow xen_mapping_pagetable_reserve() to handle a start different from
> >   pgt_buf_start, but still bigger than it.
> > - Add checks to xen_mapping_pagetable_reserve() and native_pagetable_reserve()
> >   for verifying start and end are contained in the range
> >   [pgt_buf_start, pgt_buf_top].
> > - In xen_mapping_pagetable_reserve(), change printk into pr_debug.
> > - In xen_mapping_pagetable_reserve(), print out diagnostic only if there is
> >   an actual need to do that (or, in other words, if there are actually some
> >   pages going to switch from RO to RW).
> > 
> > Signed-off-by: Attilio Rao <attilio.rao@citrix.com>
> > ---
> >  arch/x86/mm/init.c |    4 ++++
> >  arch/x86/xen/mmu.c |   22 ++++++++++++++++++++--
> >  2 files changed, 24 insertions(+), 2 deletions(-)
> > 
> > diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
> > index e0e6990..c5849b6 100644
> > --- a/arch/x86/mm/init.c
> > +++ b/arch/x86/mm/init.c
> > @@ -92,6 +92,10 @@ static void __init find_early_table_space(struct map_range *mr, unsigned long en
> >  
> >  void __init native_pagetable_reserve(u64 start, u64 end)
> >  {
> > +	if (start < PFN_PHYS(pgt_buf_start) || end > PFN_PHYS(pgt_buf_top))
> > +		panic("Invalid address range: [%llu - %llu] should be a subset of [%llu - %llu]\n"
> 
> code style (you can check whether your patch breaks the code style with
> scripts/checkpatch.pl)
> 
> > +			start, end, PFN_PHYS(pgt_buf_start),
> > +			PFN_PHYS(pgt_buf_top));
> >  	memblock_reserve(start, end - start);
> >  }
> >  
> > diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
> > index b65a761..66d73a2 100644
> > --- a/arch/x86/xen/mmu.c
> > +++ b/arch/x86/xen/mmu.c
> > @@ -1180,12 +1180,30 @@ static void __init xen_pagetable_setup_start(pgd_t *base)
> >  
> >  static __init void xen_mapping_pagetable_reserve(u64 start, u64 end)
> >  {
> > +	u64 begin;
> > +
> > +	begin = PFN_PHYS(pgt_buf_start);
> > +
> > +	if (start < begin || end > PFN_PHYS(pgt_buf_top))
> > +		panic("Invalid address range: [%llu - %llu] should be a subset of [%llu - %llu]\n"
> 
> code style
> 
> > +			start, end, begin, PFN_PHYS(pgt_buf_top));
> > +
> > +	/* set RW the initial range */
> > +	if  (start != begin)
> > +		pr_debug("xen: setting RW the range %llx - %llx\n",
> > +			begin, start);
> > +	while (begin < start) {
> > +		make_lowmem_page_readwrite(__va(begin));
> > +		begin += PAGE_SIZE;
> > +	}
> > +
> >  	/* reserve the range used */
> >  	native_pagetable_reserve(start, end);
> >  
> >  	/* set as RW the rest */
> > -	printk(KERN_DEBUG "xen: setting RW the range %llx - %llx\n", end,
> > -			PFN_PHYS(pgt_buf_top));
> > +	if (end != PFN_PHYS(pgt_buf_top))
> > +		pr_debug("xen: setting RW the range %llx - %llx\n",
> > +			end, PFN_PHYS(pgt_buf_top));
> >  	while (end < PFN_PHYS(pgt_buf_top)) {
> >  		make_lowmem_page_readwrite(__va(end));
> >  		end += PAGE_SIZE;


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 15 17:39:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 17:39:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1hZA-0004co-GM; Wed, 15 Aug 2012 17:39:36 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1T1hZ9-0004cc-Jr
	for xen-devel@lists.xen.org; Wed, 15 Aug 2012 17:39:35 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1345052369!2078880!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkxOTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11271 invoked from network); 15 Aug 2012 17:39:29 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Aug 2012 17:39:29 -0000
X-IronPort-AV: E=Sophos;i="4.77,774,1336348800"; d="scan'208";a="14025063"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	15 Aug 2012 17:39:29 +0000
Received: from dhcp-3-120.uk.xensource.com (10.80.3.120) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 15 Aug 2012 18:39:29 +0100
Message-ID: <502BDF14.1060103@citrix.com>
Date: Wed, 15 Aug 2012 18:40:36 +0100
From: Roger Pau Monne <roger.pau@citrix.com>
User-Agent: Postbox 3.0.4 (Macintosh/20120616)
MIME-Version: 1.0
To: Ian Campbell <ian.campbell@citrix.com>
References: <7cec0543f67cefe3755b.1345046336@cosworth.uk.xensource.com>
In-Reply-To: <7cec0543f67cefe3755b.1345046336@cosworth.uk.xensource.com>
X-Enigmail-Version: 1.2.2
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] libxl: make domain resume API asynchronous
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell wrote:
> # HG changeset patch
> # User Ian Campbell <ian.campbell@citrix.com>
> # Date 1345046301 -3600
> # Node ID 7cec0543f67cefe3755bbad0c2262fa2e820d746
> # Parent  30bf79cc14d932fbe6ff572d0438e5a432f69b0a
> libxl: make domain resume API asynchronous
> =

> Although the current implementation has no asynchronous parts I can
> envisage it needing to do bits of create/destroy like functionality
> which may need async support in the future.
> =

> To do this make the meat into an internal libxl__domain_resume
> function in order to satisfy the no-internal-callers rule for the
> async function.
> =

> Since I needed to touch the logging to s/ctx/CTX/ anyway switch to the
> LOG* helper macros.
> =

> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

Acked-by: Roger Pau Monn=E9 <roger.pau@citrix.com>

Just a minor comment below.
> =

> diff -r 30bf79cc14d9 -r 7cec0543f67c tools/libxl/libxl.c
> --- a/tools/libxl/libxl.c	Wed Aug 15 14:45:21 2012 +0100
> +++ b/tools/libxl/libxl.c	Wed Aug 15 16:58:21 2012 +0100
> @@ -396,15 +396,12 @@ int libxl_domain_rename(libxl_ctx *ctx, =

>      return rc;
>  }
>  =

> -int libxl_domain_resume(libxl_ctx *ctx, uint32_t domid, int suspend_canc=
el)
> +int libxl__domain_resume(libxl__gc *gc, uint32_t domid, int suspend_canc=
el)
>  {
> -    GC_INIT(ctx);

You can also use libxl_ctx *ctx =3D CTX; so there's no need to change all
occurrences of ctx with CTX (not that I have a problem with that).

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 15 17:47:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 17:47:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1hgG-0004mA-Cp; Wed, 15 Aug 2012 17:46:56 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T1hgE-0004m5-Ld
	for xen-devel@lists.xen.org; Wed, 15 Aug 2012 17:46:54 +0000
Received: from [85.158.139.83:11774] by server-12.bemta-5.messagelabs.com id
	83/9F-22359-D80EB205; Wed, 15 Aug 2012 17:46:53 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-11.tower-182.messagelabs.com!1345052813!21087712!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkxOTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4979 invoked from network); 15 Aug 2012 17:46:53 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-11.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Aug 2012 17:46:53 -0000
X-IronPort-AV: E=Sophos;i="4.77,774,1336348800"; d="scan'208";a="14025129"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	15 Aug 2012 17:46:30 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 15 Aug 2012 18:46:30 +0100
Date: Wed, 15 Aug 2012 18:46:14 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Attilio Rao <attilio.rao@citrix.com>
In-Reply-To: <502BD943.8030701@citrix.com>
Message-ID: <alpine.DEB.2.02.1208151844230.2278@kaball.uk.xensource.com>
References: <1344947062-4796-1-git-send-email-attilio.rao@citrix.com>
	<1344947062-4796-2-git-send-email-attilio.rao@citrix.com>
	<alpine.DEB.2.02.1208151824250.2278@kaball.uk.xensource.com>
	<502BD943.8030701@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH v2 1/2] XEN,
 X86: Improve semantic support for pagetable_reserve PVOPS
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 15 Aug 2012, Attilio Rao wrote:
> On 15/08/12 18:25, Stefano Stabellini wrote:
> > On Tue, 14 Aug 2012, Attilio Rao wrote:
> >    
> >> - Allow xen_mapping_pagetable_reserve() to handle a start different from
> >>    pgt_buf_start, but still bigger than it.
> >> - Add checks to xen_mapping_pagetable_reserve() and native_pagetable_reserve()
> >>    for verifying start and end are contained in the range
> >>    [pgt_buf_start, pgt_buf_top].
> >> - In xen_mapping_pagetable_reserve(), change printk into pr_debug.
> >> - In xen_mapping_pagetable_reserve(), print out diagnostic only if there is
> >>    an actual need to do that (or, in other words, if there are actually some
> >>    pages going to switch from RO to RW).
> >>
> >> Signed-off-by: Attilio Rao<attilio.rao@citrix.com>
> >> ---
> >>   arch/x86/mm/init.c |    4 ++++
> >>   arch/x86/xen/mmu.c |   22 ++++++++++++++++++++--
> >>   2 files changed, 24 insertions(+), 2 deletions(-)
> >>
> >> diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
> >> index e0e6990..c5849b6 100644
> >> --- a/arch/x86/mm/init.c
> >> +++ b/arch/x86/mm/init.c
> >> @@ -92,6 +92,10 @@ static void __init find_early_table_space(struct map_range *mr, unsigned long en
> >>
> >>   void __init native_pagetable_reserve(u64 start, u64 end)
> >>   {
> >> +	if (start<  PFN_PHYS(pgt_buf_start) || end>  PFN_PHYS(pgt_buf_top))
> >> +		panic("Invalid address range: [%llu - %llu] should be a subset of [%llu - %llu]\n"
> >>      
> > code style (you can check whether your patch breaks the code style with
> > scripts/checkpatch.pl)
> >    
> 
> I actually did run it before submitting; it reported 0 errors/warnings.

strange, that really looks like a line over 80 chars


> Do you have a handy link to where I can find a style guide for the Linux
> kernel? I tried to follow what other parts of the code do.

Documentation/CodingStyle

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 15 17:47:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 17:47:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1hgS-0004nJ-WB; Wed, 15 Aug 2012 17:47:08 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T1hgR-0004n6-D4
	for xen-devel@lists.xen.org; Wed, 15 Aug 2012 17:47:07 +0000
Received: from [85.158.143.99:48525] by server-3.bemta-4.messagelabs.com id
	65/98-09529-A90EB205; Wed, 15 Aug 2012 17:47:06 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-4.tower-216.messagelabs.com!1345052826!23074472!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkxOTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32552 invoked from network); 15 Aug 2012 17:47:06 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-4.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Aug 2012 17:47:06 -0000
X-IronPort-AV: E=Sophos;i="4.77,774,1336348800"; d="scan'208";a="14025132"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	15 Aug 2012 17:47:06 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 15 Aug 2012 18:47:06 +0100
Date: Wed, 15 Aug 2012 18:46:50 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Attilio Rao <attilio.rao@citrix.com>
In-Reply-To: <1344947062-4796-3-git-send-email-attilio.rao@citrix.com>
Message-ID: <alpine.DEB.2.02.1208151846410.2278@kaball.uk.xensource.com>
References: <1344947062-4796-1-git-send-email-attilio.rao@citrix.com>
	<1344947062-4796-3-git-send-email-attilio.rao@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v2 2/2] Xen: Document the semantic of the
 pagetable_reserve PVOPS
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 14 Aug 2012, Attilio Rao wrote:
> The information added to the hook's documentation covers:
> - Native behaviour
> - Xen specific behaviour
> - Logic behind the Xen specific behaviour
> - PVOPS semantic
> 
> Signed-off-by: Attilio Rao <attilio.rao@citrix.com>

Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

>  arch/x86/include/asm/x86_init.h |   19 +++++++++++++++++--
>  1 files changed, 17 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/x86/include/asm/x86_init.h b/arch/x86/include/asm/x86_init.h
> index 38155f6..b22093c 100644
> --- a/arch/x86/include/asm/x86_init.h
> +++ b/arch/x86/include/asm/x86_init.h
> @@ -72,8 +72,23 @@ struct x86_init_oem {
>   * struct x86_init_mapping - platform specific initial kernel pagetable setup
>   * @pagetable_reserve:	reserve a range of addresses for kernel pagetable usage
>   *
> - * For more details on the purpose of this hook, look in
> - * init_memory_mapping and the commit that added it.
> + * It does reserve a range of pages, to be used as pagetable pages.
> + * The start and end parameters are expected to be contained in the
> + * [pgt_buf_start, pgt_buf_top] range.
> + * The native implementation reserves the pages via the memblock_reserve()
> + * interface.
> + * The Xen implementation, besides reserving the range via memblock_reserve(),
> + * also sets RW the remaining pages contained in the ranges
> + * [pgt_buf_start, start) and [end, pgt_buf_top).
> + * This is needed because the range [pgt_buf_start, pgt_buf_top] was
> + * previously mapped read-only by xen_set_pte_init: when running
> + * on Xen all the pagetable pages need to be mapped read-only in order to
> + * avoid protection faults from the hypervisor. However, once the correct
> + * amount of pages is reserved for the pagetables, all the others contained
> + * in the range must be set to RW so that they can be correctly recycled by
> + * the VM subsystem.
> + * This operation is meant to be performed only during init_memory_mapping(),
> + * just after space for the kernel direct mapping tables is found.
>   */
>  struct x86_init_mapping {
>  	void (*pagetable_reserve)(u64 start, u64 end);
> -- 
> 1.7.2.5
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 15 18:43:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 18:43:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1iYx-0005ur-Pm; Wed, 15 Aug 2012 18:43:27 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T1iYw-0005um-3T
	for xen-devel@lists.xen.org; Wed, 15 Aug 2012 18:43:26 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1345056192!8573061!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkxOTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20498 invoked from network); 15 Aug 2012 18:43:12 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Aug 2012 18:43:12 -0000
X-IronPort-AV: E=Sophos;i="4.77,774,1336348800"; d="scan'208";a="14025676"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	15 Aug 2012 18:43:11 +0000
Received: from [127.0.0.1] (10.80.16.67) by smtprelay.citrix.com
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 15 Aug 2012 19:43:11 +0100
Message-ID: <1345056190.12434.34.camel@dagon.hellion.org.uk>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Wed, 15 Aug 2012 19:43:10 +0100
In-Reply-To: <alpine.DEB.2.02.1208151844230.2278@kaball.uk.xensource.com>
References: <1344947062-4796-1-git-send-email-attilio.rao@citrix.com>
	<1344947062-4796-2-git-send-email-attilio.rao@citrix.com>
	<alpine.DEB.2.02.1208151824250.2278@kaball.uk.xensource.com>
	<502BD943.8030701@citrix.com>
	<alpine.DEB.2.02.1208151844230.2278@kaball.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Attilio Rao <attilio.rao@citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v2 1/2] XEN,
 X86: Improve semantic support for pagetable_reserve PVOPS
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-08-15 at 18:46 +0100, Stefano Stabellini wrote:
> On Wed, 15 Aug 2012, Attilio Rao wrote:
> > On 15/08/12 18:25, Stefano Stabellini wrote:
> > > On Tue, 14 Aug 2012, Attilio Rao wrote:
> > >    
> > >> - Allow xen_mapping_pagetable_reserve() to handle a start different from
> > >>    pgt_buf_start, but still bigger than it.
> > >> - Add checks to xen_mapping_pagetable_reserve() and native_pagetable_reserve()
> > >>    for verifying start and end are contained in the range
> > >>    [pgt_buf_start, pgt_buf_top].
> > >> - In xen_mapping_pagetable_reserve(), change printk into pr_debug.
> > >> - In xen_mapping_pagetable_reserve(), print out diagnostic only if there is
> > >>    an actual need to do that (or, in other words, if there are actually some
> > >>    pages going to switch from RO to RW).
> > >>
> > >> Signed-off-by: Attilio Rao<attilio.rao@citrix.com>
> > >> ---
> > >>   arch/x86/mm/init.c |    4 ++++
> > >>   arch/x86/xen/mmu.c |   22 ++++++++++++++++++++--
> > >>   2 files changed, 24 insertions(+), 2 deletions(-)
> > >>
> > >> diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
> > >> index e0e6990..c5849b6 100644
> > >> --- a/arch/x86/mm/init.c
> > >> +++ b/arch/x86/mm/init.c
> > >> @@ -92,6 +92,10 @@ static void __init find_early_table_space(struct map_range *mr, unsigned long en
> > >>
> > >>   void __init native_pagetable_reserve(u64 start, u64 end)
> > >>   {
> > >> +	if (start<  PFN_PHYS(pgt_buf_start) || end>  PFN_PHYS(pgt_buf_top))
> > >> +		panic("Invalid address range: [%llu - %llu] should be a subset of [%llu - %llu]\n"
> > >>      
> > > code style (you can check whether your patch breaks the code style with
> > > scripts/checkpatch.pl)
> > >    
> > 
> > I actually did run it before submitting; it reported 0 errors/warnings.
> 
> strange, that really looks like a line over 80 chars

Also there should be one space either side of the "<" and ">" in the
conditional.

> 
> 
> > Do you have a handy link to where I can find a style guide for the Linux
> > kernel? I tried to follow what other parts of the code do.
> 
> Documentation/CodingStyle
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 15 19:00:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 19:00:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
From xen-devel-bounces@lists.xen.org Wed Aug 15 19:00:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 19:00:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1iob-00064w-As; Wed, 15 Aug 2012 18:59:37 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <attilio.rao@citrix.com>) id 1T1ioZ-00064r-Mh
	for xen-devel@lists.xen.org; Wed, 15 Aug 2012 18:59:35 +0000
Received: from [85.158.143.99:63584] by server-2.bemta-4.messagelabs.com id
	CC/2E-31966-691FB205; Wed, 15 Aug 2012 18:59:34 +0000
X-Env-Sender: attilio.rao@citrix.com
X-Msg-Ref: server-16.tower-216.messagelabs.com!1345057174!17163407!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkxOTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30900 invoked from network); 15 Aug 2012 18:59:34 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Aug 2012 18:59:34 -0000
X-IronPort-AV: E=Sophos;i="4.77,774,1336348800"; d="scan'208";a="14025892"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	15 Aug 2012 18:59:34 +0000
Received: from [10.80.3.221] (10.80.3.221) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 15 Aug 2012 19:59:34 +0100
Message-ID: <502BEEAD.7030906@citrix.com>
Date: Wed, 15 Aug 2012 19:47:09 +0100
From: Attilio Rao <attilio.rao@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US;
	rv:1.9.1.16) Gecko/20111110 Icedove/3.0.11
MIME-Version: 1.0
To: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
References: <1344947062-4796-1-git-send-email-attilio.rao@citrix.com>
	<1344947062-4796-2-git-send-email-attilio.rao@citrix.com>
	<alpine.DEB.2.02.1208151824250.2278@kaball.uk.xensource.com>
	<502BD943.8030701@citrix.com>
	<alpine.DEB.2.02.1208151844230.2278@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1208151844230.2278@kaball.uk.xensource.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v2 1/2] XEN,
 X86: Improve semantic support for pagetable_reserve PVOPS
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 15/08/12 18:46, Stefano Stabellini wrote:
> On Wed, 15 Aug 2012, Attilio Rao wrote:
>    
>> On 15/08/12 18:25, Stefano Stabellini wrote:
>>      
>>> On Tue, 14 Aug 2012, Attilio Rao wrote:
>>>
>>>        
>>>> - Allow xen_mapping_pagetable_reserve() to handle a start different from
>>>>     pgt_buf_start, but still bigger than it.
>>>> - Add checks to xen_mapping_pagetable_reserve() and native_pagetable_reserve()
>>>>     for verifying start and end are contained in the range
>>>>     [pgt_buf_start, pgt_buf_top].
>>>> - In xen_mapping_pagetable_reserve(), change printk into pr_debug.
>>>> - In xen_mapping_pagetable_reserve(), print out diagnostic only if there is
>>>>     an actual need to do that (or, in other words, if there are actually some
>>>>     pages going to switch from RO to RW).
>>>>
>>>> Signed-off-by: Attilio Rao<attilio.rao@citrix.com>
>>>> ---
>>>>    arch/x86/mm/init.c |    4 ++++
>>>>    arch/x86/xen/mmu.c |   22 ++++++++++++++++++++--
>>>>    2 files changed, 24 insertions(+), 2 deletions(-)
>>>>
>>>> diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
>>>> index e0e6990..c5849b6 100644
>>>> --- a/arch/x86/mm/init.c
>>>> +++ b/arch/x86/mm/init.c
>>>> @@ -92,6 +92,10 @@ static void __init find_early_table_space(struct map_range *mr, unsigned long en
>>>>
>>>>    void __init native_pagetable_reserve(u64 start, u64 end)
>>>>    {
>>>> +	if (start<   PFN_PHYS(pgt_buf_start) || end>   PFN_PHYS(pgt_buf_top))
>>>> +		panic("Invalid address range: [%llu - %llu] should be a subset of [%llu - %llu]\n"
>>>>
>>>>          
>>> code style (you can check whether your patch breaks the code style with
>>> scripts/checkpatch.pl)
>>>
>>>        
>> I actually did run it before submitting; it reported 0 errors/warnings.
>>      
> strange, that really looks like a line over 80 chars
>
>    

Actually, the coding style explicitly says not to break strings, because 
that retains the ability to grep for them. FreeBSD has the same rule, and 
I think that is why checkpatch doesn't complain. I don't think there is a 
bug here.

Can I submit the patch as it is, then?

Attilio

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 15 19:03:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 19:03:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1is8-0006Eq-Vl; Wed, 15 Aug 2012 19:03:16 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <attilio.rao@citrix.com>) id 1T1is7-0006Ej-Ej
	for xen-devel@lists.xen.org; Wed, 15 Aug 2012 19:03:15 +0000
Received: from [85.158.143.99:12990] by server-2.bemta-4.messagelabs.com id
	97/60-31966-272FB205; Wed, 15 Aug 2012 19:03:14 +0000
X-Env-Sender: attilio.rao@citrix.com
X-Msg-Ref: server-15.tower-216.messagelabs.com!1345057394!28345233!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkxOTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23390 invoked from network); 15 Aug 2012 19:03:14 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Aug 2012 19:03:14 -0000
X-IronPort-AV: E=Sophos;i="4.77,774,1336348800"; d="scan'208";a="14025933"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	15 Aug 2012 19:02:31 +0000
Received: from [10.80.3.221] (10.80.3.221) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 15 Aug 2012 20:02:31 +0100
Message-ID: <502BEF5E.1080105@citrix.com>
Date: Wed, 15 Aug 2012 19:50:06 +0100
From: Attilio Rao <attilio.rao@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US;
	rv:1.9.1.16) Gecko/20111110 Icedove/3.0.11
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1344947062-4796-1-git-send-email-attilio.rao@citrix.com>	
	<1344947062-4796-2-git-send-email-attilio.rao@citrix.com>	
	<alpine.DEB.2.02.1208151824250.2278@kaball.uk.xensource.com>	
	<502BD943.8030701@citrix.com>	
	<alpine.DEB.2.02.1208151844230.2278@kaball.uk.xensource.com>
	<1345056190.12434.34.camel@dagon.hellion.org.uk>
In-Reply-To: <1345056190.12434.34.camel@dagon.hellion.org.uk>
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH v2 1/2] XEN,
 X86: Improve semantic support for pagetable_reserve PVOPS
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 15/08/12 19:43, Ian Campbell wrote:
> On Wed, 2012-08-15 at 18:46 +0100, Stefano Stabellini wrote:
>    
>> On Wed, 15 Aug 2012, Attilio Rao wrote:
>>      
>>> On 15/08/12 18:25, Stefano Stabellini wrote:
>>>        
>>>> On Tue, 14 Aug 2012, Attilio Rao wrote:
>>>>
>>>>          
>>>>> - Allow xen_mapping_pagetable_reserve() to handle a start different from
>>>>>     pgt_buf_start, but still bigger than it.
>>>>> - Add checks to xen_mapping_pagetable_reserve() and native_pagetable_reserve()
>>>>>     for verifying start and end are contained in the range
>>>>>     [pgt_buf_start, pgt_buf_top].
>>>>> - In xen_mapping_pagetable_reserve(), change printk into pr_debug.
>>>>> - In xen_mapping_pagetable_reserve(), print out diagnostic only if there is
>>>>>     an actual need to do that (or, in other words, if there are actually some
>>>>>     pages going to switch from RO to RW).
>>>>>
>>>>> Signed-off-by: Attilio Rao<attilio.rao@citrix.com>
>>>>> ---
>>>>>    arch/x86/mm/init.c |    4 ++++
>>>>>    arch/x86/xen/mmu.c |   22 ++++++++++++++++++++--
>>>>>    2 files changed, 24 insertions(+), 2 deletions(-)
>>>>>
>>>>> diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
>>>>> index e0e6990..c5849b6 100644
>>>>> --- a/arch/x86/mm/init.c
>>>>> +++ b/arch/x86/mm/init.c
>>>>> @@ -92,6 +92,10 @@ static void __init find_early_table_space(struct map_range *mr, unsigned long en
>>>>>
>>>>>    void __init native_pagetable_reserve(u64 start, u64 end)
>>>>>    {
>>>>> +	if (start<   PFN_PHYS(pgt_buf_start) || end>   PFN_PHYS(pgt_buf_top))
>>>>> +		panic("Invalid address range: [%llu - %llu] should be a subset of [%llu - %llu]\n"
>>>>>
>>>>>            
>>>> code style (you can check whether your patch breaks the code style with
>>>> scripts/checkpatch.pl)
>>>>
>>>>          
>>> I actually did run it before submitting; it reported 0 errors/warnings.
>>>        
>> strange, that really looks like a line over 80 chars
>>      
> Also there should be one space on either side of the "<" and ">" in the
> conditional.
>
>    

I have no idea why they are reported like that, but in the original 
patch the spacing is fine.

Attilio

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 15 19:16:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 19:16:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1j4O-0006Qt-7s; Wed, 15 Aug 2012 19:15:56 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1T1j4N-0006Qo-3p
	for xen-devel@lists.xen.org; Wed, 15 Aug 2012 19:15:55 +0000
Received: from [85.158.139.83:35524] by server-12.bemta-5.messagelabs.com id
	E9/11-22359-A65FB205; Wed, 15 Aug 2012 19:15:54 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-15.tower-182.messagelabs.com!1345058150!28237961!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjQwMTk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3557 invoked from network); 15 Aug 2012 19:15:52 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Aug 2012 19:15:52 -0000
X-IronPort-AV: E=Sophos;i="4.77,774,1336363200"; d="scan'208";a="34767090"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	15 Aug 2012 15:15:49 -0400
Received: from [10.80.2.76] (10.80.2.76) by FTLPMAILMX02.citrite.net
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0; Wed, 15 Aug 2012
	15:15:49 -0400
Message-ID: <502BF563.8010902@citrix.com>
Date: Wed, 15 Aug 2012 20:15:47 +0100
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20120428 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
References: <1344947062-4796-1-git-send-email-attilio.rao@citrix.com>
	<1344947062-4796-3-git-send-email-attilio.rao@citrix.com>
	<502A5964.2080509@citrix.com> <502A5CD0.8000201@citrix.com>
	<502B85D1.8000606@citrix.com>
	<alpine.DEB.2.02.1208151418380.2278@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1208151418380.2278@kaball.uk.xensource.com>
Cc: Attilio Rao <attilio.rao@citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v2 2/2] Xen: Document the semantic of the
 pagetable_reserve PVOPS
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 15/08/12 14:55, Stefano Stabellini wrote:
> On Wed, 15 Aug 2012, David Vrabel wrote:
>> On 14/08/12 15:12, Attilio Rao wrote:
>>> On 14/08/12 14:57, David Vrabel wrote:
>>>> On 14/08/12 13:24, Attilio Rao wrote:
>> After looking at it some more, I think this pv-op is unnecessary. How
>> about the following patch to just remove it completely?
>>
>> I've only smoke-tested 32-bit and 64-bit dom0 but I think the reasoning
>> is sound.
> 
> Do you have more than 4G assigned to dom0 on those boxes?

I've tested with 6G now, both 64-bit and 32-bit with HIGHPTE.

> It certainly fixed a serious crash at the time it was introduced, see
> http://marc.info/?l=linux-kernel&m=129901609503574 and
> http://marc.info/?l=linux-kernel&m=130133909408229. Unless something big
> changed in kernel_physical_mapping_init, I think we still need it.
> Depending on the e820 of your test box, the kernel could crash (or not),
> possibly in different places.
>
>>>> Having said that, I couldn't immediately see where pages in (end, 
>>>> pgt_buf_top] were getting set RO.  Can you point me to where it's 
>>>> done?
>>>>
>>>
>>> As mentioned in the comment, please look at xen_set_pte_init().
>>
>> xen_set_pte_init() only ensures it doesn't set the PTE as writable if it
>> is already present and read-only.
> 
> look at mask_rw_pte and read the threads linked above, unfortunately it
> is not that simple.

Yes, I was remembering what 32-bit did here.

The 64-bit version is a bit confused and often ends up /not/ clearing
RW for the direct mapping of the pages in the pgt_buf, because any
existing RW mappings are used as-is.  See phys_pte_init(), which
checks for an existing mapping and only sets the PTE if it is not
already set.

pgd_populate(), pud_populate(), and pmd_populate() take care of clearing
RW for the newly allocated page table pages, so mask_rw_pte() only needs
to consider clearing RW for mappings of the existing page table PFNs,
which all lie outside the range (pgt_buf_start, pgt_buf_top].

This patch makes it more sensible, and makes removal of the pv-op safe
if pgt_buf lies outside the initial mapping.

index 04c1f61..2fd5e86 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -1400,14 +1400,13 @@ static pte_t __init mask_rw_pte(pte_t *ptep,
pte_t pte)
 	unsigned long pfn = pte_pfn(pte);

 	/*
-	 * If the new pfn is within the range of the newly allocated
-	 * kernel pagetable, and it isn't being mapped into an
-	 * early_ioremap fixmap slot as a freshly allocated page, make sure
-	 * it is RO.
+	 * If this is a PTE of an early_ioremap fixmap slot but
+	 * outside the range (pgt_buf_start, pgt_buf_top], then this
+	 * PTE is mapping a PFN in the current page table.  Make
+	 * sure it is RO.
 	 */
-	if (((!is_early_ioremap_ptep(ptep) &&
-			pfn >= pgt_buf_start && pfn < pgt_buf_top)) ||
-			(is_early_ioremap_ptep(ptep) && pfn != (pgt_buf_end - 1)))
+	if (is_early_ioremap_ptep(ptep)
+	    && (pfn < pgt_buf_start || pfn >= pgt_buf_top))
 		pte = pte_wrprotect(pte);

 	return pte;


>> 8<----------------------
>> x86: remove x86_init.mapping.pagetable_reserve paravirt op
>>
>> The x86_init.mapping.pagetable_reserve paravirt op is used for Xen
>> guests to set the writable flag for the mapping of (pgt_buf_end,
>> pgt_buf_top].  This is not necessary as these pages are never set as
>> read-only as they have never contained page tables.
> 
> Is this actually true? It is possible when pagetable pages are
> allocated by alloc_low_page.

These newly allocated page table pages will be set read-only when they
are linked into the page tables with pgd_populate(), pud_populate() and
friends.

>> When running as a Xen guest, the initial page tables are provided by
>> Xen (these are reserved with memblock_reserve() in
>> xen_setup_kernel_pagetable()) and constructed in brk space (for 32-bit
>> guests) or in the kernel's .data section (for 64-bit guests, see
>> head_64.S).
>>
>> Since these are all marked as reserved, (pgt_buf_start, pgt_buf_top]
>> does not overlap with them and the mappings for these PFNs will be
>> read-write.
> 
> We are talking about pagetable pages built by
> kernel_physical_mapping_init.
> 
> 
>> Since Xen doesn't need to change the mapping its implementation
>> becomes the same as a native and we can simply remove this pv-op
>> completely.
> 
> I don't think so.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 15 19:16:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 19:16:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1j4O-0006Qt-7s; Wed, 15 Aug 2012 19:15:56 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1T1j4N-0006Qo-3p
	for xen-devel@lists.xen.org; Wed, 15 Aug 2012 19:15:55 +0000
Received: from [85.158.139.83:35524] by server-12.bemta-5.messagelabs.com id
	E9/11-22359-A65FB205; Wed, 15 Aug 2012 19:15:54 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-15.tower-182.messagelabs.com!1345058150!28237961!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjQwMTk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3557 invoked from network); 15 Aug 2012 19:15:52 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Aug 2012 19:15:52 -0000
X-IronPort-AV: E=Sophos;i="4.77,774,1336363200"; d="scan'208";a="34767090"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	15 Aug 2012 15:15:49 -0400
Received: from [10.80.2.76] (10.80.2.76) by FTLPMAILMX02.citrite.net
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0; Wed, 15 Aug 2012
	15:15:49 -0400
Message-ID: <502BF563.8010902@citrix.com>
Date: Wed, 15 Aug 2012 20:15:47 +0100
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20120428 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
References: <1344947062-4796-1-git-send-email-attilio.rao@citrix.com>
	<1344947062-4796-3-git-send-email-attilio.rao@citrix.com>
	<502A5964.2080509@citrix.com> <502A5CD0.8000201@citrix.com>
	<502B85D1.8000606@citrix.com>
	<alpine.DEB.2.02.1208151418380.2278@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1208151418380.2278@kaball.uk.xensource.com>
Cc: Attilio Rao <attilio.rao@citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v2 2/2] Xen: Document the semantic of the
 pagetable_reserve PVOPS
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 15/08/12 14:55, Stefano Stabellini wrote:
> On Wed, 15 Aug 2012, David Vrabel wrote:
>> On 14/08/12 15:12, Attilio Rao wrote:
>>> On 14/08/12 14:57, David Vrabel wrote:
>>>> On 14/08/12 13:24, Attilio Rao wrote:
>> After looking at it some more, I think this pv-ops is unnecessary. How
>> about the following patch to just remove it completely?
>>
>> I've only smoke-tested 32-bit and 64-bit dom0 but I think the reasoning
>> is sound.
> 
> Do you have more then 4G to dom0 on those boxes?

I've tested with 6G now, both 64-bit and 32-bit with HIGHPTE.

> It certainly fixed a serious crash at the time it was introduced, see
> http://marc.info/?l=linux-kernel&m=129901609503574 and
> http://marc.info/?l=linux-kernel&m=130133909408229. Unless something big
> changed in kernel_physical_mapping_init, I think we still need it.
> Depending on the e820 of your test box, the kernel could crash (or not),
> possibly in different places.
>
>>>> Having said that, I couldn't immediately see where pages in (end, 
>>>> pgt_buf_top] were getting set RO.  Can you point me to where it's 
>>>> done?
>>>>
>>>
>>> As mentioned in the comment, please look at xen_set_pte_init().
>>
>> xen_set_pte_init() only ensures it doesn't set the PTE as writable if it
>> is already present and read-only.
> 
> Look at mask_rw_pte() and read the threads linked above; unfortunately
> it is not that simple.

Yes, I was remembering what 32-bit did here.

The 64-bit version is a bit confused and it often ends up /not/ clearing
RW for the direct mapping of the pages in the pgt_buf because any
existing RW mappings will be used as-is.  See phys_pte_init() which
checks for an existing mapping and only sets the PTE if it is not
already set.

pgd_populate(), pud_populate(), and pmd_populate() take care of clearing
RW for the newly allocated page table pages, so mask_rw_pte() only needs
to consider clearing RW for mappings of the existing page table PFNs,
which all lie outside the range (pgt_buf_start, pgt_buf_top].

This patch makes the check more sensible, and makes removal of the
pv-op safe if pgt_buf lies outside the initial mapping.

index 04c1f61..2fd5e86 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -1400,14 +1400,13 @@ static pte_t __init mask_rw_pte(pte_t *ptep, pte_t pte)
 	unsigned long pfn = pte_pfn(pte);

 	/*
-	 * If the new pfn is within the range of the newly allocated
-	 * kernel pagetable, and it isn't being mapped into an
-	 * early_ioremap fixmap slot as a freshly allocated page, make sure
-	 * it is RO.
+	 * If this is a PTE of an early_ioremap fixmap slot but
+	 * outside the range (pgt_buf_start, pgt_buf_top], then this
+	 * PTE is mapping a PFN in the current page table.  Make
+	 * sure it is RO.
 	 */
-	if (((!is_early_ioremap_ptep(ptep) &&
-			pfn >= pgt_buf_start && pfn < pgt_buf_top)) ||
-			(is_early_ioremap_ptep(ptep) && pfn != (pgt_buf_end - 1)))
+	if (is_early_ioremap_ptep(ptep)
+	    && (pfn < pgt_buf_start || pfn >= pgt_buf_top))
 		pte = pte_wrprotect(pte);

 	return pte;


>> 8<----------------------
>> x86: remove x86_init.mapping.pagetable_reserve paravirt op
>>
>> The x86_init.mapping.pagetable_reserve paravirt op is used for Xen
>> guests to set the writable flag for the mapping of (pgt_buf_end,
>> pgt_buf_top].  This is not necessary as these pages are never set as
>> read-only as they have never contained page tables.
> 
> Is this actually true? It is possible when pagetable pages are
> allocated by alloc_low_page.

These newly allocated page table pages will be set read-only when they
are linked into the page tables with pgd_populate(), pud_populate() and
friends.

>> When running as a Xen guest, the initial page tables are provided by
>> Xen (these are reserved with memblock_reserve() in
>> xen_setup_kernel_pagetable()) and constructed in brk space (for 32-bit
>> guests) or in the kernel's .data section (for 64-bit guests, see
>> head_64.S).
>>
>> Since these are all marked as reserved, (pgt_buf_start, pgt_buf_top]
>> does not overlap with them and the mappings for these PFNs will be
>> read-write.
> 
> We are talking about pagetable pages built by
> kernel_physical_mapping_init.
> 
> 
>> Since Xen doesn't need to change the mapping, its implementation
>> becomes the same as the native one and we can simply remove this
>> pv-op completely.
> 
> I don't think so.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 15 20:39:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 20:39:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1kN2-0006qo-U8; Wed, 15 Aug 2012 20:39:16 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lists+xen@internecto.net>) id 1T1kN1-0006qj-Ep
	for xen-devel@lists.xensource.com; Wed, 15 Aug 2012 20:39:15 +0000
Received: from [85.158.143.99:48497] by server-3.bemta-4.messagelabs.com id
	73/E6-09529-2F80C205; Wed, 15 Aug 2012 20:39:14 +0000
X-Env-Sender: lists+xen@internecto.net
X-Msg-Ref: server-3.tower-216.messagelabs.com!1345063152!27808035!1
X-Originating-IP: [176.9.245.29]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17163 invoked from network); 15 Aug 2012 20:39:12 -0000
Received: from polaris.internecto.net (HELO mx1.internecto.net) (176.9.245.29)
	by server-3.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 15 Aug 2012 20:39:12 -0000
Received: from localhost (localhost [127.0.0.1])
	by mx1.internecto.net (Postfix) with ESMTP id 5ED99A032A
	for <xen-devel@lists.xensource.com>;
	Wed, 15 Aug 2012 20:39:12 +0000 (UTC)
X-Virus-Scanned: Debian amavisd-new at mail.internecto.net
Received: from mx1.internecto.net ([127.0.0.1])
	by localhost (mail.polaris.internecto.net [127.0.0.1]) (amavisd-new,
	port 10024)
	with ESMTP id iHaLHqD881PK for <xen-devel@lists.xensource.com>;
	Wed, 15 Aug 2012 20:39:11 +0000 (UTC)
Received: from internecto.net (5ED4FDEB.cm-7-5d.dynamic.ziggo.nl
	[94.212.253.235])
	(using TLSv1 with cipher DHE-RSA-AES128-SHA (128/128 bits))
	(Client did not present a certificate)
	(Authenticated sender: lists@internecto.net)
	by mx1.internecto.net (Postfix) with ESMTPSA id 8B274A02ED
	for <xen-devel@lists.xensource.com>;
	Wed, 15 Aug 2012 20:39:11 +0000 (UTC)
Date: Wed, 15 Aug 2012 22:39:09 +0200
From: Mark van Dijk <lists+xen@internecto.net>
To: xen-devel <xen-devel@lists.xensource.com>
Message-ID: <20120815223909.1eb7a0cd@internecto.net>
Organization: Internecto SIS
X-Mailer: Claws Mail 3.8.0 (GTK+ 2.24.10; x86_64-pc-linux-gnu)
Mime-Version: 1.0
Subject: [Xen-devel] E5606 with no HVM;
	Assertion 'i == 1' failed at p2m-ept.c:524
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Xen developers and enthusiasts,

Recently I have been having a lot of problems with HVM guests on a dual
Xeon E5606 box. The problem is that HVM guests don't work on it at all
and crash the server.

Today I got logging to work over an IPMI console, so I compiled a fresh
xen-unstable. Apart from the message quoted in the subject, the call
trace reads as follows:

(XEN) Xen call trace:
(XEN)    [<ffff82c4801e1be9>] ept_get_entry+0x13a/0x28f
(XEN)    [<ffff82c4801da274>] __get_gfn_type_access+0x175/0x256
(XEN)    [<ffff82c4801b5a55>] hvm_hap_nested_page_fault+0x133/0x422
(XEN)    [<ffff82c4801d3c8e>] vmx_vmexit_handler+0x136b/0x1614

The entire log from boot to crash can be viewed at the following link:
http://pastebin.com/5wcH7GWR

Here's the kernel's config:
http://pastebin.com/E51S61Qk

Although I'm not sure whether knowing the BIOS settings is useful, it
doesn't hurt to share them, so I captured them and posted the images:
http://imgur.com/a/wGBVS/all#0

I hope you can help me get this fixed. It's been a nightmare, not least
for the lack of sleep :) but then again I also learned much along the
way. I triple-checked the consistency of the above posts to make sure
they are all from the same build. The problem has been occurring for a
fairly long time now, but I had to make sure there was no issue on my
side. I'm still not sure, but I'm also fresh out of ideas, so thanks up
front for your help or suggestions.

As always, please CC me when you reply to the list; I am not receiving
xen-devel emails because most are way outside my comfort zone...

-- 
Thank you,
Mark van Dijk.               ,--------------------------------
----------------------------'        Wed Aug 15 20:38 UTC 2012
Today is Boomtime, the 8th day of Bureaucracy in the YOLD 3178

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 15 20:48:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 20:48:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1kVQ-00071x-26; Wed, 15 Aug 2012 20:47:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lists+xen@internecto.net>) id 1T1kVO-00071s-RP
	for xen-devel@lists.xen.org; Wed, 15 Aug 2012 20:47:55 +0000
Received: from [85.158.143.35:2768] by server-2.bemta-4.messagelabs.com id
	2A/06-31966-AFA0C205; Wed, 15 Aug 2012 20:47:54 +0000
X-Env-Sender: lists+xen@internecto.net
X-Msg-Ref: server-6.tower-21.messagelabs.com!1345063673!15965680!1
X-Originating-IP: [176.9.245.29]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24556 invoked from network); 15 Aug 2012 20:47:53 -0000
Received: from polaris.internecto.net (HELO mx1.internecto.net) (176.9.245.29)
	by server-6.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 15 Aug 2012 20:47:53 -0000
Received: from localhost (localhost [127.0.0.1])
	by mx1.internecto.net (Postfix) with ESMTP id 25FBDA032A;
	Wed, 15 Aug 2012 20:47:53 +0000 (UTC)
X-Virus-Scanned: Debian amavisd-new at mail.internecto.net
Received: from mx1.internecto.net ([127.0.0.1])
	by localhost (mail.polaris.internecto.net [127.0.0.1]) (amavisd-new,
	port 10024)
	with ESMTP id QiUUoia8B2ac; Wed, 15 Aug 2012 20:47:52 +0000 (UTC)
Received: from internecto.net (5ED4FDEB.cm-7-5d.dynamic.ziggo.nl
	[94.212.253.235])
	(using TLSv1 with cipher DHE-RSA-AES128-SHA (128/128 bits))
	(Client did not present a certificate)
	(Authenticated sender: lists@internecto.net)
	by mx1.internecto.net (Postfix) with ESMTPSA id 2EC10A02ED;
	Wed, 15 Aug 2012 20:47:52 +0000 (UTC)
Date: Wed, 15 Aug 2012 22:47:50 +0200
From: Mark van Dijk <lists+xen@internecto.net>
To: Roger Pau Monne <roger.pau@citrix.com>
Message-ID: <20120815224750.18435d2b@internecto.net>
In-Reply-To: <502A2A23.4050205@citrix.com>
References: <20120810115405.05af653e@internecto.net>
	<alpine.DEB.2.02.1208101110370.21096@kaball.uk.xensource.com>
	<20120810140611.4ca8a1fb@internecto.net>
	<alpine.DEB.2.02.1208101313320.21096@kaball.uk.xensource.com>
	<20120810163722.2feaadad@internecto.net>
	<502A2A23.4050205@citrix.com>
Organization: Internecto SIS
X-Mailer: Claws Mail 3.8.0 (GTK+ 2.24.10; x86_64-pc-linux-gnu)
Mime-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] xen 4.2.0-rc3-pre: building failure on alpine linux
 / uclibc
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> >>>> This is upstream QEMU that is breaking, not qemu-xen-traditional
> >>>> (see the code path: qemu-xen-dir-remote instead of
> >>>> qemu-xen-traditional-dir-remote).
> >>> Ah, I didn't know, it's a little bit confusing. Would you like me
> >>> to submit a bug report with them?
> >>>
> >>>> Moreover it is breaking compiling qemu-nbd that we aren't
> >>>> currently using. I would try out the following change to the
> >>>> configure script: (..snip..)
> >>> Yes, that works, thanks! But it gives a new error which I couldn't
> >>> solve yet:
> >>>
> >>> ---
> >>> LINK  qemu-nbd
> >>>
> >>> cutils.o: In function `strtosz_suffix_unit':
> >>>
> >>> tools/qemu-xen-dir/cutils.c:354: undefined reference to
> >>> `__isnan'
> >>>
> >>> tools/qemu-xen-dir/cutils.c:357: undefined reference to `modf'
> >>> collect2: ld returned 1 exit status
> >>> ---
> >>>
> >>> Any idea there?
> >> It is another "-lm" missing somewhere.
> > 
> > Ok, well I'll leave that to the people who can actually make a
> > healthy patch out of this.
> > 
> >>
> >>> Also -- If we're not using qemu-nbd then could you suggest a
> >>> workaround please? I'd prefer something that can be patched or
> >>> issued before I run the make process. (I run the make process
> >>> twice now - if the first run fails, patch, then run again and if
> >>> it fails again error out)
> >> You can disable qemu-nbd altogether with the following patch:
> >> (..snip..)
> > 
> > While I couldn't find the proper configure script for this (I even
> > grepped for stuff like 'virtfs=no' but got nothing), it was a good
> > starting point. So thanks for pointing me in the right direction :)
> > 
> > For now building unstable on Alpine Linux works with the following
> > patch:
> > 
> > http://pastebin.com/QU8XuM0a
> 
> Natanael Copa sent a patch to Qemu-devel some months ago to fix the
> build of Qemu on uClibc, but it seems like it was ignored:
> 
> http://lists.gnu.org/archive/html/qemu-devel/2012-06/msg02388.html
> 
> Could you check whether that still applies and fixes your problems?

Hi Roger, thanks for following up. I will try this tomorrow and mail the
results.

Also, for the sake of being as thorough as possible -- just now I sent
a bug report about HVM issues. That is occurring on the same machine,
but I also had HVM problems when it was still running Arch Linux, so I
doubt the two are related; still, I want to make sure people know it
concerns the same machine.

-- 
Good night,
Mark van Dijk.               ,--------------------------------
----------------------------'        Wed Aug 15 20:47 UTC 2012
Today is Boomtime, the 8th day of Bureaucracy in the YOLD 3178

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 15 21:23:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 21:23:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1l2v-0007HX-Uv; Wed, 15 Aug 2012 21:22:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1T1l2u-0007HO-R4
	for xen-devel@lists.xen.org; Wed, 15 Aug 2012 21:22:32 +0000
Received: from [85.158.139.83:53294] by server-7.bemta-5.messagelabs.com id
	03/62-32634-8131C205; Wed, 15 Aug 2012 21:22:32 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-15.tower-182.messagelabs.com!1345065749!28250758!1
X-Originating-IP: [192.89.123.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjg5LjEyMy4yNSA9PiA0NzkwODU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15233 invoked from network); 15 Aug 2012 21:22:30 -0000
Received: from smtp.tele.fi (HELO smtp.tele.fi) (192.89.123.25)
	by server-15.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 15 Aug 2012 21:22:30 -0000
X-Originating-Ip: [194.89.68.22]
Received: from ydin.reaktio.net (reaktio.net [194.89.68.22])
	by smtp.tele.fi (Postfix) with ESMTP id B88752471;
	Thu, 16 Aug 2012 00:22:28 +0300 (EEST)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id 26BB92005D; Thu, 16 Aug 2012 00:22:28 +0300 (EEST)
Date: Thu, 16 Aug 2012 00:22:28 +0300
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: xen-devel@lists.xen.org
Message-ID: <20120815212228.GG19851@reaktio.net>
MIME-Version: 1.0
Content-Disposition: inline
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: xen-users@lists.xen.org
Subject: [Xen-devel] Howto/Tutorial: Compiling Xen 4.2 from source on
 RHEL5/CentOS5, RHEL6/CentOS6, Fedora 16, Fedora 17
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello,

I just wrote instructions for building Xen 4.2 from source on various RPM-based Linux distributions.
The guide is available here:

http://wiki.xen.org/wiki/Xen_4.2_Build_From_Source_On_RHEL_CentOS_Fedora

I was able to successfully build Xen 4.2.0-rc2 on the following Linux distros:

- CentOS 5.8 x64
- CentOS 6.3 x64
- Fedora 17 x64


RHEL5/CentOS5 requires some hackery to get yajl/yajl-devel rebuilt there, so the tutorial URL above
contains easy instructions and the required patch to make yajl build OK on EL5.


-- Pasi


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 15 21:23:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 21:23:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1l2v-0007HX-Uv; Wed, 15 Aug 2012 21:22:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1T1l2u-0007HO-R4
	for xen-devel@lists.xen.org; Wed, 15 Aug 2012 21:22:32 +0000
Received: from [85.158.139.83:53294] by server-7.bemta-5.messagelabs.com id
	03/62-32634-8131C205; Wed, 15 Aug 2012 21:22:32 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-15.tower-182.messagelabs.com!1345065749!28250758!1
X-Originating-IP: [192.89.123.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjg5LjEyMy4yNSA9PiA0NzkwODU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15233 invoked from network); 15 Aug 2012 21:22:30 -0000
Received: from smtp.tele.fi (HELO smtp.tele.fi) (192.89.123.25)
	by server-15.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 15 Aug 2012 21:22:30 -0000
X-Originating-Ip: [194.89.68.22]
Received: from ydin.reaktio.net (reaktio.net [194.89.68.22])
	by smtp.tele.fi (Postfix) with ESMTP id B88752471;
	Thu, 16 Aug 2012 00:22:28 +0300 (EEST)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id 26BB92005D; Thu, 16 Aug 2012 00:22:28 +0300 (EEST)
Date: Thu, 16 Aug 2012 00:22:28 +0300
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: xen-devel@lists.xen.org
Message-ID: <20120815212228.GG19851@reaktio.net>
MIME-Version: 1.0
Content-Disposition: inline
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: xen-users@lists.xen.org
Subject: [Xen-devel] Howto/Tutorial: Compiling Xen 4.2 from source on
 RHEL5/CentOS5, RHEL6/CentOS6, Fedora 16, Fedora 17
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello,

I just wrote instructions for building Xen 4.2 from source on various RPM-based Linux distributions.
The guide is available here: 

http://wiki.xen.org/wiki/Xen_4.2_Build_From_Source_On_RHEL_CentOS_Fedora

I was able to successfully build Xen 4.2.0-rc2 on the following Linux distros:

- CentOS 5.8 x64
- CentOS 6.3 x64
- Fedora 17 x64


RHEL5/CentOS5 requires some extra work to get yajl/yajl-devel rebuilt, so the tutorial URL above
includes step-by-step instructions and the required patch to make yajl build correctly on EL5.
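As a rough illustration of the distro split the guide deals with (the helper function and its return values are my own sketch, not taken from the wiki page): EL6 and Fedora can install yajl-devel straight from the repositories, while EL5 first needs yajl rebuilt from a patched source RPM before the Xen tools will build.

```shell
#!/bin/sh
# Illustrative only: where the yajl build dependency comes from on each
# distro family. On EL5 the tutorial rebuilds a patched yajl SRPM first;
# elsewhere "yum install yajl-devel" is enough.
yajl_source_for() {
    case "$1" in
        el5)              echo "rebuild-patched-srpm" ;;
        el6|fedora1[67])  echo "distro-package" ;;
        *)                echo "unknown" ;;
    esac
}

yajl_source_for el5
yajl_source_for fedora17
```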


-- Pasi


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 15 21:50:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 21:50:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1lU2-0007nA-Vo; Wed, 15 Aug 2012 21:50:34 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T1lU1-0007n5-Fc
	for xen-devel@lists.xensource.com; Wed, 15 Aug 2012 21:50:33 +0000
Received: from [85.158.143.99:54765] by server-3.bemta-4.messagelabs.com id
	F4/77-09529-8A91C205; Wed, 15 Aug 2012 21:50:32 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-7.tower-216.messagelabs.com!1345067431!25003061!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkxOTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5708 invoked from network); 15 Aug 2012 21:50:32 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-7.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Aug 2012 21:50:32 -0000
X-IronPort-AV: E=Sophos;i="4.77,774,1336348800"; d="scan'208";a="14028129"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	15 Aug 2012 21:50:30 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 15 Aug 2012 22:50:30 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1T1lTy-0000Cc-Ds;
	Wed, 15 Aug 2012 21:50:30 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1T1lTy-00012Q-1x;
	Wed, 15 Aug 2012 22:50:30 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13605-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 15 Aug 2012 22:50:30 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13605: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13605 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13605/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13603
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13603
 test-amd64-amd64-xl-sedf-pin 12 guest-saverestore.2      fail blocked in 13603
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13603

Tests which did not succeed, but are not blocking:
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  6d56e31fe1e1
baseline version:
 xen                  af7143d97fa2

------------------------------------------------------------
People who touched revisions under test:
  Boris Ostrovsky <boris.ostrovsky@amd.com>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=6d56e31fe1e1
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable 6d56e31fe1e1
+ branch=xen-unstable
+ revision=6d56e31fe1e1
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/staging/xen-unstable.hg
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-unstable.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-unstable.hg
+ hg push -r 6d56e31fe1e1 ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 3 changesets with 6 changes to 6 files

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 15 23:30:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 23:30:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1n2E-0000Vt-4m; Wed, 15 Aug 2012 23:29:58 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <deepti.kdeeps@gmail.com>) id 1T1n2C-0000Vo-P7
	for xen-devel@lists.xen.org; Wed, 15 Aug 2012 23:29:56 +0000
Received: from [85.158.143.99:50872] by server-1.bemta-4.messagelabs.com id
	AA/A3-07754-4F03C205; Wed, 15 Aug 2012 23:29:56 +0000
X-Env-Sender: deepti.kdeeps@gmail.com
X-Msg-Ref: server-12.tower-216.messagelabs.com!1345073394!23139323!1
X-Originating-IP: [74.125.82.51]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17641 invoked from network); 15 Aug 2012 23:29:54 -0000
Received: from mail-wg0-f51.google.com (HELO mail-wg0-f51.google.com)
	(74.125.82.51)
	by server-12.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Aug 2012 23:29:54 -0000
Received: by wgbed3 with SMTP id ed3so1387768wgb.32
	for <xen-devel@lists.xen.org>; Wed, 15 Aug 2012 16:29:54 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=Ay3RkjDWUD47nbDlxhJpt6l7fDkuL7+XZ8BXf1OfOQ0=;
	b=J89NKMCaM/KqcNZIH+vPXXrbTMKO3b/W3QFMIdTubFiIduWwEeqcO1IgOOK+pE3awr
	vYaoY+eNMfQ18AE/Tg5rHYSinO49oxgkQkUIwG9rd+lpqq0o4HyP3bVtujxs9bmrdquF
	mxg3ySI2UDCU3WyFRJDPdZGmh/ZPHHHj3vu/GC/V10S7TK19/RVhjC/i09Elw4NPkQvf
	QX7BZO4Ttxid/CTw5xqTY9B5zLJ8SmHmjydIRWO6yfe8FxvNkgaXNiCaBORewBdfJ1y0
	4uflhgBkLIXO4HouPRpWHdkzk6a6QNKR272IpWMNhq7YgRAKSRE1aQtZ9LQEAqbR16UO
	gOLA==
MIME-Version: 1.0
Received: by 10.180.20.204 with SMTP id p12mr720600wie.7.1345073394349; Wed,
	15 Aug 2012 16:29:54 -0700 (PDT)
Received: by 10.194.21.35 with HTTP; Wed, 15 Aug 2012 16:29:54 -0700 (PDT)
Date: Wed, 15 Aug 2012 16:29:54 -0700
Message-ID: <CABGrkh9R75j+Dkc0X-Ue_tj23rFwZOMfTxZ-sJC7AWGp3ab2zQ@mail.gmail.com>
From: Deepti kulkarni <deepti.kdeeps@gmail.com>
To: xen-devel@lists.xen.org
Subject: [Xen-devel] Debian kernel 3.1 onwards fails to boot on Xen domU
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4752771748926114437=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4752771748926114437==
Content-Type: multipart/alternative; boundary=bcaec53d5ce3ddf61a04c7564d49

--bcaec53d5ce3ddf61a04c7564d49
Content-Type: text/plain; charset=ISO-8859-1

I am trying to debug the Debian bug for Xen:
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=656792

The Debian domU (both HVM and PV) fails to boot and drops to the
initramfs prompt. The error is "Unable to find a medium containing a live
image system".

I did a cat on /proc/modules; it looks like the xen drivers are not loaded.

A modprobe of xen_blkfront doesn't help the boot either.

Is there any suggested workaround? Debian kernel 3.0 works fine.
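For what it's worth, a quick way to check whether the frontend drivers are present in the guest at all (a generic sketch on my part, not from the bug report; the module names are the standard upstream ones) is:

```shell
#!/bin/sh
# Report whether a kernel module is loaded, built into the kernel,
# or missing entirely in the running guest. Illustrative helper.
mod_state() {
    if grep -q "^$1 " /proc/modules 2>/dev/null; then
        echo "loaded"
    elif [ -d "/sys/module/$1" ]; then
        echo "built-in"
    else
        echo "absent"
    fi
}

mod_state xen_blkfront
mod_state xen_netfront
```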

Thanks

--bcaec53d5ce3ddf61a04c7564d49--


--===============4752771748926114437==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4752771748926114437==--


From xen-devel-bounces@lists.xen.org Wed Aug 15 23:49:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Aug 2012 23:49:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1nKO-0000hC-Sz; Wed, 15 Aug 2012 23:48:44 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <james.harper@bendigoit.com.au>) id 1T1nKN-0000h7-7k
	for xen-devel@lists.xen.org; Wed, 15 Aug 2012 23:48:43 +0000
Received: from [85.158.139.83:30078] by server-9.bemta-5.messagelabs.com id
	2A/4C-26123-A553C205; Wed, 15 Aug 2012 23:48:42 +0000
X-Env-Sender: james.harper@bendigoit.com.au
X-Msg-Ref: server-7.tower-182.messagelabs.com!1345074518!24402290!1
X-Originating-IP: [203.16.207.99]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19290 invoked from network); 15 Aug 2012 23:48:41 -0000
Received: from mail.bendigoit.com.au (HELO smtp2.bendigoit.com.au)
	(203.16.207.99)
	by server-7.tower-182.messagelabs.com with AES256-SHA encrypted SMTP;
	15 Aug 2012 23:48:41 -0000
Received: from trantor.int.sbss.com.au ([192.168.200.206]
	helo=mail.bendigoit.com.au)
	by smtp2.bendigoit.com.au with esmtp (Exim 4.72)
	(envelope-from <james.harper@bendigoit.com.au>)
	id 1T1nKF-0001tw-K0; Thu, 16 Aug 2012 09:48:35 +1000
Received: from BITCOM1.int.sbss.com.au ([192.168.200.237]) by
	mail.bendigoit.com.au with Microsoft SMTPSVC(6.0.3790.4675); 
	Thu, 16 Aug 2012 09:48:35 +1000
Received: from BITCOM1.int.sbss.com.au ([fe80::a5ca:4fd3:14f:ad5d]) by
	BITCOM1.int.sbss.com.au ([fe80::a5ca:4fd3:14f:ad5d%12]) with mapi id
	14.01.0355.002; Thu, 16 Aug 2012 09:48:35 +1000
From: James Harper <james.harper@bendigoit.com.au>
To: Kent Overstreet <koverstreet@google.com>, Joseph Glanville
	<joseph.glanville@orionvm.com.au>
Thread-Topic: [Xen-devel] blkback and bcache
Thread-Index: Ac15GvPQ4p9LsRz2TPWwWFq0dSWp1v//btIA//9NMFCAAOpDAIAAsyWA//84rcCAA1aSAIAAfBkA//8bO8A=
Date: Wed, 15 Aug 2012 23:48:33 +0000
Message-ID: <6035A0D088A63A46850C3988ED045A4B299FCC7F@BITCOM1.int.sbss.com.au>
References: <6035A0D088A63A46850C3988ED045A4B299F67FA@BITCOM1.int.sbss.com.au>
	<CAOzFzEhna3CaBE28aHVX_ZoNLDEa6AhArHPcB9240Ni4jh9PYA@mail.gmail.com>
	<6035A0D088A63A46850C3988ED045A4B299F6A9B@BITCOM1.int.sbss.com.au>
	<CAOzFzEhF+Pb89PNoibBc-9_Db6OUiyG5R8mKkAPi5pns4+CsAg@mail.gmail.com>
	<20120813213455.GC6887@google.com>
	<6035A0D088A63A46850C3988ED045A4B299F80E1@BITCOM1.int.sbss.com.au>
	<CAOzFzEitP95cy4LKJD+H1ffBLP_OjxWPTYCEd=XNkb-i5Mz39w@mail.gmail.com>
	<20120815200418.GA2758@google.com>
In-Reply-To: <20120815200418.GA2758@google.com>
Accept-Language: en-AU, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [2001:388:e000:712:9fd:24f5:8230:fd14]
x-tm-as-product-ver: SMEX-10.2.0.1135-7.000.1014-19116.000
x-tm-as-result: No--41.859800-0.000000-31
x-tm-as-user-approved-sender: Yes
x-tm-as-user-blocked-sender: No
MIME-Version: 1.0
X-OriginalArrivalTime: 15 Aug 2012 23:48:35.0374 (UTC)
	FILETIME=[7C18DCE0:01CD7B40]
X-Really-From-Bendigo-IT: magichashvalue
Cc: "linux-bcache@vger.kernel.org" <linux-bcache@vger.kernel.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] blkback and bcache
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> > Is there anything xen-devel should be doing about this? I wouldn't
> > expect blkback to care about block size...
> 
> Well, I wouldn't be surprised if Windows doesn't work on a device with block
> size > 512 bytes. But Linux (ext4 at least) certainly does work with 4k blocks -
> unless maybe it was breaking on something in the boot process?
> 
> So it sounds like this might be indicative of a bug in blkback.
> 

As far as I can tell, blkback works internally in 512-byte sectors, but it bounds-checks the 512-byte sector offset against the native sector count, so with 4K sectors you get an error trying to access anything past 1/8th of the disk. This is a problem, but qemu can't boot off anything but 512-byte sectors, so that is the more limiting factor. I suspect a more thorough audit of blkback's behaviour wrt 4K sectors would be in order before simply fixing the bounds-check bug, although apart from that it seems to work fine.

Curiously, despite literature to the contrary, Windows itself doesn't seem to care if the sector size is 4K. I added a second disk to an already running Windows DomU, partitioned it, formatted it, and tested it (copying files on and off, rebooting, etc.). As long as my partition was smaller than 1/8th of the disk, it worked fine.

James


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 00:27:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 00:27:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1nvl-0001Le-2T; Thu, 16 Aug 2012 00:27:21 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1T1nvj-0001LZ-F9
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 00:27:19 +0000
Received: from [85.158.138.51:24064] by server-7.bemta-3.messagelabs.com id
	02/D8-01906-66E3C205; Thu, 16 Aug 2012 00:27:18 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-2.tower-174.messagelabs.com!1345076837!28530193!1
X-Originating-IP: [143.182.124.21]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMjEgPT4gMzE0NjE1\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19047 invoked from network); 16 Aug 2012 00:27:17 -0000
Received: from mga03.intel.com (HELO mga03.intel.com) (143.182.124.21)
	by server-2.tower-174.messagelabs.com with SMTP;
	16 Aug 2012 00:27:17 -0000
Received: from azsmga002.ch.intel.com ([10.2.17.35])
	by azsmga101.ch.intel.com with ESMTP; 15 Aug 2012 17:27:16 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.77,775,1336374000"; d="scan'208";a="134785174"
Received: from fmsmsx105.amr.corp.intel.com ([10.19.9.36])
	by AZSMGA002.ch.intel.com with ESMTP; 15 Aug 2012 17:27:15 -0700
Received: from fmsmsx151.amr.corp.intel.com (10.19.17.220) by
	FMSMSX105.amr.corp.intel.com (10.19.9.36) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Wed, 15 Aug 2012 17:27:15 -0700
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	FMSMSX151.amr.corp.intel.com (10.19.17.220) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Wed, 15 Aug 2012 17:27:15 -0700
Received: from shsmsx102.ccr.corp.intel.com ([169.254.2.196]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.232]) with mapi id
	14.01.0355.002; Thu, 16 Aug 2012 08:27:13 +0800
From: "Xu, Dongxiao" <dongxiao.xu@intel.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>, Stefano Stabellini
	<Stefano.Stabellini@eu.citrix.com>
Thread-Topic: [Xen-devel] qemu-xen-traditional: NOCACHE or CACHE_WB to open
	disk images for IDE
Thread-Index: AQHNeiUFIawdT7mpSU6DBvyFdkNW7ZdY3iWAgAACFACAArcjsA==
Date: Thu, 16 Aug 2012 00:27:12 +0000
Message-ID: <40776A41FC278F40B59438AD47D147A90FE06D82@SHSMSX102.ccr.corp.intel.com>
References: <40776A41FC278F40B59438AD47D147A90FE016D3@SHSMSX102.ccr.corp.intel.com>
	<40776A41FC278F40B59438AD47D147A90FE048FB@SHSMSX102.ccr.corp.intel.com>
	<20522.22946.930177.26680@mariner.uk.xensource.com>
	<alpine.DEB.2.02.1208141510440.21096@kaball.uk.xensource.com>
	<20522.26538.475622.352650@mariner.uk.xensource.com>
In-Reply-To: <20522.26538.475622.352650@mariner.uk.xensource.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "Zhang, Yang Z" <yang.z.zhang@intel.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] qemu-xen-traditional: NOCACHE or CACHE_WB to open
 disk images for IDE
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Ian Jackson [mailto:Ian.Jackson@eu.citrix.com]
> Sent: Tuesday, August 14, 2012 10:59 PM
> To: Stefano Stabellini
> Cc: Xu, Dongxiao; xen-devel@lists.xen.org; Zhang, Yang Z; Ian Campbell
> Subject: RE: [Xen-devel] qemu-xen-traditional: NOCACHE or CACHE_WB to
> open disk images for IDE
> 
> Stefano Stabellini writes ("RE: [Xen-devel] qemu-xen-traditional: NOCACHE or
> CACHE_WB to open disk images for IDE"):
> > I also thought that IDE also needs NOCACHE for safety but after a
> > lengthy discussion, we came up with the conclusion that WRITEBACK is
> > OK for IDE, see your message:
> >
> > http://marc.info/?l=xen-devel&m=133311527009773
> >
> > So I think we should revert 1307e42a4b3c1102d75401bc0cffb4eb6c9b7a38,
> > I don't know why it was committed.
> 
> I was probably confused.  Sorry.  I have reverted it.

Thanks, this helps me a lot.

Thanks,
Dongxiao

> 
> > Also it seems to me that libxl is not specifying cache=writeback for
> > upstream QEMU, that means it is going to default to writethrough.
> > I'll write a patch for that.
> 
> Right, thanks.
> 
> Ian.
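
For context, the cache modes discussed above correspond to QEMU's -drive cache= option; an illustrative invocation (paths and device choice are hypothetical, not from the thread):

```shell
# writeback: report write completion once data reaches the host page cache
qemu-system-x86_64 -drive file=/path/to/disk.img,if=ide,cache=writeback

# writethrough: complete writes only after they reach stable storage
# (the thread notes this as the default when nothing is specified)
qemu-system-x86_64 -drive file=/path/to/disk.img,if=ide,cache=writethrough
```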

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 00:53:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 00:53:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1oK9-0001Zz-JH; Thu, 16 Aug 2012 00:52:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1T1oK7-0001Zu-Nz
	for Xen-devel@lists.xensource.com; Thu, 16 Aug 2012 00:52:31 +0000
Received: from [85.158.139.83:61607] by server-10.bemta-5.messagelabs.com id
	D4/4F-13125-E444C205; Thu, 16 Aug 2012 00:52:30 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-11.tower-182.messagelabs.com!1345078349!21129974!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDY5OTI0Ng==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23755 invoked from network); 16 Aug 2012 00:52:30 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-11.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 16 Aug 2012 00:52:30 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7G0qQwf032273
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK)
	for <Xen-devel@lists.xensource.com>; Thu, 16 Aug 2012 00:52:27 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7G0qPHT009613
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO)
	for <Xen-devel@lists.xensource.com>; Thu, 16 Aug 2012 00:52:26 GMT
Received: from abhmt101.oracle.com (abhmt101.oracle.com [141.146.116.53])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7G0qPbj010888
	for <Xen-devel@lists.xensource.com>; Wed, 15 Aug 2012 19:52:25 -0500
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 15 Aug 2012 17:52:25 -0700
Date: Wed, 15 Aug 2012 17:52:24 -0700
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Message-ID: <20120815175224.36457451@mantra.us.oracle.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.7.6 (GTK+ 2.18.9; x86_64-redhat-linux-gnu)
Mime-Version: 1.0
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>
Subject: [Xen-devel] [RFC PATCH 0/8]: PVH : PV guest in HVM container
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi,

	Following is a series of patches for the PVH Linux changes; the Xen
	patches will follow later.

        A PVH guest is a PV guest that runs in an HVM container. It is
        available on x86_64 only. It runs in ring 0 (the kernel), has native
        page tables and a native IDT, and requires HAP. It uses a lot of HVM
        code paths. In both the guest and Xen, the guest is viewed as a PV
        guest, and is_hvm() would return false. It uses event channels,
        thereby eliminating the APIC emulation code paths in Xen.

        This series of patches implements this feature. They were built on
        top of fc6bdb59a501740b28ed3b616641a22c8dc5dd31 from the following
        tree: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git


Thanks,
Mukesh



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 00:58:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 00:58:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1oP0-0001id-Cg; Thu, 16 Aug 2012 00:57:34 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1T1oOy-0001iX-WA
	for Xen-devel@lists.xensource.com; Thu, 16 Aug 2012 00:57:33 +0000
Received: from [85.158.138.51:59761] by server-9.bemta-3.messagelabs.com id
	9B/2D-23952-C754C205; Thu, 16 Aug 2012 00:57:32 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-5.tower-174.messagelabs.com!1345078649!28584946!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDY5OTI0Ng==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10227 invoked from network); 16 Aug 2012 00:57:31 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-5.tower-174.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 16 Aug 2012 00:57:31 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7G0vRjT002619
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK)
	for <Xen-devel@lists.xensource.com>; Thu, 16 Aug 2012 00:57:28 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7G0vQm9029872
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO)
	for <Xen-devel@lists.xensource.com>; Thu, 16 Aug 2012 00:57:27 GMT
Received: from abhmt118.oracle.com (abhmt118.oracle.com [141.146.116.70])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7G0vPsX013148
	for <Xen-devel@lists.xensource.com>; Wed, 15 Aug 2012 19:57:25 -0500
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 15 Aug 2012 17:57:25 -0700
Date: Wed, 15 Aug 2012 17:57:24 -0700
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Message-ID: <20120815175724.3405043a@mantra.us.oracle.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.7.6 (GTK+ 2.18.9; x86_64-redhat-linux-gnu)
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="MP_/_S_M8MHWYHIP6PqjiC9xMOw"
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>
Subject: [Xen-devel] [RFC PATCH 1/8]: PVH: Basic and preparatory changes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--MP_/_S_M8MHWYHIP6PqjiC9xMOw
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Content-Disposition: inline


--MP_/_S_M8MHWYHIP6PqjiC9xMOw
Content-Type: text/x-patch
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename=0001-PVH-1st-patch.-Basic-and-preparatory-changes.patch


---
 arch/x86/include/asm/xen/interface.h |    3 +-
 arch/x86/include/asm/xen/page.h      |    3 ++
 arch/x86/xen/setup.c                 |   13 ++++++++--
 arch/x86/xen/smp.c                   |   39 ++++++++++++++++++---------------
 drivers/xen/cpu_hotplug.c            |    3 +-
 include/xen/interface/xen.h          |    1 +
 include/xen/xen.h                    |    4 +++
 7 files changed, 43 insertions(+), 23 deletions(-)

diff --git a/arch/x86/include/asm/xen/interface.h b/arch/x86/include/asm/xen/interface.h
index cbf0c9d..1bd5e88 100644
--- a/arch/x86/include/asm/xen/interface.h
+++ b/arch/x86/include/asm/xen/interface.h
@@ -136,7 +136,8 @@ struct vcpu_guest_context {
     struct cpu_user_regs user_regs;         /* User-level CPU registers     */
     struct trap_info trap_ctxt[256];        /* Virtual IDT                  */
     unsigned long ldt_base, ldt_ents;       /* LDT (linear address, # ents) */
-    unsigned long gdt_frames[16], gdt_ents; /* GDT (machine frames, # ents) */
+    unsigned long gdt_frames[16], gdt_ents; /* GDT (machine frames, # ents).*
+					     * PV in HVM: it's GDTR addr/sz */
     unsigned long kernel_ss, kernel_sp;     /* Virtual TSS (only SS1/SP1)   */
     /* NB. User pagetable on x86/64 is placed in ctrlreg[1]. */
     unsigned long ctrlreg[8];               /* CR0-CR7 (control registers)  */
diff --git a/arch/x86/include/asm/xen/page.h b/arch/x86/include/asm/xen/page.h
index 93971e8..d1cfb96 100644
--- a/arch/x86/include/asm/xen/page.h
+++ b/arch/x86/include/asm/xen/page.h
@@ -158,6 +158,9 @@ static inline xpaddr_t machine_to_phys(xmaddr_t machine)
 static inline unsigned long mfn_to_local_pfn(unsigned long mfn)
 {
 	unsigned long pfn = mfn_to_pfn(mfn);
+
+	if (xen_feature(XENFEAT_auto_translated_physmap))
+		return mfn;
 	if (get_phys_to_machine(pfn) != mfn)
 		return -1; /* force !pfn_valid() */
 	return pfn;
diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
index ead8557..936f21d 100644
--- a/arch/x86/xen/setup.c
+++ b/arch/x86/xen/setup.c
@@ -500,10 +500,9 @@ void __cpuinit xen_enable_syscall(void)
 #endif /* CONFIG_X86_64 */
 }
 
-void __init xen_arch_setup(void)
+/* Normal PV domain not running in HVM container */
+static __init void inline xen_non_pvh_arch_setup(void)
 {
-	xen_panic_handler_init();
-
 	HYPERVISOR_vm_assist(VMASST_CMD_enable, VMASST_TYPE_4gb_segments);
 	HYPERVISOR_vm_assist(VMASST_CMD_enable, VMASST_TYPE_writable_pagetables);
 
@@ -517,6 +516,14 @@ void __init xen_arch_setup(void)
 
 	xen_enable_sysenter();
 	xen_enable_syscall();
+}
+
+void __init xen_arch_setup(void)
+{
+	xen_panic_handler_init();
+
+	if (!xen_pvh_domain())
+		xen_non_pvh_arch_setup();
 
 #ifdef CONFIG_ACPI
 	if (!(xen_start_info->flags & SIF_INITDOMAIN)) {
diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
index f58dca7..cdf269d 100644
--- a/arch/x86/xen/smp.c
+++ b/arch/x86/xen/smp.c
@@ -300,8 +300,6 @@ cpu_initialize_context(unsigned int cpu, struct task_struct *idle)
 	gdt = get_cpu_gdt_table(cpu);
 
 	ctxt->flags = VGCF_IN_KERNEL;
-	ctxt->user_regs.ds = __USER_DS;
-	ctxt->user_regs.es = __USER_DS;
 	ctxt->user_regs.ss = __KERNEL_DS;
 #ifdef CONFIG_X86_32
 	ctxt->user_regs.fs = __KERNEL_PERCPU;
@@ -314,31 +312,36 @@ cpu_initialize_context(unsigned int cpu, struct task_struct *idle)
 
 	memset(&ctxt->fpu_ctxt, 0, sizeof(ctxt->fpu_ctxt));
 
-	xen_copy_trap_info(ctxt->trap_ctxt);
+		ctxt->user_regs.ds = __USER_DS;
+		ctxt->user_regs.es = __USER_DS;
 
-	ctxt->ldt_ents = 0;
+		xen_copy_trap_info(ctxt->trap_ctxt);
 
-	BUG_ON((unsigned long)gdt & ~PAGE_MASK);
+		ctxt->ldt_ents = 0;
 
-	gdt_mfn = arbitrary_virt_to_mfn(gdt);
-	make_lowmem_page_readonly(gdt);
-	make_lowmem_page_readonly(mfn_to_virt(gdt_mfn));
+		BUG_ON((unsigned long)gdt & ~PAGE_MASK);
 
-	ctxt->gdt_frames[0] = gdt_mfn;
-	ctxt->gdt_ents      = GDT_ENTRIES;
+		gdt_mfn = arbitrary_virt_to_mfn(gdt);
+		make_lowmem_page_readonly(gdt);
+		make_lowmem_page_readonly(mfn_to_virt(gdt_mfn));
 
-	ctxt->user_regs.cs = __KERNEL_CS;
-	ctxt->user_regs.esp = idle->thread.sp0 - sizeof(struct pt_regs);
+		ctxt->gdt_frames[0] = gdt_mfn;
+		ctxt->gdt_ents      = GDT_ENTRIES;
 
-	ctxt->kernel_ss = __KERNEL_DS;
-	ctxt->kernel_sp = idle->thread.sp0;
+		ctxt->kernel_ss = __KERNEL_DS;
+		ctxt->kernel_sp = idle->thread.sp0;
 
 #ifdef CONFIG_X86_32
-	ctxt->event_callback_cs     = __KERNEL_CS;
-	ctxt->failsafe_callback_cs  = __KERNEL_CS;
+		ctxt->event_callback_cs     = __KERNEL_CS;
+		ctxt->failsafe_callback_cs  = __KERNEL_CS;
 #endif
-	ctxt->event_callback_eip    = (unsigned long)xen_hypervisor_callback;
-	ctxt->failsafe_callback_eip = (unsigned long)xen_failsafe_callback;
+		ctxt->event_callback_eip    =
+					(unsigned long)xen_hypervisor_callback;
+		ctxt->failsafe_callback_eip =
+					(unsigned long)xen_failsafe_callback;
+
+	ctxt->user_regs.cs = __KERNEL_CS;
+	ctxt->user_regs.esp = idle->thread.sp0 - sizeof(struct pt_regs);
 
 	per_cpu(xen_cr3, cpu) = __pa(swapper_pg_dir);
 	ctxt->ctrlreg[3] = xen_pfn_to_cr3(virt_to_mfn(swapper_pg_dir));
diff --git a/drivers/xen/cpu_hotplug.c b/drivers/xen/cpu_hotplug.c
index 4dcfced..a797359 100644
--- a/drivers/xen/cpu_hotplug.c
+++ b/drivers/xen/cpu_hotplug.c
@@ -100,7 +100,8 @@ static int __init setup_vcpu_hotplug_event(void)
 	static struct notifier_block xsn_cpu = {
 		.notifier_call = setup_cpu_watcher };
 
-	if (!xen_pv_domain())
+	/* PVH TBD/FIXME: future work */
+	if (!xen_pv_domain() || xen_pvh_domain())
 		return -ENODEV;
 
 	register_xenstore_notifier(&xsn_cpu);
diff --git a/include/xen/interface/xen.h b/include/xen/interface/xen.h
index 0801468..1d5bc36 100644
--- a/include/xen/interface/xen.h
+++ b/include/xen/interface/xen.h
@@ -493,6 +493,7 @@ struct dom0_vga_console_info {
 /* These flags are passed in the 'flags' field of start_info_t. */
 #define SIF_PRIVILEGED    (1<<0)  /* Is the domain privileged? */
 #define SIF_INITDOMAIN    (1<<1)  /* Is this the initial control domain? */
+#define SIF_IS_PVINHVM    (1<<4)  /* Is it a PV running in HVM container? */
 #define SIF_PM_MASK       (0xFF<<8) /* reserve 1 byte for xen-pm options */
 
 typedef uint64_t cpumap_t;
diff --git a/include/xen/xen.h b/include/xen/xen.h
index a164024..e823639 100644
--- a/include/xen/xen.h
+++ b/include/xen/xen.h
@@ -18,6 +18,10 @@ extern enum xen_domain_type xen_domain_type;
 				 xen_domain_type == XEN_PV_DOMAIN)
 #define xen_hvm_domain()	(xen_domain() &&			\
 				 xen_domain_type == XEN_HVM_DOMAIN)
+/* xen_pv_domain check is necessary as start_info ptr is null in HVM. Also,
+ * note, xen PVH domain shares lot of HVM code */
+#define xen_pvh_domain()       (xen_pv_domain() &&                     \
+				(xen_start_info->flags & SIF_IS_PVINHVM))
 
 #ifdef CONFIG_XEN_DOM0
 #include <xen/interface/xen.h>
-- 
1.7.2.3


--MP_/_S_M8MHWYHIP6PqjiC9xMOw
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--MP_/_S_M8MHWYHIP6PqjiC9xMOw--


From xen-devel-bounces@lists.xen.org Thu Aug 16 01:02:12 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 01:02:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1oTA-0002mb-BD; Thu, 16 Aug 2012 01:01:52 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1T1oT8-0002Mi-Ad
	for Xen-devel@lists.xensource.com; Thu, 16 Aug 2012 01:01:50 +0000
Received: from [85.158.143.35:23779] by server-3.bemta-4.messagelabs.com id
	D1/47-09529-D764C205; Thu, 16 Aug 2012 01:01:49 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1345078896!15990821!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjkxODU0\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7620 invoked from network); 16 Aug 2012 01:01:49 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-6.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 16 Aug 2012 01:01:49 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7G11XLD022333
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK)
	for <Xen-devel@lists.xensource.com>; Thu, 16 Aug 2012 01:01:34 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7G11XH3021679
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO)
	for <Xen-devel@lists.xensource.com>; Thu, 16 Aug 2012 01:01:33 GMT
Received: from abhmt101.oracle.com (abhmt101.oracle.com [141.146.116.53])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7G11Xu7015514
	for <Xen-devel@lists.xensource.com>; Wed, 15 Aug 2012 20:01:33 -0500
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 15 Aug 2012 18:01:33 -0700
Date: Wed, 15 Aug 2012 18:01:31 -0700
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Message-ID: <20120815180131.24aaa5ce@mantra.us.oracle.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.7.6 (GTK+ 2.18.9; x86_64-redhat-linux-gnu)
Mime-Version: 1.0
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>
Subject: [Xen-devel] [RFC PATCH 2/8]: PVH: changes related to initial boot
 and irq rewiring
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


---
 arch/x86/xen/enlighten.c |   67 ++++++++++++++++++++++++++++++++++++++-------
 arch/x86/xen/irq.c       |   22 ++++++++++++++-
 2 files changed, 77 insertions(+), 12 deletions(-)

diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index bf4bda6..3a58c51 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -139,6 +139,8 @@ struct tls_descs {
  */
 static DEFINE_PER_CPU(struct tls_descs, shadow_tls_desc);
 
+static void __init xen_hvm_init_shared_info(void);
+
 static void clamp_max_cpus(void)
 {
 #ifdef CONFIG_SMP
@@ -217,8 +219,8 @@ static void __init xen_banner(void)
 	struct xen_extraversion extra;
 	HYPERVISOR_xen_version(XENVER_extraversion, &extra);
 
-	printk(KERN_INFO "Booting paravirtualized kernel on %s\n",
-	       pv_info.name);
+	printk(KERN_INFO "Booting paravirtualized kernel %son %s\n",
+		(xen_pvh_domain() ? "in HVM " : ""), pv_info.name);
 	printk(KERN_INFO "Xen version: %d.%d%s%s\n",
 	       version >> 16, version & 0xffff, extra.extraversion,
 	       xen_feature(XENFEAT_mmu_pt_update_preserve_ad) ? " (preserve-AD)" : "");
@@ -271,12 +273,15 @@ static void xen_cpuid(unsigned int *ax, unsigned int *bx,
 		break;
 	}
 
-	asm(XEN_EMULATE_PREFIX "cpuid"
-		: "=a" (*ax),
-		  "=b" (*bx),
-		  "=c" (*cx),
-		  "=d" (*dx)
-		: "0" (*ax), "2" (*cx));
+	if (xen_pvh_domain())
+		native_cpuid(ax, bx, cx, dx);
+	else
+		asm(XEN_EMULATE_PREFIX "cpuid"
+			: "=a" (*ax),
+			"=b" (*bx),
+			"=c" (*cx),
+			"=d" (*dx)
+			: "0" (*ax), "2" (*cx));
 
 	*bx &= maskebx;
 	*cx &= maskecx;
@@ -1034,6 +1039,10 @@ static int xen_write_msr_safe(unsigned int msr, unsigned low, unsigned high)
 
 void xen_setup_shared_info(void)
 {
+	/* do later in xen_pvh_guest_init() when extend_brk is properly setup*/
+	if (xen_pvh_domain() && xen_initial_domain())
+		return;
+
 	if (!xen_feature(XENFEAT_auto_translated_physmap)) {
 		set_fixmap(FIX_PARAVIRT_BOOTMAP,
 			   xen_start_info->shared_info);
@@ -1044,6 +1053,10 @@ void xen_setup_shared_info(void)
 		HYPERVISOR_shared_info =
 			(struct shared_info *)__va(xen_start_info->shared_info);
 
+	/* PVH TBD/FIXME: vcpu info placement in phase 2 */
+	if (xen_pvh_domain())
+		return;
+
 #ifndef CONFIG_SMP
 	/* In UP this is as good a place as any to set up shared info */
 	xen_setup_vcpu_info_placement();
@@ -1274,6 +1287,10 @@ static const struct machine_ops xen_machine_ops __initconst = {
  */
 static void __init xen_setup_stackprotector(void)
 {
+	if (xen_pvh_domain()) {
+		switch_to_new_gdt(0);
+		return;
+	}
 	pv_cpu_ops.write_gdt_entry = xen_write_gdt_entry_boot;
 	pv_cpu_ops.load_gdt = xen_load_gdt_boot;
 
@@ -1284,6 +1301,25 @@ static void __init xen_setup_stackprotector(void)
 	pv_cpu_ops.load_gdt = xen_load_gdt;
 }
 
+static void __init xen_pvh_guest_init(void)
+{
+#ifndef __HAVE_ARCH_PTE_SPECIAL
+	/* PVH relies on the pte special bit; fail the build without it */
+	#error "__HAVE_ARCH_PTE_SPECIAL is required for PVH"
+#endif
+	/* PVH TBD/FIXME: for now just disable this. */
+	have_vcpu_info_placement = 0;
+
+	if (xen_feature(XENFEAT_hvm_callback_vector))
+		xen_have_vector_callback = 1;
+
+        /* for domU, the library sets start_info.shared_info to pfn, but for
+         * dom0, it contains mfn. we need to get the pfn for shared_info. PVH
+	 * uses HVM code in many places */
+	if (xen_initial_domain())
+		xen_hvm_init_shared_info();
+}
+
 /* First C function to be called on Xen boot */
 asmlinkage void __init xen_start_kernel(void)
 {
@@ -1294,15 +1330,23 @@ asmlinkage void __init xen_start_kernel(void)
 	if (!xen_start_info)
 		return;
 
+#ifdef CONFIG_X86_32
+	xen_raw_printk("ERROR: 32bit PV guest can not run in HVM container\n");
+	return;
+#endif
 	xen_domain_type = XEN_PV_DOMAIN;
 
+	xen_setup_features();
 	xen_setup_machphys_mapping();
 
 	/* Install Xen paravirt ops */
 	pv_info = xen_info;
 	pv_init_ops = xen_init_ops;
-	pv_cpu_ops = xen_cpu_ops;
 	pv_apic_ops = xen_apic_ops;
+	if (xen_pvh_domain())
+		pv_cpu_ops.cpuid = xen_cpuid;
+	else
+		pv_cpu_ops = xen_cpu_ops;
 
 	x86_init.resources.memory_setup = xen_memory_setup;
 	x86_init.oem.arch_setup = xen_arch_setup;
@@ -1334,8 +1378,6 @@ asmlinkage void __init xen_start_kernel(void)
 	/* Work out if we support NX */
 	x86_configure_nx();
 
-	xen_setup_features();
-
 	/* Get mfn list */
 	if (!xen_feature(XENFEAT_auto_translated_physmap))
 		xen_build_dynamic_phys_to_machine();
@@ -1462,6 +1504,9 @@ asmlinkage void __init xen_start_kernel(void)
 
 	xen_setup_runstate_info(0);
 
+	if (xen_pvh_domain())
+		xen_pvh_guest_init();
+
 	/* Start the world */
 #ifdef CONFIG_X86_32
 	i386_start_kernel();
diff --git a/arch/x86/xen/irq.c b/arch/x86/xen/irq.c
index 1573376..7c7dfd1 100644
--- a/arch/x86/xen/irq.c
+++ b/arch/x86/xen/irq.c
@@ -100,6 +100,10 @@ PV_CALLEE_SAVE_REGS_THUNK(xen_irq_enable);
 
 static void xen_safe_halt(void)
 {
+	/* so event channel can be delivered to us, since in HVM container */
+	if (xen_pvh_domain())
+		local_irq_enable();
+
 	/* Blocking includes an implicit local_irq_enable(). */
 	if (HYPERVISOR_sched_op(SCHEDOP_block, NULL) != 0)
 		BUG();
@@ -126,8 +130,24 @@ static const struct pv_irq_ops xen_irq_ops __initconst = {
 #endif
 };
 
+static const struct pv_irq_ops xen_pvh_irq_ops __initdata = {
+	.save_fl = __PV_IS_CALLEE_SAVE(native_save_fl),
+	.restore_fl = __PV_IS_CALLEE_SAVE(native_restore_fl),
+	.irq_disable = __PV_IS_CALLEE_SAVE(native_irq_disable),
+	.irq_enable = __PV_IS_CALLEE_SAVE(native_irq_enable),
+
+	.safe_halt = xen_safe_halt,
+	.halt = xen_halt,
+#ifdef CONFIG_X86_64
+	.adjust_exception_frame = paravirt_nop,
+#endif
+};
+
 void __init xen_init_irq_ops(void)
 {
-	pv_irq_ops = xen_irq_ops;
+	if (xen_pvh_domain())
+		pv_irq_ops = xen_pvh_irq_ops;
+	else
+		pv_irq_ops = xen_irq_ops;
 	x86_init.irqs.intr_init = xen_init_IRQ;
 }
-- 
1.7.2.3


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 01:02:12 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 01:02:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1oTA-0002mb-BD; Thu, 16 Aug 2012 01:01:52 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1T1oT8-0002Mi-Ad
	for Xen-devel@lists.xensource.com; Thu, 16 Aug 2012 01:01:50 +0000
Received: from [85.158.143.35:23779] by server-3.bemta-4.messagelabs.com id
	D1/47-09529-D764C205; Thu, 16 Aug 2012 01:01:49 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1345078896!15990821!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjkxODU0\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7620 invoked from network); 16 Aug 2012 01:01:49 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-6.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 16 Aug 2012 01:01:49 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7G11XLD022333
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK)
	for <Xen-devel@lists.xensource.com>; Thu, 16 Aug 2012 01:01:34 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7G11XH3021679
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO)
	for <Xen-devel@lists.xensource.com>; Thu, 16 Aug 2012 01:01:33 GMT
Received: from abhmt101.oracle.com (abhmt101.oracle.com [141.146.116.53])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7G11Xu7015514
	for <Xen-devel@lists.xensource.com>; Wed, 15 Aug 2012 20:01:33 -0500
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 15 Aug 2012 18:01:33 -0700
Date: Wed, 15 Aug 2012 18:01:31 -0700
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Message-ID: <20120815180131.24aaa5ce@mantra.us.oracle.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.7.6 (GTK+ 2.18.9; x86_64-redhat-linux-gnu)
Mime-Version: 1.0
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>
Subject: [Xen-devel] [RFC PATCH 2/8]: PVH: changes related to initial boot
 and irq rewiring
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


---
 arch/x86/xen/enlighten.c |   67 ++++++++++++++++++++++++++++++++++++++-------
 arch/x86/xen/irq.c       |   22 ++++++++++++++-
 2 files changed, 77 insertions(+), 12 deletions(-)

diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index bf4bda6..3a58c51 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -139,6 +139,8 @@ struct tls_descs {
  */
 static DEFINE_PER_CPU(struct tls_descs, shadow_tls_desc);
 
+static void __init xen_hvm_init_shared_info(void);
+
 static void clamp_max_cpus(void)
 {
 #ifdef CONFIG_SMP
@@ -217,8 +219,8 @@ static void __init xen_banner(void)
 	struct xen_extraversion extra;
 	HYPERVISOR_xen_version(XENVER_extraversion, &extra);
 
-	printk(KERN_INFO "Booting paravirtualized kernel on %s\n",
-	       pv_info.name);
+	printk(KERN_INFO "Booting paravirtualized kernel %son %s\n",
+		(xen_pvh_domain() ? "in HVM " : ""), pv_info.name);
 	printk(KERN_INFO "Xen version: %d.%d%s%s\n",
 	       version >> 16, version & 0xffff, extra.extraversion,
 	       xen_feature(XENFEAT_mmu_pt_update_preserve_ad) ? " (preserve-AD)" : "");
@@ -271,12 +273,15 @@ static void xen_cpuid(unsigned int *ax, unsigned int *bx,
 		break;
 	}
 
-	asm(XEN_EMULATE_PREFIX "cpuid"
-		: "=a" (*ax),
-		  "=b" (*bx),
-		  "=c" (*cx),
-		  "=d" (*dx)
-		: "0" (*ax), "2" (*cx));
+	if (xen_pvh_domain())
+		native_cpuid(ax, bx, cx, dx);
+	else
+		asm(XEN_EMULATE_PREFIX "cpuid"
+			: "=a" (*ax),
+			"=b" (*bx),
+			"=c" (*cx),
+			"=d" (*dx)
+			: "0" (*ax), "2" (*cx));
 
 	*bx &= maskebx;
 	*cx &= maskecx;
@@ -1034,6 +1039,10 @@ static int xen_write_msr_safe(unsigned int msr, unsigned low, unsigned high)
 
 void xen_setup_shared_info(void)
 {
+	/* Done later in xen_pvh_guest_init(), once extend_brk is properly set up. */
+	if (xen_pvh_domain() && xen_initial_domain())
+		return;
+
 	if (!xen_feature(XENFEAT_auto_translated_physmap)) {
 		set_fixmap(FIX_PARAVIRT_BOOTMAP,
 			   xen_start_info->shared_info);
@@ -1044,6 +1053,10 @@ void xen_setup_shared_info(void)
 		HYPERVISOR_shared_info =
 			(struct shared_info *)__va(xen_start_info->shared_info);
 
+	/* PVH TBD/FIXME: vcpu info placement in phase 2 */
+	if (xen_pvh_domain())
+		return;
+
 #ifndef CONFIG_SMP
 	/* In UP this is as good a place as any to set up shared info */
 	xen_setup_vcpu_info_placement();
@@ -1274,6 +1287,10 @@ static const struct machine_ops xen_machine_ops __initconst = {
  */
 static void __init xen_setup_stackprotector(void)
 {
+	if (xen_pvh_domain()) {
+		switch_to_new_gdt(0);
+		return;
+	}
 	pv_cpu_ops.write_gdt_entry = xen_write_gdt_entry_boot;
 	pv_cpu_ops.load_gdt = xen_load_gdt_boot;
 
@@ -1284,6 +1301,25 @@ static void __init xen_setup_stackprotector(void)
 	pv_cpu_ops.load_gdt = xen_load_gdt;
 }
 
+static void __init xen_pvh_guest_init(void)
+{
+#ifndef __HAVE_ARCH_PTE_SPECIAL
+	/* PVH relies on pte_mkspecial() in the remap path for now. */
+	#error "__HAVE_ARCH_PTE_SPECIAL is required for PVH"
+#endif
+	/* PVH TBD/FIXME: for now just disable this. */
+	have_vcpu_info_placement = 0;
+
+	if (xen_feature(XENFEAT_hvm_callback_vector))
+		xen_have_vector_callback = 1;
+
+	/* For domU, the library sets start_info.shared_info to a pfn, but for
+	 * dom0 it contains an mfn, so we need to resolve the pfn for
+	 * shared_info.  PVH reuses the HVM code in many places. */
+	if (xen_initial_domain())
+		xen_hvm_init_shared_info();
+}
+
 /* First C function to be called on Xen boot */
 asmlinkage void __init xen_start_kernel(void)
 {
@@ -1294,15 +1330,23 @@ asmlinkage void __init xen_start_kernel(void)
 	if (!xen_start_info)
 		return;
 
+#ifdef CONFIG_X86_32
+	xen_raw_printk("ERROR: 32-bit PV guest cannot run in an HVM container\n");
+	return;
+#endif
 	xen_domain_type = XEN_PV_DOMAIN;
 
+	xen_setup_features();
 	xen_setup_machphys_mapping();
 
 	/* Install Xen paravirt ops */
 	pv_info = xen_info;
 	pv_init_ops = xen_init_ops;
-	pv_cpu_ops = xen_cpu_ops;
 	pv_apic_ops = xen_apic_ops;
+	if (xen_pvh_domain())
+		pv_cpu_ops.cpuid = xen_cpuid;
+	else
+		pv_cpu_ops = xen_cpu_ops;
 
 	x86_init.resources.memory_setup = xen_memory_setup;
 	x86_init.oem.arch_setup = xen_arch_setup;
@@ -1334,8 +1378,6 @@ asmlinkage void __init xen_start_kernel(void)
 	/* Work out if we support NX */
 	x86_configure_nx();
 
-	xen_setup_features();
-
 	/* Get mfn list */
 	if (!xen_feature(XENFEAT_auto_translated_physmap))
 		xen_build_dynamic_phys_to_machine();
@@ -1462,6 +1504,9 @@ asmlinkage void __init xen_start_kernel(void)
 
 	xen_setup_runstate_info(0);
 
+	if (xen_pvh_domain())
+		xen_pvh_guest_init();
+
 	/* Start the world */
 #ifdef CONFIG_X86_32
 	i386_start_kernel();
diff --git a/arch/x86/xen/irq.c b/arch/x86/xen/irq.c
index 1573376..7c7dfd1 100644
--- a/arch/x86/xen/irq.c
+++ b/arch/x86/xen/irq.c
@@ -100,6 +100,10 @@ PV_CALLEE_SAVE_REGS_THUNK(xen_irq_enable);
 
 static void xen_safe_halt(void)
 {
+	/* In an HVM container, enable events so they can be delivered to us. */
+	if (xen_pvh_domain())
+		local_irq_enable();
+
 	/* Blocking includes an implicit local_irq_enable(). */
 	if (HYPERVISOR_sched_op(SCHEDOP_block, NULL) != 0)
 		BUG();
@@ -126,8 +130,24 @@ static const struct pv_irq_ops xen_irq_ops __initconst = {
 #endif
 };
 
+static const struct pv_irq_ops xen_pvh_irq_ops __initconst = {
+	.save_fl = __PV_IS_CALLEE_SAVE(native_save_fl),
+	.restore_fl = __PV_IS_CALLEE_SAVE(native_restore_fl),
+	.irq_disable = __PV_IS_CALLEE_SAVE(native_irq_disable),
+	.irq_enable = __PV_IS_CALLEE_SAVE(native_irq_enable),
+
+	.safe_halt = xen_safe_halt,
+	.halt = xen_halt,
+#ifdef CONFIG_X86_64
+	.adjust_exception_frame = paravirt_nop,
+#endif
+};
+
 void __init xen_init_irq_ops(void)
 {
-	pv_irq_ops = xen_irq_ops;
+	if (xen_pvh_domain())
+		pv_irq_ops = xen_pvh_irq_ops;
+	else
+		pv_irq_ops = xen_irq_ops;
 	x86_init.irqs.intr_init = xen_init_IRQ;
 }
-- 
1.7.2.3


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 01:03:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 01:03:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1oUK-0005na-SA; Thu, 16 Aug 2012 01:03:04 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1T1oUJ-0005nB-TW
	for Xen-devel@lists.xensource.com; Thu, 16 Aug 2012 01:03:04 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1345078974!1944173!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjkxODU0\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27858 invoked from network); 16 Aug 2012 01:02:55 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-11.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 16 Aug 2012 01:02:55 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7G12rt1022997
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK)
	for <Xen-devel@lists.xensource.com>; Thu, 16 Aug 2012 01:02:54 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7G12qC9007181
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO)
	for <Xen-devel@lists.xensource.com>; Thu, 16 Aug 2012 01:02:53 GMT
Received: from abhmt111.oracle.com (abhmt111.oracle.com [141.146.116.63])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7G12qVK016191
	for <Xen-devel@lists.xensource.com>; Wed, 15 Aug 2012 20:02:52 -0500
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 15 Aug 2012 18:02:52 -0700
Date: Wed, 15 Aug 2012 18:02:50 -0700
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>
Message-ID: <20120815180250.1e068d10@mantra.us.oracle.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.7.6 (GTK+ 2.18.9; x86_64-redhat-linux-gnu)
Mime-Version: 1.0
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Subject: [Xen-devel] [RFC PATCH 3/8]: PVH: memory manager and paging related
	changes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



---
 arch/x86/xen/mmu.c              |  179 ++++++++++++++++++++++++++++++++++++---
 arch/x86/xen/mmu.h              |    2 +
 include/xen/interface/memory.h  |   27 ++++++-
 include/xen/interface/physdev.h |   10 ++
 include/xen/xen-ops.h           |    7 ++
 5 files changed, 211 insertions(+), 14 deletions(-)

diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index b65a761..44a6477 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -330,6 +330,38 @@ static void xen_set_pte(pte_t *ptep, pte_t pteval)
 	__xen_set_pte(ptep, pteval);
 }
 
+/* This is for a PV guest in an HVM container. */
+void xen_set_clr_mmio_pvh_pte(unsigned long pfn, unsigned long mfn,
+			      int nr_mfns, int add_mapping)
+{
+	int rc;
+	struct physdev_map_iomem iomem;
+
+	iomem.first_gfn = pfn;
+	iomem.first_mfn = mfn;
+	iomem.nr_mfns = nr_mfns;
+	iomem.add_mapping = add_mapping;
+
+	rc = HYPERVISOR_physdev_op(PHYSDEVOP_pvh_map_iomem, &iomem);
+	BUG_ON(rc);
+}
+
+/* This is for a PV guest in an HVM container.
+ * We need this because during boot the early_ioremap path eventually calls
+ * set_pte to map I/O space. Also, ACPI pages are not mapped into the
+ * EPT during dom0 creation. The pages are mapped initially here from
+ * kernel_physical_mapping_init(); later the memtype is changed. */
+static void xen_dom0pvh_set_pte(pte_t *ptep, pte_t pteval)
+{
+	native_set_pte(ptep, pteval);
+}
+
+static void xen_dom0pvh_set_pte_at(struct mm_struct *mm, unsigned long addr,
+				   pte_t *ptep, pte_t pteval)
+{
+	native_set_pte(ptep, pteval);
+}
+
 static void xen_set_pte_at(struct mm_struct *mm, unsigned long addr,
 		    pte_t *ptep, pte_t pteval)
 {
@@ -1197,6 +1229,10 @@ static void xen_post_allocator_init(void);
 static void __init xen_pagetable_setup_done(pgd_t *base)
 {
 	xen_setup_shared_info();
+
+	if (xen_pvh_domain())
+		return;
+
 	xen_post_allocator_init();
 }
 
@@ -1652,6 +1688,10 @@ static void set_page_prot(void *addr, pgprot_t prot)
 	unsigned long pfn = __pa(addr) >> PAGE_SHIFT;
 	pte_t pte = pfn_pte(pfn, prot);
 
+	/* for PVH, page tables are native. */
+	if (xen_pvh_domain())
+		return;
+
 	if (HYPERVISOR_update_va_mapping((unsigned long)addr, pte, 0))
 		BUG();
 }
@@ -1745,6 +1785,7 @@ static void convert_pfn_mfn(void *v)
  * but that's enough to get __va working.  We need to fill in the rest
  * of the physical mapping once some sort of allocator has been set
  * up.
+ * NOTE: for PVH, the page tables are native with HAP required.
  */
 pgd_t * __init xen_setup_kernel_pagetable(pgd_t *pgd,
 					 unsigned long max_pfn)
@@ -1761,10 +1802,12 @@ pgd_t * __init xen_setup_kernel_pagetable(pgd_t *pgd,
 	/* Zap identity mapping */
 	init_level4_pgt[0] = __pgd(0);
 
-	/* Pre-constructed entries are in pfn, so convert to mfn */
-	convert_pfn_mfn(init_level4_pgt);
-	convert_pfn_mfn(level3_ident_pgt);
-	convert_pfn_mfn(level3_kernel_pgt);
+	if (!xen_pvh_domain()) {
+		/* Pre-constructed entries are in pfn, so convert to mfn */
+		convert_pfn_mfn(init_level4_pgt);
+		convert_pfn_mfn(level3_ident_pgt);
+		convert_pfn_mfn(level3_kernel_pgt);
+	}
 
 	l3 = m2v(pgd[pgd_index(__START_KERNEL_map)].pgd);
 	l2 = m2v(l3[pud_index(__START_KERNEL_map)].pud);
@@ -1787,12 +1830,14 @@ pgd_t * __init xen_setup_kernel_pagetable(pgd_t *pgd,
 	set_page_prot(level2_kernel_pgt, PAGE_KERNEL_RO);
 	set_page_prot(level2_fixmap_pgt, PAGE_KERNEL_RO);
 
-	/* Pin down new L4 */
-	pin_pagetable_pfn(MMUEXT_PIN_L4_TABLE,
-			  PFN_DOWN(__pa_symbol(init_level4_pgt)));
+	if (!xen_pvh_domain()) {
+		/* Pin down new L4 */
+		pin_pagetable_pfn(MMUEXT_PIN_L4_TABLE,
+				PFN_DOWN(__pa_symbol(init_level4_pgt)));
 
-	/* Unpin Xen-provided one */
-	pin_pagetable_pfn(MMUEXT_UNPIN_TABLE, PFN_DOWN(__pa(pgd)));
+		/* Unpin Xen-provided one */
+		pin_pagetable_pfn(MMUEXT_UNPIN_TABLE, PFN_DOWN(__pa(pgd)));
+	}
 
 	/* Switch over */
 	pgd = init_level4_pgt;
@@ -1802,9 +1847,13 @@ pgd_t * __init xen_setup_kernel_pagetable(pgd_t *pgd,
 	 * structure to attach it to, so make sure we just set kernel
 	 * pgd.
 	 */
-	xen_mc_batch();
-	__xen_write_cr3(true, __pa(pgd));
-	xen_mc_issue(PARAVIRT_LAZY_CPU);
+	if (xen_pvh_domain()) {
+		native_write_cr3(__pa(pgd));
+	} else {
+		xen_mc_batch();
+		__xen_write_cr3(true, __pa(pgd));
+		xen_mc_issue(PARAVIRT_LAZY_CPU);
+	}
 
 	memblock_reserve(__pa(xen_start_info->pt_base),
 			 xen_start_info->nr_pt_frames * PAGE_SIZE);
@@ -2067,9 +2116,21 @@ static const struct pv_mmu_ops xen_mmu_ops __initconst = {
 
 void __init xen_init_mmu_ops(void)
 {
+	x86_init.paging.pagetable_setup_done = xen_pagetable_setup_done;
+
+	if (xen_pvh_domain()) {
+		pv_mmu_ops.flush_tlb_others = xen_flush_tlb_others;
+
+		/* set_pte* for PCI devices to map iomem. */
+		if (xen_initial_domain()) {
+			pv_mmu_ops.set_pte = xen_dom0pvh_set_pte;
+			pv_mmu_ops.set_pte_at = xen_dom0pvh_set_pte_at;
+		}
+		return;
+	}
+
 	x86_init.mapping.pagetable_reserve = xen_mapping_pagetable_reserve;
 	x86_init.paging.pagetable_setup_start = xen_pagetable_setup_start;
-	x86_init.paging.pagetable_setup_done = xen_pagetable_setup_done;
 	pv_mmu_ops = xen_mmu_ops;
 
 	memset(dummy_mapping, 0xff, PAGE_SIZE);
@@ -2305,6 +2366,93 @@ void __init xen_hvm_init_mmu_ops(void)
 }
 #endif
 
+/* Map a foreign gmfn, fgmfn, to a local pfn, lpfn. This is for user space
+ * on a PVH dom0 creating a new guest, which needs to map domU pages. Called
+ * from an exported function, so no need to export this.
+ */
+static int pvh_add_to_xen_p2m(unsigned long lpfn, unsigned long fgmfn,
+			      unsigned int domid)
+{
+	int rc;
+	struct xen_add_to_physmap pmb = {.foreign_domid = domid};
+
+	pmb.gpfn = lpfn;
+	pmb.idx = fgmfn;
+	pmb.space = XENMAPSPACE_gmfn_foreign;
+	rc = HYPERVISOR_memory_op(XENMEM_add_to_physmap, &pmb);
+	if (rc) {
+		pr_warn("Failed to map pfn to mfn rc:%d pfn:%lx mfn:%lx\n",
+			rc, lpfn, fgmfn);
+		return 1;
+	}
+	return 0;
+}
+
+/* Unmap an entry from xen p2m table */
+int pvh_rem_xen_p2m(unsigned long spfn, int count)
+{
+	struct xen_remove_from_physmap xrp;
+	int i, rc;
+
+	for (i = 0; i < count; i++) {
+		xrp.domid = DOMID_SELF;
+		xrp.gpfn = spfn + i;
+		rc = HYPERVISOR_memory_op(XENMEM_remove_from_physmap, &xrp);
+		if (rc) {
+			pr_warn("Failed to unmap pfn:%lx rc:%d done:%d\n",
+				spfn + i, rc, i);
+			return 1;
+		}
+	}
+	return 0;
+}
+EXPORT_SYMBOL_GPL(pvh_rem_xen_p2m);
+
+struct pvh_remap_data {
+	unsigned long fgmfn;		/* foreign domain's gmfn */
+	pgprot_t prot;
+	domid_t  domid;
+	struct vm_area_struct *vma;
+};
+
+static int pvh_map_pte_fn(pte_t *ptep, pgtable_t token, unsigned long addr,
+			void *data)
+{
+	struct pvh_remap_data *pvhp = data;
+	struct xen_pvh_sav_pfn_info *savp = pvhp->vma->vm_private_data;
+	unsigned long pfn = page_to_pfn(savp->sp_paga[savp->sp_next_todo++]);
+	pte_t pteval = pte_mkspecial(pfn_pte(pfn, pvhp->prot));
+
+	native_set_pte(ptep, pteval);
+	if (pvh_add_to_xen_p2m(pfn, pvhp->fgmfn, pvhp->domid))
+		return -EFAULT;
+
+	return 0;
+}
+
+/* The only caller at the moment passes one gmfn at a time.
+ * PVH TBD/FIXME: expand this in future to honor batch requests.
+ */
+static int pvh_remap_gmfn_range(struct vm_area_struct *vma,
+				unsigned long addr, unsigned long mfn, int nr,
+				pgprot_t prot, unsigned domid)
+{
+	int err;
+	struct pvh_remap_data pvhdata;
+
+	if (nr > 1)
+		return -EINVAL;
+
+	pvhdata.fgmfn = mfn;
+	pvhdata.prot = prot;
+	pvhdata.domid = domid;
+	pvhdata.vma = vma;
+	err = apply_to_page_range(vma->vm_mm, addr, nr << PAGE_SHIFT,
+				  pvh_map_pte_fn, &pvhdata);
+	flush_tlb_all();
+	return err;
+}
+
 #define REMAP_BATCH_SIZE 16
 
 struct remap_data {
@@ -2342,6 +2490,11 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
 	BUG_ON(!((vma->vm_flags & (VM_PFNMAP | VM_RESERVED | VM_IO)) ==
 				(VM_PFNMAP | VM_RESERVED | VM_IO)));
 
+	if (xen_pvh_domain()) {
+		/* We need to update the local page tables and the xen HAP */
+		return pvh_remap_gmfn_range(vma, addr, mfn, nr, prot, domid);
+	}
+
 	rmd.mfn = mfn;
 	rmd.prot = prot;
 
diff --git a/arch/x86/xen/mmu.h b/arch/x86/xen/mmu.h
index 73809bb..6d0bb56 100644
--- a/arch/x86/xen/mmu.h
+++ b/arch/x86/xen/mmu.h
@@ -23,4 +23,6 @@ unsigned long xen_read_cr2_direct(void);
 
 extern void xen_init_mmu_ops(void);
 extern void xen_hvm_init_mmu_ops(void);
+extern void xen_set_clr_mmio_pvh_pte(unsigned long pfn, unsigned long mfn,
+				     int nr_mfns, int add_mapping);
 #endif	/* _XEN_MMU_H */
diff --git a/include/xen/interface/memory.h b/include/xen/interface/memory.h
index eac3ce1..1b213b1 100644
--- a/include/xen/interface/memory.h
+++ b/include/xen/interface/memory.h
@@ -163,10 +163,19 @@ struct xen_add_to_physmap {
     /* Which domain to change the mapping for. */
     domid_t domid;
 
+    /* Number of pages to go through for gmfn_range */
+    uint16_t    size;
+
     /* Source mapping space. */
 #define XENMAPSPACE_shared_info 0 /* shared info page */
 #define XENMAPSPACE_grant_table 1 /* grant table page */
-    unsigned int space;
+#define XENMAPSPACE_gmfn        2 /* GMFN */
+#define XENMAPSPACE_gmfn_range  3 /* GMFN range */
+#define XENMAPSPACE_gmfn_foreign 4 /* GMFN from another guest */
+    uint16_t space;
+    domid_t foreign_domid;         /* IFF XENMAPSPACE_gmfn_foreign */
+
+#define XENMAPIDX_grant_table_status 0x80000000
 
     /* Index into source mapping space. */
     unsigned long idx;
@@ -234,4 +243,20 @@ DEFINE_GUEST_HANDLE_STRUCT(xen_memory_map);
  * during a driver critical region.
  */
 extern spinlock_t xen_reservation_lock;
+
+/*
+ * Unmaps the page appearing at a particular GPFN from the specified guest's
+ * pseudophysical address space.
+ * arg == addr of xen_remove_from_physmap_t.
+ */
+#define XENMEM_remove_from_physmap      15
+struct xen_remove_from_physmap {
+    /* Which domain to change the mapping for. */
+    domid_t domid;
+
+    /* GPFN of the current mapping of the page. */
+    unsigned long     gpfn;
+};
+DEFINE_GUEST_HANDLE_STRUCT(xen_remove_from_physmap);
+
 #endif /* __XEN_PUBLIC_MEMORY_H__ */
diff --git a/include/xen/interface/physdev.h b/include/xen/interface/physdev.h
index 9ce788d..80f792e 100644
--- a/include/xen/interface/physdev.h
+++ b/include/xen/interface/physdev.h
@@ -258,6 +258,16 @@ struct physdev_pci_device {
     uint8_t devfn;
 };
 
+#define PHYSDEVOP_pvh_map_iomem        29
+struct physdev_map_iomem {
+    /* IN */
+    uint64_t first_gfn;
+    uint64_t first_mfn;
+    uint32_t nr_mfns;
+    uint32_t add_mapping;        /* 1 == add mapping;  0 == unmap */
+
+};
+
 /*
  * Notify that some PIRQ-bound event channels have been unmasked.
  * ** This command is obsolete since interface version 0x00030202 and is **
diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
index 6a198e4..fa595e1 100644
--- a/include/xen/xen-ops.h
+++ b/include/xen/xen-ops.h
@@ -29,4 +29,11 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
 			       unsigned long mfn, int nr,
 			       pgprot_t prot, unsigned domid);
 
+struct xen_pvh_sav_pfn_info {
+	struct page **sp_paga;	/* save pfn (info) page array */
+	int sp_num_pgs;
+	int sp_next_todo;
+};
+extern int pvh_rem_xen_p2m(unsigned long spfn, int count);
+
 #endif /* INCLUDE_XEN_OPS_H */
-- 
1.7.2.3


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 01:03:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 01:03:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1oUK-0005na-SA; Thu, 16 Aug 2012 01:03:04 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1T1oUJ-0005nB-TW
	for Xen-devel@lists.xensource.com; Thu, 16 Aug 2012 01:03:04 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1345078974!1944173!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjkxODU0\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27858 invoked from network); 16 Aug 2012 01:02:55 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-11.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 16 Aug 2012 01:02:55 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7G12rt1022997
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK)
	for <Xen-devel@lists.xensource.com>; Thu, 16 Aug 2012 01:02:54 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7G12qC9007181
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO)
	for <Xen-devel@lists.xensource.com>; Thu, 16 Aug 2012 01:02:53 GMT
Received: from abhmt111.oracle.com (abhmt111.oracle.com [141.146.116.63])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7G12qVK016191
	for <Xen-devel@lists.xensource.com>; Wed, 15 Aug 2012 20:02:52 -0500
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 15 Aug 2012 18:02:52 -0700
Date: Wed, 15 Aug 2012 18:02:50 -0700
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>
Message-ID: <20120815180250.1e068d10@mantra.us.oracle.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.7.6 (GTK+ 2.18.9; x86_64-redhat-linux-gnu)
Mime-Version: 1.0
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Subject: [Xen-devel] [RFC PATCH 3/8]: PVH: memory manager and paging related
	changes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



---
 arch/x86/xen/mmu.c              |  179 ++++++++++++++++++++++++++++++++++++---
 arch/x86/xen/mmu.h              |    2 +
 include/xen/interface/memory.h  |   27 ++++++-
 include/xen/interface/physdev.h |   10 ++
 include/xen/xen-ops.h           |    7 ++
 5 files changed, 211 insertions(+), 14 deletions(-)

diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index b65a761..44a6477 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -330,6 +330,38 @@ static void xen_set_pte(pte_t *ptep, pte_t pteval)
 	__xen_set_pte(ptep, pteval);
 }
 
+/* This for PV guest in hvm container */
+void xen_set_clr_mmio_pvh_pte(unsigned long pfn, unsigned long mfn,
+			      int nr_mfns, int add_mapping)
+{
+	int rc;
+	struct physdev_map_iomem iomem;
+
+	iomem.first_gfn = pfn;
+	iomem.first_mfn = mfn;
+	iomem.nr_mfns = nr_mfns;
+	iomem.add_mapping = add_mapping;
+
+	rc = HYPERVISOR_physdev_op(PHYSDEVOP_pvh_map_iomem, &iomem);
+	BUG_ON(rc);
+}
+
+/* This for PV guest in hvm container.
+ * We need this because during boot early_ioremap path eventually calls
+ * set_pte that maps io space. Also, ACPI pages are not mapped into to the
+ * EPT during dom0 creation. The pages are mapped initially here from
+ * kernel_physical_mapping_init() then later the memtype is changed.  */
+static void xen_dom0pvh_set_pte(pte_t *ptep, pte_t pteval)
+{
+	native_set_pte(ptep, pteval);
+}
+
+static void xen_dom0pvh_set_pte_at(struct mm_struct *mm, unsigned long addr,
+				   pte_t *ptep, pte_t pteval)
+{
+	native_set_pte(ptep, pteval);
+}
+
 static void xen_set_pte_at(struct mm_struct *mm, unsigned long addr,
 		    pte_t *ptep, pte_t pteval)
 {
@@ -1197,6 +1229,10 @@ static void xen_post_allocator_init(void);
 static void __init xen_pagetable_setup_done(pgd_t *base)
 {
 	xen_setup_shared_info();
+
+	if (xen_pvh_domain())
+		return;
+
 	xen_post_allocator_init();
 }
 
@@ -1652,6 +1688,10 @@ static void set_page_prot(void *addr, pgprot_t prot)
 	unsigned long pfn = __pa(addr) >> PAGE_SHIFT;
 	pte_t pte = pfn_pte(pfn, prot);
 
+	/* for PVH, page tables are native. */
+	if (xen_pvh_domain())
+		return;
+
 	if (HYPERVISOR_update_va_mapping((unsigned long)addr, pte, 0))
 		BUG();
 }
@@ -1745,6 +1785,7 @@ static void convert_pfn_mfn(void *v)
  * but that's enough to get __va working.  We need to fill in the rest
  * of the physical mapping once some sort of allocator has been set
  * up.
+ * NOTE: for PVH, the page tables are native with HAP required.
  */
 pgd_t * __init xen_setup_kernel_pagetable(pgd_t *pgd,
 					 unsigned long max_pfn)
@@ -1761,10 +1802,12 @@ pgd_t * __init xen_setup_kernel_pagetable(pgd_t *pgd,
 	/* Zap identity mapping */
 	init_level4_pgt[0] = __pgd(0);
 
-	/* Pre-constructed entries are in pfn, so convert to mfn */
-	convert_pfn_mfn(init_level4_pgt);
-	convert_pfn_mfn(level3_ident_pgt);
-	convert_pfn_mfn(level3_kernel_pgt);
+	if (!xen_pvh_domain()) {
+		/* Pre-constructed entries are in pfn, so convert to mfn */
+		convert_pfn_mfn(init_level4_pgt);
+		convert_pfn_mfn(level3_ident_pgt);
+		convert_pfn_mfn(level3_kernel_pgt);
+	}
 
 	l3 = m2v(pgd[pgd_index(__START_KERNEL_map)].pgd);
 	l2 = m2v(l3[pud_index(__START_KERNEL_map)].pud);
@@ -1787,12 +1830,14 @@ pgd_t * __init xen_setup_kernel_pagetable(pgd_t *pgd,
 	set_page_prot(level2_kernel_pgt, PAGE_KERNEL_RO);
 	set_page_prot(level2_fixmap_pgt, PAGE_KERNEL_RO);
 
-	/* Pin down new L4 */
-	pin_pagetable_pfn(MMUEXT_PIN_L4_TABLE,
-			  PFN_DOWN(__pa_symbol(init_level4_pgt)));
+	if (!xen_pvh_domain()) {
+		/* Pin down new L4 */
+		pin_pagetable_pfn(MMUEXT_PIN_L4_TABLE,
+				PFN_DOWN(__pa_symbol(init_level4_pgt)));
 
-	/* Unpin Xen-provided one */
-	pin_pagetable_pfn(MMUEXT_UNPIN_TABLE, PFN_DOWN(__pa(pgd)));
+		/* Unpin Xen-provided one */
+		pin_pagetable_pfn(MMUEXT_UNPIN_TABLE, PFN_DOWN(__pa(pgd)));
+	}
 
 	/* Switch over */
 	pgd = init_level4_pgt;
@@ -1802,9 +1847,13 @@ pgd_t * __init xen_setup_kernel_pagetable(pgd_t *pgd,
 	 * structure to attach it to, so make sure we just set kernel
 	 * pgd.
 	 */
-	xen_mc_batch();
-	__xen_write_cr3(true, __pa(pgd));
-	xen_mc_issue(PARAVIRT_LAZY_CPU);
+	if (xen_pvh_domain()) {
+		native_write_cr3(__pa(pgd));
+	} else {
+		xen_mc_batch();
+		__xen_write_cr3(true, __pa(pgd));
+		xen_mc_issue(PARAVIRT_LAZY_CPU);
+	}
 
 	memblock_reserve(__pa(xen_start_info->pt_base),
 			 xen_start_info->nr_pt_frames * PAGE_SIZE);
@@ -2067,9 +2116,21 @@ static const struct pv_mmu_ops xen_mmu_ops __initconst = {
 
 void __init xen_init_mmu_ops(void)
 {
+	x86_init.paging.pagetable_setup_done = xen_pagetable_setup_done;
+
+	if (xen_pvh_domain()) {
+		pv_mmu_ops.flush_tlb_others = xen_flush_tlb_others;
+
+		/* set_pte* for PCI devices to map iomem. */
+		if (xen_initial_domain()) {
+			pv_mmu_ops.set_pte = xen_dom0pvh_set_pte;
+			pv_mmu_ops.set_pte_at = xen_dom0pvh_set_pte_at;
+		}
+		return;
+	}
+
 	x86_init.mapping.pagetable_reserve = xen_mapping_pagetable_reserve;
 	x86_init.paging.pagetable_setup_start = xen_pagetable_setup_start;
-	x86_init.paging.pagetable_setup_done = xen_pagetable_setup_done;
 	pv_mmu_ops = xen_mmu_ops;
 
 	memset(dummy_mapping, 0xff, PAGE_SIZE);
@@ -2305,6 +2366,93 @@ void __init xen_hvm_init_mmu_ops(void)
 }
 #endif
 
+/* Map foreign gmfn, fgmfn, to local pfn, lpfn. This for the user space
+ * creating new guest on PVH dom0 and needs to map domU pages. Called from
+ * exported function, so no need to export this.
+ */
+static int pvh_add_to_xen_p2m(unsigned long lpfn, unsigned long fgmfn,
+			      unsigned int domid)
+{
+	int rc;
+	struct xen_add_to_physmap pmb = {.foreign_domid = domid};
+
+	pmb.gpfn = lpfn;
+	pmb.idx = fgmfn;
+	pmb.space = XENMAPSPACE_gmfn_foreign;
+	rc = HYPERVISOR_memory_op(XENMEM_add_to_physmap, &pmb);
+	if (rc) {
+		pr_warn("Failed to map pfn to mfn rc:%d pfn:%lx mfn:%lx\n",
+			rc, lpfn, fgmfn);
+		return 1;
+	}
+	return 0;
+}
+
+/* Unmap entries from the xen p2m table. */
+int pvh_rem_xen_p2m(unsigned long spfn, int count)
+{
+	struct xen_remove_from_physmap xrp;
+	int i, rc;
+
+	for (i = 0; i < count; i++) {
+		xrp.domid = DOMID_SELF;
+		xrp.gpfn = spfn+i;
+		rc = HYPERVISOR_memory_op(XENMEM_remove_from_physmap, &xrp);
+		if (rc) {
+			pr_warn("Failed to unmap pfn:%lx rc:%d done:%d\n",
+				spfn+i, rc, i);
+			return 1;
+		}
+	}
+	return 0;
+}
+EXPORT_SYMBOL_GPL(pvh_rem_xen_p2m);
+
+struct pvh_remap_data {
+	unsigned long fgmfn;		/* foreign domain's gmfn */
+	pgprot_t prot;
+	domid_t  domid;
+	struct vm_area_struct *vma;
+};
+
+static int pvh_map_pte_fn(pte_t *ptep, pgtable_t token, unsigned long addr,
+			void *data)
+{
+	struct pvh_remap_data *pvhp = data;
+	struct xen_pvh_sav_pfn_info *savp = pvhp->vma->vm_private_data;
+	unsigned long pfn = page_to_pfn(savp->sp_paga[savp->sp_next_todo++]);
+	pte_t pteval = pte_mkspecial(pfn_pte(pfn, pvhp->prot));
+
+	native_set_pte(ptep, pteval);
+	if (pvh_add_to_xen_p2m(pfn, pvhp->fgmfn, pvhp->domid))
+		return -EFAULT;
+
+	return 0;
+}
+
+/* The only caller at the moment passes one gmfn at a time.
+ * PVH TBD/FIXME: expand this in the future to honor batch requests.
+ */
+static int pvh_remap_gmfn_range(struct vm_area_struct *vma,
+				unsigned long addr, unsigned long mfn, int nr,
+				pgprot_t prot, unsigned domid)
+{
+	int err;
+	struct pvh_remap_data pvhdata;
+
+	if (nr > 1)
+		return -EINVAL;
+
+	pvhdata.fgmfn = mfn;
+	pvhdata.prot = prot;
+	pvhdata.domid = domid;
+	pvhdata.vma = vma;
+	err = apply_to_page_range(vma->vm_mm, addr, nr << PAGE_SHIFT,
+				  pvh_map_pte_fn, &pvhdata);
+	flush_tlb_all();
+	return err;
+}
+
 #define REMAP_BATCH_SIZE 16
 
 struct remap_data {
@@ -2342,6 +2490,11 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
 	BUG_ON(!((vma->vm_flags & (VM_PFNMAP | VM_RESERVED | VM_IO)) ==
 				(VM_PFNMAP | VM_RESERVED | VM_IO)));
 
+	if (xen_pvh_domain()) {
+		/* We need to update the local page tables and the xen HAP */
+		return pvh_remap_gmfn_range(vma, addr, mfn, nr, prot, domid);
+	}
+
 	rmd.mfn = mfn;
 	rmd.prot = prot;
 
diff --git a/arch/x86/xen/mmu.h b/arch/x86/xen/mmu.h
index 73809bb..6d0bb56 100644
--- a/arch/x86/xen/mmu.h
+++ b/arch/x86/xen/mmu.h
@@ -23,4 +23,6 @@ unsigned long xen_read_cr2_direct(void);
 
 extern void xen_init_mmu_ops(void);
 extern void xen_hvm_init_mmu_ops(void);
+extern void xen_set_clr_mmio_pvh_pte(unsigned long pfn, unsigned long mfn,
+				     int nr_mfns, int add_mapping);
 #endif	/* _XEN_MMU_H */
diff --git a/include/xen/interface/memory.h b/include/xen/interface/memory.h
index eac3ce1..1b213b1 100644
--- a/include/xen/interface/memory.h
+++ b/include/xen/interface/memory.h
@@ -163,10 +163,19 @@ struct xen_add_to_physmap {
     /* Which domain to change the mapping for. */
     domid_t domid;
 
+    /* Number of pages to go through for gmfn_range */
+    uint16_t    size;
+
     /* Source mapping space. */
 #define XENMAPSPACE_shared_info 0 /* shared info page */
 #define XENMAPSPACE_grant_table 1 /* grant table page */
-    unsigned int space;
+#define XENMAPSPACE_gmfn        2 /* GMFN */
+#define XENMAPSPACE_gmfn_range  3 /* GMFN range */
+#define XENMAPSPACE_gmfn_foreign 4 /* GMFN from another guest */
+    uint16_t space;
+    domid_t foreign_domid;         /* IFF XENMAPSPACE_gmfn_foreign */
+
+#define XENMAPIDX_grant_table_status 0x80000000
 
     /* Index into source mapping space. */
     unsigned long idx;
@@ -234,4 +243,20 @@ DEFINE_GUEST_HANDLE_STRUCT(xen_memory_map);
  * during a driver critical region.
  */
 extern spinlock_t xen_reservation_lock;
+
+/*
+ * Unmaps the page appearing at a particular GPFN from the specified guest's
+ * pseudophysical address space.
+ * arg == addr of xen_remove_from_physmap_t.
+ */
+#define XENMEM_remove_from_physmap      15
+struct xen_remove_from_physmap {
+    /* Which domain to change the mapping for. */
+    domid_t domid;
+
+    /* GPFN of the current mapping of the page. */
+    unsigned long     gpfn;
+};
+DEFINE_GUEST_HANDLE_STRUCT(xen_remove_from_physmap);
+
 #endif /* __XEN_PUBLIC_MEMORY_H__ */
diff --git a/include/xen/interface/physdev.h b/include/xen/interface/physdev.h
index 9ce788d..80f792e 100644
--- a/include/xen/interface/physdev.h
+++ b/include/xen/interface/physdev.h
@@ -258,6 +258,16 @@ struct physdev_pci_device {
     uint8_t devfn;
 };
 
+#define PHYSDEVOP_pvh_map_iomem        29
+struct physdev_map_iomem {
+    /* IN */
+    uint64_t first_gfn;
+    uint64_t first_mfn;
+    uint32_t nr_mfns;
+    uint32_t add_mapping;        /* 1 == add mapping;  0 == unmap */
+
+};
+
 /*
  * Notify that some PIRQ-bound event channels have been unmasked.
  * ** This command is obsolete since interface version 0x00030202 and is **
diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
index 6a198e4..fa595e1 100644
--- a/include/xen/xen-ops.h
+++ b/include/xen/xen-ops.h
@@ -29,4 +29,11 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
 			       unsigned long mfn, int nr,
 			       pgprot_t prot, unsigned domid);
 
+struct xen_pvh_sav_pfn_info {
+	struct page **sp_paga;	/* save pfn (info) page array */
+	int sp_num_pgs;
+	int sp_next_todo;
+};
+extern int pvh_rem_xen_p2m(unsigned long spfn, int count);
+
 #endif /* INCLUDE_XEN_OPS_H */
-- 
1.7.2.3


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 01:04:19 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 01:04:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1oVJ-0005tl-G7; Thu, 16 Aug 2012 01:04:05 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1T1oVH-0005tP-Sh
	for Xen-devel@lists.xensource.com; Thu, 16 Aug 2012 01:04:04 +0000
Received: from [85.158.139.83:12320] by server-4.bemta-5.messagelabs.com id
	D6/41-12386-3074C205; Thu, 16 Aug 2012 01:04:03 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-8.tower-182.messagelabs.com!1345079041!17090254!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDY5OTI0Ng==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14364 invoked from network); 16 Aug 2012 01:04:02 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-8.tower-182.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 16 Aug 2012 01:04:02 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7G13wHf006797
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK)
	for <Xen-devel@lists.xensource.com>; Thu, 16 Aug 2012 01:03:59 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7G13wRY008963
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO)
	for <Xen-devel@lists.xensource.com>; Thu, 16 Aug 2012 01:03:58 GMT
Received: from abhmt109.oracle.com (abhmt109.oracle.com [141.146.116.61])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7G13v4N016667
	for <Xen-devel@lists.xensource.com>; Wed, 15 Aug 2012 20:03:57 -0500
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 15 Aug 2012 18:03:57 -0700
Date: Wed, 15 Aug 2012 18:03:56 -0700
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>
Message-ID: <20120815180356.08d4d2e4@mantra.us.oracle.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.7.6 (GTK+ 2.18.9; x86_64-redhat-linux-gnu)
Mime-Version: 1.0
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Subject: [Xen-devel] [RFC PATCH 4/8]: identity map, events,
	and xenbus related changes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


---
 arch/x86/xen/setup.c               |   32 +++++++++++++++++++++++++++-----
 drivers/xen/events.c               |    7 +++++++
 drivers/xen/xenbus/xenbus_client.c |    2 +-
 drivers/xen/xenbus/xenbus_probe.c  |    5 ++++-
 4 files changed, 39 insertions(+), 7 deletions(-)

diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
index 936f21d..1c961fc 100644
--- a/arch/x86/xen/setup.c
+++ b/arch/x86/xen/setup.c
@@ -26,6 +26,7 @@
 #include <xen/interface/memory.h>
 #include <xen/interface/physdev.h>
 #include <xen/features.h>
+#include "mmu.h"
 #include "xen-ops.h"
 #include "vdso.h"
 
@@ -222,6 +223,20 @@ static void __init xen_set_identity_and_release_chunk(
 	*identity += set_phys_range_identity(start_pfn, end_pfn);
 }
 
+/* For PVH, the pfns [0..MAX] are mapped to mfns in the EPT/NPT. The mfns
+ * are released as part of this 1:1 mapping hypercall. We can't balloon down
+ * later because once the p2m/EPT is updated, the mfns are already lost.
+ * Also, we map the entire IO space, i.e., beyond max_pfn_mapped.
+ */
+static void noinline __init xen_pvh_identity_map_chunk(unsigned long start_pfn,
+						       unsigned long end_pfn)
+{
+	unsigned long pfn;
+
+	for (pfn = start_pfn; pfn < end_pfn; pfn++)
+		xen_set_clr_mmio_pvh_pte(pfn, pfn, 1, 1);
+}
+
 static unsigned long __init xen_set_identity_and_release(
 	const struct e820entry *list, size_t map_size, unsigned long nr_pages)
 {
@@ -251,11 +266,18 @@ static unsigned long __init xen_set_identity_and_release(
 			if (entry->type == E820_RAM)
 				end_pfn = PFN_UP(entry->addr);
 
-			if (start_pfn < end_pfn)
-				xen_set_identity_and_release_chunk(
-					start_pfn, end_pfn, nr_pages,
-					&released, &identity);
-
+			if (start_pfn < end_pfn) {
+				if (xen_pvh_domain()) {
+					xen_pvh_identity_map_chunk(start_pfn,
+								   end_pfn);
+					released += end_pfn - start_pfn;
+					identity += end_pfn - start_pfn;
+				} else {
+					xen_set_identity_and_release_chunk(
+						start_pfn, end_pfn, nr_pages,
+						&released, &identity);
+				}
+			}
 			start = end;
 		}
 	}
diff --git a/drivers/xen/events.c b/drivers/xen/events.c
index 7595581..260113e 100644
--- a/drivers/xen/events.c
+++ b/drivers/xen/events.c
@@ -1814,6 +1814,13 @@ void __init xen_init_IRQ(void)
 		if (xen_initial_domain())
 			pci_xen_initial_domain();
 
+		if (xen_pvh_domain()) {
+			xen_callback_vector();
+			return;
+		}
+
+		/* PVH: TBD/FIXME: debug and fix eio map to work with pvh */
+
 		pirq_eoi_map = (void *)__get_free_page(GFP_KERNEL|__GFP_ZERO);
 		eoi_gmfn.gmfn = virt_to_mfn(pirq_eoi_map);
 		rc = HYPERVISOR_physdev_op(PHYSDEVOP_pirq_eoi_gmfn_v2, &eoi_gmfn);
diff --git a/drivers/xen/xenbus/xenbus_client.c b/drivers/xen/xenbus/xenbus_client.c
index b3e146e..c0fcff1 100644
--- a/drivers/xen/xenbus/xenbus_client.c
+++ b/drivers/xen/xenbus/xenbus_client.c
@@ -743,7 +743,7 @@ static const struct xenbus_ring_ops ring_ops_hvm = {
 
 void __init xenbus_ring_ops_init(void)
 {
-	if (xen_pv_domain())
+	if (xen_pv_domain() && !xen_pvh_domain())
 		ring_ops = &ring_ops_pv;
 	else
 		ring_ops = &ring_ops_hvm;
diff --git a/drivers/xen/xenbus/xenbus_probe.c b/drivers/xen/xenbus/xenbus_probe.c
index b793723..735dd5c 100644
--- a/drivers/xen/xenbus/xenbus_probe.c
+++ b/drivers/xen/xenbus/xenbus_probe.c
@@ -749,7 +749,10 @@ static int __init xenbus_init(void)
 			if (err)
 				goto out_error;
 		}
-		xen_store_interface = mfn_to_virt(xen_store_mfn);
+		if (xen_pvh_domain())
+			xen_store_interface = __va(xen_store_mfn << PAGE_SHIFT);
+		else
+			xen_store_interface = mfn_to_virt(xen_store_mfn);
 	}
 
 	/* Initialize the interface to xenstore. */
-- 
1.7.2.3


From xen-devel-bounces@lists.xen.org Thu Aug 16 01:05:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 01:05:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1oW8-00060b-U7; Thu, 16 Aug 2012 01:04:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1T1oW7-00060B-Bt
	for Xen-devel@lists.xensource.com; Thu, 16 Aug 2012 01:04:55 +0000
Received: from [85.158.143.35:14785] by server-1.bemta-4.messagelabs.com id
	63/81-07754-6374C205; Thu, 16 Aug 2012 01:04:54 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1345079092!14251854!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjkxODU0\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5301 invoked from network); 16 Aug 2012 01:04:54 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-3.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 16 Aug 2012 01:04:54 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7G14pP7024282
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK)
	for <Xen-devel@lists.xensource.com>; Thu, 16 Aug 2012 01:04:52 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7G14pVm017883
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO)
	for <Xen-devel@lists.xensource.com>; Thu, 16 Aug 2012 01:04:51 GMT
Received: from abhmt111.oracle.com (abhmt111.oracle.com [141.146.116.63])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7G14pdl000829
	for <Xen-devel@lists.xensource.com>; Wed, 15 Aug 2012 20:04:51 -0500
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 15 Aug 2012 18:04:51 -0700
Date: Wed, 15 Aug 2012 18:04:49 -0700
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>
Message-ID: <20120815180449.50410028@mantra.us.oracle.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.7.6 (GTK+ 2.18.9; x86_64-redhat-linux-gnu)
Mime-Version: 1.0
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Subject: [Xen-devel] [RFC PATCH 5/8]: PVH: smp changes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


---
 arch/x86/xen/smp.c |   33 +++++++++++++++++++++++++--------
 1 files changed, 25 insertions(+), 8 deletions(-)

diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
index cdf269d..017d7fa 100644
--- a/arch/x86/xen/smp.c
+++ b/arch/x86/xen/smp.c
@@ -68,9 +68,11 @@ static void __cpuinit cpu_bringup(void)
 	touch_softlockup_watchdog();
 	preempt_disable();
 
-	xen_enable_sysenter();
-	xen_enable_syscall();
-
+	/* PVH runs in ring 0 and allows us to do native syscalls. Yay! */
+	if (!xen_pvh_domain()) {
+		xen_enable_sysenter();
+		xen_enable_syscall();
+	}
 	cpu = smp_processor_id();
 	smp_store_cpu_info(cpu);
 	cpu_data(cpu).x86_max_cores = 1;
@@ -230,10 +232,11 @@ static void __init xen_smp_prepare_boot_cpu(void)
 	BUG_ON(smp_processor_id() != 0);
 	native_smp_prepare_boot_cpu();
 
-	/* We've switched to the "real" per-cpu gdt, so make sure the
-	   old memory can be recycled */
-	make_lowmem_page_readwrite(xen_initial_gdt);
-
+	if (!xen_pvh_domain()) {
+		/* We've switched to the "real" per-cpu gdt, so make sure the
+		 * old memory can be recycled */
+		make_lowmem_page_readwrite(xen_initial_gdt);
+	}
 	xen_filter_cpu_maps();
 	xen_setup_vcpu_info_placement();
 }
@@ -312,6 +315,7 @@ cpu_initialize_context(unsigned int cpu, struct task_struct *idle)
 
 	memset(&ctxt->fpu_ctxt, 0, sizeof(ctxt->fpu_ctxt));
 
+	if (!xen_pvh_domain()) {
 		ctxt->user_regs.ds = __USER_DS;
 		ctxt->user_regs.es = __USER_DS;
 
@@ -339,7 +343,20 @@ cpu_initialize_context(unsigned int cpu, struct task_struct *idle)
 					(unsigned long)xen_hypervisor_callback;
 		ctxt->failsafe_callback_eip = 
 					(unsigned long)xen_failsafe_callback;
-
+	} else {
+		ctxt->user_regs.ds = __KERNEL_DS;
+		ctxt->user_regs.es = 0;
+		ctxt->user_regs.gs = 0;
+
+		ctxt->gdt_frames[0] = (unsigned long)gdt;
+		ctxt->gdt_ents = (unsigned long)(GDT_SIZE - 1);
+
+		/* Note: PVH is not supported on x86_32. */
+#ifdef __x86_64__
+		ctxt->gs_base_user = (unsigned long)
+		                         per_cpu(irq_stack_union.gs_base, cpu);
+#endif
+	}
 	ctxt->user_regs.cs = __KERNEL_CS;
 	ctxt->user_regs.esp = idle->thread.sp0 - sizeof(struct pt_regs);
 
-- 
1.7.2.3


From xen-devel-bounces@lists.xen.org Thu Aug 16 01:06:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 01:06:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1oWy-00067x-CC; Thu, 16 Aug 2012 01:05:48 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1T1oWx-00067h-Fl
	for Xen-devel@lists.xensource.com; Thu, 16 Aug 2012 01:05:47 +0000
Received: from [85.158.143.99:29422] by server-1.bemta-4.messagelabs.com id
	A6/02-07754-A674C205; Thu, 16 Aug 2012 01:05:46 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-8.tower-216.messagelabs.com!1345079144!18627614!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjkxODU0\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27301 invoked from network); 16 Aug 2012 01:05:46 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-8.tower-216.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 16 Aug 2012 01:05:46 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7G15hcO024979
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK)
	for <Xen-devel@lists.xensource.com>; Thu, 16 Aug 2012 01:05:44 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7G15hn3025554
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO)
	for <Xen-devel@lists.xensource.com>; Thu, 16 Aug 2012 01:05:43 GMT
Received: from abhmt101.oracle.com (abhmt101.oracle.com [141.146.116.53])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7G15gOT001291
	for <Xen-devel@lists.xensource.com>; Wed, 15 Aug 2012 20:05:42 -0500
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 15 Aug 2012 18:05:42 -0700
Date: Wed, 15 Aug 2012 18:05:41 -0700
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>
Message-ID: <20120815180541.2fddead9@mantra.us.oracle.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.7.6 (GTK+ 2.18.9; x86_64-redhat-linux-gnu)
Mime-Version: 1.0
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Subject: [Xen-devel] [RFC PATCH 6/8]: Ballooning changes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


---
 drivers/xen/balloon.c |   26 +++++++++++++++++++++-----
 1 files changed, 21 insertions(+), 5 deletions(-)

diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
index 31ab82f..57960a1 100644
--- a/drivers/xen/balloon.c
+++ b/drivers/xen/balloon.c
@@ -358,10 +358,18 @@ static enum bp_state increase_reservation(unsigned long nr_pages)
 		BUG_ON(!xen_feature(XENFEAT_auto_translated_physmap) &&
 		       phys_to_machine_mapping_valid(pfn));
 
-		set_phys_to_machine(pfn, frame_list[i]);
-
+		if (!xen_pvh_domain()) {
+			set_phys_to_machine(pfn, frame_list[i]);
+		} else {
+			pte_t *ptep;
+			unsigned int level;
+			void *addr = __va(pfn << PAGE_SHIFT);
+			ptep = lookup_address((unsigned long)addr, &level);
+			set_pte(ptep, pfn_pte(pfn, PAGE_KERNEL));
+		}
 		/* Link back into the page tables if not highmem. */
-		if (xen_pv_domain() && !PageHighMem(page)) {
+		if (xen_pv_domain() && !PageHighMem(page) &&
+		    !xen_pvh_domain()) {
 			int ret;
 			ret = HYPERVISOR_update_va_mapping(
 				(unsigned long)__va(pfn << PAGE_SHIFT),
@@ -417,7 +425,14 @@ static enum bp_state decrease_reservation(unsigned long nr_pages, gfp_t gfp)
 
 		scrub_page(page);
 
-		if (xen_pv_domain() && !PageHighMem(page)) {
+		if (xen_pvh_domain() && !PageHighMem(page)) {
+			unsigned int level;
+			pte_t *ptep;
+			void *addr = __va(pfn << PAGE_SHIFT);
+			ptep = lookup_address((unsigned long)addr, &level);
+			set_pte(ptep, __pte(0));
+
+		} else if (xen_pv_domain() && !PageHighMem(page)) {
 			ret = HYPERVISOR_update_va_mapping(
 				(unsigned long)__va(pfn << PAGE_SHIFT),
 				__pte_ma(0), 0);
@@ -433,7 +448,8 @@ static enum bp_state decrease_reservation(unsigned long nr_pages, gfp_t gfp)
 	/* No more mappings: invalidate P2M and add to balloon. */
 	for (i = 0; i < nr_pages; i++) {
 		pfn = mfn_to_pfn(frame_list[i]);
-		__set_phys_to_machine(pfn, INVALID_P2M_ENTRY);
+		if (!xen_pvh_domain())
+			__set_phys_to_machine(pfn, INVALID_P2M_ENTRY);
 		balloon_append(pfn_to_page(pfn));
 	}
 
-- 
1.7.2.3


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 01:06:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 01:06:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1oXe-0006EM-Qe; Thu, 16 Aug 2012 01:06:30 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1T1oXd-0006E2-My
	for Xen-devel@lists.xensource.com; Thu, 16 Aug 2012 01:06:29 +0000
Received: from [85.158.139.83:3007] by server-6.bemta-5.messagelabs.com id
	0B/BB-22415-4974C205; Thu, 16 Aug 2012 01:06:28 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-12.tower-182.messagelabs.com!1345079186!28354079!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjkxODU0\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24136 invoked from network); 16 Aug 2012 01:06:28 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-12.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 16 Aug 2012 01:06:28 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7G16PLk025574
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK)
	for <Xen-devel@lists.xensource.com>; Thu, 16 Aug 2012 01:06:26 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7G16Oav024544
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO)
	for <Xen-devel@lists.xensource.com>; Thu, 16 Aug 2012 01:06:25 GMT
Received: from abhmt115.oracle.com (abhmt115.oracle.com [141.146.116.67])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7G16ONf001687
	for <Xen-devel@lists.xensource.com>; Wed, 15 Aug 2012 20:06:24 -0500
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 15 Aug 2012 18:06:23 -0700
Date: Wed, 15 Aug 2012 18:06:22 -0700
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>
Message-ID: <20120815180622.0c988f48@mantra.us.oracle.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.7.6 (GTK+ 2.18.9; x86_64-redhat-linux-gnu)
Mime-Version: 1.0
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Subject: [Xen-devel] [RFC PATCH 7/8]: PVH: grant changes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


---
 drivers/xen/gntdev.c      |    2 +-
 drivers/xen/grant-table.c |   26 +++++++++++++++++++++-----
 2 files changed, 22 insertions(+), 6 deletions(-)

diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
index 1ffd03b..ad2d28e 100644
--- a/drivers/xen/gntdev.c
+++ b/drivers/xen/gntdev.c
@@ -802,7 +802,7 @@ static int __init gntdev_init(void)
 	if (!xen_domain())
 		return -ENODEV;
 
-	use_ptemod = xen_pv_domain();
+	use_ptemod = xen_pv_domain() && !xen_pvh_domain();
 
 	err = misc_register(&gntdev_miscdev);
 	if (err != 0) {
diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index 0bfc1ef..2430133 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -974,14 +974,19 @@ static void gnttab_unmap_frames_v2(void)
 static int gnttab_map(unsigned int start_idx, unsigned int end_idx)
 {
 	struct gnttab_setup_table setup;
-	unsigned long *frames;
+	unsigned long *frames, start_gpfn;
 	unsigned int nr_gframes = end_idx + 1;
 	int rc;
 
-	if (xen_hvm_domain()) {
+	if (xen_hvm_domain() || xen_pvh_domain()) {
 		struct xen_add_to_physmap xatp;
 		unsigned int i = end_idx;
 		rc = 0;
+
+		if (xen_hvm_domain())
+			start_gpfn = xen_hvm_resume_frames >> PAGE_SHIFT;
+		else
+			start_gpfn = virt_to_pfn(gnttab_shared.addr);
 		/*
 		 * Loop backwards, so that the first hypercall has the largest
 		 * index, ensuring that the table will grow only once.
@@ -990,7 +995,7 @@ static int gnttab_map(unsigned int start_idx, unsigned int end_idx)
 			xatp.domid = DOMID_SELF;
 			xatp.idx = i;
 			xatp.space = XENMAPSPACE_grant_table;
-			xatp.gpfn = (xen_hvm_resume_frames >> PAGE_SHIFT) + i;
+			xatp.gpfn = start_gpfn + i;
 			rc = HYPERVISOR_memory_op(XENMEM_add_to_physmap, &xatp);
 			if (rc != 0) {
 				printk(KERN_WARNING
@@ -1053,7 +1058,7 @@ static void gnttab_request_version(void)
 	int rc;
 	struct gnttab_set_version gsv;
 
-	if (xen_hvm_domain())
+	if (xen_hvm_domain() || xen_pvh_domain())
 		gsv.version = 1;
 	else
 		gsv.version = 2;
@@ -1081,13 +1086,24 @@ static void gnttab_request_version(void)
 int gnttab_resume(void)
 {
 	unsigned int max_nr_gframes;
+	char *kmsg = "Failed to kmalloc pages for pv in hvm grant frames\n";
 
 	gnttab_request_version();
 	max_nr_gframes = gnttab_max_grant_frames();
 	if (max_nr_gframes < nr_grant_frames)
 		return -ENOSYS;
 
-	if (xen_pv_domain())
+	/* PVH note: Xen frees the existing kmalloc'd mfns in
+	 * XENMEM_add_to_physmap. */
+	if (xen_pvh_domain() && !gnttab_shared.addr) {
+		gnttab_shared.addr =
+			kmalloc(max_nr_gframes * PAGE_SIZE, GFP_KERNEL);
+		if (!gnttab_shared.addr) {
+			printk(KERN_WARNING "%s", kmsg);
+			return -ENOMEM;
+		}
+	}
+	if (xen_pv_domain() || xen_pvh_domain())
 		return gnttab_map(0, nr_grant_frames - 1);
 
 	if (gnttab_shared.addr == NULL) {
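
The grant-table hunks above make two decisions: which starting gpfn the `XENMEM_add_to_physmap` loop targets (an HVM guest uses the frames reserved in `xen_hvm_resume_frames`; a PVH guest uses the pfns backing its kmalloc'd `gnttab_shared.addr` buffer), and which grant-table version to request (v1 for HVM and PVH, v2 otherwise). A toy model of both choices, with a stand-in page shift (the helper names are illustrative):

```c
#include <assert.h>
#include <stdio.h>

#define TOY_PAGE_SHIFT 12

enum dom_type { DOM_PV, DOM_PVH, DOM_HVM };

/* Mirrors gnttab_map() above: pick the first gpfn at which the grant
 * frames are mapped. */
static unsigned long toy_start_gpfn(enum dom_type d,
				    unsigned long hvm_resume_frames,
				    unsigned long pvh_shared_pfn)
{
	return d == DOM_HVM ? hvm_resume_frames >> TOY_PAGE_SHIFT
			    : pvh_shared_pfn;
}

/* Mirrors gnttab_request_version() above: HVM and PVH stay on v1. */
static int toy_grant_version(enum dom_type d)
{
	return (d == DOM_HVM || d == DOM_PVH) ? 1 : 2;
}

int main(void)
{
	assert(toy_start_gpfn(DOM_HVM, 4096, 0) == 1);
	assert(toy_start_gpfn(DOM_PVH, 0, 42) == 42);
	assert(toy_grant_version(DOM_PVH) == 1);
	assert(toy_grant_version(DOM_PV) == 2);
	puts("grant routing ok");
	return 0;
}
```
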
-- 
1.7.2.3


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 01:07:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 01:07:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1oYd-0006PT-Ab; Thu, 16 Aug 2012 01:07:31 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1T1oYb-0006O9-EP
	for Xen-devel@lists.xensource.com; Thu, 16 Aug 2012 01:07:29 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1345079241!1944655!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjkxODU0\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4432 invoked from network); 16 Aug 2012 01:07:23 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-11.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 16 Aug 2012 01:07:23 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7G17IkN026099
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK)
	for <Xen-devel@lists.xensource.com>; Thu, 16 Aug 2012 01:07:19 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7G17Id9020556
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO)
	for <Xen-devel@lists.xensource.com>; Thu, 16 Aug 2012 01:07:18 GMT
Received: from abhmt118.oracle.com (abhmt118.oracle.com [141.146.116.70])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7G17HC1002172
	for <Xen-devel@lists.xensource.com>; Wed, 15 Aug 2012 20:07:17 -0500
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 15 Aug 2012 18:07:17 -0700
Date: Wed, 15 Aug 2012 18:07:16 -0700
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>
Message-ID: <20120815180716.0049bffe@mantra.us.oracle.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.7.6 (GTK+ 2.18.9; x86_64-redhat-linux-gnu)
Mime-Version: 1.0
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Subject: [Xen-devel] [RFC PATCH 8/8]: PVH: privcmd changes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


---
 drivers/xen/privcmd.c |   68 +++++++++++++++++++++++++++++++++++++++++++++++-
 1 files changed, 66 insertions(+), 2 deletions(-)

diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
index ccee0f1..0a240ab 100644
--- a/drivers/xen/privcmd.c
+++ b/drivers/xen/privcmd.c
@@ -33,6 +33,7 @@
 #include <xen/features.h>
 #include <xen/page.h>
 #include <xen/xen-ops.h>
+#include <xen/balloon.h>
 
 #include "privcmd.h"
 
@@ -199,6 +200,10 @@ static long privcmd_ioctl_mmap(void __user *udata)
 	if (!xen_initial_domain())
 		return -EPERM;
 
+	/* PVH: TBD/FIXME. For now we only support privcmd_ioctl_mmap_batch */
+	if (xen_pvh_domain())
+		return -ENOSYS;
+
 	if (copy_from_user(&mmapcmd, udata, sizeof(mmapcmd)))
 		return -EFAULT;
 
@@ -251,6 +256,8 @@ struct mmap_batch_state {
 	xen_pfn_t __user *user;
 };
 
+/* PVH dom0: if domU being created is PV, then mfn is mfn(addr on bus). If
+ * it's PVH then mfn is pfn (input to HAP). */
 static int mmap_batch_fn(void *data, void *state)
 {
 	xen_pfn_t *mfnp = data;
@@ -274,6 +281,40 @@ static int mmap_return_errors(void *data, void *state)
 	return put_user(*mfnp, st->user++);
 }
 
+/* Allocate pfns that are then mapped with gmfns from foreign domid. Update
+ * the vma with the page info to use later.
+ * Returns: 0 if success, otherwise -errno
+ */
+static int pvh_privcmd_resv_pfns(struct vm_area_struct *vma, int numpgs)
+{
+	int rc;
+	struct xen_pvh_sav_pfn_info *savp;
+
+	savp = kzalloc(sizeof(struct xen_pvh_sav_pfn_info), GFP_KERNEL);
+	if (savp == NULL)
+		return -ENOMEM;
+
+	savp->sp_paga = kcalloc(numpgs, sizeof(savp->sp_paga[0]), GFP_KERNEL);
+	if (savp->sp_paga == NULL) {
+		kfree(savp);
+		return -ENOMEM;
+	}
+
+	rc = alloc_xenballooned_pages(numpgs, savp->sp_paga, 0);
+	if (rc != 0) {
+		pr_warn("%s Could not alloc %d pfns rc:%d\n", __func__,
+			numpgs, rc);
+		kfree(savp->sp_paga);
+		kfree(savp);
+		return -ENOMEM;
+	}
+	savp->sp_num_pgs = numpgs;
+	BUG_ON(vma->vm_private_data);
+	vma->vm_private_data = savp;
+
+	return 0;
+}
+
 static struct vm_operations_struct privcmd_vm_ops;
 
 static long privcmd_ioctl_mmap_batch(void __user *udata)
@@ -315,6 +356,12 @@ static long privcmd_ioctl_mmap_batch(void __user *udata)
 		goto out;
 	}
 
+	if (xen_pvh_domain()) {
+		if ((ret = pvh_privcmd_resv_pfns(vma, m.num))) {
+			up_write(&mm->mmap_sem);
+			goto out;
+		}
+	}
 	state.domain = m.dom;
 	state.vma = vma;
 	state.va = m.addr;
@@ -365,6 +412,19 @@ static long privcmd_ioctl(struct file *file,
 	return ret;
 }
 
+static void privcmd_close(struct vm_area_struct *vma)
+{
+	struct xen_pvh_sav_pfn_info *savp;
+
+	if (!xen_pvh_domain() || ((savp = vma->vm_private_data) == NULL))
+		return;
+
+	while (savp->sp_next_todo--) {
+		xen_pfn_t pfn = page_to_pfn(savp->sp_paga[savp->sp_next_todo]);
+		pvh_rem_xen_p2m(pfn, 1);
+	}
+}
+
 static int privcmd_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
 {
 	printk(KERN_DEBUG "privcmd_fault: vma=%p %lx-%lx, pgoff=%lx, uv=%p\n",
@@ -375,13 +435,14 @@ static int privcmd_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
 }
 
 static struct vm_operations_struct privcmd_vm_ops = {
+	.close = privcmd_close,
 	.fault = privcmd_fault
 };
 
 static int privcmd_mmap(struct file *file, struct vm_area_struct *vma)
 {
-	/* Unsupported for auto-translate guests. */
-	if (xen_feature(XENFEAT_auto_translated_physmap))
+	/* Unsupported for auto-translate guests unless PVH */
+	if (xen_feature(XENFEAT_auto_translated_physmap) && !xen_pvh_domain())
 		return -ENOSYS;
 
 	/* DONTCOPY is essential for Xen because copy_page_range doesn't know
@@ -395,6 +456,9 @@ static int privcmd_mmap(struct file *file, struct vm_area_struct *vma)
 
 static int privcmd_enforce_singleshot_mapping(struct vm_area_struct *vma)
 {
+	if (xen_pvh_domain())
+		return (vma->vm_private_data == NULL);
+
 	return (xchg(&vma->vm_private_data, (void *)1) == NULL);
 }
 
-- 
1.7.2.3


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


---
 drivers/xen/privcmd.c |   68 +++++++++++++++++++++++++++++++++++++++++++++++-
 1 files changed, 66 insertions(+), 2 deletions(-)

diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
index ccee0f1..0a240ab 100644
--- a/drivers/xen/privcmd.c
+++ b/drivers/xen/privcmd.c
@@ -33,6 +33,7 @@
 #include <xen/features.h>
 #include <xen/page.h>
 #include <xen/xen-ops.h>
+#include <xen/balloon.h>
 
 #include "privcmd.h"
 
@@ -199,6 +200,10 @@ static long privcmd_ioctl_mmap(void __user *udata)
 	if (!xen_initial_domain())
 		return -EPERM;
 
+	/* PVH: TBD/FIXME. For now we only support privcmd_ioctl_mmap_batch */
+	if (xen_pvh_domain())
+		return -ENOSYS;
+
 	if (copy_from_user(&mmapcmd, udata, sizeof(mmapcmd)))
 		return -EFAULT;
 
@@ -251,6 +256,8 @@ struct mmap_batch_state {
 	xen_pfn_t __user *user;
 };
 
+/* PVH dom0: if the domU being created is PV, then mfn is a true mfn (an
+ * address on the bus). If it is PVH, then mfn is a pfn (input to HAP). */
 static int mmap_batch_fn(void *data, void *state)
 {
 	xen_pfn_t *mfnp = data;
@@ -274,6 +281,40 @@ static int mmap_return_errors(void *data, void *state)
 	return put_user(*mfnp, st->user++);
 }
 
+/* Allocate pfns that are then mapped with gmfns from the foreign domid.
+ * Update the vma with the page info for later use.
+ * Returns 0 on success, otherwise -errno.
+ */
+static int pvh_privcmd_resv_pfns(struct vm_area_struct *vma, int numpgs)
+{
+	int rc;
+	struct xen_pvh_sav_pfn_info *savp;
+
+	savp = kzalloc(sizeof(struct xen_pvh_sav_pfn_info), GFP_KERNEL);
+	if (savp == NULL)
+		return -ENOMEM;
+
+	savp->sp_paga = kcalloc(numpgs, sizeof(savp->sp_paga[0]), GFP_KERNEL);
+	if (savp->sp_paga == NULL) {
+		kfree(savp);
+		return -ENOMEM;
+	}
+
+	rc = alloc_xenballooned_pages(numpgs, savp->sp_paga, 0);
+	if (rc != 0) {
+		pr_warn("%s Could not alloc %d pfns rc:%d\n", __FUNCTION__,
+			numpgs, rc);
+		kfree(savp->sp_paga);
+		kfree(savp);
+		return -ENOMEM;
+	}
+	savp->sp_num_pgs = numpgs;
+	BUG_ON(vma->vm_private_data);
+	vma->vm_private_data = savp;
+
+	return 0;
+}
+
 static struct vm_operations_struct privcmd_vm_ops;
 
 static long privcmd_ioctl_mmap_batch(void __user *udata)
@@ -315,6 +356,12 @@ static long privcmd_ioctl_mmap_batch(void __user *udata)
 		goto out;
 	}
 
+	if (xen_pvh_domain()) {
+		if ((ret=pvh_privcmd_resv_pfns(vma, m.num))) {
+			up_write(&mm->mmap_sem);
+			goto out;
+		}
+	}
 	state.domain = m.dom;
 	state.vma = vma;
 	state.va = m.addr;
@@ -365,6 +412,19 @@ static long privcmd_ioctl(struct file *file,
 	return ret;
 }
 
+static void privcmd_close(struct vm_area_struct *vma)
+{
+	struct xen_pvh_sav_pfn_info *savp;
+
+	if (!xen_pvh_domain() || ((savp=vma->vm_private_data) == NULL))
+		return;
+
+	while (savp->sp_next_todo--) {
+		xen_pfn_t pfn = page_to_pfn(savp->sp_paga[savp->sp_next_todo]);
+		pvh_rem_xen_p2m(pfn, 1);
+	}
+}
+
 static int privcmd_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
 {
 	printk(KERN_DEBUG "privcmd_fault: vma=%p %lx-%lx, pgoff=%lx, uv=%p\n",
@@ -375,13 +435,14 @@ static int privcmd_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
 }
 
 static struct vm_operations_struct privcmd_vm_ops = {
+	.close = privcmd_close,
 	.fault = privcmd_fault
 };
 
 static int privcmd_mmap(struct file *file, struct vm_area_struct *vma)
 {
-	/* Unsupported for auto-translate guests. */
-	if (xen_feature(XENFEAT_auto_translated_physmap))
+	/* Unsupported for auto-translate guests unless PVH */
+	if (xen_feature(XENFEAT_auto_translated_physmap) && !xen_pvh_domain())
 		return -ENOSYS;
 
 	/* DONTCOPY is essential for Xen because copy_page_range doesn't know
@@ -395,6 +456,9 @@ static int privcmd_mmap(struct file *file, struct vm_area_struct *vma)
 
 static int privcmd_enforce_singleshot_mapping(struct vm_area_struct *vma)
 {
+	if (xen_pvh_domain())
+		return (vma->vm_private_data == NULL);
+
 	return (xchg(&vma->vm_private_data, (void *)1) == NULL);
 }
 
-- 
1.7.2.3


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 02:58:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 02:58:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1qHW-0007K8-Ol; Thu, 16 Aug 2012 02:57:58 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xudong.hao@intel.com>) id 1T1qHV-0007K3-0h
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 02:57:57 +0000
Received: from [85.158.139.83:60670] by server-11.bemta-5.messagelabs.com id
	19/00-29296-4B16C205; Thu, 16 Aug 2012 02:57:56 +0000
X-Env-Sender: xudong.hao@intel.com
X-Msg-Ref: server-11.tower-182.messagelabs.com!1345085874!21141086!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzA4MTc4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2906 invoked from network); 16 Aug 2012 02:57:55 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-11.tower-182.messagelabs.com with SMTP;
	16 Aug 2012 02:57:55 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga101.jf.intel.com with ESMTP; 15 Aug 2012 19:57:54 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.77,776,1336374000"; d="scan'208";a="187011557"
Received: from fmsmsx107.amr.corp.intel.com ([10.19.9.54])
	by orsmga002.jf.intel.com with ESMTP; 15 Aug 2012 19:57:53 -0700
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	FMSMSX107.amr.corp.intel.com (10.19.9.54) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Wed, 15 Aug 2012 19:57:52 -0700
Received: from shsmsx102.ccr.corp.intel.com ([169.254.2.196]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.232]) with mapi id
	14.01.0355.002; Thu, 16 Aug 2012 10:57:39 +0800
From: "Hao, Xudong" <xudong.hao@intel.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Thread-Topic: [Xen-devel] [PATCH 1/3] xen/tools: Add 64 bits big bar support
Thread-Index: AQHNerKKEOT7HWjBmke6+J0K3zDOkJdaI7IAgAGbXeA=
Date: Thu, 16 Aug 2012 02:57:39 +0000
Message-ID: <403610A45A2B5242BD291EDAE8B37D300FE8A788@SHSMSX102.ccr.corp.intel.com>
References: <1345013682-20618-1-git-send-email-xudong.hao@intel.com>
	<alpine.DEB.2.02.1208151118270.2278@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1208151118270.2278@kaball.uk.xensource.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>, "Zhang,
	Xiantao" <xiantao.zhang@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 1/3] xen/tools: Add 64 bits big bar support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Stefano Stabellini [mailto:stefano.stabellini@eu.citrix.com]
> Sent: Wednesday, August 15, 2012 6:21 PM
> To: Hao, Xudong
> Cc: xen-devel@lists.xen.org; Ian Jackson; Zhang, Xiantao
> Subject: Re: [Xen-devel] [PATCH 1/3] xen/tools: Add 64 bits big bar support
> 
> On Wed, 15 Aug 2012, Xudong Hao wrote:
> > Currently it is assumed that PCI device BARs live below 4G. If a device
> > has a BAR larger than 4G, it must be mapped at an address above 4G.
> > This patch enables 64-bit big BAR support in hvmloader.
> >
> > Signed-off-by: Xiantao Zhang <xiantao.zhang@intel.com>
> > Signed-off-by: Xudong Hao <xudong.hao@intel.com>
> 
> It is very good to see that this problem has been solved!
> 
> Considering that at this point it is too late for the 4.2 release cycle,
> it might be worth spinning a version of these patches for SeaBIOS and
> upstream QEMU, which now supports PCI passthrough.
> 

Hi Stefano,

Has Xen already switched to SeaBIOS and upstream QEMU? I see that SeaBIOS has not been updated in 5 months.

By upstream QEMU, do you mean this tree: git://git.qemu.org/qemu.git ? I heard that PCI device assignment does not work in this tree, although device hot-add/remove does.

> 
> 
> > diff -r 663eb766cdde tools/firmware/hvmloader/config.h
> > --- a/tools/firmware/hvmloader/config.h	Tue Jul 24 17:02:04 2012 +0200
> > +++ b/tools/firmware/hvmloader/config.h	Thu Jul 26 15:40:01 2012 +0800
> > @@ -53,6 +53,10 @@ extern struct bios_config ovmf_config;
> >  /* MMIO hole: Hardcoded defaults, which can be dynamically expanded. */
> >  #define PCI_MEM_START       0xf0000000
> >  #define PCI_MEM_END         0xfc000000
> > +#define PCI_HIGH_MEM_START  0xa000000000ULL
> > +#define PCI_HIGH_MEM_END    0xf000000000ULL
> > +#define PCI_MIN_MMIO_ADDR   0x80000000
> > +
> >  extern unsigned long pci_mem_start, pci_mem_end;
> >
> >
> > diff -r 663eb766cdde tools/firmware/hvmloader/pci.c
> > --- a/tools/firmware/hvmloader/pci.c	Tue Jul 24 17:02:04 2012 +0200
> > +++ b/tools/firmware/hvmloader/pci.c	Thu Jul 26 15:40:01 2012 +0800
> > @@ -31,24 +31,33 @@
> >  unsigned long pci_mem_start = PCI_MEM_START;
> >  unsigned long pci_mem_end = PCI_MEM_END;
> >
> > +uint64_t pci_high_mem_start = PCI_HIGH_MEM_START;
> > +uint64_t pci_high_mem_end = PCI_HIGH_MEM_END;
> > +
> >  enum virtual_vga virtual_vga = VGA_none;
> >  unsigned long igd_opregion_pgbase = 0;
> >
> >  void pci_setup(void)
> >  {
> > -    uint32_t base, devfn, bar_reg, bar_data, bar_sz, cmd, mmio_total = 0;
> > +    uint8_t is_64bar, using_64bar, bar64_relocate = 0;
> > +    uint32_t devfn, bar_reg, cmd, bar_data, bar_data_upper;
> > +    uint64_t base, bar_sz, bar_sz_upper, mmio_total = 0;
> >      uint32_t vga_devfn = 256;
> >      uint16_t class, vendor_id, device_id;
> >      unsigned int bar, pin, link, isa_irq;
> > +    int64_t mmio_left;
> >
> >      /* Resources assignable to PCI devices via BARs. */
> >      struct resource {
> > -        uint32_t base, max;
> > -    } *resource, mem_resource, io_resource;
> > +        uint64_t base, max;
> > +    } *resource, mem_resource, high_mem_resource, io_resource;
> >
> >      /* Create a list of device BARs in descending order of size. */
> >      struct bars {
> > -        uint32_t devfn, bar_reg, bar_sz;
> > +        uint32_t is_64bar;
> > +        uint32_t devfn;
> > +        uint32_t bar_reg;
> > +        uint64_t bar_sz;
> >      } *bars = (struct bars *)scratch_start;
> >      unsigned int i, nr_bars = 0;
> >
> > @@ -133,23 +142,35 @@ void pci_setup(void)
> >          /* Map the I/O memory and port resources. */
> >          for ( bar = 0; bar < 7; bar++ )
> >          {
> > +            bar_sz_upper = 0;
> >              bar_reg = PCI_BASE_ADDRESS_0 + 4*bar;
> >              if ( bar == 6 )
> >                  bar_reg = PCI_ROM_ADDRESS;
> >
> >              bar_data = pci_readl(devfn, bar_reg);
> > +            is_64bar = !!((bar_data & (PCI_BASE_ADDRESS_SPACE |
> > +                         PCI_BASE_ADDRESS_MEM_TYPE_MASK)) ==
> > +                         (PCI_BASE_ADDRESS_SPACE_MEMORY |
> > +                         PCI_BASE_ADDRESS_MEM_TYPE_64));
> >              pci_writel(devfn, bar_reg, ~0);
> >              bar_sz = pci_readl(devfn, bar_reg);
> >              pci_writel(devfn, bar_reg, bar_data);
> > +
> > +            if (is_64bar) {
> > +                bar_data_upper = pci_readl(devfn, bar_reg + 4);
> > +                pci_writel(devfn, bar_reg + 4, ~0);
> > +                bar_sz_upper = pci_readl(devfn, bar_reg + 4);
> > +                pci_writel(devfn, bar_reg + 4, bar_data_upper);
> > +                bar_sz = (bar_sz_upper << 32) | bar_sz;
> > +            }
> > +            bar_sz &= (((bar_data & PCI_BASE_ADDRESS_SPACE) ==
> > +                        PCI_BASE_ADDRESS_SPACE_MEMORY) ?
> > +                       0xfffffffffffffff0 :
> > +                       (PCI_BASE_ADDRESS_IO_MASK & 0xffff));
> > +            bar_sz &= ~(bar_sz - 1);
> >              if ( bar_sz == 0 )
> >                  continue;
> >
> > -            bar_sz &= (((bar_data & PCI_BASE_ADDRESS_SPACE) ==
> > -                        PCI_BASE_ADDRESS_SPACE_MEMORY) ?
> > -                       PCI_BASE_ADDRESS_MEM_MASK :
> > -                       (PCI_BASE_ADDRESS_IO_MASK & 0xffff));
> > -            bar_sz &= ~(bar_sz - 1);
> > -
> >              for ( i = 0; i < nr_bars; i++ )
> >                  if ( bars[i].bar_sz < bar_sz )
> >                      break;
> > @@ -157,6 +178,7 @@ void pci_setup(void)
> >              if ( i != nr_bars )
> >                  memmove(&bars[i+1], &bars[i], (nr_bars-i) * sizeof(*bars));
> >
> > +            bars[i].is_64bar = is_64bar;
> >              bars[i].devfn   = devfn;
> >              bars[i].bar_reg = bar_reg;
> >              bars[i].bar_sz  = bar_sz;
> > @@ -167,11 +189,8 @@ void pci_setup(void)
> >
> >              nr_bars++;
> >
> > -            /* Skip the upper-half of the address for a 64-bit BAR. */
> > -            if ( (bar_data & (PCI_BASE_ADDRESS_SPACE |
> > -                              PCI_BASE_ADDRESS_MEM_TYPE_MASK)) ==
> > -                 (PCI_BASE_ADDRESS_SPACE_MEMORY |
> > -                  PCI_BASE_ADDRESS_MEM_TYPE_64) )
> > +            /*The upper half is already calculated, skip it! */
> > +            if (is_64bar)
> >                  bar++;
> >          }
> >
> > @@ -193,10 +212,14 @@ void pci_setup(void)
> >          pci_writew(devfn, PCI_COMMAND, cmd);
> >      }
> >
> > -    while ( (mmio_total > (pci_mem_end - pci_mem_start)) &&
> > -            ((pci_mem_start << 1) != 0) )
> > +    while ( mmio_total > (pci_mem_end - pci_mem_start) && pci_mem_start )
> >          pci_mem_start <<= 1;
> >
> > +    if (!pci_mem_start) {
> > +        bar64_relocate = 1;
> > +        pci_mem_start = PCI_MIN_MMIO_ADDR;
> > +    }
> > +
> >      /* Relocate RAM that overlaps PCI space (in 64k-page chunks). */
> >      while ( (pci_mem_start >> PAGE_SHIFT) < hvm_info->low_mem_pgend )
> >      {
> > @@ -218,11 +241,15 @@ void pci_setup(void)
> >          hvm_info->high_mem_pgend += nr_pages;
> >      }
> >
> > +    high_mem_resource.base = pci_high_mem_start;
> > +    high_mem_resource.max = pci_high_mem_end;
> >      mem_resource.base = pci_mem_start;
> >      mem_resource.max = pci_mem_end;
> >      io_resource.base = 0xc000;
> >      io_resource.max = 0x10000;
> >
> > +    mmio_left = pci_mem_end - pci_mem_start;
> > +
> >      /* Assign iomem and ioport resources in descending order of size. */
> >      for ( i = 0; i < nr_bars; i++ )
> >      {
> > @@ -230,13 +257,21 @@ void pci_setup(void)
> >          bar_reg = bars[i].bar_reg;
> >          bar_sz  = bars[i].bar_sz;
> >
> > +        using_64bar = bars[i].is_64bar && bar64_relocate && (mmio_left < bar_sz);
> >          bar_data = pci_readl(devfn, bar_reg);
> >
> >          if ( (bar_data & PCI_BASE_ADDRESS_SPACE) ==
> >               PCI_BASE_ADDRESS_SPACE_MEMORY )
> >          {
> > -            resource = &mem_resource;
> > -            bar_data &= ~PCI_BASE_ADDRESS_MEM_MASK;
> > +            if (using_64bar) {
> > +                resource = &high_mem_resource;
> > +                bar_data &= ~PCI_BASE_ADDRESS_MEM_MASK;
> > +            }
> > +            else {
> > +                resource = &mem_resource;
> > +                bar_data &= ~PCI_BASE_ADDRESS_MEM_MASK;
> > +            }
> > +            mmio_left -= bar_sz;
> >          }
> >          else
> >          {
> > @@ -244,13 +279,14 @@ void pci_setup(void)
> >              bar_data &= ~PCI_BASE_ADDRESS_IO_MASK;
> >          }
> >
> > -        base = (resource->base + bar_sz - 1) & ~(bar_sz - 1);
> > -        bar_data |= base;
> > +        base = (resource->base  + bar_sz - 1) & ~(uint64_t)(bar_sz - 1);
> > +        bar_data |= (uint32_t)base;
> > +        bar_data_upper = (uint32_t)(base >> 32);
> >          base += bar_sz;
> >
> >          if ( (base < resource->base) || (base > resource->max) )
> >          {
> > -            printf("pci dev %02x:%x bar %02x size %08x: no space for "
> > +            printf("pci dev %02x:%x bar %02x size %llx: no space for "
> >                     "resource!\n", devfn>>3, devfn&7, bar_reg, bar_sz);
> >              continue;
> >          }
> > @@ -258,7 +294,9 @@ void pci_setup(void)
> >          resource->base = base;
> >
> >          pci_writel(devfn, bar_reg, bar_data);
> > -        printf("pci dev %02x:%x bar %02x size %08x: %08x\n",
> > +        if (using_64bar)
> > +            pci_writel(devfn, bar_reg + 4, bar_data_upper);
> > +        printf("pci dev %02x:%x bar %02x size %llx: %08x\n",
> >                 devfn>>3, devfn&7, bar_reg, bar_sz, bar_data);
> >
> >          /* Now enable the memory or I/O mapping. */
> >
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
> >

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 02:58:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 02:58:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1qHW-0007K8-Ol; Thu, 16 Aug 2012 02:57:58 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xudong.hao@intel.com>) id 1T1qHV-0007K3-0h
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 02:57:57 +0000
Received: from [85.158.139.83:60670] by server-11.bemta-5.messagelabs.com id
	19/00-29296-4B16C205; Thu, 16 Aug 2012 02:57:56 +0000
X-Env-Sender: xudong.hao@intel.com
X-Msg-Ref: server-11.tower-182.messagelabs.com!1345085874!21141086!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzA4MTc4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2906 invoked from network); 16 Aug 2012 02:57:55 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-11.tower-182.messagelabs.com with SMTP;
	16 Aug 2012 02:57:55 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga101.jf.intel.com with ESMTP; 15 Aug 2012 19:57:54 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.77,776,1336374000"; d="scan'208";a="187011557"
Received: from fmsmsx107.amr.corp.intel.com ([10.19.9.54])
	by orsmga002.jf.intel.com with ESMTP; 15 Aug 2012 19:57:53 -0700
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	FMSMSX107.amr.corp.intel.com (10.19.9.54) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Wed, 15 Aug 2012 19:57:52 -0700
Received: from shsmsx102.ccr.corp.intel.com ([169.254.2.196]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.232]) with mapi id
	14.01.0355.002; Thu, 16 Aug 2012 10:57:39 +0800
From: "Hao, Xudong" <xudong.hao@intel.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Thread-Topic: [Xen-devel] [PATCH 1/3] xen/tools: Add 64 bits big bar support
Thread-Index: AQHNerKKEOT7HWjBmke6+J0K3zDOkJdaI7IAgAGbXeA=
Date: Thu, 16 Aug 2012 02:57:39 +0000
Message-ID: <403610A45A2B5242BD291EDAE8B37D300FE8A788@SHSMSX102.ccr.corp.intel.com>
References: <1345013682-20618-1-git-send-email-xudong.hao@intel.com>
	<alpine.DEB.2.02.1208151118270.2278@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1208151118270.2278@kaball.uk.xensource.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>, "Zhang,
	Xiantao" <xiantao.zhang@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 1/3] xen/tools: Add 64 bits big bar support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Stefano Stabellini [mailto:stefano.stabellini@eu.citrix.com]
> Sent: Wednesday, August 15, 2012 6:21 PM
> To: Hao, Xudong
> Cc: xen-devel@lists.xen.org; Ian Jackson; Zhang, Xiantao
> Subject: Re: [Xen-devel] [PATCH 1/3] xen/tools: Add 64 bits big bar support
> 
> On Wed, 15 Aug 2012, Xudong Hao wrote:
> > Currently it is assumed PCI device BAR access < 4G memory. If there is such a
> > device whose BAR size is larger than 4G, it must access > 4G memory
> address.
> > This patch enable the 64bits big BAR support on hvmloader.
> >
> > Signed-off-by: Xiantao Zhang <xiantao.zhang@intel.com>
> > Signed-off-by: Xudong Hao <xudong.hao@intel.com>
> 
> It is very good to see that this problem has been solved!
> 
> Considering that at this point it is too late for the 4.2 release cycle,
> it might be worth spinning a version of this patches for SeaBIOS and
> upstream QEMU, that now supports PCI passthrough.
> 

Hi, Stefano

Does Xen already switch to SeaBIOS and upstream QEMU? I saw SeaBIOS does not update for 5 months.

You mean upstream QEMU is this tree git://git.qemu.org/qemu.git ? I heard somebody said PCI device assignment does not work for this tree, but device hot-add/remove works.

> 
> 
> > diff -r 663eb766cdde tools/firmware/hvmloader/config.h
> > --- a/tools/firmware/hvmloader/config.h	Tue Jul 24 17:02:04 2012 +0200
> > +++ b/tools/firmware/hvmloader/config.h	Thu Jul 26 15:40:01 2012 +0800
> > @@ -53,6 +53,10 @@ extern struct bios_config ovmf_config;
> >  /* MMIO hole: Hardcoded defaults, which can be dynamically expanded. */
> >  #define PCI_MEM_START       0xf0000000
> >  #define PCI_MEM_END         0xfc000000
> > +#define PCI_HIGH_MEM_START  0xa000000000ULL
> > +#define PCI_HIGH_MEM_END    0xf000000000ULL
> > +#define PCI_MIN_MMIO_ADDR   0x80000000
> > +
> >  extern unsigned long pci_mem_start, pci_mem_end;
> >
> >
> > diff -r 663eb766cdde tools/firmware/hvmloader/pci.c
> > --- a/tools/firmware/hvmloader/pci.c	Tue Jul 24 17:02:04 2012 +0200
> > +++ b/tools/firmware/hvmloader/pci.c	Thu Jul 26 15:40:01 2012 +0800
> > @@ -31,24 +31,33 @@
> >  unsigned long pci_mem_start = PCI_MEM_START;
> >  unsigned long pci_mem_end = PCI_MEM_END;
> >
> > +uint64_t pci_high_mem_start = PCI_HIGH_MEM_START;
> > +uint64_t pci_high_mem_end = PCI_HIGH_MEM_END;
> > +
> >  enum virtual_vga virtual_vga = VGA_none;
> >  unsigned long igd_opregion_pgbase = 0;
> >
> >  void pci_setup(void)
> >  {
> > -    uint32_t base, devfn, bar_reg, bar_data, bar_sz, cmd, mmio_total = 0;
> > +    uint8_t is_64bar, using_64bar, bar64_relocate = 0;
> > +    uint32_t devfn, bar_reg, cmd, bar_data, bar_data_upper;
> > +    uint64_t base, bar_sz, bar_sz_upper, mmio_total = 0;
> >      uint32_t vga_devfn = 256;
> >      uint16_t class, vendor_id, device_id;
> >      unsigned int bar, pin, link, isa_irq;
> > +    int64_t mmio_left;
> >
> >      /* Resources assignable to PCI devices via BARs. */
> >      struct resource {
> > -        uint32_t base, max;
> > -    } *resource, mem_resource, io_resource;
> > +        uint64_t base, max;
> > +    } *resource, mem_resource, high_mem_resource, io_resource;
> >
> >      /* Create a list of device BARs in descending order of size. */
> >      struct bars {
> > -        uint32_t devfn, bar_reg, bar_sz;
> > +        uint32_t is_64bar;
> > +        uint32_t devfn;
> > +        uint32_t bar_reg;
> > +        uint64_t bar_sz;
> >      } *bars = (struct bars *)scratch_start;
> >      unsigned int i, nr_bars = 0;
> >
> > @@ -133,23 +142,35 @@ void pci_setup(void)
> >          /* Map the I/O memory and port resources. */
> >          for ( bar = 0; bar < 7; bar++ )
> >          {
> > +            bar_sz_upper = 0;
> >              bar_reg = PCI_BASE_ADDRESS_0 + 4*bar;
> >              if ( bar == 6 )
> >                  bar_reg = PCI_ROM_ADDRESS;
> >
> >              bar_data = pci_readl(devfn, bar_reg);
> > +            is_64bar = !!((bar_data & (PCI_BASE_ADDRESS_SPACE |
> > +                         PCI_BASE_ADDRESS_MEM_TYPE_MASK)) ==
> > +                         (PCI_BASE_ADDRESS_SPACE_MEMORY |
> > +                         PCI_BASE_ADDRESS_MEM_TYPE_64));
> >              pci_writel(devfn, bar_reg, ~0);
> >              bar_sz = pci_readl(devfn, bar_reg);
> >              pci_writel(devfn, bar_reg, bar_data);
> > +
> > +            if (is_64bar) {
> > +                bar_data_upper = pci_readl(devfn, bar_reg + 4);
> > +                pci_writel(devfn, bar_reg + 4, ~0);
> > +                bar_sz_upper = pci_readl(devfn, bar_reg + 4);
> > +                pci_writel(devfn, bar_reg + 4, bar_data_upper);
> > +                bar_sz = (bar_sz_upper << 32) | bar_sz;
> > +            }
> > +            bar_sz &= (((bar_data & PCI_BASE_ADDRESS_SPACE) ==
> > +                        PCI_BASE_ADDRESS_SPACE_MEMORY) ?
> > +                       0xfffffffffffffff0 :
> > +                       (PCI_BASE_ADDRESS_IO_MASK & 0xffff));
> > +            bar_sz &= ~(bar_sz - 1);
> >              if ( bar_sz == 0 )
> >                  continue;
> >
> > -            bar_sz &= (((bar_data & PCI_BASE_ADDRESS_SPACE) ==
> > -                        PCI_BASE_ADDRESS_SPACE_MEMORY) ?
> > -                       PCI_BASE_ADDRESS_MEM_MASK :
> > -                       (PCI_BASE_ADDRESS_IO_MASK & 0xffff));
> > -            bar_sz &= ~(bar_sz - 1);
> > -
> >              for ( i = 0; i < nr_bars; i++ )
> >                  if ( bars[i].bar_sz < bar_sz )
> >                      break;
> > @@ -157,6 +178,7 @@ void pci_setup(void)
> >              if ( i != nr_bars )
> >                  memmove(&bars[i+1], &bars[i], (nr_bars-i) *
> sizeof(*bars));
> >
> > +            bars[i].is_64bar = is_64bar;
> >              bars[i].devfn   = devfn;
> >              bars[i].bar_reg = bar_reg;
> >              bars[i].bar_sz  = bar_sz;
> > @@ -167,11 +189,8 @@ void pci_setup(void)
> >
> >              nr_bars++;
> >
> > -            /* Skip the upper-half of the address for a 64-bit BAR. */
> > -            if ( (bar_data & (PCI_BASE_ADDRESS_SPACE |
> > -
> PCI_BASE_ADDRESS_MEM_TYPE_MASK)) ==
> > -                 (PCI_BASE_ADDRESS_SPACE_MEMORY |
> > -                  PCI_BASE_ADDRESS_MEM_TYPE_64) )
> > +            /*The upper half is already calculated, skip it! */
> > +            if (is_64bar)
> >                  bar++;
> >          }
> >
> > @@ -193,10 +212,14 @@ void pci_setup(void)
> >          pci_writew(devfn, PCI_COMMAND, cmd);
> >      }
> >
> > -    while ( (mmio_total > (pci_mem_end - pci_mem_start)) &&
> > -            ((pci_mem_start << 1) != 0) )
> > +    while ( mmio_total > (pci_mem_end - pci_mem_start) &&
> pci_mem_start )
> >          pci_mem_start <<= 1;
> >
> > +    if (!pci_mem_start) {
> > +        bar64_relocate = 1;
> > +        pci_mem_start = PCI_MIN_MMIO_ADDR;
> > +    }
> > +
> >      /* Relocate RAM that overlaps PCI space (in 64k-page chunks). */
> >      while ( (pci_mem_start >> PAGE_SHIFT) <
> hvm_info->low_mem_pgend )
> >      {
> > @@ -218,11 +241,15 @@ void pci_setup(void)
> >          hvm_info->high_mem_pgend += nr_pages;
> >      }
> >
> > +    high_mem_resource.base = pci_high_mem_start;
> > +    high_mem_resource.max = pci_high_mem_end;
> >      mem_resource.base = pci_mem_start;
> >      mem_resource.max = pci_mem_end;
> >      io_resource.base = 0xc000;
> >      io_resource.max = 0x10000;
> >
> > +    mmio_left = pci_mem_end - pci_mem_end;
> > +
> >      /* Assign iomem and ioport resources in descending order of size. */
> >      for ( i = 0; i < nr_bars; i++ )
> >      {
> > @@ -230,13 +257,21 @@ void pci_setup(void)
> >          bar_reg = bars[i].bar_reg;
> >          bar_sz  = bars[i].bar_sz;
> >
> > +        using_64bar = bars[i].is_64bar && bar64_relocate && (mmio_left
> < bar_sz);
> >          bar_data = pci_readl(devfn, bar_reg);
> >
> >          if ( (bar_data & PCI_BASE_ADDRESS_SPACE) ==
> >               PCI_BASE_ADDRESS_SPACE_MEMORY )
> >          {
> > -            resource = &mem_resource;
> > -            bar_data &= ~PCI_BASE_ADDRESS_MEM_MASK;
> > +            if (using_64bar) {
> > +                resource = &high_mem_resource;
> > +                bar_data &= ~PCI_BASE_ADDRESS_MEM_MASK;
> > +            }
> > +            else {
> > +                resource = &mem_resource;
> > +                bar_data &= ~PCI_BASE_ADDRESS_MEM_MASK;
> > +            }
> > +            mmio_left -= bar_sz;
> >          }
> >          else
> >          {
> > @@ -244,13 +279,14 @@ void pci_setup(void)
> >              bar_data &= ~PCI_BASE_ADDRESS_IO_MASK;
> >          }
> >
> > -        base = (resource->base + bar_sz - 1) & ~(bar_sz - 1);
> > -        bar_data |= base;
> > +        base = (resource->base  + bar_sz - 1) & ~(uint64_t)(bar_sz - 1);
> > +        bar_data |= (uint32_t)base;
> > +        bar_data_upper = (uint32_t)(base >> 32);
> >          base += bar_sz;
> >
> >          if ( (base < resource->base) || (base > resource->max) )
> >          {
> > -            printf("pci dev %02x:%x bar %02x size %08x: no space for "
> > +            printf("pci dev %02x:%x bar %02x size %llx: no space for "
> >                     "resource!\n", devfn>>3, devfn&7, bar_reg, bar_sz);
> >              continue;
> >          }
> > @@ -258,7 +294,9 @@ void pci_setup(void)
> >          resource->base = base;
> >
> >          pci_writel(devfn, bar_reg, bar_data);
> > -        printf("pci dev %02x:%x bar %02x size %08x: %08x\n",
> > +        if (using_64bar)
> > +            pci_writel(devfn, bar_reg + 4, bar_data_upper);
> > +        printf("pci dev %02x:%x bar %02x size %llx: %08x\n",
> >                 devfn>>3, devfn&7, bar_reg, bar_sz, bar_data);
> >
> >          /* Now enable the memory or I/O mapping. */
> >
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
> >

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 04:35:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 04:35:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1rnE-0008EC-IE; Thu, 16 Aug 2012 04:34:48 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <juergen.gross@ts.fujitsu.com>) id 1T1rnC-0008Dq-5J
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 04:34:46 +0000
Received: from [85.158.143.35:23267] by server-1.bemta-4.messagelabs.com id
	7C/9F-07754-5687C205; Thu, 16 Aug 2012 04:34:45 +0000
X-Env-Sender: juergen.gross@ts.fujitsu.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1345091684!15916555!1
X-Originating-IP: [80.70.172.49]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogODAuNzAuMTcyLjQ5ID0+IDI0NTYzMA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8585 invoked from network); 16 Aug 2012 04:34:44 -0000
Received: from dgate10.ts.fujitsu.com (HELO dgate10.ts.fujitsu.com)
	(80.70.172.49)
	by server-14.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 16 Aug 2012 04:34:44 -0000
DomainKey-Signature: s=s1536a; d=ts.fujitsu.com; c=nofws; q=dns;
	h=X-SBRSScore:X-IronPort-AV:Received:X-IronPort-AV:
	Received:Received:Message-ID:Date:From:Organization:
	User-Agent:MIME-Version:To:CC:Subject:References:
	In-Reply-To:Content-Type:Content-Transfer-Encoding;
	b=K49oKwKsGim+ibaKgzw65wAcBlGE8ocQFxqQNAgz2Ajh4qFa696YWseg
	Pm1dAe4ZRr2JFPCUyWEBROPIqRUN9Fz/6t15yeSc70a/OTTvZr0QOb7gW
	2zoufDKDQCw+HyihNXKifvbcwLjJheIzGuwiHGYTzPNPccY34fCdcx5uw
	p9cdZKFYGqMln3bGaGBIOSAEgiaOpGAAtwhzeU8KOfnC/aaHH8ZCL1o5d
	C3kZnVx8SgjwDQUL4KVvdIUt4SKQB;
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
	d=ts.fujitsu.com; i=@ts.fujitsu.com; q=dns/txt;
	s=s1536b; t=1345091684; x=1376627684;
	h=message-id:date:from:mime-version:to:cc:subject:
	references:in-reply-to:content-transfer-encoding;
	bh=Pk26hhBWJ3Cmv+AVTN8LwdhL80a1qrb63MeBKwbLuWk=;
	b=DfY5rl90K7MshlPjHt277JOP3mNBEAwwfRvNxn3MqmhAN9PfOi+K5PTO
	00WWQAWWjpa4yB20zXPOUTB7nToYtWLDHH0WW8LEmlAdpO6grkkxVjx1p
	8LlahiWZ3yyy6IaBdwKLiIMTtBRtbhHY8yu00vUhBkC9sCJa6KoyASYia
	7g8UZjO5L1WYmfq686PoQiXBW8IdDs4G6W2ELWKJHnXIukI4MikFdMfUM
	323FMOe3HMbCpPSMg/g+FvINbKD8J;
X-SBRSScore: None
X-IronPort-AV: E=Sophos;i="4.77,776,1336341600"; d="scan'208";a="119358083"
Received: from abgdgate40u.abg.fsc.net ([172.25.138.90])
	by dgate10u.abg.fsc.net with ESMTP; 16 Aug 2012 06:34:43 +0200
X-IronPort-AV: E=Sophos;i="4.77,775,1336341600"; d="scan'208";a="142004263"
Received: from sanpedro.mch.fsc.net ([172.17.20.6])
	by abgdgate40u.abg.fsc.net with ESMTP; 16 Aug 2012 06:34:44 +0200
Received: from [172.17.21.50] (verdon.osd.mch.fsc.net [172.17.21.50])
	by sanpedro.mch.fsc.net (Postfix) with ESMTP id D1C8876E11A;
	Thu, 16 Aug 2012 06:34:43 +0200 (CEST)
Message-ID: <502C7863.1020205@ts.fujitsu.com>
Date: Thu, 16 Aug 2012 06:34:43 +0200
From: Juergen Gross <juergen.gross@ts.fujitsu.com>
Organization: Fujitsu Technology Solutions GmbH
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20120613 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: George Dunlap <George.Dunlap@eu.citrix.com>
References: <502A2C590200007800094AF0@nat28.tlf.novell.com>
	<CAFLBxZbB0YLYcbGEPrLab_RS2Y9yjAbTr287yYxh58-PDS24oQ@mail.gmail.com>
In-Reply-To: <CAFLBxZbB0YLYcbGEPrLab_RS2Y9yjAbTr287yYxh58-PDS24oQ@mail.gmail.com>
Cc: Jan Beulich <JBeulich@suse.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 1/2] x86/PoD: prevent guest from being
 destroyed upon early access to its memory
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="iso-8859-1"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Am 15.08.2012 15:50, schrieb George Dunlap:
> On Tue, Aug 14, 2012 at 9:45 AM, Jan Beulich<JBeulich@suse.com>  wrote:
>> x86/PoD: prevent guest from being destroyed upon early access to its memory
>>
>> When an external agent (e.g. a monitoring daemon) happens to access the
>> memory of a PoD guest prior to setting the PoD target, that access must
>> fail for there not being any page in the PoD cache, and only the space
>> above the low 2Mb gets scanned for victim pages (while only the low 2Mb
>> got real pages populated so far).
>>
>> To accommodate this:
>> - set the PoD target first
>> - do all physmap population in PoD mode (i.e. not just large [2Mb or
>>    1Gb] pages)
>> - slightly lift the restrictions enforced by p2m_pod_set_mem_target()
>>    to accommodate the changed tools behavior
>>
>> Tested-by: Jürgen Groß <juergen.gross@ts.fujitsu.com>
>>             (in a 4.0.x based incarnation)
>> Signed-off-by: Jan Beulich<jbeulich@suse.com>
>
> Acked-by: George Dunlap<george.dunlap@eu.citrix.com>
>
> However, my "hg qpush" chokes on the German characters in Juergen's name...

:-)

Feel free to change it to "Juergen Gross <juergen.gross@ts.fujitsu.com>".


Juergen

>
>   -George
>
>>
>> --- a/tools/libxc/xc_hvm_build_x86.c
>> +++ b/tools/libxc/xc_hvm_build_x86.c
>> @@ -160,7 +160,7 @@ static int setup_guest(xc_interface *xch
>>       int pod_mode = 0;
>>
>>       if ( nr_pages > target_pages )
>> -        pod_mode = 1;
>> +        pod_mode = XENMEMF_populate_on_demand;
>>
>>       memset(&elf, 0, sizeof(elf));
>>       if ( elf_init(&elf, image, image_size) != 0 )
>> @@ -197,6 +197,22 @@ static int setup_guest(xc_interface *xch
>>       for ( i = mmio_start >> PAGE_SHIFT; i < nr_pages; i++ )
>>           page_array[i] += mmio_size >> PAGE_SHIFT;
>>
>> +    if ( pod_mode )
>> +    {
>> +        /*
>> +         * Subtract 0x20 from target_pages for the VGA "hole".  Xen will
>> +         * adjust the PoD cache size so that domain tot_pages will be
>> +         * target_pages - 0x20 after this call.
>> +         */
>> +        rc = xc_domain_set_pod_target(xch, dom, target_pages - 0x20,
>> +                                      NULL, NULL, NULL);
>> +        if ( rc != 0 )
>> +        {
>> +            PERROR("Could not set PoD target for HVM guest.\n");
>> +            goto error_out;
>> +        }
>> +    }
>> +
>>       /*
>>        * Allocate memory for HVM guest, skipping VGA hole 0xA0000-0xC0000.
>>        *
>> @@ -208,7 +224,7 @@ static int setup_guest(xc_interface *xch
>>        * ensure that we can be preempted and hence dom0 remains responsive.
>>        */
>>       rc = xc_domain_populate_physmap_exact(
>> -        xch, dom, 0xa0, 0, 0, &page_array[0x00]);
>> +        xch, dom, 0xa0, 0, pod_mode, &page_array[0x00]);
>>       cur_pages = 0xc0;
>>       stat_normal_pages = 0xc0;
>>       while ( (rc == 0) && (nr_pages > cur_pages) )
>> @@ -247,8 +263,7 @@ static int setup_guest(xc_interface *xch
>>                   sp_extents[i] = page_array[cur_pages+(i<<SUPERPAGE_1GB_SHIFT)];
>>
>>               done = xc_domain_populate_physmap(xch, dom, nr_extents, SUPERPAGE_1GB_SHIFT,
>> -                                              pod_mode ? XENMEMF_populate_on_demand : 0,
>> -                                              sp_extents);
>> +                                              pod_mode, sp_extents);
>>
>>               if ( done > 0 )
>>               {
>> @@ -285,8 +300,7 @@ static int setup_guest(xc_interface *xch
>>                       sp_extents[i] = page_array[cur_pages+(i<<SUPERPAGE_2MB_SHIFT)];
>>
>>                   done = xc_domain_populate_physmap(xch, dom, nr_extents, SUPERPAGE_2MB_SHIFT,
>> -                                                  pod_mode ? XENMEMF_populate_on_demand : 0,
>> -                                                  sp_extents);
>> +                                                  pod_mode, sp_extents);
>>
>>                   if ( done > 0 )
>>                   {
>> @@ -302,19 +316,12 @@ static int setup_guest(xc_interface *xch
>>           if ( count != 0 )
>>           {
>>               rc = xc_domain_populate_physmap_exact(
>> -                xch, dom, count, 0, 0, &page_array[cur_pages]);
>> +                xch, dom, count, 0, pod_mode, &page_array[cur_pages]);
>>               cur_pages += count;
>>               stat_normal_pages += count;
>>           }
>>       }
>>
>> -    /* Subtract 0x20 from target_pages for the VGA "hole".  Xen will
>> -     * adjust the PoD cache size so that domain tot_pages will be
>> -     * target_pages - 0x20 after this call. */
>> -    if ( pod_mode )
>> -        rc = xc_domain_set_pod_target(xch, dom, target_pages - 0x20,
>> -                                      NULL, NULL, NULL);
>> -
>>       if ( rc != 0 )
>>       {
>>           PERROR("Could not allocate memory for HVM guest.");
>> --- a/xen/arch/x86/mm/p2m-pod.c
>> +++ b/xen/arch/x86/mm/p2m-pod.c
>> @@ -344,8 +344,9 @@ p2m_pod_set_mem_target(struct domain *d,
>>
>>       pod_lock(p2m);
>>
>> -    /* P == B: Nothing to do. */
>> -    if ( p2m->pod.entry_count == 0 )
>> +    /* P == B: Nothing to do (unless the guest is being created). */
>> +    populated = d->tot_pages - p2m->pod.count;
>> +    if ( populated > 0 && p2m->pod.entry_count == 0 )
>>           goto out;
>>
>>       /* Don't do anything if the domain is being torn down */
>> @@ -357,13 +358,11 @@ p2m_pod_set_mem_target(struct domain *d,
>>       if ( target < d->tot_pages )
>>           goto out;
>>
>> -    populated  = d->tot_pages - p2m->pod.count;
>> -
>>       pod_target = target - populated;
>>
>>       /* B < T': Set the cache size equal to # of outstanding entries,
>>        * let the balloon driver fill in the rest. */
>> -    if ( pod_target > p2m->pod.entry_count )
>> +    if ( populated > 0 && pod_target > p2m->pod.entry_count )
>>           pod_target = p2m->pod.entry_count;
>>
>>       ASSERT( pod_target >= p2m->pod.count );
>>
>>
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel
>>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
>
>


-- 
Juergen Gross                 Principal Developer Operating Systems
PDG ES&S SWE OS6                       Telephone: +49 (0) 89 3222 2967
Fujitsu Technology Solutions              e-mail: juergen.gross@ts.fujitsu.com
Domagkstr. 28                           Internet: ts.fujitsu.com
D-80807 Muenchen                 Company details: ts.fujitsu.com/imprint.html

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 07:52:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 07:52:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1urZ-0001Mb-RK; Thu, 16 Aug 2012 07:51:29 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T1urY-0001MW-9u
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 07:51:28 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1345103481!9342826!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkxOTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24155 invoked from network); 16 Aug 2012 07:51:21 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 07:51:21 -0000
X-IronPort-AV: E=Sophos;i="4.77,778,1336348800"; d="scan'208";a="14032787"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 07:51:21 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 16 Aug 2012 08:51:20 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1T1urQ-0003jH-ML;
	Thu, 16 Aug 2012 07:51:20 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1T1urQ-0003ip-FH;
	Thu, 16 Aug 2012 08:51:20 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13606-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 16 Aug 2012 08:51:20 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13606: tolerable trouble:
	broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13606 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13606/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-win7-amd64  3 host-install(3)         broken pass in 13605
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore           fail pass in 13605
 test-amd64-i386-xl-win7-amd64  7 windows-install            fail pass in 13605
 test-amd64-amd64-pair         3 host-install/src_host(3)  broken pass in 13605
 test-amd64-amd64-pair         4 host-install/dst_host(4)  broken pass in 13605

Regressions which are regarded as allowable (not blocking):
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13605
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13605
 test-amd64-amd64-win          3 host-install(3)              broken like 13604
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13605
 test-amd64-amd64-xl-sedf-pin 12 guest-saverestore.2 fail in 13605 blocked in 13606

Tests which did not succeed, but are not blocking:
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop          fail in 13605 never pass
 test-amd64-amd64-win         16 leak-check/check      fail in 13605 never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop           fail in 13605 never pass

version targeted for testing:
 xen                  6d56e31fe1e1
baseline version:
 xen                  6d56e31fe1e1

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               broken  
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        broken  
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         broken  
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 07:52:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 07:52:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1urZ-0001Mb-RK; Thu, 16 Aug 2012 07:51:29 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T1urY-0001MW-9u
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 07:51:28 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1345103481!9342826!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkxOTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24155 invoked from network); 16 Aug 2012 07:51:21 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 07:51:21 -0000
X-IronPort-AV: E=Sophos;i="4.77,778,1336348800"; d="scan'208";a="14032787"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 07:51:21 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 16 Aug 2012 08:51:20 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1T1urQ-0003jH-ML;
	Thu, 16 Aug 2012 07:51:20 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1T1urQ-0003ip-FH;
	Thu, 16 Aug 2012 08:51:20 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13606-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 16 Aug 2012 08:51:20 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13606: tolerable trouble:
	broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13606 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13606/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-win7-amd64  3 host-install(3)         broken pass in 13605
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore           fail pass in 13605
 test-amd64-i386-xl-win7-amd64  7 windows-install            fail pass in 13605
 test-amd64-amd64-pair         3 host-install/src_host(3)  broken pass in 13605
 test-amd64-amd64-pair         4 host-install/dst_host(4)  broken pass in 13605

Regressions which are regarded as allowable (not blocking):
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13605
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13605
 test-amd64-amd64-win          3 host-install(3)              broken like 13604
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13605
 test-amd64-amd64-xl-sedf-pin 12 guest-saverestore.2 fail in 13605 blocked in 13606

Tests which did not succeed, but are not blocking:
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop          fail in 13605 never pass
 test-amd64-amd64-win         16 leak-check/check      fail in 13605 never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop           fail in 13605 never pass

version targeted for testing:
 xen                  6d56e31fe1e1
baseline version:
 xen                  6d56e31fe1e1

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               broken  
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        broken  
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         broken  
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 08:10:24 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 08:10:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1v9Q-0002I9-6X; Thu, 16 Aug 2012 08:09:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T1v9N-0002Hw-UV
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 08:09:54 +0000
Received: from [85.158.143.99:25996] by server-3.bemta-4.messagelabs.com id
	86/4C-09529-1DAAC205; Thu, 16 Aug 2012 08:09:53 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-216.messagelabs.com!1345104592!18822530!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkxOTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1505 invoked from network); 16 Aug 2012 08:09:52 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-14.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 08:09:52 -0000
X-IronPort-AV: E=Sophos;i="4.77,778,1336348800"; d="scan'208";a="14033129"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 08:09:51 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 16 Aug 2012 09:09:51 +0100
Message-ID: <1345104590.27489.5.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Deepti kulkarni <deepti.kdeeps@gmail.com>
Date: Thu, 16 Aug 2012 09:09:50 +0100
In-Reply-To: <CABGrkh9R75j+Dkc0X-Ue_tj23rFwZOMfTxZ-sJC7AWGp3ab2zQ@mail.gmail.com>
References: <CABGrkh9R75j+Dkc0X-Ue_tj23rFwZOMfTxZ-sJC7AWGp3ab2zQ@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Debian kernel 3.1 onwards fails to boot on Xen domU
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-16 at 00:29 +0100, Deepti kulkarni wrote:
> I am trying to debug the Debian bug for Xen:
> http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=656792.
> 
> The domU Debian machine (HVM and PV) fails to boot and drops to the
> initramfs prompt. The error is "Unable to find a medium containing a
> live image system".
> 
> I did a cat on /proc/modules. Looks like the Xen drivers are not
> loaded.
> 
> A modprobe of xen_blkfront doesn't help the boot either.
> 
> Any workaround suggestions? Debian kernel 3.0 works fine.

There are several suggested workarounds and diagnostics to try in that
bug log -- have you tried them?

That said, I'm not necessarily convinced your issue is related to that
bug, which was quite PVHVM-specific (yet you mention having the problem
on PV too).

If none of the suggested workarounds help then I would suggest making a
full bug report (with logs etc) as described in
http://wiki.xen.org/wiki/Reporting_Bugs_against_Xen

In particular, full guest logs and a description of the installation
method you used would be useful.
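A hedged sketch of the kind of in-guest check being discussed above (the paths are standard Linux, but the xen_* frontend modules will only appear inside a Xen guest, so this is illustrative rather than from the thread):

```shell
# List any loaded Xen frontend modules (xen_blkfront, xen_netfront, ...);
# prints a note instead if none are loaded.
grep '^xen' /proc/modules 2>/dev/null || echo "no xen modules loaded"
```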

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 08:11:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 08:11:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1vAj-0002Mt-LB; Thu, 16 Aug 2012 08:11:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T1vAi-0002Mj-RD
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 08:11:17 +0000
Received: from [85.158.143.99:25598] by server-1.bemta-4.messagelabs.com id
	42/3C-07754-42BAC205; Thu, 16 Aug 2012 08:11:16 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-9.tower-216.messagelabs.com!1345104672!28472175!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16093 invoked from network); 16 Aug 2012 08:11:15 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-9.tower-216.messagelabs.com with SMTP;
	16 Aug 2012 08:11:15 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 16 Aug 2012 09:11:12 +0100
Message-Id: <502CC76602000078000955B0@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Thu, 16 Aug 2012 09:11:50 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Mark van Dijk" <lists+xen@internecto.net>
References: <20120815223909.1eb7a0cd@internecto.net>
In-Reply-To: <20120815223909.1eb7a0cd@internecto.net>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__PartE9D89856.1__="
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] E5606 with no HVM;
 Assertion 'i == 1' failed at p2m-ept.c:524
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__PartE9D89856.1__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

>>> On 15.08.12 at 22:39, Mark van Dijk <lists+xen@internecto.net> wrote:
> (XEN) Xen call trace:
> (XEN)    [<ffff82c4801e1be9>] ept_get_entry+0x13a/0x28f
> (XEN)    [<ffff82c4801da274>] __get_gfn_type_access+0x175/0x256
> (XEN)    [<ffff82c4801b5a55>] hvm_hap_nested_page_fault+0x133/0x422
> (XEN)    [<ffff82c4801d3c8e>] vmx_vmexit_handler+0x136b/0x1614
> 
> The entire log from boot to crash can be viewed at the following link:
> http://pastebin.com/5wcH7GWR

Unfortunately quite a few of the registers contain values that
could sensibly be "i". Could you either disassemble the
instructions around the place yourself to find out which one it
is, or make the xen-syms file corresponding to this run available?

I'm suspecting "i" to actually be 2, and the code not having got
updated when 1Gb page support got added to PoD. You could
hence alternatively also try the attached debugging patch,
which - if my guess is right - may at once fix your problem.

However, because of this, outright claiming that HVM doesn't
work seems a little harsh - did you try running your guests
without the use of PoD (i.e. with memory == maxmem)?
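For reference, disabling PoD in a guest configuration just means making the static allocation equal to the maximum. A minimal, hypothetical config fragment (the 2048 value is an arbitrary example, not taken from this thread):

```
# xl/xm guest config sketch: with memory equal to maxmem, no
# populate-on-demand entries exist, so ept_get_entry never takes
# the PoD superpage path that trips this assertion.
memory = 2048
maxmem = 2048
```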

Jan


--=__PartE9D89856.1__=
Content-Type: text/plain; name="ept-pod-1Gb-assert.patch"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment; filename="ept-pod-1Gb-assert.patch"

--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -521,13 +521,13 @@ static mfn_t ept_get_entry(struct p2m_do
             }
 
             /* Populate this superpage */
-            ASSERT(i == 1);
+if(i >= 2) printk("PoD[%lx] level=%d\n", gfn, i);//temp
+            ASSERT(i <= 2);
 
             index = gfn_remainder >> ( i * EPT_TABLE_ORDER);
             ept_entry = table + index;
 
-            if ( !p2m_pod_demand_populate(p2m, gfn, 
-                                            PAGE_ORDER_2M, q) )
+            if ( !p2m_pod_demand_populate(p2m, gfn, i * EPT_TABLE_ORDER, q) )
                 goto retry;
             else
                 goto out;
--=__PartE9D89856.1__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__PartE9D89856.1__=--


From xen-devel-bounces@lists.xen.org Thu Aug 16 08:13:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 08:13:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1vCC-0002Uh-4M; Thu, 16 Aug 2012 08:12:48 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T1vCA-0002UF-UC
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 08:12:47 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1345104760!9587939!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkxOTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26412 invoked from network); 16 Aug 2012 08:12:41 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 08:12:41 -0000
X-IronPort-AV: E=Sophos;i="4.77,778,1336348800"; d="scan'208";a="14033193"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 08:12:40 +0000
From xen-devel-bounces@lists.xen.org Thu Aug 16 08:13:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 08:13:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1vCC-0002Uh-4M; Thu, 16 Aug 2012 08:12:48 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T1vCA-0002UF-UC
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 08:12:47 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1345104760!9587939!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkxOTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26412 invoked from network); 16 Aug 2012 08:12:41 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 08:12:41 -0000
X-IronPort-AV: E=Sophos;i="4.77,778,1336348800"; d="scan'208";a="14033193"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 08:12:40 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 16 Aug 2012 09:12:40 +0100
Message-ID: <1345104759.27489.7.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Attilio Rao <attilio.rao@citrix.com>
Date: Thu, 16 Aug 2012 09:12:39 +0100
In-Reply-To: <502BEEAD.7030906@citrix.com>
References: <1344947062-4796-1-git-send-email-attilio.rao@citrix.com>
	<1344947062-4796-2-git-send-email-attilio.rao@citrix.com>
	<alpine.DEB.2.02.1208151824250.2278@kaball.uk.xensource.com>
	<502BD943.8030701@citrix.com>
	<alpine.DEB.2.02.1208151844230.2278@kaball.uk.xensource.com>
	<502BEEAD.7030906@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH v2 1/2] XEN,
 X86: Improve semantic support for pagetable_reserve PVOPS
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-08-15 at 19:47 +0100, Attilio Rao wrote:
> On 15/08/12 18:46, Stefano Stabellini wrote:
> > On Wed, 15 Aug 2012, Attilio Rao wrote:
> >    
> >> On 15/08/12 18:25, Stefano Stabellini wrote:
> >>      
> >>> On Tue, 14 Aug 2012, Attilio Rao wrote:
> >>>
> >>>        
> >>>> - Allow xen_mapping_pagetable_reserve() to handle a start different from
> >>>>     pgt_buf_start, but still bigger than it.
> >>>> - Add checks to xen_mapping_pagetable_reserve() and native_pagetable_reserve()
> >>>>     for verifying start and end are contained in the range
> >>>>     [pgt_buf_start, pgt_buf_top].
> >>>> - In xen_mapping_pagetable_reserve(), change printk into pr_debug.
> >>>> - In xen_mapping_pagetable_reserve(), print out diagnostic only if there is
> >>>>     an actual need to do that (or, in other words, if there are actually some
> >>>>     pages going to switch from RO to RW).
> >>>>
> >>>> Signed-off-by: Attilio Rao<attilio.rao@citrix.com>
> >>>> ---
> >>>>    arch/x86/mm/init.c |    4 ++++
> >>>>    arch/x86/xen/mmu.c |   22 ++++++++++++++++++++--
> >>>>    2 files changed, 24 insertions(+), 2 deletions(-)
> >>>>
> >>>> diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
> >>>> index e0e6990..c5849b6 100644
> >>>> --- a/arch/x86/mm/init.c
> >>>> +++ b/arch/x86/mm/init.c
> >>>> @@ -92,6 +92,10 @@ static void __init find_early_table_space(struct map_range *mr, unsigned long en
> >>>>
> >>>>    void __init native_pagetable_reserve(u64 start, u64 end)
> >>>>    {
> >>>> +	if (start < PFN_PHYS(pgt_buf_start) || end > PFN_PHYS(pgt_buf_top))
> >>>> +		panic("Invalid address range: [%llu - %llu] should be a subset of [%llu - %llu]\n"
> >>>>
> >>>>          
> >>> code style (you can check whether your patch breaks the code style with
> >>> scripts/checkpatch.pl)
> >>>
> >>>        
> >> I actually did, before submitting; it reported 0 errors/warnings.
> >>      
> > strange, that really looks like a line over 80 chars
> >
> >    
> 
> Actually, the coding style explicitly says not to break strings, so that 
> the ability to grep for them is retained. It is the same in FreeBSD, and 
> I think this is why checkpatch doesn't whine. I don't think there is a 
> bug here.

Right, CodingStyle changed a little while ago from a strict 80-column
limit to a strongly preferred 80 columns, with an explicit exception
for user-visible strings.

> 
> Can I submit the patch as it is, then?
> 
> Attilio
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 08:21:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 08:21:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1vKl-0002o9-BW; Thu, 16 Aug 2012 08:21:39 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T1vKk-0002o4-0E
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 08:21:38 +0000
Received: from [85.158.139.83:4566] by server-11.bemta-5.messagelabs.com id
	4E/79-29296-19DAC205; Thu, 16 Aug 2012 08:21:37 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-14.tower-182.messagelabs.com!1345105296!24148334!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
	MAILTO_TO_SPAM_ADDR
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26151 invoked from network); 16 Aug 2012 08:21:36 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-14.tower-182.messagelabs.com with SMTP;
	16 Aug 2012 08:21:36 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 16 Aug 2012 09:21:35 +0100
Message-Id: <502CC9D702000078000955B9@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Thu, 16 Aug 2012 09:22:15 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "tupeng212" <tupeng212@gmail.com>
References: <502A3BBC0200007800094B68@nat28.tlf.novell.com>,
	<2012081522045495397713@gmail.com> <2012081522121039050717@gmail.com>
In-Reply-To: <2012081522121039050717@gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Yang Z Zhang <yang.z.zhang@intel.com>, Tim Deegan <tim@xen.org>,
	Keir Fraser <keir@xen.org>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH,
 RFC v2] x86/HVM: assorted RTC emulation adjustments (was Re: Big
 Bug:Time in VM goes slower...)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 15.08.12 at 16:12, tupeng212 <tupeng212@gmail.com> wrote:
> The results for me were these:
> 1. In my real application environment, it worked very well for the first 
> 5 minutes, much better than before, but eventually it lagged again. I 
> don't know whether that is due to the two missing functions; I lack the 
> ability to figure them out.

Did you check whether the guest kernel possibly started flipping
between two rate values? If so, that is something that we could
deal with too (yet it might be a bit involved, so I'd like to avoid
going that route if it would lead nowhere).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 08:31:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 08:31:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1vTz-0002yK-Ei; Thu, 16 Aug 2012 08:31:11 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T1vTx-0002yC-Q8
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 08:31:09 +0000
Received: from [85.158.143.35:22633] by server-2.bemta-4.messagelabs.com id
	88/A5-31966-DCFAC205; Thu, 16 Aug 2012 08:31:09 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1345105850!5880229!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28721 invoked from network); 16 Aug 2012 08:30:51 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-9.tower-21.messagelabs.com with SMTP;
	16 Aug 2012 08:30:51 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 16 Aug 2012 09:30:49 +0100
Message-Id: <502CCC0002000078000955C4@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Thu, 16 Aug 2012 09:31:28 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ben Guthro" <ben@guthro.net>
References: <CAOvdn6WwgBD-2yf1uu3m9-16=Tb5KUrybXf2KibtnxKbP7YtwA@mail.gmail.com>
	<CAOvdn6UBW-yTOMd3YSf5M9JiHLRivyaKL-qzcKG8epszqaKmGg@mail.gmail.com>
	<20120807163320.GC15053@phenom.dumpdata.com>
	<CAOvdn6XcRGKtJxGwJO8J+ZBJr89wfTLc1woW5rhtMM540H9h1w@mail.gmail.com>
	<CAOvdn6XUoSeGoo7X=AJ1kpDxZB9HZEv+TJ4v-=FHcyR4=CHK-w@mail.gmail.com>
	<502240F302000078000937A6@nat28.tlf.novell.com>
	<CAOvdn6WN3kxC8Bb9g6MpfLULu47EGSGCtajPc-SFeJ-8VSUQDQ@mail.gmail.com>
	<CAOvdn6Xk1inm1hU55i0phU2qvSWyJLNoyDdn43biBAY2OewPtw@mail.gmail.com>
	<5023F5400200007800093F4E@nat28.tlf.novell.com>
	<CAOvdn6U-f3C4U8xVPvDyRFkFFLkbfiCXg_n+9zgA9V5u+q5jfw@mail.gmail.com>
	<5023F8B80200007800093F8C@nat28.tlf.novell.com>
	<CAOvdn6XKHfete_+=Hkvt20G7_cJ-4i8+9j9YD30OxzQ16Ru-+g@mail.gmail.com>
	<5024CB720200007800094124@nat28.tlf.novell.com>
	<CAOvdn6VVJdhWVYa-gNVt3euE4AbGi1TijUCUd1wGzv0SCWEUYA@mail.gmail.com>
	<CAOvdn6USnG9Y4MNBhrDZmob0oJ8BgFfKTvFEhcKbXDxNmzfZ-Q@mail.gmail.com>
	<502B75BA0200007800094FD4@nat28.tlf.novell.com>
	<CAOvdn6Ww5oEj6Xg=XT4dWCYWUBuOdPqLz-CqNjz4HmCScE+==g@mail.gmail.com>
	<CAOvdn6Wsjds_dmZ0dwDTYmD-o-OqjBH5-b9mitAuUADE+A4_ag@mail.gmail.com>
	<502BB8FD02000078000951E0@nat28.tlf.novell.com>
	<CAOvdn6VD6bp13F-tvYdkBsj4hkhE7PTGZrapndtdHrDKeZqccg@mail.gmail.com>
In-Reply-To: <CAOvdn6VD6bp13F-tvYdkBsj4hkhE7PTGZrapndtdHrDKeZqccg@mail.gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, john.baboval@citrix.com,
	Thomas Goetz <thomas.goetz@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen4.2 S3 regression?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 15.08.12 at 15:11, Ben Guthro <ben@guthro.net> wrote:
> On Wed, Aug 15, 2012 at 8:58 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>>>> On 15.08.12 at 14:32, Ben Guthro <ben@guthro.net> wrote:
>>> This was encouraging, so I tried the same change against the tree
>>> tip...unfortunately that didn't go as well.
>>>
>>> tip + evtchn_move_pirqs() removal:
>>> did not resume from the first suspend.
>>
>> Any logs of this (i.e. indications of what's going wrong - still
>> the same AHCI not working, but else apparently fine)?
> 
> This is a bit strange, in that the observed behavior changes when I am
> logging to the serial connection.
> 
> When I am logging to serial, the failure is the same as before -
> The first suspend / resume works -
> The second fails with AHCI not working

And this is with and/or without the evtchn_move_pirqs() calls
removed? Otherwise, this might allow us at least debugging
that part of the problem.

> However, when I am not logging to serial - the system goes down, but
> never comes back up. I cannot ssh in, and no graphics are displayed on
> the screen. My only recourse is to hard power cycle the machine.
> 
> This, of course, makes collecting logs difficult.

Did you try whether anything would make it out to the screen
when you allow Xen to continue to access the video buffer even
post-boot? Quite possibly for this it would be advantageous (or
even required, as I don't think the video mode would get restored
after coming back out of S3) to also only use a (simpler) text mode
console (requiring that you have this up on the screen when you
invoke S3, i.e. you'd have to switch away from or not make use of
the GUI). That would be "vga=text-80x25,keep" on the Xen
command line (while 80x50 or 80x60 would allow for more output
to remain visible, even those modes would - I think - need code
to be added to get restored during resume from S3).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 08:40:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 08:40:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1vcP-0003Ab-F1; Thu, 16 Aug 2012 08:39:53 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <koverstreet@google.com>) id 1T1jpK-0006lD-1O
	for xen-devel@lists.xen.org; Wed, 15 Aug 2012 20:04:26 +0000
Received: from [85.158.143.99:10432] by server-3.bemta-4.messagelabs.com id
	85/85-09529-9C00C205; Wed, 15 Aug 2012 20:04:25 +0000
X-Env-Sender: koverstreet@google.com
X-Msg-Ref: server-10.tower-216.messagelabs.com!1345061063!21987854!1
X-Originating-IP: [209.85.161.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26967 invoked from network); 15 Aug 2012 20:04:24 -0000
Received: from mail-gg0-f173.google.com (HELO mail-gg0-f173.google.com)
	(209.85.161.173)
	by server-10.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Aug 2012 20:04:24 -0000
Received: by ggna5 with SMTP id a5so2469563ggn.32
	for <xen-devel@lists.xen.org>; Wed, 15 Aug 2012 13:04:23 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20120113;
	h=date:from:to:cc:subject:message-id:references:mime-version
	:content-type:content-disposition:in-reply-to:user-agent;
	bh=E6EJpatzDW9UawL4kmaKcRAMclPMZBy8hbCpRnppNjI=;
	b=ZDLoXLgi4N60Ws8uDtQF4LnKSh6aLlKW1B0zNPnIktwyZuQxE6ehpchehk3s08wEPf
	L9LaCU1AGIw49dsapjgF8oT1n2B+dcGwV6VQDLd+pjLUJXsdGyMhHORepuWYAHiLD612
	HXDqf7H1HPz4AYv+QLpCCGfCxWo4i6JhEycP/QqTTvVKDxsuo36doIBLihHN3aL3k+Z1
	x5tCcMv/bUaFF+2yL3Hty0ChOsGci22EN5BbLIdeC6oxCUkjXULij6IWrwA5uIHz2m/9
	Z9LP4susqj5KAD4lcYbclpPV0PgRyGJo7G5VgyRz94Fkr2/RXvDn899nIto+iMmn8nG5
	TA1g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=date:from:to:cc:subject:message-id:references:mime-version
	:content-type:content-disposition:in-reply-to:user-agent
	:x-gm-message-state;
	bh=E6EJpatzDW9UawL4kmaKcRAMclPMZBy8hbCpRnppNjI=;
	b=mXCYQwvbHbEuUG2hxhQIRcpQDJBKOgYCpCb7gE4P0OqQoeajHrT168f9eFN3LhkqNj
	rll2kuj6aL6HSOgtWqWR+YxoCWuHfrUiBNWrC2vc1QvzAcHBo07zv1+MeabE04A5kvnQ
	lf7uhcyWrkKYouW1LlEHtUeiEYxKUJB5tKxeS+npkB/mPa5+Dl/92bHsQ6YsNrjQp+MH
	4Ji3DK3gim8Wm2l3AHyI9o/gWDiHEIN6esh/GoJzrdXY6b3eAI0vFLewqwE6INmrGsx2
	y0S7X8A7cS4uUDVsfj1iFFcQGrbZGTa33zy0O1mURlZ3S9D7HKiNBJCEhAKraW1/XMjq
	zmFQ==
Received: by 10.66.77.71 with SMTP id q7mr26234715paw.0.1345061062636;
	Wed, 15 Aug 2012 13:04:22 -0700 (PDT)
Received: by 10.66.77.71 with SMTP id q7mr26234700paw.0.1345061062497;
	Wed, 15 Aug 2012 13:04:22 -0700 (PDT)
Received: from google.com ([2620:0:1000:2300:be30:5bff:fed1:fc17])
	by mx.google.com with ESMTPS id pj8sm1011169pbb.60.2012.08.15.13.04.20
	(version=SSLv3 cipher=OTHER); Wed, 15 Aug 2012 13:04:21 -0700 (PDT)
Date: Wed, 15 Aug 2012 13:04:18 -0700
From: Kent Overstreet <koverstreet@google.com>
To: Joseph Glanville <joseph.glanville@orionvm.com.au>
Message-ID: <20120815200418.GA2758@google.com>
References: <6035A0D088A63A46850C3988ED045A4B299F67FA@BITCOM1.int.sbss.com.au>
	<CAOzFzEhna3CaBE28aHVX_ZoNLDEa6AhArHPcB9240Ni4jh9PYA@mail.gmail.com>
	<6035A0D088A63A46850C3988ED045A4B299F6A9B@BITCOM1.int.sbss.com.au>
	<CAOzFzEhF+Pb89PNoibBc-9_Db6OUiyG5R8mKkAPi5pns4+CsAg@mail.gmail.com>
	<20120813213455.GC6887@google.com>
	<6035A0D088A63A46850C3988ED045A4B299F80E1@BITCOM1.int.sbss.com.au>
	<CAOzFzEitP95cy4LKJD+H1ffBLP_OjxWPTYCEd=XNkb-i5Mz39w@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAOzFzEitP95cy4LKJD+H1ffBLP_OjxWPTYCEd=XNkb-i5Mz39w@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Gm-Message-State: ALoCoQnknohO2iRlAq4gJiuLk/aRpOZAL1Mc6OufTxWl3VRauN6emby8+3BQhmYKX+7rlaq4jIHl/qJe5+ODpMxm0wNd9Wn2mg0axTJdTky8GX9UQpaL3VyXYIiBW2jGTELefGMmmkkg34ba6y+snziilYGkQqXMYffPQwgYB0x2a6fp2gKvyDngq5v8b4gDglrJK/G5OlfC
X-Mailman-Approved-At: Thu, 16 Aug 2012 08:39:52 +0000
Cc: "linux-bcache@vger.kernel.org" <linux-bcache@vger.kernel.org>,
	James Harper <james.harper@bendigoit.com.au>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] blkback and bcache
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Aug 15, 2012 at 10:40:08PM +1000, Joseph Glanville wrote:
> Hi James, Kent.
> 
> I can confirm this is the issue I was seeing, thanks for locating the
> blkback issue!

Cool!

> Is there anything xen-devel should be doing about this? I wouldn't
> expect blkback to care about block size...

Well, I wouldn't be surprised if Windows doesn't work on a device with
a block size > 512 bytes. But Linux (ext4 at least) certainly does work
with 4k blocks - unless maybe it was breaking on something in the boot
process?

So it sounds like this might be indicative of a bug in blkback.
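The failure mode under discussion can be sketched as follows (an illustrative check, not the actual blkback code): block I/O requests are expressed in 512-byte sectors, so a backend sitting on a device with a larger logical block size only works when every request happens to be aligned to that block size.

```python
def request_is_aligned(sector, nsectors, logical_block_size):
    """Illustrative sketch (not the actual blkback code): requests are
    expressed in 512-byte sectors, so a request only fits a device with
    a larger logical block size when both its start offset and its
    length are multiples of that block size."""
    bytes_per_sector = 512
    start = sector * bytes_per_sector
    length = nsectors * bytes_per_sector
    return start % logical_block_size == 0 and length % logical_block_size == 0
```

A single-sector write at sector 1 fails this check on a 4k device, while any request passes on a 512-byte device - which is why reformatting with `make-bcache --block 512` makes the symptom disappear.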

> Joseph.
> 
> On 14 August 2012 09:30, James Harper <james.harper@bendigoit.com.au> wrote:
> >>
> >> Just mentioned this in the other thread, but if this is due to the 4k blocksize -
> >> that's easy to fix: just format with 512 byte blocksize
> >>
> >> make-bcache --block 512
> >>
> >> Maybe I should change the default.
> >
> > I suggest making the default 512, but also print a warning if the user didn't explicitly set it, e.g. "Block size not set - defaulting to 512 bytes"
> >
> > James
> > --
> > To unsubscribe from this list: send the line "unsubscribe linux-bcache" in
> > the body of a message to majordomo@vger.kernel.org
> > More majordomo info at  http://vger.kernel.org/majordomo-info.html
> 
> 
> 
> -- 
> CTO | Orion Virtualisation Solutions | www.orionvm.com.au
> Phone: 1300 56 99 52 | Mobile: 0428 754 846
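James's suggestion could be sketched like this (a hypothetical illustration; the real make-bcache is a C utility and its option handling may differ):

```python
import argparse

def parse_block_size(argv):
    # Hypothetical sketch of the suggested behaviour: default the bcache
    # block size to 512 bytes, but warn when the user did not set it
    # explicitly on the command line.
    parser = argparse.ArgumentParser(prog="make-bcache")
    parser.add_argument("--block", type=int, default=None)
    args = parser.parse_args(argv)
    if args.block is None:
        print("Block size not set - defaulting to 512 bytes")
        return 512
    return args.block
```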

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 08:43:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 08:43:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1vg5-0003I5-3f; Thu, 16 Aug 2012 08:43:41 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lars.kurth.xen@gmail.com>) id 1T1vg3-0003Hw-8O
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 08:43:39 +0000
X-Env-Sender: lars.kurth.xen@gmail.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1345106609!1877000!1
X-Originating-IP: [209.85.215.45]
X-SpamReason: No, hits=0.9 required=7.0 tests=HTML_40_50,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13222 invoked from network); 16 Aug 2012 08:43:30 -0000
Received: from mail-lpp01m010-f45.google.com (HELO
	mail-lpp01m010-f45.google.com) (209.85.215.45)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 08:43:30 -0000
Received: by lagz14 with SMTP id z14so1443948lag.32
	for <xen-devel@lists.xen.org>; Thu, 16 Aug 2012 01:43:29 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=slA0iXQ0UGMkbGXRwWJTIb4aBZGA30fBkcrWnqM0AtY=;
	b=r4yciV+bkeRjSz5ToP5VyCFaPjIa7+2V/lTIii+hFia6bIza8ZmRo9L9hRajRJCdBY
	kYvEWn3vITrFZFaSECaW3DPlE96gW2AODJiddO4ToUJ0e4F1EvWyiQrqKHLDKYQqnQll
	n6HtupzMGbC4yQEDFMcPJ0AkJEyvMei6IzCg7rZU3GDnyfjCZzgl0RwMPL74I6gjQWCl
	JCF/bmFac/7bDKpcFuzQYuZtK3XvIYJWzxWK36UPRAmAzeqhf+S3R8or4WeRKZ0tlZI7
	5/wtslYLCc9r5lMAGLDcxKlT6Fu6bDPhOLf5thSIzveimMGWa0aowrzI9Y8mqxYzzH5G
	bnYw==
MIME-Version: 1.0
Received: by 10.152.110.70 with SMTP id hy6mr425371lab.44.1345106608943; Thu,
	16 Aug 2012 01:43:28 -0700 (PDT)
Received: by 10.112.27.40 with HTTP; Thu, 16 Aug 2012 01:43:28 -0700 (PDT)
In-Reply-To: <20120815171953.GB9984@US-SEA-R8XVZTX>
References: <502934CB.7060409@xen.org>
	<1344941807.5926.25.camel@zakaz.uk.xensource.com>
	<502A3A0B.6000401@xen.org> <20120815171953.GB9984@US-SEA-R8XVZTX>
Date: Thu, 16 Aug 2012 09:43:28 +0100
Message-ID: <CAOqnZH54ZhSOmZ1zBwF1QPVdBc_g_9859121pcqdnbUFO6pUvw@mail.gmail.com>
From: Lars Kurth <lars.kurth.xen@gmail.com>
To: Matt Wilson <msw@amazon.com>
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.2 Limits (needs review)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3802811319312657465=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3802811319312657465==
Content-Type: multipart/alternative; boundary=bcaec54ee7b89c733d04c75e09a8

--bcaec54ee7b89c733d04c75e09a8
Content-Type: text/plain; charset=ISO-8859-1

Matt,
thanks for pointing this out. Ian, I think you are right, we should kill
the limits page.
Ian,
do you know which limits are correct? Maybe we should just add a Xen 4.2
column now.
Lars

On Wed, Aug 15, 2012 at 6:19 PM, Matt Wilson <msw@amazon.com> wrote:

> On Tue, Aug 14, 2012 at 04:44:11AM -0700, Lars Kurth wrote:
> > On 14/08/2012 11:56, Ian Campbell wrote:
> > > On Mon, 2012-08-13 at 18:09 +0100, Lars Kurth wrote:
> > >> Feel free to reply to the thread or add to
> > >> http://wiki.xen.org/wiki/Talk:Xen_4.2_Limits
> > > Is the intention to merge this into
> > > http://wiki.xen.org/wiki/Xen_Release_Features when the release
> happens?
> > Ian, we could merge it, or we could have a line  in the matrix linking
> > to the release limits.
> > No strong preference
>
> Lars,
>
> Some of the per-guest limits on http://wiki.xen.org/wiki/Xen_4.2_Limits
> do not match those on http://wiki.xen.org/wiki/Xen_Release_Features. For
> example, the release features page says HVM guests have a limit of
> 1 TB (should be TiB) of RAM, but the 4.2 limits page says 512 or 256.
>
> Matt
>

--bcaec54ee7b89c733d04c75e09a8
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

Matt,<br>thanks for pointing this out. Ian, I think you are right, we shoul=
d kill the limits page. <br>Ian,<br>do you know which limits are correct? M=
aybe we should just add a Xen 4.2 column now.<br>Lars<br><br><div class=3D"=
gmail_quote">
On Wed, Aug 15, 2012 at 6:19 PM, Matt Wilson <span dir=3D"ltr">&lt;<a href=
=3D"mailto:msw@amazon.com" target=3D"_blank">msw@amazon.com</a>&gt;</span> =
wrote:<br><blockquote class=3D"gmail_quote" style=3D"margin:0 0 0 .8ex;bord=
er-left:1px #ccc solid;padding-left:1ex">
<div class=3D"HOEnZb"><div class=3D"h5">On Tue, Aug 14, 2012 at 04:44:11AM =
-0700, Lars Kurth wrote:<br>
&gt; On 14/08/2012 11:56, Ian Campbell wrote:<br>
&gt; &gt; On Mon, 2012-08-13 at 18:09 +0100, Lars Kurth wrote:<br>
&gt; &gt;&gt; Feel free to reply to the thread or add to<br>
&gt; &gt;&gt; <a href=3D"http://wiki.xen.org/wiki/Talk:Xen_4.2_Limits" targ=
et=3D"_blank">http://wiki.xen.org/wiki/Talk:Xen_4.2_Limits</a><br>
&gt; &gt; Is the intention to merge this into<br>
&gt; &gt; <a href=3D"http://wiki.xen.org/wiki/Xen_Release_Features" target=
=3D"_blank">http://wiki.xen.org/wiki/Xen_Release_Features</a> when the rele=
ase happens?<br>
&gt; Ian, we could merge it, or we could have a line =A0in the matrix linki=
ng<br>
&gt; to the release limits.<br>
&gt; No strong preference<br>
<br>
</div></div>Lars,<br>
<br>
Some of the per-guest limits on <a href=3D"http://wiki.xen.org/wiki/Xen_4.2=
_Limits" target=3D"_blank">http://wiki.xen.org/wiki/Xen_4.2_Limits</a><br>
do not match those on <a href=3D"http://wiki.xen.org/wiki/Xen_Release_Featu=
res" target=3D"_blank">http://wiki.xen.org/wiki/Xen_Release_Features</a>. F=
or<br>
example, the release features page says HVM guests have a limit of<br>
1 TB (should be TiB) of RAM, but the 4.2 limits page says 512 or 256.<br>
<span class=3D"HOEnZb"><font color=3D"#888888"><br>
Matt<br>
</font></span></blockquote></div><br>

--bcaec54ee7b89c733d04c75e09a8--


--===============3802811319312657465==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3802811319312657465==--


From xen-devel-bounces@lists.xen.org Thu Aug 16 08:48:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 08:48:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1vkr-0003Rp-Qz; Thu, 16 Aug 2012 08:48:37 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lists+xen@internecto.net>) id 1T1vkq-0003Rj-U6
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 08:48:37 +0000
Received: from [85.158.143.35:15650] by server-2.bemta-4.messagelabs.com id
	D8/85-31966-4E3BC205; Thu, 16 Aug 2012 08:48:36 +0000
X-Env-Sender: lists+xen@internecto.net
X-Msg-Ref: server-7.tower-21.messagelabs.com!1345106907!12169842!1
X-Originating-IP: [176.9.245.29]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25205 invoked from network); 16 Aug 2012 08:48:34 -0000
Received: from polaris.internecto.net (HELO mx1.internecto.net) (176.9.245.29)
	by server-7.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 16 Aug 2012 08:48:34 -0000
Received: from localhost (localhost [127.0.0.1])
	by mx1.internecto.net (Postfix) with ESMTP id 91295A03F8;
	Thu, 16 Aug 2012 08:48:27 +0000 (UTC)
X-Virus-Scanned: Debian amavisd-new at mail.internecto.net
Received: from mx1.internecto.net ([127.0.0.1])
	by localhost (mail.polaris.internecto.net [127.0.0.1]) (amavisd-new,
	port 10024)
	with ESMTP id fEdQNlJEY43N; Thu, 16 Aug 2012 08:48:25 +0000 (UTC)
Received: from internecto.net (5ED4FDEB.cm-7-5d.dynamic.ziggo.nl
	[94.212.253.235])
	(using TLSv1 with cipher DHE-RSA-AES128-SHA (128/128 bits))
	(Client did not present a certificate)
	(Authenticated sender: lists@internecto.net)
	by mx1.internecto.net (Postfix) with ESMTPSA id CF284A02EB;
	Thu, 16 Aug 2012 08:48:25 +0000 (UTC)
Date: Thu, 16 Aug 2012 10:48:23 +0200
From: "Mark van Dijk" <lists+xen@internecto.net>
To: "Jan Beulich" <JBeulich@suse.com>
Message-ID: <20120816104823.55a775ed@internecto.net>
In-Reply-To: <502CC76602000078000955B0@nat28.tlf.novell.com>
References: <20120815223909.1eb7a0cd@internecto.net>
	<502CC76602000078000955B0@nat28.tlf.novell.com>
Organization: Internecto SIS
X-Mailer: Claws Mail 3.8.0 (GTK+ 2.24.10; x86_64-pc-linux-gnu)
Mime-Version: 1.0
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] E5606 with no HVM;
 Assertion 'i == 1' failed at p2m-ept.c:524
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 16 Aug 2012 09:11:50 +0100
"Jan Beulich" <JBeulich@suse.com> wrote:

> >>> On 15.08.12 at 22:39, Mark van Dijk <lists+xen@internecto.net>
> >>> wrote:
> > (XEN) Xen call trace:
> > (XEN)    [<ffff82c4801e1be9>] ept_get_entry+0x13a/0x28f
> > (XEN)    [<ffff82c4801da274>] __get_gfn_type_access+0x175/0x256
> > (XEN)    [<ffff82c4801b5a55>] hvm_hap_nested_page_fault+0x133/0x422
> > (XEN)    [<ffff82c4801d3c8e>] vmx_vmexit_handler+0x136b/0x1614
> > 
> > The entire log from boot to crash can be viewed at the following
> > link: http://pastebin.com/5wcH7GWR 
> 
> Unfortunately quite a few of the registers contain values that
> could sensibly be "i". Could you either disassemble the
> instructions around the place yourself to find out which one it
> is, or make the xen-syms file corresponding to this run available?

Hi Jan, I'm not able to disassemble anything myself; my experience
with that is zero. So it's probably easier to give you the xen-syms
file. I posted it to a pastebin with a little detour:

curl http://sprunge.us/cGeL | openssl enc -a -d | \
 xz -d > xen-syms-4.2.0-rc3-pre

md5sum: aa27f5aeea45f72cab88848e5996080e
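The retrieval pipeline above (base64 decode via `openssl enc -a -d`, then `xz -d`) can be reproduced in Python; this is a hypothetical helper, not part of the thread:

```python
import base64
import lzma

def decode_posted_blob(text):
    # Mirrors the mail's "openssl enc -a -d | xz -d" pipeline: the file
    # was posted as base64-encoded, xz-compressed text.
    return lzma.decompress(base64.b64decode(text))
```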

> 
> I'm suspecting "i" to actually be 2, and the code not having got
> updated when 1Gb page support got added to PoD. You could
> hence alternatively also try the attached debugging patch,
> which - if my guess is right - may at once fix your problem.

Sure, I'll give that a go and let you know, thanks.

> However, because of this outright claiming that HVM doesn't
> work seems a little harsh - did you try running your guests
> without the use of PoD (i.e. with memory == maxmem)?

If it sounded like I meant to complain that HVM is broken, then I
apologise; that's not what I meant. I meant it didn't work on my system,
and I had no idea that this related to maxmem. :)

I'll admit that I didn't try the HVM guest without the maxmem setting,
but the VM is configured with 'memory = 1024; maxmem = 4096'.
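Jan's remark about memory == maxmem refers to the fact that Populate-on-Demand only comes into play when a guest starts with less memory than its maximum. A hypothetical guest config change to rule PoD out would be:

```
# Guest config sketch: making memory equal to maxmem disables
# Populate-on-Demand, per Jan's suggestion above.
memory = 4096
maxmem = 4096
```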

Mark

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 08:48:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 08:48:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1vkr-0003Rp-Qz; Thu, 16 Aug 2012 08:48:37 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lists+xen@internecto.net>) id 1T1vkq-0003Rj-U6
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 08:48:37 +0000
Received: from [85.158.143.35:15650] by server-2.bemta-4.messagelabs.com id
	D8/85-31966-4E3BC205; Thu, 16 Aug 2012 08:48:36 +0000
X-Env-Sender: lists+xen@internecto.net
X-Msg-Ref: server-7.tower-21.messagelabs.com!1345106907!12169842!1
X-Originating-IP: [176.9.245.29]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25205 invoked from network); 16 Aug 2012 08:48:34 -0000
Received: from polaris.internecto.net (HELO mx1.internecto.net) (176.9.245.29)
	by server-7.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 16 Aug 2012 08:48:34 -0000
Received: from localhost (localhost [127.0.0.1])
	by mx1.internecto.net (Postfix) with ESMTP id 91295A03F8;
	Thu, 16 Aug 2012 08:48:27 +0000 (UTC)
X-Virus-Scanned: Debian amavisd-new at mail.internecto.net
Received: from mx1.internecto.net ([127.0.0.1])
	by localhost (mail.polaris.internecto.net [127.0.0.1]) (amavisd-new,
	port 10024)
	with ESMTP id fEdQNlJEY43N; Thu, 16 Aug 2012 08:48:25 +0000 (UTC)
Received: from internecto.net (5ED4FDEB.cm-7-5d.dynamic.ziggo.nl
	[94.212.253.235])
	(using TLSv1 with cipher DHE-RSA-AES128-SHA (128/128 bits))
	(Client did not present a certificate)
	(Authenticated sender: lists@internecto.net)
	by mx1.internecto.net (Postfix) with ESMTPSA id CF284A02EB;
	Thu, 16 Aug 2012 08:48:25 +0000 (UTC)
Date: Thu, 16 Aug 2012 10:48:23 +0200
From: "Mark van Dijk" <lists+xen@internecto.net>
To: "Jan Beulich" <JBeulich@suse.com>
Message-ID: <20120816104823.55a775ed@internecto.net>
In-Reply-To: <502CC76602000078000955B0@nat28.tlf.novell.com>
References: <20120815223909.1eb7a0cd@internecto.net>
	<502CC76602000078000955B0@nat28.tlf.novell.com>
Organization: Internecto SIS
X-Mailer: Claws Mail 3.8.0 (GTK+ 2.24.10; x86_64-pc-linux-gnu)
Mime-Version: 1.0
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] E5606 with no HVM;
 Assertion 'i == 1' failed at p2m-ept.c:524
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 16 Aug 2012 09:11:50 +0100
"Jan Beulich" <JBeulich@suse.com> wrote:

> >>> On 15.08.12 at 22:39, Mark van Dijk <lists+xen@internecto.net>
> >>> wrote:
> > (XEN) Xen call trace:
> > (XEN)    [<ffff82c4801e1be9>] ept_get_entry+0x13a/0x28f
> > (XEN)    [<ffff82c4801da274>] __get_gfn_type_access+0x175/0x256
> > (XEN)    [<ffff82c4801b5a55>] hvm_hap_nested_page_fault+0x133/0x422
> > (XEN)    [<ffff82c4801d3c8e>] vmx_vmexit_handler+0x136b/0x1614
> > 
> > The entire log from boot to crash can be viewed at the following
> > link: http://pastebin.com/5wcH7GWR 
> 
> Unfortunately quite a few of the registers contain values that
> could sensibly be "i". Could you either disassemble the
> instructions around the place yourself to find out which one it
> is, or make the xen-syms file corresponding to this run available?

Hi Jan, I'm not capable to disassemble anything because my experience
with that is zero. So it's probably easier to give you the xen-syms
file, I posted it to a pastebin with a little detour:

curl http://sprunge.us/cGeL | openssl enc -a -d | \
 xz -d > xen-syms-4.2.0-rc3-pre

md5sum aa27f5aeea45f72cab88848e5996080e
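
For reference, the decode pipeline above (`openssl enc -a -d` performs base64
decoding, `xz -d` decompresses) can be reproduced with a small stdlib-only
Python sketch; the payload below is illustrative, not the actual xen-syms data:

```python
import base64
import lzma

def pack(data: bytes) -> bytes:
    # What the sender did: xz-compress, then base64-encode
    # (the shell equivalent is `xz | openssl enc -a`).
    return base64.encodebytes(lzma.compress(data))

def unpack(blob: bytes) -> bytes:
    # What the recipient runs: base64-decode, then xz-decompress
    # (the shell equivalent is `openssl enc -a -d | xz -d`).
    return lzma.decompress(base64.decodebytes(blob))

payload = b"example payload standing in for the xen-syms binary"
assert unpack(pack(payload)) == payload
```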

> 
> I'm suspecting "i" to actually be 2, and the code not having got
> updated when 1Gb page support got added to PoD. You could
> hence alternatively also try the attached debugging patch,
> which - if my guess is right - may at once fix your problem.

Sure, I'll give that a go and let you know, thanks.

> However, because of this outright claiming that HVM doesn't
> work seems a little harsh - did you try running your guests
> without the use of PoD (i.e. with memory == maxmem)?

If it sounded like I meant to complain that HVM is broken, then I
apologise; that's not what I meant. I meant that it didn't work on my
system, and I had no idea that this relates to maxmem. :)

I'll admit that I didn't try the HVM guest without the maxmem setting,
but the VM is configured with 'memory = 1024; maxmem = 4096'.

Mark

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 08:50:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 08:50:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1vmi-0003Xu-DN; Thu, 16 Aug 2012 08:50:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T1vmg-0003Xo-Ki
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 08:50:30 +0000
Received: from [85.158.143.35:38719] by server-1.bemta-4.messagelabs.com id
	47/E2-07754-554BC205; Thu, 16 Aug 2012 08:50:29 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1345107026!5830495!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkxOTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13318 invoked from network); 16 Aug 2012 08:50:27 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 08:50:27 -0000
X-IronPort-AV: E=Sophos;i="4.77,778,1336348800"; d="scan'208";a="14034048"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 08:50:26 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 16 Aug 2012 09:50:26 +0100
Message-ID: <1345107025.27489.27.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Lars Kurth <lars.kurth.xen@gmail.com>
Date: Thu, 16 Aug 2012 09:50:25 +0100
In-Reply-To: <CAOqnZH54ZhSOmZ1zBwF1QPVdBc_g_9859121pcqdnbUFO6pUvw@mail.gmail.com>
References: <502934CB.7060409@xen.org>
	<1344941807.5926.25.camel@zakaz.uk.xensource.com>
	<502A3A0B.6000401@xen.org> <20120815171953.GB9984@US-SEA-R8XVZTX>
	<CAOqnZH54ZhSOmZ1zBwF1QPVdBc_g_9859121pcqdnbUFO6pUvw@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Jan Beulich <JBeulich@suse.com>, Matt Wilson <msw@amazon.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.2 Limits (needs review)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-16 at 09:43 +0100, Lars Kurth wrote:
> do you know which limits are correct?

I think Jan confirmed the 4.1 numbers on
http://wiki.xen.org/wiki/Xen_Release_Features and he'd know for sure
about this sort of thing, e.g. whether any had increased for 4.2.

> Maybe we should just add a Xen 4.2 column now.

I think that's a good idea.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 09:25:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 09:25:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1wKL-0004Cc-4a; Thu, 16 Aug 2012 09:25:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T1wKK-0004CX-8l
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 09:25:16 +0000
Received: from [85.158.143.35:44031] by server-2.bemta-4.messagelabs.com id
	7A/90-31966-B7CBC205; Thu, 16 Aug 2012 09:25:15 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1345109105!13735587!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkxOTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30974 invoked from network); 16 Aug 2012 09:25:14 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 09:25:14 -0000
X-IronPort-AV: E=Sophos;i="4.77,778,1336348800"; d="scan'208";a="14034862"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 09:25:04 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 16 Aug 2012 10:25:04 +0100
Message-ID: <1345109102.27489.38.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Tom Parker <tparker@cbnco.com>
Date: Thu, 16 Aug 2012 10:25:02 +0100
In-Reply-To: <502BD75B.9040301@cbnco.com>
References: <502BD75B.9040301@cbnco.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] PV USB Use Case for Xen 4.x
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-08-15 at 18:07 +0100, Tom Parker wrote:
> Good Afternoon.  My colleague Stefan (sstan) was asked on the IRC
> channel to provide our use case for PV USB in our environment.  This
> is possible with the current xm stack but not available with the xl
> stack.

Thanks for doing this.

At first glance this doesn't seem like something which we could do for
4.2.0 at this stage, although we should do it for 4.3 and potentially
consider it for 4.2.1.

Is it something which you guys might be interested in providing patches
for? It is at heart a moderately simple C coding exercise; I'm more than
happy to provide guidance etc. Much of the generic framework already
exists and there are examples in the form of other device types.

> Currently we use PVUSB to attach a USB Smartcard reader through our
> dom0 (SLES 11 SP1) running on an HP Blade Server with the Token
> mounted on an internal USB Port to our domU CA server (SLES 11)
> 
> The config file syntax is broken so we have to manually attach (I have
> it scripted) whenever our hosts reboot (which is almost never).

Can you give an example of what the syntax *should* be?

> On the dom0 server I have to do the following steps:
> 
> /usr/sbin/xm usb-list-assignable-devices (get the bus-id of the USB
> device)
> /usr/sbin/xm usb-hc-create $Domain 2 2 (Create a USB 2.0 Root Hub with
> 2 ports in $Domain)
> /usr/sbin/xm usb-attach $Domain $DevId $PortNumber $BusId (Attach the
> USB bus-id found in step 1 to the hub created in step 2)

What (if anything) is the output of these commands?

Do you need to do anything to make a device "assignable"? (I get no
output from the list command for example)

> On the domU the lsusb looks like this after the above (before it
> returns nothing)
> 
> mgaca:~ # lsusb 
> Bus 001 Device 002: ID 04e6:5116 SCM Microsystems, Inc. SCR331-LC1
> SmartCard Reader
> Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

Can you post the output of "xenstore-ls -fp" while the device is
connected?

Do you happen to know if this uses the PVUSB drivers or some other
mechanism? "lsmod" in both dom0 and domU should provide a clue if the
drivers are loaded.

Does this work for both PV and HVM guests or do you only use one or the
other?

> Once I have done this I can use the USB device in the domU as if it was
> directly connected. 
> 
> Thanks for your time.

Thank you for describing the functionality.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 09:37:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 09:37:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1wVk-0004PX-Ju; Thu, 16 Aug 2012 09:37:04 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T1wVj-0004PS-0v
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 09:37:03 +0000
Received: from [85.158.143.35:22190] by server-2.bemta-4.messagelabs.com id
	25/2A-31966-E3FBC205; Thu, 16 Aug 2012 09:37:02 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1345109814!14393528!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23549 invoked from network); 16 Aug 2012 09:36:55 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-16.tower-21.messagelabs.com with SMTP;
	16 Aug 2012 09:36:55 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 16 Aug 2012 10:36:54 +0100
Message-Id: <502CDB7E0200007800095622@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Thu, 16 Aug 2012 10:37:34 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>,
	"Lars Kurth" <lars.kurth.xen@gmail.com>
References: <502934CB.7060409@xen.org>
	<1344941807.5926.25.camel@zakaz.uk.xensource.com>
	<502A3A0B.6000401@xen.org> <20120815171953.GB9984@US-SEA-R8XVZTX>
	<CAOqnZH54ZhSOmZ1zBwF1QPVdBc_g_9859121pcqdnbUFO6pUvw@mail.gmail.com>
	<1345107025.27489.27.camel@zakaz.uk.xensource.com>
In-Reply-To: <1345107025.27489.27.camel@zakaz.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Matt Wilson <msw@amazon.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.2 Limits (needs review)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 16.08.12 at 10:50, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Thu, 2012-08-16 at 09:43 +0100, Lars Kurth wrote:
>> do you know which limits are correct?
> 
> I think Jan confirmed the 4.1 numbers on
> http://wiki.xen.org/wiki/Xen_Release_Features and he'd know for sure
> about this sort of thing, e.g. whether any had increased for 4.2.

I had gone over this already, and adjusted the things I knew for
sure. The upper limit of memory for HVM guests is something I'm
not sure about though (especially because I don't know whether
the tools impose any restrictions, as is the case for PV guests).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 09:54:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 09:54:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1wmH-0004fe-Eg; Thu, 16 Aug 2012 09:54:09 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T1wmF-0004fZ-PW
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 09:54:08 +0000
Received: from [85.158.138.51:25904] by server-5.bemta-3.messagelabs.com id
	0A/6F-08865-E33CC205; Thu, 16 Aug 2012 09:54:06 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-11.tower-174.messagelabs.com!1345110846!28578127!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkxOTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25408 invoked from network); 16 Aug 2012 09:54:06 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-11.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 09:54:06 -0000
X-IronPort-AV: E=Sophos;i="4.77,778,1336348800"; d="scan'208";a="14035485"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 09:54:06 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 16 Aug 2012 10:54:05 +0100
Date: Thu, 16 Aug 2012 10:53:49 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Attilio Rao <attilio.rao@citrix.com>
In-Reply-To: <502BEEAD.7030906@citrix.com>
Message-ID: <alpine.DEB.2.02.1208161053350.2278@kaball.uk.xensource.com>
References: <1344947062-4796-1-git-send-email-attilio.rao@citrix.com>
	<1344947062-4796-2-git-send-email-attilio.rao@citrix.com>
	<alpine.DEB.2.02.1208151824250.2278@kaball.uk.xensource.com>
	<502BD943.8030701@citrix.com>
	<alpine.DEB.2.02.1208151844230.2278@kaball.uk.xensource.com>
	<502BEEAD.7030906@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH v2 1/2] XEN,
 X86: Improve semantic support for pagetable_reserve PVOPS
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 15 Aug 2012, Attilio Rao wrote:
> On 15/08/12 18:46, Stefano Stabellini wrote:
> > On Wed, 15 Aug 2012, Attilio Rao wrote:
> >    
> >> On 15/08/12 18:25, Stefano Stabellini wrote:
> >>      
> >>> On Tue, 14 Aug 2012, Attilio Rao wrote:
> >>>
> >>>        
> >>>> - Allow xen_mapping_pagetable_reserve() to handle a start different from
> >>>>     pgt_buf_start, but still bigger than it.
> >>>> - Add checks to xen_mapping_pagetable_reserve() and native_pagetable_reserve()
> >>>>     for verifying start and end are contained in the range
> >>>>     [pgt_buf_start, pgt_buf_top].
> >>>> - In xen_mapping_pagetable_reserve(), change printk into pr_debug.
> >>>> - In xen_mapping_pagetable_reserve(), print out diagnostic only if there is
> >>>>     an actual need to do that (or, in other words, if there are actually some
> >>>>     pages going to switch from RO to RW).
> >>>>
> >>>> Signed-off-by: Attilio Rao <attilio.rao@citrix.com>
> >>>> ---
> >>>>    arch/x86/mm/init.c |    4 ++++
> >>>>    arch/x86/xen/mmu.c |   22 ++++++++++++++++++++--
> >>>>    2 files changed, 24 insertions(+), 2 deletions(-)
> >>>>
> >>>> diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
> >>>> index e0e6990..c5849b6 100644
> >>>> --- a/arch/x86/mm/init.c
> >>>> +++ b/arch/x86/mm/init.c
> >>>> @@ -92,6 +92,10 @@ static void __init find_early_table_space(struct map_range *mr, unsigned long en
> >>>>
> >>>>    void __init native_pagetable_reserve(u64 start, u64 end)
> >>>>    {
> >>>> +	if (start < PFN_PHYS(pgt_buf_start) || end > PFN_PHYS(pgt_buf_top))
> >>>> +		panic("Invalid address range: [%llu - %llu] should be a subset of [%llu - %llu]\n"
> >>>>
> >>>>          
> >>> code style (you can check whether your patch breaks the code style with
> >>> scripts/checkpatch.pl)
> >>>
> >>>        
> >> I actually did run it before submitting; it reported 0 errors/warnings.
> >>      
> > strange, that really looks like a line over 80 chars
> >
> >    
> 
> Actually the code style explicitly says not to break strings, because
> they want to retain the ability to grep. In FreeBSD this is the same,
> and I think this is why checkpatch doesn't whine. I don't think there
> is a bug here.
> 
> Can I submit the patch as it is, then?

it is ok for me

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 09:54:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 09:54:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1wmH-0004fe-Eg; Thu, 16 Aug 2012 09:54:09 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T1wmF-0004fZ-PW
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 09:54:08 +0000
Received: from [85.158.138.51:25904] by server-5.bemta-3.messagelabs.com id
	0A/6F-08865-E33CC205; Thu, 16 Aug 2012 09:54:06 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-11.tower-174.messagelabs.com!1345110846!28578127!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkxOTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25408 invoked from network); 16 Aug 2012 09:54:06 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-11.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 09:54:06 -0000
X-IronPort-AV: E=Sophos;i="4.77,778,1336348800"; d="scan'208";a="14035485"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 09:54:06 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 16 Aug 2012 10:54:05 +0100
Date: Thu, 16 Aug 2012 10:53:49 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Attilio Rao <attilio.rao@citrix.com>
In-Reply-To: <502BEEAD.7030906@citrix.com>
Message-ID: <alpine.DEB.2.02.1208161053350.2278@kaball.uk.xensource.com>
References: <1344947062-4796-1-git-send-email-attilio.rao@citrix.com>
	<1344947062-4796-2-git-send-email-attilio.rao@citrix.com>
	<alpine.DEB.2.02.1208151824250.2278@kaball.uk.xensource.com>
	<502BD943.8030701@citrix.com>
	<alpine.DEB.2.02.1208151844230.2278@kaball.uk.xensource.com>
	<502BEEAD.7030906@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH v2 1/2] XEN,
 X86: Improve semantic support for pagetable_reserve PVOPS
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 15 Aug 2012, Attilio Rao wrote:
> On 15/08/12 18:46, Stefano Stabellini wrote:
> > On Wed, 15 Aug 2012, Attilio Rao wrote:
> >    
> >> On 15/08/12 18:25, Stefano Stabellini wrote:
> >>      
> >>> On Tue, 14 Aug 2012, Attilio Rao wrote:
> >>>
> >>>        
> >>>> - Allow xen_mapping_pagetable_reserve() to handle a start different from
> >>>>     pgt_buf_start, but still bigger than it.
> >>>> - Add checks to xen_mapping_pagetable_reserve() and native_pagetable_reserve()
> >>>>     for verifying start and end are contained in the range
> >>>>     [pgt_buf_start, pgt_buf_top].
> >>>> - In xen_mapping_pagetable_reserve(), change printk into pr_debug.
> >>>> - In xen_mapping_pagetable_reserve(), print out diagnostic only if there is
> >>>>     an actual need to do that (or, in other words, if there are actually some
> >>>>     pages going to switch from RO to RW).
> >>>>
> >>>> Signed-off-by: Attilio Rao <attilio.rao@citrix.com>
> >>>> ---
> >>>>    arch/x86/mm/init.c |    4 ++++
> >>>>    arch/x86/xen/mmu.c |   22 ++++++++++++++++++++--
> >>>>    2 files changed, 24 insertions(+), 2 deletions(-)
> >>>>
> >>>> diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
> >>>> index e0e6990..c5849b6 100644
> >>>> --- a/arch/x86/mm/init.c
> >>>> +++ b/arch/x86/mm/init.c
> >>>> @@ -92,6 +92,10 @@ static void __init find_early_table_space(struct map_range *mr, unsigned long en
> >>>>
> >>>>    void __init native_pagetable_reserve(u64 start, u64 end)
> >>>>    {
> >>>> +	if (start < PFN_PHYS(pgt_buf_start) || end > PFN_PHYS(pgt_buf_top))
> >>>> +		panic("Invalid address range: [%llu - %llu] should be a subset of [%llu - %llu]\n"
> >>>>
> >>>>          
> >>> code style (you can check whether your patch breaks the code style with
> >>> scripts/checkpatch.pl)
> >>>
> >>>        
> >> I actually did, before submitting; it reported 0 errors/warnings.
> >>      
> > strange, that really looks like a line over 80 chars
> >
> >    
> 
> Actually, the coding style explicitly says not to break strings, because
> they want to retain the ability to grep for them. It is the same in
> FreeBSD, and I think this is why checkpatch doesn't whine. I don't think
> there is a bug here.
> 
> Can I submit the patch as it is, then?

it is ok for me

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 09:56:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 09:56:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1woV-0004lH-VV; Thu, 16 Aug 2012 09:56:27 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lists+xen@internecto.net>) id 1T1woU-0004lB-CL
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 09:56:26 +0000
Received: from [85.158.138.51:41499] by server-7.bemta-3.messagelabs.com id
	06/64-01906-9C3CC205; Thu, 16 Aug 2012 09:56:25 +0000
X-Env-Sender: lists+xen@internecto.net
X-Msg-Ref: server-4.tower-174.messagelabs.com!1345110981!28544734!1
X-Originating-IP: [176.9.245.29]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11371 invoked from network); 16 Aug 2012 09:56:21 -0000
Received: from polaris.internecto.net (HELO mx1.internecto.net) (176.9.245.29)
	by server-4.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 16 Aug 2012 09:56:21 -0000
Received: from localhost (localhost [127.0.0.1])
	by mx1.internecto.net (Postfix) with ESMTP id 3BB42A03F8;
	Thu, 16 Aug 2012 09:56:21 +0000 (UTC)
X-Virus-Scanned: Debian amavisd-new at mail.internecto.net
Received: from mx1.internecto.net ([127.0.0.1])
	by localhost (mail.polaris.internecto.net [127.0.0.1]) (amavisd-new,
	port 10024)
	with ESMTP id tDaEQa86D6LI; Thu, 16 Aug 2012 09:56:17 +0000 (UTC)
Received: from internecto.net (5ED4FDEB.cm-7-5d.dynamic.ziggo.nl
	[94.212.253.235])
	(using TLSv1 with cipher DHE-RSA-AES128-SHA (128/128 bits))
	(Client did not present a certificate)
	(Authenticated sender: lists@internecto.net)
	by mx1.internecto.net (Postfix) with ESMTPSA id 3624FA02EB;
	Thu, 16 Aug 2012 09:56:17 +0000 (UTC)
Date: Thu, 16 Aug 2012 11:56:15 +0200
From: Mark van Dijk <lists+xen@internecto.net>
To: Roger Pau Monne <roger.pau@citrix.com>
Message-ID: <20120816115615.1cda9eee@internecto.net>
In-Reply-To: <502A2A23.4050205@citrix.com>
References: <20120810115405.05af653e@internecto.net>
	<alpine.DEB.2.02.1208101110370.21096@kaball.uk.xensource.com>
	<20120810140611.4ca8a1fb@internecto.net>
	<alpine.DEB.2.02.1208101313320.21096@kaball.uk.xensource.com>
	<20120810163722.2feaadad@internecto.net>
	<502A2A23.4050205@citrix.com>
Organization: Internecto SIS
X-Mailer: Claws Mail 3.8.0 (GTK+ 2.24.10; x86_64-pc-linux-gnu)
Mime-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] xen 4.2.0-rc3-pre: building failure on alpine linux
 / uclibc
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


> Natanael Copa sent a patch to Qemu-devel some months ago to fix the
> build of Qemu on uClibc, but it seems like it was ignored:
> 
> http://lists.gnu.org/archive/html/qemu-devel/2012-06/msg02388.html
> 
> Could you try if that still applies and fixes your problems?

Hi Roger,

I haven't tested it yet; this morning I noticed that ncopa and nenolod,
the Alpine devs, got Xen 4.1.3 to work with various patches. The
following URL has the working Alpine Linux build:

http://git.alpinelinux.org/cgit/aports/tree/main/xen

It's probably wisest for me to take a step back and contact them
first with regards to xen-unstable.hg and alpine linux, so that's what
I'll do.

Regards,
Mark

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 10:05:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 10:05:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1wxE-00053X-0F; Thu, 16 Aug 2012 10:05:28 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xudong.hao@intel.com>) id 1T1wxB-00053S-Qy
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 10:05:26 +0000
Received: from [85.158.138.51:10686] by server-5.bemta-3.messagelabs.com id
	00/75-08865-4E5CC205; Thu, 16 Aug 2012 10:05:24 +0000
X-Env-Sender: xudong.hao@intel.com
X-Msg-Ref: server-8.tower-174.messagelabs.com!1345111520!28542193!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzA4NzU3\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24883 invoked from network); 16 Aug 2012 10:05:21 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-8.tower-174.messagelabs.com with SMTP;
	16 Aug 2012 10:05:21 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga101.jf.intel.com with ESMTP; 16 Aug 2012 03:05:20 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.77,778,1336374000"; d="scan'208";a="187134934"
Received: from fmsmsx107.amr.corp.intel.com ([10.19.9.54])
	by orsmga002.jf.intel.com with ESMTP; 16 Aug 2012 03:05:10 -0700
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	FMSMSX107.amr.corp.intel.com (10.19.9.54) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Thu, 16 Aug 2012 03:05:08 -0700
Received: from shsmsx102.ccr.corp.intel.com ([169.254.2.196]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.232]) with mapi id
	14.01.0355.002; Thu, 16 Aug 2012 18:05:03 +0800
From: "Hao, Xudong" <xudong.hao@intel.com>
To: Jan Beulich <JBeulich@suse.com>
Thread-Topic: [Xen-devel] [PATCH 2/3] xen/p2m: Using INVALID_MFN instead of
	mfn_valid
Thread-Index: AQHNerLpTPW4kQTcTESrmJ/O73nifJdaEzwAgAIh9MA=
Date: Thu, 16 Aug 2012 10:05:01 +0000
Message-ID: <403610A45A2B5242BD291EDAE8B37D300FE8AA4F@SHSMSX102.ccr.corp.intel.com>
References: <1345013831-20662-1-git-send-email-xudong.hao@intel.com>
	<502B86420200007800095048@nat28.tlf.novell.com>
In-Reply-To: <502B86420200007800095048@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "tim@xen.org" <tim@xen.org>, "Zhang, Xiantao" <xiantao.zhang@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 2/3] xen/p2m: Using INVALID_MFN instead of
 mfn_valid
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: Wednesday, August 15, 2012 5:22 PM
> To: Hao, Xudong
> Cc: Zhang, Xiantao; xen-devel@lists.xen.org; tim@xen.org
> Subject: Re: [Xen-devel] [PATCH 2/3] xen/p2m: Using INVALID_MFN instead of
> mfn_valid
> 
> >>> On 15.08.12 at 08:57, Xudong Hao <xudong.hao@intel.com> wrote:
> > A 64-bit big BAR's MMIO address may lie beyond the highest gfn, so
> > mfn_valid may return failure; use INVALID_MFN to check instead.
> 
> Hmm, that can be true for 32-bit BARs too (on systems with less
> than 4Gb).
> 

Exactly right, thanks for the comments.

> > Signed-off-by: Xiantao Zhang <xiantao.zhang@intel.com>
> > Signed-off-by: Xudong Hao <xudong.hao@intel.com>
> >
> > diff -r 663eb766cdde xen/arch/x86/mm/p2m-ept.c
> > --- a/xen/arch/x86/mm/p2m-ept.c	Tue Jul 24 17:02:04 2012 +0200
> > +++ b/xen/arch/x86/mm/p2m-ept.c	Thu Jul 26 15:40:01 2012 +0800
> > @@ -428,7 +428,7 @@ ept_set_entry(struct p2m_domain *p2m, un
> >      }
> >
> >      /* Track the highest gfn for which we have ever had a valid mapping */
> > -    if ( mfn_valid(mfn_x(mfn)) &&
> > +    if ( (mfn_x(mfn) != INVALID_MFN) &&
> >           (gfn + (1UL << order) - 1 > p2m->max_mapped_pfn) )
> >          p2m->max_mapped_pfn = gfn + (1UL << order) - 1;
> 
> Depending on how the above comment gets addressed (i.e.
> whether MMIO MFNs are to be considered here at all), this
> might need changing anyway, as such a huge max_mapped_pfn
> value likely wouldn't be very useful anymore.
> 

Jan,

Your viewpoint is similar to ours. Here the max_mapped_pfn value is for memory, not for MMIO. I think this is a simple change; do you have another suggestion?

> Jan
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
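[Archive note] The max_mapped_pfn bookkeeping under discussion can be sketched as follows. The INVALID_MFN value and the function name are illustrative stand-ins; the real hunk lives in ept_set_entry(). The point the patch relies on is that MMIO MFNs above the host's highest RAM frame fail mfn_valid() yet are not INVALID_MFN:

```c
/* Illustrative value: marks an entry with no backing machine frame. */
#define INVALID_MFN (~0UL)

static unsigned long max_mapped_pfn;

/* Track the highest gfn for which a valid mapping was ever created,
 * mirroring the quoted hunk: skip the update only when the MFN is
 * INVALID_MFN, rather than whenever mfn_valid() fails. */
static void track_highest_gfn(unsigned long mfn, unsigned long gfn,
                              unsigned int order)
{
    if (mfn != INVALID_MFN && gfn + (1UL << order) - 1 > max_mapped_pfn)
        max_mapped_pfn = gfn + (1UL << order) - 1;
}
```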

From xen-devel-bounces@lists.xen.org Thu Aug 16 10:11:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 10:11:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1x33-0005Dh-2T; Thu, 16 Aug 2012 10:11:29 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T1x30-0005Dc-Rf
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 10:11:27 +0000
Received: from [85.158.143.35:14722] by server-3.bemta-4.messagelabs.com id
	FD/12-09529-E47CC205; Thu, 16 Aug 2012 10:11:26 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-10.tower-21.messagelabs.com!1345111865!10803430!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkyODE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6798 invoked from network); 16 Aug 2012 10:11:06 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-10.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 10:11:06 -0000
X-IronPort-AV: E=Sophos;i="4.77,778,1336348800"; d="scan'208";a="14035923"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 10:11:05 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 16 Aug 2012 11:11:05 +0100
Date: Thu, 16 Aug 2012 11:10:48 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: "Hao, Xudong" <xudong.hao@intel.com>
In-Reply-To: <403610A45A2B5242BD291EDAE8B37D300FE8A788@SHSMSX102.ccr.corp.intel.com>
Message-ID: <alpine.DEB.2.02.1208161104290.2278@kaball.uk.xensource.com>
References: <1345013682-20618-1-git-send-email-xudong.hao@intel.com>
	<alpine.DEB.2.02.1208151118270.2278@kaball.uk.xensource.com>
	<403610A45A2B5242BD291EDAE8B37D300FE8A788@SHSMSX102.ccr.corp.intel.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Anthony PERARD <anthony@citrix.com>, "Zhang,
	Xiantao" <xiantao.zhang@intel.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 1/3] xen/tools: Add 64 bits big bar support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 16 Aug 2012, Hao, Xudong wrote:
> > -----Original Message-----
> > From: Stefano Stabellini [mailto:stefano.stabellini@eu.citrix.com]
> > Sent: Wednesday, August 15, 2012 6:21 PM
> > To: Hao, Xudong
> > Cc: xen-devel@lists.xen.org; Ian Jackson; Zhang, Xiantao
> > Subject: Re: [Xen-devel] [PATCH 1/3] xen/tools: Add 64 bits big bar support
> > 
> > On Wed, 15 Aug 2012, Xudong Hao wrote:
> > > Currently it is assumed that PCI device BARs are accessed below 4G.
> > > If there is a device whose BAR size is larger than 4G, it must be
> > > accessed above the 4G memory boundary. This patch enables 64-bit big
> > > BAR support in hvmloader.
> > >
> > > Signed-off-by: Xiantao Zhang <xiantao.zhang@intel.com>
> > > Signed-off-by: Xudong Hao <xudong.hao@intel.com>
> > 
> > It is very good to see that this problem has been solved!
> > 
> > Considering that at this point it is too late for the 4.2 release cycle,
> > it might be worth spinning a version of these patches for SeaBIOS and
> > upstream QEMU, which now supports PCI passthrough.
> > 
> 
> Hi, Stefano
> 
> Has Xen already switched to SeaBIOS and upstream QEMU? I saw that SeaBIOS has not been updated for 5 months.

Yes, they are available but not the default yet for HVM guests.
In order to enable upstream QEMU you need to pass:

device_model_version='qemu-xen'

in the VM config file.
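[Archive note] A minimal HVM guest config fragment using this option might look like the following; all values other than device_model_version are illustrative:

```
# illustrative xl guest config fragment
builder = "hvm"
name = "example-hvm"
memory = 1024
device_model_version = 'qemu-xen'   # select the upstream QEMU device model
```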


> You mean upstream QEMU is this tree, git://git.qemu.org/qemu.git? I heard somebody say that PCI device assignment does not work in this tree, but device hot-add/remove works.

qemu-upstream-unstable, the upstream QEMU based tree used with Xen 4.2,
(git://xenbits.xensource.com/qemu-upstream-unstable.git) is based on
QEMU 1.0 and doesn't support PCI passthrough.

However upstream QEMU (git://git.qemu.org/qemu.git) does, and I am going
to rebase on it early when the 4.3 development cycle opens.  So it is
probably a good idea for you to try out upstream QEMU now.
We have a wiki page on how to build and run upstream QEMU on
xen-unstable:

http://wiki.xen.org/wiki/QEMU_Upstream

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 16 Aug 2012, Hao, Xudong wrote:
> > -----Original Message-----
> > From: Stefano Stabellini [mailto:stefano.stabellini@eu.citrix.com]
> > Sent: Wednesday, August 15, 2012 6:21 PM
> > To: Hao, Xudong
> > Cc: xen-devel@lists.xen.org; Ian Jackson; Zhang, Xiantao
> > Subject: Re: [Xen-devel] [PATCH 1/3] xen/tools: Add 64 bits big bar support
> > 
> > On Wed, 15 Aug 2012, Xudong Hao wrote:
> > > Currently it is assumed that PCI device BARs are placed below 4G. If a
> > > device has a BAR larger than 4G, it must be placed above the 4G memory
> > > boundary. This patch enables 64-bit big BAR support in hvmloader.
> > >
> > > Signed-off-by: Xiantao Zhang <xiantao.zhang@intel.com>
> > > Signed-off-by: Xudong Hao <xudong.hao@intel.com>
> > 
> > It is very good to see that this problem has been solved!
> > 
> > Considering that at this point it is too late for the 4.2 release cycle,
> > it might be worth spinning a version of these patches for SeaBIOS and
> > upstream QEMU, which now supports PCI passthrough.
> > 
> 
> Hi, Stefano
> 
> Has Xen already switched to SeaBIOS and upstream QEMU? I saw that SeaBIOS has not been updated for 5 months.

Yes, they are available but not the default yet for HVM guests.
In order to enable upstream QEMU you need to pass:

device_model_version='qemu-xen'

in the VM config file.
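For context, a minimal HVM guest config using this option might look like the
following sketch; only device_model_version is taken from the text above, all
other keys and values are illustrative:

```
# Hypothetical minimal xl HVM guest config (example values).
builder = 'hvm'
name    = 'testvm'
memory  = 2048
vcpus   = 2
disk    = [ '/path/to/disk.img,raw,xvda,rw' ]
# Select upstream QEMU instead of qemu-traditional:
device_model_version = 'qemu-xen'
```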


> By upstream QEMU, do you mean this tree: git://git.qemu.org/qemu.git ? I heard that PCI device assignment does not work for this tree, but device hot-add/remove works.

qemu-upstream-unstable (git://xenbits.xensource.com/qemu-upstream-unstable.git),
the upstream-QEMU-based tree used with Xen 4.2, is based on QEMU 1.0 and
doesn't support PCI passthrough.

However, upstream QEMU (git://git.qemu.org/qemu.git) does, and I am going
to rebase on it early once the 4.3 development cycle opens.  So it is
probably a good idea for you to try out upstream QEMU now.
We have a wiki page on how to build and run upstream QEMU on
xen-unstable:

http://wiki.xen.org/wiki/QEMU_Upstream

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 10:12:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 10:12:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1x3Z-0005Fn-Fw; Thu, 16 Aug 2012 10:12:01 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T1x3Y-0005Fc-EY
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 10:12:00 +0000
Received: from [85.158.143.35:52273] by server-1.bemta-4.messagelabs.com id
	7B/D5-07754-F67CC205; Thu, 16 Aug 2012 10:11:59 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1345111908!12537995!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3198 invoked from network); 16 Aug 2012 10:11:49 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-11.tower-21.messagelabs.com with SMTP;
	16 Aug 2012 10:11:49 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 16 Aug 2012 11:11:48 +0100
Message-Id: <502CE3AB0200007800095686@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Thu, 16 Aug 2012 11:12:27 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Xudong Hao" <xudong.hao@intel.com>
References: <1345013831-20662-1-git-send-email-xudong.hao@intel.com>
	<502B86420200007800095048@nat28.tlf.novell.com>
	<403610A45A2B5242BD291EDAE8B37D300FE8AA4F@SHSMSX102.ccr.corp.intel.com>
In-Reply-To: <403610A45A2B5242BD291EDAE8B37D300FE8AA4F@SHSMSX102.ccr.corp.intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "tim@xen.org" <tim@xen.org>, Xiantao Zhang <xiantao.zhang@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 2/3] xen/p2m: Using INVALID_MFN instead of
 mfn_valid
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 16.08.12 at 12:05, "Hao, Xudong" <xudong.hao@intel.com> wrote:
>> From: Jan Beulich [mailto:JBeulich@suse.com]
>> >>> On 15.08.12 at 08:57, Xudong Hao <xudong.hao@intel.com> wrote:
>> > --- a/xen/arch/x86/mm/p2m-ept.c	Tue Jul 24 17:02:04 2012 +0200
>> > +++ b/xen/arch/x86/mm/p2m-ept.c	Thu Jul 26 15:40:01 2012 +0800
>> > @@ -428,7 +428,7 @@ ept_set_entry(struct p2m_domain *p2m, un
>> >      }
>> >
>> >      /* Track the highest gfn for which we have ever had a valid mapping */
>> > -    if ( mfn_valid(mfn_x(mfn)) &&
>> > +    if ( (mfn_x(mfn) != INVALID_MFN) &&
>> >           (gfn + (1UL << order) - 1 > p2m->max_mapped_pfn) )
>> >          p2m->max_mapped_pfn = gfn + (1UL << order) - 1;
>> 
>> Depending on how the above comment gets addressed (i.e.
>> whether MMIO MFNs are to be considered here at all), this
>> might need changing anyway, as a huge max_mapped_pfn
>> value likely wouldn't be very useful anymore.
> 
> Your viewpoint is similar to ours. Here the max_mapped_pfn value is for
> memory, not for MMIO. I think this is a simple change; do you have another
> suggestion?

The question is why this needs to be changed at all. If this is
only about RAM, then mfn_valid() is the right thing to use. If
this is about MMIO too, then the condition is wrong already
(since, as we appear to agree, even now there can be MMIO
above RAM, provided there's little enough RAM).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 10:23:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 10:23:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1xEn-0005Wa-Tr; Thu, 16 Aug 2012 10:23:37 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ronghui.duan@intel.com>) id 1T1xEm-0005WV-AZ
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 10:23:36 +0000
Received: from [85.158.143.35:54706] by server-3.bemta-4.messagelabs.com id
	AC/F8-09529-72ACC205; Thu, 16 Aug 2012 10:23:35 +0000
X-Env-Sender: ronghui.duan@intel.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1345112613!15681606!1
X-Originating-IP: [143.182.124.37]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMzcgPT4gMjY5MDQ5\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31647 invoked from network); 16 Aug 2012 10:23:33 -0000
Received: from mga14.intel.com (HELO mga14.intel.com) (143.182.124.37)
	by server-13.tower-21.messagelabs.com with SMTP;
	16 Aug 2012 10:23:33 -0000
Received: from azsmga002.ch.intel.com ([10.2.17.35])
	by azsmga102.ch.intel.com with ESMTP; 16 Aug 2012 03:23:32 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.77,778,1336374000"; d="scan'208";a="134923027"
Received: from fmsmsx107.amr.corp.intel.com ([10.19.9.54])
	by AZSMGA002.ch.intel.com with ESMTP; 16 Aug 2012 03:23:31 -0700
Received: from shsmsx101.ccr.corp.intel.com (10.239.4.153) by
	FMSMSX107.amr.corp.intel.com (10.19.9.54) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Thu, 16 Aug 2012 03:23:31 -0700
Received: from shsmsx102.ccr.corp.intel.com ([169.254.2.196]) by
	SHSMSX101.ccr.corp.intel.com ([169.254.1.82]) with mapi id
	14.01.0355.002; Thu, 16 Aug 2012 18:23:29 +0800
From: "Duan, Ronghui" <ronghui.duan@intel.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Thread-Topic: [Xen-devel] [RFC v1 1/5] VBD: enlarge max segment per request
	in blkfront
Thread-Index: Ac17mS0vU5VLngcHSnKZaUDeGa+eFQ==
Date: Thu, 16 Aug 2012 10:23:28 +0000
Message-ID: <A21691DE07B84740B5F0B81466D5148A23BCF1EE@SHSMSX102.ccr.corp.intel.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: yes
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
Content-Type: multipart/mixed;
	boundary="_002_A21691DE07B84740B5F0B81466D5148A23BCF1EESHSMSX102ccrcor_"
MIME-Version: 1.0
Cc: "Ian.Jackson@eu.citrix.com" <Ian.Jackson@eu.citrix.com>,
	"Stefano.Stabellini@eu.citrix.com" <Stefano.Stabellini@eu.citrix.com>
Subject: [Xen-devel] [RFC v1 1/5] VBD: enlarge max segment per request in
 blkfront
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--_002_A21691DE07B84740B5F0B81466D5148A23BCF1EESHSMSX102ccrcor_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable


refactoring the blkfront
Signed-off-by: Ronghui Duan <ronghui.duan@intel.com>

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 4e86393..a263faf 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -64,6 +64,12 @@ enum blkif_state {
        BLKIF_STATE_SUSPENDED,
 };

+enum blkif_ring_type {
+       RING_TYPE_UNDEFINED =3D 0,
+       RING_TYPE_1 =3D 1,
+       RING_TYPE_2 =3D 2,
+};
+
 struct blk_shadow {
        struct blkif_request req;
        struct request *request;
@@ -91,12 +97,14 @@ struct blkfront_info
        enum blkif_state connected;
        int ring_ref;
        struct blkif_front_ring ring;
-       struct scatterlist sg[BLKIF_MAX_SEGMENTS_PER_REQUEST];
+       struct scatterlist *sg;
        unsigned int evtchn, irq;
        struct request_queue *rq;
        struct work_struct work;
        struct gnttab_free_callback callback;
-       struct blk_shadow shadow[BLK_RING_SIZE];
+       struct blk_shadow *shadow;
+       struct blk_front_operations *ops;
+       enum blkif_ring_type ring_type;
        unsigned long shadow_free;
        unsigned int feature_flush;
        unsigned int flush_op;
@@ -107,6 +115,36 @@ struct blkfront_info
        int is_ready;
 };

+/* interface of blkfront ring operation */
+static struct blk_front_operations {
+       void *(*ring_get_request) (struct blkfront_info *info);
+       struct blkif_response *(*ring_get_response) (struct blkfront_info *=
info);
+       struct blkif_request_segment *(*ring_get_segment)
+                               (struct blkfront_info *info, int i);
+       unsigned long (*get_id) (struct blkfront_info *info);
+       void (*add_id) (struct blkfront_info *info, unsigned long id);
+       void (*save_seg_shadow) (struct blkfront_info *info, unsigned long =
mfn,
+                                unsigned long id, int i);
+       void (*save_req_shadow) (struct blkfront_info *info,
+                                struct request *req, unsigned long id);
+       struct request *(*get_req_from_shadow)(struct blkfront_info *info,
+                                              unsigned long id);
+       RING_IDX (*get_rsp_prod) (struct blkfront_info *info);
+       RING_IDX (*get_rsp_cons) (struct blkfront_info *info);
+       RING_IDX (*get_req_prod_pvt) (struct blkfront_info *info);
+       void (*check_left_response) (struct blkfront_info *info, int *more_=
to_do);
+       void (*update_rsp_event) (struct blkfront_info *info, int i);
+       void (*update_rsp_cons) (struct blkfront_info *info);
+       void (*update_req_prod_pvt) (struct blkfront_info *info);
+       void (*ring_push) (struct blkfront_info *info, int *notify);
+       int (*recover) (struct blkfront_info *info);
+       int (*ring_full) (struct blkfront_info *info);
+       int (*setup_blkring) (struct xenbus_device *dev, struct blkfront_in=
fo *info);
+       void (*free_blkring) (struct blkfront_info *info, int suspend);
+       void (*blkif_completion) (struct blkfront_info *info, unsigned long=
 id);
+       unsigned int max_seg;
+} blk_front_ops;
+
 static unsigned int nr_minors;
 static unsigned long *minors;
 static DEFINE_SPINLOCK(minor_lock);
@@ -132,7 +170,7 @@ static DEFINE_SPINLOCK(minor_lock);

 #define DEV_NAME       "xvd"   /* name in /dev */

-static int get_id_from_freelist(struct blkfront_info *info)
+static unsigned long get_id_from_freelist(struct blkfront_info *info)
 {
        unsigned long free =3D info->shadow_free;
        BUG_ON(free >=3D BLK_RING_SIZE);
@@ -141,7 +179,7 @@ static int get_id_from_freelist(struct blkfront_info *i=
nfo)
        return free;
 }

-static void add_id_to_freelist(struct blkfront_info *info,
+void add_id_to_freelist(struct blkfront_info *info,
                               unsigned long id)
 {
        info->shadow[id].req.u.rw.id  =3D info->shadow_free;
@@ -251,6 +289,42 @@ static int blkif_ioctl(struct block_device *bdev, fmod=
e_t mode,
        return 0;
 }

+static int ring_full(struct blkfront_info *info)
+{
+       return RING_FULL(&info->ring);
+}
+
+void *ring_get_request(struct blkfront_info *info)
+{
+       return (void *)RING_GET_REQUEST(&info->ring, info->ring.req_prod_pv=
t);
+}
+
+struct blkif_request_segment *ring_get_segment(struct blkfront_info *info,=
 int i)
+{
+       struct blkif_request *ring_req =3D
+                       (struct blkif_request *)info->ops->ring_get_request=
(info);
+       return &ring_req->u.rw.seg[i];
+}
+
+void save_seg_shadow(struct blkfront_info *info,
+                     unsigned long mfn, unsigned long id, int i)
+{
+       info->shadow[id].frame[i] =3D mfn_to_pfn(mfn);
+}
+
+void save_req_shadow(struct blkfront_info *info,
+                     struct request *req, unsigned long id)
+{
+       struct blkif_request *ring_req =3D
+                       (struct blkif_request *)info->ops->ring_get_request=
(info);
+       info->shadow[id].req =3D *ring_req;
+       info->shadow[id].request =3D req;
+}
+
+void update_req_prod_pvt(struct blkfront_info *info)
+{
+       info->ring.req_prod_pvt++;
+}
 /*
  * Generate a Xen blkfront IO request from a blk layer request.  Reads
  * and writes are handled as expected.
@@ -262,6 +336,7 @@ static int blkif_queue_request(struct request *req)
        struct blkfront_info *info =3D req->rq_disk->private_data;
        unsigned long buffer_mfn;
        struct blkif_request *ring_req;
+       struct blkif_request_segment *ring_seg;
        unsigned long id;
        unsigned int fsect, lsect;
        int i, ref;
@@ -282,9 +357,9 @@ static int blkif_queue_request(struct request *req)
        }

        /* Fill out a communications ring structure. */
-       ring_req =3D RING_GET_REQUEST(&info->ring, info->ring.req_prod_pvt)=
;
-       id =3D get_id_from_freelist(info);
-       info->shadow[id].request =3D req;
+       ring_req =3D (struct blkif_request *)info->ops->ring_get_request(in=
fo);
+       id =3D info->ops->get_id(info);
+       //info->shadow[id].request =3D req;

        ring_req->u.rw.id =3D id;
        ring_req->u.rw.sector_number =3D (blkif_sector_t)blk_rq_pos(req);
@@ -315,8 +390,7 @@ static int blkif_queue_request(struct request *req)
        } else {
                ring_req->u.rw.nr_segments =3D blk_rq_map_sg(req->q, req,
                                                           info->sg);
-               BUG_ON(ring_req->u.rw.nr_segments >
-                      BLKIF_MAX_SEGMENTS_PER_REQUEST);
+               BUG_ON(ring_req->u.rw.nr_segments > info->ops->max_seg);

                for_each_sg(info->sg, sg, ring_req->u.rw.nr_segments, i) {
                        buffer_mfn =3D pfn_to_mfn(page_to_pfn(sg_page(sg)))=
;
@@ -332,31 +406,35 @@ static int blkif_queue_request(struct request *req)
                                        buffer_mfn,
                                        rq_data_dir(req));

-                       info->shadow[id].frame[i] =3D mfn_to_pfn(buffer_mfn=
);
-                       ring_req->u.rw.seg[i] =3D
-                                       (struct blkif_request_segment) {
-                                               .gref       =3D ref,
-                                               .first_sect =3D fsect,
-                                               .last_sect  =3D lsect };
+                       ring_seg =3D info->ops->ring_get_segment(info, i);
+                       *ring_seg =3D(struct blkif_request_segment) {
+                                       .gref       =3D ref,
+                                       .first_sect =3D fsect,
+                                       .last_sect  =3D lsect };
+                       info->ops->save_seg_shadow(info, buffer_mfn, id, i)=
;
                }
        }

-       info->ring.req_prod_pvt++;
-
        /* Keep a private copy so we can reissue requests when recovering. =
*/
-       info->shadow[id].req =3D *ring_req;
+       info->ops->save_req_shadow(info, req, id);
+
+       info->ops->update_req_prod_pvt(info);

        gnttab_free_grant_references(gref_head);

        return 0;
 }

+void ring_push(struct blkfront_info *info, int *notify)
+{
+       RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&info->ring, *notify);
+}

 static inline void flush_requests(struct blkfront_info *info)
 {
        int notify;

-       RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&info->ring, notify);
+       info->ops->ring_push(info, &notify);

        if (notify)
                notify_remote_via_irq(info->irq);
@@ -379,7 +457,7 @@ static void do_blkif_request(struct request_queue *rq)
        while ((req =3D blk_peek_request(rq)) !=3D NULL) {
                info =3D req->rq_disk->private_data;

-               if (RING_FULL(&info->ring))
+               if (info->ops->ring_full(info))
                        goto wait;

                blk_start_request(req);
@@ -434,14 +512,15 @@ static int xlvbd_init_blk_queue(struct gendisk *gd, u=
16 sector_size)

        /* Hard sector size and max sectors impersonate the equiv. hardware=
. */
        blk_queue_logical_block_size(rq, sector_size);
-       blk_queue_max_hw_sectors(rq, 512);

        /* Each segment in a request is up to an aligned page in size. */
        blk_queue_segment_boundary(rq, PAGE_SIZE - 1);
        blk_queue_max_segment_size(rq, PAGE_SIZE);

        /* Ensure a merged request will fit in a single I/O ring slot. */
-       blk_queue_max_segments(rq, BLKIF_MAX_SEGMENTS_PER_REQUEST);
+       blk_queue_max_segments(rq, info->ops->max_seg);
+       blk_queue_max_hw_sectors(rq, info->ops->max_seg * PAGE_SIZE
+                                / sector_size);

        /* Make sure buffer addresses are sector-aligned. */
        blk_queue_dma_alignment(rq, 511);
@@ -661,7 +740,7 @@ static void xlvbd_release_gendisk(struct blkfront_info =
*info)

 static void kick_pending_request_queues(struct blkfront_info *info)
 {
-       if (!RING_FULL(&info->ring)) {
+       if (!ring_full(info)) {
                /* Re-enable calldowns. */
                blk_start_queue(info->rq);
                /* Kick things off immediately. */
@@ -696,20 +775,17 @@ static void blkif_free(struct blkfront_info *info, in=
t suspend)
        flush_work_sync(&info->work);

        /* Free resources associated with old device channel. */
-       if (info->ring_ref !=3D GRANT_INVALID_REF) {
-               gnttab_end_foreign_access(info->ring_ref, 0,
-                                         (unsigned long)info->ring.sring);
-               info->ring_ref =3D GRANT_INVALID_REF;
-               info->ring.sring =3D NULL;
-       }
+       info->ops->free_blkring(info, suspend);
+
        if (info->irq)
                unbind_from_irqhandler(info->irq, info);
        info->evtchn =3D info->irq =3D 0;

 }

-static void blkif_completion(struct blk_shadow *s)
+static void blkif_completion(struct blkfront_info *info, unsigned long id)
 {
+       struct blk_shadow *s =3D &info->shadow[id];
        int i;
        /* Do not let BLKIF_OP_DISCARD as nr_segment is in the same place
         * flag. */
@@ -717,6 +793,39 @@ static void blkif_completion(struct blk_shadow *s)
                gnttab_end_foreign_access(s->req.u.rw.seg[i].gref, 0, 0UL);
 }

+struct blkif_response *ring_get_response(struct blkfront_info *info)
+{
+       return RING_GET_RESPONSE(&info->ring, info->ring.rsp_cons);
+}
+RING_IDX get_rsp_prod(struct blkfront_info *info)
+{
+       return info->ring.sring->rsp_prod;
+}
+RING_IDX get_rsp_cons(struct blkfront_info *info)
+{
+       return info->ring.rsp_cons;
+}
+struct request *get_req_from_shadow(struct blkfront_info *info,
+                                   unsigned long id)
+{
+       return info->shadow[id].request;
+}
+void update_rsp_cons(struct blkfront_info *info)
+{
+       info->ring.rsp_cons++;
+}
+RING_IDX get_req_prod_pvt(struct blkfront_info *info)
+{
+       return info->ring.req_prod_pvt;
+}
+void check_left_response(struct blkfront_info *info, int *more_to_do)
+{
+       RING_FINAL_CHECK_FOR_RESPONSES(&info->ring, *more_to_do);
+}
+void update_rsp_event(struct blkfront_info *info, int i)
+{
+       info->ring.sring->rsp_event =3D i + 1;
+}
 static irqreturn_t blkif_interrupt(int irq, void *dev_id)
 {
        struct request *req;
@@ -734,20 +843,20 @@ static irqreturn_t blkif_interrupt(int irq, void *dev=
_id)
        }

  again:
-       rp =3D info->ring.sring->rsp_prod;
+       rp =3D info->ops->get_rsp_prod(info);
        rmb(); /* Ensure we see queued responses up to 'rp'. */

-       for (i =3D info->ring.rsp_cons; i !=3D rp; i++) {
+       for (i =3D info->ops->get_rsp_cons(info); i !=3D rp; i++) {
                unsigned long id;

-               bret =3D RING_GET_RESPONSE(&info->ring, i);
+               bret =3D info->ops->ring_get_response(info);
                id   =3D bret->id;
-               req  =3D info->shadow[id].request;
+               req  =3D info->ops->get_req_from_shadow(info, id);

                if (bret->operation !=3D BLKIF_OP_DISCARD)
-                       blkif_completion(&info->shadow[id]);
+                       info->ops->blkif_completion(info, id);

-               add_id_to_freelist(info, id);
+               info->ops->add_id(info, id);

                error =3D (bret->status =3D=3D BLKIF_RSP_OKAY) ? 0 : -EIO;
                switch (bret->operation) {
@@ -800,17 +909,17 @@ static irqreturn_t blkif_interrupt(int irq, void *dev=
_id)
                default:
                        BUG();
                }
+               info->ops->update_rsp_cons(info);
        }

-       info->ring.rsp_cons =3D i;
-
-       if (i !=3D info->ring.req_prod_pvt) {
+       rp =3D info->ops->get_req_prod_pvt(info);
+       if (i !=3D rp) {
                int more_to_do;
-               RING_FINAL_CHECK_FOR_RESPONSES(&info->ring, more_to_do);
+               info->ops->check_left_response(info, &more_to_do);
                if (more_to_do)
                        goto again;
        } else
-               info->ring.sring->rsp_event =3D i + 1;
+               info->ops->update_rsp_event(info, i);

        kick_pending_request_queues(info);

@@ -819,6 +928,26 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_=
id)
        return IRQ_HANDLED;
 }

+static int init_shadow(struct blkfront_info *info)
+{
+       unsigned int ring_size;
+       int i;
+       if (info->ring_type !=3D RING_TYPE_UNDEFINED)
+               return 0;
+
+       info->ring_type =3D RING_TYPE_1;
+       ring_size =3D BLK_RING_SIZE;
+       info->shadow =3D kzalloc(sizeof(struct blk_shadow) * ring_size,
+                              GFP_KERNEL);
+       if (!info->shadow)
+               return -ENOMEM;
+
+       for (i =3D 0; i < ring_size; i++)
+               info->shadow[i].req.u.rw.id =3D i+1;
+       info->shadow[ring_size - 1].req.u.rw.id =3D 0x0fffffff;
+
+       return 0;
+}

 static int setup_blkring(struct xenbus_device *dev,
                         struct blkfront_info *info)
@@ -836,8 +965,6 @@ static int setup_blkring(struct xenbus_device *dev,
        SHARED_RING_INIT(sring);
        FRONT_RING_INIT(&info->ring, sring, PAGE_SIZE);

-       sg_init_table(info->sg, BLKIF_MAX_SEGMENTS_PER_REQUEST);
-
        err =3D xenbus_grant_ring(dev, virt_to_mfn(info->ring.sring));
        if (err < 0) {
                free_page((unsigned long)sring);
@@ -846,6 +973,16 @@ static int setup_blkring(struct xenbus_device *dev,
        }
        info->ring_ref = err;

+       info->sg = kzalloc(sizeof(struct scatterlist) * info->ops->max_seg, GFP_KERNEL);
+       if (!info->sg) {
+               err = -ENOMEM;
+               goto fail;
+       }
+       sg_init_table(info->sg, info->ops->max_seg);
+
+       err = init_shadow(info);
+       if (err)
+               goto fail;
        err = xenbus_alloc_evtchn(dev, &info->evtchn);
        if (err)
                goto fail;
@@ -866,6 +1003,20 @@ fail:
        return err;
 }

+static void free_blkring(struct blkfront_info *info, int suspend)
+{
+       if (info->ring_ref != GRANT_INVALID_REF) {
+               gnttab_end_foreign_access(info->ring_ref, 0,
+                                        (unsigned long)info->ring.sring);
+               info->ring_ref = GRANT_INVALID_REF;
+               info->ring.sring = NULL;
+       }
+
+       kfree(info->sg);
+
+       if (!suspend)
+               kfree(info->shadow);
+}

 /* Common code used when first setting up, and when resuming. */
 static int talk_to_blkback(struct xenbus_device *dev,
@@ -875,8 +1026,11 @@ static int talk_to_blkback(struct xenbus_device *dev,
        struct xenbus_transaction xbt;
        int err;

+       /* register ring ops */
+       info->ops = &blk_front_ops;
+
        /* Create shared ring, alloc event channel. */
-       err = setup_blkring(dev, info);
+       err = info->ops->setup_blkring(dev, info);
        if (err)
                goto out;

@@ -937,7 +1091,7 @@ again:
 static int blkfront_probe(struct xenbus_device *dev,
                          const struct xenbus_device_id *id)
 {
-       int err, vdevice, i;
+       int err, vdevice;
        struct blkfront_info *info;

        /* FIXME: Use dynamic device id if this is not set. */
@@ -995,10 +1149,6 @@ static int blkfront_probe(struct xenbus_device *dev,
        info->connected = BLKIF_STATE_DISCONNECTED;
        INIT_WORK(&info->work, blkif_restart_queue);

-       for (i = 0; i < BLK_RING_SIZE; i++)
-               info->shadow[i].req.u.rw.id = i+1;
-       info->shadow[BLK_RING_SIZE-1].req.u.rw.id = 0x0fffffff;
-
        /* Front end dir is a number, which is used as the id. */
        info->handle = simple_strtoul(strrchr(dev->nodename, '/')+1, NULL, 0);
        dev_set_drvdata(&dev->dev, info);
@@ -1022,14 +1172,14 @@ static int blkif_recover(struct blkfront_info *info)
        int j;

        /* Stage 1: Make a safe copy of the shadow state. */
-       copy = kmalloc(sizeof(info->shadow),
+       copy = kmalloc(sizeof(struct blk_shadow) * BLK_RING_SIZE,
                       GFP_NOIO | __GFP_REPEAT | __GFP_HIGH);
        if (!copy)
                return -ENOMEM;
-       memcpy(copy, info->shadow, sizeof(info->shadow));
+       memcpy(copy, info->shadow, sizeof(struct blk_shadow) * BLK_RING_SIZE);

        /* Stage 2: Set up free list. */
-       memset(&info->shadow, 0, sizeof(info->shadow));
+       memset(info->shadow, 0, sizeof(struct blk_shadow) * BLK_RING_SIZE);
        for (i = 0; i < BLK_RING_SIZE; i++)
                info->shadow[i].req.u.rw.id = i+1;
        info->shadow_free = info->ring.req_prod_pvt;
@@ -1042,11 +1192,11 @@ static int blkif_recover(struct blkfront_info *info)
                        continue;

                /* Grab a request slot and copy shadow state into it. */
-               req = RING_GET_REQUEST(&info->ring, info->ring.req_prod_pvt);
+               req = (struct blkif_request *)info->ops->ring_get_request(info);
                *req = copy[i].req;

                /* We get a new request id, and must reset the shadow state. */
-               req->u.rw.id = get_id_from_freelist(info);
+               req->u.rw.id = info->ops->get_id(info);
                memcpy(&info->shadow[req->u.rw.id], &copy[i], sizeof(copy[i]));

                if (req->operation != BLKIF_OP_DISCARD) {
@@ -1100,7 +1250,7 @@ static int blkfront_resume(struct xenbus_device *dev)

        err = talk_to_blkback(dev, info);
        if (info->connected == BLKIF_STATE_SUSPENDED && !err)
-               err = blkif_recover(info);
+               err = info->ops->recover(info);

        return err;
 }
@@ -1280,7 +1430,6 @@ static void blkfront_connect(struct blkfront_info *info)
        info->connected = BLKIF_STATE_CONNECTED;
        kick_pending_request_queues(info);
        spin_unlock_irq(&info->io_lock);
-
        add_disk(info->gd);

        info->is_ready = 1;
@@ -1444,6 +1593,31 @@ out:
        return 0;
 }

+static struct blk_front_operations blk_front_ops = {
+       .ring_get_request = ring_get_request,
+       .ring_get_response = ring_get_response,
+       .ring_get_segment = ring_get_segment,
+       .get_id = get_id_from_freelist,
+       .add_id = add_id_to_freelist,
+       .save_seg_shadow = save_seg_shadow,
+       .save_req_shadow = save_req_shadow,
+       .get_req_from_shadow = get_req_from_shadow,
+       .get_rsp_prod = get_rsp_prod,
+       .get_rsp_cons = get_rsp_cons,
+       .get_req_prod_pvt = get_req_prod_pvt,
+       .check_left_response = check_left_response,
+       .update_rsp_event = update_rsp_event,
+       .update_rsp_cons = update_rsp_cons,
+       .update_req_prod_pvt = update_req_prod_pvt,
+       .ring_push = ring_push,
+       .recover = blkif_recover,
+       .ring_full = ring_full,
+       .setup_blkring = setup_blkring,
+       .free_blkring = free_blkring,
+       .blkif_completion = blkif_completion,
+       .max_seg = BLKIF_MAX_SEGMENTS_PER_REQUEST,
+};
+
 static const struct block_device_operations xlvbd_block_fops =
 {
        .owner = THIS_MODULE,

-ronghui



--_002_A21691DE07B84740B5F0B81466D5148A23BCF1EESHSMSX102ccrcor_
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--_002_A21691DE07B84740B5F0B81466D5148A23BCF1EESHSMSX102ccrcor_--


From xen-devel-bounces@lists.xen.org Thu Aug 16 10:23:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 10:23:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1xEn-0005Wa-Tr; Thu, 16 Aug 2012 10:23:37 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ronghui.duan@intel.com>) id 1T1xEm-0005WV-AZ
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 10:23:36 +0000
Received: from [85.158.143.35:54706] by server-3.bemta-4.messagelabs.com id
	AC/F8-09529-72ACC205; Thu, 16 Aug 2012 10:23:35 +0000
X-Env-Sender: ronghui.duan@intel.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1345112613!15681606!1
X-Originating-IP: [143.182.124.37]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMzcgPT4gMjY5MDQ5\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31647 invoked from network); 16 Aug 2012 10:23:33 -0000
Received: from mga14.intel.com (HELO mga14.intel.com) (143.182.124.37)
	by server-13.tower-21.messagelabs.com with SMTP;
	16 Aug 2012 10:23:33 -0000
Received: from azsmga002.ch.intel.com ([10.2.17.35])
	by azsmga102.ch.intel.com with ESMTP; 16 Aug 2012 03:23:32 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.77,778,1336374000"; d="scan'208";a="134923027"
Received: from fmsmsx107.amr.corp.intel.com ([10.19.9.54])
	by AZSMGA002.ch.intel.com with ESMTP; 16 Aug 2012 03:23:31 -0700
Received: from shsmsx101.ccr.corp.intel.com (10.239.4.153) by
	FMSMSX107.amr.corp.intel.com (10.19.9.54) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Thu, 16 Aug 2012 03:23:31 -0700
Received: from shsmsx102.ccr.corp.intel.com ([169.254.2.196]) by
	SHSMSX101.ccr.corp.intel.com ([169.254.1.82]) with mapi id
	14.01.0355.002; Thu, 16 Aug 2012 18:23:29 +0800
From: "Duan, Ronghui" <ronghui.duan@intel.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Thread-Topic: [Xen-devel] [RFC v1 1/5] VBD: enlarge max segment per request
	in blkfront
Thread-Index: Ac17mS0vU5VLngcHSnKZaUDeGa+eFQ==
Date: Thu, 16 Aug 2012 10:23:28 +0000
Message-ID: <A21691DE07B84740B5F0B81466D5148A23BCF1EE@SHSMSX102.ccr.corp.intel.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: yes
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
Content-Type: multipart/mixed;
	boundary="_002_A21691DE07B84740B5F0B81466D5148A23BCF1EESHSMSX102ccrcor_"
MIME-Version: 1.0
Cc: "Ian.Jackson@eu.citrix.com" <Ian.Jackson@eu.citrix.com>,
	"Stefano.Stabellini@eu.citrix.com" <Stefano.Stabellini@eu.citrix.com>
Subject: [Xen-devel] [RFC v1 1/5] VBD: enlarge max segment per request in
 blkfront
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--_002_A21691DE07B84740B5F0B81466D5148A23BCF1EESHSMSX102ccrcor_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable


refactoring the blkfront
Signed-off-by: Ronghui Duan <ronghui.duan@intel.com>

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 4e86393..a263faf 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -64,6 +64,12 @@ enum blkif_state {
        BLKIF_STATE_SUSPENDED,
 };

+enum blkif_ring_type {
+       RING_TYPE_UNDEFINED = 0,
+       RING_TYPE_1 = 1,
+       RING_TYPE_2 = 2,
+};
+
 struct blk_shadow {
        struct blkif_request req;
        struct request *request;
@@ -91,12 +97,14 @@ struct blkfront_info
        enum blkif_state connected;
        int ring_ref;
        struct blkif_front_ring ring;
-       struct scatterlist sg[BLKIF_MAX_SEGMENTS_PER_REQUEST];
+       struct scatterlist *sg;
        unsigned int evtchn, irq;
        struct request_queue *rq;
        struct work_struct work;
        struct gnttab_free_callback callback;
-       struct blk_shadow shadow[BLK_RING_SIZE];
+       struct blk_shadow *shadow;
+       struct blk_front_operations *ops;
+       enum blkif_ring_type ring_type;
        unsigned long shadow_free;
        unsigned int feature_flush;
        unsigned int flush_op;
@@ -107,6 +115,36 @@ struct blkfront_info
        int is_ready;
 };

+/* interface of blkfront ring operation */
+static struct blk_front_operations {
+       void *(*ring_get_request) (struct blkfront_info *info);
+       struct blkif_response *(*ring_get_response) (struct blkfront_info *info);
+       struct blkif_request_segment *(*ring_get_segment)
+                               (struct blkfront_info *info, int i);
+       unsigned long (*get_id) (struct blkfront_info *info);
+       void (*add_id) (struct blkfront_info *info, unsigned long id);
+       void (*save_seg_shadow) (struct blkfront_info *info, unsigned long mfn,
+                                unsigned long id, int i);
+       void (*save_req_shadow) (struct blkfront_info *info,
+                                struct request *req, unsigned long id);
+       struct request *(*get_req_from_shadow)(struct blkfront_info *info,
+                                              unsigned long id);
+       RING_IDX (*get_rsp_prod) (struct blkfront_info *info);
+       RING_IDX (*get_rsp_cons) (struct blkfront_info *info);
+       RING_IDX (*get_req_prod_pvt) (struct blkfront_info *info);
+       void (*check_left_response) (struct blkfront_info *info, int *more_to_do);
+       void (*update_rsp_event) (struct blkfront_info *info, int i);
+       void (*update_rsp_cons) (struct blkfront_info *info);
+       void (*update_req_prod_pvt) (struct blkfront_info *info);
+       void (*ring_push) (struct blkfront_info *info, int *notify);
+       int (*recover) (struct blkfront_info *info);
+       int (*ring_full) (struct blkfront_info *info);
+       int (*setup_blkring) (struct xenbus_device *dev, struct blkfront_info *info);
+       void (*free_blkring) (struct blkfront_info *info, int suspend);
+       void (*blkif_completion) (struct blkfront_info *info, unsigned long id);
+       unsigned int max_seg;
+} blk_front_ops;
+
 static unsigned int nr_minors;
 static unsigned long *minors;
 static DEFINE_SPINLOCK(minor_lock);
@@ -132,7 +170,7 @@ static DEFINE_SPINLOCK(minor_lock);

 #define DEV_NAME       "xvd"   /* name in /dev */

-static int get_id_from_freelist(struct blkfront_info *info)
+static unsigned long get_id_from_freelist(struct blkfront_info *info)
 {
        unsigned long free = info->shadow_free;
        BUG_ON(free >= BLK_RING_SIZE);
@@ -141,7 +179,7 @@ static int get_id_from_freelist(struct blkfront_info *info)
        return free;
 }

-static void add_id_to_freelist(struct blkfront_info *info,
+void add_id_to_freelist(struct blkfront_info *info,
                               unsigned long id)
 {
        info->shadow[id].req.u.rw.id  = info->shadow_free;
@@ -251,6 +289,42 @@ static int blkif_ioctl(struct block_device *bdev, fmode_t mode,
        return 0;
 }

+static int ring_full(struct blkfront_info *info)
+{
+       return RING_FULL(&info->ring);
+}
+
+void *ring_get_request(struct blkfront_info *info)
+{
+       return (void *)RING_GET_REQUEST(&info->ring, info->ring.req_prod_pvt);
+}
+
+struct blkif_request_segment *ring_get_segment(struct blkfront_info *info, int i)
+{
+       struct blkif_request *ring_req =
+                       (struct blkif_request *)info->ops->ring_get_request(info);
+       return &ring_req->u.rw.seg[i];
+}
+
+void save_seg_shadow(struct blkfront_info *info,
+                     unsigned long mfn, unsigned long id, int i)
+{
+       info->shadow[id].frame[i] = mfn_to_pfn(mfn);
+}
+
+void save_req_shadow(struct blkfront_info *info,
+                     struct request *req, unsigned long id)
+{
+       struct blkif_request *ring_req =
+                       (struct blkif_request *)info->ops->ring_get_request(info);
+       info->shadow[id].req = *ring_req;
+       info->shadow[id].request = req;
+}
+
+void update_req_prod_pvt(struct blkfront_info *info)
+{
+       info->ring.req_prod_pvt++;
+}
 /*
  * Generate a Xen blkfront IO request from a blk layer request.  Reads
  * and writes are handled as expected.
@@ -262,6 +336,7 @@ static int blkif_queue_request(struct request *req)
        struct blkfront_info *info = req->rq_disk->private_data;
        unsigned long buffer_mfn;
        struct blkif_request *ring_req;
+       struct blkif_request_segment *ring_seg;
        unsigned long id;
        unsigned int fsect, lsect;
        int i, ref;
@@ -282,9 +357,9 @@ static int blkif_queue_request(struct request *req)
        }

        /* Fill out a communications ring structure. */
-       ring_req = RING_GET_REQUEST(&info->ring, info->ring.req_prod_pvt);
-       id = get_id_from_freelist(info);
-       info->shadow[id].request = req;
+       ring_req = (struct blkif_request *)info->ops->ring_get_request(info);
+       id = info->ops->get_id(info);
+       //info->shadow[id].request = req;

        ring_req->u.rw.id = id;
        ring_req->u.rw.sector_number = (blkif_sector_t)blk_rq_pos(req);
@@ -315,8 +390,7 @@ static int blkif_queue_request(struct request *req)
        } else {
                ring_req->u.rw.nr_segments = blk_rq_map_sg(req->q, req,
                                                           info->sg);
-               BUG_ON(ring_req->u.rw.nr_segments >
-                      BLKIF_MAX_SEGMENTS_PER_REQUEST);
+               BUG_ON(ring_req->u.rw.nr_segments > info->ops->max_seg);

                for_each_sg(info->sg, sg, ring_req->u.rw.nr_segments, i) {
                        buffer_mfn = pfn_to_mfn(page_to_pfn(sg_page(sg)));
@@ -332,31 +406,35 @@ static int blkif_queue_request(struct request *req)
                                        buffer_mfn,
                                        rq_data_dir(req));

-                       info->shadow[id].frame[i] = mfn_to_pfn(buffer_mfn);
-                       ring_req->u.rw.seg[i] =
-                                       (struct blkif_request_segment) {
-                                               .gref       = ref,
-                                               .first_sect = fsect,
-                                               .last_sect  = lsect };
+                       ring_seg = info->ops->ring_get_segment(info, i);
+                       *ring_seg =(struct blkif_request_segment) {
+                                       .gref       = ref,
+                                       .first_sect = fsect,
+                                       .last_sect  = lsect };
+                       info->ops->save_seg_shadow(info, buffer_mfn, id, i);
                }
        }

-       info->ring.req_prod_pvt++;
-
        /* Keep a private copy so we can reissue requests when recovering. */
-       info->shadow[id].req = *ring_req;
+       info->ops->save_req_shadow(info, req, id);
+
+       info->ops->update_req_prod_pvt(info);

        gnttab_free_grant_references(gref_head);

        return 0;
 }

+void ring_push(struct blkfront_info *info, int *notify)
+{
+       RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&info->ring, *notify);
+}

 static inline void flush_requests(struct blkfront_info *info)
 {
        int notify;

-       RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&info->ring, notify);
+       info->ops->ring_push(info, &notify);

        if (notify)
                notify_remote_via_irq(info->irq);
@@ -379,7 +457,7 @@ static void do_blkif_request(struct request_queue *rq)
        while ((req = blk_peek_request(rq)) != NULL) {
                info = req->rq_disk->private_data;

-               if (RING_FULL(&info->ring))
+               if (info->ops->ring_full(info))
                        goto wait;

                blk_start_request(req);
@@ -434,14 +512,15 @@ static int xlvbd_init_blk_queue(struct gendisk *gd, u16 sector_size)

        /* Hard sector size and max sectors impersonate the equiv. hardware. */
        blk_queue_logical_block_size(rq, sector_size);
-       blk_queue_max_hw_sectors(rq, 512);

        /* Each segment in a request is up to an aligned page in size. */
        blk_queue_segment_boundary(rq, PAGE_SIZE - 1);
        blk_queue_max_segment_size(rq, PAGE_SIZE);

        /* Ensure a merged request will fit in a single I/O ring slot. */
-       blk_queue_max_segments(rq, BLKIF_MAX_SEGMENTS_PER_REQUEST);
+       blk_queue_max_segments(rq, info->ops->max_seg);
+       blk_queue_max_hw_sectors(rq, info->ops->max_seg * PAGE_SIZE
+                                / sector_size);

        /* Make sure buffer addresses are sector-aligned. */
        blk_queue_dma_alignment(rq, 511);
@@ -661,7 +740,7 @@ static void xlvbd_release_gendisk(struct blkfront_info *info)

 static void kick_pending_request_queues(struct blkfront_info *info)
 {
-       if (!RING_FULL(&info->ring)) {
+       if (!ring_full(info)) {
                /* Re-enable calldowns. */
                blk_start_queue(info->rq);
                /* Kick things off immediately. */
@@ -696,20 +775,17 @@ static void blkif_free(struct blkfront_info *info, int suspend)
        flush_work_sync(&info->work);

        /* Free resources associated with old device channel. */
-       if (info->ring_ref != GRANT_INVALID_REF) {
-               gnttab_end_foreign_access(info->ring_ref, 0,
-                                         (unsigned long)info->ring.sring);
-               info->ring_ref = GRANT_INVALID_REF;
-               info->ring.sring = NULL;
-       }
+       info->ops->free_blkring(info, suspend);
+
        if (info->irq)
                unbind_from_irqhandler(info->irq, info);
        info->evtchn = info->irq = 0;

 }

-static void blkif_completion(struct blk_shadow *s)
+static void blkif_completion(struct blkfront_info *info, unsigned long id)
 {
+       struct blk_shadow *s = &info->shadow[id];
        int i;
        /* Do not let BLKIF_OP_DISCARD as nr_segment is in the same place
         * flag. */
@@ -717,6 +793,39 @@ static void blkif_completion(struct blk_shadow *s)
                gnttab_end_foreign_access(s->req.u.rw.seg[i].gref, 0, 0UL);
 }

+struct blkif_response *ring_get_response(struct blkfront_info *info)
+{
+       return RING_GET_RESPONSE(&info->ring, info->ring.rsp_cons);
+}
+RING_IDX get_rsp_prod(struct blkfront_info *info)
+{
+       return info->ring.sring->rsp_prod;
+}
+RING_IDX get_rsp_cons(struct blkfront_info *info)
+{
+       return info->ring.rsp_cons;
+}
+struct request *get_req_from_shadow(struct blkfront_info *info,
+                                   unsigned long id)
+{
+       return info->shadow[id].request;
+}
+void update_rsp_cons(struct blkfront_info *info)
+{
+       info->ring.rsp_cons++;
+}
+RING_IDX get_req_prod_pvt(struct blkfront_info *info)
+{
+       return info->ring.req_prod_pvt;
+}
+void check_left_response(struct blkfront_info *info, int *more_to_do)
+{
+       RING_FINAL_CHECK_FOR_RESPONSES(&info->ring, *more_to_do);
+}
+void update_rsp_event(struct blkfront_info *info, int i)
+{
+       info->ring.sring->rsp_event = i + 1;
+}
 static irqreturn_t blkif_interrupt(int irq, void *dev_id)
 {
        struct request *req;
@@ -734,20 +843,20 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
        }

  again:
-       rp = info->ring.sring->rsp_prod;
+       rp = info->ops->get_rsp_prod(info);
        rmb(); /* Ensure we see queued responses up to 'rp'. */

-       for (i = info->ring.rsp_cons; i != rp; i++) {
+       for (i = info->ops->get_rsp_cons(info); i != rp; i++) {
                unsigned long id;

-               bret = RING_GET_RESPONSE(&info->ring, i);
+               bret = info->ops->ring_get_response(info);
                id   = bret->id;
-               req  = info->shadow[id].request;
+               req  = info->ops->get_req_from_shadow(info, id);

                if (bret->operation != BLKIF_OP_DISCARD)
-                       blkif_completion(&info->shadow[id]);
+                       info->ops->blkif_completion(info, id);

-               add_id_to_freelist(info, id);
+               info->ops->add_id(info, id);

                error = (bret->status == BLKIF_RSP_OKAY) ? 0 : -EIO;
                switch (bret->operation) {
@@ -800,17 +909,17 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
                default:
                        BUG();
                }
+               info->ops->update_rsp_cons(info);
        }

-       info->ring.rsp_cons = i;
-
-       if (i != info->ring.req_prod_pvt) {
+       rp = info->ops->get_req_prod_pvt(info);
+       if (i != rp) {
                int more_to_do;
-               RING_FINAL_CHECK_FOR_RESPONSES(&info->ring, more_to_do);
+               info->ops->check_left_response(info, &more_to_do);
                if (more_to_do)
                        goto again;
        } else
-               info->ring.sring->rsp_event = i + 1;
+               info->ops->update_rsp_event(info, i);

        kick_pending_request_queues(info);

@@ -819,6 +928,26 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
        return IRQ_HANDLED;
 }

+static int init_shadow(struct blkfront_info *info)
+{
+       unsigned int ring_size;
+       int i;
+       if (info->ring_type != RING_TYPE_UNDEFINED)
+               return 0;
+
+       info->ring_type = RING_TYPE_1;
+       ring_size = BLK_RING_SIZE;
+       info->shadow = kzalloc(sizeof(struct blk_shadow) * ring_size,
+                              GFP_KERNEL);
+       if (!info->shadow)
+               return -ENOMEM;
+
+       for (i = 0; i < ring_size; i++)
+               info->shadow[i].req.u.rw.id = i+1;
+       info->shadow[ring_size - 1].req.u.rw.id = 0x0fffffff;
+
+       return 0;
+}

 static int setup_blkring(struct xenbus_device *dev,
                         struct blkfront_info *info)
@@ -836,8 +965,6 @@ static int setup_blkring(struct xenbus_device *dev,
        SHARED_RING_INIT(sring);
        FRONT_RING_INIT(&info->ring, sring, PAGE_SIZE);

-       sg_init_table(info->sg, BLKIF_MAX_SEGMENTS_PER_REQUEST);
-
        err = xenbus_grant_ring(dev, virt_to_mfn(info->ring.sring));
        if (err < 0) {
                free_page((unsigned long)sring);
@@ -846,6 +973,16 @@ static int setup_blkring(struct xenbus_device *dev,
        }
        info->ring_ref = err;

+       info->sg = kzalloc(sizeof(struct scatterlist) * info->ops->max_seg, GFP_KERNEL);
+       if (!info->sg) {
+               err = -ENOMEM;
+               goto fail;
+       }
+       sg_init_table(info->sg, info->ops->max_seg);
+
+       err = init_shadow(info);
+       if (err)
+               goto fail;
        err = xenbus_alloc_evtchn(dev, &info->evtchn);
        if (err)
                goto fail;
@@ -866,6 +1003,20 @@ fail:
        return err;
 }

+static void free_blkring(struct blkfront_info *info, int suspend)
+{
+       if (info->ring_ref != GRANT_INVALID_REF) {
+               gnttab_end_foreign_access(info->ring_ref, 0,
+                                        (unsigned long)info->ring.sring);
+               info->ring_ref = GRANT_INVALID_REF;
+               info->ring.sring = NULL;
+       }
+
+       kfree(info->sg);
+
+       if (!suspend)
+               kfree(info->shadow);
+}

 /* Common code used when first setting up, and when resuming. */
 static int talk_to_blkback(struct xenbus_device *dev,
@@ -875,8 +1026,11 @@ static int talk_to_blkback(struct xenbus_device *dev,
        struct xenbus_transaction xbt;
        int err;

+       /* register ring ops */
+       info->ops = &blk_front_ops;
+
        /* Create shared ring, alloc event channel. */
-       err = setup_blkring(dev, info);
+       err = info->ops->setup_blkring(dev, info);
        if (err)
                goto out;

@@ -937,7 +1091,7 @@ again:
 static int blkfront_probe(struct xenbus_device *dev,
                          const struct xenbus_device_id *id)
 {
-       int err, vdevice, i;
+       int err, vdevice;
        struct blkfront_info *info;

        /* FIXME: Use dynamic device id if this is not set. */
@@ -995,10 +1149,6 @@ static int blkfront_probe(struct xenbus_device *dev,
        info->connected = BLKIF_STATE_DISCONNECTED;
        INIT_WORK(&info->work, blkif_restart_queue);

-       for (i = 0; i < BLK_RING_SIZE; i++)
-               info->shadow[i].req.u.rw.id = i+1;
-       info->shadow[BLK_RING_SIZE-1].req.u.rw.id = 0x0fffffff;
-
        /* Front end dir is a number, which is used as the id. */
        info->handle = simple_strtoul(strrchr(dev->nodename, '/')+1, NULL, 0);
        dev_set_drvdata(&dev->dev, info);
@@ -1022,14 +1172,14 @@ static int blkif_recover(struct blkfront_info *info)
        int j;

        /* Stage 1: Make a safe copy of the shadow state. */
-       copy = kmalloc(sizeof(info->shadow),
+       copy = kmalloc(sizeof(struct blk_shadow) * BLK_RING_SIZE,
                       GFP_NOIO | __GFP_REPEAT | __GFP_HIGH);
        if (!copy)
                return -ENOMEM;
-       memcpy(copy, info->shadow, sizeof(info->shadow));
+       memcpy(copy, info->shadow, sizeof(struct blk_shadow) * BLK_RING_SIZE);

        /* Stage 2: Set up free list. */
-       memset(&info->shadow, 0, sizeof(info->shadow));
+       memset(info->shadow, 0, sizeof(struct blk_shadow) * BLK_RING_SIZE);
        for (i = 0; i < BLK_RING_SIZE; i++)
                info->shadow[i].req.u.rw.id = i+1;
        info->shadow_free = info->ring.req_prod_pvt;
@@ -1042,11 +1192,11 @@ static int blkif_recover(struct blkfront_info *info)
                        continue;

                /* Grab a request slot and copy shadow state into it. */
-               req = RING_GET_REQUEST(&info->ring, info->ring.req_prod_pvt);
+               req = (struct blkif_request *)info->ops->ring_get_request(info);
                *req = copy[i].req;

                /* We get a new request id, and must reset the shadow state. */
-               req->u.rw.id = get_id_from_freelist(info);
+               req->u.rw.id = info->ops->get_id(info);
                memcpy(&info->shadow[req->u.rw.id], &copy[i], sizeof(copy[i]));

                if (req->operation != BLKIF_OP_DISCARD) {
@@ -1100,7 +1250,7 @@ static int blkfront_resume(struct xenbus_device *dev)

        err = talk_to_blkback(dev, info);
        if (info->connected == BLKIF_STATE_SUSPENDED && !err)
-               err = blkif_recover(info);
+               err = info->ops->recover(info);

        return err;
 }
@@ -1280,7 +1430,6 @@ static void blkfront_connect(struct blkfront_info *info)
        info->connected = BLKIF_STATE_CONNECTED;
        kick_pending_request_queues(info);
        spin_unlock_irq(&info->io_lock);
-
        add_disk(info->gd);

        info->is_ready = 1;
@@ -1444,6 +1593,31 @@ out:
        return 0;
 }

+static struct blk_front_operations blk_front_ops = {
+       .ring_get_request = ring_get_request,
+       .ring_get_response = ring_get_response,
+       .ring_get_segment = ring_get_segment,
+       .get_id = get_id_from_freelist,
+       .add_id = add_id_to_freelist,
+       .save_seg_shadow = save_seg_shadow,
+       .save_req_shadow = save_req_shadow,
+       .get_req_from_shadow = get_req_from_shadow,
+       .get_rsp_prod = get_rsp_prod,
+       .get_rsp_cons = get_rsp_cons,
+       .get_req_prod_pvt = get_req_prod_pvt,
+       .check_left_response = check_left_response,
+       .update_rsp_event = update_rsp_event,
+       .update_rsp_cons = update_rsp_cons,
+       .update_req_prod_pvt = update_req_prod_pvt,
+       .ring_push = ring_push,
+       .recover = blkif_recover,
+       .ring_full = ring_full,
+       .setup_blkring = setup_blkring,
+       .free_blkring = free_blkring,
+       .blkif_completion = blkif_completion,
+       .max_seg = BLKIF_MAX_SEGMENTS_PER_REQUEST,
+};
+
 static const struct block_device_operations xlvbd_block_fops =
 {
        .owner = THIS_MODULE,

-ronghui



KHJxLCBpbmZvLT5vcHMtPm1heF9zZWcpOworCWJsa19xdWV1ZV9tYXhfaHdfc2VjdG9ycyhycSwg
aW5mby0+b3BzLT5tYXhfc2VnICogUEFHRV9TSVpFCisJCQkJIC8gc2VjdG9yX3NpemUpOwogCiAJ
LyogTWFrZSBzdXJlIGJ1ZmZlciBhZGRyZXNzZXMgYXJlIHNlY3Rvci1hbGlnbmVkLiAqLwogCWJs
a19xdWV1ZV9kbWFfYWxpZ25tZW50KHJxLCA1MTEpOwpAQCAtNjYxLDcgKzc0MCw3IEBAIHN0YXRp
YyB2b2lkIHhsdmJkX3JlbGVhc2VfZ2VuZGlzayhzdHJ1Y3QgYmxrZnJvbnRfaW5mbyAqaW5mbykK
IAogc3RhdGljIHZvaWQga2lja19wZW5kaW5nX3JlcXVlc3RfcXVldWVzKHN0cnVjdCBibGtmcm9u
dF9pbmZvICppbmZvKQogewotCWlmICghUklOR19GVUxMKCZpbmZvLT5yaW5nKSkgeworCWlmICgh
cmluZ19mdWxsKGluZm8pKSB7CiAJCS8qIFJlLWVuYWJsZSBjYWxsZG93bnMuICovCiAJCWJsa19z
dGFydF9xdWV1ZShpbmZvLT5ycSk7CiAJCS8qIEtpY2sgdGhpbmdzIG9mZiBpbW1lZGlhdGVseS4g
Ki8KQEAgLTY5NiwyMCArNzc1LDE3IEBAIHN0YXRpYyB2b2lkIGJsa2lmX2ZyZWUoc3RydWN0IGJs
a2Zyb250X2luZm8gKmluZm8sIGludCBzdXNwZW5kKQogCWZsdXNoX3dvcmtfc3luYygmaW5mby0+
d29yayk7CiAKIAkvKiBGcmVlIHJlc291cmNlcyBhc3NvY2lhdGVkIHdpdGggb2xkIGRldmljZSBj
aGFubmVsLiAqLwotCWlmIChpbmZvLT5yaW5nX3JlZiAhPSBHUkFOVF9JTlZBTElEX1JFRikgewot
CQlnbnR0YWJfZW5kX2ZvcmVpZ25fYWNjZXNzKGluZm8tPnJpbmdfcmVmLCAwLAotCQkJCQkgICh1
bnNpZ25lZCBsb25nKWluZm8tPnJpbmcuc3JpbmcpOwotCQlpbmZvLT5yaW5nX3JlZiA9IEdSQU5U
X0lOVkFMSURfUkVGOwotCQlpbmZvLT5yaW5nLnNyaW5nID0gTlVMTDsKLQl9CisJaW5mby0+b3Bz
LT5mcmVlX2Jsa3JpbmcoaW5mbywgc3VzcGVuZCk7CisKIAlpZiAoaW5mby0+aXJxKQogCQl1bmJp
bmRfZnJvbV9pcnFoYW5kbGVyKGluZm8tPmlycSwgaW5mbyk7CiAJaW5mby0+ZXZ0Y2huID0gaW5m
by0+aXJxID0gMDsKIAogfQogCi1zdGF0aWMgdm9pZCBibGtpZl9jb21wbGV0aW9uKHN0cnVjdCBi
bGtfc2hhZG93ICpzKQorc3RhdGljIHZvaWQgYmxraWZfY29tcGxldGlvbihzdHJ1Y3QgYmxrZnJv
bnRfaW5mbyAqaW5mbywgdW5zaWduZWQgbG9uZyBpZCkKIHsKKwlzdHJ1Y3QgYmxrX3NoYWRvdyAq
cyA9ICZpbmZvLT5zaGFkb3dbaWRdOwogCWludCBpOwogCS8qIERvIG5vdCBsZXQgQkxLSUZfT1Bf
RElTQ0FSRCBhcyBucl9zZWdtZW50IGlzIGluIHRoZSBzYW1lIHBsYWNlCiAJICogZmxhZy4gKi8K
QEAgLTcxNyw2ICs3OTMsMzkgQEAgc3RhdGljIHZvaWQgYmxraWZfY29tcGxldGlvbihzdHJ1Y3Qg
YmxrX3NoYWRvdyAqcykKIAkJZ250dGFiX2VuZF9mb3JlaWduX2FjY2VzcyhzLT5yZXEudS5ydy5z
ZWdbaV0uZ3JlZiwgMCwgMFVMKTsKIH0KIAorc3RydWN0IGJsa2lmX3Jlc3BvbnNlICpyaW5nX2dl
dF9yZXNwb25zZShzdHJ1Y3QgYmxrZnJvbnRfaW5mbyAqaW5mbykKK3sKKwlyZXR1cm4gUklOR19H
RVRfUkVTUE9OU0UoJmluZm8tPnJpbmcsIGluZm8tPnJpbmcucnNwX2NvbnMpOworfQorUklOR19J
RFggZ2V0X3JzcF9wcm9kKHN0cnVjdCBibGtmcm9udF9pbmZvICppbmZvKQoreworCXJldHVybiBp
bmZvLT5yaW5nLnNyaW5nLT5yc3BfcHJvZDsKK30KK1JJTkdfSURYIGdldF9yc3BfY29ucyhzdHJ1
Y3QgYmxrZnJvbnRfaW5mbyAqaW5mbykKK3sKKwlyZXR1cm4gaW5mby0+cmluZy5yc3BfY29uczsK
K30KK3N0cnVjdCByZXF1ZXN0ICpnZXRfcmVxX2Zyb21fc2hhZG93KHN0cnVjdCBibGtmcm9udF9p
bmZvICppbmZvLAorCQkJCSAgICB1bnNpZ25lZCBsb25nIGlkKQoreworCXJldHVybiBpbmZvLT5z
aGFkb3dbaWRdLnJlcXVlc3Q7Cit9Cit2b2lkIHVwZGF0ZV9yc3BfY29ucyhzdHJ1Y3QgYmxrZnJv
bnRfaW5mbyAqaW5mbykKK3sKKwlpbmZvLT5yaW5nLnJzcF9jb25zKys7Cit9CitSSU5HX0lEWCBn
ZXRfcmVxX3Byb2RfcHZ0KHN0cnVjdCBibGtmcm9udF9pbmZvICppbmZvKQoreworCXJldHVybiBp
bmZvLT5yaW5nLnJlcV9wcm9kX3B2dDsKK30KK3ZvaWQgY2hlY2tfbGVmdF9yZXNwb25zZShzdHJ1
Y3QgYmxrZnJvbnRfaW5mbyAqaW5mbywgaW50ICptb3JlX3RvX2RvKQoreworCVJJTkdfRklOQUxf
Q0hFQ0tfRk9SX1JFU1BPTlNFUygmaW5mby0+cmluZywgKm1vcmVfdG9fZG8pOworfQordm9pZCB1
cGRhdGVfcnNwX2V2ZW50KHN0cnVjdCBibGtmcm9udF9pbmZvICppbmZvLCBpbnQgaSkKK3sKKwlp
bmZvLT5yaW5nLnNyaW5nLT5yc3BfZXZlbnQgPSBpICsgMTsKK30KIHN0YXRpYyBpcnFyZXR1cm5f
dCBibGtpZl9pbnRlcnJ1cHQoaW50IGlycSwgdm9pZCAqZGV2X2lkKQogewogCXN0cnVjdCByZXF1
ZXN0ICpyZXE7CkBAIC03MzQsMjAgKzg0MywyMCBAQCBzdGF0aWMgaXJxcmV0dXJuX3QgYmxraWZf
aW50ZXJydXB0KGludCBpcnEsIHZvaWQgKmRldl9pZCkKIAl9CiAKICBhZ2FpbjoKLQlycCA9IGlu
Zm8tPnJpbmcuc3JpbmctPnJzcF9wcm9kOworCXJwID0gaW5mby0+b3BzLT5nZXRfcnNwX3Byb2Qo
aW5mbyk7CiAJcm1iKCk7IC8qIEVuc3VyZSB3ZSBzZWUgcXVldWVkIHJlc3BvbnNlcyB1cCB0byAn
cnAnLiAqLwogCi0JZm9yIChpID0gaW5mby0+cmluZy5yc3BfY29uczsgaSAhPSBycDsgaSsrKSB7
CisJZm9yIChpID0gaW5mby0+b3BzLT5nZXRfcnNwX2NvbnMoaW5mbyk7IGkgIT0gcnA7IGkrKykg
ewogCQl1bnNpZ25lZCBsb25nIGlkOwogCi0JCWJyZXQgPSBSSU5HX0dFVF9SRVNQT05TRSgmaW5m
by0+cmluZywgaSk7CisJCWJyZXQgPSBpbmZvLT5vcHMtPnJpbmdfZ2V0X3Jlc3BvbnNlKGluZm8p
OwogCQlpZCAgID0gYnJldC0+aWQ7Ci0JCXJlcSAgPSBpbmZvLT5zaGFkb3dbaWRdLnJlcXVlc3Q7
CisJCXJlcSAgPSBpbmZvLT5vcHMtPmdldF9yZXFfZnJvbV9zaGFkb3coaW5mbywgaWQpOwogCiAJ
CWlmIChicmV0LT5vcGVyYXRpb24gIT0gQkxLSUZfT1BfRElTQ0FSRCkKLQkJCWJsa2lmX2NvbXBs
ZXRpb24oJmluZm8tPnNoYWRvd1tpZF0pOworCQkJaW5mby0+b3BzLT5ibGtpZl9jb21wbGV0aW9u
KGluZm8sIGlkKTsKIAotCQlhZGRfaWRfdG9fZnJlZWxpc3QoaW5mbywgaWQpOworCQlpbmZvLT5v
cHMtPmFkZF9pZChpbmZvLCBpZCk7CiAKIAkJZXJyb3IgPSAoYnJldC0+c3RhdHVzID09IEJMS0lG
X1JTUF9PS0FZKSA/IDAgOiAtRUlPOwogCQlzd2l0Y2ggKGJyZXQtPm9wZXJhdGlvbikgewpAQCAt
ODAwLDE3ICs5MDksMTcgQEAgc3RhdGljIGlycXJldHVybl90IGJsa2lmX2ludGVycnVwdChpbnQg
aXJxLCB2b2lkICpkZXZfaWQpCiAJCWRlZmF1bHQ6CiAJCQlCVUcoKTsKIAkJfQorCQlpbmZvLT5v
cHMtPnVwZGF0ZV9yc3BfY29ucyhpbmZvKTsKIAl9CiAKLQlpbmZvLT5yaW5nLnJzcF9jb25zID0g
aTsKLQotCWlmIChpICE9IGluZm8tPnJpbmcucmVxX3Byb2RfcHZ0KSB7CisJcnAgPSBpbmZvLT5v
cHMtPmdldF9yZXFfcHJvZF9wdnQoaW5mbyk7CisJaWYgKGkgIT0gcnApIHsKIAkJaW50IG1vcmVf
dG9fZG87Ci0JCVJJTkdfRklOQUxfQ0hFQ0tfRk9SX1JFU1BPTlNFUygmaW5mby0+cmluZywgbW9y
ZV90b19kbyk7CisJCWluZm8tPm9wcy0+Y2hlY2tfbGVmdF9yZXNwb25zZShpbmZvLCAmbW9yZV90
b19kbyk7CiAJCWlmIChtb3JlX3RvX2RvKQogCQkJZ290byBhZ2FpbjsKIAl9IGVsc2UKLQkJaW5m
by0+cmluZy5zcmluZy0+cnNwX2V2ZW50ID0gaSArIDE7CisJCWluZm8tPm9wcy0+dXBkYXRlX3Jz
cF9ldmVudChpbmZvLCBpKTsKIAogCWtpY2tfcGVuZGluZ19yZXF1ZXN0X3F1ZXVlcyhpbmZvKTsK
IApAQCAtODE5LDYgKzkyOCwyNiBAQCBzdGF0aWMgaXJxcmV0dXJuX3QgYmxraWZfaW50ZXJydXB0
KGludCBpcnEsIHZvaWQgKmRldl9pZCkKIAlyZXR1cm4gSVJRX0hBTkRMRUQ7CiB9CiAKK3N0YXRp
YyBpbnQgaW5pdF9zaGFkb3coc3RydWN0IGJsa2Zyb250X2luZm8gKmluZm8pCit7CisJdW5zaWdu
ZWQgaW50IHJpbmdfc2l6ZTsKKwlpbnQgaTsKKwlpZiAoaW5mby0+cmluZ190eXBlICE9IFJJTkdf
VFlQRV9VTkRFRklORUQpCisJCXJldHVybiAwOworCisJaW5mby0+cmluZ190eXBlID0gUklOR19U
WVBFXzE7CisJcmluZ19zaXplID0gQkxLX1JJTkdfU0laRTsKKwlpbmZvLT5zaGFkb3cgPSBremFs
bG9jKHNpemVvZihzdHJ1Y3QgYmxrX3NoYWRvdykgKiByaW5nX3NpemUsCisJCQkgICAgICAgR0ZQ
X0tFUk5FTCk7CisJaWYgKCFpbmZvLT5zaGFkb3cpCisJCXJldHVybiAtRU5PTUVNOworCisJZm9y
IChpID0gMDsgaSA8IHJpbmdfc2l6ZTsgaSsrKQorCQlpbmZvLT5zaGFkb3dbaV0ucmVxLnUucncu
aWQgPSBpKzE7CisJaW5mby0+c2hhZG93W3Jpbmdfc2l6ZSAtIDFdLnJlcS51LnJ3LmlkID0gMHgw
ZmZmZmZmZjsKKworCXJldHVybiAwOworfQogCiBzdGF0aWMgaW50IHNldHVwX2Jsa3Jpbmcoc3Ry
dWN0IHhlbmJ1c19kZXZpY2UgKmRldiwKIAkJCSBzdHJ1Y3QgYmxrZnJvbnRfaW5mbyAqaW5mbykK
QEAgLTgzNiw4ICs5NjUsNiBAQCBzdGF0aWMgaW50IHNldHVwX2Jsa3Jpbmcoc3RydWN0IHhlbmJ1
c19kZXZpY2UgKmRldiwKIAlTSEFSRURfUklOR19JTklUKHNyaW5nKTsKIAlGUk9OVF9SSU5HX0lO
SVQoJmluZm8tPnJpbmcsIHNyaW5nLCBQQUdFX1NJWkUpOwogCi0Jc2dfaW5pdF90YWJsZShpbmZv
LT5zZywgQkxLSUZfTUFYX1NFR01FTlRTX1BFUl9SRVFVRVNUKTsKLQogCWVyciA9IHhlbmJ1c19n
cmFudF9yaW5nKGRldiwgdmlydF90b19tZm4oaW5mby0+cmluZy5zcmluZykpOwogCWlmIChlcnIg
PCAwKSB7CiAJCWZyZWVfcGFnZSgodW5zaWduZWQgbG9uZylzcmluZyk7CkBAIC04NDYsNiArOTcz
LDE2IEBAIHN0YXRpYyBpbnQgc2V0dXBfYmxrcmluZyhzdHJ1Y3QgeGVuYnVzX2RldmljZSAqZGV2
LAogCX0KIAlpbmZvLT5yaW5nX3JlZiA9IGVycjsKIAorCWluZm8tPnNnID0ga3phbGxvYyhzaXpl
b2Yoc3RydWN0IHNjYXR0ZXJsaXN0KSAqIGluZm8tPm9wcy0+bWF4X3NlZywgR0ZQX0tFUk5FTCk7
CisJaWYgKCFpbmZvLT5zZykgeworCQllcnIgPSAtRU5PTUVNOworCQlnb3RvIGZhaWw7CisJfQor
CXNnX2luaXRfdGFibGUoaW5mby0+c2csIGluZm8tPm9wcy0+bWF4X3NlZyk7CisKKwllcnIgPSBp
bml0X3NoYWRvdyhpbmZvKTsKKwlpZiAoZXJyKQorCQlnb3RvIGZhaWw7CiAJZXJyID0geGVuYnVz
X2FsbG9jX2V2dGNobihkZXYsICZpbmZvLT5ldnRjaG4pOwogCWlmIChlcnIpCiAJCWdvdG8gZmFp
bDsKQEAgLTg2Niw2ICsxMDAzLDIwIEBAIGZhaWw6CiAJcmV0dXJuIGVycjsKIH0KIAorc3RhdGlj
IHZvaWQgZnJlZV9ibGtyaW5nKHN0cnVjdCBibGtmcm9udF9pbmZvICppbmZvLCBpbnQgc3VzcGVu
ZCkKK3sKKwlpZiAoaW5mby0+cmluZ19yZWYgIT0gR1JBTlRfSU5WQUxJRF9SRUYpIHsKKwkJZ250
dGFiX2VuZF9mb3JlaWduX2FjY2VzcyhpbmZvLT5yaW5nX3JlZiwgMCwKKwkJCQkJICh1bnNpZ25l
ZCBsb25nKWluZm8tPnJpbmcuc3JpbmcpOworCQlpbmZvLT5yaW5nX3JlZiA9IEdSQU5UX0lOVkFM
SURfUkVGOworCQlpbmZvLT5yaW5nLnNyaW5nID0gTlVMTDsKKwl9CisKKwlrZnJlZShpbmZvLT5z
Zyk7CisKKwlpZiAoIXN1c3BlbmQpCisJCWtmcmVlKGluZm8tPnNoYWRvdyk7Cit9CiAKIC8qIENv
bW1vbiBjb2RlIHVzZWQgd2hlbiBmaXJzdCBzZXR0aW5nIHVwLCBhbmQgd2hlbiByZXN1bWluZy4g
Ki8KIHN0YXRpYyBpbnQgdGFsa190b19ibGtiYWNrKHN0cnVjdCB4ZW5idXNfZGV2aWNlICpkZXYs
CkBAIC04NzUsOCArMTAyNiwxMSBAQCBzdGF0aWMgaW50IHRhbGtfdG9fYmxrYmFjayhzdHJ1Y3Qg
eGVuYnVzX2RldmljZSAqZGV2LAogCXN0cnVjdCB4ZW5idXNfdHJhbnNhY3Rpb24geGJ0OwogCWlu
dCBlcnI7CiAKKwkvKiByZWdpc3RlciByaW5nIG9wcyAqLworCWluZm8tPm9wcyA9ICZibGtfZnJv
bnRfb3BzOworCiAJLyogQ3JlYXRlIHNoYXJlZCByaW5nLCBhbGxvYyBldmVudCBjaGFubmVsLiAq
LwotCWVyciA9IHNldHVwX2Jsa3JpbmcoZGV2LCBpbmZvKTsKKwllcnIgPSBpbmZvLT5vcHMtPnNl
dHVwX2Jsa3JpbmcoZGV2LCBpbmZvKTsKIAlpZiAoZXJyKQogCQlnb3RvIG91dDsKIApAQCAtOTM3
LDcgKzEwOTEsNyBAQCBhZ2FpbjoKIHN0YXRpYyBpbnQgYmxrZnJvbnRfcHJvYmUoc3RydWN0IHhl
bmJ1c19kZXZpY2UgKmRldiwKIAkJCSAgY29uc3Qgc3RydWN0IHhlbmJ1c19kZXZpY2VfaWQgKmlk
KQogewotCWludCBlcnIsIHZkZXZpY2UsIGk7CisJaW50IGVyciwgdmRldmljZTsKIAlzdHJ1Y3Qg
YmxrZnJvbnRfaW5mbyAqaW5mbzsKIAogCS8qIEZJWE1FOiBVc2UgZHluYW1pYyBkZXZpY2UgaWQg
aWYgdGhpcyBpcyBub3Qgc2V0LiAqLwpAQCAtOTk1LDEwICsxMTQ5LDYgQEAgc3RhdGljIGludCBi
bGtmcm9udF9wcm9iZShzdHJ1Y3QgeGVuYnVzX2RldmljZSAqZGV2LAogCWluZm8tPmNvbm5lY3Rl
ZCA9IEJMS0lGX1NUQVRFX0RJU0NPTk5FQ1RFRDsKIAlJTklUX1dPUksoJmluZm8tPndvcmssIGJs
a2lmX3Jlc3RhcnRfcXVldWUpOwogCi0JZm9yIChpID0gMDsgaSA8IEJMS19SSU5HX1NJWkU7IGkr
KykKLQkJaW5mby0+c2hhZG93W2ldLnJlcS51LnJ3LmlkID0gaSsxOwotCWluZm8tPnNoYWRvd1tC
TEtfUklOR19TSVpFLTFdLnJlcS51LnJ3LmlkID0gMHgwZmZmZmZmZjsKLQogCS8qIEZyb250IGVu
ZCBkaXIgaXMgYSBudW1iZXIsIHdoaWNoIGlzIHVzZWQgYXMgdGhlIGlkLiAqLwogCWluZm8tPmhh
bmRsZSA9IHNpbXBsZV9zdHJ0b3VsKHN0cnJjaHIoZGV2LT5ub2RlbmFtZSwgJy8nKSsxLCBOVUxM
LCAwKTsKIAlkZXZfc2V0X2RydmRhdGEoJmRldi0+ZGV2LCBpbmZvKTsKQEAgLTEwMjIsMTQgKzEx
NzIsMTQgQEAgc3RhdGljIGludCBibGtpZl9yZWNvdmVyKHN0cnVjdCBibGtmcm9udF9pbmZvICpp
bmZvKQogCWludCBqOwogCiAJLyogU3RhZ2UgMTogTWFrZSBhIHNhZmUgY29weSBvZiB0aGUgc2hh
ZG93IHN0YXRlLiAqLwotCWNvcHkgPSBrbWFsbG9jKHNpemVvZihpbmZvLT5zaGFkb3cpLAorCWNv
cHkgPSBrbWFsbG9jKHNpemVvZihzdHJ1Y3QgYmxrX3NoYWRvdykgKiBCTEtfUklOR19TSVpFLAog
CQkgICAgICAgR0ZQX05PSU8gfCBfX0dGUF9SRVBFQVQgfCBfX0dGUF9ISUdIKTsKIAlpZiAoIWNv
cHkpCiAJCXJldHVybiAtRU5PTUVNOwotCW1lbWNweShjb3B5LCBpbmZvLT5zaGFkb3csIHNpemVv
ZihpbmZvLT5zaGFkb3cpKTsKKwltZW1jcHkoY29weSwgaW5mby0+c2hhZG93LCBzaXplb2Yoc3Ry
dWN0IGJsa19zaGFkb3cpICogQkxLX1JJTkdfU0laRSk7CiAKIAkvKiBTdGFnZSAyOiBTZXQgdXAg
ZnJlZSBsaXN0LiAqLwotCW1lbXNldCgmaW5mby0+c2hhZG93LCAwLCBzaXplb2YoaW5mby0+c2hh
ZG93KSk7CisJbWVtc2V0KGluZm8tPnNoYWRvdywgMCwgc2l6ZW9mKHN0cnVjdCBibGtfc2hhZG93
KSAqIEJMS19SSU5HX1NJWkUpOwogCWZvciAoaSA9IDA7IGkgPCBCTEtfUklOR19TSVpFOyBpKysp
CiAJCWluZm8tPnNoYWRvd1tpXS5yZXEudS5ydy5pZCA9IGkrMTsKIAlpbmZvLT5zaGFkb3dfZnJl
ZSA9IGluZm8tPnJpbmcucmVxX3Byb2RfcHZ0OwpAQCAtMTA0MiwxMSArMTE5MiwxMSBAQCBzdGF0
aWMgaW50IGJsa2lmX3JlY292ZXIoc3RydWN0IGJsa2Zyb250X2luZm8gKmluZm8pCiAJCQljb250
aW51ZTsKIAogCQkvKiBHcmFiIGEgcmVxdWVzdCBzbG90IGFuZCBjb3B5IHNoYWRvdyBzdGF0ZSBp
bnRvIGl0LiAqLwotCQlyZXEgPSBSSU5HX0dFVF9SRVFVRVNUKCZpbmZvLT5yaW5nLCBpbmZvLT5y
aW5nLnJlcV9wcm9kX3B2dCk7CisJCXJlcSA9IChzdHJ1Y3QgYmxraWZfcmVxdWVzdCAqKWluZm8t
Pm9wcy0+cmluZ19nZXRfcmVxdWVzdChpbmZvKTsKIAkJKnJlcSA9IGNvcHlbaV0ucmVxOwogCiAJ
CS8qIFdlIGdldCBhIG5ldyByZXF1ZXN0IGlkLCBhbmQgbXVzdCByZXNldCB0aGUgc2hhZG93IHN0
YXRlLiAqLwotCQlyZXEtPnUucncuaWQgPSBnZXRfaWRfZnJvbV9mcmVlbGlzdChpbmZvKTsKKwkJ
cmVxLT51LnJ3LmlkID0gaW5mby0+b3BzLT5nZXRfaWQoaW5mbyk7CiAJCW1lbWNweSgmaW5mby0+
c2hhZG93W3JlcS0+dS5ydy5pZF0sICZjb3B5W2ldLCBzaXplb2YoY29weVtpXSkpOwogCiAJCWlm
IChyZXEtPm9wZXJhdGlvbiAhPSBCTEtJRl9PUF9ESVNDQVJEKSB7CkBAIC0xMTAwLDcgKzEyNTAs
NyBAQCBzdGF0aWMgaW50IGJsa2Zyb250X3Jlc3VtZShzdHJ1Y3QgeGVuYnVzX2RldmljZSAqZGV2
KQogCiAJZXJyID0gdGFsa190b19ibGtiYWNrKGRldiwgaW5mbyk7CiAJaWYgKGluZm8tPmNvbm5l
Y3RlZCA9PSBCTEtJRl9TVEFURV9TVVNQRU5ERUQgJiYgIWVycikKLQkJZXJyID0gYmxraWZfcmVj
b3ZlcihpbmZvKTsKKwkJZXJyID0gaW5mby0+b3BzLT5yZWNvdmVyKGluZm8pOwogCiAJcmV0dXJu
IGVycjsKIH0KQEAgLTEyODAsNyArMTQzMCw2IEBAIHN0YXRpYyB2b2lkIGJsa2Zyb250X2Nvbm5l
Y3Qoc3RydWN0IGJsa2Zyb250X2luZm8gKmluZm8pCiAJaW5mby0+Y29ubmVjdGVkID0gQkxLSUZf
U1RBVEVfQ09OTkVDVEVEOwogCWtpY2tfcGVuZGluZ19yZXF1ZXN0X3F1ZXVlcyhpbmZvKTsKIAlz
cGluX3VubG9ja19pcnEoJmluZm8tPmlvX2xvY2spOwotCiAJYWRkX2Rpc2soaW5mby0+Z2QpOwog
CiAJaW5mby0+aXNfcmVhZHkgPSAxOwpAQCAtMTQ0NCw2ICsxNTkzLDMxIEBAIG91dDoKIAlyZXR1
cm4gMDsKIH0KIAorc3RhdGljIHN0cnVjdCBibGtfZnJvbnRfb3BlcmF0aW9ucyBibGtfZnJvbnRf
b3BzID0geworCS5yaW5nX2dldF9yZXF1ZXN0ID0gcmluZ19nZXRfcmVxdWVzdCwKKwkucmluZ19n
ZXRfcmVzcG9uc2UgPSByaW5nX2dldF9yZXNwb25zZSwKKwkucmluZ19nZXRfc2VnbWVudCA9IHJp
bmdfZ2V0X3NlZ21lbnQsCisJLmdldF9pZCA9IGdldF9pZF9mcm9tX2ZyZWVsaXN0LAorCS5hZGRf
aWQgPSBhZGRfaWRfdG9fZnJlZWxpc3QsCisJLnNhdmVfc2VnX3NoYWRvdyA9IHNhdmVfc2VnX3No
YWRvdywKKwkuc2F2ZV9yZXFfc2hhZG93ID0gc2F2ZV9yZXFfc2hhZG93LAorCS5nZXRfcmVxX2Zy
b21fc2hhZG93ID0gZ2V0X3JlcV9mcm9tX3NoYWRvdywKKwkuZ2V0X3JzcF9wcm9kID0gZ2V0X3Jz
cF9wcm9kLAorCS5nZXRfcnNwX2NvbnMgPSBnZXRfcnNwX2NvbnMsCisJLmdldF9yZXFfcHJvZF9w
dnQgPSBnZXRfcmVxX3Byb2RfcHZ0LAorCS5jaGVja19sZWZ0X3Jlc3BvbnNlID0gY2hlY2tfbGVm
dF9yZXNwb25zZSwKKwkudXBkYXRlX3JzcF9ldmVudCA9IHVwZGF0ZV9yc3BfZXZlbnQsCisJLnVw
ZGF0ZV9yc3BfY29ucyA9IHVwZGF0ZV9yc3BfY29ucywKKwkudXBkYXRlX3JlcV9wcm9kX3B2dCA9
IHVwZGF0ZV9yZXFfcHJvZF9wdnQsCisJLnJpbmdfcHVzaCA9IHJpbmdfcHVzaCwKKwkucmVjb3Zl
ciA9IGJsa2lmX3JlY292ZXIsCisJLnJpbmdfZnVsbCA9IHJpbmdfZnVsbCwKKwkuc2V0dXBfYmxr
cmluZyA9IHNldHVwX2Jsa3JpbmcsCisJLmZyZWVfYmxrcmluZyA9IGZyZWVfYmxrcmluZywKKwku
YmxraWZfY29tcGxldGlvbiA9IGJsa2lmX2NvbXBsZXRpb24sCisJLm1heF9zZWcgPSBCTEtJRl9N
QVhfU0VHTUVOVFNfUEVSX1JFUVVFU1QsCit9OworCiBzdGF0aWMgY29uc3Qgc3RydWN0IGJsb2Nr
X2RldmljZV9vcGVyYXRpb25zIHhsdmJkX2Jsb2NrX2ZvcHMgPQogewogCS5vd25lciA9IFRISVNf
TU9EVUxFLAo=

--_002_A21691DE07B84740B5F0B81466D5148A23BCF1EESHSMSX102ccrcor_
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--_002_A21691DE07B84740B5F0B81466D5148A23BCF1EESHSMSX102ccrcor_--


From xen-devel-bounces@lists.xen.org Thu Aug 16 10:24:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 10:24:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1xFQ-0005Zf-HZ; Thu, 16 Aug 2012 10:24:16 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xudong.hao@intel.com>) id 1T1xFO-0005ZK-3y
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 10:24:14 +0000
Received: from [85.158.138.51:6135] by server-6.bemta-3.messagelabs.com id
	E3/7B-32013-C4ACC205; Thu, 16 Aug 2012 10:24:12 +0000
X-Env-Sender: xudong.hao@intel.com
X-Msg-Ref: server-9.tower-174.messagelabs.com!1345112651!28641197!1
X-Originating-IP: [143.182.124.21]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMjEgPT4gMzE0OTgw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9477 invoked from network); 16 Aug 2012 10:24:11 -0000
Received: from mga03.intel.com (HELO mga03.intel.com) (143.182.124.21)
	by server-9.tower-174.messagelabs.com with SMTP;
	16 Aug 2012 10:24:11 -0000
Received: from azsmga001.ch.intel.com ([10.2.17.19])
	by azsmga101.ch.intel.com with ESMTP; 16 Aug 2012 03:24:10 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.77,778,1336374000"; d="scan'208";a="181734433"
Received: from fmsmsx104.amr.corp.intel.com ([10.19.9.35])
	by azsmga001.ch.intel.com with ESMTP; 16 Aug 2012 03:24:10 -0700
Received: from FMSMSX109.amr.corp.intel.com (10.19.9.28) by
	FMSMSX104.amr.corp.intel.com (10.19.9.35) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Thu, 16 Aug 2012 03:24:10 -0700
Received: from shsmsx101.ccr.corp.intel.com (10.239.4.153) by
	fmsmsx109.amr.corp.intel.com (10.19.9.28) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Thu, 16 Aug 2012 03:24:09 -0700
Received: from shsmsx102.ccr.corp.intel.com ([169.254.2.196]) by
	SHSMSX101.ccr.corp.intel.com ([169.254.1.82]) with mapi id
	14.01.0355.002; Thu, 16 Aug 2012 18:24:08 +0800
From: "Hao, Xudong" <xudong.hao@intel.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Thread-Topic: [Xen-devel] [PATCH 1/3] xen/tools: Add 64 bits big bar support
Thread-Index: AQHNerKKEOT7HWjBmke6+J0K3zDOkJdaI7IAgAGbXeD///Q/AIAAiVzA
Date: Thu, 16 Aug 2012 10:24:06 +0000
Message-ID: <403610A45A2B5242BD291EDAE8B37D300FE8AA89@SHSMSX102.ccr.corp.intel.com>
References: <1345013682-20618-1-git-send-email-xudong.hao@intel.com>
	<alpine.DEB.2.02.1208151118270.2278@kaball.uk.xensource.com>
	<403610A45A2B5242BD291EDAE8B37D300FE8A788@SHSMSX102.ccr.corp.intel.com>
	<alpine.DEB.2.02.1208161104290.2278@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1208161104290.2278@kaball.uk.xensource.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: Anthony PERARD <anthony@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>, "Zhang,
	Xiantao" <xiantao.zhang@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 1/3] xen/tools: Add 64 bits big bar support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


> -----Original Message-----
> From: Stefano Stabellini [mailto:stefano.stabellini@eu.citrix.com]
> Sent: Thursday, August 16, 2012 6:11 PM
> To: Hao, Xudong
> Cc: Stefano Stabellini; xen-devel@lists.xen.org; Ian Jackson; Zhang, Xiantao;
> Anthony PERARD
> Subject: RE: [Xen-devel] [PATCH 1/3] xen/tools: Add 64 bits big bar support
> 
> On Thu, 16 Aug 2012, Hao, Xudong wrote:
> > > -----Original Message-----
> > > From: Stefano Stabellini [mailto:stefano.stabellini@eu.citrix.com]
> > > Sent: Wednesday, August 15, 2012 6:21 PM
> > > To: Hao, Xudong
> > > Cc: xen-devel@lists.xen.org; Ian Jackson; Zhang, Xiantao
> > > Subject: Re: [Xen-devel] [PATCH 1/3] xen/tools: Add 64 bits big bar support
> > >
> > > On Wed, 15 Aug 2012, Xudong Hao wrote:
> > > > Currently it is assumed PCI device BAR access < 4G memory. If there is
> such a
> > > > device whose BAR size is larger than 4G, it must access > 4G memory
> > > address.
> > > > This patch enable the 64bits big BAR support on hvmloader.
> > > >
> > > > Signed-off-by: Xiantao Zhang <xiantao.zhang@intel.com>
> > > > Signed-off-by: Xudong Hao <xudong.hao@intel.com>
> > >
> > > It is very good to see that this problem has been solved!
> > >
> > > Considering that at this point it is too late for the 4.2 release cycle,
> > > it might be worth spinning a version of this patches for SeaBIOS and
> > > upstream QEMU, that now supports PCI passthrough.
> > >
> >
> > Hi, Stefano
> >
> > Does Xen already switch to SeaBIOS and upstream QEMU? I saw SeaBIOS
> does not update for 5 months.
> 
> Yes, they are available but not the default yet for HVM guests.
> In order to enable upstream QEMU you need to pass:
> 
> device_model_version='qemu-xen'
> 
> in the VM config file.
> 
> 
> > You mean upstream QEMU is this tree git://git.qemu.org/qemu.git ? I heard
> somebody said PCI device assignment does not work for this tree, but device
> hot-add/remove works.
> 
> qemu-upstream-unstable, the upstream QEMU based tree used with Xen 4.2,
> (git://xenbits.xensource.com/qemu-upstream-unstable.git) is based on
> QEMU 1.0 and doesn't support PCI passthrough.
> 
> However upstream QEMU (git://git.qemu.org/qemu.git) does, and I am going
> to rebase on it early when the 4.3 development cycle opens.  So it is
> probably a good idea for you to try out upstream QEMU now.
> We have a wiki page on how to build and run upstream QEMU on
> xen-unstable:
> 
> http://wiki.xen.org/wiki/QEMU_Upstream

I got it; I'll rework my patch for upstream QEMU.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 10:25:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 10:25:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1xGX-0005gv-0B; Thu, 16 Aug 2012 10:25:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ronghui.duan@intel.com>) id 1T1xGV-0005gm-Hr
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 10:25:23 +0000
Received: from [85.158.143.35:6436] by server-2.bemta-4.messagelabs.com id
	AF/33-31966-29ACC205; Thu, 16 Aug 2012 10:25:22 +0000
X-Env-Sender: ronghui.duan@intel.com
X-Msg-Ref: server-10.tower-21.messagelabs.com!1345112604!10805881!1
X-Originating-IP: [143.182.124.21]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMjEgPT4gMzE0OTgw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28755 invoked from network); 16 Aug 2012 10:23:24 -0000
Received: from mga03.intel.com (HELO mga03.intel.com) (143.182.124.21)
	by server-10.tower-21.messagelabs.com with SMTP;
	16 Aug 2012 10:23:24 -0000
Received: from azsmga001.ch.intel.com ([10.2.17.19])
	by azsmga101.ch.intel.com with ESMTP; 16 Aug 2012 03:23:23 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.77,778,1336374000"; d="scan'208";a="181734191"
Received: from fmsmsx106.amr.corp.intel.com ([10.19.9.37])
	by azsmga001.ch.intel.com with ESMTP; 16 Aug 2012 03:22:59 -0700
Received: from fmsmsx101.amr.corp.intel.com (10.19.9.52) by
	FMSMSX106.amr.corp.intel.com (10.19.9.37) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Thu, 16 Aug 2012 03:22:59 -0700
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	FMSMSX101.amr.corp.intel.com (10.19.9.52) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Thu, 16 Aug 2012 03:22:58 -0700
Received: from shsmsx102.ccr.corp.intel.com ([169.254.2.196]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.232]) with mapi id
	14.01.0355.002; Thu, 16 Aug 2012 18:22:57 +0800
From: "Duan, Ronghui" <ronghui.duan@intel.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Thread-Topic: [Xen-devel] [RFC v1 0/5] VBD: enlarge max segment per request
	in blkfront
Thread-Index: Ac17mRnxOUx7HczFSN2/QYn7Hm+BlQ==
Date: Thu, 16 Aug 2012 10:22:56 +0000
Message-ID: <A21691DE07B84740B5F0B81466D5148A23BCF1DF@SHSMSX102.ccr.corp.intel.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "Ian.Jackson@eu.citrix.com" <Ian.Jackson@eu.citrix.com>,
	"Stefano.Stabellini@eu.citrix.com" <Stefano.Stabellini@eu.citrix.com>
Subject: [Xen-devel] [RFC v1 0/5] VBD: enlarge max segment per request in
 blkfront
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi, list.
The maximum number of segments per request in the VBD queue is 11, while bare-metal Linux and other VMMs default to 128. The limit seems to come from the small shared ring between frontend and backend, so I wondered whether we could put the segment data into a separate ring and let each request draw from it dynamically as needed. Here is a prototype; it has not been tested much, but it works on a 64-bit Linux 3.4.6 kernel. In sequential tests I see CPU% drop to about one third of the original. It does add some overhead, which increases CPU utilization slightly for random I/O.

Here is a short summary of the data, using only 1K random reads and 64K sequential reads in direct mode, against a physical SSD exported through blkback. CPU% is taken from xentop.
(W = with the patch, W/O = without)

Read 1K random    IOPS       Dom0 CPU%   DomU CPU%
  W               52005.9    86.6        71
  W/O             52123.1    85.8        66.9

Read 64K seq      BW MB/s    Dom0 CPU%   DomU CPU%
  W               250        27.1        10.6
  W/O               250        62.6        31.1


The patch would be simple if we only used the new method, but we have to consider that a user may run a new kernel as backend with an older one as frontend, and we also need to handle live migration. That makes the change rather large...
[RFC v1 1/5]
	To prepare for the new segment ring, refactor the original code, splitting out the methods related to ring operations.
[RFC v1 2/5]
	Add segment ring support in blkfront. Most of the code deals with suspend/resume.
[RFC v1 3/5]
	Likewise, refactor the original code in blkback.
[RFC v1 4/5]
	To support different ring types in blkback, make the pending_req list per disk.
[RFC v1 5/5]
	Add segment ring support in blkback.
-ronghui



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 10:26:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 10:26:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1xH8-0005lN-F6; Thu, 16 Aug 2012 10:26:02 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ronghui.duan@intel.com>) id 1T1xH6-0005l4-9V
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 10:26:01 +0000
Received: from [85.158.139.83:59035] by server-3.bemta-5.messagelabs.com id
	95/A2-27237-7BACC205; Thu, 16 Aug 2012 10:25:59 +0000
X-Env-Sender: ronghui.duan@intel.com
X-Msg-Ref: server-11.tower-182.messagelabs.com!1345112755!21213241!1
X-Originating-IP: [143.182.124.37]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMzcgPT4gMjY5MDQ5\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7857 invoked from network); 16 Aug 2012 10:25:56 -0000
Received: from mga14.intel.com (HELO mga14.intel.com) (143.182.124.37)
	by server-11.tower-182.messagelabs.com with SMTP;
	16 Aug 2012 10:25:56 -0000
Received: from azsmga001.ch.intel.com ([10.2.17.19])
	by azsmga102.ch.intel.com with ESMTP; 16 Aug 2012 03:25:55 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.77,778,1336374000"; d="scan'208";a="181734822"
Received: from fmsmsx103.amr.corp.intel.com ([10.19.9.34])
	by azsmga001.ch.intel.com with ESMTP; 16 Aug 2012 03:25:54 -0700
Received: from fmsmsx101.amr.corp.intel.com (10.19.9.52) by
	FMSMSX103.amr.corp.intel.com (10.19.9.34) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Thu, 16 Aug 2012 03:25:54 -0700
Received: from shsmsx101.ccr.corp.intel.com (10.239.4.153) by
	FMSMSX101.amr.corp.intel.com (10.19.9.52) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Thu, 16 Aug 2012 03:25:53 -0700
Received: from shsmsx102.ccr.corp.intel.com ([169.254.2.196]) by
	SHSMSX101.ccr.corp.intel.com ([169.254.1.82]) with mapi id
	14.01.0355.002; Thu, 16 Aug 2012 18:25:51 +0800
From: "Duan, Ronghui" <ronghui.duan@intel.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Thread-Topic: [Xen-devel] [RFC v1 2/5] VBD: enlarge max segment per request
	in blkfront
Thread-Index: Ac17mXn4BmTF/5rIR/qU6lefhXPSaA==
Date: Thu, 16 Aug 2012 10:25:50 +0000
Message-ID: <A21691DE07B84740B5F0B81466D5148A23BCF213@SHSMSX102.ccr.corp.intel.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: yes
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
Content-Type: multipart/mixed;
	boundary="_002_A21691DE07B84740B5F0B81466D5148A23BCF213SHSMSX102ccrcor_"
MIME-Version: 1.0
Cc: "Ian.Jackson@eu.citrix.com" <Ian.Jackson@eu.citrix.com>,
	"Stefano.Stabellini@eu.citrix.com" <Stefano.Stabellini@eu.citrix.com>
Subject: [Xen-devel] [RFC v1 2/5] VBD: enlarge max segment per request in
 blkfront
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--_002_A21691DE07B84740B5F0B81466D5148A23BCF213SHSMSX102ccrcor_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable


add segring support in blkfront
Signed-off-by: Ronghui Duan <ronghui.duan@intel.com>

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index a263faf..b9f383d 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -76,10 +76,23 @@ struct blk_shadow {
 	unsigned long frame[BLKIF_MAX_SEGMENTS_PER_REQUEST];
 };
 
+struct blk_req_shadow {
+	struct blkif_request_header req;
+	struct request *request;
+};
+
+struct blk_seg_shadow {
+	uint64_t id;
+	struct blkif_request_segment req;
+	unsigned long frame;
+};
+
 static DEFINE_MUTEX(blkfront_mutex);
 static const struct block_device_operations xlvbd_block_fops;
 
 #define BLK_RING_SIZE __CONST_RING_SIZE(blkif, PAGE_SIZE)
+#define BLK_REQ_RING_SIZE __CONST_RING_SIZE(blkif_request, PAGE_SIZE)
+#define BLK_SEG_RING_SIZE __CONST_RING_SIZE(blkif_segment, PAGE_SIZE)
 
 /*
  * We have one of these per vbd, whether ide, scsi or 'other'.  They
@@ -96,22 +109,30 @@ struct blkfront_info
 	blkif_vdev_t handle;
 	enum blkif_state connected;
 	int ring_ref;
+	int reqring_ref;
+	int segring_ref;
 	struct blkif_front_ring ring;
+	struct blkif_request_front_ring reqring;
+	struct blkif_segment_front_ring segring;
 	struct scatterlist *sg;
 	unsigned int evtchn, irq;
 	struct request_queue *rq;
 	struct work_struct work;
 	struct gnttab_free_callback callback;
 	struct blk_shadow *shadow;
+	struct blk_req_shadow *req_shadow;
+	struct blk_seg_shadow *seg_shadow;
 	struct blk_front_operations *ops;
 	enum blkif_ring_type ring_type;
 	unsigned long shadow_free;
+	unsigned long seg_shadow_free;
 	unsigned int feature_flush;
 	unsigned int flush_op;
 	unsigned int feature_discard:1;
 	unsigned int feature_secdiscard:1;
 	unsigned int discard_granularity;
 	unsigned int discard_alignment;
+	unsigned long last_id;
 	int is_ready;
 };
 
@@ -124,7 +145,7 @@ static struct blk_front_operations {
 	unsigned long (*get_id) (struct blkfront_info *info);
 	void (*add_id) (struct blkfront_info *info, unsigned long id);
 	void (*save_seg_shadow) (struct blkfront_info *info, unsigned long mfn,
-				 unsigned long id, int i);
+				 unsigned long id, int i, struct blkif_request_segment *ring_seg);
 	void (*save_req_shadow) (struct blkfront_info *info,
 				 struct request *req, unsigned long id);
 	struct request *(*get_req_from_shadow)(struct blkfront_info *info,
@@ -136,14 +157,16 @@ static struct blk_front_operations {
 	void (*update_rsp_event) (struct blkfront_info *info, int i);
 	void (*update_rsp_cons) (struct blkfront_info *info);
 	void (*update_req_prod_pvt) (struct blkfront_info *info);
+	void (*update_segment_rsp_cons) (struct blkfront_info *info, unsigned long id);
 	void (*ring_push) (struct blkfront_info *info, int *notify);
 	int (*recover) (struct blkfront_info *info);
 	int (*ring_full) (struct blkfront_info *info);
+	int (*segring_full) (struct blkfront_info *info, unsigned int nr_segments);
 	int (*setup_blkring) (struct xenbus_device *dev, struct blkfront_info *info);
 	void (*free_blkring) (struct blkfront_info *info, int suspend);
 	void (*blkif_completion) (struct blkfront_info *info, unsigned long id);
 	unsigned int max_seg;
-} blk_front_ops;
+} blk_front_ops, blk_front_ops_v2;
 
 static unsigned int nr_minors;
 static unsigned long *minors;
@@ -179,6 +202,24 @@ static unsigned long get_id_from_freelist(struct blkfront_info *info)
 	return free;
 }
 
+static unsigned long get_id_from_freelist_v2(struct blkfront_info *info)
+{
+	unsigned long free = info->shadow_free;
+	BUG_ON(free >= BLK_REQ_RING_SIZE);
+	info->shadow_free = info->req_shadow[free].req.u.rw.id;
+	info->req_shadow[free].req.u.rw.id = 0x0fffffee; /* debug */
+	return free;
+}
+
+static unsigned long get_seg_shadow_id(struct blkfront_info *info)
+{
+	unsigned long free = info->seg_shadow_free;
+	BUG_ON(free >= BLK_SEG_RING_SIZE);
+	info->seg_shadow_free = info->seg_shadow[free].id;
+	info->seg_shadow[free].id = 0x0fffffee; /* debug */
+	return free;
+}
+
 void add_id_to_freelist(struct blkfront_info *info,
 			       unsigned long id)
 {
@@ -187,6 +228,21 @@ void add_id_to_freelist(struct blkfront_info *info,
 	info->shadow_free = id;
 }
 
+static void add_id_to_freelist_v2(struct blkfront_info *info,
+				  unsigned long id)
+{
+	info->req_shadow[id].req.u.rw.id  = info->shadow_free;
+	info->req_shadow[id].request = NULL;
+	info->shadow_free = id;
+}
+
+static void free_seg_shadow_id(struct blkfront_info *info,
+				  unsigned long id)
+{
+	info->seg_shadow[id].id  = info->seg_shadow_free;
+	info->seg_shadow_free = id;
+}
+
 static int xlbd_reserve_minors(unsigned int minor, unsigned int nr)
 {
 	unsigned int end = minor + nr;
@@ -299,6 +355,14 @@ void *ring_get_request(struct blkfront_info *info)
 	return (void *)RING_GET_REQUEST(&info->ring, info->ring.req_prod_pvt);
 }
 
+void *ring_get_request_v2(struct blkfront_info *info)
+{
+	struct blkif_request_header *ring_req;
+	ring_req = RING_GET_REQUEST(&info->reqring,
+				info->reqring.req_prod_pvt);
+	return (void *)ring_req;
+}
+
 struct blkif_request_segment *ring_get_segment(struct blkfront_info *info, int i)
 {
 	struct blkif_request *ring_req =
@@ -306,12 +370,34 @@ struct blkif_request_segment *ring_get_segment(struct blkfront_info *info, int i
 	return &ring_req->u.rw.seg[i];
 }
 
-void save_seg_shadow(struct blkfront_info *info,
-		      unsigned long mfn, unsigned long id, int i)
+struct blkif_request_segment *ring_get_segment_v2(struct blkfront_info *info, int i)
+{
+	return RING_GET_REQUEST(&info->segring, info->segring.req_prod_pvt++);
+}
+
+void save_seg_shadow(struct blkfront_info *info, unsigned long mfn,
+		     unsigned long id, int i, struct blkif_request_segment *ring_seg)
 {
 	info->shadow[id].frame[i] = mfn_to_pfn(mfn);
 }
 
+void save_seg_shadow_v2(struct blkfront_info *info, unsigned long mfn,
+			unsigned long id, int i, struct blkif_request_segment *ring_seg)
+{
+	struct blkif_request_header *ring_req;
+	unsigned long seg_id = get_seg_shadow_id(info);
+
+	ring_req = (struct blkif_request_header *)info->ops->ring_get_request(info);
+	if (i == 0)
+		ring_req->u.rw.seg_id = seg_id;
+	else
+		info->seg_shadow[info->last_id].id = seg_id;
+	info->seg_shadow[seg_id].frame = mfn_to_pfn(mfn);
+	memcpy(&(info->seg_shadow[seg_id].req), ring_seg,
+	       sizeof(struct blkif_request_segment));
+	info->last_id = seg_id;
+}
+
 void save_req_shadow(struct blkfront_info *info,
 		      struct request *req, unsigned long id)
 {
@@ -321,10 +407,34 @@ void save_req_shadow(struct blkfront_info *info,
 	info->shadow[id].request = req;
 }
 
+void save_req_shadow_v2(struct blkfront_info *info,
+		      struct request *req, unsigned long id)
+{
+	struct blkif_request_header *ring_req =
+			(struct blkif_request_header *)info->ops->ring_get_request(info);
+	info->req_shadow[id].req = *ring_req;
+	info->req_shadow[id].request = req;
+}
+
 void update_req_prod_pvt(struct blkfront_info *info)
 {
 	info->ring.req_prod_pvt++;
 }
+
+void update_req_prod_pvt_v2(struct blkfront_info *info)
+{
+	info->reqring.req_prod_pvt++;
+}
+
+int segring_full(struct blkfront_info *info, unsigned int nr_segments)
+{
+	return 0;
+}
+
+int segring_full_v2(struct blkfront_info *info, unsigned int nr_segments)
+{
+	return nr_segments > RING_FREE_REQUESTS(&info->segring);
+}
 /*
  * Generate a Xen blkfront IO request from a blk layer request.  Reads
  * and writes are handled as expected.
@@ -347,19 +457,18 @@ static int blkif_queue_request(struct request *req)
 		return 1;
 
 	if (gnttab_alloc_grant_references(
-		BLKIF_MAX_SEGMENTS_PER_REQUEST, &gref_head) < 0) {
+		info->ops->max_seg, &gref_head) < 0) {
 		gnttab_request_free_callback(
 			&info->callback,
 			blkif_restart_queue_callback,
 			info,
-			BLKIF_MAX_SEGMENTS_PER_REQUEST);
+			info->ops->max_seg);
 		return 1;
 	}
 
 	/* Fill out a communications ring structure. */
 	ring_req = (struct blkif_request *)info->ops->ring_get_request(info);
 	id = info->ops->get_id(info);
-	//info->shadow[id].request = req;
 
 	ring_req->u.rw.id = id;
 	ring_req->u.rw.sector_number = (blkif_sector_t)blk_rq_pos(req);
@@ -392,6 +501,9 @@ static int blkif_queue_request(struct request *req)
 							   info->sg);
 		BUG_ON(ring_req->u.rw.nr_segments > info->ops->max_seg);
 
+		if (info->ops->segring_full(info, ring_req->u.rw.nr_segments))
+			goto wait;
+
 		for_each_sg(info->sg, sg, ring_req->u.rw.nr_segments, i) {
 			buffer_mfn = pfn_to_mfn(page_to_pfn(sg_page(sg)));
 			fsect = sg->offset >> 9;
@@ -411,7 +523,7 @@ static int blkif_queue_request(struct request *req)
 					.gref       = ref,
 					.first_sect = fsect,
 					.last_sect  = lsect };
-			info->ops->save_seg_shadow(info, buffer_mfn, id, i);
+			info->ops->save_seg_shadow(info, buffer_mfn, id, i, ring_seg);
 		}
 	}
 
@@ -423,6 +535,11 @@ static int blkif_queue_request(struct request *req)
 	gnttab_free_grant_references(gref_head);
 
 	return 0;
+wait:
+	gnttab_free_grant_references(gref_head);
+	pr_debug("Not enough segments!\n");
+	info->ops->add_id(info, id);
+	return 1;
 }
 
 void ring_push(struct blkfront_info *info, int *notify)
@@ -430,6 +547,13 @@ void ring_push(struct blkfront_info *info, int *notify)
 	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&info->ring, *notify);
 }
 
+void ring_push_v2(struct blkfront_info *info, int *notify)
+{
+	RING_PUSH_REQUESTS(&info->segring);
+	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&info->reqring, *notify);
+}
+
+
 static inline void flush_requests(struct blkfront_info *info)
 {
 	int notify;
@@ -440,6 +564,16 @@ static inline void flush_requests(struct blkfront_info *info)
 		notify_remote_via_irq(info->irq);
 }
 
+static int ring_free_v2(struct blkfront_info *info)
+{
+	return (!RING_FULL(&info->reqring) &&
+		RING_FREE_REQUESTS(&info->segring) > RING_SIZE(&info->segring)/3);
+}
+static int ring_full_v2(struct blkfront_info *info)
+{
+	return (RING_FULL(&info->reqring) || RING_FULL(&info->segring));
+}
+
 /*
  * do_blkif_request
  *  read a block; request is in a request queue
@@ -490,6 +624,17 @@ wait:
 		flush_requests(info);
 }
 
+static void update_blk_queue(struct blkfront_info *info)
+{
+	struct request_queue *q = info->rq;
+
+	blk_queue_max_segments(q, info->ops->max_seg);
+	blk_queue_max_hw_sectors(q, queue_max_segments(q) *
+				 queue_max_segment_size(q) /
+				 queue_logical_block_size(q));
+	return;
+}
+
 static int xlvbd_init_blk_queue(struct gendisk *gd, u16 sector_size)
 {
 	struct request_queue *rq;
@@ -740,7 +885,7 @@ static void xlvbd_release_gendisk(struct blkfront_info *info)
 
 static void kick_pending_request_queues(struct blkfront_info *info)
 {
-	if (!ring_full(info)) {
+	if (!info->ops->ring_full(info)) {
 		/* Re-enable calldowns. */
 		blk_start_queue(info->rq);
 		/* Kick things off immediately. */
@@ -793,39 +938,115 @@ static void blkif_completion(struct blkfront_info *info, unsigned long id)
 		gnttab_end_foreign_access(s->req.u.rw.seg[i].gref, 0, 0UL);
 }
 
+static void blkif_completion_v2(struct blkfront_info *info, unsigned long id)
+{
+	int i;
+	/* Do not use this for BLKIF_OP_DISCARD: nr_segments shares its
+	 * location with the discard flag. */
+	unsigned short nr = info->req_shadow[id].req.u.rw.nr_segments;
+	unsigned long shadow_id, free_id;
+
+	shadow_id = info->req_shadow[id].req.u.rw.seg_id;
+	for (i = 0; i < nr; i++) {
+		gnttab_end_foreign_access(info->seg_shadow[shadow_id].req.gref, 0, 0UL);
+		free_id = shadow_id;
+		shadow_id = info->seg_shadow[shadow_id].id;
+		free_seg_shadow_id(info, free_id);
+	}
+}
+
 struct blkif_response *ring_get_response(struct blkfront_info *info)
 {
 	return RING_GET_RESPONSE(&info->ring, info->ring.rsp_cons);
 }
+
+struct blkif_response *ring_get_response_v2(struct blkfront_info *info)
+{
+	return RING_GET_RESPONSE(&info->reqring, info->reqring.rsp_cons);
+}
+
 RING_IDX get_rsp_prod(struct blkfront_info *info)
 {
 	return info->ring.sring->rsp_prod;
 }
+
+RING_IDX get_rsp_prod_v2(struct blkfront_info *info)
+{
+	return info->reqring.sring->rsp_prod;
+}
+
 RING_IDX get_rsp_cons(struct blkfront_info *info)
 {
 	return info->ring.rsp_cons;
 }
+
+RING_IDX get_rsp_cons_v2(struct blkfront_info *info)
+{
+	return info->reqring.rsp_cons;
+}
+
 struct request *get_req_from_shadow(struct blkfront_info *info,
 				    unsigned long id)
 {
 	return info->shadow[id].request;
 }
+
+struct request *get_req_from_shadow_v2(struct blkfront_info *info,
+				    unsigned long id)
+{
+	return info->req_shadow[id].request;
+}
+
 void update_rsp_cons(struct blkfront_info *info)
 {
 	info->ring.rsp_cons++;
 }
+
+void update_rsp_cons_v2(struct blkfront_info *info)
+{
+	info->reqring.rsp_cons++;
+}
+
 RING_IDX get_req_prod_pvt(struct blkfront_info *info)
 {
 	return info->ring.req_prod_pvt;
 }
+
+RING_IDX get_req_prod_pvt_v2(struct blkfront_info *info)
+{
+	return info->reqring.req_prod_pvt;
+}
+
 void check_left_response(struct blkfront_info *info, int *more_to_do)
 {
 	RING_FINAL_CHECK_FOR_RESPONSES(&info->ring, *more_to_do);
 }
+
+void check_left_response_v2(struct blkfront_info *info, int *more_to_do)
+{
+	RING_FINAL_CHECK_FOR_RESPONSES(&info->reqring, *more_to_do);
+}
+
 void update_rsp_event(struct blkfront_info *info, int i)
 {
 	info->ring.sring->rsp_event = i + 1;
 }
+
+void update_rsp_event_v2(struct blkfront_info *info, int i)
+{
+	info->reqring.sring->rsp_event = i + 1;
+}
+
+void update_segment_rsp_cons(struct blkfront_info *info, unsigned long id)
+{
+	return;
+}
+
+void update_segment_rsp_cons_v2(struct blkfront_info *info, unsigned long id)
+{
+	info->segring.rsp_cons += info->req_shadow[id].req.u.rw.nr_segments;
+	return;
+}
 static irqreturn_t blkif_interrupt(int irq, void *dev_id)
 {
 	struct request *req;
@@ -903,8 +1124,8 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
 			if (unlikely(bret->status != BLKIF_RSP_OKAY))
 				dev_dbg(&info->xbdev->dev, "Bad return from blkdev data "
 					"request: %x\n", bret->status);
-
 			__blk_end_request_all(req, error);
+			info->ops->update_segment_rsp_cons(info, id);
 			break;
 		default:
 			BUG();
@@ -949,6 +1170,43 @@ static int init_shadow(struct blkfront_info *info)
 	return 0;
 }
 
+static int init_shadow_v2(struct blkfront_info *info)
+{
+	unsigned int ring_size;
+	int i;
+
+	if (info->ring_type != RING_TYPE_UNDEFINED)
+		return 0;
+
+	info->ring_type = RING_TYPE_2;
+
+	ring_size = BLK_REQ_RING_SIZE;
+	info->req_shadow = kzalloc(sizeof(struct blk_req_shadow) * ring_size,
+				   GFP_KERNEL);
+	if (!info->req_shadow)
+		return -ENOMEM;
+
+	for (i = 0; i < ring_size; i++)
+		info->req_shadow[i].req.u.rw.id = i + 1;
+	info->req_shadow[ring_size - 1].req.u.rw.id = 0x0fffffff;
+
+	ring_size = BLK_SEG_RING_SIZE;
+
+	info->seg_shadow = kzalloc(sizeof(struct blk_seg_shadow) * ring_size,
+				   GFP_KERNEL);
+	if (!info->seg_shadow) {
+		kfree(info->req_shadow);
+		return -ENOMEM;
+	}
+
+	for (i = 0; i < ring_size; i++) {
+		info->seg_shadow[i].id = i + 1;
+	}
+	info->seg_shadow[ring_size - 1].id = 0x0fffffff;
+
+	return 0;
+}
+
 static int setup_blkring(struct xenbus_device *dev,
 			 struct blkfront_info *info)
 {
@@ -1003,6 +1261,84 @@ fail:
 	return err;
 }
 
+static int setup_blkring_v2(struct xenbus_device *dev,
+			    struct blkfront_info *info)
+{
+	struct blkif_request_sring *sring;
+	struct blkif_segment_sring *seg_sring;
+	int err;
+
+	info->reqring_ref = GRANT_INVALID_REF;
+
+	sring = (struct blkif_request_sring *)__get_free_page(GFP_NOIO | __GFP_HIGH);
+	if (!sring) {
+		xenbus_dev_fatal(dev, -ENOMEM, "allocating shared ring");
+		return -ENOMEM;
+	}
+	SHARED_RING_INIT(sring);
+	FRONT_RING_INIT(&info->reqring, sring, PAGE_SIZE);
+
+	err = xenbus_grant_ring(dev, virt_to_mfn(info->reqring.sring));
+	if (err < 0) {
+		free_page((unsigned long)sring);
+		info->reqring.sring = NULL;
+		goto fail;
+	}
+
+	info->reqring_ref = err;
+
+	info->segring_ref = GRANT_INVALID_REF;
+
+	seg_sring = (struct blkif_segment_sring *)__get_free_page(GFP_NOIO | __GFP_HIGH);
+	if (!seg_sring) {
+		xenbus_dev_fatal(dev, -ENOMEM, "allocating shared ring");
+		err = -ENOMEM;
+		goto fail;
+	}
+	SHARED_RING_INIT(seg_sring);
+	FRONT_RING_INIT(&info->segring, seg_sring, PAGE_SIZE);
+
+	err = xenbus_grant_ring(dev, virt_to_mfn(info->segring.sring));
+	if (err < 0) {
+		free_page((unsigned long)seg_sring);
+		info->segring.sring = NULL;
+		goto fail;
+	}
+
+	info->segring_ref = err;
+
+	info->sg = kzalloc(sizeof(struct scatterlist) * info->ops->max_seg,
+			   GFP_KERNEL);
+	if (!info->sg) {
+		err = -ENOMEM;
+		goto fail;
+	}
+	sg_init_table(info->sg, info->ops->max_seg);
+
+	err = init_shadow_v2(info);
+	if (err)
+		goto fail;
+
+	err = xenbus_alloc_evtchn(dev, &info->evtchn);
+	if (err)
+		goto fail;
+
+	err = bind_evtchn_to_irqhandler(info->evtchn,
+					blkif_interrupt,
+					IRQF_SAMPLE_RANDOM, "blkif", info);
+	if (err <= 0) {
+		xenbus_dev_fatal(dev, err,
+				 "bind_evtchn_to_irqhandler failed");
+		goto fail;
+	}
+	info->irq = err;
+
+	return 0;
+fail:
+	blkif_free(info, 0);
+	return err;
+}
+
 static void free_blkring(struct blkfront_info *info, int suspend)
 {
 	if (info->ring_ref != GRANT_INVALID_REF) {
@@ -1018,6 +1354,32 @@ static void free_blkring(struct blkfront_info *info, int suspend)
 		kfree(info->shadow);
 }
 
+static void free_blkring_v2(struct blkfront_info *info, int suspend)
+{
+	if (info->reqring_ref != GRANT_INVALID_REF) {
+		gnttab_end_foreign_access(info->reqring_ref, 0,
+					  (unsigned long)info->reqring.sring);
+		info->reqring_ref = GRANT_INVALID_REF;
+		info->reqring.sring = NULL;
+	}
+
+	if (info->segring_ref != GRANT_INVALID_REF) {
+		gnttab_end_foreign_access(info->segring_ref, 0,
+					  (unsigned long)info->segring.sring);
+		info->segring_ref = GRANT_INVALID_REF;
+		info->segring.sring = NULL;
+	}
+
+	kfree(info->sg);
+
+	if (!suspend) {
+		kfree(info->req_shadow);
+		kfree(info->seg_shadow);
+	}
+
+}
+
+
 /* Common code used when first setting up, and when resuming. */
 static int talk_to_blkback(struct xenbus_device *dev,
 			   struct blkfront_info *info)
@@ -1025,9 +1387,17 @@ static int talk_to_blkback(struct xenbus_device *dev,
 	const char *message = NULL;
 	struct xenbus_transaction xbt;
 	int err;
+	unsigned int type;
 
 	/* register ring ops */
-	info->ops = &blk_front_ops;
+	err = xenbus_scanf(XBT_NIL, dev->otherend, "blkback-ring-type", "%u",
+			   &type);
+	if (err != 1)
+		type = 1;
+	if (type == 2)
+		info->ops = &blk_front_ops_v2;
+	else
+		info->ops = &blk_front_ops;
 
 	/* Create shared ring, alloc event channel. */
 	err = info->ops->setup_blkring(dev, info);
@@ -1040,13 +1410,6 @@ again:
 		xenbus_dev_fatal(dev, err, "starting transaction");
 		goto destroy_blkring;
 	}
-
-	err = xenbus_printf(xbt, dev->nodename,
-			    "ring-ref", "%u", info->ring_ref);
-	if (err) {
-		message = "writing ring-ref";
-		goto abort_transaction;
-	}
 	err = xenbus_printf(xbt, dev->nodename,
 			    "event-channel", "%u", info->evtchn);
 	if (err) {
@@ -1059,7 +1422,40 @@ again:
 		message = "writing protocol";
 		goto abort_transaction;
 	}
-
+	if (type == 1) {
+		err = xenbus_printf(xbt, dev->nodename,
+				    "ring-ref", "%u", info->ring_ref);
+		if (err) {
+			message = "writing ring-ref";
+			goto abort_transaction;
+		}
+		err = xenbus_printf(xbt, dev->nodename, "blkfront-ring-type",
+				    "%u", type);
+		if (err) {
+			message = "writing blkfront ring type";
+			goto abort_transaction;
+		}
+	}
+	if (type == 2) {
+		err = xenbus_printf(xbt, dev->nodename,
+				    "reqring-ref", "%u", info->reqring_ref);
+		if (err) {
+			message = "writing reqring-ref";
+			goto abort_transaction;
+		}
+		err = xenbus_printf(xbt, dev->nodename,
+				    "segring-ref", "%u", info->segring_ref);
+		if (err) {
+			message = "writing segring-ref";
+			goto abort_transaction;
+		}
+		err = xenbus_printf(xbt, dev->nodename, "blkfront-ring-type",
+				    "%u", type);
+		if (err) {
+			message = "writing blkfront ring type";
+			goto abort_transaction;
+		}
+	}
 	err = xenbus_transaction_end(xbt, 0);
 	if (err) {
 		if (err == -EAGAIN)
@@ -1164,7 +1560,7 @@ static int blkfront_probe(struct xenbus_device *dev,
 }
 
 
-static int blkif_recover(struct blkfront_info *info)
+static int recover_from_v1_to_v1(struct blkfront_info *info)
 {
 	int i;
 	struct blkif_request *req;
@@ -1233,6 +1629,372 @@ static int blkif_recover(struct blkfront_info *info)
 	return 0;
 }
 
+/* migrate from a V2 type ring to a V1 type ring */
+static int recover_from_v2_to_v1(struct blkfront_info *info)
+{
+	struct blk_req_shadow *copy;
+	struct blk_seg_shadow *seg_copy;
+	struct request *req;
+	struct blkif_request *new_req;
+	int i, j, err;
+	unsigned int req_rs;
+	struct bio *biolist = NULL, *biotail = NULL, *bio;
+	unsigned long index;
+	unsigned long flags;
+
+	pr_info("Warning: migrating to an older backend; some I/O may fail\n");
+
+	/* Stage 1: Init the new shadow state. */
+	info->ring_type = RING_TYPE_UNDEFINED;
+	err = init_shadow(info);
+	if (err)
+		return err;
+
+	req_rs = BLK_REQ_RING_SIZE;
+
+	/* Stage 2: Set up free list. */
+	info->shadow_free = info->ring.req_prod_pvt;
+
+	/* Stage 3: Find pending requests and requeue them. */
+	for (i = 0; i < req_rs; i++) {
+		req = info->req_shadow[i].request;
+		/* Not in use? */
+		if (!req)
+			continue;
+
+		if (ring_full(info))
+			goto out;
+
+		copy = &info->req_shadow[i];
+
+		/* We get a new request, reset the blkif request and shadow state. */
+		new_req = RING_GET_REQUEST(&info->ring, info->ring.req_prod_pvt);
+		if (copy->req.operation == BLKIF_OP_DISCARD) {
+			new_req->operation = BLKIF_OP_DISCARD;
+			new_req->u.discard = copy->req.u.discard;
+			new_req->u.discard.id = get_id_from_freelist(info);
+			info->shadow[new_req->u.discard.id].request = req;
+		} else {
+			if (copy->req.u.rw.nr_segments > BLKIF_MAX_SEGMENTS_PER_REQUEST)
+				continue;
+
+			new_req->u.rw.id = get_id_from_freelist(info);
+			info->shadow[new_req->u.rw.id].request = req;
+			new_req->operation = copy->req.operation;
+			new_req->u.rw.nr_segments = copy->req.u.rw.nr_segments;
+			new_req->u.rw.handle = copy->req.u.rw.handle;
+			new_req->u.rw.sector_number = copy->req.u.rw.sector_number;
+			index = copy->req.u.rw.seg_id;
+			for (j = 0; j < new_req->u.rw.nr_segments; j++) {
+				seg_copy = &info->seg_shadow[index];
+				new_req->u.rw.seg[j].gref = seg_copy->req.gref;
+				new_req->u.rw.seg[j].first_sect = seg_copy->req.first_sect;
+				new_req->u.rw.seg[j].last_sect = seg_copy->req.last_sect;
+				info->shadow[new_req->u.rw.id].frame[j] = seg_copy->frame;
+				gnttab_grant_foreign_access_ref(
+					new_req->u.rw.seg[j].gref,
+					info->xbdev->otherend_id,
+					pfn_to_mfn(info->shadow[new_req->u.rw.id].frame[j]),
+					rq_data_dir(info->shadow[new_req->u.rw.id].request));
+				index = info->seg_shadow[index].id;
+			}
+		}
+		info->shadow[new_req->u.rw.id].req = *new_req;
+		info->ring.req_prod_pvt++;
+		info->req_shadow[i].request = NULL;
+	}
+out:
+	xenbus_switch_state(info->xbdev, XenbusStateConnected);
+
+	spin_lock_irqsave(&info->io_lock, flags);
+
+	/* cancel the requests and resubmit their bios */
+	for (i = 0; i < req_rs; i++) {
+		req = info->req_shadow[i].request;
+		if (!req)
+			continue;
+
+		blkif_completion_v2(info, i);
+
+		if (biolist == NULL)
+			biolist = req->bio;
+		else
+			biotail->bi_next = req->bio;
+		biotail = req->biotail;
+		req->bio = NULL;
+		__blk_put_request(info->rq, req);
+	}
+
+	while ((req = blk_peek_request(info->rq)) != NULL) {
+
+		blk_start_request(req);
+
+		if (biolist == NULL)
+			biolist = req->bio;
+		else
+			biotail->bi_next = req->bio;
+		biotail = req->biotail;
+		req->bio = NULL;
+		__blk_put_request(info->rq, req);
+	}
+
+	/* Now safe for us to use the shared ring */
+	info->connected = BLKIF_STATE_CONNECTED;
+
+	/* need to update the queue limit settings */
+	update_blk_queue(info);
+
+	/* Send off requeued requests */
+	flush_requests(info);
+
+	/* Kick any other new requests queued since we resumed */
+	kick_pending_request_queues(info);
+
+	spin_unlock_irqrestore(&info->io_lock, flags);
+
+	/* free the original shadow */
+	kfree(info->seg_shadow);
+	kfree(info->req_shadow);
+
+	while (biolist) {
+		bio = biolist;
+		biolist = biolist->bi_next;
+		bio->bi_next = NULL;
+		submit_bio(bio->bi_rw, bio);
+	}
+
+	return 0;
+}
+
+static int blkif_recover(struct blkfront_info *info)
+{
+	int rc;
+
+	if (info->ring_type == RING_TYPE_1)
+		rc = recover_from_v1_to_v1(info);
+	else if (info->ring_type == RING_TYPE_2)
+		rc = recover_from_v2_to_v1(info);
+	else
+		rc = -EPERM;
+	return rc;
+}
+
+static int recover_from_v1_to_v2(struct blkfront_info *info)
+{
+	int i, err;
+	struct blkif_request_header *req;
+	struct blkif_request_segment *segring_req;
+	struct blk_shadow *copy;
+	int j;
+	unsigned long seg_id, last_id = 0x0fffffff;
+
+	/* Stage 1: Init the new shadow. */
+	info->ring_type = RING_TYPE_UNDEFINED;
+	err = init_shadow_v2(info);
+	if (err)
+		return err;
+
+	/* Stage 2: Set up free list. */
+	info->shadow_free = info->reqring.req_prod_pvt;
+	info->seg_shadow_free = info->segring.req_prod_pvt;
+
+	/* Stage 3: Find pending requests and requeue them. */
+	for (i = 0; i < BLK_RING_SIZE; i++) {
+		copy = &info->shadow[i];
+		/* Not in use? */
+		if (!copy->request)
+			continue;
+
+		/* We get a new request, reset the blkif request and shadow state. */
+		req = RING_GET_REQUEST(&info->reqring, info->reqring.req_prod_pvt);
+
+		if (copy->req.operation == BLKIF_OP_DISCARD) {
+			req->operation = BLKIF_OP_DISCARD;
+			req->u.discard = copy->req.u.discard;
+			req->u.discard.id = get_id_from_freelist_v2(info);
+			info->req_shadow[req->u.discard.id].request = copy->request;
+			info->req_shadow[req->u.discard.id].req = *req;
+		} else {
+			req->u.rw.id = get_id_from_freelist_v2(info);
+			req->operation = copy->req.operation;
+			req->u.rw.nr_segments = copy->req.u.rw.nr_segments;
+			req->u.rw.handle = copy->req.u.rw.handle;
+			req->u.rw.sector_number = copy->req.u.rw.sector_number;
+			for (j = 0; j < req->u.rw.nr_segments; j++) {
+				seg_id = get_seg_shadow_id(info);
+				if (j == 0)
+					req->u.rw.seg_id = seg_id;
+				else
+					info->seg_shadow[last_id].id = seg_id;
+				segring_req = RING_GET_REQUEST(&info->segring, info->segring.req_prod_pvt);
+				segring_req->gref = copy->req.u.rw.seg[j].gref;
+				segring_req->first_sect = copy->req.u.rw.seg[j].first_sect;
+				segring_req->last_sect = copy->req.u.rw.seg[j].last_sect;
+				info->seg_shadow[seg_id].req = *segring_req;
+				info->seg_shadow[seg_id].frame = copy->frame[j];
+				info->segring.req_prod_pvt++;
+				gnttab_grant_foreign_access_ref(
+					segring_req->gref,
+					info->xbdev->otherend_id,
+					pfn_to_mfn(copy->frame[j]),
+					rq_data_dir(copy->request));
+				last_id = seg_id;
+			}
+			info->req_shadow[req->u.rw.id].req = *req;
+			info->req_shadow[req->u.rw.id].request = copy->request;
+		}
+
+		info->reqring.req_prod_pvt++;
+	}
+
+	/* need to update the queue limit settings */
+	update_blk_queue(info);
+
+	/* free the original shadow */
+	kfree(info->shadow);
+
+	xenbus_switch_state(info->xbdev, XenbusStateConnected);
+
+	spin_lock_irq(&info->io_lock);
+
+	/* Now safe for us to use the shared ring */
+	info->connected = BLKIF_STATE_CONNECTED;
+
+	/* Send off requeued requests */
+	flush_requests(info);
+
+	/* Kick any other new requests queued since we resumed */
+	kick_pending_request_queues(info);
+
+	spin_unlock_irq(&info->io_lock);
+
+	return 0;
+}
+
+static int recover_from_v2_to_v2(struct blkfront_info *info)
+{
+	int i;
+	struct blkif_request_header *req;
+	struct blkif_request_segment *segring_req;
+	struct blk_req_shadow *copy;
+	struct blk_seg_shadow *seg_copy;
+	unsigned long index = 0x0fffffff, seg_id, last_id = 0x0fffffff;
+	int j;
+	unsigned int req_rs, seg_rs;
+	unsigned long flags;
+
+	req_rs = BLK_REQ_RING_SIZE;
+	seg_rs = BLK_SEG_RING_SIZE;
+
+	/* Stage 1: Make a safe copy of the shadow state. */
+	copy = kmalloc(sizeof(struct blk_req_shadow) * req_rs,
+		       GFP_NOIO | __GFP_REPEAT | __GFP_HIGH);
+	if (!copy)
+		return -ENOMEM;
+
+	seg_copy = kmalloc(sizeof(struct blk_seg_shadow) * seg_rs,
+			   GFP_NOIO | __GFP_REPEAT | __GFP_HIGH);
+	if (!seg_copy) {
+		kfree(copy);
+		return -ENOMEM;
+	}
+
+	memcpy(copy, info->req_shadow, sizeof(struct blk_req_shadow) * req_rs);
+	memcpy(seg_copy, info->seg_shadow,
+	       sizeof(struct blk_seg_shadow) * seg_rs);
+
+	/* Stage 2: Set up free list. */
+	for (i = 0; i < req_rs; i++)
+		info->req_shadow[i].req.u.rw.id = i + 1;
+	info->req_shadow[req_rs - 1].req.u.rw.id = 0x0fffffff;
+
+	for (i = 0; i < seg_rs; i++)
+		info->seg_shadow[i].id = i + 1;
+	info->seg_shadow[seg_rs - 1].id = 0x0fffffff;
+
+	info->shadow_free = info->reqring.req_prod_pvt;
+	info->seg_shadow_free = info->segring.req_prod_pvt;
+
+	/* Stage 3: Find pending requests and requeue them. */
+	for (i = 0; i < req_rs; i++) {
+		/* Not in use? */
+		if (!copy[i].request)
+			continue;
+
+		req = RING_GET_REQUEST(&info->reqring, info->reqring.req_prod_pvt);
+		*req = copy[i].req;
+
+		req->u.rw.id = get_id_from_freelist_v2(info);
+		memcpy(&info->req_shadow[req->u.rw.id], &copy[i], sizeof(copy[i]));
+
+		if (req->operation != BLKIF_OP_DISCARD) {
+			for (j = 0; j < req->u.rw.nr_segments; j++) {
+				seg_id = get_seg_shadow_id(info);
+				if (j == 0)
+					index = req->u.rw.seg_id;
+				else
+					index = seg_copy[index].id;
+				gnttab_grant_foreign_access_ref(
+					seg_copy[index].req.gref,
+					info->xbdev->otherend_id,
+					pfn_to_mfn(seg_copy[index].frame),
+					rq_data_dir(info->req_shadow[req->u.rw.id].request));
+				segring_req = RING_GET_REQUEST(&info->segring, info->segring.req_prod_pvt);
+				memcpy(segring_req, &(seg_copy[index].req),
+				       sizeof(struct blkif_request_segment));
+				if (j == 0)
+					req->u.rw.seg_id = seg_id;
+				else
+					info->seg_shadow[last_id].id = seg_id;
+
+				memcpy(&info->seg_shadow[seg_id],
+				       &seg_copy[index], sizeof(struct blk_seg_shadow));
+				info->segring.req_prod_pvt++;
+				last_id = seg_id;
+			}
+		}
+		info->req_shadow[req->u.rw.id].req = *req;
+
+		info->reqring.req_prod_pvt++;
+	}
+
+	kfree(seg_copy);
+	kfree(copy);
+
+	xenbus_switch_state(info->xbdev, XenbusStateConnected);
+
+	spin_lock_irqsave(&info->io_lock, flags);
+
+	/* Now safe for us to use the shared ring */
+	info->connected = BLKIF_STATE_CONNECTED;
+
+	/* Send off requeued requests */
+	flush_requests(info);
+
+	/* Kick any other new requests queued since we resumed */
+	kick_pending_request_queues(info);
+
+	spin_unlock_irqrestore(&info->io_lock, flags);
+
+	return 0;
+}
+
+static int blkif_recover_v2(struct blkfront_info *info)
+{
+	int rc;
+
+	if (info->ring_type == RING_TYPE_1)
+		rc = recover_from_v1_to_v2(info);
+	else if (info->ring_type == RING_TYPE_2)
+		rc = recover_from_v2_to_v2(info);
+	else
+		rc = -EPERM;
+	return rc;
+}
 /**
 * We are reconnecting to the backend, due to a suspend/resume, or a backend
  * driver restart.  We tear down our blkif structure and recreate it, but
@@ -1609,15 +2371,44 @@ static struct blk_front_operations blk_front_ops = {
 	.update_rsp_event = update_rsp_event,
 	.update_rsp_cons = update_rsp_cons,
 	.update_req_prod_pvt = update_req_prod_pvt,
+	.update_segment_rsp_cons = update_segment_rsp_cons,
 	.ring_push = ring_push,
 	.recover = blkif_recover,
 	.ring_full = ring_full,
+	.segring_full = segring_full,
 	.setup_blkring = setup_blkring,
 	.free_blkring = free_blkring,
 	.blkif_completion = blkif_completion,
 	.max_seg = BLKIF_MAX_SEGMENTS_PER_REQUEST,
 };
 
+static struct blk_front_operations blk_front_ops_v2 = {
+	.ring_get_request = ring_get_request_v2,
+	.ring_get_response = ring_get_response_v2,
+	.ring_get_segment = ring_get_segment_v2,
+	.get_id = get_id_from_freelist_v2,
+	.add_id = add_id_to_freelist_v2,
+	.save_seg_shadow = save_seg_shadow_v2,
+	.save_req_shadow = save_req_shadow_v2,
+	.get_req_from_shadow = get_req_from_shadow_v2,
+	.get_rsp_prod = get_rsp_prod_v2,
+	.get_rsp_cons = get_rsp_cons_v2,
+	.get_req_prod_pvt = get_req_prod_pvt_v2,
+	.check_left_response = check_left_response_v2,
+	.update_rsp_event = update_rsp_event_v2,
+	.update_rsp_cons = update_rsp_cons_v2,
+	.update_req_prod_pvt = update_req_prod_pvt_v2,
+	.update_segment_rsp_cons = update_segment_rsp_cons_v2,
+	.ring_push = ring_push_v2,
+	.recover = blkif_recover_v2,
+	.ring_full = ring_full_v2,
+	.segring_full = segring_full_v2,
+	.setup_blkring = setup_blkring_v2,
+	.free_blkring = free_blkring_v2,
+	.blkif_completion = blkif_completion_v2,
+	.max_seg = BLKIF_MAX_SEGMENTS_PER_REQUEST_V2,
+};
+
 static const struct block_device_operations xlvbd_block_fops =
 {
 	.owner = THIS_MODULE,
diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index f100ce2..a5a98b0 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -475,7 +475,7 @@ void gnttab_end_foreign_access(grant_ref_t ref, int readonly,
 		/* XXX This needs to be fixed so that the ref and page are
 		   placed on a list to be freed up later. */
 		printk(KERN_WARNING
-		       "WARNING: leaking g.e. and page still in use!\n");
+		       "WARNING: ref %u leaking g.e. and page still in use!\n", ref);
 	}
 }
 EXPORT_SYMBOL_GPL(gnttab_end_foreign_access);
diff --git a/include/xen/interface/io/blkif.h b/include/xen/interface/io/blkif.h
index ee338bf..763489a 100644
--- a/include/xen/interface/io/blkif.h
+++ b/include/xen/interface/io/blkif.h
@@ -108,6 +108,7 @@ typedef uint64_t blkif_sector_t;
  * NB. This could be 12 if the ring indexes weren't stored in the same page.
  */
 #define BLKIF_MAX_SEGMENTS_PER_REQUEST 11
+#define BLKIF_MAX_SEGMENTS_PER_REQUEST_V2 128
 
 struct blkif_request_rw {
 	uint8_t        nr_segments;  /* number of segments                   */
@@ -125,6 +126,17 @@ struct blkif_request_rw {
 	} seg[BLKIF_MAX_SEGMENTS_PER_REQUEST];
 } __attribute__((__packed__));
 
+struct blkif_request_rw_header {
+	uint8_t        nr_segments;  /* number of segments                   */
+	blkif_vdev_t   handle;       /* only for read/write requests         */
+#ifdef CONFIG_X86_64
+	uint32_t       _pad1;	     /* offsetof(blkif_request,u.rw.id) == 8 */
+#endif
+	uint64_t       id;           /* private guest value, echoed in resp  */
+	blkif_sector_t sector_number;/* start sector idx on disk (r/w only)  */
+	uint64_t       seg_id;	     /* segment id in the segment shadow     */
+} __attribute__((__packed__));
+
 struct blkif_request_discard {
 	uint8_t        flag;         /* BLKIF_DISCARD_SECURE or zero.        */
 #define BLKIF_DISCARD_SECURE (1<<0)  /* ignored if discard-secure=0          */
@@ -135,7 +147,6 @@ struct blkif_request_discard {
 	uint64_t       id;           /* private guest value, echoed in resp  */
 	blkif_sector_t sector_number;
 	uint64_t       nr_sectors;
-	uint8_t        _pad3;
 } __attribute__((__packed__));
 
 struct blkif_request {
@@ -146,12 +157,24 @@ struct blkif_request {
 	} u;
 } __attribute__((__packed__));
 
+struct blkif_request_header {
+	uint8_t        operation;    /* BLKIF_OP_???                         */
+	union {
+		struct blkif_request_rw_header rw;
+		struct blkif_request_discard discard;
+	} u;
+} __attribute__((__packed__));
+
 struct blkif_response {
 	uint64_t        id;              /* copied from request */
 	uint8_t         operation;       /* copied from request */
 	int16_t         status;          /* BLKIF_RSP_???       */
 };
 
+struct blkif_response_segment {
+	char		dummy;
+} __attribute__((__packed__));
+
 /*
  * STATUS RETURN CODES.
  */
@@ -167,6 +190,8 @@ struct blkif_response {
  */
 
 DEFINE_RING_TYPES(blkif, struct blkif_request, struct blkif_response);
+DEFINE_RING_TYPES(blkif_request, struct blkif_request_header, struct blkif_response);
+DEFINE_RING_TYPES(blkif_segment, struct blkif_request_segment, struct blkif_response_segment);
 
 #define VDISK_CDROM        0x1
 #define VDISK_REMOVABLE    0x2
-ronghui
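For readers unfamiliar with the id-chained shadow arrays that the recovery paths above rebuild in their "Stage 2: Set up free list" steps, here is a minimal user-space sketch of the technique: each unused slot stores the index of the next free slot in its id field, and the sentinel 0x0fffffff terminates the chain. All names in the sketch (shadow_pool, pool_get_id, and so on) are illustrative stand-ins, not symbols from the driver.

```c
#include <assert.h>
#include <stddef.h>

#define POOL_SIZE 8
#define LIST_END  0x0fffffffUL

struct shadow_slot {
	unsigned long next_free;	/* reuses the request-id field while free */
	void *request;			/* NULL while the slot is unused */
};

struct shadow_pool {
	struct shadow_slot slot[POOL_SIZE];
	unsigned long free_head;
};

static void pool_init(struct shadow_pool *p)
{
	int i;

	for (i = 0; i < POOL_SIZE; i++) {
		p->slot[i].next_free = i + 1;
		p->slot[i].request = NULL;
	}
	p->slot[POOL_SIZE - 1].next_free = LIST_END;	/* terminate the chain */
	p->free_head = 0;
}

/* Pop a free slot id, as get_id_from_freelist()/get_seg_shadow_id() do. */
static unsigned long pool_get_id(struct shadow_pool *p)
{
	unsigned long free = p->free_head;

	assert(free < POOL_SIZE);		/* stands in for the driver's BUG_ON() */
	p->free_head = p->slot[free].next_free;
	p->slot[free].next_free = 0x0fffffee;	/* debug poison, as in the patch */
	return free;
}

/* Push a slot back, as add_id_to_freelist()/free_seg_shadow_id() do. */
static void pool_add_id(struct shadow_pool *p, unsigned long id)
{
	p->slot[id].next_free = p->free_head;
	p->slot[id].request = NULL;
	p->free_head = id;
}
```

After a migration the recovery functions rebuild exactly this kind of chain before requeueing pending requests, so that ids handed to the new backend always point at valid shadow slots.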



[Attachment: vbd_enlarge_segments_02.patch, 35892 bytes, base64-encoded;
 created Thu, 16 Aug 2012 10:19:11 GMT, modified Thu, 16 Aug 2012 17:34:34 GMT.
 Encoded content omitted.]
ID0gaSArIDE7Cit9CisKK3ZvaWQgdXBkYXRlX3NlZ21lbnRfcnNwX2NvbnMoc3RydWN0IGJsa2Zy
b250X2luZm8gKmluZm8sIHVuc2lnbmVkIGxvbmcgaWQpCit7CisJcmV0dXJuOworfQorCit2b2lk
IHVwZGF0ZV9zZWdtZW50X3JzcF9jb25zX3YyKHN0cnVjdCBibGtmcm9udF9pbmZvICppbmZvLCB1
bnNpZ25lZCBsb25nIGlkKQoreworCWluZm8tPnNlZ3JpbmcucnNwX2NvbnMgKz0gaW5mby0+cmVx
X3NoYWRvd1tpZF0ucmVxLnUucncubnJfc2VnbWVudHM7CisJcmV0dXJuOworfQogc3RhdGljIGly
cXJldHVybl90IGJsa2lmX2ludGVycnVwdChpbnQgaXJxLCB2b2lkICpkZXZfaWQpCiB7CiAJc3Ry
dWN0IHJlcXVlc3QgKnJlcTsKQEAgLTkwMyw4ICsxMTI0LDggQEAgc3RhdGljIGlycXJldHVybl90
IGJsa2lmX2ludGVycnVwdChpbnQgaXJxLCB2b2lkICpkZXZfaWQpCiAJCQlpZiAodW5saWtlbHko
YnJldC0+c3RhdHVzICE9IEJMS0lGX1JTUF9PS0FZKSkKIAkJCQlkZXZfZGJnKCZpbmZvLT54YmRl
di0+ZGV2LCAiQmFkIHJldHVybiBmcm9tIGJsa2RldiBkYXRhICIKIAkJCQkJInJlcXVlc3Q6ICV4
XG4iLCBicmV0LT5zdGF0dXMpOwotCiAJCQlfX2Jsa19lbmRfcmVxdWVzdF9hbGwocmVxLCBlcnJv
cik7CisJCQlpbmZvLT5vcHMtPnVwZGF0ZV9zZWdtZW50X3JzcF9jb25zKGluZm8sIGlkKTsKIAkJ
CWJyZWFrOwogCQlkZWZhdWx0OgogCQkJQlVHKCk7CkBAIC05NDksNiArMTE3MCw0MyBAQCBzdGF0
aWMgaW50IGluaXRfc2hhZG93KHN0cnVjdCBibGtmcm9udF9pbmZvICppbmZvKQogCXJldHVybiAw
OwogfQogCitzdGF0aWMgaW50IGluaXRfc2hhZG93X3YyKHN0cnVjdCBibGtmcm9udF9pbmZvICpp
bmZvKQoreworCXVuc2lnbmVkIGludCByaW5nX3NpemU7CisJaW50IGk7CisKKwlpZiAoaW5mby0+
cmluZ190eXBlICE9IFJJTkdfVFlQRV9VTkRFRklORUQpCisJCXJldHVybiAwOworCisJaW5mby0+
cmluZ190eXBlID0gUklOR19UWVBFXzI7CisKKwlyaW5nX3NpemUgPSBCTEtfUkVRX1JJTkdfU0la
RTsKKwlpbmZvLT5yZXFfc2hhZG93ID0ga3phbGxvYyhzaXplb2Yoc3RydWN0IGJsa19yZXFfc2hh
ZG93KSAqIHJpbmdfc2l6ZSwKKwkJCQkgICBHRlBfS0VSTkVMKTsKKwlpZiAoIWluZm8tPnJlcV9z
aGFkb3cpCisJCXJldHVybiAtRU5PTUVNOworCisJZm9yIChpID0gMDsgaSA8IHJpbmdfc2l6ZTsg
aSsrKQorCQlpbmZvLT5yZXFfc2hhZG93W2ldLnJlcS51LnJ3LmlkID0gaSsxOworCWluZm8tPnJl
cV9zaGFkb3dbcmluZ19zaXplIC0gMV0ucmVxLnUucncuaWQgPSAweDBmZmZmZmZmOworCisJcmlu
Z19zaXplID0gQkxLX1NFR19SSU5HX1NJWkU7CisKKwlpbmZvLT5zZWdfc2hhZG93ID0ga3phbGxv
YyhzaXplb2Yoc3RydWN0IGJsa19zZWdfc2hhZG93KSAqIHJpbmdfc2l6ZSwKKwkJCQkgICBHRlBf
S0VSTkVMKTsJCQorCWlmICghaW5mby0+c2VnX3NoYWRvdykgeworCQlrZnJlZShpbmZvLT5yZXFf
c2hhZG93KTsKKwkJcmV0dXJuIC1FTk9NRU07CisJfQorCisJZm9yIChpID0gMDsgaSA8IHJpbmdf
c2l6ZTsgaSsrKSB7CisJCWluZm8tPnNlZ19zaGFkb3dbaV0uaWQgPSBpKzE7CisJfQorCWluZm8t
PnNlZ19zaGFkb3dbcmluZ19zaXplIC0gMV0uaWQgPSAweDBmZmZmZmZmOworCisJcmV0dXJuIDA7
Cit9CisKIHN0YXRpYyBpbnQgc2V0dXBfYmxrcmluZyhzdHJ1Y3QgeGVuYnVzX2RldmljZSAqZGV2
LAogCQkJIHN0cnVjdCBibGtmcm9udF9pbmZvICppbmZvKQogewpAQCAtMTAwMyw2ICsxMjYxLDg0
IEBAIGZhaWw6CiAJcmV0dXJuIGVycjsKIH0KIAorc3RhdGljIGludCBzZXR1cF9ibGtyaW5nX3Yy
KHN0cnVjdCB4ZW5idXNfZGV2aWNlICpkZXYsCisJCQkgICAgc3RydWN0IGJsa2Zyb250X2luZm8g
KmluZm8pCit7CisJc3RydWN0IGJsa2lmX3JlcXVlc3Rfc3JpbmcgKnNyaW5nOworCXN0cnVjdCBi
bGtpZl9zZWdtZW50X3NyaW5nICpzZWdfc3Jpbmc7CisJaW50IGVycjsKKworCWluZm8tPnJlcXJp
bmdfcmVmID0gR1JBTlRfSU5WQUxJRF9SRUY7CisKKwlzcmluZyA9IChzdHJ1Y3QgYmxraWZfcmVx
dWVzdF9zcmluZyAqKV9fZ2V0X2ZyZWVfcGFnZShHRlBfTk9JTyB8IF9fR0ZQX0hJR0gpOworCWlm
ICghc3JpbmcpIHsKKwkJeGVuYnVzX2Rldl9mYXRhbChkZXYsIC1FTk9NRU0sICJhbGxvY2F0aW5n
IHNoYXJlZCByaW5nIik7CisJCXJldHVybiAtRU5PTUVNOworCX0KKwlTSEFSRURfUklOR19JTklU
KHNyaW5nKTsKKwlGUk9OVF9SSU5HX0lOSVQoJmluZm8tPnJlcXJpbmcsIHNyaW5nLCBQQUdFX1NJ
WkUpOworCisJZXJyID0geGVuYnVzX2dyYW50X3JpbmcoZGV2LCB2aXJ0X3RvX21mbihpbmZvLT5y
ZXFyaW5nLnNyaW5nKSk7CisJaWYgKGVyciA8IDApIHsKKwkJZnJlZV9wYWdlKCh1bnNpZ25lZCBs
b25nKXNyaW5nKTsKKwkJaW5mby0+cmVxcmluZy5zcmluZyA9IE5VTEw7CisJCWdvdG8gZmFpbDsK
Kwl9CisKKwlpbmZvLT5yZXFyaW5nX3JlZiA9IGVycjsKKworCWluZm8tPnNlZ3JpbmdfcmVmID0g
R1JBTlRfSU5WQUxJRF9SRUY7CisKKwlzZWdfc3JpbmcgPSAoc3RydWN0IGJsa2lmX3NlZ21lbnRf
c3JpbmcgKilfX2dldF9mcmVlX3BhZ2UoR0ZQX05PSU8gfCBfX0dGUF9ISUdIKTsKKwlpZiAoIXNl
Z19zcmluZykgeworCQl4ZW5idXNfZGV2X2ZhdGFsKGRldiwgLUVOT01FTSwgImFsbG9jYXRpbmcg
c2hhcmVkIHJpbmciKTsKKwkJZXJyID0gLUVOT01FTTsKKwkJZ290byBmYWlsOworCX0KKwlTSEFS
RURfUklOR19JTklUKHNlZ19zcmluZyk7CisJRlJPTlRfUklOR19JTklUKCZpbmZvLT5zZWdyaW5n
LCBzZWdfc3JpbmcsIFBBR0VfU0laRSk7CisKKwllcnIgPSB4ZW5idXNfZ3JhbnRfcmluZyhkZXYs
IHZpcnRfdG9fbWZuKGluZm8tPnNlZ3Jpbmcuc3JpbmcpKTsKKwlpZiAoZXJyIDwgMCkgeworCQlm
cmVlX3BhZ2UoKHVuc2lnbmVkIGxvbmcpc2VnX3NyaW5nKTsKKwkJaW5mby0+c2VncmluZy5zcmlu
ZyA9IE5VTEw7CisJCWdvdG8gZmFpbDsKKwl9CisKKwlpbmZvLT5zZWdyaW5nX3JlZiA9IGVycjsK
KworCWluZm8tPnNnID0ga3phbGxvYyhzaXplb2Yoc3RydWN0IHNjYXR0ZXJsaXN0KSAqIGluZm8t
Pm9wcy0+bWF4X3NlZywKKwkJCSAgIEdGUF9LRVJORUwpOworCWlmICghaW5mby0+c2cpIHsKKwkJ
ZXJyID0gLUVOT01FTTsKKwkJZ290byBmYWlsOworCX0KKwlzZ19pbml0X3RhYmxlKGluZm8tPnNn
LCBpbmZvLT5vcHMtPm1heF9zZWcpOworCisJZXJyID0gaW5pdF9zaGFkb3dfdjIoaW5mbyk7CisJ
aWYgKGVycikKKwkJZ290byBmYWlsOworCisJZXJyID0geGVuYnVzX2FsbG9jX2V2dGNobihkZXYs
ICZpbmZvLT5ldnRjaG4pOworCWlmIChlcnIpCisJCWdvdG8gZmFpbDsKKworCWVyciA9IGJpbmRf
ZXZ0Y2huX3RvX2lycWhhbmRsZXIoaW5mby0+ZXZ0Y2huLAorCQkJCQlibGtpZl9pbnRlcnJ1cHQs
CisJCQkJCUlSUUZfU0FNUExFX1JBTkRPTSwgImJsa2lmIiwgaW5mbyk7CisJaWYgKGVyciA8PSAw
KSB7CisJCXhlbmJ1c19kZXZfZmF0YWwoZGV2LCBlcnIsCisJCQkJICJiaW5kX2V2dGNobl90b19p
cnFoYW5kbGVyIGZhaWxlZCIpOworCQlnb3RvIGZhaWw7CisJfQorCWluZm8tPmlycSA9IGVycjsK
KworCXJldHVybiAwOworZmFpbDoKKwlibGtpZl9mcmVlKGluZm8sIDApOworCXJldHVybiBlcnI7
Cit9CisKIHN0YXRpYyB2b2lkIGZyZWVfYmxrcmluZyhzdHJ1Y3QgYmxrZnJvbnRfaW5mbyAqaW5m
bywgaW50IHN1c3BlbmQpCiB7CiAJaWYgKGluZm8tPnJpbmdfcmVmICE9IEdSQU5UX0lOVkFMSURf
UkVGKSB7CkBAIC0xMDE4LDYgKzEzNTQsMzIgQEAgc3RhdGljIHZvaWQgZnJlZV9ibGtyaW5nKHN0
cnVjdCBibGtmcm9udF9pbmZvICppbmZvLCBpbnQgc3VzcGVuZCkKIAkJa2ZyZWUoaW5mby0+c2hh
ZG93KTsKIH0KIAorc3RhdGljIHZvaWQgZnJlZV9ibGtyaW5nX3YyKHN0cnVjdCBibGtmcm9udF9p
bmZvICppbmZvLCBpbnQgc3VzcGVuZCkKK3sKKwlpZiAoaW5mby0+cmVxcmluZ19yZWYgIT0gR1JB
TlRfSU5WQUxJRF9SRUYpIHsKKwkJZ250dGFiX2VuZF9mb3JlaWduX2FjY2VzcyhpbmZvLT5yZXFy
aW5nX3JlZiwgMCwKKwkJCQkJICAodW5zaWduZWQgbG9uZylpbmZvLT5yZXFyaW5nLnNyaW5nKTsK
KwkJaW5mby0+cmVxcmluZ19yZWYgPSBHUkFOVF9JTlZBTElEX1JFRjsKKwkJaW5mby0+cmVxcmlu
Zy5zcmluZyA9IE5VTEw7CisJfQorCisJaWYgKGluZm8tPnNlZ3JpbmdfcmVmICE9IEdSQU5UX0lO
VkFMSURfUkVGKSB7CisJCWdudHRhYl9lbmRfZm9yZWlnbl9hY2Nlc3MoaW5mby0+c2VncmluZ19y
ZWYsIDAsCisJCQkJCSAgKHVuc2lnbmVkIGxvbmcpaW5mby0+c2VncmluZy5zcmluZyk7CisJCWlu
Zm8tPnNlZ3JpbmdfcmVmID0gR1JBTlRfSU5WQUxJRF9SRUY7CisJCWluZm8tPnNlZ3Jpbmcuc3Jp
bmcgPSBOVUxMOworCX0KKworCWtmcmVlKGluZm8tPnNnKTsKKworCWlmKCFzdXNwZW5kKSB7CisJ
CWtmcmVlKGluZm8tPnJlcV9zaGFkb3cpOworCQlrZnJlZShpbmZvLT5zZWdfc2hhZG93KTsKKwl9
CisKK30KKworCiAvKiBDb21tb24gY29kZSB1c2VkIHdoZW4gZmlyc3Qgc2V0dGluZyB1cCwgYW5k
IHdoZW4gcmVzdW1pbmcuICovCiBzdGF0aWMgaW50IHRhbGtfdG9fYmxrYmFjayhzdHJ1Y3QgeGVu
YnVzX2RldmljZSAqZGV2LAogCQkJICAgc3RydWN0IGJsa2Zyb250X2luZm8gKmluZm8pCkBAIC0x
MDI1LDkgKzEzODcsMTcgQEAgc3RhdGljIGludCB0YWxrX3RvX2Jsa2JhY2soc3RydWN0IHhlbmJ1
c19kZXZpY2UgKmRldiwKIAljb25zdCBjaGFyICptZXNzYWdlID0gTlVMTDsKIAlzdHJ1Y3QgeGVu
YnVzX3RyYW5zYWN0aW9uIHhidDsKIAlpbnQgZXJyOworCXVuc2lnbmVkIGludCB0eXBlOwogCiAJ
LyogcmVnaXN0ZXIgcmluZyBvcHMgKi8KLQlpbmZvLT5vcHMgPSAmYmxrX2Zyb250X29wczsKKwll
cnIgPSB4ZW5idXNfc2NhbmYoWEJUX05JTCwgZGV2LT5vdGhlcmVuZCwgImJsa2JhY2stcmluZy10
eXBlIiwgIiV1IiwKKwkJCSAgICZ0eXBlKTsKKwlpZiAoZXJyICE9IDEpCisJCXR5cGUgPSAxOwor
CWlmICh0eXBlID09IDIpCisJCWluZm8tPm9wcyA9ICZibGtfZnJvbnRfb3BzX3YyOworCWVsc2UK
KwkJaW5mby0+b3BzID0gJmJsa19mcm9udF9vcHM7CiAKIAkvKiBDcmVhdGUgc2hhcmVkIHJpbmcs
IGFsbG9jIGV2ZW50IGNoYW5uZWwuICovCiAJZXJyID0gaW5mby0+b3BzLT5zZXR1cF9ibGtyaW5n
KGRldiwgaW5mbyk7CkBAIC0xMDQwLDEzICsxNDEwLDYgQEAgYWdhaW46CiAJCXhlbmJ1c19kZXZf
ZmF0YWwoZGV2LCBlcnIsICJzdGFydGluZyB0cmFuc2FjdGlvbiIpOwogCQlnb3RvIGRlc3Ryb3lf
YmxrcmluZzsKIAl9Ci0KLQllcnIgPSB4ZW5idXNfcHJpbnRmKHhidCwgZGV2LT5ub2RlbmFtZSwK
LQkJCSAgICAicmluZy1yZWYiLCAiJXUiLCBpbmZvLT5yaW5nX3JlZik7Ci0JaWYgKGVycikgewot
CQltZXNzYWdlID0gIndyaXRpbmcgcmluZy1yZWYiOwotCQlnb3RvIGFib3J0X3RyYW5zYWN0aW9u
OwotCX0KIAllcnIgPSB4ZW5idXNfcHJpbnRmKHhidCwgZGV2LT5ub2RlbmFtZSwKIAkJCSAgICAi
ZXZlbnQtY2hhbm5lbCIsICIldSIsIGluZm8tPmV2dGNobik7CiAJaWYgKGVycikgewpAQCAtMTA1
OSw3ICsxNDIyLDQwIEBAIGFnYWluOgogCQltZXNzYWdlID0gIndyaXRpbmcgcHJvdG9jb2wiOwog
CQlnb3RvIGFib3J0X3RyYW5zYWN0aW9uOwogCX0KLQorCWlmICh0eXBlID09IDEpIHsKKwkJZXJy
ID0geGVuYnVzX3ByaW50Zih4YnQsIGRldi0+bm9kZW5hbWUsCisJCQkJICAgICJyaW5nLXJlZiIs
ICIldSIsIGluZm8tPnJpbmdfcmVmKTsKKwkJaWYgKGVycikgeworCQkJbWVzc2FnZSA9ICJ3cml0
aW5nIHJpbmctcmVmIjsKKwkJCWdvdG8gYWJvcnRfdHJhbnNhY3Rpb247CisJCX0KKwkJZXJyID0g
eGVuYnVzX3ByaW50Zih4YnQsIGRldi0+bm9kZW5hbWUsICJibGtmcm9udC1yaW5nLXR5cGUiLAor
CQkJCSAgICAiJXUiLCB0eXBlKTsKKwkJaWYgKGVycikgeworCQkJbWVzc2FnZSA9ICJ3cml0aW5n
IGJsa2Zyb250IHJpbmcgdHlwZSI7CisJCQlnb3RvIGFib3J0X3RyYW5zYWN0aW9uOworCQl9CQor
CX0KKwlpZiAodHlwZSA9PSAyKSB7CisJCWVyciA9IHhlbmJ1c19wcmludGYoeGJ0LCBkZXYtPm5v
ZGVuYW1lLAorCQkJCSAgICAicmVxcmluZy1yZWYiLCAiJXUiLCBpbmZvLT5yZXFyaW5nX3JlZik7
CisJCWlmIChlcnIpIHsKKwkJCW1lc3NhZ2UgPSAid3JpdGluZyByZXFyaW5nLXJlZiI7CisJCQln
b3RvIGFib3J0X3RyYW5zYWN0aW9uOworCQl9CisJCWVyciA9IHhlbmJ1c19wcmludGYoeGJ0LCBk
ZXYtPm5vZGVuYW1lLAorCQkJCSAgICAic2VncmluZy1yZWYiLCAiJXUiLCBpbmZvLT5zZWdyaW5n
X3JlZik7CisJCWlmIChlcnIpIHsKKwkJCW1lc3NhZ2UgPSAid3JpdGluZyBzZWdyaW5nLXJlZiI7
CisJCQlnb3RvIGFib3J0X3RyYW5zYWN0aW9uOworCQl9CisJCWVyciA9IHhlbmJ1c19wcmludGYo
eGJ0LCBkZXYtPm5vZGVuYW1lLCAiYmxrZnJvbnQtcmluZy10eXBlIiwKKwkJCQkgICAgIiV1Iiwg
dHlwZSk7CisJCWlmIChlcnIpIHsKKwkJCW1lc3NhZ2UgPSAid3JpdGluZyBibGtmcm9udCByaW5n
IHR5cGUiOworCQkJZ290byBhYm9ydF90cmFuc2FjdGlvbjsKKwkJfQkKKwl9CiAJZXJyID0geGVu
YnVzX3RyYW5zYWN0aW9uX2VuZCh4YnQsIDApOwogCWlmIChlcnIpIHsKIAkJaWYgKGVyciA9PSAt
RUFHQUlOKQpAQCAtMTE2NCw3ICsxNTYwLDcgQEAgc3RhdGljIGludCBibGtmcm9udF9wcm9iZShz
dHJ1Y3QgeGVuYnVzX2RldmljZSAqZGV2LAogfQogCiAKLXN0YXRpYyBpbnQgYmxraWZfcmVjb3Zl
cihzdHJ1Y3QgYmxrZnJvbnRfaW5mbyAqaW5mbykKK3N0YXRpYyBpbnQgcmVjb3Zlcl9mcm9tX3Yx
X3RvX3YxKHN0cnVjdCBibGtmcm9udF9pbmZvICppbmZvKQogewogCWludCBpOwogCXN0cnVjdCBi
bGtpZl9yZXF1ZXN0ICpyZXE7CkBAIC0xMjMzLDYgKzE2MjksMzcyIEBAIHN0YXRpYyBpbnQgYmxr
aWZfcmVjb3ZlcihzdHJ1Y3QgYmxrZnJvbnRfaW5mbyAqaW5mbykKIAlyZXR1cm4gMDsKIH0KIAor
LyogbWlncmF0ZSBmcm9tIFYyIHR5cGUgcmluZyB0byBWMSB0eXBlKi8KK3N0YXRpYyBpbnQgcmVj
b3Zlcl9mcm9tX3YyX3RvX3YxKHN0cnVjdCBibGtmcm9udF9pbmZvICppbmZvKQorewkKKwlzdHJ1
Y3QgYmxrX3JlcV9zaGFkb3cgKmNvcHk7CisJc3RydWN0IGJsa19zZWdfc2hhZG93ICpzZWdfY29w
eTsKKwlzdHJ1Y3QgcmVxdWVzdCAqcmVxOworCXN0cnVjdCBibGtpZl9yZXF1ZXN0ICpuZXdfcmVx
OworCWludCBpLCBqLCBlcnI7CisJdW5zaWduZWQgaW50IHJlcV9yczsKKwlzdHJ1Y3QgYmlvICpi
aW9saXN0ID0gTlVMTCwgKmJpb3RhaWwgPSBOVUxMLCAqYmlvOworCXVuc2lnbmVkIGxvbmcgaW5k
ZXg7CisJdW5zaWduZWQgbG9uZyBmbGFnczsKKworCXByX2luZm8oIldhcm5pbmcsIG1pZ3JhdGUg
dG8gb2xkZXIgYmFja2VuZCwgc29tZSBpbyBtYXkgZmFpbFxuIik7CisKKwkvKiBTdGFnZSAxOiBJ
bml0IHRoZSBuZXcgc2hhZG93IHN0YXRlLiAqLworCWluZm8tPnJpbmdfdHlwZSA9IFJJTkdfVFlQ
RV9VTkRFRklORUQ7CisJZXJyID0gaW5pdF9zaGFkb3coaW5mbyk7CisJaWYgKGVycikKKwkJcmV0
dXJuIGVycjsKKworCXJlcV9ycyA9IEJMS19SRVFfUklOR19TSVpFOworCisJLyogU3RhZ2UgMjog
U2V0IHVwIGZyZWUgbGlzdC4gKi8KKwlpbmZvLT5zaGFkb3dfZnJlZSA9IGluZm8tPnJpbmcucmVx
X3Byb2RfcHZ0OworCisJLyogU3RhZ2UgMzogRmluZCBwZW5kaW5nIHJlcXVlc3RzIGFuZCByZXF1
ZXVlIHRoZW0uICovCisJZm9yIChpID0gMDsgaSA8IHJlcV9yczsgaSsrKSB7CisJCXJlcSA9IGlu
Zm8tPnJlcV9zaGFkb3dbaV0ucmVxdWVzdDsKKwkJLyogTm90IGluIHVzZT8gKi8KKwkJaWYgKCFy
ZXEpCisJCQljb250aW51ZTsKKworCQlpZiAocmluZ19mdWxsKGluZm8pKSAKKwkJCWdvdG8gb3V0
OworCisJCWNvcHkgPSAmaW5mby0+cmVxX3NoYWRvd1tpXTsKKworICAgICAgICAgICAgICAgIC8q
IFdlIGdldCBhIG5ldyByZXF1ZXN0LCByZXNldCB0aGUgYmxraWYgcmVxdWVzdCBhbmQgc2hhZG93
IHN0YXRlLiAqLworCQluZXdfcmVxID0gUklOR19HRVRfUkVRVUVTVCgmaW5mby0+cmluZywgaW5m
by0+cmluZy5yZXFfcHJvZF9wdnQpOworCisJCWlmIChjb3B5LT5yZXEub3BlcmF0aW9uID09IEJM
S0lGX09QX0RJU0NBUkQpIHsKKwkJCW5ld19yZXEtPm9wZXJhdGlvbiA9IEJMS0lGX09QX0RJU0NB
UkQ7CisJCQluZXdfcmVxLT51LmRpc2NhcmQgPSBjb3B5LT5yZXEudS5kaXNjYXJkOworCQkJbmV3
X3JlcS0+dS5kaXNjYXJkLmlkID0gZ2V0X2lkX2Zyb21fZnJlZWxpc3QoaW5mbyk7CisJCQlpbmZv
LT5zaGFkb3dbbmV3X3JlcS0+dS5kaXNjYXJkLmlkXS5yZXF1ZXN0ID0gcmVxOworCQl9CisJCWVs
c2UgeworCQkJaWYgKGNvcHktPnJlcS51LnJ3Lm5yX3NlZ21lbnRzID4gQkxLSUZfTUFYX1NFR01F
TlRTX1BFUl9SRVFVRVNUKSAKKyAgICAgICAgICAgICAgICAgICAgICAgIAljb250aW51ZTsgCisK
KwkJCW5ld19yZXEtPnUucncuaWQgPSBnZXRfaWRfZnJvbV9mcmVlbGlzdChpbmZvKTsKKwkJCWlu
Zm8tPnNoYWRvd1tuZXdfcmVxLT51LnJ3LmlkXS5yZXF1ZXN0ID0gcmVxOworCQkJbmV3X3JlcS0+
b3BlcmF0aW9uID0gY29weS0+cmVxLm9wZXJhdGlvbjsKKwkJCW5ld19yZXEtPnUucncubnJfc2Vn
bWVudHMgPSBjb3B5LT5yZXEudS5ydy5ucl9zZWdtZW50czsKKwkJCW5ld19yZXEtPnUucncuaGFu
ZGxlID0gY29weS0+cmVxLnUucncuaGFuZGxlOworCQkJbmV3X3JlcS0+dS5ydy5zZWN0b3JfbnVt
YmVyID0gY29weS0+cmVxLnUucncuc2VjdG9yX251bWJlcjsKKwkJCWluZGV4ID0gY29weS0+cmVx
LnUucncuc2VnX2lkOworCQkJZm9yIChqID0gMDsgaiA8IG5ld19yZXEtPnUucncubnJfc2VnbWVu
dHM7IGorKykgeworCQkJCXNlZ19jb3B5ID0gJmluZm8tPnNlZ19zaGFkb3dbaW5kZXhdOworCQkJ
CW5ld19yZXEtPnUucncuc2VnW2pdLmdyZWYgPSBzZWdfY29weS0+cmVxLmdyZWY7CisJCQkJbmV3
X3JlcS0+dS5ydy5zZWdbal0uZmlyc3Rfc2VjdCA9IHNlZ19jb3B5LT5yZXEuZmlyc3Rfc2VjdDsK
KwkJCQluZXdfcmVxLT51LnJ3LnNlZ1tqXS5sYXN0X3NlY3QgPSBzZWdfY29weS0+cmVxLmxhc3Rf
c2VjdDsgCisJCQkJaW5mby0+c2hhZG93W25ld19yZXEtPnUucncuaWRdLmZyYW1lW2pdID0gc2Vn
X2NvcHktPmZyYW1lOworCQkJCWdudHRhYl9ncmFudF9mb3JlaWduX2FjY2Vzc19yZWYoCisJCQkJ
CW5ld19yZXEtPnUucncuc2VnW2pdLmdyZWYsCisJCQkJCWluZm8tPnhiZGV2LT5vdGhlcmVuZF9p
ZCwKKwkJCQkJcGZuX3RvX21mbihpbmZvLT5zaGFkb3dbbmV3X3JlcS0+dS5ydy5pZF0uZnJhbWVb
al0pLAorCQkJCQlycV9kYXRhX2RpcihpbmZvLT5zaGFkb3dbbmV3X3JlcS0+dS5ydy5pZF0ucmVx
dWVzdCkpOworCQkJCWluZGV4ID0gaW5mby0+c2VnX3NoYWRvd1tpbmRleF0uaWQ7CisJCQl9CisJ
CX0KKwkJaW5mby0+c2hhZG93W25ld19yZXEtPnUucncuaWRdLnJlcSA9ICpuZXdfcmVxOworCQlp
bmZvLT5yaW5nLnJlcV9wcm9kX3B2dCsrOworCQlpbmZvLT5yZXFfc2hhZG93W2ldLnJlcXVlc3Qg
PSBOVUxMOworCQkKKwl9CitvdXQ6CisJeGVuYnVzX3N3aXRjaF9zdGF0ZShpbmZvLT54YmRldiwg
WGVuYnVzU3RhdGVDb25uZWN0ZWQpOworCisJc3Bpbl9sb2NrX2lycXNhdmUoJmluZm8tPmlvX2xv
Y2ssIGZsYWdzKTsKKworCS8qIGNhbmNlbCB0aGUgcmVxdWVzdCBhbmQgcmVzdWJtaXQgdGhlIGJp
byAqLworCWZvciAoaSA9IDA7IGkgPCByZXFfcnM7IGkrKykgeworCQlyZXEgPSBpbmZvLT5yZXFf
c2hhZG93W2ldLnJlcXVlc3Q7CisJCWlmICghcmVxKQorCQkJY29udGludWU7CisKKwkJYmxraWZf
Y29tcGxldGlvbl92MihpbmZvLCBpKTsKKworCQlpZiAoYmlvbGlzdCA9PSBOVUxMKQkKKwkJCWJp
b2xpc3QgPSByZXEtPmJpbzsKKwkJZWxzZQorCQkJYmlvdGFpbC0+YmlfbmV4dCA9IHJlcS0+Ymlv
OworCQliaW90YWlsID0gcmVxLT5iaW90YWlsOworCQlyZXEtPmJpbyA9IE5VTEw7CisJCV9fYmxr
X3B1dF9yZXF1ZXN0KGluZm8tPnJxLCByZXEpOworCX0KKworCXdoaWxlICgocmVxID0gYmxrX3Bl
ZWtfcmVxdWVzdChpbmZvLT5ycSkpICE9IE5VTEwpIHsKKworCQlibGtfc3RhcnRfcmVxdWVzdChy
ZXEpOworCisJCWlmIChiaW9saXN0ID09IE5VTEwpCisJCQliaW9saXN0ID0gcmVxLT5iaW87CisJ
CWVsc2UKKwkJCWJpb3RhaWwtPmJpX25leHQgPSByZXEtPmJpbzsKKwkJYmlvdGFpbCA9IHJlcS0+
YmlvdGFpbDsKKwkJcmVxLT5iaW8gPSBOVUxMOworCQlfX2Jsa19wdXRfcmVxdWVzdChpbmZvLT5y
cSwgcmVxKTsKKwl9CisKKwkvKiBOb3cgc2FmZSBmb3IgdXMgdG8gdXNlIHRoZSBzaGFyZWQgcmlu
ZyAqLworCWluZm8tPmNvbm5lY3RlZCA9IEJMS0lGX1NUQVRFX0NPTk5FQ1RFRDsKKworCS8qIG5l
ZWQgdXBkYXRlIHRoZSBxdWV1ZSBsaW1pdCBzZXR0aW5nICovCisJdXBkYXRlX2Jsa19xdWV1ZShp
bmZvKTsKKworCS8qIFNlbmQgb2ZmIHJlcXVldWVkIHJlcXVlc3RzICovCisJZmx1c2hfcmVxdWVz
dHMoaW5mbyk7CisKKwkvKiBLaWNrIGFueSBvdGhlciBuZXcgcmVxdWVzdHMgcXVldWVkIHNpbmNl
IHdlIHJlc3VtZWQgKi8KKwlraWNrX3BlbmRpbmdfcmVxdWVzdF9xdWV1ZXMoaW5mbyk7CisKKwlz
cGluX3VubG9ja19pcnFyZXN0b3JlKCZpbmZvLT5pb19sb2NrLCBmbGFncyk7CisKKwkvKiBmcmVl
IG9yaWdpbmFsIHNoYWRvdyovCisJa2ZyZWUoaW5mby0+c2VnX3NoYWRvdyk7CisJa2ZyZWUoaW5m
by0+cmVxX3NoYWRvdyk7CisKKwl3aGlsZShiaW9saXN0KSB7CisJCWJpbyA9IGJpb2xpc3Q7CisJ
CWJpb2xpc3QgPSBiaW9saXN0LT5iaV9uZXh0OworCQliaW8tPmJpX25leHQgPSBOVUxMOworCQlz
dWJtaXRfYmlvKGJpby0+YmlfcncsIGJpbyk7CisJfQorCisJcmV0dXJuIDA7Cit9CisKK3N0YXRp
YyBpbnQgYmxraWZfcmVjb3ZlcihzdHJ1Y3QgYmxrZnJvbnRfaW5mbyAqaW5mbykKK3sKKwlpbnQg
cmM7CisKKwlpZiAoaW5mby0+cmluZ190eXBlID09IFJJTkdfVFlQRV8xKQorCQlyYyA9IHJlY292
ZXJfZnJvbV92MV90b192MShpbmZvKTsKKwllbHNlIGlmIChpbmZvLT5yaW5nX3R5cGUgPT0gUklO
R19UWVBFXzIpCisJCXJjID0gcmVjb3Zlcl9mcm9tX3YyX3RvX3YxKGluZm8pOworCWVsc2UKKwkJ
cmMgPSAtRVBFUk07CisJcmV0dXJuIHJjOworfQorCitzdGF0aWMgaW50IHJlY292ZXJfZnJvbV92
MV90b192MihzdHJ1Y3QgYmxrZnJvbnRfaW5mbyAqaW5mbykKK3sKKwlpbnQgaSxlcnI7CisJc3Ry
dWN0IGJsa2lmX3JlcXVlc3RfaGVhZGVyICpyZXE7CisJc3RydWN0IGJsa2lmX3JlcXVlc3Rfc2Vn
bWVudCAqc2VncmluZ19yZXE7CisJc3RydWN0IGJsa19zaGFkb3cgKmNvcHk7CisJaW50IGo7CisJ
dW5zaWduZWQgbG9uZyBzZWdfaWQsIGxhc3RfaWQgPSAweDBmZmZmZmZmOworCisJLyogU3RhZ2Ug
MTogSW5pdCB0aGUgbmV3IHNoYWRvdy4gKi8KKwlpbmZvLT5yaW5nX3R5cGUgPSBSSU5HX1RZUEVf
VU5ERUZJTkVEOworCWVyciA9IGluaXRfc2hhZG93X3YyKGluZm8pOworCWlmIChlcnIpCisJCXJl
dHVybiBlcnI7CisKKwkvKiBTdGFnZSAyOiBTZXQgdXAgZnJlZSBsaXN0LiAqLworCWluZm8tPnNo
YWRvd19mcmVlID0gaW5mby0+cmVxcmluZy5yZXFfcHJvZF9wdnQ7CisJaW5mby0+c2VnX3NoYWRv
d19mcmVlID0gaW5mby0+c2VncmluZy5yZXFfcHJvZF9wdnQ7CisKKwkvKiBTdGFnZSAzOiBGaW5k
IHBlbmRpbmcgcmVxdWVzdHMgYW5kIHJlcXVldWUgdGhlbS4gKi8KKwlmb3IgKGkgPSAwOyBpIDwg
QkxLX1JJTkdfU0laRTsgaSsrKSB7CisJCWNvcHkgPSAmaW5mby0+c2hhZG93W2ldOworCQkvKiBO
b3QgaW4gdXNlPyAqLworCQlpZiAoIWNvcHktPnJlcXVlc3QpCisJCQljb250aW51ZTsKKworCQkv
KiBXZSBnZXQgYSBuZXcgcmVxdWVzdCwgcmVzZXQgdGhlIGJsa2lmIHJlcXVlc3QgYW5kIHNoYWRv
dyBzdGF0ZS4gKi8KKwkJcmVxID0gUklOR19HRVRfUkVRVUVTVCgmaW5mby0+cmVxcmluZywgaW5m
by0+cmVxcmluZy5yZXFfcHJvZF9wdnQpOworCisJCWlmIChjb3B5LT5yZXEub3BlcmF0aW9uID09
IEJMS0lGX09QX0RJU0NBUkQpIHsKKwkJCXJlcS0+b3BlcmF0aW9uID0gQkxLSUZfT1BfRElTQ0FS
RDsKKwkJCXJlcS0+dS5kaXNjYXJkID0gY29weS0+cmVxLnUuZGlzY2FyZDsKKwkJCXJlcS0+dS5k
aXNjYXJkLmlkID0gZ2V0X2lkX2Zyb21fZnJlZWxpc3RfdjIoaW5mbyk7CisJCQlpbmZvLT5yZXFf
c2hhZG93W3JlcS0+dS5kaXNjYXJkLmlkXS5yZXF1ZXN0ID0gY29weS0+cmVxdWVzdDsKKwkJCWlu
Zm8tPnJlcV9zaGFkb3dbcmVxLT51LmRpc2NhcmQuaWRdLnJlcSA9ICpyZXE7CisJCX0KKwkJZWxz
ZSB7CisJCQlyZXEtPnUucncuaWQgPSBnZXRfaWRfZnJvbV9mcmVlbGlzdF92MihpbmZvKTsKKwkJ
CXJlcS0+b3BlcmF0aW9uID0gY29weS0+cmVxLm9wZXJhdGlvbjsKKwkJCXJlcS0+dS5ydy5ucl9z
ZWdtZW50cyA9IGNvcHktPnJlcS51LnJ3Lm5yX3NlZ21lbnRzOworCQkJcmVxLT51LnJ3LmhhbmRs
ZSA9IGNvcHktPnJlcS51LnJ3LmhhbmRsZTsKKwkJCXJlcS0+dS5ydy5zZWN0b3JfbnVtYmVyID0g
Y29weS0+cmVxLnUucncuc2VjdG9yX251bWJlcjsKKwkJCWZvciAoaiA9IDA7IGogPCByZXEtPnUu
cncubnJfc2VnbWVudHM7IGorKykgeworCQkJCXNlZ19pZCA9IGdldF9zZWdfc2hhZG93X2lkKGlu
Zm8pOworCQkJCWlmIChqID09IDApCisJCQkJCXJlcS0+dS5ydy5zZWdfaWQgPSBzZWdfaWQ7CisJ
CQkJZWxzZQorCQkJCQlpbmZvLT5zZWdfc2hhZG93W2xhc3RfaWRdLmlkID0gc2VnX2lkOworCQkJ
CXNlZ3JpbmdfcmVxID0gUklOR19HRVRfUkVRVUVTVCgmaW5mby0+c2VncmluZywgaW5mby0+c2Vn
cmluZy5yZXFfcHJvZF9wdnQpOworCQkJCXNlZ3JpbmdfcmVxLT5ncmVmID0gY29weS0+cmVxLnUu
cncuc2VnW2pdLmdyZWY7CisJCQkJc2VncmluZ19yZXEtPmZpcnN0X3NlY3QgPSBjb3B5LT5yZXEu
dS5ydy5zZWdbal0uZmlyc3Rfc2VjdDsKKwkJCQlzZWdyaW5nX3JlcS0+bGFzdF9zZWN0ID0gY29w
eS0+cmVxLnUucncuc2VnW2pdLmxhc3Rfc2VjdDsKKwkJCQlpbmZvLT5zZWdfc2hhZG93W3NlZ19p
ZF0ucmVxID0gKnNlZ3JpbmdfcmVxOworCQkJCWluZm8tPnNlZ19zaGFkb3dbc2VnX2lkXS5mcmFt
ZSA9IGNvcHktPmZyYW1lW2pdOworCQkJCWluZm8tPnNlZ3JpbmcucmVxX3Byb2RfcHZ0Kys7CisJ
CQkJZ250dGFiX2dyYW50X2ZvcmVpZ25fYWNjZXNzX3JlZigKKwkJCQkJc2VncmluZ19yZXEtPmdy
ZWYsCisJCQkJCWluZm8tPnhiZGV2LT5vdGhlcmVuZF9pZCwKKwkJCQkJcGZuX3RvX21mbihjb3B5
LT5mcmFtZVtqXSksCisJCQkJCXJxX2RhdGFfZGlyKGNvcHktPnJlcXVlc3QpKTsKKwkJCQlsYXN0
X2lkID0gc2VnX2lkOworCQkJfQorCQkJaW5mby0+cmVxX3NoYWRvd1tyZXEtPnUucncuaWRdLnJl
cSA9ICpyZXE7CisJCQlpbmZvLT5yZXFfc2hhZG93W3JlcS0+dS5ydy5pZF0ucmVxdWVzdCA9IGNv
cHktPnJlcXVlc3Q7CisJCX0KKworCQlpbmZvLT5yZXFyaW5nLnJlcV9wcm9kX3B2dCsrOworCX0K
KworCS8qIG5lZWQgdXBkYXRlIHRoZSBxdWV1ZSBsaW1pdCBzZXR0aW5nICovCisJdXBkYXRlX2Js
a19xdWV1ZShpbmZvKTsKKworCS8qIGZyZWUgb3JpZ2luYWwgc2hhZG93Ki8KKwlrZnJlZShpbmZv
LT5zaGFkb3cpOworCisJeGVuYnVzX3N3aXRjaF9zdGF0ZShpbmZvLT54YmRldiwgWGVuYnVzU3Rh
dGVDb25uZWN0ZWQpOworCisJc3Bpbl9sb2NrX2lycSgmaW5mby0+aW9fbG9jayk7CisKKwkvKiBO
b3cgc2FmZSBmb3IgdXMgdG8gdXNlIHRoZSBzaGFyZWQgcmluZyAqLworCWluZm8tPmNvbm5lY3Rl
ZCA9IEJMS0lGX1NUQVRFX0NPTk5FQ1RFRDsKKworCS8qIFNlbmQgb2ZmIHJlcXVldWVkIHJlcXVl
c3RzICovCisJZmx1c2hfcmVxdWVzdHMoaW5mbyk7CisKKwkvKiBLaWNrIGFueSBvdGhlciBuZXcg
cmVxdWVzdHMgcXVldWVkIHNpbmNlIHdlIHJlc3VtZWQgKi8KKwlraWNrX3BlbmRpbmdfcmVxdWVz
dF9xdWV1ZXMoaW5mbyk7CisKKwlzcGluX3VubG9ja19pcnEoJmluZm8tPmlvX2xvY2spOworCisJ
cmV0dXJuIDA7Cit9CisKK3N0YXRpYyBpbnQgcmVjb3Zlcl9mcm9tX3YyX3RvX3YyKHN0cnVjdCBi
bGtmcm9udF9pbmZvICppbmZvKQoreworCWludCBpOworCXN0cnVjdCBibGtpZl9yZXF1ZXN0X2hl
YWRlciAqcmVxOworCXN0cnVjdCBibGtpZl9yZXF1ZXN0X3NlZ21lbnQgKnNlZ3JpbmdfcmVxOwor
CXN0cnVjdCBibGtfcmVxX3NoYWRvdyAqY29weTsKKwlzdHJ1Y3QgYmxrX3NlZ19zaGFkb3cgKnNl
Z19jb3B5OworCXVuc2lnbmVkIGxvbmcgaW5kZXggPSAweDBmZmZmZmZmLCBzZWdfaWQsIGxhc3Rf
aWQgPSAweDBmZmZmZmZmOworCWludCBqOworCXVuc2lnbmVkIGludCByZXFfcnMsIHNlZ19yczsK
Kwl1bnNpZ25lZCBsb25nIGZsYWdzOworCisJcmVxX3JzID0gQkxLX1JFUV9SSU5HX1NJWkU7CisJ
c2VnX3JzID0gQkxLX1NFR19SSU5HX1NJWkU7CisKKwkvKiBTdGFnZSAxOiBNYWtlIGEgc2FmZSBj
b3B5IG9mIHRoZSBzaGFkb3cgc3RhdGUuICovCisJY29weSA9IGttYWxsb2Moc2l6ZW9mKHN0cnVj
dCBibGtfcmVxX3NoYWRvdykgKiByZXFfcnMsCisJCSAgICAgICBHRlBfTk9JTyB8IF9fR0ZQX1JF
UEVBVCB8IF9fR0ZQX0hJR0gpOworCWlmICghY29weSkKKwkJcmV0dXJuIC1FTk9NRU07CisKKwlz
ZWdfY29weSA9IGttYWxsb2Moc2l6ZW9mKHN0cnVjdCBibGtfc2VnX3NoYWRvdykgKiBzZWdfcnMs
CisJCQkgICBHRlBfTk9JTyB8IF9fR0ZQX1JFUEVBVCB8IF9fR0ZQX0hJR0gpOworCWlmICghc2Vn
X2NvcHkgKSB7CisJCWtmcmVlKGNvcHkpOworCQlyZXR1cm4gLUVOT01FTTsKKwl9CisKKwltZW1j
cHkoY29weSwgaW5mby0+cmVxX3NoYWRvdywgc2l6ZW9mKHN0cnVjdCBibGtfcmVxX3NoYWRvdykg
KiByZXFfcnMpOworCW1lbWNweShzZWdfY29weSwgaW5mby0+c2VnX3NoYWRvdywKKwkgICAgICAg
c2l6ZW9mKHN0cnVjdCBibGtfc2VnX3NoYWRvdykgKiBzZWdfcnMpOworCisJLyogU3RhZ2UgMjog
U2V0IHVwIGZyZWUgbGlzdC4gKi8KKyAgICAgICAgZm9yIChpID0gMDsgaSA8IHJlcV9yczsgaSsr
KQorICAgICAgICAgICAgICAgIGluZm8tPnJlcV9zaGFkb3dbaV0ucmVxLnUucncuaWQgPSBpKzE7
CisgICAgICAgIGluZm8tPnJlcV9zaGFkb3dbcmVxX3JzIC0gMV0ucmVxLnUucncuaWQgPSAweDBm
ZmZmZmZmOworCisJZm9yIChpID0gMDsgaSA8IHNlZ19yczsgaSsrKQorCQlpbmZvLT5zZWdfc2hh
ZG93W2ldLmlkID0gaSsxOworCWluZm8tPnNlZ19zaGFkb3dbc2VnX3JzIC0gMV0uaWQgPSAweDBm
ZmZmZmZmOworCisJaW5mby0+c2hhZG93X2ZyZWUgPSBpbmZvLT5yZXFyaW5nLnJlcV9wcm9kX3B2
dDsKKwlpbmZvLT5zZWdfc2hhZG93X2ZyZWUgPSBpbmZvLT5zZWdyaW5nLnJlcV9wcm9kX3B2dDsK
KworCS8qIFN0YWdlIDM6IEZpbmQgcGVuZGluZyByZXF1ZXN0cyBhbmQgcmVxdWV1ZSB0aGVtLiAq
LworCWZvciAoaSA9IDA7IGkgPCByZXFfcnM7IGkrKykgeworCQkvKiBOb3QgaW4gdXNlPyAqLwor
CQlpZiAoIWNvcHlbaV0ucmVxdWVzdCkKKwkJCWNvbnRpbnVlOworCisJCXJlcSA9IFJJTkdfR0VU
X1JFUVVFU1QoJmluZm8tPnJlcXJpbmcsIGluZm8tPnJlcXJpbmcucmVxX3Byb2RfcHZ0KTsKKwkJ
KnJlcSA9IGNvcHlbaV0ucmVxOworCisJCXJlcS0+dS5ydy5pZCA9IGdldF9pZF9mcm9tX2ZyZWVs
aXN0X3YyKGluZm8pOworCQltZW1jcHkoJmluZm8tPnJlcV9zaGFkb3dbcmVxLT51LnJ3LmlkXSwg
JmNvcHlbaV0sIHNpemVvZihjb3B5W2ldKSk7CisKKwkJaWYgKHJlcS0+b3BlcmF0aW9uICE9IEJM
S0lGX09QX0RJU0NBUkQpIHsKKwkJCWZvciAoaiA9IDA7IGogPCByZXEtPnUucncubnJfc2VnbWVu
dHM7IGorKykgeworCQkJCXNlZ19pZCA9IGdldF9zZWdfc2hhZG93X2lkKGluZm8pOworCQkJCWlm
IChqID09IDApCisJCQkJCWluZGV4ID0gcmVxLT51LnJ3LnNlZ19pZDsKKwkJCQllbHNlCisJCQkJ
CWluZGV4ID0gc2VnX2NvcHlbaW5kZXhdLmlkIDsKKwkJCQlnbnR0YWJfZ3JhbnRfZm9yZWlnbl9h
Y2Nlc3NfcmVmKAorCQkJCQlzZWdfY29weVtpbmRleF0ucmVxLmdyZWYsCisJCQkJCWluZm8tPnhi
ZGV2LT5vdGhlcmVuZF9pZCwKKwkJCQkJcGZuX3RvX21mbihzZWdfY29weVtpbmRleF0uZnJhbWUp
LAorCQkJCQlycV9kYXRhX2RpcihpbmZvLT5yZXFfc2hhZG93W3JlcS0+dS5ydy5pZF0ucmVxdWVz
dCkpOworCQkJCXNlZ3JpbmdfcmVxID0gUklOR19HRVRfUkVRVUVTVCgmaW5mby0+c2VncmluZywg
aW5mby0+c2VncmluZy5yZXFfcHJvZF9wdnQpOworCQkJCW1lbWNweShzZWdyaW5nX3JlcSwgJihz
ZWdfY29weVtpbmRleF0ucmVxKSwKKwkJCQkgICAgICAgc2l6ZW9mKHN0cnVjdCBibGtpZl9yZXF1
ZXN0X3NlZ21lbnQpKTsKKwkJCQlpZiAoaiA9PSAwKQorCQkJCQlyZXEtPnUucncuc2VnX2lkID0g
c2VnX2lkOworCQkJCWVsc2UKKwkJCQkJaW5mby0+c2VnX3NoYWRvd1tsYXN0X2lkXS5pZCA9IHNl
Z19pZDsKKworCQkJCW1lbWNweSgmaW5mby0+c2VnX3NoYWRvd1tzZWdfaWRdLAorCQkJCSAgICAg
ICAmc2VnX2NvcHlbaW5kZXhdLCBzaXplb2Yoc3RydWN0IGJsa19zZWdfc2hhZG93KSk7CisJCQkJ
aW5mby0+c2VncmluZy5yZXFfcHJvZF9wdnQrKzsKKwkJCQlsYXN0X2lkID0gc2VnX2lkOworCQkJ
fQorCQl9CisJCWluZm8tPnJlcV9zaGFkb3dbcmVxLT51LnJ3LmlkXS5yZXEgPSAqcmVxOworCisJ
CWluZm8tPnJlcXJpbmcucmVxX3Byb2RfcHZ0Kys7CisJfQorCisJa2ZyZWUoc2VnX2NvcHkpOwor
CWtmcmVlKGNvcHkpOworCisJeGVuYnVzX3N3aXRjaF9zdGF0ZShpbmZvLT54YmRldiwgWGVuYnVz
U3RhdGVDb25uZWN0ZWQpOworCisJc3Bpbl9sb2NrX2lycXNhdmUoJmluZm8tPmlvX2xvY2ssIGZs
YWdzKTsKKworCS8qIE5vdyBzYWZlIGZvciB1cyB0byB1c2UgdGhlIHNoYXJlZCByaW5nICovCisJ
aW5mby0+Y29ubmVjdGVkID0gQkxLSUZfU1RBVEVfQ09OTkVDVEVEOworCisJLyogU2VuZCBvZmYg
cmVxdWV1ZWQgcmVxdWVzdHMgKi8KKwlmbHVzaF9yZXF1ZXN0cyhpbmZvKTsKKworCS8qIEtpY2sg
YW55IG90aGVyIG5ldyByZXF1ZXN0cyBxdWV1ZWQgc2luY2Ugd2UgcmVzdW1lZCAqLworCWtpY2tf
cGVuZGluZ19yZXF1ZXN0X3F1ZXVlcyhpbmZvKTsKKworCXNwaW5fdW5sb2NrX2lycXJlc3RvcmUo
JmluZm8tPmlvX2xvY2ssIGZsYWdzKTsKKworCXJldHVybiAwOworfQorCitzdGF0aWMgaW50IGJs
a2lmX3JlY292ZXJfdjIoc3RydWN0IGJsa2Zyb250X2luZm8gKmluZm8pCit7CisJaW50IHJjOwor
CisJaWYgKGluZm8tPnJpbmdfdHlwZSA9PSBSSU5HX1RZUEVfMSkKKwkJcmMgPSByZWNvdmVyX2Zy
b21fdjFfdG9fdjIoaW5mbyk7CisJZWxzZSBpZiAoaW5mby0+cmluZ190eXBlID09IFJJTkdfVFlQ
RV8yKQorCQlyYyA9IHJlY292ZXJfZnJvbV92Ml90b192MihpbmZvKTsKKwllbHNlCisJCXJjID0g
LUVQRVJNOworCXJldHVybiByYzsKK30KIC8qKgogICogV2UgYXJlIHJlY29ubmVjdGluZyB0byB0
aGUgYmFja2VuZCwgZHVlIHRvIGEgc3VzcGVuZC9yZXN1bWUsIG9yIGEgYmFja2VuZAogICogZHJp
dmVyIHJlc3RhcnQuICBXZSB0ZWFyIGRvd24gb3VyIGJsa2lmIHN0cnVjdHVyZSBhbmQgcmVjcmVh
dGUgaXQsIGJ1dApAQCAtMTYwOSwxNSArMjM3MSw0NCBAQCBzdGF0aWMgc3RydWN0IGJsa19mcm9u
dF9vcGVyYXRpb25zIGJsa19mcm9udF9vcHMgPSB7CiAJLnVwZGF0ZV9yc3BfZXZlbnQgPSB1cGRh
dGVfcnNwX2V2ZW50LAogCS51cGRhdGVfcnNwX2NvbnMgPSB1cGRhdGVfcnNwX2NvbnMsCiAJLnVw
ZGF0ZV9yZXFfcHJvZF9wdnQgPSB1cGRhdGVfcmVxX3Byb2RfcHZ0LAorCS51cGRhdGVfc2VnbWVu
dF9yc3BfY29ucyA9IHVwZGF0ZV9zZWdtZW50X3JzcF9jb25zLAogCS5yaW5nX3B1c2ggPSByaW5n
X3B1c2gsCiAJLnJlY292ZXIgPSBibGtpZl9yZWNvdmVyLAogCS5yaW5nX2Z1bGwgPSByaW5nX2Z1
bGwsCisJLnNlZ3JpbmdfZnVsbCA9IHNlZ3JpbmdfZnVsbCwKIAkuc2V0dXBfYmxrcmluZyA9IHNl
dHVwX2Jsa3JpbmcsCiAJLmZyZWVfYmxrcmluZyA9IGZyZWVfYmxrcmluZywKIAkuYmxraWZfY29t
cGxldGlvbiA9IGJsa2lmX2NvbXBsZXRpb24sCiAJLm1heF9zZWcgPSBCTEtJRl9NQVhfU0VHTUVO
VFNfUEVSX1JFUVVFU1QsCiB9OwogCitzdGF0aWMgc3RydWN0IGJsa19mcm9udF9vcGVyYXRpb25z
IGJsa19mcm9udF9vcHNfdjIgPSB7CisJLnJpbmdfZ2V0X3JlcXVlc3QgPSByaW5nX2dldF9yZXF1
ZXN0X3YyLAorCS5yaW5nX2dldF9yZXNwb25zZSA9IHJpbmdfZ2V0X3Jlc3BvbnNlX3YyLAorCS5y
aW5nX2dldF9zZWdtZW50ID0gcmluZ19nZXRfc2VnbWVudF92MiwKKwkuZ2V0X2lkID0gZ2V0X2lk
X2Zyb21fZnJlZWxpc3RfdjIsCisJLmFkZF9pZCA9IGFkZF9pZF90b19mcmVlbGlzdF92MiwKKwku
c2F2ZV9zZWdfc2hhZG93ID0gc2F2ZV9zZWdfc2hhZG93X3YyLAorCS5zYXZlX3JlcV9zaGFkb3cg
PSBzYXZlX3JlcV9zaGFkb3dfdjIsCisJLmdldF9yZXFfZnJvbV9zaGFkb3cgPSBnZXRfcmVxX2Zy
b21fc2hhZG93X3YyLAorCS5nZXRfcnNwX3Byb2QgPSBnZXRfcnNwX3Byb2RfdjIsCisJLmdldF9y
c3BfY29ucyA9IGdldF9yc3BfY29uc192MiwKKwkuZ2V0X3JlcV9wcm9kX3B2dCA9IGdldF9yZXFf
cHJvZF9wdnRfdjIsCisJLmNoZWNrX2xlZnRfcmVzcG9uc2UgPSBjaGVja19sZWZ0X3Jlc3BvbnNl
X3YyLAorCS51cGRhdGVfcnNwX2V2ZW50ID0gdXBkYXRlX3JzcF9ldmVudF92MiwKKwkudXBkYXRl
X3JzcF9jb25zID0gdXBkYXRlX3JzcF9jb25zX3YyLAorCS51cGRhdGVfcmVxX3Byb2RfcHZ0ID0g
dXBkYXRlX3JlcV9wcm9kX3B2dF92MiwKKwkudXBkYXRlX3NlZ21lbnRfcnNwX2NvbnMgPSB1cGRh
dGVfc2VnbWVudF9yc3BfY29uc192MiwKKwkucmluZ19wdXNoID0gcmluZ19wdXNoX3YyLAorCS5y
ZWNvdmVyID0gYmxraWZfcmVjb3Zlcl92MiwKKwkucmluZ19mdWxsID0gcmluZ19mdWxsX3YyLAor
CS5zZWdyaW5nX2Z1bGwgPSBzZWdyaW5nX2Z1bGxfdjIsCisJLnNldHVwX2Jsa3JpbmcgPSBzZXR1
cF9ibGtyaW5nX3YyLAorCS5mcmVlX2Jsa3JpbmcgPSBmcmVlX2Jsa3JpbmdfdjIsCisJLmJsa2lm
X2NvbXBsZXRpb24gPSBibGtpZl9jb21wbGV0aW9uX3YyLAorCS5tYXhfc2VnID0gQkxLSUZfTUFY
X1NFR01FTlRTX1BFUl9SRVFVRVNUX1YyLAorfTsKKwogc3RhdGljIGNvbnN0IHN0cnVjdCBibG9j
a19kZXZpY2Vfb3BlcmF0aW9ucyB4bHZiZF9ibG9ja19mb3BzID0KIHsKIAkub3duZXIgPSBUSElT
X01PRFVMRSwKZGlmZiAtLWdpdCBhL2RyaXZlcnMveGVuL2dyYW50LXRhYmxlLmMgYi9kcml2ZXJz
L3hlbi9ncmFudC10YWJsZS5jCmluZGV4IGYxMDBjZTIuLmE1YTk4YjAgMTAwNjQ0Ci0tLSBhL2Ry
aXZlcnMveGVuL2dyYW50LXRhYmxlLmMKKysrIGIvZHJpdmVycy94ZW4vZ3JhbnQtdGFibGUuYwpA
QCAtNDc1LDcgKzQ3NSw3IEBAIHZvaWQgZ250dGFiX2VuZF9mb3JlaWduX2FjY2VzcyhncmFudF9y
ZWZfdCByZWYsIGludCByZWFkb25seSwKIAkJLyogWFhYIFRoaXMgbmVlZHMgdG8gYmUgZml4ZWQg
c28gdGhhdCB0aGUgcmVmIGFuZCBwYWdlIGFyZQogCQkgICBwbGFjZWQgb24gYSBsaXN0IHRvIGJl
IGZyZWVkIHVwIGxhdGVyLiAqLwogCQlwcmludGsoS0VSTl9XQVJOSU5HCi0JCSAgICAgICAiV0FS
TklORzogbGVha2luZyBnLmUuIGFuZCBwYWdlIHN0aWxsIGluIHVzZSFcbiIpOworCQkgICAgICAg
IldBUk5JTkc6IHJlZiAldSBsZWFraW5nIGcuZS4gYW5kIHBhZ2Ugc3RpbGwgaW4gdXNlIVxuIiwg
cmVmKTsKIAl9CiB9CiBFWFBPUlRfU1lNQk9MX0dQTChnbnR0YWJfZW5kX2ZvcmVpZ25fYWNjZXNz
KTsKZGlmZiAtLWdpdCBhL2luY2x1ZGUveGVuL2ludGVyZmFjZS9pby9ibGtpZi5oIGIvaW5jbHVk
ZS94ZW4vaW50ZXJmYWNlL2lvL2Jsa2lmLmgKaW5kZXggZWUzMzhiZi4uNzYzNDg5YSAxMDA2NDQK
LS0tIGEvaW5jbHVkZS94ZW4vaW50ZXJmYWNlL2lvL2Jsa2lmLmgKKysrIGIvaW5jbHVkZS94ZW4v
aW50ZXJmYWNlL2lvL2Jsa2lmLmgKQEAgLTEwOCw2ICsxMDgsNyBAQCB0eXBlZGVmIHVpbnQ2NF90
IGJsa2lmX3NlY3Rvcl90OwogICogTkIuIFRoaXMgY291bGQgYmUgMTIgaWYgdGhlIHJpbmcgaW5k
ZXhlcyB3ZXJlbid0IHN0b3JlZCBpbiB0aGUgc2FtZSBwYWdlLgogICovCiAjZGVmaW5lIEJMS0lG
X01BWF9TRUdNRU5UU19QRVJfUkVRVUVTVCAxMQorI2RlZmluZSBCTEtJRl9NQVhfU0VHTUVOVFNf
UEVSX1JFUVVFU1RfVjIgMTI4CiAKIHN0cnVjdCBibGtpZl9yZXF1ZXN0X3J3IHsKIAl1aW50OF90
ICAgICAgICBucl9zZWdtZW50czsgIC8qIG51bWJlciBvZiBzZWdtZW50cyAgICAgICAgICAgICAg
ICAgICAqLwpAQCAtMTI1LDYgKzEyNiwxNyBAQCBzdHJ1Y3QgYmxraWZfcmVxdWVzdF9ydyB7CiAJ
fSBzZWdbQkxLSUZfTUFYX1NFR01FTlRTX1BFUl9SRVFVRVNUXTsKIH0gX19hdHRyaWJ1dGVfXygo
X19wYWNrZWRfXykpOwogCitzdHJ1Y3QgYmxraWZfcmVxdWVzdF9yd19oZWFkZXIgeworCXVpbnQ4
X3QgICAgICAgIG5yX3NlZ21lbnRzOyAgLyogbnVtYmVyIG9mIHNlZ21lbnRzICAgICAgICAgICAg
ICAgICAgICovCisJYmxraWZfdmRldl90ICAgaGFuZGxlOyAgICAgICAvKiBvbmx5IGZvciByZWFk
L3dyaXRlIHJlcXVlc3RzICAgICAgICAgKi8KKyNpZmRlZiBDT05GSUdfWDg2XzY0CisJdWludDMy
X3QgICAgICAgX3BhZDE7CSAgICAgLyogb2Zmc2V0b2YoYmxraWZfcmVxdWVzdCx1LnJ3LmlkKSA9
PSA4ICovCisjZW5kaWYKKwl1aW50NjRfdCAgICAgICBpZDsgICAgICAgICAgIC8qIHByaXZhdGUg
Z3Vlc3QgdmFsdWUsIGVjaG9lZCBpbiByZXNwICAqLworCWJsa2lmX3NlY3Rvcl90IHNlY3Rvcl9u
dW1iZXI7Lyogc3RhcnQgc2VjdG9yIGlkeCBvbiBkaXNrIChyL3cgb25seSkgICovCisJdWludDY0
X3QgICAgICAgc2VnX2lkOwkgICAgIC8qIHNlZ21lbnQgaWQgaW4gdGhlIHNlZ21lbnQgc2hhZG93
ICAgICAqLwkKK30gX19hdHRyaWJ1dGVfXygoX19wYWNrZWRfXykpOworCiBzdHJ1Y3QgYmxraWZf
cmVxdWVzdF9kaXNjYXJkIHsKIAl1aW50OF90ICAgICAgICBmbGFnOyAgICAgICAgIC8qIEJMS0lG
X0RJU0NBUkRfU0VDVVJFIG9yIHplcm8uICAgICAgICAqLwogI2RlZmluZSBCTEtJRl9ESVNDQVJE
X1NFQ1VSRSAoMTw8MCkgIC8qIGlnbm9yZWQgaWYgZGlzY2FyZC1zZWN1cmU9MCAgICAgICAgICAq
LwpAQCAtMTM1LDcgKzE0Nyw2IEBAIHN0cnVjdCBibGtpZl9yZXF1ZXN0X2Rpc2NhcmQgewogCXVp
bnQ2NF90ICAgICAgIGlkOyAgICAgICAgICAgLyogcHJpdmF0ZSBndWVzdCB2YWx1ZSwgZWNob2Vk
IGluIHJlc3AgICovCiAJYmxraWZfc2VjdG9yX3Qgc2VjdG9yX251bWJlcjsKIAl1aW50NjRfdCAg
ICAgICBucl9zZWN0b3JzOwotCXVpbnQ4X3QgICAgICAgIF9wYWQzOwogfSBfX2F0dHJpYnV0ZV9f
KChfX3BhY2tlZF9fKSk7CiAKIHN0cnVjdCBibGtpZl9yZXF1ZXN0IHsKQEAgLTE0NiwxMiArMTU3
LDI0IEBAIHN0cnVjdCBibGtpZl9yZXF1ZXN0IHsKIAl9IHU7CiB9IF9fYXR0cmlidXRlX18oKF9f
cGFja2VkX18pKTsKIAorc3RydWN0IGJsa2lmX3JlcXVlc3RfaGVhZGVyIHsKKwl1aW50OF90ICAg
ICAgICBvcGVyYXRpb247ICAgIC8qIEJMS0lGX09QXz8/PyAgICAgICAgICAgICAgICAgICAgICAg
ICAqLworCXVuaW9uIHsKKwkJc3RydWN0IGJsa2lmX3JlcXVlc3RfcndfaGVhZGVyIHJ3OworCQlz
dHJ1Y3QgYmxraWZfcmVxdWVzdF9kaXNjYXJkIGRpc2NhcmQ7CisJfSB1OworfSBfX2F0dHJpYnV0
ZV9fKChfX3BhY2tlZF9fKSk7CisKIHN0cnVjdCBibGtpZl9yZXNwb25zZSB7CiAJdWludDY0X3Qg
ICAgICAgIGlkOyAgICAgICAgICAgICAgLyogY29waWVkIGZyb20gcmVxdWVzdCAqLwogCXVpbnQ4
X3QgICAgICAgICBvcGVyYXRpb247ICAgICAgIC8qIGNvcGllZCBmcm9tIHJlcXVlc3QgKi8KIAlp
bnQxNl90ICAgICAgICAgc3RhdHVzOyAgICAgICAgICAvKiBCTEtJRl9SU1BfPz8/ICAgICAgICov
CiB9OwogCitzdHJ1Y3QgYmxraWZfcmVzcG9uc2Vfc2VnbWVudCB7CisJY2hhcgkJZHVtbXk7Cit9
IF9fYXR0cmlidXRlX18oKF9fcGFja2VkX18pKTsKKwogLyoKICAqIFNUQVRVUyBSRVRVUk4gQ09E
RVMuCiAgKi8KQEAgLTE2Nyw2ICsxOTAsOCBAQCBzdHJ1Y3QgYmxraWZfcmVzcG9uc2UgewogICov
CiAKIERFRklORV9SSU5HX1RZUEVTKGJsa2lmLCBzdHJ1Y3QgYmxraWZfcmVxdWVzdCwgc3RydWN0
IGJsa2lmX3Jlc3BvbnNlKTsKK0RFRklORV9SSU5HX1RZUEVTKGJsa2lmX3JlcXVlc3QsIHN0cnVj
dCBibGtpZl9yZXF1ZXN0X2hlYWRlciwgc3RydWN0IGJsa2lmX3Jlc3BvbnNlKTsKK0RFRklORV9S
SU5HX1RZUEVTKGJsa2lmX3NlZ21lbnQsIHN0cnVjdCBibGtpZl9yZXF1ZXN0X3NlZ21lbnQsIHN0
cnVjdCBibGtpZl9yZXNwb25zZV9zZWdtZW50KTsKIAogI2RlZmluZSBWRElTS19DRFJPTSAgICAg
ICAgMHgxCiAjZGVmaW5lIFZESVNLX1JFTU9WQUJMRSAgICAweDIK

--_002_A21691DE07B84740B5F0B81466D5148A23BCF213SHSMSX102ccrcor_
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--_002_A21691DE07B84740B5F0B81466D5148A23BCF213SHSMSX102ccrcor_--


From xen-devel-bounces@lists.xen.org Thu Aug 16 10:26:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 10:26:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1xH8-0005lN-F6; Thu, 16 Aug 2012 10:26:02 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ronghui.duan@intel.com>) id 1T1xH6-0005l4-9V
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 10:26:01 +0000
Received: from [85.158.139.83:59035] by server-3.bemta-5.messagelabs.com id
	95/A2-27237-7BACC205; Thu, 16 Aug 2012 10:25:59 +0000
X-Env-Sender: ronghui.duan@intel.com
X-Msg-Ref: server-11.tower-182.messagelabs.com!1345112755!21213241!1
X-Originating-IP: [143.182.124.37]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMzcgPT4gMjY5MDQ5\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7857 invoked from network); 16 Aug 2012 10:25:56 -0000
Received: from mga14.intel.com (HELO mga14.intel.com) (143.182.124.37)
	by server-11.tower-182.messagelabs.com with SMTP;
	16 Aug 2012 10:25:56 -0000
Received: from azsmga001.ch.intel.com ([10.2.17.19])
	by azsmga102.ch.intel.com with ESMTP; 16 Aug 2012 03:25:55 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.77,778,1336374000"; d="scan'208";a="181734822"
Received: from fmsmsx103.amr.corp.intel.com ([10.19.9.34])
	by azsmga001.ch.intel.com with ESMTP; 16 Aug 2012 03:25:54 -0700
Received: from fmsmsx101.amr.corp.intel.com (10.19.9.52) by
	FMSMSX103.amr.corp.intel.com (10.19.9.34) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Thu, 16 Aug 2012 03:25:54 -0700
Received: from shsmsx101.ccr.corp.intel.com (10.239.4.153) by
	FMSMSX101.amr.corp.intel.com (10.19.9.52) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Thu, 16 Aug 2012 03:25:53 -0700
Received: from shsmsx102.ccr.corp.intel.com ([169.254.2.196]) by
	SHSMSX101.ccr.corp.intel.com ([169.254.1.82]) with mapi id
	14.01.0355.002; Thu, 16 Aug 2012 18:25:51 +0800
From: "Duan, Ronghui" <ronghui.duan@intel.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Thread-Topic: [Xen-devel] [RFC v1 2/5] VBD: enlarge max segment per request
	in blkfront
Thread-Index: Ac17mXn4BmTF/5rIR/qU6lefhXPSaA==
Date: Thu, 16 Aug 2012 10:25:50 +0000
Message-ID: <A21691DE07B84740B5F0B81466D5148A23BCF213@SHSMSX102.ccr.corp.intel.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: yes
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
Content-Type: multipart/mixed;
	boundary="_002_A21691DE07B84740B5F0B81466D5148A23BCF213SHSMSX102ccrcor_"
MIME-Version: 1.0
Cc: "Ian.Jackson@eu.citrix.com" <Ian.Jackson@eu.citrix.com>,
	"Stefano.Stabellini@eu.citrix.com" <Stefano.Stabellini@eu.citrix.com>
Subject: [Xen-devel] [RFC v1 2/5] VBD: enlarge max segment per request in
 blkfront
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--_002_A21691DE07B84740B5F0B81466D5148A23BCF213SHSMSX102ccrcor_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable


Add segring (segment ring) support in blkfront.
Signed-off-by: Ronghui Duan <ronghui.duan@intel.com>

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index a263faf..b9f383d 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -76,10 +76,23 @@ struct blk_shadow {
 	unsigned long frame[BLKIF_MAX_SEGMENTS_PER_REQUEST];
 };
=20
+struct blk_req_shadow {
+	struct blkif_request_header req;
+	struct request *request;
+};
+
+struct blk_seg_shadow {
+	uint64_t id;
+	struct blkif_request_segment req;
+	unsigned long frame;
+};
+
 static DEFINE_MUTEX(blkfront_mutex);
 static const struct block_device_operations xlvbd_block_fops;
=20
 #define BLK_RING_SIZE __CONST_RING_SIZE(blkif, PAGE_SIZE)
+#define BLK_REQ_RING_SIZE __CONST_RING_SIZE(blkif_request, PAGE_SIZE)
+#define BLK_SEG_RING_SIZE __CONST_RING_SIZE(blkif_segment, PAGE_SIZE)
=20
 /*
  * We have one of these per vbd, whether ide, scsi or 'other'.  They
@@ -96,22 +109,30 @@ struct blkfront_info
 	blkif_vdev_t handle;
 	enum blkif_state connected;
 	int ring_ref;
+	int reqring_ref;
+	int segring_ref;
 	struct blkif_front_ring ring;
+	struct blkif_request_front_ring reqring;
+	struct blkif_segment_front_ring segring;
 	struct scatterlist *sg;
 	unsigned int evtchn, irq;
 	struct request_queue *rq;
 	struct work_struct work;
 	struct gnttab_free_callback callback;
 	struct blk_shadow *shadow;
+	struct blk_req_shadow *req_shadow;
+	struct blk_seg_shadow *seg_shadow;
 	struct blk_front_operations *ops;
 	enum blkif_ring_type ring_type;
 	unsigned long shadow_free;
+	unsigned long seg_shadow_free;
 	unsigned int feature_flush;
 	unsigned int flush_op;
 	unsigned int feature_discard:1;
 	unsigned int feature_secdiscard:1;
 	unsigned int discard_granularity;
 	unsigned int discard_alignment;
+	unsigned long last_id;
 	int is_ready;
 };
=20
@@ -124,7 +145,7 @@ static struct blk_front_operations {
 	unsigned long (*get_id) (struct blkfront_info *info);
 	void (*add_id) (struct blkfront_info *info, unsigned long id);
 	void (*save_seg_shadow) (struct blkfront_info *info, unsigned long mfn,
-				 unsigned long id, int i);
+				 unsigned long id, int i, struct blkif_request_segment *ring_seg);
 	void (*save_req_shadow) (struct blkfront_info *info,
 				 struct request *req, unsigned long id);
 	struct request *(*get_req_from_shadow)(struct blkfront_info *info,
@@ -136,14 +157,16 @@ static struct blk_front_operations {
 	void (*update_rsp_event) (struct blkfront_info *info, int i);
 	void (*update_rsp_cons) (struct blkfront_info *info);
 	void (*update_req_prod_pvt) (struct blkfront_info *info);
+	void (*update_segment_rsp_cons) (struct blkfront_info *info, unsigned long id);
 	void (*ring_push) (struct blkfront_info *info, int *notify);
 	int (*recover) (struct blkfront_info *info);
 	int (*ring_full) (struct blkfront_info *info);
+	int (*segring_full) (struct blkfront_info *info, unsigned int nr_segments);
 	int (*setup_blkring) (struct xenbus_device *dev, struct blkfront_info *info);
 	void (*free_blkring) (struct blkfront_info *info, int suspend);
 	void (*blkif_completion) (struct blkfront_info *info, unsigned long id);
 	unsigned int max_seg;
-} blk_front_ops;=20
+} blk_front_ops, blk_front_ops_v2;=20
=20
 static unsigned int nr_minors;
 static unsigned long *minors;
@@ -179,6 +202,24 @@ static unsigned long get_id_from_freelist(struct blkfront_info *info)
 	return free;
 }
=20
+static unsigned long get_id_from_freelist_v2(struct blkfront_info *info)
+{
+	unsigned long free =3D info->shadow_free;
+	BUG_ON(free >=3D BLK_REQ_RING_SIZE);
+	info->shadow_free =3D info->req_shadow[free].req.u.rw.id;
+	info->req_shadow[free].req.u.rw.id =3D 0x0fffffee; /* debug */
+	return free;
+}
+
+static unsigned long get_seg_shadow_id(struct blkfront_info *info)
+{
+	unsigned long free =3D info->seg_shadow_free;
+	BUG_ON(free >=3D BLK_SEG_RING_SIZE);
+	info->seg_shadow_free =3D info->seg_shadow[free].id;
+	info->seg_shadow[free].id =3D 0x0fffffee; /* debug */
+	return free;
+}
+
 void add_id_to_freelist(struct blkfront_info *info,
 			       unsigned long id)
 {
@@ -187,6 +228,21 @@ void add_id_to_freelist(struct blkfront_info *info,
 	info->shadow_free =3D id;
 }
=20
+static void add_id_to_freelist_v2(struct blkfront_info *info,
+				  unsigned long id)
+{
+	info->req_shadow[id].req.u.rw.id  =3D info->shadow_free;
+	info->req_shadow[id].request =3D NULL;
+	info->shadow_free =3D id;
+}
+
+static void free_seg_shadow_id(struct blkfront_info *info,
+				  unsigned long id)
+{
+	info->seg_shadow[id].id  =3D info->seg_shadow_free;
+	info->seg_shadow_free =3D id;
+}
+
 static int xlbd_reserve_minors(unsigned int minor, unsigned int nr)
 {
 	unsigned int end =3D minor + nr;
@@ -299,6 +355,14 @@ void *ring_get_request(struct blkfront_info *info)
 	return (void *)RING_GET_REQUEST(&info->ring, info->ring.req_prod_pvt);
 }
=20
+void *ring_get_request_v2(struct blkfront_info *info)
+{
+	struct blkif_request_header *ring_req;
+	ring_req =3D RING_GET_REQUEST(&info->reqring,
+				info->reqring.req_prod_pvt);
+	return (void *)ring_req;
+}
+
 struct blkif_request_segment *ring_get_segment(struct blkfront_info *info, int i)
 {
 	struct blkif_request *ring_req =3D
@@ -306,12 +370,34 @@ struct blkif_request_segment *ring_get_segment(struct blkfront_info *info, int i
 	return &ring_req->u.rw.seg[i];
 }
=20
-void save_seg_shadow(struct blkfront_info *info,
-		      unsigned long mfn, unsigned long id, int i)
+struct blkif_request_segment *ring_get_segment_v2(struct blkfront_info *info, int i)
+{
+	return RING_GET_REQUEST(&info->segring, info->segring.req_prod_pvt++);
+}
+
+void save_seg_shadow(struct blkfront_info *info, unsigned long mfn,
+		     unsigned long id, int i, struct blkif_request_segment *ring_seg)
 {
 	info->shadow[id].frame[i] =3D mfn_to_pfn(mfn);
 }
=20
+void save_seg_shadow_v2(struct blkfront_info *info, unsigned long mfn,
+			unsigned long id, int i, struct blkif_request_segment *ring_seg)
+{
+	struct blkif_request_header *ring_req;
+	unsigned long seg_id = get_seg_shadow_id(info);
+
+	ring_req = (struct blkif_request_header *)info->ops->ring_get_request(info);
+	if (i == 0)
+		ring_req->u.rw.seg_id = seg_id;
+	else
+		info->seg_shadow[info->last_id].id = seg_id;
+	info->seg_shadow[seg_id].frame = mfn_to_pfn(mfn);
+	memcpy(&(info->seg_shadow[seg_id].req), ring_seg,
+	       sizeof(struct blkif_request_segment));
+	info->last_id = seg_id;
+}
+
 void save_req_shadow(struct blkfront_info *info,
 		      struct request *req, unsigned long id)
 {
@@ -321,10 +407,34 @@ void save_req_shadow(struct blkfront_info *info,
 	info->shadow[id].request =3D req;
 }
=20
+void save_req_shadow_v2(struct blkfront_info *info,
+		      struct request *req, unsigned long id)
+{
+	struct blkif_request_header *ring_req =3D
+			(struct blkif_request_header *)info->ops->ring_get_request(info);
+	info->req_shadow[id].req =3D *ring_req;
+	info->req_shadow[id].request =3D req;
+}
+
 void update_req_prod_pvt(struct blkfront_info *info)
 {
 	info->ring.req_prod_pvt++;
 }
+
+void update_req_prod_pvt_v2(struct blkfront_info *info)
+{
+	info->reqring.req_prod_pvt++;
+}
+
+int segring_full(struct blkfront_info *info, unsigned int nr_segments)
+{
+	return 0;
+}
+
+int segring_full_v2(struct blkfront_info *info, unsigned int nr_segments)
+{
+	return nr_segments > RING_FREE_REQUESTS(&info->segring);
+}
 /*
  * Generate a Xen blkfront IO request from a blk layer request.  Reads
  * and writes are handled as expected.
@@ -347,19 +457,18 @@ static int blkif_queue_request(struct request *req)
 		return 1;
=20
 	if (gnttab_alloc_grant_references(
-		BLKIF_MAX_SEGMENTS_PER_REQUEST, &gref_head) < 0) {
+		info->ops->max_seg, &gref_head) < 0) {
 		gnttab_request_free_callback(
 			&info->callback,
 			blkif_restart_queue_callback,
 			info,
-			BLKIF_MAX_SEGMENTS_PER_REQUEST);
+			info->ops->max_seg);
 		return 1;
 	}
=20
 	/* Fill out a communications ring structure. */
 	ring_req =3D (struct blkif_request *)info->ops->ring_get_request(info);
 	id =3D info->ops->get_id(info);
-	//info->shadow[id].request =3D req;
=20
 	ring_req->u.rw.id =3D id;
 	ring_req->u.rw.sector_number =3D (blkif_sector_t)blk_rq_pos(req);
@@ -392,6 +501,9 @@ static int blkif_queue_request(struct request *req)
 							   info->sg);
 		BUG_ON(ring_req->u.rw.nr_segments > info->ops->max_seg);
=20
+		if (info->ops->segring_full(info, ring_req->u.rw.nr_segments))
+			goto wait;
+
 		for_each_sg(info->sg, sg, ring_req->u.rw.nr_segments, i) {
 			buffer_mfn =3D pfn_to_mfn(page_to_pfn(sg_page(sg)));
 			fsect =3D sg->offset >> 9;
@@ -411,7 +523,7 @@ static int blkif_queue_request(struct request *req)
 					.gref       =3D ref,
 					.first_sect =3D fsect,
 					.last_sect  =3D lsect };
-			info->ops->save_seg_shadow(info, buffer_mfn, id, i);
+			info->ops->save_seg_shadow(info, buffer_mfn, id, i, ring_seg);
 		}
 	}
=20
@@ -423,6 +535,11 @@ static int blkif_queue_request(struct request *req)
 	gnttab_free_grant_references(gref_head);
=20
 	return 0;
+wait:
+	gnttab_free_grant_references(gref_head);
+	pr_debug("Not enough free segment ring entries\n");
+	info->ops->add_id(info, id);
+	return 1;
 }
=20
 void ring_push(struct blkfront_info *info, int *notify)
@@ -430,6 +547,13 @@ void ring_push(struct blkfront_info *info, int *notify)
 	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&info->ring, *notify);
 }
=20
+void ring_push_v2(struct blkfront_info *info, int *notify)
+{
+	RING_PUSH_REQUESTS(&info->segring);
+	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&info->reqring, *notify);
+}
+
+
 static inline void flush_requests(struct blkfront_info *info)
 {
 	int notify;
@@ -440,6 +564,16 @@ static inline void flush_requests(struct blkfront_info *info)
 		notify_remote_via_irq(info->irq);
 }
=20
+static int ring_free_v2(struct blkfront_info *info)
+{
+	return (!RING_FULL(&info->reqring) &&
+		RING_FREE_REQUESTS(&info->segring) > RING_SIZE(&info->segring)/3);
+}
+static int ring_full_v2(struct blkfront_info *info)
+{
+	return (RING_FULL(&info->reqring) || RING_FULL(&info->segring));
+}
+
 /*
  * do_blkif_request
  *  read a block; request is in a request queue
@@ -490,6 +624,17 @@ wait:
 		flush_requests(info);
 }
=20
+static void update_blk_queue(struct blkfront_info *info)
+{
+	struct request_queue *q =3D info->rq;
+
+	blk_queue_max_segments(q, info->ops->max_seg);
+	blk_queue_max_hw_sectors(q, queue_max_segments(q) *
+				 queue_max_segment_size(q) /
+				 queue_logical_block_size(q));
+	return;
+}
+
 static int xlvbd_init_blk_queue(struct gendisk *gd, u16 sector_size)
 {
 	struct request_queue *rq;
@@ -740,7 +885,7 @@ static void xlvbd_release_gendisk(struct blkfront_info *info)
=20
 static void kick_pending_request_queues(struct blkfront_info *info)
 {
-	if (!ring_full(info)) {
+	if (!info->ops->ring_full(info)) {
 		/* Re-enable calldowns. */
 		blk_start_queue(info->rq);
 		/* Kick things off immediately. */
@@ -793,39 +938,115 @@ static void blkif_completion(struct blkfront_info *info, unsigned long id)
 		gnttab_end_foreign_access(s->req.u.rw.seg[i].gref, 0, 0UL);
 }
=20
+static void blkif_completion_v2(struct blkfront_info *info, unsigned long id)
+{
+	int i;
+	/* Must not be called for BLKIF_OP_DISCARD: its flag field occupies
+	 * the same location as nr_segments. */
+	unsigned short nr = info->req_shadow[id].req.u.rw.nr_segments;
+	unsigned long shadow_id, free_id;
+
+	shadow_id = info->req_shadow[id].req.u.rw.seg_id;
+	for (i = 0; i < nr; i++) {
+		gnttab_end_foreign_access(info->seg_shadow[shadow_id].req.gref, 0, 0UL);
+		free_id = shadow_id;
+		shadow_id = info->seg_shadow[shadow_id].id;
+		free_seg_shadow_id(info, free_id);
+	}
+}
+
 struct blkif_response *ring_get_response(struct blkfront_info *info)
 {
 	return RING_GET_RESPONSE(&info->ring, info->ring.rsp_cons);
 }
+
+struct blkif_response *ring_get_response_v2(struct blkfront_info *info)
+{
+	return RING_GET_RESPONSE(&info->reqring, info->reqring.rsp_cons);
+}
+
 RING_IDX get_rsp_prod(struct blkfront_info *info)
 {
 	return info->ring.sring->rsp_prod;
 }
+
+RING_IDX get_rsp_prod_v2(struct blkfront_info *info)
+{
+	return info->reqring.sring->rsp_prod;
+}
+
 RING_IDX get_rsp_cons(struct blkfront_info *info)
 {
 	return info->ring.rsp_cons;
 }
+
+RING_IDX get_rsp_cons_v2(struct blkfront_info *info)
+{
+	return info->reqring.rsp_cons;
+}
+
 struct request *get_req_from_shadow(struct blkfront_info *info,
 				    unsigned long id)
 {
 	return info->shadow[id].request;
 }
+
+struct request *get_req_from_shadow_v2(struct blkfront_info *info,
+				    unsigned long id)
+{
+	return info->req_shadow[id].request;
+}
+
 void update_rsp_cons(struct blkfront_info *info)
 {
 	info->ring.rsp_cons++;
 }
+
+void update_rsp_cons_v2(struct blkfront_info *info)
+{
+	info->reqring.rsp_cons++;
+}
+
 RING_IDX get_req_prod_pvt(struct blkfront_info *info)
 {
 	return info->ring.req_prod_pvt;
 }
+
+RING_IDX get_req_prod_pvt_v2(struct blkfront_info *info)
+{
+	return info->reqring.req_prod_pvt;
+}
+
 void check_left_response(struct blkfront_info *info, int *more_to_do)
 {
 	RING_FINAL_CHECK_FOR_RESPONSES(&info->ring, *more_to_do);
 }
+
+void check_left_response_v2(struct blkfront_info *info, int *more_to_do)
+{
+	RING_FINAL_CHECK_FOR_RESPONSES(&info->reqring, *more_to_do);
+}
+
 void update_rsp_event(struct blkfront_info *info, int i)
 {
 	info->ring.sring->rsp_event =3D i + 1;
 }
+
+void update_rsp_event_v2(struct blkfront_info *info, int i)
+{
+	info->reqring.sring->rsp_event =3D i + 1;
+}
+
+void update_segment_rsp_cons(struct blkfront_info *info, unsigned long id)
+{
+	return;
+}
+
+void update_segment_rsp_cons_v2(struct blkfront_info *info, unsigned long id)
+{
+	info->segring.rsp_cons += info->req_shadow[id].req.u.rw.nr_segments;
+	return;
+}
 static irqreturn_t blkif_interrupt(int irq, void *dev_id)
 {
 	struct request *req;
@@ -903,8 +1124,8 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
 			if (unlikely(bret->status !=3D BLKIF_RSP_OKAY))
 				dev_dbg(&info->xbdev->dev, "Bad return from blkdev data "
 					"request: %x\n", bret->status);
-
 			__blk_end_request_all(req, error);
+			info->ops->update_segment_rsp_cons(info, id);
 			break;
 		default:
 			BUG();
@@ -949,6 +1170,43 @@ static int init_shadow(struct blkfront_info *info)
 	return 0;
 }
=20
+static int init_shadow_v2(struct blkfront_info *info)
+{
+	unsigned int ring_size;
+	int i;
+
+	if (info->ring_type !=3D RING_TYPE_UNDEFINED)
+		return 0;
+
+	info->ring_type =3D RING_TYPE_2;
+
+	ring_size =3D BLK_REQ_RING_SIZE;
+	info->req_shadow =3D kzalloc(sizeof(struct blk_req_shadow) * ring_size,
+				   GFP_KERNEL);
+	if (!info->req_shadow)
+		return -ENOMEM;
+
+	for (i =3D 0; i < ring_size; i++)
+		info->req_shadow[i].req.u.rw.id =3D i+1;
+	info->req_shadow[ring_size - 1].req.u.rw.id =3D 0x0fffffff;
+
+	ring_size =3D BLK_SEG_RING_SIZE;
+
+	info->seg_shadow = kzalloc(sizeof(struct blk_seg_shadow) * ring_size,
+				   GFP_KERNEL);
+	if (!info->seg_shadow) {
+		kfree(info->req_shadow);
+		return -ENOMEM;
+	}
+
+	for (i =3D 0; i < ring_size; i++) {
+		info->seg_shadow[i].id =3D i+1;
+	}
+	info->seg_shadow[ring_size - 1].id =3D 0x0fffffff;
+
+	return 0;
+}
+
 static int setup_blkring(struct xenbus_device *dev,
 			 struct blkfront_info *info)
 {
@@ -1003,6 +1261,84 @@ fail:
 	return err;
 }
=20
+static int setup_blkring_v2(struct xenbus_device *dev,
+			    struct blkfront_info *info)
+{
+	struct blkif_request_sring *sring;
+	struct blkif_segment_sring *seg_sring;
+	int err;
+
+	info->reqring_ref =3D GRANT_INVALID_REF;
+
+	sring = (struct blkif_request_sring *)__get_free_page(GFP_NOIO | __GFP_HIGH);
+	if (!sring) {
+		xenbus_dev_fatal(dev, -ENOMEM, "allocating shared ring");
+		return -ENOMEM;
+	}
+	SHARED_RING_INIT(sring);
+	FRONT_RING_INIT(&info->reqring, sring, PAGE_SIZE);
+
+	err =3D xenbus_grant_ring(dev, virt_to_mfn(info->reqring.sring));
+	if (err < 0) {
+		free_page((unsigned long)sring);
+		info->reqring.sring =3D NULL;
+		goto fail;
+	}
+
+	info->reqring_ref =3D err;
+
+	info->segring_ref =3D GRANT_INVALID_REF;
+
+	seg_sring = (struct blkif_segment_sring *)__get_free_page(GFP_NOIO | __GFP_HIGH);
+	if (!seg_sring) {
+		xenbus_dev_fatal(dev, -ENOMEM, "allocating shared ring");
+		err =3D -ENOMEM;
+		goto fail;
+	}
+	SHARED_RING_INIT(seg_sring);
+	FRONT_RING_INIT(&info->segring, seg_sring, PAGE_SIZE);
+
+	err =3D xenbus_grant_ring(dev, virt_to_mfn(info->segring.sring));
+	if (err < 0) {
+		free_page((unsigned long)seg_sring);
+		info->segring.sring =3D NULL;
+		goto fail;
+	}
+
+	info->segring_ref =3D err;
+
+	info->sg =3D kzalloc(sizeof(struct scatterlist) * info->ops->max_seg,
+			   GFP_KERNEL);
+	if (!info->sg) {
+		err =3D -ENOMEM;
+		goto fail;
+	}
+	sg_init_table(info->sg, info->ops->max_seg);
+
+	err =3D init_shadow_v2(info);
+	if (err)
+		goto fail;
+
+	err =3D xenbus_alloc_evtchn(dev, &info->evtchn);
+	if (err)
+		goto fail;
+
+	err =3D bind_evtchn_to_irqhandler(info->evtchn,
+					blkif_interrupt,
+					IRQF_SAMPLE_RANDOM, "blkif", info);
+	if (err <=3D 0) {
+		xenbus_dev_fatal(dev, err,
+				 "bind_evtchn_to_irqhandler failed");
+		goto fail;
+	}
+	info->irq =3D err;
+
+	return 0;
+fail:
+	blkif_free(info, 0);
+	return err;
+}
+
 static void free_blkring(struct blkfront_info *info, int suspend)
 {
 	if (info->ring_ref !=3D GRANT_INVALID_REF) {
@@ -1018,6 +1354,32 @@ static void free_blkring(struct blkfront_info *info, int suspend)
 		kfree(info->shadow);
 }
=20
+static void free_blkring_v2(struct blkfront_info *info, int suspend)
+{
+	if (info->reqring_ref !=3D GRANT_INVALID_REF) {
+		gnttab_end_foreign_access(info->reqring_ref, 0,
+					  (unsigned long)info->reqring.sring);
+		info->reqring_ref =3D GRANT_INVALID_REF;
+		info->reqring.sring =3D NULL;
+	}
+
+	if (info->segring_ref !=3D GRANT_INVALID_REF) {
+		gnttab_end_foreign_access(info->segring_ref, 0,
+					  (unsigned long)info->segring.sring);
+		info->segring_ref =3D GRANT_INVALID_REF;
+		info->segring.sring =3D NULL;
+	}
+
+	kfree(info->sg);
+
+	if (!suspend) {
+		kfree(info->req_shadow);
+		kfree(info->seg_shadow);
+	}
+
+}
+
+
 /* Common code used when first setting up, and when resuming. */
 static int talk_to_blkback(struct xenbus_device *dev,
 			   struct blkfront_info *info)
@@ -1025,9 +1387,17 @@ static int talk_to_blkback(struct xenbus_device *dev,
 	const char *message =3D NULL;
 	struct xenbus_transaction xbt;
 	int err;
+	unsigned int type;
=20
 	/* register ring ops */
-	info->ops =3D &blk_front_ops;
+	err =3D xenbus_scanf(XBT_NIL, dev->otherend, "blkback-ring-type", "%u",
+			   &type);
+	if (err !=3D 1)
+		type =3D 1;
+	if (type =3D=3D 2)
+		info->ops =3D &blk_front_ops_v2;
+	else
+		info->ops =3D &blk_front_ops;
=20
 	/* Create shared ring, alloc event channel. */
 	err =3D info->ops->setup_blkring(dev, info);
@@ -1040,13 +1410,6 @@ again:
 		xenbus_dev_fatal(dev, err, "starting transaction");
 		goto destroy_blkring;
 	}
-
-	err =3D xenbus_printf(xbt, dev->nodename,
-			    "ring-ref", "%u", info->ring_ref);
-	if (err) {
-		message =3D "writing ring-ref";
-		goto abort_transaction;
-	}
 	err =3D xenbus_printf(xbt, dev->nodename,
 			    "event-channel", "%u", info->evtchn);
 	if (err) {
@@ -1059,7 +1422,40 @@ again:
 		message =3D "writing protocol";
 		goto abort_transaction;
 	}
-
+	if (type =3D=3D 1) {
+		err =3D xenbus_printf(xbt, dev->nodename,
+				    "ring-ref", "%u", info->ring_ref);
+		if (err) {
+			message =3D "writing ring-ref";
+			goto abort_transaction;
+		}
+		err =3D xenbus_printf(xbt, dev->nodename, "blkfront-ring-type",
+				    "%u", type);
+		if (err) {
+			message =3D "writing blkfront ring type";
+			goto abort_transaction;
+		}
+	}
+	if (type =3D=3D 2) {
+		err =3D xenbus_printf(xbt, dev->nodename,
+				    "reqring-ref", "%u", info->reqring_ref);
+		if (err) {
+			message =3D "writing reqring-ref";
+			goto abort_transaction;
+		}
+		err =3D xenbus_printf(xbt, dev->nodename,
+				    "segring-ref", "%u", info->segring_ref);
+		if (err) {
+			message =3D "writing segring-ref";
+			goto abort_transaction;
+		}
+		err =3D xenbus_printf(xbt, dev->nodename, "blkfront-ring-type",
+				    "%u", type);
+		if (err) {
+			message =3D "writing blkfront ring type";
+			goto abort_transaction;
+		}
+	}
 	err =3D xenbus_transaction_end(xbt, 0);
 	if (err) {
 		if (err =3D=3D -EAGAIN)
@@ -1164,7 +1560,7 @@ static int blkfront_probe(struct xenbus_device *dev,
 }
=20
=20
-static int blkif_recover(struct blkfront_info *info)
+static int recover_from_v1_to_v1(struct blkfront_info *info)
 {
 	int i;
 	struct blkif_request *req;
@@ -1233,6 +1629,372 @@ static int blkif_recover(struct blkfront_info *info)
 	return 0;
 }
=20
+/* migrate from V2 type ring to V1 type*/
+static int recover_from_v2_to_v1(struct blkfront_info *info)
+{
+	struct blk_req_shadow *copy;
+	struct blk_seg_shadow *seg_copy;
+	struct request *req;
+	struct blkif_request *new_req;
+	int i, j, err;
+	unsigned int req_rs;
+	struct bio *biolist =3D NULL, *biotail =3D NULL, *bio;
+	unsigned long index;
+	unsigned long flags;
+
+	pr_info("Warning: migrating to an older backend; some I/O may fail\n");
+
+	/* Stage 1: Init the new shadow state. */
+	info->ring_type =3D RING_TYPE_UNDEFINED;
+	err =3D init_shadow(info);
+	if (err)
+		return err;
+
+	req_rs =3D BLK_REQ_RING_SIZE;
+
+	/* Stage 2: Set up free list. */
+	info->shadow_free =3D info->ring.req_prod_pvt;
+
+	/* Stage 3: Find pending requests and requeue them. */
+	for (i =3D 0; i < req_rs; i++) {
+		req =3D info->req_shadow[i].request;
+		/* Not in use? */
+		if (!req)
+			continue;
+
+		if (ring_full(info))
+			goto out;
+
+		copy =3D &info->req_shadow[i];
+
+		/* We get a new request, reset the blkif request and shadow state. */
+		new_req = RING_GET_REQUEST(&info->ring, info->ring.req_prod_pvt);
+
+		if (copy->req.operation =3D=3D BLKIF_OP_DISCARD) {
+			new_req->operation =3D BLKIF_OP_DISCARD;
+			new_req->u.discard =3D copy->req.u.discard;
+			new_req->u.discard.id =3D get_id_from_freelist(info);
+			info->shadow[new_req->u.discard.id].request =3D req;
+		}
+		else {
+			if (copy->req.u.rw.nr_segments > BLKIF_MAX_SEGMENTS_PER_REQUEST)
+				continue;
+
+			new_req->u.rw.id =3D get_id_from_freelist(info);
+			info->shadow[new_req->u.rw.id].request =3D req;
+			new_req->operation =3D copy->req.operation;
+			new_req->u.rw.nr_segments =3D copy->req.u.rw.nr_segments;
+			new_req->u.rw.handle =3D copy->req.u.rw.handle;
+			new_req->u.rw.sector_number =3D copy->req.u.rw.sector_number;
+			index =3D copy->req.u.rw.seg_id;
+			for (j =3D 0; j < new_req->u.rw.nr_segments; j++) {
+				seg_copy =3D &info->seg_shadow[index];
+				new_req->u.rw.seg[j].gref =3D seg_copy->req.gref;
+				new_req->u.rw.seg[j].first_sect =3D seg_copy->req.first_sect;
+				new_req->u.rw.seg[j].last_sect = seg_copy->req.last_sect;
+				info->shadow[new_req->u.rw.id].frame[j] =3D seg_copy->frame;
+				gnttab_grant_foreign_access_ref(
+					new_req->u.rw.seg[j].gref,
+					info->xbdev->otherend_id,
+					pfn_to_mfn(info->shadow[new_req->u.rw.id].frame[j]),
+					rq_data_dir(info->shadow[new_req->u.rw.id].request));
+				index =3D info->seg_shadow[index].id;
+			}
+		}
+		info->shadow[new_req->u.rw.id].req =3D *new_req;
+		info->ring.req_prod_pvt++;
+		info->req_shadow[i].request =3D NULL;
+
+	}
+out:
+	xenbus_switch_state(info->xbdev, XenbusStateConnected);
+
+	spin_lock_irqsave(&info->io_lock, flags);
+
+	/* cancel the request and resubmit the bio */
+	for (i =3D 0; i < req_rs; i++) {
+		req =3D info->req_shadow[i].request;
+		if (!req)
+			continue;
+
+		blkif_completion_v2(info, i);
+
+		if (biolist == NULL)
+			biolist =3D req->bio;
+		else
+			biotail->bi_next =3D req->bio;
+		biotail =3D req->biotail;
+		req->bio =3D NULL;
+		__blk_put_request(info->rq, req);
+	}
+
+	while ((req =3D blk_peek_request(info->rq)) !=3D NULL) {
+
+		blk_start_request(req);
+
+		if (biolist =3D=3D NULL)
+			biolist =3D req->bio;
+		else
+			biotail->bi_next =3D req->bio;
+		biotail =3D req->biotail;
+		req->bio =3D NULL;
+		__blk_put_request(info->rq, req);
+	}
+
+	/* Now safe for us to use the shared ring */
+	info->connected =3D BLKIF_STATE_CONNECTED;
+
+	/* need update the queue limit setting */
+	update_blk_queue(info);
+
+	/* Send off requeued requests */
+	flush_requests(info);
+
+	/* Kick any other new requests queued since we resumed */
+	kick_pending_request_queues(info);
+
+	spin_unlock_irqrestore(&info->io_lock, flags);
+
+	/* free original shadow*/
+	kfree(info->seg_shadow);
+	kfree(info->req_shadow);
+
+	while (biolist) {
+		bio =3D biolist;
+		biolist =3D biolist->bi_next;
+		bio->bi_next =3D NULL;
+		submit_bio(bio->bi_rw, bio);
+	}
+
+	return 0;
+}
+
+static int blkif_recover(struct blkfront_info *info)
+{
+	int rc;
+
+	if (info->ring_type =3D=3D RING_TYPE_1)
+		rc =3D recover_from_v1_to_v1(info);
+	else if (info->ring_type =3D=3D RING_TYPE_2)
+		rc =3D recover_from_v2_to_v1(info);
+	else
+		rc =3D -EPERM;
+	return rc;
+}
+
+static int recover_from_v1_to_v2(struct blkfront_info *info)
+{
+	int i, err;
+	struct blkif_request_header *req;
+	struct blkif_request_segment *segring_req;
+	struct blk_shadow *copy;
+	int j;
+	unsigned long seg_id, last_id =3D 0x0fffffff;
+
+	/* Stage 1: Init the new shadow. */
+	info->ring_type =3D RING_TYPE_UNDEFINED;
+	err =3D init_shadow_v2(info);
+	if (err)
+		return err;
+
+	/* Stage 2: Set up free list. */
+	info->shadow_free =3D info->reqring.req_prod_pvt;
+	info->seg_shadow_free =3D info->segring.req_prod_pvt;
+
+	/* Stage 3: Find pending requests and requeue them. */
+	for (i =3D 0; i < BLK_RING_SIZE; i++) {
+		copy =3D &info->shadow[i];
+		/* Not in use? */
+		if (!copy->request)
+			continue;
+
+		/* We get a new request, reset the blkif request and shadow state. */
+		req =3D RING_GET_REQUEST(&info->reqring, info->reqring.req_prod_pvt);
+
+		if (copy->req.operation =3D=3D BLKIF_OP_DISCARD) {
+			req->operation =3D BLKIF_OP_DISCARD;
+			req->u.discard =3D copy->req.u.discard;
+			req->u.discard.id =3D get_id_from_freelist_v2(info);
+			info->req_shadow[req->u.discard.id].request =3D copy->request;
+			info->req_shadow[req->u.discard.id].req =3D *req;
+		}
+		else {
+			req->u.rw.id =3D get_id_from_freelist_v2(info);
+			req->operation =3D copy->req.operation;
+			req->u.rw.nr_segments =3D copy->req.u.rw.nr_segments;
+			req->u.rw.handle =3D copy->req.u.rw.handle;
+			req->u.rw.sector_number =3D copy->req.u.rw.sector_number;
+			for (j =3D 0; j < req->u.rw.nr_segments; j++) {
+				seg_id =3D get_seg_shadow_id(info);
+				if (j =3D=3D 0)
+					req->u.rw.seg_id =3D seg_id;
+				else
+					info->seg_shadow[last_id].id =3D seg_id;
+				segring_req = RING_GET_REQUEST(&info->segring, info->segring.req_prod_pvt);
+				segring_req->gref =3D copy->req.u.rw.seg[j].gref;
+				segring_req->first_sect =3D copy->req.u.rw.seg[j].first_sect;
+				segring_req->last_sect =3D copy->req.u.rw.seg[j].last_sect;
+				info->seg_shadow[seg_id].req =3D *segring_req;
+				info->seg_shadow[seg_id].frame =3D copy->frame[j];
+				info->segring.req_prod_pvt++;
+				gnttab_grant_foreign_access_ref(
+					segring_req->gref,
+					info->xbdev->otherend_id,
+					pfn_to_mfn(copy->frame[j]),
+					rq_data_dir(copy->request));
+				last_id =3D seg_id;
+			}
+			info->req_shadow[req->u.rw.id].req =3D *req;
+			info->req_shadow[req->u.rw.id].request =3D copy->request;
+		}
+
+		info->reqring.req_prod_pvt++;
+	}
+
+	/* need update the queue limit setting */
+	update_blk_queue(info);
+
+	/* free original shadow*/
+	kfree(info->shadow);
+
+	xenbus_switch_state(info->xbdev, XenbusStateConnected);
+
+	spin_lock_irq(&info->io_lock);
+
+	/* Now safe for us to use the shared ring */
+	info->connected =3D BLKIF_STATE_CONNECTED;
+
+	/* Send off requeued requests */
+	flush_requests(info);
+
+	/* Kick any other new requests queued since we resumed */
+	kick_pending_request_queues(info);
+
+	spin_unlock_irq(&info->io_lock);
+
+	return 0;
+}
+
+static int recover_from_v2_to_v2(struct blkfront_info *info)
+{
+	int i;
+	struct blkif_request_header *req;
+	struct blkif_request_segment *segring_req;
+	struct blk_req_shadow *copy;
+	struct blk_seg_shadow *seg_copy;
+	unsigned long index = 0x0fffffff, seg_id, last_id = 0x0fffffff;
+	int j;
+	unsigned int req_rs, seg_rs;
+	unsigned long flags;
+
+	req_rs = BLK_REQ_RING_SIZE;
+	seg_rs = BLK_SEG_RING_SIZE;
+
+	/* Stage 1: Make a safe copy of the shadow state. */
+	copy = kmalloc(sizeof(struct blk_req_shadow) * req_rs,
+		       GFP_NOIO | __GFP_REPEAT | __GFP_HIGH);
+	if (!copy)
+		return -ENOMEM;
+
+	seg_copy = kmalloc(sizeof(struct blk_seg_shadow) * seg_rs,
+			   GFP_NOIO | __GFP_REPEAT | __GFP_HIGH);
+	if (!seg_copy) {
+		kfree(copy);
+		return -ENOMEM;
+	}
+
+	memcpy(copy, info->req_shadow, sizeof(struct blk_req_shadow) * req_rs);
+	memcpy(seg_copy, info->seg_shadow,
+	       sizeof(struct blk_seg_shadow) * seg_rs);
+
+	/* Stage 2: Set up the free lists. */
+	for (i = 0; i < req_rs; i++)
+		info->req_shadow[i].req.u.rw.id = i + 1;
+	info->req_shadow[req_rs - 1].req.u.rw.id = 0x0fffffff;
+
+	for (i = 0; i < seg_rs; i++)
+		info->seg_shadow[i].id = i + 1;
+	info->seg_shadow[seg_rs - 1].id = 0x0fffffff;
+
+	info->shadow_free = info->reqring.req_prod_pvt;
+	info->seg_shadow_free = info->segring.req_prod_pvt;
+
+	/* Stage 3: Find pending requests and requeue them. */
+	for (i = 0; i < req_rs; i++) {
+		/* Not in use? */
+		if (!copy[i].request)
+			continue;
+
+		req = RING_GET_REQUEST(&info->reqring, info->reqring.req_prod_pvt);
+		*req = copy[i].req;
+
+		req->u.rw.id = get_id_from_freelist_v2(info);
+		memcpy(&info->req_shadow[req->u.rw.id], &copy[i], sizeof(copy[i]));
+
+		if (req->operation != BLKIF_OP_DISCARD) {
+			for (j = 0; j < req->u.rw.nr_segments; j++) {
+				seg_id = get_seg_shadow_id(info);
+				if (j == 0)
+					index = req->u.rw.seg_id;
+				else
+					index = seg_copy[index].id;
+				gnttab_grant_foreign_access_ref(
+					seg_copy[index].req.gref,
+					info->xbdev->otherend_id,
+					pfn_to_mfn(seg_copy[index].frame),
+					rq_data_dir(info->req_shadow[req->u.rw.id].request));
+				segring_req = RING_GET_REQUEST(&info->segring, info->segring.req_prod_pvt);
+				memcpy(segring_req, &seg_copy[index].req,
+				       sizeof(struct blkif_request_segment));
+				if (j == 0)
+					req->u.rw.seg_id = seg_id;
+				else
+					info->seg_shadow[last_id].id = seg_id;
+
+				memcpy(&info->seg_shadow[seg_id],
+				       &seg_copy[index], sizeof(struct blk_seg_shadow));
+				info->segring.req_prod_pvt++;
+				last_id = seg_id;
+			}
+		}
+		info->req_shadow[req->u.rw.id].req = *req;
+
+		info->reqring.req_prod_pvt++;
+	}
+
+	kfree(seg_copy);
+	kfree(copy);
+
+	xenbus_switch_state(info->xbdev, XenbusStateConnected);
+
+	spin_lock_irqsave(&info->io_lock, flags);
+
+	/* Now safe for us to use the shared ring */
+	info->connected = BLKIF_STATE_CONNECTED;
+
+	/* Send off requeued requests */
+	flush_requests(info);
+
+	/* Kick any other new requests queued since we resumed */
+	kick_pending_request_queues(info);
+
+	spin_unlock_irqrestore(&info->io_lock, flags);
+
+	return 0;
+}
+
+static int blkif_recover_v2(struct blkfront_info *info)
+{
+	int rc;
+
+	if (info->ring_type == RING_TYPE_1)
+		rc = recover_from_v1_to_v2(info);
+	else if (info->ring_type == RING_TYPE_2)
+		rc = recover_from_v2_to_v2(info);
+	else
+		rc = -EPERM;
+	return rc;
+}
 /**
  * We are reconnecting to the backend, due to a suspend/resume, or a backend
  * driver restart.  We tear down our blkif structure and recreate it, but
@@ -1609,15 +2371,44 @@ static struct blk_front_operations blk_front_ops = {
 	.update_rsp_event = update_rsp_event,
 	.update_rsp_cons = update_rsp_cons,
 	.update_req_prod_pvt = update_req_prod_pvt,
+	.update_segment_rsp_cons = update_segment_rsp_cons,
 	.ring_push = ring_push,
 	.recover = blkif_recover,
 	.ring_full = ring_full,
+	.segring_full = segring_full,
 	.setup_blkring = setup_blkring,
 	.free_blkring = free_blkring,
 	.blkif_completion = blkif_completion,
 	.max_seg = BLKIF_MAX_SEGMENTS_PER_REQUEST,
 };
 
+static struct blk_front_operations blk_front_ops_v2 = {
+	.ring_get_request = ring_get_request_v2,
+	.ring_get_response = ring_get_response_v2,
+	.ring_get_segment = ring_get_segment_v2,
+	.get_id = get_id_from_freelist_v2,
+	.add_id = add_id_to_freelist_v2,
+	.save_seg_shadow = save_seg_shadow_v2,
+	.save_req_shadow = save_req_shadow_v2,
+	.get_req_from_shadow = get_req_from_shadow_v2,
+	.get_rsp_prod = get_rsp_prod_v2,
+	.get_rsp_cons = get_rsp_cons_v2,
+	.get_req_prod_pvt = get_req_prod_pvt_v2,
+	.check_left_response = check_left_response_v2,
+	.update_rsp_event = update_rsp_event_v2,
+	.update_rsp_cons = update_rsp_cons_v2,
+	.update_req_prod_pvt = update_req_prod_pvt_v2,
+	.update_segment_rsp_cons = update_segment_rsp_cons_v2,
+	.ring_push = ring_push_v2,
+	.recover = blkif_recover_v2,
+	.ring_full = ring_full_v2,
+	.segring_full = segring_full_v2,
+	.setup_blkring = setup_blkring_v2,
+	.free_blkring = free_blkring_v2,
+	.blkif_completion = blkif_completion_v2,
+	.max_seg = BLKIF_MAX_SEGMENTS_PER_REQUEST_V2,
+};
+
 static const struct block_device_operations xlvbd_block_fops =
 {
 	.owner = THIS_MODULE,
diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index f100ce2..a5a98b0 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -475,7 +475,7 @@ void gnttab_end_foreign_access(grant_ref_t ref, int readonly,
 		/* XXX This needs to be fixed so that the ref and page are
 		   placed on a list to be freed up later. */
 		printk(KERN_WARNING
-		       "WARNING: leaking g.e. and page still in use!\n");
+		       "WARNING: ref %u leaking g.e. and page still in use!\n", ref);
 	}
 }
 EXPORT_SYMBOL_GPL(gnttab_end_foreign_access);
diff --git a/include/xen/interface/io/blkif.h b/include/xen/interface/io/blkif.h
index ee338bf..763489a 100644
--- a/include/xen/interface/io/blkif.h
+++ b/include/xen/interface/io/blkif.h
@@ -108,6 +108,7 @@ typedef uint64_t blkif_sector_t;
  * NB. This could be 12 if the ring indexes weren't stored in the same page.
  */
 #define BLKIF_MAX_SEGMENTS_PER_REQUEST 11
+#define BLKIF_MAX_SEGMENTS_PER_REQUEST_V2 128
 
 struct blkif_request_rw {
 	uint8_t        nr_segments;  /* number of segments                   */
@@ -125,6 +126,17 @@ struct blkif_request_rw {
 	} seg[BLKIF_MAX_SEGMENTS_PER_REQUEST];
 } __attribute__((__packed__));
 
+struct blkif_request_rw_header {
+	uint8_t        nr_segments;  /* number of segments                   */
+	blkif_vdev_t   handle;       /* only for read/write requests         */
+#ifdef CONFIG_X86_64
+	uint32_t       _pad1;	     /* offsetof(blkif_request,u.rw.id) == 8 */
+#endif
+	uint64_t       id;           /* private guest value, echoed in resp  */
+	blkif_sector_t sector_number;/* start sector idx on disk (r/w only)  */
+	uint64_t       seg_id;	     /* segment id in the segment shadow     */
+} __attribute__((__packed__));
+
 struct blkif_request_discard {
 	uint8_t        flag;         /* BLKIF_DISCARD_SECURE or zero.        */
 #define BLKIF_DISCARD_SECURE (1<<0)  /* ignored if discard-secure=0          */
@@ -135,7 +147,6 @@ struct blkif_request_discard {
 	uint64_t       id;           /* private guest value, echoed in resp  */
 	blkif_sector_t sector_number;
 	uint64_t       nr_sectors;
-	uint8_t        _pad3;
 } __attribute__((__packed__));
 
 struct blkif_request {
@@ -146,12 +157,24 @@ struct blkif_request {
 	} u;
 } __attribute__((__packed__));
 
+struct blkif_request_header {
+	uint8_t        operation;    /* BLKIF_OP_???                         */
+	union {
+		struct blkif_request_rw_header rw;
+		struct blkif_request_discard discard;
+	} u;
+} __attribute__((__packed__));
+
 struct blkif_response {
 	uint64_t        id;              /* copied from request */
 	uint8_t         operation;       /* copied from request */
 	int16_t         status;          /* BLKIF_RSP_???       */
 };
 
+struct blkif_response_segment {
+	char		dummy;
+} __attribute__((__packed__));
+
 /*
  * STATUS RETURN CODES.
  */
@@ -167,6 +190,8 @@ struct blkif_response {
  */
 
 DEFINE_RING_TYPES(blkif, struct blkif_request, struct blkif_response);
+DEFINE_RING_TYPES(blkif_request, struct blkif_request_header, struct blkif_response);
+DEFINE_RING_TYPES(blkif_segment, struct blkif_request_segment, struct blkif_response_segment);
 
 #define VDISK_CDROM        0x1
 #define VDISK_REMOVABLE    0x2
-ronghui



YnVzX3RyYW5zYWN0aW9uIHhidDsKIAlpbnQgZXJyOworCXVuc2lnbmVkIGludCB0eXBlOwogCiAJ
LyogcmVnaXN0ZXIgcmluZyBvcHMgKi8KLQlpbmZvLT5vcHMgPSAmYmxrX2Zyb250X29wczsKKwll
cnIgPSB4ZW5idXNfc2NhbmYoWEJUX05JTCwgZGV2LT5vdGhlcmVuZCwgImJsa2JhY2stcmluZy10
eXBlIiwgIiV1IiwKKwkJCSAgICZ0eXBlKTsKKwlpZiAoZXJyICE9IDEpCisJCXR5cGUgPSAxOwor
CWlmICh0eXBlID09IDIpCisJCWluZm8tPm9wcyA9ICZibGtfZnJvbnRfb3BzX3YyOworCWVsc2UK
KwkJaW5mby0+b3BzID0gJmJsa19mcm9udF9vcHM7CiAKIAkvKiBDcmVhdGUgc2hhcmVkIHJpbmcs
IGFsbG9jIGV2ZW50IGNoYW5uZWwuICovCiAJZXJyID0gaW5mby0+b3BzLT5zZXR1cF9ibGtyaW5n
KGRldiwgaW5mbyk7CkBAIC0xMDQwLDEzICsxNDEwLDYgQEAgYWdhaW46CiAJCXhlbmJ1c19kZXZf
ZmF0YWwoZGV2LCBlcnIsICJzdGFydGluZyB0cmFuc2FjdGlvbiIpOwogCQlnb3RvIGRlc3Ryb3lf
YmxrcmluZzsKIAl9Ci0KLQllcnIgPSB4ZW5idXNfcHJpbnRmKHhidCwgZGV2LT5ub2RlbmFtZSwK
LQkJCSAgICAicmluZy1yZWYiLCAiJXUiLCBpbmZvLT5yaW5nX3JlZik7Ci0JaWYgKGVycikgewot
CQltZXNzYWdlID0gIndyaXRpbmcgcmluZy1yZWYiOwotCQlnb3RvIGFib3J0X3RyYW5zYWN0aW9u
OwotCX0KIAllcnIgPSB4ZW5idXNfcHJpbnRmKHhidCwgZGV2LT5ub2RlbmFtZSwKIAkJCSAgICAi
ZXZlbnQtY2hhbm5lbCIsICIldSIsIGluZm8tPmV2dGNobik7CiAJaWYgKGVycikgewpAQCAtMTA1
OSw3ICsxNDIyLDQwIEBAIGFnYWluOgogCQltZXNzYWdlID0gIndyaXRpbmcgcHJvdG9jb2wiOwog
CQlnb3RvIGFib3J0X3RyYW5zYWN0aW9uOwogCX0KLQorCWlmICh0eXBlID09IDEpIHsKKwkJZXJy
ID0geGVuYnVzX3ByaW50Zih4YnQsIGRldi0+bm9kZW5hbWUsCisJCQkJICAgICJyaW5nLXJlZiIs
ICIldSIsIGluZm8tPnJpbmdfcmVmKTsKKwkJaWYgKGVycikgeworCQkJbWVzc2FnZSA9ICJ3cml0
aW5nIHJpbmctcmVmIjsKKwkJCWdvdG8gYWJvcnRfdHJhbnNhY3Rpb247CisJCX0KKwkJZXJyID0g
eGVuYnVzX3ByaW50Zih4YnQsIGRldi0+bm9kZW5hbWUsICJibGtmcm9udC1yaW5nLXR5cGUiLAor
CQkJCSAgICAiJXUiLCB0eXBlKTsKKwkJaWYgKGVycikgeworCQkJbWVzc2FnZSA9ICJ3cml0aW5n
IGJsa2Zyb250IHJpbmcgdHlwZSI7CisJCQlnb3RvIGFib3J0X3RyYW5zYWN0aW9uOworCQl9CQor
CX0KKwlpZiAodHlwZSA9PSAyKSB7CisJCWVyciA9IHhlbmJ1c19wcmludGYoeGJ0LCBkZXYtPm5v
ZGVuYW1lLAorCQkJCSAgICAicmVxcmluZy1yZWYiLCAiJXUiLCBpbmZvLT5yZXFyaW5nX3JlZik7
CisJCWlmIChlcnIpIHsKKwkJCW1lc3NhZ2UgPSAid3JpdGluZyByZXFyaW5nLXJlZiI7CisJCQln
b3RvIGFib3J0X3RyYW5zYWN0aW9uOworCQl9CisJCWVyciA9IHhlbmJ1c19wcmludGYoeGJ0LCBk
ZXYtPm5vZGVuYW1lLAorCQkJCSAgICAic2VncmluZy1yZWYiLCAiJXUiLCBpbmZvLT5zZWdyaW5n
X3JlZik7CisJCWlmIChlcnIpIHsKKwkJCW1lc3NhZ2UgPSAid3JpdGluZyBzZWdyaW5nLXJlZiI7
CisJCQlnb3RvIGFib3J0X3RyYW5zYWN0aW9uOworCQl9CisJCWVyciA9IHhlbmJ1c19wcmludGYo
eGJ0LCBkZXYtPm5vZGVuYW1lLCAiYmxrZnJvbnQtcmluZy10eXBlIiwKKwkJCQkgICAgIiV1Iiwg
dHlwZSk7CisJCWlmIChlcnIpIHsKKwkJCW1lc3NhZ2UgPSAid3JpdGluZyBibGtmcm9udCByaW5n
IHR5cGUiOworCQkJZ290byBhYm9ydF90cmFuc2FjdGlvbjsKKwkJfQkKKwl9CiAJZXJyID0geGVu
YnVzX3RyYW5zYWN0aW9uX2VuZCh4YnQsIDApOwogCWlmIChlcnIpIHsKIAkJaWYgKGVyciA9PSAt
RUFHQUlOKQpAQCAtMTE2NCw3ICsxNTYwLDcgQEAgc3RhdGljIGludCBibGtmcm9udF9wcm9iZShz
dHJ1Y3QgeGVuYnVzX2RldmljZSAqZGV2LAogfQogCiAKLXN0YXRpYyBpbnQgYmxraWZfcmVjb3Zl
cihzdHJ1Y3QgYmxrZnJvbnRfaW5mbyAqaW5mbykKK3N0YXRpYyBpbnQgcmVjb3Zlcl9mcm9tX3Yx
X3RvX3YxKHN0cnVjdCBibGtmcm9udF9pbmZvICppbmZvKQogewogCWludCBpOwogCXN0cnVjdCBi
bGtpZl9yZXF1ZXN0ICpyZXE7CkBAIC0xMjMzLDYgKzE2MjksMzcyIEBAIHN0YXRpYyBpbnQgYmxr
aWZfcmVjb3ZlcihzdHJ1Y3QgYmxrZnJvbnRfaW5mbyAqaW5mbykKIAlyZXR1cm4gMDsKIH0KIAor
LyogbWlncmF0ZSBmcm9tIFYyIHR5cGUgcmluZyB0byBWMSB0eXBlKi8KK3N0YXRpYyBpbnQgcmVj
b3Zlcl9mcm9tX3YyX3RvX3YxKHN0cnVjdCBibGtmcm9udF9pbmZvICppbmZvKQorewkKKwlzdHJ1
Y3QgYmxrX3JlcV9zaGFkb3cgKmNvcHk7CisJc3RydWN0IGJsa19zZWdfc2hhZG93ICpzZWdfY29w
eTsKKwlzdHJ1Y3QgcmVxdWVzdCAqcmVxOworCXN0cnVjdCBibGtpZl9yZXF1ZXN0ICpuZXdfcmVx
OworCWludCBpLCBqLCBlcnI7CisJdW5zaWduZWQgaW50IHJlcV9yczsKKwlzdHJ1Y3QgYmlvICpi
aW9saXN0ID0gTlVMTCwgKmJpb3RhaWwgPSBOVUxMLCAqYmlvOworCXVuc2lnbmVkIGxvbmcgaW5k
ZXg7CisJdW5zaWduZWQgbG9uZyBmbGFnczsKKworCXByX2luZm8oIldhcm5pbmcsIG1pZ3JhdGUg
dG8gb2xkZXIgYmFja2VuZCwgc29tZSBpbyBtYXkgZmFpbFxuIik7CisKKwkvKiBTdGFnZSAxOiBJ
bml0IHRoZSBuZXcgc2hhZG93IHN0YXRlLiAqLworCWluZm8tPnJpbmdfdHlwZSA9IFJJTkdfVFlQ
RV9VTkRFRklORUQ7CisJZXJyID0gaW5pdF9zaGFkb3coaW5mbyk7CisJaWYgKGVycikKKwkJcmV0
dXJuIGVycjsKKworCXJlcV9ycyA9IEJMS19SRVFfUklOR19TSVpFOworCisJLyogU3RhZ2UgMjog
U2V0IHVwIGZyZWUgbGlzdC4gKi8KKwlpbmZvLT5zaGFkb3dfZnJlZSA9IGluZm8tPnJpbmcucmVx
X3Byb2RfcHZ0OworCisJLyogU3RhZ2UgMzogRmluZCBwZW5kaW5nIHJlcXVlc3RzIGFuZCByZXF1
ZXVlIHRoZW0uICovCisJZm9yIChpID0gMDsgaSA8IHJlcV9yczsgaSsrKSB7CisJCXJlcSA9IGlu
Zm8tPnJlcV9zaGFkb3dbaV0ucmVxdWVzdDsKKwkJLyogTm90IGluIHVzZT8gKi8KKwkJaWYgKCFy
ZXEpCisJCQljb250aW51ZTsKKworCQlpZiAocmluZ19mdWxsKGluZm8pKSAKKwkJCWdvdG8gb3V0
OworCisJCWNvcHkgPSAmaW5mby0+cmVxX3NoYWRvd1tpXTsKKworICAgICAgICAgICAgICAgIC8q
IFdlIGdldCBhIG5ldyByZXF1ZXN0LCByZXNldCB0aGUgYmxraWYgcmVxdWVzdCBhbmQgc2hhZG93
IHN0YXRlLiAqLworCQluZXdfcmVxID0gUklOR19HRVRfUkVRVUVTVCgmaW5mby0+cmluZywgaW5m
by0+cmluZy5yZXFfcHJvZF9wdnQpOworCisJCWlmIChjb3B5LT5yZXEub3BlcmF0aW9uID09IEJM
S0lGX09QX0RJU0NBUkQpIHsKKwkJCW5ld19yZXEtPm9wZXJhdGlvbiA9IEJMS0lGX09QX0RJU0NB
UkQ7CisJCQluZXdfcmVxLT51LmRpc2NhcmQgPSBjb3B5LT5yZXEudS5kaXNjYXJkOworCQkJbmV3
X3JlcS0+dS5kaXNjYXJkLmlkID0gZ2V0X2lkX2Zyb21fZnJlZWxpc3QoaW5mbyk7CisJCQlpbmZv
LT5zaGFkb3dbbmV3X3JlcS0+dS5kaXNjYXJkLmlkXS5yZXF1ZXN0ID0gcmVxOworCQl9CisJCWVs
c2UgeworCQkJaWYgKGNvcHktPnJlcS51LnJ3Lm5yX3NlZ21lbnRzID4gQkxLSUZfTUFYX1NFR01F
TlRTX1BFUl9SRVFVRVNUKSAKKyAgICAgICAgICAgICAgICAgICAgICAgIAljb250aW51ZTsgCisK
KwkJCW5ld19yZXEtPnUucncuaWQgPSBnZXRfaWRfZnJvbV9mcmVlbGlzdChpbmZvKTsKKwkJCWlu
Zm8tPnNoYWRvd1tuZXdfcmVxLT51LnJ3LmlkXS5yZXF1ZXN0ID0gcmVxOworCQkJbmV3X3JlcS0+
b3BlcmF0aW9uID0gY29weS0+cmVxLm9wZXJhdGlvbjsKKwkJCW5ld19yZXEtPnUucncubnJfc2Vn
bWVudHMgPSBjb3B5LT5yZXEudS5ydy5ucl9zZWdtZW50czsKKwkJCW5ld19yZXEtPnUucncuaGFu
ZGxlID0gY29weS0+cmVxLnUucncuaGFuZGxlOworCQkJbmV3X3JlcS0+dS5ydy5zZWN0b3JfbnVt
YmVyID0gY29weS0+cmVxLnUucncuc2VjdG9yX251bWJlcjsKKwkJCWluZGV4ID0gY29weS0+cmVx
LnUucncuc2VnX2lkOworCQkJZm9yIChqID0gMDsgaiA8IG5ld19yZXEtPnUucncubnJfc2VnbWVu
dHM7IGorKykgeworCQkJCXNlZ19jb3B5ID0gJmluZm8tPnNlZ19zaGFkb3dbaW5kZXhdOworCQkJ
CW5ld19yZXEtPnUucncuc2VnW2pdLmdyZWYgPSBzZWdfY29weS0+cmVxLmdyZWY7CisJCQkJbmV3
X3JlcS0+dS5ydy5zZWdbal0uZmlyc3Rfc2VjdCA9IHNlZ19jb3B5LT5yZXEuZmlyc3Rfc2VjdDsK
KwkJCQluZXdfcmVxLT51LnJ3LnNlZ1tqXS5sYXN0X3NlY3QgPSBzZWdfY29weS0+cmVxLmxhc3Rf
c2VjdDsgCisJCQkJaW5mby0+c2hhZG93W25ld19yZXEtPnUucncuaWRdLmZyYW1lW2pdID0gc2Vn
X2NvcHktPmZyYW1lOworCQkJCWdudHRhYl9ncmFudF9mb3JlaWduX2FjY2Vzc19yZWYoCisJCQkJ
CW5ld19yZXEtPnUucncuc2VnW2pdLmdyZWYsCisJCQkJCWluZm8tPnhiZGV2LT5vdGhlcmVuZF9p
ZCwKKwkJCQkJcGZuX3RvX21mbihpbmZvLT5zaGFkb3dbbmV3X3JlcS0+dS5ydy5pZF0uZnJhbWVb
al0pLAorCQkJCQlycV9kYXRhX2RpcihpbmZvLT5zaGFkb3dbbmV3X3JlcS0+dS5ydy5pZF0ucmVx
dWVzdCkpOworCQkJCWluZGV4ID0gaW5mby0+c2VnX3NoYWRvd1tpbmRleF0uaWQ7CisJCQl9CisJ
CX0KKwkJaW5mby0+c2hhZG93W25ld19yZXEtPnUucncuaWRdLnJlcSA9ICpuZXdfcmVxOworCQlp
bmZvLT5yaW5nLnJlcV9wcm9kX3B2dCsrOworCQlpbmZvLT5yZXFfc2hhZG93W2ldLnJlcXVlc3Qg
PSBOVUxMOworCQkKKwl9CitvdXQ6CisJeGVuYnVzX3N3aXRjaF9zdGF0ZShpbmZvLT54YmRldiwg
WGVuYnVzU3RhdGVDb25uZWN0ZWQpOworCisJc3Bpbl9sb2NrX2lycXNhdmUoJmluZm8tPmlvX2xv
Y2ssIGZsYWdzKTsKKworCS8qIGNhbmNlbCB0aGUgcmVxdWVzdCBhbmQgcmVzdWJtaXQgdGhlIGJp
byAqLworCWZvciAoaSA9IDA7IGkgPCByZXFfcnM7IGkrKykgeworCQlyZXEgPSBpbmZvLT5yZXFf
c2hhZG93W2ldLnJlcXVlc3Q7CisJCWlmICghcmVxKQorCQkJY29udGludWU7CisKKwkJYmxraWZf
Y29tcGxldGlvbl92MihpbmZvLCBpKTsKKworCQlpZiAoYmlvbGlzdCA9PSBOVUxMKQkKKwkJCWJp
b2xpc3QgPSByZXEtPmJpbzsKKwkJZWxzZQorCQkJYmlvdGFpbC0+YmlfbmV4dCA9IHJlcS0+Ymlv
OworCQliaW90YWlsID0gcmVxLT5iaW90YWlsOworCQlyZXEtPmJpbyA9IE5VTEw7CisJCV9fYmxr
X3B1dF9yZXF1ZXN0KGluZm8tPnJxLCByZXEpOworCX0KKworCXdoaWxlICgocmVxID0gYmxrX3Bl
ZWtfcmVxdWVzdChpbmZvLT5ycSkpICE9IE5VTEwpIHsKKworCQlibGtfc3RhcnRfcmVxdWVzdChy
ZXEpOworCisJCWlmIChiaW9saXN0ID09IE5VTEwpCisJCQliaW9saXN0ID0gcmVxLT5iaW87CisJ
CWVsc2UKKwkJCWJpb3RhaWwtPmJpX25leHQgPSByZXEtPmJpbzsKKwkJYmlvdGFpbCA9IHJlcS0+
YmlvdGFpbDsKKwkJcmVxLT5iaW8gPSBOVUxMOworCQlfX2Jsa19wdXRfcmVxdWVzdChpbmZvLT5y
cSwgcmVxKTsKKwl9CisKKwkvKiBOb3cgc2FmZSBmb3IgdXMgdG8gdXNlIHRoZSBzaGFyZWQgcmlu
ZyAqLworCWluZm8tPmNvbm5lY3RlZCA9IEJMS0lGX1NUQVRFX0NPTk5FQ1RFRDsKKworCS8qIG5l
ZWQgdXBkYXRlIHRoZSBxdWV1ZSBsaW1pdCBzZXR0aW5nICovCisJdXBkYXRlX2Jsa19xdWV1ZShp
bmZvKTsKKworCS8qIFNlbmQgb2ZmIHJlcXVldWVkIHJlcXVlc3RzICovCisJZmx1c2hfcmVxdWVz
dHMoaW5mbyk7CisKKwkvKiBLaWNrIGFueSBvdGhlciBuZXcgcmVxdWVzdHMgcXVldWVkIHNpbmNl
IHdlIHJlc3VtZWQgKi8KKwlraWNrX3BlbmRpbmdfcmVxdWVzdF9xdWV1ZXMoaW5mbyk7CisKKwlz
cGluX3VubG9ja19pcnFyZXN0b3JlKCZpbmZvLT5pb19sb2NrLCBmbGFncyk7CisKKwkvKiBmcmVl
IG9yaWdpbmFsIHNoYWRvdyovCisJa2ZyZWUoaW5mby0+c2VnX3NoYWRvdyk7CisJa2ZyZWUoaW5m
by0+cmVxX3NoYWRvdyk7CisKKwl3aGlsZShiaW9saXN0KSB7CisJCWJpbyA9IGJpb2xpc3Q7CisJ
CWJpb2xpc3QgPSBiaW9saXN0LT5iaV9uZXh0OworCQliaW8tPmJpX25leHQgPSBOVUxMOworCQlz
dWJtaXRfYmlvKGJpby0+YmlfcncsIGJpbyk7CisJfQorCisJcmV0dXJuIDA7Cit9CisKK3N0YXRp
YyBpbnQgYmxraWZfcmVjb3ZlcihzdHJ1Y3QgYmxrZnJvbnRfaW5mbyAqaW5mbykKK3sKKwlpbnQg
cmM7CisKKwlpZiAoaW5mby0+cmluZ190eXBlID09IFJJTkdfVFlQRV8xKQorCQlyYyA9IHJlY292
ZXJfZnJvbV92MV90b192MShpbmZvKTsKKwllbHNlIGlmIChpbmZvLT5yaW5nX3R5cGUgPT0gUklO
R19UWVBFXzIpCisJCXJjID0gcmVjb3Zlcl9mcm9tX3YyX3RvX3YxKGluZm8pOworCWVsc2UKKwkJ
cmMgPSAtRVBFUk07CisJcmV0dXJuIHJjOworfQorCitzdGF0aWMgaW50IHJlY292ZXJfZnJvbV92
MV90b192MihzdHJ1Y3QgYmxrZnJvbnRfaW5mbyAqaW5mbykKK3sKKwlpbnQgaSxlcnI7CisJc3Ry
dWN0IGJsa2lmX3JlcXVlc3RfaGVhZGVyICpyZXE7CisJc3RydWN0IGJsa2lmX3JlcXVlc3Rfc2Vn
bWVudCAqc2VncmluZ19yZXE7CisJc3RydWN0IGJsa19zaGFkb3cgKmNvcHk7CisJaW50IGo7CisJ
dW5zaWduZWQgbG9uZyBzZWdfaWQsIGxhc3RfaWQgPSAweDBmZmZmZmZmOworCisJLyogU3RhZ2Ug
MTogSW5pdCB0aGUgbmV3IHNoYWRvdy4gKi8KKwlpbmZvLT5yaW5nX3R5cGUgPSBSSU5HX1RZUEVf
VU5ERUZJTkVEOworCWVyciA9IGluaXRfc2hhZG93X3YyKGluZm8pOworCWlmIChlcnIpCisJCXJl
dHVybiBlcnI7CisKKwkvKiBTdGFnZSAyOiBTZXQgdXAgZnJlZSBsaXN0LiAqLworCWluZm8tPnNo
YWRvd19mcmVlID0gaW5mby0+cmVxcmluZy5yZXFfcHJvZF9wdnQ7CisJaW5mby0+c2VnX3NoYWRv
d19mcmVlID0gaW5mby0+c2VncmluZy5yZXFfcHJvZF9wdnQ7CisKKwkvKiBTdGFnZSAzOiBGaW5k
IHBlbmRpbmcgcmVxdWVzdHMgYW5kIHJlcXVldWUgdGhlbS4gKi8KKwlmb3IgKGkgPSAwOyBpIDwg
QkxLX1JJTkdfU0laRTsgaSsrKSB7CisJCWNvcHkgPSAmaW5mby0+c2hhZG93W2ldOworCQkvKiBO
b3QgaW4gdXNlPyAqLworCQlpZiAoIWNvcHktPnJlcXVlc3QpCisJCQljb250aW51ZTsKKworCQkv
KiBXZSBnZXQgYSBuZXcgcmVxdWVzdCwgcmVzZXQgdGhlIGJsa2lmIHJlcXVlc3QgYW5kIHNoYWRv
dyBzdGF0ZS4gKi8KKwkJcmVxID0gUklOR19HRVRfUkVRVUVTVCgmaW5mby0+cmVxcmluZywgaW5m
by0+cmVxcmluZy5yZXFfcHJvZF9wdnQpOworCisJCWlmIChjb3B5LT5yZXEub3BlcmF0aW9uID09
IEJMS0lGX09QX0RJU0NBUkQpIHsKKwkJCXJlcS0+b3BlcmF0aW9uID0gQkxLSUZfT1BfRElTQ0FS
RDsKKwkJCXJlcS0+dS5kaXNjYXJkID0gY29weS0+cmVxLnUuZGlzY2FyZDsKKwkJCXJlcS0+dS5k
aXNjYXJkLmlkID0gZ2V0X2lkX2Zyb21fZnJlZWxpc3RfdjIoaW5mbyk7CisJCQlpbmZvLT5yZXFf
c2hhZG93W3JlcS0+dS5kaXNjYXJkLmlkXS5yZXF1ZXN0ID0gY29weS0+cmVxdWVzdDsKKwkJCWlu
Zm8tPnJlcV9zaGFkb3dbcmVxLT51LmRpc2NhcmQuaWRdLnJlcSA9ICpyZXE7CisJCX0KKwkJZWxz
ZSB7CisJCQlyZXEtPnUucncuaWQgPSBnZXRfaWRfZnJvbV9mcmVlbGlzdF92MihpbmZvKTsKKwkJ
CXJlcS0+b3BlcmF0aW9uID0gY29weS0+cmVxLm9wZXJhdGlvbjsKKwkJCXJlcS0+dS5ydy5ucl9z
ZWdtZW50cyA9IGNvcHktPnJlcS51LnJ3Lm5yX3NlZ21lbnRzOworCQkJcmVxLT51LnJ3LmhhbmRs
ZSA9IGNvcHktPnJlcS51LnJ3LmhhbmRsZTsKKwkJCXJlcS0+dS5ydy5zZWN0b3JfbnVtYmVyID0g
Y29weS0+cmVxLnUucncuc2VjdG9yX251bWJlcjsKKwkJCWZvciAoaiA9IDA7IGogPCByZXEtPnUu
cncubnJfc2VnbWVudHM7IGorKykgeworCQkJCXNlZ19pZCA9IGdldF9zZWdfc2hhZG93X2lkKGlu
Zm8pOworCQkJCWlmIChqID09IDApCisJCQkJCXJlcS0+dS5ydy5zZWdfaWQgPSBzZWdfaWQ7CisJ
CQkJZWxzZQorCQkJCQlpbmZvLT5zZWdfc2hhZG93W2xhc3RfaWRdLmlkID0gc2VnX2lkOworCQkJ
CXNlZ3JpbmdfcmVxID0gUklOR19HRVRfUkVRVUVTVCgmaW5mby0+c2VncmluZywgaW5mby0+c2Vn
cmluZy5yZXFfcHJvZF9wdnQpOworCQkJCXNlZ3JpbmdfcmVxLT5ncmVmID0gY29weS0+cmVxLnUu
cncuc2VnW2pdLmdyZWY7CisJCQkJc2VncmluZ19yZXEtPmZpcnN0X3NlY3QgPSBjb3B5LT5yZXEu
dS5ydy5zZWdbal0uZmlyc3Rfc2VjdDsKKwkJCQlzZWdyaW5nX3JlcS0+bGFzdF9zZWN0ID0gY29w
eS0+cmVxLnUucncuc2VnW2pdLmxhc3Rfc2VjdDsKKwkJCQlpbmZvLT5zZWdfc2hhZG93W3NlZ19p
ZF0ucmVxID0gKnNlZ3JpbmdfcmVxOworCQkJCWluZm8tPnNlZ19zaGFkb3dbc2VnX2lkXS5mcmFt
ZSA9IGNvcHktPmZyYW1lW2pdOworCQkJCWluZm8tPnNlZ3JpbmcucmVxX3Byb2RfcHZ0Kys7CisJ
CQkJZ250dGFiX2dyYW50X2ZvcmVpZ25fYWNjZXNzX3JlZigKKwkJCQkJc2VncmluZ19yZXEtPmdy
ZWYsCisJCQkJCWluZm8tPnhiZGV2LT5vdGhlcmVuZF9pZCwKKwkJCQkJcGZuX3RvX21mbihjb3B5
LT5mcmFtZVtqXSksCisJCQkJCXJxX2RhdGFfZGlyKGNvcHktPnJlcXVlc3QpKTsKKwkJCQlsYXN0
X2lkID0gc2VnX2lkOworCQkJfQorCQkJaW5mby0+cmVxX3NoYWRvd1tyZXEtPnUucncuaWRdLnJl
cSA9ICpyZXE7CisJCQlpbmZvLT5yZXFfc2hhZG93W3JlcS0+dS5ydy5pZF0ucmVxdWVzdCA9IGNv
cHktPnJlcXVlc3Q7CisJCX0KKworCQlpbmZvLT5yZXFyaW5nLnJlcV9wcm9kX3B2dCsrOworCX0K
KworCS8qIG5lZWQgdXBkYXRlIHRoZSBxdWV1ZSBsaW1pdCBzZXR0aW5nICovCisJdXBkYXRlX2Js
a19xdWV1ZShpbmZvKTsKKworCS8qIGZyZWUgb3JpZ2luYWwgc2hhZG93Ki8KKwlrZnJlZShpbmZv
LT5zaGFkb3cpOworCisJeGVuYnVzX3N3aXRjaF9zdGF0ZShpbmZvLT54YmRldiwgWGVuYnVzU3Rh
dGVDb25uZWN0ZWQpOworCisJc3Bpbl9sb2NrX2lycSgmaW5mby0+aW9fbG9jayk7CisKKwkvKiBO
b3cgc2FmZSBmb3IgdXMgdG8gdXNlIHRoZSBzaGFyZWQgcmluZyAqLworCWluZm8tPmNvbm5lY3Rl
ZCA9IEJMS0lGX1NUQVRFX0NPTk5FQ1RFRDsKKworCS8qIFNlbmQgb2ZmIHJlcXVldWVkIHJlcXVl
c3RzICovCisJZmx1c2hfcmVxdWVzdHMoaW5mbyk7CisKKwkvKiBLaWNrIGFueSBvdGhlciBuZXcg
cmVxdWVzdHMgcXVldWVkIHNpbmNlIHdlIHJlc3VtZWQgKi8KKwlraWNrX3BlbmRpbmdfcmVxdWVz
dF9xdWV1ZXMoaW5mbyk7CisKKwlzcGluX3VubG9ja19pcnEoJmluZm8tPmlvX2xvY2spOworCisJ
cmV0dXJuIDA7Cit9CisKK3N0YXRpYyBpbnQgcmVjb3Zlcl9mcm9tX3YyX3RvX3YyKHN0cnVjdCBi
bGtmcm9udF9pbmZvICppbmZvKQoreworCWludCBpOworCXN0cnVjdCBibGtpZl9yZXF1ZXN0X2hl
YWRlciAqcmVxOworCXN0cnVjdCBibGtpZl9yZXF1ZXN0X3NlZ21lbnQgKnNlZ3JpbmdfcmVxOwor
CXN0cnVjdCBibGtfcmVxX3NoYWRvdyAqY29weTsKKwlzdHJ1Y3QgYmxrX3NlZ19zaGFkb3cgKnNl
Z19jb3B5OworCXVuc2lnbmVkIGxvbmcgaW5kZXggPSAweDBmZmZmZmZmLCBzZWdfaWQsIGxhc3Rf
aWQgPSAweDBmZmZmZmZmOworCWludCBqOworCXVuc2lnbmVkIGludCByZXFfcnMsIHNlZ19yczsK
Kwl1bnNpZ25lZCBsb25nIGZsYWdzOworCisJcmVxX3JzID0gQkxLX1JFUV9SSU5HX1NJWkU7CisJ
c2VnX3JzID0gQkxLX1NFR19SSU5HX1NJWkU7CisKKwkvKiBTdGFnZSAxOiBNYWtlIGEgc2FmZSBj
b3B5IG9mIHRoZSBzaGFkb3cgc3RhdGUuICovCisJY29weSA9IGttYWxsb2Moc2l6ZW9mKHN0cnVj
dCBibGtfcmVxX3NoYWRvdykgKiByZXFfcnMsCisJCSAgICAgICBHRlBfTk9JTyB8IF9fR0ZQX1JF
UEVBVCB8IF9fR0ZQX0hJR0gpOworCWlmICghY29weSkKKwkJcmV0dXJuIC1FTk9NRU07CisKKwlz
ZWdfY29weSA9IGttYWxsb2Moc2l6ZW9mKHN0cnVjdCBibGtfc2VnX3NoYWRvdykgKiBzZWdfcnMs
CisJCQkgICBHRlBfTk9JTyB8IF9fR0ZQX1JFUEVBVCB8IF9fR0ZQX0hJR0gpOworCWlmICghc2Vn
X2NvcHkgKSB7CisJCWtmcmVlKGNvcHkpOworCQlyZXR1cm4gLUVOT01FTTsKKwl9CisKKwltZW1j
cHkoY29weSwgaW5mby0+cmVxX3NoYWRvdywgc2l6ZW9mKHN0cnVjdCBibGtfcmVxX3NoYWRvdykg
KiByZXFfcnMpOworCW1lbWNweShzZWdfY29weSwgaW5mby0+c2VnX3NoYWRvdywKKwkgICAgICAg
c2l6ZW9mKHN0cnVjdCBibGtfc2VnX3NoYWRvdykgKiBzZWdfcnMpOworCisJLyogU3RhZ2UgMjog
U2V0IHVwIGZyZWUgbGlzdC4gKi8KKyAgICAgICAgZm9yIChpID0gMDsgaSA8IHJlcV9yczsgaSsr
KQorICAgICAgICAgICAgICAgIGluZm8tPnJlcV9zaGFkb3dbaV0ucmVxLnUucncuaWQgPSBpKzE7
CisgICAgICAgIGluZm8tPnJlcV9zaGFkb3dbcmVxX3JzIC0gMV0ucmVxLnUucncuaWQgPSAweDBm
ZmZmZmZmOworCisJZm9yIChpID0gMDsgaSA8IHNlZ19yczsgaSsrKQorCQlpbmZvLT5zZWdfc2hh
ZG93W2ldLmlkID0gaSsxOworCWluZm8tPnNlZ19zaGFkb3dbc2VnX3JzIC0gMV0uaWQgPSAweDBm
ZmZmZmZmOworCisJaW5mby0+c2hhZG93X2ZyZWUgPSBpbmZvLT5yZXFyaW5nLnJlcV9wcm9kX3B2
dDsKKwlpbmZvLT5zZWdfc2hhZG93X2ZyZWUgPSBpbmZvLT5zZWdyaW5nLnJlcV9wcm9kX3B2dDsK
KworCS8qIFN0YWdlIDM6IEZpbmQgcGVuZGluZyByZXF1ZXN0cyBhbmQgcmVxdWV1ZSB0aGVtLiAq
LworCWZvciAoaSA9IDA7IGkgPCByZXFfcnM7IGkrKykgeworCQkvKiBOb3QgaW4gdXNlPyAqLwor
CQlpZiAoIWNvcHlbaV0ucmVxdWVzdCkKKwkJCWNvbnRpbnVlOworCisJCXJlcSA9IFJJTkdfR0VU
X1JFUVVFU1QoJmluZm8tPnJlcXJpbmcsIGluZm8tPnJlcXJpbmcucmVxX3Byb2RfcHZ0KTsKKwkJ
KnJlcSA9IGNvcHlbaV0ucmVxOworCisJCXJlcS0+dS5ydy5pZCA9IGdldF9pZF9mcm9tX2ZyZWVs
aXN0X3YyKGluZm8pOworCQltZW1jcHkoJmluZm8tPnJlcV9zaGFkb3dbcmVxLT51LnJ3LmlkXSwg
JmNvcHlbaV0sIHNpemVvZihjb3B5W2ldKSk7CisKKwkJaWYgKHJlcS0+b3BlcmF0aW9uICE9IEJM
S0lGX09QX0RJU0NBUkQpIHsKKwkJCWZvciAoaiA9IDA7IGogPCByZXEtPnUucncubnJfc2VnbWVu
dHM7IGorKykgeworCQkJCXNlZ19pZCA9IGdldF9zZWdfc2hhZG93X2lkKGluZm8pOworCQkJCWlm
IChqID09IDApCisJCQkJCWluZGV4ID0gcmVxLT51LnJ3LnNlZ19pZDsKKwkJCQllbHNlCisJCQkJ
CWluZGV4ID0gc2VnX2NvcHlbaW5kZXhdLmlkIDsKKwkJCQlnbnR0YWJfZ3JhbnRfZm9yZWlnbl9h
Y2Nlc3NfcmVmKAorCQkJCQlzZWdfY29weVtpbmRleF0ucmVxLmdyZWYsCisJCQkJCWluZm8tPnhi
ZGV2LT5vdGhlcmVuZF9pZCwKKwkJCQkJcGZuX3RvX21mbihzZWdfY29weVtpbmRleF0uZnJhbWUp
LAorCQkJCQlycV9kYXRhX2RpcihpbmZvLT5yZXFfc2hhZG93W3JlcS0+dS5ydy5pZF0ucmVxdWVz
dCkpOworCQkJCXNlZ3JpbmdfcmVxID0gUklOR19HRVRfUkVRVUVTVCgmaW5mby0+c2VncmluZywg
aW5mby0+c2VncmluZy5yZXFfcHJvZF9wdnQpOworCQkJCW1lbWNweShzZWdyaW5nX3JlcSwgJihz
ZWdfY29weVtpbmRleF0ucmVxKSwKKwkJCQkgICAgICAgc2l6ZW9mKHN0cnVjdCBibGtpZl9yZXF1
ZXN0X3NlZ21lbnQpKTsKKwkJCQlpZiAoaiA9PSAwKQorCQkJCQlyZXEtPnUucncuc2VnX2lkID0g
c2VnX2lkOworCQkJCWVsc2UKKwkJCQkJaW5mby0+c2VnX3NoYWRvd1tsYXN0X2lkXS5pZCA9IHNl
Z19pZDsKKworCQkJCW1lbWNweSgmaW5mby0+c2VnX3NoYWRvd1tzZWdfaWRdLAorCQkJCSAgICAg
ICAmc2VnX2NvcHlbaW5kZXhdLCBzaXplb2Yoc3RydWN0IGJsa19zZWdfc2hhZG93KSk7CisJCQkJ
aW5mby0+c2VncmluZy5yZXFfcHJvZF9wdnQrKzsKKwkJCQlsYXN0X2lkID0gc2VnX2lkOworCQkJ
fQorCQl9CisJCWluZm8tPnJlcV9zaGFkb3dbcmVxLT51LnJ3LmlkXS5yZXEgPSAqcmVxOworCisJ
CWluZm8tPnJlcXJpbmcucmVxX3Byb2RfcHZ0Kys7CisJfQorCisJa2ZyZWUoc2VnX2NvcHkpOwor
CWtmcmVlKGNvcHkpOworCisJeGVuYnVzX3N3aXRjaF9zdGF0ZShpbmZvLT54YmRldiwgWGVuYnVz
U3RhdGVDb25uZWN0ZWQpOworCisJc3Bpbl9sb2NrX2lycXNhdmUoJmluZm8tPmlvX2xvY2ssIGZs
YWdzKTsKKworCS8qIE5vdyBzYWZlIGZvciB1cyB0byB1c2UgdGhlIHNoYXJlZCByaW5nICovCisJ
aW5mby0+Y29ubmVjdGVkID0gQkxLSUZfU1RBVEVfQ09OTkVDVEVEOworCisJLyogU2VuZCBvZmYg
cmVxdWV1ZWQgcmVxdWVzdHMgKi8KKwlmbHVzaF9yZXF1ZXN0cyhpbmZvKTsKKworCS8qIEtpY2sg
YW55IG90aGVyIG5ldyByZXF1ZXN0cyBxdWV1ZWQgc2luY2Ugd2UgcmVzdW1lZCAqLworCWtpY2tf
cGVuZGluZ19yZXF1ZXN0X3F1ZXVlcyhpbmZvKTsKKworCXNwaW5fdW5sb2NrX2lycXJlc3RvcmUo
JmluZm8tPmlvX2xvY2ssIGZsYWdzKTsKKworCXJldHVybiAwOworfQorCitzdGF0aWMgaW50IGJs
a2lmX3JlY292ZXJfdjIoc3RydWN0IGJsa2Zyb250X2luZm8gKmluZm8pCit7CisJaW50IHJjOwor
CisJaWYgKGluZm8tPnJpbmdfdHlwZSA9PSBSSU5HX1RZUEVfMSkKKwkJcmMgPSByZWNvdmVyX2Zy
b21fdjFfdG9fdjIoaW5mbyk7CisJZWxzZSBpZiAoaW5mby0+cmluZ190eXBlID09IFJJTkdfVFlQ
RV8yKQorCQlyYyA9IHJlY292ZXJfZnJvbV92Ml90b192MihpbmZvKTsKKwllbHNlCisJCXJjID0g
LUVQRVJNOworCXJldHVybiByYzsKK30KIC8qKgogICogV2UgYXJlIHJlY29ubmVjdGluZyB0byB0
aGUgYmFja2VuZCwgZHVlIHRvIGEgc3VzcGVuZC9yZXN1bWUsIG9yIGEgYmFja2VuZAogICogZHJp
dmVyIHJlc3RhcnQuICBXZSB0ZWFyIGRvd24gb3VyIGJsa2lmIHN0cnVjdHVyZSBhbmQgcmVjcmVh
dGUgaXQsIGJ1dApAQCAtMTYwOSwxNSArMjM3MSw0NCBAQCBzdGF0aWMgc3RydWN0IGJsa19mcm9u
dF9vcGVyYXRpb25zIGJsa19mcm9udF9vcHMgPSB7CiAJLnVwZGF0ZV9yc3BfZXZlbnQgPSB1cGRh
dGVfcnNwX2V2ZW50LAogCS51cGRhdGVfcnNwX2NvbnMgPSB1cGRhdGVfcnNwX2NvbnMsCiAJLnVw
ZGF0ZV9yZXFfcHJvZF9wdnQgPSB1cGRhdGVfcmVxX3Byb2RfcHZ0LAorCS51cGRhdGVfc2VnbWVu
dF9yc3BfY29ucyA9IHVwZGF0ZV9zZWdtZW50X3JzcF9jb25zLAogCS5yaW5nX3B1c2ggPSByaW5n
X3B1c2gsCiAJLnJlY292ZXIgPSBibGtpZl9yZWNvdmVyLAogCS5yaW5nX2Z1bGwgPSByaW5nX2Z1
bGwsCisJLnNlZ3JpbmdfZnVsbCA9IHNlZ3JpbmdfZnVsbCwKIAkuc2V0dXBfYmxrcmluZyA9IHNl
dHVwX2Jsa3JpbmcsCiAJLmZyZWVfYmxrcmluZyA9IGZyZWVfYmxrcmluZywKIAkuYmxraWZfY29t
cGxldGlvbiA9IGJsa2lmX2NvbXBsZXRpb24sCiAJLm1heF9zZWcgPSBCTEtJRl9NQVhfU0VHTUVO
VFNfUEVSX1JFUVVFU1QsCiB9OwogCitzdGF0aWMgc3RydWN0IGJsa19mcm9udF9vcGVyYXRpb25z
IGJsa19mcm9udF9vcHNfdjIgPSB7CisJLnJpbmdfZ2V0X3JlcXVlc3QgPSByaW5nX2dldF9yZXF1
ZXN0X3YyLAorCS5yaW5nX2dldF9yZXNwb25zZSA9IHJpbmdfZ2V0X3Jlc3BvbnNlX3YyLAorCS5y
aW5nX2dldF9zZWdtZW50ID0gcmluZ19nZXRfc2VnbWVudF92MiwKKwkuZ2V0X2lkID0gZ2V0X2lk
X2Zyb21fZnJlZWxpc3RfdjIsCisJLmFkZF9pZCA9IGFkZF9pZF90b19mcmVlbGlzdF92MiwKKwku
c2F2ZV9zZWdfc2hhZG93ID0gc2F2ZV9zZWdfc2hhZG93X3YyLAorCS5zYXZlX3JlcV9zaGFkb3cg
PSBzYXZlX3JlcV9zaGFkb3dfdjIsCisJLmdldF9yZXFfZnJvbV9zaGFkb3cgPSBnZXRfcmVxX2Zy
b21fc2hhZG93X3YyLAorCS5nZXRfcnNwX3Byb2QgPSBnZXRfcnNwX3Byb2RfdjIsCisJLmdldF9y
c3BfY29ucyA9IGdldF9yc3BfY29uc192MiwKKwkuZ2V0X3JlcV9wcm9kX3B2dCA9IGdldF9yZXFf
cHJvZF9wdnRfdjIsCisJLmNoZWNrX2xlZnRfcmVzcG9uc2UgPSBjaGVja19sZWZ0X3Jlc3BvbnNl
X3YyLAorCS51cGRhdGVfcnNwX2V2ZW50ID0gdXBkYXRlX3JzcF9ldmVudF92MiwKKwkudXBkYXRl
X3JzcF9jb25zID0gdXBkYXRlX3JzcF9jb25zX3YyLAorCS51cGRhdGVfcmVxX3Byb2RfcHZ0ID0g
dXBkYXRlX3JlcV9wcm9kX3B2dF92MiwKKwkudXBkYXRlX3NlZ21lbnRfcnNwX2NvbnMgPSB1cGRh
dGVfc2VnbWVudF9yc3BfY29uc192MiwKKwkucmluZ19wdXNoID0gcmluZ19wdXNoX3YyLAorCS5y
ZWNvdmVyID0gYmxraWZfcmVjb3Zlcl92MiwKKwkucmluZ19mdWxsID0gcmluZ19mdWxsX3YyLAor
CS5zZWdyaW5nX2Z1bGwgPSBzZWdyaW5nX2Z1bGxfdjIsCisJLnNldHVwX2Jsa3JpbmcgPSBzZXR1
cF9ibGtyaW5nX3YyLAorCS5mcmVlX2Jsa3JpbmcgPSBmcmVlX2Jsa3JpbmdfdjIsCisJLmJsa2lm
X2NvbXBsZXRpb24gPSBibGtpZl9jb21wbGV0aW9uX3YyLAorCS5tYXhfc2VnID0gQkxLSUZfTUFY
X1NFR01FTlRTX1BFUl9SRVFVRVNUX1YyLAorfTsKKwogc3RhdGljIGNvbnN0IHN0cnVjdCBibG9j
a19kZXZpY2Vfb3BlcmF0aW9ucyB4bHZiZF9ibG9ja19mb3BzID0KIHsKIAkub3duZXIgPSBUSElT
X01PRFVMRSwKZGlmZiAtLWdpdCBhL2RyaXZlcnMveGVuL2dyYW50LXRhYmxlLmMgYi9kcml2ZXJz
L3hlbi9ncmFudC10YWJsZS5jCmluZGV4IGYxMDBjZTIuLmE1YTk4YjAgMTAwNjQ0Ci0tLSBhL2Ry
aXZlcnMveGVuL2dyYW50LXRhYmxlLmMKKysrIGIvZHJpdmVycy94ZW4vZ3JhbnQtdGFibGUuYwpA
QCAtNDc1LDcgKzQ3NSw3IEBAIHZvaWQgZ250dGFiX2VuZF9mb3JlaWduX2FjY2VzcyhncmFudF9y
ZWZfdCByZWYsIGludCByZWFkb25seSwKIAkJLyogWFhYIFRoaXMgbmVlZHMgdG8gYmUgZml4ZWQg
c28gdGhhdCB0aGUgcmVmIGFuZCBwYWdlIGFyZQogCQkgICBwbGFjZWQgb24gYSBsaXN0IHRvIGJl
IGZyZWVkIHVwIGxhdGVyLiAqLwogCQlwcmludGsoS0VSTl9XQVJOSU5HCi0JCSAgICAgICAiV0FS
TklORzogbGVha2luZyBnLmUuIGFuZCBwYWdlIHN0aWxsIGluIHVzZSFcbiIpOworCQkgICAgICAg
IldBUk5JTkc6IHJlZiAldSBsZWFraW5nIGcuZS4gYW5kIHBhZ2Ugc3RpbGwgaW4gdXNlIVxuIiwg
cmVmKTsKIAl9CiB9CiBFWFBPUlRfU1lNQk9MX0dQTChnbnR0YWJfZW5kX2ZvcmVpZ25fYWNjZXNz
KTsKZGlmZiAtLWdpdCBhL2luY2x1ZGUveGVuL2ludGVyZmFjZS9pby9ibGtpZi5oIGIvaW5jbHVk
ZS94ZW4vaW50ZXJmYWNlL2lvL2Jsa2lmLmgKaW5kZXggZWUzMzhiZi4uNzYzNDg5YSAxMDA2NDQK
LS0tIGEvaW5jbHVkZS94ZW4vaW50ZXJmYWNlL2lvL2Jsa2lmLmgKKysrIGIvaW5jbHVkZS94ZW4v
aW50ZXJmYWNlL2lvL2Jsa2lmLmgKQEAgLTEwOCw2ICsxMDgsNyBAQCB0eXBlZGVmIHVpbnQ2NF90
IGJsa2lmX3NlY3Rvcl90OwogICogTkIuIFRoaXMgY291bGQgYmUgMTIgaWYgdGhlIHJpbmcgaW5k
ZXhlcyB3ZXJlbid0IHN0b3JlZCBpbiB0aGUgc2FtZSBwYWdlLgogICovCiAjZGVmaW5lIEJMS0lG
X01BWF9TRUdNRU5UU19QRVJfUkVRVUVTVCAxMQorI2RlZmluZSBCTEtJRl9NQVhfU0VHTUVOVFNf
UEVSX1JFUVVFU1RfVjIgMTI4CiAKIHN0cnVjdCBibGtpZl9yZXF1ZXN0X3J3IHsKIAl1aW50OF90
ICAgICAgICBucl9zZWdtZW50czsgIC8qIG51bWJlciBvZiBzZWdtZW50cyAgICAgICAgICAgICAg
ICAgICAqLwpAQCAtMTI1LDYgKzEyNiwxNyBAQCBzdHJ1Y3QgYmxraWZfcmVxdWVzdF9ydyB7CiAJ
fSBzZWdbQkxLSUZfTUFYX1NFR01FTlRTX1BFUl9SRVFVRVNUXTsKIH0gX19hdHRyaWJ1dGVfXygo
X19wYWNrZWRfXykpOwogCitzdHJ1Y3QgYmxraWZfcmVxdWVzdF9yd19oZWFkZXIgeworCXVpbnQ4
X3QgICAgICAgIG5yX3NlZ21lbnRzOyAgLyogbnVtYmVyIG9mIHNlZ21lbnRzICAgICAgICAgICAg
ICAgICAgICovCisJYmxraWZfdmRldl90ICAgaGFuZGxlOyAgICAgICAvKiBvbmx5IGZvciByZWFk
L3dyaXRlIHJlcXVlc3RzICAgICAgICAgKi8KKyNpZmRlZiBDT05GSUdfWDg2XzY0CisJdWludDMy
X3QgICAgICAgX3BhZDE7CSAgICAgLyogb2Zmc2V0b2YoYmxraWZfcmVxdWVzdCx1LnJ3LmlkKSA9
PSA4ICovCisjZW5kaWYKKwl1aW50NjRfdCAgICAgICBpZDsgICAgICAgICAgIC8qIHByaXZhdGUg
Z3Vlc3QgdmFsdWUsIGVjaG9lZCBpbiByZXNwICAqLworCWJsa2lmX3NlY3Rvcl90IHNlY3Rvcl9u
dW1iZXI7Lyogc3RhcnQgc2VjdG9yIGlkeCBvbiBkaXNrIChyL3cgb25seSkgICovCisJdWludDY0
X3QgICAgICAgc2VnX2lkOwkgICAgIC8qIHNlZ21lbnQgaWQgaW4gdGhlIHNlZ21lbnQgc2hhZG93
ICAgICAqLwkKK30gX19hdHRyaWJ1dGVfXygoX19wYWNrZWRfXykpOworCiBzdHJ1Y3QgYmxraWZf
cmVxdWVzdF9kaXNjYXJkIHsKIAl1aW50OF90ICAgICAgICBmbGFnOyAgICAgICAgIC8qIEJMS0lG
X0RJU0NBUkRfU0VDVVJFIG9yIHplcm8uICAgICAgICAqLwogI2RlZmluZSBCTEtJRl9ESVNDQVJE
X1NFQ1VSRSAoMTw8MCkgIC8qIGlnbm9yZWQgaWYgZGlzY2FyZC1zZWN1cmU9MCAgICAgICAgICAq
LwpAQCAtMTM1LDcgKzE0Nyw2IEBAIHN0cnVjdCBibGtpZl9yZXF1ZXN0X2Rpc2NhcmQgewogCXVp
bnQ2NF90ICAgICAgIGlkOyAgICAgICAgICAgLyogcHJpdmF0ZSBndWVzdCB2YWx1ZSwgZWNob2Vk
IGluIHJlc3AgICovCiAJYmxraWZfc2VjdG9yX3Qgc2VjdG9yX251bWJlcjsKIAl1aW50NjRfdCAg
ICAgICBucl9zZWN0b3JzOwotCXVpbnQ4X3QgICAgICAgIF9wYWQzOwogfSBfX2F0dHJpYnV0ZV9f
KChfX3BhY2tlZF9fKSk7CiAKIHN0cnVjdCBibGtpZl9yZXF1ZXN0IHsKQEAgLTE0NiwxMiArMTU3
LDI0IEBAIHN0cnVjdCBibGtpZl9yZXF1ZXN0IHsKIAl9IHU7CiB9IF9fYXR0cmlidXRlX18oKF9f
cGFja2VkX18pKTsKIAorc3RydWN0IGJsa2lmX3JlcXVlc3RfaGVhZGVyIHsKKwl1aW50OF90ICAg
ICAgICBvcGVyYXRpb247ICAgIC8qIEJMS0lGX09QXz8/PyAgICAgICAgICAgICAgICAgICAgICAg
ICAqLworCXVuaW9uIHsKKwkJc3RydWN0IGJsa2lmX3JlcXVlc3RfcndfaGVhZGVyIHJ3OworCQlz
dHJ1Y3QgYmxraWZfcmVxdWVzdF9kaXNjYXJkIGRpc2NhcmQ7CisJfSB1OworfSBfX2F0dHJpYnV0
ZV9fKChfX3BhY2tlZF9fKSk7CisKIHN0cnVjdCBibGtpZl9yZXNwb25zZSB7CiAJdWludDY0X3Qg
ICAgICAgIGlkOyAgICAgICAgICAgICAgLyogY29waWVkIGZyb20gcmVxdWVzdCAqLwogCXVpbnQ4
X3QgICAgICAgICBvcGVyYXRpb247ICAgICAgIC8qIGNvcGllZCBmcm9tIHJlcXVlc3QgKi8KIAlp
bnQxNl90ICAgICAgICAgc3RhdHVzOyAgICAgICAgICAvKiBCTEtJRl9SU1BfPz8/ICAgICAgICov
CiB9OwogCitzdHJ1Y3QgYmxraWZfcmVzcG9uc2Vfc2VnbWVudCB7CisJY2hhcgkJZHVtbXk7Cit9
IF9fYXR0cmlidXRlX18oKF9fcGFja2VkX18pKTsKKwogLyoKICAqIFNUQVRVUyBSRVRVUk4gQ09E
RVMuCiAgKi8KQEAgLTE2Nyw2ICsxOTAsOCBAQCBzdHJ1Y3QgYmxraWZfcmVzcG9uc2UgewogICov
CiAKIERFRklORV9SSU5HX1RZUEVTKGJsa2lmLCBzdHJ1Y3QgYmxraWZfcmVxdWVzdCwgc3RydWN0
IGJsa2lmX3Jlc3BvbnNlKTsKK0RFRklORV9SSU5HX1RZUEVTKGJsa2lmX3JlcXVlc3QsIHN0cnVj
dCBibGtpZl9yZXF1ZXN0X2hlYWRlciwgc3RydWN0IGJsa2lmX3Jlc3BvbnNlKTsKK0RFRklORV9S
SU5HX1RZUEVTKGJsa2lmX3NlZ21lbnQsIHN0cnVjdCBibGtpZl9yZXF1ZXN0X3NlZ21lbnQsIHN0
cnVjdCBibGtpZl9yZXNwb25zZV9zZWdtZW50KTsKIAogI2RlZmluZSBWRElTS19DRFJPTSAgICAg
ICAgMHgxCiAjZGVmaW5lIFZESVNLX1JFTU9WQUJMRSAgICAweDIK

--_002_A21691DE07B84740B5F0B81466D5148A23BCF213SHSMSX102ccrcor_
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--_002_A21691DE07B84740B5F0B81466D5148A23BCF213SHSMSX102ccrcor_--


From xen-devel-bounces@lists.xen.org Thu Aug 16 10:27:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 10:27:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1xIe-0005yO-92; Thu, 16 Aug 2012 10:27:36 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ronghui.duan@intel.com>) id 1T1xIc-0005xu-Dd
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 10:27:35 +0000
Received: from [85.158.138.51:35142] by server-5.bemta-3.messagelabs.com id
	14/9D-08865-31BCC205; Thu, 16 Aug 2012 10:27:31 +0000
X-Env-Sender: ronghui.duan@intel.com
X-Msg-Ref: server-7.tower-174.messagelabs.com!1345112847!19680854!1
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDMzNjg4NA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4011 invoked from network); 16 Aug 2012 10:27:27 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-7.tower-174.messagelabs.com with SMTP;
	16 Aug 2012 10:27:27 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga102.fm.intel.com with ESMTP; 16 Aug 2012 03:27:15 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.77,778,1336374000"; 
	d="scan'208,217";a="202534491"
Received: from fmsmsx104.amr.corp.intel.com ([10.19.9.35])
	by fmsmga001.fm.intel.com with ESMTP; 16 Aug 2012 03:27:14 -0700
Received: from fmsmsx153.amr.corp.intel.com (10.19.17.7) by
	FMSMSX104.amr.corp.intel.com (10.19.9.35) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Thu, 16 Aug 2012 03:27:14 -0700
Received: from shsmsx101.ccr.corp.intel.com (10.239.4.153) by
	FMSMSX153.amr.corp.intel.com (10.19.17.7) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Thu, 16 Aug 2012 03:27:13 -0700
Received: from shsmsx102.ccr.corp.intel.com ([169.254.2.196]) by
	SHSMSX101.ccr.corp.intel.com ([169.254.1.82]) with mapi id
	14.01.0355.002; Thu, 16 Aug 2012 18:27:11 +0800
From: "Duan, Ronghui" <ronghui.duan@intel.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Thread-Topic: [Xen-devel] [RFC v1 3/5] VBD: enlarge max segment per request
	in blkfront
Thread-Index: Ac17mbHQd/+wJK9MRfWlX0hUUsOitw==
Date: Thu, 16 Aug 2012 10:27:10 +0000
Message-ID: <A21691DE07B84740B5F0B81466D5148A23BCF23D@SHSMSX102.ccr.corp.intel.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: yes
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
Content-Type: multipart/mixed;
	boundary="_004_A21691DE07B84740B5F0B81466D5148A23BCF23DSHSMSX102ccrcor_"
MIME-Version: 1.0
Cc: "Ian.Jackson@eu.citrix.com" <Ian.Jackson@eu.citrix.com>,
	"Stefano.Stabellini@eu.citrix.com" <Stefano.Stabellini@eu.citrix.com>
Subject: [Xen-devel] [RFC v1 3/5] VBD: enlarge max segment per request in
 blkfront
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--_004_A21691DE07B84740B5F0B81466D5148A23BCF23DSHSMSX102ccrcor_
Content-Type: multipart/alternative;
	boundary="_000_A21691DE07B84740B5F0B81466D5148A23BCF23DSHSMSX102ccrcor_"

--_000_A21691DE07B84740B5F0B81466D5148A23BCF23DSHSMSX102ccrcor_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable


Refactor blkback: move the per-request segment bookkeeping (grant map/unmap tables, seg_buf array, bio list, and pages) into struct pending_req and allocate it per request in alloc_req().

Signed-off-by: Ronghui Duan <ronghui.duan@intel.com>

diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkbac=
k/blkback.c
index 73f196c..b4767f5 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -64,6 +64,11 @@ MODULE_PARM_DESC(reqs, "Number of blkback requests to al=
locate");
static unsigned int log_stats;
module_param(log_stats, int, 0644);
+struct seg_buf {
+        unsigned long buf;
+        unsigned int nsec;
+};
+
/*
  * Each outstanding request that we've passed to the lower device layers h=
as a
  * 'pending_req' allocated to it. Each buffer_head that completes decremen=
ts
@@ -78,6 +83,11 @@ struct pending_req {
    unsigned short       operation;
    int             status;
    struct list_head     free_list;
+    struct gnttab_map_grant_ref     *map;
+    struct gnttab_unmap_grant_ref   *unmap;
+    struct seg_buf             *seg;
+    struct bio           **biolist;
+    struct page                **pages;
};
 #define BLKBACK_INVALID_HANDLE (~0)
@@ -123,28 +133,9 @@ static inline unsigned long vaddr(struct pending_req *=
req, int seg)
 static int do_block_io_op(struct xen_blkif *blkif);
static int dispatch_rw_block_io(struct xen_blkif *blkif,
-                    struct blkif_request *req,
                    struct pending_req *pending_req);
static void make_response(struct xen_blkif *blkif, u64 id,
-                 unsigned short op, int st);
-
-/*
- * Retrieve from the 'pending_reqs' a free pending_req structure to be used.
- */
-static struct pending_req *alloc_req(void)
-{
-    struct pending_req *req = NULL;
-    unsigned long flags;
-
-    spin_lock_irqsave(&blkbk->pending_free_lock, flags);
-    if (!list_empty(&blkbk->pending_free)) {
-          req = list_entry(blkbk->pending_free.next, struct pending_req,
-                    free_list);
-          list_del(&req->free_list);
-    }
-    spin_unlock_irqrestore(&blkbk->pending_free_lock, flags);
-    return req;
-}
+                 unsigned short op, int nr_page, int st);
 /*
  * Return the 'pending_req' structure back to the freepool. We also
@@ -155,6 +146,12 @@ static void free_req(struct pending_req *req)
    unsigned long flags;
    int was_empty;
+    kfree(req->map);
+    kfree(req->unmap);
+    kfree(req->biolist);
+    kfree(req->seg);
+    kfree(req->pages);
+
    spin_lock_irqsave(&blkbk->pending_free_lock, flags);
    was_empty = list_empty(&blkbk->pending_free);
    list_add(&req->free_list, &blkbk->pending_free);
@@ -162,7 +159,42 @@ static void free_req(struct pending_req *req)
    if (was_empty)
          wake_up(&blkbk->pending_free_wq);
}
-
+/*
+ * Retrieve from the 'pending_reqs' a free pending_req structure to be used.
+ */
+static struct pending_req *alloc_req(void)
+{
+    struct pending_req *req = NULL;
+    unsigned long flags;
+    unsigned int max_seg = BLKIF_MAX_SEGMENTS_PER_REQUEST;
+
+    spin_lock_irqsave(&blkbk->pending_free_lock, flags);
+    if (!list_empty(&blkbk->pending_free)) {
+          req = list_entry(blkbk->pending_free.next, struct pending_req,
+                    free_list);
+          list_del(&req->free_list);
+    }
+    spin_unlock_irqrestore(&blkbk->pending_free_lock, flags);
+
+    if (req != NULL) {
+          req->map = kzalloc(sizeof(struct gnttab_map_grant_ref) *
+                       max_seg, GFP_KERNEL);
+          req->unmap = kzalloc(sizeof(struct gnttab_unmap_grant_ref) *
+                       max_seg, GFP_KERNEL);
+          req->biolist = kzalloc(sizeof(struct bio *) * max_seg,
+                           GFP_KERNEL);
+          req->seg = kzalloc(sizeof(struct seg_buf) * max_seg,
+                       GFP_KERNEL);
+          req->pages = kzalloc(sizeof(struct page *) * max_seg,
+                       GFP_KERNEL);
+          if (!req->map || !req->unmap || !req->biolist ||
+              !req->seg || !req->pages) {
+               free_req(req);
+               req = NULL;
+          }
+    }
+    return req;
+}
/*
  * Routines for managing virtual block devices (vbds).
  */
@@ -308,11 +340,6 @@ int xen_blkif_schedule(void *arg)
    xen_blkif_put(blkif);
     return 0;
-}
-
-struct seg_buf {
-    unsigned long buf;
-    unsigned int nsec;
};
/*
  * Unmap the grant references, and also remove the M2P over-rides
@@ -320,8 +347,8 @@ struct seg_buf {
  */
static void xen_blkbk_unmap(struct pending_req *req)
{
-    struct gnttab_unmap_grant_ref unmap[BLKIF_MAX_SEGMENTS_PER_REQUEST];
-    struct page *pages[BLKIF_MAX_SEGMENTS_PER_REQUEST];
+    struct gnttab_unmap_grant_ref *unmap = req->unmap;
+    struct page **pages = req->pages;
    unsigned int i, invcount = 0;
    grant_handle_t handle;
    int ret;
@@ -341,11 +368,13 @@ static void xen_blkbk_unmap(struct pending_req *req)
    BUG_ON(ret);
}
+
static int xen_blkbk_map(struct blkif_request *req,
+               struct blkif_request_segment *seg_req,
                struct pending_req *pending_req,
                struct seg_buf seg[])
{
-    struct gnttab_map_grant_ref map[BLKIF_MAX_SEGMENTS_PER_REQUEST];
+    struct gnttab_map_grant_ref *map = pending_req->map;
    int i;
    int nseg = req->u.rw.nr_segments;
    int ret = 0;
@@ -362,7 +391,7 @@ static int xen_blkbk_map(struct blkif_request *req,
          if (pending_req->operation != BLKIF_OP_READ)
               flags |= GNTMAP_readonly;
          gnttab_set_map_op(&map[i], vaddr(pending_req, i), flags,
-                      req->u.rw.seg[i].gref,
+                      seg_req[i].gref,
                      pending_req->blkif->domid);
    }
@@ -387,14 +416,15 @@ static int xen_blkbk_map(struct blkif_request *req,
               continue;
           seg[i].buf  = map[i].dev_bus_addr |
-               (req->u.rw.seg[i].first_sect << 9);
+               (seg_req[i].first_sect << 9);
    }
    return ret;
}
-static int dispatch_discard_io(struct xen_blkif *blkif,
-                    struct blkif_request *req)
+static int dispatch_discard_io(struct xen_blkif *blkif)
{
+    struct blkif_request *blkif_req = (struct blkif_request *)blkif->req;
+    struct blkif_request_discard *req = &blkif_req->u.discard;
    int err = 0;
    int status = BLKIF_RSP_OKAY;
    struct block_device *bdev = blkif->vbd.bdev;
@@ -404,11 +434,11 @@ static int dispatch_discard_io(struct xen_blkif *blkif,
     xen_blkif_get(blkif);
    secure = (blkif->vbd.discard_secure &&
-          (req->u.discard.flag & BLKIF_DISCARD_SECURE)) ?
+          (req->flag & BLKIF_DISCARD_SECURE)) ?
           BLKDEV_DISCARD_SECURE : 0;
-    err = blkdev_issue_discard(bdev, req->u.discard.sector_number,
-                       req->u.discard.nr_sectors,
+    err = blkdev_issue_discard(bdev, req->sector_number,
+                       req->nr_sectors,
                        GFP_KERNEL, secure);
     if (err == -EOPNOTSUPP) {
@@ -417,7 +447,7 @@ static int dispatch_discard_io(struct xen_blkif *blkif,
    } else if (err)
          status = BLKIF_RSP_ERROR;
-    make_response(blkif, req->u.discard.id, req->operation, status);
+    make_response(blkif, req->id, BLKIF_OP_DISCARD, 0, status);
    xen_blkif_put(blkif);
    return err;
}
@@ -470,7 +500,8 @@ static void __end_block_io_op(struct pending_req *pending_req, int error)
    if (atomic_dec_and_test(&pending_req->pendcnt)) {
          xen_blkbk_unmap(pending_req);
          make_response(pending_req->blkif, pending_req->id,
-                     pending_req->operation, pending_req->status);
+                     pending_req->operation, pending_req->nr_pages,
+                     pending_req->status);
          xen_blkif_put(pending_req->blkif);
          if (atomic_read(&pending_req->blkif->refcnt) <= 2) {
               if (atomic_read(&pending_req->blkif->drain))
@@ -489,8 +520,37 @@ static void end_block_io_op(struct bio *bio, int error)
    bio_put(bio);
}
+void *get_back_ring(struct xen_blkif *blkif)
+{
+    return (void *)&blkif->blk_rings;
+}
+void copy_blkif_req(struct xen_blkif *blkif, RING_IDX rc)
+{
+    struct blkif_request *req = (struct blkif_request *)blkif->req;
+    union blkif_back_rings *blk_rings = &blkif->blk_rings;
+    switch (blkif->blk_protocol) {
+    case BLKIF_PROTOCOL_NATIVE:
+          memcpy(req, RING_GET_REQUEST(&blk_rings->native, rc),
+               sizeof(struct blkif_request));
+          break;
+    case BLKIF_PROTOCOL_X86_32:
+          blkif_get_x86_32_req(req, RING_GET_REQUEST(&blk_rings->x86_32, rc));
+          break;
+    case BLKIF_PROTOCOL_X86_64:
+          blkif_get_x86_64_req(req, RING_GET_REQUEST(&blk_rings->x86_64, rc));
+          break;
+    default:
+          BUG();
+    }
+}
+
+void copy_blkif_seg_req(struct xen_blkif *blkif)
+{
+    struct blkif_request *req = (struct blkif_request *)blkif->req;
+    blkif->seg_req = req->u.rw.seg;
+}
/*
  * Function to copy the from the ring buffer the 'struct blkif_request'
  * (which has the sectors we want, number of them, grant references, etc),
@@ -499,8 +559,9 @@ static void end_block_io_op(struct bio *bio, int error)
static int
__do_block_io_op(struct xen_blkif *blkif)
{
-    union blkif_back_rings *blk_rings = &blkif->blk_rings;
-    struct blkif_request req;
+    union blkif_back_rings *blk_rings =
+          (union blkif_back_rings *)blkif->ops->get_back_ring(blkif);
+    struct blkif_request *req = (struct blkif_request *)blkif->req;
    struct pending_req *pending_req;
    RING_IDX rc, rp;
    int more_to_do = 0;
@@ -526,28 +587,19 @@ __do_block_io_op(struct xen_blkif *blkif)
               break;
          }
-          switch (blkif->blk_protocol) {
-          case BLKIF_PROTOCOL_NATIVE:
-              memcpy(&req, RING_GET_REQUEST(&blk_rings->native, rc), sizeof(req));
-               break;
-          case BLKIF_PROTOCOL_X86_32:
-               blkif_get_x86_32_req(&req, RING_GET_REQUEST(&blk_rings->x86_32, rc));
-               break;
-          case BLKIF_PROTOCOL_X86_64:
-               blkif_get_x86_64_req(&req, RING_GET_REQUEST(&blk_rings->x86_64, rc));
-               break;
-          default:
-               BUG();
-          }
+          blkif->ops->copy_blkif_req(blkif, rc);
+
+          blkif->ops->copy_blkif_seg_req(blkif);
+
          blk_rings->common.req_cons = ++rc; /* before make_response() */
           /* Apply all sanity checks to /private copy/ of request. */
          barrier();
-          if (unlikely(req.operation == BLKIF_OP_DISCARD)) {
+          if (unlikely(req->operation == BLKIF_OP_DISCARD)) {
               free_req(pending_req);
-               if (dispatch_discard_io(blkif, &req))
+               if (dispatch_discard_io(blkif))
                    break;
-          } else if (dispatch_rw_block_io(blkif, &req, pending_req))
+          } else if (dispatch_rw_block_io(blkif, pending_req))
               break;
           /* Yield point for this unbounded loop. */
@@ -560,7 +612,8 @@ __do_block_io_op(struct xen_blkif *blkif)
static int
do_block_io_op(struct xen_blkif *blkif)
{
-    union blkif_back_rings *blk_rings = &blkif->blk_rings;
+    union blkif_back_rings *blk_rings =
+          (union blkif_back_rings *)blkif->ops->get_back_ring(blkif);
    int more_to_do;
     do {
@@ -578,14 +631,15 @@ do_block_io_op(struct xen_blkif *blkif)
  * and call the 'submit_bio' to pass it to the underlying storage.
  */
static int dispatch_rw_block_io(struct xen_blkif *blkif,
-                    struct blkif_request *req,
                    struct pending_req *pending_req)
{
+    struct blkif_request *req = (struct blkif_request *)blkif->req;
+    struct blkif_request_segment *seg_req = blkif->seg_req;
    struct phys_req preq;
-    struct seg_buf seg[BLKIF_MAX_SEGMENTS_PER_REQUEST];
+    struct seg_buf *seg = pending_req->seg;
    unsigned int nseg;
    struct bio *bio = NULL;
-    struct bio *biolist[BLKIF_MAX_SEGMENTS_PER_REQUEST];
+    struct bio **biolist = pending_req->biolist;
    int i, nbio = 0;
    int operation;
    struct blk_plug plug;
@@ -616,7 +670,7 @@ static int dispatch_rw_block_io(struct xen_blkif *blkif,
    nseg = req->u.rw.nr_segments;
     if (unlikely(nseg == 0 && operation != WRITE_FLUSH) ||
-        unlikely(nseg > BLKIF_MAX_SEGMENTS_PER_REQUEST)) {
+        unlikely(nseg > blkif->ops->max_seg)) {
          pr_debug(DRV_PFX "Bad number of segments in request (%d)\n",
                nseg);
          /* Haven't submitted any bio's yet. */
@@ -634,10 +688,10 @@ static int dispatch_rw_block_io(struct xen_blkif *blkif,
    pending_req->nr_pages  = nseg;
     for (i = 0; i < nseg; i++) {
-          seg[i].nsec = req->u.rw.seg[i].last_sect -
-               req->u.rw.seg[i].first_sect + 1;
-          if ((req->u.rw.seg[i].last_sect >= (PAGE_SIZE >> 9)) ||
-              (req->u.rw.seg[i].last_sect < req->u.rw.seg[i].first_sect))
+          seg[i].nsec = seg_req[i].last_sect -
+               seg_req[i].first_sect + 1;
+          if ((seg_req[i].last_sect >= (PAGE_SIZE >> 9)) ||
+              (seg_req[i].last_sect < seg_req[i].first_sect))
               goto fail_response;
          preq.nr_sects += seg[i].nsec;
@@ -676,7 +730,7 @@ static int dispatch_rw_block_io(struct xen_blkif *blkif,
     * the hypercall to unmap the grants - that is all done in
     * xen_blkbk_unmap.
     */
-    if (xen_blkbk_map(req, pending_req, seg))
+    if (xen_blkbk_map(req, seg_req, pending_req, seg))
          goto fail_flush;
     /*
@@ -746,7 +800,7 @@ static int dispatch_rw_block_io(struct xen_blkif *blkif,
    xen_blkbk_unmap(pending_req);
  fail_response:
    /* Haven't submitted any bio's yet. */
-    make_response(blkif, req->u.rw.id, req->operation, BLKIF_RSP_ERROR);
+    make_response(blkif, req->u.rw.id, req->operation, 0, BLKIF_RSP_ERROR);
    free_req(pending_req);
    msleep(1); /* back off a bit */
    return -EIO;
@@ -759,17 +813,28 @@ static int dispatch_rw_block_io(struct xen_blkif *blkif,
    return -EIO;
}
+struct blkif_segment_back_ring *
+    get_seg_back_ring(struct xen_blkif *blkif)
+{
+    return NULL;
+}
+void push_back_ring_rsp(union blkif_back_rings *blk_rings, int nr_page, int *notify)
+{
+    blk_rings->common.rsp_prod_pvt++;
+    RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&blk_rings->common, *notify);
+}
 /*
  * Put a response on the ring on how the operation fared.
  */
static void make_response(struct xen_blkif *blkif, u64 id,
-                 unsigned short op, int st)
+                 unsigned short op, int nr_page, int st)
{
    struct blkif_response  resp;
    unsigned long     flags;
-    union blkif_back_rings *blk_rings = &blkif->blk_rings;
+    union blkif_back_rings *blk_rings =
+          (union blkif_back_rings *)blkif->ops->get_back_ring(blkif);
    int notify;
     resp.id        = id;
@@ -794,8 +859,9 @@ static void make_response(struct xen_blkif *blkif, u64 id,
    default:
          BUG();
    }
-    blk_rings->common.rsp_prod_pvt++;
-    RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&blk_rings->common, notify);
+
+    blkif->ops->push_back_ring_rsp(blk_rings, nr_page, &notify);
+
    spin_unlock_irqrestore(&blkif->blk_ring_lock, flags);
    if (notify)
          notify_remote_via_irq(blkif->irq);
@@ -873,6 +939,14 @@ static int __init xen_blkif_init(void)
    return rc;
}
+struct blkback_ring_operation blkback_ring_ops = {
+    .get_back_ring = get_back_ring,
+    .copy_blkif_req = copy_blkif_req,
+    .copy_blkif_seg_req = copy_blkif_seg_req,
+    .push_back_ring_rsp = push_back_ring_rsp,
+    .max_seg = BLKIF_MAX_SEGMENTS_PER_REQUEST,
+};
+
module_init(xen_blkif_init);
 MODULE_LICENSE("Dual BSD/GPL");
diff --git a/drivers/block/xen-blkback/common.h b/drivers/block/xen-blkback/common.h
index 9ad3b5e..ce5556a 100644
--- a/drivers/block/xen-blkback/common.h
+++ b/drivers/block/xen-blkback/common.h
@@ -146,6 +146,11 @@ enum blkif_protocol {
    BLKIF_PROTOCOL_X86_64 = 3,
};
+enum blkif_backring_type {
+    BACKRING_TYPE_1 = 1,
+    BACKRING_TYPE_2 = 2,
+};
+
struct xen_vbd {
    /* What the domain refers to this vbd as. */
    blkif_vdev_t         handle;
@@ -163,6 +168,15 @@ struct xen_vbd {
};
 struct backend_info;
+struct xen_blkif;
+
+struct blkback_ring_operation {
+    void *(*get_back_ring) (struct xen_blkif *blkif);
+    void (*copy_blkif_req) (struct xen_blkif *blkif, RING_IDX rc);
+    void (*copy_blkif_seg_req) (struct xen_blkif *blkif);
+    void (*push_back_ring_rsp) (union blkif_back_rings *blk_rings, int nr_page, int *notify);
+    unsigned int max_seg;
+};
 struct xen_blkif {
    /* Unique identifier for this interface. */
@@ -171,7 +185,9 @@ struct xen_blkif {
    /* Physical parameters of the comms window. */
    unsigned int         irq;
    /* Comms information. */
+    struct blkback_ring_operation   *ops;
    enum blkif_protocol  blk_protocol;
+    enum blkif_backring_type blk_backring_type;
    union blkif_back_rings     blk_rings;
    void            *blk_ring;
    /* The VBD attached to this interface. */
@@ -179,6 +195,8 @@ struct xen_blkif {
    /* Back pointer to the backend_info. */
    struct backend_info  *be;
    /* Private fields. */
+    void *               req;
+    struct blkif_request_segment *seg_req;
    spinlock_t      blk_ring_lock;
    atomic_t        refcnt;
diff --git a/drivers/block/xen-blkback/xenbus.c b/drivers/block/xen-blkback/xenbus.c
index 4f66171..850ecad 100644
--- a/drivers/block/xen-blkback/xenbus.c
+++ b/drivers/block/xen-blkback/xenbus.c
@@ -36,6 +36,8 @@ static int connect_ring(struct backend_info *);
static void backend_changed(struct xenbus_watch *, const char **,
                   unsigned int);
+extern struct blkback_ring_operation blkback_ring_ops;
+
struct xenbus_device *xen_blkbk_xenbus(struct backend_info *be)
{
    return be->dev;
@@ -725,6 +727,12 @@ static int connect_ring(struct backend_info *be)
    int err;
     DPRINTK("%s", dev->otherend);
+    be->blkif->ops = &blkback_ring_ops;
+    be->blkif->req = kmalloc(sizeof(struct blkif_request),
+                    GFP_KERNEL);
+    be->blkif->seg_req = kmalloc(sizeof(struct blkif_request_segment)*
+                         be->blkif->ops->max_seg,  GFP_KERNEL);
+    be->blkif->blk_backring_type = BACKRING_TYPE_1;
     err = xenbus_gather(XBT_NIL, dev->otherend, "ring-ref", "%lu",
                   &ring_ref, "event-channel", "%u", &evtchn, NULL);


-ronghui
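For readers skimming the diff, the core of the refactor is the blkback_ring_operation table: dispatch code stops touching blkif->blk_rings directly and calls through function pointers installed in connect_ring(). The following is a minimal, self-contained sketch of that pattern only; every name in it (ring, ring_ops, blkif, v1_ops, respond) is a simplified stand-in, not the real Xen structures:

```c
#include <assert.h>

/* Hypothetical, simplified stand-ins for the real Xen types. Only the
 * pattern -- routing ring access through a per-backend ops table --
 * mirrors the patch. */

struct blkif;                      /* forward declaration, as in common.h */

struct ring {
    int req_cons;                  /* consumer index */
    int rsp_prod;                  /* producer index */
};

struct ring_ops {
    void *(*get_back_ring)(struct blkif *b);  /* cf. ops->get_back_ring */
    void (*push_rsp)(struct ring *r);         /* cf. ops->push_back_ring_rsp */
    unsigned int max_seg;                     /* cf. ops->max_seg */
};

struct blkif {
    const struct ring_ops *ops;    /* behaviour selected at connect time */
    struct ring r;
};

/* Type-1 implementation, analogous to blkback_ring_ops in the patch. */
static void *get_ring_v1(struct blkif *b) { return &b->r; }
static void push_rsp_v1(struct ring *r)   { r->rsp_prod++; }

static const struct ring_ops v1_ops = {
    .get_back_ring = get_ring_v1,
    .push_rsp      = push_rsp_v1,
    .max_seg       = 11,           /* stands in for BLKIF_MAX_SEGMENTS_PER_REQUEST */
};

/* Dispatch code never touches the ring directly; it goes through b->ops,
 * so a BACKRING_TYPE_2 implementation could be added without changing it. */
static void respond(struct blkif *b)
{
    struct ring *r = b->ops->get_back_ring(b);
    b->ops->push_rsp(r);
}
```

The real table also carries copy_blkif_req/copy_blkif_seg_req; the sketch keeps just enough to show why the segment limit and the response push become per-ring-type properties.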


--_000_A21691DE07B84740B5F0B81466D5148A23BCF23DSHSMSX102ccrcor_
Content-Type: text/html; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable

<html xmlns:v=3D"urn:schemas-microsoft-com:vml" xmlns:o=3D"urn:schemas-micr=
osoft-com:office:office" xmlns:w=3D"urn:schemas-microsoft-com:office:word" =
xmlns:x=3D"urn:schemas-microsoft-com:office:excel" xmlns:m=3D"http://schema=
s.microsoft.com/office/2004/12/omml" xmlns=3D"http://www.w3.org/TR/REC-html=
40">
<body lang=3D"EN-US" link=3D"blue" vlink=3D"purple">
<div class=3D"WordSection1">
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;/*<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43; * Retrieve from the 'pending_reqs' a fr=
ee pending_req structure to be used.<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43; */<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;static struct pending_req *alloc_req(voi=
d)<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;{<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp; struct pending_req *r=
eq =3D NULL;<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp; unsigned long flags;<=
o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp; unsigned int max_seg =
=3D BLKIF_MAX_SEGMENTS_PER_REQUEST;<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp;
<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp; spin_lock_irqsave(&am=
p;blkbk-&gt;pending_free_lock, flags);<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp; if (!list_empty(&amp;=
blkbk-&gt;pending_free)) {<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbs=
p;&nbsp;&nbsp; req =3D list_entry(blkbk-&gt;pending_free.next, struct pendi=
ng_req,<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbs=
p;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; =
free_list);<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbs=
p;&nbsp;&nbsp; list_del(&amp;req-&gt;free_list);<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp; }<o:p></o:p></span></=
p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp; spin_unlock_irqrestor=
e(&amp;blkbk-&gt;pending_free_lock, flags);<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp;
<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp; if (req !=3D NULL) {<=
o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbs=
p;&nbsp;&nbsp; req-&gt;map =3D kzalloc(sizeof(struct gnttab_map_grant_ref) =
*<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbs=
p;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; =
&nbsp;&nbsp; max_seg, GFP_KERNEL);<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbs=
p;&nbsp;&nbsp; req-&gt;unmap =3D kzalloc(sizeof(struct gnttab_unmap_grant_r=
ef) *<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbs=
p;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; =
&nbsp;&nbsp; max_seg, GFP_KERNEL);<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbs=
p;&nbsp;&nbsp; req-&gt;biolist =3D kzalloc(sizeof(struct bio *) * max_seg,<=
o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbs=
p;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; =
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; GFP_KERNEL);<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbs=
p;&nbsp;&nbsp; req-&gt;seg =3D kzalloc(sizeof(struct seg_buf) * max_seg,<o:=
p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbs=
p;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; =
&nbsp;&nbsp; GFP_KERNEL);<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbs=
p;&nbsp;&nbsp; req-&gt;pages =3D kzalloc(sizeof(struct page *) * max_seg,<o=
:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbs=
p;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; =
&nbsp;&nbsp; GFP_KERNEL);<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbs=
p;&nbsp;&nbsp; if (!req-&gt;map || !req-&gt;unmap || !req-&gt;biolist ||<o:=
p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbs=
p;&nbsp;&nbsp; &nbsp;&nbsp;&nbsp; !req-&gt;seg || !req-&gt;pages) {<o:p></o=
:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbs=
p;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; free_req(req);<o:p></o:p></spa=
n></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbs=
p;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; req =3D NULL;<o:p></o:p></span=
></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbs=
p;&nbsp;&nbsp; }<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp; }<o:p></o:p></span></=
p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp; return req;<o:p></o:p=
></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;}<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">/*<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp; * Routines for managing virtual block =
devices (vbds).<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp; */<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">@@ -308,11 &#43;340,6 @@ int xen_blkif_schedu=
le(void *arg)<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; xen_blkif_put(blkif);<o:p>=
</o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;"><o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp;&nbsp; return 0;<o:p></o:p>=
</span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">-}<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">-<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">-struct seg_buf {<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">-&nbsp;&nbsp;&nbsp; unsigned long buf;<o:p></=
o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">-&nbsp;&nbsp;&nbsp; unsigned int nsec;<o:p></=
o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">};<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">/*<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp; * Unmap the grant references, and also=
 remove the M2P over-rides<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">@@ -320,8 &#43;347,8 @@ struct seg_buf {<o:p>=
</o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp; */<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">static void xen_blkbk_unmap(struct pending_re=
q *req)<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">{<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">-&nbsp;&nbsp;&nbsp; struct gnttab_unmap_grant=
_ref unmap[BLKIF_MAX_SEGMENTS_PER_REQUEST];<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">-&nbsp;&nbsp;&nbsp; struct page *pages[BLKIF_=
MAX_SEGMENTS_PER_REQUEST];<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp; struct gnttab_unmap_g=
rant_ref *unmap =3D req-&gt;unmap;<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp; struct page **pages =
=3D req-&gt;pages;<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; unsigned int i, invcount =
=3D 0;<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; grant_handle_t handle;<o:p=
></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; int ret;<o:p></o:p></span>=
</p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">@@ -341,11 &#43;368,13 @@ static void xen_blk=
bk_unmap(struct pending_req *req)<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; BUG_ON(ret);<o:p></o:p></s=
pan></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">}<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;"><o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">static int xen_blkbk_map(struct blkif_request=
 *req,<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbs=
p;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; struct blkif_request_segment *=
seg_req,<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nb=
sp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; &nbsp;struct pending_req *pending_r=
eq,<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nb=
sp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; &nbsp;struct seg_buf seg[])<o:p></o=
:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">{<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">-&nbsp;&nbsp;&nbsp; struct gnttab_map_grant_r=
ef map[BLKIF_MAX_SEGMENTS_PER_REQUEST];<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp; struct gnttab_map_gra=
nt_ref *map =3D pending_req-&gt;map;<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; int i;<o:p></o:p></span></=
p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; int nseg =3D req-&gt;u.rw.=
nr_segments;<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; int ret =3D 0;<o:p></o:p><=
/span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">@@ -362,7 &#43;391,7 @@ static int xen_blkbk_=
map(struct blkif_request *req,<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nb=
sp;&nbsp; if (pending_req-&gt;operation !=3D BLKIF_OP_READ)<o:p></o:p></spa=
n></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nb=
sp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; flags |=3D GNTMAP_readonly;<o:p></o=
:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nb=
sp;&nbsp; gnttab_set_map_op(&amp;map[i], vaddr(pending_req, i), flags,<o:p>=
</o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
bsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; &nbs=
p; req-&gt;u.rw.seg[i].gref,<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbs=
p;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; =
&nbsp; seg_req[i].gref,<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nb=
sp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; &nbsp=
;&nbsp;pending_req-&gt;blkif-&gt;domid);<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; }<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;"><o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">@@ -387,14 &#43;416,15 @@ static int xen_blkb=
k_map(struct blkif_request *req,<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nb=
sp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; continue;<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;"><o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nb=
sp;&nbsp;&nbsp; seg[i].buf&nbsp; =3D map[i].dev_bus_addr |<o:p></o:p></span=
></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
bsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; (req-&gt;u.rw.seg[i].first_sect &l=
t;&lt; 9);<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbs=
p;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; (seg_req[i].first_sect &lt;&lt=
; 9);<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; }<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; return ret;<o:p></o:p></sp=
an></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">}<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;"><o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">-static int dispatch_discard_io(struct xen_bl=
kif *blkif,<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
bsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; stru=
ct blkif_request *req)<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;static int dispatch_discard_io(struct xe=
n_blkif *blkif)<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">{<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp; struct blkif_request =
*blkif_req =3D (struct blkif_request *)blkif-&gt;req;<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp; struct blkif_request_=
discard *req =3D &amp;blkif_req-&gt;u.discard;<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; int err =3D 0;<o:p></o:p><=
/span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; int status =3D BLKIF_RSP_O=
KAY;<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; struct block_device *bdev =
=3D blkif-&gt;vbd.bdev;<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">@@ -404,11 &#43;434,11 @@ static int dispatch=
_discard_io(struct xen_blkif *blkif,<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;"><o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp;&nbsp; xen_blkif_get(blkif)=
;<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; secure =3D (blkif-&gt;vbd.=
discard_secure &amp;&amp;<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
bsp;&nbsp; (req-&gt;u.discard.flag &amp; BLKIF_DISCARD_SECURE)) ?<o:p></o:p=
></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbs=
p;&nbsp;&nbsp; (req-&gt;flag &amp; BLKIF_DISCARD_SECURE)) ?<o:p></o:p></spa=
n></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nb=
sp;&nbsp; &nbsp;BLKDEV_DISCARD_SECURE : 0;<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;"><o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">-&nbsp;&nbsp;&nbsp; err =3D blkdev_issue_disc=
ard(bdev, req-&gt;u.discard.sector_number,<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
bsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; &nbs=
p;&nbsp; req-&gt;u.discard.nr_sectors,<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp; err =3D blkdev_issue_=
discard(bdev, req-&gt;sector_number,<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbs=
p;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; =
&nbsp;&nbsp; req-&gt;nr_sectors,<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nb=
sp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; &nbsp=
;&nbsp;&nbsp;GFP_KERNEL, secure);<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;"><o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp;&nbsp; if (err =3D=3D -EOPN=
OTSUPP) {<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">@@ -417,7 &#43;447,7 @@ static int dispatch_d=
iscard_io(struct xen_blkif *blkif,<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; } else if (err)<o:p></o:p>=
</span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nb=
sp;&nbsp; status =3D BLKIF_RSP_ERROR;<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;"><o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">-&nbsp;&nbsp;&nbsp; make_response(blkif, req-=
&gt;u.discard.id, req-&gt;operation, status);<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp; make_response(blkif, =
req-&gt;id, BLKIF_OP_DISCARD, 0, status);<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; xen_blkif_put(blkif);<o:p>=
</o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; return err;<o:p></o:p></sp=
an></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">}<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">@@ -470,7 &#43;500,8 @@ static void __end_blo=
ck_io_op(struct pending_req *pending_req, int error)<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; if (atomic_dec_and_test(&a=
mp;pending_req-&gt;pendcnt)) {<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nb=
sp;&nbsp; xen_blkbk_unmap(pending_req);<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nb=
sp;&nbsp; make_response(pending_req-&gt;blkif, pending_req-&gt;id,<o:p></o:=
p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
bsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; &nbsp;&nbsp;&nbsp;&nbsp;&nbsp; pen=
ding_req-&gt;operation, pending_req-&gt;status);<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbs=
p;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;=
 pending_req-&gt;operation, pending_req-&gt;nr_pages,<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbs=
p;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;=
 pending_req-&gt;status);<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nb=
sp;&nbsp; xen_blkif_put(pending_req-&gt;blkif);<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nb=
sp;&nbsp; if (atomic_read(&amp;pending_req-&gt;blkif-&gt;refcnt) &lt;=3D 2)=
 {<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nb=
sp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; if (atomic_read(&amp;pending_req-&g=
t;blkif-&gt;drain))<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">@@ -489,8 &#43;520,37 @@ static void end_bloc=
k_io_op(struct bio *bio, int error)<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; bio_put(bio);<o:p></o:p></=
span></p>
 }

+void *get_back_ring(struct xen_blkif *blkif)
+{
+	return (void *)&blkif->blk_rings;
+}
+
+void copy_blkif_req(struct xen_blkif *blkif, RING_IDX rc)
+{
+	struct blkif_request *req = (struct blkif_request *)blkif->req;
+	union blkif_back_rings *blk_rings = &blkif->blk_rings;
+	switch (blkif->blk_protocol) {
+	case BLKIF_PROTOCOL_NATIVE:
+		memcpy(req, RING_GET_REQUEST(&blk_rings->native, rc),
+		       sizeof(struct blkif_request));
+		break;
+	case BLKIF_PROTOCOL_X86_32:
+		blkif_get_x86_32_req(req, RING_GET_REQUEST(&blk_rings->x86_32, rc));
+		break;
+	case BLKIF_PROTOCOL_X86_64:
+		blkif_get_x86_64_req(req, RING_GET_REQUEST(&blk_rings->x86_64, rc));
+		break;
+	default:
+		BUG();
+	}
+}
+
+void copy_blkif_seg_req(struct xen_blkif *blkif)
+{
+	struct blkif_request *req = (struct blkif_request *)blkif->req;
+
+	blkif->seg_req = req->u.rw.seg;
+}
 /*
  * Function to copy the from the ring buffer the 'struct blkif_request'
  * (which has the sectors we want, number of them, grant references, etc),
@@ -499,8 +559,9 @@ static void end_block_io_op(struct bio *bio, int error)
 static int
 __do_block_io_op(struct xen_blkif *blkif)
 {
-	union blkif_back_rings *blk_rings = &blkif->blk_rings;
-	struct blkif_request req;
+	union blkif_back_rings *blk_rings =
+		(union blkif_back_rings *)blkif->ops->get_back_ring(blkif);
+	struct blkif_request *req = (struct blkif_request *)blkif->req;
 	struct pending_req *pending_req;
 	RING_IDX rc, rp;
 	int more_to_do = 0;
@@ -526,28 +587,19 @@ __do_block_io_op(struct xen_blkif *blkif)
 			break;
 		}

-		switch (blkif->blk_protocol) {
-		case BLKIF_PROTOCOL_NATIVE:
-			memcpy(&req, RING_GET_REQUEST(&blk_rings->native, rc), sizeof(req));
-			break;
-		case BLKIF_PROTOCOL_X86_32:
-			blkif_get_x86_32_req(&req, RING_GET_REQUEST(&blk_rings->x86_32, rc));
-			break;
-		case BLKIF_PROTOCOL_X86_64:
-			blkif_get_x86_64_req(&req, RING_GET_REQUEST(&blk_rings->x86_64, rc));
-			break;
-		default:
-			BUG();
-		}
+		blkif->ops->copy_blkif_req(blkif, rc);
+
+		blkif->ops->copy_blkif_seg_req(blkif);
+
 		blk_rings->common.req_cons = ++rc; /* before make_response() */

 		/* Apply all sanity checks to /private copy/ of request. */
 		barrier();
-		if (unlikely(req.operation == BLKIF_OP_DISCARD)) {
+		if (unlikely(req->operation == BLKIF_OP_DISCARD)) {
 			free_req(pending_req);
-			if (dispatch_discard_io(blkif, &req))
+			if (dispatch_discard_io(blkif))
 				break;
-		} else if (dispatch_rw_block_io(blkif, &req, pending_req))
+		} else if (dispatch_rw_block_io(blkif, pending_req))
 			break;

 		/* Yield point for this unbounded loop. */
@@ -560,7 +612,8 @@ __do_block_io_op(struct xen_blkif *blkif)
 static int
 do_block_io_op(struct xen_blkif *blkif)
 {
-	union blkif_back_rings *blk_rings = &blkif->blk_rings;
+	union blkif_back_rings *blk_rings =
+		(union blkif_back_rings *)blkif->ops->get_back_ring(blkif);
 	int more_to_do;

 	do {
@@ -578,14 +631,15 @@ do_block_io_op(struct xen_blkif *blkif)
  * and call the 'submit_bio' to pass it to the underlying storage.
  */
 static int dispatch_rw_block_io(struct xen_blkif *blkif,
-				struct blkif_request *req,
 				struct pending_req *pending_req)
 {
+	struct blkif_request *req = (struct blkif_request *)blkif->req;
+	struct blkif_request_segment *seg_req = blkif->seg_req;
 	struct phys_req preq;
-	struct seg_buf seg[BLKIF_MAX_SEGMENTS_PER_REQUEST];
+	struct seg_buf *seg = pending_req->seg;
 	unsigned int nseg;
 	struct bio *bio = NULL;
-	struct bio *biolist[BLKIF_MAX_SEGMENTS_PER_REQUEST];
+	struct bio **biolist = pending_req->biolist;
 	int i, nbio = 0;
 	int operation;
 	struct blk_plug plug;
@@ -616,7 +670,7 @@ static int dispatch_rw_block_io(struct xen_blkif *blkif,
 	nseg = req->u.rw.nr_segments;

 	if (unlikely(nseg == 0 && operation != WRITE_FLUSH) ||
-	    unlikely(nseg > BLKIF_MAX_SEGMENTS_PER_REQUEST)) {
+	    unlikely(nseg > blkif->ops->max_seg)) {
 		pr_debug(DRV_PFX "Bad number of segments in request (%d)\n",
			 nseg);
 		/* Haven't submitted any bio's yet. */
@@ -634,10 +688,10 @@ static int dispatch_rw_block_io(struct xen_blkif *blkif,
 	pending_req->nr_pages  = nseg;

 	for (i = 0; i < nseg; i++) {
-		seg[i].nsec = req->u.rw.seg[i].last_sect -
-			req->u.rw.seg[i].first_sect + 1;
-		if ((req->u.rw.seg[i].last_sect >= (PAGE_SIZE >> 9)) ||
-		    (req->u.rw.seg[i].last_sect < req->u.rw.seg[i].first_sect))
+		seg[i].nsec = seg_req[i].last_sect -
+			seg_req[i].first_sect + 1;
+		if ((seg_req[i].last_sect >= (PAGE_SIZE >> 9)) ||
+		    (seg_req[i].last_sect < seg_req[i].first_sect))
 			goto fail_response;
 		preq.nr_sects += seg[i].nsec;

@@ -676,7 +730,7 @@ static int dispatch_rw_block_io(struct xen_blkif *blkif,
 	 * the hypercall to unmap the grants - that is all done in
 	 * xen_blkbk_unmap.
 	 */
-	if (xen_blkbk_map(req, pending_req, seg))
+	if (xen_blkbk_map(req, seg_req, pending_req, seg))
 		goto fail_flush;

 	/*
@@ -746,7 +800,7 @@ static int dispatch_rw_block_io(struct xen_blkif *blkif,
 	xen_blkbk_unmap(pending_req);
 fail_response:
 	/* Haven't submitted any bio's yet. */
-	make_response(blkif, req->u.rw.id, req->operation, BLKIF_RSP_ERROR);
+	make_response(blkif, req->u.rw.id, req->operation, 0, BLKIF_RSP_ERROR);
 	free_req(pending_req);
 	msleep(1); /* back off a bit */
 	return -EIO;
@@ -759,17 +813,28 @@ static int dispatch_rw_block_io(struct xen_blkif *blkif,
 	return -EIO;
 }

+struct blkif_segment_back_ring *
+	get_seg_back_ring(struct xen_blkif *blkif)
+{
+	return NULL;
+}
+
+void push_back_ring_rsp(union blkif_back_rings *blk_rings, int nr_page, int *notify)
+{
+	blk_rings->common.rsp_prod_pvt++;
+	RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&blk_rings->common, *notify);
+}
+
 /*
  * Put a response on the ring on how the operation fared.
  */
 static void make_response(struct xen_blkif *blkif, u64 id,
-			  unsigned short op, int st)
+			  unsigned short op, int nr_page, int st)
 {
 	struct blkif_response  resp;
 	unsigned long     flags;
-	union blkif_back_rings *blk_rings = &blkif->blk_rings;
+	union blkif_back_rings *blk_rings =
+		(union blkif_back_rings *)blkif->ops->get_back_ring(blkif);
 	int notify;
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;"><o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp;&nbsp; resp.id&nbsp;&nbsp;&=
nbsp;&nbsp;&nbsp;&nbsp;&nbsp; =3D id;<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">@@ -794,8 &#43;859,9 @@ static void make_resp=
onse(struct xen_blkif *blkif, u64 id,<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; default:<o:p></o:p></span>=
</p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nb=
sp;&nbsp; BUG();<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; }<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">-&nbsp;&nbsp;&nbsp; blk_rings-&gt;common.rsp_=
prod_pvt&#43;&#43;;<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">-&nbsp;&nbsp;&nbsp; RING_PUSH_RESPONSES_AND_C=
HECK_NOTIFY(&amp;blk_rings-&gt;common, notify);<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp; blkif-&gt;ops-&gt;pus=
h_back_ring_rsp(blk_rings, nr_page, &amp;notify);<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; spin_unlock_irqrestore(&am=
p;blkif-&gt;blk_ring_lock, flags);<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; if (notify)<o:p></o:p></sp=
an></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nb=
sp;&nbsp; notify_remote_via_irq(blkif-&gt;irq);<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">@@ -873,6 &#43;939,14 @@ static int __init xe=
n_blkif_init(void)<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; return rc;<o:p></o:p></spa=
n></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">}<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;"><o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;struct blkback_ring_operation blkback_ri=
ng_ops =3D {<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp; .get_back_ring =3D ge=
t_back_ring,<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp; .copy_blkif_req =3D c=
opy_blkif_req,<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp; .copy_blkif_seg_req =
=3D copy_blkif_seg_req,<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp; .push_back_ring_rsp =
=3D push_back_ring_rsp,<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp; .max_seg =3D BLKIF_MA=
X_SEGMENTS_PER_REQUEST,<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;};<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">module_init(xen_blkif_init);<o:p></o:p></span=
></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;"><o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;MODULE_LICENSE(&quot;Dual BSD/GPL&quot;=
);<o:p></o:p></span></p>
diff --git a/drivers/block/xen-blkback/common.h b/drivers/block/xen-blkback/common.h
index 9ad3b5e..ce5556a 100644
--- a/drivers/block/xen-blkback/common.h
+++ b/drivers/block/xen-blkback/common.h
@@ -146,6 +146,11 @@ enum blkif_protocol {
 	BLKIF_PROTOCOL_X86_64 = 3,
 };
 
+enum blkif_backring_type {
+	BACKRING_TYPE_1 = 1,
+	BACKRING_TYPE_2 = 2,
+};
+
 struct xen_vbd {
 	/* What the domain refers to this vbd as. */
 	blkif_vdev_t		handle;
@@ -163,6 +168,15 @@ struct xen_vbd {
 };
 
 struct backend_info;
+struct xen_blkif;
+
+struct blkback_ring_operation {
+	void *(*get_back_ring) (struct xen_blkif *blkif);
+	void (*copy_blkif_req) (struct xen_blkif *blkif, RING_IDX rc);
+	void (*copy_blkif_seg_req) (struct xen_blkif *blkif);
+	void (*push_back_ring_rsp) (union blkif_back_rings *blk_rings, int nr_page, int *notify);
+	unsigned int max_seg;
+};
 
 struct xen_blkif {
 	/* Unique identifier for this interface. */
@@ -171,7 +185,9 @@ struct xen_blkif {
 	/* Physical parameters of the comms window. */
 	unsigned int		irq;
 	/* Comms information. */
+	struct blkback_ring_operation	*ops;
 	enum blkif_protocol	blk_protocol;
+	enum blkif_backring_type blk_backring_type;
 	union blkif_back_rings	blk_rings;
 	void			*blk_ring;
 	/* The VBD attached to this interface. */
@@ -179,6 +195,8 @@ struct xen_blkif {
 	/* Back pointer to the backend_info. */
 	struct backend_info	*be;
 	/* Private fields. */
+	void			*req;
+	struct blkif_request_segment *seg_req;
 	spinlock_t		blk_ring_lock;
 	atomic_t		refcnt;
 
diff --git a/drivers/block/xen-blkback/xenbus.c b/drivers/block/xen-blkback/xenbus.c
index 4f66171..850ecad 100644
--- a/drivers/block/xen-blkback/xenbus.c
+++ b/drivers/block/xen-blkback/xenbus.c
@@ -36,6 +36,8 @@ static int connect_ring(struct backend_info *);
 static void backend_changed(struct xenbus_watch *, const char **,
 			    unsigned int);
 
+extern struct blkback_ring_operation blkback_ring_ops;
+
 struct xenbus_device *xen_blkbk_xenbus(struct backend_info *be)
 {
 	return be->dev;
@@ -725,6 +727,12 @@ static int connect_ring(struct backend_info *be)
 	int err;
 
 	DPRINTK("%s", dev->otherend);
+	be->blkif->ops = &blkback_ring_ops;
+	be->blkif->req = kmalloc(sizeof(struct blkif_request),
+				 GFP_KERNEL);
+	be->blkif->seg_req = kmalloc(sizeof(struct blkif_request_segment) *
+				     be->blkif->ops->max_seg, GFP_KERNEL);
+	be->blkif->blk_backring_type = BACKRING_TYPE_1;
 
 	err = xenbus_gather(XBT_NIL, dev->otherend, "ring-ref", "%lu",
 			    &ring_ref, "event-channel", "%u", &evtchn, NULL);
 

-ronghui

</div>
</body>
</html>

--_000_A21691DE07B84740B5F0B81466D5148A23BCF23DSHSMSX102ccrcor_--

--_004_A21691DE07B84740B5F0B81466D5148A23BCF23DSHSMSX102ccrcor_
Content-Type: application/octet-stream; name="vbd_enlarge_segments_03.patch"
Content-Description: vbd_enlarge_segments_03.patch
Content-Disposition: attachment; filename="vbd_enlarge_segments_03.patch";
	size=16583; creation-date="Thu, 16 Aug 2012 10:19:11 GMT";
	modification-date="Thu, 16 Aug 2012 17:34:40 GMT"
Content-Transfer-Encoding: base64

Y29tbWl0IDBlN2RjMzVlMDE2ZmYwYzVkYTBhM2Y5ODNkZGQ4NGQ4ZmE1NTkxYWMKQXV0aG9yOiBS
b25naHVpIER1YW4gPHJvbmdodWkuZHVhbkBpbnRlbC5jb20+CkRhdGU6ICAgVGh1IEF1ZyAxNiAy
MToyMTo0MyAyMDEyICswODAwCgogICAgcmVmYWN0b3JpbmcgYmFsYmFjawoKZGlmZiAtLWdpdCBh
L2RyaXZlcnMvYmxvY2sveGVuLWJsa2JhY2svYmxrYmFjay5jIGIvZHJpdmVycy9ibG9jay94ZW4t
YmxrYmFjay9ibGtiYWNrLmMKaW5kZXggNzNmMTk2Yy4uYjQ3NjdmNSAxMDA2NDQKLS0tIGEvZHJp
dmVycy9ibG9jay94ZW4tYmxrYmFjay9ibGtiYWNrLmMKKysrIGIvZHJpdmVycy9ibG9jay94ZW4t
YmxrYmFjay9ibGtiYWNrLmMKQEAgLTY0LDYgKzY0LDExIEBAIE1PRFVMRV9QQVJNX0RFU0MocmVx
cywgIk51bWJlciBvZiBibGtiYWNrIHJlcXVlc3RzIHRvIGFsbG9jYXRlIik7CiBzdGF0aWMgdW5z
aWduZWQgaW50IGxvZ19zdGF0czsKIG1vZHVsZV9wYXJhbShsb2dfc3RhdHMsIGludCwgMDY0NCk7
CiAKK3N0cnVjdCBzZWdfYnVmIHsKKyAgICAgICAgdW5zaWduZWQgbG9uZyBidWY7CisgICAgICAg
IHVuc2lnbmVkIGludCBuc2VjOworfTsKKwogLyoKICAqIEVhY2ggb3V0c3RhbmRpbmcgcmVxdWVz
dCB0aGF0IHdlJ3ZlIHBhc3NlZCB0byB0aGUgbG93ZXIgZGV2aWNlIGxheWVycyBoYXMgYQogICog
J3BlbmRpbmdfcmVxJyBhbGxvY2F0ZWQgdG8gaXQuIEVhY2ggYnVmZmVyX2hlYWQgdGhhdCBjb21w
bGV0ZXMgZGVjcmVtZW50cwpAQCAtNzgsNiArODMsMTEgQEAgc3RydWN0IHBlbmRpbmdfcmVxIHsK
IAl1bnNpZ25lZCBzaG9ydAkJb3BlcmF0aW9uOwogCWludAkJCXN0YXR1czsKIAlzdHJ1Y3QgbGlz
dF9oZWFkCWZyZWVfbGlzdDsKKwlzdHJ1Y3QgZ250dGFiX21hcF9ncmFudF9yZWYJKm1hcDsKKwlz
dHJ1Y3QgZ250dGFiX3VubWFwX2dyYW50X3JlZgkqdW5tYXA7CisJc3RydWN0IHNlZ19idWYJCQkq
c2VnOworCXN0cnVjdCBiaW8JCQkqKmJpb2xpc3Q7CisJc3RydWN0IHBhZ2UJCQkqKnBhZ2VzOwog
fTsKIAogI2RlZmluZSBCTEtCQUNLX0lOVkFMSURfSEFORExFICh+MCkKQEAgLTEyMywyOCArMTMz
LDkgQEAgc3RhdGljIGlubGluZSB1bnNpZ25lZCBsb25nIHZhZGRyKHN0cnVjdCBwZW5kaW5nX3Jl
cSAqcmVxLCBpbnQgc2VnKQogCiBzdGF0aWMgaW50IGRvX2Jsb2NrX2lvX29wKHN0cnVjdCB4ZW5f
YmxraWYgKmJsa2lmKTsKIHN0YXRpYyBpbnQgZGlzcGF0Y2hfcndfYmxvY2tfaW8oc3RydWN0IHhl
bl9ibGtpZiAqYmxraWYsCi0JCQkJc3RydWN0IGJsa2lmX3JlcXVlc3QgKnJlcSwKIAkJCQlzdHJ1
Y3QgcGVuZGluZ19yZXEgKnBlbmRpbmdfcmVxKTsKIHN0YXRpYyB2b2lkIG1ha2VfcmVzcG9uc2Uo
c3RydWN0IHhlbl9ibGtpZiAqYmxraWYsIHU2NCBpZCwKLQkJCSAgdW5zaWduZWQgc2hvcnQgb3As
IGludCBzdCk7Ci0KLS8qCi0gKiBSZXRyaWV2ZSBmcm9tIHRoZSAncGVuZGluZ19yZXFzJyBhIGZy
ZWUgcGVuZGluZ19yZXEgc3RydWN0dXJlIHRvIGJlIHVzZWQuCi0gKi8KLXN0YXRpYyBzdHJ1Y3Qg
cGVuZGluZ19yZXEgKmFsbG9jX3JlcSh2b2lkKQotewotCXN0cnVjdCBwZW5kaW5nX3JlcSAqcmVx
ID0gTlVMTDsKLQl1bnNpZ25lZCBsb25nIGZsYWdzOwotCi0Jc3Bpbl9sb2NrX2lycXNhdmUoJmJs
a2JrLT5wZW5kaW5nX2ZyZWVfbG9jaywgZmxhZ3MpOwotCWlmICghbGlzdF9lbXB0eSgmYmxrYmst
PnBlbmRpbmdfZnJlZSkpIHsKLQkJcmVxID0gbGlzdF9lbnRyeShibGtiay0+cGVuZGluZ19mcmVl
Lm5leHQsIHN0cnVjdCBwZW5kaW5nX3JlcSwKLQkJCQkgZnJlZV9saXN0KTsKLQkJbGlzdF9kZWwo
JnJlcS0+ZnJlZV9saXN0KTsKLQl9Ci0Jc3Bpbl91bmxvY2tfaXJxcmVzdG9yZSgmYmxrYmstPnBl
bmRpbmdfZnJlZV9sb2NrLCBmbGFncyk7Ci0JcmV0dXJuIHJlcTsKLX0KKwkJCSAgdW5zaWduZWQg
c2hvcnQgb3AsIGludCBucl9wYWdlLCBpbnQgc3QpOwogCiAvKgogICogUmV0dXJuIHRoZSAncGVu
ZGluZ19yZXEnIHN0cnVjdHVyZSBiYWNrIHRvIHRoZSBmcmVlcG9vbC4gV2UgYWxzbwpAQCAtMTU1
LDYgKzE0NiwxMiBAQCBzdGF0aWMgdm9pZCBmcmVlX3JlcShzdHJ1Y3QgcGVuZGluZ19yZXEgKnJl
cSkKIAl1bnNpZ25lZCBsb25nIGZsYWdzOwogCWludCB3YXNfZW1wdHk7CiAKKwlrZnJlZShyZXEt
Pm1hcCk7CisJa2ZyZWUocmVxLT51bm1hcCk7CisJa2ZyZWUocmVxLT5iaW9saXN0KTsKKwlrZnJl
ZShyZXEtPnNlZyk7CisJa2ZyZWUocmVxLT5wYWdlcyk7CisKIAlzcGluX2xvY2tfaXJxc2F2ZSgm
YmxrYmstPnBlbmRpbmdfZnJlZV9sb2NrLCBmbGFncyk7CiAJd2FzX2VtcHR5ID0gbGlzdF9lbXB0
eSgmYmxrYmstPnBlbmRpbmdfZnJlZSk7CiAJbGlzdF9hZGQoJnJlcS0+ZnJlZV9saXN0LCAmYmxr
YmstPnBlbmRpbmdfZnJlZSk7CkBAIC0xNjIsNyArMTU5LDQyIEBAIHN0YXRpYyB2b2lkIGZyZWVf
cmVxKHN0cnVjdCBwZW5kaW5nX3JlcSAqcmVxKQogCWlmICh3YXNfZW1wdHkpCiAJCXdha2VfdXAo
JmJsa2JrLT5wZW5kaW5nX2ZyZWVfd3EpOwogfQotCisvKgorICogUmV0cmlldmUgZnJvbSB0aGUg
J3BlbmRpbmdfcmVxcycgYSBmcmVlIHBlbmRpbmdfcmVxIHN0cnVjdHVyZSB0byBiZSB1c2VkLgor
ICovCitzdGF0aWMgc3RydWN0IHBlbmRpbmdfcmVxICphbGxvY19yZXEodm9pZCkKK3sKKwlzdHJ1
Y3QgcGVuZGluZ19yZXEgKnJlcSA9IE5VTEw7CisJdW5zaWduZWQgbG9uZyBmbGFnczsKKwl1bnNp
Z25lZCBpbnQgbWF4X3NlZyA9IEJMS0lGX01BWF9TRUdNRU5UU19QRVJfUkVRVUVTVDsKKwkKKwlz
cGluX2xvY2tfaXJxc2F2ZSgmYmxrYmstPnBlbmRpbmdfZnJlZV9sb2NrLCBmbGFncyk7CisJaWYg
KCFsaXN0X2VtcHR5KCZibGtiay0+cGVuZGluZ19mcmVlKSkgeworCQlyZXEgPSBsaXN0X2VudHJ5
KGJsa2JrLT5wZW5kaW5nX2ZyZWUubmV4dCwgc3RydWN0IHBlbmRpbmdfcmVxLAorCQkJCSBmcmVl
X2xpc3QpOworCQlsaXN0X2RlbCgmcmVxLT5mcmVlX2xpc3QpOworCX0KKwlzcGluX3VubG9ja19p
cnFyZXN0b3JlKCZibGtiay0+cGVuZGluZ19mcmVlX2xvY2ssIGZsYWdzKTsKKwkKKwlpZiAocmVx
ICE9IE5VTEwpIHsKKwkJcmVxLT5tYXAgPSBremFsbG9jKHNpemVvZihzdHJ1Y3QgZ250dGFiX21h
cF9ncmFudF9yZWYpICoKKwkJCQkgICBtYXhfc2VnLCBHRlBfS0VSTkVMKTsKKwkJcmVxLT51bm1h
cCA9IGt6YWxsb2Moc2l6ZW9mKHN0cnVjdCBnbnR0YWJfdW5tYXBfZ3JhbnRfcmVmKSAqCisJCQkJ
ICAgbWF4X3NlZywgR0ZQX0tFUk5FTCk7CisJCXJlcS0+YmlvbGlzdCA9IGt6YWxsb2Moc2l6ZW9m
KHN0cnVjdCBiaW8gKikgKiBtYXhfc2VnLAorCQkJCSAgICAgICBHRlBfS0VSTkVMKTsKKwkJcmVx
LT5zZWcgPSBremFsbG9jKHNpemVvZihzdHJ1Y3Qgc2VnX2J1ZikgKiBtYXhfc2VnLAorCQkJCSAg
IEdGUF9LRVJORUwpOworCQlyZXEtPnBhZ2VzID0ga3phbGxvYyhzaXplb2Yoc3RydWN0IHBhZ2Ug
KikgKiBtYXhfc2VnLAorCQkJCSAgIEdGUF9LRVJORUwpOworCQlpZiAoIXJlcS0+bWFwIHx8ICFy
ZXEtPnVubWFwIHx8ICFyZXEtPmJpb2xpc3QgfHwKKwkJICAgICFyZXEtPnNlZyB8fCAhcmVxLT5w
YWdlcykgeworCQkJZnJlZV9yZXEocmVxKTsKKwkJCXJlcSA9IE5VTEw7CisJCX0KKwl9CisJcmV0
dXJuIHJlcTsKK30KIC8qCiAgKiBSb3V0aW5lcyBmb3IgbWFuYWdpbmcgdmlydHVhbCBibG9jayBk
ZXZpY2VzICh2YmRzKS4KICAqLwpAQCAtMzA4LDExICszNDAsNiBAQCBpbnQgeGVuX2Jsa2lmX3Nj
aGVkdWxlKHZvaWQgKmFyZykKIAl4ZW5fYmxraWZfcHV0KGJsa2lmKTsKIAogCXJldHVybiAwOwot
fQotCi1zdHJ1Y3Qgc2VnX2J1ZiB7Ci0JdW5zaWduZWQgbG9uZyBidWY7Ci0JdW5zaWduZWQgaW50
IG5zZWM7CiB9OwogLyoKICAqIFVubWFwIHRoZSBncmFudCByZWZlcmVuY2VzLCBhbmQgYWxzbyBy
ZW1vdmUgdGhlIE0yUCBvdmVyLXJpZGVzCkBAIC0zMjAsOCArMzQ3LDggQEAgc3RydWN0IHNlZ19i
dWYgewogICovCiBzdGF0aWMgdm9pZCB4ZW5fYmxrYmtfdW5tYXAoc3RydWN0IHBlbmRpbmdfcmVx
ICpyZXEpCiB7Ci0Jc3RydWN0IGdudHRhYl91bm1hcF9ncmFudF9yZWYgdW5tYXBbQkxLSUZfTUFY
X1NFR01FTlRTX1BFUl9SRVFVRVNUXTsKLQlzdHJ1Y3QgcGFnZSAqcGFnZXNbQkxLSUZfTUFYX1NF
R01FTlRTX1BFUl9SRVFVRVNUXTsKKwlzdHJ1Y3QgZ250dGFiX3VubWFwX2dyYW50X3JlZiAqdW5t
YXAgPSByZXEtPnVubWFwOworCXN0cnVjdCBwYWdlICoqcGFnZXMgPSByZXEtPnBhZ2VzOwogCXVu
c2lnbmVkIGludCBpLCBpbnZjb3VudCA9IDA7CiAJZ3JhbnRfaGFuZGxlX3QgaGFuZGxlOwogCWlu
dCByZXQ7CkBAIC0zNDEsMTEgKzM2OCwxMyBAQCBzdGF0aWMgdm9pZCB4ZW5fYmxrYmtfdW5tYXAo
c3RydWN0IHBlbmRpbmdfcmVxICpyZXEpCiAJQlVHX09OKHJldCk7CiB9CiAKKwogc3RhdGljIGlu
dCB4ZW5fYmxrYmtfbWFwKHN0cnVjdCBibGtpZl9yZXF1ZXN0ICpyZXEsCisJCQkgc3RydWN0IGJs
a2lmX3JlcXVlc3Rfc2VnbWVudCAqc2VnX3JlcSwKIAkJCSBzdHJ1Y3QgcGVuZGluZ19yZXEgKnBl
bmRpbmdfcmVxLAogCQkJIHN0cnVjdCBzZWdfYnVmIHNlZ1tdKQogewotCXN0cnVjdCBnbnR0YWJf
bWFwX2dyYW50X3JlZiBtYXBbQkxLSUZfTUFYX1NFR01FTlRTX1BFUl9SRVFVRVNUXTsKKwlzdHJ1
Y3QgZ250dGFiX21hcF9ncmFudF9yZWYgKm1hcCA9IHBlbmRpbmdfcmVxLT5tYXA7CiAJaW50IGk7
CiAJaW50IG5zZWcgPSByZXEtPnUucncubnJfc2VnbWVudHM7CiAJaW50IHJldCA9IDA7CkBAIC0z
NjIsNyArMzkxLDcgQEAgc3RhdGljIGludCB4ZW5fYmxrYmtfbWFwKHN0cnVjdCBibGtpZl9yZXF1
ZXN0ICpyZXEsCiAJCWlmIChwZW5kaW5nX3JlcS0+b3BlcmF0aW9uICE9IEJMS0lGX09QX1JFQUQp
CiAJCQlmbGFncyB8PSBHTlRNQVBfcmVhZG9ubHk7CiAJCWdudHRhYl9zZXRfbWFwX29wKCZtYXBb
aV0sIHZhZGRyKHBlbmRpbmdfcmVxLCBpKSwgZmxhZ3MsCi0JCQkJICByZXEtPnUucncuc2VnW2ld
LmdyZWYsCisJCQkJICBzZWdfcmVxW2ldLmdyZWYsCiAJCQkJICBwZW5kaW5nX3JlcS0+YmxraWYt
PmRvbWlkKTsKIAl9CiAKQEAgLTM4NywxNCArNDE2LDE1IEBAIHN0YXRpYyBpbnQgeGVuX2Jsa2Jr
X21hcChzdHJ1Y3QgYmxraWZfcmVxdWVzdCAqcmVxLAogCQkJY29udGludWU7CiAKIAkJc2VnW2ld
LmJ1ZiAgPSBtYXBbaV0uZGV2X2J1c19hZGRyIHwKLQkJCShyZXEtPnUucncuc2VnW2ldLmZpcnN0
X3NlY3QgPDwgOSk7CisJCQkoc2VnX3JlcVtpXS5maXJzdF9zZWN0IDw8IDkpOwogCX0KIAlyZXR1
cm4gcmV0OwogfQogCi1zdGF0aWMgaW50IGRpc3BhdGNoX2Rpc2NhcmRfaW8oc3RydWN0IHhlbl9i
bGtpZiAqYmxraWYsCi0JCQkJc3RydWN0IGJsa2lmX3JlcXVlc3QgKnJlcSkKK3N0YXRpYyBpbnQg
ZGlzcGF0Y2hfZGlzY2FyZF9pbyhzdHJ1Y3QgeGVuX2Jsa2lmICpibGtpZikKIHsKKwlzdHJ1Y3Qg
YmxraWZfcmVxdWVzdCAqYmxraWZfcmVxID0gKHN0cnVjdCBibGtpZl9yZXF1ZXN0ICopYmxraWYt
PnJlcTsKKwlzdHJ1Y3QgYmxraWZfcmVxdWVzdF9kaXNjYXJkICpyZXEgPSAmYmxraWZfcmVxLT51
LmRpc2NhcmQ7CiAJaW50IGVyciA9IDA7CiAJaW50IHN0YXR1cyA9IEJMS0lGX1JTUF9PS0FZOwog
CXN0cnVjdCBibG9ja19kZXZpY2UgKmJkZXYgPSBibGtpZi0+dmJkLmJkZXY7CkBAIC00MDQsMTEg
KzQzNCwxMSBAQCBzdGF0aWMgaW50IGRpc3BhdGNoX2Rpc2NhcmRfaW8oc3RydWN0IHhlbl9ibGtp
ZiAqYmxraWYsCiAKIAl4ZW5fYmxraWZfZ2V0KGJsa2lmKTsKIAlzZWN1cmUgPSAoYmxraWYtPnZi

--_004_A21691DE07B84740B5F0B81466D5148A23BCF23DSHSMSX102ccrcor_
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--_004_A21691DE07B84740B5F0B81466D5148A23BCF23DSHSMSX102ccrcor_--


From xen-devel-bounces@lists.xen.org Thu Aug 16 10:27:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 10:27:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1xIe-0005yO-92; Thu, 16 Aug 2012 10:27:36 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ronghui.duan@intel.com>) id 1T1xIc-0005xu-Dd
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 10:27:35 +0000
Received: from [85.158.138.51:35142] by server-5.bemta-3.messagelabs.com id
	14/9D-08865-31BCC205; Thu, 16 Aug 2012 10:27:31 +0000
X-Env-Sender: ronghui.duan@intel.com
X-Msg-Ref: server-7.tower-174.messagelabs.com!1345112847!19680854!1
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDMzNjg4NA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4011 invoked from network); 16 Aug 2012 10:27:27 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-7.tower-174.messagelabs.com with SMTP;
	16 Aug 2012 10:27:27 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga102.fm.intel.com with ESMTP; 16 Aug 2012 03:27:15 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.77,778,1336374000"; 
	d="scan'208,217";a="202534491"
Received: from fmsmsx104.amr.corp.intel.com ([10.19.9.35])
	by fmsmga001.fm.intel.com with ESMTP; 16 Aug 2012 03:27:14 -0700
Received: from fmsmsx153.amr.corp.intel.com (10.19.17.7) by
	FMSMSX104.amr.corp.intel.com (10.19.9.35) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Thu, 16 Aug 2012 03:27:14 -0700
Received: from shsmsx101.ccr.corp.intel.com (10.239.4.153) by
	FMSMSX153.amr.corp.intel.com (10.19.17.7) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Thu, 16 Aug 2012 03:27:13 -0700
Received: from shsmsx102.ccr.corp.intel.com ([169.254.2.196]) by
	SHSMSX101.ccr.corp.intel.com ([169.254.1.82]) with mapi id
	14.01.0355.002; Thu, 16 Aug 2012 18:27:11 +0800
From: "Duan, Ronghui" <ronghui.duan@intel.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Thread-Topic: [Xen-devel] [RFC v1 3/5] VBD: enlarge max segment per request
	in blkfront
Thread-Index: Ac17mbHQd/+wJK9MRfWlX0hUUsOitw==
Date: Thu, 16 Aug 2012 10:27:10 +0000
Message-ID: <A21691DE07B84740B5F0B81466D5148A23BCF23D@SHSMSX102.ccr.corp.intel.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: yes
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
Content-Type: multipart/mixed;
	boundary="_004_A21691DE07B84740B5F0B81466D5148A23BCF23DSHSMSX102ccrcor_"
MIME-Version: 1.0
Cc: "Ian.Jackson@eu.citrix.com" <Ian.Jackson@eu.citrix.com>,
	"Stefano.Stabellini@eu.citrix.com" <Stefano.Stabellini@eu.citrix.com>
Subject: [Xen-devel] [RFC v1 3/5] VBD: enlarge max segment per request in
 blkfront
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--_004_A21691DE07B84740B5F0B81466D5148A23BCF23DSHSMSX102ccrcor_
Content-Type: multipart/alternative;
	boundary="_000_A21691DE07B84740B5F0B81466D5148A23BCF23DSHSMSX102ccrcor_"

--_000_A21691DE07B84740B5F0B81466D5148A23BCF23DSHSMSX102ccrcor_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable


Refactor blkback: move per-protocol ring handling behind a blkback_ring_operation ops table and allocate the per-request segment arrays dynamically.

Signed-off-by: Ronghui Duan <ronghui.duan@intel.com>

diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkbac=
k/blkback.c
index 73f196c..b4767f5 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -64,6 +64,11 @@ MODULE_PARM_DESC(reqs, "Number of blkback requests to al=
locate");
static unsigned int log_stats;
module_param(log_stats, int, 0644);
+struct seg_buf {
+        unsigned long buf;
+        unsigned int nsec;
+};
+
/*
  * Each outstanding request that we've passed to the lower device layers h=
as a
  * 'pending_req' allocated to it. Each buffer_head that completes decremen=
ts
@@ -78,6 +83,11 @@ struct pending_req {
    unsigned short       operation;
    int             status;
    struct list_head     free_list;
+    struct gnttab_map_grant_ref     *map;
+    struct gnttab_unmap_grant_ref   *unmap;
+    struct seg_buf             *seg;
+    struct bio           **biolist;
+    struct page                **pages;
};
 #define BLKBACK_INVALID_HANDLE (~0)
@@ -123,28 +133,9 @@ static inline unsigned long vaddr(struct pending_req *=
req, int seg)
 static int do_block_io_op(struct xen_blkif *blkif);
static int dispatch_rw_block_io(struct xen_blkif *blkif,
-                    struct blkif_request *req,
                    struct pending_req *pending_req);
static void make_response(struct xen_blkif *blkif, u64 id,
-                 unsigned short op, int st);
-
-/*
- * Retrieve from the 'pending_reqs' a free pending_req structure to be use=
d.
- */
-static struct pending_req *alloc_req(void)
-{
-    struct pending_req *req =3D NULL;
-    unsigned long flags;
-
-    spin_lock_irqsave(&blkbk->pending_free_lock, flags);
-    if (!list_empty(&blkbk->pending_free)) {
-          req =3D list_entry(blkbk->pending_free.next, struct pending_req,
-                    free_list);
-          list_del(&req->free_list);
-    }
-    spin_unlock_irqrestore(&blkbk->pending_free_lock, flags);
-    return req;
-}
+                 unsigned short op, int nr_page, int st);
 /*
  * Return the 'pending_req' structure back to the freepool. We also
@@ -155,6 +146,12 @@ static void free_req(struct pending_req *req)
    unsigned long flags;
    int was_empty;
+    kfree(req->map);
+    kfree(req->unmap);
+    kfree(req->biolist);
+    kfree(req->seg);
+    kfree(req->pages);
+
    spin_lock_irqsave(&blkbk->pending_free_lock, flags);
    was_empty =3D list_empty(&blkbk->pending_free);
    list_add(&req->free_list, &blkbk->pending_free);
@@ -162,7 +159,42 @@ static void free_req(struct pending_req *req)
    if (was_empty)
          wake_up(&blkbk->pending_free_wq);
}
-
+/*
+ * Retrieve from the 'pending_reqs' a free pending_req structure to be use=
d.
+ */
+static struct pending_req *alloc_req(void)
+{
+    struct pending_req *req =3D NULL;
+    unsigned long flags;
+    unsigned int max_seg =3D BLKIF_MAX_SEGMENTS_PER_REQUEST;
+
+    spin_lock_irqsave(&blkbk->pending_free_lock, flags);
+    if (!list_empty(&blkbk->pending_free)) {
+          req =3D list_entry(blkbk->pending_free.next, struct pending_req,
+                    free_list);
+          list_del(&req->free_list);
+    }
+    spin_unlock_irqrestore(&blkbk->pending_free_lock, flags);
+
+    if (req !=3D NULL) {
+          req->map =3D kzalloc(sizeof(struct gnttab_map_grant_ref) *
+                       max_seg, GFP_KERNEL);
+          req->unmap =3D kzalloc(sizeof(struct gnttab_unmap_grant_ref) *
+                       max_seg, GFP_KERNEL);
+          req->biolist =3D kzalloc(sizeof(struct bio *) * max_seg,
+                           GFP_KERNEL);
+          req->seg =3D kzalloc(sizeof(struct seg_buf) * max_seg,
+                       GFP_KERNEL);
+          req->pages =3D kzalloc(sizeof(struct page *) * max_seg,
+                       GFP_KERNEL);
+          if (!req->map || !req->unmap || !req->biolist ||
+              !req->seg || !req->pages) {
+               free_req(req);
+               req =3D NULL;
+          }
+    }
+    return req;
+}
/*
  * Routines for managing virtual block devices (vbds).
  */
@@ -308,11 +340,6 @@ int xen_blkif_schedule(void *arg)
    xen_blkif_put(blkif);
     return 0;
-}
-
-struct seg_buf {
-    unsigned long buf;
-    unsigned int nsec;
};
/*
  * Unmap the grant references, and also remove the M2P over-rides
@@ -320,8 +347,8 @@ struct seg_buf {
  */
static void xen_blkbk_unmap(struct pending_req *req)
{
-    struct gnttab_unmap_grant_ref unmap[BLKIF_MAX_SEGMENTS_PER_REQUEST];
-    struct page *pages[BLKIF_MAX_SEGMENTS_PER_REQUEST];
+    struct gnttab_unmap_grant_ref *unmap =3D req->unmap;
+    struct page **pages =3D req->pages;
    unsigned int i, invcount =3D 0;
    grant_handle_t handle;
    int ret;
@@ -341,11 +368,13 @@ static void xen_blkbk_unmap(struct pending_req *req)
    BUG_ON(ret);
}
+
static int xen_blkbk_map(struct blkif_request *req,
+               struct blkif_request_segment *seg_req,
                struct pending_req *pending_req,
                struct seg_buf seg[])
{
-    struct gnttab_map_grant_ref map[BLKIF_MAX_SEGMENTS_PER_REQUEST];
+    struct gnttab_map_grant_ref *map =3D pending_req->map;
    int i;
    int nseg =3D req->u.rw.nr_segments;
    int ret =3D 0;
@@ -362,7 +391,7 @@ static int xen_blkbk_map(struct blkif_request *req,
          if (pending_req->operation !=3D BLKIF_OP_READ)
               flags |=3D GNTMAP_readonly;
          gnttab_set_map_op(&map[i], vaddr(pending_req, i), flags,
-                      req->u.rw.seg[i].gref,
+                      seg_req[i].gref,
                      pending_req->blkif->domid);
    }
@@ -387,14 +416,15 @@ static int xen_blkbk_map(struct blkif_request *req,
               continue;
           seg[i].buf  =3D map[i].dev_bus_addr |
-               (req->u.rw.seg[i].first_sect << 9);
+               (seg_req[i].first_sect << 9);
    }
    return ret;
}
-static int dispatch_discard_io(struct xen_blkif *blkif,
-                    struct blkif_request *req)
+static int dispatch_discard_io(struct xen_blkif *blkif)
{
+    struct blkif_request *blkif_req =3D (struct blkif_request *)blkif->req=
;
+    struct blkif_request_discard *req =3D &blkif_req->u.discard;
    int err =3D 0;
    int status =3D BLKIF_RSP_OKAY;
    struct block_device *bdev =3D blkif->vbd.bdev;
@@ -404,11 +434,11 @@ static int dispatch_discard_io(struct xen_blkif *blki=
f,
     xen_blkif_get(blkif);
    secure =3D (blkif->vbd.discard_secure &&
-          (req->u.discard.flag & BLKIF_DISCARD_SECURE)) ?
+          (req->flag & BLKIF_DISCARD_SECURE)) ?
           BLKDEV_DISCARD_SECURE : 0;
-    err =3D blkdev_issue_discard(bdev, req->u.discard.sector_number,
-                       req->u.discard.nr_sectors,
+    err =3D blkdev_issue_discard(bdev, req->sector_number,
+                       req->nr_sectors,
                       GFP_KERNEL, secure);
     if (err =3D=3D -EOPNOTSUPP) {
@@ -417,7 +447,7 @@ static int dispatch_discard_io(struct xen_blkif *blkif,
    } else if (err)
          status =3D BLKIF_RSP_ERROR;
-    make_response(blkif, req->u.discard.id, req->operation, status);
+    make_response(blkif, req->id, BLKIF_OP_DISCARD, 0, status);
    xen_blkif_put(blkif);
    return err;
}
@@ -470,7 +500,8 @@ static void __end_block_io_op(struct pending_req *pendi=
ng_req, int error)
    if (atomic_dec_and_test(&pending_req->pendcnt)) {
          xen_blkbk_unmap(pending_req);
          make_response(pending_req->blkif, pending_req->id,
-                     pending_req->operation, pending_req->status);
+                     pending_req->operation, pending_req->nr_pages,
+                     pending_req->status);
          xen_blkif_put(pending_req->blkif);
          if (atomic_read(&pending_req->blkif->refcnt) <=3D 2) {
               if (atomic_read(&pending_req->blkif->drain))
@@ -489,8 +520,37 @@ static void end_block_io_op(struct bio *bio, int error=
)
    bio_put(bio);
}
+void *get_back_ring(struct xen_blkif *blkif)
+{
+    return (void *)&blkif->blk_rings;
+}
+void copy_blkif_req(struct xen_blkif *blkif, RING_IDX rc)
+{
+    struct blkif_request *req =3D (struct blkif_request *)blkif->req;
+    union blkif_back_rings *blk_rings =3D &blkif->blk_rings;
+    switch (blkif->blk_protocol) {
+    case BLKIF_PROTOCOL_NATIVE:
+          memcpy(req, RING_GET_REQUEST(&blk_rings->native, rc),
+               sizeof(struct blkif_request));
+          break;
+    case BLKIF_PROTOCOL_X86_32:
+          blkif_get_x86_32_req(req, RING_GET_REQUEST(&blk_rings->x86_32, r=
c));
+          break;
+    case BLKIF_PROTOCOL_X86_64:
+          blkif_get_x86_64_req(req, RING_GET_REQUEST(&blk_rings->x86_64, r=
c));
+          break;
+    default:
+          BUG();
+    }
+}
+
+void copy_blkif_seg_req(struct xen_blkif *blkif)
+{
+    struct blkif_request *req =3D (struct blkif_request *)blkif->req;
+    blkif->seg_req =3D req->u.rw.seg;
+}
/*
  * Function to copy the from the ring buffer the 'struct blkif_request'
  * (which has the sectors we want, number of them, grant references, etc),
@@ -499,8 +559,9 @@ static void end_block_io_op(struct bio *bio, int error)
static int
__do_block_io_op(struct xen_blkif *blkif)
{
-    union blkif_back_rings *blk_rings =3D &blkif->blk_rings;
-    struct blkif_request req;
+    union blkif_back_rings *blk_rings =3D
+          (union blkif_back_rings *)blkif->ops->get_back_ring(blkif);
+    struct blkif_request *req =3D (struct blkif_request *)blkif->req;
    struct pending_req *pending_req;
    RING_IDX rc, rp;
    int more_to_do =3D 0;
@@ -526,28 +587,19 @@ __do_block_io_op(struct xen_blkif *blkif)
               break;
          }
-          switch (blkif->blk_protocol) {
-          case BLKIF_PROTOCOL_NATIVE:
-              memcpy(&req, RING_GET_REQUEST(&blk_rings->native, rc), sizeo=
f(req));
-               break;
-          case BLKIF_PROTOCOL_X86_32:
-               blkif_get_x86_32_req(&req, RING_GET_REQUEST(&blk_rings->x86=
_32, rc));
-               break;
-          case BLKIF_PROTOCOL_X86_64:
-               blkif_get_x86_64_req(&req, RING_GET_REQUEST(&blk_rings->x86=
_64, rc));
-               break;
-          default:
-               BUG();
-          }
+          blkif->ops->copy_blkif_req(blkif, rc);
+
+          blkif->ops->copy_blkif_seg_req(blkif);
+
          blk_rings->common.req_cons =3D ++rc; /* before make_response() */
           /* Apply all sanity checks to /private copy/ of request. */
          barrier();
-          if (unlikely(req.operation =3D=3D BLKIF_OP_DISCARD)) {
+          if (unlikely(req->operation =3D=3D BLKIF_OP_DISCARD)) {
               free_req(pending_req);
-               if (dispatch_discard_io(blkif, &req))
+               if (dispatch_discard_io(blkif))
                    break;
-          } else if (dispatch_rw_block_io(blkif, &req, pending_req))
+          } else if (dispatch_rw_block_io(blkif, pending_req))
               break;
           /* Yield point for this unbounded loop. */
@@ -560,7 +612,8 @@ __do_block_io_op(struct xen_blkif *blkif)
static int
do_block_io_op(struct xen_blkif *blkif)
{
-    union blkif_back_rings *blk_rings =3D &blkif->blk_rings;
+    union blkif_back_rings *blk_rings =3D
+          (union blkif_back_rings *)blkif->ops->get_back_ring(blkif);
    int more_to_do;
     do {
@@ -578,14 +631,15 @@ do_block_io_op(struct xen_blkif *blkif)
  * and call the 'submit_bio' to pass it to the underlying storage.
  */
static int dispatch_rw_block_io(struct xen_blkif *blkif,
-                    struct blkif_request *req,
                    struct pending_req *pending_req)
{
+    struct blkif_request *req =3D (struct blkif_request *)blkif->req;
+    struct blkif_request_segment *seg_req =3D blkif->seg_req;
    struct phys_req preq;
-    struct seg_buf seg[BLKIF_MAX_SEGMENTS_PER_REQUEST];
+    struct seg_buf *seg =3D pending_req->seg;
    unsigned int nseg;
    struct bio *bio =3D NULL;
-    struct bio *biolist[BLKIF_MAX_SEGMENTS_PER_REQUEST];
+    struct bio **biolist =3D pending_req->biolist;
    int i, nbio =3D 0;
    int operation;
    struct blk_plug plug;
@@ -616,7 +670,7 @@ static int dispatch_rw_block_io(struct xen_blkif *blkif=
,
    nseg =3D req->u.rw.nr_segments;
     if (unlikely(nseg =3D=3D 0 && operation !=3D WRITE_FLUSH) ||
-        unlikely(nseg > BLKIF_MAX_SEGMENTS_PER_REQUEST)) {
+        unlikely(nseg > blkif->ops->max_seg)) {
          pr_debug(DRV_PFX "Bad number of segments in request (%d)\n",
                nseg);
          /* Haven't submitted any bio's yet. */
@@ -634,10 +688,10 @@ static int dispatch_rw_block_io(struct xen_blkif *blk=
if,
    pending_req->nr_pages  =3D nseg;
     for (i =3D 0; i < nseg; i++) {
-          seg[i].nsec =3D req->u.rw.seg[i].last_sect -
-               req->u.rw.seg[i].first_sect + 1;
-          if ((req->u.rw.seg[i].last_sect >=3D (PAGE_SIZE >> 9)) ||
-              (req->u.rw.seg[i].last_sect < req->u.rw.seg[i].first_sect))
+          seg[i].nsec =3D seg_req[i].last_sect -
+               seg_req[i].first_sect + 1;
+          if ((seg_req[i].last_sect >=3D (PAGE_SIZE >> 9)) ||
+              (seg_req[i].last_sect < seg_req[i].first_sect))
               goto fail_response;
          preq.nr_sects +=3D seg[i].nsec;
@@ -676,7 +730,7 @@ static int dispatch_rw_block_io(struct xen_blkif *blkif=
,
     * the hypercall to unmap the grants - that is all done in
     * xen_blkbk_unmap.
     */
-    if (xen_blkbk_map(req, pending_req, seg))
+    if (xen_blkbk_map(req, seg_req, pending_req, seg))
          goto fail_flush;
     /*
@@ -746,7 +800,7 @@ static int dispatch_rw_block_io(struct xen_blkif *blkif=
,
    xen_blkbk_unmap(pending_req);
  fail_response:
    /* Haven't submitted any bio's yet. */
-    make_response(blkif, req->u.rw.id, req->operation, BLKIF_RSP_ERROR);
+    make_response(blkif, req->u.rw.id, req->operation, 0, BLKIF_RSP_ERROR)=
;
    free_req(pending_req);
    msleep(1); /* back off a bit */
    return -EIO;
@@ -759,17 +813,28 @@ static int dispatch_rw_block_io(struct xen_blkif *blk=
if,
    return -EIO;
}
+struct blkif_segment_back_ring *
+    get_seg_back_ring(struct xen_blkif *blkif)
+{
+    return NULL;
+}
+void push_back_ring_rsp(union blkif_back_rings *blk_rings, int nr_page, in=
t *notify)
+{
+    blk_rings->common.rsp_prod_pvt++;
+    RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&blk_rings->common, *notify);
+}
 /*
  * Put a response on the ring on how the operation fared.
  */
static void make_response(struct xen_blkif *blkif, u64 id,
-                 unsigned short op, int st)
+                 unsigned short op, int nr_page, int st)
{
    struct blkif_response  resp;
    unsigned long     flags;
-    union blkif_back_rings *blk_rings =3D &blkif->blk_rings;
+    union blkif_back_rings *blk_rings =3D
+          (union blkif_back_rings *)blkif->ops->get_back_ring(blkif);
    int notify;
     resp.id        =3D id;
@@ -794,8 +859,9 @@ static void make_response(struct xen_blkif *blkif, u64 =
id,
    default:
          BUG();
    }
-    blk_rings->common.rsp_prod_pvt++;
-    RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&blk_rings->common, notify);
+
+    blkif->ops->push_back_ring_rsp(blk_rings, nr_page, &notify);
+
    spin_unlock_irqrestore(&blkif->blk_ring_lock, flags);
    if (notify)
          notify_remote_via_irq(blkif->irq);
@@ -873,6 +939,14 @@ static int __init xen_blkif_init(void)
    return rc;
}
+struct blkback_ring_operation blkback_ring_ops = {
+    .get_back_ring = get_back_ring,
+    .copy_blkif_req = copy_blkif_req,
+    .copy_blkif_seg_req = copy_blkif_seg_req,
+    .push_back_ring_rsp = push_back_ring_rsp,
+    .max_seg = BLKIF_MAX_SEGMENTS_PER_REQUEST,
+};
+
module_init(xen_blkif_init);
 MODULE_LICENSE("Dual BSD/GPL");
diff --git a/drivers/block/xen-blkback/common.h b/drivers/block/xen-blkback/common.h
index 9ad3b5e..ce5556a 100644
--- a/drivers/block/xen-blkback/common.h
+++ b/drivers/block/xen-blkback/common.h
@@ -146,6 +146,11 @@ enum blkif_protocol {
    BLKIF_PROTOCOL_X86_64 = 3,
};
+enum blkif_backring_type {
+    BACKRING_TYPE_1 = 1,
+    BACKRING_TYPE_2 = 2,
+};
+
struct xen_vbd {
    /* What the domain refers to this vbd as. */
    blkif_vdev_t         handle;
@@ -163,6 +168,15 @@ struct xen_vbd {
};
 struct backend_info;
+struct xen_blkif;
+
+struct blkback_ring_operation {
+    void *(*get_back_ring) (struct xen_blkif *blkif);
+    void (*copy_blkif_req) (struct xen_blkif *blkif, RING_IDX rc);
+    void (*copy_blkif_seg_req) (struct xen_blkif *blkif);
+    void (*push_back_ring_rsp) (union blkif_back_rings *blk_rings, int nr_page, int *notify);
+    unsigned int max_seg;
+};
 struct xen_blkif {
    /* Unique identifier for this interface. */
@@ -171,7 +185,9 @@ struct xen_blkif {
    /* Physical parameters of the comms window. */
    unsigned int         irq;
    /* Comms information. */
+    struct blkback_ring_operation   *ops;
    enum blkif_protocol  blk_protocol;
+    enum blkif_backring_type blk_backring_type;
    union blkif_back_rings     blk_rings;
    void            *blk_ring;
    /* The VBD attached to this interface. */
@@ -179,6 +195,8 @@ struct xen_blkif {
    /* Back pointer to the backend_info. */
    struct backend_info  *be;
    /* Private fields. */
+    void *               req;
+    struct blkif_request_segment *seg_req;
    spinlock_t      blk_ring_lock;
    atomic_t        refcnt;
diff --git a/drivers/block/xen-blkback/xenbus.c b/drivers/block/xen-blkback/xenbus.c
index 4f66171..850ecad 100644
--- a/drivers/block/xen-blkback/xenbus.c
+++ b/drivers/block/xen-blkback/xenbus.c
@@ -36,6 +36,8 @@ static int connect_ring(struct backend_info *);
static void backend_changed(struct xenbus_watch *, const char **,
                   unsigned int);
+extern struct blkback_ring_operation blkback_ring_ops;
+
struct xenbus_device *xen_blkbk_xenbus(struct backend_info *be)
{
    return be->dev;
@@ -725,6 +727,12 @@ static int connect_ring(struct backend_info *be)
    int err;
     DPRINTK("%s", dev->otherend);
+    be->blkif->ops = &blkback_ring_ops;
+    be->blkif->req = kmalloc(sizeof(struct blkif_request),
+                    GFP_KERNEL);
+    be->blkif->seg_req = kmalloc(sizeof(struct blkif_request_segment)*
+                         be->blkif->ops->max_seg,  GFP_KERNEL);
+    be->blkif->blk_backring_type = BACKRING_TYPE_1;
     err = xenbus_gather(XBT_NIL, dev->otherend, "ring-ref", "%lu",
                   &ring_ref, "event-channel", "%u", &evtchn, NULL);
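As an aside for readers following the refactoring: the effect of the new ops table is that make_response() no longer touches the ring directly but dispatches through function pointers, so a second backring type can be plugged in later. The shape of that dispatch can be sketched in isolation as below; every name here (the demo_* identifiers, the max_seg value of 11) is a hypothetical stand-in, not the actual blkback code.

```c
/*
 * Illustrative sketch only: a function-pointer ops table in the style of
 * blkback_ring_operation, letting the response path stay ignorant of the
 * concrete ring type. All demo_* names are hypothetical.
 */
struct demo_ring {
	int rsp_prod_pvt;	/* private response-producer index */
};

struct demo_ring_ops {
	void *(*get_back_ring)(struct demo_ring *r);
	void (*push_rsp)(struct demo_ring *r, int nr_page, int *notify);
	unsigned int max_seg;
};

static void *demo_get_back_ring(struct demo_ring *r)
{
	return r;		/* the real hook returns the protocol-specific ring */
}

static void demo_push_rsp(struct demo_ring *r, int nr_page, int *notify)
{
	(void)nr_page;		/* unused by this ring type */
	r->rsp_prod_pvt++;	/* advance the producer index */
	*notify = 1;		/* pretend the frontend needs a kick */
}

static struct demo_ring_ops demo_ops = {
	.get_back_ring	= demo_get_back_ring,
	.push_rsp	= demo_push_rsp,
	.max_seg	= 11,	/* stand-in for BLKIF_MAX_SEGMENTS_PER_REQUEST */
};

/* Mirrors the reworked make_response(): dispatch through the ops table. */
int demo_make_response(struct demo_ring *r)
{
	int notify = 0;
	struct demo_ring *ring = demo_ops.get_back_ring(r);

	demo_ops.push_rsp(ring, 0, &notify);
	return notify;
}
```

The indirection costs one pointer dereference per response, but it keeps the nr_page argument available for a ring type that consumes it, which is what the added parameter in make_response() is preparing for.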


-ronghui


mily:&quot;Courier New&quot;">{<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">-&nbsp;&nbsp;&nbsp; struct gnttab_map_grant_r=
ef map[BLKIF_MAX_SEGMENTS_PER_REQUEST];<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp; struct gnttab_map_gra=
nt_ref *map =3D pending_req-&gt;map;<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; int i;<o:p></o:p></span></=
p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; int nseg =3D req-&gt;u.rw.=
nr_segments;<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; int ret =3D 0;<o:p></o:p><=
/span></p>
@@ -362,7 +391,7 @@ static int xen_blkbk_map(struct blkif_request *req,
 		if (pending_req->operation != BLKIF_OP_READ)
 			flags |= GNTMAP_readonly;
 		gnttab_set_map_op(&map[i], vaddr(pending_req, i), flags,
-				  req->u.rw.seg[i].gref,
+				  seg_req[i].gref,
 				  pending_req->blkif->domid);
 	}
 
@@ -387,14 +416,15 @@ static int xen_blkbk_map(struct blkif_request *req,
 			continue;
 
 		seg[i].buf  = map[i].dev_bus_addr |
-			      (req->u.rw.seg[i].first_sect << 9);
+			      (seg_req[i].first_sect << 9);
 	}
 	return ret;
 }
 
-static int dispatch_discard_io(struct xen_blkif *blkif,
-			       struct blkif_request *req)
+static int dispatch_discard_io(struct xen_blkif *blkif)
 {
+	struct blkif_request *blkif_req = (struct blkif_request *)blkif->req;
+	struct blkif_request_discard *req = &blkif_req->u.discard;
 	int err = 0;
 	int status = BLKIF_RSP_OKAY;
 	struct block_device *bdev = blkif->vbd.bdev;
@@ -404,11 +434,11 @@ static int dispatch_discard_io(struct xen_blkif *blkif,
 
 	xen_blkif_get(blkif);
 	secure = (blkif->vbd.discard_secure &&
-		  (req->u.discard.flag & BLKIF_DISCARD_SECURE)) ?
+		  (req->flag & BLKIF_DISCARD_SECURE)) ?
 		  BLKDEV_DISCARD_SECURE : 0;
 
-	err = blkdev_issue_discard(bdev, req->u.discard.sector_number,
-				   req->u.discard.nr_sectors,
+	err = blkdev_issue_discard(bdev, req->sector_number,
+				   req->nr_sectors,
 				   GFP_KERNEL, secure);
 
 	if (err == -EOPNOTSUPP) {
@@ -417,7 +447,7 @@ static int dispatch_discard_io(struct xen_blkif *blkif,
 	} else if (err)
 		status = BLKIF_RSP_ERROR;
 
-	make_response(blkif, req->u.discard.id, req->operation, status);
+	make_response(blkif, req->id, BLKIF_OP_DISCARD, 0, status);
 	xen_blkif_put(blkif);
 	return err;
 }
@@ -470,7 +500,8 @@ static void __end_block_io_op(struct pending_req *pending_req, int error)
 	if (atomic_dec_and_test(&pending_req->pendcnt)) {
 		xen_blkbk_unmap(pending_req);
 		make_response(pending_req->blkif, pending_req->id,
-			      pending_req->operation, pending_req->status);
+			      pending_req->operation, pending_req->nr_pages,
+			      pending_req->status);
 		xen_blkif_put(pending_req->blkif);
 		if (atomic_read(&pending_req->blkif->refcnt) <= 2) {
 			if (atomic_read(&pending_req->blkif->drain))
@@ -489,8 +520,37 @@ static void end_block_io_op(struct bio *bio, int error)
 	bio_put(bio);
 }
 
+void *get_back_ring(struct xen_blkif *blkif)
+{
+	return (void *)&blkif->blk_rings;
+}
+
+void copy_blkif_req(struct xen_blkif *blkif, RING_IDX rc)
+{
+	struct blkif_request *req = (struct blkif_request *)blkif->req;
+	union blkif_back_rings *blk_rings = &blkif->blk_rings;
+	switch (blkif->blk_protocol) {
+	case BLKIF_PROTOCOL_NATIVE:
+		memcpy(req, RING_GET_REQUEST(&blk_rings->native, rc),
+		       sizeof(struct blkif_request));
+		break;
+	case BLKIF_PROTOCOL_X86_32:
+		blkif_get_x86_32_req(req, RING_GET_REQUEST(&blk_rings->x86_32, rc));
+		break;
+	case BLKIF_PROTOCOL_X86_64:
+		blkif_get_x86_64_req(req, RING_GET_REQUEST(&blk_rings->x86_64, rc));
+		break;
+	default:
+		BUG();
+	}
+}
+
+void copy_blkif_seg_req(struct xen_blkif *blkif)
+{
+	struct blkif_request *req = (struct blkif_request *)blkif->req;
+
+	blkif->seg_req = req->u.rw.seg;
+}
 /*
  * Function to copy the from the ring buffer the 'struct blkif_request'
  * (which has the sectors we want, number of them, grant references, etc),
@@ -499,8 +559,9 @@ static void end_block_io_op(struct bio *bio, int error)
 static int
 __do_block_io_op(struct xen_blkif *blkif)
 {
-	union blkif_back_rings *blk_rings = &blkif->blk_rings;
-	struct blkif_request req;
+	union blkif_back_rings *blk_rings =
+		(union blkif_back_rings *)blkif->ops->get_back_ring(blkif);
+	struct blkif_request *req = (struct blkif_request *)blkif->req;
 	struct pending_req *pending_req;
 	RING_IDX rc, rp;
 	int more_to_do = 0;
@@ -526,28 +587,19 @@ __do_block_io_op(struct xen_blkif *blkif)
 			break;
 		}
 
-		switch (blkif->blk_protocol) {
-		case BLKIF_PROTOCOL_NATIVE:
-			memcpy(&req, RING_GET_REQUEST(&blk_rings->native, rc), sizeof(req));
-			break;
-		case BLKIF_PROTOCOL_X86_32:
-			blkif_get_x86_32_req(&req, RING_GET_REQUEST(&blk_rings->x86_32, rc));
-			break;
-		case BLKIF_PROTOCOL_X86_64:
-			blkif_get_x86_64_req(&req, RING_GET_REQUEST(&blk_rings->x86_64, rc));
-			break;
-		default:
-			BUG();
-		}
+		blkif->ops->copy_blkif_req(blkif, rc);
+
+		blkif->ops->copy_blkif_seg_req(blkif);
+
 		blk_rings->common.req_cons = ++rc; /* before make_response() */
 
 		/* Apply all sanity checks to /private copy/ of request. */
 		barrier();
-		if (unlikely(req.operation == BLKIF_OP_DISCARD)) {
+		if (unlikely(req->operation == BLKIF_OP_DISCARD)) {
 			free_req(pending_req);
-			if (dispatch_discard_io(blkif, &req))
+			if (dispatch_discard_io(blkif))
 				break;
-		} else if (dispatch_rw_block_io(blkif, &req, pending_req))
+		} else if (dispatch_rw_block_io(blkif, pending_req))
 			break;
 
 		/* Yield point for this unbounded loop. */
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">@@ -560,7 &#43;612,8 @@ __do_block_io_op(stru=
ct xen_blkif *blkif)<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">static int<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">do_block_io_op(struct xen_blkif *blkif)<o:p><=
/o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">{<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">-&nbsp;&nbsp;&nbsp; union blkif_back_rings *b=
lk_rings =3D &amp;blkif-&gt;blk_rings;<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp; union blkif_back_ring=
s *blk_rings =3D<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbs=
p;&nbsp;&nbsp; (union blkif_back_rings *)blkif-&gt;ops-&gt;get_back_ring(bl=
kif);<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; int more_to_do;<o:p></o:p>=
</span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;"><o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp;&nbsp; do {<o:p></o:p></spa=
n></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">@@ -578,14 &#43;631,15 @@ do_block_io_op(stru=
ct xen_blkif *blkif)<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp; * and call the 'submit_bio' to pass it=
 to the underlying storage.<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp; */<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">static int dispatch_rw_block_io(struct xen_bl=
kif *blkif,<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
bsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; stru=
ct blkif_request *req,<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nb=
sp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; struc=
t pending_req *pending_req)<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">{<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp; struct blkif_request =
*req =3D (struct blkif_request *)blkif-&gt;req;<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp; struct blkif_request_=
segment *seg_req =3D blkif-&gt;seg_req;<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; struct phys_req preq;<o:p>=
</o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">-&nbsp;&nbsp;&nbsp; struct seg_buf seg[BLKIF_=
MAX_SEGMENTS_PER_REQUEST];<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp; struct seg_buf *seg =
=3D pending_req-&gt;seg;<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; unsigned int nseg;<o:p></o=
:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; struct bio *bio =3D NULL;<=
o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">-&nbsp;&nbsp;&nbsp; struct bio *biolist[BLKIF=
_MAX_SEGMENTS_PER_REQUEST];<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp; struct bio **biolist =
=3D pending_req-&gt;biolist;<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; int i, nbio =3D 0;<o:p></o=
:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; int operation;<o:p></o:p><=
/span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; struct blk_plug plug;<o:p>=
</o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">@@ -616,7 &#43;670,7 @@ static int dispatch_r=
w_block_io(struct xen_blkif *blkif,<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; nseg =3D req-&gt;u.rw.nr_s=
egments;<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;"><o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp;&nbsp; if (unlikely(nseg =
=3D=3D 0 &amp;&amp; operation !=3D WRITE_FLUSH) ||<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">-&nbsp;&nbsp;&nbsp; &nbsp;&nbsp;&nbsp; unlike=
ly(nseg &gt; BLKIF_MAX_SEGMENTS_PER_REQUEST)) {<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp; &nbsp;&nbsp;&nbsp; un=
likely(nseg &gt; blkif-&gt;ops-&gt;max_seg)) {<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nb=
sp;&nbsp; pr_debug(DRV_PFX &quot;Bad number of segments in request (%d)\n&q=
uot;,<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nb=
sp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; &nbsp;nseg);<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nb=
sp;&nbsp; /* Haven't submitted any bio's yet. */<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">@@ -634,10 &#43;688,10 @@ static int dispatch=
_rw_block_io(struct xen_blkif *blkif,<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; pending_req-&gt;nr_pages&n=
bsp; =3D nseg;<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;"><o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp;&nbsp; for (i =3D 0; i &lt;=
 nseg; i&#43;&#43;) {<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
bsp;&nbsp; seg[i].nsec =3D req-&gt;u.rw.seg[i].last_sect -<o:p></o:p></span=
></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
bsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; req-&gt;u.rw.seg[i].first_sect &#4=
3; 1;<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
bsp;&nbsp; if ((req-&gt;u.rw.seg[i].last_sect &gt;=3D (PAGE_SIZE &gt;&gt; 9=
)) ||<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
bsp;&nbsp; &nbsp;&nbsp;&nbsp; (req-&gt;u.rw.seg[i].last_sect &lt; req-&gt;u=
.rw.seg[i].first_sect))<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbs=
p;&nbsp;&nbsp; seg[i].nsec =3D seg_req[i].last_sect -<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbs=
p;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; seg_req[i].first_sect &#43; 1;=
<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbs=
p;&nbsp;&nbsp; if ((seg_req[i].last_sect &gt;=3D (PAGE_SIZE &gt;&gt; 9)) ||=
<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbs=
p;&nbsp;&nbsp; &nbsp;&nbsp;&nbsp; (seg_req[i].last_sect &lt; seg_req[i].fir=
st_sect))<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nb=
sp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; goto fail_response;<o:p></o:p></spa=
n></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nb=
sp;&nbsp; preq.nr_sects &#43;=3D seg[i].nsec;<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;"><o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">@@ -676,7 &#43;730,7 @@ static int dispatch_r=
w_block_io(struct xen_blkif *blkif,<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; &nbsp;* the hypercall to u=
nmap the grants - that is all done in<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; &nbsp;* xen_blkbk_unmap.<o=
:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; &nbsp;*/<o:p></o:p></span>=
</p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">-&nbsp;&nbsp;&nbsp; if (xen_blkbk_map(req, pe=
nding_req, seg))<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp; if (xen_blkbk_map(req=
, seg_req, pending_req, seg))<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nb=
sp;&nbsp; goto fail_flush;<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;"><o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp;&nbsp; /*<o:p></o:p></span>=
</p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">@@ -746,7 &#43;800,7 @@ static int dispatch_r=
w_block_io(struct xen_blkif *blkif,<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; xen_blkbk_unmap(pending_re=
q);<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp; fail_response:<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; /* Haven't submitted any b=
io's yet. */<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">-&nbsp;&nbsp;&nbsp; make_response(blkif, req-=
&gt;u.rw.id, req-&gt;operation, BLKIF_RSP_ERROR);<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp; make_response(blkif, =
req-&gt;u.rw.id, req-&gt;operation, 0, BLKIF_RSP_ERROR);<o:p></o:p></span><=
/p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; free_req(pending_req);<o:p=
></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; msleep(1); /* back off a b=
it */<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; return -EIO;<o:p></o:p></s=
pan></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">@@ -759,17 &#43;813,28 @@ static int dispatch=
_rw_block_io(struct xen_blkif *blkif,<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; return -EIO;<o:p></o:p></s=
pan></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">}<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;"><o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;struct blkif_segment_back_ring *<o:p></o=
:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp; get_seg_back_ring(str=
uct xen_blkif *blkif)<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;{<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp; return NULL;<o:p></o:=
p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;}<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;"><o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;void push_back_ring_rsp(union blkif_back=
_rings *blk_rings, int nr_page, int *notify)<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;{<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp; blk_rings-&gt;common.=
rsp_prod_pvt&#43;&#43;;<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp; RING_PUSH_RESPONSES_A=
ND_CHECK_NOTIFY(&amp;blk_rings-&gt;common, *notify);<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;}<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;"><o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;/*<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp; * Put a response on the ring on how th=
e operation fared.<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp; */<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">static void make_response(struct xen_blkif *b=
lkif, u64 id,<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
bsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; &nbsp; unsigned short op, int st)<=
o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbs=
p;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; &nbsp; unsigned short op, int =
nr_page, int st)<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">{<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; struct blkif_response&nbsp=
; resp;<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; unsigned long&nbsp;&nbsp;&=
nbsp;&nbsp; flags;<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">-&nbsp;&nbsp;&nbsp; union blkif_back_rings *b=
lk_rings =3D &amp;blkif-&gt;blk_rings;<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp; union blkif_back_ring=
s *blk_rings =3D<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbs=
p;&nbsp;&nbsp; (union blkif_back_rings *)blkif-&gt;ops-&gt;get_back_ring(bl=
kif);<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; int notify;<o:p></o:p></sp=
an></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;"><o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp;&nbsp; resp.id&nbsp;&nbsp;&=
nbsp;&nbsp;&nbsp;&nbsp;&nbsp; =3D id;<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">@@ -794,8 &#43;859,9 @@ static void make_resp=
onse(struct xen_blkif *blkif, u64 id,<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; default:<o:p></o:p></span>=
</p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nb=
sp;&nbsp; BUG();<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; }<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">-&nbsp;&nbsp;&nbsp; blk_rings-&gt;common.rsp_=
prod_pvt&#43;&#43;;<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">-&nbsp;&nbsp;&nbsp; RING_PUSH_RESPONSES_AND_C=
HECK_NOTIFY(&amp;blk_rings-&gt;common, notify);<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp; blkif-&gt;ops-&gt;pus=
h_back_ring_rsp(blk_rings, nr_page, &amp;notify);<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; spin_unlock_irqrestore(&am=
p;blkif-&gt;blk_ring_lock, flags);<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; if (notify)<o:p></o:p></sp=
an></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nb=
sp;&nbsp; notify_remote_via_irq(blkif-&gt;irq);<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">@@ -873,6 &#43;939,14 @@ static int __init xe=
n_blkif_init(void)<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; return rc;<o:p></o:p></spa=
n></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">}<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;"><o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;struct blkback_ring_operation blkback_ri=
ng_ops =3D {<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp; .get_back_ring =3D ge=
t_back_ring,<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp; .copy_blkif_req =3D c=
opy_blkif_req,<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp; .copy_blkif_seg_req =
=3D copy_blkif_seg_req,<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp; .push_back_ring_rsp =
=3D push_back_ring_rsp,<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp; .max_seg =3D BLKIF_MA=
X_SEGMENTS_PER_REQUEST,<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;};<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">module_init(xen_blkif_init);<o:p></o:p></span=
></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;"><o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;MODULE_LICENSE(&quot;Dual BSD/GPL&quot;=
);<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">diff --git a/drivers/block/xen-blkback/common=
.h b/drivers/block/xen-blkback/common.h<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">index 9ad3b5e..ce5556a 100644<o:p></o:p></spa=
n></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">--- a/drivers/block/xen-blkback/common.h<o:p>=
</o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&#43;&#43; b/drivers/block/xen-blkback/c=
ommon.h<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">@@ -146,6 &#43;146,11 @@ enum blkif_protocol =
{<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; BLKIF_PROTOCOL_X86_64 =3D =
3,<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">};<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;"><o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;enum blkif_backring_type {<o:p></o:p></s=
pan></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp; BACKRING_TYPE_1 =3D 1=
,<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp; BACKRING_TYPE_2 =3D 2=
,<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;};<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">struct xen_vbd {<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; /* What the domain refers =
to this vbd as. */<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; blkif_vdev_t&nbsp;&nbsp;&n=
bsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; handle;<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">@@ -163,6 &#43;168,15 @@ struct xen_vbd {<o:p=
></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">};<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;"><o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;struct backend_info;<o:p></o:p></span><=
/p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;struct xen_blkif;<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;struct blkback_ring_operation {<o:p></o:=
p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp; void *(*get_back_ring=
) (struct xen_blkif *blkif);<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp; void (*copy_blkif_req=
) (struct xen_blkif *blkif, RING_IDX rc);<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp; void (*copy_blkif_seg=
_req) (struct xen_blkif *blkif);<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp; void (*push_back_ring=
_rsp) (union blkif_back_rings *blk_rings, int nr_page, int *notify);<o:p></=
o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp; unsigned int max_seg;=
<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;};<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;"><o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;struct xen_blkif {<o:p></o:p></span></p=
>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; /* Unique identifier for t=
his interface. */<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">@@ -171,7 &#43;185,9 @@ struct xen_blkif {<o:=
p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; /* Physical parameters of =
the comms window. */<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; unsigned int&nbsp;&nbsp;&n=
bsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; irq;<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; /* Comms information. */<o=
:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp; struct blkback_ring_o=
peration&nbsp;&nbsp; *ops;<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; enum blkif_protocol&nbsp; =
blk_protocol;<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp; enum blkif_backring_t=
ype blk_backring_type;<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; union blkif_back_rings&nbs=
p;&nbsp;&nbsp;&nbsp; blk_rings;<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; void&nbsp;&nbsp;&nbsp;&nbs=
p;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; *blk_ring;<o:p></o:p></span></=
p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; /* The VBD attached to thi=
s interface. */<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">@@ -179,6 &#43;195,8 @@ struct xen_blkif {<o:=
p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; /* Back pointer to the bac=
kend_info. */<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; struct backend_info&nbsp; =
*be;<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; /* Private fields. */<o:p>=
</o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp; void *&nbsp;&nbsp;&nb=
sp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; req;<=
o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp; struct blkif_request_=
segment *seg_req;<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; spinlock_t&nbsp;&nbsp;&nbs=
p;&nbsp;&nbsp; blk_ring_lock;<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; atomic_t&nbsp;&nbsp;&nbsp;=
&nbsp;&nbsp;&nbsp;&nbsp; refcnt;<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;"><o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">diff --git a/drivers/block/xen-blkback/xenbus=
.c b/drivers/block/xen-blkback/xenbus.c<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">index 4f66171..850ecad 100644<o:p></o:p></spa=
n></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">--- a/drivers/block/xen-blkback/xenbus.c<o:p>=
</o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&#43;&#43; b/drivers/block/xen-blkback/x=
enbus.c<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">@@ -36,6 &#43;36,8 @@ static int connect_ring=
(struct backend_info *);<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">static void backend_changed(struct xenbus_wat=
ch *, const char **,<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nb=
sp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; &nbsp;&nbsp;&nbsp;&nbsp;unsigned in=
t);<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;"><o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;extern struct blkback_ring_operation blk=
back_ring_ops;<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">struct xenbus_device *xen_blkbk_xenbus(struct=
 backend_info *be)<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">{<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; return be-&gt;dev;<o:p></o=
:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">@@ -725,6 &#43;727,12 @@ static int connect_r=
ing(struct backend_info *be)<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; int err;<o:p></o:p></span>=
</p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;"><o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp;&nbsp; DPRINTK(&quot;%s&quo=
t;, dev-&gt;otherend);<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp; be-&gt;blkif-&gt;ops =
=3D &amp;blkback_ring_ops;<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp; be-&gt;blkif-&gt;req =
=3D kmalloc(sizeof(struct blkif_request),<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbs=
p;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; =
GFP_KERNEL);<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp; be-&gt;blkif-&gt;seg_=
req =3D kmalloc(sizeof(struct blkif_request_segment)*<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbs=
p;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; =
&nbsp;&nbsp;&nbsp;&nbsp; be-&gt;blkif-&gt;ops-&gt;max_seg,&nbsp; GFP_KERNEL=
);<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&#43;&nbsp;&nbsp;&nbsp; be-&gt;blkif-&gt;blk_=
backring_type =3D BACKRING_TYPE_1;<o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;"><o:p></o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp;&nbsp; err =3D xenbus_gathe=
r(XBT_NIL, dev-&gt;otherend, &quot;ring-ref&quot;, &quot;%lu&quot;,<o:p></o=
:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;">&nbsp;&nbsp;&nbsp; &nbsp;&nbsp;&nbsp;&nbsp;&n=
bsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; &nbsp;&nbsp;&nbsp;&nbsp;&amp;ring_ref, &=
quot;event-channel&quot;, &quot;%u&quot;, &amp;evtchn, NULL);<o:p></o:p></s=
pan></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><span style=3D"font-fa=
mily:&quot;Courier New&quot;"><o:p>&nbsp;</o:p></span></p>
<p class=3D"MsoNormal"><o:p>&nbsp;</o:p></p>
<p class=3D"MsoNormal">-ronghui<o:p></o:p></p>
<p class=3D"MsoNormal"><o:p>&nbsp;</o:p></p>
</div>
</body>
</html>

--_000_A21691DE07B84740B5F0B81466D5148A23BCF23DSHSMSX102ccrcor_--

--_004_A21691DE07B84740B5F0B81466D5148A23BCF23DSHSMSX102ccrcor_
Content-Type: application/octet-stream; name="vbd_enlarge_segments_03.patch"
Content-Description: vbd_enlarge_segments_03.patch
Content-Disposition: attachment; filename="vbd_enlarge_segments_03.patch";
	size=16583; creation-date="Thu, 16 Aug 2012 10:19:11 GMT";
	modification-date="Thu, 16 Aug 2012 17:34:40 GMT"
Content-Transfer-Encoding: base64

Y29tbWl0IDBlN2RjMzVlMDE2ZmYwYzVkYTBhM2Y5ODNkZGQ4NGQ4ZmE1NTkxYWMKQXV0aG9yOiBS
b25naHVpIER1YW4gPHJvbmdodWkuZHVhbkBpbnRlbC5jb20+CkRhdGU6ICAgVGh1IEF1ZyAxNiAy
MToyMTo0MyAyMDEyICswODAwCgogICAgcmVmYWN0b3JpbmcgYmFsYmFjawoKZGlmZiAtLWdpdCBh
L2RyaXZlcnMvYmxvY2sveGVuLWJsa2JhY2svYmxrYmFjay5jIGIvZHJpdmVycy9ibG9jay94ZW4t
YmxrYmFjay9ibGtiYWNrLmMKaW5kZXggNzNmMTk2Yy4uYjQ3NjdmNSAxMDA2NDQKLS0tIGEvZHJp
dmVycy9ibG9jay94ZW4tYmxrYmFjay9ibGtiYWNrLmMKKysrIGIvZHJpdmVycy9ibG9jay94ZW4t
YmxrYmFjay9ibGtiYWNrLmMKQEAgLTY0LDYgKzY0LDExIEBAIE1PRFVMRV9QQVJNX0RFU0MocmVx
cywgIk51bWJlciBvZiBibGtiYWNrIHJlcXVlc3RzIHRvIGFsbG9jYXRlIik7CiBzdGF0aWMgdW5z
aWduZWQgaW50IGxvZ19zdGF0czsKIG1vZHVsZV9wYXJhbShsb2dfc3RhdHMsIGludCwgMDY0NCk7
CiAKK3N0cnVjdCBzZWdfYnVmIHsKKyAgICAgICAgdW5zaWduZWQgbG9uZyBidWY7CisgICAgICAg
IHVuc2lnbmVkIGludCBuc2VjOworfTsKKwogLyoKICAqIEVhY2ggb3V0c3RhbmRpbmcgcmVxdWVz
dCB0aGF0IHdlJ3ZlIHBhc3NlZCB0byB0aGUgbG93ZXIgZGV2aWNlIGxheWVycyBoYXMgYQogICog
J3BlbmRpbmdfcmVxJyBhbGxvY2F0ZWQgdG8gaXQuIEVhY2ggYnVmZmVyX2hlYWQgdGhhdCBjb21w
bGV0ZXMgZGVjcmVtZW50cwpAQCAtNzgsNiArODMsMTEgQEAgc3RydWN0IHBlbmRpbmdfcmVxIHsK
IAl1bnNpZ25lZCBzaG9ydAkJb3BlcmF0aW9uOwogCWludAkJCXN0YXR1czsKIAlzdHJ1Y3QgbGlz
dF9oZWFkCWZyZWVfbGlzdDsKKwlzdHJ1Y3QgZ250dGFiX21hcF9ncmFudF9yZWYJKm1hcDsKKwlz
dHJ1Y3QgZ250dGFiX3VubWFwX2dyYW50X3JlZgkqdW5tYXA7CisJc3RydWN0IHNlZ19idWYJCQkq
c2VnOworCXN0cnVjdCBiaW8JCQkqKmJpb2xpc3Q7CisJc3RydWN0IHBhZ2UJCQkqKnBhZ2VzOwog
fTsKIAogI2RlZmluZSBCTEtCQUNLX0lOVkFMSURfSEFORExFICh+MCkKQEAgLTEyMywyOCArMTMz
LDkgQEAgc3RhdGljIGlubGluZSB1bnNpZ25lZCBsb25nIHZhZGRyKHN0cnVjdCBwZW5kaW5nX3Jl
cSAqcmVxLCBpbnQgc2VnKQogCiBzdGF0aWMgaW50IGRvX2Jsb2NrX2lvX29wKHN0cnVjdCB4ZW5f
YmxraWYgKmJsa2lmKTsKIHN0YXRpYyBpbnQgZGlzcGF0Y2hfcndfYmxvY2tfaW8oc3RydWN0IHhl
bl9ibGtpZiAqYmxraWYsCi0JCQkJc3RydWN0IGJsa2lmX3JlcXVlc3QgKnJlcSwKIAkJCQlzdHJ1
Y3QgcGVuZGluZ19yZXEgKnBlbmRpbmdfcmVxKTsKIHN0YXRpYyB2b2lkIG1ha2VfcmVzcG9uc2Uo
c3RydWN0IHhlbl9ibGtpZiAqYmxraWYsIHU2NCBpZCwKLQkJCSAgdW5zaWduZWQgc2hvcnQgb3As
IGludCBzdCk7Ci0KLS8qCi0gKiBSZXRyaWV2ZSBmcm9tIHRoZSAncGVuZGluZ19yZXFzJyBhIGZy
ZWUgcGVuZGluZ19yZXEgc3RydWN0dXJlIHRvIGJlIHVzZWQuCi0gKi8KLXN0YXRpYyBzdHJ1Y3Qg
cGVuZGluZ19yZXEgKmFsbG9jX3JlcSh2b2lkKQotewotCXN0cnVjdCBwZW5kaW5nX3JlcSAqcmVx
ID0gTlVMTDsKLQl1bnNpZ25lZCBsb25nIGZsYWdzOwotCi0Jc3Bpbl9sb2NrX2lycXNhdmUoJmJs
a2JrLT5wZW5kaW5nX2ZyZWVfbG9jaywgZmxhZ3MpOwotCWlmICghbGlzdF9lbXB0eSgmYmxrYmst
PnBlbmRpbmdfZnJlZSkpIHsKLQkJcmVxID0gbGlzdF9lbnRyeShibGtiay0+cGVuZGluZ19mcmVl
Lm5leHQsIHN0cnVjdCBwZW5kaW5nX3JlcSwKLQkJCQkgZnJlZV9saXN0KTsKLQkJbGlzdF9kZWwo
JnJlcS0+ZnJlZV9saXN0KTsKLQl9Ci0Jc3Bpbl91bmxvY2tfaXJxcmVzdG9yZSgmYmxrYmstPnBl
bmRpbmdfZnJlZV9sb2NrLCBmbGFncyk7Ci0JcmV0dXJuIHJlcTsKLX0KKwkJCSAgdW5zaWduZWQg
c2hvcnQgb3AsIGludCBucl9wYWdlLCBpbnQgc3QpOwogCiAvKgogICogUmV0dXJuIHRoZSAncGVu
ZGluZ19yZXEnIHN0cnVjdHVyZSBiYWNrIHRvIHRoZSBmcmVlcG9vbC4gV2UgYWxzbwpAQCAtMTU1
LDYgKzE0NiwxMiBAQCBzdGF0aWMgdm9pZCBmcmVlX3JlcShzdHJ1Y3QgcGVuZGluZ19yZXEgKnJl
cSkKIAl1bnNpZ25lZCBsb25nIGZsYWdzOwogCWludCB3YXNfZW1wdHk7CiAKKwlrZnJlZShyZXEt
Pm1hcCk7CisJa2ZyZWUocmVxLT51bm1hcCk7CisJa2ZyZWUocmVxLT5iaW9saXN0KTsKKwlrZnJl
ZShyZXEtPnNlZyk7CisJa2ZyZWUocmVxLT5wYWdlcyk7CisKIAlzcGluX2xvY2tfaXJxc2F2ZSgm
YmxrYmstPnBlbmRpbmdfZnJlZV9sb2NrLCBmbGFncyk7CiAJd2FzX2VtcHR5ID0gbGlzdF9lbXB0
eSgmYmxrYmstPnBlbmRpbmdfZnJlZSk7CiAJbGlzdF9hZGQoJnJlcS0+ZnJlZV9saXN0LCAmYmxr
YmstPnBlbmRpbmdfZnJlZSk7CkBAIC0xNjIsNyArMTU5LDQyIEBAIHN0YXRpYyB2b2lkIGZyZWVf
cmVxKHN0cnVjdCBwZW5kaW5nX3JlcSAqcmVxKQogCWlmICh3YXNfZW1wdHkpCiAJCXdha2VfdXAo
JmJsa2JrLT5wZW5kaW5nX2ZyZWVfd3EpOwogfQotCisvKgorICogUmV0cmlldmUgZnJvbSB0aGUg
J3BlbmRpbmdfcmVxcycgYSBmcmVlIHBlbmRpbmdfcmVxIHN0cnVjdHVyZSB0byBiZSB1c2VkLgor
ICovCitzdGF0aWMgc3RydWN0IHBlbmRpbmdfcmVxICphbGxvY19yZXEodm9pZCkKK3sKKwlzdHJ1
Y3QgcGVuZGluZ19yZXEgKnJlcSA9IE5VTEw7CisJdW5zaWduZWQgbG9uZyBmbGFnczsKKwl1bnNp
Z25lZCBpbnQgbWF4X3NlZyA9IEJMS0lGX01BWF9TRUdNRU5UU19QRVJfUkVRVUVTVDsKKwkKKwlz
cGluX2xvY2tfaXJxc2F2ZSgmYmxrYmstPnBlbmRpbmdfZnJlZV9sb2NrLCBmbGFncyk7CisJaWYg
KCFsaXN0X2VtcHR5KCZibGtiay0+cGVuZGluZ19mcmVlKSkgeworCQlyZXEgPSBsaXN0X2VudHJ5
KGJsa2JrLT5wZW5kaW5nX2ZyZWUubmV4dCwgc3RydWN0IHBlbmRpbmdfcmVxLAorCQkJCSBmcmVl
X2xpc3QpOworCQlsaXN0X2RlbCgmcmVxLT5mcmVlX2xpc3QpOworCX0KKwlzcGluX3VubG9ja19p
cnFyZXN0b3JlKCZibGtiay0+cGVuZGluZ19mcmVlX2xvY2ssIGZsYWdzKTsKKwkKKwlpZiAocmVx
ICE9IE5VTEwpIHsKKwkJcmVxLT5tYXAgPSBremFsbG9jKHNpemVvZihzdHJ1Y3QgZ250dGFiX21h
cF9ncmFudF9yZWYpICoKKwkJCQkgICBtYXhfc2VnLCBHRlBfS0VSTkVMKTsKKwkJcmVxLT51bm1h
cCA9IGt6YWxsb2Moc2l6ZW9mKHN0cnVjdCBnbnR0YWJfdW5tYXBfZ3JhbnRfcmVmKSAqCisJCQkJ
ICAgbWF4X3NlZywgR0ZQX0tFUk5FTCk7CisJCXJlcS0+YmlvbGlzdCA9IGt6YWxsb2Moc2l6ZW9m
KHN0cnVjdCBiaW8gKikgKiBtYXhfc2VnLAorCQkJCSAgICAgICBHRlBfS0VSTkVMKTsKKwkJcmVx
LT5zZWcgPSBremFsbG9jKHNpemVvZihzdHJ1Y3Qgc2VnX2J1ZikgKiBtYXhfc2VnLAorCQkJCSAg
IEdGUF9LRVJORUwpOworCQlyZXEtPnBhZ2VzID0ga3phbGxvYyhzaXplb2Yoc3RydWN0IHBhZ2Ug
KikgKiBtYXhfc2VnLAorCQkJCSAgIEdGUF9LRVJORUwpOworCQlpZiAoIXJlcS0+bWFwIHx8ICFy
ZXEtPnVubWFwIHx8ICFyZXEtPmJpb2xpc3QgfHwKKwkJICAgICFyZXEtPnNlZyB8fCAhcmVxLT5w
YWdlcykgeworCQkJZnJlZV9yZXEocmVxKTsKKwkJCXJlcSA9IE5VTEw7CisJCX0KKwl9CisJcmV0
dXJuIHJlcTsKK30KIC8qCiAgKiBSb3V0aW5lcyBmb3IgbWFuYWdpbmcgdmlydHVhbCBibG9jayBk
ZXZpY2VzICh2YmRzKS4KICAqLwpAQCAtMzA4LDExICszNDAsNiBAQCBpbnQgeGVuX2Jsa2lmX3Nj
aGVkdWxlKHZvaWQgKmFyZykKIAl4ZW5fYmxraWZfcHV0KGJsa2lmKTsKIAogCXJldHVybiAwOwot
fQotCi1zdHJ1Y3Qgc2VnX2J1ZiB7Ci0JdW5zaWduZWQgbG9uZyBidWY7Ci0JdW5zaWduZWQgaW50
IG5zZWM7CiB9OwogLyoKICAqIFVubWFwIHRoZSBncmFudCByZWZlcmVuY2VzLCBhbmQgYWxzbyBy
ZW1vdmUgdGhlIE0yUCBvdmVyLXJpZGVzCkBAIC0zMjAsOCArMzQ3LDggQEAgc3RydWN0IHNlZ19i
dWYgewogICovCiBzdGF0aWMgdm9pZCB4ZW5fYmxrYmtfdW5tYXAoc3RydWN0IHBlbmRpbmdfcmVx
ICpyZXEpCiB7Ci0Jc3RydWN0IGdudHRhYl91bm1hcF9ncmFudF9yZWYgdW5tYXBbQkxLSUZfTUFY
X1NFR01FTlRTX1BFUl9SRVFVRVNUXTsKLQlzdHJ1Y3QgcGFnZSAqcGFnZXNbQkxLSUZfTUFYX1NF
R01FTlRTX1BFUl9SRVFVRVNUXTsKKwlzdHJ1Y3QgZ250dGFiX3VubWFwX2dyYW50X3JlZiAqdW5t
YXAgPSByZXEtPnVubWFwOworCXN0cnVjdCBwYWdlICoqcGFnZXMgPSByZXEtPnBhZ2VzOwogCXVu
c2lnbmVkIGludCBpLCBpbnZjb3VudCA9IDA7CiAJZ3JhbnRfaGFuZGxlX3QgaGFuZGxlOwogCWlu
dCByZXQ7CkBAIC0zNDEsMTEgKzM2OCwxMyBAQCBzdGF0aWMgdm9pZCB4ZW5fYmxrYmtfdW5tYXAo
c3RydWN0IHBlbmRpbmdfcmVxICpyZXEpCiAJQlVHX09OKHJldCk7CiB9CiAKKwogc3RhdGljIGlu
dCB4ZW5fYmxrYmtfbWFwKHN0cnVjdCBibGtpZl9yZXF1ZXN0ICpyZXEsCisJCQkgc3RydWN0IGJs
a2lmX3JlcXVlc3Rfc2VnbWVudCAqc2VnX3JlcSwKIAkJCSBzdHJ1Y3QgcGVuZGluZ19yZXEgKnBl
bmRpbmdfcmVxLAogCQkJIHN0cnVjdCBzZWdfYnVmIHNlZ1tdKQogewotCXN0cnVjdCBnbnR0YWJf
bWFwX2dyYW50X3JlZiBtYXBbQkxLSUZfTUFYX1NFR01FTlRTX1BFUl9SRVFVRVNUXTsKKwlzdHJ1
Y3QgZ250dGFiX21hcF9ncmFudF9yZWYgKm1hcCA9IHBlbmRpbmdfcmVxLT5tYXA7CiAJaW50IGk7
CiAJaW50IG5zZWcgPSByZXEtPnUucncubnJfc2VnbWVudHM7CiAJaW50IHJldCA9IDA7CkBAIC0z
NjIsNyArMzkxLDcgQEAgc3RhdGljIGludCB4ZW5fYmxrYmtfbWFwKHN0cnVjdCBibGtpZl9yZXF1
ZXN0ICpyZXEsCiAJCWlmIChwZW5kaW5nX3JlcS0+b3BlcmF0aW9uICE9IEJMS0lGX09QX1JFQUQp
CiAJCQlmbGFncyB8PSBHTlRNQVBfcmVhZG9ubHk7CiAJCWdudHRhYl9zZXRfbWFwX29wKCZtYXBb
aV0sIHZhZGRyKHBlbmRpbmdfcmVxLCBpKSwgZmxhZ3MsCi0JCQkJICByZXEtPnUucncuc2VnW2ld
LmdyZWYsCisJCQkJICBzZWdfcmVxW2ldLmdyZWYsCiAJCQkJICBwZW5kaW5nX3JlcS0+YmxraWYt
PmRvbWlkKTsKIAl9CiAKQEAgLTM4NywxNCArNDE2LDE1IEBAIHN0YXRpYyBpbnQgeGVuX2Jsa2Jr
X21hcChzdHJ1Y3QgYmxraWZfcmVxdWVzdCAqcmVxLAogCQkJY29udGludWU7CiAKIAkJc2VnW2ld
LmJ1ZiAgPSBtYXBbaV0uZGV2X2J1c19hZGRyIHwKLQkJCShyZXEtPnUucncuc2VnW2ldLmZpcnN0
X3NlY3QgPDwgOSk7CisJCQkoc2VnX3JlcVtpXS5maXJzdF9zZWN0IDw8IDkpOwogCX0KIAlyZXR1
cm4gcmV0OwogfQogCi1zdGF0aWMgaW50IGRpc3BhdGNoX2Rpc2NhcmRfaW8oc3RydWN0IHhlbl9i
bGtpZiAqYmxraWYsCi0JCQkJc3RydWN0IGJsa2lmX3JlcXVlc3QgKnJlcSkKK3N0YXRpYyBpbnQg
ZGlzcGF0Y2hfZGlzY2FyZF9pbyhzdHJ1Y3QgeGVuX2Jsa2lmICpibGtpZikKIHsKKwlzdHJ1Y3Qg
YmxraWZfcmVxdWVzdCAqYmxraWZfcmVxID0gKHN0cnVjdCBibGtpZl9yZXF1ZXN0ICopYmxraWYt
PnJlcTsKKwlzdHJ1Y3QgYmxraWZfcmVxdWVzdF9kaXNjYXJkICpyZXEgPSAmYmxraWZfcmVxLT51
LmRpc2NhcmQ7CiAJaW50IGVyciA9IDA7CiAJaW50IHN0YXR1cyA9IEJMS0lGX1JTUF9PS0FZOwog
CXN0cnVjdCBibG9ja19kZXZpY2UgKmJkZXYgPSBibGtpZi0+dmJkLmJkZXY7CkBAIC00MDQsMTEg
KzQzNCwxMSBAQCBzdGF0aWMgaW50IGRpc3BhdGNoX2Rpc2NhcmRfaW8oc3RydWN0IHhlbl9ibGtp
ZiAqYmxraWYsCiAKIAl4ZW5fYmxraWZfZ2V0KGJsa2lmKTsKIAlzZWN1cmUgPSAoYmxraWYtPnZi
ZC5kaXNjYXJkX3NlY3VyZSAmJgotCQkgKHJlcS0+dS5kaXNjYXJkLmZsYWcgJiBCTEtJRl9ESVND
QVJEX1NFQ1VSRSkpID8KKwkJIChyZXEtPmZsYWcgJiBCTEtJRl9ESVNDQVJEX1NFQ1VSRSkpID8K
IAkJIEJMS0RFVl9ESVNDQVJEX1NFQ1VSRSA6IDA7CiAKLQllcnIgPSBibGtkZXZfaXNzdWVfZGlz
Y2FyZChiZGV2LCByZXEtPnUuZGlzY2FyZC5zZWN0b3JfbnVtYmVyLAotCQkJCSAgIHJlcS0+dS5k
aXNjYXJkLm5yX3NlY3RvcnMsCisJZXJyID0gYmxrZGV2X2lzc3VlX2Rpc2NhcmQoYmRldiwgcmVx
LT5zZWN0b3JfbnVtYmVyLAorCQkJCSAgIHJlcS0+bnJfc2VjdG9ycywKIAkJCQkgICBHRlBfS0VS
TkVMLCBzZWN1cmUpOwogCiAJaWYgKGVyciA9PSAtRU9QTk9UU1VQUCkgewpAQCAtNDE3LDcgKzQ0
Nyw3IEBAIHN0YXRpYyBpbnQgZGlzcGF0Y2hfZGlzY2FyZF9pbyhzdHJ1Y3QgeGVuX2Jsa2lmICpi
bGtpZiwKIAl9IGVsc2UgaWYgKGVycikKIAkJc3RhdHVzID0gQkxLSUZfUlNQX0VSUk9SOwogCi0J
bWFrZV9yZXNwb25zZShibGtpZiwgcmVxLT51LmRpc2NhcmQuaWQsIHJlcS0+b3BlcmF0aW9uLCBz
dGF0dXMpOworCW1ha2VfcmVzcG9uc2UoYmxraWYsIHJlcS0+aWQsIEJMS0lGX09QX0RJU0NBUkQs
IDAsIHN0YXR1cyk7CiAJeGVuX2Jsa2lmX3B1dChibGtpZik7CiAJcmV0dXJuIGVycjsKIH0KQEAg
LTQ3MCw3ICs1MDAsOCBAQCBzdGF0aWMgdm9pZCBfX2VuZF9ibG9ja19pb19vcChzdHJ1Y3QgcGVu
ZGluZ19yZXEgKnBlbmRpbmdfcmVxLCBpbnQgZXJyb3IpCiAJaWYgKGF0b21pY19kZWNfYW5kX3Rl
c3QoJnBlbmRpbmdfcmVxLT5wZW5kY250KSkgewogCQl4ZW5fYmxrYmtfdW5tYXAocGVuZGluZ19y
ZXEpOwogCQltYWtlX3Jlc3BvbnNlKHBlbmRpbmdfcmVxLT5ibGtpZiwgcGVuZGluZ19yZXEtPmlk
LAotCQkJICAgICAgcGVuZGluZ19yZXEtPm9wZXJhdGlvbiwgcGVuZGluZ19yZXEtPnN0YXR1cyk7
CisJCQkgICAgICBwZW5kaW5nX3JlcS0+b3BlcmF0aW9uLCBwZW5kaW5nX3JlcS0+bnJfcGFnZXMs
CisJCQkgICAgICBwZW5kaW5nX3JlcS0+c3RhdHVzKTsKIAkJeGVuX2Jsa2lmX3B1dChwZW5kaW5n
X3JlcS0+YmxraWYpOwogCQlpZiAoYXRvbWljX3JlYWQoJnBlbmRpbmdfcmVxLT5ibGtpZi0+cmVm
Y250KSA8PSAyKSB7CiAJCQlpZiAoYXRvbWljX3JlYWQoJnBlbmRpbmdfcmVxLT5ibGtpZi0+ZHJh
aW4pKQpAQCAtNDg5LDggKzUyMCwzNyBAQCBzdGF0aWMgdm9pZCBlbmRfYmxvY2tfaW9fb3Aoc3Ry
dWN0IGJpbyAqYmlvLCBpbnQgZXJyb3IpCiAJYmlvX3B1dChiaW8pOwogfQogCit2b2lkICpnZXRf
YmFja19yaW5nKHN0cnVjdCB4ZW5fYmxraWYgKmJsa2lmKQoreworCXJldHVybiAodm9pZCAqKSZi
bGtpZi0+YmxrX3JpbmdzOworfQogCit2b2lkIGNvcHlfYmxraWZfcmVxKHN0cnVjdCB4ZW5fYmxr
aWYgKmJsa2lmLCBSSU5HX0lEWCByYykKK3sKKwlzdHJ1Y3QgYmxraWZfcmVxdWVzdCAqcmVxID0g
KHN0cnVjdCBibGtpZl9yZXF1ZXN0ICopYmxraWYtPnJlcTsgCisJdW5pb24gYmxraWZfYmFja19y
aW5ncyAqYmxrX3JpbmdzID0gJmJsa2lmLT5ibGtfcmluZ3M7CisJc3dpdGNoIChibGtpZi0+Ymxr
X3Byb3RvY29sKSB7CisJY2FzZSBCTEtJRl9QUk9UT0NPTF9OQVRJVkU6CisJCW1lbWNweShyZXEs
IFJJTkdfR0VUX1JFUVVFU1QoJmJsa19yaW5ncy0+bmF0aXZlLCByYyksCisJCQlzaXplb2Yoc3Ry
dWN0IGJsa2lmX3JlcXVlc3QpKTsKKwkJYnJlYWs7CisJY2FzZSBCTEtJRl9QUk9UT0NPTF9YODZf
MzI6CisJCWJsa2lmX2dldF94ODZfMzJfcmVxKHJlcSwgUklOR19HRVRfUkVRVUVTVCgmYmxrX3Jp
bmdzLT54ODZfMzIsIHJjKSk7CisJCWJyZWFrOworCWNhc2UgQkxLSUZfUFJPVE9DT0xfWDg2XzY0
OgorCQlibGtpZl9nZXRfeDg2XzY0X3JlcShyZXEsIFJJTkdfR0VUX1JFUVVFU1QoJmJsa19yaW5n
cy0+eDg2XzY0LCByYykpOworCQlicmVhazsKKwlkZWZhdWx0OgorCQlCVUcoKTsKKwl9Cit9CisK
K3ZvaWQgY29weV9ibGtpZl9zZWdfcmVxKHN0cnVjdCB4ZW5fYmxraWYgKmJsa2lmKQoreworCXN0
cnVjdCBibGtpZl9yZXF1ZXN0ICpyZXEgPSAoc3RydWN0IGJsa2lmX3JlcXVlc3QgKilibGtpZi0+
cmVxOwogCisJYmxraWYtPnNlZ19yZXEgPSByZXEtPnUucncuc2VnOworfQogLyoKICAqIEZ1bmN0
aW9uIHRvIGNvcHkgdGhlIGZyb20gdGhlIHJpbmcgYnVmZmVyIHRoZSAnc3RydWN0IGJsa2lmX3Jl
cXVlc3QnCiAgKiAod2hpY2ggaGFzIHRoZSBzZWN0b3JzIHdlIHdhbnQsIG51bWJlciBvZiB0aGVt
LCBncmFudCByZWZlcmVuY2VzLCBldGMpLApAQCAtNDk5LDggKzU1OSw5IEBAIHN0YXRpYyB2b2lk
IGVuZF9ibG9ja19pb19vcChzdHJ1Y3QgYmlvICpiaW8sIGludCBlcnJvcikKIHN0YXRpYyBpbnQK
IF9fZG9fYmxvY2tfaW9fb3Aoc3RydWN0IHhlbl9ibGtpZiAqYmxraWYpCiB7Ci0JdW5pb24gYmxr
aWZfYmFja19yaW5ncyAqYmxrX3JpbmdzID0gJmJsa2lmLT5ibGtfcmluZ3M7Ci0Jc3RydWN0IGJs
a2lmX3JlcXVlc3QgcmVxOworCXVuaW9uIGJsa2lmX2JhY2tfcmluZ3MgKmJsa19yaW5ncyA9CisJ
CSh1bmlvbiBibGtpZl9iYWNrX3JpbmdzICopYmxraWYtPm9wcy0+Z2V0X2JhY2tfcmluZyhibGtp
Zik7CisJc3RydWN0IGJsa2lmX3JlcXVlc3QgKnJlcSA9IChzdHJ1Y3QgYmxraWZfcmVxdWVzdCAq
KWJsa2lmLT5yZXE7CiAJc3RydWN0IHBlbmRpbmdfcmVxICpwZW5kaW5nX3JlcTsKIAlSSU5HX0lE
WCByYywgcnA7CiAJaW50IG1vcmVfdG9fZG8gPSAwOwpAQCAtNTI2LDI4ICs1ODcsMTkgQEAgX19k
b19ibG9ja19pb19vcChzdHJ1Y3QgeGVuX2Jsa2lmICpibGtpZikKIAkJCWJyZWFrOwogCQl9CiAK
LQkJc3dpdGNoIChibGtpZi0+YmxrX3Byb3RvY29sKSB7Ci0JCWNhc2UgQkxLSUZfUFJPVE9DT0xf
TkFUSVZFOgotCQkJbWVtY3B5KCZyZXEsIFJJTkdfR0VUX1JFUVVFU1QoJmJsa19yaW5ncy0+bmF0
aXZlLCByYyksIHNpemVvZihyZXEpKTsKLQkJCWJyZWFrOwotCQljYXNlIEJMS0lGX1BST1RPQ09M
X1g4Nl8zMjoKLQkJCWJsa2lmX2dldF94ODZfMzJfcmVxKCZyZXEsIFJJTkdfR0VUX1JFUVVFU1Qo
JmJsa19yaW5ncy0+eDg2XzMyLCByYykpOwotCQkJYnJlYWs7Ci0JCWNhc2UgQkxLSUZfUFJPVE9D
T0xfWDg2XzY0OgotCQkJYmxraWZfZ2V0X3g4Nl82NF9yZXEoJnJlcSwgUklOR19HRVRfUkVRVUVT
VCgmYmxrX3JpbmdzLT54ODZfNjQsIHJjKSk7Ci0JCQlicmVhazsKLQkJZGVmYXVsdDoKLQkJCUJV
RygpOwotCQl9CisJCWJsa2lmLT5vcHMtPmNvcHlfYmxraWZfcmVxKGJsa2lmLCByYyk7CisKKwkJ
YmxraWYtPm9wcy0+Y29weV9ibGtpZl9zZWdfcmVxKGJsa2lmKTsKKwogCQlibGtfcmluZ3MtPmNv
bW1vbi5yZXFfY29ucyA9ICsrcmM7IC8qIGJlZm9yZSBtYWtlX3Jlc3BvbnNlKCkgKi8KIAogCQkv
KiBBcHBseSBhbGwgc2FuaXR5IGNoZWNrcyB0byAvcHJpdmF0ZSBjb3B5LyBvZiByZXF1ZXN0LiAq
LwogCQliYXJyaWVyKCk7Ci0JCWlmICh1bmxpa2VseShyZXEub3BlcmF0aW9uID09IEJMS0lGX09Q
X0RJU0NBUkQpKSB7CisJCWlmICh1bmxpa2VseShyZXEtPm9wZXJhdGlvbiA9PSBCTEtJRl9PUF9E
SVNDQVJEKSkgewogCQkJZnJlZV9yZXEocGVuZGluZ19yZXEpOwotCQkJaWYgKGRpc3BhdGNoX2Rp
c2NhcmRfaW8oYmxraWYsICZyZXEpKQorCQkJaWYgKGRpc3BhdGNoX2Rpc2NhcmRfaW8oYmxraWYp
KQogCQkJCWJyZWFrOwotCQl9IGVsc2UgaWYgKGRpc3BhdGNoX3J3X2Jsb2NrX2lvKGJsa2lmLCAm
cmVxLCBwZW5kaW5nX3JlcSkpCisJCX0gZWxzZSBpZiAoZGlzcGF0Y2hfcndfYmxvY2tfaW8oYmxr
aWYsIHBlbmRpbmdfcmVxKSkKIAkJCWJyZWFrOwogCiAJCS8qIFlpZWxkIHBvaW50IGZvciB0aGlz
IHVuYm91bmRlZCBsb29wLiAqLwpAQCAtNTYwLDcgKzYxMiw4IEBAIF9fZG9fYmxvY2tfaW9fb3Ao
c3RydWN0IHhlbl9ibGtpZiAqYmxraWYpCiBzdGF0aWMgaW50CiBkb19ibG9ja19pb19vcChzdHJ1
Y3QgeGVuX2Jsa2lmICpibGtpZikKIHsKLQl1bmlvbiBibGtpZl9iYWNrX3JpbmdzICpibGtfcmlu
Z3MgPSAmYmxraWYtPmJsa19yaW5nczsKKwl1bmlvbiBibGtpZl9iYWNrX3JpbmdzICpibGtfcmlu
Z3MgPQorCQkodW5pb24gYmxraWZfYmFja19yaW5ncyAqKWJsa2lmLT5vcHMtPmdldF9iYWNrX3Jp
bmcoYmxraWYpOwogCWludCBtb3JlX3RvX2RvOwogCiAJZG8gewpAQCAtNTc4LDE0ICs2MzEsMTUg
QEAgZG9fYmxvY2tfaW9fb3Aoc3RydWN0IHhlbl9ibGtpZiAqYmxraWYpCiAgKiBhbmQgY2FsbCB0
aGUgJ3N1Ym1pdF9iaW8nIHRvIHBhc3MgaXQgdG8gdGhlIHVuZGVybHlpbmcgc3RvcmFnZS4KICAq
Lwogc3RhdGljIGludCBkaXNwYXRjaF9yd19ibG9ja19pbyhzdHJ1Y3QgeGVuX2Jsa2lmICpibGtp
ZiwKLQkJCQlzdHJ1Y3QgYmxraWZfcmVxdWVzdCAqcmVxLAogCQkJCXN0cnVjdCBwZW5kaW5nX3Jl
cSAqcGVuZGluZ19yZXEpCiB7CisJc3RydWN0IGJsa2lmX3JlcXVlc3QgKnJlcSA9IChzdHJ1Y3Qg
YmxraWZfcmVxdWVzdCAqKWJsa2lmLT5yZXE7CisJc3RydWN0IGJsa2lmX3JlcXVlc3Rfc2VnbWVu
dCAqc2VnX3JlcSA9IGJsa2lmLT5zZWdfcmVxOwogCXN0cnVjdCBwaHlzX3JlcSBwcmVxOwotCXN0
cnVjdCBzZWdfYnVmIHNlZ1tCTEtJRl9NQVhfU0VHTUVOVFNfUEVSX1JFUVVFU1RdOworCXN0cnVj
dCBzZWdfYnVmICpzZWcgPSBwZW5kaW5nX3JlcS0+c2VnOwogCXVuc2lnbmVkIGludCBuc2VnOwog
CXN0cnVjdCBiaW8gKmJpbyA9IE5VTEw7Ci0Jc3RydWN0IGJpbyAqYmlvbGlzdFtCTEtJRl9NQVhf
U0VHTUVOVFNfUEVSX1JFUVVFU1RdOworCXN0cnVjdCBiaW8gKipiaW9saXN0ID0gcGVuZGluZ19y
ZXEtPmJpb2xpc3Q7CiAJaW50IGksIG5iaW8gPSAwOwogCWludCBvcGVyYXRpb247CiAJc3RydWN0
IGJsa19wbHVnIHBsdWc7CkBAIC02MTYsNyArNjcwLDcgQEAgc3RhdGljIGludCBkaXNwYXRjaF9y
d19ibG9ja19pbyhzdHJ1Y3QgeGVuX2Jsa2lmICpibGtpZiwKIAluc2VnID0gcmVxLT51LnJ3Lm5y
X3NlZ21lbnRzOwogCiAJaWYgKHVubGlrZWx5KG5zZWcgPT0gMCAmJiBvcGVyYXRpb24gIT0gV1JJ
VEVfRkxVU0gpIHx8Ci0JICAgIHVubGlrZWx5KG5zZWcgPiBCTEtJRl9NQVhfU0VHTUVOVFNfUEVS
X1JFUVVFU1QpKSB7CisJICAgIHVubGlrZWx5KG5zZWcgPiBibGtpZi0+b3BzLT5tYXhfc2VnKSkg
ewogCQlwcl9kZWJ1ZyhEUlZfUEZYICJCYWQgbnVtYmVyIG9mIHNlZ21lbnRzIGluIHJlcXVlc3Qg
KCVkKVxuIiwKIAkJCSBuc2VnKTsKIAkJLyogSGF2ZW4ndCBzdWJtaXR0ZWQgYW55IGJpbydzIHll
dC4gKi8KQEAgLTYzNCwxMCArNjg4LDEwIEBAIHN0YXRpYyBpbnQgZGlzcGF0Y2hfcndfYmxvY2tf
aW8oc3RydWN0IHhlbl9ibGtpZiAqYmxraWYsCiAJcGVuZGluZ19yZXEtPm5yX3BhZ2VzICA9IG5z
ZWc7CiAKIAlmb3IgKGkgPSAwOyBpIDwgbnNlZzsgaSsrKSB7Ci0JCXNlZ1tpXS5uc2VjID0gcmVx
LT51LnJ3LnNlZ1tpXS5sYXN0X3NlY3QgLQotCQkJcmVxLT51LnJ3LnNlZ1tpXS5maXJzdF9zZWN0
ICsgMTsKLQkJaWYgKChyZXEtPnUucncuc2VnW2ldLmxhc3Rfc2VjdCA+PSAoUEFHRV9TSVpFID4+
IDkpKSB8fAotCQkgICAgKHJlcS0+dS5ydy5zZWdbaV0ubGFzdF9zZWN0IDwgcmVxLT51LnJ3LnNl
Z1tpXS5maXJzdF9zZWN0KSkKKwkJc2VnW2ldLm5zZWMgPSBzZWdfcmVxW2ldLmxhc3Rfc2VjdCAt
CisJCQlzZWdfcmVxW2ldLmZpcnN0X3NlY3QgKyAxOworCQlpZiAoKHNlZ19yZXFbaV0ubGFzdF9z
ZWN0ID49IChQQUdFX1NJWkUgPj4gOSkpIHx8CisJCSAgICAoc2VnX3JlcVtpXS5sYXN0X3NlY3Qg
PCBzZWdfcmVxW2ldLmZpcnN0X3NlY3QpKQogCQkJZ290byBmYWlsX3Jlc3BvbnNlOwogCQlwcmVx
Lm5yX3NlY3RzICs9IHNlZ1tpXS5uc2VjOwogCkBAIC02NzYsNyArNzMwLDcgQEAgc3RhdGljIGlu
dCBkaXNwYXRjaF9yd19ibG9ja19pbyhzdHJ1Y3QgeGVuX2Jsa2lmICpibGtpZiwKIAkgKiB0aGUg
aHlwZXJjYWxsIHRvIHVubWFwIHRoZSBncmFudHMgLSB0aGF0IGlzIGFsbCBkb25lIGluCiAJICog
eGVuX2Jsa2JrX3VubWFwLgogCSAqLwotCWlmICh4ZW5fYmxrYmtfbWFwKHJlcSwgcGVuZGluZ19y
ZXEsIHNlZykpCisJaWYgKHhlbl9ibGtia19tYXAocmVxLCBzZWdfcmVxLCBwZW5kaW5nX3JlcSwg
c2VnKSkKIAkJZ290byBmYWlsX2ZsdXNoOwogCiAJLyoKQEAgLTc0Niw3ICs4MDAsNyBAQCBzdGF0
aWMgaW50IGRpc3BhdGNoX3J3X2Jsb2NrX2lvKHN0cnVjdCB4ZW5fYmxraWYgKmJsa2lmLAogCXhl
bl9ibGtia191bm1hcChwZW5kaW5nX3JlcSk7CiAgZmFpbF9yZXNwb25zZToKIAkvKiBIYXZlbid0
IHN1Ym1pdHRlZCBhbnkgYmlvJ3MgeWV0LiAqLwotCW1ha2VfcmVzcG9uc2UoYmxraWYsIHJlcS0+
dS5ydy5pZCwgcmVxLT5vcGVyYXRpb24sIEJMS0lGX1JTUF9FUlJPUik7CisJbWFrZV9yZXNwb25z
ZShibGtpZiwgcmVxLT51LnJ3LmlkLCByZXEtPm9wZXJhdGlvbiwgMCwgQkxLSUZfUlNQX0VSUk9S
KTsKIAlmcmVlX3JlcShwZW5kaW5nX3JlcSk7CiAJbXNsZWVwKDEpOyAvKiBiYWNrIG9mZiBhIGJp
dCAqLwogCXJldHVybiAtRUlPOwpAQCAtNzU5LDE3ICs4MTMsMjggQEAgc3RhdGljIGludCBkaXNw
YXRjaF9yd19ibG9ja19pbyhzdHJ1Y3QgeGVuX2Jsa2lmICpibGtpZiwKIAlyZXR1cm4gLUVJTzsK
IH0KIAorc3RydWN0IGJsa2lmX3NlZ21lbnRfYmFja19yaW5nICoKKwlnZXRfc2VnX2JhY2tfcmlu
ZyhzdHJ1Y3QgeGVuX2Jsa2lmICpibGtpZikKK3sKKwlyZXR1cm4gTlVMTDsKK30KIAordm9pZCBw
dXNoX2JhY2tfcmluZ19yc3AodW5pb24gYmxraWZfYmFja19yaW5ncyAqYmxrX3JpbmdzLCBpbnQg
bnJfcGFnZSwgaW50ICpub3RpZnkpCit7CisJYmxrX3JpbmdzLT5jb21tb24ucnNwX3Byb2RfcHZ0
Kys7CisJUklOR19QVVNIX1JFU1BPTlNFU19BTkRfQ0hFQ0tfTk9USUZZKCZibGtfcmluZ3MtPmNv
bW1vbiwgKm5vdGlmeSk7Cit9CiAKIC8qCiAgKiBQdXQgYSByZXNwb25zZSBvbiB0aGUgcmluZyBv
biBob3cgdGhlIG9wZXJhdGlvbiBmYXJlZC4KICAqLwogc3RhdGljIHZvaWQgbWFrZV9yZXNwb25z
ZShzdHJ1Y3QgeGVuX2Jsa2lmICpibGtpZiwgdTY0IGlkLAotCQkJICB1bnNpZ25lZCBzaG9ydCBv
cCwgaW50IHN0KQorCQkJICB1bnNpZ25lZCBzaG9ydCBvcCwgaW50IG5yX3BhZ2UsIGludCBzdCkK
IHsKIAlzdHJ1Y3QgYmxraWZfcmVzcG9uc2UgIHJlc3A7CiAJdW5zaWduZWQgbG9uZyAgICAgZmxh
Z3M7Ci0JdW5pb24gYmxraWZfYmFja19yaW5ncyAqYmxrX3JpbmdzID0gJmJsa2lmLT5ibGtfcmlu
Z3M7CisJdW5pb24gYmxraWZfYmFja19yaW5ncyAqYmxrX3JpbmdzID0KKwkJKHVuaW9uIGJsa2lm
X2JhY2tfcmluZ3MgKilibGtpZi0+b3BzLT5nZXRfYmFja19yaW5nKGJsa2lmKTsKIAlpbnQgbm90
aWZ5OwogCiAJcmVzcC5pZCAgICAgICAgPSBpZDsKQEAgLTc5NCw4ICs4NTksOSBAQCBzdGF0aWMg
dm9pZCBtYWtlX3Jlc3BvbnNlKHN0cnVjdCB4ZW5fYmxraWYgKmJsa2lmLCB1NjQgaWQsCiAJZGVm
YXVsdDoKIAkJQlVHKCk7CiAJfQotCWJsa19yaW5ncy0+Y29tbW9uLnJzcF9wcm9kX3B2dCsrOwot
CVJJTkdfUFVTSF9SRVNQT05TRVNfQU5EX0NIRUNLX05PVElGWSgmYmxrX3JpbmdzLT5jb21tb24s
IG5vdGlmeSk7CisKKwlibGtpZi0+b3BzLT5wdXNoX2JhY2tfcmluZ19yc3AoYmxrX3JpbmdzLCBu
cl9wYWdlLCAmbm90aWZ5KTsKKwogCXNwaW5fdW5sb2NrX2lycXJlc3RvcmUoJmJsa2lmLT5ibGtf
cmluZ19sb2NrLCBmbGFncyk7CiAJaWYgKG5vdGlmeSkKIAkJbm90aWZ5X3JlbW90ZV92aWFfaXJx
KGJsa2lmLT5pcnEpOwpAQCAtODczLDYgKzkzOSwxNCBAQCBzdGF0aWMgaW50IF9faW5pdCB4ZW5f
YmxraWZfaW5pdCh2b2lkKQogCXJldHVybiByYzsKIH0KIAorc3RydWN0IGJsa2JhY2tfcmluZ19v
cGVyYXRpb24gYmxrYmFja19yaW5nX29wcyA9IHsKKwkuZ2V0X2JhY2tfcmluZyA9IGdldF9iYWNr
X3JpbmcsCisJLmNvcHlfYmxraWZfcmVxID0gY29weV9ibGtpZl9yZXEsCisJLmNvcHlfYmxraWZf
c2VnX3JlcSA9IGNvcHlfYmxraWZfc2VnX3JlcSwKKwkucHVzaF9iYWNrX3JpbmdfcnNwID0gcHVz
aF9iYWNrX3JpbmdfcnNwLAorCS5tYXhfc2VnID0gQkxLSUZfTUFYX1NFR01FTlRTX1BFUl9SRVFV
RVNULAorfTsKKwogbW9kdWxlX2luaXQoeGVuX2Jsa2lmX2luaXQpOwogCiBNT0RVTEVfTElDRU5T
RSgiRHVhbCBCU0QvR1BMIik7CmRpZmYgLS1naXQgYS9kcml2ZXJzL2Jsb2NrL3hlbi1ibGtiYWNr
L2NvbW1vbi5oIGIvZHJpdmVycy9ibG9jay94ZW4tYmxrYmFjay9jb21tb24uaAppbmRleCA5YWQz
YjVlLi5jZTU1NTZhIDEwMDY0NAotLS0gYS9kcml2ZXJzL2Jsb2NrL3hlbi1ibGtiYWNrL2NvbW1v
bi5oCisrKyBiL2RyaXZlcnMvYmxvY2sveGVuLWJsa2JhY2svY29tbW9uLmgKQEAgLTE0Niw2ICsx
NDYsMTEgQEAgZW51bSBibGtpZl9wcm90b2NvbCB7CiAJQkxLSUZfUFJPVE9DT0xfWDg2XzY0ID0g
MywKIH07CiAKK2VudW0gYmxraWZfYmFja3JpbmdfdHlwZSB7CisJQkFDS1JJTkdfVFlQRV8xID0g
MSwKKwlCQUNLUklOR19UWVBFXzIgPSAyLAorfTsKKwogc3RydWN0IHhlbl92YmQgewogCS8qIFdo
YXQgdGhlIGRvbWFpbiByZWZlcnMgdG8gdGhpcyB2YmQgYXMuICovCiAJYmxraWZfdmRldl90CQlo
YW5kbGU7CkBAIC0xNjMsNiArMTY4LDE1IEBAIHN0cnVjdCB4ZW5fdmJkIHsKIH07CiAKIHN0cnVj
dCBiYWNrZW5kX2luZm87CitzdHJ1Y3QgeGVuX2Jsa2lmOworCitzdHJ1Y3QgYmxrYmFja19yaW5n
X29wZXJhdGlvbiB7CisJdm9pZCAqKCpnZXRfYmFja19yaW5nKSAoc3RydWN0IHhlbl9ibGtpZiAq
YmxraWYpOworCXZvaWQgKCpjb3B5X2Jsa2lmX3JlcSkgKHN0cnVjdCB4ZW5fYmxraWYgKmJsa2lm
LCBSSU5HX0lEWCByYyk7CisJdm9pZCAoKmNvcHlfYmxraWZfc2VnX3JlcSkgKHN0cnVjdCB4ZW5f
YmxraWYgKmJsa2lmKTsKKwl2b2lkICgqcHVzaF9iYWNrX3JpbmdfcnNwKSAodW5pb24gYmxraWZf
YmFja19yaW5ncyAqYmxrX3JpbmdzLCBpbnQgbnJfcGFnZSwgaW50ICpub3RpZnkpOworCXVuc2ln
bmVkIGludCBtYXhfc2VnOworfTsKIAogc3RydWN0IHhlbl9ibGtpZiB7CiAJLyogVW5pcXVlIGlk
ZW50aWZpZXIgZm9yIHRoaXMgaW50ZXJmYWNlLiAqLwpAQCAtMTcxLDcgKzE4NSw5IEBAIHN0cnVj
dCB4ZW5fYmxraWYgewogCS8qIFBoeXNpY2FsIHBhcmFtZXRlcnMgb2YgdGhlIGNvbW1zIHdpbmRv
dy4gKi8KIAl1bnNpZ25lZCBpbnQJCWlycTsKIAkvKiBDb21tcyBpbmZvcm1hdGlvbi4gKi8KKwlz
dHJ1Y3QgYmxrYmFja19yaW5nX29wZXJhdGlvbgkqb3BzOwogCWVudW0gYmxraWZfcHJvdG9jb2wJ
YmxrX3Byb3RvY29sOworCWVudW0gYmxraWZfYmFja3JpbmdfdHlwZSBibGtfYmFja3JpbmdfdHlw
ZTsKIAl1bmlvbiBibGtpZl9iYWNrX3JpbmdzCWJsa19yaW5nczsKIAl2b2lkCQkJKmJsa19yaW5n
OwogCS8qIFRoZSBWQkQgYXR0YWNoZWQgdG8gdGhpcyBpbnRlcmZhY2UuICovCkBAIC0xNzksNiAr
MTk1LDggQEAgc3RydWN0IHhlbl9ibGtpZiB7CiAJLyogQmFjayBwb2ludGVyIHRvIHRoZSBiYWNr
ZW5kX2luZm8uICovCiAJc3RydWN0IGJhY2tlbmRfaW5mbwkqYmU7CiAJLyogUHJpdmF0ZSBmaWVs
ZHMuICovCisJdm9pZCAqCQkJcmVxOworCXN0cnVjdCBibGtpZl9yZXF1ZXN0X3NlZ21lbnQgKnNl
Z19yZXE7CiAJc3BpbmxvY2tfdAkJYmxrX3JpbmdfbG9jazsKIAlhdG9taWNfdAkJcmVmY250Owog
CmRpZmYgLS1naXQgYS9kcml2ZXJzL2Jsb2NrL3hlbi1ibGtiYWNrL3hlbmJ1cy5jIGIvZHJpdmVy
cy9ibG9jay94ZW4tYmxrYmFjay94ZW5idXMuYwppbmRleCA0ZjY2MTcxLi44NTBlY2FkIDEwMDY0
NAotLS0gYS9kcml2ZXJzL2Jsb2NrL3hlbi1ibGtiYWNrL3hlbmJ1cy5jCisrKyBiL2RyaXZlcnMv
YmxvY2sveGVuLWJsa2JhY2sveGVuYnVzLmMKQEAgLTM2LDYgKzM2LDggQEAgc3RhdGljIGludCBj
b25uZWN0X3Jpbmcoc3RydWN0IGJhY2tlbmRfaW5mbyAqKTsKIHN0YXRpYyB2b2lkIGJhY2tlbmRf
Y2hhbmdlZChzdHJ1Y3QgeGVuYnVzX3dhdGNoICosIGNvbnN0IGNoYXIgKiosCiAJCQkgICAgdW5z
aWduZWQgaW50KTsKIAorZXh0ZXJuIHN0cnVjdCBibGtiYWNrX3Jpbmdfb3BlcmF0aW9uIGJsa2Jh
Y2tfcmluZ19vcHM7CisKIHN0cnVjdCB4ZW5idXNfZGV2aWNlICp4ZW5fYmxrYmtfeGVuYnVzKHN0
cnVjdCBiYWNrZW5kX2luZm8gKmJlKQogewogCXJldHVybiBiZS0+ZGV2OwpAQCAtNzI1LDYgKzcy
NywxMiBAQCBzdGF0aWMgaW50IGNvbm5lY3RfcmluZyhzdHJ1Y3QgYmFja2VuZF9pbmZvICpiZSkK
IAlpbnQgZXJyOwogCiAJRFBSSU5USygiJXMiLCBkZXYtPm90aGVyZW5kKTsKKwliZS0+YmxraWYt
Pm9wcyA9ICZibGtiYWNrX3Jpbmdfb3BzOworCWJlLT5ibGtpZi0+cmVxID0ga21hbGxvYyhzaXpl
b2Yoc3RydWN0IGJsa2lmX3JlcXVlc3QpLAorCQkJCSBHRlBfS0VSTkVMKTsKKwliZS0+YmxraWYt
PnNlZ19yZXEgPSBrbWFsbG9jKHNpemVvZihzdHJ1Y3QgYmxraWZfcmVxdWVzdF9zZWdtZW50KSoK
KwkJCQkgICAgIGJlLT5ibGtpZi0+b3BzLT5tYXhfc2VnLCAgR0ZQX0tFUk5FTCk7CisJYmUtPmJs
a2lmLT5ibGtfYmFja3JpbmdfdHlwZSA9IEJBQ0tSSU5HX1RZUEVfMTsKIAogCWVyciA9IHhlbmJ1
c19nYXRoZXIoWEJUX05JTCwgZGV2LT5vdGhlcmVuZCwgInJpbmctcmVmIiwgIiVsdSIsCiAJCQkg
ICAgJnJpbmdfcmVmLCAiZXZlbnQtY2hhbm5lbCIsICIldSIsICZldnRjaG4sIE5VTEwpOwo=

--_004_A21691DE07B84740B5F0B81466D5148A23BCF23DSHSMSX102ccrcor_
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--_004_A21691DE07B84740B5F0B81466D5148A23BCF23DSHSMSX102ccrcor_--


From xen-devel-bounces@lists.xen.org Thu Aug 16 10:30:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 10:30:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1xKu-0006Di-5k; Thu, 16 Aug 2012 10:29:56 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <ronghui.duan@intel.com>) id 1T1xKs-0006DI-FT
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 10:29:54 +0000
X-Env-Sender: ronghui.duan@intel.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1345112950!9551246!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzA4NzU3\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1456 invoked from network); 16 Aug 2012 10:29:11 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-9.tower-27.messagelabs.com with SMTP;
	16 Aug 2012 10:29:11 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga101.jf.intel.com with ESMTP; 16 Aug 2012 03:29:10 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.77,778,1336374000"; d="scan'208";a="187140963"
Received: from fmsmsx108.amr.corp.intel.com ([10.19.9.228])
	by orsmga002.jf.intel.com with ESMTP; 16 Aug 2012 03:29:09 -0700
Received: from shsmsx101.ccr.corp.intel.com (10.239.4.153) by
	FMSMSX108.amr.corp.intel.com (10.19.9.228) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Thu, 16 Aug 2012 03:29:09 -0700
Received: from shsmsx102.ccr.corp.intel.com ([169.254.2.196]) by
	SHSMSX101.ccr.corp.intel.com ([169.254.1.82]) with mapi id
	14.01.0355.002; Thu, 16 Aug 2012 18:29:07 +0800
From: "Duan, Ronghui" <ronghui.duan@intel.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Thread-Topic: [Xen-devel] [RFC v1 4/5] VBD: enlarge max segment per request
	in blkfront
Thread-Index: Ac17mfcFcRPq5PRjQDG8OqDy5fs6eg==
Date: Thu, 16 Aug 2012 10:29:06 +0000
Message-ID: <A21691DE07B84740B5F0B81466D5148A23BCF265@SHSMSX102.ccr.corp.intel.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: yes
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
Content-Type: multipart/mixed;
	boundary="_002_A21691DE07B84740B5F0B81466D5148A23BCF265SHSMSX102ccrcor_"
MIME-Version: 1.0
Cc: "Ian.Jackson@eu.citrix.com" <Ian.Jackson@eu.citrix.com>,
	"Stefano.Stabellini@eu.citrix.com" <Stefano.Stabellini@eu.citrix.com>
Subject: [Xen-devel] [RFC v1 4/5] VBD: enlarge max segment per request in
 blkfront
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--_002_A21691DE07B84740B5F0B81466D5148A23BCF265SHSMSX102ccrcor_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable


per disk pending_req list: give each xen_blkif its own pool of pending requests (with its own free list and spinlock) instead of the single global blkbk pool.
Signed-off-by: Ronghui Duan <ronghui.duan@intel.com>
diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
index b4767f5..45eda98 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -64,49 +64,6 @@ MODULE_PARM_DESC(reqs, "Number of blkback requests to allocate");
 static unsigned int log_stats;
 module_param(log_stats, int, 0644);
 
-struct seg_buf {
-        unsigned long buf;
-        unsigned int nsec;
-};
-
-/*
- * Each outstanding request that we've passed to the lower device layers has a
- * 'pending_req' allocated to it. Each buffer_head that completes decrements
- * the pendcnt towards zero. When it hits zero, the specified domain has a
- * response queued for it, with the saved 'id' passed back.
- */
-struct pending_req {
-	struct xen_blkif	*blkif;
-	u64			id;
-	int			nr_pages;
-	atomic_t		pendcnt;
-	unsigned short		operation;
-	int			status;
-	struct list_head	free_list;
-	struct gnttab_map_grant_ref	*map;
-	struct gnttab_unmap_grant_ref	*unmap;
-	struct seg_buf			*seg;
-	struct bio			**biolist;
-	struct page			**pages;
-};
-
-#define BLKBACK_INVALID_HANDLE (~0)
-
-struct xen_blkbk {
-	struct pending_req	*pending_reqs;
-	/* List of all 'pending_req' available */
-	struct list_head	pending_free;
-	/* And its spinlock. */
-	spinlock_t		pending_free_lock;
-	wait_queue_head_t	pending_free_wq;
-	/* The list of all pages that are available. */
-	struct page		**pending_pages;
-	/* And the grant handles that are available. */
-	grant_handle_t		*pending_grant_handles;
-};
-
-static struct xen_blkbk *blkbk;
-
 /*
  * Little helpful macro to figure out the index and virtual address of the
  * pending_pages[..]. For each 'pending_req' we have have up to
@@ -115,20 +72,20 @@ static struct xen_blkbk *blkbk;
  */
 static inline int vaddr_pagenr(struct pending_req *req, int seg)
 {
-	return (req - blkbk->pending_reqs) *
-		BLKIF_MAX_SEGMENTS_PER_REQUEST + seg;
+	return (req - req->blkif->blkbk->pending_reqs) *
+		req->blkif->ops->max_seg + seg;
 }
 
 #define pending_page(req, seg) pending_pages[vaddr_pagenr(req, seg)]
 
 static inline unsigned long vaddr(struct pending_req *req, int seg)
 {
-	unsigned long pfn = page_to_pfn(blkbk->pending_page(req, seg));
+	unsigned long pfn = page_to_pfn(req->blkif->blkbk->pending_page(req, seg));
 	return (unsigned long)pfn_to_kaddr(pfn);
 }
 
 #define pending_handle(_req, _seg) \
-	(blkbk->pending_grant_handles[vaddr_pagenr(_req, _seg)])
+	(_req->blkif->blkbk->pending_grant_handles[vaddr_pagenr(_req, _seg)])
 
 
 static int do_block_io_op(struct xen_blkif *blkif);
@@ -143,6 +100,7 @@ static void make_response(struct xen_blkif *blkif, u64 id,
  */
 static void free_req(struct pending_req *req)
 {
+	struct xen_blkbk *blkbk = req->blkif->blkbk;
 	unsigned long flags;
 	int was_empty;
 
@@ -162,8 +120,9 @@ static void free_req(struct pending_req *req)
 /*
  * Retrieve from the 'pending_reqs' a free pending_req structure to be used.
  */
-static struct pending_req *alloc_req(void)
+static struct pending_req *alloc_req(struct xen_blkif *blkif)
 {
+	struct xen_blkbk *blkbk = blkif->blkbk;
 	struct pending_req *req = NULL;
 	unsigned long flags;
 	unsigned int max_seg = BLKIF_MAX_SEGMENTS_PER_REQUEST;
@@ -173,6 +132,7 @@ static struct pending_req *alloc_req(void)
 		req = list_entry(blkbk->pending_free.next, struct pending_req,
 				 free_list);
 		list_del(&req->free_list);
+		req->blkif = blkif;
 	}
 	spin_unlock_irqrestore(&blkbk->pending_free_lock, flags);
 
@@ -319,8 +279,8 @@ int xen_blkif_schedule(void *arg)
 			blkif->wq,
 			blkif->waiting_reqs || kthread_should_stop());
 		wait_event_interruptible(
-			blkbk->pending_free_wq,
-			!list_empty(&blkbk->pending_free) ||
+			blkif->blkbk->pending_free_wq,
+			!list_empty(&blkif->blkbk->pending_free) ||
 			kthread_should_stop());
 
 		blkif->waiting_reqs = 0;
@@ -395,7 +355,8 @@ static int xen_blkbk_map(struct blkif_request *req,
 				  pending_req->blkif->domid);
 	}
 
-	ret = gnttab_map_refs(map, NULL, &blkbk->pending_page(pending_req, 0), nseg);
+	ret = gnttab_map_refs(map, NULL,
+		&pending_req->blkif->blkbk->pending_page(pending_req, 0), nseg);
 	BUG_ON(ret);
 
 	/*
@@ -580,7 +541,7 @@ __do_block_io_op(struct xen_blkif *blkif)
 			break;
 		}
 
-		pending_req = alloc_req();
+		pending_req = alloc_req(blkif);
 		if (NULL == pending_req) {
 			blkif->st_oo_req++;
 			more_to_do = 1;
@@ -742,7 +703,7 @@ static int dispatch_rw_block_io(struct xen_blkif *blkif,
 	for (i = 0; i < nseg; i++) {
 		while ((bio == NULL) ||
 		       (bio_add_page(bio,
-				     blkbk->pending_page(pending_req, i),
+				     blkif->blkbk->pending_page(pending_req, i),
 				     seg[i].nsec << 9,
 				     seg[i].buf & ~PAGE_MASK) == 0)) {
 
@@ -867,35 +828,34 @@ static void make_response(struct xen_blkif *blkif, u64 id,
 		notify_remote_via_irq(blkif->irq);
 }
 
-static int __init xen_blkif_init(void)
+int xen_blkif_init_blkbk(struct xen_blkif *blkif)
 {
-	int i, mmap_pages;
 	int rc = 0;
+	int i, mmap_pages;
+	struct xen_blkbk *blkbk;
 
-	if (!xen_domain())
-		return -ENODEV;
-
-	blkbk = kzalloc(sizeof(struct xen_blkbk), GFP_KERNEL);
-	if (!blkbk) {
+	blkif->blkbk = kzalloc(sizeof(struct xen_blkbk), GFP_KERNEL);
+	if (!blkif->blkbk) {
 		pr_alert(DRV_PFX "%s: out of memory!\n", __func__);
 		return -ENOMEM;
 	}
 
-	mmap_pages = xen_blkif_reqs * BLKIF_MAX_SEGMENTS_PER_REQUEST;
-
-	blkbk->pending_reqs          = kzalloc(sizeof(blkbk->pending_reqs[0]) *
-					xen_blkif_reqs, GFP_KERNEL);
-	blkbk->pending_grant_handles = kmalloc(sizeof(blkbk->pending_grant_handles[0]) *
-					mmap_pages, GFP_KERNEL);
-	blkbk->pending_pages         = kzalloc(sizeof(blkbk->pending_pages[0]) *
-					mmap_pages, GFP_KERNEL);
-
-	if (!blkbk->pending_reqs || !blkbk->pending_grant_handles ||
-	    !blkbk->pending_pages) {
-		rc = -ENOMEM;
+	blkbk = blkif->blkbk;
+	mmap_pages = xen_blkif_reqs * blkif->ops->max_seg;
+
+	blkbk->pending_reqs = kzalloc(sizeof(blkbk->pending_reqs[0]) *
+				      xen_blkif_reqs, GFP_KERNEL);
+	blkbk->pending_grant_handles = kzalloc(sizeof(blkbk->pending_grant_handles[0])
+					       * mmap_pages, GFP_KERNEL);
+	blkbk->pending_pages = kzalloc(sizeof(blkbk->pending_pages[0]) *
+				       mmap_pages, GFP_KERNEL);
+
+	if (!blkbk->pending_reqs || !blkbk->pending_grant_handles ||
+	    !blkbk->pending_pages) {
+		rc = -ENOMEM;
 		goto out_of_memory;
 	}
-
+
 	for (i = 0; i < mmap_pages; i++) {
 		blkbk->pending_grant_handles[i] = BLKBACK_INVALID_HANDLE;
 		blkbk->pending_pages[i] = alloc_page(GFP_KERNEL);
@@ -904,10 +864,6 @@ static int __init xen_blkif_init(void)
 			goto out_of_memory;
 		}
 	}
-	rc = xen_blkif_interface_init();
-	if (rc)
-		goto failed_init;
-
 	INIT_LIST_HEAD(&blkbk->pending_free);
 	spin_lock_init(&blkbk->pending_free_lock);
 	init_waitqueue_head(&blkbk->pending_free_wq);
@@ -916,15 +872,10 @@ static int __init xen_blkif_init(void)
 		list_add_tail(&blkbk->pending_reqs[i].free_list,
 			      &blkbk->pending_free);
 
-	rc = xen_blkif_xenbus_init();
-	if (rc)
-		goto failed_init;
-
 	return 0;
 
- out_of_memory:
+out_of_memory:
 	pr_alert(DRV_PFX "%s: out of memory\n", __func__);
- failed_init:
 	kfree(blkbk->pending_reqs);
 	kfree(blkbk->pending_grant_handles);
 	if (blkbk->pending_pages) {
@@ -935,7 +886,7 @@ static int __init xen_blkif_init(void)
 		kfree(blkbk->pending_pages);
 	}
 	kfree(blkbk);
-	blkbk = NULL;
+	blkif->blkbk = NULL;
 	return rc;
 }
 
@@ -947,6 +898,24 @@ struct blkback_ring_operation blkback_ring_ops = {
 	.max_seg = BLKIF_MAX_SEGMENTS_PER_REQUEST,
 };
 
+static int __init xen_blkif_init(void)
+{
+	int rc = 0;
+
+	if (!xen_domain())
+		return -ENODEV;
+
+	rc = xen_blkif_interface_init();
+	if (rc)
+		return rc;
+
+	rc = xen_blkif_xenbus_init();
+	if (rc)
+		return rc;
+
+	return 0;
+}
+
 module_init(xen_blkif_init);
 
 MODULE_LICENSE("Dual BSD/GPL");
diff --git a/drivers/block/xen-blkback/common.h b/drivers/block/xen-blkback/common.h
index ce5556a..80e8acc 100644
--- a/drivers/block/xen-blkback/common.h
+++ b/drivers/block/xen-blkback/common.h
@@ -169,6 +169,7 @@ struct xen_vbd {
 
 struct backend_info;
 struct xen_blkif;
+struct xen_blkbk;
 
 struct blkback_ring_operation {
 	void *(*get_back_ring) (struct xen_blkif *blkif);
@@ -190,6 +191,7 @@ struct xen_blkif {
 	enum blkif_backring_type blk_backring_type;
 	union blkif_back_rings	blk_rings;
 	void			*blk_ring;
+	struct xen_blkbk        *blkbk;
 	/* The VBD attached to this interface. */
 	struct xen_vbd		vbd;
 	/* Back pointer to the backend_info. */
@@ -221,6 +223,46 @@ struct xen_blkif {
 	wait_queue_head_t	waiting_to_free;
 };
 
+struct seg_buf {
+        unsigned long buf;
+        unsigned int nsec;
+};
+
+/*
+ * Each outstanding request that we've passed to the lower device layers has a
+ * 'pending_req' allocated to it. Each buffer_head that completes decrements
+ * the pendcnt towards zero. When it hits zero, the specified domain has a
+ * response queued for it, with the saved 'id' passed back.
+ */
+struct pending_req {
+	struct xen_blkif	*blkif;
+	u64			id;
+	int			nr_pages;
+	atomic_t		pendcnt;
+	unsigned short		operation;
+	int			status;
+	struct list_head	free_list;
+	struct gnttab_map_grant_ref	*map;
+	struct gnttab_unmap_grant_ref	*unmap;
+	struct seg_buf			*seg;
+	struct bio			**biolist;
+	struct page			**pages;
+};
+
+#define BLKBACK_INVALID_HANDLE (~0)
+
+struct xen_blkbk {
+	struct pending_req	*pending_reqs;
+	/* List of all 'pending_req' available */
+	struct list_head	pending_free;
+	/* And its spinlock. */
+	spinlock_t		pending_free_lock;
+	wait_queue_head_t	pending_free_wq;
+	/* The list of all pages that are available. */
+	struct page		**pending_pages;
+	/* And the grant handles that are available. */
+	grant_handle_t		*pending_grant_handles;
+};
 
 #define vbd_sz(_v)	((_v)->bdev->bd_part ? \
 			 (_v)->bdev->bd_part->nr_sects : \
@@ -243,6 +285,8 @@ int xen_blkif_interface_init(void);
 
 int xen_blkif_xenbus_init(void);
 
+int xen_blkif_init_blkbk(struct xen_blkif *blkif);
+
 irqreturn_t xen_blkif_be_int(int irq, void *dev_id);
 int xen_blkif_schedule(void *arg);
 
diff --git a/drivers/block/xen-blkback/xenbus.c b/drivers/block/xen-blkback/xenbus.c
index 850ecad..8b0d496 100644
--- a/drivers/block/xen-blkback/xenbus.c
+++ b/drivers/block/xen-blkback/xenbus.c
@@ -769,6 +769,12 @@ static int connect_ring(struct backend_info *be)
 		return err;
 	}
 
+	err = xen_blkif_init_blkbk(be->blkif);
+	if (err) {
+		xenbus_dev_fatal(dev, err, "xen blkif init blkbk fails\n");
+		return err;
+	}
+
 	return 0;
 }
 


-ronghui



--_002_A21691DE07B84740B5F0B81466D5148A23BCF265SHSMSX102ccrcor_
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--_002_A21691DE07B84740B5F0B81466D5148A23BCF265SHSMSX102ccrcor_--


From xen-devel-bounces@lists.xen.org Thu Aug 16 10:30:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 10:30:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1xKu-0006Di-5k; Thu, 16 Aug 2012 10:29:56 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <ronghui.duan@intel.com>) id 1T1xKs-0006DI-FT
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 10:29:54 +0000
X-Env-Sender: ronghui.duan@intel.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1345112950!9551246!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzA4NzU3\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1456 invoked from network); 16 Aug 2012 10:29:11 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-9.tower-27.messagelabs.com with SMTP;
	16 Aug 2012 10:29:11 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga101.jf.intel.com with ESMTP; 16 Aug 2012 03:29:10 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.77,778,1336374000"; d="scan'208";a="187140963"
Received: from fmsmsx108.amr.corp.intel.com ([10.19.9.228])
	by orsmga002.jf.intel.com with ESMTP; 16 Aug 2012 03:29:09 -0700
Received: from shsmsx101.ccr.corp.intel.com (10.239.4.153) by
	FMSMSX108.amr.corp.intel.com (10.19.9.228) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Thu, 16 Aug 2012 03:29:09 -0700
Received: from shsmsx102.ccr.corp.intel.com ([169.254.2.196]) by
	SHSMSX101.ccr.corp.intel.com ([169.254.1.82]) with mapi id
	14.01.0355.002; Thu, 16 Aug 2012 18:29:07 +0800
From: "Duan, Ronghui" <ronghui.duan@intel.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Thread-Topic: [Xen-devel] [RFC v1 4/5] VBD: enlarge max segment per request
	in blkfront
Thread-Index: Ac17mfcFcRPq5PRjQDG8OqDy5fs6eg==
Date: Thu, 16 Aug 2012 10:29:06 +0000
Message-ID: <A21691DE07B84740B5F0B81466D5148A23BCF265@SHSMSX102.ccr.corp.intel.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: yes
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
Content-Type: multipart/mixed;
	boundary="_002_A21691DE07B84740B5F0B81466D5148A23BCF265SHSMSX102ccrcor_"
MIME-Version: 1.0
Cc: "Ian.Jackson@eu.citrix.com" <Ian.Jackson@eu.citrix.com>,
	"Stefano.Stabellini@eu.citrix.com" <Stefano.Stabellini@eu.citrix.com>
Subject: [Xen-devel] [RFC v1 4/5] VBD: enlarge max segment per request in
 blkfront
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--_002_A21691DE07B84740B5F0B81466D5148A23BCF265SHSMSX102ccrcor_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable


per-disk pending_req list
Signed-off-by: Ronghui Duan <ronghui.duan@intel.com>
diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
index b4767f5..45eda98 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -64,49 +64,6 @@ MODULE_PARM_DESC(reqs, "Number of blkback requests to allocate");
 static unsigned int log_stats;
 module_param(log_stats, int, 0644);
 
-struct seg_buf {
-        unsigned long buf;
-        unsigned int nsec;
-};
-
-/*
- * Each outstanding request that we've passed to the lower device layers has a
- * 'pending_req' allocated to it. Each buffer_head that completes decrements
- * the pendcnt towards zero. When it hits zero, the specified domain has a
- * response queued for it, with the saved 'id' passed back.
- */
-struct pending_req {
-	struct xen_blkif	*blkif;
-	u64			id;
-	int			nr_pages;
-	atomic_t		pendcnt;
-	unsigned short		operation;
-	int			status;
-	struct list_head	free_list;
-	struct gnttab_map_grant_ref	*map;
-	struct gnttab_unmap_grant_ref	*unmap;
-	struct seg_buf			*seg;
-	struct bio			**biolist;
-	struct page			**pages;
-};
-
-#define BLKBACK_INVALID_HANDLE (~0)
-
-struct xen_blkbk {
-	struct pending_req	*pending_reqs;
-	/* List of all 'pending_req' available */
-	struct list_head	pending_free;
-	/* And its spinlock. */
-	spinlock_t		pending_free_lock;
-	wait_queue_head_t	pending_free_wq;
-	/* The list of all pages that are available. */
-	struct page		**pending_pages;
-	/* And the grant handles that are available. */
-	grant_handle_t		*pending_grant_handles;
-};
-
-static struct xen_blkbk *blkbk;
-
 /*
  * Little helpful macro to figure out the index and virtual address of the
  * pending_pages[..]. For each 'pending_req' we have have up to
@@ -115,20 +72,20 @@ static struct xen_blkbk *blkbk;
  */
 static inline int vaddr_pagenr(struct pending_req *req, int seg)
 {
-	return (req - blkbk->pending_reqs) *
-		BLKIF_MAX_SEGMENTS_PER_REQUEST + seg;
+	return (req - req->blkif->blkbk->pending_reqs) *
+		req->blkif->ops->max_seg + seg;
 }
 
 #define pending_page(req, seg) pending_pages[vaddr_pagenr(req, seg)]
 
 static inline unsigned long vaddr(struct pending_req *req, int seg)
 {
-	unsigned long pfn = page_to_pfn(blkbk->pending_page(req, seg));
+	unsigned long pfn = page_to_pfn(req->blkif->blkbk->pending_page(req, seg));
 	return (unsigned long)pfn_to_kaddr(pfn);
 }
 
 #define pending_handle(_req, _seg) \
-	(blkbk->pending_grant_handles[vaddr_pagenr(_req, _seg)])
+	(_req->blkif->blkbk->pending_grant_handles[vaddr_pagenr(_req, _seg)])
 
 
 static int do_block_io_op(struct xen_blkif *blkif);
@@ -143,6 +100,7 @@ static void make_response(struct xen_blkif *blkif, u64 id,
  */
 static void free_req(struct pending_req *req)
 {
+	struct xen_blkbk *blkbk = req->blkif->blkbk;
 	unsigned long flags;
 	int was_empty;
 
@@ -162,8 +120,9 @@ static void free_req(struct pending_req *req)
 /*
  * Retrieve from the 'pending_reqs' a free pending_req structure to be used.
  */
-static struct pending_req *alloc_req(void)
+static struct pending_req *alloc_req(struct xen_blkif *blkif)
 {
+	struct xen_blkbk *blkbk = blkif->blkbk;
 	struct pending_req *req = NULL;
 	unsigned long flags;
 	unsigned int max_seg = BLKIF_MAX_SEGMENTS_PER_REQUEST;
@@ -173,6 +132,7 @@ static struct pending_req *alloc_req(void)
 		req = list_entry(blkbk->pending_free.next, struct pending_req,
 				 free_list);
 		list_del(&req->free_list);
+		req->blkif = blkif;
 	}
 	spin_unlock_irqrestore(&blkbk->pending_free_lock, flags);
 	
@@ -319,8 +279,8 @@ int xen_blkif_schedule(void *arg)
 			blkif->wq,
 			blkif->waiting_reqs || kthread_should_stop());
 		wait_event_interruptible(
-			blkbk->pending_free_wq,
-			!list_empty(&blkbk->pending_free) ||
+			blkif->blkbk->pending_free_wq,
+			!list_empty(&blkif->blkbk->pending_free) ||
 			kthread_should_stop());
 
 		blkif->waiting_reqs = 0;
@@ -395,7 +355,8 @@ static int xen_blkbk_map(struct blkif_request *req,
 				  pending_req->blkif->domid);
 	}
 
-	ret = gnttab_map_refs(map, NULL, &blkbk->pending_page(pending_req, 0), nseg);
+	ret = gnttab_map_refs(map, NULL,
+		&pending_req->blkif->blkbk->pending_page(pending_req, 0), nseg);
 	BUG_ON(ret);
 
 	/*
@@ -580,7 +541,7 @@ __do_block_io_op(struct xen_blkif *blkif)
 			break;
 		}
 
-		pending_req = alloc_req();
+		pending_req = alloc_req(blkif);
 		if (NULL == pending_req) {
 			blkif->st_oo_req++;
 			more_to_do = 1;
@@ -742,7 +703,7 @@ static int dispatch_rw_block_io(struct xen_blkif *blkif,
 	for (i = 0; i < nseg; i++) {
 		while ((bio == NULL) ||
 		       (bio_add_page(bio,
-				     blkbk->pending_page(pending_req, i),
+				     blkif->blkbk->pending_page(pending_req, i),
 				     seg[i].nsec << 9,
 				     seg[i].buf & ~PAGE_MASK) == 0)) {
 
@@ -867,35 +828,34 @@ static void make_response(struct xen_blkif *blkif, u64 id,
 		notify_remote_via_irq(blkif->irq);
 }
 
-static int __init xen_blkif_init(void)
+int xen_blkif_init_blkbk(struct xen_blkif *blkif)
 {
-	int i, mmap_pages;
 	int rc = 0;
+	int i, mmap_pages;
+	struct xen_blkbk *blkbk;
 
-	if (!xen_domain())
-		return -ENODEV;
-
-	blkbk = kzalloc(sizeof(struct xen_blkbk), GFP_KERNEL);
-	if (!blkbk) {
+	blkif->blkbk = kzalloc(sizeof(struct xen_blkbk), GFP_KERNEL);
+	if (!blkif->blkbk) {
 		pr_alert(DRV_PFX "%s: out of memory!\n", __func__);
 		return -ENOMEM;
 	}
 
-	mmap_pages = xen_blkif_reqs * BLKIF_MAX_SEGMENTS_PER_REQUEST;
-
-	blkbk->pending_reqs          = kzalloc(sizeof(blkbk->pending_reqs[0]) *
-					xen_blkif_reqs, GFP_KERNEL);
-	blkbk->pending_grant_handles = kmalloc(sizeof(blkbk->pending_grant_handles[0]) *
-					mmap_pages, GFP_KERNEL);
-	blkbk->pending_pages         = kzalloc(sizeof(blkbk->pending_pages[0]) *
-					mmap_pages, GFP_KERNEL);
-
-	if (!blkbk->pending_reqs || !blkbk->pending_grant_handles ||
-	    !blkbk->pending_pages) {
-		rc = -ENOMEM;
+	blkbk = blkif->blkbk;
+	mmap_pages = xen_blkif_reqs * blkif->ops->max_seg;
+
+	blkbk->pending_reqs = kzalloc(sizeof(blkbk->pending_reqs[0]) *
+				      xen_blkif_reqs, GFP_KERNEL);
+	blkbk->pending_grant_handles = kzalloc(sizeof(blkbk->pending_grant_handles[0])
+					       * mmap_pages, GFP_KERNEL);
+	blkbk->pending_pages = kzalloc(sizeof(blkbk->pending_pages[0]) *
+				       mmap_pages, GFP_KERNEL);
+ 
+ 	if (!blkbk->pending_reqs || !blkbk->pending_grant_handles ||
+ 	    !blkbk->pending_pages) {
+ 		rc = -ENOMEM;
 		goto out_of_memory;
 	}
-
+	
 	for (i = 0; i < mmap_pages; i++) {
 		blkbk->pending_grant_handles[i] = BLKBACK_INVALID_HANDLE;
 		blkbk->pending_pages[i] = alloc_page(GFP_KERNEL);
@@ -904,10 +864,6 @@ static int __init xen_blkif_init(void)
 			goto out_of_memory;
 		}
 	}
-	rc = xen_blkif_interface_init();
-	if (rc)
-		goto failed_init;
-
 	INIT_LIST_HEAD(&blkbk->pending_free);
 	spin_lock_init(&blkbk->pending_free_lock);
 	init_waitqueue_head(&blkbk->pending_free_wq);
@@ -916,15 +872,10 @@ static int __init xen_blkif_init(void)
 		list_add_tail(&blkbk->pending_reqs[i].free_list,
 			      &blkbk->pending_free);
 
-	rc = xen_blkif_xenbus_init();
-	if (rc)
-		goto failed_init;
-
 	return 0;
 
- out_of_memory:
+out_of_memory:
 	pr_alert(DRV_PFX "%s: out of memory\n", __func__);
- failed_init:
 	kfree(blkbk->pending_reqs);
 	kfree(blkbk->pending_grant_handles);
 	if (blkbk->pending_pages) {
@@ -935,7 +886,7 @@ static int __init xen_blkif_init(void)
 		kfree(blkbk->pending_pages);
 	}
 	kfree(blkbk);
-	blkbk = NULL;
+	blkif->blkbk = NULL;
 	return rc;
 }
 
@@ -947,6 +898,24 @@ struct blkback_ring_operation blkback_ring_ops = {
 	.max_seg = BLKIF_MAX_SEGMENTS_PER_REQUEST,
 };
 
+static int __init xen_blkif_init(void)
+{
+	int rc = 0;
+
+	if (!xen_domain())
+		return -ENODEV;
+
+	rc = xen_blkif_interface_init();
+	if (rc)
+		return rc;
+
+	rc = xen_blkif_xenbus_init();
+	if (rc)
+		return rc;
+
+	return 0;
+}
+
 module_init(xen_blkif_init);
 
 MODULE_LICENSE("Dual BSD/GPL");
diff --git a/drivers/block/xen-blkback/common.h b/drivers/block/xen-blkback/common.h
index ce5556a..80e8acc 100644
--- a/drivers/block/xen-blkback/common.h
+++ b/drivers/block/xen-blkback/common.h
@@ -169,6 +169,7 @@ struct xen_vbd {
 
 struct backend_info;
 struct xen_blkif;
+struct xen_blkbk;
 
 struct blkback_ring_operation {
 	void *(*get_back_ring) (struct xen_blkif *blkif);
@@ -190,6 +191,7 @@ struct xen_blkif {
 	enum blkif_backring_type blk_backring_type;
 	union blkif_back_rings	blk_rings;
 	void			*blk_ring;
+	struct xen_blkbk        *blkbk;
 	/* The VBD attached to this interface. */
 	struct xen_vbd		vbd;
 	/* Back pointer to the backend_info. */
@@ -221,6 +223,46 @@ struct xen_blkif {
 	wait_queue_head_t	waiting_to_free;
 };
 
+struct seg_buf {
+        unsigned long buf;
+        unsigned int nsec;
+};
+
+/*
+ * Each outstanding request that we've passed to the lower device layers has a
+ * 'pending_req' allocated to it. Each buffer_head that completes decrements
+ * the pendcnt towards zero. When it hits zero, the specified domain has a
+ * response queued for it, with the saved 'id' passed back.
+ */
+struct pending_req {
+	struct xen_blkif	*blkif;
+	u64			id;
+	int			nr_pages;
+	atomic_t		pendcnt;
+	unsigned short		operation;
+	int			status;
+	struct list_head	free_list;
+	struct gnttab_map_grant_ref	*map;
+	struct gnttab_unmap_grant_ref	*unmap;
+	struct seg_buf			*seg;
+	struct bio			**biolist;
+	struct page			**pages;
+};
+
+#define BLKBACK_INVALID_HANDLE (~0)
+
+struct xen_blkbk {
+	struct pending_req	*pending_reqs;
+	/* List of all 'pending_req' available */
+	struct list_head	pending_free;
+	/* And its spinlock. */
+	spinlock_t		pending_free_lock;
+	wait_queue_head_t	pending_free_wq;
+	/* The list of all pages that are available. */
+	struct page		**pending_pages;
+	/* And the grant handles that are available. */
+	grant_handle_t		*pending_grant_handles;
+};
 
 #define vbd_sz(_v)	((_v)->bdev->bd_part ? \
 			 (_v)->bdev->bd_part->nr_sects : \
@@ -243,6 +285,8 @@ int xen_blkif_interface_init(void);
 
 int xen_blkif_xenbus_init(void);
 
+int xen_blkif_init_blkbk(struct xen_blkif *blkif);
+
 irqreturn_t xen_blkif_be_int(int irq, void *dev_id);
 int xen_blkif_schedule(void *arg);
 
diff --git a/drivers/block/xen-blkback/xenbus.c b/drivers/block/xen-blkback/xenbus.c
index 850ecad..8b0d496 100644
--- a/drivers/block/xen-blkback/xenbus.c
+++ b/drivers/block/xen-blkback/xenbus.c
@@ -769,6 +769,12 @@ static int connect_ring(struct backend_info *be)
 		return err;
 	}
 
+	err = xen_blkif_init_blkbk(be->blkif);
+	if (err) {
+		xenbus_dev_fatal(dev, err, "xen blkif init blkbk fails\n");
+		return err;
+	}
+	
 	return 0;
}
 


-ronghui



--_002_A21691DE07B84740B5F0B81466D5148A23BCF265SHSMSX102ccrcor_
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--_002_A21691DE07B84740B5F0B81466D5148A23BCF265SHSMSX102ccrcor_--


From xen-devel-bounces@lists.xen.org Thu Aug 16 10:31:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 10:31:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1xMV-0006Mr-NJ; Thu, 16 Aug 2012 10:31:35 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ronghui.duan@intel.com>) id 1T1xMT-0006Ma-C1
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 10:31:33 +0000
Received: from [85.158.143.99:27660] by server-1.bemta-4.messagelabs.com id
	09/19-07754-40CCC205; Thu, 16 Aug 2012 10:31:32 +0000
X-Env-Sender: ronghui.duan@intel.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1345113088!21443126!1
X-Originating-IP: [143.182.124.21]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMjEgPT4gMzE0OTgw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11732 invoked from network); 16 Aug 2012 10:31:29 -0000
Received: from mga03.intel.com (HELO mga03.intel.com) (143.182.124.21)
	by server-6.tower-216.messagelabs.com with SMTP;
	16 Aug 2012 10:31:29 -0000
Received: from azsmga001.ch.intel.com ([10.2.17.19])
	by azsmga101.ch.intel.com with ESMTP; 16 Aug 2012 03:31:28 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.77,778,1336374000"; d="scan'208";a="181735984"
Received: from fmsmsx103.amr.corp.intel.com ([10.19.9.34])
	by azsmga001.ch.intel.com with ESMTP; 16 Aug 2012 03:31:25 -0700
Received: from shsmsx101.ccr.corp.intel.com (10.239.4.153) by
	FMSMSX103.amr.corp.intel.com (10.19.9.34) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Thu, 16 Aug 2012 03:31:13 -0700
Received: from shsmsx102.ccr.corp.intel.com ([169.254.2.196]) by
	SHSMSX101.ccr.corp.intel.com ([169.254.1.82]) with mapi id
	14.01.0355.002; Thu, 16 Aug 2012 18:31:12 +0800
From: "Duan, Ronghui" <ronghui.duan@intel.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Thread-Topic: [Xen-devel] [RFC v1 5/5] VBD: enlarge max segment per request
	in blkfront
Thread-Index: Ac17mkDy7uj5Ma85Q7yiLvIEmcoA5w==
Date: Thu, 16 Aug 2012 10:31:10 +0000
Message-ID: <A21691DE07B84740B5F0B81466D5148A23BCF283@SHSMSX102.ccr.corp.intel.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: yes
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
Content-Type: multipart/mixed;
	boundary="_002_A21691DE07B84740B5F0B81466D5148A23BCF283SHSMSX102ccrcor_"
MIME-Version: 1.0
Cc: "Ian.Jackson@eu.citrix.com" <Ian.Jackson@eu.citrix.com>,
	"Stefano.Stabellini@eu.citrix.com" <Stefano.Stabellini@eu.citrix.com>
Subject: [Xen-devel] [RFC v1 5/5] VBD: enlarge max segment per request in
 blkfront
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--_002_A21691DE07B84740B5F0B81466D5148A23BCF283SHSMSX102ccrcor_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable

add segring support in blkback
Signed-off-by: Ronghui Duan <ronghui.duan@intel.com>
diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkbac=
k/blkback.c
index 45eda98..0bbc226 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -60,6 +60,10 @@ static int xen_blkif_reqs =3D 64;
 module_param_named(reqs, xen_blkif_reqs, int, 0);
 MODULE_PARM_DESC(reqs, "Number of blkback requests to allocate");
=20
+int blkback_ring_type =3D 2;
+module_param_named(blk_ring_type, blkback_ring_type, int, 0);
+MODULE_PARM_DESC(blk_ring_type, "type of ring for blk device");
+
 /* Run-time switchable: /sys/module/blkback/parameters/ */
 static unsigned int log_stats;
 module_param(log_stats, int, 0644);
@@ -125,7 +129,7 @@ static struct pending_req *alloc_req(struct xen_blkif *=
blkif)
 	struct xen_blkbk *blkbk =3D blkif->blkbk;
 	struct pending_req *req =3D NULL;
 	unsigned long flags;
-	unsigned int max_seg =3D BLKIF_MAX_SEGMENTS_PER_REQUEST;
+	unsigned int max_seg =3D blkif->ops->max_seg;
 =09
 	spin_lock_irqsave(&blkbk->pending_free_lock, flags);
 	if (!list_empty(&blkbk->pending_free)) {
@@ -315,8 +319,10 @@ static void xen_blkbk_unmap(struct pending_req *req)
=20
 	for (i =3D 0; i < req->nr_pages; i++) {
 		handle =3D pending_handle(req, i);
-		if (handle =3D=3D BLKBACK_INVALID_HANDLE)
+		if (handle =3D=3D BLKBACK_INVALID_HANDLE) {
+			printk("BLKBACK_INVALID_HANDLE\n");
 			continue;
+		}
 		gnttab_set_unmap_op(&unmap[invcount], vaddr(req, i),
 				    GNTMAP_host_map, handle);
 		pending_handle(req, i) =3D BLKBACK_INVALID_HANDLE;
@@ -486,6 +492,12 @@ void *get_back_ring(struct xen_blkif *blkif)
 	return (void *)&blkif->blk_rings;
 }
=20
+void *get_back_ring_v2(struct xen_blkif *blkif)
+{
+	return (void *)&blkif->blk_rings_v2;
+}
+
+
 void copy_blkif_req(struct xen_blkif *blkif, RING_IDX rc)
 {
 	struct blkif_request *req =3D (struct blkif_request *)blkif->req;=20
@@ -506,12 +518,48 @@ void copy_blkif_req(struct xen_blkif *blkif, RING_IDX=
 rc)
 	}
 }
=20
+void copy_blkif_req_v2(struct xen_blkif *blkif, RING_IDX rc)
+{
+	struct blkif_request_header *req =3D (struct blkif_request_header *)blkif=
->req;=20
+	union blkif_back_rings_v2 *blk_rings =3D &blkif->blk_rings_v2;
+	switch (blkif->blk_protocol) {
+	case BLKIF_PROTOCOL_NATIVE:
+		memcpy(req, RING_GET_REQUEST(&blk_rings->native, rc),
+			sizeof(struct blkif_request_header));
+		break;
+	case BLKIF_PROTOCOL_X86_32:
+		blkif_get_x86_32_req_v2(req, RING_GET_REQUEST(&blk_rings->x86_32, rc));
+		break;
+	case BLKIF_PROTOCOL_X86_64:
+		blkif_get_x86_64_req_v2(req, RING_GET_REQUEST(&blk_rings->x86_64, rc));
+		break;
+	default:
+		BUG();
+	}
+}
+
 void copy_blkif_seg_req(struct xen_blkif *blkif)
 {
 	struct blkif_request *req = (struct blkif_request *)blkif->req;
 
 	blkif->seg_req = req->u.rw.seg;
 }
+
+void copy_blkif_seg_req_v2(struct xen_blkif *blkif)
+{
+	struct blkif_request_header *req = (struct blkif_request_header *)blkif->req;
+	struct blkif_segment_back_ring *blk_segrings = &blkif->blk_segrings;
+	int i;
+	RING_IDX rc;
+
+	rc = blk_segrings->req_cons;
+	for (i = 0; i < req->u.rw.nr_segments; i++) {
+		memcpy(&blkif->seg_req[i], RING_GET_REQUEST(blk_segrings, rc++),
+			sizeof(struct blkif_request_segment));
+	}
+	blk_segrings->req_cons = rc;
+}
+
 /*
  * Function to copy the from the ring buffer the 'struct blkif_request'
  * (which has the sectors we want, number of them, grant references, etc),
@@ -587,10 +635,12 @@ do_block_io_op(struct xen_blkif *blkif)
 
 	return more_to_do;
 }
+
 /*
  * Transmutation of the 'struct blkif_request' to a proper 'struct bio'
  * and call the 'submit_bio' to pass it to the underlying storage.
  */
+
 static int dispatch_rw_block_io(struct xen_blkif *blkif,
 				struct pending_req *pending_req)
 {
@@ -774,54 +824,89 @@ static int dispatch_rw_block_io(struct xen_blkif *blkif,
 	return -EIO;
 }
 
-struct blkif_segment_back_ring *
-	get_seg_back_ring(struct xen_blkif *blkif)
+void push_back_ring_rsp(struct xen_blkif *blkif, int nr_page, int *notify)
 {
-	return NULL;
+	union blkif_back_rings *blk_rings = &blkif->blk_rings;
+
+	blk_rings->common.rsp_prod_pvt++;
+	RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&blk_rings->common, *notify);
 }
 
-void push_back_ring_rsp(union blkif_back_rings *blk_rings, int nr_page, int *notify)
+void push_back_ring_rsp_v2(struct xen_blkif *blkif, int nr_page, int *notify)
 {
+	union blkif_back_rings_v2 *blk_rings = &blkif->blk_rings_v2;
+	struct blkif_segment_back_ring *blk_segrings = &blkif->blk_segrings;
+
 	blk_rings->common.rsp_prod_pvt++;
+	blk_segrings->rsp_prod_pvt += nr_page;
+	RING_PUSH_RESPONSES(blk_segrings);
 	RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&blk_rings->common, *notify);
 }
 
-/*
- * Put a response on the ring on how the operation fared.
- */
-static void make_response(struct xen_blkif *blkif, u64 id,
-			  unsigned short op, int nr_page, int st)
+void copy_response(struct xen_blkif *blkif, struct blkif_response *resp)
 {
-	struct blkif_response  resp;
-	unsigned long     flags;
-	union blkif_back_rings *blk_rings =
-		(union blkif_back_rings *)blkif->ops->get_back_ring(blkif);
-	int notify;
+	union blkif_back_rings *blk_rings = &blkif->blk_rings;
+
+	switch (blkif->blk_protocol) {
+	case BLKIF_PROTOCOL_NATIVE:
+		memcpy(RING_GET_RESPONSE(&blk_rings->native, blk_rings->native.rsp_prod_pvt),
+		       resp, sizeof(*resp));
+		break;
+	case BLKIF_PROTOCOL_X86_32:
+		memcpy(RING_GET_RESPONSE(&blk_rings->x86_32, blk_rings->x86_32.rsp_prod_pvt),
+		       resp, sizeof(*resp));
+		break;
+	case BLKIF_PROTOCOL_X86_64:
+		memcpy(RING_GET_RESPONSE(&blk_rings->x86_64, blk_rings->x86_64.rsp_prod_pvt),
+		       resp, sizeof(*resp));
+		break;
+	default:
+		BUG();
+	}
 
-	resp.id        = id;
-	resp.operation = op;
-	resp.status    = st;
+}
 
-	spin_lock_irqsave(&blkif->blk_ring_lock, flags);
-	/* Place on the response ring for the relevant domain. */
+void copy_response_v2(struct xen_blkif *blkif, struct blkif_response *resp)
+{
+	union blkif_back_rings_v2 *blk_rings = &blkif->blk_rings_v2;
+
 	switch (blkif->blk_protocol) {
 	case BLKIF_PROTOCOL_NATIVE:
 		memcpy(RING_GET_RESPONSE(&blk_rings->native, blk_rings->native.rsp_prod_pvt),
-		       &resp, sizeof(resp));
+		       resp, sizeof(*resp));
 		break;
 	case BLKIF_PROTOCOL_X86_32:
 		memcpy(RING_GET_RESPONSE(&blk_rings->x86_32, blk_rings->x86_32.rsp_prod_pvt),
-		       &resp, sizeof(resp));
+		       resp, sizeof(*resp));
 		break;
 	case BLKIF_PROTOCOL_X86_64:
 		memcpy(RING_GET_RESPONSE(&blk_rings->x86_64, blk_rings->x86_64.rsp_prod_pvt),
-		       &resp, sizeof(resp));
+		       resp, sizeof(*resp));
 		break;
 	default:
 		BUG();
 	}
+}
 
-	blkif->ops->push_back_ring_rsp(blk_rings, nr_page, &notify);
+/*
+ * Put a response on the ring on how the operation fared.
+ */
+static void make_response(struct xen_blkif *blkif, u64 id,
+			  unsigned short op, int nr_page, int st)
+{
+	struct blkif_response  resp;
+	unsigned long     flags;
+	int notify;
+
+	resp.id        = id;
+	resp.operation = op;
+	resp.status    = st;
+
+	spin_lock_irqsave(&blkif->blk_ring_lock, flags);
+	/* Place on the response ring for the relevant domain. */
+	blkif->ops->copy_response(blkif, &resp);
+
+	blkif->ops->push_back_ring_rsp(blkif, nr_page, &notify);
 
 	spin_unlock_irqrestore(&blkif->blk_ring_lock, flags);
 	if (notify)
@@ -895,9 +980,19 @@ struct blkback_ring_operation blkback_ring_ops = {
 	.copy_blkif_req = copy_blkif_req,
 	.copy_blkif_seg_req = copy_blkif_seg_req,
 	.push_back_ring_rsp = push_back_ring_rsp,
+	.copy_response = copy_response,
 	.max_seg = BLKIF_MAX_SEGMENTS_PER_REQUEST,
 };
 
+struct blkback_ring_operation blkback_ring_ops_v2 = {
+	.get_back_ring = get_back_ring_v2,
+	.copy_blkif_req = copy_blkif_req_v2,
+	.copy_blkif_seg_req = copy_blkif_seg_req_v2,
+	.push_back_ring_rsp = push_back_ring_rsp_v2,
+	.copy_response = copy_response_v2,
+	.max_seg = BLKIF_MAX_SEGMENTS_PER_REQUEST_V2,
+};
+
 static int __init xen_blkif_init(void)
 {
 	int rc = 0;
diff --git a/drivers/block/xen-blkback/common.h b/drivers/block/xen-blkback/common.h
index 80e8acc..2e241a4 100644
--- a/drivers/block/xen-blkback/common.h
+++ b/drivers/block/xen-blkback/common.h
@@ -48,6 +48,7 @@
 	pr_debug(DRV_PFX "(%s:%d) " fmt ".\n",		\
 		 __func__, __LINE__, ##args)
 
+extern int blkback_ring_type;
 
 /* Not a real protocol.  Used to generate ring structs which contain
  * the elements common to all protocols only.  This way we get a
@@ -84,6 +85,22 @@ struct blkif_x86_32_request {
 	} u;
 } __attribute__((__packed__));
 
+struct blkif_x86_32_request_rw_v2 {
+	uint8_t        nr_segments;  /* number of segments                   */
+	blkif_vdev_t   handle;       /* only for read/write requests         */
+	uint64_t       id;           /* private guest value, echoed in resp  */
+	blkif_sector_t sector_number;/* start sector idx on disk (r/w only)  */
+	uint64_t       seg_id;       /* segment offset in the segment ring  */
+} __attribute__((__packed__));
+
+struct blkif_x86_32_request_v2 {
+	uint8_t        operation;    /* BLKIF_OP_???                         */
+	union {
+		struct blkif_x86_32_request_rw_v2 rw;
+		struct blkif_x86_32_request_discard discard;
+	} u;
+} __attribute__((__packed__));
+
 /* i386 protocol version */
 #pragma pack(push, 4)
 struct blkif_x86_32_response {
@@ -120,6 +137,23 @@ struct blkif_x86_64_request {
 	} u;
 } __attribute__((__packed__));
 
+struct blkif_x86_64_request_rw_v2 {
+	uint8_t        nr_segments;  /* number of segments                   */
+	blkif_vdev_t   handle;       /* only for read/write requests         */
+	uint32_t       _pad1;        /* offsetof(blkif_request..,u.rw.id)==8  */
+	uint64_t       id;
+	blkif_sector_t sector_number;/* start sector idx on disk (r/w only)  */
+	uint64_t       seg_id;       /* segment offset in the segment ring  */
+} __attribute__((__packed__));
+
+struct blkif_x86_64_request_v2 {
+	uint8_t        operation;    /* BLKIF_OP_???                         */
+	union {
+		struct blkif_x86_64_request_rw_v2 rw;
+		struct blkif_x86_64_request_discard discard;
+	} u;
+
 
 struct blkif_x86_64_response {
 	uint64_t       __attribute__((__aligned__(8))) id;
 	uint8_t         operation;       /* copied from request */
@@ -132,6 +166,10 @@ DEFINE_RING_TYPES(blkif_x86_32, struct blkif_x86_32_request,
 		  struct blkif_x86_32_response);
 DEFINE_RING_TYPES(blkif_x86_64, struct blkif_x86_64_request,
 		  struct blkif_x86_64_response);
+DEFINE_RING_TYPES(blkif_x86_32_v2, struct blkif_x86_32_request_v2,
+                  struct blkif_x86_32_response);
+DEFINE_RING_TYPES(blkif_x86_64_v2, struct blkif_x86_64_request_v2,
+                  struct blkif_x86_64_response);
 
 union blkif_back_rings {
 	struct blkif_back_ring        native;
@@ -140,6 +178,13 @@ union blkif_back_rings {
 	struct blkif_x86_64_back_ring x86_64;
 };
 
+union blkif_back_rings_v2 {
+	struct blkif_request_back_ring        native;
+	struct blkif_common_back_ring         common;
+	struct blkif_x86_32_v2_back_ring      x86_32;
+	struct blkif_x86_64_v2_back_ring      x86_64;
+};
+
 enum blkif_protocol {
 	BLKIF_PROTOCOL_NATIVE = 1,
 	BLKIF_PROTOCOL_X86_32 = 2,
@@ -175,7 +220,8 @@ struct blkback_ring_operation {
 	void *(*get_back_ring) (struct xen_blkif *blkif);
 	void (*copy_blkif_req) (struct xen_blkif *blkif, RING_IDX rc);
 	void (*copy_blkif_seg_req) (struct xen_blkif *blkif);
-	void (*push_back_ring_rsp) (union blkif_back_rings *blk_rings, int nr_page, int *notify);
+	void (*push_back_ring_rsp) (struct xen_blkif *blkif, int nr_page, int *notify);
+	void (*copy_response) (struct xen_blkif *blkif, struct blkif_response *resp);
 	unsigned int max_seg;
 };
=20
@@ -190,7 +236,10 @@ struct xen_blkif {
 	enum blkif_protocol	blk_protocol;
 	enum blkif_backring_type blk_backring_type;
 	union blkif_back_rings	blk_rings;
+	union blkif_back_rings_v2       blk_rings_v2;
+	struct blkif_segment_back_ring	blk_segrings;
 	void			*blk_ring;
+	void                    *blk_segring;
 	struct xen_blkbk        *blkbk;
 	/* The VBD attached to this interface. */
 	struct xen_vbd		vbd;
@@ -328,6 +377,31 @@ static inline void blkif_get_x86_32_req(struct blkif_request *dst,
 	}
 }
 
+static inline void blkif_get_x86_32_req_v2(struct blkif_request_header *dst,
+					struct blkif_x86_32_request_v2 *src)
+{
+	dst->operation = src->operation;
+	switch (src->operation) {
+	case BLKIF_OP_READ:
+	case BLKIF_OP_WRITE:
+	case BLKIF_OP_WRITE_BARRIER:
+	case BLKIF_OP_FLUSH_DISKCACHE:
+		dst->u.rw.nr_segments = src->u.rw.nr_segments;
+		dst->u.rw.handle = src->u.rw.handle;
+		dst->u.rw.id = src->u.rw.id;
+		dst->u.rw.sector_number = src->u.rw.sector_number;
+		dst->u.rw.seg_id = src->u.rw.seg_id;
+		barrier();
+		break;
+	case BLKIF_OP_DISCARD:
+		dst->u.discard.flag = src->u.discard.flag;
+		dst->u.discard.sector_number = src->u.discard.sector_number;
+		dst->u.discard.nr_sectors = src->u.discard.nr_sectors;
+		break;
+	default:
+		break;
+	}
+}
 static inline void blkif_get_x86_64_req(struct blkif_request *dst,
 					struct blkif_x86_64_request *src)
 {
@@ -359,4 +433,30 @@ static inline void blkif_get_x86_64_req(struct blkif_request *dst,
 	}
 }
 
+static inline void blkif_get_x86_64_req_v2(struct blkif_request_header *dst,
+					struct blkif_x86_64_request_v2 *src)
+{
+	dst->operation = src->operation;
+	switch (src->operation) {
+	case BLKIF_OP_READ:
+	case BLKIF_OP_WRITE:
+	case BLKIF_OP_WRITE_BARRIER:
+	case BLKIF_OP_FLUSH_DISKCACHE:
+		dst->u.rw.nr_segments = src->u.rw.nr_segments;
+		dst->u.rw.handle = src->u.rw.handle;
+		dst->u.rw.id = src->u.rw.id;
+		dst->u.rw.sector_number = src->u.rw.sector_number;
+		dst->u.rw.seg_id = src->u.rw.seg_id;
+		barrier();
+		break;
+	case BLKIF_OP_DISCARD:
+		dst->u.discard.flag = src->u.discard.flag;
+		dst->u.discard.sector_number = src->u.discard.sector_number;
+		dst->u.discard.nr_sectors = src->u.discard.nr_sectors;
+		break;
+	default:
+		break;
+	}
+}
+
 #endif /* __XEN_BLKIF__BACKEND__COMMON_H__ */
diff --git a/drivers/block/xen-blkback/xenbus.c b/drivers/block/xen-blkback/xenbus.c
index 8b0d496..4678533 100644
--- a/drivers/block/xen-blkback/xenbus.c
+++ b/drivers/block/xen-blkback/xenbus.c
@@ -36,7 +36,7 @@ static int connect_ring(struct backend_info *);
 static void backend_changed(struct xenbus_watch *, const char **,
 			    unsigned int);
 
-extern struct blkback_ring_operation blkback_ring_ops;
+extern struct blkback_ring_operation blkback_ring_ops, blkback_ring_ops_v2;
 
 struct xenbus_device *xen_blkbk_xenbus(struct backend_info *be)
 {
@@ -176,6 +176,83 @@ static int xen_blkif_map(struct xen_blkif *blkif, unsigned long shared_page,
 	return 0;
 }
 
+static int
+xen_blkif_map_segring(struct xen_blkif *blkif, unsigned long shared_page)
+{
+	struct blkif_segment_sring *sring;
+	int err;
+
+	err = xenbus_map_ring_valloc(blkif->be->dev, shared_page,
+				     &blkif->blk_segring);
+
+	if (err < 0)
+		return err;
+
+	sring = (struct blkif_segment_sring *)blkif->blk_segring;
+	BACK_RING_INIT(&blkif->blk_segrings, sring, PAGE_SIZE);
+
+	return 0;
+}
+
+static int xen_blkif_map_v2(struct xen_blkif *blkif, unsigned long shared_page,
+                         unsigned int evtchn)
+{
+	int err;
+
+	/* Already connected through? */
+	if (blkif->irq)
+		return 0;
+
+	err = xenbus_map_ring_valloc(blkif->be->dev, shared_page,
+				     &blkif->blk_ring);
+
+	if (err < 0)
+		return err;
+
+	switch (blkif->blk_protocol) {
+	case BLKIF_PROTOCOL_NATIVE:
+	{
+		struct blkif_request_sring *sring;
+		sring = (struct blkif_request_sring *)blkif->blk_ring;
+		BACK_RING_INIT(&blkif->blk_rings_v2.native, sring,
+			       PAGE_SIZE);
+		break;
+	}
+	case BLKIF_PROTOCOL_X86_32:
+	{
+		struct blkif_x86_32_v2_sring *sring_x86_32;
+		sring_x86_32 = (struct blkif_x86_32_v2_sring *)blkif->blk_ring;
+		BACK_RING_INIT(&blkif->blk_rings_v2.x86_32, sring_x86_32,
+			       PAGE_SIZE);
+		break;
+	}
+	case BLKIF_PROTOCOL_X86_64:
+	{
+		struct blkif_x86_64_v2_sring *sring_x86_64;
+		sring_x86_64 = (struct blkif_x86_64_v2_sring *)blkif->blk_ring;
+		BACK_RING_INIT(&blkif->blk_rings_v2.x86_64, sring_x86_64,
+			       PAGE_SIZE);
+		break;
+	}
+	default:
+		BUG();
+	}
+
+
+
+	err = bind_interdomain_evtchn_to_irqhandler(blkif->domid, evtchn,
+						    xen_blkif_be_int, 0,
+						    "blkif-backend", blkif);
+	if (err < 0) {
+		xenbus_unmap_ring_vfree(blkif->be->dev, blkif->blk_ring);
+		blkif->blk_rings_v2.common.sring = NULL;
+		return err;
+	}
+	blkif->irq = err;
+
+	return 0;
+}
+
 static void xen_blkif_disconnect(struct xen_blkif *blkif)
 {
 	if (blkif->xenblkd) {
@@ -192,10 +269,18 @@ static void xen_blkif_disconnect(struct xen_blkif *blkif)
 		blkif->irq = 0;
 	}
 
-	if (blkif->blk_rings.common.sring) {
+	if (blkif->blk_backring_type == BACKRING_TYPE_1 &&
+	    blkif->blk_rings.common.sring) {
 		xenbus_unmap_ring_vfree(blkif->be->dev, blkif->blk_ring);
 		blkif->blk_rings.common.sring = NULL;
 	}
+	if (blkif->blk_backring_type == BACKRING_TYPE_2 &&
+	    blkif->blk_rings_v2.common.sring) {
+		xenbus_unmap_ring_vfree(blkif->be->dev, blkif->blk_ring);
+		blkif->blk_rings_v2.common.sring = NULL;
+		xenbus_unmap_ring_vfree(blkif->be->dev, blkif->blk_segring);
+		blkif->blk_segrings.sring = NULL;
+	}
 }
=20
 void xen_blkif_free(struct xen_blkif *blkif)
@@ -476,6 +561,9 @@ static int xen_blkbk_probe(struct xenbus_device *dev,
 	if (err)
 		goto fail;
 
+	err = xenbus_printf(XBT_NIL, dev->nodename, "blkback-ring-type",
+			    "%u", blkback_ring_type);
+
 	err = xenbus_switch_state(dev, XenbusStateInitWait);
 	if (err)
 		goto fail;
@@ -722,25 +810,68 @@ static int connect_ring(struct backend_info *be)
 {
 	struct xenbus_device *dev =3D be->dev;
 	unsigned long ring_ref;
+	unsigned long segring_ref;
 	unsigned int evtchn;
+	unsigned int ring_type;
 	char protocol[64] = "";
 	int err;
 
 	DPRINTK("%s", dev->otherend);
-	be->blkif->ops = &blkback_ring_ops;
-	be->blkif->req = kmalloc(sizeof(struct blkif_request),
-				 GFP_KERNEL);
-	be->blkif->seg_req = kmalloc(sizeof(struct blkif_request_segment)*
-				     be->blkif->ops->max_seg,  GFP_KERNEL);
-	be->blkif->blk_backring_type = BACKRING_TYPE_1;
-
-	err = xenbus_gather(XBT_NIL, dev->otherend, "ring-ref", "%lu",
-			    &ring_ref, "event-channel", "%u", &evtchn, NULL);
-	if (err) {
+
+	err = xenbus_scanf(XBT_NIL, dev->otherend, "blkfront-ring-type", "%u",
+			   &ring_type);
+	if (err != 1) {
+		pr_info(DRV_PFX "using legacy blk ring\n");
+		ring_type = 1;
+	}
+
+	if (ring_type == 1) {
+		be->blkif->ops = &blkback_ring_ops;
+		be->blkif->blk_backring_type = BACKRING_TYPE_1;
+		be->blkif->req = kmalloc(sizeof(struct blkif_request), GFP_KERNEL);
+		be->blkif->seg_req = kmalloc(sizeof(struct blkif_request_segment)*
+					     be->blkif->ops->max_seg, GFP_KERNEL);
+		if (!be->blkif->req || !be->blkif->seg_req) {
+			kfree(be->blkif->req);
+			kfree(be->blkif->seg_req);
+			xenbus_dev_fatal(dev, -ENOMEM, "not enough memory");
+			return -ENOMEM;
+		}
+		err = xenbus_gather(XBT_NIL, dev->otherend, "ring-ref", "%lu",
+				    &ring_ref, "event-channel", "%u", &evtchn, NULL);
+		if (err) {
+			xenbus_dev_fatal(dev, err,
+					 "reading %s/ring-ref and event-channel",
+					 dev->otherend);
+			return err;
+		}
+	}
+	else if (ring_type == 2) {
+		be->blkif->ops = &blkback_ring_ops_v2;
+		be->blkif->blk_backring_type = BACKRING_TYPE_2;
+		be->blkif->req = kmalloc(sizeof(struct blkif_request_header), GFP_KERNEL);
+		be->blkif->seg_req = kmalloc(sizeof(struct blkif_request_segment)*
+					     be->blkif->ops->max_seg, GFP_KERNEL);
+		if (!be->blkif->req || !be->blkif->seg_req) {
+			kfree(be->blkif->req);
+			kfree(be->blkif->seg_req);
+			xenbus_dev_fatal(dev, -ENOMEM, "not enough memory");
+			return -ENOMEM;
+		}
+		err = xenbus_gather(XBT_NIL, dev->otherend, "reqring-ref", "%lu",
+				    &ring_ref, "event-channel", "%u", &evtchn,
+				    "segring-ref", "%lu", &segring_ref, NULL);
+		if (err) {
+			xenbus_dev_fatal(dev, err,
+					 "reading %s/reqring-ref, segring-ref and event-channel",
+					 dev->otherend);
+			return err;
+		}
+	}
+	else {
 		xenbus_dev_fatal(dev, err,
-				 "reading %s/ring-ref and event-channel",
-				 dev->otherend);
-		return err;
+				 "unsupported %s blkfront ring", dev->otherend);
+		return -EINVAL;
 	}
 
 	be->blkif->blk_protocol = BLKIF_PROTOCOL_NATIVE;
@@ -758,19 +889,51 @@ static int connect_ring(struct backend_info *be)
 		xenbus_dev_fatal(dev, err, "unknown fe protocol %s", protocol);
 		return -1;
 	}
-	pr_info(DRV_PFX "ring-ref %ld, event-channel %d, protocol %d (%s)\n",
-		ring_ref, evtchn, be->blkif->blk_protocol, protocol);
-
 	/* Map the shared frame, irq etc. */
-	err = xen_blkif_map(be->blkif, ring_ref, evtchn);
-	if (err) {
-		xenbus_dev_fatal(dev, err, "mapping ring-ref %lu port %u",
-				 ring_ref, evtchn);
-		return err;
+	if (ring_type == 2) {
+		err = xen_blkif_map_segring(be->blkif, segring_ref);
+		if (err) {
+			xenbus_dev_fatal(dev, err, "mapping segment rings");
+			return err;
+		}
+		err = xen_blkif_map_v2(be->blkif, ring_ref, evtchn);
+		if (err) {
+			xenbus_unmap_ring_vfree(be->blkif->be->dev,
+						be->blkif->blk_segring);
+			be->blkif->blk_segrings.sring = NULL;
+			xenbus_dev_fatal(dev, err, "mapping request rings");
+			return err;
+		}
+		pr_info(DRV_PFX
+			"ring-ref %ld, segring-ref %ld, event-channel %d, protocol %d (%s)\n",
+			ring_ref, segring_ref, evtchn, be->blkif->blk_protocol, protocol);
+	}
+	else {
+		err = xen_blkif_map(be->blkif, ring_ref, evtchn);
+		if (err) {
+			xenbus_dev_fatal(dev, err, "mapping ring-ref %lu port %u",
+					 ring_ref, evtchn);
+			return err;
+		}
+		pr_info(DRV_PFX "ring-ref %ld, event-channel %d, protocol %d (%s)\n",
+			ring_ref, evtchn, be->blkif->blk_protocol, protocol);
 	}
 
 	err = xen_blkif_init_blkbk(be->blkif);
 	if (err) {
+		if (ring_type == 2) {
+			xenbus_unmap_ring_vfree(be->blkif->be->dev,
+						be->blkif->blk_segring);
+			be->blkif->blk_segrings.sring = NULL;
+			xenbus_unmap_ring_vfree(be->blkif->be->dev,
+						be->blkif->blk_ring);
+			be->blkif->blk_rings_v2.common.sring = NULL;
+		}
+		else {
+			xenbus_unmap_ring_vfree(be->blkif->be->dev,
+						be->blkif->blk_ring);
+			be->blkif->blk_rings.common.sring = NULL;
+		}
 		xenbus_dev_fatal(dev, err, "xen blkif init blkbk fails\n");
 		return err;
 	}
-ronghui
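For readers following the patch, the layout it introduces can be summarized: a v2 request carries only a fixed-size header plus a `seg_id` that locates its segment descriptors on a second, separate ring, and the backend (in `copy_blkif_seg_req_v2`) copies `nr_segments` descriptors off that ring while advancing its consumer index. A minimal userspace sketch of that consumption step follows; the types and the `consume_segments` helper are illustrative simplifications, not the kernel's definitions.

```c
#include <stdint.h>

/* Illustrative model of the v2 split-ring layout: requests are small
 * fixed-size headers, and per-request segment descriptors live on a
 * separate power-of-two-sized ring, as in the Xen shared-ring macros. */

#define SEGRING_SIZE 64		/* entries; must be a power of two */

struct seg_desc {
	uint32_t gref;		/* grant reference of the data page */
	uint8_t  first_sect;
	uint8_t  last_sect;
};

struct req_header_v2 {
	uint8_t  operation;
	uint8_t  nr_segments;
	uint64_t id;		/* private guest value, echoed in the response */
	uint64_t seg_id;	/* offset of the request's segments on the segment ring */
};

/* Mirrors what copy_blkif_seg_req_v2() does in the patch: copy
 * nr_segments descriptors off the segment ring starting at the current
 * consumer index, wrap modulo the ring size, and return the advanced
 * consumer index. */
static unsigned int consume_segments(const struct seg_desc ring[SEGRING_SIZE],
				     unsigned int req_cons,
				     const struct req_header_v2 *req,
				     struct seg_desc *out)
{
	unsigned int i;

	for (i = 0; i < req->nr_segments; i++)
		out[i] = ring[(req_cons++) & (SEGRING_SIZE - 1)];
	return req_cons;	/* new consumer index */
}
```

This is why the response path bumps `rsp_prod_pvt` on the segment ring by `nr_page` in `push_back_ring_rsp_v2`: segment entries are produced and consumed in bulk per request, not one per response.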



dHJ1Y3QgYmxraWZfeDg2XzY0X3JlcXVlc3RfcndfdjIgew0KKwl1aW50OF90ICAgICAgICBucl9z
ZWdtZW50czsgIC8qIG51bWJlciBvZiBzZWdtZW50cyAgICAgICAgICAgICAgICAgICAqLw0KKwli
bGtpZl92ZGV2X3QgICBoYW5kbGU7ICAgICAgIC8qIG9ubHkgZm9yIHJlYWQvd3JpdGUgcmVxdWVz
dHMgICAgICAgICAqLw0KKwl1aW50MzJfdCAgICAgICBfcGFkMTsgICAgICAgIC8qIG9mZnNldG9m
KGJsa2lmX3JlcWVzdC4uLHUucncuaWQpPT04ICAqLw0KKwl1aW50NjRfdCAgICAgICBpZDsNCisJ
YmxraWZfc2VjdG9yX3Qgc2VjdG9yX251bWJlcjsvKiBzdGFydCBzZWN0b3IgaWR4IG9uIGRpc2sg
KHIvdyBvbmx5KSAgKi8NCisJdWludDY0X3QgICAgICAgc2VnX2lkOy8qIHNlZ21lbnQgb2Zmc2V0
IGluIHRoZSBzZWdtZW50IHJpbmcgICAqLw0KK30gX19hdHRyaWJ1dGVfXygoX19wYWNrZWRfXykp
Ow0KKw0KK3N0cnVjdCBibGtpZl94ODZfNjRfcmVxdWVzdF92MiB7DQorCXVpbnQ4X3QgICAgICAg
IG9wZXJhdGlvbjsgICAgLyogQkxLSUZfT1BfPz8/ICAgICAgICAgICAgICAgICAgICAgICAgICov
DQorCXVuaW9uIHsNCisJCXN0cnVjdCBibGtpZl94ODZfNjRfcmVxdWVzdF9yd192MiBydzsNCisJ
CXN0cnVjdCBibGtpZl94ODZfNjRfcmVxdWVzdF9kaXNjYXJkIGRpc2NhcmQ7DQorCX0gdTsNCit9
IF9fYXR0cmlidXRlX18oKF9fcGFja2VkX18pKTsNCisNCiBzdHJ1Y3QgYmxraWZfeDg2XzY0X3Jl
c3BvbnNlIHsNCiAJdWludDY0X3QgICAgICAgX19hdHRyaWJ1dGVfXygoX19hbGlnbmVkX18oOCkp
KSBpZDsNCiAJdWludDhfdCAgICAgICAgIG9wZXJhdGlvbjsgICAgICAgLyogY29waWVkIGZyb20g
cmVxdWVzdCAqLw0KQEAgLTEzMiw2ICsxNjYsMTAgQEAgREVGSU5FX1JJTkdfVFlQRVMoYmxraWZf
eDg2XzMyLCBzdHJ1Y3QgYmxraWZfeDg2XzMyX3JlcXVlc3QsDQogCQkgIHN0cnVjdCBibGtpZl94
ODZfMzJfcmVzcG9uc2UpOw0KIERFRklORV9SSU5HX1RZUEVTKGJsa2lmX3g4Nl82NCwgc3RydWN0
IGJsa2lmX3g4Nl82NF9yZXF1ZXN0LA0KIAkJICBzdHJ1Y3QgYmxraWZfeDg2XzY0X3Jlc3BvbnNl
KTsNCitERUZJTkVfUklOR19UWVBFUyhibGtpZl94ODZfMzJfdjIsIHN0cnVjdCBibGtpZl94ODZf
MzJfcmVxdWVzdF92MiwNCisgICAgICAgICAgICAgICAgICBzdHJ1Y3QgYmxraWZfeDg2XzMyX3Jl
c3BvbnNlKTsNCitERUZJTkVfUklOR19UWVBFUyhibGtpZl94ODZfNjRfdjIsIHN0cnVjdCBibGtp
Zl94ODZfNjRfcmVxdWVzdF92MiwNCisgICAgICAgICAgICAgICAgICBzdHJ1Y3QgYmxraWZfeDg2
XzY0X3Jlc3BvbnNlKTsNCiANCiB1bmlvbiBibGtpZl9iYWNrX3JpbmdzIHsNCiAJc3RydWN0IGJs
a2lmX2JhY2tfcmluZyAgICAgICAgbmF0aXZlOw0KQEAgLTE0MCw2ICsxNzgsMTMgQEAgdW5pb24g
YmxraWZfYmFja19yaW5ncyB7DQogCXN0cnVjdCBibGtpZl94ODZfNjRfYmFja19yaW5nIHg4Nl82
NDsNCiB9Ow0KIA0KK3VuaW9uIGJsa2lmX2JhY2tfcmluZ3NfdjIgew0KKyAgICAgICAgc3RydWN0
IGJsa2lmX3JlcXVlc3RfYmFja19yaW5nICAgICAgICBuYXRpdmU7DQorICAgICAgICBzdHJ1Y3Qg
YmxraWZfY29tbW9uX2JhY2tfcmluZyAJICAgICAgY29tbW9uOw0KKyAgICAgICAgc3RydWN0IGJs
a2lmX3g4Nl8zMl92Ml9iYWNrX3JpbmcgICAgICB4ODZfMzI7DQorICAgICAgICBzdHJ1Y3QgYmxr
aWZfeDg2XzY0X3YyX2JhY2tfcmluZyAgICAgIHg4Nl82NDsNCit9Ow0KKw0KIGVudW0gYmxraWZf
cHJvdG9jb2wgew0KIAlCTEtJRl9QUk9UT0NPTF9OQVRJVkUgPSAxLA0KIAlCTEtJRl9QUk9UT0NP
TF9YODZfMzIgPSAyLA0KQEAgLTE3NSw3ICsyMjAsOCBAQCBzdHJ1Y3QgYmxrYmFja19yaW5nX29w
ZXJhdGlvbiB7DQogCXZvaWQgKigqZ2V0X2JhY2tfcmluZykgKHN0cnVjdCB4ZW5fYmxraWYgKmJs
a2lmKTsNCiAJdm9pZCAoKmNvcHlfYmxraWZfcmVxKSAoc3RydWN0IHhlbl9ibGtpZiAqYmxraWYs
IFJJTkdfSURYIHJjKTsNCiAJdm9pZCAoKmNvcHlfYmxraWZfc2VnX3JlcSkgKHN0cnVjdCB4ZW5f
YmxraWYgKmJsa2lmKTsNCi0Jdm9pZCAoKnB1c2hfYmFja19yaW5nX3JzcCkgKHVuaW9uIGJsa2lm
X2JhY2tfcmluZ3MgKmJsa19yaW5ncywgaW50IG5yX3BhZ2UsIGludCAqbm90aWZ5KTsNCisJdm9p
ZCAoKnB1c2hfYmFja19yaW5nX3JzcCkgKHN0cnVjdCB4ZW5fYmxraWYgKmJsa2lmLCBpbnQgbnJf
cGFnZSwgaW50ICpub3RpZnkpOw0KKwl2b2lkICgqY29weV9yZXNwb25zZSkgKHN0cnVjdCB4ZW5f
YmxraWYgKmJsa2lmLCBzdHJ1Y3QgYmxraWZfcmVzcG9uc2UgKnJlc3ApOw0KIAl1bnNpZ25lZCBp
bnQgbWF4X3NlZzsNCiB9Ow0KIA0KQEAgLTE5MCw3ICsyMzYsMTAgQEAgc3RydWN0IHhlbl9ibGtp
ZiB7DQogCWVudW0gYmxraWZfcHJvdG9jb2wJYmxrX3Byb3RvY29sOw0KIAllbnVtIGJsa2lmX2Jh
Y2tyaW5nX3R5cGUgYmxrX2JhY2tyaW5nX3R5cGU7DQogCXVuaW9uIGJsa2lmX2JhY2tfcmluZ3MJ
YmxrX3JpbmdzOw0KKwl1bmlvbiBibGtpZl9iYWNrX3JpbmdzX3YyICAgICAgIGJsa19yaW5nc192
MjsNCisJc3RydWN0IGJsa2lmX3NlZ21lbnRfYmFja19yaW5nCWJsa19zZWdyaW5nczsNCiAJdm9p
ZAkJCSpibGtfcmluZzsNCisJdm9pZCAgICAgICAgICAgICAgICAgICAgKmJsa19zZWdyaW5nOw0K
IAlzdHJ1Y3QgeGVuX2Jsa2JrICAgICAgICAqYmxrYms7DQogCS8qIFRoZSBWQkQgYXR0YWNoZWQg
dG8gdGhpcyBpbnRlcmZhY2UuICovDQogCXN0cnVjdCB4ZW5fdmJkCQl2YmQ7DQpAQCAtMzI4LDYg
KzM3NywzMSBAQCBzdGF0aWMgaW5saW5lIHZvaWQgYmxraWZfZ2V0X3g4Nl8zMl9yZXEoc3RydWN0
IGJsa2lmX3JlcXVlc3QgKmRzdCwNCiAJfQ0KIH0NCiANCitzdGF0aWMgaW5saW5lIHZvaWQgYmxr
aWZfZ2V0X3g4Nl8zMl9yZXFfdjIoc3RydWN0IGJsa2lmX3JlcXVlc3RfaGVhZGVyICpkc3QsDQor
CQkJCQlzdHJ1Y3QgYmxraWZfeDg2XzMyX3JlcXVlc3RfdjIgKnNyYykNCit7DQorCWRzdC0+b3Bl
cmF0aW9uID0gc3JjLT5vcGVyYXRpb247DQorCXN3aXRjaCAoc3JjLT5vcGVyYXRpb24pIHsNCisJ
Y2FzZSBCTEtJRl9PUF9SRUFEOg0KKwljYXNlIEJMS0lGX09QX1dSSVRFOg0KKwljYXNlIEJMS0lG
X09QX1dSSVRFX0JBUlJJRVI6DQorCWNhc2UgQkxLSUZfT1BfRkxVU0hfRElTS0NBQ0hFOg0KKwkJ
ZHN0LT51LnJ3Lm5yX3NlZ21lbnRzID0gc3JjLT51LnJ3Lm5yX3NlZ21lbnRzOw0KKwkJZHN0LT51
LnJ3LmhhbmRsZSA9IHNyYy0+dS5ydy5oYW5kbGU7DQorCQlkc3QtPnUucncuaWQgPSBzcmMtPnUu
cncuaWQ7DQorCQlkc3QtPnUucncuc2VjdG9yX251bWJlciA9IHNyYy0+dS5ydy5zZWN0b3JfbnVt
YmVyOw0KKwkJZHN0LT51LnJ3LnNlZ19pZCA9IHNyYy0+dS5ydy5zZWdfaWQ7DQorCQliYXJyaWVy
KCk7DQorCQlicmVhazsNCisJY2FzZSBCTEtJRl9PUF9ESVNDQVJEOg0KKwkJZHN0LT51LmRpc2Nh
cmQuZmxhZyA9IHNyYy0+dS5kaXNjYXJkLmZsYWc7DQorCQlkc3QtPnUuZGlzY2FyZC5zZWN0b3Jf
bnVtYmVyID0gc3JjLT51LmRpc2NhcmQuc2VjdG9yX251bWJlcjsNCisJCWRzdC0+dS5kaXNjYXJk
Lm5yX3NlY3RvcnMgPSBzcmMtPnUuZGlzY2FyZC5ucl9zZWN0b3JzOw0KKwkJYnJlYWs7DQorCWRl
ZmF1bHQ6DQorCQlicmVhazsNCisJfQ0KK30NCiBzdGF0aWMgaW5saW5lIHZvaWQgYmxraWZfZ2V0
X3g4Nl82NF9yZXEoc3RydWN0IGJsa2lmX3JlcXVlc3QgKmRzdCwNCiAJCQkJCXN0cnVjdCBibGtp
Zl94ODZfNjRfcmVxdWVzdCAqc3JjKQ0KIHsNCkBAIC0zNTksNCArNDMzLDMwIEBAIHN0YXRpYyBp
bmxpbmUgdm9pZCBibGtpZl9nZXRfeDg2XzY0X3JlcShzdHJ1Y3QgYmxraWZfcmVxdWVzdCAqZHN0
LA0KIAl9DQogfQ0KIA0KK3N0YXRpYyBpbmxpbmUgdm9pZCBibGtpZl9nZXRfeDg2XzY0X3JlcV92
MihzdHJ1Y3QgYmxraWZfcmVxdWVzdF9oZWFkZXIgKmRzdCwNCisJCQkJCXN0cnVjdCBibGtpZl94
ODZfNjRfcmVxdWVzdF92MiAqc3JjKQ0KK3sNCisJZHN0LT5vcGVyYXRpb24gPSBzcmMtPm9wZXJh
dGlvbjsNCisJc3dpdGNoIChzcmMtPm9wZXJhdGlvbikgew0KKwljYXNlIEJMS0lGX09QX1JFQUQ6
DQorCWNhc2UgQkxLSUZfT1BfV1JJVEU6DQorCWNhc2UgQkxLSUZfT1BfV1JJVEVfQkFSUklFUjoN
CisJY2FzZSBCTEtJRl9PUF9GTFVTSF9ESVNLQ0FDSEU6DQorCQlkc3QtPnUucncubnJfc2VnbWVu
dHMgPSBzcmMtPnUucncubnJfc2VnbWVudHM7DQorCQlkc3QtPnUucncuaGFuZGxlID0gc3JjLT51
LnJ3LmhhbmRsZTsNCisJCWRzdC0+dS5ydy5pZCA9IHNyYy0+dS5ydy5pZDsNCisJCWRzdC0+dS5y
dy5zZWN0b3JfbnVtYmVyID0gc3JjLT51LnJ3LnNlY3Rvcl9udW1iZXI7DQorCQlkc3QtPnUucncu
c2VnX2lkID0gc3JjLT51LnJ3LnNlZ19pZDsNCisJCWJhcnJpZXIoKTsNCisJCWJyZWFrOw0KKwlj
YXNlIEJMS0lGX09QX0RJU0NBUkQ6DQorCQlkc3QtPnUuZGlzY2FyZC5mbGFnID0gc3JjLT51LmRp
c2NhcmQuZmxhZzsNCisJCWRzdC0+dS5kaXNjYXJkLnNlY3Rvcl9udW1iZXIgPSBzcmMtPnUuZGlz
Y2FyZC5zZWN0b3JfbnVtYmVyOw0KKwkJZHN0LT51LmRpc2NhcmQubnJfc2VjdG9ycyA9IHNyYy0+
dS5kaXNjYXJkLm5yX3NlY3RvcnM7DQorCQlicmVhazsNCisJZGVmYXVsdDoNCisJCWJyZWFrOw0K
Kwl9DQorfQ0KKw0KICNlbmRpZiAvKiBfX1hFTl9CTEtJRl9fQkFDS0VORF9fQ09NTU9OX0hfXyAq
Lw0KZGlmZiAtLWdpdCBhL2RyaXZlcnMvYmxvY2sveGVuLWJsa2JhY2sveGVuYnVzLmMgYi9kcml2
ZXJzL2Jsb2NrL3hlbi1ibGtiYWNrL3hlbmJ1cy5jDQppbmRleCA4YjBkNDk2Li40Njc4NTMzIDEw
MDY0NA0KLS0tIGEvZHJpdmVycy9ibG9jay94ZW4tYmxrYmFjay94ZW5idXMuYw0KKysrIGIvZHJp
dmVycy9ibG9jay94ZW4tYmxrYmFjay94ZW5idXMuYw0KQEAgLTM2LDcgKzM2LDcgQEAgc3RhdGlj
IGludCBjb25uZWN0X3Jpbmcoc3RydWN0IGJhY2tlbmRfaW5mbyAqKTsNCiBzdGF0aWMgdm9pZCBi
YWNrZW5kX2NoYW5nZWQoc3RydWN0IHhlbmJ1c193YXRjaCAqLCBjb25zdCBjaGFyICoqLA0KIAkJ
CSAgICB1bnNpZ25lZCBpbnQpOw0KIA0KLWV4dGVybiBzdHJ1Y3QgYmxrYmFja19yaW5nX29wZXJh
dGlvbiBibGtiYWNrX3Jpbmdfb3BzOw0KK2V4dGVybiBzdHJ1Y3QgYmxrYmFja19yaW5nX29wZXJh
dGlvbiBibGtiYWNrX3Jpbmdfb3BzLCBibGtiYWNrX3Jpbmdfb3BzX3YyOw0KIA0KIHN0cnVjdCB4
ZW5idXNfZGV2aWNlICp4ZW5fYmxrYmtfeGVuYnVzKHN0cnVjdCBiYWNrZW5kX2luZm8gKmJlKQ0K
IHsNCkBAIC0xNzYsNiArMTc2LDgzIEBAIHN0YXRpYyBpbnQgeGVuX2Jsa2lmX21hcChzdHJ1Y3Qg
eGVuX2Jsa2lmICpibGtpZiwgdW5zaWduZWQgbG9uZyBzaGFyZWRfcGFnZSwNCiAJcmV0dXJuIDA7
DQogfQ0KIA0KK3N0YXRpYyBpbnQNCit4ZW5fYmxraWZfbWFwX3NlZ3Jpbmcoc3RydWN0IHhlbl9i
bGtpZiAqYmxraWYsIHVuc2lnbmVkIGxvbmcgc2hhcmVkX3BhZ2UpDQorew0KKwlzdHJ1Y3QgYmxr
aWZfc2VnbWVudF9zcmluZyAqc3Jpbmc7DQorCWludCBlcnI7DQorDQorCWVyciA9IHhlbmJ1c19t
YXBfcmluZ192YWxsb2MoYmxraWYtPmJlLT5kZXYsIHNoYXJlZF9wYWdlLA0KKwkJCQkgICAgICZi
bGtpZi0+YmxrX3NlZ3JpbmcpOw0KKw0KKwlpZiAoZXJyIDwgMCkNCisJCXJldHVybiBlcnI7DQor
DQorCXNyaW5nID0gKHN0cnVjdCBibGtpZl9zZWdtZW50X3NyaW5nICopYmxraWYtPmJsa19zZWdy
aW5nOw0KKwlCQUNLX1JJTkdfSU5JVCgmYmxraWYtPmJsa19zZWdyaW5ncywgc3JpbmcsIFBBR0Vf
U0laRSk7DQorDQorCXJldHVybiAwOw0KK30NCisNCitzdGF0aWMgaW50IHhlbl9ibGtpZl9tYXBf
djIoc3RydWN0IHhlbl9ibGtpZiAqYmxraWYsIHVuc2lnbmVkIGxvbmcgc2hhcmVkX3BhZ2UsIA0K
KyAgICAgICAgICAgICAgICAgICAgICAgICB1bnNpZ25lZCBpbnQgZXZ0Y2huKQ0KK3sNCisJaW50
IGVycjsNCisNCisJLyogQWxyZWFkeSBjb25uZWN0ZWQgdGhyb3VnaD8gKi8NCisJaWYgKGJsa2lm
LT5pcnEpDQorCQlyZXR1cm4gMDsNCisNCisJZXJyID0geGVuYnVzX21hcF9yaW5nX3ZhbGxvYyhi
bGtpZi0+YmUtPmRldiwgc2hhcmVkX3BhZ2UsDQorCQkJCSAgICAgJmJsa2lmLT5ibGtfcmluZyk7
DQorDQorCWlmIChlcnIgPCAwKQ0KKwkJcmV0dXJuIGVycjsNCisNCisJc3dpdGNoIChibGtpZi0+
YmxrX3Byb3RvY29sKSB7DQorCWNhc2UgQkxLSUZfUFJPVE9DT0xfTkFUSVZFOg0KKwl7DQorCQlz
dHJ1Y3QgYmxraWZfcmVxdWVzdF9zcmluZyAqc3Jpbmc7DQorCQlzcmluZyA9IChzdHJ1Y3QgYmxr
aWZfcmVxdWVzdF9zcmluZyAqKWJsa2lmLT5ibGtfcmluZzsNCisJCUJBQ0tfUklOR19JTklUKCZi
bGtpZi0+YmxrX3JpbmdzX3YyLm5hdGl2ZSwgc3JpbmcsDQorCQkJICAgICAgIFBBR0VfU0laRSk7
DQorCQlicmVhazsNCisJfQ0KKwljYXNlIEJMS0lGX1BST1RPQ09MX1g4Nl8zMjoNCisJew0KKwkJ
c3RydWN0IGJsa2lmX3g4Nl8zMl92Ml9zcmluZyAqc3JpbmdfeDg2XzMyOw0KKwkJc3JpbmdfeDg2
XzMyID0gKHN0cnVjdCBibGtpZl94ODZfMzJfdjJfc3JpbmcgKilibGtpZi0+YmxrX3Jpbmc7DQor
CQlCQUNLX1JJTkdfSU5JVCgmYmxraWYtPmJsa19yaW5nc192Mi54ODZfMzIsIHNyaW5nX3g4Nl8z
MiwNCisJCQkgICAgICAgUEFHRV9TSVpFKTsNCisJCWJyZWFrOw0KKwl9DQorCWNhc2UgQkxLSUZf
UFJPVE9DT0xfWDg2XzY0Og0KKwl7DQorCQlzdHJ1Y3QgYmxraWZfeDg2XzY0X3YyX3NyaW5nICpz
cmluZ194ODZfNjQ7DQorCQlzcmluZ194ODZfNjQgPSAoc3RydWN0IGJsa2lmX3g4Nl82NF92Ml9z
cmluZyAqKWJsa2lmLT5ibGtfcmluZzsNCisJCUJBQ0tfUklOR19JTklUKCZibGtpZi0+YmxrX3Jp
bmdzX3YyLng4Nl82NCwgc3JpbmdfeDg2XzY0LA0KKwkJCSAgICAgICBQQUdFX1NJWkUpOw0KKwkJ
YnJlYWs7DQorCX0NCisJZGVmYXVsdDoNCisJCUJVRygpOw0KKwl9DQorDQorCQ0KKw0KKwllcnIg
PSBiaW5kX2ludGVyZG9tYWluX2V2dGNobl90b19pcnFoYW5kbGVyKGJsa2lmLT5kb21pZCwgZXZ0
Y2huLA0KKwkJCQkJCSAgICB4ZW5fYmxraWZfYmVfaW50LCAwLA0KKwkJCQkJCSAgICAiYmxraWYt
YmFja2VuZCIsIGJsa2lmKTsNCisJaWYgKGVyciA8IDApIHsNCisJCXhlbmJ1c191bm1hcF9yaW5n
X3ZmcmVlKGJsa2lmLT5iZS0+ZGV2LCBibGtpZi0+YmxrX3JpbmcpOw0KKwkJYmxraWYtPmJsa19y
aW5nc192Mi5jb21tb24uc3JpbmcgPSBOVUxMOw0KKwkJcmV0dXJuIGVycjsNCisJfQ0KKwlibGtp
Zi0+aXJxID0gZXJyOw0KKw0KKwlyZXR1cm4gMDsNCit9DQorDQogc3RhdGljIHZvaWQgeGVuX2Js
a2lmX2Rpc2Nvbm5lY3Qoc3RydWN0IHhlbl9ibGtpZiAqYmxraWYpDQogew0KIAlpZiAoYmxraWYt
PnhlbmJsa2QpIHsNCkBAIC0xOTIsMTAgKzI2OSwxOCBAQCBzdGF0aWMgdm9pZCB4ZW5fYmxraWZf
ZGlzY29ubmVjdChzdHJ1Y3QgeGVuX2Jsa2lmICpibGtpZikNCiAJCWJsa2lmLT5pcnEgPSAwOw0K
IAl9DQogDQotCWlmIChibGtpZi0+YmxrX3JpbmdzLmNvbW1vbi5zcmluZykgew0KKwlpZiAoYmxr
aWYtPmJsa19iYWNrcmluZ190eXBlID09IEJBQ0tSSU5HX1RZUEVfMSAmJiANCisJICAgIGJsa2lm
LT5ibGtfcmluZ3MuY29tbW9uLnNyaW5nKSB7DQogCQl4ZW5idXNfdW5tYXBfcmluZ192ZnJlZShi
bGtpZi0+YmUtPmRldiwgYmxraWYtPmJsa19yaW5nKTsNCiAJCWJsa2lmLT5ibGtfcmluZ3MuY29t
bW9uLnNyaW5nID0gTlVMTDsNCiAJfQ0KKwlpZiAoYmxraWYtPmJsa19iYWNrcmluZ190eXBlID09
IEJBQ0tSSU5HX1RZUEVfMiAmJg0KKwkgICAgYmxraWYtPmJsa19yaW5nc192Mi5jb21tb24uc3Jp
bmcpIHsNCisJCXhlbmJ1c191bm1hcF9yaW5nX3ZmcmVlKGJsa2lmLT5iZS0+ZGV2LCBibGtpZi0+
YmxrX3JpbmcpOw0KKwkJYmxraWYtPmJsa19yaW5nc192Mi5jb21tb24uc3JpbmcgPSBOVUxMOw0K
KwkJeGVuYnVzX3VubWFwX3JpbmdfdmZyZWUoYmxraWYtPmJlLT5kZXYsIGJsa2lmLT5ibGtfc2Vn
cmluZyk7DQorCQlibGtpZi0+YmxrX3NlZ3JpbmdzLnNyaW5nPSBOVUxMOw0KKwl9DQogfQ0KIA0K
IHZvaWQgeGVuX2Jsa2lmX2ZyZWUoc3RydWN0IHhlbl9ibGtpZiAqYmxraWYpDQpAQCAtNDc2LDYg
KzU2MSw5IEBAIHN0YXRpYyBpbnQgeGVuX2Jsa2JrX3Byb2JlKHN0cnVjdCB4ZW5idXNfZGV2aWNl
ICpkZXYsDQogCWlmIChlcnIpDQogCQlnb3RvIGZhaWw7DQogDQorCWVyciA9IHhlbmJ1c19wcmlu
dGYoWEJUX05JTCwgZGV2LT5ub2RlbmFtZSwgImJsa2JhY2stcmluZy10eXBlIiwNCisJCQkgICAg
IiV1IiwgYmxrYmFja19yaW5nX3R5cGUpOw0KKw0KIAllcnIgPSB4ZW5idXNfc3dpdGNoX3N0YXRl
KGRldiwgWGVuYnVzU3RhdGVJbml0V2FpdCk7DQogCWlmIChlcnIpDQogCQlnb3RvIGZhaWw7DQpA
QCAtNzIyLDI1ICs4MTAsNjggQEAgc3RhdGljIGludCBjb25uZWN0X3Jpbmcoc3RydWN0IGJhY2tl
bmRfaW5mbyAqYmUpDQogew0KIAlzdHJ1Y3QgeGVuYnVzX2RldmljZSAqZGV2ID0gYmUtPmRldjsN
CiAJdW5zaWduZWQgbG9uZyByaW5nX3JlZjsNCisJdW5zaWduZWQgbG9uZyBzZWdyaW5nX3JlZjsN
CiAJdW5zaWduZWQgaW50IGV2dGNobjsNCisJdW5zaWduZWQgaW50IHJpbmdfdHlwZTsNCiAJY2hh
ciBwcm90b2NvbFs2NF0gPSAiIjsNCiAJaW50IGVycjsNCiANCiAJRFBSSU5USygiJXMiLCBkZXYt
Pm90aGVyZW5kKTsNCi0JYmUtPmJsa2lmLT5vcHMgPSAmYmxrYmFja19yaW5nX29wczsNCi0JYmUt
PmJsa2lmLT5yZXEgPSBrbWFsbG9jKHNpemVvZihzdHJ1Y3QgYmxraWZfcmVxdWVzdCksDQotCQkJ
CSBHRlBfS0VSTkVMKTsNCi0JYmUtPmJsa2lmLT5zZWdfcmVxID0ga21hbGxvYyhzaXplb2Yoc3Ry
dWN0IGJsa2lmX3JlcXVlc3Rfc2VnbWVudCkqDQotCQkJCSAgICAgYmUtPmJsa2lmLT5vcHMtPm1h
eF9zZWcsICBHRlBfS0VSTkVMKTsNCi0JYmUtPmJsa2lmLT5ibGtfYmFja3JpbmdfdHlwZSA9IEJB
Q0tSSU5HX1RZUEVfMTsNCi0NCi0JZXJyID0geGVuYnVzX2dhdGhlcihYQlRfTklMLCBkZXYtPm90
aGVyZW5kLCAicmluZy1yZWYiLCAiJWx1IiwNCi0JCQkgICAgJnJpbmdfcmVmLCAiZXZlbnQtY2hh
bm5lbCIsICIldSIsICZldnRjaG4sIE5VTEwpOw0KLQlpZiAoZXJyKSB7DQorCQ0KKwllcnIgPSB4
ZW5idXNfc2NhbmYoWEJUX05JTCwgZGV2LT5vdGhlcmVuZCwgImJsa2Zyb250LXJpbmctdHlwZSIs
ICIldSIsDQorCQkJICAgJnJpbmdfdHlwZSk7DQorCWlmIChlcnIgIT0gMSkgew0KKwkJcHJfaW5m
byhEUlZfUEZYICJ1c2luZyBsZWdhY3kgYmxrIHJpbmdcbiIpOw0KKwkJcmluZ190eXBlID0gMTsN
CisJfQ0KKwkNCisJaWYgKHJpbmdfdHlwZSA9PSAxKSB7DQorCQliZS0+YmxraWYtPm9wcyA9ICZi
bGtiYWNrX3Jpbmdfb3BzOw0KKwkJYmUtPmJsa2lmLT5ibGtfYmFja3JpbmdfdHlwZSA9IEJBQ0tS
SU5HX1RZUEVfMTsNCisJCWJlLT5ibGtpZi0+cmVxID0ga21hbGxvYyhzaXplb2Yoc3RydWN0IGJs
a2lmX3JlcXVlc3QpLCBHRlBfS0VSTkVMKTsNCisJCWJlLT5ibGtpZi0+c2VnX3JlcSA9IGttYWxs
b2Moc2l6ZW9mKHN0cnVjdCBibGtpZl9yZXF1ZXN0X3NlZ21lbnQpKg0KKwkJCQkJICAgICBiZS0+
YmxraWYtPm9wcy0+bWF4X3NlZywgR0ZQX0tFUk5FTCk7DQorCQlpZiAoIWJlLT5ibGtpZi0+cmVx
IHx8ICFiZS0+YmxraWYtPnNlZ19yZXEpIHsNCisJCQlrZnJlZShiZS0+YmxraWYtPnJlcSk7DQor
CQkJa2ZyZWUoYmUtPmJsa2lmLT5zZWdfcmVxKTsNCisJCQl4ZW5idXNfZGV2X2ZhdGFsKGRldiwg
ZXJyLCAibm8gZW5vdWdoIG1lbW9yeSIpOw0KKwkJCXJldHVybiAtRU5PTUVNOw0KKwkJfQ0KKwkJ
ZXJyID0geGVuYnVzX2dhdGhlcihYQlRfTklMLCBkZXYtPm90aGVyZW5kLCAicmluZy1yZWYiLCAi
JWx1IiwNCisJCQkJICAgICZyaW5nX3JlZiwgImV2ZW50LWNoYW5uZWwiLCAiJXUiLCAmZXZ0Y2hu
LCBOVUxMKTsNCisJCWlmIChlcnIpIHsNCisJCQl4ZW5idXNfZGV2X2ZhdGFsKGRldiwgZXJyLA0K
KwkJCQkJICJyZWFkaW5nICVzL3JpbmctcmVmIGFuZCBldmVudC1jaGFubmVsIiwNCisJCQkJCSBk
ZXYtPm90aGVyZW5kKTsNCisJCQlyZXR1cm4gZXJyOw0KKwkJfQkNCisJfQ0KKwllbHNlIGlmIChy
aW5nX3R5cGUgPT0gMil7DQorCQliZS0+YmxraWYtPm9wcyA9ICZibGtiYWNrX3Jpbmdfb3BzX3Yy
Ow0KKwkJYmUtPmJsa2lmLT5ibGtfYmFja3JpbmdfdHlwZSA9IEJBQ0tSSU5HX1RZUEVfMjsNCisJ
CWJlLT5ibGtpZi0+cmVxID0ga21hbGxvYyhzaXplb2Yoc3RydWN0IGJsa2lmX3JlcXVlc3RfaGVh
ZGVyKSwgR0ZQX0tFUk5FTCk7DQorCQliZS0+YmxraWYtPnNlZ19yZXEgPSBrbWFsbG9jKHNpemVv
ZihzdHJ1Y3QgYmxraWZfcmVxdWVzdF9zZWdtZW50KSoNCisJCQkJCSAgICAgYmUtPmJsa2lmLT5v
cHMtPm1heF9zZWcsIEdGUF9LRVJORUwpOw0KKwkJaWYgKCFiZS0+YmxraWYtPnJlcSB8fCAhYmUt
PmJsa2lmLT5zZWdfcmVxKSB7DQorCQkJa2ZyZWUoYmUtPmJsa2lmLT5yZXEpOw0KKwkJCWtmcmVl
KGJlLT5ibGtpZi0+c2VnX3JlcSk7DQorCQkJeGVuYnVzX2Rldl9mYXRhbChkZXYsIGVyciwgIm5v
IGVub3VnaCBtZW1vcnkiKTsNCisJCQlyZXR1cm4gLUVOT01FTTsNCisJCX0NCisJCWVyciA9IHhl
bmJ1c19nYXRoZXIoWEJUX05JTCwgZGV2LT5vdGhlcmVuZCwgInJlcXJpbmctcmVmIiwgIiVsdSIs
DQorCQkJCSAgICAmcmluZ19yZWYsICJldmVudC1jaGFubmVsIiwgIiV1IiwgJmV2dGNobiwNCisJ
CQkJICAgICJzZWdyaW5nLXJlZiIsICIlbHUiLCAmc2VncmluZ19yZWYsIE5VTEwpOw0KKwkJaWYg
KGVycikgew0KKwkJCXhlbmJ1c19kZXZfZmF0YWwoZGV2LCBlcnIsDQorCQkJCQkgInJlYWRpbmcg
JXMvcmluZy9zZWdyaW5nLXJlZiBhbmQgZXZlbnQtY2hhbm5lbCIsDQorCQkJCQkgZGV2LT5vdGhl
cmVuZCk7DQorCQkJcmV0dXJuIGVycjsNCisJCX0NCisJfQ0KKwllbHNlIHsNCiAJCXhlbmJ1c19k
ZXZfZmF0YWwoZGV2LCBlcnIsDQotCQkJCSAicmVhZGluZyAlcy9yaW5nLXJlZiBhbmQgZXZlbnQt
Y2hhbm5lbCIsDQotCQkJCSBkZXYtPm90aGVyZW5kKTsNCi0JCXJldHVybiBlcnI7DQorCQkJCSAi
dW5zdXBwb3J0ICVzIGJsa2Zyb250IHJpbmciLCBkZXYtPm90aGVyZW5kKTsNCisJCXJldHVybiAt
RUlOVkFMOw0KIAl9DQogDQogCWJlLT5ibGtpZi0+YmxrX3Byb3RvY29sID0gQkxLSUZfUFJPVE9D
T0xfTkFUSVZFOw0KQEAgLTc1OCwxOSArODg5LDUxIEBAIHN0YXRpYyBpbnQgY29ubmVjdF9yaW5n
KHN0cnVjdCBiYWNrZW5kX2luZm8gKmJlKQ0KIAkJeGVuYnVzX2Rldl9mYXRhbChkZXYsIGVyciwg
InVua25vd24gZmUgcHJvdG9jb2wgJXMiLCBwcm90b2NvbCk7DQogCQlyZXR1cm4gLTE7DQogCX0N
Ci0JcHJfaW5mbyhEUlZfUEZYICJyaW5nLXJlZiAlbGQsIGV2ZW50LWNoYW5uZWwgJWQsIHByb3Rv
Y29sICVkICglcylcbiIsDQotCQlyaW5nX3JlZiwgZXZ0Y2huLCBiZS0+YmxraWYtPmJsa19wcm90
b2NvbCwgcHJvdG9jb2wpOw0KLQ0KIAkvKiBNYXAgdGhlIHNoYXJlZCBmcmFtZSwgaXJxIGV0Yy4g
Ki8NCi0JZXJyID0geGVuX2Jsa2lmX21hcChiZS0+YmxraWYsIHJpbmdfcmVmLCBldnRjaG4pOw0K
LQlpZiAoZXJyKSB7DQotCQl4ZW5idXNfZGV2X2ZhdGFsKGRldiwgZXJyLCAibWFwcGluZyByaW5n
LXJlZiAlbHUgcG9ydCAldSIsDQotCQkJCSByaW5nX3JlZiwgZXZ0Y2huKTsNCi0JCXJldHVybiBl
cnI7DQorCWlmIChyaW5nX3R5cGUgPT0gMikgeyANCisJCWVyciA9IHhlbl9ibGtpZl9tYXBfc2Vn
cmluZyhiZS0+YmxraWYsIHNlZ3JpbmdfcmVmKTsNCisJCWlmIChlcnIpIHsNCisJCQl4ZW5idXNf
ZGV2X2ZhdGFsKGRldiwgZXJyLCAibWFwcGluZyBzZWdtZW50IHJpbmZzIik7DQorCQkJcmV0dXJu
IGVycjsNCisJCX0NCisJCWVyciA9IHhlbl9ibGtpZl9tYXBfdjIoYmUtPmJsa2lmLCByaW5nX3Jl
ZiwgZXZ0Y2huKTsNCisJCWlmIChlcnIpIHsNCisJCQl4ZW5idXNfdW5tYXBfcmluZ192ZnJlZShi
ZS0+YmxraWYtPmJlLT5kZXYsDQorCQkJCQkJYmUtPmJsa2lmLT5ibGtfc2VncmluZyk7DQorCQkJ
YmUtPmJsa2lmLT5ibGtfc2VncmluZ3Muc3JpbmcgPSBOVUxMOw0KKwkJCXhlbmJ1c19kZXZfZmF0
YWwoZGV2LCBlcnIsICJtYXBwaW5nIHJlcXVlc3QgcmluZnMiKTsNCisJCQlyZXR1cm4gZXJyOw0K
KwkJfQ0KKwkJcHJfaW5mbyhEUlZfUEZYDQorCQkJInJpbmctcmVmICVsZCxzZWdyaW5nLXJlZiAl
bGQsZXZlbnQtY2hhbm5lbCAlZCxwcm90b2NvbCAlZCAoJXMpXG4iLA0KKwkJCXJpbmdfcmVmLCBz
ZWdyaW5nX3JlZiwgZXZ0Y2huLCBiZS0+YmxraWYtPmJsa19wcm90b2NvbCwgcHJvdG9jb2wpOw0K
Kwl9DQorCWVsc2Ugew0KKwkJZXJyID0geGVuX2Jsa2lmX21hcChiZS0+YmxraWYsIHJpbmdfcmVm
LCBldnRjaG4pOw0KKwkJaWYgKGVycikgew0KKwkJCXhlbmJ1c19kZXZfZmF0YWwoZGV2LCBlcnIs
ICJtYXBwaW5nIHJpbmctcmVmICVsdSBwb3J0ICV1IiwNCisJCQkJCSByaW5nX3JlZiwgZXZ0Y2hu
KTsNCisJCQlyZXR1cm4gZXJyOw0KKwkJfQ0KKwkJcHJfaW5mbyhEUlZfUEZYICJyaW5nLXJlZiAl
bGQsZXZlbnQtY2hhbm5lbCAlZCxwcm90b2NvbCAlZCAoJXMpXG4iLA0KKwkJCXJpbmdfcmVmLCBl
dnRjaG4sIGJlLT5ibGtpZi0+YmxrX3Byb3RvY29sLCBwcm90b2NvbCk7DQogCX0NCiANCiAJZXJy
ID0geGVuX2Jsa2lmX2luaXRfYmxrYmsoYmUtPmJsa2lmKTsNCiAJaWYgKGVycikgew0KKwkJaWYg
KHJpbmdfdHlwZSA9PSAyKSB7DQorCQkJeGVuYnVzX3VubWFwX3JpbmdfdmZyZWUoYmUtPmJsa2lm
LT5iZS0+ZGV2LA0KKwkJCQkJCWJlLT5ibGtpZi0+YmxrX3NlZ3JpbmcpOw0KKwkJCWJlLT5ibGtp
Zi0+YmxrX3NlZ3JpbmdzLnNyaW5nID0gTlVMTDsNCisJCQl4ZW5idXNfdW5tYXBfcmluZ192ZnJl
ZShiZS0+YmxraWYtPmJlLT5kZXYsDQorCQkJCQkJYmUtPmJsa2lmLT5ibGtfcmluZyk7DQorCQkJ
YmUtPmJsa2lmLT5ibGtfcmluZ3NfdjIuY29tbW9uLnNyaW5nID0gTlVMTDsNCisJCX0NCisJCWVs
c2Ugew0KKwkJCXhlbmJ1c191bm1hcF9yaW5nX3ZmcmVlKGJlLT5ibGtpZi0+YmUtPmRldiwNCisJ
CQkJCQliZS0+YmxraWYtPmJsa19yaW5nKTsNCisJCQliZS0+YmxraWYtPmJsa19yaW5ncy5jb21t
b24uc3JpbmcgPSBOVUxMOw0KKwkJfQ0KIAkJeGVuYnVzX2Rldl9mYXRhbChkZXYsIGVyciwgInhl
biBibGtpZiBpbml0IGJsa2JrIGZhaWxzXG4iKTsNCiAJCXJldHVybiBlcnI7DQogCX0NCg==

--_002_A21691DE07B84740B5F0B81466D5148A23BCF283SHSMSX102ccrcor_
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--_002_A21691DE07B84740B5F0B81466D5148A23BCF283SHSMSX102ccrcor_--


From xen-devel-bounces@lists.xen.org Thu Aug 16 10:31:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 10:31:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1xMV-0006Mr-NJ; Thu, 16 Aug 2012 10:31:35 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ronghui.duan@intel.com>) id 1T1xMT-0006Ma-C1
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 10:31:33 +0000
Received: from [85.158.143.99:27660] by server-1.bemta-4.messagelabs.com id
	09/19-07754-40CCC205; Thu, 16 Aug 2012 10:31:32 +0000
X-Env-Sender: ronghui.duan@intel.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1345113088!21443126!1
X-Originating-IP: [143.182.124.21]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMjEgPT4gMzE0OTgw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11732 invoked from network); 16 Aug 2012 10:31:29 -0000
Received: from mga03.intel.com (HELO mga03.intel.com) (143.182.124.21)
	by server-6.tower-216.messagelabs.com with SMTP;
	16 Aug 2012 10:31:29 -0000
Received: from azsmga001.ch.intel.com ([10.2.17.19])
	by azsmga101.ch.intel.com with ESMTP; 16 Aug 2012 03:31:28 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.77,778,1336374000"; d="scan'208";a="181735984"
Received: from fmsmsx103.amr.corp.intel.com ([10.19.9.34])
	by azsmga001.ch.intel.com with ESMTP; 16 Aug 2012 03:31:25 -0700
Received: from shsmsx101.ccr.corp.intel.com (10.239.4.153) by
	FMSMSX103.amr.corp.intel.com (10.19.9.34) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Thu, 16 Aug 2012 03:31:13 -0700
Received: from shsmsx102.ccr.corp.intel.com ([169.254.2.196]) by
	SHSMSX101.ccr.corp.intel.com ([169.254.1.82]) with mapi id
	14.01.0355.002; Thu, 16 Aug 2012 18:31:12 +0800
From: "Duan, Ronghui" <ronghui.duan@intel.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Thread-Topic: [Xen-devel] [RFC v1 5/5] VBD: enlarge max segment per request
	in blkfront
Thread-Index: Ac17mkDy7uj5Ma85Q7yiLvIEmcoA5w==
Date: Thu, 16 Aug 2012 10:31:10 +0000
Message-ID: <A21691DE07B84740B5F0B81466D5148A23BCF283@SHSMSX102.ccr.corp.intel.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: yes
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
Content-Type: multipart/mixed;
	boundary="_002_A21691DE07B84740B5F0B81466D5148A23BCF283SHSMSX102ccrcor_"
MIME-Version: 1.0
Cc: "Ian.Jackson@eu.citrix.com" <Ian.Jackson@eu.citrix.com>,
	"Stefano.Stabellini@eu.citrix.com" <Stefano.Stabellini@eu.citrix.com>
Subject: [Xen-devel] [RFC v1 5/5] VBD: enlarge max segment per request in
 blkfront
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--_002_A21691DE07B84740B5F0B81466D5148A23BCF283SHSMSX102ccrcor_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable

add segring support in blkback
Signed-off-by: Ronghui Duan <ronghui.duan@intel.com>
diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
index 45eda98..0bbc226 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -60,6 +60,10 @@ static int xen_blkif_reqs = 64;
 module_param_named(reqs, xen_blkif_reqs, int, 0);
 MODULE_PARM_DESC(reqs, "Number of blkback requests to allocate");
 
+int blkback_ring_type = 2;
+module_param_named(blk_ring_type, blkback_ring_type, int, 0);
+MODULE_PARM_DESC(blk_ring_type, "type of ring for blk device");
+
 /* Run-time switchable: /sys/module/blkback/parameters/ */
 static unsigned int log_stats;
 module_param(log_stats, int, 0644);
@@ -125,7 +129,7 @@ static struct pending_req *alloc_req(struct xen_blkif *blkif)
 	struct xen_blkbk *blkbk = blkif->blkbk;
 	struct pending_req *req = NULL;
 	unsigned long flags;
-	unsigned int max_seg = BLKIF_MAX_SEGMENTS_PER_REQUEST;
+	unsigned int max_seg = blkif->ops->max_seg;
 	
 	spin_lock_irqsave(&blkbk->pending_free_lock, flags);
 	if (!list_empty(&blkbk->pending_free)) {
@@ -315,8 +319,10 @@ static void xen_blkbk_unmap(struct pending_req *req)
 
 	for (i = 0; i < req->nr_pages; i++) {
 		handle = pending_handle(req, i);
-		if (handle == BLKBACK_INVALID_HANDLE)
+		if (handle == BLKBACK_INVALID_HANDLE) {
+			printk("BLKBACK_INVALID_HANDLE\n");
 			continue;
+		}
 		gnttab_set_unmap_op(&unmap[invcount], vaddr(req, i),
 				    GNTMAP_host_map, handle);
 		pending_handle(req, i) = BLKBACK_INVALID_HANDLE;
@@ -486,6 +492,12 @@ void *get_back_ring(struct xen_blkif *blkif)
 	return (void *)&blkif->blk_rings;
 }
 
+void *get_back_ring_v2(struct xen_blkif *blkif)
+{
+	return (void *)&blkif->blk_rings_v2;
+}
+
+
 void copy_blkif_req(struct xen_blkif *blkif, RING_IDX rc)
 {
 	struct blkif_request *req = (struct blkif_request *)blkif->req;
@@ -506,12 +518,48 @@ void copy_blkif_req(struct xen_blkif *blkif, RING_IDX rc)
 	}
 }
 
+void copy_blkif_req_v2(struct xen_blkif *blkif, RING_IDX rc)
+{
+	struct blkif_request_header *req = (struct blkif_request_header *)blkif->req;
+	union blkif_back_rings_v2 *blk_rings = &blkif->blk_rings_v2;
+	switch (blkif->blk_protocol) {
+	case BLKIF_PROTOCOL_NATIVE:
+		memcpy(req, RING_GET_REQUEST(&blk_rings->native, rc),
+			sizeof(struct blkif_request_header));
+		break;
+	case BLKIF_PROTOCOL_X86_32:
+		blkif_get_x86_32_req_v2(req, RING_GET_REQUEST(&blk_rings->x86_32, rc));
+		break;
+	case BLKIF_PROTOCOL_X86_64:
+		blkif_get_x86_64_req_v2(req, RING_GET_REQUEST(&blk_rings->x86_64, rc));
+		break;
+	default:
+		BUG();
+	}
+}
+
 void copy_blkif_seg_req(struct xen_blkif *blkif)
 {
 	struct blkif_request *req =3D (struct blkif_request *)blkif->req;
 
 	blkif->seg_req = req->u.rw.seg;
 }
+
+void copy_blkif_seg_req_v2(struct xen_blkif *blkif)
+{
+	struct blkif_request_header *req = (struct blkif_request_header *)blkif->req;
+	struct blkif_segment_back_ring *blk_segrings = &blkif->blk_segrings;
+	int i;
+	RING_IDX rc;
+
+	rc = blk_segrings->req_cons;
+	for (i = 0; i < req->u.rw.nr_segments; i++) {
+		memcpy(&blkif->seg_req[i], RING_GET_REQUEST(blk_segrings, rc++),
+			sizeof(struct blkif_request_segment));
+	}
+	blk_segrings->req_cons = rc;
+}
+
 /*
  * Function to copy the from the ring buffer the 'struct blkif_request'
  * (which has the sectors we want, number of them, grant references, etc),
@@ -587,10 +635,12 @@ do_block_io_op(struct xen_blkif *blkif)
 
 	return more_to_do;
 }
+
 /*
  * Transmutation of the 'struct blkif_request' to a proper 'struct bio'
  * and call the 'submit_bio' to pass it to the underlying storage.
  */
+
 static int dispatch_rw_block_io(struct xen_blkif *blkif,
 				struct pending_req *pending_req)
 {
@@ -774,54 +824,89 @@ static int dispatch_rw_block_io(struct xen_blkif *blkif,
 	return -EIO;
 }
 
-struct blkif_segment_back_ring *
-	get_seg_back_ring(struct xen_blkif *blkif)
+void push_back_ring_rsp(struct xen_blkif *blkif, int nr_page, int *notify)
 {
-	return NULL;
+	union blkif_back_rings *blk_rings = &blkif->blk_rings;
+
+	blk_rings->common.rsp_prod_pvt++;
+	RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&blk_rings->common, *notify);
 }
 
-void push_back_ring_rsp(union blkif_back_rings *blk_rings, int nr_page, int *notify)
+void push_back_ring_rsp_v2(struct xen_blkif *blkif, int nr_page, int *notify)
 {
+	union blkif_back_rings_v2 *blk_rings = &blkif->blk_rings_v2;
+	struct blkif_segment_back_ring *blk_segrings = &blkif->blk_segrings;
+
 	blk_rings->common.rsp_prod_pvt++;
+	blk_segrings->rsp_prod_pvt += nr_page;
+	RING_PUSH_RESPONSES(blk_segrings);
 	RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&blk_rings->common, *notify);
 }
 
-/*
- * Put a response on the ring on how the operation fared.
- */
-static void make_response(struct xen_blkif *blkif, u64 id,
-			  unsigned short op, int nr_page, int st)
+void copy_response(struct xen_blkif *blkif, struct blkif_response *resp)
 {
-	struct blkif_response  resp;
-	unsigned long     flags;
-	union blkif_back_rings *blk_rings =
-		(union blkif_back_rings *)blkif->ops->get_back_ring(blkif);
-	int notify;
+	union blkif_back_rings *blk_rings = &blkif->blk_rings;
+	
+	switch (blkif->blk_protocol) {
+	case BLKIF_PROTOCOL_NATIVE:
+		memcpy(RING_GET_RESPONSE(&blk_rings->native, blk_rings->native.rsp_prod_pvt),
+		       resp, sizeof(*resp));
+		break;
+	case BLKIF_PROTOCOL_X86_32:
+		memcpy(RING_GET_RESPONSE(&blk_rings->x86_32, blk_rings->x86_32.rsp_prod_pvt),
+		       resp, sizeof(*resp));
+		break;
+	case BLKIF_PROTOCOL_X86_64:
+		memcpy(RING_GET_RESPONSE(&blk_rings->x86_64, blk_rings->x86_64.rsp_prod_pvt),
+		       resp, sizeof(*resp));
+		break;
+	default:
+		BUG();
+	}
 
-	resp.id        = id;
-	resp.operation = op;
-	resp.status    = st;
+}
 
-	spin_lock_irqsave(&blkif->blk_ring_lock, flags);
-	/* Place on the response ring for the relevant domain. */
+void copy_response_v2(struct xen_blkif *blkif, struct blkif_response *resp)
+{
+	union blkif_back_rings_v2 *blk_rings = &blkif->blk_rings_v2;
+	
 	switch (blkif->blk_protocol) {
 	case BLKIF_PROTOCOL_NATIVE:
 		memcpy(RING_GET_RESPONSE(&blk_rings->native, blk_rings->native.rsp_prod_pvt),
-		       &resp, sizeof(resp));
+		       resp, sizeof(*resp));
 		break;
 	case BLKIF_PROTOCOL_X86_32:
 		memcpy(RING_GET_RESPONSE(&blk_rings->x86_32, blk_rings->x86_32.rsp_prod_pvt),
-		       &resp, sizeof(resp));
+		       resp, sizeof(*resp));
 		break;
 	case BLKIF_PROTOCOL_X86_64:
 		memcpy(RING_GET_RESPONSE(&blk_rings->x86_64, blk_rings->x86_64.rsp_prod_pvt),
-		       &resp, sizeof(resp));
+		       resp, sizeof(*resp));
 		break;
 	default:
 		BUG();
 	}
+}
 
-	blkif->ops->push_back_ring_rsp(blk_rings, nr_page, &notify);
+/*
+ * Put a response on the ring on how the operation fared.
+ */
+static void make_response(struct xen_blkif *blkif, u64 id,
+			  unsigned short op, int nr_page, int st)
+{
+	struct blkif_response  resp;
+	unsigned long     flags;
+	int notify;
+
+	resp.id        = id;
+	resp.operation = op;
+	resp.status    = st;
+
+	spin_lock_irqsave(&blkif->blk_ring_lock, flags);
+	/* Place on the response ring for the relevant domain. */
+	blkif->ops->copy_response(blkif, &resp);
+
+	blkif->ops->push_back_ring_rsp(blkif, nr_page, &notify);
 
 	spin_unlock_irqrestore(&blkif->blk_ring_lock, flags);
 	if (notify)
@@ -895,9 +980,19 @@ struct blkback_ring_operation blkback_ring_ops = {
 	.copy_blkif_req = copy_blkif_req,
 	.copy_blkif_seg_req = copy_blkif_seg_req,
 	.push_back_ring_rsp = push_back_ring_rsp,
+	.copy_response = copy_response,
 	.max_seg = BLKIF_MAX_SEGMENTS_PER_REQUEST,
 };
 
+struct blkback_ring_operation blkback_ring_ops_v2 = {
+	.get_back_ring = get_back_ring_v2,
+	.copy_blkif_req = copy_blkif_req_v2,
+	.copy_blkif_seg_req = copy_blkif_seg_req_v2,
+	.push_back_ring_rsp = push_back_ring_rsp_v2,
+	.copy_response = copy_response_v2,
+	.max_seg = BLKIF_MAX_SEGMENTS_PER_REQUEST_V2,
+};
+
 static int __init xen_blkif_init(void)
 {
 	int rc = 0;
diff --git a/drivers/block/xen-blkback/common.h b/drivers/block/xen-blkback/common.h
index 80e8acc..2e241a4 100644
--- a/drivers/block/xen-blkback/common.h
+++ b/drivers/block/xen-blkback/common.h
@@ -48,6 +48,7 @@
 	pr_debug(DRV_PFX "(%s:%d) " fmt ".\n",		\
 		 __func__, __LINE__, ##args)
 
+extern int blkback_ring_type;
 
 /* Not a real protocol.  Used to generate ring structs which contain
  * the elements common to all protocols only.  This way we get a
@@ -84,6 +85,22 @@ struct blkif_x86_32_request {
 	} u;
 } __attribute__((__packed__));
 
+struct blkif_x86_32_request_rw_v2 {
+	uint8_t        nr_segments;  /* number of segments                   */
+	blkif_vdev_t   handle;       /* only for read/write requests         */
+	uint64_t       id;           /* private guest value, echoed in resp  */
+	blkif_sector_t sector_number;/* start sector idx on disk (r/w only)  */
+	uint64_t       seg_id;       /* segment offset in the segment ring   */
+} __attribute__((__packed__));
+
+struct blkif_x86_32_request_v2 {
+	uint8_t        operation;    /* BLKIF_OP_???                         */
+	union {
+		struct blkif_x86_32_request_rw_v2 rw;
+		struct blkif_x86_32_request_discard discard;
+	} u;
+} __attribute__((__packed__));
+
 /* i386 protocol version */
 #pragma pack(push, 4)
 struct blkif_x86_32_response {
@@ -120,6 +137,23 @@ struct blkif_x86_64_request {
 	} u;
 } __attribute__((__packed__));
 
+struct blkif_x86_64_request_rw_v2 {
+	uint8_t        nr_segments;  /* number of segments                   */
+	blkif_vdev_t   handle;       /* only for read/write requests         */
+	uint32_t       _pad1;        /* offsetof(blkif_request..,u.rw.id)==8  */
+	uint64_t       id;
+	blkif_sector_t sector_number;/* start sector idx on disk (r/w only)  */
+	uint64_t       seg_id;       /* segment offset in the segment ring   */
+} __attribute__((__packed__));
+
+struct blkif_x86_64_request_v2 {
+	uint8_t        operation;    /* BLKIF_OP_???                         */
+	union {
+		struct blkif_x86_64_request_rw_v2 rw;
+		struct blkif_x86_64_request_discard discard;
+	} u;
+} __attribute__((__packed__));
+
 struct blkif_x86_64_response {
 	uint64_t       __attribute__((__aligned__(8))) id;
 	uint8_t         operation;       /* copied from request */
@@ -132,6 +166,10 @@ DEFINE_RING_TYPES(blkif_x86_32, struct blkif_x86_32_request,
 		  struct blkif_x86_32_response);
 DEFINE_RING_TYPES(blkif_x86_64, struct blkif_x86_64_request,
 		  struct blkif_x86_64_response);
+DEFINE_RING_TYPES(blkif_x86_32_v2, struct blkif_x86_32_request_v2,
+		  struct blkif_x86_32_response);
+DEFINE_RING_TYPES(blkif_x86_64_v2, struct blkif_x86_64_request_v2,
+		  struct blkif_x86_64_response);
 
 union blkif_back_rings {
 	struct blkif_back_ring        native;
@@ -140,6 +178,13 @@ union blkif_back_rings {
 	struct blkif_x86_64_back_ring x86_64;
 };
 
+union blkif_back_rings_v2 {
+	struct blkif_request_back_ring   native;
+	struct blkif_common_back_ring    common;
+	struct blkif_x86_32_v2_back_ring x86_32;
+	struct blkif_x86_64_v2_back_ring x86_64;
+};
+
 enum blkif_protocol {
 	BLKIF_PROTOCOL_NATIVE = 1,
 	BLKIF_PROTOCOL_X86_32 = 2,
@@ -175,7 +220,8 @@ struct blkback_ring_operation {
 	void *(*get_back_ring) (struct xen_blkif *blkif);
 	void (*copy_blkif_req) (struct xen_blkif *blkif, RING_IDX rc);
 	void (*copy_blkif_seg_req) (struct xen_blkif *blkif);
-	void (*push_back_ring_rsp) (union blkif_back_rings *blk_rings, int nr_page, int *notify);
+	void (*push_back_ring_rsp) (struct xen_blkif *blkif, int nr_page, int *notify);
+	void (*copy_response) (struct xen_blkif *blkif, struct blkif_response *resp);
 	unsigned int max_seg;
 };
 
@@ -190,7 +236,10 @@ struct xen_blkif {
 	enum blkif_protocol	blk_protocol;
 	enum blkif_backring_type blk_backring_type;
 	union blkif_back_rings	blk_rings;
+	union blkif_back_rings_v2	blk_rings_v2;
+	struct blkif_segment_back_ring	blk_segrings;
 	void			*blk_ring;
+	void			*blk_segring;
 	struct xen_blkbk        *blkbk;
 	/* The VBD attached to this interface. */
 	struct xen_vbd		vbd;
@@ -328,6 +377,31 @@ static inline void blkif_get_x86_32_req(struct blkif_request *dst,
 	}
 }
 
+static inline void blkif_get_x86_32_req_v2(struct blkif_request_header *dst,
+					struct blkif_x86_32_request_v2 *src)
+{
+	dst->operation = src->operation;
+	switch (src->operation) {
+	case BLKIF_OP_READ:
+	case BLKIF_OP_WRITE:
+	case BLKIF_OP_WRITE_BARRIER:
+	case BLKIF_OP_FLUSH_DISKCACHE:
+		dst->u.rw.nr_segments = src->u.rw.nr_segments;
+		dst->u.rw.handle = src->u.rw.handle;
+		dst->u.rw.id = src->u.rw.id;
+		dst->u.rw.sector_number = src->u.rw.sector_number;
+		dst->u.rw.seg_id = src->u.rw.seg_id;
+		barrier();
+		break;
+	case BLKIF_OP_DISCARD:
+		dst->u.discard.flag = src->u.discard.flag;
+		dst->u.discard.sector_number = src->u.discard.sector_number;
+		dst->u.discard.nr_sectors = src->u.discard.nr_sectors;
+		break;
+	default:
+		break;
+	}
+}
 static inline void blkif_get_x86_64_req(struct blkif_request *dst,
 					struct blkif_x86_64_request *src)
 {
@@ -359,4 +433,30 @@ static inline void blkif_get_x86_64_req(struct blkif_request *dst,
 	}
 }
 
+static inline void blkif_get_x86_64_req_v2(struct blkif_request_header *dst,
+					struct blkif_x86_64_request_v2 *src)
+{
+	dst->operation = src->operation;
+	switch (src->operation) {
+	case BLKIF_OP_READ:
+	case BLKIF_OP_WRITE:
+	case BLKIF_OP_WRITE_BARRIER:
+	case BLKIF_OP_FLUSH_DISKCACHE:
+		dst->u.rw.nr_segments = src->u.rw.nr_segments;
+		dst->u.rw.handle = src->u.rw.handle;
+		dst->u.rw.id = src->u.rw.id;
+		dst->u.rw.sector_number = src->u.rw.sector_number;
+		dst->u.rw.seg_id = src->u.rw.seg_id;
+		barrier();
+		break;
+	case BLKIF_OP_DISCARD:
+		dst->u.discard.flag = src->u.discard.flag;
+		dst->u.discard.sector_number = src->u.discard.sector_number;
+		dst->u.discard.nr_sectors = src->u.discard.nr_sectors;
+		break;
+	default:
+		break;
+	}
+}
+
 #endif /* __XEN_BLKIF__BACKEND__COMMON_H__ */
diff --git a/drivers/block/xen-blkback/xenbus.c b/drivers/block/xen-blkback/xenbus.c
index 8b0d496..4678533 100644
--- a/drivers/block/xen-blkback/xenbus.c
+++ b/drivers/block/xen-blkback/xenbus.c
@@ -36,7 +36,7 @@ static int connect_ring(struct backend_info *);
 static void backend_changed(struct xenbus_watch *, const char **,
 			    unsigned int);
 
-extern struct blkback_ring_operation blkback_ring_ops;
+extern struct blkback_ring_operation blkback_ring_ops, blkback_ring_ops_v2;
 
 struct xenbus_device *xen_blkbk_xenbus(struct backend_info *be)
 {
@@ -176,6 +176,83 @@ static int xen_blkif_map(struct xen_blkif *blkif, unsigned long shared_page,
 	return 0;
 }
 
+static int
+xen_blkif_map_segring(struct xen_blkif *blkif, unsigned long shared_page)
+{
+	struct blkif_segment_sring *sring;
+	int err;
+
+	err = xenbus_map_ring_valloc(blkif->be->dev, shared_page,
+				     &blkif->blk_segring);
+
+	if (err < 0)
+		return err;
+
+	sring = (struct blkif_segment_sring *)blkif->blk_segring;
+	BACK_RING_INIT(&blkif->blk_segrings, sring, PAGE_SIZE);
+
+	return 0;
+}
+
+static int xen_blkif_map_v2(struct xen_blkif *blkif, unsigned long shared_page,
+			    unsigned int evtchn)
+{
+	int err;
+
+	/* Already connected through? */
+	if (blkif->irq)
+		return 0;
+
+	err = xenbus_map_ring_valloc(blkif->be->dev, shared_page,
+				     &blkif->blk_ring);
+
+	if (err < 0)
+		return err;
+
+	switch (blkif->blk_protocol) {
+	case BLKIF_PROTOCOL_NATIVE:
+	{
+		struct blkif_request_sring *sring;
+		sring = (struct blkif_request_sring *)blkif->blk_ring;
+		BACK_RING_INIT(&blkif->blk_rings_v2.native, sring,
+			       PAGE_SIZE);
+		break;
+	}
+	case BLKIF_PROTOCOL_X86_32:
+	{
+		struct blkif_x86_32_v2_sring *sring_x86_32;
+		sring_x86_32 = (struct blkif_x86_32_v2_sring *)blkif->blk_ring;
+		BACK_RING_INIT(&blkif->blk_rings_v2.x86_32, sring_x86_32,
+			       PAGE_SIZE);
+		break;
+	}
+	case BLKIF_PROTOCOL_X86_64:
+	{
+		struct blkif_x86_64_v2_sring *sring_x86_64;
+		sring_x86_64 = (struct blkif_x86_64_v2_sring *)blkif->blk_ring;
+		BACK_RING_INIT(&blkif->blk_rings_v2.x86_64, sring_x86_64,
+			       PAGE_SIZE);
+		break;
+	}
+	default:
+		BUG();
+	}
+
+
+	err = bind_interdomain_evtchn_to_irqhandler(blkif->domid, evtchn,
+						    xen_blkif_be_int, 0,
+						    "blkif-backend", blkif);
+	if (err < 0) {
+		xenbus_unmap_ring_vfree(blkif->be->dev, blkif->blk_ring);
+		blkif->blk_rings_v2.common.sring = NULL;
+		return err;
+	}
+	blkif->irq = err;
+
+	return 0;
+}
+
 static void xen_blkif_disconnect(struct xen_blkif *blkif)
 {
 	if (blkif->xenblkd) {
@@ -192,10 +269,18 @@ static void xen_blkif_disconnect(struct xen_blkif *blkif)
 		blkif->irq = 0;
 	}
 
-	if (blkif->blk_rings.common.sring) {
+	if (blkif->blk_backring_type == BACKRING_TYPE_1 &&
+	    blkif->blk_rings.common.sring) {
 		xenbus_unmap_ring_vfree(blkif->be->dev, blkif->blk_ring);
 		blkif->blk_rings.common.sring = NULL;
 	}
+	if (blkif->blk_backring_type == BACKRING_TYPE_2 &&
+	    blkif->blk_rings_v2.common.sring) {
+		xenbus_unmap_ring_vfree(blkif->be->dev, blkif->blk_ring);
+		blkif->blk_rings_v2.common.sring = NULL;
+		xenbus_unmap_ring_vfree(blkif->be->dev, blkif->blk_segring);
+		blkif->blk_segrings.sring = NULL;
+	}
 }
 
 void xen_blkif_free(struct xen_blkif *blkif)
@@ -476,6 +561,9 @@ static int xen_blkbk_probe(struct xenbus_device *dev,
 	if (err)
 		goto fail;
 
+	err = xenbus_printf(XBT_NIL, dev->nodename, "blkback-ring-type",
+			    "%u", blkback_ring_type);
+
 	err = xenbus_switch_state(dev, XenbusStateInitWait);
 	if (err)
 		goto fail;
@@ -722,25 +810,68 @@ static int connect_ring(struct backend_info *be)
 {
 	struct xenbus_device *dev = be->dev;
 	unsigned long ring_ref;
+	unsigned long segring_ref;
 	unsigned int evtchn;
+	unsigned int ring_type;
 	char protocol[64] = "";
 	int err;
 
 	DPRINTK("%s", dev->otherend);
-	be->blkif->ops = &blkback_ring_ops;
-	be->blkif->req = kmalloc(sizeof(struct blkif_request),
-				 GFP_KERNEL);
-	be->blkif->seg_req = kmalloc(sizeof(struct blkif_request_segment)*
-				     be->blkif->ops->max_seg,  GFP_KERNEL);
-	be->blkif->blk_backring_type = BACKRING_TYPE_1;
-
-	err = xenbus_gather(XBT_NIL, dev->otherend, "ring-ref", "%lu",
-			    &ring_ref, "event-channel", "%u", &evtchn, NULL);
-	if (err) {
+
+	err = xenbus_scanf(XBT_NIL, dev->otherend, "blkfront-ring-type", "%u",
+			   &ring_type);
+	if (err != 1) {
+		pr_info(DRV_PFX "using legacy blk ring\n");
+		ring_type = 1;
+	}
+
+	if (ring_type == 1) {
+		be->blkif->ops = &blkback_ring_ops;
+		be->blkif->blk_backring_type = BACKRING_TYPE_1;
+		be->blkif->req = kmalloc(sizeof(struct blkif_request), GFP_KERNEL);
+		be->blkif->seg_req = kmalloc(sizeof(struct blkif_request_segment)*
+					     be->blkif->ops->max_seg, GFP_KERNEL);
+		if (!be->blkif->req || !be->blkif->seg_req) {
+			kfree(be->blkif->req);
+			kfree(be->blkif->seg_req);
+			xenbus_dev_fatal(dev, -ENOMEM, "not enough memory");
+			return -ENOMEM;
+		}
+		err = xenbus_gather(XBT_NIL, dev->otherend, "ring-ref", "%lu",
+				    &ring_ref, "event-channel", "%u", &evtchn, NULL);
+		if (err) {
+			xenbus_dev_fatal(dev, err,
+					 "reading %s/ring-ref and event-channel",
+					 dev->otherend);
+			return err;
+		}
+	} else if (ring_type == 2) {
+		be->blkif->ops = &blkback_ring_ops_v2;
+		be->blkif->blk_backring_type = BACKRING_TYPE_2;
+		be->blkif->req = kmalloc(sizeof(struct blkif_request_header), GFP_KERNEL);
+		be->blkif->seg_req = kmalloc(sizeof(struct blkif_request_segment)*
+					     be->blkif->ops->max_seg, GFP_KERNEL);
+		if (!be->blkif->req || !be->blkif->seg_req) {
+			kfree(be->blkif->req);
+			kfree(be->blkif->seg_req);
+			xenbus_dev_fatal(dev, -ENOMEM, "not enough memory");
+			return -ENOMEM;
+		}
+		err = xenbus_gather(XBT_NIL, dev->otherend, "reqring-ref", "%lu",
+				    &ring_ref, "event-channel", "%u", &evtchn,
+				    "segring-ref", "%lu", &segring_ref, NULL);
+		if (err) {
+			xenbus_dev_fatal(dev, err,
+					 "reading %s/ring/segring-ref and event-channel",
+					 dev->otherend);
+			return err;
+		}
+	} else {
 		xenbus_dev_fatal(dev, err,
-				 "reading %s/ring-ref and event-channel",
-				 dev->otherend);
-		return err;
+				 "unsupported %s blkfront ring", dev->otherend);
+		return -EINVAL;
 	}
 
 	be->blkif->blk_protocol = BLKIF_PROTOCOL_NATIVE;
@@ -758,19 +889,51 @@ static int connect_ring(struct backend_info *be)
 		xenbus_dev_fatal(dev, err, "unknown fe protocol %s", protocol);
 		return -1;
 	}
-	pr_info(DRV_PFX "ring-ref %ld, event-channel %d, protocol %d (%s)\n",
-		ring_ref, evtchn, be->blkif->blk_protocol, protocol);
-
 	/* Map the shared frame, irq etc. */
-	err = xen_blkif_map(be->blkif, ring_ref, evtchn);
-	if (err) {
-		xenbus_dev_fatal(dev, err, "mapping ring-ref %lu port %u",
-				 ring_ref, evtchn);
-		return err;
+	if (ring_type == 2) {
+		err = xen_blkif_map_segring(be->blkif, segring_ref);
+		if (err) {
+			xenbus_dev_fatal(dev, err, "mapping segment rings");
+			return err;
+		}
+		err = xen_blkif_map_v2(be->blkif, ring_ref, evtchn);
+		if (err) {
+			xenbus_unmap_ring_vfree(be->blkif->be->dev,
+						be->blkif->blk_segring);
+			be->blkif->blk_segrings.sring = NULL;
+			xenbus_dev_fatal(dev, err, "mapping request rings");
+			return err;
+		}
+		pr_info(DRV_PFX
+			"ring-ref %ld, segring-ref %ld, event-channel %d, protocol %d (%s)\n",
+			ring_ref, segring_ref, evtchn, be->blkif->blk_protocol, protocol);
+	} else {
+		err = xen_blkif_map(be->blkif, ring_ref, evtchn);
+		if (err) {
+			xenbus_dev_fatal(dev, err, "mapping ring-ref %lu port %u",
+					 ring_ref, evtchn);
+			return err;
+		}
+		pr_info(DRV_PFX "ring-ref %ld, event-channel %d, protocol %d (%s)\n",
+			ring_ref, evtchn, be->blkif->blk_protocol, protocol);
 	}
 
 	err = xen_blkif_init_blkbk(be->blkif);
 	if (err) {
+		if (ring_type == 2) {
+			xenbus_unmap_ring_vfree(be->blkif->be->dev,
+						be->blkif->blk_segring);
+			be->blkif->blk_segrings.sring = NULL;
+			xenbus_unmap_ring_vfree(be->blkif->be->dev,
+						be->blkif->blk_ring);
+			be->blkif->blk_rings_v2.common.sring = NULL;
+		} else {
+			xenbus_unmap_ring_vfree(be->blkif->be->dev,
+						be->blkif->blk_ring);
+			be->blkif->blk_rings.common.sring = NULL;
+		}
 		xenbus_dev_fatal(dev, err, "xen blkif init blkbk fails\n");
 		return err;
 	}
-ronghui



--_002_A21691DE07B84740B5F0B81466D5148A23BCF283SHSMSX102ccrcor_
Content-Type: application/octet-stream; name="vbd_enlarge_segments_05.patch"
Content-Description: vbd_enlarge_segments_05.patch
Content-Disposition: attachment; filename="vbd_enlarge_segments_05.patch";
	size=23023; creation-date="Thu, 16 Aug 2012 10:19:11 GMT";
	modification-date="Thu, 16 Aug 2012 10:29:09 GMT"
Content-Transfer-Encoding: base64

c2VnX2lkID0gc3JjLT51LnJ3LnNlZ19pZDsNCisJCWJhcnJpZXIoKTsNCisJCWJyZWFrOw0KKwlj
YXNlIEJMS0lGX09QX0RJU0NBUkQ6DQorCQlkc3QtPnUuZGlzY2FyZC5mbGFnID0gc3JjLT51LmRp
c2NhcmQuZmxhZzsNCisJCWRzdC0+dS5kaXNjYXJkLnNlY3Rvcl9udW1iZXIgPSBzcmMtPnUuZGlz
Y2FyZC5zZWN0b3JfbnVtYmVyOw0KKwkJZHN0LT51LmRpc2NhcmQubnJfc2VjdG9ycyA9IHNyYy0+
dS5kaXNjYXJkLm5yX3NlY3RvcnM7DQorCQlicmVhazsNCisJZGVmYXVsdDoNCisJCWJyZWFrOw0K
Kwl9DQorfQ0KKw0KICNlbmRpZiAvKiBfX1hFTl9CTEtJRl9fQkFDS0VORF9fQ09NTU9OX0hfXyAq
Lw0KZGlmZiAtLWdpdCBhL2RyaXZlcnMvYmxvY2sveGVuLWJsa2JhY2sveGVuYnVzLmMgYi9kcml2
ZXJzL2Jsb2NrL3hlbi1ibGtiYWNrL3hlbmJ1cy5jDQppbmRleCA4YjBkNDk2Li40Njc4NTMzIDEw
MDY0NA0KLS0tIGEvZHJpdmVycy9ibG9jay94ZW4tYmxrYmFjay94ZW5idXMuYw0KKysrIGIvZHJp
dmVycy9ibG9jay94ZW4tYmxrYmFjay94ZW5idXMuYw0KQEAgLTM2LDcgKzM2LDcgQEAgc3RhdGlj
IGludCBjb25uZWN0X3Jpbmcoc3RydWN0IGJhY2tlbmRfaW5mbyAqKTsNCiBzdGF0aWMgdm9pZCBi
YWNrZW5kX2NoYW5nZWQoc3RydWN0IHhlbmJ1c193YXRjaCAqLCBjb25zdCBjaGFyICoqLA0KIAkJ
CSAgICB1bnNpZ25lZCBpbnQpOw0KIA0KLWV4dGVybiBzdHJ1Y3QgYmxrYmFja19yaW5nX29wZXJh
dGlvbiBibGtiYWNrX3Jpbmdfb3BzOw0KK2V4dGVybiBzdHJ1Y3QgYmxrYmFja19yaW5nX29wZXJh
dGlvbiBibGtiYWNrX3Jpbmdfb3BzLCBibGtiYWNrX3Jpbmdfb3BzX3YyOw0KIA0KIHN0cnVjdCB4
ZW5idXNfZGV2aWNlICp4ZW5fYmxrYmtfeGVuYnVzKHN0cnVjdCBiYWNrZW5kX2luZm8gKmJlKQ0K
IHsNCkBAIC0xNzYsNiArMTc2LDgzIEBAIHN0YXRpYyBpbnQgeGVuX2Jsa2lmX21hcChzdHJ1Y3Qg
eGVuX2Jsa2lmICpibGtpZiwgdW5zaWduZWQgbG9uZyBzaGFyZWRfcGFnZSwNCiAJcmV0dXJuIDA7
DQogfQ0KIA0KK3N0YXRpYyBpbnQNCit4ZW5fYmxraWZfbWFwX3NlZ3Jpbmcoc3RydWN0IHhlbl9i
bGtpZiAqYmxraWYsIHVuc2lnbmVkIGxvbmcgc2hhcmVkX3BhZ2UpDQorew0KKwlzdHJ1Y3QgYmxr
aWZfc2VnbWVudF9zcmluZyAqc3Jpbmc7DQorCWludCBlcnI7DQorDQorCWVyciA9IHhlbmJ1c19t
YXBfcmluZ192YWxsb2MoYmxraWYtPmJlLT5kZXYsIHNoYXJlZF9wYWdlLA0KKwkJCQkgICAgICZi
bGtpZi0+YmxrX3NlZ3JpbmcpOw0KKw0KKwlpZiAoZXJyIDwgMCkNCisJCXJldHVybiBlcnI7DQor
DQorCXNyaW5nID0gKHN0cnVjdCBibGtpZl9zZWdtZW50X3NyaW5nICopYmxraWYtPmJsa19zZWdy
aW5nOw0KKwlCQUNLX1JJTkdfSU5JVCgmYmxraWYtPmJsa19zZWdyaW5ncywgc3JpbmcsIFBBR0Vf
U0laRSk7DQorDQorCXJldHVybiAwOw0KK30NCisNCitzdGF0aWMgaW50IHhlbl9ibGtpZl9tYXBf
djIoc3RydWN0IHhlbl9ibGtpZiAqYmxraWYsIHVuc2lnbmVkIGxvbmcgc2hhcmVkX3BhZ2UsIA0K
KyAgICAgICAgICAgICAgICAgICAgICAgICB1bnNpZ25lZCBpbnQgZXZ0Y2huKQ0KK3sNCisJaW50
IGVycjsNCisNCisJLyogQWxyZWFkeSBjb25uZWN0ZWQgdGhyb3VnaD8gKi8NCisJaWYgKGJsa2lm
LT5pcnEpDQorCQlyZXR1cm4gMDsNCisNCisJZXJyID0geGVuYnVzX21hcF9yaW5nX3ZhbGxvYyhi
bGtpZi0+YmUtPmRldiwgc2hhcmVkX3BhZ2UsDQorCQkJCSAgICAgJmJsa2lmLT5ibGtfcmluZyk7
DQorDQorCWlmIChlcnIgPCAwKQ0KKwkJcmV0dXJuIGVycjsNCisNCisJc3dpdGNoIChibGtpZi0+
YmxrX3Byb3RvY29sKSB7DQorCWNhc2UgQkxLSUZfUFJPVE9DT0xfTkFUSVZFOg0KKwl7DQorCQlz
dHJ1Y3QgYmxraWZfcmVxdWVzdF9zcmluZyAqc3Jpbmc7DQorCQlzcmluZyA9IChzdHJ1Y3QgYmxr
aWZfcmVxdWVzdF9zcmluZyAqKWJsa2lmLT5ibGtfcmluZzsNCisJCUJBQ0tfUklOR19JTklUKCZi
bGtpZi0+YmxrX3JpbmdzX3YyLm5hdGl2ZSwgc3JpbmcsDQorCQkJICAgICAgIFBBR0VfU0laRSk7
DQorCQlicmVhazsNCisJfQ0KKwljYXNlIEJMS0lGX1BST1RPQ09MX1g4Nl8zMjoNCisJew0KKwkJ
c3RydWN0IGJsa2lmX3g4Nl8zMl92Ml9zcmluZyAqc3JpbmdfeDg2XzMyOw0KKwkJc3JpbmdfeDg2
XzMyID0gKHN0cnVjdCBibGtpZl94ODZfMzJfdjJfc3JpbmcgKilibGtpZi0+YmxrX3Jpbmc7DQor
CQlCQUNLX1JJTkdfSU5JVCgmYmxraWYtPmJsa19yaW5nc192Mi54ODZfMzIsIHNyaW5nX3g4Nl8z
MiwNCisJCQkgICAgICAgUEFHRV9TSVpFKTsNCisJCWJyZWFrOw0KKwl9DQorCWNhc2UgQkxLSUZf
UFJPVE9DT0xfWDg2XzY0Og0KKwl7DQorCQlzdHJ1Y3QgYmxraWZfeDg2XzY0X3YyX3NyaW5nICpz
cmluZ194ODZfNjQ7DQorCQlzcmluZ194ODZfNjQgPSAoc3RydWN0IGJsa2lmX3g4Nl82NF92Ml9z
cmluZyAqKWJsa2lmLT5ibGtfcmluZzsNCisJCUJBQ0tfUklOR19JTklUKCZibGtpZi0+YmxrX3Jp
bmdzX3YyLng4Nl82NCwgc3JpbmdfeDg2XzY0LA0KKwkJCSAgICAgICBQQUdFX1NJWkUpOw0KKwkJ
YnJlYWs7DQorCX0NCisJZGVmYXVsdDoNCisJCUJVRygpOw0KKwl9DQorDQorCQ0KKw0KKwllcnIg
PSBiaW5kX2ludGVyZG9tYWluX2V2dGNobl90b19pcnFoYW5kbGVyKGJsa2lmLT5kb21pZCwgZXZ0
Y2huLA0KKwkJCQkJCSAgICB4ZW5fYmxraWZfYmVfaW50LCAwLA0KKwkJCQkJCSAgICAiYmxraWYt
YmFja2VuZCIsIGJsa2lmKTsNCisJaWYgKGVyciA8IDApIHsNCisJCXhlbmJ1c191bm1hcF9yaW5n
X3ZmcmVlKGJsa2lmLT5iZS0+ZGV2LCBibGtpZi0+YmxrX3JpbmcpOw0KKwkJYmxraWYtPmJsa19y
aW5nc192Mi5jb21tb24uc3JpbmcgPSBOVUxMOw0KKwkJcmV0dXJuIGVycjsNCisJfQ0KKwlibGtp
Zi0+aXJxID0gZXJyOw0KKw0KKwlyZXR1cm4gMDsNCit9DQorDQogc3RhdGljIHZvaWQgeGVuX2Js
a2lmX2Rpc2Nvbm5lY3Qoc3RydWN0IHhlbl9ibGtpZiAqYmxraWYpDQogew0KIAlpZiAoYmxraWYt
PnhlbmJsa2QpIHsNCkBAIC0xOTIsMTAgKzI2OSwxOCBAQCBzdGF0aWMgdm9pZCB4ZW5fYmxraWZf
ZGlzY29ubmVjdChzdHJ1Y3QgeGVuX2Jsa2lmICpibGtpZikNCiAJCWJsa2lmLT5pcnEgPSAwOw0K
IAl9DQogDQotCWlmIChibGtpZi0+YmxrX3JpbmdzLmNvbW1vbi5zcmluZykgew0KKwlpZiAoYmxr
aWYtPmJsa19iYWNrcmluZ190eXBlID09IEJBQ0tSSU5HX1RZUEVfMSAmJiANCisJICAgIGJsa2lm
LT5ibGtfcmluZ3MuY29tbW9uLnNyaW5nKSB7DQogCQl4ZW5idXNfdW5tYXBfcmluZ192ZnJlZShi
bGtpZi0+YmUtPmRldiwgYmxraWYtPmJsa19yaW5nKTsNCiAJCWJsa2lmLT5ibGtfcmluZ3MuY29t
bW9uLnNyaW5nID0gTlVMTDsNCiAJfQ0KKwlpZiAoYmxraWYtPmJsa19iYWNrcmluZ190eXBlID09
IEJBQ0tSSU5HX1RZUEVfMiAmJg0KKwkgICAgYmxraWYtPmJsa19yaW5nc192Mi5jb21tb24uc3Jp
bmcpIHsNCisJCXhlbmJ1c191bm1hcF9yaW5nX3ZmcmVlKGJsa2lmLT5iZS0+ZGV2LCBibGtpZi0+
YmxrX3JpbmcpOw0KKwkJYmxraWYtPmJsa19yaW5nc192Mi5jb21tb24uc3JpbmcgPSBOVUxMOw0K
KwkJeGVuYnVzX3VubWFwX3JpbmdfdmZyZWUoYmxraWYtPmJlLT5kZXYsIGJsa2lmLT5ibGtfc2Vn
cmluZyk7DQorCQlibGtpZi0+YmxrX3NlZ3JpbmdzLnNyaW5nPSBOVUxMOw0KKwl9DQogfQ0KIA0K
IHZvaWQgeGVuX2Jsa2lmX2ZyZWUoc3RydWN0IHhlbl9ibGtpZiAqYmxraWYpDQpAQCAtNDc2LDYg
KzU2MSw5IEBAIHN0YXRpYyBpbnQgeGVuX2Jsa2JrX3Byb2JlKHN0cnVjdCB4ZW5idXNfZGV2aWNl
ICpkZXYsDQogCWlmIChlcnIpDQogCQlnb3RvIGZhaWw7DQogDQorCWVyciA9IHhlbmJ1c19wcmlu
dGYoWEJUX05JTCwgZGV2LT5ub2RlbmFtZSwgImJsa2JhY2stcmluZy10eXBlIiwNCisJCQkgICAg
IiV1IiwgYmxrYmFja19yaW5nX3R5cGUpOw0KKw0KIAllcnIgPSB4ZW5idXNfc3dpdGNoX3N0YXRl
KGRldiwgWGVuYnVzU3RhdGVJbml0V2FpdCk7DQogCWlmIChlcnIpDQogCQlnb3RvIGZhaWw7DQpA
QCAtNzIyLDI1ICs4MTAsNjggQEAgc3RhdGljIGludCBjb25uZWN0X3Jpbmcoc3RydWN0IGJhY2tl
bmRfaW5mbyAqYmUpDQogew0KIAlzdHJ1Y3QgeGVuYnVzX2RldmljZSAqZGV2ID0gYmUtPmRldjsN
CiAJdW5zaWduZWQgbG9uZyByaW5nX3JlZjsNCisJdW5zaWduZWQgbG9uZyBzZWdyaW5nX3JlZjsN
CiAJdW5zaWduZWQgaW50IGV2dGNobjsNCisJdW5zaWduZWQgaW50IHJpbmdfdHlwZTsNCiAJY2hh
ciBwcm90b2NvbFs2NF0gPSAiIjsNCiAJaW50IGVycjsNCiANCiAJRFBSSU5USygiJXMiLCBkZXYt
Pm90aGVyZW5kKTsNCi0JYmUtPmJsa2lmLT5vcHMgPSAmYmxrYmFja19yaW5nX29wczsNCi0JYmUt
PmJsa2lmLT5yZXEgPSBrbWFsbG9jKHNpemVvZihzdHJ1Y3QgYmxraWZfcmVxdWVzdCksDQotCQkJ
CSBHRlBfS0VSTkVMKTsNCi0JYmUtPmJsa2lmLT5zZWdfcmVxID0ga21hbGxvYyhzaXplb2Yoc3Ry
dWN0IGJsa2lmX3JlcXVlc3Rfc2VnbWVudCkqDQotCQkJCSAgICAgYmUtPmJsa2lmLT5vcHMtPm1h
eF9zZWcsICBHRlBfS0VSTkVMKTsNCi0JYmUtPmJsa2lmLT5ibGtfYmFja3JpbmdfdHlwZSA9IEJB
Q0tSSU5HX1RZUEVfMTsNCi0NCi0JZXJyID0geGVuYnVzX2dhdGhlcihYQlRfTklMLCBkZXYtPm90
aGVyZW5kLCAicmluZy1yZWYiLCAiJWx1IiwNCi0JCQkgICAgJnJpbmdfcmVmLCAiZXZlbnQtY2hh
bm5lbCIsICIldSIsICZldnRjaG4sIE5VTEwpOw0KLQlpZiAoZXJyKSB7DQorCQ0KKwllcnIgPSB4
ZW5idXNfc2NhbmYoWEJUX05JTCwgZGV2LT5vdGhlcmVuZCwgImJsa2Zyb250LXJpbmctdHlwZSIs
ICIldSIsDQorCQkJICAgJnJpbmdfdHlwZSk7DQorCWlmIChlcnIgIT0gMSkgew0KKwkJcHJfaW5m
byhEUlZfUEZYICJ1c2luZyBsZWdhY3kgYmxrIHJpbmdcbiIpOw0KKwkJcmluZ190eXBlID0gMTsN
CisJfQ0KKwkNCisJaWYgKHJpbmdfdHlwZSA9PSAxKSB7DQorCQliZS0+YmxraWYtPm9wcyA9ICZi
bGtiYWNrX3Jpbmdfb3BzOw0KKwkJYmUtPmJsa2lmLT5ibGtfYmFja3JpbmdfdHlwZSA9IEJBQ0tS
SU5HX1RZUEVfMTsNCisJCWJlLT5ibGtpZi0+cmVxID0ga21hbGxvYyhzaXplb2Yoc3RydWN0IGJs
a2lmX3JlcXVlc3QpLCBHRlBfS0VSTkVMKTsNCisJCWJlLT5ibGtpZi0+c2VnX3JlcSA9IGttYWxs
b2Moc2l6ZW9mKHN0cnVjdCBibGtpZl9yZXF1ZXN0X3NlZ21lbnQpKg0KKwkJCQkJICAgICBiZS0+
YmxraWYtPm9wcy0+bWF4X3NlZywgR0ZQX0tFUk5FTCk7DQorCQlpZiAoIWJlLT5ibGtpZi0+cmVx
IHx8ICFiZS0+YmxraWYtPnNlZ19yZXEpIHsNCisJCQlrZnJlZShiZS0+YmxraWYtPnJlcSk7DQor
CQkJa2ZyZWUoYmUtPmJsa2lmLT5zZWdfcmVxKTsNCisJCQl4ZW5idXNfZGV2X2ZhdGFsKGRldiwg
ZXJyLCAibm8gZW5vdWdoIG1lbW9yeSIpOw0KKwkJCXJldHVybiAtRU5PTUVNOw0KKwkJfQ0KKwkJ
ZXJyID0geGVuYnVzX2dhdGhlcihYQlRfTklMLCBkZXYtPm90aGVyZW5kLCAicmluZy1yZWYiLCAi
JWx1IiwNCisJCQkJICAgICZyaW5nX3JlZiwgImV2ZW50LWNoYW5uZWwiLCAiJXUiLCAmZXZ0Y2hu
LCBOVUxMKTsNCisJCWlmIChlcnIpIHsNCisJCQl4ZW5idXNfZGV2X2ZhdGFsKGRldiwgZXJyLA0K
KwkJCQkJICJyZWFkaW5nICVzL3JpbmctcmVmIGFuZCBldmVudC1jaGFubmVsIiwNCisJCQkJCSBk
ZXYtPm90aGVyZW5kKTsNCisJCQlyZXR1cm4gZXJyOw0KKwkJfQkNCisJfQ0KKwllbHNlIGlmIChy
aW5nX3R5cGUgPT0gMil7DQorCQliZS0+YmxraWYtPm9wcyA9ICZibGtiYWNrX3Jpbmdfb3BzX3Yy
Ow0KKwkJYmUtPmJsa2lmLT5ibGtfYmFja3JpbmdfdHlwZSA9IEJBQ0tSSU5HX1RZUEVfMjsNCisJ
CWJlLT5ibGtpZi0+cmVxID0ga21hbGxvYyhzaXplb2Yoc3RydWN0IGJsa2lmX3JlcXVlc3RfaGVh
ZGVyKSwgR0ZQX0tFUk5FTCk7DQorCQliZS0+YmxraWYtPnNlZ19yZXEgPSBrbWFsbG9jKHNpemVv
ZihzdHJ1Y3QgYmxraWZfcmVxdWVzdF9zZWdtZW50KSoNCisJCQkJCSAgICAgYmUtPmJsa2lmLT5v
cHMtPm1heF9zZWcsIEdGUF9LRVJORUwpOw0KKwkJaWYgKCFiZS0+YmxraWYtPnJlcSB8fCAhYmUt
PmJsa2lmLT5zZWdfcmVxKSB7DQorCQkJa2ZyZWUoYmUtPmJsa2lmLT5yZXEpOw0KKwkJCWtmcmVl
KGJlLT5ibGtpZi0+c2VnX3JlcSk7DQorCQkJeGVuYnVzX2Rldl9mYXRhbChkZXYsIGVyciwgIm5v
IGVub3VnaCBtZW1vcnkiKTsNCisJCQlyZXR1cm4gLUVOT01FTTsNCisJCX0NCisJCWVyciA9IHhl
bmJ1c19nYXRoZXIoWEJUX05JTCwgZGV2LT5vdGhlcmVuZCwgInJlcXJpbmctcmVmIiwgIiVsdSIs
DQorCQkJCSAgICAmcmluZ19yZWYsICJldmVudC1jaGFubmVsIiwgIiV1IiwgJmV2dGNobiwNCisJ
CQkJICAgICJzZWdyaW5nLXJlZiIsICIlbHUiLCAmc2VncmluZ19yZWYsIE5VTEwpOw0KKwkJaWYg
KGVycikgew0KKwkJCXhlbmJ1c19kZXZfZmF0YWwoZGV2LCBlcnIsDQorCQkJCQkgInJlYWRpbmcg
JXMvcmluZy9zZWdyaW5nLXJlZiBhbmQgZXZlbnQtY2hhbm5lbCIsDQorCQkJCQkgZGV2LT5vdGhl
cmVuZCk7DQorCQkJcmV0dXJuIGVycjsNCisJCX0NCisJfQ0KKwllbHNlIHsNCiAJCXhlbmJ1c19k
ZXZfZmF0YWwoZGV2LCBlcnIsDQotCQkJCSAicmVhZGluZyAlcy9yaW5nLXJlZiBhbmQgZXZlbnQt
Y2hhbm5lbCIsDQotCQkJCSBkZXYtPm90aGVyZW5kKTsNCi0JCXJldHVybiBlcnI7DQorCQkJCSAi
dW5zdXBwb3J0ICVzIGJsa2Zyb250IHJpbmciLCBkZXYtPm90aGVyZW5kKTsNCisJCXJldHVybiAt
RUlOVkFMOw0KIAl9DQogDQogCWJlLT5ibGtpZi0+YmxrX3Byb3RvY29sID0gQkxLSUZfUFJPVE9D
T0xfTkFUSVZFOw0KQEAgLTc1OCwxOSArODg5LDUxIEBAIHN0YXRpYyBpbnQgY29ubmVjdF9yaW5n
KHN0cnVjdCBiYWNrZW5kX2luZm8gKmJlKQ0KIAkJeGVuYnVzX2Rldl9mYXRhbChkZXYsIGVyciwg
InVua25vd24gZmUgcHJvdG9jb2wgJXMiLCBwcm90b2NvbCk7DQogCQlyZXR1cm4gLTE7DQogCX0N
Ci0JcHJfaW5mbyhEUlZfUEZYICJyaW5nLXJlZiAlbGQsIGV2ZW50LWNoYW5uZWwgJWQsIHByb3Rv
Y29sICVkICglcylcbiIsDQotCQlyaW5nX3JlZiwgZXZ0Y2huLCBiZS0+YmxraWYtPmJsa19wcm90
b2NvbCwgcHJvdG9jb2wpOw0KLQ0KIAkvKiBNYXAgdGhlIHNoYXJlZCBmcmFtZSwgaXJxIGV0Yy4g
Ki8NCi0JZXJyID0geGVuX2Jsa2lmX21hcChiZS0+YmxraWYsIHJpbmdfcmVmLCBldnRjaG4pOw0K
LQlpZiAoZXJyKSB7DQotCQl4ZW5idXNfZGV2X2ZhdGFsKGRldiwgZXJyLCAibWFwcGluZyByaW5n
LXJlZiAlbHUgcG9ydCAldSIsDQotCQkJCSByaW5nX3JlZiwgZXZ0Y2huKTsNCi0JCXJldHVybiBl
cnI7DQorCWlmIChyaW5nX3R5cGUgPT0gMikgeyANCisJCWVyciA9IHhlbl9ibGtpZl9tYXBfc2Vn
cmluZyhiZS0+YmxraWYsIHNlZ3JpbmdfcmVmKTsNCisJCWlmIChlcnIpIHsNCisJCQl4ZW5idXNf
ZGV2X2ZhdGFsKGRldiwgZXJyLCAibWFwcGluZyBzZWdtZW50IHJpbmZzIik7DQorCQkJcmV0dXJu
IGVycjsNCisJCX0NCisJCWVyciA9IHhlbl9ibGtpZl9tYXBfdjIoYmUtPmJsa2lmLCByaW5nX3Jl
ZiwgZXZ0Y2huKTsNCisJCWlmIChlcnIpIHsNCisJCQl4ZW5idXNfdW5tYXBfcmluZ192ZnJlZShi
ZS0+YmxraWYtPmJlLT5kZXYsDQorCQkJCQkJYmUtPmJsa2lmLT5ibGtfc2VncmluZyk7DQorCQkJ
YmUtPmJsa2lmLT5ibGtfc2VncmluZ3Muc3JpbmcgPSBOVUxMOw0KKwkJCXhlbmJ1c19kZXZfZmF0
YWwoZGV2LCBlcnIsICJtYXBwaW5nIHJlcXVlc3QgcmluZnMiKTsNCisJCQlyZXR1cm4gZXJyOw0K
KwkJfQ0KKwkJcHJfaW5mbyhEUlZfUEZYDQorCQkJInJpbmctcmVmICVsZCxzZWdyaW5nLXJlZiAl
bGQsZXZlbnQtY2hhbm5lbCAlZCxwcm90b2NvbCAlZCAoJXMpXG4iLA0KKwkJCXJpbmdfcmVmLCBz
ZWdyaW5nX3JlZiwgZXZ0Y2huLCBiZS0+YmxraWYtPmJsa19wcm90b2NvbCwgcHJvdG9jb2wpOw0K
Kwl9DQorCWVsc2Ugew0KKwkJZXJyID0geGVuX2Jsa2lmX21hcChiZS0+YmxraWYsIHJpbmdfcmVm
LCBldnRjaG4pOw0KKwkJaWYgKGVycikgew0KKwkJCXhlbmJ1c19kZXZfZmF0YWwoZGV2LCBlcnIs
ICJtYXBwaW5nIHJpbmctcmVmICVsdSBwb3J0ICV1IiwNCisJCQkJCSByaW5nX3JlZiwgZXZ0Y2hu
KTsNCisJCQlyZXR1cm4gZXJyOw0KKwkJfQ0KKwkJcHJfaW5mbyhEUlZfUEZYICJyaW5nLXJlZiAl
bGQsZXZlbnQtY2hhbm5lbCAlZCxwcm90b2NvbCAlZCAoJXMpXG4iLA0KKwkJCXJpbmdfcmVmLCBl
dnRjaG4sIGJlLT5ibGtpZi0+YmxrX3Byb3RvY29sLCBwcm90b2NvbCk7DQogCX0NCiANCiAJZXJy
ID0geGVuX2Jsa2lmX2luaXRfYmxrYmsoYmUtPmJsa2lmKTsNCiAJaWYgKGVycikgew0KKwkJaWYg
KHJpbmdfdHlwZSA9PSAyKSB7DQorCQkJeGVuYnVzX3VubWFwX3JpbmdfdmZyZWUoYmUtPmJsa2lm
LT5iZS0+ZGV2LA0KKwkJCQkJCWJlLT5ibGtpZi0+YmxrX3NlZ3JpbmcpOw0KKwkJCWJlLT5ibGtp
Zi0+YmxrX3NlZ3JpbmdzLnNyaW5nID0gTlVMTDsNCisJCQl4ZW5idXNfdW5tYXBfcmluZ192ZnJl
ZShiZS0+YmxraWYtPmJlLT5kZXYsDQorCQkJCQkJYmUtPmJsa2lmLT5ibGtfcmluZyk7DQorCQkJ
YmUtPmJsa2lmLT5ibGtfcmluZ3NfdjIuY29tbW9uLnNyaW5nID0gTlVMTDsNCisJCX0NCisJCWVs
c2Ugew0KKwkJCXhlbmJ1c191bm1hcF9yaW5nX3ZmcmVlKGJlLT5ibGtpZi0+YmUtPmRldiwNCisJ
CQkJCQliZS0+YmxraWYtPmJsa19yaW5nKTsNCisJCQliZS0+YmxraWYtPmJsa19yaW5ncy5jb21t
b24uc3JpbmcgPSBOVUxMOw0KKwkJfQ0KIAkJeGVuYnVzX2Rldl9mYXRhbChkZXYsIGVyciwgInhl
biBibGtpZiBpbml0IGJsa2JrIGZhaWxzXG4iKTsNCiAJCXJldHVybiBlcnI7DQogCX0NCg==

--_002_A21691DE07B84740B5F0B81466D5148A23BCF283SHSMSX102ccrcor_
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--_002_A21691DE07B84740B5F0B81466D5148A23BCF283SHSMSX102ccrcor_--


From xen-devel-bounces@lists.xen.org Thu Aug 16 10:31:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 10:31:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1xMX-0006NS-BE; Thu, 16 Aug 2012 10:31:37 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xudong.hao@intel.com>) id 1T1xMV-0006Mq-Vw
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 10:31:36 +0000
Received: from [85.158.143.99:42030] by server-3.bemta-4.messagelabs.com id
	07/18-09529-70CCC205; Thu, 16 Aug 2012 10:31:35 +0000
X-Env-Sender: xudong.hao@intel.com
X-Msg-Ref: server-15.tower-216.messagelabs.com!1345113094!28459038!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzA4NzU3\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7426 invoked from network); 16 Aug 2012 10:31:34 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-15.tower-216.messagelabs.com with SMTP;
	16 Aug 2012 10:31:34 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga101.jf.intel.com with ESMTP; 16 Aug 2012 03:31:33 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.77,778,1336374000"; d="scan'208";a="181411645"
Received: from fmsmsx106.amr.corp.intel.com ([10.19.9.37])
	by orsmga001.jf.intel.com with ESMTP; 16 Aug 2012 03:31:33 -0700
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	FMSMSX106.amr.corp.intel.com (10.19.9.37) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Thu, 16 Aug 2012 03:31:32 -0700
Received: from shsmsx102.ccr.corp.intel.com ([169.254.2.196]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.232]) with mapi id
	14.01.0355.002; Thu, 16 Aug 2012 18:31:31 +0800
From: "Hao, Xudong" <xudong.hao@intel.com>
To: Jan Beulich <JBeulich@suse.com>
Thread-Topic: [Xen-devel] [PATCH 2/3] xen/p2m: Using INVALID_MFN instead of
	mfn_valid
Thread-Index: AQHNerLpTPW4kQTcTESrmJ/O73nifJdaEzwAgAIh9MD//36UgIAAirKg
Date: Thu, 16 Aug 2012 10:31:30 +0000
Message-ID: <403610A45A2B5242BD291EDAE8B37D300FE8AAC2@SHSMSX102.ccr.corp.intel.com>
References: <1345013831-20662-1-git-send-email-xudong.hao@intel.com>
	<502B86420200007800095048@nat28.tlf.novell.com>
	<403610A45A2B5242BD291EDAE8B37D300FE8AA4F@SHSMSX102.ccr.corp.intel.com>
	<502CE3AB0200007800095686@nat28.tlf.novell.com>
In-Reply-To: <502CE3AB0200007800095686@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "tim@xen.org" <tim@xen.org>, "Zhang, Xiantao" <xiantao.zhang@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 2/3] xen/p2m: Using INVALID_MFN instead of
 mfn_valid
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: Thursday, August 16, 2012 6:12 PM
> To: Hao, Xudong
> Cc: Zhang, Xiantao; xen-devel@lists.xen.org; tim@xen.org
> Subject: RE: [Xen-devel] [PATCH 2/3] xen/p2m: Using INVALID_MFN instead of
> mfn_valid
> 
> >>> On 16.08.12 at 12:05, "Hao, Xudong" <xudong.hao@intel.com> wrote:
> >> From: Jan Beulich [mailto:JBeulich@suse.com]
> >> >>> On 15.08.12 at 08:57, Xudong Hao <xudong.hao@intel.com> wrote:
> >> > --- a/xen/arch/x86/mm/p2m-ept.c	Tue Jul 24 17:02:04 2012 +0200
> >> > +++ b/xen/arch/x86/mm/p2m-ept.c	Thu Jul 26 15:40:01 2012 +0800
> >> > @@ -428,7 +428,7 @@ ept_set_entry(struct p2m_domain *p2m, un
> >> >      }
> >> >
> >> >      /* Track the highest gfn for which we have ever had a valid mapping
> */
> >> > -    if ( mfn_valid(mfn_x(mfn)) &&
> >> > +    if ( (mfn_x(mfn) != INVALID_MFN) &&
> >> >           (gfn + (1UL << order) - 1 > p2m->max_mapped_pfn) )
> >> >          p2m->max_mapped_pfn = gfn + (1UL << order) - 1;
> >>
> >> Depending on how the above comment gets addressed (i.e.
> >> whether MMIO MFNs are to be considered here at all), this
> >> might need changing anyway, as this a huge max_mapped_pfn
> >> value likely wouldn't be very useful anymore.
> >
> > Your viewpoint is similar with us. Here max_mapped_pfn value is for memory
> > but not for MMIO. I think this is a simple changes, do you have another
> > suggestion?
> 
> The question is why this needs to be changed at all. If this is
> only about RAM, then mfn_valid() is the right thing to use. If
> this is about MMIO too, then the condition is wrong already
> (since, as we appear to agree, even now there can be MMIO
> above RAM, provided there's little enough RAM).
> 

The original code considered EPT only; now, for device assignment, it needs to consider MMIO as well. So how about removing the mfn_valid() check here?

> Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 10:38:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 10:38:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1xSj-0006p8-6E; Thu, 16 Aug 2012 10:38:01 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ben.guthro@gmail.com>) id 1T1xSh-0006p3-6g
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 10:37:59 +0000
Received: from [85.158.143.99:19195] by server-3.bemta-4.messagelabs.com id
	26/D3-09529-68DCC205; Thu, 16 Aug 2012 10:37:58 +0000
X-Env-Sender: ben.guthro@gmail.com
X-Msg-Ref: server-10.tower-216.messagelabs.com!1345113475!22092657!1
X-Originating-IP: [209.85.213.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30169 invoked from network); 16 Aug 2012 10:37:56 -0000
Received: from mail-yx0-f173.google.com (HELO mail-yx0-f173.google.com)
	(209.85.213.173)
	by server-10.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 10:37:56 -0000
Received: by yenm4 with SMTP id m4so3054328yen.32
	for <xen-devel@lists.xen.org>; Thu, 16 Aug 2012 03:37:55 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=Y39NxdGgBg2XFLwcogBFqWCENoQtnU3od65VRf3H+Vg=;
	b=no5bmj9ny2fALm0+7Ko+7xQLQyCKjss7vlcFaGPKwuDaF0kBASVvYY6UpbHil8wBiz
	J61YFdVk0++sNVcP4LgsQ8rXMDITELm95xC6G7v8uonuUM0c2KcnwiIzRUCwGS7TTEiv
	kB6VxqnHQLqgz4OpncOgUxq1rvBIt9+HliQpjzKnaa/Szu+18pFLAOpSCPiFY8mVT6NE
	zAAGh6zxyUhKXRMOrkDwjB2D9p8EU9IxqBpor4iw+BkW8gtFjkHPEZOGqm62MIzjke/k
	s2K7C/6W7HiMCNWBKh/DxJh6quTvd4LJ042Pgy5DYoCHQztM8OPNEDef71M5lDLQHx+U
	5Gvw==
MIME-Version: 1.0
Received: by 10.50.158.138 with SMTP id wu10mr1816750igb.0.1345113474943; Thu,
	16 Aug 2012 03:37:54 -0700 (PDT)
Received: by 10.64.33.15 with HTTP; Thu, 16 Aug 2012 03:37:54 -0700 (PDT)
In-Reply-To: <502CCC0002000078000955C4@nat28.tlf.novell.com>
References: <CAOvdn6WwgBD-2yf1uu3m9-16=Tb5KUrybXf2KibtnxKbP7YtwA@mail.gmail.com>
	<CAOvdn6UBW-yTOMd3YSf5M9JiHLRivyaKL-qzcKG8epszqaKmGg@mail.gmail.com>
	<20120807163320.GC15053@phenom.dumpdata.com>
	<CAOvdn6XcRGKtJxGwJO8J+ZBJr89wfTLc1woW5rhtMM540H9h1w@mail.gmail.com>
	<CAOvdn6XUoSeGoo7X=AJ1kpDxZB9HZEv+TJ4v-=FHcyR4=CHK-w@mail.gmail.com>
	<502240F302000078000937A6@nat28.tlf.novell.com>
	<CAOvdn6WN3kxC8Bb9g6MpfLULu47EGSGCtajPc-SFeJ-8VSUQDQ@mail.gmail.com>
	<CAOvdn6Xk1inm1hU55i0phU2qvSWyJLNoyDdn43biBAY2OewPtw@mail.gmail.com>
	<5023F5400200007800093F4E@nat28.tlf.novell.com>
	<CAOvdn6U-f3C4U8xVPvDyRFkFFLkbfiCXg_n+9zgA9V5u+q5jfw@mail.gmail.com>
	<5023F8B80200007800093F8C@nat28.tlf.novell.com>
	<CAOvdn6XKHfete_+=Hkvt20G7_cJ-4i8+9j9YD30OxzQ16Ru-+g@mail.gmail.com>
	<5024CB720200007800094124@nat28.tlf.novell.com>
	<CAOvdn6VVJdhWVYa-gNVt3euE4AbGi1TijUCUd1wGzv0SCWEUYA@mail.gmail.com>
	<CAOvdn6USnG9Y4MNBhrDZmob0oJ8BgFfKTvFEhcKbXDxNmzfZ-Q@mail.gmail.com>
	<502B75BA0200007800094FD4@nat28.tlf.novell.com>
	<CAOvdn6Ww5oEj6Xg=XT4dWCYWUBuOdPqLz-CqNjz4HmCScE+==g@mail.gmail.com>
	<CAOvdn6Wsjds_dmZ0dwDTYmD-o-OqjBH5-b9mitAuUADE+A4_ag@mail.gmail.com>
	<502BB8FD02000078000951E0@nat28.tlf.novell.com>
	<CAOvdn6VD6bp13F-tvYdkBsj4hkhE7PTGZrapndtdHrDKeZqccg@mail.gmail.com>
	<502CCC0002000078000955C4@nat28.tlf.novell.com>
Date: Thu, 16 Aug 2012 06:37:54 -0400
X-Google-Sender-Auth: Yzo_d1V_D1qiAfsrYzS95QOEiq4
Message-ID: <CAOvdn6VEuX7R2_wz74ZvCxMjC-=YUir9kP5sdxEVaKVspRUd_w@mail.gmail.com>
From: Ben Guthro <ben@guthro.net>
To: Jan Beulich <JBeulich@suse.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, john.baboval@citrix.com,
	Thomas Goetz <thomas.goetz@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen4.2 S3 regression?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
From xen-devel-bounces@lists.xen.org Thu Aug 16 10:38:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 10:38:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1xSj-0006p8-6E; Thu, 16 Aug 2012 10:38:01 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ben.guthro@gmail.com>) id 1T1xSh-0006p3-6g
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 10:37:59 +0000
Received: from [85.158.143.99:19195] by server-3.bemta-4.messagelabs.com id
	26/D3-09529-68DCC205; Thu, 16 Aug 2012 10:37:58 +0000
X-Env-Sender: ben.guthro@gmail.com
X-Msg-Ref: server-10.tower-216.messagelabs.com!1345113475!22092657!1
X-Originating-IP: [209.85.213.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30169 invoked from network); 16 Aug 2012 10:37:56 -0000
Received: from mail-yx0-f173.google.com (HELO mail-yx0-f173.google.com)
	(209.85.213.173)
	by server-10.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 10:37:56 -0000
Received: by yenm4 with SMTP id m4so3054328yen.32
	for <xen-devel@lists.xen.org>; Thu, 16 Aug 2012 03:37:55 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=Y39NxdGgBg2XFLwcogBFqWCENoQtnU3od65VRf3H+Vg=;
	b=no5bmj9ny2fALm0+7Ko+7xQLQyCKjss7vlcFaGPKwuDaF0kBASVvYY6UpbHil8wBiz
	J61YFdVk0++sNVcP4LgsQ8rXMDITELm95xC6G7v8uonuUM0c2KcnwiIzRUCwGS7TTEiv
	kB6VxqnHQLqgz4OpncOgUxq1rvBIt9+HliQpjzKnaa/Szu+18pFLAOpSCPiFY8mVT6NE
	zAAGh6zxyUhKXRMOrkDwjB2D9p8EU9IxqBpor4iw+BkW8gtFjkHPEZOGqm62MIzjke/k
	s2K7C/6W7HiMCNWBKh/DxJh6quTvd4LJ042Pgy5DYoCHQztM8OPNEDef71M5lDLQHx+U
	5Gvw==
MIME-Version: 1.0
Received: by 10.50.158.138 with SMTP id wu10mr1816750igb.0.1345113474943; Thu,
	16 Aug 2012 03:37:54 -0700 (PDT)
Received: by 10.64.33.15 with HTTP; Thu, 16 Aug 2012 03:37:54 -0700 (PDT)
In-Reply-To: <502CCC0002000078000955C4@nat28.tlf.novell.com>
References: <CAOvdn6WwgBD-2yf1uu3m9-16=Tb5KUrybXf2KibtnxKbP7YtwA@mail.gmail.com>
	<CAOvdn6UBW-yTOMd3YSf5M9JiHLRivyaKL-qzcKG8epszqaKmGg@mail.gmail.com>
	<20120807163320.GC15053@phenom.dumpdata.com>
	<CAOvdn6XcRGKtJxGwJO8J+ZBJr89wfTLc1woW5rhtMM540H9h1w@mail.gmail.com>
	<CAOvdn6XUoSeGoo7X=AJ1kpDxZB9HZEv+TJ4v-=FHcyR4=CHK-w@mail.gmail.com>
	<502240F302000078000937A6@nat28.tlf.novell.com>
	<CAOvdn6WN3kxC8Bb9g6MpfLULu47EGSGCtajPc-SFeJ-8VSUQDQ@mail.gmail.com>
	<CAOvdn6Xk1inm1hU55i0phU2qvSWyJLNoyDdn43biBAY2OewPtw@mail.gmail.com>
	<5023F5400200007800093F4E@nat28.tlf.novell.com>
	<CAOvdn6U-f3C4U8xVPvDyRFkFFLkbfiCXg_n+9zgA9V5u+q5jfw@mail.gmail.com>
	<5023F8B80200007800093F8C@nat28.tlf.novell.com>
	<CAOvdn6XKHfete_+=Hkvt20G7_cJ-4i8+9j9YD30OxzQ16Ru-+g@mail.gmail.com>
	<5024CB720200007800094124@nat28.tlf.novell.com>
	<CAOvdn6VVJdhWVYa-gNVt3euE4AbGi1TijUCUd1wGzv0SCWEUYA@mail.gmail.com>
	<CAOvdn6USnG9Y4MNBhrDZmob0oJ8BgFfKTvFEhcKbXDxNmzfZ-Q@mail.gmail.com>
	<502B75BA0200007800094FD4@nat28.tlf.novell.com>
	<CAOvdn6Ww5oEj6Xg=XT4dWCYWUBuOdPqLz-CqNjz4HmCScE+==g@mail.gmail.com>
	<CAOvdn6Wsjds_dmZ0dwDTYmD-o-OqjBH5-b9mitAuUADE+A4_ag@mail.gmail.com>
	<502BB8FD02000078000951E0@nat28.tlf.novell.com>
	<CAOvdn6VD6bp13F-tvYdkBsj4hkhE7PTGZrapndtdHrDKeZqccg@mail.gmail.com>
	<502CCC0002000078000955C4@nat28.tlf.novell.com>
Date: Thu, 16 Aug 2012 06:37:54 -0400
X-Google-Sender-Auth: Yzo_d1V_D1qiAfsrYzS95QOEiq4
Message-ID: <CAOvdn6VEuX7R2_wz74ZvCxMjC-=YUir9kP5sdxEVaKVspRUd_w@mail.gmail.com>
From: Ben Guthro <ben@guthro.net>
To: Jan Beulich <JBeulich@suse.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, john.baboval@citrix.com,
	Thomas Goetz <thomas.goetz@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen4.2 S3 regression?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Aug 16, 2012 at 4:31 AM, Jan Beulich <JBeulich@suse.com> wrote:
>> When I am logging to serial, the failure is the same as before -
>> The first suspend / resume works -
>> The second fails with AHCI not working
>
> And this is with and/or without the evtchn_move_pirqs() calls
> removed? Otherwise, this might allow us at least debugging
> that part of the problem.

I am now convinced there is more than one problem: one is the MSI
issue we are chasing here; the other seems a bit more insidious, in
that the system does not come back from S3 at all, as mentioned in the
Intel bug report from the other thread.

Running on serial to debug the former seems to at least mask the latter.

Removing evtchn_move_pirqs() at the tip does not seem to have the same
effect as removing them from the changeset that I bisected the problem
to.

At the tip, with these changes, I observe no change in behavior:
AHCI still has problems after the 2nd suspend/resume cycle.
At 21625:0695a5cdcb42, with evtchn_move_pirqs(), I am able to
suspend/resume a dozen times or more.

>
>> However, when I am not logging to serial - the system goes down, but
>> never comes back up. I cannot ssh in, and no graphics are displayed on
>> the screen. My only recourse is to hard power cycle the machine.
>>
>> This, of course makes collecting logs difficult.
>
> Did you try whether anything would make it out to the screen
> when you allow Xen to continue to access the video buffer even
> post-boot? Quite possibly for this it would be advantageous (or
> even required, as I don't think the video mode would get restored
> after coming back out of S3) to also only use a (simpler) text mode
> console (requiring that you have this up on the screen when you
> invoke S3, i.e. you'd have to switch away from or not make use of
> the GUI). That would be "vga=text-80x25,keep" on the Xen
> command line (while 80x50 or 80x60 would allow for more output
> to remain visible, even those modes would - I think - need code
> to be added to get restored during resume from S3).
>

I did not try this, but will investigate when I get to work today.
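
For reference, Jan's suggestion amounts to a boot-configuration change.
On a GRUB 2 system it might look like the following; this is only a
sketch, and the GRUB_CMDLINE_XEN variable and update-grub command are
Debian-style assumptions not taken from this thread:

```shell
# /etc/default/grub -- hypothetical Debian-style entry; adjust for your distro.
# "text-80x25" selects a plain text-mode console, and "keep" lets Xen
# continue to access the video buffer after boot, per Jan's suggestion.
GRUB_CMDLINE_XEN="vga=text-80x25,keep"

# Regenerate the boot configuration afterwards, e.g.:
#   update-grub
```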

Ben

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 10:40:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 10:40:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1xUw-0006uu-NC; Thu, 16 Aug 2012 10:40:18 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T1xUv-0006uo-KY
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 10:40:18 +0000
Received: from [85.158.139.83:61517] by server-11.bemta-5.messagelabs.com id
	7A/D1-29296-01ECC205; Thu, 16 Aug 2012 10:40:16 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-182.messagelabs.com!1345113615!24573660!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkyODE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20520 invoked from network); 16 Aug 2012 10:40:16 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-6.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 10:40:16 -0000
X-IronPort-AV: E=Sophos;i="4.77,778,1336348800"; d="scan'208";a="14036568"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 10:40:15 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 16 Aug 2012 11:40:15 +0100
Message-ID: <1345113614.27489.79.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 16 Aug 2012 11:40:14 +0100
In-Reply-To: <osstest-13606-mainreport@xen.org>
References: <osstest-13606-mainreport@xen.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [xen-unstable test] 13606: tolerable trouble:
 broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-16 at 08:51 +0100, xen.org wrote:
> flight 13606 xen-unstable real [real]
> http://www.chiark.greenend.org.uk/~xensrcts/logs/13606/
> 
> Failures :-/ but no regressions.
> 
> Tests which are failing intermittently (not blocking):
>  test-amd64-amd64-xl-win7-amd64  3 host-install(3)         broken pass in 13605

These host install issues have been happening sporadically over the last
few days and a large number seem to relate to apt mirror failures.

It seems that the apt-cache mirror used by the test system has been
OOMing. Our IT guys have doubled the VM's RAM allocation and made some
other configuration tweaks, so hopefully this is now resolved.

>  test-amd64-amd64-xl-sedf-pin 10 guest-saverestore           fail pass in 13605
>  test-amd64-i386-xl-win7-amd64  7 windows-install            fail pass in 13605
>  test-amd64-amd64-pair         3 host-install/src_host(3)  broken pass in 13605
>  test-amd64-amd64-pair         4 host-install/dst_host(4)  broken pass in 13605
> 
> Regressions which are regarded as allowable (not blocking):
>  test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13605
>  test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13605
>  test-amd64-amd64-win          3 host-install(3)              broken like 13604
>  test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13605
>  test-amd64-amd64-xl-sedf-pin 12 guest-saverestore.2 fail in 13605 blocked in 13606
> 
> Tests which did not succeed, but are not blocking:
>  test-amd64-i386-win          16 leak-check/check             fail   never pass
>  test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
>  test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
>  test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
>  test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
>  test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
>  test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
>  test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
>  test-i386-i386-win           16 leak-check/check             fail   never pass
>  test-i386-i386-xl-win        13 guest-stop                   fail   never pass
>  test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
>  test-amd64-amd64-xl-win7-amd64 13 guest-stop          fail in 13605 never pass
>  test-amd64-amd64-win         16 leak-check/check      fail in 13605 never pass
>  test-amd64-i386-xl-win7-amd64 13 guest-stop           fail in 13605 never pass
> 
> version targeted for testing:
>  xen                  6d56e31fe1e1
> baseline version:
>  xen                  6d56e31fe1e1
> 
> jobs:
>  build-amd64                                                  pass    
>  build-i386                                                   pass    
>  build-amd64-oldkern                                          pass    
>  build-i386-oldkern                                           pass    
>  build-amd64-pvops                                            pass    
>  build-i386-pvops                                             pass    
>  test-amd64-amd64-xl                                          pass    
>  test-amd64-i386-xl                                           pass    
>  test-i386-i386-xl                                            pass    
>  test-amd64-i386-rhel6hvm-amd                                 pass    
>  test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
>  test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
>  test-amd64-amd64-xl-win7-amd64                               broken  
>  test-amd64-i386-xl-win7-amd64                                fail    
>  test-amd64-i386-xl-credit2                                   pass    
>  test-amd64-amd64-xl-pcipt-intel                              fail    
>  test-amd64-i386-rhel6hvm-intel                               pass    
>  test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
>  test-amd64-i386-xl-multivcpu                                 pass    
>  test-amd64-amd64-pair                                        broken  
>  test-amd64-i386-pair                                         pass    
>  test-i386-i386-pair                                          pass    
>  test-amd64-amd64-xl-sedf-pin                                 fail    
>  test-amd64-amd64-pv                                          pass    
>  test-amd64-i386-pv                                           pass    
>  test-i386-i386-pv                                            pass    
>  test-amd64-amd64-xl-sedf                                     pass    
>  test-amd64-i386-win-vcpus1                                   fail    
>  test-amd64-i386-xl-win-vcpus1                                fail    
>  test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
>  test-amd64-amd64-win                                         broken  
>  test-amd64-i386-win                                          fail    
>  test-i386-i386-win                                           fail    
>  test-amd64-amd64-xl-win                                      fail    
>  test-i386-i386-xl-win                                        fail    
>  test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
>  test-i386-i386-xl-qemuu-winxpsp3                             fail    
>  test-amd64-i386-xend-winxpsp3                                fail    
>  test-amd64-amd64-xl-winxpsp3                                 fail    
>  test-i386-i386-xl-winxpsp3                                   fail    
> 
> 
> ------------------------------------------------------------
> sg-report-flight on woking.cam.xci-test.com
> logs: /home/xc_osstest/logs
> images: /home/xc_osstest/images
> 
> Logs, config files, etc. are available at
>     http://www.chiark.greenend.org.uk/~xensrcts/logs
> 
> Test harness code can be found at
>     http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary
> 
> 
> Published tested tree is already up to date.
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 10:41:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 10:41:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1xVT-0006yA-4O; Thu, 16 Aug 2012 10:40:51 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T1xVS-0006xt-1X
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 10:40:50 +0000
Received: from [85.158.143.35:31533] by server-3.bemta-4.messagelabs.com id
	0C/39-09529-13ECC205; Thu, 16 Aug 2012 10:40:49 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1345113642!15982665!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21813 invoked from network); 16 Aug 2012 10:40:43 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-14.tower-21.messagelabs.com with SMTP;
	16 Aug 2012 10:40:43 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 16 Aug 2012 11:40:41 +0100
Message-Id: <502CEA7102000078000956DD@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Thu, 16 Aug 2012 11:41:21 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Xudong Hao" <xudong.hao@intel.com>,"tim@xen.org" <tim@xen.org>
References: <1345013831-20662-1-git-send-email-xudong.hao@intel.com>
	<502B86420200007800095048@nat28.tlf.novell.com>
	<403610A45A2B5242BD291EDAE8B37D300FE8AA4F@SHSMSX102.ccr.corp.intel.com>
	<502CE3AB0200007800095686@nat28.tlf.novell.com>
	<403610A45A2B5242BD291EDAE8B37D300FE8AAC2@SHSMSX102.ccr.corp.intel.com>
In-Reply-To: <403610A45A2B5242BD291EDAE8B37D300FE8AAC2@SHSMSX102.ccr.corp.intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Xiantao Zhang <xiantao.zhang@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 2/3] xen/p2m: Using INVALID_MFN instead of
 mfn_valid
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 16.08.12 at 12:31, "Hao, Xudong" <xudong.hao@intel.com> wrote:
>>  -----Original Message-----
>> From: Jan Beulich [mailto:JBeulich@suse.com]
>> Sent: Thursday, August 16, 2012 6:12 PM
>> To: Hao, Xudong
>> Cc: Zhang, Xiantao; xen-devel@lists.xen.org; tim@xen.org 
>> Subject: RE: [Xen-devel] [PATCH 2/3] xen/p2m: Using INVALID_MFN instead of
>> mfn_valid
>> 
>> >>> On 16.08.12 at 12:05, "Hao, Xudong" <xudong.hao@intel.com> wrote:
>> >> From: Jan Beulich [mailto:JBeulich@suse.com]
>> >> >>> On 15.08.12 at 08:57, Xudong Hao <xudong.hao@intel.com> wrote:
>> >> > --- a/xen/arch/x86/mm/p2m-ept.c	Tue Jul 24 17:02:04 2012 +0200
>> >> > +++ b/xen/arch/x86/mm/p2m-ept.c	Thu Jul 26 15:40:01 2012 +0800
>> >> > @@ -428,7 +428,7 @@ ept_set_entry(struct p2m_domain *p2m, un
>> >> >      }
>> >> >
>> >> >      /* Track the highest gfn for which we have ever had a valid mapping
>> */
>> >> > -    if ( mfn_valid(mfn_x(mfn)) &&
>> >> > +    if ( (mfn_x(mfn) != INVALID_MFN) &&
>> >> >           (gfn + (1UL << order) - 1 > p2m->max_mapped_pfn) )
>> >> >          p2m->max_mapped_pfn = gfn + (1UL << order) - 1;
>> >>
>> >> Depending on how the above comment gets addressed (i.e.
>> >> whether MMIO MFNs are to be considered here at all), this
>> >> might need changing anyway, as this a huge max_mapped_pfn
>> >> value likely wouldn't be very useful anymore.
>> >
>> > Your viewpoint is similar to ours. Here the max_mapped_pfn value is
>> > for memory, not for MMIO. I think this is a simple change; do you
>> > have another suggestion?
>> 
>> The question is why this needs to be changed at all. If this is
>> only about RAM, then mfn_valid() is the right thing to use. If
>> this is about MMIO too, then the condition is wrong already
>> (since, as we appear to agree, even now there can be MMIO
>> above RAM, provided there's little enough RAM).
>> 
> 
> The original code considered EPT only; now, for device assignment, it 
> needs to consider MMIO. So how about removing the mfn_valid() here?

I don't think it's there without reason, but I'm not sure. Tim?

Jan
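
To make the distinction under discussion concrete, here is a minimal,
self-contained sketch; this is not actual Xen code: the INVALID_MFN and
max_page values, the mfn_valid() stand-in, and the track() helper are
simplified assumptions that only model how the two guards treat a RAM
MFN, an MMIO MFN above RAM, and INVALID_MFN when updating
max_mapped_pfn:

```c
#include <stdint.h>

/* Simplified stand-ins; the real Xen definitions differ. */
#define INVALID_MFN (~0UL)
static const unsigned long max_page = 0x100000UL; /* pretend end of RAM */

/* In Xen, mfn_valid() consults the frame table; for this sketch it
 * simply means "this MFN is RAM", which is the property at issue. */
static int mfn_valid(unsigned long mfn) { return mfn < max_page; }

/* Apply the guard from ept_set_entry() and return the updated
 * max_mapped_pfn; use_invalid_mfn_check selects the patched condition
 * (mfn != INVALID_MFN) instead of mfn_valid(mfn). */
static unsigned long track(unsigned long max_mapped_pfn,
                           unsigned long mfn, unsigned long gfn,
                           unsigned int order, int use_invalid_mfn_check)
{
    int ok = use_invalid_mfn_check ? (mfn != INVALID_MFN) : mfn_valid(mfn);

    /* Track the highest gfn for which we ever had a valid mapping. */
    if (ok && gfn + (1UL << order) - 1 > max_mapped_pfn)
        max_mapped_pfn = gfn + (1UL << order) - 1;
    return max_mapped_pfn;
}
```

With mfn_valid(), an MMIO MFN above the end of RAM never advances
max_mapped_pfn; with the != INVALID_MFN check it does, which is what
raises the concern above about a huge, not very useful max_mapped_pfn.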


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 16.08.12 at 12:31, "Hao, Xudong" <xudong.hao@intel.com> wrote:
>>  -----Original Message-----
>> From: Jan Beulich [mailto:JBeulich@suse.com]
>> Sent: Thursday, August 16, 2012 6:12 PM
>> To: Hao, Xudong
>> Cc: Zhang, Xiantao; xen-devel@lists.xen.org; tim@xen.org 
>> Subject: RE: [Xen-devel] [PATCH 2/3] xen/p2m: Using INVALID_MFN instead of
>> mfn_valid
>> 
>> >>> On 16.08.12 at 12:05, "Hao, Xudong" <xudong.hao@intel.com> wrote:
>> >> From: Jan Beulich [mailto:JBeulich@suse.com]
>> >> >>> On 15.08.12 at 08:57, Xudong Hao <xudong.hao@intel.com> wrote:
>> >> > --- a/xen/arch/x86/mm/p2m-ept.c	Tue Jul 24 17:02:04 2012 +0200
>> >> > +++ b/xen/arch/x86/mm/p2m-ept.c	Thu Jul 26 15:40:01 2012 +0800
>> >> > @@ -428,7 +428,7 @@ ept_set_entry(struct p2m_domain *p2m, un
>> >> >      }
>> >> >
>> >> >      /* Track the highest gfn for which we have ever had a valid mapping
>> */
>> >> > -    if ( mfn_valid(mfn_x(mfn)) &&
>> >> > +    if ( (mfn_x(mfn) != INVALID_MFN) &&
>> >> >           (gfn + (1UL << order) - 1 > p2m->max_mapped_pfn) )
>> >> >          p2m->max_mapped_pfn = gfn + (1UL << order) - 1;
>> >>
>> >> Depending on how the above comment gets addressed (i.e.
>> >> whether MMIO MFNs are to be considered here at all), this
>> >> might need changing anyway, as a huge max_mapped_pfn
>> >> value likely wouldn't be very useful anymore.
>> >
>> > Your viewpoint is similar to ours. Here the max_mapped_pfn value is for memory,
>> > not for MMIO. I think this is a simple change; do you have another
>> > suggestion?
>> 
>> The question is why this needs to be changed at all. If this is
>> only about RAM, then mfn_valid() is the right thing to use. If
>> this is about MMIO too, then the condition is wrong already
>> (since, as we appear to agree, even now there can be MMIO
>> above RAM, provided there's little enough RAM).
>> 
> 
> The original code considered EPT only; now, for device assignment, it 
> needs to consider MMIO as well. So how about removing the mfn_valid() here?

I don't think it's there without reason, but I'm not sure. Tim?

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 10:48:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 10:48:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1xco-0007GE-2j; Thu, 16 Aug 2012 10:48:26 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xudong.hao@intel.com>) id 1T1xcm-0007G9-Il
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 10:48:24 +0000
Received: from [85.158.143.99:2645] by server-1.bemta-4.messagelabs.com id
	42/56-07754-7FFCC205; Thu, 16 Aug 2012 10:48:23 +0000
X-Env-Sender: xudong.hao@intel.com
X-Msg-Ref: server-10.tower-216.messagelabs.com!1345114102!22095013!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzA4NzU3\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14793 invoked from network); 16 Aug 2012 10:48:22 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-10.tower-216.messagelabs.com with SMTP;
	16 Aug 2012 10:48:22 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga101.jf.intel.com with ESMTP; 16 Aug 2012 03:48:21 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.77,778,1336374000"; d="scan'208";a="181415617"
Received: from fmsmsx105.amr.corp.intel.com ([10.19.9.36])
	by orsmga001.jf.intel.com with ESMTP; 16 Aug 2012 03:48:13 -0700
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	FMSMSX105.amr.corp.intel.com (10.19.9.36) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Thu, 16 Aug 2012 03:48:12 -0700
Received: from shsmsx102.ccr.corp.intel.com ([169.254.2.196]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.232]) with mapi id
	14.01.0355.002; Thu, 16 Aug 2012 18:48:11 +0800
From: "Hao, Xudong" <xudong.hao@intel.com>
To: Jan Beulich <JBeulich@suse.com>
Thread-Topic: [Xen-devel] [PATCH 1/3] xen/tools: Add 64 bits big bar support
Thread-Index: AQHNerKKEOT7HWjBmke6+J0K3zDOkJdaEb8AgAItqqA=
Date: Thu, 16 Aug 2012 10:48:10 +0000
Message-ID: <403610A45A2B5242BD291EDAE8B37D300FE8AB13@SHSMSX102.ccr.corp.intel.com>
References: <1345013682-20618-1-git-send-email-xudong.hao@intel.com>
	<502B85020200007800095036@nat28.tlf.novell.com>
In-Reply-To: <502B85020200007800095036@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "ian.jackson@eu.citrix.com" <ian.jackson@eu.citrix.com>, "Zhang,
	Xiantao" <xiantao.zhang@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 1/3] xen/tools: Add 64 bits big bar support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: Wednesday, August 15, 2012 5:16 PM
> To: Hao, Xudong
> Cc: ian.jackson@eu.citrix.com; Zhang, Xiantao; xen-devel@lists.xen.org
> Subject: Re: [Xen-devel] [PATCH 1/3] xen/tools: Add 64 bits big bar support
> 
> >>> On 15.08.12 at 08:54, Xudong Hao <xudong.hao@intel.com> wrote:
> > --- a/tools/firmware/hvmloader/config.h	Tue Jul 24 17:02:04 2012 +0200
> > +++ b/tools/firmware/hvmloader/config.h	Thu Jul 26 15:40:01 2012 +0800
> > @@ -53,6 +53,10 @@ extern struct bios_config ovmf_config;
> >  /* MMIO hole: Hardcoded defaults, which can be dynamically expanded. */
> >  #define PCI_MEM_START       0xf0000000
> >  #define PCI_MEM_END         0xfc000000
> > +#define PCI_HIGH_MEM_START  0xa000000000ULL
> > +#define PCI_HIGH_MEM_END    0xf000000000ULL
> 
> With such hard coded values, this is hardly meant to be anything
> more than an RFC, is it? These values should not exist in the first
> place, and the variables below should be determined from VM
> characteristics (best would presumably be to allocate them top
> down from the end of physical address space, making sure you
> don't run into RAM).
> 
> > +#define PCI_MIN_MMIO_ADDR   0x80000000
> > +
> >  extern unsigned long pci_mem_start, pci_mem_end;
> >
> >
> > diff -r 663eb766cdde tools/firmware/hvmloader/pci.c
> > --- a/tools/firmware/hvmloader/pci.c	Tue Jul 24 17:02:04 2012 +0200
> > +++ b/tools/firmware/hvmloader/pci.c	Thu Jul 26 15:40:01 2012 +0800
> > @@ -31,24 +31,33 @@
> >  unsigned long pci_mem_start = PCI_MEM_START;
> >  unsigned long pci_mem_end = PCI_MEM_END;
> >
> > +uint64_t pci_high_mem_start = PCI_HIGH_MEM_START;
> > +uint64_t pci_high_mem_end = PCI_HIGH_MEM_END;
> > +
> >  enum virtual_vga virtual_vga = VGA_none;
> >  unsigned long igd_opregion_pgbase = 0;
> >
> >  void pci_setup(void)
> >  {
> > -    uint32_t base, devfn, bar_reg, bar_data, bar_sz, cmd, mmio_total = 0;
> > +    uint8_t is_64bar, using_64bar, bar64_relocate = 0;
> > +    uint32_t devfn, bar_reg, cmd, bar_data, bar_data_upper;
> > +    uint64_t base, bar_sz, bar_sz_upper, mmio_total = 0;
> >      uint32_t vga_devfn = 256;
> >      uint16_t class, vendor_id, device_id;
> >      unsigned int bar, pin, link, isa_irq;
> > +    int64_t mmio_left;
> >
> >      /* Resources assignable to PCI devices via BARs. */
> >      struct resource {
> > -        uint32_t base, max;
> > -    } *resource, mem_resource, io_resource;
> > +        uint64_t base, max;
> > +    } *resource, mem_resource, high_mem_resource, io_resource;
> >
> >      /* Create a list of device BARs in descending order of size. */
> >      struct bars {
> > -        uint32_t devfn, bar_reg, bar_sz;
> > +        uint32_t is_64bar;
> > +        uint32_t devfn;
> > +        uint32_t bar_reg;
> > +        uint64_t bar_sz;
> >      } *bars = (struct bars *)scratch_start;
> >      unsigned int i, nr_bars = 0;
> >
> > @@ -133,23 +142,35 @@ void pci_setup(void)
> >          /* Map the I/O memory and port resources. */
> >          for ( bar = 0; bar < 7; bar++ )
> >          {
> > +            bar_sz_upper = 0;
> >              bar_reg = PCI_BASE_ADDRESS_0 + 4*bar;
> >              if ( bar == 6 )
> >                  bar_reg = PCI_ROM_ADDRESS;
> >
> >              bar_data = pci_readl(devfn, bar_reg);
> > +            is_64bar = !!((bar_data & (PCI_BASE_ADDRESS_SPACE |
> > +                         PCI_BASE_ADDRESS_MEM_TYPE_MASK)) ==
> > +                         (PCI_BASE_ADDRESS_SPACE_MEMORY |
> > +                         PCI_BASE_ADDRESS_MEM_TYPE_64));
> >              pci_writel(devfn, bar_reg, ~0);
> >              bar_sz = pci_readl(devfn, bar_reg);
> >              pci_writel(devfn, bar_reg, bar_data);
> > +
> > +            if (is_64bar) {
> > +                bar_data_upper = pci_readl(devfn, bar_reg + 4);
> > +                pci_writel(devfn, bar_reg + 4, ~0);
> > +                bar_sz_upper = pci_readl(devfn, bar_reg + 4);
> > +                pci_writel(devfn, bar_reg + 4, bar_data_upper);
> > +                bar_sz = (bar_sz_upper << 32) | bar_sz;
> > +            }
> > +            bar_sz &= (((bar_data & PCI_BASE_ADDRESS_SPACE) ==
> > +                        PCI_BASE_ADDRESS_SPACE_MEMORY) ?
> > +                       0xfffffffffffffff0 :
> 
> This should be a proper constant (or the masking could be
> done earlier, in which case you could continue to use the
> existing PCI_BASE_ADDRESS_MEM_MASK).
> 

So PCI_BASE_ADDRESS_MEM_MASK could simply be redefined with a 'ULL' suffix, then.

> > +                       (PCI_BASE_ADDRESS_IO_MASK & 0xffff));
> > +            bar_sz &= ~(bar_sz - 1);
> >              if ( bar_sz == 0 )
> >                  continue;
> >
> > -            bar_sz &= (((bar_data & PCI_BASE_ADDRESS_SPACE) ==
> > -                        PCI_BASE_ADDRESS_SPACE_MEMORY) ?
> > -                       PCI_BASE_ADDRESS_MEM_MASK :
> > -                       (PCI_BASE_ADDRESS_IO_MASK & 0xffff));
> > -            bar_sz &= ~(bar_sz - 1);
> > -
> >              for ( i = 0; i < nr_bars; i++ )
> >                  if ( bars[i].bar_sz < bar_sz )
> >                      break;
> > @@ -157,6 +178,7 @@ void pci_setup(void)
> >              if ( i != nr_bars )
> >                  memmove(&bars[i+1], &bars[i], (nr_bars-i) *
> sizeof(*bars));
> >
> > +            bars[i].is_64bar = is_64bar;
> >              bars[i].devfn   = devfn;
> >              bars[i].bar_reg = bar_reg;
> >              bars[i].bar_sz  = bar_sz;
> > @@ -167,11 +189,8 @@ void pci_setup(void)
> >
> >              nr_bars++;
> >
> > -            /* Skip the upper-half of the address for a 64-bit BAR. */
> > -            if ( (bar_data & (PCI_BASE_ADDRESS_SPACE |
> > -
> PCI_BASE_ADDRESS_MEM_TYPE_MASK)) ==
> > -                 (PCI_BASE_ADDRESS_SPACE_MEMORY |
> > -                  PCI_BASE_ADDRESS_MEM_TYPE_64) )
> > +            /*The upper half is already calculated, skip it! */
> > +            if (is_64bar)
> >                  bar++;
> >          }
> >
> > @@ -193,10 +212,14 @@ void pci_setup(void)
> >          pci_writew(devfn, PCI_COMMAND, cmd);
> >      }
> >
> > -    while ( (mmio_total > (pci_mem_end - pci_mem_start)) &&
> > -            ((pci_mem_start << 1) != 0) )
> > +    while ( mmio_total > (pci_mem_end - pci_mem_start) &&
> pci_mem_start )
> 
> The old code here could remain in place if ...
> 
> >          pci_mem_start <<= 1;
> >
> > +    if (!pci_mem_start) {
> 
> .. the condition here would get changed to the one used in the
> first part of the while above.
> 
> > +        bar64_relocate = 1;
> > +        pci_mem_start = PCI_MIN_MMIO_ADDR;
> 
> Which would then also make this assignment (and the
> constant) unnecessary.
> 

Cool, I'll leave the old code in place and just add:

if ( pci_mem_start == PCI_MIN_MMIO_ADDR )
    bar64_relocate = 1;

> > +    }
> > +
> >      /* Relocate RAM that overlaps PCI space (in 64k-page chunks). */
> >      while ( (pci_mem_start >> PAGE_SHIFT) <
> hvm_info->low_mem_pgend )
> >      {
> > @@ -218,11 +241,15 @@ void pci_setup(void)
> >          hvm_info->high_mem_pgend += nr_pages;
> >      }
> >
> > +    high_mem_resource.base = pci_high_mem_start;
> > +    high_mem_resource.max = pci_high_mem_end;
> >      mem_resource.base = pci_mem_start;
> >      mem_resource.max = pci_mem_end;
> >      io_resource.base = 0xc000;
> >      io_resource.max = 0x10000;
> >
> > +    mmio_left = pci_mem_end - pci_mem_end;
> > +
> >      /* Assign iomem and ioport resources in descending order of size. */
> >      for ( i = 0; i < nr_bars; i++ )
> >      {
> > @@ -230,13 +257,21 @@ void pci_setup(void)
> >          bar_reg = bars[i].bar_reg;
> >          bar_sz  = bars[i].bar_sz;
> >
> > +        using_64bar = bars[i].is_64bar && bar64_relocate && (mmio_left
> <
> > bar_sz);
> >          bar_data = pci_readl(devfn, bar_reg);
> >
> >          if ( (bar_data & PCI_BASE_ADDRESS_SPACE) ==
> >               PCI_BASE_ADDRESS_SPACE_MEMORY )
> >          {
> > -            resource = &mem_resource;
> > -            bar_data &= ~PCI_BASE_ADDRESS_MEM_MASK;
> > +            if (using_64bar) {
> > +                resource = &high_mem_resource;
> > +                bar_data &= ~PCI_BASE_ADDRESS_MEM_MASK;
> > +            }
> > +            else {
> > +                resource = &mem_resource;
> > +                bar_data &= ~PCI_BASE_ADDRESS_MEM_MASK;
> > +            }
> > +            mmio_left -= bar_sz;
> >          }
> >          else
> >          {
> > @@ -244,13 +279,14 @@ void pci_setup(void)
> >              bar_data &= ~PCI_BASE_ADDRESS_IO_MASK;
> >          }
> >
> > -        base = (resource->base + bar_sz - 1) & ~(bar_sz - 1);
> > -        bar_data |= base;
> > +        base = (resource->base  + bar_sz - 1) & ~(uint64_t)(bar_sz - 1);
> > +        bar_data |= (uint32_t)base;
> > +        bar_data_upper = (uint32_t)(base >> 32);
> >          base += bar_sz;
> >
> >          if ( (base < resource->base) || (base > resource->max) )
> >          {
> > -            printf("pci dev %02x:%x bar %02x size %08x: no space for "
> > +            printf("pci dev %02x:%x bar %02x size %llx: no space for "
> >                     "resource!\n", devfn>>3, devfn&7, bar_reg,
> bar_sz);
> >              continue;
> >          }
> > @@ -258,7 +294,9 @@ void pci_setup(void)
> >          resource->base = base;
> >
> >          pci_writel(devfn, bar_reg, bar_data);
> > -        printf("pci dev %02x:%x bar %02x size %08x: %08x\n",
> > +        if (using_64bar)
> > +            pci_writel(devfn, bar_reg + 4, bar_data_upper);
> > +        printf("pci dev %02x:%x bar %02x size %llx: %08x\n",
> >                 devfn>>3, devfn&7, bar_reg, bar_sz, bar_data);
> >
> >          /* Now enable the memory or I/O mapping. */
> 
> Besides that, I'd encourage you to have an intermediate state
> between not using BARs above 4Gb and forcing all 64-bit ones
> beyond 4Gb for maximum compatibility - try fitting as many as
> you can into the low 2Gb. Perhaps this would warrant an option
> of some sort.
> 

I'll use the BAR size to judge whether a 64-bit BAR should go beyond 4G: a BAR larger than 512M should be treated as a high memory resource.

> Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 10:48:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 10:48:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1xco-0007GE-2j; Thu, 16 Aug 2012 10:48:26 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xudong.hao@intel.com>) id 1T1xcm-0007G9-Il
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 10:48:24 +0000
Received: from [85.158.143.99:2645] by server-1.bemta-4.messagelabs.com id
	42/56-07754-7FFCC205; Thu, 16 Aug 2012 10:48:23 +0000
X-Env-Sender: xudong.hao@intel.com
X-Msg-Ref: server-10.tower-216.messagelabs.com!1345114102!22095013!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzA4NzU3\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14793 invoked from network); 16 Aug 2012 10:48:22 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-10.tower-216.messagelabs.com with SMTP;
	16 Aug 2012 10:48:22 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga101.jf.intel.com with ESMTP; 16 Aug 2012 03:48:21 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.77,778,1336374000"; d="scan'208";a="181415617"
Received: from fmsmsx105.amr.corp.intel.com ([10.19.9.36])
	by orsmga001.jf.intel.com with ESMTP; 16 Aug 2012 03:48:13 -0700
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	FMSMSX105.amr.corp.intel.com (10.19.9.36) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Thu, 16 Aug 2012 03:48:12 -0700
Received: from shsmsx102.ccr.corp.intel.com ([169.254.2.196]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.232]) with mapi id
	14.01.0355.002; Thu, 16 Aug 2012 18:48:11 +0800
From: "Hao, Xudong" <xudong.hao@intel.com>
To: Jan Beulich <JBeulich@suse.com>
Thread-Topic: [Xen-devel] [PATCH 1/3] xen/tools: Add 64 bits big bar support
Thread-Index: AQHNerKKEOT7HWjBmke6+J0K3zDOkJdaEb8AgAItqqA=
Date: Thu, 16 Aug 2012 10:48:10 +0000
Message-ID: <403610A45A2B5242BD291EDAE8B37D300FE8AB13@SHSMSX102.ccr.corp.intel.com>
References: <1345013682-20618-1-git-send-email-xudong.hao@intel.com>
	<502B85020200007800095036@nat28.tlf.novell.com>
In-Reply-To: <502B85020200007800095036@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "ian.jackson@eu.citrix.com" <ian.jackson@eu.citrix.com>, "Zhang,
	Xiantao" <xiantao.zhang@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 1/3] xen/tools: Add 64 bits big bar support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: Wednesday, August 15, 2012 5:16 PM
> To: Hao, Xudong
> Cc: ian.jackson@eu.citrix.com; Zhang, Xiantao; xen-devel@lists.xen.org
> Subject: Re: [Xen-devel] [PATCH 1/3] xen/tools: Add 64 bits big bar support
> 
> >>> On 15.08.12 at 08:54, Xudong Hao <xudong.hao@intel.com> wrote:
> > --- a/tools/firmware/hvmloader/config.h	Tue Jul 24 17:02:04 2012 +0200
> > +++ b/tools/firmware/hvmloader/config.h	Thu Jul 26 15:40:01 2012 +0800
> > @@ -53,6 +53,10 @@ extern struct bios_config ovmf_config;
> >  /* MMIO hole: Hardcoded defaults, which can be dynamically expanded. */
> >  #define PCI_MEM_START       0xf0000000
> >  #define PCI_MEM_END         0xfc000000
> > +#define PCI_HIGH_MEM_START  0xa000000000ULL
> > +#define PCI_HIGH_MEM_END    0xf000000000ULL
> 
> With such hard coded values, this is hardly meant to be anything
> more than an RFC, is it? These values should not exist in the first
> place, and the variables below should be determined from VM
> characteristics (best would presumably be to allocate them top
> down from the end of physical address space, making sure you
> don't run into RAM).
> 
> > +#define PCI_MIN_MMIO_ADDR   0x80000000
> > +
> >  extern unsigned long pci_mem_start, pci_mem_end;
> >
> >
> > diff -r 663eb766cdde tools/firmware/hvmloader/pci.c
> > --- a/tools/firmware/hvmloader/pci.c	Tue Jul 24 17:02:04 2012 +0200
> > +++ b/tools/firmware/hvmloader/pci.c	Thu Jul 26 15:40:01 2012 +0800
> > @@ -31,24 +31,33 @@
> >  unsigned long pci_mem_start = PCI_MEM_START;
> >  unsigned long pci_mem_end = PCI_MEM_END;
> >
> > +uint64_t pci_high_mem_start = PCI_HIGH_MEM_START;
> > +uint64_t pci_high_mem_end = PCI_HIGH_MEM_END;
> > +
> >  enum virtual_vga virtual_vga = VGA_none;
> >  unsigned long igd_opregion_pgbase = 0;
> >
> >  void pci_setup(void)
> >  {
> > -    uint32_t base, devfn, bar_reg, bar_data, bar_sz, cmd, mmio_total = 0;
> > +    uint8_t is_64bar, using_64bar, bar64_relocate = 0;
> > +    uint32_t devfn, bar_reg, cmd, bar_data, bar_data_upper;
> > +    uint64_t base, bar_sz, bar_sz_upper, mmio_total = 0;
> >      uint32_t vga_devfn = 256;
> >      uint16_t class, vendor_id, device_id;
> >      unsigned int bar, pin, link, isa_irq;
> > +    int64_t mmio_left;
> >
> >      /* Resources assignable to PCI devices via BARs. */
> >      struct resource {
> > -        uint32_t base, max;
> > -    } *resource, mem_resource, io_resource;
> > +        uint64_t base, max;
> > +    } *resource, mem_resource, high_mem_resource, io_resource;
> >
> >      /* Create a list of device BARs in descending order of size. */
> >      struct bars {
> > -        uint32_t devfn, bar_reg, bar_sz;
> > +        uint32_t is_64bar;
> > +        uint32_t devfn;
> > +        uint32_t bar_reg;
> > +        uint64_t bar_sz;
> >      } *bars = (struct bars *)scratch_start;
> >      unsigned int i, nr_bars = 0;
> >
> > @@ -133,23 +142,35 @@ void pci_setup(void)
> >          /* Map the I/O memory and port resources. */
> >          for ( bar = 0; bar < 7; bar++ )
> >          {
> > +            bar_sz_upper = 0;
> >              bar_reg = PCI_BASE_ADDRESS_0 + 4*bar;
> >              if ( bar == 6 )
> >                  bar_reg = PCI_ROM_ADDRESS;
> >
> >              bar_data = pci_readl(devfn, bar_reg);
> > +            is_64bar = !!((bar_data & (PCI_BASE_ADDRESS_SPACE |
> > +                         PCI_BASE_ADDRESS_MEM_TYPE_MASK)) ==
> > +                         (PCI_BASE_ADDRESS_SPACE_MEMORY |
> > +                         PCI_BASE_ADDRESS_MEM_TYPE_64));
> >              pci_writel(devfn, bar_reg, ~0);
> >              bar_sz = pci_readl(devfn, bar_reg);
> >              pci_writel(devfn, bar_reg, bar_data);
> > +
> > +            if (is_64bar) {
> > +                bar_data_upper = pci_readl(devfn, bar_reg + 4);
> > +                pci_writel(devfn, bar_reg + 4, ~0);
> > +                bar_sz_upper = pci_readl(devfn, bar_reg + 4);
> > +                pci_writel(devfn, bar_reg + 4, bar_data_upper);
> > +                bar_sz = (bar_sz_upper << 32) | bar_sz;
> > +            }
> > +            bar_sz &= (((bar_data & PCI_BASE_ADDRESS_SPACE) ==
> > +                        PCI_BASE_ADDRESS_SPACE_MEMORY) ?
> > +                       0xfffffffffffffff0 :
> 
> This should be a proper constant (or the masking could be
> done earlier, in which case you could continue to use the
> existing PCI_BASE_ADDRESS_MEM_MASK).
> 

So the PCI_BASE_ADDRESS_MEM_MASK can be defined as 'ULL'.

> > +                       (PCI_BASE_ADDRESS_IO_MASK & 0xffff));
> > +            bar_sz &= ~(bar_sz - 1);
> >              if ( bar_sz == 0 )
> >                  continue;
> >
> > -            bar_sz &= (((bar_data & PCI_BASE_ADDRESS_SPACE) ==
> > -                        PCI_BASE_ADDRESS_SPACE_MEMORY) ?
> > -                       PCI_BASE_ADDRESS_MEM_MASK :
> > -                       (PCI_BASE_ADDRESS_IO_MASK & 0xffff));
> > -            bar_sz &= ~(bar_sz - 1);
> > -
> >              for ( i = 0; i < nr_bars; i++ )
> >                  if ( bars[i].bar_sz < bar_sz )
> >                      break;
> > @@ -157,6 +178,7 @@ void pci_setup(void)
> >              if ( i != nr_bars )
> >                  memmove(&bars[i+1], &bars[i], (nr_bars-i) *
> sizeof(*bars));
> >
> > +            bars[i].is_64bar = is_64bar;
> >              bars[i].devfn   = devfn;
> >              bars[i].bar_reg = bar_reg;
> >              bars[i].bar_sz  = bar_sz;
> > @@ -167,11 +189,8 @@ void pci_setup(void)
> >
> >              nr_bars++;
> >
> > -            /* Skip the upper-half of the address for a 64-bit BAR. */
> > -            if ( (bar_data & (PCI_BASE_ADDRESS_SPACE |
> > -
> PCI_BASE_ADDRESS_MEM_TYPE_MASK)) ==
> > -                 (PCI_BASE_ADDRESS_SPACE_MEMORY |
> > -                  PCI_BASE_ADDRESS_MEM_TYPE_64) )
> > +            /*The upper half is already calculated, skip it! */
> > +            if (is_64bar)
> >                  bar++;
> >          }
> >
> > @@ -193,10 +212,14 @@ void pci_setup(void)
> >          pci_writew(devfn, PCI_COMMAND, cmd);
> >      }
> >
> > -    while ( (mmio_total > (pci_mem_end - pci_mem_start)) &&
> > -            ((pci_mem_start << 1) != 0) )
> > +    while ( mmio_total > (pci_mem_end - pci_mem_start) &&
> pci_mem_start )
> 
> The old code here could remain in place if ...
> 
> >          pci_mem_start <<= 1;
> >
> > +    if (!pci_mem_start) {
> 
> .. the condition here would get changed to the one used in the
> first part of the while above.
> 
> > +        bar64_relocate = 1;
> > +        pci_mem_start = PCI_MIN_MMIO_ADDR;
> 
> Which would then also make this assignment (and the
> constant) unnecessary.
> 

Cool, I'll leave the old code, and just add

if (pci_mem_start = PCI_MIN_MMIO_ADDR)
    bar64_relocate = 1;

> > +    }
> > +
> >      /* Relocate RAM that overlaps PCI space (in 64k-page chunks). */
> >      while ( (pci_mem_start >> PAGE_SHIFT) <
> hvm_info->low_mem_pgend )
> >      {
> > @@ -218,11 +241,15 @@ void pci_setup(void)
> >          hvm_info->high_mem_pgend += nr_pages;
> >      }
> >
> > +    high_mem_resource.base = pci_high_mem_start;
> > +    high_mem_resource.max = pci_high_mem_end;
> >      mem_resource.base = pci_mem_start;
> >      mem_resource.max = pci_mem_end;
> >      io_resource.base = 0xc000;
> >      io_resource.max = 0x10000;
> >
> > +    mmio_left = pci_mem_end - pci_mem_end;
> > +
> >      /* Assign iomem and ioport resources in descending order of size. */
> >      for ( i = 0; i < nr_bars; i++ )
> >      {
> > @@ -230,13 +257,21 @@ void pci_setup(void)
> >          bar_reg = bars[i].bar_reg;
> >          bar_sz  = bars[i].bar_sz;
> >
> > +        using_64bar = bars[i].is_64bar && bar64_relocate && (mmio_left
> <
> > bar_sz);
> >          bar_data = pci_readl(devfn, bar_reg);
> >
> >          if ( (bar_data & PCI_BASE_ADDRESS_SPACE) ==
> >               PCI_BASE_ADDRESS_SPACE_MEMORY )
> >          {
> > -            resource = &mem_resource;
> > -            bar_data &= ~PCI_BASE_ADDRESS_MEM_MASK;
> > +            if (using_64bar) {
> > +                resource = &high_mem_resource;
> > +                bar_data &= ~PCI_BASE_ADDRESS_MEM_MASK;
> > +            }
> > +            else {
> > +                resource = &mem_resource;
> > +                bar_data &= ~PCI_BASE_ADDRESS_MEM_MASK;
> > +            }
> > +            mmio_left -= bar_sz;
> >          }
> >          else
> >          {
> > @@ -244,13 +279,14 @@ void pci_setup(void)
> >              bar_data &= ~PCI_BASE_ADDRESS_IO_MASK;
> >          }
> >
> > -        base = (resource->base + bar_sz - 1) & ~(bar_sz - 1);
> > -        bar_data |= base;
> > +        base = (resource->base  + bar_sz - 1) & ~(uint64_t)(bar_sz - 1);
> > +        bar_data |= (uint32_t)base;
> > +        bar_data_upper = (uint32_t)(base >> 32);
> >          base += bar_sz;
> >
> >          if ( (base < resource->base) || (base > resource->max) )
> >          {
> > -            printf("pci dev %02x:%x bar %02x size %08x: no space for "
> > +            printf("pci dev %02x:%x bar %02x size %llx: no space for "
> >                     "resource!\n", devfn>>3, devfn&7, bar_reg,
> bar_sz);
> >              continue;
> >          }
> > @@ -258,7 +294,9 @@ void pci_setup(void)
> >          resource->base = base;
> >
> >          pci_writel(devfn, bar_reg, bar_data);
> > -        printf("pci dev %02x:%x bar %02x size %08x: %08x\n",
> > +        if (using_64bar)
> > +            pci_writel(devfn, bar_reg + 4, bar_data_upper);
> > +        printf("pci dev %02x:%x bar %02x size %llx: %08x\n",
> >                 devfn>>3, devfn&7, bar_reg, bar_sz, bar_data);
> >
> >          /* Now enable the memory or I/O mapping. */
> 
> Besides that, I'd encourage you to have an intermediate state
> between not using BARs above 4Gb and forcing all 64-bit ones
> beyond 4Gb for maximum compatibility - try fitting as many as
> you can into the low 2Gb. Perhaps this would warrant an option
> of some sort.
> 

I'll use the BAR size to judge whether a 64-bit BAR should go beyond 4G: BARs larger than 512M should be treated as high memory resources.
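The size-threshold idea can be sketched as follows; the 512MiB cutoff, the helper name, and its signature are illustrative only, not the actual hvmloader code:

```c
#include <stdint.h>

/* Hypothetical threshold: 64-bit BARs at or above this size are relocated
 * above 4GiB; smaller ones stay in the low MMIO hole for compatibility. */
#define BAR64_RELOC_THRESHOLD (512u << 20)   /* 512 MiB */

/* Decide whether a BAR should be allocated from the high (>4GiB) region. */
static int place_bar_high(int is_64bar, uint64_t bar_sz)
{
    return is_64bar && bar_sz >= BAR64_RELOC_THRESHOLD;
}
```

This keeps small 64-bit BARs low (maximizing compatibility, as Jan suggests) while moving only the large ones that would exhaust the low hole.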

> Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 11:00:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 11:00:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1xnz-0007Tp-GJ; Thu, 16 Aug 2012 10:59:59 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T1xny-0007Tk-OR
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 10:59:58 +0000
Received: from [85.158.143.35:32967] by server-1.bemta-4.messagelabs.com id
	A8/5A-07754-EA2DC205; Thu, 16 Aug 2012 10:59:58 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1345114797!15112487!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkyODE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12170 invoked from network); 16 Aug 2012 10:59:57 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 10:59:57 -0000
X-IronPort-AV: E=Sophos;i="4.77,778,1336348800"; d="scan'208";a="14037153"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 10:59:19 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 16 Aug 2012 11:59:19 +0100
Message-ID: <1345114757.27489.82.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Thu, 16 Aug 2012 11:59:17 +0100
In-Reply-To: <502CDB7E0200007800095622@nat28.tlf.novell.com>
References: <502934CB.7060409@xen.org>
	<1344941807.5926.25.camel@zakaz.uk.xensource.com>
	<502A3A0B.6000401@xen.org> <20120815171953.GB9984@US-SEA-R8XVZTX>
	<CAOqnZH54ZhSOmZ1zBwF1QPVdBc_g_9859121pcqdnbUFO6pUvw@mail.gmail.com>
	<1345107025.27489.27.camel@zakaz.uk.xensource.com>
	<502CDB7E0200007800095622@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Lars Kurth <lars.kurth.xen@gmail.com>, Matt Wilson <msw@amazon.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.2 Limits (needs review)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-16 at 10:37 +0100, Jan Beulich wrote:
> >>> On 16.08.12 at 10:50, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > On Thu, 2012-08-16 at 09:43 +0100, Lars Kurth wrote:
> >> do you know which limits are correct?
> > 
> > I think Jan confirmed the 4.1 numbers on
> > http://wiki.xen.org/wiki/Xen_Release_Features and he'd know for sure
> > about this sort of thing. e.g. if any had increased for 4.2.
> 
> I had gone over this already, and adjusted the things I knew for
> sure.

"This" being the 4.2 limits wiki page:
http://wiki.xen.org/wiki/Xen_4.2_Limits which I accidentally trimmed
when CCing you?

>  The upper limit of memory for HVM guests is something I'm
> not sure about though (especially because I don't know whether
> the tools impose any restrictions, as is the case for PV guests).

I'm afraid I don't know either.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 11:02:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 11:02:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1xq0-0007au-0e; Thu, 16 Aug 2012 11:02:04 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1T1xpy-0007ap-SI
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 11:02:03 +0000
Received: from [85.158.138.51:47707] by server-10.bemta-3.messagelabs.com id
	84/52-20518-A23DC205; Thu, 16 Aug 2012 11:02:02 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-3.tower-174.messagelabs.com!1345114920!20507313!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7981 invoked from network); 16 Aug 2012 11:02:01 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-3.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 16 Aug 2012 11:02:01 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1T1xpt-0005ig-Q2; Thu, 16 Aug 2012 11:01:57 +0000
Date: Thu, 16 Aug 2012 12:01:57 +0100
From: Tim Deegan <tim@xen.org>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20120816110157.GC20601@ocelot.phlegethon.org>
References: <1345013831-20662-1-git-send-email-xudong.hao@intel.com>
	<502B86420200007800095048@nat28.tlf.novell.com>
	<403610A45A2B5242BD291EDAE8B37D300FE8AA4F@SHSMSX102.ccr.corp.intel.com>
	<502CE3AB0200007800095686@nat28.tlf.novell.com>
	<403610A45A2B5242BD291EDAE8B37D300FE8AAC2@SHSMSX102.ccr.corp.intel.com>
	<502CEA7102000078000956DD@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <502CEA7102000078000956DD@nat28.tlf.novell.com>
User-Agent: Mutt/1.4.2.1i
Cc: Xudong Hao <xudong.hao@intel.com>, Xiantao Zhang <xiantao.zhang@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 2/3] xen/p2m: Using INVALID_MFN instead of
	mfn_valid
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 11:41 +0100 on 16 Aug (1345117281), Jan Beulich wrote:
> >>> On 16.08.12 at 12:31, "Hao, Xudong" <xudong.hao@intel.com> wrote:
> >> >> >>> On 15.08.12 at 08:57, Xudong Hao <xudong.hao@intel.com> wrote:
> >> >> > --- a/xen/arch/x86/mm/p2m-ept.c	Tue Jul 24 17:02:04 2012 +0200
> >> >> > +++ b/xen/arch/x86/mm/p2m-ept.c	Thu Jul 26 15:40:01 2012 +0800
> >> >> > @@ -428,7 +428,7 @@ ept_set_entry(struct p2m_domain *p2m, un
> >> >> >      }
> >> >> >
> >> >> >      /* Track the highest gfn for which we have ever had a valid mapping
> >> */
> >> >> > -    if ( mfn_valid(mfn_x(mfn)) &&
> >> >> > +    if ( (mfn_x(mfn) != INVALID_MFN) &&
> >> >> >           (gfn + (1UL << order) - 1 > p2m->max_mapped_pfn) )
> >> >> >          p2m->max_mapped_pfn = gfn + (1UL << order) - 1;
> >> >>
> >> >> Depending on how the above comment gets addressed (i.e.
> >> >> whether MMIO MFNs are to be considered here at all), this
> >> >> might need changing anyway, as this a huge max_mapped_pfn
> >> >> value likely wouldn't be very useful anymore.
> >> >
> >> > Your viewpoint is similar to ours. Here the max_mapped_pfn value is for memory
> >> > but not for MMIO. I think this is a simple change; do you have another
> >> > suggestion?
> >> 
> >> The question is why this needs to be changed at all. If this is
> >> only about RAM, then mfn_valid() is the right thing to use. If
> >> this is about MMIO too, then the condition is wrong already
> >> (since, as we appear to agree, even now there can be MMIO
> >> above RAM, provided there's little enough RAM).
> >> 
> > 
> > The original code considered EPT only; now, for device assignment, it 
> > needs to consider MMIO. So how about removing the mfn_valid() here?
> 
> I don't think it's there without reason, but I'm not sure. Tim?

max_mapped_pfn should be the highest entry that's ever had a mapping in
the p2m.  Its intent was to provide a fast path exit from p2m lookups in
the (at the time) common case where _emulated_ MMIO addresses were
higher than all the actual p2m mappings, and the cost of a failed lookup
(on 32-bit) was a page fault in the linear map.  Also, at the time, the
p2m wasn't typed and we didn't support direct MMIO, so mfn_valid() was
equivalent to 'entry is present'.

These days, I'm not sure how useful max_mapped_pfn is, since (a) for any
VM with >3GB RAM the emulated MMIO lookups are not avoided, and (b) on
64-bit builds there's no page fault for a failed lookup.  Also it seems to
have been abused in a few places to do for() loops that touch every PFN
instead of just walking the tries.  So I might get rid of it after 4.2
is out.
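The fast-path exit described above can be sketched like this; it is a simplified stand-in for the real p2m structures and lookup, with hypothetical names:

```c
#include <stdint.h>

#define INVALID_MFN (~0UL)

/* Simplified stand-in for Xen's p2m_domain: only the high-water mark. */
struct p2m_domain { unsigned long max_mapped_pfn; };

/* Fast-path exit: any gfn above the highest pfn ever mapped cannot have
 * a valid mapping, so the (potentially costly) table walk is skipped. */
static unsigned long p2m_lookup_sketch(struct p2m_domain *p2m,
                                       unsigned long gfn)
{
    if (gfn > p2m->max_mapped_pfn)
        return INVALID_MFN;
    /* ... the real code would walk the p2m tables here ... */
    return gfn;  /* placeholder: pretend identity mapping below the mark */
}
```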

In the meantime, the patch at the top of this thread is definitely an
improvement.  However, I think this is a better fix:

diff -r c887c30a0a35 xen/arch/x86/mm/p2m-ept.c
--- a/xen/arch/x86/mm/p2m-ept.c	Thu Aug 16 10:16:19 2012 +0200
+++ b/xen/arch/x86/mm/p2m-ept.c	Thu Aug 16 11:57:44 2012 +0100
@@ -428,7 +428,7 @@ ept_set_entry(struct p2m_domain *p2m, un
     }
 
     /* Track the highest gfn for which we have ever had a valid mapping */
-    if ( mfn_valid(mfn_x(mfn)) &&
+    if ( p2mt != p2m_invalid &&
          (gfn + (1UL << order) - 1 > p2m->max_mapped_pfn) )
         p2m->max_mapped_pfn = gfn + (1UL << order) - 1;
 
diff -r c887c30a0a35 xen/arch/x86/mm/p2m-pt.c
--- a/xen/arch/x86/mm/p2m-pt.c	Thu Aug 16 10:16:19 2012 +0200
+++ b/xen/arch/x86/mm/p2m-pt.c	Thu Aug 16 11:57:44 2012 +0100
@@ -454,7 +454,7 @@ p2m_set_entry(struct p2m_domain *p2m, un
     }
 
     /* Track the highest gfn for which we have ever had a valid mapping */
-    if ( mfn_valid(mfn) 
+    if ( p2mt != p2m_invalid
          && (gfn + (1UL << page_order) - 1 > p2m->max_mapped_pfn) )
         p2m->max_mapped_pfn = gfn + (1UL << page_order) - 1;
 
and I'll commit it this afternoon or tomorrow.

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 11:04:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 11:04:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1xrx-0007la-HF; Thu, 16 Aug 2012 11:04:05 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T1xrw-0007lE-CU
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 11:04:04 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1345115002!3191825!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12636 invoked from network); 16 Aug 2012 11:03:22 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-6.tower-27.messagelabs.com with SMTP;
	16 Aug 2012 11:03:22 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 16 Aug 2012 12:03:20 +0100
Message-Id: <502CEFC10200007800095727@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Thu, 16 Aug 2012 12:04:01 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Xudong Hao" <xudong.hao@intel.com>
References: <1345013682-20618-1-git-send-email-xudong.hao@intel.com>
	<502B85020200007800095036@nat28.tlf.novell.com>
	<403610A45A2B5242BD291EDAE8B37D300FE8AB13@SHSMSX102.ccr.corp.intel.com>
In-Reply-To: <403610A45A2B5242BD291EDAE8B37D300FE8AB13@SHSMSX102.ccr.corp.intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "ian.jackson@eu.citrix.com" <ian.jackson@eu.citrix.com>,
	Xiantao Zhang <xiantao.zhang@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 1/3] xen/tools: Add 64 bits big bar support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 16.08.12 at 12:48, "Hao, Xudong" <xudong.hao@intel.com> wrote:
>> From: Jan Beulich [mailto:JBeulich@suse.com]
>> >>> On 15.08.12 at 08:54, Xudong Hao <xudong.hao@intel.com> wrote:
>> > --- a/tools/firmware/hvmloader/config.h	Tue Jul 24 17:02:04 2012 +0200
>> > +++ b/tools/firmware/hvmloader/config.h	Thu Jul 26 15:40:01 2012 +0800
>> > @@ -53,6 +53,10 @@ extern struct bios_config ovmf_config;
>> >  /* MMIO hole: Hardcoded defaults, which can be dynamically expanded. */
>> >  #define PCI_MEM_START       0xf0000000
>> >  #define PCI_MEM_END         0xfc000000
>> > +#define PCI_HIGH_MEM_START  0xa000000000ULL
>> > +#define PCI_HIGH_MEM_END    0xf000000000ULL
>> 
>> With such hard coded values, this is hardly meant to be anything
>> more than an RFC, is it? These values should not exist in the first
>> place, and the variables below should be determined from VM
>> characteristics (best would presumably be to allocate them top
>> down from the end of physical address space, making sure you
>> don't run into RAM).

No comment on this part?

>> > @@ -133,23 +142,35 @@ void pci_setup(void)
>> >          /* Map the I/O memory and port resources. */
>> >          for ( bar = 0; bar < 7; bar++ )
>> >          {
>> > +            bar_sz_upper = 0;
>> >              bar_reg = PCI_BASE_ADDRESS_0 + 4*bar;
>> >              if ( bar == 6 )
>> >                  bar_reg = PCI_ROM_ADDRESS;
>> >
>> >              bar_data = pci_readl(devfn, bar_reg);
>> > +            is_64bar = !!((bar_data & (PCI_BASE_ADDRESS_SPACE |
>> > +                         PCI_BASE_ADDRESS_MEM_TYPE_MASK)) ==
>> > +                         (PCI_BASE_ADDRESS_SPACE_MEMORY |
>> > +                         PCI_BASE_ADDRESS_MEM_TYPE_64));
>> >              pci_writel(devfn, bar_reg, ~0);
>> >              bar_sz = pci_readl(devfn, bar_reg);
>> >              pci_writel(devfn, bar_reg, bar_data);
>> > +
>> > +            if (is_64bar) {
>> > +                bar_data_upper = pci_readl(devfn, bar_reg + 4);
>> > +                pci_writel(devfn, bar_reg + 4, ~0);
>> > +                bar_sz_upper = pci_readl(devfn, bar_reg + 4);
>> > +                pci_writel(devfn, bar_reg + 4, bar_data_upper);
>> > +                bar_sz = (bar_sz_upper << 32) | bar_sz;
>> > +            }
>> > +            bar_sz &= (((bar_data & PCI_BASE_ADDRESS_SPACE) ==
>> > +                        PCI_BASE_ADDRESS_SPACE_MEMORY) ?
>> > +                       0xfffffffffffffff0 :
>> 
>> This should be a proper constant (or the masking could be
>> done earlier, in which case you could continue to use the
>> existing PCI_BASE_ADDRESS_MEM_MASK).
>> 
> 
> So the PCI_BASE_ADDRESS_MEM_MASK can be defined as 'ULL'.

I'd recommend not touching existing variables. As said before,
by re-ordering the code you can still use the constant as-is.

>> > @@ -193,10 +212,14 @@ void pci_setup(void)
>> >          pci_writew(devfn, PCI_COMMAND, cmd);
>> >      }
>> >
>> > -    while ( (mmio_total > (pci_mem_end - pci_mem_start)) &&
>> > -            ((pci_mem_start << 1) != 0) )
>> > +    while ( mmio_total > (pci_mem_end - pci_mem_start) &&
>> pci_mem_start )
>> 
>> The old code here could remain in place if ...
>> 
>> >          pci_mem_start <<= 1;
>> >
>> > +    if (!pci_mem_start) {
>> 
>> .. the condition here would get changed to the one used in the
>> first part of the while above.
>> 
>> > +        bar64_relocate = 1;
>> > +        pci_mem_start = PCI_MIN_MMIO_ADDR;
>> 
>> Which would then also make this assignment (and the
>> constant) unnecessary.
>> 
> 
> Cool, I'll leave the old code, and just add
> 
> if (pci_mem_start = PCI_MIN_MMIO_ADDR)
>     bar64_relocate = 1;

No, that's not correct. It's the other half of the while()'s condition
that you need to re-check here.
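The flow Jan is describing can be sketched as a standalone model of the hvmloader loop (hypothetical helper name and values):

```c
#include <stdint.h>

/* Grow the low MMIO hole downward while it is too small; if it still
 * cannot hold all BARs, report that 64-bit BARs must be relocated. */
static int compute_relocate(uint64_t mmio_total,
                            uint32_t *pci_mem_start, uint32_t pci_mem_end)
{
    /* The existing loop, unchanged: lower the hole's start until the
     * total fits or the start would wrap to zero. */
    while ( (mmio_total > pci_mem_end - *pci_mem_start) &&
            ((uint32_t)(*pci_mem_start << 1) != 0) )
        *pci_mem_start = (uint32_t)(*pci_mem_start << 1);

    /* Jan's point: the fallback trigger is the size check from the first
     * half of the while() condition, not pci_mem_start itself. */
    return mmio_total > pci_mem_end - *pci_mem_start;
}
```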

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

>> > +                pci_writel(devfn, bar_reg + 4, bar_data_upper);
>> > +                bar_sz = (bar_sz_upper << 32) | bar_sz;
>> > +            }
>> > +            bar_sz &= (((bar_data & PCI_BASE_ADDRESS_SPACE) ==
>> > +                        PCI_BASE_ADDRESS_SPACE_MEMORY) ?
>> > +                       0xfffffffffffffff0 :
>> 
>> This should be a proper constant (or the masking could be
>> done earlier, in which case you could continue to use the
>> existing PCI_BASE_ADDRESS_MEM_MASK).
>> 
> 
> So the PCI_BASE_ADDRESS_MEM_MASK can be defined as 'ULL'.

I'd recommend not touching existing variables. As said before,
by re-ordering the code you can still use the constant as-is.

>> > @@ -193,10 +212,14 @@ void pci_setup(void)
>> >          pci_writew(devfn, PCI_COMMAND, cmd);
>> >      }
>> >
>> > -    while ( (mmio_total > (pci_mem_end - pci_mem_start)) &&
>> > -            ((pci_mem_start << 1) != 0) )
>> > +    while ( mmio_total > (pci_mem_end - pci_mem_start) &&
>> pci_mem_start )
>> 
>> The old code here could remain in place if ...
>> 
>> >          pci_mem_start <<= 1;
>> >
>> > +    if (!pci_mem_start) {
>> 
>> .. the condition here would get changed to the one used in the
>> first part of the while above.
>> 
>> > +        bar64_relocate = 1;
>> > +        pci_mem_start = PCI_MIN_MMIO_ADDR;
>> 
>> Which would then also make this assignment (and the
>> constant) unnecessary.
>> 
> 
> Cool, I'll leave the old code, and just add
> 
> if (pci_mem_start == PCI_MIN_MMIO_ADDR)
>     bar64_relocate = 1;

No, that's not correct. It's the other half of the while()'s condition
that you need to re-check here.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 11:04:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 11:04:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1xse-0007qI-0M; Thu, 16 Aug 2012 11:04:48 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T1xsb-0007pS-LY
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 11:04:45 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1345115049!9160324!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8295 invoked from network); 16 Aug 2012 11:04:09 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-3.tower-27.messagelabs.com with SMTP;
	16 Aug 2012 11:04:09 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 16 Aug 2012 12:04:08 +0100
Message-Id: <502CEFF0020000780009572A@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Thu, 16 Aug 2012 12:04:48 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>
References: <502934CB.7060409@xen.org>
	<1344941807.5926.25.camel@zakaz.uk.xensource.com>
	<502A3A0B.6000401@xen.org> <20120815171953.GB9984@US-SEA-R8XVZTX>
	<CAOqnZH54ZhSOmZ1zBwF1QPVdBc_g_9859121pcqdnbUFO6pUvw@mail.gmail.com>
	<1345107025.27489.27.camel@zakaz.uk.xensource.com>
	<502CDB7E0200007800095622@nat28.tlf.novell.com>
	<1345114757.27489.82.camel@zakaz.uk.xensource.com>
In-Reply-To: <1345114757.27489.82.camel@zakaz.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Lars Kurth <lars.kurth.xen@gmail.com>, Matt Wilson <msw@amazon.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.2 Limits (needs review)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 16.08.12 at 12:59, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Thu, 2012-08-16 at 10:37 +0100, Jan Beulich wrote:
>> >>> On 16.08.12 at 10:50, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>> > On Thu, 2012-08-16 at 09:43 +0100, Lars Kurth wrote:
>> >> do you know which limits are correct?
>> > 
>> > I think Jan confirmed the 4.1 numbers on
>> > http://wiki.xen.org/wiki/Xen_Release_Features and he'd know for sure
>> > about this sort of thing. e.g. if any had increased for 4.2.
>> 
>> I had gone over this already, and adjusted the things I knew for
>> sure.
> 
> "This" being the 4.2 limits wiki page:
> http://wiki.xen.org/wiki/Xen_4.2_Limits which I accidentally trimmed
> when CCing you?

Yes.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 11:07:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 11:07:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1xv7-000849-Iy; Thu, 16 Aug 2012 11:07:21 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T1xv6-00083p-7i
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 11:07:20 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1345115230!9161147!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26636 invoked from network); 16 Aug 2012 11:07:11 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-3.tower-27.messagelabs.com with SMTP;
	16 Aug 2012 11:07:11 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 16 Aug 2012 12:07:10 +0100
Message-Id: <502CF0A8020000780009574E@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Thu, 16 Aug 2012 12:07:51 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ben Guthro" <ben@guthro.net>
References: <CAOvdn6WwgBD-2yf1uu3m9-16=Tb5KUrybXf2KibtnxKbP7YtwA@mail.gmail.com>
	<CAOvdn6UBW-yTOMd3YSf5M9JiHLRivyaKL-qzcKG8epszqaKmGg@mail.gmail.com>
	<20120807163320.GC15053@phenom.dumpdata.com>
	<CAOvdn6XcRGKtJxGwJO8J+ZBJr89wfTLc1woW5rhtMM540H9h1w@mail.gmail.com>
	<CAOvdn6XUoSeGoo7X=AJ1kpDxZB9HZEv+TJ4v-=FHcyR4=CHK-w@mail.gmail.com>
	<502240F302000078000937A6@nat28.tlf.novell.com>
	<CAOvdn6WN3kxC8Bb9g6MpfLULu47EGSGCtajPc-SFeJ-8VSUQDQ@mail.gmail.com>
	<CAOvdn6Xk1inm1hU55i0phU2qvSWyJLNoyDdn43biBAY2OewPtw@mail.gmail.com>
	<5023F5400200007800093F4E@nat28.tlf.novell.com>
	<CAOvdn6U-f3C4U8xVPvDyRFkFFLkbfiCXg_n+9zgA9V5u+q5jfw@mail.gmail.com>
	<5023F8B80200007800093F8C@nat28.tlf.novell.com>
	<CAOvdn6XKHfete_+=Hkvt20G7_cJ-4i8+9j9YD30OxzQ16Ru-+g@mail.gmail.com>
	<5024CB720200007800094124@nat28.tlf.novell.com>
	<CAOvdn6VVJdhWVYa-gNVt3euE4AbGi1TijUCUd1wGzv0SCWEUYA@mail.gmail.com>
	<CAOvdn6USnG9Y4MNBhrDZmob0oJ8BgFfKTvFEhcKbXDxNmzfZ-Q@mail.gmail.com>
	<502B75BA0200007800094FD4@nat28.tlf.novell.com>
	<CAOvdn6Ww5oEj6Xg=XT4dWCYWUBuOdPqLz-CqNjz4HmCScE+==g@mail.gmail.com>
	<CAOvdn6Wsjds_dmZ0dwDTYmD-o-OqjBH5-b9mitAuUADE+A4_ag@mail.gmail.com>
	<502BB8FD02000078000951E0@nat28.tlf.novell.com>
	<CAOvdn6VD6bp13F-tvYdkBsj4hkhE7PTGZrapndtdHrDKeZqccg@mail.gmail.com>
	<502CCC0002000078000955C4@nat28.tlf.novell.com>
	<CAOvdn6VEuX7R2_wz74ZvCxMjC-=YUir9kP5sdxEVaKVspRUd_w@mail.gmail.com>
In-Reply-To: <CAOvdn6VEuX7R2_wz74ZvCxMjC-=YUir9kP5sdxEVaKVspRUd_w@mail.gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, john.baboval@citrix.com,
	Thomas Goetz <thomas.goetz@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen4.2 S3 regression?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 16.08.12 at 12:37, Ben Guthro <ben@guthro.net> wrote:
> On Thu, Aug 16, 2012 at 4:31 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>> When I am logging to serial, the failure is the same as before -
>>> The first suspend / resume works -
>>> The second fails with AHCI not working
>>
>> And this is with and/or without the evtchn_move_pirqs() calls
>> removed? Otherwise, this might allow us at least debugging
>> that part of the problem.
> 
> I am now convinced there is more than one problem:
> One is the MSI issue we are chasing here... the other seems to be a
> bit more insidious, where the system does not come back from S3 at all
> - as mentioned in the Intel bug report from the other thread.
> 
> Running on serial to debug the former seems to at least mask the latter.
> 
> Removing evtchn_move_pirqs() at the tip does not seem to have the same
> effect as removing them from the changeset that I bisected the problem
> to.

Odd.

> At the tip, with these changes - I observe no change in behavior -
> AHCI still has problems after the 2nd suspend/resume cycle.
> At 21625:0695a5cdcb42, with evtchn_move_pirqs() - I am able to suspend
> / resume a dozen times, or more.

As there ought to be at least some affinity break messages during
the suspend part, and I don't recall having seen any, could you -
for starters - provide a full serial log of the suspend/resume process,
with "loglvl=all guest_loglvl=all" in place? I'll then try to
produce a debugging patch for you to try.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 11:08:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 11:08:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1xwW-0008CU-3t; Thu, 16 Aug 2012 11:08:48 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T1xwU-0008Bu-Et
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 11:08:46 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1345115278!2213343!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkyODE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22029 invoked from network); 16 Aug 2012 11:07:58 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 11:07:58 -0000
X-IronPort-AV: E=Sophos;i="4.77,778,1336348800"; d="scan'208";a="14037330"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 11:07:58 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 16 Aug 2012 12:07:58 +0100
Date: Thu, 16 Aug 2012 12:07:41 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: David Vrabel <david.vrabel@citrix.com>
In-Reply-To: <502BF563.8010902@citrix.com>
Message-ID: <alpine.DEB.2.02.1208161200200.2278@kaball.uk.xensource.com>
References: <1344947062-4796-1-git-send-email-attilio.rao@citrix.com>
	<1344947062-4796-3-git-send-email-attilio.rao@citrix.com>
	<502A5964.2080509@citrix.com> <502A5CD0.8000201@citrix.com>
	<502B85D1.8000606@citrix.com>
	<alpine.DEB.2.02.1208151418380.2278@kaball.uk.xensource.com>
	<502BF563.8010902@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Attilio Rao <attilio.rao@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH v2 2/2] Xen: Document the semantic of the
 pagetable_reserve PVOPS
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 15 Aug 2012, David Vrabel wrote:
> On 15/08/12 14:55, Stefano Stabellini wrote:
> > On Wed, 15 Aug 2012, David Vrabel wrote:
> >> On 14/08/12 15:12, Attilio Rao wrote:
> >>> On 14/08/12 14:57, David Vrabel wrote:
> >>>> On 14/08/12 13:24, Attilio Rao wrote:
> >> After looking at it some more, I think this pv-ops is unnecessary. How
> >> about the following patch to just remove it completely?
> >>
> >> I've only smoke-tested 32-bit and 64-bit dom0 but I think the reasoning
> >> is sound.
> > 
> > Do you have more than 4G assigned to dom0 on those boxes?
> 
> I've tested with 6G now, both 64-bit and 32-bit with HIGHPTE.
> 
> > It certainly fixed a serious crash at the time it was introduced, see
> > http://marc.info/?l=linux-kernel&m=129901609503574 and
> > http://marc.info/?l=linux-kernel&m=130133909408229. Unless something big
> > changed in kernel_physical_mapping_init, I think we still need it.
> > Depending on the e820 of your test box, the kernel could crash (or not),
> > possibly in different places.
> >
> >>>> Having said that, I couldn't immediately see where pages in (end, 
> >>>> pgt_buf_top] was getting set RO.  Can you point me to where it's 
> >>>> done?
> >>>>
> >>>
> >>> As mentioned in the comment, please look at xen_set_pte_init().
> >>
> >> xen_set_pte_init() only ensures it doesn't set the PTE as writable if it
> >> is already present and read-only.
> > 
> > look at mask_rw_pte and read the threads linked above, unfortunately it
> > is not that simple.
> 
> Yes, I was remembering what 32-bit did here.
> 
> The 64-bit version is a bit confused and it often ends up /not/ clearing
> RW for the direct mapping of the pages in the pgt_buf because any
> existing RW mappings will be used as-is.  See phys_pte_init() which
> checks for an existing mapping and only sets the PTE if it is not
> already set.

Not all the pagetable pages might be mapped already, even if they are
already hooked into the pagetable.


> pgd_populate(), pud_populate(), and pmd_populate() take care of clearing
> RW for the newly allocated page table pages, so mask_rw_pte() only needs
> to consider clearing RW for mappings of the existing page table PFNs
> which all lie outside the range (pt_buf_start, pt_buf_top].
> 
> This patch makes it more sensible, and makes removal of the pv-op safe
> if pgt_buf lies outside the initial mapping.
> 
> index 04c1f61..2fd5e86 100644
> --- a/arch/x86/xen/mmu.c
> +++ b/arch/x86/xen/mmu.c
> @@ -1400,14 +1400,13 @@ static pte_t __init mask_rw_pte(pte_t *ptep,
> pte_t pte)
>  	unsigned long pfn = pte_pfn(pte);
> 
>  	/*
> -	 * If the new pfn is within the range of the newly allocated
> -	 * kernel pagetable, and it isn't being mapped into an
> -	 * early_ioremap fixmap slot as a freshly allocated page, make sure
> -	 * it is RO.
> +	 * If this is a PTE of an early_ioremap fixmap slot but
> +	 * outside the range (pgt_buf_start, pgt_buf_top], then this
> +	 * PTE is mapping a PFN in the current page table.  Make
> +	 * sure it is RO.
>  	 */
> -	if (((!is_early_ioremap_ptep(ptep) &&
> -			pfn >= pgt_buf_start && pfn < pgt_buf_top)) ||
> -			(is_early_ioremap_ptep(ptep) && pfn != (pgt_buf_end - 1)))
> +	if (is_early_ioremap_ptep(ptep)
> +	    && (pfn < pgt_buf_start || pfn >= pgt_buf_top))
>  		pte = pte_wrprotect(pte);
> 
>  	return pte;
> 

That's a mistake, you just inverted the condition!
What if map_low_page is used to map a page already hooked into the
pagetable? It should be RO while with your change it would be RW.
Also you don't handle the case when map_low_page is used to map the very
latest page, allocated and mapped by alloc_low_page, but
still not hooked into the pagetable: that page needs to be RW.

I believe you need more than 6G of RAM to see all these issues.


> >> 8<----------------------
> >> x86: remove x86_init.mapping.pagetable_reserve paravirt op
> >>
> >> The x86_init.mapping.pagetable_reserve paravirt op is used for Xen
> >> guests to set the writable flag for the mapping of (pgt_buf_end,
> >> pgt_buf_top].  This is not necessary as these pages are never set as
> >> read-only as they have never contained page tables.
> > 
> > Is this actually true? It is possible when pagetable pages are
> > allocated by alloc_low_page.
> 
> These newly allocated page table pages will be set read-only when they
> are linked into the page tables with pgd_populate(), pud_populate() and
> friends.
> 
> >> When running as a Xen guest, the initial page tables are provided by
> >> Xen (these are reserved with memblock_reserve() in
> >> xen_setup_kernel_pagetable()) and constructed in brk space (for 32-bit
> >> guests) or in the kernel's .data section (for 64-bit guests, see
> >> head_64.S).
> >>
> >> Since these are all marked as reserved, (pgt_buf_start, pgt_buf_top]
> >> does not overlap with them and the mappings for these PFNs will be
> >> read-write.
> > 
> > We are talking about pagetable pages built by
> > kernel_physical_mapping_init.
> > 
> > 
> >> Since Xen doesn't need to change the mapping its implementation
> >> becomes the same as a native and we can simply remove this pv-op
> >> completely.
> > 
> > I don't think so.
> 
> David
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> >>> On 14/08/12 14:57, David Vrabel wrote:
> >>>> On 14/08/12 13:24, Attilio Rao wrote:
> >> After looking at it some more, I think this pv-op is unnecessary. How
> >> about the following patch to just remove it completely?
> >>
> >> I've only smoke-tested 32-bit and 64-bit dom0 but I think the reasoning
> >> is sound.
> > 
> > Do you have more than 4G assigned to dom0 on those boxes?
> 
> I've tested with 6G now, both 64-bit and 32-bit with HIGHPTE.
> 
> > It certainly fixed a serious crash at the time it was introduced, see
> > http://marc.info/?l=linux-kernel&m=129901609503574 and
> > http://marc.info/?l=linux-kernel&m=130133909408229. Unless something big
> > changed in kernel_physical_mapping_init, I think we still need it.
> > Depending on the e820 of your test box, the kernel could crash (or not),
> > possibly in different places.
> >
> >>>> Having said that, I couldn't immediately see where pages in (end, 
> >>>> pgt_buf_top] was getting set RO.  Can you point me to where it's 
> >>>> done?
> >>>>
> >>>
> >>> As mentioned in the comment, please look at xen_set_pte_init().
> >>
> >> xen_set_pte_init() only ensures it doesn't set the PTE as writable if it
> >> is already present and read-only.
> > 
> > look at mask_rw_pte and read the threads linked above, unfortunately it
> > is not that simple.
> 
> Yes, I was remembering what 32-bit did here.
> 
> The 64-bit version is a bit confused and it often ends up /not/ clearing
> RW for the direct mapping of the pages in the pgt_buf because any
> existing RW mappings will be used as-is.  See phys_pte_init() which
> checks for an existing mapping and only sets the PTE if it is not
> already set.

Not all of the pagetable pages might already be mapped, even if they are
already hooked into the pagetable.


> pgd_populate(), pud_populate(), and pmd_populate() take care of clearing
> RW for the newly allocated page table pages, so mask_rw_pte() only needs
> to consider clearing RW for mappings of the existing page table PFNs
> which all lie outside the range (pgt_buf_start, pgt_buf_top].
> 
> This patch makes it more sensible, and makes removal of the pv-op safe
> if pgt_buf lies outside the initial mapping.
> 
> index 04c1f61..2fd5e86 100644
> --- a/arch/x86/xen/mmu.c
> +++ b/arch/x86/xen/mmu.c
> @@ -1400,14 +1400,13 @@ static pte_t __init mask_rw_pte(pte_t *ptep, pte_t pte)
>  	unsigned long pfn = pte_pfn(pte);
> 
>  	/*
> -	 * If the new pfn is within the range of the newly allocated
> -	 * kernel pagetable, and it isn't being mapped into an
> -	 * early_ioremap fixmap slot as a freshly allocated page, make sure
> -	 * it is RO.
> +	 * If this is a PTE of an early_ioremap fixmap slot but
> +	 * outside the range (pgt_buf_start, pgt_buf_top], then this
> +	 * PTE is mapping a PFN in the current page table.  Make
> +	 * sure it is RO.
>  	 */
> -	if (((!is_early_ioremap_ptep(ptep) &&
> -			pfn >= pgt_buf_start && pfn < pgt_buf_top)) ||
> -			(is_early_ioremap_ptep(ptep) && pfn != (pgt_buf_end - 1)))
> +	if (is_early_ioremap_ptep(ptep)
> +	    && (pfn < pgt_buf_start || pfn >= pgt_buf_top))
>  		pte = pte_wrprotect(pte);
> 
>  	return pte;
> 

That's a mistake; you just inverted the condition!
What if map_low_page is used to map a page already hooked into the
pagetable? It should be RO, while with your change it would be RW.
Also, you don't handle the case when map_low_page is used to map the very
last page, allocated and mapped by alloc_low_page but
still not hooked into the pagetable: that page needs to be RW.

I believe you need more than 6G of RAM to see all these issues.


> >> 8<----------------------
> >> x86: remove x86_init.mapping.pagetable_reserve paravirt op
> >>
> >> The x86_init.mapping.pagetable_reserve paravirt op is used for Xen
> >> guests to set the writable flag for the mapping of (pgt_buf_end,
> >> pgt_buf_top].  This is not necessary as these pages are never set as
> >> read-only as they have never contained page tables.
> > 
> > Is this actually true? It is possible when pagetable pages are
> > allocated by alloc_low_page.
> 
> These newly allocated page table pages will be set read-only when they
> are linked into the page tables with pgd_populate(), pud_populate() and
> friends.
> 
> >> When running as a Xen guest, the initial page tables are provided by
> >> Xen (these are reserved with memblock_reserve() in
> >> xen_setup_kernel_pagetable()) and constructed in brk space (for 32-bit
> >> guests) or in the kernel's .data section (for 64-bit guests, see
> >> head_64.S).
> >>
> >> Since these are all marked as reserved, (pgt_buf_start, pgt_buf_top]
> >> does not overlap with them and the mappings for these PFNs will be
> >> read-write.
> > 
> > We are talking about pagetable pages built by
> > kernel_physical_mapping_init.
> > 
> > 
> >> Since Xen doesn't need to change the mapping its implementation
> >> becomes the same as a native and we can simply remove this pv-op
> >> completely.
> > 
> > I don't think so.
> 
> David
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 11:13:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 11:13:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1y1J-0008Ul-6D; Thu, 16 Aug 2012 11:13:45 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T1y1I-0008Ud-DM
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 11:13:44 +0000
Received: from [85.158.143.99:42129] by server-1.bemta-4.messagelabs.com id
	9B/B3-07754-7E5DC205; Thu, 16 Aug 2012 11:13:43 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-15.tower-216.messagelabs.com!1345115623!28467034!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18860 invoked from network); 16 Aug 2012 11:13:43 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-15.tower-216.messagelabs.com with SMTP;
	16 Aug 2012 11:13:43 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 16 Aug 2012 12:13:42 +0100
Message-Id: <502CF22E0200007800095763@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Thu, 16 Aug 2012 12:14:22 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ronghui Duan" <ronghui.duan@intel.com>
References: <A21691DE07B84740B5F0B81466D5148A23BCF1DF@SHSMSX102.ccr.corp.intel.com>
In-Reply-To: <A21691DE07B84740B5F0B81466D5148A23BCF1DF@SHSMSX102.ccr.corp.intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"Ian.Jackson@eu.citrix.com" <Ian.Jackson@eu.citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"Stefano.Stabellini@eu.citrix.com" <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [RFC v1 0/5] VBD: enlarge max segment per request
 in blkfront
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 16.08.12 at 12:22, "Duan, Ronghui" <ronghui.duan@intel.com> wrote:
> The max number of segments per request in the VBD queue is 11, while for Linux 
> OS / other VMMs the parameter is set to 128 by default. This may be caused by the 
> limited size of the ring between Front/Back. So I wonder whether we can put 
> segment data into another ring and use it dynamically for each request's 
> needs. Here is a prototype which hasn't had much testing, but it works on a 
> Linux 64-bit 3.4.6 kernel. I can see the CPU% reduced to 1/3 of the 
> original in sequential tests. But it brings some overhead which makes 
> random IO's CPU utilization increase a little.

How to improve blkback is intended to be a subject of
discussion at the summit - are you by any chance going to be
there? The fact is that there are a number of other proposed
extensions to the interface, and since you don't mention those
I'm assuming you were not aware of them.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 11:18:12 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 11:18:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1y5M-0000D0-UZ; Thu, 16 Aug 2012 11:17:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T1y5L-0000Ct-NW
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 11:17:55 +0000
Received: from [85.158.143.99:33279] by server-2.bemta-4.messagelabs.com id
	81/80-31966-3E6DC205; Thu, 16 Aug 2012 11:17:55 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-216.messagelabs.com!1345115873!18714304!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13797 invoked from network); 16 Aug 2012 11:17:53 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-216.messagelabs.com with SMTP;
	16 Aug 2012 11:17:53 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 16 Aug 2012 12:17:52 +0100
Message-Id: <502CF3290200007800095778@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Thu, 16 Aug 2012 12:18:33 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Mark van Dijk" <lists+xen@internecto.net>
References: <20120815223909.1eb7a0cd@internecto.net>
	<502CC76602000078000955B0@nat28.tlf.novell.com>
	<20120816104823.55a775ed@internecto.net>
In-Reply-To: <20120816104823.55a775ed@internecto.net>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] E5606 with no HVM;
 Assertion 'i == 1' failed at p2m-ept.c:524
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 16.08.12 at 10:48, "Mark van Dijk" <lists+xen@internecto.net> wrote:
> On Thu, 16 Aug 2012 09:11:50 +0100
> "Jan Beulich" <JBeulich@suse.com> wrote:
> 
>> >>> On 15.08.12 at 22:39, Mark van Dijk <lists+xen@internecto.net>
>> >>> wrote:
>> > (XEN) Xen call trace:
>> > (XEN)    [<ffff82c4801e1be9>] ept_get_entry+0x13a/0x28f
>> > (XEN)    [<ffff82c4801da274>] __get_gfn_type_access+0x175/0x256
>> > (XEN)    [<ffff82c4801b5a55>] hvm_hap_nested_page_fault+0x133/0x422
>> > (XEN)    [<ffff82c4801d3c8e>] vmx_vmexit_handler+0x136b/0x1614
>> > 
>> > The entire log from boot to crash can be viewed at the following
>> > link: http://pastebin.com/5wcH7GWR 
>> 
>> Unfortunately quite a few of the registers contain values that
>> could sensibly be "i". Could you either disassemble the
>> instructions around the place yourself to find out which one it
>> is, or make the xen-syms file corresponding to this run available?
> 
> Hi Jan, I'm not able to disassemble anything myself because my experience
> with that is zero. So it's probably easier to give you the xen-syms
> file; I posted it to a pastebin with a little detour:
> 
> curl http://sprunge.us/cGeL | openssl enc -a -d | \
>  xz -d > xen-syms-4.2.0-rc3-pre

Thanks. That confirms that "i" (in r12d) is indeed 2. So there are
good chances that the patch I sent earlier will help. Please let us
know.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 11:20:18 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 11:20:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1y7R-0000Jd-FN; Thu, 16 Aug 2012 11:20:05 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T1y7P-0000JL-Ja
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 11:20:03 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1345115947!9532922!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkyODE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6639 invoked from network); 16 Aug 2012 11:19:07 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 11:19:07 -0000
X-IronPort-AV: E=Sophos;i="4.77,778,1336348800"; d="scan'208";a="14037582"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 11:19:07 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 16 Aug 2012 12:19:07 +0100
Message-ID: <1345115945.27489.87.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Thu, 16 Aug 2012 12:19:05 +0100
In-Reply-To: <502CEFF0020000780009572A@nat28.tlf.novell.com>
References: <502934CB.7060409@xen.org>
	<1344941807.5926.25.camel@zakaz.uk.xensource.com>
	<502A3A0B.6000401@xen.org> <20120815171953.GB9984@US-SEA-R8XVZTX>
	<CAOqnZH54ZhSOmZ1zBwF1QPVdBc_g_9859121pcqdnbUFO6pUvw@mail.gmail.com>
	<1345107025.27489.27.camel@zakaz.uk.xensource.com>
	<502CDB7E0200007800095622@nat28.tlf.novell.com>
	<1345114757.27489.82.camel@zakaz.uk.xensource.com>
	<502CEFF0020000780009572A@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Lars Kurth <lars.kurth.xen@gmail.com>, Matt Wilson <msw@amazon.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.2 Limits (needs review)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-16 at 12:04 +0100, Jan Beulich wrote:
> >>> On 16.08.12 at 12:59, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > On Thu, 2012-08-16 at 10:37 +0100, Jan Beulich wrote:
> >> >>> On 16.08.12 at 10:50, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> >> > On Thu, 2012-08-16 at 09:43 +0100, Lars Kurth wrote:
> >> >> do you know which limits are correct?
> >> > 
> >> > I think Jan confirmed the 4.1 numbers on
> >> > http://wiki.xen.org/wiki/Xen_Release_Features and he'd know for sure
> >> > about this sort of thing. e.g. if any had increased for 4.2.
> >> 
> >> I had gone over this already, and adjusted the things I knew for
> >> sure.
> > 
> > "This" being the 4.2 limits wiki page:
> > http://wiki.xen.org/wiki/Xen_4.2_Limits which I accidentally trimmed
> > when CCing you?
> 
> Yes.

Smashing, thanks.

So we should just merge the Xen_4.2_Limits page into a column in the
Release Features page.

Ian.

> 
> Jan
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 11:28:26 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 11:28:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1yFG-0000Ws-Dn; Thu, 16 Aug 2012 11:28:10 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T1yFF-0000Wn-K0
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 11:28:09 +0000
Received: from [85.158.138.51:48924] by server-4.bemta-3.messagelabs.com id
	D3/A2-04276-849DC205; Thu, 16 Aug 2012 11:28:08 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-174.messagelabs.com!1345116488!28679717!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkyODE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21823 invoked from network); 16 Aug 2012 11:28:08 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-5.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 11:28:08 -0000
X-IronPort-AV: E=Sophos;i="4.77,778,1336348800"; d="scan'208";a="14037909"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 11:28:08 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 16 Aug 2012 12:28:08 +0100
Message-ID: <1345116486.27489.92.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jean Guyader <jean.guyader@gmail.com>
Date: Thu, 16 Aug 2012 12:28:06 +0100
In-Reply-To: <CAEBdQ90LB9xvdAZC_QJGRYmBXBaM3ysDuAbG5LZr4AVe=GrA0w@mail.gmail.com>
References: <1343657037-28495-1-git-send-email-ian.campbell@citrix.com>
	<50242583.3010201@tycho.nsa.gov>
	<CAEBdQ90LB9xvdAZC_QJGRYmBXBaM3ysDuAbG5LZr4AVe=GrA0w@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Guest knowledge of own domid [was: docs: initial
 documentation for xenstore paths]
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-09 at 22:26 +0100, Jean Guyader wrote:
> On 9 August 2012 22:02, Daniel De Graaf <dgdegra@tycho.nsa.gov> wrote:
> > On 07/30/2012 10:03 AM, Ian Campbell wrote:
> >> This is based upon my inspection of a system with a single PV domain running
> >> and is therefore very incomplete.
> >>
> >> There are several things I'm not sure of here, mostly marked with XXX in the
> >> text.
> >>
> >> In particular:
> >>
> >>  - We seem to expose various things to the guest which really it has no need to
> >>    know (at least not via xenstore). e.g. its own domid, its device model pid,
> >>    the size of the video ram, store port and gref.
> >
> > If the domid key is unneeded/removed, is there a recommended method for
> > a guest to query its own domid? I don't see a hypercall that returns it
> > directly, although there is one to return the guest's UUID - which seems
> > much less useful for a guest to know about itself.
> >
> > While hypercalls are fairly consistent about accepting DOMID_SELF, a
> > domain does occasionally need to know its own ID: xenstore permission
> > changes do not accept DOMID_SELF, 

I wonder if that would be a worthwhile protocol extension.

> and if two domains are attempting to
> > set up communication such as V4V or vchan, they need to be able to tell
> > their peer what domain ID to use.

That's trickier.

I suppose they could rendezvous via /vm/$UUID? Although there has been
talk of removing that path in the future.

> >
> 
> That is one way of doing it; another would be to use a name
> resolution system, a bit like DNS. A system like that would need to
> live where the VMs are created and destroyed (probably dom0 or a
> domain builder VM). The server could use vchan, v4v or even a shared
> XenStore node, but I think we need something like that.
> 
> In the long run it's much better to rely on a name instead of a
> domid, because domids can change throughout the VM life cycle
> (reboot, hibernate, save/restore, migration, ...).

Right, this is the main reason to avoid building a reliance on domid
into a protocol.

> 
> Jean
> 
> > It is possible for a domain to query its own domain ID indirectly, so it
> > would be difficult to argue that a domain should not be able to obtain
> > its own ID. One method for a domain to query its own ID is to create an
> > unbound event channel with remote_domid = DOMID_SELF, and then execute
> > evtchn_status on the event channel in order to see the resolved domain
> > id. Querying Xenstore permissions on a newly-created key will show the
> > local domain as the first entry. Less reliably, the backend paths for
> > all xenbus devices contain the local and remote domain IDs.
> >



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 11:57:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 11:57:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1ygu-0000n6-46; Thu, 16 Aug 2012 11:56:44 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ben.guthro@gmail.com>) id 1T1ygs-0000n1-KQ
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 11:56:42 +0000
X-Env-Sender: ben.guthro@gmail.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1345118177!7181698!1
X-Originating-IP: [209.85.213.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20895 invoked from network); 16 Aug 2012 11:56:19 -0000
Received: from mail-yx0-f173.google.com (HELO mail-yx0-f173.google.com)
	(209.85.213.173)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 11:56:19 -0000
Received: by yenm4 with SMTP id m4so3117382yen.32
	for <xen-devel@lists.xen.org>; Thu, 16 Aug 2012 04:56:17 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=Q0BUQIAg96twXy1oLGLCcnIIfWZZUbWltcbaRARYW2k=;
	b=PSxiZz4LYIiuXoBc3mNK8YJbHtVhnEaCSe8FgZ5uP7eJIAeoh2vLEHrTkpQU+NwFlB
	viiKyk95flXtkIzC8r8T+F3tTGDjqQb5kMJlHOTmogrDJ+jfUCUesVf/TQyJAvR25Smj
	DPv4H83+aMqtwSu/f/3a5HHVc7e/+XHviReMZ634A/W/jry9SLMJYGi55SPNsofWXQGY
	LUC4yV5J3nFWilydiKc/DdtNTjGUfV/zrRPiO5LU+CxxkCbxRt6qcSIVeY/sF/qRA9XB
	4m6ZDYPWspc+Pg2I/2+FayaBy3v76k/xnDYeJIP1qNmRv2iEaHkQYTOQJgxXO/YNze3L
	g8Gg==
MIME-Version: 1.0
Received: by 10.50.222.193 with SMTP id qo1mr998631igc.70.1345118176914; Thu,
	16 Aug 2012 04:56:16 -0700 (PDT)
Received: by 10.64.33.15 with HTTP; Thu, 16 Aug 2012 04:56:16 -0700 (PDT)
In-Reply-To: <502CF0A8020000780009574E@nat28.tlf.novell.com>
References: <CAOvdn6WwgBD-2yf1uu3m9-16=Tb5KUrybXf2KibtnxKbP7YtwA@mail.gmail.com>
	<CAOvdn6UBW-yTOMd3YSf5M9JiHLRivyaKL-qzcKG8epszqaKmGg@mail.gmail.com>
	<20120807163320.GC15053@phenom.dumpdata.com>
	<CAOvdn6XcRGKtJxGwJO8J+ZBJr89wfTLc1woW5rhtMM540H9h1w@mail.gmail.com>
	<CAOvdn6XUoSeGoo7X=AJ1kpDxZB9HZEv+TJ4v-=FHcyR4=CHK-w@mail.gmail.com>
	<502240F302000078000937A6@nat28.tlf.novell.com>
	<CAOvdn6WN3kxC8Bb9g6MpfLULu47EGSGCtajPc-SFeJ-8VSUQDQ@mail.gmail.com>
	<CAOvdn6Xk1inm1hU55i0phU2qvSWyJLNoyDdn43biBAY2OewPtw@mail.gmail.com>
	<5023F5400200007800093F4E@nat28.tlf.novell.com>
	<CAOvdn6U-f3C4U8xVPvDyRFkFFLkbfiCXg_n+9zgA9V5u+q5jfw@mail.gmail.com>
	<5023F8B80200007800093F8C@nat28.tlf.novell.com>
	<CAOvdn6XKHfete_+=Hkvt20G7_cJ-4i8+9j9YD30OxzQ16Ru-+g@mail.gmail.com>
	<5024CB720200007800094124@nat28.tlf.novell.com>
	<CAOvdn6VVJdhWVYa-gNVt3euE4AbGi1TijUCUd1wGzv0SCWEUYA@mail.gmail.com>
	<CAOvdn6USnG9Y4MNBhrDZmob0oJ8BgFfKTvFEhcKbXDxNmzfZ-Q@mail.gmail.com>
	<502B75BA0200007800094FD4@nat28.tlf.novell.com>
	<CAOvdn6Ww5oEj6Xg=XT4dWCYWUBuOdPqLz-CqNjz4HmCScE+==g@mail.gmail.com>
	<CAOvdn6Wsjds_dmZ0dwDTYmD-o-OqjBH5-b9mitAuUADE+A4_ag@mail.gmail.com>
	<502BB8FD02000078000951E0@nat28.tlf.novell.com>
	<CAOvdn6VD6bp13F-tvYdkBsj4hkhE7PTGZrapndtdHrDKeZqccg@mail.gmail.com>
	<502CCC0002000078000955C4@nat28.tlf.novell.com>
	<CAOvdn6VEuX7R2_wz74ZvCxMjC-=YUir9kP5sdxEVaKVspRUd_w@mail.gmail.com>
	<502CF0A8020000780009574E@nat28.tlf.novell.com>
Date: Thu, 16 Aug 2012 07:56:16 -0400
X-Google-Sender-Auth: SNUpUPQd3KTJHDx-QayoQhBAzeY
Message-ID: <CAOvdn6VsZvn_1TCWYeuyY5YuTcP8=miK2KwE4583nsj6Qjb_vg@mail.gmail.com>
From: Ben Guthro <ben@guthro.net>
To: Jan Beulich <JBeulich@suse.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, john.baboval@citrix.com,
	Thomas Goetz <thomas.goetz@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen4.2 S3 regression?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Aug 16, 2012 at 7:07 AM, Jan Beulich <JBeulich@suse.com> wrote:
>
> >>> On 16.08.12 at 12:37, Ben Guthro <ben@guthro.net> wrote:
> > On Thu, Aug 16, 2012 at 4:31 AM, Jan Beulich <JBeulich@suse.com> wrote:
> >>> When I am logging to serial, the failure is the same as before -
> >>> The first suspend / resume works -
> >>> The second fails with AHCI not working
> >>
> >> And this is with and/or without the evtchn_move_pirqs() calls
> >> removed? Otherwise, this might allow us at least debugging
> >> that part of the problem.
> >
> > I am now convinced there is more than one problem:
> > One is the MSI issue we are chasing here... the other seems to be a
> > bit more insidious, where the system does not come back from S3 at all
> > - as mentioned in the Intel bug report from the other thread.
> >
> > Running on serial to debug the former seems to at least mask the latter.
> >
> > Removing evtchn_move_pirqs() at the tip does not seem to have the same
> > effect as removing them from the changeset that I bisected the problem
> > to.
>
> Odd.


Indeed. I can't explain it either.


>
>
> > At the tip, with these changes - I observe no change in behavior -
> > AHCI still has problems after the 2nd suspend/resume cycle.
> > At 21625:0695a5cdcb42, with evtchn_move_pirqs() - I am able to suspend
> > / resume a dozen times, or more.
>
> As there ought to be at least some affinity break messages during
> the suspend part, and I don't recall having seen any, could you -
> for starters - provide a full serial log of the suspend/resume process,
> with "loglvl=all guest_loglvl=all" in place? I'll then try to get to
> produce a debugging patch for you to try.


In order to not flood the list with large logs, I put the logs here:
https://citrix.sharefile.com/d/sfab699024a54df39
I wasn't sure if you wanted a log with, or without the calls to
evtchn_move_pirqs() commented out - so I included both.

Please let me know if there is anything else you'd like me to experiment with.

Ben

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 12:09:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 12:09:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1ysW-00019b-8H; Thu, 16 Aug 2012 12:08:44 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lists+xen@internecto.net>) id 1T1ysU-00019W-HC
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 12:08:42 +0000
Received: from [85.158.143.35:10862] by server-3.bemta-4.messagelabs.com id
	0E/9A-09529-9C2EC205; Thu, 16 Aug 2012 12:08:41 +0000
X-Env-Sender: lists+xen@internecto.net
X-Msg-Ref: server-8.tower-21.messagelabs.com!1345118905!13765913!1
X-Originating-IP: [176.9.245.29]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25371 invoked from network); 16 Aug 2012 12:08:26 -0000
Received: from polaris.internecto.net (HELO mx1.internecto.net) (176.9.245.29)
	by server-8.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 16 Aug 2012 12:08:26 -0000
Received: from localhost (localhost [127.0.0.1])
	by mx1.internecto.net (Postfix) with ESMTP id BC0C7A03F8;
	Thu, 16 Aug 2012 12:08:25 +0000 (UTC)
X-Virus-Scanned: Debian amavisd-new at mail.internecto.net
Received: from mx1.internecto.net ([127.0.0.1])
	by localhost (mail.polaris.internecto.net [127.0.0.1]) (amavisd-new,
	port 10024)
	with ESMTP id WDumX-iUMxxe; Thu, 16 Aug 2012 12:08:22 +0000 (UTC)
Received: from internecto.net (5ED4FDEB.cm-7-5d.dynamic.ziggo.nl
	[94.212.253.235])
	(using TLSv1 with cipher DHE-RSA-AES128-SHA (128/128 bits))
	(Client did not present a certificate)
	(Authenticated sender: lists@internecto.net)
	by mx1.internecto.net (Postfix) with ESMTPSA id 34512A02EB;
	Thu, 16 Aug 2012 12:08:22 +0000 (UTC)
Date: Thu, 16 Aug 2012 14:08:20 +0200
From: "Mark van Dijk" <lists+xen@internecto.net>
To: "Jan Beulich" <JBeulich@suse.com>
Message-ID: <20120816140820.355053c4@internecto.net>
In-Reply-To: <502CF3290200007800095778@nat28.tlf.novell.com>
References: <20120815223909.1eb7a0cd@internecto.net>
	<502CC76602000078000955B0@nat28.tlf.novell.com>
	<20120816104823.55a775ed@internecto.net>
	<502CF3290200007800095778@nat28.tlf.novell.com>
Organization: Internecto SIS
X-Mailer: Claws Mail 3.8.0 (GTK+ 2.24.10; x86_64-pc-linux-gnu)
Mime-Version: 1.0
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] E5606 with no HVM;
 Assertion 'i == 1' failed at p2m-ept.c:524
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 16 Aug 2012 12:18:33 +0100
"Jan Beulich" <JBeulich@suse.com> wrote:

> >>> On 16.08.12 at 10:48, "Mark van Dijk" <lists+xen@internecto.net>
> >>> wrote:
> > On Thu, 16 Aug 2012 09:11:50 +0100
> > "Jan Beulich" <JBeulich@suse.com> wrote:
> > 
> >> >>> On 15.08.12 at 22:39, Mark van Dijk <lists+xen@internecto.net>
> >> >>> wrote:
> >> > (XEN) Xen call trace:
> >> > (XEN)    [<ffff82c4801e1be9>] ept_get_entry+0x13a/0x28f
> >> > (XEN)    [<ffff82c4801da274>] __get_gfn_type_access+0x175/0x256
> >> > (XEN)    [<ffff82c4801b5a55>]
> >> > hvm_hap_nested_page_fault+0x133/0x422 (XEN)
> >> > [<ffff82c4801d3c8e>] vmx_vmexit_handler+0x136b/0x1614
> >> > 
> >> > The entire log from boot to crash can be viewed at the following
> >> > link: http://pastebin.com/5wcH7GWR 
> >> 
> >> Unfortunately quite a few of the registers contain values that
> >> could sensibly be "i". Could you either disassemble the
> >> instructions around the place yourself to find out which one it
> >> is, or make the xen-syms file corresponding to this run available?
> > 
> > Hi Jan, I'm not able to disassemble anything myself because my
> > experience with that is zero. So it's probably easier to give you
> > the xen-syms file; I posted it to a pastebin with a little detour:
> > 
> > curl http://sprunge.us/cGeL | openssl enc -a -d | \
> >  xz -d > xen-syms-4.2.0-rc3-pre
> 
> Thanks. That confirms that "i" (in r12d) is indeed 2. So there are
> good chances that the patch I sent earlier will help. Please let us
> know.

Good news: the VM boots. Thanks Jan!

My console reads:

...
(XEN) irq.c:270: Dom1 PCI link 3 changed 5 -> 0
(XEN) PoD[bfc00] level=2
(XEN) PoD[7fc00] level=2

So that's one more confirmation. :)

Mark

-- 
Stay in touch,
Mark van Dijk.               ,---------------------------------
----------------------------'         Thu Aug 16 11:57 UTC 2012
Today is Pungenday, the 9th day of Bureaucracy in the YOLD 3178

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


On Thu, 16 Aug 2012 12:18:33 +0100
"Jan Beulich" <JBeulich@suse.com> wrote:

> >>> On 16.08.12 at 10:48, "Mark van Dijk" <lists+xen@internecto.net>
> >>> wrote:
> > On Thu, 16 Aug 2012 09:11:50 +0100
> > "Jan Beulich" <JBeulich@suse.com> wrote:
> > 
> >> >>> On 15.08.12 at 22:39, Mark van Dijk <lists+xen@internecto.net>
> >> >>> wrote:
> >> > (XEN) Xen call trace:
> >> > (XEN)    [<ffff82c4801e1be9>] ept_get_entry+0x13a/0x28f
> >> > (XEN)    [<ffff82c4801da274>] __get_gfn_type_access+0x175/0x256
> >> > (XEN)    [<ffff82c4801b5a55>]
> >> > hvm_hap_nested_page_fault+0x133/0x422 (XEN)
> >> > [<ffff82c4801d3c8e>] vmx_vmexit_handler+0x136b/0x1614
> >> > 
> >> > The entire log from boot to crash can be viewed at the following
> >> > link: http://pastebin.com/5wcH7GWR 
> >> 
> >> Unfortunately quite a few of the registers contain values that
> >> could sensibly be "i". Could you either disassemble the
> >> instructions around the place yourself to find out which one it
> >> is, or make the xen-syms file corresponding to this run available?
> > 
> > Hi Jan, I'm not able to disassemble anything because my
> > experience with that is zero. So it's probably easier to give you
> > the xen-syms file, I posted it to a pastebin with a little detour:
> > 
> > curl http://sprunge.us/cGeL | openssl enc -a -d | \
> >  xz -d > xen-syms-4.2.0-rc3-pre
> 
> Thanks. That confirms that "i" (in r12d) is indeed 2. So there are
> good chances that the patch I sent earlier will help. Please let us
> know.

Good news: the VM boots. Thanks Jan!

My console reads:

...
(XEN) irq.c:270: Dom1 PCI link 3 changed 5 -> 0
(XEN) PoD[bfc00] level=2
(XEN) PoD[7fc00] level=2

So that's one more confirmation. :)

Mark

-- 
Stay in touch,
Mark van Dijk.               ,---------------------------------
----------------------------'         Thu Aug 16 11:57 UTC 2012
Today is Pungenday, the 9th day of Bureaucracy in the YOLD 3178

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 12:22:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 12:22:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1z58-0001Jx-Jm; Thu, 16 Aug 2012 12:21:46 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1T1z57-0001Js-I5
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 12:21:45 +0000
Received: from [85.158.143.35:21567] by server-2.bemta-4.messagelabs.com id
	12/8E-31966-8D5EC205; Thu, 16 Aug 2012 12:21:44 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-14.tower-21.messagelabs.com!1345119703!16004675!1
X-Originating-IP: [192.89.123.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjg5LjEyMy4yNSA9PiA0ODA3NjU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6637 invoked from network); 16 Aug 2012 12:21:43 -0000
Received: from smtp.tele.fi (HELO vulpes-int.media.sonera.net) (192.89.123.25)
	by server-14.tower-21.messagelabs.com with SMTP;
	16 Aug 2012 12:21:43 -0000
Received: from smtp.tele.fi (smtp.tele.fi [192.89.123.25])
	by vulpes-int.media.sonera.net (Postfix) with ESMTP id 90B941AE8B
	for <xen-devel@lists.xen.org>; Thu, 16 Aug 2012 15:21:42 +0300 (EEST)
X-Originating-Ip: [194.89.68.22]
Received: from ydin.reaktio.net (reaktio.net [194.89.68.22])
	by smtp.tele.fi (Postfix) with ESMTP id CC2DC1526;
	Thu, 16 Aug 2012 15:21:21 +0300 (EEST)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id CE8F42005D; Thu, 16 Aug 2012 15:21:20 +0300 (EEST)
Date: Thu, 16 Aug 2012 15:21:20 +0300
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20120816122120.GI19851@reaktio.net>
References: <502BD75B.9040301@cbnco.com>
	<1345109102.27489.38.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1345109102.27489.38.camel@zakaz.uk.xensource.com>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: Tom Parker <tparker@cbnco.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] PV USB Use Case for Xen 4.x
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Aug 16, 2012 at 10:25:02AM +0100, Ian Campbell wrote:
> On Wed, 2012-08-15 at 18:07 +0100, Tom Parker wrote:
> > Good Afternoon.  My colleague Stefan (sstan) was asked on the IRC
> > channel to provide our use case for PV USB in our environment.  This
> > is possible with the current xm stack but not available with the xl
> > stack.
> 
> Thanks for doing this.
> 
> At first glance this doesn't seem like something which we could do for
> 4.2.0 at this stage, although we should do it for 4.3 and potentially
> consider it for 4.2.1.
> 
> Is it something which you guys might be interested in providing patches
> for? It is at heart a moderately simple C coding exercise, I'm more than
> happy to provide guidance etc. Much of the generic framework already
> exists and there are examples in the form of other device types.
> 
> > Currently we use PVUSB to attach a USB Smartcard reader through our
> > dom0 (SLES 11 SP1) running on an HP Blade Server with the Token
> > mounted on an internal USB Port to our domU CA server (SLES 11)
> > 
> > The config file syntax is broken so we have to manually attach (I have
> > it scripted) whenever our hosts reboot (which is almost never.)
> 
> Can you give an example of what the syntax *should* be?
> 
> > On the dom0 server I have to do the following steps:
> > 
> > /usr/sbin/xm usb-list-assignable-devices (get the bus-id of the USB
> > device)
> > /usr/sbin/xm usb-hc-create $Domain 2 2 (Create a USB 2.0 Root Hub with
> > 2 ports in $Domain)
> > /usr/sbin/xm usb-attach $Domain $DevId $PortNumber $BusId (Attach the
> > USB bus-id found in step 1 to the hub created in step 2)
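For reference, the three xm steps above can be collapsed into one script. This is only a hedged sketch: the xm subcommands are taken verbatim from the mail, while the variable names, defaults, and the dry-run behaviour are illustrative assumptions.

```shell
#!/bin/sh
# Sketch of the PVUSB attach sequence described in the mail above.
# XM defaults to a dry-run echo so the sequence can be inspected
# without a Xen host; set XM=/usr/sbin/xm to run it for real.
XM="${XM:-echo /usr/sbin/xm}"
DOMAIN="${1:-mydomU}"   # assumed placeholder for $Domain
BUS_ID="${2:-4-1}"      # assumed placeholder for the bus-id from step 1
PORT="${3:-1}"          # hub port to attach the device to
DEV_ID=0                # assumed: the hub created below is device 0

$XM usb-list-assignable-devices               # step 1: find the bus-id
$XM usb-hc-create "$DOMAIN" 2 2               # step 2: USB 2.0 hub, 2 ports
$XM usb-attach "$DOMAIN" "$DEV_ID" "$PORT" "$BUS_ID"   # step 3: attach
```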
> 
> What (if anything) is the output of these commands?
> 
> Do you need to do anything to make a device "assignable"? (I get no
> output from the list command for example)
> 
> > On the domU the lsusb looks like this after the above (before it
> > returns nothing)
> > 
> > mgaca:~ # lsusb 
> > Bus 001 Device 002: ID 04e6:5116 SCM Microsystems, Inc. SCR331-LC1
> > SmartCard Reader
> > Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
> 
> Can you post the output of "xenstore-ls -fp" while the device is
> connected?
> 
> Do you happen to know if this uses the PVUSB drivers or some other
> mechanism? "lsmod" in both dom0 and domU should provide a clue if the
> drivers are loaded.
> 
> Does this work for both PV and HVM guests or do you only use one or the
> other?
> 
> > Once I have done this I can use the usb device in the domU as if it was
> > directly connected. 
> > 
> > Thanks for your time.
> 
> Thank you for describing the functionality.
> 

We should also update / keep up-to-date this wiki page:
http://wiki.xen.org/wiki/Xen_USB_Passthrough

Earlier I already added the missing USB features to:
http://wiki.xen.org/wiki/XL_vs_Xend_Feature_Comparison

Worth noting: in xm/xend there are two different kinds of USB passthru methods available:

1) Xen qemu-dm usb 1.1 passthru for HVM guests.
2) Xen PVUSB usb 2.0 passthru for both PV and PVHVM guests.


-- Pasi


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 12:32:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 12:32:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1zFO-0001US-O4; Thu, 16 Aug 2012 12:32:22 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1T1zFN-0001UN-8e
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 12:32:21 +0000
Received: from [85.158.143.35:22066] by server-2.bemta-4.messagelabs.com id
	8F/9F-31966-358EC205; Thu, 16 Aug 2012 12:32:19 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-10.tower-21.messagelabs.com!1345120336!10832940!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3275 invoked from network); 16 Aug 2012 12:32:16 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-10.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 16 Aug 2012 12:32:16 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1T1zFF-0005yC-NF; Thu, 16 Aug 2012 12:32:13 +0000
Date: Thu, 16 Aug 2012 13:32:13 +0100
From: Tim Deegan <tim@xen.org>
To: Jean Guyader <jean.guyader@gmail.com>
Message-ID: <20120816123213.GD20601@ocelot.phlegethon.org>
References: <1344023454-31425-1-git-send-email-jean.guyader@citrix.com>
	<1344023454-31425-6-git-send-email-jean.guyader@citrix.com>
	<20120809103840.GD16986@ocelot.phlegethon.org>
	<20120810165109.GA19429@spongy>
	<20120813093811.GB75552@ocelot.phlegethon.org>
	<CAEBdQ93L5WW+b=C+YkZiZccv+5zUwr573sibjLHLSk2qZmxxYg@mail.gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAEBdQ93L5WW+b=C+YkZiZccv+5zUwr573sibjLHLSk2qZmxxYg@mail.gmail.com>
User-Agent: Mutt/1.4.2.1i
Cc: Jean Guyader <jean.guyader@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 5/5] xen: Add V4V implementation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 13:43 +0100 on 13 Aug (1344865403), Jean Guyader wrote:
> Protocol isn't part of the namespace - I think that's
> where the confusion arises. The namespace is exclusively
> Port/Domain. Protocol is there to describe the content of
> _a particular message_ in the ring.

OK.  

In that case, I think the hypervisor shouldn't handle it at all.  That
can be done in the client drivers, and the V4V_PROTO definitions and
maybe packet format stuff can go in documentation.

> It is included in the hypercalls (rather than, say, being the first n
> bytes of all messages) to force all senders to use it.

I don't think that's very useful.  It just forces any V4V user who
doesn't need multiple protocols to make up a number for form's sake, and
since Xen doesn't do any checking on the field, it doesn't protect the
receiver from anything.

I guess what I'm saying is, either 'protocol' (or whatever name you give
it) is part of the v4v addressing, in which case it should be treated
properly and demuxed before port, or it's not, in which case it
needn't be in the interface at all.

Cheers,

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 13:14:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 13:14:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1zth-0001nT-5S; Thu, 16 Aug 2012 13:14:01 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lars.kurth.xen@gmail.com>) id 1T1ztf-0001nO-SV
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 13:14:00 +0000
Received: from [85.158.138.51:47587] by server-4.bemta-3.messagelabs.com id
	A1/D6-04276-612FC205; Thu, 16 Aug 2012 13:13:58 +0000
X-Env-Sender: lars.kurth.xen@gmail.com
X-Msg-Ref: server-8.tower-174.messagelabs.com!1345122838!28582393!1
X-Originating-IP: [209.85.215.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13733 invoked from network); 16 Aug 2012 13:13:58 -0000
Received: from mail-ey0-f173.google.com (HELO mail-ey0-f173.google.com)
	(209.85.215.173)
	by server-8.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 13:13:58 -0000
Received: by eaac13 with SMTP id c13so822615eaa.32
	for <xen-devel@lists.xen.org>; Thu, 16 Aug 2012 06:13:58 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:date:from:reply-to:user-agent:mime-version:to:cc
	:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=pqRgQBR457Rv8oEPD/+02rPCw9Fxd5xsujN4KeOjRL0=;
	b=EAJFSVcIdLFCZdafgicCgEXjRt5PVzwS+3YrQpqIrOzPKw3nUVPlS2xT1wvI34a+Wg
	TFhCsdgVHDrlmUi7PfOvY0RraFZiKF0j+QcTfQkZbgFzrYCpCS6DU2rtYgD06teSa/Zv
	IX204qbZ03Os5SJzjHvRA/w3bZs23/eBYf2ZIg0e3HvJv9g98FYpraNHp2PgW4uNq0Yz
	ePFt98wXX8d6uOyDvx3Ao6Vwwjyg7B25Lx24YkKCF2mVDOUma/jOjm3nWDAcYelm3boM
	dTNWAGxj6SJT1tVHHD3SbynMjuT9fpQDBHxRZ50EqaCmco/4ZzvObbKb9sGM801lLLFH
	yN4Q==
Received: by 10.14.224.193 with SMTP id x41mr1461521eep.46.1345122837967;
	Thu, 16 Aug 2012 06:13:57 -0700 (PDT)
Received: from [172.16.26.11] (b0fb50b8.bb.sky.com. [176.251.80.184])
	by mx.google.com with ESMTPS id u47sm12065203eeo.9.2012.08.16.06.13.55
	(version=SSLv3 cipher=OTHER); Thu, 16 Aug 2012 06:13:56 -0700 (PDT)
Message-ID: <502CF211.9050406@xen.org>
Date: Thu, 16 Aug 2012 14:13:53 +0100
From: Lars Kurth <lars.kurth@xen.org>
User-Agent: Mozilla/5.0 (Windows NT 6.1;
	rv:14.0) Gecko/20120713 Thunderbird/14.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <502934CB.7060409@xen.org>
	<1344941807.5926.25.camel@zakaz.uk.xensource.com>
	<502A3A0B.6000401@xen.org> <20120815171953.GB9984@US-SEA-R8XVZTX>
	<CAOqnZH54ZhSOmZ1zBwF1QPVdBc_g_9859121pcqdnbUFO6pUvw@mail.gmail.com>
	<1345107025.27489.27.camel@zakaz.uk.xensource.com>
	<502CDB7E0200007800095622@nat28.tlf.novell.com>
	<1345114757.27489.82.camel@zakaz.uk.xensource.com>
	<502CEFF0020000780009572A@nat28.tlf.novell.com>
	<1345115945.27489.87.camel@zakaz.uk.xensource.com>
In-Reply-To: <1345115945.27489.87.camel@zakaz.uk.xensource.com>
Cc: Lars Kurth <lars.kurth.xen@gmail.com>, Matt Wilson <msw@amazon.com>,
	Jan Beulich <JBeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.2 Limits (needs review)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: lars.kurth@xen.org
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 16/08/2012 12:19, Ian Campbell wrote:
> So we should just merge the Xen_4.2_limit pages into a column in the 
> Release Features page. Ian.
OK, I added a column. There are a few ???'s, and some of the limits on 
Xen_4.2_limit seem to have been reduced compared to 
http://wiki.xen.org/wiki/Xen_Release_Features, so I chose the upper 
limit (in particular for HVM)

Any noteworthy new 4.2 features would also need to be added to
http://wiki.xen.org/wiki/Xen_Release_Features

Lars

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 13:14:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 13:14:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1zth-0001nT-5S; Thu, 16 Aug 2012 13:14:01 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lars.kurth.xen@gmail.com>) id 1T1ztf-0001nO-SV
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 13:14:00 +0000
Received: from [85.158.138.51:47587] by server-4.bemta-3.messagelabs.com id
	A1/D6-04276-612FC205; Thu, 16 Aug 2012 13:13:58 +0000
X-Env-Sender: lars.kurth.xen@gmail.com
X-Msg-Ref: server-8.tower-174.messagelabs.com!1345122838!28582393!1
X-Originating-IP: [209.85.215.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13733 invoked from network); 16 Aug 2012 13:13:58 -0000
Received: from mail-ey0-f173.google.com (HELO mail-ey0-f173.google.com)
	(209.85.215.173)
	by server-8.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 13:13:58 -0000
Received: by eaac13 with SMTP id c13so822615eaa.32
	for <xen-devel@lists.xen.org>; Thu, 16 Aug 2012 06:13:58 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:date:from:reply-to:user-agent:mime-version:to:cc
	:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=pqRgQBR457Rv8oEPD/+02rPCw9Fxd5xsujN4KeOjRL0=;
	b=EAJFSVcIdLFCZdafgicCgEXjRt5PVzwS+3YrQpqIrOzPKw3nUVPlS2xT1wvI34a+Wg
	TFhCsdgVHDrlmUi7PfOvY0RraFZiKF0j+QcTfQkZbgFzrYCpCS6DU2rtYgD06teSa/Zv
	IX204qbZ03Os5SJzjHvRA/w3bZs23/eBYf2ZIg0e3HvJv9g98FYpraNHp2PgW4uNq0Yz
	ePFt98wXX8d6uOyDvx3Ao6Vwwjyg7B25Lx24YkKCF2mVDOUma/jOjm3nWDAcYelm3boM
	dTNWAGxj6SJT1tVHHD3SbynMjuT9fpQDBHxRZ50EqaCmco/4ZzvObbKb9sGM801lLLFH
	yN4Q==
Received: by 10.14.224.193 with SMTP id x41mr1461521eep.46.1345122837967;
	Thu, 16 Aug 2012 06:13:57 -0700 (PDT)
Received: from [172.16.26.11] (b0fb50b8.bb.sky.com. [176.251.80.184])
	by mx.google.com with ESMTPS id u47sm12065203eeo.9.2012.08.16.06.13.55
	(version=SSLv3 cipher=OTHER); Thu, 16 Aug 2012 06:13:56 -0700 (PDT)
Message-ID: <502CF211.9050406@xen.org>
Date: Thu, 16 Aug 2012 14:13:53 +0100
From: Lars Kurth <lars.kurth@xen.org>
User-Agent: Mozilla/5.0 (Windows NT 6.1;
	rv:14.0) Gecko/20120713 Thunderbird/14.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <502934CB.7060409@xen.org>
	<1344941807.5926.25.camel@zakaz.uk.xensource.com>
	<502A3A0B.6000401@xen.org> <20120815171953.GB9984@US-SEA-R8XVZTX>
	<CAOqnZH54ZhSOmZ1zBwF1QPVdBc_g_9859121pcqdnbUFO6pUvw@mail.gmail.com>
	<1345107025.27489.27.camel@zakaz.uk.xensource.com>
	<502CDB7E0200007800095622@nat28.tlf.novell.com>
	<1345114757.27489.82.camel@zakaz.uk.xensource.com>
	<502CEFF0020000780009572A@nat28.tlf.novell.com>
	<1345115945.27489.87.camel@zakaz.uk.xensource.com>
In-Reply-To: <1345115945.27489.87.camel@zakaz.uk.xensource.com>
Cc: Lars Kurth <lars.kurth.xen@gmail.com>, Matt Wilson <msw@amazon.com>,
	Jan Beulich <JBeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.2 Limits (needs review)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: lars.kurth@xen.org
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 16/08/2012 12:19, Ian Campbell wrote:
> So we should just merge the Xen_4.2_limit pages into a column in the 
> Release Features page. Ian.
OK, I added a column. There are a few ???'s, and some of the values on 
Xen_4.2_limit seem to be a reduction compared to 
http://wiki.xen.org/wiki/Xen_Release_Features, so I chose the upper 
limit (in particular for HVM).

Any noteworthy new 4.2 features would also need to be added to 
http://wiki.xen.org/wiki/Xen_Release_Features.

Lars

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 13:16:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 13:16:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T1zvd-0001s4-NT; Thu, 16 Aug 2012 13:16:01 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1T1zvc-0001rv-12
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 13:16:00 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-8.tower-27.messagelabs.com!1345122953!9654995!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.8 required=7.0 tests=MANY_EXCLAMATIONS
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14364 invoked from network); 16 Aug 2012 13:15:53 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-8.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 16 Aug 2012 13:15:53 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1T1zvP-0006Kq-2g; Thu, 16 Aug 2012 13:15:47 +0000
Date: Thu, 16 Aug 2012 14:15:47 +0100
From: Tim Deegan <tim@xen.org>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20120816131547.GE20601@ocelot.phlegethon.org>
References: <50224B7402000078000937DA@nat28.tlf.novell.com>
	<2012081023124696835343@gmail.com>
	<50293785020000780009484C@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50293785020000780009484C@nat28.tlf.novell.com>
User-Agent: Mutt/1.4.2.1i
Cc: tupeng212 <tupeng212@gmail.com>, Keir Fraser <keir@xen.org>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Big Bug:Time in VM goes slower;
	foud Solution but demand Judgement! A Interesting Story!
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 16:21 +0100 on 13 Aug (1344874869), Jan Beulich wrote:
> Below/attached a first draft of a patch to fix not only this issue,
> but a few more with the RTC emulation. Would you give this a
> try?
> 
> Keir, Tim, others - the change to xen/arch/x86/hvm/vpt.c really
> looks more like a hack than a solution, but I don't see another
> way without much more intrusive changes.

It seems no worse than the code that's already there to special-case
lapic timer interrupts.  After 4.3 it might be nice to adjust the vpt
interface to use explicit callbacks instead.

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 13:36:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 13:36:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T20Ea-00029a-IX; Thu, 16 Aug 2012 13:35:36 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T20EZ-00029V-6F
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 13:35:35 +0000
Received: from [85.158.139.83:15936] by server-9.bemta-5.messagelabs.com id
	29/3B-26123-B17FC205; Thu, 16 Aug 2012 13:35:23 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-182.messagelabs.com!1345124120!21125345!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkyODE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 687 invoked from network); 16 Aug 2012 13:35:23 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 13:35:23 -0000
X-IronPort-AV: E=Sophos;i="4.77,778,1336348800"; d="scan'208";a="14041049"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 13:35:02 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 16 Aug 2012 14:35:02 +0100
Message-ID: <1345124100.30865.2.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "lars.kurth@xen.org" <lars.kurth@xen.org>
Date: Thu, 16 Aug 2012 14:35:00 +0100
In-Reply-To: <502CF211.9050406@xen.org>
References: <502934CB.7060409@xen.org>
	<1344941807.5926.25.camel@zakaz.uk.xensource.com>
	<502A3A0B.6000401@xen.org> <20120815171953.GB9984@US-SEA-R8XVZTX>
	<CAOqnZH54ZhSOmZ1zBwF1QPVdBc_g_9859121pcqdnbUFO6pUvw@mail.gmail.com>
	<1345107025.27489.27.camel@zakaz.uk.xensource.com>
	<502CDB7E0200007800095622@nat28.tlf.novell.com>
	<1345114757.27489.82.camel@zakaz.uk.xensource.com>
	<502CEFF0020000780009572A@nat28.tlf.novell.com>
	<1345115945.27489.87.camel@zakaz.uk.xensource.com>
	<502CF211.9050406@xen.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Lars Kurth <lars.kurth.xen@gmail.com>, Matt Wilson <msw@amazon.com>,
	Jan Beulich <JBeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.2 Limits (needs review)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-16 at 14:13 +0100, Lars Kurth wrote:
> On 16/08/2012 12:19, Ian Campbell wrote:
> > So we should just merge the Xen_4.2_limit pages into a column in the 
> > Release Features page. Ian.
> OK, I added a column. There are a few ???'s 

I cleared these:
      * memsharing is still tech preview.
      * credit sched is still a prototype.
      * ipxe is still the pxe stack.
      * qdisk is still a fallback option.

I think that's all accurate.

> Any noteworthy new 4.2 features would also need to be added to 
> http://wiki.xen.org/wiki/Xen_Release_Features.

Yeah.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 13:44:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 13:44:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T20MQ-0002J5-Gl; Thu, 16 Aug 2012 13:43:42 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <tupeng212@gmail.com>) id 1T20MN-0002J0-Jf
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 13:43:40 +0000
Received: from [85.158.138.51:8280] by server-11.bemta-3.messagelabs.com id
	3E/77-23152-709FC205; Thu, 16 Aug 2012 13:43:35 +0000
X-Env-Sender: tupeng212@gmail.com
X-Msg-Ref: server-15.tower-174.messagelabs.com!1345124612!26865629!1
X-Originating-IP: [209.85.160.45]
X-SpamReason: No, hits=1.9 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_60_70,HTML_MESSAGE,MAILTO_TO_SPAM_ADDR,MIME_BASE64_TEXT,
	MIME_BOUND_NEXTPART,ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19041 invoked from network); 16 Aug 2012 13:43:34 -0000
Received: from mail-pb0-f45.google.com (HELO mail-pb0-f45.google.com)
	(209.85.160.45)
	by server-15.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 13:43:34 -0000
Received: by pbbrp12 with SMTP id rp12so1653925pbb.32
	for <xen-devel@lists.xen.org>; Thu, 16 Aug 2012 06:43:31 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=date:from:to:cc:reply-to:subject:references:x-priority:x-guid
	:x-has-attach:x-mailer:mime-version:message-id:content-type;
	bh=zilfsV2OTjoeVwAW86UtZTl1BJQN4FT1cLNiRNBHtSs=;
	b=XG8SFg7v4dRPIP/DgtDv+9ZUvFsx+4RO2avjedqs6bi6B+QCih0/oJhoFUyWicSUkv
	e6IuMX8r0rZraaEV8RhvTzPvnF8eIiPxrRwfzkf5RfJAEOt8TdAdHpBWKmQC4JnqT3oy
	+OkcC1Bet3YOb9ff5qOPiM1GtsByvKKUJ/dpdl4oDiI2/6tMAuhxnQhluSF25HKFgXr/
	EVTzS3bqNjg/bYkCX98HFwipstS2SWakZ5tZ7LIPUpBYTmtRsE1JcRHTKuI498IpWDgO
	649tKBRaV+o4aD5u7zUhBIFUzDn+kr9bajpaEGLj1+nYxC/NbsprtELMBSsBEkGKPK6f
	Ev7Q==
Received: by 10.68.241.202 with SMTP id wk10mr3482564pbc.77.1345124611716;
	Thu, 16 Aug 2012 06:43:31 -0700 (PDT)
Received: from root ([115.199.242.186])
	by mx.google.com with ESMTPS id pn4sm2705093pbb.50.2012.08.16.06.43.25
	(version=SSLv3 cipher=OTHER); Thu, 16 Aug 2012 06:43:31 -0700 (PDT)
Date: Thu, 16 Aug 2012 21:43:32 +0800
From: tupeng212 <tupeng212@gmail.com>
To: "Jan Beulich" <JBeulich@suse.com>
References: <502A3BBC0200007800094B68@nat28.tlf.novell.com>, 
	<2012081522045495397713@gmail.com> <2012081522121039050717@gmail.com>, 
	<502CC9D702000078000955B9@nat28.tlf.novell.com>
X-Priority: 3
X-GUID: F9F46BCC-7AD0-4A5D-8F36-D652162620D5
X-Has-Attach: no
X-Mailer: Foxmail 7.0.1.87[cn]
Mime-Version: 1.0
Message-ID: <201208162143289686654@gmail.com>
Cc: Yang Z Zhang <yang.z.zhang@intel.com>, Tim Deegan <tim@xen.org>,
	Keir Fraser <keir@xen.org>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH,
	RFC v2] x86/HVM: assorted RTC emulation adjustments (was Re: Big
	Bug:Time in VM goes slower...)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: tupeng212 <tupeng212@gmail.com>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4223491905756042710=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.

--===============4223491905756042710==
Content-Type: multipart/alternative;
	boundary="----=_001_NextPart451220271581_=----"

This is a multi-part message in MIME format.

------=_001_NextPart451220271581_=----
Content-Type: text/plain;
	charset="gb2312"
Content-Transfer-Encoding: base64

RGVhciBKYW46DQpJIGRpZG4ndCBjaGVjayB0aGUgZmxpcHBpbmcgaW4gSlZNLCBidXQgZnJvbSB0
aGUgcHJpbnRpbmcgcGVyaW9kIGluIHhlbiwgdGhpcyBzZXR0aW5nIGhhcHBlbmVkKHVuaXQ6IG5z
KToNCjk3NjU2MiwgOTc2NTYyLCA5NzY1NjIsIDk3NjU2MiwgMTU2MjUwMDAsIDk3NjU2MiwgOTc2
NTYyLCA5NzY1NjIsIDk3NjU2MiwgMTU2MjUwMDAgLi4uLi4NCmFuZCBzb21lb25lIHRvbGQgbWUg
d2luZG93cyB1c2UgdGhlc2UgdHdvIHJhdGUvcGVyaW9kIG9ubHkuDQoNCmJlc2lkZXMsIEkgY2hl
Y2tlZCBteSBmb3JtZXIgc2ltcGxlIHRlc3RlciB0aGlzIG1vcm5pbmcgYWZ0ZXIgaXQgcmFuIGZv
ciBhIHdob2xlIG5pZ2h0LCBpdCBsYWdnZWQgbXVjaC4NCnNvIHRoZXJlIGV4aXN0cyBkaWZmZXJl
bmNlIGJldHdlZW4gdmlydHVhbGl6YXRpb24gYW5kIHJlYWxpdHkuDQoNCg0KDQp0dXBlbmcyMTIN
Cg0KRnJvbTogSmFuIEJldWxpY2gNCkRhdGU6IDIwMTItMDgtMTYgMTY6MjINClRvOiB0dXBlbmcy
MTINCkNDOiBZYW5nIFogWmhhbmc7IHhlbi1kZXZlbDsgS2VpciBGcmFzZXI7IFRpbSBEZWVnYW4N
ClN1YmplY3Q6IFJlOiBbWGVuLWRldmVsXSBbUEFUQ0gsIFJGQyB2Ml0geDg2L0hWTTogYXNzb3J0
ZWQgUlRDIGVtdWxhdGlvbiBhZGp1c3RtZW50cyAod2FzIFJlOiBCaWcgQnVnOlRpbWUgaW4gVk0g
Z29lcyBzbG93ZXIuLi4pDQo+Pj4gT24gMTUuMDguMTIgYXQgMTY6MTIsIHR1cGVuZzIxMiA8dHVw
ZW5nMjEyQGdtYWlsLmNvbT4gd3JvdGU6DQo+IFRoZSByZXN1bHRzIGZvciBtZSB3ZXJlIHRoZXNl
Og0KPiAxIEluIG15IHJlYWwgYXBwbGljYXRpb24gZW52aXJvbm1lbnQsIGl0IHdvcmtlZCB2ZXJ5
IHdlbGwgaW4gdGhlIGZvcm1lciANCj4gNW1pbnMsIG11Y2ggYmV0dGVyIHRoYW4gYmVmb3JlLA0K
PiAgYnV0IGF0IGxhc3QgaXQgbGFnZ2VkIGFnYWluLiBJIGRvbid0IGtub3cgd2hldGhlciBpdCBi
ZWxvbmdzIHRvIHRoZSB0d28gDQo+IG1pc3NlZCBmdW5jdGlvbnMuIEkgbGFjayB0aGUgDQo+ICBh
YmlsaXR5IHRvIGZpZ3VyZSB0aGVtIG91dC4NCg0KRGlkIHlvdSBjaGVjayB3aGV0aGVyIHBvc3Np
Ymx5IHRoZSBndWVzdCBrZXJuZWwgc3RhcnRlZCBmbGlwcGluZw0KYmV0d2VlbiB0byByYXRlIHZh
bHVlcz8gSWYgc28sIHRoYXQgaXMgc29tZXRoaW5nIHRoYXQgd2UgY291bGQNCmRlYWwgd2l0aCB0
b28gKHlldCBpdCBtaWdodCBiZSBhIGJpdCBpbnZvbHZlZCwgc28gSSdkIGxpa2UgdG8gYXZvaWQN
CmdvaW5nIHRoYXQgcm91dGUgaWYgaXQgd291bGQgbGVhZCBub3doZXJlKS4NCg0KSmFu

------=_001_NextPart451220271581_=----
Content-Type: text/html;
	charset="gb2312"
Content-Transfer-Encoding: quoted-printable

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<HTML><HEAD>
<META http-equiv=3DContent-Type content=3D"text/html; charset=3DGB2312">
<STYLE>
BLOCKQUOTE {
	MARGIN-TOP: 0px; MARGIN-BOTTOM: 0px; MARGIN-LEFT: 2em
}
OL {
	MARGIN-TOP: 0px; MARGIN-BOTTOM: 0px
}
UL {
	MARGIN-TOP: 0px; MARGIN-BOTTOM: 0px
}
P {
	MARGIN-TOP: 0px; MARGIN-BOTTOM: 0px
}
BODY {
	FONT-SIZE: 10.5pt; COLOR: #000080; LINE-HEIGHT: 1.5; FONT-FAMILY: =CB=CE=
=CC=E5
}
</STYLE>

<META content=3D"MSHTML 6.00.2900.5512" name=3DGENERATOR></HEAD>
<BODY style=3D"MARGIN: 10px">
<DIV>Dear Jan:</DIV>
<DIV>I didn't check the flipping in JVM, but from the printing period in x=
en,=20
this setting happened(unit: ns):</DIV>
<DIV>976562, 976562, 976562, 976562, 15625000, 976562, 976562, 976562, 976=
562,=20
15625000 .....</DIV>
<DIV>and someone told me windows use these two rate/period only.</DIV>
<DIV>&nbsp;</DIV>
<DIV>besides, I checked my former simple tester this morning after it ran =
for a=20
whole night, it lagged much.</DIV>
<DIV>so there&nbsp;exists difference&nbsp;between virtualization and=20
reality.</DIV>
<HR style=3D"WIDTH: 210px; HEIGHT: 1px" align=3Dleft color=3D#b5c4df SIZE=
=3D1>

<DIV><SPAN>tupeng212</SPAN></DIV>
<DIV>&nbsp;</DIV>
<DIV=20
style=3D"BORDER-RIGHT: medium none; PADDING-RIGHT: 0cm; BORDER-TOP: #b5c4d=
f 1pt solid; PADDING-LEFT: 0cm; PADDING-BOTTOM: 0cm; BORDER-LEFT: medium n=
one; PADDING-TOP: 3pt; BORDER-BOTTOM: medium none">
<DIV=20
style=3D"PADDING-RIGHT: 8px; PADDING-LEFT: 8px; FONT-SIZE: 12px; BACKGROUN=
D: #efefef; PADDING-BOTTOM: 8px; COLOR: #000000; PADDING-TOP: 8px">
<DIV><B>From:</B>&nbsp;<A href=3D"mailto:JBeulich@suse.com">Jan Beulich</A=
></DIV>
<DIV><B>Date:</B>&nbsp;2012-08-16&nbsp;16:22</DIV>
<DIV><B>To:</B>&nbsp;<A href=3D"mailto:tupeng212@gmail.com">tupeng212</A><=
/DIV>
<DIV><B>CC:</B>&nbsp;<A href=3D"mailto:yang.z.zhang@intel.com">Yang Z Zhan=
g</A>;=20
<A href=3D"mailto:xen-devel@lists.xen.org">xen-devel</A>; <A=20
href=3D"mailto:keir@xen.org">Keir Fraser</A>; <A href=3D"mailto:tim@xen.or=
g">Tim=20
Deegan</A></DIV>
<DIV><B>Subject:</B>&nbsp;Re: [Xen-devel] [PATCH, RFC v2] x86/HVM: assorte=
d RTC=20
emulation adjustments (was Re: Big Bug:Time in VM goes=20
slower...)</DIV></DIV></DIV>
<DIV>
<DIV>&gt;&gt;&gt;&nbsp;On&nbsp;15.08.12&nbsp;at&nbsp;16:12,&nbsp;tupeng212=
&nbsp;&lt;tupeng212@gmail.com&gt;&nbsp;wrote:</DIV>
<DIV>&gt;&nbsp;The&nbsp;results&nbsp;for&nbsp;me&nbsp;were&nbsp;these:</DI=
V>
<DIV>&gt;&nbsp;1&nbsp;In&nbsp;my&nbsp;real&nbsp;application&nbsp;environme=
nt,&nbsp;it&nbsp;worked&nbsp;very&nbsp;well&nbsp;in&nbsp;the&nbsp;former&n=
bsp;</DIV>
<DIV>&gt;&nbsp;5mins,&nbsp;much&nbsp;better&nbsp;than&nbsp;before,</DIV>
<DIV>&gt;&nbsp;&nbsp;but&nbsp;at&nbsp;last&nbsp;it&nbsp;lagged&nbsp;again.=
&nbsp;I&nbsp;don't&nbsp;know&nbsp;whether&nbsp;it&nbsp;belongs&nbsp;to&nbs=
p;the&nbsp;two&nbsp;</DIV>
<DIV>&gt;&nbsp;missed&nbsp;functions.&nbsp;I&nbsp;lack&nbsp;the&nbsp;</DIV=
>
<DIV>&gt;&nbsp;&nbsp;ability&nbsp;to&nbsp;figure&nbsp;them&nbsp;out.</DIV>
<DIV>&nbsp;</DIV>
<DIV>Did&nbsp;you&nbsp;check&nbsp;whether&nbsp;possibly&nbsp;the&nbsp;gues=
t&nbsp;kernel&nbsp;started&nbsp;flipping</DIV>
<DIV>between&nbsp;to&nbsp;rate&nbsp;values?&nbsp;If&nbsp;so,&nbsp;that&nbs=
p;is&nbsp;something&nbsp;that&nbsp;we&nbsp;could</DIV>
<DIV>deal&nbsp;with&nbsp;too&nbsp;(yet&nbsp;it&nbsp;might&nbsp;be&nbsp;a&n=
bsp;bit&nbsp;involved,&nbsp;so&nbsp;I'd&nbsp;like&nbsp;to&nbsp;avoid</DIV>
<DIV>going&nbsp;that&nbsp;route&nbsp;if&nbsp;it&nbsp;would&nbsp;lead&nbsp;=
nowhere).</DIV>
<DIV>&nbsp;</DIV>
<DIV>Jan</DIV>
<DIV>&nbsp;</DIV></DIV></BODY></HTML>

------=_001_NextPart451220271581_=------



--===============4223491905756042710==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4223491905756042710==--



From xen-devel-bounces@lists.xen.org Thu Aug 16 13:44:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 13:44:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T20MQ-0002J5-Gl; Thu, 16 Aug 2012 13:43:42 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <tupeng212@gmail.com>) id 1T20MN-0002J0-Jf
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 13:43:40 +0000
Received: from [85.158.138.51:8280] by server-11.bemta-3.messagelabs.com id
	3E/77-23152-709FC205; Thu, 16 Aug 2012 13:43:35 +0000
X-Env-Sender: tupeng212@gmail.com
X-Msg-Ref: server-15.tower-174.messagelabs.com!1345124612!26865629!1
X-Originating-IP: [209.85.160.45]
X-SpamReason: No, hits=1.9 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_60_70,HTML_MESSAGE,MAILTO_TO_SPAM_ADDR,MIME_BASE64_TEXT,
	MIME_BOUND_NEXTPART,ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19041 invoked from network); 16 Aug 2012 13:43:34 -0000
Received: from mail-pb0-f45.google.com (HELO mail-pb0-f45.google.com)
	(209.85.160.45)
	by server-15.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 13:43:34 -0000
Received: by pbbrp12 with SMTP id rp12so1653925pbb.32
	for <xen-devel@lists.xen.org>; Thu, 16 Aug 2012 06:43:31 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=date:from:to:cc:reply-to:subject:references:x-priority:x-guid
	:x-has-attach:x-mailer:mime-version:message-id:content-type;
	bh=zilfsV2OTjoeVwAW86UtZTl1BJQN4FT1cLNiRNBHtSs=;
	b=XG8SFg7v4dRPIP/DgtDv+9ZUvFsx+4RO2avjedqs6bi6B+QCih0/oJhoFUyWicSUkv
	e6IuMX8r0rZraaEV8RhvTzPvnF8eIiPxrRwfzkf5RfJAEOt8TdAdHpBWKmQC4JnqT3oy
	+OkcC1Bet3YOb9ff5qOPiM1GtsByvKKUJ/dpdl4oDiI2/6tMAuhxnQhluSF25HKFgXr/
	EVTzS3bqNjg/bYkCX98HFwipstS2SWakZ5tZ7LIPUpBYTmtRsE1JcRHTKuI498IpWDgO
	649tKBRaV+o4aD5u7zUhBIFUzDn+kr9bajpaEGLj1+nYxC/NbsprtELMBSsBEkGKPK6f
	Ev7Q==
Received: by 10.68.241.202 with SMTP id wk10mr3482564pbc.77.1345124611716;
	Thu, 16 Aug 2012 06:43:31 -0700 (PDT)
Received: from root ([115.199.242.186])
	by mx.google.com with ESMTPS id pn4sm2705093pbb.50.2012.08.16.06.43.25
	(version=SSLv3 cipher=OTHER); Thu, 16 Aug 2012 06:43:31 -0700 (PDT)
Date: Thu, 16 Aug 2012 21:43:32 +0800
From: tupeng212 <tupeng212@gmail.com>
To: "Jan Beulich" <JBeulich@suse.com>
References: <502A3BBC0200007800094B68@nat28.tlf.novell.com>, 
	<2012081522045495397713@gmail.com> <2012081522121039050717@gmail.com>, 
	<502CC9D702000078000955B9@nat28.tlf.novell.com>
X-Priority: 3
X-GUID: F9F46BCC-7AD0-4A5D-8F36-D652162620D5
X-Has-Attach: no
X-Mailer: Foxmail 7.0.1.87[cn]
Mime-Version: 1.0
Message-ID: <201208162143289686654@gmail.com>
Cc: Yang Z Zhang <yang.z.zhang@intel.com>, Tim Deegan <tim@xen.org>,
	Keir Fraser <keir@xen.org>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH,
	RFC v2] x86/HVM: assorted RTC emulation adjustments (was Re: Big
	Bug:Time in VM goes slower...)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: tupeng212 <tupeng212@gmail.com>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4223491905756042710=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.

--===============4223491905756042710==
Content-Type: multipart/alternative;
	boundary="----=_001_NextPart451220271581_=----"

This is a multi-part message in MIME format.

------=_001_NextPart451220271581_=----
Content-Type: text/plain;
	charset="gb2312"
Content-Transfer-Encoding: 8bit

Dear Jan:
I didn't check the flipping in JVM, but from the printing period in xen, this setting happened (unit: ns):
976562, 976562, 976562, 976562, 15625000, 976562, 976562, 976562, 976562, 15625000 .....
and someone told me Windows uses these two rates/periods only.

Besides, I checked my former simple tester this morning after it ran for a whole night; it lagged a lot.
So there exists a difference between virtualization and reality.


tupeng212

From: Jan Beulich
Date: 2012-08-16 16:22
To: tupeng212
CC: Yang Z Zhang; xen-devel; Keir Fraser; Tim Deegan
Subject: Re: [Xen-devel] [PATCH, RFC v2] x86/HVM: assorted RTC emulation adjustments (was Re: Big Bug:Time in VM goes slower...)
>>> On 15.08.12 at 16:12, tupeng212 <tupeng212@gmail.com> wrote:
> The results for me were these:
> 1 In my real application environment, it worked very well in the former
> 5 mins, much better than before,
> but at last it lagged again. I don't know whether it belongs to the two
> missed functions. I lack the
> ability to figure them out.

Did you check whether possibly the guest kernel started flipping
between two rate values? If so, that is something that we could
deal with too (yet it might be a bit involved, so I'd like to avoid
going that route if it would lead nowhere).

Jan
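
The two periods in the trace (976562 ns and 15625000 ns) are consistent with the standard MC146818 RTC periodic-interrupt formula, freq = 32768 >> (rate - 1), for rate selectors 6 (1024 Hz) and 10 (64 Hz) — which would fit the report that Windows only uses these two rates. A minimal sketch of that arithmetic (an illustration, not code from this thread):

```python
def rtc_period_ns(rate):
    """Period in ns for an MC146818 RTC register A rate selector (1..15).

    The periodic interrupt frequency is 32768 >> (rate - 1) Hz.
    """
    freq_hz = 32768 >> (rate - 1)
    return 10**9 // freq_hz

print(rtc_period_ns(6))   # 976562  -> the short period in the trace
print(rtc_period_ns(10))  # 15625000 -> the long period in the trace
```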


------=_001_NextPart451220271581_=------



--===============4223491905756042710==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4223491905756042710==--



From xen-devel-bounces@lists.xen.org Thu Aug 16 13:45:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 13:45:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T20Nj-0002Ov-8e; Thu, 16 Aug 2012 13:45:03 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T20Nh-0002ON-Gw
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 13:45:01 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1345124691!9659520!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjk0MzAx\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32555 invoked from network); 16 Aug 2012 13:44:53 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-8.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 16 Aug 2012 13:44:53 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7GDijsu027140
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 16 Aug 2012 13:44:46 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7GDiiWV008932
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 16 Aug 2012 13:44:45 GMT
Received: from abhmt107.oracle.com (abhmt107.oracle.com [141.146.116.59])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7GDiik6025143; Thu, 16 Aug 2012 08:44:44 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 16 Aug 2012 06:44:44 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 1A973402C0; Thu, 16 Aug 2012 09:34:58 -0400 (EDT)
Date: Thu, 16 Aug 2012 09:34:58 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: "Duan, Ronghui" <ronghui.duan@intel.com>
Message-ID: <20120816133457.GA5898@phenom.dumpdata.com>
References: <A21691DE07B84740B5F0B81466D5148A23BCF1DF@SHSMSX102.ccr.corp.intel.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <A21691DE07B84740B5F0B81466D5148A23BCF1DF@SHSMSX102.ccr.corp.intel.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: "Stefano.Stabellini@eu.citrix.com" <Stefano.Stabellini@eu.citrix.com>,
	"Ian.Jackson@eu.citrix.com" <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [RFC v1 0/5] VBD: enlarge max segment per request
 in blkfront
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Aug 16, 2012 at 10:22:56AM +0000, Duan, Ronghui wrote:
> Hi, list.
> The max number of segments per request in the VBD queue is 11, while for native Linux and other VMMs the parameter is set to 128 by default.

Like the FreeBSD one?
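
The 11-vs-128 gap translates directly into per-request payload limits. A quick sketch of the arithmetic, assuming the usual 4 KiB segment size and a 32-entry single-page blkif ring (both are assumptions, not figures from this thread):

```python
PAGE = 4096          # assumed 4 KiB pages/segments
RING_ENTRIES = 32    # assumed single-page blkif ring depth

def request_payload_bytes(max_segments):
    """Largest I/O a single blkif request can carry."""
    return max_segments * PAGE

for segs in (11, 128):
    per_req = request_payload_bytes(segs)
    print(f"{segs:3d} segments -> {per_req // 1024} KiB/request, "
          f"{RING_ENTRIES * per_req // 1024} KiB in flight")
```

Under these assumptions, 11 segments cap a request at 44 KiB, while 128 segments would allow 512 KiB — which is why large sequential I/O benefits the most.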

> This may be caused by the limited size of the ring between frontend and backend. So I wonder whether we can put segment data into another ring and use it dynamically for a single request's needs. Here is a prototype which hasn't been tested much, but it works on a 64-bit Linux 3.4.6 kernel. I can see CPU% reduced to 1/3 of the original in the sequential test, but it brings some overhead which makes random I/O's CPU utilization increase a little.
> 

Did you think also about expanding the ring size to something bigger?

> Here is a short version of the data, using only 1K random reads and 64K sequential reads in direct mode, testing a physical SSD disk as the blkback backend. CPU% is obtained from xentop.

> Read 1K random   IOPS      Dom0 CPU%   DomU CPU%
>   W              52005.9   86.6        71
>   W/O            52123.1   85.8        66.9
>
> Read 64K seq     BW MB/s   Dom0 CPU%   DomU CPU%
>   W              250       27.1        10.6
>   W/O            250       62.6        31.1
> 
> 
> The patch would be simple if we only used the new methods, but we need to consider that a user may run a new kernel as backend with an older one as frontend, and we also need to consider the live migration case. So the change becomes huge...

OK? I think you are implementing the extension documented in

changeset:   24875:a59c1dcfe968
user:        Justin T. Gibbs <justing@spectralogic.com>
date:        Thu Feb 23 10:03:07 2012 +0000
summary:     blkif.h: Define and document the request number/size/segments extension

changeset:   24874:f9789db96c39
user:        Justin T. Gibbs <justing@spectralogic.com>
date:        Thu Feb 23 10:02:30 2012 +0000
summary:     blkif.h: Document the Red Hat and Citrix blkif multi-page ring extensions

so that would be the max-requests-segments one?



> [RFC v1 1/5]
> 	In order to add the new segment ring, refactor the original code and split out some methods related to ring operations.
> [RFC v1 2/5]
> 	Add the segment ring support in blkfront. Most of the code is about suspend/recover.
> [RFC v1 3/5]
> 	Similarly, refactor the original code in blkback.
> [RFC v1 4/5]
> 	In order to support different ring types in blkback, make the pending_req list per disk.

Not sure why you structured the patches this way, but it might make sense
to order them 1, 3, 4, 2, 5. The per-disk 'pending_req' list is an overall
improvement that fixes a lot of concurrency issues. I tried to implement this and ran
into an issue with grants still being active. Did you hit that too, or did it work just fine
for you?
> [RFC v1 5/5]
> 	Add the segment ring support in blkback.

So .. where are the patches? Did I miss them?
> -ronghui
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Thu Aug 16 13:59:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 13:59:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T20bo-0002eL-N6; Thu, 16 Aug 2012 13:59:36 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T20bn-0002eG-LD
	for Xen-devel@lists.xensource.com; Thu, 16 Aug 2012 13:59:35 +0000
Received: from [85.158.143.35:34022] by server-2.bemta-4.messagelabs.com id
	2F/39-31966-6CCFC205; Thu, 16 Aug 2012 13:59:34 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1345125571!12586285!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkyODE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13398 invoked from network); 16 Aug 2012 13:59:32 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 13:59:32 -0000
X-IronPort-AV: E=Sophos;i="4.77,778,1336348800"; d="scan'208";a="14041814"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 13:59:31 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 16 Aug 2012 14:59:31 +0100
Date: Thu, 16 Aug 2012 14:59:14 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Mukesh Rathor <mukesh.rathor@oracle.com>
In-Reply-To: <20120815175724.3405043a@mantra.us.oracle.com>
Message-ID: <alpine.DEB.2.02.1208161454430.4850@kaball.uk.xensource.com>
References: <20120815175724.3405043a@mantra.us.oracle.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [RFC PATCH 1/8]: PVH: Basic and preparatory changes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

It's good to see well split patches :)
But could you please send them inline next time?


>  arch/x86/include/asm/xen/interface.h |    3 +-
>  arch/x86/include/asm/xen/page.h      |    3 ++
>  arch/x86/xen/setup.c                 |   13 ++++++++--
>  arch/x86/xen/smp.c                   |   39 ++++++++++++++++++---------------
>  drivers/xen/cpu_hotplug.c            |    3 +-
>  include/xen/interface/xen.h          |    1 +
>  include/xen/xen.h                    |    4 +++
>  7 files changed, 43 insertions(+), 23 deletions(-)
> 
> diff --git a/arch/x86/include/asm/xen/interface.h b/arch/x86/include/asm/xen/interface.h
> index cbf0c9d..1bd5e88 100644
> --- a/arch/x86/include/asm/xen/interface.h
> +++ b/arch/x86/include/asm/xen/interface.h
> @@ -136,7 +136,8 @@ struct vcpu_guest_context {
>      struct cpu_user_regs user_regs;         /* User-level CPU registers     */
>      struct trap_info trap_ctxt[256];        /* Virtual IDT                  */
>      unsigned long ldt_base, ldt_ents;       /* LDT (linear address, # ents) */
> -    unsigned long gdt_frames[16], gdt_ents; /* GDT (machine frames, # ents) */
> +    unsigned long gdt_frames[16], gdt_ents; /* GDT (machine frames, # ents).*
> +					     * PV in HVM: it's GDTR addr/sz */
>      unsigned long kernel_ss, kernel_sp;     /* Virtual TSS (only SS1/SP1)   */
>      /* NB. User pagetable on x86/64 is placed in ctrlreg[1]. */
>      unsigned long ctrlreg[8];               /* CR0-CR7 (control registers)  */
> diff --git a/arch/x86/include/asm/xen/page.h b/arch/x86/include/asm/xen/page.h
> index 93971e8..d1cfb96 100644
> --- a/arch/x86/include/asm/xen/page.h
> +++ b/arch/x86/include/asm/xen/page.h
> @@ -158,6 +158,9 @@ static inline xpaddr_t machine_to_phys(xmaddr_t machine)
>  static inline unsigned long mfn_to_local_pfn(unsigned long mfn)
>  {
>  	unsigned long pfn = mfn_to_pfn(mfn);
> +
> +	if (xen_feature(XENFEAT_auto_translated_physmap))
> +		return mfn;
>  	if (get_phys_to_machine(pfn) != mfn)
>  		return -1; /* force !pfn_valid() */
>  	return pfn;
> diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
> index ead8557..936f21d 100644
> --- a/arch/x86/xen/setup.c
> +++ b/arch/x86/xen/setup.c
> @@ -500,10 +500,9 @@ void __cpuinit xen_enable_syscall(void)
>  #endif /* CONFIG_X86_64 */
>  }
>  
> -void __init xen_arch_setup(void)
> +/* Normal PV domain not running in HVM container */
> +static __init void inline xen_non_pvh_arch_setup(void)
>  {
> -	xen_panic_handler_init();
> -
>  	HYPERVISOR_vm_assist(VMASST_CMD_enable, VMASST_TYPE_4gb_segments);
>  	HYPERVISOR_vm_assist(VMASST_CMD_enable, VMASST_TYPE_writable_pagetables);
>  
> @@ -517,6 +516,14 @@ void __init xen_arch_setup(void)
>  
>  	xen_enable_sysenter();
>  	xen_enable_syscall();
> +}
> +
> +void __init xen_arch_setup(void)
> +{
> +	xen_panic_handler_init();
> +
> +	if (!xen_pvh_domain())
> +		xen_non_pvh_arch_setup();
>  
>  #ifdef CONFIG_ACPI
>  	if (!(xen_start_info->flags & SIF_INITDOMAIN)) {
> diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
> index f58dca7..cdf269d 100644
> --- a/arch/x86/xen/smp.c
> +++ b/arch/x86/xen/smp.c
> @@ -300,8 +300,6 @@ cpu_initialize_context(unsigned int cpu, struct task_struct *idle)
>  	gdt = get_cpu_gdt_table(cpu);
>  
>  	ctxt->flags = VGCF_IN_KERNEL;
> -	ctxt->user_regs.ds = __USER_DS;
> -	ctxt->user_regs.es = __USER_DS;
>  	ctxt->user_regs.ss = __KERNEL_DS;
>  #ifdef CONFIG_X86_32
>  	ctxt->user_regs.fs = __KERNEL_PERCPU;
> @@ -314,31 +312,36 @@ cpu_initialize_context(unsigned int cpu, struct task_struct *idle)
>  
>  	memset(&ctxt->fpu_ctxt, 0, sizeof(ctxt->fpu_ctxt));
>  
> -	xen_copy_trap_info(ctxt->trap_ctxt);
> +		ctxt->user_regs.ds = __USER_DS;
> +		ctxt->user_regs.es = __USER_DS;
>  
> -	ctxt->ldt_ents = 0;
> +		xen_copy_trap_info(ctxt->trap_ctxt);
>  
> -	BUG_ON((unsigned long)gdt & ~PAGE_MASK);
> +		ctxt->ldt_ents = 0;
>  
> -	gdt_mfn = arbitrary_virt_to_mfn(gdt);
> -	make_lowmem_page_readonly(gdt);
> -	make_lowmem_page_readonly(mfn_to_virt(gdt_mfn));
> +		BUG_ON((unsigned long)gdt & ~PAGE_MASK);
>  
> -	ctxt->gdt_frames[0] = gdt_mfn;
> -	ctxt->gdt_ents      = GDT_ENTRIES;
> +		gdt_mfn = arbitrary_virt_to_mfn(gdt);
> +		make_lowmem_page_readonly(gdt);
> +		make_lowmem_page_readonly(mfn_to_virt(gdt_mfn));
>  
> -	ctxt->user_regs.cs = __KERNEL_CS;
> -	ctxt->user_regs.esp = idle->thread.sp0 - sizeof(struct pt_regs);
> +		ctxt->gdt_frames[0] = gdt_mfn;
> +		ctxt->gdt_ents      = GDT_ENTRIES;
>  
> -	ctxt->kernel_ss = __KERNEL_DS;
> -	ctxt->kernel_sp = idle->thread.sp0;
> +		ctxt->kernel_ss = __KERNEL_DS;
> +		ctxt->kernel_sp = idle->thread.sp0;
>  
>  #ifdef CONFIG_X86_32
> -	ctxt->event_callback_cs     = __KERNEL_CS;
> -	ctxt->failsafe_callback_cs  = __KERNEL_CS;
> +		ctxt->event_callback_cs     = __KERNEL_CS;
> +		ctxt->failsafe_callback_cs  = __KERNEL_CS;
>  #endif
> -	ctxt->event_callback_eip    = (unsigned long)xen_hypervisor_callback;
> -	ctxt->failsafe_callback_eip = (unsigned long)xen_failsafe_callback;
> +		ctxt->event_callback_eip    =
> +					(unsigned long)xen_hypervisor_callback;
> +		ctxt->failsafe_callback_eip =
> +					(unsigned long)xen_failsafe_callback;
> +
> +	ctxt->user_regs.cs = __KERNEL_CS;
> +	ctxt->user_regs.esp = idle->thread.sp0 - sizeof(struct pt_regs);
>  
>  	per_cpu(xen_cr3, cpu) = __pa(swapper_pg_dir);
>  	ctxt->ctrlreg[3] = xen_pfn_to_cr3(virt_to_mfn(swapper_pg_dir));
> diff --git a/drivers/xen/cpu_hotplug.c b/drivers/xen/cpu_hotplug.c
> index 4dcfced..a797359 100644
> --- a/drivers/xen/cpu_hotplug.c
> +++ b/drivers/xen/cpu_hotplug.c
> @@ -100,7 +100,8 @@ static int __init setup_vcpu_hotplug_event(void)
>  	static struct notifier_block xsn_cpu = {
>  		.notifier_call = setup_cpu_watcher };
>  
> -	if (!xen_pv_domain())
> +	/* PVH TBD/FIXME: future work */
> +	if (!xen_pv_domain() || xen_pvh_domain())
>  		return -ENODEV;
>  
>  	register_xenstore_notifier(&xsn_cpu);

I don't think we should use or have xen_pvh_domain() in non-x86 files,
like anything under drivers/xen.
Isn't XENFEAT_auto_translated_physmap enough?


> diff --git a/include/xen/interface/xen.h b/include/xen/interface/xen.h
> index 0801468..1d5bc36 100644
> --- a/include/xen/interface/xen.h
> +++ b/include/xen/interface/xen.h
> @@ -493,6 +493,7 @@ struct dom0_vga_console_info {
>  /* These flags are passed in the 'flags' field of start_info_t. */
>  #define SIF_PRIVILEGED    (1<<0)  /* Is the domain privileged? */
>  #define SIF_INITDOMAIN    (1<<1)  /* Is this the initial control domain? */
> +#define SIF_IS_PVINHVM    (1<<4)  /* Is it a PV running in HVM container? */
>  #define SIF_PM_MASK       (0xFF<<8) /* reserve 1 byte for xen-pm options */
>  
>  typedef uint64_t cpumap_t;

I would avoid adding SIF_IS_PVINHVM, an x86 specific concept, into a
generic xen.h interface file. 


> diff --git a/include/xen/xen.h b/include/xen/xen.h
> index a164024..e823639 100644
> --- a/include/xen/xen.h
> +++ b/include/xen/xen.h
> @@ -18,6 +18,10 @@ extern enum xen_domain_type xen_domain_type;
>  				 xen_domain_type == XEN_PV_DOMAIN)
>  #define xen_hvm_domain()	(xen_domain() &&			\
>  				 xen_domain_type == XEN_HVM_DOMAIN)
> +/* xen_pv_domain check is necessary as start_info ptr is null in HVM. Also,
> + * note, xen PVH domain shares lot of HVM code */
> +#define xen_pvh_domain()       (xen_pv_domain() &&                     \
> +				(xen_start_info->flags & SIF_IS_PVINHVM))
 
Also here.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 13:59:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 13:59:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T20bo-0002eL-N6; Thu, 16 Aug 2012 13:59:36 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T20bn-0002eG-LD
	for Xen-devel@lists.xensource.com; Thu, 16 Aug 2012 13:59:35 +0000
Received: from [85.158.143.35:34022] by server-2.bemta-4.messagelabs.com id
	2F/39-31966-6CCFC205; Thu, 16 Aug 2012 13:59:34 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1345125571!12586285!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkyODE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13398 invoked from network); 16 Aug 2012 13:59:32 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 13:59:32 -0000
X-IronPort-AV: E=Sophos;i="4.77,778,1336348800"; d="scan'208";a="14041814"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 13:59:31 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 16 Aug 2012 14:59:31 +0100
Date: Thu, 16 Aug 2012 14:59:14 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Mukesh Rathor <mukesh.rathor@oracle.com>
In-Reply-To: <20120815175724.3405043a@mantra.us.oracle.com>
Message-ID: <alpine.DEB.2.02.1208161454430.4850@kaball.uk.xensource.com>
References: <20120815175724.3405043a@mantra.us.oracle.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [RFC PATCH 1/8]: PVH: Basic and preparatory changes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

It's good to see well-split patches :)
But could you please send them inline next time?


>  arch/x86/include/asm/xen/interface.h |    3 +-
>  arch/x86/include/asm/xen/page.h      |    3 ++
>  arch/x86/xen/setup.c                 |   13 ++++++++--
>  arch/x86/xen/smp.c                   |   39 ++++++++++++++++++---------------
>  drivers/xen/cpu_hotplug.c            |    3 +-
>  include/xen/interface/xen.h          |    1 +
>  include/xen/xen.h                    |    4 +++
>  7 files changed, 43 insertions(+), 23 deletions(-)
> 
> diff --git a/arch/x86/include/asm/xen/interface.h b/arch/x86/include/asm/xen/interface.h
> index cbf0c9d..1bd5e88 100644
> --- a/arch/x86/include/asm/xen/interface.h
> +++ b/arch/x86/include/asm/xen/interface.h
> @@ -136,7 +136,8 @@ struct vcpu_guest_context {
>      struct cpu_user_regs user_regs;         /* User-level CPU registers     */
>      struct trap_info trap_ctxt[256];        /* Virtual IDT                  */
>      unsigned long ldt_base, ldt_ents;       /* LDT (linear address, # ents) */
> -    unsigned long gdt_frames[16], gdt_ents; /* GDT (machine frames, # ents) */
> +    unsigned long gdt_frames[16], gdt_ents; /* GDT (machine frames, # ents).*
> +					     * PV in HVM: it's GDTR addr/sz */
>      unsigned long kernel_ss, kernel_sp;     /* Virtual TSS (only SS1/SP1)   */
>      /* NB. User pagetable on x86/64 is placed in ctrlreg[1]. */
>      unsigned long ctrlreg[8];               /* CR0-CR7 (control registers)  */
> diff --git a/arch/x86/include/asm/xen/page.h b/arch/x86/include/asm/xen/page.h
> index 93971e8..d1cfb96 100644
> --- a/arch/x86/include/asm/xen/page.h
> +++ b/arch/x86/include/asm/xen/page.h
> @@ -158,6 +158,9 @@ static inline xpaddr_t machine_to_phys(xmaddr_t machine)
>  static inline unsigned long mfn_to_local_pfn(unsigned long mfn)
>  {
>  	unsigned long pfn = mfn_to_pfn(mfn);
> +
> +	if (xen_feature(XENFEAT_auto_translated_physmap))
> +		return mfn;
>  	if (get_phys_to_machine(pfn) != mfn)
>  		return -1; /* force !pfn_valid() */
>  	return pfn;
> diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
> index ead8557..936f21d 100644
> --- a/arch/x86/xen/setup.c
> +++ b/arch/x86/xen/setup.c
> @@ -500,10 +500,9 @@ void __cpuinit xen_enable_syscall(void)
>  #endif /* CONFIG_X86_64 */
>  }
>  
> -void __init xen_arch_setup(void)
> +/* Normal PV domain not running in HVM container */
> +static __init void inline xen_non_pvh_arch_setup(void)
>  {
> -	xen_panic_handler_init();
> -
>  	HYPERVISOR_vm_assist(VMASST_CMD_enable, VMASST_TYPE_4gb_segments);
>  	HYPERVISOR_vm_assist(VMASST_CMD_enable, VMASST_TYPE_writable_pagetables);
>  
> @@ -517,6 +516,14 @@ void __init xen_arch_setup(void)
>  
>  	xen_enable_sysenter();
>  	xen_enable_syscall();
> +}
> +
> +void __init xen_arch_setup(void)
> +{
> +	xen_panic_handler_init();
> +
> +	if (!xen_pvh_domain())
> +		xen_non_pvh_arch_setup();
>  
>  #ifdef CONFIG_ACPI
>  	if (!(xen_start_info->flags & SIF_INITDOMAIN)) {
> diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
> index f58dca7..cdf269d 100644
> --- a/arch/x86/xen/smp.c
> +++ b/arch/x86/xen/smp.c
> @@ -300,8 +300,6 @@ cpu_initialize_context(unsigned int cpu, struct task_struct *idle)
>  	gdt = get_cpu_gdt_table(cpu);
>  
>  	ctxt->flags = VGCF_IN_KERNEL;
> -	ctxt->user_regs.ds = __USER_DS;
> -	ctxt->user_regs.es = __USER_DS;
>  	ctxt->user_regs.ss = __KERNEL_DS;
>  #ifdef CONFIG_X86_32
>  	ctxt->user_regs.fs = __KERNEL_PERCPU;
> @@ -314,31 +312,36 @@ cpu_initialize_context(unsigned int cpu, struct task_struct *idle)
>  
>  	memset(&ctxt->fpu_ctxt, 0, sizeof(ctxt->fpu_ctxt));
>  
> -	xen_copy_trap_info(ctxt->trap_ctxt);
> +		ctxt->user_regs.ds = __USER_DS;
> +		ctxt->user_regs.es = __USER_DS;
>  
> -	ctxt->ldt_ents = 0;
> +		xen_copy_trap_info(ctxt->trap_ctxt);
>  
> -	BUG_ON((unsigned long)gdt & ~PAGE_MASK);
> +		ctxt->ldt_ents = 0;
>  
> -	gdt_mfn = arbitrary_virt_to_mfn(gdt);
> -	make_lowmem_page_readonly(gdt);
> -	make_lowmem_page_readonly(mfn_to_virt(gdt_mfn));
> +		BUG_ON((unsigned long)gdt & ~PAGE_MASK);
>  
> -	ctxt->gdt_frames[0] = gdt_mfn;
> -	ctxt->gdt_ents      = GDT_ENTRIES;
> +		gdt_mfn = arbitrary_virt_to_mfn(gdt);
> +		make_lowmem_page_readonly(gdt);
> +		make_lowmem_page_readonly(mfn_to_virt(gdt_mfn));
>  
> -	ctxt->user_regs.cs = __KERNEL_CS;
> -	ctxt->user_regs.esp = idle->thread.sp0 - sizeof(struct pt_regs);
> +		ctxt->gdt_frames[0] = gdt_mfn;
> +		ctxt->gdt_ents      = GDT_ENTRIES;
>  
> -	ctxt->kernel_ss = __KERNEL_DS;
> -	ctxt->kernel_sp = idle->thread.sp0;
> +		ctxt->kernel_ss = __KERNEL_DS;
> +		ctxt->kernel_sp = idle->thread.sp0;
>  
>  #ifdef CONFIG_X86_32
> -	ctxt->event_callback_cs     = __KERNEL_CS;
> -	ctxt->failsafe_callback_cs  = __KERNEL_CS;
> +		ctxt->event_callback_cs     = __KERNEL_CS;
> +		ctxt->failsafe_callback_cs  = __KERNEL_CS;
>  #endif
> -	ctxt->event_callback_eip    = (unsigned long)xen_hypervisor_callback;
> -	ctxt->failsafe_callback_eip = (unsigned long)xen_failsafe_callback;
> +		ctxt->event_callback_eip    =
> +					(unsigned long)xen_hypervisor_callback;
> +		ctxt->failsafe_callback_eip =
> +					(unsigned long)xen_failsafe_callback;
> +
> +	ctxt->user_regs.cs = __KERNEL_CS;
> +	ctxt->user_regs.esp = idle->thread.sp0 - sizeof(struct pt_regs);
>  
>  	per_cpu(xen_cr3, cpu) = __pa(swapper_pg_dir);
>  	ctxt->ctrlreg[3] = xen_pfn_to_cr3(virt_to_mfn(swapper_pg_dir));
> diff --git a/drivers/xen/cpu_hotplug.c b/drivers/xen/cpu_hotplug.c
> index 4dcfced..a797359 100644
> --- a/drivers/xen/cpu_hotplug.c
> +++ b/drivers/xen/cpu_hotplug.c
> @@ -100,7 +100,8 @@ static int __init setup_vcpu_hotplug_event(void)
>  	static struct notifier_block xsn_cpu = {
>  		.notifier_call = setup_cpu_watcher };
>  
> -	if (!xen_pv_domain())
> +	/* PVH TBD/FIXME: future work */
> +	if (!xen_pv_domain() || xen_pvh_domain())
>  		return -ENODEV;
>  
>  	register_xenstore_notifier(&xsn_cpu);

I don't think we should use or have xen_pvh_domain() in non-x86 files,
like anything under drivers/xen.
Isn't XENFEAT_auto_translated_physmap enough?


> diff --git a/include/xen/interface/xen.h b/include/xen/interface/xen.h
> index 0801468..1d5bc36 100644
> --- a/include/xen/interface/xen.h
> +++ b/include/xen/interface/xen.h
> @@ -493,6 +493,7 @@ struct dom0_vga_console_info {
>  /* These flags are passed in the 'flags' field of start_info_t. */
>  #define SIF_PRIVILEGED    (1<<0)  /* Is the domain privileged? */
>  #define SIF_INITDOMAIN    (1<<1)  /* Is this the initial control domain? */
> +#define SIF_IS_PVINHVM    (1<<4)  /* Is it a PV running in HVM container? */
>  #define SIF_PM_MASK       (0xFF<<8) /* reserve 1 byte for xen-pm options */
>  
>  typedef uint64_t cpumap_t;

I would avoid adding SIF_IS_PVINHVM, an x86-specific concept, to the
generic xen.h interface file.


> diff --git a/include/xen/xen.h b/include/xen/xen.h
> index a164024..e823639 100644
> --- a/include/xen/xen.h
> +++ b/include/xen/xen.h
> @@ -18,6 +18,10 @@ extern enum xen_domain_type xen_domain_type;
>  				 xen_domain_type == XEN_PV_DOMAIN)
>  #define xen_hvm_domain()	(xen_domain() &&			\
>  				 xen_domain_type == XEN_HVM_DOMAIN)
> +/* xen_pv_domain check is necessary as start_info ptr is null in HVM. Also,
> + * note, xen PVH domain shares lot of HVM code */
> +#define xen_pvh_domain()       (xen_pv_domain() &&                     \
> +				(xen_start_info->flags & SIF_IS_PVINHVM))
 
Also here.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 14:06:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 14:06:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T20hr-0002tF-Hc; Thu, 16 Aug 2012 14:05:51 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T20hq-0002t7-4g
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 14:05:50 +0000
Received: from [85.158.143.99:58679] by server-1.bemta-4.messagelabs.com id
	11/4F-07754-D3EFC205; Thu, 16 Aug 2012 14:05:49 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1345125942!27954022!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDcwMTY5NA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22404 invoked from network); 16 Aug 2012 14:05:44 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-3.tower-216.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 16 Aug 2012 14:05:44 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7GE5aG1007824
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 16 Aug 2012 14:05:37 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7GE5a8M018705
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 16 Aug 2012 14:05:36 GMT
Received: from abhmt105.oracle.com (abhmt105.oracle.com [141.146.116.57])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7GE5ar8009695; Thu, 16 Aug 2012 09:05:36 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 16 Aug 2012 07:05:35 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 737C7402C0; Thu, 16 Aug 2012 09:55:49 -0400 (EDT)
Date: Thu, 16 Aug 2012 09:55:49 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: "Duan, Ronghui" <ronghui.duan@intel.com>
Message-ID: <20120816135549.GA17613@phenom.dumpdata.com>
References: <A21691DE07B84740B5F0B81466D5148A23BCF1DF@SHSMSX102.ccr.corp.intel.com>
	<20120816133457.GA5898@phenom.dumpdata.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20120816133457.GA5898@phenom.dumpdata.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: "Stefano.Stabellini@eu.citrix.com" <Stefano.Stabellini@eu.citrix.com>,
	"Ian.Jackson@eu.citrix.com" <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [RFC v1 0/5] VBD: enlarge max segment per request
 in blkfront
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Aug 16, 2012 at 09:34:57AM -0400, Konrad Rzeszutek Wilk wrote:
> On Thu, Aug 16, 2012 at 10:22:56AM +0000, Duan, Ronghui wrote:
> > Hi, list.
> > The max segments for request in VBD queue is 11, while for Linux OS/ other VMM, the parameter is set to 128 in default.
> 
> Like the FreeBSD one?
> 
> > This may be caused by the limited size of ring between Front/Back. So I guess whether we can put segment data into another ring and dynamic use them for the single request's need. Here is prototype which don't do much test, but it can work on Linux 64 bits 3.4.6 kernel. I can see the CPU% can be reduced to 1/3 compared to original in sequential test. But it bring some overhead which will make random IO's cpu utilization increase a little.
> > 
> 
> Did you think also about expanding the ring size to something bigger?
> 
> > Here is a short version data use only 1K random read and 64K sequential read in direct mode. Testing a physical SSD disk as blkback in backend. CPU% is got form xentop.
> 
> > Read 1K random	IOPS	   Dom0 CPU	DomU CPU%
> > 		W	52005.9	86.6	71
> > 		W/O	52123.1	85.8	66.9
> > 			
> > Read 64K seq	BW MB/s	Dom0 CPU	DomU CPU%
> > 	W	250		27.1	       10.6
> > 	W/O	250		62.6	       31.1
> > 
> > 
> > The patch will be simple if we only use new methods. But we need consider that user may use new kernel as backend while an older one as frontend. Also need considerate live migration case. So the change become huge...
> 
> OK? I think you are implementing the extension documented in
> 
> changeset:   24875:a59c1dcfe968
> user:        Justin T. Gibbs <justing@spectralogic.com>
> date:        Thu Feb 23 10:03:07 2012 +0000
> summary:     blkif.h: Define and document the request number/size/segments extension
> 
> changeset:   24874:f9789db96c39
> user:        Justin T. Gibbs <justing@spectralogic.com>
> date:        Thu Feb 23 10:02:30 2012 +0000
> summary:     blkif.h: Document the Red Hat and Citrix blkif multi-page ring extensions
> 
> so that would be the max-requests-segments one?
> 
> 
> 
> > [RFC v1 1/5] 
> > 	In order to add new segment ring, refactoring the original code, split some methods related with ring operation.
> > [RFC v1 2/5]
> > 	Add the segment ring support in blkfront. Most of code is about suspend/recover.
> > [RFC v1 3/5]
> > 	As the same, need refractor the original code in blkback.
> > [RFC v1 4/5]
> > 	In order to support different type of ring type in blkback, make the pending_req list per disk.
> 
> Not sure why you structured the patches like this way, but it might
> make sense to order them in 1, 3, 4, 2, 5 order. The 'pending_req'/per disk is an overall
> improvement that fixes a lot of concurrent issues. I tried to implement this and ran
> in an issue with grants still being active? Did you have issues with that or it worked just fine
> for you?
> > [RFC v1 5/5]
> > 	Add the segment ring support in blkback.
> 
> So .. where are the patches? Did I miss them?

Ah, they just arrived.

I took a brief look at them, and I think they are a step in the right direction.
One thing that is missing is the kfree in 4/5 when the disk goes away. Also,
some code is commented out and it's not clear to me why that is.

Lastly, this protocol should be negotiated using the 'max-request-.. ' key (or whichever
is the proper one), not blkfront-ring-type. It would also be good to CC Justin, as he
might have some guidance here and could test the frontend against his backend
(or vice versa). I am not sure what is involved in setting up the FreeBSD backend that
Spectra Logic is using.. Though this might also involve expanding the ring to a
multi-page one, I think?

And I wonder whether you need such a huge list of ops? Can some of them be trimmed down?
The v1 and v2 ones look quite similar. Oh, and instead of v1 and v2 I would just call them
'large_segment' and 'default_segment'. Or 'lgr_segment' and 'def_segment', perhaps?

Maybe 'huge_segment' and 'generic_segment' - that sounds better.

Finally, it's not clear to me why you are removing the padding on some of the older blkif structures?

Thanks for posting this!
> > -ronghui
> > 
> > 
> > 
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 14:06:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 14:06:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T20hx-0002ti-Ue; Thu, 16 Aug 2012 14:05:57 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T20hv-0002tW-RA
	for Xen-devel@lists.xensource.com; Thu, 16 Aug 2012 14:05:56 +0000
Received: from [85.158.143.99:59262] by server-3.bemta-4.messagelabs.com id
	91/F0-09529-34EFC205; Thu, 16 Aug 2012 14:05:55 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-5.tower-216.messagelabs.com!1345125944!28188575!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkyODE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27773 invoked from network); 16 Aug 2012 14:05:53 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-5.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 14:05:53 -0000
X-IronPort-AV: E=Sophos;i="4.77,778,1336348800"; d="scan'208";a="14042003"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 14:05:43 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 16 Aug 2012 15:05:44 +0100
Date: Thu, 16 Aug 2012 15:05:27 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Mukesh Rathor <mukesh.rathor@oracle.com>
In-Reply-To: <20120815180131.24aaa5ce@mantra.us.oracle.com>
Message-ID: <alpine.DEB.2.02.1208161501550.4850@kaball.uk.xensource.com>
References: <20120815180131.24aaa5ce@mantra.us.oracle.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [[RFC PATCH 2/8]: PVH: changes related to initial
 boot and irq rewiring
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 16 Aug 2012, Mukesh Rathor wrote:
> ---
>  arch/x86/xen/enlighten.c |   67 ++++++++++++++++++++++++++++++++++++++-------
>  arch/x86/xen/irq.c       |   22 ++++++++++++++-
>  2 files changed, 77 insertions(+), 12 deletions(-)
> 
> diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
> index bf4bda6..3a58c51 100644
> --- a/arch/x86/xen/enlighten.c
> +++ b/arch/x86/xen/enlighten.c
> @@ -139,6 +139,8 @@ struct tls_descs {
>   */
>  static DEFINE_PER_CPU(struct tls_descs, shadow_tls_desc);
>  
> +static void __init xen_hvm_init_shared_info(void);
> +
>  static void clamp_max_cpus(void)
>  {
>  #ifdef CONFIG_SMP
> @@ -217,8 +219,8 @@ static void __init xen_banner(void)
>  	struct xen_extraversion extra;
>  	HYPERVISOR_xen_version(XENVER_extraversion, &extra);
>  
> -	printk(KERN_INFO "Booting paravirtualized kernel on %s\n",
> -	       pv_info.name);
> +	printk(KERN_INFO "Booting paravirtualized kernel %son %s\n",
> +		(xen_pvh_domain() ? "in HVM " : ""), pv_info.name);
>  	printk(KERN_INFO "Xen version: %d.%d%s%s\n",
>  	       version >> 16, version & 0xffff, extra.extraversion,
>  	       xen_feature(XENFEAT_mmu_pt_update_preserve_ad) ? " (preserve-AD)" : "");
> @@ -271,12 +273,15 @@ static void xen_cpuid(unsigned int *ax, unsigned int *bx,
>  		break;
>  	}
>  
> -	asm(XEN_EMULATE_PREFIX "cpuid"
> -		: "=a" (*ax),
> -		  "=b" (*bx),
> -		  "=c" (*cx),
> -		  "=d" (*dx)
> -		: "0" (*ax), "2" (*cx));
> +	if (xen_pvh_domain())
> +		native_cpuid(ax, bx, cx, dx);
> +	else
> +		asm(XEN_EMULATE_PREFIX "cpuid"
> +			: "=a" (*ax),
> +			"=b" (*bx),
> +			"=c" (*cx),
> +			"=d" (*dx)
> +			: "0" (*ax), "2" (*cx));
>  
>  	*bx &= maskebx;
>  	*cx &= maskecx;
> @@ -1034,6 +1039,10 @@ static int xen_write_msr_safe(unsigned int msr, unsigned low, unsigned high)
>  
>  void xen_setup_shared_info(void)
>  {
> +	/* do later in xen_pvh_guest_init() when extend_brk is properly setup*/
> +	if (xen_pvh_domain() && xen_initial_domain())
> +		return;
> +
>  	if (!xen_feature(XENFEAT_auto_translated_physmap)) {
>  		set_fixmap(FIX_PARAVIRT_BOOTMAP,
>  			   xen_start_info->shared_info);
> @@ -1044,6 +1053,10 @@ void xen_setup_shared_info(void)
>  		HYPERVISOR_shared_info =
>  			(struct shared_info *)__va(xen_start_info->shared_info);
>  
> +	/* PVH TBD/FIXME: vcpu info placement in phase 2 */
> +	if (xen_pvh_domain())
> +		return;

Is XENFEAT_auto_translated_physmap even believed to work?

If not, we could change the XENFEAT_auto_translated_physmap case and
make it point to xen_hvm_init_shared_info; that would work for PVH as
well.


>  #ifndef CONFIG_SMP
>  	/* In UP this is as good a place as any to set up shared info */
>  	xen_setup_vcpu_info_placement();
> @@ -1274,6 +1287,10 @@ static const struct machine_ops xen_machine_ops __initconst = {
>   */
>  static void __init xen_setup_stackprotector(void)
>  {
> +	if (xen_pvh_domain()) {
> +		switch_to_new_gdt(0);
> +		return;
> +	}
>  	pv_cpu_ops.write_gdt_entry = xen_write_gdt_entry_boot;
>  	pv_cpu_ops.load_gdt = xen_load_gdt_boot;
>  
> @@ -1284,6 +1301,25 @@ static void __init xen_setup_stackprotector(void)
>  	pv_cpu_ops.load_gdt = xen_load_gdt;
>  }
>  
> +static void __init xen_pvh_guest_init(void)
> +{
> +#ifndef __HAVE_ARCH_PTE_SPECIAL
> +	("__HAVE_ARCH_PTE_SPECIAL is required for PVH for now\n");
> +	#error("__HAVE_ARCH_PTE_SPECIAL is required for PVH\n");
> +#endif
> +	/* PVH TBD/FIXME: for now just disable this. */
> +	have_vcpu_info_placement = 0;
> +
> +	if (xen_feature(XENFEAT_hvm_callback_vector))
> +		xen_have_vector_callback = 1;
> +
> +        /* for domU, the library sets start_info.shared_info to pfn, but for
> +         * dom0, it contains mfn. we need to get the pfn for shared_info. PVH
> +	 * uses HVM code in many places */
> +	if (xen_initial_domain())
> +		xen_hvm_init_shared_info();
> +}
> +
>  /* First C function to be called on Xen boot */
>  asmlinkage void __init xen_start_kernel(void)
>  {
> @@ -1294,15 +1330,23 @@ asmlinkage void __init xen_start_kernel(void)
>  	if (!xen_start_info)
>  		return;
>  
> +#ifdef CONFIG_X86_32
> +	xen_raw_printk("ERROR: 32bit PV guest can not run in HVM container\n");
> +	return;
> +#endif

this has to be wrong: shouldn't the return be at least below the
following line?


>  	xen_domain_type = XEN_PV_DOMAIN;
>  
> +	xen_setup_features();
>  	xen_setup_machphys_mapping();
>  
>  	/* Install Xen paravirt ops */
>  	pv_info = xen_info;
>  	pv_init_ops = xen_init_ops;
> -	pv_cpu_ops = xen_cpu_ops;
>  	pv_apic_ops = xen_apic_ops;
> +	if (xen_pvh_domain())
> +		pv_cpu_ops.cpuid = xen_cpuid;
> +	else
> +		pv_cpu_ops = xen_cpu_ops;
>  
>  	x86_init.resources.memory_setup = xen_memory_setup;
>  	x86_init.oem.arch_setup = xen_arch_setup;
> @@ -1334,8 +1378,6 @@ asmlinkage void __init xen_start_kernel(void)
>  	/* Work out if we support NX */
>  	x86_configure_nx();
>  
> -	xen_setup_features();
> -
>  	/* Get mfn list */
>  	if (!xen_feature(XENFEAT_auto_translated_physmap))
>  		xen_build_dynamic_phys_to_machine();
> @@ -1462,6 +1504,9 @@ asmlinkage void __init xen_start_kernel(void)
>  
>  	xen_setup_runstate_info(0);
>  
> +	if (xen_pvh_domain())
> +		xen_pvh_guest_init();
> +
>  	/* Start the world */
>  #ifdef CONFIG_X86_32
>  	i386_start_kernel();
> diff --git a/arch/x86/xen/irq.c b/arch/x86/xen/irq.c
> index 1573376..7c7dfd1 100644
> --- a/arch/x86/xen/irq.c
> +++ b/arch/x86/xen/irq.c
> @@ -100,6 +100,10 @@ PV_CALLEE_SAVE_REGS_THUNK(xen_irq_enable);
>  
>  static void xen_safe_halt(void)
>  {
> +	/* so event channel can be delivered to us, since in HVM container */
> +	if (xen_pvh_domain())
> +		local_irq_enable();
> +
>  	/* Blocking includes an implicit local_irq_enable(). */
>  	if (HYPERVISOR_sched_op(SCHEDOP_block, NULL) != 0)
>  		BUG();
> @@ -126,8 +130,24 @@ static const struct pv_irq_ops xen_irq_ops __initconst = {
>  #endif
>  };
>  
> +static const struct pv_irq_ops xen_pvh_irq_ops __initdata = {
> +	.save_fl = __PV_IS_CALLEE_SAVE(native_save_fl),
> +	.restore_fl = __PV_IS_CALLEE_SAVE(native_restore_fl),
> +	.irq_disable = __PV_IS_CALLEE_SAVE(native_irq_disable),
> +	.irq_enable = __PV_IS_CALLEE_SAVE(native_irq_enable),
> +
> +	.safe_halt = xen_safe_halt,
> +	.halt = xen_halt,
> +#ifdef CONFIG_X86_64
> +	.adjust_exception_frame = paravirt_nop,
> +#endif
> +};
> +
>  void __init xen_init_irq_ops(void)
>  {
> -	pv_irq_ops = xen_irq_ops;
> +	if (xen_pvh_domain())
> +		pv_irq_ops = xen_pvh_irq_ops;
> +	else
> +		pv_irq_ops = xen_irq_ops;
>  	x86_init.irqs.intr_init = xen_init_IRQ;
>  }
> -- 
> 1.7.2.3
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 14:06:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 14:06:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T20hx-0002ti-Ue; Thu, 16 Aug 2012 14:05:57 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T20hv-0002tW-RA
	for Xen-devel@lists.xensource.com; Thu, 16 Aug 2012 14:05:56 +0000
Received: from [85.158.143.99:59262] by server-3.bemta-4.messagelabs.com id
	91/F0-09529-34EFC205; Thu, 16 Aug 2012 14:05:55 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-5.tower-216.messagelabs.com!1345125944!28188575!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkyODE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27773 invoked from network); 16 Aug 2012 14:05:53 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-5.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 14:05:53 -0000
X-IronPort-AV: E=Sophos;i="4.77,778,1336348800"; d="scan'208";a="14042003"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 14:05:43 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 16 Aug 2012 15:05:44 +0100
Date: Thu, 16 Aug 2012 15:05:27 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Mukesh Rathor <mukesh.rathor@oracle.com>
In-Reply-To: <20120815180131.24aaa5ce@mantra.us.oracle.com>
Message-ID: <alpine.DEB.2.02.1208161501550.4850@kaball.uk.xensource.com>
References: <20120815180131.24aaa5ce@mantra.us.oracle.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [[RFC PATCH 2/8]: PVH: changes related to initial
 boot and irq rewiring
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 16 Aug 2012, Mukesh Rathor wrote:
> ---
>  arch/x86/xen/enlighten.c |   67 ++++++++++++++++++++++++++++++++++++++-------
>  arch/x86/xen/irq.c       |   22 ++++++++++++++-
>  2 files changed, 77 insertions(+), 12 deletions(-)
> 
> diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
> index bf4bda6..3a58c51 100644
> --- a/arch/x86/xen/enlighten.c
> +++ b/arch/x86/xen/enlighten.c
> @@ -139,6 +139,8 @@ struct tls_descs {
>   */
>  static DEFINE_PER_CPU(struct tls_descs, shadow_tls_desc);
>  
> +static void __init xen_hvm_init_shared_info(void);
> +
>  static void clamp_max_cpus(void)
>  {
>  #ifdef CONFIG_SMP
> @@ -217,8 +219,8 @@ static void __init xen_banner(void)
>  	struct xen_extraversion extra;
>  	HYPERVISOR_xen_version(XENVER_extraversion, &extra);
>  
> -	printk(KERN_INFO "Booting paravirtualized kernel on %s\n",
> -	       pv_info.name);
> +	printk(KERN_INFO "Booting paravirtualized kernel %son %s\n",
> +		(xen_pvh_domain() ? "in HVM " : ""), pv_info.name);
>  	printk(KERN_INFO "Xen version: %d.%d%s%s\n",
>  	       version >> 16, version & 0xffff, extra.extraversion,
>  	       xen_feature(XENFEAT_mmu_pt_update_preserve_ad) ? " (preserve-AD)" : "");
> @@ -271,12 +273,15 @@ static void xen_cpuid(unsigned int *ax, unsigned int *bx,
>  		break;
>  	}
>  
> -	asm(XEN_EMULATE_PREFIX "cpuid"
> -		: "=a" (*ax),
> -		  "=b" (*bx),
> -		  "=c" (*cx),
> -		  "=d" (*dx)
> -		: "0" (*ax), "2" (*cx));
> +	if (xen_pvh_domain())
> +		native_cpuid(ax, bx, cx, dx);
> +	else
> +		asm(XEN_EMULATE_PREFIX "cpuid"
> +			: "=a" (*ax),
> +			"=b" (*bx),
> +			"=c" (*cx),
> +			"=d" (*dx)
> +			: "0" (*ax), "2" (*cx));
>  
>  	*bx &= maskebx;
>  	*cx &= maskecx;
> @@ -1034,6 +1039,10 @@ static int xen_write_msr_safe(unsigned int msr, unsigned low, unsigned high)
>  
>  void xen_setup_shared_info(void)
>  {
> +	/* do later in xen_pvh_guest_init() when extend_brk is properly setup*/
> +	if (xen_pvh_domain() && xen_initial_domain())
> +		return;
> +
>  	if (!xen_feature(XENFEAT_auto_translated_physmap)) {
>  		set_fixmap(FIX_PARAVIRT_BOOTMAP,
>  			   xen_start_info->shared_info);
> @@ -1044,6 +1053,10 @@ void xen_setup_shared_info(void)
>  		HYPERVISOR_shared_info =
>  			(struct shared_info *)__va(xen_start_info->shared_info);
>  
> +	/* PVH TBD/FIXME: vcpu info placement in phase 2 */
> +	if (xen_pvh_domain())
> +		return;

Is XENFEAT_auto_translated_physmap even believed to work?

If not, we could change the XENFEAT_auto_translated_physmap case and
make it point to xen_hvm_init_shared_info; that would work for PVH as
well.
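If that route were taken, the auto-translated branch would simply reuse the
HVM path. A minimal sketch of the idea, using the names from the patch under
review (hypothetical, not the merged code):

```c
/* Hypothetical sketch of the suggestion above: let the
 * XENFEAT_auto_translated_physmap case go through
 * xen_hvm_init_shared_info(), which already expects a PFN in
 * xen_start_info->shared_info, so PVH needs no extra special-casing. */
void xen_setup_shared_info(void)
{
	if (!xen_feature(XENFEAT_auto_translated_physmap)) {
		set_fixmap(FIX_PARAVIRT_BOOTMAP,
			   xen_start_info->shared_info);
		HYPERVISOR_shared_info =
			(struct shared_info *)fix_to_virt(FIX_PARAVIRT_BOOTMAP);
	} else {
		/* was: __va(xen_start_info->shared_info) */
		xen_hvm_init_shared_info();
	}
	/* ... vcpu info placement as before ... */
}
```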


>  #ifndef CONFIG_SMP
>  	/* In UP this is as good a place as any to set up shared info */
>  	xen_setup_vcpu_info_placement();
> @@ -1274,6 +1287,10 @@ static const struct machine_ops xen_machine_ops __initconst = {
>   */
>  static void __init xen_setup_stackprotector(void)
>  {
> +	if (xen_pvh_domain()) {
> +		switch_to_new_gdt(0);
> +		return;
> +	}
>  	pv_cpu_ops.write_gdt_entry = xen_write_gdt_entry_boot;
>  	pv_cpu_ops.load_gdt = xen_load_gdt_boot;
>  
> @@ -1284,6 +1301,25 @@ static void __init xen_setup_stackprotector(void)
>  	pv_cpu_ops.load_gdt = xen_load_gdt;
>  }
>  
> +static void __init xen_pvh_guest_init(void)
> +{
> +#ifndef __HAVE_ARCH_PTE_SPECIAL
> +	("__HAVE_ARCH_PTE_SPECIAL is required for PVH for now\n");
> +	#error("__HAVE_ARCH_PTE_SPECIAL is required for PVH\n");
> +#endif
> +	/* PVH TBD/FIXME: for now just disable this. */
> +	have_vcpu_info_placement = 0;
> +
> +	if (xen_feature(XENFEAT_hvm_callback_vector))
> +		xen_have_vector_callback = 1;
> +
> +        /* for domU, the library sets start_info.shared_info to pfn, but for
> +         * dom0, it contains mfn. we need to get the pfn for shared_info. PVH
> +	 * uses HVM code in many places */
> +	if (xen_initial_domain())
> +		xen_hvm_init_shared_info();
> +}
> +
>  /* First C function to be called on Xen boot */
>  asmlinkage void __init xen_start_kernel(void)
>  {
> @@ -1294,15 +1330,23 @@ asmlinkage void __init xen_start_kernel(void)
>  	if (!xen_start_info)
>  		return;
>  
> +#ifdef CONFIG_X86_32
> +	xen_raw_printk("ERROR: 32bit PV guest can not run in HVM container\n");
> +	return;
> +#endif

this has to be wrong: shouldn't the return be at least below the
following line?
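Read literally, the reordering being asked for would look like the sketch
below (an illustration of the review comment only, not the actual patch):

```c
	/* Record the domain type first, then reject the unsupported
	 * 32-bit case; this is the ordering the comment above asks for. */
	xen_domain_type = XEN_PV_DOMAIN;

#ifdef CONFIG_X86_32
	xen_raw_printk("ERROR: 32bit PV guest can not run in HVM container\n");
	return;
#endif
```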


>  	xen_domain_type = XEN_PV_DOMAIN;
>  
> +	xen_setup_features();
>  	xen_setup_machphys_mapping();
>  
>  	/* Install Xen paravirt ops */
>  	pv_info = xen_info;
>  	pv_init_ops = xen_init_ops;
> -	pv_cpu_ops = xen_cpu_ops;
>  	pv_apic_ops = xen_apic_ops;
> +	if (xen_pvh_domain())
> +		pv_cpu_ops.cpuid = xen_cpuid;
> +	else
> +		pv_cpu_ops = xen_cpu_ops;
>  
>  	x86_init.resources.memory_setup = xen_memory_setup;
>  	x86_init.oem.arch_setup = xen_arch_setup;
> @@ -1334,8 +1378,6 @@ asmlinkage void __init xen_start_kernel(void)
>  	/* Work out if we support NX */
>  	x86_configure_nx();
>  
> -	xen_setup_features();
> -
>  	/* Get mfn list */
>  	if (!xen_feature(XENFEAT_auto_translated_physmap))
>  		xen_build_dynamic_phys_to_machine();
> @@ -1462,6 +1504,9 @@ asmlinkage void __init xen_start_kernel(void)
>  
>  	xen_setup_runstate_info(0);
>  
> +	if (xen_pvh_domain())
> +		xen_pvh_guest_init();
> +
>  	/* Start the world */
>  #ifdef CONFIG_X86_32
>  	i386_start_kernel();
> diff --git a/arch/x86/xen/irq.c b/arch/x86/xen/irq.c
> index 1573376..7c7dfd1 100644
> --- a/arch/x86/xen/irq.c
> +++ b/arch/x86/xen/irq.c
> @@ -100,6 +100,10 @@ PV_CALLEE_SAVE_REGS_THUNK(xen_irq_enable);
>  
>  static void xen_safe_halt(void)
>  {
> +	/* so event channel can be delivered to us, since in HVM container */
> +	if (xen_pvh_domain())
> +		local_irq_enable();
> +
>  	/* Blocking includes an implicit local_irq_enable(). */
>  	if (HYPERVISOR_sched_op(SCHEDOP_block, NULL) != 0)
>  		BUG();
> @@ -126,8 +130,24 @@ static const struct pv_irq_ops xen_irq_ops __initconst = {
>  #endif
>  };
>  
> +static const struct pv_irq_ops xen_pvh_irq_ops __initdata = {
> +	.save_fl = __PV_IS_CALLEE_SAVE(native_save_fl),
> +	.restore_fl = __PV_IS_CALLEE_SAVE(native_restore_fl),
> +	.irq_disable = __PV_IS_CALLEE_SAVE(native_irq_disable),
> +	.irq_enable = __PV_IS_CALLEE_SAVE(native_irq_enable),
> +
> +	.safe_halt = xen_safe_halt,
> +	.halt = xen_halt,
> +#ifdef CONFIG_X86_64
> +	.adjust_exception_frame = paravirt_nop,
> +#endif
> +};
> +
>  void __init xen_init_irq_ops(void)
>  {
> -	pv_irq_ops = xen_irq_ops;
> +	if (xen_pvh_domain())
> +		pv_irq_ops = xen_pvh_irq_ops;
> +	else
> +		pv_irq_ops = xen_irq_ops;
>  	x86_init.irqs.intr_init = xen_init_IRQ;
>  }
> -- 
> 1.7.2.3
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 14:07:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 14:07:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T20jH-00032e-IG; Thu, 16 Aug 2012 14:07:19 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T20jF-00032L-Qn
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 14:07:18 +0000
Received: from [85.158.139.83:40403] by server-6.bemta-5.messagelabs.com id
	0B/F4-22415-49EFC205; Thu, 16 Aug 2012 14:07:16 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-13.tower-182.messagelabs.com!1345126036!27944576!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3946 invoked from network); 16 Aug 2012 14:07:16 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-13.tower-182.messagelabs.com with SMTP;
	16 Aug 2012 14:07:16 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 16 Aug 2012 15:07:15 +0100
Message-Id: <502D1ADA0200007800095820@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Thu, 16 Aug 2012 15:07:54 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part407131AA.0__="
Subject: [Xen-devel] [PATCH] EPT/PoD: fix interaction with 1Gb pages
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part407131AA.0__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

When PoD got enabled to support 1Gb pages, ept_get_entry() didn't get
updated to match - the assertion in there triggered, indicating that
the call to p2m_pod_demand_populate() needed adjustment.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -521,13 +521,12 @@ static mfn_t ept_get_entry(struct p2m_do
             }
 
             /* Populate this superpage */
-            ASSERT(i == 1);
+            ASSERT(i <= 2);
 
             index = gfn_remainder >> ( i * EPT_TABLE_ORDER);
             ept_entry = table + index;
 
-            if ( !p2m_pod_demand_populate(p2m, gfn, 
-                                            PAGE_ORDER_2M, q) )
+            if ( !p2m_pod_demand_populate(p2m, gfn, i * EPT_TABLE_ORDER, q) )
                 goto retry;
             else
                 goto out;




--=__Part407131AA.0__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part407131AA.0__=--


From xen-devel-bounces@lists.xen.org Thu Aug 16 14:10:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 14:10:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T20mG-0003H0-54; Thu, 16 Aug 2012 14:10:24 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T20mE-0003Gn-GN
	for Xen-devel@lists.xensource.com; Thu, 16 Aug 2012 14:10:23 +0000
Received: from [85.158.139.83:14731] by server-1.bemta-5.messagelabs.com id
	60/65-09980-D4FFC205; Thu, 16 Aug 2012 14:10:21 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-16.tower-182.messagelabs.com!1345126220!21132683!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkyODE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27998 invoked from network); 16 Aug 2012 14:10:20 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 14:10:20 -0000
X-IronPort-AV: E=Sophos;i="4.77,778,1336348800"; d="scan'208";a="14042158"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 14:10:20 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 16 Aug 2012 15:10:20 +0100
Date: Thu, 16 Aug 2012 15:10:03 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Mukesh Rathor <mukesh.rathor@oracle.com>
In-Reply-To: <20120815180250.1e068d10@mantra.us.oracle.com>
Message-ID: <alpine.DEB.2.02.1208161507020.4850@kaball.uk.xensource.com>
References: <20120815180250.1e068d10@mantra.us.oracle.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [RFC PATCH 3/8]: PVH: memory manager and paging
 related changes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 16 Aug 2012, Mukesh Rathor wrote:
>  arch/x86/xen/mmu.c              |  179 ++++++++++++++++++++++++++++++++++++---
>  arch/x86/xen/mmu.h              |    2 +
>  include/xen/interface/memory.h  |   27 ++++++-
>  include/xen/interface/physdev.h |   10 ++
>  include/xen/xen-ops.h           |    7 ++
>  5 files changed, 211 insertions(+), 14 deletions(-)
> 
> diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
> index b65a761..44a6477 100644
> --- a/arch/x86/xen/mmu.c
> +++ b/arch/x86/xen/mmu.c
> @@ -330,6 +330,38 @@ static void xen_set_pte(pte_t *ptep, pte_t pteval)
>         __xen_set_pte(ptep, pteval);
>  }
> 
> +/* This for PV guest in hvm container */
> +void xen_set_clr_mmio_pvh_pte(unsigned long pfn, unsigned long mfn,
> +                             int nr_mfns, int add_mapping)
> +{
> +       int rc;
> +       struct physdev_map_iomem iomem;
> +
> +       iomem.first_gfn = pfn;
> +       iomem.first_mfn = mfn;
> +       iomem.nr_mfns = nr_mfns;
> +       iomem.add_mapping = add_mapping;
> +
> +       rc = HYPERVISOR_physdev_op(PHYSDEVOP_pvh_map_iomem, &iomem);
> +       BUG_ON(rc);
> +}
> +
> +/* This for PV guest in hvm container.
> + * We need this because during boot early_ioremap path eventually calls
> + * set_pte that maps io space. Also, ACPI pages are not mapped into to the
> + * EPT during dom0 creation. The pages are mapped initially here from
> + * kernel_physical_mapping_init() then later the memtype is changed.  */
> +static void xen_dom0pvh_set_pte(pte_t *ptep, pte_t pteval)
> +{
> +       native_set_pte(ptep, pteval);
> +}
> +
> +static void xen_dom0pvh_set_pte_at(struct mm_struct *mm, unsigned long addr,
> +                                  pte_t *ptep, pte_t pteval)
> +{
> +       native_set_pte(ptep, pteval);
> +}
> +
>  static void xen_set_pte_at(struct mm_struct *mm, unsigned long addr,
>                     pte_t *ptep, pte_t pteval)
>  {
> @@ -1197,6 +1229,10 @@ static void xen_post_allocator_init(void);
>  static void __init xen_pagetable_setup_done(pgd_t *base)
>  {
>         xen_setup_shared_info();
> +
> +       if (xen_pvh_domain())
> +               return;
> +
>         xen_post_allocator_init();
>  }
> 
> @@ -1652,6 +1688,10 @@ static void set_page_prot(void *addr, pgprot_t prot)
>         unsigned long pfn = __pa(addr) >> PAGE_SHIFT;
>         pte_t pte = pfn_pte(pfn, prot);
> 
> +       /* for PVH, page tables are native. */
> +       if (xen_pvh_domain())
> +               return;
> +
>         if (HYPERVISOR_update_va_mapping((unsigned long)addr, pte, 0))
>                 BUG();
>  }
> @@ -1745,6 +1785,7 @@ static void convert_pfn_mfn(void *v)
>   * but that's enough to get __va working.  We need to fill in the rest
>   * of the physical mapping once some sort of allocator has been set
>   * up.
> + * NOTE: for PVH, the page tables are native with HAP required.
>   */
>  pgd_t * __init xen_setup_kernel_pagetable(pgd_t *pgd,
>                                          unsigned long max_pfn)
> @@ -1761,10 +1802,12 @@ pgd_t * __init xen_setup_kernel_pagetable(pgd_t *pgd,
>         /* Zap identity mapping */
>         init_level4_pgt[0] = __pgd(0);
> 
> -       /* Pre-constructed entries are in pfn, so convert to mfn */
> -       convert_pfn_mfn(init_level4_pgt);
> -       convert_pfn_mfn(level3_ident_pgt);
> -       convert_pfn_mfn(level3_kernel_pgt);
> +       if (!xen_pvh_domain()) {
> +               /* Pre-constructed entries are in pfn, so convert to mfn */
> +               convert_pfn_mfn(init_level4_pgt);
> +               convert_pfn_mfn(level3_ident_pgt);
> +               convert_pfn_mfn(level3_kernel_pgt);
> +       }
> 
>         l3 = m2v(pgd[pgd_index(__START_KERNEL_map)].pgd);
>         l2 = m2v(l3[pud_index(__START_KERNEL_map)].pud);
> @@ -1787,12 +1830,14 @@ pgd_t * __init xen_setup_kernel_pagetable(pgd_t *pgd,
>         set_page_prot(level2_kernel_pgt, PAGE_KERNEL_RO);
>         set_page_prot(level2_fixmap_pgt, PAGE_KERNEL_RO);
> 
> -       /* Pin down new L4 */
> -       pin_pagetable_pfn(MMUEXT_PIN_L4_TABLE,
> -                         PFN_DOWN(__pa_symbol(init_level4_pgt)));
> +       if (!xen_pvh_domain()) {
> +               /* Pin down new L4 */
> +               pin_pagetable_pfn(MMUEXT_PIN_L4_TABLE,
> +                               PFN_DOWN(__pa_symbol(init_level4_pgt)));
> 
> -       /* Unpin Xen-provided one */
> -       pin_pagetable_pfn(MMUEXT_UNPIN_TABLE, PFN_DOWN(__pa(pgd)));
> +               /* Unpin Xen-provided one */
> +               pin_pagetable_pfn(MMUEXT_UNPIN_TABLE, PFN_DOWN(__pa(pgd)));
> +       }
> 
>         /* Switch over */
>         pgd = init_level4_pgt;
> @@ -1802,9 +1847,13 @@ pgd_t * __init xen_setup_kernel_pagetable(pgd_t *pgd,
>          * structure to attach it to, so make sure we just set kernel
>          * pgd.
>          */
> -       xen_mc_batch();
> -       __xen_write_cr3(true, __pa(pgd));
> -       xen_mc_issue(PARAVIRT_LAZY_CPU);
> +       if (xen_pvh_domain()) {
> +               native_write_cr3(__pa(pgd));
> +       } else {
> +               xen_mc_batch();
> +               __xen_write_cr3(true, __pa(pgd));
> +               xen_mc_issue(PARAVIRT_LAZY_CPU);
> +       }
> 
>         memblock_reserve(__pa(xen_start_info->pt_base),
>                          xen_start_info->nr_pt_frames * PAGE_SIZE);
> @@ -2067,9 +2116,21 @@ static const struct pv_mmu_ops xen_mmu_ops __initconst = {
> 
>  void __init xen_init_mmu_ops(void)
>  {
> +       x86_init.paging.pagetable_setup_done = xen_pagetable_setup_done;
> +
> +       if (xen_pvh_domain()) {
> +               pv_mmu_ops.flush_tlb_others = xen_flush_tlb_others;
> +
> +               /* set_pte* for PCI devices to map iomem. */
> +               if (xen_initial_domain()) {
> +                       pv_mmu_ops.set_pte = xen_dom0pvh_set_pte;
> +                       pv_mmu_ops.set_pte_at = xen_dom0pvh_set_pte_at;
> +               }
> +               return;
> +       }

Considering that the implementation of xen_dom0pvh_set_pte is just
native_set_pte, can't we leave it at the default, which is already
native_set_pte?
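If the paravirt default really is native there (a plausible assumption for
pv_mmu_ops of that era, but worth double-checking), the PVH branch would
shrink to a sketch like:

```c
/* Hypothetical simplification implied by the comment above: since
 * xen_dom0pvh_set_pte{,_at} only call native_set_pte(), leave the
 * set_pte hooks at their (assumed native) paravirt defaults. */
void __init xen_init_mmu_ops(void)
{
	x86_init.paging.pagetable_setup_done = xen_pagetable_setup_done;

	if (xen_pvh_domain()) {
		pv_mmu_ops.flush_tlb_others = xen_flush_tlb_others;
		/* no set_pte/set_pte_at overrides needed */
		return;
	}
	/* ... full PV setup as before ... */
}
```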


>         x86_init.mapping.pagetable_reserve = xen_mapping_pagetable_reserve;
>         x86_init.paging.pagetable_setup_start = xen_pagetable_setup_start;
> -       x86_init.paging.pagetable_setup_done = xen_pagetable_setup_done;
>         pv_mmu_ops = xen_mmu_ops;
> 
>         memset(dummy_mapping, 0xff, PAGE_SIZE);
> @@ -2305,6 +2366,93 @@ void __init xen_hvm_init_mmu_ops(void)
>  }
>  #endif
> 
> +/* Map foreign gmfn, fgmfn, to local pfn, lpfn. This for the user space
> + * creating new guest on PVH dom0 and needs to map domU pages. Called from
> + * exported function, so no need to export this.
> + */
> +static int pvh_add_to_xen_p2m(unsigned long lpfn, unsigned long fgmfn,
> +                             unsigned int domid)
> +{
> +       int rc;
> +       struct xen_add_to_physmap pmb = {.foreign_domid = domid};
> +
> +       pmb.gpfn = lpfn;
> +       pmb.idx = fgmfn;
> +       pmb.space = XENMAPSPACE_gmfn_foreign;
> +       rc = HYPERVISOR_memory_op(XENMEM_add_to_physmap, &pmb);
> +       if (rc) {
> +               pr_warn("Failed to map pfn to mfn rc:%d pfn:%lx mfn:%lx\n",
> +                       rc, lpfn, fgmfn);
> +               return 1;
> +       }
> +       return 0;
> +}
> +
> +/* Unmap an entry from xen p2m table */
> +int pvh_rem_xen_p2m(unsigned long spfn, int count)
> +{
> +       struct xen_remove_from_physmap xrp;
> +       int i, rc;
> +
> +       for (i=0; i < count; i++) {
> +               xrp.domid = DOMID_SELF;
> +               xrp.gpfn = spfn+i;
> +               rc = HYPERVISOR_memory_op(XENMEM_remove_from_physmap, &xrp);
> +               if (rc) {
> +                       pr_warn("Failed to unmap pfn:%lx rc:%d done:%d\n",
> +                               spfn+i, rc, i);
> +                       return 1;
> +               }
> +       }
> +       return 0;
> +}
> +EXPORT_SYMBOL_GPL(pvh_rem_xen_p2m);
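
The early-exit behaviour of that loop (stop at the first failed unmap
and report how far it got) can be sketched in plain C with a mocked
hypercall; mock_remove and rem_range below are illustrative stand-ins,
not kernel code:

```c
/* Mocked stand-in for the XENMEM_remove_from_physmap hypercall:
 * pretend gpfn 0x105 cannot be unmapped. */
static int mock_remove(unsigned long gpfn)
{
	return gpfn == 0x105 ? -1 : 0;
}

/* Mirrors the loop in pvh_rem_xen_p2m: unmap count frames starting at
 * spfn, stopping at the first failure; *done reports progress. */
static int rem_range(unsigned long spfn, int count, int *done)
{
	int i;

	for (i = 0; i < count; i++) {
		if (mock_remove(spfn + i)) {
			*done = i;
			return 1;
		}
	}
	*done = count;
	return 0;
}
```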
> +
> +struct pvh_remap_data {
> +       unsigned long fgmfn;            /* foreign domain's gmfn */
> +       pgprot_t prot;
> +       domid_t  domid;
> +       struct vm_area_struct *vma;
> +};
> +
> +static int pvh_map_pte_fn(pte_t *ptep, pgtable_t token, unsigned long addr,
> +                       void *data)
> +{
> +       struct pvh_remap_data *pvhp = data;
> +       struct xen_pvh_sav_pfn_info *savp = pvhp->vma->vm_private_data;
> +       unsigned long pfn = page_to_pfn(savp->sp_paga[savp->sp_next_todo++]);
> +       pte_t pteval = pte_mkspecial(pfn_pte(pfn, pvhp->prot));
> +
> +       native_set_pte(ptep, pteval);
> +       if (pvh_add_to_xen_p2m(pfn, pvhp->fgmfn, pvhp->domid))
> +               return -EFAULT;
> +
> +       return 0;
> +}
> +
> +/* The only caller at moment passes one gmfn at a time.
> + * PVH TBD/FIXME: expand this in future to honor batch requests.
> + */
> +static int pvh_remap_gmfn_range(struct vm_area_struct *vma,
> +                               unsigned long addr, unsigned long mfn, int nr,
> +                               pgprot_t prot, unsigned domid)
> +{
> +       int err;
> +       struct pvh_remap_data pvhdata;
> +
> +       if (nr > 1)
> +               return -EINVAL;
> +
> +       pvhdata.fgmfn = mfn;
> +       pvhdata.prot = prot;
> +       pvhdata.domid = domid;
> +       pvhdata.vma = vma;
> +       err = apply_to_page_range(vma->vm_mm, addr, nr << PAGE_SHIFT,
> +                                 pvh_map_pte_fn, &pvhdata);
> +       flush_tlb_all();
> +       return err;
> +}
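
The shape of the apply_to_page_range() usage above (invoke a callback
once per page of the range, aborting on the first error) can be mimicked
in userspace; the helpers below are illustrative mocks, not the kernel
API:

```c
#define TOY_PAGE_SIZE 4096UL

typedef int (*toy_pte_fn_t)(unsigned long *ptep, unsigned long addr,
			    void *data);

/* Userspace mock of apply_to_page_range(): call fn for each page-sized
 * step of [addr, addr + size), stopping at the first non-zero return. */
static int toy_apply_to_range(unsigned long *ptes, unsigned long addr,
			      unsigned long size, toy_pte_fn_t fn,
			      void *data)
{
	unsigned long off;
	int rc;

	for (off = 0; off < size; off += TOY_PAGE_SIZE) {
		rc = fn(&ptes[off / TOY_PAGE_SIZE], addr + off, data);
		if (rc)
			return rc;
	}
	return 0;
}

/* Callback in the style of pvh_map_pte_fn: consume the next pfn from
 * the cursor passed via data and write it into the "pte". */
static int toy_map_fn(unsigned long *ptep, unsigned long addr, void *data)
{
	unsigned long *next_pfn = data;

	(void)addr;
	*ptep = (*next_pfn)++;
	return 0;
}
```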
> +
>  #define REMAP_BATCH_SIZE 16
> 
>  struct remap_data {
> @@ -2342,6 +2490,11 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
>         BUG_ON(!((vma->vm_flags & (VM_PFNMAP | VM_RESERVED | VM_IO)) ==
>                                 (VM_PFNMAP | VM_RESERVED | VM_IO)));
> 
> +       if (xen_pvh_domain()) {
> +               /* We need to update the local page tables and the xen HAP */
> +               return pvh_remap_gmfn_range(vma, addr, mfn, nr, prot, domid);
> +       }
> +
>         rmd.mfn = mfn;
>         rmd.prot = prot;
> 
> diff --git a/arch/x86/xen/mmu.h b/arch/x86/xen/mmu.h
> index 73809bb..6d0bb56 100644
> --- a/arch/x86/xen/mmu.h
> +++ b/arch/x86/xen/mmu.h
> @@ -23,4 +23,6 @@ unsigned long xen_read_cr2_direct(void);
> 
>  extern void xen_init_mmu_ops(void);
>  extern void xen_hvm_init_mmu_ops(void);
> +extern void xen_set_clr_mmio_pvh_pte(unsigned long pfn, unsigned long mfn,
> +                                    int nr_mfns, int add_mapping);
>  #endif /* _XEN_MMU_H */
> diff --git a/include/xen/interface/memory.h b/include/xen/interface/memory.h
> index eac3ce1..1b213b1 100644
> --- a/include/xen/interface/memory.h
> +++ b/include/xen/interface/memory.h
> @@ -163,10 +163,19 @@ struct xen_add_to_physmap {
>      /* Which domain to change the mapping for. */
>      domid_t domid;
> 
> +    /* Number of pages to go through for gmfn_range */
> +    uint16_t    size;
> +
>      /* Source mapping space. */
>  #define XENMAPSPACE_shared_info 0 /* shared info page */
>  #define XENMAPSPACE_grant_table 1 /* grant table page */
> -    unsigned int space;
> +#define XENMAPSPACE_gmfn        2 /* GMFN */
> +#define XENMAPSPACE_gmfn_range  3 /* GMFN range */
> +#define XENMAPSPACE_gmfn_foreign 4 /* GMFN from another guest */
> +    uint16_t space;
> +    domid_t foreign_domid;         /* IFF XENMAPSPACE_gmfn_foreign */
> +
> +#define XENMAPIDX_grant_table_status 0x80000000

As you have seen, I have a very similar patch in my series. 


>      /* Index into source mapping space. */
>      unsigned long idx;
> @@ -234,4 +243,20 @@ DEFINE_GUEST_HANDLE_STRUCT(xen_memory_map);
>   * during a driver critical region.
>   */
>  extern spinlock_t xen_reservation_lock;
> +
> +/*
> + * Unmaps the page appearing at a particular GPFN from the specified guest's
> + * pseudophysical address space.
> + * arg == addr of xen_remove_from_physmap_t.
> + */
> +#define XENMEM_remove_from_physmap      15
> +struct xen_remove_from_physmap {
> +    /* Which domain to change the mapping for. */
> +    domid_t domid;
> +
> +    /* GPFN of the current mapping of the page. */
> +    unsigned long     gpfn;
> +};
> +DEFINE_GUEST_HANDLE_STRUCT(xen_remove_from_physmap);
> +
>  #endif /* __XEN_PUBLIC_MEMORY_H__ */
> diff --git a/include/xen/interface/physdev.h b/include/xen/interface/physdev.h
> index 9ce788d..80f792e 100644
> --- a/include/xen/interface/physdev.h
> +++ b/include/xen/interface/physdev.h
> @@ -258,6 +258,16 @@ struct physdev_pci_device {
>      uint8_t devfn;
>  };
> 
> +#define PHYSDEVOP_pvh_map_iomem        29
> +struct physdev_map_iomem {
> +    /* IN */
> +    uint64_t first_gfn;
> +    uint64_t first_mfn;
> +    uint32_t nr_mfns;
> +    uint32_t add_mapping;        /* 1 == add mapping;  0 == unmap */
> +
> +};
> +
>  /*
>   * Notify that some PIRQ-bound event channels have been unmasked.
>   * ** This command is obsolete since interface version 0x00030202 and is **
> diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
> index 6a198e4..fa595e1 100644
> --- a/include/xen/xen-ops.h
> +++ b/include/xen/xen-ops.h
> @@ -29,4 +29,11 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
>                                unsigned long mfn, int nr,
>                                pgprot_t prot, unsigned domid);
> 
> +struct xen_pvh_sav_pfn_info {
> +       struct page **sp_paga;  /* save pfn (info) page array */
> +       int sp_num_pgs;
> +       int sp_next_todo;
> +};
> +extern int pvh_rem_xen_p2m(unsigned long spfn, int count);
> +
>  #endif /* INCLUDE_XEN_OPS_H */
> --
> 1.7.2.3
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 14:10:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 14:10:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T20mG-0003H0-54; Thu, 16 Aug 2012 14:10:24 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T20mE-0003Gn-GN
	for Xen-devel@lists.xensource.com; Thu, 16 Aug 2012 14:10:23 +0000
Received: from [85.158.139.83:14731] by server-1.bemta-5.messagelabs.com id
	60/65-09980-D4FFC205; Thu, 16 Aug 2012 14:10:21 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-16.tower-182.messagelabs.com!1345126220!21132683!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkyODE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27998 invoked from network); 16 Aug 2012 14:10:20 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 14:10:20 -0000
X-IronPort-AV: E=Sophos;i="4.77,778,1336348800"; d="scan'208";a="14042158"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 14:10:20 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 16 Aug 2012 15:10:20 +0100
Date: Thu, 16 Aug 2012 15:10:03 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Mukesh Rathor <mukesh.rathor@oracle.com>
In-Reply-To: <20120815180250.1e068d10@mantra.us.oracle.com>
Message-ID: <alpine.DEB.2.02.1208161507020.4850@kaball.uk.xensource.com>
References: <20120815180250.1e068d10@mantra.us.oracle.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [RFC PATCH 3/8]: PVH: memory manager and paging
 related changes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 16 Aug 2012, Mukesh Rathor wrote:
>  arch/x86/xen/mmu.c              |  179 ++++++++++++++++++++++++++++++++++++---
>  arch/x86/xen/mmu.h              |    2 +
>  include/xen/interface/memory.h  |   27 ++++++-
>  include/xen/interface/physdev.h |   10 ++
>  include/xen/xen-ops.h           |    7 ++
>  5 files changed, 211 insertions(+), 14 deletions(-)
> 
> diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
> index b65a761..44a6477 100644
> --- a/arch/x86/xen/mmu.c
> +++ b/arch/x86/xen/mmu.c
> @@ -330,6 +330,38 @@ static void xen_set_pte(pte_t *ptep, pte_t pteval)
>         __xen_set_pte(ptep, pteval);
>  }
> 
> +/* This for PV guest in hvm container */
> +void xen_set_clr_mmio_pvh_pte(unsigned long pfn, unsigned long mfn,
> +                             int nr_mfns, int add_mapping)
> +{
> +       int rc;
> +       struct physdev_map_iomem iomem;
> +
> +       iomem.first_gfn = pfn;
> +       iomem.first_mfn = mfn;
> +       iomem.nr_mfns = nr_mfns;
> +       iomem.add_mapping = add_mapping;
> +
> +       rc = HYPERVISOR_physdev_op(PHYSDEVOP_pvh_map_iomem, &iomem);
> +       BUG_ON(rc);
> +}
> +
> +/* This for PV guest in hvm container.
> + * We need this because during boot early_ioremap path eventually calls
> + * set_pte that maps io space. Also, ACPI pages are not mapped into to the
> + * EPT during dom0 creation. The pages are mapped initially here from
> + * kernel_physical_mapping_init() then later the memtype is changed.  */
> +static void xen_dom0pvh_set_pte(pte_t *ptep, pte_t pteval)
> +{
> +       native_set_pte(ptep, pteval);
> +}
> +
> +static void xen_dom0pvh_set_pte_at(struct mm_struct *mm, unsigned long addr,
> +                                  pte_t *ptep, pte_t pteval)
> +{
> +       native_set_pte(ptep, pteval);
> +}
> +
>  static void xen_set_pte_at(struct mm_struct *mm, unsigned long addr,
>                     pte_t *ptep, pte_t pteval)
>  {
> @@ -1197,6 +1229,10 @@ static void xen_post_allocator_init(void);
>  static void __init xen_pagetable_setup_done(pgd_t *base)
>  {
>         xen_setup_shared_info();
> +
> +       if (xen_pvh_domain())
> +               return;
> +
>         xen_post_allocator_init();
>  }
> 
> @@ -1652,6 +1688,10 @@ static void set_page_prot(void *addr, pgprot_t prot)
>         unsigned long pfn = __pa(addr) >> PAGE_SHIFT;
>         pte_t pte = pfn_pte(pfn, prot);
> 
> +       /* for PVH, page tables are native. */
> +       if (xen_pvh_domain())
> +               return;
> +
>         if (HYPERVISOR_update_va_mapping((unsigned long)addr, pte, 0))
>                 BUG();
>  }
> @@ -1745,6 +1785,7 @@ static void convert_pfn_mfn(void *v)
>   * but that's enough to get __va working.  We need to fill in the rest
>   * of the physical mapping once some sort of allocator has been set
>   * up.
> + * NOTE: for PVH, the page tables are native with HAP required.
>   */
>  pgd_t * __init xen_setup_kernel_pagetable(pgd_t *pgd,
>                                          unsigned long max_pfn)
> @@ -1761,10 +1802,12 @@ pgd_t * __init xen_setup_kernel_pagetable(pgd_t *pgd,
>         /* Zap identity mapping */
>         init_level4_pgt[0] = __pgd(0);
> 
> -       /* Pre-constructed entries are in pfn, so convert to mfn */
> -       convert_pfn_mfn(init_level4_pgt);
> -       convert_pfn_mfn(level3_ident_pgt);
> -       convert_pfn_mfn(level3_kernel_pgt);
> +       if (!xen_pvh_domain()) {
> +               /* Pre-constructed entries are in pfn, so convert to mfn */
> +               convert_pfn_mfn(init_level4_pgt);
> +               convert_pfn_mfn(level3_ident_pgt);
> +               convert_pfn_mfn(level3_kernel_pgt);
> +       }
> 
>         l3 = m2v(pgd[pgd_index(__START_KERNEL_map)].pgd);
>         l2 = m2v(l3[pud_index(__START_KERNEL_map)].pud);
> @@ -1787,12 +1830,14 @@ pgd_t * __init xen_setup_kernel_pagetable(pgd_t *pgd,
>         set_page_prot(level2_kernel_pgt, PAGE_KERNEL_RO);
>         set_page_prot(level2_fixmap_pgt, PAGE_KERNEL_RO);
> 
> -       /* Pin down new L4 */
> -       pin_pagetable_pfn(MMUEXT_PIN_L4_TABLE,
> -                         PFN_DOWN(__pa_symbol(init_level4_pgt)));
> +       if (!xen_pvh_domain()) {
> +               /* Pin down new L4 */
> +               pin_pagetable_pfn(MMUEXT_PIN_L4_TABLE,
> +                               PFN_DOWN(__pa_symbol(init_level4_pgt)));
> 
> -       /* Unpin Xen-provided one */
> -       pin_pagetable_pfn(MMUEXT_UNPIN_TABLE, PFN_DOWN(__pa(pgd)));
> +               /* Unpin Xen-provided one */
> +               pin_pagetable_pfn(MMUEXT_UNPIN_TABLE, PFN_DOWN(__pa(pgd)));
> +       }
> 
>         /* Switch over */
>         pgd = init_level4_pgt;
> @@ -1802,9 +1847,13 @@ pgd_t * __init xen_setup_kernel_pagetable(pgd_t *pgd,
>          * structure to attach it to, so make sure we just set kernel
>          * pgd.
>          */
> -       xen_mc_batch();
> -       __xen_write_cr3(true, __pa(pgd));
> -       xen_mc_issue(PARAVIRT_LAZY_CPU);
> +       if (xen_pvh_domain()) {
> +               native_write_cr3(__pa(pgd));
> +       } else {
> +               xen_mc_batch();
> +               __xen_write_cr3(true, __pa(pgd));
> +               xen_mc_issue(PARAVIRT_LAZY_CPU);
> +       }
> 
>         memblock_reserve(__pa(xen_start_info->pt_base),
>                          xen_start_info->nr_pt_frames * PAGE_SIZE);
> @@ -2067,9 +2116,21 @@ static const struct pv_mmu_ops xen_mmu_ops __initconst = {
> 
>  void __init xen_init_mmu_ops(void)
>  {
> +       x86_init.paging.pagetable_setup_done = xen_pagetable_setup_done;
> +
> +       if (xen_pvh_domain()) {
> +               pv_mmu_ops.flush_tlb_others = xen_flush_tlb_others;
> +
> +               /* set_pte* for PCI devices to map iomem. */
> +               if (xen_initial_domain()) {
> +                       pv_mmu_ops.set_pte = xen_dom0pvh_set_pte;
> +                       pv_mmu_ops.set_pte_at = xen_dom0pvh_set_pte_at;
> +               }
> +               return;
> +       }

Considering that the implementation of xen_dom0pvh_set_pte is just a
call to native_set_pte, can't we simply leave the slot at its default,
which is already native_set_pte?


>         x86_init.mapping.pagetable_reserve = xen_mapping_pagetable_reserve;
>         x86_init.paging.pagetable_setup_start = xen_pagetable_setup_start;
> -       x86_init.paging.pagetable_setup_done = xen_pagetable_setup_done;
>         pv_mmu_ops = xen_mmu_ops;
> 
>         memset(dummy_mapping, 0xff, PAGE_SIZE);
> @@ -2305,6 +2366,93 @@ void __init xen_hvm_init_mmu_ops(void)
>  }
>  #endif
> 
> +/* Map foreign gmfn, fgmfn, to local pfn, lpfn. This for the user space
> + * creating new guest on PVH dom0 and needs to map domU pages. Called from
> + * exported function, so no need to export this.
> + */
> +static int pvh_add_to_xen_p2m(unsigned long lpfn, unsigned long fgmfn,
> +                             unsigned int domid)
> +{
> +       int rc;
> +       struct xen_add_to_physmap pmb = {.foreign_domid = domid};
> +
> +       pmb.gpfn = lpfn;
> +       pmb.idx = fgmfn;
> +       pmb.space = XENMAPSPACE_gmfn_foreign;
> +       rc = HYPERVISOR_memory_op(XENMEM_add_to_physmap, &pmb);
> +       if (rc) {
> +               pr_warn("Failed to map pfn to mfn rc:%d pfn:%lx mfn:%lx\n",
> +                       rc, lpfn, fgmfn);
> +               return 1;
> +       }
> +       return 0;
> +}
> +
> +/* Unmap an entry from xen p2m table */
> +int pvh_rem_xen_p2m(unsigned long spfn, int count)
> +{
> +       struct xen_remove_from_physmap xrp;
> +       int i, rc;
> +
> +       for (i=0; i < count; i++) {
> +               xrp.domid = DOMID_SELF;
> +               xrp.gpfn = spfn+i;
> +               rc = HYPERVISOR_memory_op(XENMEM_remove_from_physmap, &xrp);
> +               if (rc) {
> +                       pr_warn("Failed to unmap pfn:%lx rc:%d done:%d\n",
> +                               spfn+i, rc, i);
> +                       return 1;
> +               }
> +       }
> +       return 0;
> +}
> +EXPORT_SYMBOL_GPL(pvh_rem_xen_p2m);
> +
> +struct pvh_remap_data {
> +       unsigned long fgmfn;            /* foreign domain's gmfn */
> +       pgprot_t prot;
> +       domid_t  domid;
> +       struct vm_area_struct *vma;
> +};
> +
> +static int pvh_map_pte_fn(pte_t *ptep, pgtable_t token, unsigned long addr,
> +                       void *data)
> +{
> +       struct pvh_remap_data *pvhp = data;
> +       struct xen_pvh_sav_pfn_info *savp = pvhp->vma->vm_private_data;
> +       unsigned long pfn = page_to_pfn(savp->sp_paga[savp->sp_next_todo++]);
> +       pte_t pteval = pte_mkspecial(pfn_pte(pfn, pvhp->prot));
> +
> +       native_set_pte(ptep, pteval);
> +       if (pvh_add_to_xen_p2m(pfn, pvhp->fgmfn, pvhp->domid))
> +               return -EFAULT;
> +
> +       return 0;
> +}
> +
> +/* The only caller at moment passes one gmfn at a time.
> + * PVH TBD/FIXME: expand this in future to honor batch requests.
> + */
> +static int pvh_remap_gmfn_range(struct vm_area_struct *vma,
> +                               unsigned long addr, unsigned long mfn, int nr,
> +                               pgprot_t prot, unsigned domid)
> +{
> +       int err;
> +       struct pvh_remap_data pvhdata;
> +
> +       if (nr > 1)
> +               return -EINVAL;
> +
> +       pvhdata.fgmfn = mfn;
> +       pvhdata.prot = prot;
> +       pvhdata.domid = domid;
> +       pvhdata.vma = vma;
> +       err = apply_to_page_range(vma->vm_mm, addr, nr << PAGE_SHIFT,
> +                                 pvh_map_pte_fn, &pvhdata);
> +       flush_tlb_all();
> +       return err;
> +}
> +
>  #define REMAP_BATCH_SIZE 16
> 
>  struct remap_data {
> @@ -2342,6 +2490,11 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
>         BUG_ON(!((vma->vm_flags & (VM_PFNMAP | VM_RESERVED | VM_IO)) ==
>                                 (VM_PFNMAP | VM_RESERVED | VM_IO)));
> 
> +       if (xen_pvh_domain()) {
> +               /* We need to update the local page tables and the xen HAP */
> +               return pvh_remap_gmfn_range(vma, addr, mfn, nr, prot, domid);
> +       }
> +
>         rmd.mfn = mfn;
>         rmd.prot = prot;
> 
> diff --git a/arch/x86/xen/mmu.h b/arch/x86/xen/mmu.h
> index 73809bb..6d0bb56 100644
> --- a/arch/x86/xen/mmu.h
> +++ b/arch/x86/xen/mmu.h
> @@ -23,4 +23,6 @@ unsigned long xen_read_cr2_direct(void);
> 
>  extern void xen_init_mmu_ops(void);
>  extern void xen_hvm_init_mmu_ops(void);
> +extern void xen_set_clr_mmio_pvh_pte(unsigned long pfn, unsigned long mfn,
> +                                    int nr_mfns, int add_mapping);
>  #endif /* _XEN_MMU_H */
> diff --git a/include/xen/interface/memory.h b/include/xen/interface/memory.h
> index eac3ce1..1b213b1 100644
> --- a/include/xen/interface/memory.h
> +++ b/include/xen/interface/memory.h
> @@ -163,10 +163,19 @@ struct xen_add_to_physmap {
>      /* Which domain to change the mapping for. */
>      domid_t domid;
> 
> +    /* Number of pages to go through for gmfn_range */
> +    uint16_t    size;
> +
>      /* Source mapping space. */
>  #define XENMAPSPACE_shared_info 0 /* shared info page */
>  #define XENMAPSPACE_grant_table 1 /* grant table page */
> -    unsigned int space;
> +#define XENMAPSPACE_gmfn        2 /* GMFN */
> +#define XENMAPSPACE_gmfn_range  3 /* GMFN range */
> +#define XENMAPSPACE_gmfn_foreign 4 /* GMFN from another guest */
> +    uint16_t space;
> +    domid_t foreign_domid;         /* IFF XENMAPSPACE_gmfn_foreign */
> +
> +#define XENMAPIDX_grant_table_status 0x80000000

As you have seen, I have a very similar patch in my series. 


>      /* Index into source mapping space. */
>      unsigned long idx;
> @@ -234,4 +243,20 @@ DEFINE_GUEST_HANDLE_STRUCT(xen_memory_map);
>   * during a driver critical region.
>   */
>  extern spinlock_t xen_reservation_lock;
> +
> +/*
> + * Unmaps the page appearing at a particular GPFN from the specified guest's
> + * pseudophysical address space.
> + * arg == addr of xen_remove_from_physmap_t.
> + */
> +#define XENMEM_remove_from_physmap      15
> +struct xen_remove_from_physmap {
> +    /* Which domain to change the mapping for. */
> +    domid_t domid;
> +
> +    /* GPFN of the current mapping of the page. */
> +    unsigned long     gpfn;
> +};
> +DEFINE_GUEST_HANDLE_STRUCT(xen_remove_from_physmap);
> +
>  #endif /* __XEN_PUBLIC_MEMORY_H__ */
> diff --git a/include/xen/interface/physdev.h b/include/xen/interface/physdev.h
> index 9ce788d..80f792e 100644
> --- a/include/xen/interface/physdev.h
> +++ b/include/xen/interface/physdev.h
> @@ -258,6 +258,16 @@ struct physdev_pci_device {
>      uint8_t devfn;
>  };
> 
> +#define PHYSDEVOP_pvh_map_iomem        29
> +struct physdev_map_iomem {
> +    /* IN */
> +    uint64_t first_gfn;
> +    uint64_t first_mfn;
> +    uint32_t nr_mfns;
> +    uint32_t add_mapping;        /* 1 == add mapping;  0 == unmap */
> +
> +};
> +
>  /*
>   * Notify that some PIRQ-bound event channels have been unmasked.
>   * ** This command is obsolete since interface version 0x00030202 and is **
> diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
> index 6a198e4..fa595e1 100644
> --- a/include/xen/xen-ops.h
> +++ b/include/xen/xen-ops.h
> @@ -29,4 +29,11 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
>                                unsigned long mfn, int nr,
>                                pgprot_t prot, unsigned domid);
> 
> +struct xen_pvh_sav_pfn_info {
> +       struct page **sp_paga;  /* save pfn (info) page array */
> +       int sp_num_pgs;
> +       int sp_next_todo;
> +};
> +extern int pvh_rem_xen_p2m(unsigned long spfn, int count);
> +
>  #endif /* INCLUDE_XEN_OPS_H */
> --
> 1.7.2.3
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 14:10:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 14:10:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T20mM-0003Hq-IS; Thu, 16 Aug 2012 14:10:30 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <afaerber@suse.de>) id 1T20mK-0003HU-NG
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 14:10:28 +0000
Received: from [85.158.139.83:16009] by server-11.bemta-5.messagelabs.com id
	B6/53-29296-35FFC205; Thu, 16 Aug 2012 14:10:27 +0000
X-Env-Sender: afaerber@suse.de
X-Msg-Ref: server-16.tower-182.messagelabs.com!1345126227!21132711!1
X-Originating-IP: [195.135.220.15]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28374 invoked from network); 16 Aug 2012 14:10:27 -0000
Received: from cantor2.suse.de (HELO mx2.suse.de) (195.135.220.15)
	by server-16.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 16 Aug 2012 14:10:27 -0000
Received: from relay1.suse.de (unknown [195.135.220.254])
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(No client certificate requested)
	by mx2.suse.de (Postfix) with ESMTP id 8F626A2FD6;
	Thu, 16 Aug 2012 16:10:26 +0200 (CEST)
Message-ID: <502CFF4F.3040003@suse.de>
Date: Thu, 16 Aug 2012 16:10:23 +0200
From: =?ISO-8859-1?Q?Andreas_F=E4rber?= <afaerber@suse.de>
Organization: SUSE LINUX Products GmbH
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120713 Thunderbird/14.0
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
References: <1345122741-32492-1-git-send-email-sw@weilnetz.de>
	<20120816133226.GA22676@stefanha-thinkpad.localdomain>
In-Reply-To: <20120816133226.GA22676@stefanha-thinkpad.localdomain>
X-Enigmail-Version: 1.5a1pre
Cc: qemu-trivial@nongnu.org, Stefan Hajnoczi <stefanha@gmail.com>,
	"xen-devel@lists.xensource.com Devel" <xen-devel@lists.xensource.com>,
	qemu-devel@nongnu.org, Stefan Weil <sw@weilnetz.de>
Subject: Re: [Xen-devel] [Qemu-devel] [PATCH] Spelling fixes in comments and
 macro names (ressource -> resource)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 16.08.2012 15:32, Stefan Hajnoczi wrote:
> On Thu, Aug 16, 2012 at 03:12:21PM +0200, Stefan Weil wrote:
>> Macro XEN_HOST_PCI_RESOURCE_BUFFER_SIZE is only used locally,
>> so the change should be safe.
>>
>> Signed-off-by: Stefan Weil <sw@weilnetz.de>
>> ---
>>  hw/xen-host-pci-device.c |    6 +++---
>>  1 file changed, 3 insertions(+), 3 deletions(-)
>

> Thanks, applied to the trivial patches tree:
> https://github.com/stefanha/qemu/commits/trivial-patches

CC'ing the Xen folks.

Andreas

-- 

SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer; HRB 16746 AG Nürnberg

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 14:10:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 14:10:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T20mZ-0003Kh-VZ; Thu, 16 Aug 2012 14:10:43 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T20mY-0003KG-Ti
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 14:10:43 +0000
Received: from [85.158.139.83:17705] by server-11.bemta-5.messagelabs.com id
	9D/24-29296-26FFC205; Thu, 16 Aug 2012 14:10:42 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-9.tower-182.messagelabs.com!1345126237!27794149!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17240 invoked from network); 16 Aug 2012 14:10:38 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-9.tower-182.messagelabs.com with SMTP;
	16 Aug 2012 14:10:38 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 16 Aug 2012 15:10:37 +0100
Message-Id: <502D1BA50200007800095834@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Thu, 16 Aug 2012 15:11:17 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part7E4F0F95.0__="
Subject: [Xen-devel] [PATCH] x86/HVM: assorted RTC emulation adjustments
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part7E4F0F95.0__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

- don't call rtc_timer_update() on REG_A writes when the value didn't
  change (always doing the call was reported to cause wall clock time
  to lag with the JVM running on Windows)
- don't call rtc_timer_update() on REG_B writes at all
- only call alarm_timer_update() on REG_B writes when relevant bits
  change
- only call check_update_timer() on REG_B writes when SET changes
- instead, properly handle AF and PF when the guest is not also setting
  AIE/PIE respectively (for UF this was already the case; only a
  comment was slightly inaccurate)
- raise the RTC IRQ not only when UIE gets set while UF was already
  set, but generalize this to cover AIE and PIE as well
- properly mask off bit 7 when retrieving the hour values in
  alarm_timer_update(), and properly use RTC_HOURS_ALARM's bit 7 when
  converting from 12- to 24-hour value
- also handle the two other possible clock bases
- use RTC_* names in a couple of places where literal numbers were used
  so far
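[Editorial aside, not part of the patch: the period selection that the reworked rtc_timer_update() implements can be sketched as an illustrative Python model. The helper names are hypothetical; div_round mimics Xen's DIV_ROUND, and the rate-code aliasing (codes 1 and 2 acting as 8 and 9 with the 32 kHz base) follows the MC146818 datasheet.]

```python
def div_round(a, b):
    """Rounded integer division, like Xen's DIV_ROUND macro."""
    return (a + b // 2) // b

def rtc_period_ns(period_code, base_32khz=True):
    """Periodic timer period in ns for a REG_A rate code, None if disabled.

    period_code is REG_A's low 4 bits (RTC_RATE_SELECT); with the
    32 kHz divider base, rate codes 1 and 2 alias to codes 8 and 9.
    """
    if base_32khz and 1 <= period_code <= 2:
        period_code += 7
    if period_code == 0:
        return None                      # periodic timer off
    cycles = 1 << (period_code - 1)      # period in 32.768 kHz cycles
    return div_round(cycles * 1000000000, 32768)

print(rtc_period_ns(6))   # 1024 Hz rate -> 976563 ns
print(rtc_period_ns(1))   # aliases to code 8 -> 3906250 ns
```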

Note that this only improves the situation described in the thread at
http://lists.xen.org/archives/html/xen-devel/2012-08/msg00664.html;
there are still problems with the emulation when invoked at a high
rate, as described there.
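[Editorial aside, not part of the patch: the generalized "raise on enable" loop relies on the REG_B enable bits and the REG_C flag bits sharing numeric values; the sketch below uses the standard MC146818 bit assignments (UIE/UF = 0x10, AIE/AF = 0x20, PIE/PF = 0x40) and a hypothetical helper name.]

```python
RTC_UIE = RTC_UF = 0x10   # update-ended interrupt enable / flag
RTC_AIE = RTC_AF = 0x20   # alarm interrupt enable / flag
RTC_PIE = RTC_PF = 0x40   # periodic interrupt enable / flag

def should_raise_irq(old_reg_b, new_reg_b, reg_c):
    """True when a REG_B write newly enables an interrupt whose
    corresponding REG_C flag is already pending."""
    mask = RTC_UIE
    while mask <= RTC_PIE:
        if (new_reg_b & mask) and not (old_reg_b & mask) and (reg_c & mask):
            return True
        mask <<= 1
    return False

print(should_raise_irq(0x00, 0x10, 0x10))  # UIE newly set, UF pending -> True
print(should_raise_irq(0x10, 0x10, 0x10))  # UIE already set -> False
```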

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/hvm/rtc.c
+++ b/xen/arch/x86/hvm/rtc.c
@@ -50,11 +50,24 @@ static void rtc_set_time(RTCState *s);
 static inline int from_bcd(RTCState *s, int a);
 static inline int convert_hour(RTCState *s, int hour);
 
-static void rtc_periodic_cb(struct vcpu *v, void *opaque)
+static void rtc_toggle_irq(RTCState *s)
+{
+    struct domain *d = vrtc_domain(s);
+
+    ASSERT(spin_is_locked(&s->lock));
+    s->hw.cmos_data[RTC_REG_C] |= RTC_IRQF;
+    hvm_isa_irq_deassert(d, RTC_IRQ);
+    hvm_isa_irq_assert(d, RTC_IRQ);
+}
+
+void rtc_periodic_interrupt(void *opaque)
 {
     RTCState *s = opaque;
+
     spin_lock(&s->lock);
-    s->hw.cmos_data[RTC_REG_C] |= 0xc0;
+    s->hw.cmos_data[RTC_REG_C] |= RTC_PF;
+    if ( s->hw.cmos_data[RTC_REG_B] & RTC_PIE )
+        rtc_toggle_irq(s);
     spin_unlock(&s->lock);
 }
 
@@ -68,19 +81,25 @@ static void rtc_timer_update(RTCState *s
     ASSERT(spin_is_locked(&s->lock));
 
     period_code = s->hw.cmos_data[RTC_REG_A] & RTC_RATE_SELECT;
-    if ( (period_code != 0) && (s->hw.cmos_data[RTC_REG_B] & RTC_PIE) )
+    switch ( s->hw.cmos_data[RTC_REG_A] & RTC_DIV_CTL )
     {
-        if ( period_code <= 2 )
+    case RTC_REF_CLCK_32KHZ:
+        if ( (period_code != 0) && (period_code <= 2) )
             period_code += 7;
-
-        period = 1 << (period_code - 1); /* period in 32 Khz cycles */
-        period = DIV_ROUND((period * 1000000000ULL), 32768); /* period in ns */
-        create_periodic_time(v, &s->pt, period, period, RTC_IRQ,
-                             rtc_periodic_cb, s);
-    }
-    else
-    {
+        /* fall through */
+    case RTC_REF_CLCK_1MHZ:
+    case RTC_REF_CLCK_4MHZ:
+        if ( period_code != 0 )
+        {
+            period = 1 << (period_code - 1); /* period in 32 Khz cycles */
+            period = DIV_ROUND(period * 1000000000ULL, 32768); /* in ns */
+            create_periodic_time(v, &s->pt, period, period, RTC_IRQ, NULL, s);
+            break;
+        }
+        /* fall through */
+    default:
         destroy_periodic_time(&s->pt);
+        break;
     }
 }
 
@@ -102,7 +121,7 @@ static void check_update_timer(RTCState 
         guest_usec = get_localtime_us(d) % USEC_PER_SEC;
         if (guest_usec >= (USEC_PER_SEC - 244))
         {
-            /* RTC is in update cycle when enabling UIE */
+            /* RTC is in update cycle */
             s->hw.cmos_data[RTC_REG_A] |= RTC_UIP;
             next_update_time = (USEC_PER_SEC - guest_usec) * NS_PER_USEC;
             expire_time = NOW() + next_update_time;
@@ -144,7 +163,6 @@ static void rtc_update_timer(void *opaqu
 static void rtc_update_timer2(void *opaque)
 {
     RTCState *s = opaque;
-    struct domain *d = vrtc_domain(s);
 
     spin_lock(&s->lock);
     if (!(s->hw.cmos_data[RTC_REG_B] & RTC_SET))
@@ -152,11 +170,7 @@ static void rtc_update_timer2(void *opaq
         s->hw.cmos_data[RTC_REG_C] |= RTC_UF;
         s->hw.cmos_data[RTC_REG_A] &= ~RTC_UIP;
         if ((s->hw.cmos_data[RTC_REG_B] & RTC_UIE))
-        {
-            s->hw.cmos_data[RTC_REG_C] |= RTC_IRQF;
-            hvm_isa_irq_deassert(d, RTC_IRQ);
-            hvm_isa_irq_assert(d, RTC_IRQ);
-        }
+            rtc_toggle_irq(s);
         check_update_timer(s);
     }
     spin_unlock(&s->lock);
@@ -175,21 +189,18 @@ static void alarm_timer_update(RTCState 
 
     stop_timer(&s->alarm_timer);
 
-    if ((s->hw.cmos_data[RTC_REG_B] & RTC_AIE) &&
-            !(s->hw.cmos_data[RTC_REG_B] & RTC_SET))
+    if ( !(s->hw.cmos_data[RTC_REG_B] & RTC_SET) )
     {
         s->current_tm = gmtime(get_localtime(d));
         rtc_copy_date(s);
 
         alarm_sec = from_bcd(s, s->hw.cmos_data[RTC_SECONDS_ALARM]);
         alarm_min = from_bcd(s, s->hw.cmos_data[RTC_MINUTES_ALARM]);
-        alarm_hour = from_bcd(s, s->hw.cmos_data[RTC_HOURS_ALARM]);
-        alarm_hour = convert_hour(s, alarm_hour);
+        alarm_hour = convert_hour(s, s->hw.cmos_data[RTC_HOURS_ALARM]);
 
         cur_sec = from_bcd(s, s->hw.cmos_data[RTC_SECONDS]);
         cur_min = from_bcd(s, s->hw.cmos_data[RTC_MINUTES]);
-        cur_hour = from_bcd(s, s->hw.cmos_data[RTC_HOURS]);
-        cur_hour = convert_hour(s, cur_hour);
+        cur_hour = convert_hour(s, s->hw.cmos_data[RTC_HOURS]);
 
         next_update_time = USEC_PER_SEC - (get_localtime_us(d) % USEC_PER_SEC);
         next_update_time = next_update_time * NS_PER_USEC + NOW();
@@ -343,7 +354,6 @@ static void alarm_timer_update(RTCState 
 static void rtc_alarm_cb(void *opaque)
 {
     RTCState *s = opaque;
-    struct domain *d = vrtc_domain(s);
 
     spin_lock(&s->lock);
     if (!(s->hw.cmos_data[RTC_REG_B] & RTC_SET))
@@ -351,11 +361,7 @@ static void rtc_alarm_cb(void *opaque)
         s->hw.cmos_data[RTC_REG_C] |= RTC_AF;
         /* alarm interrupt */
         if (s->hw.cmos_data[RTC_REG_B] & RTC_AIE)
-        {
-            s->hw.cmos_data[RTC_REG_C] |= RTC_IRQF;
-            hvm_isa_irq_deassert(d, RTC_IRQ);
-            hvm_isa_irq_assert(d, RTC_IRQ);
-        }
+            rtc_toggle_irq(s);
         alarm_timer_update(s);
     }
     spin_unlock(&s->lock);
@@ -365,6 +371,7 @@ static int rtc_ioport_write(void *opaque
 {
     RTCState *s = opaque;
     struct domain *d = vrtc_domain(s);
+    uint32_t orig, mask;
 
     spin_lock(&s->lock);
 
@@ -382,6 +389,7 @@ static int rtc_ioport_write(void *opaque
         return 0;
     }
 
+    orig = s->hw.cmos_data[s->hw.cmos_index];
     switch ( s->hw.cmos_index )
     {
     case RTC_SECONDS_ALARM:
@@ -405,9 +413,9 @@ static int rtc_ioport_write(void *opaque
         break;
     case RTC_REG_A:
         /* UIP bit is read only */
-        s->hw.cmos_data[RTC_REG_A] = (data & ~RTC_UIP) |
-            (s->hw.cmos_data[RTC_REG_A] & RTC_UIP);
-        rtc_timer_update(s);
+        s->hw.cmos_data[RTC_REG_A] = (data & ~RTC_UIP) | (orig & RTC_UIP);
+        if ( (data ^ orig) & (RTC_RATE_SELECT | RTC_DIV_CTL) )
+            rtc_timer_update(s);
         break;
     case RTC_REG_B:
         if ( data & RTC_SET )
@@ -415,7 +423,7 @@ static int rtc_ioport_write(void *opaque
             /* set mode: reset UIP mode */
             s->hw.cmos_data[RTC_REG_A] &= ~RTC_UIP;
             /* adjust cmos before stopping */
-            if (!(s->hw.cmos_data[RTC_REG_B] & RTC_SET))
+            if (!(orig & RTC_SET))
             {
                 s->current_tm = gmtime(get_localtime(d));
                 rtc_copy_date(s);
@@ -424,21 +432,26 @@ static int rtc_ioport_write(void *opaque
         else
         {
             /* if disabling set mode, update the time */
-            if ( s->hw.cmos_data[RTC_REG_B] & RTC_SET )
+            if ( orig & RTC_SET )
                 rtc_set_time(s);
         }
-        /* if the interrupt is already set when the interrupt become
-         * enabled, raise an interrupt immediately*/
-        if ((data & RTC_UIE) && !(s->hw.cmos_data[RTC_REG_B] & RTC_UIE))
-            if (s->hw.cmos_data[RTC_REG_C] & RTC_UF)
+        /*
+         * If the interrupt is already set when the interrupt becomes
+         * enabled, raise an interrupt immediately.
+         * NB: RTC_{A,P,U}IE == RTC_{A,P,U}F respectively.
+         */
+        for ( mask = RTC_UIE; mask <= RTC_PIE; mask <<= 1 )
+            if ( (data & mask) && !(orig & mask) &&
+                 (s->hw.cmos_data[RTC_REG_C] & mask) )
             {
-                hvm_isa_irq_deassert(d, RTC_IRQ);
-                hvm_isa_irq_assert(d, RTC_IRQ);
+                rtc_toggle_irq(s);
+                break;
             }
         s->hw.cmos_data[RTC_REG_B] = data;
-        rtc_timer_update(s);
-        check_update_timer(s);
-        alarm_timer_update(s);
+        if ( (data ^ orig) & RTC_SET )
+            check_update_timer(s);
+        if ( (data ^ orig) & (RTC_24H | RTC_DM_BINARY | RTC_SET) )
+            alarm_timer_update(s);
         break;
     case RTC_REG_C:
     case RTC_REG_D:
@@ -453,7 +466,7 @@ static int rtc_ioport_write(void *opaque
 
 static inline int to_bcd(RTCState *s, int a)
 {
-    if ( s->hw.cmos_data[RTC_REG_B] & 0x04 )
+    if ( s->hw.cmos_data[RTC_REG_B] & RTC_DM_BINARY )
         return a;
     else
         return ((a / 10) << 4) | (a % 10);
@@ -461,7 +474,7 @@ static inline int to_bcd(RTCState *s, in
 
 static inline int from_bcd(RTCState *s, int a)
 {
-    if ( s->hw.cmos_data[RTC_REG_B] & 0x04 )
+    if ( s->hw.cmos_data[RTC_REG_B] & RTC_DM_BINARY )
         return a;
     else
         return ((a >> 4) * 10) + (a & 0x0f);
@@ -469,12 +482,14 @@ static inline int from_bcd(RTCState *s, 
 
 /* Hours in 12 hour mode are in 1-12 range, not 0-11.
  * So we need convert it before using it*/
-static inline int convert_hour(RTCState *s, int hour)
+static inline int convert_hour(RTCState *s, int raw)
 {
+    int hour = from_bcd(s, raw & 0x7f);
+
     if (!(s->hw.cmos_data[RTC_REG_B] & RTC_24H))
     {
         hour %= 12;
-        if (s->hw.cmos_data[RTC_HOURS] & 0x80)
+        if (raw & 0x80)
             hour += 12;
     }
     return hour;
@@ -493,8 +508,7 @@ static void rtc_set_time(RTCState *s)
     
     tm->tm_sec = from_bcd(s, s->hw.cmos_data[RTC_SECONDS]);
     tm->tm_min = from_bcd(s, s->hw.cmos_data[RTC_MINUTES]);
-    tm->tm_hour = from_bcd(s, s->hw.cmos_data[RTC_HOURS] & 0x7f);
-    tm->tm_hour = convert_hour(s, tm->tm_hour);
+    tm->tm_hour = convert_hour(s, s->hw.cmos_data[RTC_HOURS]);
     tm->tm_wday = from_bcd(s, s->hw.cmos_data[RTC_DAY_OF_WEEK]);
    tm->tm_mday = from_bcd(s, s->hw.cmos_data[RTC_DAY_OF_MONTH]);
     tm->tm_mon = from_bcd(s, s->hw.cmos_data[RTC_MONTH]) - 1;
--- a/xen/arch/x86/hvm/vpt.c
+++ b/xen/arch/x86/hvm/vpt.c
@@ -22,6 +22,7 @@
 #include <asm/hvm/vpt.h>
 #include <asm/event.h>
 #include <asm/apic.h>
+#include <asm/mc146818rtc.h>
 
 #define mode_is(d, name) \
     ((d)->arch.hvm_domain.params[HVM_PARAM_TIMER_MODE] == HVMPTM_##name)
@@ -218,6 +219,7 @@ void pt_update_irq(struct vcpu *v)
     struct periodic_time *pt, *temp, *earliest_pt = NULL;
     uint64_t max_lag = -1ULL;
     int irq, is_lapic;
+    void *pt_priv;
 
     spin_lock(&v->arch.hvm_vcpu.tm_lock);
 
@@ -251,13 +253,14 @@ void pt_update_irq(struct vcpu *v)
     earliest_pt->irq_issued = 1;
     irq = earliest_pt->irq;
     is_lapic = (earliest_pt->source == PTSRC_lapic);
+    pt_priv = earliest_pt->priv;
 
     spin_unlock(&v->arch.hvm_vcpu.tm_lock);
 
     if ( is_lapic )
-    {
         vlapic_set_irq(vcpu_vlapic(v), irq, 0);
-    }
+    else if ( irq == RTC_IRQ )
+        rtc_periodic_interrupt(pt_priv);
     else
     {
         hvm_isa_irq_deassert(v->domain, irq);
--- a/xen/include/asm-x86/hvm/vpt.h
+++ b/xen/include/asm-x86/hvm/vpt.h
@@ -181,6 +181,7 @@ void rtc_migrate_timers(struct vcpu *v);
 void rtc_deinit(struct domain *d);
 void rtc_reset(struct domain *d);
 void rtc_update_clock(struct domain *d);
+void rtc_periodic_interrupt(void *);
 
 void pmtimer_init(struct vcpu *v);
 void pmtimer_deinit(struct domain *d);
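[Editorial aside, not part of the patch: the reworked convert_hour() — mask off bit 7 before the BCD decode, then use bit 7 of the raw CMOS byte as the PM flag in 12-hour mode — can be modelled in Python. This is an illustrative sketch, not Xen code; the register constants are the standard MC146818 Register B bit values.]

```python
RTC_24H = 0x02        # Register B: 24-hour (vs 12-hour) mode
RTC_DM_BINARY = 0x04  # Register B: binary (vs BCD) data mode

def from_bcd(reg_b, a):
    """Decode a CMOS byte according to the data-mode bit."""
    return a if reg_b & RTC_DM_BINARY else ((a >> 4) * 10) + (a & 0x0f)

def convert_hour(reg_b, raw):
    """Raw CMOS hour byte -> 0-23, using bit 7 as the PM flag in 12h mode."""
    hour = from_bcd(reg_b, raw & 0x7f)   # bit 7 must not reach the BCD decode
    if not (reg_b & RTC_24H):
        hour %= 12                       # 12 o'clock is stored as 12, means 0
        if raw & 0x80:                   # PM flag
            hour += 12
    return hour

print(convert_hour(0, 0x92))  # BCD 12-hour mode: 12 PM -> 12
print(convert_hour(0, 0x12))  # BCD 12-hour mode: 12 AM -> 0
```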



--=__Part7E4F0F95.0__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part7E4F0F95.0__=--


From xen-devel-bounces@lists.xen.org Thu Aug 16 14:10:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 14:10:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T20mZ-0003Kh-VZ; Thu, 16 Aug 2012 14:10:43 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T20mY-0003KG-Ti
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 14:10:43 +0000
Received: from [85.158.139.83:17705] by server-11.bemta-5.messagelabs.com id
	9D/24-29296-26FFC205; Thu, 16 Aug 2012 14:10:42 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-9.tower-182.messagelabs.com!1345126237!27794149!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17240 invoked from network); 16 Aug 2012 14:10:38 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-9.tower-182.messagelabs.com with SMTP;
	16 Aug 2012 14:10:38 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 16 Aug 2012 15:10:37 +0100
Message-Id: <502D1BA50200007800095834@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Thu, 16 Aug 2012 15:11:17 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part7E4F0F95.0__="
Subject: [Xen-devel] [PATCH] x86/HVM: assorted RTC emulation adjustments
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part7E4F0F95.0__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

- don't call rtc_timer_update() on REG_A writes when the value didn't
  change (doing the call always was reported to cause wall clock time
  lagging with the JVM running on Windows)
- don't call rtc_timer_update() on REG_B writes at all
- only call alarm_timer_update() on REG_B writes when relevant bits
  change
- only call check_update_timer() on REG_B writes when SET changes
- instead properly handle AF and PF when the guest is not also setting
  AIE/PIE respectively (for UF this was already the case, only a
  comment was slightly inaccurate)
- raise the RTC IRQ not only when UIE gets set while UF was already
  set, but generalize this to cover AIE and PIE as well
- properly mask off bit 7 when retrieving the hour values in
  alarm_timer_update(), and properly use RTC_HOURS_ALARM's bit 7 when
  converting from 12- to 24-hour value
- also handle the two other possible clock bases
- use RTC_* names in a couple of places where literal numbers were used
  so far

Note that this only improves the situation described in the thread at
http://lists.xen.org/archives/html/xen-devel/2012-08/msg00664.html;
there are still problems with the emulation when it is invoked at a
high rate, as described there.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/hvm/rtc.c
+++ b/xen/arch/x86/hvm/rtc.c
@@ -50,11 +50,24 @@ static void rtc_set_time(RTCState *s);
 static inline int from_bcd(RTCState *s, int a);
 static inline int convert_hour(RTCState *s, int hour);
 
-static void rtc_periodic_cb(struct vcpu *v, void *opaque)
+static void rtc_toggle_irq(RTCState *s)
+{
+    struct domain *d = vrtc_domain(s);
+
+    ASSERT(spin_is_locked(&s->lock));
+    s->hw.cmos_data[RTC_REG_C] |= RTC_IRQF;
+    hvm_isa_irq_deassert(d, RTC_IRQ);
+    hvm_isa_irq_assert(d, RTC_IRQ);
+}
+
+void rtc_periodic_interrupt(void *opaque)
 {
     RTCState *s = opaque;
+
     spin_lock(&s->lock);
-    s->hw.cmos_data[RTC_REG_C] |= 0xc0;
+    s->hw.cmos_data[RTC_REG_C] |= RTC_PF;
+    if ( s->hw.cmos_data[RTC_REG_B] & RTC_PIE )
+        rtc_toggle_irq(s);
     spin_unlock(&s->lock);
 }
 
@@ -68,19 +81,25 @@ static void rtc_timer_update(RTCState *s
     ASSERT(spin_is_locked(&s->lock));
 
     period_code = s->hw.cmos_data[RTC_REG_A] & RTC_RATE_SELECT;
-    if ( (period_code != 0) && (s->hw.cmos_data[RTC_REG_B] & RTC_PIE) )
+    switch ( s->hw.cmos_data[RTC_REG_A] & RTC_DIV_CTL )
     {
-        if ( period_code <= 2 )
+    case RTC_REF_CLCK_32KHZ:
+        if ( (period_code != 0) && (period_code <= 2) )
             period_code += 7;
-
-        period = 1 << (period_code - 1); /* period in 32 Khz cycles */
-        period = DIV_ROUND((period * 1000000000ULL), 32768); /* period in ns */
-        create_periodic_time(v, &s->pt, period, period, RTC_IRQ,
-                             rtc_periodic_cb, s);
-    }
-    else
-    {
+        /* fall through */
+    case RTC_REF_CLCK_1MHZ:
+    case RTC_REF_CLCK_4MHZ:
+        if ( period_code != 0 )
+        {
+            period = 1 << (period_code - 1); /* period in 32 Khz cycles */
+            period = DIV_ROUND(period * 1000000000ULL, 32768); /* in ns */
+            create_periodic_time(v, &s->pt, period, period, RTC_IRQ, NULL, s);
+            break;
+        }
+        /* fall through */
+    default:
         destroy_periodic_time(&s->pt);
+        break;
     }
 }
 
@@ -102,7 +121,7 @@ static void check_update_timer(RTCState 
         guest_usec = get_localtime_us(d) % USEC_PER_SEC;
         if (guest_usec >= (USEC_PER_SEC - 244))
         {
-            /* RTC is in update cycle when enabling UIE */
+            /* RTC is in update cycle */
             s->hw.cmos_data[RTC_REG_A] |= RTC_UIP;
             next_update_time = (USEC_PER_SEC - guest_usec) * NS_PER_USEC;
             expire_time = NOW() + next_update_time;
@@ -144,7 +163,6 @@ static void rtc_update_timer(void *opaqu
 static void rtc_update_timer2(void *opaque)
 {
     RTCState *s = opaque;
-    struct domain *d = vrtc_domain(s);
 
     spin_lock(&s->lock);
     if (!(s->hw.cmos_data[RTC_REG_B] & RTC_SET))
@@ -152,11 +170,7 @@ static void rtc_update_timer2(void *opaq
         s->hw.cmos_data[RTC_REG_C] |= RTC_UF;
         s->hw.cmos_data[RTC_REG_A] &= ~RTC_UIP;
         if ((s->hw.cmos_data[RTC_REG_B] & RTC_UIE))
-        {
-            s->hw.cmos_data[RTC_REG_C] |= RTC_IRQF;
-            hvm_isa_irq_deassert(d, RTC_IRQ);
-            hvm_isa_irq_assert(d, RTC_IRQ);
-        }
+            rtc_toggle_irq(s);
         check_update_timer(s);
     }
     spin_unlock(&s->lock);
@@ -175,21 +189,18 @@ static void alarm_timer_update(RTCState 
 
     stop_timer(&s->alarm_timer);
 
-    if ((s->hw.cmos_data[RTC_REG_B] & RTC_AIE) &&
-            !(s->hw.cmos_data[RTC_REG_B] & RTC_SET))
+    if ( !(s->hw.cmos_data[RTC_REG_B] & RTC_SET) )
     {
         s->current_tm = gmtime(get_localtime(d));
         rtc_copy_date(s);
 
         alarm_sec = from_bcd(s, s->hw.cmos_data[RTC_SECONDS_ALARM]);
         alarm_min = from_bcd(s, s->hw.cmos_data[RTC_MINUTES_ALARM]);
-        alarm_hour = from_bcd(s, s->hw.cmos_data[RTC_HOURS_ALARM]);
-        alarm_hour = convert_hour(s, alarm_hour);
+        alarm_hour = convert_hour(s, s->hw.cmos_data[RTC_HOURS_ALARM]);
 
         cur_sec = from_bcd(s, s->hw.cmos_data[RTC_SECONDS]);
         cur_min = from_bcd(s, s->hw.cmos_data[RTC_MINUTES]);
-        cur_hour = from_bcd(s, s->hw.cmos_data[RTC_HOURS]);
-        cur_hour = convert_hour(s, cur_hour);
+        cur_hour = convert_hour(s, s->hw.cmos_data[RTC_HOURS]);
 
         next_update_time = USEC_PER_SEC - (get_localtime_us(d) % USEC_PER_SEC);
         next_update_time = next_update_time * NS_PER_USEC + NOW();
@@ -343,7 +354,6 @@ static void alarm_timer_update(RTCState 
 static void rtc_alarm_cb(void *opaque)
 {
     RTCState *s = opaque;
-    struct domain *d = vrtc_domain(s);
 
     spin_lock(&s->lock);
     if (!(s->hw.cmos_data[RTC_REG_B] & RTC_SET))
@@ -351,11 +361,7 @@ static void rtc_alarm_cb(void *opaque)
         s->hw.cmos_data[RTC_REG_C] |= RTC_AF;
         /* alarm interrupt */
         if (s->hw.cmos_data[RTC_REG_B] & RTC_AIE)
-        {
-            s->hw.cmos_data[RTC_REG_C] |= RTC_IRQF;
-            hvm_isa_irq_deassert(d, RTC_IRQ);
-            hvm_isa_irq_assert(d, RTC_IRQ);
-        }
+            rtc_toggle_irq(s);
         alarm_timer_update(s);
     }
     spin_unlock(&s->lock);
@@ -365,6 +371,7 @@ static int rtc_ioport_write(void *opaque
 {
     RTCState *s = opaque;
     struct domain *d = vrtc_domain(s);
+    uint32_t orig, mask;
 
     spin_lock(&s->lock);
 
@@ -382,6 +389,7 @@ static int rtc_ioport_write(void *opaque
         return 0;
     }
 
+    orig = s->hw.cmos_data[s->hw.cmos_index];
     switch ( s->hw.cmos_index )
    {
     case RTC_SECONDS_ALARM:
@@ -405,9 +413,9 @@ static int rtc_ioport_write(void *opaque
         break;
     case RTC_REG_A:
         /* UIP bit is read only */
-        s->hw.cmos_data[RTC_REG_A] = (data & ~RTC_UIP) |
-            (s->hw.cmos_data[RTC_REG_A] & RTC_UIP);
-        rtc_timer_update(s);
+        s->hw.cmos_data[RTC_REG_A] = (data & ~RTC_UIP) | (orig & RTC_UIP);
+        if ( (data ^ orig) & (RTC_RATE_SELECT | RTC_DIV_CTL) )
+            rtc_timer_update(s);
         break;
     case RTC_REG_B:
         if ( data & RTC_SET )
@@ -415,7 +423,7 @@ static int rtc_ioport_write(void *opaque
             /* set mode: reset UIP mode */
             s->hw.cmos_data[RTC_REG_A] &= ~RTC_UIP;
             /* adjust cmos before stopping */
-            if (!(s->hw.cmos_data[RTC_REG_B] & RTC_SET))
+            if (!(orig & RTC_SET))
             {
                 s->current_tm = gmtime(get_localtime(d));
                 rtc_copy_date(s);
@@ -424,21 +432,26 @@ static int rtc_ioport_write(void *opaque
         else
         {
             /* if disabling set mode, update the time */
-            if ( s->hw.cmos_data[RTC_REG_B] & RTC_SET )
+            if ( orig & RTC_SET )
                 rtc_set_time(s);
         }
-        /* if the interrupt is already set when the interrupt become
-         * enabled, raise an interrupt immediately*/
-        if ((data & RTC_UIE) && !(s->hw.cmos_data[RTC_REG_B] & RTC_UIE))
-            if (s->hw.cmos_data[RTC_REG_C] & RTC_UF)
+        /*
+         * If the interrupt is already set when the interrupt becomes
+         * enabled, raise an interrupt immediately.
+         * NB: RTC_{A,P,U}IE == RTC_{A,P,U}F respectively.
+         */
+        for ( mask = RTC_UIE; mask <= RTC_PIE; mask <<= 1 )
+            if ( (data & mask) && !(orig & mask) &&
+                 (s->hw.cmos_data[RTC_REG_C] & mask) )
             {
-                hvm_isa_irq_deassert(d, RTC_IRQ);
-                hvm_isa_irq_assert(d, RTC_IRQ);
+                rtc_toggle_irq(s);
+                break;
            }
         s->hw.cmos_data[RTC_REG_B] = data;
-        rtc_timer_update(s);
-        check_update_timer(s);
-        alarm_timer_update(s);
+        if ( (data ^ orig) & RTC_SET )
+            check_update_timer(s);
+        if ( (data ^ orig) & (RTC_24H | RTC_DM_BINARY | RTC_SET) )
+            alarm_timer_update(s);
         break;
     case RTC_REG_C:
     case RTC_REG_D:
@@ -453,7 +466,7 @@ static int rtc_ioport_write(void *opaque
 
 static inline int to_bcd(RTCState *s, int a)
 {
-    if ( s->hw.cmos_data[RTC_REG_B] & 0x04 )
+    if ( s->hw.cmos_data[RTC_REG_B] & RTC_DM_BINARY )
         return a;
     else
         return ((a / 10) << 4) | (a % 10);
@@ -461,7 +474,7 @@ static inline int to_bcd(RTCState *s, in
 
 static inline int from_bcd(RTCState *s, int a)
 {
-    if ( s->hw.cmos_data[RTC_REG_B] & 0x04 )
+    if ( s->hw.cmos_data[RTC_REG_B] & RTC_DM_BINARY )
         return a;
     else
         return ((a >> 4) * 10) + (a & 0x0f);
@@ -469,12 +482,14 @@ static inline int from_bcd(RTCState *s, 
 
 /* Hours in 12 hour mode are in 1-12 range, not 0-11.
  * So we need convert it before using it*/
-static inline int convert_hour(RTCState *s, int hour)
+static inline int convert_hour(RTCState *s, int raw)
 {
+    int hour = from_bcd(s, raw & 0x7f);
+
     if (!(s->hw.cmos_data[RTC_REG_B] & RTC_24H))
     {
         hour %= 12;
-        if (s->hw.cmos_data[RTC_HOURS] & 0x80)
+        if (raw & 0x80)
             hour += 12;
     }
     return hour;
@@ -493,8 +508,7 @@ static void rtc_set_time(RTCState *s)
 
     tm->tm_sec = from_bcd(s, s->hw.cmos_data[RTC_SECONDS]);
     tm->tm_min = from_bcd(s, s->hw.cmos_data[RTC_MINUTES]);
-    tm->tm_hour = from_bcd(s, s->hw.cmos_data[RTC_HOURS] & 0x7f);
-    tm->tm_hour = convert_hour(s, tm->tm_hour);
+    tm->tm_hour = convert_hour(s, s->hw.cmos_data[RTC_HOURS]);
     tm->tm_wday = from_bcd(s, s->hw.cmos_data[RTC_DAY_OF_WEEK]);
     tm->tm_mday = from_bcd(s, s->hw.cmos_data[RTC_DAY_OF_MONTH]);
     tm->tm_mon = from_bcd(s, s->hw.cmos_data[RTC_MONTH]) - 1;
--- a/xen/arch/x86/hvm/vpt.c
+++ b/xen/arch/x86/hvm/vpt.c
@@ -22,6 +22,7 @@
 #include <asm/hvm/vpt.h>
 #include <asm/event.h>
 #include <asm/apic.h>
+#include <asm/mc146818rtc.h>
 
 #define mode_is(d, name) \
     ((d)->arch.hvm_domain.params[HVM_PARAM_TIMER_MODE] == HVMPTM_##name)
@@ -218,6 +219,7 @@ void pt_update_irq(struct vcpu *v)
     struct periodic_time *pt, *temp, *earliest_pt = NULL;
     uint64_t max_lag = -1ULL;
     int irq, is_lapic;
+    void *pt_priv;
 
     spin_lock(&v->arch.hvm_vcpu.tm_lock);
 
@@ -251,13 +253,14 @@ void pt_update_irq(struct vcpu *v)
     earliest_pt->irq_issued = 1;
     irq = earliest_pt->irq;
     is_lapic = (earliest_pt->source == PTSRC_lapic);
+    pt_priv = earliest_pt->priv;
 
     spin_unlock(&v->arch.hvm_vcpu.tm_lock);
 
     if ( is_lapic )
-    {
         vlapic_set_irq(vcpu_vlapic(v), irq, 0);
-    }
+    else if ( irq == RTC_IRQ )
+        rtc_periodic_interrupt(pt_priv);
     else
     {
         hvm_isa_irq_deassert(v->domain, irq);
--- a/xen/include/asm-x86/hvm/vpt.h
+++ b/xen/include/asm-x86/hvm/vpt.h
@@ -181,6 +181,7 @@ void rtc_migrate_timers(struct vcpu *v);
 void rtc_deinit(struct domain *d);
 void rtc_reset(struct domain *d);
 void rtc_update_clock(struct domain *d);
+void rtc_periodic_interrupt(void *);
 
 void pmtimer_init(struct vcpu *v);
 void pmtimer_deinit(struct domain *d);



--=__Part7E4F0F95.0__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part7E4F0F95.0__=--


From xen-devel-bounces@lists.xen.org Thu Aug 16 14:10:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 14:10:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T20mh-0003Mx-JV; Thu, 16 Aug 2012 14:10:51 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lars.kurth.xen@gmail.com>) id 1T20mg-0003MU-CM
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 14:10:50 +0000
Received: from [85.158.143.99:36987] by server-3.bemta-4.messagelabs.com id
	84/4A-09529-96FFC205; Thu, 16 Aug 2012 14:10:49 +0000
X-Env-Sender: lars.kurth.xen@gmail.com
X-Msg-Ref: server-2.tower-216.messagelabs.com!1345126248!23205282!1
X-Originating-IP: [209.85.215.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14688 invoked from network); 16 Aug 2012 14:10:49 -0000
Received: from mail-ey0-f173.google.com (HELO mail-ey0-f173.google.com)
	(209.85.215.173)
	by server-2.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 14:10:49 -0000
Received: by eaac13 with SMTP id c13so847131eaa.32
	for <xen-devel@lists.xen.org>; Thu, 16 Aug 2012 07:10:48 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:date:from:reply-to:user-agent:mime-version:cc
	:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=ffKTAUdhkV9P5bCOUCk25S51pyR1pebXksWk3VFtqCo=;
	b=UjmukUGOS+qytDR1IN8zuexvIQY35ygQKCfYZ1khVeFnRqB5tTTjeFQhxgDCpR9dRN
	1W4d3o5DCEIraQrunkO+lM0BTQTJzi4wVas8SQhgpHCBL93/9AtTKM2kM5jRbYeK3NH/
	P8cHYUf5ywHE8/t+8qL0EMgXZxSPJqyUXou1qsPII7U9GwGo9Kih8NU7b2qkp+48Nwnt
	OWCWCStECt5S4DXaHULGk/syFVK7dNgA8oAt9+NPkJFz+jaOf9qMCw6j4yCachZ05REw
	ELBY96m23ltZWVDvTjO/YSpMXXSvXpv6srXwccykfl28TtX+f+OPr1dO6v6dzeUUMUUx
	kFVw==
Received: by 10.14.203.73 with SMTP id e49mr1787905eeo.27.1345126248761;
	Thu, 16 Aug 2012 07:10:48 -0700 (PDT)
Received: from [172.16.26.11] (b0fb50b8.bb.sky.com. [176.251.80.184])
	by mx.google.com with ESMTPS id h42sm12492438eem.5.2012.08.16.07.10.39
	(version=SSLv3 cipher=OTHER); Thu, 16 Aug 2012 07:10:40 -0700 (PDT)
Message-ID: <502CFF5F.4050208@xen.org>
Date: Thu, 16 Aug 2012 15:10:39 +0100
From: Lars Kurth <lars.kurth@xen.org>
User-Agent: Mozilla/5.0 (Windows NT 6.1;
	rv:14.0) Gecko/20120713 Thunderbird/14.0
MIME-Version: 1.0
CC: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
References: <502934CB.7060409@xen.org>
	<1344941807.5926.25.camel@zakaz.uk.xensource.com>
	<502A3A0B.6000401@xen.org> <20120815171953.GB9984@US-SEA-R8XVZTX>
	<CAOqnZH54ZhSOmZ1zBwF1QPVdBc_g_9859121pcqdnbUFO6pUvw@mail.gmail.com>
	<1345107025.27489.27.camel@zakaz.uk.xensource.com>
	<502CDB7E0200007800095622@nat28.tlf.novell.com>
	<1345114757.27489.82.camel@zakaz.uk.xensource.com>
	<502CEFF0020000780009572A@nat28.tlf.novell.com>
	<1345115945.27489.87.camel@zakaz.uk.xensource.com>
	<502CF211.9050406@xen.org>
	<1345124100.30865.2.camel@zakaz.uk.xensource.com>
In-Reply-To: <1345124100.30865.2.camel@zakaz.uk.xensource.com>
Subject: Re: [Xen-devel] Xen 4.2 Limits (needs review)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: lars.kurth@xen.org
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 16/08/2012 14:35, Ian Campbell wrote:
> On Thu, 2012-08-16 at 14:13 +0100, Lars Kurth wrote:
>> On 16/08/2012 12:19, Ian Campbell wrote:
>>> So we should just merge the Xen_4.2_limit pages into a column in the
>>> Release Features page. Ian.
>> OK, I added a column. There are a few ???'s
> I cleared these:
>        * memsharing is still tech preview.
>        * credit sched is still a prototype
>        * ipxe is still the pxe stack
>        * qdisk is still a fallback option.
>
> I think that's all accurate.
>
Thank you!
Lars

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 16/08/2012 14:35, Ian Campbell wrote:
> On Thu, 2012-08-16 at 14:13 +0100, Lars Kurth wrote:
>> On 16/08/2012 12:19, Ian Campbell wrote:
>>> So we should just merge the Xen_4.2_limit pages into a column in the
>>> Release Features page. Ian.
>> OK, I added a column. There are a few ???'s
> I cleared these:
>        * memsharing is still tech preview.
>        * credit sched is still a prototype
>        * ipxe is still the pxe stack
>        * qdisk is still a fallback option.
>
> I think that's all accurate.
>
Thank you!
Lars

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 14:12:42 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 14:12:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T20oL-0003hP-48; Thu, 16 Aug 2012 14:12:33 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <tupeng212@gmail.com>) id 1T20oK-0003hC-8l
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 14:12:32 +0000
Received: from [85.158.143.99:49364] by server-1.bemta-4.messagelabs.com id
	F7/CB-07754-FCFFC205; Thu, 16 Aug 2012 14:12:31 +0000
X-Env-Sender: tupeng212@gmail.com
X-Msg-Ref: server-14.tower-216.messagelabs.com!1345126346!18899193!1
X-Originating-IP: [209.85.161.173]
X-SpamReason: No, hits=1.4 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_50_60,HTML_MESSAGE,MIME_BASE64_TEXT,MIME_BOUND_NEXTPART,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32091 invoked from network); 16 Aug 2012 14:12:30 -0000
Received: from mail-gg0-f173.google.com (HELO mail-gg0-f173.google.com)
	(209.85.161.173)
	by server-14.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 14:12:30 -0000
Received: by ggna5 with SMTP id a5so3293101ggn.32
	for <xen-devel@lists.xen.org>; Thu, 16 Aug 2012 07:12:26 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=date:from:to:reply-to:subject:x-priority:x-guid:x-has-attach
	:x-mailer:mime-version:message-id:content-type;
	bh=qW883UTX2RBi9myGZWFrg9I/hWtSbokkXA9sO7Y4mMw=;
	b=mc1zb5u4esnZQf7JcepvafXm8KkhHk4ikF9X1MmgaoLYMDqczhSiO7L7BiXEkqISRA
	s4Sua6BRAZPEzU6B4U3f2sv/YKt1lyfdW+ol9jfDYbx3uDUor3pJDBI+iHGgMHQETHJn
	xnuZ3SNZbft1/JvmUzB01owZtuTjRQHBF00BBJhWLh7ipo5Aav/6UvMgEC5DBdSQ2MnB
	x4i2Ay96wz6wxUNx3nzIYrjWtn84FYMQRG4pO8XWFnBBzZTzxbyahw7s4Nn7P+shNWr8
	sJUyZlaXRw51+rAbRm4pJN/obBF9AT7nCIH/nWNpvtADJ6J27ZGAiqt82IaLUzaABVhg
	fK7A==
Received: by 10.68.211.105 with SMTP id nb9mr3672939pbc.67.1345126345976;
	Thu, 16 Aug 2012 07:12:25 -0700 (PDT)
Received: from root ([115.199.242.186])
	by mx.google.com with ESMTPS id hx9sm2744039pbc.68.2012.08.16.07.12.20
	(version=SSLv3 cipher=OTHER); Thu, 16 Aug 2012 07:12:25 -0700 (PDT)
Date: Thu, 16 Aug 2012 22:12:26 +0800
From: tupeng212 <tupeng212@gmail.com>
To: xen-devel <xen-devel@lists.xen.org>
X-Priority: 3
X-GUID: 0D15BCA6-D535-465C-A3BC-33754DFA3269
X-Has-Attach: no
X-Mailer: Foxmail 7.0.1.87[cn]
Mime-Version: 1.0
Message-ID: <2012081622122189073014@gmail.com>
Subject: [Xen-devel] CPU stucked when xen initializing, why?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: tupeng212 <tupeng212@gmail.com>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6719232616089783824=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.

--===============6719232616089783824==
Content-Type: multipart/alternative;
	boundary="----=_001_NextPart255634547472_=----"

This is a multi-part message in MIME format.

------=_001_NextPart255634547472_=----
Content-Type: text/plain;
	charset="gb2312"
Content-Transfer-Encoding: base64

RGVhciBhbGw6DQpJIG5lZWQgeW91ciBoZWxwLg0KDQotLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLQ0KX19zdGFydF94ZW4gDQp7DQogICAgaW9t
bXVfc2V0dXAoKTsgICAgLyogc2V0dXAgaW9tbXUgaWYgYXZhaWxhYmxlICovDQogICAgc21wX3By
ZXBhcmVfY3B1cyhtYXhfY3B1cyk7DQp9DQoNCnJvdXRpbmcgcGF0Y2g6DQpzbXBfcHJlcGFyZV9j
cHVzIC0+IHNtcF9ib290X2NwdXMgLT4gZG9fYm9vdF9jcHUgLT4gd2FrZXVwX3NlY29uZGFyeV9j
cHUgDQoNCndoZW4gQlNQICJTZW5kaW5nIFNUQVJUVVAgIzEobnVtX3N0YXJ0cz0yKSIsIGhhbmRs
aW5nIENQVSMzMSwgc3lzdGVtIHdhcyBzdHVjaywgSSBkb24ndCBrbm93IHdoeS4NCmJlc2lkZXMs
IHRoZXJlIGFyZSA2NCBwaHlzaWNhbCBDUFVzLg0Kd2FrZXVwX3NlY29uZGFyeV9jcHUgDQp7DQpm
b3IgKGogPSAxOyBqIDw9IG51bV9zdGFydHM7IGorKykgew0KRHByaW50aygiU2VuZGluZyBTVEFS
VFVQICMlZC5cbiIsaik7IC8vdGhpcyBwcmludGluZyBhcHBlYXJlZA0KYXBpY19yZWFkX2Fyb3Vu
ZChBUElDX1NQSVYpOw0KYXBpY193cml0ZShBUElDX0VTUiwgMCk7DQphcGljX3JlYWQoQVBJQ19F
U1IpOw0KRHByaW50aygiQWZ0ZXIgYXBpY193cml0ZS5cbiIpOyAvL2J1dCB0aGlzIHByaW50IG5l
dmVyIGNvbWUgb3V0LCBhbmQgc3lzdGVtIHdhcyBzdHVjaw0KfQ0KfQ0KLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tDQoNCjEgIGFwaWNfd3JpdGUvcmVh
ZCgpIGNvbnRhaW5zIHNvbWUgbG9jayB0aGVyZT8gaG93IHRvIGNoZWNrID8NCg0KMiBCU1AgQ1BV
ICdzIHJpcCBqdW1wZWQgdG8gYW5vdGhlciBwbGFjZSBvd2luZyB0byBzdWNoIGFzIGFuIGludGVy
cnVwdGlvbj8NCnRoZSBtb3N0IHN1c3BlY3RlZCBwbGFjZSBJIGRvdWJ0IGlzIGlvbW11X3NldHVw
KCksIGFuZCBpbiB0aGlzIGZ1bmN0aW9uLCANCkkvTyB2aXJ0dWFsaXNhdGlvbiBlbmFibGVkDQpE
b20wIG1vZGU6IFJlbGF4ZWQNCmJ1dCBpb21tdV9wYWdlX2ZhdWx0IG5ldmVyIGVudGVyZWQgYWZ0
ZXIgY2hlY2tpbmcgbG9nLg0KDQozIENhbiBzb21lb25lIHRlYWNoIG1lIGEgbWV0aG9kIHRvIGFu
YWx5emUgWGVuJ3Mgc3R1Y2sgc3RhY2ssIGZvciBleGFtcGxlIA0KY29yZSBkdW1wPy4uLiB0aGlz
IHByb2JsZW0gaGFwcGVuZWQgYWNjaWRlbnRhbGx5LCBoYXJkIHRvIGNhdGNoIGl0cyBwcmludGlu
Zy4NCnNvIGlmIGFwcGVhciwgSSdkIGxpa2UgdG8gYW5hbHl6ZSBpdCBpbnN0YW50bHkuDQoNClRo
YW5rIHlvdSB2ZXJ5IG11Y2ghDQoNCg0KDQoNCnR1cGVuZzIxMg==

------=_001_NextPart255634547472_=----
Content-Type: text/html;
	charset="gb2312"
Content-Transfer-Encoding: quoted-printable

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<HTML><HEAD>
<META http-equiv=3DContent-Type content=3D"text/html; charset=3Dgb2312">
<STYLE>
BLOCKQUOTE {
	MARGIN-TOP: 0px; MARGIN-BOTTOM: 0px; MARGIN-LEFT: 2em
}
OL {
	MARGIN-TOP: 0px; MARGIN-BOTTOM: 0px
}
UL {
	MARGIN-TOP: 0px; MARGIN-BOTTOM: 0px
}
P {
	MARGIN-TOP: 0px; MARGIN-BOTTOM: 0px
}
BODY {
	FONT-SIZE: 10.5pt; COLOR: #000000; LINE-HEIGHT: 1.5; FONT-FAMILY: =CB=CE=
=CC=E5
}
</STYLE>

<META content=3D"MSHTML 6.00.2900.5512" name=3DGENERATOR></HEAD>
<BODY style=3D"MARGIN: 10px">
<DIV>Dear all:</DIV>
<DIV>I need your help.</DIV>
<DIV>&nbsp;</DIV>
<DIV>--------------------------------------------------------</DIV>
<DIV>__start_xen </DIV>
<DIV>{</DIV>
<DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;iommu_setup();&nbsp;&nbsp;&nbsp;&nbsp;/*&nbsp=
;setup&nbsp;iommu&nbsp;if&nbsp;available&nbsp;*/</DIV>
<DIV>&nbsp;&nbsp;&nbsp;&nbsp;smp_prepare_cpus(max_cpus);</DIV>
<DIV>}</DIV>
<DIV>&nbsp;</DIV>
<DIV>routing patch:</DIV>
<DIV>smp_prepare_cpus -&gt; smp_boot_cpus -&gt; do_boot_cpu -&gt;=20
wakeup_secondary_cpu </DIV>
<DIV>&nbsp;</DIV>
<DIV>when BSP "Sending&nbsp;STARTUP&nbsp;#1(num_starts=3D2)", handling CPU=
#31,=20
system was stuck, I don't know why.</DIV>
<DIV>besides, there are 64 physical CPUs.</DIV>
<DIV>wakeup_secondary_cpu </DIV>
<DIV>{</DIV>
<DIV>
<BLOCKQUOTE dir=3Dltr style=3D"MARGIN-RIGHT: 0px">
  <DIV>for&nbsp;(j&nbsp;=3D&nbsp;1;&nbsp;j&nbsp;&lt;=3D&nbsp;num_starts;&n=
bsp;j++)&nbsp;{</DIV>
  <BLOCKQUOTE dir=3Dltr style=3D"MARGIN-RIGHT: 0px">
    <DIV>Dprintk("Sending&nbsp;STARTUP&nbsp;#%d.\n",j); //this printing=20
    appeared</DIV>
    <DIV>apic_read_around(APIC_SPIV);</DIV>
    <DIV>apic_write(APIC_ESR,&nbsp;0);</DIV>
    <DIV>apic_read(APIC_ESR);</DIV>
    <DIV>Dprintk("After&nbsp;apic_write.\n"); //but this print never come=20
    out,&nbsp;and system was stuck</DIV></BLOCKQUOTE>
  <DIV dir=3Dltr>}</DIV></BLOCKQUOTE>
<DIV>}</DIV>
<DIV>---------------------------------------------------</DIV>
<DIV>&nbsp;</DIV>
<DIV>1&nbsp; apic_write/read() contains some lock there? how to check ?</D=
IV>
<DIV>&nbsp;</DIV>
<DIV>2 BSP CPU 's rip jumped to another place owing to such as an=20
interruption?</DIV>
<DIV>the most suspected place I doubt is iommu_setup(), and in this functi=
on,=20
</DIV>
<DIV>I/O&nbsp;virtualisation enabled</DIV>
<DIV>Dom0&nbsp;mode: Relaxed</DIV>
<DIV>but iommu_page_fault never entered after checking log.</DIV>
<DIV>&nbsp;</DIV>
<DIV>3 Can someone teach me a method to analyze&nbsp;Xen's stuck stack, fo=
r=20
example </DIV>
<DIV>core dump?... this problem happened accidentally, hard to catch its=20
printing.</DIV>
<DIV>so if appear, I'd like to analyze it instantly.</DIV>
<DIV>&nbsp;</DIV>
<DIV>Thank you very much!</DIV></DIV></DIV>
<DIV>&nbsp;</DIV>
<HR style=3D"WIDTH: 210px; HEIGHT: 1px" align=3Dleft color=3D#b5c4df SIZE=
=3D1>

<DIV><SPAN>tupeng212</SPAN></DIV></BODY></HTML>

------=_001_NextPart255634547472_=------



--===============6719232616089783824==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6719232616089783824==--



From xen-devel-bounces@lists.xen.org Thu Aug 16 14:13:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 14:13:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T20pU-0003sv-JE; Thu, 16 Aug 2012 14:13:44 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T20pT-0003sY-1b
	for Xen-devel@lists.xensource.com; Thu, 16 Aug 2012 14:13:43 +0000
Received: from [85.158.143.35:28298] by server-3.bemta-4.messagelabs.com id
	B3/30-09529-6100D205; Thu, 16 Aug 2012 14:13:42 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-10.tower-21.messagelabs.com!1345126421!10855691!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkyODE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31030 invoked from network); 16 Aug 2012 14:13:41 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-10.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 14:13:41 -0000
X-IronPort-AV: E=Sophos;i="4.77,778,1336348800"; d="scan'208";a="14042242"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 14:13:41 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 16 Aug 2012 15:13:41 +0100
Date: Thu, 16 Aug 2012 15:13:24 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Mukesh Rathor <mukesh.rathor@oracle.com>
In-Reply-To: <20120815180541.2fddead9@mantra.us.oracle.com>
Message-ID: <alpine.DEB.2.02.1208161511290.4850@kaball.uk.xensource.com>
References: <20120815180541.2fddead9@mantra.us.oracle.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [RFC PATCH 6/8]: Ballooning changes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

And patches like this one are the reason why I don't want xen_pvh_domain
in non-x86 xen files: I am pretty sure that if you didn't use
xen_pvh_domain here but a combination of XENFEAT_auto_translated_physmap
and xen_hvm_domain I would be able to reuse this work on ARM (that
doesn't have a working balloon driver yet) as is.

On Thu, 16 Aug 2012, Mukesh Rathor wrote:
> ---
>  drivers/xen/balloon.c |   26 +++++++++++++++++++++-----
>  1 files changed, 21 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
> index 31ab82f..57960a1 100644
> --- a/drivers/xen/balloon.c
> +++ b/drivers/xen/balloon.c
> @@ -358,10 +358,18 @@ static enum bp_state increase_reservation(unsigned long nr_pages)
>  		BUG_ON(!xen_feature(XENFEAT_auto_translated_physmap) &&
>  		       phys_to_machine_mapping_valid(pfn));
>  
> -		set_phys_to_machine(pfn, frame_list[i]);
> -
> +		if (!xen_pvh_domain()) {
> +			set_phys_to_machine(pfn, frame_list[i]);
> +		} else {
> +			pte_t *ptep;
> +			unsigned int level;
> +			void *addr = __va(pfn << PAGE_SHIFT);
> +			ptep = lookup_address((unsigned long)addr, &level);
> +			set_pte(ptep, pfn_pte(pfn, PAGE_KERNEL));
> +		}
>  		/* Link back into the page tables if not highmem. */
> -		if (xen_pv_domain() && !PageHighMem(page)) {
> +		if (xen_pv_domain() && !PageHighMem(page) &&
> +		    !xen_pvh_domain()) {
>  			int ret;
>  			ret = HYPERVISOR_update_va_mapping(
>  				(unsigned long)__va(pfn << PAGE_SHIFT),
> @@ -417,7 +425,14 @@ static enum bp_state decrease_reservation(unsigned long nr_pages, gfp_t gfp)
>  
>  		scrub_page(page);
>  
> -		if (xen_pv_domain() && !PageHighMem(page)) {
> +		if (xen_pvh_domain() && !PageHighMem(page)) {
> +			unsigned int level;
> +			pte_t *ptep;
> +			void *addr = __va(pfn << PAGE_SHIFT);
> +			ptep = lookup_address((unsigned long)addr, &level);
> +			set_pte(ptep, __pte(0));
> +
> +		} else if (xen_pv_domain() && !PageHighMem(page)) {
>  			ret = HYPERVISOR_update_va_mapping(
>  				(unsigned long)__va(pfn << PAGE_SHIFT),
>  				__pte_ma(0), 0);
> @@ -433,7 +448,8 @@ static enum bp_state decrease_reservation(unsigned long nr_pages, gfp_t gfp)
>  	/* No more mappings: invalidate P2M and add to balloon. */
>  	for (i = 0; i < nr_pages; i++) {
>  		pfn = mfn_to_pfn(frame_list[i]);
> -		__set_phys_to_machine(pfn, INVALID_P2M_ENTRY);
> +		if (!xen_pvh_domain())
> +			__set_phys_to_machine(pfn, INVALID_P2M_ENTRY);
>  		balloon_append(pfn_to_page(pfn));
>  	}
>  
> -- 
> 1.7.2.3
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 14:14:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 14:14:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T20qF-000407-1C; Thu, 16 Aug 2012 14:14:31 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T20qD-0003zg-Iu
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 14:14:29 +0000
Received: from [85.158.139.83:26155] by server-4.bemta-5.messagelabs.com id
	F5/5D-12386-4400D205; Thu, 16 Aug 2012 14:14:28 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-14.tower-182.messagelabs.com!1345126466!24225976!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.3 required=7.0 tests=MAILTO_TO_SPAM_ADDR
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19982 invoked from network); 16 Aug 2012 14:14:26 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-14.tower-182.messagelabs.com with SMTP;
	16 Aug 2012 14:14:26 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 16 Aug 2012 15:14:25 +0100
Message-Id: <502D1C8A0200007800095851@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Thu, 16 Aug 2012 15:15:06 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "tupeng212" <tupeng212@gmail.com>
References: <502A3BBC0200007800094B68@nat28.tlf.novell.com>,
	<2012081522045495397713@gmail.com> <2012081522121039050717@gmail.com>, 
	<502CC9D702000078000955B9@nat28.tlf.novell.com>
	<201208162143289686654@gmail.com>
In-Reply-To: <201208162143289686654@gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Yang Z Zhang <yang.z.zhang@intel.com>, Tim Deegan <tim@xen.org>,
	Keir Fraser <keir@xen.org>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH,
 RFC v2] x86/HVM: assorted RTC emulation adjustments (was Re: Big
 Bug:Time in VM goes slower...)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 16.08.12 at 15:43, tupeng212 <tupeng212@gmail.com> wrote:
> Dear Jan:
> I didn't check the flipping in JVM, but from the printing period in xen, 
> this setting happened (unit: ns):
> 976562, 976562, 976562, 976562, 15625000, 976562, 976562, 976562, 976562, 
> 15625000 .....
> and someone told me Windows uses only these two rates/periods.

Okay, there is some back and forth in any case, but without
knowing the time distance between the individual instances it's
hard to tell whether what I'm thinking of might help.

> Besides, I checked my earlier simple tester this morning after it had run 
> for a whole night; it lagged a lot.

Could you btw quantify the lagging?

Your tester sets only a single, constant rate repeatedly, right?
If so, then the adjustment I'm thinking of likely won't help.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 14:16:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 14:16:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T20rd-0004DX-HL; Thu, 16 Aug 2012 14:15:57 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T20rb-0004Cs-TG
	for Xen-devel@lists.xensource.com; Thu, 16 Aug 2012 14:15:56 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1345126547!1951854!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkyODE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6349 invoked from network); 16 Aug 2012 14:15:48 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 14:15:48 -0000
X-IronPort-AV: E=Sophos;i="4.77,778,1336348800"; d="scan'208";a="14042291"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 14:15:47 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 16 Aug 2012 15:15:47 +0100
Message-ID: <1345126546.30865.5.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Thu, 16 Aug 2012 15:15:46 +0100
In-Reply-To: <alpine.DEB.2.02.1208161507020.4850@kaball.uk.xensource.com>
References: <20120815180250.1e068d10@mantra.us.oracle.com>
	<alpine.DEB.2.02.1208161507020.4850@kaball.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [RFC PATCH 3/8]: PVH: memory manager and paging
 related changes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-16 at 15:10 +0100, Stefano Stabellini wrote:
> On Thu, 16 Aug 2012, Mukesh Rathor wrote:
> > [....]

Please can you trim your quotes? Getting a repeat of hundreds of lines
of quoted text for just a couple of lines of new content makes it quite
hard to spot the actual content.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 14:17:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 14:17:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T20tP-0004P2-1Z; Thu, 16 Aug 2012 14:17:47 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T20tN-0004OG-Jd
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 14:17:45 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1345126649!9593797!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8976 invoked from network); 16 Aug 2012 14:17:31 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-9.tower-27.messagelabs.com with SMTP;
	16 Aug 2012 14:17:31 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 16 Aug 2012 15:17:29 +0100
Message-Id: <502D1D410200007800095854@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Thu, 16 Aug 2012 15:18:09 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ronghui Duan" <ronghui.duan@intel.com>,
	"Konrad Rzeszutek Wilk" <konrad.wilk@oracle.com>
References: <A21691DE07B84740B5F0B81466D5148A23BCF1DF@SHSMSX102.ccr.corp.intel.com>
	<20120816133457.GA5898@phenom.dumpdata.com>
In-Reply-To: <20120816133457.GA5898@phenom.dumpdata.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"Ian.Jackson@eu.citrix.com" <Ian.Jackson@eu.citrix.com>,
	"Stefano.Stabellini@eu.citrix.com" <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [RFC v1 0/5] VBD: enlarge max segment per request
 in blkfront
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 16.08.12 at 15:34, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> On Thu, Aug 16, 2012 at 10:22:56AM +0000, Duan, Ronghui wrote:
>> This may be caused by the limited size of the ring between front and back
>> ends, so I wonder whether we can put the segment data into another ring and
>> use it dynamically for each request's needs. Here is a prototype which
>> hasn't been tested much, but it works on a Linux 64-bit 3.4.6 kernel. I can
>> see CPU% reduced to 1/3 of the original in a sequential test. But it brings
>> some overhead which makes random I/O's CPU utilization increase a little.
>
> Did you also consider expanding the ring size to something bigger?

If there's a separate ring for the segment data, then there's room
for quite a few more entries in the request ring page, so I don't see
an immediate need to also increase that one.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 14:19:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 14:19:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T20ut-0004Xw-Hc; Thu, 16 Aug 2012 14:19:19 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <kumarsukhani@gmail.com>) id 1T20us-0004Xk-3B
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 14:19:18 +0000
X-Env-Sender: kumarsukhani@gmail.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1345126751!2080847!1
X-Originating-IP: [209.85.215.43]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4350 invoked from network); 16 Aug 2012 14:19:12 -0000
Received: from mail-lpp01m010-f43.google.com (HELO
	mail-lpp01m010-f43.google.com) (209.85.215.43)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 14:19:12 -0000
Received: by lagk11 with SMTP id k11so1751140lag.30
	for <xen-devel@lists.xensource.com>;
	Thu, 16 Aug 2012 07:19:11 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=UBIc9T6SQMO9YOz4R3PNH70KAf2Lezx3C8TqwIoIPGQ=;
	b=f+zA6ZErJQ11BuFUGooCbnkMs3fL2VbFg9UPeQk9vfVLaXZ+a56qVX1+xN7raKcuSF
	PaICuxuXXeILkp1g57Ay1KB6pRwkC9M6Rfrp11zc1BS2wTi1n32DPNe+25v/w/qJGW4M
	N/+DDHcDmyzT/YOPI8MCPYcaAufn6E28HIKZLyh/jccaW20uxrR1BYHDH9njA1hiAvhR
	2yGblSvl5Sixy971VJ79+7WvwGQYvWlOTeotTda6aPIqS+uMB92PhWt5NNbbZ2FyPDA9
	hxHIkvMD8AleUP2nqSN2rQZxoq3MAnWgu9eDQIPffLflR0PtW0J08oOO9RlwmEWIgd4N
	zIkQ==
MIME-Version: 1.0
Received: by 10.112.37.8 with SMTP id u8mr782169lbj.30.1345126751020; Thu, 16
	Aug 2012 07:19:11 -0700 (PDT)
Received: by 10.112.26.168 with HTTP; Thu, 16 Aug 2012 07:19:10 -0700 (PDT)
In-Reply-To: <CAG1y0se79GFRKt_E5hL-eN0ORQcH99tOtMtC5wv4DTViRMUPPw@mail.gmail.com>
References: <CALNeL8VenXBBGGd+MVQwh+iq8_QYk2jGSXVMvdE1tmWKCR9dvQ@mail.gmail.com>
	<CAG1y0se79GFRKt_E5hL-eN0ORQcH99tOtMtC5wv4DTViRMUPPw@mail.gmail.com>
Date: Thu, 16 Aug 2012 19:49:10 +0530
Message-ID: <CALNeL8WF91Qjz3tSXparF2TZhdG0n0ncHPfHCte0=wJUB4t8RA@mail.gmail.com>
From: Kumar Sukhani <kumarsukhani@gmail.com>
To: xen-devel@lists.xensource.com
Cc: Surabhi Mutha <muthasurabhi@gmail.com>,
	Saurabh Gadia <saurabh.gadia4@gmail.com>,
	Akash deep agrawal <akashagrawal14@gmail.com>,
	Karthick K <karthick.k11@gmail.com>
Subject: Re: [Xen-devel] Introducing O.S. system level virtualization in Xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Aug 15, 2012 at 1:51 AM, Fajar A. Nugraha <list@fajar.net> wrote:
>
> On Wed, Aug 15, 2012 at 12:31 AM, Kumar Sukhani <kumarsukhani@gmail.com> wrote:
>
> > So we are planning to introduce features of O.S. level virtualization
> > in Xen, by proposing one integrated architecture[1] having Operating
> > system virtualization over one of the VM of Xen.
> > Please review our architecture and let us know whether it is worth to
> > implement it or not.
> >
> > [1] http://www.flickr.com/photos/84959360@N02/7782516274/in/photostream
>
>
> How is this "integrated"?
>
> For example, what is the difference of that proposal with running lxc
> on an Ubuntu domU (which you can already do)?
>
> --
> Fajar

We are planning to do the same thing but with a different approach,
which will be somewhat more efficient.
[Note: I am assuming we are using lxc for O.S.-level virtualization,
even though we could use any other platform.]

Consider one scenario:
http://www.flickr.com/photos/84959360@N02/7795262950/in/photostream/

Simple case:
       Here, for Xen, there are 3 VMs running:
       1) Dom 0
       2) Dom 1 (lxc)
       3) Dom 2 (windows)

Modified case:
       Here we will modify the lxc implementation to make it aware
that it is running over Xen in virtualized mode, and we will also make
some changes in the hypervisor and Dom0 to make them aware that there
are one or more VMs running over Dom1.
       So the hypervisor and Dom0 will be aware that a total of 5 VMs
are running.

Few advantages:
       1. This implementation will make the distribution of resources
simple, because Xen is aware of how many VMs are actually running.
       2. Process migration of VMs (running over lxc) will be possible
in cases where we want to migrate a VM to other hardware where lxc
might not be available.

[Note: It will also have all the advantages of O.S.-level virtualization.]
--
Be Happy Always
+919579650250

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 14:19:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 14:19:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T20ut-0004Xw-Hc; Thu, 16 Aug 2012 14:19:19 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <kumarsukhani@gmail.com>) id 1T20us-0004Xk-3B
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 14:19:18 +0000
X-Env-Sender: kumarsukhani@gmail.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1345126751!2080847!1
X-Originating-IP: [209.85.215.43]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4350 invoked from network); 16 Aug 2012 14:19:12 -0000
Received: from mail-lpp01m010-f43.google.com (HELO
	mail-lpp01m010-f43.google.com) (209.85.215.43)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 14:19:12 -0000
Received: by lagk11 with SMTP id k11so1751140lag.30
	for <xen-devel@lists.xensource.com>;
	Thu, 16 Aug 2012 07:19:11 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=UBIc9T6SQMO9YOz4R3PNH70KAf2Lezx3C8TqwIoIPGQ=;
	b=f+zA6ZErJQ11BuFUGooCbnkMs3fL2VbFg9UPeQk9vfVLaXZ+a56qVX1+xN7raKcuSF
	PaICuxuXXeILkp1g57Ay1KB6pRwkC9M6Rfrp11zc1BS2wTi1n32DPNe+25v/w/qJGW4M
	N/+DDHcDmyzT/YOPI8MCPYcaAufn6E28HIKZLyh/jccaW20uxrR1BYHDH9njA1hiAvhR
	2yGblSvl5Sixy971VJ79+7WvwGQYvWlOTeotTda6aPIqS+uMB92PhWt5NNbbZ2FyPDA9
	hxHIkvMD8AleUP2nqSN2rQZxoq3MAnWgu9eDQIPffLflR0PtW0J08oOO9RlwmEWIgd4N
	zIkQ==
MIME-Version: 1.0
Received: by 10.112.37.8 with SMTP id u8mr782169lbj.30.1345126751020; Thu, 16
	Aug 2012 07:19:11 -0700 (PDT)
Received: by 10.112.26.168 with HTTP; Thu, 16 Aug 2012 07:19:10 -0700 (PDT)
In-Reply-To: <CAG1y0se79GFRKt_E5hL-eN0ORQcH99tOtMtC5wv4DTViRMUPPw@mail.gmail.com>
References: <CALNeL8VenXBBGGd+MVQwh+iq8_QYk2jGSXVMvdE1tmWKCR9dvQ@mail.gmail.com>
	<CAG1y0se79GFRKt_E5hL-eN0ORQcH99tOtMtC5wv4DTViRMUPPw@mail.gmail.com>
Date: Thu, 16 Aug 2012 19:49:10 +0530
Message-ID: <CALNeL8WF91Qjz3tSXparF2TZhdG0n0ncHPfHCte0=wJUB4t8RA@mail.gmail.com>
From: Kumar Sukhani <kumarsukhani@gmail.com>
To: xen-devel@lists.xensource.com
Cc: Surabhi Mutha <muthasurabhi@gmail.com>,
	Saurabh Gadia <saurabh.gadia4@gmail.com>,
	Akash deep agrawal <akashagrawal14@gmail.com>,
	Karthick K <karthick.k11@gmail.com>
Subject: Re: [Xen-devel] Introducing O.S. system level virtualization in Xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Aug 15, 2012 at 1:51 AM, Fajar A. Nugraha <list@fajar.net> wrote:
>
> On Wed, Aug 15, 2012 at 12:31 AM, Kumar Sukhani <kumarsukhani@gmail.com> wrote:
>
> > So we are planning to introduce features of O.S. level virtualization
> > in Xen, by proposing one integrated architecture[1] having Operating
> > system virtualization over one of the VM of Xen.
> > Please review our architecture and let us know whether it is worth to
> > implement it or not.
> >
> > [1] http://www.flickr.com/photos/84959360@N02/7782516274/in/photostream
>
>
> How is this "integrated"?
>
> For example, what is the difference of that proposal with running lxc
> on an Ubuntu domU (which you can already do)?
>
> --
> Fajar

We are planning to do the same thing, but with a different approach
that should be somewhat more efficient.
[Note: I am assuming we are using lxc for O.S. level virtualization,
though any other platform could be used.]

Consider one scenario:
http://www.flickr.com/photos/84959360@N02/7795262950/in/photostream/

Simple case:
       Here, for Xen, there are 3 VMs running:
       1) Dom 0
       2) Dom 1 (lxc)
       3) Dom 2 (windows)

Modified case:
       Here we will modify the lxc implementation to make it aware
that it is running over Xen in virtualized mode, and we will also make
some changes in the hypervisor and Dom0 to make them aware that one or
more VMs are running over Dom1.
       So the hypervisor and Dom0 will be aware that a total of 5 VMs
are running.

A few advantages:
       1. This implementation will make the distribution of resources
simpler, because Xen is aware of how many VMs are actually running.
       2. Process migration of VMs (running over lxc) will be possible,
for example when we want to migrate a VM to other hardware where lxc
might not be available.

[Note: It will also have all the advantages of O.S. level virtualization].
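The resource-accounting idea above can be sketched as a toy model
(hypothetical C, not Xen code; all names are mine): the hypervisor
normally sees only its top-level domains, but once Dom1 reports its lxc
containers, the count it distributes resources against becomes 5.

```c
/* Toy model of the proposed accounting (illustrative only; these
 * structures and names are not part of Xen). Each top-level domain
 * counts as one VM, plus any containers it reports running inside it. */
#include <assert.h>

struct domain_info {
    const char *name;
    int nested_containers;  /* containers reported by the guest, 0 if none */
};

int total_vm_count(const struct domain_info *doms, int ndoms)
{
    int total = 0;
    for (int i = 0; i < ndoms; i++)
        total += 1 + doms[i].nested_containers;
    return total;
}
```

With the scenario from the picture (Dom0, Dom1 running two lxc
containers, Dom2) this model counts 5 VMs, matching the "modified case"
above.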
--
Be Happy Always
+919579650250

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 14:19:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 14:19:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T20uy-0004Yr-3B; Thu, 16 Aug 2012 14:19:24 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T20uw-0004YT-6o
	for Xen-devel@lists.xensource.com; Thu, 16 Aug 2012 14:19:22 +0000
Received: from [85.158.138.51:8331] by server-9.bemta-3.messagelabs.com id
	44/8C-23952-9610D205; Thu, 16 Aug 2012 14:19:21 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-15.tower-174.messagelabs.com!1345126759!26873179!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjk0MzAx\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28898 invoked from network); 16 Aug 2012 14:19:20 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-15.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 16 Aug 2012 14:19:20 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7GEJHS0004060
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK)
	for <Xen-devel@lists.xensource.com>; Thu, 16 Aug 2012 14:19:18 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7GEJGp5014187
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO)
	for <Xen-devel@lists.xensource.com>; Thu, 16 Aug 2012 14:19:17 GMT
Received: from abhmt112.oracle.com (abhmt112.oracle.com [141.146.116.64])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7GEJGop015203
	for <Xen-devel@lists.xensource.com>; Thu, 16 Aug 2012 09:19:16 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 16 Aug 2012 07:19:16 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id D4979402C0; Thu, 16 Aug 2012 10:09:29 -0400 (EDT)
Date: Thu, 16 Aug 2012 10:09:29 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Mukesh Rathor <mukesh.rathor@oracle.com>
Message-ID: <20120816140929.GB17687@phenom.dumpdata.com>
References: <20120815180131.24aaa5ce@mantra.us.oracle.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20120815180131.24aaa5ce@mantra.us.oracle.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [[RFC PATCH 2/8]: PVH: changes related to initial
 boot and irq rewiring
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Aug 15, 2012 at 06:01:31PM -0700, Mukesh Rathor wrote:
> 
> ---
>  arch/x86/xen/enlighten.c |   67 ++++++++++++++++++++++++++++++++++++++-------
>  arch/x86/xen/irq.c       |   22 ++++++++++++++-
>  2 files changed, 77 insertions(+), 12 deletions(-)
> 
> diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
> index bf4bda6..3a58c51 100644
> --- a/arch/x86/xen/enlighten.c
> +++ b/arch/x86/xen/enlighten.c
> @@ -139,6 +139,8 @@ struct tls_descs {
>   */
>  static DEFINE_PER_CPU(struct tls_descs, shadow_tls_desc);
>  
> +static void __init xen_hvm_init_shared_info(void);
> +
>  static void clamp_max_cpus(void)
>  {
>  #ifdef CONFIG_SMP
> @@ -217,8 +219,8 @@ static void __init xen_banner(void)
>  	struct xen_extraversion extra;
>  	HYPERVISOR_xen_version(XENVER_extraversion, &extra);
>  
> -	printk(KERN_INFO "Booting paravirtualized kernel on %s\n",
> -	       pv_info.name);
> +	printk(KERN_INFO "Booting paravirtualized kernel %son %s\n",
> +		(xen_pvh_domain() ? "in HVM " : ""), pv_info.name);
>  	printk(KERN_INFO "Xen version: %d.%d%s%s\n",
>  	       version >> 16, version & 0xffff, extra.extraversion,
>  	       xen_feature(XENFEAT_mmu_pt_update_preserve_ad) ? " (preserve-AD)" : "");
> @@ -271,12 +273,15 @@ static void xen_cpuid(unsigned int *ax, unsigned int *bx,
>  		break;
>  	}
>  
> -	asm(XEN_EMULATE_PREFIX "cpuid"
> -		: "=a" (*ax),
> -		  "=b" (*bx),
> -		  "=c" (*cx),
> -		  "=d" (*dx)
> -		: "0" (*ax), "2" (*cx));
> +	if (xen_pvh_domain())
> +		native_cpuid(ax, bx, cx, dx);
> +	else
> +		asm(XEN_EMULATE_PREFIX "cpuid"
> +			: "=a" (*ax),
> +			"=b" (*bx),
> +			"=c" (*cx),
> +			"=d" (*dx)
> +			: "0" (*ax), "2" (*cx));
>  
>  	*bx &= maskebx;
>  	*cx &= maskecx;
> @@ -1034,6 +1039,10 @@ static int xen_write_msr_safe(unsigned int msr, unsigned low, unsigned high)
>  
>  void xen_setup_shared_info(void)
>  {
> +	/* do later in xen_pvh_guest_init() when extend_brk is properly setup*/
> +	if (xen_pvh_domain() && xen_initial_domain())
> +		return;
> +
>  	if (!xen_feature(XENFEAT_auto_translated_physmap)) {
>  		set_fixmap(FIX_PARAVIRT_BOOTMAP,
>  			   xen_start_info->shared_info);
> @@ -1044,6 +1053,10 @@ void xen_setup_shared_info(void)
>  		HYPERVISOR_shared_info =
>  			(struct shared_info *)__va(xen_start_info->shared_info);
>  
> +	/* PVH TBD/FIXME: vcpu info placement in phase 2 */
> +	if (xen_pvh_domain())
> +		return;
> +
>  #ifndef CONFIG_SMP
>  	/* In UP this is as good a place as any to set up shared info */
>  	xen_setup_vcpu_info_placement();
> @@ -1274,6 +1287,10 @@ static const struct machine_ops xen_machine_ops __initconst = {
>   */
>  static void __init xen_setup_stackprotector(void)
>  {
> +	if (xen_pvh_domain()) {
> +		switch_to_new_gdt(0);
> +		return;
> +	}
>  	pv_cpu_ops.write_gdt_entry = xen_write_gdt_entry_boot;
>  	pv_cpu_ops.load_gdt = xen_load_gdt_boot;
>  
> @@ -1284,6 +1301,25 @@ static void __init xen_setup_stackprotector(void)
>  	pv_cpu_ops.load_gdt = xen_load_gdt;
>  }
>  
> +static void __init xen_pvh_guest_init(void)
> +{
> +#ifndef __HAVE_ARCH_PTE_SPECIAL
> +	("__HAVE_ARCH_PTE_SPECIAL is required for PVH for now\n");
> +	#error("__HAVE_ARCH_PTE_SPECIAL is required for PVH\n");
> +#endif

Should this just be rolled into the Kconfig as part of enabling
this mode?
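One hedged way to express that (illustrative Kconfig only; the symbol
name and placement are assumptions, not from the patch): since
__HAVE_ARCH_PTE_SPECIAL is a C-level arch define that x86 provides
unconditionally, the same constraint can be stated as a Kconfig
dependency so unsupported configurations are refused at configure time
rather than with an #error deep in xen_pvh_guest_init().

```kconfig
# Hypothetical sketch, not the actual patch: gate PVH support on the
# architectures that provide pte_special() (__HAVE_ARCH_PTE_SPECIAL is
# defined unconditionally on x86), instead of failing with #error at
# compile time.
config XEN_PVH
	bool "Support for running as a PVH guest"
	depends on XEN && X86_64
	default n
```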

> +	/* PVH TBD/FIXME: for now just disable this. */
> +	have_vcpu_info_placement = 0;
> +
> +	if (xen_feature(XENFEAT_hvm_callback_vector))
> +		xen_have_vector_callback = 1;
> +
> +        /* for domU, the library sets start_info.shared_info to pfn, but for
> +         * dom0, it contains mfn. we need to get the pfn for shared_info. PVH
> +	 * uses HVM code in many places */
> +	if (xen_initial_domain())
> +		xen_hvm_init_shared_info();
> +}
> +
>  /* First C function to be called on Xen boot */
>  asmlinkage void __init xen_start_kernel(void)
>  {
> @@ -1294,15 +1330,23 @@ asmlinkage void __init xen_start_kernel(void)
>  	if (!xen_start_info)
>  		return;
>  
> +#ifdef CONFIG_X86_32
> +	xen_raw_printk("ERROR: 32bit PV guest can not run in HVM container\n");
> +	return;

Perhaps BUG() ?
> +#endif
>  	xen_domain_type = XEN_PV_DOMAIN;
>  
> +	xen_setup_features();
>  	xen_setup_machphys_mapping();
>  
>  	/* Install Xen paravirt ops */
>  	pv_info = xen_info;
>  	pv_init_ops = xen_init_ops;
> -	pv_cpu_ops = xen_cpu_ops;
>  	pv_apic_ops = xen_apic_ops;
> +	if (xen_pvh_domain())
> +		pv_cpu_ops.cpuid = xen_cpuid;
> +	else
> +		pv_cpu_ops = xen_cpu_ops;
>  
>  	x86_init.resources.memory_setup = xen_memory_setup;
>  	x86_init.oem.arch_setup = xen_arch_setup;
> @@ -1334,8 +1378,6 @@ asmlinkage void __init xen_start_kernel(void)
>  	/* Work out if we support NX */
>  	x86_configure_nx();
>  
> -	xen_setup_features();
> -
>  	/* Get mfn list */
>  	if (!xen_feature(XENFEAT_auto_translated_physmap))
>  		xen_build_dynamic_phys_to_machine();
> @@ -1462,6 +1504,9 @@ asmlinkage void __init xen_start_kernel(void)
>  
>  	xen_setup_runstate_info(0);
>  
> +	if (xen_pvh_domain())
> +		xen_pvh_guest_init();
> +
>  	/* Start the world */
>  #ifdef CONFIG_X86_32
>  	i386_start_kernel();
> diff --git a/arch/x86/xen/irq.c b/arch/x86/xen/irq.c
> index 1573376..7c7dfd1 100644
> --- a/arch/x86/xen/irq.c
> +++ b/arch/x86/xen/irq.c
> @@ -100,6 +100,10 @@ PV_CALLEE_SAVE_REGS_THUNK(xen_irq_enable);
>  
>  static void xen_safe_halt(void)
>  {
> +	/* so event channel can be delivered to us, since in HVM container */
> +	if (xen_pvh_domain())
> +		local_irq_enable();
> +
>  	/* Blocking includes an implicit local_irq_enable(). */
>  	if (HYPERVISOR_sched_op(SCHEDOP_block, NULL) != 0)
>  		BUG();
> @@ -126,8 +130,24 @@ static const struct pv_irq_ops xen_irq_ops __initconst = {
>  #endif
>  };
>  
> +static const struct pv_irq_ops xen_pvh_irq_ops __initdata = {
> +	.save_fl = __PV_IS_CALLEE_SAVE(native_save_fl),
> +	.restore_fl = __PV_IS_CALLEE_SAVE(native_restore_fl),
> +	.irq_disable = __PV_IS_CALLEE_SAVE(native_irq_disable),
> +	.irq_enable = __PV_IS_CALLEE_SAVE(native_irq_enable),
> +
> +	.safe_halt = xen_safe_halt,
> +	.halt = xen_halt,
> +#ifdef CONFIG_X86_64
> +	.adjust_exception_frame = paravirt_nop,
> +#endif
> +};
> +
>  void __init xen_init_irq_ops(void)
>  {
> -	pv_irq_ops = xen_irq_ops;
> +	if (xen_pvh_domain())
> +		pv_irq_ops = xen_pvh_irq_ops;
> +	else
> +		pv_irq_ops = xen_irq_ops;
>  	x86_init.irqs.intr_init = xen_init_IRQ;
>  }
> -- 
> 1.7.2.3

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 14:24:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 14:24:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T20zP-0004vr-QO; Thu, 16 Aug 2012 14:23:59 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T20zO-0004vj-Tx
	for Xen-devel@lists.xensource.com; Thu, 16 Aug 2012 14:23:59 +0000
Received: from [85.158.139.83:50081] by server-1.bemta-5.messagelabs.com id
	CF/41-09980-E720D205; Thu, 16 Aug 2012 14:23:58 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-5.tower-182.messagelabs.com!1345127036!28529187!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDcwMTY5NA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6430 invoked from network); 16 Aug 2012 14:23:57 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-5.tower-182.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 16 Aug 2012 14:23:57 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7GENsKl029282
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK)
	for <Xen-devel@lists.xensource.com>; Thu, 16 Aug 2012 14:23:54 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7GENrTM022580
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO)
	for <Xen-devel@lists.xensource.com>; Thu, 16 Aug 2012 14:23:54 GMT
Received: from abhmt114.oracle.com (abhmt114.oracle.com [141.146.116.66])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7GENrPp007879
	for <Xen-devel@lists.xensource.com>; Thu, 16 Aug 2012 09:23:53 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 16 Aug 2012 07:23:53 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 4B9CB402C0; Thu, 16 Aug 2012 10:14:07 -0400 (EDT)
Date: Thu, 16 Aug 2012 10:14:07 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Mukesh Rathor <mukesh.rathor@oracle.com>
Message-ID: <20120816141407.GC17687@phenom.dumpdata.com>
References: <20120815180356.08d4d2e4@mantra.us.oracle.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20120815180356.08d4d2e4@mantra.us.oracle.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [RFC PATCH 4/8]: identity map, events,
	and xenbus related changes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Aug 15, 2012 at 06:03:56PM -0700, Mukesh Rathor wrote:
> 
> ---
>  arch/x86/xen/setup.c               |   32 +++++++++++++++++++++++++++-----
>  drivers/xen/events.c               |    7 +++++++
>  drivers/xen/xenbus/xenbus_client.c |    2 +-
>  drivers/xen/xenbus/xenbus_probe.c  |    5 ++++-
>  4 files changed, 39 insertions(+), 7 deletions(-)
> 
> diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
> index 936f21d..1c961fc 100644
> --- a/arch/x86/xen/setup.c
> +++ b/arch/x86/xen/setup.c
> @@ -26,6 +26,7 @@
>  #include <xen/interface/memory.h>
>  #include <xen/interface/physdev.h>
>  #include <xen/features.h>
> +#include "mmu.h"
>  #include "xen-ops.h"
>  #include "vdso.h"
>  
> @@ -222,6 +223,20 @@ static void __init xen_set_identity_and_release_chunk(
>  	*identity += set_phys_range_identity(start_pfn, end_pfn);
>  }
>  
> +/* For PVH, the pfns [0..MAX] are mapped to mfn's in the EPT/NPT. The mfns
> + * are released as part of this 1:1 mapping hypercall. We can't balloon down
> + * any time later because when p2m/EPT is updated, the mfns are already lost.
> + * Also, we map the entire IO space, ie, beyond max_pfn_mapped.

> + */
> +static void noinline __init xen_pvh_identity_map_chunk(unsigned long start_pfn,
> +						       unsigned long end_pfn)
> +{
> +	unsigned long pfn;
> +
> +	for (pfn = start_pfn; pfn < end_pfn; pfn++)
> +		xen_set_clr_mmio_pvh_pte(pfn, pfn, 1, 1);

Include a comment explaining what the '1' and '1' are for? Like:
		xen_set..(pfn, pfn, 1 /* Enable clearing */, 1 /* Do something .. */
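A minimal sketch of that suggestion (hypothetical names throughout; the
enum and the recording stub are mine, standing in for the real
xen_set_clr_mmio_pvh_pte()): giving the flag argument a named value
makes every call site self-documenting without a comment at each caller.

```c
/* Hypothetical illustration of self-documenting flag arguments; this
 * stub only records what it was called with so the effect is testable.
 * The real function would issue the mapping hypercall instead. */
#include <assert.h>

enum pvh_pte_action { PVH_PTE_CLEAR = 0, PVH_PTE_SET = 1 };

/* recorded by the stub so the call is observable */
int last_order, last_action;

void xen_set_clr_mmio_pvh_pte(unsigned long pfn, unsigned long mfn,
                              int order, enum pvh_pte_action action)
{
    (void)pfn; (void)mfn;   /* real code: hypercall on this pfn/mfn pair */
    last_order = order;
    last_action = action;
}

void pvh_identity_map_one(unsigned long pfn)
{
    /* 1 => a single page; PVH_PTE_SET => install the 1:1 mapping */
    xen_set_clr_mmio_pvh_pte(pfn, pfn, 1, PVH_PTE_SET);
}
```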

> +}
> +
>  static unsigned long __init xen_set_identity_and_release(
>  	const struct e820entry *list, size_t map_size, unsigned long nr_pages)
>  {
> @@ -251,11 +266,18 @@ static unsigned long __init xen_set_identity_and_release(
>  			if (entry->type == E820_RAM)
>  				end_pfn = PFN_UP(entry->addr);
>  
> -			if (start_pfn < end_pfn)
> -				xen_set_identity_and_release_chunk(
> -					start_pfn, end_pfn, nr_pages,
> -					&released, &identity);
> -
> +			if (start_pfn < end_pfn) {
> +				if (xen_pvh_domain()) {
> +					xen_pvh_identity_map_chunk(start_pfn,
> +								   end_pfn);
> +					released += end_pfn - start_pfn;
> +					identity += end_pfn - start_pfn;
> +				} else {
> +					xen_set_identity_and_release_chunk(
> +						start_pfn, end_pfn, nr_pages,
> +						&released, &identity);
> +				}
> +			}
>  			start = end;
>  		}
>  	}
> diff --git a/drivers/xen/events.c b/drivers/xen/events.c
> index 7595581..260113e 100644
> --- a/drivers/xen/events.c
> +++ b/drivers/xen/events.c
> @@ -1814,6 +1814,13 @@ void __init xen_init_IRQ(void)
>  		if (xen_initial_domain())
>  			pci_xen_initial_domain();
>  
> +		if (xen_pvh_domain()) {
> +			xen_callback_vector();
> +			return;
> +		}
> +
> +		/* PVH: TBD/FIXME: debug and fix eio map to work with pvh */
> +
>  		pirq_eoi_map = (void *)__get_free_page(GFP_KERNEL|__GFP_ZERO);
>  		eoi_gmfn.gmfn = virt_to_mfn(pirq_eoi_map);
>  		rc = HYPERVISOR_physdev_op(PHYSDEVOP_pirq_eoi_gmfn_v2, &eoi_gmfn);
> diff --git a/drivers/xen/xenbus/xenbus_client.c b/drivers/xen/xenbus/xenbus_client.c
> index b3e146e..c0fcff1 100644
> --- a/drivers/xen/xenbus/xenbus_client.c
> +++ b/drivers/xen/xenbus/xenbus_client.c
> @@ -743,7 +743,7 @@ static const struct xenbus_ring_ops ring_ops_hvm = {
>  
>  void __init xenbus_ring_ops_init(void)
>  {
> -	if (xen_pv_domain())
> +	if (xen_pv_domain() && !xen_pvh_domain())
>  		ring_ops = &ring_ops_pv;
>  	else
>  		ring_ops = &ring_ops_hvm;
> diff --git a/drivers/xen/xenbus/xenbus_probe.c b/drivers/xen/xenbus/xenbus_probe.c
> index b793723..735dd5c 100644
> --- a/drivers/xen/xenbus/xenbus_probe.c
> +++ b/drivers/xen/xenbus/xenbus_probe.c
> @@ -749,7 +749,10 @@ static int __init xenbus_init(void)
>  			if (err)
>  				goto out_error;
>  		}
> -		xen_store_interface = mfn_to_virt(xen_store_mfn);
> +		if (xen_pvh_domain())
> +			xen_store_interface = __va(xen_store_mfn<<PAGE_SHIFT);
> +		else
> +			xen_store_interface = mfn_to_virt(xen_store_mfn);
>  	}
>  
>  	/* Initialize the interface to xenstore. */
> -- 
> 1.7.2.3
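The pfn/mfn distinction behind the xenbus_init() hunk above can be
sketched with a toy translation model (hypothetical C; the array stands
in for the kernel's m2p machinery and the names are mine): in an
auto-translated PVH guest the value in xen_store_mfn already holds a
guest pfn, so the address comes straight from pfn << PAGE_SHIFT, while a
classic PV guest must translate the mfn to a pfn first.

```c
/* Toy model of the xenbus_init() change (illustrative; not kernel code).
 * A PV guest sees machine frame numbers (mfns) and translates them to
 * guest pfns via an m2p table; a PVH guest is auto-translated, so the
 * "mfn" field is already a pfn. __va() would add the direct-map base
 * to the offset computed here. */
#include <assert.h>

#define TOY_PAGE_SHIFT 12

/* toy m2p table: machine frame 7 backs guest pfn 3, for example */
static const unsigned long m2p[8] = { 0, 0, 0, 0, 0, 0, 0, 3 };

unsigned long store_offset(unsigned long xen_store_mfn, int pvh)
{
    unsigned long pfn = pvh ? xen_store_mfn        /* already a pfn */
                            : m2p[xen_store_mfn];  /* translate mfn->pfn */
    return pfn << TOY_PAGE_SHIFT;
}
```

Both paths reach the same page once the translation is accounted for;
the bug the hunk avoids is applying the mfn-based path when no m2p
exists.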

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

>  drivers/xen/xenbus/xenbus_client.c |    2 +-
>  drivers/xen/xenbus/xenbus_probe.c  |    5 ++++-
>  4 files changed, 39 insertions(+), 7 deletions(-)
> 
> diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
> index 936f21d..1c961fc 100644
> --- a/arch/x86/xen/setup.c
> +++ b/arch/x86/xen/setup.c
> @@ -26,6 +26,7 @@
>  #include <xen/interface/memory.h>
>  #include <xen/interface/physdev.h>
>  #include <xen/features.h>
> +#include "mmu.h"
>  #include "xen-ops.h"
>  #include "vdso.h"
>  
> @@ -222,6 +223,20 @@ static void __init xen_set_identity_and_release_chunk(
>  	*identity += set_phys_range_identity(start_pfn, end_pfn);
>  }
>  
> +/* For PVH, the pfns [0..MAX] are mapped to mfn's in the EPT/NPT. The mfns
> + * are released as part of this 1:1 mapping hypercall. We can't balloon down
> + * any time later because when p2m/EPT is updated, the mfns are already lost.
> + * Also, we map the entire IO space, ie, beyond max_pfn_mapped.

> + */
> +static void noinline __init xen_pvh_identity_map_chunk(unsigned long start_pfn,
> +						       unsigned long end_pfn)
> +{
> +	unsigned long pfn;
> +
> +	for (pfn = start_pfn; pfn < end_pfn; pfn++)
> +		xen_set_clr_mmio_pvh_pte(pfn, pfn, 1, 1);

Include a comment explaining what the '1' and '1' are for? Like:
		xen_set..(pfn, pfn, 1 /* Enable clearing */, 1 /* Do something .. */);

> +}
> +
>  static unsigned long __init xen_set_identity_and_release(
>  	const struct e820entry *list, size_t map_size, unsigned long nr_pages)
>  {
> @@ -251,11 +266,18 @@ static unsigned long __init xen_set_identity_and_release(
>  			if (entry->type == E820_RAM)
>  				end_pfn = PFN_UP(entry->addr);
>  
> -			if (start_pfn < end_pfn)
> -				xen_set_identity_and_release_chunk(
> -					start_pfn, end_pfn, nr_pages,
> -					&released, &identity);
> -
> +			if (start_pfn < end_pfn) {
> +				if (xen_pvh_domain()) {
> +					xen_pvh_identity_map_chunk(start_pfn,
> +								   end_pfn);
> +					released += end_pfn - start_pfn;
> +					identity += end_pfn - start_pfn;
> +				} else {
> +					xen_set_identity_and_release_chunk(
> +						start_pfn, end_pfn, nr_pages,
> +						&released, &identity);
> +				}
> +			}
>  			start = end;
>  		}
>  	}
> diff --git a/drivers/xen/events.c b/drivers/xen/events.c
> index 7595581..260113e 100644
> --- a/drivers/xen/events.c
> +++ b/drivers/xen/events.c
> @@ -1814,6 +1814,13 @@ void __init xen_init_IRQ(void)
>  		if (xen_initial_domain())
>  			pci_xen_initial_domain();
>  
> +		if (xen_pvh_domain()) {
> +			xen_callback_vector();
> +			return;
> +		}
> +
> +		/* PVH: TBD/FIXME: debug and fix eio map to work with pvh */
> +
>  		pirq_eoi_map = (void *)__get_free_page(GFP_KERNEL|__GFP_ZERO);
>  		eoi_gmfn.gmfn = virt_to_mfn(pirq_eoi_map);
>  		rc = HYPERVISOR_physdev_op(PHYSDEVOP_pirq_eoi_gmfn_v2, &eoi_gmfn);
> diff --git a/drivers/xen/xenbus/xenbus_client.c b/drivers/xen/xenbus/xenbus_client.c
> index b3e146e..c0fcff1 100644
> --- a/drivers/xen/xenbus/xenbus_client.c
> +++ b/drivers/xen/xenbus/xenbus_client.c
> @@ -743,7 +743,7 @@ static const struct xenbus_ring_ops ring_ops_hvm = {
>  
>  void __init xenbus_ring_ops_init(void)
>  {
> -	if (xen_pv_domain())
> +	if (xen_pv_domain() && !xen_pvh_domain())
>  		ring_ops = &ring_ops_pv;
>  	else
>  		ring_ops = &ring_ops_hvm;
> diff --git a/drivers/xen/xenbus/xenbus_probe.c b/drivers/xen/xenbus/xenbus_probe.c
> index b793723..735dd5c 100644
> --- a/drivers/xen/xenbus/xenbus_probe.c
> +++ b/drivers/xen/xenbus/xenbus_probe.c
> @@ -749,7 +749,10 @@ static int __init xenbus_init(void)
>  			if (err)
>  				goto out_error;
>  		}
> -		xen_store_interface = mfn_to_virt(xen_store_mfn);
> +		if (xen_pvh_domain())
> +			xen_store_interface = __va(xen_store_mfn<<PAGE_SHIFT);
> +		else
> +			xen_store_interface = mfn_to_virt(xen_store_mfn);
>  	}
>  
>  	/* Initialize the interface to xenstore. */
> -- 
> 1.7.2.3

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 14:25:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 14:25:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T210X-0004zx-9T; Thu, 16 Aug 2012 14:25:09 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T210V-0004zq-SY
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 14:25:07 +0000
Received: from [85.158.143.35:42867] by server-1.bemta-4.messagelabs.com id
	2D/71-07754-3C20D205; Thu, 16 Aug 2012 14:25:07 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1345127106!14457112!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18103 invoked from network); 16 Aug 2012 14:25:06 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-16.tower-21.messagelabs.com with SMTP;
	16 Aug 2012 14:25:06 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 16 Aug 2012 15:25:05 +0100
Message-Id: <502D1F080200007800095882@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Thu, 16 Aug 2012 15:25:44 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: <lars.kurth.xen@gmail.com>,<lars.kurth@xen.org>
References: <502934CB.7060409@xen.org>
	<1344941807.5926.25.camel@zakaz.uk.xensource.com>
	<502A3A0B.6000401@xen.org> <20120815171953.GB9984@US-SEA-R8XVZTX>
	<CAOqnZH54ZhSOmZ1zBwF1QPVdBc_g_9859121pcqdnbUFO6pUvw@mail.gmail.com>
	<1345107025.27489.27.camel@zakaz.uk.xensource.com>
	<502CDB7E0200007800095622@nat28.tlf.novell.com>
	<1345114757.27489.82.camel@zakaz.uk.xensource.com>
	<502CEFF0020000780009572A@nat28.tlf.novell.com>
	<1345115945.27489.87.camel@zakaz.uk.xensource.com>
	<502CF211.9050406@xen.org>
In-Reply-To: <502CF211.9050406@xen.org>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Ian Campbell <Ian.Campbell@citrix.com>, Matt Wilson <msw@amazon.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.2 Limits (needs review)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 16.08.12 at 15:13, Lars Kurth <lars.kurth@xen.org> wrote:
> Any noteworthy new 4.2 features 
> http://wiki.xen.org/wiki/Xen_Release_Features would also need to be added

I would have wanted to add native EFI boot and multi-segment
PCI support, but given how the "edit" tab's contents look, I'm
afraid I would corrupt the entire table...

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 14:27:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 14:27:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T212c-00058Y-Qm; Thu, 16 Aug 2012 14:27:18 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <tupeng212@gmail.com>) id 1T212a-00058K-Jm
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 14:27:16 +0000
Received: from [85.158.138.51:5104] by server-11.bemta-3.messagelabs.com id
	8F/82-23152-3430D205; Thu, 16 Aug 2012 14:27:15 +0000
X-Env-Sender: tupeng212@gmail.com
X-Msg-Ref: server-16.tower-174.messagelabs.com!1345127233!28542382!1
X-Originating-IP: [209.85.160.45]
X-SpamReason: No, hits=1.1 required=7.0 tests=HTML_60_70,HTML_MESSAGE,
	MIME_BASE64_TEXT, MIME_BOUND_NEXTPART, ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP, spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19157 invoked from network); 16 Aug 2012 14:27:15 -0000
Received: from mail-pb0-f45.google.com (HELO mail-pb0-f45.google.com)
	(209.85.160.45)
	by server-16.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 14:27:15 -0000
Received: by pbbrp12 with SMTP id rp12so1709119pbb.32
	for <xen-devel@lists.xen.org>; Thu, 16 Aug 2012 07:27:12 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=date:from:to:cc:reply-to:subject:references:x-priority:x-guid
	:x-has-attach:x-mailer:mime-version:message-id:content-type;
	bh=zitJo3UgaH28bR4YzQPaoUz5nrNBhZKSp4iVUikab6o=;
	b=mmic8d3vqeu8g2u3f2SD5KGiI4KyptJvdnOeXcYOxtWeOEeH04c7YnOk1bVM6REqVH
	e7Pz2dSPcNE4mQdgm6Tqt0qc7jEZuLxesYH8KzPayJMO7FfwI2fUArl7pHZghg9Q4FxK
	bEYaqsVgMjaZ9ODo7Mj5KJ6GcYw8f+vIkWqrcfFDTKnnzEvjPSW6ugoq6A8qFaIZMvEM
	n9gK2rwTlG2VLPtVPWkKeJgBZoyxRzWZrpROJnBCf5txrzKWef4Bvj2o61oeA7U8SJyc
	OOpHsE+6IYyfvVX44CeDww/BhJlGmldVjArAYVeLpv00tsfH3lOMIztX0MNvWkUExAaK
	ujVQ==
Received: by 10.68.222.167 with SMTP id qn7mr3650364pbc.98.1345127232582;
	Thu, 16 Aug 2012 07:27:12 -0700 (PDT)
Received: from root ([115.199.242.186])
	by mx.google.com with ESMTPS id kt8sm2785434pbc.1.2012.08.16.07.27.05
	(version=SSLv3 cipher=OTHER); Thu, 16 Aug 2012 07:27:12 -0700 (PDT)
Date: Thu, 16 Aug 2012 22:27:12 +0800
From: tupeng212 <tupeng212@gmail.com>
To: "Jan Beulich" <JBeulich@suse.com>
References: <502A3BBC0200007800094B68@nat28.tlf.novell.com>, 
	<2012081522045495397713@gmail.com> <2012081522121039050717@gmail.com>, 
	<502CC9D702000078000955B9@nat28.tlf.novell.com>
	<201208162143289686654@gmail.com>, 
	<502D1C8A0200007800095851@nat28.tlf.novell.com>
X-Priority: 3
X-GUID: 4EC6CDED-746D-4104-A8CC-6783C71A3E22
X-Has-Attach: no
X-Mailer: Foxmail 7.0.1.87[cn]
Mime-Version: 1.0
Message-ID: <2012081622270760996017@gmail.com>
Cc: Yang Z Zhang <yang.z.zhang@intel.com>, Tim Deegan <tim@xen.org>,
	Keir Fraser <keir@xen.org>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH,
	RFC v2] x86/HVM: assorted RTC emulation adjustments (was Re: Big
	Bug:Time in VM goes slower...)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: tupeng212 <tupeng212@gmail.com>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6051636379622459400=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.

--===============6051636379622459400==
Content-Type: multipart/alternative;
	boundary="----=_001_NextPart846642126284_=----"

This is a multi-part message in MIME format.

------=_001_NextPart846642126284_=----
Content-Type: text/plain;
	charset="gb2312"
Content-Transfer-Encoding: 8bit

Okay, some back and forth is there in any case, but without
knowing the time distance between the individual instances it's
hard to tell whether what I'm thinking of might help.

> besides, I checked my former simple tester this morning after it ran for a
> whole night, it lagged much.

Could you btw quantify the lagging?
// I didn't pay attention to it, but about half an hour's lagging certainly exists.
When you go home, you can also have a try and see the result tomorrow morning.

Your tester sets only a single, constant rate repeatedly, right?
// Yes, the simplest test just sets a single 10000 down repeatedly.
If so, then the thought of adjustment likely won't help.

------=_001_NextPart846642126284_=------



--===============6051636379622459400==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6051636379622459400==--



From xen-devel-bounces@lists.xen.org Thu Aug 16 14:33:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 14:33:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T218S-0005Ot-Kr; Thu, 16 Aug 2012 14:33:20 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1T218R-0005OS-NN
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 14:33:19 +0000
Received: from [85.158.143.99:56022] by server-2.bemta-4.messagelabs.com id
	B1/E2-31966-FA40D205; Thu, 16 Aug 2012 14:33:19 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1345127596!21493224!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzE3OTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8013 invoked from network); 16 Aug 2012 14:33:18 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 14:33:18 -0000
X-IronPort-AV: E=Sophos;i="4.77,778,1336363200"; d="scan'208";a="205375467"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 10:33:16 -0400
Received: from [10.80.2.76] (10.80.2.76) by FTLPMAILMX02.citrite.net
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0; Thu, 16 Aug 2012
	10:33:16 -0400
Message-ID: <502D04AA.8030107@citrix.com>
Date: Thu, 16 Aug 2012 15:33:14 +0100
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20120428 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
References: <1344947062-4796-1-git-send-email-attilio.rao@citrix.com>
	<1344947062-4796-3-git-send-email-attilio.rao@citrix.com>
	<502A5964.2080509@citrix.com> <502A5CD0.8000201@citrix.com>
	<502B85D1.8000606@citrix.com>
	<alpine.DEB.2.02.1208151418380.2278@kaball.uk.xensource.com>
	<502BF563.8010902@citrix.com>
	<alpine.DEB.2.02.1208161200200.2278@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1208161200200.2278@kaball.uk.xensource.com>
Cc: Attilio Rao <attilio.rao@citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v2 2/2] Xen: Document the semantic of the
 pagetable_reserve PVOPS
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 16/08/12 12:07, Stefano Stabellini wrote:
> On Wed, 15 Aug 2012, David Vrabel wrote:
>> On 15/08/12 14:55, Stefano Stabellini wrote:
>>> On Wed, 15 Aug 2012, David Vrabel wrote:
>>>> On 14/08/12 15:12, Attilio Rao wrote:
>>>>> On 14/08/12 14:57, David Vrabel wrote:
>>>>>> On 14/08/12 13:24, Attilio Rao wrote:
>>>> After looking at it some more, I think this pv-ops is unnecessary. How
>>>> about the following patch to just remove it completely?
>>>>
>>>> I've only smoke-tested 32-bit and 64-bit dom0 but I think the reasoning
>>>> is sound.
>>>
>>> Do you have more than 4G to dom0 on those boxes?
>>
>> I've tested with 6G now, both 64-bit and 32-bit with HIGHPTE.
>>
>>> It certainly fixed a serious crash at the time it was introduced, see
>>> http://marc.info/?l=linux-kernel&m=129901609503574 and
>>> http://marc.info/?l=linux-kernel&m=130133909408229. Unless something big
>>> changed in kernel_physical_mapping_init, I think we still need it.
>>> Depending on the e820 of your test box, the kernel could crash (or not),
>>> possibly in different places.

FYI, it looks like pgt_buf is now located low down, which is why these
changes worked for me.  Possibly this changed as part of a memblock
refactor.

>>>>>> Having said that, I couldn't immediately see where pages in (end, 
>>>>>> pgt_buf_top] was getting set RO.  Can you point me to where it's 
>>>>>> done?
>>>>>>
>>>>>
>>>>> As mentioned in the comment, please look at xen_set_pte_init().
>>>>
>>>> xen_set_pte_init() only ensures it doesn't set the PTE as writable if it
>>>> is already present and read-only.
>>>
>>> look at mask_rw_pte and read the threads linked above, unfortunately it
>>> is not that simple.
>>
>> Yes, I was remembering what 32-bit did here.
>>
>> The 64-bit version is a bit confused and it often ends up /not/ clearing
>> RW for the direct mapping of the pages in the pgt_buf because any
>> existing RW mappings will be used as-is.  See phys_pte_init() which
>> checks for an existing mapping and only sets the PTE if it is not
>> already set.
> 
> not all the pagetable pages might be already mapped, even if they are
> already hooked into the pagetable

Yes, I think this is easy to handle though.

    /*
     * Make sure that active page table pages are not mapped RW.
     */
    if (is_early_ioremap_ptep(ptep)) {
        /*
         * If we're updating an early ioremap PTE, then this PFN may
         * already be in the linear mapping.  If it is, use the
         * existing RW bit.
         */
        unsigned int level;
        pte_t *linear_pte;

        linear_pte = lookup_address((unsigned long)__va(PFN_PHYS(pfn)),
                                    &level);
        if (linear_pte && !pte_write(*linear_pte))
            pte = pte_wrprotect(pte);
    } else if (pfn >= pgt_buf_start && pfn < pgt_buf_end) {
        /*
         * The PFN may not be mapped but may be hooked into the page
         * tables.  Make sure this new mapping is read-only.
         */
        pte = pte_wrprotect(pte);
    }

However, the real subtlety is page tables that are mapped as they
themselves are hooked in.

As an example, let's say pgt_buf_start (s) is on a 1 GiB boundary and
pgt_buf_top (t) is below the next 2 MiB boundary.  We're mapping memory
with 4 KiB pages from s to s + 4 MiB.  This requires a new PUD and two
new PMDs.

To map this region:

1. Allocate a new PUD. (@ e = pgt_buf_end)

2. Allocate a new PMD. (@ e + 1)

3. Fill in this PMD's PTEs.  This covers pgt_buf so sets (s, e + 1) as RO.

4. Call pmd_populate() to hook in this PMD.  This does not set e + 1 as
RO (but that is OK, as it already is).

5. Allocate a new PMD. (@ e + 2)

6. Fill in this PMD's PTEs.

7. Call pmd_populate() to hook in this PMD.  This does not set e + 2 as
RO, as the region isn't mapped yet.

8. Call pud_populate() to hook in this PUD.  This sets e as RO, but
e + 2 is still RW.

9. Boom!

It may be possible to fix up the permissions as the pages are hooked
in, i.e., if this page's entries cover (pgt_buf_start, pgt_buf_end],
walk the entries and any child tables and fix up the permissions of
the leaf entries.

This would walk the tables a few times unless we were careful to only
walk them when hooking a page into an active table.

It was fun trying to understand this, but I think I'll give up now...

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 14:33:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 14:33:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T218d-0005QF-7C; Thu, 16 Aug 2012 14:33:31 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1T218b-0005Pp-Cy
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 14:33:29 +0000
Received: from [85.158.143.35:39257] by server-2.bemta-4.messagelabs.com id
	78/23-31966-8B40D205; Thu, 16 Aug 2012 14:33:28 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-8.tower-21.messagelabs.com!1345127606!13796941!1
X-Originating-IP: [63.239.67.10]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11981 invoked from network); 16 Aug 2012 14:33:27 -0000
Received: from emvm-gh1-uea09.nsa.gov (HELO nsa.gov) (63.239.67.10)
	by server-8.tower-21.messagelabs.com with SMTP;
	16 Aug 2012 14:33:27 -0000
X-TM-IMSS-Message-ID: <ae8712ea0000ec68@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov
	([63.239.67.10]) with ESMTP (TREND IMSS SMTP Service 7.1) id
	ae8712ea0000ec68 ; Thu, 16 Aug 2012 10:34:30 -0400
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id q7GEXMw7032734; 
	Thu, 16 Aug 2012 10:33:22 -0400
Message-ID: <502D04B2.6080203@tycho.nsa.gov>
Date: Thu, 16 Aug 2012 10:33:22 -0400
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Organization: National Security Agency
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120717 Thunderbird/14.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1343657037-28495-1-git-send-email-ian.campbell@citrix.com>
	<50242583.3010201@tycho.nsa.gov>
	<CAEBdQ90LB9xvdAZC_QJGRYmBXBaM3ysDuAbG5LZr4AVe=GrA0w@mail.gmail.com>
	<1345116486.27489.92.camel@zakaz.uk.xensource.com>
In-Reply-To: <1345116486.27489.92.camel@zakaz.uk.xensource.com>
Cc: Jean Guyader <jean.guyader@gmail.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Guest knowledge of own domid [was: docs: initial
 documentation for xenstore paths]
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/16/2012 07:28 AM, Ian Campbell wrote:
> On Thu, 2012-08-09 at 22:26 +0100, Jean Guyader wrote:
>> On 9 August 2012 22:02, Daniel De Graaf <dgdegra@tycho.nsa.gov> wrote:
>>> On 07/30/2012 10:03 AM, Ian Campbell wrote:
>>>> This is based upon my inspection of a system with a single PV domain running
>>>> and is therefore very incomplete.
>>>>
>>>> There are several things I'm not sure of here, mostly marked with XXX in the
>>>> text.
>>>>
>>>> In particular:
>>>>
>>>>  - We seem to expose various things to the guest which really it has no need to
>>>>    know (at least not via xenstore). e.g. its own domid, its device model pid,
>>>>    the size of the video ram, store port and gref.
>>>
>>> If the domid key is unneeded/removed, is there a recommended method for
>>> a guest to query its own domid? I don't see a hypercall that returns it
>>> directly, although there is one to return the guest's UUID - which seems
>>> much less useful for a guest to know about itself.
>>>
>>> While hypercalls are fairly consistent about accepting DOMID_SELF, a
>>> domain does occasionally need to know its own ID: xenstore permission
>>> changes do not accept DOMID_SELF, 
> 
> I wonder if that would be a worthwhile protocol extension.

It might be, although it's never required. After checking where this
would be useful, it looks like the caller could simply do a
get-permissions on the node before set-permissions and avoid modifying
the owner. Since non-privileged xenstore clients cannot modify the
owner of a xenstore key, and cannot change permissions on keys they do
not own, this change would only save one call to xenstore. Privileged
domains already take advantage of their override capabilities and
usually do not add domain 0 to the node permissions list, so it's not
useful there.

>> and if two domains are attempting to
>>> set up communication such as V4V or vchan, they need to be able to tell
>>> their peer what domain ID to use.
> 
> That's trickier.
> 
> I suppose they could rendezvous via /vm/$UUID? Although there has been
> talk of removing that path in the future.

The /vm/$UUID path isn't currently useful for this, since it doesn't maintain
domain IDs (just names) and doesn't contain writable sub-keys for a domain
to use. I also don't think such a sub-key should be added; it makes more
sense to keep all of a domain's modifiable keys under its home path.

Perhaps this could be changed to another identifier-to-domid mapping, like
the proposed addition of a location to map name to domid? 

The toolstack would maintain something like:
  /local/by-name/$name == domid
  /local/by-uuid/$uuid == domid
  /local/domain/$domid/name - same as existing
  /local/domain/$domid/uuid - ? maybe unneeded, as it's available from Xen.

>>>
>>
>> That is one way of doing it; another would be to use a name
>> resolution system, a bit like DNS. A system like that would need to
>> live where the VMs are created and destroyed (probably dom0 or a
>> domain builder VM). The server could use vchan, v4v or even a shared
>> XenStore node, but I think we need something like that.
>>
>> In the long run it's much better to rely on a name instead of a domid
>> because domids can
>> change throughout the VM life cycle (reboot, hibernate, save/restore,
>> migration, ...).
> 
> Right, this is the main reason to avoid building a reliance on domid
> into a protocol.
> 

Both ends of any xen-based communication need to handle each of these events
in order to re-establish grants/events or v4v rings/ports. The domid does
need to be treated as a short-term identifier of a domain in a protocol that
expects to continue to work across such events.

>>
>> Jean
>>
>>> It is possible for a domain to query its own domain ID indirectly, so it
>>> would be difficult to argue that a domain should not be able to obtain
>>> its own ID. One method for a domain to query its own ID is to create an
>>> unbound event channel with remote_domid = DOMID_SELF, and then execute
>>> evtchn_status on the event channel in order to see the resolved domain
>>> id. Querying Xenstore permissions on a newly-created key will show the
>>> local domain as the first entry. Less reliably, the backend paths for
>>> all xenbus devices contain the local and remote domain IDs.
>>>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 14:37:47 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 14:37:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T21Cd-0005gj-TF; Thu, 16 Aug 2012 14:37:39 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T21Cb-0005gW-LS
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 14:37:37 +0000
Received: from [85.158.139.83:29444] by server-2.bemta-5.messagelabs.com id
	D6/BE-10142-CA50D205; Thu, 16 Aug 2012 14:37:32 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-182.messagelabs.com!1345127850!28472413!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkyODE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5676 invoked from network); 16 Aug 2012 14:37:31 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-2.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 14:37:31 -0000
X-IronPort-AV: E=Sophos;i="4.77,778,1336348800"; d="scan'208";a="14042830"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 14:36:56 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 16 Aug 2012 15:36:56 +0100
Message-ID: <1345127814.30865.11.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Date: Thu, 16 Aug 2012 15:36:54 +0100
In-Reply-To: <502D04B2.6080203@tycho.nsa.gov>
References: <1343657037-28495-1-git-send-email-ian.campbell@citrix.com>
	<50242583.3010201@tycho.nsa.gov>
	<CAEBdQ90LB9xvdAZC_QJGRYmBXBaM3ysDuAbG5LZr4AVe=GrA0w@mail.gmail.com>
	<1345116486.27489.92.camel@zakaz.uk.xensource.com>
	<502D04B2.6080203@tycho.nsa.gov>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Jean Guyader <jean.guyader@gmail.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Guest knowledge of own domid [was: docs: initial
 documentation for xenstore paths]
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-16 at 15:33 +0100, Daniel De Graaf wrote:

> >> and if two domains are attempting to
> >>> set up communication such as V4V or vchan, they need to be able to tell
> >>> their peer what domain ID to use.
> > 
> > That's trickier.
> > 
> > I suppose they could rendezvous via /vm/$UUID? Although there has been
> > talk of removing that path in the future.
> 
> The /vm/$UUID path isn't currently useful for this, since it doesn't maintain
> domain IDs (just names) and doesn't contain writable sub-keys for a domain
> to use. I also don't think such a sub-key should be added; it makes more
> sense to keep all of a domain's modifiable keys under its home path.
> 
> Perhaps this could be changed to another identifier-to-domid mapping, like
> the proposed addition of a location to map name to domid? 
> 
> The toolstack would maintain something like:
>   /local/by-name/$name == domid
>   /local/by-uuid/$uuid == domid

This second one is a bit like the existing /vm/$uuid/domid.

I think I would go with:

  /local/by-name/$name == /local/domain/$domid
  /local/by-uuid/$uuid == /local/domain/$domid

though, so that you can just read it and use it without interpreting it.

>   /local/domain/$domid/name - same as existing
>   /local/domain/$domid/uuid - ? maybe unneeded, as it's available from Xen.

Is it available for other domains via Xen, or just for yourself?

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Cc: Jean Guyader <jean.guyader@gmail.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Guest knowledge of own domid [was: docs: initial
 documentation for xenstore paths]
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-16 at 15:33 +0100, Daniel De Graaf wrote:

> >> and if two domains are attempting to
> >>> set up communication such as V4V or vchan, they need to be able to tell
> >>> their peer what domain ID to use.
> > 
> > That's trickier.
> > 
> > I suppose they could rendezvous via /vm/$UUID? Although there has been
> > talk of removing that path in the future.
> 
> The /vm/$UUID path isn't currently useful for this, since it doesn't maintain
> domain IDs (just names) and doesn't contain writable sub-keys for a domain
> to use. I also don't think such a sub-key should be added; it makes more
> sense to keep all of a domain's modifiable keys under its home path.
> 
> Perhaps this could be changed to another identifier-to-domid mapping, like
> the proposed addition of a location to map name to domid? 
> 
> The toolstack would maintain something like:
>   /local/by-name/$name == domid
>   /local/by-uuid/$uuid == domid

This second one is a bit like the existing /vm/$uuid/domid.

I think I would go with:

  /local/by-name/$name == /local/domain/$domid
  /local/by-uuid/$uuid == /local/domain/$domid

though, so that you can just read it and use it without interpreting it.

>   /local/domain/$domid/name - same as existing
>   /local/domain/$domid/uuid - ? maybe unneeded, as it's available from Xen.

Is it available for other domains via xen, or just yourself?

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 14:48:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 14:48:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T21N3-0005vW-2F; Thu, 16 Aug 2012 14:48:25 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T21N2-0005vR-33
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 14:48:24 +0000
Received: from [85.158.138.51:23432] by server-3.bemta-3.messagelabs.com id
	FC/EF-13809-7380D205; Thu, 16 Aug 2012 14:48:23 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-13.tower-174.messagelabs.com!1345128501!9954665!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkyODE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 723 invoked from network); 16 Aug 2012 14:48:22 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-13.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 14:48:22 -0000
X-IronPort-AV: E=Sophos;i="4.77,778,1336348800"; d="scan'208";a="14043273"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 14:48:21 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 16 Aug 2012 15:48:21 +0100
Date: Thu, 16 Aug 2012 15:48:04 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Message-ID: <alpine.DEB.2.02.1208161523420.4850@kaball.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Content-Type: multipart/mixed;
	boundary="1342847746-1591267471-1345128021=:4850"
Content-ID: <alpine.DEB.2.02.1208161541090.4850@kaball.uk.xensource.com>
Cc: "Tim Deegan \(3P\)" <Tim.Deegan@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH v3 0/6] ARM hypercall ABI: 64 bit ready
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--1342847746-1591267471-1345128021=:4850
Content-Type: text/plain; charset="US-ASCII"
Content-ID: <alpine.DEB.2.02.1208161541091.4850@kaball.uk.xensource.com>

Hi all,
this patch series makes the necessary changes to make sure that the
current ARM hypercall ABI can be used as-is on 64 bit ARM platforms:

- it defines xen_ulong_t as uint64_t on ARM;
- it introduces a new macro to handle guest pointers, called
XEN_GUEST_HANDLE_PARAM (4 bytes on aarch32, and will be 8 bytes on
aarch64);
- it replaces all occurrences of XEN_GUEST_HANDLE in hypercall
parameters with XEN_GUEST_HANDLE_PARAM.


On x86 and ia64 things should stay exactly the same.

On ARM, all unsigned longs and guest pointers that are members of a
struct become 8 bytes in size (on both aarch32 and aarch64).
However, guest pointers passed as hypercall arguments in registers
remain 4 bytes on aarch32 and are 8 bytes on aarch64.

In this version of the patch series I have introduced conversion macros
to convert a XEN_GUEST_HANDLE_PARAM into a XEN_GUEST_HANDLE and vice
versa. Most of the problematic cases come from xen/arch/x86 code; in
order to spot them I wrote a simple debug patch that changes the
definition of XEN_GUEST_HANDLE_PARAM to be different from
XEN_GUEST_HANDLE on x86 too. I am attaching the debug patch to this
email.



It is based on Ian's arm-for-4.3 branch. 


Changes in v3:
- default all the guest_handle_* conversion macros to
  XEN_GUEST_HANDLE_PARAM as return type;
- add two new guest_handle_to_param and guest_handle_from_param macros
  to do conversions.

Changes in v2:

- do not use an anonymous union in struct xen_add_to_physmap; 
- do not replace the unsigned long in x86 specific calls;
- do not replace the unsigned long in multicall_entry;
- add missing include "xen.h" in version.h;
- use proper printf flag for xen_ulong_t in python/xen/lowlevel/xc/xc;
- add two missing #define _XEN_GUEST_HANDLE_PARAM for the compilation of
the compat code;
- add a patch to limit the maximum number of extents handled by
do_memory_op;
- remove the patch "introduce __lshrdi3 and __aeabi_llsr" that is
already in the for-4.3 branch.



Stefano Stabellini (6):
      xen: improve changes to xen_add_to_physmap
      xen: xen_ulong_t substitution
      xen: change the limit of nr_extents to UINT_MAX >> MEMOP_EXTENT_SHIFT
      xen: introduce XEN_GUEST_HANDLE_PARAM
      xen: replace XEN_GUEST_HANDLE with XEN_GUEST_HANDLE_PARAM when appropriate
      xen: more substitutions

 tools/firmware/hvmloader/pci.c           |    2 +-
 tools/python/xen/lowlevel/xc/xc.c        |    2 +-
 xen/arch/arm/domain.c                    |    2 +-
 xen/arch/arm/domctl.c                    |    2 +-
 xen/arch/arm/hvm.c                       |    2 +-
 xen/arch/arm/mm.c                        |    4 +-
 xen/arch/arm/physdev.c                   |    2 +-
 xen/arch/arm/sysctl.c                    |    2 +-
 xen/arch/x86/compat.c                    |    2 +-
 xen/arch/x86/cpu/mcheck/mce.c            |    2 +-
 xen/arch/x86/domain.c                    |    2 +-
 xen/arch/x86/domctl.c                    |    2 +-
 xen/arch/x86/efi/runtime.c               |    2 +-
 xen/arch/x86/hvm/hvm.c                   |   26 +++++++-------
 xen/arch/x86/microcode.c                 |    2 +-
 xen/arch/x86/mm.c                        |   36 ++++++++++++--------
 xen/arch/x86/mm/hap/hap.c                |    2 +-
 xen/arch/x86/mm/mem_event.c              |    2 +-
 xen/arch/x86/mm/paging.c                 |    2 +-
 xen/arch/x86/mm/shadow/common.c          |    2 +-
 xen/arch/x86/oprofile/backtrace.c        |    4 ++-
 xen/arch/x86/oprofile/xenoprof.c         |    6 ++--
 xen/arch/x86/physdev.c                   |    2 +-
 xen/arch/x86/platform_hypercall.c        |   10 ++++--
 xen/arch/x86/sysctl.c                    |    2 +-
 xen/arch/x86/traps.c                     |    2 +-
 xen/arch/x86/x86_32/mm.c                 |    2 +-
 xen/arch/x86/x86_32/traps.c              |    2 +-
 xen/arch/x86/x86_64/compat/mm.c          |   16 ++++++---
 xen/arch/x86/x86_64/cpu_idle.c           |    4 ++-
 xen/arch/x86/x86_64/cpufreq.c            |    4 ++-
 xen/arch/x86/x86_64/domain.c             |    2 +-
 xen/arch/x86/x86_64/mm.c                 |    2 +-
 xen/arch/x86/x86_64/platform_hypercall.c |    1 +
 xen/arch/x86/x86_64/traps.c              |    2 +-
 xen/common/compat/domain.c               |    2 +-
 xen/common/compat/grant_table.c          |    8 ++--
 xen/common/compat/memory.c               |    4 +-
 xen/common/compat/multicall.c            |    1 +
 xen/common/domain.c                      |    2 +-
 xen/common/domctl.c                      |    2 +-
 xen/common/event_channel.c               |    2 +-
 xen/common/grant_table.c                 |   36 ++++++++++----------
 xen/common/kernel.c                      |    4 +-
 xen/common/kexec.c                       |   16 ++++----
 xen/common/memory.c                      |    6 ++--
 xen/common/multicall.c                   |    2 +-
 xen/common/schedule.c                    |    2 +-
 xen/common/sysctl.c                      |    2 +-
 xen/common/xenoprof.c                    |    8 ++--
 xen/drivers/acpi/pmstat.c                |    2 +-
 xen/drivers/char/console.c               |    6 ++--
 xen/drivers/passthrough/iommu.c          |    2 +-
 xen/include/asm-arm/guest_access.h       |   19 +++++++++--
 xen/include/asm-arm/hypercall.h          |    2 +-
 xen/include/asm-arm/mm.h                 |    2 +-
 xen/include/asm-x86/guest_access.h       |   19 +++++++++--
 xen/include/asm-x86/hap.h                |    2 +-
 xen/include/asm-x86/hypercall.h          |   24 +++++++-------
 xen/include/asm-x86/mem_event.h          |    2 +-
 xen/include/asm-x86/mm.h                 |    8 ++--
 xen/include/asm-x86/paging.h             |    2 +-
 xen/include/asm-x86/processor.h          |    2 +-
 xen/include/asm-x86/shadow.h             |    2 +-
 xen/include/asm-x86/xenoprof.h           |    6 ++--
 xen/include/public/arch-arm.h            |   30 +++++++++++++----
 xen/include/public/arch-ia64.h           |    9 +++++
 xen/include/public/arch-x86/xen.h        |    9 +++++
 xen/include/public/memory.h              |   11 ++++--
 xen/include/public/version.h             |    4 ++-
 xen/include/xen/acpi.h                   |    4 +-
 xen/include/xen/hypercall.h              |   52 +++++++++++++++---------------
 xen/include/xen/iommu.h                  |    2 +-
 xen/include/xen/tmem_xen.h               |    2 +-
 xen/include/xsm/xsm.h                    |    4 +-
 xen/xsm/dummy.c                          |    2 +-
 xen/xsm/flask/flask_op.c                 |    4 +-
 xen/xsm/flask/hooks.c                    |    2 +-
 xen/xsm/xsm_core.c                       |    2 +-
 79 files changed, 292 insertions(+), 203 deletions(-)


Cheers,

Stefano
--1342847746-1591267471-1345128021=:4850
Content-Type: text/plain; charset="US-ASCII"; name="debug"
Content-Transfer-Encoding: BASE64
Content-ID: <alpine.DEB.2.02.1208161540210.4850@kaball.uk.xensource.com>
Content-Description: 
Content-Disposition: attachment; filename="debug"

ZGlmZiAtLWdpdCBhL3hlbi9pbmNsdWRlL3B1YmxpYy9hcmNoLXg4Ni94ZW4u
aCBiL3hlbi9pbmNsdWRlL3B1YmxpYy9hcmNoLXg4Ni94ZW4uaA0KaW5kZXgg
MGUxMDI2MC4uMDhhNzg4ZSAxMDA2NDQNCi0tLSBhL3hlbi9pbmNsdWRlL3B1
YmxpYy9hcmNoLXg4Ni94ZW4uaA0KKysrIGIveGVuL2luY2x1ZGUvcHVibGlj
L2FyY2gteDg2L3hlbi5oDQpAQCAtMzIsNyArMzIsOCBAQA0KIC8qIFN0cnVj
dHVyYWwgZ3Vlc3QgaGFuZGxlcyBpbnRyb2R1Y2VkIGluIDB4MDAwMzAyMDEu
ICovDQogI2lmIF9fWEVOX0lOVEVSRkFDRV9WRVJTSU9OX18gPj0gMHgwMDAz
MDIwMQ0KICNkZWZpbmUgX19fREVGSU5FX1hFTl9HVUVTVF9IQU5ETEUobmFt
ZSwgdHlwZSkgXA0KLSAgICB0eXBlZGVmIHN0cnVjdCB7IHR5cGUgKnA7IH0g
X19ndWVzdF9oYW5kbGVfICMjIG5hbWUNCisgICAgdHlwZWRlZiBzdHJ1Y3Qg
eyB0eXBlICpwOyB9IF9fZ3Vlc3RfaGFuZGxlXyAjIyBuYW1lOyBcDQorICAg
IHR5cGVkZWYgc3RydWN0IHsgdHlwZSAqcDsgfSBfX2d1ZXN0X2hhbmRsZV9w
YXJhbV8gIyMgbmFtZQ0KICNlbHNlDQogI2RlZmluZSBfX19ERUZJTkVfWEVO
X0dVRVNUX0hBTkRMRShuYW1lLCB0eXBlKSBcDQogICAgIHR5cGVkZWYgdHlw
ZSAqIF9fZ3Vlc3RfaGFuZGxlXyAjIyBuYW1lDQpAQCAtNTIsNyArNTMsNyBA
QA0KICNkZWZpbmUgREVGSU5FX1hFTl9HVUVTVF9IQU5ETEUobmFtZSkgICBf
X0RFRklORV9YRU5fR1VFU1RfSEFORExFKG5hbWUsIG5hbWUpDQogI2RlZmlu
ZSBfX1hFTl9HVUVTVF9IQU5ETEUobmFtZSkgICAgICAgIF9fZ3Vlc3RfaGFu
ZGxlXyAjIyBuYW1lDQogI2RlZmluZSBYRU5fR1VFU1RfSEFORExFKG5hbWUp
ICAgICAgICAgIF9fWEVOX0dVRVNUX0hBTkRMRShuYW1lKQ0KLSNkZWZpbmUg
WEVOX0dVRVNUX0hBTkRMRV9QQVJBTShuYW1lKSAgICBYRU5fR1VFU1RfSEFO
RExFKG5hbWUpDQorI2RlZmluZSBYRU5fR1VFU1RfSEFORExFX1BBUkFNKG5h
bWUpICAgIF9fZ3Vlc3RfaGFuZGxlX3BhcmFtXyAjIyBuYW1lDQogI2RlZmlu
ZSBzZXRfeGVuX2d1ZXN0X2hhbmRsZV9yYXcoaG5kLCB2YWwpICBkbyB7ICho
bmQpLnAgPSB2YWw7IH0gd2hpbGUgKDApDQogI2lmZGVmIF9fWEVOX1RPT0xT
X18NCiAjZGVmaW5lIGdldF94ZW5fZ3Vlc3RfaGFuZGxlKHZhbCwgaG5kKSAg
ZG8geyB2YWwgPSAoaG5kKS5wOyB9IHdoaWxlICgwKQ0K

--1342847746-1591267471-1345128021=:4850
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--1342847746-1591267471-1345128021=:4850--


From xen-devel-bounces@lists.xen.org Thu Aug 16 14:50:39 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 14:50:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T21P1-00060a-JG; Thu, 16 Aug 2012 14:50:27 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T21Oz-00060O-IY
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 14:50:25 +0000
Received: from [85.158.139.83:16419] by server-3.bemta-5.messagelabs.com id
	CA/ED-27237-0B80D205; Thu, 16 Aug 2012 14:50:24 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-7.tower-182.messagelabs.com!1345128622!24543670!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzE3OTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25213 invoked from network); 16 Aug 2012 14:50:24 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 14:50:24 -0000
X-IronPort-AV: E=Sophos;i="4.77,778,1336363200"; d="scan'208";a="205378341"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 10:50:21 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 16 Aug 2012 10:50:21 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1T21Op-0006Vh-S3;
	Thu, 16 Aug 2012 15:50:15 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Thu, 16 Aug 2012 15:50:09 +0100
Message-ID: <1345128612-10323-3-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208161523420.4850@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208161523420.4850@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: tim@xen.org, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH v3 3/6] xen: change the limit of nr_extents to
	UINT_MAX >> MEMOP_EXTENT_SHIFT
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Currently do_memory_op has a different maximum limit for nr_extents on
32-bit and 64-bit builds.
Change the limit to UINT_MAX >> MEMOP_EXTENT_SHIFT, so that it is the
same in both cases.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/common/memory.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/xen/common/memory.c b/xen/common/memory.c
index 5d64cb6..7e58cc4 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -553,7 +553,7 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE(void) arg)
             return start_extent;
 
         /* Is size too large for us to encode a continuation? */
-        if ( reservation.nr_extents > (ULONG_MAX >> MEMOP_EXTENT_SHIFT) )
+        if ( reservation.nr_extents > (UINT_MAX >> MEMOP_EXTENT_SHIFT) )
             return start_extent;
 
         if ( unlikely(start_extent >= reservation.nr_extents) )
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 14:50:39 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 14:50:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T21P4-00061i-Uz; Thu, 16 Aug 2012 14:50:30 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T21P2-00060n-UF
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 14:50:29 +0000
Received: from [85.158.143.35:13502] by server-1.bemta-4.messagelabs.com id
	58/BA-07754-4B80D205; Thu, 16 Aug 2012 14:50:28 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-10.tower-21.messagelabs.com!1345128624!10863913!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzE3OTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7314 invoked from network); 16 Aug 2012 14:50:26 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 14:50:26 -0000
X-IronPort-AV: E=Sophos;i="4.77,778,1336363200"; d="scan'208";a="205378344"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 10:50:21 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 16 Aug 2012 10:50:21 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1T21Op-0006Vh-Uw;
	Thu, 16 Aug 2012 15:50:15 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Thu, 16 Aug 2012 15:50:12 +0100
Message-ID: <1345128612-10323-6-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208161523420.4850@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208161523420.4850@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: tim@xen.org, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH v3 6/6] xen: more substitutions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch makes further substitutions, less obvious than the ones in
the previous patch.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/arch/x86/mm.c                        |   12 +++++++++---
 xen/arch/x86/oprofile/backtrace.c        |    4 +++-
 xen/arch/x86/platform_hypercall.c        |    8 ++++++--
 xen/arch/x86/x86_64/cpu_idle.c           |    4 +++-
 xen/arch/x86/x86_64/cpufreq.c            |    4 +++-
 xen/arch/x86/x86_64/platform_hypercall.c |    1 +
 xen/common/compat/multicall.c            |    1 +
 7 files changed, 26 insertions(+), 8 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 4d72700..088db11 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -3198,7 +3198,9 @@ int do_mmuext_op(
         {
             cpumask_t pmask;
 
-            if ( unlikely(vcpumask_to_pcpumask(d, op.arg2.vcpumask, &pmask)) )
+            if ( unlikely(vcpumask_to_pcpumask(d,
+                            guest_handle_to_param(op.arg2.vcpumask, const_void),
+                            &pmask)) )
             {
                 okay = 0;
                 break;
@@ -4484,6 +4486,7 @@ static int handle_iomem_range(unsigned long s, unsigned long e, void *p)
     if ( s > ctxt->s )
     {
         e820entry_t ent;
+        XEN_GUEST_HANDLE_PARAM(e820entry_t) buffer_t;
         XEN_GUEST_HANDLE(e820entry_t) buffer;
 
         if ( ctxt->n + 1 >= ctxt->map.nr_entries )
@@ -4491,7 +4494,8 @@ static int handle_iomem_range(unsigned long s, unsigned long e, void *p)
         ent.addr = (uint64_t)ctxt->s << PAGE_SHIFT;
         ent.size = (uint64_t)(s - ctxt->s) << PAGE_SHIFT;
         ent.type = E820_RESERVED;
-        buffer = guest_handle_cast(ctxt->map.buffer, e820entry_t);
+        buffer_t = guest_handle_cast(ctxt->map.buffer, e820entry_t);
+        buffer = guest_handle_from_param(buffer_t, e820entry_t);
         if ( __copy_to_guest_offset(buffer, ctxt->n, &ent, 1) )
             return -EFAULT;
         ctxt->n++;
@@ -4790,6 +4794,7 @@ long arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg)
     {
         struct memory_map_context ctxt;
         XEN_GUEST_HANDLE(e820entry_t) buffer;
+        XEN_GUEST_HANDLE_PARAM(e820entry_t) buffer_t;
         unsigned int i;
 
         if ( !IS_PRIV(current->domain) )
@@ -4804,7 +4809,8 @@ long arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg)
         if ( ctxt.map.nr_entries < e820.nr_map + 1 )
             return -EINVAL;
 
-        buffer = guest_handle_cast(ctxt.map.buffer, e820entry_t);
+        buffer_t = guest_handle_cast(ctxt.map.buffer, e820entry_t);
+        buffer = guest_handle_from_param(buffer_t, e820entry_t);
         if ( !guest_handle_okay(buffer, ctxt.map.nr_entries) )
             return -EFAULT;
 
diff --git a/xen/arch/x86/oprofile/backtrace.c b/xen/arch/x86/oprofile/backtrace.c
index 33fd142..699cd28 100644
--- a/xen/arch/x86/oprofile/backtrace.c
+++ b/xen/arch/x86/oprofile/backtrace.c
@@ -80,8 +80,10 @@ dump_guest_backtrace(struct vcpu *vcpu, const struct frame_head *head,
     else
 #endif
     {
-        XEN_GUEST_HANDLE(const_frame_head_t) guest_head =
+        XEN_GUEST_HANDLE(const_frame_head_t) guest_head;
+        XEN_GUEST_HANDLE_PARAM(const_frame_head_t) guest_head_t =
             const_guest_handle_from_ptr(head, frame_head_t);
+        guest_head = guest_handle_from_param(guest_head_t, const_frame_head_t);
 
         /* Also check accessibility of one struct frame_head beyond */
         if (!guest_handle_okay(guest_head, 2))
diff --git a/xen/arch/x86/platform_hypercall.c b/xen/arch/x86/platform_hypercall.c
index a32e0a2..2994b12 100644
--- a/xen/arch/x86/platform_hypercall.c
+++ b/xen/arch/x86/platform_hypercall.c
@@ -185,7 +185,9 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PARAM(xen_platform_op_t) u_xenpf_op)
             }
         }
 
-        ret = microcode_update(data, op->u.microcode.length);
+        ret = microcode_update(
+                guest_handle_to_param(data, const_void),
+                op->u.microcode.length);
         spin_unlock(&vcpu_alloc_lock);
     }
     break;
@@ -448,7 +450,9 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PARAM(xen_platform_op_t) u_xenpf_op)
             XEN_GUEST_HANDLE(uint32) pdc;
 
             guest_from_compat_handle(pdc, op->u.set_pminfo.u.pdc);
-            ret = acpi_set_pdc_bits(op->u.set_pminfo.id, pdc);
+            ret = acpi_set_pdc_bits(
+                    op->u.set_pminfo.id,
+                    guest_handle_to_param(pdc, uint32));
         }
         break;
 
diff --git a/xen/arch/x86/x86_64/cpu_idle.c b/xen/arch/x86/x86_64/cpu_idle.c
index 3e7422f..1cdaf96 100644
--- a/xen/arch/x86/x86_64/cpu_idle.c
+++ b/xen/arch/x86/x86_64/cpu_idle.c
@@ -57,10 +57,12 @@ static int copy_from_compat_state(xen_processor_cx_t *xen_state,
 {
 #define XLAT_processor_cx_HNDL_dp(_d_, _s_) do { \
     XEN_GUEST_HANDLE(compat_processor_csd_t) dps; \
+    XEN_GUEST_HANDLE_PARAM(xen_processor_csd_t) dps_t; \
     if ( unlikely(!compat_handle_okay((_s_)->dp, (_s_)->dpcnt)) ) \
             return -EFAULT; \
     guest_from_compat_handle(dps, (_s_)->dp); \
-    (_d_)->dp = guest_handle_cast(dps, xen_processor_csd_t); \
+    dps_t = guest_handle_cast(dps, xen_processor_csd_t); \
+    (_d_)->dp = guest_handle_from_param(dps_t, xen_processor_csd_t); \
 } while (0)
     XLAT_processor_cx(xen_state, state);
 #undef XLAT_processor_cx_HNDL_dp
diff --git a/xen/arch/x86/x86_64/cpufreq.c b/xen/arch/x86/x86_64/cpufreq.c
index ce9864e..1956777 100644
--- a/xen/arch/x86/x86_64/cpufreq.c
+++ b/xen/arch/x86/x86_64/cpufreq.c
@@ -45,10 +45,12 @@ compat_set_px_pminfo(uint32_t cpu, struct compat_processor_performance *perf)
 
 #define XLAT_processor_performance_HNDL_states(_d_, _s_) do { \
     XEN_GUEST_HANDLE(compat_processor_px_t) states; \
+    XEN_GUEST_HANDLE_PARAM(xen_processor_px_t) states_t; \
     if ( unlikely(!compat_handle_okay((_s_)->states, (_s_)->state_count)) ) \
         return -EFAULT; \
     guest_from_compat_handle(states, (_s_)->states); \
-    (_d_)->states = guest_handle_cast(states, xen_processor_px_t); \
+    states_t = guest_handle_cast(states, xen_processor_px_t); \
+    (_d_)->states = guest_handle_from_param(states_t, xen_processor_px_t); \
 } while (0)
 
     XLAT_processor_performance(xen_perf, perf);
diff --git a/xen/arch/x86/x86_64/platform_hypercall.c b/xen/arch/x86/x86_64/platform_hypercall.c
index 188aa37..f577761 100644
--- a/xen/arch/x86/x86_64/platform_hypercall.c
+++ b/xen/arch/x86/x86_64/platform_hypercall.c
@@ -38,6 +38,7 @@ CHECK_pf_pcpu_version;
 
 #define COMPAT
 #define _XEN_GUEST_HANDLE(t) XEN_GUEST_HANDLE(t)
+#define _XEN_GUEST_HANDLE_PARAM(t) XEN_GUEST_HANDLE(t)
 typedef int ret_t;
 
 #include "../platform_hypercall.c"
diff --git a/xen/common/compat/multicall.c b/xen/common/compat/multicall.c
index 0eb1212..72db213 100644
--- a/xen/common/compat/multicall.c
+++ b/xen/common/compat/multicall.c
@@ -24,6 +24,7 @@ DEFINE_XEN_GUEST_HANDLE(multicall_entry_compat_t);
 #define call                 compat_call
 #define do_multicall(l, n)   compat_multicall(_##l, n)
 #define _XEN_GUEST_HANDLE(t) XEN_GUEST_HANDLE(t)
+#define _XEN_GUEST_HANDLE_PARAM(t) XEN_GUEST_HANDLE(t)
 
 #include "../multicall.c"
 
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 14:50:39 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 14:50:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T21P2-00060o-WE; Thu, 16 Aug 2012 14:50:29 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T21P0-00060T-OT
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 14:50:26 +0000
Received: from [85.158.139.83:45173] by server-5.bemta-5.messagelabs.com id
	77/1A-31019-1B80D205; Thu, 16 Aug 2012 14:50:25 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-7.tower-182.messagelabs.com!1345128622!24543670!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzE3OTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25296 invoked from network); 16 Aug 2012 14:50:25 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 14:50:25 -0000
X-IronPort-AV: E=Sophos;i="4.77,778,1336363200"; d="scan'208";a="205378342"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 10:50:21 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 16 Aug 2012 10:50:21 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1T21Op-0006Vh-QL;
	Thu, 16 Aug 2012 15:50:15 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Thu, 16 Aug 2012 15:50:08 +0100
Message-ID: <1345128612-10323-2-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208161523420.4850@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208161523420.4850@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: tim@xen.org, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH v3 2/6] xen: xen_ulong_t substitution
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

There is still an unwanted unsigned long in the Xen public interface:
replace it with xen_ulong_t.

Also typedef xen_ulong_t to uint64_t on ARM.

Changes in v2:

- do not replace the unsigned long in x86 specific calls;
- do not replace the unsigned long in multicall_entry;
- add missing include "xen.h" in version.h;
- use proper printf flag for xen_ulong_t.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 tools/python/xen/lowlevel/xc/xc.c |    2 +-
 xen/include/public/arch-arm.h     |    4 ++--
 xen/include/public/version.h      |    4 +++-
 3 files changed, 6 insertions(+), 4 deletions(-)

diff --git a/tools/python/xen/lowlevel/xc/xc.c b/tools/python/xen/lowlevel/xc/xc.c
index 7c89756..e220f68 100644
--- a/tools/python/xen/lowlevel/xc/xc.c
+++ b/tools/python/xen/lowlevel/xc/xc.c
@@ -1439,7 +1439,7 @@ static PyObject *pyxc_xeninfo(XcObject *self)
     if ( xc_version(self->xc_handle, XENVER_commandline, &xen_commandline) != 0 )
         return pyxc_error_to_exception(self->xc_handle);
 
-    snprintf(str, sizeof(str), "virt_start=0x%lx", p_parms.virt_start);
+    snprintf(str, sizeof(str), "virt_start=0x%"PRI_xen_ulong, p_parms.virt_start);
 
     xen_pagesize = xc_version(self->xc_handle, XENVER_pagesize, NULL);
     if (xen_pagesize < 0 )
diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
index 14ad0ab..2ae6548 100644
--- a/xen/include/public/arch-arm.h
+++ b/xen/include/public/arch-arm.h
@@ -122,8 +122,8 @@ typedef uint64_t xen_pfn_t;
 /* Only one. All other VCPUS must use VCPUOP_register_vcpu_info */
 #define XEN_LEGACY_MAX_VCPUS 1
 
-typedef uint32_t xen_ulong_t;
-#define PRI_xen_ulong PRIx32
+typedef uint64_t xen_ulong_t;
+#define PRI_xen_ulong PRIx64
 
 struct vcpu_guest_context {
 #define _VGCF_online                   0
diff --git a/xen/include/public/version.h b/xen/include/public/version.h
index 8742c2b..c7e6f8c 100644
--- a/xen/include/public/version.h
+++ b/xen/include/public/version.h
@@ -28,6 +28,8 @@
 #ifndef __XEN_PUBLIC_VERSION_H__
 #define __XEN_PUBLIC_VERSION_H__
 
+#include "xen.h"
+
 /* NB. All ops return zero on success, except XENVER_{version,pagesize} */
 
 /* arg == NULL; returns major:minor (16:16). */
@@ -58,7 +60,7 @@ typedef char xen_changeset_info_t[64];
 
 #define XENVER_platform_parameters 5
 struct xen_platform_parameters {
-    unsigned long virt_start;
+    xen_ulong_t virt_start;
 };
 typedef struct xen_platform_parameters xen_platform_parameters_t;
 
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 14:50:39 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 14:50:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T21P2-00060o-WE; Thu, 16 Aug 2012 14:50:29 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T21P0-00060T-OT
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 14:50:26 +0000
Received: from [85.158.139.83:45173] by server-5.bemta-5.messagelabs.com id
	77/1A-31019-1B80D205; Thu, 16 Aug 2012 14:50:25 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-7.tower-182.messagelabs.com!1345128622!24543670!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzE3OTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25296 invoked from network); 16 Aug 2012 14:50:25 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 14:50:25 -0000
X-IronPort-AV: E=Sophos;i="4.77,778,1336363200"; d="scan'208";a="205378342"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 10:50:21 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 16 Aug 2012 10:50:21 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1T21Op-0006Vh-QL;
	Thu, 16 Aug 2012 15:50:15 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Thu, 16 Aug 2012 15:50:08 +0100
Message-ID: <1345128612-10323-2-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208161523420.4850@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208161523420.4850@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: tim@xen.org, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH v3 2/6] xen: xen_ulong_t substitution
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

There is still an unwanted unsigned long in the Xen public interface:
replace it with xen_ulong_t.

Also typedef xen_ulong_t to uint64_t on ARM.

Changes in v2:

- do not replace the unsigned long in x86 specific calls;
- do not replace the unsigned long in multicall_entry;
- add missing include "xen.h" in version.h;
- use proper printf flag for xen_ulong_t.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 tools/python/xen/lowlevel/xc/xc.c |    2 +-
 xen/include/public/arch-arm.h     |    4 ++--
 xen/include/public/version.h      |    4 +++-
 3 files changed, 6 insertions(+), 4 deletions(-)

diff --git a/tools/python/xen/lowlevel/xc/xc.c b/tools/python/xen/lowlevel/xc/xc.c
index 7c89756..e220f68 100644
--- a/tools/python/xen/lowlevel/xc/xc.c
+++ b/tools/python/xen/lowlevel/xc/xc.c
@@ -1439,7 +1439,7 @@ static PyObject *pyxc_xeninfo(XcObject *self)
     if ( xc_version(self->xc_handle, XENVER_commandline, &xen_commandline) != 0 )
         return pyxc_error_to_exception(self->xc_handle);
 
-    snprintf(str, sizeof(str), "virt_start=0x%lx", p_parms.virt_start);
+    snprintf(str, sizeof(str), "virt_start=0x%"PRI_xen_ulong, p_parms.virt_start);
 
     xen_pagesize = xc_version(self->xc_handle, XENVER_pagesize, NULL);
     if (xen_pagesize < 0 )
diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
index 14ad0ab..2ae6548 100644
--- a/xen/include/public/arch-arm.h
+++ b/xen/include/public/arch-arm.h
@@ -122,8 +122,8 @@ typedef uint64_t xen_pfn_t;
 /* Only one. All other VCPUS must use VCPUOP_register_vcpu_info */
 #define XEN_LEGACY_MAX_VCPUS 1
 
-typedef uint32_t xen_ulong_t;
-#define PRI_xen_ulong PRIx32
+typedef uint64_t xen_ulong_t;
+#define PRI_xen_ulong PRIx64
 
 struct vcpu_guest_context {
 #define _VGCF_online                   0
diff --git a/xen/include/public/version.h b/xen/include/public/version.h
index 8742c2b..c7e6f8c 100644
--- a/xen/include/public/version.h
+++ b/xen/include/public/version.h
@@ -28,6 +28,8 @@
 #ifndef __XEN_PUBLIC_VERSION_H__
 #define __XEN_PUBLIC_VERSION_H__
 
+#include "xen.h"
+
 /* NB. All ops return zero on success, except XENVER_{version,pagesize} */
 
 /* arg == NULL; returns major:minor (16:16). */
@@ -58,7 +60,7 @@ typedef char xen_changeset_info_t[64];
 
 #define XENVER_platform_parameters 5
 struct xen_platform_parameters {
-    unsigned long virt_start;
+    xen_ulong_t virt_start;
 };
 typedef struct xen_platform_parameters xen_platform_parameters_t;
 
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 14:50:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 14:50:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T21P3-000613-Cn; Thu, 16 Aug 2012 14:50:29 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T21P1-00060Z-OS
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 14:50:27 +0000
Received: from [85.158.139.83:45260] by server-2.bemta-5.messagelabs.com id
	36/88-10142-2B80D205; Thu, 16 Aug 2012 14:50:26 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-7.tower-182.messagelabs.com!1345128622!24543670!3
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzE3OTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25383 invoked from network); 16 Aug 2012 14:50:25 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 14:50:25 -0000
X-IronPort-AV: E=Sophos;i="4.77,778,1336363200"; d="scan'208";a="205378343"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 10:50:21 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 16 Aug 2012 10:50:21 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1T21Op-0006Vh-Pk;
	Thu, 16 Aug 2012 15:50:15 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Thu, 16 Aug 2012 15:50:07 +0100
Message-ID: <1345128612-10323-1-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208161523420.4850@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208161523420.4850@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: tim@xen.org, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH v3 1/6] xen: improve changes to
	xen_add_to_physmap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is an incremental patch on top of
c0bc926083b5987a3e9944eec2c12ad0580100e2: in order to retain binary
compatibility, it is better to introduce foreign_domid as part of a
union containing both size and foreign_domid.

Changes in v2:

- do not use an anonymous union.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 tools/firmware/hvmloader/pci.c  |    2 +-
 xen/arch/arm/mm.c               |    2 +-
 xen/arch/x86/mm.c               |   10 +++++-----
 xen/arch/x86/x86_64/compat/mm.c |    6 ++++++
 xen/include/public/memory.h     |   11 +++++++----
 5 files changed, 20 insertions(+), 11 deletions(-)

diff --git a/tools/firmware/hvmloader/pci.c b/tools/firmware/hvmloader/pci.c
index fd56e50..6375989 100644
--- a/tools/firmware/hvmloader/pci.c
+++ b/tools/firmware/hvmloader/pci.c
@@ -212,7 +212,7 @@ void pci_setup(void)
         xatp.space = XENMAPSPACE_gmfn_range;
         xatp.idx   = hvm_info->low_mem_pgend;
         xatp.gpfn  = hvm_info->high_mem_pgend;
-        xatp.size  = nr_pages;
+        xatp.u.size  = nr_pages;
         if ( hypercall_memory_op(XENMEM_add_to_physmap, &xatp) != 0 )
             BUG();
         hvm_info->high_mem_pgend += nr_pages;
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 08bc55b..2400e1c 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -506,7 +506,7 @@ static int xenmem_add_to_physmap_once(
         paddr_t maddr;
         struct domain *od;
 
-        rc = rcu_lock_target_domain_by_id(xatp->foreign_domid, &od);
+        rc = rcu_lock_target_domain_by_id(xatp->u.foreign_domid, &od);
         if ( rc < 0 )
             return rc;
         maddr = p2m_lookup(od, xatp->idx << PAGE_SHIFT);
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 9f63974..f5c704e 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4630,7 +4630,7 @@ static int xenmem_add_to_physmap(struct domain *d,
             this_cpu(iommu_dont_flush_iotlb) = 1;
 
         start_xatp = *xatp;
-        while ( xatp->size > 0 )
+        while ( xatp->u.size > 0 )
         {
             rc = xenmem_add_to_physmap_once(d, xatp);
             if ( rc < 0 )
@@ -4638,10 +4638,10 @@ static int xenmem_add_to_physmap(struct domain *d,
 
             xatp->idx++;
             xatp->gpfn++;
-            xatp->size--;
+            xatp->u.size--;
 
             /* Check for continuation if it's not the last iteration */
-            if ( xatp->size > 0 && hypercall_preempt_check() )
+            if ( xatp->u.size > 0 && hypercall_preempt_check() )
             {
                 rc = -EAGAIN;
                 break;
@@ -4651,8 +4651,8 @@ static int xenmem_add_to_physmap(struct domain *d,
         if ( need_iommu(d) )
         {
             this_cpu(iommu_dont_flush_iotlb) = 0;
-            iommu_iotlb_flush(d, start_xatp.idx, start_xatp.size - xatp->size);
-            iommu_iotlb_flush(d, start_xatp.gpfn, start_xatp.size - xatp->size);
+            iommu_iotlb_flush(d, start_xatp.idx, start_xatp.u.size - xatp->u.size);
+            iommu_iotlb_flush(d, start_xatp.gpfn, start_xatp.u.size - xatp->u.size);
         }
 
         return rc;
diff --git a/xen/arch/x86/x86_64/compat/mm.c b/xen/arch/x86/x86_64/compat/mm.c
index f497503..5bcd2fd 100644
--- a/xen/arch/x86/x86_64/compat/mm.c
+++ b/xen/arch/x86/x86_64/compat/mm.c
@@ -59,10 +59,16 @@ int compat_arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
     {
         struct compat_add_to_physmap cmp;
         struct xen_add_to_physmap *nat = COMPAT_ARG_XLAT_VIRT_BASE;
+        enum XLAT_add_to_physmap_u u;
 
         if ( copy_from_guest(&cmp, arg, 1) )
             return -EFAULT;
 
+        if ( cmp.space == XENMAPSPACE_gmfn_range )
+            u = XLAT_add_to_physmap_u_size;
+        if ( cmp.space == XENMAPSPACE_gmfn_foreign )
+            u = XLAT_add_to_physmap_u_foreign_domid;
+
         XLAT_add_to_physmap(nat, &cmp);
         rc = arch_memory_op(op, guest_handle_from_ptr(nat, void));
 
diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
index b2adfbe..7d4ee26 100644
--- a/xen/include/public/memory.h
+++ b/xen/include/public/memory.h
@@ -208,8 +208,12 @@ struct xen_add_to_physmap {
     /* Which domain to change the mapping for. */
     domid_t domid;
 
-    /* Number of pages to go through for gmfn_range */
-    uint16_t    size;
+    union {
+        /* Number of pages to go through for gmfn_range */
+        uint16_t    size;
+        /* IFF gmfn_foreign */
+        domid_t foreign_domid;
+    } u;
 
     /* Source mapping space. */
 #define XENMAPSPACE_shared_info  0 /* shared info page */
@@ -217,8 +221,7 @@ struct xen_add_to_physmap {
 #define XENMAPSPACE_gmfn         2 /* GMFN */
 #define XENMAPSPACE_gmfn_range   3 /* GMFN range */
 #define XENMAPSPACE_gmfn_foreign 4 /* GMFN from another guest */
-    uint16_t space;
-    domid_t foreign_domid; /* IFF gmfn_foreign */
+    unsigned int space;
 
 #define XENMAPIDX_grant_table_status 0x80000000
 
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 14:50:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 14:50:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T21P5-00062A-OH; Thu, 16 Aug 2012 14:50:31 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T21P4-000617-28
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 14:50:30 +0000
Received: from [85.158.138.51:40672] by server-2.bemta-3.messagelabs.com id
	E2/E7-17748-5B80D205; Thu, 16 Aug 2012 14:50:29 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-3.tower-174.messagelabs.com!1345128623!20556109!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjQyODA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12559 invoked from network); 16 Aug 2012 14:50:24 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 14:50:24 -0000
X-IronPort-AV: E=Sophos;i="4.77,778,1336363200"; d="scan'208";a="34861514"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 10:50:22 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 16 Aug 2012 10:50:21 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1T21Op-0006Vh-UH;
	Thu, 16 Aug 2012 15:50:15 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Thu, 16 Aug 2012 15:50:11 +0100
Message-ID: <1345128612-10323-5-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208161523420.4850@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208161523420.4850@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: tim@xen.org, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH v3 5/6] xen: replace XEN_GUEST_HANDLE with
	XEN_GUEST_HANDLE_PARAM when appropriate
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Note: these changes don't make any difference on x86 and ia64.

Replace XEN_GUEST_HANDLE with XEN_GUEST_HANDLE_PARAM when it is used as
a hypercall argument.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/arch/arm/domain.c             |    2 +-
 xen/arch/arm/domctl.c             |    2 +-
 xen/arch/arm/hvm.c                |    2 +-
 xen/arch/arm/mm.c                 |    2 +-
 xen/arch/arm/physdev.c            |    2 +-
 xen/arch/arm/sysctl.c             |    2 +-
 xen/arch/x86/compat.c             |    2 +-
 xen/arch/x86/cpu/mcheck/mce.c     |    2 +-
 xen/arch/x86/domain.c             |    2 +-
 xen/arch/x86/domctl.c             |    2 +-
 xen/arch/x86/efi/runtime.c        |    2 +-
 xen/arch/x86/hvm/hvm.c            |   26 +++++++++---------
 xen/arch/x86/microcode.c          |    2 +-
 xen/arch/x86/mm.c                 |   14 +++++-----
 xen/arch/x86/mm/hap/hap.c         |    2 +-
 xen/arch/x86/mm/mem_event.c       |    2 +-
 xen/arch/x86/mm/paging.c          |    2 +-
 xen/arch/x86/mm/shadow/common.c   |    2 +-
 xen/arch/x86/oprofile/xenoprof.c  |    6 ++--
 xen/arch/x86/physdev.c            |    2 +-
 xen/arch/x86/platform_hypercall.c |    2 +-
 xen/arch/x86/sysctl.c             |    2 +-
 xen/arch/x86/traps.c              |    2 +-
 xen/arch/x86/x86_32/mm.c          |    2 +-
 xen/arch/x86/x86_32/traps.c       |    2 +-
 xen/arch/x86/x86_64/compat/mm.c   |   10 +++---
 xen/arch/x86/x86_64/domain.c      |    2 +-
 xen/arch/x86/x86_64/mm.c          |    2 +-
 xen/arch/x86/x86_64/traps.c       |    2 +-
 xen/common/compat/domain.c        |    2 +-
 xen/common/compat/grant_table.c   |    8 +++---
 xen/common/compat/memory.c        |    4 +-
 xen/common/domain.c               |    2 +-
 xen/common/domctl.c               |    2 +-
 xen/common/event_channel.c        |    2 +-
 xen/common/grant_table.c          |   36 +++++++++++++-------------
 xen/common/kernel.c               |    4 +-
 xen/common/kexec.c                |   16 +++++-----
 xen/common/memory.c               |    4 +-
 xen/common/multicall.c            |    2 +-
 xen/common/schedule.c             |    2 +-
 xen/common/sysctl.c               |    2 +-
 xen/common/xenoprof.c             |    8 +++---
 xen/drivers/acpi/pmstat.c         |    2 +-
 xen/drivers/char/console.c        |    6 ++--
 xen/drivers/passthrough/iommu.c   |    2 +-
 xen/include/asm-arm/hypercall.h   |    2 +-
 xen/include/asm-arm/mm.h          |    2 +-
 xen/include/asm-x86/hap.h         |    2 +-
 xen/include/asm-x86/hypercall.h   |   24 ++++++++--------
 xen/include/asm-x86/mem_event.h   |    2 +-
 xen/include/asm-x86/mm.h          |    8 +++---
 xen/include/asm-x86/paging.h      |    2 +-
 xen/include/asm-x86/processor.h   |    2 +-
 xen/include/asm-x86/shadow.h      |    2 +-
 xen/include/asm-x86/xenoprof.h    |    6 ++--
 xen/include/xen/acpi.h            |    4 +-
 xen/include/xen/hypercall.h       |   52 ++++++++++++++++++------------------
 xen/include/xen/iommu.h           |    2 +-
 xen/include/xen/tmem_xen.h        |    2 +-
 xen/include/xsm/xsm.h             |    4 +-
 xen/xsm/dummy.c                   |    2 +-
 xen/xsm/flask/flask_op.c          |    4 +-
 xen/xsm/flask/hooks.c             |    2 +-
 xen/xsm/xsm_core.c                |    2 +-
 65 files changed, 168 insertions(+), 168 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index ee58d68..07b50e2 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -515,7 +515,7 @@ void arch_dump_domain_info(struct domain *d)
 {
 }
 
-long arch_do_vcpu_op(int cmd, struct vcpu *v, XEN_GUEST_HANDLE(void) arg)
+long arch_do_vcpu_op(int cmd, struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     return -ENOSYS;
 }
diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
index 1a5f79f..cf16791 100644
--- a/xen/arch/arm/domctl.c
+++ b/xen/arch/arm/domctl.c
@@ -11,7 +11,7 @@
 #include <public/domctl.h>
 
 long arch_do_domctl(struct xen_domctl *domctl,
-                    XEN_GUEST_HANDLE(xen_domctl_t) u_domctl)
+                    XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
     return -ENOSYS;
 }
diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
index c11378d..40f519e 100644
--- a/xen/arch/arm/hvm.c
+++ b/xen/arch/arm/hvm.c
@@ -11,7 +11,7 @@
 
 #include <asm/hypercall.h>
 
-long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE(void) arg)
+long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
 
 {
     long rc = 0;
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 2400e1c..3e8b6cc 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -541,7 +541,7 @@ static int xenmem_add_to_physmap(struct domain *d,
     return xenmem_add_to_physmap_once(d, xatp);
 }
 
-long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
+long arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     int rc;
 
diff --git a/xen/arch/arm/physdev.c b/xen/arch/arm/physdev.c
index bcf4337..0801e8c 100644
--- a/xen/arch/arm/physdev.c
+++ b/xen/arch/arm/physdev.c
@@ -11,7 +11,7 @@
 #include <asm/hypercall.h>
 
 
-int do_physdev_op(int cmd, XEN_GUEST_HANDLE(void) arg)
+int do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     printk("%s %d cmd=%d: not implemented yet\n", __func__, __LINE__, cmd);
     return -ENOSYS;
diff --git a/xen/arch/arm/sysctl.c b/xen/arch/arm/sysctl.c
index e8e1c0d..a286abe 100644
--- a/xen/arch/arm/sysctl.c
+++ b/xen/arch/arm/sysctl.c
@@ -13,7 +13,7 @@
 #include <public/sysctl.h>
 
 long arch_do_sysctl(struct xen_sysctl *sysctl,
-                    XEN_GUEST_HANDLE(xen_sysctl_t) u_sysctl)
+                    XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
 {
     return -ENOSYS;
 }
diff --git a/xen/arch/x86/compat.c b/xen/arch/x86/compat.c
index a4fda06..2d05867 100644
--- a/xen/arch/x86/compat.c
+++ b/xen/arch/x86/compat.c
@@ -27,7 +27,7 @@ ret_t do_physdev_op_compat(XEN_GUEST_HANDLE(physdev_op_t) uop)
 #ifndef COMPAT
 
 /* Legacy hypercall (as of 0x00030202). */
-long do_event_channel_op_compat(XEN_GUEST_HANDLE(evtchn_op_t) uop)
+long do_event_channel_op_compat(XEN_GUEST_HANDLE_PARAM(evtchn_op_t) uop)
 {
     struct evtchn_op op;
 
diff --git a/xen/arch/x86/cpu/mcheck/mce.c b/xen/arch/x86/cpu/mcheck/mce.c
index a89df6d..0f122b3 100644
--- a/xen/arch/x86/cpu/mcheck/mce.c
+++ b/xen/arch/x86/cpu/mcheck/mce.c
@@ -1357,7 +1357,7 @@ CHECK_mcinfo_recovery;
 #endif
 
 /* Machine Check Architecture Hypercall */
-long do_mca(XEN_GUEST_HANDLE(xen_mc_t) u_xen_mc)
+long do_mca(XEN_GUEST_HANDLE_PARAM(xen_mc_t) u_xen_mc)
 {
     long ret = 0;
     struct xen_mc curop, *op = &curop;
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 5bba4b9..13ff776 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1138,7 +1138,7 @@ map_vcpu_info(struct vcpu *v, unsigned long gfn, unsigned offset)
 
 long
 arch_do_vcpu_op(
-    int cmd, struct vcpu *v, XEN_GUEST_HANDLE(void) arg)
+    int cmd, struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     long rc = 0;
 
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 135ea6e..663bfe4 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -48,7 +48,7 @@ static int gdbsx_guest_mem_io(
 
 long arch_do_domctl(
     struct xen_domctl *domctl,
-    XEN_GUEST_HANDLE(xen_domctl_t) u_domctl)
+    XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
     long ret = 0;
 
diff --git a/xen/arch/x86/efi/runtime.c b/xen/arch/x86/efi/runtime.c
index 1dbe2db..b2ff495 100644
--- a/xen/arch/x86/efi/runtime.c
+++ b/xen/arch/x86/efi/runtime.c
@@ -184,7 +184,7 @@ int efi_get_info(uint32_t idx, union xenpf_efi_info *info)
     return 0;
 }
 
-static long gwstrlen(XEN_GUEST_HANDLE(CHAR16) str)
+static long gwstrlen(XEN_GUEST_HANDLE_PARAM(CHAR16) str)
 {
     unsigned long len;
 
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 7f8a025c..e2bf831 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -3047,14 +3047,14 @@ static int grant_table_op_is_allowed(unsigned int cmd)
 }
 
 static long hvm_grant_table_op(
-    unsigned int cmd, XEN_GUEST_HANDLE(void) uop, unsigned int count)
+    unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) uop, unsigned int count)
 {
     if ( !grant_table_op_is_allowed(cmd) )
         return -ENOSYS; /* all other commands need auditing */
     return do_grant_table_op(cmd, uop, count);
 }
 
-static long hvm_memory_op(int cmd, XEN_GUEST_HANDLE(void) arg)
+static long hvm_memory_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     long rc;
 
@@ -3072,7 +3072,7 @@ static long hvm_memory_op(int cmd, XEN_GUEST_HANDLE(void) arg)
     return do_memory_op(cmd, arg);
 }
 
-static long hvm_physdev_op(int cmd, XEN_GUEST_HANDLE(void) arg)
+static long hvm_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     switch ( cmd )
     {
@@ -3088,7 +3088,7 @@ static long hvm_physdev_op(int cmd, XEN_GUEST_HANDLE(void) arg)
 }
 
 static long hvm_vcpu_op(
-    int cmd, int vcpuid, XEN_GUEST_HANDLE(void) arg)
+    int cmd, int vcpuid, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     long rc;
 
@@ -3137,7 +3137,7 @@ static hvm_hypercall_t *hvm_hypercall32_table[NR_hypercalls] = {
 #else /* defined(__x86_64__) */
 
 static long hvm_grant_table_op_compat32(unsigned int cmd,
-                                        XEN_GUEST_HANDLE(void) uop,
+                                        XEN_GUEST_HANDLE_PARAM(void) uop,
                                         unsigned int count)
 {
     if ( !grant_table_op_is_allowed(cmd) )
@@ -3145,7 +3145,7 @@ static long hvm_grant_table_op_compat32(unsigned int cmd,
     return compat_grant_table_op(cmd, uop, count);
 }
 
-static long hvm_memory_op_compat32(int cmd, XEN_GUEST_HANDLE(void) arg)
+static long hvm_memory_op_compat32(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     int rc;
 
@@ -3164,7 +3164,7 @@ static long hvm_memory_op_compat32(int cmd, XEN_GUEST_HANDLE(void) arg)
 }
 
 static long hvm_vcpu_op_compat32(
-    int cmd, int vcpuid, XEN_GUEST_HANDLE(void) arg)
+    int cmd, int vcpuid, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     long rc;
 
@@ -3188,7 +3188,7 @@ static long hvm_vcpu_op_compat32(
 }
 
 static long hvm_physdev_op_compat32(
-    int cmd, XEN_GUEST_HANDLE(void) arg)
+    int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     switch ( cmd )
     {
@@ -3360,7 +3360,7 @@ void hvm_hypercall_page_initialise(struct domain *d,
 }
 
 static int hvmop_set_pci_intx_level(
-    XEN_GUEST_HANDLE(xen_hvm_set_pci_intx_level_t) uop)
+    XEN_GUEST_HANDLE_PARAM(xen_hvm_set_pci_intx_level_t) uop)
 {
     struct xen_hvm_set_pci_intx_level op;
     struct domain *d;
@@ -3525,7 +3525,7 @@ static void hvm_s3_resume(struct domain *d)
 }
 
 static int hvmop_set_isa_irq_level(
-    XEN_GUEST_HANDLE(xen_hvm_set_isa_irq_level_t) uop)
+    XEN_GUEST_HANDLE_PARAM(xen_hvm_set_isa_irq_level_t) uop)
 {
     struct xen_hvm_set_isa_irq_level op;
     struct domain *d;
@@ -3569,7 +3569,7 @@ static int hvmop_set_isa_irq_level(
 }
 
 static int hvmop_set_pci_link_route(
-    XEN_GUEST_HANDLE(xen_hvm_set_pci_link_route_t) uop)
+    XEN_GUEST_HANDLE_PARAM(xen_hvm_set_pci_link_route_t) uop)
 {
     struct xen_hvm_set_pci_link_route op;
     struct domain *d;
@@ -3602,7 +3602,7 @@ static int hvmop_set_pci_link_route(
 }
 
 static int hvmop_inject_msi(
-    XEN_GUEST_HANDLE(xen_hvm_inject_msi_t) uop)
+    XEN_GUEST_HANDLE_PARAM(xen_hvm_inject_msi_t) uop)
 {
     struct xen_hvm_inject_msi op;
     struct domain *d;
@@ -3686,7 +3686,7 @@ static int hvm_replace_event_channel(struct vcpu *v, domid_t remote_domid,
     return 0;
 }
 
-long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE(void) arg)
+long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
 
 {
     struct domain *curr_d = current->domain;
diff --git a/xen/arch/x86/microcode.c b/xen/arch/x86/microcode.c
index bdda3f5..1477481 100644
--- a/xen/arch/x86/microcode.c
+++ b/xen/arch/x86/microcode.c
@@ -192,7 +192,7 @@ static long do_microcode_update(void *_info)
     return error;
 }
 
-int microcode_update(XEN_GUEST_HANDLE(const_void) buf, unsigned long len)
+int microcode_update(XEN_GUEST_HANDLE_PARAM(const_void) buf, unsigned long len)
 {
     int ret;
     struct microcode_info *info;
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index f5c704e..4d72700 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -2914,7 +2914,7 @@ static void put_pg_owner(struct domain *pg_owner)
 }
 
 static inline int vcpumask_to_pcpumask(
-    struct domain *d, XEN_GUEST_HANDLE(const_void) bmap, cpumask_t *pmask)
+    struct domain *d, XEN_GUEST_HANDLE_PARAM(const_void) bmap, cpumask_t *pmask)
 {
     unsigned int vcpu_id, vcpu_bias, offs;
     unsigned long vmask;
@@ -2974,9 +2974,9 @@ static inline void fixunmap_domain_page(const void *ptr)
 #endif
 
 int do_mmuext_op(
-    XEN_GUEST_HANDLE(mmuext_op_t) uops,
+    XEN_GUEST_HANDLE_PARAM(mmuext_op_t) uops,
     unsigned int count,
-    XEN_GUEST_HANDLE(uint) pdone,
+    XEN_GUEST_HANDLE_PARAM(uint) pdone,
     unsigned int foreigndom)
 {
     struct mmuext_op op;
@@ -3438,9 +3438,9 @@ int do_mmuext_op(
 }
 
 int do_mmu_update(
-    XEN_GUEST_HANDLE(mmu_update_t) ureqs,
+    XEN_GUEST_HANDLE_PARAM(mmu_update_t) ureqs,
     unsigned int count,
-    XEN_GUEST_HANDLE(uint) pdone,
+    XEN_GUEST_HANDLE_PARAM(uint) pdone,
     unsigned int foreigndom)
 {
     struct mmu_update req;
@@ -4387,7 +4387,7 @@ long set_gdt(struct vcpu *v,
 }
 
 
-long do_set_gdt(XEN_GUEST_HANDLE(ulong) frame_list, unsigned int entries)
+long do_set_gdt(XEN_GUEST_HANDLE_PARAM(ulong) frame_list, unsigned int entries)
 {
     int nr_pages = (entries + 511) / 512;
     unsigned long frames[16];
@@ -4661,7 +4661,7 @@ static int xenmem_add_to_physmap(struct domain *d,
     return xenmem_add_to_physmap_once(d, xatp);
 }
 
-long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
+long arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     int rc;
 
diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 13b4be2..67e48a3 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -690,7 +690,7 @@ void hap_teardown(struct domain *d)
 }
 
 int hap_domctl(struct domain *d, xen_domctl_shadow_op_t *sc,
-               XEN_GUEST_HANDLE(void) u_domctl)
+               XEN_GUEST_HANDLE_PARAM(void) u_domctl)
 {
     int rc, preempted = 0;
 
diff --git a/xen/arch/x86/mm/mem_event.c b/xen/arch/x86/mm/mem_event.c
index d728889..d3dac14 100644
--- a/xen/arch/x86/mm/mem_event.c
+++ b/xen/arch/x86/mm/mem_event.c
@@ -512,7 +512,7 @@ void mem_event_cleanup(struct domain *d)
 }
 
 int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
-                     XEN_GUEST_HANDLE(void) u_domctl)
+                     XEN_GUEST_HANDLE_PARAM(void) u_domctl)
 {
     int rc;
 
diff --git a/xen/arch/x86/mm/paging.c b/xen/arch/x86/mm/paging.c
index ca879f9..ea44e39 100644
--- a/xen/arch/x86/mm/paging.c
+++ b/xen/arch/x86/mm/paging.c
@@ -654,7 +654,7 @@ void paging_vcpu_init(struct vcpu *v)
 
 
 int paging_domctl(struct domain *d, xen_domctl_shadow_op_t *sc,
-                  XEN_GUEST_HANDLE(void) u_domctl)
+                  XEN_GUEST_HANDLE_PARAM(void) u_domctl)
 {
     int rc;
 
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index dc245be..bd47f03 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -3786,7 +3786,7 @@ out:
 
 int shadow_domctl(struct domain *d, 
                   xen_domctl_shadow_op_t *sc,
-                  XEN_GUEST_HANDLE(void) u_domctl)
+                  XEN_GUEST_HANDLE_PARAM(void) u_domctl)
 {
     int rc, preempted = 0;
 
diff --git a/xen/arch/x86/oprofile/xenoprof.c b/xen/arch/x86/oprofile/xenoprof.c
index 71f00ef..5d286a2 100644
--- a/xen/arch/x86/oprofile/xenoprof.c
+++ b/xen/arch/x86/oprofile/xenoprof.c
@@ -19,7 +19,7 @@
 
 #include "op_counter.h"
 
-int xenoprof_arch_counter(XEN_GUEST_HANDLE(void) arg)
+int xenoprof_arch_counter(XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct xenoprof_counter counter;
 
@@ -39,7 +39,7 @@ int xenoprof_arch_counter(XEN_GUEST_HANDLE(void) arg)
     return 0;
 }
 
-int xenoprof_arch_ibs_counter(XEN_GUEST_HANDLE(void) arg)
+int xenoprof_arch_ibs_counter(XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct xenoprof_ibs_counter ibs_counter;
 
@@ -57,7 +57,7 @@ int xenoprof_arch_ibs_counter(XEN_GUEST_HANDLE(void) arg)
 }
 
 #ifdef CONFIG_COMPAT
-int compat_oprof_arch_counter(XEN_GUEST_HANDLE(void) arg)
+int compat_oprof_arch_counter(XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct compat_oprof_counter counter;
 
diff --git a/xen/arch/x86/physdev.c b/xen/arch/x86/physdev.c
index b0458fd..b6474ef 100644
--- a/xen/arch/x86/physdev.c
+++ b/xen/arch/x86/physdev.c
@@ -255,7 +255,7 @@ int physdev_unmap_pirq(domid_t domid, int pirq)
 }
 #endif /* COMPAT */
 
-ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE(void) arg)
+ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     int irq;
     ret_t ret;
diff --git a/xen/arch/x86/platform_hypercall.c b/xen/arch/x86/platform_hypercall.c
index 88880b0..a32e0a2 100644
--- a/xen/arch/x86/platform_hypercall.c
+++ b/xen/arch/x86/platform_hypercall.c
@@ -60,7 +60,7 @@ long cpu_down_helper(void *data);
 long core_parking_helper(void *data);
 uint32_t get_cur_idle_nums(void);
 
-ret_t do_platform_op(XEN_GUEST_HANDLE(xen_platform_op_t) u_xenpf_op)
+ret_t do_platform_op(XEN_GUEST_HANDLE_PARAM(xen_platform_op_t) u_xenpf_op)
 {
     ret_t ret = 0;
     struct xen_platform_op curop, *op = &curop;
diff --git a/xen/arch/x86/sysctl.c b/xen/arch/x86/sysctl.c
index 379f071..b84dd34 100644
--- a/xen/arch/x86/sysctl.c
+++ b/xen/arch/x86/sysctl.c
@@ -58,7 +58,7 @@ long cpu_down_helper(void *data)
 }
 
 long arch_do_sysctl(
-    struct xen_sysctl *sysctl, XEN_GUEST_HANDLE(xen_sysctl_t) u_sysctl)
+    struct xen_sysctl *sysctl, XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
 {
     long ret = 0;
 
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 767be86..281d9e7 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -3700,7 +3700,7 @@ int send_guest_trap(struct domain *d, uint16_t vcpuid, unsigned int trap_nr)
 }
 
 
-long do_set_trap_table(XEN_GUEST_HANDLE(const_trap_info_t) traps)
+long do_set_trap_table(XEN_GUEST_HANDLE_PARAM(const_trap_info_t) traps)
 {
     struct trap_info cur;
     struct vcpu *curr = current;
diff --git a/xen/arch/x86/x86_32/mm.c b/xen/arch/x86/x86_32/mm.c
index 37efa3c..f6448fb 100644
--- a/xen/arch/x86/x86_32/mm.c
+++ b/xen/arch/x86/x86_32/mm.c
@@ -203,7 +203,7 @@ void __init subarch_init_memory(void)
     }
 }
 
-long subarch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
+long subarch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct xen_machphys_mfn_list xmml;
     unsigned long mfn, last_mfn;
diff --git a/xen/arch/x86/x86_32/traps.c b/xen/arch/x86/x86_32/traps.c
index 8f68808..0c7c860 100644
--- a/xen/arch/x86/x86_32/traps.c
+++ b/xen/arch/x86/x86_32/traps.c
@@ -492,7 +492,7 @@ static long unregister_guest_callback(struct callback_unregister *unreg)
 }
 
 
-long do_callback_op(int cmd, XEN_GUEST_HANDLE(const_void) arg)
+long do_callback_op(int cmd, XEN_GUEST_HANDLE_PARAM(const_void) arg)
 {
     long ret;
 
diff --git a/xen/arch/x86/x86_64/compat/mm.c b/xen/arch/x86/x86_64/compat/mm.c
index 5bcd2fd..1de93b7 100644
--- a/xen/arch/x86/x86_64/compat/mm.c
+++ b/xen/arch/x86/x86_64/compat/mm.c
@@ -5,7 +5,7 @@
 #include <asm/mem_event.h>
 #include <asm/mem_sharing.h>
 
-int compat_set_gdt(XEN_GUEST_HANDLE(uint) frame_list, unsigned int entries)
+int compat_set_gdt(XEN_GUEST_HANDLE_PARAM(uint) frame_list, unsigned int entries)
 {
     unsigned int i, nr_pages = (entries + 511) / 512;
     unsigned long frames[16];
@@ -44,7 +44,7 @@ int compat_update_descriptor(u32 pa_lo, u32 pa_hi, u32 desc_lo, u32 desc_hi)
                                 desc_lo | ((u64)desc_hi << 32));
 }
 
-int compat_arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
+int compat_arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct compat_machphys_mfn_list xmml;
     l2_pgentry_t l2e;
@@ -266,14 +266,14 @@ int compat_update_va_mapping_otherdomain(unsigned long va, u32 lo, u32 hi,
 
 DEFINE_XEN_GUEST_HANDLE(mmuext_op_compat_t);
 
-int compat_mmuext_op(XEN_GUEST_HANDLE(mmuext_op_compat_t) cmp_uops,
+int compat_mmuext_op(XEN_GUEST_HANDLE_PARAM(mmuext_op_compat_t) cmp_uops,
                      unsigned int count,
-                     XEN_GUEST_HANDLE(uint) pdone,
+                     XEN_GUEST_HANDLE_PARAM(uint) pdone,
                      unsigned int foreigndom)
 {
     unsigned int i, preempt_mask;
     int rc = 0;
-    XEN_GUEST_HANDLE(mmuext_op_t) nat_ops;
+    XEN_GUEST_HANDLE_PARAM(mmuext_op_t) nat_ops;
 
     preempt_mask = count & MMU_UPDATE_PREEMPTED;
     count ^= preempt_mask;
diff --git a/xen/arch/x86/x86_64/domain.c b/xen/arch/x86/x86_64/domain.c
index e746c89..144ca2d 100644
--- a/xen/arch/x86/x86_64/domain.c
+++ b/xen/arch/x86/x86_64/domain.c
@@ -23,7 +23,7 @@ CHECK_vcpu_get_physid;
 
 int
 arch_compat_vcpu_op(
-    int cmd, struct vcpu *v, XEN_GUEST_HANDLE(void) arg)
+    int cmd, struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     int rc = -ENOSYS;
 
diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index 635a499..17c46a1 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -1043,7 +1043,7 @@ void __init subarch_init_memory(void)
     }
 }
 
-long subarch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
+long subarch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct xen_machphys_mfn_list xmml;
     l3_pgentry_t l3e;
diff --git a/xen/arch/x86/x86_64/traps.c b/xen/arch/x86/x86_64/traps.c
index 806cf2e..6ead813 100644
--- a/xen/arch/x86/x86_64/traps.c
+++ b/xen/arch/x86/x86_64/traps.c
@@ -518,7 +518,7 @@ static long unregister_guest_callback(struct callback_unregister *unreg)
 }
 
 
-long do_callback_op(int cmd, XEN_GUEST_HANDLE(const_void) arg)
+long do_callback_op(int cmd, XEN_GUEST_HANDLE_PARAM(const_void) arg)
 {
     long ret;
 
diff --git a/xen/common/compat/domain.c b/xen/common/compat/domain.c
index 40a0287..e4c8ceb 100644
--- a/xen/common/compat/domain.c
+++ b/xen/common/compat/domain.c
@@ -15,7 +15,7 @@
 CHECK_vcpu_set_periodic_timer;
 #undef xen_vcpu_set_periodic_timer
 
-int compat_vcpu_op(int cmd, int vcpuid, XEN_GUEST_HANDLE(void) arg)
+int compat_vcpu_op(int cmd, int vcpuid, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct domain *d = current->domain;
     struct vcpu *v;
diff --git a/xen/common/compat/grant_table.c b/xen/common/compat/grant_table.c
index edd20c6..b524955 100644
--- a/xen/common/compat/grant_table.c
+++ b/xen/common/compat/grant_table.c
@@ -52,12 +52,12 @@ CHECK_gnttab_swap_grant_ref;
 #undef xen_gnttab_swap_grant_ref
 
 int compat_grant_table_op(unsigned int cmd,
-                          XEN_GUEST_HANDLE(void) cmp_uop,
+                          XEN_GUEST_HANDLE_PARAM(void) cmp_uop,
                           unsigned int count)
 {
     int rc = 0;
     unsigned int i;
-    XEN_GUEST_HANDLE(void) cnt_uop;
+    XEN_GUEST_HANDLE_PARAM(void) cnt_uop;
 
     set_xen_guest_handle(cnt_uop, NULL);
     switch ( cmd )
@@ -206,7 +206,7 @@ int compat_grant_table_op(unsigned int cmd,
             }
             if ( rc >= 0 )
             {
-                XEN_GUEST_HANDLE(gnttab_transfer_compat_t) xfer;
+                XEN_GUEST_HANDLE_PARAM(gnttab_transfer_compat_t) xfer;
 
                 xfer = guest_handle_cast(cmp_uop, gnttab_transfer_compat_t);
                 guest_handle_add_offset(xfer, i);
@@ -251,7 +251,7 @@ int compat_grant_table_op(unsigned int cmd,
             }
             if ( rc >= 0 )
             {
-                XEN_GUEST_HANDLE(gnttab_copy_compat_t) copy;
+                XEN_GUEST_HANDLE_PARAM(gnttab_copy_compat_t) copy;
 
                 copy = guest_handle_cast(cmp_uop, gnttab_copy_compat_t);
                 guest_handle_add_offset(copy, i);
diff --git a/xen/common/compat/memory.c b/xen/common/compat/memory.c
index e7257cc..996151c 100644
--- a/xen/common/compat/memory.c
+++ b/xen/common/compat/memory.c
@@ -13,7 +13,7 @@ CHECK_TYPE(domid);
 #undef compat_domid_t
 #undef xen_domid_t
 
-int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE(void) compat)
+int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) compat)
 {
     int rc, split, op = cmd & MEMOP_CMD_MASK;
     unsigned int start_extent = cmd >> MEMOP_EXTENT_SHIFT;
@@ -22,7 +22,7 @@ int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE(void) compat)
     {
         unsigned int i, end_extent = 0;
         union {
-            XEN_GUEST_HANDLE(void) hnd;
+            XEN_GUEST_HANDLE_PARAM(void) hnd;
             struct xen_memory_reservation *rsrv;
             struct xen_memory_exchange *xchg;
             struct xen_remove_from_physmap *xrfp;
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 4c5d241..d7cd135 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -804,7 +804,7 @@ void vcpu_reset(struct vcpu *v)
 }
 
 
-long do_vcpu_op(int cmd, int vcpuid, XEN_GUEST_HANDLE(void) arg)
+long do_vcpu_op(int cmd, int vcpuid, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct domain *d = current->domain;
     struct vcpu *v;
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index 7ca6b08..527c5ad 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -238,7 +238,7 @@ void domctl_lock_release(void)
     spin_unlock(&current->domain->hypercall_deadlock_mutex);
 }
 
-long do_domctl(XEN_GUEST_HANDLE(xen_domctl_t) u_domctl)
+long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
     long ret = 0;
     struct xen_domctl curop, *op = &curop;
diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index 53777f8..a80a0d1 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -970,7 +970,7 @@ out:
 }
 
 
-long do_event_channel_op(int cmd, XEN_GUEST_HANDLE(void) arg)
+long do_event_channel_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     long rc;
 
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 9961e83..d780dc6 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -771,7 +771,7 @@ __gnttab_map_grant_ref(
 
 static long
 gnttab_map_grant_ref(
-    XEN_GUEST_HANDLE(gnttab_map_grant_ref_t) uop, unsigned int count)
+    XEN_GUEST_HANDLE_PARAM(gnttab_map_grant_ref_t) uop, unsigned int count)
 {
     int i;
     struct gnttab_map_grant_ref op;
@@ -1040,7 +1040,7 @@ __gnttab_unmap_grant_ref(
 
 static long
 gnttab_unmap_grant_ref(
-    XEN_GUEST_HANDLE(gnttab_unmap_grant_ref_t) uop, unsigned int count)
+    XEN_GUEST_HANDLE_PARAM(gnttab_unmap_grant_ref_t) uop, unsigned int count)
 {
     int i, c, partial_done, done = 0;
     struct gnttab_unmap_grant_ref op;
@@ -1102,7 +1102,7 @@ __gnttab_unmap_and_replace(
 
 static long
 gnttab_unmap_and_replace(
-    XEN_GUEST_HANDLE(gnttab_unmap_and_replace_t) uop, unsigned int count)
+    XEN_GUEST_HANDLE_PARAM(gnttab_unmap_and_replace_t) uop, unsigned int count)
 {
     int i, c, partial_done, done = 0;
     struct gnttab_unmap_and_replace op;
@@ -1254,7 +1254,7 @@ active_alloc_failed:
 
 static long 
 gnttab_setup_table(
-    XEN_GUEST_HANDLE(gnttab_setup_table_t) uop, unsigned int count)
+    XEN_GUEST_HANDLE_PARAM(gnttab_setup_table_t) uop, unsigned int count)
 {
     struct gnttab_setup_table op;
     struct domain *d;
@@ -1348,7 +1348,7 @@ gnttab_setup_table(
 
 static long 
 gnttab_query_size(
-    XEN_GUEST_HANDLE(gnttab_query_size_t) uop, unsigned int count)
+    XEN_GUEST_HANDLE_PARAM(gnttab_query_size_t) uop, unsigned int count)
 {
     struct gnttab_query_size op;
     struct domain *d;
@@ -1485,7 +1485,7 @@ gnttab_prepare_for_transfer(
 
 static long
 gnttab_transfer(
-    XEN_GUEST_HANDLE(gnttab_transfer_t) uop, unsigned int count)
+    XEN_GUEST_HANDLE_PARAM(gnttab_transfer_t) uop, unsigned int count)
 {
     struct domain *d = current->domain;
     struct domain *e;
@@ -2082,7 +2082,7 @@ __gnttab_copy(
 
 static long
 gnttab_copy(
-    XEN_GUEST_HANDLE(gnttab_copy_t) uop, unsigned int count)
+    XEN_GUEST_HANDLE_PARAM(gnttab_copy_t) uop, unsigned int count)
 {
     int i;
     struct gnttab_copy op;
@@ -2101,7 +2101,7 @@ gnttab_copy(
 }
 
 static long
-gnttab_set_version(XEN_GUEST_HANDLE(gnttab_set_version_t uop))
+gnttab_set_version(XEN_GUEST_HANDLE_PARAM(gnttab_set_version_t uop))
 {
     gnttab_set_version_t op;
     struct domain *d = current->domain;
@@ -2220,7 +2220,7 @@ out:
 }
 
 static long
-gnttab_get_status_frames(XEN_GUEST_HANDLE(gnttab_get_status_frames_t) uop,
+gnttab_get_status_frames(XEN_GUEST_HANDLE_PARAM(gnttab_get_status_frames_t) uop,
                          int count)
 {
     gnttab_get_status_frames_t op;
@@ -2289,7 +2289,7 @@ out1:
 }
 
 static long
-gnttab_get_version(XEN_GUEST_HANDLE(gnttab_get_version_t uop))
+gnttab_get_version(XEN_GUEST_HANDLE_PARAM(gnttab_get_version_t uop))
 {
     gnttab_get_version_t op;
     struct domain *d;
@@ -2368,7 +2368,7 @@ out:
 }
 
 static long
-gnttab_swap_grant_ref(XEN_GUEST_HANDLE(gnttab_swap_grant_ref_t uop),
+gnttab_swap_grant_ref(XEN_GUEST_HANDLE_PARAM(gnttab_swap_grant_ref_t uop),
                       unsigned int count)
 {
     int i;
@@ -2389,7 +2389,7 @@ gnttab_swap_grant_ref(XEN_GUEST_HANDLE(gnttab_swap_grant_ref_t uop),
 
 long
 do_grant_table_op(
-    unsigned int cmd, XEN_GUEST_HANDLE(void) uop, unsigned int count)
+    unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) uop, unsigned int count)
 {
     long rc;
     
@@ -2401,7 +2401,7 @@ do_grant_table_op(
     {
     case GNTTABOP_map_grant_ref:
     {
-        XEN_GUEST_HANDLE(gnttab_map_grant_ref_t) map =
+        XEN_GUEST_HANDLE_PARAM(gnttab_map_grant_ref_t) map =
             guest_handle_cast(uop, gnttab_map_grant_ref_t);
         if ( unlikely(!guest_handle_okay(map, count)) )
             goto out;
@@ -2415,7 +2415,7 @@ do_grant_table_op(
     }
     case GNTTABOP_unmap_grant_ref:
     {
-        XEN_GUEST_HANDLE(gnttab_unmap_grant_ref_t) unmap =
+        XEN_GUEST_HANDLE_PARAM(gnttab_unmap_grant_ref_t) unmap =
             guest_handle_cast(uop, gnttab_unmap_grant_ref_t);
         if ( unlikely(!guest_handle_okay(unmap, count)) )
             goto out;
@@ -2429,7 +2429,7 @@ do_grant_table_op(
     }
     case GNTTABOP_unmap_and_replace:
     {
-        XEN_GUEST_HANDLE(gnttab_unmap_and_replace_t) unmap =
+        XEN_GUEST_HANDLE_PARAM(gnttab_unmap_and_replace_t) unmap =
             guest_handle_cast(uop, gnttab_unmap_and_replace_t);
         if ( unlikely(!guest_handle_okay(unmap, count)) )
             goto out;
@@ -2453,7 +2453,7 @@ do_grant_table_op(
     }
     case GNTTABOP_transfer:
     {
-        XEN_GUEST_HANDLE(gnttab_transfer_t) transfer =
+        XEN_GUEST_HANDLE_PARAM(gnttab_transfer_t) transfer =
             guest_handle_cast(uop, gnttab_transfer_t);
         if ( unlikely(!guest_handle_okay(transfer, count)) )
             goto out;
@@ -2467,7 +2467,7 @@ do_grant_table_op(
     }
     case GNTTABOP_copy:
     {
-        XEN_GUEST_HANDLE(gnttab_copy_t) copy =
+        XEN_GUEST_HANDLE_PARAM(gnttab_copy_t) copy =
             guest_handle_cast(uop, gnttab_copy_t);
         if ( unlikely(!guest_handle_okay(copy, count)) )
             goto out;
@@ -2504,7 +2504,7 @@ do_grant_table_op(
     }
     case GNTTABOP_swap_grant_ref:
     {
-        XEN_GUEST_HANDLE(gnttab_swap_grant_ref_t) swap =
+        XEN_GUEST_HANDLE_PARAM(gnttab_swap_grant_ref_t) swap =
             guest_handle_cast(uop, gnttab_swap_grant_ref_t);
         if ( unlikely(!guest_handle_okay(swap, count)) )
             goto out;
diff --git a/xen/common/kernel.c b/xen/common/kernel.c
index c915bbc..55caff6 100644
--- a/xen/common/kernel.c
+++ b/xen/common/kernel.c
@@ -204,7 +204,7 @@ void __init do_initcalls(void)
  * Simple hypercalls.
  */
 
-DO(xen_version)(int cmd, XEN_GUEST_HANDLE(void) arg)
+DO(xen_version)(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     switch ( cmd )
     {
@@ -332,7 +332,7 @@ DO(xen_version)(int cmd, XEN_GUEST_HANDLE(void) arg)
     return -ENOSYS;
 }
 
-DO(nmi_op)(unsigned int cmd, XEN_GUEST_HANDLE(void) arg)
+DO(nmi_op)(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct xennmi_callback cb;
     long rc = 0;
diff --git a/xen/common/kexec.c b/xen/common/kexec.c
index 09a5624..03389eb 100644
--- a/xen/common/kexec.c
+++ b/xen/common/kexec.c
@@ -613,7 +613,7 @@ static int kexec_get_range_internal(xen_kexec_range_t *range)
     return ret;
 }
 
-static int kexec_get_range(XEN_GUEST_HANDLE(void) uarg)
+static int kexec_get_range(XEN_GUEST_HANDLE_PARAM(void) uarg)
 {
     xen_kexec_range_t range;
     int ret = -EINVAL;
@@ -629,7 +629,7 @@ static int kexec_get_range(XEN_GUEST_HANDLE(void) uarg)
     return ret;
 }
 
-static int kexec_get_range_compat(XEN_GUEST_HANDLE(void) uarg)
+static int kexec_get_range_compat(XEN_GUEST_HANDLE_PARAM(void) uarg)
 {
 #ifdef CONFIG_COMPAT
     xen_kexec_range_t range;
@@ -777,7 +777,7 @@ static int kexec_load_unload_internal(unsigned long op, xen_kexec_load_t *load)
     return ret;
 }
 
-static int kexec_load_unload(unsigned long op, XEN_GUEST_HANDLE(void) uarg)
+static int kexec_load_unload(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) uarg)
 {
     xen_kexec_load_t load;
 
@@ -788,7 +788,7 @@ static int kexec_load_unload(unsigned long op, XEN_GUEST_HANDLE(void) uarg)
 }
 
 static int kexec_load_unload_compat(unsigned long op,
-                                    XEN_GUEST_HANDLE(void) uarg)
+                                    XEN_GUEST_HANDLE_PARAM(void) uarg)
 {
 #ifdef CONFIG_COMPAT
     compat_kexec_load_t compat_load;
@@ -813,7 +813,7 @@ static int kexec_load_unload_compat(unsigned long op,
 #endif /* CONFIG_COMPAT */
 }
 
-static int kexec_exec(XEN_GUEST_HANDLE(void) uarg)
+static int kexec_exec(XEN_GUEST_HANDLE_PARAM(void) uarg)
 {
     xen_kexec_exec_t exec;
     xen_kexec_image_t *image;
@@ -845,7 +845,7 @@ static int kexec_exec(XEN_GUEST_HANDLE(void) uarg)
     return -EINVAL; /* never reached */
 }
 
-int do_kexec_op_internal(unsigned long op, XEN_GUEST_HANDLE(void) uarg,
+int do_kexec_op_internal(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) uarg,
                            int compat)
 {
     unsigned long flags;
@@ -886,13 +886,13 @@ int do_kexec_op_internal(unsigned long op, XEN_GUEST_HANDLE(void) uarg,
     return ret;
 }
 
-long do_kexec_op(unsigned long op, XEN_GUEST_HANDLE(void) uarg)
+long do_kexec_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) uarg)
 {
     return do_kexec_op_internal(op, uarg, 0);
 }
 
 #ifdef CONFIG_COMPAT
-int compat_kexec_op(unsigned long op, XEN_GUEST_HANDLE(void) uarg)
+int compat_kexec_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) uarg)
 {
     return do_kexec_op_internal(op, uarg, 1);
 }
diff --git a/xen/common/memory.c b/xen/common/memory.c
index 7e58cc4..a683954 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -277,7 +277,7 @@ static void decrease_reservation(struct memop_args *a)
     a->nr_done = i;
 }
 
-static long memory_exchange(XEN_GUEST_HANDLE(xen_memory_exchange_t) arg)
+static long memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t) arg)
 {
     struct xen_memory_exchange exch;
     PAGE_LIST_HEAD(in_chunk_list);
@@ -530,7 +530,7 @@ static long memory_exchange(XEN_GUEST_HANDLE(xen_memory_exchange_t) arg)
     return rc;
 }
 
-long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE(void) arg)
+long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct domain *d;
     int rc, op;
diff --git a/xen/common/multicall.c b/xen/common/multicall.c
index 6c1a9d7..5de5f8d 100644
--- a/xen/common/multicall.c
+++ b/xen/common/multicall.c
@@ -21,7 +21,7 @@ typedef long ret_t;
 
 ret_t
 do_multicall(
-    XEN_GUEST_HANDLE(multicall_entry_t) call_list, unsigned int nr_calls)
+    XEN_GUEST_HANDLE_PARAM(multicall_entry_t) call_list, unsigned int nr_calls)
 {
     struct mc_state *mcs = &current->mc_state;
     unsigned int     i;
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index 0854f55..c26eac4 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -836,7 +836,7 @@ typedef long ret_t;
 
 #endif /* !COMPAT */
 
-ret_t do_sched_op(int cmd, XEN_GUEST_HANDLE(void) arg)
+ret_t do_sched_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     ret_t ret = 0;
 
diff --git a/xen/common/sysctl.c b/xen/common/sysctl.c
index ea68278..47142f4 100644
--- a/xen/common/sysctl.c
+++ b/xen/common/sysctl.c
@@ -27,7 +27,7 @@
 #include <xsm/xsm.h>
 #include <xen/pmstat.h>
 
-long do_sysctl(XEN_GUEST_HANDLE(xen_sysctl_t) u_sysctl)
+long do_sysctl(XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
 {
     long ret = 0;
     struct xen_sysctl curop, *op = &curop;
diff --git a/xen/common/xenoprof.c b/xen/common/xenoprof.c
index e571fea..c001b38 100644
--- a/xen/common/xenoprof.c
+++ b/xen/common/xenoprof.c
@@ -404,7 +404,7 @@ static int add_active_list(domid_t domid)
     return 0;
 }
 
-static int add_passive_list(XEN_GUEST_HANDLE(void) arg)
+static int add_passive_list(XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct xenoprof_passive passive;
     struct domain *d;
@@ -585,7 +585,7 @@ void xenoprof_log_event(struct vcpu *vcpu, const struct cpu_user_regs *regs,
 
 
 
-static int xenoprof_op_init(XEN_GUEST_HANDLE(void) arg)
+static int xenoprof_op_init(XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct domain *d = current->domain;
     struct xenoprof_init xenoprof_init;
@@ -609,7 +609,7 @@ static int xenoprof_op_init(XEN_GUEST_HANDLE(void) arg)
 
 #endif /* !COMPAT */
 
-static int xenoprof_op_get_buffer(XEN_GUEST_HANDLE(void) arg)
+static int xenoprof_op_get_buffer(XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct xenoprof_get_buffer xenoprof_get_buffer;
     struct domain *d = current->domain;
@@ -660,7 +660,7 @@ static int xenoprof_op_get_buffer(XEN_GUEST_HANDLE(void) arg)
                       || (op == XENOPROF_disable_virq)  \
                       || (op == XENOPROF_get_buffer))
  
-int do_xenoprof_op(int op, XEN_GUEST_HANDLE(void) arg)
+int do_xenoprof_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     int ret = 0;
     
diff --git a/xen/drivers/acpi/pmstat.c b/xen/drivers/acpi/pmstat.c
index 698711e..f8d62f2 100644
--- a/xen/drivers/acpi/pmstat.c
+++ b/xen/drivers/acpi/pmstat.c
@@ -515,7 +515,7 @@ int do_pm_op(struct xen_sysctl_pm_op *op)
     return ret;
 }
 
-int acpi_set_pdc_bits(u32 acpi_id, XEN_GUEST_HANDLE(uint32) pdc)
+int acpi_set_pdc_bits(u32 acpi_id, XEN_GUEST_HANDLE_PARAM(uint32) pdc)
 {
     u32 bits[3];
     int ret;
diff --git a/xen/drivers/char/console.c b/xen/drivers/char/console.c
index e10bed5..b0f2334 100644
--- a/xen/drivers/char/console.c
+++ b/xen/drivers/char/console.c
@@ -182,7 +182,7 @@ static void putchar_console_ring(int c)
 
 long read_console_ring(struct xen_sysctl_readconsole *op)
 {
-    XEN_GUEST_HANDLE(char) str;
+    XEN_GUEST_HANDLE_PARAM(char) str;
     uint32_t idx, len, max, sofar, c;
 
     str   = guest_handle_cast(op->buffer, char),
@@ -320,7 +320,7 @@ static void notify_dom0_con_ring(unsigned long unused)
 static DECLARE_SOFTIRQ_TASKLET(notify_dom0_con_ring_tasklet,
                                notify_dom0_con_ring, 0);
 
-static long guest_console_write(XEN_GUEST_HANDLE(char) buffer, int count)
+static long guest_console_write(XEN_GUEST_HANDLE_PARAM(char) buffer, int count)
 {
     char kbuf[128], *kptr;
     int kcount;
@@ -358,7 +358,7 @@ static long guest_console_write(XEN_GUEST_HANDLE(char) buffer, int count)
     return 0;
 }
 
-long do_console_io(int cmd, int count, XEN_GUEST_HANDLE(char) buffer)
+long do_console_io(int cmd, int count, XEN_GUEST_HANDLE_PARAM(char) buffer)
 {
     long rc;
     unsigned int idx, len;
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 64f5fd1..396461f 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -518,7 +518,7 @@ void iommu_crash_shutdown(void)
 
 int iommu_do_domctl(
     struct xen_domctl *domctl,
-    XEN_GUEST_HANDLE(xen_domctl_t) u_domctl)
+    XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
     struct domain *d;
     u16 seg;
diff --git a/xen/include/asm-arm/hypercall.h b/xen/include/asm-arm/hypercall.h
index 454f02e..090e620 100644
--- a/xen/include/asm-arm/hypercall.h
+++ b/xen/include/asm-arm/hypercall.h
@@ -2,7 +2,7 @@
 #define __ASM_ARM_HYPERCALL_H__
 
 #include <public/domctl.h> /* for arch_do_domctl */
-int do_physdev_op(int cmd, XEN_GUEST_HANDLE(void) arg);
+int do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg);
 
 #endif /* __ASM_ARM_HYPERCALL_H__ */
 /*
diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
index b37bd35..8bf45ba 100644
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -267,7 +267,7 @@ static inline int relinquish_shared_pages(struct domain *d)
 
 
 /* Arch-specific portion of memory_op hypercall. */
-long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg);
+long arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg);
 
 int steal_page(
     struct domain *d, struct page_info *page, unsigned int memflags);
diff --git a/xen/include/asm-x86/hap.h b/xen/include/asm-x86/hap.h
index a2532a4..916a35b 100644
--- a/xen/include/asm-x86/hap.h
+++ b/xen/include/asm-x86/hap.h
@@ -51,7 +51,7 @@ hap_unmap_domain_page(void *p)
 /************************************************/
 void  hap_domain_init(struct domain *d);
 int   hap_domctl(struct domain *d, xen_domctl_shadow_op_t *sc,
-                 XEN_GUEST_HANDLE(void) u_domctl);
+                 XEN_GUEST_HANDLE_PARAM(void) u_domctl);
 int   hap_enable(struct domain *d, u32 mode);
 void  hap_final_teardown(struct domain *d);
 void  hap_teardown(struct domain *d);
diff --git a/xen/include/asm-x86/hypercall.h b/xen/include/asm-x86/hypercall.h
index 9e136c3..55b5ca2 100644
--- a/xen/include/asm-x86/hypercall.h
+++ b/xen/include/asm-x86/hypercall.h
@@ -18,22 +18,22 @@
 
 extern long
 do_event_channel_op_compat(
-    XEN_GUEST_HANDLE(evtchn_op_t) uop);
+    XEN_GUEST_HANDLE_PARAM(evtchn_op_t) uop);
 
 extern long
 do_set_trap_table(
-    XEN_GUEST_HANDLE(const_trap_info_t) traps);
+    XEN_GUEST_HANDLE_PARAM(const_trap_info_t) traps);
 
 extern int
 do_mmu_update(
-    XEN_GUEST_HANDLE(mmu_update_t) ureqs,
+    XEN_GUEST_HANDLE_PARAM(mmu_update_t) ureqs,
     unsigned int count,
-    XEN_GUEST_HANDLE(uint) pdone,
+    XEN_GUEST_HANDLE_PARAM(uint) pdone,
     unsigned int foreigndom);
 
 extern long
 do_set_gdt(
-    XEN_GUEST_HANDLE(ulong) frame_list,
+    XEN_GUEST_HANDLE_PARAM(ulong) frame_list,
     unsigned int entries);
 
 extern long
@@ -60,7 +60,7 @@ do_update_descriptor(
     u64 desc);
 
 extern long
-do_mca(XEN_GUEST_HANDLE(xen_mc_t) u_xen_mc);
+do_mca(XEN_GUEST_HANDLE_PARAM(xen_mc_t) u_xen_mc);
 
 extern int
 do_update_va_mapping(
@@ -70,7 +70,7 @@ do_update_va_mapping(
 
 extern long
 do_physdev_op(
-    int cmd, XEN_GUEST_HANDLE(void) arg);
+    int cmd, XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern int
 do_update_va_mapping_otherdomain(
@@ -81,9 +81,9 @@ do_update_va_mapping_otherdomain(
 
 extern int
 do_mmuext_op(
-    XEN_GUEST_HANDLE(mmuext_op_t) uops,
+    XEN_GUEST_HANDLE_PARAM(mmuext_op_t) uops,
     unsigned int count,
-    XEN_GUEST_HANDLE(uint) pdone,
+    XEN_GUEST_HANDLE_PARAM(uint) pdone,
     unsigned int foreigndom);
 
 extern unsigned long
@@ -92,7 +92,7 @@ do_iret(
 
 extern int
 do_kexec(
-    unsigned long op, unsigned arg1, XEN_GUEST_HANDLE(void) uarg);
+    unsigned long op, unsigned arg1, XEN_GUEST_HANDLE_PARAM(void) uarg);
 
 #ifdef __x86_64__
 
@@ -110,11 +110,11 @@ do_set_segment_base(
 extern int
 compat_physdev_op(
     int cmd,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern int
 arch_compat_vcpu_op(
-    int cmd, struct vcpu *v, XEN_GUEST_HANDLE(void) arg);
+    int cmd, struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg);
 
 #else
 
diff --git a/xen/include/asm-x86/mem_event.h b/xen/include/asm-x86/mem_event.h
index 23d71c1..e17f36b 100644
--- a/xen/include/asm-x86/mem_event.h
+++ b/xen/include/asm-x86/mem_event.h
@@ -65,7 +65,7 @@ int mem_event_get_response(struct domain *d, struct mem_event_domain *med,
 struct domain *get_mem_event_op_target(uint32_t domain, int *rc);
 int do_mem_event_op(int op, uint32_t domain, void *arg);
 int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
-                     XEN_GUEST_HANDLE(void) u_domctl);
+                     XEN_GUEST_HANDLE_PARAM(void) u_domctl);
 
 #endif /* __MEM_EVENT_H__ */
 
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index 4cba276..6373b3b 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -604,10 +604,10 @@ void *do_page_walk(struct vcpu *v, unsigned long addr);
 int __sync_local_execstate(void);
 
 /* Arch-specific portion of memory_op hypercall. */
-long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg);
-long subarch_memory_op(int op, XEN_GUEST_HANDLE(void) arg);
-int compat_arch_memory_op(int op, XEN_GUEST_HANDLE(void));
-int compat_subarch_memory_op(int op, XEN_GUEST_HANDLE(void));
+long arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg);
+long subarch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg);
+int compat_arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void));
+int compat_subarch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void));
 
 int steal_page(
     struct domain *d, struct page_info *page, unsigned int memflags);
diff --git a/xen/include/asm-x86/paging.h b/xen/include/asm-x86/paging.h
index c432a97..1cd0e3f 100644
--- a/xen/include/asm-x86/paging.h
+++ b/xen/include/asm-x86/paging.h
@@ -215,7 +215,7 @@ int paging_domain_init(struct domain *d, unsigned int domcr_flags);
  * and disable ephemeral shadow modes (test mode and log-dirty mode) and
  * manipulate the log-dirty bitmap. */
 int paging_domctl(struct domain *d, xen_domctl_shadow_op_t *sc,
-                  XEN_GUEST_HANDLE(void) u_domctl);
+                  XEN_GUEST_HANDLE_PARAM(void) u_domctl);
 
 /* Call when destroying a domain */
 void paging_teardown(struct domain *d);
diff --git a/xen/include/asm-x86/processor.h b/xen/include/asm-x86/processor.h
index 7164a50..efdbddd 100644
--- a/xen/include/asm-x86/processor.h
+++ b/xen/include/asm-x86/processor.h
@@ -598,7 +598,7 @@ int rdmsr_hypervisor_regs(uint32_t idx, uint64_t *val);
 int wrmsr_hypervisor_regs(uint32_t idx, uint64_t val);
 
 void microcode_set_module(unsigned int);
-int microcode_update(XEN_GUEST_HANDLE(const_void), unsigned long len);
+int microcode_update(XEN_GUEST_HANDLE_PARAM(const_void), unsigned long len);
 int microcode_resume_cpu(int cpu);
 
 unsigned long *get_x86_gpr(struct cpu_user_regs *regs, unsigned int modrm_reg);
diff --git a/xen/include/asm-x86/shadow.h b/xen/include/asm-x86/shadow.h
index 88a8cd2..2eb6efc 100644
--- a/xen/include/asm-x86/shadow.h
+++ b/xen/include/asm-x86/shadow.h
@@ -73,7 +73,7 @@ int shadow_track_dirty_vram(struct domain *d,
  * manipulate the log-dirty bitmap. */
 int shadow_domctl(struct domain *d, 
                   xen_domctl_shadow_op_t *sc,
-                  XEN_GUEST_HANDLE(void) u_domctl);
+                  XEN_GUEST_HANDLE_PARAM(void) u_domctl);
 
 /* Call when destroying a domain */
 void shadow_teardown(struct domain *d);
diff --git a/xen/include/asm-x86/xenoprof.h b/xen/include/asm-x86/xenoprof.h
index c03f8c8..3f5ea15 100644
--- a/xen/include/asm-x86/xenoprof.h
+++ b/xen/include/asm-x86/xenoprof.h
@@ -40,9 +40,9 @@ int xenoprof_arch_init(int *num_events, char *cpu_type);
 #define xenoprof_arch_disable_virq()            nmi_disable_virq()
 #define xenoprof_arch_release_counters()        nmi_release_counters()
 
-int xenoprof_arch_counter(XEN_GUEST_HANDLE(void) arg);
-int compat_oprof_arch_counter(XEN_GUEST_HANDLE(void) arg);
-int xenoprof_arch_ibs_counter(XEN_GUEST_HANDLE(void) arg);
+int xenoprof_arch_counter(XEN_GUEST_HANDLE_PARAM(void) arg);
+int compat_oprof_arch_counter(XEN_GUEST_HANDLE_PARAM(void) arg);
+int xenoprof_arch_ibs_counter(XEN_GUEST_HANDLE_PARAM(void) arg);
 
 struct vcpu;
 struct cpu_user_regs;
diff --git a/xen/include/xen/acpi.h b/xen/include/xen/acpi.h
index d7e2f94..8f3cdca 100644
--- a/xen/include/xen/acpi.h
+++ b/xen/include/xen/acpi.h
@@ -145,8 +145,8 @@ static inline unsigned int acpi_get_cstate_limit(void) { return 0; }
 static inline void acpi_set_cstate_limit(unsigned int new_limit) { return; }
 #endif
 
-#ifdef XEN_GUEST_HANDLE
-int acpi_set_pdc_bits(u32 acpi_id, XEN_GUEST_HANDLE(uint32));
+#ifdef XEN_GUEST_HANDLE_PARAM
+int acpi_set_pdc_bits(u32 acpi_id, XEN_GUEST_HANDLE_PARAM(uint32));
 #endif
 int arch_acpi_set_pdc_bits(u32 acpi_id, u32 *, u32 mask);
 
diff --git a/xen/include/xen/hypercall.h b/xen/include/xen/hypercall.h
index 73b1598..e335037 100644
--- a/xen/include/xen/hypercall.h
+++ b/xen/include/xen/hypercall.h
@@ -29,29 +29,29 @@ do_sched_op_compat(
 extern long
 do_sched_op(
     int cmd,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern long
 do_domctl(
-    XEN_GUEST_HANDLE(xen_domctl_t) u_domctl);
+    XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl);
 
 extern long
 arch_do_domctl(
     struct xen_domctl *domctl,
-    XEN_GUEST_HANDLE(xen_domctl_t) u_domctl);
+    XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl);
 
 extern long
 do_sysctl(
-    XEN_GUEST_HANDLE(xen_sysctl_t) u_sysctl);
+    XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl);
 
 extern long
 arch_do_sysctl(
     struct xen_sysctl *sysctl,
-    XEN_GUEST_HANDLE(xen_sysctl_t) u_sysctl);
+    XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl);
 
 extern long
 do_platform_op(
-    XEN_GUEST_HANDLE(xen_platform_op_t) u_xenpf_op);
+    XEN_GUEST_HANDLE_PARAM(xen_platform_op_t) u_xenpf_op);
 
 /*
  * To allow safe resume of do_memory_op() after preemption, we need to know
@@ -64,11 +64,11 @@ do_platform_op(
 extern long
 do_memory_op(
     unsigned long cmd,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern long
 do_multicall(
-    XEN_GUEST_HANDLE(multicall_entry_t) call_list,
+    XEN_GUEST_HANDLE_PARAM(multicall_entry_t) call_list,
     unsigned int nr_calls);
 
 extern long
@@ -77,23 +77,23 @@ do_set_timer_op(
 
 extern long
 do_event_channel_op(
-    int cmd, XEN_GUEST_HANDLE(void) arg);
+    int cmd, XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern long
 do_xen_version(
     int cmd,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern long
 do_console_io(
     int cmd,
     int count,
-    XEN_GUEST_HANDLE(char) buffer);
+    XEN_GUEST_HANDLE_PARAM(char) buffer);
 
 extern long
 do_grant_table_op(
     unsigned int cmd,
-    XEN_GUEST_HANDLE(void) uop,
+    XEN_GUEST_HANDLE_PARAM(void) uop,
     unsigned int count);
 
 extern long
@@ -105,72 +105,72 @@ extern long
 do_vcpu_op(
     int cmd,
     int vcpuid,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 struct vcpu;
 extern long
 arch_do_vcpu_op(int cmd,
     struct vcpu *v,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern long
 do_nmi_op(
     unsigned int cmd,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern long
 do_hvm_op(
     unsigned long op,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern long
 do_kexec_op(
     unsigned long op,
     int arg1,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern long
 do_xsm_op(
-    XEN_GUEST_HANDLE(xsm_op_t) u_xsm_op);
+    XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_xsm_op);
 
 extern long
 do_tmem_op(
-    XEN_GUEST_HANDLE(tmem_op_t) uops);
+    XEN_GUEST_HANDLE_PARAM(tmem_op_t) uops);
 
 extern int
-do_xenoprof_op(int op, XEN_GUEST_HANDLE(void) arg);
+do_xenoprof_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg);
 
 #ifdef CONFIG_COMPAT
 
 extern int
 compat_memory_op(
     unsigned int cmd,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern int
 compat_grant_table_op(
     unsigned int cmd,
-    XEN_GUEST_HANDLE(void) uop,
+    XEN_GUEST_HANDLE_PARAM(void) uop,
     unsigned int count);
 
 extern int
 compat_vcpu_op(
     int cmd,
     int vcpuid,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern int
-compat_xenoprof_op(int op, XEN_GUEST_HANDLE(void) arg);
+compat_xenoprof_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern int
 compat_xen_version(
     int cmd,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern int
 compat_sched_op(
     int cmd,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern int
 compat_set_timer_op(
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 6f7fbf7..bd19e23 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -155,7 +155,7 @@ void iommu_crash_shutdown(void);
 void iommu_set_dom0_mapping(struct domain *d);
 void iommu_share_p2m_table(struct domain *d);
 
-int iommu_do_domctl(struct xen_domctl *, XEN_GUEST_HANDLE(xen_domctl_t));
+int iommu_do_domctl(struct xen_domctl *, XEN_GUEST_HANDLE_PARAM(xen_domctl_t));
 
 void iommu_iotlb_flush(struct domain *d, unsigned long gfn, unsigned int page_count);
 void iommu_iotlb_flush_all(struct domain *d);
diff --git a/xen/include/xen/tmem_xen.h b/xen/include/xen/tmem_xen.h
index 4a35760..2e7199a 100644
--- a/xen/include/xen/tmem_xen.h
+++ b/xen/include/xen/tmem_xen.h
@@ -448,7 +448,7 @@ static inline void tmh_tze_copy_from_pfp(void *tva, pfp_t *pfp, pagesize_t len)
 typedef XEN_GUEST_HANDLE(void) cli_mfn_t;
 typedef XEN_GUEST_HANDLE(char) cli_va_t;
 */
-typedef XEN_GUEST_HANDLE(tmem_op_t) tmem_cli_op_t;
+typedef XEN_GUEST_HANDLE_PARAM(tmem_op_t) tmem_cli_op_t;
 
 static inline int tmh_get_tmemop_from_client(tmem_op_t *op, tmem_cli_op_t uops)
 {
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index bef79df..3e4a47f 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -139,7 +139,7 @@ struct xsm_operations {
     int (*cpupool_op)(void);
     int (*sched_op)(void);
 
-    long (*__do_xsm_op) (XEN_GUEST_HANDLE(xsm_op_t) op);
+    long (*__do_xsm_op) (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op);
 
 #ifdef CONFIG_X86
     int (*shadow_control) (struct domain *d, uint32_t op);
@@ -585,7 +585,7 @@ static inline int xsm_sched_op(void)
     return xsm_call(sched_op());
 }
 
-static inline long __do_xsm_op (XEN_GUEST_HANDLE(xsm_op_t) op)
+static inline long __do_xsm_op (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op)
 {
 #ifdef XSM_ENABLE
     return xsm_ops->__do_xsm_op(op);
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index 7027ee7..5ef6529 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -365,7 +365,7 @@ static int dummy_sched_op (void)
     return 0;
 }
 
-static long dummy___do_xsm_op(XEN_GUEST_HANDLE(xsm_op_t) op)
+static long dummy___do_xsm_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) op)
 {
     return -ENOSYS;
 }
diff --git a/xen/xsm/flask/flask_op.c b/xen/xsm/flask/flask_op.c
index bd4db37..23e7d34 100644
--- a/xen/xsm/flask/flask_op.c
+++ b/xen/xsm/flask/flask_op.c
@@ -71,7 +71,7 @@ static int domain_has_security(struct domain *d, u32 perms)
                         perms, NULL);
 }
 
-static int flask_copyin_string(XEN_GUEST_HANDLE(char) u_buf, char **buf, uint32_t size)
+static int flask_copyin_string(XEN_GUEST_HANDLE_PARAM(char) u_buf, char **buf, uint32_t size)
 {
     char *tmp = xmalloc_bytes(size + 1);
     if ( !tmp )
@@ -573,7 +573,7 @@ static int flask_get_peer_sid(struct xen_flask_peersid *arg)
     return rv;
 }
 
-long do_flask_op(XEN_GUEST_HANDLE(xsm_op_t) u_flask_op)
+long do_flask_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_flask_op)
 {
     xen_flask_op_t op;
     int rv;
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 23b84f3..0fc299c 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -1553,7 +1553,7 @@ static int flask_vcpuextstate (struct domain *d, uint32_t cmd)
 }
 #endif
 
-long do_flask_op(XEN_GUEST_HANDLE(xsm_op_t) u_flask_op);
+long do_flask_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_flask_op);
 
 static struct xsm_operations flask_ops = {
     .security_domaininfo = flask_security_domaininfo,
diff --git a/xen/xsm/xsm_core.c b/xen/xsm/xsm_core.c
index 96c8669..46287cb 100644
--- a/xen/xsm/xsm_core.c
+++ b/xen/xsm/xsm_core.c
@@ -111,7 +111,7 @@ int unregister_xsm(struct xsm_operations *ops)
 
 #endif
 
-long do_xsm_op (XEN_GUEST_HANDLE(xsm_op_t) op)
+long do_xsm_op (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op)
 {
     return __do_xsm_op(op);
 }
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 14:50:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 14:50:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T21P5-00062A-OH; Thu, 16 Aug 2012 14:50:31 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T21P4-000617-28
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 14:50:30 +0000
Received: from [85.158.138.51:40672] by server-2.bemta-3.messagelabs.com id
	E2/E7-17748-5B80D205; Thu, 16 Aug 2012 14:50:29 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-3.tower-174.messagelabs.com!1345128623!20556109!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjQyODA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12559 invoked from network); 16 Aug 2012 14:50:24 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 14:50:24 -0000
X-IronPort-AV: E=Sophos;i="4.77,778,1336363200"; d="scan'208";a="34861514"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 10:50:22 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 16 Aug 2012 10:50:21 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1T21Op-0006Vh-UH;
	Thu, 16 Aug 2012 15:50:15 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Thu, 16 Aug 2012 15:50:11 +0100
Message-ID: <1345128612-10323-5-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208161523420.4850@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208161523420.4850@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: tim@xen.org, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH v3 5/6] xen: replace XEN_GUEST_HANDLE with
	XEN_GUEST_HANDLE_PARAM when appropriate
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Note: these changes make no difference on x86 and ia64.

Replace XEN_GUEST_HANDLE with XEN_GUEST_HANDLE_PARAM when it is used as
a hypercall argument.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/arch/arm/domain.c             |    2 +-
 xen/arch/arm/domctl.c             |    2 +-
 xen/arch/arm/hvm.c                |    2 +-
 xen/arch/arm/mm.c                 |    2 +-
 xen/arch/arm/physdev.c            |    2 +-
 xen/arch/arm/sysctl.c             |    2 +-
 xen/arch/x86/compat.c             |    2 +-
 xen/arch/x86/cpu/mcheck/mce.c     |    2 +-
 xen/arch/x86/domain.c             |    2 +-
 xen/arch/x86/domctl.c             |    2 +-
 xen/arch/x86/efi/runtime.c        |    2 +-
 xen/arch/x86/hvm/hvm.c            |   26 +++++++++---------
 xen/arch/x86/microcode.c          |    2 +-
 xen/arch/x86/mm.c                 |   14 +++++-----
 xen/arch/x86/mm/hap/hap.c         |    2 +-
 xen/arch/x86/mm/mem_event.c       |    2 +-
 xen/arch/x86/mm/paging.c          |    2 +-
 xen/arch/x86/mm/shadow/common.c   |    2 +-
 xen/arch/x86/oprofile/xenoprof.c  |    6 ++--
 xen/arch/x86/physdev.c            |    2 +-
 xen/arch/x86/platform_hypercall.c |    2 +-
 xen/arch/x86/sysctl.c             |    2 +-
 xen/arch/x86/traps.c              |    2 +-
 xen/arch/x86/x86_32/mm.c          |    2 +-
 xen/arch/x86/x86_32/traps.c       |    2 +-
 xen/arch/x86/x86_64/compat/mm.c   |   10 +++---
 xen/arch/x86/x86_64/domain.c      |    2 +-
 xen/arch/x86/x86_64/mm.c          |    2 +-
 xen/arch/x86/x86_64/traps.c       |    2 +-
 xen/common/compat/domain.c        |    2 +-
 xen/common/compat/grant_table.c   |    8 +++---
 xen/common/compat/memory.c        |    4 +-
 xen/common/domain.c               |    2 +-
 xen/common/domctl.c               |    2 +-
 xen/common/event_channel.c        |    2 +-
 xen/common/grant_table.c          |   36 +++++++++++++-------------
 xen/common/kernel.c               |    4 +-
 xen/common/kexec.c                |   16 +++++-----
 xen/common/memory.c               |    4 +-
 xen/common/multicall.c            |    2 +-
 xen/common/schedule.c             |    2 +-
 xen/common/sysctl.c               |    2 +-
 xen/common/xenoprof.c             |    8 +++---
 xen/drivers/acpi/pmstat.c         |    2 +-
 xen/drivers/char/console.c        |    6 ++--
 xen/drivers/passthrough/iommu.c   |    2 +-
 xen/include/asm-arm/hypercall.h   |    2 +-
 xen/include/asm-arm/mm.h          |    2 +-
 xen/include/asm-x86/hap.h         |    2 +-
 xen/include/asm-x86/hypercall.h   |   24 ++++++++--------
 xen/include/asm-x86/mem_event.h   |    2 +-
 xen/include/asm-x86/mm.h          |    8 +++---
 xen/include/asm-x86/paging.h      |    2 +-
 xen/include/asm-x86/processor.h   |    2 +-
 xen/include/asm-x86/shadow.h      |    2 +-
 xen/include/asm-x86/xenoprof.h    |    6 ++--
 xen/include/xen/acpi.h            |    4 +-
 xen/include/xen/hypercall.h       |   52 ++++++++++++++++++------------------
 xen/include/xen/iommu.h           |    2 +-
 xen/include/xen/tmem_xen.h        |    2 +-
 xen/include/xsm/xsm.h             |    4 +-
 xen/xsm/dummy.c                   |    2 +-
 xen/xsm/flask/flask_op.c          |    4 +-
 xen/xsm/flask/hooks.c             |    2 +-
 xen/xsm/xsm_core.c                |    2 +-
 65 files changed, 168 insertions(+), 168 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index ee58d68..07b50e2 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -515,7 +515,7 @@ void arch_dump_domain_info(struct domain *d)
 {
 }
 
-long arch_do_vcpu_op(int cmd, struct vcpu *v, XEN_GUEST_HANDLE(void) arg)
+long arch_do_vcpu_op(int cmd, struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     return -ENOSYS;
 }
diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
index 1a5f79f..cf16791 100644
--- a/xen/arch/arm/domctl.c
+++ b/xen/arch/arm/domctl.c
@@ -11,7 +11,7 @@
 #include <public/domctl.h>
 
 long arch_do_domctl(struct xen_domctl *domctl,
-                    XEN_GUEST_HANDLE(xen_domctl_t) u_domctl)
+                    XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
     return -ENOSYS;
 }
diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
index c11378d..40f519e 100644
--- a/xen/arch/arm/hvm.c
+++ b/xen/arch/arm/hvm.c
@@ -11,7 +11,7 @@
 
 #include <asm/hypercall.h>
 
-long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE(void) arg)
+long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
 
 {
     long rc = 0;
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 2400e1c..3e8b6cc 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -541,7 +541,7 @@ static int xenmem_add_to_physmap(struct domain *d,
     return xenmem_add_to_physmap_once(d, xatp);
 }
 
-long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
+long arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     int rc;
 
diff --git a/xen/arch/arm/physdev.c b/xen/arch/arm/physdev.c
index bcf4337..0801e8c 100644
--- a/xen/arch/arm/physdev.c
+++ b/xen/arch/arm/physdev.c
@@ -11,7 +11,7 @@
 #include <asm/hypercall.h>
 
 
-int do_physdev_op(int cmd, XEN_GUEST_HANDLE(void) arg)
+int do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     printk("%s %d cmd=%d: not implemented yet\n", __func__, __LINE__, cmd);
     return -ENOSYS;
diff --git a/xen/arch/arm/sysctl.c b/xen/arch/arm/sysctl.c
index e8e1c0d..a286abe 100644
--- a/xen/arch/arm/sysctl.c
+++ b/xen/arch/arm/sysctl.c
@@ -13,7 +13,7 @@
 #include <public/sysctl.h>
 
 long arch_do_sysctl(struct xen_sysctl *sysctl,
-                    XEN_GUEST_HANDLE(xen_sysctl_t) u_sysctl)
+                    XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
 {
     return -ENOSYS;
 }
diff --git a/xen/arch/x86/compat.c b/xen/arch/x86/compat.c
index a4fda06..2d05867 100644
--- a/xen/arch/x86/compat.c
+++ b/xen/arch/x86/compat.c
@@ -27,7 +27,7 @@ ret_t do_physdev_op_compat(XEN_GUEST_HANDLE(physdev_op_t) uop)
 #ifndef COMPAT
 
 /* Legacy hypercall (as of 0x00030202). */
-long do_event_channel_op_compat(XEN_GUEST_HANDLE(evtchn_op_t) uop)
+long do_event_channel_op_compat(XEN_GUEST_HANDLE_PARAM(evtchn_op_t) uop)
 {
     struct evtchn_op op;
 
diff --git a/xen/arch/x86/cpu/mcheck/mce.c b/xen/arch/x86/cpu/mcheck/mce.c
index a89df6d..0f122b3 100644
--- a/xen/arch/x86/cpu/mcheck/mce.c
+++ b/xen/arch/x86/cpu/mcheck/mce.c
@@ -1357,7 +1357,7 @@ CHECK_mcinfo_recovery;
 #endif
 
 /* Machine Check Architecture Hypercall */
-long do_mca(XEN_GUEST_HANDLE(xen_mc_t) u_xen_mc)
+long do_mca(XEN_GUEST_HANDLE_PARAM(xen_mc_t) u_xen_mc)
 {
     long ret = 0;
     struct xen_mc curop, *op = &curop;
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 5bba4b9..13ff776 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1138,7 +1138,7 @@ map_vcpu_info(struct vcpu *v, unsigned long gfn, unsigned offset)
 
 long
 arch_do_vcpu_op(
-    int cmd, struct vcpu *v, XEN_GUEST_HANDLE(void) arg)
+    int cmd, struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     long rc = 0;
 
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 135ea6e..663bfe4 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -48,7 +48,7 @@ static int gdbsx_guest_mem_io(
 
 long arch_do_domctl(
     struct xen_domctl *domctl,
-    XEN_GUEST_HANDLE(xen_domctl_t) u_domctl)
+    XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
     long ret = 0;
 
diff --git a/xen/arch/x86/efi/runtime.c b/xen/arch/x86/efi/runtime.c
index 1dbe2db..b2ff495 100644
--- a/xen/arch/x86/efi/runtime.c
+++ b/xen/arch/x86/efi/runtime.c
@@ -184,7 +184,7 @@ int efi_get_info(uint32_t idx, union xenpf_efi_info *info)
     return 0;
 }
 
-static long gwstrlen(XEN_GUEST_HANDLE(CHAR16) str)
+static long gwstrlen(XEN_GUEST_HANDLE_PARAM(CHAR16) str)
 {
     unsigned long len;
 
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 7f8a025c..e2bf831 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -3047,14 +3047,14 @@ static int grant_table_op_is_allowed(unsigned int cmd)
 }
 
 static long hvm_grant_table_op(
-    unsigned int cmd, XEN_GUEST_HANDLE(void) uop, unsigned int count)
+    unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) uop, unsigned int count)
 {
     if ( !grant_table_op_is_allowed(cmd) )
         return -ENOSYS; /* all other commands need auditing */
     return do_grant_table_op(cmd, uop, count);
 }
 
-static long hvm_memory_op(int cmd, XEN_GUEST_HANDLE(void) arg)
+static long hvm_memory_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     long rc;
 
@@ -3072,7 +3072,7 @@ static long hvm_memory_op(int cmd, XEN_GUEST_HANDLE(void) arg)
     return do_memory_op(cmd, arg);
 }
 
-static long hvm_physdev_op(int cmd, XEN_GUEST_HANDLE(void) arg)
+static long hvm_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     switch ( cmd )
     {
@@ -3088,7 +3088,7 @@ static long hvm_physdev_op(int cmd, XEN_GUEST_HANDLE(void) arg)
 }
 
 static long hvm_vcpu_op(
-    int cmd, int vcpuid, XEN_GUEST_HANDLE(void) arg)
+    int cmd, int vcpuid, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     long rc;
 
@@ -3137,7 +3137,7 @@ static hvm_hypercall_t *hvm_hypercall32_table[NR_hypercalls] = {
 #else /* defined(__x86_64__) */
 
 static long hvm_grant_table_op_compat32(unsigned int cmd,
-                                        XEN_GUEST_HANDLE(void) uop,
+                                        XEN_GUEST_HANDLE_PARAM(void) uop,
                                         unsigned int count)
 {
     if ( !grant_table_op_is_allowed(cmd) )
@@ -3145,7 +3145,7 @@ static long hvm_grant_table_op_compat32(unsigned int cmd,
     return compat_grant_table_op(cmd, uop, count);
 }
 
-static long hvm_memory_op_compat32(int cmd, XEN_GUEST_HANDLE(void) arg)
+static long hvm_memory_op_compat32(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     int rc;
 
@@ -3164,7 +3164,7 @@ static long hvm_memory_op_compat32(int cmd, XEN_GUEST_HANDLE(void) arg)
 }
 
 static long hvm_vcpu_op_compat32(
-    int cmd, int vcpuid, XEN_GUEST_HANDLE(void) arg)
+    int cmd, int vcpuid, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     long rc;
 
@@ -3188,7 +3188,7 @@ static long hvm_vcpu_op_compat32(
 }
 
 static long hvm_physdev_op_compat32(
-    int cmd, XEN_GUEST_HANDLE(void) arg)
+    int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     switch ( cmd )
     {
@@ -3360,7 +3360,7 @@ void hvm_hypercall_page_initialise(struct domain *d,
 }
 
 static int hvmop_set_pci_intx_level(
-    XEN_GUEST_HANDLE(xen_hvm_set_pci_intx_level_t) uop)
+    XEN_GUEST_HANDLE_PARAM(xen_hvm_set_pci_intx_level_t) uop)
 {
     struct xen_hvm_set_pci_intx_level op;
     struct domain *d;
@@ -3525,7 +3525,7 @@ static void hvm_s3_resume(struct domain *d)
 }
 
 static int hvmop_set_isa_irq_level(
-    XEN_GUEST_HANDLE(xen_hvm_set_isa_irq_level_t) uop)
+    XEN_GUEST_HANDLE_PARAM(xen_hvm_set_isa_irq_level_t) uop)
 {
     struct xen_hvm_set_isa_irq_level op;
     struct domain *d;
@@ -3569,7 +3569,7 @@ static int hvmop_set_isa_irq_level(
 }
 
 static int hvmop_set_pci_link_route(
-    XEN_GUEST_HANDLE(xen_hvm_set_pci_link_route_t) uop)
+    XEN_GUEST_HANDLE_PARAM(xen_hvm_set_pci_link_route_t) uop)
 {
     struct xen_hvm_set_pci_link_route op;
     struct domain *d;
@@ -3602,7 +3602,7 @@ static int hvmop_set_pci_link_route(
 }
 
 static int hvmop_inject_msi(
-    XEN_GUEST_HANDLE(xen_hvm_inject_msi_t) uop)
+    XEN_GUEST_HANDLE_PARAM(xen_hvm_inject_msi_t) uop)
 {
     struct xen_hvm_inject_msi op;
     struct domain *d;
@@ -3686,7 +3686,7 @@ static int hvm_replace_event_channel(struct vcpu *v, domid_t remote_domid,
     return 0;
 }
 
-long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE(void) arg)
+long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
 
 {
     struct domain *curr_d = current->domain;
diff --git a/xen/arch/x86/microcode.c b/xen/arch/x86/microcode.c
index bdda3f5..1477481 100644
--- a/xen/arch/x86/microcode.c
+++ b/xen/arch/x86/microcode.c
@@ -192,7 +192,7 @@ static long do_microcode_update(void *_info)
     return error;
 }
 
-int microcode_update(XEN_GUEST_HANDLE(const_void) buf, unsigned long len)
+int microcode_update(XEN_GUEST_HANDLE_PARAM(const_void) buf, unsigned long len)
 {
     int ret;
     struct microcode_info *info;
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index f5c704e..4d72700 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -2914,7 +2914,7 @@ static void put_pg_owner(struct domain *pg_owner)
 }
 
 static inline int vcpumask_to_pcpumask(
-    struct domain *d, XEN_GUEST_HANDLE(const_void) bmap, cpumask_t *pmask)
+    struct domain *d, XEN_GUEST_HANDLE_PARAM(const_void) bmap, cpumask_t *pmask)
 {
     unsigned int vcpu_id, vcpu_bias, offs;
     unsigned long vmask;
@@ -2974,9 +2974,9 @@ static inline void fixunmap_domain_page(const void *ptr)
 #endif
 
 int do_mmuext_op(
-    XEN_GUEST_HANDLE(mmuext_op_t) uops,
+    XEN_GUEST_HANDLE_PARAM(mmuext_op_t) uops,
     unsigned int count,
-    XEN_GUEST_HANDLE(uint) pdone,
+    XEN_GUEST_HANDLE_PARAM(uint) pdone,
     unsigned int foreigndom)
 {
     struct mmuext_op op;
@@ -3438,9 +3438,9 @@ int do_mmuext_op(
 }
 
 int do_mmu_update(
-    XEN_GUEST_HANDLE(mmu_update_t) ureqs,
+    XEN_GUEST_HANDLE_PARAM(mmu_update_t) ureqs,
     unsigned int count,
-    XEN_GUEST_HANDLE(uint) pdone,
+    XEN_GUEST_HANDLE_PARAM(uint) pdone,
     unsigned int foreigndom)
 {
     struct mmu_update req;
@@ -4387,7 +4387,7 @@ long set_gdt(struct vcpu *v,
 }
 
 
-long do_set_gdt(XEN_GUEST_HANDLE(ulong) frame_list, unsigned int entries)
+long do_set_gdt(XEN_GUEST_HANDLE_PARAM(ulong) frame_list, unsigned int entries)
 {
     int nr_pages = (entries + 511) / 512;
     unsigned long frames[16];
@@ -4661,7 +4661,7 @@ static int xenmem_add_to_physmap(struct domain *d,
     return xenmem_add_to_physmap_once(d, xatp);
 }
 
-long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
+long arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     int rc;
 
diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 13b4be2..67e48a3 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -690,7 +690,7 @@ void hap_teardown(struct domain *d)
 }
 
 int hap_domctl(struct domain *d, xen_domctl_shadow_op_t *sc,
-               XEN_GUEST_HANDLE(void) u_domctl)
+               XEN_GUEST_HANDLE_PARAM(void) u_domctl)
 {
     int rc, preempted = 0;
 
diff --git a/xen/arch/x86/mm/mem_event.c b/xen/arch/x86/mm/mem_event.c
index d728889..d3dac14 100644
--- a/xen/arch/x86/mm/mem_event.c
+++ b/xen/arch/x86/mm/mem_event.c
@@ -512,7 +512,7 @@ void mem_event_cleanup(struct domain *d)
 }
 
 int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
-                     XEN_GUEST_HANDLE(void) u_domctl)
+                     XEN_GUEST_HANDLE_PARAM(void) u_domctl)
 {
     int rc;
 
diff --git a/xen/arch/x86/mm/paging.c b/xen/arch/x86/mm/paging.c
index ca879f9..ea44e39 100644
--- a/xen/arch/x86/mm/paging.c
+++ b/xen/arch/x86/mm/paging.c
@@ -654,7 +654,7 @@ void paging_vcpu_init(struct vcpu *v)
 
 
 int paging_domctl(struct domain *d, xen_domctl_shadow_op_t *sc,
-                  XEN_GUEST_HANDLE(void) u_domctl)
+                  XEN_GUEST_HANDLE_PARAM(void) u_domctl)
 {
     int rc;
 
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index dc245be..bd47f03 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -3786,7 +3786,7 @@ out:
 
 int shadow_domctl(struct domain *d, 
                   xen_domctl_shadow_op_t *sc,
-                  XEN_GUEST_HANDLE(void) u_domctl)
+                  XEN_GUEST_HANDLE_PARAM(void) u_domctl)
 {
     int rc, preempted = 0;
 
diff --git a/xen/arch/x86/oprofile/xenoprof.c b/xen/arch/x86/oprofile/xenoprof.c
index 71f00ef..5d286a2 100644
--- a/xen/arch/x86/oprofile/xenoprof.c
+++ b/xen/arch/x86/oprofile/xenoprof.c
@@ -19,7 +19,7 @@
 
 #include "op_counter.h"
 
-int xenoprof_arch_counter(XEN_GUEST_HANDLE(void) arg)
+int xenoprof_arch_counter(XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct xenoprof_counter counter;
 
@@ -39,7 +39,7 @@ int xenoprof_arch_counter(XEN_GUEST_HANDLE(void) arg)
     return 0;
 }
 
-int xenoprof_arch_ibs_counter(XEN_GUEST_HANDLE(void) arg)
+int xenoprof_arch_ibs_counter(XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct xenoprof_ibs_counter ibs_counter;
 
@@ -57,7 +57,7 @@ int xenoprof_arch_ibs_counter(XEN_GUEST_HANDLE(void) arg)
 }
 
 #ifdef CONFIG_COMPAT
-int compat_oprof_arch_counter(XEN_GUEST_HANDLE(void) arg)
+int compat_oprof_arch_counter(XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct compat_oprof_counter counter;
 
diff --git a/xen/arch/x86/physdev.c b/xen/arch/x86/physdev.c
index b0458fd..b6474ef 100644
--- a/xen/arch/x86/physdev.c
+++ b/xen/arch/x86/physdev.c
@@ -255,7 +255,7 @@ int physdev_unmap_pirq(domid_t domid, int pirq)
 }
 #endif /* COMPAT */
 
-ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE(void) arg)
+ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     int irq;
     ret_t ret;
diff --git a/xen/arch/x86/platform_hypercall.c b/xen/arch/x86/platform_hypercall.c
index 88880b0..a32e0a2 100644
--- a/xen/arch/x86/platform_hypercall.c
+++ b/xen/arch/x86/platform_hypercall.c
@@ -60,7 +60,7 @@ long cpu_down_helper(void *data);
 long core_parking_helper(void *data);
 uint32_t get_cur_idle_nums(void);
 
-ret_t do_platform_op(XEN_GUEST_HANDLE(xen_platform_op_t) u_xenpf_op)
+ret_t do_platform_op(XEN_GUEST_HANDLE_PARAM(xen_platform_op_t) u_xenpf_op)
 {
     ret_t ret = 0;
     struct xen_platform_op curop, *op = &curop;
diff --git a/xen/arch/x86/sysctl.c b/xen/arch/x86/sysctl.c
index 379f071..b84dd34 100644
--- a/xen/arch/x86/sysctl.c
+++ b/xen/arch/x86/sysctl.c
@@ -58,7 +58,7 @@ long cpu_down_helper(void *data)
 }
 
 long arch_do_sysctl(
-    struct xen_sysctl *sysctl, XEN_GUEST_HANDLE(xen_sysctl_t) u_sysctl)
+    struct xen_sysctl *sysctl, XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
 {
     long ret = 0;
 
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 767be86..281d9e7 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -3700,7 +3700,7 @@ int send_guest_trap(struct domain *d, uint16_t vcpuid, unsigned int trap_nr)
 }
 
 
-long do_set_trap_table(XEN_GUEST_HANDLE(const_trap_info_t) traps)
+long do_set_trap_table(XEN_GUEST_HANDLE_PARAM(const_trap_info_t) traps)
 {
     struct trap_info cur;
     struct vcpu *curr = current;
diff --git a/xen/arch/x86/x86_32/mm.c b/xen/arch/x86/x86_32/mm.c
index 37efa3c..f6448fb 100644
--- a/xen/arch/x86/x86_32/mm.c
+++ b/xen/arch/x86/x86_32/mm.c
@@ -203,7 +203,7 @@ void __init subarch_init_memory(void)
     }
 }
 
-long subarch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
+long subarch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct xen_machphys_mfn_list xmml;
     unsigned long mfn, last_mfn;
diff --git a/xen/arch/x86/x86_32/traps.c b/xen/arch/x86/x86_32/traps.c
index 8f68808..0c7c860 100644
--- a/xen/arch/x86/x86_32/traps.c
+++ b/xen/arch/x86/x86_32/traps.c
@@ -492,7 +492,7 @@ static long unregister_guest_callback(struct callback_unregister *unreg)
 }
 
 
-long do_callback_op(int cmd, XEN_GUEST_HANDLE(const_void) arg)
+long do_callback_op(int cmd, XEN_GUEST_HANDLE_PARAM(const_void) arg)
 {
     long ret;
 
diff --git a/xen/arch/x86/x86_64/compat/mm.c b/xen/arch/x86/x86_64/compat/mm.c
index 5bcd2fd..1de93b7 100644
--- a/xen/arch/x86/x86_64/compat/mm.c
+++ b/xen/arch/x86/x86_64/compat/mm.c
@@ -5,7 +5,7 @@
 #include <asm/mem_event.h>
 #include <asm/mem_sharing.h>
 
-int compat_set_gdt(XEN_GUEST_HANDLE(uint) frame_list, unsigned int entries)
+int compat_set_gdt(XEN_GUEST_HANDLE_PARAM(uint) frame_list, unsigned int entries)
 {
     unsigned int i, nr_pages = (entries + 511) / 512;
     unsigned long frames[16];
@@ -44,7 +44,7 @@ int compat_update_descriptor(u32 pa_lo, u32 pa_hi, u32 desc_lo, u32 desc_hi)
                                 desc_lo | ((u64)desc_hi << 32));
 }
 
-int compat_arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
+int compat_arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct compat_machphys_mfn_list xmml;
     l2_pgentry_t l2e;
@@ -266,14 +266,14 @@ int compat_update_va_mapping_otherdomain(unsigned long va, u32 lo, u32 hi,
 
 DEFINE_XEN_GUEST_HANDLE(mmuext_op_compat_t);
 
-int compat_mmuext_op(XEN_GUEST_HANDLE(mmuext_op_compat_t) cmp_uops,
+int compat_mmuext_op(XEN_GUEST_HANDLE_PARAM(mmuext_op_compat_t) cmp_uops,
                      unsigned int count,
-                     XEN_GUEST_HANDLE(uint) pdone,
+                     XEN_GUEST_HANDLE_PARAM(uint) pdone,
                      unsigned int foreigndom)
 {
     unsigned int i, preempt_mask;
     int rc = 0;
-    XEN_GUEST_HANDLE(mmuext_op_t) nat_ops;
+    XEN_GUEST_HANDLE_PARAM(mmuext_op_t) nat_ops;
 
     preempt_mask = count & MMU_UPDATE_PREEMPTED;
     count ^= preempt_mask;
diff --git a/xen/arch/x86/x86_64/domain.c b/xen/arch/x86/x86_64/domain.c
index e746c89..144ca2d 100644
--- a/xen/arch/x86/x86_64/domain.c
+++ b/xen/arch/x86/x86_64/domain.c
@@ -23,7 +23,7 @@ CHECK_vcpu_get_physid;
 
 int
 arch_compat_vcpu_op(
-    int cmd, struct vcpu *v, XEN_GUEST_HANDLE(void) arg)
+    int cmd, struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     int rc = -ENOSYS;
 
diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index 635a499..17c46a1 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -1043,7 +1043,7 @@ void __init subarch_init_memory(void)
     }
 }
 
-long subarch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
+long subarch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct xen_machphys_mfn_list xmml;
     l3_pgentry_t l3e;
diff --git a/xen/arch/x86/x86_64/traps.c b/xen/arch/x86/x86_64/traps.c
index 806cf2e..6ead813 100644
--- a/xen/arch/x86/x86_64/traps.c
+++ b/xen/arch/x86/x86_64/traps.c
@@ -518,7 +518,7 @@ static long unregister_guest_callback(struct callback_unregister *unreg)
 }
 
 
-long do_callback_op(int cmd, XEN_GUEST_HANDLE(const_void) arg)
+long do_callback_op(int cmd, XEN_GUEST_HANDLE_PARAM(const_void) arg)
 {
     long ret;
 
diff --git a/xen/common/compat/domain.c b/xen/common/compat/domain.c
index 40a0287..e4c8ceb 100644
--- a/xen/common/compat/domain.c
+++ b/xen/common/compat/domain.c
@@ -15,7 +15,7 @@
 CHECK_vcpu_set_periodic_timer;
 #undef xen_vcpu_set_periodic_timer
 
-int compat_vcpu_op(int cmd, int vcpuid, XEN_GUEST_HANDLE(void) arg)
+int compat_vcpu_op(int cmd, int vcpuid, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct domain *d = current->domain;
     struct vcpu *v;
diff --git a/xen/common/compat/grant_table.c b/xen/common/compat/grant_table.c
index edd20c6..b524955 100644
--- a/xen/common/compat/grant_table.c
+++ b/xen/common/compat/grant_table.c
@@ -52,12 +52,12 @@ CHECK_gnttab_swap_grant_ref;
 #undef xen_gnttab_swap_grant_ref
 
 int compat_grant_table_op(unsigned int cmd,
-                          XEN_GUEST_HANDLE(void) cmp_uop,
+                          XEN_GUEST_HANDLE_PARAM(void) cmp_uop,
                           unsigned int count)
 {
     int rc = 0;
     unsigned int i;
-    XEN_GUEST_HANDLE(void) cnt_uop;
+    XEN_GUEST_HANDLE_PARAM(void) cnt_uop;
 
     set_xen_guest_handle(cnt_uop, NULL);
     switch ( cmd )
@@ -206,7 +206,7 @@ int compat_grant_table_op(unsigned int cmd,
             }
             if ( rc >= 0 )
             {
-                XEN_GUEST_HANDLE(gnttab_transfer_compat_t) xfer;
+                XEN_GUEST_HANDLE_PARAM(gnttab_transfer_compat_t) xfer;
 
                 xfer = guest_handle_cast(cmp_uop, gnttab_transfer_compat_t);
                 guest_handle_add_offset(xfer, i);
@@ -251,7 +251,7 @@ int compat_grant_table_op(unsigned int cmd,
             }
             if ( rc >= 0 )
             {
-                XEN_GUEST_HANDLE(gnttab_copy_compat_t) copy;
+                XEN_GUEST_HANDLE_PARAM(gnttab_copy_compat_t) copy;
 
                 copy = guest_handle_cast(cmp_uop, gnttab_copy_compat_t);
                 guest_handle_add_offset(copy, i);
diff --git a/xen/common/compat/memory.c b/xen/common/compat/memory.c
index e7257cc..996151c 100644
--- a/xen/common/compat/memory.c
+++ b/xen/common/compat/memory.c
@@ -13,7 +13,7 @@ CHECK_TYPE(domid);
 #undef compat_domid_t
 #undef xen_domid_t
 
-int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE(void) compat)
+int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) compat)
 {
     int rc, split, op = cmd & MEMOP_CMD_MASK;
     unsigned int start_extent = cmd >> MEMOP_EXTENT_SHIFT;
@@ -22,7 +22,7 @@ int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE(void) compat)
     {
         unsigned int i, end_extent = 0;
         union {
-            XEN_GUEST_HANDLE(void) hnd;
+            XEN_GUEST_HANDLE_PARAM(void) hnd;
             struct xen_memory_reservation *rsrv;
             struct xen_memory_exchange *xchg;
             struct xen_remove_from_physmap *xrfp;
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 4c5d241..d7cd135 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -804,7 +804,7 @@ void vcpu_reset(struct vcpu *v)
 }
 
 
-long do_vcpu_op(int cmd, int vcpuid, XEN_GUEST_HANDLE(void) arg)
+long do_vcpu_op(int cmd, int vcpuid, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct domain *d = current->domain;
     struct vcpu *v;
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index 7ca6b08..527c5ad 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -238,7 +238,7 @@ void domctl_lock_release(void)
     spin_unlock(&current->domain->hypercall_deadlock_mutex);
 }
 
-long do_domctl(XEN_GUEST_HANDLE(xen_domctl_t) u_domctl)
+long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
     long ret = 0;
     struct xen_domctl curop, *op = &curop;
diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index 53777f8..a80a0d1 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -970,7 +970,7 @@ out:
 }
 
 
-long do_event_channel_op(int cmd, XEN_GUEST_HANDLE(void) arg)
+long do_event_channel_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     long rc;
 
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 9961e83..d780dc6 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -771,7 +771,7 @@ __gnttab_map_grant_ref(
 
 static long
 gnttab_map_grant_ref(
-    XEN_GUEST_HANDLE(gnttab_map_grant_ref_t) uop, unsigned int count)
+    XEN_GUEST_HANDLE_PARAM(gnttab_map_grant_ref_t) uop, unsigned int count)
 {
     int i;
     struct gnttab_map_grant_ref op;
@@ -1040,7 +1040,7 @@ __gnttab_unmap_grant_ref(
 
 static long
 gnttab_unmap_grant_ref(
-    XEN_GUEST_HANDLE(gnttab_unmap_grant_ref_t) uop, unsigned int count)
+    XEN_GUEST_HANDLE_PARAM(gnttab_unmap_grant_ref_t) uop, unsigned int count)
 {
     int i, c, partial_done, done = 0;
     struct gnttab_unmap_grant_ref op;
@@ -1102,7 +1102,7 @@ __gnttab_unmap_and_replace(
 
 static long
 gnttab_unmap_and_replace(
-    XEN_GUEST_HANDLE(gnttab_unmap_and_replace_t) uop, unsigned int count)
+    XEN_GUEST_HANDLE_PARAM(gnttab_unmap_and_replace_t) uop, unsigned int count)
 {
     int i, c, partial_done, done = 0;
     struct gnttab_unmap_and_replace op;
@@ -1254,7 +1254,7 @@ active_alloc_failed:
 
 static long 
 gnttab_setup_table(
-    XEN_GUEST_HANDLE(gnttab_setup_table_t) uop, unsigned int count)
+    XEN_GUEST_HANDLE_PARAM(gnttab_setup_table_t) uop, unsigned int count)
 {
     struct gnttab_setup_table op;
     struct domain *d;
@@ -1348,7 +1348,7 @@ gnttab_setup_table(
 
 static long 
 gnttab_query_size(
-    XEN_GUEST_HANDLE(gnttab_query_size_t) uop, unsigned int count)
+    XEN_GUEST_HANDLE_PARAM(gnttab_query_size_t) uop, unsigned int count)
 {
     struct gnttab_query_size op;
     struct domain *d;
@@ -1485,7 +1485,7 @@ gnttab_prepare_for_transfer(
 
 static long
 gnttab_transfer(
-    XEN_GUEST_HANDLE(gnttab_transfer_t) uop, unsigned int count)
+    XEN_GUEST_HANDLE_PARAM(gnttab_transfer_t) uop, unsigned int count)
 {
     struct domain *d = current->domain;
     struct domain *e;
@@ -2082,7 +2082,7 @@ __gnttab_copy(
 
 static long
 gnttab_copy(
-    XEN_GUEST_HANDLE(gnttab_copy_t) uop, unsigned int count)
+    XEN_GUEST_HANDLE_PARAM(gnttab_copy_t) uop, unsigned int count)
 {
     int i;
     struct gnttab_copy op;
@@ -2101,7 +2101,7 @@ gnttab_copy(
 }
 
 static long
-gnttab_set_version(XEN_GUEST_HANDLE(gnttab_set_version_t uop))
+gnttab_set_version(XEN_GUEST_HANDLE_PARAM(gnttab_set_version_t) uop)
 {
     gnttab_set_version_t op;
     struct domain *d = current->domain;
@@ -2220,7 +2220,7 @@ out:
 }
 
 static long
-gnttab_get_status_frames(XEN_GUEST_HANDLE(gnttab_get_status_frames_t) uop,
+gnttab_get_status_frames(XEN_GUEST_HANDLE_PARAM(gnttab_get_status_frames_t) uop,
                          int count)
 {
     gnttab_get_status_frames_t op;
@@ -2289,7 +2289,7 @@ out1:
 }
 
 static long
-gnttab_get_version(XEN_GUEST_HANDLE(gnttab_get_version_t uop))
+gnttab_get_version(XEN_GUEST_HANDLE_PARAM(gnttab_get_version_t) uop)
 {
     gnttab_get_version_t op;
     struct domain *d;
@@ -2368,7 +2368,7 @@ out:
 }
 
 static long
-gnttab_swap_grant_ref(XEN_GUEST_HANDLE(gnttab_swap_grant_ref_t uop),
+gnttab_swap_grant_ref(XEN_GUEST_HANDLE_PARAM(gnttab_swap_grant_ref_t) uop,
                       unsigned int count)
 {
     int i;
@@ -2389,7 +2389,7 @@ gnttab_swap_grant_ref(XEN_GUEST_HANDLE(gnttab_swap_grant_ref_t uop),
 
 long
 do_grant_table_op(
-    unsigned int cmd, XEN_GUEST_HANDLE(void) uop, unsigned int count)
+    unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) uop, unsigned int count)
 {
     long rc;
     
@@ -2401,7 +2401,7 @@ do_grant_table_op(
     {
     case GNTTABOP_map_grant_ref:
     {
-        XEN_GUEST_HANDLE(gnttab_map_grant_ref_t) map =
+        XEN_GUEST_HANDLE_PARAM(gnttab_map_grant_ref_t) map =
             guest_handle_cast(uop, gnttab_map_grant_ref_t);
         if ( unlikely(!guest_handle_okay(map, count)) )
             goto out;
@@ -2415,7 +2415,7 @@ do_grant_table_op(
     }
     case GNTTABOP_unmap_grant_ref:
     {
-        XEN_GUEST_HANDLE(gnttab_unmap_grant_ref_t) unmap =
+        XEN_GUEST_HANDLE_PARAM(gnttab_unmap_grant_ref_t) unmap =
             guest_handle_cast(uop, gnttab_unmap_grant_ref_t);
         if ( unlikely(!guest_handle_okay(unmap, count)) )
             goto out;
@@ -2429,7 +2429,7 @@ do_grant_table_op(
     }
     case GNTTABOP_unmap_and_replace:
     {
-        XEN_GUEST_HANDLE(gnttab_unmap_and_replace_t) unmap =
+        XEN_GUEST_HANDLE_PARAM(gnttab_unmap_and_replace_t) unmap =
             guest_handle_cast(uop, gnttab_unmap_and_replace_t);
         if ( unlikely(!guest_handle_okay(unmap, count)) )
             goto out;
@@ -2453,7 +2453,7 @@ do_grant_table_op(
     }
     case GNTTABOP_transfer:
     {
-        XEN_GUEST_HANDLE(gnttab_transfer_t) transfer =
+        XEN_GUEST_HANDLE_PARAM(gnttab_transfer_t) transfer =
             guest_handle_cast(uop, gnttab_transfer_t);
         if ( unlikely(!guest_handle_okay(transfer, count)) )
             goto out;
@@ -2467,7 +2467,7 @@ do_grant_table_op(
     }
     case GNTTABOP_copy:
     {
-        XEN_GUEST_HANDLE(gnttab_copy_t) copy =
+        XEN_GUEST_HANDLE_PARAM(gnttab_copy_t) copy =
             guest_handle_cast(uop, gnttab_copy_t);
         if ( unlikely(!guest_handle_okay(copy, count)) )
             goto out;
@@ -2504,7 +2504,7 @@ do_grant_table_op(
     }
     case GNTTABOP_swap_grant_ref:
     {
-        XEN_GUEST_HANDLE(gnttab_swap_grant_ref_t) swap =
+        XEN_GUEST_HANDLE_PARAM(gnttab_swap_grant_ref_t) swap =
             guest_handle_cast(uop, gnttab_swap_grant_ref_t);
         if ( unlikely(!guest_handle_okay(swap, count)) )
             goto out;
diff --git a/xen/common/kernel.c b/xen/common/kernel.c
index c915bbc..55caff6 100644
--- a/xen/common/kernel.c
+++ b/xen/common/kernel.c
@@ -204,7 +204,7 @@ void __init do_initcalls(void)
  * Simple hypercalls.
  */
 
-DO(xen_version)(int cmd, XEN_GUEST_HANDLE(void) arg)
+DO(xen_version)(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     switch ( cmd )
     {
@@ -332,7 +332,7 @@ DO(xen_version)(int cmd, XEN_GUEST_HANDLE(void) arg)
     return -ENOSYS;
 }
 
-DO(nmi_op)(unsigned int cmd, XEN_GUEST_HANDLE(void) arg)
+DO(nmi_op)(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct xennmi_callback cb;
     long rc = 0;
diff --git a/xen/common/kexec.c b/xen/common/kexec.c
index 09a5624..03389eb 100644
--- a/xen/common/kexec.c
+++ b/xen/common/kexec.c
@@ -613,7 +613,7 @@ static int kexec_get_range_internal(xen_kexec_range_t *range)
     return ret;
 }
 
-static int kexec_get_range(XEN_GUEST_HANDLE(void) uarg)
+static int kexec_get_range(XEN_GUEST_HANDLE_PARAM(void) uarg)
 {
     xen_kexec_range_t range;
     int ret = -EINVAL;
@@ -629,7 +629,7 @@ static int kexec_get_range(XEN_GUEST_HANDLE(void) uarg)
     return ret;
 }
 
-static int kexec_get_range_compat(XEN_GUEST_HANDLE(void) uarg)
+static int kexec_get_range_compat(XEN_GUEST_HANDLE_PARAM(void) uarg)
 {
 #ifdef CONFIG_COMPAT
     xen_kexec_range_t range;
@@ -777,7 +777,7 @@ static int kexec_load_unload_internal(unsigned long op, xen_kexec_load_t *load)
     return ret;
 }
 
-static int kexec_load_unload(unsigned long op, XEN_GUEST_HANDLE(void) uarg)
+static int kexec_load_unload(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) uarg)
 {
     xen_kexec_load_t load;
 
@@ -788,7 +788,7 @@ static int kexec_load_unload(unsigned long op, XEN_GUEST_HANDLE(void) uarg)
 }
 
 static int kexec_load_unload_compat(unsigned long op,
-                                    XEN_GUEST_HANDLE(void) uarg)
+                                    XEN_GUEST_HANDLE_PARAM(void) uarg)
 {
 #ifdef CONFIG_COMPAT
     compat_kexec_load_t compat_load;
@@ -813,7 +813,7 @@ static int kexec_load_unload_compat(unsigned long op,
 #endif /* CONFIG_COMPAT */
 }
 
-static int kexec_exec(XEN_GUEST_HANDLE(void) uarg)
+static int kexec_exec(XEN_GUEST_HANDLE_PARAM(void) uarg)
 {
     xen_kexec_exec_t exec;
     xen_kexec_image_t *image;
@@ -845,7 +845,7 @@ static int kexec_exec(XEN_GUEST_HANDLE(void) uarg)
     return -EINVAL; /* never reached */
 }
 
-int do_kexec_op_internal(unsigned long op, XEN_GUEST_HANDLE(void) uarg,
+int do_kexec_op_internal(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) uarg,
                            int compat)
 {
     unsigned long flags;
@@ -886,13 +886,13 @@ int do_kexec_op_internal(unsigned long op, XEN_GUEST_HANDLE(void) uarg,
     return ret;
 }
 
-long do_kexec_op(unsigned long op, XEN_GUEST_HANDLE(void) uarg)
+long do_kexec_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) uarg)
 {
     return do_kexec_op_internal(op, uarg, 0);
 }
 
 #ifdef CONFIG_COMPAT
-int compat_kexec_op(unsigned long op, XEN_GUEST_HANDLE(void) uarg)
+int compat_kexec_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) uarg)
 {
     return do_kexec_op_internal(op, uarg, 1);
 }
diff --git a/xen/common/memory.c b/xen/common/memory.c
index 7e58cc4..a683954 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -277,7 +277,7 @@ static void decrease_reservation(struct memop_args *a)
     a->nr_done = i;
 }
 
-static long memory_exchange(XEN_GUEST_HANDLE(xen_memory_exchange_t) arg)
+static long memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t) arg)
 {
     struct xen_memory_exchange exch;
     PAGE_LIST_HEAD(in_chunk_list);
@@ -530,7 +530,7 @@ static long memory_exchange(XEN_GUEST_HANDLE(xen_memory_exchange_t) arg)
     return rc;
 }
 
-long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE(void) arg)
+long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct domain *d;
     int rc, op;
diff --git a/xen/common/multicall.c b/xen/common/multicall.c
index 6c1a9d7..5de5f8d 100644
--- a/xen/common/multicall.c
+++ b/xen/common/multicall.c
@@ -21,7 +21,7 @@ typedef long ret_t;
 
 ret_t
 do_multicall(
-    XEN_GUEST_HANDLE(multicall_entry_t) call_list, unsigned int nr_calls)
+    XEN_GUEST_HANDLE_PARAM(multicall_entry_t) call_list, unsigned int nr_calls)
 {
     struct mc_state *mcs = &current->mc_state;
     unsigned int     i;
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index 0854f55..c26eac4 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -836,7 +836,7 @@ typedef long ret_t;
 
 #endif /* !COMPAT */
 
-ret_t do_sched_op(int cmd, XEN_GUEST_HANDLE(void) arg)
+ret_t do_sched_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     ret_t ret = 0;
 
diff --git a/xen/common/sysctl.c b/xen/common/sysctl.c
index ea68278..47142f4 100644
--- a/xen/common/sysctl.c
+++ b/xen/common/sysctl.c
@@ -27,7 +27,7 @@
 #include <xsm/xsm.h>
 #include <xen/pmstat.h>
 
-long do_sysctl(XEN_GUEST_HANDLE(xen_sysctl_t) u_sysctl)
+long do_sysctl(XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
 {
     long ret = 0;
     struct xen_sysctl curop, *op = &curop;
diff --git a/xen/common/xenoprof.c b/xen/common/xenoprof.c
index e571fea..c001b38 100644
--- a/xen/common/xenoprof.c
+++ b/xen/common/xenoprof.c
@@ -404,7 +404,7 @@ static int add_active_list(domid_t domid)
     return 0;
 }
 
-static int add_passive_list(XEN_GUEST_HANDLE(void) arg)
+static int add_passive_list(XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct xenoprof_passive passive;
     struct domain *d;
@@ -585,7 +585,7 @@ void xenoprof_log_event(struct vcpu *vcpu, const struct cpu_user_regs *regs,
 
 
 
-static int xenoprof_op_init(XEN_GUEST_HANDLE(void) arg)
+static int xenoprof_op_init(XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct domain *d = current->domain;
     struct xenoprof_init xenoprof_init;
@@ -609,7 +609,7 @@ static int xenoprof_op_init(XEN_GUEST_HANDLE(void) arg)
 
 #endif /* !COMPAT */
 
-static int xenoprof_op_get_buffer(XEN_GUEST_HANDLE(void) arg)
+static int xenoprof_op_get_buffer(XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct xenoprof_get_buffer xenoprof_get_buffer;
     struct domain *d = current->domain;
@@ -660,7 +660,7 @@ static int xenoprof_op_get_buffer(XEN_GUEST_HANDLE(void) arg)
                       || (op == XENOPROF_disable_virq)  \
                       || (op == XENOPROF_get_buffer))
  
-int do_xenoprof_op(int op, XEN_GUEST_HANDLE(void) arg)
+int do_xenoprof_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     int ret = 0;
     
diff --git a/xen/drivers/acpi/pmstat.c b/xen/drivers/acpi/pmstat.c
index 698711e..f8d62f2 100644
--- a/xen/drivers/acpi/pmstat.c
+++ b/xen/drivers/acpi/pmstat.c
@@ -515,7 +515,7 @@ int do_pm_op(struct xen_sysctl_pm_op *op)
     return ret;
 }
 
-int acpi_set_pdc_bits(u32 acpi_id, XEN_GUEST_HANDLE(uint32) pdc)
+int acpi_set_pdc_bits(u32 acpi_id, XEN_GUEST_HANDLE_PARAM(uint32) pdc)
 {
     u32 bits[3];
     int ret;
diff --git a/xen/drivers/char/console.c b/xen/drivers/char/console.c
index e10bed5..b0f2334 100644
--- a/xen/drivers/char/console.c
+++ b/xen/drivers/char/console.c
@@ -182,7 +182,7 @@ static void putchar_console_ring(int c)
 
 long read_console_ring(struct xen_sysctl_readconsole *op)
 {
-    XEN_GUEST_HANDLE(char) str;
+    XEN_GUEST_HANDLE_PARAM(char) str;
     uint32_t idx, len, max, sofar, c;
 
     str   = guest_handle_cast(op->buffer, char),
@@ -320,7 +320,7 @@ static void notify_dom0_con_ring(unsigned long unused)
 static DECLARE_SOFTIRQ_TASKLET(notify_dom0_con_ring_tasklet,
                                notify_dom0_con_ring, 0);
 
-static long guest_console_write(XEN_GUEST_HANDLE(char) buffer, int count)
+static long guest_console_write(XEN_GUEST_HANDLE_PARAM(char) buffer, int count)
 {
     char kbuf[128], *kptr;
     int kcount;
@@ -358,7 +358,7 @@ static long guest_console_write(XEN_GUEST_HANDLE(char) buffer, int count)
     return 0;
 }
 
-long do_console_io(int cmd, int count, XEN_GUEST_HANDLE(char) buffer)
+long do_console_io(int cmd, int count, XEN_GUEST_HANDLE_PARAM(char) buffer)
 {
     long rc;
     unsigned int idx, len;
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 64f5fd1..396461f 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -518,7 +518,7 @@ void iommu_crash_shutdown(void)
 
 int iommu_do_domctl(
     struct xen_domctl *domctl,
-    XEN_GUEST_HANDLE(xen_domctl_t) u_domctl)
+    XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
     struct domain *d;
     u16 seg;
diff --git a/xen/include/asm-arm/hypercall.h b/xen/include/asm-arm/hypercall.h
index 454f02e..090e620 100644
--- a/xen/include/asm-arm/hypercall.h
+++ b/xen/include/asm-arm/hypercall.h
@@ -2,7 +2,7 @@
 #define __ASM_ARM_HYPERCALL_H__
 
 #include <public/domctl.h> /* for arch_do_domctl */
-int do_physdev_op(int cmd, XEN_GUEST_HANDLE(void) arg);
+int do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg);
 
 #endif /* __ASM_ARM_HYPERCALL_H__ */
 /*
diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
index b37bd35..8bf45ba 100644
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -267,7 +267,7 @@ static inline int relinquish_shared_pages(struct domain *d)
 
 
 /* Arch-specific portion of memory_op hypercall. */
-long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg);
+long arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg);
 
 int steal_page(
     struct domain *d, struct page_info *page, unsigned int memflags);
diff --git a/xen/include/asm-x86/hap.h b/xen/include/asm-x86/hap.h
index a2532a4..916a35b 100644
--- a/xen/include/asm-x86/hap.h
+++ b/xen/include/asm-x86/hap.h
@@ -51,7 +51,7 @@ hap_unmap_domain_page(void *p)
 /************************************************/
 void  hap_domain_init(struct domain *d);
 int   hap_domctl(struct domain *d, xen_domctl_shadow_op_t *sc,
-                 XEN_GUEST_HANDLE(void) u_domctl);
+                 XEN_GUEST_HANDLE_PARAM(void) u_domctl);
 int   hap_enable(struct domain *d, u32 mode);
 void  hap_final_teardown(struct domain *d);
 void  hap_teardown(struct domain *d);
diff --git a/xen/include/asm-x86/hypercall.h b/xen/include/asm-x86/hypercall.h
index 9e136c3..55b5ca2 100644
--- a/xen/include/asm-x86/hypercall.h
+++ b/xen/include/asm-x86/hypercall.h
@@ -18,22 +18,22 @@
 
 extern long
 do_event_channel_op_compat(
-    XEN_GUEST_HANDLE(evtchn_op_t) uop);
+    XEN_GUEST_HANDLE_PARAM(evtchn_op_t) uop);
 
 extern long
 do_set_trap_table(
-    XEN_GUEST_HANDLE(const_trap_info_t) traps);
+    XEN_GUEST_HANDLE_PARAM(const_trap_info_t) traps);
 
 extern int
 do_mmu_update(
-    XEN_GUEST_HANDLE(mmu_update_t) ureqs,
+    XEN_GUEST_HANDLE_PARAM(mmu_update_t) ureqs,
     unsigned int count,
-    XEN_GUEST_HANDLE(uint) pdone,
+    XEN_GUEST_HANDLE_PARAM(uint) pdone,
     unsigned int foreigndom);
 
 extern long
 do_set_gdt(
-    XEN_GUEST_HANDLE(ulong) frame_list,
+    XEN_GUEST_HANDLE_PARAM(ulong) frame_list,
     unsigned int entries);
 
 extern long
@@ -60,7 +60,7 @@ do_update_descriptor(
     u64 desc);
 
 extern long
-do_mca(XEN_GUEST_HANDLE(xen_mc_t) u_xen_mc);
+do_mca(XEN_GUEST_HANDLE_PARAM(xen_mc_t) u_xen_mc);
 
 extern int
 do_update_va_mapping(
@@ -70,7 +70,7 @@ do_update_va_mapping(
 
 extern long
 do_physdev_op(
-    int cmd, XEN_GUEST_HANDLE(void) arg);
+    int cmd, XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern int
 do_update_va_mapping_otherdomain(
@@ -81,9 +81,9 @@ do_update_va_mapping_otherdomain(
 
 extern int
 do_mmuext_op(
-    XEN_GUEST_HANDLE(mmuext_op_t) uops,
+    XEN_GUEST_HANDLE_PARAM(mmuext_op_t) uops,
     unsigned int count,
-    XEN_GUEST_HANDLE(uint) pdone,
+    XEN_GUEST_HANDLE_PARAM(uint) pdone,
     unsigned int foreigndom);
 
 extern unsigned long
@@ -92,7 +92,7 @@ do_iret(
 
 extern int
 do_kexec(
-    unsigned long op, unsigned arg1, XEN_GUEST_HANDLE(void) uarg);
+    unsigned long op, unsigned arg1, XEN_GUEST_HANDLE_PARAM(void) uarg);
 
 #ifdef __x86_64__
 
@@ -110,11 +110,11 @@ do_set_segment_base(
 extern int
 compat_physdev_op(
     int cmd,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern int
 arch_compat_vcpu_op(
-    int cmd, struct vcpu *v, XEN_GUEST_HANDLE(void) arg);
+    int cmd, struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg);
 
 #else
 
diff --git a/xen/include/asm-x86/mem_event.h b/xen/include/asm-x86/mem_event.h
index 23d71c1..e17f36b 100644
--- a/xen/include/asm-x86/mem_event.h
+++ b/xen/include/asm-x86/mem_event.h
@@ -65,7 +65,7 @@ int mem_event_get_response(struct domain *d, struct mem_event_domain *med,
 struct domain *get_mem_event_op_target(uint32_t domain, int *rc);
 int do_mem_event_op(int op, uint32_t domain, void *arg);
 int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
-                     XEN_GUEST_HANDLE(void) u_domctl);
+                     XEN_GUEST_HANDLE_PARAM(void) u_domctl);
 
 #endif /* __MEM_EVENT_H__ */
 
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index 4cba276..6373b3b 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -604,10 +604,10 @@ void *do_page_walk(struct vcpu *v, unsigned long addr);
 int __sync_local_execstate(void);
 
 /* Arch-specific portion of memory_op hypercall. */
-long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg);
-long subarch_memory_op(int op, XEN_GUEST_HANDLE(void) arg);
-int compat_arch_memory_op(int op, XEN_GUEST_HANDLE(void));
-int compat_subarch_memory_op(int op, XEN_GUEST_HANDLE(void));
+long arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg);
+long subarch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg);
+int compat_arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void));
+int compat_subarch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void));
 
 int steal_page(
     struct domain *d, struct page_info *page, unsigned int memflags);
diff --git a/xen/include/asm-x86/paging.h b/xen/include/asm-x86/paging.h
index c432a97..1cd0e3f 100644
--- a/xen/include/asm-x86/paging.h
+++ b/xen/include/asm-x86/paging.h
@@ -215,7 +215,7 @@ int paging_domain_init(struct domain *d, unsigned int domcr_flags);
  * and disable ephemeral shadow modes (test mode and log-dirty mode) and
  * manipulate the log-dirty bitmap. */
 int paging_domctl(struct domain *d, xen_domctl_shadow_op_t *sc,
-                  XEN_GUEST_HANDLE(void) u_domctl);
+                  XEN_GUEST_HANDLE_PARAM(void) u_domctl);
 
 /* Call when destroying a domain */
 void paging_teardown(struct domain *d);
diff --git a/xen/include/asm-x86/processor.h b/xen/include/asm-x86/processor.h
index 7164a50..efdbddd 100644
--- a/xen/include/asm-x86/processor.h
+++ b/xen/include/asm-x86/processor.h
@@ -598,7 +598,7 @@ int rdmsr_hypervisor_regs(uint32_t idx, uint64_t *val);
 int wrmsr_hypervisor_regs(uint32_t idx, uint64_t val);
 
 void microcode_set_module(unsigned int);
-int microcode_update(XEN_GUEST_HANDLE(const_void), unsigned long len);
+int microcode_update(XEN_GUEST_HANDLE_PARAM(const_void), unsigned long len);
 int microcode_resume_cpu(int cpu);
 
 unsigned long *get_x86_gpr(struct cpu_user_regs *regs, unsigned int modrm_reg);
diff --git a/xen/include/asm-x86/shadow.h b/xen/include/asm-x86/shadow.h
index 88a8cd2..2eb6efc 100644
--- a/xen/include/asm-x86/shadow.h
+++ b/xen/include/asm-x86/shadow.h
@@ -73,7 +73,7 @@ int shadow_track_dirty_vram(struct domain *d,
  * manipulate the log-dirty bitmap. */
 int shadow_domctl(struct domain *d, 
                   xen_domctl_shadow_op_t *sc,
-                  XEN_GUEST_HANDLE(void) u_domctl);
+                  XEN_GUEST_HANDLE_PARAM(void) u_domctl);
 
 /* Call when destroying a domain */
 void shadow_teardown(struct domain *d);
diff --git a/xen/include/asm-x86/xenoprof.h b/xen/include/asm-x86/xenoprof.h
index c03f8c8..3f5ea15 100644
--- a/xen/include/asm-x86/xenoprof.h
+++ b/xen/include/asm-x86/xenoprof.h
@@ -40,9 +40,9 @@ int xenoprof_arch_init(int *num_events, char *cpu_type);
 #define xenoprof_arch_disable_virq()            nmi_disable_virq()
 #define xenoprof_arch_release_counters()        nmi_release_counters()
 
-int xenoprof_arch_counter(XEN_GUEST_HANDLE(void) arg);
-int compat_oprof_arch_counter(XEN_GUEST_HANDLE(void) arg);
-int xenoprof_arch_ibs_counter(XEN_GUEST_HANDLE(void) arg);
+int xenoprof_arch_counter(XEN_GUEST_HANDLE_PARAM(void) arg);
+int compat_oprof_arch_counter(XEN_GUEST_HANDLE_PARAM(void) arg);
+int xenoprof_arch_ibs_counter(XEN_GUEST_HANDLE_PARAM(void) arg);
 
 struct vcpu;
 struct cpu_user_regs;
diff --git a/xen/include/xen/acpi.h b/xen/include/xen/acpi.h
index d7e2f94..8f3cdca 100644
--- a/xen/include/xen/acpi.h
+++ b/xen/include/xen/acpi.h
@@ -145,8 +145,8 @@ static inline unsigned int acpi_get_cstate_limit(void) { return 0; }
 static inline void acpi_set_cstate_limit(unsigned int new_limit) { return; }
 #endif
 
-#ifdef XEN_GUEST_HANDLE
-int acpi_set_pdc_bits(u32 acpi_id, XEN_GUEST_HANDLE(uint32));
+#ifdef XEN_GUEST_HANDLE_PARAM
+int acpi_set_pdc_bits(u32 acpi_id, XEN_GUEST_HANDLE_PARAM(uint32));
 #endif
 int arch_acpi_set_pdc_bits(u32 acpi_id, u32 *, u32 mask);
 
diff --git a/xen/include/xen/hypercall.h b/xen/include/xen/hypercall.h
index 73b1598..e335037 100644
--- a/xen/include/xen/hypercall.h
+++ b/xen/include/xen/hypercall.h
@@ -29,29 +29,29 @@ do_sched_op_compat(
 extern long
 do_sched_op(
     int cmd,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern long
 do_domctl(
-    XEN_GUEST_HANDLE(xen_domctl_t) u_domctl);
+    XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl);
 
 extern long
 arch_do_domctl(
     struct xen_domctl *domctl,
-    XEN_GUEST_HANDLE(xen_domctl_t) u_domctl);
+    XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl);
 
 extern long
 do_sysctl(
-    XEN_GUEST_HANDLE(xen_sysctl_t) u_sysctl);
+    XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl);
 
 extern long
 arch_do_sysctl(
     struct xen_sysctl *sysctl,
-    XEN_GUEST_HANDLE(xen_sysctl_t) u_sysctl);
+    XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl);
 
 extern long
 do_platform_op(
-    XEN_GUEST_HANDLE(xen_platform_op_t) u_xenpf_op);
+    XEN_GUEST_HANDLE_PARAM(xen_platform_op_t) u_xenpf_op);
 
 /*
  * To allow safe resume of do_memory_op() after preemption, we need to know
@@ -64,11 +64,11 @@ do_platform_op(
 extern long
 do_memory_op(
     unsigned long cmd,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern long
 do_multicall(
-    XEN_GUEST_HANDLE(multicall_entry_t) call_list,
+    XEN_GUEST_HANDLE_PARAM(multicall_entry_t) call_list,
     unsigned int nr_calls);
 
 extern long
@@ -77,23 +77,23 @@ do_set_timer_op(
 
 extern long
 do_event_channel_op(
-    int cmd, XEN_GUEST_HANDLE(void) arg);
+    int cmd, XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern long
 do_xen_version(
     int cmd,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern long
 do_console_io(
     int cmd,
     int count,
-    XEN_GUEST_HANDLE(char) buffer);
+    XEN_GUEST_HANDLE_PARAM(char) buffer);
 
 extern long
 do_grant_table_op(
     unsigned int cmd,
-    XEN_GUEST_HANDLE(void) uop,
+    XEN_GUEST_HANDLE_PARAM(void) uop,
     unsigned int count);
 
 extern long
@@ -105,72 +105,72 @@ extern long
 do_vcpu_op(
     int cmd,
     int vcpuid,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 struct vcpu;
 extern long
 arch_do_vcpu_op(int cmd,
     struct vcpu *v,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern long
 do_nmi_op(
     unsigned int cmd,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern long
 do_hvm_op(
     unsigned long op,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern long
 do_kexec_op(
     unsigned long op,
     int arg1,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern long
 do_xsm_op(
-    XEN_GUEST_HANDLE(xsm_op_t) u_xsm_op);
+    XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_xsm_op);
 
 extern long
 do_tmem_op(
-    XEN_GUEST_HANDLE(tmem_op_t) uops);
+    XEN_GUEST_HANDLE_PARAM(tmem_op_t) uops);
 
 extern int
-do_xenoprof_op(int op, XEN_GUEST_HANDLE(void) arg);
+do_xenoprof_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg);
 
 #ifdef CONFIG_COMPAT
 
 extern int
 compat_memory_op(
     unsigned int cmd,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern int
 compat_grant_table_op(
     unsigned int cmd,
-    XEN_GUEST_HANDLE(void) uop,
+    XEN_GUEST_HANDLE_PARAM(void) uop,
     unsigned int count);
 
 extern int
 compat_vcpu_op(
     int cmd,
     int vcpuid,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern int
-compat_xenoprof_op(int op, XEN_GUEST_HANDLE(void) arg);
+compat_xenoprof_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern int
 compat_xen_version(
     int cmd,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern int
 compat_sched_op(
     int cmd,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern int
 compat_set_timer_op(
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 6f7fbf7..bd19e23 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -155,7 +155,7 @@ void iommu_crash_shutdown(void);
 void iommu_set_dom0_mapping(struct domain *d);
 void iommu_share_p2m_table(struct domain *d);
 
-int iommu_do_domctl(struct xen_domctl *, XEN_GUEST_HANDLE(xen_domctl_t));
+int iommu_do_domctl(struct xen_domctl *, XEN_GUEST_HANDLE_PARAM(xen_domctl_t));
 
 void iommu_iotlb_flush(struct domain *d, unsigned long gfn, unsigned int page_count);
 void iommu_iotlb_flush_all(struct domain *d);
diff --git a/xen/include/xen/tmem_xen.h b/xen/include/xen/tmem_xen.h
index 4a35760..2e7199a 100644
--- a/xen/include/xen/tmem_xen.h
+++ b/xen/include/xen/tmem_xen.h
@@ -448,7 +448,7 @@ static inline void tmh_tze_copy_from_pfp(void *tva, pfp_t *pfp, pagesize_t len)
 typedef XEN_GUEST_HANDLE(void) cli_mfn_t;
 typedef XEN_GUEST_HANDLE(char) cli_va_t;
 */
-typedef XEN_GUEST_HANDLE(tmem_op_t) tmem_cli_op_t;
+typedef XEN_GUEST_HANDLE_PARAM(tmem_op_t) tmem_cli_op_t;
 
 static inline int tmh_get_tmemop_from_client(tmem_op_t *op, tmem_cli_op_t uops)
 {
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index bef79df..3e4a47f 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -139,7 +139,7 @@ struct xsm_operations {
     int (*cpupool_op)(void);
     int (*sched_op)(void);
 
-    long (*__do_xsm_op) (XEN_GUEST_HANDLE(xsm_op_t) op);
+    long (*__do_xsm_op) (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op);
 
 #ifdef CONFIG_X86
     int (*shadow_control) (struct domain *d, uint32_t op);
@@ -585,7 +585,7 @@ static inline int xsm_sched_op(void)
     return xsm_call(sched_op());
 }
 
-static inline long __do_xsm_op (XEN_GUEST_HANDLE(xsm_op_t) op)
+static inline long __do_xsm_op (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op)
 {
 #ifdef XSM_ENABLE
     return xsm_ops->__do_xsm_op(op);
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index 7027ee7..5ef6529 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -365,7 +365,7 @@ static int dummy_sched_op (void)
     return 0;
 }
 
-static long dummy___do_xsm_op(XEN_GUEST_HANDLE(xsm_op_t) op)
+static long dummy___do_xsm_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) op)
 {
     return -ENOSYS;
 }
diff --git a/xen/xsm/flask/flask_op.c b/xen/xsm/flask/flask_op.c
index bd4db37..23e7d34 100644
--- a/xen/xsm/flask/flask_op.c
+++ b/xen/xsm/flask/flask_op.c
@@ -71,7 +71,7 @@ static int domain_has_security(struct domain *d, u32 perms)
                         perms, NULL);
 }
 
-static int flask_copyin_string(XEN_GUEST_HANDLE(char) u_buf, char **buf, uint32_t size)
+static int flask_copyin_string(XEN_GUEST_HANDLE_PARAM(char) u_buf, char **buf, uint32_t size)
 {
     char *tmp = xmalloc_bytes(size + 1);
     if ( !tmp )
@@ -573,7 +573,7 @@ static int flask_get_peer_sid(struct xen_flask_peersid *arg)
     return rv;
 }
 
-long do_flask_op(XEN_GUEST_HANDLE(xsm_op_t) u_flask_op)
+long do_flask_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_flask_op)
 {
     xen_flask_op_t op;
     int rv;
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 23b84f3..0fc299c 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -1553,7 +1553,7 @@ static int flask_vcpuextstate (struct domain *d, uint32_t cmd)
 }
 #endif
 
-long do_flask_op(XEN_GUEST_HANDLE(xsm_op_t) u_flask_op);
+long do_flask_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_flask_op);
 
 static struct xsm_operations flask_ops = {
     .security_domaininfo = flask_security_domaininfo,
diff --git a/xen/xsm/xsm_core.c b/xen/xsm/xsm_core.c
index 96c8669..46287cb 100644
--- a/xen/xsm/xsm_core.c
+++ b/xen/xsm/xsm_core.c
@@ -111,7 +111,7 @@ int unregister_xsm(struct xsm_operations *ops)
 
 #endif
 
-long do_xsm_op (XEN_GUEST_HANDLE(xsm_op_t) op)
+long do_xsm_op (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op)
 {
     return __do_xsm_op(op);
 }
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 14:50:42 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 14:50:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T21P5-00061u-A3; Thu, 16 Aug 2012 14:50:31 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T21P2-00060m-Qv
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 14:50:29 +0000
Received: from [85.158.139.83:16798] by server-8.bemta-5.messagelabs.com id
	24/44-02481-4B80D205; Thu, 16 Aug 2012 14:50:28 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-7.tower-182.messagelabs.com!1345128622!24543670!4
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzE3OTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25439 invoked from network); 16 Aug 2012 14:50:26 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 14:50:26 -0000
X-IronPort-AV: E=Sophos;i="4.77,778,1336363200"; d="scan'208";a="205378345"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 10:50:21 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 16 Aug 2012 10:50:21 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1T21Op-0006Vh-SY;
	Thu, 16 Aug 2012 15:50:15 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Thu, 16 Aug 2012 15:50:10 +0100
Message-ID: <1345128612-10323-4-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208161523420.4850@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208161523420.4850@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: tim@xen.org, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH v3 4/6] xen: introduce XEN_GUEST_HANDLE_PARAM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

XEN_GUEST_HANDLE_PARAM is going to be used to distinguish guest pointers
stored in memory from guest pointers passed as hypercall parameters.

The guest_handle_* macros now default to XEN_GUEST_HANDLE_PARAM as their
return type. Two new macros, guest_handle_to_param and
guest_handle_from_param, are introduced to convert between the two.

Changes in v2:

- add 2 missing #define _XEN_GUEST_HANDLE_PARAM entries, needed for the
  compat code to compile.

Changes in v3:

- move the guest_handle_cast change into this patch;
- add a clear comment on top of guest_handle_cast;
- also s/XEN_GUEST_HANDLE/XEN_GUEST_HANDLE_PARAM in
  guest_handle_from_ptr and const_guest_handle_from_ptr;
- introduce guest_handle_from_param and guest_handle_to_param.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/include/asm-arm/guest_access.h |   19 ++++++++++++++++---
 xen/include/asm-x86/guest_access.h |   19 ++++++++++++++++---
 xen/include/public/arch-arm.h      |   24 ++++++++++++++++++++----
 xen/include/public/arch-ia64.h     |    9 +++++++++
 xen/include/public/arch-x86/xen.h  |    9 +++++++++
 5 files changed, 70 insertions(+), 10 deletions(-)

diff --git a/xen/include/asm-arm/guest_access.h b/xen/include/asm-arm/guest_access.h
index 0fceae6..28a4b3f 100644
--- a/xen/include/asm-arm/guest_access.h
+++ b/xen/include/asm-arm/guest_access.h
@@ -27,16 +27,29 @@ unsigned long raw_clear_guest(void *to, unsigned len);
 #define guest_handle_add_offset(hnd, nr) ((hnd).p += (nr))
 #define guest_handle_subtract_offset(hnd, nr) ((hnd).p -= (nr))
 
-/* Cast a guest handle to the specified type of handle. */
+/* Cast a guest handle (either XEN_GUEST_HANDLE or XEN_GUEST_HANDLE_PARAM)
+ * to the specified type of XEN_GUEST_HANDLE_PARAM. */
 #define guest_handle_cast(hnd, type) ({         \
     type *_x = (hnd).p;                         \
+    (XEN_GUEST_HANDLE_PARAM(type)) { _x };            \
+})
+
+/* Cast a XEN_GUEST_HANDLE to XEN_GUEST_HANDLE_PARAM */
+#define guest_handle_to_param(hnd, type) ({                  \
+    typeof((hnd).p) _x = (hnd).p;                         \
+    (XEN_GUEST_HANDLE_PARAM(type)) { _x };            \
+})
+
+/* Cast a XEN_GUEST_HANDLE_PARAM to XEN_GUEST_HANDLE */
+#define guest_handle_from_param(hnd, type) ({                  \
+    typeof((hnd).p) _x = (hnd).p;                         \
     (XEN_GUEST_HANDLE(type)) { _x };            \
 })
 
 #define guest_handle_from_ptr(ptr, type)        \
-    ((XEN_GUEST_HANDLE(type)) { (type *)ptr })
+    ((XEN_GUEST_HANDLE_PARAM(type)) { (type *)ptr })
 #define const_guest_handle_from_ptr(ptr, type)  \
-    ((XEN_GUEST_HANDLE(const_##type)) { (const type *)ptr })
+    ((XEN_GUEST_HANDLE_PARAM(const_##type)) { (const type *)ptr })
 
 /*
  * Copy an array of objects to guest context via a guest handle,
diff --git a/xen/include/asm-x86/guest_access.h b/xen/include/asm-x86/guest_access.h
index 2b429c2..6d35104 100644
--- a/xen/include/asm-x86/guest_access.h
+++ b/xen/include/asm-x86/guest_access.h
@@ -45,16 +45,29 @@
 #define guest_handle_add_offset(hnd, nr) ((hnd).p += (nr))
 #define guest_handle_subtract_offset(hnd, nr) ((hnd).p -= (nr))
 
-/* Cast a guest handle to the specified type of handle. */
+/* Cast a guest handle (either XEN_GUEST_HANDLE or XEN_GUEST_HANDLE_PARAM)
+ * to the specified type of XEN_GUEST_HANDLE_PARAM. */
 #define guest_handle_cast(hnd, type) ({         \
     type *_x = (hnd).p;                         \
+    (XEN_GUEST_HANDLE_PARAM(type)) { _x };            \
+})
+
+/* Cast a XEN_GUEST_HANDLE to XEN_GUEST_HANDLE_PARAM */
+#define guest_handle_to_param(hnd, type) ({                  \
+    typeof((hnd).p) _x = (hnd).p;                         \
+    (XEN_GUEST_HANDLE_PARAM(type)) { _x };            \
+})
+
+/* Cast a XEN_GUEST_HANDLE_PARAM to XEN_GUEST_HANDLE */
+#define guest_handle_from_param(hnd, type) ({                  \
+    typeof((hnd).p) _x = (hnd).p;                         \
     (XEN_GUEST_HANDLE(type)) { _x };            \
 })
 
 #define guest_handle_from_ptr(ptr, type)        \
-    ((XEN_GUEST_HANDLE(type)) { (type *)ptr })
+    ((XEN_GUEST_HANDLE_PARAM(type)) { (type *)ptr })
 #define const_guest_handle_from_ptr(ptr, type)  \
-    ((XEN_GUEST_HANDLE(const_##type)) { (const type *)ptr })
+    ((XEN_GUEST_HANDLE_PARAM(const_##type)) { (const type *)ptr })
 
 /*
  * Copy an array of objects to guest context via a guest handle,
diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
index 2ae6548..9db3c81 100644
--- a/xen/include/public/arch-arm.h
+++ b/xen/include/public/arch-arm.h
@@ -51,18 +51,34 @@
 
 #define XEN_HYPERCALL_TAG   0XEA1
 
+#define uint64_aligned_t uint64_t __attribute__((aligned(8)))
 
 #ifndef __ASSEMBLY__
-#define ___DEFINE_XEN_GUEST_HANDLE(name, type) \
-    typedef struct { type *p; } __guest_handle_ ## name
+#define ___DEFINE_XEN_GUEST_HANDLE(name, type)                  \
+    typedef struct { type *p; }                                 \
+        __guest_handle_ ## name;                                \
+    typedef struct { union { type *p; uint64_aligned_t q; }; }  \
+        __guest_handle_64_ ## name;
 
+/*
+ * XEN_GUEST_HANDLE represents a guest pointer when passed as a field
+ * in a struct in memory. On ARM it is always 8 bytes in size and
+ * 8-byte aligned.
+ * XEN_GUEST_HANDLE_PARAM represents a guest pointer when passed as a
+ * hypercall argument. It is 4 bytes on aarch32 and 8 bytes on aarch64.
+ */
 #define __DEFINE_XEN_GUEST_HANDLE(name, type) \
     ___DEFINE_XEN_GUEST_HANDLE(name, type);   \
     ___DEFINE_XEN_GUEST_HANDLE(const_##name, const type)
 #define DEFINE_XEN_GUEST_HANDLE(name)   __DEFINE_XEN_GUEST_HANDLE(name, name)
-#define __XEN_GUEST_HANDLE(name)        __guest_handle_ ## name
+#define __XEN_GUEST_HANDLE(name)        __guest_handle_64_ ## name
 #define XEN_GUEST_HANDLE(name)          __XEN_GUEST_HANDLE(name)
-#define set_xen_guest_handle_raw(hnd, val)  do { (hnd).p = val; } while (0)
+/* this is going to be changed on 64 bit */
+#define XEN_GUEST_HANDLE_PARAM(name)    __guest_handle_ ## name
+#define set_xen_guest_handle_raw(hnd, val)                  \
+    do { if ( sizeof(hnd) == 8 ) *(uint64_t *)&(hnd) = 0;   \
+         (hnd).p = val;                                     \
+    } while ( 0 )
 #ifdef __XEN_TOOLS__
 #define get_xen_guest_handle(val, hnd)  do { val = (hnd).p; } while (0)
 #endif
diff --git a/xen/include/public/arch-ia64.h b/xen/include/public/arch-ia64.h
index c9da5d4..e4e9688 100644
--- a/xen/include/public/arch-ia64.h
+++ b/xen/include/public/arch-ia64.h
@@ -45,8 +45,17 @@
     ___DEFINE_XEN_GUEST_HANDLE(name, type);   \
     ___DEFINE_XEN_GUEST_HANDLE(const_##name, const type)
 
+/*
+ * XEN_GUEST_HANDLE represents a guest pointer, when passed as a field
+ * in a struct in memory.
+ * XEN_GUEST_HANDLE_PARAM represents a guest pointer when passed as a
+ * hypercall argument.
+ * XEN_GUEST_HANDLE_PARAM and XEN_GUEST_HANDLE are the same on ia64 but
+ * they might not be on other architectures.
+ */
 #define DEFINE_XEN_GUEST_HANDLE(name)   __DEFINE_XEN_GUEST_HANDLE(name, name)
 #define XEN_GUEST_HANDLE(name)          __guest_handle_ ## name
+#define XEN_GUEST_HANDLE_PARAM(name)    XEN_GUEST_HANDLE(name)
 #define XEN_GUEST_HANDLE_64(name)       XEN_GUEST_HANDLE(name)
 #define uint64_aligned_t                uint64_t
 #define set_xen_guest_handle_raw(hnd, val)  do { (hnd).p = val; } while (0)
diff --git a/xen/include/public/arch-x86/xen.h b/xen/include/public/arch-x86/xen.h
index 1c186d7..0e10260 100644
--- a/xen/include/public/arch-x86/xen.h
+++ b/xen/include/public/arch-x86/xen.h
@@ -38,12 +38,21 @@
     typedef type * __guest_handle_ ## name
 #endif
 
+/*
+ * XEN_GUEST_HANDLE represents a guest pointer, when passed as a field
+ * in a struct in memory.
+ * XEN_GUEST_HANDLE_PARAM represents a guest pointer when passed as a
+ * hypercall argument.
+ * XEN_GUEST_HANDLE_PARAM and XEN_GUEST_HANDLE are the same on X86 but
+ * they might not be on other architectures.
+ */
 #define __DEFINE_XEN_GUEST_HANDLE(name, type) \
     ___DEFINE_XEN_GUEST_HANDLE(name, type);   \
     ___DEFINE_XEN_GUEST_HANDLE(const_##name, const type)
 #define DEFINE_XEN_GUEST_HANDLE(name)   __DEFINE_XEN_GUEST_HANDLE(name, name)
 #define __XEN_GUEST_HANDLE(name)        __guest_handle_ ## name
 #define XEN_GUEST_HANDLE(name)          __XEN_GUEST_HANDLE(name)
+#define XEN_GUEST_HANDLE_PARAM(name)    XEN_GUEST_HANDLE(name)
 #define set_xen_guest_handle_raw(hnd, val)  do { (hnd).p = val; } while (0)
 #ifdef __XEN_TOOLS__
 #define get_xen_guest_handle(val, hnd)  do { val = (hnd).p; } while (0)
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 14:50:42 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 14:50:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T21P5-00061u-A3; Thu, 16 Aug 2012 14:50:31 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T21P2-00060m-Qv
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 14:50:29 +0000
Received: from [85.158.139.83:16798] by server-8.bemta-5.messagelabs.com id
	24/44-02481-4B80D205; Thu, 16 Aug 2012 14:50:28 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-7.tower-182.messagelabs.com!1345128622!24543670!4
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzE3OTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25439 invoked from network); 16 Aug 2012 14:50:26 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 14:50:26 -0000
X-IronPort-AV: E=Sophos;i="4.77,778,1336363200"; d="scan'208";a="205378345"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 10:50:21 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 16 Aug 2012 10:50:21 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1T21Op-0006Vh-SY;
	Thu, 16 Aug 2012 15:50:15 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Thu, 16 Aug 2012 15:50:10 +0100
Message-ID: <1345128612-10323-4-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208161523420.4850@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208161523420.4850@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: tim@xen.org, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH v3 4/6] xen: introduce XEN_GUEST_HANDLE_PARAM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

XEN_GUEST_HANDLE_PARAM is going to be used to distinguish guest pointers
stored in memory from guest pointers as hypercall parameters.

guest_handle_* macros default to XEN_GUEST_HANDLE_PARAM as return type.
Two new guest_handle_to_param and guest_handle_from_param macros are
introduced to do conversions.

Changes in v2:

- add 2 missing #define _XEN_GUEST_HANDLE_PARAM for the compilation of
the compat code.

Changes in v3:

- move the guest_handle_cast change into this patch;
- add a clear comment on top of guest_handle_cast;
- also s/XEN_GUEST_HANDLE/XEN_GUEST_HANDLE_PARAM in
  guest_handle_from_ptr and const_guest_handle_from_ptr;
- introduce guest_handle_from_param and guest_handle_to_param.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/include/asm-arm/guest_access.h |   19 ++++++++++++++++---
 xen/include/asm-x86/guest_access.h |   19 ++++++++++++++++---
 xen/include/public/arch-arm.h      |   24 ++++++++++++++++++++----
 xen/include/public/arch-ia64.h     |    9 +++++++++
 xen/include/public/arch-x86/xen.h  |    9 +++++++++
 5 files changed, 70 insertions(+), 10 deletions(-)

diff --git a/xen/include/asm-arm/guest_access.h b/xen/include/asm-arm/guest_access.h
index 0fceae6..28a4b3f 100644
--- a/xen/include/asm-arm/guest_access.h
+++ b/xen/include/asm-arm/guest_access.h
@@ -27,16 +27,29 @@ unsigned long raw_clear_guest(void *to, unsigned len);
 #define guest_handle_add_offset(hnd, nr) ((hnd).p += (nr))
 #define guest_handle_subtract_offset(hnd, nr) ((hnd).p -= (nr))
 
-/* Cast a guest handle to the specified type of handle. */
+/* Cast a guest handle (either XEN_GUEST_HANDLE or XEN_GUEST_HANDLE_PARAM)
+ * to the specified type of XEN_GUEST_HANDLE_PARAM. */
 #define guest_handle_cast(hnd, type) ({         \
     type *_x = (hnd).p;                         \
+    (XEN_GUEST_HANDLE_PARAM(type)) { _x };            \
+})
+
+/* Cast a XEN_GUEST_HANDLE to XEN_GUEST_HANDLE_PARAM */
+#define guest_handle_to_param(hnd, type) ({                  \
+    typeof((hnd).p) _x = (hnd).p;                         \
+    (XEN_GUEST_HANDLE_PARAM(type)) { _x };            \
+})
+
+/* Cast a XEN_GUEST_HANDLE_PARAM to XEN_GUEST_HANDLE */
+#define guest_handle_from_param(hnd, type) ({                  \
+    typeof((hnd).p) _x = (hnd).p;                         \
     (XEN_GUEST_HANDLE(type)) { _x };            \
 })
 
 #define guest_handle_from_ptr(ptr, type)        \
-    ((XEN_GUEST_HANDLE(type)) { (type *)ptr })
+    ((XEN_GUEST_HANDLE_PARAM(type)) { (type *)ptr })
 #define const_guest_handle_from_ptr(ptr, type)  \
-    ((XEN_GUEST_HANDLE(const_##type)) { (const type *)ptr })
+    ((XEN_GUEST_HANDLE_PARAM(const_##type)) { (const type *)ptr })
 
 /*
  * Copy an array of objects to guest context via a guest handle,
diff --git a/xen/include/asm-x86/guest_access.h b/xen/include/asm-x86/guest_access.h
index 2b429c2..6d35104 100644
--- a/xen/include/asm-x86/guest_access.h
+++ b/xen/include/asm-x86/guest_access.h
@@ -45,16 +45,29 @@
 #define guest_handle_add_offset(hnd, nr) ((hnd).p += (nr))
 #define guest_handle_subtract_offset(hnd, nr) ((hnd).p -= (nr))
 
-/* Cast a guest handle to the specified type of handle. */
+/* Cast a guest handle (either XEN_GUEST_HANDLE or XEN_GUEST_HANDLE_PARAM)
+ * to a XEN_GUEST_HANDLE_PARAM of the specified type. */
 #define guest_handle_cast(hnd, type) ({         \
     type *_x = (hnd).p;                         \
+    (XEN_GUEST_HANDLE_PARAM(type)) { _x };            \
+})
+
+/* Cast a XEN_GUEST_HANDLE to XEN_GUEST_HANDLE_PARAM */
+#define guest_handle_to_param(hnd, type) ({                  \
+    typeof((hnd).p) _x = (hnd).p;                         \
+    (XEN_GUEST_HANDLE_PARAM(type)) { _x };            \
+})
+
+/* Cast a XEN_GUEST_HANDLE_PARAM to XEN_GUEST_HANDLE */
+#define guest_handle_from_param(hnd, type) ({                  \
+    typeof((hnd).p) _x = (hnd).p;                         \
     (XEN_GUEST_HANDLE(type)) { _x };            \
 })
 
 #define guest_handle_from_ptr(ptr, type)        \
-    ((XEN_GUEST_HANDLE(type)) { (type *)ptr })
+    ((XEN_GUEST_HANDLE_PARAM(type)) { (type *)ptr })
 #define const_guest_handle_from_ptr(ptr, type)  \
-    ((XEN_GUEST_HANDLE(const_##type)) { (const type *)ptr })
+    ((XEN_GUEST_HANDLE_PARAM(const_##type)) { (const type *)ptr })
 
 /*
  * Copy an array of objects to guest context via a guest handle,
diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
index 2ae6548..9db3c81 100644
--- a/xen/include/public/arch-arm.h
+++ b/xen/include/public/arch-arm.h
@@ -51,18 +51,34 @@
 
 #define XEN_HYPERCALL_TAG   0XEA1
 
+#define uint64_aligned_t uint64_t __attribute__((aligned(8)))
 
 #ifndef __ASSEMBLY__
-#define ___DEFINE_XEN_GUEST_HANDLE(name, type) \
-    typedef struct { type *p; } __guest_handle_ ## name
+#define ___DEFINE_XEN_GUEST_HANDLE(name, type)                  \
+    typedef struct { type *p; }                                 \
+        __guest_handle_ ## name;                                \
+    typedef struct { union { type *p; uint64_aligned_t q; }; }  \
+        __guest_handle_64_ ## name;
 
+/*
+ * XEN_GUEST_HANDLE represents a guest pointer, when passed as a field
+ * in a struct in memory. On ARM it is always 8 bytes in size and
+ * 8-byte aligned.
+ * XEN_GUEST_HANDLE_PARAM represents a guest pointer, when passed as a
+ * hypercall argument. It is 4 bytes on aarch32 and 8 bytes on aarch64.
+ */
 #define __DEFINE_XEN_GUEST_HANDLE(name, type) \
     ___DEFINE_XEN_GUEST_HANDLE(name, type);   \
     ___DEFINE_XEN_GUEST_HANDLE(const_##name, const type)
 #define DEFINE_XEN_GUEST_HANDLE(name)   __DEFINE_XEN_GUEST_HANDLE(name, name)
-#define __XEN_GUEST_HANDLE(name)        __guest_handle_ ## name
+#define __XEN_GUEST_HANDLE(name)        __guest_handle_64_ ## name
 #define XEN_GUEST_HANDLE(name)          __XEN_GUEST_HANDLE(name)
-#define set_xen_guest_handle_raw(hnd, val)  do { (hnd).p = val; } while (0)
+/* this is going to be changed on 64-bit */
+#define XEN_GUEST_HANDLE_PARAM(name)    __guest_handle_ ## name
+#define set_xen_guest_handle_raw(hnd, val)                  \
+    do { if ( sizeof(hnd) == 8 ) *(uint64_t *)&(hnd) = 0;   \
+         (hnd).p = val;                                     \
+    } while ( 0 )
 #ifdef __XEN_TOOLS__
 #define get_xen_guest_handle(val, hnd)  do { val = (hnd).p; } while (0)
 #endif
diff --git a/xen/include/public/arch-ia64.h b/xen/include/public/arch-ia64.h
index c9da5d4..e4e9688 100644
--- a/xen/include/public/arch-ia64.h
+++ b/xen/include/public/arch-ia64.h
@@ -45,8 +45,17 @@
     ___DEFINE_XEN_GUEST_HANDLE(name, type);   \
     ___DEFINE_XEN_GUEST_HANDLE(const_##name, const type)
 
+/*
+ * XEN_GUEST_HANDLE represents a guest pointer, when passed as a field
+ * in a struct in memory.
+ * XEN_GUEST_HANDLE_PARAM represents a guest pointer, when passed as a
+ * hypercall argument.
+ * XEN_GUEST_HANDLE_PARAM and XEN_GUEST_HANDLE are the same on ia64 but
+ * they might not be on other architectures.
+ */
 #define DEFINE_XEN_GUEST_HANDLE(name)   __DEFINE_XEN_GUEST_HANDLE(name, name)
 #define XEN_GUEST_HANDLE(name)          __guest_handle_ ## name
+#define XEN_GUEST_HANDLE_PARAM(name)    XEN_GUEST_HANDLE(name)
 #define XEN_GUEST_HANDLE_64(name)       XEN_GUEST_HANDLE(name)
 #define uint64_aligned_t                uint64_t
 #define set_xen_guest_handle_raw(hnd, val)  do { (hnd).p = val; } while (0)
diff --git a/xen/include/public/arch-x86/xen.h b/xen/include/public/arch-x86/xen.h
index 1c186d7..0e10260 100644
--- a/xen/include/public/arch-x86/xen.h
+++ b/xen/include/public/arch-x86/xen.h
@@ -38,12 +38,21 @@
     typedef type * __guest_handle_ ## name
 #endif
 
+/*
+ * XEN_GUEST_HANDLE represents a guest pointer, when passed as a field
+ * in a struct in memory.
+ * XEN_GUEST_HANDLE_PARAM represents a guest pointer, when passed as a
+ * hypercall argument.
+ * XEN_GUEST_HANDLE_PARAM and XEN_GUEST_HANDLE are the same on X86 but
+ * they might not be on other architectures.
+ */
 #define __DEFINE_XEN_GUEST_HANDLE(name, type) \
     ___DEFINE_XEN_GUEST_HANDLE(name, type);   \
     ___DEFINE_XEN_GUEST_HANDLE(const_##name, const type)
 #define DEFINE_XEN_GUEST_HANDLE(name)   __DEFINE_XEN_GUEST_HANDLE(name, name)
 #define __XEN_GUEST_HANDLE(name)        __guest_handle_ ## name
 #define XEN_GUEST_HANDLE(name)          __XEN_GUEST_HANDLE(name)
+#define XEN_GUEST_HANDLE_PARAM(name)    XEN_GUEST_HANDLE(name)
 #define set_xen_guest_handle_raw(hnd, val)  do { (hnd).p = val; } while (0)
 #ifdef __XEN_TOOLS__
 #define get_xen_guest_handle(val, hnd)  do { val = (hnd).p; } while (0)
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 14:52:39 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 14:52:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T21R1-0006dJ-I3; Thu, 16 Aug 2012 14:52:31 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1T21R0-0006cv-0o
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 14:52:30 +0000
Received: from [85.158.139.83:8863] by server-9.bemta-5.messagelabs.com id
	F0/2A-26123-D290D205; Thu, 16 Aug 2012 14:52:29 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-4.tower-182.messagelabs.com!1345128748!25775596!1
X-Originating-IP: [63.239.67.9]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20238 invoked from network); 16 Aug 2012 14:52:28 -0000
Received: from emvm-gh1-uea08.nsa.gov (HELO nsa.gov) (63.239.67.9)
	by server-4.tower-182.messagelabs.com with SMTP;
	16 Aug 2012 14:52:28 -0000
X-TM-IMSS-Message-ID: <ae98a78b000106e5@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov ([63.239.67.9])
	with ESMTP (TREND IMSS SMTP Service 7.1) id ae98a78b000106e5 ;
	Thu, 16 Aug 2012 10:52:10 -0400
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id q7GEqLWq001565; 
	Thu, 16 Aug 2012 10:52:21 -0400
Message-ID: <502D0925.5070705@tycho.nsa.gov>
Date: Thu, 16 Aug 2012 10:52:21 -0400
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Organization: National Security Agency
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120717 Thunderbird/14.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1343657037-28495-1-git-send-email-ian.campbell@citrix.com>
	<50242583.3010201@tycho.nsa.gov>
	<CAEBdQ90LB9xvdAZC_QJGRYmBXBaM3ysDuAbG5LZr4AVe=GrA0w@mail.gmail.com>
	<1345116486.27489.92.camel@zakaz.uk.xensource.com>
	<502D04B2.6080203@tycho.nsa.gov>
	<1345127814.30865.11.camel@zakaz.uk.xensource.com>
In-Reply-To: <1345127814.30865.11.camel@zakaz.uk.xensource.com>
Cc: Jean Guyader <jean.guyader@gmail.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Guest knowledge of own domid [was: docs: initial
 documentation for xenstore paths]
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/16/2012 10:36 AM, Ian Campbell wrote:
> On Thu, 2012-08-16 at 15:33 +0100, Daniel De Graaf wrote:
> 
>>>> and if two domains are attempting to
>>>>> set up communication such as V4V or vchan, they need to be able to tell
>>>>> their peer what domain ID to use.
>>>
>>> That's trickier.
>>>
>>> I suppose they could rendezvous via /vm/$UUID? Although there has been
>>> talk of removing that path in the future.
>>
>> The /vm/$UUID path isn't currently useful for this, since it doesn't maintain
>> domain IDs (just names) and doesn't contain writable sub-keys for a domain
>> to use. I also don't think such a sub-key should be added; it makes more
>> sense to keep all of a domain's modifiable keys under its home path.
>>
>> Perhaps this could be changed to another identifier-to-domid mapping, like
>> the proposed addition of a location to map name to domid? 
>>
>> The toolstack would maintain something like:
>>   /local/by-name/$name == domid
>>   /local/by-uuid/$uuid == domid
> 
> This second one is a bit like the existing /vm/$uuid/domid.

That key isn't being populated by xl on my system, so I didn't know it existed.

> 
> I think I would go with:
> 
>   /local/by-name/$name == /local/domain/$domid
>   /local/by-uuid/$uuid == /local/domain/$domid
> 
> though, so that you can just read it and use it without interpreting it.

In that case, you would need to parse the domid out of the string in order
to use it in hypercalls (grant, event, v4v). The frontend/backend paths use
a distinct frontend-id/backend-id key for the domain ID, but we are trying
to avoid this since populating this key would mean the domain populating it
needs to know its own domain ID.

>>   /local/domain/$domid/name - same as existing
>>   /local/domain/$domid/uuid - ? maybe unneeded, as it's available from Xen.
> 
> Is it available for other domains via xen, or just yourself?
> 

Yourself and anyone who can call getdomaininfo (which is usually just dom0).
This is actually the same as the xenstore permissions on keys such as
/local/domain/$id/name. Changing these may need consideration, because on
public hosting servers, you might not want to allow domains to be enumerated
by name.

-- 
Daniel De Graaf
National Security Agency

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 14:52:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 14:52:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T21RC-0006g1-UW; Thu, 16 Aug 2012 14:52:42 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T21RB-0006fY-D8
	for Xen-devel@lists.xensource.com; Thu, 16 Aug 2012 14:52:41 +0000
Received: from [85.158.143.99:6750] by server-1.bemta-4.messagelabs.com id
	AC/3E-07754-8390D205; Thu, 16 Aug 2012 14:52:40 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1345128758!27963066!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDcwMTY5NA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11359 invoked from network); 16 Aug 2012 14:52:39 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-3.tower-216.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 16 Aug 2012 14:52:39 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7GEqaRg029962
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK)
	for <Xen-devel@lists.xensource.com>; Thu, 16 Aug 2012 14:52:37 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7GEqZn3027973
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO)
	for <Xen-devel@lists.xensource.com>; Thu, 16 Aug 2012 14:52:36 GMT
Received: from abhmt113.oracle.com (abhmt113.oracle.com [141.146.116.65])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7GEqZK2030146
	for <Xen-devel@lists.xensource.com>; Thu, 16 Aug 2012 09:52:35 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 16 Aug 2012 07:52:35 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 34BAE402E8; Thu, 16 Aug 2012 10:42:49 -0400 (EDT)
Date: Thu, 16 Aug 2012 10:42:49 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Mukesh Rathor <mukesh.rathor@oracle.com>
Message-ID: <20120816144249.GA17735@phenom.dumpdata.com>
References: <20120815175224.36457451@mantra.us.oracle.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20120815175224.36457451@mantra.us.oracle.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [RFC PATCH 0/8]: PVH : PV guest in HVM container
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Aug 15, 2012 at 05:52:24PM -0700, Mukesh Rathor wrote:
> Hi,
> 
> 	Following are series of patches for PVH linux changes. xen
> 	patches will follow later.
> 
>         A PVH guest is a PV guest that runs in an HVM container. It is
>         available for x86_64 only. It runs in ring 0 (the kernel), has native
>         page tables, a native IDT, and requires HAP. It uses a lot of HVM
>         code paths. Both in the guest and in xen, the guest is viewed as a
>         PV guest, and is_hvm() would return false. It uses event channels,
>         thereby eliminating the APIC emulation code paths in xen.
> 
>         This series of patches implements this feature. They were built on
>         top of fc6bdb59a501740b28ed3b616641a22c8dc5dd31 from the following
>         tree: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Neat. I like the patches. One thing that you might also want to add
is something like this:

diff --git a/arch/x86/xen/Kconfig b/arch/x86/xen/Kconfig
index fdce49c..cd60024 100644
--- a/arch/x86/xen/Kconfig
+++ b/arch/x86/xen/Kconfig
@@ -27,6 +27,13 @@ config XEN_PVHVM
 	def_bool y
 	depends on XEN && PCI && X86_LOCAL_APIC
 
+config XEN_PVH
+	def_bool n
+	depends on XEN_DOM0 && SPECIAL_PTE?
+	help
+	  Run initial domain in a HVM container. The hypervisor
+        must have the extensions.. blah blah.
+
 config XEN_MAX_DOMAIN_MEMORY
        int
        default 500 if X86_64
diff --git a/include/xen/xen.h b/include/xen/xen.h
index e823639..0317a36 100644
--- a/include/xen/xen.h
+++ b/include/xen/xen.h
@@ -20,9 +20,12 @@ extern enum xen_domain_type xen_domain_type;
 				 xen_domain_type == XEN_HVM_DOMAIN)
 /* xen_pv_domain check is necessary as start_info ptr is null in HVM. Also,
  * note, xen PVH domain shares lot of HVM code */
+#ifdef CONFIG_XEN_PVH
 #define xen_pvh_domain()       (xen_pv_domain() &&                     \
 				(xen_start_info->flags & SIF_IS_PVINHVM))
-
+#else
+#define xen_pvh_domain()       (0)
+#endif
 #ifdef CONFIG_XEN_DOM0
 #include <xen/interface/xen.h>
 #include <asm/xen/hypervisor.h>

That way we can work through all the kinks without having to worry
about causing revert issues or adding extra undue config build options.

Better yet, it allows you to test all your code in pure PV and HVM modes to
make sure nothing broke. And when all is ready, this can be removed.

> 
> 
> Thanks,
> Mukesh
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 15:22:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 15:22:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T21sz-0007Ol-DT; Thu, 16 Aug 2012 15:21:25 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T21sy-0007Og-Lh
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 15:21:24 +0000
Received: from [85.158.138.51:64028] by server-11.bemta-3.messagelabs.com id
	3E/49-23152-3FF0D205; Thu, 16 Aug 2012 15:21:23 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-6.tower-174.messagelabs.com!1345130482!20614997!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21471 invoked from network); 16 Aug 2012 15:21:23 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-6.tower-174.messagelabs.com with SMTP;
	16 Aug 2012 15:21:23 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 16 Aug 2012 16:21:22 +0100
Message-Id: <502D2C3A0200007800095918@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Thu, 16 Aug 2012 16:22:02 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part0F3E7F0A.0__="
Subject: [Xen-devel] [PATCH] x86/ucode: don't crash during AP bringup on
 non-Intel, non-AMD CPUs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part0F3E7F0A.0__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/microcode.c
+++ b/xen/arch/x86/microcode.c
@@ -109,6 +109,9 @@ int microcode_resume_cpu(int cpu)
     struct cpu_signature nsig;
     unsigned int cpu2;
=20
+    if ( !microcode_ops )
+        return 0;
+
     spin_lock(&microcode_mutex);
=20
     err =3D microcode_ops->collect_cpu_info(cpu, &uci->cpu_sig);




--=__Part0F3E7F0A.0__=
Content-Type: text/plain; name="x86-ucode-resume-check.patch"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment; filename="x86-ucode-resume-check.patch"

x86/ucode: don't crash during AP bringup on non-Intel, non-AMD CPUs=0A=0ASi=
gned-off-by: Jan Beulich <jbeulich@suse.com>=0A=0A--- a/xen/arch/x86/microc=
ode.c=0A+++ b/xen/arch/x86/microcode.c=0A@@ -109,6 +109,9 @@ int microcode_=
resume_cpu(int cpu)=0A     struct cpu_signature nsig;=0A     unsigned int =
cpu2;=0A =0A+    if ( !microcode_ops )=0A+        return 0;=0A+=0A     =
spin_lock(&microcode_mutex);=0A =0A     err =3D microcode_ops->collect_cpu_=
info(cpu, &uci->cpu_sig);=0A
--=__Part0F3E7F0A.0__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part0F3E7F0A.0__=--

From xen-devel-bounces@lists.xen.org Thu Aug 16 15:29:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 15:29:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T220H-0007Zj-Eu; Thu, 16 Aug 2012 15:28:57 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1T220G-0007Ze-77
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 15:28:56 +0000
Received: from [85.158.139.83:45173] by server-2.bemta-5.messagelabs.com id
	B4/A2-10142-7B11D205; Thu, 16 Aug 2012 15:28:55 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-16.tower-182.messagelabs.com!1345130934!21148709!1
X-Originating-IP: [209.85.215.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11623 invoked from network); 16 Aug 2012 15:28:55 -0000
Received: from mail-ey0-f173.google.com (HELO mail-ey0-f173.google.com)
	(209.85.215.173)
	by server-16.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 15:28:55 -0000
Received: by eaac13 with SMTP id c13so878341eaa.32
	for <xen-devel@lists.xen.org>; Thu, 16 Aug 2012 08:28:54 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:user-agent:date:subject:from:to:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=o4FC6Pk4urPJqY5AhLCMevkEpHceVJoBCYF6GxeMwr8=;
	b=nuQDnIzXVHd3HNrFIYUzIfDWKAMVs769cfhsnymr3Nz1Dg3L0jN6vBzNu6fccB6b5w
	2JnfTSv3M3UzavFjBhgnAqrK6PvWtSGvYLMcqYaPlrdv2nN9LVj+8F5ZD/5Jl8jE04vN
	H/zR0Qmyn4wmCcT1tzt0w4btRBZh6kEbEOTzus0EBjYX0dIQMPkdFq8Bwg9+IPUv0Vbs
	fE72dYRVunHpUtj4CEQWgkFadCAw7sDYRAf/86J4Bf5lIKfVy4BJ26uJYScKQor2FMun
	kkeHalDeBlMtDpIGggwSomDrpUVYMUJJmh4vP7dr1oNKYlC02FSD6Q+XfbtpWvCNg4u9
	jPIg==
Received: by 10.14.182.134 with SMTP id o6mr2175410eem.26.1345130934869;
	Thu, 16 Aug 2012 08:28:54 -0700 (PDT)
Received: from [192.168.1.3] (host86-157-166-190.range86-157.btcentralplus.com.
	[86.157.166.190])
	by mx.google.com with ESMTPS id u47sm13009245eeo.9.2012.08.16.08.28.53
	(version=SSLv3 cipher=OTHER); Thu, 16 Aug 2012 08:28:54 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.33.0.120411
Date: Thu, 16 Aug 2012 16:28:49 +0100
From: Keir Fraser <keir@xen.org>
To: Jan Beulich <JBeulich@suse.com>,
	xen-devel <xen-devel@lists.xen.org>
Message-ID: <CC52D041.4973A%keir@xen.org>
Thread-Topic: [Xen-devel] [PATCH] x86/ucode: don't crash during AP bringup on
	non-Intel, non-AMD CPUs
Thread-Index: Ac17w9U91M/1KzrGOUWedPzu5tZxTQ==
In-Reply-To: <502D2C3A0200007800095918@nat28.tlf.novell.com>
Mime-version: 1.0
Subject: Re: [Xen-devel] [PATCH] x86/ucode: don't crash during AP bringup on
 non-Intel, non-AMD CPUs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 16/08/2012 16:22, "Jan Beulich" <JBeulich@suse.com> wrote:

> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Do we support any such processors? Newer VIA processors maybe?

Anyhow,
Acked-by: Keir Fraser <keir@xen.org>

> --- a/xen/arch/x86/microcode.c
> +++ b/xen/arch/x86/microcode.c
> @@ -109,6 +109,9 @@ int microcode_resume_cpu(int cpu)
>      struct cpu_signature nsig;
>      unsigned int cpu2;
>  
> +    if ( !microcode_ops )
> +        return 0;
> +
>      spin_lock(&microcode_mutex);
>  
>      err = microcode_ops->collect_cpu_info(cpu, &uci->cpu_sig);
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 15:34:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 15:34:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T225N-0007mQ-Rk; Thu, 16 Aug 2012 15:34:13 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T225L-0007lm-Rg
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 15:34:12 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1345131244!9609035!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkyODE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11973 invoked from network); 16 Aug 2012 15:34:05 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 15:34:05 -0000
X-IronPort-AV: E=Sophos;i="4.77,778,1336348800"; 
	d="dts'?dtsi'?scan'208";a="14044587"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 15:34:04 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 16 Aug 2012 16:34:04 +0100
Date: Thu, 16 Aug 2012 16:33:47 +0100
From: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Message-ID: <alpine.DEB.2.02.1208161618550.4850@kaball.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Content-Type: multipart/mixed;
	boundary="_004_alpineDEB20212080614280604645kaballukxensourcecom_"
Content-ID: <alpine.DEB.2.02.1208161618551.4850@kaball.uk.xensource.com>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linaro-dev@lists.linaro.org" <linaro-dev@lists.linaro.org>,
	Ian Campbell <Ian.Campbell@citrix.com>, "arnd@arndb.de" <arnd@arndb.de>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"catalin.marinas@arm.com" <catalin.marinas@arm.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>, "Tim
	\(Xen.org\)" <tim@xen.org>, "linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: [Xen-devel] [PATCH v3 00/23] Introduce Xen support on ARM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--_004_alpineDEB20212080614280604645kaballukxensourcecom_
Content-Type: text/plain; charset="US-ASCII"
Content-ID: <alpine.DEB.2.02.1208161618552.4850@kaball.uk.xensource.com>

Hi all,
this patch series implements Xen support for ARMv7 with virtualization
extensions.  It allows a Linux guest to boot as dom0 and
as domU on Xen on ARM. PV console, disk and network frontends and
backends are all working correctly.

It has been tested on a Versatile Express Cortex A15 emulator, using the
latest Xen ARM development branch
(git://xenbits.xen.org/people/ianc/xen-unstable.git arm-for-4.3) plus
the "ARM hypercall ABI: 64 bit ready" patch series
(http://marc.info/?l=xen-devel&m=134426267205408), and a simple ad-hoc
tool to build guest domains (marc.info/?l=xen-devel&m=134089788016546).

The patch marked with [HACK] shouldn't be applied and is part of the
series only because it is needed to create domUs.

I am also attaching to this email the dts'es that I am currently using
for dom0 and domU: vexpress-v2p-ca15-tc1.dts (that includes
vexpress-v2m-rs1-rtsm.dtsi) is the dts used for dom0 and it is passed to
Linux by Xen, while vexpress-virt.dts is the dts used for other domUs
and it is appended in binary form to the guest kernel image. I am not
sure where they are supposed to live yet, so I am just attaching them
here so that people can actually try out this series if they want to.

Comments are very welcome!


Changes in v3:
- move patches that have been picked up by Konrad at the end of the
  series;
- improve comments;
- add a doc to describe the Xen Device Tree format;
- do not use xen_ulong_t for multicalls and apic_physbase;
- add a patch at the end of the series to use the new __HVC macro;
- add missing pvclock-abi.h include to ia64 header files;
- do not use an anonymous union in struct xen_add_to_physmap.


Changes in v2:
- fix up many comments and commit messages;
- remove the early_printk patches: rely on the emulated serial for now;
- remove the xen_guest_init patch: without any PV early_printk, we don't
  need any early call to xen_guest_init, we can rely on core_initcall
  alone;
- define a HYPERCALL macro for 5-argument hypercall wrappers, even if
  at the moment it is unused;
- use ldm instead of pop in the hypercall wrappers;
- return -ENOSYS rather than -1 from the unimplemented grant_table
  functions;
- remove the pvclock ifdef in the Xen headers;
- remove include linux/types.h from xen/interface/xen.h;
- replace pr_info with pr_debug in xen_guest_init;
- add a new patch to introduce xen_ulong_t and use it to replace all
  the occurrences of unsigned long in the public Xen interface;
- explicitly size all the pointers to 64 bit on ARM, so that the
  hypercall ABI is "64 bit ready";
- clean up xenbus_init;
- make pci.o depend on CONFIG_PCI and acpi.o depend on CONFIG_ACPI;
- mark Xen guest support on ARM as EXPERIMENTAL;
- introduce GRANT_TABLE_PHYSADDR;
- remove unneeded initialization of boot_max_nr_grant_frames;
- add a new patch to clear IRQ_NOAUTOEN and IRQ_NOREQUEST in events.c;
- return -EINVAL from xen_remap_domain_mfn_range if
  auto_translated_physmap;
- retain binary compatibility in xen_add_to_physmap: use a union to
  introduce foreign_domid.


Ian Campbell (1):
      [HACK] xen/arm: implement xen_remap_domain_mfn_range

Stefano Stabellini (24):
      arm: initial Xen support
      xen/arm: hypercalls
      xen/arm: page.h definitions
      xen/arm: sync_bitops
      xen/arm: empty implementation of grant_table arch specific functions
      docs: Xen ARM DT bindings
      xen/arm: Xen detection and shared_info page mapping
      xen/arm: Introduce xen_pfn_t for pfn and mfn types
      xen/arm: Introduce xen_ulong_t for unsigned long
      xen/arm: compile and run xenbus
      xen: do not compile manage, balloon, pci, acpi and cpu_hotplug on ARM
      xen/arm: introduce CONFIG_XEN on ARM
      xen/arm: get privilege status
      xen/arm: initialize grant_table on ARM
      xen/arm: receive Xen events on ARM
      xen: clear IRQ_NOAUTOEN and IRQ_NOREQUEST
      xen/arm: implement alloc/free_xenballooned_pages with alloc_pages/kfree
      xen: allow privcmd for HVM guests
      xen/arm: compile blkfront and blkback
      xen/arm: compile netback
      arm/v2m: initialize arch_timers even if v2m_timer is not present
      xen/arm: use the __HVC macro
      xen: missing includes
      xen: update xen_add_to_physmap interface

 Documentation/devicetree/bindings/arm/xen.txt |   22 +++
 arch/arm/Kconfig                              |   10 +
 arch/arm/Makefile                             |    1 +
 arch/arm/include/asm/hypervisor.h             |    6 +
 arch/arm/include/asm/sync_bitops.h            |   27 +++
 arch/arm/include/asm/xen/events.h             |   18 ++
 arch/arm/include/asm/xen/hypercall.h          |   69 +++++++
 arch/arm/include/asm/xen/hypervisor.h         |   19 ++
 arch/arm/include/asm/xen/interface.h          |   73 ++++++++
 arch/arm/include/asm/xen/page.h               |   82 ++++++++
 arch/arm/mach-vexpress/v2m.c                  |   11 +-
 arch/arm/xen/Makefile                         |    1 +
 arch/arm/xen/enlighten.c                      |  245 +++++++++++++++++++++++++
 arch/arm/xen/grant-table.c                    |   53 ++++++
 arch/arm/xen/hypercall.S                      |  102 ++++++++++
 arch/ia64/include/asm/xen/interface.h         |    8 +-
 arch/x86/include/asm/xen/interface.h          |    8 +
 arch/x86/xen/enlighten.c                      |    1 +
 arch/x86/xen/irq.c                            |    1 +
 arch/x86/xen/mmu.c                            |    3 +
 arch/x86/xen/xen-ops.h                        |    1 -
 drivers/block/xen-blkback/blkback.c           |    1 +
 drivers/net/xen-netback/netback.c             |    1 +
 drivers/net/xen-netfront.c                    |    1 +
 drivers/tty/hvc/hvc_xen.c                     |    2 +
 drivers/xen/Makefile                          |   11 +-
 drivers/xen/events.c                          |   18 ++-
 drivers/xen/grant-table.c                     |    1 +
 drivers/xen/privcmd.c                         |   20 +-
 drivers/xen/xenbus/xenbus_comms.c             |    2 +-
 drivers/xen/xenbus/xenbus_probe.c             |   62 +++++--
 drivers/xen/xenbus/xenbus_probe_frontend.c    |    1 +
 drivers/xen/xenbus/xenbus_xs.c                |    1 +
 drivers/xen/xenfs/super.c                     |    7 +
 include/xen/events.h                          |    2 +
 include/xen/interface/features.h              |    3 +
 include/xen/interface/grant_table.h           |    4 +-
 include/xen/interface/io/protocols.h          |    3 +
 include/xen/interface/memory.h                |   32 ++-
 include/xen/interface/physdev.h               |    2 +-
 include/xen/interface/platform.h              |    4 +-
 include/xen/interface/version.h               |    2 +-
 include/xen/interface/xen.h                   |    7 +-
 include/xen/privcmd.h                         |    3 +-
 include/xen/xen.h                             |    2 +-
 45 files changed, 885 insertions(+), 68 deletions(-)


A branch based on 3.5-rc7 is available here (the __HVC patch is missing
from this branch because it depends on "ARM: opcodes: Facilitate custom
opcode injection" http://marc.info/?l=linux-arm-kernel&m=134442896128124):

git://xenbits.xen.org/people/sstabellini/linux-pvhvm.git 3.5-rc7-arm-3

Cheers,

Stefano
--_004_alpineDEB20212080614280604645kaballukxensourcecom_
Content-Type: text/plain; charset="US-ASCII"; name="vexpress-virt.dts"
Content-Transfer-Encoding: BASE64
Content-ID: <alpine.DEB.2.02.1208161633020.4850@kaball.uk.xensource.com>
Content-Description: 
Content-Disposition: attachment; filename="vexpress-virt.dts"

LyoNCiAqIEFSTSBMdGQuIFZlcnNhdGlsZSBFeHByZXNzDQogKg0KICogQVJN
IEVudmVsb3BlIE1vZGVsIHY3QSAoc2luZ2xlIENQVSkuDQogKi8NCg0KL2R0
cy12MS87DQoNCi9pbmNsdWRlLyAic2tlbGV0b24uZHRzaSINCg0KLyB7DQoJ
bW9kZWwgPSAiVjJQLUFFTXY3QSI7DQoJY29tcGF0aWJsZSA9ICJhcm0sdmV4
cHJlc3MsdjJwLWFlbSx2N2EiLCAiYXJtLHZleHByZXNzLHYycC1hZW0iLCAi
YXJtLHZleHByZXNzIjsNCglpbnRlcnJ1cHQtcGFyZW50ID0gPCZnaWM+Ow0K
DQogICAgICAgIGNob3NlbiB7DQogICAgICAgICAgICAgICAgYm9vdGFyZ3Mg
PSAiZWFybHlwcmludGsgZGVidWcgbG9nbGV2ZWw9OSBjb25zb2xlPWh2YzAg
cm9vdD0vZGV2L3h2ZGEgaW5pdD0vc2Jpbi9pbml0IjsNCiAgICAgICAgfTsN
Cg0KCWNwdXMgew0KCQkjYWRkcmVzcy1jZWxscyA9IDwxPjsNCgkJI3NpemUt
Y2VsbHMgPSA8MD47DQoNCgkJY3B1QDAgew0KCQkJZGV2aWNlX3R5cGUgPSAi
Y3B1IjsNCgkJCWNvbXBhdGlibGUgPSAiYXJtLGNvcnRleC1hMTUiOw0KCQkJ
cmVnID0gPDA+Ow0KCQl9Ow0KCX07DQoNCgltZW1vcnkgew0KCQlkZXZpY2Vf
dHlwZSA9ICJtZW1vcnkiOw0KCQlyZWcgPSA8MHg4MDAwMDAwMCAweDA4MDAw
MDAwPjsNCgl9Ow0KDQoJZ2ljOiBpbnRlcnJ1cHQtY29udHJvbGxlckAyYzAw
MTAwMCB7DQoJCWNvbXBhdGlibGUgPSAiYXJtLGNvcnRleC1hOS1naWMiOw0K
CQkjaW50ZXJydXB0LWNlbGxzID0gPDM+Ow0KCQkjYWRkcmVzcy1jZWxscyA9
IDwwPjsNCgkJaW50ZXJydXB0LWNvbnRyb2xsZXI7DQoJCXJlZyA9IDwweDJj
MDAxMDAwIDB4MTAwMD4sDQoJCSAgICAgIDwweDJjMDAyMDAwIDB4MTAwPjsN
Cgl9Ow0KDQoJdGltZXIgew0KCQljb21wYXRpYmxlID0gImFybSxhcm12Ny10
aW1lciI7DQoJCWludGVycnVwdHMgPSA8MSAxMyAweGYwOD4sDQoJCQkgICAg
IDwxIDE0IDB4ZjA4PiwNCgkJCSAgICAgPDEgMTEgMHhmMDg+LA0KCQkJICAg
ICA8MSAxMCAweGYwOD47DQoJfTsNCg0KCWh5cGVydmlzb3Igew0KCQljb21w
YXRpYmxlID0gInhlbix4ZW4iLCAieGVuLHhlbi00LjIiOw0KCQlyZWcgPSA8
MHhiMDAwMDAwMCAweDIwMDAwPjsNCgkJaW50ZXJydXB0cyA9IDwxIDE1IDB4
ZjA4PjsNCgl9Ow0KDQoJbW90aGVyYm9hcmQgew0KCQlhcm0sdjJtLW1lbW9y
eS1tYXAgPSAicnMxIjsNCgkJcmFuZ2VzID0gPDAgMCAweDA4MDAwMDAwIDB4
MDQwMDAwMDA+LA0KCQkJIDwxIDAgMHgxNDAwMDAwMCAweDA0MDAwMDAwPiwN
CgkJCSA8MiAwIDB4MTgwMDAwMDAgMHgwNDAwMDAwMD4sDQoJCQkgPDMgMCAw
eDFjMDAwMDAwIDB4MDQwMDAwMDA+LA0KCQkJIDw0IDAgMHgwYzAwMDAwMCAw
eDA0MDAwMDAwPiwNCgkJCSA8NSAwIDB4MTAwMDAwMDAgMHgwNDAwMDAwMD47
DQoNCgkJaW50ZXJydXB0LW1hcC1tYXNrID0gPDAgMCA2Mz47DQoJCWludGVy
cnVwdC1tYXAgPSA8MCAwICAwICZnaWMgMCAgMCA0PiwNCgkJCQk8MCAwICAx
ICZnaWMgMCAgMSA0PiwNCgkJCQk8MCAwICAyICZnaWMgMCAgMiA0PiwNCgkJ
CQk8MCAwICAzICZnaWMgMCAgMyA0PiwNCgkJCQk8MCAwICA0ICZnaWMgMCAg
NCA0PiwNCgkJCQk8MCAwICA1ICZnaWMgMCAgNSA0PiwNCgkJCQk8MCAwICA2
ICZnaWMgMCAgNiA0PiwNCgkJCQk8MCAwICA3ICZnaWMgMCAgNyA0PiwNCgkJ
CQk8MCAwICA4ICZnaWMgMCAgOCA0PiwNCgkJCQk8MCAwICA5ICZnaWMgMCAg
OSA0PiwNCgkJCQk8MCAwIDEwICZnaWMgMCAxMCA0PiwNCgkJCQk8MCAwIDEx
ICZnaWMgMCAxMSA0PiwNCgkJCQk8MCAwIDEyICZnaWMgMCAxMiA0PiwNCgkJ
CQk8MCAwIDEzICZnaWMgMCAxMyA0PiwNCgkJCQk8MCAwIDE0ICZnaWMgMCAx
NCA0PiwNCgkJCQk8MCAwIDE1ICZnaWMgMCAxNSA0PiwNCgkJCQk8MCAwIDE2
ICZnaWMgMCAxNiA0PiwNCgkJCQk8MCAwIDE3ICZnaWMgMCAxNyA0PiwNCgkJ
CQk8MCAwIDE4ICZnaWMgMCAxOCA0PiwNCgkJCQk8MCAwIDE5ICZnaWMgMCAx
OSA0PiwNCgkJCQk8MCAwIDIwICZnaWMgMCAyMCA0PiwNCgkJCQk8MCAwIDIx
ICZnaWMgMCAyMSA0PiwNCgkJCQk8MCAwIDIyICZnaWMgMCAyMiA0PiwNCgkJ
CQk8MCAwIDIzICZnaWMgMCAyMyA0PiwNCgkJCQk8MCAwIDI0ICZnaWMgMCAy
NCA0PiwNCgkJCQk8MCAwIDI1ICZnaWMgMCAyNSA0PiwNCgkJCQk8MCAwIDI2
ICZnaWMgMCAyNiA0PiwNCgkJCQk8MCAwIDI3ICZnaWMgMCAyNyA0PiwNCgkJ
CQk8MCAwIDI4ICZnaWMgMCAyOCA0PiwNCgkJCQk8MCAwIDI5ICZnaWMgMCAy
OSA0PiwNCgkJCQk8MCAwIDMwICZnaWMgMCAzMCA0PiwNCgkJCQk8MCAwIDMx
ICZnaWMgMCAzMSA0PiwNCgkJCQk8MCAwIDMyICZnaWMgMCAzMiA0PiwNCgkJ
CQk8MCAwIDMzICZnaWMgMCAzMyA0PiwNCgkJCQk8MCAwIDM0ICZnaWMgMCAz
NCA0PiwNCgkJCQk8MCAwIDM1ICZnaWMgMCAzNSA0PiwNCgkJCQk8MCAwIDM2
ICZnaWMgMCAzNiA0PiwNCgkJCQk8MCAwIDM3ICZnaWMgMCAzNyA0PiwNCgkJ
CQk8MCAwIDM4ICZnaWMgMCAzOCA0PiwNCgkJCQk8MCAwIDM5ICZnaWMgMCAz
OSA0PiwNCgkJCQk8MCAwIDQwICZnaWMgMCA0MCA0PiwNCgkJCQk8MCAwIDQx
ICZnaWMgMCA0MSA0PiwNCgkJCQk8MCAwIDQyICZnaWMgMCA0MiA0PjsNCgl9
Ow0KfTsNCg0KLyogL2luY2x1ZGUvICJ2ZXhwcmVzcy12Mm0tcnMxLXJ0c20u
ZHRzaSIgKi8NCg==

--_004_alpineDEB20212080614280604645kaballukxensourcecom_
Content-Type: text/plain; charset="US-ASCII"; name="vexpress-v2m-rs1.dtsi"
Content-Transfer-Encoding: BASE64
Content-ID: <alpine.DEB.2.02.1208161633021.4850@kaball.uk.xensource.com>
Content-Description: 
Content-Disposition: attachment; filename="vexpress-v2m-rs1.dtsi"

LyoNCiAqIEFSTSBMdGQuIFZlcnNhdGlsZSBFeHByZXNzDQogKg0KICogTW90
aGVyYm9hcmQgRXhwcmVzcyB1QVRYDQogKiBWMk0tUDENCiAqDQogKiBIQkkt
MDE5MEQNCiAqDQogKiBSUzEgbWVtb3J5IG1hcCAoIkFSTSBDb3J0ZXgtQSBT
ZXJpZXMgbWVtb3J5IG1hcCIgaW4gdGhlIGJvYXJkJ3MNCiAqIFRlY2huaWNh
bCBSZWZlcmVuY2UgTWFudWFsKQ0KICoNCiAqIFdBUk5JTkchIFRoZSBoYXJk
d2FyZSBkZXNjcmliZWQgaW4gdGhpcyBmaWxlIGlzIGluZGVwZW5kZW50IGZy
b20gdGhlDQogKiBvcmlnaW5hbCB2YXJpYW50ICh2ZXhwcmVzcy12Mm0uZHRz
aSksIGJ1dCB0aGVyZSBpcyBhIHN0cm9uZw0KICogY29ycmVzcG9uZGVuY2Ug
YmV0d2VlbiB0aGUgdHdvIGNvbmZpZ3VyYXRpb25zLg0KICoNCiAqIFRBS0Ug
Q0FSRSBXSEVOIE1BSU5UQUlOSU5HIFRISVMgRklMRSBUTyBQUk9QQUdBVEUg
QU5ZIFJFTEVWQU5UDQogKiBDSEFOR0VTIFRPIHZleHByZXNzLXYybS5kdHNp
IQ0KICovDQoNCi8gew0KCWFsaWFzZXMgew0KCQlhcm0sdjJtX3RpbWVyID0g
JnYybV90aW1lcjAxOw0KCX07DQoNCgltb3RoZXJib2FyZCB7DQoJCWNvbXBh
dGlibGUgPSAic2ltcGxlLWJ1cyI7DQoJCWFybSx2Mm0tbWVtb3J5LW1hcCA9
ICJyczEiOw0KCQkjYWRkcmVzcy1jZWxscyA9IDwyPjsgLyogU01CIGNoaXBz
ZWxlY3QgbnVtYmVyIGFuZCBvZmZzZXQgKi8NCgkJI3NpemUtY2VsbHMgPSA8
MT47DQoJCSNpbnRlcnJ1cHQtY2VsbHMgPSA8MT47DQoNCgkJZmxhc2hAMCww
MDAwMDAwMCB7DQoJCQljb21wYXRpYmxlID0gImFybSx2ZXhwcmVzcy1mbGFz
aCIsICJjZmktZmxhc2giOw0KCQkJcmVnID0gPDAgMHgwMDAwMDAwMCAweDA0
MDAwMDAwPiwNCgkJCSAgICAgIDw0IDB4MDAwMDAwMDAgMHgwNDAwMDAwMD47
DQoJCQliYW5rLXdpZHRoID0gPDQ+Ow0KCQl9Ow0KDQoJCXBzcmFtQDEsMDAw
MDAwMDAgew0KCQkJY29tcGF0aWJsZSA9ICJhcm0sdmV4cHJlc3MtcHNyYW0i
LCAibXRkLXJhbSI7DQoJCQlyZWcgPSA8MSAweDAwMDAwMDAwIDB4MDIwMDAw
MDA+Ow0KCQkJYmFuay13aWR0aCA9IDw0PjsNCgkJfTsNCg0KCQl2cmFtQDIs
MDAwMDAwMDAgew0KCQkJY29tcGF0aWJsZSA9ICJhcm0sdmV4cHJlc3MtdnJh
bSI7DQoJCQlyZWcgPSA8MiAweDAwMDAwMDAwIDB4MDA4MDAwMDA+Ow0KCQl9
Ow0KDQoJCWV0aGVybmV0QDIsMDIwMDAwMDAgew0KCQkJY29tcGF0aWJsZSA9
ICJzbXNjLGxhbjkxYzExMSI7DQogCQkJcmVnID0gPDIgMHgwMjAwMDAwMCAw
eDEwMDAwPjsNCiAJCQlpbnRlcnJ1cHRzID0gPDE1PjsNCgkJfTsNCg0KCQl1
c2JAMiwwMzAwMDAwMCB7DQoJCQljb21wYXRpYmxlID0gIm54cCx1c2ItaXNw
MTc2MSI7DQoJCQlyZWcgPSA8MiAweDAzMDAwMDAwIDB4MjAwMDA+Ow0KCQkJ
aW50ZXJydXB0cyA9IDwxNj47DQoJCQlwb3J0MS1vdGc7DQoJCX07DQoNCgkJ
aW9mcGdhQDMsMDAwMDAwMDAgew0KCQkJY29tcGF0aWJsZSA9ICJhcm0sYW1i
YS1idXMiLCAic2ltcGxlLWJ1cyI7DQoJCQkjYWRkcmVzcy1jZWxscyA9IDwx
PjsNCgkJCSNzaXplLWNlbGxzID0gPDE+Ow0KCQkJcmFuZ2VzID0gPDAgMyAw
IDB4MjAwMDAwPjsNCg0KCQkJc3lzcmVnQDAxMDAwMCB7DQoJCQkJY29tcGF0
aWJsZSA9ICJhcm0sdmV4cHJlc3Mtc3lzcmVnIjsNCgkJCQlyZWcgPSA8MHgw
MTAwMDAgMHgxMDAwPjsNCgkJCX07DQoNCgkJCXN5c2N0bEAwMjAwMDAgew0K
CQkJCWNvbXBhdGlibGUgPSAiYXJtLHNwODEwIiwgImFybSxwcmltZWNlbGwi
Ow0KCQkJCXJlZyA9IDwweDAyMDAwMCAweDEwMDA+Ow0KCQkJfTsNCg0KCQkJ
LyogUENJLUUgSTJDIGJ1cyAqLw0KCQkJdjJtX2kyY19wY2llOiBpMmNAMDMw
MDAwIHsNCgkJCQljb21wYXRpYmxlID0gImFybSx2ZXJzYXRpbGUtaTJjIjsN
CgkJCQlyZWcgPSA8MHgwMzAwMDAgMHgxMDAwPjsNCg0KCQkJCSNhZGRyZXNz
LWNlbGxzID0gPDE+Ow0KCQkJCSNzaXplLWNlbGxzID0gPDA+Ow0KDQoJCQkJ
cGNpZS1zd2l0Y2hANjAgew0KCQkJCQljb21wYXRpYmxlID0gImlkdCw4OWhw
ZXMzMmg4IjsNCgkJCQkJcmVnID0gPDB4NjA+Ow0KCQkJCX07DQoJCQl9Ow0K
DQoJCQlhYWNpQDA0MDAwMCB7DQoJCQkJY29tcGF0aWJsZSA9ICJhcm0scGww
NDEiLCAiYXJtLHByaW1lY2VsbCI7DQoJCQkJcmVnID0gPDB4MDQwMDAwIDB4
MTAwMD47DQoJCQkJaW50ZXJydXB0cyA9IDwxMT47DQoJCQl9Ow0KDQoJCQlt
bWNpQDA1MDAwMCB7DQoJCQkJY29tcGF0aWJsZSA9ICJhcm0scGwxODAiLCAi
YXJtLHByaW1lY2VsbCI7DQoJCQkJcmVnID0gPDB4MDUwMDAwIDB4MTAwMD47
DQoJCQkJaW50ZXJydXB0cyA9IDw5IDEwPjsNCgkJCX07DQoNCgkJCWttaUAw
NjAwMDAgew0KCQkJCWNvbXBhdGlibGUgPSAiYXJtLHBsMDUwIiwgImFybSxw
cmltZWNlbGwiOw0KCQkJCXJlZyA9IDwweDA2MDAwMCAweDEwMDA+Ow0KCQkJ
CWludGVycnVwdHMgPSA8MTI+Ow0KCQkJfTsNCg0KCQkJa21pQDA3MDAwMCB7
DQoJCQkJY29tcGF0aWJsZSA9ICJhcm0scGwwNTAiLCAiYXJtLHByaW1lY2Vs
bCI7DQoJCQkJcmVnID0gPDB4MDcwMDAwIDB4MTAwMD47DQoJCQkJaW50ZXJy
dXB0cyA9IDwxMz47DQoJCQl9Ow0KDQoJCQl2Mm1fc2VyaWFsMDogdWFydEAw
OTAwMDAgew0KCQkJCWNvbXBhdGlibGUgPSAiYXJtLHBsMDExIiwgImFybSxw
cmltZWNlbGwiOw0KCQkJCXJlZyA9IDwweDA5MDAwMCAweDEwMDA+Ow0KCQkJ
CWludGVycnVwdHMgPSA8NT47DQoJCQl9Ow0KDQoJCQl2Mm1fc2VyaWFsMTog
dWFydEAwYTAwMDAgew0KCQkJCWNvbXBhdGlibGUgPSAiYXJtLHBsMDExIiwg
ImFybSxwcmltZWNlbGwiOw0KCQkJCXJlZyA9IDwweDBhMDAwMCAweDEwMDA+
Ow0KCQkJCWludGVycnVwdHMgPSA8Nj47DQoJCQl9Ow0KDQoJCQl2Mm1fc2Vy
aWFsMjogdWFydEAwYjAwMDAgew0KCQkJCWNvbXBhdGlibGUgPSAiYXJtLHBs
MDExIiwgImFybSxwcmltZWNlbGwiOw0KCQkJCXJlZyA9IDwweDBiMDAwMCAw
eDEwMDA+Ow0KCQkJCWludGVycnVwdHMgPSA8Nz47DQoJCQl9Ow0KDQoJCQl2
Mm1fc2VyaWFsMzogdWFydEAwYzAwMDAgew0KCQkJCWNvbXBhdGlibGUgPSAi
YXJtLHBsMDExIiwgImFybSxwcmltZWNlbGwiOw0KCQkJCXJlZyA9IDwweDBj
MDAwMCAweDEwMDA+Ow0KCQkJCWludGVycnVwdHMgPSA8OD47DQoJCQl9Ow0K
DQoJCQl3ZHRAMGYwMDAwIHsNCgkJCQljb21wYXRpYmxlID0gImFybSxzcDgw
NSIsICJhcm0scHJpbWVjZWxsIjsNCgkJCQlyZWcgPSA8MHgwZjAwMDAgMHgx
MDAwPjsNCgkJCQlpbnRlcnJ1cHRzID0gPDA+Ow0KCQkJfTsNCg0KCQkJdjJt
X3RpbWVyMDE6IHRpbWVyQDExMDAwMCB7DQoJCQkJY29tcGF0aWJsZSA9ICJh
cm0sc3A4MDQiLCAiYXJtLHByaW1lY2VsbCI7DQoJCQkJcmVnID0gPDB4MTEw
MDAwIDB4MTAwMD47DQoJCQkJaW50ZXJydXB0cyA9IDwyPjsNCgkJCX07DQoN
CgkJCXYybV90aW1lcjIzOiB0aW1lckAxMjAwMDAgew0KCQkJCWNvbXBhdGli
bGUgPSAiYXJtLHNwODA0IiwgImFybSxwcmltZWNlbGwiOw0KCQkJCXJlZyA9
IDwweDEyMDAwMCAweDEwMDA+Ow0KCQkJfTsNCg0KCQkJLyogRFZJIEkyQyBi
dXMgKi8NCgkJCXYybV9pMmNfZHZpOiBpMmNAMTYwMDAwIHsNCgkJCQljb21w
YXRpYmxlID0gImFybSx2ZXJzYXRpbGUtaTJjIjsNCgkJCQlyZWcgPSA8MHgx
NjAwMDAgMHgxMDAwPjsNCg0KCQkJCSNhZGRyZXNzLWNlbGxzID0gPDE+Ow0K
CQkJCSNzaXplLWNlbGxzID0gPDA+Ow0KDQoJCQkJZHZpLXRyYW5zbWl0dGVy
QDM5IHsNCgkJCQkJY29tcGF0aWJsZSA9ICJzaWwsc2lpOTAyMi10cGkiLCAi
c2lsLHNpaTkwMjIiOw0KCQkJCQlyZWcgPSA8MHgzOT47DQoJCQkJfTsNCg0K
CQkJCWR2aS10cmFuc21pdHRlckA2MCB7DQoJCQkJCWNvbXBhdGlibGUgPSAi
c2lsLHNpaTkwMjItY3BpIiwgInNpbCxzaWk5MDIyIjsNCgkJCQkJcmVnID0g
PDB4NjA+Ow0KCQkJCX07DQoJCQl9Ow0KDQoJCQlydGNAMTcwMDAwIHsNCgkJ
CQljb21wYXRpYmxlID0gImFybSxwbDAzMSIsICJhcm0scHJpbWVjZWxsIjsN
CgkJCQlyZWcgPSA8MHgxNzAwMDAgMHgxMDAwPjsNCgkJCQlpbnRlcnJ1cHRz
ID0gPDQ+Ow0KCQkJfTsNCg0KCQkJY29tcGFjdC1mbGFzaEAxYTAwMDAgew0K
CQkJCWNvbXBhdGlibGUgPSAiYXJtLHZleHByZXNzLWNmIiwgImF0YS1nZW5l
cmljIjsNCgkJCQlyZWcgPSA8MHgxYTAwMDAgMHgxMDANCgkJCQkgICAgICAg
MHgxYTAxMDAgMHhmMDA+Ow0KCQkJCXJlZy1zaGlmdCA9IDwyPjsNCgkJCX07
DQoNCgkJCWNsY2RAMWYwMDAwIHsNCgkJCQljb21wYXRpYmxlID0gImFybSxw
bDExMSIsICJhcm0scHJpbWVjZWxsIjsNCgkJCQlyZWcgPSA8MHgxZjAwMDAg
MHgxMDAwPjsNCgkJCQlpbnRlcnJ1cHRzID0gPDE0PjsNCgkJCX07DQoJCX07
DQoJfTsNCn07DQo=

--_004_alpineDEB20212080614280604645kaballukxensourcecom_
Content-Type: text/plain; charset="US-ASCII"; name="vexpress-v2p-ca15-tc1.dts"
Content-Transfer-Encoding: BASE64
Content-ID: <alpine.DEB.2.02.1208161633022.4850@kaball.uk.xensource.com>
Content-Description: 
Content-Disposition: attachment; filename="vexpress-v2p-ca15-tc1.dts"

LyoNCiAqIEFSTSBMdGQuIFZlcnNhdGlsZSBFeHByZXNzDQogKg0KICogQ29y
ZVRpbGUgRXhwcmVzcyBBMTV4MiAodmVyc2lvbiB3aXRoIFRlc3QgQ2hpcCAx
KQ0KICogQ29ydGV4LUExNSBNUENvcmUgKFYyUC1DQTE1KQ0KICoNCiAqIEhC
SS0wMjM3QQ0KICovDQoNCi9kdHMtdjEvOw0KDQovIHsNCgltb2RlbCA9ICJW
MlAtQ0ExNSI7DQoJYXJtLGhiaSA9IDwweDIzNz47DQoJY29tcGF0aWJsZSA9
ICJhcm0sdmV4cHJlc3MsdjJwLWNhMTUsdGMxIiwgImFybSx2ZXhwcmVzcyx2
MnAtY2ExNSIsICJhcm0sdmV4cHJlc3MiOw0KCWludGVycnVwdC1wYXJlbnQg
PSA8JmdpYz47DQoJI2FkZHJlc3MtY2VsbHMgPSA8MT47DQoJI3NpemUtY2Vs
bHMgPSA8MT47DQoNCgljaG9zZW4gew0KICAgICAgICAgICAgICAgICBib290
YXJncyA9ICJkb20wX21lbT0xMjhNIjsNCiAgICAgICAgICAgICAgICAgeGVu
LGRvbTAtYm9vdGFyZ3MgPSAiZWFybHlwcmludGsgY29uc29sZT10dHlBTUEx
IHJvb3Q9L2Rldi9tbWNibGswIGRlYnVnIHJ3IjsNCgl9Ow0KDQoJYWxpYXNl
cyB7DQoJCXNlcmlhbDAgPSAmdjJtX3NlcmlhbDA7DQoJCXNlcmlhbDEgPSAm
djJtX3NlcmlhbDE7DQoJCXNlcmlhbDIgPSAmdjJtX3NlcmlhbDI7DQoJCXNl
cmlhbDMgPSAmdjJtX3NlcmlhbDM7DQoJCWkyYzAgPSAmdjJtX2kyY19kdmk7
DQoJCWkyYzEgPSAmdjJtX2kyY19wY2llOw0KCX07DQoNCgljcHVzIHsNCgkJ
I2FkZHJlc3MtY2VsbHMgPSA8MT47DQoJCSNzaXplLWNlbGxzID0gPDA+Ow0K
DQoJCWNwdUAwIHsNCgkJCWRldmljZV90eXBlID0gImNwdSI7DQoJCQljb21w
YXRpYmxlID0gImFybSxjb3J0ZXgtYTE1IjsNCgkJCXJlZyA9IDwwPjsNCgkJ
fTsNCgl9Ow0KDQoJbWVtb3J5IHsNCgkJZGV2aWNlX3R5cGUgPSAibWVtb3J5
IjsNCgkJcmVnID0gPDB4ODAwMDAwMDAgMHg4MDAwMDAwMD47DQoJfTsNCg0K
CWdpYzogaW50ZXJydXB0LWNvbnRyb2xsZXJAMmMwMDEwMDAgew0KCQljb21w
YXRpYmxlID0gImFybSxjb3J0ZXgtYTE1LWdpYyIsICJhcm0sY29ydGV4LWE5
LWdpYyI7DQoJCSNpbnRlcnJ1cHQtY2VsbHMgPSA8Mz47DQoJCSNhZGRyZXNz
LWNlbGxzID0gPDA+Ow0KCQlpbnRlcnJ1cHQtY29udHJvbGxlcjsNCgkJcmVn
ID0gPDB4MmMwMDEwMDAgMHgxMDAwPiwNCgkJICAgICAgPDB4MmMwMDIwMDAg
MHgxMDAwPiwNCgkJICAgICAgPDB4MmMwMDQwMDAgMHgyMDAwPiwNCgkJICAg
ICAgPDB4MmMwMDYwMDAgMHgyMDAwPjsNCgkJaW50ZXJydXB0cyA9IDwxIDkg
MHhmMDQ+Ow0KCX07DQoNCgl0aW1lciB7DQoJCWNvbXBhdGlibGUgPSAiYXJt
LGFybXY3LXRpbWVyIjsNCgkJaW50ZXJydXB0cyA9IDwxIDEzIDB4ZjA4PiwN
CgkJCSAgICAgPDEgMTQgMHhmMDg+LA0KCQkJICAgICA8MSAxMSAweGYwOD4s
DQoJCQkgICAgIDwxIDEwIDB4ZjA4PjsNCgl9Ow0KDQoJcG11IHsNCgkJY29t
cGF0aWJsZSA9ICJhcm0sY29ydGV4LWExNS1wbXUiLCAiYXJtLGNvcnRleC1h
OS1wbXUiOw0KCQlpbnRlcnJ1cHRzID0gPDAgNjggND4sDQoJCQkgICAgIDww
IDY5IDQ+Ow0KCX07DQoJDQoJaHlwZXJ2aXNvciB7DQoJCWNvbXBhdGlibGUg
PSAieGVuLHhlbiIsICJ4ZW4seGVuLTQuMiI7DQoJCXJlZyA9IDwweGIwMDAw
MDAwIDB4MjAwMDA+Ow0KCQlpbnRlcnJ1cHRzID0gPDEgMTUgMHhmMDg+Ow0K
CX07DQoNCgltb3RoZXJib2FyZCB7DQoJCXJhbmdlcyA9IDwwIDAgMHgwODAw
MDAwMCAweDA0MDAwMDAwPiwNCgkJCSA8MSAwIDB4MTQwMDAwMDAgMHgwNDAw
MDAwMD4sDQoJCQkgPDIgMCAweDE4MDAwMDAwIDB4MDQwMDAwMDA+LA0KCQkJ
IDwzIDAgMHgxYzAwMDAwMCAweDA0MDAwMDAwPiwNCgkJCSA8NCAwIDB4MGMw
MDAwMDAgMHgwNDAwMDAwMD4sDQoJCQkgPDUgMCAweDEwMDAwMDAwIDB4MDQw
MDAwMDA+Ow0KDQoJCWludGVycnVwdC1tYXAtbWFzayA9IDwwIDAgNjM+Ow0K
CQlpbnRlcnJ1cHQtbWFwID0gPDAgMCAgMCAmZ2ljIDAgIDAgND4sDQoJCQkJ
PDAgMCAgMSAmZ2ljIDAgIDEgND4sDQoJCQkJPDAgMCAgMiAmZ2ljIDAgIDIg
ND4sDQoJCQkJPDAgMCAgMyAmZ2ljIDAgIDMgND4sDQoJCQkJPDAgMCAgNCAm
Z2ljIDAgIDQgND4sDQoJCQkJPDAgMCAgNSAmZ2ljIDAgIDUgND4sDQoJCQkJ
PDAgMCAgNiAmZ2ljIDAgIDYgND4sDQoJCQkJPDAgMCAgNyAmZ2ljIDAgIDcg
ND4sDQoJCQkJPDAgMCAgOCAmZ2ljIDAgIDggND4sDQoJCQkJPDAgMCAgOSAm
Z2ljIDAgIDkgND4sDQoJCQkJPDAgMCAxMCAmZ2ljIDAgMTAgND4sDQoJCQkJ
PDAgMCAxMSAmZ2ljIDAgMTEgND4sDQoJCQkJPDAgMCAxMiAmZ2ljIDAgMTIg
ND4sDQoJCQkJPDAgMCAxMyAmZ2ljIDAgMTMgND4sDQoJCQkJPDAgMCAxNCAm
Z2ljIDAgMTQgND4sDQoJCQkJPDAgMCAxNSAmZ2ljIDAgMTUgND4sDQoJCQkJ
PDAgMCAxNiAmZ2ljIDAgMTYgND4sDQoJCQkJPDAgMCAxNyAmZ2ljIDAgMTcg
ND4sDQoJCQkJPDAgMCAxOCAmZ2ljIDAgMTggND4sDQoJCQkJPDAgMCAxOSAm
Z2ljIDAgMTkgND4sDQoJCQkJPDAgMCAyMCAmZ2ljIDAgMjAgND4sDQoJCQkJ
PDAgMCAyMSAmZ2ljIDAgMjEgND4sDQoJCQkJPDAgMCAyMiAmZ2ljIDAgMjIg
ND4sDQoJCQkJPDAgMCAyMyAmZ2ljIDAgMjMgND4sDQoJCQkJPDAgMCAyNCAm
Z2ljIDAgMjQgND4sDQoJCQkJPDAgMCAyNSAmZ2ljIDAgMjUgND4sDQoJCQkJ
PDAgMCAyNiAmZ2ljIDAgMjYgND4sDQoJCQkJPDAgMCAyNyAmZ2ljIDAgMjcg
ND4sDQoJCQkJPDAgMCAyOCAmZ2ljIDAgMjggND4sDQoJCQkJPDAgMCAyOSAm
Z2ljIDAgMjkgND4sDQoJCQkJPDAgMCAzMCAmZ2ljIDAgMzAgND4sDQoJCQkJ
PDAgMCAzMSAmZ2ljIDAgMzEgND4sDQoJCQkJPDAgMCAzMiAmZ2ljIDAgMzIg
ND4sDQoJCQkJPDAgMCAzMyAmZ2ljIDAgMzMgND4sDQoJCQkJPDAgMCAzNCAm
Z2ljIDAgMzQgND4sDQoJCQkJPDAgMCAzNSAmZ2ljIDAgMzUgND4sDQoJCQkJ
PDAgMCAzNiAmZ2ljIDAgMzYgND4sDQoJCQkJPDAgMCAzNyAmZ2ljIDAgMzcg
ND4sDQoJCQkJPDAgMCAzOCAmZ2ljIDAgMzggND4sDQoJCQkJPDAgMCAzOSAm
Z2ljIDAgMzkgND4sDQoJCQkJPDAgMCA0MCAmZ2ljIDAgNDAgND4sDQoJCQkJ
PDAgMCA0MSAmZ2ljIDAgNDEgND4sDQoJCQkJPDAgMCA0MiAmZ2ljIDAgNDIg
ND47DQoJfTsNCn07DQoNCi9pbmNsdWRlLyAidmV4cHJlc3MtdjJtLXJzMS5k
dHNpIg0K

--_004_alpineDEB20212080614280604645kaballukxensourcecom_
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--_004_alpineDEB20212080614280604645kaballukxensourcecom_--


From xen-devel-bounces@lists.xen.org Thu Aug 16 15:34:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 15:34:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T225N-0007mQ-Rk; Thu, 16 Aug 2012 15:34:13 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T225L-0007lm-Rg
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 15:34:12 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1345131244!9609035!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkyODE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11973 invoked from network); 16 Aug 2012 15:34:05 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 15:34:05 -0000
X-IronPort-AV: E=Sophos;i="4.77,778,1336348800"; 
	d="dts'?dtsi'?scan'208";a="14044587"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 15:34:04 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 16 Aug 2012 16:34:04 +0100
Date: Thu, 16 Aug 2012 16:33:47 +0100
From: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Message-ID: <alpine.DEB.2.02.1208161618550.4850@kaball.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Content-Type: multipart/mixed;
	boundary="_004_alpineDEB20212080614280604645kaballukxensourcecom_"
Content-ID: <alpine.DEB.2.02.1208161618551.4850@kaball.uk.xensource.com>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linaro-dev@lists.linaro.org" <linaro-dev@lists.linaro.org>,
	Ian Campbell <Ian.Campbell@citrix.com>, "arnd@arndb.de" <arnd@arndb.de>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"catalin.marinas@arm.com" <catalin.marinas@arm.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>, "Tim
	\(Xen.org\)" <tim@xen.org>, "linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: [Xen-devel] [PATCH v3 00/23] Introduce Xen support on ARM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--_004_alpineDEB20212080614280604645kaballukxensourcecom_
Content-Type: text/plain; charset="US-ASCII"
Content-ID: <alpine.DEB.2.02.1208161618552.4850@kaball.uk.xensource.com>

Hi all,
this patch series implements Xen support for ARMv7 with virtualization
extensions.  It allows a Linux guest to boot as dom0 and
as domU on Xen on ARM. PV console, disk and network frontends and
backends are all working correctly.

It has been tested on a Versatile Express Cortex A15 emulator, using the
latest Xen ARM development branch
(git://xenbits.xen.org/people/ianc/xen-unstable.git arm-for-4.3) plus
the "ARM hypercall ABI: 64 bit ready" patch series
(http://marc.info/?l=xen-devel&m=134426267205408), and a simple ad-hoc
tool to build guest domains (http://marc.info/?l=xen-devel&m=134089788016546).

The patch marked with [HACK] shouldn't be applied and is part of the
series only because it is needed to create domUs.

I am also attaching to this email the dts files that I am currently
using for dom0 and domU: vexpress-v2p-ca15-tc1.dts (which includes
vexpress-v2m-rs1-rtsm.dtsi) is the dts used for dom0 and is passed to
Linux by Xen, while vexpress-virt.dts is the dts used for other domUs
and is appended in binary form to the guest kernel image. I am not sure
where these files are supposed to live yet, so I am just attaching them
here so that people can actually try out this series if they want to.
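
For reference, the piece that makes these dts files Xen-aware is the
hypervisor node. A minimal sketch of the node as it appears in the
attached files (the address and interrupt values below come from the
attachments and may differ on other platforms):

```dts
/* Xen hypervisor node, as contained in the attached dts files.
 * The "xen,xen" compatible string drives guest-side Xen detection,
 * reg describes a memory region reserved for Xen (e.g. for grant
 * table mappings), and interrupts is the Xen event channel PPI. */
hypervisor {
	compatible = "xen,xen", "xen,xen-4.2";
	reg = <0xb0000000 0x20000>;
	interrupts = <1 15 0xf08>;
};
```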

Comments are very welcome!


Changes in v3:
- move the patches that have been picked up by Konrad to the end of the
  series;
- improve comments;
- add a doc to describe the Xen Device Tree format;
- do not use xen_ulong_t for multicalls and apic_physbase;
- add a patch at the end of the series to use the new __HVC macro;
- add missing pvclock-abi.h include to ia64 header files;
- do not use an anonymous union in struct xen_add_to_physmap.


Changes in v2:
- fix up many comments and commit messages;
- remove the early_printk patches: rely on the emulated serial for now;
- remove the xen_guest_init patch: without any PV early_printk, we don't
  need any early call to xen_guest_init, we can rely on core_initcall
  alone;
- define a HYPERCALL macro for 5-argument hypercall wrappers, even if
  it is unused at the moment;
- use ldm instead of pop in the hypercall wrappers;
- return -ENOSYS rather than -1 from the unimplemented grant_table
  functions;
- remove the pvclock ifdef in the Xen headers;
- remove include linux/types.h from xen/interface/xen.h;
- replace pr_info with pr_debug in xen_guest_init;
- add a new patch to introduce xen_ulong_t and use it to replace all
  the occurrences of unsigned long in the public Xen interface;
- explicitly size all the pointers to 64 bit on ARM, so that the
  hypercall ABI is "64 bit ready";
- clean up xenbus_init;
- make pci.o depend on CONFIG_PCI and acpi.o depend on CONFIG_ACPI;
- mark Xen guest support on ARM as EXPERIMENTAL;
- introduce GRANT_TABLE_PHYSADDR;
- remove unneeded initialization of boot_max_nr_grant_frames;
- add a new patch to clear IRQ_NOAUTOEN and IRQ_NOREQUEST in events.c;
- return -EINVAL from xen_remap_domain_mfn_range if
  auto_translated_physmap;
- retain binary compatibility in xen_add_to_physmap: use a union to
  introduce foreign_domid.


Ian Campbell (1):
      [HACK] xen/arm: implement xen_remap_domain_mfn_range

Stefano Stabellini (24):
      arm: initial Xen support
      xen/arm: hypercalls
      xen/arm: page.h definitions
      xen/arm: sync_bitops
      xen/arm: empty implementation of grant_table arch specific functions
      docs: Xen ARM DT bindings
      xen/arm: Xen detection and shared_info page mapping
      xen/arm: Introduce xen_pfn_t for pfn and mfn types
      xen/arm: Introduce xen_ulong_t for unsigned long
      xen/arm: compile and run xenbus
      xen: do not compile manage, balloon, pci, acpi and cpu_hotplug on ARM
      xen/arm: introduce CONFIG_XEN on ARM
      xen/arm: get privilege status
      xen/arm: initialize grant_table on ARM
      xen/arm: receive Xen events on ARM
      xen: clear IRQ_NOAUTOEN and IRQ_NOREQUEST
      xen/arm: implement alloc/free_xenballooned_pages with alloc_pages/kfree
      xen: allow privcmd for HVM guests
      xen/arm: compile blkfront and blkback
      xen/arm: compile netback
      arm/v2m: initialize arch_timers even if v2m_timer is not present
      xen/arm: use the __HVC macro
      xen: missing includes
      xen: update xen_add_to_physmap interface

 Documentation/devicetree/bindings/arm/xen.txt |   22 +++
 arch/arm/Kconfig                              |   10 +
 arch/arm/Makefile                             |    1 +
 arch/arm/include/asm/hypervisor.h             |    6 +
 arch/arm/include/asm/sync_bitops.h            |   27 +++
 arch/arm/include/asm/xen/events.h             |   18 ++
 arch/arm/include/asm/xen/hypercall.h          |   69 +++++++
 arch/arm/include/asm/xen/hypervisor.h         |   19 ++
 arch/arm/include/asm/xen/interface.h          |   73 ++++++++
 arch/arm/include/asm/xen/page.h               |   82 ++++++++
 arch/arm/mach-vexpress/v2m.c                  |   11 +-
 arch/arm/xen/Makefile                         |    1 +
 arch/arm/xen/enlighten.c                      |  245 +++++++++++++++++++++++++
 arch/arm/xen/grant-table.c                    |   53 ++++++
 arch/arm/xen/hypercall.S                      |  102 ++++++++++
 arch/ia64/include/asm/xen/interface.h         |    8 +-
 arch/x86/include/asm/xen/interface.h          |    8 +
 arch/x86/xen/enlighten.c                      |    1 +
 arch/x86/xen/irq.c                            |    1 +
 arch/x86/xen/mmu.c                            |    3 +
 arch/x86/xen/xen-ops.h                        |    1 -
 drivers/block/xen-blkback/blkback.c           |    1 +
 drivers/net/xen-netback/netback.c             |    1 +
 drivers/net/xen-netfront.c                    |    1 +
 drivers/tty/hvc/hvc_xen.c                     |    2 +
 drivers/xen/Makefile                          |   11 +-
 drivers/xen/events.c                          |   18 ++-
 drivers/xen/grant-table.c                     |    1 +
 drivers/xen/privcmd.c                         |   20 +-
 drivers/xen/xenbus/xenbus_comms.c             |    2 +-
 drivers/xen/xenbus/xenbus_probe.c             |   62 +++++--
 drivers/xen/xenbus/xenbus_probe_frontend.c    |    1 +
 drivers/xen/xenbus/xenbus_xs.c                |    1 +
 drivers/xen/xenfs/super.c                     |    7 +
 include/xen/events.h                          |    2 +
 include/xen/interface/features.h              |    3 +
 include/xen/interface/grant_table.h           |    4 +-
 include/xen/interface/io/protocols.h          |    3 +
 include/xen/interface/memory.h                |   32 ++-
 include/xen/interface/physdev.h               |    2 +-
 include/xen/interface/platform.h              |    4 +-
 include/xen/interface/version.h               |    2 +-
 include/xen/interface/xen.h                   |    7 +-
 include/xen/privcmd.h                         |    3 +-
 include/xen/xen.h                             |    2 +-
 45 files changed, 885 insertions(+), 68 deletions(-)


A branch based on 3.5-rc7 is available here (the __HVC patch is missing
from this branch because it depends on "ARM: opcodes: Facilitate custom
opcode injection" http://marc.info/?l=linux-arm-kernel&m=134442896128124):

git://xenbits.xen.org/people/sstabellini/linux-pvhvm.git 3.5-rc7-arm-3

Cheers,

Stefano
--_004_alpineDEB20212080614280604645kaballukxensourcecom_
Content-Type: text/plain; charset="US-ASCII"; name="vexpress-virt.dts"
Content-Transfer-Encoding: BASE64
Content-ID: <alpine.DEB.2.02.1208161633020.4850@kaball.uk.xensource.com>
Content-Description: 
Content-Disposition: attachment; filename="vexpress-virt.dts"

LyoNCiAqIEFSTSBMdGQuIFZlcnNhdGlsZSBFeHByZXNzDQogKg0KICogQVJN
IEVudmVsb3BlIE1vZGVsIHY3QSAoc2luZ2xlIENQVSkuDQogKi8NCg0KL2R0
cy12MS87DQoNCi9pbmNsdWRlLyAic2tlbGV0b24uZHRzaSINCg0KLyB7DQoJ
bW9kZWwgPSAiVjJQLUFFTXY3QSI7DQoJY29tcGF0aWJsZSA9ICJhcm0sdmV4
cHJlc3MsdjJwLWFlbSx2N2EiLCAiYXJtLHZleHByZXNzLHYycC1hZW0iLCAi
YXJtLHZleHByZXNzIjsNCglpbnRlcnJ1cHQtcGFyZW50ID0gPCZnaWM+Ow0K
DQogICAgICAgIGNob3NlbiB7DQogICAgICAgICAgICAgICAgYm9vdGFyZ3Mg
PSAiZWFybHlwcmludGsgZGVidWcgbG9nbGV2ZWw9OSBjb25zb2xlPWh2YzAg
cm9vdD0vZGV2L3h2ZGEgaW5pdD0vc2Jpbi9pbml0IjsNCiAgICAgICAgfTsN
Cg0KCWNwdXMgew0KCQkjYWRkcmVzcy1jZWxscyA9IDwxPjsNCgkJI3NpemUt
Y2VsbHMgPSA8MD47DQoNCgkJY3B1QDAgew0KCQkJZGV2aWNlX3R5cGUgPSAi
Y3B1IjsNCgkJCWNvbXBhdGlibGUgPSAiYXJtLGNvcnRleC1hMTUiOw0KCQkJ
cmVnID0gPDA+Ow0KCQl9Ow0KCX07DQoNCgltZW1vcnkgew0KCQlkZXZpY2Vf
dHlwZSA9ICJtZW1vcnkiOw0KCQlyZWcgPSA8MHg4MDAwMDAwMCAweDA4MDAw
MDAwPjsNCgl9Ow0KDQoJZ2ljOiBpbnRlcnJ1cHQtY29udHJvbGxlckAyYzAw
MTAwMCB7DQoJCWNvbXBhdGlibGUgPSAiYXJtLGNvcnRleC1hOS1naWMiOw0K
CQkjaW50ZXJydXB0LWNlbGxzID0gPDM+Ow0KCQkjYWRkcmVzcy1jZWxscyA9
IDwwPjsNCgkJaW50ZXJydXB0LWNvbnRyb2xsZXI7DQoJCXJlZyA9IDwweDJj
MDAxMDAwIDB4MTAwMD4sDQoJCSAgICAgIDwweDJjMDAyMDAwIDB4MTAwPjsN
Cgl9Ow0KDQoJdGltZXIgew0KCQljb21wYXRpYmxlID0gImFybSxhcm12Ny10
aW1lciI7DQoJCWludGVycnVwdHMgPSA8MSAxMyAweGYwOD4sDQoJCQkgICAg
IDwxIDE0IDB4ZjA4PiwNCgkJCSAgICAgPDEgMTEgMHhmMDg+LA0KCQkJICAg
ICA8MSAxMCAweGYwOD47DQoJfTsNCg0KCWh5cGVydmlzb3Igew0KCQljb21w
YXRpYmxlID0gInhlbix4ZW4iLCAieGVuLHhlbi00LjIiOw0KCQlyZWcgPSA8
MHhiMDAwMDAwMCAweDIwMDAwPjsNCgkJaW50ZXJydXB0cyA9IDwxIDE1IDB4
ZjA4PjsNCgl9Ow0KDQoJbW90aGVyYm9hcmQgew0KCQlhcm0sdjJtLW1lbW9y
eS1tYXAgPSAicnMxIjsNCgkJcmFuZ2VzID0gPDAgMCAweDA4MDAwMDAwIDB4
MDQwMDAwMDA+LA0KCQkJIDwxIDAgMHgxNDAwMDAwMCAweDA0MDAwMDAwPiwN
CgkJCSA8MiAwIDB4MTgwMDAwMDAgMHgwNDAwMDAwMD4sDQoJCQkgPDMgMCAw
eDFjMDAwMDAwIDB4MDQwMDAwMDA+LA0KCQkJIDw0IDAgMHgwYzAwMDAwMCAw
eDA0MDAwMDAwPiwNCgkJCSA8NSAwIDB4MTAwMDAwMDAgMHgwNDAwMDAwMD47
DQoNCgkJaW50ZXJydXB0LW1hcC1tYXNrID0gPDAgMCA2Mz47DQoJCWludGVy
cnVwdC1tYXAgPSA8MCAwICAwICZnaWMgMCAgMCA0PiwNCgkJCQk8MCAwICAx
ICZnaWMgMCAgMSA0PiwNCgkJCQk8MCAwICAyICZnaWMgMCAgMiA0PiwNCgkJ
CQk8MCAwICAzICZnaWMgMCAgMyA0PiwNCgkJCQk8MCAwICA0ICZnaWMgMCAg
NCA0PiwNCgkJCQk8MCAwICA1ICZnaWMgMCAgNSA0PiwNCgkJCQk8MCAwICA2
ICZnaWMgMCAgNiA0PiwNCgkJCQk8MCAwICA3ICZnaWMgMCAgNyA0PiwNCgkJ
CQk8MCAwICA4ICZnaWMgMCAgOCA0PiwNCgkJCQk8MCAwICA5ICZnaWMgMCAg
OSA0PiwNCgkJCQk8MCAwIDEwICZnaWMgMCAxMCA0PiwNCgkJCQk8MCAwIDEx
ICZnaWMgMCAxMSA0PiwNCgkJCQk8MCAwIDEyICZnaWMgMCAxMiA0PiwNCgkJ
CQk8MCAwIDEzICZnaWMgMCAxMyA0PiwNCgkJCQk8MCAwIDE0ICZnaWMgMCAx
NCA0PiwNCgkJCQk8MCAwIDE1ICZnaWMgMCAxNSA0PiwNCgkJCQk8MCAwIDE2
ICZnaWMgMCAxNiA0PiwNCgkJCQk8MCAwIDE3ICZnaWMgMCAxNyA0PiwNCgkJ
CQk8MCAwIDE4ICZnaWMgMCAxOCA0PiwNCgkJCQk8MCAwIDE5ICZnaWMgMCAx
OSA0PiwNCgkJCQk8MCAwIDIwICZnaWMgMCAyMCA0PiwNCgkJCQk8MCAwIDIx
ICZnaWMgMCAyMSA0PiwNCgkJCQk8MCAwIDIyICZnaWMgMCAyMiA0PiwNCgkJ
CQk8MCAwIDIzICZnaWMgMCAyMyA0PiwNCgkJCQk8MCAwIDI0ICZnaWMgMCAy
NCA0PiwNCgkJCQk8MCAwIDI1ICZnaWMgMCAyNSA0PiwNCgkJCQk8MCAwIDI2
ICZnaWMgMCAyNiA0PiwNCgkJCQk8MCAwIDI3ICZnaWMgMCAyNyA0PiwNCgkJ
CQk8MCAwIDI4ICZnaWMgMCAyOCA0PiwNCgkJCQk8MCAwIDI5ICZnaWMgMCAy
OSA0PiwNCgkJCQk8MCAwIDMwICZnaWMgMCAzMCA0PiwNCgkJCQk8MCAwIDMx
ICZnaWMgMCAzMSA0PiwNCgkJCQk8MCAwIDMyICZnaWMgMCAzMiA0PiwNCgkJ
CQk8MCAwIDMzICZnaWMgMCAzMyA0PiwNCgkJCQk8MCAwIDM0ICZnaWMgMCAz
NCA0PiwNCgkJCQk8MCAwIDM1ICZnaWMgMCAzNSA0PiwNCgkJCQk8MCAwIDM2
ICZnaWMgMCAzNiA0PiwNCgkJCQk8MCAwIDM3ICZnaWMgMCAzNyA0PiwNCgkJ
CQk8MCAwIDM4ICZnaWMgMCAzOCA0PiwNCgkJCQk8MCAwIDM5ICZnaWMgMCAz
OSA0PiwNCgkJCQk8MCAwIDQwICZnaWMgMCA0MCA0PiwNCgkJCQk8MCAwIDQx
ICZnaWMgMCA0MSA0PiwNCgkJCQk8MCAwIDQyICZnaWMgMCA0MiA0PjsNCgl9
Ow0KfTsNCg0KLyogL2luY2x1ZGUvICJ2ZXhwcmVzcy12Mm0tcnMxLXJ0c20u
ZHRzaSIgKi8NCg==

--_004_alpineDEB20212080614280604645kaballukxensourcecom_
Content-Type: text/plain; charset="US-ASCII"; name="vexpress-v2m-rs1.dtsi"
Content-Transfer-Encoding: BASE64
Content-ID: <alpine.DEB.2.02.1208161633021.4850@kaball.uk.xensource.com>
Content-Description: 
Content-Disposition: attachment; filename="vexpress-v2m-rs1.dtsi"

LyoNCiAqIEFSTSBMdGQuIFZlcnNhdGlsZSBFeHByZXNzDQogKg0KICogTW90
aGVyYm9hcmQgRXhwcmVzcyB1QVRYDQogKiBWMk0tUDENCiAqDQogKiBIQkkt
MDE5MEQNCiAqDQogKiBSUzEgbWVtb3J5IG1hcCAoIkFSTSBDb3J0ZXgtQSBT
ZXJpZXMgbWVtb3J5IG1hcCIgaW4gdGhlIGJvYXJkJ3MNCiAqIFRlY2huaWNh
bCBSZWZlcmVuY2UgTWFudWFsKQ0KICoNCiAqIFdBUk5JTkchIFRoZSBoYXJk
d2FyZSBkZXNjcmliZWQgaW4gdGhpcyBmaWxlIGlzIGluZGVwZW5kZW50IGZy
b20gdGhlDQogKiBvcmlnaW5hbCB2YXJpYW50ICh2ZXhwcmVzcy12Mm0uZHRz
aSksIGJ1dCB0aGVyZSBpcyBhIHN0cm9uZw0KICogY29ycmVzcG9uZGVuY2Ug
YmV0d2VlbiB0aGUgdHdvIGNvbmZpZ3VyYXRpb25zLg0KICoNCiAqIFRBS0Ug
Q0FSRSBXSEVOIE1BSU5UQUlOSU5HIFRISVMgRklMRSBUTyBQUk9QQUdBVEUg
QU5ZIFJFTEVWQU5UDQogKiBDSEFOR0VTIFRPIHZleHByZXNzLXYybS5kdHNp
IQ0KICovDQoNCi8gew0KCWFsaWFzZXMgew0KCQlhcm0sdjJtX3RpbWVyID0g
JnYybV90aW1lcjAxOw0KCX07DQoNCgltb3RoZXJib2FyZCB7DQoJCWNvbXBh
dGlibGUgPSAic2ltcGxlLWJ1cyI7DQoJCWFybSx2Mm0tbWVtb3J5LW1hcCA9
ICJyczEiOw0KCQkjYWRkcmVzcy1jZWxscyA9IDwyPjsgLyogU01CIGNoaXBz
ZWxlY3QgbnVtYmVyIGFuZCBvZmZzZXQgKi8NCgkJI3NpemUtY2VsbHMgPSA8
MT47DQoJCSNpbnRlcnJ1cHQtY2VsbHMgPSA8MT47DQoNCgkJZmxhc2hAMCww
MDAwMDAwMCB7DQoJCQljb21wYXRpYmxlID0gImFybSx2ZXhwcmVzcy1mbGFz
aCIsICJjZmktZmxhc2giOw0KCQkJcmVnID0gPDAgMHgwMDAwMDAwMCAweDA0
MDAwMDAwPiwNCgkJCSAgICAgIDw0IDB4MDAwMDAwMDAgMHgwNDAwMDAwMD47
DQoJCQliYW5rLXdpZHRoID0gPDQ+Ow0KCQl9Ow0KDQoJCXBzcmFtQDEsMDAw
MDAwMDAgew0KCQkJY29tcGF0aWJsZSA9ICJhcm0sdmV4cHJlc3MtcHNyYW0i
LCAibXRkLXJhbSI7DQoJCQlyZWcgPSA8MSAweDAwMDAwMDAwIDB4MDIwMDAw
MDA+Ow0KCQkJYmFuay13aWR0aCA9IDw0PjsNCgkJfTsNCg0KCQl2cmFtQDIs
MDAwMDAwMDAgew0KCQkJY29tcGF0aWJsZSA9ICJhcm0sdmV4cHJlc3MtdnJh
bSI7DQoJCQlyZWcgPSA8MiAweDAwMDAwMDAwIDB4MDA4MDAwMDA+Ow0KCQl9
Ow0KDQoJCWV0aGVybmV0QDIsMDIwMDAwMDAgew0KCQkJY29tcGF0aWJsZSA9
ICJzbXNjLGxhbjkxYzExMSI7DQogCQkJcmVnID0gPDIgMHgwMjAwMDAwMCAw
eDEwMDAwPjsNCiAJCQlpbnRlcnJ1cHRzID0gPDE1PjsNCgkJfTsNCg0KCQl1
c2JAMiwwMzAwMDAwMCB7DQoJCQljb21wYXRpYmxlID0gIm54cCx1c2ItaXNw
MTc2MSI7DQoJCQlyZWcgPSA8MiAweDAzMDAwMDAwIDB4MjAwMDA+Ow0KCQkJ
aW50ZXJydXB0cyA9IDwxNj47DQoJCQlwb3J0MS1vdGc7DQoJCX07DQoNCgkJ
aW9mcGdhQDMsMDAwMDAwMDAgew0KCQkJY29tcGF0aWJsZSA9ICJhcm0sYW1i
YS1idXMiLCAic2ltcGxlLWJ1cyI7DQoJCQkjYWRkcmVzcy1jZWxscyA9IDwx
PjsNCgkJCSNzaXplLWNlbGxzID0gPDE+Ow0KCQkJcmFuZ2VzID0gPDAgMyAw
IDB4MjAwMDAwPjsNCg0KCQkJc3lzcmVnQDAxMDAwMCB7DQoJCQkJY29tcGF0
aWJsZSA9ICJhcm0sdmV4cHJlc3Mtc3lzcmVnIjsNCgkJCQlyZWcgPSA8MHgw
MTAwMDAgMHgxMDAwPjsNCgkJCX07DQoNCgkJCXN5c2N0bEAwMjAwMDAgew0K
CQkJCWNvbXBhdGlibGUgPSAiYXJtLHNwODEwIiwgImFybSxwcmltZWNlbGwi
Ow0KCQkJCXJlZyA9IDwweDAyMDAwMCAweDEwMDA+Ow0KCQkJfTsNCg0KCQkJ
LyogUENJLUUgSTJDIGJ1cyAqLw0KCQkJdjJtX2kyY19wY2llOiBpMmNAMDMw
MDAwIHsNCgkJCQljb21wYXRpYmxlID0gImFybSx2ZXJzYXRpbGUtaTJjIjsN
CgkJCQlyZWcgPSA8MHgwMzAwMDAgMHgxMDAwPjsNCg0KCQkJCSNhZGRyZXNz
LWNlbGxzID0gPDE+Ow0KCQkJCSNzaXplLWNlbGxzID0gPDA+Ow0KDQoJCQkJ
cGNpZS1zd2l0Y2hANjAgew0KCQkJCQljb21wYXRpYmxlID0gImlkdCw4OWhw
ZXMzMmg4IjsNCgkJCQkJcmVnID0gPDB4NjA+Ow0KCQkJCX07DQoJCQl9Ow0K
DQoJCQlhYWNpQDA0MDAwMCB7DQoJCQkJY29tcGF0aWJsZSA9ICJhcm0scGww
NDEiLCAiYXJtLHByaW1lY2VsbCI7DQoJCQkJcmVnID0gPDB4MDQwMDAwIDB4
MTAwMD47DQoJCQkJaW50ZXJydXB0cyA9IDwxMT47DQoJCQl9Ow0KDQoJCQlt
bWNpQDA1MDAwMCB7DQoJCQkJY29tcGF0aWJsZSA9ICJhcm0scGwxODAiLCAi
YXJtLHByaW1lY2VsbCI7DQoJCQkJcmVnID0gPDB4MDUwMDAwIDB4MTAwMD47
DQoJCQkJaW50ZXJydXB0cyA9IDw5IDEwPjsNCgkJCX07DQoNCgkJCWttaUAw
NjAwMDAgew0KCQkJCWNvbXBhdGlibGUgPSAiYXJtLHBsMDUwIiwgImFybSxw
cmltZWNlbGwiOw0KCQkJCXJlZyA9IDwweDA2MDAwMCAweDEwMDA+Ow0KCQkJ
CWludGVycnVwdHMgPSA8MTI+Ow0KCQkJfTsNCg0KCQkJa21pQDA3MDAwMCB7
DQoJCQkJY29tcGF0aWJsZSA9ICJhcm0scGwwNTAiLCAiYXJtLHByaW1lY2Vs
bCI7DQoJCQkJcmVnID0gPDB4MDcwMDAwIDB4MTAwMD47DQoJCQkJaW50ZXJy
dXB0cyA9IDwxMz47DQoJCQl9Ow0KDQoJCQl2Mm1fc2VyaWFsMDogdWFydEAw
OTAwMDAgew0KCQkJCWNvbXBhdGlibGUgPSAiYXJtLHBsMDExIiwgImFybSxw
cmltZWNlbGwiOw0KCQkJCXJlZyA9IDwweDA5MDAwMCAweDEwMDA+Ow0KCQkJ
CWludGVycnVwdHMgPSA8NT47DQoJCQl9Ow0KDQoJCQl2Mm1fc2VyaWFsMTog
dWFydEAwYTAwMDAgew0KCQkJCWNvbXBhdGlibGUgPSAiYXJtLHBsMDExIiwg
ImFybSxwcmltZWNlbGwiOw0KCQkJCXJlZyA9IDwweDBhMDAwMCAweDEwMDA+
Ow0KCQkJCWludGVycnVwdHMgPSA8Nj47DQoJCQl9Ow0KDQoJCQl2Mm1fc2Vy
aWFsMjogdWFydEAwYjAwMDAgew0KCQkJCWNvbXBhdGlibGUgPSAiYXJtLHBs
MDExIiwgImFybSxwcmltZWNlbGwiOw0KCQkJCXJlZyA9IDwweDBiMDAwMCAw
eDEwMDA+Ow0KCQkJCWludGVycnVwdHMgPSA8Nz47DQoJCQl9Ow0KDQoJCQl2
Mm1fc2VyaWFsMzogdWFydEAwYzAwMDAgew0KCQkJCWNvbXBhdGlibGUgPSAi
YXJtLHBsMDExIiwgImFybSxwcmltZWNlbGwiOw0KCQkJCXJlZyA9IDwweDBj
MDAwMCAweDEwMDA+Ow0KCQkJCWludGVycnVwdHMgPSA8OD47DQoJCQl9Ow0K
DQoJCQl3ZHRAMGYwMDAwIHsNCgkJCQljb21wYXRpYmxlID0gImFybSxzcDgw
NSIsICJhcm0scHJpbWVjZWxsIjsNCgkJCQlyZWcgPSA8MHgwZjAwMDAgMHgx
MDAwPjsNCgkJCQlpbnRlcnJ1cHRzID0gPDA+Ow0KCQkJfTsNCg0KCQkJdjJt
X3RpbWVyMDE6IHRpbWVyQDExMDAwMCB7DQoJCQkJY29tcGF0aWJsZSA9ICJh
cm0sc3A4MDQiLCAiYXJtLHByaW1lY2VsbCI7DQoJCQkJcmVnID0gPDB4MTEw
MDAwIDB4MTAwMD47DQoJCQkJaW50ZXJydXB0cyA9IDwyPjsNCgkJCX07DQoN
CgkJCXYybV90aW1lcjIzOiB0aW1lckAxMjAwMDAgew0KCQkJCWNvbXBhdGli
bGUgPSAiYXJtLHNwODA0IiwgImFybSxwcmltZWNlbGwiOw0KCQkJCXJlZyA9
IDwweDEyMDAwMCAweDEwMDA+Ow0KCQkJfTsNCg0KCQkJLyogRFZJIEkyQyBi
dXMgKi8NCgkJCXYybV9pMmNfZHZpOiBpMmNAMTYwMDAwIHsNCgkJCQljb21w
YXRpYmxlID0gImFybSx2ZXJzYXRpbGUtaTJjIjsNCgkJCQlyZWcgPSA8MHgx
NjAwMDAgMHgxMDAwPjsNCg0KCQkJCSNhZGRyZXNzLWNlbGxzID0gPDE+Ow0K
CQkJCSNzaXplLWNlbGxzID0gPDA+Ow0KDQoJCQkJZHZpLXRyYW5zbWl0dGVy
QDM5IHsNCgkJCQkJY29tcGF0aWJsZSA9ICJzaWwsc2lpOTAyMi10cGkiLCAi
c2lsLHNpaTkwMjIiOw0KCQkJCQlyZWcgPSA8MHgzOT47DQoJCQkJfTsNCg0K
CQkJCWR2aS10cmFuc21pdHRlckA2MCB7DQoJCQkJCWNvbXBhdGlibGUgPSAi
c2lsLHNpaTkwMjItY3BpIiwgInNpbCxzaWk5MDIyIjsNCgkJCQkJcmVnID0g
PDB4NjA+Ow0KCQkJCX07DQoJCQl9Ow0KDQoJCQlydGNAMTcwMDAwIHsNCgkJ
CQljb21wYXRpYmxlID0gImFybSxwbDAzMSIsICJhcm0scHJpbWVjZWxsIjsN
CgkJCQlyZWcgPSA8MHgxNzAwMDAgMHgxMDAwPjsNCgkJCQlpbnRlcnJ1cHRz
ID0gPDQ+Ow0KCQkJfTsNCg0KCQkJY29tcGFjdC1mbGFzaEAxYTAwMDAgew0K
CQkJCWNvbXBhdGlibGUgPSAiYXJtLHZleHByZXNzLWNmIiwgImF0YS1nZW5l
cmljIjsNCgkJCQlyZWcgPSA8MHgxYTAwMDAgMHgxMDANCgkJCQkgICAgICAg
MHgxYTAxMDAgMHhmMDA+Ow0KCQkJCXJlZy1zaGlmdCA9IDwyPjsNCgkJCX07
DQoNCgkJCWNsY2RAMWYwMDAwIHsNCgkJCQljb21wYXRpYmxlID0gImFybSxw
bDExMSIsICJhcm0scHJpbWVjZWxsIjsNCgkJCQlyZWcgPSA8MHgxZjAwMDAg
MHgxMDAwPjsNCgkJCQlpbnRlcnJ1cHRzID0gPDE0PjsNCgkJCX07DQoJCX07
DQoJfTsNCn07DQo=

--_004_alpineDEB20212080614280604645kaballukxensourcecom_
Content-Type: text/plain; charset="US-ASCII"; name="vexpress-v2p-ca15-tc1.dts"
Content-Transfer-Encoding: BASE64
Content-ID: <alpine.DEB.2.02.1208161633022.4850@kaball.uk.xensource.com>
Content-Description: 
Content-Disposition: attachment; filename="vexpress-v2p-ca15-tc1.dts"

LyoNCiAqIEFSTSBMdGQuIFZlcnNhdGlsZSBFeHByZXNzDQogKg0KICogQ29y
ZVRpbGUgRXhwcmVzcyBBMTV4MiAodmVyc2lvbiB3aXRoIFRlc3QgQ2hpcCAx
KQ0KICogQ29ydGV4LUExNSBNUENvcmUgKFYyUC1DQTE1KQ0KICoNCiAqIEhC
SS0wMjM3QQ0KICovDQoNCi9kdHMtdjEvOw0KDQovIHsNCgltb2RlbCA9ICJW
MlAtQ0ExNSI7DQoJYXJtLGhiaSA9IDwweDIzNz47DQoJY29tcGF0aWJsZSA9
ICJhcm0sdmV4cHJlc3MsdjJwLWNhMTUsdGMxIiwgImFybSx2ZXhwcmVzcyx2
MnAtY2ExNSIsICJhcm0sdmV4cHJlc3MiOw0KCWludGVycnVwdC1wYXJlbnQg
PSA8JmdpYz47DQoJI2FkZHJlc3MtY2VsbHMgPSA8MT47DQoJI3NpemUtY2Vs
bHMgPSA8MT47DQoNCgljaG9zZW4gew0KICAgICAgICAgICAgICAgICBib290
YXJncyA9ICJkb20wX21lbT0xMjhNIjsNCiAgICAgICAgICAgICAgICAgeGVu
LGRvbTAtYm9vdGFyZ3MgPSAiZWFybHlwcmludGsgY29uc29sZT10dHlBTUEx
IHJvb3Q9L2Rldi9tbWNibGswIGRlYnVnIHJ3IjsNCgl9Ow0KDQoJYWxpYXNl
cyB7DQoJCXNlcmlhbDAgPSAmdjJtX3NlcmlhbDA7DQoJCXNlcmlhbDEgPSAm
djJtX3NlcmlhbDE7DQoJCXNlcmlhbDIgPSAmdjJtX3NlcmlhbDI7DQoJCXNl
cmlhbDMgPSAmdjJtX3NlcmlhbDM7DQoJCWkyYzAgPSAmdjJtX2kyY19kdmk7
DQoJCWkyYzEgPSAmdjJtX2kyY19wY2llOw0KCX07DQoNCgljcHVzIHsNCgkJ
I2FkZHJlc3MtY2VsbHMgPSA8MT47DQoJCSNzaXplLWNlbGxzID0gPDA+Ow0K
DQoJCWNwdUAwIHsNCgkJCWRldmljZV90eXBlID0gImNwdSI7DQoJCQljb21w
YXRpYmxlID0gImFybSxjb3J0ZXgtYTE1IjsNCgkJCXJlZyA9IDwwPjsNCgkJ
fTsNCgl9Ow0KDQoJbWVtb3J5IHsNCgkJZGV2aWNlX3R5cGUgPSAibWVtb3J5
IjsNCgkJcmVnID0gPDB4ODAwMDAwMDAgMHg4MDAwMDAwMD47DQoJfTsNCg0K
CWdpYzogaW50ZXJydXB0LWNvbnRyb2xsZXJAMmMwMDEwMDAgew0KCQljb21w
YXRpYmxlID0gImFybSxjb3J0ZXgtYTE1LWdpYyIsICJhcm0sY29ydGV4LWE5
LWdpYyI7DQoJCSNpbnRlcnJ1cHQtY2VsbHMgPSA8Mz47DQoJCSNhZGRyZXNz
LWNlbGxzID0gPDA+Ow0KCQlpbnRlcnJ1cHQtY29udHJvbGxlcjsNCgkJcmVn
ID0gPDB4MmMwMDEwMDAgMHgxMDAwPiwNCgkJICAgICAgPDB4MmMwMDIwMDAg
MHgxMDAwPiwNCgkJICAgICAgPDB4MmMwMDQwMDAgMHgyMDAwPiwNCgkJICAg
ICAgPDB4MmMwMDYwMDAgMHgyMDAwPjsNCgkJaW50ZXJydXB0cyA9IDwxIDkg
MHhmMDQ+Ow0KCX07DQoNCgl0aW1lciB7DQoJCWNvbXBhdGlibGUgPSAiYXJt
LGFybXY3LXRpbWVyIjsNCgkJaW50ZXJydXB0cyA9IDwxIDEzIDB4ZjA4PiwN
CgkJCSAgICAgPDEgMTQgMHhmMDg+LA0KCQkJICAgICA8MSAxMSAweGYwOD4s
DQoJCQkgICAgIDwxIDEwIDB4ZjA4PjsNCgl9Ow0KDQoJcG11IHsNCgkJY29t
cGF0aWJsZSA9ICJhcm0sY29ydGV4LWExNS1wbXUiLCAiYXJtLGNvcnRleC1h
OS1wbXUiOw0KCQlpbnRlcnJ1cHRzID0gPDAgNjggND4sDQoJCQkgICAgIDww
IDY5IDQ+Ow0KCX07DQoJDQoJaHlwZXJ2aXNvciB7DQoJCWNvbXBhdGlibGUg
PSAieGVuLHhlbiIsICJ4ZW4seGVuLTQuMiI7DQoJCXJlZyA9IDwweGIwMDAw
MDAwIDB4MjAwMDA+Ow0KCQlpbnRlcnJ1cHRzID0gPDEgMTUgMHhmMDg+Ow0K
CX07DQoNCgltb3RoZXJib2FyZCB7DQoJCXJhbmdlcyA9IDwwIDAgMHgwODAw
MDAwMCAweDA0MDAwMDAwPiwNCgkJCSA8MSAwIDB4MTQwMDAwMDAgMHgwNDAw
MDAwMD4sDQoJCQkgPDIgMCAweDE4MDAwMDAwIDB4MDQwMDAwMDA+LA0KCQkJ
IDwzIDAgMHgxYzAwMDAwMCAweDA0MDAwMDAwPiwNCgkJCSA8NCAwIDB4MGMw
MDAwMDAgMHgwNDAwMDAwMD4sDQoJCQkgPDUgMCAweDEwMDAwMDAwIDB4MDQw
MDAwMDA+Ow0KDQoJCWludGVycnVwdC1tYXAtbWFzayA9IDwwIDAgNjM+Ow0K
CQlpbnRlcnJ1cHQtbWFwID0gPDAgMCAgMCAmZ2ljIDAgIDAgND4sDQoJCQkJ
PDAgMCAgMSAmZ2ljIDAgIDEgND4sDQoJCQkJPDAgMCAgMiAmZ2ljIDAgIDIg
ND4sDQoJCQkJPDAgMCAgMyAmZ2ljIDAgIDMgND4sDQoJCQkJPDAgMCAgNCAm
Z2ljIDAgIDQgND4sDQoJCQkJPDAgMCAgNSAmZ2ljIDAgIDUgND4sDQoJCQkJ
PDAgMCAgNiAmZ2ljIDAgIDYgND4sDQoJCQkJPDAgMCAgNyAmZ2ljIDAgIDcg
ND4sDQoJCQkJPDAgMCAgOCAmZ2ljIDAgIDggND4sDQoJCQkJPDAgMCAgOSAm
Z2ljIDAgIDkgND4sDQoJCQkJPDAgMCAxMCAmZ2ljIDAgMTAgND4sDQoJCQkJ
PDAgMCAxMSAmZ2ljIDAgMTEgND4sDQoJCQkJPDAgMCAxMiAmZ2ljIDAgMTIg
ND4sDQoJCQkJPDAgMCAxMyAmZ2ljIDAgMTMgND4sDQoJCQkJPDAgMCAxNCAm
Z2ljIDAgMTQgND4sDQoJCQkJPDAgMCAxNSAmZ2ljIDAgMTUgND4sDQoJCQkJ
PDAgMCAxNiAmZ2ljIDAgMTYgND4sDQoJCQkJPDAgMCAxNyAmZ2ljIDAgMTcg
ND4sDQoJCQkJPDAgMCAxOCAmZ2ljIDAgMTggND4sDQoJCQkJPDAgMCAxOSAm
Z2ljIDAgMTkgND4sDQoJCQkJPDAgMCAyMCAmZ2ljIDAgMjAgND4sDQoJCQkJ
PDAgMCAyMSAmZ2ljIDAgMjEgND4sDQoJCQkJPDAgMCAyMiAmZ2ljIDAgMjIg
ND4sDQoJCQkJPDAgMCAyMyAmZ2ljIDAgMjMgND4sDQoJCQkJPDAgMCAyNCAm
Z2ljIDAgMjQgND4sDQoJCQkJPDAgMCAyNSAmZ2ljIDAgMjUgND4sDQoJCQkJ
PDAgMCAyNiAmZ2ljIDAgMjYgND4sDQoJCQkJPDAgMCAyNyAmZ2ljIDAgMjcg
ND4sDQoJCQkJPDAgMCAyOCAmZ2ljIDAgMjggND4sDQoJCQkJPDAgMCAyOSAm
Z2ljIDAgMjkgND4sDQoJCQkJPDAgMCAzMCAmZ2ljIDAgMzAgND4sDQoJCQkJ
PDAgMCAzMSAmZ2ljIDAgMzEgND4sDQoJCQkJPDAgMCAzMiAmZ2ljIDAgMzIg
ND4sDQoJCQkJPDAgMCAzMyAmZ2ljIDAgMzMgND4sDQoJCQkJPDAgMCAzNCAm
Z2ljIDAgMzQgND4sDQoJCQkJPDAgMCAzNSAmZ2ljIDAgMzUgND4sDQoJCQkJ
PDAgMCAzNiAmZ2ljIDAgMzYgND4sDQoJCQkJPDAgMCAzNyAmZ2ljIDAgMzcg
ND4sDQoJCQkJPDAgMCAzOCAmZ2ljIDAgMzggND4sDQoJCQkJPDAgMCAzOSAm
Z2ljIDAgMzkgND4sDQoJCQkJPDAgMCA0MCAmZ2ljIDAgNDAgND4sDQoJCQkJ
PDAgMCA0MSAmZ2ljIDAgNDEgND4sDQoJCQkJPDAgMCA0MiAmZ2ljIDAgNDIg
ND47DQoJfTsNCn07DQoNCi9pbmNsdWRlLyAidmV4cHJlc3MtdjJtLXJzMS5k
dHNpIg0K

--_004_alpineDEB20212080614280604645kaballukxensourcecom_
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--_004_alpineDEB20212080614280604645kaballukxensourcecom_--


From xen-devel-bounces@lists.xen.org Thu Aug 16 15:37:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 15:37:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T227w-00085D-0q; Thu, 16 Aug 2012 15:36:52 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T227u-00084o-Ft
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 15:36:50 +0000
Received: from [85.158.143.99:32290] by server-2.bemta-4.messagelabs.com id
	01/27-31966-1931D205; Thu, 16 Aug 2012 15:36:49 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-11.tower-216.messagelabs.com!1345131407!21011213!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzE3OTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11068 invoked from network); 16 Aug 2012 15:36:49 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 15:36:49 -0000
X-IronPort-AV: E=Sophos;i="4.77,778,1336363200"; d="scan'208";a="205386141"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 11:36:42 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 16 Aug 2012 11:36:41 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1T227b-0007Iu-5w;
	Thu, 16 Aug 2012 16:36:31 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <linux-kernel@vger.kernel.org>
Date: Thu, 16 Aug 2012 16:35:59 +0100
Message-ID: <1345131377-14713-7-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208161618550.4850@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208161618550.4850@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, konrad.wilk@oracle.com,
	catalin.marinas@arm.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	tim@xen.org, linux-arm-kernel@lists.infradead.org
Subject: [Xen-devel] [PATCH v3 07/25] xen/arm: Xen detection and shared_info
	page mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Check for a device tree node compatible with "xen,xen". If one is
present, set xen_domain_type to XEN_HVM_DOMAIN and continue
initialization.

Map the real shared info page using XENMEM_add_to_physmap with
XENMAPSPACE_shared_info.

Changes in v3:

- use the "xen,xen" notation rather than "arm,xen";
- add an additional check on the presence of the Xen version.

Changes in v2:

- replace pr_info with pr_debug.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 arch/arm/xen/enlighten.c |   61 ++++++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 61 insertions(+), 0 deletions(-)

diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
index c535540..78ea1f7 100644
--- a/arch/arm/xen/enlighten.c
+++ b/arch/arm/xen/enlighten.c
@@ -5,6 +5,9 @@
 #include <asm/xen/hypervisor.h>
 #include <asm/xen/hypercall.h>
 #include <linux/module.h>
+#include <linux/of.h>
+#include <linux/of_irq.h>
+#include <linux/of_address.h>
 
 struct start_info _xen_start_info;
 struct start_info *xen_start_info = &_xen_start_info;
@@ -33,3 +36,61 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
 	return -ENOSYS;
 }
 EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_range);
+
+/*
+ * see Documentation/devicetree/bindings/arm/xen.txt for the
+ * documentation of the Xen Device Tree format.
+ */
+static int __init xen_guest_init(void)
+{
+	struct xen_add_to_physmap xatp;
+	static struct shared_info *shared_info_page = 0;
+	struct device_node *node;
+	int len;
+	const char *s = NULL;
+	const char *version = NULL;
+	const char *xen_prefix = "xen,xen-";
+
+	node = of_find_compatible_node(NULL, NULL, "xen,xen");
+	if (!node) {
+		pr_debug("No Xen support\n");
+		return 0;
+	}
+	s = of_get_property(node, "compatible", &len);
+	if (strlen(s) + strlen(xen_prefix) + 1  < len &&
+			!strncmp(xen_prefix, s + strlen(s) + 1, strlen(xen_prefix)))
+		version = s + strlen(s) + strlen(xen_prefix) + 1;
+	if (version == NULL) {
+		pr_debug("Xen version not found\n");
+		return 0;
+	}
+	xen_domain_type = XEN_HVM_DOMAIN;
+
+	if (!shared_info_page)
+		shared_info_page = (struct shared_info *)
+			get_zeroed_page(GFP_KERNEL);
+	if (!shared_info_page) {
+		pr_err("not enough memory\n");
+		return -ENOMEM;
+	}
+	xatp.domid = DOMID_SELF;
+	xatp.idx = 0;
+	xatp.space = XENMAPSPACE_shared_info;
+	xatp.gpfn = __pa(shared_info_page) >> PAGE_SHIFT;
+	if (HYPERVISOR_memory_op(XENMEM_add_to_physmap, &xatp))
+		BUG();
+
+	HYPERVISOR_shared_info = (struct shared_info *)shared_info_page;
+
+	/* xen_vcpu is a pointer to the vcpu_info struct in the shared_info
+	 * page, we use it in the event channel upcall and in some pvclock
+	 * related functions. We don't need the vcpu_info placement
+	 * optimizations because we don't use any pv_mmu or pv_irq op on
+	 * HVM.
+	 * The shared info contains exactly 1 CPU (the boot CPU). The guest
+	 * is required to use VCPUOP_register_vcpu_info to place vcpu info
+	 * for secondary CPUs as they are brought up. */
+	per_cpu(xen_vcpu, 0) = &HYPERVISOR_shared_info->vcpu_info[0];
+	return 0;
+}
+core_initcall(xen_guest_init);
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 15:37:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 15:37:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T227v-000856-Km; Thu, 16 Aug 2012 15:36:51 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T227t-00084i-Va
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 15:36:50 +0000
Received: from [85.158.143.99:16293] by server-1.bemta-4.messagelabs.com id
	63/63-07754-1931D205; Thu, 16 Aug 2012 15:36:49 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-11.tower-216.messagelabs.com!1345131407!21011213!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzE3OTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11040 invoked from network); 16 Aug 2012 15:36:48 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 15:36:48 -0000
X-IronPort-AV: E=Sophos;i="4.77,778,1336363200"; d="scan'208";a="205386128"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 11:36:36 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 16 Aug 2012 11:36:36 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1T227b-0007Iu-03;
	Thu, 16 Aug 2012 16:36:31 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <linux-kernel@vger.kernel.org>
Date: Thu, 16 Aug 2012 16:35:54 +0100
Message-ID: <1345131377-14713-2-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208161618550.4850@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208161618550.4850@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, konrad.wilk@oracle.com,
	catalin.marinas@arm.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	tim@xen.org, linux-arm-kernel@lists.infradead.org
Subject: [Xen-devel] [PATCH v3 02/25] xen/arm: hypercalls
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Use r12 to pass the hypercall number to the hypervisor.

We need a register to pass the hypercall number because we might not
know it at compile time and HVC only takes an immediate argument.

Among the available registers, r12 seems to be the best choice because
it is defined as the "intra-procedure call scratch register".

Use the ISS to pass a hypervisor-specific tag.


Changes in v2:
- define a HYPERCALL macro for 5-argument hypercall wrappers, even if
it is unused at the moment;
- use ldm instead of pop;
- fix up comments.


Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 arch/arm/include/asm/xen/hypercall.h |   50 ++++++++++++++++
 arch/arm/xen/Makefile                |    2 +-
 arch/arm/xen/hypercall.S             |  106 ++++++++++++++++++++++++++++++++++
 3 files changed, 157 insertions(+), 1 deletions(-)
 create mode 100644 arch/arm/include/asm/xen/hypercall.h
 create mode 100644 arch/arm/xen/hypercall.S

diff --git a/arch/arm/include/asm/xen/hypercall.h b/arch/arm/include/asm/xen/hypercall.h
new file mode 100644
index 0000000..4ac0624
--- /dev/null
+++ b/arch/arm/include/asm/xen/hypercall.h
@@ -0,0 +1,50 @@
+/******************************************************************************
+ * hypercall.h
+ *
+ * Linux-specific hypervisor handling.
+ *
+ * Stefano Stabellini <stefano.stabellini@eu.citrix.com>, Citrix, 2012
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2
+ * as published by the Free Software Foundation; or, when distributed
+ * separately from the Linux kernel or incorporated into other
+ * software packages, subject to the following license:
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this source file (the "Software"), to deal in the Software without
+ * restriction, including without limitation the rights to use, copy, modify,
+ * merge, publish, distribute, sublicense, and/or sell copies of the Software,
+ * and to permit persons to whom the Software is furnished to do so, subject to
+ * the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ */
+
+#ifndef _ASM_ARM_XEN_HYPERCALL_H
+#define _ASM_ARM_XEN_HYPERCALL_H
+
+#include <xen/interface/xen.h>
+
+long privcmd_call(unsigned call, unsigned long a1,
+		unsigned long a2, unsigned long a3,
+		unsigned long a4, unsigned long a5);
+int HYPERVISOR_xen_version(int cmd, void *arg);
+int HYPERVISOR_console_io(int cmd, int count, char *str);
+int HYPERVISOR_grant_table_op(unsigned int cmd, void *uop, unsigned int count);
+int HYPERVISOR_sched_op(int cmd, void *arg);
+int HYPERVISOR_event_channel_op(int cmd, void *arg);
+unsigned long HYPERVISOR_hvm_op(int op, void *arg);
+int HYPERVISOR_memory_op(unsigned int cmd, void *arg);
+int HYPERVISOR_physdev_op(int cmd, void *arg);
+
+#endif /* _ASM_ARM_XEN_HYPERCALL_H */
diff --git a/arch/arm/xen/Makefile b/arch/arm/xen/Makefile
index 0bad594..b9d6acc 100644
--- a/arch/arm/xen/Makefile
+++ b/arch/arm/xen/Makefile
@@ -1 +1 @@
-obj-y		:= enlighten.o
+obj-y		:= enlighten.o hypercall.o
diff --git a/arch/arm/xen/hypercall.S b/arch/arm/xen/hypercall.S
new file mode 100644
index 0000000..074f5ed
--- /dev/null
+++ b/arch/arm/xen/hypercall.S
@@ -0,0 +1,106 @@
+/******************************************************************************
+ * hypercall.S
+ *
+ * Xen hypercall wrappers
+ *
+ * Stefano Stabellini <stefano.stabellini@eu.citrix.com>, Citrix, 2012
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2
+ * as published by the Free Software Foundation; or, when distributed
+ * separately from the Linux kernel or incorporated into other
+ * software packages, subject to the following license:
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this source file (the "Software"), to deal in the Software without
+ * restriction, including without limitation the rights to use, copy, modify,
+ * merge, publish, distribute, sublicense, and/or sell copies of the Software,
+ * and to permit persons to whom the Software is furnished to do so, subject to
+ * the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ */
+
+/*
+ * The Xen hypercall calling convention is very similar to the ARM
+ * procedure calling convention: the first parameter is passed in r0, the
+ * second in r1, the third in r2 and the fourth in r3. Considering that
+ * Xen hypercalls have 5 arguments at most, the fifth parameter is passed
+ * in r4, unlike the procedure calling convention, which would use the
+ * stack in that case.
+ *
+ * The hypercall number is passed in r12.
+ *
+ * The return value is in r0.
+ *
+ * The hvc ISS is required to be 0xEA1, that is the Xen specific ARM
+ * hypercall tag.
+ */
+
+#include <linux/linkage.h>
+#include <asm/assembler.h>
+#include <xen/interface/xen.h>
+
+
+/* HVC 0xEA1 */
+#ifdef CONFIG_THUMB2_KERNEL
+#define xen_hvc .word 0xf7e08ea1
+#else
+#define xen_hvc .word 0xe140ea71
+#endif
+
+#define HYPERCALL_SIMPLE(hypercall)		\
+ENTRY(HYPERVISOR_##hypercall)			\
+	mov r12, #__HYPERVISOR_##hypercall;	\
+	xen_hvc;							\
+	mov pc, lr;							\
+ENDPROC(HYPERVISOR_##hypercall)
+
+#define HYPERCALL0 HYPERCALL_SIMPLE
+#define HYPERCALL1 HYPERCALL_SIMPLE
+#define HYPERCALL2 HYPERCALL_SIMPLE
+#define HYPERCALL3 HYPERCALL_SIMPLE
+#define HYPERCALL4 HYPERCALL_SIMPLE
+
+#define HYPERCALL5(hypercall)			\
+ENTRY(HYPERVISOR_##hypercall)			\
+	stmdb sp!, {r4}						\
+	ldr r4, [sp, #4]					\
+	mov r12, #__HYPERVISOR_##hypercall;	\
+	xen_hvc								\
+	ldm sp!, {r4}						\
+	mov pc, lr							\
+ENDPROC(HYPERVISOR_##hypercall)
+
+                .text
+
+HYPERCALL2(xen_version);
+HYPERCALL3(console_io);
+HYPERCALL3(grant_table_op);
+HYPERCALL2(sched_op);
+HYPERCALL2(event_channel_op);
+HYPERCALL2(hvm_op);
+HYPERCALL2(memory_op);
+HYPERCALL2(physdev_op);
+
+ENTRY(privcmd_call)
+	stmdb sp!, {r4}
+	mov r12, r0
+	mov r0, r1
+	mov r1, r2
+	mov r2, r3
+	ldr r3, [sp, #8]
+	ldr r4, [sp, #4]
+	xen_hvc
+	ldm sp!, {r4}
+	mov pc, lr
+ENDPROC(privcmd_call);
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--- a/arch/arm/xen/Makefile
+++ b/arch/arm/xen/Makefile
@@ -1 +1 @@
-obj-y		:= enlighten.o
+obj-y		:= enlighten.o hypercall.o
diff --git a/arch/arm/xen/hypercall.S b/arch/arm/xen/hypercall.S
new file mode 100644
index 0000000..074f5ed
--- /dev/null
+++ b/arch/arm/xen/hypercall.S
@@ -0,0 +1,106 @@
+/******************************************************************************
+ * hypercall.S
+ *
+ * Xen hypercall wrappers
+ *
+ * Stefano Stabellini <stefano.stabellini@eu.citrix.com>, Citrix, 2012
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2
+ * as published by the Free Software Foundation; or, when distributed
+ * separately from the Linux kernel or incorporated into other
+ * software packages, subject to the following license:
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this source file (the "Software"), to deal in the Software without
+ * restriction, including without limitation the rights to use, copy, modify,
+ * merge, publish, distribute, sublicense, and/or sell copies of the Software,
+ * and to permit persons to whom the Software is furnished to do so, subject to
+ * the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ */
+
+/*
+ * The Xen hypercall calling convention is very similar to the ARM
+ * procedure calling convention: the first parameter is passed in r0, the
+ * second in r1, the third in r2 and the fourth in r3. Since Xen
+ * hypercalls take at most 5 arguments, the fifth parameter is passed
+ * in r4, unlike the procedure calling convention, which would use the
+ * stack for that case.
+ *
+ * The hypercall number is passed in r12.
+ *
+ * The return value is in r0.
+ *
+ * The hvc ISS must be 0xEA1, which is the Xen-specific ARM
+ * hypercall tag.
+ */
+
+#include <linux/linkage.h>
+#include <asm/assembler.h>
+#include <xen/interface/xen.h>
+
+
+/* HVC 0xEA1 */
+#ifdef CONFIG_THUMB2_KERNEL
+#define xen_hvc .word 0xf7e08ea1
+#else
+#define xen_hvc .word 0xe140ea71
+#endif
+
+#define HYPERCALL_SIMPLE(hypercall)		\
+ENTRY(HYPERVISOR_##hypercall)			\
+	mov r12, #__HYPERVISOR_##hypercall;	\
+	xen_hvc;							\
+	mov pc, lr;							\
+ENDPROC(HYPERVISOR_##hypercall)
+
+#define HYPERCALL0 HYPERCALL_SIMPLE
+#define HYPERCALL1 HYPERCALL_SIMPLE
+#define HYPERCALL2 HYPERCALL_SIMPLE
+#define HYPERCALL3 HYPERCALL_SIMPLE
+#define HYPERCALL4 HYPERCALL_SIMPLE
+
+#define HYPERCALL5(hypercall)			\
+ENTRY(HYPERVISOR_##hypercall)			\
+	stmdb sp!, {r4};					\
+	ldr r4, [sp, #4];					\
+	mov r12, #__HYPERVISOR_##hypercall;	\
+	xen_hvc;							\
+	ldm sp!, {r4};						\
+	mov pc, lr;							\
+ENDPROC(HYPERVISOR_##hypercall)
+
+                .text
+
+HYPERCALL2(xen_version);
+HYPERCALL3(console_io);
+HYPERCALL3(grant_table_op);
+HYPERCALL2(sched_op);
+HYPERCALL2(event_channel_op);
+HYPERCALL2(hvm_op);
+HYPERCALL2(memory_op);
+HYPERCALL2(physdev_op);
+
+ENTRY(privcmd_call)
+	stmdb sp!, {r4}
+	mov r12, r0
+	mov r0, r1
+	mov r1, r2
+	mov r2, r3
+	ldr r3, [sp, #8]
+	ldr r4, [sp, #4]
+	xen_hvc
+	ldm sp!, {r4}
+	mov pc, lr
+ENDPROC(privcmd_call);
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 15:37:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 15:37:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2284-00086t-Dv; Thu, 16 Aug 2012 15:37:00 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T2282-00085V-PX
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 15:36:58 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1345131406!2740922!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjQyODA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24139 invoked from network); 16 Aug 2012 15:36:47 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 15:36:47 -0000
X-IronPort-AV: E=Sophos;i="4.77,778,1336363200"; d="scan'208";a="34869943"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 11:36:42 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 16 Aug 2012 11:36:41 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1T227b-0007Iu-AN;
	Thu, 16 Aug 2012 16:36:31 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <linux-kernel@vger.kernel.org>
Date: Thu, 16 Aug 2012 16:36:02 +0100
Message-ID: <1345131377-14713-10-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208161618550.4850@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208161618550.4850@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, konrad.wilk@oracle.com,
	catalin.marinas@arm.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	tim@xen.org, linux-arm-kernel@lists.infradead.org
Subject: [Xen-devel] [PATCH v3 10/25] xen/arm: compile and run xenbus
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

bind_evtchn_to_irqhandler can legitimately return 0 (irq 0): it is not
an error.

If Linux is running as an HVM domain and as Dom0, use
xenstored_local_init to initialize the xenstore page and event channel.

Changes in v2:

- refactor xenbus_init.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 drivers/xen/xenbus/xenbus_comms.c |    2 +-
 drivers/xen/xenbus/xenbus_probe.c |   62 +++++++++++++++++++++++++-----------
 drivers/xen/xenbus/xenbus_xs.c    |    1 +
 3 files changed, 45 insertions(+), 20 deletions(-)

diff --git a/drivers/xen/xenbus/xenbus_comms.c b/drivers/xen/xenbus/xenbus_comms.c
index 52fe7ad..c5aa55c 100644
--- a/drivers/xen/xenbus/xenbus_comms.c
+++ b/drivers/xen/xenbus/xenbus_comms.c
@@ -224,7 +224,7 @@ int xb_init_comms(void)
 		int err;
 		err = bind_evtchn_to_irqhandler(xen_store_evtchn, wake_waiting,
 						0, "xenbus", &xb_waitq);
-		if (err <= 0) {
+		if (err < 0) {
 			printk(KERN_ERR "XENBUS request irq failed %i\n", err);
 			return err;
 		}
diff --git a/drivers/xen/xenbus/xenbus_probe.c b/drivers/xen/xenbus/xenbus_probe.c
index b793723..a67ccc0 100644
--- a/drivers/xen/xenbus/xenbus_probe.c
+++ b/drivers/xen/xenbus/xenbus_probe.c
@@ -719,37 +719,61 @@ static int __init xenstored_local_init(void)
 	return err;
 }
 
+enum xenstore_init {
+	UNKNOWN,
+	PV,
+	HVM,
+	LOCAL,
+};
 static int __init xenbus_init(void)
 {
 	int err = 0;
+	enum xenstore_init usage = UNKNOWN;
+	uint64_t v = 0;
 
 	if (!xen_domain())
 		return -ENODEV;
 
 	xenbus_ring_ops_init();
 
-	if (xen_hvm_domain()) {
-		uint64_t v = 0;
-		err = hvm_get_parameter(HVM_PARAM_STORE_EVTCHN, &v);
-		if (err)
-			goto out_error;
-		xen_store_evtchn = (int)v;
-		err = hvm_get_parameter(HVM_PARAM_STORE_PFN, &v);
-		if (err)
-			goto out_error;
-		xen_store_mfn = (unsigned long)v;
-		xen_store_interface = ioremap(xen_store_mfn << PAGE_SHIFT, PAGE_SIZE);
-	} else {
-		xen_store_evtchn = xen_start_info->store_evtchn;
-		xen_store_mfn = xen_start_info->store_mfn;
-		if (xen_store_evtchn)
-			xenstored_ready = 1;
-		else {
+	if (xen_pv_domain())
+		usage = PV;
+	if (xen_hvm_domain())
+		usage = HVM;
+	if (xen_hvm_domain() && xen_initial_domain())
+		usage = LOCAL;
+	if (xen_pv_domain() && !xen_start_info->store_evtchn)
+		usage = LOCAL;
+	if (xen_pv_domain() && xen_start_info->store_evtchn)
+		xenstored_ready = 1;
+
+	switch (usage) {
+		case LOCAL:
 			err = xenstored_local_init();
 			if (err)
 				goto out_error;
-		}
-		xen_store_interface = mfn_to_virt(xen_store_mfn);
+			xen_store_interface = mfn_to_virt(xen_store_mfn);
+			break;
+		case PV:
+			xen_store_evtchn = xen_start_info->store_evtchn;
+			xen_store_mfn = xen_start_info->store_mfn;
+			xen_store_interface = mfn_to_virt(xen_store_mfn);
+			break;
+		case HVM:
+			err = hvm_get_parameter(HVM_PARAM_STORE_EVTCHN, &v);
+			if (err)
+				goto out_error;
+			xen_store_evtchn = (int)v;
+			err = hvm_get_parameter(HVM_PARAM_STORE_PFN, &v);
+			if (err)
+				goto out_error;
+			xen_store_mfn = (unsigned long)v;
+			xen_store_interface =
+				ioremap(xen_store_mfn << PAGE_SHIFT, PAGE_SIZE);
+			break;
+		default:
+			pr_warn("Xenstore state unknown\n");
+			break;
 	}
 
 	/* Initialize the interface to xenstore. */
diff --git a/drivers/xen/xenbus/xenbus_xs.c b/drivers/xen/xenbus/xenbus_xs.c
index d1c217b..f7feb3d 100644
--- a/drivers/xen/xenbus/xenbus_xs.c
+++ b/drivers/xen/xenbus/xenbus_xs.c
@@ -44,6 +44,7 @@
 #include <linux/rwsem.h>
 #include <linux/module.h>
 #include <linux/mutex.h>
+#include <asm/xen/hypervisor.h>
 #include <xen/xenbus.h>
 #include <xen/xen.h>
 #include "xenbus_comms.h"
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 15:37:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 15:37:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2287-00088Q-B4; Thu, 16 Aug 2012 15:37:03 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T2285-00086v-AQ
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 15:37:01 +0000
Received: from [85.158.143.35:42061] by server-3.bemta-4.messagelabs.com id
	BC/6B-09529-C931D205; Thu, 16 Aug 2012 15:37:00 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1345131402!13809863!3
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzE3OTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31869 invoked from network); 16 Aug 2012 15:36:48 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 15:36:48 -0000
X-IronPort-AV: E=Sophos;i="4.77,778,1336363200"; d="scan'208";a="205386129"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 11:36:36 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 16 Aug 2012 11:36:36 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1T227b-0007Iu-5H;
	Thu, 16 Aug 2012 16:36:31 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <linux-kernel@vger.kernel.org>
Date: Thu, 16 Aug 2012 16:35:58 +0100
Message-ID: <1345131377-14713-6-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208161618550.4850@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208161618550.4850@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, konrad.wilk@oracle.com,
	catalin.marinas@arm.com, devicetree-discuss@lists.ozlabs.org,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	tim@xen.org, David Vrabel <david.vrabel@citrix.com>,
	linux-arm-kernel@lists.infradead.org
Subject: [Xen-devel] [PATCH v3 06/25] docs: Xen ARM DT bindings
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add a document describing the Xen ARM device tree bindings.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
CC: devicetree-discuss@lists.ozlabs.org
CC: David Vrabel <david.vrabel@citrix.com>
---
 Documentation/devicetree/bindings/arm/xen.txt |   22 ++++++++++++++++++++++
 1 files changed, 22 insertions(+), 0 deletions(-)
 create mode 100644 Documentation/devicetree/bindings/arm/xen.txt

diff --git a/Documentation/devicetree/bindings/arm/xen.txt b/Documentation/devicetree/bindings/arm/xen.txt
new file mode 100644
index 0000000..ec6d884
--- /dev/null
+++ b/Documentation/devicetree/bindings/arm/xen.txt
@@ -0,0 +1,22 @@
+* Xen hypervisor device tree bindings
+
+Xen ARM virtual platforms shall have the following properties:
+
+- compatible:
+	compatible = "xen,xen", "xen,xen-<version>";
+  where <version> is the version of the Xen ABI of the platform.
+
+- reg: specifies the base physical address and size of the region in
+  memory where the grant table should be mapped, using a
+  HYPERVISOR_memory_op hypercall.
+
+- interrupts: the interrupt used by Xen to inject event notifications.
+
+
+Example:
+
+hypervisor {
+	compatible = "xen,xen", "xen,xen-4.3";
+	reg = <0xb0000000 0x20000>;
+	interrupts = <1 15 0xf08>;
+};
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 15:37:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 15:37:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2284-000877-RK; Thu, 16 Aug 2012 15:37:00 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T2282-00085U-PE
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 15:36:59 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1345131406!2740922!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjQyODA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24264 invoked from network); 16 Aug 2012 15:36:50 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 15:36:50 -0000
X-IronPort-AV: E=Sophos;i="4.77,778,1336363200"; d="scan'208";a="34869944"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 11:36:42 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 16 Aug 2012 11:36:41 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1T227b-0007Iu-7o;
	Thu, 16 Aug 2012 16:36:31 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <linux-kernel@vger.kernel.org>
Date: Thu, 16 Aug 2012 16:36:00 +0100
Message-ID: <1345131377-14713-8-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208161618550.4850@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208161618550.4850@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, konrad.wilk@oracle.com,
	catalin.marinas@arm.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	tim@xen.org, linux-arm-kernel@lists.infradead.org
Subject: [Xen-devel] [PATCH v3 08/25] xen/arm: Introduce xen_pfn_t for pfn
	and mfn types
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

All the original Xen headers use xen_pfn_t as the mfn and pfn type; however,
when they were imported into Linux, xen_pfn_t was replaced with
unsigned long. That might work for x86 and ia64, but it does not for arm.
Bring back xen_pfn_t and let each architecture define it as it
sees fit.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/arm/include/asm/xen/interface.h  |    4 ++++
 arch/ia64/include/asm/xen/interface.h |    5 ++++-
 arch/x86/include/asm/xen/interface.h  |    5 +++++
 include/xen/interface/grant_table.h   |    4 ++--
 include/xen/interface/memory.h        |    6 +++---
 include/xen/interface/platform.h      |    4 ++--
 include/xen/interface/xen.h           |    6 +++---
 include/xen/privcmd.h                 |    2 --
 8 files changed, 23 insertions(+), 13 deletions(-)

diff --git a/arch/arm/include/asm/xen/interface.h b/arch/arm/include/asm/xen/interface.h
index 9e0ec5a..74c72b5 100644
--- a/arch/arm/include/asm/xen/interface.h
+++ b/arch/arm/include/asm/xen/interface.h
@@ -25,6 +25,9 @@
 	} while (0)
 
 #ifndef __ASSEMBLY__
+/* Explicitly size integers that represent pfns in the interface with
+ * Xen so that we can have one ABI that works for 32 and 64 bit guests. */
+typedef uint64_t xen_pfn_t;
 /* Guest handles for primitive C types. */
 __DEFINE_GUEST_HANDLE(uchar, unsigned char);
 __DEFINE_GUEST_HANDLE(uint,  unsigned int);
@@ -35,6 +38,7 @@ DEFINE_GUEST_HANDLE(long);
 DEFINE_GUEST_HANDLE(void);
 DEFINE_GUEST_HANDLE(uint64_t);
 DEFINE_GUEST_HANDLE(uint32_t);
+DEFINE_GUEST_HANDLE(xen_pfn_t);
 
 /* Maximum number of virtual CPUs in multi-processor guests. */
 #define MAX_VIRT_CPUS 1
diff --git a/arch/ia64/include/asm/xen/interface.h b/arch/ia64/include/asm/xen/interface.h
index 09d5f7f..686464e 100644
--- a/arch/ia64/include/asm/xen/interface.h
+++ b/arch/ia64/include/asm/xen/interface.h
@@ -67,6 +67,10 @@
 #define set_xen_guest_handle(hnd, val)	do { (hnd).p = val; } while (0)
 
 #ifndef __ASSEMBLY__
+/* Explicitly size integers that represent pfns in the public interface
+ * with Xen so that we could have one ABI that works for 32 and 64 bit
+ * guests. */
+typedef unsigned long xen_pfn_t;
 /* Guest handles for primitive C types. */
 __DEFINE_GUEST_HANDLE(uchar, unsigned char);
 __DEFINE_GUEST_HANDLE(uint, unsigned int);
@@ -79,7 +83,6 @@ DEFINE_GUEST_HANDLE(void);
 DEFINE_GUEST_HANDLE(uint64_t);
 DEFINE_GUEST_HANDLE(uint32_t);
 
-typedef unsigned long xen_pfn_t;
 DEFINE_GUEST_HANDLE(xen_pfn_t);
 #define PRI_xen_pfn	"lx"
 #endif
diff --git a/arch/x86/include/asm/xen/interface.h b/arch/x86/include/asm/xen/interface.h
index cbf0c9d..2289075 100644
--- a/arch/x86/include/asm/xen/interface.h
+++ b/arch/x86/include/asm/xen/interface.h
@@ -47,6 +47,10 @@
 #endif
 
 #ifndef __ASSEMBLY__
+/* Explicitly size integers that represent pfns in the public interface
+ * with Xen so that on ARM we can have one ABI that works for 32 and 64
+ * bit guests. */
+typedef unsigned long xen_pfn_t;
 /* Guest handles for primitive C types. */
 __DEFINE_GUEST_HANDLE(uchar, unsigned char);
 __DEFINE_GUEST_HANDLE(uint,  unsigned int);
@@ -57,6 +61,7 @@ DEFINE_GUEST_HANDLE(long);
 DEFINE_GUEST_HANDLE(void);
 DEFINE_GUEST_HANDLE(uint64_t);
 DEFINE_GUEST_HANDLE(uint32_t);
+DEFINE_GUEST_HANDLE(xen_pfn_t);
 #endif
 
 #ifndef HYPERVISOR_VIRT_START
diff --git a/include/xen/interface/grant_table.h b/include/xen/interface/grant_table.h
index a17d844..7da811b 100644
--- a/include/xen/interface/grant_table.h
+++ b/include/xen/interface/grant_table.h
@@ -338,7 +338,7 @@ DEFINE_GUEST_HANDLE_STRUCT(gnttab_dump_table);
 #define GNTTABOP_transfer                4
 struct gnttab_transfer {
     /* IN parameters. */
-    unsigned long mfn;
+    xen_pfn_t mfn;
     domid_t       domid;
     grant_ref_t   ref;
     /* OUT parameters. */
@@ -375,7 +375,7 @@ struct gnttab_copy {
 	struct {
 		union {
 			grant_ref_t ref;
-			unsigned long   gmfn;
+			xen_pfn_t   gmfn;
 		} u;
 		domid_t  domid;
 		uint16_t offset;
diff --git a/include/xen/interface/memory.h b/include/xen/interface/memory.h
index eac3ce1..abbbff0 100644
--- a/include/xen/interface/memory.h
+++ b/include/xen/interface/memory.h
@@ -31,7 +31,7 @@ struct xen_memory_reservation {
      *   OUT: GMFN bases of extents that were allocated
      *   (NB. This command also updates the mach_to_phys translation table)
      */
-    GUEST_HANDLE(ulong) extent_start;
+    GUEST_HANDLE(xen_pfn_t) extent_start;
 
     /* Number of extents, and size/alignment of each (2^extent_order pages). */
     unsigned long  nr_extents;
@@ -130,7 +130,7 @@ struct xen_machphys_mfn_list {
      * any large discontiguities in the machine address space, 2MB gaps in
      * the machphys table will be represented by an MFN base of zero.
      */
-    GUEST_HANDLE(ulong) extent_start;
+    GUEST_HANDLE(xen_pfn_t) extent_start;
 
     /*
      * Number of extents written to the above array. This will be smaller
@@ -172,7 +172,7 @@ struct xen_add_to_physmap {
     unsigned long idx;
 
     /* GPFN where the source mapping page should appear. */
-    unsigned long gpfn;
+    xen_pfn_t gpfn;
 };
 DEFINE_GUEST_HANDLE_STRUCT(xen_add_to_physmap);
 
diff --git a/include/xen/interface/platform.h b/include/xen/interface/platform.h
index 486653f..0bea470 100644
--- a/include/xen/interface/platform.h
+++ b/include/xen/interface/platform.h
@@ -54,7 +54,7 @@ DEFINE_GUEST_HANDLE_STRUCT(xenpf_settime_t);
 #define XENPF_add_memtype         31
 struct xenpf_add_memtype {
 	/* IN variables. */
-	unsigned long mfn;
+	xen_pfn_t mfn;
 	uint64_t nr_mfns;
 	uint32_t type;
 	/* OUT variables. */
@@ -84,7 +84,7 @@ struct xenpf_read_memtype {
 	/* IN variables. */
 	uint32_t reg;
 	/* OUT variables. */
-	unsigned long mfn;
+	xen_pfn_t mfn;
 	uint64_t nr_mfns;
 	uint32_t type;
 };
diff --git a/include/xen/interface/xen.h b/include/xen/interface/xen.h
index a890804..f6b8965 100644
--- a/include/xen/interface/xen.h
+++ b/include/xen/interface/xen.h
@@ -189,7 +189,7 @@ struct mmuext_op {
 	unsigned int cmd;
 	union {
 		/* [UN]PIN_TABLE, NEW_BASEPTR, NEW_USER_BASEPTR */
-		unsigned long mfn;
+		xen_pfn_t mfn;
 		/* INVLPG_LOCAL, INVLPG_ALL, SET_LDT */
 		unsigned long linear_addr;
 	} arg1;
@@ -429,11 +429,11 @@ struct start_info {
 	unsigned long nr_pages;     /* Total pages allocated to this domain.  */
 	unsigned long shared_info;  /* MACHINE address of shared info struct. */
 	uint32_t flags;             /* SIF_xxx flags.                         */
-	unsigned long store_mfn;    /* MACHINE page number of shared page.    */
+	xen_pfn_t store_mfn;        /* MACHINE page number of shared page.    */
 	uint32_t store_evtchn;      /* Event channel for store communication. */
 	union {
 		struct {
-			unsigned long mfn;  /* MACHINE page number of console page.   */
+			xen_pfn_t mfn;      /* MACHINE page number of console page.   */
 			uint32_t  evtchn;   /* Event channel for console page.        */
 		} domU;
 		struct {
diff --git a/include/xen/privcmd.h b/include/xen/privcmd.h
index 17857fb..59f1bd8 100644
--- a/include/xen/privcmd.h
+++ b/include/xen/privcmd.h
@@ -36,8 +36,6 @@
 #include <linux/types.h>
 #include <linux/compiler.h>
 
-typedef unsigned long xen_pfn_t;
-
 struct privcmd_hypercall {
 	__u64 op;
 	__u64 arg[5];
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 15:37:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 15:37:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2286-00088C-Te; Thu, 16 Aug 2012 15:37:02 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T2284-00086v-RR
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 15:37:01 +0000
Received: from [85.158.143.35:42044] by server-3.bemta-4.messagelabs.com id
	9B/6B-09529-C931D205; Thu, 16 Aug 2012 15:37:00 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1345131402!13809863!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzE3OTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31672 invoked from network); 16 Aug 2012 15:36:45 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 15:36:45 -0000
X-IronPort-AV: E=Sophos;i="4.77,778,1336363200"; d="scan'208";a="205386125"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 11:36:36 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 16 Aug 2012 11:36:36 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1T227b-0007Iu-2d;
	Thu, 16 Aug 2012 16:36:31 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <linux-kernel@vger.kernel.org>
Date: Thu, 16 Aug 2012 16:35:56 +0100
Message-ID: <1345131377-14713-4-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208161618550.4850@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208161618550.4850@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, konrad.wilk@oracle.com,
	catalin.marinas@arm.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	tim@xen.org, linux-arm-kernel@lists.infradead.org
Subject: [Xen-devel] [PATCH v3 04/25] xen/arm: sync_bitops
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

sync_bitops functions are equivalent to the SMP implementation of the
original functions, independently of whether CONFIG_SMP is defined.

We need them because _set_bit etc. are not SMP safe if !CONFIG_SMP. But
under Xen you might be communicating with a completely external entity
that might be on another CPU (e.g. two uniprocessor guests communicating
via event channels and grant tables). So we need a variant of the bit
ops which is SMP safe even on a UP kernel.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/arm/include/asm/sync_bitops.h |   27 +++++++++++++++++++++++++++
 1 files changed, 27 insertions(+), 0 deletions(-)
 create mode 100644 arch/arm/include/asm/sync_bitops.h

diff --git a/arch/arm/include/asm/sync_bitops.h b/arch/arm/include/asm/sync_bitops.h
new file mode 100644
index 0000000..63479ee
--- /dev/null
+++ b/arch/arm/include/asm/sync_bitops.h
@@ -0,0 +1,27 @@
+#ifndef __ASM_SYNC_BITOPS_H__
+#define __ASM_SYNC_BITOPS_H__
+
+#include <asm/bitops.h>
+#include <asm/system.h>
+
+/* sync_bitops functions are equivalent to the SMP implementation of the
+ * original functions, independently from CONFIG_SMP being defined.
+ *
+ * We need them because _set_bit etc are not SMP safe if !CONFIG_SMP. But
+ * under Xen you might be communicating with a completely external entity
+ * who might be on another CPU (e.g. two uniprocessor guests communicating
+ * via event channels and grant tables). So we need a variant of the bit
+ * ops which are SMP safe even on a UP kernel.
+ */
+
+#define sync_set_bit(nr, p)		_set_bit(nr, p)
+#define sync_clear_bit(nr, p)		_clear_bit(nr, p)
+#define sync_change_bit(nr, p)		_change_bit(nr, p)
+#define sync_test_and_set_bit(nr, p)	_test_and_set_bit(nr, p)
+#define sync_test_and_clear_bit(nr, p)	_test_and_clear_bit(nr, p)
+#define sync_test_and_change_bit(nr, p)	_test_and_change_bit(nr, p)
+#define sync_test_bit(nr, addr)		test_bit(nr, addr)
+#define sync_cmpxchg			cmpxchg
+
+
+#endif
-- 
1.7.2.5



From xen-devel-bounces@lists.xen.org Thu Aug 16 15:37:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 15:37:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T228A-0008Ad-7g; Thu, 16 Aug 2012 15:37:06 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T2288-00086W-4v
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 15:37:04 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1345131406!2740922!3
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjQyODA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24325 invoked from network); 16 Aug 2012 15:36:50 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 15:36:50 -0000
X-IronPort-AV: E=Sophos;i="4.77,778,1336363200"; d="scan'208";a="34869945"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 11:36:42 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 16 Aug 2012 11:36:41 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1T227b-0007Iu-8U;
	Thu, 16 Aug 2012 16:36:31 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <linux-kernel@vger.kernel.org>
Date: Thu, 16 Aug 2012 16:36:01 +0100
Message-ID: <1345131377-14713-9-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208161618550.4850@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208161618550.4850@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, konrad.wilk@oracle.com,
	catalin.marinas@arm.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	tim@xen.org, linux-arm-kernel@lists.infradead.org
Subject: [Xen-devel] [PATCH v3 09/25] xen/arm: Introduce xen_ulong_t for
	unsigned long
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

All of the original Xen headers define xen_ulong_t as an unsigned long; however,
when they were imported into Linux, xen_ulong_t was replaced with plain
unsigned long. That might work for x86 and ia64, but it does not for ARM.
Bring back xen_ulong_t and let each architecture define it as it
sees fit.

Also explicitly size pointers (__DEFINE_GUEST_HANDLE) to 64 bits.


Changes in v3:

- remove the incorrect changes to multicall_entry;
- remove the change to apic_physbase.


Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 arch/arm/include/asm/xen/interface.h  |    8 ++++++--
 arch/ia64/include/asm/xen/interface.h |    1 +
 arch/x86/include/asm/xen/interface.h  |    1 +
 include/xen/interface/memory.h        |   12 ++++++------
 include/xen/interface/physdev.h       |    2 +-
 include/xen/interface/version.h       |    2 +-
 6 files changed, 16 insertions(+), 10 deletions(-)

diff --git a/arch/arm/include/asm/xen/interface.h b/arch/arm/include/asm/xen/interface.h
index 74c72b5..ae05e56 100644
--- a/arch/arm/include/asm/xen/interface.h
+++ b/arch/arm/include/asm/xen/interface.h
@@ -9,8 +9,11 @@
 
 #include <linux/types.h>
 
+#define uint64_aligned_t uint64_t __attribute__((aligned(8)))
+
 #define __DEFINE_GUEST_HANDLE(name, type) \
-	typedef type * __guest_handle_ ## name
+	typedef struct { union { type *p; uint64_aligned_t q; }; }  \
+        __guest_handle_ ## name
 
 #define DEFINE_GUEST_HANDLE_STRUCT(name) \
 	__DEFINE_GUEST_HANDLE(name, struct name)
@@ -21,13 +24,14 @@
 	do {						\
 		if (sizeof(hnd) == 8)			\
 			*(uint64_t *)&(hnd) = 0;	\
-		(hnd) = val;				\
+		(hnd).p = val;				\
 	} while (0)
 
 #ifndef __ASSEMBLY__
 /* Explicitly size integers that represent pfns in the interface with
  * Xen so that we can have one ABI that works for 32 and 64 bit guests. */
 typedef uint64_t xen_pfn_t;
+typedef uint64_t xen_ulong_t;
 /* Guest handles for primitive C types. */
 __DEFINE_GUEST_HANDLE(uchar, unsigned char);
 __DEFINE_GUEST_HANDLE(uint,  unsigned int);
diff --git a/arch/ia64/include/asm/xen/interface.h b/arch/ia64/include/asm/xen/interface.h
index 686464e..7c83445 100644
--- a/arch/ia64/include/asm/xen/interface.h
+++ b/arch/ia64/include/asm/xen/interface.h
@@ -71,6 +71,7 @@
  * with Xen so that we could have one ABI that works for 32 and 64 bit
  * guests. */
 typedef unsigned long xen_pfn_t;
+typedef unsigned long xen_ulong_t;
 /* Guest handles for primitive C types. */
 __DEFINE_GUEST_HANDLE(uchar, unsigned char);
 __DEFINE_GUEST_HANDLE(uint, unsigned int);
diff --git a/arch/x86/include/asm/xen/interface.h b/arch/x86/include/asm/xen/interface.h
index 2289075..25cc8df 100644
--- a/arch/x86/include/asm/xen/interface.h
+++ b/arch/x86/include/asm/xen/interface.h
@@ -51,6 +51,7 @@
  * with Xen so that on ARM we can have one ABI that works for 32 and 64
  * bit guests. */
 typedef unsigned long xen_pfn_t;
+typedef unsigned long xen_ulong_t;
 /* Guest handles for primitive C types. */
 __DEFINE_GUEST_HANDLE(uchar, unsigned char);
 __DEFINE_GUEST_HANDLE(uint,  unsigned int);
diff --git a/include/xen/interface/memory.h b/include/xen/interface/memory.h
index abbbff0..b5c3098 100644
--- a/include/xen/interface/memory.h
+++ b/include/xen/interface/memory.h
@@ -34,7 +34,7 @@ struct xen_memory_reservation {
     GUEST_HANDLE(xen_pfn_t) extent_start;
 
     /* Number of extents, and size/alignment of each (2^extent_order pages). */
-    unsigned long  nr_extents;
+    xen_ulong_t  nr_extents;
     unsigned int   extent_order;
 
     /*
@@ -92,7 +92,7 @@ struct xen_memory_exchange {
      *     command will be non-zero.
      *  5. THIS FIELD MUST BE INITIALISED TO ZERO BY THE CALLER!
      */
-    unsigned long nr_exchanged;
+    xen_ulong_t nr_exchanged;
 };
 
 DEFINE_GUEST_HANDLE_STRUCT(xen_memory_exchange);
@@ -148,8 +148,8 @@ DEFINE_GUEST_HANDLE_STRUCT(xen_machphys_mfn_list);
  */
 #define XENMEM_machphys_mapping     12
 struct xen_machphys_mapping {
-    unsigned long v_start, v_end; /* Start and end virtual addresses.   */
-    unsigned long max_mfn;        /* Maximum MFN that can be looked up. */
+    xen_ulong_t v_start, v_end; /* Start and end virtual addresses.   */
+    xen_ulong_t max_mfn;        /* Maximum MFN that can be looked up. */
 };
 DEFINE_GUEST_HANDLE_STRUCT(xen_machphys_mapping_t);
 
@@ -169,7 +169,7 @@ struct xen_add_to_physmap {
     unsigned int space;
 
     /* Index into source mapping space. */
-    unsigned long idx;
+    xen_ulong_t idx;
 
     /* GPFN where the source mapping page should appear. */
     xen_pfn_t gpfn;
@@ -186,7 +186,7 @@ struct xen_translate_gpfn_list {
     domid_t domid;
 
     /* Length of list. */
-    unsigned long nr_gpfns;
+    xen_ulong_t nr_gpfns;
 
     /* List of GPFNs to translate. */
     GUEST_HANDLE(ulong) gpfn_list;
diff --git a/include/xen/interface/physdev.h b/include/xen/interface/physdev.h
index 9ce788d..f616514 100644
--- a/include/xen/interface/physdev.h
+++ b/include/xen/interface/physdev.h
@@ -56,7 +56,7 @@ struct physdev_eoi {
 #define PHYSDEVOP_pirq_eoi_gmfn_v2       28
 struct physdev_pirq_eoi_gmfn {
     /* IN */
-    unsigned long gmfn;
+    xen_ulong_t gmfn;
 };
 
 /*
diff --git a/include/xen/interface/version.h b/include/xen/interface/version.h
index e8b6519..30280c9 100644
--- a/include/xen/interface/version.h
+++ b/include/xen/interface/version.h
@@ -45,7 +45,7 @@ struct xen_changeset_info {
 
 #define XENVER_platform_parameters 5
 struct xen_platform_parameters {
-    unsigned long virt_start;
+    xen_ulong_t virt_start;
 };
 
 #define XENVER_get_features 6
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 15:37:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 15:37:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2287-00088f-Mx; Thu, 16 Aug 2012 15:37:03 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T2285-00086v-VU
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 15:37:02 +0000
Received: from [85.158.143.35:20839] by server-3.bemta-4.messagelabs.com id
	A2/7B-09529-D931D205; Thu, 16 Aug 2012 15:37:01 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1345131402!13809863!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzE3OTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31745 invoked from network); 16 Aug 2012 15:36:47 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 15:36:47 -0000
X-IronPort-AV: E=Sophos;i="4.77,778,1336363200"; d="scan'208";a="205386127"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 11:36:36 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 16 Aug 2012 11:36:36 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1T227b-0007Iu-3I;
	Thu, 16 Aug 2012 16:36:31 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <linux-kernel@vger.kernel.org>
Date: Thu, 16 Aug 2012 16:35:57 +0100
Message-ID: <1345131377-14713-5-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208161618550.4850@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208161618550.4850@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, konrad.wilk@oracle.com,
	catalin.marinas@arm.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	tim@xen.org, linux-arm-kernel@lists.infradead.org
Subject: [Xen-devel] [PATCH v3 05/25] xen/arm: empty implementation of
	grant_table arch specific functions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Changes in v2:

- return -ENOSYS rather than -1.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/arm/xen/Makefile      |    2 +-
 arch/arm/xen/grant-table.c |   53 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 54 insertions(+), 1 deletions(-)
 create mode 100644 arch/arm/xen/grant-table.c

diff --git a/arch/arm/xen/Makefile b/arch/arm/xen/Makefile
index b9d6acc..4384103 100644
--- a/arch/arm/xen/Makefile
+++ b/arch/arm/xen/Makefile
@@ -1 +1 @@
-obj-y		:= enlighten.o hypercall.o
+obj-y		:= enlighten.o hypercall.o grant-table.o
diff --git a/arch/arm/xen/grant-table.c b/arch/arm/xen/grant-table.c
new file mode 100644
index 0000000..dbd1330
--- /dev/null
+++ b/arch/arm/xen/grant-table.c
@@ -0,0 +1,53 @@
+/******************************************************************************
+ * grant_table.c
+ * ARM specific part
+ *
+ * Granting foreign access to our memory reservation.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2
+ * as published by the Free Software Foundation; or, when distributed
+ * separately from the Linux kernel or incorporated into other
+ * software packages, subject to the following license:
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this source file (the "Software"), to deal in the Software without
+ * restriction, including without limitation the rights to use, copy, modify,
+ * merge, publish, distribute, sublicense, and/or sell copies of the Software,
+ * and to permit persons to whom the Software is furnished to do so, subject to
+ * the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ */
+
+#include <xen/interface/xen.h>
+#include <xen/page.h>
+#include <xen/grant_table.h>
+
+int arch_gnttab_map_shared(unsigned long *frames, unsigned long nr_gframes,
+			   unsigned long max_nr_gframes,
+			   void **__shared)
+{
+	return -ENOSYS;
+}
+
+void arch_gnttab_unmap(void *shared, unsigned long nr_gframes)
+{
+	return;
+}
+
+int arch_gnttab_map_status(uint64_t *frames, unsigned long nr_gframes,
+			   unsigned long max_nr_gframes,
+			   grant_status_t **__shared)
+{
+	return -ENOSYS;
+}
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 15:37:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 15:37:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T228D-0008Cs-0q; Thu, 16 Aug 2012 15:37:09 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T228A-00087M-TP
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 15:37:07 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1345131397!9609475!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzE3OTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23618 invoked from network); 16 Aug 2012 15:36:38 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 15:36:38 -0000
X-IronPort-AV: E=Sophos;i="4.77,778,1336363200"; d="scan'208";a="205386124"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 11:36:36 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 16 Aug 2012 11:36:36 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1T227b-0007Iu-0i;
	Thu, 16 Aug 2012 16:36:31 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <linux-kernel@vger.kernel.org>
Date: Thu, 16 Aug 2012 16:35:55 +0100
Message-ID: <1345131377-14713-3-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208161618550.4850@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208161618550.4850@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, konrad.wilk@oracle.com,
	catalin.marinas@arm.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	tim@xen.org, linux-arm-kernel@lists.infradead.org
Subject: [Xen-devel] [PATCH v3 03/25] xen/arm: page.h definitions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

ARM Xen guests always use paging in hardware, like PV-on-HVM guests in
the x86 world.

Changes in v3:

- improve comments.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/arm/include/asm/xen/page.h |   82 +++++++++++++++++++++++++++++++++++++++
 1 files changed, 82 insertions(+), 0 deletions(-)
 create mode 100644 arch/arm/include/asm/xen/page.h

diff --git a/arch/arm/include/asm/xen/page.h b/arch/arm/include/asm/xen/page.h
new file mode 100644
index 0000000..1742023
--- /dev/null
+++ b/arch/arm/include/asm/xen/page.h
@@ -0,0 +1,82 @@
+#ifndef _ASM_ARM_XEN_PAGE_H
+#define _ASM_ARM_XEN_PAGE_H
+
+#include <asm/page.h>
+#include <asm/pgtable.h>
+
+#include <linux/pfn.h>
+#include <linux/types.h>
+
+#include <xen/interface/grant_table.h>
+
+#define pfn_to_mfn(pfn)			(pfn)
+#define phys_to_machine_mapping_valid	(1)
+#define mfn_to_pfn(mfn)			(mfn)
+#define mfn_to_virt(m)			(__va(mfn_to_pfn(m) << PAGE_SHIFT))
+
+#define pte_mfn	    pte_pfn
+#define mfn_pte	    pfn_pte
+
+/* Xen machine address */
+typedef struct xmaddr {
+	phys_addr_t maddr;
+} xmaddr_t;
+
+/* Xen pseudo-physical address */
+typedef struct xpaddr {
+	phys_addr_t paddr;
+} xpaddr_t;
+
+#define XMADDR(x)	((xmaddr_t) { .maddr = (x) })
+#define XPADDR(x)	((xpaddr_t) { .paddr = (x) })
+
+static inline xmaddr_t phys_to_machine(xpaddr_t phys)
+{
+	unsigned offset = phys.paddr & ~PAGE_MASK;
+	return XMADDR(PFN_PHYS(pfn_to_mfn(PFN_DOWN(phys.paddr))) | offset);
+}
+
+static inline xpaddr_t machine_to_phys(xmaddr_t machine)
+{
+	unsigned offset = machine.maddr & ~PAGE_MASK;
+	return XPADDR(PFN_PHYS(mfn_to_pfn(PFN_DOWN(machine.maddr))) | offset);
+}
+/* VIRT <-> MACHINE conversion */
+#define virt_to_machine(v)	(phys_to_machine(XPADDR(__pa(v))))
+#define virt_to_pfn(v)          (PFN_DOWN(__pa(v)))
+#define virt_to_mfn(v)		(pfn_to_mfn(virt_to_pfn(v)))
+#define mfn_to_virt(m)		(__va(mfn_to_pfn(m) << PAGE_SHIFT))
+
+static inline xmaddr_t arbitrary_virt_to_machine(void *vaddr)
+{
+	/* TODO: assuming it is mapped in the kernel 1:1 */
+	return virt_to_machine(vaddr);
+}
+
+/* TODO: this shouldn't be here, but it is because the frontend drivers
+ * use it (it's pulled in via headers) even though we won't hit this
+ * code path. So for right now just punt with this.
+ */
+static inline pte_t *lookup_address(unsigned long address, unsigned int *level)
+{
+	BUG();
+	return NULL;
+}
+
+static inline int m2p_add_override(unsigned long mfn, struct page *page,
+		struct gnttab_map_grant_ref *kmap_op)
+{
+	return 0;
+}
+
+static inline int m2p_remove_override(struct page *page, bool clear_pte)
+{
+	return 0;
+}
+
+static inline bool set_phys_to_machine(unsigned long pfn, unsigned long mfn)
+{
+	BUG();
+	return false;
+}
+#endif /* _ASM_ARM_XEN_PAGE_H */
-- 
1.7.2.5
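[Editor's note: since ARM guests are auto-translated, pfn_to_mfn()/mfn_to_pfn() in the patch above collapse to the identity, and phys_to_machine()/machine_to_phys() only preserve the in-page offset. A minimal userspace sketch of that round trip follows; PAGE_SHIFT of 12 and the stand-in macros/types are assumptions, not the kernel definitions.]

```c
#include <assert.h>
#include <stdint.h>

/* Stand-ins for the kernel macros the patch relies on (assumed values). */
#define PAGE_SHIFT  12
#define PAGE_SIZE   (1ULL << PAGE_SHIFT)
#define PAGE_MASK   (~(PAGE_SIZE - 1))
#define PFN_DOWN(x) ((x) >> PAGE_SHIFT)
#define PFN_PHYS(x) ((uint64_t)(x) << PAGE_SHIFT)

/* ARM guests are auto-translated, so pfn == mfn. */
#define pfn_to_mfn(pfn) (pfn)
#define mfn_to_pfn(mfn) (mfn)

typedef struct { uint64_t maddr; } xmaddr_t; /* machine address */
typedef struct { uint64_t paddr; } xpaddr_t; /* pseudo-physical address */

static xmaddr_t phys_to_machine(xpaddr_t phys)
{
	/* Keep the offset within the page, translate only the frame number. */
	uint64_t offset = phys.paddr & ~PAGE_MASK;
	xmaddr_t m = { PFN_PHYS(pfn_to_mfn(PFN_DOWN(phys.paddr))) | offset };
	return m;
}

static xpaddr_t machine_to_phys(xmaddr_t machine)
{
	uint64_t offset = machine.maddr & ~PAGE_MASK;
	xpaddr_t p = { PFN_PHYS(mfn_to_pfn(PFN_DOWN(machine.maddr))) | offset };
	return p;
}
```

Because the map is 1:1, the two conversions are exact inverses here; on x86 PV the same helpers instead consult the real p2m/m2p tables.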


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 15:37:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 15:37:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T228C-0008CS-KD; Thu, 16 Aug 2012 15:37:08 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T228A-000879-5k
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 15:37:06 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1345131397!9609475!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzE3OTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23891 invoked from network); 16 Aug 2012 15:36:47 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 15:36:47 -0000
X-IronPort-AV: E=Sophos;i="4.77,778,1336363200"; d="scan'208";a="205386126"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 11:36:36 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 16 Aug 2012 11:36:36 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1T227a-0007Iu-Vc;
	Thu, 16 Aug 2012 16:36:30 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <linux-kernel@vger.kernel.org>
Date: Thu, 16 Aug 2012 16:35:53 +0100
Message-ID: <1345131377-14713-1-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208161618550.4850@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208161618550.4850@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, konrad.wilk@oracle.com,
	catalin.marinas@arm.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	tim@xen.org, linux-arm-kernel@lists.infradead.org
Subject: [Xen-devel] [PATCH v3 01/25] arm: initial Xen support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

- Basic hypervisor.h and interface.h definitions.
- Skeleton enlighten.c, set xen_start_info to an empty struct.
- Make xen_initial_domain dependent on the SIF_PRIVILEGED bit.

The new code only compiles when CONFIG_XEN is set; that option is going
to be added to arch/arm/Kconfig in patch #11 "xen/arm: introduce
CONFIG_XEN on ARM".

Changes in v3:

- improve comments.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/arm/Makefile                     |    1 +
 arch/arm/include/asm/hypervisor.h     |    6 +++
 arch/arm/include/asm/xen/hypervisor.h |   19 ++++++++++
 arch/arm/include/asm/xen/interface.h  |   65 +++++++++++++++++++++++++++++++++
 arch/arm/xen/Makefile                 |    1 +
 arch/arm/xen/enlighten.c              |   35 ++++++++++++++++++
 include/xen/xen.h                     |    2 +-
 7 files changed, 128 insertions(+), 1 deletions(-)
 create mode 100644 arch/arm/include/asm/hypervisor.h
 create mode 100644 arch/arm/include/asm/xen/hypervisor.h
 create mode 100644 arch/arm/include/asm/xen/interface.h
 create mode 100644 arch/arm/xen/Makefile
 create mode 100644 arch/arm/xen/enlighten.c

diff --git a/arch/arm/Makefile b/arch/arm/Makefile
index 0298b00..70aaa82 100644
--- a/arch/arm/Makefile
+++ b/arch/arm/Makefile
@@ -246,6 +246,7 @@ endif
 core-$(CONFIG_FPE_NWFPE)	+= arch/arm/nwfpe/
 core-$(CONFIG_FPE_FASTFPE)	+= $(FASTFPE_OBJ)
 core-$(CONFIG_VFP)		+= arch/arm/vfp/
+core-$(CONFIG_XEN)		+= arch/arm/xen/
 
 # If we have a machine-specific directory, then include it in the build.
 core-y				+= arch/arm/kernel/ arch/arm/mm/ arch/arm/common/
diff --git a/arch/arm/include/asm/hypervisor.h b/arch/arm/include/asm/hypervisor.h
new file mode 100644
index 0000000..b90d9e5
--- /dev/null
+++ b/arch/arm/include/asm/hypervisor.h
@@ -0,0 +1,6 @@
+#ifndef _ASM_ARM_HYPERVISOR_H
+#define _ASM_ARM_HYPERVISOR_H
+
+#include <asm/xen/hypervisor.h>
+
+#endif
diff --git a/arch/arm/include/asm/xen/hypervisor.h b/arch/arm/include/asm/xen/hypervisor.h
new file mode 100644
index 0000000..d7ab99a
--- /dev/null
+++ b/arch/arm/include/asm/xen/hypervisor.h
@@ -0,0 +1,19 @@
+#ifndef _ASM_ARM_XEN_HYPERVISOR_H
+#define _ASM_ARM_XEN_HYPERVISOR_H
+
+extern struct shared_info *HYPERVISOR_shared_info;
+extern struct start_info *xen_start_info;
+
+/* Lazy mode for batching updates / context switch */
+enum paravirt_lazy_mode {
+	PARAVIRT_LAZY_NONE,
+	PARAVIRT_LAZY_MMU,
+	PARAVIRT_LAZY_CPU,
+};
+
+static inline enum paravirt_lazy_mode paravirt_get_lazy_mode(void)
+{
+	return PARAVIRT_LAZY_NONE;
+}
+
+#endif /* _ASM_ARM_XEN_HYPERVISOR_H */
diff --git a/arch/arm/include/asm/xen/interface.h b/arch/arm/include/asm/xen/interface.h
new file mode 100644
index 0000000..9e0ec5a
--- /dev/null
+++ b/arch/arm/include/asm/xen/interface.h
@@ -0,0 +1,65 @@
+/******************************************************************************
+ * Guest OS interface to ARM Xen.
+ *
+ * Stefano Stabellini <stefano.stabellini@eu.citrix.com>, Citrix, 2012
+ */
+
+#ifndef _ASM_ARM_XEN_INTERFACE_H
+#define _ASM_ARM_XEN_INTERFACE_H
+
+#include <linux/types.h>
+
+#define __DEFINE_GUEST_HANDLE(name, type) \
+	typedef type * __guest_handle_ ## name
+
+#define DEFINE_GUEST_HANDLE_STRUCT(name) \
+	__DEFINE_GUEST_HANDLE(name, struct name)
+#define DEFINE_GUEST_HANDLE(name) __DEFINE_GUEST_HANDLE(name, name)
+#define GUEST_HANDLE(name)        __guest_handle_ ## name
+
+#define set_xen_guest_handle(hnd, val)			\
+	do {						\
+		if (sizeof(hnd) == 8)			\
+			*(uint64_t *)&(hnd) = 0;	\
+		(hnd) = val;				\
+	} while (0)
+
+#ifndef __ASSEMBLY__
+/* Guest handles for primitive C types. */
+__DEFINE_GUEST_HANDLE(uchar, unsigned char);
+__DEFINE_GUEST_HANDLE(uint,  unsigned int);
+__DEFINE_GUEST_HANDLE(ulong, unsigned long);
+DEFINE_GUEST_HANDLE(char);
+DEFINE_GUEST_HANDLE(int);
+DEFINE_GUEST_HANDLE(long);
+DEFINE_GUEST_HANDLE(void);
+DEFINE_GUEST_HANDLE(uint64_t);
+DEFINE_GUEST_HANDLE(uint32_t);
+
+/* Maximum number of virtual CPUs in multi-processor guests. */
+#define MAX_VIRT_CPUS 1
+
+struct arch_vcpu_info { };
+struct arch_shared_info { };
+
+/* TODO: Move pvclock definitions some place arch independent */
+struct pvclock_vcpu_time_info {
+	u32   version;
+	u32   pad0;
+	u64   tsc_timestamp;
+	u64   system_time;
+	u32   tsc_to_system_mul;
+	s8    tsc_shift;
+	u8    flags;
+	u8    pad[2];
+} __attribute__((__packed__)); /* 32 bytes */
+
+/* It is OK to have a 12 bytes struct with no padding because it is packed */
+struct pvclock_wall_clock {
+	u32   version;
+	u32   sec;
+	u32   nsec;
+} __attribute__((__packed__));
+#endif
+
+#endif /* _ASM_ARM_XEN_INTERFACE_H */
diff --git a/arch/arm/xen/Makefile b/arch/arm/xen/Makefile
new file mode 100644
index 0000000..0bad594
--- /dev/null
+++ b/arch/arm/xen/Makefile
@@ -0,0 +1 @@
+obj-y		:= enlighten.o
diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
new file mode 100644
index 0000000..c535540
--- /dev/null
+++ b/arch/arm/xen/enlighten.c
@@ -0,0 +1,35 @@
+#include <xen/xen.h>
+#include <xen/interface/xen.h>
+#include <xen/interface/memory.h>
+#include <xen/platform_pci.h>
+#include <asm/xen/hypervisor.h>
+#include <asm/xen/hypercall.h>
+#include <linux/module.h>
+
+struct start_info _xen_start_info;
+struct start_info *xen_start_info = &_xen_start_info;
+EXPORT_SYMBOL_GPL(xen_start_info);
+
+enum xen_domain_type xen_domain_type = XEN_NATIVE;
+EXPORT_SYMBOL_GPL(xen_domain_type);
+
+struct shared_info xen_dummy_shared_info;
+struct shared_info *HYPERVISOR_shared_info = (void *)&xen_dummy_shared_info;
+
+DEFINE_PER_CPU(struct vcpu_info *, xen_vcpu);
+
+/* TODO: to be removed */
+__read_mostly int xen_have_vector_callback;
+EXPORT_SYMBOL_GPL(xen_have_vector_callback);
+
+int xen_platform_pci_unplug = XEN_UNPLUG_ALL;
+EXPORT_SYMBOL_GPL(xen_platform_pci_unplug);
+
+int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
+			       unsigned long addr,
+			       unsigned long mfn, int nr,
+			       pgprot_t prot, unsigned domid)
+{
+	return -ENOSYS;
+}
+EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_range);
diff --git a/include/xen/xen.h b/include/xen/xen.h
index a164024..2c0d3a5 100644
--- a/include/xen/xen.h
+++ b/include/xen/xen.h
@@ -23,7 +23,7 @@ extern enum xen_domain_type xen_domain_type;
 #include <xen/interface/xen.h>
 #include <asm/xen/hypervisor.h>
 
-#define xen_initial_domain()	(xen_pv_domain() && \
+#define xen_initial_domain()	(xen_domain() && \
 				 xen_start_info->flags & SIF_INITDOMAIN)
 #else  /* !CONFIG_XEN_DOM0 */
 #define xen_initial_domain()	(0)
-- 
1.7.2.5
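[Editor's note: the include/xen/xen.h hunk above widens xen_initial_domain() from PV-only to any Xen domain type, still gated on the start-info flag. A standalone sketch of that flag test follows; the SIF_* values match Xen's public interface header, but the helper name and plain-function shape are illustrative, not the kernel macro.]

```c
#include <assert.h>

/* SIF_* flag bits as defined in Xen's public interface header. */
#define SIF_PRIVILEGED (1 << 0) /* domain is privileged */
#define SIF_INITDOMAIN (1 << 1) /* domain is the initial domain (dom0) */

struct start_info { unsigned long flags; };

/* Mirrors the patched macro: any domain type qualifies (PV or the
 * auto-translated ARM case), as long as SIF_INITDOMAIN is set. */
static int is_initial_domain(int in_xen_domain, const struct start_info *si)
{
	return in_xen_domain && (si->flags & SIF_INITDOMAIN) != 0;
}
```

The point of the one-line change is visible here: before the patch the first operand was effectively "is a PV domain", which is never true for an ARM guest, so dom0 on ARM would never identify itself as the initial domain.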


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 15:53:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 15:53:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T22Nu-0001I9-Oj; Thu, 16 Aug 2012 15:53:22 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T22Ns-0001I2-Vo
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 15:53:21 +0000
Received: from [85.158.139.83:10605] by server-11.bemta-5.messagelabs.com id
	A4/8B-29296-0771D205; Thu, 16 Aug 2012 15:53:20 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-7.tower-182.messagelabs.com!1345132399!24556137!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3701 invoked from network); 16 Aug 2012 15:53:19 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-7.tower-182.messagelabs.com with SMTP;
	16 Aug 2012 15:53:19 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 16 Aug 2012 16:53:19 +0100
Message-Id: <502D33B8020000780009596B@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Thu, 16 Aug 2012 16:54:00 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Stefano Stabellini" <stefano.stabellini@eu.citrix.com>,
	<xen-devel@lists.xensource.com>
References: <alpine.DEB.2.02.1208161523420.4850@kaball.uk.xensource.com>
	<1345128612-10323-4-git-send-email-stefano.stabellini@eu.citrix.com>
In-Reply-To: <1345128612-10323-4-git-send-email-stefano.stabellini@eu.citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: tim@xen.org, Ian.Campbell@citrix.com
Subject: Re: [Xen-devel] [PATCH v3 4/6] xen: introduce XEN_GUEST_HANDLE_PARAM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 16.08.12 at 16:50, Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
> --- a/xen/include/public/arch-arm.h
> +++ b/xen/include/public/arch-arm.h
> @@ -51,18 +51,34 @@
>  
>  #define XEN_HYPERCALL_TAG   0XEA1
>  
> +#define uint64_aligned_t uint64_t __attribute__((aligned(8)))
>  
>  #ifndef __ASSEMBLY__
> -#define ___DEFINE_XEN_GUEST_HANDLE(name, type) \
> -    typedef struct { type *p; } __guest_handle_ ## name
> +#define ___DEFINE_XEN_GUEST_HANDLE(name, type)                  \
> +    typedef struct { type *p; }                                 \
> +        __guest_handle_ ## name;                                \
> +    typedef struct { union { type *p; uint64_aligned_t q; }; }  \
> +        __guest_handle_64_ ## name;

Why struct { union { ... } } instead of just union { ... }?

>  
> +/*
> + * XEN_GUEST_HANDLE represents a guest pointer, when passed as a field
> + * in a struct in memory. On ARM it is always 8 bytes in size and
> + * 8-byte aligned.
> + * XEN_GUEST_HANDLE_PARAM represents a guest pointer, when passed as a
> + * hypercall argument. It is 4 bytes on aarch32 and 8 bytes on aarch64.
> + */
>  #define __DEFINE_XEN_GUEST_HANDLE(name, type) \
>      ___DEFINE_XEN_GUEST_HANDLE(name, type);   \
>      ___DEFINE_XEN_GUEST_HANDLE(const_##name, const type)
>  #define DEFINE_XEN_GUEST_HANDLE(name)   __DEFINE_XEN_GUEST_HANDLE(name, name)
> -#define __XEN_GUEST_HANDLE(name)        __guest_handle_ ## name
> +#define __XEN_GUEST_HANDLE(name)        __guest_handle_64_ ## name
>  #define XEN_GUEST_HANDLE(name)          __XEN_GUEST_HANDLE(name)
> -#define set_xen_guest_handle_raw(hnd, val)  do { (hnd).p = val; } while (0)
> +/* this is going to be changed on 64-bit */
> +#define XEN_GUEST_HANDLE_PARAM(name)    __guest_handle_ ## name
> +#define set_xen_guest_handle_raw(hnd, val)                  \
> +    do { if ( sizeof(hnd) == 8 ) *(uint64_t *)&(hnd) = 0;   \

If you made the "normal" handle a union too, you could avoid
the explicit cast (which e.g. gcc, when not passed
-fno-strict-aliasing, will choke on) and instead use (hnd).q (and
at once avoid the double initialization of the low half).

Also, the condition guarding the zeroing could be "sizeof(hnd) >
sizeof((hnd).p)", which would then also work unchanged on 64-bit and
avoid a full double initialization there.

> +         (hnd).p = val;                                     \

In a public header you certainly want to avoid evaluating a
macro argument twice.

> +    } while ( 0 )
>  #ifdef __XEN_TOOLS__
>  #define get_xen_guest_handle(val, hnd)  do { val = (hnd).p; } while (0)
>  #endif

Seeing the patch I btw realized that there's no easy way to
avoid having the type as a second argument in the conversion
macros. Nevertheless I still don't like the explicitly specified type
there.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 15:56:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 15:56:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T22QU-0001RB-MI; Thu, 16 Aug 2012 15:56:02 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T22QS-0001Qe-AP
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 15:56:00 +0000
Received: from [85.158.143.35:45483] by server-3.bemta-4.messagelabs.com id
	2C/25-09529-F081D205; Thu, 16 Aug 2012 15:55:59 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1345132556!6069840!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzE3OTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9092 invoked from network); 16 Aug 2012 15:55:58 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 15:55:58 -0000
X-IronPort-AV: E=Sophos;i="4.77,778,1336363200"; d="scan'208";a="205388545"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 11:55:52 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 16 Aug 2012 11:55:51 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1T227b-0007Iu-Ka;
	Thu, 16 Aug 2012 16:36:31 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <linux-kernel@vger.kernel.org>
Date: Thu, 16 Aug 2012 16:36:17 +0100
Message-ID: <1345131377-14713-25-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208161618550.4850@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208161618550.4850@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, konrad.wilk@oracle.com,
	catalin.marinas@arm.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	tim@xen.org, linux-arm-kernel@lists.infradead.org
Subject: [Xen-devel] [PATCH v3 25/25] [HACK] xen/arm: implement
	xen_remap_domain_mfn_range
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Ian Campbell <Ian.Campbell@citrix.com>

Do not apply!

This is a simple, hacky implementation of xen_remap_domain_mfn_range,
using XENMAPSPACE_gmfn_foreign.

It should use the same interface as hybrid x86.


Changes in v2:

- retain binary compatibility in xen_add_to_physmap: use a union.

Changes in v3:

- do not use an anonymous union.


Signed-off-by: Ian Campbell <Ian.Campbell@citrix.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 arch/arm/xen/enlighten.c       |   79 +++++++++++++++++++++++++++++++++++++++-
 drivers/xen/privcmd.c          |   16 +++++----
 drivers/xen/xenfs/super.c      |    7 ++++
 include/xen/interface/memory.h |   15 ++++++--
 4 files changed, 105 insertions(+), 12 deletions(-)

diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
index 3348d90..4616412 100644
--- a/arch/arm/xen/enlighten.c
+++ b/arch/arm/xen/enlighten.c
@@ -16,6 +16,10 @@
 #include <linux/of.h>
 #include <linux/of_irq.h>
 #include <linux/of_address.h>
+#include <linux/mm.h>
+#include <linux/ioport.h>
+
+#include <asm/pgtable.h>
 
 struct start_info _xen_start_info;
 struct start_info *xen_start_info = &_xen_start_info;
@@ -38,12 +42,85 @@ EXPORT_SYMBOL_GPL(xen_platform_pci_unplug);
 
 static __read_mostly int xen_events_irq = -1;
 
+#define FOREIGN_MAP_BUFFER 0x90000000UL
+#define FOREIGN_MAP_BUFFER_SIZE 0x10000000UL
+struct resource foreign_map_resource = {
+	.start = FOREIGN_MAP_BUFFER,
+	.end = FOREIGN_MAP_BUFFER + FOREIGN_MAP_BUFFER_SIZE - 1,
+	.name = "Xen foreign map buffer",
+	.flags = 0,
+};
+
+static unsigned long foreign_map_buffer_pfn = FOREIGN_MAP_BUFFER >> PAGE_SHIFT;
+
+struct remap_data {
+	struct mm_struct *mm;
+	unsigned long mfn;
+	pgprot_t prot;
+};
+
+static int remap_area_mfn_pte_fn(pte_t *ptep, pgtable_t token,
+				 unsigned long addr, void *data)
+{
+	struct remap_data *rmd = data;
+	pte_t pte = pfn_pte(rmd->mfn, rmd->prot);
+
+	if (rmd->mfn < 0x90010)
+		pr_crit("%s: ptep %p addr %#lx => %#x / %#lx\n",
+		       __func__, ptep, addr, pte_val(pte), rmd->mfn);
+
+	set_pte_at(rmd->mm, addr, ptep, pte);
+
+	rmd->mfn++;
+	return 0;
+}
+
 int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
 			       unsigned long addr,
 			       unsigned long mfn, int nr,
 			       pgprot_t prot, unsigned domid)
 {
-	return -ENOSYS;
+	int i, rc = 0;
+	struct remap_data rmd = {
+		.mm = vma->vm_mm,
+		.prot = prot,
+	};
+	struct xen_add_to_physmap xatp = {
+		.domid = DOMID_SELF,
+		.space = XENMAPSPACE_gmfn_foreign,
+
+		.u.foreign_domid = domid,
+	};
+
+	if (foreign_map_buffer_pfn + nr > ((FOREIGN_MAP_BUFFER +
+					FOREIGN_MAP_BUFFER_SIZE)>>PAGE_SHIFT)) {
+		pr_crit("Ran out of foreign map buffers...\n");
+		return -EBUSY;
+	}
+
+	for (i = 0; i < nr; i++) {
+		xatp.idx = mfn + i;
+		xatp.gpfn = foreign_map_buffer_pfn + i;
+		rc = HYPERVISOR_memory_op(XENMEM_add_to_physmap, &xatp);
+		if (rc != 0) {
+			pr_crit("foreign map add_to_physmap failed, err=%d\n", rc);
+			goto out;
+		}
+	}
+
+	rmd.mfn = foreign_map_buffer_pfn;
+	rc = apply_to_page_range(vma->vm_mm,
+				 addr,
+				 (unsigned long)nr << PAGE_SHIFT,
+				 remap_area_mfn_pte_fn, &rmd);
+	if (rc != 0) {
+		pr_crit("apply_to_page_range failed rc=%d\n", rc);
+		goto out;
+	}
+
+	foreign_map_buffer_pfn += nr;
+out:
+	return rc;
 }
 EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_range);
 
diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
index 85226cb..3e15c22 100644
--- a/drivers/xen/privcmd.c
+++ b/drivers/xen/privcmd.c
@@ -20,6 +20,8 @@
 #include <linux/pagemap.h>
 #include <linux/seq_file.h>
 #include <linux/miscdevice.h>
+#include <linux/resource.h>
+#include <linux/ioport.h>
 
 #include <asm/pgalloc.h>
 #include <asm/pgtable.h>
@@ -196,9 +198,6 @@ static long privcmd_ioctl_mmap(void __user *udata)
 	LIST_HEAD(pagelist);
 	struct mmap_mfn_state state;
 
-	if (!xen_initial_domain())
-		return -EPERM;
-
 	if (copy_from_user(&mmapcmd, udata, sizeof(mmapcmd)))
 		return -EFAULT;
 
@@ -286,9 +285,6 @@ static long privcmd_ioctl_mmap_batch(void __user *udata)
 	LIST_HEAD(pagelist);
 	struct mmap_batch_state state;
 
-	if (!xen_initial_domain())
-		return -EPERM;
-
 	if (copy_from_user(&m, udata, sizeof(m)))
 		return -EFAULT;
 
@@ -365,6 +361,11 @@ static long privcmd_ioctl(struct file *file,
 	return ret;
 }
 
+static void privcmd_close(struct vm_area_struct *vma)
+{
+	/* TODO: unmap VMA */
+}
+
 static int privcmd_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
 {
 	printk(KERN_DEBUG "privcmd_fault: vma=%p %lx-%lx, pgoff=%lx, uv=%p\n",
@@ -375,7 +376,8 @@ static int privcmd_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
 }
 
 static struct vm_operations_struct privcmd_vm_ops = {
-	.fault = privcmd_fault
+	.fault = privcmd_fault,
+	.close = privcmd_close,
 };
 
 static int privcmd_mmap(struct file *file, struct vm_area_struct *vma)
diff --git a/drivers/xen/xenfs/super.c b/drivers/xen/xenfs/super.c
index a84b53c..edbe22f 100644
--- a/drivers/xen/xenfs/super.c
+++ b/drivers/xen/xenfs/super.c
@@ -12,6 +12,7 @@
 #include <linux/module.h>
 #include <linux/fs.h>
 #include <linux/magic.h>
+#include <linux/ioport.h>
 
 #include <xen/xen.h>
 
@@ -80,6 +81,8 @@ static const struct file_operations capabilities_file_ops = {
 	.llseek = default_llseek,
 };
 
+extern struct resource foreign_map_resource;
+
 static int xenfs_fill_super(struct super_block *sb, void *data, int silent)
 {
 	static struct tree_descr xenfs_files[] = {
@@ -100,6 +103,10 @@ static int xenfs_fill_super(struct super_block *sb, void *data, int silent)
 				  &xsd_kva_file_ops, NULL, S_IRUSR|S_IWUSR);
 		xenfs_create_file(sb, sb->s_root, "xsd_port",
 				  &xsd_port_file_ops, NULL, S_IRUSR|S_IWUSR);
+		rc = request_resource(&iomem_resource, &foreign_map_resource);
+		if (rc < 0)
+			pr_crit("failed to register foreign map resource\n");
+		rc = 0; /* ignore */
 	}
 
 	return rc;
diff --git a/include/xen/interface/memory.h b/include/xen/interface/memory.h
index b66d04c..99c5052 100644
--- a/include/xen/interface/memory.h
+++ b/include/xen/interface/memory.h
@@ -163,12 +163,19 @@ struct xen_add_to_physmap {
     /* Which domain to change the mapping for. */
     domid_t domid;
 
-    /* Number of pages to go through for gmfn_range */
-    uint16_t    size;
+    union {
+        /* Number of pages to go through for gmfn_range */
+        uint16_t    size;
+        /* IFF gmfn_foreign */
+        domid_t foreign_domid;
+    } u;
 
     /* Source mapping space. */
-#define XENMAPSPACE_shared_info 0 /* shared info page */
-#define XENMAPSPACE_grant_table 1 /* grant table page */
+#define XENMAPSPACE_shared_info  0 /* shared info page */
+#define XENMAPSPACE_grant_table  1 /* grant table page */
+#define XENMAPSPACE_gmfn         2 /* GMFN */
+#define XENMAPSPACE_gmfn_range   3 /* GMFN range */
+#define XENMAPSPACE_gmfn_foreign 4 /* GMFN from another guest */
     unsigned int space;
 
     /* Index into source mapping space. */
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 15:56:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 15:56:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T22QS-0001Qj-AQ; Thu, 16 Aug 2012 15:56:00 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T22QR-0001Qb-MO
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 15:55:59 +0000
Received: from [85.158.143.35:13448] by server-2.bemta-4.messagelabs.com id
	CA/22-31966-F081D205; Thu, 16 Aug 2012 15:55:59 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1345132556!6069840!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzE3OTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9033 invoked from network); 16 Aug 2012 15:55:58 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 15:55:58 -0000
X-IronPort-AV: E=Sophos;i="4.77,778,1336363200"; d="scan'208";a="205388536"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 11:55:46 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 16 Aug 2012 11:55:45 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1T227b-0007Iu-B4;
	Thu, 16 Aug 2012 16:36:31 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <linux-kernel@vger.kernel.org>
Date: Thu, 16 Aug 2012 16:36:03 +0100
Message-ID: <1345131377-14713-11-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208161618550.4850@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208161618550.4850@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, konrad.wilk@oracle.com,
	catalin.marinas@arm.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	tim@xen.org, linux-arm-kernel@lists.infradead.org
Subject: [Xen-devel] [PATCH v3 11/25] xen: do not compile manage, balloon,
	pci, acpi and cpu_hotplug on ARM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Changes in v2:

- make pci.o depend on CONFIG_PCI and acpi.o depend on CONFIG_ACPI.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 drivers/xen/Makefile |   11 ++++++++---
 1 files changed, 8 insertions(+), 3 deletions(-)

diff --git a/drivers/xen/Makefile b/drivers/xen/Makefile
index fc34886..bee02b2 100644
--- a/drivers/xen/Makefile
+++ b/drivers/xen/Makefile
@@ -1,11 +1,17 @@
-obj-y	+= grant-table.o features.o events.o manage.o balloon.o
+ifneq ($(CONFIG_ARM),y)
+obj-y	+= manage.o balloon.o
+obj-$(CONFIG_HOTPLUG_CPU)		+= cpu_hotplug.o
+endif
+obj-y	+= grant-table.o features.o events.o
 obj-y	+= xenbus/
 
 nostackp := $(call cc-option, -fno-stack-protector)
 CFLAGS_features.o			:= $(nostackp)
 
+obj-$(CONFIG_XEN_DOM0)			+= $(dom0-y)
+dom0-$(CONFIG_PCI) := pci.o
+dom0-$(CONFIG_ACPI) := acpi.o
 obj-$(CONFIG_BLOCK)			+= biomerge.o
-obj-$(CONFIG_HOTPLUG_CPU)		+= cpu_hotplug.o
 obj-$(CONFIG_XEN_XENCOMM)		+= xencomm.o
 obj-$(CONFIG_XEN_BALLOON)		+= xen-balloon.o
 obj-$(CONFIG_XEN_SELFBALLOONING)	+= xen-selfballoon.o
@@ -17,7 +23,6 @@ obj-$(CONFIG_XEN_SYS_HYPERVISOR)	+= sys-hypervisor.o
 obj-$(CONFIG_XEN_PVHVM)			+= platform-pci.o
 obj-$(CONFIG_XEN_TMEM)			+= tmem.o
 obj-$(CONFIG_SWIOTLB_XEN)		+= swiotlb-xen.o
-obj-$(CONFIG_XEN_DOM0)			+= pci.o acpi.o
 obj-$(CONFIG_XEN_PCIDEV_BACKEND)	+= xen-pciback/
 obj-$(CONFIG_XEN_PRIVCMD)		+= xen-privcmd.o
 obj-$(CONFIG_XEN_ACPI_PROCESSOR)	+= xen-acpi-processor.o
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
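
The Makefile change above uses kbuild's composite-list idiom: per-feature `dom0-$(CONFIG_...)` assignments accumulate objects into `dom0-y`, and that whole list is linked in only when the umbrella option `CONFIG_XEN_DOM0` is enabled. A minimal standalone sketch of the pattern (illustrative fragment, not the real drivers/xen/Makefile; it uses `+=` so multiple features accumulate rather than overwrite):

```make
# Illustrative kbuild fragment: per-feature objects collect into a
# composite list keyed by their own config options...
dom0-$(CONFIG_PCI)  += pci.o
dom0-$(CONFIG_ACPI) += acpi.o

# ...and the whole list is built only when the umbrella option is y.
# With CONFIG_XEN_DOM0 unset, $(dom0-y) never reaches obj-y at all.
obj-$(CONFIG_XEN_DOM0) += $(dom0-y)
```

Because make expands `$(dom0-y)` lazily when `obj-y` is finally used, the ordering of these lines relative to each other does not matter in practice.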

From xen-devel-bounces@lists.xen.org Thu Aug 16 15:56:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 15:56:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T22QV-0001RM-2i; Thu, 16 Aug 2012 15:56:03 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T22QT-0001Qx-Vs
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 15:56:02 +0000
Received: from [85.158.139.83:28691] by server-12.bemta-5.messagelabs.com id
	D1/F3-22359-1181D205; Thu, 16 Aug 2012 15:56:01 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-15.tower-182.messagelabs.com!1345132558!28416991!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzE3OTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15057 invoked from network); 16 Aug 2012 15:55:59 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 15:55:59 -0000
X-IronPort-AV: E=Sophos;i="4.77,778,1336363200"; d="scan'208";a="205388555"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 11:55:58 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 16 Aug 2012 11:55:57 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1T227b-0007Iu-Hh;
	Thu, 16 Aug 2012 16:36:31 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <linux-kernel@vger.kernel.org>
Date: Thu, 16 Aug 2012 16:36:12 +0100
Message-ID: <1345131377-14713-20-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208161618550.4850@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208161618550.4850@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, konrad.wilk@oracle.com,
	catalin.marinas@arm.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	tim@xen.org, linux-arm-kernel@lists.infradead.org
Subject: [Xen-devel] [PATCH v3 20/25] xen/arm: compile netback
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/arm/include/asm/xen/hypercall.h |   19 +++++++++++++++++++
 drivers/net/xen-netback/netback.c    |    1 +
 drivers/net/xen-netfront.c           |    1 +
 3 files changed, 21 insertions(+), 0 deletions(-)

diff --git a/arch/arm/include/asm/xen/hypercall.h b/arch/arm/include/asm/xen/hypercall.h
index 4ac0624..8a82325 100644
--- a/arch/arm/include/asm/xen/hypercall.h
+++ b/arch/arm/include/asm/xen/hypercall.h
@@ -47,4 +47,23 @@ unsigned long HYPERVISOR_hvm_op(int op, void *arg);
 int HYPERVISOR_memory_op(unsigned int cmd, void *arg);
 int HYPERVISOR_physdev_op(int cmd, void *arg);
 
+static inline void
+MULTI_update_va_mapping(struct multicall_entry *mcl, unsigned long va,
+			unsigned int new_val, unsigned long flags)
+{
+	BUG();
+}
+
+static inline void
+MULTI_mmu_update(struct multicall_entry *mcl, struct mmu_update *req,
+		 int count, int *success_count, domid_t domid)
+{
+	BUG();
+}
+
+static inline int
+HYPERVISOR_multicall(void *call_list, int nr_calls)
+{
+	BUG();
+}
 #endif /* _ASM_ARM_XEN_HYPERCALL_H */
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index f4a6fca..ab4f81c 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -40,6 +40,7 @@
 
 #include <net/tcp.h>
 
+#include <xen/xen.h>
 #include <xen/events.h>
 #include <xen/interface/memory.h>
 
diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index 3089990..bf4ba2b 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -43,6 +43,7 @@
 #include <linux/slab.h>
 #include <net/ip.h>
 
+#include <asm/xen/page.h>
 #include <xen/xen.h>
 #include <xen/xenbus.h>
 #include <xen/events.h>
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 15:56:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 15:56:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T22Qb-0001Sn-Fb; Thu, 16 Aug 2012 15:56:09 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T22QZ-0001SO-Kt
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 15:56:07 +0000
Received: from [85.158.138.51:23168] by server-11.bemta-3.messagelabs.com id
	CB/7B-23152-6181D205; Thu, 16 Aug 2012 15:56:06 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-11.tower-174.messagelabs.com!1345132564!28652662!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzE3OTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31415 invoked from network); 16 Aug 2012 15:56:06 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 15:56:06 -0000
X-IronPort-AV: E=Sophos;i="4.77,778,1336363200"; d="scan'208";a="205388575"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 11:56:04 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 16 Aug 2012 11:56:03 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1T227b-0007Iu-JQ;
	Thu, 16 Aug 2012 16:36:31 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <linux-kernel@vger.kernel.org>
Date: Thu, 16 Aug 2012 16:36:15 +0100
Message-ID: <1345131377-14713-23-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208161618550.4850@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208161618550.4850@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, konrad.wilk@oracle.com,
	catalin.marinas@arm.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	tim@xen.org, linux-arm-kernel@lists.infradead.org
Subject: [Xen-devel] [PATCH v3 23/25] xen: missing includes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Note: this patch should be already in Konrad's tree, it is here just for
convenience.

Changes in v3:
- add missing pvclock-abi.h include to ia64 header files.

Changes in v2:
- remove pvclock hack;
- remove include linux/types.h from xen/interface/xen.h.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 arch/ia64/include/asm/xen/interface.h      |    2 ++
 arch/x86/include/asm/xen/interface.h       |    2 ++
 drivers/tty/hvc/hvc_xen.c                  |    2 ++
 drivers/xen/grant-table.c                  |    1 +
 drivers/xen/xenbus/xenbus_probe_frontend.c |    1 +
 include/xen/interface/xen.h                |    1 -
 include/xen/privcmd.h                      |    1 +
 7 files changed, 9 insertions(+), 1 deletions(-)

diff --git a/arch/ia64/include/asm/xen/interface.h b/arch/ia64/include/asm/xen/interface.h
index 7c83445..e88c5de 100644
--- a/arch/ia64/include/asm/xen/interface.h
+++ b/arch/ia64/include/asm/xen/interface.h
@@ -269,6 +269,8 @@ typedef struct xen_callback xen_callback_t;
 
 #endif /* !__ASSEMBLY__ */
 
+#include <asm/pvclock-abi.h>
+
 /* Size of the shared_info area (this is not related to page size).  */
 #define XSI_SHIFT			14
 #define XSI_SIZE			(1 << XSI_SHIFT)
diff --git a/arch/x86/include/asm/xen/interface.h b/arch/x86/include/asm/xen/interface.h
index 25cc8df..28fc621 100644
--- a/arch/x86/include/asm/xen/interface.h
+++ b/arch/x86/include/asm/xen/interface.h
@@ -127,6 +127,8 @@ struct arch_shared_info {
 #include "interface_64.h"
 #endif
 
+#include <asm/pvclock-abi.h>
+
 #ifndef __ASSEMBLY__
 /*
  * The following is all CPU context. Note that the fpu_ctxt block is filled
diff --git a/drivers/tty/hvc/hvc_xen.c b/drivers/tty/hvc/hvc_xen.c
index 944eaeb..dc07f56 100644
--- a/drivers/tty/hvc/hvc_xen.c
+++ b/drivers/tty/hvc/hvc_xen.c
@@ -21,6 +21,7 @@
 #include <linux/console.h>
 #include <linux/delay.h>
 #include <linux/err.h>
+#include <linux/irq.h>
 #include <linux/init.h>
 #include <linux/types.h>
 #include <linux/list.h>
@@ -35,6 +36,7 @@
 #include <xen/page.h>
 #include <xen/events.h>
 #include <xen/interface/io/console.h>
+#include <xen/interface/sched.h>
 #include <xen/hvc-console.h>
 #include <xen/xenbus.h>
 
diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index 0bfc1ef..1d0d95e 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -47,6 +47,7 @@
 #include <xen/interface/memory.h>
 #include <xen/hvc-console.h>
 #include <asm/xen/hypercall.h>
+#include <asm/xen/interface.h>
 
 #include <asm/pgtable.h>
 #include <asm/sync_bitops.h>
diff --git a/drivers/xen/xenbus/xenbus_probe_frontend.c b/drivers/xen/xenbus/xenbus_probe_frontend.c
index a31b54d..3159a37 100644
--- a/drivers/xen/xenbus/xenbus_probe_frontend.c
+++ b/drivers/xen/xenbus/xenbus_probe_frontend.c
@@ -21,6 +21,7 @@
 #include <xen/xenbus.h>
 #include <xen/events.h>
 #include <xen/page.h>
+#include <xen/xen.h>
 
 #include <xen/platform_pci.h>
 
diff --git a/include/xen/interface/xen.h b/include/xen/interface/xen.h
index f6b8965..42834a3 100644
--- a/include/xen/interface/xen.h
+++ b/include/xen/interface/xen.h
@@ -10,7 +10,6 @@
 #define __XEN_PUBLIC_XEN_H__
 
 #include <asm/xen/interface.h>
-#include <asm/pvclock-abi.h>
 
 /*
  * XEN "SYSTEM CALLS" (a.k.a. HYPERCALLS).
diff --git a/include/xen/privcmd.h b/include/xen/privcmd.h
index 59f1bd8..45c1aa1 100644
--- a/include/xen/privcmd.h
+++ b/include/xen/privcmd.h
@@ -35,6 +35,7 @@
 
 #include <linux/types.h>
 #include <linux/compiler.h>
+#include <xen/interface/xen.h>
 
 struct privcmd_hypercall {
 	__u64 op;
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 15:56:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 15:56:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T22Qi-0001WI-0e; Thu, 16 Aug 2012 15:56:16 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T22Qg-0001VC-Ah
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 15:56:14 +0000
Received: from [85.158.138.51:8299] by server-4.bemta-3.messagelabs.com id
	05/69-04276-D181D205; Thu, 16 Aug 2012 15:56:13 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-11.tower-174.messagelabs.com!1345132564!28652662!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzE3OTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31809 invoked from network); 16 Aug 2012 15:56:10 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 15:56:10 -0000
X-IronPort-AV: E=Sophos;i="4.77,778,1336363200"; d="scan'208";a="205388588"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 11:56:10 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 16 Aug 2012 11:56:09 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1T227b-0007Iu-Jy;
	Thu, 16 Aug 2012 16:36:31 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <linux-kernel@vger.kernel.org>
Date: Thu, 16 Aug 2012 16:36:16 +0100
Message-ID: <1345131377-14713-24-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208161618550.4850@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208161618550.4850@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, konrad.wilk@oracle.com,
	catalin.marinas@arm.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	tim@xen.org, linux-arm-kernel@lists.infradead.org
Subject: [Xen-devel] [PATCH v3 24/25] xen: update xen_add_to_physmap
	interface
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Update struct xen_add_to_physmap to be in sync with Xen's version of the
structure.
The size field was introduced by:

changeset:   24164:707d27fe03e7
user:        Jean Guyader <jean.guyader@eu.citrix.com>
date:        Fri Nov 18 13:42:08 2011 +0000
summary:     mm: New XENMEM space, XENMAPSPACE_gmfn_range

According to the comment:

"This new field .size is located in the 16 bits padding between .domid
and .space in struct xen_add_to_physmap to stay compatible with older
versions."

Note: this patch should already be in Konrad's tree; it is included here
just for convenience.

Changes in v2:

- remove erroneous comment in the commit message.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 include/xen/interface/memory.h |    3 +++
 1 files changed, 3 insertions(+), 0 deletions(-)

diff --git a/include/xen/interface/memory.h b/include/xen/interface/memory.h
index b5c3098..b66d04c 100644
--- a/include/xen/interface/memory.h
+++ b/include/xen/interface/memory.h
@@ -163,6 +163,9 @@ struct xen_add_to_physmap {
     /* Which domain to change the mapping for. */
     domid_t domid;
 
+    /* Number of pages to go through for gmfn_range */
+    uint16_t    size;
+
     /* Source mapping space. */
 #define XENMAPSPACE_shared_info 0 /* shared info page */
 #define XENMAPSPACE_grant_table 1 /* grant table page */
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 15:56:26 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 15:56:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T22Qn-0001Yt-GX; Thu, 16 Aug 2012 15:56:21 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T22Ql-0001Xw-QK
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 15:56:20 +0000
Received: from [85.158.139.83:29947] by server-2.bemta-5.messagelabs.com id
	CD/C2-10142-3281D205; Thu, 16 Aug 2012 15:56:19 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-10.tower-182.messagelabs.com!1345132576!28756440!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjQyODA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31623 invoked from network); 16 Aug 2012 15:56:17 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 15:56:17 -0000
X-IronPort-AV: E=Sophos;i="4.77,778,1336363200"; d="scan'208";a="34872725"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 11:56:16 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 16 Aug 2012 11:56:15 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1T227b-0007Iu-FS;
	Thu, 16 Aug 2012 16:36:31 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <linux-kernel@vger.kernel.org>
Date: Thu, 16 Aug 2012 16:36:08 +0100
Message-ID: <1345131377-14713-16-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208161618550.4850@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208161618550.4850@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, konrad.wilk@oracle.com,
	catalin.marinas@arm.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	tim@xen.org, linux-arm-kernel@lists.infradead.org
Subject: [Xen-devel] [PATCH v3 16/25] xen: clear IRQ_NOAUTOEN and
	IRQ_NOREQUEST
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Reset the IRQ_NOAUTOEN and IRQ_NOREQUEST flags, which are enabled by
default on ARM. If IRQ_NOAUTOEN is set, __setup_irq doesn't call
irq_startup, which is responsible for calling irq_unmask at startup time.
As a result, event channels remain masked.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 drivers/xen/events.c |    1 +
 1 files changed, 1 insertions(+), 0 deletions(-)

diff --git a/drivers/xen/events.c b/drivers/xen/events.c
index c3516d3..9b506b2 100644
--- a/drivers/xen/events.c
+++ b/drivers/xen/events.c
@@ -839,6 +839,7 @@ int bind_evtchn_to_irq(unsigned int evtchn)
 		struct irq_info *info = info_for_irq(irq);
 		WARN_ON(info == NULL || info->type != IRQT_EVTCHN);
 	}
+	irq_clear_status_flags(irq, IRQ_NOREQUEST|IRQ_NOAUTOEN);
 
 out:
 	mutex_unlock(&irq_mapping_update_lock);
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 15:56:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 15:56:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T22Qs-0001bj-UA; Thu, 16 Aug 2012 15:56:26 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T22Qr-0001aT-BE
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 15:56:25 +0000
Received: from [85.158.139.83:30342] by server-1.bemta-5.messagelabs.com id
	AE/F0-09980-8281D205; Thu, 16 Aug 2012 15:56:24 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-10.tower-182.messagelabs.com!1345132576!28756440!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjQyODA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32035 invoked from network); 16 Aug 2012 15:56:22 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 15:56:22 -0000
X-IronPort-AV: E=Sophos;i="4.77,778,1336363200"; d="scan'208";a="34872736"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 11:56:22 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 16 Aug 2012 11:56:21 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1T227b-0007Iu-D2;
	Thu, 16 Aug 2012 16:36:31 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <linux-kernel@vger.kernel.org>
Date: Thu, 16 Aug 2012 16:36:06 +0100
Message-ID: <1345131377-14713-14-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208161618550.4850@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208161618550.4850@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, konrad.wilk@oracle.com,
	catalin.marinas@arm.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	tim@xen.org, linux-arm-kernel@lists.infradead.org
Subject: [Xen-devel] [PATCH v3 14/25] xen/arm: initialize grant_table on ARM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Initialize the grant table mapping at the address specified by the first
(index 0) region of the /xen node in the device tree.
After the grant table is initialized, call xenbus_probe (unless running
as dom0).
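
For reference, a hypothetical /xen node of the shape this code expects;
the first (index 0) reg region is the one resolved via
of_address_to_resource() for the grant table. All addresses and cell
values here are illustrative only; the authoritative format is in
Documentation/devicetree/bindings/arm/xen.txt.

```dts
/* Hypothetical guest DT fragment, values illustrative only. */
xen {
	compatible = "xen,xen-4.2", "xen,xen";
	/* index 0: physical region reserved for grant table frames */
	reg = <0xb0000000 0x20000>;
	/* event-channel notification interrupt */
	interrupts = <1 15 0xf08>;
};
```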

Changes in v2:

- introduce GRANT_TABLE_PHYSADDR;
- remove unneeded initialization of boot_max_nr_grant_frames.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 arch/arm/xen/enlighten.c |   14 ++++++++++++++
 1 files changed, 14 insertions(+), 0 deletions(-)

diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
index e529662..2e65e8d 100644
--- a/arch/arm/xen/enlighten.c
+++ b/arch/arm/xen/enlighten.c
@@ -1,8 +1,12 @@
 #include <xen/xen.h>
+#include <xen/grant_table.h>
+#include <xen/hvm.h>
 #include <xen/interface/xen.h>
 #include <xen/interface/memory.h>
+#include <xen/interface/hvm/params.h>
 #include <xen/features.h>
 #include <xen/platform_pci.h>
+#include <xen/xenbus.h>
 #include <asm/xen/hypervisor.h>
 #include <asm/xen/hypercall.h>
 #include <linux/module.h>
@@ -42,6 +46,7 @@ EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_range);
  * see Documentation/devicetree/bindings/arm/xen.txt for the
  * documentation of the Xen Device Tree format.
  */
+#define GRANT_TABLE_PHYSADDR 0
 static int __init xen_guest_init(void)
 {
 	struct xen_add_to_physmap xatp;
@@ -51,6 +56,7 @@ static int __init xen_guest_init(void)
 	const char *s = NULL;
 	const char *version = NULL;
 	const char *xen_prefix = "xen,xen-";
+	struct resource res;
 
 	node = of_find_compatible_node(NULL, NULL, "xen,xen");
 	if (!node) {
@@ -65,6 +71,9 @@ static int __init xen_guest_init(void)
 		pr_debug("Xen version not found\n");
 		return 0;
 	}
+	if (of_address_to_resource(node, GRANT_TABLE_PHYSADDR, &res))
+		return 0;
+	xen_hvm_resume_frames = res.start >> PAGE_SHIFT;
 	xen_domain_type = XEN_HVM_DOMAIN;
 
 	xen_setup_features();
@@ -98,6 +107,11 @@ static int __init xen_guest_init(void)
 	 * is required to use VCPUOP_register_vcpu_info to place vcpu info
 	 * for secondary CPUs as they are brought up. */
 	per_cpu(xen_vcpu, 0) = &HYPERVISOR_shared_info->vcpu_info[0];
+
+	gnttab_init();
+	if (!xen_initial_domain())
+		xenbus_probe(NULL);
+
 	return 0;
 }
 core_initcall(xen_guest_init);
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 15:56:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 15:56:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T22Qy-0001eL-Co; Thu, 16 Aug 2012 15:56:32 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T22Qx-0001dj-Kn
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 15:56:31 +0000
Received: from [85.158.139.83:43842] by server-3.bemta-5.messagelabs.com id
	DB/09-27237-E281D205; Thu, 16 Aug 2012 15:56:30 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-5.tower-182.messagelabs.com!1345132588!28547563!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjQyODA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17036 invoked from network); 16 Aug 2012 15:56:29 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 15:56:29 -0000
X-IronPort-AV: E=Sophos;i="4.77,778,1336363200"; d="scan'208";a="34872752"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 11:56:27 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 16 Aug 2012 11:56:27 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1T227b-0007Iu-G1;
	Thu, 16 Aug 2012 16:36:31 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <linux-kernel@vger.kernel.org>
Date: Thu, 16 Aug 2012 16:36:09 +0100
Message-ID: <1345131377-14713-17-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208161618550.4850@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208161618550.4850@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, konrad.wilk@oracle.com,
	catalin.marinas@arm.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	tim@xen.org, linux-arm-kernel@lists.infradead.org
Subject: [Xen-devel] [PATCH v3 17/25] xen/arm: implement
	alloc/free_xenballooned_pages with alloc_pages/kfree
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Only until we get the balloon driver to work.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/arm/xen/enlighten.c |   18 ++++++++++++++++++
 1 files changed, 18 insertions(+), 0 deletions(-)

diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
index 14af29d..3348d90 100644
--- a/arch/arm/xen/enlighten.c
+++ b/arch/arm/xen/enlighten.c
@@ -148,3 +148,21 @@ static int __init xen_init_events(void)
 	return 0;
 }
 postcore_initcall(xen_init_events);
+
+/* XXX: only until balloon is properly working */
+int alloc_xenballooned_pages(int nr_pages, struct page **pages, bool highmem)
+{
+	*pages = alloc_pages(highmem ? GFP_HIGHUSER : GFP_KERNEL,
+			get_order(nr_pages));
+	if (*pages == NULL)
+		return -ENOMEM;
+	return 0;
+}
+EXPORT_SYMBOL_GPL(alloc_xenballooned_pages);
+
+void free_xenballooned_pages(int nr_pages, struct page **pages)
+{
+	kfree(*pages);
+	*pages = NULL;
+}
+EXPORT_SYMBOL_GPL(free_xenballooned_pages);
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
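[Archive note: the hunk above allocates with alloc_pages(..., get_order(nr_pages)). Page allocators hand out power-of-two blocks of pages, so a request is rounded up to an "order". The sketch below is a minimal userspace illustration of that rounding only; `pages_to_order` is a made-up name, and the kernel's real get_order() takes a size in bytes, not a page count.]

```c
#include <assert.h>

/* Illustrative only: smallest order such that a block of (1 << order)
 * pages covers nr_pages pages. This mirrors how a buddy-style page
 * allocator rounds a request up to a power of two. */
static int pages_to_order(int nr_pages)
{
	int order = 0;

	while ((1 << order) < nr_pages)
		order++;
	return order;
}
```

A request for 3 pages is therefore served from an order-2 (4-page) block; anything between 5 and 8 pages comes from an order-3 block.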

From xen-devel-bounces@lists.xen.org Thu Aug 16 15:56:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 15:56:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T22R4-0001ht-Qi; Thu, 16 Aug 2012 15:56:38 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T22R3-0001gj-Jh
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 15:56:37 +0000
Received: from [85.158.138.51:14018] by server-7.bemta-3.messagelabs.com id
	CA/31-01906-4381D205; Thu, 16 Aug 2012 15:56:36 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-14.tower-174.messagelabs.com!1345132594!22311969!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzE3OTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24698 invoked from network); 16 Aug 2012 15:56:36 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 15:56:36 -0000
X-IronPort-AV: E=Sophos;i="4.77,778,1336363200"; d="scan'208";a="205388649"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 11:56:34 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 16 Aug 2012 11:56:33 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1T227b-0007Iu-Iq;
	Thu, 16 Aug 2012 16:36:31 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <linux-kernel@vger.kernel.org>
Date: Thu, 16 Aug 2012 16:36:14 +0100
Message-ID: <1345131377-14713-22-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208161618550.4850@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208161618550.4850@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: Dave Martin <dave.martin@linaro.org>, xen-devel@lists.xensource.com,
	linaro-dev@lists.linaro.org, Ian.Campbell@citrix.com,
	arnd@arndb.de, konrad.wilk@oracle.com, catalin.marinas@arm.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	tim@xen.org, linux-arm-kernel@lists.infradead.org
Subject: [Xen-devel] [PATCH v3 22/25] xen/arm: use the __HVC macro
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Note: This patch depends on Dave Martin's patch series "ARM: opcodes:
Facilitate custom opcode injection".

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
CC: Dave Martin <dave.martin@linaro.org>
---
 arch/arm/xen/hypercall.S |   14 +++++---------
 1 files changed, 5 insertions(+), 9 deletions(-)

diff --git a/arch/arm/xen/hypercall.S b/arch/arm/xen/hypercall.S
index 074f5ed..71f7239 100644
--- a/arch/arm/xen/hypercall.S
+++ b/arch/arm/xen/hypercall.S
@@ -48,20 +48,16 @@
 
 #include <linux/linkage.h>
 #include <asm/assembler.h>
+#include <asm/opcodes-virt.h>
 #include <xen/interface/xen.h>
 
 
-/* HVC 0xEA1 */
-#ifdef CONFIG_THUMB2_KERNEL
-#define xen_hvc .word 0xf7e08ea1
-#else
-#define xen_hvc .word 0xe140ea71
-#endif
+#define XEN_IMM 0xEA1
 
 #define HYPERCALL_SIMPLE(hypercall)		\
 ENTRY(HYPERVISOR_##hypercall)			\
 	mov r12, #__HYPERVISOR_##hypercall;	\
-	xen_hvc;							\
+	__HVC(XEN_IMM);						\
 	mov pc, lr;							\
 ENDPROC(HYPERVISOR_##hypercall)
 
@@ -76,7 +72,7 @@ ENTRY(HYPERVISOR_##hypercall)			\
 	stmdb sp!, {r4}						\
 	ldr r4, [sp, #4]					\
 	mov r12, #__HYPERVISOR_##hypercall;	\
-	xen_hvc								\
+	__HVC(XEN_IMM);						\
 	ldm sp!, {r4}						\
 	mov pc, lr							\
 ENDPROC(HYPERVISOR_##hypercall)
@@ -100,7 +96,7 @@ ENTRY(privcmd_call)
 	mov r2, r3
 	ldr r3, [sp, #8]
 	ldr r4, [sp, #4]
-	xen_hvc
+	__HVC(XEN_IMM)
 	ldm sp!, {r4}
 	mov pc, lr
 ENDPROC(privcmd_call);
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
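[Archive note: the patch above drops the two hand-assembled `.word` constants (0xf7e08ea1 for Thumb-2, 0xe140ea71 for ARM) in favour of the generic __HVC() macro. Those constants can be reconstructed from the architectural encodings of HVC #imm16; the userspace sketch below does exactly that, as a sanity check. The encoder function names are invented for illustration.]

```c
#include <assert.h>
#include <stdint.h>

/* ARM-mode HVC #imm16, A1 encoding:
 * cond=AL(0b1110) | 0b00010100 | imm12 | 0b0111 | imm4,
 * where imm16 = imm12:imm4. */
static uint32_t arm_hvc(uint16_t imm16)
{
	uint32_t imm12 = (imm16 >> 4) & 0xfff;
	uint32_t imm4  = imm16 & 0xf;

	return 0xe1400070u | (imm12 << 8) | imm4;
}

/* Thumb-2 HVC #imm16, T1 encoding:
 * 0b111101111110 | imm4 | 0b1000 | imm12,
 * where imm16 = imm4:imm12. */
static uint32_t thumb2_hvc(uint16_t imm16)
{
	uint32_t imm4  = (imm16 >> 12) & 0xf;
	uint32_t imm12 = imm16 & 0xfff;

	return 0xf7e08000u | (imm4 << 16) | imm12;
}
```

Feeding in Xen's immediate, 0xEA1, reproduces both of the constants the patch deletes, which is what __HVC(XEN_IMM) emits for each instruction set.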

From xen-devel-bounces@lists.xen.org Thu Aug 16 15:56:47 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 15:56:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T22RA-0001lr-95; Thu, 16 Aug 2012 15:56:44 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T22R8-0001jt-8Q
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 15:56:42 +0000
Received: from [85.158.143.99:53440] by server-2.bemta-4.messagelabs.com id
	51/03-31966-9381D205; Thu, 16 Aug 2012 15:56:41 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-8.tower-216.messagelabs.com!1345132599!18767641!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjQyODA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26931 invoked from network); 16 Aug 2012 15:56:41 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 15:56:41 -0000
X-IronPort-AV: E=Sophos;i="4.77,778,1336363200"; d="scan'208";a="34872775"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 11:56:39 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 16 Aug 2012 11:56:39 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1T227b-0007Iu-Gb;
	Thu, 16 Aug 2012 16:36:31 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <linux-kernel@vger.kernel.org>
Date: Thu, 16 Aug 2012 16:36:10 +0100
Message-ID: <1345131377-14713-18-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208161618550.4850@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208161618550.4850@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, konrad.wilk@oracle.com,
	catalin.marinas@arm.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	tim@xen.org, linux-arm-kernel@lists.infradead.org
Subject: [Xen-devel] [PATCH v3 18/25] xen: allow privcmd for HVM guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch removes the "return -ENOSYS" for auto_translated_physmap
guests from privcmd_mmap, thus it allows ARM guests to issue privcmd
mmap calls. However privcmd mmap calls are still going to fail for HVM
and hybrid guests on x86 because the xen_remap_domain_mfn_range
implementation is currently PV only.

Changes in v2:

- better commit message;
- return -EINVAL from xen_remap_domain_mfn_range if
  auto_translated_physmap.


Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/xen/mmu.c    |    3 +++
 drivers/xen/privcmd.c |    4 ----
 2 files changed, 3 insertions(+), 4 deletions(-)

diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index 3a73785..885a223 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -2310,6 +2310,9 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
 	unsigned long range;
 	int err = 0;
 
+	if (xen_feature(XENFEAT_auto_translated_physmap))
+		return -EINVAL;
+
 	prot = __pgprot(pgprot_val(prot) | _PAGE_IOMAP);
 
 	BUG_ON(!((vma->vm_flags & (VM_PFNMAP | VM_RESERVED | VM_IO)) ==
diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
index ccee0f1..85226cb 100644
--- a/drivers/xen/privcmd.c
+++ b/drivers/xen/privcmd.c
@@ -380,10 +380,6 @@ static struct vm_operations_struct privcmd_vm_ops = {
 
 static int privcmd_mmap(struct file *file, struct vm_area_struct *vma)
 {
-	/* Unsupported for auto-translate guests. */
-	if (xen_feature(XENFEAT_auto_translated_physmap))
-		return -ENOSYS;
-
 	/* DONTCOPY is essential for Xen because copy_page_range doesn't know
 	 * how to recreate these mappings */
 	vma->vm_flags |= VM_RESERVED | VM_IO | VM_DONTCOPY | VM_PFNMAP;
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 15:56:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 15:56:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T22RF-0001qH-Lp; Thu, 16 Aug 2012 15:56:49 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T22RE-0001os-GC
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 15:56:48 +0000
Received: from [85.158.138.51:28434] by server-2.bemta-3.messagelabs.com id
	8B/B3-17748-F381D205; Thu, 16 Aug 2012 15:56:47 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-7.tower-174.messagelabs.com!1345132605!19749182!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzE3OTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2943 invoked from network); 16 Aug 2012 15:56:47 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 15:56:47 -0000
X-IronPort-AV: E=Sophos;i="4.77,778,1336363200"; d="scan'208";a="205388675"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 11:56:45 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 16 Aug 2012 11:56:45 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1T227b-0007Iu-H9;
	Thu, 16 Aug 2012 16:36:31 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <linux-kernel@vger.kernel.org>
Date: Thu, 16 Aug 2012 16:36:11 +0100
Message-ID: <1345131377-14713-19-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208161618550.4850@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208161618550.4850@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, konrad.wilk@oracle.com,
	catalin.marinas@arm.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	tim@xen.org, linux-arm-kernel@lists.infradead.org
Subject: [Xen-devel] [PATCH v3 19/25] xen/arm: compile blkfront and blkback
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 drivers/block/xen-blkback/blkback.c  |    1 +
 include/xen/interface/io/protocols.h |    3 +++
 2 files changed, 4 insertions(+), 0 deletions(-)

diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
index 73f196c..63dd5b9 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -42,6 +42,7 @@
 
 #include <xen/events.h>
 #include <xen/page.h>
+#include <xen/xen.h>
 #include <asm/xen/hypervisor.h>
 #include <asm/xen/hypercall.h>
 #include "common.h"
diff --git a/include/xen/interface/io/protocols.h b/include/xen/interface/io/protocols.h
index 01fc8ae..0eafaf2 100644
--- a/include/xen/interface/io/protocols.h
+++ b/include/xen/interface/io/protocols.h
@@ -5,6 +5,7 @@
 #define XEN_IO_PROTO_ABI_X86_64     "x86_64-abi"
 #define XEN_IO_PROTO_ABI_IA64       "ia64-abi"
 #define XEN_IO_PROTO_ABI_POWERPC64  "powerpc64-abi"
+#define XEN_IO_PROTO_ABI_ARM        "arm-abi"
 
 #if defined(__i386__)
 # define XEN_IO_PROTO_ABI_NATIVE XEN_IO_PROTO_ABI_X86_32
@@ -14,6 +15,8 @@
 # define XEN_IO_PROTO_ABI_NATIVE XEN_IO_PROTO_ABI_IA64
 #elif defined(__powerpc64__)
 # define XEN_IO_PROTO_ABI_NATIVE XEN_IO_PROTO_ABI_POWERPC64
+#elif defined(__arm__)
+# define XEN_IO_PROTO_ABI_NATIVE XEN_IO_PROTO_ABI_ARM
 #else
 # error arch fixup needed here
 #endif
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 15:57:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 15:57:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T22RM-0001vO-2s; Thu, 16 Aug 2012 15:56:56 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T22RL-0001t1-3x
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 15:56:55 +0000
Received: from [85.158.138.51:15189] by server-11.bemta-3.messagelabs.com id
	D5/BC-23152-4481D205; Thu, 16 Aug 2012 15:56:52 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-7.tower-174.messagelabs.com!1345132605!19749182!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzE3OTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3191 invoked from network); 16 Aug 2012 15:56:52 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 15:56:52 -0000
X-IronPort-AV: E=Sophos;i="4.77,778,1336363200"; d="scan'208";a="205388689"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 11:56:51 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 16 Aug 2012 11:56:51 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1T227b-0007Iu-IH;
	Thu, 16 Aug 2012 16:36:31 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <linux-kernel@vger.kernel.org>
Date: Thu, 16 Aug 2012 16:36:13 +0100
Message-ID: <1345131377-14713-21-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208161618550.4850@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208161618550.4850@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, konrad.wilk@oracle.com,
	catalin.marinas@arm.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	tim@xen.org, Russell King <linux@arm.linux.org.uk>,
	linux-arm-kernel@lists.infradead.org
Subject: [Xen-devel] [PATCH v3 21/25] arm/v2m: initialize arch_timers even
	if v2m_timer is not present
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
CC: Russell King <linux@arm.linux.org.uk>
---
 arch/arm/mach-vexpress/v2m.c |   11 ++++++-----
 1 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/arch/arm/mach-vexpress/v2m.c b/arch/arm/mach-vexpress/v2m.c
index fde26ad..dee1451 100644
--- a/arch/arm/mach-vexpress/v2m.c
+++ b/arch/arm/mach-vexpress/v2m.c
@@ -637,16 +637,17 @@ static void __init v2m_dt_timer_init(void)
 	node = of_find_compatible_node(NULL, NULL, "arm,sp810");
 	v2m_sysctl_init(of_iomap(node, 0));
 
-	err = of_property_read_string(of_aliases, "arm,v2m_timer", &path);
-	if (WARN_ON(err))
-		return;
-	node = of_find_node_by_path(path);
-	v2m_sp804_init(of_iomap(node, 0), irq_of_parse_and_map(node, 0));
 	if (arch_timer_of_register() != 0)
 		twd_local_timer_of_register();
 
 	if (arch_timer_sched_clock_init() != 0)
 		versatile_sched_clock_init(v2m_sysreg_base + V2M_SYS_24MHZ, 24000000);
+
+	err = of_property_read_string(of_aliases, "arm,v2m_timer", &path);
+	if (WARN_ON(err))
+		return;
+	node = of_find_node_by_path(path);
+	v2m_sp804_init(of_iomap(node, 0), irq_of_parse_and_map(node, 0));
 }
 
 static struct sys_timer v2m_dt_timer = {
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 15:57:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 15:57:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T22RT-00021q-Mu; Thu, 16 Aug 2012 15:57:03 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T22RS-0001zz-0m
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 15:57:02 +0000
Received: from [85.158.139.83:35011] by server-2.bemta-5.messagelabs.com id
	71/F3-10142-D481D205; Thu, 16 Aug 2012 15:57:01 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-14.tower-182.messagelabs.com!1345132617!24246392!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjQyODA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22905 invoked from network); 16 Aug 2012 15:56:59 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 15:56:59 -0000
X-IronPort-AV: E=Sophos;i="4.77,778,1336363200"; d="scan'208";a="34872811"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 11:56:57 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 16 Aug 2012 11:56:57 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1T227b-0007Iu-Er;
	Thu, 16 Aug 2012 16:36:31 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <linux-kernel@vger.kernel.org>
Date: Thu, 16 Aug 2012 16:36:07 +0100
Message-ID: <1345131377-14713-15-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208161618550.4850@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208161618550.4850@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, konrad.wilk@oracle.com,
	catalin.marinas@arm.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	tim@xen.org, linux-arm-kernel@lists.infradead.org
Subject: [Xen-devel] [PATCH v3 15/25] xen/arm: receive Xen events on ARM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Compile events.c on ARM.
Parse, map and enable the IRQ to get event notifications from the device
tree (node "/xen").

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 arch/arm/include/asm/xen/events.h |   18 ++++++++++++++++++
 arch/arm/xen/enlighten.c          |   33 +++++++++++++++++++++++++++++++++
 arch/x86/xen/enlighten.c          |    1 +
 arch/x86/xen/irq.c                |    1 +
 arch/x86/xen/xen-ops.h            |    1 -
 drivers/xen/events.c              |   17 ++++++++++++++---
 include/xen/events.h              |    2 ++
 7 files changed, 69 insertions(+), 4 deletions(-)
 create mode 100644 arch/arm/include/asm/xen/events.h

diff --git a/arch/arm/include/asm/xen/events.h b/arch/arm/include/asm/xen/events.h
new file mode 100644
index 0000000..94b4e90
--- /dev/null
+++ b/arch/arm/include/asm/xen/events.h
@@ -0,0 +1,18 @@
+#ifndef _ASM_ARM_XEN_EVENTS_H
+#define _ASM_ARM_XEN_EVENTS_H
+
+#include <asm/ptrace.h>
+
+enum ipi_vector {
+	XEN_PLACEHOLDER_VECTOR,
+
+	/* Xen IPIs go here */
+	XEN_NR_IPIS,
+};
+
+static inline int xen_irqs_disabled(struct pt_regs *regs)
+{
+	return raw_irqs_disabled_flags(regs->ARM_cpsr);
+}
+
+#endif /* _ASM_ARM_XEN_EVENTS_H */
diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
index 2e65e8d..14af29d 100644
--- a/arch/arm/xen/enlighten.c
+++ b/arch/arm/xen/enlighten.c
@@ -1,4 +1,5 @@
 #include <xen/xen.h>
+#include <xen/events.h>
 #include <xen/grant_table.h>
 #include <xen/hvm.h>
 #include <xen/interface/xen.h>
@@ -9,6 +10,8 @@
 #include <xen/xenbus.h>
 #include <asm/xen/hypervisor.h>
 #include <asm/xen/hypercall.h>
+#include <linux/interrupt.h>
+#include <linux/irqreturn.h>
 #include <linux/module.h>
 #include <linux/of.h>
 #include <linux/of_irq.h>
@@ -33,6 +36,8 @@ EXPORT_SYMBOL_GPL(xen_have_vector_callback);
 int xen_platform_pci_unplug = XEN_UNPLUG_ALL;
 EXPORT_SYMBOL_GPL(xen_platform_pci_unplug);
 
+static __read_mostly int xen_events_irq = -1;
+
 int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
 			       unsigned long addr,
 			       unsigned long mfn, int nr,
@@ -74,6 +79,9 @@ static int __init xen_guest_init(void)
 	if (of_address_to_resource(node, GRANT_TABLE_PHYSADDR, &res))
 		return 0;
 	xen_hvm_resume_frames = res.start >> PAGE_SHIFT;
+	xen_events_irq = irq_of_parse_and_map(node, 0);
+	pr_info("Xen %s support found, events_irq=%d gnttab_frame_pfn=%lx\n",
+			version, xen_events_irq, xen_hvm_resume_frames);
 	xen_domain_type = XEN_HVM_DOMAIN;
 
 	xen_setup_features();
@@ -115,3 +123,28 @@ static int __init xen_guest_init(void)
 	return 0;
 }
 core_initcall(xen_guest_init);
+
+static irqreturn_t xen_arm_callback(int irq, void *arg)
+{
+	xen_hvm_evtchn_do_upcall();
+	return IRQ_HANDLED;
+}
+
+static int __init xen_init_events(void)
+{
+	if (!xen_domain() || xen_events_irq < 0)
+		return -ENODEV;
+
+	xen_init_IRQ();
+
+	if (request_percpu_irq(xen_events_irq, xen_arm_callback,
+			"events", xen_vcpu)) {
+		pr_err("Error requesting IRQ %d\n", xen_events_irq);
+		return -EINVAL;
+	}
+
+	enable_percpu_irq(xen_events_irq, 0);
+
+	return 0;
+}
+postcore_initcall(xen_init_events);
diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index ff962d4..9f8b0ef 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -33,6 +33,7 @@
 #include <linux/memblock.h>
 
 #include <xen/xen.h>
+#include <xen/events.h>
 #include <xen/interface/xen.h>
 #include <xen/interface/version.h>
 #include <xen/interface/physdev.h>
diff --git a/arch/x86/xen/irq.c b/arch/x86/xen/irq.c
index 1573376..01a4dc0 100644
--- a/arch/x86/xen/irq.c
+++ b/arch/x86/xen/irq.c
@@ -5,6 +5,7 @@
 #include <xen/interface/xen.h>
 #include <xen/interface/sched.h>
 #include <xen/interface/vcpu.h>
+#include <xen/events.h>
 
 #include <asm/xen/hypercall.h>
 #include <asm/xen/hypervisor.h>
diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h
index 202d4c1..2368295 100644
--- a/arch/x86/xen/xen-ops.h
+++ b/arch/x86/xen/xen-ops.h
@@ -35,7 +35,6 @@ void xen_set_pat(u64);
 
 char * __init xen_memory_setup(void);
 void __init xen_arch_setup(void);
-void __init xen_init_IRQ(void);
 void xen_enable_sysenter(void);
 void xen_enable_syscall(void);
 void xen_vcpu_restore(void);
diff --git a/drivers/xen/events.c b/drivers/xen/events.c
index 7da65d3..c3516d3 100644
--- a/drivers/xen/events.c
+++ b/drivers/xen/events.c
@@ -31,14 +31,16 @@
 #include <linux/irqnr.h>
 #include <linux/pci.h>
 
+#ifdef CONFIG_X86
 #include <asm/desc.h>
 #include <asm/ptrace.h>
 #include <asm/irq.h>
 #include <asm/idle.h>
 #include <asm/io_apic.h>
-#include <asm/sync_bitops.h>
 #include <asm/xen/page.h>
 #include <asm/xen/pci.h>
+#endif
+#include <asm/sync_bitops.h>
 #include <asm/xen/hypercall.h>
 #include <asm/xen/hypervisor.h>
 
@@ -50,6 +52,9 @@
 #include <xen/interface/event_channel.h>
 #include <xen/interface/hvm/hvm_op.h>
 #include <xen/interface/hvm/params.h>
+#include <xen/interface/physdev.h>
+#include <xen/interface/sched.h>
+#include <asm/hw_irq.h>
 
 /*
  * This lock protects updates to the following mapping and reference-count
@@ -1377,7 +1382,9 @@ void xen_evtchn_do_upcall(struct pt_regs *regs)
 {
 	struct pt_regs *old_regs = set_irq_regs(regs);
 
+#ifdef CONFIG_X86
 	exit_idle();
+#endif
 	irq_enter();
 
 	__xen_evtchn_do_upcall();
@@ -1786,9 +1793,9 @@ void xen_callback_vector(void)
 void xen_callback_vector(void) {}
 #endif
 
-void __init xen_init_IRQ(void)
+void xen_init_IRQ(void)
 {
-	int i, rc;
+	int i;
 
 	evtchn_to_irq = kcalloc(NR_EVENT_CHANNELS, sizeof(*evtchn_to_irq),
 				    GFP_KERNEL);
@@ -1804,6 +1811,7 @@ void __init xen_init_IRQ(void)
 
 	pirq_needs_eoi = pirq_needs_eoi_flag;
 
+#ifdef CONFIG_X86
 	if (xen_hvm_domain()) {
 		xen_callback_vector();
 		native_init_IRQ();
@@ -1811,6 +1819,7 @@ void __init xen_init_IRQ(void)
 		 * __acpi_register_gsi can point at the right function */
 		pci_xen_hvm_init();
 	} else {
+		int rc;
 		struct physdev_pirq_eoi_gmfn eoi_gmfn;
 
 		irq_ctx_init(smp_processor_id());
@@ -1826,4 +1835,6 @@ void __init xen_init_IRQ(void)
 		} else
 			pirq_needs_eoi = pirq_check_eoi_map;
 	}
+#endif
 }
+EXPORT_SYMBOL_GPL(xen_init_IRQ);
diff --git a/include/xen/events.h b/include/xen/events.h
index 04399b2..c6bfe01 100644
--- a/include/xen/events.h
+++ b/include/xen/events.h
@@ -109,4 +109,6 @@ int xen_irq_from_gsi(unsigned gsi);
 /* Determine whether to ignore this IRQ if it is passed to a guest. */
 int xen_test_irq_shared(int irq);
 
+/* initialize Xen IRQ subsystem */
+void xen_init_IRQ(void);
 #endif	/* _XEN_EVENTS_H */
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

+#include <xen/interface/physdev.h>
+#include <xen/interface/sched.h>
+#include <asm/hw_irq.h>
 
 /*
  * This lock protects updates to the following mapping and reference-count
@@ -1377,7 +1382,9 @@ void xen_evtchn_do_upcall(struct pt_regs *regs)
 {
 	struct pt_regs *old_regs = set_irq_regs(regs);
 
+#ifdef CONFIG_X86
 	exit_idle();
+#endif
 	irq_enter();
 
 	__xen_evtchn_do_upcall();
@@ -1786,9 +1793,9 @@ void xen_callback_vector(void)
 void xen_callback_vector(void) {}
 #endif
 
-void __init xen_init_IRQ(void)
+void xen_init_IRQ(void)
 {
-	int i, rc;
+	int i;
 
 	evtchn_to_irq = kcalloc(NR_EVENT_CHANNELS, sizeof(*evtchn_to_irq),
 				    GFP_KERNEL);
@@ -1804,6 +1811,7 @@ void __init xen_init_IRQ(void)
 
 	pirq_needs_eoi = pirq_needs_eoi_flag;
 
+#ifdef CONFIG_X86
 	if (xen_hvm_domain()) {
 		xen_callback_vector();
 		native_init_IRQ();
@@ -1811,6 +1819,7 @@ void __init xen_init_IRQ(void)
 		 * __acpi_register_gsi can point at the right function */
 		pci_xen_hvm_init();
 	} else {
+		int rc;
 		struct physdev_pirq_eoi_gmfn eoi_gmfn;
 
 		irq_ctx_init(smp_processor_id());
@@ -1826,4 +1835,6 @@ void __init xen_init_IRQ(void)
 		} else
 			pirq_needs_eoi = pirq_check_eoi_map;
 	}
+#endif
 }
+EXPORT_SYMBOL_GPL(xen_init_IRQ);
diff --git a/include/xen/events.h b/include/xen/events.h
index 04399b2..c6bfe01 100644
--- a/include/xen/events.h
+++ b/include/xen/events.h
@@ -109,4 +109,6 @@ int xen_irq_from_gsi(unsigned gsi);
 /* Determine whether to ignore this IRQ if it is passed to a guest. */
 int xen_test_irq_shared(int irq);
 
+/* initialize Xen IRQ subsystem */
+void xen_init_IRQ(void);
 #endif	/* _XEN_EVENTS_H */
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 15:57:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 15:57:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T22RX-00025E-5E; Thu, 16 Aug 2012 15:57:07 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T22RV-00023d-V3
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 15:57:06 +0000
Received: from [85.158.139.83:35314] by server-4.bemta-5.messagelabs.com id
	A0/F1-12386-1581D205; Thu, 16 Aug 2012 15:57:05 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-14.tower-182.messagelabs.com!1345132617!24246392!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjQyODA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23476 invoked from network); 16 Aug 2012 15:57:04 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 15:57:04 -0000
X-IronPort-AV: E=Sophos;i="4.77,778,1336363200"; d="scan'208";a="34872819"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 11:57:04 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 16 Aug 2012 11:57:03 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1T227b-0007Iu-Bq;
	Thu, 16 Aug 2012 16:36:31 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <linux-kernel@vger.kernel.org>
Date: Thu, 16 Aug 2012 16:36:04 +0100
Message-ID: <1345131377-14713-12-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208161618550.4850@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208161618550.4850@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, konrad.wilk@oracle.com,
	catalin.marinas@arm.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	tim@xen.org, linux-arm-kernel@lists.infradead.org
Subject: [Xen-devel] [PATCH v3 12/25] xen/arm: introduce CONFIG_XEN on ARM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Changes in v2:

- mark Xen guest support on ARM as EXPERIMENTAL.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/arm/Kconfig |   10 ++++++++++
 1 files changed, 10 insertions(+), 0 deletions(-)

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index a91009c..f14664b 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -1855,6 +1855,16 @@ config DEPRECATED_PARAM_STRUCT
 	  This was deprecated in 2001 and announced to live on for 5 years.
 	  Some old boot loaders still use this way.
 
+config XEN_DOM0
+	def_bool y
+
+config XEN
+	bool "Xen guest support on ARM (EXPERIMENTAL)"
+	depends on EXPERIMENTAL && ARM && OF
+	select XEN_DOM0
+	help
+	  Say Y if you want to run Linux in a Virtual Machine on Xen on ARM.
+
 endmenu
 
 menu "Boot options"
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 15:57:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 15:57:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T22Rd-0002Co-IN; Thu, 16 Aug 2012 15:57:13 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T22Rc-0002BR-NF
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 15:57:12 +0000
Received: from [85.158.139.83:35782] by server-3.bemta-5.messagelabs.com id
	BC/5A-27237-7581D205; Thu, 16 Aug 2012 15:57:11 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-6.tower-182.messagelabs.com!1345132629!24639669!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzE3OTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5738 invoked from network); 16 Aug 2012 15:57:11 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 15:57:11 -0000
X-IronPort-AV: E=Sophos;i="4.77,778,1336363200"; d="scan'208";a="205388723"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 11:57:09 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 16 Aug 2012 11:57:08 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1T227b-0007Iu-CS;
	Thu, 16 Aug 2012 16:36:31 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <linux-kernel@vger.kernel.org>
Date: Thu, 16 Aug 2012 16:36:05 +0100
Message-ID: <1345131377-14713-13-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208161618550.4850@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208161618550.4850@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, konrad.wilk@oracle.com,
	catalin.marinas@arm.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	tim@xen.org, linux-arm-kernel@lists.infradead.org
Subject: [Xen-devel] [PATCH v3 13/25] xen/arm: get privilege status
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Use Xen features to figure out if we are privileged.

XENFEAT_dom0 was introduced by 23735 in xen-unstable.hg.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/arm/xen/enlighten.c         |    7 +++++++
 include/xen/interface/features.h |    3 +++
 2 files changed, 10 insertions(+), 0 deletions(-)

diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
index 78ea1f7..e529662 100644
--- a/arch/arm/xen/enlighten.c
+++ b/arch/arm/xen/enlighten.c
@@ -1,6 +1,7 @@
 #include <xen/xen.h>
 #include <xen/interface/xen.h>
 #include <xen/interface/memory.h>
+#include <xen/features.h>
 #include <xen/platform_pci.h>
 #include <asm/xen/hypervisor.h>
 #include <asm/xen/hypercall.h>
@@ -66,6 +67,12 @@ static int __init xen_guest_init(void)
 	}
 	xen_domain_type = XEN_HVM_DOMAIN;
 
+	xen_setup_features();
+	if (xen_feature(XENFEAT_dom0))
+		xen_start_info->flags |= SIF_INITDOMAIN|SIF_PRIVILEGED;
+	else
+		xen_start_info->flags &= ~(SIF_INITDOMAIN|SIF_PRIVILEGED);
+
 	if (!shared_info_page)
 		shared_info_page = (struct shared_info *)
 			get_zeroed_page(GFP_KERNEL);
diff --git a/include/xen/interface/features.h b/include/xen/interface/features.h
index b6ca39a..131a6cc 100644
--- a/include/xen/interface/features.h
+++ b/include/xen/interface/features.h
@@ -50,6 +50,9 @@
 /* x86: pirq can be used by HVM guests */
 #define XENFEAT_hvm_pirqs           10
 
+/* operation as Dom0 is supported */
+#define XENFEAT_dom0                      11
+
 #define XENFEAT_NR_SUBMAPS 1
 
 #endif /* __XEN_PUBLIC_FEATURES_H__ */
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 15:58:22 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 15:58:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T22Si-0002vi-17; Thu, 16 Aug 2012 15:58:20 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T22Sg-0002ug-HR
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 15:58:18 +0000
Received: from [85.158.138.51:37384] by server-8.bemta-3.messagelabs.com id
	A9/A6-29583-9981D205; Thu, 16 Aug 2012 15:58:17 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-13.tower-174.messagelabs.com!1345132696!9968190!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27635 invoked from network); 16 Aug 2012 15:58:17 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-13.tower-174.messagelabs.com with SMTP;
	16 Aug 2012 15:58:17 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 16 Aug 2012 16:58:16 +0100
Message-Id: <502D34E00200007800095977@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Thu, 16 Aug 2012 16:58:56 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Keir Fraser" <keir@xen.org>
References: <502D2C3A0200007800095918@nat28.tlf.novell.com>
	<CC52D041.4973A%keir@xen.org>
In-Reply-To: <CC52D041.4973A%keir@xen.org>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] x86/ucode: don't crash during AP bringup on
 non-Intel, non-AMD CPUs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 16.08.12 at 17:28, Keir Fraser <keir@xen.org> wrote:
> On 16/08/2012 16:22, "Jan Beulich" <JBeulich@suse.com> wrote:
> 
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Do we support any such processors? Newer VIA processors maybe?

Exactly those - they were kind enough to lend me a system. I have
a patch queued for post-4.2 to enable VMX and a few other
vendor-specific things on them, but of course a prerequisite for
testing that was that the system would boot at all under Xen.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 16:00:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 16:00:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T22UR-0003s2-Kw; Thu, 16 Aug 2012 16:00:07 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T22UQ-0003rZ-Md
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 16:00:06 +0000
Received: from [85.158.143.35:34950] by server-2.bemta-4.messagelabs.com id
	59/67-31966-5091D205; Thu, 16 Aug 2012 16:00:05 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1345132803!13836050!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjk0MzAx\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26182 invoked from network); 16 Aug 2012 16:00:05 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-15.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 16 Aug 2012 16:00:05 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7GG02FR025482
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 16 Aug 2012 16:00:03 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7GG01aj005794
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 16 Aug 2012 16:00:02 GMT
Received: from abhmt117.oracle.com (abhmt117.oracle.com [141.146.116.69])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7GG01Co028659; Thu, 16 Aug 2012 11:00:01 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 16 Aug 2012 09:00:01 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id F0B19402EB; Thu, 16 Aug 2012 11:50:14 -0400 (EDT)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xensource.com
Date: Thu, 16 Aug 2012 11:50:12 -0400
Message-Id: <1345132214-15298-1-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.7.7.6
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Subject: [Xen-devel] [PATCH] Fixes for v3.6 (v1).
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I sent a git pull request to Linus two days ago, and then realized
yesterday that we still have two more bugs that should be fixed.

So these are the two tiny fixes that I am going to propose for v3.6.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 16:00:12 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 16:00:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T22US-0003sK-1V; Thu, 16 Aug 2012 16:00:08 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T22UR-0003rZ-2Q
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 16:00:07 +0000
Received: from [85.158.143.99:11145] by server-2.bemta-4.messagelabs.com id
	1A/67-31966-6091D205; Thu, 16 Aug 2012 16:00:06 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-8.tower-216.messagelabs.com!1345132803!18768258!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjk0MzAx\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13438 invoked from network); 16 Aug 2012 16:00:05 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-8.tower-216.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 16 Aug 2012 16:00:05 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7GG02si025483
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 16 Aug 2012 16:00:03 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7GG014b011505
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 16 Aug 2012 16:00:02 GMT
Received: from abhmt108.oracle.com (abhmt108.oracle.com [141.146.116.60])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7GG01xv001110; Thu, 16 Aug 2012 11:00:01 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 16 Aug 2012 09:00:01 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 0076C402C0; Thu, 16 Aug 2012 11:50:14 -0400 (EDT)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xensource.com
Date: Thu, 16 Aug 2012 11:50:13 -0400
Message-Id: <1345132214-15298-2-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.7.7.6
In-Reply-To: <1345132214-15298-1-git-send-email-konrad.wilk@oracle.com>
References: <1345132214-15298-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH 1/2] xen/p2m: Fix for 32-bit builds the "Reserve
	8MB of _brk space for P2M"
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The git commit 5bc6f9888db5739abfa0cae279b4b442e4db8049
xen/p2m: Reserve 8MB of _brk space for P2M leafs when populating back.

extended the _brk space to fit 1048576 PFNs. The math is that each
P2M leaf can cover PAGE_SIZE/sizeof(unsigned long) PFNs. On 64-bit
that means 512 PFNs per leaf; on 32-bit it is 1024. If on 64-bit
machines we want to cover 4GB worth of PFNs, we need enough space
to fit 1048576 unsigned longs.

On 64-bit:
1048576 * sizeof(unsigned long) (8) bytes = 8MB

On 32-bit:
1048576 * sizeof(unsigned long) (4) bytes = 4MB

We fix that by using the above-mentioned math instead of the
predefined PMD_SIZE.
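
For reference, the arithmetic above can be checked with a small
standalone sketch (PAGE_SIZE assumed to be 4096 and the word size
passed in explicitly, so both the 64-bit and 32-bit cases can be
evaluated on one host; these helpers are illustrative, not kernel
code):

```c
#include <stddef.h>

#define PFNS_TO_COVER 1048576UL  /* enough P2M leaf entries for 4GB of PFNs */
#define ASSUMED_PAGE_SIZE 4096UL /* assumption: 4K pages */

/* PFNs covered by one P2M leaf page: PAGE_SIZE / sizeof(unsigned long) */
static unsigned long pfns_per_leaf(unsigned long ulong_size)
{
	return ASSUMED_PAGE_SIZE / ulong_size;
}

/* Bytes of _brk space needed to hold PFNS_TO_COVER entries */
static unsigned long p2m_brk_bytes(unsigned long ulong_size)
{
	return PFNS_TO_COVER * ulong_size;
}
```

With ulong_size = 8 (64-bit) this yields 512 PFNs per leaf and an
8MB reservation; with ulong_size = 4 (32-bit), 1024 PFNs per leaf
and 4MB, matching the figures above.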

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/xen/p2m.c |    3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
index b2e91d4..626c979 100644
--- a/arch/x86/xen/p2m.c
+++ b/arch/x86/xen/p2m.c
@@ -198,7 +198,8 @@ RESERVE_BRK(p2m_mid_identity, PAGE_SIZE * 2 * 3);
  * max we have is seen is 395979, but that does not mean it can't be more.
  * But some machines can have 3GB I/O holes even. So lets reserve enough
  * for 4GB of I/O and E820 holes. */
-RESERVE_BRK(p2m_populated, PMD_SIZE * 4);
+RESERVE_BRK(p2m_populated, 1048576 * sizeof(unsigned long));
+
 static inline unsigned p2m_top_index(unsigned long pfn)
 {
 	BUG_ON(pfn >= MAX_P2M_PFN);
-- 
1.7.7.6


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 16:00:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 16:00:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T22UV-0003tm-GH; Thu, 16 Aug 2012 16:00:11 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T22UT-0003sg-B3
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 16:00:09 +0000
Received: from [85.158.139.83:20181] by server-10.bemta-5.messagelabs.com id
	32/F5-13125-8091D205; Thu, 16 Aug 2012 16:00:08 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-5.tower-182.messagelabs.com!1345132806!28548240!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDcwMTY5NA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1037 invoked from network); 16 Aug 2012 16:00:07 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-5.tower-182.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 16 Aug 2012 16:00:07 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7GG03hg012605
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 16 Aug 2012 16:00:03 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7GG01Op019531
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 16 Aug 2012 16:00:03 GMT
Received: from abhmt106.oracle.com (abhmt106.oracle.com [141.146.116.58])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7GG01HU017495; Thu, 16 Aug 2012 11:00:01 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 16 Aug 2012 09:00:01 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 03177402E8; Thu, 16 Aug 2012 11:50:15 -0400 (EDT)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xensource.com
Date: Thu, 16 Aug 2012 11:50:14 -0400
Message-Id: <1345132214-15298-3-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.7.7.6
In-Reply-To: <1345132214-15298-1-git-send-email-konrad.wilk@oracle.com>
References: <1345132214-15298-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH 2/2] Revert "xen PVonHVM: move shared_info to
	MMIO before kexec"
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This reverts commit 00e37bdb0113a98408de42db85be002f21dbffd3.

During shutdown of PVHVM guests with more than 2 VCPUs on certain
machines, we can hit a race where the shared_info page is not
replaced fast enough: the PV time clock retries reading the same
area over and over without success and is stuck in an infinite
loop.
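
To illustrate why a stale page spins forever: the PV clock uses a
versioned (seqlock-style) read, where the writer makes the version
odd while updating and the reader retries on any inconsistency. A
minimal sketch of that retry loop, with assumed field names rather
than the kernel's actual structures:

```c
#include <stdint.h>

/* Illustrative layout only (not xen/interface definitions): the
 * version field is odd while an update is in flight, even when the
 * payload is consistent. */
struct time_area {
	volatile uint32_t version;
	volatile uint64_t system_time;
};

static uint64_t read_system_time(const struct time_area *ta)
{
	uint32_t v;
	uint64_t t;

	do {
		v = ta->version;      /* snapshot the version */
		t = ta->system_time;  /* read the payload */
	} while ((v & 1) || v != ta->version); /* in-flight or torn: retry */

	return t;
}
```

If the area being read is the old shared_info page that the
hypervisor no longer updates consistently, the retry condition never
clears and the loop above never terminates, which is the hang
described here.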

Acked-by: Olaf Hering <olaf@aepfle.de>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/xen/enlighten.c   |  118 ++++---------------------------------------
 arch/x86/xen/suspend.c     |    2 +-
 arch/x86/xen/xen-ops.h     |    2 +-
 drivers/xen/platform-pci.c |   15 ------
 include/xen/events.h       |    2 -
 5 files changed, 13 insertions(+), 126 deletions(-)

diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index a6f8acb..f1814fc 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -31,7 +31,6 @@
 #include <linux/pci.h>
 #include <linux/gfp.h>
 #include <linux/memblock.h>
-#include <linux/syscore_ops.h>
 
 #include <xen/xen.h>
 #include <xen/interface/xen.h>
@@ -1472,130 +1471,38 @@ asmlinkage void __init xen_start_kernel(void)
 #endif
 }
 
-#ifdef CONFIG_XEN_PVHVM
-/*
- * The pfn containing the shared_info is located somewhere in RAM. This
- * will cause trouble if the current kernel is doing a kexec boot into a
- * new kernel. The new kernel (and its startup code) can not know where
- * the pfn is, so it can not reserve the page. The hypervisor will
- * continue to update the pfn, and as a result memory corruption occours
- * in the new kernel.
- *
- * One way to work around this issue is to allocate a page in the
- * xen-platform pci device's BAR memory range. But pci init is done very
- * late and the shared_info page is already in use very early to read
- * the pvclock. So moving the pfn from RAM to MMIO is racy because some
- * code paths on other vcpus could access the pfn during the small
- * window when the old pfn is moved to the new pfn. There is even a
- * small window were the old pfn is not backed by a mfn, and during that
- * time all reads return -1.
- *
- * Because it is not known upfront where the MMIO region is located it
- * can not be used right from the start in xen_hvm_init_shared_info.
- *
- * To minimise trouble the move of the pfn is done shortly before kexec.
- * This does not eliminate the race because all vcpus are still online
- * when the syscore_ops will be called. But hopefully there is no work
- * pending at this point in time. Also the syscore_op is run last which
- * reduces the risk further.
- */
-
-static struct shared_info *xen_hvm_shared_info;
-
-static void xen_hvm_connect_shared_info(unsigned long pfn)
+void __ref xen_hvm_init_shared_info(void)
 {
+	int cpu;
 	struct xen_add_to_physmap xatp;
+	static struct shared_info *shared_info_page = 0;
 
+	if (!shared_info_page)
+		shared_info_page = (struct shared_info *)
+			extend_brk(PAGE_SIZE, PAGE_SIZE);
 	xatp.domid = DOMID_SELF;
 	xatp.idx = 0;
 	xatp.space = XENMAPSPACE_shared_info;
-	xatp.gpfn = pfn;
+	xatp.gpfn = __pa(shared_info_page) >> PAGE_SHIFT;
 	if (HYPERVISOR_memory_op(XENMEM_add_to_physmap, &xatp))
 		BUG();
 
-}
-static void xen_hvm_set_shared_info(struct shared_info *sip)
-{
-	int cpu;
-
-	HYPERVISOR_shared_info = sip;
+	HYPERVISOR_shared_info = (struct shared_info *)shared_info_page;
 
 	/* xen_vcpu is a pointer to the vcpu_info struct in the shared_info
 	 * page, we use it in the event channel upcall and in some pvclock
 	 * related functions. We don't need the vcpu_info placement
 	 * optimizations because we don't use any pv_mmu or pv_irq op on
 	 * HVM.
-	 * When xen_hvm_set_shared_info is run at boot time only vcpu 0 is
-	 * online but xen_hvm_set_shared_info is run at resume time too and
+	 * When xen_hvm_init_shared_info is run at boot time only vcpu 0 is
+	 * online but xen_hvm_init_shared_info is run at resume time too and
 	 * in that case multiple vcpus might be online. */
 	for_each_online_cpu(cpu) {
 		per_cpu(xen_vcpu, cpu) = &HYPERVISOR_shared_info->vcpu_info[cpu];
 	}
 }
 
-/* Reconnect the shared_info pfn to a mfn */
-void xen_hvm_resume_shared_info(void)
-{
-	xen_hvm_connect_shared_info(__pa(xen_hvm_shared_info) >> PAGE_SHIFT);
-}
-
-#ifdef CONFIG_KEXEC
-static struct shared_info *xen_hvm_shared_info_kexec;
-static unsigned long xen_hvm_shared_info_pfn_kexec;
-
-/* Remember a pfn in MMIO space for kexec reboot */
-void __devinit xen_hvm_prepare_kexec(struct shared_info *sip, unsigned long pfn)
-{
-	xen_hvm_shared_info_kexec = sip;
-	xen_hvm_shared_info_pfn_kexec = pfn;
-}
-
-static void xen_hvm_syscore_shutdown(void)
-{
-	struct xen_memory_reservation reservation = {
-		.domid = DOMID_SELF,
-		.nr_extents = 1,
-	};
-	unsigned long prev_pfn;
-	int rc;
-
-	if (!xen_hvm_shared_info_kexec)
-		return;
-
-	prev_pfn = __pa(xen_hvm_shared_info) >> PAGE_SHIFT;
-	set_xen_guest_handle(reservation.extent_start, &prev_pfn);
-
-	/* Move pfn to MMIO, disconnects previous pfn from mfn */
-	xen_hvm_connect_shared_info(xen_hvm_shared_info_pfn_kexec);
-
-	/* Update pointers, following hypercall is also a memory barrier */
-	xen_hvm_set_shared_info(xen_hvm_shared_info_kexec);
-
-	/* Allocate new mfn for previous pfn */
-	do {
-		rc = HYPERVISOR_memory_op(XENMEM_populate_physmap, &reservation);
-		if (rc == 0)
-			msleep(123);
-	} while (rc == 0);
-
-	/* Make sure the previous pfn is really connected to a (new) mfn */
-	BUG_ON(rc != 1);
-}
-
-static struct syscore_ops xen_hvm_syscore_ops = {
-	.shutdown = xen_hvm_syscore_shutdown,
-};
-#endif
-
-/* Use a pfn in RAM, may move to MMIO before kexec. */
-static void __init xen_hvm_init_shared_info(void)
-{
-	/* Remember pointer for resume */
-	xen_hvm_shared_info = extend_brk(PAGE_SIZE, PAGE_SIZE);
-	xen_hvm_connect_shared_info(__pa(xen_hvm_shared_info) >> PAGE_SHIFT);
-	xen_hvm_set_shared_info(xen_hvm_shared_info);
-}
-
+#ifdef CONFIG_XEN_PVHVM
 static void __init init_hvm_pv_info(void)
 {
 	int major, minor;
@@ -1646,9 +1553,6 @@ static void __init xen_hvm_guest_init(void)
 	init_hvm_pv_info();
 
 	xen_hvm_init_shared_info();
-#ifdef CONFIG_KEXEC
-	register_syscore_ops(&xen_hvm_syscore_ops);
-#endif
 
 	if (xen_feature(XENFEAT_hvm_callback_vector))
 		xen_have_vector_callback = 1;
diff --git a/arch/x86/xen/suspend.c b/arch/x86/xen/suspend.c
index ae8a00c..45329c8 100644
--- a/arch/x86/xen/suspend.c
+++ b/arch/x86/xen/suspend.c
@@ -30,7 +30,7 @@ void xen_arch_hvm_post_suspend(int suspend_cancelled)
 {
 #ifdef CONFIG_XEN_PVHVM
 	int cpu;
-	xen_hvm_resume_shared_info();
+	xen_hvm_init_shared_info();
 	xen_callback_vector();
 	xen_unplug_emulated_devices();
 	if (xen_feature(XENFEAT_hvm_safe_pvclock)) {
diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h
index 1e4329e..202d4c1 100644
--- a/arch/x86/xen/xen-ops.h
+++ b/arch/x86/xen/xen-ops.h
@@ -41,7 +41,7 @@ void xen_enable_syscall(void);
 void xen_vcpu_restore(void);
 
 void xen_callback_vector(void);
-void xen_hvm_resume_shared_info(void);
+void xen_hvm_init_shared_info(void);
 void xen_unplug_emulated_devices(void);
 
 void __init xen_build_dynamic_phys_to_machine(void);
diff --git a/drivers/xen/platform-pci.c b/drivers/xen/platform-pci.c
index d4c50d6..97ca359 100644
--- a/drivers/xen/platform-pci.c
+++ b/drivers/xen/platform-pci.c
@@ -101,19 +101,6 @@ static int platform_pci_resume(struct pci_dev *pdev)
 	return 0;
 }
 
-static void __devinit prepare_shared_info(void)
-{
-#ifdef CONFIG_KEXEC
-	unsigned long addr;
-	struct shared_info *hvm_shared_info;
-
-	addr = alloc_xen_mmio(PAGE_SIZE);
-	hvm_shared_info = ioremap(addr, PAGE_SIZE);
-	memset(hvm_shared_info, 0, PAGE_SIZE);
-	xen_hvm_prepare_kexec(hvm_shared_info, addr >> PAGE_SHIFT);
-#endif
-}
-
 static int __devinit platform_pci_init(struct pci_dev *pdev,
 				       const struct pci_device_id *ent)
 {
@@ -151,8 +138,6 @@ static int __devinit platform_pci_init(struct pci_dev *pdev,
 	platform_mmio = mmio_addr;
 	platform_mmiolen = mmio_len;
 
-	prepare_shared_info();
-
 	if (!xen_have_vector_callback) {
 		ret = xen_allocate_irq(pdev);
 		if (ret) {
diff --git a/include/xen/events.h b/include/xen/events.h
index 9c641de..04399b2 100644
--- a/include/xen/events.h
+++ b/include/xen/events.h
@@ -58,8 +58,6 @@ void notify_remote_via_irq(int irq);
 
 void xen_irq_resume(void);
 
-void xen_hvm_prepare_kexec(struct shared_info *sip, unsigned long pfn);
-
 /* Clear an irq's pending state, in preparation for polling on it */
 void xen_clear_irq_pending(int irq);
 void xen_set_irq_pending(int irq);
-- 
1.7.7.6


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 		BUG();
 
-}
-static void xen_hvm_set_shared_info(struct shared_info *sip)
-{
-	int cpu;
-
-	HYPERVISOR_shared_info = sip;
+	HYPERVISOR_shared_info = (struct shared_info *)shared_info_page;
 
 	/* xen_vcpu is a pointer to the vcpu_info struct in the shared_info
 	 * page, we use it in the event channel upcall and in some pvclock
 	 * related functions. We don't need the vcpu_info placement
 	 * optimizations because we don't use any pv_mmu or pv_irq op on
 	 * HVM.
-	 * When xen_hvm_set_shared_info is run at boot time only vcpu 0 is
-	 * online but xen_hvm_set_shared_info is run at resume time too and
+	 * When xen_hvm_init_shared_info is run at boot time only vcpu 0 is
+	 * online but xen_hvm_init_shared_info is run at resume time too and
 	 * in that case multiple vcpus might be online. */
 	for_each_online_cpu(cpu) {
 		per_cpu(xen_vcpu, cpu) = &HYPERVISOR_shared_info->vcpu_info[cpu];
 	}
 }
 
-/* Reconnect the shared_info pfn to a mfn */
-void xen_hvm_resume_shared_info(void)
-{
-	xen_hvm_connect_shared_info(__pa(xen_hvm_shared_info) >> PAGE_SHIFT);
-}
-
-#ifdef CONFIG_KEXEC
-static struct shared_info *xen_hvm_shared_info_kexec;
-static unsigned long xen_hvm_shared_info_pfn_kexec;
-
-/* Remember a pfn in MMIO space for kexec reboot */
-void __devinit xen_hvm_prepare_kexec(struct shared_info *sip, unsigned long pfn)
-{
-	xen_hvm_shared_info_kexec = sip;
-	xen_hvm_shared_info_pfn_kexec = pfn;
-}
-
-static void xen_hvm_syscore_shutdown(void)
-{
-	struct xen_memory_reservation reservation = {
-		.domid = DOMID_SELF,
-		.nr_extents = 1,
-	};
-	unsigned long prev_pfn;
-	int rc;
-
-	if (!xen_hvm_shared_info_kexec)
-		return;
-
-	prev_pfn = __pa(xen_hvm_shared_info) >> PAGE_SHIFT;
-	set_xen_guest_handle(reservation.extent_start, &prev_pfn);
-
-	/* Move pfn to MMIO, disconnects previous pfn from mfn */
-	xen_hvm_connect_shared_info(xen_hvm_shared_info_pfn_kexec);
-
-	/* Update pointers, following hypercall is also a memory barrier */
-	xen_hvm_set_shared_info(xen_hvm_shared_info_kexec);
-
-	/* Allocate new mfn for previous pfn */
-	do {
-		rc = HYPERVISOR_memory_op(XENMEM_populate_physmap, &reservation);
-		if (rc == 0)
-			msleep(123);
-	} while (rc == 0);
-
-	/* Make sure the previous pfn is really connected to a (new) mfn */
-	BUG_ON(rc != 1);
-}
-
-static struct syscore_ops xen_hvm_syscore_ops = {
-	.shutdown = xen_hvm_syscore_shutdown,
-};
-#endif
-
-/* Use a pfn in RAM, may move to MMIO before kexec. */
-static void __init xen_hvm_init_shared_info(void)
-{
-	/* Remember pointer for resume */
-	xen_hvm_shared_info = extend_brk(PAGE_SIZE, PAGE_SIZE);
-	xen_hvm_connect_shared_info(__pa(xen_hvm_shared_info) >> PAGE_SHIFT);
-	xen_hvm_set_shared_info(xen_hvm_shared_info);
-}
-
+#ifdef CONFIG_XEN_PVHVM
 static void __init init_hvm_pv_info(void)
 {
 	int major, minor;
@@ -1646,9 +1553,6 @@ static void __init xen_hvm_guest_init(void)
 	init_hvm_pv_info();
 
 	xen_hvm_init_shared_info();
-#ifdef CONFIG_KEXEC
-	register_syscore_ops(&xen_hvm_syscore_ops);
-#endif
 
 	if (xen_feature(XENFEAT_hvm_callback_vector))
 		xen_have_vector_callback = 1;
diff --git a/arch/x86/xen/suspend.c b/arch/x86/xen/suspend.c
index ae8a00c..45329c8 100644
--- a/arch/x86/xen/suspend.c
+++ b/arch/x86/xen/suspend.c
@@ -30,7 +30,7 @@ void xen_arch_hvm_post_suspend(int suspend_cancelled)
 {
 #ifdef CONFIG_XEN_PVHVM
 	int cpu;
-	xen_hvm_resume_shared_info();
+	xen_hvm_init_shared_info();
 	xen_callback_vector();
 	xen_unplug_emulated_devices();
 	if (xen_feature(XENFEAT_hvm_safe_pvclock)) {
diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h
index 1e4329e..202d4c1 100644
--- a/arch/x86/xen/xen-ops.h
+++ b/arch/x86/xen/xen-ops.h
@@ -41,7 +41,7 @@ void xen_enable_syscall(void);
 void xen_vcpu_restore(void);
 
 void xen_callback_vector(void);
-void xen_hvm_resume_shared_info(void);
+void xen_hvm_init_shared_info(void);
 void xen_unplug_emulated_devices(void);
 
 void __init xen_build_dynamic_phys_to_machine(void);
diff --git a/drivers/xen/platform-pci.c b/drivers/xen/platform-pci.c
index d4c50d6..97ca359 100644
--- a/drivers/xen/platform-pci.c
+++ b/drivers/xen/platform-pci.c
@@ -101,19 +101,6 @@ static int platform_pci_resume(struct pci_dev *pdev)
 	return 0;
 }
 
-static void __devinit prepare_shared_info(void)
-{
-#ifdef CONFIG_KEXEC
-	unsigned long addr;
-	struct shared_info *hvm_shared_info;
-
-	addr = alloc_xen_mmio(PAGE_SIZE);
-	hvm_shared_info = ioremap(addr, PAGE_SIZE);
-	memset(hvm_shared_info, 0, PAGE_SIZE);
-	xen_hvm_prepare_kexec(hvm_shared_info, addr >> PAGE_SHIFT);
-#endif
-}
-
 static int __devinit platform_pci_init(struct pci_dev *pdev,
 				       const struct pci_device_id *ent)
 {
@@ -151,8 +138,6 @@ static int __devinit platform_pci_init(struct pci_dev *pdev,
 	platform_mmio = mmio_addr;
 	platform_mmiolen = mmio_len;
 
-	prepare_shared_info();
-
 	if (!xen_have_vector_callback) {
 		ret = xen_allocate_irq(pdev);
 		if (ret) {
diff --git a/include/xen/events.h b/include/xen/events.h
index 9c641de..04399b2 100644
--- a/include/xen/events.h
+++ b/include/xen/events.h
@@ -58,8 +58,6 @@ void notify_remote_via_irq(int irq);
 
 void xen_irq_resume(void);
 
-void xen_hvm_prepare_kexec(struct shared_info *sip, unsigned long pfn);
-
 /* Clear an irq's pending state, in preparation for polling on it */
 void xen_clear_irq_pending(int irq);
 void xen_set_irq_pending(int irq);
-- 
1.7.7.6


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 16:04:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 16:04:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T22Yt-0004ht-Rf; Thu, 16 Aug 2012 16:04:43 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T22Ys-0004hS-8C
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 16:04:42 +0000
Received: from [85.158.138.51:29566] by server-8.bemta-3.messagelabs.com id
	EA/11-29583-91A1D205; Thu, 16 Aug 2012 16:04:41 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-16.tower-174.messagelabs.com!1345133079!28560427!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjk0MzAx\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14908 invoked from network); 16 Aug 2012 16:04:40 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-16.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 16 Aug 2012 16:04:40 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7GG4cvo031213
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 16 Aug 2012 16:04:38 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7GG4bQq022301
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 16 Aug 2012 16:04:37 GMT
Received: from abhmt120.oracle.com (abhmt120.oracle.com [141.146.116.72])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7GG4b4Y021294; Thu, 16 Aug 2012 11:04:37 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 16 Aug 2012 09:04:36 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id B37F2402EF; Thu, 16 Aug 2012 11:54:50 -0400 (EDT)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xensource.com
Date: Thu, 16 Aug 2012 11:54:44 -0400
Message-Id: <1345132488-27323-2-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.7.7.6
In-Reply-To: <1345132488-27323-1-git-send-email-konrad.wilk@oracle.com>
References: <1345132488-27323-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH 1/5] xen/swiotlb: Simplify the logic.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

It's pretty easy:
 1). We only check to see if we need Xen SWIOTLB for PV guests.
 2). If swiotlb=force or iommu=soft is set, then Xen SWIOTLB will
     be enabled.
 3). If it is an initial domain, then Xen SWIOTLB will be enabled.
 4). Native SWIOTLB must be disabled for PV guests.
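The four rules above can be modelled as a small standalone decision function. This is a hedged sketch mirroring the simplified `pci_xen_swiotlb_detect()` in the diff below; `xen_swiotlb_detect_model` and its boolean parameters are illustrative stand-ins, not the kernel API.

```c
#include <stdbool.h>

/* Models the post-patch logic: bail out early for non-PV guests (rule 1),
 * enable Xen SWIOTLB for dom0 or when swiotlb=force / iommu=soft is set
 * (rules 2 and 3), and always disable the native SWIOTLB for PV (rule 4).
 * Returns whether Xen SWIOTLB should be used. */
static int xen_swiotlb_detect_model(bool pv_domain, bool initial_domain,
                                    bool swiotlb_cmdline, bool swiotlb_force,
                                    bool *native_swiotlb)
{
    if (!pv_domain)
        return 0;               /* HVM or bare metal: leave flags untouched */

    int xen_swiotlb = 0;
    if (initial_domain || swiotlb_cmdline || swiotlb_force)
        xen_swiotlb = 1;

    *native_swiotlb = false;    /* PV guests must never use the native SWIOTLB */
    return xen_swiotlb;
}
```

Hoisting the `!pv_domain` check to the top is what lets the patch drop the `xen_pv_domain()` test from both later conditions.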

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/xen/pci-swiotlb-xen.c |    9 +++++----
 1 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/arch/x86/xen/pci-swiotlb-xen.c b/arch/x86/xen/pci-swiotlb-xen.c
index 967633a..b6a5340 100644
--- a/arch/x86/xen/pci-swiotlb-xen.c
+++ b/arch/x86/xen/pci-swiotlb-xen.c
@@ -34,19 +34,20 @@ static struct dma_map_ops xen_swiotlb_dma_ops = {
 int __init pci_xen_swiotlb_detect(void)
 {
 
+	if (!xen_pv_domain())
+		return 0;
+
 	/* If running as PV guest, either iommu=soft, or swiotlb=force will
 	 * activate this IOMMU. If running as PV privileged, activate it
 	 * irregardless.
 	 */
-	if ((xen_initial_domain() || swiotlb || swiotlb_force) &&
-	    (xen_pv_domain()))
+	if ((xen_initial_domain() || swiotlb || swiotlb_force))
 		xen_swiotlb = 1;
 
 	/* If we are running under Xen, we MUST disable the native SWIOTLB.
 	 * Don't worry about swiotlb_force flag activating the native, as
 	 * the 'swiotlb' flag is the only one turning it on. */
-	if (xen_pv_domain())
-		swiotlb = 0;
+	swiotlb = 0;
 
 	return xen_swiotlb;
 }
-- 
1.7.7.6


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 16:04:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 16:04:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T22Yt-0004hi-Fz; Thu, 16 Aug 2012 16:04:43 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T22Ys-0004hR-0l
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 16:04:42 +0000
Received: from [85.158.143.99:35619] by server-3.bemta-4.messagelabs.com id
	51/52-09529-91A1D205; Thu, 16 Aug 2012 16:04:41 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-12.tower-216.messagelabs.com!1345133079!23294138!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjk0MzAx\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18808 invoked from network); 16 Aug 2012 16:04:40 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-12.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 16 Aug 2012 16:04:40 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7GG4cNo031221
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 16 Aug 2012 16:04:39 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7GG4cVT023051
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 16 Aug 2012 16:04:38 GMT
Received: from abhmt101.oracle.com (abhmt101.oracle.com [141.146.116.53])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7GG4bqI021311; Thu, 16 Aug 2012 11:04:37 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 16 Aug 2012 09:04:37 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id C1BC44032C; Thu, 16 Aug 2012 11:54:50 -0400 (EDT)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xensource.com
Date: Thu, 16 Aug 2012 11:54:48 -0400
Message-Id: <1345132488-27323-6-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.7.7.6
In-Reply-To: <1345132488-27323-1-git-send-email-konrad.wilk@oracle.com>
References: <1345132488-27323-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH 5/5] xen/pcifront: Use Xen-SWIOTLB when initting
	if required.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

We piggyback on the "xen/swiotlb: Use the swiotlb_late_init_with_tbl to init
Xen-SWIOTLB late when PV PCI is used." functionality to start up
the Xen-SWIOTLB when we are hot-plugged. This (mostly) removes the
need to supply 'iommu=soft' on the Linux command line.
With this patch, if a user forgets 'iommu=soft' on the command line
and hot-plugs a PCI device, they will get:

pcifront pci-0: Installing PCI frontend
Warning: only able to allocate 4 MB for software IO TLB
software IO TLB [mem 0x2a000000-0x2a3fffff] (4MB) mapped at [ffff88002a000000-ffff88002a3fffff]
pcifront pci-0: Creating PCI Frontend Bus 0000:00
pcifront pci-0: PCI host bridge to bus 0000:00
pci_bus 0000:00: root bus resource [io  0x0000-0xffff]
pci_bus 0000:00: root bus resource [mem 0x00000000-0xfffffffff]
pci 0000:00:00.0: [8086:10d3] type 00 class 0x020000
pci 0000:00:00.0: reg 10: [mem 0xfe5c0000-0xfe5dffff]
pci 0000:00:00.0: reg 14: [mem 0xfe500000-0xfe57ffff]
pci 0000:00:00.0: reg 18: [io  0xe000-0xe01f]
pci 0000:00:00.0: reg 1c: [mem 0xfe5e0000-0xfe5e3fff]
pcifront pci-0: claiming resource 0000:00:00.0/0
pcifront pci-0: claiming resource 0000:00:00.0/1
pcifront pci-0: claiming resource 0000:00:00.0/2
pcifront pci-0: claiming resource 0000:00:00.0/3
e1000e: Intel(R) PRO/1000 Network Driver - 2.0.0-k
e1000e: Copyright(c) 1999 - 2012 Intel Corporation.
e1000e 0000:00:00.0: Disabling ASPM L0s L1
e1000e 0000:00:00.0: enabling device (0000 -> 0002)
e1000e 0000:00:00.0: Xen PCI mapped GSI16 to IRQ34
e1000e 0000:00:00.0: (unregistered net_device): Interrupt Throttling Rate (ints/sec) set to dynamic conservative mode
e1000e 0000:00:00.0: eth0: (PCI Express:2.5GT/s:Width x1) 00:1b:21:ab:c6:13
e1000e 0000:00:00.0: eth0: Intel(R) PRO/1000 Network Connection
e1000e 0000:00:00.0: eth0: MAC: 3, PHY: 8, PBA No: E46981-005

The "Warning" will go away if one supplies 'iommu=soft' instead, as we
then have a higher chance of being able to allocate large swaths of
memory.
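The key change in the diff below is a guard that initialises the Xen-SWIOTLB late only when no software IO TLB exists yet. Here is a hedged standalone sketch of that pattern; `swiotlb_slabs`, `init_late()` and `connect_and_init_dma()` model `swiotlb_nr_tbl()`, `pci_xen_swiotlb_init_late()` and `pcifront_connect_and_init_dma()` but are stand-ins, not the kernel API.

```c
/* 0 until some software IO TLB has been set up (models swiotlb_nr_tbl()). */
static unsigned long swiotlb_slabs;

/* Models pci_xen_swiotlb_init_late(): allocate a small late TLB. */
static int init_late(void)
{
    swiotlb_slabs = 4;          /* pretend we allocated a 4-slab TLB */
    return 0;
}

/* Models the patched connect path: only attempt the late init when the
 * connect itself succeeded AND no TLB is present yet, so an early-boot
 * 'iommu=soft' setup is never clobbered. Returns 0 on success. */
static int connect_and_init_dma(int connect_err)
{
    int err = connect_err;
    if (!err && !swiotlb_slabs)
        err = init_late();
    return err;
}
```

The double condition is what makes the hot-plug path safe to call regardless of whether the user already booted with 'iommu=soft'.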

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 drivers/pci/xen-pcifront.c |   14 ++++++++++----
 1 files changed, 10 insertions(+), 4 deletions(-)

diff --git a/drivers/pci/xen-pcifront.c b/drivers/pci/xen-pcifront.c
index d6cc62c..ca92801 100644
--- a/drivers/pci/xen-pcifront.c
+++ b/drivers/pci/xen-pcifront.c
@@ -21,6 +21,7 @@
 #include <linux/bitops.h>
 #include <linux/time.h>
 
+#include <asm/xen/swiotlb-xen.h>
 #define INVALID_GRANT_REF (0)
 #define INVALID_EVTCHN    (-1)
 
@@ -668,7 +669,7 @@ static irqreturn_t pcifront_handler_aer(int irq, void *dev)
 	schedule_pcifront_aer_op(pdev);
 	return IRQ_HANDLED;
 }
-static int pcifront_connect(struct pcifront_device *pdev)
+static int pcifront_connect_and_init_dma(struct pcifront_device *pdev)
 {
 	int err = 0;
 
@@ -681,9 +682,13 @@ static int pcifront_connect(struct pcifront_device *pdev)
 		dev_err(&pdev->xdev->dev, "PCI frontend already installed!\n");
 		err = -EEXIST;
 	}
-
 	spin_unlock(&pcifront_dev_lock);
 
+	if (!err && !swiotlb_nr_tbl()) {
+		err = pci_xen_swiotlb_init_late();
+		if (err)
+			dev_err(&pdev->xdev->dev, "Could not setup SWIOTLB!\n");
+	}
 	return err;
 }
 
@@ -699,6 +704,7 @@ static void pcifront_disconnect(struct pcifront_device *pdev)
 
 	spin_unlock(&pcifront_dev_lock);
 }
+
 static struct pcifront_device *alloc_pdev(struct xenbus_device *xdev)
 {
 	struct pcifront_device *pdev;
@@ -842,10 +848,10 @@ static int __devinit pcifront_try_connect(struct pcifront_device *pdev)
 	    XenbusStateInitialised)
 		goto out;
 
-	err = pcifront_connect(pdev);
+	err = pcifront_connect_and_init_dma(pdev);
 	if (err) {
 		xenbus_dev_fatal(pdev->xdev, err,
-				 "Error connecting PCI Frontend");
+				 "Error setting up PCI Frontend");
 		goto out;
 	}
 
-- 
1.7.7.6


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 16:04:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 16:04:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T22Yu-0004i5-7s; Thu, 16 Aug 2012 16:04:44 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T22Ys-0004hR-H5
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 16:04:42 +0000
Received: from [85.158.143.99:35654] by server-3.bemta-4.messagelabs.com id
	50/62-09529-A1A1D205; Thu, 16 Aug 2012 16:04:42 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-16.tower-216.messagelabs.com!1345133078!17340029!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjk0MzAx\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20576 invoked from network); 16 Aug 2012 16:04:40 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-16.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 16 Aug 2012 16:04:40 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7GG4bMj031198
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 16 Aug 2012 16:04:38 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7GG4b2r022995
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 16 Aug 2012 16:04:37 GMT
Received: from abhmt101.oracle.com (abhmt101.oracle.com [141.146.116.53])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7GG4aQt032345; Thu, 16 Aug 2012 11:04:36 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 16 Aug 2012 09:04:36 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id AB0D1402EB; Thu, 16 Aug 2012 11:54:50 -0400 (EDT)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xensource.com
Date: Thu, 16 Aug 2012 11:54:43 -0400
Message-Id: <1345132488-27323-1-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.7.7.6
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Subject: [Xen-devel] [PATCH] Xen-SWIOTLB fixes (v3) for v3.7.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Changes since v2 (https://lkml.org/lkml/2012/7/31/337):
 - fixed smatch warnings (added __ref and the appropriate header files).

Besides that it is exactly like the series I posted in July, except
it has been tested more extensively.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 16:04:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 16:04:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T22Yv-0004ic-Kx; Thu, 16 Aug 2012 16:04:45 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T22Yu-0004hq-8J
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 16:04:44 +0000
Received: from [85.158.138.51:56720] by server-10.bemta-3.messagelabs.com id
	27/FB-20518-B1A1D205; Thu, 16 Aug 2012 16:04:43 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-4.tower-174.messagelabs.com!1345133081!28619982!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjk0MzAx\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27151 invoked from network); 16 Aug 2012 16:04:42 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-4.tower-174.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 16 Aug 2012 16:04:42 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7GG4cHd031234
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 16 Aug 2012 16:04:39 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7GG4biK022315
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 16 Aug 2012 16:04:38 GMT
Received: from abhmt113.oracle.com (abhmt113.oracle.com [141.146.116.65])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7GG4bZN032354; Thu, 16 Aug 2012 11:04:37 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 16 Aug 2012 09:04:37 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id BCE5B4031E; Thu, 16 Aug 2012 11:54:50 -0400 (EDT)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xensource.com
Date: Thu, 16 Aug 2012 11:54:47 -0400
Message-Id: <1345132488-27323-5-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.7.7.6
In-Reply-To: <1345132488-27323-1-git-send-email-konrad.wilk@oracle.com>
References: <1345132488-27323-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH 4/5] xen/swiotlb: Use the
	swiotlb_late_init_with_tbl to init Xen-SWIOTLB late when PV
	PCI is used.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch provides the functionality to initialize the Xen-SWIOTLB
late in the bootup cycle - specifically for the Xen PCI frontend. It
will still work if the user has supplied 'iommu=soft' on the Linux
command line.
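The guard the PCI frontend uses around this late init - skip it
entirely when a software IOTLB already exists, e.g. because the user
booted with 'iommu=soft' - can be modelled as below. This is a
stand-alone sketch: nr_tbl() and init_late() are stand-ins for
swiotlb_nr_tbl() and pci_xen_swiotlb_init_late(), not the real calls:

```c
#include <assert.h>

static unsigned long existing_slabs;	/* 0 = no swiotlb set up yet */
static int late_init_calls;

static unsigned long nr_tbl(void) { return existing_slabs; }

static int init_late(void)
{
	late_init_calls++;
	existing_slabs = 65536;	/* pretend the allocation succeeded */
	return 0;
}

/* Mirrors the logic added to pcifront_connect_and_init_dma(): only
 * perform the late DMA setup when nothing is in place already. */
int connect_and_init_dma(void)
{
	int err = 0;

	if (!nr_tbl())
		err = init_late();
	return err;
}
```

A second connect (say, after a reconnect to the backend) then becomes
a no-op for the DMA setup instead of a double initialization.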

CC: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
[v1: Fix smatch warnings]
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/include/asm/xen/swiotlb-xen.h |    2 +
 arch/x86/xen/pci-swiotlb-xen.c         |   17 +++++++++-
 drivers/xen/swiotlb-xen.c              |   54 ++++++++++++++++++++++++++-----
 include/xen/swiotlb-xen.h              |    1 +
 4 files changed, 64 insertions(+), 10 deletions(-)

diff --git a/arch/x86/include/asm/xen/swiotlb-xen.h b/arch/x86/include/asm/xen/swiotlb-xen.h
index 1be1ab7..ee52fca 100644
--- a/arch/x86/include/asm/xen/swiotlb-xen.h
+++ b/arch/x86/include/asm/xen/swiotlb-xen.h
@@ -5,10 +5,12 @@
 extern int xen_swiotlb;
 extern int __init pci_xen_swiotlb_detect(void);
 extern void __init pci_xen_swiotlb_init(void);
+extern int pci_xen_swiotlb_init_late(void);
 #else
 #define xen_swiotlb (0)
 static inline int __init pci_xen_swiotlb_detect(void) { return 0; }
 static inline void __init pci_xen_swiotlb_init(void) { }
+static inline int pci_xen_swiotlb_init_late(void) { return -ENXIO; }
 #endif
 
 #endif /* _ASM_X86_SWIOTLB_XEN_H */
diff --git a/arch/x86/xen/pci-swiotlb-xen.c b/arch/x86/xen/pci-swiotlb-xen.c
index 1c17227..031d8bc 100644
--- a/arch/x86/xen/pci-swiotlb-xen.c
+++ b/arch/x86/xen/pci-swiotlb-xen.c
@@ -12,7 +12,7 @@
 #include <asm/iommu.h>
 #include <asm/dma.h>
 #endif
-
+#include <linux/export.h>
 int xen_swiotlb __read_mostly;
 
 static struct dma_map_ops xen_swiotlb_dma_ops = {
@@ -76,6 +76,21 @@ void __init pci_xen_swiotlb_init(void)
 		pci_request_acs();
 	}
 }
+
+int pci_xen_swiotlb_init_late(void)
+{
+	int rc = xen_swiotlb_late_init(1);
+	if (rc)
+		return rc;
+
+	dma_ops = &xen_swiotlb_dma_ops;
+	/* Make sure ACS will be enabled */
+	pci_request_acs();
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(pci_xen_swiotlb_init_late);
+
 IOMMU_INIT_FINISH(pci_xen_swiotlb_detect,
 		  0,
 		  pci_xen_swiotlb_init,
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 1afb4fb..1942a3e 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -145,13 +145,14 @@ xen_swiotlb_fixup(void *buf, size_t size, unsigned long nslabs)
 	return 0;
 }
 
-void __init xen_swiotlb_init(int verbose)
+static int __ref __xen_swiotlb_init(int verbose, bool early)
 {
 	unsigned long bytes;
 	int rc = -ENOMEM;
 	unsigned long nr_tbl;
 	char *m = NULL;
 	unsigned int repeat = 3;
+	unsigned long order;
 
 	nr_tbl = swiotlb_nr_tbl();
 	if (nr_tbl)
@@ -161,12 +162,31 @@ void __init xen_swiotlb_init(int verbose)
 		xen_io_tlb_nslabs = ALIGN(xen_io_tlb_nslabs, IO_TLB_SEGSIZE);
 	}
 retry:
+	order = get_order(xen_io_tlb_nslabs << IO_TLB_SHIFT);
 	bytes = xen_io_tlb_nslabs << IO_TLB_SHIFT;
 
 	/*
 	 * Get IO TLB memory from any location.
 	 */
-	xen_io_tlb_start = alloc_bootmem_pages(PAGE_ALIGN(bytes));
+	if (early)
+		xen_io_tlb_start = alloc_bootmem_pages(PAGE_ALIGN(bytes));
+	else {
+#define SLABS_PER_PAGE (1 << (PAGE_SHIFT - IO_TLB_SHIFT))
+#define IO_TLB_MIN_SLABS ((1<<20) >> IO_TLB_SHIFT)
+
+		while ((SLABS_PER_PAGE << order) > IO_TLB_MIN_SLABS) {
+			xen_io_tlb_start = (void *)__get_free_pages(__GFP_NOWARN, order);
+			if (xen_io_tlb_start)
+				break;
+			order--;
+		}
+		if (order != get_order(bytes)) {
+			pr_warn("Warning: only able to allocate %ld MB "
+				"for software IO TLB\n", (PAGE_SIZE << order) >> 20);
+			xen_io_tlb_nslabs = SLABS_PER_PAGE << order;
+			bytes = xen_io_tlb_nslabs << IO_TLB_SHIFT;
+		}
+	}
 	if (!xen_io_tlb_start) {
 		m = "Cannot allocate Xen-SWIOTLB buffer!\n";
 		goto error;
@@ -179,17 +199,22 @@ retry:
 			       bytes,
 			       xen_io_tlb_nslabs);
 	if (rc) {
-		free_bootmem(__pa(xen_io_tlb_start), PAGE_ALIGN(bytes));
+		if (early)
+			free_bootmem(__pa(xen_io_tlb_start), PAGE_ALIGN(bytes));
 		m = "Failed to get contiguous memory for DMA from Xen!\n"\
 		    "You either: don't have the permissions, do not have"\
 		    " enough free memory under 4GB, or the hypervisor memory"\
-		    "is too fragmented!";
+		    " is too fragmented!";
 		goto error;
 	}
 	start_dma_addr = xen_virt_to_bus(xen_io_tlb_start);
-	swiotlb_init_with_tbl(xen_io_tlb_start, xen_io_tlb_nslabs, verbose);
 
-	return;
+	rc = 0;
+	if (early)
+		swiotlb_init_with_tbl(xen_io_tlb_start, xen_io_tlb_nslabs, verbose);
+	else
+		rc = swiotlb_late_init_with_tbl(xen_io_tlb_start, xen_io_tlb_nslabs);
+	return rc;
 error:
 	if (repeat--) {
 		xen_io_tlb_nslabs = max(1024UL, /* Min is 2MB */
@@ -198,10 +223,21 @@ error:
 		      (xen_io_tlb_nslabs << IO_TLB_SHIFT) >> 20);
 		goto retry;
 	}
-	xen_raw_printk("%s (rc:%d)", m, rc);
-	panic("%s (rc:%d)", m, rc);
+	pr_err("%s (rc:%d)", m, rc);
+	if (early)
+		panic("%s (rc:%d)", m, rc);
+	else
+		free_pages((unsigned long)xen_io_tlb_start, order);
+	return rc;
+}
+void __init xen_swiotlb_init(int verbose)
+{
+	__xen_swiotlb_init(verbose, true /* early */);
+}
+int xen_swiotlb_late_init(int verbose)
+{
+	return __xen_swiotlb_init(verbose, false /* late */);
 }
-
 void *
 xen_swiotlb_alloc_coherent(struct device *hwdev, size_t size,
 			   dma_addr_t *dma_handle, gfp_t flags,
diff --git a/include/xen/swiotlb-xen.h b/include/xen/swiotlb-xen.h
index 4f4d449..d38d984 100644
--- a/include/xen/swiotlb-xen.h
+++ b/include/xen/swiotlb-xen.h
@@ -4,6 +4,7 @@
 #include <linux/swiotlb.h>
 
 extern void xen_swiotlb_init(int verbose);
+extern int  xen_swiotlb_late_init(int verbose);
 
 extern void
 *xen_swiotlb_alloc_coherent(struct device *hwdev, size_t size,
-- 
1.7.7.6


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 16:04:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 16:04:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T22Yw-0004il-1v; Thu, 16 Aug 2012 16:04:46 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T22Yu-0004iA-TJ
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 16:04:45 +0000
Received: from [85.158.139.83:36887] by server-5.bemta-5.messagelabs.com id
	BF/E4-31019-C1A1D205; Thu, 16 Aug 2012 16:04:44 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-14.tower-182.messagelabs.com!1345133080!24248077!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjk0MzAx\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22241 invoked from network); 16 Aug 2012 16:04:42 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-14.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 16 Aug 2012 16:04:42 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7GG4bVj031205
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 16 Aug 2012 16:04:38 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7GG4b7a023018
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 16 Aug 2012 16:04:37 GMT
Received: from abhmt105.oracle.com (abhmt105.oracle.com [141.146.116.57])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7GG4bI6021295; Thu, 16 Aug 2012 11:04:37 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 16 Aug 2012 09:04:36 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id B64DA402E8; Thu, 16 Aug 2012 11:54:50 -0400 (EDT)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xensource.com
Date: Thu, 16 Aug 2012 11:54:46 -0400
Message-Id: <1345132488-27323-4-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.7.7.6
In-Reply-To: <1345132488-27323-1-git-send-email-konrad.wilk@oracle.com>
References: <1345132488-27323-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH 3/5] swiotlb: add the late swiotlb
	initialization function with iotlb memory
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This enables the caller to initialize swiotlb with its own iotlb
memory late in the bootup.

See git commit eb605a5754d050a25a9f00d718fb173f24c486ef
("swiotlb: add swiotlb_tbl_map_single library function"), which
explains in full detail what it can be used for.
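The ownership split this patch introduces - the caller allocates the
bounce buffer, and the late-init entry point only records and clears
it - can be sketched as follows. The tiny_swiotlb_* name and the
error convention here are illustrative stand-ins, not the actual
kernel interface:

```c
#include <assert.h>
#include <string.h>

#define IO_TLB_SHIFT 11		/* 2 KB slabs, as in lib/swiotlb.c */

static char *io_tlb_start;
static unsigned long io_tlb_nslabs;

/* Buffer for the usage example below; a real caller would allocate
 * this with __get_free_pages() or similar. */
static char demo_buf[4 << IO_TLB_SHIFT];

/* Model of swiotlb_late_init_with_tbl(): the caller owns the memory;
 * the init function validates it, records it, and zeroes it.  On
 * failure the caller keeps ownership and must free the buffer. */
int tiny_swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
{
	if (!tlb || !nslabs)
		return -1;	/* caller must supply real memory */

	io_tlb_nslabs = nslabs;
	io_tlb_start = tlb;
	memset(io_tlb_start, 0, nslabs << IO_TLB_SHIFT);
	return 0;
}
```

This is what lets Xen-SWIOTLB reuse the generic bookkeeping while
feeding it memory that was first made machine-contiguous via the
hypervisor.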

CC: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
[v1: Fold in smatch warning]
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 include/linux/swiotlb.h |    1 +
 lib/swiotlb.c           |   33 ++++++++++++++++++++++++---------
 2 files changed, 25 insertions(+), 9 deletions(-)

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index e872526..8d08b3e 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -25,6 +25,7 @@ extern int swiotlb_force;
 extern void swiotlb_init(int verbose);
 extern void swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose);
 extern unsigned long swiotlb_nr_tbl(void);
+extern int swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs);
 
 /*
  * Enumeration for sync targets
diff --git a/lib/swiotlb.c b/lib/swiotlb.c
index 45bc1f8..f114bf6 100644
--- a/lib/swiotlb.c
+++ b/lib/swiotlb.c
@@ -170,7 +170,7 @@ void __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
  * Statically reserve bounce buffer space and initialize bounce buffer data
  * structures for the software IO TLB used to implement the DMA API.
  */
-void __init
+static void __init
 swiotlb_init_with_default_size(size_t default_size, int verbose)
 {
 	unsigned long bytes;
@@ -206,8 +206,9 @@ swiotlb_init(int verbose)
 int
 swiotlb_late_init_with_default_size(size_t default_size)
 {
-	unsigned long i, bytes, req_nslabs = io_tlb_nslabs;
+	unsigned long bytes, req_nslabs = io_tlb_nslabs;
 	unsigned int order;
+	int rc = 0;
 
 	if (!io_tlb_nslabs) {
 		io_tlb_nslabs = (default_size >> IO_TLB_SHIFT);
@@ -229,16 +230,32 @@ swiotlb_late_init_with_default_size(size_t default_size)
 		order--;
 	}
 
-	if (!io_tlb_start)
-		goto cleanup1;
-
+	if (!io_tlb_start) {
+		io_tlb_nslabs = req_nslabs;
+		return -ENOMEM;
+	}
 	if (order != get_order(bytes)) {
 		printk(KERN_WARNING "Warning: only able to allocate %ld MB "
 		       "for software IO TLB\n", (PAGE_SIZE << order) >> 20);
 		io_tlb_nslabs = SLABS_PER_PAGE << order;
-		bytes = io_tlb_nslabs << IO_TLB_SHIFT;
 	}
+	rc = swiotlb_late_init_with_tbl(io_tlb_start, io_tlb_nslabs);
+	if (rc)
+		free_pages((unsigned long)io_tlb_start, order);
+	return rc;
+}
+
+int
+swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
+{
+	unsigned long i, bytes;
+
+	bytes = nslabs << IO_TLB_SHIFT;
+
+	io_tlb_nslabs = nslabs;
+	io_tlb_start = tlb;
 	io_tlb_end = io_tlb_start + bytes;
+
 	memset(io_tlb_start, 0, bytes);
 
 	/*
@@ -288,10 +305,8 @@ cleanup3:
 	io_tlb_list = NULL;
 cleanup2:
 	io_tlb_end = NULL;
-	free_pages((unsigned long)io_tlb_start, order);
 	io_tlb_start = NULL;
-cleanup1:
-	io_tlb_nslabs = req_nslabs;
+	io_tlb_nslabs = 0;
 	return -ENOMEM;
 }
 
-- 
1.7.7.6


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 16:04:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 16:04:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T22Yz-0004l5-G1; Thu, 16 Aug 2012 16:04:49 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T22Yx-0004hZ-RS
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 16:04:48 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1345133080!2100574!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDcwMTY5NA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24081 invoked from network); 16 Aug 2012 16:04:41 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-11.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 16 Aug 2012 16:04:41 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7GG4cvg018519
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 16 Aug 2012 16:04:38 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7GG4bQf022300
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 16 Aug 2012 16:04:37 GMT
Received: from abhmt104.oracle.com (abhmt104.oracle.com [141.146.116.56])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7GG4aAJ032348; Thu, 16 Aug 2012 11:04:36 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 16 Aug 2012 09:04:36 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id B127C402C0; Thu, 16 Aug 2012 11:54:50 -0400 (EDT)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xensource.com
Date: Thu, 16 Aug 2012 11:54:45 -0400
Message-Id: <1345132488-27323-3-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.7.7.6
In-Reply-To: <1345132488-27323-1-git-send-email-konrad.wilk@oracle.com>
References: <1345132488-27323-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH 2/5] xen/swiotlb: With more than 4GB on 64-bit,
	disable the native SWIOTLB.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

If a PV guest is booted, the native SWIOTLB should not be
turned on: it does not help us (we don't have any PCI devices)
and it eats 64MB of good memory. PV guests with PCI devices
need the Xen-SWIOTLB instead.

[v1: Rewrite comment per Stefano's suggestion]
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/xen/pci-swiotlb-xen.c |   14 ++++++++++++++
 1 files changed, 14 insertions(+), 0 deletions(-)

diff --git a/arch/x86/xen/pci-swiotlb-xen.c b/arch/x86/xen/pci-swiotlb-xen.c
index b6a5340..1c17227 100644
--- a/arch/x86/xen/pci-swiotlb-xen.c
+++ b/arch/x86/xen/pci-swiotlb-xen.c
@@ -8,6 +8,11 @@
 #include <xen/xen.h>
 #include <asm/iommu_table.h>
 
+#ifdef CONFIG_X86_64
+#include <asm/iommu.h>
+#include <asm/dma.h>
+#endif
+
 int xen_swiotlb __read_mostly;
 
 static struct dma_map_ops xen_swiotlb_dma_ops = {
@@ -49,6 +54,15 @@ int __init pci_xen_swiotlb_detect(void)
 	 * the 'swiotlb' flag is the only one turning it on. */
 	swiotlb = 0;
 
+#ifdef CONFIG_X86_64
+	/* pci_swiotlb_detect_4gb turns on native SWIOTLB if no_iommu == 0
+	 * (so no iommu=X command line over-writes).
+	 * Considering that PV guests do not want the *native SWIOTLB* but
+	 * only Xen SWIOTLB it is not useful to us so set no_iommu=1 here.
+	 */
+	if (max_pfn > MAX_DMA32_PFN)
+		no_iommu = 1;
+#endif
 	return xen_swiotlb;
 }
 
-- 
1.7.7.6


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 16:09:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 16:09:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T22di-0005bh-G0; Thu, 16 Aug 2012 16:09:42 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T22dh-0005bc-G3
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 16:09:41 +0000
Received: from [85.158.143.99:58409] by server-2.bemta-4.messagelabs.com id
	47/89-31966-44B1D205; Thu, 16 Aug 2012 16:09:40 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-216.messagelabs.com!1345133378!21016755!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkyODE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19322 invoked from network); 16 Aug 2012 16:09:38 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-11.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 16:09:38 -0000
X-IronPort-AV: E=Sophos;i="4.77,778,1336348800"; d="scan'208";a="14045341"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 16:09:38 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 16 Aug 2012 17:09:37 +0100
Message-ID: <1345133376.30865.45.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Thanos Makatos <thanos.makatos@citrix.com>
Date: Thu, 16 Aug 2012 17:09:36 +0100
In-Reply-To: <4B45B535F7F6BE4CB1C044ED5115CDDE0107F6C1D44C@LONPMAILBOX01.citrite.net>
References: <4B45B535F7F6BE4CB1C044ED5115CDDE0107F6C1D44C@LONPMAILBOX01.citrite.net>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] RFC: blktap3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-09 at 15:03 +0100, Thanos Makatos wrote:
> I’d like to introduce blktap3: essentially blktap2 without the need of
> blkback. This has been developed by Santosh Jodh, and I’ll maintain
> it.

I think you are working on reposting in a more manageable form but
here's a few things which I noticed on a top-level scroll through. (I
might be repeating myself occasionally from the quick comments I made
earlier, sorry.)

> diff --git a/tools/Makefile b/tools/Makefile
> --- a/tools/Makefile
> +++ b/tools/Makefile
> @@ -201,3 +203,20 @@
>  
>  subdir-distclean-firmware: .phony
>  	$(MAKE) -C firmware distclean
> +
> +subdir-all-blktap3 subdir-install-blktap3: .phony
> +	source=.; \
> +	cd blktap3; \
> +	./autogen.sh; \

If anything this should be called from the top-level ./autogen.sh and
not here. We shouldn't expect end users to have autoconf available.

> +	./configure \

I think autoconf has a construct which can cause configure to call other
sub-configures in subdirs. If I'm right then it would be better to use
this instead of calling it here.

However I think that the real correct answer is that blktap3 shouldn't
have its own configure anyway but should simply add the tests which it
needs to the global tools-level one and use the result like everyone
else.

> +	CFLAGS="-I$(XEN_ROOT)/tools/include \
> +		-I$(XEN_ROOT)/tools/libxc \
> +		-I$(XEN_ROOT)/tools/xenstore" \
> +	LDFLAGS="-L$(XEN_ROOT)/tools/xenstore \
> +		 -L$(XEN_ROOT)/tools/libxc"; \

Your Makefiles should start with

        XEN_ROOT = $(CURDIR)/../..
        include $(XEN_ROOT)/tools/Rules.mk

And then make use of the variables defined in Rules.mk, e.g.
CFLAGS_libxenctrl, LIBS_libxenctrl etc., rather than doing this.

I suppose blktap3 once lived outside of the xen tree and this (and the
configurey) is a hangover from that. But we should clean it up on its
way into the tree.

> diff --git a/tools/blktap2/drivers/Makefile b/tools/blktap2/drivers/Makefile
> --- a/tools/blktap2/drivers/Makefile
> +++ b/tools/blktap2/drivers/Makefile
> @@ -4,9 +4,9 @@
>  
>  LIBVHDDIR  = $(BLKTAP_ROOT)/vhd/lib
>  
> -IBIN       = tapdisk2 td-util tapdisk-client tapdisk-stream tapdisk-diff
> -QCOW_UTIL  = img2qcow qcow-create qcow2raw
> -LOCK_UTIL  = lock-util
> +IBIN       = tapdisk2 td-util2 tapdisk-client2 tapdisk-stream2 tapdisk-diff2
> +QCOW_UTIL  = img2qcow2 qcow-create2 qcow2raw2
> +LOCK_UTIL  = lock-util2

This series shouldn't be renaming bits of blktap2. In fact I think as a
general rule it should not be touching tools/blktap2 at all. If it does
it should be in a separate patch I think.

> diff --git a/tools/blktap3/Makefile.am b/tools/blktap3/Makefile.am
> new file mode 100644
> --- /dev/null
> +++ b/tools/blktap3/Makefile.am

This is adding a new dependency on automake which is something we'll
have to discuss.

As part of the initial push I think it would be less controversial to
simply use the existing Xen tools build infrastructure (such as it is).
I think the majority of this could be cribbed pretty directly from
blktap2 and other parts of the tools tree.

> diff --git a/tools/blktap3/README b/tools/blktap3/README
> new file mode 100644
> --- /dev/null
> +++ b/tools/blktap3/README

I think I mentioned this before but it looks like this document could do
with a pretty hefty update.

> diff --git a/tools/blktap3/control/tap-ctl-attach.c b/tools/blktap3/control/tap-ctl-attach.c
> new file mode 100644
> --- /dev/null
> +++ b/tools/blktap3/control/tap-ctl-attach.c
> @@ -0,0 +1,66 @@
> +/*
> + * Copyright (c) 2008, XenSource Inc.

You probably want to do an update of all these copyright headers.

> + * All rights reserved.
> + *
> + * Redistribution and use in source and binary forms, with or without
> + * modification, are permitted provided that the following conditions are met:
> + *     * Redistributions of source code must retain the above copyright
> + *       notice, this list of conditions and the following disclaimer.
> + *     * Redistributions in binary form must reproduce the above copyright
> + *       notice, this list of conditions and the following disclaimer in the
> + *       documentation and/or other materials provided with the distribution.
> + *     * Neither the name of XenSource Inc. nor the names of its contributors

And I suppose this ought to be updated too.

> + *       may be used to endorse or promote products derived from this software
> + *       without specific prior written permission.

The actual three-clause BSD says "The name of the author may not be used
to endorse or promote products derived from this software without
specific prior written permission."

This weird variant of the 3-clause BSD is something you might want to
discuss with your management to see if it can't be rationalised.

> + *
> + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER
> + * OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
> + * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
> + * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
> + * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
> + * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
> + * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
> + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */

I think it would be worthwhile to have a tools/blktap3/COPYING file to
clarify the licensing terms of blktap3 as a whole.

[...] I didn't look at the majority of the actual tools/blktap3 code.
There's quite a lot of it. I mentioned earlier that you might want to
consider dropping some of the optional components for the time being to
keep the initial upstreaming more manageable.

> diff --git a/tools/blktap3/drivers/td-rated.1.txt b/tools/blktap3/drivers/td-rated.1.txt
> new file mode 100644
> --- /dev/null
> +++ b/tools/blktap3/drivers/td-rated.1.txt

Is this a generated file? I didn't see the source but it'd be nice to
have e.g. the actual man page etc.

This made me grep for "doc", "man" and "txt" in the patch, which only
found this one file. Hopefully I just missed it all, or at least can we
expect that additional docs will be forthcoming in the future?

> diff --git a/tools/blktap3/include/blktap2.h b/tools/blktap3/include/blktap2.h
> new file mode 100644
> --- /dev/null
> +++ b/tools/blktap3/include/blktap2.h

s/2/3/ Or does this file belong at all? It seems to mostly relate to the
blktap2 kernel driver ioctl interface. Please can you kill all this
cruft before reposting.

> diff --git a/tools/blktap3/include/list.h b/tools/blktap3/include/list.h
> new file mode 100644
> --- /dev/null
> +++ b/tools/blktap3/include/list.h
> @@ -0,0 +1,149 @@
> +/*
> + * list.h
> + * 
> + * This is a subset of linux's list.h intended to be used in user-space.
> + * 
> + */

If this came from Linux then it is GPL licensed and must have a GPL
header on it.

The intention seems to be that blktap3 is BSD but this would make it
overall GPL. You could either relicense the whole thing as (L)GPL or
perhaps reimplement using the BSD-licensed list macros (see
tools/include/xen-external for the BSD macros which libxl and mini-os
use).

> diff --git a/tools/blktap3/xenio/blkif.h b/tools/blktap3/xenio/blkif.h
> new file mode 100644
> --- /dev/null
> +++ b/tools/blktap3/xenio/blkif.h

Given that this is in-tree you might perhaps want to use the in-tree
interface declarations from tools/include.

> diff --git a/tools/blktap3/xenio/list.h b/tools/blktap3/xenio/list.h
> new file mode 100644
> --- /dev/null
> +++ b/tools/blktap3/xenio/list.h
> @@ -0,0 +1,134 @@
> +/*
> + * list.h
> + * 
> + * This is a subset of linux's list.h intended to be used in user-space.
> + * 
> + */

Another duplicated copy of some GPL code.

Apart from the licensing things perhaps you could rationalise the number
of copies of things like this which you are introducing?

> diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
> --- a/tools/libxl/libxl.c
> +++ b/tools/libxl/libxl.c
> @@ -1171,6 +1171,8 @@

Can you add the following to your ~/.hgrc please:
        [diff]
        showfunc = True

This will inject the current function name into the hunk header which
makes review much easier.

>          disk->backend = LIBXL_DISK_BACKEND_TAP;
>      } else if (!strcmp(backend_type, "qdisk")) {
>          disk->backend = LIBXL_DISK_BACKEND_QDISK;
> +    } else if (!strcmp(backend_type, "xenio")) {
> +        disk->backend = LIBXL_DISK_BACKEND_XENIO;

I think you want to replace LIBXL_DISK_BACKEND_TAP rather than add a new
one. You could also steal the name if you like I reckon.
Cgo+IAo+ICAgICAgfSBlbHNlIHsKPiAgICAgICAgICBkaXNrLT5iYWNrZW5kID0gTElCWExfRElT
S19CQUNLRU5EX1VOS05PV047Cj4gICAgICB9CgoKPiBAQCAtMTk2MSw2ICsxOTgxLDcgQEAKPiAg
fQo+ICAKPiAgc3RhdGljIHZvaWQgbGlieGxfX2RldmljZV9kaXNrX2Zyb21feHNfYmUobGlieGxf
X2djICpnYywKPiArICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgeHNf
dHJhbnNhY3Rpb25fdCB0LAo+ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICBjb25zdCBjaGFyICpiZV9wYXRoLAo+ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICBsaWJ4bF9kZXZpY2VfZGlzayAqZGlzaykKPiAgewoKVGhpcyBzb3J0IG9m
IHRoaW5nIHNob3VsZCBiZSBkb25lIGFzIGEgc2VwYXJhdGUgcHJlLWN1cnNvciBwYXRjaC4KCgo+
IGRpZmYgLS1naXQgYS90b29scy9saWJ4bC9saWJ4bF90YXBkaXNrLmMgYi90b29scy9saWJ4bC9s
aWJ4bF90YXBkaXNrLmMKPiBuZXcgZmlsZSBtb2RlIDEwMDY0NAo+IC0tLSAvZGV2L251bGwKPiAr
KysgYi90b29scy9saWJ4bC9saWJ4bF90YXBkaXNrLmMKCklzIHRoaXMgYWN0dWFsbHkgYSBtb3Zl
IG9mIG9mIHRoZSBleGlzdGluZyBpYnhsX2Jsa3RhcD8gSSB0aGluayAiaGcgZGlmZgotZyIgd2ls
bCBjYXVzZSBpdCB0byB1c2UgZ2l0IHN0eWxlIHBhdGNoZXMgd2hpY2ggbWFrZSB0aGlzIGNsZWFy
ZXIuCgpBbHRob3VnaCBJIGRvbid0IHNlZSBsaWJ4bF9ibGt0YXAgZ2V0dGluZyByZW1vdmVkLCBz
byBwZXJoYXBzIG5vdD8gSQp0aG91Z2h0IEkgc2F3IHlvdSBjaGFuZ2luZyB0aGUgTWFrZWZpbGUg
YXMgaWYgeW91IHdlcmUgcmVuYW1uZyBhcyB3ZWxsLgoKUmVuYW1pbmcgc2hvdWxkIGdlbmVyYWxs
eSBiZSBkb25lIGFzIGEgc3RhbmRhbG9uZSBwYXRjaCB3aXRoIG5vCm5vbi1yZWxhdGVkIGNoYW5n
ZXMgaW4gdGhlbSwgdG8gbWFrZSB0aGVtIGVhaXNlciB0byByZXZpZXcuCgo+IEBAIC0wLDAgKzEs
MTYyIEBAClsuLi5dCj4gKyAgICAgICAgc3RydWN0IGxpc3RfaGVhZCBsaXN0Owo+ICsJdGFwX2xp
c3RfdCAqZW50cnksICpuZXh0X3Q7CgpTb21ldGhpbmcgb2RkIHdpdGggd2hpdGVzcGFjZSBoZXJl
LgoKPiArICAgICAgICBpbnQgcmV0ID0gLUVOT0VOVCwgZXJyOwo+ICsKPiArCWZwcmludGYoc3Rk
ZXJyLCAiYmxrdGFwX2ZpbmQoJXM6JXMpXG4iLCB0eXBlLCBwYXRoKTsKClBsZWFzZSBkcm9wIHRo
aXMgc29ydCBvZiBkZWJ1Zy4KCj4gKyAgICAgICAgSU5JVF9MSVNUX0hFQUQoJmxpc3QpOwo+ICsg
ICAgICAgIGVyciA9IHRhcF9jdGxfbGlzdCgmbGlzdCk7Cj4gKyAgICAgICAgaWYgKGVyciA8IDAp
Cj4gKyAgICAgICAgICAgICAgICByZXR1cm4gZXJyOwo+IFsuLi5dCj4gKy8vICAgICAgICB0YXBf
Y3RsX2xpc3RfZnJlZSgmbGlzdCk7CgpMZWFrPwoKCj4gY2hhciAqbGlieGxfX2Jsa3RhcF9kZXZw
YXRoKGxpYnhsX19nYyAqZ2MsCj4gKyAgICAgICAgICAgICAgICAgICAgICAgICAgICBjb25zdCBj
aGFyICpkaXNrLAo+ICsgICAgICAgICAgICAgICAgICAgICAgICAgICAgbGlieGxfZGlza19mb3Jt
YXQgZm9ybWF0KQo+ICt7Cj4gKyAgICBjb25zdCBjaGFyICp0eXBlOwo+ICsgICAgY2hhciAqcGFy
YW1zLCAqZGV2bmFtZSA9IE5VTEw7Cj4gKyAgICB0YXBfbGlzdF90IHRhcDsKPiArICAgIGludCBl
cnI7Cj4gKwo+ICsgICAgdHlwZSA9IGxpYnhsX19kZXZpY2VfZGlza19zdHJpbmdfb2ZfZm9ybWF0
KGZvcm1hdCk7Cj4gKyAgICBmcHJpbnRmKHN0ZGVyciwgImxpYnhsX19ibGt0YXBfZGV2cGF0aCgl
czolcylcbiIsIGRpc2ssIHR5cGUpOwo+ICsgICAgZXJyID0gYmxrdGFwX2ZpbmQodHlwZSwgZGlz
aywgJnRhcCk7Cj4gKyAgICBpZiAoZXJyID09IDApIHsKPiArICAgICAgICBkZXZuYW1lID0gbGli
eGxfX3NwcmludGYoZ2MsICIvZGV2L3hlbi9ibGt0YXAtMi90YXBkZXYlZCIsIHRhcC5taW5vcik7
CgpTdXJlbHkgbm90IGFueSBtb3JlPwoKPiArICAgICAgICBpZiAoZGV2bmFtZSkKPiArICAgICAg
ICAgICAgcmV0dXJuIGRldm5hbWU7Cj4gKyAgICB9Cj4gKwo+ICsgICAgcGFyYW1zID0gbGlieGxf
X3NwcmludGYoZ2MsICIlczolcyIsIHR5cGUsIGRpc2spOwo+ICsgICAgZXJyID0gdGFwX2N0bF9j
cmVhdGUocGFyYW1zLCAmZGV2bmFtZSwgMCwgLTEsIE5VTEwpOwo+ICsgICAgaWYgKCFlcnIpIHsK
PiArICAgICAgICBsaWJ4bF9fcHRyX2FkZChnYywgZGV2bmFtZSk7Cj4gKyAgICAgICAgcmV0dXJu
IGRldm5hbWU7Cj4gKyAgICB9Cj4gKwo+ICsgICAgcmV0dXJuIE5VTEw7Cj4gK30KClsuLi5dCj4g
ZGlmZiAtLWdpdCBhL3Rvb2xzL2xpYnhsL3hsX2NtZGltcGwuYyBiL3Rvb2xzL2xpYnhsL3hsX2Nt
ZGltcGwuYwo+IC0tLSBhL3Rvb2xzL2xpYnhsL3hsX2NtZGltcGwuYwo+ICsrKyBiL3Rvb2xzL2xp
YnhsL3hsX2NtZGltcGwuYwo+IEBAIC0xODYyLDcgKzE4NjIsNyBAQAo+ICAKPiAgICAgICAgICBj
aGlsZDEgPSB4bF9mb3JrKGNoaWxkX3dhaXRkYWVtb24pOwo+ICAgICAgICAgIGlmIChjaGlsZDEp
IHsKPiAtICAgICAgICAgICAgcHJpbnRmKCJEYWVtb24gcnVubmluZyB3aXRoIFBJRCAlZFxuIiwg
Y2hpbGQxKTsKPiArICAgICAgICAgICAgcHJpbnRmKCJEYWVtb24gcnVubmluZyB3aXRoIFBJRCAl
ZCBmb3IgZG9tYWluICVkXG4iLCBjaGlsZDEsIGRvbWlkKTsKClRoaXMgaXMgcHJvYmFibHkgYSB1
c2VmdWwgY2hhbmdlIGJ1dCBpdCBoYXMgbm90aGluZyBhdCBhbGwgdG8gZG8gd2l0aApibGt0YXAz
LCBwbGVhc2Ugc2VwYXJhdGUgYWxsIHRoaXMgc29ydCBvZiBzdHVmZiBvdXQuCgo+ICAKPiAgICAg
ICAgICAgICAgZm9yICg7Oykgewo+ICAgICAgICAgICAgICAgICAgZ290X2NoaWxkID0geGxfd2Fp
dHBpZChjaGlsZF93YWl0ZGFlbW9uLCAmc3RhdHVzLCAwKTsKCgpfX19fX19fX19fX19fX19fX19f
X19fX19fX19fX19fX19fX19fX19fX19fX19fXwpYZW4tZGV2ZWwgbWFpbGluZyBsaXN0Clhlbi1k
ZXZlbEBsaXN0cy54ZW4ub3JnCmh0dHA6Ly9saXN0cy54ZW4ub3JnL3hlbi1kZXZlbAo=

From xen-devel-bounces@lists.xen.org Thu Aug 16 16:09:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 16:09:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T22di-0005bh-G0; Thu, 16 Aug 2012 16:09:42 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T22dh-0005bc-G3
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 16:09:41 +0000
Received: from [85.158.143.99:58409] by server-2.bemta-4.messagelabs.com id
	47/89-31966-44B1D205; Thu, 16 Aug 2012 16:09:40 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-216.messagelabs.com!1345133378!21016755!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkyODE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19322 invoked from network); 16 Aug 2012 16:09:38 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-11.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 16:09:38 -0000
X-IronPort-AV: E=Sophos;i="4.77,778,1336348800"; d="scan'208";a="14045341"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 16:09:38 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 16 Aug 2012 17:09:37 +0100
Message-ID: <1345133376.30865.45.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Thanos Makatos <thanos.makatos@citrix.com>
Date: Thu, 16 Aug 2012 17:09:36 +0100
In-Reply-To: <4B45B535F7F6BE4CB1C044ED5115CDDE0107F6C1D44C@LONPMAILBOX01.citrite.net>
References: <4B45B535F7F6BE4CB1C044ED5115CDDE0107F6C1D44C@LONPMAILBOX01.citrite.net>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] RFC: blktap3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-09 at 15:03 +0100, Thanos Makatos wrote:
> I’d like to introduce blktap3: essentially blktap2 without the need of
> blkback. This has been developed by Santosh Jodh, and I’ll maintain
> it.

I think you are working on reposting in a more manageable form but
here are a few things which I noticed on a top level scroll through. (I
might be repeating myself occasionally from the quick comments I made
earlier, sorry)

> diff --git a/tools/Makefile b/tools/Makefile
> --- a/tools/Makefile
> +++ b/tools/Makefile
> @@ -201,3 +203,20 @@
>  
>  subdir-distclean-firmware: .phony
>  	$(MAKE) -C firmware distclean
> +
> +subdir-all-blktap3 subdir-install-blktap3: .phony
> +	source=.; \
> +	cd blktap3; \
> +	./autogen.sh; \

If anything this should be called from the top-level ./autogen.sh and
not here. We shouldn't expect end users to have autoconf available.

> +	./configure \

I think autoconf has a construct which can cause configure to call other
sub-configures in subdirs. If I'm right then it would be better to use
this instead of calling it here.
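(The autoconf construct being recalled here is presumably AC_CONFIG_SUBDIRS; a hedged sketch of what the top-level configure.ac might gain, assuming blktap3 kept its own configure — the path is illustrative:)

```m4
dnl Illustrative only: in the top-level configure.ac, this makes the
dnl generated configure script run the sub-directory's own configure,
dnl passing down the same arguments, instead of invoking it by hand.
AC_CONFIG_SUBDIRS([tools/blktap3])
```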

However I think that the real correct answer is that blktap3 shouldn't
have its own configure anyway but should simply add the tests which it
needs to the global tools level one and use the result like everyone
else.

> +	CFLAGS="-I$(XEN_ROOT)/tools/include \
> +		-I$(XEN_ROOT)/tools/libxc \
> +		-I$(XEN_ROOT)/tools/xenstore" \
> +	LDFLAGS="-L$(XEN_ROOT)/tools/xenstore \
> +		 -L$(XEN_ROOT)/tools/libxc"; \

Your Makefiles should start with

        XEN_ROOT = $(CURDIR)/../..
        include $(XEN_ROOT)/tools/Rules.mk

And then make use of the variables defined in Rules.mk, e.g.
CFLAGS_libxenctrl, LIBS_libxenctrl etc., rather than doing this.

I suppose blktap3 once lived outside of the xen tree and this (and the
configurey) is a hangover from that. But we should clean it up on its
way into the tree.

> diff --git a/tools/blktap2/drivers/Makefile b/tools/blktap2/drivers/Makefile
> --- a/tools/blktap2/drivers/Makefile
> +++ b/tools/blktap2/drivers/Makefile
> @@ -4,9 +4,9 @@
>  
>  LIBVHDDIR  = $(BLKTAP_ROOT)/vhd/lib
>  
> -IBIN       = tapdisk2 td-util tapdisk-client tapdisk-stream tapdisk-diff
> -QCOW_UTIL  = img2qcow qcow-create qcow2raw
> -LOCK_UTIL  = lock-util
> +IBIN       = tapdisk2 td-util2 tapdisk-client2 tapdisk-stream2 tapdisk-diff2
> +QCOW_UTIL  = img2qcow2 qcow-create2 qcow2raw2
> +LOCK_UTIL  = lock-util2

This series shouldn't be renaming bits of blktap2. In fact I think as a
general rule it should not be touching tools/blktap2 at all. If it does
it should be in a separate patch I think.

> diff --git a/tools/blktap3/Makefile.am b/tools/blktap3/Makefile.am
> new file mode 100644
> --- /dev/null
> +++ b/tools/blktap3/Makefile.am

This is adding a new dependency on automake which is something we'll
have to discuss.

As part of the initial push I think it would be less controversial to
simply use the existing Xen tools build infrastructure (such as it is).
I think the majority of this could be cribbed pretty directly from
blktap2 and other parts of the tools tree.

> diff --git a/tools/blktap3/README b/tools/blktap3/README
> new file mode 100644
> --- /dev/null
> +++ b/tools/blktap3/README

I think I mentioned this before but it looks like this document could do
with a pretty hefty update.

> diff --git a/tools/blktap3/control/tap-ctl-attach.c b/tools/blktap3/control/tap-ctl-attach.c
> new file mode 100644
> --- /dev/null
> +++ b/tools/blktap3/control/tap-ctl-attach.c
> @@ -0,0 +1,66 @@
> +/*
> + * Copyright (c) 2008, XenSource Inc.

You probably want to do an update of all these copyright headers.


> + * All rights reserved.
> + *
> + * Redistribution and use in source and binary forms, with or without
> + * modification, are permitted provided that the following conditions are met:
> + *     * Redistributions of source code must retain the above copyright
> + *       notice, this list of conditions and the following disclaimer.
> + *     * Redistributions in binary form must reproduce the above copyright
> + *       notice, this list of conditions and the following disclaimer in the
> + *       documentation and/or other materials provided with the distribution.
> + *     * Neither the name of XenSource Inc. nor the names of its contributors

And I suppose this ought to be updated too.

> + *       may be used to endorse or promote products derived from this software
> + *       without specific prior written permission.


The actual three clause BSD says "The name of the author may not be used
to endorse or promote products derived from this software without
specific prior written permission."

This weird variant of the 3-clause BSD is something you might want to
discuss with your management to see if it can't be rationalised.

> + *
> + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER
> + * OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
> + * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
> + * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
> + * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
> + * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
> + * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
> + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */

I think it would be worthwhile to have a tools/blktap3/COPYING file to
clarify the licensing terms of blktap3 as a whole.

[...] I didn't look at the majority of the actual tools/blktap3 code.
There's quite a lot of it. I mentioned earlier that you might want to
consider dropping some of the optional components for the time being to
keep the initial upstreaming more manageable.

> diff --git a/tools/blktap3/drivers/td-rated.1.txt b/tools/blktap3/drivers/td-rated.1.txt
> new file mode 100644
> --- /dev/null
> +++ b/tools/blktap3/drivers/td-rated.1.txt

Is this a generated file? I didn't see the source but it'd be nice to
have e.g. the actual man page etc.

This made me grep for "doc", "man" and "txt" in the patch, which only
found this one file. Hopefully I just missed it all, or at least can we
expect that additional docs will be forthcoming in the future?


> diff --git a/tools/blktap3/include/blktap2.h b/tools/blktap3/include/blktap2.h
> new file mode 100644
> --- /dev/null
> +++ b/tools/blktap3/include/blktap2.h

s/2/3/ Or does this file belong at all? It seems to mostly relate to the
blktap2 kernel driver ioctl interface. Please can you kill all this
cruft before reposting.

> diff --git a/tools/blktap3/include/list.h b/tools/blktap3/include/list.h
> new file mode 100644
> --- /dev/null
> +++ b/tools/blktap3/include/list.h
> @@ -0,0 +1,149 @@
> +/*
> + * list.h
> + * 
> + * This is a subset of linux's list.h intended to be used in user-space.
> + * 
> + */

If this came from Linux then it is GPL licensed and must have a GPL
header on it.

The intention seems to be that blktap3 is BSD but this would make it
overall GPL. You could either relicense the whole thing as (L)GPL or
perhaps reimplement using the BSD licensed list macros (see
tools/include/xen-external for the BSD macros which libxl and mini-os
use).

> diff --git a/tools/blktap3/xenio/blkif.h b/tools/blktap3/xenio/blkif.h
> new file mode 100644
> --- /dev/null
> +++ b/tools/blktap3/xenio/blkif.h

Given that this is in-tree you might perhaps want to use the in-tree
interface declarations from tools/include.

> diff --git a/tools/blktap3/xenio/list.h b/tools/blktap3/xenio/list.h
> new file mode 100644
> --- /dev/null
> +++ b/tools/blktap3/xenio/list.h
> @@ -0,0 +1,134 @@
> +/*
> + * list.h
> + * 
> + * This is a subset of linux's list.h intended to be used in user-space.
> + * 
> + */

Another duplicated copy of some GPL code.

Apart from the licensing things, perhaps you could rationalise the number
of copies of things like this which you are introducing?


> diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
> --- a/tools/libxl/libxl.c
> +++ b/tools/libxl/libxl.c
> @@ -1171,6 +1171,8 @@

Can you add the following to your ~/.hgrc please:
        [diff]
        showfunc = True

This will inject the current function name into the hunk header which
makes review much easier.

>          disk->backend = LIBXL_DISK_BACKEND_TAP;
>      } else if (!strcmp(backend_type, "qdisk")) {
>          disk->backend = LIBXL_DISK_BACKEND_QDISK;
> +    } else if (!strcmp(backend_type, "xenio")) {
> +        disk->backend = LIBXL_DISK_BACKEND_XENIO;

I think you want to replace LIBXL_DISK_BACKEND_TAP rather than add a new
one. You could also steal the name if you like, I reckon.

> 
>      } else {
>          disk->backend = LIBXL_DISK_BACKEND_UNKNOWN;
>      }


> @@ -1961,6 +1981,7 @@
>  }
>  
>  static void libxl__device_disk_from_xs_be(libxl__gc *gc,
> +                                          xs_transaction_t t,
>                                           const char *be_path,
>                                           libxl_device_disk *disk)
>  {

This sort of thing should be done as a separate precursor patch.


> diff --git a/tools/libxl/libxl_tapdisk.c b/tools/libxl/libxl_tapdisk.c
> new file mode 100644
> --- /dev/null
> +++ b/tools/libxl/libxl_tapdisk.c

Is this actually a move of the existing libxl_blktap? I think "hg diff
-g" will cause it to use git-style patches, which make this clearer.

Although I don't see libxl_blktap getting removed, so perhaps not? I
thought I saw you changing the Makefile as if you were renaming as well.

Renaming should generally be done as a standalone patch with no
non-related changes in it, to make it easier to review.

> @@ -0,0 +1,162 @@
[...]
> +        struct list_head list;
> +	tap_list_t *entry, *next_t;

Something odd with whitespace here.

> +        int ret = -ENOENT, err;
> +
> +	fprintf(stderr, "blktap_find(%s:%s)\n", type, path);

Please drop this sort of debug.

> +        INIT_LIST_HEAD(&list);
> +        err = tap_ctl_list(&list);
> +        if (err < 0)
> +                return err;
> [...]
> +//        tap_ctl_list_free(&list);

Leak?


> char *libxl__blktap_devpath(libxl__gc *gc,
> +                            const char *disk,
> +                            libxl_disk_format format)
> +{
> +    const char *type;
> +    char *params, *devname = NULL;
> +    tap_list_t tap;
> +    int err;
> +
> +    type = libxl__device_disk_string_of_format(format);
> +    fprintf(stderr, "libxl__blktap_devpath(%s:%s)\n", disk, type);
> +    err = blktap_find(type, disk, &tap);
> +    if (err == 0) {
> +        devname = libxl__sprintf(gc, "/dev/xen/blktap-2/tapdev%d", tap.minor);

Surely not any more?

> +        if (devname)
> +            return devname;
> +    }
> +
> +    params = libxl__sprintf(gc, "%s:%s", type, disk);
> +    err = tap_ctl_create(params, &devname, 0, -1, NULL);
> +    if (!err) {
> +        libxl__ptr_add(gc, devname);
> +        return devname;
> +    }
> +
> +    return NULL;
> +}

[...]
> diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
> --- a/tools/libxl/xl_cmdimpl.c
> +++ b/tools/libxl/xl_cmdimpl.c
> @@ -1862,7 +1862,7 @@
>  
>          child1 = xl_fork(child_waitdaemon);
>          if (child1) {
> -            printf("Daemon running with PID %d\n", child1);
> +            printf("Daemon running with PID %d for domain %d\n", child1, domid);

This is probably a useful change but it has nothing at all to do with
blktap3; please separate all this sort of stuff out.

>  
>              for (;;) {
>                  got_child = xl_waitpid(child_waitdaemon, &status, 0);


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 16:10:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 16:10:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T22eL-0005eq-VO; Thu, 16 Aug 2012 16:10:21 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T22eK-0005eW-FL
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 16:10:20 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1345133407!2747089!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkyODE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27487 invoked from network); 16 Aug 2012 16:10:07 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 16:10:07 -0000
X-IronPort-AV: E=Sophos;i="4.77,778,1336348800"; d="scan'208";a="14045349"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 16:10:07 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 16 Aug 2012 17:10:06 +0100
Message-ID: <1345133405.30865.46.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: M A Young <m.a.young@durham.ac.uk>
Date: Thu, 16 Aug 2012 17:10:05 +0100
In-Reply-To: <alpine.DEB.2.00.1208121853070.7313@vega-a.dur.ac.uk>
References: <alpine.DEB.2.00.1207241956230.14506@vega-c.dur.ac.uk>
	<20120724193604.GB29124@phenom.dumpdata.com>
	<1343205815.18971.43.camel@zakaz.uk.xensource.com>
	<alpine.DEB.2.00.1208121853070.7313@vega-a.dur.ac.uk>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [PATCH] Re:  remove dependency on PyXML from xen?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, 2012-08-12 at 19:03 +0100, M A Young wrote:
> On Wed, 25 Jul 2012, Ian Campbell wrote:
> 
> > On Tue, 2012-07-24 at 20:36 +0100, Konrad Rzeszutek Wilk wrote:
> >> On Tue, Jul 24, 2012 at 08:04:30PM +0100, M A Young wrote:
> >>> Fedora is keen to stop using PyXML and I have been sent a bug report
> >>> https://bugzilla.redhat.com/show_bug.cgi?id=842843 which includes a
> >>> patch to remove the use of XMLPrettyPrint in
> >>> tools/python/xen/xm/create.py . I am going to try this in the Fedora
> >>> build, but I was wondering if it makes sense to do this in the
> >>> official xen releases as well.
> >>
> >> Yes.
> >
> > Agreed.
> >
> > According to the bug we've already moved from PyXML to lxml
> > (22235:b8cc53d22545 from the looks of it) and simply missed this one use
> > of PyXML.
> 
> Here is the patch from that bug (trivially) rebased to 4.2.

Seems good to me; although xend is deprecated, this patch is pretty
simple and the goal a useful one.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 16:10:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 16:10:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T22eL-0005eq-VO; Thu, 16 Aug 2012 16:10:21 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T22eK-0005eW-FL
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 16:10:20 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1345133407!2747089!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkyODE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27487 invoked from network); 16 Aug 2012 16:10:07 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 16:10:07 -0000
X-IronPort-AV: E=Sophos;i="4.77,778,1336348800"; d="scan'208";a="14045349"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 16:10:07 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 16 Aug 2012 17:10:06 +0100
Message-ID: <1345133405.30865.46.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: M A Young <m.a.young@durham.ac.uk>
Date: Thu, 16 Aug 2012 17:10:05 +0100
In-Reply-To: <alpine.DEB.2.00.1208121853070.7313@vega-a.dur.ac.uk>
References: <alpine.DEB.2.00.1207241956230.14506@vega-c.dur.ac.uk>
	<20120724193604.GB29124@phenom.dumpdata.com>
	<1343205815.18971.43.camel@zakaz.uk.xensource.com>
	<alpine.DEB.2.00.1208121853070.7313@vega-a.dur.ac.uk>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [PATCH] Re:  remove dependency on PyXML from xen?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, 2012-08-12 at 19:03 +0100, M A Young wrote:
> On Wed, 25 Jul 2012, Ian Campbell wrote:
> 
> > On Tue, 2012-07-24 at 20:36 +0100, Konrad Rzeszutek Wilk wrote:
> >> On Tue, Jul 24, 2012 at 08:04:30PM +0100, M A Young wrote:
> >>> Fedora is keen to stop using PyXML and I have been sent a bug report
> >>> https://bugzilla.redhat.com/show_bug.cgi?id=842843 which includes a
> >>> patch to remove the use of XMLPrettyPrint in
> >>> tools/python/xen/xm/create.py . I am going to try this in the Fedora
> >>> build, but I was wondering if it makes sense to do this in the
> >>> official xen releases as well.
> >>
> >> Yes.
> >
> > Agreed.
> >
> > According to the bug we've already moved from PyXML to lxml
> > (22235:b8cc53d22545 from the looks of it) and simply missed this one use
> > of PyXML.
> 
> Here is the patch from that bug (trivially) rebased to 4.2.

Seems good to me; although xend is deprecated, this patch is pretty
simple and the goal a useful one.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 16:11:07 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 16:11:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T22ey-0005k9-D9; Thu, 16 Aug 2012 16:11:00 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T22ew-0005jp-Hs
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 16:10:58 +0000
Received: from [85.158.138.51:11268] by server-10.bemta-3.messagelabs.com id
	B6/F6-20518-19B1D205; Thu, 16 Aug 2012 16:10:57 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-13.tower-174.messagelabs.com!1345133456!9970869!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11783 invoked from network); 16 Aug 2012 16:10:56 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-13.tower-174.messagelabs.com with SMTP;
	16 Aug 2012 16:10:56 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 16 Aug 2012 17:10:56 +0100
Message-Id: <502D37D702000078000959F7@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Thu, 16 Aug 2012 17:11:35 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Stefano Stabellini" <stefano.stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1208161523420.4850@kaball.uk.xensource.com>
	<1345128612-10323-4-git-send-email-stefano.stabellini@eu.citrix.com>
	<502D33B8020000780009596B@nat28.tlf.novell.com>
In-Reply-To: <502D33B8020000780009596B@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: tim@xen.org, Ian.Campbell@citrix.com, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v3 4/6] xen: introduce XEN_GUEST_HANDLE_PARAM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 16.08.12 at 17:54, "Jan Beulich" <JBeulich@suse.com> wrote:
> Seeing the patch I btw realized that there's no easy way to
> avoid having the type as a second argument in the conversion
> macros. Nevertheless I still don't like the explicitly specified type
> there.

Btw - on the architecture(s) where the two handles are identical
I would prefer you to make the conversion functions trivial (and
thus avoid making use of the "type" parameter), allowing the
type checking that you currently circumvent to occur.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 16:13:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 16:13:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T22hK-00069y-1l; Thu, 16 Aug 2012 16:13:26 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T22hH-00067j-Dz
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 16:13:23 +0000
Received: from [85.158.138.51:25201] by server-7.bemta-3.messagelabs.com id
	3D/0B-01906-22C1D205; Thu, 16 Aug 2012 16:13:22 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-13.tower-174.messagelabs.com!1345133599!9971322!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjk0MzAx\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22901 invoked from network); 16 Aug 2012 16:13:20 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-13.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 16 Aug 2012 16:13:20 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7GGDH7v007924
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 16 Aug 2012 16:13:18 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7GGDHhd014619
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 16 Aug 2012 16:13:17 GMT
Received: from abhmt104.oracle.com (abhmt104.oracle.com [141.146.116.56])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7GGDHd6011297; Thu, 16 Aug 2012 11:13:17 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 16 Aug 2012 09:13:17 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id D3678402C0; Thu, 16 Aug 2012 12:03:30 -0400 (EDT)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xensource.com
Date: Thu, 16 Aug 2012 12:03:19 -0400
Message-Id: <1345133009-21941-2-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.7.7.6
In-Reply-To: <1345133009-21941-1-git-send-email-konrad.wilk@oracle.com>
References: <1345133009-21941-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH 01/11] xen/p2m: Fix the comment describing the
	P2M tree.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

It mixed up the p2m_mid_missing with p2m_missing. Also
remove some extra spaces.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/xen/p2m.c |   14 +++++++-------
 1 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
index 64effdc..e4adbfb 100644
--- a/arch/x86/xen/p2m.c
+++ b/arch/x86/xen/p2m.c
@@ -22,7 +22,7 @@
  *
  * P2M_PER_PAGE depends on the architecture, as a mfn is always
  * unsigned long (8 bytes on 64-bit, 4 bytes on 32), leading to
- * 512 and 1024 entries respectively. 
+ * 512 and 1024 entries respectively.
  *
  * In short, these structures contain the Machine Frame Number (MFN) of the PFN.
  *
@@ -139,11 +139,11 @@
  *      /    | ~0, ~0, ....  |
  *     |     \---------------/
  *     |
- *     p2m_missing             p2m_missing
- * /------------------\     /------------\
- * | [p2m_mid_missing]+---->| ~0, ~0, ~0 |
- * | [p2m_mid_missing]+---->| ..., ~0    |
- * \------------------/     \------------/
+ *   p2m_mid_missing           p2m_missing
+ * /-----------------\     /------------\
+ * | [p2m_missing]   +---->| ~0, ~0, ~0 |
+ * | [p2m_missing]   +---->| ..., ~0    |
+ * \-----------------/     \------------/
  *
  * where ~0 is INVALID_P2M_ENTRY. IDENTITY is (PFN | IDENTITY_BIT)
  */
@@ -423,7 +423,7 @@ static void free_p2m_page(void *p)
 	free_page((unsigned long)p);
 }
 
-/* 
+/*
  * Fully allocate the p2m structure for a given pfn.  We need to check
  * that both the top and mid levels are allocated, and make sure the
  * parallel mfn tree is kept in sync.  We may race with other cpus, so
-- 
1.7.7.6


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 16:13:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 16:13:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T22hG-00067k-WA; Thu, 16 Aug 2012 16:13:22 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T22hF-00067L-S4
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 16:13:22 +0000
Received: from [85.158.143.35:42112] by server-2.bemta-4.messagelabs.com id
	57/1E-31966-12C1D205; Thu, 16 Aug 2012 16:13:21 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1345133599!14478509!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjk0MzAx\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23684 invoked from network); 16 Aug 2012 16:13:20 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-16.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 16 Aug 2012 16:13:20 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7GGDIhG007928
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 16 Aug 2012 16:13:18 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7GGDHro000530
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 16 Aug 2012 16:13:18 GMT
Received: from abhmt110.oracle.com (abhmt110.oracle.com [141.146.116.62])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7GGDH06006355; Thu, 16 Aug 2012 11:13:17 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 16 Aug 2012 09:13:17 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id ED6994031E; Thu, 16 Aug 2012 12:03:30 -0400 (EDT)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xensource.com
Date: Thu, 16 Aug 2012 12:03:22 -0400
Message-Id: <1345133009-21941-5-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.7.7.6
In-Reply-To: <1345133009-21941-1-git-send-email-konrad.wilk@oracle.com>
References: <1345133009-21941-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH 04/11] xen/mmu: Provide comments describing the
	_ka and _va aliasing issue
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Namely, the level2_kernel_pgt (__ka virtual addresses)
and level2_ident_pgt (__va virtual addresses) contain the same
PMD entries. So if you modify a PTE in __ka, it will be reflected
in __va (and vice versa).

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/xen/mmu.c |   17 +++++++++++++++++
 1 files changed, 17 insertions(+), 0 deletions(-)

diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index 4ac21a4..6ba6100 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -1734,19 +1734,36 @@ void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
 	init_level4_pgt[0] = __pgd(0);
 
 	/* Pre-constructed entries are in pfn, so convert to mfn */
+	/* L4[272] -> level3_ident_pgt
+	 * L4[511] -> level3_kernel_pgt */
 	convert_pfn_mfn(init_level4_pgt);
+
+	/* L3_i[0] -> level2_ident_pgt */
 	convert_pfn_mfn(level3_ident_pgt);
+	/* L3_k[510] -> level2_kernel_pgt
+	 * L3_k[511] -> level2_fixmap_pgt */
 	convert_pfn_mfn(level3_kernel_pgt);
 
+	/* We get [511][511] and have Xen's version of level2_kernel_pgt */
 	l3 = m2v(pgd[pgd_index(__START_KERNEL_map)].pgd);
 	l2 = m2v(l3[pud_index(__START_KERNEL_map)].pud);
 
+	/* Graft it onto L4[272][0]. Note that we are creating an aliasing
+	 * problem: both L4[272][0] and L4[511][511] have entries that point to
+	 * the same L2 (PMD) tables. Meaning that if you modify it in __va space
+	 * it will also be modified in the __ka space! (But if you just
+	 * modify the PMD table to point to other PTE's or none, then you
+	 * are OK - which is what cleanup_highmap does) */
 	memcpy(level2_ident_pgt, l2, sizeof(pmd_t) * PTRS_PER_PMD);
+	/* Graft it onto L4[511][511] */
 	memcpy(level2_kernel_pgt, l2, sizeof(pmd_t) * PTRS_PER_PMD);
 
+	/* Get [511][510] and graft that in level2_fixmap_pgt */
 	l3 = m2v(pgd[pgd_index(__START_KERNEL_map + PMD_SIZE)].pgd);
 	l2 = m2v(l3[pud_index(__START_KERNEL_map + PMD_SIZE)].pud);
 	memcpy(level2_fixmap_pgt, l2, sizeof(pmd_t) * PTRS_PER_PMD);
+	/* Note that we don't do anything with level1_fixmap_pgt, which
+	 * we don't need. */
 
 	/* Set up identity map */
 	xen_map_identity_early(level2_ident_pgt, max_pfn);
-- 
1.7.7.6


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 16:13:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 16:13:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T22hI-00069A-Rw; Thu, 16 Aug 2012 16:13:24 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T22hG-00067T-Oy
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 16:13:22 +0000
Received: from [85.158.139.83:3199] by server-12.bemta-5.messagelabs.com id
	15/5E-22359-22C1D205; Thu, 16 Aug 2012 16:13:22 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-12.tower-182.messagelabs.com!1345133599!28501761!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDcwMTY5NA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18157 invoked from network); 16 Aug 2012 16:13:20 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-12.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 16 Aug 2012 16:13:20 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7GGDIKu027552
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 16 Aug 2012 16:13:19 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7GGDHDC000527
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 16 Aug 2012 16:13:18 GMT
Received: from abhmt116.oracle.com (abhmt116.oracle.com [141.146.116.68])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7GGDH4H027782; Thu, 16 Aug 2012 11:13:17 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 16 Aug 2012 09:13:17 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id DA338402EF; Thu, 16 Aug 2012 12:03:30 -0400 (EDT)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xensource.com
Date: Thu, 16 Aug 2012 12:03:20 -0400
Message-Id: <1345133009-21941-3-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.7.7.6
In-Reply-To: <1345133009-21941-1-git-send-email-konrad.wilk@oracle.com>
References: <1345133009-21941-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH 02/11] xen/x86: Use memblock_reserve for
	sensitive areas.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

instead of one big memblock_reserve. This way we can be more
selective in freeing regions (and it also makes it easier
to understand what is where).

[v1: Move the auto_translate_physmap to proper line]
[v2: Per Stefano suggestion add more comments]
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/xen/enlighten.c |   48 ++++++++++++++++++++++++++++++++++++++++++++++
 arch/x86/xen/p2m.c       |    5 ++++
 arch/x86/xen/setup.c     |    9 --------
 3 files changed, 53 insertions(+), 9 deletions(-)

diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index ff962d4..e532eb5 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -998,7 +998,54 @@ static int xen_write_msr_safe(unsigned int msr, unsigned low, unsigned high)
 
 	return ret;
 }
+/*
+ * If the MFN is not in the m2p (provided to us by the hypervisor) this
+ * function won't do anything. In practice this means that the XenBus
+ * MFN won't be available for the initial domain. */
+static void __init xen_reserve_mfn(unsigned long mfn)
+{
+	unsigned long pfn;
+
+	if (!mfn)
+		return;
+	pfn = mfn_to_pfn(mfn);
+	if (phys_to_machine_mapping_valid(pfn))
+		memblock_reserve(PFN_PHYS(pfn), PAGE_SIZE);
+}
+static void __init xen_reserve_internals(void)
+{
+	unsigned long size;
+
+	if (!xen_pv_domain())
+		return;
+
+	/* xen_start_info does not exist in the M2P, hence can't use
+	 * xen_reserve_mfn. */
+	memblock_reserve(__pa(xen_start_info), PAGE_SIZE);
+
+	xen_reserve_mfn(PFN_DOWN(xen_start_info->shared_info));
+	xen_reserve_mfn(xen_start_info->store_mfn);
 
+	if (!xen_initial_domain())
+		xen_reserve_mfn(xen_start_info->console.domU.mfn);
+
+	if (xen_feature(XENFEAT_auto_translated_physmap))
+		return;
+
+	/*
+	 * Align up to compensate for p2m_page pointing to an array that
+	 * can be partially filled (see xen_build_dynamic_phys_to_machine).
+	 */
+
+	size = PAGE_ALIGN(xen_start_info->nr_pages * sizeof(unsigned long));
+
+	/* We could use xen_reserve_mfn here, but we would end up looping
+	 * quite a lot (and calling memblock_reserve for each page), so let's
+	 * take the easy way and reserve it wholesale. */
+	memblock_reserve(__pa(xen_start_info->mfn_list), size);
+
+	/* The pagetables are reserved in mmu.c */
+}
 void xen_setup_shared_info(void)
 {
 	if (!xen_feature(XENFEAT_auto_translated_physmap)) {
@@ -1362,6 +1409,7 @@ asmlinkage void __init xen_start_kernel(void)
 	xen_raw_console_write("mapping kernel into physical memory\n");
 	pgd = xen_setup_kernel_pagetable(pgd, xen_start_info->nr_pages);
 
+	xen_reserve_internals();
 	/* Allocate and initialize top and mid mfn levels for p2m structure */
 	xen_build_mfn_list_list();
 
diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
index e4adbfb..6a2bfa4 100644
--- a/arch/x86/xen/p2m.c
+++ b/arch/x86/xen/p2m.c
@@ -388,6 +388,11 @@ void __init xen_build_dynamic_phys_to_machine(void)
 	}
 
 	m2p_override_init();
+
+	/* NOTE: We cannot call memblock_reserve here for the mfn_list, as there
+	 * aren't enough pieces in place to make it work (for one, we are still
+	 * using the Xen-provided pagetable). Do it later in xen_reserve_internals.
+	 */
 }
 
 unsigned long get_phys_to_machine(unsigned long pfn)
diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
index a4790bf..9efca75 100644
--- a/arch/x86/xen/setup.c
+++ b/arch/x86/xen/setup.c
@@ -424,15 +424,6 @@ char * __init xen_memory_setup(void)
 	e820_add_region(ISA_START_ADDRESS, ISA_END_ADDRESS - ISA_START_ADDRESS,
 			E820_RESERVED);
 
-	/*
-	 * Reserve Xen bits:
-	 *  - mfn_list
-	 *  - xen_start_info
-	 * See comment above "struct start_info" in <xen/interface/xen.h>
-	 */
-	memblock_reserve(__pa(xen_start_info->mfn_list),
-			 xen_start_info->pt_base - xen_start_info->mfn_list);
-
 	sanitize_e820_map(e820.map, ARRAY_SIZE(e820.map), &e820.nr_map);
 
 	return "Xen";
-- 
1.7.7.6


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 16:13:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 16:13:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T22hJ-00069M-7m; Thu, 16 Aug 2012 16:13:25 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T22hG-00067V-US
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 16:13:23 +0000
Received: from [85.158.138.51:15893] by server-9.bemta-3.messagelabs.com id
	54/77-23952-22C1D205; Thu, 16 Aug 2012 16:13:22 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-12.tower-174.messagelabs.com!1345133599!20717328!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjk0MzAx\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7359 invoked from network); 16 Aug 2012 16:13:21 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-12.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 16 Aug 2012 16:13:21 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7GGDIjk007938
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 16 Aug 2012 16:13:19 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7GGDIjH007173
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 16 Aug 2012 16:13:18 GMT
Received: from abhmt118.oracle.com (abhmt118.oracle.com [141.146.116.70])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7GGDIq7027803; Thu, 16 Aug 2012 11:13:18 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 16 Aug 2012 09:13:17 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 421BB40364; Thu, 16 Aug 2012 12:03:31 -0400 (EDT)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xensource.com
Date: Thu, 16 Aug 2012 12:03:29 -0400
Message-Id: <1345133009-21941-12-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.7.7.6
In-Reply-To: <1345133009-21941-1-git-send-email-konrad.wilk@oracle.com>
References: <1345133009-21941-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH 11/11] xen/mmu: Release just the MFN list,
	not MFN list and part of pagetables.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

We call memblock_free for [start of mfn list] -> [PMD-aligned end
of mfn list] instead of [start of mfn list] -> [page-aligned end of mfn list].

This has the disastrous effect that if at bootup the end of the mfn_list is
not PMD aligned, we end up returning to memblock parts of the region
past the mfn_list array. Those parts are the PTE tables, so we see
this at bootup:

Write protecting the kernel read-only data: 10240k
Freeing unused kernel memory: 1860k freed
Freeing unused kernel memory: 200k freed
(XEN) mm.c:2429:d0 Bad type (saw 1400000000000002 != exp 7000000000000000) for mfn 116a80 (pfn 14e26)
...
(XEN) mm.c:908:d0 Error getting mfn 116a83 (pfn 14e2a) from L1 entry 8000000116a83067 for l1e_owner=0, pg_owner=0
(XEN) mm.c:908:d0 Error getting mfn 4040 (pfn 5555555555555555) from L1 entry 0000000004040601 for l1e_owner=0, pg_owner=0
.. and so on.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/xen/mmu.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index 5a880b8..6019c22 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -1227,7 +1227,6 @@ static void __init xen_pagetable_setup_done(pgd_t *base)
 			/* We should be in __ka space. */
 			BUG_ON(xen_start_info->mfn_list < __START_KERNEL_map);
 			addr = xen_start_info->mfn_list;
-			size = PAGE_ALIGN(xen_start_info->nr_pages * sizeof(unsigned long));
 			/* We round up to the PMD, which means that if anybody at this stage is
 			 * using the __ka address of xen_start_info or xen_start_info->shared_info
 			 * they are going to crash. Fortunately we have already revectored
@@ -1235,6 +1234,7 @@ static void __init xen_pagetable_setup_done(pgd_t *base)
 			size = roundup(size, PMD_SIZE);
 			xen_cleanhighmap(addr, addr + size);
 
+			size = PAGE_ALIGN(xen_start_info->nr_pages * sizeof(unsigned long));
 			memblock_free(__pa(xen_start_info->mfn_list), size);
 			/* And revector! Bye bye old array */
 			xen_start_info->mfn_list = new_mfn_list;
-- 
1.7.7.6


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 16:13:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 16:13:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T22hH-000680-Cj; Thu, 16 Aug 2012 16:13:23 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T22hG-00067M-8B
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 16:13:22 +0000
Received: from [85.158.143.99:16046] by server-1.bemta-4.messagelabs.com id
	5E/5A-07754-12C1D205; Thu, 16 Aug 2012 16:13:21 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1345133599!21510805!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjk0MzAx\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12320 invoked from network); 16 Aug 2012 16:13:21 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-6.tower-216.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 16 Aug 2012 16:13:21 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7GGDI7h007939
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 16 Aug 2012 16:13:19 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7GGDI2C007167
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 16 Aug 2012 16:13:18 GMT
Received: from abhmt120.oracle.com (abhmt120.oracle.com [141.146.116.72])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7GGDIcL011311; Thu, 16 Aug 2012 11:13:18 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 16 Aug 2012 09:13:17 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 1628640357; Thu, 16 Aug 2012 12:03:31 -0400 (EDT)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xensource.com
Date: Thu, 16 Aug 2012 12:03:25 -0400
Message-Id: <1345133009-21941-8-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.7.7.6
In-Reply-To: <1345133009-21941-1-git-send-email-konrad.wilk@oracle.com>
References: <1345133009-21941-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH 07/11] xen/mmu: Recycle the Xen provided L4, L3,
	and L2 pages
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

As we are not using them: we end up using only the L1 pagetables
and grafting those onto our page-tables.

[v1: Per Stefano's suggestion squashed two commits]
[v2: Per Stefano's suggestion simplified loop]
[v3: Fix smatch warnings]
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/xen/mmu.c |   40 +++++++++++++++++++++++++++++++++-------
 1 files changed, 33 insertions(+), 7 deletions(-)

diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index a59070b..bd92c82 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -1708,7 +1708,20 @@ static void convert_pfn_mfn(void *v)
 	for (i = 0; i < PTRS_PER_PTE; i++)
 		pte[i] = xen_make_pte(pte[i].pte);
 }
-
+static void __init check_pt_base(unsigned long *pt_base, unsigned long *pt_end,
+				 unsigned long addr)
+{
+	if (*pt_base == PFN_DOWN(__pa(addr))) {
+		set_page_prot((void *)addr, PAGE_KERNEL);
+		clear_page((void *)addr);
+		(*pt_base)++;
+	}
+	if (*pt_end == PFN_DOWN(__pa(addr))) {
+		set_page_prot((void *)addr, PAGE_KERNEL);
+		clear_page((void *)addr);
+		(*pt_end)--;
+	}
+}
 /*
  * Set up the initial kernel pagetable.
  *
@@ -1724,6 +1737,9 @@ void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
 {
 	pud_t *l3;
 	pmd_t *l2;
+	unsigned long addr[3];
+	unsigned long pt_base, pt_end;
+	unsigned i;
 
 	/* max_pfn_mapped is the last pfn mapped in the initial memory
 	 * mappings. Considering that on Xen after the kernel mappings we
@@ -1731,6 +1747,9 @@ void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
 	 * set max_pfn_mapped to the last real pfn mapped. */
 	max_pfn_mapped = PFN_DOWN(__pa(xen_start_info->mfn_list));
 
+	pt_base = PFN_DOWN(__pa(xen_start_info->pt_base));
+	pt_end = PFN_DOWN(__pa(xen_start_info->pt_base + (xen_start_info->nr_pt_frames * PAGE_SIZE)));
+
 	/* Zap identity mapping */
 	init_level4_pgt[0] = __pgd(0);
 
@@ -1749,6 +1768,9 @@ void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
 	l3 = m2v(pgd[pgd_index(__START_KERNEL_map)].pgd);
 	l2 = m2v(l3[pud_index(__START_KERNEL_map)].pud);
 
+	addr[0] = (unsigned long)pgd;
+	addr[1] = (unsigned long)l3;
+	addr[2] = (unsigned long)l2;
 	/* Graft it onto L4[272][0]. Note that we are creating an aliasing problem:
 	 * Both L4[272][0] and L4[511][511] have entries that point to the same
 	 * L2 (PMD) tables. Meaning that if you modify it in __va space
@@ -1782,20 +1804,24 @@ void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
 	/* Unpin Xen-provided one */
 	pin_pagetable_pfn(MMUEXT_UNPIN_TABLE, PFN_DOWN(__pa(pgd)));
 
-	/* Switch over */
-	pgd = init_level4_pgt;
-
 	/*
 	 * At this stage there can be no user pgd, and no page
 	 * structure to attach it to, so make sure we just set kernel
 	 * pgd.
 	 */
 	xen_mc_batch();
-	__xen_write_cr3(true, __pa(pgd));
+	__xen_write_cr3(true, __pa(init_level4_pgt));
 	xen_mc_issue(PARAVIRT_LAZY_CPU);
 
-	memblock_reserve(__pa(xen_start_info->pt_base),
-			 xen_start_info->nr_pt_frames * PAGE_SIZE);
+	/* We can't rip out the L3 and L2 that easily, as the Xen pagetables
+	 * are laid out this way for the initial domain: [L4], [L1], [L2],
+	 * [L3], [L1], [L1], ...  For guests started by the toolstack they
+	 * are in [L4], [L3], [L2], [L1], [L1], ... order. */
+	for (i = 0; i < ARRAY_SIZE(addr); i++)
+		check_pt_base(&pt_base, &pt_end, addr[i]);
+
+	/* Reserve what is left of the Xen-provided pagetable (now three pages smaller). */
+	memblock_reserve(PFN_PHYS(pt_base), (pt_end - pt_base) * PAGE_SIZE);
 }
 #else	/* !CONFIG_X86_64 */
 static RESERVE_BRK_ARRAY(pmd_t, initial_kernel_pmd, PTRS_PER_PMD);
-- 
1.7.7.6


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 16:13:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 16:13:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T22hJ-00069i-KW; Thu, 16 Aug 2012 16:13:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T22hH-00067M-77
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 16:13:23 +0000
Received: from [85.158.143.35:42214] by server-1.bemta-4.messagelabs.com id
	F7/6A-07754-22C1D205; Thu, 16 Aug 2012 16:13:22 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1345133600!5924381!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDcwMTY5NA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18595 invoked from network); 16 Aug 2012 16:13:22 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-4.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 16 Aug 2012 16:13:22 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7GGDIiO027563
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 16 Aug 2012 16:13:19 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7GGDISJ007174
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 16 Aug 2012 16:13:18 GMT
Received: from abhmt102.oracle.com (abhmt102.oracle.com [141.146.116.54])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7GGDIpH011316; Thu, 16 Aug 2012 11:13:18 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 16 Aug 2012 09:13:17 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 3448D4035E; Thu, 16 Aug 2012 12:03:31 -0400 (EDT)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xensource.com
Date: Thu, 16 Aug 2012 12:03:28 -0400
Message-Id: <1345133009-21941-11-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.7.7.6
In-Reply-To: <1345133009-21941-1-git-send-email-konrad.wilk@oracle.com>
References: <1345133009-21941-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH 10/11] xen/mmu: Remove from __ka space PMD
	entries for pagetables.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Please first read the description in "xen/mmu: Copy and revector the
P2M tree."

At this stage, the __ka address space (which is what the old
P2M tree was using) is partially disassembled. cleanup_highmap
has removed the PMD entries from 0-16MB and anything past _brk_end
up to max_pfn_mapped (which is the end of the ramdisk).

xen_remove_p2m_tree and the code around it have ripped out the __ka
mappings for the old P2M array.

Here we continue doing the same for the region where the Xen
page-tables were. This is safe, as the page-tables are addressed
using __va. For good measure we also delete anything within
MODULES_VADDR and up to the end of the PMD.

At this point __ka contains PMD entries only for the kernel, from
its start up to __brk.

[v1: Per Stefano's suggestion wrapped the MODULES_VADDR in debug]
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/xen/mmu.c |   19 +++++++++++++++++++
 1 files changed, 19 insertions(+), 0 deletions(-)

diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index e0919c5..5a880b8 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -1240,6 +1240,25 @@ static void __init xen_pagetable_setup_done(pgd_t *base)
 			xen_start_info->mfn_list = new_mfn_list;
 		}
 	}
+	/* At this stage, cleanup_highmap has already cleaned __ka space
+	 * from _brk_limit way up to max_pfn_mapped (which is the end of
+	 * the ramdisk). We continue on, erasing PMD entries that point to
+	 * page tables - note that they are still accessible at this stage
+	 * via __va. For good measure we also round up to the PMD - which
+	 * means that anybody still using a __ka address for the initial
+	 * boot stack will crash. xen_start_info has already been taken
+	 * care of in xen_setup_kernel_pagetable. */
+	addr = xen_start_info->pt_base;
+	size = roundup(xen_start_info->nr_pt_frames * PAGE_SIZE, PMD_SIZE);
+
+	xen_cleanhighmap(addr, addr + size);
+	xen_start_info->pt_base = (unsigned long)__va(__pa(xen_start_info->pt_base));
+#ifdef DEBUG
+	/* This is superfluous and not strictly necessary, but let's do it
+	 * anyway. The MODULES_VADDR -> MODULES_END range should be clear
+	 * of anything at this stage. */
+	xen_cleanhighmap(MODULES_VADDR, roundup(MODULES_VADDR, PUD_SIZE) - 1);
+#endif
 #endif
 	xen_post_allocator_init();
 }
-- 
1.7.7.6


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 16:13:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 16:13:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T22hH-00068J-Po; Thu, 16 Aug 2012 16:13:23 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T22hG-00067L-BC
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 16:13:22 +0000
Received: from [85.158.143.35:42129] by server-2.bemta-4.messagelabs.com id
	0B/1E-31966-12C1D205; Thu, 16 Aug 2012 16:13:21 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1345133599!13695819!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDcwMTY5NA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27523 invoked from network); 16 Aug 2012 16:13:20 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-12.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 16 Aug 2012 16:13:20 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7GGDIxT027551
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 16 Aug 2012 16:13:19 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7GGDHhg007940
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 16 Aug 2012 16:13:18 GMT
Received: from abhmt116.oracle.com (abhmt116.oracle.com [141.146.116.68])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7GGDH8O011298; Thu, 16 Aug 2012 11:13:17 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 16 Aug 2012 09:13:17 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id E5069402F0; Thu, 16 Aug 2012 12:03:30 -0400 (EDT)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xensource.com
Date: Thu, 16 Aug 2012 12:03:21 -0400
Message-Id: <1345133009-21941-4-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.7.7.6
In-Reply-To: <1345133009-21941-1-git-send-email-konrad.wilk@oracle.com>
References: <1345133009-21941-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH 03/11] xen/mmu: The xen_setup_kernel_pagetable
	doesn't need to return anything.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

We don't need to return the new PGD, as we do not use it.

Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/xen/enlighten.c |    5 +----
 arch/x86/xen/mmu.c       |   10 ++--------
 arch/x86/xen/xen-ops.h   |    2 +-
 3 files changed, 4 insertions(+), 13 deletions(-)

diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index e532eb5..993e2a5 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -1305,7 +1305,6 @@ asmlinkage void __init xen_start_kernel(void)
 {
 	struct physdev_set_iopl set_iopl;
 	int rc;
-	pgd_t *pgd;
 
 	if (!xen_start_info)
 		return;
@@ -1397,8 +1396,6 @@ asmlinkage void __init xen_start_kernel(void)
 	acpi_numa = -1;
 #endif
 
-	pgd = (pgd_t *)xen_start_info->pt_base;
-
 	/* Don't do the full vcpu_info placement stuff until we have a
 	   possible map and a non-dummy shared_info. */
 	per_cpu(xen_vcpu, 0) = &HYPERVISOR_shared_info->vcpu_info[0];
@@ -1407,7 +1404,7 @@ asmlinkage void __init xen_start_kernel(void)
 	early_boot_irqs_disabled = true;
 
 	xen_raw_console_write("mapping kernel into physical memory\n");
-	pgd = xen_setup_kernel_pagetable(pgd, xen_start_info->nr_pages);
+	xen_setup_kernel_pagetable((pgd_t *)xen_start_info->pt_base, xen_start_info->nr_pages);
 
 	xen_reserve_internals();
 	/* Allocate and initialize top and mid mfn levels for p2m structure */
diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index 3a73785..4ac21a4 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -1719,8 +1719,7 @@ static void convert_pfn_mfn(void *v)
  * of the physical mapping once some sort of allocator has been set
  * up.
  */
-pgd_t * __init xen_setup_kernel_pagetable(pgd_t *pgd,
-					 unsigned long max_pfn)
+void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
 {
 	pud_t *l3;
 	pmd_t *l2;
@@ -1781,8 +1780,6 @@ pgd_t * __init xen_setup_kernel_pagetable(pgd_t *pgd,
 
 	memblock_reserve(__pa(xen_start_info->pt_base),
 			 xen_start_info->nr_pt_frames * PAGE_SIZE);
-
-	return pgd;
 }
 #else	/* !CONFIG_X86_64 */
 static RESERVE_BRK_ARRAY(pmd_t, initial_kernel_pmd, PTRS_PER_PMD);
@@ -1825,8 +1822,7 @@ static void __init xen_write_cr3_init(unsigned long cr3)
 	pv_mmu_ops.write_cr3 = &xen_write_cr3;
 }
 
-pgd_t * __init xen_setup_kernel_pagetable(pgd_t *pgd,
-					 unsigned long max_pfn)
+void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
 {
 	pmd_t *kernel_pmd;
 
@@ -1858,8 +1854,6 @@ pgd_t * __init xen_setup_kernel_pagetable(pgd_t *pgd,
 
 	memblock_reserve(__pa(xen_start_info->pt_base),
 			 xen_start_info->nr_pt_frames * PAGE_SIZE);
-
-	return initial_page_table;
 }
 #endif	/* CONFIG_X86_64 */
 
diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h
index 202d4c1..2230f57 100644
--- a/arch/x86/xen/xen-ops.h
+++ b/arch/x86/xen/xen-ops.h
@@ -27,7 +27,7 @@ void xen_setup_mfn_list_list(void);
 void xen_setup_shared_info(void);
 void xen_build_mfn_list_list(void);
 void xen_setup_machphys_mapping(void);
-pgd_t *xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn);
+void xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn);
 void xen_reserve_top(void);
 extern unsigned long xen_max_p2m_pfn;
 
-- 
1.7.7.6


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 16:13:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 16:13:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T22hH-000680-Cj; Thu, 16 Aug 2012 16:13:23 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T22hG-00067M-8B
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 16:13:22 +0000
Received: from [85.158.143.99:16046] by server-1.bemta-4.messagelabs.com id
	5E/5A-07754-12C1D205; Thu, 16 Aug 2012 16:13:21 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1345133599!21510805!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjk0MzAx\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12320 invoked from network); 16 Aug 2012 16:13:21 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-6.tower-216.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 16 Aug 2012 16:13:21 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7GGDI7h007939
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 16 Aug 2012 16:13:19 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7GGDI2C007167
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 16 Aug 2012 16:13:18 GMT
Received: from abhmt120.oracle.com (abhmt120.oracle.com [141.146.116.72])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7GGDIcL011311; Thu, 16 Aug 2012 11:13:18 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 16 Aug 2012 09:13:17 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 1628640357; Thu, 16 Aug 2012 12:03:31 -0400 (EDT)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xensource.com
Date: Thu, 16 Aug 2012 12:03:25 -0400
Message-Id: <1345133009-21941-8-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.7.7.6
In-Reply-To: <1345133009-21941-1-git-send-email-konrad.wilk@oracle.com>
References: <1345133009-21941-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH 07/11] xen/mmu: Recycle the Xen provided L4, L3,
	and L2 pages
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

We are not using them; we end up using only the L1 pagetables
and grafting those onto our page-tables.

[v1: Per Stefano's suggestion squashed two commits]
[v2: Per Stefano's suggestion simplified loop]
[v3: Fix smatch warnings]
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/xen/mmu.c |   40 +++++++++++++++++++++++++++++++++-------
 1 files changed, 33 insertions(+), 7 deletions(-)

diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index a59070b..bd92c82 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -1708,7 +1708,20 @@ static void convert_pfn_mfn(void *v)
 	for (i = 0; i < PTRS_PER_PTE; i++)
 		pte[i] = xen_make_pte(pte[i].pte);
 }
-
+static void __init check_pt_base(unsigned long *pt_base, unsigned long *pt_end,
+				 unsigned long addr)
+{
+	if (*pt_base == PFN_DOWN(__pa(addr))) {
+		set_page_prot((void *)addr, PAGE_KERNEL);
+		clear_page((void *)addr);
+		(*pt_base)++;
+	}
+	if (*pt_end == PFN_DOWN(__pa(addr))) {
+		set_page_prot((void *)addr, PAGE_KERNEL);
+		clear_page((void *)addr);
+		(*pt_end)--;
+	}
+}
 /*
  * Set up the initial kernel pagetable.
  *
@@ -1724,6 +1737,9 @@ void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
 {
 	pud_t *l3;
 	pmd_t *l2;
+	unsigned long addr[3];
+	unsigned long pt_base, pt_end;
+	unsigned i;
 
 	/* max_pfn_mapped is the last pfn mapped in the initial memory
 	 * mappings. Considering that on Xen after the kernel mappings we
@@ -1731,6 +1747,9 @@ void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
 	 * set max_pfn_mapped to the last real pfn mapped. */
 	max_pfn_mapped = PFN_DOWN(__pa(xen_start_info->mfn_list));
 
+	pt_base = PFN_DOWN(__pa(xen_start_info->pt_base));
+	pt_end = PFN_DOWN(__pa(xen_start_info->pt_base + (xen_start_info->nr_pt_frames * PAGE_SIZE)));
+
 	/* Zap identity mapping */
 	init_level4_pgt[0] = __pgd(0);
 
@@ -1749,6 +1768,9 @@ void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
 	l3 = m2v(pgd[pgd_index(__START_KERNEL_map)].pgd);
 	l2 = m2v(l3[pud_index(__START_KERNEL_map)].pud);
 
+	addr[0] = (unsigned long)pgd;
+	addr[1] = (unsigned long)l3;
+	addr[2] = (unsigned long)l2;
 	/* Graft it onto L4[272][0]. Note that we creating an aliasing problem:
 	 * Both L4[272][0] and L4[511][511] have entries that point to the same
 	 * L2 (PMD) tables. Meaning that if you modify it in __va space
@@ -1782,20 +1804,24 @@ void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
 	/* Unpin Xen-provided one */
 	pin_pagetable_pfn(MMUEXT_UNPIN_TABLE, PFN_DOWN(__pa(pgd)));
 
-	/* Switch over */
-	pgd = init_level4_pgt;
-
 	/*
 	 * At this stage there can be no user pgd, and no page
 	 * structure to attach it to, so make sure we just set kernel
 	 * pgd.
 	 */
 	xen_mc_batch();
-	__xen_write_cr3(true, __pa(pgd));
+	__xen_write_cr3(true, __pa(init_level4_pgt));
 	xen_mc_issue(PARAVIRT_LAZY_CPU);
 
-	memblock_reserve(__pa(xen_start_info->pt_base),
-			 xen_start_info->nr_pt_frames * PAGE_SIZE);
+	/* We can't rip out the L3 and L2 that easily, as the Xen pagetables
+	 * are laid out this way for the initial domain: [L4], [L1], [L2],
+	 * [L3], [L1], [L1], ...  For guests started by the toolstack they
+	 * are in [L4], [L3], [L2], [L1], [L1], ... order. */
+	for (i = 0; i < ARRAY_SIZE(addr); i++)
+		check_pt_base(&pt_base, &pt_end, addr[i]);
+
+	/* Reserve what is left of the Xen-provided pagetable (now three pages smaller). */
+	memblock_reserve(PFN_PHYS(pt_base), (pt_end - pt_base) * PAGE_SIZE);
 }
 #else	/* !CONFIG_X86_64 */
 static RESERVE_BRK_ARRAY(pmd_t, initial_kernel_pmd, PTRS_PER_PMD);
-- 
1.7.7.6


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 			 xen_start_info->nr_pt_frames * PAGE_SIZE);
-
-	return initial_page_table;
 }
 #endif	/* CONFIG_X86_64 */
 
diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h
index 202d4c1..2230f57 100644
--- a/arch/x86/xen/xen-ops.h
+++ b/arch/x86/xen/xen-ops.h
@@ -27,7 +27,7 @@ void xen_setup_mfn_list_list(void);
 void xen_setup_shared_info(void);
 void xen_build_mfn_list_list(void);
 void xen_setup_machphys_mapping(void);
-pgd_t *xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn);
+void xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn);
 void xen_reserve_top(void);
 extern unsigned long xen_max_p2m_pfn;
 
-- 
1.7.7.6


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 16:13:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 16:13:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T22hI-00068q-EZ; Thu, 16 Aug 2012 16:13:24 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T22hG-00067O-IH
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 16:13:22 +0000
Received: from [85.158.139.83:40635] by server-10.bemta-5.messagelabs.com id
	86/C6-13125-12C1D205; Thu, 16 Aug 2012 16:13:21 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-4.tower-182.messagelabs.com!1345133600!25790051!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjk0MzAx\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3748 invoked from network); 16 Aug 2012 16:13:21 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-4.tower-182.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 16 Aug 2012 16:13:21 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7GGDIQu007944
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 16 Aug 2012 16:13:19 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7GGDIrM000545
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 16 Aug 2012 16:13:18 GMT
Received: from abhmt107.oracle.com (abhmt107.oracle.com [141.146.116.59])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7GGDI0Z011310; Thu, 16 Aug 2012 11:13:18 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 16 Aug 2012 09:13:17 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 0BE714032D; Thu, 16 Aug 2012 12:03:31 -0400 (EDT)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xensource.com
Date: Thu, 16 Aug 2012 12:03:24 -0400
Message-Id: <1345133009-21941-7-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.7.7.6
In-Reply-To: <1345133009-21941-1-git-send-email-konrad.wilk@oracle.com>
References: <1345133009-21941-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH 06/11] xen/mmu: For 64-bit do not call
	xen_map_identity_early
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Because we do not need it. During startup, Xen provides us with
all of the memory mappings that we need to function.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/xen/mmu.c |   11 +++++------
 1 files changed, 5 insertions(+), 6 deletions(-)

diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index 7247e5a..a59070b 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -84,6 +84,7 @@
  */
 DEFINE_SPINLOCK(xen_reservation_lock);
 
+#ifdef CONFIG_X86_32
 /*
  * Identity map, in addition to plain kernel map.  This needs to be
  * large enough to allocate page table pages to allocate the rest.
@@ -91,7 +92,7 @@ DEFINE_SPINLOCK(xen_reservation_lock);
  */
 #define LEVEL1_IDENT_ENTRIES	(PTRS_PER_PTE * 4)
 static RESERVE_BRK_ARRAY(pte_t, level1_ident_pgt, LEVEL1_IDENT_ENTRIES);
-
+#endif
 #ifdef CONFIG_X86_64
 /* l3 pud for userspace vsyscall mapping */
 static pud_t level3_user_vsyscall[PTRS_PER_PUD] __page_aligned_bss;
@@ -1628,7 +1629,7 @@ static void set_page_prot(void *addr, pgprot_t prot)
 	if (HYPERVISOR_update_va_mapping((unsigned long)addr, pte, 0))
 		BUG();
 }
-
+#ifdef CONFIG_X86_32
 static void __init xen_map_identity_early(pmd_t *pmd, unsigned long max_pfn)
 {
 	unsigned pmdidx, pteidx;
@@ -1679,7 +1680,7 @@ static void __init xen_map_identity_early(pmd_t *pmd, unsigned long max_pfn)
 
 	set_page_prot(pmd, PAGE_KERNEL_RO);
 }
-
+#endif
 void __init xen_setup_machphys_mapping(void)
 {
 	struct xen_machphys_mapping mapping;
@@ -1765,14 +1766,12 @@ void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
 	/* Note that we don't do anything with level1_fixmap_pgt which
 	 * we don't need. */
 
-	/* Set up identity map */
-	xen_map_identity_early(level2_ident_pgt, max_pfn);
-
 	/* Make pagetable pieces RO */
 	set_page_prot(init_level4_pgt, PAGE_KERNEL_RO);
 	set_page_prot(level3_ident_pgt, PAGE_KERNEL_RO);
 	set_page_prot(level3_kernel_pgt, PAGE_KERNEL_RO);
 	set_page_prot(level3_user_vsyscall, PAGE_KERNEL_RO);
+	set_page_prot(level2_ident_pgt, PAGE_KERNEL_RO);
 	set_page_prot(level2_kernel_pgt, PAGE_KERNEL_RO);
 	set_page_prot(level2_fixmap_pgt, PAGE_KERNEL_RO);
 
-- 
1.7.7.6


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 16:13:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 16:13:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T22hK-0006Am-QM; Thu, 16 Aug 2012 16:13:26 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T22hH-00067M-N2
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 16:13:23 +0000
Received: from [85.158.143.99:16134] by server-1.bemta-4.messagelabs.com id
	49/6A-07754-32C1D205; Thu, 16 Aug 2012 16:13:23 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-14.tower-216.messagelabs.com!1345133601!18920475!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDcwMTY5NA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26957 invoked from network); 16 Aug 2012 16:13:22 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-14.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 16 Aug 2012 16:13:22 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7GGDJuh027567
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 16 Aug 2012 16:13:20 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7GGDIfU007975
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 16 Aug 2012 16:13:18 GMT
Received: from abhmt115.oracle.com (abhmt115.oracle.com [141.146.116.67])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7GGDH1g006365; Thu, 16 Aug 2012 11:13:17 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 16 Aug 2012 09:13:17 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 29D9F4035B; Thu, 16 Aug 2012 12:03:31 -0400 (EDT)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xensource.com
Date: Thu, 16 Aug 2012 12:03:27 -0400
Message-Id: <1345133009-21941-10-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.7.7.6
In-Reply-To: <1345133009-21941-1-git-send-email-konrad.wilk@oracle.com>
References: <1345133009-21941-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH 09/11] xen/mmu: Copy and revector the P2M tree.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Please first read the description in the "xen/p2m: Add logic to revector a
P2M tree to use __va leafs" patch.

The 'xen_revector_p2m_tree()' function allocates a new P2M tree,
copies the contents of the old one into it, and returns the new one.

At this stage, the __ka address space (which is what the old
P2M tree was using) is partially disassembled. cleanup_highmap
has removed the PMD entries from 0-16MB and anything past _brk_end
up to max_pfn_mapped (which is the end of the ramdisk).

We have revectored the P2M tree (and the one for save/restore as well)
to use shiny new __va addresses for the new MFNs. The xen_start_info
has already been taken care of in 'xen_setup_kernel_pagetable()' and
xen_start_info->shared_info in 'xen_setup_shared_info()', so
we are free to roam and delete PMD entries - which is exactly what
we are going to do. We rip out the __ka mappings for the old P2M array.

[v1: Fix smatch warnings]
[v2: memset was doing 0 instead of 0xff]
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/xen/mmu.c |   57 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 57 insertions(+), 0 deletions(-)

diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index bd92c82..e0919c5 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -1183,9 +1183,64 @@ static __init void xen_mapping_pagetable_reserve(u64 start, u64 end)
 
 static void xen_post_allocator_init(void);
 
+#ifdef CONFIG_X86_64
+static void __init xen_cleanhighmap(unsigned long vaddr,
+				    unsigned long vaddr_end)
+{
+	unsigned long kernel_end = roundup((unsigned long)_brk_end, PMD_SIZE) - 1;
+	pmd_t *pmd = level2_kernel_pgt + pmd_index(vaddr);
+
+	/* NOTE: The loop is more greedy than the cleanup_highmap variant.
+	 * We include the PMD passed in on _both_ boundaries. */
+	for (; vaddr <= vaddr_end && (pmd < (level2_kernel_pgt + PAGE_SIZE));
+			pmd++, vaddr += PMD_SIZE) {
+		if (pmd_none(*pmd))
+			continue;
+		if (vaddr < (unsigned long) _text || vaddr > kernel_end)
+			set_pmd(pmd, __pmd(0));
+	}
+	/* In case we did something silly, we should crash in this function
+	 * instead of somewhere later and be confusing. */
+	xen_mc_flush();
+}
+#endif
 static void __init xen_pagetable_setup_done(pgd_t *base)
 {
+#ifdef CONFIG_X86_64
+	unsigned long size;
+	unsigned long addr;
+#endif
+
 	xen_setup_shared_info();
+#ifdef CONFIG_X86_64
+	if (!xen_feature(XENFEAT_auto_translated_physmap)) {
+		unsigned long new_mfn_list;
+
+		size = PAGE_ALIGN(xen_start_info->nr_pages * sizeof(unsigned long));
+
+		/* On 32-bit, we get zero so this never gets executed. */
+		new_mfn_list = xen_revector_p2m_tree();
+		if (new_mfn_list && new_mfn_list != xen_start_info->mfn_list) {
+			/* using __ka address and sticking INVALID_P2M_ENTRY! */
+			memset((void *)xen_start_info->mfn_list, 0xff, size);
+
+			/* We should be in __ka space. */
+			BUG_ON(xen_start_info->mfn_list < __START_KERNEL_map);
+			addr = xen_start_info->mfn_list;
+			size = PAGE_ALIGN(xen_start_info->nr_pages * sizeof(unsigned long));
+			/* We round up to the PMD, which means that if anybody at this stage is
+			 * using the __ka address of xen_start_info or xen_start_info->shared_info
+			 * they are going to crash. Fortunately we have already revectored
+			 * in xen_setup_kernel_pagetable and in xen_setup_shared_info. */
+			size = roundup(size, PMD_SIZE);
+			xen_cleanhighmap(addr, addr + size);
+
+			memblock_free(__pa(xen_start_info->mfn_list), size);
+			/* And revector! Bye bye old array */
+			xen_start_info->mfn_list = new_mfn_list;
+		}
+	}
+#endif
 	xen_post_allocator_init();
 }
 
@@ -1822,6 +1877,8 @@ void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
 
 	/* Our (by three pages) smaller Xen pagetable that we are using */
 	memblock_reserve(PFN_PHYS(pt_base), (pt_end - pt_base) * PAGE_SIZE);
+	/* Revector the xen_start_info */
+	xen_start_info = (struct start_info *)__va(__pa(xen_start_info));
 }
 #else	/* !CONFIG_X86_64 */
 static RESERVE_BRK_ARRAY(pmd_t, initial_kernel_pmd, PTRS_PER_PMD);
-- 
1.7.7.6


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 16:13:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 16:13:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T22hK-0006AK-D1; Thu, 16 Aug 2012 16:13:26 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T22hH-00067V-JP
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 16:13:23 +0000
Received: from [85.158.138.51:25253] by server-9.bemta-3.messagelabs.com id
	4A/77-23952-32C1D205; Thu, 16 Aug 2012 16:13:23 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-12.tower-174.messagelabs.com!1345133601!20717333!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDcwMTY5NA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7428 invoked from network); 16 Aug 2012 16:13:22 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-12.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 16 Aug 2012 16:13:22 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7GGDJdE027566
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 16 Aug 2012 16:13:19 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7GGDIsx000550
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 16 Aug 2012 16:13:18 GMT
Received: from abhmt114.oracle.com (abhmt114.oracle.com [141.146.116.66])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7GGDHqD027797; Thu, 16 Aug 2012 11:13:17 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 16 Aug 2012 09:13:17 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 01AA24032C; Thu, 16 Aug 2012 12:03:30 -0400 (EDT)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xensource.com
Date: Thu, 16 Aug 2012 12:03:23 -0400
Message-Id: <1345133009-21941-6-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.7.7.6
In-Reply-To: <1345133009-21941-1-git-send-email-konrad.wilk@oracle.com>
References: <1345133009-21941-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH 05/11] xen/mmu: use copy_page instead of memcpy.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

After all, this is what it is there for.

Acked-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/xen/mmu.c |   13 ++++++-------
 1 files changed, 6 insertions(+), 7 deletions(-)

diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index 6ba6100..7247e5a 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -1754,14 +1754,14 @@ void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
 	 * it will be also modified in the __ka space! (But if you just
 	 * modify the PMD table to point to other PTE's or none, then you
 	 * are OK - which is what cleanup_highmap does) */
-	memcpy(level2_ident_pgt, l2, sizeof(pmd_t) * PTRS_PER_PMD);
+	copy_page(level2_ident_pgt, l2);
 	/* Graft it onto L4[511][511] */
-	memcpy(level2_kernel_pgt, l2, sizeof(pmd_t) * PTRS_PER_PMD);
+	copy_page(level2_kernel_pgt, l2);
 
 	/* Get [511][510] and graft that in level2_fixmap_pgt */
 	l3 = m2v(pgd[pgd_index(__START_KERNEL_map + PMD_SIZE)].pgd);
 	l2 = m2v(l3[pud_index(__START_KERNEL_map + PMD_SIZE)].pud);
-	memcpy(level2_fixmap_pgt, l2, sizeof(pmd_t) * PTRS_PER_PMD);
+	copy_page(level2_fixmap_pgt, l2);
 	/* Note that we don't do anything with level1_fixmap_pgt which
 	 * we don't need. */
 
@@ -1821,8 +1821,7 @@ static void __init xen_write_cr3_init(unsigned long cr3)
 	 */
 	swapper_kernel_pmd =
 		extend_brk(sizeof(pmd_t) * PTRS_PER_PMD, PAGE_SIZE);
-	memcpy(swapper_kernel_pmd, initial_kernel_pmd,
-	       sizeof(pmd_t) * PTRS_PER_PMD);
+	copy_page(swapper_kernel_pmd, initial_kernel_pmd);
 	swapper_pg_dir[KERNEL_PGD_BOUNDARY] =
 		__pgd(__pa(swapper_kernel_pmd) | _PAGE_PRESENT);
 	set_page_prot(swapper_kernel_pmd, PAGE_KERNEL_RO);
@@ -1851,11 +1850,11 @@ void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
 				  512*1024);
 
 	kernel_pmd = m2v(pgd[KERNEL_PGD_BOUNDARY].pgd);
-	memcpy(initial_kernel_pmd, kernel_pmd, sizeof(pmd_t) * PTRS_PER_PMD);
+	copy_page(initial_kernel_pmd, kernel_pmd);
 
 	xen_map_identity_early(initial_kernel_pmd, max_pfn);
 
-	memcpy(initial_page_table, pgd, sizeof(pgd_t) * PTRS_PER_PGD);
+	copy_page(initial_page_table, pgd);
 	initial_page_table[KERNEL_PGD_BOUNDARY] =
 		__pgd(__pa(initial_kernel_pmd) | _PAGE_PRESENT);
 
-- 
1.7.7.6


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 16:13:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 16:13:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T22hM-0006CB-Da; Thu, 16 Aug 2012 16:13:28 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T22hK-00067K-KX
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 16:13:26 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1345133599!9587518!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjk0MzAx\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30710 invoked from network); 16 Aug 2012 16:13:20 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-13.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 16 Aug 2012 16:13:20 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7GGDHGt007925
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 16 Aug 2012 16:13:18 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7GGDHlf014616
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 16 Aug 2012 16:13:17 GMT
Received: from abhmt105.oracle.com (abhmt105.oracle.com [141.146.116.57])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7GGDHd7027781; Thu, 16 Aug 2012 11:13:17 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 16 Aug 2012 09:13:17 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id CD1E8402E8; Thu, 16 Aug 2012 12:03:30 -0400 (EDT)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xensource.com
Date: Thu, 16 Aug 2012 12:03:18 -0400
Message-Id: <1345133009-21941-1-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.7.7.6
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Subject: [Xen-devel] [PATCH] Boot PV guests with more than 128GB (v3) for
	v3.7.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Since (v2): http://lists.xen.org/archives/html/xen-devel/2012-07/msg01864.html
 - fixed a bug if guest booted with non-PMD aligned size (say, 899MB).
 - fixed smatch warnings
 - moved a memset(xen_start_info->mfn_list, 0xff,.. ) from one patch to another.
Since v1: [http://lists.xen.org/archives/html/xen-devel/2012-07/msg01561.html]
 - added more comments, and #ifdefs
 - squashed the L4, L3, and L2 recycle patches together
 - Added Acked-by's

These patches are quite well baked by now. If you have already looked
at this before, I would just skip ahead and look at the last patch.

The explanation of these patches is exactly what v1 had:

The details of this problem are nicely explained in:

 [PATCH 4/6] xen/p2m: Add logic to revector a P2M tree to use __va
 [PATCH 5/6] xen/mmu: Copy and revector the P2M tree.
 [PATCH 6/6] xen/mmu: Remove from __ka space PMD entries for

and the supporting patches are just nice optimizations. Pasting in
what those patches mentioned:


During bootup Xen supplies us with a P2M array. It sticks
it right after the ramdisk, as can be seen with a 128GB PV guest:

(certain parts removed for clarity):
xc_dom_build_image: called
xc_dom_alloc_segment:   kernel       : 0xffffffff81000000 -> 0xffffffff81e43000 
 (pfn 0x1000 + 0xe43 pages)
xc_dom_pfn_to_ptr: domU mapping: pfn 0x1000+0xe43 at 0x7f097d8bf000
xc_dom_alloc_segment:   ramdisk      : 0xffffffff81e43000 -> 0xffffffff925c7000 
 (pfn 0x1e43 + 0x10784 pages)
xc_dom_pfn_to_ptr: domU mapping: pfn 0x1e43+0x10784 at 0x7f0952dd2000
xc_dom_alloc_segment:   phys2mach    : 0xffffffff925c7000 -> 0xffffffffa25c7000 
 (pfn 0x125c7 + 0x10000 pages)
xc_dom_pfn_to_ptr: domU mapping: pfn 0x125c7+0x10000 at 0x7f0942dd2000
xc_dom_alloc_page   :   start info   : 0xffffffffa25c7000 (pfn 0x225c7)
xc_dom_alloc_page   :   xenstore     : 0xffffffffa25c8000 (pfn 0x225c8)
xc_dom_alloc_page   :   console      : 0xffffffffa25c9000 (pfn 0x225c9)
nr_page_tables: 0x0000ffffffffffff/48: 0xffff000000000000 -> 
0xffffffffffffffff, 1 table(s)
nr_page_tables: 0x0000007fffffffff/39: 0xffffff8000000000 -> 
0xffffffffffffffff, 1 table(s)
nr_page_tables: 0x000000003fffffff/30: 0xffffffff80000000 -> 
0xffffffffbfffffff, 1 table(s)
nr_page_tables: 0x00000000001fffff/21: 0xffffffff80000000 -> 
0xffffffffa27fffff, 276 table(s)
xc_dom_alloc_segment:   page tables  : 0xffffffffa25ca000 -> 0xffffffffa26e1000 
 (pfn 0x225ca + 0x117 pages)
xc_dom_pfn_to_ptr: domU mapping: pfn 0x225ca+0x117 at 0x7f097d7a8000
xc_dom_alloc_page   :   boot stack   : 0xffffffffa26e1000 (pfn 0x226e1)
xc_dom_build_image  : virt_alloc_end : 0xffffffffa26e2000
xc_dom_build_image  : virt_pgtab_end : 0xffffffffa2800000

So the physical memory and virtual (using __START_KERNEL_map addresses)
layouts look like so:

  phys                             __ka
/------------\                   /-------------------\
| 0          | empty             | 0xffffffff80000000|
| ..         |                   | ..                |
| 16MB       | <= kernel starts  | 0xffffffff81000000|
| ..         |                   |                   |
| 30MB       | <= kernel ends => | 0xffffffff81e43000|
| ..         |  & ramdisk starts | ..                |
| 293MB      | <= ramdisk ends=> | 0xffffffff925c7000|
| ..         |  & P2M starts     | ..                |
| ..         |                   | ..                |
| 549MB      | <= P2M ends    => | 0xffffffffa25c7000|
| ..         | start_info        | 0xffffffffa25c7000|
| ..         | xenstore          | 0xffffffffa25c8000|
| ..         | console           | 0xffffffffa25c9000|
| 549MB      | <= page tables => | 0xffffffffa25ca000|
| ..         |                   |                   |
| 550MB      | <= PGT end     => | 0xffffffffa26e1000|
| ..         | boot stack        |                   |
\------------/                   \-------------------/

As can be seen, the ramdisk, P2M and pagetables take up a fair chunk
of the __ka address space. This is a problem, since MODULES_VADDR
starts at 0xffffffffa0000000 - and the P2M sits right in there!
During bootup this makes it impossible to load modules, failing with
this error:

------------[ cut here ]------------
WARNING: at /home/konrad/ssd/linux/mm/vmalloc.c:106 
vmap_page_range_noflush+0x2d9/0x370()
Call Trace:
 [<ffffffff810719fa>] warn_slowpath_common+0x7a/0xb0
 [<ffffffff81030279>] ? __raw_callee_save_xen_pmd_val+0x11/0x1e
 [<ffffffff81071a45>] warn_slowpath_null+0x15/0x20
 [<ffffffff81130b89>] vmap_page_range_noflush+0x2d9/0x370
 [<ffffffff81130c4d>] map_vm_area+0x2d/0x50
 [<ffffffff811326d0>] __vmalloc_node_range+0x160/0x250
 [<ffffffff810c5369>] ? module_alloc_update_bounds+0x19/0x80
 [<ffffffff810c6186>] ? load_module+0x66/0x19c0
 [<ffffffff8105cadc>] module_alloc+0x5c/0x60
 [<ffffffff810c5369>] ? module_alloc_update_bounds+0x19/0x80
 [<ffffffff810c5369>] module_alloc_update_bounds+0x19/0x80
 [<ffffffff810c70c3>] load_module+0xfa3/0x19c0
 [<ffffffff812491f6>] ? security_file_permission+0x86/0x90
 [<ffffffff810c7b3a>] sys_init_module+0x5a/0x220
 [<ffffffff815ce339>] system_call_fastpath+0x16/0x1b
---[ end trace fd8f7704fdea0291 ]---
vmalloc: allocation failure, allocated 16384 of 20480 bytes
modprobe: page allocation failure: order:0, mode:0xd2

Since the __va and __ka are 1:1 up to MODULES_VADDR and
cleanup_highmap rids __ka of the ramdisk mapping, what
we want to do is similar - get rid of the P2M in the __ka
address space. There are two ways of fixing this:

 1) All P2M lookups would use the __va address instead of the
    __ka address. This means we can safely erase from __ka
    space the PMD pointers that point to the PFNs for the
    P2M array and be OK.
 2) Allocate a new array, copy the existing P2M into it,
    revector the P2M tree to use that, and return the old
    P2M to the memory allocator. This has the advantage that
    it sets the stage for using the XEN_ELF_NOTE_INIT_P2M
    feature. That feature allows us to set the exact virtual
    address space we want for the P2M - and allows us to
    boot as the initial domain on large machines.

So we pick option 2).

This patch only lays the groundwork in the P2M code. The patch
that modifies the MMU is called "xen/mmu: Copy and revector the P2M tree."

-- xen/mmu: Copy and revector the P2M tree:

The 'xen_revector_p2m_tree()' function allocates a new P2M tree,
copies the contents of the old one into it, and returns the new one.

At this stage, the __ka address space (which is what the old
P2M tree was using) is partially disassembled. The cleanup_highmap
has removed the PMD entries from 0-16MB and anything past _brk_end
up to the max_pfn_mapped (which is the end of the ramdisk).

We have revectored the P2M tree (and the one for save/restore as well)
to use the new shiny __va addresses for the new MFNs. The xen_start_info
has been taken care of already in 'xen_setup_kernel_pagetable()' and
xen_start_info->shared_info in 'xen_setup_shared_info()', so
we are free to roam and delete PMD entries - which is exactly what
we are going to do. We rip out the __ka for the old P2M array.

-- xen/mmu:   Remove from __ka space PMD entries for

At this stage, the __ka address space (which is what the old
P2M tree was using) is partially disassembled. The cleanup_highmap
has removed the PMD entries from 0-16MB and anything past _brk_end
up to the max_pfn_mapped (which is the end of the ramdisk).

The xen_remove_p2m_tree function and the code around it have ripped
out the __ka mappings for the old P2M array.

Here we continue doing the same for the area where the Xen page-tables were.
It is safe to do it, as the page-tables are addressed using __va.
For good measure we delete anything that is within MODULES_VADDR
and up to the end of the PMD.

At this point the __ka only contains PMD entries for the start
of the kernel up to __brk.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 16:13:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 16:13:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T22hM-0006CB-Da; Thu, 16 Aug 2012 16:13:28 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T22hK-00067K-KX
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 16:13:26 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1345133599!9587518!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjk0MzAx\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30710 invoked from network); 16 Aug 2012 16:13:20 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-13.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 16 Aug 2012 16:13:20 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7GGDHGt007925
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 16 Aug 2012 16:13:18 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7GGDHlf014616
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 16 Aug 2012 16:13:17 GMT
Received: from abhmt105.oracle.com (abhmt105.oracle.com [141.146.116.57])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7GGDHd7027781; Thu, 16 Aug 2012 11:13:17 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 16 Aug 2012 09:13:17 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id CD1E8402E8; Thu, 16 Aug 2012 12:03:30 -0400 (EDT)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xensource.com
Date: Thu, 16 Aug 2012 12:03:18 -0400
Message-Id: <1345133009-21941-1-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.7.7.6
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Subject: [Xen-devel] [PATCH] Boot PV guests with more than 128GB (v3) for
	v3.7.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Since (v2): http://lists.xen.org/archives/html/xen-devel/2012-07/msg01864.html
 - fixed a bug if guest booted with non-PMD aligned size (say, 899MB).
 - fixed smack warnings
 - moved a memset(xen_start_info->mfn_list, 0xff,.. ) from one patch to another.
Since v1: [http://lists.xen.org/archives/html/xen-devel/2012-07/msg01561.html]
 - added more comments, and #ifdefs
 - squashed The L4 and L4, L3, and L2 recycle patches together
 - Added Acked-by's

These patches are quite baked by now. If you already looked at this before
I would just skip over it and just look at the last patch.

The explanation of these patches is exactly what v1 had:

The details of this problem are nicely explained in:

 [PATCH 4/6] xen/p2m: Add logic to revector a P2M tree to use __va
 [PATCH 5/6] xen/mmu: Copy and revector the P2M tree.
 [PATCH 6/6] xen/mmu: Remove from __ka space PMD entries for

and the supporting patches are just nice optimizations. Pasting in
what those patches mentioned:


During bootup Xen supplies us with a P2M array. It sticks
it right after the ramdisk, as can be seen with a 128GB PV guest:

(certain parts removed for clarity):
xc_dom_build_image: called
xc_dom_alloc_segment:   kernel       : 0xffffffff81000000 -> 0xffffffff81e43000 
 (pfn 0x1000 + 0xe43 pages)
xc_dom_pfn_to_ptr: domU mapping: pfn 0x1000+0xe43 at 0x7f097d8bf000
xc_dom_alloc_segment:   ramdisk      : 0xffffffff81e43000 -> 0xffffffff925c7000 
 (pfn 0x1e43 + 0x10784 pages)
xc_dom_pfn_to_ptr: domU mapping: pfn 0x1e43+0x10784 at 0x7f0952dd2000
xc_dom_alloc_segment:   phys2mach    : 0xffffffff925c7000 -> 0xffffffffa25c7000 
 (pfn 0x125c7 + 0x10000 pages)
xc_dom_pfn_to_ptr: domU mapping: pfn 0x125c7+0x10000 at 0x7f0942dd2000
xc_dom_alloc_page   :   start info   : 0xffffffffa25c7000 (pfn 0x225c7)
xc_dom_alloc_page   :   xenstore     : 0xffffffffa25c8000 (pfn 0x225c8)
xc_dom_alloc_page   :   console      : 0xffffffffa25c9000 (pfn 0x225c9)
nr_page_tables: 0x0000ffffffffffff/48: 0xffff000000000000 -> 
0xffffffffffffffff, 1 table(s)
nr_page_tables: 0x0000007fffffffff/39: 0xffffff8000000000 -> 
0xffffffffffffffff, 1 table(s)
nr_page_tables: 0x000000003fffffff/30: 0xffffffff80000000 -> 
0xffffffffbfffffff, 1 table(s)
nr_page_tables: 0x00000000001fffff/21: 0xffffffff80000000 -> 
0xffffffffa27fffff, 276 table(s)
xc_dom_alloc_segment:   page tables  : 0xffffffffa25ca000 -> 0xffffffffa26e1000 
 (pfn 0x225ca + 0x117 pages)
xc_dom_pfn_to_ptr: domU mapping: pfn 0x225ca+0x117 at 0x7f097d7a8000
xc_dom_alloc_page   :   boot stack   : 0xffffffffa26e1000 (pfn 0x226e1)
xc_dom_build_image  : virt_alloc_end : 0xffffffffa26e2000
xc_dom_build_image  : virt_pgtab_end : 0xffffffffa2800000

So the physical memory and virtual (using __START_KERNEL_map addresses)
layout looks as so:

  phys                             __ka
/------------\                   /-------------------\
| 0          | empty             | 0xffffffff80000000|
| ..         |                   | ..                |
| 16MB       | <= kernel starts  | 0xffffffff81000000|
| ..         |                   |                   |
| 30MB       | <= kernel ends => | 0xffffffff81e43000|
| ..         |  & ramdisk starts | ..                |
| 293MB      | <= ramdisk ends=> | 0xffffffff925c7000|
| ..         |  & P2M starts     | ..                |
| ..         |                   | ..                |
| 549MB      | <= P2M ends    => | 0xffffffffa25c7000|
| ..         | start_info        | 0xffffffffa25c7000|
| ..         | xenstore          | 0xffffffffa25c8000|
| ..         | cosole            | 0xffffffffa25c9000|
| 549MB      | <= page tables => | 0xffffffffa25ca000|
| ..         |                   |                   |
| 550MB      | <= PGT end     => | 0xffffffffa26e1000|
| ..         | boot stack        |                   |
\------------/                   \-------------------/

As can be seen, the ramdisk, P2M and pagetables are taking
a bit of __ka addresses space. Which is a problem since the
MODULES_VADDR starts at 0xffffffffa0000000 - and P2M sits
right in there! This results during bootup with the inability to
load modules, with this error:

------------[ cut here ]------------
WARNING: at /home/konrad/ssd/linux/mm/vmalloc.c:106 
vmap_page_range_noflush+0x2d9/0x370()
Call Trace:
 [<ffffffff810719fa>] warn_slowpath_common+0x7a/0xb0
 [<ffffffff81030279>] ? __raw_callee_save_xen_pmd_val+0x11/0x1e
 [<ffffffff81071a45>] warn_slowpath_null+0x15/0x20
 [<ffffffff81130b89>] vmap_page_range_noflush+0x2d9/0x370
 [<ffffffff81130c4d>] map_vm_area+0x2d/0x50
 [<ffffffff811326d0>] __vmalloc_node_range+0x160/0x250
 [<ffffffff810c5369>] ? module_alloc_update_bounds+0x19/0x80
 [<ffffffff810c6186>] ? load_module+0x66/0x19c0
 [<ffffffff8105cadc>] module_alloc+0x5c/0x60
 [<ffffffff810c5369>] ? module_alloc_update_bounds+0x19/0x80
 [<ffffffff810c5369>] module_alloc_update_bounds+0x19/0x80
 [<ffffffff810c70c3>] load_module+0xfa3/0x19c0
 [<ffffffff812491f6>] ? security_file_permission+0x86/0x90
 [<ffffffff810c7b3a>] sys_init_module+0x5a/0x220
 [<ffffffff815ce339>] system_call_fastpath+0x16/0x1b
---[ end trace fd8f7704fdea0291 ]---
vmalloc: allocation failure, allocated 16384 of 20480 bytes
modprobe: page allocation failure: order:0, mode:0xd2

Since the __va and __ka are 1:1 up to MODULES_VADDR and
cleanup_highmap rids __ka of the ramdisk mapping, what
we want to do is similar - get rid of the P2M in the __ka
address space. There are two ways of fixing this:

 1) All P2M lookups instead of using the __ka address would
    use the __va address. This means we can safely erase from
    __ka space the PMD pointers that point to the PFNs for
    P2M array and be OK.
 2). Allocate a new array, copy the existing P2M into it,
    revector the P2M tree to use that, and return the old
    P2M to the memory allocate. This has the advantage that
    it sets the stage for using XEN_ELF_NOTE_INIT_P2M
    feature. That feature allows us to set the exact virtual
    address space we want for the P2M - and allows us to
    boot as initial domain on large machines.

So we pick option 2).

This patch only lays the groundwork in the P2M code. The patch
that modifies the MMU is called "xen/mmu: Copy and revector the P2M tree."

-- xen/mmu: Copy and revector the P2M tree:

The 'xen_revector_p2m_tree()' function allocates a new P2M tree
copies the contents of the old one in it, and returns the new one.

At this stage, the __ka address space (which is what the old
P2M tree was using) is partially disassembled. The cleanup_highmap
has removed the PMD entries from 0-16MB and anything past _brk_end
up to the max_pfn_mapped (which is the end of the ramdisk).

We have revectored the P2M tree (and the one for save/restore as well)
to use new shiny __va address to new MFNs. The xen_start_info
has been taken care of already in 'xen_setup_kernel_pagetable()' and
xen_start_info->shared_info in 'xen_setup_shared_info()', so
we are free to roam and delete PMD entries - which is exactly what
we are going to do. We rip out the __ka for the old P2M array.

-- xen/mmu:   Remove from __ka space PMD entries for

At this stage, the __ka address space (which is what the old
P2M tree was using) is partially disassembled. The cleanup_highmap
has removed the PMD entries from 0-16MB and anything past _brk_end
up to the max_pfn_mapped (which is the end of the ramdisk).

The xen_remove_p2m_tree function and the code around it have ripped
out the __ka mappings for the old P2M array.

Here we continue doing the same up to where the Xen page-tables were.
It is safe to do so, as the page-tables are addressed using __va.
For good measure we delete anything that is within MODULES_VADDR
and up to the end of the PMD.

At this point the __ka only contains PMD entries for the start
of the kernel up to __brk.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 16:13:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 16:13:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T22hO-0006Fb-2X; Thu, 16 Aug 2012 16:13:30 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T22hM-00067f-4e
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 16:13:28 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1345133600!9447666!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDcwMTY5NA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27243 invoked from network); 16 Aug 2012 16:13:21 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-16.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 16 Aug 2012 16:13:21 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7GGDI4o027562
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 16 Aug 2012 16:13:19 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7GGDIY1014645
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 16 Aug 2012 16:13:18 GMT
Received: from abhmt105.oracle.com (abhmt105.oracle.com [141.146.116.57])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7GGDICj011317; Thu, 16 Aug 2012 11:13:18 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 16 Aug 2012 09:13:17 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 1F84D4035A; Thu, 16 Aug 2012 12:03:31 -0400 (EDT)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xensource.com
Date: Thu, 16 Aug 2012 12:03:26 -0400
Message-Id: <1345133009-21941-9-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.7.7.6
In-Reply-To: <1345133009-21941-1-git-send-email-konrad.wilk@oracle.com>
References: <1345133009-21941-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH 08/11] xen/p2m: Add logic to revector a P2M tree
	to use __va leafs.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

During bootup Xen supplies us with a P2M array. It sticks
it right after the ramdisk, as can be seen with a 128GB PV guest:

(certain parts removed for clarity):
xc_dom_build_image: called
xc_dom_alloc_segment:   kernel       : 0xffffffff81000000 -> 0xffffffff81e43000  (pfn 0x1000 + 0xe43 pages)
xc_dom_pfn_to_ptr: domU mapping: pfn 0x1000+0xe43 at 0x7f097d8bf000
xc_dom_alloc_segment:   ramdisk      : 0xffffffff81e43000 -> 0xffffffff925c7000  (pfn 0x1e43 + 0x10784 pages)
xc_dom_pfn_to_ptr: domU mapping: pfn 0x1e43+0x10784 at 0x7f0952dd2000
xc_dom_alloc_segment:   phys2mach    : 0xffffffff925c7000 -> 0xffffffffa25c7000  (pfn 0x125c7 + 0x10000 pages)
xc_dom_pfn_to_ptr: domU mapping: pfn 0x125c7+0x10000 at 0x7f0942dd2000
xc_dom_alloc_page   :   start info   : 0xffffffffa25c7000 (pfn 0x225c7)
xc_dom_alloc_page   :   xenstore     : 0xffffffffa25c8000 (pfn 0x225c8)
xc_dom_alloc_page   :   console      : 0xffffffffa25c9000 (pfn 0x225c9)
nr_page_tables: 0x0000ffffffffffff/48: 0xffff000000000000 -> 0xffffffffffffffff, 1 table(s)
nr_page_tables: 0x0000007fffffffff/39: 0xffffff8000000000 -> 0xffffffffffffffff, 1 table(s)
nr_page_tables: 0x000000003fffffff/30: 0xffffffff80000000 -> 0xffffffffbfffffff, 1 table(s)
nr_page_tables: 0x00000000001fffff/21: 0xffffffff80000000 -> 0xffffffffa27fffff, 276 table(s)
xc_dom_alloc_segment:   page tables  : 0xffffffffa25ca000 -> 0xffffffffa26e1000  (pfn 0x225ca + 0x117 pages)
xc_dom_pfn_to_ptr: domU mapping: pfn 0x225ca+0x117 at 0x7f097d7a8000
xc_dom_alloc_page   :   boot stack   : 0xffffffffa26e1000 (pfn 0x226e1)
xc_dom_build_image  : virt_alloc_end : 0xffffffffa26e2000
xc_dom_build_image  : virt_pgtab_end : 0xffffffffa2800000

So the physical memory and virtual (using __START_KERNEL_map addresses)
layout looks as so:

  phys                             __ka
/------------\                   /-------------------\
| 0          | empty             | 0xffffffff80000000|
| ..         |                   | ..                |
| 16MB       | <= kernel starts  | 0xffffffff81000000|
| ..         |                   |                   |
| 30MB       | <= kernel ends => | 0xffffffff81e43000|
| ..         |  & ramdisk starts | ..                |
| 293MB      | <= ramdisk ends=> | 0xffffffff925c7000|
| ..         |  & P2M starts     | ..                |
| ..         |                   | ..                |
| 549MB      | <= P2M ends    => | 0xffffffffa25c7000|
| ..         | start_info        | 0xffffffffa25c7000|
| ..         | xenstore          | 0xffffffffa25c8000|
| ..         | console           | 0xffffffffa25c9000|
| 549MB      | <= page tables => | 0xffffffffa25ca000|
| ..         |                   |                   |
| 550MB      | <= PGT end     => | 0xffffffffa26e1000|
| ..         | boot stack        |                   |
\------------/                   \-------------------/

As can be seen, the ramdisk, P2M and pagetables take up a good
chunk of the __ka address space. That is a problem, since
MODULES_VADDR starts at 0xffffffffa0000000 - and the P2M sits
right in there! During bootup this results in the inability to
load modules, with this error:

------------[ cut here ]------------
WARNING: at /home/konrad/ssd/linux/mm/vmalloc.c:106 vmap_page_range_noflush+0x2d9/0x370()
Call Trace:
 [<ffffffff810719fa>] warn_slowpath_common+0x7a/0xb0
 [<ffffffff81030279>] ? __raw_callee_save_xen_pmd_val+0x11/0x1e
 [<ffffffff81071a45>] warn_slowpath_null+0x15/0x20
 [<ffffffff81130b89>] vmap_page_range_noflush+0x2d9/0x370
 [<ffffffff81130c4d>] map_vm_area+0x2d/0x50
 [<ffffffff811326d0>] __vmalloc_node_range+0x160/0x250
 [<ffffffff810c5369>] ? module_alloc_update_bounds+0x19/0x80
 [<ffffffff810c6186>] ? load_module+0x66/0x19c0
 [<ffffffff8105cadc>] module_alloc+0x5c/0x60
 [<ffffffff810c5369>] ? module_alloc_update_bounds+0x19/0x80
 [<ffffffff810c5369>] module_alloc_update_bounds+0x19/0x80
 [<ffffffff810c70c3>] load_module+0xfa3/0x19c0
 [<ffffffff812491f6>] ? security_file_permission+0x86/0x90
 [<ffffffff810c7b3a>] sys_init_module+0x5a/0x220
 [<ffffffff815ce339>] system_call_fastpath+0x16/0x1b
---[ end trace fd8f7704fdea0291 ]---
vmalloc: allocation failure, allocated 16384 of 20480 bytes
modprobe: page allocation failure: order:0, mode:0xd2

Since the __va and __ka are 1:1 up to MODULES_VADDR and
cleanup_highmap rids __ka of the ramdisk mapping, what
we want to do is similar - get rid of the P2M in the __ka
address space. There are two ways of fixing this:

 1) Make all P2M lookups use the __va address instead of the
    __ka address. This means we can safely erase from the
    __ka space the PMD pointers that point to the PFNs of the
    P2M array and be OK.
 2) Allocate a new array, copy the existing P2M into it,
    revector the P2M tree to use that, and return the old
    P2M to the memory allocator. This has the advantage that
    it sets the stage for using the XEN_ELF_NOTE_INIT_P2M
    feature. That feature allows us to set the exact virtual
    address space we want for the P2M - and allows us to
    boot as the initial domain on large machines.

So we pick option 2).

This patch only lays the groundwork in the P2M code. The patch
that modifies the MMU is called "xen/mmu: Copy and revector the P2M tree."

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/xen/p2m.c     |   70 ++++++++++++++++++++++++++++++++++++++++++++++++
 arch/x86/xen/xen-ops.h |    1 +
 2 files changed, 71 insertions(+), 0 deletions(-)

diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
index 6a2bfa4..bbfd085 100644
--- a/arch/x86/xen/p2m.c
+++ b/arch/x86/xen/p2m.c
@@ -394,7 +394,77 @@ void __init xen_build_dynamic_phys_to_machine(void)
 	 * Xen provided pagetable). Do it later in xen_reserve_internals.
 	 */
 }
+#ifdef CONFIG_X86_64
+#include <linux/bootmem.h>
+unsigned long __init xen_revector_p2m_tree(void)
+{
+	unsigned long va_start;
+	unsigned long va_end;
+	unsigned long pfn;
+	unsigned long *mfn_list = NULL;
+	unsigned long size;
+
+	va_start = xen_start_info->mfn_list;
+	/*We copy in increments of P2M_PER_PAGE * sizeof(unsigned long),
+	 * so make sure it is rounded up to that */
+	size = PAGE_ALIGN(xen_start_info->nr_pages * sizeof(unsigned long));
+	va_end = va_start + size;
+
+	/* If we were revectored already, don't do it again. */
+	if (va_start <= __START_KERNEL_map && va_start >= __PAGE_OFFSET)
+		return 0;
+
+	mfn_list = alloc_bootmem_align(size, PAGE_SIZE);
+	if (!mfn_list) {
+		pr_warn("Could not allocate space for a new P2M tree!\n");
+		return xen_start_info->mfn_list;
+	}
+	/* Fill it out with INVALID_P2M_ENTRY value */
+	memset(mfn_list, 0xFF, size);
+
+	for (pfn = 0; pfn < ALIGN(MAX_DOMAIN_PAGES, P2M_PER_PAGE); pfn += P2M_PER_PAGE) {
+		unsigned topidx = p2m_top_index(pfn);
+		unsigned mididx;
+		unsigned long *mid_p;
+
+		if (!p2m_top[topidx])
+			continue;
+
+		if (p2m_top[topidx] == p2m_mid_missing)
+			continue;
+
+		mididx = p2m_mid_index(pfn);
+		mid_p = p2m_top[topidx][mididx];
+		if (!mid_p)
+			continue;
+		if ((mid_p == p2m_missing) || (mid_p == p2m_identity))
+			continue;
+
+		if ((unsigned long)mid_p == INVALID_P2M_ENTRY)
+			continue;
+
+		/* The old va. Rebase it on mfn_list */
+		if (mid_p >= (unsigned long *)va_start && mid_p <= (unsigned long *)va_end) {
+			unsigned long *new;
+
+			new = &mfn_list[pfn];
+
+			copy_page(new, mid_p);
+			p2m_top[topidx][mididx] = &mfn_list[pfn];
+			p2m_top_mfn_p[topidx][mididx] = virt_to_mfn(&mfn_list[pfn]);
 
+		}
+		/* This should be the leafs allocated for identity from _brk. */
+	}
+	return (unsigned long)mfn_list;
+
+}
+#else
+unsigned long __init xen_revector_p2m_tree(void)
+{
+	return 0;
+}
+#endif
 unsigned long get_phys_to_machine(unsigned long pfn)
 {
 	unsigned topidx, mididx, idx;
diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h
index 2230f57..bb5a810 100644
--- a/arch/x86/xen/xen-ops.h
+++ b/arch/x86/xen/xen-ops.h
@@ -45,6 +45,7 @@ void xen_hvm_init_shared_info(void);
 void xen_unplug_emulated_devices(void);
 
 void __init xen_build_dynamic_phys_to_machine(void);
+unsigned long __init xen_revector_p2m_tree(void);
 
 void xen_init_irq_ops(void);
 void xen_setup_timer(int cpu);
-- 
1.7.7.6


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 16:22:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 16:22:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T22qA-00080m-AH; Thu, 16 Aug 2012 16:22:34 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T22q8-0007zn-3r
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 16:22:32 +0000
Received: from [85.158.138.51:22991] by server-2.bemta-3.messagelabs.com id
	84/2A-17748-74E1D205; Thu, 16 Aug 2012 16:22:31 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-4.tower-174.messagelabs.com!1345134149!28623226!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDcwMTY5NA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17785 invoked from network); 16 Aug 2012 16:22:30 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-4.tower-174.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 16 Aug 2012 16:22:30 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7GGMRR5005386
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 16 Aug 2012 16:22:27 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7GGMQbt016518
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 16 Aug 2012 16:22:26 GMT
Received: from abhmt117.oracle.com (abhmt117.oracle.com [141.146.116.69])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7GGMPPN018266; Thu, 16 Aug 2012 11:22:25 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 16 Aug 2012 09:22:25 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 8D3F9402EF; Thu, 16 Aug 2012 12:12:39 -0400 (EDT)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xensource.com
Date: Thu, 16 Aug 2012 12:12:34 -0400
Message-Id: <1345133558-23341-3-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.7.7.6
In-Reply-To: <1345133558-23341-1-git-send-email-konrad.wilk@oracle.com>
References: <1345133558-23341-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH 2/6] xen/swiotlb: Fix compile warnings when
	using plain integer instead of NULL pointer.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

arch/x86/xen/pci-swiotlb-xen.c:96:1: warning: Using plain integer as NULL pointer
arch/x86/xen/pci-swiotlb-xen.c:96:1: warning: Using plain integer as NULL pointer

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/xen/pci-swiotlb-xen.c |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/xen/pci-swiotlb-xen.c b/arch/x86/xen/pci-swiotlb-xen.c
index 031d8bc..1667602 100644
--- a/arch/x86/xen/pci-swiotlb-xen.c
+++ b/arch/x86/xen/pci-swiotlb-xen.c
@@ -92,6 +92,6 @@ int pci_xen_swiotlb_init_late(void)
 EXPORT_SYMBOL_GPL(pci_xen_swiotlb_init_late);
 
 IOMMU_INIT_FINISH(pci_xen_swiotlb_detect,
-		  0,
+		  NULL,
 		  pci_xen_swiotlb_init,
-		  0);
+		  NULL);
-- 
1.7.7.6


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 16:22:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 16:22:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T22q8-0007zx-5D; Thu, 16 Aug 2012 16:22:32 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T22q7-0007ze-2A
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 16:22:31 +0000
Received: from [85.158.139.83:26553] by server-2.bemta-5.messagelabs.com id
	0E/9E-10142-64E1D205; Thu, 16 Aug 2012 16:22:30 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-9.tower-182.messagelabs.com!1345134148!27820287!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDcwMTY5NA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26882 invoked from network); 16 Aug 2012 16:22:29 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-9.tower-182.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 16 Aug 2012 16:22:29 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7GGMR29005387
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 16 Aug 2012 16:22:27 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7GGMQs5025903
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 16 Aug 2012 16:22:26 GMT
Received: from abhmt118.oracle.com (abhmt118.oracle.com [141.146.116.70])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7GGMQBO018267; Thu, 16 Aug 2012 11:22:26 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 16 Aug 2012 09:22:25 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 98E68402F0; Thu, 16 Aug 2012 12:12:39 -0400 (EDT)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xensource.com
Date: Thu, 16 Aug 2012 12:12:35 -0400
Message-Id: <1345133558-23341-4-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.7.7.6
In-Reply-To: <1345133558-23341-1-git-send-email-konrad.wilk@oracle.com>
References: <1345133558-23341-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH 3/6]
	xen/apic/xenbus/swiotlb/pcifront/grant/tmem: Make functions
	or variables static.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

There is no need for those functions/variables to be visible. Make them
static and also fix the compile warnings of this sort:

drivers/xen/<some file>.c: warning: symbol '<blah>' was not declared. Should it be static?

Some of them just require including the header file that
declares the functions.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/xen/apic.c                     |    3 ++-
 arch/x86/xen/enlighten.c                |    2 ++
 arch/x86/xen/pci-swiotlb-xen.c          |    2 ++
 arch/x86/xen/platform-pci-unplug.c      |    1 +
 drivers/pci/xen-pcifront.c              |    2 +-
 drivers/xen/gntdev.c                    |    2 +-
 drivers/xen/grant-table.c               |   13 ++++++-------
 drivers/xen/swiotlb-xen.c               |    2 +-
 drivers/xen/tmem.c                      |    1 +
 drivers/xen/xenbus/xenbus_dev_backend.c |    2 +-
 drivers/xen/xenbus/xenbus_probe.c       |    4 ++--
 11 files changed, 20 insertions(+), 14 deletions(-)

diff --git a/arch/x86/xen/apic.c b/arch/x86/xen/apic.c
index ec57bd3..7005ced 100644
--- a/arch/x86/xen/apic.c
+++ b/arch/x86/xen/apic.c
@@ -6,8 +6,9 @@
 
 #include <xen/xen.h>
 #include <xen/interface/physdev.h>
+#include "xen-ops.h"
 
-unsigned int xen_io_apic_read(unsigned apic, unsigned reg)
+static unsigned int xen_io_apic_read(unsigned apic, unsigned reg)
 {
 	struct physdev_apic apic_op;
 	int ret;
diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index acf906c..2c36e6e 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -81,6 +81,8 @@
 #include "smp.h"
 #include "multicalls.h"
 
+#include <xen/events.h>
+
 EXPORT_SYMBOL_GPL(hypercall_page);
 
 DEFINE_PER_CPU(struct vcpu_info *, xen_vcpu);
diff --git a/arch/x86/xen/pci-swiotlb-xen.c b/arch/x86/xen/pci-swiotlb-xen.c
index 1667602..db4db14 100644
--- a/arch/x86/xen/pci-swiotlb-xen.c
+++ b/arch/x86/xen/pci-swiotlb-xen.c
@@ -13,6 +13,8 @@
 #include <asm/dma.h>
 #endif
 #include <linux/export.h>
+
+#include <asm/xen/swiotlb-xen.h>
 int xen_swiotlb __read_mostly;
 
 static struct dma_map_ops xen_swiotlb_dma_ops = {
diff --git a/arch/x86/xen/platform-pci-unplug.c b/arch/x86/xen/platform-pci-unplug.c
index ffcf261..0a78524 100644
--- a/arch/x86/xen/platform-pci-unplug.c
+++ b/arch/x86/xen/platform-pci-unplug.c
@@ -24,6 +24,7 @@
 #include <linux/module.h>
 
 #include <xen/platform_pci.h>
+#include "xen-ops.h"
 
 #define XEN_PLATFORM_ERR_MAGIC -1
 #define XEN_PLATFORM_ERR_PROTOCOL -2
diff --git a/drivers/pci/xen-pcifront.c b/drivers/pci/xen-pcifront.c
index ca92801..ebd1ddb 100644
--- a/drivers/pci/xen-pcifront.c
+++ b/drivers/pci/xen-pcifront.c
@@ -237,7 +237,7 @@ static int pcifront_bus_write(struct pci_bus *bus, unsigned int devfn,
 	return errno_to_pcibios_err(do_pci_op(pdev, &op));
 }
 
-struct pci_ops pcifront_bus_ops = {
+static struct pci_ops pcifront_bus_ops = {
 	.read = pcifront_bus_read,
 	.write = pcifront_bus_write,
 };
diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
index 1ffd03b..163b7e9 100644
--- a/drivers/xen/gntdev.c
+++ b/drivers/xen/gntdev.c
@@ -445,7 +445,7 @@ static void mn_release(struct mmu_notifier *mn,
 	spin_unlock(&priv->lock);
 }
 
-struct mmu_notifier_ops gntdev_mmu_ops = {
+static struct mmu_notifier_ops gntdev_mmu_ops = {
 	.release                = mn_release,
 	.invalidate_page        = mn_invl_page,
 	.invalidate_range_start = mn_invl_range_start,
diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index 0bfc1ef..4061943 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -285,10 +285,9 @@ int gnttab_grant_foreign_access(domid_t domid, unsigned long frame,
 }
 EXPORT_SYMBOL_GPL(gnttab_grant_foreign_access);
 
-void gnttab_update_subpage_entry_v2(grant_ref_t ref, domid_t domid,
-				    unsigned long frame, int flags,
-				    unsigned page_off,
-				    unsigned length)
+static void gnttab_update_subpage_entry_v2(grant_ref_t ref, domid_t domid,
+					   unsigned long frame, int flags,
+					   unsigned page_off, unsigned length)
 {
 	gnttab_shared.v2[ref].sub_page.frame = frame;
 	gnttab_shared.v2[ref].sub_page.page_off = page_off;
@@ -345,9 +344,9 @@ bool gnttab_subpage_grants_available(void)
 }
 EXPORT_SYMBOL_GPL(gnttab_subpage_grants_available);
 
-void gnttab_update_trans_entry_v2(grant_ref_t ref, domid_t domid,
-				  int flags, domid_t trans_domid,
-				  grant_ref_t trans_gref)
+static void gnttab_update_trans_entry_v2(grant_ref_t ref, domid_t domid,
+					 int flags, domid_t trans_domid,
+					 grant_ref_t trans_gref)
 {
 	gnttab_shared.v2[ref].transitive.trans_domid = trans_domid;
 	gnttab_shared.v2[ref].transitive.gref = trans_gref;
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 09e25a6..307a908 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -52,7 +52,7 @@ static unsigned long xen_io_tlb_nslabs;
  * Quick lookup value of the bus address of the IOTLB.
  */
 
-u64 start_dma_addr;
+static u64 start_dma_addr;
 
 static dma_addr_t xen_phys_to_bus(phys_addr_t paddr)
 {
diff --git a/drivers/xen/tmem.c b/drivers/xen/tmem.c
index 89f264c..144564e 100644
--- a/drivers/xen/tmem.c
+++ b/drivers/xen/tmem.c
@@ -21,6 +21,7 @@
 #include <asm/xen/hypercall.h>
 #include <asm/xen/page.h>
 #include <asm/xen/hypervisor.h>
+#include <xen/tmem.h>
 
 #define TMEM_CONTROL               0
 #define TMEM_NEW_POOL              1
diff --git a/drivers/xen/xenbus/xenbus_dev_backend.c b/drivers/xen/xenbus/xenbus_dev_backend.c
index be738c4..d730008 100644
--- a/drivers/xen/xenbus/xenbus_dev_backend.c
+++ b/drivers/xen/xenbus/xenbus_dev_backend.c
@@ -107,7 +107,7 @@ static int xenbus_backend_mmap(struct file *file, struct vm_area_struct *vma)
 	return 0;
 }
 
-const struct file_operations xenbus_backend_fops = {
+static const struct file_operations xenbus_backend_fops = {
 	.open = xenbus_backend_open,
 	.mmap = xenbus_backend_mmap,
 	.unlocked_ioctl = xenbus_backend_ioctl,
diff --git a/drivers/xen/xenbus/xenbus_probe.c b/drivers/xen/xenbus/xenbus_probe.c
index b793723..91d3d654 100644
--- a/drivers/xen/xenbus/xenbus_probe.c
+++ b/drivers/xen/xenbus/xenbus_probe.c
@@ -324,8 +324,8 @@ static int cmp_dev(struct device *dev, void *data)
 	return 0;
 }
 
-struct xenbus_device *xenbus_device_find(const char *nodename,
-					 struct bus_type *bus)
+static struct xenbus_device *xenbus_device_find(const char *nodename,
+						struct bus_type *bus)
 {
 	struct xb_find_info info = { .dev = NULL, .nodename = nodename };
 
-- 
1.7.7.6


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 16:22:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 16:22:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T22q9-00080S-Ho; Thu, 16 Aug 2012 16:22:33 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T22q8-0007zo-2B
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 16:22:32 +0000
Received: from [85.158.143.99:15254] by server-1.bemta-4.messagelabs.com id
	36/25-07754-74E1D205; Thu, 16 Aug 2012 16:22:31 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-10.tower-216.messagelabs.com!1345134149!22161009!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjk0MzAx\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17714 invoked from network); 16 Aug 2012 16:22:30 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-10.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 16 Aug 2012 16:22:30 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7GGMRli018560
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 16 Aug 2012 16:22:28 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7GGMQ4s022734
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 16 Aug 2012 16:22:27 GMT
Received: from abhmt107.oracle.com (abhmt107.oracle.com [141.146.116.59])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7GGMQhs013415; Thu, 16 Aug 2012 11:22:26 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 16 Aug 2012 09:22:26 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id BA5184032D; Thu, 16 Aug 2012 12:12:39 -0400 (EDT)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xensource.com
Date: Thu, 16 Aug 2012 12:12:38 -0400
Message-Id: <1345133558-23341-7-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.7.7.6
In-Reply-To: <1345133558-23341-1-git-send-email-konrad.wilk@oracle.com>
References: <1345133558-23341-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: x86@kernel.org, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH 6/6] iommu: Fixes duplicate const warning.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

arch/x86/kernel/pci-dma.c:275:1: warning: duplicate const
arch/x86/kernel/pci-swiotlb.c:68:1: warning: duplicate const
arch/x86/kernel/pci-swiotlb.c:86:1: warning: duplicate const
arch/x86/kernel/amd_gart_64.c:899:1: warning: duplicate const
arch/x86/kernel/pci-calgary_64.c:1600:1: warning: duplicate const
arch/x86/xen/pci-swiotlb-xen.c:96:1: warning: duplicate const
drivers/iommu/amd_iommu_init.c:1831:1: warning: duplicate const
drivers/iommu/dmar.c:1350:1: warning: duplicate const

when using smatch.

CC: x86@kernel.org
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/include/asm/iommu_table.h |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/arch/x86/include/asm/iommu_table.h b/arch/x86/include/asm/iommu_table.h
index f229b13..bbf8fb2 100644
--- a/arch/x86/include/asm/iommu_table.h
+++ b/arch/x86/include/asm/iommu_table.h
@@ -48,7 +48,7 @@ struct iommu_table_entry {
 
 
 #define __IOMMU_INIT(_detect, _depend, _early_init, _late_init, _finish)\
-	static const struct iommu_table_entry const			\
+	static const struct iommu_table_entry				\
 		__iommu_entry_##_detect __used				\
 	__attribute__ ((unused, __section__(".iommu_table"),		\
 			aligned((sizeof(void *)))))	\
-- 
1.7.7.6


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 16:22:39 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 16:22:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T22q9-00080Z-UL; Thu, 16 Aug 2012 16:22:33 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T22q8-0007zp-3F
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 16:22:32 +0000
Received: from [85.158.143.35:19754] by server-3.bemta-4.messagelabs.com id
	2E/9C-09529-74E1D205; Thu, 16 Aug 2012 16:22:31 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1345134149!15756795!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDcwMTY5NA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26247 invoked from network); 16 Aug 2012 16:22:30 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-13.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 16 Aug 2012 16:22:30 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7GGMRZN005385
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 16 Aug 2012 16:22:27 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7GGMQnp025899
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 16 Aug 2012 16:22:26 GMT
Received: from abhmt108.oracle.com (abhmt108.oracle.com [141.146.116.60])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7GGMPX2002287; Thu, 16 Aug 2012 11:22:25 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 16 Aug 2012 09:22:25 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 85D29402C0; Thu, 16 Aug 2012 12:12:39 -0400 (EDT)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xensource.com
Date: Thu, 16 Aug 2012 12:12:33 -0400
Message-Id: <1345133558-23341-2-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.7.7.6
In-Reply-To: <1345133558-23341-1-git-send-email-konrad.wilk@oracle.com>
References: <1345133558-23341-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH 1/6] xen/swiotlb: Remove functions not needed
	anymore.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Sparse warns us of:
drivers/xen/swiotlb-xen.c:506:1: warning: symbol 'xen_swiotlb_map_sg' was not declared. Should it be static?
drivers/xen/swiotlb-xen.c:534:1: warning: symbol 'xen_swiotlb_unmap_sg' was not declared. Should it be static?

and it looks like we do not need these functions at all.
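The removed functions follow a common pattern: a thin wrapper that only forwards to the `_attrs` variant with a NULL attrs argument. A hypothetical, simplified stand-in (names and types are illustrative, not the real Xen/DMA API) shows why such a wrapper can be deleted:

```c
#include <stddef.h>

struct dummy_dev   { int unused; };
struct dummy_attrs { int unused; };

/* Stand-in for the _attrs variant that does the real work. */
static int demo_map_sg_attrs(struct dummy_dev *hwdev, int nelems,
                             struct dummy_attrs *attrs)
{
	(void)hwdev;
	(void)attrs;
	return nelems;	/* pretend every entry was mapped */
}

/* The removed wrapper adds no behavior: it only forwards with NULL
 * attrs, so every caller can call the _attrs variant directly. */
static int demo_map_sg(struct dummy_dev *hwdev, int nelems)
{
	return demo_map_sg_attrs(hwdev, nelems, NULL);
}
```

Once no caller uses the wrapper, deleting it (and its stale, already commented-out header declarations) loses nothing.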

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 drivers/xen/swiotlb-xen.c |   16 ----------------
 include/xen/swiotlb-xen.h |    9 ---------
 2 files changed, 0 insertions(+), 25 deletions(-)

diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 1942a3e..09e25a6 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -502,14 +502,6 @@ xen_swiotlb_map_sg_attrs(struct device *hwdev, struct scatterlist *sgl,
 }
 EXPORT_SYMBOL_GPL(xen_swiotlb_map_sg_attrs);
 
-int
-xen_swiotlb_map_sg(struct device *hwdev, struct scatterlist *sgl, int nelems,
-		   enum dma_data_direction dir)
-{
-	return xen_swiotlb_map_sg_attrs(hwdev, sgl, nelems, dir, NULL);
-}
-EXPORT_SYMBOL_GPL(xen_swiotlb_map_sg);
-
 /*
  * Unmap a set of streaming mode DMA translations.  Again, cpu read rules
  * concerning calls here are the same as for swiotlb_unmap_page() above.
@@ -530,14 +522,6 @@ xen_swiotlb_unmap_sg_attrs(struct device *hwdev, struct scatterlist *sgl,
 }
 EXPORT_SYMBOL_GPL(xen_swiotlb_unmap_sg_attrs);
 
-void
-xen_swiotlb_unmap_sg(struct device *hwdev, struct scatterlist *sgl, int nelems,
-		     enum dma_data_direction dir)
-{
-	return xen_swiotlb_unmap_sg_attrs(hwdev, sgl, nelems, dir, NULL);
-}
-EXPORT_SYMBOL_GPL(xen_swiotlb_unmap_sg);
-
 /*
  * Make physical memory consistent for a set of streaming mode DMA translations
  * after a transfer.
diff --git a/include/xen/swiotlb-xen.h b/include/xen/swiotlb-xen.h
index d38d984..d8cba6f 100644
--- a/include/xen/swiotlb-xen.h
+++ b/include/xen/swiotlb-xen.h
@@ -24,15 +24,6 @@ extern dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
 extern void xen_swiotlb_unmap_page(struct device *hwdev, dma_addr_t dev_addr,
 				   size_t size, enum dma_data_direction dir,
 				   struct dma_attrs *attrs);
-/*
-extern int
-xen_swiotlb_map_sg(struct device *hwdev, struct scatterlist *sg, int nents,
-		   enum dma_data_direction dir);
-
-extern void
-xen_swiotlb_unmap_sg(struct device *hwdev, struct scatterlist *sg, int nents,
-		     enum dma_data_direction dir);
-*/
 extern int
 xen_swiotlb_map_sg_attrs(struct device *hwdev, struct scatterlist *sgl,
 			 int nelems, enum dma_data_direction dir,
-- 
1.7.7.6


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 16:22:39 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 16:22:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T22q7-0007zq-PR; Thu, 16 Aug 2012 16:22:31 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T22q6-0007zc-Vw
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 16:22:31 +0000
Received: from [85.158.139.83:26529] by server-7.bemta-5.messagelabs.com id
	25/46-32634-64E1D205; Thu, 16 Aug 2012 16:22:30 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-5.tower-182.messagelabs.com!1345134148!28552374!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjk0MzAx\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24166 invoked from network); 16 Aug 2012 16:22:29 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-5.tower-182.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 16 Aug 2012 16:22:29 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7GGMRNe018557
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 16 Aug 2012 16:22:27 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7GGMPp2016513
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 16 Aug 2012 16:22:26 GMT
Received: from abhmt104.oracle.com (abhmt104.oracle.com [141.146.116.56])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7GGMPGw013403; Thu, 16 Aug 2012 11:22:25 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 16 Aug 2012 09:22:25 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 7E7C7402E8; Thu, 16 Aug 2012 12:12:39 -0400 (EDT)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xensource.com
Date: Thu, 16 Aug 2012 12:12:32 -0400
Message-Id: <1345133558-23341-1-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.7.7.6
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Subject: [Xen-devel] [PATCH] Various cleanups in the code base found with
	sparse (v1) for v3.7.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I've started running sparse to make sure we are not introducing
any potential issues. It does not report anything serious in the Xen
code-base, but it does report a number of warnings, so these patches
take care of removing them.

Most of the warnings were of the type: <function or value> was not declared.

In most cases the fix was simply to mark the symbol 'static' or to
include the header file in the C code.
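The class of fix sparse suggests can be sketched in a few lines (the function names here are illustrative, not from the series): a symbol defined and used only in one C file gets internal linkage.

```c
/* Before:
 *     int helper_only_used_here(void) { ... }
 * External linkage with no prototype in any header, so sparse asks
 * "was not declared. Should it be static?". */

/* After: internal linkage, visible only in this translation unit. */
static int helper_only_used_here(void)
{
	return 42;
}

static int caller(void)
{
	return helper_only_used_here();
}
```

Marking the symbol static both silences the warning and keeps it out of the kernel's global namespace.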


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 16:22:39 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 16:22:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T22qC-00081e-OW; Thu, 16 Aug 2012 16:22:36 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T22qA-00080k-Lm
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 16:22:34 +0000
Received: from [85.158.143.35:21859] by server-2.bemta-4.messagelabs.com id
	0A/F8-31966-A4E1D205; Thu, 16 Aug 2012 16:22:34 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1345134152!12267216!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjk0MzAx\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29932 invoked from network); 16 Aug 2012 16:22:33 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-7.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 16 Aug 2012 16:22:33 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7GGMRqM018567
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 16 Aug 2012 16:22:28 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7GGMQdj025924
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 16 Aug 2012 16:22:27 GMT
Received: from abhmt104.oracle.com (abhmt104.oracle.com [141.146.116.56])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7GGMQPi002292; Thu, 16 Aug 2012 11:22:26 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 16 Aug 2012 09:22:26 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id AF5504032C; Thu, 16 Aug 2012 12:12:39 -0400 (EDT)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xensource.com
Date: Thu, 16 Aug 2012 12:12:37 -0400
Message-Id: <1345133558-23341-6-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.7.7.6
In-Reply-To: <1345133558-23341-1-git-send-email-konrad.wilk@oracle.com>
References: <1345133558-23341-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Jens Axboe <axboe@kernel.dk>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH 5/6] xen/blkback: Fix compile warning
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

drivers/block/xen-blkback/xenbus.c:260:5: warning: symbol 'xenvbd_sysfs_addif' was not declared. Should it be static?
drivers/block/xen-blkback/xenbus.c:284:6: warning: symbol 'xenvbd_sysfs_delif' was not declared. Should it be static?

These functions are only used within this file, so make them static.

CC: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 drivers/block/xen-blkback/xenbus.c |    6 +++---
 1 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/block/xen-blkback/xenbus.c b/drivers/block/xen-blkback/xenbus.c
index 4f66171..d0fed55 100644
--- a/drivers/block/xen-blkback/xenbus.c
+++ b/drivers/block/xen-blkback/xenbus.c
@@ -196,7 +196,7 @@ static void xen_blkif_disconnect(struct xen_blkif *blkif)
 	}
 }
 
-void xen_blkif_free(struct xen_blkif *blkif)
+static void xen_blkif_free(struct xen_blkif *blkif)
 {
 	if (!atomic_dec_and_test(&blkif->refcnt))
 		BUG();
@@ -257,7 +257,7 @@ static struct attribute_group xen_vbdstat_group = {
 VBD_SHOW(physical_device, "%x:%x\n", be->major, be->minor);
 VBD_SHOW(mode, "%s\n", be->mode);
 
-int xenvbd_sysfs_addif(struct xenbus_device *dev)
+static int xenvbd_sysfs_addif(struct xenbus_device *dev)
 {
 	int error;
 
@@ -281,7 +281,7 @@ fail1:	device_remove_file(&dev->dev, &dev_attr_physical_device);
 	return error;
 }
 
-void xenvbd_sysfs_delif(struct xenbus_device *dev)
+static void xenvbd_sysfs_delif(struct xenbus_device *dev)
 {
 	sysfs_remove_group(&dev->dev.kobj, &xen_vbdstat_group);
 	device_remove_file(&dev->dev, &dev_attr_mode);
-- 
1.7.7.6


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 16:22:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 16:22:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T22qF-00084P-DQ; Thu, 16 Aug 2012 16:22:39 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T22qB-0007zd-SZ
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 16:22:36 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1345134148!2103372!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDcwMTY5NA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24480 invoked from network); 16 Aug 2012 16:22:29 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-11.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 16 Aug 2012 16:22:29 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7GGMQB2005382
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 16 Aug 2012 16:22:27 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7GGMQCu000938
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 16 Aug 2012 16:22:26 GMT
Received: from abhmt112.oracle.com (abhmt112.oracle.com [141.146.116.64])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7GGMPl1013406; Thu, 16 Aug 2012 11:22:26 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 16 Aug 2012 09:22:25 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id A563B4031E; Thu, 16 Aug 2012 12:12:39 -0400 (EDT)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xensource.com
Date: Thu, 16 Aug 2012 12:12:36 -0400
Message-Id: <1345133558-23341-5-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.7.7.6
In-Reply-To: <1345133558-23341-1-git-send-email-konrad.wilk@oracle.com>
References: <1345133558-23341-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH 4/6] xen/mmu: Fix compile warnings.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

linux/arch/x86/xen/mmu.c:1788:14: warning: comparison between pointer and integer [enabled by default]

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/xen/mmu.c |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index 90d31a2..4911354 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -1786,11 +1786,11 @@ void __init xen_setup_machphys_mapping(void)
 {
 	struct xen_machphys_mapping mapping;
 
-	if (HYPERVISOR_memory_op(XENMEM_machphys_mapping, &mapping) == 0) {
+	if (HYPERVISOR_memory_op(XENMEM_machphys_mapping, (void *)&mapping) == 0) {
 		machine_to_phys_mapping = (unsigned long *)mapping.v_start;
 		machine_to_phys_nr = mapping.max_mfn + 1;
 	} else {
-		machine_to_phys_nr = MACH2PHYS_NR_ENTRIES;
+		machine_to_phys_nr = (unsigned long)MACH2PHYS_NR_ENTRIES;
 	}
 #ifdef CONFIG_X86_32
 	WARN_ON((machine_to_phys_mapping + (machine_to_phys_nr - 1))
-- 
1.7.7.6


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 16:25:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 16:25:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T22si-0000Uj-0v; Thu, 16 Aug 2012 16:25:12 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1T22sf-0000U7-Rk
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 16:25:10 +0000
Received: from [85.158.139.83:47117] by server-3.bemta-5.messagelabs.com id
	A1/59-27237-5EE1D205; Thu, 16 Aug 2012 16:25:09 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-6.tower-182.messagelabs.com!1345134305!24644711!1
X-Originating-IP: [74.125.83.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11253 invoked from network); 16 Aug 2012 16:25:06 -0000
Received: from mail-ee0-f45.google.com (HELO mail-ee0-f45.google.com)
	(74.125.83.45)
	by server-6.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 16:25:06 -0000
Received: by eeke53 with SMTP id e53so899072eek.32
	for <xen-devel@lists.xen.org>; Thu, 16 Aug 2012 09:25:05 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:cc:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=E0nLGtcSnauoR3tDehvj7nAFasuDkTFOOkEdUWBXbZ8=;
	b=bi2zZ6+NbI1NcSpoehHctdxGkNN5NxFRsGBSE2h39KIysbd8UIIeXNJs+y9D2sYbBu
	MCJXiL5cWd2ynVsJ9cjp+pML07AfcIAeuj1Q3/QM+dO3rdMP26jIyeZ1RShfWMzhLTPi
	uDAEhUD/k/Damk2PpsRdwwojmwQUH0kPJd7H6UXkVp4vYbDbgfFYTSF2H8xlZS2zcNMO
	TTqTYp++I7GFdfKmgLcVZtehJajuumPuVxCbAlV0o3uf2mBuUcJP9IFD/EZjA9ATmXi1
	LviDBWHR0EyirvctvRQ7/BR2zIB+cZPIWoh2/jRjSSyY+vyLjsNWpYI68naUqZAx+egf
	7o9A==
Received: by 10.14.225.200 with SMTP id z48mr2379485eep.39.1345134305653;
	Thu, 16 Aug 2012 09:25:05 -0700 (PDT)
Received: from [192.168.1.68]
	(host86-157-166-190.range86-157.btcentralplus.com. [86.157.166.190])
	by mx.google.com with ESMTPS id d48sm13355777eeo.10.2012.08.16.09.25.03
	(version=SSLv3 cipher=OTHER); Thu, 16 Aug 2012 09:25:04 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.32.0.111121
Date: Thu, 16 Aug 2012 17:25:01 +0100
From: Keir Fraser <keir.xen@gmail.com>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <CC52DD6D.3C261%keir.xen@gmail.com>
Thread-Topic: [Xen-devel] [PATCH] x86/ucode: don't crash during AP bringup on
	non-Intel, non-AMD CPUs
Thread-Index: Ac17y68blXcnHtk8zkiFor+IR8CnJg==
In-Reply-To: <502D34E00200007800095977@nat28.tlf.novell.com>
Mime-version: 1.0
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] x86/ucode: don't crash during AP bringup on
 non-Intel, non-AMD CPUs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 16/08/2012 16:58, "Jan Beulich" <JBeulich@suse.com> wrote:

>>>> On 16.08.12 at 17:28, Keir Fraser <keir@xen.org> wrote:
>> On 16/08/2012 16:22, "Jan Beulich" <JBeulich@suse.com> wrote:
>> 
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> 
>> Do we support any such processors? Newer VIA processors maybe?
> 
> Exactly those - they were kind enough to lend me a system. I'm
> having a patch queued for post-4.2 to enable VMX and a few
> other vendor specific things on them, but of course a prereq for
> testing this was that the system would boot at all under Xen.

Very cool! :-)

 -- Keir

> Jan
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 16:28:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 16:28:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T22vL-0000sC-Jp; Thu, 16 Aug 2012 16:27:55 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Santosh.Jodh@citrix.com>) id 1T22vK-0000rw-Pe
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 16:27:55 +0000
X-Env-Sender: Santosh.Jodh@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1345134465!2749666!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjQyODA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21826 invoked from network); 16 Aug 2012 16:27:46 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 16:27:46 -0000
X-IronPort-AV: E=Sophos;i="4.77,778,1336363200"; d="scan'208";a="34877681"
Received: from sjcpmailmx02.citrite.net ([10.216.14.75])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 12:27:44 -0400
Received: from SJCPMAILBOX01.citrite.net ([10.216.4.72]) by
	SJCPMAILMX02.citrite.net ([10.216.14.75]) with mapi; Thu, 16 Aug 2012
	09:27:43 -0700
From: Santosh Jodh <Santosh.Jodh@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Thu, 16 Aug 2012 09:27:38 -0700
Thread-Topic: [Xen-devel] [PATCH] Dump IOMMU p2m table
Thread-Index: Ac16w25SaMBpv15nS7WbJcYpxMTXHAAbWU+A
Message-ID: <7914B38A4445B34AA16EB9F1352942F1012F0E399CF7@SJCPMAILBOX01.citrite.net>
References: <5357dccf4ba353d08e8e.1344974109@REDBLD-XS.ad.xensource.com>
	<502B7FC9020000780009500F@nat28.tlf.novell.com>
In-Reply-To: <502B7FC9020000780009500F@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
MIME-Version: 1.0
Cc: "wei.wang2@amd.com" <wei.wang2@amd.com>, "Tim \(Xen.org\)" <tim@xen.org>,
	"xiantao.zhang@intel.com" <xiantao.zhang@intel.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Dump IOMMU p2m table
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: Wednesday, August 15, 2012 1:54 AM
> To: Santosh Jodh
> Cc: wei.wang2@amd.com; xiantao.zhang@intel.com; xen-devel; Tim
> (Xen.org)
> Subject: Re: [Xen-devel] [PATCH] Dump IOMMU p2m table
> 
> >>> On 14.08.12 at 21:55, Santosh Jodh <santosh.jodh@citrix.com> wrote:
> 
> Sorry to be picky; after this many rounds I would have expected that no
> further comments would be needed.

I started off trying to match the existing structure and style of the individual files I was modifying. I also created the IOMMU p2m dump to be similar to the MMU p2m dump. Over the last few days, this has evolved into cleaning up the existing AMD code, indenting for more clarity, etc. All good points, btw.

> 
> > +static void amd_dump_p2m_table_level(struct page_info* pg, int
> level,
> > +                                     paddr_t gpa, int indent) {
> > +    paddr_t address;
> > +    void *table_vaddr, *pde;
> > +    paddr_t next_table_maddr;
> > +    int index, next_level, present;
> > +    u32 *entry;
> > +
> > +    if ( level < 1 )
> > +        return;
> > +
> > +    table_vaddr = __map_domain_page(pg);
> > +    if ( table_vaddr == NULL )
> > +    {
> > +        printk("Failed to map IOMMU domain page %"PRIpaddr"\n",
> > +                page_to_maddr(pg));
> > +        return;
> > +    }
> > +
> > +    for ( index = 0; index < PTE_PER_TABLE_SIZE; index++ )
> > +    {
> > +        if ( !(index % 2) )
> > +            process_pending_softirqs();
> > +
> > +        pde = table_vaddr + (index * IOMMU_PAGE_TABLE_ENTRY_SIZE);
> > +        next_table_maddr = amd_iommu_get_next_table_from_pte(pde);
> > +        entry = (u32*)pde;
> > +
> > +        present = get_field_from_reg_u32(entry[0],
> > +                                         IOMMU_PDE_PRESENT_MASK,
> > +                                         IOMMU_PDE_PRESENT_SHIFT);
> > +
> > +        if ( !present )
> > +            continue;
> > +
> > +        next_level = get_field_from_reg_u32(entry[0],
> > +
> IOMMU_PDE_NEXT_LEVEL_MASK,
> > +
> > + IOMMU_PDE_NEXT_LEVEL_SHIFT);
> > +
> > +        address = gpa + amd_offset_level_address(index, level);
> > +        if ( next_level >= 1 )
> > +            amd_dump_p2m_table_level(
> > +                maddr_to_page(next_table_maddr), level - 1,
> 
> Did you see Wei's cleanup patches to the code you cloned from?
> You should follow that route (replacing the ASSERT() with printing of
> the inconsistency and _not_ recursing or doing the normal printing),
> and using either "level" or "next_level"
> consistently here.

Ok - will do.

> 
> > +                address, indent + 1);
> > +        else
> > +            printk("%*s" "gfn: %08lx  mfn: %08lx\n",
> > +                   indent, " ",
> 
>             printk("%*sgfn: %08lx  mfn: %08lx\n",
>                    indent, "",
> 
> I can vaguely see the point in splitting the two strings in the first
> argument, but the extra space in the third argument is definitely wrong
> - it'll make level 1 and level 2 indistinguishable.

I misunderstood how "%*s" works. That probably explains the output Wei saw.

> 
> I also don't see how you addressed Wei's reporting of this still not
> printing correctly. I may be overlooking something, but without you
> making clear in the description what you changed over the previous
> version that's also relatively easy to happen.


Will add more history.

> 
> > +static void vtd_dump_p2m_table_level(paddr_t pt_maddr, int level,
> paddr_t gpa,
> > +                                     int indent) {
> > +    paddr_t address;
> > +    int i;
> > +    struct dma_pte *pt_vaddr, *pte;
> > +    int next_level;
> > +
> > +    if ( level < 1 )
> > +        return;
> > +
> > +    pt_vaddr = map_vtd_domain_page(pt_maddr);
> > +    if ( pt_vaddr == NULL )
> > +    {
> > +        printk("Failed to map VT-D domain page %"PRIpaddr"\n",
> pt_maddr);
> > +        return;
> > +    }
> > +
> > +    next_level = level - 1;
> > +    for ( i = 0; i < PTE_NUM; i++ )
> > +    {
> > +        if ( !(i % 2) )
> > +            process_pending_softirqs();
> > +
> > +        pte = &pt_vaddr[i];
> > +        if ( !dma_pte_present(*pte) )
> > +            continue;
> > +
> > +        address = gpa + offset_level_address(i, level);
> > +        if ( next_level >= 1 )
> > +            vtd_dump_p2m_table_level(dma_pte_addr(*pte), next_level,
> > +                                     address, indent + 1);
> > +        else
> > +            printk("%*s" "gfn: %08lx mfn: %08lx super=%d rd=%d
> wr=%d\n",
> > +                   indent, " ",
> 
> Same comment as above.

Yep - got it.

> 
> > +                   (unsigned long)(address >> PAGE_SHIFT_4K),
> > +                   (unsigned long)(pte->val >> PAGE_SHIFT_4K),
> > +                   dma_pte_superpage(*pte)? 1 : 0,
> > +                   dma_pte_read(*pte)? 1 : 0,
> > +                   dma_pte_write(*pte)? 1 : 0);
> 
> Missing spaces. Even worse - given your definitions of these macros
> there's no point in using the conditional operators here at all.
> 
> And, despite your claim in another response, this still isn't similar
> to AMD's variant (which still doesn't print any of these three
> attributes).

I meant similar in terms of the recursion logic, level checks, etc. I will just remove the extra prints to make it more similar. My original goal was to print it similarly to the existing MMU p2m table.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 16:28:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 16:28:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T22vL-0000sC-Jp; Thu, 16 Aug 2012 16:27:55 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Santosh.Jodh@citrix.com>) id 1T22vK-0000rw-Pe
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 16:27:55 +0000
X-Env-Sender: Santosh.Jodh@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1345134465!2749666!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjQyODA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21826 invoked from network); 16 Aug 2012 16:27:46 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 16:27:46 -0000
X-IronPort-AV: E=Sophos;i="4.77,778,1336363200"; d="scan'208";a="34877681"
Received: from sjcpmailmx02.citrite.net ([10.216.14.75])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 12:27:44 -0400
Received: from SJCPMAILBOX01.citrite.net ([10.216.4.72]) by
	SJCPMAILMX02.citrite.net ([10.216.14.75]) with mapi; Thu, 16 Aug 2012
	09:27:43 -0700
From: Santosh Jodh <Santosh.Jodh@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Thu, 16 Aug 2012 09:27:38 -0700
Thread-Topic: [Xen-devel] [PATCH] Dump IOMMU p2m table
Thread-Index: Ac16w25SaMBpv15nS7WbJcYpxMTXHAAbWU+A
Message-ID: <7914B38A4445B34AA16EB9F1352942F1012F0E399CF7@SJCPMAILBOX01.citrite.net>
References: <5357dccf4ba353d08e8e.1344974109@REDBLD-XS.ad.xensource.com>
	<502B7FC9020000780009500F@nat28.tlf.novell.com>
In-Reply-To: <502B7FC9020000780009500F@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
MIME-Version: 1.0
Cc: "wei.wang2@amd.com" <wei.wang2@amd.com>, "Tim \(Xen.org\)" <tim@xen.org>,
	"xiantao.zhang@intel.com" <xiantao.zhang@intel.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Dump IOMMU p2m table
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: Wednesday, August 15, 2012 1:54 AM
> To: Santosh Jodh
> Cc: wei.wang2@amd.com; xiantao.zhang@intel.com; xen-devel; Tim
> (Xen.org)
> Subject: Re: [Xen-devel] [PATCH] Dump IOMMU p2m table
> 
> >>> On 14.08.12 at 21:55, Santosh Jodh <santosh.jodh@citrix.com> wrote:
> 
> Sorry to be picky; after this many rounds I would have expected that no
> further comments would be needed.

I started off trying to code to existing structure and style in individual files I was modifying. I also created IOMMU p2m dump - similar to MMU p2m dump. Over the last few days, this has evolved into cleaning up existing AMD code, indenting for more clarity etc. All good points btw.

> 
> > +static void amd_dump_p2m_table_level(struct page_info* pg, int
> level,
> > +                                     paddr_t gpa, int indent) {
> > +    paddr_t address;
> > +    void *table_vaddr, *pde;
> > +    paddr_t next_table_maddr;
> > +    int index, next_level, present;
> > +    u32 *entry;
> > +
> > +    if ( level < 1 )
> > +        return;
> > +
> > +    table_vaddr = __map_domain_page(pg);
> > +    if ( table_vaddr == NULL )
> > +    {
> > +        printk("Failed to map IOMMU domain page %"PRIpaddr"\n",
> > +                page_to_maddr(pg));
> > +        return;
> > +    }
> > +
> > +    for ( index = 0; index < PTE_PER_TABLE_SIZE; index++ )
> > +    {
> > +        if ( !(index % 2) )
> > +            process_pending_softirqs();
> > +
> > +        pde = table_vaddr + (index * IOMMU_PAGE_TABLE_ENTRY_SIZE);
> > +        next_table_maddr = amd_iommu_get_next_table_from_pte(pde);
> > +        entry = (u32*)pde;
> > +
> > +        present = get_field_from_reg_u32(entry[0],
> > +                                         IOMMU_PDE_PRESENT_MASK,
> > +                                         IOMMU_PDE_PRESENT_SHIFT);
> > +
> > +        if ( !present )
> > +            continue;
> > +
> > +        next_level = get_field_from_reg_u32(entry[0],
> > +
> IOMMU_PDE_NEXT_LEVEL_MASK,
> > +
> > + IOMMU_PDE_NEXT_LEVEL_SHIFT);
> > +
> > +        address = gpa + amd_offset_level_address(index, level);
> > +        if ( next_level >= 1 )
> > +            amd_dump_p2m_table_level(
> > +                maddr_to_page(next_table_maddr), level - 1,
> 
> Did you see Wei's cleanup patches to the code you cloned from?
> You should follow that route (replacing the ASSERT() with printing of
> the inconsistency and _not_ recursing or doing the normal printing),
> and using either "level" or "next_level"
> consistently here.

Ok - will do.

> 
> > +                address, indent + 1);
> > +        else
> > +            printk("%*s" "gfn: %08lx  mfn: %08lx\n",
> > +                   indent, " ",
> 
>             printk("%*sgfn: %08lx  mfn: %08lx\n",
>                    indent, "",
> 
> I can vaguely see the point in splitting the two strings in the first
> argument, but the extra space in the third argument is definitely wrong
> - it'll make level 1 and level 2 indistinguishable.

I misunderstood how "%*s" works. That probably explains the output Wei saw.

> 
> I also don't see how you addressed Wei's reporting of this still not
> printing correctly. I may be overlooking something, but without you
> making clear in the description what you changed over the previous
> version that's also relatively easy to happen.


Will add more history.

> 
> > +static void vtd_dump_p2m_table_level(paddr_t pt_maddr, int level,
> paddr_t gpa,
> > +                                     int indent) {
> > +    paddr_t address;
> > +    int i;
> > +    struct dma_pte *pt_vaddr, *pte;
> > +    int next_level;
> > +
> > +    if ( level < 1 )
> > +        return;
> > +
> > +    pt_vaddr = map_vtd_domain_page(pt_maddr);
> > +    if ( pt_vaddr == NULL )
> > +    {
> > +        printk("Failed to map VT-D domain page %"PRIpaddr"\n", pt_maddr);
> > +        return;
> > +    }
> > +
> > +    next_level = level - 1;
> > +    for ( i = 0; i < PTE_NUM; i++ )
> > +    {
> > +        if ( !(i % 2) )
> > +            process_pending_softirqs();
> > +
> > +        pte = &pt_vaddr[i];
> > +        if ( !dma_pte_present(*pte) )
> > +            continue;
> > +
> > +        address = gpa + offset_level_address(i, level);
> > +        if ( next_level >= 1 )
> > +            vtd_dump_p2m_table_level(dma_pte_addr(*pte), next_level,
> > +                                     address, indent + 1);
> > +        else
> > +            printk("%*s" "gfn: %08lx mfn: %08lx super=%d rd=%d wr=%d\n",
> > +                   indent, " ",
> 
> Same comment as above.

Yep - got it.

> 
> > +                   (unsigned long)(address >> PAGE_SHIFT_4K),
> > +                   (unsigned long)(pte->val >> PAGE_SHIFT_4K),
> > +                   dma_pte_superpage(*pte)? 1 : 0,
> > +                   dma_pte_read(*pte)? 1 : 0,
> > +                   dma_pte_write(*pte)? 1 : 0);
> 
> Missing spaces. Even worse - given your definitions of these macros
> there's no point in using the conditional operators here at all.
> 
> And, despite your claim in another response, this still isn't similar
> to AMD's variant (which still doesn't print any of these three
> attributes).

I meant similar in terms of structure - the recursion logic, level checks, etc. I will remove the extra prints to make it more similar. My original goal was to print it like the existing MMU p2m table dump.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 16:36:42 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 16:36:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T233g-0001E4-KB; Thu, 16 Aug 2012 16:36:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Santosh.Jodh@citrix.com>) id 1T233f-0001Dz-F0
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 16:36:31 +0000
Received: from [85.158.143.99:52510] by server-3.bemta-4.messagelabs.com id
	68/7C-09529-E812D205; Thu, 16 Aug 2012 16:36:30 +0000
X-Env-Sender: Santosh.Jodh@citrix.com
X-Msg-Ref: server-13.tower-216.messagelabs.com!1345134988!27647493!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzE3OTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11697 invoked from network); 16 Aug 2012 16:36:30 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 16:36:30 -0000
X-IronPort-AV: E=Sophos;i="4.77,778,1336363200"; d="scan'208";a="205394190"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 12:36:14 -0400
Received: from REDBLD-XS.ad.xensource.com (10.232.6.25) by
	smtprelay.citrix.com (10.13.107.65) with Microsoft SMTP Server id
	8.3.213.0; Thu, 16 Aug 2012 12:36:13 -0400
MIME-Version: 1.0
X-Mercurial-Node: 575a53faf4e1f35330963cd41e3749d889a0e1a1
Message-ID: <575a53faf4e1f3533096.1345134973@REDBLD-XS.ad.xensource.com>
User-Agent: Mercurial-patchbomb/1.6.4
Date: Thu, 16 Aug 2012 09:36:13 -0700
From: Santosh Jodh <santosh.jodh@citrix.com>
To: <xen-devel@lists.xensource.com>
Cc: wei.wang2@amd.com, tim@xen.org, JBeulich@suse.com, xiantao.zhang@intel.com
Subject: [Xen-devel] [PATCH] Dump IOMMU p2m table
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

New key handler 'o' to dump the IOMMU p2m table for each domain.
Skips dumping table for domain0.
Intel and AMD specific iommu_ops handler for dumping p2m table.

Incorporated feedback from Jan Beulich and Wei Wang.
Fixed indent printing with %*s.
Removed superfluous superpage and other attribute prints.
Made next_level usage consistent for AMD IOMMU dumps; warn if an inconsistency is found.

Signed-off-by: Santosh Jodh <santosh.jodh@citrix.com>

diff -r 6d56e31fe1e1 -r 575a53faf4e1 xen/drivers/passthrough/amd/pci_amd_iommu.c
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c	Wed Aug 15 09:41:21 2012 +0100
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c	Thu Aug 16 09:28:24 2012 -0700
@@ -22,6 +22,7 @@
 #include <xen/pci.h>
 #include <xen/pci_regs.h>
 #include <xen/paging.h>
+#include <xen/softirq.h>
 #include <asm/hvm/iommu.h>
 #include <asm/amd-iommu.h>
 #include <asm/hvm/svm/amd-iommu-proto.h>
@@ -512,6 +513,80 @@ static int amd_iommu_group_id(u16 seg, u
 
 #include <asm/io_apic.h>
 
+static void amd_dump_p2m_table_level(struct page_info* pg, int level, 
+                                     paddr_t gpa, int indent)
+{
+    paddr_t address;
+    void *table_vaddr, *pde;
+    paddr_t next_table_maddr;
+    int index, next_level, present;
+    u32 *entry;
+
+    if ( level < 1 )
+        return;
+
+    table_vaddr = __map_domain_page(pg);
+    if ( table_vaddr == NULL )
+    {
+        printk("Failed to map IOMMU domain page %"PRIpaddr"\n", 
+                page_to_maddr(pg));
+        return;
+    }
+
+    for ( index = 0; index < PTE_PER_TABLE_SIZE; index++ )
+    {
+        if ( !(index % 2) )
+            process_pending_softirqs();
+
+        pde = table_vaddr + (index * IOMMU_PAGE_TABLE_ENTRY_SIZE);
+        next_table_maddr = amd_iommu_get_next_table_from_pte(pde);
+        entry = (u32*)pde;
+
+        present = get_field_from_reg_u32(entry[0],
+                                         IOMMU_PDE_PRESENT_MASK,
+                                         IOMMU_PDE_PRESENT_SHIFT);
+
+        if ( !present )
+            continue;
+
+        next_level = get_field_from_reg_u32(entry[0],
+                                            IOMMU_PDE_NEXT_LEVEL_MASK,
+                                            IOMMU_PDE_NEXT_LEVEL_SHIFT);
+
+        if ( next_level != (level - 1) )
+        {
+            printk("IOMMU p2m table error. next_level = %d, expected %d\n",
+                   next_level, level - 1);
+
+            continue;
+        }
+
+        address = gpa + amd_offset_level_address(index, level);
+        if ( next_level >= 1 )
+            amd_dump_p2m_table_level(
+                maddr_to_page(next_table_maddr), next_level,
+                address, indent + 1);
+        else
+            printk("%*sgfn: %08lx  mfn: %08lx\n",
+                   indent, "",
+                   (unsigned long)PFN_DOWN(address),
+                   (unsigned long)PFN_DOWN(next_table_maddr));
+    }
+
+    unmap_domain_page(table_vaddr);
+}
+
+static void amd_dump_p2m_table(struct domain *d)
+{
+    struct hvm_iommu *hd  = domain_hvm_iommu(d);
+
+    if ( !hd->root_table ) 
+        return;
+
+    printk("p2m table has %d levels\n", hd->paging_mode);
+    amd_dump_p2m_table_level(hd->root_table, hd->paging_mode, 0, 0);
+}
+
 const struct iommu_ops amd_iommu_ops = {
     .init = amd_iommu_domain_init,
     .dom0_init = amd_iommu_dom0_init,
@@ -531,4 +606,5 @@ const struct iommu_ops amd_iommu_ops = {
     .resume = amd_iommu_resume,
     .share_p2m = amd_iommu_share_p2m,
     .crash_shutdown = amd_iommu_suspend,
+    .dump_p2m_table = amd_dump_p2m_table,
 };
diff -r 6d56e31fe1e1 -r 575a53faf4e1 xen/drivers/passthrough/iommu.c
--- a/xen/drivers/passthrough/iommu.c	Wed Aug 15 09:41:21 2012 +0100
+++ b/xen/drivers/passthrough/iommu.c	Thu Aug 16 09:28:24 2012 -0700
@@ -19,10 +19,12 @@
 #include <xen/paging.h>
 #include <xen/guest_access.h>
 #include <xen/softirq.h>
+#include <xen/keyhandler.h>
 #include <xsm/xsm.h>
 
 static void parse_iommu_param(char *s);
 static int iommu_populate_page_table(struct domain *d);
+static void iommu_dump_p2m_table(unsigned char key);
 
 /*
  * The 'iommu' parameter enables the IOMMU.  Optional comma separated
@@ -54,6 +56,12 @@ bool_t __read_mostly amd_iommu_perdev_in
 
 DEFINE_PER_CPU(bool_t, iommu_dont_flush_iotlb);
 
+static struct keyhandler iommu_p2m_table = {
+    .diagnostic = 0,
+    .u.fn = iommu_dump_p2m_table,
+    .desc = "dump iommu p2m table"
+};
+
 static void __init parse_iommu_param(char *s)
 {
     char *ss;
@@ -119,6 +127,7 @@ void __init iommu_dom0_init(struct domai
     if ( !iommu_enabled )
         return;
 
+    register_keyhandler('o', &iommu_p2m_table);
     d->need_iommu = !!iommu_dom0_strict;
     if ( need_iommu(d) )
     {
@@ -654,6 +663,34 @@ int iommu_do_domctl(
     return ret;
 }
 
+static void iommu_dump_p2m_table(unsigned char key)
+{
+    struct domain *d;
+    const struct iommu_ops *ops;
+
+    if ( !iommu_enabled )
+    {
+        printk("IOMMU not enabled!\n");
+        return;
+    }
+
+    ops = iommu_get_ops();
+    for_each_domain(d)
+    {
+        if ( !d->domain_id )
+            continue;
+
+        if ( iommu_use_hap_pt(d) )
+        {
+            printk("\ndomain%d IOMMU p2m table shared with MMU: \n", d->domain_id);
+            continue;
+        }
+
+        printk("\ndomain%d IOMMU p2m table: \n", d->domain_id);
+        ops->dump_p2m_table(d);
+    }
+}
+
 /*
  * Local variables:
  * mode: C
diff -r 6d56e31fe1e1 -r 575a53faf4e1 xen/drivers/passthrough/vtd/iommu.c
--- a/xen/drivers/passthrough/vtd/iommu.c	Wed Aug 15 09:41:21 2012 +0100
+++ b/xen/drivers/passthrough/vtd/iommu.c	Thu Aug 16 09:28:24 2012 -0700
@@ -31,6 +31,7 @@
 #include <xen/pci.h>
 #include <xen/pci_regs.h>
 #include <xen/keyhandler.h>
+#include <xen/softirq.h>
 #include <asm/msi.h>
 #include <asm/irq.h>
 #if defined(__i386__) || defined(__x86_64__)
@@ -2365,6 +2366,60 @@ static void vtd_resume(void)
     }
 }
 
+static void vtd_dump_p2m_table_level(paddr_t pt_maddr, int level, paddr_t gpa, 
+                                     int indent)
+{
+    paddr_t address;
+    int i;
+    struct dma_pte *pt_vaddr, *pte;
+    int next_level;
+
+    if ( level < 1 )
+        return;
+
+    pt_vaddr = map_vtd_domain_page(pt_maddr);
+    if ( pt_vaddr == NULL )
+    {
+        printk("Failed to map VT-D domain page %"PRIpaddr"\n", pt_maddr);
+        return;
+    }
+
+    next_level = level - 1;
+    for ( i = 0; i < PTE_NUM; i++ )
+    {
+        if ( !(i % 2) )
+            process_pending_softirqs();
+
+        pte = &pt_vaddr[i];
+        if ( !dma_pte_present(*pte) )
+            continue;
+
+        address = gpa + offset_level_address(i, level);
+        if ( next_level >= 1 ) 
+            vtd_dump_p2m_table_level(dma_pte_addr(*pte), next_level, 
+                                     address, indent + 1);
+        else
+            printk("%*sgfn: %08lx mfn: %08lx\n",
+                   indent, "",
+                   (unsigned long)(address >> PAGE_SHIFT_4K),
+                   (unsigned long)(pte->val >> PAGE_SHIFT_4K));
+    }
+
+    unmap_vtd_domain_page(pt_vaddr);
+}
+
+static void vtd_dump_p2m_table(struct domain *d)
+{
+    struct hvm_iommu *hd;
+
+    if ( list_empty(&acpi_drhd_units) )
+        return;
+
+    hd = domain_hvm_iommu(d);
+    printk("p2m table has %d levels\n", agaw_to_level(hd->agaw));
+    vtd_dump_p2m_table_level(hd->pgd_maddr, agaw_to_level(hd->agaw), 0, 0);
+}
+
 const struct iommu_ops intel_iommu_ops = {
     .init = intel_iommu_domain_init,
     .dom0_init = intel_iommu_dom0_init,
@@ -2387,6 +2442,7 @@ const struct iommu_ops intel_iommu_ops =
     .crash_shutdown = vtd_crash_shutdown,
     .iotlb_flush = intel_iommu_iotlb_flush,
     .iotlb_flush_all = intel_iommu_iotlb_flush_all,
+    .dump_p2m_table = vtd_dump_p2m_table,
 };
 
 /*
diff -r 6d56e31fe1e1 -r 575a53faf4e1 xen/drivers/passthrough/vtd/iommu.h
--- a/xen/drivers/passthrough/vtd/iommu.h	Wed Aug 15 09:41:21 2012 +0100
+++ b/xen/drivers/passthrough/vtd/iommu.h	Thu Aug 16 09:28:24 2012 -0700
@@ -248,6 +248,8 @@ struct context_entry {
 #define level_to_offset_bits(l) (12 + (l - 1) * LEVEL_STRIDE)
 #define address_level_offset(addr, level) \
             ((addr >> level_to_offset_bits(level)) & LEVEL_MASK)
+#define offset_level_address(offset, level) \
+            ((u64)(offset) << level_to_offset_bits(level))
 #define level_mask(l) (((u64)(-1)) << level_to_offset_bits(l))
 #define level_size(l) (1 << level_to_offset_bits(l))
 #define align_to_level(addr, l) ((addr + level_size(l) - 1) & level_mask(l))
diff -r 6d56e31fe1e1 -r 575a53faf4e1 xen/include/asm-x86/hvm/svm/amd-iommu-defs.h
--- a/xen/include/asm-x86/hvm/svm/amd-iommu-defs.h	Wed Aug 15 09:41:21 2012 +0100
+++ b/xen/include/asm-x86/hvm/svm/amd-iommu-defs.h	Thu Aug 16 09:28:24 2012 -0700
@@ -38,6 +38,10 @@
 #define PTE_PER_TABLE_ALLOC(entries)	\
 	PAGE_SIZE * (PTE_PER_TABLE_ALIGN(entries) >> PTE_PER_TABLE_SHIFT)
 
+#define amd_offset_level_address(offset, level) \
+      	((u64)(offset) << (12 + (PTE_PER_TABLE_SHIFT * \
+                                (level - IOMMU_PAGING_MODE_LEVEL_1))))
+
 #define PCI_MIN_CAP_OFFSET	0x40
 #define PCI_MAX_CAP_BLOCKS	48
 #define PCI_CAP_PTR_MASK	0xFC
diff -r 6d56e31fe1e1 -r 575a53faf4e1 xen/include/xen/iommu.h
--- a/xen/include/xen/iommu.h	Wed Aug 15 09:41:21 2012 +0100
+++ b/xen/include/xen/iommu.h	Thu Aug 16 09:28:24 2012 -0700
@@ -141,6 +141,7 @@ struct iommu_ops {
     void (*crash_shutdown)(void);
     void (*iotlb_flush)(struct domain *d, unsigned long gfn, unsigned int page_count);
     void (*iotlb_flush_all)(struct domain *d);
+    void (*dump_p2m_table)(struct domain *d);
 };
 
 void iommu_update_ire_from_apic(unsigned int apic, unsigned int reg, unsigned int value);

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 16:40:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 16:40:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T236u-0001Kc-80; Thu, 16 Aug 2012 16:39:52 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1T236s-0001KQ-3f
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 16:39:50 +0000
Received: from [85.158.143.99:64852] by server-2.bemta-4.messagelabs.com id
	25/EB-31966-5522D205; Thu, 16 Aug 2012 16:39:49 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-8.tower-216.messagelabs.com!1345135188!18774591!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14371 invoked from network); 16 Aug 2012 16:39:49 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-8.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 16 Aug 2012 16:39:49 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1T236q-00085X-90; Thu, 16 Aug 2012 16:39:48 +0000
Date: Thu, 16 Aug 2012 17:39:48 +0100
From: Tim Deegan <tim@xen.org>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20120816163948.GB24786@ocelot.phlegethon.org>
References: <502D1ADA0200007800095820@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <502D1ADA0200007800095820@nat28.tlf.novell.com>
User-Agent: Mutt/1.4.2.1i
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] EPT/PoD: fix interaction with 1Gb pages
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 15:07 +0100 on 16 Aug (1345129674), Jan Beulich wrote:
> When PoD got enabled to support 1Gb pages, ept_get_entry() didn't get
> updated to match - the assertion in there triggered, indicating that
> the call to p2m_pod_demand_populate() needed adjustment.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Applied, thanks.

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 16:40:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 16:40:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2372-0001ML-Qx; Thu, 16 Aug 2012 16:40:00 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <torushikeshj@gmail.com>) id 1T2371-0001Lh-UB
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 16:40:00 +0000
Received: from [85.158.143.99:8449] by server-2.bemta-4.messagelabs.com id
	57/0C-31966-E522D205; Thu, 16 Aug 2012 16:39:58 +0000
X-Env-Sender: torushikeshj@gmail.com
X-Msg-Ref: server-11.tower-216.messagelabs.com!1345135196!21021135!1
X-Originating-IP: [209.85.215.171]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10456 invoked from network); 16 Aug 2012 16:39:56 -0000
Received: from mail-ey0-f171.google.com (HELO mail-ey0-f171.google.com)
	(209.85.215.171)
	by server-11.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 16:39:56 -0000
Received: by eaah11 with SMTP id h11so920134eaa.30
	for <multiple recipients>; Thu, 16 Aug 2012 09:39:56 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:content-type; bh=Hw7GnTFYCN7+NHCp2cZ1dwoSYJIZD+RGPUeP/WDUPLc=;
	b=QIckQJaCpI0l8NsdeZkU9qUy4eevu+qDP+PwpNb8NI8o0I2w7KBqmWlo15ngrxh1jh
	mBhqenUIaG17pAuhtnx0lqLxyB4f22N8x6iRRa4qTe08AqgOAMSgxZKSnVDyf7jmO78O
	/mJtsmPYA/GEyWlUOhDi79hrlJve95Rk9IUoTcw/ow10QkZcd+pcUj3fYl6Qtbr/Zxy2
	k1TOuSh77Qdh6vRBfG59VHutdNDL0LtLKeJzkFJHmTvLN+YJ6tRw+gXKLxXFUbO3fZKK
	asoLqds+iqlIThMeByep1q46t8Owy5tsQciEmScBCA38ad4f0T9laMi2eYLoPZPNJQ1a
	Y4sA==
MIME-Version: 1.0
Received: by 10.14.210.132 with SMTP id u4mr2520918eeo.6.1345135196188; Thu,
	16 Aug 2012 09:39:56 -0700 (PDT)
Received: by 10.14.4.131 with HTTP; Thu, 16 Aug 2012 09:39:56 -0700 (PDT)
In-Reply-To: <CAO14VsOkki7FUZFJ65RRXkXfQBY_nU-pb=brHDL+Yiwh_42YLA@mail.gmail.com>
References: <CAO14VsOkki7FUZFJ65RRXkXfQBY_nU-pb=brHDL+Yiwh_42YLA@mail.gmail.com>
Date: Thu, 16 Aug 2012 22:09:56 +0530
Message-ID: <CAO14VsN5jZzJssK-hXO2=SCiMZjT6HYULAL9S+Lo2pe9kG48iw@mail.gmail.com>
From: R J <torushikeshj@gmail.com>
To: xen-api@lists.xensource.com, xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] want to write a difference copy program to sync two
	VHDs in blktap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8269344221270378490=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8269344221270378490==
Content-Type: multipart/alternative; boundary=047d7b621b148b2fe704c764b104

--047d7b621b148b2fe704c764b104
Content-Type: text/plain; charset=ISO-8859-1

Hello,

I tried copying the VHD in two ways: one using the xe vm-copy command and
the other a simple cp.
I'm verifying the blocks with the vhd-read command. The VHD created by
vm-copy has a different BAT and bitmap than the original one.

With cp, the BAT and bitmap remain the same, since it is a mirror copy.
Still no big improvement.

-- Rishi

On Sat, Aug 11, 2012 at 4:26 AM, R J <torushikeshj@gmail.com> wrote:

> Hello List,
>
> I'm using blktap on XCP 1.1 and developing a difference-copy program for
> easy backup and remote archival.
>
> The idea is to read the BAT of VHD1 and sync it to the BAT of VHD2, along
> with the bitmap and the data blocks.
> Another approach would be to add a sha1 signature to each data block and
> compare it with the remote VHD.
>
> I looked at the programs below but could not find the information I needed
> about adding a sha1 signature to a data block while writing it.
> Similar to ZFS, could this be used to verify file integrity?
> Perhaps someone can guide me on how to inject data into a VHD data block
> while writing it.
>
> Please advise.
>
> -- Rishi
>
> On Mon, Aug 6, 2012 at 9:26 PM, R J <torushikeshj@gmail.com> wrote:
>
>> Hello List,
>>
>> I would like to know the details of tapdisk-diff and tapdisk-stream.
>> My aim is to find the differences between two VHDs and, if possible, sync
>> them for redundancy via a difference-copy program.
>>
>> The idea is:
>> - find the journal difference of the source and destination VHDs
>> - copy the journal and modified blocks from the source VHD to the
>> destination VHD for backup.
>>
>> In my case both are NFS VHDs.
>>
>> Could anyone please provide more info on the above two programs?
>>
>> - Rishi
>>
>
>

--047d7b621b148b2fe704c764b104--


--===============8269344221270378490==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8269344221270378490==--


From xen-devel-bounces@lists.xen.org Thu Aug 16 16:42:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 16:42:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T239E-0001iG-Ik; Thu, 16 Aug 2012 16:42:16 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Boris.Ostrovsky@amd.com>) id 1T239C-0001i1-Du
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 16:42:14 +0000
Received: from [85.158.143.35:36305] by server-3.bemta-4.messagelabs.com id
	C1/12-09529-5E22D205; Thu, 16 Aug 2012 16:42:13 +0000
X-Env-Sender: Boris.Ostrovsky@amd.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1345135329!14483068!1
X-Originating-IP: [65.55.88.14]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25013 invoked from network); 16 Aug 2012 16:42:10 -0000
Received: from tx2ehsobe004.messaging.microsoft.com (HELO
	tx2outboundpool.messaging.microsoft.com) (65.55.88.14)
	by server-16.tower-21.messagelabs.com with AES128-SHA encrypted SMTP;
	16 Aug 2012 16:42:10 -0000
Received: from mail203-tx2-R.bigfish.com (10.9.14.237) by
	TX2EHSOBE010.bigfish.com (10.9.40.30) with Microsoft SMTP Server id
	14.1.225.23; Thu, 16 Aug 2012 16:42:09 +0000
Received: from mail203-tx2 (localhost [127.0.0.1])	by
	mail203-tx2-R.bigfish.com (Postfix) with ESMTP id 02CAEB800A6	for
	<xen-devel@lists.xensource.com>; Thu, 16 Aug 2012 16:42:09 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:163.181.249.109; KIP:(null); UIP:(null);
	IPV:NLI; H:ausb3twp02.amd.com; RD:none; EFVD:NLI
X-SpamScore: 0
X-BigFish: VPS0(zzzz1202hzz8275bhz2dh668h839h944hd24hf0ah)
Received: from mail203-tx2 (localhost.localdomain [127.0.0.1]) by mail203-tx2
	(MessageSwitch) id 134513532662419_7740;
	Thu, 16 Aug 2012 16:42:06 +0000 (UTC)
Received: from TX2EHSMHS027.bigfish.com (unknown [10.9.14.246])	by
	mail203-tx2.bigfish.com (Postfix) with ESMTP id 0D892980048	for
	<xen-devel@lists.xensource.com>; Thu, 16 Aug 2012 16:42:06 +0000 (UTC)
Received: from ausb3twp02.amd.com (163.181.249.109) by
	TX2EHSMHS027.bigfish.com (10.9.99.127) with Microsoft SMTP Server id
	14.1.225.23; Thu, 16 Aug 2012 16:42:06 +0000
X-WSS-ID: 0M8UX23-02-00L-02
X-M-MSG: 
Received: from sausexedgep01.amd.com (sausexedgep01-ext.amd.com
	[163.181.249.72])	(using TLSv1 with cipher AES128-SHA (128/128
	bits))	(No
	client certificate requested)	by ausb3twp02.amd.com (Axway MailGate
	3.8.1) with ESMTP id 25D30C806F	for <xen-devel@lists.xensource.com>;
	Thu, 16 Aug 2012 11:41:28 -0500 (CDT)
Received: from SAUSEXDAG06.amd.com (163.181.55.7) by sausexedgep01.amd.com
	(163.181.36.54) with Microsoft SMTP Server (TLS) id 8.3.192.1;
	Thu, 16 Aug 2012 11:42:11 -0500
Received: from storexhtp02.amd.com (172.24.4.4) by sausexdag06.amd.com
	(163.181.55.7) with Microsoft SMTP Server (TLS) id 14.1.323.3;
	Thu, 16 Aug 2012 11:41:28 -0500
Received: from gwo.osrc.amd.com (165.204.16.204) by storexhtp02.amd.com
	(172.24.4.4) with Microsoft SMTP Server id 8.3.213.0; Thu, 16 Aug 2012
	12:41:27 -0400
Received: from wotan.amd.com (wotan.osrc.amd.com [165.204.15.11])	by
	gwo.osrc.amd.com (Postfix) with ESMTP id AAC7B49C361; Thu, 16 Aug 2012
	17:41:26 +0100 (BST)
Received: by wotan.amd.com (Postfix, from userid 41729)	id A0D862D202B; Thu,
	16 Aug 2012 18:41:26 +0200 (CEST)
MIME-Version: 1.0
X-Mercurial-Node: 85190245a94d9945b7656c971ba36f7d1eff5c19
Message-ID: <85190245a94d9945b765.1345135279@localhost.localdomain>
User-Agent: Mercurial-patchbomb/1.8.2
Date: Thu, 16 Aug 2012 18:41:19 +0200
From: Boris Ostrovsky <boris.ostrovsky@amd.com>
To: <xen-devel@lists.xensource.com>
X-OriginatorOrg: amd.com
Cc: boris.ostrovsky@amd.com
Subject: [Xen-devel] [PATCH] AMD,
 powernow: Update P-state directly when _PSD's CoordType is
 DOMAIN_COORD_TYPE_HW_ALL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

# HG changeset patch
# User Boris Ostrovsky <boris.ostrovsky@amd.com>
# Date 1345135101 -7200
# Node ID 85190245a94d9945b7656c971ba36f7d1eff5c19
# Parent  6d56e31fe1e1dc793379d662a36ff1731760eb0c
AMD, powernow: Update P-state directly when _PSD's CoordType is DOMAIN_COORD_TYPE_HW_ALL

When _PSD's CoordType is DOMAIN_COORD_TYPE_HW_ALL (i.e. shared_type is
CPUFREQ_SHARED_TYPE_HW), which is most often the case on servers, there is
no reason to go through the on_selected_cpus() code; we can call
transition_pstate() directly.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@amd.com>

diff -r 6d56e31fe1e1 -r 85190245a94d xen/arch/x86/acpi/cpufreq/powernow.c
--- a/xen/arch/x86/acpi/cpufreq/powernow.c	Wed Aug 15 09:41:21 2012 +0100
+++ b/xen/arch/x86/acpi/cpufreq/powernow.c	Thu Aug 16 18:38:21 2012 +0200
@@ -56,20 +56,9 @@
 
 static struct cpufreq_driver powernow_cpufreq_driver;
 
-struct drv_cmd {
-    unsigned int type;
-    const cpumask_t *mask;
-    u32 val;
-    int turbo;
-};
-
-static void transition_pstate(void *drvcmd)
+static void transition_pstate(void *pstate)
 {
-    struct drv_cmd *cmd;
-    cmd = (struct drv_cmd *) drvcmd;
-
-
-    wrmsrl(MSR_PSTATE_CTRL, cmd->val);
+    wrmsrl(MSR_PSTATE_CTRL, *(int *)pstate);
 }
 
 static void update_cpb(void *data)
@@ -106,13 +95,11 @@ static int powernow_cpufreq_target(struc
 {
     struct acpi_cpufreq_data *data = cpufreq_drv_data[policy->cpu];
     struct processor_performance *perf;
-    struct cpufreq_freqs freqs;
     cpumask_t online_policy_cpus;
-    struct drv_cmd cmd;
-    unsigned int next_state = 0; /* Index into freq_table */
-    unsigned int next_perf_state = 0; /* Index into perf table */
-    int result = 0;
-    int j = 0;
+    unsigned int next_state; /* Index into freq_table */
+    unsigned int next_perf_state; /* Index into perf table */
+    int result;
+    int j;
 
     if (unlikely(data == NULL ||
         data->acpi_data == NULL || data->freq_table == NULL)) {
@@ -125,9 +112,7 @@ static int powernow_cpufreq_target(struc
                                             target_freq,
                                             relation, &next_state);
     if (unlikely(result))
-        return -ENODEV;
-
-    cpumask_and(&online_policy_cpus, policy->cpus, &cpu_online_map);
+        return result;
 
     next_perf_state = data->freq_table[next_state].index;
     if (perf->state == next_perf_state) {
@@ -137,26 +122,28 @@ static int powernow_cpufreq_target(struc
             return 0;
     }
 
-    if (policy->shared_type != CPUFREQ_SHARED_TYPE_ANY)
-        cmd.mask = &online_policy_cpus;
-    else
-        cmd.mask = cpumask_of(policy->cpu);
+    if (policy->shared_type == CPUFREQ_SHARED_TYPE_HW &&
+        likely(policy->cpu == smp_processor_id())) {
+        transition_pstate(&next_perf_state);
+        cpufreq_statistic_update(policy->cpu, perf->state, next_perf_state);
+    } else {
+        cpumask_and(&online_policy_cpus, policy->cpus, &cpu_online_map);
 
-    freqs.old = perf->states[perf->state].core_frequency * 1000;
-    freqs.new = data->freq_table[next_state].frequency;
+        if (policy->shared_type == CPUFREQ_SHARED_TYPE_ALL ||
+            unlikely(policy->cpu != smp_processor_id()))
+            on_selected_cpus(&online_policy_cpus, transition_pstate,
+                             &next_perf_state, 1);
+        else
+            transition_pstate(&next_perf_state);
 
-    cmd.val = next_perf_state;
-    cmd.turbo = policy->turbo;
-
-    on_selected_cpus(cmd.mask, transition_pstate, &cmd, 1);
-
-    for_each_cpu(j, &online_policy_cpus)
-        cpufreq_statistic_update(j, perf->state, next_perf_state);
+        for_each_cpu(j, &online_policy_cpus)
+            cpufreq_statistic_update(j, perf->state, next_perf_state);
+    }    
 
     perf->state = next_perf_state;
-    policy->cur = freqs.new;
+    policy->cur = data->freq_table[next_state].frequency;
 
-    return result;
+    return 0;
 }
 
 static int powernow_cpufreq_verify(struct cpufreq_policy *policy)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

+        if (policy->shared_type == CPUFREQ_SHARED_TYPE_ALL ||
+            unlikely(policy->cpu != smp_processor_id()))
+            on_selected_cpus(&online_policy_cpus, transition_pstate,
+                             &next_perf_state, 1);
+        else
+            transition_pstate(&next_perf_state);
 
-    cmd.val = next_perf_state;
-    cmd.turbo = policy->turbo;
-
-    on_selected_cpus(cmd.mask, transition_pstate, &cmd, 1);
-
-    for_each_cpu(j, &online_policy_cpus)
-        cpufreq_statistic_update(j, perf->state, next_perf_state);
+        for_each_cpu(j, &online_policy_cpus)
+            cpufreq_statistic_update(j, perf->state, next_perf_state);
+    }
 
     perf->state = next_perf_state;
-    policy->cur = freqs.new;
+    policy->cur = data->freq_table[next_state].frequency;
 
-    return result;
+    return 0;
 }
 
 static int powernow_cpufreq_verify(struct cpufreq_policy *policy)
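For illustration, the dispatch decision the patch introduces can be modeled as a small standalone C function. This is a simplified sketch, not the real Xen code: choose_transition_path(), the enum names, and the integer CPU ids are all made up for this example; the real function writes MSR_PSTATE_CTRL and updates per-CPU statistics rather than returning a value.

```c
#include <assert.h>

/* Simplified model of the post-patch dispatch logic in
 * powernow_cpufreq_target().  All names here are illustrative. */
enum shared_type { SHARED_TYPE_HW, SHARED_TYPE_ALL, SHARED_TYPE_ANY };
enum path { PATH_DIRECT, PATH_IPI };

static enum path choose_transition_path(enum shared_type st,
                                        int policy_cpu, int current_cpu)
{
    /* Hardware coordination and we are already on the policy CPU:
     * write the P-state MSR directly, no IPI needed. */
    if (st == SHARED_TYPE_HW && policy_cpu == current_cpu)
        return PATH_DIRECT;

    /* Software coordination across the domain, or a remote policy CPU:
     * go through on_selected_cpus(). */
    if (st == SHARED_TYPE_ALL || policy_cpu != current_cpu)
        return PATH_IPI;

    /* Otherwise (e.g. SHARED_TYPE_ANY on the local CPU) the transition
     * can still be performed with a direct local call. */
    return PATH_DIRECT;
}
```

Under this model, only the hardware-coordinated local case (and the local SHARED_TYPE_ANY case) skips the cross-CPU call, which matches the structure of the hunk above.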


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 16:48:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 16:48:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T23FH-00029x-EU; Thu, 16 Aug 2012 16:48:31 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T23FG-00029q-Qr
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 16:48:31 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1345135704!1979415!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkyODE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19974 invoked from network); 16 Aug 2012 16:48:24 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 16:48:24 -0000
X-IronPort-AV: E=Sophos;i="4.77,778,1336348800"; d="scan'208";a="14046004"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 16:48:24 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 16 Aug 2012 17:48:23 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1T23F9-0000jx-Nz;
	Thu, 16 Aug 2012 16:48:23 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1T23F9-0004ed-Ce;
	Thu, 16 Aug 2012 17:48:23 +0100
To: xen-devel@lists.xensource.com
Message-ID: <osstest-13607-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 16 Aug 2012 17:48:23 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13607: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13607 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13607/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13606
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13606
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13606
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13606

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  c887c30a0a35
baseline version:
 xen                  6d56e31fe1e1

------------------------------------------------------------
People who touched revisions under test:
  George Dunlap <george.dunlap@eu.citrix.com>
  Jürgen Groß <juergen.gross@ts.fujitsu.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=c887c30a0a35
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable c887c30a0a35
+ branch=xen-unstable
+ revision=c887c30a0a35
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/staging/xen-unstable.hg
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-unstable.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-unstable.hg
+ hg push -r c887c30a0a35 ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 2 changesets with 5 changes to 4 files

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 17:09:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 17:09:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T23Yz-0002Yj-G4; Thu, 16 Aug 2012 17:08:53 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T23Yy-0002Ye-0Q
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 17:08:52 +0000
Received: from [85.158.139.83:5122] by server-9.bemta-5.messagelabs.com id
	FB/34-26123-3292D205; Thu, 16 Aug 2012 17:08:51 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-12.tower-182.messagelabs.com!1345136930!28510343!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkyODE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14545 invoked from network); 16 Aug 2012 17:08:50 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-12.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 17:08:50 -0000
X-IronPort-AV: E=Sophos;i="4.77,780,1336348800"; d="scan'208";a="14046235"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 17:08:50 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 16 Aug 2012 18:08:50 +0100
Date: Thu, 16 Aug 2012 18:08:33 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Jan Beulich <JBeulich@suse.com>
In-Reply-To: <502D33B8020000780009596B@nat28.tlf.novell.com>
Message-ID: <alpine.DEB.2.02.1208161756140.15568@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208161523420.4850@kaball.uk.xensource.com>
	<1345128612-10323-4-git-send-email-stefano.stabellini@eu.citrix.com>
	<502D33B8020000780009596B@nat28.tlf.novell.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"Tim \(Xen.org\)" <tim@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH v3 4/6] xen: introduce XEN_GUEST_HANDLE_PARAM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 16 Aug 2012, Jan Beulich wrote:
> >>> On 16.08.12 at 16:50, Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
> > --- a/xen/include/public/arch-arm.h
> > +++ b/xen/include/public/arch-arm.h
> > @@ -51,18 +51,34 @@
> >  
> >  #define XEN_HYPERCALL_TAG   0XEA1
> >  
> > +#define uint64_aligned_t uint64_t __attribute__((aligned(8)))
> >  
> >  #ifndef __ASSEMBLY__
> > -#define ___DEFINE_XEN_GUEST_HANDLE(name, type) \
> > -    typedef struct { type *p; } __guest_handle_ ## name
> > +#define ___DEFINE_XEN_GUEST_HANDLE(name, type)                  \
> > +    typedef struct { type *p; }                                 \
> > +        __guest_handle_ ## name;                                \
> > +    typedef struct { union { type *p; uint64_aligned_t q; }; }  \
> > +        __guest_handle_64_ ## name;
> 
> Why struct { union { ... } } instead of just union { ... }?

good point :)


> > +/*
> > + * XEN_GUEST_HANDLE represents a guest pointer, when passed as a field
> > + * in a struct in memory. On ARM it is always 8 bytes in size and 8 bytes
> > + * aligned.
> > + * XEN_GUEST_HANDLE_PARAM represents a guest pointer, when passed as a
> > + * hypercall argument. It is 4 bytes on aarch and 8 bytes on aarch64.
> > + */
> >  #define __DEFINE_XEN_GUEST_HANDLE(name, type) \
> >      ___DEFINE_XEN_GUEST_HANDLE(name, type);   \
> >      ___DEFINE_XEN_GUEST_HANDLE(const_##name, const type)
> >  #define DEFINE_XEN_GUEST_HANDLE(name)   __DEFINE_XEN_GUEST_HANDLE(name, name)
> > -#define __XEN_GUEST_HANDLE(name)        __guest_handle_ ## name
> > +#define __XEN_GUEST_HANDLE(name)        __guest_handle_64_ ## name
> >  #define XEN_GUEST_HANDLE(name)          __XEN_GUEST_HANDLE(name)
> > -#define set_xen_guest_handle_raw(hnd, val)  do { (hnd).p = val; } while (0)
> > +/* this is going to be changed on 64 bit */
> > +#define XEN_GUEST_HANDLE_PARAM(name)    __guest_handle_ ## name
> > +#define set_xen_guest_handle_raw(hnd, val)                  \
> > +    do { if ( sizeof(hnd) == 8 ) *(uint64_t *)&(hnd) = 0;   \
> 
> If you made the "normal" handle a union too, you could avoid
> the explicit cast (which e.g. gcc, when not passed
> -fno-strict-aliasing, will choke on) and instead use (hnd).q (and
> at once avoid the double initialization of the low half).
> 
> Also, the condition to do this could be "sizeof(hnd) > sizeof((hnd).p)",
> usable at once for 64-bit avoiding a full double initialization there.
> 
> > +         (hnd).p = val;                                     \
> 
> In a public header you certainly want to avoid evaluating a
> macro argument twice.

That's a really good suggestion.
I am going to make both handles unions and therefore
set_xen_guest_handle_raw becomes:

#define set_xen_guest_handle_raw(hnd, val)                  \
    do { (hnd).q = 0;                                       \
         (hnd).p = val;                                     \
    } while ( 0 )
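The union-based handle sketched above can be checked in a small standalone C program. This is an illustrative model only: guest_ptr_t, xen_guest_handle_u, and set_handle_raw are invented names, and a uint32_t stands in for the 32-bit guest pointer so the example builds on any host.

```c
#include <stdint.h>

/* Model of the union handle: a 4-byte "pointer" overlaid on an 8-byte
 * aligned integer, so the setter can zero all 64 bits and then store the
 * pointer without casting through an incompatible pointer type (which
 * gcc may reject under strict aliasing).  Requires C11 for the
 * anonymous union. */
typedef uint32_t guest_ptr_t;   /* stands in for a 32-bit guest pointer */

typedef struct {
    union {
        guest_ptr_t p;          /* the pointer as the guest sees it */
        uint64_t    q;          /* full 8-byte view, used for zeroing */
    };
} xen_guest_handle_u;

/* Mirrors the macro proposed above: clear the whole 64 bits, then set p. */
#define set_handle_raw(hnd, val)                    \
    do { (hnd).q = 0; (hnd).p = (val); } while (0)
```

The union gives the setter a legal 64-bit lvalue, so no cast is needed and the upper half of the handle is guaranteed to be zero after assignment.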


> > +    } while ( 0 )
> >  #ifdef __XEN_TOOLS__
> >  #define get_xen_guest_handle(val, hnd)  do { val = (hnd).p; } while (0)
> >  #endif
> 
> Seeing the patch I btw realized that there's no easy way to
> avoid having the type as a second argument in the conversion
> macros. Nevertheless I still don't like the explicitly specified type
> there.

I agree, I don't like it either but I couldn't find anything better.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 16 Aug 2012, Jan Beulich wrote:
> >>> On 16.08.12 at 16:50, Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
> > --- a/xen/include/public/arch-arm.h
> > +++ b/xen/include/public/arch-arm.h
> > @@ -51,18 +51,34 @@
> >  
> >  #define XEN_HYPERCALL_TAG   0XEA1
> >  
> > +#define uint64_aligned_t uint64_t __attribute__((aligned(8)))
> >  
> >  #ifndef __ASSEMBLY__
> > -#define ___DEFINE_XEN_GUEST_HANDLE(name, type) \
> > -    typedef struct { type *p; } __guest_handle_ ## name
> > +#define ___DEFINE_XEN_GUEST_HANDLE(name, type)                  \
> > +    typedef struct { type *p; }                                 \
> > +        __guest_handle_ ## name;                                \
> > +    typedef struct { union { type *p; uint64_aligned_t q; }; }  \
> > +        __guest_handle_64_ ## name;
> 
> Why struct { union { ... } } instead of just union { ... }?

good point :)


> > +/*
> > + * XEN_GUEST_HANDLE represents a guest pointer, when passed as a field
> > + * in a struct in memory. On ARM it is always 8 bytes in size and
> > + * 8-byte aligned.
> > + * XEN_GUEST_HANDLE_PARAM represents a guest pointer, when passed as a
> > + * hypercall argument. It is 4 bytes on aarch32 and 8 bytes on aarch64.
> > + */
> >  #define __DEFINE_XEN_GUEST_HANDLE(name, type) \
> >      ___DEFINE_XEN_GUEST_HANDLE(name, type);   \
> >      ___DEFINE_XEN_GUEST_HANDLE(const_##name, const type)
> >  #define DEFINE_XEN_GUEST_HANDLE(name)   __DEFINE_XEN_GUEST_HANDLE(name, name)
> > -#define __XEN_GUEST_HANDLE(name)        __guest_handle_ ## name
> > +#define __XEN_GUEST_HANDLE(name)        __guest_handle_64_ ## name
> >  #define XEN_GUEST_HANDLE(name)          __XEN_GUEST_HANDLE(name)
> > -#define set_xen_guest_handle_raw(hnd, val)  do { (hnd).p = val; } while (0)
> > +/* this is going to be changed on 64-bit */
> > +#define XEN_GUEST_HANDLE_PARAM(name)    __guest_handle_ ## name
> > +#define set_xen_guest_handle_raw(hnd, val)                  \
> > +    do { if ( sizeof(hnd) == 8 ) *(uint64_t *)&(hnd) = 0;   \
> 
> If you made the "normal" handle a union too, you could avoid
> the explicit cast (which e.g. gcc, when not passed
> -fno-strict-aliasing, will choke on) and instead use (hnd).q (and
> at once avoid the double initialization of the low half).
> 
> Also, the condition to do this could be "sizeof(hnd) > sizeof((hnd).p)",
> usable at once for 64-bit avoiding a full double initialization there.
> 
> > +         (hnd).p = val;                                     \
> 
> In a public header you certainly want to avoid evaluating a
> macro argument twice.

That's a really good suggestion.
I am going to make both handles unions and therefore
set_xen_guest_handle_raw becomes:

#define set_xen_guest_handle_raw(hnd, val)                  \
    do { (hnd).q = 0;                                       \
         (hnd).p = val;                                     \
    } while ( 0 )


> > +    } while ( 0 )
> >  #ifdef __XEN_TOOLS__
> >  #define get_xen_guest_handle(val, hnd)  do { val = (hnd).p; } while (0)
> >  #endif
> 
> Seeing the patch I btw realized that there's no easy way to
> avoid having the type as a second argument in the conversion
> macros. Nevertheless I still don't like the explicitly specified type
> there.

I agree, I don't like it either but I couldn't find anything better.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 17:11:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 17:11:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T23b9-0002e5-0l; Thu, 16 Aug 2012 17:11:07 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T23b7-0002dr-AH
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 17:11:05 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1345137057!1982603!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkyODE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23769 invoked from network); 16 Aug 2012 17:10:57 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 17:10:57 -0000
X-IronPort-AV: E=Sophos;i="4.77,780,1336348800"; d="scan'208";a="14046284"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 17:10:57 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 16 Aug 2012 18:10:57 +0100
Date: Thu, 16 Aug 2012 18:10:40 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Jan Beulich <JBeulich@suse.com>
In-Reply-To: <502D37D702000078000959F7@nat28.tlf.novell.com>
Message-ID: <alpine.DEB.2.02.1208161801210.15568@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208161523420.4850@kaball.uk.xensource.com>
	<1345128612-10323-4-git-send-email-stefano.stabellini@eu.citrix.com>
	<502D33B8020000780009596B@nat28.tlf.novell.com>
	<502D37D702000078000959F7@nat28.tlf.novell.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: xen-devel <xen-devel@lists.xen.org>, "Tim
	\(Xen.org\)" <tim@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH v3 4/6] xen: introduce XEN_GUEST_HANDLE_PARAM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 16 Aug 2012, Jan Beulich wrote:
> >>> On 16.08.12 at 17:54, "Jan Beulich" <JBeulich@suse.com> wrote:
> > Seeing the patch I btw realized that there's no easy way to
> > avoid having the type as a second argument in the conversion
> > macros. Nevertheless I still don't like the explicitly specified type
> > there.
> 
> Btw - on the architecture(s) where the two handles are identical
> I would prefer you to make the conversion functions trivial (and
> thus avoid making use of the "type" parameter), thus allowing
> the type checking to occur that you currently circumvent.

OK, I can do that.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 17:18:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 17:18:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T23hs-0002ug-Vh; Thu, 16 Aug 2012 17:18:04 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andres@lagarcavilla.org>) id 1T23ho-0002uQ-Ge
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 17:18:03 +0000
Received: from [85.158.143.99:28786] by server-2.bemta-4.messagelabs.com id
	8F/72-31966-74B2D205; Thu, 16 Aug 2012 17:17:59 +0000
X-Env-Sender: andres@lagarcavilla.org
X-Msg-Ref: server-13.tower-216.messagelabs.com!1345137478!27652295!1
X-Originating-IP: [208.97.132.207]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA4Ljk3LjEzMi4yMDcgPT4gMzAyNTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19468 invoked from network); 16 Aug 2012 17:17:59 -0000
Received: from caiajhbdccah.dreamhost.com (HELO homiemail-a17.g.dreamhost.com)
	(208.97.132.207) by server-13.tower-216.messagelabs.com with SMTP;
	16 Aug 2012 17:17:59 -0000
Received: from homiemail-a17.g.dreamhost.com (localhost [127.0.0.1])
	by homiemail-a17.g.dreamhost.com (Postfix) with ESMTP id 38E2D380085;
	Thu, 16 Aug 2012 10:17:58 -0700 (PDT)
DomainKey-Signature: a=rsa-sha1; c=nofws; d=lagarcavilla.org; h=message-id
	:in-reply-to:references:date:subject:from:to:cc:reply-to
	:mime-version:content-type:content-transfer-encoding; q=dns; s=
	lagarcavilla.org; b=iwev1FCWClTcsVlsInYceajIXd9IV0RSqBzCZw+DKasY
	7SEVwFg6pbkVH/b3wT28eYp4lU9cA5L337g/6++XWss1e4QiORaCKCIzj6P/nFyQ
	0oZlwwsC+vkDqYGHVdxXjNz2Eu7sWj2Xw/ExELXlcXUxUk57IpS+S/x8MUVhWOg=
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed; d=lagarcavilla.org; h=
	message-id:in-reply-to:references:date:subject:from:to:cc
	:reply-to:mime-version:content-type:content-transfer-encoding;
	s=lagarcavilla.org; bh=SHSuBNf1czXwYznDCUmbgeVAGF4=; b=kyBAmTKp
	Hy9deJR0iefOmyLr37xXqI7Xfi8AE5jlnUMyGOlwuJnZgoKkO7xkVIzsKG/Rjqv7
	BpMhDHR14cV6wG7iU4vKboMveYdwspvJmLRI4vK7f7iXHvCu9mtSSql2Lv9owGYa
	2GoAu5km/JuQaPmAZujQ6oLxEN6gNlRZ6Pk=
Received: from webmail.lagarcavilla.org (caiajhbihbdd.dreamhost.com
	[208.97.187.133]) (Authenticated sender: andres@lagarcavilla.com)
	by homiemail-a17.g.dreamhost.com (Postfix) with ESMTPA id B693238007F; 
	Thu, 16 Aug 2012 10:17:57 -0700 (PDT)
Received: from 69.196.182.10 (proxying for 69.196.182.10)
	(SquirrelMail authenticated user andres@lagarcavilla.com)
	by webmail.lagarcavilla.org with HTTP;
	Thu, 16 Aug 2012 10:17:47 -0700
Message-ID: <35799d4871cec6ffe122121e59d930fe.squirrel@webmail.lagarcavilla.org>
In-Reply-To: <mailman.10813.1345115327.1399.xen-devel@lists.xen.org>
References: <mailman.10813.1345115327.1399.xen-devel@lists.xen.org>
Date: Thu, 16 Aug 2012 10:17:47 -0700
From: "Andres Lagar-Cavilla" <andres@lagarcavilla.org>
To: xen-devel@lists.xen.org
User-Agent: SquirrelMail/1.4.21
MIME-Version: 1.0
Cc: xudong.hao@intel.com, tim@xen.org, xiantao.zhang@intel.com,
	Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH 2/3] xen/p2m: Using INVALID_MFN instead of
	mfn_valid
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: andres@lagarcavilla.org
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>
> At 11:41 +0100 on 16 Aug (1345117281), Jan Beulich wrote:
>> >>> On 16.08.12 at 12:31, "Hao, Xudong" <xudong.hao@intel.com> wrote:
>> >> >> >>> On 15.08.12 at 08:57, Xudong Hao <xudong.hao@intel.com> wrote:
>> >> >> > --- a/xen/arch/x86/mm/p2m-ept.c	Tue Jul 24 17:02:04 2012 +0200
>> >> >> > +++ b/xen/arch/x86/mm/p2m-ept.c	Thu Jul 26 15:40:01 2012 +0800
>> >> >> > @@ -428,7 +428,7 @@ ept_set_entry(struct p2m_domain *p2m, un
>> >> >> >      }
>> >> >> >
>> >> >> >      /* Track the highest gfn for which we have ever had a valid
>> mapping
>> >> */
>> >> >> > -    if ( mfn_valid(mfn_x(mfn)) &&
>> >> >> > +    if ( (mfn_x(mfn) != INVALID_MFN) &&
>> >> >> >           (gfn + (1UL << order) - 1 > p2m->max_mapped_pfn) )
>> >> >> >          p2m->max_mapped_pfn = gfn + (1UL << order) - 1;
>> >> >>
>> >> >> Depending on how the above comment gets addressed (i.e.
>> >> >> whether MMIO MFNs are to be considered here at all), this
>> >> >> might need changing anyway, as a huge max_mapped_pfn
>> >> >> value likely wouldn't be very useful anymore.
>> >> >
>> >> > Your viewpoint is similar to ours. Here the max_mapped_pfn value
>> >> > is for memory but not for MMIO. I think this is a simple change;
>> >> > do you have another suggestion?
>> >>
>> >> The question is why this needs to be changed at all. If this is
>> >> only about RAM, then mfn_valid() is the right thing to use. If
>> >> this is about MMIO too, then the condition is wrong already
>> >> (since, as we appear to agree, even now there can be MMIO
>> >> above RAM, provided there's little enough RAM).
>> >>
>> >
>> > The original code considered EPT only; now, for device assignment,
>> > it needs to consider MMIO. So how about removing the mfn_valid()
>> > here?
>>
>> I don't think it's there without reason, but I'm not sure. Tim?
>
> max_mapped_pfn should be the highest entry that's ever had a mapping in
> the p2m.  Its intent was to provide a fast path exit from p2m lookups in
> the (at the time) common case where _emulated_ MMIO addresses were
> higher than all the actual p2m mappings, and the cost of a failed lookup
> (on 32-bit) was a page fault in the linear map.  Also, at the time, the
> p2m wasn't typed and we didn't support direct MMIO, so mfn_valid() was
> equivalent to 'entry is present'.
>
> These days, I'm not sure how useful max_mapped_pfn is, since (a) for any
> VM with >3GB RAM the emulated MMIO lookups are not avoided, and (b) on
> 64-bit builds there's no page fault for a failed lookup.  Also it seems to
> have been abused in a few places to do for() loops that touch every PFN
> instead of just walking the tries.  So I might get rid of it after 4.2
> is out.

max_mapped_pfn also helps keep XENMEM_maximum_gpfn O(1).
Andres

>
> In the meantime, the patch at the top of this thread is definitely an
> improvement.  However, I think this is a better fix:
>
> diff -r c887c30a0a35 xen/arch/x86/mm/p2m-ept.c
> --- a/xen/arch/x86/mm/p2m-ept.c	Thu Aug 16 10:16:19 2012 +0200
> +++ b/xen/arch/x86/mm/p2m-ept.c	Thu Aug 16 11:57:44 2012 +0100
> @@ -428,7 +428,7 @@ ept_set_entry(struct p2m_domain *p2m, un
>      }
>
>      /* Track the highest gfn for which we have ever had a valid mapping
> */
> -    if ( mfn_valid(mfn_x(mfn)) &&
> +    if ( p2mt != p2m_invalid &&
>           (gfn + (1UL << order) - 1 > p2m->max_mapped_pfn) )
>          p2m->max_mapped_pfn = gfn + (1UL << order) - 1;
>
> diff -r c887c30a0a35 xen/arch/x86/mm/p2m-pt.c
> --- a/xen/arch/x86/mm/p2m-pt.c	Thu Aug 16 10:16:19 2012 +0200
> +++ b/xen/arch/x86/mm/p2m-pt.c	Thu Aug 16 11:57:44 2012 +0100
> @@ -454,7 +454,7 @@ p2m_set_entry(struct p2m_domain *p2m, un
>      }
>
>      /* Track the highest gfn for which we have ever had a valid mapping
> */
> -    if ( mfn_valid(mfn)
> +    if ( p2mt != p2m_invalid
>           && (gfn + (1UL << page_order) - 1 > p2m->max_mapped_pfn) )
>          p2m->max_mapped_pfn = gfn + (1UL << page_order) - 1;
>
> and I'll commit it this afternoon or tomorrow.
>
> Tim.
>



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 17:42:24 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 17:42:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2459-0003is-Vo; Thu, 16 Aug 2012 17:42:07 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T2458-0003ik-Me
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 17:42:07 +0000
Received: from [85.158.139.83:50015] by server-3.bemta-5.messagelabs.com id
	18/D9-27237-DE03D205; Thu, 16 Aug 2012 17:42:05 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-7.tower-182.messagelabs.com!1345138924!24572703!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjk0MzAx\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26343 invoked from network); 16 Aug 2012 17:42:05 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-7.tower-182.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 16 Aug 2012 17:42:05 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7GHg20T001923
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 16 Aug 2012 17:42:03 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7GHg18o023610
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 16 Aug 2012 17:42:02 GMT
Received: from abhmt108.oracle.com (abhmt108.oracle.com [141.146.116.60])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7GHg1n0005311; Thu, 16 Aug 2012 12:42:01 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 16 Aug 2012 10:42:01 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 79C13402E8; Thu, 16 Aug 2012 13:32:15 -0400 (EDT)
Date: Thu, 16 Aug 2012 13:32:15 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xensource.com
Message-ID: <20120816173215.GB9790@phenom.dumpdata.com>
References: <1345132214-15298-1-git-send-email-konrad.wilk@oracle.com>
	<1345132214-15298-2-git-send-email-konrad.wilk@oracle.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1345132214-15298-2-git-send-email-konrad.wilk@oracle.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Subject: Re: [Xen-devel] [PATCH 1/2] xen/p2m: Fix for 32-bit builds the
 "Reserve 8MB of _brk space for P2M"
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Aug 16, 2012 at 11:50:13AM -0400, Konrad Rzeszutek Wilk wrote:
> The git commit 5bc6f9888db5739abfa0cae279b4b442e4db8049
> xen/p2m: Reserve 8MB of _brk space for P2M leafs when populating back.
> 
> extended the _brk space to fit 1048576 PFNs. The math is that each
> P2M leaf can cover PAGE_SIZE/sizeof(unsigned long) PFNs. In 64-bit
> that means 512 PFNs, on 32-bit that is 1024. If on 64-bit machines
> we want to cover 4GB of PFNs, that means having enough for space
> to fit 1048576 unsigned longs.

Scratch that patch. This is better, but even with that I am still
hitting some weird 32-bit cases.

 
>From 5502d44e8c7293f6d81a7fdabe25e49845c25cf8 Mon Sep 17 00:00:00 2001
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Thu, 16 Aug 2012 10:57:09 -0400
Subject: [PATCH] xen/p2m: Fix for 32-bit builds the "Reserve 8MB of _brk
 space for P2M"

The git commit 5bc6f9888db5739abfa0cae279b4b442e4db8049
xen/p2m: Reserve 8MB of _brk space for P2M leafs when populating back.

extended the _brk space to fit 1048576 PFNs. The math is that each
P2M leaf can cover PAGE_SIZE/sizeof(unsigned long) PFNs. On 64-bit
that means 512 PFNs; on 32-bit, 1024. If on 64-bit machines
we want to cover 4GB of PFNs, that means having enough space
to fit 1048576 unsigned longs.

On 64-bit:
1048576 * sizeof(unsigned long) (8) bytes = 8MB

On 32-bit:
1048576 * sizeof(unsigned long) (4) bytes = 4MB

.. But the comment in the code says 3GB, not 4GB, so
let's also fix that and reserve enough space for 3GB of PFNs.

We fix that by using the above-mentioned math instead of the
predefined PMD_SIZE.

CC: stable@vger.kernel.org #only for 3.5
[v2: 4GB/3GB]
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/xen/p2m.c |    6 +++---
 1 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
index b2e91d4..29244d0 100644
--- a/arch/x86/xen/p2m.c
+++ b/arch/x86/xen/p2m.c
@@ -196,9 +196,9 @@ RESERVE_BRK(p2m_mid_identity, PAGE_SIZE * 2 * 3);
 
 /* When we populate back during bootup, the amount of pages can vary. The
  * max we have is seen is 395979, but that does not mean it can't be more.
- * But some machines can have 3GB I/O holes even. So lets reserve enough
- * for 4GB of I/O and E820 holes. */
-RESERVE_BRK(p2m_populated, PMD_SIZE * 4);
+ * Some machines can have 3GB I/O holes so lets reserve for that. */
+RESERVE_BRK(p2m_populated, 786432 * sizeof(unsigned long));
+
 static inline unsigned p2m_top_index(unsigned long pfn)
 {
 	BUG_ON(pfn >= MAX_P2M_PFN);
-- 
1.7.7.6


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 17:48:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 17:48:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T24Am-0003vG-Ob; Thu, 16 Aug 2012 17:47:56 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1T24Ak-0003v8-L9
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 17:47:54 +0000
Received: from [85.158.138.51:35580] by server-11.bemta-3.messagelabs.com id
	B4/01-23152-9423D205; Thu, 16 Aug 2012 17:47:53 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-8.tower-174.messagelabs.com!1345139272!28632331!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10333 invoked from network); 16 Aug 2012 17:47:53 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-8.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 16 Aug 2012 17:47:53 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1T24Ag-0008Qn-Q0; Thu, 16 Aug 2012 17:47:50 +0000
Date: Thu, 16 Aug 2012 18:47:50 +0100
From: Tim Deegan <tim@xen.org>
To: Andres Lagar-Cavilla <andres@lagarcavilla.org>
Message-ID: <20120816174750.GA32339@ocelot.phlegethon.org>
References: <mailman.10813.1345115327.1399.xen-devel@lists.xen.org>
	<35799d4871cec6ffe122121e59d930fe.squirrel@webmail.lagarcavilla.org>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <35799d4871cec6ffe122121e59d930fe.squirrel@webmail.lagarcavilla.org>
User-Agent: Mutt/1.4.2.1i
Cc: xudong.hao@intel.com, xiantao.zhang@intel.com,
	Jan Beulich <jbeulich@suse.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 2/3] xen/p2m: Using INVALID_MFN instead of
	mfn_valid
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 10:17 -0700 on 16 Aug (1345112267), Andres Lagar-Cavilla wrote:
> > max_mapped_pfn should be the highest entry that's even had a mapping in
> > the p2m.  Its intent was to provide a fast path exit from p2m lookups in
> > the (at the time) common case where _emulated_ MMIO addresses were
> > higher than all the actual p2m mappings, and the cost of a failed lookup
> > (on 32-bit) was a page fault in the linear map.  Also, at the time, the
> > p2m wasn't typed and we didn't support direct MMIO, so mfn_valid() was
> > equivalent to 'entry is present'.
> >
> > These days, I'm not sure how useful max_mapped_pfn is, since (a) for any
> > VM with >3GB RAM the emulated MMIO lookups are not avoided, and (b) on
> > 64-bit builds there's not pagefault for a failed lookup.  Also it seems to
> > have been abused in a few places to do for() loops that touch every PFN
> > instead of just walking the tries.  So I might get rid of it after 4.2
> > is out.
> 
> max_mapped_pfn also helps keep XENMEM_maximum_gpfn O(1).

Ergh, true.  With a little tidying up in the tree update code (to
eliminate empty nodes) it will still be O(1) - just walking
rightmost-first down a trie of fixed height.  But that's a little
more work than I hoped for. :)

Tim.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 18:26:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 18:26:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T24lu-0004RF-OF; Thu, 16 Aug 2012 18:26:18 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1T24lt-0004R7-7m
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 18:26:17 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-2.tower-27.messagelabs.com!1345141570!9622512!1
X-Originating-IP: [63.239.67.10]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31588 invoked from network); 16 Aug 2012 18:26:10 -0000
Received: from emvm-gh1-uea09.nsa.gov (HELO nsa.gov) (63.239.67.10)
	by server-2.tower-27.messagelabs.com with SMTP;
	16 Aug 2012 18:26:10 -0000
X-TM-IMSS-Message-ID: <af5b5f980001336a@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov
	([63.239.67.10]) with ESMTP (TREND IMSS SMTP Service 7.1) id
	af5b5f980001336a ; Thu, 16 Aug 2012 14:26:24 -0400
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id q7GIPGxS017035; 
	Thu, 16 Aug 2012 14:25:16 -0400
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: ian.campbell@citrix.com
Date: Thu, 16 Aug 2012 14:25:15 -0400
Message-Id: <1345141515-31078-1-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.2
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>, xen-devel@lists.xen.org,
	jbeulich@suse.com, konrad.wilk@oracle.com
Subject: [Xen-devel] [PATCH RFC] xen/sysfs: Use XENVER_guest_handle to query
	UUID
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The /sys/hypervisor/uuid path relies on Xenstore to query the domain's
UUID (handle) even though the hypervisor exposes an interface to query
this more directly and efficiently. The xenstore path /vm/UUID, which
is used for the current query, is being discussed for possible removal,
as most of the information under this path is only useful for the
toolstack, not the VM.

The UUID fetched from xenstore may also not be properly formatted as a
UUID if the UUID has been reused. This is most often seen in domain 0,
which, if xenstored does not clean up /vm properly, can end up with a
UUID field like "00000000-0000-0000-0000-000000000000-5".

----8<-----------------------------------------------------

This hypercall has been present since Xen 3.1, and is the preferred
method for a domain to obtain its UUID.

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 drivers/xen/sys-hypervisor.c    | 20 +++++---------------
 include/xen/interface/version.h |  3 +++
 2 files changed, 8 insertions(+), 15 deletions(-)

diff --git a/drivers/xen/sys-hypervisor.c b/drivers/xen/sys-hypervisor.c
index 4c7db7d..416fa01 100644
--- a/drivers/xen/sys-hypervisor.c
+++ b/drivers/xen/sys-hypervisor.c
@@ -116,22 +116,12 @@ static void xen_sysfs_version_destroy(void)
 
 static ssize_t uuid_show(struct hyp_sysfs_attr *attr, char *buffer)
 {
-	char *vm, *val;
+	xen_domain_handle_t uuid;
 	int ret;
-	extern int xenstored_ready;
-
-	if (!xenstored_ready)
-		return -EBUSY;
-
-	vm = xenbus_read(XBT_NIL, "vm", "", NULL);
-	if (IS_ERR(vm))
-		return PTR_ERR(vm);
-	val = xenbus_read(XBT_NIL, vm, "uuid", NULL);
-	kfree(vm);
-	if (IS_ERR(val))
-		return PTR_ERR(val);
-	ret = sprintf(buffer, "%s\n", val);
-	kfree(val);
+	ret = HYPERVISOR_xen_version(XENVER_guest_handle, uuid);
+	if (ret)
+		return ret;
+	ret = sprintf(buffer, "%pU\n", uuid);
 	return ret;
 }
 
diff --git a/include/xen/interface/version.h b/include/xen/interface/version.h
index e8b6519..dd58cf5 100644
--- a/include/xen/interface/version.h
+++ b/include/xen/interface/version.h
@@ -60,4 +60,7 @@ struct xen_feature_info {
 /* arg == NULL; returns host memory page size. */
 #define XENVER_pagesize 7
 
+/* arg == xen_domain_handle_t. */
+#define XENVER_guest_handle 8
+
 #endif /* __XEN_PUBLIC_VERSION_H__ */
-- 
1.7.11.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 18:35:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 18:35:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T24uG-0004jO-RH; Thu, 16 Aug 2012 18:34:56 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linus971@gmail.com>) id 1T24uF-0004jC-BM
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 18:34:55 +0000
X-Env-Sender: linus971@gmail.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1345142089!9633303!1
X-Originating-IP: [74.125.82.171]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3449 invoked from network); 16 Aug 2012 18:34:49 -0000
Received: from mail-we0-f171.google.com (HELO mail-we0-f171.google.com)
	(74.125.82.171)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 18:34:49 -0000
Received: by weyx43 with SMTP id x43so2432419wey.30
	for <xen-devel@lists.xensource.com>;
	Thu, 16 Aug 2012 11:34:49 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:from:date
	:x-google-sender-auth:message-id:subject:to:cc:content-type;
	bh=+cwHcSIDx/paDO/h/wzUcBqQy/QLibtCoxqYI1YUuo4=;
	b=Sh3WfxNoJQYcH+prYRZym89ewsHyiXXayU/ypHBkra+VSrkyEy9MwkuseZcFUN7OvQ
	++jwpdfGGL+y5U7SCCdDX7yb4IaNnMV/Ek0Y44nJdWuySMnj0hDDX+Rq2lFGwrj5U1Mx
	3eHLgGSnLShbtTSqvVKPBaXcQj+ghHAeJNdH4lNF0DJX2XBNzdtialCEMk374imz5oDl
	2ua3Ilh7ojB33yYVJ/wplfGmlm6vSPqPawynxPKT5305SfbhbCyZJWWCc6hVCJx7gPvJ
	R7Dw06HXPF5wSgj+0dnuSuq+NHjxRBdLDkuox4HfYxnAeDsC+8l7mb3yv+xC3qMv6iaX
	yMrg==
Received: by 10.216.231.208 with SMTP id l58mr955830weq.138.1345142088985;
	Thu, 16 Aug 2012 11:34:48 -0700 (PDT)
MIME-Version: 1.0
Received: by 10.216.121.205 with HTTP; Thu, 16 Aug 2012 11:34:28 -0700 (PDT)
In-Reply-To: <20120814141513.GA17776@phenom.dumpdata.com>
References: <20120814141513.GA17776@phenom.dumpdata.com>
From: Linus Torvalds <torvalds@linux-foundation.org>
Date: Thu, 16 Aug 2012 11:34:28 -0700
X-Google-Sender-Auth: WMU70QQkMAYIuuSycLT4zGl4mk4
Message-ID: <CA+55aFzDNMNKvibE9ddoUUHp1e3nkti4G3Bj1ummG2XRX49T6g@mail.gmail.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: xen-devel@lists.xensource.com, linux-kernel@vger.kernel.org
Subject: Re: [Xen-devel] [GIT PULL] (xen) stable/for-linus-3.6-rc1-tag
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Aug 14, 2012 at 7:15 AM, Konrad Rzeszutek Wilk
<konrad.wilk@oracle.com> wrote:
>
>  arch/x86/xen/p2m.c |    5 -----
>  1 files changed, 0 insertions(+), 5 deletions(-)
>
> Konrad Rzeszutek Wilk (1):
>       xen/p2m: Reserve 8MB of _brk space for P2M leafs when populating back.

There's something wrong with your statistics generation. You have done
the diff the wrong way around, and show 5 deletions, when there are
actually 5 insertions.

Please fix your script.

              Linus

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 18:47:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 18:47:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2561-000506-2H; Thu, 16 Aug 2012 18:47:05 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1T255z-000501-D0
	for Xen-devel@lists.xensource.com; Thu, 16 Aug 2012 18:47:03 +0000
Received: from [85.158.143.35:59622] by server-1.bemta-4.messagelabs.com id
	E1/C7-07754-6204D205; Thu, 16 Aug 2012 18:47:02 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1345142820!6092915!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDcwMTY5NA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7104 invoked from network); 16 Aug 2012 18:47:01 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-2.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 16 Aug 2012 18:47:01 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7GIksbk021574
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 16 Aug 2012 18:46:55 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7GIksgB013284
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 16 Aug 2012 18:46:54 GMT
Received: from abhmt110.oracle.com (abhmt110.oracle.com [141.146.116.62])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7GIkshb018295; Thu, 16 Aug 2012 13:46:54 -0500
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 16 Aug 2012 11:46:53 -0700
Date: Thu, 16 Aug 2012 11:46:50 -0700
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20120816114650.4db2079f@mantra.us.oracle.com>
In-Reply-To: <alpine.DEB.2.02.1208161454430.4850@kaball.uk.xensource.com>
References: <20120815175724.3405043a@mantra.us.oracle.com>
	<alpine.DEB.2.02.1208161454430.4850@kaball.uk.xensource.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.7.6 (GTK+ 2.18.9; x86_64-redhat-linux-gnu)
Mime-Version: 1.0
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [RFC PATCH 1/8]: PVH: Basic and preparatory changes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 16 Aug 2012 14:59:14 +0100
Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:

> It's good to see well split patches :)
> But could you please send them inline next time?

Sorry, that was the intent. 

> >  
> > -	if (!xen_pv_domain())
> > +	/* PVH TBD/FIXME: future work */
> > +	if (!xen_pv_domain() || xen_pvh_domain())
> >  		return -ENODEV;
> >  
> >  	register_xenstore_notifier(&xsn_cpu);
> 
> I don't think we should use or have xen_pvh_domain() in non-x86 files,
> like anything under drivers/xen.
> Isn't XENFEAT_auto_translated_physmap enough?

Ah, ok, I'll change it.

> > diff --git a/include/xen/interface/xen.h
> > b/include/xen/interface/xen.h index 0801468..1d5bc36 100644
> > --- a/include/xen/interface/xen.h
> > +++ b/include/xen/interface/xen.h
> > @@ -493,6 +493,7 @@ struct dom0_vga_console_info {
> >  /* These flags are passed in the 'flags' field of start_info_t. */
> >  #define SIF_PRIVILEGED    (1<<0)  /* Is the domain privileged? */
> >  #define SIF_INITDOMAIN    (1<<1)  /* Is this the initial control
> > domain? */ +#define SIF_IS_PVINHVM    (1<<4)  /* Is it a PV running
> > in HVM container? */ #define SIF_PM_MASK       (0xFF<<8) /* reserve
> > 1 byte for xen-pm options */ 
> >  typedef uint64_t cpumap_t;
> 
> I would avoid adding SIF_IS_PVINHVM, an x86 specific concept, into a
> generic xen.h interface file. 

> > +/* xen_pv_domain check is necessary as start_info ptr is null in
> > HVM. Also,
> > + * note, xen PVH domain shares lot of HVM code */
> > +#define xen_pvh_domain()       (xen_pv_domain()
> > &&                     \
> > +				(xen_start_info->flags &
> > SIF_IS_PVINHVM))
>  
> Also here.

Hmm.. I can move '#define xen_pvh_domain()' to x86 header, easy. But,
not sure how to define SIF_IS_PVINHVM then? I could put SIF_IS_RESVD in
include/xen/interface/xen.h, and then do 
#define SIF_IS_PVINHVM SIF_IS_RESVD in an x86 file.

What do you think about that?

thanks
Mukesh

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 18:48:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 18:48:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2576-00053w-H2; Thu, 16 Aug 2012 18:48:12 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T2575-00053l-IO
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 18:48:11 +0000
Received: from [85.158.143.99:2777] by server-2.bemta-4.messagelabs.com id
	7A/6C-31966-A604D205; Thu, 16 Aug 2012 18:48:10 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-11.tower-216.messagelabs.com!1345142890!21036892!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkyODE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26503 invoked from network); 16 Aug 2012 18:48:10 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-11.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 18:48:10 -0000
X-IronPort-AV: E=Sophos;i="4.77,780,1336348800"; d="scan'208";a="14047307"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 18:48:10 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 16 Aug 2012 19:48:09 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1T2573-0001mY-MX;
	Thu, 16 Aug 2012 18:48:09 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1T2572-0006xc-MB;
	Thu, 16 Aug 2012 19:48:09 +0100
To: xen-devel@lists.xensource.com
Message-ID: <osstest-13608-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 16 Aug 2012 19:48:08 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.0 test] 13608: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13608 linux-3.0 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13608/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf      9 guest-start                  fail   like 13592
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13592
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13592
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13592
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13592

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  8 debian-fixup                fail never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass

version targeted for testing:
 linux                a422ca75bd264cd26bafeb6305655245d2ea7c6b
baseline version:
 linux                b09b34258046c4555e535a279e29032303a932f8

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=linux-3.0
+ revision=a422ca75bd264cd26bafeb6305655245d2ea7c6b
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push linux-3.0 a422ca75bd264cd26bafeb6305655245d2ea7c6b
+ branch=linux-3.0
+ revision=a422ca75bd264cd26bafeb6305655245d2ea7c6b
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=linux
+ xenbranch=xen-unstable
+ '[' xlinux = xlinux ']'
+ linuxbranch=linux-3.0
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/staging/xen-unstable.hg
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.linux-3.0
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.linux-3.0
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-unstable.git
+ info_linux_tree linux-3.0
+ case $1 in
+ : git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
+ : linux-3.0.y
+ : linux-3.0.y
+ : git
+ : git
+ : git://xenbits.xen.org/linux-pvops.git
+ : xen@xenbits.xensource.com:git/linux-pvops.git
+ : tested/linux-3.0
+ : tested/linux-3.0
+ return 0
+ cd /export/home/osstest/repos/linux
+ git push xen@xenbits.xensource.com:git/linux-pvops.git a422ca75bd264cd26bafeb6305655245d2ea7c6b:tested/linux-3.0
Counting objects: 385, done.
Compressing objects: 100% (44/44), done.
Writing objects: 100% (287/287), 53.47 KiB, done.
Total 287 (delta 240), reused 287 (delta 240)
To xen@xenbits.xensource.com:git/linux-pvops.git
   b09b342..a422ca7  a422ca75bd264cd26bafeb6305655245d2ea7c6b -> tested/linux-3.0
+ exit 0

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 18:48:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 18:48:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2576-00053w-H2; Thu, 16 Aug 2012 18:48:12 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T2575-00053l-IO
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 18:48:11 +0000
Received: from [85.158.143.99:2777] by server-2.bemta-4.messagelabs.com id
	7A/6C-31966-A604D205; Thu, 16 Aug 2012 18:48:10 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-11.tower-216.messagelabs.com!1345142890!21036892!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkyODE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26503 invoked from network); 16 Aug 2012 18:48:10 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-11.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 18:48:10 -0000
X-IronPort-AV: E=Sophos;i="4.77,780,1336348800"; d="scan'208";a="14047307"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 18:48:10 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 16 Aug 2012 19:48:09 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1T2573-0001mY-MX;
	Thu, 16 Aug 2012 18:48:09 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1T2572-0006xc-MB;
	Thu, 16 Aug 2012 19:48:09 +0100
To: xen-devel@lists.xensource.com
Message-ID: <osstest-13608-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 16 Aug 2012 19:48:08 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.0 test] 13608: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13608 linux-3.0 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13608/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf      9 guest-start                  fail   like 13592
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13592
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13592
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13592
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13592

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  8 debian-fixup                fail never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass

version targeted for testing:
 linux                a422ca75bd264cd26bafeb6305655245d2ea7c6b
baseline version:
 linux                b09b34258046c4555e535a279e29032303a932f8

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=linux-3.0
+ revision=a422ca75bd264cd26bafeb6305655245d2ea7c6b
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push linux-3.0 a422ca75bd264cd26bafeb6305655245d2ea7c6b
+ branch=linux-3.0
+ revision=a422ca75bd264cd26bafeb6305655245d2ea7c6b
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=linux
+ xenbranch=xen-unstable
+ '[' xlinux = xlinux ']'
+ linuxbranch=linux-3.0
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/staging/xen-unstable.hg
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.linux-3.0
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.linux-3.0
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-unstable.git
+ info_linux_tree linux-3.0
+ case $1 in
+ : git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
+ : linux-3.0.y
+ : linux-3.0.y
+ : git
+ : git
+ : git://xenbits.xen.org/linux-pvops.git
+ : xen@xenbits.xensource.com:git/linux-pvops.git
+ : tested/linux-3.0
+ : tested/linux-3.0
+ return 0
+ cd /export/home/osstest/repos/linux
+ git push xen@xenbits.xensource.com:git/linux-pvops.git a422ca75bd264cd26bafeb6305655245d2ea7c6b:tested/linux-3.0
Counting objects: 385, done.
Compressing objects: 100% (44/44), done.
Writing objects: 100% (287/287), 53.47 KiB, done.
Total 287 (delta 240), reused 287 (delta 240)
To xen@xenbits.xensource.com:git/linux-pvops.git
   b09b342..a422ca7  a422ca75bd264cd26bafeb6305655245d2ea7c6b -> tested/linux-3.0
+ exit 0

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 18:56:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 18:56:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T25Et-0005Ju-M0; Thu, 16 Aug 2012 18:56:15 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1T25Es-0005Jp-9X
	for Xen-devel@lists.xensource.com; Thu, 16 Aug 2012 18:56:14 +0000
Received: from [85.158.143.99:19198] by server-1.bemta-4.messagelabs.com id
	D7/3D-07754-D424D205; Thu, 16 Aug 2012 18:56:13 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-12.tower-216.messagelabs.com!1345143371!23315493!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDcwMTY5NA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7432 invoked from network); 16 Aug 2012 18:56:12 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-12.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 16 Aug 2012 18:56:12 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7GIu761031035
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 16 Aug 2012 18:56:08 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7GIu6dH018393
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 16 Aug 2012 18:56:07 GMT
Received: from abhmt120.oracle.com (abhmt120.oracle.com [141.146.116.72])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7GIu6LS013342; Thu, 16 Aug 2012 13:56:06 -0500
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 16 Aug 2012 11:56:05 -0700
Date: Thu, 16 Aug 2012 11:56:04 -0700
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20120816115604.6b0df003@mantra.us.oracle.com>
In-Reply-To: <alpine.DEB.2.02.1208161501550.4850@kaball.uk.xensource.com>
References: <20120815180131.24aaa5ce@mantra.us.oracle.com>
	<alpine.DEB.2.02.1208161501550.4850@kaball.uk.xensource.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.7.6 (GTK+ 2.18.9; x86_64-redhat-linux-gnu)
Mime-Version: 1.0
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [[RFC PATCH 2/8]: PVH: changes related to initial
 boot and irq rewiring
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 16 Aug 2012 15:05:27 +0100
Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:

> On Thu, 16 Aug 2012, Mukesh Rathor wrote:
> > ---
> >  arch/x86/xen/enlighten.c |   67
> > +
> >  	if (!xen_feature(XENFEAT_auto_translated_physmap)) {
> >  		set_fixmap(FIX_PARAVIRT_BOOTMAP,
> >  			   xen_start_info->shared_info);
> > @@ -1044,6 +1053,10 @@ void xen_setup_shared_info(void)
> >  		HYPERVISOR_shared_info =
> >  			(struct shared_info
> > *)__va(xen_start_info->shared_info); 
> > +	/* PVH TBD/FIXME: vcpu info placement in phase 2 */
> > +	if (xen_pvh_domain())
> > +		return;
> 
> Is XENFEAT_auto_translated_physmap even believed to work?
> 
> If not, we could change the XENFEAT_auto_translated_physmap case and
> make it point to xen_hvm_init_shared_info, that would work for pvh as
> well.

Ok, I'll open an action item on this, and open a bug as soon as the patch
is in, to investigate and fix it.

> > +#ifdef CONFIG_X86_32
> > +	xen_raw_printk("ERROR: 32bit PV guest can not run in HVM
> > container\n");
> > +	return;
> > +#endif
> 
> this has to be wrong: shouldn't the return be at least below the
> following line?

Ooopsy, fixing it. Which is why, as my ex-manager at IBM used to say, "it
ain't done till it's tested!" :) :).

thanks,
Mukesh


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 19:01:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 19:01:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T25JJ-0005Tq-Gi; Thu, 16 Aug 2012 19:00:49 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1T25JH-0005Tj-NR
	for Xen-devel@lists.xensource.com; Thu, 16 Aug 2012 19:00:47 +0000
Received: from [85.158.143.35:36429] by server-3.bemta-4.messagelabs.com id
	CD/99-09529-F534D205; Thu, 16 Aug 2012 19:00:47 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1345143645!14429888!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjk0MzAx\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16551 invoked from network); 16 Aug 2012 19:00:46 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-3.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 16 Aug 2012 19:00:46 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7GJ0hWT016425
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK)
	for <Xen-devel@lists.xensource.com>; Thu, 16 Aug 2012 19:00:44 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7GJ0heF000244
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO)
	for <Xen-devel@lists.xensource.com>; Thu, 16 Aug 2012 19:00:43 GMT
Received: from abhmt105.oracle.com (abhmt105.oracle.com [141.146.116.57])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7GJ0hbX016665
	for <Xen-devel@lists.xensource.com>; Thu, 16 Aug 2012 14:00:43 -0500
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 16 Aug 2012 12:00:43 -0700
Date: Thu, 16 Aug 2012 12:00:41 -0700
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Message-ID: <20120816120041.15d382ed@mantra.us.oracle.com>
In-Reply-To: <20120816140929.GB17687@phenom.dumpdata.com>
References: <20120815180131.24aaa5ce@mantra.us.oracle.com>
	<20120816140929.GB17687@phenom.dumpdata.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.7.6 (GTK+ 2.18.9; x86_64-redhat-linux-gnu)
Mime-Version: 1.0
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [[RFC PATCH 2/8]: PVH: changes related to initial
 boot and irq rewiring
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 16 Aug 2012 10:09:29 -0400
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:

> > +#ifdef CONFIG_X86_32
> > +	xen_raw_printk("ERROR: 32bit PV guest can not run in HVM
> > container\n");
> > +	return;
> 
> Perhaps BUG() ?

BUG() will probably not work because it's so early in boot, which is why
I figured the previous if statement (right before it) has a return.

thanks,
Mukesh


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 19:09:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 19:09:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T25Ru-0005oV-Gx; Thu, 16 Aug 2012 19:09:42 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T25Rt-0005oQ-F2
	for Xen-devel@lists.xensource.com; Thu, 16 Aug 2012 19:09:41 +0000
Received: from [85.158.139.83:54147] by server-5.bemta-5.messagelabs.com id
	C3/CA-31019-4754D205; Thu, 16 Aug 2012 19:09:40 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-8.tower-182.messagelabs.com!1345144178!17265148!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjk0MzAx\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25185 invoked from network); 16 Aug 2012 19:09:40 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-8.tower-182.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 16 Aug 2012 19:09:40 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7GJ9bMq025122
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK)
	for <Xen-devel@lists.xensource.com>; Thu, 16 Aug 2012 19:09:38 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7GJ9bZH015024
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO)
	for <Xen-devel@lists.xensource.com>; Thu, 16 Aug 2012 19:09:37 GMT
Received: from abhmt120.oracle.com (abhmt120.oracle.com [141.146.116.72])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7GJ9a0S022920
	for <Xen-devel@lists.xensource.com>; Thu, 16 Aug 2012 14:09:37 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 16 Aug 2012 12:09:36 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id E9DA1402B7; Thu, 16 Aug 2012 15:09:35 -0400 (EDT)
Date: Thu, 16 Aug 2012 15:09:35 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Mukesh Rathor <mukesh.rathor@oracle.com>
Message-ID: <20120816190935.GA3428@phenom.dumpdata.com>
References: <20120815180131.24aaa5ce@mantra.us.oracle.com>
	<20120816140929.GB17687@phenom.dumpdata.com>
	<20120816120041.15d382ed@mantra.us.oracle.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20120816120041.15d382ed@mantra.us.oracle.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [[RFC PATCH 2/8]: PVH: changes related to initial
 boot and irq rewiring
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Aug 16, 2012 at 12:00:41PM -0700, Mukesh Rathor wrote:
> On Thu, 16 Aug 2012 10:09:29 -0400
> Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> 
> > > +#ifdef CONFIG_X86_32
> > > +	xen_raw_printk("ERROR: 32bit PV guest can not run in HVM
> > > container\n");
> > > +	return;
> > 
> > Perhaps BUG() ?
> 
> BUG() will probably not work because it's so early in boot, which is why
> I figured the preceding if statement (right before it) uses a plain
> return.

It will crash with this:

(XEN) domain_crash_sync called from entry.S
... followed by some more output.
> 
> thanks,
> Mukesh

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 19:14:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 19:14:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T25Wc-0005wC-7X; Thu, 16 Aug 2012 19:14:34 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1T25Wa-0005w7-Cl
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 19:14:32 +0000
Received: from [85.158.143.35:13123] by server-1.bemta-4.messagelabs.com id
	F5/18-07754-7964D205; Thu, 16 Aug 2012 19:14:31 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1345144468!12289196!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDcwMTY5NA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19641 invoked from network); 16 Aug 2012 19:14:29 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-7.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 16 Aug 2012 19:14:29 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7GJEN72015935
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 16 Aug 2012 19:14:24 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7GJEM5n017802
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 16 Aug 2012 19:14:23 GMT
Received: from abhmt117.oracle.com (abhmt117.oracle.com [141.146.116.69])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7GJEMpL004931; Thu, 16 Aug 2012 14:14:22 -0500
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 16 Aug 2012 12:14:22 -0700
Date: Thu, 16 Aug 2012 12:14:20 -0700
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: "Hao, Xudong" <xudong.hao@intel.com>
Message-ID: <20120816121420.5549f645@mantra.us.oracle.com>
In-Reply-To: <403610A45A2B5242BD291EDAE8B37D300FE8AAC2@SHSMSX102.ccr.corp.intel.com>
References: <1345013831-20662-1-git-send-email-xudong.hao@intel.com>
	<502B86420200007800095048@nat28.tlf.novell.com>
	<403610A45A2B5242BD291EDAE8B37D300FE8AA4F@SHSMSX102.ccr.corp.intel.com>
	<502CE3AB0200007800095686@nat28.tlf.novell.com>
	<403610A45A2B5242BD291EDAE8B37D300FE8AAC2@SHSMSX102.ccr.corp.intel.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.7.6 (GTK+ 2.18.9; x86_64-redhat-linux-gnu)
Mime-Version: 1.0
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: "tim@xen.org" <tim@xen.org>, "Zhang,
	Xiantao" <xiantao.zhang@intel.com>, Jan Beulich <JBeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 2/3] xen/p2m: Using INVALID_MFN instead of
 mfn_valid
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 16 Aug 2012 10:31:30 +0000
"Hao, Xudong" <xudong.hao@intel.com> wrote:

> > */
> > >> > -    if ( mfn_valid(mfn_x(mfn)) &&
> > >> > +    if ( (mfn_x(mfn) != INVALID_MFN) &&
> > >> >           (gfn + (1UL << order) - 1 > p2m->max_mapped_pfn) )
> > >> >          p2m->max_mapped_pfn = gfn + (1UL << order) - 1;
> > >>


BTW, here's the change in my PVH/hybrid tree that Tim D had suggested
a couple of months ago:

     /* Track the highest gfn for which we have ever had a valid mapping */
-    if ( mfn_valid(mfn_x(mfn)) &&
+    if ( p2mt != p2m_invalid && p2mt != p2m_mmio_dm &&
          (gfn + (1UL << order) - 1 > p2m->max_mapped_pfn) )
         p2m->max_mapped_pfn = gfn + (1UL << order) - 1;
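
Not the real Xen code, but a cut-down userspace model of what this check
does (the struct, enum, and function names here are simplified
assumptions): an order-n mapping covers 2^n contiguous frames starting
at gfn, and max_mapped_pfn is a high-water mark that only real mappings,
not p2m_invalid / p2m_mmio_dm holes, may raise.

```c
#include <assert.h>

/* Simplified stand-ins for Xen's p2m types, for illustration only. */
enum p2m_type { p2m_ram_rw, p2m_invalid, p2m_mmio_dm };

struct p2m {
    unsigned long max_mapped_pfn;  /* highest gfn ever validly mapped */
};

/* An order-N mapping spans frames gfn .. gfn + (1UL << order) - 1.
 * Raise the high-water mark only for mapping types that represent a
 * valid mapping, matching the replacement condition above. */
void track_mapping(struct p2m *p2m, unsigned long gfn,
                   unsigned int order, enum p2m_type p2mt)
{
    if (p2mt != p2m_invalid && p2mt != p2m_mmio_dm &&
        (gfn + (1UL << order) - 1 > p2m->max_mapped_pfn))
        p2m->max_mapped_pfn = gfn + (1UL << order) - 1;
}
```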

thanks,
Mukesh


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 20:23:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 20:23:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T26aT-0006xl-Df; Thu, 16 Aug 2012 20:22:37 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <deepti.kdeeps@gmail.com>) id 1T26aR-0006xg-2L
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 20:22:35 +0000
Received: from [85.158.139.83:33538] by server-10.bemta-5.messagelabs.com id
	87/8E-13125-A865D205; Thu, 16 Aug 2012 20:22:34 +0000
X-Env-Sender: deepti.kdeeps@gmail.com
X-Msg-Ref: server-10.tower-182.messagelabs.com!1345148553!28789955!1
X-Originating-IP: [209.85.212.173]
X-SpamReason: No, hits=0.9 required=7.0 tests=HTML_40_50,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22214 invoked from network); 16 Aug 2012 20:22:33 -0000
Received: from mail-wi0-f173.google.com (HELO mail-wi0-f173.google.com)
	(209.85.212.173)
	by server-10.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 20:22:33 -0000
Received: by wibhm6 with SMTP id hm6so842581wib.14
	for <xen-devel@lists.xen.org>; Thu, 16 Aug 2012 13:22:33 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=XDcKbRGc7zLi7/mXgYKZ8nPfrSrW85Ci9jy7/cmXq20=;
	b=sBBCKrHSOAEIlaukszzC74d4dJPBXRf09R1dStOz3sXfikJWaRoSi75v4L+GL1mMeW
	5UdNT0CI4McI0wkHoE1NsDF4Fphf1SPBuyT99QFZ0N+gpEtDZ2c9nKwctWrYaL98DIOk
	S5i1N1vzyfO/TH6tgVQKxhTGiAICw4hjscxyuxfR4LtgBxUxdF2RpMvFoQPJ5OtPoonh
	z6CoBb0ltjlXY7YIJiFgM7xWkRpOaHQ7c+I7KNRkI1rPNOsQoodTuvv/i1H5btKUNWqw
	2zRkoYHxTKKctO8xIwQxVur2XNfD8NCTeSEjNWKI/lZTp94IqCpVCh4wc2zDTB4M4FXa
	sHVQ==
MIME-Version: 1.0
Received: by 10.216.139.95 with SMTP id b73mr1109579wej.191.1345148553081;
	Thu, 16 Aug 2012 13:22:33 -0700 (PDT)
Received: by 10.194.21.35 with HTTP; Thu, 16 Aug 2012 13:22:33 -0700 (PDT)
In-Reply-To: <1345104590.27489.5.camel@zakaz.uk.xensource.com>
References: <CABGrkh9R75j+Dkc0X-Ue_tj23rFwZOMfTxZ-sJC7AWGp3ab2zQ@mail.gmail.com>
	<1345104590.27489.5.camel@zakaz.uk.xensource.com>
Date: Thu, 16 Aug 2012 13:22:33 -0700
Message-ID: <CABGrkh8gXYwCABE_hsiM1+J0oLfEESd6=Jgyy5jX3wFyomMiAA@mail.gmail.com>
From: Deepti kulkarni <deepti.kdeeps@gmail.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Debian kernel 3.1 onwards fails to boot on Xen domU
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4377237376867548376=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4377237376867548376==
Content-Type: multipart/alternative; boundary=0016e6d4f598ad340d04c767cd91

--0016e6d4f598ad340d04c767cd91
Content-Type: text/plain; charset=ISO-8859-1

On PV, the VM doesn't boot and I see the error: "Error: Starting VM -
Traceback (most recent call last): - File "/usr/bin/pygrub", line 808, in ?
- fs = fsimage.open(file, part_offs[0], bootfsoptions) - IOError: [Errno
95] Operation not supported"


Deepti

On Thu, Aug 16, 2012 at 1:09 AM, Ian Campbell <Ian.Campbell@citrix.com>wrote:

> On Thu, 2012-08-16 at 00:29 +0100, Deepti kulkarni wrote:
> > I am trying to debug the debian bug for Xen
> > http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=656792.
> >
> >
> > The domU Debian machine (HVM and PV) fails to boot up and drops to
> > the initramfs prompt. The error is "Unable to find a medium containing
> > a live image system".
> >
> >
> > I did a cat on /proc/modules. Looks like the Xen drivers are not
> > loaded.
> >
> >
> > A modprobe of xen_blkfront doesn't help the boot either.
> >
> >
> > Any workarounds suggested? Debian kernel 3.0 works fine.
>
> There are several suggested workarounds and diagnostics to try in that
> bug log -- have you tried them?
>
> That said, I'm not necessarily convinced your issue is related to that
> bug, which was quite PVHVM-specific (yet you mention having the problem
> on PV too).
>
> If none of the suggested workarounds help then I would suggest making a
> full bug report (with logs etc) as described in
> http://wiki.xen.org/wiki/Reporting_Bugs_against_Xen
>
> In particular full guest logs and a description of the installation
> method which you used would be useful.
>
> Ian.
>
>
>

--0016e6d4f598ad340d04c767cd91--


--===============4377237376867548376==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4377237376867548376==--


Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4377237376867548376==
Content-Type: multipart/alternative; boundary=0016e6d4f598ad340d04c767cd91

--0016e6d4f598ad340d04c767cd91
Content-Type: text/plain; charset=ISO-8859-1

On a PV guest, the VM doesn't boot and I see this error:

Error: Starting VM
Traceback (most recent call last):
  File "/usr/bin/pygrub", line 808, in ?
    fs = fsimage.open(file, part_offs[0], bootfsoptions)
IOError: [Errno 95] Operation not supported
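
[Editorial aside, not part of the original mail: errno 95 on Linux is EOPNOTSUPP ("Operation not supported"), which in this traceback means the fsimage library refused the guest image's filesystem or partition layout. The number can be decoded with the Python standard library alone, assuming a Linux host:]

```python
import errno
import os

# Decode the [Errno 95] from the pygrub traceback above.
code = 95
name = errno.errorcode.get(code)   # symbolic name for this errno
message = os.strerror(code)        # human-readable description
print(code, name, message)
```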


Deepti

On Thu, Aug 16, 2012 at 1:09 AM, Ian Campbell <Ian.Campbell@citrix.com> wrote:

> On Thu, 2012-08-16 at 00:29 +0100, Deepti kulkarni wrote:
> > I am trying to debug the debian bug for Xen
> > http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=656792.
> >
> >
> > The domU debian machine (HVM and PV) fails to bootup and redirects to
> > initramfs prompt. The error is "Unable to find a medium containing a
> > live image system".
> >
> >
> > I did a cat on /proc/modules. Looks like the xen drivers are not
> > loaded.
> >
> >
> > A modprobe of xen_blkfront doesn't help the boot either.
> >
> >
> > Any workarounds suggested? Debian kernel 3.0 works fine.
>
> There are several suggested workarounds and diagnostics to try in that
> bug log -- have you tried them?
>
> That said I'm not necessarily convinced your issue is related to that
> bug, which was quite PVHVM specific (yet you mention having the problem
> on PV too).
>
> If none of the suggested workarounds help then I would suggest making a
> full bug report (with logs etc) as described in
> http://wiki.xen.org/wiki/Reporting_Bugs_against_Xen
>
> In particular full guest logs and a description of the installation
> method which you used would be useful.
>
> Ian.
>
>
>

--0016e6d4f598ad340d04c767cd91--


--===============4377237376867548376==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4377237376867548376==--


From xen-devel-bounces@lists.xen.org Thu Aug 16 20:23:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 20:23:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T26aj-0006yQ-QK; Thu, 16 Aug 2012 20:22:53 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <msw@amazon.com>) id 1T26ai-0006yL-Su
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 20:22:53 +0000
Received: from [85.158.143.99:38677] by server-2.bemta-4.messagelabs.com id
	10/91-31966-C965D205; Thu, 16 Aug 2012 20:22:52 +0000
X-Env-Sender: msw@amazon.com
X-Msg-Ref: server-11.tower-216.messagelabs.com!1345148569!21046617!1
X-Originating-IP: [207.171.178.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA3LjE3MS4xNzguMjUgPT4gNTkwMjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31297 invoked from network); 16 Aug 2012 20:22:51 -0000
Received: from smtp-fw-31001.amazon.com (HELO smtp-fw-31001.amazon.com)
	(207.171.178.25)
	by server-11.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 20:22:51 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=amazon.com; i=msw@amazon.com; q=dns/txt; s=rte02;
	t=1345148571; x=1376684571;
	h=date:from:to:cc:subject:message-id:references:
	mime-version:in-reply-to;
	bh=4ogLwTF8Hu7Sm7c69BdNunTl6tvgO7AiHH8JtUdKjew=;
	b=lIkES6e1lqNQFogkeOMiJ55bu+Z2amWfjTU7eG61e7cR15TqQ8KD3s9Z
	99hV6WZ+/Z0dhTy33/qxLjG5h1DZMQ==;
X-IronPort-AV: E=Sophos;i="4.77,781,1336348800"; d="scan'208";a="279644110"
Received: from smtp-in-9003.sea19.amazon.com ([10.186.104.20])
	by smtp-border-fw-out-31001.sea31.amazon.com with
	ESMTP/TLS/DHE-RSA-AES256-SHA; 16 Aug 2012 20:22:48 +0000
Received: from ex10-hub-9002.ant.amazon.com (ex10-hub-9002.ant.amazon.com
	[10.185.137.130])
	by smtp-in-9003.sea19.amazon.com (8.13.8/8.13.8) with ESMTP id
	q7GKMlNw003529
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=FAIL);
	Thu, 16 Aug 2012 20:22:48 GMT
Received: from US-SEA-R8XVZTX (10.224.80.42) by ex10-hub-9002.ant.amazon.com
	(10.185.137.130) with Microsoft SMTP Server id 14.2.247.3;
	Thu, 16 Aug 2012 13:22:46 -0700
Received: by US-SEA-R8XVZTX (sSMTP sendmail emulation); Thu, 16 Aug 2012
	13:22:46 -0700
Date: Thu, 16 Aug 2012 13:22:46 -0700
From: Matt Wilson <msw@amazon.com>
To: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Message-ID: <20120816202246.GA6244@US-SEA-R8XVZTX>
References: <1345141515-31078-1-git-send-email-dgdegra@tycho.nsa.gov>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1345141515-31078-1-git-send-email-dgdegra@tycho.nsa.gov>
User-Agent: Mutt/1.5.20 (2009-12-10)
Cc: "konrad.wilk@oracle.com" <konrad.wilk@oracle.com>,
	"ian.campbell@citrix.com" <ian.campbell@citrix.com>,
	"jbeulich@suse.com" <jbeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH RFC] xen/sysfs: Use XENVER_guest_handle to
 query UUID
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Aug 16, 2012 at 11:25:15AM -0700, Daniel De Graaf wrote:
> The /sys/hypervisor/uuid path relies on Xenstore to query the domain's
> UUID (handle) even when the hypervisor exposes an interface to more
> directly and efficiently query this. The xenstore path /vm/UUID which is
> used for the current query is being discussed for possible removal as
> most of the information under this path is only useful for the
> toolstack, not the VM.
> 
> The UUID fetched from xenstore may also not be properly formatted as a
> UUID for the domain if the UUID has been reused (this is most often seen
> in domain 0, which if xenstored does not clean up /vm properly, can end
> up with a UUID field like "00000000-0000-0000-0000-000000000000-5").
> 
> ----8<-----------------------------------------------------
> 
> This hypercall has been present since Xen 3.1, and is the preferred
> method for a domain to obtain its UUID.

Hi Daniel,

What do you think about retaining a fallback of looking in xenstore if
the hypercall fails?

Matt

> Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
> ---
>  drivers/xen/sys-hypervisor.c    | 20 +++++---------------
>  include/xen/interface/version.h |  3 +++
>  2 files changed, 8 insertions(+), 15 deletions(-)
> 
> diff --git a/drivers/xen/sys-hypervisor.c b/drivers/xen/sys-hypervisor.c
> index 4c7db7d..416fa01 100644
> --- a/drivers/xen/sys-hypervisor.c
> +++ b/drivers/xen/sys-hypervisor.c
> @@ -116,22 +116,12 @@ static void xen_sysfs_version_destroy(void)
>  
>  static ssize_t uuid_show(struct hyp_sysfs_attr *attr, char *buffer)
>  {
> -	char *vm, *val;
> +	xen_domain_handle_t uuid;
>  	int ret;
> -	extern int xenstored_ready;
> -
> -	if (!xenstored_ready)
> -		return -EBUSY;
> -
> -	vm = xenbus_read(XBT_NIL, "vm", "", NULL);
> -	if (IS_ERR(vm))
> -		return PTR_ERR(vm);
> -	val = xenbus_read(XBT_NIL, vm, "uuid", NULL);
> -	kfree(vm);
> -	if (IS_ERR(val))
> -		return PTR_ERR(val);
> -	ret = sprintf(buffer, "%s\n", val);
> -	kfree(val);
> +	ret = HYPERVISOR_xen_version(XENVER_guest_handle, uuid);
> +	if (ret)
> +		return ret;
> +	ret = sprintf(buffer, "%pU\n", uuid);
>  	return ret;
>  }
>  
> diff --git a/include/xen/interface/version.h b/include/xen/interface/version.h
> index e8b6519..dd58cf5 100644
> --- a/include/xen/interface/version.h
> +++ b/include/xen/interface/version.h
> @@ -60,4 +60,7 @@ struct xen_feature_info {
>  /* arg == NULL; returns host memory page size. */
>  #define XENVER_pagesize 7
>  
> +/* arg == xen_domain_handle_t. */
> +#define XENVER_guest_handle 8
> +
>  #endif /* __XEN_PUBLIC_VERSION_H__ */

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 20:41:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 20:41:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T26s6-0007Oo-FK; Thu, 16 Aug 2012 20:40:50 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1T26s4-0007Og-UM
	for xen-devel@lists.xen.org; Thu, 16 Aug 2012 20:40:49 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-6.tower-27.messagelabs.com!1345149642!3277688!1
X-Originating-IP: [63.239.67.9]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18643 invoked from network); 16 Aug 2012 20:40:42 -0000
Received: from emvm-gh1-uea08.nsa.gov (HELO nsa.gov) (63.239.67.9)
	by server-6.tower-27.messagelabs.com with SMTP;
	16 Aug 2012 20:40:42 -0000
X-TM-IMSS-Message-ID: <afd7753c00015b6c@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov ([63.239.67.9])
	with ESMTP (TREND IMSS SMTP Service 7.1) id afd7753c00015b6c ;
	Thu, 16 Aug 2012 16:40:23 -0400
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id q7GKeYGD024736; 
	Thu, 16 Aug 2012 16:40:34 -0400
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: msw@amazon.com
Date: Thu, 16 Aug 2012 16:40:26 -0400
Message-Id: <1345149626-32602-1-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.2
In-Reply-To: <20120816202246.GA6244@US-SEA-R8XVZTX>
References: <20120816202246.GA6244@US-SEA-R8XVZTX>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>, xen-devel@lists.xen.org,
	ian.campbell@citrix.com, jbeulich@suse.com, konrad.wilk@oracle.com
Subject: [Xen-devel] [PATCH v2] xen/sysfs: Use XENVER_guest_handle to query
	UUID
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/16/2012 04:22 PM, Matt Wilson wrote:
> 
> Hi Daniel,
> 
> What do you think about retaining a fallback of looking in xenstore if
> the hypercall fails?
> 
> Matt
> 

That sounds good; there's little cost to leaving the fallback in.

----8<-----------------------------------------------------

This hypercall has been present since Xen 3.1, and is the preferred
method for a domain to obtain its UUID. Fall back to the xenstore method
if using an older version of Xen (which returns -ENOSYS).

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 drivers/xen/sys-hypervisor.c    | 13 ++++++++++++-
 include/xen/interface/version.h |  3 +++
 2 files changed, 15 insertions(+), 1 deletion(-)

diff --git a/drivers/xen/sys-hypervisor.c b/drivers/xen/sys-hypervisor.c
index 4c7db7d..284df8a 100644
--- a/drivers/xen/sys-hypervisor.c
+++ b/drivers/xen/sys-hypervisor.c
@@ -114,7 +114,7 @@ static void xen_sysfs_version_destroy(void)
 
 /* UUID */
 
-static ssize_t uuid_show(struct hyp_sysfs_attr *attr, char *buffer)
+static ssize_t uuid_show_fallback(struct hyp_sysfs_attr *attr, char *buffer)
 {
 	char *vm, *val;
 	int ret;
@@ -135,6 +135,17 @@ static ssize_t uuid_show(struct hyp_sysfs_attr *attr, char *buffer)
 	return ret;
 }
 
+static ssize_t uuid_show(struct hyp_sysfs_attr *attr, char *buffer)
+{
+	xen_domain_handle_t uuid;
+	int ret;
+	ret = HYPERVISOR_xen_version(XENVER_guest_handle, uuid);
+	if (ret)
+		return uuid_show_fallback(attr, buffer);
+	ret = sprintf(buffer, "%pU\n", uuid);
+	return ret;
+}
+
 HYPERVISOR_ATTR_RO(uuid);
 
 static int __init xen_sysfs_uuid_init(void)
diff --git a/include/xen/interface/version.h b/include/xen/interface/version.h
index e8b6519..dd58cf5 100644
--- a/include/xen/interface/version.h
+++ b/include/xen/interface/version.h
@@ -60,4 +60,7 @@ struct xen_feature_info {
 /* arg == NULL; returns host memory page size. */
 #define XENVER_pagesize 7
 
+/* arg == xen_domain_handle_t. */
+#define XENVER_guest_handle 8
+
 #endif /* __XEN_PUBLIC_VERSION_H__ */
-- 
1.7.11.2
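
[Editorial aside, not part of the original mail: the control flow of the v2 patch, try the XENVER_guest_handle hypercall first and fall back to the xenstore lookup on failure, can be sketched in plain Python. `uuid_from_xenstore` and the two `*_xen` stubs below are hypothetical stand-ins, not real Xen bindings, and the non-zero UUID is made up:]

```python
import errno

def uuid_from_xenstore():
    # Stub for the old xenbus_read("vm") / xenbus_read(vm, "uuid") path.
    return "00000000-0000-0000-0000-000000000000"

def uuid_show(hypercall):
    # Mirror of the patched uuid_show(): prefer the hypercall, and use
    # the xenstore fallback only when the hypervisor returns an error.
    ret, uuid = hypercall()
    if ret:
        return uuid_from_xenstore()
    return uuid

def old_xen():
    return (-errno.ENOSYS, None)  # pre-3.1 Xen: sub-op does not exist

def new_xen():
    return (0, "12345678-1234-1234-1234-123456789abc")  # made-up handle

print(uuid_show(old_xen))  # takes the fallback path
print(uuid_show(new_xen))  # takes the hypercall path
```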

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 21:02:42 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 21:02:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T27Cp-0007d3-ID; Thu, 16 Aug 2012 21:02:15 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T27Cn-0007cy-49
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 21:02:13 +0000
Received: from [85.158.139.83:10199] by server-2.bemta-5.messagelabs.com id
	96/EA-10142-3DF5D205; Thu, 16 Aug 2012 21:02:11 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-10.tower-182.messagelabs.com!1345150929!28794011!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjk0MzAx\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21351 invoked from network); 16 Aug 2012 21:02:11 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-10.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 16 Aug 2012 21:02:11 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7GL2814001438
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 16 Aug 2012 21:02:09 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7GL28o1028563
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 16 Aug 2012 21:02:08 GMT
Received: from abhmt115.oracle.com (abhmt115.oracle.com [141.146.116.67])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7GL27J7015242; Thu, 16 Aug 2012 16:02:07 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 16 Aug 2012 14:02:07 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id B5E49402B7; Thu, 16 Aug 2012 17:02:06 -0400 (EDT)
Date: Thu, 16 Aug 2012 17:02:06 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xensource.com
Message-ID: <20120816210206.GA17966@phenom.dumpdata.com>
References: <1345132214-15298-1-git-send-email-konrad.wilk@oracle.com>
	<1345132214-15298-2-git-send-email-konrad.wilk@oracle.com>
	<20120816173215.GB9790@phenom.dumpdata.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20120816173215.GB9790@phenom.dumpdata.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Subject: Re: [Xen-devel] [PATCH 1/2] xen/p2m: Fix for 32-bit builds the
 "Reserve 8MB of _brk space for P2M"
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Aug 16, 2012 at 01:32:15PM -0400, Konrad Rzeszutek Wilk wrote:
> On Thu, Aug 16, 2012 at 11:50:13AM -0400, Konrad Rzeszutek Wilk wrote:
> > The git commit 5bc6f9888db5739abfa0cae279b4b442e4db8049
> > xen/p2m: Reserve 8MB of _brk space for P2M leafs when populating back.
> > 
> > extended the _brk space to fit 1048576 PFNs. The math is that each
> > P2M leaf can cover PAGE_SIZE/sizeof(unsigned long) PFNs. In 64-bit
> > that means 512 PFNs, on 32-bit that is 1024. If on 64-bit machines
> > we want to cover 4GB of PFNs, that means having enough space
> > to fit 1048576 unsigned longs.
> 
> Scratch that patch. This is better, but even with that I am still
> hitting some weird 32-bit cases.

So I thought about this some more and came up with this patch. It's an
RFC; I'm going to run it through some overnight tests to see how they fare.


commit da858a92dbeb52fb3246e3d0f1dd57989b5b1734
Author: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date:   Fri Jul 27 16:05:47 2012 -0400

    xen/p2m: Reuse existing P2M leafs if they are filled with 1:1 PFNs or INVALID.
    
    If a P2M leaf is completely packed with INVALID_P2M_ENTRY or with
    1:1 PFNs (that is, IDENTITY_FRAME type PFNs), we can swap the P2M leaf
    with either p2m_missing or p2m_identity respectively. The old
    page (which was created via extend_brk or was grafted on from the
    mfn_list) can be re-used for setting new PFNs.
    
    This also means we can remove git commit:
    5bc6f9888db5739abfa0cae279b4b442e4db8049
    xen/p2m: Reserve 8MB of _brk space for P2M leafs when populating back
    which tried to fix this.
    
    Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
index 29244d0..b6b7c10 100644
--- a/arch/x86/xen/p2m.c
+++ b/arch/x86/xen/p2m.c
@@ -194,11 +194,6 @@ RESERVE_BRK(p2m_mid_mfn, PAGE_SIZE * (MAX_DOMAIN_PAGES / (P2M_PER_PAGE * P2M_MID
  * boundary violation will require three middle nodes. */
 RESERVE_BRK(p2m_mid_identity, PAGE_SIZE * 2 * 3);
 
-/* When we populate back during bootup, the amount of pages can vary. The
- * max we have is seen is 395979, but that does not mean it can't be more.
- * Some machines can have 3GB I/O holes so lets reserve for that. */
-RESERVE_BRK(p2m_populated, 786432 * sizeof(unsigned long));
-
 static inline unsigned p2m_top_index(unsigned long pfn)
 {
 	BUG_ON(pfn >= MAX_P2M_PFN);
@@ -575,12 +570,99 @@ static bool __init early_alloc_p2m(unsigned long pfn)
 	}
 	return true;
 }
+
+/*
+ * Skim over the P2M tree looking at pages that are either filled with
+ * INVALID_P2M_ENTRY or with 1:1 PFNs. If found, re-use that page and
+ * replace the P2M leaf with a p2m_missing or p2m_identity.
+ * Stick the old page in the new P2M tree location.
+ */
+bool __init early_can_reuse_p2m_middle(unsigned long set_pfn, unsigned long set_mfn)
+{
+	unsigned topidx;
+	unsigned mididx;
+	unsigned ident_pfns;
+	unsigned inv_pfns;
+	unsigned long *p2m;
+	unsigned long *mid_mfn_p;
+	unsigned idx;
+	unsigned long pfn;
+
+	/* We only look when this entails a P2M middle layer */
+	if (p2m_index(set_pfn))
+		return false;
+
+	for (pfn = 0; pfn <= MAX_DOMAIN_PAGES; pfn += P2M_PER_PAGE) {
+		topidx = p2m_top_index(pfn);
+
+		if (!p2m_top[topidx])
+			continue;
+
+		if (p2m_top[topidx] == p2m_mid_missing)
+			continue;
+
+		mididx = p2m_mid_index(pfn);
+		p2m = p2m_top[topidx][mididx];
+		if (!p2m)
+			continue;
+
+		if ((p2m == p2m_missing) || (p2m == p2m_identity))
+			continue;
+
+		if ((unsigned long)p2m == INVALID_P2M_ENTRY)
+			continue;
+
+		ident_pfns = 0;
+		inv_pfns = 0;
+		for (idx = 0; idx < P2M_PER_PAGE; idx++) {
+			/* IDENTITY_PFNs are 1:1 */
+			if (p2m[idx] == IDENTITY_FRAME(pfn + idx))
+				ident_pfns++;
+			else if (p2m[idx] == INVALID_P2M_ENTRY)
+				inv_pfns++;
+			else
+				break;
+		}
+		if ((ident_pfns == P2M_PER_PAGE) || (inv_pfns == P2M_PER_PAGE))
+			goto found;
+	}
+	return false;
+found:
+	/* Found one, replace old with p2m_identity or p2m_missing */
+	p2m_top[topidx][mididx] = (ident_pfns ? p2m_identity : p2m_missing);
+	/* And the other for save/restore.. */
+	mid_mfn_p = p2m_top_mfn_p[topidx];
+	/* NOTE: Even if it is a p2m_identity it should still point to
+	 * a page filled with INVALID_P2M_ENTRY entries. */
+	mid_mfn_p[mididx] = virt_to_mfn(p2m_missing);
+
+	/* Reset where we want to stick the old page in. */
+	topidx = p2m_top_index(set_pfn);
+	mididx = p2m_mid_index(set_pfn);
+
+	/* This shouldn't happen */
+	if (WARN_ON(p2m_top[topidx] == p2m_mid_missing))
+		early_alloc_p2m(set_pfn);
+
+	if (WARN_ON(p2m_top[topidx][mididx] != p2m_missing))
+		return false;
+
+	p2m_init(p2m);
+	p2m_top[topidx][mididx] = p2m;
+	mid_mfn_p = p2m_top_mfn_p[topidx];
+	mid_mfn_p[mididx] = virt_to_mfn(p2m);
+
+	return true;
+}
 bool __init early_set_phys_to_machine(unsigned long pfn, unsigned long mfn)
 {
 	if (unlikely(!__set_phys_to_machine(pfn, mfn)))  {
 		if (!early_alloc_p2m(pfn))
 			return false;
 
+		if (early_can_reuse_p2m_middle(pfn, mfn))
+			return __set_phys_to_machine(pfn, mfn);
+
 		if (!early_alloc_p2m_middle(pfn, false /* boundary crossover OK!*/))
 			return false;
 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 21:46:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 21:46:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T27t9-000870-6P; Thu, 16 Aug 2012 21:45:59 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1T27t7-00086v-TJ
	for Xen-devel@lists.xensource.com; Thu, 16 Aug 2012 21:45:58 +0000
Received: from [85.158.143.99:23488] by server-3.bemta-4.messagelabs.com id
	FE/4F-09529-51A6D205; Thu, 16 Aug 2012 21:45:57 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-12.tower-216.messagelabs.com!1345153555!23332232!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDcwMTY5NA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29689 invoked from network); 16 Aug 2012 21:45:56 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-12.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 16 Aug 2012 21:45:56 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7GLjrs7025245
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK)
	for <Xen-devel@lists.xensource.com>; Thu, 16 Aug 2012 21:45:54 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7GLjrp4003974
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO)
	for <Xen-devel@lists.xensource.com>; Thu, 16 Aug 2012 21:45:53 GMT
Received: from abhmt111.oracle.com (abhmt111.oracle.com [141.146.116.63])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7GLjqX8026963
	for <Xen-devel@lists.xensource.com>; Thu, 16 Aug 2012 16:45:52 -0500
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 16 Aug 2012 14:45:52 -0700
Date: Thu, 16 Aug 2012 14:45:51 -0700
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Message-ID: <20120816144551.6cd8bbaf@mantra.us.oracle.com>
In-Reply-To: <20120816190935.GA3428@phenom.dumpdata.com>
References: <20120815180131.24aaa5ce@mantra.us.oracle.com>
	<20120816140929.GB17687@phenom.dumpdata.com>
	<20120816120041.15d382ed@mantra.us.oracle.com>
	<20120816190935.GA3428@phenom.dumpdata.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.7.6 (GTK+ 2.18.9; x86_64-redhat-linux-gnu)
Mime-Version: 1.0
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [[RFC PATCH 2/8]: PVH: changes related to initial
 boot and irq rewiring
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 16 Aug 2012 15:09:35 -0400
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:

> On Thu, Aug 16, 2012 at 12:00:41PM -0700, Mukesh Rathor wrote:
> > On Thu, 16 Aug 2012 10:09:29 -0400
> > Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> > 
> > > > +#ifdef CONFIG_X86_32
> > > > +	xen_raw_printk("ERROR: 32bit PV guest can not run in
> > > > HVM container\n");
> > > > +	return;
> > > 
> > > Perhaps BUG() ?
> > 
> > BUG() will prob not work because it's way early in boot, which is
> > why I figured the prev if statement (right before it) has return.
> 
> It will crash with this:
> 
> XEN) domain_crash_sync called from entry.S
> .. some more data.


Do you want me to change the return in the prev if statement too in
that case?

        if (!xen_start_info)
                return;



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 23:07:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 23:07:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T299X-0000Ep-GE; Thu, 16 Aug 2012 23:06:59 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T299V-0000Ek-Ta
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 23:06:58 +0000
Received: from [85.158.139.83:3866] by server-4.bemta-5.messagelabs.com id
	D3/47-12386-11D7D205; Thu, 16 Aug 2012 23:06:57 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-13.tower-182.messagelabs.com!1345158414!28015483!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk0Mzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5479 invoked from network); 16 Aug 2012 23:06:55 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-13.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 23:06:55 -0000
X-IronPort-AV: E=Sophos;i="4.77,781,1336348800"; d="scan'208";a="14049868"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 23:06:53 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 17 Aug 2012 00:06:53 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1T299R-0003WD-FJ;
	Thu, 16 Aug 2012 23:06:53 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1T299R-0007Be-3Q;
	Fri, 17 Aug 2012 00:06:53 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13609-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 17 Aug 2012 00:06:53 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13609: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13609 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13609/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-pv           10 guest-saverestore         fail REGR. vs. 13607

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf     14 guest-localmigrate/x10    fail REGR. vs. 13607
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13607
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13607
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13607
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13607

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  3468a834be8d
baseline version:
 xen                  c887c30a0a35

------------------------------------------------------------
People who touched revisions under test:
  George Dunlap <george.dunlap@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           fail    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   25757:3468a834be8d
tag:         tip
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Aug 16 17:38:05 2012 +0100
    
    EPT/PoD: fix interaction with 1Gb pages
    
    When PoD got enabled to support 1Gb pages, ept_get_entry() didn't get
    updated to match - the assertion in there triggered, indicating that
    the call to p2m_pod_demand_populate() needed adjustment.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Committed-by: Tim Deegan <tim@xen.org>
    
    
changeset:   25756:8918737c7e80
user:        Tim Deegan <tim@xen.org>
date:        Thu Aug 16 14:31:09 2012 +0100
    
    x86/mm: update max_mapped_pfn on MMIO mappings too.
    
    max_mapped_pfn should reflect the highest mapping we've ever seen of
    any type, or the tests in the lookup functions will be wrong.  As it
    happens, the highest mapping has always been a RAM one, but this is no
    longer the case when we allow 64-bit BARs.
    
    Reported-by: Xudong Hao <xudong.hao@intel.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    Committed-by: Tim Deegan <tim@xen.org>
    
    
changeset:   25755:c887c30a0a35
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Aug 16 10:16:19 2012 +0200
    
    x86/PoD: clean up types
    
    GMFN values must undoubtedly be "unsigned long". "count" and
    "entry_count", since they are signed types, should also be "long" as
    otherwise they can't fit all values that can fit into "d->tot_pages"
    (which currently is "uint32_t").
    
    Beyond that, the patch doesn't convert everything to "long" as in many
    places it is clear that "int" suffices. In places where "long" is being
    used partially already, the change is however being done.
    
    Furthermore, page order values have no use of being "long".
    
    Finally, in the course of updating a few printk messages anyway, some
    also get slightly shortened (to focus on the relevant information).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: George Dunlap <george.dunlap@eu.citrix.com>
    
    
========================================
commit effd5676225761abdab90becac519716515c3be4
Author: Ian Jackson <ian.jackson@eu.citrix.com>
Date:   Tue Aug 14 15:57:49 2012 +0100

    Revert "qemu-xen-traditional: use O_DIRECT to open disk images for IDE"
    
    This reverts commit 1307e42a4b3c1102d75401bc0cffb4eb6c9b7a38.
    
    In fact after a lengthy discussion, we came up with the conclusion
    that WRITEBACK is OK for IDE.
    See: http://marc.info/?l=xen-devel&m=133311527009773
    
    Therefore revert this which was committed in error.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 16 23:07:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Aug 2012 23:07:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T299X-0000Ep-GE; Thu, 16 Aug 2012 23:06:59 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T299V-0000Ek-Ta
	for xen-devel@lists.xensource.com; Thu, 16 Aug 2012 23:06:58 +0000
Received: from [85.158.139.83:3866] by server-4.bemta-5.messagelabs.com id
	D3/47-12386-11D7D205; Thu, 16 Aug 2012 23:06:57 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-13.tower-182.messagelabs.com!1345158414!28015483!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk0Mzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5479 invoked from network); 16 Aug 2012 23:06:55 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-13.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Aug 2012 23:06:55 -0000
X-IronPort-AV: E=Sophos;i="4.77,781,1336348800"; d="scan'208";a="14049868"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	16 Aug 2012 23:06:53 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 17 Aug 2012 00:06:53 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1T299R-0003WD-FJ;
	Thu, 16 Aug 2012 23:06:53 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1T299R-0007Be-3Q;
	Fri, 17 Aug 2012 00:06:53 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13609-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 17 Aug 2012 00:06:53 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13609: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13609 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13609/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-pv           10 guest-saverestore         fail REGR. vs. 13607

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf     14 guest-localmigrate/x10    fail REGR. vs. 13607
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13607
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13607
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13607
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13607

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  3468a834be8d
baseline version:
 xen                  c887c30a0a35

------------------------------------------------------------
People who touched revisions under test:
  George Dunlap <george.dunlap@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           fail    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   25757:3468a834be8d
tag:         tip
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Aug 16 17:38:05 2012 +0100
    
    EPT/PoD: fix interaction with 1Gb pages
    
    When PoD got enabled to support 1Gb pages, ept_get_entry() didn't get
    updated to match - the assertion in there triggered, indicating that
    the call to p2m_pod_demand_populate() needed adjustment.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Committed-by: Tim Deegan <tim@xen.org>
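
The shape of the problem can be sketched in a few lines of C; the constant and function names mirror Xen's p2m code, but the bodies here are illustrative only, not the actual hypervisor implementation:

```c
#include <assert.h>

/* Illustrative sketch only -- not the real Xen code. Page-order
 * constants as used by the p2m code: */
#define PAGE_ORDER_4K   0
#define PAGE_ORDER_2M   9
#define PAGE_ORDER_1G  18

/* p2m_pod_demand_populate() only knows how to populate at orders it
 * recognises; handing it an unexpected order trips its assertion. */
static int demand_populate(unsigned long gfn, unsigned int order)
{
    (void)gfn;
    assert(order == PAGE_ORDER_4K || order == PAGE_ORDER_2M ||
           order == PAGE_ORDER_1G);
    return 0;  /* populated */
}

/* Before the fix, the ept_get_entry() path passed an order that did
 * not account for 1Gb superpages; after it, the order of the level
 * where the PoD entry was actually found is passed through. */
static int ept_lookup(unsigned long gfn, unsigned int found_order)
{
    return demand_populate(gfn, found_order);
}
```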
    
    
changeset:   25756:8918737c7e80
user:        Tim Deegan <tim@xen.org>
date:        Thu Aug 16 14:31:09 2012 +0100
    
    x86/mm: update max_mapped_pfn on MMIO mappings too.
    
    max_mapped_pfn should reflect the highest mapping we've ever seen of
    any type, or the tests in the lookup functions will be wrong.  As it
    happens, the highest mapping has always been a RAM one, but this is no
    longer the case when we allow 64-bit BARs.
    
    Reported-by: Xudong Hao <xudong.hao@intel.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    Committed-by: Tim Deegan <tim@xen.org>
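
A minimal sketch of the invariant being fixed, with illustrative names rather than the real p2m structures:

```c
#include <assert.h>

/* Illustrative sketch, not the actual Xen p2m code: the high-water
 * mark must be raised for every mapping type, because the lookup
 * functions early-exit for gfns above it. */
enum map_type { MAP_RAM, MAP_MMIO };

static unsigned long max_mapped_pfn;

static void set_entry(unsigned long gfn, enum map_type type)
{
    (void)type;  /* before the fix, MMIO mappings skipped this update */
    if (gfn > max_mapped_pfn)
        max_mapped_pfn = gfn;
}

static int lookup_reachable(unsigned long gfn)
{
    /* Lookups above the high-water mark short-circuit to "not mapped". */
    return gfn <= max_mapped_pfn;
}
```

With 64-bit BARs, an MMIO mapping can sit above all RAM, so skipping the update for MMIO would make such lookups wrongly fail.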
    
    
changeset:   25755:c887c30a0a35
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Aug 16 10:16:19 2012 +0200
    
    x86/PoD: clean up types
    
    GMFN values must undoubtedly be "unsigned long". "count" and
    "entry_count", since they are signed types, should also be "long" as
    otherwise they can't fit all values that can fit into "d->tot_pages"
    (which currently is "uint32_t").
    
    Beyond that, the patch doesn't convert everything to "long" as in many
    places it is clear that "int" suffices. In places where "long" is being
    used partially already, the change is however being done.
    
    Furthermore, page order values have no use of being "long".
    
    Finally, in the course of updating a few printk messages anyway, some
    also get slightly shortened (to focus on the relevant information).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: George Dunlap <george.dunlap@eu.citrix.com>
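
The signedness argument can be checked directly (assuming an LP64 platform, where long is 64-bit):

```c
#include <assert.h>
#include <limits.h>
#include <stdint.h>

/* Sketch of the type argument above: d->tot_pages is uint32_t, and a
 * signed 32-bit int cannot represent all of its values, while a
 * signed 64-bit long (LP64) can. */
uint32_t tot_pages = 3000000000u;       /* a valid tot_pages > INT_MAX */
long     count_ok  = (long)3000000000u; /* value preserved on LP64 */
```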
    
    
========================================
commit effd5676225761abdab90becac519716515c3be4
Author: Ian Jackson <ian.jackson@eu.citrix.com>
Date:   Tue Aug 14 15:57:49 2012 +0100

    Revert "qemu-xen-traditional: use O_DIRECT to open disk images for IDE"
    
    This reverts commit 1307e42a4b3c1102d75401bc0cffb4eb6c9b7a38.
    
    In fact after a lengthy discussion, we came up with the conclusion
    that WRITEBACK is OK for IDE.
    See: http://marc.info/?l=xen-devel&m=133311527009773
    
    Therefore revert this which was committed in error.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 01:13:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 01:13:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2B7J-0004wz-3Q; Fri, 17 Aug 2012 01:12:49 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ronghui.duan@intel.com>) id 1T2B7H-0004wr-VK
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 01:12:48 +0000
Received: from [85.158.139.83:13211] by server-12.bemta-5.messagelabs.com id
	CB/08-22359-F8A9D205; Fri, 17 Aug 2012 01:12:47 +0000
X-Env-Sender: ronghui.duan@intel.com
X-Msg-Ref: server-14.tower-182.messagelabs.com!1345165965!24308860!1
X-Originating-IP: [143.182.124.37]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMzcgPT4gMjY5MjU1\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23863 invoked from network); 17 Aug 2012 01:12:46 -0000
Received: from mga14.intel.com (HELO mga14.intel.com) (143.182.124.37)
	by server-14.tower-182.messagelabs.com with SMTP;
	17 Aug 2012 01:12:46 -0000
Received: from azsmga002.ch.intel.com ([10.2.17.35])
	by azsmga102.ch.intel.com with ESMTP; 16 Aug 2012 18:12:45 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.77,782,1336374000"; d="scan'208";a="135199806"
Received: from fmsmsx108.amr.corp.intel.com ([10.19.9.228])
	by AZSMGA002.ch.intel.com with ESMTP; 16 Aug 2012 18:12:44 -0700
Received: from shsmsx101.ccr.corp.intel.com (10.239.4.153) by
	FMSMSX108.amr.corp.intel.com (10.19.9.228) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Thu, 16 Aug 2012 18:12:44 -0700
Received: from shsmsx102.ccr.corp.intel.com ([169.254.2.92]) by
	SHSMSX101.ccr.corp.intel.com ([169.254.1.89]) with mapi id
	14.01.0355.002; Fri, 17 Aug 2012 09:12:43 +0800
From: "Duan, Ronghui" <ronghui.duan@intel.com>
To: Jan Beulich <JBeulich@suse.com>
Thread-Topic: [Xen-devel] [RFC v1 0/5] VBD: enlarge max segment per request
	in blkfront
Thread-Index: Ac17mRnxOUx7HczFSN2/QYn7Hm+Blf//iEIA//6Rg0A=
Date: Fri, 17 Aug 2012 01:12:42 +0000
Message-ID: <A21691DE07B84740B5F0B81466D5148A23BE434F@SHSMSX102.ccr.corp.intel.com>
References: <A21691DE07B84740B5F0B81466D5148A23BCF1DF@SHSMSX102.ccr.corp.intel.com>
	<502CF22E0200007800095763@nat28.tlf.novell.com>
In-Reply-To: <502CF22E0200007800095763@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"Ian.Jackson@eu.citrix.com" <Ian.Jackson@eu.citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"Stefano.Stabellini@eu.citrix.com" <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [RFC v1 0/5] VBD: enlarge max segment per request
 in blkfront
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> >>> On 16.08.12 at 12:22, "Duan, Ronghui" <ronghui.duan@intel.com> wrote:
> > The max segments per request in the VBD queue is 11, while for Linux and
> > other VMMs the parameter defaults to 128. This may be caused by the limited
> > size of the ring between front and back ends. So I wondered whether we can
> > put the segment data into another ring and use it dynamically for each
> > request's needs. Here is a prototype which hasn't had much testing, but it
> > works on a Linux 64-bit 3.4.6 kernel. I can see CPU% reduced to 1/3 of the
> > original in a sequential test, but it brings some overhead which makes
> > random IO's CPU utilization increase a little.
> 
> In what way to improve blkback is intended to be subject of a discussion on the
> summit - are you by any chance going to be there? Fact is that there are a
> number of other extensions to the interface, and since you don't mention those
> I'm assuming you were not aware of them.
> 
It's a pity that I can't be there, and indeed I had not seen the other extensions
before. I started this after seeing the larger CPU overhead in sequential IO. I first
tried multiple rings, with a lot of help from Stefano, and then had the idea of adding
a separate ring for segments. Stefano suggested sending it here to ask for advice.
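
The arithmetic behind these limits can be sketched as follows; the constants are the classic blkif protocol values (11 segments per request, a single-page ring holding 32 requests), and the 1024-segment figure is the separate-ring proposal from this thread:

```c
#include <assert.h>

/* Back-of-envelope arithmetic for the classic blkif limits versus the
 * proposed separate segment ring. */
enum {
    SEG_SIZE      = 4096,  /* one grant-mapped page per segment */
    SEGS_PER_REQ  = 11,    /* BLKIF_MAX_SEGMENTS_PER_REQUEST */
    RING_REQS     = 32,    /* requests fitting one 4KB shared ring page */
    PROPOSED_SEGS = 1024,  /* segments in the proposed separate ring */
};

enum {
    IO_PER_REQ        = SEGS_PER_REQ * SEG_SIZE,   /* 44 KB per request */
    CLASSIC_INFLIGHT  = RING_REQS * IO_PER_REQ,    /* ~1.4 MB in flight */
    PROPOSED_INFLIGHT = PROPOSED_SEGS * SEG_SIZE,  /* 4 MB in flight */
};
```

The small per-request payload is why sequential IO pays a high per-request CPU cost with the classic limits.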

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 01:27:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 01:27:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2BKn-00056m-Ge; Fri, 17 Aug 2012 01:26:45 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ronghui.duan@intel.com>) id 1T2BKm-00056h-6L
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 01:26:44 +0000
Received: from [85.158.139.83:51991] by server-10.bemta-5.messagelabs.com id
	79/28-13125-3DD9D205; Fri, 17 Aug 2012 01:26:43 +0000
X-Env-Sender: ronghui.duan@intel.com
X-Msg-Ref: server-2.tower-182.messagelabs.com!1345166801!28551296!1
X-Originating-IP: [143.182.124.21]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMjEgPT4gMzE1MTk5\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7497 invoked from network); 17 Aug 2012 01:26:42 -0000
Received: from mga03.intel.com (HELO mga03.intel.com) (143.182.124.21)
	by server-2.tower-182.messagelabs.com with SMTP;
	17 Aug 2012 01:26:42 -0000
Received: from azsmga002.ch.intel.com ([10.2.17.35])
	by azsmga101.ch.intel.com with ESMTP; 16 Aug 2012 18:26:41 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.77,782,1336374000"; d="scan'208";a="135205667"
Received: from fmsmsx104.amr.corp.intel.com ([10.19.9.35])
	by AZSMGA002.ch.intel.com with ESMTP; 16 Aug 2012 18:26:40 -0700
Received: from fmsmsx151.amr.corp.intel.com (10.19.17.220) by
	FMSMSX104.amr.corp.intel.com (10.19.9.35) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Thu, 16 Aug 2012 18:26:40 -0700
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	FMSMSX151.amr.corp.intel.com (10.19.17.220) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Thu, 16 Aug 2012 18:26:40 -0700
Received: from shsmsx102.ccr.corp.intel.com ([169.254.2.92]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.175]) with mapi id
	14.01.0355.002; Fri, 17 Aug 2012 09:26:38 +0800
From: "Duan, Ronghui" <ronghui.duan@intel.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Thread-Topic: [Xen-devel] [RFC v1 0/5] VBD: enlarge max segment per request
	in blkfront
Thread-Index: Ac17mRnxOUx7HczFSN2/QYn7Hm+Blf//r4sAgAAF04D//ryzwA==
Date: Fri, 17 Aug 2012 01:26:37 +0000
Message-ID: <A21691DE07B84740B5F0B81466D5148A23BE5398@SHSMSX102.ccr.corp.intel.com>
References: <A21691DE07B84740B5F0B81466D5148A23BCF1DF@SHSMSX102.ccr.corp.intel.com>
	<20120816133457.GA5898@phenom.dumpdata.com>
	<20120816135549.GA17613@phenom.dumpdata.com>
In-Reply-To: <20120816135549.GA17613@phenom.dumpdata.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "Stefano.Stabellini@eu.citrix.com" <Stefano.Stabellini@eu.citrix.com>,
	"Ian.Jackson@eu.citrix.com" <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [RFC v1 0/5] VBD: enlarge max segment per request
 in blkfront
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> On Thu, Aug 16, 2012 at 09:34:57AM -0400, Konrad Rzeszutek Wilk wrote:
> > On Thu, Aug 16, 2012 at 10:22:56AM +0000, Duan, Ronghui wrote:
> > > Hi, list.
> > > The max segments per request in the VBD queue is 11, while for Linux and
> > > other VMMs the parameter defaults to 128.
> >
> > Like the FreeBSD one?
> >
Yeap.
> > > This may be caused by the limited size of the ring between front and back
> > > ends. So I wondered whether we can put the segment data into another ring
> > > and use it dynamically for each request's needs. Here is a prototype which
> > > hasn't had much testing, but it works on a Linux 64-bit 3.4.6 kernel. I can
> > > see CPU% reduced to 1/3 of the original in a sequential test, but it brings
> > > some overhead which makes random IO's CPU utilization increase a little.
> > >
> >
> > Did you think also about expanding the ring size to something bigger?
> >
A separate ring will hold 1024 segments; I think that can feed most hardware's bandwidth.
> > > Here is a short version of the data, using only 1K random read and 64K
> > > sequential read in direct mode, testing a physical SSD disk as blkback in
> > > the backend. CPU% is from xentop.
> >
> > > Read 1K random   IOPS     Dom0 CPU%   DomU CPU%
> > >   W              52005.9  86.6        71
> > >   W/O            52123.1  85.8        66.9
> > >
> > > Read 64K seq     BW MB/s  Dom0 CPU%   DomU CPU%
> > >   W              250      27.1        10.6
> > >   W/O            250      62.6        31.1
> > >
> > >
> > > The patch will be simple if we only use the new methods. But we need to
> > > consider that a user may use a new kernel as backend while an older one as
> > > frontend, and also the live migration case. So the change becomes huge...
> >
> > OK? I think you are implementing the extension documented in
> >
> > changeset:   24875:a59c1dcfe968
> > user:        Justin T. Gibbs <justing@spectralogic.com>
> > date:        Thu Feb 23 10:03:07 2012 +0000
> > summary:     blkif.h: Define and document the request
> number/size/segments extension
> >
> > changeset:   24874:f9789db96c39
> > user:        Justin T. Gibbs <justing@spectralogic.com>
> > date:        Thu Feb 23 10:02:30 2012 +0000
> > summary:     blkif.h: Document the Red Hat and Citrix blkif multi-page ring
> extensions
> >
> > so that would be the max-requests-segments one?
Oh, I miss this info. But sure I increase the max-request-segments.
> >
> >
> > > [RFC v1 1/5]
> > > 	In order to add new segment ring, refactoring the original code, split some
> methods related with ring operation.
> > > [RFC v1 2/5]
> > > 	Add the segment ring support in blkfront. Most of code is about
> suspend/recover.
> > > [RFC v1 3/5]
> > > 	As the same, need refractor the original code in blkback.
> > > [RFC v1 4/5]
> > > 	In order to support different type of ring type in blkback, make the
> pending_req list per disk.
> >
> > Not sure why you structured the patches like this way, but it might
> > make sense to order them in 1, 3, 4, 2, 5 order. The 'pending_req'/per
> > disk is an overall improvement that fixes a lot of concurrent issues.
> > I tried to implement this and ran in an issue with grants still being
> > active? Did you have issues with that or it worked just fine for you?
> > > [RFC v1 5/5]
> > > 	Add the segment ring support in blkback.
> >
> > So .. where are the patches? Did I miss them?
> 
> Ah, they just arrived.
> 
> I took a brief look at them, and I think they are the right step. The things that are
> missing is that that you are missing the kfree  in 4/5 when the disk is gone away.
> Also there are some code that is commented out and its not clear to me why that
> is.
I forget clean this. I want to listen for advise for the change for the protocol. I can
Send out a 'patch' after.
> Lastly, this protocol should be negotiated using the 'max-request-.. ' or whichever
> is the proper type, not the blkfront-ring-type. It also would be good to CC Justin
> as he might have some guidance in this and also could test the frontend on his
> backend (or vice-versa). Not sure what is involved in setting up a FreeBSD
> backend that spectralogic is using.. Thought this might also involed expanding the
> ring to be a multi-page one I think?
> 
I begin with this from multi-page ring, I also have a patch about multi-page ring for
This patch. But due to no much positive influence on performance then I drop it.

> And I wonder if you need to have such a huge list of ops? Can some of them be
> trimmed down?
Yes, it can be less ops adding a common structure like backend.
> They v1 and v2 look quite similar. Oh, and instead of v1 and v2 I would just call
> them 'large_segment' and 'default_segment'. Or 'lgr_segment' and
> 'def_segment' perhaps?
> 
> Maybe 'huge_segment' and 'generic_segment' that sounds better.
> 
Good for me, I will reconsider the suitable parameter.
> Lastly, its not clear to me why you are removing the padding on some of the older
> blkif structures?
It will cause some miss match size between 64bit Domu on 32bits Dom0, So remove it.
I will double check there alignment after. 
> Thanks for posting this!
> > > -ronghui
> > >
> > >
> > >
> > > _______________________________________________
> > > Xen-devel mailing list
> > > Xen-devel@lists.xen.org
> > > http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 01:27:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 01:27:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2BKn-00056m-Ge; Fri, 17 Aug 2012 01:26:45 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ronghui.duan@intel.com>) id 1T2BKm-00056h-6L
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 01:26:44 +0000
Received: from [85.158.139.83:51991] by server-10.bemta-5.messagelabs.com id
	79/28-13125-3DD9D205; Fri, 17 Aug 2012 01:26:43 +0000
X-Env-Sender: ronghui.duan@intel.com
X-Msg-Ref: server-2.tower-182.messagelabs.com!1345166801!28551296!1
X-Originating-IP: [143.182.124.21]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMjEgPT4gMzE1MTk5\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7497 invoked from network); 17 Aug 2012 01:26:42 -0000
Received: from mga03.intel.com (HELO mga03.intel.com) (143.182.124.21)
	by server-2.tower-182.messagelabs.com with SMTP;
	17 Aug 2012 01:26:42 -0000
Received: from azsmga002.ch.intel.com ([10.2.17.35])
	by azsmga101.ch.intel.com with ESMTP; 16 Aug 2012 18:26:41 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.77,782,1336374000"; d="scan'208";a="135205667"
Received: from fmsmsx104.amr.corp.intel.com ([10.19.9.35])
	by AZSMGA002.ch.intel.com with ESMTP; 16 Aug 2012 18:26:40 -0700
Received: from fmsmsx151.amr.corp.intel.com (10.19.17.220) by
	FMSMSX104.amr.corp.intel.com (10.19.9.35) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Thu, 16 Aug 2012 18:26:40 -0700
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	FMSMSX151.amr.corp.intel.com (10.19.17.220) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Thu, 16 Aug 2012 18:26:40 -0700
Received: from shsmsx102.ccr.corp.intel.com ([169.254.2.92]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.175]) with mapi id
	14.01.0355.002; Fri, 17 Aug 2012 09:26:38 +0800
From: "Duan, Ronghui" <ronghui.duan@intel.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Thread-Topic: [Xen-devel] [RFC v1 0/5] VBD: enlarge max segment per request
	in blkfront
Thread-Index: Ac17mRnxOUx7HczFSN2/QYn7Hm+Blf//r4sAgAAF04D//ryzwA==
Date: Fri, 17 Aug 2012 01:26:37 +0000
Message-ID: <A21691DE07B84740B5F0B81466D5148A23BE5398@SHSMSX102.ccr.corp.intel.com>
References: <A21691DE07B84740B5F0B81466D5148A23BCF1DF@SHSMSX102.ccr.corp.intel.com>
	<20120816133457.GA5898@phenom.dumpdata.com>
	<20120816135549.GA17613@phenom.dumpdata.com>
In-Reply-To: <20120816135549.GA17613@phenom.dumpdata.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "Stefano.Stabellini@eu.citrix.com" <Stefano.Stabellini@eu.citrix.com>,
	"Ian.Jackson@eu.citrix.com" <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [RFC v1 0/5] VBD: enlarge max segment per request
 in blkfront
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> On Thu, Aug 16, 2012 at 09:34:57AM -0400, Konrad Rzeszutek Wilk wrote:
> > On Thu, Aug 16, 2012 at 10:22:56AM +0000, Duan, Ronghui wrote:
> > > Hi, list.
> > > The maximum number of segments per request in the VBD queue is 11, while for
> > > Linux and other VMMs the parameter defaults to 128.
> >
> > Like the FreeBSD one?
> >
Yeap.
> > > This may be caused by the limited size of the ring between frontend and
> > > backend, so I wondered whether we could put segment data into a separate ring
> > > and use it dynamically for each request's needs. Here is a prototype that has
> > > not been tested much, but it works on a 64-bit Linux 3.4.6 kernel. In a
> > > sequential test the CPU% drops to 1/3 of the original, but it brings some
> > > overhead that increases random I/O's CPU utilization a little.
> > >
> >
> > Did you think also about expanding the ring size to something bigger?
> >
A separate ring will hold 1024 segments; I think that is enough to feed most hardware's bandwidth.
> > > Here is a short summary of the data, using only 1K random reads and 64K
> > > sequential reads in direct mode, testing a physical SSD disk as the blkback
> > > backend. CPU% is taken from xentop.
> >
> > > Read 1K random    IOPS      Dom0 CPU%   DomU CPU%
> > > W                 52005.9   86.6        71
> > > W/O               52123.1   85.8        66.9
> > >
> > > Read 64K seq      BW MB/s   Dom0 CPU%   DomU CPU%
> > > W                 250       27.1        10.6
> > > W/O               250       62.6        31.1
> > >
> > >
> > > The patch would be simple if we only used the new methods, but we need to
> > > consider that a user may run a new kernel as backend with an older one as
> > > frontend, and we also need to handle the live migration case. So the change
> > > becomes large...
> >
> > OK, I think you are implementing the extension documented in
> >
> > changeset:   24875:a59c1dcfe968
> > user:        Justin T. Gibbs <justing@spectralogic.com>
> > date:        Thu Feb 23 10:03:07 2012 +0000
> > summary:     blkif.h: Define and document the request
> number/size/segments extension
> >
> > changeset:   24874:f9789db96c39
> > user:        Justin T. Gibbs <justing@spectralogic.com>
> > date:        Thu Feb 23 10:02:30 2012 +0000
> > summary:     blkif.h: Document the Red Hat and Citrix blkif multi-page ring
> extensions
> >
> > so that would be the max-requests-segments one?
Oh, I missed this info. But sure, I will increase max-request-segments.
> >
> >
> > > [RFC v1 1/5]
> > > 	In order to add the new segment ring, refactor the original code and split
> > > out some methods related to ring operations.
> > > [RFC v1 2/5]
> > > 	Add segment ring support in blkfront. Most of the code is about
> > > suspend/recover.
> > > [RFC v1 3/5]
> > > 	Likewise, refactor the original code in blkback.
> > > [RFC v1 4/5]
> > > 	In order to support different ring types in blkback, make the
> > > pending_req list per-disk.
> >
> > Not sure why you structured the patches this way, but it might
> > make sense to order them in 1, 3, 4, 2, 5 order. The per-disk
> > 'pending_req' list is an overall improvement that fixes a lot of
> > concurrency issues. I tried to implement this and ran into an issue with
> > grants still being active. Did you have issues with that, or did it work
> > just fine for you?
> > > [RFC v1 5/5]
> > > 	Add the segment ring support in blkback.
> >
> > So .. where are the patches? Did I miss them?
> 
> Ah, they just arrived.
> 
> I took a brief look at them, and I think they are the right step. The thing that is
> missing is the kfree in 4/5 when the disk has gone away.
> Also there is some code that is commented out, and it's not clear to me why
> that is.
I forgot to clean this up. I wanted to get advice on the protocol change first; I can
send out a cleaned-up patch afterwards.
> Lastly, this protocol should be negotiated using the 'max-request-.. ' or whichever
> is the proper key, not blkfront-ring-type. It would also be good to CC Justin,
> as he might have some guidance on this and could also test the frontend on his
> backend (or vice versa). Not sure what is involved in setting up the FreeBSD
> backend that Spectra Logic is using.. Though this might also involve expanding the
> ring to be a multi-page one, I think?
> 
I started from the multi-page ring; I also have a multi-page ring patch for this
series. But since it had little positive influence on performance, I dropped it.

> And I wonder if you need to have such a huge list of ops? Can some of them be
> trimmed down?
Yes, the number of ops can be reduced by adding a common structure, as in the backend.
> The v1 and v2 look quite similar. Oh, and instead of v1 and v2 I would just call
> them 'large_segment' and 'default_segment'. Or 'lgr_segment' and
> 'def_segment' perhaps?
> 
> Maybe 'huge_segment' and 'generic_segment' that sounds better.
> 
Fine by me; I will reconsider and pick a suitable name.
> Lastly, it's not clear to me why you are removing the padding on some of the older
> blkif structures?
It caused a size mismatch between a 64-bit DomU and a 32-bit Dom0, so I removed it.
I will double-check the alignment afterwards.
> Thanks for posting this!
> > > -ronghui
> > >
> > >
> > >
> > > _______________________________________________
> > > Xen-devel mailing list
> > > Xen-devel@lists.xen.org
> > > http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 02:36:47 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 02:36:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2CQ5-0005mR-1o; Fri, 17 Aug 2012 02:36:17 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T2CQ3-0005mM-Ou
	for xen-devel@lists.xensource.com; Fri, 17 Aug 2012 02:36:16 +0000
Received: from [85.158.139.83:49106] by server-2.bemta-5.messagelabs.com id
	D0/38-10142-E1EAD205; Fri, 17 Aug 2012 02:36:14 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-10.tower-182.messagelabs.com!1345170973!28826433!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk0Mzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14028 invoked from network); 17 Aug 2012 02:36:13 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-10.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 02:36:13 -0000
X-IronPort-AV: E=Sophos;i="4.77,782,1336348800"; d="scan'208";a="14050979"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 02:36:12 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 17 Aug 2012 03:36:12 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1T2CQ0-0004et-B9;
	Fri, 17 Aug 2012 02:36:12 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1T2CQ0-0002xP-6Z;
	Fri, 17 Aug 2012 03:36:12 +0100
To: xen-devel@lists.xensource.com
Message-ID: <osstest-13610-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 17 Aug 2012 03:36:12 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13610: regressions - trouble:
	blocked/broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13610 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13610/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386                    3 host-build-prep           fail REGR. vs. 13607

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf     14 guest-localmigrate/x10    fail REGR. vs. 13607
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13607
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13607
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13607

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-i386-i386-win            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-xl-win         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  3468a834be8d
baseline version:
 xen                  c887c30a0a35

------------------------------------------------------------
People who touched revisions under test:
  George Dunlap <george.dunlap@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   broken  
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           blocked 
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   25757:3468a834be8d
tag:         tip
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Aug 16 17:38:05 2012 +0100
    
    EPT/PoD: fix interaction with 1Gb pages
    
    When PoD got enabled to support 1Gb pages, ept_get_entry() didn't get
    updated to match - the assertion in there triggered, indicating that
    the call to p2m_pod_demand_populate() needed adjustment.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Committed-by: Tim Deegan <tim@xen.org>
    
    
changeset:   25756:8918737c7e80
user:        Tim Deegan <tim@xen.org>
date:        Thu Aug 16 14:31:09 2012 +0100
    
    x86/mm: update max_mapped_pfn on MMIO mappings too.
    
    max_mapped_pfn should reflect the highest mapping we've ever seen of
    any type, or the tests in the lookup functions will be wrong.  As it
    happens, the highest mapping has always been a RAM one, but this is no
    longer the case when we allow 64-bit BARs.
    
    Reported-by: Xudong Hao <xudong.hao@intel.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    Committed-by: Tim Deegan <tim@xen.org>
    
    
changeset:   25755:c887c30a0a35
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Aug 16 10:16:19 2012 +0200
    
    x86/PoD: clean up types
    
    GMFN values must undoubtedly be "unsigned long". "count" and
    "entry_count", since they are signed types, should also be "long" as
    otherwise they can't fit all values that can fit into "d->tot_pages"
    (which currently is "uint32_t").
    
    Beyond that, the patch doesn't convert everything to "long" as in many
    places it is clear that "int" suffices. In places where "long" is being
    used partially already, the change is however being done.
    
    Furthermore, page order values have no use of being "long".
    
    Finally, in the course of updating a few printk messages anyway, some
    also get slightly shortened (to focus on the relevant information).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: George Dunlap <george.dunlap@eu.citrix.com>
    
    
========================================
commit effd5676225761abdab90becac519716515c3be4
Author: Ian Jackson <ian.jackson@eu.citrix.com>
Date:   Tue Aug 14 15:57:49 2012 +0100

    Revert "qemu-xen-traditional: use O_DIRECT to open disk images for IDE"
    
    This reverts commit 1307e42a4b3c1102d75401bc0cffb4eb6c9b7a38.
    
    In fact after a lengthy discussion, we came up with the conclusion
    that WRITEBACK is OK for IDE.
    See: http://marc.info/?l=xen-devel&m=133311527009773
    
    Therefore revert this which was committed in error.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   25757:3468a834be8d
tag:         tip
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Aug 16 17:38:05 2012 +0100
    
    EPT/PoD: fix interaction with 1Gb pages
    
    When PoD got enabled to support 1Gb pages, ept_get_entry() didn't get
    updated to match - the assertion in there triggered, indicating that
    the call to p2m_pod_demand_populate() needed adjustment.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Committed-by: Tim Deegan <tim@xen.org>
    
    
changeset:   25756:8918737c7e80
user:        Tim Deegan <tim@xen.org>
date:        Thu Aug 16 14:31:09 2012 +0100
    
    x86/mm: update max_mapped_pfn on MMIO mappings too.
    
    max_mapped_pfn should reflect the highest mapping we've ever seen of
    any type, or the tests in the lookup functions will be wrong.  As it
    happens, the highest mapping has always been a RAM one, but this is no
    longer the case when we allow 64-bit BARs.
    
    Reported-by: Xudong Hao <xudong.hao@intel.com>
    Signed-off-by: Tim Deegan <tim@xen.org>
    Committed-by: Tim Deegan <tim@xen.org>
    
    
changeset:   25755:c887c30a0a35
user:        Jan Beulich <jbeulich@suse.com>
date:        Thu Aug 16 10:16:19 2012 +0200
    
    x86/PoD: clean up types
    
    GMFN values must undoubtedly be "unsigned long". "count" and
    "entry_count", since they are signed types, should also be "long" as
    otherwise they can't fit all values that can fit into "d->tot_pages"
    (which currently is "uint32_t").
    
    Beyond that, the patch doesn't convert everything to "long", as in many
    places it is clear that "int" suffices. However, in places where "long" is
    already partially in use, the change is made.
    
    Furthermore, page order values have no use of being "long".
    
    Finally, in the course of updating a few printk messages anyway, some
    also get slightly shortened (to focus on the relevant information).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: George Dunlap <george.dunlap@eu.citrix.com>
    
    
========================================
commit effd5676225761abdab90becac519716515c3be4
Author: Ian Jackson <ian.jackson@eu.citrix.com>
Date:   Tue Aug 14 15:57:49 2012 +0100

    Revert "qemu-xen-traditional: use O_DIRECT to open disk images for IDE"
    
    This reverts commit 1307e42a4b3c1102d75401bc0cffb4eb6c9b7a38.
    
    In fact, after a lengthy discussion, we came to the conclusion that
    WRITEBACK is OK for IDE.
    See: http://marc.info/?l=xen-devel&m=133311527009773
    
    Therefore revert this which was committed in error.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 03:30:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 03:30:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2DGA-000646-Ef; Fri, 17 Aug 2012 03:30:06 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yongjie.ren@intel.com>) id 1T2DG9-000641-2Y
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 03:30:05 +0000
Received: from [85.158.138.51:56021] by server-1.bemta-3.messagelabs.com id
	75/B1-09327-CBABD205; Fri, 17 Aug 2012 03:30:04 +0000
X-Env-Sender: yongjie.ren@intel.com
X-Msg-Ref: server-12.tower-174.messagelabs.com!1345174202!20787084!1
X-Originating-IP: [192.55.52.88]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjg4ID0+IDMxMDQzMA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23816 invoked from network); 17 Aug 2012 03:30:03 -0000
Received: from mga01.intel.com (HELO mga01.intel.com) (192.55.52.88)
	by server-12.tower-174.messagelabs.com with SMTP;
	17 Aug 2012 03:30:03 -0000
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
	by fmsmga101.fm.intel.com with ESMTP; 16 Aug 2012 20:30:02 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.77,782,1336374000"; d="scan'208";a="209641824"
Received: from fmsmsx106.amr.corp.intel.com ([10.19.9.37])
	by fmsmga002.fm.intel.com with ESMTP; 16 Aug 2012 20:30:02 -0700
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	FMSMSX106.amr.corp.intel.com (10.19.9.37) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Thu, 16 Aug 2012 20:30:01 -0700
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.89]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.175]) with mapi id
	14.01.0355.002; Fri, 17 Aug 2012 11:30:00 +0800
From: "Ren, Yongjie" <yongjie.ren@intel.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Thread-Topic: Fix typos in "vmcs.c" file
Thread-Index: Ac18KJPCpVyp5fZiRHeKSKwVSXlTQw==
Date: Fri, 17 Aug 2012 03:30:00 +0000
Message-ID: <1B4B44D9196EFF41AE41FDA404FC0A10155B15@SHSMSX101.ccr.corp.intel.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach: yes
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
Content-Type: multipart/mixed;
	boundary="_002_1B4B44D9196EFF41AE41FDA404FC0A10155B15SHSMSX101ccrcorpi_"
MIME-Version: 1.0
Subject: [Xen-devel] Fix typos in "vmcs.c" file
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--_002_1B4B44D9196EFF41AE41FDA404FC0A10155B15SHSMSX101ccrcorpi_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable

Fix typos in "vmcs.c" file.

Signed-off-by: Yongjie Ren <yongjie.ren@intel.com>

diff -r c887c30a0a35 xen/arch/x86/hvm/vmx/vmcs.c
--- a/xen/arch/x86/hvm/vmx/vmcs.c       Thu Aug 16 10:16:19 2012 +0200
+++ b/xen/arch/x86/hvm/vmx/vmcs.c       Fri Aug 17 10:44:39 2012 +0800
@@ -109,7 +109,7 @@
     if ( ctl_min & ~ctl )
     {
         *mismatch = 1;
-        printk("VMX: CPU%d has insufficent %s (%08x but requires min %08x)\n",
+        printk("VMX: CPU%d has insufficient %s (%08x but requires min %08x)\n",
                smp_processor_id(), name, ctl, ctl_min);
     }

@@ -227,7 +227,7 @@
     {
         /*
          * To use EPT we expect to be able to clear certain intercepts.
-         * We check VMX_BASIC_MSR[55] to correctly handle default1 controls.
+         * We check VMX_BASIC_MSR[55] to correctly handle default controls.
          */
         uint32_t must_be_one, must_be_zero, msr = MSR_IA32_VMX_PROCBASED_CTLS;
         if ( vmx_basic_msr_high & (1u << 23) )

Best Regards,
     Yongjie (Jay)


--_002_1B4B44D9196EFF41AE41FDA404FC0A10155B15SHSMSX101ccrcorpi_
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--_002_1B4B44D9196EFF41AE41FDA404FC0A10155B15SHSMSX101ccrcorpi_--

From xen-devel-bounces@lists.xen.org Fri Aug 17 07:50:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 07:50:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2HK0-0007jS-2O; Fri, 17 Aug 2012 07:50:20 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T2HJy-0007jN-KA
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 07:50:18 +0000
Received: from [85.158.138.51:23959] by server-6.bemta-3.messagelabs.com id
	27/B7-32013-9B7FD205; Fri, 17 Aug 2012 07:50:17 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-174.messagelabs.com!1345189816!24713652!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk0Mzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8121 invoked from network); 17 Aug 2012 07:50:17 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-10.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 07:50:17 -0000
X-IronPort-AV: E=Sophos;i="4.77,783,1336348800"; d="scan'208";a="14053748"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 07:50:16 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 17 Aug 2012 08:50:16 +0100
Message-ID: <1345189814.30865.59.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Deepti kulkarni <deepti.kdeeps@gmail.com>
Date: Fri, 17 Aug 2012 08:50:14 +0100
In-Reply-To: <CABGrkh8gXYwCABE_hsiM1+J0oLfEESd6=Jgyy5jX3wFyomMiAA@mail.gmail.com>
References: <CABGrkh9R75j+Dkc0X-Ue_tj23rFwZOMfTxZ-sJC7AWGp3ab2zQ@mail.gmail.com>
	<1345104590.27489.5.camel@zakaz.uk.xensource.com>
	<CABGrkh8gXYwCABE_hsiM1+J0oLfEESd6=Jgyy5jX3wFyomMiAA@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Debian kernel 3.1 onwards fails to boot on Xen domU
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Please don't top post. It makes it very hard to follow the flow of a
conversation.

On Thu, 2012-08-16 at 21:22 +0100, Deepti kulkarni wrote:
> On a PV, the vm doesn't boot and I see error "Error: Starting VM -
> Traceback (most recent call last): - File "/usr/bin/pygrub", line 808,
> in ? - fs = fsimage.open(file, part_offs[0], bootfsoptions) - IOError:
> [Errno 95] Operation not supported"

Quoting myself from earlier:
>         If none of the suggested workarounds help then I would suggest
>         making a full bug report (with logs etc) as described in
>         http://wiki.xen.org/wiki/Reporting_Bugs_against_Xen

http://wiki.xen.org/wiki/Reporting_Bugs_against_Xen lists many logs and
other pieces of information that are often useful in diagnosing issues;
please supply them, or we cannot help you.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 07:51:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 07:51:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2HL7-0007ma-Ge; Fri, 17 Aug 2012 07:51:29 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T2HL6-0007mO-FJ
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 07:51:28 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1345189882!8864692!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk0Mzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31760 invoked from network); 17 Aug 2012 07:51:22 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 07:51:22 -0000
X-IronPort-AV: E=Sophos;i="4.77,783,1336348800"; d="scan'208";a="14053763"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 07:51:22 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 17 Aug 2012 08:51:22 +0100
Message-ID: <1345189880.30865.60.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Date: Fri, 17 Aug 2012 08:51:20 +0100
In-Reply-To: <1345149626-32602-1-git-send-email-dgdegra@tycho.nsa.gov>
References: <20120816202246.GA6244@US-SEA-R8XVZTX>
	<1345149626-32602-1-git-send-email-dgdegra@tycho.nsa.gov>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"jbeulich@suse.com" <jbeulich@suse.com>, "msw@amazon.com" <msw@amazon.com>,
	"konrad.wilk@oracle.com" <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [PATCH v2] xen/sysfs: Use XENVER_guest_handle to
	query UUID
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-16 at 21:40 +0100, Daniel De Graaf wrote:
> On 08/16/2012 04:22 PM, Matt Wilson wrote:
> > 
> > Hi Daniel,
> > 
> > What do you think about retaining a fallback of looking in xenstore if
> > the hypercall fails?
> > 
> > Matt
> > 
> 
> That sounds good; there's little cost to leaving the fallback in.
> 
> ----8<-----------------------------------------------------
> 
> This hypercall has been present since Xen 3.1,

Do pvops kernels run on a hypervisor of that vintage? I have a vague
recollection of the pvops kernels requiring 3.4+ or something. Maybe
that was only for dom0 though.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 07:51:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 07:51:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2HL7-0007ma-Ge; Fri, 17 Aug 2012 07:51:29 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T2HL6-0007mO-FJ
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 07:51:28 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1345189882!8864692!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk0Mzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31760 invoked from network); 17 Aug 2012 07:51:22 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 07:51:22 -0000
X-IronPort-AV: E=Sophos;i="4.77,783,1336348800"; d="scan'208";a="14053763"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 07:51:22 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 17 Aug 2012 08:51:22 +0100
Message-ID: <1345189880.30865.60.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Date: Fri, 17 Aug 2012 08:51:20 +0100
In-Reply-To: <1345149626-32602-1-git-send-email-dgdegra@tycho.nsa.gov>
References: <20120816202246.GA6244@US-SEA-R8XVZTX>
	<1345149626-32602-1-git-send-email-dgdegra@tycho.nsa.gov>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"jbeulich@suse.com" <jbeulich@suse.com>, "msw@amazon.com" <msw@amazon.com>,
	"konrad.wilk@oracle.com" <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [PATCH v2] xen/sysfs: Use XENVER_guest_handle to
	query UUID
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-16 at 21:40 +0100, Daniel De Graaf wrote:
> On 08/16/2012 04:22 PM, Matt Wilson wrote:
> > 
> > Hi Daniel,
> > 
> > What do you think about retaining a fallback of looking in xenstore if
> > the hypercall fails?
> > 
> > Matt
> > 
> 
> That sounds good; there's little cost to leaving the fallback in.
> 
> ----8<-----------------------------------------------------
> 
> This hypercall has been present since Xen 3.1,

Do pvops kernels run on a hypervisor of that vintage? I have a vague
recollection of the pvops kernels requiring 3.4+ or something. Maybe
that was only for dom0, though.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 08:02:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 08:02:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2HVY-0008TD-Qd; Fri, 17 Aug 2012 08:02:16 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T2HVX-0008T8-68
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 08:02:15 +0000
Received: from [85.158.138.51:51763] by server-11.bemta-3.messagelabs.com id
	CB/BF-23152-68AFD205; Fri, 17 Aug 2012 08:02:14 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-174.messagelabs.com!1345190533!28754520!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk0Mzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24530 invoked from network); 17 Aug 2012 08:02:14 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-11.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 08:02:14 -0000
X-IronPort-AV: E=Sophos;i="4.77,783,1336348800"; d="scan'208";a="14053966"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 08:02:13 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 17 Aug 2012 09:02:13 +0100
Message-ID: <1345190532.30865.67.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Date: Fri, 17 Aug 2012 09:02:12 +0100
In-Reply-To: <alpine.DEB.2.02.1208161801210.15568@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208161523420.4850@kaball.uk.xensource.com>
	<1345128612-10323-4-git-send-email-stefano.stabellini@eu.citrix.com>
	<502D33B8020000780009596B@nat28.tlf.novell.com>
	<502D37D702000078000959F7@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208161801210.15568@kaball.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "Tim \(Xen.org\)" <tim@xen.org>, Jan Beulich <JBeulich@suse.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v3 4/6] xen: introduce XEN_GUEST_HANDLE_PARAM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-16 at 18:10 +0100, Stefano Stabellini wrote:
> On Thu, 16 Aug 2012, Jan Beulich wrote:
> > >>> On 16.08.12 at 17:54, "Jan Beulich" <JBeulich@suse.com> wrote:
> > > Seeing the patch I btw realized that there's no easy way to
> > > avoid having the type as a second argument in the conversion
> > > macros. Nevertheless I still don't like the explicitly specified type
> > > there.
> > 
> > Btw - on the architecture(s) where the two handles are identical
> > I would prefer you to make the conversion functions trivial (and
> > thus avoid making use of the "type" parameter), thus allowing
> > the type checking to occur that you currently circumvent.
> 
> OK, I can do that.

Will this result in the type parameter potentially becoming stale?

Adding a redundant pointer compare is a good way to get the compiler to
catch this. Something like:

        /* Cast a XEN_GUEST_HANDLE_PARAM to XEN_GUEST_HANDLE */
        #define guest_handle_from_param(hnd, type) ({   \
            typeof((hnd).p) _x = (hnd).p;               \
            XEN_GUEST_HANDLE(type) _y;                  \
            &_y == &_x;                                 \
            hnd;                                        \
         })

I'm not sure which two pointer members of the various structs need to
be compared; maybe it's actually &_y.p and &hnd.p, but you get the
idea...

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 08:06:22 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 08:06:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2HZC-0000AP-El; Fri, 17 Aug 2012 08:06:02 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T2HZB-0000AI-EK
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 08:06:01 +0000
Received: from [85.158.143.35:64709] by server-3.bemta-4.messagelabs.com id
	F2/1C-09529-86BFD205; Fri, 17 Aug 2012 08:06:00 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-21.messagelabs.com!1345190759!10985837!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk0Mzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8221 invoked from network); 17 Aug 2012 08:06:00 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-10.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 08:06:00 -0000
X-IronPort-AV: E=Sophos;i="4.77,783,1336348800"; d="scan'208";a="14054030"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 08:05:59 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 17 Aug 2012 09:05:59 +0100
Message-ID: <1345190758.30865.70.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Date: Fri, 17 Aug 2012 09:05:58 +0100
In-Reply-To: <502D0925.5070705@tycho.nsa.gov>
References: <1343657037-28495-1-git-send-email-ian.campbell@citrix.com>
	<50242583.3010201@tycho.nsa.gov>
	<CAEBdQ90LB9xvdAZC_QJGRYmBXBaM3ysDuAbG5LZr4AVe=GrA0w@mail.gmail.com>
	<1345116486.27489.92.camel@zakaz.uk.xensource.com>
	<502D04B2.6080203@tycho.nsa.gov>
	<1345127814.30865.11.camel@zakaz.uk.xensource.com>
	<502D0925.5070705@tycho.nsa.gov>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Jean Guyader <jean.guyader@gmail.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Guest knowledge of own domid [was: docs: initial
 documentation for xenstore paths]
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-16 at 15:52 +0100, Daniel De Graaf wrote:
> On 08/16/2012 10:36 AM, Ian Campbell wrote:
> > On Thu, 2012-08-16 at 15:33 +0100, Daniel De Graaf wrote:
> > 
> >>>> and if two domains are attempting to
> >>>>> set up communication such as V4V or vchan, they need to be able to tell
> >>>>> their peer what domain ID to use.
> >>>
> >>> That's trickier.
> >>>
> >>> I suppose they could rendezvous via /vm/$UUID? Although there has been
> >>> talk of removing that path in the future.
> >>
> >> The /vm/$UUID path isn't currently useful for this, since it doesn't maintain
> >> domain IDs (just names) and doesn't contain writable sub-keys for a domain
> >> to use. I also don't think such a sub-key should be added; it makes more
> >> sense to keep all of a domain's modifiable keys under its home path.
> >>
> >> Perhaps this could be changed to another identifier-to-domid mapping, like
> >> the proposed addition of a location to map name to domid? 
> >>
> >> The toolstack would maintain something like:
> >>   /local/by-name/$name == domid
> >>   /local/by-uuid/$uuid == domid
> > 
> > This second one is a bit like the existing /vm/$uuid/domid.
> 
> That key isn't being populated by xl on my system, so I didn't know it existed.

I could have sworn I'd seen it recently, but I don't see it now. Maybe
I had started xend at some point.

> > I think I would go with:
> > 
> >   /local/by-name/$name == /local/domain/$domid
> >   /local/by-uuid/$uuid == /local/domain/$domid
> > 
> > though, so that you can just read it and use it without interpreting it.
> 
> In that case, you would need to parse the domid out of the string in order
> to use it in hypercalls (grant, event, v4v). The frontend/backend paths use
> a distinct frontend-id/backend-id key for the domain ID, but we are trying
> to avoid this since populating this key would mean the domain populating it
> needs to know its own domain ID.

Right, dang ;-)
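The parsing Daniel describes is at least cheap; a sketch in pure POSIX shell, assuming the proposed layout where the key's value is a full /local/domain/$domid path (the path below is hard-coded in place of an actual xenstore-read, so the sketch runs without xenstored):

```shell
#!/bin/sh
# Value as it would come back from something like:
#   path=$(xenstore-read "/local/by-name/$name")
path="/local/domain/12"

# Strip everything up to and including the last '/' to get the domid
# for use in grant/event/v4v hypercalls.
domid="${path##*/}"
echo "$domid"    # prints 12
```

The extra step is the cost of storing a readily usable path rather than a bare domid, which is the trade-off being weighed here.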

> >>   /local/domain/$domid/name - same as existing
> >>   /local/domain/$domid/uuid - ? maybe unneeded, as it's available from Xen.
> > 
> > Is it available for other domains via xen, or just yourself?
> > 
> 
> Yourself and anyone who can call getdomaininfo (which is usually just dom0).
> This is actually the same as the xenstore permissions on keys such as
> /local/domain/$id/name. Changing these may need consideration, because on
> public hosting servers, you might not want to allow domains to be enumerated
> by name.

Sure.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 08:14:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 08:14:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2HhB-0000O4-Hh; Fri, 17 Aug 2012 08:14:17 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T2HhA-0000Nz-Em
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 08:14:16 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1345191238!9690251!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk0Mzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18043 invoked from network); 17 Aug 2012 08:13:59 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 08:13:59 -0000
X-IronPort-AV: E=Sophos;i="4.77,783,1336348800"; d="scan'208";a="14054158"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 08:13:58 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 17 Aug 2012 09:13:58 +0100
Message-ID: <1345191237.30865.76.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Roger Pau Monne <roger.pau@citrix.com>
Date: Fri, 17 Aug 2012 09:13:57 +0100
In-Reply-To: <1344967799-6646-1-git-send-email-roger.pau@citrix.com>
References: <1344967799-6646-1-git-send-email-roger.pau@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Christoph Egger <Christoph.Egger@amd.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v4] hotplug/NetBSD: check type of file to
 attach from params
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-08-14 at 19:09 +0100, Roger Pau Monne wrote:
> Changes since v2:
[...]
>  * Replace xenstore_write with xenstore-write in error function.
[...]
>  error() {
>  	echo "$@" >&2
> -	xenstore_write $xpath/hotplug-status error
> +	xenstore-write $xpath/hotplug-status error
>  	exit 1
>  }

Why this seemingly unrelated change? I don't see anything in the
comments on v2 explicitly about it.

If it is somehow necessary due to this patch then I think that deserves
mention in the changelog proper.

Is it because xenstore_write is actually specific to the Linux hotplug
scripts? (i.e. this function was just plain broken before).

While looking into this I noticed that the Linux equivalent to error()
is:
        fatal() {
          _xenstore_write "$XENBUS_PATH/hotplug-error" "$*" \
                          "$XENBUS_PATH/hotplug-status" error
          log err "$@"
          exit 1
        }

The write of the log message to hotplug-error seems like something worth
replicating (in a separate patch).
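A sketch of what the NetBSD error() might look like with that extra write, mirroring the Linux fatal() quoted above. Here xenstore_write_stub is a stand-in for the real xenstore-write binary (so the sketch runs without xenstored), and $xpath is assumed to be set by the surrounding hotplug script as in the quoted patch:

```shell
#!/bin/sh
# Stand-in for the real xenstore-write; echoes its key/value
# arguments one per line so the writes can be inspected.
xenstore_write_stub() { printf '%s\n' "$@"; }

xpath="backend/vbd/1/51712"   # example xenstore path

error() {
	echo "$@" >&2
	# Record both the error message and the status, as fatal() does.
	xenstore_write_stub "$xpath/hotplug-error" "$*" \
	                    "$xpath/hotplug-status" error
	exit 1
}

# Run in a subshell so the exit 1 doesn't terminate this sketch.
out=$( (error "no such device") 2>/dev/null )
echo "$out"
```

The only behavioural change from the patch's error() is the additional hotplug-error key carrying the message, which the toolstack can then surface to the user.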

I also wonder how much of this sort of infrastructure could actually be
shared, but that's for another time.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 08:22:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 08:22:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2Hox-0000Xg-IV; Fri, 17 Aug 2012 08:22:19 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T2Hov-0000Xb-Tb
	for xen-devel@lists.xensource.com; Fri, 17 Aug 2012 08:22:18 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1345191672!4582135!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk0Mzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25335 invoked from network); 17 Aug 2012 08:21:14 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 08:21:14 -0000
X-IronPort-AV: E=Sophos;i="4.77,783,1336348800"; d="scan'208";a="14054304"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 08:21:12 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 17 Aug 2012 09:21:12 +0100
Message-ID: <1345191671.30865.81.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: George Dunlap <george.dunlap@eu.citrix.com>
Date: Fri, 17 Aug 2012 09:21:11 +0100
In-Reply-To: <0982bad392e4f96fb39a.1345022903@elijah>
References: <0982bad392e4f96fb39a.1345022903@elijah>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [PATCH] xl: Suppress spurious warning message for
 cpupool-list
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-08-15 at 10:28 +0100, George Dunlap wrote:
> # HG changeset patch
> # User George Dunlap <george.dunlap@eu.citrix.com>
> # Date 1345022863 -3600
> # Node ID 0982bad392e4f96fb39a025d6528c33be32c6c04
> # Parent  dc56a9defa30312a46cfb6ddb578e64cfbc6bc8b
> xl: Suppress spurious warning message for cpupool-list
> 
> libxl_cpupool_list() enumerates the cpupools by "probing": calling
> cpupool_info, starting at 0 and stopping when it gets an error. However,
> cpupool_info will print an error when the call to xc_cpupool_getinfo() fails,
> resulting in every xl command that uses libxl_list_cpupool (such as
> cpupool-list) printing that error message spuriously.

I see:
        # xl -vvv cpupool-list
        Name               CPUs   Sched     Active   Domain count
        Pool-0               4    credit       y          2
        xc: debug: hypercall buffer: total allocations:5 total releases:5
        xc: debug: hypercall buffer: current allocations:0 maximum allocations:2
        xc: debug: hypercall buffer: cache current size:2
        xc: debug: hypercall buffer: cache hits:3 misses:2 toobig:0

which doesn't seem to include the error message. However by code
inspection I'm sure I should be seeing what you describe. WTF?

> This patch adds a "probe" argument to cpupool_info(). If set, it won't print
> a warning if the xc_cpupool_getinfo() fails with ENOENT.

Looking at the callers I think the existing "exact" parameter could be
used instead of a new param -- it would be fine to fail silently on
ENOENT iff !exact, I think.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 08:23:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 08:23:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2HqH-0000cC-1G; Fri, 17 Aug 2012 08:23:41 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T2HqG-0000c5-4r
	for xen-devel@lists.xensource.com; Fri, 17 Aug 2012 08:23:40 +0000
Received: from [85.158.143.35:35525] by server-3.bemta-4.messagelabs.com id
	BF/EB-09529-B8FFD205; Fri, 17 Aug 2012 08:23:39 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1345191818!12375519!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkyODE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12475 invoked from network); 17 Aug 2012 08:23:38 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 08:23:38 -0000
X-IronPort-AV: E=Sophos;i="4.77,783,1336348800"; d="scan'208";a="14054344"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 08:23:38 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 17 Aug 2012 09:23:38 +0100
Message-ID: <1345191816.30865.82.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: George Dunlap <George.Dunlap@eu.citrix.com>
Date: Fri, 17 Aug 2012 09:23:36 +0100
In-Reply-To: <1345191671.30865.81.camel@zakaz.uk.xensource.com>
References: <0982bad392e4f96fb39a.1345022903@elijah>
	<1345191671.30865.81.camel@zakaz.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [PATCH] xl: Suppress spurious warning message for
 cpupool-list
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-08-17 at 09:21 +0100, Ian Campbell wrote:
> On Wed, 2012-08-15 at 10:28 +0100, George Dunlap wrote:
> > # HG changeset patch
> > # User George Dunlap <george.dunlap@eu.citrix.com>
> > # Date 1345022863 -3600
> > # Node ID 0982bad392e4f96fb39a025d6528c33be32c6c04
> > # Parent  dc56a9defa30312a46cfb6ddb578e64cfbc6bc8b
> > xl: Suppress spurious warning message for cpupool-list
> > 
> > libxl_cpupool_list() enumerates the cpupools by "probing": calling
> > cpupool_info, starting at 0 and stopping when it gets an error. However,
> > cpupool_info will print an error when the call to xc_cpupool_getinfo() fails,
> > resulting in every xl command that uses libxl_list_cpupool (such as
> > cpupool-list) printing that error message spuriously.
> 
> I see:
>         # xl -vvv cpupool-list
>         Name               CPUs   Sched     Active   Domain count
>         Pool-0               4    credit       y          2
>         xc: debug: hypercall buffer: total allocations:5 total releases:5
>         xc: debug: hypercall buffer: current allocations:0 maximum allocations:2
>         xc: debug: hypercall buffer: cache current size:2
>         xc: debug: hypercall buffer: cache hits:3 misses:2 toobig:0
> 
> which doesn't seem to include the error message. However by code
> inspection I'm sure I should be seeing what you describe. WTF?

Nevermind this bit, I'd forgotten I'd stuck 4.1 on my test box...

> 
> > This patch adds a "probe" argument to cpupool_info(). If set, it won't print
> > a warning if the xc_cpupool_getinfo() fails with ENOENT.
> 
> Looking at the callers I think the existing "exact" parameter could be
> used instead of a new param -- it would be fine to fail silently on
> ENOENT iff !exact, I think.
> 
> Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 08:26:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 08:26:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2Hsg-0000lF-Iz; Fri, 17 Aug 2012 08:26:10 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Christoph.Egger@amd.com>) id 1T2Hse-0000kz-WC
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 08:26:09 +0000
Received: from [85.158.139.83:38777] by server-1.bemta-5.messagelabs.com id
	E4/D0-09980-0200E205; Fri, 17 Aug 2012 08:26:08 +0000
X-Env-Sender: Christoph.Egger@amd.com
X-Msg-Ref: server-5.tower-182.messagelabs.com!1345191965!28659390!1
X-Originating-IP: [216.32.181.182]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10819 invoked from network); 17 Aug 2012 08:26:07 -0000
Received: from ch1ehsobe002.messaging.microsoft.com (HELO
	ch1outboundpool.messaging.microsoft.com) (216.32.181.182)
	by server-5.tower-182.messagelabs.com with AES128-SHA encrypted SMTP;
	17 Aug 2012 08:26:07 -0000
Received: from mail156-ch1-R.bigfish.com (10.43.68.241) by
	CH1EHSOBE009.bigfish.com (10.43.70.59) with Microsoft SMTP Server id
	14.1.225.23; Fri, 17 Aug 2012 08:26:05 +0000
Received: from mail156-ch1 (localhost [127.0.0.1])	by
	mail156-ch1-R.bigfish.com (Postfix) with ESMTP id 564CF3C01A9;
	Fri, 17 Aug 2012 08:26:05 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:163.181.249.108; KIP:(null); UIP:(null);
	IPV:NLI; H:ausb3twp01.amd.com; RD:none; EFVD:NLI
X-SpamScore: -10
X-BigFish: VPS-10(zzbb2dI98dI936eI154dM1432I1418Izz1202hzzz2dh668h839h93fhd25he5bhf0ah107ah)
Received: from mail156-ch1 (localhost.localdomain [127.0.0.1]) by mail156-ch1
	(MessageSwitch) id 1345191963450942_12367;
	Fri, 17 Aug 2012 08:26:03 +0000 (UTC)
Received: from CH1EHSMHS029.bigfish.com (snatpool1.int.messaging.microsoft.com
	[10.43.68.250])	by mail156-ch1.bigfish.com (Postfix) with ESMTP id
	621413E00C6;	Fri, 17 Aug 2012 08:26:03 +0000 (UTC)
Received: from ausb3twp01.amd.com (163.181.249.108) by
	CH1EHSMHS029.bigfish.com (10.43.70.29) with Microsoft SMTP Server id
	14.1.225.23; Fri, 17 Aug 2012 08:26:03 +0000
X-WSS-ID: 0M8W4RE-01-4Q7-02
X-M-MSG: 
Received: from sausexedgep01.amd.com (sausexedgep01-ext.amd.com
	[163.181.249.72])	(using TLSv1 with cipher AES128-SHA (128/128
	bits))	(No
	client certificate requested)	by ausb3twp01.amd.com (Axway MailGate
	3.8.1)
	with ESMTP id 23E591028042;	Fri, 17 Aug 2012 03:26:01 -0500 (CDT)
Received: from sausexhtp01.amd.com (163.181.3.165) by sausexedgep01.amd.com
	(163.181.36.54) with Microsoft SMTP Server (TLS) id 8.3.192.1;
	Fri, 17 Aug 2012 03:26:46 -0500
Received: from storexhtp01.amd.com (172.24.4.3) by sausexhtp01.amd.com
	(163.181.3.165) with Microsoft SMTP Server (TLS) id 8.3.213.0;
	Fri, 17 Aug 2012 03:26:01 -0500
Received: from rhodium.osrc.amd.com (165.204.15.173) by storexhtp01.amd.com
	(172.24.4.3) with Microsoft SMTP Server id 8.3.213.0; Fri, 17 Aug 2012
	04:26:00 -0400
Message-ID: <502E0015.2090801@amd.com>
Date: Fri, 17 Aug 2012 10:25:57 +0200
From: Christoph Egger <Christoph.Egger@amd.com>
User-Agent: Mozilla/5.0 (X11; NetBSD amd64;
	rv:11.0) Gecko/20120404 Thunderbird/11.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1344967799-6646-1-git-send-email-roger.pau@citrix.com>
	<1345191237.30865.76.camel@zakaz.uk.xensource.com>
In-Reply-To: <1345191237.30865.76.camel@zakaz.uk.xensource.com>
X-OriginatorOrg: amd.com
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH v4] hotplug/NetBSD: check type of file to
 attach from params
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/17/12 10:13, Ian Campbell wrote:

> On Tue, 2012-08-14 at 19:09 +0100, Roger Pau Monne wrote:
>> Changes since v2:
> [...]
>>  * Replace xenstore_write with xenstore-write in error function.
> [...]
>>  error() {
>>  	echo "$@" >&2
>> -	xenstore_write $xpath/hotplug-status error
>> +	xenstore-write $xpath/hotplug-status error
>>  	exit 1
>>  }
> 
> Why this seemingly unrelated change? I don't see anything in the
> comments on v2 explicitly about it.


xenstore-write exists on NetBSD. xenstore_write does not exist.

Christoph

> 
> If it is somehow necessary due to this patch then I think that deserves
> mention in the changelog proper.
> 
> Is it because xenstore_write is actually specific to the Linux hotplug
> scripts? (i.e. this function was just plain broken before).
> 
> While looking into this I noticed that the Linux equivalent to error()
> is:
>         fatal() {
>           _xenstore_write "$XENBUS_PATH/hotplug-error" "$*" \
>                           "$XENBUS_PATH/hotplug-status" error
>           log err "$@"
>           exit 1
>         }
> 
> The write of the log message to hotplug-error seems like something worth
> replicating (in a separate patch).
> 
> I also wonder how much of this sort of infrastructure could actually be
> shared, but that's for another time.
> 
> Ian.
> 
> 



-- 
---to satisfy European Law for business letters:
Advanced Micro Devices GmbH
Einsteinring 24, 85689 Dornach b. Muenchen
Geschaeftsfuehrer: Alberto Bozzo
Sitz: Dornach, Gemeinde Aschheim, Landkreis Muenchen
Registergericht Muenchen, HRB Nr. 43632


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 08:28:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 08:28:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2Hv7-0000v6-4h; Fri, 17 Aug 2012 08:28:41 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T2Hv4-0000v0-Un
	for Xen-devel@lists.xensource.com; Fri, 17 Aug 2012 08:28:39 +0000
Received: from [85.158.143.99:54705] by server-3.bemta-4.messagelabs.com id
	CB/55-09529-6B00E205; Fri, 17 Aug 2012 08:28:38 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-216.messagelabs.com!1345192117!22261567!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk0Mzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5125 invoked from network); 17 Aug 2012 08:28:37 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-10.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 08:28:37 -0000
X-IronPort-AV: E=Sophos;i="4.77,783,1336348800"; d="scan'208";a="14054448"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 08:28:36 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 17 Aug 2012 09:28:36 +0100
Message-ID: <1345192115.30865.86.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Mukesh Rathor <mukesh.rathor@oracle.com>
Date: Fri, 17 Aug 2012 09:28:35 +0100
In-Reply-To: <20120816114650.4db2079f@mantra.us.oracle.com>
References: <20120815175724.3405043a@mantra.us.oracle.com>
	<alpine.DEB.2.02.1208161454430.4850@kaball.uk.xensource.com>
	<20120816114650.4db2079f@mantra.us.oracle.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>, Konrad
	Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [RFC PATCH 1/8]: PVH: Basic and preparatory changes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-16 at 19:46 +0100, Mukesh Rathor wrote:
> On Thu, 16 Aug 2012 14:59:14 +0100
> Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
> 
> > It's good to see well split patches :)
> > But could you please send them inline next time?
> 
> Sorry, that was the intent. 

It seems like this was the only one which was attached instead of
inline, strange.

> > > diff --git a/include/xen/interface/xen.h
> > > b/include/xen/interface/xen.h index 0801468..1d5bc36 100644
> > > --- a/include/xen/interface/xen.h
> > > +++ b/include/xen/interface/xen.h
> > > @@ -493,6 +493,7 @@ struct dom0_vga_console_info {
> > >  /* These flags are passed in the 'flags' field of start_info_t. */
> > >  #define SIF_PRIVILEGED    (1<<0)  /* Is the domain privileged? */
> > >  #define SIF_INITDOMAIN    (1<<1)  /* Is this the initial control
> > > domain? */ +#define SIF_IS_PVINHVM    (1<<4)  /* Is it a PV running
> > > in HVM container? */ #define SIF_PM_MASK       (0xFF<<8) /* reserve
> > > 1 byte for xen-pm options */ 
> > >  typedef uint64_t cpumap_t;
> > 
> > I would avoid adding SIF_IS_PVINHVM, an x86 specific concept, into a
> > generic xen.h interface file. 

Is PVH actually more like a XENFEAT style thing?

Is there actually anywhere which wants to know specifically about PVH
rather than some more specific property which a PVH domain happens to
have?

> > > +/* xen_pv_domain check is necessary as start_info ptr is null in
> > > HVM. Also,
> > > + * note, xen PVH domain shares lot of HVM code */
> > > +#define xen_pvh_domain()       (xen_pv_domain()
> > > &&                     \
> > > +				(xen_start_info->flags &
> > > SIF_IS_PVINHVM))
> >  
> > Also here.
> 
> Hmm.. I can move '#define xen_pvh_domain()' to x86 header, easy. But,
> not sure how to define SIF_IS_PVINHVM then? I could put SIF_IS_RESVD in
> include/xen/interface/xen.h, and then do 
> #define SIF_IS_PVINHVM SIF_IS_RESVD in an x86 file.
> 
> What do you think about that?

Should PVH actually be a new value in the xen_domain_type enum?

> 
> thanks
> Mukesh
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 08:36:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 08:36:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2I2B-00019u-25; Fri, 17 Aug 2012 08:35:59 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T2I2A-00019p-43
	for Xen-devel@lists.xensource.com; Fri, 17 Aug 2012 08:35:58 +0000
Received: from [85.158.143.99:35748] by server-3.bemta-4.messagelabs.com id
	A5/E2-09529-D620E205; Fri, 17 Aug 2012 08:35:57 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-216.messagelabs.com!1345192556!22262865!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk0Mzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4780 invoked from network); 17 Aug 2012 08:35:57 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-10.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 08:35:57 -0000
X-IronPort-AV: E=Sophos;i="4.77,783,1336348800"; d="scan'208";a="14054608"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 08:35:55 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 17 Aug 2012 09:35:55 +0100
Message-ID: <1345192554.30865.93.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Mukesh Rathor <mukesh.rathor@oracle.com>
Date: Fri, 17 Aug 2012 09:35:54 +0100
In-Reply-To: <20120815175724.3405043a@mantra.us.oracle.com>
References: <20120815175724.3405043a@mantra.us.oracle.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [RFC PATCH 1/8]: PVH: Basic and preparatory changes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-16 at 01:57 +0100, Mukesh Rathor wrote:

> ---
>  arch/x86/include/asm/xen/interface.h |    3 +-
>  arch/x86/include/asm/xen/page.h      |    3 ++
>  arch/x86/xen/setup.c                 |   13 ++++++++--
>  arch/x86/xen/smp.c                   |   39 ++++++++++++++++++---------------
>  drivers/xen/cpu_hotplug.c            |    3 +-
>  include/xen/interface/xen.h          |    1 +
>  include/xen/xen.h                    |    4 +++
>  7 files changed, 43 insertions(+), 23 deletions(-)
> 
> diff --git a/arch/x86/include/asm/xen/interface.h b/arch/x86/include/asm/xen/interface.h
> index cbf0c9d..1bd5e88 100644
> --- a/arch/x86/include/asm/xen/interface.h
> +++ b/arch/x86/include/asm/xen/interface.h
> @@ -136,7 +136,8 @@ struct vcpu_guest_context {
>      struct cpu_user_regs user_regs;         /* User-level CPU registers     */
>      struct trap_info trap_ctxt[256];        /* Virtual IDT                  */
>      unsigned long ldt_base, ldt_ents;       /* LDT (linear address, # ents) */
> -    unsigned long gdt_frames[16], gdt_ents; /* GDT (machine frames, # ents) */
> +    unsigned long gdt_frames[16], gdt_ents; /* GDT (machine frames, # ents).*
> +                                            * PV in HVM: it's GDTR addr/sz */

I'm not sure I understand this comment. What is "GDTR addr/sz"? Do you
mean that gdt_frames/gdt_ents have different semantics here?

Might be worthy of a union? Or finding some other way to expand this struct.

> 
>      unsigned long kernel_ss, kernel_sp;     /* Virtual TSS (only SS1/SP1)   */
>      /* NB. User pagetable on x86/64 is placed in ctrlreg[1]. */
>      unsigned long ctrlreg[8];               /* CR0-CR7 (control registers)  */
> 
> diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
> index ead8557..936f21d 100644
> --- a/arch/x86/xen/setup.c
> +++ b/arch/x86/xen/setup.c
> @@ -500,10 +500,9 @@ void __cpuinit xen_enable_syscall(void)
>  #endif /* CONFIG_X86_64 */
>  }
>  
> -void __init xen_arch_setup(void)
> +/* Normal PV domain not running in HVM container */

It's a bit of a shame to overload the "HVM" term this way, to mean both
the traditional "providing a full PC-like environment" and "PV using
hardware virtualisation facilities".

Perhaps:
        /* Normal PV domain without PVH extensions */

> +static __init void inline xen_non_pvh_arch_setup(void)
>  {
> -       xen_panic_handler_init();
> -
>         HYPERVISOR_vm_assist(VMASST_CMD_enable, VMASST_TYPE_4gb_segments);
>         HYPERVISOR_vm_assist(VMASST_CMD_enable, VMASST_TYPE_writable_pagetables);
>  
> @@ -517,6 +516,14 @@ void __init xen_arch_setup(void)
>  
>         xen_enable_sysenter();
>         xen_enable_syscall();
> +}
> +
> +void __init xen_arch_setup(void)
> +{
> +       xen_panic_handler_init();
> +
> +       if (!xen_pvh_domain())
> +               xen_non_pvh_arch_setup();

The negative in the fn name here strikes me as a bit weird. Can't this
just be xen_pv_arch_setup?

Or even just have:
	/* Everything else is specific to PV without hardware support */
	if (xen_pvh_domain())
		return;


>  
>  #ifdef CONFIG_ACPI
>         if (!(xen_start_info->flags & SIF_INITDOMAIN)) {
> diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
> index f58dca7..cdf269d 100644
> --- a/arch/x86/xen/smp.c
> +++ b/arch/x86/xen/smp.c
> @@ -300,8 +300,6 @@ cpu_initialize_context(unsigned int cpu, struct task_struct *idle)
>         gdt = get_cpu_gdt_table(cpu);
>  
>         ctxt->flags = VGCF_IN_KERNEL;
> -       ctxt->user_regs.ds = __USER_DS;
> -       ctxt->user_regs.es = __USER_DS;
>         ctxt->user_regs.ss = __KERNEL_DS;
>  #ifdef CONFIG_X86_32
>         ctxt->user_regs.fs = __KERNEL_PERCPU;
> @@ -314,31 +312,36 @@ cpu_initialize_context(unsigned int cpu, struct task_struct *idle)
>  
>         memset(&ctxt->fpu_ctxt, 0, sizeof(ctxt->fpu_ctxt));
>  
> -       xen_copy_trap_info(ctxt->trap_ctxt);
> +               ctxt->user_regs.ds = __USER_DS;
> +               ctxt->user_regs.es = __USER_DS;
>  
> -       ctxt->ldt_ents = 0;
> +               xen_copy_trap_info(ctxt->trap_ctxt);
>  
> -       BUG_ON((unsigned long)gdt & ~PAGE_MASK);
> +               ctxt->ldt_ents = 0;

Something odd is going on with the indentation here (and below too, I've
just noticed). I suspect lots of the changes aren't really changing
anything other than whitespace?

> -       gdt_mfn = arbitrary_virt_to_mfn(gdt);
> -       make_lowmem_page_readonly(gdt);
> -       make_lowmem_page_readonly(mfn_to_virt(gdt_mfn));
> +               BUG_ON((unsigned long)gdt & ~PAGE_MASK);
>  
> -       ctxt->gdt_frames[0] = gdt_mfn;
> -       ctxt->gdt_ents      = GDT_ENTRIES;
> +               gdt_mfn = arbitrary_virt_to_mfn(gdt);
> +               make_lowmem_page_readonly(gdt);
> +               make_lowmem_page_readonly(mfn_to_virt(gdt_mfn));
>  
> -       ctxt->user_regs.cs = __KERNEL_CS;
> -       ctxt->user_regs.esp = idle->thread.sp0 - sizeof(struct pt_regs);
> +               ctxt->gdt_frames[0] = gdt_mfn;
> +               ctxt->gdt_ents      = GDT_ENTRIES;
>  
> -       ctxt->kernel_ss = __KERNEL_DS;
> -       ctxt->kernel_sp = idle->thread.sp0;
> +               ctxt->kernel_ss = __KERNEL_DS;
> +               ctxt->kernel_sp = idle->thread.sp0;
>  
>  #ifdef CONFIG_X86_32
> -       ctxt->event_callback_cs     = __KERNEL_CS;
> -       ctxt->failsafe_callback_cs  = __KERNEL_CS;
> +               ctxt->event_callback_cs     = __KERNEL_CS;
> +               ctxt->failsafe_callback_cs  = __KERNEL_CS;
>  #endif
> -       ctxt->event_callback_eip    = (unsigned long)xen_hypervisor_callback;
> -       ctxt->failsafe_callback_eip = (unsigned long)xen_failsafe_callback;
> +               ctxt->event_callback_eip    =
> +                                       (unsigned long)xen_hypervisor_callback;
> +               ctxt->failsafe_callback_eip =
> +                                       (unsigned long)xen_failsafe_callback;
> +
> +       ctxt->user_regs.cs = __KERNEL_CS;
> +       ctxt->user_regs.esp = idle->thread.sp0 - sizeof(struct pt_regs);
>  
>         per_cpu(xen_cr3, cpu) = __pa(swapper_pg_dir);
>         ctxt->ctrlreg[3] = xen_pfn_to_cr3(virt_to_mfn(swapper_pg_dir));

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 08:47:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 08:47:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2IDX-0001LO-Fg; Fri, 17 Aug 2012 08:47:43 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T2IDV-0001LJ-JN
	for xen-devel@lists.xensource.com; Fri, 17 Aug 2012 08:47:41 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1345193254!2212420!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk0Mzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28335 invoked from network); 17 Aug 2012 08:47:35 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 08:47:35 -0000
X-IronPort-AV: E=Sophos;i="4.77,783,1336348800"; d="scan'208";a="14054866"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 08:46:38 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 17 Aug 2012 09:46:39 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1T2ICU-0006zK-HZ;
	Fri, 17 Aug 2012 08:46:38 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1T2ICU-0005jw-BG;
	Fri, 17 Aug 2012 09:46:38 +0100
To: xen-devel@lists.xensource.com
Message-ID: <osstest-13611-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 17 Aug 2012 09:46:38 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13611: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13611 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13611/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-win7-amd64  7 windows-install            fail pass in 13609
 test-amd64-i386-pv           10 guest-saverestore  fail in 13609 pass in 13611

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf     14 guest-localmigrate/x10    fail REGR. vs. 13607
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13607
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13607
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13607
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13607

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop           fail in 13609 never pass

version targeted for testing:
 xen                  3468a834be8d
baseline version:
 xen                  c887c30a0a35

------------------------------------------------------------
People who touched revisions under test:
  George Dunlap <george.dunlap@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=3468a834be8d
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable 3468a834be8d
+ branch=xen-unstable
+ revision=3468a834be8d
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/staging/xen-unstable.hg
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-unstable.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-unstable.hg
+ hg push -r 3468a834be8d ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 2 changesets with 3 changes to 2 files

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 08:56:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 08:56:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2IM2-0001Yu-Ju; Fri, 17 Aug 2012 08:56:30 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T2IM1-0001Yp-UJ
	for Xen-devel@lists.xensource.com; Fri, 17 Aug 2012 08:56:30 +0000
Received: from [85.158.143.99:33777] by server-2.bemta-4.messagelabs.com id
	A8/17-31966-D370E205; Fri, 17 Aug 2012 08:56:29 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-216.messagelabs.com!1345193781!23401945!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk0Mzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24953 invoked from network); 17 Aug 2012 08:56:21 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-12.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 08:56:21 -0000
X-IronPort-AV: E=Sophos;i="4.77,783,1336348800"; d="scan'208";a="14055102"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 08:56:21 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 17 Aug 2012 09:56:21 +0100
Message-ID: <1345193780.30865.109.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Mukesh Rathor <mukesh.rathor@oracle.com>
Date: Fri, 17 Aug 2012 09:56:20 +0100
In-Reply-To: <20120815180131.24aaa5ce@mantra.us.oracle.com>
References: <20120815180131.24aaa5ce@mantra.us.oracle.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [[RFC PATCH 2/8]: PVH: changes related to initial
 boot and irq rewiring
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-16 at 02:01 +0100, Mukesh Rathor wrote:
> ---
>  arch/x86/xen/enlighten.c |   67 ++++++++++++++++++++++++++++++++++++++-------
>  arch/x86/xen/irq.c       |   22 ++++++++++++++-
>  2 files changed, 77 insertions(+), 12 deletions(-)
> 
> diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
> index bf4bda6..3a58c51 100644
> --- a/arch/x86/xen/enlighten.c
> +++ b/arch/x86/xen/enlighten.c
> @@ -139,6 +139,8 @@ struct tls_descs {
>   */
>  static DEFINE_PER_CPU(struct tls_descs, shadow_tls_desc);
>  
> +static void __init xen_hvm_init_shared_info(void);
> +
>  static void clamp_max_cpus(void)
>  {
>  #ifdef CONFIG_SMP
> @@ -217,8 +219,8 @@ static void __init xen_banner(void)
>  	struct xen_extraversion extra;
>  	HYPERVISOR_xen_version(XENVER_extraversion, &extra);
>  
> -	printk(KERN_INFO "Booting paravirtualized kernel on %s\n",
> -	       pv_info.name);
> +	printk(KERN_INFO "Booting paravirtualized kernel %son %s\n",
> +		(xen_pvh_domain() ? "in HVM " : ""), pv_info.name);

Please can we avoid "HVM" in the context of PVH here. "With PVH
extensions" or something.

>  	printk(KERN_INFO "Xen version: %d.%d%s%s\n",
>  	       version >> 16, version & 0xffff, extra.extraversion,
>  	       xen_feature(XENFEAT_mmu_pt_update_preserve_ad) ? " (preserve-AD)" : "");
[...]
> @@ -1034,6 +1039,10 @@ static int xen_write_msr_safe(unsigned int msr, unsigned low, unsigned high)
>  
>  void xen_setup_shared_info(void)
>  {
> +	/* do later in xen_pvh_guest_init() when extend_brk is properly setup*/
> +	if (xen_pvh_domain() && xen_initial_domain())
> +		return;

Could we push this setup later for a pv guest too and reduce the
divergence?

> +
>  	if (!xen_feature(XENFEAT_auto_translated_physmap)) {
>  		set_fixmap(FIX_PARAVIRT_BOOTMAP,
>  			   xen_start_info->shared_info);
[...]
> @@ -1274,6 +1287,10 @@ static const struct machine_ops xen_machine_ops __initconst = {
>   */
>  static void __init xen_setup_stackprotector(void)
>  {
> +	if (xen_pvh_domain()) {
> +		switch_to_new_gdt(0);

This seems to skip calling setup_stack_canary_segment too?

Assuming that's not deliberate, I'd be tempted to just put "if
(xen_pv_domain())" around the updates of pv_cpu_ops and leave the main
flow of the code the same. If it was deliberate, a comment might be in
order.

Unrelated to PVH, so I guess more a question for Konrad, but it seems
odd to me that "struct pv_cpu_ops xen_cpu_ops" starts off with
xen_write_gdt in it, gets overridden to xen_write_gdt_boot temporarily
here and then gets put back to xen_write_gdt immediately. Having the
struct start off with xen_write_gdt_boot in it would seem more natural
to me. Unless the _boot suffix is really supposed to mean
_while_setting_up_stack_protector?

(same for the load_gdt hook, of course).

> +		return;
> +	}
>  	pv_cpu_ops.write_gdt_entry = xen_write_gdt_entry_boot;
>  	pv_cpu_ops.load_gdt = xen_load_gdt_boot;
>  
> @@ -1284,6 +1301,25 @@ static void __init xen_setup_stackprotector(void)
>  	pv_cpu_ops.load_gdt = xen_load_gdt;
>  }
>  
> +static void __init xen_pvh_guest_init(void)
> +{
> +#ifndef __HAVE_ARCH_PTE_SPECIAL
> +	("__HAVE_ARCH_PTE_SPECIAL is required for PVH for now\n");
> +	#error("__HAVE_ARCH_PTE_SPECIAL is required for PVH\n");
> +#endif

Isn't this an unconditional feature of arch/x86?

And if not then this check belongs in Kconfig.
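
[Editor's sketch: expressing the dependency in Kconfig, as suggested above,
might look like the fragment below. This is purely illustrative: no
XEN_PVH Kconfig symbol existed at the time of this patch, and
__HAVE_ARCH_PTE_SPECIAL was a C-level define rather than a Kconfig symbol,
so the dependency name here is hypothetical.]

```kconfig
# Hypothetical fragment for arch/x86/xen/Kconfig; symbol names are
# illustrative, not taken from the patch under review.
config XEN_PVH
	bool "Support for running as a PVH guest"
	depends on XEN && X86_64
	# If pte_special() support were ever optional on x86, the
	# dependency would be declared here instead of via #error:
	depends on ARCH_HAS_PTE_SPECIAL
	default n
```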

> +	/* PVH TBD/FIXME: for now just disable this. */
> +	have_vcpu_info_placement = 0;
> +
> +	if (xen_feature(XENFEAT_hvm_callback_vector))
> +		xen_have_vector_callback = 1;
> +
> +        /* for domU, the library sets start_info.shared_info to pfn, but for
> +         * dom0, it contains mfn. we need to get the pfn for shared_info. PVH
> +	 * uses HVM code in many places */
> +	if (xen_initial_domain())
> +		xen_hvm_init_shared_info();
> +}
> +
>  /* First C function to be called on Xen boot */
>  asmlinkage void __init xen_start_kernel(void)
>  {
> @@ -1294,15 +1330,23 @@ asmlinkage void __init xen_start_kernel(void)
>  	if (!xen_start_info)
>  		return;
>  
> +#ifdef CONFIG_X86_32
> +	xen_raw_printk("ERROR: 32bit PV guest can not run in HVM container\n");

"run with PVH extensions"

I think you also want a panic here somewhere instead/as well as the
printk.

Plus I haven't got to it yet but I guess the kernel's features are going
to declare something which the tools would use to error out when trying
to build such a thing? Not that this isn't a good sanity check even so.

[...]
> diff --git a/arch/x86/xen/irq.c b/arch/x86/xen/irq.c
> index 1573376..7c7dfd1 100644
> --- a/arch/x86/xen/irq.c
> +++ b/arch/x86/xen/irq.c
> @@ -100,6 +100,10 @@ PV_CALLEE_SAVE_REGS_THUNK(xen_irq_enable);
>  
>  static void xen_safe_halt(void)
>  {
> +	/* so event channel can be delivered to us, since in HVM container */
> +	if (xen_pvh_domain())
> +		local_irq_enable();
> +
>  	/* Blocking includes an implicit local_irq_enable(). */

So this comment isn't true for a PVH guest? Why not? Should it be?

I'm half wondering if we couldn't use native_safe_halt here; IIRC both
SVM and VT-x handle "sti; hlt" in a sensible way on the hypervisor side
by calling hvm_hlt.

I suppose that's more of a philosophical question about the nature of
PVH ;-)

>  	if (HYPERVISOR_sched_op(SCHEDOP_block, NULL) != 0)
>  		BUG();

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 08:56:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 08:56:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2IM2-0001Yu-Ju; Fri, 17 Aug 2012 08:56:30 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T2IM1-0001Yp-UJ
	for Xen-devel@lists.xensource.com; Fri, 17 Aug 2012 08:56:30 +0000
Received: from [85.158.143.99:33777] by server-2.bemta-4.messagelabs.com id
	A8/17-31966-D370E205; Fri, 17 Aug 2012 08:56:29 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-216.messagelabs.com!1345193781!23401945!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk0Mzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24953 invoked from network); 17 Aug 2012 08:56:21 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-12.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 08:56:21 -0000
X-IronPort-AV: E=Sophos;i="4.77,783,1336348800"; d="scan'208";a="14055102"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 08:56:21 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 17 Aug 2012 09:56:21 +0100
Message-ID: <1345193780.30865.109.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Mukesh Rathor <mukesh.rathor@oracle.com>
Date: Fri, 17 Aug 2012 09:56:20 +0100
In-Reply-To: <20120815180131.24aaa5ce@mantra.us.oracle.com>
References: <20120815180131.24aaa5ce@mantra.us.oracle.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [[RFC PATCH 2/8]: PVH: changes related to initial
 boot and irq rewiring
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-16 at 02:01 +0100, Mukesh Rathor wrote:
> ---
>  arch/x86/xen/enlighten.c |   67 ++++++++++++++++++++++++++++++++++++++-------
>  arch/x86/xen/irq.c       |   22 ++++++++++++++-
>  2 files changed, 77 insertions(+), 12 deletions(-)
> 
> diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
> index bf4bda6..3a58c51 100644
> --- a/arch/x86/xen/enlighten.c
> +++ b/arch/x86/xen/enlighten.c
> @@ -139,6 +139,8 @@ struct tls_descs {
>   */
>  static DEFINE_PER_CPU(struct tls_descs, shadow_tls_desc);
>  
> +static void __init xen_hvm_init_shared_info(void);
> +
>  static void clamp_max_cpus(void)
>  {
>  #ifdef CONFIG_SMP
> @@ -217,8 +219,8 @@ static void __init xen_banner(void)
>  	struct xen_extraversion extra;
>  	HYPERVISOR_xen_version(XENVER_extraversion, &extra);
>  
> -	printk(KERN_INFO "Booting paravirtualized kernel on %s\n",
> -	       pv_info.name);
> +	printk(KERN_INFO "Booting paravirtualized kernel %son %s\n",
> +		(xen_pvh_domain() ? "in HVM " : ""), pv_info.name);

Please can we avoid HVM  in the context of PVH here. "with PVH
extensions" or something.

>  	printk(KERN_INFO "Xen version: %d.%d%s%s\n",
>  	       version >> 16, version & 0xffff, extra.extraversion,
>  	       xen_feature(XENFEAT_mmu_pt_update_preserve_ad) ? " (preserve-AD)" : "");
[...]
> @@ -1034,6 +1039,10 @@ static int xen_write_msr_safe(unsigned int msr, unsigned low, unsigned high)
>  
>  void xen_setup_shared_info(void)
>  {
> +	/* do later in xen_pvh_guest_init() when extend_brk is properly setup*/
> +	if (xen_pvh_domain() && xen_initial_domain())
> +		return;

Could we push this setup later for a pv guest too and reduce the
divergence?

> +
>  	if (!xen_feature(XENFEAT_auto_translated_physmap)) {
>  		set_fixmap(FIX_PARAVIRT_BOOTMAP,
>  			   xen_start_info->shared_info);
[...]
> @@ -1274,6 +1287,10 @@ static const struct machine_ops xen_machine_ops __initconst = {
>   */
>  static void __init xen_setup_stackprotector(void)
>  {
> +	if (xen_pvh_domain()) {
> +		switch_to_new_gdt(0);

This seems to skip calling setup_stack_canary_segment too?

Assuming that's not deliberate I'd be tempted to just put "if
(xen_pv_domain())" around the updates of pv_cpus_ops and leave the main
flow of the code the same. If it was deliberate a comment might be in
order.

Unrelated to PVH, so I guess more a question for Konrad, but it seems
odd to me that "struct pv_cpu_ops xen_cpu_ops" starts of with
xen_write_gdt in it, gets overridden to xen_write_gdt_boot temporarily
here and then gets put back to xen_write_gdt immediately. Having the
struct start off with xen_write_gdt_boot in it would seem more natural
to me. Unless the _boot suffix is really supposed to mean
_while_setting_up_stack_protector?

(same for the load_gdt hook, of course).

> +		return;
> +	}
>  	pv_cpu_ops.write_gdt_entry = xen_write_gdt_entry_boot;
>  	pv_cpu_ops.load_gdt = xen_load_gdt_boot;
>  
> @@ -1284,6 +1301,25 @@ static void __init xen_setup_stackprotector(void)
>  	pv_cpu_ops.load_gdt = xen_load_gdt;
>  }
>  
> +static void __init xen_pvh_guest_init(void)
> +{
> +#ifndef __HAVE_ARCH_PTE_SPECIAL
> +	("__HAVE_ARCH_PTE_SPECIAL is required for PVH for now\n");
> +	#error("__HAVE_ARCH_PTE_SPECIAL is required for PVH\n");
> +#endif

Isn't this an unconditional feature of arch/x86?

And if not then this check belongs in Kconfig.

> +	/* PVH TBD/FIXME: for now just disable this. */
> +	have_vcpu_info_placement = 0;
> +
> +	if (xen_feature(XENFEAT_hvm_callback_vector))
> +		xen_have_vector_callback = 1;
> +
> +        /* for domU, the library sets start_info.shared_info to pfn, but for
> +         * dom0, it contains mfn. we need to get the pfn for shared_info. PVH
> +	 * uses HVM code in many places */
> +	if (xen_initial_domain())
> +		xen_hvm_init_shared_info();
> +}
> +
>  /* First C function to be called on Xen boot */
>  asmlinkage void __init xen_start_kernel(void)
>  {
> @@ -1294,15 +1330,23 @@ asmlinkage void __init xen_start_kernel(void)
>  	if (!xen_start_info)
>  		return;
>  
> +#ifdef CONFIG_X86_32
> +	xen_raw_printk("ERROR: 32bit PV guest can not run in HVM container\n");

"run with PVH extensions"

I think you also want a panic() here somewhere, instead of or as well as
the printk.

Plus I haven't got to it yet, but I guess the kernel's features are going
to declare something which the tools would use to error out when trying
to build such a thing? Not that this isn't a good sanity check even so.

[...]
> diff --git a/arch/x86/xen/irq.c b/arch/x86/xen/irq.c
> index 1573376..7c7dfd1 100644
> --- a/arch/x86/xen/irq.c
> +++ b/arch/x86/xen/irq.c
> @@ -100,6 +100,10 @@ PV_CALLEE_SAVE_REGS_THUNK(xen_irq_enable);
>  
>  static void xen_safe_halt(void)
>  {
> +	/* so event channel can be delivered to us, since in HVM container */
> +	if (xen_pvh_domain())
> +		local_irq_enable();
> +
>  	/* Blocking includes an implicit local_irq_enable(). */

So this comment isn't true for a PVH guest? Why not? Should it be?

I'm half wondering if we couldn't use native_safe_halt here; IIRC both
SVM and VT-x handle "sti; hlt" in a sensible way on the hypervisor side
by calling hvm_hlt.

I suppose that's more of a philosophical question about the nature of
PVH ;-)

>  	if (HYPERVISOR_sched_op(SCHEDOP_block, NULL) != 0)
>  		BUG();

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 09:17:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 09:17:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2Ig4-0001oI-Id; Fri, 17 Aug 2012 09:17:12 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T2Ig3-0001oA-4s
	for xen-devel@lists.xensource.com; Fri, 17 Aug 2012 09:17:11 +0000
Received: from [85.158.138.51:19200] by server-8.bemta-3.messagelabs.com id
	AC/34-29583-61C0E205; Fri, 17 Aug 2012 09:17:10 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-11.tower-174.messagelabs.com!1345195029!28769510!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2410 invoked from network); 17 Aug 2012 09:17:09 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-11.tower-174.messagelabs.com with SMTP;
	17 Aug 2012 09:17:09 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 17 Aug 2012 10:17:09 +0100
Message-Id: <502E285A0200007800095D04@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 17 Aug 2012 10:17:46 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Stefano Stabellini" <stefano.stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1208161523420.4850@kaball.uk.xensource.com>
	<1345128612-10323-4-git-send-email-stefano.stabellini@eu.citrix.com>
	<502D33B8020000780009596B@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208161756140.15568@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1208161756140.15568@kaball.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"Tim \(Xen.org\)" <tim@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH v3 4/6] xen: introduce XEN_GUEST_HANDLE_PARAM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 16.08.12 at 19:08, Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
> On Thu, 16 Aug 2012, Jan Beulich wrote:
>> >>> On 16.08.12 at 16:50, Stefano Stabellini <stefano.stabellini@eu.citrix.com>  wrote:
>> > +#define set_xen_guest_handle_raw(hnd, val)                  \
>> > +    do { if ( sizeof(hnd) == 8 ) *(uint64_t *)&(hnd) = 0;   \
>> 
>> If you made the "normal" handle a union too, you could avoid
>> the explicit cast (which e.g. gcc, when not passed
>> -fno-strict-aliasing, will choke on) and instead use (hnd).q (and
>> at once avoid the double initialization of the low half).
>> 
>> Also, the condition to do this could be "sizeof(hnd) > sizeof((hnd).p)",
>> usable at once for 64-bit avoiding a full double initialization there.
>> 
>> > +         (hnd).p = val;                                     \
>> 
>> In a public header you certainly want to avoid evaluating a
>> macro argument twice.
> 
> That's a really good suggestion.
> I am going to make both handles unions and therefore
> set_xen_guest_handle_raw becomes:
> 
> #define set_xen_guest_handle_raw(hnd, val)                  \
>     do { (hnd).q = 0;                                       \
>          (hnd).p = val;                                     \
>     } while ( 0 )

But that still doesn't eliminate the double evaluation of "hnd".
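A minimal sketch of how the double evaluation could be avoided, assuming a simplified stand-in handle type (the `demo_` names here are illustrative, not the real Xen public-header types; the `.p`/`.q` field names follow the quoted discussion). It relies on the GCC `__typeof__` extension, which does not evaluate its operand for non-VLA types; the real public headers cannot necessarily use that extension, which is part of why the problem is hard to fix there:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-in for a union-based guest handle, as Jan
 * suggests: the typed pointer overlaid with a 64-bit scalar so the
 * whole width can be zeroed in one store. */
typedef union {
    int *p;       /* the "normal" typed pointer */
    uint64_t q;   /* full-width view for zeroing */
} demo_handle_t;

/* Evaluates "hnd" only once: __typeof__ does not evaluate its operand
 * (for non-VLA types), so any side effects in "hnd" happen exactly
 * once, in the initializer of _h. GCC extension; hedged as above. */
#define set_demo_handle_raw(hnd, val)           \
    do {                                        \
        __typeof__(&(hnd)) _h = &(hnd);         \
        _h->q = 0;     /* clear all 64 bits */  \
        _h->p = (val);                          \
    } while (0)
```
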

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 09:21:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 09:21:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2Ik1-0001vb-7v; Fri, 17 Aug 2012 09:21:17 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T2Ijz-0001vS-SV
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 09:21:16 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1345195262!9566035!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15584 invoked from network); 17 Aug 2012 09:21:02 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-16.tower-27.messagelabs.com with SMTP;
	17 Aug 2012 09:21:02 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 17 Aug 2012 10:21:01 +0100
Message-Id: <502E29450200007800095D14@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 17 Aug 2012 10:21:41 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Santosh Jodh" <Santosh.Jodh@citrix.com>
References: <5357dccf4ba353d08e8e.1344974109@REDBLD-XS.ad.xensource.com>
	<502B7FC9020000780009500F@nat28.tlf.novell.com>
	<7914B38A4445B34AA16EB9F1352942F1012F0E399CF7@SJCPMAILBOX01.citrite.net>
In-Reply-To: <7914B38A4445B34AA16EB9F1352942F1012F0E399CF7@SJCPMAILBOX01.citrite.net>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "wei.wang2@amd.com" <wei.wang2@amd.com>, "Tim\(Xen.org\)" <tim@xen.org>,
	"xiantao.zhang@intel.com" <xiantao.zhang@intel.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Dump IOMMU p2m table
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 16.08.12 at 18:27, Santosh Jodh <Santosh.Jodh@citrix.com> wrote:
>> From: Jan Beulich [mailto:JBeulich@suse.com]
>> >>> On 14.08.12 at 21:55, Santosh Jodh <santosh.jodh@citrix.com> wrote:
>> > +                   (unsigned long)(address >> PAGE_SHIFT_4K),
>> > +                   (unsigned long)(pte->val >> PAGE_SHIFT_4K),
>> > +                   dma_pte_superpage(*pte)? 1 : 0,
>> > +                   dma_pte_read(*pte)? 1 : 0,
>> > +                   dma_pte_write(*pte)? 1 : 0);
>> 
>> Missing spaces. Even worse - given your definitions of these macros
>> there's no point in using the conditional operators here at all.
>> 
>> And, despite your claim in another response, this still isn't similar
>> to AMD's variant (which still doesn't print any of these three
>> attributes).
> 
> I meant structure in terms of recursion logic, level checks, etc. I will just 
> remove the extra prints to make it more similar. My original goal was to 
> print it in the same style as the existing MMU p2m table.

Actually, retaining the read/write information here (and adding it
to AMD's) would be quite useful imo. The superpage part, as said
earlier, needs to be done differently anyway.
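Jan's point about the conditional operators can be illustrated with stand-in predicate macros (the `demo_` definitions below are illustrative models, not the real Xen VT-d macros): if a predicate already evaluates to 0 or 1, appending `? 1 : 0` is a no-op.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-ins for the VT-d PTE predicates; the real
 * definitions live in Xen's VT-d code. These model macros that
 * already yield 0 or 1, because "!= 0" is a boolean expression. */
struct demo_dma_pte { uint64_t val; };
#define demo_dma_pte_read(p)  (((p).val & 1) != 0)
#define demo_dma_pte_write(p) (((p).val & 2) != 0)

/* With such definitions, "demo_dma_pte_read(pte) ? 1 : 0" is
 * exactly equivalent to plain "demo_dma_pte_read(pte)". */
```
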

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 09:25:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 09:25:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2Ini-00024o-SZ; Fri, 17 Aug 2012 09:25:06 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <xudong.hao@intel.com>) id 1T2Ing-00024b-IS
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 09:25:04 +0000
X-Env-Sender: xudong.hao@intel.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1345195497!2220563!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzA5MjIz\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8529 invoked from network); 17 Aug 2012 09:24:58 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-11.tower-27.messagelabs.com with SMTP;
	17 Aug 2012 09:24:58 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga101.jf.intel.com with ESMTP; 17 Aug 2012 02:24:57 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.77,783,1336374000"; d="scan'208";a="181879226"
Received: from fmsmsx108.amr.corp.intel.com ([10.19.9.228])
	by orsmga001.jf.intel.com with ESMTP; 17 Aug 2012 02:24:56 -0700
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	FMSMSX108.amr.corp.intel.com (10.19.9.228) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Fri, 17 Aug 2012 02:24:56 -0700
Received: from shsmsx102.ccr.corp.intel.com ([169.254.2.92]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.175]) with mapi id
	14.01.0355.002; Fri, 17 Aug 2012 17:24:54 +0800
From: "Hao, Xudong" <xudong.hao@intel.com>
To: Jan Beulich <JBeulich@suse.com>
Thread-Topic: [Xen-devel] [PATCH 1/3] xen/tools: Add 64 bits big bar support
Thread-Index: AQHNerKKEOT7HWjBmke6+J0K3zDOkJdaEb8AgAItqqD//4LEgIAB4cJw
Date: Fri, 17 Aug 2012 09:24:54 +0000
Message-ID: <403610A45A2B5242BD291EDAE8B37D300FEA18DC@SHSMSX102.ccr.corp.intel.com>
References: <1345013682-20618-1-git-send-email-xudong.hao@intel.com>
	<502B85020200007800095036@nat28.tlf.novell.com>
	<403610A45A2B5242BD291EDAE8B37D300FE8AB13@SHSMSX102.ccr.corp.intel.com>
	<502CEFC10200007800095727@nat28.tlf.novell.com>
In-Reply-To: <502CEFC10200007800095727@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "ian.jackson@eu.citrix.com" <ian.jackson@eu.citrix.com>, "Zhang,
	Xiantao" <xiantao.zhang@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 1/3] xen/tools: Add 64 bits big bar support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: Thursday, August 16, 2012 7:04 PM
> To: Hao, Xudong
> Cc: ian.jackson@eu.citrix.com; Zhang, Xiantao; xen-devel@lists.xen.org
> Subject: RE: [Xen-devel] [PATCH 1/3] xen/tools: Add 64 bits big bar support
> 
> >>> On 16.08.12 at 12:48, "Hao, Xudong" <xudong.hao@intel.com> wrote:
> >> From: Jan Beulich [mailto:JBeulich@suse.com]
> >> >>> On 15.08.12 at 08:54, Xudong Hao <xudong.hao@intel.com> wrote:
> >> > --- a/tools/firmware/hvmloader/config.h	Tue Jul 24 17:02:04 2012
> +0200
> >> > +++ b/tools/firmware/hvmloader/config.h	Thu Jul 26 15:40:01 2012
> +0800
> >> > @@ -53,6 +53,10 @@ extern struct bios_config ovmf_config;
> >> >  /* MMIO hole: Hardcoded defaults, which can be dynamically expanded.
> */
> >> >  #define PCI_MEM_START       0xf0000000
> >> >  #define PCI_MEM_END         0xfc000000
> >> > +#define PCI_HIGH_MEM_START  0xa000000000ULL
> >> > +#define PCI_HIGH_MEM_END    0xf000000000ULL
> >>
> >> With such hard coded values, this is hardly meant to be anything
> >> more than an RFC, is it? These values should not exist in the first
> >> place, and the variables below should be determined from VM
> >> characteristics (best would presumably be to allocate them top
> >> down from the end of physical address space, making sure you
> >> don't run into RAM).
> 
> No comment on this part?
> 

The MMIO high memory starts at 640G, which is already very high; I don't think we need to allocate MMIO top down from the top of the physical address space. Another thing you remind me of: maybe we can skip this high MMIO hole when setting up the p2m table in the HVM build path of libxc (setup_guest()), like the handling of MMIO below 4G.

> >> > @@ -133,23 +142,35 @@ void pci_setup(void)
> >> >          /* Map the I/O memory and port resources. */
> >> >          for ( bar = 0; bar < 7; bar++ )
> >> >          {
> >> > +            bar_sz_upper = 0;
> >> >              bar_reg = PCI_BASE_ADDRESS_0 + 4*bar;
> >> >              if ( bar == 6 )
> >> >                  bar_reg = PCI_ROM_ADDRESS;
> >> >
> >> >              bar_data = pci_readl(devfn, bar_reg);
> >> > +            is_64bar = !!((bar_data & (PCI_BASE_ADDRESS_SPACE |
> >> > +                         PCI_BASE_ADDRESS_MEM_TYPE_MASK))
> ==
> >> > +                         (PCI_BASE_ADDRESS_SPACE_MEMORY |
> >> > +                         PCI_BASE_ADDRESS_MEM_TYPE_64));
> >> >              pci_writel(devfn, bar_reg, ~0);
> >> >              bar_sz = pci_readl(devfn, bar_reg);
> >> >              pci_writel(devfn, bar_reg, bar_data);
> >> > +
> >> > +            if (is_64bar) {
> >> > +                bar_data_upper = pci_readl(devfn, bar_reg + 4);
> >> > +                pci_writel(devfn, bar_reg + 4, ~0);
> >> > +                bar_sz_upper = pci_readl(devfn, bar_reg + 4);
> >> > +                pci_writel(devfn, bar_reg + 4, bar_data_upper);
> >> > +                bar_sz = (bar_sz_upper << 32) | bar_sz;
> >> > +            }
> >> > +            bar_sz &= (((bar_data & PCI_BASE_ADDRESS_SPACE) ==
> >> > +                        PCI_BASE_ADDRESS_SPACE_MEMORY) ?
> >> > +                       0xfffffffffffffff0 :
> >>
> >> This should be a proper constant (or the masking could be
> >> done earlier, in which case you could continue to use the
> >> existing PCI_BASE_ADDRESS_MEM_MASK).
> >>
> >
> > So the PCI_BASE_ADDRESS_MEM_MASK can be defined as 'ULL'.
> 
> I'd recommend not touching existing variables. As said before,
> by re-ordering the code you can still use the constant as-is.
> 

Mmh, I misunderstood you in the previous mail.
So you mean we put the mask before "if (is_64bar)", and then leave the old mask code unchanged, right?
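One reading of Jan's suggestion, as a sketch: mask the low size dword first, so the existing 32-bit PCI_BASE_ADDRESS_MEM_MASK stays usable as-is, then splice in the upper dword for a 64-bit BAR. The `demo_bar_size` helper and its register values are made up for illustration (the real code reads them with pci_readl); the constants are the standard PCI ones.

```c
#include <assert.h>
#include <stdint.h>

/* Standard PCI BAR constants (values per the PCI spec). */
#define PCI_BASE_ADDRESS_SPACE          0x01
#define PCI_BASE_ADDRESS_SPACE_MEMORY   0x00
#define PCI_BASE_ADDRESS_MEM_MASK       (~0x0fU)   /* stays 32-bit */

/* Sketch of BAR sizing: bar_sz_lo/bar_sz_hi model the values read
 * back after writing ~0 to the BAR register(s). Masking the low
 * dword before combining means the 32-bit mask constant needs no
 * ULL-widening; isolating the lowest set bit then gives the size. */
static uint64_t demo_bar_size(uint32_t bar_data, uint32_t bar_sz_lo,
                              uint32_t bar_sz_hi, int is_64bar)
{
    uint64_t bar_sz = bar_sz_lo &
        (((bar_data & PCI_BASE_ADDRESS_SPACE) ==
          PCI_BASE_ADDRESS_SPACE_MEMORY) ?
         PCI_BASE_ADDRESS_MEM_MASK : ~0x03U);

    if ( is_64bar )
        bar_sz |= (uint64_t)bar_sz_hi << 32;

    return bar_sz & ~(bar_sz - 1);   /* lowest set bit == BAR size */
}
```
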

> >> > @@ -193,10 +212,14 @@ void pci_setup(void)
> >> >          pci_writew(devfn, PCI_COMMAND, cmd);
> >> >      }
> >> >
> >> > -    while ( (mmio_total > (pci_mem_end - pci_mem_start)) &&
> >> > -            ((pci_mem_start << 1) != 0) )
> >> > +    while ( mmio_total > (pci_mem_end - pci_mem_start) &&
> >> pci_mem_start )
> >>
> >> The old code here could remain in place if ...
> >>
> >> >          pci_mem_start <<= 1;
> >> >
> >> > +    if (!pci_mem_start) {
> >>
> >> .. the condition here would get changed to the one used in the
> >> first part of the while above.
> >>
> >> > +        bar64_relocate = 1;
> >> > +        pci_mem_start = PCI_MIN_MMIO_ADDR;
> >>
> >> Which would then also make this assignment (and the
> >> constant) unnecessary.
> >>
> >
> > Cool, I'll leave the old code, and just add
> >
> > if (pci_mem_start = PCI_MIN_MMIO_ADDR)
> >     bar64_relocate = 1;
> 
> No, that's not correct. It's the other half of the while()'s condition
> that you need to re-check here.
> 

Oh, so that avoids the PCI_MIN_MMIO_ADDR usage entirely; I'll use your suggestion.
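Jan's suggestion, as I read it, sketched below: keep the original hole-growing loop unchanged and, after it, re-test the first half of the while() condition to decide whether 64-bit BAR relocation is needed. The `demo_` wrapper and the size values are made up for the test; only the loop itself mirrors the quoted hvmloader code.

```c
#include <assert.h>
#include <stdint.h>

#define DEMO_PCI_MEM_START 0xf0000000U
#define DEMO_PCI_MEM_END   0xfc000000U

/* Grow the 32-bit MMIO hole downward by shifting its start left
 * (0xf0000000 -> 0xe0000000 -> ... in 32-bit arithmetic); once the
 * next shift would reach zero the hole cannot grow further, and any
 * leftover demand must be relocated above 4G. */
static int demo_need_bar64_relocate(uint64_t mmio_total,
                                    uint32_t *start_out)
{
    uint32_t pci_mem_start = DEMO_PCI_MEM_START;
    uint32_t pci_mem_end = DEMO_PCI_MEM_END;

    while ( (mmio_total > (pci_mem_end - pci_mem_start)) &&
            ((pci_mem_start << 1) != 0) )
        pci_mem_start <<= 1;

    *start_out = pci_mem_start;
    /* Re-check the first half of the while() condition, per Jan:
     * still not enough room below 4G means relocation is needed. */
    return mmio_total > (pci_mem_end - pci_mem_start);
}
```
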

-Thanks,
Xudong

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 09:25:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 09:25:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2Ini-00024o-SZ; Fri, 17 Aug 2012 09:25:06 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <xudong.hao@intel.com>) id 1T2Ing-00024b-IS
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 09:25:04 +0000
X-Env-Sender: xudong.hao@intel.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1345195497!2220563!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzA5MjIz\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8529 invoked from network); 17 Aug 2012 09:24:58 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-11.tower-27.messagelabs.com with SMTP;
	17 Aug 2012 09:24:58 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga101.jf.intel.com with ESMTP; 17 Aug 2012 02:24:57 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.77,783,1336374000"; d="scan'208";a="181879226"
Received: from fmsmsx108.amr.corp.intel.com ([10.19.9.228])
	by orsmga001.jf.intel.com with ESMTP; 17 Aug 2012 02:24:56 -0700
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	FMSMSX108.amr.corp.intel.com (10.19.9.228) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Fri, 17 Aug 2012 02:24:56 -0700
Received: from shsmsx102.ccr.corp.intel.com ([169.254.2.92]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.175]) with mapi id
	14.01.0355.002; Fri, 17 Aug 2012 17:24:54 +0800
From: "Hao, Xudong" <xudong.hao@intel.com>
To: Jan Beulich <JBeulich@suse.com>
Thread-Topic: [Xen-devel] [PATCH 1/3] xen/tools: Add 64 bits big bar support
Thread-Index: AQHNerKKEOT7HWjBmke6+J0K3zDOkJdaEb8AgAItqqD//4LEgIAB4cJw
Date: Fri, 17 Aug 2012 09:24:54 +0000
Message-ID: <403610A45A2B5242BD291EDAE8B37D300FEA18DC@SHSMSX102.ccr.corp.intel.com>
References: <1345013682-20618-1-git-send-email-xudong.hao@intel.com>
	<502B85020200007800095036@nat28.tlf.novell.com>
	<403610A45A2B5242BD291EDAE8B37D300FE8AB13@SHSMSX102.ccr.corp.intel.com>
	<502CEFC10200007800095727@nat28.tlf.novell.com>
In-Reply-To: <502CEFC10200007800095727@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "ian.jackson@eu.citrix.com" <ian.jackson@eu.citrix.com>, "Zhang,
	Xiantao" <xiantao.zhang@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 1/3] xen/tools: Add 64 bits big bar support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: Thursday, August 16, 2012 7:04 PM
> To: Hao, Xudong
> Cc: ian.jackson@eu.citrix.com; Zhang, Xiantao; xen-devel@lists.xen.org
> Subject: RE: [Xen-devel] [PATCH 1/3] xen/tools: Add 64 bits big bar support
> 
> >>> On 16.08.12 at 12:48, "Hao, Xudong" <xudong.hao@intel.com> wrote:
> >> From: Jan Beulich [mailto:JBeulich@suse.com]
> >> >>> On 15.08.12 at 08:54, Xudong Hao <xudong.hao@intel.com> wrote:
> >> > --- a/tools/firmware/hvmloader/config.h	Tue Jul 24 17:02:04 2012 +0200
> >> > +++ b/tools/firmware/hvmloader/config.h	Thu Jul 26 15:40:01 2012 +0800
> >> > @@ -53,6 +53,10 @@ extern struct bios_config ovmf_config;
> >> >  /* MMIO hole: Hardcoded defaults, which can be dynamically expanded. */
> >> >  #define PCI_MEM_START       0xf0000000
> >> >  #define PCI_MEM_END         0xfc000000
> >> > +#define PCI_HIGH_MEM_START  0xa000000000ULL
> >> > +#define PCI_HIGH_MEM_END    0xf000000000ULL
> >>
> >> With such hard coded values, this is hardly meant to be anything
> >> more than an RFC, is it? These values should not exist in the first
> >> place, and the variables below should be determined from VM
> >> characteristics (best would presumably be to allocate them top
> >> down from the end of physical address space, making sure you
> >> don't run into RAM).
> 
> No comment on this part?
> 

The high MMIO region starts at 640G, which is already very high; I don't think we need to allocate MMIO top down from the end of the physical address space. Another thing you remind me of: maybe we can skip this high MMIO hole when setting up the p2m table during HVM build in libxc (setup_guest()), just as is done for the MMIO hole below 4G.

> >> > @@ -133,23 +142,35 @@ void pci_setup(void)
> >> >          /* Map the I/O memory and port resources. */
> >> >          for ( bar = 0; bar < 7; bar++ )
> >> >          {
> >> > +            bar_sz_upper = 0;
> >> >              bar_reg = PCI_BASE_ADDRESS_0 + 4*bar;
> >> >              if ( bar == 6 )
> >> >                  bar_reg = PCI_ROM_ADDRESS;
> >> >
> >> >              bar_data = pci_readl(devfn, bar_reg);
> >> > +            is_64bar = !!((bar_data & (PCI_BASE_ADDRESS_SPACE |
> >> > +                         PCI_BASE_ADDRESS_MEM_TYPE_MASK)) ==
> >> > +                         (PCI_BASE_ADDRESS_SPACE_MEMORY |
> >> > +                         PCI_BASE_ADDRESS_MEM_TYPE_64));
> >> >              pci_writel(devfn, bar_reg, ~0);
> >> >              bar_sz = pci_readl(devfn, bar_reg);
> >> >              pci_writel(devfn, bar_reg, bar_data);
> >> > +
> >> > +            if (is_64bar) {
> >> > +                bar_data_upper = pci_readl(devfn, bar_reg + 4);
> >> > +                pci_writel(devfn, bar_reg + 4, ~0);
> >> > +                bar_sz_upper = pci_readl(devfn, bar_reg + 4);
> >> > +                pci_writel(devfn, bar_reg + 4, bar_data_upper);
> >> > +                bar_sz = (bar_sz_upper << 32) | bar_sz;
> >> > +            }
> >> > +            bar_sz &= (((bar_data & PCI_BASE_ADDRESS_SPACE) ==
> >> > +                        PCI_BASE_ADDRESS_SPACE_MEMORY) ?
> >> > +                       0xfffffffffffffff0 :
> >>
> >> This should be a proper constant (or the masking could be
> >> done earlier, in which case you could continue to use the
> >> existing PCI_BASE_ADDRESS_MEM_MASK).
> >>
> >
> > So the PCI_BASE_ADDRESS_MEM_MASK can be defined as 'ULL'.
> 
> I'd recommend not touching existing variables. As said before,
> by re-ordering the code you can still use the constant as-is.
> 

Mmh, I misunderstood you in the previous mail. 
So you mean we apply the mask before "if (is_64bar)", and then leave the existing mask code unchanged, right?

> >> > @@ -193,10 +212,14 @@ void pci_setup(void)
> >> >          pci_writew(devfn, PCI_COMMAND, cmd);
> >> >      }
> >> >
> >> > -    while ( (mmio_total > (pci_mem_end - pci_mem_start)) &&
> >> > -            ((pci_mem_start << 1) != 0) )
> >> > +    while ( mmio_total > (pci_mem_end - pci_mem_start) && pci_mem_start )
> >>
> >> The old code here could remain in place if ...
> >>
> >> >          pci_mem_start <<= 1;
> >> >
> >> > +    if (!pci_mem_start) {
> >>
> >> .. the condition here would get changed to the one used in the
> >> first part of the while above.
> >>
> >> > +        bar64_relocate = 1;
> >> > +        pci_mem_start = PCI_MIN_MMIO_ADDR;
> >>
> >> Which would then also make this assignment (and the
> >> constant) unnecessary.
> >>
> >
> > Cool, I'll leave the old code, and just add
> >
> > if (pci_mem_start == PCI_MIN_MMIO_ADDR)
> >     bar64_relocate = 1;
> 
> No, that's not correct. It's the other half of the while()'s condition
> that you need to re-check here.
> 

Oh, so that avoids the PCI_MIN_MMIO_ADDR usage altogether; I'll take your suggestion.

-Thanks,
Xudong

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 09:26:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 09:26:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2Iou-0002CL-Fx; Fri, 17 Aug 2012 09:26:20 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T2Ios-0002C4-OC
	for Xen-devel@lists.xensource.com; Fri, 17 Aug 2012 09:26:19 +0000
Received: from [85.158.138.51:35965] by server-4.bemta-3.messagelabs.com id
	2D/B6-04276-93E0E205; Fri, 17 Aug 2012 09:26:17 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-174.messagelabs.com!1345195576!28770705!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk0MzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27974 invoked from network); 17 Aug 2012 09:26:17 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-11.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 09:26:17 -0000
X-IronPort-AV: E=Sophos;i="4.77,783,1336348800"; d="scan'208";a="14055734"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 09:26:16 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 17 Aug 2012 10:26:16 +0100
Message-ID: <1345195575.30865.128.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Mukesh Rathor <mukesh.rathor@oracle.com>
Date: Fri, 17 Aug 2012 10:26:15 +0100
In-Reply-To: <20120815180250.1e068d10@mantra.us.oracle.com>
References: <20120815180250.1e068d10@mantra.us.oracle.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [RFC PATCH 3/8]: PVH: memory manager and paging
 related changes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-16 at 02:02 +0100, Mukesh Rathor wrote:
> 
> ---
>  arch/x86/xen/mmu.c              |  179 ++++++++++++++++++++++++++++++++++++---
>  arch/x86/xen/mmu.h              |    2 +
>  include/xen/interface/memory.h  |   27 ++++++-
>  include/xen/interface/physdev.h |   10 ++
>  include/xen/xen-ops.h           |    7 ++
>  5 files changed, 211 insertions(+), 14 deletions(-)
> 
> diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
> index b65a761..44a6477 100644
> --- a/arch/x86/xen/mmu.c
> +++ b/arch/x86/xen/mmu.c
> @@ -330,6 +330,38 @@ static void xen_set_pte(pte_t *ptep, pte_t pteval)
>         __xen_set_pte(ptep, pteval);
>  }
> 
> +/* This for PV guest in hvm container */

Shall I stop mentioning uses of "HVM" in the context of PVH? ;-)

> +void xen_set_clr_mmio_pvh_pte(unsigned long pfn, unsigned long mfn,
> +                             int nr_mfns, int add_mapping)
> +{
> +       int rc;
> +       struct physdev_map_iomem iomem;
> +
> +       iomem.first_gfn = pfn;
> +       iomem.first_mfn = mfn;
> +       iomem.nr_mfns = nr_mfns;
> +       iomem.add_mapping = add_mapping;
> +
> +       rc = HYPERVISOR_physdev_op(PHYSDEVOP_pvh_map_iomem, &iomem);
> +       BUG_ON(rc);

It's purely a matter of taste but we would tend to write
	if (HYPERVISOR_foo(...))
		BUG();

> +}
> +
> +/* This for PV guest in hvm container.
> + * We need this because during boot early_ioremap path eventually calls
> + * set_pte that maps io space. Also, ACPI pages are not mapped into to the
> + * EPT during dom0 creation. The pages are mapped initially here from
> + * kernel_physical_mapping_init() then later the memtype is changed.  */
> +static void xen_dom0pvh_set_pte(pte_t *ptep, pte_t pteval)
> +{
> +       native_set_pte(ptep, pteval);
> +}
> +
> +static void xen_dom0pvh_set_pte_at(struct mm_struct *mm, unsigned long addr,
> +                                  pte_t *ptep, pte_t pteval)
> +{
> +       native_set_pte(ptep, pteval);
> +}

I think Stefano already asked why not just use native_* as the hook.

> +
>  static void xen_set_pte_at(struct mm_struct *mm, unsigned long addr,
>                     pte_t *ptep, pte_t pteval)
>  {
> @@ -1197,6 +1229,10 @@ static void xen_post_allocator_init(void);
>  static void __init xen_pagetable_setup_done(pgd_t *base)
>  {
>         xen_setup_shared_info();
> +
> +       if (xen_pvh_domain())
> +               return;
> +
>         xen_post_allocator_init();
>  }
> 
> @@ -1652,6 +1688,10 @@ static void set_page_prot(void *addr, pgprot_t prot)
>         unsigned long pfn = __pa(addr) >> PAGE_SHIFT;
>         pte_t pte = pfn_pte(pfn, prot);
> 
> +       /* for PVH, page tables are native. */
> +       if (xen_pvh_domain())
> +               return;
> +
>         if (HYPERVISOR_update_va_mapping((unsigned long)addr, pte, 0))
>                 BUG();
>  }
> @@ -1745,6 +1785,7 @@ static void convert_pfn_mfn(void *v)
>   * but that's enough to get __va working.  We need to fill in the rest
>   * of the physical mapping once some sort of allocator has been set
>   * up.
> + * NOTE: for PVH, the page tables are native with HAP required.

OOI does this mean shadow doesn't work?

> @@ -1761,10 +1802,12 @@ pgd_t * __init xen_setup_kernel_pagetable(pgd_t *pgd,
>         /* Zap identity mapping */
>         init_level4_pgt[0] = __pgd(0);
> 
> -       /* Pre-constructed entries are in pfn, so convert to mfn */
> -       convert_pfn_mfn(init_level4_pgt);
> -       convert_pfn_mfn(level3_ident_pgt);
> -       convert_pfn_mfn(level3_kernel_pgt);
> +       if (!xen_pvh_domain()) {

Does this test really mean if(XENFEAT_auto_translated_physmap)?

And maybe it fits more logically in convert_pfn_mfn itself?

> +               /* Pre-constructed entries are in pfn, so convert to mfn */
> +               convert_pfn_mfn(init_level4_pgt);
> +               convert_pfn_mfn(level3_ident_pgt);
> +               convert_pfn_mfn(level3_kernel_pgt);
> +       }
> 
>         l3 = m2v(pgd[pgd_index(__START_KERNEL_map)].pgd);
>         l2 = m2v(l3[pud_index(__START_KERNEL_map)].pud);
> @@ -1787,12 +1830,14 @@ pgd_t * __init xen_setup_kernel_pagetable(pgd_t *pgd,
>         set_page_prot(level2_kernel_pgt, PAGE_KERNEL_RO);
>         set_page_prot(level2_fixmap_pgt, PAGE_KERNEL_RO);
> 
> -       /* Pin down new L4 */
> -       pin_pagetable_pfn(MMUEXT_PIN_L4_TABLE,
> -                         PFN_DOWN(__pa_symbol(init_level4_pgt)));
> +       if (!xen_pvh_domain()) {

XENFEAT_writeable_page_tables?

And inside pin_pagetable_pfn perhaps?


> +               /* Pin down new L4 */
> +               pin_pagetable_pfn(MMUEXT_PIN_L4_TABLE,
> +                               PFN_DOWN(__pa_symbol(init_level4_pgt)));
> 
> -       /* Unpin Xen-provided one */
> -       pin_pagetable_pfn(MMUEXT_UNPIN_TABLE, PFN_DOWN(__pa(pgd)));
> +               /* Unpin Xen-provided one */
> +               pin_pagetable_pfn(MMUEXT_UNPIN_TABLE, PFN_DOWN(__pa(pgd)));
> +       }
> 
>         /* Switch over */
>         pgd = init_level4_pgt;
> @@ -1802,9 +1847,13 @@ pgd_t * __init xen_setup_kernel_pagetable(pgd_t *pgd,
>          * structure to attach it to, so make sure we just set kernel
>          * pgd.
>          */
> -       xen_mc_batch();
> -       __xen_write_cr3(true, __pa(pgd));
> -       xen_mc_issue(PARAVIRT_LAZY_CPU);

Not your issue but I wonder why we do this weird looking singleton
batch?

> +       if (xen_pvh_domain()) {
> +               native_write_cr3(__pa(pgd));
> +       } else {
> +               xen_mc_batch();
> +               __xen_write_cr3(true, __pa(pgd));
> +               xen_mc_issue(PARAVIRT_LAZY_CPU);
> +       }
> 
>         memblock_reserve(__pa(xen_start_info->pt_base),
>                          xen_start_info->nr_pt_frames * PAGE_SIZE);
> @@ -2067,9 +2116,21 @@ static const struct pv_mmu_ops xen_mmu_ops __initconst = {
> 
>  void __init xen_init_mmu_ops(void)
>  {
> +       x86_init.paging.pagetable_setup_done = xen_pagetable_setup_done;

I'd be tempted to leave this below, where it fits in alongside the
others and duplicate it in the if as necessary.

Given that the two cases are basically non-overlapping perhaps you want
xen_init_(pv,pvh)_mmu_ops as separate hooks?

Looking back the same might apply to the pagetable_setup_done hook?

> +
> +       if (xen_pvh_domain()) {
> +               pv_mmu_ops.flush_tlb_others = xen_flush_tlb_others;
> +
> +               /* set_pte* for PCI devices to map iomem. */
> +               if (xen_initial_domain()) {
> +                       pv_mmu_ops.set_pte = xen_dom0pvh_set_pte;
> +                       pv_mmu_ops.set_pte_at = xen_dom0pvh_set_pte_at;

Is this wrong for domU or have you just not tried/implemented
passthrough support yet?

> +               }
> +               return;
> +       }
> +
>         x86_init.mapping.pagetable_reserve = xen_mapping_pagetable_reserve;
>         x86_init.paging.pagetable_setup_start = xen_pagetable_setup_start;
> -       x86_init.paging.pagetable_setup_done = xen_pagetable_setup_done;
>         pv_mmu_ops = xen_mmu_ops;
> 
>         memset(dummy_mapping, 0xff, PAGE_SIZE);
> @@ -2305,6 +2366,93 @@ void __init xen_hvm_init_mmu_ops(void)
>  }
>  #endif
> 
> +/* Map foreign gmfn, fgmfn, to local pfn, lpfn. This for the user space
> + * creating new guest on PVH dom0 and needs to map domU pages. Called from
> + * exported function, so no need to export this.
> + */
> +static int pvh_add_to_xen_p2m(unsigned long lpfn, unsigned long fgmfn,
> +                             unsigned int domid)
> +{
> +       int rc;
> +       struct xen_add_to_physmap pmb = {.foreign_domid = domid};

I'm not sure but I think CodingStyle would want spaces inside the {}s.

What is the b in pmb? Phys Map B??? (every other user of this interface
says xatp, FWIW)

> +
> +       pmb.gpfn = lpfn;
> +       pmb.idx = fgmfn;
> +       pmb.space = XENMAPSPACE_gmfn_foreign;
> +       rc = HYPERVISOR_memory_op(XENMEM_add_to_physmap, &pmb);
> +       if (rc) {
> +               pr_warn("Failed to map pfn to mfn rc:%d pfn:%lx mfn:%lx\n",
> +                       rc, lpfn, fgmfn);
> +               return 1;
> +       }
> +       return 0;
> +}
> +
> +/* Unmap an entry from xen p2m table */

Actually it is @count entries, starting at spfn. That seems obvious
enough from the prototype not to be worth saying, though.

> +int pvh_rem_xen_p2m(unsigned long spfn, int count)
> +{
> +       struct xen_remove_from_physmap xrp;
> +       int i, rc;
> +
> +       for (i=0; i < count; i++) {
> +               xrp.domid = DOMID_SELF;
> +               xrp.gpfn = spfn+i;
> +               rc = HYPERVISOR_memory_op(XENMEM_remove_from_physmap, &xrp);
> +               if (rc) {
> +                       pr_warn("Failed to unmap pfn:%lx rc:%d done:%d\n",
> +                               spfn+i, rc, i);
> +                       return 1;
> +               }
> +       }
> +       return 0;
> +}
> +EXPORT_SYMBOL_GPL(pvh_rem_xen_p2m);

What is the external/modular user of this?

I guess this is why you noted that pvh_add_to_xen_p2m didn't need
exporting, which struck me as unnecessary at the time.

pvh_add_to_xen_p2m seems to have an exported wrapper, why not rem too?
e.g. xen_unmap_domain_mfn_range?

> +
> +struct pvh_remap_data {
> +       unsigned long fgmfn;            /* foreign domain's gmfn */
> +       pgprot_t prot;
> +       domid_t  domid;
> +       struct vm_area_struct *vma;
> +};
> +
> +static int pvh_map_pte_fn(pte_t *ptep, pgtable_t token, unsigned long addr,
> +                       void *data)
> +{
> +       struct pvh_remap_data *pvhp = data;
> +       struct xen_pvh_sav_pfn_info *savp = pvhp->vma->vm_private_data;
> +       unsigned long pfn = page_to_pfn(savp->sp_paga[savp->sp_next_todo++]);
> +       pte_t pteval = pte_mkspecial(pfn_pte(pfn, pvhp->prot));
> +
> +       native_set_pte(ptep, pteval);
> +       if (pvh_add_to_xen_p2m(pfn, pvhp->fgmfn, pvhp->domid))
> +               return -EFAULT;

Is there a little window here where we've setup the page table entry but
the p2m entry behind it is uninitialised?

I suppose even if an interrupt occurred we can rely on the virtual
address not having been "exposed" anywhere yet and therefore there is no
chance of anyone dereferencing it. But is there any harm in just
flipping the ordering here?

Why EFAULT? Seems a bit random, I think HYPERVISOR_memory_op returns a
-ve errno which you could propagate.

> diff --git a/include/xen/interface/memory.h b/include/xen/interface/memory.h
> index eac3ce1..1b213b1 100644
> --- a/include/xen/interface/memory.h
> +++ b/include/xen/interface/memory.h
> @@ -163,10 +163,19 @@ struct xen_add_to_physmap {
>      /* Which domain to change the mapping for. */
>      domid_t domid;
> 
> +    /* Number of pages to go through for gmfn_range */
> +    uint16_t    size;
> +
>      /* Source mapping space. */
>  #define XENMAPSPACE_shared_info 0 /* shared info page */
>  #define XENMAPSPACE_grant_table 1 /* grant table page */
> -    unsigned int space;
> +#define XENMAPSPACE_gmfn        2 /* GMFN */
> +#define XENMAPSPACE_gmfn_range  3 /* GMFN range */
> +#define XENMAPSPACE_gmfn_foreign 4 /* GMFN from another guest */
> +    uint16_t space;
> +    domid_t foreign_domid;         /* IFF XENMAPSPACE_gmfn_foreign */
> +
> +#define XENMAPIDX_grant_table_status 0x80000000

Even if we don't get the full PVH implementation in the hypervisor right
away I think we should at least update the canonical interface
definition in the Xen tree (xen/include/public) first. (same goes for
the other interface definitions here).

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 09:26:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 09:26:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2Iou-0002CL-Fx; Fri, 17 Aug 2012 09:26:20 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T2Ios-0002C4-OC
	for Xen-devel@lists.xensource.com; Fri, 17 Aug 2012 09:26:19 +0000
Received: from [85.158.138.51:35965] by server-4.bemta-3.messagelabs.com id
	2D/B6-04276-93E0E205; Fri, 17 Aug 2012 09:26:17 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-174.messagelabs.com!1345195576!28770705!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk0MzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27974 invoked from network); 17 Aug 2012 09:26:17 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-11.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 09:26:17 -0000
X-IronPort-AV: E=Sophos;i="4.77,783,1336348800"; d="scan'208";a="14055734"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 09:26:16 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 17 Aug 2012 10:26:16 +0100
Message-ID: <1345195575.30865.128.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Mukesh Rathor <mukesh.rathor@oracle.com>
Date: Fri, 17 Aug 2012 10:26:15 +0100
In-Reply-To: <20120815180250.1e068d10@mantra.us.oracle.com>
References: <20120815180250.1e068d10@mantra.us.oracle.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [RFC PATCH 3/8]: PVH: memory manager and paging
 related changes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-16 at 02:02 +0100, Mukesh Rathor wrote:
> 
> ---
>  arch/x86/xen/mmu.c              |  179 ++++++++++++++++++++++++++++++++++++---
>  arch/x86/xen/mmu.h              |    2 +
>  include/xen/interface/memory.h  |   27 ++++++-
>  include/xen/interface/physdev.h |   10 ++
>  include/xen/xen-ops.h           |    7 ++
>  5 files changed, 211 insertions(+), 14 deletions(-)
> 
> diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
> index b65a761..44a6477 100644
> --- a/arch/x86/xen/mmu.c
> +++ b/arch/x86/xen/mmu.c
> @@ -330,6 +330,38 @@ static void xen_set_pte(pte_t *ptep, pte_t pteval)
>         __xen_set_pte(ptep, pteval);
>  }
> 
> +/* This for PV guest in hvm container */

Shall I stop mentioning uses of "HVM" in the context of PVH? ;-)

> +void xen_set_clr_mmio_pvh_pte(unsigned long pfn, unsigned long mfn,
> +                             int nr_mfns, int add_mapping)
> +{
> +       int rc;
> +       struct physdev_map_iomem iomem;
> +
> +       iomem.first_gfn = pfn;
> +       iomem.first_mfn = mfn;
> +       iomem.nr_mfns = nr_mfns;
> +       iomem.add_mapping = add_mapping;
> +
> +       rc = HYPERVISOR_physdev_op(PHYSDEVOP_pvh_map_iomem, &iomem);
> +       BUG_ON(rc);

It's purely a matter of taste but we would tend to write
	if (HYPERVISOR_foO(...)
		BUG();

> +}
> +
> +/* This for PV guest in hvm container.
> + * We need this because during boot early_ioremap path eventually calls
> + * set_pte that maps io space. Also, ACPI pages are not mapped into to the
> + * EPT during dom0 creation. The pages are mapped initially here from
> + * kernel_physical_mapping_init() then later the memtype is changed.  */
> +static void xen_dom0pvh_set_pte(pte_t *ptep, pte_t pteval)
> +{
> +       native_set_pte(ptep, pteval);
> +}
> +
> +static void xen_dom0pvh_set_pte_at(struct mm_struct *mm, unsigned long addr,
> +                                  pte_t *ptep, pte_t pteval)
> +{
> +       native_set_pte(ptep, pteval);
> +}

I think Stefano alreeady asked why not just use native_* as the hook.

> +
>  static void xen_set_pte_at(struct mm_struct *mm, unsigned long addr,
>                     pte_t *ptep, pte_t pteval)
>  {
> @@ -1197,6 +1229,10 @@ static void xen_post_allocator_init(void);
>  static void __init xen_pagetable_setup_done(pgd_t *base)
>  {
>         xen_setup_shared_info();
> +
> +       if (xen_pvh_domain())
> +               return;
> +
>         xen_post_allocator_init();
>  }
> 
> @@ -1652,6 +1688,10 @@ static void set_page_prot(void *addr, pgprot_t prot)
>         unsigned long pfn = __pa(addr) >> PAGE_SHIFT;
>         pte_t pte = pfn_pte(pfn, prot);
> 
> +       /* for PVH, page tables are native. */
> +       if (xen_pvh_domain())
> +               return;
> +
>         if (HYPERVISOR_update_va_mapping((unsigned long)addr, pte, 0))
>                 BUG();
>  }
> @@ -1745,6 +1785,7 @@ static void convert_pfn_mfn(void *v)
>   * but that's enough to get __va working.  We need to fill in the rest
>   * of the physical mapping once some sort of allocator has been set
>   * up.
> + * NOTE: for PVH, the page tables are native with HAP required.

OOI does this mean shadow doesn't work?

> @@ -1761,10 +1802,12 @@ pgd_t * __init xen_setup_kernel_pagetable(pgd_t *pgd,
>         /* Zap identity mapping */
>         init_level4_pgt[0] = __pgd(0);
> 
> -       /* Pre-constructed entries are in pfn, so convert to mfn */
> -       convert_pfn_mfn(init_level4_pgt);
> -       convert_pfn_mfn(level3_ident_pgt);
> -       convert_pfn_mfn(level3_kernel_pgt);
> +       if (!xen_pvh_domain()) {

Does this test really mean if(XENFEAT_auto_translated_physmap)?

And maybe it fits mnore logically in convert_pfn_mfn itself?

> +               /* Pre-constructed entries are in pfn, so convert to mfn */
> +               convert_pfn_mfn(init_level4_pgt);
> +               convert_pfn_mfn(level3_ident_pgt);
> +               convert_pfn_mfn(level3_kernel_pgt);
> +       }
> 
>         l3 = m2v(pgd[pgd_index(__START_KERNEL_map)].pgd);
>         l2 = m2v(l3[pud_index(__START_KERNEL_map)].pud);
> @@ -1787,12 +1830,14 @@ pgd_t * __init xen_setup_kernel_pagetable(pgd_t *pgd,
>         set_page_prot(level2_kernel_pgt, PAGE_KERNEL_RO);
>         set_page_prot(level2_fixmap_pgt, PAGE_KERNEL_RO);
> 
> -       /* Pin down new L4 */
> -       pin_pagetable_pfn(MMUEXT_PIN_L4_TABLE,
> -                         PFN_DOWN(__pa_symbol(init_level4_pgt)));
> +       if (!xen_pvh_domain()) {

XENFEAT_writeable_page_tables?

And inside pin_pagetable_pfn perhaps?


> +               /* Pin down new L4 */
> +               pin_pagetable_pfn(MMUEXT_PIN_L4_TABLE,
> +                               PFN_DOWN(__pa_symbol(init_level4_pgt)));
> 
> -       /* Unpin Xen-provided one */
> -       pin_pagetable_pfn(MMUEXT_UNPIN_TABLE, PFN_DOWN(__pa(pgd)));
> +               /* Unpin Xen-provided one */
> +               pin_pagetable_pfn(MMUEXT_UNPIN_TABLE, PFN_DOWN(__pa(pgd)));
> +       }
> 
>         /* Switch over */
>         pgd = init_level4_pgt;
> @@ -1802,9 +1847,13 @@ pgd_t * __init xen_setup_kernel_pagetable(pgd_t *pgd,
>          * structure to attach it to, so make sure we just set kernel
>          * pgd.
>          */
> -       xen_mc_batch();
> -       __xen_write_cr3(true, __pa(pgd));
> -       xen_mc_issue(PARAVIRT_LAZY_CPU);

Not your issue but I wonder why we do this weird looking singleton
batch?

> +       if (xen_pvh_domain()) {
> +               native_write_cr3(__pa(pgd));
> +       } else {
> +               xen_mc_batch();
> +               __xen_write_cr3(true, __pa(pgd));
> +               xen_mc_issue(PARAVIRT_LAZY_CPU);
> +       }
> 
>         memblock_reserve(__pa(xen_start_info->pt_base),
>                          xen_start_info->nr_pt_frames * PAGE_SIZE);
> @@ -2067,9 +2116,21 @@ static const struct pv_mmu_ops xen_mmu_ops __initconst = {
> 
>  void __init xen_init_mmu_ops(void)
>  {
> +       x86_init.paging.pagetable_setup_done = xen_pagetable_setup_done;

I'd be tempted to leave this below, where it fits in alongside the
others and duplicate it in the if as necessary.

Given that the two cases are basically non-overlapping perhaps you want
xen_init_(pv,pvh)_mmu_ops as separate hooks?

Looking back the same might apply to the pagetable_setup_done hook?

> +
> +       if (xen_pvh_domain()) {
> +               pv_mmu_ops.flush_tlb_others = xen_flush_tlb_others;
> +
> +               /* set_pte* for PCI devices to map iomem. */
> +               if (xen_initial_domain()) {
> +                       pv_mmu_ops.set_pte = xen_dom0pvh_set_pte;
> +                       pv_mmu_ops.set_pte_at = xen_dom0pvh_set_pte_at;

Is this wrong for domU or have you just not tried/implemented
passthrough support yet?

> +               }
> +               return;
> +       }
> +
>         x86_init.mapping.pagetable_reserve = xen_mapping_pagetable_reserve;
>         x86_init.paging.pagetable_setup_start = xen_pagetable_setup_start;
> -       x86_init.paging.pagetable_setup_done = xen_pagetable_setup_done;
>         pv_mmu_ops = xen_mmu_ops;
> 
>         memset(dummy_mapping, 0xff, PAGE_SIZE);
> @@ -2305,6 +2366,93 @@ void __init xen_hvm_init_mmu_ops(void)
>  }
>  #endif
> 
> +/* Map foreign gmfn, fgmfn, to local pfn, lpfn. This for the user space
> + * creating new guest on PVH dom0 and needs to map domU pages. Called from
> + * exported function, so no need to export this.
> + */
> +static int pvh_add_to_xen_p2m(unsigned long lpfn, unsigned long fgmfn,
> +                             unsigned int domid)
> +{
> +       int rc;
> +       struct xen_add_to_physmap pmb = {.foreign_domid = domid};

I'm not sure but I think CodingStyle would want spaces inside the {}s.

What is the b in pmb? Phys Map B??? (every other user of this interface
says xatp, FWIW)

> +
> +       pmb.gpfn = lpfn;
> +       pmb.idx = fgmfn;
> +       pmb.space = XENMAPSPACE_gmfn_foreign;
> +       rc = HYPERVISOR_memory_op(XENMEM_add_to_physmap, &pmb);
> +       if (rc) {
> +               pr_warn("Failed to map pfn to mfn rc:%d pfn:%lx mfn:%lx\n",
> +                       rc, lpfn, fgmfn);
> +               return 1;
> +       }
> +       return 0;
> +}
> +
> +/* Unmap an entry from xen p2m table */

Actually it is @count entries, starting a spfn. That seems obvious
enough from the prototype not to be worth saying though.

> +int pvh_rem_xen_p2m(unsigned long spfn, int count)
> +{
> +       struct xen_remove_from_physmap xrp;
> +       int i, rc;
> +
> +       for (i=0; i < count; i++) {
> +               xrp.domid = DOMID_SELF;
> +               xrp.gpfn = spfn+i;
> +               rc = HYPERVISOR_memory_op(XENMEM_remove_from_physmap, &xrp);
> +               if (rc) {
> +                       pr_warn("Failed to unmap pfn:%lx rc:%d done:%d\n",
> +                               spfn+i, rc, i);
> +                       return 1;
> +               }
> +       }
> +       return 0;
> +}
> +EXPORT_SYMBOL_GPL(pvh_rem_xen_p2m);

What is the external/modular user of this?

I guess this is why you noted that pvh_add_to_xen_p2m didn't need
exporting, which struck me as unnecessary at the time.

pvh_add_to_xen_p2m seems to have an exported a wrapper, why not rem too?
e.g. xen_unmap_domain_mfn_range?

> +
> +struct pvh_remap_data {
> +       unsigned long fgmfn;            /* foreign domain's gmfn */
> +       pgprot_t prot;
> +       domid_t  domid;
> +       struct vm_area_struct *vma;
> +};
> +
> +static int pvh_map_pte_fn(pte_t *ptep, pgtable_t token, unsigned long addr,
> +                       void *data)
> +{
> +       struct pvh_remap_data *pvhp = data;
> +       struct xen_pvh_sav_pfn_info *savp = pvhp->vma->vm_private_data;
> +       unsigned long pfn = page_to_pfn(savp->sp_paga[savp->sp_next_todo++]);
> +       pte_t pteval = pte_mkspecial(pfn_pte(pfn, pvhp->prot));
> +
> +       native_set_pte(ptep, pteval);
> +       if (pvh_add_to_xen_p2m(pfn, pvhp->fgmfn, pvhp->domid))
> +               return -EFAULT;

Is there a little window here where we've set up the page table entry
but the p2m entry behind it is uninitialised?

I suppose even if an interrupt occurred we can rely on the virtual
address not having been "exposed" anywhere yet and therefore there is no
chance of anyone dereferencing it. But is there any harm in just
flipping the ordering here?

Why EFAULT? It seems a bit random; I think HYPERVISOR_memory_op returns
a -ve errno which you could propagate.
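To make both points concrete, here is a minimal standalone sketch of the flipped ordering and the error propagation. The p2m table and page-table slots are faked, and fake_add_to_physmap merely stands in for the XENMEM_add_to_physmap hypercall; none of these names are from the patch.

```c
#include <errno.h>

/* Toy stand-ins, just to model the suggested control flow:
 * a fake p2m table and fake page-table slots. */
static int p2m[16];
static unsigned long pte_store[16];

/* Stand-in for HYPERVISOR_memory_op(XENMEM_add_to_physmap, ...),
 * returning a -ve errno on failure like the real hypercall. */
static int fake_add_to_physmap(unsigned long pfn, unsigned long fgmfn)
{
    if (pfn >= 16)
        return -EINVAL;
    p2m[pfn] = (int)fgmfn;
    return 0;
}

/* Suggested shape: update the p2m first, then install the PTE, and
 * propagate the hypercall's errno instead of flattening it to -EFAULT. */
int map_pte_sketch(unsigned long pfn, unsigned long fgmfn,
                   unsigned long pteval)
{
    int rc = fake_add_to_physmap(pfn, fgmfn);
    if (rc)
        return rc;              /* no PTE ever points at an unmapped pfn */
    pte_store[pfn] = pteval;
    return 0;
}
```

With this ordering a failure leaves the page tables untouched, so there is no window in which a PTE references a pfn whose p2m entry is uninitialised.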

> diff --git a/include/xen/interface/memory.h b/include/xen/interface/memory.h
> index eac3ce1..1b213b1 100644
> --- a/include/xen/interface/memory.h
> +++ b/include/xen/interface/memory.h
> @@ -163,10 +163,19 @@ struct xen_add_to_physmap {
>      /* Which domain to change the mapping for. */
>      domid_t domid;
> 
> +    /* Number of pages to go through for gmfn_range */
> +    uint16_t    size;
> +
>      /* Source mapping space. */
>  #define XENMAPSPACE_shared_info 0 /* shared info page */
>  #define XENMAPSPACE_grant_table 1 /* grant table page */
> -    unsigned int space;
> +#define XENMAPSPACE_gmfn        2 /* GMFN */
> +#define XENMAPSPACE_gmfn_range  3 /* GMFN range */
> +#define XENMAPSPACE_gmfn_foreign 4 /* GMFN from another guest */
> +    uint16_t space;
> +    domid_t foreign_domid;         /* IFF XENMAPSPACE_gmfn_foreign */
> +
> +#define XENMAPIDX_grant_table_status 0x80000000

Even if we don't get the full PVH implementation in the hypervisor right
away, I think we should at least update the canonical interface
definition in the Xen tree (xen/include/public) first. (The same goes
for the other interface definitions here.)

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 09:35:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 09:35:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2Ix5-0002UI-FI; Fri, 17 Aug 2012 09:34:47 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T2Ix3-0002Tl-QK
	for Xen-devel@lists.xensource.com; Fri, 17 Aug 2012 09:34:46 +0000
Received: from [85.158.138.51:56661] by server-11.bemta-3.messagelabs.com id
	31/D2-23152-5301E205; Fri, 17 Aug 2012 09:34:45 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-174.messagelabs.com!1345196084!28806067!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk0MzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30044 invoked from network); 17 Aug 2012 09:34:44 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-2.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 09:34:44 -0000
X-IronPort-AV: E=Sophos;i="4.77,783,1336348800"; d="scan'208";a="14055918"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 09:34:44 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 17 Aug 2012 10:34:44 +0100
Message-ID: <1345196083.30865.133.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Mukesh Rathor <mukesh.rathor@oracle.com>
Date: Fri, 17 Aug 2012 10:34:43 +0100
In-Reply-To: <20120815180356.08d4d2e4@mantra.us.oracle.com>
References: <20120815180356.08d4d2e4@mantra.us.oracle.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [RFC PATCH 4/8]: identity map, events,
 and xenbus related changes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-16 at 02:03 +0100, Mukesh Rathor wrote:
> +				if (xen_pvh_domain()) {
> +					xen_pvh_identity_map_chunk(start_pfn,
> +								   end_pfn);
> +					released += end_pfn - start_pfn;
> +					identity += end_pfn - start_pfn;

In the non-PVH case this is done inside
xen_set_identity_and_release_chunk. I think the interface ought to be
the same in both halves of the if.

I'm not sure whether it makes sense to push the if down into
xen_set_identity_and_release_chunk; I think the PVH case doesn't do the
release bit?

(what does happen to the old backing MFN in this case?)
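As a purely illustrative sketch of what pushing the if down might look like, with the release/identity bookkeeping reduced to counters (the function name, and the reading that PVH skips the release step, are assumptions from the discussion above, not from the patch):

```c
/* Hypothetical unified chunk handler: the pvh flag is pushed down so
 * both halves of the caller's if collapse into a single call site.
 * Counters stand in for the real identity-map and release work. */
static void set_identity_and_release_chunk_sketch(unsigned long start_pfn,
                                                  unsigned long end_pfn,
                                                  unsigned long *released,
                                                  unsigned long *identity,
                                                  int pvh)
{
    *identity += end_pfn - start_pfn;       /* both cases identity-map */
    if (!pvh)
        *released += end_pfn - start_pfn;   /* PVH keeps its backing MFNs? */
}
```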

> +				} else {
> +					xen_set_identity_and_release_chunk(
> +						start_pfn, end_pfn, nr_pages,
> +						&released, &identity);
> +				}
> +			}
>  			start = end;
>  		}
>  	}
> diff --git a/drivers/xen/events.c b/drivers/xen/events.c
> index 7595581..260113e 100644
> --- a/drivers/xen/events.c
> +++ b/drivers/xen/events.c
> @@ -1814,6 +1814,13 @@ void __init xen_init_IRQ(void)
>  		if (xen_initial_domain())
>  			pci_xen_initial_domain();
>  
> +		if (xen_pvh_domain()) {
> +			xen_callback_vector();

The definition of this function is surrounded by CONFIG_XEN_PVHVM, or
did I miss where you removed that and/or the appropriate Kconfig runes
to make it so?
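For reference, the sort of Kconfig rune that could make it so might look like the fragment below; the symbol name and wording are hypothetical, not taken from the patch:

```kconfig
# Hypothetical: give PVH its own symbol and pull in the HVM callback
# vector code it relies on via select.
config XEN_PVH
	bool "Xen PVH guest support"
	depends on XEN && X86_64
	select XEN_PVHVM
```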

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 09:35:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 09:35:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2Ixk-0002Wn-T8; Fri, 17 Aug 2012 09:35:28 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T2Ixj-0002Wf-H9
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 09:35:27 +0000
Received: from [85.158.143.99:28809] by server-2.bemta-4.messagelabs.com id
	1E/CB-31966-E501E205; Fri, 17 Aug 2012 09:35:26 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-7.tower-216.messagelabs.com!1345196117!25242360!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26777 invoked from network); 17 Aug 2012 09:35:25 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-7.tower-216.messagelabs.com with SMTP;
	17 Aug 2012 09:35:25 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 17 Aug 2012 10:35:17 +0100
Message-Id: <502E2C9C0200007800095D33@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 17 Aug 2012 10:35:56 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Xudong Hao" <xudong.hao@intel.com>
References: <1345013682-20618-1-git-send-email-xudong.hao@intel.com>
	<502B85020200007800095036@nat28.tlf.novell.com>
	<403610A45A2B5242BD291EDAE8B37D300FE8AB13@SHSMSX102.ccr.corp.intel.com>
	<502CEFC10200007800095727@nat28.tlf.novell.com>
	<403610A45A2B5242BD291EDAE8B37D300FEA18DC@SHSMSX102.ccr.corp.intel.com>
In-Reply-To: <403610A45A2B5242BD291EDAE8B37D300FEA18DC@SHSMSX102.ccr.corp.intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "ian.jackson@eu.citrix.com" <ian.jackson@eu.citrix.com>,
	Xiantao Zhang <xiantao.zhang@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 1/3] xen/tools: Add 64 bits big bar support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 17.08.12 at 11:24, "Hao, Xudong" <xudong.hao@intel.com> wrote:
>>  -----Original Message-----
>> From: Jan Beulich [mailto:JBeulich@suse.com]
>> Sent: Thursday, August 16, 2012 7:04 PM
>> To: Hao, Xudong
>> Cc: ian.jackson@eu.citrix.com; Zhang, Xiantao; xen-devel@lists.xen.org 
>> Subject: RE: [Xen-devel] [PATCH 1/3] xen/tools: Add 64 bits big bar support
>> 
>> >>> On 16.08.12 at 12:48, "Hao, Xudong" <xudong.hao@intel.com> wrote:
>> >> From: Jan Beulich [mailto:JBeulich@suse.com]
>> >> >>> On 15.08.12 at 08:54, Xudong Hao <xudong.hao@intel.com> wrote:
>> >> > --- a/tools/firmware/hvmloader/config.h	Tue Jul 24 17:02:04 2012
>> +0200
>> >> > +++ b/tools/firmware/hvmloader/config.h	Thu Jul 26 15:40:01 2012
>> +0800
>> >> > @@ -53,6 +53,10 @@ extern struct bios_config ovmf_config;
>> >> >  /* MMIO hole: Hardcoded defaults, which can be dynamically expanded.
>> */
>> >> >  #define PCI_MEM_START       0xf0000000
>> >> >  #define PCI_MEM_END         0xfc000000
>> >> > +#define PCI_HIGH_MEM_START  0xa000000000ULL
>> >> > +#define PCI_HIGH_MEM_END    0xf000000000ULL
>> >>
>> >> With such hard coded values, this is hardly meant to be anything
>> >> more than an RFC, is it? These values should not exist in the first
>> >> place, and the variables below should be determined from VM
>> >> characteristics (best would presumably be to allocate them top
>> >> down from the end of physical address space, making sure you
>> >> don't run into RAM).
>> 
>> No comment on this part?
>> 
> 
> The high MMIO memory starts at 640G, which is already very high; I think we 
> don't need to allocate MMIO top down from the top of the physical address 
> space. Another thing you remind me of: maybe we can skip this high MMIO hole 
> when setting up the p2m table in libxc's HVM build (setup_guest()), like the 
> handling of MMIO below 4G.

That would be an option, but any fixed address you pick here
will look arbitrary (and will sooner or later raise questions). Plus
by allowing the RAM above 4G to remain contiguous even for
huge guests, we'd retain maximum compatibility with all sorts
of guest OSes. Furthermore, did you check that we can in all cases use
40-bit (guest) physical addresses? (I would think that 36 is the biggest
common value.) Bottom line: please don't use a fixed number here.
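The top-down idea fits in a few lines; this sketch is illustrative only (the function name is made up, the address width is a caller-supplied seed rather than a hard-coded constant, and the "don't run into RAM" check Jan mentions is reduced to a comment):

```c
#include <stdint.h>

/* Allocate a BAR just below the previous allocation, naturally aligned
 * to its (power-of-two) size. The caller seeds *cursor with the top of
 * the guest physical address space, e.g. 1ULL << phys_bits, and must
 * separately ensure the result does not collide with guest RAM. */
static uint64_t alloc_bar_topdown(uint64_t *cursor, uint64_t bar_sz)
{
    uint64_t base = (*cursor - bar_sz) & ~(bar_sz - 1); /* align down */
    *cursor = base;
    return base;
}
```

Seeding the cursor from the guest's actual physical address width avoids baking in any arbitrary constant like 640G.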

>> >> > @@ -133,23 +142,35 @@ void pci_setup(void)
>> >> >          /* Map the I/O memory and port resources. */
>> >> >          for ( bar = 0; bar < 7; bar++ )
>> >> >          {
>> >> > +            bar_sz_upper = 0;
>> >> >              bar_reg = PCI_BASE_ADDRESS_0 + 4*bar;
>> >> >              if ( bar == 6 )
>> >> >                  bar_reg = PCI_ROM_ADDRESS;
>> >> >
>> >> >              bar_data = pci_readl(devfn, bar_reg);
>> >> > +            is_64bar = !!((bar_data & (PCI_BASE_ADDRESS_SPACE |
>> >> > +                         PCI_BASE_ADDRESS_MEM_TYPE_MASK))
>> ==
>> >> > +                         (PCI_BASE_ADDRESS_SPACE_MEMORY |
>> >> > +                         PCI_BASE_ADDRESS_MEM_TYPE_64));
>> >> >              pci_writel(devfn, bar_reg, ~0);
>> >> >              bar_sz = pci_readl(devfn, bar_reg);
>> >> >              pci_writel(devfn, bar_reg, bar_data);
>> >> > +
>> >> > +            if (is_64bar) {
>> >> > +                bar_data_upper = pci_readl(devfn, bar_reg + 4);
>> >> > +                pci_writel(devfn, bar_reg + 4, ~0);
>> >> > +                bar_sz_upper = pci_readl(devfn, bar_reg + 4);
>> >> > +                pci_writel(devfn, bar_reg + 4, bar_data_upper);
>> >> > +                bar_sz = (bar_sz_upper << 32) | bar_sz;
>> >> > +            }
>> >> > +            bar_sz &= (((bar_data & PCI_BASE_ADDRESS_SPACE) ==
>> >> > +                        PCI_BASE_ADDRESS_SPACE_MEMORY) ?
>> >> > +                       0xfffffffffffffff0 :
>> >>
>> >> This should be a proper constant (or the masking could be
>> >> done earlier, in which case you could continue to use the
>> >> existing PCI_BASE_ADDRESS_MEM_MASK).
>> >>
>> >
>> > So the PCI_BASE_ADDRESS_MEM_MASK can be defined as 'ULL'.
>> 
>> I'd recommend not touching existing variables. As said before,
>> by re-ordering the code you can still use the constant as-is.
>> 
> 
> Mmh, I misunderstood you in the previous mail.
> So you mean we put the mask before "if (is_64bar)", and then we don't
> change the old mask code, right?

Exactly.
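Concretely, the reordering under discussion might look like the sketch below: mask the low dword with the existing 32-bit PCI_BASE_ADDRESS_MEM_MASK *before* merging in the upper dword, so no 64-bit constant is needed. This is a standalone toy, not the hvmloader code: the register constants are stubbed locally with their usual PCI values, and the size-probe readbacks are passed in rather than read from config space.

```c
#include <stdint.h>

/* Local stubs of the usual PCI BAR register constants. */
#define PCI_BASE_ADDRESS_SPACE          0x01
#define PCI_BASE_ADDRESS_SPACE_MEMORY   0x00
#define PCI_BASE_ADDRESS_MEM_TYPE_MASK  0x06
#define PCI_BASE_ADDRESS_MEM_TYPE_64    0x04
#define PCI_BASE_ADDRESS_MEM_MASK       (~0x0fU)
#define PCI_BASE_ADDRESS_IO_MASK        (~0x03U)

/* sz_lo/sz_hi are the values read back after writing ~0 to the BAR
 * (and, for 64-bit BARs, to the following dword). Masking happens
 * while bar_sz still holds only the low dword, so the existing 32-bit
 * mask is safe and cannot wipe the upper half. */
static uint64_t bar_size(uint32_t bar_data, uint32_t sz_lo, uint32_t sz_hi)
{
    int is_mem = (bar_data & PCI_BASE_ADDRESS_SPACE) ==
                 PCI_BASE_ADDRESS_SPACE_MEMORY;
    int is_64bar = is_mem &&
                   (bar_data & PCI_BASE_ADDRESS_MEM_TYPE_MASK) ==
                   PCI_BASE_ADDRESS_MEM_TYPE_64;
    uint64_t bar_sz = sz_lo & (is_mem ? PCI_BASE_ADDRESS_MEM_MASK
                                      : PCI_BASE_ADDRESS_IO_MASK);

    if (is_64bar)
        bar_sz |= (uint64_t)sz_hi << 32;
    return bar_sz & ~(bar_sz - 1);      /* lowest set bit = BAR size */
}
```

Note that masking after the merge, with a 32-bit mask, would zero-extend the mask and clear the upper dword, which is exactly why the ordering matters.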

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 09:36:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 09:36:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2Iyv-0002dt-Bq; Fri, 17 Aug 2012 09:36:41 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1T2Iyu-0002dm-Os
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 09:36:40 +0000
Received: from [85.158.139.83:41395] by server-6.bemta-5.messagelabs.com id
	8B/39-22415-7A01E205; Fri, 17 Aug 2012 09:36:39 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-8.tower-182.messagelabs.com!1345196199!17366852!1
X-Originating-IP: [192.89.123.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjg5LjEyMy4yNSA9PiA0ODI2Njc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17866 invoked from network); 17 Aug 2012 09:36:39 -0000
Received: from smtp.tele.fi (HELO smtp.tele.fi) (192.89.123.25)
	by server-8.tower-182.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Aug 2012 09:36:39 -0000
X-Originating-Ip: [194.89.68.22]
Received: from ydin.reaktio.net (reaktio.net [194.89.68.22])
	by smtp.tele.fi (Postfix) with ESMTP id 13B7922DD;
	Fri, 17 Aug 2012 12:36:38 +0300 (EEST)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id D33072005D; Fri, 17 Aug 2012 12:36:37 +0300 (EEST)
Date: Fri, 17 Aug 2012 12:36:37 +0300
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20120817093637.GJ19851@reaktio.net>
References: <20120816202246.GA6244@US-SEA-R8XVZTX>
	<1345149626-32602-1-git-send-email-dgdegra@tycho.nsa.gov>
	<1345189880.30865.60.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1345189880.30865.60.camel@zakaz.uk.xensource.com>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	"konrad.wilk@oracle.com" <konrad.wilk@oracle.com>,
	"msw@amazon.com" <msw@amazon.com>, "jbeulich@suse.com" <jbeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v2] xen/sysfs: Use XENVER_guest_handle to
 query UUID
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Aug 17, 2012 at 08:51:20AM +0100, Ian Campbell wrote:
> On Thu, 2012-08-16 at 21:40 +0100, Daniel De Graaf wrote:
> > On 08/16/2012 04:22 PM, Matt Wilson wrote:
> > > 
> > > Hi Daniel,
> > > 
> > > What do you think about retaining a fallback of looking in xenstore if
> > > the hypercall fails?
> > > 
> > > Matt
> > > 
> > 
> > That sounds good; there's little cost to leaving the fallback in.
> > 
> > ----8<-----------------------------------------------------
> > 
> > This hypercall has been present since Xen 3.1,
> 
> Do a pvops kernels run on a hypervisor of that vintage? I have a vague
> recollection of the pvops kernels requiring 3.4+ or something. Maybe
> that was only for dom0 though.
> 

The Xen requirement is only for pvops dom0.

pvops kernels, for example the Linux 3.3.x and 3.5.x kernels in Fedora 17,
run OK on RHEL5 Xen, which is based on Xen 3.1.2.

-- Pasi


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 09:36:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 09:36:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2Iyv-0002dt-Bq; Fri, 17 Aug 2012 09:36:41 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1T2Iyu-0002dm-Os
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 09:36:40 +0000
Received: from [85.158.139.83:41395] by server-6.bemta-5.messagelabs.com id
	8B/39-22415-7A01E205; Fri, 17 Aug 2012 09:36:39 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-8.tower-182.messagelabs.com!1345196199!17366852!1
X-Originating-IP: [192.89.123.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjg5LjEyMy4yNSA9PiA0ODI2Njc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17866 invoked from network); 17 Aug 2012 09:36:39 -0000
Received: from smtp.tele.fi (HELO smtp.tele.fi) (192.89.123.25)
	by server-8.tower-182.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Aug 2012 09:36:39 -0000
X-Originating-Ip: [194.89.68.22]
Received: from ydin.reaktio.net (reaktio.net [194.89.68.22])
	by smtp.tele.fi (Postfix) with ESMTP id 13B7922DD;
	Fri, 17 Aug 2012 12:36:38 +0300 (EEST)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id D33072005D; Fri, 17 Aug 2012 12:36:37 +0300 (EEST)
Date: Fri, 17 Aug 2012 12:36:37 +0300
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20120817093637.GJ19851@reaktio.net>
References: <20120816202246.GA6244@US-SEA-R8XVZTX>
	<1345149626-32602-1-git-send-email-dgdegra@tycho.nsa.gov>
	<1345189880.30865.60.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1345189880.30865.60.camel@zakaz.uk.xensource.com>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	"konrad.wilk@oracle.com" <konrad.wilk@oracle.com>,
	"msw@amazon.com" <msw@amazon.com>, "jbeulich@suse.com" <jbeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v2] xen/sysfs: Use XENVER_guest_handle to
 query UUID
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Aug 17, 2012 at 08:51:20AM +0100, Ian Campbell wrote:
> On Thu, 2012-08-16 at 21:40 +0100, Daniel De Graaf wrote:
> > On 08/16/2012 04:22 PM, Matt Wilson wrote:
> > > 
> > > Hi Daniel,
> > > 
> > > What do you think about retaining a fallback of looking in xenstore if
> > > the hypercall fails?
> > > 
> > > Matt
> > > 
> > 
> > That sounds good; there's little cost to leaving the fallback in.
> > 
> > ----8<-----------------------------------------------------
> > 
> > This hypercall has been present since Xen 3.1,
> 
> Do pvops kernels run on a hypervisor of that vintage? I have a vague
> recollection of the pvops kernels requiring 3.4+ or something. Maybe
> that was only for dom0 though.
> 

The Xen requirement is only for pvops dom0.

pvops kernels, for example the Linux 3.3.x and 3.5.x in Fedora 17, run OK on RHEL5 Xen, which is Xen 3.1.2 based.

-- Pasi


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 09:39:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 09:39:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2J1r-0002rV-Vm; Fri, 17 Aug 2012 09:39:43 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1T2J1q-0002rF-AA
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 09:39:42 +0000
Received: from [85.158.143.35:35910] by server-2.bemta-4.messagelabs.com id
	09/53-31966-D511E205; Fri, 17 Aug 2012 09:39:41 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-8.tower-21.messagelabs.com!1345196378!13940806!1
X-Originating-IP: [192.89.123.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjg5LjEyMy4yNSA9PiA0ODI2Njc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29363 invoked from network); 17 Aug 2012 09:39:39 -0000
Received: from smtp.tele.fi (HELO smtp.tele.fi) (192.89.123.25)
	by server-8.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Aug 2012 09:39:39 -0000
X-Originating-Ip: [194.89.68.22]
Received: from ydin.reaktio.net (reaktio.net [194.89.68.22])
	by smtp.tele.fi (Postfix) with ESMTP id 133B7242F;
	Fri, 17 Aug 2012 12:39:37 +0300 (EEST)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id E5B1B2005D; Fri, 17 Aug 2012 12:39:36 +0300 (EEST)
Date: Fri, 17 Aug 2012 12:39:36 +0300
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20120817093936.GK19851@reaktio.net>
References: <20120816202246.GA6244@US-SEA-R8XVZTX>
	<1345149626-32602-1-git-send-email-dgdegra@tycho.nsa.gov>
	<1345189880.30865.60.camel@zakaz.uk.xensource.com>
	<20120817093637.GJ19851@reaktio.net>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20120817093637.GJ19851@reaktio.net>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"jbeulich@suse.com" <jbeulich@suse.com>, "msw@amazon.com" <msw@amazon.com>,
	"konrad.wilk@oracle.com" <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [PATCH v2] xen/sysfs: Use XENVER_guest_handle to
 query UUID
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Aug 17, 2012 at 12:36:37PM +0300, Pasi Kärkkäinen wrote:
> On Fri, Aug 17, 2012 at 08:51:20AM +0100, Ian Campbell wrote:
> > On Thu, 2012-08-16 at 21:40 +0100, Daniel De Graaf wrote:
> > > On 08/16/2012 04:22 PM, Matt Wilson wrote:
> > > > 
> > > > Hi Daniel,
> > > > 
> > > > What do you think about retaining a fallback of looking in xenstore if
> > > > the hypercall fails?
> > > > 
> > > > Matt
> > > > 
> > > 
> > > That sounds good; there's little cost to leaving the fallback in.
> > > 
> > > ----8<-----------------------------------------------------
> > > 
> > > This hypercall has been present since Xen 3.1,
> > 
> > Do pvops kernels run on a hypervisor of that vintage? I have a vague
> > recollection of the pvops kernels requiring 3.4+ or something. Maybe
> > that was only for dom0 though.
> > 
> 
> The Xen requirement is only for pvops dom0.
> 
> pvops kernels, for example the Linux 3.3.x and 3.5.x in Fedora 17, run OK
> as domU on RHEL5 Xen, which is Xen 3.1.2 based.

I forgot to write "as domU" earlier... added.

-- Pasi


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 09:42:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 09:42:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2J4J-00032o-Mn; Fri, 17 Aug 2012 09:42:15 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T2J4I-00032e-Hh
	for Xen-devel@lists.xensource.com; Fri, 17 Aug 2012 09:42:14 +0000
Received: from [85.158.143.99:38273] by server-2.bemta-4.messagelabs.com id
	80/87-31966-5F11E205; Fri, 17 Aug 2012 09:42:13 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-216.messagelabs.com!1345196533!23382327!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk0MzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13570 invoked from network); 17 Aug 2012 09:42:13 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-4.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 09:42:13 -0000
X-IronPort-AV: E=Sophos;i="4.77,783,1336348800"; d="scan'208";a="14056071"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 09:42:13 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 17 Aug 2012 10:42:13 +0100
Message-ID: <1345196531.30865.139.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Mukesh Rathor <mukesh.rathor@oracle.com>
Date: Fri, 17 Aug 2012 10:42:11 +0100
In-Reply-To: <20120815180449.50410028@mantra.us.oracle.com>
References: <20120815180449.50410028@mantra.us.oracle.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [RFC PATCH 5/8]: PVH: smp changes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-16 at 02:04 +0100, Mukesh Rathor wrote:
> ---
>  arch/x86/xen/smp.c |   33 +++++++++++++++++++++++++--------
>  1 files changed, 25 insertions(+), 8 deletions(-)
> 
> diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
> index cdf269d..017d7fa 100644
> --- a/arch/x86/xen/smp.c
> +++ b/arch/x86/xen/smp.c
> @@ -68,9 +68,11 @@ static void __cpuinit cpu_bringup(void)
>  	touch_softlockup_watchdog();
>  	preempt_disable();
>  
> -	xen_enable_sysenter();
> -	xen_enable_syscall();
> -
> +	/* PVH runs in ring 0 and allows us to do native syscalls. Yay! */
> +	if (!xen_pvh_domain()) {

Can this be gated on X86_FEATURE_BLAH, with appropriate masking for the
PV vs PVH cases?

Looks like enable_sysenter actually does that already, maybe
enable_syscall could too?

> +		xen_enable_sysenter();
> +		xen_enable_syscall();
> +	}
>  	cpu = smp_processor_id();
>  	smp_store_cpu_info(cpu);
>  	cpu_data(cpu).x86_max_cores = 1;
> @@ -230,10 +232,11 @@ static void __init xen_smp_prepare_boot_cpu(void)
>  	BUG_ON(smp_processor_id() != 0);
>  	native_smp_prepare_boot_cpu();
>  
> -	/* We've switched to the "real" per-cpu gdt, so make sure the
> -	   old memory can be recycled */
> -	make_lowmem_page_readwrite(xen_initial_gdt);
> -
> +	if (!xen_pvh_domain()) {

XENFEAT_writable_pagetables, I think? And possibly pushed down into
make_lowmem_page_readwrite?

> +		/* We've switched to the "real" per-cpu gdt, so make sure the
> +		 * old memory can be recycled */
> +		make_lowmem_page_readwrite(xen_initial_gdt);
> +	}
>  	xen_filter_cpu_maps();
>  	xen_setup_vcpu_info_placement();
>  }
> @@ -312,6 +315,7 @@ cpu_initialize_context(unsigned int cpu, struct task_struct *idle)
>  
>  	memset(&ctxt->fpu_ctxt, 0, sizeof(ctxt->fpu_ctxt));
>  
> +	if (!xen_pvh_domain()) {
>  		ctxt->user_regs.ds = __USER_DS;
>  		ctxt->user_regs.es = __USER_DS;
>  
> @@ -339,7 +343,20 @@ cpu_initialize_context(unsigned int cpu, struct task_struct *idle)
>  					(unsigned long)xen_hypervisor_callback;
>  		ctxt->failsafe_callback_eip = 
>  					(unsigned long)xen_failsafe_callback;
> -
> +	} else {
> +		ctxt->user_regs.ds = __KERNEL_DS;
> +		ctxt->user_regs.es = 0;
> +		ctxt->user_regs.gs = 0;

Not __KERNEL_DS for es too?

Not sure about gs -- shouldn't that point to some per-cpu segment or
something? Maybe that happens somewhere else? (in which case a comment?)

> +
> +		ctxt->gdt_frames[0] = (unsigned long)gdt;
> +		ctxt->gdt_ents = (unsigned long)(GDT_SIZE - 1);
> +
> +		/* Note: PVH is not supported on x86_32. */
> +#ifdef __x86_64__

ITYM CONFIG_X86_64?

> +		ctxt->gs_base_user = (unsigned long)
> +		                         per_cpu(irq_stack_union.gs_base, cpu);
> +#endif
> +	}
>  	ctxt->user_regs.cs = __KERNEL_CS;
>  	ctxt->user_regs.esp = idle->thread.sp0 - sizeof(struct pt_regs);
>  



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 09:46:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 09:46:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2J7r-0003E8-CM; Fri, 17 Aug 2012 09:45:55 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T2J7p-0003Dy-OY
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 09:45:53 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1345196736!2393719!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29142 invoked from network); 17 Aug 2012 09:45:36 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-15.tower-27.messagelabs.com with SMTP;
	17 Aug 2012 09:45:36 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 17 Aug 2012 10:45:35 +0100
Message-Id: <502E2F090200007800095D62@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 17 Aug 2012 10:46:17 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>,
	"Stefano Stabellini" <Stefano.Stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1208161523420.4850@kaball.uk.xensource.com>
	<1345128612-10323-4-git-send-email-stefano.stabellini@eu.citrix.com>
	<502D33B8020000780009596B@nat28.tlf.novell.com>
	<502D37D702000078000959F7@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208161801210.15568@kaball.uk.xensource.com>
	<1345190532.30865.67.camel@zakaz.uk.xensource.com>
In-Reply-To: <1345190532.30865.67.camel@zakaz.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "Tim \(Xen.org\)" <tim@xen.org>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v3 4/6] xen: introduce XEN_GUEST_HANDLE_PARAM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 17.08.12 at 10:02, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Thu, 2012-08-16 at 18:10 +0100, Stefano Stabellini wrote:
>> On Thu, 16 Aug 2012, Jan Beulich wrote:
>> > >>> On 16.08.12 at 17:54, "Jan Beulich" <JBeulich@suse.com> wrote:
>> > > Seeing the patch I btw realized that there's no easy way to
>> > > avoid having the type as a second argument in the conversion
>> > > macros. Nevertheless I still don't like the explicitly specified type
>> > > there.
>> > 
>> > Btw - on the architecture(s) where the two handles are identical
>> > I would prefer you to make the conversion functions trivial (and
>> > thus avoid making use of the "type" parameter), thus allowing
>> > the type checking to occur that you currently circumvent.
>> 
>> OK, I can do that.
> 
> Will this result in the type parameter potentially becoming stale?
> 
> Adding a redundant pointer compare is a good way to get the compiler to
> catch this. Something like:
> 
>         /* Cast a XEN_GUEST_HANDLE_PARAM to XEN_GUEST_HANDLE */
>         #define guest_handle_from_param(hnd, type) ({
>             typeof((hnd).p) _x = (hnd).p;
>             XEN_GUEST_HANDLE(type) _y;
>             &_y == &_x;
>             hnd;
>          })

Ah yes, that's a good suggestion.

> I'm not sure which two pointers of members of the various structs need
> to be compared, maybe it's actually &_y.p and &hnd.p, but you get the
> idea...

Right, comparing (hnd).p with _y.p would be the right thing; no
need for _x, but some other (mechanical) adjustments would be
necessary.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 09:46:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 09:46:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2J8E-0003G3-Pi; Fri, 17 Aug 2012 09:46:18 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T2J8D-0003Fu-AM
	for Xen-devel@lists.xensource.com; Fri, 17 Aug 2012 09:46:17 +0000
Received: from [85.158.143.35:10755] by server-2.bemta-4.messagelabs.com id
	3A/1E-31966-8E21E205; Fri, 17 Aug 2012 09:46:16 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1345196775!12392788!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDkyODE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11312 invoked from network); 17 Aug 2012 09:46:16 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 09:46:16 -0000
X-IronPort-AV: E=Sophos;i="4.77,783,1336348800"; d="scan'208";a="14056149"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 09:46:15 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 17 Aug 2012 10:46:15 +0100
Message-ID: <1345196774.30865.143.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Mukesh Rathor <mukesh.rathor@oracle.com>
Date: Fri, 17 Aug 2012 10:46:14 +0100
In-Reply-To: <20120815180541.2fddead9@mantra.us.oracle.com>
References: <20120815180541.2fddead9@mantra.us.oracle.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [RFC PATCH 6/8]: Ballooning changes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-16 at 02:05 +0100, Mukesh Rathor wrote:
> ---
>  drivers/xen/balloon.c |   26 +++++++++++++++++++++-----
>  1 files changed, 21 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
> index 31ab82f..57960a1 100644
> --- a/drivers/xen/balloon.c
> +++ b/drivers/xen/balloon.c
> @@ -358,10 +358,18 @@ static enum bp_state increase_reservation(unsigned long nr_pages)
>  		BUG_ON(!xen_feature(XENFEAT_auto_translated_physmap) &&
>  		       phys_to_machine_mapping_valid(pfn));
>  
> -		set_phys_to_machine(pfn, frame_list[i]);
> -
> +		if (!xen_pvh_domain()) {

XENFEAT_auto_translated...?

> +			set_phys_to_machine(pfn, frame_list[i]);
> +		} else {
> +			pte_t *ptep;
> +			unsigned int level;
> +			void *addr = __va(pfn << PAGE_SHIFT);
> +			ptep = lookup_address((unsigned long)addr, &level);
> +			set_pte(ptep, pfn_pte(pfn, PAGE_KERNEL));
> +		}
>  		/* Link back into the page tables if not highmem. */
> -		if (xen_pv_domain() && !PageHighMem(page)) {
> +		if (xen_pv_domain() && !PageHighMem(page) &&
> +		    !xen_pvh_domain()) {

It feels like this ought to be inside the !xen_pvh_domain above as well
as just the set_phys_to_machine.

And is the else above not missing a !PageHighMem check?

>  			int ret;
>  			ret = HYPERVISOR_update_va_mapping(
>  				(unsigned long)__va(pfn << PAGE_SHIFT),
> @@ -417,7 +425,14 @@ static enum bp_state decrease_reservation(unsigned long nr_pages, gfp_t gfp)
>  
>  		scrub_page(page);
>  
> -		if (xen_pv_domain() && !PageHighMem(page)) {
> +		if (xen_pvh_domain() && !PageHighMem(page)) {
> +			unsigned int level;
> +			pte_t *ptep;
> +			void *addr = __va(pfn << PAGE_SHIFT);
> +			ptep = lookup_address((unsigned long)addr, &level);
> +			set_pte(ptep, __pte(0));
> +
> +		} else if (xen_pv_domain() && !PageHighMem(page)) {
>  			ret = HYPERVISOR_update_va_mapping(
>  				(unsigned long)__va(pfn << PAGE_SHIFT),
>  				__pte_ma(0), 0);

This pattern:

	if ( xen_pvh_... )
		... lookup_address / set_pte ...
	else
		... HYPERVISOR_update_va_mapping

Was present above too -- candidate for a helper? (I was a bit surprised
there wasn't already one...)
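
Such a helper might look roughly like this (a sketch only: the xen_*
predicate and the two primitives are stubbed out here so just the
dispatch is shown, and every name is made up):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Stubs standing in for the real primitives, recording which path ran. */
static bool pvh_mode;
static int last_path; /* 1 = lookup_address/set_pte, 2 = update_va_mapping */

static bool xen_pvh_domain(void) { return pvh_mode; }

static void pvh_set_pte(unsigned long va, uint64_t pteval)
{
    (void)va; (void)pteval;
    last_path = 1; /* real code: lookup_address() + set_pte() */
}

static int pv_update_va_mapping(unsigned long va, uint64_t pteval)
{
    (void)va; (void)pteval;
    last_path = 2; /* real code: HYPERVISOR_update_va_mapping() */
    return 0;
}

/* The shared helper both balloon call sites could use. */
static int xen_balloon_map_pte(unsigned long va, uint64_t pteval)
{
    if (xen_pvh_domain()) {
        pvh_set_pte(va, pteval);
        return 0;
    }
    return pv_update_va_mapping(va, pteval);
}
```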

> @@ -433,7 +448,8 @@ static enum bp_state decrease_reservation(unsigned long nr_pages, gfp_t gfp)
>  	/* No more mappings: invalidate P2M and add to balloon. */
>  	for (i = 0; i < nr_pages; i++) {
>  		pfn = mfn_to_pfn(frame_list[i]);
> -		__set_phys_to_machine(pfn, INVALID_P2M_ENTRY);
> +		if (!xen_pvh_domain())

XENFEAT_something?

> +			__set_phys_to_machine(pfn, INVALID_P2M_ENTRY);
>  		balloon_append(pfn_to_page(pfn));
>  	}
>  



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 09:49:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 09:49:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2JBZ-0003VU-Eo; Fri, 17 Aug 2012 09:49:45 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T2JBY-0003VJ-60
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 09:49:44 +0000
Received: from [85.158.143.99:42692] by server-2.bemta-4.messagelabs.com id
	3A/14-31966-7B31E205; Fri, 17 Aug 2012 09:49:43 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1345196981!21630712!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27665 invoked from network); 17 Aug 2012 09:49:41 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-6.tower-216.messagelabs.com with SMTP;
	17 Aug 2012 09:49:41 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 17 Aug 2012 10:49:41 +0100
Message-Id: <502E2FFC0200007800095D65@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 17 Aug 2012 10:50:20 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: <wei.wang2@amd.com>,"Santosh Jodh" <santosh.jodh@citrix.com>,
	<xiantao.zhang@intel.com>
References: <575a53faf4e1f3533096.1345134973@REDBLD-XS.ad.xensource.com>
In-Reply-To: <575a53faf4e1f3533096.1345134973@REDBLD-XS.ad.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: tim@xen.org, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Dump IOMMU p2m table
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 16.08.12 at 18:36, Santosh Jodh <santosh.jodh@citrix.com> wrote:
> New key handler 'o' to dump the IOMMU p2m table for each domain.
> Skips dumping table for domain0.
> Intel and AMD specific iommu_ops handler for dumping p2m table.
> 
> Incorporated feedback from Jan Beulich and Wei Wang.
> Fixed indent printing with %*s.
> Removed superfluous superpage and other attribute prints.
> Make next_level use consistent for AMD IOMMU dumps. Warn if found 
> inconsistent.
> 
> Signed-off-by: Santosh Jodh<santosh.jodh@citrix.com>

Looks okay to me now. Wei, Zhang - can we get an ack (or
more) from both of you?

Thanks, Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 09:52:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 09:52:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2JEJ-0003df-1V; Fri, 17 Aug 2012 09:52:35 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T2JEH-0003dZ-Qi
	for Xen-devel@lists.xensource.com; Fri, 17 Aug 2012 09:52:34 +0000
Received: from [85.158.139.83:44088] by server-3.bemta-5.messagelabs.com id
	FF/D1-27237-0641E205; Fri, 17 Aug 2012 09:52:32 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-182.messagelabs.com!1345197152!28093492!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk0MzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21332 invoked from network); 17 Aug 2012 09:52:32 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-13.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 09:52:32 -0000
X-IronPort-AV: E=Sophos;i="4.77,784,1336348800"; d="scan'208";a="14056357"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 09:52:31 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 17 Aug 2012 10:52:31 +0100
Message-ID: <1345197150.30865.147.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Mukesh Rathor <mukesh.rathor@oracle.com>
Date: Fri, 17 Aug 2012 10:52:30 +0100
In-Reply-To: <20120815180622.0c988f48@mantra.us.oracle.com>
References: <20120815180622.0c988f48@mantra.us.oracle.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [RFC PATCH 7/8]: PVH: grant changes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-16 at 02:06 +0100, Mukesh Rathor wrote:
> diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
> index 0bfc1ef..2430133 100644
> --- a/drivers/xen/grant-table.c
> +++ b/drivers/xen/grant-table.c
> @@ -974,14 +974,19 @@ static void gnttab_unmap_frames_v2(void)
>  static int gnttab_map(unsigned int start_idx, unsigned int end_idx)
>  {
>  	struct gnttab_setup_table setup;
> -	unsigned long *frames;
> +	unsigned long *frames, start_gpfn;
>  	unsigned int nr_gframes = end_idx + 1;
>  	int rc;
>  
> -	if (xen_hvm_domain()) {
> +	if (xen_hvm_domain() || xen_pvh_domain()) {
>  		struct xen_add_to_physmap xatp;
>  		unsigned int i = end_idx;
>  		rc = 0;
> +
> +		if (xen_hvm_domain())
> +			start_gpfn = xen_hvm_resume_frames >> PAGE_SHIFT;
> +		else
> +			start_gpfn = virt_to_pfn(gnttab_shared.addr);

I wonder why the HVM case doesn't already use
virt_to_pfn(gnttab_shared.addr) since it appears to set
gnttab_shared.addr:
		gnttab_shared.addr = ioremap(xen_hvm_resume_frames,
						PAGE_SIZE * max_nr_gframes);

Perhaps the result of ioremap isn't amenable to virt_to_pfn (I can never
remember off hand).

>  		/*
>  		 * Loop backwards, so that the first hypercall has the largest
>  		 * index, ensuring that the table will grow only once.
> @@ -990,7 +995,7 @@ static int gnttab_map(unsigned int start_idx, unsigned int end_idx)
>  			xatp.domid = DOMID_SELF;
>  			xatp.idx = i;
>  			xatp.space = XENMAPSPACE_grant_table;
> -			xatp.gpfn = (xen_hvm_resume_frames >> PAGE_SHIFT) + i;
> +			xatp.gpfn = start_gpfn + i;
>  			rc = HYPERVISOR_memory_op(XENMEM_add_to_physmap, &xatp);
>  			if (rc != 0) {
>  				printk(KERN_WARNING
> @@ -1053,7 +1058,7 @@ static void gnttab_request_version(void)
>  	int rc;
>  	struct gnttab_set_version gsv;
>  
> -	if (xen_hvm_domain())
> +	if (xen_hvm_domain() || xen_pvh_domain())

Does something stop pvh using v2?

Is it a hypervisor implementation thing? In which case it might be
better for GNTTABOP_set_version to explicitly fail the set attempt? (the
same may well apply to HVM for all I know...)

>  		gsv.version = 1;
>  	else
>  		gsv.version = 2;
> @@ -1081,13 +1086,24 @@ static void gnttab_request_version(void)
>  int gnttab_resume(void)
>  {
>  	unsigned int max_nr_gframes;
> +	char *kmsg="Failed to kmalloc pages for pv in hvm grant frames\n";
>  
>  	gnttab_request_version();
>  	max_nr_gframes = gnttab_max_grant_frames();
>  	if (max_nr_gframes < nr_grant_frames)
>  		return -ENOSYS;
>  
> -	if (xen_pv_domain())
> +	/* PVH note: xen will free existing kmalloc'd mfn in
> +	 * XENMEM_add_to_physmap */
> +	if (xen_pvh_domain() && !gnttab_shared.addr) {
> +		gnttab_shared.addr =
> +			kmalloc(max_nr_gframes * PAGE_SIZE, GFP_KERNEL);
> +		if ( !gnttab_shared.addr ) {
> +			printk(KERN_WARNING "%s", kmsg);

Why this construct instead of just the string literal?
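
For comparison, the two spellings side by side (a sketch: printk is
replaced with snprintf here so they can be compared outside the kernel;
function names are made up):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* The patch's indirect spelling... */
static void warn_via_variable(char *buf, size_t n)
{
    char *kmsg = "Failed to kmalloc pages for pv in hvm grant frames\n";
    snprintf(buf, n, "%s", kmsg); /* printk(KERN_WARNING "%s", kmsg); */
}

/* ...versus just passing the literal, which produces identical output
 * without the extra variable. */
static void warn_direct(char *buf, size_t n)
{
    snprintf(buf, n,
             "Failed to kmalloc pages for pv in hvm grant frames\n");
}
```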

> +			return -ENOMEM;
> +		}
> +	}
> +	if (xen_pv_domain() || xen_pvh_domain())
>  		return gnttab_map(0, nr_grant_frames - 1);
>  
>  	if (gnttab_shared.addr == NULL) {



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 10:01:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 10:01:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2JNC-0003tM-2a; Fri, 17 Aug 2012 10:01:46 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T2JNA-0003tH-MR
	for Xen-devel@lists.xensource.com; Fri, 17 Aug 2012 10:01:44 +0000
Received: from [85.158.143.35:56730] by server-3.bemta-4.messagelabs.com id
	23/1E-09529-8861E205; Fri, 17 Aug 2012 10:01:44 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1345197689!13966553!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk0MzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17987 invoked from network); 17 Aug 2012 10:01:29 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 10:01:29 -0000
X-IronPort-AV: E=Sophos;i="4.77,784,1336348800"; d="scan'208";a="14056583"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 10:01:29 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 17 Aug 2012 11:01:29 +0100
Message-ID: <1345197687.30865.152.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Mukesh Rathor <mukesh.rathor@oracle.com>
Date: Fri, 17 Aug 2012 11:01:27 +0100
In-Reply-To: <20120815180716.0049bffe@mantra.us.oracle.com>
References: <20120815180716.0049bffe@mantra.us.oracle.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [RFC PATCH 8/8]: PVH: privcmd changes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-16 at 02:07 +0100, Mukesh Rathor wrote:
> ---
>  drivers/xen/privcmd.c |   68 +++++++++++++++++++++++++++++++++++++++++++++++-
>  1 files changed, 66 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
> index ccee0f1..0a240ab 100644
> --- a/drivers/xen/privcmd.c
> +++ b/drivers/xen/privcmd.c
> @@ -33,6 +33,7 @@
>  #include <xen/features.h>
>  #include <xen/page.h>
>  #include <xen/xen-ops.h>
> +#include <xen/balloon.h>
>  
>  #include "privcmd.h"
>  
> @@ -199,6 +200,10 @@ static long privcmd_ioctl_mmap(void __user *udata)
>  	if (!xen_initial_domain())
>  		return -EPERM;
>  
> +	/* PVH: TBD/FIXME. For now we only support privcmd_ioctl_mmap_batch */
> +	if (xen_pvh_domain())
> +		return -ENOSYS;
> +
>  	if (copy_from_user(&mmapcmd, udata, sizeof(mmapcmd)))
>  		return -EFAULT;
>  
> @@ -251,6 +256,8 @@ struct mmap_batch_state {
>  	xen_pfn_t __user *user;
>  };
>  
> +/* PVH dom0: if domU being created is PV, then mfn is mfn(addr on bus). If
> + * it's PVH then mfn is pfn (input to HAP). */
>  static int mmap_batch_fn(void *data, void *state)
>  {
>  	xen_pfn_t *mfnp = data;
> @@ -274,6 +281,40 @@ static int mmap_return_errors(void *data, void *state)
>  	return put_user(*mfnp, st->user++);
>  }
>  
> +/* Allocate pfns that are then mapped with gmfns from foreign domid. Update
> + * the vma with the page info to use later.
> + * Returns: 0 if success, otherwise -errno
> + */
> +static int pvh_privcmd_resv_pfns(struct vm_area_struct *vma, int numpgs)
> +{
> +	int rc;
> +	struct xen_pvh_sav_pfn_info *savp;
> +
> +	savp = kzalloc(sizeof(struct xen_pvh_sav_pfn_info), GFP_KERNEL);
> +	if (savp == NULL)
> +		return -ENOMEM;
> +
> +	savp->sp_paga = kcalloc(numpgs, sizeof(savp->sp_paga[0]), GFP_KERNEL);
> +	if (savp->sp_paga == NULL) {
> +		kfree(savp);
> +		return -ENOMEM;
> +	}
> +
> +	rc = alloc_xenballooned_pages(numpgs, savp->sp_paga, 0);
> +	if (rc != 0) {
> +		pr_warn("%s Could not alloc %d pfns rc:%d\n", __FUNCTION__,
> +			numpgs, rc);
> +		kfree(savp->sp_paga);
> +		kfree(savp);
> +		return -ENOMEM;
> +	}
> +	savp->sp_num_pgs = numpgs;
> +	BUG_ON(vma->vm_private_data);
> +	vma->vm_private_data = savp;
> +
> +	return 0;
> +}
> +
>  static struct vm_operations_struct privcmd_vm_ops;
>  
>  static long privcmd_ioctl_mmap_batch(void __user *udata)
> @@ -315,6 +356,12 @@ static long privcmd_ioctl_mmap_batch(void __user *udata)
>  		goto out;
>  	}
>  
> +	if (xen_pvh_domain()) {
> +		if ((ret=pvh_privcmd_resv_pfns(vma, m.num))) {
> +			up_write(&mm->mmap_sem);
> +			goto out;
> +		}
> +	}
>  	state.domain = m.dom;
>  	state.vma = vma;
>  	state.va = m.addr;
> @@ -365,6 +412,19 @@ static long privcmd_ioctl(struct file *file,
>  	return ret;
>  }
>  
> +static void privcmd_close(struct vm_area_struct *vma)
> +{
> +	struct xen_pvh_sav_pfn_info *savp;
> +
> +	if (!xen_pvh_domain() || ((savp=vma->vm_private_data) == NULL))
> +		return;
> +
> +	while (savp->sp_next_todo--) {
> +		xen_pfn_t pfn = page_to_pfn(savp->sp_paga[savp->sp_next_todo]);
> +		pvh_rem_xen_p2m(pfn, 1);
> +	}
> +}
> +
>  static int privcmd_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
>  {
>  	printk(KERN_DEBUG "privcmd_fault: vma=%p %lx-%lx, pgoff=%lx, uv=%p\n",
> @@ -375,13 +435,14 @@ static int privcmd_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
>  }
>  
>  static struct vm_operations_struct privcmd_vm_ops = {
> +	.close = privcmd_close,
>  	.fault = privcmd_fault
>  };
>  
>  static int privcmd_mmap(struct file *file, struct vm_area_struct *vma)
>  {
> -	/* Unsupported for auto-translate guests. */
> -	if (xen_feature(XENFEAT_auto_translated_physmap))
> +	/* Unsupported for auto-translate guests unless PVH */
> +	if (xen_feature(XENFEAT_auto_translated_physmap) && !xen_pvh_domain())
>  		return -ENOSYS;
>  
>  	/* DONTCOPY is essential for Xen because copy_page_range doesn't know
> @@ -395,6 +456,9 @@ static int privcmd_mmap(struct file *file, struct vm_area_struct *vma)
>  
>  static int privcmd_enforce_singleshot_mapping(struct vm_area_struct *vma)
>  {
> +	if (xen_pvh_domain())
> +		return (vma->vm_private_data == NULL);
> +
>  	return (xchg(&vma->vm_private_data, (void *)1) == NULL);

How come this is different for pvh?

Your version doesn't appear to enforce anything, since it doesn't set
it. Oh I see, you set it to some actual useful data in
pvh_privcmd_resv_pfns. I have a feeling you might want to use some sort
of atomic construct for that.

Which I suspect then means you need to be prepared for it to fail.

Can we set vm_private_data => 1 in privcmd_enforce_singleshot_mapping to
"open the transaction" and then set it to the final pointer up in
pvh_privcmd_resv_pfns?

That way it would be non-NULL both while the mapping is being
constructed and in use. Might need some checks elsewhere that it is != 1
when you expect a real pointer.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 10:10:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 10:10:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2JV6-00046A-2B; Fri, 17 Aug 2012 10:09:56 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T2JV4-000465-6X
	for xen-devel@lists.xensource.com; Fri, 17 Aug 2012 10:09:54 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1345198143!9734232!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk0MzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13075 invoked from network); 17 Aug 2012 10:09:03 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 10:09:03 -0000
X-IronPort-AV: E=Sophos;i="4.77,784,1336348800"; d="scan'208";a="14056778"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 10:08:44 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 17 Aug 2012 11:08:44 +0100
Message-ID: <1345198122.30865.155.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Fri, 17 Aug 2012 11:08:42 +0100
In-Reply-To: <1345133558-23341-5-git-send-email-konrad.wilk@oracle.com>
References: <1345133558-23341-1-git-send-email-konrad.wilk@oracle.com>
	<1345133558-23341-5-git-send-email-konrad.wilk@oracle.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: [Xen-devel] [PATCH 4/6] xen/mmu: Fix compile warnings.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-16 at 17:12 +0100, Konrad Rzeszutek Wilk wrote:
> linux/arch/x86/xen/mmu.c:1788:14: warning: comparison between pointer and integer [enabled by default]
> 
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> ---
>  arch/x86/xen/mmu.c |    4 ++--
>  1 files changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
> index 90d31a2..4911354 100644
> --- a/arch/x86/xen/mmu.c
> +++ b/arch/x86/xen/mmu.c
> @@ -1786,11 +1786,11 @@ void __init xen_setup_machphys_mapping(void)
>  {
>  	struct xen_machphys_mapping mapping;
>  
> -	if (HYPERVISOR_memory_op(XENMEM_machphys_mapping, &mapping) == 0) {
> +	if (HYPERVISOR_memory_op(XENMEM_machphys_mapping, (void *)&mapping) == 0) {

This change seems to be unnecessary and not related to the commit
message.

>  		machine_to_phys_mapping = (unsigned long *)mapping.v_start;
>  		machine_to_phys_nr = mapping.max_mfn + 1;
>  	} else {
> -		machine_to_phys_nr = MACH2PHYS_NR_ENTRIES;
> +		machine_to_phys_nr = (unsigned long)MACH2PHYS_NR_ENTRIES;

I must be missing something. Given:
        #define MACH2PHYS_VIRT_START  mk_unsigned_long(__MACH2PHYS_VIRT_START)
        #define MACH2PHYS_VIRT_END    mk_unsigned_long(__MACH2PHYS_VIRT_END)
        #define MACH2PHYS_NR_ENTRIES  ((MACH2PHYS_VIRT_END-MACH2PHYS_VIRT_START)>>__MACH2PHYS_SHIFT)
How is MACH2PHYS_NR_ENTRIES not already unsigned long? Or at the very
least, how is it not an integer type of some sort? It certainly doesn't
look like it can be a pointer (as the commit message suggests) to me.


>  	}
>  #ifdef CONFIG_X86_32
>  	WARN_ON((machine_to_phys_mapping + (machine_to_phys_nr - 1))



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 10:16:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 10:16:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2Jar-0004HZ-06; Fri, 17 Aug 2012 10:15:53 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T2Jaq-0004HU-9q
	for Xen-devel@lists.xensource.com; Fri, 17 Aug 2012 10:15:52 +0000
Received: from [85.158.143.35:4034] by server-1.bemta-4.messagelabs.com id
	FE/D0-07754-7D91E205; Fri, 17 Aug 2012 10:15:51 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1345198549!13969249!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk0MzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4769 invoked from network); 17 Aug 2012 10:15:49 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 10:15:49 -0000
X-IronPort-AV: E=Sophos;i="4.77,784,1336348800"; d="scan'208";a="14056948"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 10:15:49 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 17 Aug 2012 11:15:49 +0100
Date: Fri, 17 Aug 2012 11:15:32 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Mukesh Rathor <mukesh.rathor@oracle.com>
In-Reply-To: <20120816114650.4db2079f@mantra.us.oracle.com>
Message-ID: <alpine.DEB.2.02.1208171104230.15568@kaball.uk.xensource.com>
References: <20120815175724.3405043a@mantra.us.oracle.com>
	<alpine.DEB.2.02.1208161454430.4850@kaball.uk.xensource.com>
	<20120816114650.4db2079f@mantra.us.oracle.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [RFC PATCH 1/8]: PVH: Basic and preparatory changes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 16 Aug 2012, Mukesh Rathor wrote:
> > > diff --git a/include/xen/interface/xen.h
> > > b/include/xen/interface/xen.h index 0801468..1d5bc36 100644
> > > --- a/include/xen/interface/xen.h
> > > +++ b/include/xen/interface/xen.h
> > > @@ -493,6 +493,7 @@ struct dom0_vga_console_info {
> > >  /* These flags are passed in the 'flags' field of start_info_t. */
> > >  #define SIF_PRIVILEGED    (1<<0)  /* Is the domain privileged? */
> > >  #define SIF_INITDOMAIN    (1<<1)  /* Is this the initial control
> > > domain? */ +#define SIF_IS_PVINHVM    (1<<4)  /* Is it a PV running
> > > in HVM container? */ #define SIF_PM_MASK       (0xFF<<8) /* reserve
> > > 1 byte for xen-pm options */ 
> > >  typedef uint64_t cpumap_t;
> > 
> > I would avoid adding SIF_IS_PVINHVM, an x86 specific concept, into a
> > generic xen.h interface file. 
> 
> > > +/* xen_pv_domain check is necessary as start_info ptr is null in
> > > HVM. Also,
> > > + * note, xen PVH domain shares lot of HVM code */
> > > +#define xen_pvh_domain()       (xen_pv_domain()
> > > &&                     \
> > > +				(xen_start_info->flags &
> > > SIF_IS_PVINHVM))
> >  
> > Also here.
> 
> Hmm.. I can move '#define xen_pvh_domain()' to x86 header, easy. But,
> not sure how to define SIF_IS_PVINHVM then? I could put SIF_IS_RESVD in
> include/xen/interface/xen.h, and then do 
> #define SIF_IS_PVINHVM SIF_IS_RESVD in an x86 file.
> 
> What do you think about that?

I am not particularly fussed about the location of SIF_IS_PVINHVM.
We could define it in asm/xen/hypervisor.h for example.

The very important bit is to avoid xen_pvh_domain() in generic code
because it reduces code reusability.
xen_pvh_domain() covers too many different concepts (a PV guest, in an
HVM container, using nested paging in hardware), if we bundle them
together it makes it much harder to reuse them.
So we should avoid checking for xen_pvh_domain() and check for the
relevant sub-property we are actually interested in.

For example in balloon.c we are probably only interested in memory
related behavior, so checking for XENFEAT_auto_translated_physmap should
be enough.  In other parts of the code we might want to check for
xen_pv_domain(). If xen_pv_domain() and XENFEAT_auto_translated_physmap
are not enough, we could introduce another small XENFEAT that specifies
that the domain is running in an HVM container. This way they are all
reusable.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 10:16:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 10:16:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2Jar-0004HZ-06; Fri, 17 Aug 2012 10:15:53 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T2Jaq-0004HU-9q
	for Xen-devel@lists.xensource.com; Fri, 17 Aug 2012 10:15:52 +0000
Received: from [85.158.143.35:4034] by server-1.bemta-4.messagelabs.com id
	FE/D0-07754-7D91E205; Fri, 17 Aug 2012 10:15:51 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1345198549!13969249!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk0MzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4769 invoked from network); 17 Aug 2012 10:15:49 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 10:15:49 -0000
X-IronPort-AV: E=Sophos;i="4.77,784,1336348800"; d="scan'208";a="14056948"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 10:15:49 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 17 Aug 2012 11:15:49 +0100
Date: Fri, 17 Aug 2012 11:15:32 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Mukesh Rathor <mukesh.rathor@oracle.com>
In-Reply-To: <20120816114650.4db2079f@mantra.us.oracle.com>
Message-ID: <alpine.DEB.2.02.1208171104230.15568@kaball.uk.xensource.com>
References: <20120815175724.3405043a@mantra.us.oracle.com>
	<alpine.DEB.2.02.1208161454430.4850@kaball.uk.xensource.com>
	<20120816114650.4db2079f@mantra.us.oracle.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [RFC PATCH 1/8]: PVH: Basic and preparatory changes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 16 Aug 2012, Mukesh Rathor wrote:
> > > diff --git a/include/xen/interface/xen.h b/include/xen/interface/xen.h
> > > index 0801468..1d5bc36 100644
> > > --- a/include/xen/interface/xen.h
> > > +++ b/include/xen/interface/xen.h
> > > @@ -493,6 +493,7 @@ struct dom0_vga_console_info {
> > >  /* These flags are passed in the 'flags' field of start_info_t. */
> > >  #define SIF_PRIVILEGED    (1<<0)  /* Is the domain privileged? */
> > >  #define SIF_INITDOMAIN    (1<<1)  /* Is this the initial control domain? */
> > > +#define SIF_IS_PVINHVM    (1<<4)  /* Is it a PV running in HVM container? */
> > >  #define SIF_PM_MASK       (0xFF<<8) /* reserve 1 byte for xen-pm options */
> > >  typedef uint64_t cpumap_t;
> > 
> > I would avoid adding SIF_IS_PVINHVM, an x86 specific concept, into a
> > generic xen.h interface file. 
> 
> > > +/* xen_pv_domain check is necessary as start_info ptr is null in HVM. Also,
> > > + * note, xen PVH domain shares lot of HVM code */
> > > +#define xen_pvh_domain()       (xen_pv_domain() &&                     \
> > > +				(xen_start_info->flags & SIF_IS_PVINHVM))
> >  
> > Also here.
> 
> Hmm.. I can move '#define xen_pvh_domain()' to x86 header, easy. But,
> not sure how to define SIF_IS_PVINHVM then? I could put SIF_IS_RESVD in
> include/xen/interface/xen.h, and then do 
> #define SIF_IS_PVINHVM SIF_IS_RESVD in an x86 file.
> 
> What do you think about that?

I am not particularly fussed about the location of SIF_IS_PVINHVM.
We could define it in asm/xen/hypervisor.h for example.

The important bit is to avoid xen_pvh_domain() in generic code,
because it reduces code reusability.
xen_pvh_domain() covers too many different concepts (a PV guest, in an
HVM container, using nested paging in hardware); if we bundle them
together it becomes much harder to reuse them.
So we should avoid checking for xen_pvh_domain() and instead check for
the relevant sub-property we are actually interested in.

For example in balloon.c we are probably only interested in memory
related behavior, so checking for XENFEAT_auto_translated_physmap should
be enough.  In other parts of the code we might want to check for
xen_pv_domain(). If xen_pv_domain() and XENFEAT_auto_translated_physmap
are not enough, we could introduce another small XENFEAT that specifies
that the domain is running in a HVM container. This way they are all
reusable.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 10:18:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 10:18:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2JdW-0004Nr-Hw; Fri, 17 Aug 2012 10:18:38 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T2JdV-0004Nh-Bf
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 10:18:37 +0000
Received: from [85.158.143.99:57121] by server-1.bemta-4.messagelabs.com id
	13/C5-07754-C7A1E205; Fri, 17 Aug 2012 10:18:36 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-216.messagelabs.com!1345198712!23418382!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10970 invoked from network); 17 Aug 2012 10:18:35 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-216.messagelabs.com with SMTP;
	17 Aug 2012 10:18:35 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 17 Aug 2012 11:18:32 +0100
Message-Id: <502E36C00200007800095D8B@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 17 Aug 2012 11:19:12 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: <boris.ostrovsky@amd.com>
References: <85190245a94d9945b765.1345135279@localhost.localdomain>
In-Reply-To: <85190245a94d9945b765.1345135279@localhost.localdomain>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] AMD,
 powernow: Update P-state directly when _PSD's CoordType is
 DOMAIN_COORD_TYPE_HW_ALL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 16.08.12 at 18:41, Boris Ostrovsky <boris.ostrovsky@amd.com> wrote:
> AMD, powernow: Update P-state directly when _PSD's CoordType is 
> DOMAIN_COORD_TYPE_HW_ALL
> 
> When _PSD's CoordType is DOMAIN_COORD_TYPE_HW_ALL (i.e. shared_type is
> CPUFREQ_SHARED_TYPE_HW) which most often is the case on servers, there is no
> reason to go into on_selected_cpus() code, we can call transition_pstate()
> directly.
> 
> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@amd.com>

Looks good to me in general (one minor comment below, but no
need to resubmit just because of this). But it's not really clear
to me whether this actually improves anything (knowing that is
needed to decide whether to put this in now or after 4.2).

> --- a/xen/arch/x86/acpi/cpufreq/powernow.c	Wed Aug 15 09:41:21 2012 +0100
> +++ b/xen/arch/x86/acpi/cpufreq/powernow.c	Thu Aug 16 18:38:21 2012 +0200
> @@ -56,20 +56,9 @@
>  
>  static struct cpufreq_driver powernow_cpufreq_driver;
>  
> -struct drv_cmd {
> -    unsigned int type;
> -    const cpumask_t *mask;
> -    u32 val;
> -    int turbo;
> -};
> -
> -static void transition_pstate(void *drvcmd)
> +static void transition_pstate(void *pstate)
>  {
> -    struct drv_cmd *cmd;
> -    cmd = (struct drv_cmd *) drvcmd;
> -
> -
> -    wrmsrl(MSR_PSTATE_CTRL, cmd->val);
> +    wrmsrl(MSR_PSTATE_CTRL, *(int *)pstate);

The variable whose address gets passed into this function is
"unsigned int", so you surely would need to cast to that type
instead of plain "int".

Jan

>  }
>  
>  static void update_cpb(void *data)
> @@ -106,13 +95,11 @@ static int powernow_cpufreq_target(struc
>  {
>      struct acpi_cpufreq_data *data = cpufreq_drv_data[policy->cpu];
>      struct processor_performance *perf;
> -    struct cpufreq_freqs freqs;
>      cpumask_t online_policy_cpus;
> -    struct drv_cmd cmd;
> -    unsigned int next_state = 0; /* Index into freq_table */
> -    unsigned int next_perf_state = 0; /* Index into perf table */
> -    int result = 0;
> -    int j = 0;
> +    unsigned int next_state; /* Index into freq_table */
> +    unsigned int next_perf_state; /* Index into perf table */
> +    int result;
> +    int j;
>  
>      if (unlikely(data == NULL ||
>          data->acpi_data == NULL || data->freq_table == NULL)) {
> @@ -125,9 +112,7 @@ static int powernow_cpufreq_target(struc
>                                              target_freq,
>                                              relation, &next_state);
>      if (unlikely(result))
> -        return -ENODEV;
> -
> -    cpumask_and(&online_policy_cpus, policy->cpus, &cpu_online_map);
> +        return result;
>  
>      next_perf_state = data->freq_table[next_state].index;
>      if (perf->state == next_perf_state) {
> @@ -137,26 +122,28 @@ static int powernow_cpufreq_target(struc
>              return 0;
>      }
>  
> -    if (policy->shared_type != CPUFREQ_SHARED_TYPE_ANY)
> -        cmd.mask = &online_policy_cpus;
> -    else
> -        cmd.mask = cpumask_of(policy->cpu);
> +    if (policy->shared_type == CPUFREQ_SHARED_TYPE_HW &&
> +        likely(policy->cpu == smp_processor_id())) {
> +        transition_pstate(&next_perf_state);
> +        cpufreq_statistic_update(policy->cpu, perf->state, next_perf_state);
> +    } else {
> +        cpumask_and(&online_policy_cpus, policy->cpus, &cpu_online_map);
>  
> -    freqs.old = perf->states[perf->state].core_frequency * 1000;
> -    freqs.new = data->freq_table[next_state].frequency;
> +        if (policy->shared_type == CPUFREQ_SHARED_TYPE_ALL ||
> +            unlikely(policy->cpu != smp_processor_id()))
> +            on_selected_cpus(&online_policy_cpus, transition_pstate,
> +                             &next_perf_state, 1);
> +        else
> +            transition_pstate(&next_perf_state);
>  
> -    cmd.val = next_perf_state;
> -    cmd.turbo = policy->turbo;
> -
> -    on_selected_cpus(cmd.mask, transition_pstate, &cmd, 1);
> -
> -    for_each_cpu(j, &online_policy_cpus)
> -        cpufreq_statistic_update(j, perf->state, next_perf_state);
> +        for_each_cpu(j, &online_policy_cpus)
> +            cpufreq_statistic_update(j, perf->state, next_perf_state);
> +    }    
>  
>      perf->state = next_perf_state;
> -    policy->cur = freqs.new;
> +    policy->cur = data->freq_table[next_state].frequency;
>  
> -    return result;
> +    return 0;
>  }
>  
>  static int powernow_cpufreq_verify(struct cpufreq_policy *policy)
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org 
> http://lists.xen.org/xen-devel 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 10:22:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 10:22:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2Jgz-0004YD-5h; Fri, 17 Aug 2012 10:22:13 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ben.guthro@gmail.com>) id 1T2Jgx-0004Y1-HN
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 10:22:11 +0000
X-Env-Sender: ben.guthro@gmail.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1345198921!9360913!1
X-Originating-IP: [209.85.213.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22668 invoked from network); 17 Aug 2012 10:22:03 -0000
Received: from mail-yw0-f45.google.com (HELO mail-yw0-f45.google.com)
	(209.85.213.45)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 10:22:03 -0000
Received: by yhpp34 with SMTP id p34so4305940yhp.32
	for <xen-devel@lists.xen.org>; Fri, 17 Aug 2012 03:22:01 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=HNx+aS83zxBfY717SWBZTcW/m+TcahZnlEiOd/2EgX0=;
	b=elnECMn4OBIgdqwV8+neyaADVJ5v3wi2WnI1Fp/hEDd1IX+m/9Ekp7GTWNJnNO/oTT
	ZRVwkXPMtod0W5+GCAVBT0gtRUvqfY5R64VSiSUrrHztwnF2IFzuvMX5I7jkifTniY6W
	aGmNZzxlxwgT9861uNewK7rlBMI2fD7pQnzlsiqfWZELez+1DxuMC2Zw8HQCmV/bVlcH
	nojF+il+ZAfQ/m5nAhewI2Lx9ccV0jJ0fMrO2HfAs/h/6PXdPKaPEIq6s+Uf96RBLkaS
	7PvXXAD5PayyuWMFNGVBIeELYcbqvqVd0yUI67m+/i3qmcm9D3TQ3BwNfGbU/YZYCSV3
	m1Ug==
MIME-Version: 1.0
Received: by 10.50.180.129 with SMTP id do1mr1305732igc.28.1345198921421; Fri,
	17 Aug 2012 03:22:01 -0700 (PDT)
Received: by 10.64.28.203 with HTTP; Fri, 17 Aug 2012 03:22:01 -0700 (PDT)
In-Reply-To: <CAOvdn6VsZvn_1TCWYeuyY5YuTcP8=miK2KwE4583nsj6Qjb_vg@mail.gmail.com>
References: <CAOvdn6WwgBD-2yf1uu3m9-16=Tb5KUrybXf2KibtnxKbP7YtwA@mail.gmail.com>
	<CAOvdn6UBW-yTOMd3YSf5M9JiHLRivyaKL-qzcKG8epszqaKmGg@mail.gmail.com>
	<20120807163320.GC15053@phenom.dumpdata.com>
	<CAOvdn6XcRGKtJxGwJO8J+ZBJr89wfTLc1woW5rhtMM540H9h1w@mail.gmail.com>
	<CAOvdn6XUoSeGoo7X=AJ1kpDxZB9HZEv+TJ4v-=FHcyR4=CHK-w@mail.gmail.com>
	<502240F302000078000937A6@nat28.tlf.novell.com>
	<CAOvdn6WN3kxC8Bb9g6MpfLULu47EGSGCtajPc-SFeJ-8VSUQDQ@mail.gmail.com>
	<CAOvdn6Xk1inm1hU55i0phU2qvSWyJLNoyDdn43biBAY2OewPtw@mail.gmail.com>
	<5023F5400200007800093F4E@nat28.tlf.novell.com>
	<CAOvdn6U-f3C4U8xVPvDyRFkFFLkbfiCXg_n+9zgA9V5u+q5jfw@mail.gmail.com>
	<5023F8B80200007800093F8C@nat28.tlf.novell.com>
	<CAOvdn6XKHfete_+=Hkvt20G7_cJ-4i8+9j9YD30OxzQ16Ru-+g@mail.gmail.com>
	<5024CB720200007800094124@nat28.tlf.novell.com>
	<CAOvdn6VVJdhWVYa-gNVt3euE4AbGi1TijUCUd1wGzv0SCWEUYA@mail.gmail.com>
	<CAOvdn6USnG9Y4MNBhrDZmob0oJ8BgFfKTvFEhcKbXDxNmzfZ-Q@mail.gmail.com>
	<502B75BA0200007800094FD4@nat28.tlf.novell.com>
	<CAOvdn6Ww5oEj6Xg=XT4dWCYWUBuOdPqLz-CqNjz4HmCScE+==g@mail.gmail.com>
	<CAOvdn6Wsjds_dmZ0dwDTYmD-o-OqjBH5-b9mitAuUADE+A4_ag@mail.gmail.com>
	<502BB8FD02000078000951E0@nat28.tlf.novell.com>
	<CAOvdn6VD6bp13F-tvYdkBsj4hkhE7PTGZrapndtdHrDKeZqccg@mail.gmail.com>
	<502CCC0002000078000955C4@nat28.tlf.novell.com>
	<CAOvdn6VEuX7R2_wz74ZvCxMjC-=YUir9kP5sdxEVaKVspRUd_w@mail.gmail.com>
	<502CF0A8020000780009574E@nat28.tlf.novell.com>
	<CAOvdn6VsZvn_1TCWYeuyY5YuTcP8=miK2KwE4583nsj6Qjb_vg@mail.gmail.com>
Date: Fri, 17 Aug 2012 06:22:01 -0400
X-Google-Sender-Auth: QiLDg1Ycku_-u2DqT_EJmSTvFOo
Message-ID: <CAOvdn6UZ61363sKb6N_2pCeOBt-RzHK96Vxudmey4NxmUCHoDQ@mail.gmail.com>
From: Ben Guthro <ben@guthro.net>
To: Jan Beulich <JBeulich@suse.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, john.baboval@citrix.com,
	Thomas Goetz <thomas.goetz@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen4.2 S3 regression?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Aug 16, 2012 at 7:56 AM, Ben Guthro <ben@guthro.net> wrote:
> In order to not flood the list with large logs, I put the logs here:
> https://citrix.sharefile.com/d/sfab699024a54df39
> I wasn't sure if you wanted a log with, or without the calls to
> evtchn_move_pirqs() commented out - so I included both.

I received notifications that this got downloaded yesterday by a couple people.
Did you have an opportunity to review it?

If so - did you glean any new knowledge?


Thanks for your time

Ben

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 10:31:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 10:31:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2Jpa-0004ik-6J; Fri, 17 Aug 2012 10:31:06 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T2JpZ-0004if-KE
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 10:31:05 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1345199443!2234380!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14973 invoked from network); 17 Aug 2012 10:30:45 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-11.tower-27.messagelabs.com with SMTP;
	17 Aug 2012 10:30:45 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 17 Aug 2012 11:30:43 +0100
Message-Id: <502E399B0200007800095DA1@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 17 Aug 2012 11:31:23 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: <boris.ostrovsky@amd.com>
References: <85190245a94d9945b765.1345135279@localhost.localdomain>
In-Reply-To: <85190245a94d9945b765.1345135279@localhost.localdomain>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] AMD,
 powernow: Update P-state directly when _PSD's CoordType is
 DOMAIN_COORD_TYPE_HW_ALL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 16.08.12 at 18:41, Boris Ostrovsky <boris.ostrovsky@amd.com> wrote:
> @@ -137,26 +122,28 @@ static int powernow_cpufreq_target(struc
>              return 0;
>      }
>  
> -    if (policy->shared_type != CPUFREQ_SHARED_TYPE_ANY)
> -        cmd.mask = &online_policy_cpus;
> -    else
> -        cmd.mask = cpumask_of(policy->cpu);
> +    if (policy->shared_type == CPUFREQ_SHARED_TYPE_HW &&
> +        likely(policy->cpu == smp_processor_id())) {
> +        transition_pstate(&next_perf_state);
> +        cpufreq_statistic_update(policy->cpu, perf->state, next_perf_state);

Actually - is this enough? Doesn't this also need to be done based
on policy->cpus?

Jan

> +    } else {
> +        cpumask_and(&online_policy_cpus, policy->cpus, &cpu_online_map);
>  
> -    freqs.old = perf->states[perf->state].core_frequency * 1000;
> -    freqs.new = data->freq_table[next_state].frequency;
> +        if (policy->shared_type == CPUFREQ_SHARED_TYPE_ALL ||
> +            unlikely(policy->cpu != smp_processor_id()))
> +            on_selected_cpus(&online_policy_cpus, transition_pstate,
> +                             &next_perf_state, 1);
> +        else
> +            transition_pstate(&next_perf_state);
>  
> -    cmd.val = next_perf_state;
> -    cmd.turbo = policy->turbo;
> -
> -    on_selected_cpus(cmd.mask, transition_pstate, &cmd, 1);
> -
> -    for_each_cpu(j, &online_policy_cpus)
> -        cpufreq_statistic_update(j, perf->state, next_perf_state);
> +        for_each_cpu(j, &online_policy_cpus)
> +            cpufreq_statistic_update(j, perf->state, next_perf_state);
> +    }    
>  
>      perf->state = next_perf_state;
> -    policy->cur = freqs.new;
> +    policy->cur = data->freq_table[next_state].frequency;
>  
> -    return result;
> +    return 0;
>  }
>  
>  static int powernow_cpufreq_verify(struct cpufreq_policy *policy)
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org 
> http://lists.xen.org/xen-devel 




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 10:40:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 10:40:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2Jy7-0004w6-5x; Fri, 17 Aug 2012 10:39:55 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T2Jy6-0004w1-7i
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 10:39:54 +0000
Received: from [85.158.143.99:28791] by server-2.bemta-4.messagelabs.com id
	B8/0D-31966-97F1E205; Fri, 17 Aug 2012 10:39:53 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-4.tower-216.messagelabs.com!1345199993!23393657!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1489 invoked from network); 17 Aug 2012 10:39:53 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-4.tower-216.messagelabs.com with SMTP;
	17 Aug 2012 10:39:53 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 17 Aug 2012 11:39:52 +0100
Message-Id: <502E3BC00200007800095DAB@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 17 Aug 2012 11:40:32 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ben Guthro" <ben@guthro.net>
References: <CAOvdn6WwgBD-2yf1uu3m9-16=Tb5KUrybXf2KibtnxKbP7YtwA@mail.gmail.com>
	<CAOvdn6UBW-yTOMd3YSf5M9JiHLRivyaKL-qzcKG8epszqaKmGg@mail.gmail.com>
	<20120807163320.GC15053@phenom.dumpdata.com>
	<CAOvdn6XcRGKtJxGwJO8J+ZBJr89wfTLc1woW5rhtMM540H9h1w@mail.gmail.com>
	<CAOvdn6XUoSeGoo7X=AJ1kpDxZB9HZEv+TJ4v-=FHcyR4=CHK-w@mail.gmail.com>
	<502240F302000078000937A6@nat28.tlf.novell.com>
	<CAOvdn6WN3kxC8Bb9g6MpfLULu47EGSGCtajPc-SFeJ-8VSUQDQ@mail.gmail.com>
	<CAOvdn6Xk1inm1hU55i0phU2qvSWyJLNoyDdn43biBAY2OewPtw@mail.gmail.com>
	<5023F5400200007800093F4E@nat28.tlf.novell.com>
	<CAOvdn6U-f3C4U8xVPvDyRFkFFLkbfiCXg_n+9zgA9V5u+q5jfw@mail.gmail.com>
	<5023F8B80200007800093F8C@nat28.tlf.novell.com>
	<CAOvdn6XKHfete_+=Hkvt20G7_cJ-4i8+9j9YD30OxzQ16Ru-+g@mail.gmail.com>
	<5024CB720200007800094124@nat28.tlf.novell.com>
	<CAOvdn6VVJdhWVYa-gNVt3euE4AbGi1TijUCUd1wGzv0SCWEUYA@mail.gmail.com>
	<CAOvdn6USnG9Y4MNBhrDZmob0oJ8BgFfKTvFEhcKbXDxNmzfZ-Q@mail.gmail.com>
	<502B75BA0200007800094FD4@nat28.tlf.novell.com>
	<CAOvdn6Ww5oEj6Xg=XT4dWCYWUBuOdPqLz-CqNjz4HmCScE+==g@mail.gmail.com>
	<CAOvdn6Wsjds_dmZ0dwDTYmD-o-OqjBH5-b9mitAuUADE+A4_ag@mail.gmail.com>
	<502BB8FD02000078000951E0@nat28.tlf.novell.com>
	<CAOvdn6VD6bp13F-tvYdkBsj4hkhE7PTGZrapndtdHrDKeZqccg@mail.gmail.com>
	<502CCC0002000078000955C4@nat28.tlf.novell.com>
	<CAOvdn6VEuX7R2_wz74ZvCxMjC-=YUir9kP5sdxEVaKVspRUd_w@mail.gmail.com>
	<502CF0A8020000780009574E@nat28.tlf.novell.com>
	<CAOvdn6VsZvn_1TCWYeuyY5YuTcP8=miK2KwE4583nsj6Qjb_vg@mail.gmail.com>
	<CAOvdn6UZ61363sKb6N_2pCeOBt-RzHK96Vxudmey4NxmUCHoDQ@mail.gmail.com>
In-Reply-To: <CAOvdn6UZ61363sKb6N_2pCeOBt-RzHK96Vxudmey4NxmUCHoDQ@mail.gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, john.baboval@citrix.com,
	Thomas Goetz <thomas.goetz@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen4.2 S3 regression?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 17.08.12 at 12:22, Ben Guthro <ben@guthro.net> wrote:
> On Thu, Aug 16, 2012 at 7:56 AM, Ben Guthro <ben@guthro.net> wrote:
>> In order to not flood the list with large logs, I put the logs here:
>> https://citrix.sharefile.com/d/sfab699024a54df39 
>> I wasn't sure if you wanted a log with, or without the calls to
>> evtchn_move_pirqs() commented out - so I included both.
> 
> I received notifications that this got downloaded yesterday by a couple 
> people.
> Did you have an opportunity to review it?

Yes, I did.

> If so - did you glean any new knowledge?

Unfortunately not. Instead, I was surprised that there were no
IRQ-fixup-related messages at all, which I still need to make
sense of.

In any case, I'm planning to put together a debugging patch,
but can't yet say when that will be ready.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 10:47:34 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 10:47:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2K5F-00055c-44; Fri, 17 Aug 2012 10:47:17 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <anthony.perard@citrix.com>) id 1T2K5D-00055X-An
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 10:47:15 +0000
Received: from [85.158.139.83:46622] by server-11.bemta-5.messagelabs.com id
	2B/69-29296-2312E205; Fri, 17 Aug 2012 10:47:14 +0000
X-Env-Sender: anthony.perard@citrix.com
X-Msg-Ref: server-16.tower-182.messagelabs.com!1345200430!21295621!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzIwNzA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10589 invoked from network); 17 Aug 2012 10:47:13 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 10:47:12 -0000
X-IronPort-AV: E=Sophos;i="4.77,784,1336363200"; d="scan'208";a="205472010"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 06:47:10 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 17 Aug 2012 06:47:10 -0400
Received: from [10.80.3.61]	by ukmail1.uk.xensource.com with esmtp (Exim 4.69)
	(envelope-from <anthony.perard@citrix.com>)	id 1T2K57-0000EP-IB	for
	xen-devel@lists.xen.org; Fri, 17 Aug 2012 11:47:09 +0100
Message-ID: <502E212E.6070306@citrix.com>
Date: Fri, 17 Aug 2012 11:47:10 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120717 Thunderbird/14.0
MIME-Version: 1.0
To: Xen Devel <xen-devel@lists.xen.org>
Subject: [Xen-devel] [ANNOUNCE] Git mirror of Xen repositories available
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi everyone,

I'm pleased to announce that a Git mirror repository is now available
(and kept up to date). It contains several branches:

  * master --> xen-unstable.hg
  * staging --> staging/xen-unstable.hg
  * stable-4.0 --> xen-4.0-testing.hg
  * stable-4.1 --> xen-4.1-testing.hg

http://xenbits.xen.org/gitweb/?p=xen.git
git://xenbits.xen.org/xen.git

Regards,

-- 
Anthony PERARD

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 10:58:22 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 10:58:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2KFh-0005FJ-7d; Fri, 17 Aug 2012 10:58:05 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T2KFe-0005FE-SZ
	for Xen-devel@lists.xensource.com; Fri, 17 Aug 2012 10:58:03 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1345201021!9585403!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk0MzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17330 invoked from network); 17 Aug 2012 10:57:02 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 10:57:02 -0000
X-IronPort-AV: E=Sophos;i="4.77,784,1336348800"; d="scan'208";a="14057868"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 10:57:00 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 17 Aug 2012 11:57:00 +0100
Date: Fri, 17 Aug 2012 11:56:43 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1345192115.30865.86.camel@zakaz.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1208171151070.15568@kaball.uk.xensource.com>
References: <20120815175724.3405043a@mantra.us.oracle.com>
	<alpine.DEB.2.02.1208161454430.4850@kaball.uk.xensource.com>
	<20120816114650.4db2079f@mantra.us.oracle.com>
	<1345192115.30865.86.camel@zakaz.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [RFC PATCH 1/8]: PVH: Basic and preparatory changes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 17 Aug 2012, Ian Campbell wrote:
> > > > diff --git a/include/xen/interface/xen.h
> > > > b/include/xen/interface/xen.h index 0801468..1d5bc36 100644
> > > > --- a/include/xen/interface/xen.h
> > > > +++ b/include/xen/interface/xen.h
> > > > @@ -493,6 +493,7 @@ struct dom0_vga_console_info {
> > > >  /* These flags are passed in the 'flags' field of start_info_t. */
> > > >  #define SIF_PRIVILEGED    (1<<0)  /* Is the domain privileged? */
> > > >  #define SIF_INITDOMAIN    (1<<1)  /* Is this the initial control
> > > > domain? */ +#define SIF_IS_PVINHVM    (1<<4)  /* Is it a PV running
> > > > in HVM container? */ #define SIF_PM_MASK       (0xFF<<8) /* reserve
> > > > 1 byte for xen-pm options */ 
> > > >  typedef uint64_t cpumap_t;
> > > 
> > > I would avoid adding SIF_IS_PVINHVM, an x86 specific concept, into a
> > > generic xen.h interface file. 
> 
> Is PVH actually more like a XENFEAT style thing?
> 
> Is there actually anywhere which wants to know specifically about PVH
> rather than some more specific property which a PVH domain happens to
> have?

That's exactly the point.


> > > > +/* xen_pv_domain check is necessary as start_info ptr is null in
> > > > HVM. Also,
> > > > + * note, xen PVH domain shares lot of HVM code */
> > > > +#define xen_pvh_domain()       (xen_pv_domain()
> > > > &&                     \
> > > > +				(xen_start_info->flags &
> > > > SIF_IS_PVINHVM))
> > >  
> > > Also here.
> > 
> > Hmm.. I can move '#define xen_pvh_domain()' to x86 header, easy. But,
> > not sure how to define SIF_IS_PVINHVM then? I could put SIF_IS_RESVD in
> > include/xen/interface/xen.h, and then do 
> > #define SIF_IS_PVINHVM SIF_IS_RESVD in an x86 file.
> > 
> > What do you think about that?
> 
> Should PVH actually be a new value in the xen_domain_type enum?

I don't think we should have a xen_domain_type pvh at all.
If we really need it we should define it as a set of individual
properties:

#define xen_pvh_domain() (xen_pv_domain() && \
                          xen_feature(XENFEAT_auto_translated_physmap) && \
                          xen_have_vector_callback)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 10:58:22 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 10:58:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2KFh-0005FJ-7d; Fri, 17 Aug 2012 10:58:05 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T2KFe-0005FE-SZ
	for Xen-devel@lists.xensource.com; Fri, 17 Aug 2012 10:58:03 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1345201021!9585403!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk0MzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17330 invoked from network); 17 Aug 2012 10:57:02 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 10:57:02 -0000
X-IronPort-AV: E=Sophos;i="4.77,784,1336348800"; d="scan'208";a="14057868"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 10:57:00 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 17 Aug 2012 11:57:00 +0100
Date: Fri, 17 Aug 2012 11:56:43 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1345192115.30865.86.camel@zakaz.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1208171151070.15568@kaball.uk.xensource.com>
References: <20120815175724.3405043a@mantra.us.oracle.com>
	<alpine.DEB.2.02.1208161454430.4850@kaball.uk.xensource.com>
	<20120816114650.4db2079f@mantra.us.oracle.com>
	<1345192115.30865.86.camel@zakaz.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [RFC PATCH 1/8]: PVH: Basic and preparatory changes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 17 Aug 2012, Ian Campbell wrote:
> > > > diff --git a/include/xen/interface/xen.h
> > > > b/include/xen/interface/xen.h index 0801468..1d5bc36 100644
> > > > --- a/include/xen/interface/xen.h
> > > > +++ b/include/xen/interface/xen.h
> > > > @@ -493,6 +493,7 @@ struct dom0_vga_console_info {
> > > >  /* These flags are passed in the 'flags' field of start_info_t. */
> > > >  #define SIF_PRIVILEGED    (1<<0)  /* Is the domain privileged? */
> > > >  #define SIF_INITDOMAIN    (1<<1)  /* Is this the initial control
> > > > domain? */ +#define SIF_IS_PVINHVM    (1<<4)  /* Is it a PV running
> > > > in HVM container? */ #define SIF_PM_MASK       (0xFF<<8) /* reserve
> > > > 1 byte for xen-pm options */ 
> > > >  typedef uint64_t cpumap_t;
> > > 
> > > I would avoid adding SIF_IS_PVINHVM, an x86-specific concept, into a
> > > generic xen.h interface file.
> 
> Is PVH actually more like a XENFEAT-style thing?
> 
> Is there actually anywhere which wants to know specifically about PVH,
> rather than about some more specific property which a PVH domain happens
> to have?

That's exactly the point.


> > > > +/* xen_pv_domain check is necessary as start_info ptr is null in
> > > > HVM. Also,
> > > > + * note, xen PVH domain shares lot of HVM code */
> > > > +#define xen_pvh_domain()       (xen_pv_domain()
> > > > &&                     \
> > > > +				(xen_start_info->flags &
> > > > SIF_IS_PVINHVM))
> > >  
> > > Also here.
> > 
> > Hmm.. I can move '#define xen_pvh_domain()' to an x86 header, easy. But
> > I'm not sure how to define SIF_IS_PVINHVM then. I could put SIF_IS_RESVD
> > in include/xen/interface/xen.h, and then do
> > #define SIF_IS_PVINHVM SIF_IS_RESVD in an x86 file.
> > 
> > What do you think about that?
> 
> Should PVH actually be a new value in the xen_domain_type enum?

I don't think we should have a xen_domain_type pvh at all.
If we really need it we should define it as a set of individual
properties:

#define xen_pvh_domain() (xen_pv_domain() && \
                          xen_feature(XENFEAT_auto_translated_physmap) && \
                          xen_have_vector_callback)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 11:14:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 11:14:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2KVe-0005Vy-UZ; Fri, 17 Aug 2012 11:14:34 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1T2KVd-0005Vt-KP
	for xen-devel@lists.xensource.com; Fri, 17 Aug 2012 11:14:33 +0000
Received: from [85.158.139.83:41815] by server-2.bemta-5.messagelabs.com id
	5B/C9-10142-8972E205; Fri, 17 Aug 2012 11:14:32 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-16.tower-182.messagelabs.com!1345202069!21301668!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzIwNzA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32528 invoked from network); 17 Aug 2012 11:14:31 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 11:14:31 -0000
X-IronPort-AV: E=Sophos;i="4.77,784,1336363200"; d="scan'208";a="205473943"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 07:14:13 -0400
Received: from [10.80.2.76] (10.80.2.76) by FTLPMAILMX02.citrite.net
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0; Fri, 17 Aug 2012
	07:14:13 -0400
Message-ID: <502E2784.8060806@citrix.com>
Date: Fri, 17 Aug 2012 12:14:12 +0100
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20120428 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <1345132214-15298-1-git-send-email-konrad.wilk@oracle.com>	<1345132214-15298-2-git-send-email-konrad.wilk@oracle.com>	<20120816173215.GB9790@phenom.dumpdata.com>
	<20120816210206.GA17966@phenom.dumpdata.com>
In-Reply-To: <20120816210206.GA17966@phenom.dumpdata.com>
Cc: xen-devel@lists.xensource.com, linux-kernel@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH 1/2] xen/p2m: Fix for 32-bit builds the
 "Reserve 8MB of _brk space for P2M"
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 16/08/12 22:02, Konrad Rzeszutek Wilk wrote:
> 
> So I thought about this some more and came up with this patch. It's an
> RFC and I'm going to run it through some overnight tests to see how they fare.
> 
> 
> commit da858a92dbeb52fb3246e3d0f1dd57989b5b1734
> Author: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Date:   Fri Jul 27 16:05:47 2012 -0400
> 
>     xen/p2m: Reuse existing P2M leafs if they are filled with 1:1 PFNs or INVALID.
>     
>     If a P2M leaf is completely packed with INVALID_P2M_ENTRY or with
>     1:1 PFNs (i.e. IDENTITY_FRAME-type PFNs), we can swap the P2M leaf
>     with either p2m_missing or p2m_identity respectively. The old
>     page (which was created via extend_brk or was grafted on from the
>     mfn_list) can be re-used for setting new PFNs.

Does this actually find any p2m pages to reclaim?

xen_set_identity_and_release() is careful to set the largest possible
range as 1:1 and the comments at the top of p2m.c suggest the mid
entries will be made to point to p2m_identity already.

David

>     This also means we can remove git commit:
>     5bc6f9888db5739abfa0cae279b4b442e4db8049
>     xen/p2m: Reserve 8MB of _brk space for P2M leafs when populating back
>     which tried to fix this.
>     
>     Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> 
> diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
> index 29244d0..b6b7c10 100644
> --- a/arch/x86/xen/p2m.c
> +++ b/arch/x86/xen/p2m.c
> @@ -194,11 +194,6 @@ RESERVE_BRK(p2m_mid_mfn, PAGE_SIZE * (MAX_DOMAIN_PAGES / (P2M_PER_PAGE * P2M_MID
>   * boundary violation will require three middle nodes. */
>  RESERVE_BRK(p2m_mid_identity, PAGE_SIZE * 2 * 3);
>  
> -/* When we populate back during bootup, the amount of pages can vary. The
> - * max we have is seen is 395979, but that does not mean it can't be more.
> - * Some machines can have 3GB I/O holes so lets reserve for that. */
> -RESERVE_BRK(p2m_populated, 786432 * sizeof(unsigned long));
> -
>  static inline unsigned p2m_top_index(unsigned long pfn)
>  {
>  	BUG_ON(pfn >= MAX_P2M_PFN);
> @@ -575,12 +570,99 @@ static bool __init early_alloc_p2m(unsigned long pfn)
>  	}
>  	return true;
>  }
> +
> +/*
> + * Skim over the P2M tree looking at pages that are either filled with
> + * INVALID_P2M_ENTRY or with 1:1 PFNs. If found, re-use that page and
> + * replace the P2M leaf with a p2m_missing or p2m_identity.
> + * Stick the old page in the new P2M tree location.
> + */
> +bool __init early_can_reuse_p2m_middle(unsigned long set_pfn, unsigned long set_mfn)
> +{
> +	unsigned topidx;
> +	unsigned mididx;
> +	unsigned ident_pfns;
> +	unsigned inv_pfns;
> +	unsigned long *p2m;
> +	unsigned long *mid_mfn_p;
> +	unsigned idx;
> +	unsigned long pfn;
> +
> +	/* We only look when this entails a P2M middle layer */
> +	if (p2m_index(set_pfn))
> +		return false;
> +
> +	for (pfn = 0; pfn <= MAX_DOMAIN_PAGES; pfn += P2M_PER_PAGE) {
> +		topidx = p2m_top_index(pfn);
> +
> +		if (!p2m_top[topidx])
> +			continue;
> +
> +		if (p2m_top[topidx] == p2m_mid_missing)
> +			continue;
> +
> +		mididx = p2m_mid_index(pfn);
> +		p2m = p2m_top[topidx][mididx];
> +		if (!p2m)
> +			continue;
> +
> +		if ((p2m == p2m_missing) || (p2m == p2m_identity))
> +			continue;
> +
> +		if ((unsigned long)p2m == INVALID_P2M_ENTRY)
> +			continue;
> +
> +		ident_pfns = 0;
> +		inv_pfns = 0;
> +		for (idx = 0; idx < P2M_PER_PAGE; idx++) {
> +			/* IDENTITY_PFNs are 1:1 */
> +			if (p2m[idx] == IDENTITY_FRAME(pfn + idx))
> +				ident_pfns++;
> +			else if (p2m[idx] == INVALID_P2M_ENTRY)
> +				inv_pfns++;
> +			else
> +				break;
> +		}
> +		if ((ident_pfns == P2M_PER_PAGE) || (inv_pfns == P2M_PER_PAGE))
> +			goto found;
> +	}
> +	return false;
> +found:
> +	/* Found one, replace old with p2m_identity or p2m_missing */
> +	p2m_top[topidx][mididx] = (ident_pfns ? p2m_identity : p2m_missing);
> +	/* And the other for save/restore.. */
> +	mid_mfn_p = p2m_top_mfn_p[topidx];
> +	/* NOTE: Even if it is a p2m_identity it should still be point to
> +	 * a page filled with INVALID_P2M_ENTRY entries. */
> +	mid_mfn_p[mididx] = virt_to_mfn(p2m_missing);
> +
> +	/* Reset where we want to stick the old page in. */
> +	topidx = p2m_top_index(set_pfn);
> +	mididx = p2m_mid_index(set_pfn);
> +
> +	/* This shouldn't happen */
> +	if (WARN_ON(p2m_top[topidx] == p2m_mid_missing))
> +		early_alloc_p2m(set_pfn);
> +
> +	if (WARN_ON(p2m_top[topidx][mididx] != p2m_missing))
> +		return false;
> +
> +	p2m_init(p2m);
> +	p2m_top[topidx][mididx] = p2m;
> +	mid_mfn_p = p2m_top_mfn_p[topidx];
> +	mid_mfn_p[mididx] = virt_to_mfn(p2m);
> +
> +	return true;
> +}
>  bool __init early_set_phys_to_machine(unsigned long pfn, unsigned long mfn)
>  {
>  	if (unlikely(!__set_phys_to_machine(pfn, mfn)))  {
>  		if (!early_alloc_p2m(pfn))
>  			return false;
>  
> +		if (early_can_reuse_p2m_middle(pfn, mfn))
> +			return __set_phys_to_machine(pfn, mfn);
> +
>  		if (!early_alloc_p2m_middle(pfn, false /* boundary crossover OK!*/))
>  			return false;
>  
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 11:15:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 11:15:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2KW5-0005XH-BW; Fri, 17 Aug 2012 11:15:01 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Wei.Wang2@amd.com>) id 1T2KW3-0005Wz-9b
	for xen-devel@lists.xensource.com; Fri, 17 Aug 2012 11:14:59 +0000
Received: from [85.158.143.99:39116] by server-1.bemta-4.messagelabs.com id
	82/7C-07754-2B72E205; Fri, 17 Aug 2012 11:14:58 +0000
X-Env-Sender: Wei.Wang2@amd.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1345202094!28112225!1
X-Originating-IP: [216.32.181.186]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12041 invoked from network); 17 Aug 2012 11:14:55 -0000
Received: from ch1ehsobe006.messaging.microsoft.com (HELO
	ch1outboundpool.messaging.microsoft.com) (216.32.181.186)
	by server-3.tower-216.messagelabs.com with AES128-SHA encrypted SMTP;
	17 Aug 2012 11:14:55 -0000
Received: from mail38-ch1-R.bigfish.com (10.43.68.225) by
	CH1EHSOBE010.bigfish.com (10.43.70.60) with Microsoft SMTP Server id
	14.1.225.23; Fri, 17 Aug 2012 11:14:54 +0000
Received: from mail38-ch1 (localhost [127.0.0.1])	by mail38-ch1-R.bigfish.com
	(Postfix) with ESMTP id D4D583C0310;
	Fri, 17 Aug 2012 11:14:53 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:163.181.249.109; KIP:(null); UIP:(null);
	IPV:NLI; H:ausb3twp02.amd.com; RD:none; EFVD:NLI
X-SpamScore: 0
X-BigFish: VPS0(zzbb2dI98dI9371I1432I4015I78fbmzz1202hzz8275bhz2dh668h839hd25he5bhf0ah107ah)
Received: from mail38-ch1 (localhost.localdomain [127.0.0.1]) by mail38-ch1
	(MessageSwitch) id 1345202090945261_8051;
	Fri, 17 Aug 2012 11:14:50 +0000 (UTC)
Received: from CH1EHSMHS008.bigfish.com (snatpool2.int.messaging.microsoft.com
	[10.43.68.233])	by mail38-ch1.bigfish.com (Postfix) with ESMTP id
	E443536016A;	Fri, 17 Aug 2012 11:14:50 +0000 (UTC)
Received: from ausb3twp02.amd.com (163.181.249.109) by
	CH1EHSMHS008.bigfish.com (10.43.70.8) with Microsoft SMTP Server id
	14.1.225.23; Fri, 17 Aug 2012 11:14:50 +0000
X-WSS-ID: 0M8WCKM-02-7AL-02
X-M-MSG: 
Received: from sausexedgep01.amd.com (sausexedgep01-ext.amd.com
	[163.181.249.72])	(using TLSv1 with cipher AES128-SHA (128/128
	bits))	(No
	client certificate requested)	by ausb3twp02.amd.com (Axway MailGate
	3.8.1)
	with ESMTP id 208D0C810A;	Fri, 17 Aug 2012 06:14:46 -0500 (CDT)
Received: from SAUSEXDAG01.amd.com (163.181.55.1) by sausexedgep01.amd.com
	(163.181.36.54) with Microsoft SMTP Server (TLS) id 8.3.192.1;
	Fri, 17 Aug 2012 06:15:35 -0500
Received: from storexhtp02.amd.com (172.24.4.4) by sausexdag01.amd.com
	(163.181.55.1) with Microsoft SMTP Server (TLS) id 14.1.323.3;
	Fri, 17 Aug 2012 06:14:49 -0500
Received: from gwo.osrc.amd.com (165.204.16.204) by storexhtp02.amd.com
	(172.24.4.4) with Microsoft SMTP Server id 8.3.213.0; Fri, 17 Aug 2012
	07:14:46 -0400
Received: from [165.204.15.57] (gran.osrc.amd.com [165.204.15.57])	by
	gwo.osrc.amd.com (Postfix) with ESMTP id E4EF349C20C; Fri, 17 Aug 2012
	12:14:45 +0100 (BST)
Message-ID: <502E27B0.7030803@amd.com>
Date: Fri, 17 Aug 2012 13:14:56 +0200
From: Wei Wang <wei.wang2@amd.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:12.0) Gecko/20120421 Thunderbird/12.0
MIME-Version: 1.0
To: Santosh Jodh <santosh.jodh@citrix.com>
References: <575a53faf4e1f3533096.1345134973@REDBLD-XS.ad.xensource.com>
In-Reply-To: <575a53faf4e1f3533096.1345134973@REDBLD-XS.ad.xensource.com>
X-OriginatorOrg: amd.com
Cc: xen-devel@lists.xensource.com, tim@xen.org, JBeulich@suse.com,
	xiantao.zhang@intel.com
Subject: Re: [Xen-devel] [PATCH] Dump IOMMU p2m table
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/16/2012 06:36 PM, Santosh Jodh wrote:
> New key handler 'o' to dump the IOMMU p2m table for each domain.
> Skips dumping table for domain0.
> Intel and AMD specific iommu_ops handler for dumping p2m table.
>
> Incorporated feedback from Jan Beulich and Wei Wang.
> Fixed indent printing with %*s.
> Removed superfluous superpage and other attribute prints.
> Made next_level use consistent for AMD IOMMU dumps; warn if inconsistent.
>
> Signed-off-by: Santosh Jodh<santosh.jodh@citrix.com>
>
> diff -r 6d56e31fe1e1 -r 575a53faf4e1 xen/drivers/passthrough/amd/pci_amd_iommu.c
> --- a/xen/drivers/passthrough/amd/pci_amd_iommu.c	Wed Aug 15 09:41:21 2012 +0100
> +++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c	Thu Aug 16 09:28:24 2012 -0700
> @@ -22,6 +22,7 @@
>   #include<xen/pci.h>
>   #include<xen/pci_regs.h>
>   #include<xen/paging.h>
> +#include<xen/softirq.h>
>   #include<asm/hvm/iommu.h>
>   #include<asm/amd-iommu.h>
>   #include<asm/hvm/svm/amd-iommu-proto.h>
> @@ -512,6 +513,80 @@ static int amd_iommu_group_id(u16 seg, u
>
>   #include<asm/io_apic.h>
>
> +static void amd_dump_p2m_table_level(struct page_info* pg, int level,
> +                                     paddr_t gpa, int indent)
> +{
> +    paddr_t address;
> +    void *table_vaddr, *pde;
> +    paddr_t next_table_maddr;
> +    int index, next_level, present;
> +    u32 *entry;
> +
> +    if ( level<  1 )
> +        return;
> +
> +    table_vaddr = __map_domain_page(pg);
> +    if ( table_vaddr == NULL )
> +    {
> +        printk("Failed to map IOMMU domain page %"PRIpaddr"\n",
> +                page_to_maddr(pg));
> +        return;
> +    }
> +
> +    for ( index = 0; index<  PTE_PER_TABLE_SIZE; index++ )
> +    {
> +        if ( !(index % 2) )
> +            process_pending_softirqs();
> +
> +        pde = table_vaddr + (index * IOMMU_PAGE_TABLE_ENTRY_SIZE);
> +        next_table_maddr = amd_iommu_get_next_table_from_pte(pde);
> +        entry = (u32*)pde;
> +
> +        present = get_field_from_reg_u32(entry[0],
> +                                         IOMMU_PDE_PRESENT_MASK,
> +                                         IOMMU_PDE_PRESENT_SHIFT);
> +
> +        if ( !present )
> +            continue;
> +
> +        next_level = get_field_from_reg_u32(entry[0],
> +                                            IOMMU_PDE_NEXT_LEVEL_MASK,
> +                                            IOMMU_PDE_NEXT_LEVEL_SHIFT);
> +
> +        if ( next_level != (level - 1) )
> +        {
> +            printk("IOMMU p2m table error. next_level = %d, expected %d\n",
> +                   next_level, level - 1);
> +
> +            continue;
> +       }

Hi,

This check is not correct for 2MB and 1GB pages. For example, if a guest
uses 4-level page tables, then for a 2MB entry the next_level fields are
3 (l4) -> 2 (l3) -> 0 (l2), because the l2 entries become PTEs and PTEs
have next_level = 0. I saw the following output for those pages:

(XEN) IOMMU p2m table error. next_level = 0, expected 1
(XEN) IOMMU p2m table error. next_level = 0, expected 1
(XEN) IOMMU p2m table error. next_level = 0, expected 1
(XEN) IOMMU p2m table error. next_level = 0, expected 1
(XEN) IOMMU p2m table error. next_level = 0, expected 1
(XEN) IOMMU p2m table error. next_level = 0, expected 1
(XEN) IOMMU p2m table error. next_level = 0, expected 1
(XEN) IOMMU p2m table error. next_level = 0, expected 1
(XEN) IOMMU p2m table error. next_level = 0, expected 1
(XEN) IOMMU p2m table error. next_level = 0, expected 1

Thanks,
Wei


> +
> +        address = gpa + amd_offset_level_address(index, level);
> +        if ( next_level>= 1 )
> +            amd_dump_p2m_table_level(
> +                maddr_to_page(next_table_maddr), next_level,
> +                address, indent + 1);
> +        else
> +            printk("%*sgfn: %08lx  mfn: %08lx\n",
> +                   indent, "",
> +                   (unsigned long)PFN_DOWN(address),
> +                   (unsigned long)PFN_DOWN(next_table_maddr));
> +    }
> +
> +    unmap_domain_page(table_vaddr);
> +}
> +
> +static void amd_dump_p2m_table(struct domain *d)
> +{
> +    struct hvm_iommu *hd  = domain_hvm_iommu(d);
> +
> +    if ( !hd->root_table )
> +        return;
> +
> +    printk("p2m table has %d levels\n", hd->paging_mode);
> +    amd_dump_p2m_table_level(hd->root_table, hd->paging_mode, 0, 0);
> +}
> +
>   const struct iommu_ops amd_iommu_ops = {
>       .init = amd_iommu_domain_init,
>       .dom0_init = amd_iommu_dom0_init,
> @@ -531,4 +606,5 @@ const struct iommu_ops amd_iommu_ops = {
>       .resume = amd_iommu_resume,
>       .share_p2m = amd_iommu_share_p2m,
>       .crash_shutdown = amd_iommu_suspend,
> +    .dump_p2m_table = amd_dump_p2m_table,
>   };
> diff -r 6d56e31fe1e1 -r 575a53faf4e1 xen/drivers/passthrough/iommu.c
> --- a/xen/drivers/passthrough/iommu.c	Wed Aug 15 09:41:21 2012 +0100
> +++ b/xen/drivers/passthrough/iommu.c	Thu Aug 16 09:28:24 2012 -0700
> @@ -19,10 +19,12 @@
>   #include<xen/paging.h>
>   #include<xen/guest_access.h>
>   #include<xen/softirq.h>
> +#include<xen/keyhandler.h>
>   #include<xsm/xsm.h>
>
>   static void parse_iommu_param(char *s);
>   static int iommu_populate_page_table(struct domain *d);
> +static void iommu_dump_p2m_table(unsigned char key);
>
>   /*
>    * The 'iommu' parameter enables the IOMMU.  Optional comma separated
> @@ -54,6 +56,12 @@ bool_t __read_mostly amd_iommu_perdev_in
>
>   DEFINE_PER_CPU(bool_t, iommu_dont_flush_iotlb);
>
> +static struct keyhandler iommu_p2m_table = {
> +    .diagnostic = 0,
> +    .u.fn = iommu_dump_p2m_table,
> +    .desc = "dump iommu p2m table"
> +};
> +
>   static void __init parse_iommu_param(char *s)
>   {
>       char *ss;
> @@ -119,6 +127,7 @@ void __init iommu_dom0_init(struct domai
>       if ( !iommu_enabled )
>           return;
>
> +    register_keyhandler('o',&iommu_p2m_table);
>       d->need_iommu = !!iommu_dom0_strict;
>       if ( need_iommu(d) )
>       {
> @@ -654,6 +663,34 @@ int iommu_do_domctl(
>       return ret;
>   }
>
> +static void iommu_dump_p2m_table(unsigned char key)
> +{
> +    struct domain *d;
> +    const struct iommu_ops *ops;
> +
> +    if ( !iommu_enabled )
> +    {
> +        printk("IOMMU not enabled!\n");
> +        return;
> +    }
> +
> +    ops = iommu_get_ops();
> +    for_each_domain(d)
> +    {
> +        if ( !d->domain_id )
> +            continue;
> +
> +        if ( iommu_use_hap_pt(d) )
> +        {
> +            printk("\ndomain%d IOMMU p2m table shared with MMU: \n", d->domain_id);
> +            continue;
> +        }
> +
> +        printk("\ndomain%d IOMMU p2m table: \n", d->domain_id);
> +        ops->dump_p2m_table(d);
> +    }
> +}
> +
>   /*
>    * Local variables:
>    * mode: C
> diff -r 6d56e31fe1e1 -r 575a53faf4e1 xen/drivers/passthrough/vtd/iommu.c
> --- a/xen/drivers/passthrough/vtd/iommu.c	Wed Aug 15 09:41:21 2012 +0100
> +++ b/xen/drivers/passthrough/vtd/iommu.c	Thu Aug 16 09:28:24 2012 -0700
> @@ -31,6 +31,7 @@
>   #include<xen/pci.h>
>   #include<xen/pci_regs.h>
>   #include<xen/keyhandler.h>
> +#include<xen/softirq.h>
>   #include<asm/msi.h>
>   #include<asm/irq.h>
>   #if defined(__i386__) || defined(__x86_64__)
> @@ -2365,6 +2366,60 @@ static void vtd_resume(void)
>       }
>   }
>
> +static void vtd_dump_p2m_table_level(paddr_t pt_maddr, int level, paddr_t gpa,
> +                                     int indent)
> +{
> +    paddr_t address;
> +    int i;
> +    struct dma_pte *pt_vaddr, *pte;
> +    int next_level;
> +
> +    if ( level<  1 )
> +        return;
> +
> +    pt_vaddr = map_vtd_domain_page(pt_maddr);
> +    if ( pt_vaddr == NULL )
> +    {
> +        printk("Failed to map VT-D domain page %"PRIpaddr"\n", pt_maddr);
> +        return;
> +    }
> +
> +    next_level = level - 1;
> +    for ( i = 0; i<  PTE_NUM; i++ )
> +    {
> +        if ( !(i % 2) )
> +            process_pending_softirqs();
> +
> +        pte =&pt_vaddr[i];
> +        if ( !dma_pte_present(*pte) )
> +            continue;
> +
> +        address = gpa + offset_level_address(i, level);
> +        if ( next_level>= 1 )
> +            vtd_dump_p2m_table_level(dma_pte_addr(*pte), next_level,
> +                                     address, indent + 1);
> +        else
> +            printk("%*sgfn: %08lx mfn: %08lx\n",
> +                   indent, "",
> +                   (unsigned long)(address>>  PAGE_SHIFT_4K),
> +                   (unsigned long)(pte->val>>  PAGE_SHIFT_4K));
> +    }
> +
> +    unmap_vtd_domain_page(pt_vaddr);
> +}
> +
> +static void vtd_dump_p2m_table(struct domain *d)
> +{
> +    struct hvm_iommu *hd;
> +
> +    if ( list_empty(&acpi_drhd_units) )
> +        return;
> +
> +    hd = domain_hvm_iommu(d);
> +    printk("p2m table has %d levels\n", agaw_to_level(hd->agaw));
> +    vtd_dump_p2m_table_level(hd->pgd_maddr, agaw_to_level(hd->agaw), 0, 0);
> +}
> +
>   const struct iommu_ops intel_iommu_ops = {
>       .init = intel_iommu_domain_init,
>       .dom0_init = intel_iommu_dom0_init,
> @@ -2387,6 +2442,7 @@ const struct iommu_ops intel_iommu_ops =
>       .crash_shutdown = vtd_crash_shutdown,
>       .iotlb_flush = intel_iommu_iotlb_flush,
>       .iotlb_flush_all = intel_iommu_iotlb_flush_all,
> +    .dump_p2m_table = vtd_dump_p2m_table,
>   };
>
>   /*
> diff -r 6d56e31fe1e1 -r 575a53faf4e1 xen/drivers/passthrough/vtd/iommu.h
> --- a/xen/drivers/passthrough/vtd/iommu.h	Wed Aug 15 09:41:21 2012 +0100
> +++ b/xen/drivers/passthrough/vtd/iommu.h	Thu Aug 16 09:28:24 2012 -0700
> @@ -248,6 +248,8 @@ struct context_entry {
>   #define level_to_offset_bits(l) (12 + (l - 1) * LEVEL_STRIDE)
>   #define address_level_offset(addr, level) \
>               ((addr >> level_to_offset_bits(level)) & LEVEL_MASK)
> +#define offset_level_address(offset, level) \
> +            ((u64)(offset) << level_to_offset_bits(level))
>   #define level_mask(l) (((u64)(-1)) << level_to_offset_bits(l))
>   #define level_size(l) (1 << level_to_offset_bits(l))
>   #define align_to_level(addr, l) ((addr + level_size(l) - 1) & level_mask(l))
> diff -r 6d56e31fe1e1 -r 575a53faf4e1 xen/include/asm-x86/hvm/svm/amd-iommu-defs.h
> --- a/xen/include/asm-x86/hvm/svm/amd-iommu-defs.h	Wed Aug 15 09:41:21 2012 +0100
> +++ b/xen/include/asm-x86/hvm/svm/amd-iommu-defs.h	Thu Aug 16 09:28:24 2012 -0700
> @@ -38,6 +38,10 @@
>   #define PTE_PER_TABLE_ALLOC(entries)	\
>   	PAGE_SIZE * (PTE_PER_TABLE_ALIGN(entries) >> PTE_PER_TABLE_SHIFT)
>
> +#define amd_offset_level_address(offset, level) \
> +      	((u64)(offset) << (12 + (PTE_PER_TABLE_SHIFT * \
> +                                (level - IOMMU_PAGING_MODE_LEVEL_1))))
> +
>   #define PCI_MIN_CAP_OFFSET	0x40
>   #define PCI_MAX_CAP_BLOCKS	48
>   #define PCI_CAP_PTR_MASK	0xFC
> diff -r 6d56e31fe1e1 -r 575a53faf4e1 xen/include/xen/iommu.h
> --- a/xen/include/xen/iommu.h	Wed Aug 15 09:41:21 2012 +0100
> +++ b/xen/include/xen/iommu.h	Thu Aug 16 09:28:24 2012 -0700
> @@ -141,6 +141,7 @@ struct iommu_ops {
>       void (*crash_shutdown)(void);
>       void (*iotlb_flush)(struct domain *d, unsigned long gfn, unsigned int page_count);
>       void (*iotlb_flush_all)(struct domain *d);
> +    void (*dump_p2m_table)(struct domain *d);
>   };
>
>   void iommu_update_ire_from_apic(unsigned int apic, unsigned int reg, unsigned int value);
>



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 11:15:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 11:15:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2KW5-0005XH-BW; Fri, 17 Aug 2012 11:15:01 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Wei.Wang2@amd.com>) id 1T2KW3-0005Wz-9b
	for xen-devel@lists.xensource.com; Fri, 17 Aug 2012 11:14:59 +0000
Received: from [85.158.143.99:39116] by server-1.bemta-4.messagelabs.com id
	82/7C-07754-2B72E205; Fri, 17 Aug 2012 11:14:58 +0000
X-Env-Sender: Wei.Wang2@amd.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1345202094!28112225!1
X-Originating-IP: [216.32.181.186]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12041 invoked from network); 17 Aug 2012 11:14:55 -0000
Received: from ch1ehsobe006.messaging.microsoft.com (HELO
	ch1outboundpool.messaging.microsoft.com) (216.32.181.186)
	by server-3.tower-216.messagelabs.com with AES128-SHA encrypted SMTP;
	17 Aug 2012 11:14:55 -0000
Received: from mail38-ch1-R.bigfish.com (10.43.68.225) by
	CH1EHSOBE010.bigfish.com (10.43.70.60) with Microsoft SMTP Server id
	14.1.225.23; Fri, 17 Aug 2012 11:14:54 +0000
Received: from mail38-ch1 (localhost [127.0.0.1])	by mail38-ch1-R.bigfish.com
	(Postfix) with ESMTP id D4D583C0310;
	Fri, 17 Aug 2012 11:14:53 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:163.181.249.109; KIP:(null); UIP:(null);
	IPV:NLI; H:ausb3twp02.amd.com; RD:none; EFVD:NLI
X-SpamScore: 0
X-BigFish: VPS0(zzbb2dI98dI9371I1432I4015I78fbmzz1202hzz8275bhz2dh668h839hd25he5bhf0ah107ah)
Received: from mail38-ch1 (localhost.localdomain [127.0.0.1]) by mail38-ch1
	(MessageSwitch) id 1345202090945261_8051;
	Fri, 17 Aug 2012 11:14:50 +0000 (UTC)
Received: from CH1EHSMHS008.bigfish.com (snatpool2.int.messaging.microsoft.com
	[10.43.68.233])	by mail38-ch1.bigfish.com (Postfix) with ESMTP id
	E443536016A;	Fri, 17 Aug 2012 11:14:50 +0000 (UTC)
Received: from ausb3twp02.amd.com (163.181.249.109) by
	CH1EHSMHS008.bigfish.com (10.43.70.8) with Microsoft SMTP Server id
	14.1.225.23; Fri, 17 Aug 2012 11:14:50 +0000
X-WSS-ID: 0M8WCKM-02-7AL-02
X-M-MSG: 
Received: from sausexedgep01.amd.com (sausexedgep01-ext.amd.com
	[163.181.249.72])	(using TLSv1 with cipher AES128-SHA (128/128
	bits))	(No
	client certificate requested)	by ausb3twp02.amd.com (Axway MailGate
	3.8.1)
	with ESMTP id 208D0C810A;	Fri, 17 Aug 2012 06:14:46 -0500 (CDT)
Received: from SAUSEXDAG01.amd.com (163.181.55.1) by sausexedgep01.amd.com
	(163.181.36.54) with Microsoft SMTP Server (TLS) id 8.3.192.1;
	Fri, 17 Aug 2012 06:15:35 -0500
Received: from storexhtp02.amd.com (172.24.4.4) by sausexdag01.amd.com
	(163.181.55.1) with Microsoft SMTP Server (TLS) id 14.1.323.3;
	Fri, 17 Aug 2012 06:14:49 -0500
Received: from gwo.osrc.amd.com (165.204.16.204) by storexhtp02.amd.com
	(172.24.4.4) with Microsoft SMTP Server id 8.3.213.0; Fri, 17 Aug 2012
	07:14:46 -0400
Received: from [165.204.15.57] (gran.osrc.amd.com [165.204.15.57])	by
	gwo.osrc.amd.com (Postfix) with ESMTP id E4EF349C20C; Fri, 17 Aug 2012
	12:14:45 +0100 (BST)
Message-ID: <502E27B0.7030803@amd.com>
Date: Fri, 17 Aug 2012 13:14:56 +0200
From: Wei Wang <wei.wang2@amd.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:12.0) Gecko/20120421 Thunderbird/12.0
MIME-Version: 1.0
To: Santosh Jodh <santosh.jodh@citrix.com>
References: <575a53faf4e1f3533096.1345134973@REDBLD-XS.ad.xensource.com>
In-Reply-To: <575a53faf4e1f3533096.1345134973@REDBLD-XS.ad.xensource.com>
X-OriginatorOrg: amd.com
Cc: xen-devel@lists.xensource.com, tim@xen.org, JBeulich@suse.com,
	xiantao.zhang@intel.com
Subject: Re: [Xen-devel] [PATCH] Dump IOMMU p2m table
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/16/2012 06:36 PM, Santosh Jodh wrote:
> New key handler 'o' to dump the IOMMU p2m table for each domain.
> Skips dumping table for domain0.
> Intel and AMD specific iommu_ops handler for dumping p2m table.
>
> Incorporated feedback from Jan Beulich and Wei Wang.
> Fixed indent printing with %*s.
> Removed superfluous superpage and other attribute prints.
> Made next_level use consistent for AMD IOMMU dumps; warn if inconsistent.
>
> Signed-off-by: Santosh Jodh<santosh.jodh@citrix.com>
>
> diff -r 6d56e31fe1e1 -r 575a53faf4e1 xen/drivers/passthrough/amd/pci_amd_iommu.c
> --- a/xen/drivers/passthrough/amd/pci_amd_iommu.c	Wed Aug 15 09:41:21 2012 +0100
> +++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c	Thu Aug 16 09:28:24 2012 -0700
> @@ -22,6 +22,7 @@
>   #include <xen/pci.h>
>   #include <xen/pci_regs.h>
>   #include <xen/paging.h>
> +#include <xen/softirq.h>
>   #include <asm/hvm/iommu.h>
>   #include <asm/amd-iommu.h>
>   #include <asm/hvm/svm/amd-iommu-proto.h>
> @@ -512,6 +513,80 @@ static int amd_iommu_group_id(u16 seg, u
>
>   #include<asm/io_apic.h>
>
> +static void amd_dump_p2m_table_level(struct page_info* pg, int level,
> +                                     paddr_t gpa, int indent)
> +{
> +    paddr_t address;
> +    void *table_vaddr, *pde;
> +    paddr_t next_table_maddr;
> +    int index, next_level, present;
> +    u32 *entry;
> +
> +    if ( level < 1 )
> +        return;
> +
> +    table_vaddr = __map_domain_page(pg);
> +    if ( table_vaddr == NULL )
> +    {
> +        printk("Failed to map IOMMU domain page %"PRIpaddr"\n",
> +                page_to_maddr(pg));
> +        return;
> +    }
> +
> +    for ( index = 0; index < PTE_PER_TABLE_SIZE; index++ )
> +    {
> +        if ( !(index % 2) )
> +            process_pending_softirqs();
> +
> +        pde = table_vaddr + (index * IOMMU_PAGE_TABLE_ENTRY_SIZE);
> +        next_table_maddr = amd_iommu_get_next_table_from_pte(pde);
> +        entry = (u32*)pde;
> +
> +        present = get_field_from_reg_u32(entry[0],
> +                                         IOMMU_PDE_PRESENT_MASK,
> +                                         IOMMU_PDE_PRESENT_SHIFT);
> +
> +        if ( !present )
> +            continue;
> +
> +        next_level = get_field_from_reg_u32(entry[0],
> +                                            IOMMU_PDE_NEXT_LEVEL_MASK,
> +                                            IOMMU_PDE_NEXT_LEVEL_SHIFT);
> +
> +        if ( next_level != (level - 1) )
> +        {
> +            printk("IOMMU p2m table error. next_level = %d, expected %d\n",
> +                   next_level, level - 1);
> +
> +            continue;
> +        }

Hi,

This check is not correct for 2MB and 1GB pages. For example, if a guest
uses 4-level page tables, then for a 2MB entry the next_level fields
will be 3 (l4) -> 2 (l3) -> 0 (l2), because l2 entries become PTEs and
PTEs have next_level = 0. I saw the following output for those pages:

(XEN) IOMMU p2m table error. next_level = 0, expected 1
(XEN) IOMMU p2m table error. next_level = 0, expected 1
(XEN) IOMMU p2m table error. next_level = 0, expected 1
(XEN) IOMMU p2m table error. next_level = 0, expected 1
(XEN) IOMMU p2m table error. next_level = 0, expected 1
(XEN) IOMMU p2m table error. next_level = 0, expected 1
(XEN) IOMMU p2m table error. next_level = 0, expected 1
(XEN) IOMMU p2m table error. next_level = 0, expected 1
(XEN) IOMMU p2m table error. next_level = 0, expected 1
(XEN) IOMMU p2m table error. next_level = 0, expected 1

Thanks,
Wei


> +
> +        address = gpa + amd_offset_level_address(index, level);
> +        if ( next_level >= 1 )
> +            amd_dump_p2m_table_level(
> +                maddr_to_page(next_table_maddr), next_level,
> +                address, indent + 1);
> +        else
> +            printk("%*sgfn: %08lx  mfn: %08lx\n",
> +                   indent, "",
> +                   (unsigned long)PFN_DOWN(address),
> +                   (unsigned long)PFN_DOWN(next_table_maddr));
> +    }
> +
> +    unmap_domain_page(table_vaddr);
> +}
> +
> +static void amd_dump_p2m_table(struct domain *d)
> +{
> +    struct hvm_iommu *hd  = domain_hvm_iommu(d);
> +
> +    if ( !hd->root_table )
> +        return;
> +
> +    printk("p2m table has %d levels\n", hd->paging_mode);
> +    amd_dump_p2m_table_level(hd->root_table, hd->paging_mode, 0, 0);
> +}
> +
>   const struct iommu_ops amd_iommu_ops = {
>       .init = amd_iommu_domain_init,
>       .dom0_init = amd_iommu_dom0_init,
> @@ -531,4 +606,5 @@ const struct iommu_ops amd_iommu_ops = {
>       .resume = amd_iommu_resume,
>       .share_p2m = amd_iommu_share_p2m,
>       .crash_shutdown = amd_iommu_suspend,
> +    .dump_p2m_table = amd_dump_p2m_table,
>   };
> diff -r 6d56e31fe1e1 -r 575a53faf4e1 xen/drivers/passthrough/iommu.c
> --- a/xen/drivers/passthrough/iommu.c	Wed Aug 15 09:41:21 2012 +0100
> +++ b/xen/drivers/passthrough/iommu.c	Thu Aug 16 09:28:24 2012 -0700
> @@ -19,10 +19,12 @@
>   #include <xen/paging.h>
>   #include <xen/guest_access.h>
>   #include <xen/softirq.h>
> +#include <xen/keyhandler.h>
>   #include <xsm/xsm.h>
>
>   static void parse_iommu_param(char *s);
>   static int iommu_populate_page_table(struct domain *d);
> +static void iommu_dump_p2m_table(unsigned char key);
>
>   /*
>    * The 'iommu' parameter enables the IOMMU.  Optional comma separated
> @@ -54,6 +56,12 @@ bool_t __read_mostly amd_iommu_perdev_in
>
>   DEFINE_PER_CPU(bool_t, iommu_dont_flush_iotlb);
>
> +static struct keyhandler iommu_p2m_table = {
> +    .diagnostic = 0,
> +    .u.fn = iommu_dump_p2m_table,
> +    .desc = "dump iommu p2m table"
> +};
> +
>   static void __init parse_iommu_param(char *s)
>   {
>       char *ss;
> @@ -119,6 +127,7 @@ void __init iommu_dom0_init(struct domai
>       if ( !iommu_enabled )
>           return;
>
> +    register_keyhandler('o', &iommu_p2m_table);
>       d->need_iommu = !!iommu_dom0_strict;
>       if ( need_iommu(d) )
>       {
> @@ -654,6 +663,34 @@ int iommu_do_domctl(
>       return ret;
>   }
>
> +static void iommu_dump_p2m_table(unsigned char key)
> +{
> +    struct domain *d;
> +    const struct iommu_ops *ops;
> +
> +    if ( !iommu_enabled )
> +    {
> +        printk("IOMMU not enabled!\n");
> +        return;
> +    }
> +
> +    ops = iommu_get_ops();
> +    for_each_domain(d)
> +    {
> +        if ( !d->domain_id )
> +            continue;
> +
> +        if ( iommu_use_hap_pt(d) )
> +        {
> +            printk("\ndomain%d IOMMU p2m table shared with MMU: \n", d->domain_id);
> +            continue;
> +        }
> +
> +        printk("\ndomain%d IOMMU p2m table: \n", d->domain_id);
> +        ops->dump_p2m_table(d);
> +    }
> +}
> +
>   /*
>    * Local variables:
>    * mode: C
> diff -r 6d56e31fe1e1 -r 575a53faf4e1 xen/drivers/passthrough/vtd/iommu.c
> --- a/xen/drivers/passthrough/vtd/iommu.c	Wed Aug 15 09:41:21 2012 +0100
> +++ b/xen/drivers/passthrough/vtd/iommu.c	Thu Aug 16 09:28:24 2012 -0700
> @@ -31,6 +31,7 @@
>   #include <xen/pci.h>
>   #include <xen/pci_regs.h>
>   #include <xen/keyhandler.h>
> +#include <xen/softirq.h>
>   #include <asm/msi.h>
>   #include <asm/irq.h>
>   #if defined(__i386__) || defined(__x86_64__)
> @@ -2365,6 +2366,60 @@ static void vtd_resume(void)
>       }
>   }
>
> +static void vtd_dump_p2m_table_level(paddr_t pt_maddr, int level, paddr_t gpa,
> +                                     int indent)
> +{
> +    paddr_t address;
> +    int i;
> +    struct dma_pte *pt_vaddr, *pte;
> +    int next_level;
> +
> +    if ( level < 1 )
> +        return;
> +
> +    pt_vaddr = map_vtd_domain_page(pt_maddr);
> +    if ( pt_vaddr == NULL )
> +    {
> +        printk("Failed to map VT-D domain page %"PRIpaddr"\n", pt_maddr);
> +        return;
> +    }
> +
> +    next_level = level - 1;
> +    for ( i = 0; i < PTE_NUM; i++ )
> +    {
> +        if ( !(i % 2) )
> +            process_pending_softirqs();
> +
> +        pte = &pt_vaddr[i];
> +        if ( !dma_pte_present(*pte) )
> +            continue;
> +
> +        address = gpa + offset_level_address(i, level);
> +        if ( next_level >= 1 )
> +            vtd_dump_p2m_table_level(dma_pte_addr(*pte), next_level,
> +                                     address, indent + 1);
> +        else
> +            printk("%*sgfn: %08lx mfn: %08lx\n",
> +                   indent, "",
> +                   (unsigned long)(address >> PAGE_SHIFT_4K),
> +                   (unsigned long)(pte->val >> PAGE_SHIFT_4K));
> +    }
> +
> +    unmap_vtd_domain_page(pt_vaddr);
> +}
> +
> +static void vtd_dump_p2m_table(struct domain *d)
> +{
> +    struct hvm_iommu *hd;
> +
> +    if ( list_empty(&acpi_drhd_units) )
> +        return;
> +
> +    hd = domain_hvm_iommu(d);
> +    printk("p2m table has %d levels\n", agaw_to_level(hd->agaw));
> +    vtd_dump_p2m_table_level(hd->pgd_maddr, agaw_to_level(hd->agaw), 0, 0);
> +}
> +
>   const struct iommu_ops intel_iommu_ops = {
>       .init = intel_iommu_domain_init,
>       .dom0_init = intel_iommu_dom0_init,
> @@ -2387,6 +2442,7 @@ const struct iommu_ops intel_iommu_ops =
>       .crash_shutdown = vtd_crash_shutdown,
>       .iotlb_flush = intel_iommu_iotlb_flush,
>       .iotlb_flush_all = intel_iommu_iotlb_flush_all,
> +    .dump_p2m_table = vtd_dump_p2m_table,
>   };
>
>   /*
> diff -r 6d56e31fe1e1 -r 575a53faf4e1 xen/drivers/passthrough/vtd/iommu.h
> --- a/xen/drivers/passthrough/vtd/iommu.h	Wed Aug 15 09:41:21 2012 +0100
> +++ b/xen/drivers/passthrough/vtd/iommu.h	Thu Aug 16 09:28:24 2012 -0700
> @@ -248,6 +248,8 @@ struct context_entry {
>   #define level_to_offset_bits(l) (12 + (l - 1) * LEVEL_STRIDE)
>   #define address_level_offset(addr, level) \
>               ((addr >> level_to_offset_bits(level)) & LEVEL_MASK)
> +#define offset_level_address(offset, level) \
> +            ((u64)(offset) << level_to_offset_bits(level))
>   #define level_mask(l) (((u64)(-1)) << level_to_offset_bits(l))
>   #define level_size(l) (1 << level_to_offset_bits(l))
>   #define align_to_level(addr, l) ((addr + level_size(l) - 1) & level_mask(l))
> diff -r 6d56e31fe1e1 -r 575a53faf4e1 xen/include/asm-x86/hvm/svm/amd-iommu-defs.h
> --- a/xen/include/asm-x86/hvm/svm/amd-iommu-defs.h	Wed Aug 15 09:41:21 2012 +0100
> +++ b/xen/include/asm-x86/hvm/svm/amd-iommu-defs.h	Thu Aug 16 09:28:24 2012 -0700
> @@ -38,6 +38,10 @@
>   #define PTE_PER_TABLE_ALLOC(entries)	\
>   	PAGE_SIZE * (PTE_PER_TABLE_ALIGN(entries) >> PTE_PER_TABLE_SHIFT)
>
> +#define amd_offset_level_address(offset, level) \
> +      	((u64)(offset) << (12 + (PTE_PER_TABLE_SHIFT * \
> +                                (level - IOMMU_PAGING_MODE_LEVEL_1))))
> +
>   #define PCI_MIN_CAP_OFFSET	0x40
>   #define PCI_MAX_CAP_BLOCKS	48
>   #define PCI_CAP_PTR_MASK	0xFC
> diff -r 6d56e31fe1e1 -r 575a53faf4e1 xen/include/xen/iommu.h
> --- a/xen/include/xen/iommu.h	Wed Aug 15 09:41:21 2012 +0100
> +++ b/xen/include/xen/iommu.h	Thu Aug 16 09:28:24 2012 -0700
> @@ -141,6 +141,7 @@ struct iommu_ops {
>       void (*crash_shutdown)(void);
>       void (*iotlb_flush)(struct domain *d, unsigned long gfn, unsigned int page_count);
>       void (*iotlb_flush_all)(struct domain *d);
> +    void (*dump_p2m_table)(struct domain *d);
>   };
>
>   void iommu_update_ire_from_apic(unsigned int apic, unsigned int reg, unsigned int value);
>




From xen-devel-bounces@lists.xen.org Fri Aug 17 11:33:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 11:33:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2Ko1-0005ru-Hv; Fri, 17 Aug 2012 11:33:33 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1T2KYQ-0005h5-AX
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 11:17:27 +0000
Received: from [85.158.143.35:59422] by server-1.bemta-4.messagelabs.com id
	F6/D0-07754-5482E205; Fri, 17 Aug 2012 11:17:25 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1345202236!13839524!1
X-Originating-IP: [209.85.215.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32755 invoked from network); 17 Aug 2012 11:17:16 -0000
Received: from mail-ey0-f173.google.com (HELO mail-ey0-f173.google.com)
	(209.85.215.173)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 11:17:16 -0000
Received: by eaac13 with SMTP id c13so1154901eaa.32
	for <xen-devel@lists.xen.org>; Fri, 17 Aug 2012 04:17:16 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:date:x-google-sender-auth:message-id:subject
	:from:to:content-type;
	bh=i7+Ed18qeJyvcUfLbPmeM8H/sR6VzcqzqrvYRvOaeR0=;
	b=bJQXxPaYY+gfwas8vvY3faiCfAmry6/+2fglz3Hop+Cvm85PpFAxxd/uMR5xd4t+T+
	+YFATfYg/eB1Pz2gc4Z1eDNOg/Wdma5/FXbosoXkqmFBkWNLX8xs4UT0+3Eztej+1CQE
	7ld+ntecEVyIPT8NMLC/no7UVsx9xfH6xLXL5zRsl0N9MY//UwrlB77jXRBoWFwqywUE
	YKkS08I6tz5rBH/DfWXggWj+2/+sVPlahRil52pcZlGPoT/6Hp9oADdKMtRDYGOrvoUa
	3wV7K3V1AETw8gdIksifofhQhccwttbcPdZI8hgPQT9jvyGhgafzMt3bHLvQAWzK7Jv4
	DRqQ==
MIME-Version: 1.0
Received: by 10.14.4.201 with SMTP id 49mr5918790eej.0.1345202236309; Fri, 17
	Aug 2012 04:17:16 -0700 (PDT)
Received: by 10.14.215.195 with HTTP; Fri, 17 Aug 2012 04:17:15 -0700 (PDT)
Date: Fri, 17 Aug 2012 12:17:15 +0100
X-Google-Sender-Auth: 1eNBbbOwkVZ6k_kw0Y_eNlcUJgw
Message-ID: <CAFLBxZZaC8NM5Xk5nqHBMA-SftkomuG1VAcWvaqh4rac5hCi7Q@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: xen-devel@lists.xen.org, 
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>, 
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Content-Type: multipart/mixed; boundary=047d7b66f1cd72413004c7744dc4
X-Mailman-Approved-At: Fri, 17 Aug 2012 11:33:31 +0000
Subject: [Xen-devel] Failure to boot default Debian wheezy (pvops) kernel on
	4.2-rc2
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--047d7b66f1cd72413004c7744dc4
Content-Type: text/plain; charset=ISO-8859-1

I just tried to install Xen-4.2.0-rc2 on a Debian wheezy system, but
couldn't boot under Xen 4.2.  The box is an 8-core AMD, I think
Barcelona.  The wheezy kernel is 3.2.21-3, 32-bit version.

The problems seem to have started here:

-- snip --
[    0.060280] ACPI: Core revision 20110623
[    0.072384] Performance Events: Broken BIOS detected, complain to
your hardware vendor.
[    0.076014] [Firmware Bug]: the BIOS has corrupted hw-PMU resources
(MSR c0010000 is 530076)
[    0.080007] AMD PMU driver.
[    0.082864] ------------[ cut here ]------------
[    0.084018] WARNING: at
/build/buildd-linux_3.2.21-3-i386-vEohn4/linux-3.2.21/arch/x86/xen/enlighten.c:738
perf_events_lapic_init+0x28/0x29()
[    0.088009] Hardware name: empty
[    0.091299] Modules linked in:
[    0.092275] Pid: 1, comm: swapper/0 Not tainted 3.2.0-3-686-pae #1
[    0.096008] Call Trace:
[    0.098527]  [<c1037fcc>] ? warn_slowpath_common+0x68/0x79
[    0.100019]  [<c10150d2>] ? perf_events_lapic_init+0x28/0x29
[    0.104010]  [<c1037fea>] ? warn_slowpath_null+0xd/0x10
[    0.108011]  [<c10150d2>] ? perf_events_lapic_init+0x28/0x29
[    0.112015]  [<c141c97e>] ? init_hw_perf_events+0x223/0x3b1
[    0.116012]  [<c141c75b>] ? check_bugs+0x1d9/0x1d9
[    0.120012]  [<c1003074>] ? do_one_initcall+0x66/0x10e
[    0.124012]  [<c1415770>] ? kernel_init+0x6d/0x125
[    0.128012]  [<c1415703>] ? start_kernel+0x325/0x325
[    0.132015]  [<c12c463e>] ? kernel_thread_helper+0x6/0x10
[    0.136019] ---[ end trace a7919e7f17c0a725 ]---
-- snip --

And pretty soon degenerated into log message spamming of this sort:

-- snip --
(XEN) traps.c:2584:d0 Domain attempted WRMSR 00000000c0010000 from
0x0000000000530076 to 0x0000000000130076.
(XEN) traps.c:2584:d0 Domain attempted WRMSR 00000000c0010000 from
0x0000000000530076 to 0x0000000000130076.
(XEN) traps.c:2584:d0 Domain attempted WRMSR 00000000c0010000 from
0x0000000000530076 to 0x0000000000130076.
(XEN) traps.c:2584:d0 Domain attempted WRMSR 00000000c0010000 from
0x0000000000530076 to 0x0000000000130076.
(XEN) traps.c:2584:d0 Domain attempted WRMSR 00000000c0010000 from
0x0000000000530076 to 0x0000000000130076.
-- snip --

The serial log is attached ("exile.log").

An earlier kernel I had lying around, 2.6.32.25 (perhaps one of
Jeremy's?) boots fine; the serial log is also attached
("exile-good.log").  It also seems to have the WARN above, so maybe
that's not actually the issue.

Any ideas?

 -George

IEVuZ2luZWVyaW5nIFNhbXBsZSBzdGVwcGluZyAwMA0KKFhFTikgbWljcm9jb2RlOiBjb2xsZWN0
X2NwdV9pbmZvOiBwYXRjaF9pZD0weDEwMDAwNjcNCihYRU4pIENQVSAyIEFQSUMgMiAtPiBOb2Rl
IDANCihYRU4pIEJvb3RpbmcgcHJvY2Vzc29yIDIvMiBlaXAgOGMwMDANCihYRU4pIEluaXRpYWxp
emluZyBDUFUjMg0KKFhFTikgQ1BVOiBMMSBJIGNhY2hlIDY0SyAoNjQgYnl0ZXMvbGluZSksIEQg
Y2FjaGUgNjRLICg2NCBieXRlcy9saW5lKQ0KKFhFTikgQ1BVOiBMMiBDYWNoZTogNTEySyAoNjQg
Ynl0ZXMvbGluZSkNCihYRU4pIENQVSAyKDQpIC0+IFByb2Nlc3NvciAwLCBDb3JlIDINCihYRU4p
IENQVTI6IEFNRCBFbmdpbmVlcmluZyBTYW1wbGUgc3RlcHBpbmcgMDANCihYRU4pIG1pY3JvY29k
ZTogY29sbGVjdF9jcHVfaW5mbzogcGF0Y2hfaWQ9MHgxMDAwMDY3DQooWEVOKSBDUFUgMyBBUElD
IDMgLT4gTm9kZSAwDQooWEVOKSBCb290aW5nIHByb2Nlc3NvciAzLzMgZWlwIDhjMDAwDQooWEVO
KSBJbml0aWFsaXppbmcgQ1BVIzMNCihYRU4pIENQVTogTDEgSSBjYWNoZSA2NEsgKDY0IGJ5dGVz
L2xpbmUpLCBEIGNhY2hlIDY0SyAoNjQgYnl0ZXMvbGluZSkNCihYRU4pIENQVTogTDIgQ2FjaGU6
IDUxMksgKDY0IGJ5dGVzL2xpbmUpDQooWEVOKSBDUFUgMyg0KSAtPiBQcm9jZXNzb3IgMCwgQ29y
ZSAzDQooWEVOKSBDUFUzOiBBTUQgRW5naW5lZXJpbmcgU2FtcGxlIHN0ZXBwaW5nIDAwDQooWEVO
KSBtaWNyb2NvZGU6IGNvbGxlY3RfY3B1X2luZm86IHBhdGNoX2lkPTB4MTAwMDA2Nw0KKFhFTikg
Q1BVIDQgQVBJQyA0IC0+IE5vZGUgMQ0KKFhFTikgQm9vdGluZyBwcm9jZXNzb3IgNC80IGVpcCA4
YzAwMA0KKFhFTikgSW5pdGlhbGl6aW5nIENQVSM0DQooWEVOKSBDUFU6IEwxIEkgY2FjaGUgNjRL
ICg2NCBieXRlcy9saW5lKSwgRCBjYWNoZSA2NEsgKDY0IGJ5dGVzL2xpbmUpDQooWEVOKSBDUFU6
IEwyIENhY2hlOiA1MTJLICg2NCBieXRlcy9saW5lKQ0KKFhFTikgQ1BVIDQoNCkgLT4gUHJvY2Vz
c29yIDEsIENvcmUgMA0KKFhFTikgQ1BVNDogQU1EIEVuZ2luZWVyaW5nIFNhbXBsZSBzdGVwcGlu
ZyAwMA0KKFhFTikgbWljcm9jb2RlOiBjb2xsZWN0X2NwdV9pbmZvOiBwYXRjaF9pZD0weDEwMDAw
NjcNCihYRU4pIENQVSA1IEFQSUMgNSAtPiBOb2RlIDENCihYRU4pIEJvb3RpbmcgcHJvY2Vzc29y
IDUvNSBlaXAgOGMwMDANCihYRU4pIEluaXRpYWxpemluZyBDUFUjNQ0KKFhFTikgQ1BVOiBMMSBJ
IGNhY2hlIDY0SyAoNjQgYnl0ZXMvbGluZSksIEQgY2FjaGUgNjRLICg2NCBieXRlcy9saW5lKQ0K
KFhFTikgQ1BVOiBMMiBDYWNoZTogNTEySyAoNjQgYnl0ZXMvbGluZSkNCihYRU4pIENQVSA1KDQp
IC0+IFByb2Nlc3NvciAxLCBDb3JlIDENCihYRU4pIENQVTU6IEFNRCBFbmdpbmVlcmluZyBTYW1w
bGUgc3RlcHBpbmcgMDANCihYRU4pIG1pY3JvY29kZTogY29sbGVjdF9jcHVfaW5mbzogcGF0Y2hf
aWQ9MHgxMDAwMDY3DQooWEVOKSBDUFUgNiBBUElDIDYgLT4gTm9kZSAxDQooWEVOKSBCb290aW5n
IHByb2Nlc3NvciA2LzYgZWlwIDhjMDAwDQooWEVOKSBJbml0aWFsaXppbmcgQ1BVIzYNCihYRU4p
IENQVTogTDEgSSBjYWNoZSA2NEsgKDY0IGJ5dGVzL2xpbmUpLCBEIGNhY2hlIDY0SyAoNjQgYnl0
ZXMvbGluZSkNCihYRU4pIENQVTogTDIgQ2FjaGU6IDUxMksgKDY0IGJ5dGVzL2xpbmUpDQooWEVO
KSBDUFUgNig0KSAtPiBQcm9jZXNzb3IgMSwgQ29yZSAyDQooWEVOKSBDUFU2OiBBTUQgRW5naW5l
ZXJpbmcgU2FtcGxlIHN0ZXBwaW5nIDAwDQooWEVOKSBtaWNyb2NvZGU6IGNvbGxlY3RfY3B1X2lu
Zm86IHBhdGNoX2lkPTB4MTAwMDA2Nw0KKFhFTikgQ1BVIDcgQVBJQyA3IC0+IE5vZGUgMQ0KKFhF
TikgQm9vdGluZyBwcm9jZXNzb3IgNy83IGVpcCA4YzAwMA0KKFhFTikgSW5pdGlhbGl6aW5nIENQ
VSM3DQooWEVOKSBDUFU6IEwxIEkgY2FjaGUgNjRLICg2NCBieXRlcy9saW5lKSwgRCBjYWNoZSA2
NEsgKDY0IGJ5dGVzL2xpbmUpDQooWEVOKSBDUFU6IEwyIENhY2hlOiA1MTJLICg2NCBieXRlcy9s
aW5lKQ0KKFhFTikgQ1BVIDcoNCkgLT4gUHJvY2Vzc29yIDEsIENvcmUgMw0KKFhFTikgQ1BVNzog
QU1EIEVuZ2luZWVyaW5nIFNhbXBsZSBzdGVwcGluZyAwMA0KKFhFTikgbWljcm9jb2RlOiBjb2xs
ZWN0X2NwdV9pbmZvOiBwYXRjaF9pZD0weDEwMDAwNjcNCihYRU4pIEJyb3VnaHQgdXAgOCBDUFVz
DQooWEVOKSBUZXN0aW5nIE5NSSB3YXRjaGRvZyAtLS0gQ1BVIzAgb2theS4gQ1BVIzEgb2theS4g
Q1BVIzIgb2theS4gQ1BVIzMgb2theS4gQ1BVIzQgb2theS4gQ1BVIzUgb2theS4gQ1BVIzYgb2th
eS4gQ1BVIzcgb2theS4gDQooWEVOKSBIUEVUOiAzIHRpbWVycyAoMCB3aWxsIGJlIHVzZWQgZm9y
IGJyb2FkY2FzdCkNCihYRU4pIEFDUEkgc2xlZXAgbW9kZXM6IFMzDQooWEVOKSBNQ0E6IFVzZSBo
dyB0aHJlc2hvbGRpbmcgdG8gYWRqdXN0IHBvbGxpbmcgZnJlcXVlbmN5DQooWEVOKSBtY2hlY2tf
cG9sbDogTWFjaGluZSBjaGVjayBwb2xsaW5nIHRpbWVyIHN0YXJ0ZWQuDQooWEVOKSBYZW5vcHJv
ZmlsZTogRmFpbGVkIHRvIHNldHVwIElCUyBMVlQgb2Zmc2V0LCBJQlNDVEwgPSAweGZmZmZmZmZm
DQooWEVOKSAqKiogTE9BRElORyBET01BSU4gMCAqKioNCihYRU4pIGVsZl9wYXJzZV9iaW5hcnk6
IHBoZHI6IHBhZGRyPTB4MTAwMDAwMCBtZW1zej0weDNkNjAwMA0KKFhFTikgZWxmX3BhcnNlX2Jp
bmFyeTogcGhkcjogcGFkZHI9MHgxM2Q2MDAwIG1lbXN6PTB4M2E5MDAwDQooWEVOKSBlbGZfcGFy
c2VfYmluYXJ5OiBtZW1vcnk6IDB4MTAwMDAwMCAtPiAweDE3N2YwMDANCihYRU4pIGVsZl94ZW5f
cGFyc2Vfbm90ZTogR1VFU1RfT1MgPSAibGludXgiDQooWEVOKSBlbGZfeGVuX3BhcnNlX25vdGU6
IEdVRVNUX1ZFUlNJT04gPSAiMi42Ig0KKFhFTikgZWxmX3hlbl9wYXJzZV9ub3RlOiBYRU5fVkVS
U0lPTiA9ICJ4ZW4tMy4wIg0KKFhFTikgZWxmX3hlbl9wYXJzZV9ub3RlOiBWSVJUX0JBU0UgPSAw
eGMwMDAwMDAwDQooWEVOKSBlbGZfeGVuX3BhcnNlX25vdGU6IEVOVFJZID0gMHhjMTQxNTAwMA0K
KFhFTikgZWxmX3hlbl9wYXJzZV9ub3RlOiBIWVBFUkNBTExfUEFHRSA9IDB4YzEwMDIwMDANCihY
RU4pIGVsZl94ZW5fcGFyc2Vfbm90ZTogRkVBVFVSRVMgPSAiIXdyaXRhYmxlX3BhZ2VfdGFibGVz
fHBhZV9wZ2Rpcl9hYm92ZV80Z2IiDQooWEVOKSBlbGZfeGVuX3BhcnNlX25vdGU6IFBBRV9NT0RF
ID0gInllcyINCihYRU4pIGVsZl94ZW5fcGFyc2Vfbm90ZTogTE9BREVSID0gImdlbmVyaWMiDQoo
WEVOKSBlbGZfeGVuX3BhcnNlX25vdGU6IHVua25vd24geGVuIGVsZiBub3RlICgweGQpDQooWEVO
KSBlbGZfeGVuX3BhcnNlX25vdGU6IFNVU1BFTkRfQ0FOQ0VMID0gMHgxDQooWEVOKSBlbGZfeGVu
X3BhcnNlX25vdGU6IEhWX1NUQVJUX0xPVyA9IDB4ZjU4MDAwMDANCihYRU4pIGVsZl94ZW5fcGFy
c2Vfbm90ZTogUEFERFJfT0ZGU0VUID0gMHgwDQooWEVOKSBlbGZfeGVuX2FkZHJfY2FsY19jaGVj
azogYWRkcmVzc2VzOg0KKFhFTikgICAgIHZpcnRfYmFzZSAgICAgICAgPSAweGMwMDAwMDAwDQoo
WEVOKSAgICAgZWxmX3BhZGRyX29mZnNldCA9IDB4MA0KKFhFTikgICAgIHZpcnRfb2Zmc2V0ICAg
ICAgPSAweGMwMDAwMDAwDQooWEVOKSAgICAgdmlydF9rc3RhcnQgICAgICA9IDB4YzEwMDAwMDAN
CihYRU4pICAgICB2aXJ0X2tlbmQgICAgICAgID0gMHhjMTc3ZjAwMA0KKFhFTikgICAgIHZpcnRf
ZW50cnkgICAgICAgPSAweGMxNDE1MDAwDQooWEVOKSAgICAgcDJtX2Jhc2UgICAgICAgICA9IDB4
ZmZmZmZmZmZmZmZmZmZmZg0KKFhFTikgIFhlbiAga2VybmVsOiA2NC1iaXQsIGxzYiwgY29tcGF0
MzINCihYRU4pICBEb20wIGtlcm5lbDogMzItYml0LCBQQUUsIGxzYiwgcGFkZHIgMHgxMDAwMDAw
IC0+IDB4MTc3ZjAwMA0KKFhFTikgUEhZU0lDQUwgTUVNT1JZIEFSUkFOR0VNRU5UOg0KKFhFTikg
IERvbTAgYWxsb2MuOiAgIDAwMDAwMDAxMjgwMDAwMDAtPjAwMDAwMDAxMmEwMDAwMDAgKDk4NzUy
OCBwYWdlcyB0byBiZSBhbGxvY2F0ZWQpDQooWEVOKSAgSW5pdC4gcmFtZGlzazogMDAwMDAwMDEy
ZTcwNzAwMC0+MDAwMDAwMDEyZmZmZjgwMA0KKFhFTikgVklSVFVBTCBNRU1PUlkgQVJSQU5HRU1F
TlQ6DQooWEVOKSAgTG9hZGVkIGtlcm5lbDogMDAwMDAwMDBjMTAwMDAwMC0+MDAwMDAwMDBjMTc3
ZjAwMA0KKFhFTikgIEluaXQuIHJhbWRpc2s6IDAwMDAwMDAwYzE3N2YwMDAtPjAwMDAwMDAwYzMw
Nzc4MDANCihYRU4pICBQaHlzLU1hY2ggbWFwOiAwMDAwMDAwMGMzMDc4MDAwLT4wMDAwMDAwMGMz
NDRhYTA0DQooWEVOKSAgU3RhcnQgaW5mbzogICAgMDAwMDAwMDBjMzQ0YjAwMC0+MDAwMDAwMDBj
MzQ0YjRiNA0KKFhFTikgIFBhZ2UgdGFibGVzOiAgIDAwMDAwMDAwYzM0NGMwMDAtPjAwMDAwMDAw
YzM0NmUwMDANCihYRU4pICBCb290IHN0YWNrOiAgICAwMDAwMDAwMGMzNDZlMDAwLT4wMDAwMDAw
MGMzNDZmMDAwDQooWEVOKSAgVE9UQUw6ICAgICAgICAgMDAwMDAwMDBjMDAwMDAwMC0+MDAwMDAw
MDBjMzgwMDAwMA0KKFhFTikgIEVOVFJZIEFERFJFU1M6IDAwMDAwMDAwYzE0MTUwMDANCihYRU4p
IERvbTAgaGFzIG1heGltdW0gOCBWQ1BVcw0KKFhFTikgZWxmX2xvYWRfYmluYXJ5OiBwaGRyIDAg
YXQgMHgwMDAwMDAwMGMxMDAwMDAwIC0+IDB4MDAwMDAwMDBjMTNkNjAwMA0KKFhFTikgZWxmX2xv
YWRfYmluYXJ5OiBwaGRyIDEgYXQgMHgwMDAwMDAwMGMxM2Q2MDAwIC0+IDB4MDAwMDAwMDBjMTQ3
ZTAwMA0KKFhFTikgU2NydWJiaW5nIEZyZWUgUkFNOiAuZG9uZS4NCihYRU4pIEluaXRpYWwgbG93
IG1lbW9yeSB2aXJxIHRocmVzaG9sZCBzZXQgYXQgMHg0MDAwIHBhZ2VzLg0KKFhFTikgU3RkLiBM
b2dsZXZlbDogQWxsDQooWEVOKSBHdWVzdCBMb2dsZXZlbDogQWxsDQooWEVOKSAqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqDQooWEVOKSAqKioqKioqIFdBUk5J
Tkc6IENPTlNPTEUgT1VUUFVUIElTIFNZTkNIUk9OT1VTDQooWEVOKSAqKioqKioqIFRoaXMgb3B0
aW9uIGlzIGludGVuZGVkIHRvIGFpZCBkZWJ1Z2dpbmcgb2YgWGVuIGJ5IGVuc3VyaW5nDQooWEVO
KSAqKioqKioqIHRoYXQgYWxsIG91dHB1dCBpcyBzeW5jaHJvbm91c2x5IGRlbGl2ZXJlZCBvbiB0
aGUgc2VyaWFsIGxpbmUuDQooWEVOKSAqKioqKioqIEhvd2V2ZXIgaXQgY2FuIGludHJvZHVjZSBT
SUdOSUZJQ0FOVCBsYXRlbmNpZXMgYW5kIGFmZmVjdA0KKFhFTikgKioqKioqKiB0aW1la2VlcGlu
Zy4gSXQgaXMgTk9UIHJlY29tbWVuZGVkIGZvciBwcm9kdWN0aW9uIHVzZSENCihYRU4pICoqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioNCihYRU4pIDMuLi4gMi4u
LiAxLi4uIA0KKFhFTikgKioqIFNlcmlhbCBpbnB1dCAtPiBET00wICh0eXBlICdDVFJMLWEnIHRo
cmVlIHRpbWVzIHRvIHN3aXRjaCBpbnB1dCB0byBYZW4pDQooWEVOKSBGcmVlZCAyNDRrQiBpbml0
IG1lbW9yeS4NCm1hcHBpbmcga2VybmVsIGludG8gcGh5c2ljYWwgbWVtb3J5DQpYZW46IHNldHVw
IElTQSBpZGVudGl0eSBtYXBzDQphYm91dCB0byBnZXQgc3RhcnRlZC4uLg0KWyAgICAwLjAwMDAw
MF0gUmVzZXJ2aW5nIHZpcnR1YWwgYWRkcmVzcyBzcGFjZSBhYm92ZSAweGZmODAwMDAwDQpbICAg
IDAuMDAwMDAwXSBJbml0aWFsaXppbmcgY2dyb3VwIHN1YnN5cyBjcHVzZXQNClsgICAgMC4wMDAw
MDBdIEluaXRpYWxpemluZyBjZ3JvdXAgc3Vic3lzIGNwdQ0KWyAgICAwLjAwMDAwMF0gTGludXgg
dmVyc2lvbiAzLjIuMC0zLTY4Ni1wYWUgKERlYmlhbiAzLjIuMjEtMykgKGRlYmlhbi1rZXJuZWxA
bGlzdHMuZGViaWFuLm9yZykgKGdjYyB2ZXJzaW9uIDQuNi4zIChEZWJpYW4gNC42LjMtOCkgKSAj
MSBTTVAgVGh1IEp1biAyOCAwODo1Njo0NiBVVEMgMjAxMg0KWyAgICAwLjAwMDAwMF0gRnJlZWlu
ZyAgOWMtMTAwIHBmbiByYW5nZTogMTAwIHBhZ2VzIGZyZWVkDQpbICAgIDAuMDAwMDAwXSBGcmVl
aW5nICBjZmVlMC1mNGE4MSBwZm4gcmFuZ2U6IDE1MDQzMyBwYWdlcyBmcmVlZA0KWyAgICAwLjAw
MDAwMF0gUmVsZWFzZWQgMTUwNTMzIHBhZ2VzIG9mIHVudXNlZCBtZW1vcnkNClsgICAgMC4wMDAw
MDBdIFNldCAxOTY5OTYgcGFnZShzKSB0byAxLTEgbWFwcGluZw0KWyAgICAwLjAwMDAwMF0gQklP
Uy1wcm92aWRlZCBwaHlzaWNhbCBSQU0gbWFwOg0KWyAgICAwLjAwMDAwMF0gIFhlbjogMDAwMDAw
MDAwMDAwMDAwMCAtIDAwMDAwMDAwMDAwOWMwMDAgKHVzYWJsZSkNClsgICAgMC4wMDAwMDBdICBY
ZW46IDAwMDAwMDAwMDAwOWM0MDAgLSAwMDAwMDAwMDAwMTAwMDAwIChyZXNlcnZlZCkNClsgICAg
MC4wMDAwMDBdICBYZW46IDAwMDAwMDAwMDAxMDAwMDAgLSAwMDAwMDAwMGNmZWUwMDAwICh1c2Fi
bGUpDQpbICAgIDAuMDAwMDAwXSAgWGVuOiAwMDAwMDAwMGNmZWUwMDAwIC0gMDAwMDAwMDBjZmVl
NTAwMCAoQUNQSSBkYXRhKQ0KWyAgICAwLjAwMDAwMF0gIFhlbjogMDAwMDAwMDBjZmVlNTAwMCAt
IDAwMDAwMDAwY2ZlZjEwMDAgKEFDUEkgTlZTKQ0KWyAgICAwLjAwMDAwMF0gIFhlbjogMDAwMDAw
MDBjZmVmMTAwMCAtIDAwMDAwMDAwZDAwMDAwMDAgKHJlc2VydmVkKQ0KWyAgICAwLjAwMDAwMF0g
IFhlbjogMDAwMDAwMDBmZWMwMDAwMCAtIDAwMDAwMDAwZmVjMDMwMDAgKHJlc2VydmVkKQ0KWyAg
ICAwLjAwMDAwMF0gIFhlbjogMDAwMDAwMDBmZWUwMDAwMCAtIDAwMDAwMDAwZmVlMDEwMDAgKHJl
c2VydmVkKQ0KWyAgICAwLjAwMDAwMF0gIFhlbjogMDAwMDAwMDBmZmY4MDAwMCAtIDAwMDAwMDAx
MDAwMDAwMDAgKHJlc2VydmVkKQ0KWyAgICAwLjAwMDAwMF0gIFhlbjogMDAwMDAwMDEwMDAwMDAw
MCAtIDAwMDAwMDAxMzAwMDAwMDAgKHVzYWJsZSkNClsgICAgMC4wMDAwMDBdIGJvb3Rjb25zb2xl
IFt4ZW5ib290MF0gZW5hYmxlZA0KWyAgICAwLjAwMDAwMF0gTlggKEV4ZWN1dGUgRGlzYWJsZSkg
cHJvdGVjdGlvbjogYWN0aXZlDQpbICAgIDAuMDAwMDAwXSBETUkgcHJlc2VudC4NClsgICAgMC4w
MDAwMDBdIGxhc3RfcGZuID0gMHgxMzAwMDAgbWF4X2FyY2hfcGZuID0gMHgxMDAwMDAwDQpbICAg
IDAuMDAwMDAwXSBmb3VuZCBTTVAgTVAtdGFibGUgYXQgW2MwMGY4MGIwXSBmODBiMA0KWyAgICAw
LjAwMDAwMF0gaW5pdF9tZW1vcnlfbWFwcGluZzogMDAwMDAwMDAwMDAwMDAwMC0wMDAwMDAwMDM3
MWZlMDAwDQpbICAgIDAuMDAwMDAwXSBSQU1ESVNLOiAwMTc3ZjAwMCAtIDAzMDc4MDAwDQpbICAg
IDAuMDAwMDAwXSBBQ1BJOiBSU0RQIDAwMGY4MDgwIDAwMDI0ICh2MDIgUFRMVEQgKQ0KWyAgICAw
LjAwMDAwMF0gQUNQSTogWFNEVCBjZmVlMDFmZSAwMDA1QyAodjAxIEJSQ00gICBFWFBMT1NOICAw
NjA0MDAwMCBQVEwgIDAyMDAwMDAxKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogRkFDUCBjZmVlMDJj
ZSAwMDBGNCAodjAzIEJSQ00gICBFWFBMT1NOICAwNjA0MDAwMCBNU0ZUIDAyMDAwMDAxKQ0KWyAg
ICAwLjAwMDAwMF0gQUNQSSBXYXJuaW5nOiBPcHRpb25hbCBmaWVsZCBQbTJDb250cm9sQmxvY2sg
aGFzIHplcm8gYWRkcmVzcyBvciBsZW5ndGg6IDB4MDAwMDAwMDAwMDAwMDAwMC8weEMgKDIwMTEw
NjIzL3RiZmFkdC01NjApDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBEU0RUIGNmZWUwM2MyIDA0OTQ5
ICh2MDIgQlJDTSAgIEVYUExPU04gIDA2MDQwMDAwIE1TRlQgMDMwMDAwMDApDQpbICAgIDAuMDAw
MDAwXSBBQ1BJOiBGQUNTIGNmZWYwZmMwIDAwMDQwDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBUQ1BB
IGNmZWU0ZDBiIDAwMDMyICh2MDEgQlJDTSAgIEVYUExPU04gIDA2MDQwMDAwIFBUTCAgMjAwMDAw
MDEpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBTUkFUIGNmZWU0ZDNkIDAwMTI4ICh2MDEgQU1EICAg
IEZBTV9GXzEwIDA2MDQwMDAwIEFNRCAgMDAwMDAwMDEpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBI
UEVUIGNmZWU0ZTY1IDAwMDM4ICh2MDEgQlJDTSAgIEVYUExPU04gIDA2MDQwMDAwIEJSQ00gMDIw
MDAwMDEpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBTU0RUIGNmZWU0ZTlkIDAwMDQ5ICh2MDEgQlJD
TSAgIFBSVDAgICAgIDA2MDQwMDAwIEJSQ00gMDIwMDAwMDEpDQpbICAgIDAuMDAwMDAwXSBBQ1BJ
OiBTUENSIGNmZWU0ZWU2IDAwMDUwICh2MDEgUFRMVEQgICRVQ1JUQkwkIDA2MDQwMDAwIFBUTCAg
MDAwMDAwMDEpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBBUElDIGNmZWU0ZjM2IDAwMENBICh2MDEg
QlJDTSAgIEVYUExPU04gIDA2MDQwMDAwIFBUTCAgMDIwMDAwMDEpDQpbICAgIDAuMDAwMDAwXSAz
OTgyTUIgSElHSE1FTSBhdmFpbGFibGUuDQpbICAgIDAuMDAwMDAwXSA4ODFNQiBMT1dNRU0gYXZh
aWxhYmxlLg0KWyAgICAwLjAwMDAwMF0gICBtYXBwZWQgbG93IHJhbTogMCAtIDM3MWZlMDAwDQpb
ICAgIDAuMDAwMDAwXSAgIGxvdyByYW06IDAgLSAzNzFmZTAwMA0KWyAgICAwLjAwMDAwMF0gWm9u
ZSBQRk4gcmFuZ2VzOg0KWyAgICAwLjAwMDAwMF0gICBETUEgICAgICAweDAwMDAwMDEwIC0+IDB4
MDAwMDEwMDANClsgICAgMC4wMDAwMDBdICAgTm9ybWFsICAgMHgwMDAwMTAwMCAtPiAweDAwMDM3
MWZlDQpbICAgIDAuMDAwMDAwXSAgIEhpZ2hNZW0gIDB4MDAwMzcxZmUgLT4gMHgwMDEzMDAwMA0K
WyAgICAwLjAwMDAwMF0gTW92YWJsZSB6b25lIHN0YXJ0IFBGTiBmb3IgZWFjaCBub2RlDQpbICAg
IDAuMDAwMDAwXSBlYXJseV9ub2RlX21hcFszXSBhY3RpdmUgUEZOIHJhbmdlcw0KWyAgICAwLjAw
MDAwMF0gICAgIDA6IDB4MDAwMDAwMTAgLT4gMHgwMDAwMDA5Yw0KWyAgICAwLjAwMDAwMF0gICAg
IDA6IDB4MDAwMDAxMDAgLT4gMHgwMDBjZmVlMA0KWyAgICAwLjAwMDAwMF0gICAgIDA6IDB4MDAx
MDAwMDAgLT4gMHgwMDEzMDAwMA0KWyAgICAwLjAwMDAwMF0gVXNpbmcgQVBJQyBkcml2ZXIgZGVm
YXVsdA0KWyAgICAwLjAwMDAwMF0gQUNQSTogUE0tVGltZXIgSU8gUG9ydDogMHg1MDgNClsgICAg
MC4wMDAwMDBdIEFDUEk6IExBUElDIChhY3BpX2lkWzB4MDBdIGxhcGljX2lkWzB4MDBdIGVuYWJs
ZWQpDQpbICAgIDAuMDAwMDAwXSBCSU9TIGJ1ZzogQVBJQyB2ZXJzaW9uIGlzIDAgZm9yIENQVSAw
LzB4MCwgZml4aW5nIHVwIHRvIDB4MTANClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElDIChhY3Bp
X2lkWzB4MDFdIGxhcGljX2lkWzB4MDFdIGVuYWJsZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBM
QVBJQyAoYWNwaV9pZFsweDAyXSBsYXBpY19pZFsweDAyXSBlbmFibGVkKQ0KWyAgICAwLjAwMDAw
MF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgwM10gbGFwaWNfaWRbMHgwM10gZW5hYmxlZCkNClsg
ICAgMC4wMDAwMDBdIEFDUEk6IExBUElDIChhY3BpX2lkWzB4MDRdIGxhcGljX2lkWzB4MDRdIGVu
YWJsZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDA1XSBsYXBpY19p
ZFsweDA1XSBlbmFibGVkKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgw
Nl0gbGFwaWNfaWRbMHgwNl0gZW5hYmxlZCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElDIChh
Y3BpX2lkWzB4MDddIGxhcGljX2lkWzB4MDddIGVuYWJsZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJ
OiBMQVBJQ19OTUkgKGFjcGlfaWRbMHgwMF0gaGlnaCBlZGdlIGxpbnRbMHgxXSkNClsgICAgMC4w
MDAwMDBdIEFDUEk6IExBUElDX05NSSAoYWNwaV9pZFsweDAxXSBoaWdoIGVkZ2UgbGludFsweDFd
KQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogTEFQSUNfTk1JIChhY3BpX2lkWzB4MDJdIGhpZ2ggZWRn
ZSBsaW50WzB4MV0pDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBMQVBJQ19OTUkgKGFjcGlfaWRbMHgw
M10gaGlnaCBlZGdlIGxpbnRbMHgxXSkNClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElDX05NSSAo
YWNwaV9pZFsweDA0XSBoaWdoIGVkZ2UgbGludFsweDFdKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTog
TEFQSUNfTk1JIChhY3BpX2lkWzB4MDVdIGhpZ2ggZWRnZSBsaW50WzB4MV0pDQpbICAgIDAuMDAw
MDAwXSBBQ1BJOiBMQVBJQ19OTUkgKGFjcGlfaWRbMHgwNl0gaGlnaCBlZGdlIGxpbnRbMHgxXSkN
ClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElDX05NSSAoYWNwaV9pZFsweDA3XSBoaWdoIGVkZ2Ug
bGludFsweDFdKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogSU9BUElDIChpZFsweDA4XSBhZGRyZXNz
WzB4ZmVjMDAwMDBdIGdzaV9iYXNlWzBdKQ0KWyAgICAwLjAwMDAwMF0gSU9BUElDWzBdOiBhcGlj
X2lkIDgsIHZlcnNpb24gMjU1LCBhZGRyZXNzIDB4ZmVjMDAwMDAsIEdTSSAwLTI1NQ0KWyAgICAw
LjAwMDAwMF0gQUNQSTogSU9BUElDIChpZFsweDA5XSBhZGRyZXNzWzB4ZmVjMDEwMDBdIGdzaV9i
YXNlWzE2XSkNClsgICAgMC4wMDAwMDBdIElPQVBJQ1sxXTogYXBpY19pZCA5LCB2ZXJzaW9uIDI1
NSwgYWRkcmVzcyAweGZlYzAxMDAwLCBHU0kgMTYtMjcxDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBJ
T0FQSUMgKGlkWzB4MGFdIGFkZHJlc3NbMHhmZWMwMjAwMF0gZ3NpX2Jhc2VbMzJdKQ0KWyAgICAw
LjAwMDAwMF0gSU9BUElDWzJdOiBhcGljX2lkIDEwLCB2ZXJzaW9uIDI1NSwgYWRkcmVzcyAweGZl
YzAyMDAwLCBHU0kgMzItMjg3DQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBJTlRfU1JDX09WUiAoYnVz
IDAgYnVzX2lycSAwIGdsb2JhbF9pcnEgMiBoaWdoIGVkZ2UpDQpbICAgIDAuMDAwMDAwXSBVc2lu
ZyBBQ1BJIChNQURUKSBmb3IgU01QIGNvbmZpZ3VyYXRpb24gaW5mb3JtYXRpb24NClsgICAgMC4w
MDAwMDBdIEFDUEk6IEhQRVQgaWQ6IDB4MTE2NmEyMDEgYmFzZTogMHhmZWQwMDAwMA0KWyAgICAw
LjAwMDAwMF0gU01QOiBBbGxvd2luZyA4IENQVXMsIDAgaG90cGx1ZyBDUFVzDQpbICAgIDAuMDAw
MDAwXSBQTTogUmVnaXN0ZXJlZCBub3NhdmUgbWVtb3J5OiAwMDAwMDAwMDAwMDljMDAwIC0gMDAw
MDAwMDAwMDA5ZDAwMA0KWyAgICAwLjAwMDAwMF0gUE06IFJlZ2lzdGVyZWQgbm9zYXZlIG1lbW9y
eTogMDAwMDAwMDAwMDA5ZDAwMCAtIDAwMDAwMDAwMDAxMDAwMDANClsgICAgMC4wMDAwMDBdIEFs
bG9jYXRpbmcgUENJIHJlc291cmNlcyBzdGFydGluZyBhdCBkMDAwMDAwMCAoZ2FwOiBkMDAwMDAw
MDoyZWMwMDAwMCkNClsgICAgMC4wMDAwMDBdIEJvb3RpbmcgcGFyYXZpcnR1YWxpemVkIGtlcm5l
bCBvbiBYZW4NClsgICAgMC4wMDAwMDBdIFhlbiB2ZXJzaW9uOiA0LjIuMC1yYzIgKHByZXNlcnZl
LUFEKQ0KWyAgICAwLjAwMDAwMF0gc2V0dXBfcGVyY3B1OiBOUl9DUFVTOjMyIG5yX2NwdW1hc2tf
Yml0czozMiBucl9jcHVfaWRzOjggbnJfbm9kZV9pZHM6MQ0KWyAgICAwLjAwMDAwMF0gUEVSQ1BV
OiBFbWJlZGRlZCAxNCBwYWdlcy9jcHUgQGY0YjdiMDAwIHMzMzI4MCByMCBkMjQwNjQgdTU3MzQ0
DQpbICAgIDAuMDAwMDAwXSBCdWlsdCAxIHpvbmVsaXN0cyBpbiBab25lIG9yZGVyLCBtb2JpbGl0
eSBncm91cGluZyBvbi4gIFRvdGFsIHBhZ2VzOiAxMDM4NDQzDQpbICAgIDAuMDAwMDAwXSBLZXJu
ZWwgY29tbWFuZCBsaW5lOiBwbGFjZWhvbGRlciByb290PVVVSUQ9ZTc1NjhhYzYtZDU2MS00ODNh
LWI1YmEtODhmZDkzNTBlZTE4IHJvIGNvbnNvbGU9aHZjMCBlYXJseXByaW50az14ZW4gbm9tb2Rl
c2V0DQpbICAgIDAuMDAwMDAwXSBQSUQgaGFzaCB0YWJsZSBlbnRyaWVzOiA0MDk2IChvcmRlcjog
MiwgMTYzODQgYnl0ZXMpDQpbICAgIDAuMDAwMDAwXSBEZW50cnkgY2FjaGUgaGFzaCB0YWJsZSBl
bnRyaWVzOiAxMzEwNzIgKG9yZGVyOiA3LCA1MjQyODggYnl0ZXMpDQpbICAgIDAuMDAwMDAwXSBJ
bm9kZS1jYWNoZSBoYXNoIHRhYmxlIGVudHJpZXM6IDY1NTM2IChvcmRlcjogNiwgMjYyMTQ0IGJ5
dGVzKQ0KWyAgICAwLjAwMDAwMF0gSW5pdGlhbGl6aW5nIENQVSMwDQpbICAgIDAuMDAwMDAwXSBQ
bGFjaW5nIDY0TUIgc29mdHdhcmUgSU8gVExCIGJldHdlZW4gZjBhYjcwMDAgLSBmNGFiNzAwMA0K
WyAgICAwLjAwMDAwMF0gc29mdHdhcmUgSU8gVExCIGF0IHBoeXMgMHgzMGFiNzAwMCAtIDB4MzRh
YjcwMDANClsgICAgMC4wMDAwMDBdIEluaXRpYWxpemluZyBIaWdoTWVtIGZvciBub2RlIDAgKDAw
MDM3MWZlOjAwMTMwMDAwKQ0KWyAgICAwLjAwMDAwMF0gTWVtb3J5OiAzMjYzNTgway80OTgwNzM2
ayBhdmFpbGFibGUgKDI4MzZrIGtlcm5lbCBjb2RlLCAxNDI2NzZrIHJlc2VydmVkLCAxMzQzayBk
YXRhLCA0MTJrIGluaXQsIDI1MDM1NjBrIGhpZ2htZW0pDQpbICAgIDAuMDAwMDAwXSB2aXJ0dWFs
IGtlcm5lbCBtZW1vcnkgbGF5b3V0Og0KWyAgICAwLjAwMDAwMF0gICAgIGZpeG1hcCAgOiAweGZm
NTM2MDAwIC0gMHhmZjdmZjAwMCAgICgyODUyIGtCKQ0KWyAgICAwLjAwMDAwMF0gICAgIHBrbWFw
ICAgOiAweGZmMjAwMDAwIC0gMHhmZjQwMDAwMCAgICgyMDQ4IGtCKQ0KWyAgICAwLjAwMDAwMF0g
ICAgIHZtYWxsb2MgOiAweGY3OWZlMDAwIC0gMHhmZjFmZTAwMCAgICggMTIwIE1CKQ0KWyAgICAw
LjAwMDAwMF0gICAgIGxvd21lbSAgOiAweGMwMDAwMDAwIC0gMHhmNzFmZTAwMCAgICggODgxIE1C
KQ0KWyAgICAwLjAwMDAwMF0gICAgICAgLmluaXQgOiAweGMxNDE1MDAwIC0gMHhjMTQ3YzAwMCAg
ICggNDEyIGtCKQ0KWyAgICAwLjAwMDAwMF0gICAgICAgLmRhdGEgOiAweGMxMmM1MGVjIC0gMHhj
MTQxNGYwMCAgICgxMzQzIGtCKQ0KWyAgICAwLjAwMDAwMF0gICAgICAgLnRleHQgOiAweGMxMDAw
MDAwIC0gMHhjMTJjNTBlYyAgICgyODM2IGtCKQ0KWyAgICAwLjAwMDAwMF0gSGllcmFyY2hpY2Fs
IFJDVSBpbXBsZW1lbnRhdGlvbi4NClsgICAgMC4wMDAwMDBdIAlSQ1UgZHludGljay1pZGxlIGdy
YWNlLXBlcmlvZCBhY2NlbGVyYXRpb24gaXMgZW5hYmxlZC4NClsgICAgMC4wMDAwMDBdIE5SX0lS
UVM6MjMwNCBucl9pcnFzOjIwNDggMTYNClsgICAgMC4wMDAwMDBdIHhlbjogc2NpIG92ZXJyaWRl
OiBnbG9iYWxfaXJxPTkgdHJpZ2dlcj0wIHBvbGFyaXR5PTENClsgICAgMC4wMDAwMDBdIHhlbjog
YWNwaSBzY2kgOQ0KWyAgICAwLjAwMDAwMF0gQ29uc29sZTogY29sb3VyIFZHQSsgODB4MjUNClsg
ICAgMC4wMDAwMDBdIGNvbnNvbGUgW2h2YzBdIGVuYWJsZWQsIGJvb3Rjb25zb2xlIGRpc2FibGVk
DQ0KWyAgICAwLjAwMDAwMF0gY29uc29sZSBbaHZjMF0gZW5hYmxlZCwgYm9vdGNvbnNvbGUgZGlz
YWJsZWQNClsgICAgMC4wMDAwMDBdIGluc3RhbGxpbmcgWGVuIHRpbWVyIGZvciBDUFUgMA0NClsg
ICAgMC4wMDAwMDBdIERldGVjdGVkIDE5OTUuMDkyIE1IeiBwcm9jZXNzb3IuDQ0KWyAgICAwLjAw
ODAwMF0gQ2FsaWJyYXRpbmcgZGVsYXkgbG9vcCAoc2tpcHBlZCksIHZhbHVlIGNhbGN1bGF0ZWQg
dXNpbmcgdGltZXIgZnJlcXVlbmN5Li4gMzk5MC4xOCBCb2dvTUlQUyAobHBqPTc5ODAzNjgpDQ0K
WyAgICAwLjAxMDg1OV0gcGlkX21heDogZGVmYXVsdDogMzI3NjggbWluaW11bTogMzAxDQ0KWyAg
ICAwLjAxMjA1NV0gU2VjdXJpdHkgRnJhbWV3b3JrIGluaXRpYWxpemVkDQ0KWyAgICAwLjAxNjAx
MV0gQXBwQXJtb3I6IEFwcEFybW9yIGRpc2FibGVkIGJ5IGJvb3QgdGltZSBwYXJhbWV0ZXINDQpb
ICAgIDAuMDIwMDI3XSBNb3VudC1jYWNoZSBoYXNoIHRhYmxlIGVudHJpZXM6IDUxMg0NClsgICAg
MC4wMjQxNzZdIEluaXRpYWxpemluZyBjZ3JvdXAgc3Vic3lzIGNwdWFjY3QNDQpbICAgIDAuMDI4
MDEyXSBJbml0aWFsaXppbmcgY2dyb3VwIHN1YnN5cyBtZW1vcnkNDQpbICAgIDAuMDMyMDE4XSBJ
bml0aWFsaXppbmcgY2dyb3VwIHN1YnN5cyBkZXZpY2VzDQ0KWyAgICAwLjAzNjAwNV0gSW5pdGlh
bGl6aW5nIGNncm91cCBzdWJzeXMgZnJlZXplcg0NClsgICAgMC4wNDAwMTBdIEluaXRpYWxpemlu
ZyBjZ3JvdXAgc3Vic3lzIG5ldF9jbHMNDQpbICAgIDAuMDQ0MDA2XSBJbml0aWFsaXppbmcgY2dy
b3VwIHN1YnN5cyBibGtpbw0NClsgICAgMC4wNDgwMTNdIEluaXRpYWxpemluZyBjZ3JvdXAgc3Vi
c3lzIHBlcmZfZXZlbnQNDQpbICAgIDAuMDUyMDY2XSBDUFU6IFBoeXNpY2FsIFByb2Nlc3NvciBJ
RDogMA0NClsgICAgMC4wNTYwMDZdIENQVTogUHJvY2Vzc29yIENvcmUgSUQ6IDANDQpbICAgIDAu
MDYwMjgwXSBBQ1BJOiBDb3JlIHJldmlzaW9uIDIwMTEwNjIzDQ0KWyAgICAwLjA3MjM4NF0gUGVy
Zm9ybWFuY2UgRXZlbnRzOiBCcm9rZW4gQklPUyBkZXRlY3RlZCwgY29tcGxhaW4gdG8geW91ciBo
YXJkd2FyZSB2ZW5kb3IuDQ0KWyAgICAwLjA3NjAxNF0gW0Zpcm13YXJlIEJ1Z106IHRoZSBCSU9T
IGhhcyBjb3JydXB0ZWQgaHctUE1VIHJlc291cmNlcyAoTVNSIGMwMDEwMDAwIGlzIDUzMDA3NikN
DQpbICAgIDAuMDgwMDA3XSBBTUQgUE1VIGRyaXZlci4NDQpbICAgIDAuMDgyODY0XSAtLS0tLS0t
LS0tLS1bIGN1dCBoZXJlIF0tLS0tLS0tLS0tLS0NDQpbICAgIDAuMDg0MDE4XSBXQVJOSU5HOiBh
dCAvYnVpbGQvYnVpbGRkLWxpbnV4XzMuMi4yMS0zLWkzODYtdkVvaG40L2xpbnV4LTMuMi4yMS9h
cmNoL3g4Ni94ZW4vZW5saWdodGVuLmM6NzM4IHBlcmZfZXZlbnRzX2xhcGljX2luaXQrMHgyOC8w
eDI5KCkNDQpbICAgIDAuMDg4MDA5XSBIYXJkd2FyZSBuYW1lOiBlbXB0eQ0NClsgICAgMC4wOTEy
OTldIE1vZHVsZXMgbGlua2VkIGluOg0NClsgICAgMC4wOTIyNzVdIFBpZDogMSwgY29tbTogc3dh
cHBlci8wIE5vdCB0YWludGVkIDMuMi4wLTMtNjg2LXBhZSAjMQ0NClsgICAgMC4wOTYwMDhdIENh
bGwgVHJhY2U6DQ0KWyAgICAwLjA5ODUyN10gIFs8YzEwMzdmY2M+XSA/IHdhcm5fc2xvd3BhdGhf
Y29tbW9uKzB4NjgvMHg3OQ0NClsgICAgMC4xMDAwMTldICBbPGMxMDE1MGQyPl0gPyBwZXJmX2V2
ZW50c19sYXBpY19pbml0KzB4MjgvMHgyOQ0NClsgICAgMC4xMDQwMTBdICBbPGMxMDM3ZmVhPl0g
PyB3YXJuX3Nsb3dwYXRoX251bGwrMHhkLzB4MTANDQpbICAgIDAuMTA4MDExXSAgWzxjMTAxNTBk
Mj5dID8gcGVyZl9ldmVudHNfbGFwaWNfaW5pdCsweDI4LzB4MjkNDQpbICAgIDAuMTEyMDE1XSAg
WzxjMTQxYzk3ZT5dID8gaW5pdF9od19wZXJmX2V2ZW50cysweDIyMy8weDNiMQ0NClsgICAgMC4x
MTYwMTJdICBbPGMxNDFjNzViPl0gPyBjaGVja19idWdzKzB4MWQ5LzB4MWQ5DQ0KWyAgICAwLjEy
MDAxMl0gIFs8YzEwMDMwNzQ+XSA/IGRvX29uZV9pbml0Y2FsbCsweDY2LzB4MTBlDQ0KWyAgICAw
LjEyNDAxMl0gIFs8YzE0MTU3NzA+XSA/IGtlcm5lbF9pbml0KzB4NmQvMHgxMjUNDQpbICAgIDAu
MTI4MDEyXSAgWzxjMTQxNTcwMz5dID8gc3RhcnRfa2VybmVsKzB4MzI1LzB4MzI1DQ0KWyAgICAw
LjEzMjAxNV0gIFs8YzEyYzQ2M2U+XSA/IGtlcm5lbF90aHJlYWRfaGVscGVyKzB4Ni8weDEwDQ0K
WyAgICAwLjEzNjAxOV0gLS0tWyBlbmQgdHJhY2UgYTc5MTllN2YxN2MwYTcyNSBdLS0tDQ0KWyAg
ICAwLjE0MDAxM10gLi4uIHZlcnNpb246ICAgICAgICAgICAgICAgIDANDQpbICAgIDAuMTQ0MDEx
XSAuLi4gYml0IHdpZHRoOiAgICAgICAgICAgICAgNDgNDQpbICAgIDAuMTQ4MDE5XSAuLi4gZ2Vu
ZXJpYyByZWdpc3RlcnM6ICAgICAgNA0NClsgICAgMC4xNTIwMTddIC4uLiB2YWx1ZSBtYXNrOiAg
ICAgICAgICAgICAwMDAwZmZmZmZmZmZmZmZmDQ0KWyAgICAwLjE1NjAxMl0gLi4uIG1heCBwZXJp
b2Q6ICAgICAgICAgICAgIDAwMDA3ZmZmZmZmZmZmZmYNDQpbICAgIDAuMTYwMDE5XSAuLi4gZml4
ZWQtcHVycG9zZSBldmVudHM6ICAgMA0NClsgICAgMC4xNjQwMTldIC4uLiBldmVudCBtYXNrOiAg
ICAgICAgICAgICAwMDAwMDAwMDAwMDAwMDBmDQ0KWyAgICAwLjE2ODI3N10gTk1JIHdhdGNoZG9n
IGVuYWJsZWQsIHRha2VzIG9uZSBody1wbXUgY291bnRlci4NDQooWEVOKSB0cmFwcy5jOjI1ODQ6
ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDA0IGZyb20gMHgwMDAwZmZm
ZjlhYzQ2NDYwIHRvIDB4MDAwMGZmZmI1YWQ1MWVjMC4NClsgICAgMC4xNzYwMTBdIC0tLS0tLS0t
LS0tLVsgY3V0IGhlcmUgXS0tLS0tLS0tLS0tLQ0NClsgICAgMC4xNzYwMTBdIFdBUk5JTkc6IGF0
IC9idWlsZC9idWlsZGQtbGludXhfMy4yLjIxLTMtaTM4Ni12RW9objQvbGludXgtMy4yLjIxL2Fy
Y2gveDg2L3hlbi9lbmxpZ2h0ZW4uYzo3MzggcGVyZl9ldmVudHNfbGFwaWNfaW5pdCsweDI4LzB4
MjkoKQ0NClsgICAgMC4xNzYwMTBdIEhhcmR3YXJlIG5hbWU6IGVtcHR5DQ0KWyAgICAwLjE3NjAx
MF0gTW9kdWxlcyBsaW5rZWQgaW46DQ0KWyAgICAwLjE3NjAxMF0gUGlkOiAxLCBjb21tOiBzd2Fw
cGVyLzAgVGFpbnRlZDogRyAgICAgICAgVyAgICAzLjIuMC0zLTY4Ni1wYWUgIzENDQpbICAgIDAu
MTc2MDEwXSBDYWxsIFRyYWNlOg0NClsgICAgMC4xNzYwMTBdICBbPGMxMDM3ZmNjPl0gPyB3YXJu
X3Nsb3dwYXRoX2NvbW1vbisweDY4LzB4NzkNDQpbICAgIDAuMTc2MDEwXSAgWzxjMTAxNTBkMj5d
ID8gcGVyZl9ldmVudHNfbGFwaWNfaW5pdCsweDI4LzB4MjkNDQpbICAgIDAuMTc2MDEwXSAgWzxj
MTAzN2ZlYT5dID8gd2Fybl9zbG93cGF0aF9udWxsKzB4ZC8weDEwDQ0KWyAgICAwLjE3NjAxMF0g
IFs8YzEwMTUwZDI+XSA/IHBlcmZfZXZlbnRzX2xhcGljX2luaXQrMHgyOC8weDI5DQ0KWyAgICAw
LjE3NjAxMF0gIFs8YzEwMTUyNzQ+XSA/IHg4Nl9wbXVfZW5hYmxlKzB4MWExLzB4MjI3DQ0KWyAg
ICAwLjE3NjAxMF0gIFs8YzEwOGZmMjY+XSA/IHBlcmZfcG11X2VuYWJsZSsweDFhLzB4MWINDQpb
ICAgIDAuMTc2MDEwXSAgWzxjMTAxNDAzMT5dID8geDg2X3BtdV9jb21taXRfdHhuKzB4NmIvMHg3
OA0NClsgICAgMC4xNzYwMTBdICBbPGMxMDc3MGRmPl0gPyBoYW5kbGVfaXJxX2V2ZW50X3BlcmNw
dSsweDE0Mi8weDE1OA0NClsgICAgMC4xNzYwMTBdICBbPGMxMDc4ZTM0Pl0gPyBoYW5kbGVfcGVy
Y3B1X2lycSsweDI5LzB4MzcNDQpbICAgIDAuMTc2MDEwXSAgWzxjMTFjMjNmOT5dID8gYXJjaF9s
b2NhbF9zYXZlX2ZsYWdzKzB4Ni8weDcNDQpbICAgIDAuMTc2MDEwXSAgWzxjMTFjMjVlMj5dID8g
X194ZW5fZXZ0Y2huX2RvX3VwY2FsbCsweDE3Yi8weDFhZA0NClsgICAgMC4xNzYwMTBdICBbPGMx
MDI0OTRjPl0gPyBwdmNsb2NrX2Nsb2Nrc291cmNlX3JlYWQrMHhjNS8weGY3DQ0KWyAgICAwLjE3
NjAxMF0gIFs8YzEwMGY3ZGY+XSA/IHNjaGVkX2Nsb2NrKzB4OS8weGQNDQpbICAgIDAuMTc2MDEw
XSAgWzxjMTA1MGZkNj5dID8gc2NoZWRfY2xvY2tfbG9jYWwrMHgxMC8weDE0Yg0NClsgICAgMC4x
NzYwMTBdICBbPGMxMDkwODQxPl0gPyBldmVudF9zY2hlZF9pbisweDczLzB4MTA4DQ0KWyAgICAw
LjE3NjAxMF0gIFs8YzEwOTA5NDM+XSA/IGdyb3VwX3NjaGVkX2luKzB4NmQvMHgxMGINDQpbICAg
IDAuMTc2MDEwXSAgWzxjMTA1MGY5ZD5dID8gYXJjaF9sb2NhbF9pcnFfcmVzdG9yZSsweDYvMHg3
DQ0KWyAgICAwLjE3NjAxMF0gIFs8YzEwNTEyYmI+XSA/IGxvY2FsX2Nsb2NrKzB4MjMvMHgyYw0N
ClsgICAgMC4xNzYwMTBdICBbPGMxMDkxMTg1Pl0gPyBfX3BlcmZfZXZlbnRfZW5hYmxlKzB4MTA3
LzB4MTQ0DQ0KWyAgICAwLjE3NjAxMF0gIFs8YzEwOGU0Njk+XSA/IHBlcmZfZXhjbHVkZV9ldmVu
dC5wYXJ0LjIzKzB4MzkvMHgzOQ0NClsgICAgMC4xNzYwMTBdICBbPGMxMDhlNDk3Pl0gPyByZW1v
dGVfZnVuY3Rpb24rMHgyZS8weDMzDQ0KWyAgICAwLjE3NjAxMF0gIFs8YzEwNWNiNjY+XSA/IHNt
cF9jYWxsX2Z1bmN0aW9uX3NpbmdsZSsweDZlLzB4Y2MNDQpbICAgIDAuMTc2MDEwXSAgWzxjMTA4
ZDM2MT5dID8gY3B1X2Z1bmN0aW9uX2NhbGwrMHgyYS8weDMyDQ0KWyAgICAwLjE3NjAxMF0gIFs8
YzEwOTEwN2U+XSA/IF9fcGVyZl9ldmVudF90YXNrX3NjaGVkX2luKzB4NWYvMHg1Zg0NClsgICAg
MC4xNzYwMTBdICBbPGMxMDc2NmQ3Pl0gPyB3YXRjaGRvZ19lbmFibGUrMHhjOS8weDE0MA0NClsg
ICAgMC4xNzYwMTBdICBbPGMxMmI4ZDBjPl0gPyBjcHVfY2FsbGJhY2srMHg2Mi8weDcwDQ0KWyAg
ICAwLjE3NjAxMF0gIFs8YzE0MmFhYTU+XSA/IGxvY2t1cF9kZXRlY3Rvcl9pbml0KzB4MzAvMHg0
Yw0NClsgICAgMC4xNzYwMTBdICBbPGMxNDE1NzdkPl0gPyBrZXJuZWxfaW5pdCsweDdhLzB4MTI1
DQ0KWyAgICAwLjE3NjAxMF0gIFs8YzE0MTU3MDM+XSA/IHN0YXJ0X2tlcm5lbCsweDMyNS8weDMy
NQ0NClsgICAgMC4xNzYwMTBdICBbPGMxMmM0NjNlPl0gPyBrZXJuZWxfdGhyZWFkX2hlbHBlcisw
eDYvMHgxMA0NClsgICAgMC4xNzYwMTBdIC0tLVsgZW5kIHRyYWNlIGE3OTE5ZTdmMTdjMGE3MjYg
XS0tLQ0NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAw
MDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2
Lg0KWyAgICAwLjE4MDAxMF0gaW5zdGFsbGluZyBYZW4gdGltZXIgZm9yIENQVSAxDQ0KKFhFTikg
dHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBm
cm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQpbICAgIDAuMDA4
MDAwXSBJbml0aWFsaXppbmcgQ1BVIzENDQpbICAgIDAuMTg0MDEwXSBOTUkgd2F0Y2hkb2cgZW5h
YmxlZCwgdGFrZXMgb25lIGh3LXBtdSBjb3VudGVyLg0NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBE
b21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1
MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFp
biBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwNCBmcm9tIDB4MDAwMGZmZmZjYTVjYzkz
NiB0byAweDAwMDBmZmZiNWFkNTFlYzAuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0
dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRv
IDB4MDAwMDAwMDAwMDEzMDA3Ni4NClsgICAgMC4xODgwMTFdIC0tLS0tLS0tLS0tLVsgY3V0IGhl
cmUgXS0tLS0tLS0tLShYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNS
IDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAw
MTMwMDc2Lg0KLS0tDQ0KWyAgICAwLjE4ODAxMV0gV0FSTklORzogYXQgL2J1aWxkL2J1aWxkZC1s
aW51eF8zKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAw
MDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYu
DQouMi4yMS0zLWkzODYtdkVvaG40L2xpbnV4LTMuMi4yMS9hcmNoL3g4Ni94ZW4vZW4oWEVOKSB0
cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZy
b20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCmxpZ2h0ZW4uYzo3
MzggcGVyZl9ldmVudHNfbGFwaWNfaW5pdCsweDI4LzB4MjkoKShYRU4pIHRyYXBzLmM6MjU4NDpk
MCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAw
MDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KDQ0KWyAgICAwLjE4ODAxMV0gSGFyZHdh
cmUgbmFtZTogZW1wdHkNDQpbICAgIDAuMTg4MDExXSBNKFhFTikgdHJhcHMuYzoyNTg0OmQwIERv
bWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUz
MDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQpvZHVsZXMgbGlua2VkIGluOg0NClsgICAgMC4x
ODgwMTFdIFBpZDogMCwgY29tbTogc3dhKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRl
bXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAw
eDAwMDAwMDAwMDAxMzAwNzYuDQpwcGVyLzEgVGFpbnRlZDogRyAgICAgICAgVyAgICAzLjIuMC0z
LTY4Ni1wYWUgIzENDQpbICAgIDAuMTg4MDExXSBDKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFp
biBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3
NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQphbGwgVHJhY2U6DQ0KWyAgICAwLjE4ODAxMV0gIFs8
YzEwMzdmY2M+XSA/IHdhcm5fc2xvd3BhdGhfY29tKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFp
biBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3
NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQptb24rMHg2OC8weDc5DQ0KWyAgICAwLjE4ODAxMV0g
IFs8YzEwMTUwZDI+XSA/IHAoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBX
Uk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAw
MDAwMDEzMDA3Ni4NCmVyZl9ldmVudHNfbGFwaWNfaW5pdCsweDI4LzB4MjkNDQpbICAgIDAuMTg4
MDExXSAgKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAw
MDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYu
DQpbPGMxMDM3ZmVhPl0gPyB3YXJuX3Nsb3dwYXRoX251bGwrMHhkLzB4MTANDQpbICAgIDAuMTg4
MDExXSAgKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAw
MDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYu
[<c10150d2>] ? perf_events_lapic_init+0x28/0x29
(XEN) traps.c:2584:d0 Domain attempted WRMSR 00000000c0010000 from 0x0000000000530076 to 0x0000000000130076.
[    0.188011]  [<c1015274>] ? x86_pmu_enable+0x1a1/0x227
[    0.188011]  [<c108ff26>] ? perf_pmu_enable+0x1a/0x1b
[    0.188011]  [<c1014031>] ? x86_pmu_commit_txn+0x6b/0x78
[    0.188011]  [<c1050f9d>] ? arch_local_irq_restore+0x6/0x7
[    0.188011]  [<c10512bb>] ? local_clock+0x23/0x2c
[    0.188011]  [<c1090a0e>] ? ctx_sched_in+0x2d/0x135
[    0.188011]  [<c1015288>] ? x86_pmu_enable+0x1b5/0x227
[    0.188011]  [<c102494c>] ? pvclock_clocksource_read+0xc5/0xf7
[    0.188011]  [<c102494c>] ? pvclock_clocksource_read+0xc5/0xf7
[    0.188011]  [<c100f7df>] ? sched_clock+0x9/0xd
[    0.188011]  [<c1050fd6>] ? sched_clock_local+0x10/0x14b
[    0.188011]  [<c1090841>] ? event_sched_in+0x73/0x108
[    0.188011]  [<c1090943>] ? group_sched_in+0x6d/0x10b
[    0.188011]  [<c1050f9d>] ? arch_local_irq_restore+0x6/0x7
[    0.188011]  [<c10512bb>] ? local_clock+0x23/0x2c
[    0.188011]  [<c1091185>] ? __perf_event_enable+0x107/0x144
[    0.188011]  [<c108e497>] ? remote_function+0x2e/0x33
[    0.188011]  [<c105cfad>] ? generic_smp_call_function_single_interrupt+0x97/0xb2
[    0.188011]  [<c100a75c>] ? xen_call_function_single_interrupt+0xa/0x1c
[    0.188011]  [<c1076fe4>] ? handle_irq_event_percpu+0x47/0x158
[    0.188011]  [<c1078811>] ? irq_get_irq_data+0x5/0x6
[    0.188011]  [<c1078e34>] ? handle_percpu_irq+0x29/0x37
[    0.188011]  [<c11c258d>] ? __xen_evtchn_do_upcall+0x126/0x1ad
[    0.188011]  [<c11c37d0>] ? xen_evtchn_do_upcall+0x18/0x26
[    0.188011]  [<c12c4697>] ? xen_do_upcall+0x7/0xc
[    0.188011]  [<c10023a7>] ? hypercall_page+0x3a7/0x1000
[    0.188011]  [<c100603a>] ? xen_safe_halt+0xf/0x19
[    0.188011]  [<c10105b4>] ? default_idle+0x52/0x87
[    0.188011]  [<c100aa47>] ? cpu_idle+0x95/0xaf
[    0.188011] ---[ end trace a7919e7f17c0a727 ]---
[    0.372022] installing Xen timer for CPU 2
[    0.008000] Initializing CPU#2
[    0.380023] NMI watchdog enabled, takes one hw-pmu counter.
(XEN) traps.c:2584:d0 Domain attempted WRMSR 00000000c0010004 from 0x0000ffffcacc2a71 to 0x0000fffb5ad51ec0.
[    0.384023] ------------[ cut here ]------------
[    0.384023] WARNING: at /build/buildd-linux_3.2.21-3-i386-vEohn4/linux-3.2.21/arch/x86/xen/enlighten.c:738 perf_events_lapic_init+0x28/0x29()
[    0.384023] Hardware name: empty
[    0.384023] Modules linked in:
[    0.384023] P(XEN) traps.c
OjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgw
MDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4
NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAw
MDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQw
IERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAw
MDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9t
YWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMw
MDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4g
YXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYg
dG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRl
bXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAw
eDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRl
ZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAw
MDAwMDAwMDEzMDA3Ni4NCnBwZXIvMiBUYWludGVkOiBHICAgICAgICBXICAgIDMuMi4wLTMtNjg2
LXBhZSAjMShYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAw
MDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2
Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBj
MDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQoo
WEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEw
MDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4p
IHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAg
ZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJh
cHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9t
IDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQoNDQooWEVOKSB0cmFw
cy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20g
MHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6
MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAw
MDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0
OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAw
MDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAg
RG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAw
NTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21h
aW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAw
NzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBh
dHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0
byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVt
cHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4
MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVk
IFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAw
MDAwMDAwMTMwMDc2Lg0KWyAgICAwLjM4NDAyM10gQyhYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21h
aW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAw
NzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBh
dHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0
byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVt
cHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4
MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVk
IFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAw
MDAwMDAwMTMwMDc2Lg0KYWxsIFRyYWNlOg0NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4g
YXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYg
dG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRl
bXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAw
eDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRl
ZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAw
MDAwMDAwMDEzMDA3Ni4NClsgICAgMC4zODQwMjNdICAoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9t
YWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMw
MDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4g
YXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYg
dG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRl
bXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAw
eDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRl
ZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAw
MDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdS
TVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAw
MDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1Ig
MDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAx
MzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAw
MDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3
Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAw
YzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0K
KFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAx
MDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVO
KSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAw
IGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRy
YXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJv
bSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMu
YzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4
MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1
ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAw
MDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpk
MCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAw
MDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERv
bWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUz
MDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWlu
IGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2
IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0
ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8g
MHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0
ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAw
MDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBX
Uk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAw
MDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNS
IDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAw
MTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAw
MDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAw
NzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAw
MGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4N
CihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAw
MTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhF
TikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAw
MCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0
cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZy
b20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBz
LmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAw
eDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoy
NTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAw
MDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6
ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAw
MDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBE
b21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1
MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFp
biBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3
NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0
dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRv
IDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1w
dGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgw
MDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQg
V1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAw
MDAwMDAxMzAwNzYuDQpbPGMxMDM3ZmNjPl0gPyB3KFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFp
biBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3
NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQphcm5fc2xvd3BhdGhfY29tKFhFTikgdHJhcHMuYzoy
NTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAw
MDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6
ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAw
MDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBE
b21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1
MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KbW9uKzB4NjgvMHg3OQ0NCihYRU4pIHRyYXBz
LmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAw
eDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoy
NTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAw
MDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6
ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAw
MDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NClsgICAgMC4zODQwMjNdICBbPGMxMDE1
MGQyPl0gPyBwZXJmX2V2ZW50c19sYXBpYyhYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0
ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8g
MHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0
ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAw
MDAwMDAwMDAxMzAwNzYuDQpfaW5pdCsweDI4LzB4MjkNKFhFTikgdHJhcHMuYzoyNTg0OmQwIERv
bWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUz
MDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWlu
IGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2
IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0
ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8g
MHgwMDAwMDAwMDAwMTMwMDc2Lg0KDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVt
cHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4
MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVk
IFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAw
MDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JN
U1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAw
MDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAw
MDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEz
MDA3Ni4NClsgICAgMC4zODQwMjNdICAoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVt
cHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4
MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVk
IFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAw
MDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JN
U1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAw
MDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAw
MDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEz
MDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAw
MDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2
Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBj
MDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQpb
PGMxMDM3ZmVhPl0gPyB3KFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JN
U1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAw
MDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAw
MDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEz
MDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAw
MDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2
Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBj
MDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQoo
WEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEw
MDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4p
IHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAg
ZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJh
cHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9t
IDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5j
OjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgw
MDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4
NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAw
MDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQw
IERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAw
MDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9t
YWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMw
MDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4g
YXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYg
dG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRl
bXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAw
eDAwMDAwMDAwMDAxMzAwNzYuDQphcm5fc2xvd3BhdGhfbnVsKFhFTikgdHJhcHMuYzoyNTg0OmQw
IERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAw
MDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9t
YWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMw
MDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4g
YXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYg
dG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRl
bXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAw
eDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRl
ZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAw
MDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdS
TVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAw
MDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1Ig
MDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAx
MzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAw
MDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3
Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAw
YzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0K
KFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAx
MDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVO
KSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAw
IGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRy
YXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJv
bSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMu
YzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4
MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1
ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAw
MDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCmwrMHhkLzB4MTANDQooWEVOKSB0
cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZy
b20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NClsgICAgMC4zODQw
MjNdICAoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAw
MGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4N
CihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAw
MTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhF
TikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAw
MCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0
cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZy
b20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBz
LmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAw
eDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoy
NTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAw
MDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6
ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAw
MDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBE
b21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1
MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFp
biBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3
NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0
dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRv
IDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1w
dGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgw
MDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQg
V1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAw
MDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1T
UiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAw
MDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAw
MDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMw
MDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAw
MDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYu
DQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMw
MDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCls8
YzEwMTUwZDI+XSA/IHAoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1T
UiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAw
MDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAw
MDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMw
MDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAw
MDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYu
DQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMw
MDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihY
RU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAw
MDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikg
dHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBm
cm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFw
cy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20g
MHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6
MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAw
MDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0
OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAw
MDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAg
RG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAw
NTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21h
aW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAw
NzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBh
dHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0
byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVt
cHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4
MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVk
IFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAw
MDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JN
U1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAw
MDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAw
MDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEz
MDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAw
MDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2
Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBj
MDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQpl
cmZfZXZlbnRzX2xhcGljKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JN
U1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAw
MDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAw
MDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEz
MDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAw
MDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2
Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBj
MDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQoo
WEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEw
MDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4p
IHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAg
ZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJh
cHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9t
IDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQpfaW5pdCsweDI4LzB4
MjkNKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBj
MDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQoo
WEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEw
MDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4p
IHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAg
ZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJh
cHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9t
IDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5j
OjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgw
MDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4
NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAw
MDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQw
IERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAw
MDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9t
YWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMw
MDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4g
YXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYg
dG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRl
bXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAw
eDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRl
ZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAw
MDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdS
TVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAw
MDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1Ig
MDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAx
MzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAw
MDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3
Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAw
YzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0K
KFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAx
MDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQoNCihY
RU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAw
MDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikg
dHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBm
cm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFw
cy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20g
MHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6
MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAw
MDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KWyAgICAwLjM4NDAyM10gIChY
RU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAw
MDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KWzxjMTAx
NTI3ND5dID8geChYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAw
MDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMw
MDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAw
MDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYu
DQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMw
MDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihY
RU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAw
MDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikg
dHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBm
cm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFw
cy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20g
MHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6
MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAw
MDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0
OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAw
MDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAg
RG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAw
NTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21h
aW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAw
NzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBh
dHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0
byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVt
cHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4
MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVk
IFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAw
MDAwMDAwMTMwMDc2Lg0KODZfcG11X2VuYWJsZSsweChYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21h
in attempted WRMSR 00000000c0010000 from 0x0000000000530076 to 0x0000000000130076.
(XEN) traps.c:2584:d0 Domain attempted WRMSR 00000000c0010000 from 0x0000000000530076 to 0x0000000000130076.
(XEN) traps.c:2584:d0 Domain attempted WRMSR 00000000c0010000 from 0x0000000000530076 to 0x0000000000130076.
[base64 attachment body decoded; the (XEN) line above repeats continuously through this chunk, chopping the dom0 backtrace below into fragments, which are reassembled here in stream order]
...+0x1a1/0x227
[    0.384023]  [<c108ff26>] ? perf_pmu_enable+0x1a/0x1b
[    0.384023]  [<c1014031>] ? x86_pmu_commit_txn+0x6b/0x78
[    0.384023]  [<c1050f9d>] ? arch_local_irq_restore+0x6/0x7
[    0.384023]  [<c10512bb>] ? local_clock+0x23/0x2c
[    0.384023]  [<c1090a0e>] ? ctx_sched_in+0x2d/0x135
[    0.384023]  [<c1015288>] ? x86_pmu_enable+0x1b5/0x227
[    0.384023]  [<c102494c>] ? p
LmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAw
eDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoy
NTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAw
MDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6
ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAw
MDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBE
b21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1
MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFp
biBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3
NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0
dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRv
IDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1w
dGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgw
MDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQg
V1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAw
MDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1T
UiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAw
MDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAw
MDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMw
MDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAw
MDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYu
DQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMw
MDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihY
RU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAw
MDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikg
dHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBm
cm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFw
cy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20g
MHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6
MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAw
MDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0
OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAw
MDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAg
RG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAw
NTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21h
aW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAw
NzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBh
dHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0
byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVt
cHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4
MDAwMDAwMDAwMDEzMDA3Ni4NCnZjbG9ja19jbG9ja3NvdXIoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAg
RG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAw
NTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21h
aW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAw
NzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KY2VfcmVhZCsweGM1LzB4ZihYRU4pIHRyYXBzLmM6
MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAw
MDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0
OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAw
MDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQo3DQ0KKFhFTikgdHJhcHMuYzoyNTg0
OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAw
MDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAg
RG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAw
NTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21h
aW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAw
NzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBh
dHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0
byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVt
cHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4
MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVk
IFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAw
MDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JN
U1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAw
MDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAw
MDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEz
MDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAw
MDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2
Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBj
MDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQoo
WEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEw
MDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4p
IHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAg
ZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJh
cHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9t
IDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5j
OjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgw
MDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4
NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAw
MDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQw
IERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAw
MDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9t
YWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMw
MDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4g
YXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYg
dG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRl
bXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAw
eDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRl
ZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAw
MDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdS
TVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAw
MDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1Ig
MDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAx
MzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAw
MDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3
Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAw
YzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0K
KFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAx
MDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQpbICAg
IDAuMzg0MDIzXSAgKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1Ig
MDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAx
MzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAw
MDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3
Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAw
YzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0K
KFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAx
MDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVO
KSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAw
IGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRy
YXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJv
bSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMu
YzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4
MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1
ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAw
MDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpk
MCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAw
MDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERv
bWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUz
MDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWlu
IGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2
IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0
ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8g
MHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0
ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAw
MDAwMDAwMDAxMzAwNzYuDQpbPGMxMDI0OTRjPl0gPyBwdmNsb2NrX2Nsb2Nrc291cmNlX3JlYWQr
MHhjNS8weGYoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAw
MDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3
Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAw
YzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0K
KFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAx
MDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVO
KSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAw
IGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRy
YXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJv
bSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMu
YzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4
MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1
ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAw
MDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpk
MCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAw
MDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERv
bWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUz
MDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWlu
IGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2
IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0
ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8g
MHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0
ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAw
MDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBX
Uk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAw
MDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNS
IDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAw
MTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAw
MDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAw
NzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAw
MGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4N
CihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAw
MTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhF
TikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAw
MCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0
cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZy
b20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBz
LmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAw
eDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KNw0NCihYRU4pIHRyYXBz
LmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAw
eDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoy
NTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAw
MDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6
ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAw
MDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBE
b21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1
MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFp
biBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3
NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0
dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRv
IDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1w
dGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgw
MDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQg
V1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAw
MDAwMDAxMzAwNzYuDQpbICAgIDAuMzg0MDIzXSAgKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFp
biBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3
NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0
dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRv
IDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1w
dGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgw
MDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQg
V1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAw
MDAwMDAxMzAwNzYuDQpbPGMxMDBmN2RmPl0gPyBzKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFp
biBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3
NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0
dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRv
IDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1w
dGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgw
MDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQg
V1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAw
MDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1T
UiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAw
MDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAw
MDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMw
MDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAw
MDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYu
DQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMw
MDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihY
RU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAw
MDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikg
dHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBm
cm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQpjaGVkX2Nsb2Nr
KzB4OS8wKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAw
MDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYu
DQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMw
MDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihY
RU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAw
MDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikg
dHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBm
cm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFw
cy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20g
MHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6
MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAw
MDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0
OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAw
MDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAg
RG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAw
NTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21h
aW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAw
NzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBh
dHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0
byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVt
cHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4
MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVk
IFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAw
MDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JN
U1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAw
MDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAw
MDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEz
MDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAw
MDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2
Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBj
MDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQoo
WEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEw
MDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4p
IHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAg
ZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJh
cHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9t
IDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5j
OjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgw
MDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4
NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAw
MDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQw
IERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAw
MDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9t
YWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMw
MDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4g
YXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYg
dG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRl
bXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAw
eDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRl
ZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAw
MDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdS
TVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAw
MDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1Ig
MDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAx
MzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAw
MDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3
Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAw
YzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0K
KFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAx
MDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVO
KSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAw
IGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRy
YXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJv
bSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMu
YzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4
MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1
ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAw
MDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpk
MCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAw
MDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERv
bWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUz
MDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQp4ZA0NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBE
b21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1
MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFp
biBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3
NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0
dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRv
IDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1w
dGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgw
MDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQg
V1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAw
MDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1T
UiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAw
MDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAw
MDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMw
MDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAw
MDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYu
DQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMw
MDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihY
RU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAw
MDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikg
dHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBm
cm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFw
cy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20g
MHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6
MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAw
MDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0
OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAw
MDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAg
RG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAw
NTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21h
aW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAw
NzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBh
dHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0
byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVt
cHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4
MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVk
IFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAw
MDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JN
U1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAw
MDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAw
MDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEz
MDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAw
MDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2
Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBj
MDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQoo
WEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEw
MDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4p
IHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAg
ZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJh
cHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9t
IDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5j
OjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgw
MDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NClsgICAgMC4zODQwMjNdICBb
PGMxMDUwZmQ2Pl0gPyBzY2hlZF9jbG9ja19sb2NhbChYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21h
aW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAw
NzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBh
dHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0
byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVt
cHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4
MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVk
IFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAw
MDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JN
U1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAw
MDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAw
MDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEz
MDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAw
MDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2
Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBj
MDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQoo
WEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEw
MDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4p
IHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAg
ZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJh
cHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9t
IDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5j
OjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgw
MDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4
NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAw
MDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQw
IERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAw
MDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9t
YWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMw
MDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4g
YXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYg
dG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKzB4MTAvMHgxNGINDQooWEVOKSB0cmFwcy5jOjI1ODQ6
ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAw
MDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBE
b21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1
[base64-decoded log attachment; the following Xen console message repeats continuously throughout this span, sometimes echoed into dmesg with timestamp 0.384023:]

(XEN) traps.c:2584:d0 Domain attempted WRMSR 00000000c0010000 from 0x0000000000530076 to 0x0000000000130076.

[Fragments of a dom0 kernel backtrace, split apart by the message flood, reassemble to:]

[<c1090841>] ? event_sched_in+0x73/0x108
[<c1090943>] ? group_sched_in+0x6d/0x10b
[<c1050f9d>] ? arch_local_irq_restore+0x6/0x7
[<c10512bb>] ? local_clock+0x23/0x2c
[<c1091185>] ? __perf_event_enable+0x107/0x144
[    0.384023]  [<c108e497>] ? remote_function+0x2e/0x33
[<c105cfad>] ? g
MDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpk
MCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAw
MDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERv
bWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUz
MDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWlu
IGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2
IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0
ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8g
MHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0
ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAw
MDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBX
Uk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAw
MDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNS
IDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAw
MTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAw
MDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAw
NzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAw
MGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4N
CihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAw
MTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhF
TikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAw
MCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0
cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZy
b20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBz
LmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAw
eDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoy
NTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAw
MDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6
ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAw
MDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBE
b21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1
MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFp
biBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3
NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQplbmVyaWNfc21wX2NhbGxfKFhFTikgdHJhcHMuYzoy
NTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAw
MDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6
ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAw
MDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCmZ1bmN0aW9uX3NpbmdsZV8oWEVOKSB0
cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZy
b20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBz
LmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAw
eDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoy
NTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAw
MDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6
ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAw
MDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBE
b21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1
MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFp
biBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3
NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0
dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRv
IDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1w
dGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgw
MDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQg
V1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAw
MDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1T
UiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAw
MDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAw
MDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMw
MDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAw
MDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYu
DQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMw
MDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihY
RU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAw
MDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikg
dHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBm
cm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFw
cy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20g
MHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6
MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAw
MDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0
OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAw
MDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAg
RG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAw
NTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21h
aW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAw
NzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBh
dHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0
byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVt
cHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4
MDAwMDAwMDAwMDEzMDA3Ni4NCmludGVycnVwdCsweDk3LzAoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAg
RG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAw
NTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21h
aW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAw
NzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBh
dHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0
byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVt
cHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4
MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVk
IFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAw
MDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JN
U1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAw
MDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAw
MDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEz
MDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAw
MDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2
Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBj
MDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQoo
WEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEw
MDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4p
IHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAg
ZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJh
cHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9t
IDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5j
OjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgw
MDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4
NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAw
MDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KeGIyDQ0KKFhFTikgdHJhcHMuYzoy
NTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAw
MDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6
ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAw
MDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBE
b21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1
MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFp
biBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3
NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQpbICAgIDAuMzg0MDIzXSAgWzxjMTAwYTc1Yz5dID8g
eGVuX2NhbGxfZnVuY3Rpb24oWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBX
Uk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAw
MDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNS
IDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAw
MTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAw
MDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAw
NzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAw
MGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4N
CihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAw
MTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhF
TikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAw
MCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0
cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZy
b20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBz
LmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAw
eDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoy
NTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAw
MDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6
ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAw
MDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBE
b21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1
MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFp
biBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3
NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0
dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRv
IDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1w
dGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgw
MDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQg
V1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAw
MDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1T
UiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAw
MDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAw
MDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMw
MDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAw
MDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYu
DQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMw
MDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihY
RU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAw
MDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikg
dHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBm
cm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFw
cy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20g
MHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6
MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAw
MDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0
OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAw
MDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQpfc2luZ2xlX2ludGVycnVwKFhFTikg
dHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBm
cm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFw
cy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20g
MHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6
MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAw
MDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0
OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAw
MDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAg
RG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAw
NTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21h
aW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAw
NzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBh
dHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0
byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVt
cHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4
MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVk
IFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAw
MDAwMDAwMTMwMDc2Lg0KdCsweGEvMHgxYw0NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4g
YXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYg
dG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRl
bXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAw
eDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRl
ZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAw
MDAwMDAwMDEzMDA3Ni4NClsgICAgMC4zODQwMjNdICAoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9t
YWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMw
MDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4g
YXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYg
dG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRl
bXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAw
eDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRl
ZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAw
MDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdS
TVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAw
MDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1Ig
MDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAx
MzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAw
MDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3
Ni4NCls8YzEwNzZmZTQ+XSA/IGgoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRl
ZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAw
MDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdS
TVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAw
MDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1Ig
MDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAx
MzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAw
MDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3
Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAw
YzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0K
KFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAx
MDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVO
KSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAw
IGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRy
YXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJv
bSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMu
YzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4
MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1
ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAw
MDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpk
MCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAw
MDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERv
bWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUz
MDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWlu
IGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2
IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0
ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8g
MHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0
ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAw
MDAwMDAwMDAxMzAwNzYuDQphbmRsZV9pcnFfZXZlbnRfKFhFTikgdHJhcHMuYzoyNTg0OmQwIERv
bWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUz
MDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWlu
IGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2
IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0
ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8g
MHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0
ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAw
MDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBX
Uk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAw
MDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNS
IDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAw
MTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAw
MDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAw
NzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAw
MGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4N
CihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAw
MTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhF
TikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAw
MCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0
cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZy
b20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBz
LmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAw
eDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoy
NTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAw
MDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6
ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAw
MDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBE
b21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1
MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFp
biBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3
NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0
dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRv
IDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1w
dGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgw
MDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQg
V1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAw
MDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1T
UiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAw
MDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAw
MDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMw
MDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAw
MDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYu
DQpwZXJjcHUrMHg0Ny8weDE1KFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQg
V1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAw
MDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1T
UiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAw
MDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAw
MDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMw
MDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAw
MDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYu
DQo4DQ0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAw
MDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYu
DQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMw
MDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihY
RU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAw
MDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikg
dHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBm
cm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFw
cy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20g
MHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6
MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAw
MDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0
OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAw
MDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAg
RG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAw
NTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21h
aW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAw
NzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBh
dHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0
byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVt
cHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4
MDAwMDAwMDAwMDEzMDA3Ni4NClsgICAgMC4zODQwMjNdICAoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAg
RG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAw
NTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21h
aW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAw
NzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBh
dHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0
byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVt
cHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4
MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVk
IFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAw
MDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JN
U1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAw
MDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAw
MDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEz
MDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAw
MDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2
Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBj
MDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQpb
PGMxMDc4ODExPl0gPyBpKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JN
U1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAw
MDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAw
MDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEz
MDA3Ni4NCnJxX2dldF9pcnFfZGF0YSsweDUvMHg2DQ0KWyAgICAwLjM4NDAyM10gIFs8YzEwNzhl
MzQ+XSA/IGgoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAw
MDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3
Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAw
YzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0K
KFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAx
MDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVO
KSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAw
IGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRy
YXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJv
bSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMu
YzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4
MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1
ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAw
MDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpk
MCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAw
MDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERv
bWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUz
MDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWlu
IGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2
IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0
ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8g
MHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0
ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAw
MDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBX
Uk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAw
MDAwMDEzMDA3Ni4NCmFuZGxlX3BlcmNwdV9pcnEoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWlu
IGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2
IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0
ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8g
MHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0
ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAw
MDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBX
Uk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAw
MDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNS
IDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAw
MTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAw
MDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAw
NzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAw
MGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4N
CihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAw
...0000 from 0x0000000000530076 to 0x0000000000130076.
(XEN) traps.c:2584:d0 Domain attempted WRMSR 00000000c0010000 from 0x0000000000530076 to 0x0000000000130076.
    [the hypervisor message above repeats several hundred more times in this portion of the log, interleaved with the dom0 kernel backtrace fragments reassembled below]
+0x29/0x37
[    0.384023]  [<c11c258d>] ? __xen_evtchn_do_upcall+0x126/0x1ad
[    0.384023]  [<c11c37d0>] ? xen_evtchn_do_upcall+0x18/0x26
[    0.384023]  [<c12c4697>] ? xen_do_upcall+0x7/0xc
[    0.384023]  [<c10023a7>] ? hypercall_page+0x3a7/0x1000
[    0.384023]  [<c100603a>] ? xen_safe_halt+0xf/0x19
[    0.384023]  [<c10105b4>] ? default_idle+0x52/0x87
(XEN) traps.c:2584:d0 Domain attempted WRMSR 00000000c0010000 from 0x
MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1
ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAw
MDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpk
MCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAw
MDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERv
bWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUz
MDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWlu
IGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2
IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0
ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8g
MHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0
ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAw
MDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBX
Uk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAw
MDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNS
IDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAw
MTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAw
MDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAw
NzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAw
MGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4N
CihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAw
MTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhF
TikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAw
MCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0
cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZy
b20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBz
LmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAw
eDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoy
NTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAw
MDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6
ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAw
MDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBE
b21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1
MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFp
biBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3
NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0
dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRv
IDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1w
dGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgw
MDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQg
V1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAw
MDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1T
UiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAw
MDEzMDA3Ni4NClsgICAgMC4zODQwMjNdICAoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0
dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRv
IDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1w
dGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgw
MDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQg
V1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAw
MDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1T
UiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAw
MDEzMDA3Ni4NCls8YzEwMGFhNDc+XSA/IGMoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0
dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRv
IDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1w
dGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgw
MDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQg
V1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAw
MDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1T
UiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAw
MDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAw
MDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMw
MDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAw
MDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYu
DQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMw
MDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihY
RU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAw
MDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikg
dHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBm
cm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFw
cy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20g
MHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6
MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAw
MDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0
OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAw
MDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAg
RG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAw
NTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21h
aW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAw
NzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBh
dHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0
byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVt
cHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4
MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVk
IFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAw
MDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JN
U1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAw
MDAxMzAwNzYuDQpwdV9pZGxlKzB4OTUvMHhhKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBh
dHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0
byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVt
cHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4
MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVk
IFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAw
MDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JN
U1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAw
MDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAw
MDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEz
MDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAw
MDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2
Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBj
MDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQoo
WEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEw
MDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4p
IHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAg
ZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJh
cHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9t
IDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5j
OjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgw
MDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4
NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAw
MDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KZg0NClsgICAgMC4zODQwMjNdIC0t
LVsgZW5kIHRyYWNlIGE3OTE5ZTdmMTdjMGE3MjggXShYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21h
aW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAw
NzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBh
dHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0
byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVt
cHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4
MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVk
IFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAw
MDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JN
U1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAw
MDAxMzAwNzYuDQotLS0NDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBX
Uk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAw
MDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNS
IDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAw
MTMwMDc2Lg0KWyAgICAzLjQxMjIxMl0gaW5zdGFsbGluZyBYZW4gdGkoWEVOKSB0cmFwcy5jOjI1
ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAw
MDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpk
MCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAw
MDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KbWVyIGZvciBDUFUgMw0NCihYRU4pIHRy
YXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJv
bSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMu
YzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4
MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1
ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAw
MDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpk
MCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAw
MDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERv
bWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUz
MDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWlu
IGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2
IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0
ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8g
MHgwMDAwMDAwMDAwMTMwMDc2Lg0KWyAgICAwLjAwODAwMF0gSW5pdGlhbGl6aW5nIENQVSMzDQ0K
WyAgICAzLjQyMDIxM10gTk1JIHdhdGNoZG9nIGVuYWJsZWQsIHRha2VzIG9uZSBoKFhFTikgdHJh
cHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9t
IDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5j
OjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgw
MDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCnctcG11IGNvdW50ZXIuDQ0K
KFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAx
MDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVO
KSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAw
IGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRy
YXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJv
bSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMu
YzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4
MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1
ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAw
MDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpk
MCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAw
MDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERv
bWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwNCBmcm9tIDB4MDAwMGZmZmZkZDJj
ZmEzZSB0byAweDAwMDBmZmZiNWFkNTFlYzAuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWlu
IGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2
IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NClsgICAgMy40MjQyMTNdIC0oWEVOKSB0cmFwcy5jOjI1
ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAw
MDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpk
MCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAw
MDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERv
bWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUz
MDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQotLS0tLS0tLS0tLVsgY3V0KFhFTikgdHJhcHMu
YzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4
MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQogaGVyZSBdLS0tLS0tLS0t
KFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAx
MDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQotLS0N
DQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMw
MDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NClsg
ICAgMy40MjQyMTNdIFcoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1T
UiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAw
MDEzMDA3Ni4NCkFSTklORzogYXQgL2J1aWwoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0
dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRv
IDB4MDAwMDAwMDAwMDEzMDA3Ni4NCmQvYnVpbGRkLWxpbnV4XzMoWEVOKSB0cmFwcy5jOjI1ODQ6
ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAw
MDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCi4yLjIxLTMtaTM4Ni12RW8oWEVOKSB0
cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZy
b20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCmhuNC9saW51eC0z
LjIuMjEoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAw
MGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4N
CihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAw
MTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhF
TikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAw
MCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQovYXJjaC94
ODYveGVuL2VuKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAw
MDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAw
NzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAw
MGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4N
CihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAw
MTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KbGln
aHRlbi5jOjczOCBwZShYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNS
IDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAw
MTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAw
MDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAw
NzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAw
MGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4N
CnJmX2V2ZW50c19sYXBpY18oWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBX
Uk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAw
MDAwMDEzMDA3Ni4NCmluaXQrMHgyOC8weDI5KCkoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWlu
IGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2
IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0
ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8g
MHgwMDAwMDAwMDAwMTMwMDc2Lg0KDQ0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRl
bXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAw
eDAwMDAwMDAwMDAxMzAwNzYuDQpbICAgIDMuNDI0MjEzXSBIKFhFTikgdHJhcHMuYzoyNTg0OmQw
IERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAw
MDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9t
YWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMw
MDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCmFyZHdhcmUgbmFtZTogZW0oWEVOKSB0cmFwcy5j
OjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgw
MDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCnB0eQ0NCihYRU4pIHRyYXBz
LmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAw
eDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KWyAgICAzLjQyNDIxM10g
TShYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAw
MTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhF
TikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAw
MCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQpvZHVsZXMg
bGlua2VkIGluKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAw
MDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAw
NzYuDQo6KFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAw
MDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYu
DQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMw
MDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihY
RU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAw
MDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikg
dHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBm
cm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQoNDQooWEVOKSB0
cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZy
b20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBz
LmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAw
eDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KWyAgICAzLjQyNDIxM10g
UChYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAw
MTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KaWQ6
IDAsIGNvbW06IHN3YShYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNS
IDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAw
MTMwMDc2Lg0KcHBlci8zIFRhaW50ZWQ6IChYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0
ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8g
MHgwMDAwMDAwMDAwMTMwMDc2Lg0KRyAgICAgICAgVyAgICAzLihYRU4pIHRyYXBzLmM6MjU4NDpk
MCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAw
MDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERv
bWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUz
MDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQoyLjAtMy02ODYtcGFlICMxKFhFTikgdHJhcHMu
YzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4
MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQoNDQooWEVOKSB0cmFwcy5j
OjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgw
MDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NClsgICAgMy40MjQyMTNdIEMo
WEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEw
MDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4p
IHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAg
ZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KYWxsIFRyYWNl
Og0NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAw
YzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0K
WyAgICAzLjQyNDIxM10gIChYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdS
TVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAw
MDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1Ig
MDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAx
MzAwNzYuDQpbPGMxMDM3ZmNjPl0gPyB3KFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRl
bXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAw
eDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRl
ZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAw
MDAwMDAwMDEzMDA3Ni4NCmFybl9zbG93cGF0aF9jb20oWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9t
YWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMw
MDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4g
YXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYg
dG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRl
bXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAw
eDAwMDAwMDAwMDAxMzAwNzYuDQptb24rMHg2OC8weDc5DQ0KKFhFTikgdHJhcHMuYzoyNTg0OmQw
IERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAw
MDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9t
YWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMw
MDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NClsgICAgMy40MjQyMTNdICAoWEVOKSB0cmFwcy5j
OjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgw
MDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4
NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAw
MDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQw
IERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAw
MDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9t
YWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMw
MDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4g
YXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYg
dG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KWzxjMTAxNTBkMj5dID8gcChYRU4pIHRyYXBzLmM6MjU4
NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAw
MDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQw
IERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAw
MDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQplcmZfZXZlbnRzX2xhcGljKFhFTikgdHJh
cHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9t
IDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQpfaW5pdCsweDI4LzB4
MjkNKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBj
MDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQoo
WEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEw
MDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCg0KKFhF
TikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAw
MCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0
cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZy
b20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBz
LmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAw
eDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoy
NTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAw
MDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6
ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAw
MDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NClsgICAgMy40MjQyMTNdICAoWEVOKSB0
cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZy
b20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBz
LmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAw
eDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoy
NTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAw
MDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6
ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAw
MDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCls8YzEwMzdmZWE+XSA/IHcoWEVOKSB0
cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZy
b20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCmFybl9zbG93cGF0
aF9udWwoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAw
MGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4N
CihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAw
MTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhF
TikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAw
MCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0
cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZy
b20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBz
LmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAw
eDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoy
NTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAw
MDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6
ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAw
MDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCmwrMHhkLzB4MTANDQooWEVOKSB0cmFw
cy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20g
MHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6
MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAw
MDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0
OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAw
MDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAg
RG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAw
NTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NClsgICAgMy40MjQyMTNdICAoWEVOKSB0cmFw
cy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20g
MHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCls8YzEwMTUwZDI+XSA/
IHAoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMw
MDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihY
RU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAw
MDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikg
dHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBm
cm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFw
cy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20g
MHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6
MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAw
MDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0
OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAw
MDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAg
RG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAw
NTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21h
aW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAw
NzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KZXJmX2V2ZW50c19sYXBpYyhYRU4pIHRyYXBzLmM6
MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAw
MDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0
OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAw
MDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAg
RG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAw
NTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCl9pbml0KzB4MjgvMHgyOQ0oWEVOKSB0cmFw
(XEN) traps.c:2584:d0 Domain attempted WRMSR 00000000c0010000 from 0x0000000000530076 to 0x0000000000130076.
[... the (XEN) message above is repeated many times, interleaved with the dom0 backtrace below ...]
[    3.424213]  [<c1015274>] ? x86_pmu_enable+0x1a1/0x227
[    3.424213]  [<c108ff26>] ? perf_pmu_enable+0x1a/0x1b
[    3.424213]  [<c1014031>] ? x86_pmu_commit_txn+0x6b/0x78
[    3.424213]  [<c1050f9d>] ? arch_local_irq_restore+0x6/0x7
[    3.424213]  [<c10512bb>] ? local_clock+0x23/0x2c
[    3.424213]  [<c1090a0e>] ? ctx_sched_in+0x2d/0x135
[    3.424213]  [<c1015288>] ? x86_pmu_enable+0x1b5/0x227
[    3.424213]  [<c102494c>] ? pvclock_clocksource_read+0xc5/0xf7
[    3.424213]  [<c102494c>] ? pvclock_clocksource_read+0xc5/0xf7
[    3.424213]  [<c100f7df>] ? sched_clock+0x9/0xd
[    3.424213]  [<c1050fd6>] ? sched_clock_local+0x10/0x14b
[    3.424213]  [<c1090841>] ? event_sched_in+0x73/0x108
[    3.424213]  [<c1090943>] ? group_sched_in+0x6d/0x10b
[    3.424213]  [<c1050f9d>] ? arch_local_irq_restore+0x6/0x7
[    3.424213]  [<c10512bb>] ? local_clock+0x23/0x2c
[    3.424213]  [<c1091185>] ? __perf_event_enable+0x107/0x144
[    3.424213]  [<c108e497>] ? remote_function+0x2e/0x33
[    3.424213]  [<c105cfad>] ? generic_smp_call_function_single_interrupt+0x97/0xb2
[    3.424213]  [<c100a75c>] ? xen_call_function_single_interrupt+0xa/0x1c
[    3.424213]  [<c1076fe4>] ? handle_irq_event_percpu+0x47/0x158
[    3.424213]  [<c1078811>] ? irq_get_irq_data+0x5/0x6
[    3.424213]  [<c1078e34>] ? handle_percpu_irq+0x29/0x37
IHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAg
ZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJh
cHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9t
IDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQpbICAgIDMuNDI0MjEz
XSAgKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBj
MDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQpb
PGMxMWMyNThkPl0gPyBfKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JN
U1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAw
MDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAw
MDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEz
MDA3Ni4NCl94ZW5fZXZ0Y2huX2RvX3UoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVt
cHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4
MDAwMDAwMDAwMDEzMDA3Ni4NCnBjYWxsKzB4MTI2LzB4MWEoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAg
RG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAw
NTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21h
aW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAw
NzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KZA0NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21h
aW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAw
NzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBh
dHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0
byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVt
cHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4
MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVk
IFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAw
MDAwMDAwMTMwMDc2Lg0KWyAgICAzLjQyNDIxM10gIChYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21h
aW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAw
NzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBh
dHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0
byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVt
cHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4
MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVk
IFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAw
MDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JN
U1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAw
MDAxMzAwNzYuDQpbPGMxMWMzN2QwPl0gPyB4KFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBh
dHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0
byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVt
cHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4
MDAwMDAwMDAwMDEzMDA3Ni4NCmVuX2V2dGNobl9kb191cGMoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAg
RG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAw
NTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21h
aW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAw
NzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBh
dHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0
byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVt
cHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4
MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVk
IFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAw
MDAwMDAwMTMwMDc2Lg0KYWxsKzB4MTgvMHgyNg0NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21h
aW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAw
NzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KWyAgICAzLjQyNDIxM10gIChYRU4pIHRyYXBzLmM6
MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAw
MDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KWzxjMTJjNDY5Nz5dID8geChY
RU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAw
MDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KZW5fZG9f
dXBjYWxsKzB4NyhYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAw
MDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMw
MDc2Lg0KLzB4Yw0NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNS
IDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAw
MTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAw
MDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAw
NzYuDQpbICAgIDMuNDI0MjEzXSAgKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0
ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAw
MDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBX
Uk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAw
MDAwMDEzMDA3Ni4NCls8YzEwMDIzYTc+XSA/IGgoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWlu
IGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2
IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCnlwZXJjYWxsX3BhZ2UrMHgoWEVOKSB0cmFwcy5jOjI1
ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAw
MDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCjNhNy8weDEwMDANDQooWEVOKSB0
cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZy
b20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBz
LmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAw
eDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoy
NTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAw
MDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6
ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAw
MDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NClsgICAgMy40MjQyMTNdICAoWEVOKSB0
cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZy
b20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBz
LmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAw
eDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KWzxjMTAwNjAzYT5dID8g
eChYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAw
MTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhF
TikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAw
MCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQplbl9zYWZl
X2hhbHQrMHhmKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAw
MDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAw
NzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAw
MGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4N
Ci8weDE5DQ0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAw
MDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAw
NzYuDQpbICAgIDMuNDI0MjEzXSAgKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0
ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAw
MDAwMDAwMDAxMzAwNzYuDQpbPGMxMDEwNWI0Pl0gPyBkKFhFTikgdHJhcHMuYzoyNTg0OmQwIERv
bWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUz
MDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWlu
IGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2
IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0
ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8g
MHgwMDAwMDAwMDAwMTMwMDc2Lg0KZWZhdWx0X2lkbGUrMHg1MihYRU4pIHRyYXBzLmM6MjU4NDpk
MCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAw
MDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERv
bWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUz
MDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQovMHg4Nw0NCihYRU4pIHRyYXBzLmM6MjU4NDpk
MCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAw
MDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERv
bWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUz
MDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWlu
IGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2
IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0
ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8g
MHgwMDAwMDAwMDAwMTMwMDc2Lg0KWyAgICAzLjQyNDIxM10gIFs8YzEwMGFhNDc+XSA/IGNwdV9p
ZGxlKzB4OTUvMHhhKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1Ig
MDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAx
MzAwNzYuDQpmDQ0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1Ig
MDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAx
MzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAw
MDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3
Ni4NClsgICAgMy40MjQyMTNdIC0oWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRl
ZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAw
MDAwMDAwMDEzMDA3Ni4NCi0tWyBlbmQgdHJhY2UgYTcoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9t
YWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMw
MDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4g
YXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYg
dG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRl
bXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAw
eDAwMDAwMDAwMDAxMzAwNzYuDQo5MTllN2YxN2MwYTcyOSBdKFhFTikgdHJhcHMuYzoyNTg0OmQw
IERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAw
MDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQotLS0NDQooWEVOKSB0cmFwcy5jOjI1ODQ6
ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAw
MDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBE
b21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1
MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFp
biBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3
NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0
dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRv
IDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1w
dGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgw
MDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQg
V1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAw
MDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1T
UiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAw
MDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAw
MDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMw
MDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAw
MDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYu
DQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMw
MDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NClsg
ICAgMy45NzIyNDddIGkoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1T
UiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAw
MDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAw
MDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMw
MDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAw
MDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYu
DQpuc3RhbGxpbmcgWGVuIHRpbWVyIGZvciBDUFUgNA0NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBE
b21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1
MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFp
biBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3
NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0
dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRv
IDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1w
dGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgw
MDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQg
V1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAw
MDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1T
UiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAw
MDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAw
MDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMw
MDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAw
MDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYu
DQpbICAgIDAuMDA4MDAwXSBJbml0aWFsaXppbmcgQ1BVIzQNDQpbICAgIDMuOTgwMjQ4XSBOTUkg
d2F0Y2hkb2cgZW5hYihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNS
IDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAw
MTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAw
MDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAw
NzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAw
MGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4N
CmxlZCwgdGFrZXMgb25lIGh3LXBtdSBjb3VudGVyLg0NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBE
b21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1
MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFp
biBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3
NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0
dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDA0IGZyb20gMHgwMDAwZmZmZmU0YWE1MmYxIHRv
IDB4MDAwMGZmZmI1YWQ1MWVjMC4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1w
dGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgw
MDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQg
V1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAw
MDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1T
UiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAw
MDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAw
MDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMw
MDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAw
MDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYu
DQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMw
MDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihY
RU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAw
MDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikg
dHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBm
cm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFw
cy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20g
MHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6
MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAw
MDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0
OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAw
MDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAg
RG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAw
NTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NClsgICAgMy45ODQyNDhdIC0oWEVOKSB0cmFw
cy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20g
MHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6
MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAw
MDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0
OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAw
MDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAg
RG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAw
NTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCi0tLS0tLS0tLS0tWyBjdXQoWEVOKSB0cmFw
cy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20g
MHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6
MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAw
MDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0
OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAw
MDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAg
RG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAw
NTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21h
aW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAw
NzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBh
dHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0
byAweDAwMDAwMDAwMDAxMzAwNzYuDQogaGVyZSBdLS0tLS0tLS0tKFhFTikgdHJhcHMuYzoyNTg0
OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAw
MDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQotLS0NDQooWEVOKSB0cmFwcy5jOjI1
ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAw
MDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpk
MCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAw
MDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERv
bWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUz
MDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQpbICAgIDMuOTg0MjQ4XSBXKFhFTikgdHJhcHMu
YzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4
MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1
ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAw
MDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpk
MCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAw
MDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERv
bWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUz
MDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQpBUk5JTkc6IGF0IC9idWlsKFhFTikgdHJhcHMu
YzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4
MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1
ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAw
MDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpk
MCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAw
MDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERv
bWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUz
MDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWlu
IGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2
IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0
ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8g
MHgwMDAwMDAwMDAwMTMwMDc2Lg0KZC9idWlsZGQtbGludXhfMyhYRU4pIHRyYXBzLmM6MjU4NDpk
MCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAw
MDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERv
bWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUz
MDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWlu
IGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2
IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0
ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8g
MHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0
ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAw
MDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBX
Uk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAw
MDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNS
IDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAw
MTMwMDc2Lg0KLjIuMjEtMy1pMzg2LXZFbyhYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0
ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8g
MHgwMDAwMDAwMDAwMTMwMDc2Lg0KaG40L2xpbnV4LTMuMi4yMShYRU4pIHRyYXBzLmM6MjU4NDpk
MCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAw
MDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERv
bWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUz
MDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWlu
IGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2
IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCi9hcmNoL3g4Ni94ZW4vZW4oWEVOKSB0cmFwcy5jOjI1
ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAw
MDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpk
MCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAw
MDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERv
bWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUz
MDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWlu
IGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2
IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0
ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8g
MHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0
ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAw
MDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBX
Uk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAw
MDAwMDEzMDA3Ni4NCmxpZ2h0ZW4uYzo3MzggcGUoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWlu
IGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2
IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0
ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8g
MHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0
ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAw
MDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBX
Uk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAw
MDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNS
IDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAw
MTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAw
MDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAw
NzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAw
MGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4N
CihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAw
MTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhF
TikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAw
MCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0
cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZy
b20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBz
LmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAw
eDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoy
NTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAw
MDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6
ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAw
MDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCnJmX2V2ZW50c19sYXBpY18oWEVOKSB0
cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZy
b20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCmluaXQrMHgyOC8w
eDI5KCkoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAw
MGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4N
CihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAw
MTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhF
TikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAw
MCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0
cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZy
b20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBz
LmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAw
eDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoy
NTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAw
MDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQoNDQooWEVOKSB0cmFwcy5jOjI1
ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAw
MDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NClsgICAgMy45ODQyNDhdIEgoWEVO
KSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAw
IGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRy
YXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJv
bSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMu
YzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4
MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQphcmR3YXJlIG5hbWU6IGVt
KFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAx
MDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVO
KSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAw
IGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRy
YXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJv
bSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMu
YzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4
MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1
ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAw
MDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpk
MCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAw
MDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERv
bWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUz
MDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWlu
IGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2
IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0
ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8g
MHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0
ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAw
MDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBX
Uk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAw
MDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNS
IDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAw
MTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAw
MDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAw
76.
(XEN) traps.c:2584:d0 Domain attempted WRMSR 00000000c0010000 from 0x0000000000530076 to 0x0000000000130076.
(XEN) traps.c:2584:d0 Domain attempted WRMSR 00000000c0010000 from 0x0000000000530076 to 0x0000000000130076.
[the (XEN) line above repeats continuously for the rest of the log, interleaved with the dom0 kernel trace below]
[    3.984248] Modules linked in:
[    3.984248] Pid: 0, comm: swapper/4 Tainted: G        W    3.2.0-3-686-pae #1
[    3.984248] Call Trace:
[    3.984248]  [<c1037fcc>] ? warn_slowpath_common+0x68/0x79
[    3.984248]  [<c10150d2>] ? perf_events_lapic_init+0x28/0x29
[    3.984248]  [<c1037fea>] ? warn_slowpath_null+0xd/0x10
[    3.984248]  [<c10150d2>] ? perf_events_lapic_init+0x28/0x29
[    3.984248]  [<c1015274>] ? x86_pmu_enable+0x1a1/0x227
[    3.984248]  [<c108ff26>] ? perf_pmu_enable+0x1a/0x1b
[    3.984248]  [<c1014031>] ? x86_pmu_commit_txn+0x6b/0x78
MDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAw
MDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2
Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBj
MDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQoo
WEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEw
MDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4p
IHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAg
ZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJh
cHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9t
IDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5j
OjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgw
MDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4
NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAw
MDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KWzxjMTA1MGY5ZD5dID8gYShYRU4p
IHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAg
ZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJh
cHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9t
IDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQpyY2hfbG9jYWxfaXJx
X3JlKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBj
MDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQpz
dG9yZSsweDYvMHg3DQ0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JN
U1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAw
MDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAw
MDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEz
MDA3Ni4NClsgICAgMy45ODQyNDhdICAoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVt
cHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4
MDAwMDAwMDAwMDEzMDA3Ni4NCls8YzEwNTEyYmI+XSA/IGwoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAg
RG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAw
NTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCm9jYWxfY2xvY2srMHgyMy8oWEVOKSB0cmFw
cy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20g
MHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6
MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAw
MDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KMHgyYw0NCihYRU4pIHRyYXBz
LmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAw
eDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoy
NTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAw
MDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQpbICAgIDMuOTg0MjQ4XSAgKFhF
TikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAw
MCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0
cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZy
b20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCls8YzEwOTBhMGU+
XSA/IGMoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAw
MGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4N
CihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAw
MTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhF
TikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAw
MCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0
cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZy
b20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBz
LmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAw
eDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoy
NTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAw
MDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6
ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAw
MDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCnR4X3NjaGVkX2luKzB4MmQoWEVOKSB0
cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZy
b20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBz
LmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAw
eDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoy
NTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAw
MDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQovMHgxMzUNDQooWEVOKSB0cmFw
cy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20g
MHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NClsgICAgMy45ODQyNDhd
ICAoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMw
MDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCls8
YzEwMTUyODg+XSA/IHgoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1T
UiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAw
MDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAw
MDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMw
MDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAw
MDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYu
DQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMw
MDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCjg2
X3BtdV9lbmFibGUrMHgoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1T
UiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAw
MDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAw
MDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMw
MDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAw
MDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYu
DQoxYjUvMHgyMjcNDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1T
UiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAw
MDEzMDA3Ni4NClsgICAgMy45ODQyNDhdICAoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0
dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRv
IDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1w
dGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgw
MDAwMDAwMDAwMTMwMDc2Lg0KWzxjMTAyNDk0Yz5dID8gcChYRU4pIHRyYXBzLmM6MjU4NDpkMCBE
b21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1
MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFp
biBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3
NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQp2Y2xvY2tfY2xvY2tzb3VyKFhFTikgdHJhcHMuYzoy
NTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAw
MDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6
ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAw
MDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCmNlX3JlYWQrMHhjNS8weGYoWEVOKSB0
cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZy
b20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBz
LmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAw
eDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoy
NTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAw
MDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6
ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAw
MDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBE
b21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1
MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFp
biBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3
NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQo3DQ0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFp
biBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3
NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0
dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRv
IDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1w
dGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgw
MDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQg
V1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAw
MDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1T
UiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAw
MDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAw
MDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMw
MDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAw
MDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYu
DQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMw
MDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihY
RU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAw
MDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikg
dHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBm
cm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFw
cy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20g
MHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6
MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAw
MDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0
OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAw
MDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQpbICAgIDMuOTg0MjQ4XSAgKFhFTikg
dHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBm
cm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFw
cy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20g
MHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6
MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAw
MDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0
OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAw
MDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAg
RG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAw
NTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCls8YzEwMjQ5NGM+XSA/IHAoWEVOKSB0cmFw
cy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20g
MHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6
MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAw
MDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0
OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAw
MDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAg
RG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAw
NTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21h
aW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAw
NzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBh
dHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0
byAweDAwMDAwMDAwMDAxMzAwNzYuDQp2Y2xvY2tfY2xvY2tzb3VyKFhFTikgdHJhcHMuYzoyNTg0
OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAw
MDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAg
RG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAw
NTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21h
aW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAw
NzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBh
dHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0
byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVt
cHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4
MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVk
IFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAw
MDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JN
U1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAw
MDAxMzAwNzYuDQpjZV9yZWFkKzB4YzUvMHhmKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBh
dHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0
byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVt
cHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4
MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVk
IFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAw
MDAwMDAwMTMwMDc2Lg0KNw0NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVk
IFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAw
MDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JN
U1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAw
MDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAw
MDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEz
MDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAw
MDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2
Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBj
MDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQoo
WEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEw
MDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4p
IHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAg
ZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJh
cHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9t
IDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5j
OjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgw
MDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4
NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAw
MDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KWyAgICAzLjk4NDI0OF0gIChYRU4p
IHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAg
ZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJh
cHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9t
IDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5j
OjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgw
MDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4
NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAw
MDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQw
IERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAw
MDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQpbPGMxMDBmN2RmPl0gPyBzKFhFTikgdHJh
cHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9t
IDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5j
OjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgw
MDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4
NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAw
MDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KY2hlZF9jbG9jaysweDkvMChYRU4p
IHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAg
ZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJh
cHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9t
IDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5j
OjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgw
MDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4
NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAw
MDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQw
IERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAw
MDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQp4ZA0NCihYRU4pIHRyYXBzLmM6MjU4NDpk
MCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAw
MDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERv
bWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUz
MDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWlu
IGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2
IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0
ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8g
MHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0
ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAw
MDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBX
Uk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAw
MDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNS
IDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAw
MTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAw
MDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAw
NzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAw
MGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4N
CihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAw
MTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhF
TikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAw
MCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0
cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZy
b20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBz
LmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAw
eDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoy
NTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAw
MDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQpbICAgIDMuOTg0MjQ4XSAgKFhF
TikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAw
MCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQpbPGMxMDUw
ZmQ2Pl0gPyBzKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAw
MDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAw
NzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAw
MGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4N
CihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAw
MTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhF
TikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAw
MCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0
cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZy
b20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCmNoZWRfY2xvY2tf
bG9jYWwoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAw
MGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4N
CihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAw
MTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhF
TikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAw
MCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0
cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZy
b20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBz
LmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAw
eDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoy
NTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAw
MDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6
ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAw
MDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBE
b21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1
MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFp
biBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3
NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0
dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRv
IDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1w
dGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgw
MDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQg
V1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAw
MDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1T
UiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAw
MDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAw
MDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMw
MDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAw
MDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYu
DQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMw
MDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihY
RU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAw
MDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikg
dHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBm
cm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFw
cy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20g
MHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6
MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAw
MDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0
OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAw
MDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAg
RG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAw
NTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21h
aW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAw
NzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBh
dHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0
byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVt
cHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4
MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVk
IFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAw
MDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JN
U1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAw
MDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAw
MDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEz
MDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAw
MDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2
Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBj
MDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQor
MHgxMC8weDE0Yg0NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNS
IDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAw
MTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAw
MDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAw
NzYuDQpbICAgIDMuOTg0MjQ4XSAgKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0
ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAw
MDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBX
Uk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAw
MDAwMDEzMDA3Ni4NCls8YzEwOTA4NDE+XSA/IGUoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWlu
IGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2
IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0
ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8g
MHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0
ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAw
MDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBX
Uk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAw
MDAwMDEzMDA3Ni4NCnZlbnRfc2NoZWRfaW4rMHgoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWlu
IGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2
IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCjczLzB4MTA4DQ0KKFhFTikgdHJhcHMuYzoyNTg0OmQw
IERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAw
MDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9t
YWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMw
MDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4g
YXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYg
dG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRl
bXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAw
eDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRl
ZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAw
MDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdS
TVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAw
MDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1Ig
MDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAx
MzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAw
MDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3
Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAw
YzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0K
KFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAx
MDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQpbICAg
IDMuOTg0MjQ4XSAgKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1Ig
MDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAx
MzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAw
MDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3
Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAw
YzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0K
KFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAx
MDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVO
KSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAw
IGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRy
YXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJv
[decoded from the base64 console-log attachment: the Xen line below was emitted continuously, interleaved character-by-character with the dom0 backtrace being printed at the same time; one copy of the repeated line is kept and the backtrace fragments are reassembled]

(XEN) traps.c:2584:d0 Domain attempted WRMSR 00000000c0010000 from 0x0000000000530076 to 0x0000000000130076.
[... the line above repeats between every fragment of the following dom0 backtrace ...]
[<c1090943>] ? group_sched_in+0x6d/0x10b
[    3.984248]  [<c1050f9d>] ? arch_local_irq_restore+0x6/0x7
[    3.984248]  [<c10512bb>] ? local_clock+0x23/0x2c
[    3.984248]  [<c1091185>] ? __perf_event_enable+0x107/0x144
[    3.984248]  [<c108e497>] ? remote_function+0x2e/0x33
[    3.984248]  [<c105cfad>] ? generic_smp_call_function_single_interrupt+0x97/0xb2
[    3.984248]  [<c100a75c>] ? xen_call_function_single_interrupt+0xa/0x1c
[    3.984248]  [<c1076fe4>] ? handle_irq_event_percpu+0x47/0x158
(XEN) *** Serial input -> Xen (type 'CTRL-a' three times to switch input to DOM0)
[    3.984248]  [<c1078811>] ? irq_get_irq_data+0x5/0x6
[    3.984248]  [<c1078e34>] ? handle_percpu_irq+0x29/0x37
[    3.984248]  [<c11c258d>] ? __xen_evtchn_do_u
RU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAw
MDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikg
dHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBm
cm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFw
cy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20g
MHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6
MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAw
MDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0
OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAw
MDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAg
RG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAw
NTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCnBjYWxsKzB4MTI2LzB4MWEoWEVOKSB0cmFw
cy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20g
MHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6
MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAw
MDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgJ1InIHByZXNzZWQg
LT4gcmVib290aW5nIG1hY2hpbmUNCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1w
dGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgw
MDAwMDAwMDAwMTMwMDc2Lg0KCShYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVk
IFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAw
MDAwMDAwMTMwMDc2Lg0K
--047d7b66f1cd72413004c7744dc4
Content-Type: application/octet-stream; name="exile-good.log"
Content-Disposition: attachment; filename="exile-good.log"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_h5z6jr9p1

IF9fICBfXyAgICAgICAgICAgIF8gIF8gICAgX19fXyAgICBfX18gICAgICAgICAgICAgIF9fX18g
IA0KIFwgXC8gL19fXyBfIF9fICAgfCB8fCB8ICB8X19fIFwgIC8gXyBcICAgIF8gX18gX19ffF9f
XyBcIA0KICBcICAvLyBfIFwgJ18gXCAgfCB8fCB8XyAgIF9fKSB8fCB8IHwgfF9ffCAnX18vIF9f
fCBfXykgfA0KICAvICBcICBfXy8gfCB8IHwgfF9fICAgX3wgLyBfXy8gfCB8X3wgfF9ffCB8IHwg
KF9fIC8gX18vIA0KIC9fL1xfXF9fX3xffCB8X3wgICAgfF98KF8pX19fX18oXylfX18vICAgfF98
ICBcX19ffF9fX19ffA0KICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgIA0KKFhFTikgWGVuIHZlcnNpb24gNC4yLjAtcmMyICh4ZW51c2VyQHVr
LnhlbnNvdXJjZS5jb20pIChnY2MgKFVidW50dS9MaW5hcm8gNC42LjMtMXVidW50dTUpIDQuNi4z
KSBUdWUgQXVnIDE0IDE4OjU5OjQ5IEJTVCAyMDEyDQooWEVOKSBMYXRlc3QgQ2hhbmdlU2V0OiBU
dWUgQXVnIDE0IDE4OjQxOjUzIDIwMTIgKzAxMDAgMjU3NTA6ZDhkZjExNTJlYjNiDQooWEVOKSBD
b25zb2xlIG91dHB1dCBpcyBzeW5jaHJvbm91cy4NCihYRU4pIEJvb3Rsb2FkZXI6IEdSVUIgMS45
OS0yMi4xDQooWEVOKSBDb21tYW5kIGxpbmU6IHBsYWNlaG9sZGVyIHdhdGNoZG9nIGNwdWluZm8g
Y29tMT0xMTUyMDAsOG4xIGNvbnNvbGU9Y29tMSx0dHkgc3luY19jb25zb2xlIGNvbnNvbGVfdG9f
cmluZw0KKFhFTikgVmlkZW8gaW5mb3JtYXRpb246DQooWEVOKSAgVkdBIGlzIHRleHQgbW9kZSA4
MHgyNSwgZm9udCA4eDE2DQooWEVOKSAgVkJFL0REQyBtZXRob2RzOiBWMjsgRURJRCB0cmFuc2Zl
ciB0aW1lOiAyIHNlY29uZHMNCihYRU4pIERpc2MgaW5mb3JtYXRpb246DQooWEVOKSAgRm91bmQg
MiBNQlIgc2lnbmF0dXJlcw0KKFhFTikgIEZvdW5kIDIgRUREIGluZm9ybWF0aW9uIHN0cnVjdHVy
ZXMNCihYRU4pIFhlbi1lODIwIFJBTSBtYXA6DQooWEVOKSAgMDAwMDAwMDAwMDAwMDAwMCAtIDAw
MDAwMDAwMDAwOWM0MDAgKHVzYWJsZSkNCihYRU4pICAwMDAwMDAwMDAwMDljNDAwIC0gMDAwMDAw
MDAwMDBhMDAwMCAocmVzZXJ2ZWQpDQooWEVOKSAgMDAwMDAwMDAwMDBjZTAwMCAtIDAwMDAwMDAw
MDAxMDAwMDAgKHJlc2VydmVkKQ0KKFhFTikgIDAwMDAwMDAwMDAxMDAwMDAgLSAwMDAwMDAwMGNm
ZWUwMDAwICh1c2FibGUpDQooWEVOKSAgMDAwMDAwMDBjZmVlMDAwMCAtIDAwMDAwMDAwY2ZlZTUw
MDAgKEFDUEkgZGF0YSkNCihYRU4pICAwMDAwMDAwMGNmZWU1MDAwIC0gMDAwMDAwMDBjZmVmMTAw
MCAoQUNQSSBOVlMpDQooWEVOKSAgMDAwMDAwMDBjZmVmMTAwMCAtIDAwMDAwMDAwZDAwMDAwMDAg
KHJlc2VydmVkKQ0KKFhFTikgIDAwMDAwMDAwZmVjMDAwMDAgLSAwMDAwMDAwMGZlYzAzMDAwIChy
ZXNlcnZlZCkNCihYRU4pICAwMDAwMDAwMGZlZTAwMDAwIC0gMDAwMDAwMDBmZWUwMTAwMCAocmVz
ZXJ2ZWQpDQooWEVOKSAgMDAwMDAwMDBmZmY4MDAwMCAtIDAwMDAwMDAxMDAwMDAwMDAgKHJlc2Vy
dmVkKQ0KKFhFTikgIDAwMDAwMDAxMDAwMDAwMDAgLSAwMDAwMDAwMTMwMDAwMDAwICh1c2FibGUp
DQooWEVOKSBBQ1BJOiBSU0RQIDAwMEY4MDgwLCAwMDI0IChyMiBQVExURCApDQooWEVOKSBBQ1BJ
OiBYU0RUIENGRUUwMUZFLCAwMDVDIChyMSBCUkNNICAgRVhQTE9TTiAgIDYwNDAwMDAgUFRMICAg
MjAwMDAwMSkNCihYRU4pIEFDUEk6IEZBQ1AgQ0ZFRTAyQ0UsIDAwRjQgKHIzIEJSQ00gICBFWFBM
T1NOICAgNjA0MDAwMCBNU0ZUICAyMDAwMDAxKQ0KKFhFTikgQUNQSSBXYXJuaW5nICh0YmZhZHQt
MDQ0NCk6IE9wdGlvbmFsIGZpZWxkICJQbTJDb250cm9sQmxvY2siIGhhcyB6ZXJvIGFkZHJlc3Mg
b3IgbGVuZ3RoOiAwMDAwMDAwMDAwMDAwMDAwL0MgWzIwMDcwMTI2XQ0KKFhFTikgQUNQSTogRFNE
VCBDRkVFMDNDMiwgNDk0OSAocjIgQlJDTSAgIEVYUExPU04gICA2MDQwMDAwIE1TRlQgIDMwMDAw
MDApDQooWEVOKSBBQ1BJOiBGQUNTIENGRUYwRkMwLCAwMDQwDQooWEVOKSBBQ1BJOiBUQ1BBIENG
RUU0RDBCLCAwMDMyIChyMSBCUkNNICAgRVhQTE9TTiAgIDYwNDAwMDAgUFRMICAyMDAwMDAwMSkN
CihYRU4pIEFDUEk6IFNSQVQgQ0ZFRTREM0QsIDAxMjggKHIxIEFNRCAgICBGQU1fRl8xMCAgNjA0
MDAwMCBBTUQgICAgICAgICAxKQ0KKFhFTikgQUNQSTogSFBFVCBDRkVFNEU2NSwgMDAzOCAocjEg
QlJDTSAgIEVYUExPU04gICA2MDQwMDAwIEJSQ00gIDIwMDAwMDEpDQooWEVOKSBBQ1BJOiBTU0RU
IENGRUU0RTlELCAwMDQ5IChyMSBCUkNNICAgUFJUMCAgICAgIDYwNDAwMDAgQlJDTSAgMjAwMDAw
MSkNCihYRU4pIEFDUEk6IFNQQ1IgQ0ZFRTRFRTYsIDAwNTAgKHIxIFBUTFREICAkVUNSVEJMJCAg
NjA0MDAwMCBQVEwgICAgICAgICAxKQ0KKFhFTikgQUNQSTogQVBJQyBDRkVFNEYzNiwgMDBDQSAo
cjEgQlJDTSAgIEVYUExPU04gICA2MDQwMDAwIFBUTCAgIDIwMDAwMDEpDQooWEVOKSBTeXN0ZW0g
UkFNOiA0MDk0TUIgKDQxOTI3NTJrQikNCihYRU4pIFNSQVQ6IFBYTSAwIC0+IEFQSUMgMCAtPiBO
b2RlIDANCihYRU4pIFNSQVQ6IFBYTSAwIC0+IEFQSUMgMSAtPiBOb2RlIDANCihYRU4pIFNSQVQ6
IFBYTSAwIC0+IEFQSUMgMiAtPiBOb2RlIDANCihYRU4pIFNSQVQ6IFBYTSAwIC0+IEFQSUMgMyAt
PiBOb2RlIDANCihYRU4pIFNSQVQ6IFBYTSAxIC0+IEFQSUMgNCAtPiBOb2RlIDENCihYRU4pIFNS
QVQ6IFBYTSAxIC0+IEFQSUMgNSAtPiBOb2RlIDENCihYRU4pIFNSQVQ6IFBYTSAxIC0+IEFQSUMg
NiAtPiBOb2RlIDENCihYRU4pIFNSQVQ6IFBYTSAxIC0+IEFQSUMgNyAtPiBOb2RlIDENCihYRU4p
IFNSQVQ6IE5vZGUgMCBQWE0gMCAwLWEwMDAwDQooWEVOKSBTUkFUOiBOb2RlIDAgUFhNIDAgMTAw
MDAwLWQwMDAwMDAwDQooWEVOKSBTUkFUOiBOb2RlIDAgUFhNIDAgMTAwMDAwMDAwLTEzMDAwMDAw
MA0KKFhFTikgTlVNQTogQWxsb2NhdGVkIG1lbW5vZGVtYXAgZnJvbSAxMmI3ZTAwMDAgLSAxMmI3
ZTIwMDANCihYRU4pIE5VTUE6IFVzaW5nIDggZm9yIHRoZSBoYXNoIHNoaWZ0Lg0KKFhFTikgU1JB
VDogTm9kZSAxIGhhcyBubyBtZW1vcnkuIEJJT1MgQnVnIG9yIG1pcy1jb25maWd1cmVkIGhhcmR3
YXJlPw0KKFhFTikgRG9tYWluIGhlYXAgaW5pdGlhbGlzZWQgRE1BIHdpZHRoIDMwIGJpdHMNCihY
RU4pIGZvdW5kIFNNUCBNUC10YWJsZSBhdCAwMDBmODBiMA0KKFhFTikgRE1JIHByZXNlbnQuDQoo
WEVOKSBVc2luZyBBUElDIGRyaXZlciBkZWZhdWx0DQooWEVOKSBBQ1BJOiBQTS1UaW1lciBJTyBQ
b3J0OiAweDUwOA0KKFhFTikgQUNQSTogQUNQSSBTTEVFUCBJTkZPOiBwbTF4X2NudFs1NDQsNTA0
XSwgcG0xeF9ldnRbNTAwLDU0MF0NCihYRU4pIEFDUEk6ICAgICAgICAgICAgICAgICAgd2FrZXVw
X3ZlY1tjZmVmMGZjY10sIHZlY19zaXplWzIwXQ0KKFhFTikgQUNQSTogTG9jYWwgQVBJQyBhZGRy
ZXNzIDB4ZmVlMDAwMDANCihYRU4pIEFDUEk6IExBUElDIChhY3BpX2lkWzB4MDBdIGxhcGljX2lk
WzB4MDBdIGVuYWJsZWQpDQooWEVOKSBQcm9jZXNzb3IgIzAgMDo0IEFQSUMgdmVyc2lvbiAxNg0K
KFhFTikgQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgwMV0gbGFwaWNfaWRbMHgwMV0gZW5hYmxlZCkN
CihYRU4pIFByb2Nlc3NvciAjMSAwOjQgQVBJQyB2ZXJzaW9uIDE2DQooWEVOKSBBQ1BJOiBMQVBJ
QyAoYWNwaV9pZFsweDAyXSBsYXBpY19pZFsweDAyXSBlbmFibGVkKQ0KKFhFTikgUHJvY2Vzc29y
ICMyIDA6NCBBUElDIHZlcnNpb24gMTYNCihYRU4pIEFDUEk6IExBUElDIChhY3BpX2lkWzB4MDNd
IGxhcGljX2lkWzB4MDNdIGVuYWJsZWQpDQooWEVOKSBQcm9jZXNzb3IgIzMgMDo0IEFQSUMgdmVy
c2lvbiAxNg0KKFhFTikgQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgwNF0gbGFwaWNfaWRbMHgwNF0g
ZW5hYmxlZCkNCihYRU4pIFByb2Nlc3NvciAjNCAwOjQgQVBJQyB2ZXJzaW9uIDE2DQooWEVOKSBB
Q1BJOiBMQVBJQyAoYWNwaV9pZFsweDA1XSBsYXBpY19pZFsweDA1XSBlbmFibGVkKQ0KKFhFTikg
UHJvY2Vzc29yICM1IDA6NCBBUElDIHZlcnNpb24gMTYNCihYRU4pIEFDUEk6IExBUElDIChhY3Bp
X2lkWzB4MDZdIGxhcGljX2lkWzB4MDZdIGVuYWJsZWQpDQooWEVOKSBQcm9jZXNzb3IgIzYgMDo0
IEFQSUMgdmVyc2lvbiAxNg0KKFhFTikgQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgwN10gbGFwaWNf
aWRbMHgwN10gZW5hYmxlZCkNCihYRU4pIFByb2Nlc3NvciAjNyAwOjQgQVBJQyB2ZXJzaW9uIDE2
DQooWEVOKSBBQ1BJOiBMQVBJQ19OTUkgKGFjcGlfaWRbMHgwMF0gaGlnaCBlZGdlIGxpbnRbMHgx
XSkNCihYRU4pIEFDUEk6IExBUElDX05NSSAoYWNwaV9pZFsweDAxXSBoaWdoIGVkZ2UgbGludFsw
eDFdKQ0KKFhFTikgQUNQSTogTEFQSUNfTk1JIChhY3BpX2lkWzB4MDJdIGhpZ2ggZWRnZSBsaW50
WzB4MV0pDQooWEVOKSBBQ1BJOiBMQVBJQ19OTUkgKGFjcGlfaWRbMHgwM10gaGlnaCBlZGdlIGxp
bnRbMHgxXSkNCihYRU4pIEFDUEk6IExBUElDX05NSSAoYWNwaV9pZFsweDA0XSBoaWdoIGVkZ2Ug
bGludFsweDFdKQ0KKFhFTikgQUNQSTogTEFQSUNfTk1JIChhY3BpX2lkWzB4MDVdIGhpZ2ggZWRn
ZSBsaW50WzB4MV0pDQooWEVOKSBBQ1BJOiBMQVBJQ19OTUkgKGFjcGlfaWRbMHgwNl0gaGlnaCBl
ZGdlIGxpbnRbMHgxXSkNCihYRU4pIEFDUEk6IExBUElDX05NSSAoYWNwaV9pZFsweDA3XSBoaWdo
IGVkZ2UgbGludFsweDFdKQ0KKFhFTikgQUNQSTogSU9BUElDIChpZFsweDA4XSBhZGRyZXNzWzB4
ZmVjMDAwMDBdIGdzaV9iYXNlWzBdKQ0KKFhFTikgSU9BUElDWzBdOiBhcGljX2lkIDgsIHZlcnNp
b24gMTcsIGFkZHJlc3MgMHhmZWMwMDAwMCwgR1NJIDAtMTUNCihYRU4pIEFDUEk6IElPQVBJQyAo
aWRbMHgwOV0gYWRkcmVzc1sweGZlYzAxMDAwXSBnc2lfYmFzZVsxNl0pDQooWEVOKSBJT0FQSUNb
MV06IGFwaWNfaWQgOSwgdmVyc2lvbiAxNywgYWRkcmVzcyAweGZlYzAxMDAwLCBHU0kgMTYtMzEN
CihYRU4pIEFDUEk6IElPQVBJQyAoaWRbMHgwYV0gYWRkcmVzc1sweGZlYzAyMDAwXSBnc2lfYmFz
ZVszMl0pDQooWEVOKSBJT0FQSUNbMl06IGFwaWNfaWQgMTAsIHZlcnNpb24gMTcsIGFkZHJlc3Mg
MHhmZWMwMjAwMCwgR1NJIDMyLTQ3DQooWEVOKSBBQ1BJOiBJTlRfU1JDX09WUiAoYnVzIDAgYnVz
X2lycSAwIGdsb2JhbF9pcnEgMiBoaWdoIGVkZ2UpDQooWEVOKSBBQ1BJOiBJUlEwIHVzZWQgYnkg
b3ZlcnJpZGUuDQooWEVOKSBBQ1BJOiBJUlEyIHVzZWQgYnkgb3ZlcnJpZGUuDQooWEVOKSBFbmFi
bGluZyBBUElDIG1vZGU6ICBGbGF0LiAgVXNpbmcgMyBJL08gQVBJQ3MNCihYRU4pIEFDUEk6IEhQ
RVQgaWQ6IDB4MTE2NmEyMDEgYmFzZTogMHhmZWQwMDAwMA0KKFhFTikgVGFibGUgaXMgbm90IGZv
dW5kIQ0KKFhFTikgVXNpbmcgQUNQSSAoTUFEVCkgZm9yIFNNUCBjb25maWd1cmF0aW9uIGluZm9y
bWF0aW9uDQooWEVOKSBTTVA6IEFsbG93aW5nIDggQ1BVcyAoMCBob3RwbHVnIENQVXMpDQooWEVO
KSBJUlEgbGltaXRzOiA0OCBHU0ksIDE1MDQgTVNJL01TSS1YDQooWEVOKSBVc2luZyBzY2hlZHVs
ZXI6IFNNUCBDcmVkaXQgU2NoZWR1bGVyIChjcmVkaXQpDQooWEVOKSBJbml0aWFsaXppbmcgQ1BV
IzANCihYRU4pIERldGVjdGVkIDE5OTUuMDIzIE1IeiBwcm9jZXNzb3IuDQooWEVOKSBJbml0aW5n
IG1lbW9yeSBzaGFyaW5nLg0KKFhFTikgQ1BVOiBMMSBJIGNhY2hlIDY0SyAoNjQgYnl0ZXMvbGlu
ZSksIEQgY2FjaGUgNjRLICg2NCBieXRlcy9saW5lKQ0KKFhFTikgQ1BVOiBMMiBDYWNoZTogNTEy
SyAoNjQgYnl0ZXMvbGluZSkNCihYRU4pIENQVSAwKDQpIC0+IFByb2Nlc3NvciAwLCBDb3JlIDAN
CihYRU4pIEFNRCBGYW0xMGggbWFjaGluZSBjaGVjayByZXBvcnRpbmcgZW5hYmxlZA0KKFhFTikg
QU1ELVZpOiBJT01NVSBub3QgZm91bmQhDQooWEVOKSBJL08gdmlydHVhbGlzYXRpb24gZGlzYWJs
ZWQNCihYRU4pIENQVTA6IEFNRCBFbmdpbmVlcmluZyBTYW1wbGUgc3RlcHBpbmcgMDANCihYRU4p
IEVOQUJMSU5HIElPLUFQSUMgSVJRcw0KKFhFTikgIC0+IFVzaW5nIG5ldyBBQ0sgbWV0aG9kDQoo
WEVOKSAuLlRJTUVSOiB2ZWN0b3I9MHhGMCBhcGljMT0wIHBpbjE9MiBhcGljMj0tMSBwaW4yPS0x
DQooWEVOKSBQbGF0Zm9ybSB0aW1lciBpcyAxNC4zMThNSHogSFBFVA0KKFhFTikgQWxsb2NhdGVk
IGNvbnNvbGUgcmluZyBvZiA2NCBLaUIuDQooWEVOKSBIVk06IEFTSURzIGVuYWJsZWQuDQooWEVO
KSBTVk06IFN1cHBvcnRlZCBhZHZhbmNlZCBmZWF0dXJlczoNCihYRU4pICAtIE5lc3RlZCBQYWdl
IFRhYmxlcyAoTlBUKQ0KKFhFTikgIC0gTGFzdCBCcmFuY2ggUmVjb3JkIChMQlIpIFZpcnR1YWxp
c2F0aW9uDQooWEVOKSAgLSBOZXh0LVJJUCBTYXZlZCBvbiAjVk1FWElUDQooWEVOKSBIVk06IFNW
TSBlbmFibGVkDQooWEVOKSBIVk06IEhhcmR3YXJlIEFzc2lzdGVkIFBhZ2luZyAoSEFQKSBkZXRl
Y3RlZA0KKFhFTikgSFZNOiBIQVAgcGFnZSBzaXplczogNGtCLCAyTUIsIDFHQg0KKFhFTikgQ1BV
IDAgQVBJQyAwIC0+IE5vZGUgMA0KKFhFTikgQ1BVIDEgQVBJQyAxIC0+IE5vZGUgMA0KKFhFTikg
Qm9vdGluZyBwcm9jZXNzb3IgMS8xIGVpcCA4YzAwMA0KKFhFTikgSW5pdGlhbGl6aW5nIENQVSMx
DQooWEVOKSBDUFU6IEwxIEkgY2FjaGUgNjRLICg2NCBieXRlcy9saW5lKSwgRCBjYWNoZSA2NEsg
KDY0IGJ5dGVzL2xpbmUpDQooWEVOKSBDUFU6IEwyIENhY2hlOiA1MTJLICg2NCBieXRlcy9saW5l
KQ0KKFhFTikgQ1BVIDEoNCkgLT4gUHJvY2Vzc29yIDAsIENvcmUgMQ0KKFhFTikgQ1BVMTogQU1E
IEVuZ2luZWVyaW5nIFNhbXBsZSBzdGVwcGluZyAwMA0KKFhFTikgbWljcm9jb2RlOiBjb2xsZWN0
X2NwdV9pbmZvOiBwYXRjaF9pZD0weDEwMDAwNjcNCihYRU4pIENQVSAyIEFQSUMgMiAtPiBOb2Rl
IDANCihYRU4pIEJvb3RpbmcgcHJvY2Vzc29yIDIvMiBlaXAgOGMwMDANCihYRU4pIEluaXRpYWxp
emluZyBDUFUjMg0KKFhFTikgQ1BVOiBMMSBJIGNhY2hlIDY0SyAoNjQgYnl0ZXMvbGluZSksIEQg
Y2FjaGUgNjRLICg2NCBieXRlcy9saW5lKQ0KKFhFTikgQ1BVOiBMMiBDYWNoZTogNTEySyAoNjQg
Ynl0ZXMvbGluZSkNCihYRU4pIENQVSAyKDQpIC0+IFByb2Nlc3NvciAwLCBDb3JlIDINCihYRU4p
IENQVTI6IEFNRCBFbmdpbmVlcmluZyBTYW1wbGUgc3RlcHBpbmcgMDANCihYRU4pIG1pY3JvY29k
ZTogY29sbGVjdF9jcHVfaW5mbzogcGF0Y2hfaWQ9MHgxMDAwMDY3DQooWEVOKSBDUFUgMyBBUElD
IDMgLT4gTm9kZSAwDQooWEVOKSBCb290aW5nIHByb2Nlc3NvciAzLzMgZWlwIDhjMDAwDQooWEVO
KSBJbml0aWFsaXppbmcgQ1BVIzMNCihYRU4pIENQVTogTDEgSSBjYWNoZSA2NEsgKDY0IGJ5dGVz
L2xpbmUpLCBEIGNhY2hlIDY0SyAoNjQgYnl0ZXMvbGluZSkNCihYRU4pIENQVTogTDIgQ2FjaGU6
IDUxMksgKDY0IGJ5dGVzL2xpbmUpDQooWEVOKSBDUFUgMyg0KSAtPiBQcm9jZXNzb3IgMCwgQ29y
ZSAzDQooWEVOKSBDUFUzOiBBTUQgRW5naW5lZXJpbmcgU2FtcGxlIHN0ZXBwaW5nIDAwDQooWEVO
KSBtaWNyb2NvZGU6IGNvbGxlY3RfY3B1X2luZm86IHBhdGNoX2lkPTB4MTAwMDA2Nw0KKFhFTikg
Q1BVIDQgQVBJQyA0IC0+IE5vZGUgMQ0KKFhFTikgQm9vdGluZyBwcm9jZXNzb3IgNC80IGVpcCA4
YzAwMA0KKFhFTikgSW5pdGlhbGl6aW5nIENQVSM0DQooWEVOKSBDUFU6IEwxIEkgY2FjaGUgNjRL
ICg2NCBieXRlcy9saW5lKSwgRCBjYWNoZSA2NEsgKDY0IGJ5dGVzL2xpbmUpDQooWEVOKSBDUFU6
IEwyIENhY2hlOiA1MTJLICg2NCBieXRlcy9saW5lKQ0KKFhFTikgQ1BVIDQoNCkgLT4gUHJvY2Vz
c29yIDEsIENvcmUgMA0KKFhFTikgQ1BVNDogQU1EIEVuZ2luZWVyaW5nIFNhbXBsZSBzdGVwcGlu
ZyAwMA0KKFhFTikgbWljcm9jb2RlOiBjb2xsZWN0X2NwdV9pbmZvOiBwYXRjaF9pZD0weDEwMDAw
NjcNCihYRU4pIENQVSA1IEFQSUMgNSAtPiBOb2RlIDENCihYRU4pIEJvb3RpbmcgcHJvY2Vzc29y
IDUvNSBlaXAgOGMwMDANCihYRU4pIEluaXRpYWxpemluZyBDUFUjNQ0KKFhFTikgQ1BVOiBMMSBJ
IGNhY2hlIDY0SyAoNjQgYnl0ZXMvbGluZSksIEQgY2FjaGUgNjRLICg2NCBieXRlcy9saW5lKQ0K
KFhFTikgQ1BVOiBMMiBDYWNoZTogNTEySyAoNjQgYnl0ZXMvbGluZSkNCihYRU4pIENQVSA1KDQp
IC0+IFByb2Nlc3NvciAxLCBDb3JlIDENCihYRU4pIENQVTU6IEFNRCBFbmdpbmVlcmluZyBTYW1w
bGUgc3RlcHBpbmcgMDANCihYRU4pIG1pY3JvY29kZTogY29sbGVjdF9jcHVfaW5mbzogcGF0Y2hf
aWQ9MHgxMDAwMDY3DQooWEVOKSBDUFUgNiBBUElDIDYgLT4gTm9kZSAxDQooWEVOKSBCb290aW5n
IHByb2Nlc3NvciA2LzYgZWlwIDhjMDAwDQooWEVOKSBJbml0aWFsaXppbmcgQ1BVIzYNCihYRU4p
IENQVTogTDEgSSBjYWNoZSA2NEsgKDY0IGJ5dGVzL2xpbmUpLCBEIGNhY2hlIDY0SyAoNjQgYnl0
ZXMvbGluZSkNCihYRU4pIENQVTogTDIgQ2FjaGU6IDUxMksgKDY0IGJ5dGVzL2xpbmUpDQooWEVO
KSBDUFUgNig0KSAtPiBQcm9jZXNzb3IgMSwgQ29yZSAyDQooWEVOKSBDUFU2OiBBTUQgRW5naW5l
ZXJpbmcgU2FtcGxlIHN0ZXBwaW5nIDAwDQooWEVOKSBtaWNyb2NvZGU6IGNvbGxlY3RfY3B1X2lu
Zm86IHBhdGNoX2lkPTB4MTAwMDA2Nw0KKFhFTikgQ1BVIDcgQVBJQyA3IC0+IE5vZGUgMQ0KKFhF
TikgQm9vdGluZyBwcm9jZXNzb3IgNy83IGVpcCA4YzAwMA0KKFhFTikgSW5pdGlhbGl6aW5nIENQ
VSM3DQooWEVOKSBDUFU6IEwxIEkgY2FjaGUgNjRLICg2NCBieXRlcy9saW5lKSwgRCBjYWNoZSA2
NEsgKDY0IGJ5dGVzL2xpbmUpDQooWEVOKSBDUFU6IEwyIENhY2hlOiA1MTJLICg2NCBieXRlcy9s
aW5lKQ0KKFhFTikgQ1BVIDcoNCkgLT4gUHJvY2Vzc29yIDEsIENvcmUgMw0KKFhFTikgQ1BVNzog
QU1EIEVuZ2luZWVyaW5nIFNhbXBsZSBzdGVwcGluZyAwMA0KKFhFTikgbWljcm9jb2RlOiBjb2xs
ZWN0X2NwdV9pbmZvOiBwYXRjaF9pZD0weDEwMDAwNjcNCihYRU4pIEJyb3VnaHQgdXAgOCBDUFVz
DQooWEVOKSBUZXN0aW5nIE5NSSB3YXRjaGRvZyAtLS0gQ1BVIzAgb2theS4gQ1BVIzEgb2theS4g
Q1BVIzIgb2theS4gQ1BVIzMgb2theS4gQ1BVIzQgb2theS4gQ1BVIzUgb2theS4gQ1BVIzYgb2th
eS4gQ1BVIzcgb2theS4gDQooWEVOKSBIUEVUOiAzIHRpbWVycyAoMCB3aWxsIGJlIHVzZWQgZm9y
IGJyb2FkY2FzdCkNCihYRU4pIEFDUEkgc2xlZXAgbW9kZXM6IFMzDQooWEVOKSBNQ0E6IFVzZSBo
dyB0aHJlc2hvbGRpbmcgdG8gYWRqdXN0IHBvbGxpbmcgZnJlcXVlbmN5DQooWEVOKSBtY2hlY2tf
cG9sbDogTWFjaGluZSBjaGVjayBwb2xsaW5nIHRpbWVyIHN0YXJ0ZWQuDQooWEVOKSBYZW5vcHJv
ZmlsZTogRmFpbGVkIHRvIHNldHVwIElCUyBMVlQgb2Zmc2V0LCBJQlNDVEwgPSAweGZmZmZmZmZm
DQooWEVOKSAqKiogTE9BRElORyBET01BSU4gMCAqKioNCihYRU4pIGVsZl9wYXJzZV9iaW5hcnk6
IHBoZHI6IHBhZGRyPTB4MTAwMDAwMCBtZW1zej0weDY3YjAwMA0KKFhFTikgZWxmX3BhcnNlX2Jp
bmFyeTogcGhkcjogcGFkZHI9MHgxNjdiMDAwIG1lbXN6PTB4MzFiMDAwDQooWEVOKSBlbGZfcGFy
c2VfYmluYXJ5OiBtZW1vcnk6IDB4MTAwMDAwMCAtPiAweDE5OTYwMDANCihYRU4pIGVsZl94ZW5f
cGFyc2Vfbm90ZTogR1VFU1RfT1MgPSAibGludXgiDQooWEVOKSBlbGZfeGVuX3BhcnNlX25vdGU6
IEdVRVNUX1ZFUlNJT04gPSAiMi42Ig0KKFhFTikgZWxmX3hlbl9wYXJzZV9ub3RlOiBYRU5fVkVS
U0lPTiA9ICJ4ZW4tMy4wIg0KKFhFTikgZWxmX3hlbl9wYXJzZV9ub3RlOiBWSVJUX0JBU0UgPSAw
eGMwMDAwMDAwDQooWEVOKSBlbGZfeGVuX3BhcnNlX25vdGU6IEVOVFJZID0gMHhjMTZmNTAwMA0K
KFhFTikgZWxmX3hlbl9wYXJzZV9ub3RlOiBIWVBFUkNBTExfUEFHRSA9IDB4YzEwMDIwMDANCihY
RU4pIGVsZl94ZW5fcGFyc2Vfbm90ZTogRkVBVFVSRVMgPSAiIXdyaXRhYmxlX3BhZ2VfdGFibGVz
fHBhZV9wZ2Rpcl9hYm92ZV80Z2IiDQooWEVOKSBlbGZfeGVuX3BhcnNlX25vdGU6IFBBRV9NT0RF
ID0gInllcyINCihYRU4pIGVsZl94ZW5fcGFyc2Vfbm90ZTogTE9BREVSID0gImdlbmVyaWMiDQoo
WEVOKSBlbGZfeGVuX3BhcnNlX25vdGU6IHVua25vd24geGVuIGVsZiBub3RlICgweGQpDQooWEVO
KSBlbGZfeGVuX3BhcnNlX25vdGU6IFNVU1BFTkRfQ0FOQ0VMID0gMHgxDQooWEVOKSBlbGZfeGVu
X3BhcnNlX25vdGU6IEhWX1NUQVJUX0xPVyA9IDB4ZjU4MDAwMDANCihYRU4pIGVsZl94ZW5fcGFy
c2Vfbm90ZTogUEFERFJfT0ZGU0VUID0gMHgwDQooWEVOKSBlbGZfeGVuX2FkZHJfY2FsY19jaGVj
azogYWRkcmVzc2VzOg0KKFhFTikgICAgIHZpcnRfYmFzZSAgICAgICAgPSAweGMwMDAwMDAwDQoo
WEVOKSAgICAgZWxmX3BhZGRyX29mZnNldCA9IDB4MA0KKFhFTikgICAgIHZpcnRfb2Zmc2V0ICAg
ICAgPSAweGMwMDAwMDAwDQooWEVOKSAgICAgdmlydF9rc3RhcnQgICAgICA9IDB4YzEwMDAwMDAN
CihYRU4pICAgICB2aXJ0X2tlbmQgICAgICAgID0gMHhjMTk5NjAwMA0KKFhFTikgICAgIHZpcnRf
ZW50cnkgICAgICAgPSAweGMxNmY1MDAwDQooWEVOKSAgICAgcDJtX2Jhc2UgICAgICAgICA9IDB4
ZmZmZmZmZmZmZmZmZmZmZg0KKFhFTikgIFhlbiAga2VybmVsOiA2NC1iaXQsIGxzYiwgY29tcGF0
MzINCihYRU4pICBEb20wIGtlcm5lbDogMzItYml0LCBQQUUsIGxzYiwgcGFkZHIgMHgxMDAwMDAw
IC0+IDB4MTk5NjAwMA0KKFhFTikgUEhZU0lDQUwgTUVNT1JZIEFSUkFOR0VNRU5UOg0KKFhFTikg
IERvbTAgYWxsb2MuOiAgIDAwMDAwMDAxMjQwMDAwMDAtPjAwMDAwMDAxMjgwMDAwMDAgKDk3MDQ1
MSBwYWdlcyB0byBiZSBhbGxvY2F0ZWQpDQooWEVOKSAgSW5pdC4gcmFtZGlzazogMDAwMDAwMDEy
YzQ1MjAwMC0+MDAwMDAwMDEyZmZmZmEwMA0KKFhFTikgVklSVFVBTCBNRU1PUlkgQVJSQU5HRU1F
TlQ6DQooWEVOKSAgTG9hZGVkIGtlcm5lbDogMDAwMDAwMDBjMTAwMDAwMC0+MDAwMDAwMDBjMTk5
NjAwMA0KKFhFTikgIEluaXQuIHJhbWRpc2s6IDAwMDAwMDAwYzE5OTYwMDAtPjAwMDAwMDAwYzU1
NDNhMDANCihYRU4pICBQaHlzLU1hY2ggbWFwOiAwMDAwMDAwMGM1NTQ0MDAwLT4wMDAwMDAwMGM1
OTE2YTA0DQooWEVOKSAgU3RhcnQgaW5mbzogICAgMDAwMDAwMDBjNTkxNzAwMC0+MDAwMDAwMDBj
NTkxNzRiNA0KKFhFTikgIFBhZ2UgdGFibGVzOiAgIDAwMDAwMDAwYzU5MTgwMDAtPjAwMDAwMDAw
YzU5NGMwMDANCihYRU4pICBCb290IHN0YWNrOiAgICAwMDAwMDAwMGM1OTRjMDAwLT4wMDAwMDAw
MGM1OTRkMDAwDQooWEVOKSAgVE9UQUw6ICAgICAgICAgMDAwMDAwMDBjMDAwMDAwMC0+MDAwMDAw
MDBjNWMwMDAwMA0KKFhFTikgIEVOVFJZIEFERFJFU1M6IDAwMDAwMDAwYzE2ZjUwMDANCihYRU4p
IERvbTAgaGFzIG1heGltdW0gOCBWQ1BVcw0KKFhFTikgZWxmX2xvYWRfYmluYXJ5OiBwaGRyIDAg
YXQgMHgwMDAwMDAwMGMxMDAwMDAwIC0+IDB4MDAwMDAwMDBjMTY3YjAwMA0KKFhFTikgZWxmX2xv
YWRfYmluYXJ5OiBwaGRyIDEgYXQgMHgwMDAwMDAwMGMxNjdiMDAwIC0+IDB4MDAwMDAwMDBjMTc3
NDAwMA0KKFhFTikgU2NydWJiaW5nIEZyZWUgUkFNOiAuZG9uZS4NCihYRU4pIEluaXRpYWwgbG93
IG1lbW9yeSB2aXJxIHRocmVzaG9sZCBzZXQgYXQgMHg0MDAwIHBhZ2VzLg0KKFhFTikgU3RkLiBM
b2dsZXZlbDogQWxsDQooWEVOKSBHdWVzdCBMb2dsZXZlbDogQWxsDQooWEVOKSAqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqDQooWEVOKSAqKioqKioqIFdBUk5J
Tkc6IENPTlNPTEUgT1VUUFVUIElTIFNZTkNIUk9OT1VTDQooWEVOKSAqKioqKioqIFRoaXMgb3B0
aW9uIGlzIGludGVuZGVkIHRvIGFpZCBkZWJ1Z2dpbmcgb2YgWGVuIGJ5IGVuc3VyaW5nDQooWEVO
KSAqKioqKioqIHRoYXQgYWxsIG91dHB1dCBpcyBzeW5jaHJvbm91c2x5IGRlbGl2ZXJlZCBvbiB0
aGUgc2VyaWFsIGxpbmUuDQooWEVOKSAqKioqKioqIEhvd2V2ZXIgaXQgY2FuIGludHJvZHVjZSBT
SUdOSUZJQ0FOVCBsYXRlbmNpZXMgYW5kIGFmZmVjdA0KKFhFTikgKioqKioqKiB0aW1la2VlcGlu
Zy4gSXQgaXMgTk9UIHJlY29tbWVuZGVkIGZvciBwcm9kdWN0aW9uIHVzZSENCihYRU4pICoqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioNCihYRU4pIDMuLi4gMi4u
LiAxLi4uIA0KKFhFTikgKioqIFNlcmlhbCBpbnB1dCAtPiBET00wICh0eXBlICdDVFJMLWEnIHRo
cmVlIHRpbWVzIHRvIHN3aXRjaCBpbnB1dCB0byBYZW4pDQooWEVOKSBGcmVlZCAyNDRrQiBpbml0
IG1lbW9yeS4NCm1hcHBpbmcga2VybmVsIGludG8gcGh5c2ljYWwgbWVtb3J5DQpYZW46IHNldHVw
IElTQSBpZGVudGl0eSBtYXBzDQphYm91dCB0byBnZXQgc3RhcnRlZC4uLg0KWyAgICAwLjAwMDAw
MF0gUmVzZXJ2aW5nIHZpcnR1YWwgYWRkcmVzcyBzcGFjZSBhYm92ZSAweGZmODAwMDAwDQpbICAg
IDAuMDAwMDAwXSBJbml0aWFsaXppbmcgY2dyb3VwIHN1YnN5cyBjcHVzZXQNClsgICAgMC4wMDAw
MDBdIEluaXRpYWxpemluZyBjZ3JvdXAgc3Vic3lzIGNwdQ0KWyAgICAwLjAwMDAwMF0gTGludXgg
dmVyc2lvbiAyLjYuMzIuMjUgKG9zc3Rlc3RAaXRjaC1taXRlLmNhbS54Y2ktdGVzdC5jb20pIChn
Y2MgdmVyc2lvbiA0LjMuMiAoRGViaWFuIDQuMy4yLTEuMSkgKSAjMSBTTVAgVHVlIE5vdiAyMyAw
NToyNzo0NyBHTVQgMjAxMA0KWyAgICAwLjAwMDAwMF0gS0VSTkVMIHN1cHBvcnRlZCBjcHVzOg0K
WyAgICAwLjAwMDAwMF0gICBJbnRlbCBHZW51aW5lSW50ZWwNClsgICAgMC4wMDAwMDBdICAgQU1E
IEF1dGhlbnRpY0FNRA0KWyAgICAwLjAwMDAwMF0gICBOU0MgR2VvZGUgYnkgTlNDDQpbICAgIDAu
MDAwMDAwXSAgIEN5cml4IEN5cml4SW5zdGVhZA0KWyAgICAwLjAwMDAwMF0gICBDZW50YXVyIENl
bnRhdXJIYXVscw0KWyAgICAwLjAwMDAwMF0gICBUcmFuc21ldGEgR2VudWluZVRNeDg2DQpbICAg
IDAuMDAwMDAwXSAgIFRyYW5zbWV0YSBUcmFuc21ldGFDUFUNClsgICAgMC4wMDAwMDBdICAgVU1D
IFVNQyBVTUMgVU1DDQpbICAgIDAuMDAwMDAwXSB4ZW5fcmVsZWFzZV9jaHVuazogbG9va2luZyBh
dCBhcmVhIHBmbiBkMDAwMC1mNGE4MTogMTUwMTQ1IHBhZ2VzIGZyZWVkDQpbICAgIDAuMDAwMDAw
XSByZWxlYXNlZCAxNTAxNDUgcGFnZXMgb2YgdW51c2VkIG1lbW9yeQ0KWyAgICAwLjAwMDAwMF0g
QklPUy1wcm92aWRlZCBwaHlzaWNhbCBSQU0gbWFwOg0KWyAgICAwLjAwMDAwMF0gIFhlbjogMDAw
MDAwMDAwMDAwMDAwMCAtIDAwMDAwMDAwMDAwOWM0MDAgKHVzYWJsZSkNClsgICAgMC4wMDAwMDBd
ICBYZW46IDAwMDAwMDAwMDAwOWM0MDAgLSAwMDAwMDAwMDAwMTAwMDAwIChyZXNlcnZlZCkNClsg
ICAgMC4wMDAwMDBdICBYZW46IDAwMDAwMDAwMDAxMDAwMDAgLSAwMDAwMDAwMGNmZWUwMDAwICh1
c2FibGUpDQpbICAgIDAuMDAwMDAwXSAgWGVuOiAwMDAwMDAwMGNmZWUwMDAwIC0gMDAwMDAwMDBj
ZmVlNTAwMCAoQUNQSSBkYXRhKQ0KWyAgICAwLjAwMDAwMF0gIFhlbjogMDAwMDAwMDBjZmVlNTAw
MCAtIDAwMDAwMDAwY2ZlZjEwMDAgKEFDUEkgTlZTKQ0KWyAgICAwLjAwMDAwMF0gIFhlbjogMDAw
MDAwMDBjZmVmMTAwMCAtIDAwMDAwMDAwZDAwMDAwMDAgKHJlc2VydmVkKQ0KWyAgICAwLjAwMDAw
MF0gIFhlbjogMDAwMDAwMDBmZWMwMDAwMCAtIDAwMDAwMDAwZmVjMDMwMDAgKHJlc2VydmVkKQ0K
WyAgICAwLjAwMDAwMF0gIFhlbjogMDAwMDAwMDBmZWUwMDAwMCAtIDAwMDAwMDAwZmVlMDEwMDAg
KHJlc2VydmVkKQ0KWyAgICAwLjAwMDAwMF0gIFhlbjogMDAwMDAwMDBmZmY4MDAwMCAtIDAwMDAw
MDAxMDAwMDAwMDAgKHJlc2VydmVkKQ0KWyAgICAwLjAwMDAwMF0gIFhlbjogMDAwMDAwMDEzMDAw
MDAwMCAtIDAwMDAwMDAxNTRhODEwMDAgKHVzYWJsZSkNClsgICAgMC4wMDAwMDBdIGJvb3Rjb25z
b2xlIFt4ZW5ib290MF0gZW5hYmxlZA0KWyAgICAwLjAwMDAwMF0gRE1JIHByZXNlbnQuDQpbICAg
IDAuMDAwMDAwXSBQaG9lbml4IEJJT1MgZGV0ZWN0ZWQ6IEJJT1MgbWF5IGNvcnJ1cHQgbG93IFJB
TSwgd29ya2luZyBhcm91bmQgaXQuDQpbICAgIDAuMDAwMDAwXSBsYXN0X3BmbiA9IDB4MTU0YTgx
IG1heF9hcmNoX3BmbiA9IDB4MTAwMDAwMA0KWyAgICAwLjAwMDAwMF0geDg2IFBBVCBlbmFibGVk
OiBjcHUgMCwgb2xkIDB4NTAxMDAwNzA0MDYsIG5ldyAweDcwMTA2MDAwNzAxMDYNClsgICAgMC4w
MDAwMDBdIFNjYW5uaW5nIDAgYXJlYXMgZm9yIGxvdyBtZW1vcnkgY29ycnVwdGlvbg0KWyAgICAw
LjAwMDAwMF0gbW9kaWZpZWQgcGh5c2ljYWwgUkFNIG1hcDoNClsgICAgMC4wMDAwMDBdICBtb2Rp
ZmllZDogMDAwMDAwMDAwMDAwMDAwMCAtIDAwMDAwMDAwMDAwMTAwMDAgKHJlc2VydmVkKQ0KWyAg
ICAwLjAwMDAwMF0gIG1vZGlmaWVkOiAwMDAwMDAwMDAwMDEwMDAwIC0gMDAwMDAwMDAwMDA5YzQw
MCAodXNhYmxlKQ0KWyAgICAwLjAwMDAwMF0gIG1vZGlmaWVkOiAwMDAwMDAwMDAwMDljNDAwIC0g
MDAwMDAwMDAwMDEwMDAwMCAocmVzZXJ2ZWQpDQpbICAgIDAuMDAwMDAwXSAgbW9kaWZpZWQ6IDAw
MDAwMDAwMDAxMDAwMDAgLSAwMDAwMDAwMGNmZWUwMDAwICh1c2FibGUpDQpbICAgIDAuMDAwMDAw
XSAgbW9kaWZpZWQ6IDAwMDAwMDAwY2ZlZTAwMDAgLSAwMDAwMDAwMGNmZWU1MDAwIChBQ1BJIGRh
dGEpDQpbICAgIDAuMDAwMDAwXSAgbW9kaWZpZWQ6IDAwMDAwMDAwY2ZlZTUwMDAgLSAwMDAwMDAw
MGNmZWYxMDAwIChBQ1BJIE5WUykNClsgICAgMC4wMDAwMDBdICBtb2RpZmllZDogMDAwMDAwMDBj
ZmVmMTAwMCAtIDAwMDAwMDAwZDAwMDAwMDAgKHJlc2VydmVkKQ0KWyAgICAwLjAwMDAwMF0gIG1v
ZGlmaWVkOiAwMDAwMDAwMGZlYzAwMDAwIC0gMDAwMDAwMDBmZWMwMzAwMCAocmVzZXJ2ZWQpDQpb
ICAgIDAuMDAwMDAwXSAgbW9kaWZpZWQ6IDAwMDAwMDAwZmVlMDAwMDAgLSAwMDAwMDAwMGZlZTAx
MDAwIChyZXNlcnZlZCkNClsgICAgMC4wMDAwMDBdICBtb2RpZmllZDogMDAwMDAwMDBmZmY4MDAw
MCAtIDAwMDAwMDAxMDAwMDAwMDAgKHJlc2VydmVkKQ0KWyAgICAwLjAwMDAwMF0gIG1vZGlmaWVk
OiAwMDAwMDAwMTMwMDAwMDAwIC0gMDAwMDAwMDE1NGE4MTAwMCAodXNhYmxlKQ0KWyAgICAwLjAw
MDAwMF0gaW5pdF9tZW1vcnlfbWFwcGluZzogMDAwMDAwMDAwMDAwMDAwMC0wMDAwMDAwMDM3MWZl
MDAwDQpbICAgIDAuMDAwMDAwXSBOWCAoRXhlY3V0ZSBEaXNhYmxlKSBwcm90ZWN0aW9uOiBhY3Rp
dmUNClsgICAgMC4wMDAwMDBdIFJBTURJU0s6IDAxOTk2MDAwIC0gMDU1NDNhMDANClsgICAgMC4w
MDAwMDBdIEFDUEk6IFJTRFAgMDAwZjgwODAgMDAwMjQgKHYwMiBQVExURCApDQpbICAgIDAuMDAw
MDAwXSBBQ1BJOiBYU0RUIGNmZWUwMWZlIDAwMDVDICh2MDEgQlJDTSAgIEVYUExPU04gIDA2MDQw
MDAwIFBUTCAgMDIwMDAwMDEpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBGQUNQIGNmZWUwMmNlIDAw
MEY0ICh2MDMgQlJDTSAgIEVYUExPU04gIDA2MDQwMDAwIE1TRlQgMDIwMDAwMDEpDQpbICAgIDAu
MDAwMDAwXSBBQ1BJIFdhcm5pbmc6IE9wdGlvbmFsIGZpZWxkIFBtMkNvbnRyb2xCbG9jayBoYXMg
emVybyBhZGRyZXNzIG9yIGxlbmd0aDogMDAwMDAwMDAwMDAwMDAwMC9DICgyMDA5MDkwMy90YmZh
ZHQtNTU3KQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogRFNEVCBjZmVlMDNjMiAwNDk0OSAodjAyIEJS
Q00gICBFWFBMT1NOICAwNjA0MDAwMCBNU0ZUIDAzMDAwMDAwKQ0KWyAgICAwLjAwMDAwMF0gQUNQ
STogRkFDUyBjZmVmMGZjMCAwMDA0MA0KWyAgICAwLjAwMDAwMF0gQUNQSTogVENQQSBjZmVlNGQw
YiAwMDAzMiAodjAxIEJSQ00gICBFWFBMT1NOICAwNjA0MDAwMCBQVEwgIDIwMDAwMDAxKQ0KWyAg
ICAwLjAwMDAwMF0gQUNQSTogU1JBVCBjZmVlNGQzZCAwMDEyOCAodjAxIEFNRCAgICBGQU1fRl8x
MCAwNjA0MDAwMCBBTUQgIDAwMDAwMDAxKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogSFBFVCBjZmVl
NGU2NSAwMDAzOCAodjAxIEJSQ00gICBFWFBMT1NOICAwNjA0MDAwMCBCUkNNIDAyMDAwMDAxKQ0K
WyAgICAwLjAwMDAwMF0gQUNQSTogU1NEVCBjZmVlNGU5ZCAwMDA0OSAodjAxIEJSQ00gICBQUlQw
ICAgICAwNjA0MDAwMCBCUkNNIDAyMDAwMDAxKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogU1BDUiBj
ZmVlNGVlNiAwMDA1MCAodjAxIFBUTFREICAkVUNSVEJMJCAwNjA0MDAwMCBQVEwgIDAwMDAwMDAx
KQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogQVBJQyBjZmVlNGYzNiAwMDBDQSAodjAxIEJSQ00gICBF
WFBMT1NOICAwNjA0MDAwMCBQVEwgIDAyMDAwMDAxKQ0KWyAgICAwLjAwMDAwMF0gNDU2OE1CIEhJ
R0hNRU0gYXZhaWxhYmxlLg0KWyAgICAwLjAwMDAwMF0gODgxTUIgTE9XTUVNIGF2YWlsYWJsZS4N
ClsgICAgMC4wMDAwMDBdICAgbWFwcGVkIGxvdyByYW06IDAgLSAzNzFmZTAwMA0KWyAgICAwLjAw
MDAwMF0gICBsb3cgcmFtOiAwIC0gMzcxZmUwMDANClsgICAgMC4wMDAwMDBdICAgbm9kZSAwIGxv
dyByYW06IDAwMDAwMDAwIC0gMzcxZmUwMDANClsgICAgMC4wMDAwMDBdICAgbm9kZSAwIGJvb3Rt
YXAgMDAwMTAwMDAgLSAwMDAxNmU0MA0KWyAgICAwLjAwMDAwMF0gKDExIGVhcmx5IHJlc2VydmF0
aW9ucykgPT0+IGJvb3RtZW0gWzAwMDAwMDAwMDAgLSAwMDM3MWZlMDAwXQ0KWyAgICAwLjAwMDAw
MF0gICAjMCBbMDAwMDAwMDAwMCAtIDAwMDAwMDEwMDBdICAgQklPUyBkYXRhIHBhZ2UgPT0+IFsw
MDAwMDAwMDAwIC0gMDAwMDAwMTAwMF0NClsgICAgMC4wMDAwMDBdICAgIzEgWzAwMDU5MWEwMDAg
LSAwMDA1OTRlMDAwXSAgIFhFTiBQQUdFVEFCTEVTID09PiBbMDAwNTkxYTAwMCAtIDAwMDU5NGUw
MDBdDQpbICAgIDAuMDAwMDAwXSAgICMyIFswMDAwMDAxMDAwIC0gMDAwMDAwMjAwMF0gICAgRVgg
VFJBTVBPTElORSA9PT4gWzAwMDAwMDEwMDAgLSAwMDAwMDAyMDAwXQ0KWyAgICAwLjAwMDAwMF0g
ICAjMyBbMDAwMDAwNjAwMCAtIDAwMDAwMDcwMDBdICAgICAgIFRSQU1QT0xJTkUgPT0+IFswMDAw
MDA2MDAwIC0gMDAwMDAwNzAwMF0NClsgICAgMC4wMDAwMDBdICAgIzQgWzAwMDEwMDAwMDAgLSAw
MDAxODI1M2MwXSAgICBURVhUIERBVEEgQlNTID09PiBbMDAwMTAwMDAwMCAtIDAwMDE4MjUzYzBd
DQpbICAgIDAuMDAwMDAwXSAgICM1IFswMDAxOTk2MDAwIC0gMDAwNTU0M2EwMF0gICAgICAgICAg
UkFNRElTSyA9PT4gWzAwMDE5OTYwMDAgLSAwMDA1NTQzYTAwXQ0KWyAgICAwLjAwMDAwMF0gICAj
NiBbMDAwNTU0NDAwMCAtIDAwMDU5MWEwMDBdICAgWEVOIFNUQVJUIElORk8gPT0+IFswMDA1NTQ0
MDAwIC0gMDAwNTkxYTAwMF0NClsgICAgMC4wMDAwMDBdICAgIzcgWzAxMzAwMDAwMDAgLSAwMTU0
YTgxMDAwXSAgICAgICAgWEVOIEVYVFJBDQpbICAgIDAuMDAwMDAwXSAgICM4IFswMDAxODI2MDAw
IC0gMDAwMTgzMzE4Y10gICAgICAgICAgICAgIEJSSyA9PT4gWzAwMDE4MjYwMDAgLSAwMDAxODMz
MThjXQ0KWyAgICAwLjAwMDAwMF0gICAjOSBbMDAwMDEwMDAwMCAtIDAwMDAyODgwMDBdICAgICAg
ICAgIFBHVEFCTEUgPT0+IFswMDAwMTAwMDAwIC0gMDAwMDI4ODAwMF0NClsgICAgMC4wMDAwMDBd
ICAgIzEwIFswMDAwMDEwMDAwIC0gMDAwMDAxNzAwMF0gICAgICAgICAgQk9PVE1BUCA9PT4gWzAw
MDAwMTAwMDAgLSAwMDAwMDE3MDAwXQ0KWyAgICAwLjAwMDAwMF0gZm91bmQgU01QIE1QLXRhYmxl
IGF0IFtjMDBmODBiMF0gZjgwYjANClsgICAgMC4wMDAwMDBdIFpvbmUgUEZOIHJhbmdlczoNClsg
ICAgMC4wMDAwMDBdICAgRE1BICAgICAgMHgwMDAwMDAxMCAtPiAweDAwMDAxMDAwDQpbICAgIDAu
MDAwMDAwXSAgIE5vcm1hbCAgIDB4MDAwMDEwMDAgLT4gMHgwMDAzNzFmZQ0KWyAgICAwLjAwMDAw
MF0gICBIaWdoTWVtICAweDAwMDM3MWZlIC0+IDB4MDAxNTRhODENClsgICAgMC4wMDAwMDBdIE1v
dmFibGUgem9uZSBzdGFydCBQRk4gZm9yIGVhY2ggbm9kZQ0KWyAgICAwLjAwMDAwMF0gZWFybHlf
bm9kZV9tYXBbM10gYWN0aXZlIFBGTiByYW5nZXMNClsgICAgMC4wMDAwMDBdICAgICAwOiAweDAw
MDAwMDEwIC0+IDB4MDAwMDAwOWMNClsgICAgMC4wMDAwMDBdICAgICAwOiAweDAwMDAwMTAwIC0+
IDB4MDAwY2ZlZTANClsgICAgMC4wMDAwMDBdICAgICAwOiAweDAwMTMwMDAwIC0+IDB4MDAxNTRh
ODENClsgICAgMC4wMDAwMDBdIFVzaW5nIEFQSUMgZHJpdmVyIGRlZmF1bHQNClsgICAgMC4wMDAw
MDBdIEFDUEk6IFBNLVRpbWVyIElPIFBvcnQ6IDB4NTA4DQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBM
QVBJQyAoYWNwaV9pZFsweDAwXSBsYXBpY19pZFsweDAwXSBlbmFibGVkKQ0KWyAgICAwLjAwMDAw
MF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgwMV0gbGFwaWNfaWRbMHgwMV0gZW5hYmxlZCkNClsg
ICAgMC4wMDAwMDBdIEFDUEk6IExBUElDIChhY3BpX2lkWzB4MDJdIGxhcGljX2lkWzB4MDJdIGVu
YWJsZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDAzXSBsYXBpY19p
ZFsweDAzXSBlbmFibGVkKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgw
NF0gbGFwaWNfaWRbMHgwNF0gZW5hYmxlZCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElDIChh
Y3BpX2lkWzB4MDVdIGxhcGljX2lkWzB4MDVdIGVuYWJsZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJ
OiBMQVBJQyAoYWNwaV9pZFsweDA2XSBsYXBpY19pZFsweDA2XSBlbmFibGVkKQ0KWyAgICAwLjAw
MDAwMF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgwN10gbGFwaWNfaWRbMHgwN10gZW5hYmxlZCkN
ClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElDX05NSSAoYWNwaV9pZFsweDAwXSBoaWdoIGVkZ2Ug
bGludFsweDFdKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogTEFQSUNfTk1JIChhY3BpX2lkWzB4MDFd
IGhpZ2ggZWRnZSBsaW50WzB4MV0pDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBMQVBJQ19OTUkgKGFj
cGlfaWRbMHgwMl0gaGlnaCBlZGdlIGxpbnRbMHgxXSkNClsgICAgMC4wMDAwMDBdIEFDUEk6IExB
UElDX05NSSAoYWNwaV9pZFsweDAzXSBoaWdoIGVkZ2UgbGludFsweDFdKQ0KWyAgICAwLjAwMDAw
MF0gQUNQSTogTEFQSUNfTk1JIChhY3BpX2lkWzB4MDRdIGhpZ2ggZWRnZSBsaW50WzB4MV0pDQpb
ICAgIDAuMDAwMDAwXSBBQ1BJOiBMQVBJQ19OTUkgKGFjcGlfaWRbMHgwNV0gaGlnaCBlZGdlIGxp
bnRbMHgxXSkNClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElDX05NSSAoYWNwaV9pZFsweDA2XSBo
aWdoIGVkZ2UgbGludFsweDFdKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogTEFQSUNfTk1JIChhY3Bp
X2lkWzB4MDddIGhpZ2ggZWRnZSBsaW50WzB4MV0pDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBJT0FQ
SUMgKGlkWzB4MDhdIGFkZHJlc3NbMHhmZWMwMDAwMF0gZ3NpX2Jhc2VbMF0pDQpbICAgIDAuMDAw
MDAwXSBJT0FQSUNbMF06IGFwaWNfaWQgOCwgdmVyc2lvbiAwLCBhZGRyZXNzIDB4ZmVjMDAwMDAs
IEdTSSAwLTANClsgICAgMC4wMDAwMDBdIEFDUEk6IElPQVBJQyAoaWRbMHgwOV0gYWRkcmVzc1sw
eGZlYzAxMDAwXSBnc2lfYmFzZVsxNl0pDQpbICAgIDAuMDAwMDAwXSBJT0FQSUNbMV06IGFwaWNf
aWQgOSwgdmVyc2lvbiAwLCBhZGRyZXNzIDB4ZmVjMDEwMDAsIEdTSSAxNi0xNg0KWyAgICAwLjAw
MDAwMF0gQUNQSTogSU9BUElDIChpZFsweDBhXSBhZGRyZXNzWzB4ZmVjMDIwMDBdIGdzaV9iYXNl
WzMyXSkNClsgICAgMC4wMDAwMDBdIElPQVBJQ1syXTogYXBpY19pZCAxMCwgdmVyc2lvbiAwLCBh
ZGRyZXNzIDB4ZmVjMDIwMDAsIEdTSSAzMi0zMg0KWyAgICAwLjAwMDAwMF0gQUNQSTogSU5UX1NS
Q19PVlIgKGJ1cyAwIGJ1c19pcnEgMCBnbG9iYWxfaXJxIDIgaGlnaCBlZGdlKQ0KWyAgICAwLjAw
MDAwMF0gRVJST1I6IFVuYWJsZSB0byBsb2NhdGUgSU9BUElDIGZvciBHU0kgMg0KWyAgICAwLjAw
MDAwMF0gRVJST1I6IFVuYWJsZSB0byBsb2NhdGUgSU9BUElDIGZvciBHU0kgOQ0KWyAgICAwLjAw
MDAwMF0gVXNpbmcgQUNQSSAoTUFEVCkgZm9yIFNNUCBjb25maWd1cmF0aW9uIGluZm9ybWF0aW9u
DQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBIUEVUIGlkOiAweDExNjZhMjAxIGJhc2U6IDB4ZmVkMDAw
MDANClsgICAgMC4wMDAwMDBdIFNNUDogQWxsb3dpbmcgOCBDUFVzLCAwIGhvdHBsdWcgQ1BVcw0K
WyAgICAwLjAwMDAwMF0gUE06IFJlZ2lzdGVyZWQgbm9zYXZlIG1lbW9yeTogMDAwMDAwMDAwMDA5
YzAwMCAtIDAwMDAwMDAwMDAwOWQwMDANClsgICAgMC4wMDAwMDBdIFBNOiBSZWdpc3RlcmVkIG5v
c2F2ZSBtZW1vcnk6IDAwMDAwMDAwMDAwOWQwMDAgLSAwMDAwMDAwMDAwMTAwMDAwDQpbICAgIDAu
MDAwMDAwXSBBbGxvY2F0aW5nIFBDSSByZXNvdXJjZXMgc3RhcnRpbmcgYXQgZDAwMDAwMDAgKGdh
cDogZDAwMDAwMDA6MmVjMDAwMDApDQpbICAgIDAuMDAwMDAwXSBCb290aW5nIHBhcmF2aXJ0dWFs
aXplZCBrZXJuZWwgb24gWGVuDQpbICAgIDAuMDAwMDAwXSBYZW4gdmVyc2lvbjogNC4yLjAtcmMy
IChwcmVzZXJ2ZS1BRCkgKGRvbTApDQpbICAgIDAuMDAwMDAwXSBOUl9DUFVTOjggbnJfY3B1bWFz
a19iaXRzOjggbnJfY3B1X2lkczo4IG5yX25vZGVfaWRzOjENClsgICAgMC4wMDAwMDBdIFBFUkNQ
VTogRW1iZWRkZWQgMTUgcGFnZXMvY3B1IEBjODNmODAwMCBzMzcwODAgcjAgZDI0MzYwIHU2NTUz
Ng0KWyAgICAwLjAwMDAwMF0gcGNwdS1hbGxvYzogczM3MDgwIHIwIGQyNDM2MCB1NjU1MzYgYWxs
b2M9MTYqNDA5Ng0KWyAgICAwLjAwMDAwMF0gcGNwdS1hbGxvYzogWzBdIDAgWzBdIDEgWzBdIDIg
WzBdIDMgWzBdIDQgWzBdIDUgWzBdIDYgWzBdIDcgDQpbICAgIDAuMDAwMDAwXSBCdWlsdCAxIHpv
bmVsaXN0cyBpbiBab25lIG9yZGVyLCBtb2JpbGl0eSBncm91cGluZyBvbi4gIFRvdGFsIHBhZ2Vz
OiA5OTA4MDcNClsgICAgMC4wMDAwMDBdIEtlcm5lbCBjb21tYW5kIGxpbmU6IHBsYWNlaG9sZGVy
IHJvb3Q9VVVJRD1lNzU2OGFjNi1kNTYxLTQ4M2EtYjViYS04OGZkOTM1MGVlMTggcm8gY29uc29s
ZT1odmMwIGVhcmx5cHJpbnRrPXhlbiBub21vZGVzZXQNClsgICAgMC4wMDAwMDBdIFBJRCBoYXNo
IHRhYmxlIGVudHJpZXM6IDQwOTYgKG9yZGVyOiAyLCAxNjM4NCBieXRlcykNClsgICAgMC4wMDAw
MDBdIERlbnRyeSBjYWNoZSBoYXNoIHRhYmxlIGVudHJpZXM6IDEzMTA3MiAob3JkZXI6IDcsIDUy
NDI4OCBieXRlcykNClsgICAgMC4wMDAwMDBdIElub2RlLWNhY2hlIGhhc2ggdGFibGUgZW50cmll
czogNjU1MzYgKG9yZGVyOiA2LCAyNjIxNDQgYnl0ZXMpDQpbICAgIDAuMDAwMDAwXSBFbmFibGlu
ZyBmYXN0IEZQVSBzYXZlIGFuZCByZXN0b3JlLi4uIGRvbmUuDQpbICAgIDAuMDAwMDAwXSBFbmFi
bGluZyB1bm1hc2tlZCBTSU1EIEZQVSBleGNlcHRpb24gc3VwcG9ydC4uLiBkb25lLg0KWyAgICAw
LjAwMDAwMF0gSW5pdGlhbGl6aW5nIENQVSMwDQpbICAgIDAuMDAwMDAwXSBETUE6IFBsYWNpbmcg
NjRNQiBzb2Z0d2FyZSBJTyBUTEIgYmV0d2VlbiBjODUzYjAwMCAtIGNjNTNiMDAwDQpbICAgIDAu
MDAwMDAwXSBETUE6IHNvZnR3YXJlIElPIFRMQiBhdCBwaHlzIDB4ODUzYjAwMCAtIDB4YzUzYjAw
MA0KWyAgICAwLjAwMDAwMF0geGVuX3N3aW90bGJfZml4dXA6IGJ1Zj1jODUzYjAwMCBzaXplPTY3
MTA4ODY0DQpbICAgIDAuMDAwMDAwXSB4ZW5fc3dpb3RsYl9maXh1cDogYnVmPWNjNTliMDAwIHNp
emU9MzI3NjgNClsgICAgMC4wMDAwMDBdIEluaXRpYWxpemluZyBIaWdoTWVtIGZvciBub2RlIDAg
KDAwMDM3MWZlOjAwMTU0YTgxKQ0KWyAgICAwLjAwMDAwMF0gTWVtb3J5OiAzMjE5MTAway81NTgx
MzE2ayBhdmFpbGFibGUgKDQ2MTlrIGtlcm5lbCBjb2RlLCAxODYxNjRrIHJlc2VydmVkLCAyNTAy
ayBkYXRhLCA0ODRrIGluaXQsIDMxMDQxNDBrIGhpZ2htZW0pDQpbICAgIDAuMDAwMDAwXSB2aXJ0
dWFsIGtlcm5lbCBtZW1vcnkgbGF5b3V0Og0KWyAgICAwLjAwMDAwMF0gICAgIGZpeG1hcCAgOiAw
eGZmNzFkMDAwIC0gMHhmZjdmZjAwMCAgICggOTA0IGtCKQ0KWyAgICAwLjAwMDAwMF0gICAgIHBr
bWFwICAgOiAweGZmMjAwMDAwIC0gMHhmZjQwMDAwMCAgICgyMDQ4IGtCKQ0KWyAgICAwLjAwMDAw
MF0gICAgIHZtYWxsb2MgOiAweGY3OWZlMDAwIC0gMHhmZjFmZTAwMCAgICggMTIwIE1CKQ0KWyAg
ICAwLjAwMDAwMF0gICAgIGxvd21lbSAgOiAweGMwMDAwMDAwIC0gMHhmNzFmZTAwMCAgICggODgx
IE1CKQ0KWyAgICAwLjAwMDAwMF0gICAgICAgLmluaXQgOiAweGMxNmY1MDAwIC0gMHhjMTc2ZTAw
MCAgICggNDg0IGtCKQ0KWyAgICAwLjAwMDAwMF0gICAgICAgLmRhdGEgOiAweGMxNDgyZDJlIC0g
MHhjMTZmNDY1NCAgICgyNTAyIGtCKQ0KWyAgICAwLjAwMDAwMF0gICAgICAgLnRleHQgOiAweGMx
MDAwMDAwIC0gMHhjMTQ4MmQyZSAgICg0NjE5IGtCKQ0KWyAgICAwLjAwMDAwMF0gU0xVQjogR2Vu
c2xhYnM9MTMsIEhXYWxpZ249NjQsIE9yZGVyPTAtMywgTWluT2JqZWN0cz0wLCBDUFVzPTgsIE5v
ZGVzPTENClsgICAgMC4wMDAwMDBdIEhpZXJhcmNoaWNhbCBSQ1UgaW1wbGVtZW50YXRpb24uDQpb
ICAgIDAuMDAwMDAwXSBOUl9JUlFTOjIzMDQgbnJfaXJxczoyMzA0DQpbICAgIDAuMDAwMDAwXSBB
Q1BJOiBJTlRfU1JDX09WUiAoYnVzIDAgYnVzX2lycSAwIGdsb2JhbF9pcnEgMiBoaWdoIGVkZ2Up
DQpbICAgIDAuMDAwMDAwXSBDb25zb2xlOiBjb2xvdXIgVkdBKyA4MHgyNQ0KWyAgICAwLjAwMDAw
MF0gY29uc29sZSBbaHZjMF0gZW5hYmxlZCwgYm9vdGNvbnNvbGUgZGlzYWJsZWQNDQpbICAgIDAu
MDAwMDAwXSBjb25zb2xlIFtodmMwXSBlbmFibGVkLCBib290Y29uc29sZSBkaXNhYmxlZA0KWyAg
ICAwLjAwMDAwMF0gaW5zdGFsbGluZyBYZW4gdGltZXIgZm9yIENQVSAwDQ0KWyAgICAwLjAwMDAw
MF0gRGV0ZWN0ZWQgMTk5NS4wMjMgTUh6IHByb2Nlc3Nvci4NDQpbICAgIDAuMDAwOTk5XSBDYWxp
YnJhdGluZyBkZWxheSBsb29wIChza2lwcGVkKSwgdmFsdWUgY2FsY3VsYXRlZCB1c2luZyB0aW1l
ciBmcmVxdWVuY3kuLiAzOTkwLjA0IEJvZ29NSVBTIChscGo9MTk5NTAyMykNDQpbICAgIDAuMDAw
OTk5XSBTZWN1cml0eSBGcmFtZXdvcmsgaW5pdGlhbGl6ZWQNDQpbICAgIDAuMDAwOTk5XSBTRUxp
bnV4OiAgSW5pdGlhbGl6aW5nLg0NClsgICAgMC4wMDA5OTldIE1vdW50LWNhY2hlIGhhc2ggdGFi
bGUgZW50cmllczogNTEyDQ0KWyAgICAwLjAwMTM5M10gSW5pdGlhbGl6aW5nIGNncm91cCBzdWJz
eXMgbnMNDQpbICAgIDAuMDAxOTk5XSBJbml0aWFsaXppbmcgY2dyb3VwIHN1YnN5cyBjcHVhY2N0
DQ0KWyAgICAwLjAwMTk5OV0gSW5pdGlhbGl6aW5nIGNncm91cCBzdWJzeXMgZnJlZXplcg0NClsg
ICAgMC4wMDIwMzNdIENQVTogTDEgSSBDYWNoZTogNjRLICg2NCBieXRlcy9saW5lKSwgRCBjYWNo
ZSA2NEsgKDY0IGJ5dGVzL2xpbmUpDQ0KWyAgICAwLjAwMzAwN10gQ1BVOiBMMiBDYWNoZTogNTEy
SyAoNjQgYnl0ZXMvbGluZSkNDQpbICAgIDAuMDAzOTk5XSBDUFU6IFBoeXNpY2FsIFByb2Nlc3Nv
ciBJRDogMA0NClsgICAgMC4wMDM5OTldIENQVTogUHJvY2Vzc29yIENvcmUgSUQ6IDANDQpbICAg
IDAuMDA0MDExXSBtY2U6IENQVSBzdXBwb3J0cyAyIE1DRSBiYW5rcw0NClsgICAgMC4wMDUwMjJd
IFBlcmZvcm1hbmNlIEV2ZW50czogQU1EIFBNVSBkcml2ZXIuDQ0KWyAgICAwLjAwNTk5OV0gLS0t
LS0tLS0tLS0tWyBjdXQgaGVyZSBdLS0tLS0tLS0tLS0tDQ0KWyAgICAwLjAwNTk5OV0gV0FSTklO
RzogYXQgYXJjaC94ODYveGVuL2VubGlnaHRlbi5jOjcyOSB4ZW5fYXBpY193cml0ZSsweDEyLzB4
MTQoKQ0NClsgICAgMC4wMDU5OTldIEhhcmR3YXJlIG5hbWU6IGVtcHR5DQ0KWyAgICAwLjAwNTk5
OV0gTW9kdWxlcyBsaW5rZWQgaW46DQ0KWyAgICAwLjAwNTk5OV0gUGlkOiAwLCBjb21tOiBzd2Fw
cGVyIE5vdCB0YWludGVkIDIuNi4zMi4yNSAjMQ0NClsgICAgMC4wMDU5OTldIENhbGwgVHJhY2U6
DQ0KWyAgICAwLjAwNTk5OV0gIFs8YzEwMjlmZGU+XSA/IHhlbl9hcGljX3dyaXRlKzB4MTIvMHgx
NA0NClsgICAgMC4wMDU5OTldICBbPGMxMDYzYjJkPl0gd2Fybl9zbG93cGF0aF9jb21tb24rMHg2
MC8weDkwDQ0KWyAgICAwLjAwNTk5OV0gIFs8YzEwNjNiNmE+XSB3YXJuX3Nsb3dwYXRoX251bGwr
MHhkLzB4MTANDQpbICAgIDAuMDA1OTk5XSAgWzxjMTAyOWZkZT5dIHhlbl9hcGljX3dyaXRlKzB4
MTIvMHgxNA0NClsgICAgMC4wMDU5OTldICBbPGMxMDM5YmJiPl0gcGVyZl9ldmVudHNfbGFwaWNf
aW5pdCsweDJiLzB4MmQNDQpbICAgIDAuMDA1OTk5XSAgWzxjMTZmZjg4Mz5dIGluaXRfaHdfcGVy
Zl9ldmVudHMrMHgyZTUvMHgzN2YNDQpbICAgIDAuMDA1OTk5XSAgWzxjMTZmZjJjNT5dIGlkZW50
aWZ5X2Jvb3RfY3B1KzB4MjEvMHgyMw0NClsgICAgMC4wMDU5OTldICBbPGMxNmZmNDlhPl0gY2hl
Y2tfYnVncysweGIvMHgxMGYNDQpbICAgIDAuMDA1OTk5XSAgWzxjMTBhOGIxNT5dID8gZGVsYXlh
Y2N0X2luaXQrMHg0Mi8weDQ1DQ0KWyAgICAwLjAwNTk5OV0gIFs8YzE2ZjU4NWU+XSBzdGFydF9r
ZXJuZWwrMHgzMGIvMHgzMWENDQpbICAgIDAuMDA1OTk5XSAgWzxjMTZmNTBhMj5dIGkzODZfc3Rh
cnRfa2VybmVsKzB4OTEvMHg5Ng0NClsgICAgMC4wMDU5OTldICBbPGMxNmY4ZTdkPl0geGVuX3N0
YXJ0X2tlcm5lbCsweDU1Yi8weDU2Mw0NClsgICAgMC4wMDU5OTldIC0tLVsgZW5kIHRyYWNlIDRl
YWEyYTg2YThlMmRhMjIgXS0tLQ0NClsgICAgMC4wMDU5OTldIC4uLiB2ZXJzaW9uOiAgICAgICAg
ICAgICAgICAwDQ0KWyAgICAwLjAwNTk5OV0gLi4uIGJpdCB3aWR0aDogICAgICAgICAgICAgIDQ4
DQ0KWyAgICAwLjAwNTk5OV0gLi4uIGdlbmVyaWMgcmVnaXN0ZXJzOiAgICAgIDQNDQpbICAgIDAu
MDA1OTk5XSAuLi4gdmFsdWUgbWFzazogICAgICAgICAgICAgMDAwMGZmZmZmZmZmZmZmZg0NClsg
ICAgMC4wMDU5OTldIC4uLiBtYXggcGVyaW9kOiAgICAgICAgICAgICAwMDAwN2ZmZmZmZmZmZmZm
DQ0KWyAgICAwLjAwNTk5OV0gLi4uIGZpeGVkLXB1cnBvc2UgZXZlbnRzOiAgIDANDQpbICAgIDAu
MDA1OTk5XSAuLi4gZXZlbnQgbWFzazogICAgICAgICAgICAgMDAwMDAwMDAwMDAwMDAwZg0NClsg
ICAgMC4wMDcwOTddIFNNUCBhbHRlcm5hdGl2ZXM6IHN3aXRjaGluZyB0byBVUCBjb2RlDQ0KWyAg
ICAwLjAwOTk5OF0gQUNQSTogQ29yZSByZXZpc2lvbiAyMDA5MDkwMw0NClsgICAgMC4wMjMyMzdd
IGNwdSAwIHNwaW5sb2NrIGV2ZW50IGlycSAyMzAyDQ0KWyAgICAwLjAyNDU1OV0gaW5zdGFsbGlu
ZyBYZW4gdGltZXIgZm9yIENQVSAxDQ0KWyAgICAwLjAyNTAzNl0gY3B1IDEgc3BpbmxvY2sgZXZl
bnQgaXJxIDIyOTYNDQpbICAgIDAuMDI2MDgyXSBTTVAgYWx0ZXJuYXRpdmVzOiBzd2l0Y2hpbmcg
dG8gU01QIGNvZGUNDQpbICAgIDAuMDAwOTk5XSBJbml0aWFsaXppbmcgQ1BVIzENDQpbICAgIDAu
MDAwOTk5XSBDUFU6IEwxIEkgQ2FjaGU6IDY0SyAoNjQgYnl0ZXMvbGluZSksIEQgY2FjaGUgNjRL
ICg2NCBieXRlcy9saW5lKQ0NClsgICAgMC4wMDA5OTldIENQVTogTDIgQ2FjaGU6IDUxMksgKDY0
IGJ5dGVzL2xpbmUpDQ0KWyAgICAwLjAwMDk5OV0gQ1BVOiBQaHlzaWNhbCBQcm9jZXNzb3IgSUQ6
IDANDQpbICAgIDAuMDAwOTk5XSBDUFU6IFByb2Nlc3NvciBDb3JlIElEOiAxDQ0KWyAgICAwLjAy
ODU3NV0gaW5zdGFsbGluZyBYZW4gdGltZXIgZm9yIENQVSAyDQ0KWyAgICAwLjAyOTAyOF0gY3B1
IDIgc3BpbmxvY2sgZXZlbnQgaXJxIDIyOTANDQpbICAgIDAuMDAwOTk5XSBJbml0aWFsaXppbmcg
Q1BVIzINDQpbICAgIDAuMDAwOTk5XSBDUFU6IEwxIEkgQ2FjaGU6IDY0SyAoNjQgYnl0ZXMvbGlu
ZSksIEQgY2FjaGUgNjRLICg2NCBieXRlcy9saW5lKQ0NClsgICAgMC4wMDA5OTldIENQVTogTDIg
Q2FjaGU6IDUxMksgKDY0IGJ5dGVzL2xpbmUpDQ0KWyAgICAwLjAwMDk5OV0gQ1BVOiBQaHlzaWNh
bCBQcm9jZXNzb3IgSUQ6IDANDQpbICAgIDAuMDAwOTk5XSBDUFU6IFByb2Nlc3NvciBDb3JlIElE
OiAyDQ0KWyAgICAwLjAzMDY4M10gaW5zdGFsbGluZyBYZW4gdGltZXIgZm9yIENQVSAzDQ0KWyAg
ICAwLjAzMTA0OV0gY3B1IDMgc3BpbmxvY2sgZXZlbnQgaXJxIDIyODQNDQpbICAgIDAuMDAwOTk5
XSBJbml0aWFsaXppbmcgQ1BVIzMNDQpbICAgIDAuMDAwOTk5XSBDUFU6IEwxIEkgQ2FjaGU6IDY0
SyAoNjQgYnl0ZXMvbGluZSksIEQgY2FjaGUgNjRLICg2NCBieXRlcy9saW5lKQ0NClsgICAgMC4w
MDA5OTldIENQVTogTDIgQ2FjaGU6IDUxMksgKDY0IGJ5dGVzL2xpbmUpDQ0KWyAgICAwLjAwMDk5
OV0gQ1BVOiBQaHlzaWNhbCBQcm9jZXNzb3IgSUQ6IDANDQpbICAgIDAuMDAwOTk5XSBDUFU6IFBy
b2Nlc3NvciBDb3JlIElEOiAzDQ0KWyAgICAwLjAzMjY4OF0gaW5zdGFsbGluZyBYZW4gdGltZXIg
Zm9yIENQVSA0DQ0KWyAgICAwLjAzMzAyOV0gY3B1IDQgc3BpbmxvY2sgZXZlbnQgaXJxIDIyNzgN
DQpbICAgIDAuMDAwOTk5XSBJbml0aWFsaXppbmcgQ1BVIzQNDQpbICAgIDAuMDAwOTk5XSBDUFU6
IEwxIEkgQ2FjaGU6IDY0SyAoNjQgYnl0ZXMvbGluZSksIEQgY2FjaGUgNjRLICg2NCBieXRlcy9s
aW5lKQ0NClsgICAgMC4wMDA5OTldIENQVTogTDIgQ2FjaGU6IDUxMksgKDY0IGJ5dGVzL2xpbmUp
DQ0KWyAgICAwLjAwMDk5OV0gQ1BVOiBQaHlzaWNhbCBQcm9jZXNzb3IgSUQ6IDENDQpbICAgIDAu
MDAwOTk5XSBDUFU6IFByb2Nlc3NvciBDb3JlIElEOiAwDQ0KWyAgICAwLjAzNDc5Ml0gaW5zdGFs
bGluZyBYZW4gdGltZXIgZm9yIENQVSA1DQ0KWyAgICAwLjAzNTA0OF0gY3B1IDUgc3BpbmxvY2sg
ZXZlbnQgaXJxIDIyNzINDQpbICAgIDAuMDAwOTk5XSBJbml0aWFsaXppbmcgQ1BVIzUNDQpbICAg
IDAuMDAwOTk5XSBDUFU6IEwxIEkgQ2FjaGU6IDY0SyAoNjQgYnl0ZXMvbGluZSksIEQgY2FjaGUg
NjRLICg2NCBieXRlcy9saW5lKQ0NClsgICAgMC4wMDA5OTldIENQVTogTDIgQ2FjaGU6IDUxMksg
KDY0IGJ5dGVzL2xpbmUpDQ0KWyAgICAwLjAwMDk5OV0gQ1BVOiBQaHlzaWNhbCBQcm9jZXNzb3Ig
SUQ6IDENDQpbICAgIDAuMDAwOTk5XSBDUFU6IFByb2Nlc3NvciBDb3JlIElEOiAxDQ0KWyAgICAw
LjAzNjc5OV0gaW5zdGFsbGluZyBYZW4gdGltZXIgZm9yIENQVSA2DQ0KWyAgICAwLjAzNzA0OV0g
Y3B1IDYgc3BpbmxvY2sgZXZlbnQgaXJxIDIyNjYNDQpbICAgIDAuMDAwOTk5XSBJbml0aWFsaXpp
bmcgQ1BVIzYNDQpbICAgIDAuMDAwOTk5XSBDUFU6IEwxIEkgQ2FjaGU6IDY0SyAoNjQgYnl0ZXMv
bGluZSksIEQgY2FjaGUgNjRLICg2NCBieXRlcy9saW5lKQ0NClsgICAgMC4wMDA5OTldIENQVTog
TDIgQ2FjaGU6IDUxMksgKDY0IGJ5dGVzL2xpbmUpDQ0KWyAgICAwLjAwMDk5OV0gQ1BVOiBQaHlz
aWNhbCBQcm9jZXNzb3IgSUQ6IDENDQpbICAgIDAuMDAwOTk5XSBDUFU6IFByb2Nlc3NvciBDb3Jl
IElEOiAyDQ0KWyAgICAwLjAzODgyMl0gaW5zdGFsbGluZyBYZW4gdGltZXIgZm9yIENQVSA3DQ0K
WyAgICAwLjAzOTAzMl0gY3B1IDcgc3BpbmxvY2sgZXZlbnQgaXJxIDIyNjANDQpbICAgIDAuMDAw
OTk5XSBJbml0aWFsaXppbmcgQ1BVIzcNDQpbICAgIDAuMDAwOTk5XSBDUFU6IEwxIEkgQ2FjaGU6
IDY0SyAoNjQgYnl0ZXMvbGluZSksIEQgY2FjaGUgNjRLICg2NCBieXRlcy9saW5lKQ0NClsgICAg
MC4wMDA5OTldIENQVTogTDIgQ2FjaGU6IDUxMksgKDY0IGJ5dGVzL2xpbmUpDQ0KWyAgICAwLjAw
MDk5OV0gQ1BVOiBQaHlzaWNhbCBQcm9jZXNzb3IgSUQ6IDENDQpbICAgIDAuMDAwOTk5XSBDUFU6
IFByb2Nlc3NvciBDb3JlIElEOiAzDQ0KWyAgICAwLjA0MDY2Ml0gQnJvdWdodCB1cCA4IENQVXMN
DQpbICAgIDAuMDQ0MjIxXSBHcmFudCB0YWJsZSBpbml0aWFsaXplZA0NClsgICAgMC4wNDQ5OTNd
IFRpbWU6IDEwOjU3OjQxICBEYXRlOiAwOC8xNy8xMg0NClsgICAgMC4wNDUxMzhdIE5FVDogUmVn
aXN0ZXJlZCBwcm90b2NvbCBmYW1pbHkgMTYNDQpbICAgIDAuMDQ3MTg1XSBBQ1BJOiBidXMgdHlw
ZSBwY2kgcmVnaXN0ZXJlZA0NClsgICAgMC4wNDkzNzFdIFBDSTogQklPUyBCVUcgIzMyWzAwMDAw
MTRhXSBmb3VuZA0NClsgICAgMC4wNDk5OTJdIFBDSTogVXNpbmcgY29uZmlndXJhdGlvbiB0eXBl
IDEgZm9yIGJhc2UgYWNjZXNzDQ0KWyAgICAwLjA0OTk5Ml0gUENJOiBVc2luZyBjb25maWd1cmF0
aW9uIHR5cGUgMSBmb3IgZXh0ZW5kZWQgYWNjZXNzDQ0KWyAgICAwLjA4MzA5MF0gYmlvOiBjcmVh
dGUgc2xhYiA8YmlvLTA+IGF0IDANDQpbICAgIDAuMDg4Mjg5XSBFUlJPUjogVW5hYmxlIHRvIGxv
Y2F0ZSBJT0FQSUMgZm9yIEdTSSA5DQ0KWyAgICAwLjA5OTcyMl0gQUNQSTogSW50ZXJwcmV0ZXIg
ZW5hYmxlZA0NClsgICAgMC4wOTk5ODRdIEFDUEk6IChzdXBwb3J0cyBTMCBTMSBTNCBTNSkNDQpb
ICAgIDAuMDk5OTg0XSBBQ1BJOiBVc2luZyBJT0FQSUMgZm9yIGludGVycnVwdCByb3V0aW5nDQ0K
WyAgICAwLjEzNjk3OV0gQUNQSTogTm8gZG9jayBkZXZpY2VzIGZvdW5kLg0NClsgICAgMC4xMzc5
NzhdIEFDUEk6IFBDSSBSb290IEJyaWRnZSBbUENJMF0gKDAwMDA6MDApDQ0KWyAgICAwLjEzODI1
M10gcGNpIDAwMDA6MDA6MDEuMDogRW5hYmxpbmcgSFQgTVNJIE1hcHBpbmcNDQpbICAgIDAuMTM4
OTc4XSBwY2kgMDAwMDowMDowMy4yOiBQTUUjIHN1cHBvcnRlZCBmcm9tIEQwIEQxIEQyIEQzaG90
DQ0KWyAgICAwLjEzODk3OF0gcGNpIDAwMDA6MDA6MDMuMjogUE1FIyBkaXNhYmxlZA0NClsgICAg
MC4xMzg5NzhdIHBjaSAwMDAwOjAwOjA2LjA6IFBNRSMgc3VwcG9ydGVkIGZyb20gRDAgRDNob3Qg
RDNjb2xkDQ0KWyAgICAwLjEzODk3OF0gcGNpIDAwMDA6MDA6MDYuMDogUE1FIyBkaXNhYmxlZA0N
ClsgICAgMC4xMzg5NzhdIHBjaSAwMDAwOjAwOjA3LjA6IFBNRSMgc3VwcG9ydGVkIGZyb20gRDAg
RDNob3QgRDNjb2xkDQ0KWyAgICAwLjEzODk3OF0gcGNpIDAwMDA6MDA6MDcuMDogUE1FIyBkaXNh
YmxlZA0NClsgICAgMC4xMzg5NzhdIHBjaSAwMDAwOjAwOjA4LjA6IFBNRSMgc3VwcG9ydGVkIGZy
b20gRDAgRDNob3QgRDNjb2xkDQ0KWyAgICAwLjEzODk3OF0gcGNpIDAwMDA6MDA6MDguMDogUE1F
IyBkaXNhYmxlZA0NClsgICAgMC4xMzg5NzhdIHBjaSAwMDAwOjAwOjA5LjA6IFBNRSMgc3VwcG9y
dGVkIGZyb20gRDAgRDNob3QgRDNjb2xkDQ0KWyAgICAwLjEzODk3OF0gcGNpIDAwMDA6MDA6MDku
MDogUE1FIyBkaXNhYmxlZA0NClsgICAgMC4xMzg5NzhdIHBjaSAwMDAwOjAwOjBhLjA6IFBNRSMg
c3VwcG9ydGVkIGZyb20gRDAgRDNob3QgRDNjb2xkDQ0KWyAgICAwLjEzODk3OF0gcGNpIDAwMDA6
MDA6MGEuMDogUE1FIyBkaXNhYmxlZA0NClsgICAgMC4xMzk3NTBdIHBjaSAwMDAwOjA3OjAwLjA6
IFBNRSMgc3VwcG9ydGVkIGZyb20gRDAgRDNob3QgRDNjb2xkDQ0KWyAgICAwLjEzOTk3OF0gcGNp
IDAwMDA6MDc6MDAuMDogUE1FIyBkaXNhYmxlZA0NClsgICAgMC4xMzk5NzhdIHBjaSAwMDAwOjA4
OjA0LjA6IFBNRSMgc3VwcG9ydGVkIGZyb20gRDNob3QgRDNjb2xkDQ0KWyAgICAwLjEzOTk3OF0g
cGNpIDAwMDA6MDg6MDQuMDogUE1FIyBkaXNhYmxlZA0NClsgICAgMC4xMzk5NzhdIHBjaSAwMDAw
OjA4OjA0LjE6IFBNRSMgc3VwcG9ydGVkIGZyb20gRDNob3QgRDNjb2xkDQ0KWyAgICAwLjEzOTk3
OF0gcGNpIDAwMDA6MDg6MDQuMTogUE1FIyBkaXNhYmxlZA0NCihYRU4pIFBDSSBhZGQgZGV2aWNl
IDAwMDA6MDA6MDEuMA0KKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMDowMi4wDQooWEVOKSBQ
Q0kgYWRkIGRldmljZSAwMDAwOjAwOjAyLjENCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6
MDIuMg0KKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMDowMy4wDQooWEVOKSBQQ0kgYWRkIGRl
dmljZSAwMDAwOjAwOjAzLjENCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6MDMuMg0KKFhF
TikgUENJIGFkZCBkZXZpY2UgMDAwMDowMDowNC4wDQooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAw
OjAwOjA2LjANCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6MDcuMA0KKFhFTikgUENJIGFk
ZCBkZXZpY2UgMDAwMDowMDowOC4wDQooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjA5LjAN
CihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6MGEuMA0KKFhFTikgUENJIGFkZCBkZXZpY2Ug
MDAwMDowMDoxOC4wDQooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjE4LjENCihYRU4pIFBD
SSBhZGQgZGV2aWNlIDAwMDA6MDA6MTguMg0KKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMDox
OC4zDQooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjE4LjQNCihYRU4pIFBDSSBhZGQgZGV2
aWNlIDAwMDA6MDA6MTkuMA0KKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMDoxOS4xDQooWEVO
KSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjE5LjINCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6
MDA6MTkuMw0KKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMDoxOS40DQooWEVOKSBQQ0kgYWRk
IGRldmljZSAwMDAwOjAxOjBkLjANCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDE6MGUuMA0K
KFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMTowZS4xDQooWEVOKSBQQ0kgYWRkIGRldmljZSAw
MDAwOjA2OjAwLjANCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDc6MDAuMA0KKFhFTikgUENJ
IGFkZCBkZXZpY2UgMDAwMDowODowNC4wDQooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjA4OjA0
LjENClsgICAgMC4xODYyMjhdIEFDUEk6IFBDSSBJbnRlcnJ1cHQgTGluayBbTE5LVV0gKElSUXMg
KjEwIDExKQ0NClsgICAgMC4xODcwODldIEFDUEk6IFBDSSBJbnRlcnJ1cHQgTGluayBbTE5LV10g
KElSUXMgMTAgMTEpICowLCBkaXNhYmxlZC4NDQpbICAgIDAuMTg4MDk3XSBBQ1BJOiBQQ0kgSW50
ZXJydXB0IExpbmsgW0xOS1NdIChJUlFzIDUgKjcgMTEpDQ0KWyAgICAwLjE4OTA4N10gQUNQSTog
UENJIEludGVycnVwdCBMaW5rIFtMTjAwXSAoSVJRcyAzIDQgNSA3IDExIDEyIDE0IDE1KSAqMCwg
ZGlzYWJsZWQuDQ0KWyAgICAwLjE5MDA3Nl0gQUNQSTogUENJIEludGVycnVwdCBMaW5rIFtMTjAx
XSAoSVJRcyAzIDQgNSA3IDExIDEyIDE0IDE1KSAqMCwgZGlzYWJsZWQuDQ0KWyAgICAwLjE5MTA3
NV0gQUNQSTogUENJIEludGVycnVwdCBMaW5rIFtMTjAyXSAoSVJRcyAzIDQgNSA3IDExIDEyIDE0
IDE1KSAqMCwgZGlzYWJsZWQuDQ0KWyAgICAwLjE5MjA3NF0gQUNQSTogUENJIEludGVycnVwdCBM
aW5rIFtMTjAzXSAoSVJRcyAzIDQgNSA3IDExIDEyIDE0IDE1KSAqMCwgZGlzYWJsZWQuDQ0KWyAg
ICAwLjE5MzA4MV0gQUNQSTogUENJIEludGVycnVwdCBMaW5rIFtMTjA0XSAoSVJRcyAzIDQgNSA3
IDExIDEyIDE0IDE1KSAqMCwgZGlzYWJsZWQuDQ0KWyAgICAwLjE5NDA4M10gQUNQSTogUENJIElu
dGVycnVwdCBMaW5rIFtMTjA1XSAoSVJRcyAzIDQgNSA3IDExIDEyIDE0IDE1KSAqMCwgZGlzYWJs
ZWQuDQ0KWyAgICAwLjE5NTA2NV0gQUNQSTogUENJIEludGVycnVwdCBMaW5rIFtMTjA2XSAoSVJR
cyAzIDQgNSA3IDExIDEyIDE0IDE1KSAqMCwgZGlzYWJsZWQuDQ0KWyAgICAwLjE5NjA5NF0gQUNQ
STogUENJIEludGVycnVwdCBMaW5rIFtMTjA3XSAoSVJRcyAzIDQgNSA3IDExIDEyIDE0IDE1KSAq
MCwgZGlzYWJsZWQuDQ0KWyAgICAwLjE5NzA3NV0gQUNQSTogUENJIEludGVycnVwdCBMaW5rIFtM
TjA4XSAoSVJRcyAzIDQgNSA3IDExIDEyIDE0IDE1KSAqMCwgZGlzYWJsZWQuDQ0KWyAgICAwLjE5
ODA2Ml0gQUNQSTogUENJIEludGVycnVwdCBMaW5rIFtMTjA5XSAoSVJRcyAzIDQgNSA3IDExIDEy
IDE0IDE1KSAqMCwgZGlzYWJsZWQuDQ0KWyAgICAwLjE5OTA4Ml0gQUNQSTogUENJIEludGVycnVw
dCBMaW5rIFtMTjBBXSAoSVJRcyAzIDQgNSA3IDExIDEyIDE0IDE1KSAqMCwgZGlzYWJsZWQuDQ0K
WyAgICAwLjIwMDA3OF0gQUNQSTogUENJIEludGVycnVwdCBMaW5rIFtMTjBCXSAoSVJRcyAzIDQg
NSA3IDExIDEyIDE0IDE1KSAqMCwgZGlzYWJsZWQuDQ0KWyAgICAwLjIwMTA3OF0gQUNQSTogUENJ
IEludGVycnVwdCBMaW5rIFtMTjBDXSAoSVJRcyAzIDQgNSA3IDExIDEyIDE0IDE1KSAqMCwgZGlz
YWJsZWQuDQ0KWyAgICAwLjIwMjA3N10gQUNQSTogUENJIEludGVycnVwdCBMaW5rIFtMTjBEXSAo
SVJRcyAzIDQgNSA3IDExIDEyIDE0IDE1KSAqMCwgZGlzYWJsZWQuDQ0KWyAgICAwLjIwMzA4NV0g
QUNQSTogUENJIEludGVycnVwdCBMaW5rIFtMTjBFXSAoSVJRcyAzIDQgNSA3IDExIDEyIDE0IDE1
KSAqMCwgZGlzYWJsZWQuDQ0KWyAgICAwLjIwNDA4N10gQUNQSTogUENJIEludGVycnVwdCBMaW5r
IFtMTjBGXSAoSVJRcyAzIDQgNSA3IDExIDEyIDE0IDE1KSAqMCwgZGlzYWJsZWQuDQ0KWyAgICAw
LjIwNTE0M10gQUNQSTogUENJIEludGVycnVwdCBMaW5rIFtMTjEwXSAoSVJRcyAzIDQgNSA3ICox
MSAxMiAxNCAxNSkNDQpbICAgIDAuMjA2MDc1XSBBQ1BJOiBQQ0kgSW50ZXJydXB0IExpbmsgW0xO
MTFdIChJUlFzIDMgNCA1IDcgMTEgMTIgMTQgMTUpICowLCBkaXNhYmxlZC4NDQpbICAgIDAuMjA3
MDg0XSBBQ1BJOiBQQ0kgSW50ZXJydXB0IExpbmsgW0xOMTJdIChJUlFzIDMgNCA1IDcgMTEgMTIg
MTQgMTUpICowLCBkaXNhYmxlZC4NDQpbICAgIDAuMjA4MTQ3XSBBQ1BJOiBQQ0kgSW50ZXJydXB0
IExpbmsgW0xOMTNdIChJUlFzIDMgNCAqNSA3IDExIDEyIDE0IDE1KQ0NClsgICAgMC4yMDkxNDRd
IEFDUEk6IFBDSSBJbnRlcnJ1cHQgTGluayBbTE4xNF0gKElSUXMgMyA0IDUgNyAqMTEgMTIgMTQg
MTUpDQ0KWyAgICAwLjIxMDA3MF0gQUNQSTogUENJIEludGVycnVwdCBMaW5rIFtMTjE1XSAoSVJR
cyAzIDQgNSA3IDExIDEyIDE0IDE1KSAqMCwgZGlzYWJsZWQuDQ0KWyAgICAwLjIxMTA4MF0gQUNQ
STogUENJIEludGVycnVwdCBMaW5rIFtMTjE2XSAoSVJRcyAzIDQgNSA3IDExIDEyIDE0IDE1KSAq
MCwgZGlzYWJsZWQuDQ0KWyAgICAwLjIxMjA4N10gQUNQSTogUENJIEludGVycnVwdCBMaW5rIFtM
TjE3XSAoSVJRcyAzIDQgNSA3IDExIDEyIDE0IDE1KSAqMCwgZGlzYWJsZWQuDQ0KWyAgICAwLjIx
MzA4OF0gQUNQSTogUENJIEludGVycnVwdCBMaW5rIFtMTjE4XSAoSVJRcyAzIDQgNSA3IDExIDEy
IDE0IDE1KSAqMCwgZGlzYWJsZWQuDQ0KWyAgICAwLjIxNDA5Ml0gQUNQSTogUENJIEludGVycnVw
dCBMaW5rIFtMTjE5XSAoSVJRcyAzIDQgNSA3IDExIDEyIDE0IDE1KSAqMCwgZGlzYWJsZWQuDQ0K
WyAgICAwLjIxNTAyN10gQUNQSTogUENJIEludGVycnVwdCBMaW5rIFtMTjFBXSAoSVJRcyAzIDQg
NSA3IDExIDEyIDE0IDE1KSAqMCwgZGlzYWJsZWQuDQ0KWyAgICAwLjIxNjE0NF0gQUNQSTogUENJ
IEludGVycnVwdCBMaW5rIFtMTjFCXSAoSVJRcyAzIDQgKjUgNyAxMSAxMiAxNCAxNSkNDQpbICAg
IDAuMjE3MDc3XSBBQ1BJOiBQQ0kgSW50ZXJydXB0IExpbmsgW0xOMUNdIChJUlFzIDMgNCA1IDcg
MTEgMTIgMTQgMTUpICowLCBkaXNhYmxlZC4NDQpbICAgIDAuMjE4MDYwXSBBQ1BJOiBQQ0kgSW50
ZXJydXB0IExpbmsgW0xOMURdIChJUlFzIDMgNCA1IDcgMTEgMTIgMTQgMTUpICowLCBkaXNhYmxl
ZC4NDQpbICAgIDAuMjE5MDg1XSBBQ1BJOiBQQ0kgSW50ZXJydXB0IExpbmsgW0xOMUVdIChJUlFz
IDMgNCA1IDcgMTEgMTIgMTQgMTUpICowLCBkaXNhYmxlZC4NDQpbICAgIDAuMjIwMDg1XSBBQ1BJ
OiBQQ0kgSW50ZXJydXB0IExpbmsgW0xOMUZdIChJUlFzIDMgNCA1IDcgMTEgMTIgMTQgMTUpICow
LCBkaXNhYmxlZC4NDQpbICAgIDAuMjIxMDI5XSB4ZW5fYmFsbG9vbjogSW5pdGlhbGlzaW5nIGJh
bGxvb24gZHJpdmVyIHdpdGggcGFnZSBvcmRlciAwLg0NClsgICAgMC4yMjIwMjhdIGxhc3RfcGZu
ID0gMHgxNTRhODEgbWF4X2FyY2hfcGZuID0gMHgxMDAwMDAwDQ0KWyAgICAwLjIyNTU5MV0gdmdh
YXJiOiBkZXZpY2UgYWRkZWQ6IFBDSTowMDAwOjAwOjA0LjAsZGVjb2Rlcz1pbyttZW0sb3ducz1p
byttZW0sbG9ja3M9bm9uZQ0NClsgICAgMC4yMjU5NjVdIHZnYWFyYjogbG9hZGVkDQ0KWyAgICAw
LjIyNjE3OV0gU0NTSSBzdWJzeXN0ZW0gaW5pdGlhbGl6ZWQNDQpbICAgIDAuMjI4MDYwXSB1c2Jj
b3JlOiByZWdpc3RlcmVkIG5ldyBpbnRlcmZhY2UgZHJpdmVyIHVzYmZzDQ0KWyAgICAwLjIyODk5
N10gdXNiY29yZTogcmVnaXN0ZXJlZCBuZXcgaW50ZXJmYWNlIGRyaXZlciBodWINDQpbICAgIDAu
MjMwMDM0XSB1c2Jjb3JlOiByZWdpc3RlcmVkIG5ldyBkZXZpY2UgZHJpdmVyIHVzYg0NClsgICAg
MC4yMzEwMzldIFBDSTogVXNpbmcgQUNQSSBmb3IgSVJRIHJvdXRpbmcNDQpbICAgIDAuMjMzMDg1
XSBjZmc4MDIxMTogVXNpbmcgc3RhdGljIHJlZ3VsYXRvcnkgZG9tYWluIGluZm8NDQpbICAgIDAu
MjMzOTY0XSBjZmc4MDIxMTogUmVndWxhdG9yeSBkb21haW46IFVTDQ0KWyAgICAwLjIzMzk2NF0g
CShzdGFydF9mcmVxIC0gZW5kX2ZyZXEgQCBiYW5kd2lkdGgpLCAobWF4X2FudGVubmFfZ2Fpbiwg
bWF4X2VpcnApDQ0KWyAgICAwLjIzMzk2NF0gCSgyNDAyMDAwIEtIeiAtIDI0NzIwMDAgS0h6IEAg
NDAwMDAgS0h6KSwgKDYwMCBtQmksIDI3MDAgbUJtKQ0NClsgICAgMC4yMzM5NjRdIAkoNTE3MDAw
MCBLSHogLSA1MTkwMDAwIEtIeiBAIDQwMDAwIEtIeiksICg2MDAgbUJpLCAyMzAwIG1CbSkNDQpb
ICAgIDAuMjMzOTY0XSAJKDUxOTAwMDAgS0h6IC0gNTIxMDAwMCBLSHogQCA0MDAwMCBLSHopLCAo
NjAwIG1CaSwgMjMwMCBtQm0pDQ0KWyAgICAwLjIzMzk2NF0gCSg1MjEwMDAwIEtIeiAtIDUyMzAw
MDAgS0h6IEAgNDAwMDAgS0h6KSwgKDYwMCBtQmksIDIzMDAgbUJtKQ0NClsgICAgMC4yMzM5NjRd
IAkoNTIzMDAwMCBLSHogLSA1MzMwMDAwIEtIeiBAIDQwMDAwIEtIeiksICg2MDAgbUJpLCAyMzAw
IG1CbSkNDQpbICAgIDAuMjMzOTY0XSAJKDU3MzUwMDAgS0h6IC0gNTgzNTAwMCBLSHogQCA0MDAw
MCBLSHopLCAoNjAwIG1CaSwgMzAwMCBtQm0pDQ0KWyAgICAwLjIzMzk3Nl0gY2ZnODAyMTE6IENh
bGxpbmcgQ1JEQSBmb3IgY291bnRyeTogVVMNDQpbICAgIDAuMjM0OTk3XSBOZXRMYWJlbDogSW5p
dGlhbGl6aW5nDQ0KWyAgICAwLjIzNTk2NF0gTmV0TGFiZWw6ICBkb21haW4gaGFzaCBzaXplID0g
MTI4DQ0KWyAgICAwLjIzNTk2NF0gTmV0TGFiZWw6ICBwcm90b2NvbHMgPSBVTkxBQkVMRUQgQ0lQ
U092NA0NClsgICAgMC4yMzU5NjRdIE5ldExhYmVsOiAgdW5sYWJlbGVkIHRyYWZmaWMgYWxsb3dl
ZCBieSBkZWZhdWx0DQ0KWyAgICAwLjIzNjk4OF0gU3dpdGNoaW5nIHRvIGNsb2Nrc291cmNlIHhl
bg0NClsgICAgMC4yNDgxNzNdIHBucDogUG5QIEFDUEkgaW5pdA0NClsgICAgMC4yNDg0OTVdIEFD
UEk6IGJ1cyB0eXBlIHBucCByZWdpc3RlcmVkDQ0KWyAgICAwLjI1OTY0N10geGVuX2FsbG9jYXRl
X3BpcnE6IHJldHVybmluZyBpcnEgOCBmb3IgZ3NpIDgNDQpbICAgIDAuMjY4NjMxXSB4ZW5fYWxs
b2NhdGVfcGlycTogcmV0dXJuaW5nIGlycSAxIGZvciBnc2kgMQ0NClsgICAgMC4yNzU2ODRdIHhl
bl9hbGxvY2F0ZV9waXJxOiByZXR1cm5pbmcgaXJxIDEzIGZvciBnc2kgMTMNDQpbICAgIDAuMjgy
MjUwXSB4ZW5fYWxsb2NhdGVfcGlycTogcmV0dXJuaW5nIGlycSAxMiBmb3IgZ3NpIDEyDQ0KWyAg
ICAwLjI5MDA0MF0geGVuX2FsbG9jYXRlX3BpcnE6IHJldHVybmluZyBpcnEgNCBmb3IgZ3NpIDQN
DQpbICAgIDAuMjk1MzYyXSBBbHJlYWR5IHNldHVwIHRoZSBHU0kgOjQNDQpbICAgIDAuMjk5OTQx
XSB4ZW5fYWxsb2NhdGVfcGlycTogcmV0dXJuaW5nIGlycSAzIGZvciBnc2kgMw0NClsgICAgMC4z
MDYwODddIHhlbl9hbGxvY2F0ZV9waXJxOiByZXR1cm5pbmcgaXJxIDYgZm9yIGdzaSA2DQ0KWyAg
ICAwLjMxMjY0M10gcG5wOiBQblAgQUNQSTogZm91bmQgMTYgZGV2aWNlcw0NClsgICAgMC4zMTM0
NThdIEFDUEk6IEFDUEkgYnVzIHR5cGUgcG5wIHVucmVnaXN0ZXJlZA0NClsgICAgMC4zMTM0NThd
IHN5c3RlbSAwMDowMDogaW9tZW0gcmFuZ2UgMHhmZWQwODAwMC0weGZlZDA4MDA3IGhhcyBiZWVu
IHJlc2VydmVkDQ0KWyAgICAwLjMxMzQ1OF0gc3lzdGVtIDAwOjAxOiBpb21lbSByYW5nZSAweGUw
MDAwMDAwLTB4ZWZmZmZmZmYgaGFzIGJlZW4gcmVzZXJ2ZWQNDQpbICAgIDAuMzEzNDU4XSBzeXN0
ZW0gMDA6MGI6IGlvcG9ydCByYW5nZSAweDQwYi0weDQwYiBoYXMgYmVlbiByZXNlcnZlZA0NClsg
ICAgMC4zMTM0NThdIHN5c3RlbSAwMDowYjogaW9wb3J0IHJhbmdlIDB4NGQwLTB4NGQxIGhhcyBi
ZWVuIHJlc2VydmVkDQ0KWyAgICAwLjMxMzQ1OF0gc3lzdGVtIDAwOjBiOiBpb3BvcnQgcmFuZ2Ug
MHg0ZDYtMHg0ZDYgaGFzIGJlZW4gcmVzZXJ2ZWQNDQpbICAgIDAuMzEzNDU4XSBzeXN0ZW0gMDA6
MGI6IGlvcG9ydCByYW5nZSAweDUwMC0weDU2MCBoYXMgYmVlbiByZXNlcnZlZA0NClsgICAgMC4z
MTM0NThdIHN5c3RlbSAwMDowYjogaW9wb3J0IHJhbmdlIDB4NTU4LTB4NTViIGhhcyBiZWVuIHJl
c2VydmVkDQ0KWyAgICAwLjMxMzQ1OF0gc3lzdGVtIDAwOjBiOiBpb3BvcnQgcmFuZ2UgMHg1ODAt
MHg1OGYgaGFzIGJlZW4gcmVzZXJ2ZWQNDQpbICAgIDAuMzEzNDU4XSBzeXN0ZW0gMDA6MGI6IGlv
cG9ydCByYW5nZSAweDU5MC0weDU5MyBoYXMgYmVlbiByZXNlcnZlZA0NClsgICAgMC4zMTM0NThd
IHN5c3RlbSAwMDowYjogaW9wb3J0IHJhbmdlIDB4NjAwLTB4NjFmIGhhcyBiZWVuIHJlc2VydmVk
DQ0KWyAgICAwLjMxMzQ1OF0gc3lzdGVtIDAwOjBiOiBpb3BvcnQgcmFuZ2UgMHg2MjAtMHg2MjMg
aGFzIGJlZW4gcmVzZXJ2ZWQNDQpbICAgIDAuMzEzNDU4XSBzeXN0ZW0gMDA6MGI6IGlvcG9ydCBy
YW5nZSAweDcwMC0weDcwMyBoYXMgYmVlbiByZXNlcnZlZA0NClsgICAgMC4zMTM0NThdIHN5c3Rl
bSAwMDowYjogaW9wb3J0IHJhbmdlIDB4YzAwLTB4YzAxIGhhcyBiZWVuIHJlc2VydmVkDQ0KWyAg
ICAwLjMxMzQ1OF0gc3lzdGVtIDAwOjBiOiBpb3BvcnQgcmFuZ2UgMHhjMDYtMHhjMDggaGFzIGJl
ZW4gcmVzZXJ2ZWQNDQpbICAgIDAuMzEzNDU4XSBzeXN0ZW0gMDA6MGI6IGlvcG9ydCByYW5nZSAw
eGMxNC0weGMxNCBoYXMgYmVlbiByZXNlcnZlZA0NClsgICAgMC4zMTM0NThdIHN5c3RlbSAwMDow
YjogaW9wb3J0IHJhbmdlIDB4YzQ5LTB4YzRhIGhhcyBiZWVuIHJlc2VydmVkDQ0KWyAgICAwLjMx
MzQ1OF0gc3lzdGVtIDAwOjBiOiBpb3BvcnQgcmFuZ2UgMHhjNTAtMHhjNTMgaGFzIGJlZW4gcmVz
ZXJ2ZWQNDQpbICAgIDAuMzEzNDU4XSBzeXN0ZW0gMDA6MGI6IGlvcG9ydCByYW5nZSAweGM2Yy0w
eGM2YyBoYXMgYmVlbiByZXNlcnZlZA0NClsgICAgMC4zMTM0NThdIHN5c3RlbSAwMDowYjogaW9w
b3J0IHJhbmdlIDB4YzZmLTB4YzZmIGhhcyBiZWVuIHJlc2VydmVkDQ0KWyAgICAwLjMxMzQ1OF0g
c3lzdGVtIDAwOjBiOiBpb3BvcnQgcmFuZ2UgMHhjZDYtMHhjZDcgaGFzIGJlZW4gcmVzZXJ2ZWQN
DQpbICAgIDAuMzEzNDU4XSBzeXN0ZW0gMDA6MGI6IGlvcG9ydCByYW5nZSAweGNmOS0weGNmOSBj
b3VsZCBub3QgYmUgcmVzZXJ2ZWQNDQpbICAgIDAuMzEzNDU4XSBzeXN0ZW0gMDA6MGI6IGlvcG9y
dCByYW5nZSAweGY1MC0weGY1OCBoYXMgYmVlbiByZXNlcnZlZA0NClsgICAgMC40NzIxOTFdIFBN
LVRpbWVyIGZhaWxlZCBjb25zaXN0ZW5jeSBjaGVjayAgKDB4MHhmZmZmZmYpIC0gYWJvcnRpbmcu
DQ0KWyAgICAwLjQ3MjQ5OV0gcGNpIDAwMDA6MDE6MGQuMDogUENJIGJyaWRnZSwgc2Vjb25kYXJ5
IGJ1cyAwMDAwOjAyDQ0KWyAgICAwLjQ3MjQ5OV0gcGNpIDAwMDA6MDE6MGQuMDogICBJTyB3aW5k
b3c6IGRpc2FibGVkDQ0KWyAgICAwLjQ3MjQ5OV0gcGNpIDAwMDA6MDE6MGQuMDogICBNRU0gd2lu
ZG93OiBkaXNhYmxlZA0NClsgICAgMC40NzI0OTldIHBjaSAwMDAwOjAxOjBkLjA6ICAgUFJFRkVU
Q0ggd2luZG93OiBkaXNhYmxlZA0NClsgICAgMC40NzI0OTldIHBjaSAwMDAwOjAwOjAxLjA6IFBD
SSBicmlkZ2UsIHNlY29uZGFyeSBidXMgMDAwMDowMQ0NClsgICAgMC40NzI0OTldIHBjaSAwMDAw
OjAwOjAxLjA6ICAgSU8gd2luZG93OiAweDYwMDAtMHg2ZmZmDQ0KWyAgICAwLjQ3MjQ5OV0gcGNp
IDAwMDA6MDA6MDEuMDogICBNRU0gd2luZG93OiAweGQ4MzAwMDAwLTB4ZDgzZmZmZmYNDQpbICAg
IDAuNDcyNDk5XSBwY2kgMDAwMDowMDowMS4wOiAgIFBSRUZFVENIIHdpbmRvdzogMHhkODAwMDAw
MC0weGQ4MGZmZmZmDQ0KWyAgICAwLjQ3MjQ5OV0gcGNpIDAwMDA6MDA6MDYuMDogUENJIGJyaWRn
ZSwgc2Vjb25kYXJ5IGJ1cyAwMDAwOjAzDQ0KWyAgICAwLjQ3MjQ5OV0gcGNpIDAwMDA6MDA6MDYu
MDogICBJTyB3aW5kb3c6IGRpc2FibGVkDQ0KWyAgICAwLjQ3MjQ5OV0gcGNpIDAwMDA6MDA6MDYu
MDogICBNRU0gd2luZG93OiBkaXNhYmxlZA0NClsgICAgMC40NzI0OTldIHBjaSAwMDAwOjAwOjA2
LjA6ICAgUFJFRkVUQ0ggd2luZG93OiBkaXNhYmxlZA0NClsgICAgMC40NzI0OTldIHBjaSAwMDAw
OjAwOjA3LjA6IFBDSSBicmlkZ2UsIHNlY29uZGFyeSBidXMgMDAwMDowNA0NClsgICAgMC40NzI0
OTldIHBjaSAwMDAwOjAwOjA3LjA6ICAgSU8gd2luZG93OiBkaXNhYmxlZA0NClsgICAgMC40NzI0
OTldIHBjaSAwMDAwOjAwOjA3LjA6ICAgTUVNIHdpbmRvdzogZGlzYWJsZWQNDQpbICAgIDAuNDcy
NDk5XSBwY2kgMDAwMDowMDowNy4wOiAgIFBSRUZFVENIIHdpbmRvdzogZGlzYWJsZWQNDQpbICAg
IDAuNDcyNDk5XSBwY2kgMDAwMDowMDowOC4wOiBQQ0kgYnJpZGdlLCBzZWNvbmRhcnkgYnVzIDAw
MDA6MDUNDQpbICAgIDAuNDcyNDk5XSBwY2kgMDAwMDowMDowOC4wOiAgIElPIHdpbmRvdzogZGlz
YWJsZWQNDQpbICAgIDAuNDcyNDk5XSBwY2kgMDAwMDowMDowOC4wOiAgIE1FTSB3aW5kb3c6IGRp
c2FibGVkDQ0KWyAgICAwLjQ3MjQ5OV0gcGNpIDAwMDA6MDA6MDguMDogICBQUkVGRVRDSCB3aW5k
b3c6IGRpc2FibGVkDQ0KWyAgICAwLjQ3MjQ5OV0gcGNpIDAwMDA6MDA6MDkuMDogUENJIGJyaWRn
ZSwgc2Vjb25kYXJ5IGJ1cyAwMDAwOjA2DQ0KWyAgICAwLjQ3MjQ5OV0gcGNpIDAwMDA6MDA6MDku
MDogICBJTyB3aW5kb3c6IDB4NzAwMC0weDdmZmYNDQpbICAgIDAuNDcyNDk5XSBwY2kgMDAwMDow
MDowOS4wOiAgIE1FTSB3aW5kb3c6IDB4ZDgyMDAwMDAtMHhkODJmZmZmZg0NClsgICAgMC40NzI0
OTldIHBjaSAwMDAwOjAwOjA5LjA6ICAgUFJFRkVUQ0ggd2luZG93OiAweGQ4MTAwMDAwLTB4ZDgx
ZmZmZmYNDQpbICAgIDAuNDcyNDk5XSBwY2kgMDAwMDowNzowMC4wOiBQQ0kgYnJpZGdlLCBzZWNv
bmRhcnkgYnVzIDAwMDA6MDgNDQpbICAgIDAuNDcyNDk5XSBwY2kgMDAwMDowNzowMC4wOiAgIElP
IHdpbmRvdzogZGlzYWJsZWQNDQpbICAgIDAuNDcyNDk5XSBwY2kgMDAwMDowNzowMC4wOiAgIE1F
TSB3aW5kb3c6IDB4ZDg0MDAwMDAtMHhkODRmZmZmZg0NClsgICAgMC40NzI0OTldIHBjaSAwMDAw
OjA3OjAwLjA6ICAgUFJFRkVUQ0ggd2luZG93OiBkaXNhYmxlZA0NClsgICAgMC40NzI0OTldIHBj
aSAwMDAwOjAwOjBhLjA6IFBDSSBicmlkZ2UsIHNlY29uZGFyeSBidXMgMDAwMDowNw0NClsgICAg
MC40NzI0OTldIHBjaSAwMDAwOjAwOjBhLjA6ICAgSU8gd2luZG93OiBkaXNhYmxlZA0NClsgICAg
MC40NzI0OTldIHBjaSAwMDAwOjAwOjBhLjA6ICAgTUVNIHdpbmRvdzogMHhkODQwMDAwMC0weGQ4
NGZmZmZmDQ0KWyAgICAwLjQ3MjQ5OV0gcGNpIDAwMDA6MDA6MGEuMDogICBQUkVGRVRDSCB3aW5k
b3c6IGRpc2FibGVkDQ0KWyAgICAwLjY1ODE2NV0gcGNpIDAwMDA6MDA6MDYuMDogUENJIElOVCBB
IC0+IEdTSSAzMiAobGV2ZWwsIGxvdykgLT4gSVJRIDMyDQ0KWyAgICAwLjY1OTEyMF0geGVuX2Fs
bG9jYXRlX3BpcnE6IHJldHVybmluZyBpcnEgMzIgZm9yIGdzaSAzMg0NClsgICAgMC42NzA0Mjdd
IEFscmVhZHkgc2V0dXAgdGhlIEdTSSA6MzINDQpbICAgIDAuNjcxNDE3XSBwY2kgMDAwMDowMDow
Ny4wOiBQQ0kgSU5UIEEgLT4gR1NJIDMyIChsZXZlbCwgbG93KSAtPiBJUlEgMzINDQpbICAgIDAu
NjcxNDE3XSB4ZW5fYWxsb2NhdGVfcGlycTogcmV0dXJuaW5nIGlycSAzMiBmb3IgZ3NpIDMyDQ0K
WyAgICAwLjY4NjQ3Ml0gQWxyZWFkeSBzZXR1cCB0aGUgR1NJIDozMg0NClsgICAgMC42ODc0NjJd
IHBjaSAwMDAwOjAwOjA4LjA6IFBDSSBJTlQgQSAtPiBHU0kgMzIgKGxldmVsLCBsb3cpIC0+IElS
USAzMg0NClsgICAgMC42ODc0NjJdIHhlbl9hbGxvY2F0ZV9waXJxOiByZXR1cm5pbmcgaXJxIDMy
IGZvciBnc2kgMzINDQpbICAgIDAuNzAyNTA1XSBBbHJlYWR5IHNldHVwIHRoZSBHU0kgOjMyDQ0K
WyAgICAwLjcwMzQ5NV0gcGNpIDAwMDA6MDA6MDkuMDogUENJIElOVCBBIC0+IEdTSSAzMiAobGV2
ZWwsIGxvdykgLT4gSVJRIDMyDQ0KWyAgICAwLjcwMzQ5NV0geGVuX2FsbG9jYXRlX3BpcnE6IHJl
dHVybmluZyBpcnEgMzIgZm9yIGdzaSAzMg0NClsgICAgMC43MTg1MzNdIEFscmVhZHkgc2V0dXAg
dGhlIEdTSSA6MzINDQpbICAgIDAuNzE5NTIyXSBwY2kgMDAwMDowMDowYS4wOiBQQ0kgSU5UIEEg
LT4gR1NJIDMyIChsZXZlbCwgbG93KSAtPiBJUlEgMzINDQpbICAgIDAuNzI5MzEzXSBORVQ6IFJl
Z2lzdGVyZWQgcHJvdG9jb2wgZmFtaWx5IDINDQpbICAgIDAuNzMwMTQzXSBJUCByb3V0ZSBjYWNo
ZSBoYXNoIHRhYmxlIGVudHJpZXM6IDMyNzY4IChvcmRlcjogNSwgMTMxMDcyIGJ5dGVzKQ0NClsg
ICAgMC43NDI2NjJdIFRDUCBlc3RhYmxpc2hlZCBoYXNoIHRhYmxlIGVudHJpZXM6IDEzMTA3MiAo
b3JkZXI6IDgsIDEwNDg1NzYgYnl0ZXMpDQ0KWyAgICAwLjc0MzQ5N10gVENQIGJpbmQgaGFzaCB0
YWJsZSBlbnRyaWVzOiA2NTUzNiAob3JkZXI6IDcsIDUyNDI4OCBieXRlcykNDQpbICAgIDAuNzQz
NDk3XSBUQ1A6IEhhc2ggdGFibGVzIGNvbmZpZ3VyZWQgKGVzdGFibGlzaGVkIDEzMTA3MiBiaW5k
IDY1NTM2KQ0NClsgICAgMC43NDM0OTddIFRDUCByZW5vIHJlZ2lzdGVyZWQNDQpbICAgIDAuNzY3
OTkwXSBORVQ6IFJlZ2lzdGVyZWQgcHJvdG9jb2wgZmFtaWx5IDENDQpbICAgIDAuNzczNjQ2XSBS
UEM6IFJlZ2lzdGVyZWQgdWRwIHRyYW5zcG9ydCBtb2R1bGUuDQ0KWyAgICAwLjc3NDQ5N10gUlBD
OiBSZWdpc3RlcmVkIHRjcCB0cmFuc3BvcnQgbW9kdWxlLg0NClsgICAgMC43NzQ0OTddIFJQQzog
UmVnaXN0ZXJlZCB0Y3AgTkZTdjQuMSBiYWNrY2hhbm5lbCB0cmFuc3BvcnQgbW9kdWxlLg0NClsg
ICAgMC43ODk4NDZdIHBjaSAwMDAwOjAwOjAyLjA6IGRpc2FibGVkIGJvb3QgaW50ZXJydXB0cyBv
biBkZXZpY2UgWzExNjY6MDIwNV0NDQpbICAgIDAuNzk3NzY4XSBUcnlpbmcgdG8gdW5wYWNrIHJv
b3RmcyBpbWFnZSBhcyBpbml0cmFtZnMuLi4NDQpbICAgIDEuMDU4MjI1XSBGcmVlaW5nIGluaXRy
ZCBtZW1vcnk6IDYxMTEwayBmcmVlZA0NClsgICAgMS4wOTQwNjddIFBDSS1ETUE6IFVzaW5nIHNv
ZnR3YXJlIGJvdW5jZSBidWZmZXJpbmcgZm9yIElPIChTV0lPVExCKQ0NClsgICAgMS4wOTQwNjdd
IERNQTogUGxhY2luZyA2NE1CIHNvZnR3YXJlIElPIFRMQiBiZXR3ZWVuIGM4NTNiMDAwIC0gY2M1
M2IwMDANDQpbICAgIDEuMDk0MDY3XSBETUE6IHNvZnR3YXJlIElPIFRMQiBhdCBwaHlzIDB4ODUz
YjAwMCAtIDB4YzUzYjAwMA0NClsgICAgMS4xMTY3MDJdIGt2bTogbm8gaGFyZHdhcmUgc3VwcG9y
dA0NClsgICAgMS4xMjAzNzVdIGhhc19zdm06IHN2bSBub3QgYXZhaWxhYmxlDQ0KWyAgICAxLjEy
MzE5OV0ga3ZtOiBubyBoYXJkd2FyZSBzdXBwb3J0DQ0KWyAgICAxLjE0MjEyM10gTWljcm9jb2Rl
IFVwZGF0ZSBEcml2ZXI6IHYyLjAwIDx0aWdyYW5AYWl2YXppYW4uZnNuZXQuY28udWs+LCBQZXRl
ciBPcnViYQ0NClsgICAgMS4xNDMxMDldIFNjYW5uaW5nIGZvciBsb3cgbWVtb3J5IGNvcnJ1cHRp
b24gZXZlcnkgNjAgc2Vjb25kcw0NClsgICAgMS4xNTcwMTVdIGF1ZGl0OiBpbml0aWFsaXppbmcg
bmV0bGluayBzb2NrZXQgKGRpc2FibGVkKQ0NClsgICAgMS4xNjI0NzVdIHR5cGU9MjAwMCBhdWRp
dCgxMzQ1MjAxMDYyLjMxMDoxKTogaW5pdGlhbGl6ZWQNDQpbICAgIDEuMTc5ODk5XSBoaWdobWVt
IGJvdW5jZSBwb29sIHNpemU6IDY0IHBhZ2VzDQ0KWyAgICAxLjE4MDcwOV0gSHVnZVRMQiByZWdp
c3RlcmVkIDIgTUIgcGFnZSBzaXplLCBwcmUtYWxsb2NhdGVkIDAgcGFnZXMNDQpbICAgIDEuMjAy
MDU3XSBWRlM6IERpc2sgcXVvdGFzIGRxdW90XzYuNS4yDQ0KWyAgICAxLjIwNjUyMV0gRHF1b3Qt
Y2FjaGUgaGFzaCB0YWJsZSBlbnRyaWVzOiAxMDI0IChvcmRlciAwLCA0MDk2IGJ5dGVzKQ0NClsg
ICAgMS4yMTY3MDBdIG1zZ21uaSBoYXMgYmVlbiBzZXQgdG8gMjY5MQ0NClsgICAgMS4yMjM1MTld
IGFsZzogTm8gdGVzdCBmb3Igc3Rkcm5nIChrcm5nKQ0NClsgICAgMS4yMjgwNjVdIEJsb2NrIGxh
eWVyIFNDU0kgZ2VuZXJpYyAoYnNnKSBkcml2ZXIgdmVyc2lvbiAwLjQgbG9hZGVkIChtYWpvciAy
NTIpDQ0KWyAgICAxLjIyOTA0MV0gaW8gc2NoZWR1bGVyIG5vb3AgcmVnaXN0ZXJlZA0NClsgICAg
MS4yMjkwNDFdIGlvIHNjaGVkdWxlciBhbnRpY2lwYXRvcnkgcmVnaXN0ZXJlZA0NClsgICAgMS4y
MjkwNDFdIGlvIHNjaGVkdWxlciBkZWFkbGluZSByZWdpc3RlcmVkDQ0KWyAgICAxLjI0ODcwNl0g
aW8gc2NoZWR1bGVyIGNmcSByZWdpc3RlcmVkIChkZWZhdWx0KQ0NClsgICAgMS4yNTc1NzJdIHBj
aV9ob3RwbHVnOiBQQ0kgSG90IFBsdWcgUENJIENvcmUgdmVyc2lvbjogMC41DQ0KWyAgICAxLjI2
NDU3MF0gaW5wdXQ6IFBvd2VyIEJ1dHRvbiBhcyAvZGV2aWNlcy9MTlhTWVNUTTowMC9MTlhTWUJV
UzowMC9QTlAwQTA4OjAwL1BOUDBDMEM6MDAvaW5wdXQvaW5wdXQwDQ0KWyAgICAxLjI2NTU0OV0g
QUNQSTogUG93ZXIgQnV0dG9uIFtQV1JCXQ0NClsgICAgMS4yNzc5NTNdIGlucHV0OiBQb3dlciBC
dXR0b24gYXMgL2RldmljZXMvTE5YU1lTVE06MDAvTE5YUFdSQk46MDAvaW5wdXQvaW5wdXQxDQ0K
WyAgICAxLjI3ODkzM10gQUNQSTogUG93ZXIgQnV0dG9uIFtQV1JGXQ0NClsgICAgMS4yODk0MjFd
IGlucHV0OiBTbGVlcCBCdXR0b24gYXMgL2RldmljZXMvTE5YU1lTVE06MDAvTE5YU0xQQk46MDAv
aW5wdXQvaW5wdXQyDQ0KWyAgICAxLjI5MDQwMl0gQUNQSTogU2xlZXAgQnV0dG9uIFtTTFBGXQ0N
ClsgICAgMS4zMTE3NjJdIEV2ZW50LWNoYW5uZWwgZGV2aWNlIGluc3RhbGxlZC4NDQpbICAgIDEu
MzIyNjAxXSBibGt0YXBfZGV2aWNlX2luaXQ6IGJsa3RhcCBkZXZpY2UgbWFqb3IgMjUzDQ0KWyAg
ICAxLjMyMzU3MV0gYmxrdGFwX3JpbmdfaW5pdDogYmxrdGFwIHJpbmcgbWFqb3I6IDI1MQ0NClsg
ICAgMS4zNDkzNjVdIHJlZ2lzdGVyaW5nIG5ldGJhY2sNDQpbICAgIDEuMzY4NTA2XSBocGV0X2Fj
cGlfYWRkOiBubyBhZGRyZXNzIG9yIGlycXMgaW4gX0NSUw0NClsgICAgMS4zNzM5NTldIE5vbi12
b2xhdGlsZSBtZW1vcnkgZHJpdmVyIHYxLjMNDQpbICAgIDEuMzc0OTM2XSBTZXJpYWw6IDgyNTAv
MTY1NTAgZHJpdmVyLCA0IHBvcnRzLCBJUlEgc2hhcmluZyBlbmFibGVkDQ0KKFhFTikgQ2Fubm90
IGJpbmQgSVJRNCB0byBkb20wLiBJbiB1c2UgYnkgJ25zMTY1NTAnLg0KKFhFTikgQ2Fubm90IGJp
bmQgSVJRMiB0byBkb20wLiBJbiB1c2UgYnkgJ2Nhc2NhZGUnLg0KKFhFTikgQ2Fubm90IGJpbmQg
SVJRNCB0byBkb20wLiBJbiB1c2UgYnkgJ25zMTY1NTAnLg0KKFhFTikgQ2Fubm90IGJpbmQgSVJR
MiB0byBkb20wLiBJbiB1c2UgYnkgJ2Nhc2NhZGUnLg0KKFhFTikgQ2Fubm90IGJpbmQgSVJRNCB0
byBkb20wLiBJbiB1c2UgYnkgJ25zMTY1NTAnLg0KKFhFTikgQ2Fubm90IGJpbmQgSVJRMiB0byBk
b20wLiBJbiB1c2UgYnkgJ2Nhc2NhZGUnLg0KKFhFTikgQ2Fubm90IGJpbmQgSVJRNCB0byBkb20w
LiBJbiB1c2UgYnkgJ25zMTY1NTAnLg0KKFhFTikgQ2Fubm90IGJpbmQgSVJRMiB0byBkb20wLiBJ
biB1c2UgYnkgJ2Nhc2NhZGUnLg0KWyAgICAxLjY2NjMzNF0gc2VyaWFsODI1MDogdHR5UzEgYXQg
SS9PIDB4MmY4IChpcnEgPSAzKSBpcyBhIDE2NTUwQQ0NClsgICAgMS42NzQxODVdIDAwOjBkOiB0
dHlTMSBhdCBJL08gMHgyZjggKGlycSA9IDMpIGlzIGEgMTY1NTBBDQ0KWyAgICAxLjY4ODM2Nl0g
YnJkOiBtb2R1bGUgbG9hZGVkDQ0KWyAgICAxLjY5NTYzM10gbG9vcDogbW9kdWxlIGxvYWRlZA0N
ClsgICAgMS42OTkyMTZdIGlucHV0OiBNYWNpbnRvc2ggbW91c2UgYnV0dG9uIGVtdWxhdGlvbiBh
cyAvZGV2aWNlcy92aXJ0dWFsL2lucHV0L2lucHV0Mw0NClsgICAgMS43MDk5MjFdIGUxMDA6IElu
dGVsKFIpIFBSTy8xMDAgTmV0d29yayBEcml2ZXIsIDMuNS4yNC1rMi1OQVBJDQ0KWyAgICAxLjcx
MDg5OV0gZTEwMDogQ29weXJpZ2h0KGMpIDE5OTktMjAwNiBJbnRlbCBDb3Jwb3JhdGlvbg0NClsg
ICAgMS43MjE4NTddIHRnMy5jOnYzLjEwMiAoU2VwdGVtYmVyIDEsIDIwMDkpDQ0KWyAgICAxLjcy
NjI0OV0gdGczIDAwMDA6MDg6MDQuMDogUENJIElOVCBBIC0+IEdTSSAzNiAobGV2ZWwsIGxvdykg
LT4gSVJRIDM2DQ0KWyAgICAxLjc0MzE2M10gZXRoMDogVGlnb24zIFtwYXJ0bm8oQkNNOTU3MTUp
IHJldiA5MDAzXSAoUENJWDoxMzNNSHo6NjQtYml0KSBNQUMgYWRkcmVzcyAwMDplMDo4MTo4MDox
ZDphMA0NClsgICAgMS43NDMyNDhdIGV0aDA6IGF0dGFjaGVkIFBIWSBpcyA1NzE0ICgxMC8xMDAv
MTAwMEJhc2UtVCBFdGhlcm5ldCkgKFdpcmVTcGVlZFsxXSkNDQpbICAgIDEuNzQzMjQ4XSBldGgw
OiBSWGNzdW1zWzFdIExpbmtDaGdSRUdbMF0gTUlpcnFbMF0gQVNGWzBdIFRTT2NhcFsxXQ0NClsg
ICAgMS43NDMyNDhdIGV0aDA6IGRtYV9yd2N0cmxbNzYxNDgwMDBdIGRtYV9tYXNrWzY0LWJpdF0N
DQpbICAgIDEuNzQzMjQ4XSB4ZW5fYWxsb2NhdGVfcGlycTogcmV0dXJuaW5nIGlycSAzNiBmb3Ig
Z3NpIDM2DQ0KWyAgICAxLjc3NzQ1OV0gQWxyZWFkeSBzZXR1cCB0aGUgR1NJIDozNg0NClsgICAg
MS43Nzg0NDldIHRnMyAwMDAwOjA4OjA0LjE6IFBDSSBJTlQgQiAtPiBHU0kgMzYgKGxldmVsLCBs
b3cpIC0+IElSUSAzNg0NClsgICAgMS43OTk3MDZdIGV0aDE6IFRpZ29uMyBbcGFydG5vKEJDTTk1
NzE1KSByZXYgOTAwM10gKFBDSVg6MTMzTUh6OjY0LWJpdCkgTUFDIGFkZHJlc3MgMDA6ZTA6ODE6
ODA6MWQ6YTENDQpbICAgIDEuODAwMjUyXSBldGgxOiBhdHRhY2hlZCBQSFkgaXMgNTcxNCAoMTAv
MTAwLzEwMDBCYXNlLVQgRXRoZXJuZXQpIChXaXJlU3BlZWRbMV0pDQ0KWyAgICAxLjgwMDI1Ml0g
ZXRoMTogUlhjc3Vtc1sxXSBMaW5rQ2hnUkVHWzBdIE1JaXJxWzBdIEFTRlswXSBUU09jYXBbMV0N
DQpbICAgIDEuODAwMjUyXSBldGgxOiBkbWFfcndjdHJsWzc2MTQ4MDAwXSBkbWFfbWFza1s2NC1i
aXRdDQ0KWyAgICAxLjgyODYwNF0gc2t5MiBkcml2ZXIgdmVyc2lvbiAxLjI1DQ0KWyAgICAxLjgz
Mjg2NF0gdHVuOiBVbml2ZXJzYWwgVFVOL1RBUCBkZXZpY2UgZHJpdmVyLCAxLjYNDQpbICAgIDEu
ODMzODQyXSB0dW46IChDKSAxOTk5LTIwMDQgTWF4IEtyYXNueWFuc2t5IDxtYXhrQHF1YWxjb21t
LmNvbT4NDQpbICAgIDEuODQ0NTkwXSBjb25zb2xlIFtuZXRjb24wXSBlbmFibGVkDQ0KWyAgICAx
Ljg0NTU2Ml0gbmV0Y29uc29sZTogbmV0d29yayBsb2dnaW5nIHN0YXJ0ZWQNDQpbICAgIDEuODUz
NjkwXSBlaGNpX2hjZDogVVNCIDIuMCAnRW5oYW5jZWQnIEhvc3QgQ29udHJvbGxlciAoRUhDSSkg
RHJpdmVyDQ0KWyAgICAxLjg2MDY4NF0gQUNQSTogUENJIEludGVycnVwdCBMaW5rIFtMTktVXSBl
bmFibGVkIGF0IElSUSAxMA0NClsgICAgMS44NjEzNTVdIHhlbl9hbGxvY2F0ZV9waXJxOiByZXR1
cm5pbmcgaXJxIDEwIGZvciBnc2kgMTANDQpbICAgIDEuODcyMDE3XSBlaGNpX2hjZCAwMDAwOjAw
OjAzLjI6IFBDSSBJTlQgQSAtPiBMaW5rW0xOS1VdIC0+IEdTSSAxMCAobGV2ZWwsIGxvdykgLT4g
SVJRIDEwDQ0KWyAgICAxLjg3Mjk5N10gZWhjaV9oY2QgMDAwMDowMDowMy4yOiBFSENJIEhvc3Qg
Q29udHJvbGxlcg0NClsgICAgMS44ODYwMjddIGVoY2lfaGNkIDAwMDA6MDA6MDMuMjogbmV3IFVT
QiBidXMgcmVnaXN0ZXJlZCwgYXNzaWduZWQgYnVzIG51bWJlciAxDQ0KWyAgICAxLjkxNDE1Ml0g
ZWhjaV9oY2QgMDAwMDowMDowMy4yOiBpcnEgMTAsIGlvIG1lbSAweGQ4NTEyMDAwDQ0KWyAgICAx
LjkyMDI3Nl0gZWhjaV9oY2QgMDAwMDowMDowMy4yOiBVU0IgMi4wIHN0YXJ0ZWQsIEVIQ0kgMS4w
MA0NClsgICAgMS45MjYxNjRdIHVzYiB1c2IxOiBOZXcgVVNCIGRldmljZSBmb3VuZCwgaWRWZW5k
b3I9MWQ2YiwgaWRQcm9kdWN0PTAwMDINDQpbICAgIDEuOTI3MDY0XSB1c2IgdXNiMTogTmV3IFVT
QiBkZXZpY2Ugc3RyaW5nczogTWZyPTMsIFByb2R1Y3Q9MiwgU2VyaWFsTnVtYmVyPTENDQpbICAg
IDEuOTI3MDY0XSB1c2IgdXNiMTogUHJvZHVjdDogRUhDSSBIb3N0IENvbnRyb2xsZXINDQpbICAg
IDEuOTI3MDY0XSB1c2IgdXNiMTogTWFudWZhY3R1cmVyOiBMaW51eCAyLjYuMzIuMjUgZWhjaV9o
Y2QNDQpbICAgIDEuOTI3MDY0XSB1c2IgdXNiMTogU2VyaWFsTnVtYmVyOiAwMDAwOjAwOjAzLjIN
DQpbICAgIDEuOTU2MDI1XSB1c2IgdXNiMTogY29uZmlndXJhdGlvbiAjMSBjaG9zZW4gZnJvbSAx
IGNob2ljZQ0NClsgICAgMS45NjE4ODFdIGh1YiAxLTA6MS4wOiBVU0IgaHViIGZvdW5kDQ0KWyAg
ICAxLjk2NTY4NF0gaHViIDEtMDoxLjA6IDQgcG9ydHMgZGV0ZWN0ZWQNDQpbICAgIDEuOTcwMzg5
XSBvaGNpX2hjZDogVVNCIDEuMSAnT3BlbicgSG9zdCBDb250cm9sbGVyIChPSENJKSBEcml2ZXIN
DQpbICAgIDEuOTcxMzY1XSB4ZW5fYWxsb2NhdGVfcGlycTogcmV0dXJuaW5nIGlycSAxMCBmb3Ig
Z3NpIDEwDQ0KWyAgICAxLjk4MjE3M10gQWxyZWFkeSBzZXR1cCB0aGUgR1NJIDoxMA0NClsgICAg
MS45ODMxNjNdIG9oY2lfaGNkIDAwMDA6MDA6MDMuMDogUENJIElOVCBBIC0+IExpbmtbTE5LVV0g
LT4gR1NJIDEwIChsZXZlbCwgbG93KSAtPiBJUlEgMTANDQpbICAgIDEuOTgzMTYzXSBvaGNpX2hj
ZCAwMDAwOjAwOjAzLjA6IE9IQ0kgSG9zdCBDb250cm9sbGVyDQ0KWyAgICAxLjk5OTgxNl0gb2hj
aV9oY2QgMDAwMDowMDowMy4wOiBuZXcgVVNCIGJ1cyByZWdpc3RlcmVkLCBhc3NpZ25lZCBidXMg
bnVtYmVyIDINDQpbICAgIDIuMDAwNzk0XSBvaGNpX2hjZCAwMDAwOjAwOjAzLjA6IGlycSAxMCwg
aW8gbWVtIDB4ZDg1MTAwMDANDQpbICAgIDIuMDYzMzYwXSB1c2IgdXNiMjogTmV3IFVTQiBkZXZp
Y2UgZm91bmQsIGlkVmVuZG9yPTFkNmIsIGlkUHJvZHVjdD0wMDAxDQ0KWyAgICAyLjA2NDI1N10g
dXNiIHVzYjI6IE5ldyBVU0IgZGV2aWNlIHN0cmluZ3M6IE1mcj0zLCBQcm9kdWN0PTIsIFNlcmlh
bE51bWJlcj0xDQ0KWyAgICAyLjA2NDI1N10gdXNiIHVzYjI6IFByb2R1Y3Q6IE9IQ0kgSG9zdCBD
b250cm9sbGVyDQ0KWyAgICAyLjA2NDI1N10gdXNiIHVzYjI6IE1hbnVmYWN0dXJlcjogTGludXgg
Mi42LjMyLjI1IG9oY2lfaGNkDQ0KWyAgICAyLjA2NDI1N10gdXNiIHVzYjI6IFNlcmlhbE51bWJl
cjogMDAwMDowMDowMy4wDQ0KWyAgICAyLjA5MzE5Ml0gdXNiIHVzYjI6IGNvbmZpZ3VyYXRpb24g
IzEgY2hvc2VuIGZyb20gMSBjaG9pY2UNDQpbICAgIDIuMDk5MDIwXSBodWIgMi0wOjEuMDogVVNC
IGh1YiBmb3VuZA0NClsgICAgMi4xMDI4NDFdIGh1YiAyLTA6MS4wOiAyIHBvcnRzIGRldGVjdGVk
DQ0KWyAgICAyLjEwNzE0Ml0geGVuX2FsbG9jYXRlX3BpcnE6IHJldHVybmluZyBpcnEgMTAgZm9y
IGdzaSAxMA0NClsgICAgMi4xMTI2NDVdIEFscmVhZHkgc2V0dXAgdGhlIEdTSSA6MTANDQpbICAg
IDIuMTEzNjM1XSBvaGNpX2hjZCAwMDAwOjAwOjAzLjE6IFBDSSBJTlQgQSAtPiBMaW5rW0xOS1Vd
IC0+IEdTSSAxMCAobGV2ZWwsIGxvdykgLT4gSVJRIDEwDQ0KWyAgICAyLjExMzYzNV0gb2hjaV9o
Y2QgMDAwMDowMDowMy4xOiBPSENJIEhvc3QgQ29udHJvbGxlcg0NClsgICAgMi4xMzA1MTJdIG9o
Y2lfaGNkIDAwMDA6MDA6MDMuMTogbmV3IFVTQiBidXMgcmVnaXN0ZXJlZCwgYXNzaWduZWQgYnVz
IG51bWJlciAzDQ0KWyAgICAyLjEzODAzNV0gb2hjaV9oY2QgMDAwMDowMDowMy4xOiBpcnEgMTAs
IGlvIG1lbSAweGQ4NTExMDAwDQ0KWyAgICAyLjE5NDM2OF0gdXNiIHVzYjM6IE5ldyBVU0IgZGV2
aWNlIGZvdW5kLCBpZFZlbmRvcj0xZDZiLCBpZFByb2R1Y3Q9MDAwMQ0NClsgICAgMi4xOTUyNjZd
IHVzYiB1c2IzOiBOZXcgVVNCIGRldmljZSBzdHJpbmdzOiBNZnI9MywgUHJvZHVjdD0yLCBTZXJp
YWxOdW1iZXI9MQ0NClsgICAgMi4xOTUyNjZdIHVzYiB1c2IzOiBQcm9kdWN0OiBPSENJIEhvc3Qg
Q29udHJvbGxlcg0NClsgICAgMi4xOTUyNjZdIHVzYiB1c2IzOiBNYW51ZmFjdHVyZXI6IExpbnV4
IDIuNi4zMi4yNSBvaGNpX2hjZA0NClsgICAgMi4xOTUyNjZdIHVzYiB1c2IzOiBTZXJpYWxOdW1i
ZXI6IDAwMDA6MDA6MDMuMQ0NClsgICAgMi4yMjQyMzNdIHVzYiB1c2IzOiBjb25maWd1cmF0aW9u
ICMxIGNob3NlbiBmcm9tIDEgY2hvaWNlDQ0KWyAgICAyLjIzMDA2NV0gaHViIDMtMDoxLjA6IFVT
QiBodWIgZm91bmQNDQpbICAgIDIuMjMzODc4XSBodWIgMy0wOjEuMDogMiBwb3J0cyBkZXRlY3Rl
ZA0NClsgICAgMi4yMzg0OTldIHVoY2lfaGNkOiBVU0IgVW5pdmVyc2FsIEhvc3QgQ29udHJvbGxl
ciBJbnRlcmZhY2UgZHJpdmVyDQ0KWyAgICAyLjI0NTMzMl0gdXNiY29yZTogcmVnaXN0ZXJlZCBu
ZXcgaW50ZXJmYWNlIGRyaXZlciB1c2JscA0NClsgICAgMi4yNDYzMTVdIEluaXRpYWxpemluZyBV
U0IgTWFzcyBTdG9yYWdlIGRyaXZlci4uLg0NClsgICAgMi4yNTU5NTddIHVzYmNvcmU6IHJlZ2lz
dGVyZWQgbmV3IGludGVyZmFjZSBkcml2ZXIgdXNiLXN0b3JhZ2UNDQpbICAgIDIuMjU2OTQwXSBV
U0IgTWFzcyBTdG9yYWdlIHN1cHBvcnQgcmVnaXN0ZXJlZC4NDQpbICAgIDIuMjY2ODU5XSB1c2Jj
b3JlOiByZWdpc3RlcmVkIG5ldyBpbnRlcmZhY2UgZHJpdmVyIGxpYnVzdWFsDQ0KWyAgICAyLjI3
Mjk5MV0gUE5QOiBQUy8yIENvbnRyb2xsZXIgW1BOUDAzMDM6S0JDLFBOUDBmMTM6TU9VRV0gYXQg
MHg2MCwweDY0IGlycSAxLDEyDQ0KWyAgICAyLjI4MzQ3OV0gc2VyaW86IGk4MDQyIEtCRCBwb3J0
IGF0IDB4NjAsMHg2NCBpcnEgMQ0NClsgICAgMi4yODM0NzldIHNlcmlvOiBpODA0MiBBVVggcG9y
dCBhdCAweDYwLDB4NjQgaXJxIDEyDQ0KWyAgICAyLjI5NDUzM10gbWljZTogUFMvMiBtb3VzZSBk
ZXZpY2UgY29tbW9uIGZvciBhbGwgbWljZQ0NClsgICAgMi4zMDExMjVdIHJ0Y19jbW9zIDAwOjA2
OiBSVEMgY2FuIHdha2UgZnJvbSBTNA0NClsgICAgMi4zMDYwMDZdIHJ0Y19jbW9zIDAwOjA2OiBy
dGMgY29yZTogcmVnaXN0ZXJlZCBydGNfY21vcyBhcyBydGMwDQ0KWyAgICAyLjMxMjE5Nl0gcnRj
MDogYWxhcm1zIHVwIHRvIG9uZSB5ZWFyLCB5M2ssIDExNCBieXRlcyBudnJhbQ0NClsgICAgMi4z
MTkxNDFdIGRldmljZS1tYXBwZXI6IGlvY3RsOiA0LjE1LjAtaW9jdGwgKDIwMDktMDQtMDEpIGlu
aXRpYWxpc2VkOiBkbS1kZXZlbEByZWRoYXQuY29tDQ0KWyAgICAyLjMyODk4OF0gY3B1aWRsZTog
dXNpbmcgZ292ZXJub3IgbGFkZGVyDQ0KWyAgICAyLjMyOTk1Nl0gY3B1aWRsZTogdXNpbmcgZ292
ZXJub3IgbWVudQ0NClsgICAgMi4zNDIxODJdIHVzYmNvcmU6IHJlZ2lzdGVyZWQgbmV3IGludGVy
ZmFjZSBkcml2ZXIgaGlkZGV2DQ0KWyAgICAyLjM0NzkyMV0gdXNiY29yZTogcmVnaXN0ZXJlZCBu
ZXcgaW50ZXJmYWNlIGRyaXZlciB1c2JoaWQNDQpbICAgIDIuMzQ4OTAyXSB1c2JoaWQ6IHYyLjY6
VVNCIEhJRCBjb3JlIGRyaXZlcg0NClsgICAgMi4zNTg5MzBdIEFkdmFuY2VkIExpbnV4IFNvdW5k
IEFyY2hpdGVjdHVyZSBEcml2ZXIgVmVyc2lvbiAxLjAuMjEuDQ0KbW9kcHJvYmU6IEZBVEFMOiBD
b3VsZCBub3QgbG9hZCAvbGliL21vZHVsZXMvMi42LjMyLjI1L21vZHVsZXMuZGVwOiBObyBzdWNo
IGZpbGUgb3IgZGlyZWN0b3J5DQ0KDQ0NClsgICAgMi4zODQ1ODVdIEFMU0EgZGV2aWNlIGxpc3Q6
DQ0KWyAgICAyLjM4NTU1OV0gICBObyBzb3VuZGNhcmRzIGZvdW5kLg0NClsgICAgMi4zOTEyMzNd
IE5ldGZpbHRlciBtZXNzYWdlcyB2aWEgTkVUTElOSyB2MC4zMC4NDQpbICAgIDIuMzk2MDA2XSBu
Zl9jb25udHJhY2sgdmVyc2lvbiAwLjUuMCAoMTYzODQgYnVja2V0cywgNjU1MzYgbWF4KQ0NClsg
ICAgMi40MDMxNDhdIGN0bmV0bGluayB2MC45MzogcmVnaXN0ZXJpbmcgd2l0aCBuZm5ldGxpbmsu
DQ0KWyAgICAyLjQxMjc5OV0gaXBfdGFibGVzOiAoQykgMjAwMC0yMDA2IE5ldGZpbHRlciBDb3Jl
IFRlYW0NDQpbICAgIDIuNDE4MTkxXSBUQ1AgY3ViaWMgcmVnaXN0ZXJlZA0NClsgICAgMi40MTkx
MzVdIEluaXRpYWxpemluZyBYRlJNIG5ldGxpbmsgc29ja2V0DQ0KWyAgICAyLjQyNjkzM10gTkVU
OiBSZWdpc3RlcmVkIHByb3RvY29sIGZhbWlseSAxMA0NClsgICAgMi40NDEzMzddIGlwNl90YWJs
ZXM6IChDKSAyMDAwLTIwMDYgTmV0ZmlsdGVyIENvcmUgVGVhbQ0NClsgICAgMi40NDcwMzhdIElQ
djYgb3ZlciBJUHY0IHR1bm5lbGluZyBkcml2ZXINDQpbICAgIDIuNDU2MDk0XSBORVQ6IFJlZ2lz
dGVyZWQgcHJvdG9jb2wgZmFtaWx5IDE3DQ0KWyAgICAyLjQ2MDgxOF0gVXNpbmcgSVBJIE5vLVNo
b3J0Y3V0IG1vZGUNDQpbICAgIDIuNDY1MjI1XSByZWdpc3RlcmVkIHRhc2tzdGF0cyB2ZXJzaW9u
IDENDQpbICAgIDIuNDcwNDAyXSAgIE1hZ2ljIG51bWJlcjogODo3OTA6OTgyDQ0KWyAgICAyLjQ3
NTQ5Nl0gRnJlZWluZyB1bnVzZWQga2VybmVsIG1lbW9yeTogNDg0ayBmcmVlZA0NClsgICAgMi40
ODIxODNdIHVzYiAyLTI6IG5ldyBsb3cgc3BlZWQgVVNCIGRldmljZSB1c2luZyBvaGNpX2hjZCBh
bmQgYWRkcmVzcyAyDQ0KWyAgICAyLjQ4OTI0Ml0gV3JpdGUgcHJvdGVjdGluZyB0aGUga2VybmVs
IHRleHQ6IDQ2MjBrDQ0KWyAgICAyLjQ5NTk5MV0gV3JpdGUgcHJvdGVjdGluZyB0aGUga2VybmVs
IHJlYWQtb25seSBkYXRhOiAyMDE2aw0NCkxvYWRpbmcsIHBsZWFzZSB3YWl0Li4uDQ0KWyAgICAy
LjY4NDI3Nl0gdUJlZ2luOiBMb2FkaW5nIGVzc2VudGlhbCBkcml2ZXJzIC4uLiBzYiAyLTI6IE5l
dyBVU0IgZGV2aWNlIGZvdW5kLCBpZFZlbmRvcj0wNjI0LCBpZFByb2R1Y3Q9MDIwMA0NClsgICAg
Mi42ODUxMzddIHVzYiAyLTI6IE5ldyBVU0IgZGV2aWNlIHN0cmluZ3M6IE1mcj0xLCBQcm9kdWN0
PTIsIFNlcmlhbE51bWJlcj0wDQ0KWyAgICAyLjY4NTEzN10gdXNiIDItMjogUHJvZHVjdDogVVNC
IERTUklRDQ0KWyAgICAyLjY4NTEzN10gdXNiIDItMjogTWFudWZhY3R1cmVyOiBBdm9jZW50DQ0K
WyAgICAyLjcxMDQ5NF0gdXNiIDItMjogY29uZmlndXJhdGlvbiAjMSBjaG9zZW4gZnJvbSAxIGNo
b2ljZQ0NClsgICAgMi43MzA0NTFdIGlucHV0OiBBdm9jZW50IFVTQiBEU1JJUSBhcyAvZGV2aWNl
cy9wY2kwMDAwOjAwLzAwMDA6MDA6MDMuMC91c2IyLzItMi8yLTI6MS4wL2lucHV0L2lucHV0NA0N
ClsgICAgMi43NDA3OTJdIGdlbmVyaWMtdXNiIDAwMDM6MDYyNDowMjAwLjAwMDE6IGlucHV0LGhp
ZHJhdzA6IFVTQiBISUQgdjEuMTAgS2V5Ym9hcmQgW0F2b2NlbnQgVVNCIERTUklRXSBvbiB1c2It
MDAwMDowMDowMy4wLTIvaW5wdXQwDQ0KZG9uZS4NDQpbICAgIDIuNzY0NDI1XSBpQmVnaW46IFJ1
bm5pbmcgL3NjcmlwdHMvaW5pdC1wcmVtb3VudCAuLi4gbnB1dDogQXZvY2VudCBVU0IgRFNSSVEg
YXMgL2RldmljZXMvcGNpMDAwMDowMC8wMDAwOjAwOjAzLjAvdXNiMi8yLTIvMi0yOjEuMS9pbnB1
dC9pbnB1dDUNDQpbICAgIDIuNzc4NTk2XSBnZW5lcmljLXVzYiAwMDAzOjA2MjQ6MDIwMC4wMDAy
OiBpbnB1dCxoaWRyYXcxOiBVU0IgSElEIHYxLjEwIE1vdXNlIFtBdm9jZW50IFVTQiBEU1JJUV0g
b24gdXNiLTAwMDA6MDA6MDMuMC0yL2lucHV0MQ0NClsgICAgMy45MzE0MDVdIEFDUEk6IFBDSSBJ
bnRlcnJ1cHQgTGluayBbTE5LU10gZW5hYmxlZCBhdCBJUlEgMTENDQpbICAgIDMuOTM3NDc2XSB4
ZW5fYWxsb2NhdGVfcGlycTogcmV0dXJuaW5nIGlycSAxMSBmb3IgZ3NpIDExDQ0KWyAgICAzLjk0
MzM1N10gc2F0YV9zdncgMDAwMDowMTowZS4wOiBQQ0kgSU5UIEEgLT4gTGlua1tMTktTXSAtPiBH
U0kgMTEgKGxldmVsLCBsb3cpIC0+IElSUSAxMQ0NClsgICAgMy45NTI0MTNdIHNjc2kwIDogc2F0
YV9zdncNDQpbICAgIDMuOTU2MDE0XSBzY3NpMSA6IHNhdGFfc3Z3DQ0KWyAgICAzLjk1OTU4MV0g
c2NzaTIgOiBzYXRhX3N2dw0NClsgICAgMy45NjMyNjBdIHNjc2kzIDogc2F0YV9zdncNDQpbICAg
IDMuOTY2MjY3XSBhdGExOiBTQVRBIG1heCBVRE1BLzEzMyBtbWlvIG04MTkyQDB4ZDgzMDAwMDAg
cG9ydCAweGQ4MzAwMDAwIGlycSAxMQ0NClsgICAgMy45NjcyMzRdIGF0YTI6IFNBVEEgbWF4IFVE
TUEvMTMzIG1taW8gbTgxOTJAMHhkODMwMDAwMCBwb3J0IDB4ZDgzMDAxMDAgaXJxIDExDQ0KWyAg
ICAzLjk2NzIzNF0gYXRhMzogU0FUQSBtYXggVURNQS8xMzMgbW1pbyBtODE5MkAweGQ4MzAwMDAw
IHBvcnQgMHhkODMwMDIwMCBpcnEgMTENDQpbICAgIDMuOTY3MjM0XSBhdGE0OiBTQVRBIG1heCBV
RE1BLzEzMyBtbWlvIG04MTkyQDB4ZDgzMDAwMDAgcG9ydCAweGQ4MzAwMzAwIGlycSAxMQ0NClsg
ICAgMy45OTY1MzZdIHhlbl9hbGxvY2F0ZV9waXJxOiByZXR1cm5pbmcgaXJxIDExIGZvciBnc2kg
MTENDQpbICAgIDQuMDAyMjQwXSBBbHJlYWR5IHNldHVwIHRoZSBHU0kgOjExDQ0KWyAgICA0LjAw
MzE5M10gc2F0YV9zdncgMDAwMDowMTowZS4xOiBQQ0kgSU5UIEEgLT4gTGlua1tMTktTXSAtPiBH
U0kgMTEgKGxldmVsLCBsb3cpIC0+IElSUSAxMQ0NClsgICAgNC4zMDUyNzFdIGF0YTE6IFNBVEEg
bGluayB1cCAxLjUgR2JwcyAoU1N0YXR1cyAxMTMgU0NvbnRyb2wgMzAwKQ0NClsgICAgNC4zNjI1
MzVdIGF0YTEuMDA6IEFUQS03OiBTVDMxNjA4MTVBUywgMy5BQUMsIG1heCBVRE1BLzEzMw0NClsg
ICAgNC4zNjMzMzddIGF0YTEuMDA6IDMxMjU4MTgwOCBzZWN0b3JzLCBtdWx0aSAxNjogTEJBNDgg
TkNRIChkZXB0aCAwLzMyKQ0NClsgICAgNC40MjkxMzRdIGF0YTEuMDA6IGNvbmZpZ3VyZWQgZm9y
IFVETUEvMTMzDQ0KWyAgICA0LjQzMzk3Ml0gc2NzaSAwOjA6MDowOiBEaXJlY3QtQWNjZXNzICAg
ICBBVEEgICAgICBTVDMxNjA4MTVBUyAgICAgIDMuQUEgUFE6IDAgQU5TSTogNQ0NClsgICAgNC40
NDM1MjhdIHNkIDA6MDowOjA6IEF0dGFjaGVkIHNjc2kgZ2VuZXJpYyBzZzAgdHlwZSAwDQ0KWyAg
ICA0LjQ0MzU5MV0gc2QgMDowOjA6MDogW3NkYV0gMzEyNTgxODA4IDUxMi1ieXRlIGxvZ2ljYWwg
YmxvY2tzOiAoMTYwIEdCLzE0OSBHaUIpDQ0KWyAgICA0LjQ1MTAxN10gc2QgMDowOjA6MDogW3Nk
YV0gV3JpdGUgUHJvdGVjdCBpcyBvZmYNDQpbICAgIDQuNDUxMTIzXSBzZCAwOjA6MDowOiBbc2Rh
XSBXcml0ZSBjYWNoZTogZW5hYmxlZCwgcmVhZCBjYWNoZTogZW5hYmxlZCwgZG9lc24ndCBzdXBw
b3J0IERQTyBvciBGVUENDQpbICAgIDQuNDUxNzg2XSAgc2RhOiBzZGExIHNkYTIgPCBzZGE1ID4N
DQpbICAgIDQuNDg1OTQ4XSBzZCAwOjA6MDowOiBbc2RhXSBBdHRhY2hlZCBTQ1NJIGRpc2sNDQpb
ICAgIDQuNzc5NTMzXSBhdGEyOiBTQVRBIGxpbmsgdXAgMS41IEdicHMgKFNTdGF0dXMgMTEzIFND
b250cm9sIDMwMCkNDQpbICAgIDQuNzkwNzU1XSBhdGEyLjAwOiBBVEEtODogV0RDIFdENTAwMEFB
S1MtMDBWMUEwLCAwNS4wMUQwNSwgbWF4IFVETUEvMTMzDQ0KWyAgICA0Ljc5MTUwNF0gYXRhMi4w
MDogOTc2NzczMTY4IHNlY3RvcnMsIG11bHRpIDE2OiBMQkE0OCBOQ1EgKGRlcHRoIDAvMzIpDQ0K
WyAgICA0LjgxMTA1MV0gYXRhMi4wMDogY29uZmlndXJlZCBmb3IgVURNQS8xMzMNDQpbICAgIDQu
ODE2NDM1XSBzY3NpIDE6MDowOjA6IERpcmVjdC1BY2Nlc3MgICAgIEFUQSAgICAgIFdEQyBXRDUw
MDBBQUtTLTAgMDUuMCBQUTogMCBBTlNJOiA1DQ0KWyAgICA0LjgyNjE1N10gc2QgMTowOjA6MDog
W3NkYl0gOTc2NzczMTY4IDUxMi1ieXRlIGxvZ2ljYWwgYmxvY2tzOiAoNTAwIEdCLzQ2NSBHaUIp
DQ0KWyAgICA0LjgyNjQxOF0gc2QgMTowOjA6MDogQXR0YWNoZWQgc2NzaSBnZW5lcmljIHNnMSB0
eXBlIDANDQpbICAgIDQuODM5ODEyXSBzZCAxOjA6MDowOiBbc2RiXSBXcml0ZSBQcm90ZWN0IGlz
IG9mZg0NClsgICAgNC44NDUxNTJdIHNkIDE6MDowOjA6IFtzZGJdIFdyaXRlIGNhY2hlOiBlbmFi
bGVkLCByZWFkIGNhY2hlOiBlbmFibGVkLCBkb2Vzbid0IHN1cHBvcnQgRFBPIG9yIEZVQQ0NClsg
ICAgNC44NTU1MDldICBzZGI6IHNkYjEgc2RiMiBzZGIzDQ0KWyAgICA0Ljg5MTkyMF0gc2QgMTow
OjA6MDogW3NkYl0gQXR0YWNoZWQgU0NTSSBkaXNrDQ0KWyAgICA1LjEzNDI3OF0gYXRhMzogU0FU
QSBsaW5rIGRvd24gKFNTdGF0dXMgNCBTQ29udHJvbCAzMDApDQ0KWyAgICA1LjQ1OTExOF0gYXRh
NDogU0FUQSBsaW5rIGRvd24gKFNTdGF0dXMgNCBTQ29udHJvbCAzMDApDQ0KWyAgICA1LjQ4NDMz
MF0gRnVzaW9uIE1QVCBiYXNlIGRyaXZlciAzLjA0LjEyDQ0KWyAgICA1LjQ4NTEwMF0gQ29weXJp
Z2h0IChjKSAxOTk5LTIwMDggTFNJIENvcnBvcmF0aW9uDQ0KWyAgICA1LjUxNjQ1Ml0gRnVzaW9u
IE1QVCBTQVMgSG9zdCBkcml2ZXIgMy4wNC4xMg0NClsgICAgNS41MjEzNThdIG1wdHNhcyAwMDAw
OjA2OjAwLjA6IFBDSSBJTlQgQSAtPiBHU0kgMzUgKGxldmVsLCBsb3cpIC0+IElSUSAzNQ0NClsg
ICAgNS41Mjg5MzRdIG1wdGJhc2U6IGlvYzA6IEluaXRpYXRpbmcgYnJpbmd1cA0NClsgICAgNS44
MTUwOTNdIGlvYzA6IExTSVNBUzEwNjRFIEIxOiBDYXBhYmlsaXRpZXM9e0luaXRpYXRvcn0NDQpb
ICAgMTAuODY3MzM0XSBzY3NpNCA6IGlvYzA6IExTSVNBUzEwNjRFIEIxLCBGd1Jldj0wMTBhMDAw
MGgsIFBvcnRzPTEsIE1heFE9NTExLCBJUlE9MzUNDQpkb25lLg0NCkJlZ2luOiBNb3VudGluZyBy
b290IGZpbGUgc3lzdGVtIC4uLiBCZWdpbjogUnVubmluZyAvc2NyaXB0cy9sb2NhbC10b3AgLi4u
IGRvbmUuDQ0KQmVnaW46IFJ1bm5pbmcgL3NjcmlwdHMvbG9jYWwtcHJlbW91bnQgLi4uIGtpbml0
OiBuYW1lX3RvX2Rldl90KC9kZXYvc2RiMikgPSBkZXYoOCwxOCkNDQpraW5pdDogdHJ5aW5nIHRv
IHJlc3VtZSBmcm9tIC9kZXYvc2RiMg0NClsgICAxMS4xOTczNjNdIFBNOiBTdGFydGluZyBtYW51
YWwgcmVzdW1lIGZyb20gZGlzaw0NCmtpbml0OiBObyByZXN1bWUgaW1hZ2UsIGRvaW5nIG5vcm1h
bCBib290Li4uDQ0KZG9uZS4NDQpbICAgMTEuMjY2ODU4XSBram91cm5hbGQgc3RhcnRpbmcuICBD
b21taXQgaW50ZXJ2YWwgNSBzZWNvbmRzDQ0KWyAgIDExLjI2Njg4OF0gRVhUMy1mczogbW91bnRl
ZCBmaWxlc3lzdGVtIHdpdGggd3JpdGViYWNrIGRhdGEgbW9kZS4NDQpCZWdpbjogUnVubmluZyAv
c2NyaXB0cy9sb2NhbC1ib3R0b20gLi4uIGRvbmUuDQ0KZG9uZS4NDQpCZWdpbjogUnVubmluZyAv
c2NyaXB0cy9pbml0LWJvdHRvbSAuLi4gZG9uZS4NDQpTRUxpbnV4OiAgQ291bGQgbm90IG9wZW4g
cG9saWN5IGZpbGUgPD0gL2V0Yy9zZWxpbnV4L3RhcmdldGVkL3BvbGljeS9wb2xpY3kuMjY6ICBO
byBzdWNoIGZpbGUgb3IgZGlyZWN0b3J5DQ0KDUlOSVQ6IHZlcnNpb24gMi44OCBib290aW5nDQ0N
ClsbWzM2bWluZm8bWzM5OzQ5bV0gVXNpbmcgbWFrZWZpbGUtc3R5bGUgY29uY3VycmVudCBib290
IGluIHJ1bmxldmVsIFMuDQ0KWy4uLi5dIFN0YXJ0aW5nIHRoZSBob3RwbHVnIGV2ZW50cyBkaXNw
YXRjaGVyOiB1ZGV2ZFsgICAxMy42NzEzMDVdIDwzMD51ZGV2ZFsyMTUwXTogc3RhcnRpbmcgdmVy
c2lvbiAxNzUNDQobWz8yNWwbWz8xYxs3G1sxR1sbWzMybSBvayAbWzM5OzQ5bRs4G1s/MjVoG1s/
MGMuDQ0KWy4uLi5dIFN5bnRoZXNpemluZyB0aGUgaW5pdGlhbCBob3RwbHVnIGV2ZW50cy4uLhtb
PzI1bBtbPzFjGzcbWzFHWxtbMzJtIG9rIBtbMzk7NDltGzgbWz8yNWgbWz8wY2RvbmUuDQ0KWy4u
Li5dIFdhaXRpbmcgZm9yIC9kZXYgdG8gYmUgZnVsbHkgcG9wdWxhdGVkLi4udWRldmRbMjE4Ml06
IGtlcm5lbC1wcm92aWRlZCBuYW1lICdibGt0YXAtY29udHJvbCcgYW5kIE5BTUU9ICd4ZW4vYmxr
dGFwLTIvY29udHJvbCcgZGlzYWdyZWUsIHBsZWFzZSB1c2UgU1lNTElOSys9IG9yIGNoYW5nZSB0
aGUga2VybmVsIHRvIHByb3ZpZGUgdGhlIHByb3BlciBuYW1lDQ0KDQ0NChtbPzI1bBtbPzFjGzcb
WzFHWxtbMzJtIG9rIBtbMzk7NDltGzgbWz8yNWgbWz8wY2RvbmUuDQ0KWy4uLi5dIEFjdGl2YXRp
bmcgc3dhcC4uLlsgICAxNS44NDU2MjBdIEFkZGluZyAzOTAzNzg0ayBzd2FwIG9uIC9kZXYvc2Ri
Mi4gIFByaW9yaXR5Oi0xIGV4dGVudHM6MSBhY3Jvc3M6MzkwMzc4NGsgDQ0KG1s/MjVsG1s/MWMb
NxtbMUdbG1szMm0gb2sgG1szOTs0OW0bOBtbPzI1aBtbPzBjZG9uZS4NDQpbLi4uLl0gQ2hlY2tp
bmcgcm9vdCBmaWxlIHN5c3RlbS4uLmZzY2sgZnJvbSB1dGlsLWxpbnV4IDIuMjAuMQ0NCi9kZXYv
c2RiMTogY2xlYW4sIDk3OTg1LzQ4ODY0MCBmaWxlcywgOTk0MzkwLzE5NTM4OTcgYmxvY2tzDQ0K
G1s/MjVsG1s/MWMbNxtbMUdbG1szMm0gb2sgG1szOTs0OW0bOBtbPzI1aBtbPzBjZG9uZS4NDQpb
ICAgMTYuNDk1OTQ3XSBFWFQzIEZTIG9uIHNkYjEsIGludGVybmFsIGpvdXJuYWwNDQpbLi4uLl0g
TG9hZGluZyBrZXJuZWwgbW9kdWxlcy4uLhtbPzI1bBtbPzFjGzcbWzFHWxtbMzJtIG9rIBtbMzk7
NDltGzgbWz8yNWgbWz8wY2RvbmUuDQ0KWy4uLi5dIFNldHRpbmcgdXAgTFZNIFZvbHVtZSBHcm91
cHMuLi4bWz8yNWwbWz8xYxs3G1sxR1sbWzMybSBvayAbWzM5OzQ5bRs4G1s/MjVoG1s/MGNkb25l
Lg0NCl5bPlsuLi4uXSBBY3RpdmF0aW5nIGx2bSBhbmQgbWQgc3dhcC4uLhtbPzI1bBtbPzFjGzcb
WzFHWxtbMzJtIG9rIBtbMzk7NDltGzgbWz8yNWgbWz8wY2RvbmUuDQ0KWy4uLi5dIENoZWNraW5n
IGZpbGUgc3lzdGVtcy4uLmZzY2sgZnJvbSB1dGlsLWxpbnV4IDIuMjAuMQ0NCi9kZXYvbWFwcGVy
L3Vuc3RhYmxlLXRyYWNlczogY2xlYW4sIDE1LzY1NTM2MDAgZmlsZXMsIDQ2OTg0NC8yNjIxNDQw
MCBibG9ja3MNDQobWz8yNWwbWz8xYxs3G1sxR1sbWzMybSBvayAbWzM5OzQ5bRs4G1s/MjVoG1s/
MGNkb25lLg0NClsuLi4uXSBNb3VudGluZyBsb2NhbCBmaWxlc3lzdGVtcy4uLlsgICAyMS4yNjQy
NTVdIGtqb3VybmFsZCBzdGFydGluZy4gIENvbW1pdCBpbnRlcnZhbCA1IHNlY29uZHMNDQpbICAg
MjEuMjY0NjI3XSBFWFQzIEZTIG9uIGRtLTUsIGludGVybmFsIGpvdXJuYWwNDQpbICAgMjEuMjY0
NjQzXSBFWFQzLWZzOiBtb3VudGVkIGZpbGVzeXN0ZW0gd2l0aCB3cml0ZWJhY2sgZGF0YSBtb2Rl
Lg0NChtbPzI1bBtbPzFjGzcbWzFHWxtbMzFtRkFJTBtbMzk7NDltGzgbWz8yNWgbWz8wYxtbMzFt
ZmFpbGVkLhtbMzk7NDltDQ0KWy4uLi5dIEFjdGl2YXRpbmcgc3dhcGZpbGUgc3dhcC4uLhtbPzI1
bBtbPzFjGzcbWzFHWxtbMzJtIG9rIBtbMzk7NDltGzgbWz8yNWgbWz8wY2RvbmUuDQ0KWy4uLi5d
IENsZWFuaW5nIHVwIHRlbXBvcmFyeSBmaWxlcy4uLhtbPzI1bBtbPzFjGzcbWzFHWxtbMzJtIG9r
IBtbMzk7NDltGzgbWz8yNWgbWz8wYy4NDQpbLi4uLl0gQ29uZmlndXJpbmcgbmV0d29yayBpbnRl
cmZhY2VzLi4uDQ0KV2FpdGluZyBmb3IgYSBtYXggb2YgMCBzZWNvbmRzIGZvciBldGgwIHRvIGJl
Y29tZSBhdmFpbGFibGUuDQ0KWyAgIDIyLjU1OTEyNV0gZGV2aWNlIGV0aDAgZW50ZXJlZCBwcm9t
aXNjdW91cyBtb2RlDQ0KWyAgIDIyLjYzNDQ2NV0gQUREUkNPTkYoTkVUREVWX1VQKTogZXRoMDog
bGluayBpcyBub3QgcmVhZHkNDQpJbnRlcm5ldCBTeXN0ZW1zIENvbnNvcnRpdW0gREhDUCBDbGll
bnQgNC4yLjINDQpDb3B5cmlnaHQgMjAwNC0yMDExIEludGVybmV0IFN5c3RlbXMgQ29uc29ydGl1
bS4NDQpBbGwgcmlnaHRzIHJlc2VydmVkLg0NCkZvciBpbmZvLCBwbGVhc2UgdmlzaXQgaHR0cHM6
Ly93d3cuaXNjLm9yZy9zb2Z0d2FyZS9kaGNwLw0NCg0NCkxpc3RlbmluZyBvbiBMUEYveGVuYnIw
LzAwOmUwOjgxOjgwOjFkOmEwDQ0KU2VuZGluZyBvbiAgIExQRi94ZW5icjAvMDA6ZTA6ODE6ODA6
MWQ6YTANDQpTZW5kaW5nIG9uICAgU29ja2V0L2ZhbGxiYWNrDQ0KREhDUERJU0NPVkVSIG9uIHhl
bmJyMCB0byAyNTUuMjU1LjI1NS4yNTUgcG9ydCA2NyBpbnRlcnZhbCA0DQ0KWyAgIDI1LjQwODk5
M10gdGczOiBldGgwOiBMaW5rIGlzIHVwIGF0IDEwMDAgTWJwcywgZnVsbCBkdXBsZXguDQ0KWyAg
IDI1LjQwOTIyNl0gdGczOiBldGgwOiBGbG93IGNvbnRyb2wgaXMgb24gZm9yIFRYIGFuZCBvbiBm
b3IgUlguDQ0KWyAgIDI1LjQwOTIyNl0gQUREUkNPTkYoTkVUREVWX0NIQU5HRSk6IGV0aDA6IGxp
bmsgYmVjb21lcyByZWFkeQ0NClsgICAyNS40MDkyMjZdIHhlbmJyMDogcG9ydCAxKGV0aDApIGVu
dGVyaW5nIGZvcndhcmRpbmcgc3RhdGUNDQpESENQRElTQ09WRVIgb24geGVuYnIwIHRvIDI1NS4y
NTUuMjU1LjI1NSBwb3J0IDY3IGludGVydmFsIDQNDQpESENQRElTQ09WRVIgb24geGVuYnIwIHRv
IDI1NS4yNTUuMjU1LjI1NSBwb3J0IDY3IGludGVydmFsIDUNDQpESENQRElTQ09WRVIgb24geGVu
YnIwIHRvIDI1NS4yNTUuMjU1LjI1NSBwb3J0IDY3IGludGVydmFsIDcNDQpESENQRElTQ09WRVIg
b24geGVuYnIwIHRvIDI1NS4yNTUuMjU1LjI1NSBwb3J0IDY3IGludGVydmFsIDE1DQ0KREhDUERJ
U0NPVkVSIG9uIHhlbmJyMCB0byAyNTUuMjU1LjI1NS4yNTUgcG9ydCA2NyBpbnRlcnZhbCAxMw0N
CkRIQ1BSRVFVRVNUIG9uIHhlbmJyMCB0byAyNTUuMjU1LjI1NS4yNTUgcG9ydCA2Nw0NCkRIQ1BP
RkZFUiBmcm9tIDEwLjgwLjIyNC4xDQ0KREhDUEFDSyBmcm9tIDEwLjgwLjIyNC4xDQ0KYm91bmQg
dG8gMTAuODAuMjI0LjE0NCAtLSByZW5ld2FsIGluIDIwOTgzIHNlY29uZHMuDQ0KDQ0KV2FpdGlu
ZyBmb3IgYSBtYXggb2YgMCBzZWNvbmRzIGZvciBldGgxIHRvIGJlY29tZSBhdmFpbGFibGUuDQ0K
WyAgIDU4Ljc0NzA3MV0gZGV2aWNlIGV0aDEgZW50ZXJlZCBwcm9taXNjdW91cyBtb2RlDQ0KWyAg
IDU4Ljc5ODQ3Nl0gQUREUkNPTkYoTkVUREVWX1VQKTogZXRoMTogbGluayBpcyBub3QgcmVhZHkN
DQppZnVwOiBpbnRlcmZhY2UgZXRoMCBhbHJlYWR5IGNvbmZpZ3VyZWQNDQobWz8yNWwbWz8xYxs3
G1sxR1sbWzMybSBvayAbWzM5OzQ5bRs4G1s/MjVoG1s/MGNkb25lLg0NClsuLi4uXSBTZXR0aW5n
IGtlcm5lbCB2YXJpYWJsZXMgLi4uG1s/MjVsG1s/MWMbNxtbMUdbG1szMm0gb2sgG1szOTs0OW0b
OBtbPzI1aBtbPzBjZG9uZS4NDQpbLi4uLl0gU3RhcnRpbmcgcnBjYmluZCBkYWVtb24uLi4bWz8y
NWwbWz8xYxs3G1sxR1sbWzMybSBvayAbWzM5OzQ5bRs4G1s/MjVoG1s/MGMuDQ0KWy4uLi5dIFN0
YXJ0aW5nIE5GUyBjb21tb24gdXRpbGl0aWVzOiBzdGF0ZCBpZG1hcGQbWz8yNWwbWz8xYxs3G1sx
R1sbWzMybSBvayAbWzM5OzQ5bRs4G1s/MjVoG1s/MGMuDQ0KWy4uLi5dIENsZWFuaW5nIHVwIHRl
bXBvcmFyeSBmaWxlcy4uLhtbPzI1bBtbPzFjGzcbWzFHWxtbMzJtIG9rIBtbMzk7NDltGzgbWz8y
NWgbWz8wYy4NDQpbG1szNm1pbmZvG1szOTs0OW1dIFNldHRpbmcgY29uc29sZSBzY3JlZW4gbW9k
ZXMgYW5kIGZvbnRzLg0NCnNldHRlcm06IGNhbm5vdCAodW4pc2V0IHBvd2Vyc2F2ZSBtb2RlOiBJ
bnZhbGlkIGFyZ3VtZW50DQ0KG1s5OzMwXRtbMTQ7MzBdDUlOSVQ6IEVudGVyaW5nIHJ1bmxldmVs
OiAyDQ0NClsbWzM2bWluZm8bWzM5OzQ5bV0gVXNpbmcgbWFrZWZpbGUtc3R5bGUgY29uY3VycmVu
dCBib290IGluIHJ1bmxldmVsIDIuDQ0KWy4uLi5dIFN0YXJ0aW5nIE5GUyBjb21tb24gdXRpbGl0
aWVzOiBzdGF0ZCBpZG1hcGQbWz8yNWwbWz8xYxs3G1sxR1sbWzMybSBvayAbWzM5OzQ5bRs4G1s/
MjVoG1s/MGMuDQ0KWy4uLi5dIFN0YXJ0aW5nIHJwY2JpbmQgZGFlbW9uLi4uWy4uLi5dIEFscmVh
ZHkgcnVubmluZy4bWz8yNWwbWz8xYxs3G1sxR1sbWzMybSBvayAbWzM5OzQ5bRs4G1s/MjVoG1s/
MGMuDQ0KWy4uLi5dIFN0YXJ0aW5nIGVuaGFuY2VkIHN5c2xvZ2Q6IHJzeXNsb2dkG1s/MjVsG1s/
MWMbNxtbMUdbG1szMm0gb2sgG1szOTs0OW0bOBtbPzI1aBtbPzBjLg0NClsuLi4uXSBTdGFydGlu
ZyBBQ1BJIHNlcnZpY2VzLi4uG1s/MjVsG1s/MWMbNxtbMUdbG1szMm0gb2sgG1szOTs0OW0bOBtb
PzI1aBtbPzBjLg0NClsuLi4uXSBTdGFydGluZyBkZWZlcnJlZCBleGVjdXRpb24gc2NoZWR1bGVy
OiBhdGQbWz8yNWwbWz8xYxs3G1sxR1sbWzMybSBvayAbWzM5OzQ5bRs4G1s/MjVoG1s/MGMuDQ0K
Wy4uLi5dIFN0YXJ0aW5nIHBlcmlvZGljIGNvbW1hbmQgc2NoZWR1bGVyOiBjcm9uG1s/MjVsG1s/
MWMbNxtbMUdbG1szMm0gb2sgG1szOTs0OW0bOBtbPzI1aBtbPzBjLg0NClsuLi4uXSBTdGFydGlu
ZyBzeXN0ZW0gbWVzc2FnZSBidXM6IGRidXMbWz8yNWwbWz8xYxs3G1sxR1sbWzMybSBvayAbWzM5
OzQ5bRs4G1s/MjVoG1s/MGMuDQ0KWy4uLi5dIFN0YXJ0aW5nIE1UQTogZXhpbTQbWz8yNWwbWz8x
Yxs3G1sxR1sbWzMybSBvayAbWzM5OzQ5bRs4G1s/MjVoG1s/MGMuDQ0KWxtbMzZtaW5mbxtbMzk7
NDltXSBOb3Qgc3RhcnRpbmcgaW50ZXJuZXQgc3VwZXJzZXJ2ZXI6IG5vIHNlcnZpY2VzIGVuYWJs
ZWQuDQ0KWy4uLi5dIFN0YXJ0aW5nIE9wZW5CU0QgU2VjdXJlIFNoZWxsIHNlcnZlcjogc3NoZBtb
PzI1bBtbPzFjGzcbWzFHWxtbMzJtIG9rIBtbMzk7NDltGzgbWz8yNWgbWz8wYy4NDQpTdGFydGlu
ZyBveGVuc3RvcmVkLi4uWyAgIDY3Ljc5ODA0MV0gWEVOQlVTOiBVbmFibGUgdG8gcmVhZCBjcHUg
c3RhdGUNDQpbICAgNjcuODAzNTk2XSBYRU5CVVM6IFVuYWJsZSB0byByZWFkIGNwdSBzdGF0ZQ0N
ClsgICA2Ny44MDg5OTJdIFgNDQpFTkJVUzogVW5hYmxlIHRvU2V0dGluZyBkb21haW4gMCBuYW1l
Li4uIHJlYWQgY3B1IHN0YXRlDQ0NCg0KWyAgIDY3LjgxNjQ5NV0gWEVOQlVTOiBVbmFibGUgdG8g
cmVhZCBjcHUgc3RhdGUNDQpbICAgNjcuODIxNTA0XSBYRU5CVVM6IFVuYWJsZSB0byByZWFkIGNw
dSBzdGF0ZQ0NClsgICA2Ny44MjcxNjldIFhFTkJVUzogVW5hYmxlIHRvIHJlYWQgY3B1IHN0YXRl
DQ0KWyAgIDY3LjgzMjA0N10gWEVOQlVTOiBVbmFibGUgdG8gcmVhZCBjcHUgc3RhdGUNDQpbICAg
NjcuODM3MTA4XSBYU3RhcnRpbmcgeGVuY29uc29sZWQuLi5FTkJVUzogVW5hYmxlIHRvDQ0KIHJl
YWQgY3B1IHN0YXRlDQ0KU3RhcnRpbmcgUUVNVSBhcyBkaXNrIGJhY2tlbmQgZm9yIGRvbTANDQpS
dW5uaW5nIHJjLmxvY2FsDQ0KG1tyG1tIG1tKDQ0NCkRlYmlhbiBHTlUvTGludXggd2hlZXp5L3Np
ZCBleGlsZSBodmMwDQ0KDQ0KZXhpbGUgbG9naW46IAo=
--047d7b66f1cd72413004c7744dc4
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--047d7b66f1cd72413004c7744dc4--


From xen-devel-bounces@lists.xen.org Fri Aug 17 11:33:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 11:33:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2Ko1-0005ru-Hv; Fri, 17 Aug 2012 11:33:33 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1T2KYQ-0005h5-AX
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 11:17:27 +0000
Received: from [85.158.143.35:59422] by server-1.bemta-4.messagelabs.com id
	F6/D0-07754-5482E205; Fri, 17 Aug 2012 11:17:25 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1345202236!13839524!1
X-Originating-IP: [209.85.215.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32755 invoked from network); 17 Aug 2012 11:17:16 -0000
Received: from mail-ey0-f173.google.com (HELO mail-ey0-f173.google.com)
	(209.85.215.173)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 11:17:16 -0000
Received: by eaac13 with SMTP id c13so1154901eaa.32
	for <xen-devel@lists.xen.org>; Fri, 17 Aug 2012 04:17:16 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:date:x-google-sender-auth:message-id:subject
	:from:to:content-type;
	bh=i7+Ed18qeJyvcUfLbPmeM8H/sR6VzcqzqrvYRvOaeR0=;
	b=bJQXxPaYY+gfwas8vvY3faiCfAmry6/+2fglz3Hop+Cvm85PpFAxxd/uMR5xd4t+T+
	+YFATfYg/eB1Pz2gc4Z1eDNOg/Wdma5/FXbosoXkqmFBkWNLX8xs4UT0+3Eztej+1CQE
	7ld+ntecEVyIPT8NMLC/no7UVsx9xfH6xLXL5zRsl0N9MY//UwrlB77jXRBoWFwqywUE
	YKkS08I6tz5rBH/DfWXggWj+2/+sVPlahRil52pcZlGPoT/6Hp9oADdKMtRDYGOrvoUa
	3wV7K3V1AETw8gdIksifofhQhccwttbcPdZI8hgPQT9jvyGhgafzMt3bHLvQAWzK7Jv4
	DRqQ==
MIME-Version: 1.0
Received: by 10.14.4.201 with SMTP id 49mr5918790eej.0.1345202236309; Fri, 17
	Aug 2012 04:17:16 -0700 (PDT)
Received: by 10.14.215.195 with HTTP; Fri, 17 Aug 2012 04:17:15 -0700 (PDT)
Date: Fri, 17 Aug 2012 12:17:15 +0100
X-Google-Sender-Auth: 1eNBbbOwkVZ6k_kw0Y_eNlcUJgw
Message-ID: <CAFLBxZZaC8NM5Xk5nqHBMA-SftkomuG1VAcWvaqh4rac5hCi7Q@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: xen-devel@lists.xen.org, 
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>, 
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Content-Type: multipart/mixed; boundary=047d7b66f1cd72413004c7744dc4
X-Mailman-Approved-At: Fri, 17 Aug 2012 11:33:31 +0000
Subject: [Xen-devel] Failure to boot default Debian wheezy (pvops) kernel on
	4.2-rc2
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--047d7b66f1cd72413004c7744dc4
Content-Type: text/plain; charset=ISO-8859-1

I just tried to install Xen-4.2.0-rc2 on a Debian wheezy system, but
couldn't boot under Xen 4.2.  The box is an 8-core AMD, I think
Barcelona.  The wheezy kernel is 3.2.21-3, 32-bit version.

The problems seem to have started here:

-- snip --
[    0.060280] ACPI: Core revision 20110623^M^M
[    0.072384] Performance Events: Broken BIOS detected, complain to
your hardware vendor.^M^M
[    0.076014] [Firmware Bug]: the BIOS has corrupted hw-PMU resources
(MSR c0010000 is 530076)^M^M
[    0.080007] AMD PMU driver.^M^M
[    0.082864] ------------[ cut here ]------------^M^M
[    0.084018] WARNING: at
/build/buildd-linux_3.2.21-3-i386-vEohn4/linux-3.2.21/arch/x86/xen/enlighten.c:738
perf_events_lapic_init+0x28/0x29()^M^M
[    0.088009] Hardware name: empty^M^M
[    0.091299] Modules linked in:^M^M
[    0.092275] Pid: 1, comm: swapper/0 Not tainted 3.2.0-3-686-pae #1^M^M
[    0.096008] Call Trace:^M^M
[    0.098527]  [<c1037fcc>] ? warn_slowpath_common+0x68/0x79^M^M
[    0.100019]  [<c10150d2>] ? perf_events_lapic_init+0x28/0x29^M^M
[    0.104010]  [<c1037fea>] ? warn_slowpath_null+0xd/0x10^M^M
[    0.108011]  [<c10150d2>] ? perf_events_lapic_init+0x28/0x29^M^M
[    0.112015]  [<c141c97e>] ? init_hw_perf_events+0x223/0x3b1^M^M
[    0.116012]  [<c141c75b>] ? check_bugs+0x1d9/0x1d9^M^M
[    0.120012]  [<c1003074>] ? do_one_initcall+0x66/0x10e^M^M
[    0.124012]  [<c1415770>] ? kernel_init+0x6d/0x125^M^M
[    0.128012]  [<c1415703>] ? start_kernel+0x325/0x325^M^M
[    0.132015]  [<c12c463e>] ? kernel_thread_helper+0x6/0x10^M^M
[    0.136019] ---[ end trace a7919e7f17c0a725 ]---^M^M
-- snip --

And pretty soon degenerated into log message spamming of this sort:

-- snip --
(XEN) traps.c:2584:d0 Domain attempted WRMSR 00000000c0010000 from
0x0000000000530076 to 0x0000000000130076.^M
(XEN) traps.c:2584:d0 Domain attempted WRMSR 00000000c0010000 from
0x0000000000530076 to 0x0000000000130076.^M
(XEN) traps.c:2584:d0 Domain attempted WRMSR 00000000c0010000 from
0x0000000000530076 to 0x0000000000130076.^M
(XEN) traps.c:2584:d0 Domain attempted WRMSR 00000000c0010000 from
0x0000000000530076 to 0x0000000000130076.^M
(XEN) traps.c:2584:d0 Domain attempted WRMSR 00000000c0010000 from
0x0000000000530076 to 0x0000000000130076.^M
-- snip --

The serial log is attached ("exile.log").

An earlier kernel I had lying around, 2.6.32.25 (perhaps one of
Jeremy's?) boots fine; the serial log is also attached
("exile-good.log").  It also seems to have the WARN above, so maybe
that's not actually the issue.

Any ideas?

 -George

--047d7b66f1cd72413004c7744dc4
Content-Type: application/octet-stream; name="exile.log"
Content-Disposition: attachment; filename="exile.log"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_h5z6du4l0

IF9fICBfXyAgICAgICAgICAgIF8gIF8gICAgX19fXyAgICBfX18gICAgICAgICAgICAgIF9fX18g
IA0KIFwgXC8gL19fXyBfIF9fICAgfCB8fCB8ICB8X19fIFwgIC8gXyBcICAgIF8gX18gX19ffF9f
XyBcIA0KICBcICAvLyBfIFwgJ18gXCAgfCB8fCB8XyAgIF9fKSB8fCB8IHwgfF9ffCAnX18vIF9f
fCBfXykgfA0KICAvICBcICBfXy8gfCB8IHwgfF9fICAgX3wgLyBfXy8gfCB8X3wgfF9ffCB8IHwg
KF9fIC8gX18vIA0KIC9fL1xfXF9fX3xffCB8X3wgICAgfF98KF8pX19fX18oXylfX18vICAgfF98
ICBcX19ffF9fX19ffA0KICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgIA0KKFhFTikgWGVuIHZlcnNpb24gNC4yLjAtcmMyICh4ZW51c2VyQHVr
LnhlbnNvdXJjZS5jb20pIChnY2MgKFVidW50dS9MaW5hcm8gNC42LjMtMXVidW50dTUpIDQuNi4z
KSBUdWUgQXVnIDE0IDE4OjU5OjQ5IEJTVCAyMDEyDQooWEVOKSBMYXRlc3QgQ2hhbmdlU2V0OiBU
dWUgQXVnIDE0IDE4OjQxOjUzIDIwMTIgKzAxMDAgMjU3NTA6ZDhkZjExNTJlYjNiDQooWEVOKSBD
b25zb2xlIG91dHB1dCBpcyBzeW5jaHJvbm91cy4NCihYRU4pIEJvb3Rsb2FkZXI6IEdSVUIgMS45
OS0yMi4xDQooWEVOKSBDb21tYW5kIGxpbmU6IHBsYWNlaG9sZGVyIHdhdGNoZG9nIGNwdWluZm8g
Y29tMT0xMTUyMDAsOG4xIGNvbnNvbGU9Y29tMSx0dHkgc3luY19jb25zb2xlIGNvbnNvbGVfdG9f
cmluZw0KKFhFTikgVmlkZW8gaW5mb3JtYXRpb246DQooWEVOKSAgVkdBIGlzIHRleHQgbW9kZSA4
MHgyNSwgZm9udCA4eDE2DQooWEVOKSAgVkJFL0REQyBtZXRob2RzOiBWMjsgRURJRCB0cmFuc2Zl
ciB0aW1lOiAyIHNlY29uZHMNCihYRU4pIERpc2MgaW5mb3JtYXRpb246DQooWEVOKSAgRm91bmQg
MiBNQlIgc2lnbmF0dXJlcw0KKFhFTikgIEZvdW5kIDIgRUREIGluZm9ybWF0aW9uIHN0cnVjdHVy
ZXMNCihYRU4pIFhlbi1lODIwIFJBTSBtYXA6DQooWEVOKSAgMDAwMDAwMDAwMDAwMDAwMCAtIDAw
MDAwMDAwMDAwOWM0MDAgKHVzYWJsZSkNCihYRU4pICAwMDAwMDAwMDAwMDljNDAwIC0gMDAwMDAw
MDAwMDBhMDAwMCAocmVzZXJ2ZWQpDQooWEVOKSAgMDAwMDAwMDAwMDBjZTAwMCAtIDAwMDAwMDAw
MDAxMDAwMDAgKHJlc2VydmVkKQ0KKFhFTikgIDAwMDAwMDAwMDAxMDAwMDAgLSAwMDAwMDAwMGNm
ZWUwMDAwICh1c2FibGUpDQooWEVOKSAgMDAwMDAwMDBjZmVlMDAwMCAtIDAwMDAwMDAwY2ZlZTUw
MDAgKEFDUEkgZGF0YSkNCihYRU4pICAwMDAwMDAwMGNmZWU1MDAwIC0gMDAwMDAwMDBjZmVmMTAw
MCAoQUNQSSBOVlMpDQooWEVOKSAgMDAwMDAwMDBjZmVmMTAwMCAtIDAwMDAwMDAwZDAwMDAwMDAg
KHJlc2VydmVkKQ0KKFhFTikgIDAwMDAwMDAwZmVjMDAwMDAgLSAwMDAwMDAwMGZlYzAzMDAwIChy
ZXNlcnZlZCkNCihYRU4pICAwMDAwMDAwMGZlZTAwMDAwIC0gMDAwMDAwMDBmZWUwMTAwMCAocmVz
ZXJ2ZWQpDQooWEVOKSAgMDAwMDAwMDBmZmY4MDAwMCAtIDAwMDAwMDAxMDAwMDAwMDAgKHJlc2Vy
dmVkKQ0KKFhFTikgIDAwMDAwMDAxMDAwMDAwMDAgLSAwMDAwMDAwMTMwMDAwMDAwICh1c2FibGUp
DQooWEVOKSBBQ1BJOiBSU0RQIDAwMEY4MDgwLCAwMDI0IChyMiBQVExURCApDQooWEVOKSBBQ1BJ
OiBYU0RUIENGRUUwMUZFLCAwMDVDIChyMSBCUkNNICAgRVhQTE9TTiAgIDYwNDAwMDAgUFRMICAg
MjAwMDAwMSkNCihYRU4pIEFDUEk6IEZBQ1AgQ0ZFRTAyQ0UsIDAwRjQgKHIzIEJSQ00gICBFWFBM
T1NOICAgNjA0MDAwMCBNU0ZUICAyMDAwMDAxKQ0KKFhFTikgQUNQSSBXYXJuaW5nICh0YmZhZHQt
MDQ0NCk6IE9wdGlvbmFsIGZpZWxkICJQbTJDb250cm9sQmxvY2siIGhhcyB6ZXJvIGFkZHJlc3Mg
b3IgbGVuZ3RoOiAwMDAwMDAwMDAwMDAwMDAwL0MgWzIwMDcwMTI2XQ0KKFhFTikgQUNQSTogRFNE
VCBDRkVFMDNDMiwgNDk0OSAocjIgQlJDTSAgIEVYUExPU04gICA2MDQwMDAwIE1TRlQgIDMwMDAw
MDApDQooWEVOKSBBQ1BJOiBGQUNTIENGRUYwRkMwLCAwMDQwDQooWEVOKSBBQ1BJOiBUQ1BBIENG
RUU0RDBCLCAwMDMyIChyMSBCUkNNICAgRVhQTE9TTiAgIDYwNDAwMDAgUFRMICAyMDAwMDAwMSkN
CihYRU4pIEFDUEk6IFNSQVQgQ0ZFRTREM0QsIDAxMjggKHIxIEFNRCAgICBGQU1fRl8xMCAgNjA0
MDAwMCBBTUQgICAgICAgICAxKQ0KKFhFTikgQUNQSTogSFBFVCBDRkVFNEU2NSwgMDAzOCAocjEg
QlJDTSAgIEVYUExPU04gICA2MDQwMDAwIEJSQ00gIDIwMDAwMDEpDQooWEVOKSBBQ1BJOiBTU0RU
IENGRUU0RTlELCAwMDQ5IChyMSBCUkNNICAgUFJUMCAgICAgIDYwNDAwMDAgQlJDTSAgMjAwMDAw
MSkNCihYRU4pIEFDUEk6IFNQQ1IgQ0ZFRTRFRTYsIDAwNTAgKHIxIFBUTFREICAkVUNSVEJMJCAg
NjA0MDAwMCBQVEwgICAgICAgICAxKQ0KKFhFTikgQUNQSTogQVBJQyBDRkVFNEYzNiwgMDBDQSAo
cjEgQlJDTSAgIEVYUExPU04gICA2MDQwMDAwIFBUTCAgIDIwMDAwMDEpDQooWEVOKSBTeXN0ZW0g
UkFNOiA0MDk0TUIgKDQxOTI3NTJrQikNCihYRU4pIFNSQVQ6IFBYTSAwIC0+IEFQSUMgMCAtPiBO
b2RlIDANCihYRU4pIFNSQVQ6IFBYTSAwIC0+IEFQSUMgMSAtPiBOb2RlIDANCihYRU4pIFNSQVQ6
IFBYTSAwIC0+IEFQSUMgMiAtPiBOb2RlIDANCihYRU4pIFNSQVQ6IFBYTSAwIC0+IEFQSUMgMyAt
PiBOb2RlIDANCihYRU4pIFNSQVQ6IFBYTSAxIC0+IEFQSUMgNCAtPiBOb2RlIDENCihYRU4pIFNS
QVQ6IFBYTSAxIC0+IEFQSUMgNSAtPiBOb2RlIDENCihYRU4pIFNSQVQ6IFBYTSAxIC0+IEFQSUMg
NiAtPiBOb2RlIDENCihYRU4pIFNSQVQ6IFBYTSAxIC0+IEFQSUMgNyAtPiBOb2RlIDENCihYRU4p
IFNSQVQ6IE5vZGUgMCBQWE0gMCAwLWEwMDAwDQooWEVOKSBTUkFUOiBOb2RlIDAgUFhNIDAgMTAw
MDAwLWQwMDAwMDAwDQooWEVOKSBTUkFUOiBOb2RlIDAgUFhNIDAgMTAwMDAwMDAwLTEzMDAwMDAw
MA0KKFhFTikgTlVNQTogQWxsb2NhdGVkIG1lbW5vZGVtYXAgZnJvbSAxMmRmNzIwMDAgLSAxMmRm
NzQwMDANCihYRU4pIE5VTUE6IFVzaW5nIDggZm9yIHRoZSBoYXNoIHNoaWZ0Lg0KKFhFTikgU1JB
VDogTm9kZSAxIGhhcyBubyBtZW1vcnkuIEJJT1MgQnVnIG9yIG1pcy1jb25maWd1cmVkIGhhcmR3
YXJlPw0KKFhFTikgRG9tYWluIGhlYXAgaW5pdGlhbGlzZWQgRE1BIHdpZHRoIDMwIGJpdHMNCihY
RU4pIGZvdW5kIFNNUCBNUC10YWJsZSBhdCAwMDBmODBiMA0KKFhFTikgRE1JIHByZXNlbnQuDQoo
WEVOKSBVc2luZyBBUElDIGRyaXZlciBkZWZhdWx0DQooWEVOKSBBQ1BJOiBQTS1UaW1lciBJTyBQ
b3J0OiAweDUwOA0KKFhFTikgQUNQSTogQUNQSSBTTEVFUCBJTkZPOiBwbTF4X2NudFs1NDQsNTA0
XSwgcG0xeF9ldnRbNTAwLDU0MF0NCihYRU4pIEFDUEk6ICAgICAgICAgICAgICAgICAgd2FrZXVw
X3ZlY1tjZmVmMGZjY10sIHZlY19zaXplWzIwXQ0KKFhFTikgQUNQSTogTG9jYWwgQVBJQyBhZGRy
ZXNzIDB4ZmVlMDAwMDANCihYRU4pIEFDUEk6IExBUElDIChhY3BpX2lkWzB4MDBdIGxhcGljX2lk
WzB4MDBdIGVuYWJsZWQpDQooWEVOKSBQcm9jZXNzb3IgIzAgMDo0IEFQSUMgdmVyc2lvbiAxNg0K
KFhFTikgQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgwMV0gbGFwaWNfaWRbMHgwMV0gZW5hYmxlZCkN
CihYRU4pIFByb2Nlc3NvciAjMSAwOjQgQVBJQyB2ZXJzaW9uIDE2DQooWEVOKSBBQ1BJOiBMQVBJ
QyAoYWNwaV9pZFsweDAyXSBsYXBpY19pZFsweDAyXSBlbmFibGVkKQ0KKFhFTikgUHJvY2Vzc29y
ICMyIDA6NCBBUElDIHZlcnNpb24gMTYNCihYRU4pIEFDUEk6IExBUElDIChhY3BpX2lkWzB4MDNd
IGxhcGljX2lkWzB4MDNdIGVuYWJsZWQpDQooWEVOKSBQcm9jZXNzb3IgIzMgMDo0IEFQSUMgdmVy
c2lvbiAxNg0KKFhFTikgQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgwNF0gbGFwaWNfaWRbMHgwNF0g
ZW5hYmxlZCkNCihYRU4pIFByb2Nlc3NvciAjNCAwOjQgQVBJQyB2ZXJzaW9uIDE2DQooWEVOKSBB
Q1BJOiBMQVBJQyAoYWNwaV9pZFsweDA1XSBsYXBpY19pZFsweDA1XSBlbmFibGVkKQ0KKFhFTikg
UHJvY2Vzc29yICM1IDA6NCBBUElDIHZlcnNpb24gMTYNCihYRU4pIEFDUEk6IExBUElDIChhY3Bp
X2lkWzB4MDZdIGxhcGljX2lkWzB4MDZdIGVuYWJsZWQpDQooWEVOKSBQcm9jZXNzb3IgIzYgMDo0
IEFQSUMgdmVyc2lvbiAxNg0KKFhFTikgQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgwN10gbGFwaWNf
aWRbMHgwN10gZW5hYmxlZCkNCihYRU4pIFByb2Nlc3NvciAjNyAwOjQgQVBJQyB2ZXJzaW9uIDE2
DQooWEVOKSBBQ1BJOiBMQVBJQ19OTUkgKGFjcGlfaWRbMHgwMF0gaGlnaCBlZGdlIGxpbnRbMHgx
XSkNCihYRU4pIEFDUEk6IExBUElDX05NSSAoYWNwaV9pZFsweDAxXSBoaWdoIGVkZ2UgbGludFsw
eDFdKQ0KKFhFTikgQUNQSTogTEFQSUNfTk1JIChhY3BpX2lkWzB4MDJdIGhpZ2ggZWRnZSBsaW50
WzB4MV0pDQooWEVOKSBBQ1BJOiBMQVBJQ19OTUkgKGFjcGlfaWRbMHgwM10gaGlnaCBlZGdlIGxp
bnRbMHgxXSkNCihYRU4pIEFDUEk6IExBUElDX05NSSAoYWNwaV9pZFsweDA0XSBoaWdoIGVkZ2Ug
bGludFsweDFdKQ0KKFhFTikgQUNQSTogTEFQSUNfTk1JIChhY3BpX2lkWzB4MDVdIGhpZ2ggZWRn
ZSBsaW50WzB4MV0pDQooWEVOKSBBQ1BJOiBMQVBJQ19OTUkgKGFjcGlfaWRbMHgwNl0gaGlnaCBl
ZGdlIGxpbnRbMHgxXSkNCihYRU4pIEFDUEk6IExBUElDX05NSSAoYWNwaV9pZFsweDA3XSBoaWdo
IGVkZ2UgbGludFsweDFdKQ0KKFhFTikgQUNQSTogSU9BUElDIChpZFsweDA4XSBhZGRyZXNzWzB4
ZmVjMDAwMDBdIGdzaV9iYXNlWzBdKQ0KKFhFTikgSU9BUElDWzBdOiBhcGljX2lkIDgsIHZlcnNp
b24gMTcsIGFkZHJlc3MgMHhmZWMwMDAwMCwgR1NJIDAtMTUNCihYRU4pIEFDUEk6IElPQVBJQyAo
aWRbMHgwOV0gYWRkcmVzc1sweGZlYzAxMDAwXSBnc2lfYmFzZVsxNl0pDQooWEVOKSBJT0FQSUNb
MV06IGFwaWNfaWQgOSwgdmVyc2lvbiAxNywgYWRkcmVzcyAweGZlYzAxMDAwLCBHU0kgMTYtMzEN
CihYRU4pIEFDUEk6IElPQVBJQyAoaWRbMHgwYV0gYWRkcmVzc1sweGZlYzAyMDAwXSBnc2lfYmFz
ZVszMl0pDQooWEVOKSBJT0FQSUNbMl06IGFwaWNfaWQgMTAsIHZlcnNpb24gMTcsIGFkZHJlc3Mg
MHhmZWMwMjAwMCwgR1NJIDMyLTQ3DQooWEVOKSBBQ1BJOiBJTlRfU1JDX09WUiAoYnVzIDAgYnVz
X2lycSAwIGdsb2JhbF9pcnEgMiBoaWdoIGVkZ2UpDQooWEVOKSBBQ1BJOiBJUlEwIHVzZWQgYnkg
b3ZlcnJpZGUuDQooWEVOKSBBQ1BJOiBJUlEyIHVzZWQgYnkgb3ZlcnJpZGUuDQooWEVOKSBFbmFi
bGluZyBBUElDIG1vZGU6ICBGbGF0LiAgVXNpbmcgMyBJL08gQVBJQ3MNCihYRU4pIEFDUEk6IEhQ
RVQgaWQ6IDB4MTE2NmEyMDEgYmFzZTogMHhmZWQwMDAwMA0KKFhFTikgVGFibGUgaXMgbm90IGZv
dW5kIQ0KKFhFTikgVXNpbmcgQUNQSSAoTUFEVCkgZm9yIFNNUCBjb25maWd1cmF0aW9uIGluZm9y
bWF0aW9uDQooWEVOKSBTTVA6IEFsbG93aW5nIDggQ1BVcyAoMCBob3RwbHVnIENQVXMpDQooWEVO
KSBJUlEgbGltaXRzOiA0OCBHU0ksIDE1MDQgTVNJL01TSS1YDQooWEVOKSBVc2luZyBzY2hlZHVs
ZXI6IFNNUCBDcmVkaXQgU2NoZWR1bGVyIChjcmVkaXQpDQooWEVOKSBJbml0aWFsaXppbmcgQ1BV
IzANCihYRU4pIERldGVjdGVkIDE5OTUuMDkyIE1IeiBwcm9jZXNzb3IuDQooWEVOKSBJbml0aW5n
IG1lbW9yeSBzaGFyaW5nLg0KKFhFTikgQ1BVOiBMMSBJIGNhY2hlIDY0SyAoNjQgYnl0ZXMvbGlu
ZSksIEQgY2FjaGUgNjRLICg2NCBieXRlcy9saW5lKQ0KKFhFTikgQ1BVOiBMMiBDYWNoZTogNTEy
SyAoNjQgYnl0ZXMvbGluZSkNCihYRU4pIENQVSAwKDQpIC0+IFByb2Nlc3NvciAwLCBDb3JlIDAN
CihYRU4pIEFNRCBGYW0xMGggbWFjaGluZSBjaGVjayByZXBvcnRpbmcgZW5hYmxlZA0KKFhFTikg
QU1ELVZpOiBJT01NVSBub3QgZm91bmQhDQooWEVOKSBJL08gdmlydHVhbGlzYXRpb24gZGlzYWJs
ZWQNCihYRU4pIENQVTA6IEFNRCBFbmdpbmVlcmluZyBTYW1wbGUgc3RlcHBpbmcgMDANCihYRU4p
IEVOQUJMSU5HIElPLUFQSUMgSVJRcw0KKFhFTikgIC0+IFVzaW5nIG5ldyBBQ0sgbWV0aG9kDQoo
WEVOKSAuLlRJTUVSOiB2ZWN0b3I9MHhGMCBhcGljMT0wIHBpbjE9MiBhcGljMj0tMSBwaW4yPS0x
DQooWEVOKSBQbGF0Zm9ybSB0aW1lciBpcyAxNC4zMThNSHogSFBFVA0KKFhFTikgQWxsb2NhdGVk
IGNvbnNvbGUgcmluZyBvZiA2NCBLaUIuDQooWEVOKSBIVk06IEFTSURzIGVuYWJsZWQuDQooWEVO
KSBTVk06IFN1cHBvcnRlZCBhZHZhbmNlZCBmZWF0dXJlczoNCihYRU4pICAtIE5lc3RlZCBQYWdl
IFRhYmxlcyAoTlBUKQ0KKFhFTikgIC0gTGFzdCBCcmFuY2ggUmVjb3JkIChMQlIpIFZpcnR1YWxp
c2F0aW9uDQooWEVOKSAgLSBOZXh0LVJJUCBTYXZlZCBvbiAjVk1FWElUDQooWEVOKSBIVk06IFNW
TSBlbmFibGVkDQooWEVOKSBIVk06IEhhcmR3YXJlIEFzc2lzdGVkIFBhZ2luZyAoSEFQKSBkZXRl
Y3RlZA0KKFhFTikgSFZNOiBIQVAgcGFnZSBzaXplczogNGtCLCAyTUIsIDFHQg0KKFhFTikgQ1BV
IDAgQVBJQyAwIC0+IE5vZGUgMA0KKFhFTikgQ1BVIDEgQVBJQyAxIC0+IE5vZGUgMA0KKFhFTikg
Qm9vdGluZyBwcm9jZXNzb3IgMS8xIGVpcCA4YzAwMA0KKFhFTikgSW5pdGlhbGl6aW5nIENQVSMx
DQooWEVOKSBDUFU6IEwxIEkgY2FjaGUgNjRLICg2NCBieXRlcy9saW5lKSwgRCBjYWNoZSA2NEsg
KDY0IGJ5dGVzL2xpbmUpDQooWEVOKSBDUFU6IEwyIENhY2hlOiA1MTJLICg2NCBieXRlcy9saW5l
KQ0KKFhFTikgQ1BVIDEoNCkgLT4gUHJvY2Vzc29yIDAsIENvcmUgMQ0KKFhFTikgQ1BVMTogQU1E
IEVuZ2luZWVyaW5nIFNhbXBsZSBzdGVwcGluZyAwMA0KKFhFTikgbWljcm9jb2RlOiBjb2xsZWN0
X2NwdV9pbmZvOiBwYXRjaF9pZD0weDEwMDAwNjcNCihYRU4pIENQVSAyIEFQSUMgMiAtPiBOb2Rl
IDANCihYRU4pIEJvb3RpbmcgcHJvY2Vzc29yIDIvMiBlaXAgOGMwMDANCihYRU4pIEluaXRpYWxp
emluZyBDUFUjMg0KKFhFTikgQ1BVOiBMMSBJIGNhY2hlIDY0SyAoNjQgYnl0ZXMvbGluZSksIEQg
Y2FjaGUgNjRLICg2NCBieXRlcy9saW5lKQ0KKFhFTikgQ1BVOiBMMiBDYWNoZTogNTEySyAoNjQg
Ynl0ZXMvbGluZSkNCihYRU4pIENQVSAyKDQpIC0+IFByb2Nlc3NvciAwLCBDb3JlIDINCihYRU4p
IENQVTI6IEFNRCBFbmdpbmVlcmluZyBTYW1wbGUgc3RlcHBpbmcgMDANCihYRU4pIG1pY3JvY29k
ZTogY29sbGVjdF9jcHVfaW5mbzogcGF0Y2hfaWQ9MHgxMDAwMDY3DQooWEVOKSBDUFUgMyBBUElD
IDMgLT4gTm9kZSAwDQooWEVOKSBCb290aW5nIHByb2Nlc3NvciAzLzMgZWlwIDhjMDAwDQooWEVO
KSBJbml0aWFsaXppbmcgQ1BVIzMNCihYRU4pIENQVTogTDEgSSBjYWNoZSA2NEsgKDY0IGJ5dGVz
L2xpbmUpLCBEIGNhY2hlIDY0SyAoNjQgYnl0ZXMvbGluZSkNCihYRU4pIENQVTogTDIgQ2FjaGU6
IDUxMksgKDY0IGJ5dGVzL2xpbmUpDQooWEVOKSBDUFUgMyg0KSAtPiBQcm9jZXNzb3IgMCwgQ29y
ZSAzDQooWEVOKSBDUFUzOiBBTUQgRW5naW5lZXJpbmcgU2FtcGxlIHN0ZXBwaW5nIDAwDQooWEVO
KSBtaWNyb2NvZGU6IGNvbGxlY3RfY3B1X2luZm86IHBhdGNoX2lkPTB4MTAwMDA2Nw0KKFhFTikg
Q1BVIDQgQVBJQyA0IC0+IE5vZGUgMQ0KKFhFTikgQm9vdGluZyBwcm9jZXNzb3IgNC80IGVpcCA4
YzAwMA0KKFhFTikgSW5pdGlhbGl6aW5nIENQVSM0DQooWEVOKSBDUFU6IEwxIEkgY2FjaGUgNjRL
ICg2NCBieXRlcy9saW5lKSwgRCBjYWNoZSA2NEsgKDY0IGJ5dGVzL2xpbmUpDQooWEVOKSBDUFU6
IEwyIENhY2hlOiA1MTJLICg2NCBieXRlcy9saW5lKQ0KKFhFTikgQ1BVIDQoNCkgLT4gUHJvY2Vz
c29yIDEsIENvcmUgMA0KKFhFTikgQ1BVNDogQU1EIEVuZ2luZWVyaW5nIFNhbXBsZSBzdGVwcGlu
ZyAwMA0KKFhFTikgbWljcm9jb2RlOiBjb2xsZWN0X2NwdV9pbmZvOiBwYXRjaF9pZD0weDEwMDAw
NjcNCihYRU4pIENQVSA1IEFQSUMgNSAtPiBOb2RlIDENCihYRU4pIEJvb3RpbmcgcHJvY2Vzc29y
IDUvNSBlaXAgOGMwMDANCihYRU4pIEluaXRpYWxpemluZyBDUFUjNQ0KKFhFTikgQ1BVOiBMMSBJ
IGNhY2hlIDY0SyAoNjQgYnl0ZXMvbGluZSksIEQgY2FjaGUgNjRLICg2NCBieXRlcy9saW5lKQ0K
KFhFTikgQ1BVOiBMMiBDYWNoZTogNTEySyAoNjQgYnl0ZXMvbGluZSkNCihYRU4pIENQVSA1KDQp
IC0+IFByb2Nlc3NvciAxLCBDb3JlIDENCihYRU4pIENQVTU6IEFNRCBFbmdpbmVlcmluZyBTYW1w
bGUgc3RlcHBpbmcgMDANCihYRU4pIG1pY3JvY29kZTogY29sbGVjdF9jcHVfaW5mbzogcGF0Y2hf
aWQ9MHgxMDAwMDY3DQooWEVOKSBDUFUgNiBBUElDIDYgLT4gTm9kZSAxDQooWEVOKSBCb290aW5n
IHByb2Nlc3NvciA2LzYgZWlwIDhjMDAwDQooWEVOKSBJbml0aWFsaXppbmcgQ1BVIzYNCihYRU4p
IENQVTogTDEgSSBjYWNoZSA2NEsgKDY0IGJ5dGVzL2xpbmUpLCBEIGNhY2hlIDY0SyAoNjQgYnl0
ZXMvbGluZSkNCihYRU4pIENQVTogTDIgQ2FjaGU6IDUxMksgKDY0IGJ5dGVzL2xpbmUpDQooWEVO
KSBDUFUgNig0KSAtPiBQcm9jZXNzb3IgMSwgQ29yZSAyDQooWEVOKSBDUFU2OiBBTUQgRW5naW5l
ZXJpbmcgU2FtcGxlIHN0ZXBwaW5nIDAwDQooWEVOKSBtaWNyb2NvZGU6IGNvbGxlY3RfY3B1X2lu
Zm86IHBhdGNoX2lkPTB4MTAwMDA2Nw0KKFhFTikgQ1BVIDcgQVBJQyA3IC0+IE5vZGUgMQ0KKFhF
TikgQm9vdGluZyBwcm9jZXNzb3IgNy83IGVpcCA4YzAwMA0KKFhFTikgSW5pdGlhbGl6aW5nIENQ
VSM3DQooWEVOKSBDUFU6IEwxIEkgY2FjaGUgNjRLICg2NCBieXRlcy9saW5lKSwgRCBjYWNoZSA2
NEsgKDY0IGJ5dGVzL2xpbmUpDQooWEVOKSBDUFU6IEwyIENhY2hlOiA1MTJLICg2NCBieXRlcy9s
aW5lKQ0KKFhFTikgQ1BVIDcoNCkgLT4gUHJvY2Vzc29yIDEsIENvcmUgMw0KKFhFTikgQ1BVNzog
QU1EIEVuZ2luZWVyaW5nIFNhbXBsZSBzdGVwcGluZyAwMA0KKFhFTikgbWljcm9jb2RlOiBjb2xs
ZWN0X2NwdV9pbmZvOiBwYXRjaF9pZD0weDEwMDAwNjcNCihYRU4pIEJyb3VnaHQgdXAgOCBDUFVz
DQooWEVOKSBUZXN0aW5nIE5NSSB3YXRjaGRvZyAtLS0gQ1BVIzAgb2theS4gQ1BVIzEgb2theS4g
Q1BVIzIgb2theS4gQ1BVIzMgb2theS4gQ1BVIzQgb2theS4gQ1BVIzUgb2theS4gQ1BVIzYgb2th
eS4gQ1BVIzcgb2theS4gDQooWEVOKSBIUEVUOiAzIHRpbWVycyAoMCB3aWxsIGJlIHVzZWQgZm9y
IGJyb2FkY2FzdCkNCihYRU4pIEFDUEkgc2xlZXAgbW9kZXM6IFMzDQooWEVOKSBNQ0E6IFVzZSBo
dyB0aHJlc2hvbGRpbmcgdG8gYWRqdXN0IHBvbGxpbmcgZnJlcXVlbmN5DQooWEVOKSBtY2hlY2tf
cG9sbDogTWFjaGluZSBjaGVjayBwb2xsaW5nIHRpbWVyIHN0YXJ0ZWQuDQooWEVOKSBYZW5vcHJv
ZmlsZTogRmFpbGVkIHRvIHNldHVwIElCUyBMVlQgb2Zmc2V0LCBJQlNDVEwgPSAweGZmZmZmZmZm
DQooWEVOKSAqKiogTE9BRElORyBET01BSU4gMCAqKioNCihYRU4pIGVsZl9wYXJzZV9iaW5hcnk6
IHBoZHI6IHBhZGRyPTB4MTAwMDAwMCBtZW1zej0weDNkNjAwMA0KKFhFTikgZWxmX3BhcnNlX2Jp
bmFyeTogcGhkcjogcGFkZHI9MHgxM2Q2MDAwIG1lbXN6PTB4M2E5MDAwDQooWEVOKSBlbGZfcGFy
c2VfYmluYXJ5OiBtZW1vcnk6IDB4MTAwMDAwMCAtPiAweDE3N2YwMDANCihYRU4pIGVsZl94ZW5f
cGFyc2Vfbm90ZTogR1VFU1RfT1MgPSAibGludXgiDQooWEVOKSBlbGZfeGVuX3BhcnNlX25vdGU6
IEdVRVNUX1ZFUlNJT04gPSAiMi42Ig0KKFhFTikgZWxmX3hlbl9wYXJzZV9ub3RlOiBYRU5fVkVS
U0lPTiA9ICJ4ZW4tMy4wIg0KKFhFTikgZWxmX3hlbl9wYXJzZV9ub3RlOiBWSVJUX0JBU0UgPSAw
eGMwMDAwMDAwDQooWEVOKSBlbGZfeGVuX3BhcnNlX25vdGU6IEVOVFJZID0gMHhjMTQxNTAwMA0K
KFhFTikgZWxmX3hlbl9wYXJzZV9ub3RlOiBIWVBFUkNBTExfUEFHRSA9IDB4YzEwMDIwMDANCihY
RU4pIGVsZl94ZW5fcGFyc2Vfbm90ZTogRkVBVFVSRVMgPSAiIXdyaXRhYmxlX3BhZ2VfdGFibGVz
fHBhZV9wZ2Rpcl9hYm92ZV80Z2IiDQooWEVOKSBlbGZfeGVuX3BhcnNlX25vdGU6IFBBRV9NT0RF
ID0gInllcyINCihYRU4pIGVsZl94ZW5fcGFyc2Vfbm90ZTogTE9BREVSID0gImdlbmVyaWMiDQoo
WEVOKSBlbGZfeGVuX3BhcnNlX25vdGU6IHVua25vd24geGVuIGVsZiBub3RlICgweGQpDQooWEVO
KSBlbGZfeGVuX3BhcnNlX25vdGU6IFNVU1BFTkRfQ0FOQ0VMID0gMHgxDQooWEVOKSBlbGZfeGVu
X3BhcnNlX25vdGU6IEhWX1NUQVJUX0xPVyA9IDB4ZjU4MDAwMDANCihYRU4pIGVsZl94ZW5fcGFy
c2Vfbm90ZTogUEFERFJfT0ZGU0VUID0gMHgwDQooWEVOKSBlbGZfeGVuX2FkZHJfY2FsY19jaGVj
azogYWRkcmVzc2VzOg0KKFhFTikgICAgIHZpcnRfYmFzZSAgICAgICAgPSAweGMwMDAwMDAwDQoo
WEVOKSAgICAgZWxmX3BhZGRyX29mZnNldCA9IDB4MA0KKFhFTikgICAgIHZpcnRfb2Zmc2V0ICAg
ICAgPSAweGMwMDAwMDAwDQooWEVOKSAgICAgdmlydF9rc3RhcnQgICAgICA9IDB4YzEwMDAwMDAN
CihYRU4pICAgICB2aXJ0X2tlbmQgICAgICAgID0gMHhjMTc3ZjAwMA0KKFhFTikgICAgIHZpcnRf
ZW50cnkgICAgICAgPSAweGMxNDE1MDAwDQooWEVOKSAgICAgcDJtX2Jhc2UgICAgICAgICA9IDB4
ZmZmZmZmZmZmZmZmZmZmZg0KKFhFTikgIFhlbiAga2VybmVsOiA2NC1iaXQsIGxzYiwgY29tcGF0
MzINCihYRU4pICBEb20wIGtlcm5lbDogMzItYml0LCBQQUUsIGxzYiwgcGFkZHIgMHgxMDAwMDAw
IC0+IDB4MTc3ZjAwMA0KKFhFTikgUEhZU0lDQUwgTUVNT1JZIEFSUkFOR0VNRU5UOg0KKFhFTikg
IERvbTAgYWxsb2MuOiAgIDAwMDAwMDAxMjgwMDAwMDAtPjAwMDAwMDAxMmEwMDAwMDAgKDk4NzUy
OCBwYWdlcyB0byBiZSBhbGxvY2F0ZWQpDQooWEVOKSAgSW5pdC4gcmFtZGlzazogMDAwMDAwMDEy
ZTcwNzAwMC0+MDAwMDAwMDEyZmZmZjgwMA0KKFhFTikgVklSVFVBTCBNRU1PUlkgQVJSQU5HRU1F
TlQ6DQooWEVOKSAgTG9hZGVkIGtlcm5lbDogMDAwMDAwMDBjMTAwMDAwMC0+MDAwMDAwMDBjMTc3
ZjAwMA0KKFhFTikgIEluaXQuIHJhbWRpc2s6IDAwMDAwMDAwYzE3N2YwMDAtPjAwMDAwMDAwYzMw
Nzc4MDANCihYRU4pICBQaHlzLU1hY2ggbWFwOiAwMDAwMDAwMGMzMDc4MDAwLT4wMDAwMDAwMGMz
NDRhYTA0DQooWEVOKSAgU3RhcnQgaW5mbzogICAgMDAwMDAwMDBjMzQ0YjAwMC0+MDAwMDAwMDBj
MzQ0YjRiNA0KKFhFTikgIFBhZ2UgdGFibGVzOiAgIDAwMDAwMDAwYzM0NGMwMDAtPjAwMDAwMDAw
YzM0NmUwMDANCihYRU4pICBCb290IHN0YWNrOiAgICAwMDAwMDAwMGMzNDZlMDAwLT4wMDAwMDAw
MGMzNDZmMDAwDQooWEVOKSAgVE9UQUw6ICAgICAgICAgMDAwMDAwMDBjMDAwMDAwMC0+MDAwMDAw
MDBjMzgwMDAwMA0KKFhFTikgIEVOVFJZIEFERFJFU1M6IDAwMDAwMDAwYzE0MTUwMDANCihYRU4p
IERvbTAgaGFzIG1heGltdW0gOCBWQ1BVcw0KKFhFTikgZWxmX2xvYWRfYmluYXJ5OiBwaGRyIDAg
YXQgMHgwMDAwMDAwMGMxMDAwMDAwIC0+IDB4MDAwMDAwMDBjMTNkNjAwMA0KKFhFTikgZWxmX2xv
YWRfYmluYXJ5OiBwaGRyIDEgYXQgMHgwMDAwMDAwMGMxM2Q2MDAwIC0+IDB4MDAwMDAwMDBjMTQ3
ZTAwMA0KKFhFTikgU2NydWJiaW5nIEZyZWUgUkFNOiAuZG9uZS4NCihYRU4pIEluaXRpYWwgbG93
IG1lbW9yeSB2aXJxIHRocmVzaG9sZCBzZXQgYXQgMHg0MDAwIHBhZ2VzLg0KKFhFTikgU3RkLiBM
b2dsZXZlbDogQWxsDQooWEVOKSBHdWVzdCBMb2dsZXZlbDogQWxsDQooWEVOKSAqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqDQooWEVOKSAqKioqKioqIFdBUk5J
Tkc6IENPTlNPTEUgT1VUUFVUIElTIFNZTkNIUk9OT1VTDQooWEVOKSAqKioqKioqIFRoaXMgb3B0
aW9uIGlzIGludGVuZGVkIHRvIGFpZCBkZWJ1Z2dpbmcgb2YgWGVuIGJ5IGVuc3VyaW5nDQooWEVO
KSAqKioqKioqIHRoYXQgYWxsIG91dHB1dCBpcyBzeW5jaHJvbm91c2x5IGRlbGl2ZXJlZCBvbiB0
aGUgc2VyaWFsIGxpbmUuDQooWEVOKSAqKioqKioqIEhvd2V2ZXIgaXQgY2FuIGludHJvZHVjZSBT
SUdOSUZJQ0FOVCBsYXRlbmNpZXMgYW5kIGFmZmVjdA0KKFhFTikgKioqKioqKiB0aW1la2VlcGlu
Zy4gSXQgaXMgTk9UIHJlY29tbWVuZGVkIGZvciBwcm9kdWN0aW9uIHVzZSENCihYRU4pICoqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioNCihYRU4pIDMuLi4gMi4u
LiAxLi4uIA0KKFhFTikgKioqIFNlcmlhbCBpbnB1dCAtPiBET00wICh0eXBlICdDVFJMLWEnIHRo
cmVlIHRpbWVzIHRvIHN3aXRjaCBpbnB1dCB0byBYZW4pDQooWEVOKSBGcmVlZCAyNDRrQiBpbml0
IG1lbW9yeS4NCm1hcHBpbmcga2VybmVsIGludG8gcGh5c2ljYWwgbWVtb3J5DQpYZW46IHNldHVw
IElTQSBpZGVudGl0eSBtYXBzDQphYm91dCB0byBnZXQgc3RhcnRlZC4uLg0KWyAgICAwLjAwMDAw
MF0gUmVzZXJ2aW5nIHZpcnR1YWwgYWRkcmVzcyBzcGFjZSBhYm92ZSAweGZmODAwMDAwDQpbICAg
IDAuMDAwMDAwXSBJbml0aWFsaXppbmcgY2dyb3VwIHN1YnN5cyBjcHVzZXQNClsgICAgMC4wMDAw
MDBdIEluaXRpYWxpemluZyBjZ3JvdXAgc3Vic3lzIGNwdQ0KWyAgICAwLjAwMDAwMF0gTGludXgg
dmVyc2lvbiAzLjIuMC0zLTY4Ni1wYWUgKERlYmlhbiAzLjIuMjEtMykgKGRlYmlhbi1rZXJuZWxA
bGlzdHMuZGViaWFuLm9yZykgKGdjYyB2ZXJzaW9uIDQuNi4zIChEZWJpYW4gNC42LjMtOCkgKSAj
MSBTTVAgVGh1IEp1biAyOCAwODo1Njo0NiBVVEMgMjAxMg0KWyAgICAwLjAwMDAwMF0gRnJlZWlu
ZyAgOWMtMTAwIHBmbiByYW5nZTogMTAwIHBhZ2VzIGZyZWVkDQpbICAgIDAuMDAwMDAwXSBGcmVl
aW5nICBjZmVlMC1mNGE4MSBwZm4gcmFuZ2U6IDE1MDQzMyBwYWdlcyBmcmVlZA0KWyAgICAwLjAw
MDAwMF0gUmVsZWFzZWQgMTUwNTMzIHBhZ2VzIG9mIHVudXNlZCBtZW1vcnkNClsgICAgMC4wMDAw
MDBdIFNldCAxOTY5OTYgcGFnZShzKSB0byAxLTEgbWFwcGluZw0KWyAgICAwLjAwMDAwMF0gQklP
Uy1wcm92aWRlZCBwaHlzaWNhbCBSQU0gbWFwOg0KWyAgICAwLjAwMDAwMF0gIFhlbjogMDAwMDAw
MDAwMDAwMDAwMCAtIDAwMDAwMDAwMDAwOWMwMDAgKHVzYWJsZSkNClsgICAgMC4wMDAwMDBdICBY
ZW46IDAwMDAwMDAwMDAwOWM0MDAgLSAwMDAwMDAwMDAwMTAwMDAwIChyZXNlcnZlZCkNClsgICAg
MC4wMDAwMDBdICBYZW46IDAwMDAwMDAwMDAxMDAwMDAgLSAwMDAwMDAwMGNmZWUwMDAwICh1c2Fi
bGUpDQpbICAgIDAuMDAwMDAwXSAgWGVuOiAwMDAwMDAwMGNmZWUwMDAwIC0gMDAwMDAwMDBjZmVl
NTAwMCAoQUNQSSBkYXRhKQ0KWyAgICAwLjAwMDAwMF0gIFhlbjogMDAwMDAwMDBjZmVlNTAwMCAt
IDAwMDAwMDAwY2ZlZjEwMDAgKEFDUEkgTlZTKQ0KWyAgICAwLjAwMDAwMF0gIFhlbjogMDAwMDAw
MDBjZmVmMTAwMCAtIDAwMDAwMDAwZDAwMDAwMDAgKHJlc2VydmVkKQ0KWyAgICAwLjAwMDAwMF0g
IFhlbjogMDAwMDAwMDBmZWMwMDAwMCAtIDAwMDAwMDAwZmVjMDMwMDAgKHJlc2VydmVkKQ0KWyAg
ICAwLjAwMDAwMF0gIFhlbjogMDAwMDAwMDBmZWUwMDAwMCAtIDAwMDAwMDAwZmVlMDEwMDAgKHJl
c2VydmVkKQ0KWyAgICAwLjAwMDAwMF0gIFhlbjogMDAwMDAwMDBmZmY4MDAwMCAtIDAwMDAwMDAx
MDAwMDAwMDAgKHJlc2VydmVkKQ0KWyAgICAwLjAwMDAwMF0gIFhlbjogMDAwMDAwMDEwMDAwMDAw
MCAtIDAwMDAwMDAxMzAwMDAwMDAgKHVzYWJsZSkNClsgICAgMC4wMDAwMDBdIGJvb3Rjb25zb2xl
IFt4ZW5ib290MF0gZW5hYmxlZA0KWyAgICAwLjAwMDAwMF0gTlggKEV4ZWN1dGUgRGlzYWJsZSkg
cHJvdGVjdGlvbjogYWN0aXZlDQpbICAgIDAuMDAwMDAwXSBETUkgcHJlc2VudC4NClsgICAgMC4w
MDAwMDBdIGxhc3RfcGZuID0gMHgxMzAwMDAgbWF4X2FyY2hfcGZuID0gMHgxMDAwMDAwDQpbICAg
IDAuMDAwMDAwXSBmb3VuZCBTTVAgTVAtdGFibGUgYXQgW2MwMGY4MGIwXSBmODBiMA0KWyAgICAw
LjAwMDAwMF0gaW5pdF9tZW1vcnlfbWFwcGluZzogMDAwMDAwMDAwMDAwMDAwMC0wMDAwMDAwMDM3
MWZlMDAwDQpbICAgIDAuMDAwMDAwXSBSQU1ESVNLOiAwMTc3ZjAwMCAtIDAzMDc4MDAwDQpbICAg
IDAuMDAwMDAwXSBBQ1BJOiBSU0RQIDAwMGY4MDgwIDAwMDI0ICh2MDIgUFRMVEQgKQ0KWyAgICAw
LjAwMDAwMF0gQUNQSTogWFNEVCBjZmVlMDFmZSAwMDA1QyAodjAxIEJSQ00gICBFWFBMT1NOICAw
NjA0MDAwMCBQVEwgIDAyMDAwMDAxKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogRkFDUCBjZmVlMDJj
ZSAwMDBGNCAodjAzIEJSQ00gICBFWFBMT1NOICAwNjA0MDAwMCBNU0ZUIDAyMDAwMDAxKQ0KWyAg
ICAwLjAwMDAwMF0gQUNQSSBXYXJuaW5nOiBPcHRpb25hbCBmaWVsZCBQbTJDb250cm9sQmxvY2sg
aGFzIHplcm8gYWRkcmVzcyBvciBsZW5ndGg6IDB4MDAwMDAwMDAwMDAwMDAwMC8weEMgKDIwMTEw
NjIzL3RiZmFkdC01NjApDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBEU0RUIGNmZWUwM2MyIDA0OTQ5
ICh2MDIgQlJDTSAgIEVYUExPU04gIDA2MDQwMDAwIE1TRlQgMDMwMDAwMDApDQpbICAgIDAuMDAw
MDAwXSBBQ1BJOiBGQUNTIGNmZWYwZmMwIDAwMDQwDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBUQ1BB
IGNmZWU0ZDBiIDAwMDMyICh2MDEgQlJDTSAgIEVYUExPU04gIDA2MDQwMDAwIFBUTCAgMjAwMDAw
MDEpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBTUkFUIGNmZWU0ZDNkIDAwMTI4ICh2MDEgQU1EICAg
IEZBTV9GXzEwIDA2MDQwMDAwIEFNRCAgMDAwMDAwMDEpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBI
UEVUIGNmZWU0ZTY1IDAwMDM4ICh2MDEgQlJDTSAgIEVYUExPU04gIDA2MDQwMDAwIEJSQ00gMDIw
MDAwMDEpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBTU0RUIGNmZWU0ZTlkIDAwMDQ5ICh2MDEgQlJD
TSAgIFBSVDAgICAgIDA2MDQwMDAwIEJSQ00gMDIwMDAwMDEpDQpbICAgIDAuMDAwMDAwXSBBQ1BJ
OiBTUENSIGNmZWU0ZWU2IDAwMDUwICh2MDEgUFRMVEQgICRVQ1JUQkwkIDA2MDQwMDAwIFBUTCAg
MDAwMDAwMDEpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBBUElDIGNmZWU0ZjM2IDAwMENBICh2MDEg
QlJDTSAgIEVYUExPU04gIDA2MDQwMDAwIFBUTCAgMDIwMDAwMDEpDQpbICAgIDAuMDAwMDAwXSAz
OTgyTUIgSElHSE1FTSBhdmFpbGFibGUuDQpbICAgIDAuMDAwMDAwXSA4ODFNQiBMT1dNRU0gYXZh
aWxhYmxlLg0KWyAgICAwLjAwMDAwMF0gICBtYXBwZWQgbG93IHJhbTogMCAtIDM3MWZlMDAwDQpb
ICAgIDAuMDAwMDAwXSAgIGxvdyByYW06IDAgLSAzNzFmZTAwMA0KWyAgICAwLjAwMDAwMF0gWm9u
ZSBQRk4gcmFuZ2VzOg0KWyAgICAwLjAwMDAwMF0gICBETUEgICAgICAweDAwMDAwMDEwIC0+IDB4
MDAwMDEwMDANClsgICAgMC4wMDAwMDBdICAgTm9ybWFsICAgMHgwMDAwMTAwMCAtPiAweDAwMDM3
MWZlDQpbICAgIDAuMDAwMDAwXSAgIEhpZ2hNZW0gIDB4MDAwMzcxZmUgLT4gMHgwMDEzMDAwMA0K
WyAgICAwLjAwMDAwMF0gTW92YWJsZSB6b25lIHN0YXJ0IFBGTiBmb3IgZWFjaCBub2RlDQpbICAg
IDAuMDAwMDAwXSBlYXJseV9ub2RlX21hcFszXSBhY3RpdmUgUEZOIHJhbmdlcw0KWyAgICAwLjAw
MDAwMF0gICAgIDA6IDB4MDAwMDAwMTAgLT4gMHgwMDAwMDA5Yw0KWyAgICAwLjAwMDAwMF0gICAg
IDA6IDB4MDAwMDAxMDAgLT4gMHgwMDBjZmVlMA0KWyAgICAwLjAwMDAwMF0gICAgIDA6IDB4MDAx
MDAwMDAgLT4gMHgwMDEzMDAwMA0KWyAgICAwLjAwMDAwMF0gVXNpbmcgQVBJQyBkcml2ZXIgZGVm
YXVsdA0KWyAgICAwLjAwMDAwMF0gQUNQSTogUE0tVGltZXIgSU8gUG9ydDogMHg1MDgNClsgICAg
MC4wMDAwMDBdIEFDUEk6IExBUElDIChhY3BpX2lkWzB4MDBdIGxhcGljX2lkWzB4MDBdIGVuYWJs
ZWQpDQpbICAgIDAuMDAwMDAwXSBCSU9TIGJ1ZzogQVBJQyB2ZXJzaW9uIGlzIDAgZm9yIENQVSAw
LzB4MCwgZml4aW5nIHVwIHRvIDB4MTANClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElDIChhY3Bp
X2lkWzB4MDFdIGxhcGljX2lkWzB4MDFdIGVuYWJsZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBM
QVBJQyAoYWNwaV9pZFsweDAyXSBsYXBpY19pZFsweDAyXSBlbmFibGVkKQ0KWyAgICAwLjAwMDAw
MF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgwM10gbGFwaWNfaWRbMHgwM10gZW5hYmxlZCkNClsg
ICAgMC4wMDAwMDBdIEFDUEk6IExBUElDIChhY3BpX2lkWzB4MDRdIGxhcGljX2lkWzB4MDRdIGVu
YWJsZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDA1XSBsYXBpY19p
ZFsweDA1XSBlbmFibGVkKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgw
Nl0gbGFwaWNfaWRbMHgwNl0gZW5hYmxlZCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElDIChh
Y3BpX2lkWzB4MDddIGxhcGljX2lkWzB4MDddIGVuYWJsZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJ
OiBMQVBJQ19OTUkgKGFjcGlfaWRbMHgwMF0gaGlnaCBlZGdlIGxpbnRbMHgxXSkNClsgICAgMC4w
MDAwMDBdIEFDUEk6IExBUElDX05NSSAoYWNwaV9pZFsweDAxXSBoaWdoIGVkZ2UgbGludFsweDFd
KQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogTEFQSUNfTk1JIChhY3BpX2lkWzB4MDJdIGhpZ2ggZWRn
ZSBsaW50WzB4MV0pDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBMQVBJQ19OTUkgKGFjcGlfaWRbMHgw
M10gaGlnaCBlZGdlIGxpbnRbMHgxXSkNClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElDX05NSSAo
YWNwaV9pZFsweDA0XSBoaWdoIGVkZ2UgbGludFsweDFdKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTog
TEFQSUNfTk1JIChhY3BpX2lkWzB4MDVdIGhpZ2ggZWRnZSBsaW50WzB4MV0pDQpbICAgIDAuMDAw
MDAwXSBBQ1BJOiBMQVBJQ19OTUkgKGFjcGlfaWRbMHgwNl0gaGlnaCBlZGdlIGxpbnRbMHgxXSkN
ClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElDX05NSSAoYWNwaV9pZFsweDA3XSBoaWdoIGVkZ2Ug
bGludFsweDFdKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogSU9BUElDIChpZFsweDA4XSBhZGRyZXNz
WzB4ZmVjMDAwMDBdIGdzaV9iYXNlWzBdKQ0KWyAgICAwLjAwMDAwMF0gSU9BUElDWzBdOiBhcGlj
X2lkIDgsIHZlcnNpb24gMjU1LCBhZGRyZXNzIDB4ZmVjMDAwMDAsIEdTSSAwLTI1NQ0KWyAgICAw
LjAwMDAwMF0gQUNQSTogSU9BUElDIChpZFsweDA5XSBhZGRyZXNzWzB4ZmVjMDEwMDBdIGdzaV9i
YXNlWzE2XSkNClsgICAgMC4wMDAwMDBdIElPQVBJQ1sxXTogYXBpY19pZCA5LCB2ZXJzaW9uIDI1
NSwgYWRkcmVzcyAweGZlYzAxMDAwLCBHU0kgMTYtMjcxDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBJ
T0FQSUMgKGlkWzB4MGFdIGFkZHJlc3NbMHhmZWMwMjAwMF0gZ3NpX2Jhc2VbMzJdKQ0KWyAgICAw
LjAwMDAwMF0gSU9BUElDWzJdOiBhcGljX2lkIDEwLCB2ZXJzaW9uIDI1NSwgYWRkcmVzcyAweGZl
YzAyMDAwLCBHU0kgMzItMjg3DQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBJTlRfU1JDX09WUiAoYnVz
IDAgYnVzX2lycSAwIGdsb2JhbF9pcnEgMiBoaWdoIGVkZ2UpDQpbICAgIDAuMDAwMDAwXSBVc2lu
ZyBBQ1BJIChNQURUKSBmb3IgU01QIGNvbmZpZ3VyYXRpb24gaW5mb3JtYXRpb24NClsgICAgMC4w
MDAwMDBdIEFDUEk6IEhQRVQgaWQ6IDB4MTE2NmEyMDEgYmFzZTogMHhmZWQwMDAwMA0KWyAgICAw
LjAwMDAwMF0gU01QOiBBbGxvd2luZyA4IENQVXMsIDAgaG90cGx1ZyBDUFVzDQpbICAgIDAuMDAw
MDAwXSBQTTogUmVnaXN0ZXJlZCBub3NhdmUgbWVtb3J5OiAwMDAwMDAwMDAwMDljMDAwIC0gMDAw
MDAwMDAwMDA5ZDAwMA0KWyAgICAwLjAwMDAwMF0gUE06IFJlZ2lzdGVyZWQgbm9zYXZlIG1lbW9y
eTogMDAwMDAwMDAwMDA5ZDAwMCAtIDAwMDAwMDAwMDAxMDAwMDANClsgICAgMC4wMDAwMDBdIEFs
bG9jYXRpbmcgUENJIHJlc291cmNlcyBzdGFydGluZyBhdCBkMDAwMDAwMCAoZ2FwOiBkMDAwMDAw
MDoyZWMwMDAwMCkNClsgICAgMC4wMDAwMDBdIEJvb3RpbmcgcGFyYXZpcnR1YWxpemVkIGtlcm5l
bCBvbiBYZW4NClsgICAgMC4wMDAwMDBdIFhlbiB2ZXJzaW9uOiA0LjIuMC1yYzIgKHByZXNlcnZl
LUFEKQ0KWyAgICAwLjAwMDAwMF0gc2V0dXBfcGVyY3B1OiBOUl9DUFVTOjMyIG5yX2NwdW1hc2tf
Yml0czozMiBucl9jcHVfaWRzOjggbnJfbm9kZV9pZHM6MQ0KWyAgICAwLjAwMDAwMF0gUEVSQ1BV
OiBFbWJlZGRlZCAxNCBwYWdlcy9jcHUgQGY0YjdiMDAwIHMzMzI4MCByMCBkMjQwNjQgdTU3MzQ0
DQpbICAgIDAuMDAwMDAwXSBCdWlsdCAxIHpvbmVsaXN0cyBpbiBab25lIG9yZGVyLCBtb2JpbGl0
eSBncm91cGluZyBvbi4gIFRvdGFsIHBhZ2VzOiAxMDM4NDQzDQpbICAgIDAuMDAwMDAwXSBLZXJu
ZWwgY29tbWFuZCBsaW5lOiBwbGFjZWhvbGRlciByb290PVVVSUQ9ZTc1NjhhYzYtZDU2MS00ODNh
LWI1YmEtODhmZDkzNTBlZTE4IHJvIGNvbnNvbGU9aHZjMCBlYXJseXByaW50az14ZW4gbm9tb2Rl
c2V0DQpbICAgIDAuMDAwMDAwXSBQSUQgaGFzaCB0YWJsZSBlbnRyaWVzOiA0MDk2IChvcmRlcjog
MiwgMTYzODQgYnl0ZXMpDQpbICAgIDAuMDAwMDAwXSBEZW50cnkgY2FjaGUgaGFzaCB0YWJsZSBl
bnRyaWVzOiAxMzEwNzIgKG9yZGVyOiA3LCA1MjQyODggYnl0ZXMpDQpbICAgIDAuMDAwMDAwXSBJ
bm9kZS1jYWNoZSBoYXNoIHRhYmxlIGVudHJpZXM6IDY1NTM2IChvcmRlcjogNiwgMjYyMTQ0IGJ5
dGVzKQ0KWyAgICAwLjAwMDAwMF0gSW5pdGlhbGl6aW5nIENQVSMwDQpbICAgIDAuMDAwMDAwXSBQ
bGFjaW5nIDY0TUIgc29mdHdhcmUgSU8gVExCIGJldHdlZW4gZjBhYjcwMDAgLSBmNGFiNzAwMA0K
WyAgICAwLjAwMDAwMF0gc29mdHdhcmUgSU8gVExCIGF0IHBoeXMgMHgzMGFiNzAwMCAtIDB4MzRh
YjcwMDANClsgICAgMC4wMDAwMDBdIEluaXRpYWxpemluZyBIaWdoTWVtIGZvciBub2RlIDAgKDAw
MDM3MWZlOjAwMTMwMDAwKQ0KWyAgICAwLjAwMDAwMF0gTWVtb3J5OiAzMjYzNTgway80OTgwNzM2
ayBhdmFpbGFibGUgKDI4MzZrIGtlcm5lbCBjb2RlLCAxNDI2NzZrIHJlc2VydmVkLCAxMzQzayBk
YXRhLCA0MTJrIGluaXQsIDI1MDM1NjBrIGhpZ2htZW0pDQpbICAgIDAuMDAwMDAwXSB2aXJ0dWFs
IGtlcm5lbCBtZW1vcnkgbGF5b3V0Og0KWyAgICAwLjAwMDAwMF0gICAgIGZpeG1hcCAgOiAweGZm
NTM2MDAwIC0gMHhmZjdmZjAwMCAgICgyODUyIGtCKQ0KWyAgICAwLjAwMDAwMF0gICAgIHBrbWFw
ICAgOiAweGZmMjAwMDAwIC0gMHhmZjQwMDAwMCAgICgyMDQ4IGtCKQ0KWyAgICAwLjAwMDAwMF0g
ICAgIHZtYWxsb2MgOiAweGY3OWZlMDAwIC0gMHhmZjFmZTAwMCAgICggMTIwIE1CKQ0KWyAgICAw
LjAwMDAwMF0gICAgIGxvd21lbSAgOiAweGMwMDAwMDAwIC0gMHhmNzFmZTAwMCAgICggODgxIE1C
KQ0KWyAgICAwLjAwMDAwMF0gICAgICAgLmluaXQgOiAweGMxNDE1MDAwIC0gMHhjMTQ3YzAwMCAg
ICggNDEyIGtCKQ0KWyAgICAwLjAwMDAwMF0gICAgICAgLmRhdGEgOiAweGMxMmM1MGVjIC0gMHhj
MTQxNGYwMCAgICgxMzQzIGtCKQ0KWyAgICAwLjAwMDAwMF0gICAgICAgLnRleHQgOiAweGMxMDAw
MDAwIC0gMHhjMTJjNTBlYyAgICgyODM2IGtCKQ0KWyAgICAwLjAwMDAwMF0gSGllcmFyY2hpY2Fs
IFJDVSBpbXBsZW1lbnRhdGlvbi4NClsgICAgMC4wMDAwMDBdIAlSQ1UgZHludGljay1pZGxlIGdy
YWNlLXBlcmlvZCBhY2NlbGVyYXRpb24gaXMgZW5hYmxlZC4NClsgICAgMC4wMDAwMDBdIE5SX0lS
UVM6MjMwNCBucl9pcnFzOjIwNDggMTYNClsgICAgMC4wMDAwMDBdIHhlbjogc2NpIG92ZXJyaWRl
OiBnbG9iYWxfaXJxPTkgdHJpZ2dlcj0wIHBvbGFyaXR5PTENClsgICAgMC4wMDAwMDBdIHhlbjog
YWNwaSBzY2kgOQ0KWyAgICAwLjAwMDAwMF0gQ29uc29sZTogY29sb3VyIFZHQSsgODB4MjUNClsg
ICAgMC4wMDAwMDBdIGNvbnNvbGUgW2h2YzBdIGVuYWJsZWQsIGJvb3Rjb25zb2xlIGRpc2FibGVk
DQ0KWyAgICAwLjAwMDAwMF0gY29uc29sZSBbaHZjMF0gZW5hYmxlZCwgYm9vdGNvbnNvbGUgZGlz
YWJsZWQNClsgICAgMC4wMDAwMDBdIGluc3RhbGxpbmcgWGVuIHRpbWVyIGZvciBDUFUgMA0NClsg
ICAgMC4wMDAwMDBdIERldGVjdGVkIDE5OTUuMDkyIE1IeiBwcm9jZXNzb3IuDQ0KWyAgICAwLjAw
ODAwMF0gQ2FsaWJyYXRpbmcgZGVsYXkgbG9vcCAoc2tpcHBlZCksIHZhbHVlIGNhbGN1bGF0ZWQg
dXNpbmcgdGltZXIgZnJlcXVlbmN5Li4gMzk5MC4xOCBCb2dvTUlQUyAobHBqPTc5ODAzNjgpDQ0K
WyAgICAwLjAxMDg1OV0gcGlkX21heDogZGVmYXVsdDogMzI3NjggbWluaW11bTogMzAxDQ0KWyAg
ICAwLjAxMjA1NV0gU2VjdXJpdHkgRnJhbWV3b3JrIGluaXRpYWxpemVkDQ0KWyAgICAwLjAxNjAx
MV0gQXBwQXJtb3I6IEFwcEFybW9yIGRpc2FibGVkIGJ5IGJvb3QgdGltZSBwYXJhbWV0ZXINDQpb
ICAgIDAuMDIwMDI3XSBNb3VudC1jYWNoZSBoYXNoIHRhYmxlIGVudHJpZXM6IDUxMg0NClsgICAg
MC4wMjQxNzZdIEluaXRpYWxpemluZyBjZ3JvdXAgc3Vic3lzIGNwdWFjY3QNDQpbICAgIDAuMDI4
MDEyXSBJbml0aWFsaXppbmcgY2dyb3VwIHN1YnN5cyBtZW1vcnkNDQpbICAgIDAuMDMyMDE4XSBJ
bml0aWFsaXppbmcgY2dyb3VwIHN1YnN5cyBkZXZpY2VzDQ0KWyAgICAwLjAzNjAwNV0gSW5pdGlh
bGl6aW5nIGNncm91cCBzdWJzeXMgZnJlZXplcg0NClsgICAgMC4wNDAwMTBdIEluaXRpYWxpemlu
ZyBjZ3JvdXAgc3Vic3lzIG5ldF9jbHMNDQpbICAgIDAuMDQ0MDA2XSBJbml0aWFsaXppbmcgY2dy
b3VwIHN1YnN5cyBibGtpbw0NClsgICAgMC4wNDgwMTNdIEluaXRpYWxpemluZyBjZ3JvdXAgc3Vi
c3lzIHBlcmZfZXZlbnQNDQpbICAgIDAuMDUyMDY2XSBDUFU6IFBoeXNpY2FsIFByb2Nlc3NvciBJ
RDogMA0NClsgICAgMC4wNTYwMDZdIENQVTogUHJvY2Vzc29yIENvcmUgSUQ6IDANDQpbICAgIDAu
MDYwMjgwXSBBQ1BJOiBDb3JlIHJldmlzaW9uIDIwMTEwNjIzDQ0KWyAgICAwLjA3MjM4NF0gUGVy
Zm9ybWFuY2UgRXZlbnRzOiBCcm9rZW4gQklPUyBkZXRlY3RlZCwgY29tcGxhaW4gdG8geW91ciBo
YXJkd2FyZSB2ZW5kb3IuDQ0KWyAgICAwLjA3NjAxNF0gW0Zpcm13YXJlIEJ1Z106IHRoZSBCSU9T
IGhhcyBjb3JydXB0ZWQgaHctUE1VIHJlc291cmNlcyAoTVNSIGMwMDEwMDAwIGlzIDUzMDA3NikN
DQpbICAgIDAuMDgwMDA3XSBBTUQgUE1VIGRyaXZlci4NDQpbICAgIDAuMDgyODY0XSAtLS0tLS0t
LS0tLS1bIGN1dCBoZXJlIF0tLS0tLS0tLS0tLS0NDQpbICAgIDAuMDg0MDE4XSBXQVJOSU5HOiBh
dCAvYnVpbGQvYnVpbGRkLWxpbnV4XzMuMi4yMS0zLWkzODYtdkVvaG40L2xpbnV4LTMuMi4yMS9h
cmNoL3g4Ni94ZW4vZW5saWdodGVuLmM6NzM4IHBlcmZfZXZlbnRzX2xhcGljX2luaXQrMHgyOC8w
eDI5KCkNDQpbICAgIDAuMDg4MDA5XSBIYXJkd2FyZSBuYW1lOiBlbXB0eQ0NClsgICAgMC4wOTEy
OTldIE1vZHVsZXMgbGlua2VkIGluOg0NClsgICAgMC4wOTIyNzVdIFBpZDogMSwgY29tbTogc3dh
cHBlci8wIE5vdCB0YWludGVkIDMuMi4wLTMtNjg2LXBhZSAjMQ0NClsgICAgMC4wOTYwMDhdIENh
bGwgVHJhY2U6DQ0KWyAgICAwLjA5ODUyN10gIFs8YzEwMzdmY2M+XSA/IHdhcm5fc2xvd3BhdGhf
Y29tbW9uKzB4NjgvMHg3OQ0NClsgICAgMC4xMDAwMTldICBbPGMxMDE1MGQyPl0gPyBwZXJmX2V2
ZW50c19sYXBpY19pbml0KzB4MjgvMHgyOQ0NClsgICAgMC4xMDQwMTBdICBbPGMxMDM3ZmVhPl0g
PyB3YXJuX3Nsb3dwYXRoX251bGwrMHhkLzB4MTANDQpbICAgIDAuMTA4MDExXSAgWzxjMTAxNTBk
Mj5dID8gcGVyZl9ldmVudHNfbGFwaWNfaW5pdCsweDI4LzB4MjkNDQpbICAgIDAuMTEyMDE1XSAg
WzxjMTQxYzk3ZT5dID8gaW5pdF9od19wZXJmX2V2ZW50cysweDIyMy8weDNiMQ0NClsgICAgMC4x
MTYwMTJdICBbPGMxNDFjNzViPl0gPyBjaGVja19idWdzKzB4MWQ5LzB4MWQ5DQ0KWyAgICAwLjEy
MDAxMl0gIFs8YzEwMDMwNzQ+XSA/IGRvX29uZV9pbml0Y2FsbCsweDY2LzB4MTBlDQ0KWyAgICAw
LjEyNDAxMl0gIFs8YzE0MTU3NzA+XSA/IGtlcm5lbF9pbml0KzB4NmQvMHgxMjUNDQpbICAgIDAu
MTI4MDEyXSAgWzxjMTQxNTcwMz5dID8gc3RhcnRfa2VybmVsKzB4MzI1LzB4MzI1DQ0KWyAgICAw
LjEzMjAxNV0gIFs8YzEyYzQ2M2U+XSA/IGtlcm5lbF90aHJlYWRfaGVscGVyKzB4Ni8weDEwDQ0K
WyAgICAwLjEzNjAxOV0gLS0tWyBlbmQgdHJhY2UgYTc5MTllN2YxN2MwYTcyNSBdLS0tDQ0KWyAg
ICAwLjE0MDAxM10gLi4uIHZlcnNpb246ICAgICAgICAgICAgICAgIDANDQpbICAgIDAuMTQ0MDEx
XSAuLi4gYml0IHdpZHRoOiAgICAgICAgICAgICAgNDgNDQpbICAgIDAuMTQ4MDE5XSAuLi4gZ2Vu
ZXJpYyByZWdpc3RlcnM6ICAgICAgNA0NClsgICAgMC4xNTIwMTddIC4uLiB2YWx1ZSBtYXNrOiAg
ICAgICAgICAgICAwMDAwZmZmZmZmZmZmZmZmDQ0KWyAgICAwLjE1NjAxMl0gLi4uIG1heCBwZXJp
b2Q6ICAgICAgICAgICAgIDAwMDA3ZmZmZmZmZmZmZmYNDQpbICAgIDAuMTYwMDE5XSAuLi4gZml4
ZWQtcHVycG9zZSBldmVudHM6ICAgMA0NClsgICAgMC4xNjQwMTldIC4uLiBldmVudCBtYXNrOiAg
ICAgICAgICAgICAwMDAwMDAwMDAwMDAwMDBmDQ0KWyAgICAwLjE2ODI3N10gTk1JIHdhdGNoZG9n
IGVuYWJsZWQsIHRha2VzIG9uZSBody1wbXUgY291bnRlci4NDQooWEVOKSB0cmFwcy5jOjI1ODQ6
ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDA0IGZyb20gMHgwMDAwZmZm
ZjlhYzQ2NDYwIHRvIDB4MDAwMGZmZmI1YWQ1MWVjMC4NClsgICAgMC4xNzYwMTBdIC0tLS0tLS0t
LS0tLVsgY3V0IGhlcmUgXS0tLS0tLS0tLS0tLQ0NClsgICAgMC4xNzYwMTBdIFdBUk5JTkc6IGF0
IC9idWlsZC9idWlsZGQtbGludXhfMy4yLjIxLTMtaTM4Ni12RW9objQvbGludXgtMy4yLjIxL2Fy
Y2gveDg2L3hlbi9lbmxpZ2h0ZW4uYzo3MzggcGVyZl9ldmVudHNfbGFwaWNfaW5pdCsweDI4LzB4
MjkoKQ0NClsgICAgMC4xNzYwMTBdIEhhcmR3YXJlIG5hbWU6IGVtcHR5DQ0KWyAgICAwLjE3NjAx
MF0gTW9kdWxlcyBsaW5rZWQgaW46DQ0KWyAgICAwLjE3NjAxMF0gUGlkOiAxLCBjb21tOiBzd2Fw
cGVyLzAgVGFpbnRlZDogRyAgICAgICAgVyAgICAzLjIuMC0zLTY4Ni1wYWUgIzENDQpbICAgIDAu
MTc2MDEwXSBDYWxsIFRyYWNlOg0NClsgICAgMC4xNzYwMTBdICBbPGMxMDM3ZmNjPl0gPyB3YXJu
X3Nsb3dwYXRoX2NvbW1vbisweDY4LzB4NzkNDQpbICAgIDAuMTc2MDEwXSAgWzxjMTAxNTBkMj5d
ID8gcGVyZl9ldmVudHNfbGFwaWNfaW5pdCsweDI4LzB4MjkNDQpbICAgIDAuMTc2MDEwXSAgWzxj
MTAzN2ZlYT5dID8gd2Fybl9zbG93cGF0aF9udWxsKzB4ZC8weDEwDQ0KWyAgICAwLjE3NjAxMF0g
IFs8YzEwMTUwZDI+XSA/IHBlcmZfZXZlbnRzX2xhcGljX2luaXQrMHgyOC8weDI5DQ0KWyAgICAw
LjE3NjAxMF0gIFs8YzEwMTUyNzQ+XSA/IHg4Nl9wbXVfZW5hYmxlKzB4MWExLzB4MjI3DQ0KWyAg
ICAwLjE3NjAxMF0gIFs8YzEwOGZmMjY+XSA/IHBlcmZfcG11X2VuYWJsZSsweDFhLzB4MWINDQpb
ICAgIDAuMTc2MDEwXSAgWzxjMTAxNDAzMT5dID8geDg2X3BtdV9jb21taXRfdHhuKzB4NmIvMHg3
OA0NClsgICAgMC4xNzYwMTBdICBbPGMxMDc3MGRmPl0gPyBoYW5kbGVfaXJxX2V2ZW50X3BlcmNw
dSsweDE0Mi8weDE1OA0NClsgICAgMC4xNzYwMTBdICBbPGMxMDc4ZTM0Pl0gPyBoYW5kbGVfcGVy
Y3B1X2lycSsweDI5LzB4MzcNDQpbICAgIDAuMTc2MDEwXSAgWzxjMTFjMjNmOT5dID8gYXJjaF9s
b2NhbF9zYXZlX2ZsYWdzKzB4Ni8weDcNDQpbICAgIDAuMTc2MDEwXSAgWzxjMTFjMjVlMj5dID8g
X194ZW5fZXZ0Y2huX2RvX3VwY2FsbCsweDE3Yi8weDFhZA0NClsgICAgMC4xNzYwMTBdICBbPGMx
MDI0OTRjPl0gPyBwdmNsb2NrX2Nsb2Nrc291cmNlX3JlYWQrMHhjNS8weGY3DQ0KWyAgICAwLjE3
NjAxMF0gIFs8YzEwMGY3ZGY+XSA/IHNjaGVkX2Nsb2NrKzB4OS8weGQNDQpbICAgIDAuMTc2MDEw
XSAgWzxjMTA1MGZkNj5dID8gc2NoZWRfY2xvY2tfbG9jYWwrMHgxMC8weDE0Yg0NClsgICAgMC4x
NzYwMTBdICBbPGMxMDkwODQxPl0gPyBldmVudF9zY2hlZF9pbisweDczLzB4MTA4DQ0KWyAgICAw
LjE3NjAxMF0gIFs8YzEwOTA5NDM+XSA/IGdyb3VwX3NjaGVkX2luKzB4NmQvMHgxMGINDQpbICAg
IDAuMTc2MDEwXSAgWzxjMTA1MGY5ZD5dID8gYXJjaF9sb2NhbF9pcnFfcmVzdG9yZSsweDYvMHg3
[    0.176010]  [<c10512bb>] ? local_clock+0x23/0x2c
[    0.176010]  [<c1091185>] ? __perf_event_enable+0x107/0x144
[    0.176010]  [<c108e469>] ? perf_exclude_event.part.23+0x39/0x39
[    0.176010]  [<c108e497>] ? remote_function+0x2e/0x33
[    0.176010]  [<c105cb66>] ? smp_call_function_single+0x6e/0xcc
[    0.176010]  [<c108d361>] ? cpu_function_call+0x2a/0x32
[    0.176010]  [<c109107e>] ? __perf_event_task_sched_in+0x5f/0x5f
[    0.176010]  [<c10766d7>] ? watchdog_enable+0xc9/0x140
[    0.176010]  [<c12b8d0c>] ? cpu_callback+0x62/0x70
[    0.176010]  [<c142aaa5>] ? lockup_detector_init+0x30/0x4c
[    0.176010]  [<c141577d>] ? kernel_init+0x7a/0x125
[    0.176010]  [<c1415703>] ? start_kernel+0x325/0x325
[    0.176010]  [<c12c463e>] ? kernel_thread_helper+0x6/0x10
[    0.176010] ---[ end trace a7919e7f17c0a726 ]---
(XEN) traps.c:2584:d0 Domain attempted WRMSR 00000000c0010000 from 0x0000000000530076 to 0x0000000000130076.
[    0.180010] installing Xen timer for CPU 1
(XEN) traps.c:2584:d0 Domain attempted WRMSR 00000000c0010000 from 0x0000000000530076 to 0x0000000000130076.
[    0.008000] Initializing CPU#1
[    0.184010] NMI watchdog enabled, takes one hw-pmu counter.
(XEN) traps.c:2584:d0 Domain attempted WRMSR 00000000c0010000 from 0x0000000000530076 to 0x0000000000130076.
(XEN) traps.c:2584:d0 Domain attempted WRMSR 00000000c0010004 from 0x0000ffffca5cc936 to 0x0000fffb5ad51ec0.
(XEN) traps.c:2584:d0 Domain attempted WRMSR 00000000c0010000 from 0x0000000000530076 to 0x0000000000130076.
[... the (XEN) WRMSR line above repeats throughout, interleaved mid-line with the kernel messages that follow; repeats elided and the kernel lines reassembled ...]
[    0.188011] ------------[ cut here ]------------
[    0.188011] WARNING: at /build/buildd-linux_3.2.21-3-i386-vEohn4/linux-3.2.21/arch/x86/xen/enlighten.c:738 perf_events_lapic_init+0x28/0x29()
[    0.188011] Hardware name: empty
[    0.188011] Modules linked in:
[    0.188011] Pid: 0, comm: swapper/1 Tainted: G        W    3.2.0-3-686-pae #1
[    0.188011] Call Trace:
[    0.188011]  [<c1037fcc>] ? warn_slowpath_common+0x68/0x79
[    0.188011]  [<c10150d2>] ? perf_events_lapic_init+0x28/0x29
[    0.188011]  [<c1037fea>] ? warn_slowpath_null+0xd/0x10
[    0.188011]  [<c10150d2>] ? perf_events_lapic_init+0x28/0x29
[    0.188011]  [<c1015274>] ? x86_pmu_enable+0x1a1/0x227
[    0.188011]  [<c108ff26>] ? perf_pmu_enable+0x1a/0x1b
[    0.188011]  [<c1014031>] ? x86_pmu_commit_txn+0x6b/0x78
[    0.188011]  [<c1050f9d>] ? arch_local_irq_restore+0x6/0x7
[    0.188011]  [<c10512bb>] ? local_clock+0x23/0x2c
[    0.188011]  [<c1090a0e>] ? ctx_sched_in+0x2d/0x135
[    0.188011]  [<c1015288>] ? x86_pmu_enable+0x1b5/0x227
[    0.188011]  [<c102494c>] ? pvclock_clocksource_read+0xc5/0xf7
[    0.188011]  [<c102494c>] ? pvclock_clocksource_read+0xc5/0xf7
[    0.188011]  [<c100f7df>] ? sched_clock+0x9/0xd
[    0.188011]  [<c1050fd6>] ? sched_clock_local+0x10/0x14b
[    0.188011]  [<c1090841>] ? event_sched_in+0x73/0x108
[    0.188011]  [<c1090943>] ? group_sched_in+0x6d/0x10b
[    0.188011]  [<c1050f9d>] ? arch_local_irq_restore+0x6/0x7
[    0.188011]  [<c10512bb>] ? local_clock+0x23/0x2c
[    0.188011]  [<c1091185>] ? __perf_event_enable+0x107/0x144
[    0.188011]  [<c108e497>] ? remote_function+0x2e/0x33
[    0.188011]  [<c105cfad>] ? generic_smp_call_function_single_interrupt+0x97/0xb2
[    0.188011]  [<c100a75c>] ? xen_call_function_single_interrupt+0xa/0x1c
[    0.188011]  [<c1076fe4>] ? handle_irq_event_percpu+0x47/0x158
[    0.188011]  [<c1078811>] ? irq_get_irq_data+0x5/0x6
[    0.188011]  [<c1078e34>] ? handle_percpu_irq+0x29/0x37
[    0.188011]  [<c11c258d>] ? __xen_evtchn_do_upcall+0x126/0x1ad
[    0.188011]  [<c11c37d0>] ? xen_evtchn_do_upcall+0x18/0x26
[    0.188011]  [<c12c4697>] ? xen_do_upcall+0x7/0xc
[    0.188011]  [<c10023a7>] ? hypercall_page+0x3a7/0x1000
[    0.188011]  [<c100603a>] ? xen_safe_halt+0xf/0x19
[    0.188011]  [<c10105b4>] ? default_idle+0x52/0x87
[    0.188011]  [<c100aa47>] ? cpu_idle+0x95/0xaf
[    0.188011] ---[ end trace a7919e7f17c0a727 ]---
(XEN) traps.c:2584:d0 Domain attempted WRMSR 00000000c0010000 from 0x0000000000530076 to 0x0000000000130076.
[    0.372022] installing Xen timer for CPU 2
(XEN) traps.c:2584:d0 Domain attempted WRMSR 00000000c0010000 from 0x0000000000530076 to 0x0000000000130076.
[...]
[    0.008000] Initializing CPU#2
[    0.380023] NMI watchdog enabled, takes one hw-pmu counter.
(XEN) traps.c:2584:d0 Domain attempted WRMSR 00000000c0010000 from 0x0000000000530076 to 0x0000000000130076.
(XEN) traps.c:2584:d0 Domain attempted WRMSR 00000000c0010004 from 0x0000ffffcacc2a71 to 0x0000fffb5ad51ec0.
(XEN) traps.c:2584:d0 Domain attempted WRMSR 00000000c0010000 from 0x0000000000530076 to 0x0000000000130076.
[... same WRMSR line repeated, again interleaved with the kernel lines below ...]
[    0.384023] ------------[ cut here ]------------
[    0.384023] WARNING: at /build/buildd-linux_3.2.21-3-i386-vEohn4/linux-3.2.21/arch/x86/xen/enlighten.c:738 perf_events_lapic_init+0x28/0x29()
[    0.384023] Hardware name: empty
[    0.384023] Modules linked in:
[...]
MDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQg
V1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAw
MDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1T
UiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAw
MDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAw
MDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMw
MDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAw
MDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYu
DQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMw
MDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihY
RU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAw
MDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikg
dHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBm
cm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFw
cy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20g
MHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6
MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAw
MDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0
OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAw
MDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAg
RG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAw
NTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21h
aW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAw
NzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBh
dHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0
byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVt
cHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4
MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVk
IFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAw
MDAwMDAwMTMwMDc2Lg0KWyAgICAwLjM4NDAyM10gUChYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21h
aW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAw
NzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBh
dHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0
byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVt
cHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4
MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVk
IFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAw
MDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JN
U1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAw
MDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAw
MDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEz
MDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAw
MDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2
Lg0KaWQ6IDAsIGNvbW06IHN3YShYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVk
IFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAw
MDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JN
U1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAw
MDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAw
MDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEz
MDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAw
MDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2
Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBj
MDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQoo
WEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEw
MDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4p
IHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAg
ZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJh
cHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9t
IDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5j
OjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgw
MDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4
NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAw
MDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQw
IERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAw
MDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9t
YWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMw
MDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4g
YXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYg
dG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRl
bXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAw
eDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRl
ZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAw
MDAwMDAwMDEzMDA3Ni4NCnBwZXIvMiBUYWludGVkOiBHICAgICAgICBXICAgIDMuMi4wLTMtNjg2
LXBhZSAjMShYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAw
MDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2
Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBj
MDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQoo
WEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEw
MDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4p
IHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAg
ZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJh
cHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9t
IDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQoNDQooWEVOKSB0cmFw
cy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20g
MHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6
MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAw
MDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0
OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAw
MDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAg
RG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAw
NTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21h
aW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAw
NzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBh
dHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0
byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVt
cHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4
MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVk
IFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAw
MDAwMDAwMTMwMDc2Lg0KWyAgICAwLjM4NDAyM10gQyhYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21h
aW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAw
NzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBh
dHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0
byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVt
cHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4
MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVk
IFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAw
MDAwMDAwMTMwMDc2Lg0KYWxsIFRyYWNlOg0NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4g
YXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYg
dG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRl
bXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAw
eDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRl
ZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAw
MDAwMDAwMDEzMDA3Ni4NClsgICAgMC4zODQwMjNdICAoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9t
YWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMw
MDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4g
YXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYg
dG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRl
bXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAw
eDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRl
ZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAw
MDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdS
TVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAw
MDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1Ig
MDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAx
MzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAw
MDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3
Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAw
YzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0K
KFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAx
MDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVO
KSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAw
IGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRy
YXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJv
bSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMu
YzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4
MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1
ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAw
MDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpk
MCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAw
MDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERv
bWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUz
MDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWlu
IGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2
IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0
ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8g
MHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0
ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAw
MDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBX
Uk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAw
MDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNS
IDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAw
MTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAw
MDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAw
NzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAw
MGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4N
CihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAw
MTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhF
TikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAw
MCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0
cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZy
b20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBz
LmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAw
eDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoy
NTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAw
MDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6
ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAw
MDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBE
b21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1
MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFp
biBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3
NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0
dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRv
IDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1w
dGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgw
MDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQg
V1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAw
MDAwMDAxMzAwNzYuDQpbPGMxMDM3ZmNjPl0gPyB3KFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFp
biBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3
NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQphcm5fc2xvd3BhdGhfY29tKFhFTikgdHJhcHMuYzoy
NTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAw
MDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6
ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAw
MDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBE
b21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1
MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KbW9uKzB4NjgvMHg3OQ0NCihYRU4pIHRyYXBz
LmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAw
eDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoy
NTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAw
MDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6
ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAw
MDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NClsgICAgMC4zODQwMjNdICBbPGMxMDE1
MGQyPl0gPyBwZXJmX2V2ZW50c19sYXBpYyhYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0
ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8g
MHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0
ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAw
MDAwMDAwMDAxMzAwNzYuDQpfaW5pdCsweDI4LzB4MjkNKFhFTikgdHJhcHMuYzoyNTg0OmQwIERv
bWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUz
MDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWlu
IGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2
IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0
ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8g
MHgwMDAwMDAwMDAwMTMwMDc2Lg0KDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVt
cHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4
MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVk
IFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAw
MDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JN
U1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAw
MDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAw
MDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEz
MDA3Ni4NClsgICAgMC4zODQwMjNdICAoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVt
cHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4
MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVk
IFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAw
MDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JN
U1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAw
MDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAw
MDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEz
MDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAw
MDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2
Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBj
MDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQpb
PGMxMDM3ZmVhPl0gPyB3KFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JN
U1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAw
MDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAw
MDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEz
MDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAw
MDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2
Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBj
MDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQoo
WEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEw
MDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4p
IHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAg
ZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJh
cHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9t
IDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5j
OjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgw
MDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4
NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAw
MDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQw
IERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAw
MDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9t
YWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMw
MDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4g
YXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYg
dG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRl
bXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAw
eDAwMDAwMDAwMDAxMzAwNzYuDQphcm5fc2xvd3BhdGhfbnVsKFhFTikgdHJhcHMuYzoyNTg0OmQw
IERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAw
MDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9t
YWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMw
MDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4g
YXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYg
dG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRl
bXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAw
eDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRl
ZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAw
MDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdS
TVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAw
MDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1Ig
MDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAx
MzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAw
MDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3
Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAw
YzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0K
KFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAx
MDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVO
KSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAw
IGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRy
YXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJv
bSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMu
YzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4
MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1
ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAw
MDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCmwrMHhkLzB4MTANDQooWEVOKSB0
cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZy
b20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NClsgICAgMC4zODQw
MjNdICAoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAw
MGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4N
CihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAw
MTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhF
TikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAw
MCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0
cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZy
b20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBz
LmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAw
eDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoy
NTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAw
MDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6
ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAw
MDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBE
b21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1
MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFp
biBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3
NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0
dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRv
IDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1w
dGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgw
MDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQg
V1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAw
MDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1T
UiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAw
MDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAw
MDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMw
MDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAw
MDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYu
DQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMw
MDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCls8
YzEwMTUwZDI+XSA/IHAoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1T
UiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAw
MDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAw
MDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMw
MDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAw
MDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYu
DQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMw
MDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihY
RU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAw
MDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikg
dHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBm
cm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFw
cy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20g
MHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6
MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAw
MDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0
OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAw
MDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAg
RG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAw
NTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21h
aW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAw
NzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBh
dHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0
byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVt
cHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4
MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVk
IFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAw
MDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JN
U1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAw
MDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAw
MDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEz
MDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAw
MDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2
Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBj
MDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQpl
cmZfZXZlbnRzX2xhcGljKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JN
U1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAw
MDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAw
MDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEz
MDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAw
MDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2
Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBj
MDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQoo
WEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEw
MDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4p
IHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAg
ZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJh
cHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9t
IDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQpfaW5pdCsweDI4LzB4
MjkNKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBj
MDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQoo
WEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEw
MDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4p
IHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAg
ZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJh
cHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9t
IDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5j
(XEN) traps.c:2584:d0 Domain attempted WRMSR 00000000c0010000 from 0x0000000000530076 to 0x0000000000130076.
[... the (XEN) line above repeats continuously, interleaved character-by-character with the dom0 kernel stack trace; with the repeats stripped, the trace fragments read ...]
[    0.384023]  [<c1015274>] ? x86_pmu_enable+0x1a1/0x227
[    0.384023]  [<c108ff26>] ? perf_pmu_enable+0x1a/0x1b
[    0.384023]  [<c1014031>] ? x86_pmu_commit_txn+0x6b/0x78
[    0.384023]  [<c1050f9d>] ? arch_local_irq_restore+0x6/0x7
[    0.384023]  [<c10512bb>] ? local_clock+0x23/0x2c
[    0.384023]  [<c1090a0e>] ? ctx_sched_in+0x2d/0x135
[    0.384023]  [<c1015288>] ? x86_pmu_enable+0x1b5/0x227
[    0.384023]  (XEN) traps.c:2584:d0 Domain attempted WRMSR 00000000c0010000 from 0x0000000000530076 to 0x0000000000130076.
MDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQw
IERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAw
MDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9t
YWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMw
MDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4g
YXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYg
dG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRl
bXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAw
eDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRl
ZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAw
MDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdS
TVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAw
MDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1Ig
MDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAx
MzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAw
MDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3
Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAw
YzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0K
KFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAx
MDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVO
KSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAw
IGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRy
YXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJv
bSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMu
YzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4
MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1
ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAw
MDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpk
MCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAw
MDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERv
bWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUz
MDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWlu
IGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2
IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0
ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8g
MHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0
ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAw
MDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBX
Uk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAw
MDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNS
IDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAw
MTMwMDc2Lg0KWzxjMTAyNDk0Yz5dID8gcChYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0
ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8g
MHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0
ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAw
MDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBX
Uk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAw
MDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNS
IDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAw
MTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAw
MDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAw
NzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAw
MGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4N
CihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAw
MTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhF
TikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAw
MCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0
cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZy
b20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBz
LmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAw
eDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoy
NTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAw
MDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6
ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAw
MDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBE
b21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1
MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFp
biBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3
NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0
dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRv
IDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1w
dGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgw
MDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQg
V1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAw
MDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1T
UiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAw
MDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAw
MDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMw
MDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAw
MDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYu
DQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMw
MDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihY
RU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAw
MDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikg
dHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBm
cm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFw
cy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20g
MHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6
MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAw
MDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0
OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAw
MDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAg
RG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAw
NTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21h
aW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAw
NzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBh
dHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0
byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVt
cHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4
MDAwMDAwMDAwMDEzMDA3Ni4NCnZjbG9ja19jbG9ja3NvdXIoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAg
RG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAw
NTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21h
aW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAw
NzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KY2VfcmVhZCsweGM1LzB4ZihYRU4pIHRyYXBzLmM6
MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAw
MDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0
OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAw
MDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQo3DQ0KKFhFTikgdHJhcHMuYzoyNTg0
OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAw
MDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAg
RG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAw
NTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21h
aW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAw
NzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBh
dHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0
byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVt
cHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4
MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVk
IFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAw
MDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JN
U1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAw
MDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAw
MDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEz
MDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAw
MDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2
Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBj
MDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQoo
WEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEw
MDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4p
IHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAg
ZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJh
cHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9t
IDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5j
OjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgw
MDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4
NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAw
MDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQw
IERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAw
MDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9t
YWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMw
MDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4g
YXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYg
dG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRl
bXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAw
eDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRl
ZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAw
MDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdS
TVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAw
MDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1Ig
MDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAx
MzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAw
MDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3
Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAw
YzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0K
KFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAx
MDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQpbICAg
IDAuMzg0MDIzXSAgKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1Ig
MDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAx
MzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAw
MDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3
Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAw
YzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0K
KFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAx
MDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVO
KSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAw
IGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRy
YXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJv
bSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMu
YzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4
MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1
ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAw
MDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpk
MCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAw
MDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERv
bWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUz
MDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWlu
IGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2
IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0
ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8g
MHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0
ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAw
MDAwMDAwMDAxMzAwNzYuDQpbPGMxMDI0OTRjPl0gPyBwdmNsb2NrX2Nsb2Nrc291cmNlX3JlYWQr
MHhjNS8weGYoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAw
MDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3
Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAw
YzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0K
KFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAx
MDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVO
KSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAw
IGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRy
YXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJv
bSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMu
YzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4
MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1
ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAw
MDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpk
MCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAw
MDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERv
bWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUz
MDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWlu
IGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2
IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0
ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8g
MHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0
ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAw
MDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBX
Uk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAw
MDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNS
IDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAw
MTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAw
MDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAw
NzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAw
MGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4N
CihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAw
MTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhF
TikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAw
MCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0
cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZy
b20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBz
LmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAw
eDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KNw0NCihYRU4pIHRyYXBz
LmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAw
eDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoy
NTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAw
MDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6
ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAw
MDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBE
b21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1
MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFp
biBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3
NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0
dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRv
IDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1w
dGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgw
MDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQg
V1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAw
MDAwMDAxMzAwNzYuDQpbICAgIDAuMzg0MDIzXSAgKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFp
biBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3
NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0
dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRv
IDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1w
dGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgw
MDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQg
V1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAw
MDAwMDAxMzAwNzYuDQpbPGMxMDBmN2RmPl0gPyBzKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFp
biBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3
NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0
dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRv
IDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1w
dGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgw
MDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQg
V1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAw
MDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1T
UiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAw
MDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAw
MDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMw
MDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAw
MDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYu
DQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMw
MDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihY
RU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAw
MDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikg
dHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBm
cm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQpjaGVkX2Nsb2Nr
KzB4OS8wKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAw
MDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYu
DQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMw
MDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihY
RU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAw
MDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikg
dHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBm
cm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFw
cy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20g
MHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6
MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAw
MDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0
OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAw
MDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAg
RG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAw
NTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21h
aW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAw
NzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBh
dHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0
byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVt
cHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4
MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVk
IFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAw
MDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JN
U1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAw
MDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAw
MDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEz
MDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAw
MDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2
Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBj
MDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQoo
WEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEw
MDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4p
IHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAg
ZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJh
cHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9t
IDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5j
OjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgw
MDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4
NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAw
MDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQw
IERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAw
MDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9t
YWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMw
MDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4g
YXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYg
dG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRl
bXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAw
eDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRl
ZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAw
MDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdS
TVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAw
MDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1Ig
MDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAx
MzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAw
MDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3
Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAw
YzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0K
KFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAx
MDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVO
KSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAw
IGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRy
YXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJv
bSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMu
YzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4
MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1
ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAw
MDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpk
MCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAw
MDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERv
bWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUz
MDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQp4ZA0NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBE
b21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1
MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFp
biBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3
NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0
dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRv
IDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1w
dGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgw
MDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQg
V1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAw
MDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1T
UiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAw
MDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAw
MDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMw
MDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAw
MDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYu
DQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMw
MDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihY
RU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAw
MDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikg
dHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBm
cm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFw
cy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20g
MHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6
MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAw
MDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0
OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAw
MDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAg
RG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAw
(XEN) traps.c:2584:d0 Domain attempted WRMSR 00000000c0010000 from 0x0000000000530076 to 0x0000000000130076.
[the hypervisor line above repeats for every WRMSR attempt; repeats elided. It interleaves with the following guest kernel backtrace on the console:]
[    0.384023]  [<c1050fd6>] ? sched_clock_local+0x10/0x14b
[    0.384023]  [<c1090841>] ? event_sched_in+0x73/0x108
[    0.384023]  [<c1090943>] ? group_sched_in+0x6d/0x10b
[    0.384023]  [<c1050f9d>] ? arch_local_irq_restore+0x6/0x7
[    0.384023]  [<c10512bb>] ? local_clock+0x23/0x2c
[    0.384023]  [<c1091185>] ? __perf_event_enable+0x107/0x144
[    0.384023]  [<c108e497>] ? remote_function+0x2e/0x33
aW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAw
NzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBh
dHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0
byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVt
cHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4
MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVk
IFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAw
MDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JN
U1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAw
MDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAw
MDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEz
MDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAw
MDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2
Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBj
MDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQoo
WEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEw
MDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4p
IHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAg
ZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJh
cHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9t
IDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5j
OjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgw
MDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4
NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAw
MDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQw
IERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAw
MDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9t
YWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMw
MDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4g
YXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYg
dG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KWzxjMTA1Y2ZhZD5dID8gZyhYRU4pIHRyYXBzLmM6MjU4
NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAw
MDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQw
IERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAw
MDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9t
YWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMw
MDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4g
YXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYg
dG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRl
bXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAw
eDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRl
ZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAw
MDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdS
TVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAw
MDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1Ig
MDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAx
MzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAw
MDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3
Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAw
YzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0K
KFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAx
MDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVO
KSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAw
IGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRy
YXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJv
bSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMu
YzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4
MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1
ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAw
MDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpk
MCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAw
MDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERv
bWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUz
MDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWlu
IGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2
IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0
ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8g
MHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0
ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAw
MDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBX
Uk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAw
MDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNS
IDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAw
MTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAw
MDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAw
NzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAw
MGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4N
CihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAw
MTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhF
TikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAw
MCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0
cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZy
b20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBz
LmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAw
eDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoy
NTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAw
MDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6
ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAw
MDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBE
b21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1
MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFp
biBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3
NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQplbmVyaWNfc21wX2NhbGxfKFhFTikgdHJhcHMuYzoy
NTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAw
MDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6
ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAw
MDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCmZ1bmN0aW9uX3NpbmdsZV8oWEVOKSB0
cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZy
b20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBz
LmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAw
eDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoy
NTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAw
MDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6
ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAw
MDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBE
b21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1
MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFp
biBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3
NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0
dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRv
IDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1w
dGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgw
MDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQg
V1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAw
MDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1T
UiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAw
MDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAw
MDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMw
MDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAw
MDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYu
DQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMw
MDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihY
RU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAw
MDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikg
dHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBm
cm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFw
cy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20g
MHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6
MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAw
MDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0
OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAw
MDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAg
RG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAw
NTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21h
aW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAw
NzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBh
dHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0
byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVt
cHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4
MDAwMDAwMDAwMDEzMDA3Ni4NCmludGVycnVwdCsweDk3LzAoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAg
RG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAw
NTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21h
aW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAw
NzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBh
dHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0
byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVt
cHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4
MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVk
IFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAw
MDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JN
U1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAw
MDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAw
MDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEz
MDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAw
MDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2
Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBj
MDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQoo
WEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEw
MDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4p
IHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAg
ZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJh
cHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9t
IDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5j
OjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgw
MDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4
NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAw
MDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KeGIyDQ0KKFhFTikgdHJhcHMuYzoy
NTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAw
MDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6
ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAw
MDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBE
b21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1
MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFp
biBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3
NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQpbICAgIDAuMzg0MDIzXSAgWzxjMTAwYTc1Yz5dID8g
eGVuX2NhbGxfZnVuY3Rpb24oWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBX
Uk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAw
MDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNS
IDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAw
MTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAw
MDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAw
NzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAw
MGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4N
CihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAw
MTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhF
TikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAw
MCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0
cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZy
b20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBz
LmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAw
eDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoy
NTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAw
MDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6
ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAw
MDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBE
b21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1
MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFp
biBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3
NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0
dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRv
IDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1w
dGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgw
MDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQg
V1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAw
MDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1T
UiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAw
MDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAw
MDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMw
MDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAw
MDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYu
DQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMw
MDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihY
RU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAw
MDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikg
dHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBm
cm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFw
cy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20g
MHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6
MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAw
MDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0
OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAw
MDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQpfc2luZ2xlX2ludGVycnVwKFhFTikg
dHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBm
cm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFw
cy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20g
MHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6
MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAw
MDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0
OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAw
MDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAg
RG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAw
NTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21h
aW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAw
NzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBh
dHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0
byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVt
cHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4
MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVk
IFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAw
MDAwMDAwMTMwMDc2Lg0KdCsweGEvMHgxYw0NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4g
YXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYg
dG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRl
bXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAw
eDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRl
ZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAw
MDAwMDAwMDEzMDA3Ni4NClsgICAgMC4zODQwMjNdICAoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9t
YWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMw
MDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4g
YXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYg
dG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRl
bXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAw
eDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRl
ZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAw
MDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdS
TVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAw
MDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1Ig
MDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAx
MzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAw
MDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3
Ni4NCls8YzEwNzZmZTQ+XSA/IGgoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRl
ZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAw
MDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdS
TVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAw
MDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1Ig
MDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAx
MzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAw
MDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3
Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAw
YzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0K
KFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAx
MDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVO
KSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAw
IGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRy
YXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJv
bSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMu
YzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4
MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1
ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAw
MDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpk
MCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAw
MDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERv
bWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUz
MDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWlu
IGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2
IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0
ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8g
MHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0
ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAw
MDAwMDAwMDAxMzAwNzYuDQphbmRsZV9pcnFfZXZlbnRfKFhFTikgdHJhcHMuYzoyNTg0OmQwIERv
bWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUz
MDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWlu
IGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2
IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0
ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8g
MHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0
ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAw
MDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBX
Uk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAw
MDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNS
IDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAw
MTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAw
MDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAw
NzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAw
MGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4N
CihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAw
MTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhF
TikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAw
MCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0
cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZy
b20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBz
LmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAw
eDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoy
NTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAw
MDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6
ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAw
MDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBE
b21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1
MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFp
biBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3
NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0
dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRv
IDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1w
dGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgw
MDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQg
V1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAw
MDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1T
UiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAw
MDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAw
MDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMw
MDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAw
MDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYu
DQpwZXJjcHUrMHg0Ny8weDE1KFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQg
V1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAw
MDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1T
UiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAw
MDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAw
MDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMw
MDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAw
MDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYu
DQo4DQ0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAw
MDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYu
DQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMw
MDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihY
RU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAw
MDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikg
dHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBm
cm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFw
cy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20g
MHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6
MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAw
MDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0
OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAw
MDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAg
RG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAw
NTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21h
aW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAw
NzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBh
dHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0
byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVt
cHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4
MDAwMDAwMDAwMDEzMDA3Ni4NClsgICAgMC4zODQwMjNdICAoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAg
RG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAw
NTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21h
aW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAw
(XEN) traps.c:2584:d0 Domain attempted WRMSR 00000000c0010000 from 0x0000000000530076 to 0x0000000000130076.
[the (XEN) line above repeats continuously, interleaved with the dom0 trace below]
[    0.384023]  [<c1078811>] ? irq_get_irq_data+0x5/0x6
[    0.384023]  [<c1078e34>] ? handle_percpu_irq+0x29/0x37
[    0.384023]  [<c11c258d>] ? __xen_evtchn_do_upcall+0x126/0x1ad
[    0.384023]  [<c11c37d0>] ? xen_evtchn_do_upcall+0x18/0x26
[    0.384023]  [<c12c4697>] ? xen_do_upcall+0x7/0xc
[    0.384023]  [<c10023a7>] ? hypercall_page+0x3a7/0x1000
[    0.384023]  [<c100603a>] ? xen_safe_halt+0xf
MDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4
NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAw
MDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQw
IERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAw
MDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQovMHgxOQ0NCihYRU4pIHRyYXBzLmM6MjU4
NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAw
MDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQw
IERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAw
MDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQpbICAgIDAuMzg0MDIzXSAgKFhFTikgdHJh
cHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9t
IDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQpbPGMxMDEwNWI0Pl0g
PyBkKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBj
MDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQpl
ZmF1bHRfaWRsZSsweDUyKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JN
U1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAw
MDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAw
MDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEz
MDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAw
MDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2
Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBj
MDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQoo
WEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEw
MDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4p
IHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAg
ZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJh
cHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9t
IDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5j
OjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgw
MDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4
NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAw
MDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQw
IERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAw
MDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9t
YWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMw
MDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4g
YXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYg
dG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRl
bXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAw
eDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRl
ZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAw
MDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdS
TVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAw
MDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1Ig
MDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAx
MzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAw
MDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3
Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAw
YzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0K
KFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAx
MDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVO
KSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAw
IGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRy
YXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJv
bSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMu
YzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4
MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1
ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAw
MDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCi8weDg3DQ0KKFhFTikgdHJhcHMu
YzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4
MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1
ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAw
MDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpk
MCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAw
MDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERv
bWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUz
MDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWlu
IGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2
IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0
ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8g
MHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0
ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAw
MDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBX
Uk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAw
MDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNS
IDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAw
MTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAw
MDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAw
NzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAw
MGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4N
CihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAw
MTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhF
TikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAw
MCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0
cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZy
b20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBz
LmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAw
eDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoy
NTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAw
MDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6
ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAw
MDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBE
b21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1
MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFp
biBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3
NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0
dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRv
IDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1w
dGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgw
MDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQg
V1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAw
MDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1T
UiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAw
MDEzMDA3Ni4NClsgICAgMC4zODQwMjNdICAoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0
dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRv
IDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1w
dGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgw
MDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQg
V1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAw
MDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1T
UiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAw
MDEzMDA3Ni4NCls8YzEwMGFhNDc+XSA/IGMoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0
dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRv
IDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1w
dGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgw
MDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQg
V1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAw
MDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1T
UiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAw
MDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAw
MDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMw
MDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAw
MDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYu
DQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMw
MDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihY
RU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAw
MDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikg
dHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBm
cm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFw
cy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20g
MHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6
MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAw
MDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0
OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAw
MDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAg
RG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAw
NTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21h
aW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAw
NzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBh
dHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0
byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVt
cHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4
MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVk
IFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAw
MDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JN
U1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAw
MDAxMzAwNzYuDQpwdV9pZGxlKzB4OTUvMHhhKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBh
dHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0
byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVt
cHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4
MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVk
IFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAw
MDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JN
U1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAw
MDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAw
MDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEz
MDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAw
MDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2
Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBj
MDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQoo
WEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEw
MDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4p
IHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAg
ZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJh
cHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9t
IDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5j
OjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgw
MDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4
NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAw
MDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KZg0NClsgICAgMC4zODQwMjNdIC0t
LVsgZW5kIHRyYWNlIGE3OTE5ZTdmMTdjMGE3MjggXShYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21h
aW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAw
NzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBh
dHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0
byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVt
cHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4
MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVk
IFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAw
MDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JN
U1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAw
MDAxMzAwNzYuDQotLS0NDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBX
Uk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAw
MDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNS
IDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAw
MTMwMDc2Lg0KWyAgICAzLjQxMjIxMl0gaW5zdGFsbGluZyBYZW4gdGkoWEVOKSB0cmFwcy5jOjI1
ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAw
MDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpk
MCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAw
MDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KbWVyIGZvciBDUFUgMw0NCihYRU4pIHRy
YXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJv
bSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMu
YzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4
MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1
ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAw
MDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpk
MCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAw
MDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERv
bWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUz
MDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWlu
IGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2
IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0
ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8g
MHgwMDAwMDAwMDAwMTMwMDc2Lg0KWyAgICAwLjAwODAwMF0gSW5pdGlhbGl6aW5nIENQVSMzDQ0K
WyAgICAzLjQyMDIxM10gTk1JIHdhdGNoZG9nIGVuYWJsZWQsIHRha2VzIG9uZSBoKFhFTikgdHJh
cHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9t
IDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5j
OjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgw
MDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCnctcG11IGNvdW50ZXIuDQ0K
KFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAx
MDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVO
KSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAw
IGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRy
YXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJv
bSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMu
YzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4
MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1
ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAw
MDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpk
MCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAw
MDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERv
bWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwNCBmcm9tIDB4MDAwMGZmZmZkZDJj
ZmEzZSB0byAweDAwMDBmZmZiNWFkNTFlYzAuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWlu
IGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2
IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NClsgICAgMy40MjQyMTNdIC0oWEVOKSB0cmFwcy5jOjI1
ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAw
MDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpk
MCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAw
MDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERv
bWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUz
MDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQotLS0tLS0tLS0tLVsgY3V0KFhFTikgdHJhcHMu
YzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4
MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQogaGVyZSBdLS0tLS0tLS0t
KFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAx
MDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQotLS0N
DQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMw
MDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NClsg
ICAgMy40MjQyMTNdIFcoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1T
UiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAw
MDEzMDA3Ni4NCkFSTklORzogYXQgL2J1aWwoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0
dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRv
IDB4MDAwMDAwMDAwMDEzMDA3Ni4NCmQvYnVpbGRkLWxpbnV4XzMoWEVOKSB0cmFwcy5jOjI1ODQ6
ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAw
MDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCi4yLjIxLTMtaTM4Ni12RW8oWEVOKSB0
cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZy
b20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCmhuNC9saW51eC0z
LjIuMjEoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAw
MGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4N
CihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAw
MTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhF
TikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAw
MCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQovYXJjaC94
ODYveGVuL2VuKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAw
MDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAw
NzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAw
MGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4N
CihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAw
MTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KbGln
aHRlbi5jOjczOCBwZShYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNS
IDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAw
MTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAw
MDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAw
NzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAw
MGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4N
CnJmX2V2ZW50c19sYXBpY18oWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBX
Uk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAw
MDAwMDEzMDA3Ni4NCmluaXQrMHgyOC8weDI5KCkoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWlu
IGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2
IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0
ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8g
MHgwMDAwMDAwMDAwMTMwMDc2Lg0KDQ0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRl
bXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAw
eDAwMDAwMDAwMDAxMzAwNzYuDQpbICAgIDMuNDI0MjEzXSBIKFhFTikgdHJhcHMuYzoyNTg0OmQw
IERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAw
MDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9t
YWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMw
MDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCmFyZHdhcmUgbmFtZTogZW0oWEVOKSB0cmFwcy5j
OjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgw
MDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCnB0eQ0NCihYRU4pIHRyYXBz
LmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAw
eDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KWyAgICAzLjQyNDIxM10g
TShYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAw
MTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhF
TikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAw
MCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQpvZHVsZXMg
bGlua2VkIGluKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAw
MDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAw
NzYuDQo6KFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAw
MDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYu
DQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMw
MDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihY
RU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAw
MDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikg
dHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBm
cm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQoNDQooWEVOKSB0
cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZy
b20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBz
LmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAw
eDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KWyAgICAzLjQyNDIxM10g
UChYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAw
MTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KaWQ6
IDAsIGNvbW06IHN3YShYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNS
IDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAw
MTMwMDc2Lg0KcHBlci8zIFRhaW50ZWQ6IChYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0
ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8g
MHgwMDAwMDAwMDAwMTMwMDc2Lg0KRyAgICAgICAgVyAgICAzLihYRU4pIHRyYXBzLmM6MjU4NDpk
MCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAw
MDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERv
bWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUz
MDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQoyLjAtMy02ODYtcGFlICMxKFhFTikgdHJhcHMu
YzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4
MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQoNDQooWEVOKSB0cmFwcy5j
OjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgw
MDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NClsgICAgMy40MjQyMTNdIEMo
WEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEw
MDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4p
IHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAg
ZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KYWxsIFRyYWNl
Og0NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAw
YzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0K
WyAgICAzLjQyNDIxM10gIChYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdS
TVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAw
MDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1Ig
MDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAx
MzAwNzYuDQpbPGMxMDM3ZmNjPl0gPyB3KFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRl
bXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAw
eDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRl
ZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAw
MDAwMDAwMDEzMDA3Ni4NCmFybl9zbG93cGF0aF9jb20oWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9t
YWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMw
MDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4g
YXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYg
dG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRl
bXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAw
eDAwMDAwMDAwMDAxMzAwNzYuDQptb24rMHg2OC8weDc5DQ0KKFhFTikgdHJhcHMuYzoyNTg0OmQw
IERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAw
MDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9t
YWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMw
MDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NClsgICAgMy40MjQyMTNdICAoWEVOKSB0cmFwcy5j
OjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgw
MDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4
NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAw
MDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQw
IERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAw
MDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9t
YWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMw
MDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4g
YXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYg
dG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KWzxjMTAxNTBkMj5dID8gcChYRU4pIHRyYXBzLmM6MjU4
NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAw
MDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQw
IERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAw
MDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQplcmZfZXZlbnRzX2xhcGljKFhFTikgdHJh
cHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9t
IDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQpfaW5pdCsweDI4LzB4
MjkNKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBj
MDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQoo
WEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEw
MDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCg0KKFhF
TikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAw
MCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0
cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZy
b20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBz
LmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAw
eDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoy
NTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAw
MDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6
ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAw
MDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NClsgICAgMy40MjQyMTNdICAoWEVOKSB0
cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZy
b20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBz
LmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAw
eDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoy
NTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAw
MDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6
ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAw
MDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCls8YzEwMzdmZWE+XSA/IHcoWEVOKSB0
cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZy
b20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCmFybl9zbG93cGF0
aF9udWwoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAw
MGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4N
CihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAw
MTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhF
TikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAw
MCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0
cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZy
b20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBz
LmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAw
eDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoy
NTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAw
MDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6
ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAw
MDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCmwrMHhkLzB4MTANDQooWEVOKSB0cmFw
cy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20g
MHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6
MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAw
MDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0
OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAw
MDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAg
RG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAw
NTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NClsgICAgMy40MjQyMTNdICAoWEVOKSB0cmFw
cy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20g
MHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCls8YzEwMTUwZDI+XSA/
IHAoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMw
MDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihY
RU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAw
MDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikg
dHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBm
cm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFw
cy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20g
MHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6
MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAw
MDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0
OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAw
MDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAg
RG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAw
NTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21h
aW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAw
NzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KZXJmX2V2ZW50c19sYXBpYyhYRU4pIHRyYXBzLmM6
MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAw
MDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0
OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAw
MDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAg
RG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAw
NTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCl9pbml0KzB4MjgvMHgyOQ0oWEVOKSB0cmFw
cy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20g
MHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6
MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAw
MDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0
OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAw
MDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQoNCihYRU4pIHRyYXBzLmM6MjU4NDpk
MCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAw
MDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KWyAgICAzLjQyNDIxM10gIChYRU4pIHRy
YXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJv
bSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMu
YzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4
MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1
ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAw
MDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCls8YzEwMTUyNzQ+XSA/IHgoWEVO
KSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAw
IGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCjg2X3BtdV9l
bmFibGUrMHgoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAw
MDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3
Ni4NCjFhMS8weDIyNw0NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdS
TVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAw
MDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1Ig
MDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAx
MzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAw
MDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3
Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAw
YzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0K
KFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAx
MDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVO
KSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAw
IGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRy
YXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJv
bSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KWyAgICAzLjQyNDIx
M10gIChYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAw
YzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0K
WzxjMTA4ZmYyNj5dID8gcChYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdS
TVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAw
MDAwMTMwMDc2Lg0KZXJmX3BtdV9lbmFibGUrMChYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4g
YXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYg
dG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRl
bXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAw
eDAwMDAwMDAwMDAxMzAwNzYuDQp4MWEvMHgxYg0NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21h
aW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAw
NzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KWyAgICAzLjQyNDIxM10gIChYRU4pIHRyYXBzLmM6
MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAw
MDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0
OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAw
MDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAg
RG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAw
NTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21h
aW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAw
NzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KWzxjMTAxNDAzMT5dID8geChYRU4pIHRyYXBzLmM6
MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAw
MDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KODZfcG11X2NvbW1pdF90eChY
RU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAw
MDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikg
dHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBm
cm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFw
cy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20g
MHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6
MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAw
MDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KbisweDZiLzB4NzgNDQooWEVO
KSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAw
IGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRy
YXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJv
bSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KWyAgICAzLjQyNDIx
M10gIChYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAw
YzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0K
KFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAx
MDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVO
KSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAw
IGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCls8YzEwNTBm
OWQ+XSA/IGEoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAw
MDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3
Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAw
YzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0K
KFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAx
MDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVO
KSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAw
IGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRy
YXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJv
bSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KcmNoX2xvY2FsX2ly
cV9yZShYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAw
YzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0K
KFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAx
MDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVO
KSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAw
IGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRy
YXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJv
bSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMu
YzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4
MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1
ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAw
MDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpk
MCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAw
MDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0Kc3RvcmUrMHg2LzB4Nw0NCihYRU4pIHRy
YXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJv
bSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMu
YzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4
MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1
ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAw
MDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NClsgICAgMy40MjQyMTNdICAoWEVO
KSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAw
IGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRy
YXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJv
bSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMu
YzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4
MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1
ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAw
MDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCls8YzEwNTEyYmI+XSA/IGwoWEVO
KSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAw
IGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRy
YXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJv
bSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMu
YzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4
MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1
ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAw
MDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCm9jYWxfY2xvY2srMHgyMy8oWEVO
KSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAw
IGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRy
YXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJv
bSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KMHgyYw0NCihYRU4p
IHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAg
ZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJh
cHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9t
IDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5j
OjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgw
MDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4
NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAw
MDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQw
IERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAw
MDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQpbICAgIDMuNDI0MjEzXSAgKFhFTikgdHJh
cHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9t
IDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5j
OjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgw
MDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4
NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAw
MDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQw
IERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAw
MDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9t
YWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMw
MDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCls8YzEwOTBhMGU+XSA/IGMoWEVOKSB0cmFwcy5j
OjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgw
MDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4
NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAw
MDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KdHhfc2NoZWRfaW4rMHgyZChYRU4p
IHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAg
ZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KLzB4MTM1DQ0K
KFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAx
MDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVO
KSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAw
IGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRy
YXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJv
bSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KWyAgICAzLjQyNDIx
M10gIChYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAw
YzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0K
KFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAx
MDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVO
KSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAw
IGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRy
YXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJv
bSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KWzxjMTAxNTI4OD5d
ID8geChYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAw
YzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0K
KFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAx
MDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQo4Nl9w
bXVfZW5hYmxlKzB4KFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1Ig
MDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAx
MzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAw
MDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3
Ni4NCjFiNS8weDIyNw0NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdS
TVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAw
MDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1Ig
MDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAx
MzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAw
MDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3
Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAw
YzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0K
WyAgICAzLjQyNDIxM10gIChYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdS
TVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAw
MDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1Ig
MDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAx
MzAwNzYuDQpbPGMxMDI0OTRjPl0gPyBwKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRl
bXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAw
eDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRl
ZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAw
MDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdS
TVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAw
MDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1Ig
MDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAx
MzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAw
MDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3
Ni4NCnZjbG9ja19jbG9ja3NvdXIoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRl
ZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAw
MDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdS
TVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAw
MDAwMTMwMDc2Lg0KY2VfcmVhZCsweGM1LzB4ZihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4g
YXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYg
dG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRl
bXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAw
eDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRl
ZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAw
MDAwMDAwMDEzMDA3Ni4NCjcNDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRl
ZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAw
MDAwMDAwMDEzMDA3Ni4NClsgICAgMy40MjQyMTNdICAoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9t
YWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMw
MDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4g
YXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYg
dG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KWzxjMTAyNDk0Yz5dID8gcChYRU4pIHRyYXBzLmM6MjU4
NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAw
MDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KdmNsb2NrX2Nsb2Nrc291cihYRU4p
IHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAg
ZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KY2VfcmVhZCsw
eGM1LzB4ZihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAw
MDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2
Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBj
MDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQo3
DQ0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBj
MDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQoo
WEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEw
MDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4p
IHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAg
ZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJh
cHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9t
IDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQpbICAgIDMuNDI0MjEz
XSAgKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBj
MDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQpb
PGMxMDBmN2RmPl0gPyBzKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JN
U1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAw
MDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAw
MDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEz
MDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAw
MDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2
Lg0KY2hlZF9jbG9jaysweDkvMChYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVk
IFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAw
MDAwMDAwMTMwMDc2Lg0KeGQNDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRl
ZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAw
MDAwMDAwMDEzMDA3Ni4NClsgICAgMy40MjQyMTNdICAoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9t
YWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMw
MDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4g
YXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYg
dG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KWzxjMTA1MGZkNj5dID8gcyhYRU4pIHRyYXBzLmM6MjU4
NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAw
MDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQw
IERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAw
MDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9t
YWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMw
MDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCmNoZWRfY2xvY2tfbG9jYWwoWEVOKSB0cmFwcy5j
OjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgw
MDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4
NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAw
MDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQw
IERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAw
MDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9t
YWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMw
MDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4g
YXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYg
dG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRl
bXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAw
eDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRl
ZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAw
MDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdS
TVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAw
MDAwMTMwMDc2Lg0KKzB4MTAvMHgxNGINDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0
dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRv
IDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1w
dGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgw
MDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQg
V1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAw
MDAwMDAxMzAwNzYuDQpbICAgIDMuNDI0MjEzXSAgKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFp
biBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3
NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQpbPGMxMDkwODQxPl0gPyBlKFhFTikgdHJhcHMuYzoy
NTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAw
MDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQp2ZW50X3NjaGVkX2luKzB4KFhF
TikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAw
MCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0
cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZy
b20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBz
LmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAw
eDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoy
NTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAw
MDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6
ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAw
MDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBE
b21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1
MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFp
biBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3
NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0
dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRv
IDB4MDAwMDAwMDAwMDEzMDA3Ni4NCjczLzB4MTA4DQ0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERv
bWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUz
MDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWlu
IGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2
IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NClsgICAgMy40MjQyMTNdICAoWEVOKSB0cmFwcy5jOjI1
ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAw
MDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpk
MCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAw
MDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KWzxjMTA5MDk0Mz5dID8gZyhYRU4pIHRy
YXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJv
bSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMu
YzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4
MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQpyb3VwX3NjaGVkX2luKzB4
KFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAx
MDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQo2ZC8w
eDEwYg0NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAw
MDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2
Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBj
MDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQpb
ICAgIDMuNDI0MjEzXSAgKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JN
U1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAw
MDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAw
MDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEz
MDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAw
MDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2
Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBj
MDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQpb
PGMxMDUwZjlkPl0gPyBhKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JN
U1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAw
MDAxMzAwNzYuDQpyY2hfbG9jYWxfaXJxX3JlKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBh
dHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0
byAweDAwMDAwMDAwMDAxMzAwNzYuDQpzdG9yZSsweDYvMHg3DQ0KKFhFTikgdHJhcHMuYzoyNTg0
OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAw
MDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQpbICAgIDMuNDI0MjEzXSAgKFhFTikg
dHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBm
cm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQpbPGMxMDUxMmJi
Pl0gPyBsKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAw
MDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYu
DQpvY2FsX2Nsb2NrKzB4MjMvKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQg
V1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAw
MDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1T
UiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAw
MDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAw
MDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMw
MDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAw
MDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYu
DQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMw
MDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihY
RU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAw
MDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikg
dHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBm
cm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFw
cy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20g
MHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCjB4MmMNDQooWEVOKSB0
cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZy
b20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBz
LmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAw
eDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoy
NTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAw
MDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6
ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAw
MDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBE
b21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1
MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KWyAgICAzLjQyNDIxM10gIChYRU4pIHRyYXBz
LmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAw
eDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KWzxjMTA5MTE4NT5dID8g
XyhYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAw
MTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KX3Bl
cmZfZXZlbnRfZW5hYihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNS
 00000000c0010000 from 0x0000000000530076 to 0x0000000000130076.
(XEN) traps.c:2584:d0 Domain attempted WRMSR 00000000c0010000 from 0x0000000000530076 to 0x0000000000130076.
[the (XEN) WRMSR message above repeats between every fragment of the dom0 console output below]
le+0x107/0x144
[    3.424213]  [<c108e497>] ? remote_function+0x2e/0x33
[    3.424213]  [<c105cfad>] ? generic_smp_call_function_single_interrupt+0x97/0xb2
[    3.424213]  [<c100a75c>] ? xen_call_function_single_interrupt+0xa/0x1c
[    3.424213]  [<c1076fe4>] ? handle_irq_event_percpu+0x47/0x158
[    3.424213]  [<c1078811>] ? irq_get_irq_data+0x5/0x6
[    3.424213]  [<c1078e34>] ? handle_percpu_irq+0x29/0x37
[    3.424213]  [<c11c258d>] ? __xen_evtchn_do_upcall+0x126/0x1ad
[    3.424213]  [<c11c37d0>] ? xen_evtchn_do_upcall+0x18/0x26
[    3.424213]  [<c12c4697>] ? xen_do_upcall+0x7/0xc
[    3.424213]  [<c10023a7>] ? hypercall_page+0x3a7/0x1000
[    3.424213]  [<c100603a>] ? xen_safe_halt+0xf/0x19
[    3.424213]  [<c10105b4>] ? default_idle+0x52/0x87
[    3.424213]  [<c100aa47>] ? cpu_idle+0x95/0xaf
[    3.424213] ---[ end trace a7919e7f17c0a729 ]---
[    3.972247] installing Xen timer for CPU 4
[    0.008000] Initializing CPU#4
[    3.980248] NMI watchdog enabled, takes one hw-pmu counter.
(XEN) traps.c:2584:d0 Domain attempted WRMSR 00000000c0010004 from 0x0000ffffe4aa52f1 to 0x0000fffb5ad51ec0.
[    3.984248] ------------[ cut here ]------------
[    3.984248] WARNING: at /build/buildd-linux_3.2.21-3-i386-vEohn4/linux-3.2.21/arch/x86/xen/enlighten.c:738 perf_events_lapic_init+0x28/0x29()
[    3.984248] Hardware name: em
ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAw
MDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBX
Uk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAw
MDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNS
IDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAw
MTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAw
MDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAw
NzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAw
MGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4N
CihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAw
MTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KcHR5
DQ0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBj
MDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQoo
WEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEw
MDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4p
IHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAg
ZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJh
cHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9t
IDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQpbICAgIDMuOTg0MjQ4
XSBNKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBj
MDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQoo
WEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEw
MDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4p
IHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAg
ZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJh
cHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9t
IDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQpvZHVsZXMgbGlua2Vk
IGluKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBj
MDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQoo
WEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEw
MDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4p
IHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAg
ZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJh
cHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9t
IDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5j
OjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgw
MDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4
NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAw
MDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQw
IERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAw
MDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9t
YWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMw
MDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4g
YXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYg
dG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRl
bXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAw
eDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRl
ZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAw
MDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdS
TVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAw
MDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1Ig
MDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAx
MzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAw
MDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3
Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAw
YzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0K
KFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAx
MDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVO
KSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAw
IGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRy
YXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJv
bSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KOihYRU4pIHRyYXBz
LmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAw
eDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KDQ0KKFhFTikgdHJhcHMu
YzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4
MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1
ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAw
MDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpk
MCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAw
MDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERv
bWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUz
MDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQpbICAgIDMuOTg0MjQ4XSBQKFhFTikgdHJhcHMu
YzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4
MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1
ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAw
MDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpk
MCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAw
MDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERv
bWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUz
MDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQppZDogMCwgY29tbTogc3dhKFhFTikgdHJhcHMu
YzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4
MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1
ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAw
MDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpk
MCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAw
MDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KcHBlci80IFRhaW50ZWQ6IChYRU4pIHRy
YXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJv
bSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMu
YzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4
MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQpHICAgICAgICBXICAgIDMu
KFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAx
MDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQoyLjAt
My02ODYtcGFlICMxKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1Ig
MDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAx
MzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAw
MDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3
Ni4NCg0NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAw
MDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2
Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBj
MDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQoo
WEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEw
MDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4p
IHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAg
ZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJh
cHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9t
IDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5j
OjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgw
MDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4
NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAw
MDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQw
IERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAw
MDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9t
YWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMw
MDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4g
YXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYg
dG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRl
bXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAw
eDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRl
ZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAw
MDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdS
TVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAw
MDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1Ig
MDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAx
MzAwNzYuDQpbICAgIDMuOTg0MjQ4XSBDKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRl
bXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAw
eDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRl
ZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAw
MDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdS
TVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAw
MDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1Ig
MDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAx
MzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAw
MDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3
Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAw
YzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0K
KFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAx
MDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVO
KSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAw
IGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRy
YXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJv
bSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMu
YzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4
MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1
ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAw
MDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpk
MCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAw
MDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERv
bWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUz
MDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWlu
IGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2
IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0
ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8g
MHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0
ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAw
MDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBX
Uk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAw
MDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNS
IDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAw
MTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAw
MDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAw
NzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAw
MGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4N
CihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAw
MTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhF
TikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAw
MCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0
cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZy
b20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCmFsbCBUcmFjZToN
DQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMw
MDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihY
RU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAw
MDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KWyAgICAz
Ljk4NDI0OF0gIChYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAw
MDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMw
MDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAw
MDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYu
DQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMw
MDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihY
RU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAw
MDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikg
dHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBm
cm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQpbPGMxMDM3ZmNj
Pl0gPyB3KFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAw
MDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYu
DQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMw
MDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihY
RU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAw
MDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikg
dHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBm
cm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFw
cy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20g
MHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCmFybl9zbG93cGF0aF9j
b20oWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMw
MDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihY
RU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAw
MDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikg
dHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBm
cm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQptb24rMHg2OC8w
eDc5DQ0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAw
MDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYu
DQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMw
MDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihY
RU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAw
MDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KWyAgICAz
Ljk4NDI0OF0gIChYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAw
MDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMw
MDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAw
MDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYu
DQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMw
MDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihY
RU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAw
MDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikg
dHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBm
cm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFw
cy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20g
MHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6
MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAw
MDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0
OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAw
MDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAg
RG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAw
NTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21h
aW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAw
NzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBh
dHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0
byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVt
cHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4
MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVk
IFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAw
MDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JN
U1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAw
MDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAw
MDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEz
MDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAw
MDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2
Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBj
MDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQoo
WEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEw
MDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4p
IHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAg
ZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJh
cHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9t
IDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQpbPGMxMDE1MGQyPl0g
PyBwKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBj
MDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQoo
WEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEw
MDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4p
IHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAg
ZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJh
cHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9t
IDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQplcmZfZXZlbnRzX2xh
cGljKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBj
MDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQpf
aW5pdCsweDI4LzB4MjkNKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JN
U1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAw
MDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAw
MDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEz
MDA3Ni4NCg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAw
MDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAw
NzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAw
MGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4N
CihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAw
MTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhF
TikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAw
MCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0
cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZy
b20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBz
LmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAw
eDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoy
NTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAw
MDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6
ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAw
MDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBE
b21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1
MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFp
biBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3
NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0
dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRv
IDB4MDAwMDAwMDAwMDEzMDA3Ni4NClsgICAgMy45ODQyNDhdICAoWEVOKSB0cmFwcy5jOjI1ODQ6
ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAw
MDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBE
b21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1
MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFp
biBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3
NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0
dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRv
IDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1w
dGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgw
MDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQg
V1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAw
MDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1T
UiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAw
MDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAw
MDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMw
MDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAw
MDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYu
DQpbPGMxMDM3ZmVhPl0gPyB3KFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQg
V1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAw
MDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1T
UiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAw
MDEzMDA3Ni4NCmFybl9zbG93cGF0aF9udWwoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0
dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRv
IDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1w
dGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgw
MDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQg
V1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAw
MDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1T
UiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAw
MDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAw
MDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMw
MDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAw
MDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYu
DQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMw
MDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihY
RU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAw
MDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikg
dHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBm
cm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFw
cy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20g
MHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6
MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAw
MDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0
OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAw
MDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQpsKzB4ZC8weDEwDQ0KKFhFTikgdHJh
cHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9t
IDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5j
OjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgw
MDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4
NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAw
MDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQw
IERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAw
MDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9t
YWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMw
MDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NClsgICAgMy45ODQyNDhdICAoWEVOKSB0cmFwcy5j
OjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgw
MDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4
NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAw
MDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQw
IERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAw
MDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9t
YWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMw
MDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4g
YXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYg
dG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRl
bXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAw
eDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRl
ZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAw
MDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdS
TVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAw
MDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1Ig
MDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAx
MzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAw
MDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3
Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAw
YzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0K
KFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAx
MDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQpbPGMx
MDE1MGQyPl0gPyBwKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1Ig
MDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAx
MzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAw
MDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3
Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAw
YzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0K
KFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAx
MDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQplcmZf
ZXZlbnRzX2xhcGljKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1Ig
MDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAx
MzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAw
MDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3
Ni4NCl9pbml0KzB4MjgvMHgyOQ0oWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRl
ZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAw
MDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdS
TVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAw
MDAwMTMwMDc2Lg0KDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1T
UiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAw
MDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAw
MDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMw
MDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAw
MDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYu
DQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMw
MDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihY
RU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAw
MDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikg
dHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBm
cm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQpbICAgIDMuOTg0
MjQ4XSAgKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAw
MDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYu
DQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMw
MDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihY
RU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAw
MDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KWzxjMTAx
NTI3ND5dID8geChYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAw
MDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMw
MDc2Lg0KODZfcG11X2VuYWJsZSsweChYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1w
dGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgw
MDAwMDAwMDAwMTMwMDc2Lg0KMWExLzB4MjI3DQ0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFp
biBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3
NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0
dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRv
IDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1w
dGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgw
MDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQg
V1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAw
MDAwMDAxMzAwNzYuDQpbICAgIDMuOTg0MjQ4XSAgKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFp
biBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3
NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQpbPGMxMDhmZjI2Pl0gPyBwKFhFTikgdHJhcHMuYzoy
NTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAw
MDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6
ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAw
MDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBE
b21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1
MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KZXJmX3BtdV9lbmFibGUrMChYRU4pIHRyYXBz
LmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAw
eDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoy
NTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAw
MDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6
ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAw
MDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCngxYS8weDFiDQ0KKFhFTikgdHJhcHMu
YzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4
MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1
ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAw
MDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NClsgICAgMy45ODQyNDhdICAoWEVO
KSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAw
IGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRy
YXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJv
bSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMu
YzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4
MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1
ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAw
MDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCls8YzEwMTQwMzE+XSA/IHgoWEVO
KSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAw
IGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRy
YXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJv
bSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMu
YzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4
MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQo4Nl9wbXVfY29tbWl0X3R4
KFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAx
MDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVO
KSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAw
IGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRy
YXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJv
bSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMu
YzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4
MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1
ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAw
MDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpk
MCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAw
MDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERv
bWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUz
MDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWlu
IGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2
IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0
ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8g
MHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0
ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAw
MDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBX
Uk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAw
MDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNS
IDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAw
MTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAw
MDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAw
NzYuDQpuKzB4NmIvMHg3OA0NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVk
IFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAw
MDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JN
U1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAw
MDAxMzAwNzYuDQpbICAgIDMuOTg0MjQ4XSAgKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBh
dHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0
byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVt
cHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4
MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVk
IFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAw
MDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JN
U1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAw
MDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAw
MDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEz
MDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAw
MDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2
Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBj
MDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQoo
WEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEw
MDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4p
IHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAg
ZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJh
cHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9t
IDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5j
OjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgw
MDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4
NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAw
MDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KWzxjMTA1MGY5ZD5dID8gYShYRU4p
IHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAg
ZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJh
cHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9t
IDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQpyY2hfbG9jYWxfaXJx
X3JlKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBj
MDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQpz
dG9yZSsweDYvMHg3DQ0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JN
U1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAw
MDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAw
MDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEz
MDA3Ni4NClsgICAgMy45ODQyNDhdICAoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVt
cHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4
MDAwMDAwMDAwMDEzMDA3Ni4NCls8YzEwNTEyYmI+XSA/IGwoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAg
RG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAw
NTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCm9jYWxfY2xvY2srMHgyMy8oWEVOKSB0cmFw
cy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20g
MHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6
MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAw
MDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KMHgyYw0NCihYRU4pIHRyYXBz
LmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAw
eDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoy
NTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAw
MDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQpbICAgIDMuOTg0MjQ4XSAgKFhF
TikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAw
MCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0
cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZy
b20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCls8YzEwOTBhMGU+
XSA/IGMoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAw
MGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4N
CihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAw
MTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhF
TikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAw
MCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0
cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZy
b20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBz
LmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAw
eDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoy
NTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAw
MDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6
ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAw
MDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCnR4X3NjaGVkX2luKzB4MmQoWEVOKSB0
cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZy
b20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBz
LmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAw
eDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoy
NTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAw
MDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQovMHgxMzUNDQooWEVOKSB0cmFw
cy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20g
MHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NClsgICAgMy45ODQyNDhd
ICAoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMw
MDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCls8
YzEwMTUyODg+XSA/IHgoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1T
UiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAw
MDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAw
MDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMw
MDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAw
MDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYu
DQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMw
MDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCjg2
X3BtdV9lbmFibGUrMHgoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1T
UiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAw
MDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAw
MDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMw
MDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAw
MDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYu
DQoxYjUvMHgyMjcNDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1T
UiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAw
MDEzMDA3Ni4NClsgICAgMy45ODQyNDhdICAoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0
dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRv
IDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1w
dGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgw
MDAwMDAwMDAwMTMwMDc2Lg0KWzxjMTAyNDk0Yz5dID8gcChYRU4pIHRyYXBzLmM6MjU4NDpkMCBE
b21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1
MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFp
biBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3
NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQp2Y2xvY2tfY2xvY2tzb3VyKFhFTikgdHJhcHMuYzoy
NTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAw
MDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6
ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAw
MDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCmNlX3JlYWQrMHhjNS8weGYoWEVOKSB0
cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZy
b20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBz
LmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAw
eDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoy
NTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAw
MDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6
ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAw
MDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBE
b21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1
MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFp
biBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3
NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQo3DQ0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFp
biBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3
NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0
dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRv
IDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1w
dGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgw
MDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQg
V1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAw
MDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1T
UiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAw
MDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAw
MDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMw
MDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAw
MDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYu
DQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMw
MDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihY
RU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAw
MDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikg
dHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBm
cm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFw
cy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20g
MHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6
MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAw
MDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0
OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAw
MDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQpbICAgIDMuOTg0MjQ4XSAgKFhFTikg
dHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBm
cm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFw
cy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20g
MHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6
MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAw
MDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0
OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAw
MDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAg
RG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAw
NTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCls8YzEwMjQ5NGM+XSA/IHAoWEVOKSB0cmFw
cy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20g
MHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6
MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAw
MDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0
OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAw
MDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAg
RG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAw
NTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21h
aW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAw
NzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBh
dHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0
byAweDAwMDAwMDAwMDAxMzAwNzYuDQp2Y2xvY2tfY2xvY2tzb3VyKFhFTikgdHJhcHMuYzoyNTg0
OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAw
MDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAg
RG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAw
NTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21h
aW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAw
NzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBh
dHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0
byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVt
cHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4
MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVk
IFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAw
MDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JN
U1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAw
MDAxMzAwNzYuDQpjZV9yZWFkKzB4YzUvMHhmKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBh
dHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0
byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVt
cHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4
MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVk
IFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAw
MDAwMDAwMTMwMDc2Lg0KNw0NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVk
IFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAw
MDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JN
U1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAw
MDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAw
MDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEz
MDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAw
MDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2
Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBj
MDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQoo
WEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEw
MDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4p
IHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAg
ZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJh
cHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9t
IDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5j
OjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgw
MDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4
NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAw
MDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KWyAgICAzLjk4NDI0OF0gIChYRU4p
IHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAg
ZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJh
cHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9t
IDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5j
OjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgw
MDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4
NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAw
MDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQw
IERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAw
MDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQpbPGMxMDBmN2RmPl0gPyBzKFhFTikgdHJh
cHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9t
IDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5j
OjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgw
MDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4
NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAw
MDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KY2hlZF9jbG9jaysweDkvMChYRU4p
IHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAg
ZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJh
cHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9t
IDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5j
OjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgw
MDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4
NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAw
MDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQw
IERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAw
MDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQp4ZA0NCihYRU4pIHRyYXBzLmM6MjU4NDpk
MCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAw
MDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERv
bWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUz
MDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWlu
IGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2
IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0
ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8g
MHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0
ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAw
MDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBX
Uk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAw
MDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNS
IDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAw
MTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAw
MDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAw
NzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAw
MGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4N
CihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAw
MTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhF
TikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAw
MCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0
cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZy
b20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBz
LmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAw
eDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoy
NTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAw
MDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQpbICAgIDMuOTg0MjQ4XSAgKFhF
TikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAw
MCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQpbPGMxMDUw
ZmQ2Pl0gPyBzKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAw
MDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAw
NzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAw
MGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4N
CihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAw
MTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhF
TikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAw
MCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0
cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZy
b20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCmNoZWRfY2xvY2tf
bG9jYWwoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAw
MGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4N
CihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAw
MTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhF
TikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAw
MCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0
cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZy
b20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBz
LmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAw
eDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoy
NTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAw
MDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6
ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAw
MDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBE
b21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1
MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFp
biBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3
NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0
dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRv
IDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1w
dGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgw
MDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQg
V1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAw
MDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1T
UiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAw
MDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAw
MDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMw
MDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAw
MDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYu
DQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMw
MDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihY
RU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAw
MDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikg
dHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBm
cm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFw
cy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20g
MHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6
MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAw
MDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0
OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAw
MDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAg
RG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAw
NTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21h
aW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAw
NzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBh
dHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0
byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVt
cHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4
…000000000130076.
(XEN) traps.c:2584:d0 Domain attempted WRMSR 00000000c0010000 from 0x0000000000530076 to 0x0000000000130076.
[the (XEN) WRMSR warning above repeats between every fragment of the dom0 backtrace below; repeats omitted]
…+0x10/0x14b
[    3.984248]  [<c1090841>] ? event_sched_in+0x73/0x108
[    3.984248]  [<c1090943>] ? group_sched_in+0x6d/0x10b
[    3.984248]  [<c1050f9d>] ? arch_local_irq_restore+0x6/0x7
[    3.984248]  [<c10512bb>] ? local_clock+0x23/0x2c
[    3.984248]  [<c1091185>] ? __perf_event_enable+0x107/0x144
[    3.984248]  [<c108e497>] ? remote_function+0x2e/0x33
[    3.984248]  [<c105cfad>] ? generic_smp_call_function_single_interrupt+0x97/0xb2
[    3.984248]  [<c100a75c>] ? xen_call_function_single_interrupt+0xa/0x1c
[    3.984248]  [<c1076fe4>] ? handle_irq_event_percpu+0x47/0x158
(XEN) *** Serial input -> Xen (type 'CTRL-a' three times to switch input to DOM0)
[    3.984248]  [<c1078811>] ? irq_get_irq_data+0x5/0x6
[    3.984248]  [<c1078e34>] ? handle_percpu_irq…
KFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAx
MDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQorMHgy
OS8weDM3DQ0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAw
MDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAw
NzYuDQpbICAgIDMuOTg0MjQ4XSAgKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0
ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAw
MDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBX
Uk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAw
MDAwMDEzMDA3Ni4NCls8YzExYzI1OGQ+XSA/IF8oWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWlu
IGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2
IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0
ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8g
MHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0
ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAw
MDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBX
Uk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAw
MDAwMDEzMDA3Ni4NCl94ZW5fZXZ0Y2huX2RvX3UoWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWlu
IGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2
IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0
ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8g
MHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0
ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAw
MDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBX
Uk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAw
MDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNS
IDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAw
MTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAw
MDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAw
NzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAw
MGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4N
CihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAw
MTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhF
TikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAw
MCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0
cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZy
b20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBz
LmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAw
eDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoy
NTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAw
MDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6
ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAw
MDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBE
b21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1
MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFp
biBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3
NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0
dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRv
IDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1w
dGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgw
MDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQg
V1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAw
MDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1T
UiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAw
MDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAw
MDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMw
MDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAw
MDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYu
DQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMw
MDEwMDAwIGZyb20gMHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihY
RU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAw
MDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikg
dHJhcHMuYzoyNTg0OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBm
cm9tIDB4MDAwMDAwMDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFw
cy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20g
MHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6
MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAw
MDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgdHJhcHMuYzoyNTg0
OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwMCBmcm9tIDB4MDAwMDAw
MDAwMDUzMDA3NiB0byAweDAwMDAwMDAwMDAxMzAwNzYuDQooWEVOKSB0cmFwcy5jOjI1ODQ6ZDAg
RG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20gMHgwMDAwMDAwMDAw
NTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCnBjYWxsKzB4MTI2LzB4MWEoWEVOKSB0cmFw
cy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDAwIGZyb20g
MHgwMDAwMDAwMDAwNTMwMDc2IHRvIDB4MDAwMDAwMDAwMDEzMDA3Ni4NCihYRU4pIHRyYXBzLmM6
MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAw
MDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAwMDAwMDAwMTMwMDc2Lg0KKFhFTikgJ1InIHByZXNzZWQg
LT4gcmVib290aW5nIG1hY2hpbmUNCihYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1w
dGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgw
MDAwMDAwMDAwMTMwMDc2Lg0KCShYRU4pIHRyYXBzLmM6MjU4NDpkMCBEb21haW4gYXR0ZW1wdGVk
IFdSTVNSIDAwMDAwMDAwYzAwMTAwMDAgZnJvbSAweDAwMDAwMDAwMDA1MzAwNzYgdG8gMHgwMDAw
MDAwMDAwMTMwMDc2Lg0K
--047d7b66f1cd72413004c7744dc4
Content-Type: application/octet-stream; name="exile-good.log"
Content-Disposition: attachment; filename="exile-good.log"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_h5z6jr9p1

IF9fICBfXyAgICAgICAgICAgIF8gIF8gICAgX19fXyAgICBfX18gICAgICAgICAgICAgIF9fX18g
IA0KIFwgXC8gL19fXyBfIF9fICAgfCB8fCB8ICB8X19fIFwgIC8gXyBcICAgIF8gX18gX19ffF9f
XyBcIA0KICBcICAvLyBfIFwgJ18gXCAgfCB8fCB8XyAgIF9fKSB8fCB8IHwgfF9ffCAnX18vIF9f
fCBfXykgfA0KICAvICBcICBfXy8gfCB8IHwgfF9fICAgX3wgLyBfXy8gfCB8X3wgfF9ffCB8IHwg
KF9fIC8gX18vIA0KIC9fL1xfXF9fX3xffCB8X3wgICAgfF98KF8pX19fX18oXylfX18vICAgfF98
ICBcX19ffF9fX19ffA0KICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgIA0KKFhFTikgWGVuIHZlcnNpb24gNC4yLjAtcmMyICh4ZW51c2VyQHVr
LnhlbnNvdXJjZS5jb20pIChnY2MgKFVidW50dS9MaW5hcm8gNC42LjMtMXVidW50dTUpIDQuNi4z
KSBUdWUgQXVnIDE0IDE4OjU5OjQ5IEJTVCAyMDEyDQooWEVOKSBMYXRlc3QgQ2hhbmdlU2V0OiBU
dWUgQXVnIDE0IDE4OjQxOjUzIDIwMTIgKzAxMDAgMjU3NTA6ZDhkZjExNTJlYjNiDQooWEVOKSBD
b25zb2xlIG91dHB1dCBpcyBzeW5jaHJvbm91cy4NCihYRU4pIEJvb3Rsb2FkZXI6IEdSVUIgMS45
OS0yMi4xDQooWEVOKSBDb21tYW5kIGxpbmU6IHBsYWNlaG9sZGVyIHdhdGNoZG9nIGNwdWluZm8g
Y29tMT0xMTUyMDAsOG4xIGNvbnNvbGU9Y29tMSx0dHkgc3luY19jb25zb2xlIGNvbnNvbGVfdG9f
cmluZw0KKFhFTikgVmlkZW8gaW5mb3JtYXRpb246DQooWEVOKSAgVkdBIGlzIHRleHQgbW9kZSA4
MHgyNSwgZm9udCA4eDE2DQooWEVOKSAgVkJFL0REQyBtZXRob2RzOiBWMjsgRURJRCB0cmFuc2Zl
ciB0aW1lOiAyIHNlY29uZHMNCihYRU4pIERpc2MgaW5mb3JtYXRpb246DQooWEVOKSAgRm91bmQg
MiBNQlIgc2lnbmF0dXJlcw0KKFhFTikgIEZvdW5kIDIgRUREIGluZm9ybWF0aW9uIHN0cnVjdHVy
ZXMNCihYRU4pIFhlbi1lODIwIFJBTSBtYXA6DQooWEVOKSAgMDAwMDAwMDAwMDAwMDAwMCAtIDAw
MDAwMDAwMDAwOWM0MDAgKHVzYWJsZSkNCihYRU4pICAwMDAwMDAwMDAwMDljNDAwIC0gMDAwMDAw
MDAwMDBhMDAwMCAocmVzZXJ2ZWQpDQooWEVOKSAgMDAwMDAwMDAwMDBjZTAwMCAtIDAwMDAwMDAw
MDAxMDAwMDAgKHJlc2VydmVkKQ0KKFhFTikgIDAwMDAwMDAwMDAxMDAwMDAgLSAwMDAwMDAwMGNm
ZWUwMDAwICh1c2FibGUpDQooWEVOKSAgMDAwMDAwMDBjZmVlMDAwMCAtIDAwMDAwMDAwY2ZlZTUw
MDAgKEFDUEkgZGF0YSkNCihYRU4pICAwMDAwMDAwMGNmZWU1MDAwIC0gMDAwMDAwMDBjZmVmMTAw
MCAoQUNQSSBOVlMpDQooWEVOKSAgMDAwMDAwMDBjZmVmMTAwMCAtIDAwMDAwMDAwZDAwMDAwMDAg
KHJlc2VydmVkKQ0KKFhFTikgIDAwMDAwMDAwZmVjMDAwMDAgLSAwMDAwMDAwMGZlYzAzMDAwIChy
ZXNlcnZlZCkNCihYRU4pICAwMDAwMDAwMGZlZTAwMDAwIC0gMDAwMDAwMDBmZWUwMTAwMCAocmVz
ZXJ2ZWQpDQooWEVOKSAgMDAwMDAwMDBmZmY4MDAwMCAtIDAwMDAwMDAxMDAwMDAwMDAgKHJlc2Vy
dmVkKQ0KKFhFTikgIDAwMDAwMDAxMDAwMDAwMDAgLSAwMDAwMDAwMTMwMDAwMDAwICh1c2FibGUp
DQooWEVOKSBBQ1BJOiBSU0RQIDAwMEY4MDgwLCAwMDI0IChyMiBQVExURCApDQooWEVOKSBBQ1BJ
OiBYU0RUIENGRUUwMUZFLCAwMDVDIChyMSBCUkNNICAgRVhQTE9TTiAgIDYwNDAwMDAgUFRMICAg
MjAwMDAwMSkNCihYRU4pIEFDUEk6IEZBQ1AgQ0ZFRTAyQ0UsIDAwRjQgKHIzIEJSQ00gICBFWFBM
T1NOICAgNjA0MDAwMCBNU0ZUICAyMDAwMDAxKQ0KKFhFTikgQUNQSSBXYXJuaW5nICh0YmZhZHQt
MDQ0NCk6IE9wdGlvbmFsIGZpZWxkICJQbTJDb250cm9sQmxvY2siIGhhcyB6ZXJvIGFkZHJlc3Mg
b3IgbGVuZ3RoOiAwMDAwMDAwMDAwMDAwMDAwL0MgWzIwMDcwMTI2XQ0KKFhFTikgQUNQSTogRFNE
VCBDRkVFMDNDMiwgNDk0OSAocjIgQlJDTSAgIEVYUExPU04gICA2MDQwMDAwIE1TRlQgIDMwMDAw
MDApDQooWEVOKSBBQ1BJOiBGQUNTIENGRUYwRkMwLCAwMDQwDQooWEVOKSBBQ1BJOiBUQ1BBIENG
RUU0RDBCLCAwMDMyIChyMSBCUkNNICAgRVhQTE9TTiAgIDYwNDAwMDAgUFRMICAyMDAwMDAwMSkN
CihYRU4pIEFDUEk6IFNSQVQgQ0ZFRTREM0QsIDAxMjggKHIxIEFNRCAgICBGQU1fRl8xMCAgNjA0
MDAwMCBBTUQgICAgICAgICAxKQ0KKFhFTikgQUNQSTogSFBFVCBDRkVFNEU2NSwgMDAzOCAocjEg
QlJDTSAgIEVYUExPU04gICA2MDQwMDAwIEJSQ00gIDIwMDAwMDEpDQooWEVOKSBBQ1BJOiBTU0RU
IENGRUU0RTlELCAwMDQ5IChyMSBCUkNNICAgUFJUMCAgICAgIDYwNDAwMDAgQlJDTSAgMjAwMDAw
MSkNCihYRU4pIEFDUEk6IFNQQ1IgQ0ZFRTRFRTYsIDAwNTAgKHIxIFBUTFREICAkVUNSVEJMJCAg
NjA0MDAwMCBQVEwgICAgICAgICAxKQ0KKFhFTikgQUNQSTogQVBJQyBDRkVFNEYzNiwgMDBDQSAo
cjEgQlJDTSAgIEVYUExPU04gICA2MDQwMDAwIFBUTCAgIDIwMDAwMDEpDQooWEVOKSBTeXN0ZW0g
UkFNOiA0MDk0TUIgKDQxOTI3NTJrQikNCihYRU4pIFNSQVQ6IFBYTSAwIC0+IEFQSUMgMCAtPiBO
b2RlIDANCihYRU4pIFNSQVQ6IFBYTSAwIC0+IEFQSUMgMSAtPiBOb2RlIDANCihYRU4pIFNSQVQ6
IFBYTSAwIC0+IEFQSUMgMiAtPiBOb2RlIDANCihYRU4pIFNSQVQ6IFBYTSAwIC0+IEFQSUMgMyAt
PiBOb2RlIDANCihYRU4pIFNSQVQ6IFBYTSAxIC0+IEFQSUMgNCAtPiBOb2RlIDENCihYRU4pIFNS
QVQ6IFBYTSAxIC0+IEFQSUMgNSAtPiBOb2RlIDENCihYRU4pIFNSQVQ6IFBYTSAxIC0+IEFQSUMg
NiAtPiBOb2RlIDENCihYRU4pIFNSQVQ6IFBYTSAxIC0+IEFQSUMgNyAtPiBOb2RlIDENCihYRU4p
IFNSQVQ6IE5vZGUgMCBQWE0gMCAwLWEwMDAwDQooWEVOKSBTUkFUOiBOb2RlIDAgUFhNIDAgMTAw
MDAwLWQwMDAwMDAwDQooWEVOKSBTUkFUOiBOb2RlIDAgUFhNIDAgMTAwMDAwMDAwLTEzMDAwMDAw
MA0KKFhFTikgTlVNQTogQWxsb2NhdGVkIG1lbW5vZGVtYXAgZnJvbSAxMmI3ZTAwMDAgLSAxMmI3
ZTIwMDANCihYRU4pIE5VTUE6IFVzaW5nIDggZm9yIHRoZSBoYXNoIHNoaWZ0Lg0KKFhFTikgU1JB
VDogTm9kZSAxIGhhcyBubyBtZW1vcnkuIEJJT1MgQnVnIG9yIG1pcy1jb25maWd1cmVkIGhhcmR3
YXJlPw0KKFhFTikgRG9tYWluIGhlYXAgaW5pdGlhbGlzZWQgRE1BIHdpZHRoIDMwIGJpdHMNCihY
RU4pIGZvdW5kIFNNUCBNUC10YWJsZSBhdCAwMDBmODBiMA0KKFhFTikgRE1JIHByZXNlbnQuDQoo
WEVOKSBVc2luZyBBUElDIGRyaXZlciBkZWZhdWx0DQooWEVOKSBBQ1BJOiBQTS1UaW1lciBJTyBQ
b3J0OiAweDUwOA0KKFhFTikgQUNQSTogQUNQSSBTTEVFUCBJTkZPOiBwbTF4X2NudFs1NDQsNTA0
XSwgcG0xeF9ldnRbNTAwLDU0MF0NCihYRU4pIEFDUEk6ICAgICAgICAgICAgICAgICAgd2FrZXVw
X3ZlY1tjZmVmMGZjY10sIHZlY19zaXplWzIwXQ0KKFhFTikgQUNQSTogTG9jYWwgQVBJQyBhZGRy
ZXNzIDB4ZmVlMDAwMDANCihYRU4pIEFDUEk6IExBUElDIChhY3BpX2lkWzB4MDBdIGxhcGljX2lk
WzB4MDBdIGVuYWJsZWQpDQooWEVOKSBQcm9jZXNzb3IgIzAgMDo0IEFQSUMgdmVyc2lvbiAxNg0K
KFhFTikgQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgwMV0gbGFwaWNfaWRbMHgwMV0gZW5hYmxlZCkN
CihYRU4pIFByb2Nlc3NvciAjMSAwOjQgQVBJQyB2ZXJzaW9uIDE2DQooWEVOKSBBQ1BJOiBMQVBJ
QyAoYWNwaV9pZFsweDAyXSBsYXBpY19pZFsweDAyXSBlbmFibGVkKQ0KKFhFTikgUHJvY2Vzc29y
ICMyIDA6NCBBUElDIHZlcnNpb24gMTYNCihYRU4pIEFDUEk6IExBUElDIChhY3BpX2lkWzB4MDNd
IGxhcGljX2lkWzB4MDNdIGVuYWJsZWQpDQooWEVOKSBQcm9jZXNzb3IgIzMgMDo0IEFQSUMgdmVy
c2lvbiAxNg0KKFhFTikgQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgwNF0gbGFwaWNfaWRbMHgwNF0g
ZW5hYmxlZCkNCihYRU4pIFByb2Nlc3NvciAjNCAwOjQgQVBJQyB2ZXJzaW9uIDE2DQooWEVOKSBB
Q1BJOiBMQVBJQyAoYWNwaV9pZFsweDA1XSBsYXBpY19pZFsweDA1XSBlbmFibGVkKQ0KKFhFTikg
UHJvY2Vzc29yICM1IDA6NCBBUElDIHZlcnNpb24gMTYNCihYRU4pIEFDUEk6IExBUElDIChhY3Bp
X2lkWzB4MDZdIGxhcGljX2lkWzB4MDZdIGVuYWJsZWQpDQooWEVOKSBQcm9jZXNzb3IgIzYgMDo0
IEFQSUMgdmVyc2lvbiAxNg0KKFhFTikgQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgwN10gbGFwaWNf
aWRbMHgwN10gZW5hYmxlZCkNCihYRU4pIFByb2Nlc3NvciAjNyAwOjQgQVBJQyB2ZXJzaW9uIDE2
DQooWEVOKSBBQ1BJOiBMQVBJQ19OTUkgKGFjcGlfaWRbMHgwMF0gaGlnaCBlZGdlIGxpbnRbMHgx
XSkNCihYRU4pIEFDUEk6IExBUElDX05NSSAoYWNwaV9pZFsweDAxXSBoaWdoIGVkZ2UgbGludFsw
eDFdKQ0KKFhFTikgQUNQSTogTEFQSUNfTk1JIChhY3BpX2lkWzB4MDJdIGhpZ2ggZWRnZSBsaW50
WzB4MV0pDQooWEVOKSBBQ1BJOiBMQVBJQ19OTUkgKGFjcGlfaWRbMHgwM10gaGlnaCBlZGdlIGxp
bnRbMHgxXSkNCihYRU4pIEFDUEk6IExBUElDX05NSSAoYWNwaV9pZFsweDA0XSBoaWdoIGVkZ2Ug
bGludFsweDFdKQ0KKFhFTikgQUNQSTogTEFQSUNfTk1JIChhY3BpX2lkWzB4MDVdIGhpZ2ggZWRn
ZSBsaW50WzB4MV0pDQooWEVOKSBBQ1BJOiBMQVBJQ19OTUkgKGFjcGlfaWRbMHgwNl0gaGlnaCBl
ZGdlIGxpbnRbMHgxXSkNCihYRU4pIEFDUEk6IExBUElDX05NSSAoYWNwaV9pZFsweDA3XSBoaWdo
IGVkZ2UgbGludFsweDFdKQ0KKFhFTikgQUNQSTogSU9BUElDIChpZFsweDA4XSBhZGRyZXNzWzB4
ZmVjMDAwMDBdIGdzaV9iYXNlWzBdKQ0KKFhFTikgSU9BUElDWzBdOiBhcGljX2lkIDgsIHZlcnNp
b24gMTcsIGFkZHJlc3MgMHhmZWMwMDAwMCwgR1NJIDAtMTUNCihYRU4pIEFDUEk6IElPQVBJQyAo
aWRbMHgwOV0gYWRkcmVzc1sweGZlYzAxMDAwXSBnc2lfYmFzZVsxNl0pDQooWEVOKSBJT0FQSUNb
MV06IGFwaWNfaWQgOSwgdmVyc2lvbiAxNywgYWRkcmVzcyAweGZlYzAxMDAwLCBHU0kgMTYtMzEN
CihYRU4pIEFDUEk6IElPQVBJQyAoaWRbMHgwYV0gYWRkcmVzc1sweGZlYzAyMDAwXSBnc2lfYmFz
ZVszMl0pDQooWEVOKSBJT0FQSUNbMl06IGFwaWNfaWQgMTAsIHZlcnNpb24gMTcsIGFkZHJlc3Mg
MHhmZWMwMjAwMCwgR1NJIDMyLTQ3DQooWEVOKSBBQ1BJOiBJTlRfU1JDX09WUiAoYnVzIDAgYnVz
X2lycSAwIGdsb2JhbF9pcnEgMiBoaWdoIGVkZ2UpDQooWEVOKSBBQ1BJOiBJUlEwIHVzZWQgYnkg
b3ZlcnJpZGUuDQooWEVOKSBBQ1BJOiBJUlEyIHVzZWQgYnkgb3ZlcnJpZGUuDQooWEVOKSBFbmFi
bGluZyBBUElDIG1vZGU6ICBGbGF0LiAgVXNpbmcgMyBJL08gQVBJQ3MNCihYRU4pIEFDUEk6IEhQ
RVQgaWQ6IDB4MTE2NmEyMDEgYmFzZTogMHhmZWQwMDAwMA0KKFhFTikgVGFibGUgaXMgbm90IGZv
dW5kIQ0KKFhFTikgVXNpbmcgQUNQSSAoTUFEVCkgZm9yIFNNUCBjb25maWd1cmF0aW9uIGluZm9y
bWF0aW9uDQooWEVOKSBTTVA6IEFsbG93aW5nIDggQ1BVcyAoMCBob3RwbHVnIENQVXMpDQooWEVO
KSBJUlEgbGltaXRzOiA0OCBHU0ksIDE1MDQgTVNJL01TSS1YDQooWEVOKSBVc2luZyBzY2hlZHVs
ZXI6IFNNUCBDcmVkaXQgU2NoZWR1bGVyIChjcmVkaXQpDQooWEVOKSBJbml0aWFsaXppbmcgQ1BV
IzANCihYRU4pIERldGVjdGVkIDE5OTUuMDIzIE1IeiBwcm9jZXNzb3IuDQooWEVOKSBJbml0aW5n
IG1lbW9yeSBzaGFyaW5nLg0KKFhFTikgQ1BVOiBMMSBJIGNhY2hlIDY0SyAoNjQgYnl0ZXMvbGlu
ZSksIEQgY2FjaGUgNjRLICg2NCBieXRlcy9saW5lKQ0KKFhFTikgQ1BVOiBMMiBDYWNoZTogNTEy
SyAoNjQgYnl0ZXMvbGluZSkNCihYRU4pIENQVSAwKDQpIC0+IFByb2Nlc3NvciAwLCBDb3JlIDAN
CihYRU4pIEFNRCBGYW0xMGggbWFjaGluZSBjaGVjayByZXBvcnRpbmcgZW5hYmxlZA0KKFhFTikg
QU1ELVZpOiBJT01NVSBub3QgZm91bmQhDQooWEVOKSBJL08gdmlydHVhbGlzYXRpb24gZGlzYWJs
ZWQNCihYRU4pIENQVTA6IEFNRCBFbmdpbmVlcmluZyBTYW1wbGUgc3RlcHBpbmcgMDANCihYRU4p
IEVOQUJMSU5HIElPLUFQSUMgSVJRcw0KKFhFTikgIC0+IFVzaW5nIG5ldyBBQ0sgbWV0aG9kDQoo
WEVOKSAuLlRJTUVSOiB2ZWN0b3I9MHhGMCBhcGljMT0wIHBpbjE9MiBhcGljMj0tMSBwaW4yPS0x
DQooWEVOKSBQbGF0Zm9ybSB0aW1lciBpcyAxNC4zMThNSHogSFBFVA0KKFhFTikgQWxsb2NhdGVk
IGNvbnNvbGUgcmluZyBvZiA2NCBLaUIuDQooWEVOKSBIVk06IEFTSURzIGVuYWJsZWQuDQooWEVO
KSBTVk06IFN1cHBvcnRlZCBhZHZhbmNlZCBmZWF0dXJlczoNCihYRU4pICAtIE5lc3RlZCBQYWdl
IFRhYmxlcyAoTlBUKQ0KKFhFTikgIC0gTGFzdCBCcmFuY2ggUmVjb3JkIChMQlIpIFZpcnR1YWxp
c2F0aW9uDQooWEVOKSAgLSBOZXh0LVJJUCBTYXZlZCBvbiAjVk1FWElUDQooWEVOKSBIVk06IFNW
TSBlbmFibGVkDQooWEVOKSBIVk06IEhhcmR3YXJlIEFzc2lzdGVkIFBhZ2luZyAoSEFQKSBkZXRl
Y3RlZA0KKFhFTikgSFZNOiBIQVAgcGFnZSBzaXplczogNGtCLCAyTUIsIDFHQg0KKFhFTikgQ1BV
IDAgQVBJQyAwIC0+IE5vZGUgMA0KKFhFTikgQ1BVIDEgQVBJQyAxIC0+IE5vZGUgMA0KKFhFTikg
Qm9vdGluZyBwcm9jZXNzb3IgMS8xIGVpcCA4YzAwMA0KKFhFTikgSW5pdGlhbGl6aW5nIENQVSMx
DQooWEVOKSBDUFU6IEwxIEkgY2FjaGUgNjRLICg2NCBieXRlcy9saW5lKSwgRCBjYWNoZSA2NEsg
KDY0IGJ5dGVzL2xpbmUpDQooWEVOKSBDUFU6IEwyIENhY2hlOiA1MTJLICg2NCBieXRlcy9saW5l
KQ0KKFhFTikgQ1BVIDEoNCkgLT4gUHJvY2Vzc29yIDAsIENvcmUgMQ0KKFhFTikgQ1BVMTogQU1E
IEVuZ2luZWVyaW5nIFNhbXBsZSBzdGVwcGluZyAwMA0KKFhFTikgbWljcm9jb2RlOiBjb2xsZWN0
X2NwdV9pbmZvOiBwYXRjaF9pZD0weDEwMDAwNjcNCihYRU4pIENQVSAyIEFQSUMgMiAtPiBOb2Rl
IDANCihYRU4pIEJvb3RpbmcgcHJvY2Vzc29yIDIvMiBlaXAgOGMwMDANCihYRU4pIEluaXRpYWxp
emluZyBDUFUjMg0KKFhFTikgQ1BVOiBMMSBJIGNhY2hlIDY0SyAoNjQgYnl0ZXMvbGluZSksIEQg
Y2FjaGUgNjRLICg2NCBieXRlcy9saW5lKQ0KKFhFTikgQ1BVOiBMMiBDYWNoZTogNTEySyAoNjQg
Ynl0ZXMvbGluZSkNCihYRU4pIENQVSAyKDQpIC0+IFByb2Nlc3NvciAwLCBDb3JlIDINCihYRU4p
IENQVTI6IEFNRCBFbmdpbmVlcmluZyBTYW1wbGUgc3RlcHBpbmcgMDANCihYRU4pIG1pY3JvY29k
ZTogY29sbGVjdF9jcHVfaW5mbzogcGF0Y2hfaWQ9MHgxMDAwMDY3DQooWEVOKSBDUFUgMyBBUElD
IDMgLT4gTm9kZSAwDQooWEVOKSBCb290aW5nIHByb2Nlc3NvciAzLzMgZWlwIDhjMDAwDQooWEVO
KSBJbml0aWFsaXppbmcgQ1BVIzMNCihYRU4pIENQVTogTDEgSSBjYWNoZSA2NEsgKDY0IGJ5dGVz
L2xpbmUpLCBEIGNhY2hlIDY0SyAoNjQgYnl0ZXMvbGluZSkNCihYRU4pIENQVTogTDIgQ2FjaGU6
IDUxMksgKDY0IGJ5dGVzL2xpbmUpDQooWEVOKSBDUFUgMyg0KSAtPiBQcm9jZXNzb3IgMCwgQ29y
ZSAzDQooWEVOKSBDUFUzOiBBTUQgRW5naW5lZXJpbmcgU2FtcGxlIHN0ZXBwaW5nIDAwDQooWEVO
KSBtaWNyb2NvZGU6IGNvbGxlY3RfY3B1X2luZm86IHBhdGNoX2lkPTB4MTAwMDA2Nw0KKFhFTikg
Q1BVIDQgQVBJQyA0IC0+IE5vZGUgMQ0KKFhFTikgQm9vdGluZyBwcm9jZXNzb3IgNC80IGVpcCA4
YzAwMA0KKFhFTikgSW5pdGlhbGl6aW5nIENQVSM0DQooWEVOKSBDUFU6IEwxIEkgY2FjaGUgNjRL
ICg2NCBieXRlcy9saW5lKSwgRCBjYWNoZSA2NEsgKDY0IGJ5dGVzL2xpbmUpDQooWEVOKSBDUFU6
IEwyIENhY2hlOiA1MTJLICg2NCBieXRlcy9saW5lKQ0KKFhFTikgQ1BVIDQoNCkgLT4gUHJvY2Vz
c29yIDEsIENvcmUgMA0KKFhFTikgQ1BVNDogQU1EIEVuZ2luZWVyaW5nIFNhbXBsZSBzdGVwcGlu
ZyAwMA0KKFhFTikgbWljcm9jb2RlOiBjb2xsZWN0X2NwdV9pbmZvOiBwYXRjaF9pZD0weDEwMDAw
NjcNCihYRU4pIENQVSA1IEFQSUMgNSAtPiBOb2RlIDENCihYRU4pIEJvb3RpbmcgcHJvY2Vzc29y
IDUvNSBlaXAgOGMwMDANCihYRU4pIEluaXRpYWxpemluZyBDUFUjNQ0KKFhFTikgQ1BVOiBMMSBJ
IGNhY2hlIDY0SyAoNjQgYnl0ZXMvbGluZSksIEQgY2FjaGUgNjRLICg2NCBieXRlcy9saW5lKQ0K
KFhFTikgQ1BVOiBMMiBDYWNoZTogNTEySyAoNjQgYnl0ZXMvbGluZSkNCihYRU4pIENQVSA1KDQp
IC0+IFByb2Nlc3NvciAxLCBDb3JlIDENCihYRU4pIENQVTU6IEFNRCBFbmdpbmVlcmluZyBTYW1w
bGUgc3RlcHBpbmcgMDANCihYRU4pIG1pY3JvY29kZTogY29sbGVjdF9jcHVfaW5mbzogcGF0Y2hf
aWQ9MHgxMDAwMDY3DQooWEVOKSBDUFUgNiBBUElDIDYgLT4gTm9kZSAxDQooWEVOKSBCb290aW5n
IHByb2Nlc3NvciA2LzYgZWlwIDhjMDAwDQooWEVOKSBJbml0aWFsaXppbmcgQ1BVIzYNCihYRU4p
IENQVTogTDEgSSBjYWNoZSA2NEsgKDY0IGJ5dGVzL2xpbmUpLCBEIGNhY2hlIDY0SyAoNjQgYnl0
ZXMvbGluZSkNCihYRU4pIENQVTogTDIgQ2FjaGU6IDUxMksgKDY0IGJ5dGVzL2xpbmUpDQooWEVO
KSBDUFUgNig0KSAtPiBQcm9jZXNzb3IgMSwgQ29yZSAyDQooWEVOKSBDUFU2OiBBTUQgRW5naW5l
ZXJpbmcgU2FtcGxlIHN0ZXBwaW5nIDAwDQooWEVOKSBtaWNyb2NvZGU6IGNvbGxlY3RfY3B1X2lu
Zm86IHBhdGNoX2lkPTB4MTAwMDA2Nw0KKFhFTikgQ1BVIDcgQVBJQyA3IC0+IE5vZGUgMQ0KKFhF
TikgQm9vdGluZyBwcm9jZXNzb3IgNy83IGVpcCA4YzAwMA0KKFhFTikgSW5pdGlhbGl6aW5nIENQ
VSM3DQooWEVOKSBDUFU6IEwxIEkgY2FjaGUgNjRLICg2NCBieXRlcy9saW5lKSwgRCBjYWNoZSA2
NEsgKDY0IGJ5dGVzL2xpbmUpDQooWEVOKSBDUFU6IEwyIENhY2hlOiA1MTJLICg2NCBieXRlcy9s
aW5lKQ0KKFhFTikgQ1BVIDcoNCkgLT4gUHJvY2Vzc29yIDEsIENvcmUgMw0KKFhFTikgQ1BVNzog
QU1EIEVuZ2luZWVyaW5nIFNhbXBsZSBzdGVwcGluZyAwMA0KKFhFTikgbWljcm9jb2RlOiBjb2xs
ZWN0X2NwdV9pbmZvOiBwYXRjaF9pZD0weDEwMDAwNjcNCihYRU4pIEJyb3VnaHQgdXAgOCBDUFVz
DQooWEVOKSBUZXN0aW5nIE5NSSB3YXRjaGRvZyAtLS0gQ1BVIzAgb2theS4gQ1BVIzEgb2theS4g
Q1BVIzIgb2theS4gQ1BVIzMgb2theS4gQ1BVIzQgb2theS4gQ1BVIzUgb2theS4gQ1BVIzYgb2th
eS4gQ1BVIzcgb2theS4gDQooWEVOKSBIUEVUOiAzIHRpbWVycyAoMCB3aWxsIGJlIHVzZWQgZm9y
IGJyb2FkY2FzdCkNCihYRU4pIEFDUEkgc2xlZXAgbW9kZXM6IFMzDQooWEVOKSBNQ0E6IFVzZSBo
dyB0aHJlc2hvbGRpbmcgdG8gYWRqdXN0IHBvbGxpbmcgZnJlcXVlbmN5DQooWEVOKSBtY2hlY2tf
cG9sbDogTWFjaGluZSBjaGVjayBwb2xsaW5nIHRpbWVyIHN0YXJ0ZWQuDQooWEVOKSBYZW5vcHJv
ZmlsZTogRmFpbGVkIHRvIHNldHVwIElCUyBMVlQgb2Zmc2V0LCBJQlNDVEwgPSAweGZmZmZmZmZm
DQooWEVOKSAqKiogTE9BRElORyBET01BSU4gMCAqKioNCihYRU4pIGVsZl9wYXJzZV9iaW5hcnk6
IHBoZHI6IHBhZGRyPTB4MTAwMDAwMCBtZW1zej0weDY3YjAwMA0KKFhFTikgZWxmX3BhcnNlX2Jp
bmFyeTogcGhkcjogcGFkZHI9MHgxNjdiMDAwIG1lbXN6PTB4MzFiMDAwDQooWEVOKSBlbGZfcGFy
c2VfYmluYXJ5OiBtZW1vcnk6IDB4MTAwMDAwMCAtPiAweDE5OTYwMDANCihYRU4pIGVsZl94ZW5f
cGFyc2Vfbm90ZTogR1VFU1RfT1MgPSAibGludXgiDQooWEVOKSBlbGZfeGVuX3BhcnNlX25vdGU6
IEdVRVNUX1ZFUlNJT04gPSAiMi42Ig0KKFhFTikgZWxmX3hlbl9wYXJzZV9ub3RlOiBYRU5fVkVS
U0lPTiA9ICJ4ZW4tMy4wIg0KKFhFTikgZWxmX3hlbl9wYXJzZV9ub3RlOiBWSVJUX0JBU0UgPSAw
eGMwMDAwMDAwDQooWEVOKSBlbGZfeGVuX3BhcnNlX25vdGU6IEVOVFJZID0gMHhjMTZmNTAwMA0K
KFhFTikgZWxmX3hlbl9wYXJzZV9ub3RlOiBIWVBFUkNBTExfUEFHRSA9IDB4YzEwMDIwMDANCihY
RU4pIGVsZl94ZW5fcGFyc2Vfbm90ZTogRkVBVFVSRVMgPSAiIXdyaXRhYmxlX3BhZ2VfdGFibGVz
fHBhZV9wZ2Rpcl9hYm92ZV80Z2IiDQooWEVOKSBlbGZfeGVuX3BhcnNlX25vdGU6IFBBRV9NT0RF
ID0gInllcyINCihYRU4pIGVsZl94ZW5fcGFyc2Vfbm90ZTogTE9BREVSID0gImdlbmVyaWMiDQoo
WEVOKSBlbGZfeGVuX3BhcnNlX25vdGU6IHVua25vd24geGVuIGVsZiBub3RlICgweGQpDQooWEVO
KSBlbGZfeGVuX3BhcnNlX25vdGU6IFNVU1BFTkRfQ0FOQ0VMID0gMHgxDQooWEVOKSBlbGZfeGVu
X3BhcnNlX25vdGU6IEhWX1NUQVJUX0xPVyA9IDB4ZjU4MDAwMDANCihYRU4pIGVsZl94ZW5fcGFy
c2Vfbm90ZTogUEFERFJfT0ZGU0VUID0gMHgwDQooWEVOKSBlbGZfeGVuX2FkZHJfY2FsY19jaGVj
azogYWRkcmVzc2VzOg0KKFhFTikgICAgIHZpcnRfYmFzZSAgICAgICAgPSAweGMwMDAwMDAwDQoo
WEVOKSAgICAgZWxmX3BhZGRyX29mZnNldCA9IDB4MA0KKFhFTikgICAgIHZpcnRfb2Zmc2V0ICAg
ICAgPSAweGMwMDAwMDAwDQooWEVOKSAgICAgdmlydF9rc3RhcnQgICAgICA9IDB4YzEwMDAwMDAN
CihYRU4pICAgICB2aXJ0X2tlbmQgICAgICAgID0gMHhjMTk5NjAwMA0KKFhFTikgICAgIHZpcnRf
ZW50cnkgICAgICAgPSAweGMxNmY1MDAwDQooWEVOKSAgICAgcDJtX2Jhc2UgICAgICAgICA9IDB4
ZmZmZmZmZmZmZmZmZmZmZg0KKFhFTikgIFhlbiAga2VybmVsOiA2NC1iaXQsIGxzYiwgY29tcGF0
MzINCihYRU4pICBEb20wIGtlcm5lbDogMzItYml0LCBQQUUsIGxzYiwgcGFkZHIgMHgxMDAwMDAw
IC0+IDB4MTk5NjAwMA0KKFhFTikgUEhZU0lDQUwgTUVNT1JZIEFSUkFOR0VNRU5UOg0KKFhFTikg
IERvbTAgYWxsb2MuOiAgIDAwMDAwMDAxMjQwMDAwMDAtPjAwMDAwMDAxMjgwMDAwMDAgKDk3MDQ1
MSBwYWdlcyB0byBiZSBhbGxvY2F0ZWQpDQooWEVOKSAgSW5pdC4gcmFtZGlzazogMDAwMDAwMDEy
YzQ1MjAwMC0+MDAwMDAwMDEyZmZmZmEwMA0KKFhFTikgVklSVFVBTCBNRU1PUlkgQVJSQU5HRU1F
TlQ6DQooWEVOKSAgTG9hZGVkIGtlcm5lbDogMDAwMDAwMDBjMTAwMDAwMC0+MDAwMDAwMDBjMTk5
NjAwMA0KKFhFTikgIEluaXQuIHJhbWRpc2s6IDAwMDAwMDAwYzE5OTYwMDAtPjAwMDAwMDAwYzU1
NDNhMDANCihYRU4pICBQaHlzLU1hY2ggbWFwOiAwMDAwMDAwMGM1NTQ0MDAwLT4wMDAwMDAwMGM1
OTE2YTA0DQooWEVOKSAgU3RhcnQgaW5mbzogICAgMDAwMDAwMDBjNTkxNzAwMC0+MDAwMDAwMDBj
NTkxNzRiNA0KKFhFTikgIFBhZ2UgdGFibGVzOiAgIDAwMDAwMDAwYzU5MTgwMDAtPjAwMDAwMDAw
YzU5NGMwMDANCihYRU4pICBCb290IHN0YWNrOiAgICAwMDAwMDAwMGM1OTRjMDAwLT4wMDAwMDAw
MGM1OTRkMDAwDQooWEVOKSAgVE9UQUw6ICAgICAgICAgMDAwMDAwMDBjMDAwMDAwMC0+MDAwMDAw
MDBjNWMwMDAwMA0KKFhFTikgIEVOVFJZIEFERFJFU1M6IDAwMDAwMDAwYzE2ZjUwMDANCihYRU4p
IERvbTAgaGFzIG1heGltdW0gOCBWQ1BVcw0KKFhFTikgZWxmX2xvYWRfYmluYXJ5OiBwaGRyIDAg
YXQgMHgwMDAwMDAwMGMxMDAwMDAwIC0+IDB4MDAwMDAwMDBjMTY3YjAwMA0KKFhFTikgZWxmX2xv
YWRfYmluYXJ5OiBwaGRyIDEgYXQgMHgwMDAwMDAwMGMxNjdiMDAwIC0+IDB4MDAwMDAwMDBjMTc3
NDAwMA0KKFhFTikgU2NydWJiaW5nIEZyZWUgUkFNOiAuZG9uZS4NCihYRU4pIEluaXRpYWwgbG93
IG1lbW9yeSB2aXJxIHRocmVzaG9sZCBzZXQgYXQgMHg0MDAwIHBhZ2VzLg0KKFhFTikgU3RkLiBM
b2dsZXZlbDogQWxsDQooWEVOKSBHdWVzdCBMb2dsZXZlbDogQWxsDQooWEVOKSAqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqDQooWEVOKSAqKioqKioqIFdBUk5J
Tkc6IENPTlNPTEUgT1VUUFVUIElTIFNZTkNIUk9OT1VTDQooWEVOKSAqKioqKioqIFRoaXMgb3B0
aW9uIGlzIGludGVuZGVkIHRvIGFpZCBkZWJ1Z2dpbmcgb2YgWGVuIGJ5IGVuc3VyaW5nDQooWEVO
KSAqKioqKioqIHRoYXQgYWxsIG91dHB1dCBpcyBzeW5jaHJvbm91c2x5IGRlbGl2ZXJlZCBvbiB0
aGUgc2VyaWFsIGxpbmUuDQooWEVOKSAqKioqKioqIEhvd2V2ZXIgaXQgY2FuIGludHJvZHVjZSBT
SUdOSUZJQ0FOVCBsYXRlbmNpZXMgYW5kIGFmZmVjdA0KKFhFTikgKioqKioqKiB0aW1la2VlcGlu
Zy4gSXQgaXMgTk9UIHJlY29tbWVuZGVkIGZvciBwcm9kdWN0aW9uIHVzZSENCihYRU4pICoqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioNCihYRU4pIDMuLi4gMi4u
LiAxLi4uIA0KKFhFTikgKioqIFNlcmlhbCBpbnB1dCAtPiBET00wICh0eXBlICdDVFJMLWEnIHRo
cmVlIHRpbWVzIHRvIHN3aXRjaCBpbnB1dCB0byBYZW4pDQooWEVOKSBGcmVlZCAyNDRrQiBpbml0
IG1lbW9yeS4NCm1hcHBpbmcga2VybmVsIGludG8gcGh5c2ljYWwgbWVtb3J5DQpYZW46IHNldHVw
IElTQSBpZGVudGl0eSBtYXBzDQphYm91dCB0byBnZXQgc3RhcnRlZC4uLg0KWyAgICAwLjAwMDAw
MF0gUmVzZXJ2aW5nIHZpcnR1YWwgYWRkcmVzcyBzcGFjZSBhYm92ZSAweGZmODAwMDAwDQpbICAg
IDAuMDAwMDAwXSBJbml0aWFsaXppbmcgY2dyb3VwIHN1YnN5cyBjcHVzZXQNClsgICAgMC4wMDAw
MDBdIEluaXRpYWxpemluZyBjZ3JvdXAgc3Vic3lzIGNwdQ0KWyAgICAwLjAwMDAwMF0gTGludXgg
dmVyc2lvbiAyLjYuMzIuMjUgKG9zc3Rlc3RAaXRjaC1taXRlLmNhbS54Y2ktdGVzdC5jb20pIChn
Y2MgdmVyc2lvbiA0LjMuMiAoRGViaWFuIDQuMy4yLTEuMSkgKSAjMSBTTVAgVHVlIE5vdiAyMyAw
NToyNzo0NyBHTVQgMjAxMA0KWyAgICAwLjAwMDAwMF0gS0VSTkVMIHN1cHBvcnRlZCBjcHVzOg0K
WyAgICAwLjAwMDAwMF0gICBJbnRlbCBHZW51aW5lSW50ZWwNClsgICAgMC4wMDAwMDBdICAgQU1E
IEF1dGhlbnRpY0FNRA0KWyAgICAwLjAwMDAwMF0gICBOU0MgR2VvZGUgYnkgTlNDDQpbICAgIDAu
MDAwMDAwXSAgIEN5cml4IEN5cml4SW5zdGVhZA0KWyAgICAwLjAwMDAwMF0gICBDZW50YXVyIENl
bnRhdXJIYXVscw0KWyAgICAwLjAwMDAwMF0gICBUcmFuc21ldGEgR2VudWluZVRNeDg2DQpbICAg
IDAuMDAwMDAwXSAgIFRyYW5zbWV0YSBUcmFuc21ldGFDUFUNClsgICAgMC4wMDAwMDBdICAgVU1D
IFVNQyBVTUMgVU1DDQpbICAgIDAuMDAwMDAwXSB4ZW5fcmVsZWFzZV9jaHVuazogbG9va2luZyBh
dCBhcmVhIHBmbiBkMDAwMC1mNGE4MTogMTUwMTQ1IHBhZ2VzIGZyZWVkDQpbICAgIDAuMDAwMDAw
XSByZWxlYXNlZCAxNTAxNDUgcGFnZXMgb2YgdW51c2VkIG1lbW9yeQ0KWyAgICAwLjAwMDAwMF0g
QklPUy1wcm92aWRlZCBwaHlzaWNhbCBSQU0gbWFwOg0KWyAgICAwLjAwMDAwMF0gIFhlbjogMDAw
MDAwMDAwMDAwMDAwMCAtIDAwMDAwMDAwMDAwOWM0MDAgKHVzYWJsZSkNClsgICAgMC4wMDAwMDBd
ICBYZW46IDAwMDAwMDAwMDAwOWM0MDAgLSAwMDAwMDAwMDAwMTAwMDAwIChyZXNlcnZlZCkNClsg
ICAgMC4wMDAwMDBdICBYZW46IDAwMDAwMDAwMDAxMDAwMDAgLSAwMDAwMDAwMGNmZWUwMDAwICh1
c2FibGUpDQpbICAgIDAuMDAwMDAwXSAgWGVuOiAwMDAwMDAwMGNmZWUwMDAwIC0gMDAwMDAwMDBj
ZmVlNTAwMCAoQUNQSSBkYXRhKQ0KWyAgICAwLjAwMDAwMF0gIFhlbjogMDAwMDAwMDBjZmVlNTAw
MCAtIDAwMDAwMDAwY2ZlZjEwMDAgKEFDUEkgTlZTKQ0KWyAgICAwLjAwMDAwMF0gIFhlbjogMDAw
MDAwMDBjZmVmMTAwMCAtIDAwMDAwMDAwZDAwMDAwMDAgKHJlc2VydmVkKQ0KWyAgICAwLjAwMDAw
MF0gIFhlbjogMDAwMDAwMDBmZWMwMDAwMCAtIDAwMDAwMDAwZmVjMDMwMDAgKHJlc2VydmVkKQ0K
WyAgICAwLjAwMDAwMF0gIFhlbjogMDAwMDAwMDBmZWUwMDAwMCAtIDAwMDAwMDAwZmVlMDEwMDAg
KHJlc2VydmVkKQ0KWyAgICAwLjAwMDAwMF0gIFhlbjogMDAwMDAwMDBmZmY4MDAwMCAtIDAwMDAw
MDAxMDAwMDAwMDAgKHJlc2VydmVkKQ0KWyAgICAwLjAwMDAwMF0gIFhlbjogMDAwMDAwMDEzMDAw
MDAwMCAtIDAwMDAwMDAxNTRhODEwMDAgKHVzYWJsZSkNClsgICAgMC4wMDAwMDBdIGJvb3Rjb25z
b2xlIFt4ZW5ib290MF0gZW5hYmxlZA0KWyAgICAwLjAwMDAwMF0gRE1JIHByZXNlbnQuDQpbICAg
IDAuMDAwMDAwXSBQaG9lbml4IEJJT1MgZGV0ZWN0ZWQ6IEJJT1MgbWF5IGNvcnJ1cHQgbG93IFJB
TSwgd29ya2luZyBhcm91bmQgaXQuDQpbICAgIDAuMDAwMDAwXSBsYXN0X3BmbiA9IDB4MTU0YTgx
IG1heF9hcmNoX3BmbiA9IDB4MTAwMDAwMA0KWyAgICAwLjAwMDAwMF0geDg2IFBBVCBlbmFibGVk
OiBjcHUgMCwgb2xkIDB4NTAxMDAwNzA0MDYsIG5ldyAweDcwMTA2MDAwNzAxMDYNClsgICAgMC4w
MDAwMDBdIFNjYW5uaW5nIDAgYXJlYXMgZm9yIGxvdyBtZW1vcnkgY29ycnVwdGlvbg0KWyAgICAw
LjAwMDAwMF0gbW9kaWZpZWQgcGh5c2ljYWwgUkFNIG1hcDoNClsgICAgMC4wMDAwMDBdICBtb2Rp
ZmllZDogMDAwMDAwMDAwMDAwMDAwMCAtIDAwMDAwMDAwMDAwMTAwMDAgKHJlc2VydmVkKQ0KWyAg
ICAwLjAwMDAwMF0gIG1vZGlmaWVkOiAwMDAwMDAwMDAwMDEwMDAwIC0gMDAwMDAwMDAwMDA5YzQw
MCAodXNhYmxlKQ0KWyAgICAwLjAwMDAwMF0gIG1vZGlmaWVkOiAwMDAwMDAwMDAwMDljNDAwIC0g
MDAwMDAwMDAwMDEwMDAwMCAocmVzZXJ2ZWQpDQpbICAgIDAuMDAwMDAwXSAgbW9kaWZpZWQ6IDAw
MDAwMDAwMDAxMDAwMDAgLSAwMDAwMDAwMGNmZWUwMDAwICh1c2FibGUpDQpbICAgIDAuMDAwMDAw
XSAgbW9kaWZpZWQ6IDAwMDAwMDAwY2ZlZTAwMDAgLSAwMDAwMDAwMGNmZWU1MDAwIChBQ1BJIGRh
dGEpDQpbICAgIDAuMDAwMDAwXSAgbW9kaWZpZWQ6IDAwMDAwMDAwY2ZlZTUwMDAgLSAwMDAwMDAw
MGNmZWYxMDAwIChBQ1BJIE5WUykNClsgICAgMC4wMDAwMDBdICBtb2RpZmllZDogMDAwMDAwMDBj
ZmVmMTAwMCAtIDAwMDAwMDAwZDAwMDAwMDAgKHJlc2VydmVkKQ0KWyAgICAwLjAwMDAwMF0gIG1v
ZGlmaWVkOiAwMDAwMDAwMGZlYzAwMDAwIC0gMDAwMDAwMDBmZWMwMzAwMCAocmVzZXJ2ZWQpDQpb
ICAgIDAuMDAwMDAwXSAgbW9kaWZpZWQ6IDAwMDAwMDAwZmVlMDAwMDAgLSAwMDAwMDAwMGZlZTAx
MDAwIChyZXNlcnZlZCkNClsgICAgMC4wMDAwMDBdICBtb2RpZmllZDogMDAwMDAwMDBmZmY4MDAw
MCAtIDAwMDAwMDAxMDAwMDAwMDAgKHJlc2VydmVkKQ0KWyAgICAwLjAwMDAwMF0gIG1vZGlmaWVk
OiAwMDAwMDAwMTMwMDAwMDAwIC0gMDAwMDAwMDE1NGE4MTAwMCAodXNhYmxlKQ0KWyAgICAwLjAw
MDAwMF0gaW5pdF9tZW1vcnlfbWFwcGluZzogMDAwMDAwMDAwMDAwMDAwMC0wMDAwMDAwMDM3MWZl
MDAwDQpbICAgIDAuMDAwMDAwXSBOWCAoRXhlY3V0ZSBEaXNhYmxlKSBwcm90ZWN0aW9uOiBhY3Rp
dmUNClsgICAgMC4wMDAwMDBdIFJBTURJU0s6IDAxOTk2MDAwIC0gMDU1NDNhMDANClsgICAgMC4w
MDAwMDBdIEFDUEk6IFJTRFAgMDAwZjgwODAgMDAwMjQgKHYwMiBQVExURCApDQpbICAgIDAuMDAw
MDAwXSBBQ1BJOiBYU0RUIGNmZWUwMWZlIDAwMDVDICh2MDEgQlJDTSAgIEVYUExPU04gIDA2MDQw
MDAwIFBUTCAgMDIwMDAwMDEpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBGQUNQIGNmZWUwMmNlIDAw
MEY0ICh2MDMgQlJDTSAgIEVYUExPU04gIDA2MDQwMDAwIE1TRlQgMDIwMDAwMDEpDQpbICAgIDAu
MDAwMDAwXSBBQ1BJIFdhcm5pbmc6IE9wdGlvbmFsIGZpZWxkIFBtMkNvbnRyb2xCbG9jayBoYXMg
emVybyBhZGRyZXNzIG9yIGxlbmd0aDogMDAwMDAwMDAwMDAwMDAwMC9DICgyMDA5MDkwMy90YmZh
ZHQtNTU3KQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogRFNEVCBjZmVlMDNjMiAwNDk0OSAodjAyIEJS
Q00gICBFWFBMT1NOICAwNjA0MDAwMCBNU0ZUIDAzMDAwMDAwKQ0KWyAgICAwLjAwMDAwMF0gQUNQ
STogRkFDUyBjZmVmMGZjMCAwMDA0MA0KWyAgICAwLjAwMDAwMF0gQUNQSTogVENQQSBjZmVlNGQw
YiAwMDAzMiAodjAxIEJSQ00gICBFWFBMT1NOICAwNjA0MDAwMCBQVEwgIDIwMDAwMDAxKQ0KWyAg
ICAwLjAwMDAwMF0gQUNQSTogU1JBVCBjZmVlNGQzZCAwMDEyOCAodjAxIEFNRCAgICBGQU1fRl8x
MCAwNjA0MDAwMCBBTUQgIDAwMDAwMDAxKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogSFBFVCBjZmVl
NGU2NSAwMDAzOCAodjAxIEJSQ00gICBFWFBMT1NOICAwNjA0MDAwMCBCUkNNIDAyMDAwMDAxKQ0K
WyAgICAwLjAwMDAwMF0gQUNQSTogU1NEVCBjZmVlNGU5ZCAwMDA0OSAodjAxIEJSQ00gICBQUlQw
ICAgICAwNjA0MDAwMCBCUkNNIDAyMDAwMDAxKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogU1BDUiBj
ZmVlNGVlNiAwMDA1MCAodjAxIFBUTFREICAkVUNSVEJMJCAwNjA0MDAwMCBQVEwgIDAwMDAwMDAx
KQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogQVBJQyBjZmVlNGYzNiAwMDBDQSAodjAxIEJSQ00gICBF
WFBMT1NOICAwNjA0MDAwMCBQVEwgIDAyMDAwMDAxKQ0KWyAgICAwLjAwMDAwMF0gNDU2OE1CIEhJ
R0hNRU0gYXZhaWxhYmxlLg0KWyAgICAwLjAwMDAwMF0gODgxTUIgTE9XTUVNIGF2YWlsYWJsZS4N
ClsgICAgMC4wMDAwMDBdICAgbWFwcGVkIGxvdyByYW06IDAgLSAzNzFmZTAwMA0KWyAgICAwLjAw
MDAwMF0gICBsb3cgcmFtOiAwIC0gMzcxZmUwMDANClsgICAgMC4wMDAwMDBdICAgbm9kZSAwIGxv
dyByYW06IDAwMDAwMDAwIC0gMzcxZmUwMDANClsgICAgMC4wMDAwMDBdICAgbm9kZSAwIGJvb3Rt
YXAgMDAwMTAwMDAgLSAwMDAxNmU0MA0KWyAgICAwLjAwMDAwMF0gKDExIGVhcmx5IHJlc2VydmF0
aW9ucykgPT0+IGJvb3RtZW0gWzAwMDAwMDAwMDAgLSAwMDM3MWZlMDAwXQ0KWyAgICAwLjAwMDAw
MF0gICAjMCBbMDAwMDAwMDAwMCAtIDAwMDAwMDEwMDBdICAgQklPUyBkYXRhIHBhZ2UgPT0+IFsw
MDAwMDAwMDAwIC0gMDAwMDAwMTAwMF0NClsgICAgMC4wMDAwMDBdICAgIzEgWzAwMDU5MWEwMDAg
LSAwMDA1OTRlMDAwXSAgIFhFTiBQQUdFVEFCTEVTID09PiBbMDAwNTkxYTAwMCAtIDAwMDU5NGUw
MDBdDQpbICAgIDAuMDAwMDAwXSAgICMyIFswMDAwMDAxMDAwIC0gMDAwMDAwMjAwMF0gICAgRVgg
VFJBTVBPTElORSA9PT4gWzAwMDAwMDEwMDAgLSAwMDAwMDAyMDAwXQ0KWyAgICAwLjAwMDAwMF0g
ICAjMyBbMDAwMDAwNjAwMCAtIDAwMDAwMDcwMDBdICAgICAgIFRSQU1QT0xJTkUgPT0+IFswMDAw
MDA2MDAwIC0gMDAwMDAwNzAwMF0NClsgICAgMC4wMDAwMDBdICAgIzQgWzAwMDEwMDAwMDAgLSAw
MDAxODI1M2MwXSAgICBURVhUIERBVEEgQlNTID09PiBbMDAwMTAwMDAwMCAtIDAwMDE4MjUzYzBd
DQpbICAgIDAuMDAwMDAwXSAgICM1IFswMDAxOTk2MDAwIC0gMDAwNTU0M2EwMF0gICAgICAgICAg
UkFNRElTSyA9PT4gWzAwMDE5OTYwMDAgLSAwMDA1NTQzYTAwXQ0KWyAgICAwLjAwMDAwMF0gICAj
NiBbMDAwNTU0NDAwMCAtIDAwMDU5MWEwMDBdICAgWEVOIFNUQVJUIElORk8gPT0+IFswMDA1NTQ0
MDAwIC0gMDAwNTkxYTAwMF0NClsgICAgMC4wMDAwMDBdICAgIzcgWzAxMzAwMDAwMDAgLSAwMTU0
YTgxMDAwXSAgICAgICAgWEVOIEVYVFJBDQpbICAgIDAuMDAwMDAwXSAgICM4IFswMDAxODI2MDAw
IC0gMDAwMTgzMzE4Y10gICAgICAgICAgICAgIEJSSyA9PT4gWzAwMDE4MjYwMDAgLSAwMDAxODMz
MThjXQ0KWyAgICAwLjAwMDAwMF0gICAjOSBbMDAwMDEwMDAwMCAtIDAwMDAyODgwMDBdICAgICAg
ICAgIFBHVEFCTEUgPT0+IFswMDAwMTAwMDAwIC0gMDAwMDI4ODAwMF0NClsgICAgMC4wMDAwMDBd
ICAgIzEwIFswMDAwMDEwMDAwIC0gMDAwMDAxNzAwMF0gICAgICAgICAgQk9PVE1BUCA9PT4gWzAw
MDAwMTAwMDAgLSAwMDAwMDE3MDAwXQ0KWyAgICAwLjAwMDAwMF0gZm91bmQgU01QIE1QLXRhYmxl
IGF0IFtjMDBmODBiMF0gZjgwYjANClsgICAgMC4wMDAwMDBdIFpvbmUgUEZOIHJhbmdlczoNClsg
ICAgMC4wMDAwMDBdICAgRE1BICAgICAgMHgwMDAwMDAxMCAtPiAweDAwMDAxMDAwDQpbICAgIDAu
MDAwMDAwXSAgIE5vcm1hbCAgIDB4MDAwMDEwMDAgLT4gMHgwMDAzNzFmZQ0KWyAgICAwLjAwMDAw
MF0gICBIaWdoTWVtICAweDAwMDM3MWZlIC0+IDB4MDAxNTRhODENClsgICAgMC4wMDAwMDBdIE1v
dmFibGUgem9uZSBzdGFydCBQRk4gZm9yIGVhY2ggbm9kZQ0KWyAgICAwLjAwMDAwMF0gZWFybHlf
bm9kZV9tYXBbM10gYWN0aXZlIFBGTiByYW5nZXMNClsgICAgMC4wMDAwMDBdICAgICAwOiAweDAw
MDAwMDEwIC0+IDB4MDAwMDAwOWMNClsgICAgMC4wMDAwMDBdICAgICAwOiAweDAwMDAwMTAwIC0+
IDB4MDAwY2ZlZTANClsgICAgMC4wMDAwMDBdICAgICAwOiAweDAwMTMwMDAwIC0+IDB4MDAxNTRh
ODENClsgICAgMC4wMDAwMDBdIFVzaW5nIEFQSUMgZHJpdmVyIGRlZmF1bHQNClsgICAgMC4wMDAw
MDBdIEFDUEk6IFBNLVRpbWVyIElPIFBvcnQ6IDB4NTA4DQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBM
QVBJQyAoYWNwaV9pZFsweDAwXSBsYXBpY19pZFsweDAwXSBlbmFibGVkKQ0KWyAgICAwLjAwMDAw
MF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgwMV0gbGFwaWNfaWRbMHgwMV0gZW5hYmxlZCkNClsg
ICAgMC4wMDAwMDBdIEFDUEk6IExBUElDIChhY3BpX2lkWzB4MDJdIGxhcGljX2lkWzB4MDJdIGVu
YWJsZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDAzXSBsYXBpY19p
ZFsweDAzXSBlbmFibGVkKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgw
NF0gbGFwaWNfaWRbMHgwNF0gZW5hYmxlZCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElDIChh
Y3BpX2lkWzB4MDVdIGxhcGljX2lkWzB4MDVdIGVuYWJsZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJ
OiBMQVBJQyAoYWNwaV9pZFsweDA2XSBsYXBpY19pZFsweDA2XSBlbmFibGVkKQ0KWyAgICAwLjAw
MDAwMF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgwN10gbGFwaWNfaWRbMHgwN10gZW5hYmxlZCkN
ClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElDX05NSSAoYWNwaV9pZFsweDAwXSBoaWdoIGVkZ2Ug
bGludFsweDFdKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogTEFQSUNfTk1JIChhY3BpX2lkWzB4MDFd
IGhpZ2ggZWRnZSBsaW50WzB4MV0pDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBMQVBJQ19OTUkgKGFj
cGlfaWRbMHgwMl0gaGlnaCBlZGdlIGxpbnRbMHgxXSkNClsgICAgMC4wMDAwMDBdIEFDUEk6IExB
UElDX05NSSAoYWNwaV9pZFsweDAzXSBoaWdoIGVkZ2UgbGludFsweDFdKQ0KWyAgICAwLjAwMDAw
MF0gQUNQSTogTEFQSUNfTk1JIChhY3BpX2lkWzB4MDRdIGhpZ2ggZWRnZSBsaW50WzB4MV0pDQpb
ICAgIDAuMDAwMDAwXSBBQ1BJOiBMQVBJQ19OTUkgKGFjcGlfaWRbMHgwNV0gaGlnaCBlZGdlIGxp
bnRbMHgxXSkNClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElDX05NSSAoYWNwaV9pZFsweDA2XSBo
aWdoIGVkZ2UgbGludFsweDFdKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogTEFQSUNfTk1JIChhY3Bp
X2lkWzB4MDddIGhpZ2ggZWRnZSBsaW50WzB4MV0pDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBJT0FQ
SUMgKGlkWzB4MDhdIGFkZHJlc3NbMHhmZWMwMDAwMF0gZ3NpX2Jhc2VbMF0pDQpbICAgIDAuMDAw
MDAwXSBJT0FQSUNbMF06IGFwaWNfaWQgOCwgdmVyc2lvbiAwLCBhZGRyZXNzIDB4ZmVjMDAwMDAs
IEdTSSAwLTANClsgICAgMC4wMDAwMDBdIEFDUEk6IElPQVBJQyAoaWRbMHgwOV0gYWRkcmVzc1sw
eGZlYzAxMDAwXSBnc2lfYmFzZVsxNl0pDQpbICAgIDAuMDAwMDAwXSBJT0FQSUNbMV06IGFwaWNf
aWQgOSwgdmVyc2lvbiAwLCBhZGRyZXNzIDB4ZmVjMDEwMDAsIEdTSSAxNi0xNg0KWyAgICAwLjAw
MDAwMF0gQUNQSTogSU9BUElDIChpZFsweDBhXSBhZGRyZXNzWzB4ZmVjMDIwMDBdIGdzaV9iYXNl
WzMyXSkNClsgICAgMC4wMDAwMDBdIElPQVBJQ1syXTogYXBpY19pZCAxMCwgdmVyc2lvbiAwLCBh
ZGRyZXNzIDB4ZmVjMDIwMDAsIEdTSSAzMi0zMg0KWyAgICAwLjAwMDAwMF0gQUNQSTogSU5UX1NS
Q19PVlIgKGJ1cyAwIGJ1c19pcnEgMCBnbG9iYWxfaXJxIDIgaGlnaCBlZGdlKQ0KWyAgICAwLjAw
MDAwMF0gRVJST1I6IFVuYWJsZSB0byBsb2NhdGUgSU9BUElDIGZvciBHU0kgMg0KWyAgICAwLjAw
MDAwMF0gRVJST1I6IFVuYWJsZSB0byBsb2NhdGUgSU9BUElDIGZvciBHU0kgOQ0KWyAgICAwLjAw
MDAwMF0gVXNpbmcgQUNQSSAoTUFEVCkgZm9yIFNNUCBjb25maWd1cmF0aW9uIGluZm9ybWF0aW9u
DQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBIUEVUIGlkOiAweDExNjZhMjAxIGJhc2U6IDB4ZmVkMDAw
MDANClsgICAgMC4wMDAwMDBdIFNNUDogQWxsb3dpbmcgOCBDUFVzLCAwIGhvdHBsdWcgQ1BVcw0K
WyAgICAwLjAwMDAwMF0gUE06IFJlZ2lzdGVyZWQgbm9zYXZlIG1lbW9yeTogMDAwMDAwMDAwMDA5
YzAwMCAtIDAwMDAwMDAwMDAwOWQwMDANClsgICAgMC4wMDAwMDBdIFBNOiBSZWdpc3RlcmVkIG5v
c2F2ZSBtZW1vcnk6IDAwMDAwMDAwMDAwOWQwMDAgLSAwMDAwMDAwMDAwMTAwMDAwDQpbICAgIDAu
MDAwMDAwXSBBbGxvY2F0aW5nIFBDSSByZXNvdXJjZXMgc3RhcnRpbmcgYXQgZDAwMDAwMDAgKGdh
cDogZDAwMDAwMDA6MmVjMDAwMDApDQpbICAgIDAuMDAwMDAwXSBCb290aW5nIHBhcmF2aXJ0dWFs
aXplZCBrZXJuZWwgb24gWGVuDQpbICAgIDAuMDAwMDAwXSBYZW4gdmVyc2lvbjogNC4yLjAtcmMy
IChwcmVzZXJ2ZS1BRCkgKGRvbTApDQpbICAgIDAuMDAwMDAwXSBOUl9DUFVTOjggbnJfY3B1bWFz
a19iaXRzOjggbnJfY3B1X2lkczo4IG5yX25vZGVfaWRzOjENClsgICAgMC4wMDAwMDBdIFBFUkNQ
VTogRW1iZWRkZWQgMTUgcGFnZXMvY3B1IEBjODNmODAwMCBzMzcwODAgcjAgZDI0MzYwIHU2NTUz
Ng0KWyAgICAwLjAwMDAwMF0gcGNwdS1hbGxvYzogczM3MDgwIHIwIGQyNDM2MCB1NjU1MzYgYWxs
b2M9MTYqNDA5Ng0KWyAgICAwLjAwMDAwMF0gcGNwdS1hbGxvYzogWzBdIDAgWzBdIDEgWzBdIDIg
WzBdIDMgWzBdIDQgWzBdIDUgWzBdIDYgWzBdIDcgDQpbICAgIDAuMDAwMDAwXSBCdWlsdCAxIHpv
bmVsaXN0cyBpbiBab25lIG9yZGVyLCBtb2JpbGl0eSBncm91cGluZyBvbi4gIFRvdGFsIHBhZ2Vz
OiA5OTA4MDcNClsgICAgMC4wMDAwMDBdIEtlcm5lbCBjb21tYW5kIGxpbmU6IHBsYWNlaG9sZGVy
IHJvb3Q9VVVJRD1lNzU2OGFjNi1kNTYxLTQ4M2EtYjViYS04OGZkOTM1MGVlMTggcm8gY29uc29s
ZT1odmMwIGVhcmx5cHJpbnRrPXhlbiBub21vZGVzZXQNClsgICAgMC4wMDAwMDBdIFBJRCBoYXNo
IHRhYmxlIGVudHJpZXM6IDQwOTYgKG9yZGVyOiAyLCAxNjM4NCBieXRlcykNClsgICAgMC4wMDAw
MDBdIERlbnRyeSBjYWNoZSBoYXNoIHRhYmxlIGVudHJpZXM6IDEzMTA3MiAob3JkZXI6IDcsIDUy
NDI4OCBieXRlcykNClsgICAgMC4wMDAwMDBdIElub2RlLWNhY2hlIGhhc2ggdGFibGUgZW50cmll
czogNjU1MzYgKG9yZGVyOiA2LCAyNjIxNDQgYnl0ZXMpDQpbICAgIDAuMDAwMDAwXSBFbmFibGlu
ZyBmYXN0IEZQVSBzYXZlIGFuZCByZXN0b3JlLi4uIGRvbmUuDQpbICAgIDAuMDAwMDAwXSBFbmFi
bGluZyB1bm1hc2tlZCBTSU1EIEZQVSBleGNlcHRpb24gc3VwcG9ydC4uLiBkb25lLg0KWyAgICAw
LjAwMDAwMF0gSW5pdGlhbGl6aW5nIENQVSMwDQpbICAgIDAuMDAwMDAwXSBETUE6IFBsYWNpbmcg
NjRNQiBzb2Z0d2FyZSBJTyBUTEIgYmV0d2VlbiBjODUzYjAwMCAtIGNjNTNiMDAwDQpbICAgIDAu
MDAwMDAwXSBETUE6IHNvZnR3YXJlIElPIFRMQiBhdCBwaHlzIDB4ODUzYjAwMCAtIDB4YzUzYjAw
MA0KWyAgICAwLjAwMDAwMF0geGVuX3N3aW90bGJfZml4dXA6IGJ1Zj1jODUzYjAwMCBzaXplPTY3
MTA4ODY0DQpbICAgIDAuMDAwMDAwXSB4ZW5fc3dpb3RsYl9maXh1cDogYnVmPWNjNTliMDAwIHNp
emU9MzI3NjgNClsgICAgMC4wMDAwMDBdIEluaXRpYWxpemluZyBIaWdoTWVtIGZvciBub2RlIDAg
KDAwMDM3MWZlOjAwMTU0YTgxKQ0KWyAgICAwLjAwMDAwMF0gTWVtb3J5OiAzMjE5MTAway81NTgx
MzE2ayBhdmFpbGFibGUgKDQ2MTlrIGtlcm5lbCBjb2RlLCAxODYxNjRrIHJlc2VydmVkLCAyNTAy
ayBkYXRhLCA0ODRrIGluaXQsIDMxMDQxNDBrIGhpZ2htZW0pDQpbICAgIDAuMDAwMDAwXSB2aXJ0
dWFsIGtlcm5lbCBtZW1vcnkgbGF5b3V0Og0KWyAgICAwLjAwMDAwMF0gICAgIGZpeG1hcCAgOiAw
eGZmNzFkMDAwIC0gMHhmZjdmZjAwMCAgICggOTA0IGtCKQ0KWyAgICAwLjAwMDAwMF0gICAgIHBr
bWFwICAgOiAweGZmMjAwMDAwIC0gMHhmZjQwMDAwMCAgICgyMDQ4IGtCKQ0KWyAgICAwLjAwMDAw
MF0gICAgIHZtYWxsb2MgOiAweGY3OWZlMDAwIC0gMHhmZjFmZTAwMCAgICggMTIwIE1CKQ0KWyAg
ICAwLjAwMDAwMF0gICAgIGxvd21lbSAgOiAweGMwMDAwMDAwIC0gMHhmNzFmZTAwMCAgICggODgx
IE1CKQ0KWyAgICAwLjAwMDAwMF0gICAgICAgLmluaXQgOiAweGMxNmY1MDAwIC0gMHhjMTc2ZTAw
MCAgICggNDg0IGtCKQ0KWyAgICAwLjAwMDAwMF0gICAgICAgLmRhdGEgOiAweGMxNDgyZDJlIC0g
MHhjMTZmNDY1NCAgICgyNTAyIGtCKQ0KWyAgICAwLjAwMDAwMF0gICAgICAgLnRleHQgOiAweGMx
MDAwMDAwIC0gMHhjMTQ4MmQyZSAgICg0NjE5IGtCKQ0KWyAgICAwLjAwMDAwMF0gU0xVQjogR2Vu
c2xhYnM9MTMsIEhXYWxpZ249NjQsIE9yZGVyPTAtMywgTWluT2JqZWN0cz0wLCBDUFVzPTgsIE5v
ZGVzPTENClsgICAgMC4wMDAwMDBdIEhpZXJhcmNoaWNhbCBSQ1UgaW1wbGVtZW50YXRpb24uDQpb
ICAgIDAuMDAwMDAwXSBOUl9JUlFTOjIzMDQgbnJfaXJxczoyMzA0DQpbICAgIDAuMDAwMDAwXSBB
Q1BJOiBJTlRfU1JDX09WUiAoYnVzIDAgYnVzX2lycSAwIGdsb2JhbF9pcnEgMiBoaWdoIGVkZ2Up
DQpbICAgIDAuMDAwMDAwXSBDb25zb2xlOiBjb2xvdXIgVkdBKyA4MHgyNQ0KWyAgICAwLjAwMDAw
MF0gY29uc29sZSBbaHZjMF0gZW5hYmxlZCwgYm9vdGNvbnNvbGUgZGlzYWJsZWQNDQpbICAgIDAu
MDAwMDAwXSBjb25zb2xlIFtodmMwXSBlbmFibGVkLCBib290Y29uc29sZSBkaXNhYmxlZA0KWyAg
ICAwLjAwMDAwMF0gaW5zdGFsbGluZyBYZW4gdGltZXIgZm9yIENQVSAwDQ0KWyAgICAwLjAwMDAw
MF0gRGV0ZWN0ZWQgMTk5NS4wMjMgTUh6IHByb2Nlc3Nvci4NDQpbICAgIDAuMDAwOTk5XSBDYWxp
YnJhdGluZyBkZWxheSBsb29wIChza2lwcGVkKSwgdmFsdWUgY2FsY3VsYXRlZCB1c2luZyB0aW1l
ciBmcmVxdWVuY3kuLiAzOTkwLjA0IEJvZ29NSVBTIChscGo9MTk5NTAyMykNDQpbICAgIDAuMDAw
OTk5XSBTZWN1cml0eSBGcmFtZXdvcmsgaW5pdGlhbGl6ZWQNDQpbICAgIDAuMDAwOTk5XSBTRUxp
bnV4OiAgSW5pdGlhbGl6aW5nLg0NClsgICAgMC4wMDA5OTldIE1vdW50LWNhY2hlIGhhc2ggdGFi
bGUgZW50cmllczogNTEyDQ0KWyAgICAwLjAwMTM5M10gSW5pdGlhbGl6aW5nIGNncm91cCBzdWJz
eXMgbnMNDQpbICAgIDAuMDAxOTk5XSBJbml0aWFsaXppbmcgY2dyb3VwIHN1YnN5cyBjcHVhY2N0
DQ0KWyAgICAwLjAwMTk5OV0gSW5pdGlhbGl6aW5nIGNncm91cCBzdWJzeXMgZnJlZXplcg0NClsg
ICAgMC4wMDIwMzNdIENQVTogTDEgSSBDYWNoZTogNjRLICg2NCBieXRlcy9saW5lKSwgRCBjYWNo
ZSA2NEsgKDY0IGJ5dGVzL2xpbmUpDQ0KWyAgICAwLjAwMzAwN10gQ1BVOiBMMiBDYWNoZTogNTEy
SyAoNjQgYnl0ZXMvbGluZSkNDQpbICAgIDAuMDAzOTk5XSBDUFU6IFBoeXNpY2FsIFByb2Nlc3Nv
ciBJRDogMA0NClsgICAgMC4wMDM5OTldIENQVTogUHJvY2Vzc29yIENvcmUgSUQ6IDANDQpbICAg
IDAuMDA0MDExXSBtY2U6IENQVSBzdXBwb3J0cyAyIE1DRSBiYW5rcw0NClsgICAgMC4wMDUwMjJd
IFBlcmZvcm1hbmNlIEV2ZW50czogQU1EIFBNVSBkcml2ZXIuDQ0KWyAgICAwLjAwNTk5OV0gLS0t
LS0tLS0tLS0tWyBjdXQgaGVyZSBdLS0tLS0tLS0tLS0tDQ0KWyAgICAwLjAwNTk5OV0gV0FSTklO
RzogYXQgYXJjaC94ODYveGVuL2VubGlnaHRlbi5jOjcyOSB4ZW5fYXBpY193cml0ZSsweDEyLzB4
MTQoKQ0NClsgICAgMC4wMDU5OTldIEhhcmR3YXJlIG5hbWU6IGVtcHR5DQ0KWyAgICAwLjAwNTk5
OV0gTW9kdWxlcyBsaW5rZWQgaW46DQ0KWyAgICAwLjAwNTk5OV0gUGlkOiAwLCBjb21tOiBzd2Fw
cGVyIE5vdCB0YWludGVkIDIuNi4zMi4yNSAjMQ0NClsgICAgMC4wMDU5OTldIENhbGwgVHJhY2U6
DQ0KWyAgICAwLjAwNTk5OV0gIFs8YzEwMjlmZGU+XSA/IHhlbl9hcGljX3dyaXRlKzB4MTIvMHgx
NA0NClsgICAgMC4wMDU5OTldICBbPGMxMDYzYjJkPl0gd2Fybl9zbG93cGF0aF9jb21tb24rMHg2
MC8weDkwDQ0KWyAgICAwLjAwNTk5OV0gIFs8YzEwNjNiNmE+XSB3YXJuX3Nsb3dwYXRoX251bGwr
MHhkLzB4MTANDQpbICAgIDAuMDA1OTk5XSAgWzxjMTAyOWZkZT5dIHhlbl9hcGljX3dyaXRlKzB4
MTIvMHgxNA0NClsgICAgMC4wMDU5OTldICBbPGMxMDM5YmJiPl0gcGVyZl9ldmVudHNfbGFwaWNf
aW5pdCsweDJiLzB4MmQNDQpbICAgIDAuMDA1OTk5XSAgWzxjMTZmZjg4Mz5dIGluaXRfaHdfcGVy
Zl9ldmVudHMrMHgyZTUvMHgzN2YNDQpbICAgIDAuMDA1OTk5XSAgWzxjMTZmZjJjNT5dIGlkZW50
aWZ5X2Jvb3RfY3B1KzB4MjEvMHgyMw0NClsgICAgMC4wMDU5OTldICBbPGMxNmZmNDlhPl0gY2hl
Y2tfYnVncysweGIvMHgxMGYNDQpbICAgIDAuMDA1OTk5XSAgWzxjMTBhOGIxNT5dID8gZGVsYXlh
Y2N0X2luaXQrMHg0Mi8weDQ1DQ0KWyAgICAwLjAwNTk5OV0gIFs8YzE2ZjU4NWU+XSBzdGFydF9r
ZXJuZWwrMHgzMGIvMHgzMWENDQpbICAgIDAuMDA1OTk5XSAgWzxjMTZmNTBhMj5dIGkzODZfc3Rh
cnRfa2VybmVsKzB4OTEvMHg5Ng0NClsgICAgMC4wMDU5OTldICBbPGMxNmY4ZTdkPl0geGVuX3N0
YXJ0X2tlcm5lbCsweDU1Yi8weDU2Mw0NClsgICAgMC4wMDU5OTldIC0tLVsgZW5kIHRyYWNlIDRl
YWEyYTg2YThlMmRhMjIgXS0tLQ0NClsgICAgMC4wMDU5OTldIC4uLiB2ZXJzaW9uOiAgICAgICAg
ICAgICAgICAwDQ0KWyAgICAwLjAwNTk5OV0gLi4uIGJpdCB3aWR0aDogICAgICAgICAgICAgIDQ4
DQ0KWyAgICAwLjAwNTk5OV0gLi4uIGdlbmVyaWMgcmVnaXN0ZXJzOiAgICAgIDQNDQpbICAgIDAu
MDA1OTk5XSAuLi4gdmFsdWUgbWFzazogICAgICAgICAgICAgMDAwMGZmZmZmZmZmZmZmZg0NClsg
ICAgMC4wMDU5OTldIC4uLiBtYXggcGVyaW9kOiAgICAgICAgICAgICAwMDAwN2ZmZmZmZmZmZmZm
DQ0KWyAgICAwLjAwNTk5OV0gLi4uIGZpeGVkLXB1cnBvc2UgZXZlbnRzOiAgIDANDQpbICAgIDAu
MDA1OTk5XSAuLi4gZXZlbnQgbWFzazogICAgICAgICAgICAgMDAwMDAwMDAwMDAwMDAwZg0NClsg
ICAgMC4wMDcwOTddIFNNUCBhbHRlcm5hdGl2ZXM6IHN3aXRjaGluZyB0byBVUCBjb2RlDQ0KWyAg
ICAwLjAwOTk5OF0gQUNQSTogQ29yZSByZXZpc2lvbiAyMDA5MDkwMw0NClsgICAgMC4wMjMyMzdd
IGNwdSAwIHNwaW5sb2NrIGV2ZW50IGlycSAyMzAyDQ0KWyAgICAwLjAyNDU1OV0gaW5zdGFsbGlu
ZyBYZW4gdGltZXIgZm9yIENQVSAxDQ0KWyAgICAwLjAyNTAzNl0gY3B1IDEgc3BpbmxvY2sgZXZl
bnQgaXJxIDIyOTYNDQpbICAgIDAuMDI2MDgyXSBTTVAgYWx0ZXJuYXRpdmVzOiBzd2l0Y2hpbmcg
dG8gU01QIGNvZGUNDQpbICAgIDAuMDAwOTk5XSBJbml0aWFsaXppbmcgQ1BVIzENDQpbICAgIDAu
MDAwOTk5XSBDUFU6IEwxIEkgQ2FjaGU6IDY0SyAoNjQgYnl0ZXMvbGluZSksIEQgY2FjaGUgNjRL
ICg2NCBieXRlcy9saW5lKQ0NClsgICAgMC4wMDA5OTldIENQVTogTDIgQ2FjaGU6IDUxMksgKDY0
IGJ5dGVzL2xpbmUpDQ0KWyAgICAwLjAwMDk5OV0gQ1BVOiBQaHlzaWNhbCBQcm9jZXNzb3IgSUQ6
IDANDQpbICAgIDAuMDAwOTk5XSBDUFU6IFByb2Nlc3NvciBDb3JlIElEOiAxDQ0KWyAgICAwLjAy
ODU3NV0gaW5zdGFsbGluZyBYZW4gdGltZXIgZm9yIENQVSAyDQ0KWyAgICAwLjAyOTAyOF0gY3B1
IDIgc3BpbmxvY2sgZXZlbnQgaXJxIDIyOTANDQpbICAgIDAuMDAwOTk5XSBJbml0aWFsaXppbmcg
Q1BVIzINDQpbICAgIDAuMDAwOTk5XSBDUFU6IEwxIEkgQ2FjaGU6IDY0SyAoNjQgYnl0ZXMvbGlu
ZSksIEQgY2FjaGUgNjRLICg2NCBieXRlcy9saW5lKQ0NClsgICAgMC4wMDA5OTldIENQVTogTDIg
Q2FjaGU6IDUxMksgKDY0IGJ5dGVzL2xpbmUpDQ0KWyAgICAwLjAwMDk5OV0gQ1BVOiBQaHlzaWNh
bCBQcm9jZXNzb3IgSUQ6IDANDQpbICAgIDAuMDAwOTk5XSBDUFU6IFByb2Nlc3NvciBDb3JlIElE
OiAyDQ0KWyAgICAwLjAzMDY4M10gaW5zdGFsbGluZyBYZW4gdGltZXIgZm9yIENQVSAzDQ0KWyAg
ICAwLjAzMTA0OV0gY3B1IDMgc3BpbmxvY2sgZXZlbnQgaXJxIDIyODQNDQpbICAgIDAuMDAwOTk5
XSBJbml0aWFsaXppbmcgQ1BVIzMNDQpbICAgIDAuMDAwOTk5XSBDUFU6IEwxIEkgQ2FjaGU6IDY0
SyAoNjQgYnl0ZXMvbGluZSksIEQgY2FjaGUgNjRLICg2NCBieXRlcy9saW5lKQ0NClsgICAgMC4w
MDA5OTldIENQVTogTDIgQ2FjaGU6IDUxMksgKDY0IGJ5dGVzL2xpbmUpDQ0KWyAgICAwLjAwMDk5
OV0gQ1BVOiBQaHlzaWNhbCBQcm9jZXNzb3IgSUQ6IDANDQpbICAgIDAuMDAwOTk5XSBDUFU6IFBy
b2Nlc3NvciBDb3JlIElEOiAzDQ0KWyAgICAwLjAzMjY4OF0gaW5zdGFsbGluZyBYZW4gdGltZXIg
Zm9yIENQVSA0DQ0KWyAgICAwLjAzMzAyOV0gY3B1IDQgc3BpbmxvY2sgZXZlbnQgaXJxIDIyNzgN
DQpbICAgIDAuMDAwOTk5XSBJbml0aWFsaXppbmcgQ1BVIzQNDQpbICAgIDAuMDAwOTk5XSBDUFU6
IEwxIEkgQ2FjaGU6IDY0SyAoNjQgYnl0ZXMvbGluZSksIEQgY2FjaGUgNjRLICg2NCBieXRlcy9s
aW5lKQ0NClsgICAgMC4wMDA5OTldIENQVTogTDIgQ2FjaGU6IDUxMksgKDY0IGJ5dGVzL2xpbmUp
DQ0KWyAgICAwLjAwMDk5OV0gQ1BVOiBQaHlzaWNhbCBQcm9jZXNzb3IgSUQ6IDENDQpbICAgIDAu
MDAwOTk5XSBDUFU6IFByb2Nlc3NvciBDb3JlIElEOiAwDQ0KWyAgICAwLjAzNDc5Ml0gaW5zdGFs
bGluZyBYZW4gdGltZXIgZm9yIENQVSA1DQ0KWyAgICAwLjAzNTA0OF0gY3B1IDUgc3BpbmxvY2sg
ZXZlbnQgaXJxIDIyNzINDQpbICAgIDAuMDAwOTk5XSBJbml0aWFsaXppbmcgQ1BVIzUNDQpbICAg
IDAuMDAwOTk5XSBDUFU6IEwxIEkgQ2FjaGU6IDY0SyAoNjQgYnl0ZXMvbGluZSksIEQgY2FjaGUg
NjRLICg2NCBieXRlcy9saW5lKQ0NClsgICAgMC4wMDA5OTldIENQVTogTDIgQ2FjaGU6IDUxMksg
KDY0IGJ5dGVzL2xpbmUpDQ0KWyAgICAwLjAwMDk5OV0gQ1BVOiBQaHlzaWNhbCBQcm9jZXNzb3Ig
SUQ6IDENDQpbICAgIDAuMDAwOTk5XSBDUFU6IFByb2Nlc3NvciBDb3JlIElEOiAxDQ0KWyAgICAw
LjAzNjc5OV0gaW5zdGFsbGluZyBYZW4gdGltZXIgZm9yIENQVSA2DQ0KWyAgICAwLjAzNzA0OV0g
Y3B1IDYgc3BpbmxvY2sgZXZlbnQgaXJxIDIyNjYNDQpbICAgIDAuMDAwOTk5XSBJbml0aWFsaXpp
bmcgQ1BVIzYNDQpbICAgIDAuMDAwOTk5XSBDUFU6IEwxIEkgQ2FjaGU6IDY0SyAoNjQgYnl0ZXMv
bGluZSksIEQgY2FjaGUgNjRLICg2NCBieXRlcy9saW5lKQ0NClsgICAgMC4wMDA5OTldIENQVTog
TDIgQ2FjaGU6IDUxMksgKDY0IGJ5dGVzL2xpbmUpDQ0KWyAgICAwLjAwMDk5OV0gQ1BVOiBQaHlz
aWNhbCBQcm9jZXNzb3IgSUQ6IDENDQpbICAgIDAuMDAwOTk5XSBDUFU6IFByb2Nlc3NvciBDb3Jl
IElEOiAyDQ0KWyAgICAwLjAzODgyMl0gaW5zdGFsbGluZyBYZW4gdGltZXIgZm9yIENQVSA3DQ0K
WyAgICAwLjAzOTAzMl0gY3B1IDcgc3BpbmxvY2sgZXZlbnQgaXJxIDIyNjANDQpbICAgIDAuMDAw
OTk5XSBJbml0aWFsaXppbmcgQ1BVIzcNDQpbICAgIDAuMDAwOTk5XSBDUFU6IEwxIEkgQ2FjaGU6
IDY0SyAoNjQgYnl0ZXMvbGluZSksIEQgY2FjaGUgNjRLICg2NCBieXRlcy9saW5lKQ0NClsgICAg
MC4wMDA5OTldIENQVTogTDIgQ2FjaGU6IDUxMksgKDY0IGJ5dGVzL2xpbmUpDQ0KWyAgICAwLjAw
MDk5OV0gQ1BVOiBQaHlzaWNhbCBQcm9jZXNzb3IgSUQ6IDENDQpbICAgIDAuMDAwOTk5XSBDUFU6
IFByb2Nlc3NvciBDb3JlIElEOiAzDQ0KWyAgICAwLjA0MDY2Ml0gQnJvdWdodCB1cCA4IENQVXMN
DQpbICAgIDAuMDQ0MjIxXSBHcmFudCB0YWJsZSBpbml0aWFsaXplZA0NClsgICAgMC4wNDQ5OTNd
IFRpbWU6IDEwOjU3OjQxICBEYXRlOiAwOC8xNy8xMg0NClsgICAgMC4wNDUxMzhdIE5FVDogUmVn
aXN0ZXJlZCBwcm90b2NvbCBmYW1pbHkgMTYNDQpbICAgIDAuMDQ3MTg1XSBBQ1BJOiBidXMgdHlw
ZSBwY2kgcmVnaXN0ZXJlZA0NClsgICAgMC4wNDkzNzFdIFBDSTogQklPUyBCVUcgIzMyWzAwMDAw
MTRhXSBmb3VuZA0NClsgICAgMC4wNDk5OTJdIFBDSTogVXNpbmcgY29uZmlndXJhdGlvbiB0eXBl
IDEgZm9yIGJhc2UgYWNjZXNzDQ0KWyAgICAwLjA0OTk5Ml0gUENJOiBVc2luZyBjb25maWd1cmF0
aW9uIHR5cGUgMSBmb3IgZXh0ZW5kZWQgYWNjZXNzDQ0KWyAgICAwLjA4MzA5MF0gYmlvOiBjcmVh
dGUgc2xhYiA8YmlvLTA+IGF0IDANDQpbICAgIDAuMDg4Mjg5XSBFUlJPUjogVW5hYmxlIHRvIGxv
Y2F0ZSBJT0FQSUMgZm9yIEdTSSA5DQ0KWyAgICAwLjA5OTcyMl0gQUNQSTogSW50ZXJwcmV0ZXIg
ZW5hYmxlZA0NClsgICAgMC4wOTk5ODRdIEFDUEk6IChzdXBwb3J0cyBTMCBTMSBTNCBTNSkNDQpb
ICAgIDAuMDk5OTg0XSBBQ1BJOiBVc2luZyBJT0FQSUMgZm9yIGludGVycnVwdCByb3V0aW5nDQ0K
WyAgICAwLjEzNjk3OV0gQUNQSTogTm8gZG9jayBkZXZpY2VzIGZvdW5kLg0NClsgICAgMC4xMzc5
NzhdIEFDUEk6IFBDSSBSb290IEJyaWRnZSBbUENJMF0gKDAwMDA6MDApDQ0KWyAgICAwLjEzODI1
M10gcGNpIDAwMDA6MDA6MDEuMDogRW5hYmxpbmcgSFQgTVNJIE1hcHBpbmcNDQpbICAgIDAuMTM4
OTc4XSBwY2kgMDAwMDowMDowMy4yOiBQTUUjIHN1cHBvcnRlZCBmcm9tIEQwIEQxIEQyIEQzaG90
DQ0KWyAgICAwLjEzODk3OF0gcGNpIDAwMDA6MDA6MDMuMjogUE1FIyBkaXNhYmxlZA0NClsgICAg
MC4xMzg5NzhdIHBjaSAwMDAwOjAwOjA2LjA6IFBNRSMgc3VwcG9ydGVkIGZyb20gRDAgRDNob3Qg
RDNjb2xkDQ0KWyAgICAwLjEzODk3OF0gcGNpIDAwMDA6MDA6MDYuMDogUE1FIyBkaXNhYmxlZA0N
ClsgICAgMC4xMzg5NzhdIHBjaSAwMDAwOjAwOjA3LjA6IFBNRSMgc3VwcG9ydGVkIGZyb20gRDAg
RDNob3QgRDNjb2xkDQ0KWyAgICAwLjEzODk3OF0gcGNpIDAwMDA6MDA6MDcuMDogUE1FIyBkaXNh
YmxlZA0NClsgICAgMC4xMzg5NzhdIHBjaSAwMDAwOjAwOjA4LjA6IFBNRSMgc3VwcG9ydGVkIGZy
b20gRDAgRDNob3QgRDNjb2xkDQ0KWyAgICAwLjEzODk3OF0gcGNpIDAwMDA6MDA6MDguMDogUE1F
IyBkaXNhYmxlZA0NClsgICAgMC4xMzg5NzhdIHBjaSAwMDAwOjAwOjA5LjA6IFBNRSMgc3VwcG9y
dGVkIGZyb20gRDAgRDNob3QgRDNjb2xkDQ0KWyAgICAwLjEzODk3OF0gcGNpIDAwMDA6MDA6MDku
MDogUE1FIyBkaXNhYmxlZA0NClsgICAgMC4xMzg5NzhdIHBjaSAwMDAwOjAwOjBhLjA6IFBNRSMg
c3VwcG9ydGVkIGZyb20gRDAgRDNob3QgRDNjb2xkDQ0KWyAgICAwLjEzODk3OF0gcGNpIDAwMDA6
MDA6MGEuMDogUE1FIyBkaXNhYmxlZA0NClsgICAgMC4xMzk3NTBdIHBjaSAwMDAwOjA3OjAwLjA6
IFBNRSMgc3VwcG9ydGVkIGZyb20gRDAgRDNob3QgRDNjb2xkDQ0KWyAgICAwLjEzOTk3OF0gcGNp
IDAwMDA6MDc6MDAuMDogUE1FIyBkaXNhYmxlZA0NClsgICAgMC4xMzk5NzhdIHBjaSAwMDAwOjA4
OjA0LjA6IFBNRSMgc3VwcG9ydGVkIGZyb20gRDNob3QgRDNjb2xkDQ0KWyAgICAwLjEzOTk3OF0g
cGNpIDAwMDA6MDg6MDQuMDogUE1FIyBkaXNhYmxlZA0NClsgICAgMC4xMzk5NzhdIHBjaSAwMDAw
OjA4OjA0LjE6IFBNRSMgc3VwcG9ydGVkIGZyb20gRDNob3QgRDNjb2xkDQ0KWyAgICAwLjEzOTk3
OF0gcGNpIDAwMDA6MDg6MDQuMTogUE1FIyBkaXNhYmxlZA0NCihYRU4pIFBDSSBhZGQgZGV2aWNl
IDAwMDA6MDA6MDEuMA0KKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMDowMi4wDQooWEVOKSBQ
Q0kgYWRkIGRldmljZSAwMDAwOjAwOjAyLjENCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6
MDIuMg0KKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMDowMy4wDQooWEVOKSBQQ0kgYWRkIGRl
dmljZSAwMDAwOjAwOjAzLjENCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6MDMuMg0KKFhF
TikgUENJIGFkZCBkZXZpY2UgMDAwMDowMDowNC4wDQooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAw
OjAwOjA2LjANCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6MDcuMA0KKFhFTikgUENJIGFk
ZCBkZXZpY2UgMDAwMDowMDowOC4wDQooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjA5LjAN
CihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6MGEuMA0KKFhFTikgUENJIGFkZCBkZXZpY2Ug
MDAwMDowMDoxOC4wDQooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjE4LjENCihYRU4pIFBD
SSBhZGQgZGV2aWNlIDAwMDA6MDA6MTguMg0KKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMDox
OC4zDQooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjE4LjQNCihYRU4pIFBDSSBhZGQgZGV2
aWNlIDAwMDA6MDA6MTkuMA0KKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMDoxOS4xDQooWEVO
KSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjE5LjINCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6
MDA6MTkuMw0KKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMDoxOS40DQooWEVOKSBQQ0kgYWRk
IGRldmljZSAwMDAwOjAxOjBkLjANCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDE6MGUuMA0K
KFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMTowZS4xDQooWEVOKSBQQ0kgYWRkIGRldmljZSAw
MDAwOjA2OjAwLjANCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDc6MDAuMA0KKFhFTikgUENJ
IGFkZCBkZXZpY2UgMDAwMDowODowNC4wDQooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjA4OjA0
LjENClsgICAgMC4xODYyMjhdIEFDUEk6IFBDSSBJbnRlcnJ1cHQgTGluayBbTE5LVV0gKElSUXMg
KjEwIDExKQ0NClsgICAgMC4xODcwODldIEFDUEk6IFBDSSBJbnRlcnJ1cHQgTGluayBbTE5LV10g
KElSUXMgMTAgMTEpICowLCBkaXNhYmxlZC4NDQpbICAgIDAuMTg4MDk3XSBBQ1BJOiBQQ0kgSW50
ZXJydXB0IExpbmsgW0xOS1NdIChJUlFzIDUgKjcgMTEpDQ0KWyAgICAwLjE4OTA4N10gQUNQSTog
UENJIEludGVycnVwdCBMaW5rIFtMTjAwXSAoSVJRcyAzIDQgNSA3IDExIDEyIDE0IDE1KSAqMCwg
ZGlzYWJsZWQuDQ0KWyAgICAwLjE5MDA3Nl0gQUNQSTogUENJIEludGVycnVwdCBMaW5rIFtMTjAx
XSAoSVJRcyAzIDQgNSA3IDExIDEyIDE0IDE1KSAqMCwgZGlzYWJsZWQuDQ0KWyAgICAwLjE5MTA3
NV0gQUNQSTogUENJIEludGVycnVwdCBMaW5rIFtMTjAyXSAoSVJRcyAzIDQgNSA3IDExIDEyIDE0
IDE1KSAqMCwgZGlzYWJsZWQuDQ0KWyAgICAwLjE5MjA3NF0gQUNQSTogUENJIEludGVycnVwdCBM
aW5rIFtMTjAzXSAoSVJRcyAzIDQgNSA3IDExIDEyIDE0IDE1KSAqMCwgZGlzYWJsZWQuDQ0KWyAg
ICAwLjE5MzA4MV0gQUNQSTogUENJIEludGVycnVwdCBMaW5rIFtMTjA0XSAoSVJRcyAzIDQgNSA3
IDExIDEyIDE0IDE1KSAqMCwgZGlzYWJsZWQuDQ0KWyAgICAwLjE5NDA4M10gQUNQSTogUENJIElu
dGVycnVwdCBMaW5rIFtMTjA1XSAoSVJRcyAzIDQgNSA3IDExIDEyIDE0IDE1KSAqMCwgZGlzYWJs
ZWQuDQ0KWyAgICAwLjE5NTA2NV0gQUNQSTogUENJIEludGVycnVwdCBMaW5rIFtMTjA2XSAoSVJR
cyAzIDQgNSA3IDExIDEyIDE0IDE1KSAqMCwgZGlzYWJsZWQuDQ0KWyAgICAwLjE5NjA5NF0gQUNQ
STogUENJIEludGVycnVwdCBMaW5rIFtMTjA3XSAoSVJRcyAzIDQgNSA3IDExIDEyIDE0IDE1KSAq
MCwgZGlzYWJsZWQuDQ0KWyAgICAwLjE5NzA3NV0gQUNQSTogUENJIEludGVycnVwdCBMaW5rIFtM
TjA4XSAoSVJRcyAzIDQgNSA3IDExIDEyIDE0IDE1KSAqMCwgZGlzYWJsZWQuDQ0KWyAgICAwLjE5
ODA2Ml0gQUNQSTogUENJIEludGVycnVwdCBMaW5rIFtMTjA5XSAoSVJRcyAzIDQgNSA3IDExIDEy
IDE0IDE1KSAqMCwgZGlzYWJsZWQuDQ0KWyAgICAwLjE5OTA4Ml0gQUNQSTogUENJIEludGVycnVw
dCBMaW5rIFtMTjBBXSAoSVJRcyAzIDQgNSA3IDExIDEyIDE0IDE1KSAqMCwgZGlzYWJsZWQuDQ0K
WyAgICAwLjIwMDA3OF0gQUNQSTogUENJIEludGVycnVwdCBMaW5rIFtMTjBCXSAoSVJRcyAzIDQg
NSA3IDExIDEyIDE0IDE1KSAqMCwgZGlzYWJsZWQuDQ0KWyAgICAwLjIwMTA3OF0gQUNQSTogUENJ
IEludGVycnVwdCBMaW5rIFtMTjBDXSAoSVJRcyAzIDQgNSA3IDExIDEyIDE0IDE1KSAqMCwgZGlz
YWJsZWQuDQ0KWyAgICAwLjIwMjA3N10gQUNQSTogUENJIEludGVycnVwdCBMaW5rIFtMTjBEXSAo
SVJRcyAzIDQgNSA3IDExIDEyIDE0IDE1KSAqMCwgZGlzYWJsZWQuDQ0KWyAgICAwLjIwMzA4NV0g
QUNQSTogUENJIEludGVycnVwdCBMaW5rIFtMTjBFXSAoSVJRcyAzIDQgNSA3IDExIDEyIDE0IDE1
KSAqMCwgZGlzYWJsZWQuDQ0KWyAgICAwLjIwNDA4N10gQUNQSTogUENJIEludGVycnVwdCBMaW5r
IFtMTjBGXSAoSVJRcyAzIDQgNSA3IDExIDEyIDE0IDE1KSAqMCwgZGlzYWJsZWQuDQ0KWyAgICAw
LjIwNTE0M10gQUNQSTogUENJIEludGVycnVwdCBMaW5rIFtMTjEwXSAoSVJRcyAzIDQgNSA3ICox
MSAxMiAxNCAxNSkNDQpbICAgIDAuMjA2MDc1XSBBQ1BJOiBQQ0kgSW50ZXJydXB0IExpbmsgW0xO
MTFdIChJUlFzIDMgNCA1IDcgMTEgMTIgMTQgMTUpICowLCBkaXNhYmxlZC4NDQpbICAgIDAuMjA3
MDg0XSBBQ1BJOiBQQ0kgSW50ZXJydXB0IExpbmsgW0xOMTJdIChJUlFzIDMgNCA1IDcgMTEgMTIg
MTQgMTUpICowLCBkaXNhYmxlZC4NDQpbICAgIDAuMjA4MTQ3XSBBQ1BJOiBQQ0kgSW50ZXJydXB0
IExpbmsgW0xOMTNdIChJUlFzIDMgNCAqNSA3IDExIDEyIDE0IDE1KQ0NClsgICAgMC4yMDkxNDRd
IEFDUEk6IFBDSSBJbnRlcnJ1cHQgTGluayBbTE4xNF0gKElSUXMgMyA0IDUgNyAqMTEgMTIgMTQg
MTUpDQ0KWyAgICAwLjIxMDA3MF0gQUNQSTogUENJIEludGVycnVwdCBMaW5rIFtMTjE1XSAoSVJR
cyAzIDQgNSA3IDExIDEyIDE0IDE1KSAqMCwgZGlzYWJsZWQuDQ0KWyAgICAwLjIxMTA4MF0gQUNQ
STogUENJIEludGVycnVwdCBMaW5rIFtMTjE2XSAoSVJRcyAzIDQgNSA3IDExIDEyIDE0IDE1KSAq
MCwgZGlzYWJsZWQuDQ0KWyAgICAwLjIxMjA4N10gQUNQSTogUENJIEludGVycnVwdCBMaW5rIFtM
TjE3XSAoSVJRcyAzIDQgNSA3IDExIDEyIDE0IDE1KSAqMCwgZGlzYWJsZWQuDQ0KWyAgICAwLjIx
MzA4OF0gQUNQSTogUENJIEludGVycnVwdCBMaW5rIFtMTjE4XSAoSVJRcyAzIDQgNSA3IDExIDEy
IDE0IDE1KSAqMCwgZGlzYWJsZWQuDQ0KWyAgICAwLjIxNDA5Ml0gQUNQSTogUENJIEludGVycnVw
dCBMaW5rIFtMTjE5XSAoSVJRcyAzIDQgNSA3IDExIDEyIDE0IDE1KSAqMCwgZGlzYWJsZWQuDQ0K
WyAgICAwLjIxNTAyN10gQUNQSTogUENJIEludGVycnVwdCBMaW5rIFtMTjFBXSAoSVJRcyAzIDQg
NSA3IDExIDEyIDE0IDE1KSAqMCwgZGlzYWJsZWQuDQ0KWyAgICAwLjIxNjE0NF0gQUNQSTogUENJ
IEludGVycnVwdCBMaW5rIFtMTjFCXSAoSVJRcyAzIDQgKjUgNyAxMSAxMiAxNCAxNSkNDQpbICAg
IDAuMjE3MDc3XSBBQ1BJOiBQQ0kgSW50ZXJydXB0IExpbmsgW0xOMUNdIChJUlFzIDMgNCA1IDcg
MTEgMTIgMTQgMTUpICowLCBkaXNhYmxlZC4NDQpbICAgIDAuMjE4MDYwXSBBQ1BJOiBQQ0kgSW50
ZXJydXB0IExpbmsgW0xOMURdIChJUlFzIDMgNCA1IDcgMTEgMTIgMTQgMTUpICowLCBkaXNhYmxl
ZC4NDQpbICAgIDAuMjE5MDg1XSBBQ1BJOiBQQ0kgSW50ZXJydXB0IExpbmsgW0xOMUVdIChJUlFz
IDMgNCA1IDcgMTEgMTIgMTQgMTUpICowLCBkaXNhYmxlZC4NDQpbICAgIDAuMjIwMDg1XSBBQ1BJ
OiBQQ0kgSW50ZXJydXB0IExpbmsgW0xOMUZdIChJUlFzIDMgNCA1IDcgMTEgMTIgMTQgMTUpICow
LCBkaXNhYmxlZC4NDQpbICAgIDAuMjIxMDI5XSB4ZW5fYmFsbG9vbjogSW5pdGlhbGlzaW5nIGJh
bGxvb24gZHJpdmVyIHdpdGggcGFnZSBvcmRlciAwLg0NClsgICAgMC4yMjIwMjhdIGxhc3RfcGZu
ID0gMHgxNTRhODEgbWF4X2FyY2hfcGZuID0gMHgxMDAwMDAwDQ0KWyAgICAwLjIyNTU5MV0gdmdh
YXJiOiBkZXZpY2UgYWRkZWQ6IFBDSTowMDAwOjAwOjA0LjAsZGVjb2Rlcz1pbyttZW0sb3ducz1p
byttZW0sbG9ja3M9bm9uZQ0NClsgICAgMC4yMjU5NjVdIHZnYWFyYjogbG9hZGVkDQ0KWyAgICAw
LjIyNjE3OV0gU0NTSSBzdWJzeXN0ZW0gaW5pdGlhbGl6ZWQNDQpbICAgIDAuMjI4MDYwXSB1c2Jj
b3JlOiByZWdpc3RlcmVkIG5ldyBpbnRlcmZhY2UgZHJpdmVyIHVzYmZzDQ0KWyAgICAwLjIyODk5
N10gdXNiY29yZTogcmVnaXN0ZXJlZCBuZXcgaW50ZXJmYWNlIGRyaXZlciBodWINDQpbICAgIDAu
MjMwMDM0XSB1c2Jjb3JlOiByZWdpc3RlcmVkIG5ldyBkZXZpY2UgZHJpdmVyIHVzYg0NClsgICAg
MC4yMzEwMzldIFBDSTogVXNpbmcgQUNQSSBmb3IgSVJRIHJvdXRpbmcNDQpbICAgIDAuMjMzMDg1
XSBjZmc4MDIxMTogVXNpbmcgc3RhdGljIHJlZ3VsYXRvcnkgZG9tYWluIGluZm8NDQpbICAgIDAu
MjMzOTY0XSBjZmc4MDIxMTogUmVndWxhdG9yeSBkb21haW46IFVTDQ0KWyAgICAwLjIzMzk2NF0g
CShzdGFydF9mcmVxIC0gZW5kX2ZyZXEgQCBiYW5kd2lkdGgpLCAobWF4X2FudGVubmFfZ2Fpbiwg
bWF4X2VpcnApDQ0KWyAgICAwLjIzMzk2NF0gCSgyNDAyMDAwIEtIeiAtIDI0NzIwMDAgS0h6IEAg
NDAwMDAgS0h6KSwgKDYwMCBtQmksIDI3MDAgbUJtKQ0NClsgICAgMC4yMzM5NjRdIAkoNTE3MDAw
MCBLSHogLSA1MTkwMDAwIEtIeiBAIDQwMDAwIEtIeiksICg2MDAgbUJpLCAyMzAwIG1CbSkNDQpb
ICAgIDAuMjMzOTY0XSAJKDUxOTAwMDAgS0h6IC0gNTIxMDAwMCBLSHogQCA0MDAwMCBLSHopLCAo
NjAwIG1CaSwgMjMwMCBtQm0pDQ0KWyAgICAwLjIzMzk2NF0gCSg1MjEwMDAwIEtIeiAtIDUyMzAw
MDAgS0h6IEAgNDAwMDAgS0h6KSwgKDYwMCBtQmksIDIzMDAgbUJtKQ0NClsgICAgMC4yMzM5NjRd
IAkoNTIzMDAwMCBLSHogLSA1MzMwMDAwIEtIeiBAIDQwMDAwIEtIeiksICg2MDAgbUJpLCAyMzAw
IG1CbSkNDQpbICAgIDAuMjMzOTY0XSAJKDU3MzUwMDAgS0h6IC0gNTgzNTAwMCBLSHogQCA0MDAw
MCBLSHopLCAoNjAwIG1CaSwgMzAwMCBtQm0pDQ0KWyAgICAwLjIzMzk3Nl0gY2ZnODAyMTE6IENh
bGxpbmcgQ1JEQSBmb3IgY291bnRyeTogVVMNDQpbICAgIDAuMjM0OTk3XSBOZXRMYWJlbDogSW5p
dGlhbGl6aW5nDQ0KWyAgICAwLjIzNTk2NF0gTmV0TGFiZWw6ICBkb21haW4gaGFzaCBzaXplID0g
MTI4DQ0KWyAgICAwLjIzNTk2NF0gTmV0TGFiZWw6ICBwcm90b2NvbHMgPSBVTkxBQkVMRUQgQ0lQ
U092NA0NClsgICAgMC4yMzU5NjRdIE5ldExhYmVsOiAgdW5sYWJlbGVkIHRyYWZmaWMgYWxsb3dl
ZCBieSBkZWZhdWx0DQ0KWyAgICAwLjIzNjk4OF0gU3dpdGNoaW5nIHRvIGNsb2Nrc291cmNlIHhl
bg0NClsgICAgMC4yNDgxNzNdIHBucDogUG5QIEFDUEkgaW5pdA0NClsgICAgMC4yNDg0OTVdIEFD
UEk6IGJ1cyB0eXBlIHBucCByZWdpc3RlcmVkDQ0KWyAgICAwLjI1OTY0N10geGVuX2FsbG9jYXRl
X3BpcnE6IHJldHVybmluZyBpcnEgOCBmb3IgZ3NpIDgNDQpbICAgIDAuMjY4NjMxXSB4ZW5fYWxs
b2NhdGVfcGlycTogcmV0dXJuaW5nIGlycSAxIGZvciBnc2kgMQ0NClsgICAgMC4yNzU2ODRdIHhl
bl9hbGxvY2F0ZV9waXJxOiByZXR1cm5pbmcgaXJxIDEzIGZvciBnc2kgMTMNDQpbICAgIDAuMjgy
MjUwXSB4ZW5fYWxsb2NhdGVfcGlycTogcmV0dXJuaW5nIGlycSAxMiBmb3IgZ3NpIDEyDQ0KWyAg
ICAwLjI5MDA0MF0geGVuX2FsbG9jYXRlX3BpcnE6IHJldHVybmluZyBpcnEgNCBmb3IgZ3NpIDQN
DQpbICAgIDAuMjk1MzYyXSBBbHJlYWR5IHNldHVwIHRoZSBHU0kgOjQNDQpbICAgIDAuMjk5OTQx
XSB4ZW5fYWxsb2NhdGVfcGlycTogcmV0dXJuaW5nIGlycSAzIGZvciBnc2kgMw0NClsgICAgMC4z
MDYwODddIHhlbl9hbGxvY2F0ZV9waXJxOiByZXR1cm5pbmcgaXJxIDYgZm9yIGdzaSA2DQ0KWyAg
ICAwLjMxMjY0M10gcG5wOiBQblAgQUNQSTogZm91bmQgMTYgZGV2aWNlcw0NClsgICAgMC4zMTM0
NThdIEFDUEk6IEFDUEkgYnVzIHR5cGUgcG5wIHVucmVnaXN0ZXJlZA0NClsgICAgMC4zMTM0NThd
IHN5c3RlbSAwMDowMDogaW9tZW0gcmFuZ2UgMHhmZWQwODAwMC0weGZlZDA4MDA3IGhhcyBiZWVu
IHJlc2VydmVkDQ0KWyAgICAwLjMxMzQ1OF0gc3lzdGVtIDAwOjAxOiBpb21lbSByYW5nZSAweGUw
MDAwMDAwLTB4ZWZmZmZmZmYgaGFzIGJlZW4gcmVzZXJ2ZWQNDQpbICAgIDAuMzEzNDU4XSBzeXN0
ZW0gMDA6MGI6IGlvcG9ydCByYW5nZSAweDQwYi0weDQwYiBoYXMgYmVlbiByZXNlcnZlZA0NClsg
ICAgMC4zMTM0NThdIHN5c3RlbSAwMDowYjogaW9wb3J0IHJhbmdlIDB4NGQwLTB4NGQxIGhhcyBi
ZWVuIHJlc2VydmVkDQ0KWyAgICAwLjMxMzQ1OF0gc3lzdGVtIDAwOjBiOiBpb3BvcnQgcmFuZ2Ug
MHg0ZDYtMHg0ZDYgaGFzIGJlZW4gcmVzZXJ2ZWQNDQpbICAgIDAuMzEzNDU4XSBzeXN0ZW0gMDA6
MGI6IGlvcG9ydCByYW5nZSAweDUwMC0weDU2MCBoYXMgYmVlbiByZXNlcnZlZA0NClsgICAgMC4z
MTM0NThdIHN5c3RlbSAwMDowYjogaW9wb3J0IHJhbmdlIDB4NTU4LTB4NTViIGhhcyBiZWVuIHJl
c2VydmVkDQ0KWyAgICAwLjMxMzQ1OF0gc3lzdGVtIDAwOjBiOiBpb3BvcnQgcmFuZ2UgMHg1ODAt
MHg1OGYgaGFzIGJlZW4gcmVzZXJ2ZWQNDQpbICAgIDAuMzEzNDU4XSBzeXN0ZW0gMDA6MGI6IGlv
cG9ydCByYW5nZSAweDU5MC0weDU5MyBoYXMgYmVlbiByZXNlcnZlZA0NClsgICAgMC4zMTM0NThd
IHN5c3RlbSAwMDowYjogaW9wb3J0IHJhbmdlIDB4NjAwLTB4NjFmIGhhcyBiZWVuIHJlc2VydmVk
DQ0KWyAgICAwLjMxMzQ1OF0gc3lzdGVtIDAwOjBiOiBpb3BvcnQgcmFuZ2UgMHg2MjAtMHg2MjMg
aGFzIGJlZW4gcmVzZXJ2ZWQNDQpbICAgIDAuMzEzNDU4XSBzeXN0ZW0gMDA6MGI6IGlvcG9ydCBy
YW5nZSAweDcwMC0weDcwMyBoYXMgYmVlbiByZXNlcnZlZA0NClsgICAgMC4zMTM0NThdIHN5c3Rl
bSAwMDowYjogaW9wb3J0IHJhbmdlIDB4YzAwLTB4YzAxIGhhcyBiZWVuIHJlc2VydmVkDQ0KWyAg
ICAwLjMxMzQ1OF0gc3lzdGVtIDAwOjBiOiBpb3BvcnQgcmFuZ2UgMHhjMDYtMHhjMDggaGFzIGJl
ZW4gcmVzZXJ2ZWQNDQpbICAgIDAuMzEzNDU4XSBzeXN0ZW0gMDA6MGI6IGlvcG9ydCByYW5nZSAw
eGMxNC0weGMxNCBoYXMgYmVlbiByZXNlcnZlZA0NClsgICAgMC4zMTM0NThdIHN5c3RlbSAwMDow
YjogaW9wb3J0IHJhbmdlIDB4YzQ5LTB4YzRhIGhhcyBiZWVuIHJlc2VydmVkDQ0KWyAgICAwLjMx
MzQ1OF0gc3lzdGVtIDAwOjBiOiBpb3BvcnQgcmFuZ2UgMHhjNTAtMHhjNTMgaGFzIGJlZW4gcmVz
ZXJ2ZWQNDQpbICAgIDAuMzEzNDU4XSBzeXN0ZW0gMDA6MGI6IGlvcG9ydCByYW5nZSAweGM2Yy0w
eGM2YyBoYXMgYmVlbiByZXNlcnZlZA0NClsgICAgMC4zMTM0NThdIHN5c3RlbSAwMDowYjogaW9w
b3J0IHJhbmdlIDB4YzZmLTB4YzZmIGhhcyBiZWVuIHJlc2VydmVkDQ0KWyAgICAwLjMxMzQ1OF0g
c3lzdGVtIDAwOjBiOiBpb3BvcnQgcmFuZ2UgMHhjZDYtMHhjZDcgaGFzIGJlZW4gcmVzZXJ2ZWQN
DQpbICAgIDAuMzEzNDU4XSBzeXN0ZW0gMDA6MGI6IGlvcG9ydCByYW5nZSAweGNmOS0weGNmOSBj
b3VsZCBub3QgYmUgcmVzZXJ2ZWQNDQpbICAgIDAuMzEzNDU4XSBzeXN0ZW0gMDA6MGI6IGlvcG9y
dCByYW5nZSAweGY1MC0weGY1OCBoYXMgYmVlbiByZXNlcnZlZA0NClsgICAgMC40NzIxOTFdIFBN
LVRpbWVyIGZhaWxlZCBjb25zaXN0ZW5jeSBjaGVjayAgKDB4MHhmZmZmZmYpIC0gYWJvcnRpbmcu
DQ0KWyAgICAwLjQ3MjQ5OV0gcGNpIDAwMDA6MDE6MGQuMDogUENJIGJyaWRnZSwgc2Vjb25kYXJ5
IGJ1cyAwMDAwOjAyDQ0KWyAgICAwLjQ3MjQ5OV0gcGNpIDAwMDA6MDE6MGQuMDogICBJTyB3aW5k
b3c6IGRpc2FibGVkDQ0KWyAgICAwLjQ3MjQ5OV0gcGNpIDAwMDA6MDE6MGQuMDogICBNRU0gd2lu
ZG93OiBkaXNhYmxlZA0NClsgICAgMC40NzI0OTldIHBjaSAwMDAwOjAxOjBkLjA6ICAgUFJFRkVU
Q0ggd2luZG93OiBkaXNhYmxlZA0NClsgICAgMC40NzI0OTldIHBjaSAwMDAwOjAwOjAxLjA6IFBD
SSBicmlkZ2UsIHNlY29uZGFyeSBidXMgMDAwMDowMQ0NClsgICAgMC40NzI0OTldIHBjaSAwMDAw
OjAwOjAxLjA6ICAgSU8gd2luZG93OiAweDYwMDAtMHg2ZmZmDQ0KWyAgICAwLjQ3MjQ5OV0gcGNp
IDAwMDA6MDA6MDEuMDogICBNRU0gd2luZG93OiAweGQ4MzAwMDAwLTB4ZDgzZmZmZmYNDQpbICAg
IDAuNDcyNDk5XSBwY2kgMDAwMDowMDowMS4wOiAgIFBSRUZFVENIIHdpbmRvdzogMHhkODAwMDAw
MC0weGQ4MGZmZmZmDQ0KWyAgICAwLjQ3MjQ5OV0gcGNpIDAwMDA6MDA6MDYuMDogUENJIGJyaWRn
ZSwgc2Vjb25kYXJ5IGJ1cyAwMDAwOjAzDQ0KWyAgICAwLjQ3MjQ5OV0gcGNpIDAwMDA6MDA6MDYu
MDogICBJTyB3aW5kb3c6IGRpc2FibGVkDQ0KWyAgICAwLjQ3MjQ5OV0gcGNpIDAwMDA6MDA6MDYu
MDogICBNRU0gd2luZG93OiBkaXNhYmxlZA0NClsgICAgMC40NzI0OTldIHBjaSAwMDAwOjAwOjA2
LjA6ICAgUFJFRkVUQ0ggd2luZG93OiBkaXNhYmxlZA0NClsgICAgMC40NzI0OTldIHBjaSAwMDAw
OjAwOjA3LjA6IFBDSSBicmlkZ2UsIHNlY29uZGFyeSBidXMgMDAwMDowNA0NClsgICAgMC40NzI0
OTldIHBjaSAwMDAwOjAwOjA3LjA6ICAgSU8gd2luZG93OiBkaXNhYmxlZA0NClsgICAgMC40NzI0
OTldIHBjaSAwMDAwOjAwOjA3LjA6ICAgTUVNIHdpbmRvdzogZGlzYWJsZWQNDQpbICAgIDAuNDcy
NDk5XSBwY2kgMDAwMDowMDowNy4wOiAgIFBSRUZFVENIIHdpbmRvdzogZGlzYWJsZWQNDQpbICAg
IDAuNDcyNDk5XSBwY2kgMDAwMDowMDowOC4wOiBQQ0kgYnJpZGdlLCBzZWNvbmRhcnkgYnVzIDAw
MDA6MDUNDQpbICAgIDAuNDcyNDk5XSBwY2kgMDAwMDowMDowOC4wOiAgIElPIHdpbmRvdzogZGlz
YWJsZWQNDQpbICAgIDAuNDcyNDk5XSBwY2kgMDAwMDowMDowOC4wOiAgIE1FTSB3aW5kb3c6IGRp
c2FibGVkDQ0KWyAgICAwLjQ3MjQ5OV0gcGNpIDAwMDA6MDA6MDguMDogICBQUkVGRVRDSCB3aW5k
b3c6IGRpc2FibGVkDQ0KWyAgICAwLjQ3MjQ5OV0gcGNpIDAwMDA6MDA6MDkuMDogUENJIGJyaWRn
ZSwgc2Vjb25kYXJ5IGJ1cyAwMDAwOjA2DQ0KWyAgICAwLjQ3MjQ5OV0gcGNpIDAwMDA6MDA6MDku
MDogICBJTyB3aW5kb3c6IDB4NzAwMC0weDdmZmYNDQpbICAgIDAuNDcyNDk5XSBwY2kgMDAwMDow
MDowOS4wOiAgIE1FTSB3aW5kb3c6IDB4ZDgyMDAwMDAtMHhkODJmZmZmZg0NClsgICAgMC40NzI0
OTldIHBjaSAwMDAwOjAwOjA5LjA6ICAgUFJFRkVUQ0ggd2luZG93OiAweGQ4MTAwMDAwLTB4ZDgx
ZmZmZmYNDQpbICAgIDAuNDcyNDk5XSBwY2kgMDAwMDowNzowMC4wOiBQQ0kgYnJpZGdlLCBzZWNv
bmRhcnkgYnVzIDAwMDA6MDgNDQpbICAgIDAuNDcyNDk5XSBwY2kgMDAwMDowNzowMC4wOiAgIElP
IHdpbmRvdzogZGlzYWJsZWQNDQpbICAgIDAuNDcyNDk5XSBwY2kgMDAwMDowNzowMC4wOiAgIE1F
TSB3aW5kb3c6IDB4ZDg0MDAwMDAtMHhkODRmZmZmZg0NClsgICAgMC40NzI0OTldIHBjaSAwMDAw
OjA3OjAwLjA6ICAgUFJFRkVUQ0ggd2luZG93OiBkaXNhYmxlZA0NClsgICAgMC40NzI0OTldIHBj
aSAwMDAwOjAwOjBhLjA6IFBDSSBicmlkZ2UsIHNlY29uZGFyeSBidXMgMDAwMDowNw0NClsgICAg
MC40NzI0OTldIHBjaSAwMDAwOjAwOjBhLjA6ICAgSU8gd2luZG93OiBkaXNhYmxlZA0NClsgICAg
MC40NzI0OTldIHBjaSAwMDAwOjAwOjBhLjA6ICAgTUVNIHdpbmRvdzogMHhkODQwMDAwMC0weGQ4
NGZmZmZmDQ0KWyAgICAwLjQ3MjQ5OV0gcGNpIDAwMDA6MDA6MGEuMDogICBQUkVGRVRDSCB3aW5k
b3c6IGRpc2FibGVkDQ0KWyAgICAwLjY1ODE2NV0gcGNpIDAwMDA6MDA6MDYuMDogUENJIElOVCBB
IC0+IEdTSSAzMiAobGV2ZWwsIGxvdykgLT4gSVJRIDMyDQ0KWyAgICAwLjY1OTEyMF0geGVuX2Fs
bG9jYXRlX3BpcnE6IHJldHVybmluZyBpcnEgMzIgZm9yIGdzaSAzMg0NClsgICAgMC42NzA0Mjdd
IEFscmVhZHkgc2V0dXAgdGhlIEdTSSA6MzINDQpbICAgIDAuNjcxNDE3XSBwY2kgMDAwMDowMDow
Ny4wOiBQQ0kgSU5UIEEgLT4gR1NJIDMyIChsZXZlbCwgbG93KSAtPiBJUlEgMzINDQpbICAgIDAu
NjcxNDE3XSB4ZW5fYWxsb2NhdGVfcGlycTogcmV0dXJuaW5nIGlycSAzMiBmb3IgZ3NpIDMyDQ0K
WyAgICAwLjY4NjQ3Ml0gQWxyZWFkeSBzZXR1cCB0aGUgR1NJIDozMg0NClsgICAgMC42ODc0NjJd
IHBjaSAwMDAwOjAwOjA4LjA6IFBDSSBJTlQgQSAtPiBHU0kgMzIgKGxldmVsLCBsb3cpIC0+IElS
USAzMg0NClsgICAgMC42ODc0NjJdIHhlbl9hbGxvY2F0ZV9waXJxOiByZXR1cm5pbmcgaXJxIDMy
IGZvciBnc2kgMzINDQpbICAgIDAuNzAyNTA1XSBBbHJlYWR5IHNldHVwIHRoZSBHU0kgOjMyDQ0K
WyAgICAwLjcwMzQ5NV0gcGNpIDAwMDA6MDA6MDkuMDogUENJIElOVCBBIC0+IEdTSSAzMiAobGV2
ZWwsIGxvdykgLT4gSVJRIDMyDQ0KWyAgICAwLjcwMzQ5NV0geGVuX2FsbG9jYXRlX3BpcnE6IHJl
dHVybmluZyBpcnEgMzIgZm9yIGdzaSAzMg0NClsgICAgMC43MTg1MzNdIEFscmVhZHkgc2V0dXAg
dGhlIEdTSSA6MzINDQpbICAgIDAuNzE5NTIyXSBwY2kgMDAwMDowMDowYS4wOiBQQ0kgSU5UIEEg
LT4gR1NJIDMyIChsZXZlbCwgbG93KSAtPiBJUlEgMzINDQpbICAgIDAuNzI5MzEzXSBORVQ6IFJl
Z2lzdGVyZWQgcHJvdG9jb2wgZmFtaWx5IDINDQpbICAgIDAuNzMwMTQzXSBJUCByb3V0ZSBjYWNo
ZSBoYXNoIHRhYmxlIGVudHJpZXM6IDMyNzY4IChvcmRlcjogNSwgMTMxMDcyIGJ5dGVzKQ0NClsg
ICAgMC43NDI2NjJdIFRDUCBlc3RhYmxpc2hlZCBoYXNoIHRhYmxlIGVudHJpZXM6IDEzMTA3MiAo
b3JkZXI6IDgsIDEwNDg1NzYgYnl0ZXMpDQ0KWyAgICAwLjc0MzQ5N10gVENQIGJpbmQgaGFzaCB0
YWJsZSBlbnRyaWVzOiA2NTUzNiAob3JkZXI6IDcsIDUyNDI4OCBieXRlcykNDQpbICAgIDAuNzQz
NDk3XSBUQ1A6IEhhc2ggdGFibGVzIGNvbmZpZ3VyZWQgKGVzdGFibGlzaGVkIDEzMTA3MiBiaW5k
IDY1NTM2KQ0NClsgICAgMC43NDM0OTddIFRDUCByZW5vIHJlZ2lzdGVyZWQNDQpbICAgIDAuNzY3
OTkwXSBORVQ6IFJlZ2lzdGVyZWQgcHJvdG9jb2wgZmFtaWx5IDENDQpbICAgIDAuNzczNjQ2XSBS
UEM6IFJlZ2lzdGVyZWQgdWRwIHRyYW5zcG9ydCBtb2R1bGUuDQ0KWyAgICAwLjc3NDQ5N10gUlBD
OiBSZWdpc3RlcmVkIHRjcCB0cmFuc3BvcnQgbW9kdWxlLg0NClsgICAgMC43NzQ0OTddIFJQQzog
UmVnaXN0ZXJlZCB0Y3AgTkZTdjQuMSBiYWNrY2hhbm5lbCB0cmFuc3BvcnQgbW9kdWxlLg0NClsg
ICAgMC43ODk4NDZdIHBjaSAwMDAwOjAwOjAyLjA6IGRpc2FibGVkIGJvb3QgaW50ZXJydXB0cyBv
biBkZXZpY2UgWzExNjY6MDIwNV0NDQpbICAgIDAuNzk3NzY4XSBUcnlpbmcgdG8gdW5wYWNrIHJv
b3RmcyBpbWFnZSBhcyBpbml0cmFtZnMuLi4NDQpbICAgIDEuMDU4MjI1XSBGcmVlaW5nIGluaXRy
ZCBtZW1vcnk6IDYxMTEwayBmcmVlZA0NClsgICAgMS4wOTQwNjddIFBDSS1ETUE6IFVzaW5nIHNv
ZnR3YXJlIGJvdW5jZSBidWZmZXJpbmcgZm9yIElPIChTV0lPVExCKQ0NClsgICAgMS4wOTQwNjdd
IERNQTogUGxhY2luZyA2NE1CIHNvZnR3YXJlIElPIFRMQiBiZXR3ZWVuIGM4NTNiMDAwIC0gY2M1
M2IwMDANDQpbICAgIDEuMDk0MDY3XSBETUE6IHNvZnR3YXJlIElPIFRMQiBhdCBwaHlzIDB4ODUz
YjAwMCAtIDB4YzUzYjAwMA0NClsgICAgMS4xMTY3MDJdIGt2bTogbm8gaGFyZHdhcmUgc3VwcG9y
dA0NClsgICAgMS4xMjAzNzVdIGhhc19zdm06IHN2bSBub3QgYXZhaWxhYmxlDQ0KWyAgICAxLjEy
MzE5OV0ga3ZtOiBubyBoYXJkd2FyZSBzdXBwb3J0DQ0KWyAgICAxLjE0MjEyM10gTWljcm9jb2Rl
IFVwZGF0ZSBEcml2ZXI6IHYyLjAwIDx0aWdyYW5AYWl2YXppYW4uZnNuZXQuY28udWs+LCBQZXRl
ciBPcnViYQ0NClsgICAgMS4xNDMxMDldIFNjYW5uaW5nIGZvciBsb3cgbWVtb3J5IGNvcnJ1cHRp
b24gZXZlcnkgNjAgc2Vjb25kcw0NClsgICAgMS4xNTcwMTVdIGF1ZGl0OiBpbml0aWFsaXppbmcg
bmV0bGluayBzb2NrZXQgKGRpc2FibGVkKQ0NClsgICAgMS4xNjI0NzVdIHR5cGU9MjAwMCBhdWRp
dCgxMzQ1MjAxMDYyLjMxMDoxKTogaW5pdGlhbGl6ZWQNDQpbICAgIDEuMTc5ODk5XSBoaWdobWVt
IGJvdW5jZSBwb29sIHNpemU6IDY0IHBhZ2VzDQ0KWyAgICAxLjE4MDcwOV0gSHVnZVRMQiByZWdp
c3RlcmVkIDIgTUIgcGFnZSBzaXplLCBwcmUtYWxsb2NhdGVkIDAgcGFnZXMNDQpbICAgIDEuMjAy
MDU3XSBWRlM6IERpc2sgcXVvdGFzIGRxdW90XzYuNS4yDQ0KWyAgICAxLjIwNjUyMV0gRHF1b3Qt
Y2FjaGUgaGFzaCB0YWJsZSBlbnRyaWVzOiAxMDI0IChvcmRlciAwLCA0MDk2IGJ5dGVzKQ0NClsg
ICAgMS4yMTY3MDBdIG1zZ21uaSBoYXMgYmVlbiBzZXQgdG8gMjY5MQ0NClsgICAgMS4yMjM1MTld
IGFsZzogTm8gdGVzdCBmb3Igc3Rkcm5nIChrcm5nKQ0NClsgICAgMS4yMjgwNjVdIEJsb2NrIGxh
eWVyIFNDU0kgZ2VuZXJpYyAoYnNnKSBkcml2ZXIgdmVyc2lvbiAwLjQgbG9hZGVkIChtYWpvciAy
NTIpDQ0KWyAgICAxLjIyOTA0MV0gaW8gc2NoZWR1bGVyIG5vb3AgcmVnaXN0ZXJlZA0NClsgICAg
MS4yMjkwNDFdIGlvIHNjaGVkdWxlciBhbnRpY2lwYXRvcnkgcmVnaXN0ZXJlZA0NClsgICAgMS4y
MjkwNDFdIGlvIHNjaGVkdWxlciBkZWFkbGluZSByZWdpc3RlcmVkDQ0KWyAgICAxLjI0ODcwNl0g
aW8gc2NoZWR1bGVyIGNmcSByZWdpc3RlcmVkIChkZWZhdWx0KQ0NClsgICAgMS4yNTc1NzJdIHBj
aV9ob3RwbHVnOiBQQ0kgSG90IFBsdWcgUENJIENvcmUgdmVyc2lvbjogMC41DQ0KWyAgICAxLjI2
NDU3MF0gaW5wdXQ6IFBvd2VyIEJ1dHRvbiBhcyAvZGV2aWNlcy9MTlhTWVNUTTowMC9MTlhTWUJV
UzowMC9QTlAwQTA4OjAwL1BOUDBDMEM6MDAvaW5wdXQvaW5wdXQwDQ0KWyAgICAxLjI2NTU0OV0g
QUNQSTogUG93ZXIgQnV0dG9uIFtQV1JCXQ0NClsgICAgMS4yNzc5NTNdIGlucHV0OiBQb3dlciBC
dXR0b24gYXMgL2RldmljZXMvTE5YU1lTVE06MDAvTE5YUFdSQk46MDAvaW5wdXQvaW5wdXQxDQ0K
WyAgICAxLjI3ODkzM10gQUNQSTogUG93ZXIgQnV0dG9uIFtQV1JGXQ0NClsgICAgMS4yODk0MjFd
IGlucHV0OiBTbGVlcCBCdXR0b24gYXMgL2RldmljZXMvTE5YU1lTVE06MDAvTE5YU0xQQk46MDAv
aW5wdXQvaW5wdXQyDQ0KWyAgICAxLjI5MDQwMl0gQUNQSTogU2xlZXAgQnV0dG9uIFtTTFBGXQ0N
ClsgICAgMS4zMTE3NjJdIEV2ZW50LWNoYW5uZWwgZGV2aWNlIGluc3RhbGxlZC4NDQpbICAgIDEu
MzIyNjAxXSBibGt0YXBfZGV2aWNlX2luaXQ6IGJsa3RhcCBkZXZpY2UgbWFqb3IgMjUzDQ0KWyAg
ICAxLjMyMzU3MV0gYmxrdGFwX3JpbmdfaW5pdDogYmxrdGFwIHJpbmcgbWFqb3I6IDI1MQ0NClsg
ICAgMS4zNDkzNjVdIHJlZ2lzdGVyaW5nIG5ldGJhY2sNDQpbICAgIDEuMzY4NTA2XSBocGV0X2Fj
cGlfYWRkOiBubyBhZGRyZXNzIG9yIGlycXMgaW4gX0NSUw0NClsgICAgMS4zNzM5NTldIE5vbi12
b2xhdGlsZSBtZW1vcnkgZHJpdmVyIHYxLjMNDQpbICAgIDEuMzc0OTM2XSBTZXJpYWw6IDgyNTAv
MTY1NTAgZHJpdmVyLCA0IHBvcnRzLCBJUlEgc2hhcmluZyBlbmFibGVkDQ0KKFhFTikgQ2Fubm90
IGJpbmQgSVJRNCB0byBkb20wLiBJbiB1c2UgYnkgJ25zMTY1NTAnLg0KKFhFTikgQ2Fubm90IGJp
bmQgSVJRMiB0byBkb20wLiBJbiB1c2UgYnkgJ2Nhc2NhZGUnLg0KKFhFTikgQ2Fubm90IGJpbmQg
SVJRNCB0byBkb20wLiBJbiB1c2UgYnkgJ25zMTY1NTAnLg0KKFhFTikgQ2Fubm90IGJpbmQgSVJR
MiB0byBkb20wLiBJbiB1c2UgYnkgJ2Nhc2NhZGUnLg0KKFhFTikgQ2Fubm90IGJpbmQgSVJRNCB0
byBkb20wLiBJbiB1c2UgYnkgJ25zMTY1NTAnLg0KKFhFTikgQ2Fubm90IGJpbmQgSVJRMiB0byBk
b20wLiBJbiB1c2UgYnkgJ2Nhc2NhZGUnLg0KKFhFTikgQ2Fubm90IGJpbmQgSVJRNCB0byBkb20w
LiBJbiB1c2UgYnkgJ25zMTY1NTAnLg0KKFhFTikgQ2Fubm90IGJpbmQgSVJRMiB0byBkb20wLiBJ
biB1c2UgYnkgJ2Nhc2NhZGUnLg0KWyAgICAxLjY2NjMzNF0gc2VyaWFsODI1MDogdHR5UzEgYXQg
SS9PIDB4MmY4IChpcnEgPSAzKSBpcyBhIDE2NTUwQQ0NClsgICAgMS42NzQxODVdIDAwOjBkOiB0
dHlTMSBhdCBJL08gMHgyZjggKGlycSA9IDMpIGlzIGEgMTY1NTBBDQ0KWyAgICAxLjY4ODM2Nl0g
YnJkOiBtb2R1bGUgbG9hZGVkDQ0KWyAgICAxLjY5NTYzM10gbG9vcDogbW9kdWxlIGxvYWRlZA0N
ClsgICAgMS42OTkyMTZdIGlucHV0OiBNYWNpbnRvc2ggbW91c2UgYnV0dG9uIGVtdWxhdGlvbiBh
cyAvZGV2aWNlcy92aXJ0dWFsL2lucHV0L2lucHV0Mw0NClsgICAgMS43MDk5MjFdIGUxMDA6IElu
dGVsKFIpIFBSTy8xMDAgTmV0d29yayBEcml2ZXIsIDMuNS4yNC1rMi1OQVBJDQ0KWyAgICAxLjcx
MDg5OV0gZTEwMDogQ29weXJpZ2h0KGMpIDE5OTktMjAwNiBJbnRlbCBDb3Jwb3JhdGlvbg0NClsg
ICAgMS43MjE4NTddIHRnMy5jOnYzLjEwMiAoU2VwdGVtYmVyIDEsIDIwMDkpDQ0KWyAgICAxLjcy
NjI0OV0gdGczIDAwMDA6MDg6MDQuMDogUENJIElOVCBBIC0+IEdTSSAzNiAobGV2ZWwsIGxvdykg
LT4gSVJRIDM2DQ0KWyAgICAxLjc0MzE2M10gZXRoMDogVGlnb24zIFtwYXJ0bm8oQkNNOTU3MTUp
IHJldiA5MDAzXSAoUENJWDoxMzNNSHo6NjQtYml0KSBNQUMgYWRkcmVzcyAwMDplMDo4MTo4MDox
ZDphMA0NClsgICAgMS43NDMyNDhdIGV0aDA6IGF0dGFjaGVkIFBIWSBpcyA1NzE0ICgxMC8xMDAv
MTAwMEJhc2UtVCBFdGhlcm5ldCkgKFdpcmVTcGVlZFsxXSkNDQpbICAgIDEuNzQzMjQ4XSBldGgw
OiBSWGNzdW1zWzFdIExpbmtDaGdSRUdbMF0gTUlpcnFbMF0gQVNGWzBdIFRTT2NhcFsxXQ0NClsg
ICAgMS43NDMyNDhdIGV0aDA6IGRtYV9yd2N0cmxbNzYxNDgwMDBdIGRtYV9tYXNrWzY0LWJpdF0N
DQpbICAgIDEuNzQzMjQ4XSB4ZW5fYWxsb2NhdGVfcGlycTogcmV0dXJuaW5nIGlycSAzNiBmb3Ig
Z3NpIDM2DQ0KWyAgICAxLjc3NzQ1OV0gQWxyZWFkeSBzZXR1cCB0aGUgR1NJIDozNg0NClsgICAg
MS43Nzg0NDldIHRnMyAwMDAwOjA4OjA0LjE6IFBDSSBJTlQgQiAtPiBHU0kgMzYgKGxldmVsLCBs
b3cpIC0+IElSUSAzNg0NClsgICAgMS43OTk3MDZdIGV0aDE6IFRpZ29uMyBbcGFydG5vKEJDTTk1
NzE1KSByZXYgOTAwM10gKFBDSVg6MTMzTUh6OjY0LWJpdCkgTUFDIGFkZHJlc3MgMDA6ZTA6ODE6
ODA6MWQ6YTENDQpbICAgIDEuODAwMjUyXSBldGgxOiBhdHRhY2hlZCBQSFkgaXMgNTcxNCAoMTAv
MTAwLzEwMDBCYXNlLVQgRXRoZXJuZXQpIChXaXJlU3BlZWRbMV0pDQ0KWyAgICAxLjgwMDI1Ml0g
ZXRoMTogUlhjc3Vtc1sxXSBMaW5rQ2hnUkVHWzBdIE1JaXJxWzBdIEFTRlswXSBUU09jYXBbMV0N
DQpbICAgIDEuODAwMjUyXSBldGgxOiBkbWFfcndjdHJsWzc2MTQ4MDAwXSBkbWFfbWFza1s2NC1i
aXRdDQ0KWyAgICAxLjgyODYwNF0gc2t5MiBkcml2ZXIgdmVyc2lvbiAxLjI1DQ0KWyAgICAxLjgz
Mjg2NF0gdHVuOiBVbml2ZXJzYWwgVFVOL1RBUCBkZXZpY2UgZHJpdmVyLCAxLjYNDQpbICAgIDEu
ODMzODQyXSB0dW46IChDKSAxOTk5LTIwMDQgTWF4IEtyYXNueWFuc2t5IDxtYXhrQHF1YWxjb21t
LmNvbT4NDQpbICAgIDEuODQ0NTkwXSBjb25zb2xlIFtuZXRjb24wXSBlbmFibGVkDQ0KWyAgICAx
Ljg0NTU2Ml0gbmV0Y29uc29sZTogbmV0d29yayBsb2dnaW5nIHN0YXJ0ZWQNDQpbICAgIDEuODUz
NjkwXSBlaGNpX2hjZDogVVNCIDIuMCAnRW5oYW5jZWQnIEhvc3QgQ29udHJvbGxlciAoRUhDSSkg
RHJpdmVyDQ0KWyAgICAxLjg2MDY4NF0gQUNQSTogUENJIEludGVycnVwdCBMaW5rIFtMTktVXSBl
bmFibGVkIGF0IElSUSAxMA0NClsgICAgMS44NjEzNTVdIHhlbl9hbGxvY2F0ZV9waXJxOiByZXR1
cm5pbmcgaXJxIDEwIGZvciBnc2kgMTANDQpbICAgIDEuODcyMDE3XSBlaGNpX2hjZCAwMDAwOjAw
OjAzLjI6IFBDSSBJTlQgQSAtPiBMaW5rW0xOS1VdIC0+IEdTSSAxMCAobGV2ZWwsIGxvdykgLT4g
SVJRIDEwDQ0KWyAgICAxLjg3Mjk5N10gZWhjaV9oY2QgMDAwMDowMDowMy4yOiBFSENJIEhvc3Qg
Q29udHJvbGxlcg0NClsgICAgMS44ODYwMjddIGVoY2lfaGNkIDAwMDA6MDA6MDMuMjogbmV3IFVT
QiBidXMgcmVnaXN0ZXJlZCwgYXNzaWduZWQgYnVzIG51bWJlciAxDQ0KWyAgICAxLjkxNDE1Ml0g
ZWhjaV9oY2QgMDAwMDowMDowMy4yOiBpcnEgMTAsIGlvIG1lbSAweGQ4NTEyMDAwDQ0KWyAgICAx
LjkyMDI3Nl0gZWhjaV9oY2QgMDAwMDowMDowMy4yOiBVU0IgMi4wIHN0YXJ0ZWQsIEVIQ0kgMS4w
MA0NClsgICAgMS45MjYxNjRdIHVzYiB1c2IxOiBOZXcgVVNCIGRldmljZSBmb3VuZCwgaWRWZW5k
b3I9MWQ2YiwgaWRQcm9kdWN0PTAwMDINDQpbICAgIDEuOTI3MDY0XSB1c2IgdXNiMTogTmV3IFVT
QiBkZXZpY2Ugc3RyaW5nczogTWZyPTMsIFByb2R1Y3Q9MiwgU2VyaWFsTnVtYmVyPTENDQpbICAg
IDEuOTI3MDY0XSB1c2IgdXNiMTogUHJvZHVjdDogRUhDSSBIb3N0IENvbnRyb2xsZXINDQpbICAg
IDEuOTI3MDY0XSB1c2IgdXNiMTogTWFudWZhY3R1cmVyOiBMaW51eCAyLjYuMzIuMjUgZWhjaV9o
Y2QNDQpbICAgIDEuOTI3MDY0XSB1c2IgdXNiMTogU2VyaWFsTnVtYmVyOiAwMDAwOjAwOjAzLjIN
DQpbICAgIDEuOTU2MDI1XSB1c2IgdXNiMTogY29uZmlndXJhdGlvbiAjMSBjaG9zZW4gZnJvbSAx
IGNob2ljZQ0NClsgICAgMS45NjE4ODFdIGh1YiAxLTA6MS4wOiBVU0IgaHViIGZvdW5kDQ0KWyAg
ICAxLjk2NTY4NF0gaHViIDEtMDoxLjA6IDQgcG9ydHMgZGV0ZWN0ZWQNDQpbICAgIDEuOTcwMzg5
XSBvaGNpX2hjZDogVVNCIDEuMSAnT3BlbicgSG9zdCBDb250cm9sbGVyIChPSENJKSBEcml2ZXIN
DQpbICAgIDEuOTcxMzY1XSB4ZW5fYWxsb2NhdGVfcGlycTogcmV0dXJuaW5nIGlycSAxMCBmb3Ig
Z3NpIDEwDQ0KWyAgICAxLjk4MjE3M10gQWxyZWFkeSBzZXR1cCB0aGUgR1NJIDoxMA0NClsgICAg
MS45ODMxNjNdIG9oY2lfaGNkIDAwMDA6MDA6MDMuMDogUENJIElOVCBBIC0+IExpbmtbTE5LVV0g
LT4gR1NJIDEwIChsZXZlbCwgbG93KSAtPiBJUlEgMTANDQpbICAgIDEuOTgzMTYzXSBvaGNpX2hj
ZCAwMDAwOjAwOjAzLjA6IE9IQ0kgSG9zdCBDb250cm9sbGVyDQ0KWyAgICAxLjk5OTgxNl0gb2hj
aV9oY2QgMDAwMDowMDowMy4wOiBuZXcgVVNCIGJ1cyByZWdpc3RlcmVkLCBhc3NpZ25lZCBidXMg
bnVtYmVyIDINDQpbICAgIDIuMDAwNzk0XSBvaGNpX2hjZCAwMDAwOjAwOjAzLjA6IGlycSAxMCwg
aW8gbWVtIDB4ZDg1MTAwMDANDQpbICAgIDIuMDYzMzYwXSB1c2IgdXNiMjogTmV3IFVTQiBkZXZp
Y2UgZm91bmQsIGlkVmVuZG9yPTFkNmIsIGlkUHJvZHVjdD0wMDAxDQ0KWyAgICAyLjA2NDI1N10g
dXNiIHVzYjI6IE5ldyBVU0IgZGV2aWNlIHN0cmluZ3M6IE1mcj0zLCBQcm9kdWN0PTIsIFNlcmlh
bE51bWJlcj0xDQ0KWyAgICAyLjA2NDI1N10gdXNiIHVzYjI6IFByb2R1Y3Q6IE9IQ0kgSG9zdCBD
b250cm9sbGVyDQ0KWyAgICAyLjA2NDI1N10gdXNiIHVzYjI6IE1hbnVmYWN0dXJlcjogTGludXgg
Mi42LjMyLjI1IG9oY2lfaGNkDQ0KWyAgICAyLjA2NDI1N10gdXNiIHVzYjI6IFNlcmlhbE51bWJl
cjogMDAwMDowMDowMy4wDQ0KWyAgICAyLjA5MzE5Ml0gdXNiIHVzYjI6IGNvbmZpZ3VyYXRpb24g
IzEgY2hvc2VuIGZyb20gMSBjaG9pY2UNDQpbICAgIDIuMDk5MDIwXSBodWIgMi0wOjEuMDogVVNC
IGh1YiBmb3VuZA0NClsgICAgMi4xMDI4NDFdIGh1YiAyLTA6MS4wOiAyIHBvcnRzIGRldGVjdGVk
DQ0KWyAgICAyLjEwNzE0Ml0geGVuX2FsbG9jYXRlX3BpcnE6IHJldHVybmluZyBpcnEgMTAgZm9y
IGdzaSAxMA0NClsgICAgMi4xMTI2NDVdIEFscmVhZHkgc2V0dXAgdGhlIEdTSSA6MTANDQpbICAg
IDIuMTEzNjM1XSBvaGNpX2hjZCAwMDAwOjAwOjAzLjE6IFBDSSBJTlQgQSAtPiBMaW5rW0xOS1Vd
IC0+IEdTSSAxMCAobGV2ZWwsIGxvdykgLT4gSVJRIDEwDQ0KWyAgICAyLjExMzYzNV0gb2hjaV9o
Y2QgMDAwMDowMDowMy4xOiBPSENJIEhvc3QgQ29udHJvbGxlcg0NClsgICAgMi4xMzA1MTJdIG9o
Y2lfaGNkIDAwMDA6MDA6MDMuMTogbmV3IFVTQiBidXMgcmVnaXN0ZXJlZCwgYXNzaWduZWQgYnVz
IG51bWJlciAzDQ0KWyAgICAyLjEzODAzNV0gb2hjaV9oY2QgMDAwMDowMDowMy4xOiBpcnEgMTAs
IGlvIG1lbSAweGQ4NTExMDAwDQ0KWyAgICAyLjE5NDM2OF0gdXNiIHVzYjM6IE5ldyBVU0IgZGV2
aWNlIGZvdW5kLCBpZFZlbmRvcj0xZDZiLCBpZFByb2R1Y3Q9MDAwMQ0NClsgICAgMi4xOTUyNjZd
IHVzYiB1c2IzOiBOZXcgVVNCIGRldmljZSBzdHJpbmdzOiBNZnI9MywgUHJvZHVjdD0yLCBTZXJp
YWxOdW1iZXI9MQ0NClsgICAgMi4xOTUyNjZdIHVzYiB1c2IzOiBQcm9kdWN0OiBPSENJIEhvc3Qg
Q29udHJvbGxlcg0NClsgICAgMi4xOTUyNjZdIHVzYiB1c2IzOiBNYW51ZmFjdHVyZXI6IExpbnV4
IDIuNi4zMi4yNSBvaGNpX2hjZA0NClsgICAgMi4xOTUyNjZdIHVzYiB1c2IzOiBTZXJpYWxOdW1i
ZXI6IDAwMDA6MDA6MDMuMQ0NClsgICAgMi4yMjQyMzNdIHVzYiB1c2IzOiBjb25maWd1cmF0aW9u
ICMxIGNob3NlbiBmcm9tIDEgY2hvaWNlDQ0KWyAgICAyLjIzMDA2NV0gaHViIDMtMDoxLjA6IFVT
QiBodWIgZm91bmQNDQpbICAgIDIuMjMzODc4XSBodWIgMy0wOjEuMDogMiBwb3J0cyBkZXRlY3Rl
ZA0NClsgICAgMi4yMzg0OTldIHVoY2lfaGNkOiBVU0IgVW5pdmVyc2FsIEhvc3QgQ29udHJvbGxl
ciBJbnRlcmZhY2UgZHJpdmVyDQ0KWyAgICAyLjI0NTMzMl0gdXNiY29yZTogcmVnaXN0ZXJlZCBu
ZXcgaW50ZXJmYWNlIGRyaXZlciB1c2JscA0NClsgICAgMi4yNDYzMTVdIEluaXRpYWxpemluZyBV
U0IgTWFzcyBTdG9yYWdlIGRyaXZlci4uLg0NClsgICAgMi4yNTU5NTddIHVzYmNvcmU6IHJlZ2lz
dGVyZWQgbmV3IGludGVyZmFjZSBkcml2ZXIgdXNiLXN0b3JhZ2UNDQpbICAgIDIuMjU2OTQwXSBV
U0IgTWFzcyBTdG9yYWdlIHN1cHBvcnQgcmVnaXN0ZXJlZC4NDQpbICAgIDIuMjY2ODU5XSB1c2Jj
b3JlOiByZWdpc3RlcmVkIG5ldyBpbnRlcmZhY2UgZHJpdmVyIGxpYnVzdWFsDQ0KWyAgICAyLjI3
Mjk5MV0gUE5QOiBQUy8yIENvbnRyb2xsZXIgW1BOUDAzMDM6S0JDLFBOUDBmMTM6TU9VRV0gYXQg
MHg2MCwweDY0IGlycSAxLDEyDQ0KWyAgICAyLjI4MzQ3OV0gc2VyaW86IGk4MDQyIEtCRCBwb3J0
IGF0IDB4NjAsMHg2NCBpcnEgMQ0NClsgICAgMi4yODM0NzldIHNlcmlvOiBpODA0MiBBVVggcG9y
dCBhdCAweDYwLDB4NjQgaXJxIDEyDQ0KWyAgICAyLjI5NDUzM10gbWljZTogUFMvMiBtb3VzZSBk
ZXZpY2UgY29tbW9uIGZvciBhbGwgbWljZQ0NClsgICAgMi4zMDExMjVdIHJ0Y19jbW9zIDAwOjA2
OiBSVEMgY2FuIHdha2UgZnJvbSBTNA0NClsgICAgMi4zMDYwMDZdIHJ0Y19jbW9zIDAwOjA2OiBy
dGMgY29yZTogcmVnaXN0ZXJlZCBydGNfY21vcyBhcyBydGMwDQ0KWyAgICAyLjMxMjE5Nl0gcnRj
MDogYWxhcm1zIHVwIHRvIG9uZSB5ZWFyLCB5M2ssIDExNCBieXRlcyBudnJhbQ0NClsgICAgMi4z
MTkxNDFdIGRldmljZS1tYXBwZXI6IGlvY3RsOiA0LjE1LjAtaW9jdGwgKDIwMDktMDQtMDEpIGlu
aXRpYWxpc2VkOiBkbS1kZXZlbEByZWRoYXQuY29tDQ0KWyAgICAyLjMyODk4OF0gY3B1aWRsZTog
dXNpbmcgZ292ZXJub3IgbGFkZGVyDQ0KWyAgICAyLjMyOTk1Nl0gY3B1aWRsZTogdXNpbmcgZ292
ZXJub3IgbWVudQ0NClsgICAgMi4zNDIxODJdIHVzYmNvcmU6IHJlZ2lzdGVyZWQgbmV3IGludGVy
ZmFjZSBkcml2ZXIgaGlkZGV2DQ0KWyAgICAyLjM0NzkyMV0gdXNiY29yZTogcmVnaXN0ZXJlZCBu
ZXcgaW50ZXJmYWNlIGRyaXZlciB1c2JoaWQNDQpbICAgIDIuMzQ4OTAyXSB1c2JoaWQ6IHYyLjY6
VVNCIEhJRCBjb3JlIGRyaXZlcg0NClsgICAgMi4zNTg5MzBdIEFkdmFuY2VkIExpbnV4IFNvdW5k
IEFyY2hpdGVjdHVyZSBEcml2ZXIgVmVyc2lvbiAxLjAuMjEuDQ0KbW9kcHJvYmU6IEZBVEFMOiBD
b3VsZCBub3QgbG9hZCAvbGliL21vZHVsZXMvMi42LjMyLjI1L21vZHVsZXMuZGVwOiBObyBzdWNo
IGZpbGUgb3IgZGlyZWN0b3J5DQ0KDQ0NClsgICAgMi4zODQ1ODVdIEFMU0EgZGV2aWNlIGxpc3Q6
DQ0KWyAgICAyLjM4NTU1OV0gICBObyBzb3VuZGNhcmRzIGZvdW5kLg0NClsgICAgMi4zOTEyMzNd
IE5ldGZpbHRlciBtZXNzYWdlcyB2aWEgTkVUTElOSyB2MC4zMC4NDQpbICAgIDIuMzk2MDA2XSBu
Zl9jb25udHJhY2sgdmVyc2lvbiAwLjUuMCAoMTYzODQgYnVja2V0cywgNjU1MzYgbWF4KQ0NClsg
ICAgMi40MDMxNDhdIGN0bmV0bGluayB2MC45MzogcmVnaXN0ZXJpbmcgd2l0aCBuZm5ldGxpbmsu
DQ0KWyAgICAyLjQxMjc5OV0gaXBfdGFibGVzOiAoQykgMjAwMC0yMDA2IE5ldGZpbHRlciBDb3Jl
IFRlYW0NDQpbICAgIDIuNDE4MTkxXSBUQ1AgY3ViaWMgcmVnaXN0ZXJlZA0NClsgICAgMi40MTkx
MzVdIEluaXRpYWxpemluZyBYRlJNIG5ldGxpbmsgc29ja2V0DQ0KWyAgICAyLjQyNjkzM10gTkVU
OiBSZWdpc3RlcmVkIHByb3RvY29sIGZhbWlseSAxMA0NClsgICAgMi40NDEzMzddIGlwNl90YWJs
ZXM6IChDKSAyMDAwLTIwMDYgTmV0ZmlsdGVyIENvcmUgVGVhbQ0NClsgICAgMi40NDcwMzhdIElQ
djYgb3ZlciBJUHY0IHR1bm5lbGluZyBkcml2ZXINDQpbICAgIDIuNDU2MDk0XSBORVQ6IFJlZ2lz
dGVyZWQgcHJvdG9jb2wgZmFtaWx5IDE3DQ0KWyAgICAyLjQ2MDgxOF0gVXNpbmcgSVBJIE5vLVNo
b3J0Y3V0IG1vZGUNDQpbICAgIDIuNDY1MjI1XSByZWdpc3RlcmVkIHRhc2tzdGF0cyB2ZXJzaW9u
IDENDQpbICAgIDIuNDcwNDAyXSAgIE1hZ2ljIG51bWJlcjogODo3OTA6OTgyDQ0KWyAgICAyLjQ3
NTQ5Nl0gRnJlZWluZyB1bnVzZWQga2VybmVsIG1lbW9yeTogNDg0ayBmcmVlZA0NClsgICAgMi40
ODIxODNdIHVzYiAyLTI6IG5ldyBsb3cgc3BlZWQgVVNCIGRldmljZSB1c2luZyBvaGNpX2hjZCBh
bmQgYWRkcmVzcyAyDQ0KWyAgICAyLjQ4OTI0Ml0gV3JpdGUgcHJvdGVjdGluZyB0aGUga2VybmVs
IHRleHQ6IDQ2MjBrDQ0KWyAgICAyLjQ5NTk5MV0gV3JpdGUgcHJvdGVjdGluZyB0aGUga2VybmVs
IHJlYWQtb25seSBkYXRhOiAyMDE2aw0NCkxvYWRpbmcsIHBsZWFzZSB3YWl0Li4uDQ0KWyAgICAy
LjY4NDI3Nl0gdUJlZ2luOiBMb2FkaW5nIGVzc2VudGlhbCBkcml2ZXJzIC4uLiBzYiAyLTI6IE5l
dyBVU0IgZGV2aWNlIGZvdW5kLCBpZFZlbmRvcj0wNjI0LCBpZFByb2R1Y3Q9MDIwMA0NClsgICAg
Mi42ODUxMzddIHVzYiAyLTI6IE5ldyBVU0IgZGV2aWNlIHN0cmluZ3M6IE1mcj0xLCBQcm9kdWN0
PTIsIFNlcmlhbE51bWJlcj0wDQ0KWyAgICAyLjY4NTEzN10gdXNiIDItMjogUHJvZHVjdDogVVNC
IERTUklRDQ0KWyAgICAyLjY4NTEzN10gdXNiIDItMjogTWFudWZhY3R1cmVyOiBBdm9jZW50DQ0K
WyAgICAyLjcxMDQ5NF0gdXNiIDItMjogY29uZmlndXJhdGlvbiAjMSBjaG9zZW4gZnJvbSAxIGNo
b2ljZQ0NClsgICAgMi43MzA0NTFdIGlucHV0OiBBdm9jZW50IFVTQiBEU1JJUSBhcyAvZGV2aWNl
cy9wY2kwMDAwOjAwLzAwMDA6MDA6MDMuMC91c2IyLzItMi8yLTI6MS4wL2lucHV0L2lucHV0NA0N
ClsgICAgMi43NDA3OTJdIGdlbmVyaWMtdXNiIDAwMDM6MDYyNDowMjAwLjAwMDE6IGlucHV0LGhp
ZHJhdzA6IFVTQiBISUQgdjEuMTAgS2V5Ym9hcmQgW0F2b2NlbnQgVVNCIERTUklRXSBvbiB1c2It
MDAwMDowMDowMy4wLTIvaW5wdXQwDQ0KZG9uZS4NDQpbICAgIDIuNzY0NDI1XSBpQmVnaW46IFJ1
bm5pbmcgL3NjcmlwdHMvaW5pdC1wcmVtb3VudCAuLi4gbnB1dDogQXZvY2VudCBVU0IgRFNSSVEg
YXMgL2RldmljZXMvcGNpMDAwMDowMC8wMDAwOjAwOjAzLjAvdXNiMi8yLTIvMi0yOjEuMS9pbnB1
dC9pbnB1dDUNDQpbICAgIDIuNzc4NTk2XSBnZW5lcmljLXVzYiAwMDAzOjA2MjQ6MDIwMC4wMDAy
OiBpbnB1dCxoaWRyYXcxOiBVU0IgSElEIHYxLjEwIE1vdXNlIFtBdm9jZW50IFVTQiBEU1JJUV0g
b24gdXNiLTAwMDA6MDA6MDMuMC0yL2lucHV0MQ0NClsgICAgMy45MzE0MDVdIEFDUEk6IFBDSSBJ
bnRlcnJ1cHQgTGluayBbTE5LU10gZW5hYmxlZCBhdCBJUlEgMTENDQpbICAgIDMuOTM3NDc2XSB4
ZW5fYWxsb2NhdGVfcGlycTogcmV0dXJuaW5nIGlycSAxMSBmb3IgZ3NpIDExDQ0KWyAgICAzLjk0
MzM1N10gc2F0YV9zdncgMDAwMDowMTowZS4wOiBQQ0kgSU5UIEEgLT4gTGlua1tMTktTXSAtPiBH
U0kgMTEgKGxldmVsLCBsb3cpIC0+IElSUSAxMQ0NClsgICAgMy45NTI0MTNdIHNjc2kwIDogc2F0
YV9zdncNDQpbICAgIDMuOTU2MDE0XSBzY3NpMSA6IHNhdGFfc3Z3DQ0KWyAgICAzLjk1OTU4MV0g
c2NzaTIgOiBzYXRhX3N2dw0NClsgICAgMy45NjMyNjBdIHNjc2kzIDogc2F0YV9zdncNDQpbICAg
IDMuOTY2MjY3XSBhdGExOiBTQVRBIG1heCBVRE1BLzEzMyBtbWlvIG04MTkyQDB4ZDgzMDAwMDAg
cG9ydCAweGQ4MzAwMDAwIGlycSAxMQ0NClsgICAgMy45NjcyMzRdIGF0YTI6IFNBVEEgbWF4IFVE
TUEvMTMzIG1taW8gbTgxOTJAMHhkODMwMDAwMCBwb3J0IDB4ZDgzMDAxMDAgaXJxIDExDQ0KWyAg
ICAzLjk2NzIzNF0gYXRhMzogU0FUQSBtYXggVURNQS8xMzMgbW1pbyBtODE5MkAweGQ4MzAwMDAw
IHBvcnQgMHhkODMwMDIwMCBpcnEgMTENDQpbICAgIDMuOTY3MjM0XSBhdGE0OiBTQVRBIG1heCBV
RE1BLzEzMyBtbWlvIG04MTkyQDB4ZDgzMDAwMDAgcG9ydCAweGQ4MzAwMzAwIGlycSAxMQ0NClsg
ICAgMy45OTY1MzZdIHhlbl9hbGxvY2F0ZV9waXJxOiByZXR1cm5pbmcgaXJxIDExIGZvciBnc2kg
MTENDQpbICAgIDQuMDAyMjQwXSBBbHJlYWR5IHNldHVwIHRoZSBHU0kgOjExDQ0KWyAgICA0LjAw
MzE5M10gc2F0YV9zdncgMDAwMDowMTowZS4xOiBQQ0kgSU5UIEEgLT4gTGlua1tMTktTXSAtPiBH
U0kgMTEgKGxldmVsLCBsb3cpIC0+IElSUSAxMQ0NClsgICAgNC4zMDUyNzFdIGF0YTE6IFNBVEEg
bGluayB1cCAxLjUgR2JwcyAoU1N0YXR1cyAxMTMgU0NvbnRyb2wgMzAwKQ0NClsgICAgNC4zNjI1
MzVdIGF0YTEuMDA6IEFUQS03OiBTVDMxNjA4MTVBUywgMy5BQUMsIG1heCBVRE1BLzEzMw0NClsg
ICAgNC4zNjMzMzddIGF0YTEuMDA6IDMxMjU4MTgwOCBzZWN0b3JzLCBtdWx0aSAxNjogTEJBNDgg
TkNRIChkZXB0aCAwLzMyKQ0NClsgICAgNC40MjkxMzRdIGF0YTEuMDA6IGNvbmZpZ3VyZWQgZm9y
IFVETUEvMTMzDQ0KWyAgICA0LjQzMzk3Ml0gc2NzaSAwOjA6MDowOiBEaXJlY3QtQWNjZXNzICAg
ICBBVEEgICAgICBTVDMxNjA4MTVBUyAgICAgIDMuQUEgUFE6IDAgQU5TSTogNQ0NClsgICAgNC40
NDM1MjhdIHNkIDA6MDowOjA6IEF0dGFjaGVkIHNjc2kgZ2VuZXJpYyBzZzAgdHlwZSAwDQ0KWyAg
ICA0LjQ0MzU5MV0gc2QgMDowOjA6MDogW3NkYV0gMzEyNTgxODA4IDUxMi1ieXRlIGxvZ2ljYWwg
YmxvY2tzOiAoMTYwIEdCLzE0OSBHaUIpDQ0KWyAgICA0LjQ1MTAxN10gc2QgMDowOjA6MDogW3Nk
YV0gV3JpdGUgUHJvdGVjdCBpcyBvZmYNDQpbICAgIDQuNDUxMTIzXSBzZCAwOjA6MDowOiBbc2Rh
XSBXcml0ZSBjYWNoZTogZW5hYmxlZCwgcmVhZCBjYWNoZTogZW5hYmxlZCwgZG9lc24ndCBzdXBw
b3J0IERQTyBvciBGVUENDQpbICAgIDQuNDUxNzg2XSAgc2RhOiBzZGExIHNkYTIgPCBzZGE1ID4N
DQpbICAgIDQuNDg1OTQ4XSBzZCAwOjA6MDowOiBbc2RhXSBBdHRhY2hlZCBTQ1NJIGRpc2sNDQpb
ICAgIDQuNzc5NTMzXSBhdGEyOiBTQVRBIGxpbmsgdXAgMS41IEdicHMgKFNTdGF0dXMgMTEzIFND
b250cm9sIDMwMCkNDQpbICAgIDQuNzkwNzU1XSBhdGEyLjAwOiBBVEEtODogV0RDIFdENTAwMEFB
S1MtMDBWMUEwLCAwNS4wMUQwNSwgbWF4IFVETUEvMTMzDQ0KWyAgICA0Ljc5MTUwNF0gYXRhMi4w
MDogOTc2NzczMTY4IHNlY3RvcnMsIG11bHRpIDE2OiBMQkE0OCBOQ1EgKGRlcHRoIDAvMzIpDQ0K
WyAgICA0LjgxMTA1MV0gYXRhMi4wMDogY29uZmlndXJlZCBmb3IgVURNQS8xMzMNDQpbICAgIDQu
ODE2NDM1XSBzY3NpIDE6MDowOjA6IERpcmVjdC1BY2Nlc3MgICAgIEFUQSAgICAgIFdEQyBXRDUw
MDBBQUtTLTAgMDUuMCBQUTogMCBBTlNJOiA1DQ0KWyAgICA0LjgyNjE1N10gc2QgMTowOjA6MDog
W3NkYl0gOTc2NzczMTY4IDUxMi1ieXRlIGxvZ2ljYWwgYmxvY2tzOiAoNTAwIEdCLzQ2NSBHaUIp
DQ0KWyAgICA0LjgyNjQxOF0gc2QgMTowOjA6MDogQXR0YWNoZWQgc2NzaSBnZW5lcmljIHNnMSB0
eXBlIDANDQpbICAgIDQuODM5ODEyXSBzZCAxOjA6MDowOiBbc2RiXSBXcml0ZSBQcm90ZWN0IGlz
IG9mZg0NClsgICAgNC44NDUxNTJdIHNkIDE6MDowOjA6IFtzZGJdIFdyaXRlIGNhY2hlOiBlbmFi
bGVkLCByZWFkIGNhY2hlOiBlbmFibGVkLCBkb2Vzbid0IHN1cHBvcnQgRFBPIG9yIEZVQQ0NClsg
ICAgNC44NTU1MDldICBzZGI6IHNkYjEgc2RiMiBzZGIzDQ0KWyAgICA0Ljg5MTkyMF0gc2QgMTow
OjA6MDogW3NkYl0gQXR0YWNoZWQgU0NTSSBkaXNrDQ0KWyAgICA1LjEzNDI3OF0gYXRhMzogU0FU
QSBsaW5rIGRvd24gKFNTdGF0dXMgNCBTQ29udHJvbCAzMDApDQ0KWyAgICA1LjQ1OTExOF0gYXRh
NDogU0FUQSBsaW5rIGRvd24gKFNTdGF0dXMgNCBTQ29udHJvbCAzMDApDQ0KWyAgICA1LjQ4NDMz
MF0gRnVzaW9uIE1QVCBiYXNlIGRyaXZlciAzLjA0LjEyDQ0KWyAgICA1LjQ4NTEwMF0gQ29weXJp
Z2h0IChjKSAxOTk5LTIwMDggTFNJIENvcnBvcmF0aW9uDQ0KWyAgICA1LjUxNjQ1Ml0gRnVzaW9u
IE1QVCBTQVMgSG9zdCBkcml2ZXIgMy4wNC4xMg0NClsgICAgNS41MjEzNThdIG1wdHNhcyAwMDAw
OjA2OjAwLjA6IFBDSSBJTlQgQSAtPiBHU0kgMzUgKGxldmVsLCBsb3cpIC0+IElSUSAzNQ0NClsg
ICAgNS41Mjg5MzRdIG1wdGJhc2U6IGlvYzA6IEluaXRpYXRpbmcgYnJpbmd1cA0NClsgICAgNS44
MTUwOTNdIGlvYzA6IExTSVNBUzEwNjRFIEIxOiBDYXBhYmlsaXRpZXM9e0luaXRpYXRvcn0NDQpb
ICAgMTAuODY3MzM0XSBzY3NpNCA6IGlvYzA6IExTSVNBUzEwNjRFIEIxLCBGd1Jldj0wMTBhMDAw
MGgsIFBvcnRzPTEsIE1heFE9NTExLCBJUlE9MzUNDQpkb25lLg0NCkJlZ2luOiBNb3VudGluZyBy
b290IGZpbGUgc3lzdGVtIC4uLiBCZWdpbjogUnVubmluZyAvc2NyaXB0cy9sb2NhbC10b3AgLi4u
IGRvbmUuDQ0KQmVnaW46IFJ1bm5pbmcgL3NjcmlwdHMvbG9jYWwtcHJlbW91bnQgLi4uIGtpbml0
OiBuYW1lX3RvX2Rldl90KC9kZXYvc2RiMikgPSBkZXYoOCwxOCkNDQpraW5pdDogdHJ5aW5nIHRv
IHJlc3VtZSBmcm9tIC9kZXYvc2RiMg0NClsgICAxMS4xOTczNjNdIFBNOiBTdGFydGluZyBtYW51
YWwgcmVzdW1lIGZyb20gZGlzaw0NCmtpbml0OiBObyByZXN1bWUgaW1hZ2UsIGRvaW5nIG5vcm1h
bCBib290Li4uDQ0KZG9uZS4NDQpbICAgMTEuMjY2ODU4XSBram91cm5hbGQgc3RhcnRpbmcuICBD
b21taXQgaW50ZXJ2YWwgNSBzZWNvbmRzDQ0KWyAgIDExLjI2Njg4OF0gRVhUMy1mczogbW91bnRl
ZCBmaWxlc3lzdGVtIHdpdGggd3JpdGViYWNrIGRhdGEgbW9kZS4NDQpCZWdpbjogUnVubmluZyAv
c2NyaXB0cy9sb2NhbC1ib3R0b20gLi4uIGRvbmUuDQ0KZG9uZS4NDQpCZWdpbjogUnVubmluZyAv
c2NyaXB0cy9pbml0LWJvdHRvbSAuLi4gZG9uZS4NDQpTRUxpbnV4OiAgQ291bGQgbm90IG9wZW4g
cG9saWN5IGZpbGUgPD0gL2V0Yy9zZWxpbnV4L3RhcmdldGVkL3BvbGljeS9wb2xpY3kuMjY6ICBO
byBzdWNoIGZpbGUgb3IgZGlyZWN0b3J5DQ0KDUlOSVQ6IHZlcnNpb24gMi44OCBib290aW5nDQ0N
ClsbWzM2bWluZm8bWzM5OzQ5bV0gVXNpbmcgbWFrZWZpbGUtc3R5bGUgY29uY3VycmVudCBib290
IGluIHJ1bmxldmVsIFMuDQ0KWy4uLi5dIFN0YXJ0aW5nIHRoZSBob3RwbHVnIGV2ZW50cyBkaXNw
YXRjaGVyOiB1ZGV2ZFsgICAxMy42NzEzMDVdIDwzMD51ZGV2ZFsyMTUwXTogc3RhcnRpbmcgdmVy
c2lvbiAxNzUNDQobWz8yNWwbWz8xYxs3G1sxR1sbWzMybSBvayAbWzM5OzQ5bRs4G1s/MjVoG1s/
MGMuDQ0KWy4uLi5dIFN5bnRoZXNpemluZyB0aGUgaW5pdGlhbCBob3RwbHVnIGV2ZW50cy4uLhtb
PzI1bBtbPzFjGzcbWzFHWxtbMzJtIG9rIBtbMzk7NDltGzgbWz8yNWgbWz8wY2RvbmUuDQ0KWy4u
Li5dIFdhaXRpbmcgZm9yIC9kZXYgdG8gYmUgZnVsbHkgcG9wdWxhdGVkLi4udWRldmRbMjE4Ml06
IGtlcm5lbC1wcm92aWRlZCBuYW1lICdibGt0YXAtY29udHJvbCcgYW5kIE5BTUU9ICd4ZW4vYmxr
dGFwLTIvY29udHJvbCcgZGlzYWdyZWUsIHBsZWFzZSB1c2UgU1lNTElOSys9IG9yIGNoYW5nZSB0
aGUga2VybmVsIHRvIHByb3ZpZGUgdGhlIHByb3BlciBuYW1lDQ0KDQ0NChtbPzI1bBtbPzFjGzcb
WzFHWxtbMzJtIG9rIBtbMzk7NDltGzgbWz8yNWgbWz8wY2RvbmUuDQ0KWy4uLi5dIEFjdGl2YXRp
bmcgc3dhcC4uLlsgICAxNS44NDU2MjBdIEFkZGluZyAzOTAzNzg0ayBzd2FwIG9uIC9kZXYvc2Ri
Mi4gIFByaW9yaXR5Oi0xIGV4dGVudHM6MSBhY3Jvc3M6MzkwMzc4NGsgDQ0KG1s/MjVsG1s/MWMb
NxtbMUdbG1szMm0gb2sgG1szOTs0OW0bOBtbPzI1aBtbPzBjZG9uZS4NDQpbLi4uLl0gQ2hlY2tp
bmcgcm9vdCBmaWxlIHN5c3RlbS4uLmZzY2sgZnJvbSB1dGlsLWxpbnV4IDIuMjAuMQ0NCi9kZXYv
c2RiMTogY2xlYW4sIDk3OTg1LzQ4ODY0MCBmaWxlcywgOTk0MzkwLzE5NTM4OTcgYmxvY2tzDQ0K
G1s/MjVsG1s/MWMbNxtbMUdbG1szMm0gb2sgG1szOTs0OW0bOBtbPzI1aBtbPzBjZG9uZS4NDQpb
ICAgMTYuNDk1OTQ3XSBFWFQzIEZTIG9uIHNkYjEsIGludGVybmFsIGpvdXJuYWwNDQpbLi4uLl0g
TG9hZGluZyBrZXJuZWwgbW9kdWxlcy4uLhtbPzI1bBtbPzFjGzcbWzFHWxtbMzJtIG9rIBtbMzk7
NDltGzgbWz8yNWgbWz8wY2RvbmUuDQ0KWy4uLi5dIFNldHRpbmcgdXAgTFZNIFZvbHVtZSBHcm91
cHMuLi4bWz8yNWwbWz8xYxs3G1sxR1sbWzMybSBvayAbWzM5OzQ5bRs4G1s/MjVoG1s/MGNkb25l
Lg0NCl5bPlsuLi4uXSBBY3RpdmF0aW5nIGx2bSBhbmQgbWQgc3dhcC4uLhtbPzI1bBtbPzFjGzcb
WzFHWxtbMzJtIG9rIBtbMzk7NDltGzgbWz8yNWgbWz8wY2RvbmUuDQ0KWy4uLi5dIENoZWNraW5n
IGZpbGUgc3lzdGVtcy4uLmZzY2sgZnJvbSB1dGlsLWxpbnV4IDIuMjAuMQ0NCi9kZXYvbWFwcGVy
L3Vuc3RhYmxlLXRyYWNlczogY2xlYW4sIDE1LzY1NTM2MDAgZmlsZXMsIDQ2OTg0NC8yNjIxNDQw
MCBibG9ja3MNDQobWz8yNWwbWz8xYxs3G1sxR1sbWzMybSBvayAbWzM5OzQ5bRs4G1s/MjVoG1s/
MGNkb25lLg0NClsuLi4uXSBNb3VudGluZyBsb2NhbCBmaWxlc3lzdGVtcy4uLlsgICAyMS4yNjQy
NTVdIGtqb3VybmFsZCBzdGFydGluZy4gIENvbW1pdCBpbnRlcnZhbCA1IHNlY29uZHMNDQpbICAg
MjEuMjY0NjI3XSBFWFQzIEZTIG9uIGRtLTUsIGludGVybmFsIGpvdXJuYWwNDQpbICAgMjEuMjY0
NjQzXSBFWFQzLWZzOiBtb3VudGVkIGZpbGVzeXN0ZW0gd2l0aCB3cml0ZWJhY2sgZGF0YSBtb2Rl
Lg0NChtbPzI1bBtbPzFjGzcbWzFHWxtbMzFtRkFJTBtbMzk7NDltGzgbWz8yNWgbWz8wYxtbMzFt
ZmFpbGVkLhtbMzk7NDltDQ0KWy4uLi5dIEFjdGl2YXRpbmcgc3dhcGZpbGUgc3dhcC4uLhtbPzI1
bBtbPzFjGzcbWzFHWxtbMzJtIG9rIBtbMzk7NDltGzgbWz8yNWgbWz8wY2RvbmUuDQ0KWy4uLi5d
IENsZWFuaW5nIHVwIHRlbXBvcmFyeSBmaWxlcy4uLhtbPzI1bBtbPzFjGzcbWzFHWxtbMzJtIG9r
IBtbMzk7NDltGzgbWz8yNWgbWz8wYy4NDQpbLi4uLl0gQ29uZmlndXJpbmcgbmV0d29yayBpbnRl
cmZhY2VzLi4uDQ0KV2FpdGluZyBmb3IgYSBtYXggb2YgMCBzZWNvbmRzIGZvciBldGgwIHRvIGJl
Y29tZSBhdmFpbGFibGUuDQ0KWyAgIDIyLjU1OTEyNV0gZGV2aWNlIGV0aDAgZW50ZXJlZCBwcm9t
aXNjdW91cyBtb2RlDQ0KWyAgIDIyLjYzNDQ2NV0gQUREUkNPTkYoTkVUREVWX1VQKTogZXRoMDog
bGluayBpcyBub3QgcmVhZHkNDQpJbnRlcm5ldCBTeXN0ZW1zIENvbnNvcnRpdW0gREhDUCBDbGll
bnQgNC4yLjINDQpDb3B5cmlnaHQgMjAwNC0yMDExIEludGVybmV0IFN5c3RlbXMgQ29uc29ydGl1
bS4NDQpBbGwgcmlnaHRzIHJlc2VydmVkLg0NCkZvciBpbmZvLCBwbGVhc2UgdmlzaXQgaHR0cHM6
Ly93d3cuaXNjLm9yZy9zb2Z0d2FyZS9kaGNwLw0NCg0NCkxpc3RlbmluZyBvbiBMUEYveGVuYnIw
LzAwOmUwOjgxOjgwOjFkOmEwDQ0KU2VuZGluZyBvbiAgIExQRi94ZW5icjAvMDA6ZTA6ODE6ODA6
MWQ6YTANDQpTZW5kaW5nIG9uICAgU29ja2V0L2ZhbGxiYWNrDQ0KREhDUERJU0NPVkVSIG9uIHhl
bmJyMCB0byAyNTUuMjU1LjI1NS4yNTUgcG9ydCA2NyBpbnRlcnZhbCA0DQ0KWyAgIDI1LjQwODk5
M10gdGczOiBldGgwOiBMaW5rIGlzIHVwIGF0IDEwMDAgTWJwcywgZnVsbCBkdXBsZXguDQ0KWyAg
IDI1LjQwOTIyNl0gdGczOiBldGgwOiBGbG93IGNvbnRyb2wgaXMgb24gZm9yIFRYIGFuZCBvbiBm
b3IgUlguDQ0KWyAgIDI1LjQwOTIyNl0gQUREUkNPTkYoTkVUREVWX0NIQU5HRSk6IGV0aDA6IGxp
bmsgYmVjb21lcyByZWFkeQ0NClsgICAyNS40MDkyMjZdIHhlbmJyMDogcG9ydCAxKGV0aDApIGVu
dGVyaW5nIGZvcndhcmRpbmcgc3RhdGUNDQpESENQRElTQ09WRVIgb24geGVuYnIwIHRvIDI1NS4y
NTUuMjU1LjI1NSBwb3J0IDY3IGludGVydmFsIDQNDQpESENQRElTQ09WRVIgb24geGVuYnIwIHRv
IDI1NS4yNTUuMjU1LjI1NSBwb3J0IDY3IGludGVydmFsIDUNDQpESENQRElTQ09WRVIgb24geGVu
YnIwIHRvIDI1NS4yNTUuMjU1LjI1NSBwb3J0IDY3IGludGVydmFsIDcNDQpESENQRElTQ09WRVIg
b24geGVuYnIwIHRvIDI1NS4yNTUuMjU1LjI1NSBwb3J0IDY3IGludGVydmFsIDE1DQ0KREhDUERJ
U0NPVkVSIG9uIHhlbmJyMCB0byAyNTUuMjU1LjI1NS4yNTUgcG9ydCA2NyBpbnRlcnZhbCAxMw0N
CkRIQ1BSRVFVRVNUIG9uIHhlbmJyMCB0byAyNTUuMjU1LjI1NS4yNTUgcG9ydCA2Nw0NCkRIQ1BP
RkZFUiBmcm9tIDEwLjgwLjIyNC4xDQ0KREhDUEFDSyBmcm9tIDEwLjgwLjIyNC4xDQ0KYm91bmQg
dG8gMTAuODAuMjI0LjE0NCAtLSByZW5ld2FsIGluIDIwOTgzIHNlY29uZHMuDQ0KDQ0KV2FpdGlu
ZyBmb3IgYSBtYXggb2YgMCBzZWNvbmRzIGZvciBldGgxIHRvIGJlY29tZSBhdmFpbGFibGUuDQ0K
WyAgIDU4Ljc0NzA3MV0gZGV2aWNlIGV0aDEgZW50ZXJlZCBwcm9taXNjdW91cyBtb2RlDQ0KWyAg
IDU4Ljc5ODQ3Nl0gQUREUkNPTkYoTkVUREVWX1VQKTogZXRoMTogbGluayBpcyBub3QgcmVhZHkN
DQppZnVwOiBpbnRlcmZhY2UgZXRoMCBhbHJlYWR5IGNvbmZpZ3VyZWQNDQobWz8yNWwbWz8xYxs3
G1sxR1sbWzMybSBvayAbWzM5OzQ5bRs4G1s/MjVoG1s/MGNkb25lLg0NClsuLi4uXSBTZXR0aW5n
IGtlcm5lbCB2YXJpYWJsZXMgLi4uG1s/MjVsG1s/MWMbNxtbMUdbG1szMm0gb2sgG1szOTs0OW0b
OBtbPzI1aBtbPzBjZG9uZS4NDQpbLi4uLl0gU3RhcnRpbmcgcnBjYmluZCBkYWVtb24uLi4bWz8y
NWwbWz8xYxs3G1sxR1sbWzMybSBvayAbWzM5OzQ5bRs4G1s/MjVoG1s/MGMuDQ0KWy4uLi5dIFN0
YXJ0aW5nIE5GUyBjb21tb24gdXRpbGl0aWVzOiBzdGF0ZCBpZG1hcGQbWz8yNWwbWz8xYxs3G1sx
R1sbWzMybSBvayAbWzM5OzQ5bRs4G1s/MjVoG1s/MGMuDQ0KWy4uLi5dIENsZWFuaW5nIHVwIHRl
bXBvcmFyeSBmaWxlcy4uLhtbPzI1bBtbPzFjGzcbWzFHWxtbMzJtIG9rIBtbMzk7NDltGzgbWz8y
NWgbWz8wYy4NDQpbG1szNm1pbmZvG1szOTs0OW1dIFNldHRpbmcgY29uc29sZSBzY3JlZW4gbW9k
ZXMgYW5kIGZvbnRzLg0NCnNldHRlcm06IGNhbm5vdCAodW4pc2V0IHBvd2Vyc2F2ZSBtb2RlOiBJ
bnZhbGlkIGFyZ3VtZW50DQ0KG1s5OzMwXRtbMTQ7MzBdDUlOSVQ6IEVudGVyaW5nIHJ1bmxldmVs
OiAyDQ0NClsbWzM2bWluZm8bWzM5OzQ5bV0gVXNpbmcgbWFrZWZpbGUtc3R5bGUgY29uY3VycmVu
dCBib290IGluIHJ1bmxldmVsIDIuDQ0KWy4uLi5dIFN0YXJ0aW5nIE5GUyBjb21tb24gdXRpbGl0
aWVzOiBzdGF0ZCBpZG1hcGQbWz8yNWwbWz8xYxs3G1sxR1sbWzMybSBvayAbWzM5OzQ5bRs4G1s/
MjVoG1s/MGMuDQ0KWy4uLi5dIFN0YXJ0aW5nIHJwY2JpbmQgZGFlbW9uLi4uWy4uLi5dIEFscmVh
ZHkgcnVubmluZy4bWz8yNWwbWz8xYxs3G1sxR1sbWzMybSBvayAbWzM5OzQ5bRs4G1s/MjVoG1s/
MGMuDQ0KWy4uLi5dIFN0YXJ0aW5nIGVuaGFuY2VkIHN5c2xvZ2Q6IHJzeXNsb2dkG1s/MjVsG1s/
MWMbNxtbMUdbG1szMm0gb2sgG1szOTs0OW0bOBtbPzI1aBtbPzBjLg0NClsuLi4uXSBTdGFydGlu
ZyBBQ1BJIHNlcnZpY2VzLi4uG1s/MjVsG1s/MWMbNxtbMUdbG1szMm0gb2sgG1szOTs0OW0bOBtb
PzI1aBtbPzBjLg0NClsuLi4uXSBTdGFydGluZyBkZWZlcnJlZCBleGVjdXRpb24gc2NoZWR1bGVy
OiBhdGQbWz8yNWwbWz8xYxs3G1sxR1sbWzMybSBvayAbWzM5OzQ5bRs4G1s/MjVoG1s/MGMuDQ0K
Wy4uLi5dIFN0YXJ0aW5nIHBlcmlvZGljIGNvbW1hbmQgc2NoZWR1bGVyOiBjcm9uG1s/MjVsG1s/
MWMbNxtbMUdbG1szMm0gb2sgG1szOTs0OW0bOBtbPzI1aBtbPzBjLg0NClsuLi4uXSBTdGFydGlu
ZyBzeXN0ZW0gbWVzc2FnZSBidXM6IGRidXMbWz8yNWwbWz8xYxs3G1sxR1sbWzMybSBvayAbWzM5
OzQ5bRs4G1s/MjVoG1s/MGMuDQ0KWy4uLi5dIFN0YXJ0aW5nIE1UQTogZXhpbTQbWz8yNWwbWz8x
Yxs3G1sxR1sbWzMybSBvayAbWzM5OzQ5bRs4G1s/MjVoG1s/MGMuDQ0KWxtbMzZtaW5mbxtbMzk7
NDltXSBOb3Qgc3RhcnRpbmcgaW50ZXJuZXQgc3VwZXJzZXJ2ZXI6IG5vIHNlcnZpY2VzIGVuYWJs
ZWQuDQ0KWy4uLi5dIFN0YXJ0aW5nIE9wZW5CU0QgU2VjdXJlIFNoZWxsIHNlcnZlcjogc3NoZBtb
PzI1bBtbPzFjGzcbWzFHWxtbMzJtIG9rIBtbMzk7NDltGzgbWz8yNWgbWz8wYy4NDQpTdGFydGlu
ZyBveGVuc3RvcmVkLi4uWyAgIDY3Ljc5ODA0MV0gWEVOQlVTOiBVbmFibGUgdG8gcmVhZCBjcHUg
c3RhdGUNDQpbICAgNjcuODAzNTk2XSBYRU5CVVM6IFVuYWJsZSB0byByZWFkIGNwdSBzdGF0ZQ0N
ClsgICA2Ny44MDg5OTJdIFgNDQpFTkJVUzogVW5hYmxlIHRvU2V0dGluZyBkb21haW4gMCBuYW1l
Li4uIHJlYWQgY3B1IHN0YXRlDQ0NCg0KWyAgIDY3LjgxNjQ5NV0gWEVOQlVTOiBVbmFibGUgdG8g
cmVhZCBjcHUgc3RhdGUNDQpbICAgNjcuODIxNTA0XSBYRU5CVVM6IFVuYWJsZSB0byByZWFkIGNw
dSBzdGF0ZQ0NClsgICA2Ny44MjcxNjldIFhFTkJVUzogVW5hYmxlIHRvIHJlYWQgY3B1IHN0YXRl
DQ0KWyAgIDY3LjgzMjA0N10gWEVOQlVTOiBVbmFibGUgdG8gcmVhZCBjcHUgc3RhdGUNDQpbICAg
NjcuODM3MTA4XSBYU3RhcnRpbmcgeGVuY29uc29sZWQuLi5FTkJVUzogVW5hYmxlIHRvDQ0KIHJl
YWQgY3B1IHN0YXRlDQ0KU3RhcnRpbmcgUUVNVSBhcyBkaXNrIGJhY2tlbmQgZm9yIGRvbTANDQpS
dW5uaW5nIHJjLmxvY2FsDQ0KG1tyG1tIG1tKDQ0NCkRlYmlhbiBHTlUvTGludXggd2hlZXp5L3Np
ZCBleGlsZSBodmMwDQ0KDQ0KZXhpbGUgbG9naW46IAo=
--047d7b66f1cd72413004c7744dc4
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--047d7b66f1cd72413004c7744dc4--


From xen-devel-bounces@lists.xen.org Fri Aug 17 11:47:22 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 11:47:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2L13-00063h-Gx; Fri, 17 Aug 2012 11:47:01 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T2L11-00063c-B7
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 11:46:59 +0000
Received: from [85.158.138.51:40109] by server-1.bemta-3.messagelabs.com id
	B2/D1-09327-23F2E205; Fri, 17 Aug 2012 11:46:58 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-174.messagelabs.com!1345204017!19897702!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk0MzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22951 invoked from network); 17 Aug 2012 11:46:58 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-7.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 11:46:58 -0000
X-IronPort-AV: E=Sophos;i="4.77,784,1336348800"; d="scan'208";a="14058699"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 11:46:23 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 17 Aug 2012 12:46:23 +0100
Message-ID: <1345203982.10161.11.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Wei Wang <wei.wang2@amd.com>, xen-devel <xen-devel@lists.xen.org>
Date: Fri, 17 Aug 2012 12:46:22 +0100
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: 684661@bugs.debian.org
Subject: [Xen-devel] Xen BUG at pci_amd_iommu.c:33
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Wei,

A Debian user has hit this message and reported it in
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=684661

I think it is the BUG_ON in:
        struct amd_iommu *find_iommu_for_device(int bdf)
        {
            BUG_ON ( bdf >= ivrs_bdf_entries );
            return ivrs_mappings[bdf].iommu;
        }

It looks like ivrs_bdf_entries comes from ACPI. Unfortunately the bug
report is a bit vague about things like stack traces etc., so it's hard
to say where the offending BDF came from.

Any ideas? I'll also follow up to the submitter to try and get some more
details.

Ian.




I think it is the BUG_ON in:
        struct amd_iommu *find_iommu_for_device(int bdf)
        {
            BUG_ON ( bdf >= ivrs_bdf_entries );
            return ivrs_mappings[bdf].iommu;
        }

It looks like ivrs_bdf_entries comes from ACPI. Unfortunately the bug
report is a bit vague about things like stack traces etc., so it's hard
to say where BDF came from.

Any ideas? I'll also follow up to the submitter to try and get some more
details.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 12:10:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 12:10:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2LNg-0006U7-DU; Fri, 17 Aug 2012 12:10:24 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Wei.Wang2@amd.com>) id 1T2LNe-0006U2-EI
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 12:10:22 +0000
Received: from [85.158.138.51:42098] by server-8.bemta-3.messagelabs.com id
	08/37-29583-DA43E205; Fri, 17 Aug 2012 12:10:21 +0000
X-Env-Sender: Wei.Wang2@amd.com
X-Msg-Ref: server-8.tower-174.messagelabs.com!1345205418!28766898!1
X-Originating-IP: [216.32.180.188]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11026 invoked from network); 17 Aug 2012 12:10:20 -0000
Received: from co1ehsobe005.messaging.microsoft.com (HELO
	co1outboundpool.messaging.microsoft.com) (216.32.180.188)
	by server-8.tower-174.messagelabs.com with AES128-SHA encrypted SMTP;
	17 Aug 2012 12:10:20 -0000
Received: from mail142-co1-R.bigfish.com (10.243.78.230) by
	CO1EHSOBE006.bigfish.com (10.243.66.69) with Microsoft SMTP Server id
	14.1.225.23; Fri, 17 Aug 2012 12:10:18 +0000
Received: from mail142-co1 (localhost [127.0.0.1])	by
	mail142-co1-R.bigfish.com (Postfix) with ESMTP id 2D61088011B;
	Fri, 17 Aug 2012 12:10:18 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:163.181.249.109; KIP:(null); UIP:(null);
	IPV:NLI; H:ausb3twp02.amd.com; RD:none; EFVD:NLI
X-SpamScore: -5
X-BigFish: VPS-5(zzbb2dI98dI9371I1432I4015Izz1202hzz8275dhz2dh668h839hd25he5bhf0ah107ah)
Received: from mail142-co1 (localhost.localdomain [127.0.0.1]) by mail142-co1
	(MessageSwitch) id 1345205416229893_26821;
	Fri, 17 Aug 2012 12:10:16 +0000 (UTC)
Received: from CO1EHSMHS017.bigfish.com (unknown [10.243.78.245])	by
	mail142-co1.bigfish.com (Postfix) with ESMTP id 35D61C40044;
	Fri, 17 Aug 2012 12:10:16 +0000 (UTC)
Received: from ausb3twp02.amd.com (163.181.249.109) by
	CO1EHSMHS017.bigfish.com (10.243.66.27) with Microsoft SMTP Server id
	14.1.225.23; Fri, 17 Aug 2012 12:10:15 +0000
X-WSS-ID: 0M8WF4X-02-A02-02
X-M-MSG: 
Received: from sausexedgep02.amd.com (sausexedgep02-ext.amd.com
	[163.181.249.73])	(using TLSv1 with cipher AES128-SHA (128/128
	bits))	(No
	client certificate requested)	by ausb3twp02.amd.com (Axway MailGate
	3.8.1)
	with ESMTP id 2120EC8111;	Fri, 17 Aug 2012 07:10:09 -0500 (CDT)
Received: from SAUSEXDAG05.amd.com (163.181.55.6) by sausexedgep02.amd.com
	(163.181.36.59) with Microsoft SMTP Server (TLS) id 8.3.192.1;
	Fri, 17 Aug 2012 07:10:33 -0500
Received: from storexhtp01.amd.com (172.24.4.3) by sausexdag05.amd.com
	(163.181.55.6) with Microsoft SMTP Server (TLS) id 14.1.323.3;
	Fri, 17 Aug 2012 07:10:13 -0500
Received: from gwo.osrc.amd.com (165.204.16.204) by storexhtp01.amd.com
	(172.24.4.3) with Microsoft SMTP Server id 8.3.213.0; Fri, 17 Aug 2012
	08:10:12 -0400
Received: from [165.204.15.57] (gran.osrc.amd.com [165.204.15.57])	by
	gwo.osrc.amd.com (Postfix) with ESMTP id 19E3D49C20C; Fri, 17 Aug 2012
	13:10:11 +0100 (BST)
Message-ID: <502E34AE.9030105@amd.com>
Date: Fri, 17 Aug 2012 14:10:22 +0200
From: Wei Wang <wei.wang2@amd.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:12.0) Gecko/20120421 Thunderbird/12.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1345203982.10161.11.camel@zakaz.uk.xensource.com>
In-Reply-To: <1345203982.10161.11.camel@zakaz.uk.xensource.com>
X-OriginatorOrg: amd.com
Cc: 684661@bugs.debian.org, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen BUG at pci_amd_iommu.c:33
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/17/2012 01:46 PM, Ian Campbell wrote:
> Hi Wei,
>
> A Debian user has hit this message and reported it in
> http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=684661
>
> I think it is the BUG_ON in:
>          struct amd_iommu *find_iommu_for_device(int bdf)
>          {
>              BUG_ON ( bdf >= ivrs_bdf_entries );
>              return ivrs_mappings[bdf].iommu;
>          }
>
> It looks like ivrs_bdf_entries comes from ACPI. Unfortunately the bug
> report is a bit vague about things like stack traces etc so it's hard to
> say where BDF came from.
>
> Any ideas? I'll also follow up to the submitter to try and get some more
> details.

It is difficult to identify the issue from the description alone; a
full serial boot log would be great. Also, using iommu=debug as a boot
option is very helpful for getting more information.
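For anyone reproducing this: iommu=debug belongs on the Xen hypervisor command line, not the dom0 kernel line. A sketch of the GRUB 2 setup on a Debian-style system (the variable name is the one consumed by /etc/grub.d/20_linux_xen; the serial-console options are illustrative assumptions, adjust for your hardware):

```shell
# /etc/default/grub: options appended to the Xen hypervisor line
GRUB_CMDLINE_XEN_DEFAULT="iommu=debug loglvl=all console=com1 com1=115200n8"
# afterwards, regenerate grub.cfg and reboot:
#   update-grub
```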

Thanks,
Wei

> Ian.
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
>



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 13:08:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 13:08:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2MHZ-0006qb-AD; Fri, 17 Aug 2012 13:08:09 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T2MHX-0006qW-OK
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 13:08:07 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1345208879!8932130!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk0MzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28080 invoked from network); 17 Aug 2012 13:07:59 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 13:07:59 -0000
X-IronPort-AV: E=Sophos;i="4.77,784,1336348800"; d="scan'208";a="14060185"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 13:07:59 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 17 Aug 2012 14:07:59 +0100
Message-ID: <1345208877.10161.17.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Wei Wang <wei.wang2@amd.com>
Date: Fri, 17 Aug 2012 14:07:57 +0100
In-Reply-To: <502E34AE.9030105@amd.com>
References: <1345203982.10161.11.camel@zakaz.uk.xensource.com>
	<502E34AE.9030105@amd.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen BUG at pci_amd_iommu.c:33
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-08-17 at 13:10 +0100, Wei Wang wrote:
> On 08/17/2012 01:46 PM, Ian Campbell wrote:
> > Hi Wei,
> >
> > A Debian user has hit this message and reported it in
> > http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=684661
> >
> > I think it is the BUG_ON in:
> >          struct amd_iommu *find_iommu_for_device(int bdf)
> >          {
> >              BUG_ON ( bdf >= ivrs_bdf_entries );
> >              return ivrs_mappings[bdf].iommu;
> >          }
> >
> > It looks like ivrs_bdf_entries comes from ACPI. Unfortunately the bug
> > report is a bit vague about things like stack traces etc so it's hard to
> > say where BDF came from.
> >
> > Any ideas? I'll also follow up to the submitter to try and get some more
> > details.
> 
> It is difficult to identify the issue from the description alone; a
> full serial boot log would be great. Also, using iommu=debug as a boot
> option is very helpful for getting more information.

Thanks, I've added that to my request to the submitter.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 13:15:24 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 13:15:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2MO0-0006ze-4x; Fri, 17 Aug 2012 13:14:48 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T2MNz-0006zU-10
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 13:14:47 +0000
Received: from [85.158.143.35:17257] by server-1.bemta-4.messagelabs.com id
	5E/C4-07754-6C34E205; Fri, 17 Aug 2012 13:14:46 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1345209272!12779664!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk0MzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3402 invoked from network); 17 Aug 2012 13:14:32 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 13:14:32 -0000
X-IronPort-AV: E=Sophos;i="4.77,784,1336348800"; d="scan'208";a="14060343"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 13:13:46 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 17 Aug 2012 14:13:46 +0100
Message-ID: <1345209224.10161.21.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: M A Young <m.a.young@durham.ac.uk>
Date: Fri, 17 Aug 2012 14:13:44 +0100
In-Reply-To: <alpine.DEB.2.00.1208121853070.7313@vega-a.dur.ac.uk>
References: <alpine.DEB.2.00.1207241956230.14506@vega-c.dur.ac.uk>
	<20120724193604.GB29124@phenom.dumpdata.com>
	<1343205815.18971.43.camel@zakaz.uk.xensource.com>
	<alpine.DEB.2.00.1208121853070.7313@vega-a.dur.ac.uk>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [PATCH] Re:  remove dependency on PyXML from xen?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, 2012-08-12 at 19:03 +0100, M A Young wrote:
> On Wed, 25 Jul 2012, Ian Campbell wrote:
> 
> > On Tue, 2012-07-24 at 20:36 +0100, Konrad Rzeszutek Wilk wrote:
> >> On Tue, Jul 24, 2012 at 08:04:30PM +0100, M A Young wrote:
> >>> Fedora is keen to stop using PyXML and I have been sent a bug report
> >>> https://bugzilla.redhat.com/show_bug.cgi?id=842843 which includes a
> >>> patch to remove the use of XMLPrettyPrint in
> >>> tools/python/xen/xm/create.py . I am going to try this in the Fedora
> >>> build, but I was wondering if it makes sense to do this in the
> >>> official xen releases as well.
> >>
> >> Yes.
> >
> > Agreed.
> >
> > According to the bug we've already moved from PyXML to lxml
> > (22235:b8cc53d22545 from the looks of it) and simply missed this one use
> > of PyXML.
> 
> Here is the patch from that bug (trivially) rebased to 4.2.

Acked-by: Ian Campbell <ian.campbell@citrix.com>

and pushed. Is the following section of README still accurate? At least
the mention of "PyXML" seems wrong to me.
        
        Python Runtime Libraries
        ========================
        
        Xend (the Xen daemon) has the following runtime dependencies:
        
            * Python 2.3 or later.
              In some distros, the XML-aspects to the standard library
              (xml.dom.minidom etc) are broken out into a separate python-xml package.
              This is also required.
              In more recent versions of Debian and Ubuntu the XML-aspects are included
              in the base python package however (python-xml has been removed
              from Debian in squeeze and from Ubuntu in intrepid).
        
                  URL:    http://www.python.org/
                  Debian: python
        
            * For optional SSL support, pyOpenSSL:
                  URL:    http://pyopenssl.sourceforge.net/
                  Debian: python-pyopenssl
        
            * For optional PAM support, PyPAM:
                  URL:    http://www.pangalactic.org/PyPAM/
                  Debian: python-pam
        
            * For optional XenAPI support in XM, PyXML:
                  URL:    http://codespeak.net/lxml/
                  Debian: python-lxml
                  YUM:    python-lxml
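Since the patch drops PyXML's XMLPrettyPrint, it may be worth noting that the stdlib xml.dom.minidom mentioned in that README section already covers the same pretty-printing need without any external package. A minimal sketch (the function name is illustrative, not the actual create.py code):

```python
from xml.dom import minidom

def pretty_print_xml(xml_string):
    # xml.dom.minidom ships with Python itself (no PyXML required);
    # toprettyxml() is the stdlib equivalent of PyXML's XMLPrettyPrint.
    return minidom.parseString(xml_string).toprettyxml(indent="  ")

print(pretty_print_xml("<vm><name>guest1</name></vm>"))
```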



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 13:15:24 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 13:15:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2MO0-0006ze-4x; Fri, 17 Aug 2012 13:14:48 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T2MNz-0006zU-10
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 13:14:47 +0000
Received: from [85.158.143.35:17257] by server-1.bemta-4.messagelabs.com id
	5E/C4-07754-6C34E205; Fri, 17 Aug 2012 13:14:46 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1345209272!12779664!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk0MzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3402 invoked from network); 17 Aug 2012 13:14:32 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 13:14:32 -0000
X-IronPort-AV: E=Sophos;i="4.77,784,1336348800"; d="scan'208";a="14060343"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 13:13:46 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 17 Aug 2012 14:13:46 +0100
Message-ID: <1345209224.10161.21.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: M A Young <m.a.young@durham.ac.uk>
Date: Fri, 17 Aug 2012 14:13:44 +0100
In-Reply-To: <alpine.DEB.2.00.1208121853070.7313@vega-a.dur.ac.uk>
References: <alpine.DEB.2.00.1207241956230.14506@vega-c.dur.ac.uk>
	<20120724193604.GB29124@phenom.dumpdata.com>
	<1343205815.18971.43.camel@zakaz.uk.xensource.com>
	<alpine.DEB.2.00.1208121853070.7313@vega-a.dur.ac.uk>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [PATCH] Re:  remove dependency on PyXML from xen?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, 2012-08-12 at 19:03 +0100, M A Young wrote:
> On Wed, 25 Jul 2012, Ian Campbell wrote:
> 
> > On Tue, 2012-07-24 at 20:36 +0100, Konrad Rzeszutek Wilk wrote:
> >> On Tue, Jul 24, 2012 at 08:04:30PM +0100, M A Young wrote:
> >>> Fedora is keen to stop using PyXML and I have been sent a bug report
> >>> https://bugzilla.redhat.com/show_bug.cgi?id=842843 which includes a
> >>> patch to remove the use of XMLPrettyPrint in
> >>> tools/python/xen/xm/create.py . I am going to try this in the Fedora
> >>> build, but I was wondering if it makes sense to do this in the
> >>> official xen releases as well.
> >>
> >> Yes.
> >
> > Agreed.
> >
> > According to the bug we've already moved from PyXML to lxml
> > (22235:b8cc53d22545 from the looks of it) and simply missed this one use
> > of PyXML.
> 
> Here is the patch from that bug (trivially) rebased to 4.2.

Acked-by: Ian Campbell <ian.campbell@citrix.com>

and pushed. Is the following section of README still accurate? At least
the mention of "PyXML" seems wrong to me.
        
        Python Runtime Libraries
        ========================
        
        Xend (the Xen daemon) has the following runtime dependencies:
        
            * Python 2.3 or later.
              In some distros, the XML-aspects to the standard library
              (xml.dom.minidom etc) are broken out into a separate python-xml package.
              This is also required.
              In more recent versions of Debian and Ubuntu the XML-aspects are included
              in the base python package however (python-xml has been removed
              from Debian in squeeze and from Ubuntu in intrepid).
        
                  URL:    http://www.python.org/
                  Debian: python
        
            * For optional SSL support, pyOpenSSL:
                  URL:    http://pyopenssl.sourceforge.net/
                  Debian: python-pyopenssl
        
            * For optional PAM support, PyPAM:
                  URL:    http://www.pangalactic.org/PyPAM/
                  Debian: python-pam
        
            * For optional XenAPI support in XM, PyXML:
                  URL:    http://codespeak.net/lxml/
                  Debian: python-lxml
                  YUM:    python-lxml
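
For readers hitting this thread later: the standard library alone can do what PyXML's XMLPrettyPrint did, which is the substance of the patch being acked above. A minimal sketch (the helper name and sample document are hypothetical illustrations, not the actual code from the bug):

```python
from xml.dom.minidom import parseString

# Pretty-print an XML document using only the standard library's
# xml.dom.minidom (available since Python 2.3), i.e. without PyXML's
# XMLPrettyPrint.
def pretty_print(xml_text, indent="  "):
    return parseString(xml_text).toprettyxml(indent=indent)

print(pretty_print("<domain><name>demo</name></domain>"))
```

The python-lxml package listed above offers the same via etree.tostring(..., pretty_print=True), but for this one call site the stdlib suffices.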



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 13:16:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 13:16:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2MPb-00074b-Kq; Fri, 17 Aug 2012 13:16:27 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T2MPa-00074N-F3
	for xen-devel@lists.xensource.com; Fri, 17 Aug 2012 13:16:26 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1345209374!9395307!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjk2OTUy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23282 invoked from network); 17 Aug 2012 13:16:16 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-3.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Aug 2012 13:16:16 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7HDGBiC014554
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 17 Aug 2012 13:16:12 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7HDGBuU024440
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 17 Aug 2012 13:16:11 GMT
Received: from abhmt115.oracle.com (abhmt115.oracle.com [141.146.116.67])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7HDGAxt006478; Fri, 17 Aug 2012 08:16:10 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 17 Aug 2012 06:16:10 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id AB236402B8; Fri, 17 Aug 2012 09:06:21 -0400 (EDT)
Date: Fri, 17 Aug 2012 09:06:21 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: David Vrabel <david.vrabel@citrix.com>
Message-ID: <20120817130621.GC31903@phenom.dumpdata.com>
References: <1345132214-15298-1-git-send-email-konrad.wilk@oracle.com>
	<1345132214-15298-2-git-send-email-konrad.wilk@oracle.com>
	<20120816173215.GB9790@phenom.dumpdata.com>
	<20120816210206.GA17966@phenom.dumpdata.com>
	<502E2784.8060806@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <502E2784.8060806@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: xen-devel@lists.xensource.com, linux-kernel@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH 1/2] xen/p2m: Fix for 32-bit builds the
 "Reserve 8MB of _brk space for P2M"
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Aug 17, 2012 at 12:14:12PM +0100, David Vrabel wrote:
> On 16/08/12 22:02, Konrad Rzeszutek Wilk wrote:
> > 
> > So I thought about this some more and came up with this patch. It's an
> > RFC; I'm going to run it through some overnight tests to see how they fare.
> > 
> > 
> > commit da858a92dbeb52fb3246e3d0f1dd57989b5b1734
> > Author: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > Date:   Fri Jul 27 16:05:47 2012 -0400
> > 
> >     xen/p2m: Reuse existing P2M leafs if they are filled with 1:1 PFNs or INVALID.
> >     
> >     If a P2M leaf is completely packed with INVALID_P2M_ENTRY or with
> >     1:1 PFNs (that is, IDENTITY_FRAME-type PFNs), we can swap the P2M leaf
> >     with either p2m_missing or p2m_identity respectively. The old
> >     page (which was created via extend_brk or was grafted on from the
> >     mfn_list) can be re-used for setting new PFNs.
> 
> Does this actually find any p2m pages to reclaim?

Very much so. When I run the kernel without dom0_mem, I end up returning
around 372300 pages and then populating them back - they (mostly)
all get to re-use the transplanted mfn_list.

The ones in the 9a-100 range obviously don't.
> 
> xen_set_identity_and_release() is careful to set the largest possible
> range as 1:1 and the comments at the top of p2m.c suggest the mid
> entries will be made to point to p2m_identity already.

Right, and that is still true - for cases where there are no mid entries
(so P2M[3][400], for example, can point into the middle of the MMIO region).

But if you boot without dom0_mem=max, that region (P2M[3][400]) would
initially be backed by the &mfn_list, so when we mark that region 1:1
we end up sticking a whole bunch of IDENTITY_FRAME(pfn) entries in the &mfn_list.

This patch harvests the chunks of the &mfn_list that are filled that way and re-uses them.

And without any dom0_mem= I seem to call extend_brk at most twice (to
allocate the top leafs P2M[4] and P2M[5]). Hm, to be on the safe side I should
probably do 'reserve_brk(p2m_populated, 3 * PAGE_SIZE)' in case we
end up transplanting 3GB of PFNs in the P2M[4], P2M[5] and P2M[6] nodes.
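
For anyone following the P2M[x][y] notation in this thread, the three-level index arithmetic can be modelled in a few lines. This is a toy model only (written in Python for brevity; the 512-entries-per-page figure assumes a 64-bit build, and the real functions are p2m_top_index()/p2m_mid_index()/p2m_index() in arch/x86/xen/p2m.c):

```python
# Toy model of the three-level P2M index arithmetic. Sizes assume a
# 64-bit build: PAGE_SIZE / sizeof(unsigned long) = 512 entries.
P2M_PER_PAGE = 512      # pfn entries per leaf page
P2M_MID_PER_PAGE = 512  # leaf pointers per mid page

def p2m_top_index(pfn):
    return pfn // (P2M_MID_PER_PAGE * P2M_PER_PAGE)

def p2m_mid_index(pfn):
    return (pfn // P2M_PER_PAGE) % P2M_MID_PER_PAGE

def p2m_index(pfn):
    return pfn % P2M_PER_PAGE

# The first pfn covered by the leaf at P2M[3][400]:
pfn = (3 * P2M_MID_PER_PAGE + 400) * P2M_PER_PAGE
print(p2m_top_index(pfn), p2m_mid_index(pfn), p2m_index(pfn))  # 3 400 0
```

The early_can_reuse_p2m_middle() check `if (p2m_index(set_pfn))` in the patch relies on exactly this: a set_pfn with a zero leaf index sits on a leaf-page boundary, so a whole leaf can be swapped.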

> 
> David
> 
> >     This also means we can remove git commit:
> >     5bc6f9888db5739abfa0cae279b4b442e4db8049
> >     xen/p2m: Reserve 8MB of _brk space for P2M leafs when populating back
> >     which tried to fix this.
> >     
> >     Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > 
> > diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
> > index 29244d0..b6b7c10 100644
> > --- a/arch/x86/xen/p2m.c
> > +++ b/arch/x86/xen/p2m.c
> > @@ -194,11 +194,6 @@ RESERVE_BRK(p2m_mid_mfn, PAGE_SIZE * (MAX_DOMAIN_PAGES / (P2M_PER_PAGE * P2M_MID
> >   * boundary violation will require three middle nodes. */
> >  RESERVE_BRK(p2m_mid_identity, PAGE_SIZE * 2 * 3);
> >  
> > -/* When we populate back during bootup, the amount of pages can vary. The
> > - * max we have is seen is 395979, but that does not mean it can't be more.
> > - * Some machines can have 3GB I/O holes so lets reserve for that. */
> > -RESERVE_BRK(p2m_populated, 786432 * sizeof(unsigned long));
> > -
> >  static inline unsigned p2m_top_index(unsigned long pfn)
> >  {
> >  	BUG_ON(pfn >= MAX_P2M_PFN);
> > @@ -575,12 +570,99 @@ static bool __init early_alloc_p2m(unsigned long pfn)
> >  	}
> >  	return true;
> >  }
> > +
> > +/*
> > + * Skim over the P2M tree looking at pages that are either filled with
> > + * INVALID_P2M_ENTRY or with 1:1 PFNs. If found, re-use that page and
> > + * replace the P2M leaf with a p2m_missing or p2m_identity.
> > + * Stick the old page in the new P2M tree location.
> > + */
> > +bool __init early_can_reuse_p2m_middle(unsigned long set_pfn, unsigned long set_mfn)
> > +{
> > +	unsigned topidx;
> > +	unsigned mididx;
> > +	unsigned ident_pfns;
> > +	unsigned inv_pfns;
> > +	unsigned long *p2m;
> > +	unsigned long *mid_mfn_p;
> > +	unsigned idx;
> > +	unsigned long pfn;
> > +
> > +	/* We only look when this entails a P2M middle layer */
> > +	if (p2m_index(set_pfn))
> > +		return false;
> > +
> > +	for (pfn = 0; pfn <= MAX_DOMAIN_PAGES; pfn += P2M_PER_PAGE) {
> > +		topidx = p2m_top_index(pfn);
> > +
> > +		if (!p2m_top[topidx])
> > +			continue;
> > +
> > +		if (p2m_top[topidx] == p2m_mid_missing)
> > +			continue;
> > +
> > +		mididx = p2m_mid_index(pfn);
> > +		p2m = p2m_top[topidx][mididx];
> > +		if (!p2m)
> > +			continue;
> > +
> > +		if ((p2m == p2m_missing) || (p2m == p2m_identity))
> > +			continue;
> > +
> > +		if ((unsigned long)p2m == INVALID_P2M_ENTRY)
> > +			continue;
> > +
> > +		ident_pfns = 0;
> > +		inv_pfns = 0;
> > +		for (idx = 0; idx < P2M_PER_PAGE; idx++) {
> > +			/* IDENTITY_PFNs are 1:1 */
> > +			if (p2m[idx] == IDENTITY_FRAME(pfn + idx))
> > +				ident_pfns++;
> > +			else if (p2m[idx] == INVALID_P2M_ENTRY)
> > +				inv_pfns++;
> > +			else
> > +				break;
> > +		}
> > +		if ((ident_pfns == P2M_PER_PAGE) || (inv_pfns == P2M_PER_PAGE))
> > +			goto found;
> > +	}
> > +	return false;
> > +found:
> > +	/* Found one, replace old with p2m_identity or p2m_missing */
> > +	p2m_top[topidx][mididx] = (ident_pfns ? p2m_identity : p2m_missing);
> > +	/* And the other for save/restore.. */
> > +	mid_mfn_p = p2m_top_mfn_p[topidx];
> > +	/* NOTE: Even if it is a p2m_identity it should still be point to
> > +	 * a page filled with INVALID_P2M_ENTRY entries. */
> > +	mid_mfn_p[mididx] = virt_to_mfn(p2m_missing);
> > +
> > +	/* Reset where we want to stick the old page in. */
> > +	topidx = p2m_top_index(set_pfn);
> > +	mididx = p2m_mid_index(set_pfn);
> > +
> > +	/* This shouldn't happen */
> > +	if (WARN_ON(p2m_top[topidx] == p2m_mid_missing))
> > +		early_alloc_p2m(set_pfn);
> > +
> > +	if (WARN_ON(p2m_top[topidx][mididx] != p2m_missing))
> > +		return false;
> > +
> > +	p2m_init(p2m);
> > +	p2m_top[topidx][mididx] = p2m;
> > +	mid_mfn_p = p2m_top_mfn_p[topidx];
> > +	mid_mfn_p[mididx] = virt_to_mfn(p2m);
> > +
> > +	return true;
> > +}
> >  bool __init early_set_phys_to_machine(unsigned long pfn, unsigned long mfn)
> >  {
> >  	if (unlikely(!__set_phys_to_machine(pfn, mfn)))  {
> >  		if (!early_alloc_p2m(pfn))
> >  			return false;
> >  
> > +		if (early_can_reuse_p2m_middle(pfn, mfn))
> > +			return __set_phys_to_machine(pfn, mfn);
> > +
> >  		if (!early_alloc_p2m_middle(pfn, false /* boundary crossover OK!*/))
> >  			return false;
> >  
> > 
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 13:18:18 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 13:18:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2MR3-0007AG-47; Fri, 17 Aug 2012 13:17:57 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T2MR1-0007A6-Jp
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 13:17:55 +0000
Received: from [85.158.139.83:59119] by server-12.bemta-5.messagelabs.com id
	F2/93-22359-2844E205; Fri, 17 Aug 2012 13:17:54 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-4.tower-182.messagelabs.com!1345209471!25958340!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDcwNDM2NA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4560 invoked from network); 17 Aug 2012 13:17:53 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-4.tower-182.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Aug 2012 13:17:53 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7HDHk04004927
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 17 Aug 2012 13:17:47 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7HDHkc4016575
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 17 Aug 2012 13:17:46 GMT
Received: from abhmt115.oracle.com (abhmt115.oracle.com [141.146.116.67])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7HDHku9024183; Fri, 17 Aug 2012 08:17:46 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 17 Aug 2012 06:17:46 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 4392F402B8; Fri, 17 Aug 2012 09:07:57 -0400 (EDT)
Date: Fri, 17 Aug 2012 09:07:57 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: George Dunlap <George.Dunlap@eu.citrix.com>
Message-ID: <20120817130757.GD31903@phenom.dumpdata.com>
References: <CAFLBxZZaC8NM5Xk5nqHBMA-SftkomuG1VAcWvaqh4rac5hCi7Q@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAFLBxZZaC8NM5Xk5nqHBMA-SftkomuG1VAcWvaqh4rac5hCi7Q@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Failure to boot default Debian wheezy (pvops)
	kernel on 4.2-rc2
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Aug 17, 2012 at 12:17:15PM +0100, George Dunlap wrote:
> I just tried to install Xen-4.2.0-rc2 on a Debian wheezy system, but
> couldn't boot under Xen 4.2.  The box is an 8-core AMD, I think
> Barcelona.  The wheezy kernel is 3.2.21-3, 32-bit version.
> 
> The problems seem to have started here:
> 
> -- snip --
> [    0.060280] ACPI: Core revision 20110623
> [    0.072384] Performance Events: Broken BIOS detected, complain to
> your hardware vendor.
> [    0.076014] [Firmware Bug]: the BIOS has corrupted hw-PMU resources
> (MSR c0010000 is 530076)
> [    0.080007] AMD PMU driver.
> [    0.082864] ------------[ cut here ]------------
> [    0.084018] WARNING: at
> /build/buildd-linux_3.2.21-3-i386-vEohn4/linux-3.2.21/arch/x86/xen/enlighten.c:738
> perf_events_lapic_init+0x28/0x29()
> [    0.088009] Hardware name: empty
> [    0.091299] Modules linked in:
> [    0.092275] Pid: 1, comm: swapper/0 Not tainted 3.2.0-3-686-pae #1
> [    0.096008] Call Trace:
> [    0.098527]  [<c1037fcc>] ? warn_slowpath_common+0x68/0x79
> [    0.100019]  [<c10150d2>] ? perf_events_lapic_init+0x28/0x29
> [    0.104010]  [<c1037fea>] ? warn_slowpath_null+0xd/0x10
> [    0.108011]  [<c10150d2>] ? perf_events_lapic_init+0x28/0x29
> [    0.112015]  [<c141c97e>] ? init_hw_perf_events+0x223/0x3b1
> [    0.116012]  [<c141c75b>] ? check_bugs+0x1d9/0x1d9
> [    0.120012]  [<c1003074>] ? do_one_initcall+0x66/0x10e
> [    0.124012]  [<c1415770>] ? kernel_init+0x6d/0x125
> [    0.128012]  [<c1415703>] ? start_kernel+0x325/0x325
> [    0.132015]  [<c12c463e>] ? kernel_thread_helper+0x6/0x10
> [    0.136019] ---[ end trace a7919e7f17c0a725 ]---
> -- snip --
> 
> And pretty soon degenerated into log message spamming of this sort:
> 
> -- snip --
> (XEN) traps.c:2584:d0 Domain attempted WRMSR 00000000c0010000 from
> 0x0000000000530076 to 0x0000000000130076.
> (XEN) traps.c:2584:d0 Domain attempted WRMSR 00000000c0010000 from
> 0x0000000000530076 to 0x0000000000130076.
> (XEN) traps.c:2584:d0 Domain attempted WRMSR 00000000c0010000 from
> 0x0000000000530076 to 0x0000000000130076.
> (XEN) traps.c:2584:d0 Domain attempted WRMSR 00000000c0010000 from
> 0x0000000000530076 to 0x0000000000130076.
> (XEN) traps.c:2584:d0 Domain attempted WRMSR 00000000c0010000 from
> 0x0000000000530076 to 0x0000000000130076.
> -- snip --
> 
> The serial log is attached ("exile.log").
> 
> An earlier kernel I had lying around, 2.6.32.25 (perhaps one of
> Jeremy's?) boots fine; the serial log is also attached
> ("exile-good.log").  It also seems ot have the WARN above, so maybe
> that's not actually the issue.
> 
> Any ideas?

Implement the perf framework to work with Xen's oprofile, or make a new
set of hypercalls for it.

The WARN can go away - it's there to remind us to get it done at some point :-(
> 
>  -George




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> 0x0000000000530076 to 0x0000000000130076.^M
> (XEN) traps.c:2584:d0 Domain attempted WRMSR 00000000c0010000 from
> 0x0000000000530076 to 0x0000000000130076.^M
> -- snip --
> 
> The serial log is attached ("exile.log").
> 
> An earlier kernel I had lying around, 2.6.32.25 (perhaps one of
> Jeremy's?) boots fine; the serial log is also attached
> ("exile-good.log").  It also seems to have the WARN above, so maybe
> that's not actually the issue.
> 
> Any ideas?

Implement the perf framework to work with Xen's oprofile, or make a new
set of hypercalls for it.

The WARN can go away - it's there to remind us to get it done at some point :-(
> 
>  -George




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 13:22:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 13:22:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2MV8-0007Pg-QI; Fri, 17 Aug 2012 13:22:10 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@eu.citrix.com>) id 1T2MV7-0007PT-Jx
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 13:22:09 +0000
X-Env-Sender: George.Dunlap@eu.citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1345209709!9396536!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjQ1NzE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14841 invoked from network); 17 Aug 2012 13:21:50 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 13:21:50 -0000
X-IronPort-AV: E=Sophos;i="4.77,784,1336363200"; d="scan'208";a="34976827"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 09:21:48 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 17 Aug 2012 09:21:48 -0400
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1T2MUm-0003DF-6B;
	Fri, 17 Aug 2012 14:21:48 +0100
Message-ID: <502E449B.2030502@eu.citrix.com>
Date: Fri, 17 Aug 2012 14:18:19 +0100
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120714 Thunderbird/14.0
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <CAFLBxZZaC8NM5Xk5nqHBMA-SftkomuG1VAcWvaqh4rac5hCi7Q@mail.gmail.com>
	<20120817130757.GD31903@phenom.dumpdata.com>
In-Reply-To: <20120817130757.GD31903@phenom.dumpdata.com>
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Failure to boot default Debian wheezy (pvops)
	kernel on 4.2-rc2
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 17/08/12 14:07, Konrad Rzeszutek Wilk wrote:
> On Fri, Aug 17, 2012 at 12:17:15PM +0100, George Dunlap wrote:
>> I just tried to install Xen-4.2.0-rc2 on a Debian wheezy system, but
>> couldn't boot under Xen 4.2.  The box is an 8-core AMD, I think
>> Barcelona.  The wheezy kernel is 3.2.21-3, 32-bit version.
>>
>> The problems seem to have started here:
>>
>> -- snip --
>> [    0.060280] ACPI: Core revision 20110623^M^M
>> [    0.072384] Performance Events: Broken BIOS detected, complain to
>> your hardware vendor.^M^M
>> [    0.076014] [Firmware Bug]: the BIOS has corrupted hw-PMU resources
>> (MSR c0010000 is 530076)^M^M
>> [    0.080007] AMD PMU driver.^M^M
>> [    0.082864] ------------[ cut here ]------------^M^M
>> [    0.084018] WARNING: at
>> /build/buildd-linux_3.2.21-3-i386-vEohn4/linux-3.2.21/arch/x86/xen/enlighten.c:738
>> perf_events_lapic_init+0x28/0x29()^M^M
>> [    0.088009] Hardware name: empty^M^M
>> [    0.091299] Modules linked in:^M^M
>> [    0.092275] Pid: 1, comm: swapper/0 Not tainted 3.2.0-3-686-pae #1^M^M
>> [    0.096008] Call Trace:^M^M
>> [    0.098527]  [<c1037fcc>] ? warn_slowpath_common+0x68/0x79^M^M
>> [    0.100019]  [<c10150d2>] ? perf_events_lapic_init+0x28/0x29^M^M
>> [    0.104010]  [<c1037fea>] ? warn_slowpath_null+0xd/0x10^M^M
>> [    0.108011]  [<c10150d2>] ? perf_events_lapic_init+0x28/0x29^M^M
>> [    0.112015]  [<c141c97e>] ? init_hw_perf_events+0x223/0x3b1^M^M
>> [    0.116012]  [<c141c75b>] ? check_bugs+0x1d9/0x1d9^M^M
>> [    0.120012]  [<c1003074>] ? do_one_initcall+0x66/0x10e^M^M
>> [    0.124012]  [<c1415770>] ? kernel_init+0x6d/0x125^M^M
>> [    0.128012]  [<c1415703>] ? start_kernel+0x325/0x325^M^M
>> [    0.132015]  [<c12c463e>] ? kernel_thread_helper+0x6/0x10^M^M
>> [    0.136019] ---[ end trace a7919e7f17c0a725 ]---^M^M
>> -- snip --
>>
>> And pretty soon degenerated into log message spamming of this sort:
>>
>> -- snip --
>> (XEN) traps.c:2584:d0 Domain attempted WRMSR 00000000c0010000 from
>> 0x0000000000530076 to 0x0000000000130076.^M
>> (XEN) traps.c:2584:d0 Domain attempted WRMSR 00000000c0010000 from
>> 0x0000000000530076 to 0x0000000000130076.^M
>> (XEN) traps.c:2584:d0 Domain attempted WRMSR 00000000c0010000 from
>> 0x0000000000530076 to 0x0000000000130076.^M
>> (XEN) traps.c:2584:d0 Domain attempted WRMSR 00000000c0010000 from
>> 0x0000000000530076 to 0x0000000000130076.^M
>> (XEN) traps.c:2584:d0 Domain attempted WRMSR 00000000c0010000 from
>> 0x0000000000530076 to 0x0000000000130076.^M
>> -- snip --
>>
>> The serial log is attached ("exile.log").
>>
>> An earlier kernel I had lying around, 2.6.32.25 (perhaps one of
>> Jeremy's?) boots fine; the serial log is also attached
>> ("exile-good.log").  It also seems to have the WARN above, so maybe
>> that's not actually the issue.
>>
>> Any ideas?
> Implement the perf framework to work with Xen's oprofile, or make a new
> set of hypercalls for it.
>
> The WARN can go away - it's there to remind us to get it done at some point :-(
OK, but is there a way I can actually get it to boot?  I think the WRMSR 
is probably the real problem.
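
[Editorial aside, not part of the original mail: a quick sketch decoding the
two WRMSR values from the Xen log above. They differ only in bit 22, which on
AMD PerfEvtSel MSRs (MSR c001_0000 is PerfEvtSel0) is the EN counter-enable
bit - i.e. the kernel is merely trying to disable the counter the BIOS left
enabled, and Xen refuses the write.]

```python
# Decode the WRMSR attempt logged by Xen: dom0 tries to change
# MSR c0010000 (AMD PerfEvtSel0) from 0x530076 to 0x130076.
old, new = 0x0000000000530076, 0x0000000000130076

diff = old ^ new                 # bits the kernel is trying to flip
print(hex(diff))                 # -> 0x400000
print(diff == 1 << 22)           # -> True: only bit 22 (EN) changes
```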

  -George


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 13:28:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 13:28:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2MbM-0007bh-Rc; Fri, 17 Aug 2012 13:28:36 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T2MbK-0007bc-UF
	for xen-devel@lists.xensource.com; Fri, 17 Aug 2012 13:28:35 +0000
Received: from [85.158.138.51:28509] by server-7.bemta-3.messagelabs.com id
	FC/B8-01906-1074E205; Fri, 17 Aug 2012 13:28:33 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-8.tower-174.messagelabs.com!1345210111!28781458!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk0MzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22418 invoked from network); 17 Aug 2012 13:28:31 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 13:28:31 -0000
X-IronPort-AV: E=Sophos;i="4.77,784,1336348800"; d="scan'208";a="14060652"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 13:28:30 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 17 Aug 2012 14:28:30 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1T2MbG-0001Xl-D7;
	Fri, 17 Aug 2012 13:28:30 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1T2MbG-00077T-9Y;
	Fri, 17 Aug 2012 14:28:30 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13612-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 17 Aug 2012 14:28:30 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13612: tolerable FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13612 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13612/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13611
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13611
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13611
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13611

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  3468a834be8d
baseline version:
 xen                  3468a834be8d

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 13:29:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 13:29:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2Mbs-0007dD-8q; Fri, 17 Aug 2012 13:29:08 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1T2Mbq-0007d3-0K
	for xen-devel@lists.xensource.com; Fri, 17 Aug 2012 13:29:06 +0000
Received: from [85.158.139.83:28227] by server-6.bemta-5.messagelabs.com id
	82/40-22415-1274E205; Fri, 17 Aug 2012 13:29:05 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-11.tower-182.messagelabs.com!1345210143!21451324!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjQ1NzE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29857 invoked from network); 17 Aug 2012 13:29:04 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 13:29:04 -0000
X-IronPort-AV: E=Sophos;i="4.77,784,1336363200"; d="scan'208";a="34977602"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 09:28:53 -0400
Received: from [10.80.2.76] (10.80.2.76) by FTLPMAILMX02.citrite.net
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0; Fri, 17 Aug 2012
	09:28:53 -0400
Message-ID: <502E4713.9050000@citrix.com>
Date: Fri, 17 Aug 2012 14:28:51 +0100
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20120428 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <1345132214-15298-1-git-send-email-konrad.wilk@oracle.com>
	<1345132214-15298-2-git-send-email-konrad.wilk@oracle.com>
	<20120816173215.GB9790@phenom.dumpdata.com>
	<20120816210206.GA17966@phenom.dumpdata.com>
	<502E2784.8060806@citrix.com>
	<20120817130621.GC31903@phenom.dumpdata.com>
In-Reply-To: <20120817130621.GC31903@phenom.dumpdata.com>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: [Xen-devel] [PATCH 1/2] xen/p2m: Fix for 32-bit builds the
 "Reserve 8MB of _brk space for P2M"
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 17/08/12 14:06, Konrad Rzeszutek Wilk wrote:
> On Fri, Aug 17, 2012 at 12:14:12PM +0100, David Vrabel wrote:
>> On 16/08/12 22:02, Konrad Rzeszutek Wilk wrote:
>>>
>>> So I thought about this some more and came up with this patch. It's an
>>> RFC and I'm going to run it through some overnight tests to see how they fare.
>>>
>>>
>>> commit da858a92dbeb52fb3246e3d0f1dd57989b5b1734
>>> Author: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
>>> Date:   Fri Jul 27 16:05:47 2012 -0400
>>>
>>>     xen/p2m: Reuse existing P2M leafs if they are filled with 1:1 PFNs or INVALID.
>>>     
>>>     If a P2M leaf is completely packed with INVALID_P2M_ENTRY or with
>>>     1:1 PFNs (so IDENTITY_FRAME type PFNs), we can swap the P2M leaf
>>>     with either a p2m_missing or p2m_identity respectively. The old
>>>     page (which was created via extend_brk or was grafted on from the
>>>     mfn_list) can be re-used for setting new PFNs.
>>
>> Does this actually find any p2m pages to reclaim?
> 
> Very much so. When I run the kernel without dom0_mem, I end up returning
> around 372300 pages, and then when populating them back they (mostly)
> all get to re-use the transplanted mfn_list.
> 
> The ones in the 9a-100 range obviously don't.
>>
>> xen_set_identity_and_release() is careful to set the largest possible
>> range as 1:1 and the comments at the top of p2m.c suggest the mid
>> entries will be made to point to p2m_identity already.
> 
> Right, and that is still true - for cases where there are no mid entries
> (so P2M[3][400] for example can point in the middle of the MMIO region).
> 
> But if you boot without dom0_mem=max, that region (P2M[3][400]) would at
> the start be backed by the &mfn_list, so when we call 1-1 on that region
> it ends up sticking in the &mfn_list a whole bunch of IDENTITY_FRAME(pfn).

Ah, I see.  This makes sense now.

> This patch harvests those chunks of &mfn_list that are in that state and re-uses them.
> 
> And without any dom0_mem= I seem to call extend_brk at most twice (to
> allocate the top leafs P2M[4] and P2M[5]). Hm, to be on the safe side I should
> probably do 'reserve_brk(p2m_populated, 3 * PAGE_SIZE)' in case we
> end up transplanting 3GB of PFNs in the P2M[4], P2M[5] and P2M[6] nodes.

That sounds sensible.

David
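[Editor's note] The leaf-reuse check that the quoted commit message describes can be sketched as a small standalone model. This is a hypothetical simplification: P2M_PER_PAGE, INVALID_P2M_ENTRY and IDENTITY_FRAME here mirror, but are not, the real arch/x86/xen/p2m.c definitions, and the sketch assumes a 64-bit unsigned long.

```c
#include <stddef.h>
#include <stdbool.h>

#define P2M_PER_PAGE      512           /* entries per leaf page (64-bit) */
#define INVALID_P2M_ENTRY (~0UL)
#define IDENTITY_BIT      (1UL << 57)   /* stand-in for IDENTITY_FRAME_BIT */
#define IDENTITY_FRAME(pfn) ((pfn) | IDENTITY_BIT)

enum leaf_kind { LEAF_MIXED, LEAF_ALL_INVALID, LEAF_ALL_IDENTITY };

/* Classify one P2M leaf page: its backing page is reclaimable only when
 * every entry is INVALID_P2M_ENTRY, or every entry is the identity
 * mapping of its own PFN. */
static enum leaf_kind classify_leaf(const unsigned long *leaf,
                                    unsigned long first_pfn)
{
    bool all_invalid = true, all_identity = true;
    for (size_t i = 0; i < P2M_PER_PAGE; i++) {
        if (leaf[i] != INVALID_P2M_ENTRY)
            all_invalid = false;
        if (leaf[i] != IDENTITY_FRAME(first_pfn + i))
            all_identity = false;
        if (!all_invalid && !all_identity)
            return LEAF_MIXED;          /* mixed contents: keep the page */
    }
    return all_invalid ? LEAF_ALL_INVALID : LEAF_ALL_IDENTITY;
}
```

A leaf classified as LEAF_ALL_INVALID or LEAF_ALL_IDENTITY can be swapped for the shared p2m_missing or p2m_identity page, freeing the backing page (whether it came from extend_brk or from the grafted mfn_list) for reuse.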

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 13:44:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 13:44:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2Mqw-0007xM-PG; Fri, 17 Aug 2012 13:44:42 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Boris.Ostrovsky@amd.com>) id 1T2Mqv-0007xH-Jm
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 13:44:41 +0000
Received: from [85.158.139.83:5319] by server-11.bemta-5.messagelabs.com id
	04/C6-29296-8CA4E205; Fri, 17 Aug 2012 13:44:40 +0000
X-Env-Sender: Boris.Ostrovsky@amd.com
X-Msg-Ref: server-11.tower-182.messagelabs.com!1345211080!21454432!1
X-Originating-IP: [213.199.154.204]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25544 invoked from network); 17 Aug 2012 13:44:40 -0000
Received: from am1ehsobe001.messaging.microsoft.com (HELO
	am1outboundpool.messaging.microsoft.com) (213.199.154.204)
	by server-11.tower-182.messagelabs.com with AES128-SHA encrypted SMTP;
	17 Aug 2012 13:44:40 -0000
Received: from mail27-am1-R.bigfish.com (10.3.201.254) by
	AM1EHSOBE008.bigfish.com (10.3.204.28) with Microsoft SMTP Server id
	14.1.225.23; Fri, 17 Aug 2012 13:44:39 +0000
Received: from mail27-am1 (localhost [127.0.0.1])	by mail27-am1-R.bigfish.com
	(Postfix) with ESMTP id C30933A0090;
	Fri, 17 Aug 2012 13:44:39 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:163.181.249.109; KIP:(null); UIP:(null);
	IPV:NLI; H:ausb3twp02.amd.com; RD:none; EFVD:NLI
X-SpamScore: -4
X-BigFish: VPS-4(zzbb2dI98dI9371I1432Izz1202hzz8275bh8275dhz2dh668h839hd25he5bhf0ah107ah1155h)
Received: from mail27-am1 (localhost.localdomain [127.0.0.1]) by mail27-am1
	(MessageSwitch) id 1345211077968547_10697;
	Fri, 17 Aug 2012 13:44:37 +0000 (UTC)
Received: from AM1EHSMHS003.bigfish.com (unknown [10.3.201.243])	by
	mail27-am1.bigfish.com (Postfix) with ESMTP id E06C9C0052;
	Fri, 17 Aug 2012 13:44:37 +0000 (UTC)
Received: from ausb3twp02.amd.com (163.181.249.109) by
	AM1EHSMHS003.bigfish.com (10.3.207.103) with Microsoft SMTP Server id
	14.1.225.23; Fri, 17 Aug 2012 13:44:37 +0000
X-WSS-ID: 0M8WJI6-02-EQD-02
X-M-MSG: 
Received: from sausexedgep02.amd.com (sausexedgep02-ext.amd.com
	[163.181.249.73])	(using TLSv1 with cipher AES128-SHA (128/128
	bits))	(No
	client certificate requested)	by ausb3twp02.amd.com (Axway MailGate
	3.8.1)
	with ESMTP id 2E23DC810C;	Fri, 17 Aug 2012 08:44:29 -0500 (CDT)
Received: from SAUSEXDAG06.amd.com (163.181.55.7) by sausexedgep02.amd.com
	(163.181.36.59) with Microsoft SMTP Server (TLS) id 8.3.192.1;
	Fri, 17 Aug 2012 08:44:53 -0500
Received: from [10.234.222.82] (163.181.55.254) by sausexdag06.amd.com
	(163.181.55.7) with Microsoft SMTP Server id 14.1.323.3;
	Fri, 17 Aug 2012 08:44:33 -0500
Message-ID: <502E4AC0.9080502@amd.com>
Date: Fri, 17 Aug 2012 09:44:32 -0400
From: Boris Ostrovsky <boris.ostrovsky@amd.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120713 Thunderbird/14.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <85190245a94d9945b765.1345135279@localhost.localdomain>
	<502E36C00200007800095D8B@nat28.tlf.novell.com>
In-Reply-To: <502E36C00200007800095D8B@nat28.tlf.novell.com>
X-OriginatorOrg: amd.com
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] AMD,
 powernow: Update P-state directly when _PSD's CoordType is
 DOMAIN_COORD_TYPE_HW_ALL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On 08/17/2012 06:19 AM, Jan Beulich wrote:
>>>> On 16.08.12 at 18:41, Boris Ostrovsky <boris.ostrovsky@amd.com> wrote:
>> AMD, powernow: Update P-state directly when _PSD's CoordType is
>> DOMAIN_COORD_TYPE_HW_ALL
>>
>> When _PSD's CoordType is DOMAIN_COORD_TYPE_HW_ALL (i.e. shared_type is
>> CPUFREQ_SHARED_TYPE_HW) which most often is the case on servers, there is no
>> reason to go into the on_selected_cpus() code; we can call transition_pstate()
>> directly.
>>
>> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@amd.com>
>
> Looks good to me in general (one minor comment below, but no
> need to resubmit just because of this). But it's not really clear
> to me whether this actually improves anything (knowing of which
> is needed to decide whether to put this in now or after 4.2).

I didn't see any noticeable improvement but then I may not have run the 
right tests.

I imagine this may be helpful, for example, in webserver-type 
environments with frequent P-state transitions when new connections are 
requested and short-lived threads/processes are created. But I haven't 
been able to set this up to observe this.

I think post-4.2 is fine.

-boris
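[Editor's note] The fast path under discussion, with the unsigned int cast Jan asks for in his review below, can be sketched as a minimal standalone model. fake_wrmsrl and needs_ipi are illustrative stand-ins, not the real Xen interfaces.

```c
#include <stdbool.h>

#define MSR_PSTATE_CTRL 0xc0010062u   /* the write below is simulated */

enum shared_type { SHARED_TYPE_ALL, SHARED_TYPE_ANY, SHARED_TYPE_HW };

static unsigned long long fake_msr;   /* stands in for the hardware MSR */

static void fake_wrmsrl(unsigned int msr, unsigned long long val)
{
    (void)msr;
    fake_msr = val;
}

/* The simplified callback: the caller passes a pointer to an unsigned int
 * (next_perf_state), so it must be dereferenced as one - Jan's point. */
static void transition_pstate(void *pstate)
{
    fake_wrmsrl(MSR_PSTATE_CTRL, *(unsigned int *)pstate);
}

/* True when the slow path (an IPI via on_selected_cpus) is required;
 * false when the current CPU may write the P-state MSR directly. This
 * mirrors the patch's condition: a direct write needs hardware
 * coordination (SHARED_TYPE_HW) and policy->cpu == the current CPU. */
static bool needs_ipi(enum shared_type type, int policy_cpu, int self_cpu)
{
    return !(type == SHARED_TYPE_HW && policy_cpu == self_cpu);
}
```

With DOMAIN_COORD_TYPE_HW_ALL the hardware coordinates P-states across the domain itself, which is why a single local MSR write suffices and the cross-CPU broadcast can be skipped.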

>
>> --- a/xen/arch/x86/acpi/cpufreq/powernow.c	Wed Aug 15 09:41:21 2012 +0100
>> +++ b/xen/arch/x86/acpi/cpufreq/powernow.c	Thu Aug 16 18:38:21 2012 +0200
>> @@ -56,20 +56,9 @@
>>
>>   static struct cpufreq_driver powernow_cpufreq_driver;
>>
>> -struct drv_cmd {
>> -    unsigned int type;
>> -    const cpumask_t *mask;
>> -    u32 val;
>> -    int turbo;
>> -};
>> -
>> -static void transition_pstate(void *drvcmd)
>> +static void transition_pstate(void *pstate)
>>   {
>> -    struct drv_cmd *cmd;
>> -    cmd = (struct drv_cmd *) drvcmd;
>> -
>> -
>> -    wrmsrl(MSR_PSTATE_CTRL, cmd->val);
>> +    wrmsrl(MSR_PSTATE_CTRL, *(int *)pstate);
>
> The variable whose pointer gets passed to this function
> is "unsigned int", so you surely would need to cast to that type
> instead of plain "int".
>
> Jan
>
>>   }
>>
>>   static void update_cpb(void *data)
>> @@ -106,13 +95,11 @@ static int powernow_cpufreq_target(struc
>>   {
>>       struct acpi_cpufreq_data *data = cpufreq_drv_data[policy->cpu];
>>       struct processor_performance *perf;
>> -    struct cpufreq_freqs freqs;
>>       cpumask_t online_policy_cpus;
>> -    struct drv_cmd cmd;
>> -    unsigned int next_state = 0; /* Index into freq_table */
>> -    unsigned int next_perf_state = 0; /* Index into perf table */
>> -    int result = 0;
>> -    int j = 0;
>> +    unsigned int next_state; /* Index into freq_table */
>> +    unsigned int next_perf_state; /* Index into perf table */
>> +    int result;
>> +    int j;
>>
>>       if (unlikely(data == NULL ||
>>           data->acpi_data == NULL || data->freq_table == NULL)) {
>> @@ -125,9 +112,7 @@ static int powernow_cpufreq_target(struc
>>                                               target_freq,
>>                                               relation, &next_state);
>>       if (unlikely(result))
>> -        return -ENODEV;
>> -
>> -    cpumask_and(&online_policy_cpus, policy->cpus, &cpu_online_map);
>> +        return result;
>>
>>       next_perf_state = data->freq_table[next_state].index;
>>       if (perf->state == next_perf_state) {
>> @@ -137,26 +122,28 @@ static int powernow_cpufreq_target(struc
>>               return 0;
>>       }
>>
>> -    if (policy->shared_type != CPUFREQ_SHARED_TYPE_ANY)
>> -        cmd.mask = &online_policy_cpus;
>> -    else
>> -        cmd.mask = cpumask_of(policy->cpu);
>> +    if (policy->shared_type == CPUFREQ_SHARED_TYPE_HW &&
>> +        likely(policy->cpu == smp_processor_id())) {
>> +        transition_pstate(&next_perf_state);
>> +        cpufreq_statistic_update(policy->cpu, perf->state, next_perf_state);
>> +    } else {
>> +        cpumask_and(&online_policy_cpus, policy->cpus, &cpu_online_map);
>>
>> -    freqs.old = perf->states[perf->state].core_frequency * 1000;
>> -    freqs.new = data->freq_table[next_state].frequency;
>> +        if (policy->shared_type == CPUFREQ_SHARED_TYPE_ALL ||
>> +            unlikely(policy->cpu != smp_processor_id()))
>> +            on_selected_cpus(&online_policy_cpus, transition_pstate,
>> +                             &next_perf_state, 1);
>> +        else
>> +            transition_pstate(&next_perf_state);
>>
>> -    cmd.val = next_perf_state;
>> -    cmd.turbo = policy->turbo;
>> -
>> -    on_selected_cpus(cmd.mask, transition_pstate, &cmd, 1);
>> -
>> -    for_each_cpu(j, &online_policy_cpus)
>> -        cpufreq_statistic_update(j, perf->state, next_perf_state);
>> +        for_each_cpu(j, &online_policy_cpus)
>> +            cpufreq_statistic_update(j, perf->state, next_perf_state);
>> +    }
>>
>>       perf->state = next_perf_state;
>> -    policy->cur = freqs.new;
>> +    policy->cur = data->freq_table[next_state].frequency;
>>
>> -    return result;
>> +    return 0;
>>   }
>>
>>   static int powernow_cpufreq_verify(struct cpufreq_policy *policy)
>>
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel
>
>
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 13:48:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 13:48:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2Mtw-00084Q-C1; Fri, 17 Aug 2012 13:47:48 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T2Mtu-00084E-Qi
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 13:47:47 +0000
Received: from [85.158.138.51:54366] by server-11.bemta-3.messagelabs.com id
	AF/A7-23152-18B4E205; Fri, 17 Aug 2012 13:47:45 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-12.tower-174.messagelabs.com!1345211264!20886032!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk0MzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7826 invoked from network); 17 Aug 2012 13:47:45 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-12.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 13:47:45 -0000
X-IronPort-AV: E=Sophos;i="4.77,784,1336348800"; d="scan'208";a="14061073"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 13:47:44 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 17 Aug 2012 14:47:44 +0100
Date: Fri, 17 Aug 2012 14:47:27 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Jan Beulich <JBeulich@suse.com>
In-Reply-To: <502E2F090200007800095D62@nat28.tlf.novell.com>
Message-ID: <alpine.DEB.2.02.1208171443580.15568@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208161523420.4850@kaball.uk.xensource.com>
	<1345128612-10323-4-git-send-email-stefano.stabellini@eu.citrix.com>
	<502D33B8020000780009596B@nat28.tlf.novell.com>
	<502D37D702000078000959F7@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208161801210.15568@kaball.uk.xensource.com>
	<1345190532.30865.67.camel@zakaz.uk.xensource.com>
	<502E2F090200007800095D62@nat28.tlf.novell.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: xen-devel <xen-devel@lists.xen.org>, "Tim \(Xen.org\)" <tim@xen.org>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH v3 4/6] xen: introduce XEN_GUEST_HANDLE_PARAM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 17 Aug 2012, Jan Beulich wrote:
> >>> On 17.08.12 at 10:02, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > On Thu, 2012-08-16 at 18:10 +0100, Stefano Stabellini wrote:
> >> On Thu, 16 Aug 2012, Jan Beulich wrote:
> >> > >>> On 16.08.12 at 17:54, "Jan Beulich" <JBeulich@suse.com> wrote:
> >> > > Seeing the patch I btw realized that there's no easy way to
> >> > > avoid having the type as a second argument in the conversion
> >> > > macros. Nevertheless I still don't like the explicitly specified type
> >> > > there.
> >> > 
> >> > Btw - on the architecture(s) where the two handles are identical
> >> > I would prefer you to make the conversion functions trivial (and
> >> > thus avoid making use of the "type" parameter), thus allowing
> >> > the type checking to occur that you currently circumvent.
> >> 
> >> OK, I can do that.
> > 
> > Will this result in the type parameter potentially becoming stale?
> > 
> > Adding a redundant pointer compare is a good way to get the compiler to
> > catch this. Something like:
> > 
> >         /* Cast a XEN_GUEST_HANDLE_PARAM to XEN_GUEST_HANDLE */
> >         #define guest_handle_from_param(hnd, type) ({
> >             typeof((hnd).p) _x = (hnd).p;
> >             XEN_GUEST_HANDLE(type) _y;
> >             &_y == &_x;
> >             hnd;
> >          })
> 
> Ah yes, that's a good suggestion.
> 
> > I'm not sure which two pointers of members of the various structs need
> > to be compared, maybe it's actually &_y.p and &hnd.p, but you get the
> > idea...
> 
> Right, comparing (hnd).p with _y.p would be the right thing; no
> need for _x, but some other (mechanical) adjustments would be
> necessary.

The _x variable is still useful to avoid multiple evaluations of hnd,
even though I know that this is not a public header.

What about the following:

/* Cast a XEN_GUEST_HANDLE to XEN_GUEST_HANDLE_PARAM */
#define guest_handle_to_param(hnd, type) ({                \
    typeof((hnd).p) _x = (hnd).p;                          \
    XEN_GUEST_HANDLE_PARAM(type) _y = { _x };              \
    if (&_x != &_y.p) BUG();                               \
    _y;                                                    \
})
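As an aside for readers of the archive, the type-checking trick in the macro above can be sketched outside Xen with stand-in handle types. The struct shapes and names below are assumptions for illustration, not the real Xen definitions; note that putting the pointer comparison under sizeof keeps the check purely at compile time, so it is never evaluated and no runtime BUG() is needed.

```c
#include <stddef.h>

/* Stand-ins for the two Xen handle flavours (assumed shapes, for
 * illustration only): each wraps a typed pointer in a one-member struct. */
typedef struct { int *p; } guest_handle_int;
typedef struct { int *p; } guest_handle_param_int;

/* Sketch of the conversion macro discussed above (GNU C statement
 * expression).  The pointer comparison exists purely so the compiler
 * rejects mismatched pointee types; wrapping it in sizeof leaves it
 * unevaluated, so no runtime check ever fires. */
#define guest_handle_to_param(hnd) ({                         \
    __typeof__((hnd).p) _x = (hnd).p;                         \
    guest_handle_param_int _y = { _x };                       \
    (void)sizeof(&_x == &_y.p); /* compile-time type check */ \
    _y;                                                       \
})

/* Round-trip a value through the conversion. */
int demo(void)
{
    int v = 42;
    guest_handle_int h = { &v };
    guest_handle_param_int ph = guest_handle_to_param(h);
    return *ph.p;
}
```

If the pointee types disagree (say, `guest_handle_int` wrapped an `int *` but the target wrapped a `long *`), the `&_x == &_y.p` comparison becomes a constraint violation and compilation fails, which is exactly the staleness catch Ian describes.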



From xen-devel-bounces@lists.xen.org Fri Aug 17 13:48:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 13:48:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2MuL-00086p-PP; Fri, 17 Aug 2012 13:48:13 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T2MuK-00086X-KU
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 13:48:12 +0000
Received: from [85.158.138.51:32556] by server-2.bemta-3.messagelabs.com id
	D5/5A-17748-A9B4E205; Fri, 17 Aug 2012 13:48:10 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-174.messagelabs.com!1345211289!20794768!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk0MzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18285 invoked from network); 17 Aug 2012 13:48:09 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-6.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 13:48:09 -0000
X-IronPort-AV: E=Sophos;i="4.77,784,1336348800"; d="scan'208";a="14061080"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 13:48:09 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 17 Aug 2012 14:48:08 +0100
Message-ID: <1345211287.10161.27.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Date: Fri, 17 Aug 2012 14:48:07 +0100
In-Reply-To: <502A8EC3.1020107@citrix.com>
References: <502A5C58.5050001@citrix.com>	<502A82DA.7000205@citrix.com>
	<20522.35865.492083.486374@mariner.uk.xensource.com>
	<502A8DD7.3080504@citrix.com> <502A8EC3.1020107@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] (V3) tools/python: Clean python correctly
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-08-14 at 18:45 +0100, Andrew Cooper wrote:
> tools/python: Clean python correctly
> 
> Cleaning the python directory should completely remove the build/
> directory, otherwise subsequent builds may be short-circuited and a
> stale build installed.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> 

Acked-by: Ian Campbell <ian.campbell@citrix.com>

And applied, thanks,
Ian.




From xen-devel-bounces@lists.xen.org Fri Aug 17 13:48:26 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 13:48:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2MuN-000877-4V; Fri, 17 Aug 2012 13:48:15 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T2MuL-00086I-RD
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 13:48:14 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1345211277!2146716!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk0MzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22235 invoked from network); 17 Aug 2012 13:47:58 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 13:47:58 -0000
X-IronPort-AV: E=Sophos;i="4.77,784,1336348800"; d="scan'208";a="14061076"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 13:47:57 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 17 Aug 2012 14:47:57 +0100
Message-ID: <1345211276.10161.26.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Wangzhenguo <wangzhenguo@huawei.com>
Date: Fri, 17 Aug 2012 14:47:56 +0100
In-Reply-To: <B44CA5218606DC4FA941D19CCEB27B532CF76C40@szxeml528-mbx.china.huawei.com>
References: <B44CA5218606DC4FA941D19CCEB27B5323BEF12C@szxeml528-mbs.china.huawei.com>
	<1341221926.4625.12.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B5323BEF99A@szxeml528-mbs.china.huawei.com>
	<1343135163.18971.15.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B532CF769F2@szxeml528-mbx.china.huawei.com>
	<1343221925.18971.93.camel@zakaz.uk.xensource.com>
	<1343988628.21372.46.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B532CF76ACE@szxeml528-mbx.china.huawei.com>
	<1344259710.11339.39.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B532CF76B32@szxeml528-mbx.china.huawei.com>
	<1344334043.11339.85.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B532CF76B4F@szxeml528-mbx.china.huawei.com>
	<1344343342.11339.96.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B532CF76B9E@szxeml528-mbx.china.huawei.com>
	<1344416440.32142.7.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B532CF76BEE@szxeml528-mbx.china.huawei.com>
	<1344427585.32142.34.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B532CF76C40@szxeml528-mbx.china.huawei.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Yangxiaowei <xiaowei.yang@huawei.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Hongtao <bobby.hong@huawei.com>, Yechuan <yechuan@huawei.com>
Subject: Re: [Xen-devel] The hypercall will fail and return EFAULT when the
 page becomes COW by forking process in linux
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-09 at 07:56 +0100, Wangzhenguo wrote:

> # HG changeset patch
> # Parent a5dfd924fcdb173a154dad9f37073c1de1302065
> libxc: Add VM_DONTCOPY flag of the VMA of the hypercall buffer, to avoid the
>        hypercall buffer becoming COW on hypercall.

When I came to commit this I got:
xc_linux_osdep.c: In function ‘linux_privcmd_alloc_hypercall_buffer’:
xc_linux_osdep.c:101: error: ‘ptr’ undeclared (first use in this function)
xc_linux_osdep.c:101: error: (Each undeclared identifier is reported only once
xc_linux_osdep.c:101: error: for each function it appears in.)

I've fixed this up for you this time but please be more careful in
future and compile/test your patches.

I also tweaked the wording of your comments a little bit, I hope that's
ok.

Ian.
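Ian's note above concerns a commit that sets the VM_DONTCOPY flag on the VMA of the hypercall buffer, so the buffer cannot become copy-on-write when the process forks mid-hypercall. From user space that flag is requested with madvise(MADV_DONTFORK); a minimal sketch follows, assuming Linux, with a hypothetical helper name that is not part of libxc.

```c
#define _GNU_SOURCE
#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

/* Sketch (Linux assumed; the helper name is made up for illustration):
 * allocate a page-aligned buffer and mark its VMA with VM_DONTCOPY via
 * MADV_DONTFORK, so a fork()ed child never maps it and the parent's
 * hypercall pages cannot turn copy-on-write underneath the hypervisor. */
void *alloc_dontfork_buffer(size_t npages)
{
    size_t len = npages * (size_t)sysconf(_SC_PAGESIZE);
    void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    if (buf == MAP_FAILED)
        return NULL;

    /* Ask the kernel to set VM_DONTCOPY on this mapping. */
    if (madvise(buf, len, MADV_DONTFORK) != 0) {
        munmap(buf, len);
        return NULL;
    }
    return buf;
}
```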

From xen-devel-bounces@lists.xen.org Fri Aug 17 13:49:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 13:49:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2Muv-0008Ed-P2; Fri, 17 Aug 2012 13:48:49 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T2Muu-0008EA-Bq
	for Xen-devel@lists.xensource.com; Fri, 17 Aug 2012 13:48:48 +0000
Received: from [85.158.139.83:64861] by server-3.bemta-5.messagelabs.com id
	E7/BE-27237-FBB4E205; Fri, 17 Aug 2012 13:48:47 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-182.messagelabs.com!1345211326!24420231!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk0MzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5533 invoked from network); 17 Aug 2012 13:48:47 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-14.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 13:48:47 -0000
X-IronPort-AV: E=Sophos;i="4.77,784,1336348800"; d="scan'208";a="14061101"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 13:48:46 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 17 Aug 2012 14:48:46 +0100
Message-ID: <1345211325.10161.28.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Date: Fri, 17 Aug 2012 14:48:45 +0100
In-Reply-To: <alpine.DEB.2.02.1208171151070.15568@kaball.uk.xensource.com>
References: <20120815175724.3405043a@mantra.us.oracle.com>
	<alpine.DEB.2.02.1208161454430.4850@kaball.uk.xensource.com>
	<20120816114650.4db2079f@mantra.us.oracle.com>
	<1345192115.30865.86.camel@zakaz.uk.xensource.com>
	<alpine.DEB.2.02.1208171151070.15568@kaball.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [RFC PATCH 1/8]: PVH: Basic and preparatory changes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-08-17 at 11:56 +0100, Stefano Stabellini wrote:
> On Fri, 17 Aug 2012, Ian Campbell wrote:
> > > > > +/* xen_pv_domain check is necessary as start_info ptr is null in
> > > > > HVM. Also,
> > > > > + * note, xen PVH domain shares lot of HVM code */
> > > > > +#define xen_pvh_domain()       (xen_pv_domain()
> > > > > &&                     \
> > > > > +				(xen_start_info->flags &
> > > > > SIF_IS_PVINHVM))
> > > >  
> > > > Also here.
> > > 
> > > Hmm.. I can move '#define xen_pvh_domain()' to x86 header, easy. But,
> > > not sure how to define SIF_IS_PVINHVM then? I could put SIF_IS_RESVD in
> > > include/xen/interface/xen.h, and then do 
> > > #define SIF_IS_PVINHVM SIF_IS_RESVD in an x86 file.
> > > 
> > > What do you think about that?
> > 
> > Should PVH actually be a new value in the xen_domain_type enum?
> 
> I don't think we should have a xen_domain_type pvh at all.
> If we really need it we should define it as a set of individual
> properties:
> 
> #define xen_pvh_domain() (xen_pv_domain() && \
>                           xen_feature(XENFEAT_auto_translated_physmap) && \
>                           xen_have_vector_callback)

Hopefully it won't be necessary but if it is then this seems like a good
idea to me.
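
Stefano's feature-based definition above can be sketched as self-contained C; the xen_pv_domain()/xen_feature() stubs below are stand-ins for the real kernel interfaces, not the actual Xen API — only the composition of the predicate is the point:

```c
/* Minimal sketch of the feature-flag approach: PVH is not a new
 * xen_domain_type value, it is a PV domain that additionally has an
 * auto-translated physmap and uses the vector callback.  The stubs
 * below are assumptions standing in for the kernel's real queries. */
#include <stdbool.h>

#define XENFEAT_auto_translated_physmap 0

static bool pv_domain = true;                    /* stub state */
static bool features[8] = { [XENFEAT_auto_translated_physmap] = true };
static bool xen_have_vector_callback = true;

static bool xen_pv_domain(void) { return pv_domain; }
static bool xen_feature(int f)  { return features[f]; }

static bool xen_pvh_domain(void)
{
    return xen_pv_domain() &&
           xen_feature(XENFEAT_auto_translated_physmap) &&
           xen_have_vector_callback;
}
```

The appeal of this shape is that no SIF_* flag or enum value needs a home in a common header: the predicate is built purely from properties that already exist per-architecture.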

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 13:51:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 13:51:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2Mx3-000082-Ay; Fri, 17 Aug 2012 13:51:01 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Boris.Ostrovsky@amd.com>) id 1T2Mx2-00007o-07
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 13:51:00 +0000
Received: from [85.158.143.35:14293] by server-1.bemta-4.messagelabs.com id
	4F/BD-07754-34C4E205; Fri, 17 Aug 2012 13:50:59 +0000
X-Env-Sender: Boris.Ostrovsky@amd.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1345211458!11666308!1
X-Originating-IP: [213.199.154.206]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26520 invoked from network); 17 Aug 2012 13:50:58 -0000
Received: from am1ehsobe003.messaging.microsoft.com (HELO
	am1outboundpool.messaging.microsoft.com) (213.199.154.206)
	by server-7.tower-21.messagelabs.com with AES128-SHA encrypted SMTP;
	17 Aug 2012 13:50:58 -0000
Received: from mail26-am1-R.bigfish.com (10.3.201.229) by
	AM1EHSOBE003.bigfish.com (10.3.204.23) with Microsoft SMTP Server id
	14.1.225.23; Fri, 17 Aug 2012 13:50:57 +0000
Received: from mail26-am1 (localhost [127.0.0.1])	by mail26-am1-R.bigfish.com
	(Postfix) with ESMTP id 8AAC62C0146;
	Fri, 17 Aug 2012 13:50:57 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:163.181.249.109; KIP:(null); UIP:(null);
	IPV:NLI; H:ausb3twp02.amd.com; RD:none; EFVD:NLI
X-SpamScore: -4
X-BigFish: VPS-4(zzbb2dI98dI9371I1432Izz1202hzz8275bh8275dhz2dh668h839hd25he5bhf0ah107ah1155h)
Received: from mail26-am1 (localhost.localdomain [127.0.0.1]) by mail26-am1
	(MessageSwitch) id 1345211455449464_27416;
	Fri, 17 Aug 2012 13:50:55 +0000 (UTC)
Received: from AM1EHSMHS002.bigfish.com (unknown [10.3.201.225])	by
	mail26-am1.bigfish.com (Postfix) with ESMTP id 623051C004F;
	Fri, 17 Aug 2012 13:50:55 +0000 (UTC)
Received: from ausb3twp02.amd.com (163.181.249.109) by
	AM1EHSMHS002.bigfish.com (10.3.207.102) with Microsoft SMTP Server id
	14.1.225.23; Fri, 17 Aug 2012 13:50:54 +0000
X-WSS-ID: 0M8WJSO-02-F3S-02
X-M-MSG: 
Received: from sausexedgep01.amd.com (sausexedgep01-ext.amd.com
	[163.181.249.72])	(using TLSv1 with cipher AES128-SHA (128/128
	bits))	(No
	client certificate requested)	by ausb3twp02.amd.com (Axway MailGate
	3.8.1)
	with ESMTP id 2B036C810A;	Fri, 17 Aug 2012 08:50:47 -0500 (CDT)
Received: from SAUSEXDAG05.amd.com (163.181.55.6) by sausexedgep01.amd.com
	(163.181.36.54) with Microsoft SMTP Server (TLS) id 8.3.192.1;
	Fri, 17 Aug 2012 08:51:37 -0500
Received: from [10.234.222.82] (163.181.55.254) by sausexdag05.amd.com
	(163.181.55.6) with Microsoft SMTP Server id 14.1.323.3;
	Fri, 17 Aug 2012 08:50:51 -0500
Message-ID: <502E4C3A.5050409@amd.com>
Date: Fri, 17 Aug 2012 09:50:50 -0400
From: Boris Ostrovsky <boris.ostrovsky@amd.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120713 Thunderbird/14.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <85190245a94d9945b765.1345135279@localhost.localdomain>
	<502E399B0200007800095DA1@nat28.tlf.novell.com>
In-Reply-To: <502E399B0200007800095DA1@nat28.tlf.novell.com>
X-OriginatorOrg: amd.com
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] AMD,
 powernow: Update P-state directly when _PSD's CoordType is
 DOMAIN_COORD_TYPE_HW_ALL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On 08/17/2012 06:31 AM, Jan Beulich wrote:
>>>> On 16.08.12 at 18:41, Boris Ostrovsky <boris.ostrovsky@amd.com> wrote:
>> @@ -137,26 +122,28 @@ static int powernow_cpufreq_target(struc
>>               return 0;
>>       }
>>
>> -    if (policy->shared_type != CPUFREQ_SHARED_TYPE_ANY)
>> -        cmd.mask = &online_policy_cpus;
>> -    else
>> -        cmd.mask = cpumask_of(policy->cpu);
>> +    if (policy->shared_type == CPUFREQ_SHARED_TYPE_HW &&
>> +        likely(policy->cpu == smp_processor_id())) {
>> +        transition_pstate(&next_perf_state);
>> +        cpufreq_statistic_update(policy->cpu, perf->state, next_perf_state);
>
> Actually - is this enough? Doesn't this also need to be done based
> on policy->cpus?

With HW-coordinated transitions there is a policy structure per CPU, so 
policy->cpus always contains exactly one CPU and policy->cpu is that CPU. 
You can see this in cpufreq_add_cpu(), when hw_all is set.

(This is consistent with the ACPI spec:
	When hardware coordinates transitions, OSPM continues to
	initiate state transitions as it would if there were no
	dependencies.
)
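
The fast path under discussion can be sketched as follows; the types and helpers are simplified stand-ins for Xen's cpufreq internals (not the real structures), and only the dispatch decision matters here:

```c
/* Sketch of the HW_ALL fast path: with hardware-coordinated
 * transitions there is one policy per CPU, so when we already run on
 * policy->cpu the P-state can be written directly, with no cross-CPU
 * call.  Counters replace the actual MSR write / IPI for testability. */
#include <stdbool.h>

enum shared_type { SHARED_TYPE_HW, SHARED_TYPE_ALL, SHARED_TYPE_ANY };

struct policy {
    int cpu;                      /* the (single) CPU owning this policy */
    enum shared_type shared_type;
};

static int direct_transitions;    /* P-state written on the local CPU  */
static int ipi_transitions;       /* would need on_selected_cpus()     */

static void transition_pstate(int next) { (void)next; /* MSR write stub */ }

static void powernow_target(const struct policy *p, int this_cpu, int next)
{
    if (p->shared_type == SHARED_TYPE_HW && p->cpu == this_cpu) {
        transition_pstate(next);  /* direct: no cross-CPU coordination */
        direct_transitions++;
    } else {
        ipi_transitions++;        /* fall back to a cross-CPU call     */
    }
}
```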

-boris


>
> Jan
>
>> +    } else {
>> +        cpumask_and(&online_policy_cpus, policy->cpus, &cpu_online_map);
>>
>> -    freqs.old = perf->states[perf->state].core_frequency * 1000;
>> -    freqs.new = data->freq_table[next_state].frequency;
>> +        if (policy->shared_type == CPUFREQ_SHARED_TYPE_ALL ||
>> +            unlikely(policy->cpu != smp_processor_id()))
>> +            on_selected_cpus(&online_policy_cpus, transition_pstate,
>> +                             &next_perf_state, 1);
>> +        else
>> +            transition_pstate(&next_perf_state);
>>
>> -    cmd.val = next_perf_state;
>> -    cmd.turbo = policy->turbo;
>> -
>> -    on_selected_cpus(cmd.mask, transition_pstate, &cmd, 1);
>> -
>> -    for_each_cpu(j, &online_policy_cpus)
>> -        cpufreq_statistic_update(j, perf->state, next_perf_state);
>> +        for_each_cpu(j, &online_policy_cpus)
>> +            cpufreq_statistic_update(j, perf->state, next_perf_state);
>> +    }
>>
>>       perf->state = next_perf_state;
>> -    policy->cur = freqs.new;
>> +    policy->cur = data->freq_table[next_state].frequency;
>>
>> -    return result;
>> +    return 0;
>>   }
>>
>>   static int powernow_cpufreq_verify(struct cpufreq_policy *policy)
>>
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel
>
>
>
>



From xen-devel-bounces@lists.xen.org Fri Aug 17 13:51:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 13:51:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2MxR-0000Bu-O0; Fri, 17 Aug 2012 13:51:25 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T2MxQ-0000Bg-Cm
	for xen-devel@lists.xensource.com; Fri, 17 Aug 2012 13:51:24 +0000
Received: from [85.158.139.83:24763] by server-11.bemta-5.messagelabs.com id
	D0/48-29296-B5C4E205; Fri, 17 Aug 2012 13:51:23 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-6.tower-182.messagelabs.com!1345211483!24818175!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk0MzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1696 invoked from network); 17 Aug 2012 13:51:23 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-6.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 13:51:23 -0000
X-IronPort-AV: E=Sophos;i="4.77,785,1336348800"; d="scan'208";a="14061162"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 13:51:22 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 17 Aug 2012 14:51:22 +0100
Date: Fri, 17 Aug 2012 14:51:05 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Jan Beulich <JBeulich@suse.com>
In-Reply-To: <502E285A0200007800095D04@nat28.tlf.novell.com>
Message-ID: <alpine.DEB.2.02.1208171450420.15568@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208161523420.4850@kaball.uk.xensource.com>
	<1345128612-10323-4-git-send-email-stefano.stabellini@eu.citrix.com>
	<502D33B8020000780009596B@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208161756140.15568@kaball.uk.xensource.com>
	<502E285A0200007800095D04@nat28.tlf.novell.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"Tim \(Xen.org\)" <tim@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH v3 4/6] xen: introduce XEN_GUEST_HANDLE_PARAM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 17 Aug 2012, Jan Beulich wrote:
> >>> On 16.08.12 at 19:08, Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
> > On Thu, 16 Aug 2012, Jan Beulich wrote:
> >> >>> On 16.08.12 at 16:50, Stefano Stabellini <stefano.stabellini@eu.citrix.com>  wrote:
> >> > +#define set_xen_guest_handle_raw(hnd, val)                  \
> >> > +    do { if ( sizeof(hnd) == 8 ) *(uint64_t *)&(hnd) = 0;   \
> >> 
> >> If you made the "normal" handle a union too, you could avoid
> >> the explicit cast (which e.g. gcc, when not passed
> >> -fno-strict-aliasing, will choke on) and instead use (hnd).q (and
> >> at once avoid the double initialization of the low half).
> >> 
> >> Also, the condition to do this could be "sizeof(hnd) > sizeof((hnd).p)",
> >> usable at once for 64-bit avoiding a full double initialization there.
> >> 
> >> > +         (hnd).p = val;                                     \
> >> 
> >> In a public header you certainly want to avoid evaluating a
> >> macro argument twice.
> > 
> > That's a really good suggestion.
> > I am going to make both handles unions and therefore
> > set_xen_guest_handle_raw becomes:
> > 
> > #define set_xen_guest_handle_raw(hnd, val)                  \
> >     do { (hnd).q = 0;                                       \
> >          (hnd).p = val;                                     \
> >     } while ( 0 )
> 
> But that still doesn't eliminate the double evaluation of "hnd".

Yes, you are right. I'll use a temporary pointer.
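
The double-evaluation hazard and the temporary-pointer fix can be illustrated with ordinary C; the handle type and macro names below are simplified stand-ins, not the real Xen guest-handle definitions:

```c
/* Illustration of the macro double-evaluation problem discussed above.
 * handle_t and the set_handle* names are hypothetical simplifications. */
#include <stdint.h>

typedef union {
    void    *p;
    uint64_t q;
} handle_t;

/* UNSAFE: expands 'hnd' twice.  If the caller passes an expression with
 * side effects, e.g. handles[i++], the side effect runs twice. */
#define set_handle_unsafe(hnd, val) \
    do { (hnd).q = 0; (hnd).p = (val); } while (0)

/* SAFE: evaluate the argument once into a temporary pointer, as
 * proposed in the thread; both union members are then set through it. */
#define set_handle(hnd, val)            \
    do { handle_t *_h = &(hnd);         \
         _h->q = 0;                     \
         _h->p = (val); } while (0)
```

This matters precisely because the macro lives in a public header: callers cannot be expected to know that the argument is expanded more than once.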


From xen-devel-bounces@lists.xen.org Fri Aug 17 13:51:39 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 13:51:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2MxX-0000DX-4W; Fri, 17 Aug 2012 13:51:31 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T2MxV-0000Cs-8f
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 13:51:29 +0000
Received: from [85.158.143.99:64352] by server-1.bemta-4.messagelabs.com id
	A1/AE-07754-06C4E205; Fri, 17 Aug 2012 13:51:28 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-216.messagelabs.com!1345211488!28373286!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk0MzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17748 invoked from network); 17 Aug 2012 13:51:28 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-5.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 13:51:28 -0000
X-IronPort-AV: E=Sophos;i="4.77,785,1336348800"; d="scan'208";a="14061163"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 13:51:27 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 17 Aug 2012 14:51:27 +0100
Message-ID: <1345211486.10161.31.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Date: Fri, 17 Aug 2012 14:51:26 +0100
In-Reply-To: <alpine.DEB.2.02.1208171443580.15568@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208161523420.4850@kaball.uk.xensource.com>
	<1345128612-10323-4-git-send-email-stefano.stabellini@eu.citrix.com>
	<502D33B8020000780009596B@nat28.tlf.novell.com>
	<502D37D702000078000959F7@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208161801210.15568@kaball.uk.xensource.com>
	<1345190532.30865.67.camel@zakaz.uk.xensource.com>
	<502E2F090200007800095D62@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208171443580.15568@kaball.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "Tim \(Xen.org\)" <tim@xen.org>, Jan Beulich <JBeulich@suse.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v3 4/6] xen: introduce XEN_GUEST_HANDLE_PARAM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-08-17 at 14:47 +0100, Stefano Stabellini wrote:
> On Fri, 17 Aug 2012, Jan Beulich wrote:
> > >>> On 17.08.12 at 10:02, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > > On Thu, 2012-08-16 at 18:10 +0100, Stefano Stabellini wrote:
> > >> On Thu, 16 Aug 2012, Jan Beulich wrote:
> > >> > >>> On 16.08.12 at 17:54, "Jan Beulich" <JBeulich@suse.com> wrote:
> > >> > > Seeing the patch I btw realized that there's no easy way to
> > >> > > avoid having the type as a second argument in the conversion
> > >> > > macros. Nevertheless I still don't like the explicitly specified type
> > >> > > there.
> > >> > 
> > >> > Btw - on the architecture(s) where the two handles are identical
> > >> > I would prefer you to make the conversion functions trivial (and
> > >> > thus avoid making use of the "type" parameter), thus allowing
> > >> > the type checking to occur that you currently circumvent.
> > >> 
> > >> OK, I can do that.
> > > 
> > > Will this result in the type parameter potentially becoming stale?
> > > 
> > > Adding a redundant pointer compare is a good way to get the compiler to
> > > catch this. Smth like;
> > > 
> > >         /* Cast a XEN_GUEST_HANDLE_PARAM to XEN_GUEST_HANDLE */
> > >         #define guest_handle_from_param(hnd, type) ({
> > >             typeof((hnd).p) _x = (hnd).p;
> > >             XEN_GUEST_HANDLE(type) _y;
> > >             &_y == &_x;
> > >             hnd;
> > >          })
> > 
> > Ah yes, that's a good suggestion.
> > 
> > > I'm not sure which two pointers of members of the various structs need
> > > to be compared, maybe it's actually &_y.p and &hnd.p, but you get the
> > > idea...
> > 
> > Right, comparing (hnd).p with _y.p would be the right thing; no
> > need for _x, but some other (mechanical) adjustments would be
> > necessary.
> 
> The _x variable is still useful to avoid multiple evaluations of hnd,
> even though I know that this is not a public header.
> 
> What about the following:
> 
> /* Cast a XEN_GUEST_HANDLE to XEN_GUEST_HANDLE_PARAM */
> #define guest_handle_to_param(hnd, type) ({                \
>     typeof((hnd).p) _x = (hnd).p;                          \
>     XEN_GUEST_HANDLE_PARAM(type) _y = { _x };              \
>     if (&_x != &_y.p) BUG();                               \

&_x and &_y.p will always be different => this will always BUG().

You just need "(&_x == &_y.p)"; if the types of _x and _y.p are
different, the compiler will error out due to the comparison of
differently typed pointers.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 13:56:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 13:56:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2N1o-0000g7-UZ; Fri, 17 Aug 2012 13:55:56 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T2N1m-0000ff-WB
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 13:55:55 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1345211748!3422300!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk0MzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29510 invoked from network); 17 Aug 2012 13:55:48 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 13:55:48 -0000
X-IronPort-AV: E=Sophos;i="4.77,785,1336348800"; d="scan'208";a="14061272"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 13:55:48 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 17 Aug 2012 14:55:48 +0100
Date: Fri, 17 Aug 2012 14:55:31 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1345211486.10161.31.camel@zakaz.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1208171453060.15568@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208161523420.4850@kaball.uk.xensource.com>
	<1345128612-10323-4-git-send-email-stefano.stabellini@eu.citrix.com>
	<502D33B8020000780009596B@nat28.tlf.novell.com>
	<502D37D702000078000959F7@nat28.tlf.novell.com> 
	<alpine.DEB.2.02.1208161801210.15568@kaball.uk.xensource.com>
	<1345190532.30865.67.camel@zakaz.uk.xensource.com>
	<502E2F090200007800095D62@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208171443580.15568@kaball.uk.xensource.com>
	<1345211486.10161.31.camel@zakaz.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: xen-devel <xen-devel@lists.xen.org>, "Tim \(Xen.org\)" <tim@xen.org>,
	Jan Beulich <JBeulich@suse.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH v3 4/6] xen: introduce XEN_GUEST_HANDLE_PARAM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 17 Aug 2012, Ian Campbell wrote:
> On Fri, 2012-08-17 at 14:47 +0100, Stefano Stabellini wrote:
> > On Fri, 17 Aug 2012, Jan Beulich wrote:
> > > >>> On 17.08.12 at 10:02, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > > > On Thu, 2012-08-16 at 18:10 +0100, Stefano Stabellini wrote:
> > > >> On Thu, 16 Aug 2012, Jan Beulich wrote:
> > > >> > >>> On 16.08.12 at 17:54, "Jan Beulich" <JBeulich@suse.com> wrote:
> > > >> > > Seeing the patch I btw realized that there's no easy way to
> > > >> > > avoid having the type as a second argument in the conversion
> > > >> > > macros. Nevertheless I still don't like the explicitly specified type
> > > >> > > there.
> > > >> > 
> > > >> > Btw - on the architecture(s) where the two handles are identical
> > > >> > I would prefer you to make the conversion functions trivial (and
> > > >> > thus avoid making use of the "type" parameter), thus allowing
> > > >> > the type checking to occur that you currently circumvent.
> > > >> 
> > > >> OK, I can do that.
> > > > 
> > > > Will this result in the type parameter potentially becoming stale?
> > > > 
> > > > Adding a redundant pointer compare is a good way to get the compiler to
> > > > catch this. Smth like;
> > > > 
> > > >         /* Cast a XEN_GUEST_HANDLE_PARAM to XEN_GUEST_HANDLE */
> > > >         #define guest_handle_from_param(hnd, type) ({
> > > >             typeof((hnd).p) _x = (hnd).p;
> > > >             XEN_GUEST_HANDLE(type) _y;
> > > >             &_y == &_x;
> > > >             hnd;
> > > >          })
> > > 
> > > Ah yes, that's a good suggestion.
> > > 
> > > > I'm not sure which two pointers of members of the various structs need
> > > > to be compared, maybe it's actually &_y.p and &hnd.p, but you get the
> > > > idea...
> > > 
> > > Right, comparing (hnd).p with _y.p would be the right thing; no
> > > need for _x, but some other (mechanical) adjustments would be
> > > necessary.
> > 
> > The _x variable is still useful to avoid multiple evaluations of hnd,
> > even though I know that this is not a public header.
> > 
> > What about the following:
> > 
> > /* Cast a XEN_GUEST_HANDLE to XEN_GUEST_HANDLE_PARAM */
> > #define guest_handle_to_param(hnd, type) ({                \
> >     typeof((hnd).p) _x = (hnd).p;                          \
> >     XEN_GUEST_HANDLE_PARAM(type) _y = { _x };              \
> >     if (&_x != &_y.p) BUG();                               \
> 
> &_x and &_y.p will always be different => this will always BUG().
>
> You just need "(&_x == &_y.p)" if the types of _x and _y.p are different
> then the compiler will error out due to the comparison of differently
> typed pointers.

I know what you mean, but we cannot do that because the compiler will
complain with "statement has no effect".
So we have to do something like:

if ((&_x == &_y.p) && 0) BUG();

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 13:57:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 13:57:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2N3h-0000mT-Ey; Fri, 17 Aug 2012 13:57:53 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T2N3g-0000mD-IX
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 13:57:52 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1345211864!9860202!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9695 invoked from network); 17 Aug 2012 13:57:44 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-27.messagelabs.com with SMTP;
	17 Aug 2012 13:57:44 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 17 Aug 2012 14:57:44 +0100
Message-Id: <502E6A200200007800095E9A@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 17 Aug 2012 14:58:24 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Stefano Stabellini" <stefano.stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1208161523420.4850@kaball.uk.xensource.com>
	<1345128612-10323-4-git-send-email-stefano.stabellini@eu.citrix.com>
	<502D33B8020000780009596B@nat28.tlf.novell.com>
	<502D37D702000078000959F7@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208161801210.15568@kaball.uk.xensource.com>
	<1345190532.30865.67.camel@zakaz.uk.xensource.com>
	<502E2F090200007800095D62@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208171443580.15568@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1208171443580.15568@kaball.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "Tim \(Xen.org\)" <tim@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v3 4/6] xen: introduce XEN_GUEST_HANDLE_PARAM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 17.08.12 at 15:47, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
wrote:
> On Fri, 17 Aug 2012, Jan Beulich wrote:
>> >>> On 17.08.12 at 10:02, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>> > On Thu, 2012-08-16 at 18:10 +0100, Stefano Stabellini wrote:
>> >> On Thu, 16 Aug 2012, Jan Beulich wrote:
>> >> > >>> On 16.08.12 at 17:54, "Jan Beulich" <JBeulich@suse.com> wrote:
>> >> > > Seeing the patch I btw realized that there's no easy way to
>> >> > > avoid having the type as a second argument in the conversion
>> >> > > macros. Nevertheless I still don't like the explicitly specified type
>> >> > > there.
>> >> > 
>> >> > Btw - on the architecture(s) where the two handles are identical
>> >> > I would prefer you to make the conversion functions trivial (and
>> >> > thus avoid making use of the "type" parameter), thus allowing
>> >> > the type checking to occur that you currently circumvent.
>> >> 
>> >> OK, I can do that.
>> > 
>> > Will this result in the type parameter potentially becoming stale?
>> > 
>> > Adding a redundant pointer compare is a good way to get the compiler to
>> > catch this. Smth like;
>> > 
>> >         /* Cast a XEN_GUEST_HANDLE_PARAM to XEN_GUEST_HANDLE */
>> >         #define guest_handle_from_param(hnd, type) ({
>> >             typeof((hnd).p) _x = (hnd).p;
>> >             XEN_GUEST_HANDLE(type) _y;
>> >             &_y == &_x;
>> >             hnd;
>> >          })
>> 
>> Ah yes, that's a good suggestion.
>> 
>> > I'm not sure which two pointers of members of the various structs need
>> > to be compared, maybe it's actually &_y.p and &hnd.p, but you get the
>> > idea...
>> 
>> Right, comparing (hnd).p with _y.p would be the right thing; no
>> need for _x, but some other (mechanical) adjustments would be
>> necessary.
> 
> The _x variable is still useful to avoid multiple evaluations of hnd,
> even though I know that this is not a public header.

But we had settled on returning hnd unmodified when both
handle types are the same.

> What about the following:
> 
> /* Cast a XEN_GUEST_HANDLE to XEN_GUEST_HANDLE_PARAM */
> #define guest_handle_to_param(hnd, type) ({                \
>     typeof((hnd).p) _x = (hnd).p;                          \
>     XEN_GUEST_HANDLE_PARAM(type) _y = { _x };              \
>     if (&_x != &_y.p) BUG();                               \
>     _y;                                                    \
> })

Since this is not a public header, something like this (untested,
so may not compile as is)

#define guest_handle_to_param(hnd, type) ({                         \
    (void)((typeof((hnd).p))0 == (XEN_GUEST_HANDLE_PARAM(type){}).p); \
    (hnd);                                                          \
})

is what I was thinking of.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 13:57:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 13:57:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2N3h-0000mT-Ey; Fri, 17 Aug 2012 13:57:53 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T2N3g-0000mD-IX
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 13:57:52 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1345211864!9860202!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9695 invoked from network); 17 Aug 2012 13:57:44 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-27.messagelabs.com with SMTP;
	17 Aug 2012 13:57:44 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 17 Aug 2012 14:57:44 +0100
Message-Id: <502E6A200200007800095E9A@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 17 Aug 2012 14:58:24 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Stefano Stabellini" <stefano.stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1208161523420.4850@kaball.uk.xensource.com>
	<1345128612-10323-4-git-send-email-stefano.stabellini@eu.citrix.com>
	<502D33B8020000780009596B@nat28.tlf.novell.com>
	<502D37D702000078000959F7@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208161801210.15568@kaball.uk.xensource.com>
	<1345190532.30865.67.camel@zakaz.uk.xensource.com>
	<502E2F090200007800095D62@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208171443580.15568@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1208171443580.15568@kaball.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "Tim \(Xen.org\)" <tim@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v3 4/6] xen: introduce XEN_GUEST_HANDLE_PARAM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 17.08.12 at 15:47, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
wrote:
> On Fri, 17 Aug 2012, Jan Beulich wrote:
>> >>> On 17.08.12 at 10:02, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>> > On Thu, 2012-08-16 at 18:10 +0100, Stefano Stabellini wrote:
>> >> On Thu, 16 Aug 2012, Jan Beulich wrote:
>> >> > >>> On 16.08.12 at 17:54, "Jan Beulich" <JBeulich@suse.com> wrote:
>> >> > > Seeing the patch I btw realized that there's no easy way to
>> >> > > avoid having the type as a second argument in the conversion
>> >> > > macros. Nevertheless I still don't like the explicitly specified type
>> >> > > there.
>> >> > 
>> >> > Btw - on the architecture(s) where the two handles are identical
>> >> > I would prefer you to make the conversion functions trivial (and
>> >> > thus avoid making use of the "type" parameter), thus allowing
>> >> > the type checking to occur that you currently circumvent.
>> >> 
>> >> OK, I can do that.
>> > 
>> > Will this result in the type parameter potentially becoming stale?
>> > 
>> > Adding a redundant pointer compare is a good way to get the compiler to
>> > catch this. Something like:
>> > 
>> >         /* Cast a XEN_GUEST_HANDLE_PARAM to XEN_GUEST_HANDLE */
>> >         #define guest_handle_from_param(hnd, type) ({
>> >             typeof((hnd).p) _x = (hnd).p;
>> >             XEN_GUEST_HANDLE(type) _y;
>> >             &_y == &_x;
>> >             hnd;
>> >          })
>> 
>> Ah yes, that's a good suggestion.
>> 
>> > I'm not sure which two pointers of members of the various structs need
>> > to be compared, maybe it's actually &_y.p and &hnd.p, but you get the
>> > idea...
>> 
>> Right, comparing (hnd).p with _y.p would be the right thing; no
>> need for _x, but some other (mechanical) adjustments would be
>> necessary.
> 
> The _x variable is still useful to avoid multiple evaluations of hnd,
> even though I know that this is not a public header.

But we had settled on returning hnd unmodified when both
handle types are the same.

> What about the following:
> 
> /* Cast a XEN_GUEST_HANDLE to XEN_GUEST_HANDLE_PARAM */
> #define guest_handle_to_param(hnd, type) ({                \
>     typeof((hnd).p) _x = (hnd).p;                          \
>     XEN_GUEST_HANDLE_PARAM(type) _y = { _x };              \
>     if (&_x != &_y.p) BUG();                               \
>     _y;                                                    \
> })

Since this is not a public header, something like this (untested,
so may not compile as is)

#define guest_handle_to_param(hnd, type) ({                \
    (void)(typeof((hnd).p)0 == (XEN_GUEST_HANDLE_PARAM(type){}).p); \
    (hnd);                                                    \
})

is what I was thinking of.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 13:59:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 13:59:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2N4j-0000tM-1c; Fri, 17 Aug 2012 13:58:57 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T2N4h-0000t2-18
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 13:58:55 +0000
Received: from [85.158.139.83:23974] by server-1.bemta-5.messagelabs.com id
	F9/F7-09980-E1E4E205; Fri, 17 Aug 2012 13:58:54 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-182.messagelabs.com!1345211932!17419024!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk0MzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29422 invoked from network); 17 Aug 2012 13:58:52 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 13:58:52 -0000
X-IronPort-AV: E=Sophos;i="4.77,785,1336348800"; d="scan'208";a="14061364"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 13:58:52 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 17 Aug 2012 14:58:51 +0100
Message-ID: <1345211930.10161.33.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Matt Wilson <msw@amazon.com>
Date: Fri, 17 Aug 2012 14:58:50 +0100
In-Reply-To: <20120807032453.GB4324@US-SEA-R8XVZTX>
References: <1809175cdc9b3a606d75.1343749000@kaos-source-31003.sea31.amazon.com>
	<20120807032453.GB4324@US-SEA-R8XVZTX>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>, Keir Fraser <keir@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH DOCDAY v3] docs: improve documentation of
 Xen command line parameters
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-08-07 at 04:24 +0100, Matt Wilson wrote:
> On Tue, Jul 31, 2012 at 08:36:40AM -0700, Matt Wilson wrote:
> > This change improves documentation for several Xen command line
> > parameters. Some of the Itanium-specific options are now removed. A
> > more thorough check should be performed to remove any other remnants.
> >
> > I've reformatted some of the entries to fit in 80 column terminals.
> >
> > Options that are yet undocumented but accept standard boolean /
> > integer values are now annotated as such.
> >
> > The size suffixes have been corrected to use the binary prefixes
> > instead of decimal prefixes.
> >
> > Changes since v2:
> >  * Change *bi prefixes to GiB, MiB, KiB
> >
> > Signed-off-by: Matt Wilson <msw@amazon.com>
> > Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
> 
> George's concerns were addressed in this version, and Andrew gave an
> Ack. Anything else keeping this from landing in staging?

Keir, do you want to apply this sort of thing or shall I (it's not
really a tools doc so I haven't touched it)?


> 
> Matt
> 
> > diff -r bf922651da96 -r 1809175cdc9b docs/misc/xen-command-line.markdown
> > --- a/docs/misc/xen-command-line.markdown       Sat Jul 28 17:27:30 2012 +0000
> > +++ b/docs/misc/xen-command-line.markdown       Mon Jul 30 19:04:59 2012 +0000
> > @@ -46,9 +46,9 @@ if a leading `0` is present.
> >
> >  A size parameter may be any integer, with a size suffix
> >
> > -* `G` or `g`: Giga (2^30)
> > -* `M` or `m`: Mega (2^20)
> > -* `K` or `k`: Kilo (2^10)
> > +* `G` or `g`: GiB (2^30)
> > +* `M` or `m`: MiB (2^20)
> > +* `K` or `k`: KiB (2^10)
> >  * `B` or `b`: Bytes
> >
> >  Without a size suffix, the default will be kilo.
> > @@ -107,8 +107,10 @@ Specify which ACPI MADT table to parse f
> >  than one is present.
> >
> >  ### acpi\_pstate\_strict
> > +> `= <integer>`
> >
> >  ### acpi\_skip\_timer\_override
> > +> `= <boolean>`
> >
> >  Instruct Xen to ignore timer-interrupt override.
> >
> > @@ -117,6 +119,8 @@ the domain 0 kernel this option is autom
> >  domain 0 command line
> >
> >  ### acpi\_sleep
> > +> `= s3_bios | s3_mode`
> > +
> >  ### allowsuperpage
> >  > `= <boolean>`
> >
> > @@ -136,12 +140,12 @@ there are more than 8 CPUs, Xen will swi
> >
> >  > Default: `false`
> >
> > -Force boot on potentially unsafe systems. By default Xen will refuse to boot on
> > -systems with the following errata:
> > +Force boot on potentially unsafe systems. By default Xen will refuse
> > +to boot on systems with the following errata:
> >
> >  * AMD Erratum 121. Processors with this erratum are subject to a guest
> > -  triggerable Denial of Service. Override only if you trust all of your PV
> > -  guests.
> > +  triggerable Denial of Service. Override only if you trust all of
> > +  your PV guests.
> >
> >  ### apic\_verbosity
> >  > `= verbose | debug`
> > @@ -153,15 +157,16 @@ Increase the verbosity of the APIC code
> >
> >  > Default: `true`
> >
> > -Permits Xen to set up and use PCI Address Translation Services, which is required
> > -for PCI Passthrough.
> > +Permits Xen to set up and use PCI Address Translation Services, which
> > +is required for PCI Passthrough.
> >
> >  ### availmem
> >  > `= <size>`
> >
> >  > Default: `0` (no limit)
> >
> > -Specify a maximum amount of available memory, to which Xen will clamp the e820 table.
> > +Specify a maximum amount of available memory, to which Xen will clamp
> > +the e820 table.
> >
> >  ### badpage
> >  > `= List of [ <integer> | <integer>-<integer> ]`
> > @@ -176,8 +181,9 @@ Xen's command line.
> >
> >  > Default: `true`
> >
> > -Scrub free RAM during boot.  This is a safety feature to prevent accidentally leaking
> > -sensitive VM data into other VMs if Xen crashes and reboots.
> > +Scrub free RAM during boot.  This is a safety feature to prevent
> > +accidentally leaking sensitive VM data into other VMs if Xen crashes
> > +and reboots.
> >
> >  ### cachesize
> >  > `= <size>`
> > @@ -227,7 +233,6 @@ Both option `com1` and `com2` follow the
> >
> >  A typical setup for most situations might be `com1=115200,8n1`
> >
> > -
> >  ### conring\_size
> >  > `= <size>`
> >
> > @@ -300,25 +305,30 @@ Indicate where the responsibility for dr
> >  ### cpuid\_mask\_cpu (AMD only)
> >  > `= fam_0f_rev_c | fam_0f_rev_d | fam_0f_rev_e | fam_0f_rev_f | fam_0f_rev_g | fam_10_rev_b | fam_10_rev_c | fam_11_rev_b`
> >
> > -If the other **cpuid\_mask\_{,ext\_}e{c,d}x** options are fully set (unspecified
> > -on the command line), specify a pre-canned cpuid mask to mask the current
> > -processor down to appear as the specified processor.  It is important to ensure
> > -that all hosts in a pool appear the same to guests to allow successful live
> > -migration.
> > +If the other **cpuid\_mask\_{,ext\_}e{c,d}x** options are fully set
> > +(unspecified on the command line), specify a pre-canned cpuid mask to
> > +mask the current processor down to appear as the specified processor.
> > +It is important to ensure that all hosts in a pool appear the same to
> > +guests to allow successful live migration.
> >
> >  ### cpuid\_mask\_ ecx,edx,ext\_ecx,ext\_edx,xsave_eax
> >  > `= <integer>`
> >
> >  > Default: `~0` (all bits set)
> >
> > -These five command line parameters are used to specify cpuid masks to help with
> > -cpuid levelling across a pool of hosts.  Setting a bit in the mask indicates that
> > -the feature should be enabled, while clearing a bit in the mask indicates that
> > -the feature should be disabled.  It is important to ensure that all hosts in a
> > -pool appear the same to guests to allow successful live migration.
> > +These five command line parameters are used to specify cpuid masks to
> > +help with cpuid levelling across a pool of hosts.  Setting a bit in
> > +the mask indicates that the feature should be enabled, while clearing
> > +a bit in the mask indicates that the feature should be disabled.  It
> > +is important to ensure that all hosts in a pool appear the same to
> > +guests to allow successful live migration.
> >
> >  ### cpuidle
> > +> `= <boolean>`
> > +
> >  ### cpuinfo
> > +> `= <boolean>`
> > +
> >  ### crashinfo_maxaddr
> >  > `= <size>`
> >
> > @@ -328,17 +338,42 @@ Specify the maximum address to allocate
> >  combination with the `low_crashinfo` command line option.
> >
> >  ### crashkernel
> > +> `= <ramsize-range>:<size>[,...][@<offset>]`
> > +
> >  ### credit2\_balance\_over
> > +> `= <integer>`
> > +
> >  ### credit2\_balance\_under
> > +> `= <integer>`
> > +
> >  ### credit2\_load\_window\_shift
> > +> `= <integer>`
> > +
> >  ### debug\_stack\_lines
> > +> `= <integer>`
> > +
> > +> Default: `20`
> > +
> > +Limits the number of lines printed in Xen stack traces.
> > +
> >  ### debugtrace
> > +> `= <integer>`
> > +
> > +> Default: `128`
> > +
> > +Specify the size of the console debug trace buffer in KiB. The debug
> > +trace feature is only enabled in debugging builds of Xen.
> > +
> >  ### dma\_bits
> >  > `= <integer>`
> >
> >  Specify the bit width of the DMA heap.
> >
> >  ### dom0\_ioports\_disable
> > +> `= List of <hex>-<hex>`
> > +
> > +Specify a list of IO ports to be excluded from dom0 access.
> > +
> >  ### dom0\_max\_vcpus
> >  > `= <integer>`
> >
> > @@ -372,6 +407,8 @@ For example, to set dom0's initial memor
> >  allow it to balloon up as far as 1GB use `dom0_mem=512M,max:1G`
> >
> >  ### dom0\_shadow
> > +> `= <boolean>`
> > +
> >  ### dom0\_vcpus\_pin
> >  > `= <boolean>`
> >
> > @@ -379,10 +416,21 @@ allow it to balloon up as far as 1GB use
> >
> >  Pin dom0 vcpus to their respective pcpus
> >
> > -### dom0\_vhpt\_size\_log2
> > -### dom\_rid\_bits
> >  ### e820-mtrr-clip
> > +> `= <boolean>`
> > +
> > +Flag that specifies if RAM should be clipped to the highest cacheable
> > +MTRR.
> > +
> > +> Default: `true` on Intel CPUs, otherwise `false`
> > +
> >  ### e820-verbose
> > +> `= <boolean>`
> > +
> > +> Default: `false`
> > +
> > +Flag that enables verbose output when processing e820 information and
> > +applying clipping.
> >
> >  ### edd (x86)
> >  > `= off | on | skipmbr`
> > @@ -397,17 +445,32 @@ Either force retrieval of monitor EDID i
> >  disable it (edid=no). This option should not normally be required
> >  except for debugging purposes.
> >
> > -### efi\_print
> >  ### extra\_guest\_irqs
> >  > `= <number>`
> >
> >  Increase the number of PIRQs available for the guest. The default is 32.
> >
> >  ### flask\_enabled
> > +> `= <integer>`
> > +
> >  ### flask\_enforcing
> > +> `= <integer>`
> > +
> >  ### font
> > +> `= <height>` where height is `8x8 | 8x14 | 8x16`
> > +
> > +Specify the font size when using the VESA console driver.
> > +
> >  ### gdb
> > +> `= <baud>[/<clock_hz>][,DPS[,<io-base>[,<irq>[,<port-bdf>[,<bridge-bdf>]]]] | pci | amt ] `
> > +
> > +Specify the serial parameters for the GDB stub.
> > +
> >  ### gnttab\_max\_nr\_frames
> > +> `= <integer>`
> > +
> > +Specify the maximum number of frames per grant table operation.
> > +
> >  ### guest\_loglvl
> >  > `= <level>[/<rate-limited level>]` where level is `none | error | warning | info | debug | all`
> >
> > @@ -420,15 +483,41 @@ The optional `<rate-limited level>` opti
> >  should be rate limited.
> >
> >  ### hap\_1gb
> > +> `= <boolean>`
> > +
> > +> Default: `true`
> > +
> > +Flag to enable 1 GB host page table support for Hardware Assisted
> > +Paging (HAP).
> > +
> >  ### hap\_2mb
> > +> `= <boolean>`
> > +
> > +> Default: `true`
> > +
> > +Flag to enable 2 MB host page table support for Hardware Assisted
> > +Paging (HAP).
> > +
> >  ### hpetbroadcast
> > +> `= <boolean>`
> > +
> >  ### hvm\_debug
> > +> `= <integer>`
> > +
> >  ### hvm\_port80
> > +> `= <boolean>`
> > +
> >  ### idle\_latency\_factor
> > +> `= <integer>`
> > +
> >  ### ioapic\_ack
> >  ### iommu
> >  ### iommu\_inclusive\_mapping
> > +> `= <boolean>`
> > +
> >  ### irq\_ratelimit
> > +> `= <integer>`
> > +
> >  ### irq\_vector\_map
> >  ### lapic
> >
> > @@ -437,7 +526,11 @@ if left disabled by the BIOS.  This opti
> >  all.
> >
> >  ### lapic\_timer\_c2\_ok
> > +> `= <boolean>`
> > +
> >  ### ler
> > +> `= <boolean>`
> > +
> >  ### loglvl
> >  > `= <level>[/<rate-limited level>]` where level is `none | error | warning | info | debug | all`
> >
> > @@ -461,18 +554,38 @@ so the crash kernel may find find them.
> >  with **crashinfo_maxaddr**.
> >
> >  ### max\_cstate
> > +> `= <integer>`
> > +
> >  ### max\_gsi\_irqs
> > +> `= <integer>`
> > +
> >  ### maxcpus
> > +> `= <integer>`
> > +
> >  ### mce
> > +> `= <integer>`
> > +
> >  ### mce\_fb
> > +> `= <integer>`
> > +
> >  ### mce\_verbosity
> > +> `= verbose`
> > +
> > +Specify verbose machine check output.
> > +
> >  ### mem
> >  > `= <size>`
> >
> > -Specifies the maximum address of physical RAM.  Any RAM beyond this
> > +Specify the maximum address of physical RAM.  Any RAM beyond this
> >  limit is ignored by Xen.
> >
> >  ### mmcfg
> > +> `= <boolean>[,amd-fam10]`
> > +
> > +> Default: `1`
> > +
> > +Specify if the MMConfig space should be enabled.
> > +
> >  ### nmi
> >  > `= ignore | dom0 | fatal`
> >
> > @@ -493,6 +606,8 @@ domain 0 kernel this option is automatic
> >  0 command line.
> >
> >  ### nofxsr
> > +> `= <boolean>`
> > +
> >  ### noirqbalance
> >  > `= <boolean>`
> >
> > @@ -501,11 +616,15 @@ systems such as Dell 1850/2850 that have
> >  IRQ routing issues.
> >
> >  ### nolapic
> > +> `= <boolean>`
> > +
> > +> Default: `false`
> >
> >  Ignore the local APIC on a uniprocessor system, even if enabled by the
> >  BIOS.  This option will accept value.
> >
> >  ### no-real-mode (x86)
> > +> `= <boolean>`
> >
> >  Do not execute real-mode bootstrap code when booting Xen. This option
> >  should not be used except for debugging. It will effectively disable
> > @@ -519,6 +638,10 @@ catching debug output.  Defaults to auto
> >  seconds.
> >
> >  ### noserialnumber
> > +> `= <boolean>`
> > +
> > +Disable CPU serial number reporting.
> > +
> >  ### nosmp
> >  > `= <boolean>`
> >
> > @@ -526,11 +649,39 @@ Disable SMP support.  No secondary proce
> >  Defaults to booting secondary processors.
> >
> >  ### nr\_irqs
> > +> `= <integer>`
> > +
> >  ### numa
> > -### pervcpu\_vhpt
> > +> `= on | off | fake=<integer> | noacpi`
> > +
> > +Default: `on`
> > +
> >  ### ple\_gap
> > +> `= <integer>`
> > +
> >  ### ple\_window
> > +> `= <integer>`
> > +
> >  ### reboot
> > +> `= b[ios] | t[riple] | k[bd] | n[o] [, [w]arm | [c]old]`
> > +
> > +Default: `0`
> > +
> > +Specify the host reboot method.
> > +
> > +`warm` instructs Xen to not set the cold reboot flag.
> > +
> > +`cold` instructs Xen to set the cold reboot flag.
> > +
> > +`bios` instructs Xen to reboot the host by jumping to BIOS. This is
> > +only available on 32-bit x86 platforms.
> > +
> > +`triple` instructs Xen to reboot the host by causing a triple fault.
> > +
> > +`kbd` instructs Xen to reboot the host via the keyboard controller.
> > +
> > +`acpi` instructs Xen to reboot the host using RESET_REG in the ACPI FADT.
> > +
> >  ### sched
> >  > `= credit | credit2 | sedf | arinc653`
> >
> > @@ -539,10 +690,20 @@ Defaults to booting secondary processors
> >  Choose the default scheduler.
> >
> >  ### sched\_credit2\_migrate\_resist
> > +> `= <integer>`
> > +
> >  ### sched\_credit\_default\_yield
> > +> `= <boolean>`
> > +
> >  ### sched\_credit\_tslice\_ms
> > +> `= <integer>`
> > +
> >  ### sched\_ratelimit\_us
> > +> `= <integer>`
> > +
> >  ### sched\_smt\_power\_savings
> > +> `= <boolean>`
> > +
> >  ### serial\_tx\_buffer
> >  > `= <size>`
> >
> > @@ -551,7 +712,15 @@ Choose the default scheduler.
> >  Set the serial transmit buffer size.
> >
> >  ### smep
> > +> `= <boolean>`
> > +
> > +> Default: `true`
> > +
> > +Flag to enable Supervisor Mode Execution Protection
> > +
> >  ### snb\_igd\_quirk
> > +> `= <boolean>`
> > +
> >  ### sync\_console
> >  > `= <boolean>`
> >
> > @@ -561,28 +730,80 @@ Flag to force synchronous console output
> >  not suitable for production environments due to incurred overhead.
> >
> >  ### tboot
> > +> `= 0x<phys_addr>`
> > +
> > +Specify the physical address of the trusted boot shared page.
> > +
> >  ### tbuf\_size
> >  > `= <integer>`
> >
> >  Specify the per-cpu trace buffer size in pages.
> >
> >  ### tdt
> > +> `= <boolean>`
> > +
> > +> Default: `true`
> > +
> > +Flag to enable TSC deadline as the APIC timer mode.
> > +
> >  ### tevt\_mask
> > +> `= <integer>`
> > +
> > +Specify a mask for Xen event tracing. This allows Xen tracing to be
> > +enabled at boot. Refer to the xentrace(8) documentation for a list of
> > +valid event mask values. In order to enable tracing, a buffer size (in
> > +pages) must also be specified via the tbuf\_size parameter.
> > +
> >  ### tickle\_one\_idle\_cpu
> > +> `= <boolean>`
> > +
> >  ### timer\_slop
> > +> `= <integer>`
> > +
> >  ### tmem
> > +> `= <boolean>`
> > +
> >  ### tmem\_compress
> > +> `= <boolean>`
> > +
> >  ### tmem\_dedup
> > +> `= <boolean>`
> > +
> >  ### tmem\_lock
> > +> `= <integer>`
> > +
> >  ### tmem\_shared\_auth
> > +> `= <boolean>`
> > +
> >  ### tmem\_tze
> > +> `= <integer>`
> > +
> >  ### tsc
> > +> `= unstable | skewed`
> > +
> >  ### ucode
> >  ### unrestricted\_guest
> > +> `= <boolean>`
> > +
> >  ### vcpu\_migration\_delay
> > +> `= <integer>`
> > +
> > +> Default: `0`
> > +
> > +Specify a delay, in microseconds, between migrations of a VCPU between
> > +PCPUs when using the credit1 scheduler. This prevents rapid fluttering
> > +of a VCPU between CPUs, and reduces the implicit overheads such as
> > +cache-warming. 1ms (1000) has been measured as a good value.
> > +
> >  ### vesa-map
> > +> `= <integer>`
> > +
> >  ### vesa-mtrr
> > +> `= <integer>`
> > +
> >  ### vesa-ram
> > +> `= <integer>`
> > +
> >  ### vga
> >  > `= ( ask | current | text-80x<rows> | gfx-<width>x<height>x<depth> | mode-<mode> )[,keep]`
> >
> >
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 13:59:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 13:59:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2N4j-0000tM-1c; Fri, 17 Aug 2012 13:58:57 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T2N4h-0000t2-18
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 13:58:55 +0000
Received: from [85.158.139.83:23974] by server-1.bemta-5.messagelabs.com id
	F9/F7-09980-E1E4E205; Fri, 17 Aug 2012 13:58:54 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-182.messagelabs.com!1345211932!17419024!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk0MzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29422 invoked from network); 17 Aug 2012 13:58:52 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 13:58:52 -0000
X-IronPort-AV: E=Sophos;i="4.77,785,1336348800"; d="scan'208";a="14061364"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 13:58:52 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 17 Aug 2012 14:58:51 +0100
Message-ID: <1345211930.10161.33.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Matt Wilson <msw@amazon.com>
Date: Fri, 17 Aug 2012 14:58:50 +0100
In-Reply-To: <20120807032453.GB4324@US-SEA-R8XVZTX>
References: <1809175cdc9b3a606d75.1343749000@kaos-source-31003.sea31.amazon.com>
	<20120807032453.GB4324@US-SEA-R8XVZTX>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>, Keir Fraser <keir@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH DOCDAY v3] docs: improve documentation of
 Xen command line parameters
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-08-07 at 04:24 +0100, Matt Wilson wrote:
> On Tue, Jul 31, 2012 at 08:36:40AM -0700, Matt Wilson wrote:
> > This change improves documentation for several Xen command line
> > parameters. Some of the Itanium-specific options are now removed. A
> > more thorough check should be performed to remove any other remnants.
> >
> > I've reformatted some of the entries to fit in 80 column terminals.
> >
> > Options that are yet undocumented but accept standard boolean /
> > integer values are now annotated as such.
> >
> > The size suffixes have been corrected to use the binary prefixes
> > instead of decimal prefixes.
> >
> > Changes since v2:
> >  * Change *bi prefixes to GiB, MiB, KiB
> >
> > Signed-off-by: Matt Wilson <msw@amazon.com>
> > Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
> 
> George's concerns were adressed in this version, and Andrew gave an
> Ack. Anything else keeping this from landing in staging?

Keir, do you want to apply this sort of thing or shall I (it's not
really a tools doc so I haven't touched it)


> 
> Matt
> 
> > diff -r bf922651da96 -r 1809175cdc9b docs/misc/xen-command-line.markdown
> > --- a/docs/misc/xen-command-line.markdown       Sat Jul 28 17:27:30 2012 +0000
> > +++ b/docs/misc/xen-command-line.markdown       Mon Jul 30 19:04:59 2012 +0000
> > @@ -46,9 +46,9 @@ if a leading `0` is present.
> >
> >  A size parameter may be any integer, with a size suffix
> >
> > -* `G` or `g`: Giga (2^30)
> > -* `M` or `m`: Mega (2^20)
> > -* `K` or `k`: Kilo (2^10)
> > +* `G` or `g`: GiB (2^30)
> > +* `M` or `m`: MiB (2^20)
> > +* `K` or `k`: KiB (2^10)
> >  * `B` or `b`: Bytes
> >
> >  Without a size suffix, the default will be kilo.
> > @@ -107,8 +107,10 @@ Specify which ACPI MADT table to parse f
> >  than one is present.
> >
> >  ### acpi\_pstate\_strict
> > +> `= <integer>`
> >
> >  ### acpi\_skip\_timer\_override
> > +> `= <boolean>`
> >
> >  Instruct Xen to ignore timer-interrupt override.
> >
> > @@ -117,6 +119,8 @@ the domain 0 kernel this option is autom
> >  domain 0 command line
> >
> >  ### acpi\_sleep
> > +> `= s3_bios | s3_mode`
> > +
> >  ### allowsuperpage
> >  > `= <boolean>`
> >
> > @@ -136,12 +140,12 @@ there are more than 8 CPUs, Xen will swi
> >
> >  > Default: `false`
> >
> > -Force boot on potentially unsafe systems. By default Xen will refuse to boot on
> > -systems with the following errata:
> > +Force boot on potentially unsafe systems. By default Xen will refuse
> > +to boot on systems with the following errata:
> >
> >  * AMD Erratum 121. Processors with this erratum are subject to a guest
> > -  triggerable Denial of Service. Override only if you trust all of your PV
> > -  guests.
> > +  triggerable Denial of Service. Override only if you trust all of
> > +  your PV guests.
> >
> >  ### apic\_verbosity
> >  > `= verbose | debug`
> > @@ -153,15 +157,16 @@ Increase the verbosity of the APIC code
> >
> >  > Default: `true`
> >
> > -Permits Xen to set up and use PCI Address Translation Services, which is required
> > -for PCI Passthrough.
> > +Permits Xen to set up and use PCI Address Translation Services, which
> > +is required for PCI Passthrough.
> >
> >  ### availmem
> >  > `= <size>`
> >
> >  > Default: `0` (no limit)
> >
> > -Specify a maximum amount of available memory, to which Xen will clamp the e820 table.
> > +Specify a maximum amount of available memory, to which Xen will clamp
> > +the e820 table.
> >
> >  ### badpage
> >  > `= List of [ <integer> | <integer>-<integer> ]`
> > @@ -176,8 +181,9 @@ Xen's command line.
> >
> >  > Default: `true`
> >
> > -Scrub free RAM during boot.  This is a safety feature to prevent accidentally leaking
> > -sensitive VM data into other VMs if Xen crashes and reboots.
> > +Scrub free RAM during boot.  This is a safety feature to prevent
> > +accidentally leaking sensitive VM data into other VMs if Xen crashes
> > +and reboots.
> >
> >  ### cachesize
> >  > `= <size>`
> > @@ -227,7 +233,6 @@ Both option `com1` and `com2` follow the
> >
> >  A typical setup for most situations might be `com1=115200,8n1`
> >
> > -
> >  ### conring\_size
> >  > `= <size>`
> >
> > @@ -300,25 +305,30 @@ Indicate where the responsibility for dr
> >  ### cpuid\_mask\_cpu (AMD only)
> >  > `= fam_0f_rev_c | fam_0f_rev_d | fam_0f_rev_e | fam_0f_rev_f | fam_0f_rev_g | fam_10_rev_b | fam_10_rev_c | fam_11_rev_b`
> >
> > -If the other **cpuid\_mask\_{,ext\_}e{c,d}x** options are fully set (unspecified
> > -on the command line), specify a pre-canned cpuid mask to mask the current
> > -processor down to appear as the specified processor.  It is important to ensure
> > -that all hosts in a pool appear the same to guests to allow successful live
> > -migration.
> > +If the other **cpuid\_mask\_{,ext\_}e{c,d}x** options are fully set
> > +(unspecified on the command line), specify a pre-canned cpuid mask to
> > +mask the current processor down to appear as the specified processor.
> > +It is important to ensure that all hosts in a pool appear the same to
> > +guests to allow successful live migration.
> >
> >  ### cpuid\_mask\_ ecx,edx,ext\_ecx,ext\_edx,xsave_eax
> >  > `= <integer>`
> >
> >  > Default: `~0` (all bits set)
> >
> > -These five command line parameters are used to specify cpuid masks to help with
> > -cpuid levelling across a pool of hosts.  Setting a bit in the mask indicates that
> > -the feature should be enabled, while clearing a bit in the mask indicates that
> > -the feature should be disabled.  It is important to ensure that all hosts in a
> > -pool appear the same to guests to allow successful live migration.
> > +These five command line parameters are used to specify cpuid masks to
> > +help with cpuid levelling across a pool of hosts.  Setting a bit in
> > +the mask indicates that the feature should be enabled, while clearing
> > +a bit in the mask indicates that the feature should be disabled.  It
> > +is important to ensure that all hosts in a pool appear the same to
> > +guests to allow successful live migration.
> >
> >  ### cpuidle
> > +> `= <boolean>`
> > +
> >  ### cpuinfo
> > +> `= <boolean>`
> > +
> >  ### crashinfo_maxaddr
> >  > `= <size>`
> >
> > @@ -328,17 +338,42 @@ Specify the maximum address to allocate
> >  combination with the `low_crashinfo` command line option.
> >
> >  ### crashkernel
> > +> `= <ramsize-range>:<size>[,...][@<offset>]`
> > +
> >  ### credit2\_balance\_over
> > +> `= <integer>`
> > +
> >  ### credit2\_balance\_under
> > +> `= <integer>`
> > +
> >  ### credit2\_load\_window\_shift
> > +> `= <integer>`
> > +
> >  ### debug\_stack\_lines
> > +> `= <integer>`
> > +
> > +> Default: `20`
> > +
> > +Limits the number lines printed in Xen stack traces.
> > +
> >  ### debugtrace
> > +> `= <integer>`
> > +
> > +> Default: `128`
> > +
> > +Specify the size of the console debug trace buffer in KiB. The debug
> > +trace feature is only enabled in debugging builds of Xen.
> > +
> >  ### dma\_bits
> >  > `= <integer>`
> >
> >  Specify the bit width of the DMA heap.
> >
> >  ### dom0\_ioports\_disable
> > +> `= List of <hex>-<hex>`
> > +
> > +Specify a list of IO ports to be excluded from dom0 access.
> > +
> >  ### dom0\_max\_vcpus
> >  > `= <integer>`
> >
> > @@ -372,6 +407,8 @@ For example, to set dom0's initial memor
> >  allow it to balloon up as far as 1GB use `dom0_mem=512M,max:1G`
> >
> >  ### dom0\_shadow
> > +> `= <boolean>`
> > +
> >  ### dom0\_vcpus\_pin
> >  > `= <boolean>`
> >
> > @@ -379,10 +416,21 @@ allow it to balloon up as far as 1GB use
> >
> >  Pin dom0 vcpus to their respective pcpus
> >
> > -### dom0\_vhpt\_size\_log2
> > -### dom\_rid\_bits
> >  ### e820-mtrr-clip
> > +> `= <boolean>`
> > +
> > +Flag that specifies if RAM should be clipped to the highest cacheable
> > +MTRR.
> > +
> > +> Default: `true` on Intel CPUs, otherwise `false`
> > +
> >  ### e820-verbose
> > +> `= <boolean>`
> > +
> > +> Default: `false`
> > +
> > +Flag that enables verbose output when processing e820 information and
> > +applying clipping.
> >
> >  ### edd (x86)
> >  > `= off | on | skipmbr`
> > @@ -397,17 +445,32 @@ Either force retrieval of monitor EDID i
> >  disable it (edid=no). This option should not normally be required
> >  except for debugging purposes.
> >
> > -### efi\_print
> >  ### extra\_guest\_irqs
> >  > `= <number>`
> >
> >  Increase the number of PIRQs available for the guest. The default is 32.
> >
> >  ### flask\_enabled
> > +> `= <integer>`
> > +
> >  ### flask\_enforcing
> > +> `= <integer>`
> > +
> >  ### font
> > +> `= <height>` where height is `8x8 | 8x14 | 8x16`
> > +
> > +Specify the font size when using the VESA console driver.
> > +
> >  ### gdb
> > +> `= <baud>[/<clock_hz>][,DPS[,<io-base>[,<irq>[,<port-bdf>[,<bridge-bdf>]]]] | pci | amt ] `
> > +
> > +Specify the serial parameters for the GDB stub.
> > +
> >  ### gnttab\_max\_nr\_frames
> > +> `= <integer>`
> > +
> > +Specify the maximum number of frames per grant table operation.
> > +
> >  ### guest\_loglvl
> >  > `= <level>[/<rate-limited level>]` where level is `none | error | warning | info | debug | all`
> >
> > @@ -420,15 +483,41 @@ The optional `<rate-limited level>` opti
> >  should be rate limited.
> >
> >  ### hap\_1gb
> > +> `= <boolean>`
> > +
> > +> Default: `true`
> > +
> > +Flag to enable 1 GB host page table support for Hardware Assisted
> > +Paging (HAP).
> > +
> >  ### hap\_2mb
> > +> `= <boolean>`
> > +
> > +> Default: `true`
> > +
> > +Flag to enable 2 MB host page table support for Hardware Assisted
> > +Paging (HAP).
> > +
> >  ### hpetbroadcast
> > +> `= <boolean>`
> > +
> >  ### hvm\_debug
> > +> `= <integer>`
> > +
> >  ### hvm\_port80
> > +> `= <boolean>`
> > +
> >  ### idle\_latency\_factor
> > +> `= <integer>`
> > +
> >  ### ioapic\_ack
> >  ### iommu
> >  ### iommu\_inclusive\_mapping
> > +> `= <boolean>`
> > +
> >  ### irq\_ratelimit
> > +> `= <integer>`
> > +
> >  ### irq\_vector\_map
> >  ### lapic
> >
> > @@ -437,7 +526,11 @@ if left disabled by the BIOS.  This opti
> >  all.
> >
> >  ### lapic\_timer\_c2\_ok
> > +> `= <boolean>`
> > +
> >  ### ler
> > +> `= <boolean>`
> > +
> >  ### loglvl
> >  > `= <level>[/<rate-limited level>]` where level is `none | error | warning | info | debug | all`
> >
> > @@ -461,18 +554,38 @@ so the crash kernel may find find them.
> >  with **crashinfo_maxaddr**.
> >
> >  ### max\_cstate
> > +> `= <integer>`
> > +
> >  ### max\_gsi\_irqs
> > +> `= <integer>`
> > +
> >  ### maxcpus
> > +> `= <integer>`
> > +
> >  ### mce
> > +> `= <integer>`
> > +
> >  ### mce\_fb
> > +> `= <integer>`
> > +
> >  ### mce\_verbosity
> > +> `= verbose`
> > +
> > +Specify verbose machine check output.
> > +
> >  ### mem
> >  > `= <size>`
> >
> > -Specifies the maximum address of physical RAM.  Any RAM beyond this
> > +Specify the maximum address of physical RAM.  Any RAM beyond this
> >  limit is ignored by Xen.
> >
> >  ### mmcfg
> > +> `= <boolean>[,amd-fam10]`
> > +
> > +> Default: `1`
> > +
> > +Specify if the MMConfig space should be enabled.
> > +
> >  ### nmi
> >  > `= ignore | dom0 | fatal`
> >
> > @@ -493,6 +606,8 @@ domain 0 kernel this option is automatic
> >  0 command line.
> >
> >  ### nofxsr
> > +> `= <boolean>`
> > +
> >  ### noirqbalance
> >  > `= <boolean>`
> >
> > @@ -501,11 +616,15 @@ systems such as Dell 1850/2850 that have
> >  IRQ routing issues.
> >
> >  ### nolapic
> > +> `= <boolean>`
> > +
> > +> Default: `false`
> >
> >  Ignore the local APIC on a uniprocessor system, even if enabled by the
> >  BIOS.  This option will accept a value.
> >
> >  ### no-real-mode (x86)
> > +> `= <boolean>`
> >
> >  Do not execute real-mode bootstrap code when booting Xen. This option
> >  should not be used except for debugging. It will effectively disable
> > @@ -519,6 +638,10 @@ catching debug output.  Defaults to auto
> >  seconds.
> >
> >  ### noserialnumber
> > +> `= <boolean>`
> > +
> > +Disable CPU serial number reporting.
> > +
> >  ### nosmp
> >  > `= <boolean>`
> >
> > @@ -526,11 +649,39 @@ Disable SMP support.  No secondary proce
> >  Defaults to booting secondary processors.
> >
> >  ### nr\_irqs
> > +> `= <integer>`
> > +
> >  ### numa
> > -### pervcpu\_vhpt
> > +> `= on | off | fake=<integer> | noacpi`
> > +
> > +> Default: `on`
> > +
> >  ### ple\_gap
> > +> `= <integer>`
> > +
> >  ### ple\_window
> > +> `= <integer>`
> > +
> >  ### reboot
> > +> `= b[ios] | t[riple] | k[bd] | n[o] [, [w]arm | [c]old]`
> > +
> > +> Default: `0`
> > +
> > +Specify the host reboot method.
> > +
> > +`warm` instructs Xen to not set the cold reboot flag.
> > +
> > +`cold` instructs Xen to set the cold reboot flag.
> > +
> > +`bios` instructs Xen to reboot the host by jumping to BIOS. This is
> > +only available on 32-bit x86 platforms.
> > +
> > +`triple` instructs Xen to reboot the host by causing a triple fault.
> > +
> > +`kbd` instructs Xen to reboot the host via the keyboard controller.
> > +
> > +`acpi` instructs Xen to reboot the host using RESET_REG in the ACPI FADT.
> > +
> >  ### sched
> >  > `= credit | credit2 | sedf | arinc653`
> >
> > @@ -539,10 +690,20 @@ Defaults to booting secondary processors
> >  Choose the default scheduler.
> >
> >  ### sched\_credit2\_migrate\_resist
> > +> `= <integer>`
> > +
> >  ### sched\_credit\_default\_yield
> > +> `= <boolean>`
> > +
> >  ### sched\_credit\_tslice\_ms
> > +> `= <integer>`
> > +
> >  ### sched\_ratelimit\_us
> > +> `= <integer>`
> > +
> >  ### sched\_smt\_power\_savings
> > +> `= <boolean>`
> > +
> >  ### serial\_tx\_buffer
> >  > `= <size>`
> >
> > @@ -551,7 +712,15 @@ Choose the default scheduler.
> >  Set the serial transmit buffer size.
> >
> >  ### smep
> > +> `= <boolean>`
> > +
> > +> Default: `true`
> > +
> > +Flag to enable Supervisor Mode Execution Protection.
> > +
> >  ### snb\_igd\_quirk
> > +> `= <boolean>`
> > +
> >  ### sync\_console
> >  > `= <boolean>`
> >
> > @@ -561,28 +730,80 @@ Flag to force synchronous console output
> >  not suitable for production environments due to incurred overhead.
> >
> >  ### tboot
> > +> `= 0x<phys_addr>`
> > +
> > +Specify the physical address of the trusted boot shared page.
> > +
> >  ### tbuf\_size
> >  > `= <integer>`
> >
> >  Specify the per-cpu trace buffer size in pages.
> >
> >  ### tdt
> > +> `= <boolean>`
> > +
> > +> Default: `true`
> > +
> > +Flag to enable TSC deadline as the APIC timer mode.
> > +
> >  ### tevt\_mask
> > +> `= <integer>`
> > +
> > +Specify a mask for Xen event tracing. This allows Xen tracing to be
> > +enabled at boot. Refer to the xentrace(8) documentation for a list of
> > +valid event mask values. In order to enable tracing, a buffer size (in
> > +pages) must also be specified via the tbuf\_size parameter.
> > +
> >  ### tickle\_one\_idle\_cpu
> > +> `= <boolean>`
> > +
> >  ### timer\_slop
> > +> `= <integer>`
> > +
> >  ### tmem
> > +> `= <boolean>`
> > +
> >  ### tmem\_compress
> > +> `= <boolean>`
> > +
> >  ### tmem\_dedup
> > +> `= <boolean>`
> > +
> >  ### tmem\_lock
> > +> `= <integer>`
> > +
> >  ### tmem\_shared\_auth
> > +> `= <boolean>`
> > +
> >  ### tmem\_tze
> > +> `= <integer>`
> > +
> >  ### tsc
> > +> `= unstable | skewed`
> > +
> >  ### ucode
> >  ### unrestricted\_guest
> > +> `= <boolean>`
> > +
> >  ### vcpu\_migration\_delay
> > +> `= <integer>`
> > +
> > +> Default: `0`
> > +
> > +Specify a delay, in microseconds, between migrations of a VCPU between
> > +PCPUs when using the credit1 scheduler. This prevents rapid fluttering
> > +of a VCPU between CPUs, and reduces the implicit overheads such as
> > +cache-warming. 1ms (1000) has been measured as a good value.
> > +
> >  ### vesa-map
> > +> `= <integer>`
> > +
> >  ### vesa-mtrr
> > +> `= <integer>`
> > +
> >  ### vesa-ram
> > +> `= <integer>`
> > +
> >  ### vga
> >  > `= ( ask | current | text-80x<rows> | gfx-<width>x<height>x<depth> | mode-<mode> )[,keep]`
> >
> >
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
> 




From xen-devel-bounces@lists.xen.org Fri Aug 17 13:59:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 13:59:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2N5W-0000zC-Fk; Fri, 17 Aug 2012 13:59:46 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T2N5V-0000yq-0Q
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 13:59:45 +0000
Received: from [85.158.139.83:29869] by server-12.bemta-5.messagelabs.com id
	9E/AF-22359-05E4E205; Fri, 17 Aug 2012 13:59:44 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-182.messagelabs.com!1345211982!27995047!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk0MzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6259 invoked from network); 17 Aug 2012 13:59:42 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-9.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 13:59:42 -0000
X-IronPort-AV: E=Sophos;i="4.77,785,1336348800"; d="scan'208";a="14061382"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 13:59:42 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 17 Aug 2012 14:59:42 +0100
Message-ID: <1345211980.10161.34.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>, Keir Fraser
	<keir@xen.org>
Date: Fri, 17 Aug 2012 14:59:40 +0100
In-Reply-To: <1343641349-28344-1-git-send-email-ian.campbell@citrix.com>
References: <1343641349-28344-1-git-send-email-ian.campbell@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Subject: Re: [Xen-devel] [DOCDAY PATCH] docs: document/mark-up SCHEDOP_*
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Keir, does this look ok?

On Mon, 2012-07-30 at 10:42 +0100, Ian Campbell wrote:
> The biggest subtlety here is the additional argument when op ==
> SCHEDOP_shutdown and reason == SHUTDOWN_suspend and its interpretation by
> xc_domain_{save,restore}. Add some clarifying comments to libxc as well.
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> ---
>  tools/libxc/xc_domain_restore.c |   10 ++++-
>  tools/libxc/xc_domain_save.c    |    9 ++++-
>  xen/include/public/sched.h      |   84 ++++++++++++++++++++++++++-------------
>  3 files changed, 72 insertions(+), 31 deletions(-)
> 
> diff --git a/tools/libxc/xc_domain_restore.c b/tools/libxc/xc_domain_restore.c
> index 3fe2b12..5541e73 100644
> --- a/tools/libxc/xc_domain_restore.c
> +++ b/tools/libxc/xc_domain_restore.c
> @@ -1895,8 +1895,14 @@ int xc_domain_restore(xc_interface *xch, int io_fd, uint32_t dom,
>          if ( i == 0 )
>          {
>              /*
> -             * Uncanonicalise the suspend-record frame number and poke
> -             * resume record.
> +             * Uncanonicalise the start info frame number and poke in
> +             * updated values into the start info itself.
> +             *
> +             * The start info MFN is the 3rd argument to the
> +             * HYPERVISOR_sched_op hypercall when op==SCHEDOP_shutdown
> > +             * and reason==SHUTDOWN_suspend; it is canonicalised in
> +             * xc_domain_save and therefore the PFN is found in the
> +             * edx register.
>               */
>              pfn = GET_FIELD(ctxt, user_regs.edx);
>              if ( (pfn >= dinfo->p2m_size) ||
> diff --git a/tools/libxc/xc_domain_save.c b/tools/libxc/xc_domain_save.c
> index c359649..f161472 100644
> --- a/tools/libxc/xc_domain_save.c
> +++ b/tools/libxc/xc_domain_save.c
> @@ -1867,7 +1867,14 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
>          goto out;
>      }
>  
> -    /* Canonicalise the suspend-record frame number. */
> +    /*
> +     * Canonicalise the start info frame number.
> +     *
> +     * The start info MFN is the 3rd argument to the
> +     * HYPERVISOR_sched_op hypercall when op==SCHEDOP_shutdown and
> +     * reason==SHUTDOWN_suspend and is therefore found in the edx
> +     * register.
> +     */
>      mfn = GET_FIELD(&ctxt, user_regs.edx);
>      if ( !MFN_IS_IN_PSEUDOPHYS_MAP(mfn) )
>      {
> diff --git a/xen/include/public/sched.h b/xen/include/public/sched.h
> index 7f87420..db5124a 100644
> --- a/xen/include/public/sched.h
> +++ b/xen/include/public/sched.h
> @@ -1,8 +1,8 @@
>  /******************************************************************************
>   * sched.h
> - * 
> + *
>   * Scheduler state interactions
> - * 
> + *
>   * Permission is hereby granted, free of charge, to any person obtaining a copy
>   * of this software and associated documentation files (the "Software"), to
>   * deal in the Software without restriction, including without limitation the
> @@ -30,20 +30,33 @@
>  #include "event_channel.h"
>  
>  /*
> + * `incontents 150 sched Guest Scheduler Operations
> + *
> + * The SCHEDOP interface provides mechanisms for a guest to interact
> + * with the scheduler, including yield, blocking and shutting itself
> + * down.
> + */
> +
> +/*
>   * The prototype for this hypercall is:
> - *  long sched_op(int cmd, void *arg)
> + * ` long HYPERVISOR_sched_op(enum sched_op cmd, void *arg, ...)
> + *
>   * @cmd == SCHEDOP_??? (scheduler operation).
>   * @arg == Operation-specific extra argument(s), as described below.
> - * 
> + * ...  == Additional Operation-specific extra arguments, described below.
> + *
>   * Versions of Xen prior to 3.0.2 provided only the following legacy version
>   * of this hypercall, supporting only the commands yield, block and shutdown:
>   *  long sched_op(int cmd, unsigned long arg)
>   * @cmd == SCHEDOP_??? (scheduler operation).
>   * @arg == 0               (SCHEDOP_yield and SCHEDOP_block)
>   *      == SHUTDOWN_* code (SCHEDOP_shutdown)
> - * This legacy version is available to new guests as sched_op_compat().
> + *
> + * This legacy version is available to new guests as:
> + * ` long HYPERVISOR_sched_op_compat(enum sched_op cmd, unsigned long arg)
>   */
>  
> +/* ` enum sched_op { // SCHEDOP_* => struct sched_* */
>  /*
>   * Voluntarily yield the CPU.
>   * @arg == NULL.
> @@ -61,59 +74,72 @@
>  
>  /*
>   * Halt execution of this domain (all VCPUs) and notify the system controller.
> - * @arg == pointer to sched_shutdown structure.
> + * @arg == pointer to sched_shutdown_t structure.
> + *
> > + * If the sched_shutdown_t reason is SHUTDOWN_suspend then this
> > + * hypercall takes an additional argument which should be the
> > + * MFN of the guest's start_info_t.
> > + *
> > + * In addition, when reason is SHUTDOWN_suspend this hypercall
> > + * returns 1 if suspend was cancelled or the domain was merely
> > + * checkpointed, and 0 if it is resuming in a new domain.
>   */
>  #define SCHEDOP_shutdown    2
> -struct sched_shutdown {
> -    unsigned int reason; /* SHUTDOWN_* */
> -};
> -typedef struct sched_shutdown sched_shutdown_t;
> -DEFINE_XEN_GUEST_HANDLE(sched_shutdown_t);
>  
>  /*
>   * Poll a set of event-channel ports. Return when one or more are pending. An
>   * optional timeout may be specified.
> - * @arg == pointer to sched_poll structure.
> + * @arg == pointer to sched_poll_t structure.
>   */
>  #define SCHEDOP_poll        3
> -struct sched_poll {
> -    XEN_GUEST_HANDLE(evtchn_port_t) ports;
> -    unsigned int nr_ports;
> -    uint64_t timeout;
> -};
> -typedef struct sched_poll sched_poll_t;
> -DEFINE_XEN_GUEST_HANDLE(sched_poll_t);
>  
>  /*
>   * Declare a shutdown for another domain. The main use of this function is
>   * in interpreting shutdown requests and reasons for fully-virtualized
>   * domains.  A para-virtualized domain may use SCHEDOP_shutdown directly.
> - * @arg == pointer to sched_remote_shutdown structure.
> + * @arg == pointer to sched_remote_shutdown_t structure.
>   */
>  #define SCHEDOP_remote_shutdown        4
> -struct sched_remote_shutdown {
> -    domid_t domain_id;         /* Remote domain ID */
> -    unsigned int reason;       /* SHUTDOWN_xxx reason */
> -};
> -typedef struct sched_remote_shutdown sched_remote_shutdown_t;
> -DEFINE_XEN_GUEST_HANDLE(sched_remote_shutdown_t);
>  
>  /*
>   * Latch a shutdown code, so that when the domain later shuts down it
>   * reports this code to the control tools.
> - * @arg == as for SCHEDOP_shutdown.
> + * @arg == sched_shutdown_t, as for SCHEDOP_shutdown.
>   */
>  #define SCHEDOP_shutdown_code 5
>  
>  /*
>   * Setup, poke and destroy a domain watchdog timer.
> - * @arg == pointer to sched_watchdog structure.
> + * @arg == pointer to sched_watchdog_t structure.
>   * With id == 0, setup a domain watchdog timer to cause domain shutdown
>   *               after timeout, returns watchdog id.
>   * With id != 0 and timeout == 0, destroy domain watchdog timer.
>   * With id != 0 and timeout != 0, poke watchdog timer and set new timeout.
>   */
>  #define SCHEDOP_watchdog    6
> +/* ` } */
> +
> +struct sched_shutdown {
> +    unsigned int reason; /* SHUTDOWN_* => enum sched_shutdown_reason */
> +};
> +typedef struct sched_shutdown sched_shutdown_t;
> +DEFINE_XEN_GUEST_HANDLE(sched_shutdown_t);
> +
> +struct sched_poll {
> +    XEN_GUEST_HANDLE(evtchn_port_t) ports;
> +    unsigned int nr_ports;
> +    uint64_t timeout;
> +};
> +typedef struct sched_poll sched_poll_t;
> +DEFINE_XEN_GUEST_HANDLE(sched_poll_t);
> +
> +struct sched_remote_shutdown {
> +    domid_t domain_id;         /* Remote domain ID */
> +    unsigned int reason;       /* SHUTDOWN_* => enum sched_shutdown_reason */
> +};
> +typedef struct sched_remote_shutdown sched_remote_shutdown_t;
> +DEFINE_XEN_GUEST_HANDLE(sched_remote_shutdown_t);
> +
>  struct sched_watchdog {
>      uint32_t id;                /* watchdog ID */
>      uint32_t timeout;           /* timeout */
> @@ -126,11 +152,13 @@ DEFINE_XEN_GUEST_HANDLE(sched_watchdog_t);
>   * software to determine the appropriate action. For the most part, Xen does
>   * not care about the shutdown code.
>   */
> +/* ` enum sched_shutdown_reason { */
>  #define SHUTDOWN_poweroff   0  /* Domain exited normally. Clean up and kill. */
>  #define SHUTDOWN_reboot     1  /* Clean up, kill, and then restart.          */
>  #define SHUTDOWN_suspend    2  /* Clean up, save suspend info, kill.         */
>  #define SHUTDOWN_crash      3  /* Tell controller we've crashed.             */
>  #define SHUTDOWN_watchdog   4  /* Restart because watchdog time expired.     */
> +/* ` } */
>  
>  #endif /* __XEN_PUBLIC_SCHED_H__ */
>  



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 14:01:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 14:01:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2N6d-0001C9-VV; Fri, 17 Aug 2012 14:00:55 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T2N6c-0001BN-Ea
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 14:00:54 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1345212027!4645376!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26543 invoked from network); 17 Aug 2012 14:00:27 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-10.tower-27.messagelabs.com with SMTP;
	17 Aug 2012 14:00:27 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 17 Aug 2012 15:00:27 +0100
Message-Id: <502E6AC40200007800095EB8@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 17 Aug 2012 15:01:08 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Boris Ostrovsky" <boris.ostrovsky@amd.com>
References: <85190245a94d9945b765.1345135279@localhost.localdomain>
	<502E399B0200007800095DA1@nat28.tlf.novell.com>
	<502E4C3A.5050409@amd.com>
In-Reply-To: <502E4C3A.5050409@amd.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] AMD,
 powernow: Update P-state directly when _PSD's CoordType is
 DOMAIN_COORD_TYPE_HW_ALL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 17.08.12 at 15:50, Boris Ostrovsky <boris.ostrovsky@amd.com> wrote:
> On 08/17/2012 06:31 AM, Jan Beulich wrote:
>>>>> On 16.08.12 at 18:41, Boris Ostrovsky <boris.ostrovsky@amd.com> wrote:
>>> @@ -137,26 +122,28 @@ static int powernow_cpufreq_target(struc
>>>               return 0;
>>>       }
>>>
>>> -    if (policy->shared_type != CPUFREQ_SHARED_TYPE_ANY)
>>> -        cmd.mask = &online_policy_cpus;
>>> -    else
>>> -        cmd.mask = cpumask_of(policy->cpu);
>>> +    if (policy->shared_type == CPUFREQ_SHARED_TYPE_HW &&
>>> +        likely(policy->cpu == smp_processor_id())) {
>>> +        transition_pstate(&next_perf_state);
>>> +        cpufreq_statistic_update(policy->cpu, perf->state, next_perf_state);
>>
>> Actually - is this enough? Doesn't this also need to be done based
>> on policy->cpus?
> 
> With HW-coordinated transitions there is a policy structure per CPU so 
> policy->cpus is always 1 and policy->cpu is the same as policy->cpus. 
> You can see this in cpufreq_add_cpu(), when hw_all is set.
> 
> (This is consistent with ACPI spec:
> 	When hardware coordinates transitions, OSPM continues to
> 	initiate state transitions as it would if there were no
> 	dependencies.
> )

Ah, okay, I didn't recall that (which means in this case the stats
simply can't be right, as the hardware may do as it pleases).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 14:04:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 14:04:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2NAF-0001ZH-WB; Fri, 17 Aug 2012 14:04:39 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T2NAE-0001ZB-4a
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 14:04:38 +0000
Received: from [85.158.143.99:29924] by server-2.bemta-4.messagelabs.com id
	8E/02-31966-57F4E205; Fri, 17 Aug 2012 14:04:37 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1345212274!28144147!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9555 invoked from network); 17 Aug 2012 14:04:34 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-3.tower-216.messagelabs.com with SMTP;
	17 Aug 2012 14:04:34 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 17 Aug 2012 15:04:34 +0100
Message-Id: <502E6BBA0200007800095EBB@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 17 Aug 2012 15:05:14 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>,
	"Stefano Stabellini" <stefano.stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1208161523420.4850@kaball.uk.xensource.com>
	<1345128612-10323-4-git-send-email-stefano.stabellini@eu.citrix.com>
	<502D33B8020000780009596B@nat28.tlf.novell.com>
	<502D37D702000078000959F7@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208161801210.15568@kaball.uk.xensource.com>
	<1345190532.30865.67.camel@zakaz.uk.xensource.com>
	<502E2F090200007800095D62@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208171443580.15568@kaball.uk.xensource.com>
	<1345211486.10161.31.camel@zakaz.uk.xensource.com>
	<alpine.DEB.2.02.1208171453060.15568@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1208171453060.15568@kaball.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "Tim \(Xen.org\)" <tim@xen.org>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v3 4/6] xen: introduce XEN_GUEST_HANDLE_PARAM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 17.08.12 at 15:55, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
wrote:
> On Fri, 17 Aug 2012, Ian Campbell wrote:
>> On Fri, 2012-08-17 at 14:47 +0100, Stefano Stabellini wrote:
>> > On Fri, 17 Aug 2012, Jan Beulich wrote:
>> > > >>> On 17.08.12 at 10:02, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>> > > > On Thu, 2012-08-16 at 18:10 +0100, Stefano Stabellini wrote:
>> > > >> On Thu, 16 Aug 2012, Jan Beulich wrote:
>> > > >> > >>> On 16.08.12 at 17:54, "Jan Beulich" <JBeulich@suse.com> wrote:
>> > > >> > > Seeing the patch I btw realized that there's no easy way to
>> > > >> > > avoid having the type as a second argument in the conversion
>> > > >> > > macros. Nevertheless I still don't like the explicitly specified type
>> > > >> > > there.
>> > > >> > 
>> > > >> > Btw - on the architecture(s) where the two handles are identical
>> > > >> > I would prefer you to make the conversion functions trivial (and
>> > > >> > thus avoid making use of the "type" parameter), thus allowing
>> > > >> > the type checking to occur that you currently circumvent.
>> > > >> 
>> > > >> OK, I can do that.
>> > > > 
>> > > > Will this result in the type parameter potentially becoming stale?
>> > > > 
>> > > > Adding a redundant pointer compare is a good way to get the compiler to
>> > > > catch this. Something like:
>> > > > 
>> > > >         /* Cast a XEN_GUEST_HANDLE_PARAM to XEN_GUEST_HANDLE */
>> > > >         #define guest_handle_from_param(hnd, type) ({
>> > > >             typeof((hnd).p) _x = (hnd).p;
>> > > >             XEN_GUEST_HANDLE(type) _y;
>> > > >             &_y == &_x;
>> > > >             hnd;
>> > > >          })
>> > > 
>> > > Ah yes, that's a good suggestion.
>> > > 
>> > > > I'm not sure which two pointers of members of the various structs need
>> > > > to be compared, maybe it's actually &_y.p and &hnd.p, but you get the
>> > > > idea...
>> > > 
>> > > Right, comparing (hnd).p with _y.p would be the right thing; no
>> > > need for _x, but some other (mechanical) adjustments would be
>> > > necessary.
>> > 
>> > The _x variable is still useful to avoid multiple evaluations of hnd,
>> > even though I know that this is not a public header.
>> > 
>> > What about the following:
>> > 
>> > /* Cast a XEN_GUEST_HANDLE to XEN_GUEST_HANDLE_PARAM */
>> > #define guest_handle_to_param(hnd, type) ({                \
>> >     typeof((hnd).p) _x = (hnd).p;                          \
>> >     XEN_GUEST_HANDLE_PARAM(type) _y = { _x };              \
>> >     if (&_x != &_y.p) BUG();                               \
>> 
>> &_x and &_y.p will always be different => this will always BUG().
>>
>> You just need "(&_x == &_y.p)" if the types of _x and _y.p are different
>> then the compiler will error out due to the comparison of differently
>> typed pointers.
> 
> I know what you mean, but we cannot do that because the compiler will
> complain with "statement has no effects".
> So we have to do something like:
> 
> if ((&_x == &_y.p) && 0) BUG();

As done in my other mail - simply cast to void.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 14:06:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 14:06:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2NBu-0001ez-GJ; Fri, 17 Aug 2012 14:06:22 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T2NBt-0001et-Nh
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 14:06:21 +0000
Received: from [85.158.138.51:34221] by server-4.bemta-3.messagelabs.com id
	6B/8E-04276-CDF4E205; Fri, 17 Aug 2012 14:06:20 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-174.messagelabs.com!1345212379!19922216!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk0MzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25217 invoked from network); 17 Aug 2012 14:06:20 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-7.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 14:06:20 -0000
X-IronPort-AV: E=Sophos;i="4.77,785,1336348800"; d="scan'208";a="14061537"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 14:06:19 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 17 Aug 2012 15:06:19 +0100
Message-ID: <1345212377.10161.37.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Date: Fri, 17 Aug 2012 15:06:17 +0100
In-Reply-To: <alpine.DEB.2.02.1208091101270.21096@kaball.uk.xensource.com>
References: <1343649872-23799-1-git-send-email-ian.campbell@citrix.com>
	<1344506410.32142.95.camel@zakaz.uk.xensource.com>
	<alpine.DEB.2.02.1208091101270.21096@kaball.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [DOCDAY PATCH] docs: console: correct example
 console type definition
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-09 at 11:01 +0100, Stefano Stabellini wrote:
> On Thu, 9 Aug 2012, Ian Campbell wrote:
> > On Mon, 2012-07-30 at 13:04 +0100, Ian Campbell wrote:
> > > I think this is intended to be under the specific consoles directory.
> > > 
> > > Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> > 
> > Any thoughts on this patch?
> 
> 
> Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

Applied, thanks.

> 
> > > ---
> > >  docs/misc/console.txt |    2 +-
> > >  1 files changed, 1 insertions(+), 1 deletions(-)
> > > 
> > > diff --git a/docs/misc/console.txt b/docs/misc/console.txt
> > > index 73ca835..8a53a95 100644
> > > --- a/docs/misc/console.txt
> > > +++ b/docs/misc/console.txt
> > > @@ -36,7 +36,7 @@ toolstack in the "type" node on xenstore, under the relevant console
> > >  section.
> > >  For example:
> > >  
> > > -# xenstore-read /local/domain/26/console/type
> > > +# xenstore-read /local/domain/26/console/1/type
> > >  xenconsoled
> > >  
> > >  The supported values are only xenconsoled or ioemu; xenconsoled has
> > 
> > 
> > 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 14:08:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 14:08:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2NE0-0001nX-0Z; Fri, 17 Aug 2012 14:08:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T2NDy-0001nP-N8
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 14:08:30 +0000
Received: from [85.158.143.99:53571] by server-2.bemta-4.messagelabs.com id
	1B/28-31966-E505E205; Fri, 17 Aug 2012 14:08:30 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-2.tower-216.messagelabs.com!1345212508!23395971!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDcwNDM2NA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30591 invoked from network); 17 Aug 2012 14:08:29 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-2.tower-216.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Aug 2012 14:08:29 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7HE8O5b026308
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 17 Aug 2012 14:08:25 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7HE8O3H017526
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 17 Aug 2012 14:08:24 GMT
Received: from abhmt118.oracle.com (abhmt118.oracle.com [141.146.116.70])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7HE8O9W029575; Fri, 17 Aug 2012 09:08:24 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 17 Aug 2012 07:08:24 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id EE601402D7; Fri, 17 Aug 2012 09:58:34 -0400 (EDT)
Date: Fri, 17 Aug 2012 09:58:34 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: George Dunlap <george.dunlap@eu.citrix.com>
Message-ID: <20120817135834.GA8093@phenom.dumpdata.com>
References: <CAFLBxZZaC8NM5Xk5nqHBMA-SftkomuG1VAcWvaqh4rac5hCi7Q@mail.gmail.com>
	<20120817130757.GD31903@phenom.dumpdata.com>
	<502E449B.2030502@eu.citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <502E449B.2030502@eu.citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Failure to boot default Debian wheezy (pvops)
	kernel on 4.2-rc2
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Aug 17, 2012 at 02:18:19PM +0100, George Dunlap wrote:
> On 17/08/12 14:07, Konrad Rzeszutek Wilk wrote:
> >On Fri, Aug 17, 2012 at 12:17:15PM +0100, George Dunlap wrote:
> >>I just tried to install Xen-4.2.0-rc2 on a Debian wheezy system, but
> >>couldn't boot under Xen 4.2.  The box is an 8-core AMD, I think
> >>Barcelona.  The wheezy kernel is 3.2.21-3, 32-bit version.
> >>
> >>The problems seem to have started here:
> >>
> >>-- snip --
> >>[    0.060280] ACPI: Core revision 20110623^M^M
> >>[    0.072384] Performance Events: Broken BIOS detected, complain to
> >>your hardware vendor.^M^M
> >>[    0.076014] [Firmware Bug]: the BIOS has corrupted hw-PMU resources
> >>(MSR c0010000 is 530076)^M^M
> >>[    0.080007] AMD PMU driver.^M^M
> >>[    0.082864] ------------[ cut here ]------------^M^M
> >>[    0.084018] WARNING: at
> >>/build/buildd-linux_3.2.21-3-i386-vEohn4/linux-3.2.21/arch/x86/xen/enlighten.c:738
> >>perf_events_lapic_init+0x28/0x29()^M^M
> >>[    0.088009] Hardware name: empty^M^M
> >>[    0.091299] Modules linked in:^M^M
> >>[    0.092275] Pid: 1, comm: swapper/0 Not tainted 3.2.0-3-686-pae #1^M^M
> >>[    0.096008] Call Trace:^M^M
> >>[    0.098527]  [<c1037fcc>] ? warn_slowpath_common+0x68/0x79^M^M
> >>[    0.100019]  [<c10150d2>] ? perf_events_lapic_init+0x28/0x29^M^M
> >>[    0.104010]  [<c1037fea>] ? warn_slowpath_null+0xd/0x10^M^M
> >>[    0.108011]  [<c10150d2>] ? perf_events_lapic_init+0x28/0x29^M^M
> >>[    0.112015]  [<c141c97e>] ? init_hw_perf_events+0x223/0x3b1^M^M
> >>[    0.116012]  [<c141c75b>] ? check_bugs+0x1d9/0x1d9^M^M
> >>[    0.120012]  [<c1003074>] ? do_one_initcall+0x66/0x10e^M^M
> >>[    0.124012]  [<c1415770>] ? kernel_init+0x6d/0x125^M^M
> >>[    0.128012]  [<c1415703>] ? start_kernel+0x325/0x325^M^M
> >>[    0.132015]  [<c12c463e>] ? kernel_thread_helper+0x6/0x10^M^M
> >>[    0.136019] ---[ end trace a7919e7f17c0a725 ]---^M^M
> >>-- snip --
> >>
> >>And pretty soon degenerated into log message spamming of this sort:
> >>
> >>-- snip --
> >>(XEN) traps.c:2584:d0 Domain attempted WRMSR 00000000c0010000 from
> >>0x0000000000530076 to 0x0000000000130076.^M
> >>(XEN) traps.c:2584:d0 Domain attempted WRMSR 00000000c0010000 from
> >>0x0000000000530076 to 0x0000000000130076.^M
> >>(XEN) traps.c:2584:d0 Domain attempted WRMSR 00000000c0010000 from
> >>0x0000000000530076 to 0x0000000000130076.^M
> >>(XEN) traps.c:2584:d0 Domain attempted WRMSR 00000000c0010000 from
> >>0x0000000000530076 to 0x0000000000130076.^M
> >>(XEN) traps.c:2584:d0 Domain attempted WRMSR 00000000c0010000 from
> >>0x0000000000530076 to 0x0000000000130076.^M
> >>-- snip --
> >>
> >>The serial log is attached ("exile.log").
> >>
> >>An earlier kernel I had lying around, 2.6.32.25 (perhaps one of
> >>Jeremy's?) boots fine; the serial log is also attached
> >>("exile-good.log").  It also seems to have the WARN above, so maybe
> >>that's not actually the issue.
> >>
> >>Any ideas?
> >Implement the perf framework to work with Xen's oprofile, or make a new
> >set of hypercalls for it.
> >
> >The WARN can go away - it's there to remind us to get it done at some point :-(
> OK, but is there a way I can actually get it to boot?  I think the
> WRMSR is probably the real problem.

It should have no trouble booting. The WRMSRs are the perf counters that are
being probed, I think.

Oh, maybe not. I wonder if those are the APERF MSRs? The scheduler has some
code to probe those MSRs; this git commit takes care of that:
d95a8d4b876b60ce8497fc3216d06823c492bba6

But that fix should already be in a 3.2 kernel? Is it not there?

> 
>  -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 14:10:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 14:10:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2NFv-0001xV-K9; Fri, 17 Aug 2012 14:10:31 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T2NFu-0001xF-Cj
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 14:10:30 +0000
Received: from [85.158.139.83:59977] by server-1.bemta-5.messagelabs.com id
	9B/3F-09980-3D05E205; Fri, 17 Aug 2012 14:10:27 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-8.tower-182.messagelabs.com!1345212625!17421127!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDcwNDM2NA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8486 invoked from network); 17 Aug 2012 14:10:26 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-8.tower-182.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Aug 2012 14:10:26 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7HEAINj028533
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 17 Aug 2012 14:10:19 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7HEAIPl012612
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 17 Aug 2012 14:10:18 GMT
Received: from abhmt110.oracle.com (abhmt110.oracle.com [141.146.116.62])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7HEAH7Q012448; Fri, 17 Aug 2012 09:10:18 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 17 Aug 2012 07:10:17 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id DD6BA402D7; Fri, 17 Aug 2012 10:00:28 -0400 (EDT)
Date: Fri, 17 Aug 2012 10:00:28 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: George Dunlap <george.dunlap@eu.citrix.com>
Message-ID: <20120817140028.GB8093@phenom.dumpdata.com>
References: <CAFLBxZZaC8NM5Xk5nqHBMA-SftkomuG1VAcWvaqh4rac5hCi7Q@mail.gmail.com>
	<20120817130757.GD31903@phenom.dumpdata.com>
	<502E449B.2030502@eu.citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <502E449B.2030502@eu.citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Failure to boot default Debian wheezy (pvops)
	kernel on 4.2-rc2
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Aug 17, 2012 at 02:18:19PM +0100, George Dunlap wrote:
> On 17/08/12 14:07, Konrad Rzeszutek Wilk wrote:
> >On Fri, Aug 17, 2012 at 12:17:15PM +0100, George Dunlap wrote:
> >>I just tried to install Xen-4.2.0-rc2 on a Debian wheezy system, but
> >>couldn't boot under Xen 4.2.  The box is an 8-core AMD, I think
> >>Barcelona.  The wheezy kernel is 3.2.21-3, 32-bit version.
> >>
> >>The problems seem to have started here:
> >>
> >>-- snip --
> >>[    0.060280] ACPI: Core revision 20110623^M^M
> >>[    0.072384] Performance Events: Broken BIOS detected, complain to
> >>your hardware vendor.^M^M
> >>[    0.076014] [Firmware Bug]: the BIOS has corrupted hw-PMU resources
> >>(MSR c0010000 is 530076)^M^M
> >>[    0.080007] AMD PMU driver.^M^M
> >>[    0.082864] ------------[ cut here ]------------^M^M
> >>[    0.084018] WARNING: at
> >>/build/buildd-linux_3.2.21-3-i386-vEohn4/linux-3.2.21/arch/x86/xen/enlighten.c:738
> >>perf_events_lapic_init+0x28/0x29()^M^M
> >>[    0.088009] Hardware name: empty^M^M
> >>[    0.091299] Modules linked in:^M^M
> >>[    0.092275] Pid: 1, comm: swapper/0 Not tainted 3.2.0-3-686-pae #1^M^M
> >>[    0.096008] Call Trace:^M^M
> >>[    0.098527]  [<c1037fcc>] ? warn_slowpath_common+0x68/0x79^M^M
> >>[    0.100019]  [<c10150d2>] ? perf_events_lapic_init+0x28/0x29^M^M
> >>[    0.104010]  [<c1037fea>] ? warn_slowpath_null+0xd/0x10^M^M
> >>[    0.108011]  [<c10150d2>] ? perf_events_lapic_init+0x28/0x29^M^M
> >>[    0.112015]  [<c141c97e>] ? init_hw_perf_events+0x223/0x3b1^M^M
> >>[    0.116012]  [<c141c75b>] ? check_bugs+0x1d9/0x1d9^M^M
> >>[    0.120012]  [<c1003074>] ? do_one_initcall+0x66/0x10e^M^M
> >>[    0.124012]  [<c1415770>] ? kernel_init+0x6d/0x125^M^M
> >>[    0.128012]  [<c1415703>] ? start_kernel+0x325/0x325^M^M
> >>[    0.132015]  [<c12c463e>] ? kernel_thread_helper+0x6/0x10^M^M
> >>[    0.136019] ---[ end trace a7919e7f17c0a725 ]---^M^M
> >>-- snip --
> >>
> >>And pretty soon degenerated into log message spamming of this sort:
> >>
> >>-- snip --
> >>(XEN) traps.c:2584:d0 Domain attempted WRMSR 00000000c0010000 from
> >>0x0000000000530076 to 0x0000000000130076.^M
> >>(XEN) traps.c:2584:d0 Domain attempted WRMSR 00000000c0010000 from
> >>0x0000000000530076 to 0x0000000000130076.^M
> >>(XEN) traps.c:2584:d0 Domain attempted WRMSR 00000000c0010000 from
> >>0x0000000000530076 to 0x0000000000130076.^M
> >>(XEN) traps.c:2584:d0 Domain attempted WRMSR 00000000c0010000 from
> >>0x0000000000530076 to 0x0000000000130076.^M
> >>(XEN) traps.c:2584:d0 Domain attempted WRMSR 00000000c0010000 from
> >>0x0000000000530076 to 0x0000000000130076.^M

So that translates to MSR_K7_EVNTSEL0.

And that should only have been shown once. Is perf trying to load
the module over and over?

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 14:18:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 14:18:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2NNM-0002B2-Gi; Fri, 17 Aug 2012 14:18:12 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T2NNL-0002Ax-CN
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 14:18:11 +0000
Received: from [85.158.138.51:61352] by server-12.bemta-3.messagelabs.com id
	B3/A6-04073-2A25E205; Fri, 17 Aug 2012 14:18:10 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-11.tower-174.messagelabs.com!1345213088!28825703!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDcwNDM2NA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19904 invoked from network); 17 Aug 2012 14:18:09 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-11.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 17 Aug 2012 14:18:09 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7HEI3aE004925
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 17 Aug 2012 14:18:04 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7HEI2kL019303
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 17 Aug 2012 14:18:02 GMT
Received: from abhmt110.oracle.com (abhmt110.oracle.com [141.146.116.62])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7HEI1KR018117; Fri, 17 Aug 2012 09:18:01 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 17 Aug 2012 07:18:01 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id B2621402D7; Fri, 17 Aug 2012 10:08:12 -0400 (EDT)
Date: Fri, 17 Aug 2012 10:08:12 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Jean Guyader <jean.guyader@gmail.com>
Message-ID: <20120817140812.GC8093@phenom.dumpdata.com>
References: <1344032660-1251-1-git-send-email-jean.guyader@citrix.com>
	<20120806152815.GE8967@phenom.dumpdata.com>
	<CAEBdQ90s11dBVsKCURwvZTNE+PE0nuG2WeoHsc7QGZcQ_9oWZQ@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAEBdQ90s11dBVsKCURwvZTNE+PE0nuG2WeoHsc7QGZcQ_9oWZQ@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: Jean Guyader <jean.guyader@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] RFC: V4V Linux Driver
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Aug 10, 2012 at 09:37:15AM +0100, Jean Guyader wrote:
> On 6 August 2012 16:28, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> > On Fri, Aug 03, 2012 at 11:24:20PM +0100, Jean Guyader wrote:
> >> This is a Linux driver for the V4V inter VM communication system.
> >>
> >> I've posted the V4V Xen patches for comments, to find more info about
> >> V4V you can check out this link.
> >> http://osdir.com/ml/general/2012-08/msg05904.html
> >>
> >> This Linux driver exposes two char devices, one for TCP and one for UDP.
> >> The interface exposed to userspace is made of IOCTLs, one per
> >> network operation (listen, bind, accept, send, recv, ...).
> >
> > I haven't had a chance to take a look at this and won't until next
> > week. But just a couple of quick questions:
> >
> >  - Is there a test application for this? If so where can I get it
> 
> I have a userspace library that talks to it, I'm in the process of
> cleaning it up.
> I'll send a patch series today that would add it in xen/tools.
> 
> >  - Is there any code in the Xen repository that uses it.
> 
> The Xen support is being upstreamed right now, but because it needs some
> userspace/kernel support to be useful it's kind of a chicken-and-egg
> problem, so I'm trying to upstream both at the same time.
> 
> You can find the last version of the Xen patches here:
> http://lists.xen.org/archives/html/xen-devel/2012-08/msg00385.html
> 
> >  - Who are the users?
> 
> Right now we use a close but not compatible version in XenClient.
> Potentially the users would be anyone looking for an easy way to
> communicate between VMs that has the feel of TCP/UDP.
> 
> Some background info about V4V could be found here:
> http://lists.xen.org/archives/html/xen-devel/2012-05/msg01866.html
> 
> >  - Why .. TCP and UDP ? Does that mean it masquarades as an Ethernet
> >    device? Why the choice of using a char device?
> >
> 
> Because of security concerns we didn't want to rely on the Linux
> networking code, because it would have been hard for us to prove that
> a V4V packet could never end up on your network card.
> That said, we understand that there is a need for a network-like
> driver, and we are working on a version of the V4V driver that will
> use SKBs and expose itself as a new socket type.
> 
> In fact we asked on the LKML if it would be acceptable to add a new
> type of socket in Linux for inter-VM communication, but we are still
> waiting for an answer.
> http://comments.gmane.org/gmane.linux.kernel/1337472

I saw that and wasn't sure what it meant. Why a new family?
You didn't really explain why it is necessary, or why you could
not create message sockets, for example? Or just make your driver
a network driver.
> 
> The really nice feature about V4V is it's ability leverage all the
> existing networking programs.
> We have a libc interposer library that wraps all the networking
> functions. Here is an example
> to access a ssh server running in another domain (domid=16)
> 
> LD_PRELOAD=/usr/lib/libv4v.so ssh 1.0.0.16

Wouldn't it be just easier to not have an interposer?

I mean, it all sounds like it is for networking, so.. it would
seem like doing the full networking (or even a partial simple
implemenation) would be the way to go?


> 
> Thanks,
> Jean
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 14:18:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 14:18:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2NNM-0002B2-Gi; Fri, 17 Aug 2012 14:18:12 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T2NNL-0002Ax-CN
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 14:18:11 +0000
Received: from [85.158.138.51:61352] by server-12.bemta-3.messagelabs.com id
	B3/A6-04073-2A25E205; Fri, 17 Aug 2012 14:18:10 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-11.tower-174.messagelabs.com!1345213088!28825703!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDcwNDM2NA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19904 invoked from network); 17 Aug 2012 14:18:09 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-11.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 17 Aug 2012 14:18:09 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7HEI3aE004925
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 17 Aug 2012 14:18:04 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7HEI2kL019303
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 17 Aug 2012 14:18:02 GMT
Received: from abhmt110.oracle.com (abhmt110.oracle.com [141.146.116.62])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7HEI1KR018117; Fri, 17 Aug 2012 09:18:01 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 17 Aug 2012 07:18:01 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id B2621402D7; Fri, 17 Aug 2012 10:08:12 -0400 (EDT)
Date: Fri, 17 Aug 2012 10:08:12 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Jean Guyader <jean.guyader@gmail.com>
Message-ID: <20120817140812.GC8093@phenom.dumpdata.com>
References: <1344032660-1251-1-git-send-email-jean.guyader@citrix.com>
	<20120806152815.GE8967@phenom.dumpdata.com>
	<CAEBdQ90s11dBVsKCURwvZTNE+PE0nuG2WeoHsc7QGZcQ_9oWZQ@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAEBdQ90s11dBVsKCURwvZTNE+PE0nuG2WeoHsc7QGZcQ_9oWZQ@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: Jean Guyader <jean.guyader@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] RFC: V4V Linux Driver
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Aug 10, 2012 at 09:37:15AM +0100, Jean Guyader wrote:
> On 6 August 2012 16:28, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> > On Fri, Aug 03, 2012 at 11:24:20PM +0100, Jean Guyader wrote:
> >> This is a Linux driver for the V4V inter VM communication system.
> >>
> >> I've posted the V4V Xen patches for comments; to find more info about
> >> V4V you can check out this link:
> >> http://osdir.com/ml/general/2012-08/msg05904.html
> >>
> >> This Linux driver exposes two char devices, one for TCP and one for
> >> UDP. The interface exposed to userspace is made of IOCTLs, one per
> >> network operation (listen, bind, accept, send, recv, ...).
> >
> > I haven't had a chance to take a look at this and won't until next
> > week. But just a couple of quick questions:
> >
> >  - Is there a test application for this? If so where can I get it
> 
> I have a userspace library that talks to it; I'm in the process of
> cleaning it up. I'll send a patch series today that adds it to xen/tools.
> 
> >  - Is there any code in the Xen repository that uses it.
> 
> The Xen support is being upstreamed right now, but because it needs some
> userspace and kernel code to be useful it's a bit of a chicken-and-egg
> problem, so I'm trying to upstream both at the same time.
> 
> You can find the latest version of the Xen patches here:
> http://lists.xen.org/archives/html/xen-devel/2012-08/msg00385.html
> 
> >  - Who are the users?
> 
> Right now we use a close but not compatible version in XenClient.
> Potential users would be anyone looking for an easy way to communicate
> between VMs that has the feel of TCP/UDP.
> 
> Some background info about V4V can be found here:
> http://lists.xen.org/archives/html/xen-devel/2012-05/msg01866.html
> 
> >  - Why .. TCP and UDP ? Does that mean it masquerades as an Ethernet
> >    device? Why the choice of using a char device?
> >
> 
> Because of security concerns we didn't want to rely on the Linux
> networking code: it would have been hard for us to prove that a V4V
> packet could never end up on your network card. That said, we understand
> that there is a need for a network-like driver, and we are working on a
> version of the V4V driver that will use SKBs and expose itself as a new
> socket type.
> 
> In fact we asked on the LKML if it would be acceptable to add a new type
> of socket in Linux for inter-VM communication, but we are still waiting
> for an answer.
> http://comments.gmane.org/gmane.linux.kernel/1337472

I saw that and wasn't sure what it meant... Why a new family?
You didn't really explain why it is necessary and why you could
not create message sockets, for example. Or just make your driver
a network driver.
> 
> The really nice feature about V4V is its ability to leverage all the
> existing networking programs. We have a libc interposer library that
> wraps all the networking functions. Here is an example of accessing an
> ssh server running in another domain (domid=16):
> 
> LD_PRELOAD=/usr/lib/libv4v.so ssh 1.0.0.16

Wouldn't it be just easier to not have an interposer?

I mean, it all sounds like it is for networking, so it would
seem like doing the full networking (or even a partial, simple
implementation) would be the way to go?


> 
> Thanks,
> Jean
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 14:24:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 14:24:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2NTG-0002LQ-AX; Fri, 17 Aug 2012 14:24:18 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Shakirov@cg.ru>) id 1T2NTF-0002LL-1m
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 14:24:17 +0000
Received: from [85.158.138.51:48131] by server-6.bemta-3.messagelabs.com id
	24/41-32013-0145E205; Fri, 17 Aug 2012 14:24:16 +0000
X-Env-Sender: Shakirov@cg.ru
X-Msg-Ref: server-9.tower-174.messagelabs.com!1345213455!28893266!1
X-Originating-IP: [217.198.1.70]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31741 invoked from network); 17 Aug 2012 14:24:15 -0000
Received: from tepmel.cg.ru (HELO tepmel.cg.ru) (217.198.1.70)
	by server-9.tower-174.messagelabs.com with SMTP;
	17 Aug 2012 14:24:15 -0000
Received: from shakirovpc.center.cg ([10.10.19.50]) by tepmel.cg.ru with
	Microsoft SMTPSVC(6.0.3790.4675); Fri, 17 Aug 2012 18:24:14 +0400
Message-ID: <502E540E.5090108@cg.ru>
Date: Fri, 17 Aug 2012 18:24:14 +0400
From: Lenar Shakirov <shakirov@cg.ru>
User-Agent: Mozilla/5.0 (X11; U; Linux i686; ru;
	rv:1.9.1.5) Gecko/20091204 Thunderbird/3.0
MIME-Version: 1.0
To: xen-devel@lists.xen.org
X-OriginalArrivalTime: 17 Aug 2012 14:24:14.0605 (UTC)
	FILETIME=[FA550BD0:01CD7C83]
Cc: Linux <linux@cg.ru>
Subject: [Xen-devel] [PATCH] Simple emulation of host keyboard and mouse for
	gfx_passthru.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi,

See http://lists.xen.org/archives/html/xen-devel/2010-03/msg01292.html.

I compiled Xen with the patch listed above, and passthrough of the PS/2
keyboard and mouse (touchpad) works very well!

I have the same situation: laptop with VT-d and PS/2 keyboard and touchpad.

P.S.: thanks to Dietmar Hahn!


-- 
Best regards,
Lenar Shakirov

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 14:28:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 14:28:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2NWj-0002V9-35; Fri, 17 Aug 2012 14:27:53 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T2NWh-0002Uz-1u
	for xen-devel@lists.xensource.com; Fri, 17 Aug 2012 14:27:51 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1345213631!9865523!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk0MzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 683 invoked from network); 17 Aug 2012 14:27:11 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 14:27:11 -0000
X-IronPort-AV: E=Sophos;i="4.77,785,1336348800"; d="scan'208";a="14061960"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 14:27:10 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 17 Aug 2012 15:27:10 +0100
Message-ID: <1345213629.10161.47.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Date: Fri, 17 Aug 2012 15:27:09 +0100
In-Reply-To: <E1T2N1I-0007qX-EF@xenbits.xen.org>
References: <E1T2N1I-0007qX-EF@xenbits.xen.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Wangzhenguo <wangzhenguo@huawei.com>
Subject: Re: [Xen-devel] [Xen-staging] [xen-unstable] tools/python: Clean
 python correctly
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-08-17 at 14:55 +0100, Xen patchbot-unstable wrote:
> # HG changeset patch
> # User Andrew Cooper <andrew.cooper3@citrix.com>
> # Date 1345211209 -3600
> # Node ID 2eae0ec993b80cab2c9469db3b9ae9a644a74ac9
> # Parent  b021cca938e55bb1d9fae4f1fd4df1a2d20db215
> tools/python: Clean python correctly
> 
> Cleaning the python directory should completely remove the build/
> directory, otherwise subsequent builds may be short-circuited and a
> stale build installed.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Acked-by: Ian Campbell <ian.campbell@citrix.com>
> Committed-by: Ian Campbell <ian.campbell@citrix.com>
> ---
> 
> 
> diff -r b021cca938e5 -r 2eae0ec993b8 tools/libxc/xc_linux_osdep.c
> --- a/tools/libxc/xc_linux_osdep.c	Fri Aug 17 14:46:48 2012 +0100
> +++ b/tools/libxc/xc_linux_osdep.c	Fri Aug 17 14:46:49 2012 +0100
> @@ -1,4 +1,4 @@
[...]
> diff -r b021cca938e5 -r 2eae0ec993b8 tools/libxc/xenctrl.h
> --- a/tools/libxc/xenctrl.h	Fri Aug 17 14:46:48 2012 +0100
> +++ b/tools/libxc/xenctrl.h	Fri Aug 17 14:46:49 2012 +0100
> @@ -135,10 +135,9 @@ typedef enum xc_error_code xc_error_code
[...]

These two were supposed to be part of the previous commit "Libxc/Linux:
Add VM_DONTCOPY flag of the VMA of the hypercall buffer". Obviously I
need to inspect what I've actually committed more closely before
pushing.

Given it's whitespace and comment tweaks I don't propose to revert and
try again.

Sorry everyone.

Ian


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 14:32:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 14:32:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2Naf-0002ep-O1; Fri, 17 Aug 2012 14:31:57 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Santosh.Jodh@citrix.com>) id 1T2Nae-0002ej-54
	for xen-devel@lists.xensource.com; Fri, 17 Aug 2012 14:31:56 +0000
Received: from [85.158.143.35:29037] by server-2.bemta-4.messagelabs.com id
	F9/1B-31966-BD55E205; Fri, 17 Aug 2012 14:31:55 +0000
X-Env-Sender: Santosh.Jodh@citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1345213902!11673679!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzIwNzA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 447 invoked from network); 17 Aug 2012 14:31:44 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 14:31:44 -0000
X-IronPort-AV: E=Sophos;i="4.77,785,1336363200"; d="scan'208";a="205492130"
Received: from sjcpmailmx01.citrite.net ([10.216.14.74])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 10:31:41 -0400
Received: from SJCPMAILBOX01.citrite.net ([10.216.4.72]) by
	SJCPMAILMX01.citrite.net ([10.216.14.74]) with mapi; Fri, 17 Aug 2012
	07:31:40 -0700
From: Santosh Jodh <Santosh.Jodh@citrix.com>
To: Wei Wang <wei.wang2@amd.com>
Date: Fri, 17 Aug 2012 07:31:37 -0700
Thread-Topic: [PATCH] Dump IOMMU p2m table
Thread-Index: Ac18aYjVlOUBpQyXR2eeKhlJkBR1uAAGysGw
Message-ID: <7914B38A4445B34AA16EB9F1352942F1012F0E39A007@SJCPMAILBOX01.citrite.net>
References: <575a53faf4e1f3533096.1345134973@REDBLD-XS.ad.xensource.com>
	<502E27B0.7030803@amd.com>
In-Reply-To: <502E27B0.7030803@amd.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"Tim \(Xen.org\)" <tim@xen.org>, "JBeulich@suse.com" <JBeulich@suse.com>,
	"xiantao.zhang@intel.com" <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] [PATCH] Dump IOMMU p2m table
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



> -----Original Message-----
> From: Wei Wang [mailto:wei.wang2@amd.com]
> Sent: Friday, August 17, 2012 4:15 AM
> To: Santosh Jodh
> Cc: xen-devel@lists.xensource.com; xiantao.zhang@intel.com; Tim
> (Xen.org); JBeulich@suse.com
> Subject: Re: [PATCH] Dump IOMMU p2m table
> 
> On 08/16/2012 06:36 PM, Santosh Jodh wrote:
> > New key handler 'o' to dump the IOMMU p2m table for each domain.
> > Skips dumping table for domain0.
> > Intel and AMD specific iommu_ops handler for dumping p2m table.
> >
> > Incorporated feedback from Jan Beulich and Wei Wang.
> > Fixed indent printing with %*s.
> > Removed superfluous superpage and other attribute prints.
> > Make next_level use consistent for AMD IOMMU dumps. Warn if found
> inconsistent.
> >
> > Signed-off-by: Santosh Jodh<santosh.jodh@citrix.com>
> >
> > diff -r 6d56e31fe1e1 -r 575a53faf4e1 xen/drivers/passthrough/amd/pci_amd_iommu.c
> > --- a/xen/drivers/passthrough/amd/pci_amd_iommu.c	Wed Aug 15 09:41:21 2012 +0100
> > +++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c	Thu Aug 16 09:28:24 2012 -0700
> > @@ -22,6 +22,7 @@
> >   #include<xen/pci.h>
> >   #include<xen/pci_regs.h>
> >   #include<xen/paging.h>
> > +#include<xen/softirq.h>
> >   #include<asm/hvm/iommu.h>
> >   #include<asm/amd-iommu.h>
> >   #include<asm/hvm/svm/amd-iommu-proto.h>
> > @@ -512,6 +513,80 @@ static int amd_iommu_group_id(u16 seg, u
> >
> >   #include<asm/io_apic.h>
> >
> > +static void amd_dump_p2m_table_level(struct page_info* pg, int level,
> > +                                     paddr_t gpa, int indent) {
> > +    paddr_t address;
> > +    void *table_vaddr, *pde;
> > +    paddr_t next_table_maddr;
> > +    int index, next_level, present;
> > +    u32 *entry;
> > +
> > +    if ( level<  1 )
> > +        return;
> > +
> > +    table_vaddr = __map_domain_page(pg);
> > +    if ( table_vaddr == NULL )
> > +    {
> > +        printk("Failed to map IOMMU domain page %"PRIpaddr"\n",
> > +                page_to_maddr(pg));
> > +        return;
> > +    }
> > +
> > +    for ( index = 0; index<  PTE_PER_TABLE_SIZE; index++ )
> > +    {
> > +        if ( !(index % 2) )
> > +            process_pending_softirqs();
> > +
> > +        pde = table_vaddr + (index * IOMMU_PAGE_TABLE_ENTRY_SIZE);
> > +        next_table_maddr = amd_iommu_get_next_table_from_pte(pde);
> > +        entry = (u32*)pde;
> > +
> > +        present = get_field_from_reg_u32(entry[0],
> > +                                         IOMMU_PDE_PRESENT_MASK,
> > +                                         IOMMU_PDE_PRESENT_SHIFT);
> > +
> > +        if ( !present )
> > +            continue;
> > +
> > +        next_level = get_field_from_reg_u32(entry[0],
> > +                                            IOMMU_PDE_NEXT_LEVEL_MASK,
> > +                                            IOMMU_PDE_NEXT_LEVEL_SHIFT);
> > +
> > +        if ( next_level != (level - 1) )
> > +        {
> > +            printk("IOMMU p2m table error. next_level = %d, expected
> %d\n",
> > +                   next_level, level - 1);
> > +
> > +            continue;
> > +        }
> 
> Hi,
> 
> This check is not proper for 2MB and 1GB pages. For example, if a guest
> uses 4-level page tables, then for a 2MB entry the next_level chain is
> 3 (l4) -> 2 (l3) -> 0 (l2), because the l2 entries become PTEs and PTE
> entries have next_level = 0. I saw the following output for those pages:
> 
> (XEN) IOMMU p2m table error. next_level = 0, expected 1
> (XEN) IOMMU p2m table error. next_level = 0, expected 1
> (XEN) IOMMU p2m table error. next_level = 0, expected 1
> (XEN) IOMMU p2m table error. next_level = 0, expected 1
> (XEN) IOMMU p2m table error. next_level = 0, expected 1
> (XEN) IOMMU p2m table error. next_level = 0, expected 1
> (XEN) IOMMU p2m table error. next_level = 0, expected 1
> (XEN) IOMMU p2m table error. next_level = 0, expected 1
> (XEN) IOMMU p2m table error. next_level = 0, expected 1
> (XEN) IOMMU p2m table error. next_level = 0, expected 1
> 
> Thanks,
> Wei

How about changing the check to:
        if ( next_level && (next_level != (level - 1)) )
 
Thanks,
Santosh

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> >
> > Incorporated feedback from Jan Beulich and Wei Wang.
> > Fixed indent printing with %*s.
> > Removed superfluous superpage and other attribute prints.
> > Make next_level use consistent for AMD IOMMU dumps. Warn if found
> inconsistent.
> >
> > Signed-off-by: Santosh Jodh <santosh.jodh@citrix.com>
> >
> > diff -r 6d56e31fe1e1 -r 575a53faf4e1 xen/drivers/passthrough/amd/pci_amd_iommu.c
> > --- a/xen/drivers/passthrough/amd/pci_amd_iommu.c	Wed Aug 15 09:41:21 2012 +0100
> > +++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c	Thu Aug 16 09:28:24 2012 -0700
> > @@ -22,6 +22,7 @@
> >   #include <xen/pci.h>
> >   #include <xen/pci_regs.h>
> >   #include <xen/paging.h>
> > +#include <xen/softirq.h>
> >   #include <asm/hvm/iommu.h>
> >   #include <asm/amd-iommu.h>
> >   #include <asm/hvm/svm/amd-iommu-proto.h>
> > @@ -512,6 +513,80 @@ static int amd_iommu_group_id(u16 seg, u
> >
> >   #include <asm/io_apic.h>
> >
> > +static void amd_dump_p2m_table_level(struct page_info* pg, int level,
> > +                                     paddr_t gpa, int indent)
> > +{
> > +    paddr_t address;
> > +    void *table_vaddr, *pde;
> > +    paddr_t next_table_maddr;
> > +    int index, next_level, present;
> > +    u32 *entry;
> > +
> > +    if ( level < 1 )
> > +        return;
> > +
> > +    table_vaddr = __map_domain_page(pg);
> > +    if ( table_vaddr == NULL )
> > +    {
> > +        printk("Failed to map IOMMU domain page %"PRIpaddr"\n",
> > +                page_to_maddr(pg));
> > +        return;
> > +    }
> > +
> > +    for ( index = 0; index < PTE_PER_TABLE_SIZE; index++ )
> > +    {
> > +        if ( !(index % 2) )
> > +            process_pending_softirqs();
> > +
> > +        pde = table_vaddr + (index * IOMMU_PAGE_TABLE_ENTRY_SIZE);
> > +        next_table_maddr = amd_iommu_get_next_table_from_pte(pde);
> > +        entry = (u32*)pde;
> > +
> > +        present = get_field_from_reg_u32(entry[0],
> > +                                         IOMMU_PDE_PRESENT_MASK,
> > +                                         IOMMU_PDE_PRESENT_SHIFT);
> > +
> > +        if ( !present )
> > +            continue;
> > +
> > +        next_level = get_field_from_reg_u32(entry[0],
> > +                                            IOMMU_PDE_NEXT_LEVEL_MASK,
> > +                                            IOMMU_PDE_NEXT_LEVEL_SHIFT);
> > +
> > +        if ( next_level != (level - 1) )
> > +        {
> > +            printk("IOMMU p2m table error. next_level = %d, expected %d\n",
> > +                   next_level, level - 1);
> > +
> > +            continue;
> > +        }
> 
> Hi,
> 
> This check is not correct for 2MB and 1GB pages. For example, if a
> guest uses 4-level page tables, then for a 2MB entry the next_level
> fields along the walk are 3 (l4) -> 2 (l3) -> 0 (l2), because l2
> entries become PTEs and PTE entries have next_level = 0. I saw the
> following output for those pages:
> 
> (XEN) IOMMU p2m table error. next_level = 0, expected 1
> (XEN) IOMMU p2m table error. next_level = 0, expected 1
> (XEN) IOMMU p2m table error. next_level = 0, expected 1
> (XEN) IOMMU p2m table error. next_level = 0, expected 1
> (XEN) IOMMU p2m table error. next_level = 0, expected 1
> (XEN) IOMMU p2m table error. next_level = 0, expected 1
> (XEN) IOMMU p2m table error. next_level = 0, expected 1
> (XEN) IOMMU p2m table error. next_level = 0, expected 1
> (XEN) IOMMU p2m table error. next_level = 0, expected 1
> (XEN) IOMMU p2m table error. next_level = 0, expected 1
> 
> Thanks,
> Wei

How about changing the check to:
        if ( next_level && (next_level != (level - 1)) )
 
Thanks,
Santosh
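
A minimal standalone sketch of that relaxed check (hypothetical helper name, not the actual Xen code): a next_level of 0 marks a PTE or superpage leaf, so only non-zero values need to match level - 1.

```c
#include <assert.h>

/* Hypothetical helper, not the actual Xen code: models the relaxed
 * consistency check.  Superpage mappings (2MB/1GB) and ordinary PTEs
 * both carry next_level == 0, so a zero value is always acceptable;
 * any non-zero next_level must equal level - 1. */
static int pde_next_level_ok(int level, int next_level)
{
    return next_level == 0 || next_level == level - 1;
}
```

With this predicate, the 2MB entries Wei observed (next_level = 0 where level - 1 = 1) no longer trigger the error message, while genuinely inconsistent links still do.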

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 14:33:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 14:33:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2Nbe-0002jp-6a; Fri, 17 Aug 2012 14:32:58 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T2Nbc-0002jZ-Po
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 14:32:57 +0000
Received: from [85.158.143.35:34332] by server-2.bemta-4.messagelabs.com id
	7B/BC-31966-8165E205; Fri, 17 Aug 2012 14:32:56 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1345213972!16333596!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDcwNDM2NA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9051 invoked from network); 17 Aug 2012 14:32:53 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-6.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Aug 2012 14:32:53 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7HEWXWq021492
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 17 Aug 2012 14:32:34 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7HEWUrM002102
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 17 Aug 2012 14:32:32 GMT
Received: from abhmt104.oracle.com (abhmt104.oracle.com [141.146.116.56])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7HEWQTA028383; Fri, 17 Aug 2012 09:32:28 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 17 Aug 2012 07:32:26 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id BC4FD402D7; Fri, 17 Aug 2012 10:22:37 -0400 (EDT)
Date: Fri, 17 Aug 2012 10:22:37 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Konrad Rzeszutek Wilk <konrad@darnok.org>
Message-ID: <20120817142237.GA8467@phenom.dumpdata.com>
References: <501BC20F.3040205@amd.com>
	<20120803123628.GB10670@andromeda.dapyr.net>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20120803123628.GB10670@andromeda.dapyr.net>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Andre Przywara <andre.przywara@amd.com>,
	Jeremy Fitzhardinge <jeremy@goop.org>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Dom0 crash with old style AMD NUMA detection
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Aug 03, 2012 at 08:36:28AM -0400, Konrad Rzeszutek Wilk wrote:
> On Fri, Aug 03, 2012 at 02:20:31PM +0200, Andre Przywara wrote:
> > Hi,
> > 
> > we see Dom0 crashes due to the kernel detecting the NUMA topology not by 
> > ACPI, but directly from the northbridge (CONFIG_AMD_NUMA).
> > 
> > This will detect the actual NUMA config of the physical machine, but
> > will crash because of the mismatch with Dom0's virtual memory. Variation
> > of the theme: Dom0 sees what it's not supposed to see.
> > 
> > This happens with the said config option enabled and on a machine where
> > this scanning is still enabled (K8 and Fam10h, not Bulldozer class).
> > 
> > We have this dump then:
> > [    0.000000] NUMA: Warning: node ids are out of bound, from=-1 to=-1
> > distance=10
> > [    0.000000] Scanning NUMA topology in Northbridge 24
> > [    0.000000] Number of physical nodes 4
> > [    0.000000] Node 0 MemBase 0000000000000000 Limit 0000000040000000
> > [    0.000000] Node 1 MemBase 0000000040000000 Limit 0000000138000000
> > [    0.000000] Node 2 MemBase 0000000138000000 Limit 00000001f8000000
> > [    0.000000] Node 3 MemBase 00000001f8000000 Limit 0000000238000000
> > [    0.000000] Initmem setup node 0 0000000000000000-0000000040000000
> > [    0.000000]   NODE_DATA [000000003ffd9000 - 000000003fffffff]
> > [    0.000000] Initmem setup node 1 0000000040000000-0000000138000000
> > [    0.000000]   NODE_DATA [0000000137fd9000 - 0000000137ffffff]
> > [    0.000000] Initmem setup node 2 0000000138000000-00000001f8000000
> > [    0.000000]   NODE_DATA [00000001f095e000 - 00000001f0984fff]
> > [    0.000000] Initmem setup node 3 00000001f8000000-0000000238000000
> > [    0.000000] Cannot find 159744 bytes in node 3
> > [    0.000000] BUG: unable to handle kernel NULL pointer dereference at 
> > (null)
> > [    0.000000] IP: [<ffffffff81d220e6>] __alloc_bootmem_node+0x43/0x96
> > [    0.000000] PGD 0
> > [    0.000000] Oops: 0000 [#1] SMP
> > [    0.000000] CPU 0
> > [    0.000000] Modules linked in:
> > [    0.000000]
> > [    0.000000] Pid: 0, comm: swapper Not tainted 3.3.6 #1 AMD Dinar/Dinar
> > [    0.000000] RIP: e030:[<ffffffff81d220e6>]  [<ffffffff81d220e6>] 
> > __alloc_bootmem_node+0x43/0x96
> > [    0.000000] RSP: e02b:ffffffff81c01de8  EFLAGS: 00010046
> > [    0.000000] RAX: 0000000000000000 RBX: 00000000000000c0 RCX: 
> > 0000000000000000
> > [    0.000000] RDX: 0000000000000040 RSI: 00000000000000c0 RDI: 
> > 0000000000000000
> > [    0.000000] RBP: ffffffff81c01e08 R08: 0000000000000000 R09: 
> > 0000000000000000
> > [    0.000000] R10: 0000000000098000 R11: 0000000000000000 R12: 
> > 0000000000000000
> > [    0.000000] R13: 0000000000000000 R14: 0000000000000040 R15: 
> > 0000000000000003
> > [    0.000000] FS:  0000000000000000(0000) GS:ffffffff81ced000(0000) 
> > knlGS:0000000000000000
> > [    0.000000] CS:  e033 DS: 0000 ES: 0000 CR0: 0000000080050033
> > [    0.000000] CR2: 0000000000000000 CR3: 0000000001c05000 CR4: 
> > 0000000000000660
> > [    0.000000] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 
> > 0000000000000000
> > [    0.000000] DR3: 0000000000000000 DR6: 0000000000000000 DR7: 
> > 0000000000000000
> > [    0.000000] Process swapper (pid: 0, threadinfo ffffffff81c00000, 
> > task ffffffff81c0d020)
> > [    0.000000] Stack:
> > [    0.000000]  00000000000000c0 0000000000000003 0000000000000000 
> > 000000000000003f
> > [    0.000000]  ffffffff81c01e68 ffffffff81d23024 0000000000400000 
> > 0000000000000002
> > [    0.000000]  0000000000080000 ffff8801f055e000 ffff8801f055e1f8 
> > 0000000000000000
> > [    0.000000] Call Trace:
> > [    0.000000]  [<ffffffff81d23024>] 
> > sparse_early_usemaps_alloc_node+0x64/0x178
> > [    0.000000]  [<ffffffff81d23348>] sparse_init+0xe4/0x25a
> > [    0.000000]  [<ffffffff81d16840>] paging_init+0x13/0x22
> > [    0.000000]  [<ffffffff81d07fbb>] setup_arch+0x9c6/0xa9b
> > [    0.000000]  [<ffffffff81683954>] ? printk+0x3c/0x3e
> > [    0.000000]  [<ffffffff81d01a38>] start_kernel+0xe5/0x468
> > [    0.000000]  [<ffffffff81d012cf>] x86_64_start_reservations+0xba/0xc1
> > [    0.000000]  [<ffffffff81007153>] ? xen_setup_runstate_info+0x2c/0x36
> > [    0.000000]  [<ffffffff81d050ee>] xen_start_kernel+0x565/0x56c
> > [    0.000000] Code: 79 bc 3e ff 85 c0 74 23 80 3d 19 e9 21 00 00 75 59 
> > be 2a
> > 01 00 00 48 c7 c7 d0 55 a8 81 e8 b6 dc 31 ff c6 05 ff e8 21 00 01 eb 3f 
> > <41> 8b
> > bc 24 60 60 02 00 49 83 c8 ff 4c 89 e9 4c 89 f2 48 89 de
> > [    0.000000] RIP  [<ffffffff81d220e6>] __alloc_bootmem_node+0x43/0x96
> > [    0.000000]  RSP <ffffffff81c01de8>
> > [    0.000000] CR2: 0000000000000000
> > [    0.000000] ---[ end trace a7919e7f17c0a725 ]---
> > [    0.000000] Kernel panic - not syncing: Attempted to kill the idle task!
> > (XEN) Domain 0 crashed: 'noreboot' set - not rebooting.
> > 
> > 
> > 
> > The obvious solution would be to explicitly deny northbridge scanning 
> > when running as Dom0, though I am not sure how to implement this without 
> > upsetting the other kernel folks about "that crappy Xen thing" again ;-)
> 
> Heh.
> Is there a numa=0 option that could be used to override it to turn it
> off?

Not compile tested, but I was thinking something like this:

diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
index 43fd630..838cc1f 100644
--- a/arch/x86/xen/setup.c
+++ b/arch/x86/xen/setup.c
@@ -17,6 +17,7 @@
 #include <asm/e820.h>
 #include <asm/setup.h>
 #include <asm/acpi.h>
+#include <asm/numa.h>
 #include <asm/xen/hypervisor.h>
 #include <asm/xen/hypercall.h>
 
@@ -528,4 +529,7 @@ void __init xen_arch_setup(void)
 	disable_cpufreq();
 	WARN_ON(set_pm_idle_to_default());
 	fiddle_vdso();
+#ifdef CONFIG_NUMA
+	numa_off = 1;
+#endif
 }
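
As a rough model of what that hunk relies on (illustrative names only, not the real kernel symbols): once numa_off is set before NUMA initialisation runs, the CONFIG_AMD_NUMA northbridge scan is skipped and the kernel falls back to a flat, single-node layout.

```c
#include <assert.h>

/* Illustrative model only -- not the real kernel code.  numa_off
 * gates the whole NUMA init path, so setting it in xen_arch_setup()
 * means the AMD northbridge scan never runs under dom0. */
static int numa_off;
static int northbridge_scanned;

static void amd_numa_init_sketch(void)
{
    northbridge_scanned = 1;    /* stands in for the real scan */
}

static void numa_init_sketch(void)
{
    if (numa_off)
        return;                 /* fall back to a flat single node */
    amd_numa_init_sketch();
}
```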

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 14:35:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 14:35:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2Ndt-0002wj-PG; Fri, 17 Aug 2012 14:35:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T2Nds-0002wc-F5
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 14:35:16 +0000
Received: from [85.158.143.99:19774] by server-1.bemta-4.messagelabs.com id
	60/22-07754-3A65E205; Fri, 17 Aug 2012 14:35:15 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-216.messagelabs.com!1345214107!28699893!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk0MzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32138 invoked from network); 17 Aug 2012 14:35:14 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 14:35:14 -0000
X-IronPort-AV: E=Sophos;i="4.77,785,1336348800"; d="scan'208";a="14062133"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 14:35:07 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 17 Aug 2012 15:35:07 +0100
Message-ID: <1345214106.10161.51.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Fri, 17 Aug 2012 15:35:06 +0100
In-Reply-To: <20515.48491.125590.97199@mariner.uk.xensource.com>
References: <1343657037-28495-1-git-send-email-ian.campbell@citrix.com>
	<1344506462.32142.96.camel@zakaz.uk.xensource.com>
	<20515.48491.125590.97199@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [DOCDAY PATCH] docs: initial documentation for
	xenstore paths
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-09 at 14:38 +0100, Ian Jackson wrote:
> Ian Campbell writes ("Re: [DOCDAY PATCH] docs: initial documentation for xenstore paths"):
> > On Mon, 2012-07-30 at 15:03 +0100, Ian Campbell wrote:
> > > This is based upon my inspection of a system with a single PV domain running
> > > and is therefore very incomplete.
> > > 
> > > There are several things I'm not sure of here, mostly marked with XXX in the
> > > text.
> > > 
> > > In particular:
> > > 
> > >  - We seem to expose various things to the guest which really it has no need to
> > >    know (at least not via xenstore). e.g. its own domid, its device model pid,
> > >    the size of the video ram, store port and gref.
> 
> Maybe we need to have a "???" or "?deprecated" tag ?  That would avoid
> us documenting things which we don't want people to use.

deprecated would be useful in some places.

I think we have other places where a key is visible to the guest, e.g.
because hvmloader uses it, which we can't really "deprecate" as a means
of hiding it. Perhaps "internal" or "private" in addition to
"deprecated"?

> > >  - What is the distinction between /vm/UUID and /local/domain/DOMID
> 
> I think the former should be abolished (eventually).

Is there a distinction between "domain" and "vm" which is relevant here?
Is a VM potentially composed of several domains? (e.g. the guest, the
stub domain, a. n. other helper domain etc)

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 14:35:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 14:35:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2Ndt-0002wj-PG; Fri, 17 Aug 2012 14:35:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T2Nds-0002wc-F5
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 14:35:16 +0000
Received: from [85.158.143.99:19774] by server-1.bemta-4.messagelabs.com id
	60/22-07754-3A65E205; Fri, 17 Aug 2012 14:35:15 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-216.messagelabs.com!1345214107!28699893!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk0MzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32138 invoked from network); 17 Aug 2012 14:35:14 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 14:35:14 -0000
X-IronPort-AV: E=Sophos;i="4.77,785,1336348800"; d="scan'208";a="14062133"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 14:35:07 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 17 Aug 2012 15:35:07 +0100
Message-ID: <1345214106.10161.51.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Fri, 17 Aug 2012 15:35:06 +0100
In-Reply-To: <20515.48491.125590.97199@mariner.uk.xensource.com>
References: <1343657037-28495-1-git-send-email-ian.campbell@citrix.com>
	<1344506462.32142.96.camel@zakaz.uk.xensource.com>
	<20515.48491.125590.97199@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [DOCDAY PATCH] docs: initial documentation for
	xenstore paths
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-09 at 14:38 +0100, Ian Jackson wrote:
> Ian Campbell writes ("Re: [DOCDAY PATCH] docs: initial documentation for xenstore paths"):
> > On Mon, 2012-07-30 at 15:03 +0100, Ian Campbell wrote:
> > > This is based upon my inspection of a system with a single PV domain running
> > > and is therefore very incomplete.
> > > 
> > > There are several things I'm not sure of here, mostly marked with XXX in the
> > > text.
> > > 
> > > In particular:
> > > 
> > >  - We seem to expose various things to the guest which really it has no need to
> > >    know (at least not via xenstore). e.g. its own domid, its device model pid,
> > >    the size of the video ram, store port and gref.
> 
> Maybe we need to have a "???" or "?deprecated" tag ?  That would avoid
> us documenting things which we don't want people to use.

deprecated would be useful in some places.

I think we have other places where a key is visible to the guest, e.g.
because hvmloader uses it, which we can't really "deprecate" as a means
of hiding it. Perhaps "internal" or "private" in addition to
"deprecated"?

> > >  - What is the distinction between /vm/UUID and /local/domain/DOMID
> 
> I think the former should be abolished (eventually).

Is there a distinction between "domain" and "vm" which is relevant here?
Is a VM potentially composed of several domains (e.g. the guest, the
stub domain, some other helper domain, etc.)?

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 14:43:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 14:43:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2Nlz-0003Cz-Ot; Fri, 17 Aug 2012 14:43:39 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Wei.Wang2@amd.com>) id 1T2Nlx-0003Cu-BQ
	for xen-devel@lists.xensource.com; Fri, 17 Aug 2012 14:43:37 +0000
Received: from [85.158.143.99:65363] by server-2.bemta-4.messagelabs.com id
	34/4B-31966-8985E205; Fri, 17 Aug 2012 14:43:36 +0000
X-Env-Sender: Wei.Wang2@amd.com
X-Msg-Ref: server-9.tower-216.messagelabs.com!1345214614!28744561!1
X-Originating-IP: [216.32.180.31]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7901 invoked from network); 17 Aug 2012 14:43:35 -0000
Received: from va3ehsobe005.messaging.microsoft.com (HELO
	va3outboundpool.messaging.microsoft.com) (216.32.180.31)
	by server-9.tower-216.messagelabs.com with AES128-SHA encrypted SMTP;
	17 Aug 2012 14:43:35 -0000
Received: from mail27-va3-R.bigfish.com (10.7.14.236) by
	VA3EHSOBE003.bigfish.com (10.7.40.23) with Microsoft SMTP Server id
	14.1.225.23; Fri, 17 Aug 2012 14:43:34 +0000
Received: from mail27-va3 (localhost [127.0.0.1])	by mail27-va3-R.bigfish.com
	(Postfix) with ESMTP id 6D2383A0202;
	Fri, 17 Aug 2012 14:43:34 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:163.181.249.109; KIP:(null); UIP:(null);
	IPV:NLI; H:ausb3twp02.amd.com; RD:none; EFVD:NLI
X-SpamScore: -10
X-BigFish: VPS-10(zzbb2dI98dI9371I542M1432I4015Izz1202hzz8275bhz2dh668h839hd25he5bhf0ah107ah)
Received: from mail27-va3 (localhost.localdomain [127.0.0.1]) by mail27-va3
	(MessageSwitch) id 1345214611664779_4231;
	Fri, 17 Aug 2012 14:43:31 +0000 (UTC)
Received: from VA3EHSMHS029.bigfish.com (unknown [10.7.14.236])	by
	mail27-va3.bigfish.com (Postfix) with ESMTP id 94493460089;
	Fri, 17 Aug 2012 14:43:31 +0000 (UTC)
Received: from ausb3twp02.amd.com (163.181.249.109) by
	VA3EHSMHS029.bigfish.com (10.7.99.39) with Microsoft SMTP Server id
	14.1.225.23; Fri, 17 Aug 2012 14:43:30 +0000
X-WSS-ID: 0M8WM8G-02-IKA-02
X-M-MSG: 
Received: from sausexedgep02.amd.com (sausexedgep02-ext.amd.com
	[163.181.249.73])	(using TLSv1 with cipher AES128-SHA (128/128
	bits))	(No
	client certificate requested)	by ausb3twp02.amd.com (Axway MailGate
	3.8.1)
	with ESMTP id 2D092C8078;	Fri, 17 Aug 2012 09:43:28 -0500 (CDT)
Received: from SAUSEXDAG03.amd.com (163.181.55.3) by sausexedgep02.amd.com
	(163.181.36.59) with Microsoft SMTP Server (TLS) id 8.3.192.1;
	Fri, 17 Aug 2012 09:43:48 -0500
Received: from storexhtp01.amd.com (172.24.4.3) by sausexdag03.amd.com
	(163.181.55.3) with Microsoft SMTP Server (TLS) id 14.1.323.3;
	Fri, 17 Aug 2012 09:43:27 -0500
Received: from gwo.osrc.amd.com (165.204.16.204) by storexhtp01.amd.com
	(172.24.4.3) with Microsoft SMTP Server id 8.3.213.0; Fri, 17 Aug 2012
	10:43:23 -0400
Received: from [165.204.15.57] (gran.osrc.amd.com [165.204.15.57])	by
	gwo.osrc.amd.com (Postfix) with ESMTP id D0D3F49C20C; Fri, 17 Aug 2012
	15:43:22 +0100 (BST)
Message-ID: <502E5896.30606@amd.com>
Date: Fri, 17 Aug 2012 16:43:34 +0200
From: Wei Wang <wei.wang2@amd.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:12.0) Gecko/20120421 Thunderbird/12.0
MIME-Version: 1.0
To: Santosh Jodh <Santosh.Jodh@citrix.com>
References: <575a53faf4e1f3533096.1345134973@REDBLD-XS.ad.xensource.com>
	<502E27B0.7030803@amd.com>
	<7914B38A4445B34AA16EB9F1352942F1012F0E39A007@SJCPMAILBOX01.citrite.net>
In-Reply-To: <7914B38A4445B34AA16EB9F1352942F1012F0E39A007@SJCPMAILBOX01.citrite.net>
X-OriginatorOrg: amd.com
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"Tim \(Xen.org\)" <tim@xen.org>, "JBeulich@suse.com" <JBeulich@suse.com>,
	"xiantao.zhang@intel.com" <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] [PATCH] Dump IOMMU p2m table
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/17/2012 04:31 PM, Santosh Jodh wrote:
>
>
>> -----Original Message-----
>> From: Wei Wang [mailto:wei.wang2@amd.com]
>> Sent: Friday, August 17, 2012 4:15 AM
>> To: Santosh Jodh
>> Cc: xen-devel@lists.xensource.com; xiantao.zhang@intel.com; Tim
>> (Xen.org); JBeulich@suse.com
>> Subject: Re: [PATCH] Dump IOMMU p2m table
>>
>> On 08/16/2012 06:36 PM, Santosh Jodh wrote:
>>> New key handler 'o' to dump the IOMMU p2m table for each domain.
>>> Skips dumping table for domain0.
>>> Intel and AMD specific iommu_ops handler for dumping p2m table.
>>>
>>> Incorporated feedback from Jan Beulich and Wei Wang.
>>> Fixed indent printing with %*s.
>>> Removed superfluous superpage and other attribute prints.
>>> Make next_level use consistent for AMD IOMMU dumps. Warn if found
>>> inconsistent.
>>>
>>> Signed-off-by: Santosh Jodh<santosh.jodh@citrix.com>
>>>
>>> diff -r 6d56e31fe1e1 -r 575a53faf4e1 xen/drivers/passthrough/amd/pci_amd_iommu.c
>>> --- a/xen/drivers/passthrough/amd/pci_amd_iommu.c	Wed Aug 15 09:41:21 2012 +0100
>>> +++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c	Thu Aug 16 09:28:24 2012 -0700
>>> @@ -22,6 +22,7 @@
>>>    #include <xen/pci.h>
>>>    #include <xen/pci_regs.h>
>>>    #include <xen/paging.h>
>>> +#include <xen/softirq.h>
>>>    #include <asm/hvm/iommu.h>
>>>    #include <asm/amd-iommu.h>
>>>    #include <asm/hvm/svm/amd-iommu-proto.h>
>>> @@ -512,6 +513,80 @@ static int amd_iommu_group_id(u16 seg, u
>>>
>>>    #include <asm/io_apic.h>
>>>
>>> +static void amd_dump_p2m_table_level(struct page_info* pg, int level,
>>> +                                     paddr_t gpa, int indent)
>>> +{
>>> +    paddr_t address;
>>> +    void *table_vaddr, *pde;
>>> +    paddr_t next_table_maddr;
>>> +    int index, next_level, present;
>>> +    u32 *entry;
>>> +
>>> +    if ( level < 1 )
>>> +        return;
>>> +
>>> +    table_vaddr = __map_domain_page(pg);
>>> +    if ( table_vaddr == NULL )
>>> +    {
>>> +        printk("Failed to map IOMMU domain page %"PRIpaddr"\n",
>>> +                page_to_maddr(pg));
>>> +        return;
>>> +    }
>>> +
>>> +    for ( index = 0; index < PTE_PER_TABLE_SIZE; index++ )
>>> +    {
>>> +        if ( !(index % 2) )
>>> +            process_pending_softirqs();
>>> +
>>> +        pde = table_vaddr + (index * IOMMU_PAGE_TABLE_ENTRY_SIZE);
>>> +        next_table_maddr = amd_iommu_get_next_table_from_pte(pde);
>>> +        entry = (u32*)pde;
>>> +
>>> +        present = get_field_from_reg_u32(entry[0],
>>> +                                         IOMMU_PDE_PRESENT_MASK,
>>> +                                         IOMMU_PDE_PRESENT_SHIFT);
>>> +
>>> +        if ( !present )
>>> +            continue;
>>> +
>>> +        next_level = get_field_from_reg_u32(entry[0],
>>> +                                            IOMMU_PDE_NEXT_LEVEL_MASK,
>>> +                                            IOMMU_PDE_NEXT_LEVEL_SHIFT);
>>> +
>>> +        if ( next_level != (level - 1) )
>>> +        {
>>> +            printk("IOMMU p2m table error. next_level = %d, expected %d\n",
>>> +                   next_level, level - 1);
>>> +
>>> +            continue;
>>> +       }
>>
>> Hi,
>>
>> This check is not proper for 2MB and 1GB pages. For example, if a guest
>> uses 4-level page tables, then for a 2MB entry the next_level fields will
>> be 3 (l4) -> 2 (l3) -> 0 (l2), because l2 entries become PTEs and PTE
>> entries have next_level = 0. I saw the following output for those pages:
>>
>> (XEN) IOMMU p2m table error. next_level = 0, expected 1
>> (XEN) IOMMU p2m table error. next_level = 0, expected 1
>> (XEN) IOMMU p2m table error. next_level = 0, expected 1
>> (XEN) IOMMU p2m table error. next_level = 0, expected 1
>> (XEN) IOMMU p2m table error. next_level = 0, expected 1
>> (XEN) IOMMU p2m table error. next_level = 0, expected 1
>> (XEN) IOMMU p2m table error. next_level = 0, expected 1
>> (XEN) IOMMU p2m table error. next_level = 0, expected 1
>> (XEN) IOMMU p2m table error. next_level = 0, expected 1
>> (XEN) IOMMU p2m table error. next_level = 0, expected 1
>>
>> Thanks,
>> Wei
>
> How about changing the check to:
>          if ( next_level && (next_level != (level - 1)) )

That should be good, since we don't have skip levels.
Thanks,
Wei

>
> Thanks,
> Santosh
>



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 14:45:39 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 14:45:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2Nnk-0003Jl-Dk; Fri, 17 Aug 2012 14:45:28 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T2Nnj-0003Jd-1I
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 14:45:27 +0000
Received: from [85.158.138.51:63458] by server-3.bemta-3.messagelabs.com id
	35/06-13809-6095E205; Fri, 17 Aug 2012 14:45:26 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-4.tower-174.messagelabs.com!1345214725!28795919!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk0MzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1044 invoked from network); 17 Aug 2012 14:45:25 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-4.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 14:45:25 -0000
X-IronPort-AV: E=Sophos;i="4.77,785,1336348800"; d="scan'208";a="14062371"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 14:45:24 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 17 Aug 2012 15:45:24 +0100
Date: Fri, 17 Aug 2012 15:45:07 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Jan Beulich <JBeulich@suse.com>
In-Reply-To: <502E6A200200007800095E9A@nat28.tlf.novell.com>
Message-ID: <alpine.DEB.2.02.1208171540350.15568@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208161523420.4850@kaball.uk.xensource.com>
	<1345128612-10323-4-git-send-email-stefano.stabellini@eu.citrix.com>
	<502D33B8020000780009596B@nat28.tlf.novell.com>
	<502D37D702000078000959F7@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208161801210.15568@kaball.uk.xensource.com>
	<1345190532.30865.67.camel@zakaz.uk.xensource.com>
	<502E2F090200007800095D62@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208171443580.15568@kaball.uk.xensource.com>
	<502E6A200200007800095E9A@nat28.tlf.novell.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: xen-devel <xen-devel@lists.xen.org>, "Tim
	\(Xen.org\)" <tim@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH v3 4/6] xen: introduce XEN_GUEST_HANDLE_PARAM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 17 Aug 2012, Jan Beulich wrote:
> >>> On 17.08.12 at 15:47, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> wrote:
> > On Fri, 17 Aug 2012, Jan Beulich wrote:
> >> >>> On 17.08.12 at 10:02, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> >> > On Thu, 2012-08-16 at 18:10 +0100, Stefano Stabellini wrote:
> >> >> On Thu, 16 Aug 2012, Jan Beulich wrote:
> >> >> > >>> On 16.08.12 at 17:54, "Jan Beulich" <JBeulich@suse.com> wrote:
> >> >> > > Seeing the patch I btw realized that there's no easy way to
> >> >> > > avoid having the type as a second argument in the conversion
> >> >> > > macros. Nevertheless I still don't like the explicitly specified type
> >> >> > > there.
> >> >> > 
> >> >> > Btw - on the architecture(s) where the two handles are identical
> >> >> > I would prefer you to make the conversion functions trivial (and
> >> >> > thus avoid making use of the "type" parameter), thus allowing
> >> >> > the type checking to occur that you currently circumvent.
> >> >> 
> >> >> OK, I can do that.
> >> > 
> >> > Will this result in the type parameter potentially becoming stale?
> >> > 
> >> > Adding a redundant pointer compare is a good way to get the compiler to
> >> > catch this. Smth like;
> >> > 
> >> >         /* Cast a XEN_GUEST_HANDLE_PARAM to XEN_GUEST_HANDLE */
> >> >         #define guest_handle_from_param(hnd, type) ({
> >> >             typeof((hnd).p) _x = (hnd).p;
> >> >             XEN_GUEST_HANDLE(type) _y;
> >> >             &_y == &_x;
> >> >             hnd;
> >> >          })
> >> 
> >> Ah yes, that's a good suggestion.
> >> 
> >> > I'm not sure which two pointers of members of the various structs need
> >> > to be compared, maybe it's actually &_y.p and &hnd.p, but you get the
> >> > idea...
> >> 
> >> Right, comparing (hnd).p with _y.p would be the right thing; no
> >> need for _x, but some other (mechanical) adjustments would be
> >> necessary.
> > 
> > The _x variable is still useful to avoid multiple evaluations of hnd,
> > even though I know that this is not a public header.
> 
> But we had settled on returning hnd unmodified when both
> handle types are the same.
> 
> > What about the following:
> > 
> > /* Cast a XEN_GUEST_HANDLE to XEN_GUEST_HANDLE_PARAM */
> > #define guest_handle_to_param(hnd, type) ({                \
> >     typeof((hnd).p) _x = (hnd).p;                          \
> >     XEN_GUEST_HANDLE_PARAM(type) _y = { _x };              \
> >     if (&_x != &_y.p) BUG();                               \
> >     _y;                                                    \
> > })
> 
> Since this is not a public header, something like this (untested,
> so may not compile as is)
> 
> #define guest_handle_to_param(hnd, type) ({                \
>     (void)(typeof((hnd).p)0 == (XEN_GUEST_HANDLE_PARAM(type){}).p); \
>     (hnd);                                                    \
> })
> 
> is what I was thinking of.
> 

This is how it would look:

#define guest_handle_to_param(hnd, type) ({                  \
    /* type checking: make sure that the pointers inside     \
     * XEN_GUEST_HANDLE and XEN_GUEST_HANDLE_PARAM are of    \
     * the same type, then return hnd */                     \
    (void)((typeof(&(hnd).p)) 0 ==                           \
        (typeof(&((XEN_GUEST_HANDLE_PARAM(type)) {}).p)) 0); \
    (hnd);                                                   \
})


Honestly I have rarely seen anything less readable, but at least it is
very compact.
For ARM I was going to go with the following, which is only slightly
more readable:

#define guest_handle_to_param(hnd, type) ({                  \
    typeof((hnd).p) _x = (hnd).p;                            \
    XEN_GUEST_HANDLE_PARAM(type) _y = { _x };                \
    /* type checking: make sure that the pointers inside     \
     * XEN_GUEST_HANDLE and XEN_GUEST_HANDLE_PARAM are of    \
     * the same type, then return hnd */                     \
    (void)(&_x == &_y.p);                                    \
    _y;                                                      \
})

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> >     _y;                                                    \
> > })
> 
> Since this is not a public header, something like this (untested,
> so may not compile as is)
> 
> #define guest_handle_to_param(hnd, type) ({                \
>     (void)(typeof((hnd).p)0 == (XEN_GUEST_HANDLE_PARAM(type){}).p); \
>     (hnd);                                                    \
> })
> 
> is what I was thinking of.
> 

This is how it would look:

#define guest_handle_to_param(hnd, type) ({                  \
    /* type checking: make sure that the pointers inside     \
     * XEN_GUEST_HANDLE and XEN_GUEST_HANDLE_PARAM are of    \
     * the same type, then return hnd */                     \
    (void)((typeof(&(hnd).p)) 0 ==                           \
        (typeof(&((XEN_GUEST_HANDLE_PARAM(type)) {}).p)) 0); \
    (hnd);                                                   \
})


Honestly, I have very rarely seen anything less readable, but at least it
is very compact.
For ARM I was going to go with the following, which is only slightly more
readable:

#define guest_handle_to_param(hnd, type) ({                  \
    typeof((hnd).p) _x = (hnd).p;                            \
    XEN_GUEST_HANDLE_PARAM(type) _y = { _x };                \
    /* type checking: make sure that the pointers inside     \
     * XEN_GUEST_HANDLE and XEN_GUEST_HANDLE_PARAM are of    \
     * the same type, then return hnd */                     \
    (void)(&_x == &_y.p);                                    \
    _y;                                                      \
})

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 14:50:42 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 14:50:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2Nsd-0003WZ-53; Fri, 17 Aug 2012 14:50:31 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T2Nsb-0003WU-Lo
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 14:50:29 +0000
Received: from [85.158.143.35:61361] by server-3.bemta-4.messagelabs.com id
	18/80-09529-53A5E205; Fri, 17 Aug 2012 14:50:29 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1345215026!15938799!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk0MzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28877 invoked from network); 17 Aug 2012 14:50:27 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 14:50:27 -0000
X-IronPort-AV: E=Sophos;i="4.77,785,1336348800"; d="scan'208";a="14062470"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 14:50:21 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 17 Aug 2012 15:50:21 +0100
Message-ID: <1345215020.10161.64.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Fri, 17 Aug 2012 15:50:20 +0100
In-Reply-To: <502E6A200200007800095E9A@nat28.tlf.novell.com>
References: <alpine.DEB.2.02.1208161523420.4850@kaball.uk.xensource.com>
	<1345128612-10323-4-git-send-email-stefano.stabellini@eu.citrix.com>
	<502D33B8020000780009596B@nat28.tlf.novell.com>
	<502D37D702000078000959F7@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208161801210.15568@kaball.uk.xensource.com>
	<1345190532.30865.67.camel@zakaz.uk.xensource.com>
	<502E2F090200007800095D62@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208171443580.15568@kaball.uk.xensource.com>
	<502E6A200200007800095E9A@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: xen-devel <xen-devel@lists.xen.org>, "Tim \(Xen.org\)" <tim@xen.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH v3 4/6] xen: introduce XEN_GUEST_HANDLE_PARAM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-08-17 at 14:58 +0100, Jan Beulich wrote:
> >>> On 17.08.12 at 15:47, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> wrote:
> > On Fri, 17 Aug 2012, Jan Beulich wrote:
> >> >>> On 17.08.12 at 10:02, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> >> > On Thu, 2012-08-16 at 18:10 +0100, Stefano Stabellini wrote:
> >> >> On Thu, 16 Aug 2012, Jan Beulich wrote:
> >> >> > >>> On 16.08.12 at 17:54, "Jan Beulich" <JBeulich@suse.com> wrote:
> >> >> > > Seeing the patch I btw realized that there's no easy way to
> >> >> > > avoid having the type as a second argument in the conversion
> >> >> > > macros. Nevertheless I still don't like the explicitly specified type
> >> >> > > there.
> >> >> > 
> >> >> > Btw - on the architecture(s) where the two handles are identical
> >> >> > I would prefer you to make the conversion functions trivial (and
> >> >> > thus avoid making use of the "type" parameter), thus allowing
> >> >> > the type checking to occur that you currently circumvent.
> >> >> 
> >> >> OK, I can do that.
> >> > 
> >> > Will this result in the type parameter potentially becoming stale?
> >> > 
> >> > Adding a redundant pointer compare is a good way to get the compiler to
> >> > catch this. Smth like;
> >> > 
> >> >         /* Cast a XEN_GUEST_HANDLE_PARAM to XEN_GUEST_HANDLE */
> >> >         #define guest_handle_from_param(hnd, type) ({
> >> >             typeof((hnd).p) _x = (hnd).p;
> >> >             XEN_GUEST_HANDLE(type) _y;
> >> >             &_y == &_x;
> >> >             hnd;
> >> >          })
> >> 
> >> Ah yes, that's a good suggestion.
> >> 
> >> > I'm not sure which two pointers of members of the various structs need
> >> > to be compared, maybe it's actually &_y.p and &hnd.p, but you get the
> >> > idea...
> >> 
> >> Right, comparing (hnd).p with _y.p would be the right thing; no
> >> need for _x, but some other (mechanical) adjustments would be
> >> necessary.
> > 
> > The _x variable is still useful to avoid multiple evaluations of hnd,
> > even though I know that this is not a public header.
> 
> But we had settled on returning hnd unmodified when both
> handle types are the same.
> 
> > What about the following:
> > 
> > /* Cast a XEN_GUEST_HANDLE to XEN_GUEST_HANDLE_PARAM */
> > #define guest_handle_to_param(hnd, type) ({                \
> >     typeof((hnd).p) _x = (hnd).p;                          \
> >     XEN_GUEST_HANDLE_PARAM(type) _y = { _x };              \
> >     if (&_x != &_y.p) BUG();                               \
> >     _y;                                                    \
> > })
> 
> Since this is not a public header, something like this (untested,
> so may not compile as is)
> 
> #define guest_handle_to_param(hnd, type) ({                \
>     (void)(typeof((hnd).p)0 == (XEN_GUEST_HANDLE_PARAM(type){}).p); \
>     (hnd);                                                    \
> })
> 
> is what I was thinking of.

This evaluates hnd twice, or do we only care about that in public
headers for some reason? (Personally I think the principle of least surprise
suggests avoiding it wherever possible.)


> 
> Jan
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 14:54:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 14:54:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2Nvq-0003dX-P3; Fri, 17 Aug 2012 14:53:50 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Santosh.Jodh@citrix.com>) id 1T2Nvo-0003dM-NB
	for xen-devel@lists.xensource.com; Fri, 17 Aug 2012 14:53:49 +0000
Received: from [85.158.139.83:10483] by server-10.bemta-5.messagelabs.com id
	5E/63-13125-BFA5E205; Fri, 17 Aug 2012 14:53:47 +0000
X-Env-Sender: Santosh.Jodh@citrix.com
X-Msg-Ref: server-16.tower-182.messagelabs.com!1345215225!21344522!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzIwNzA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17393 invoked from network); 17 Aug 2012 14:53:47 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 14:53:47 -0000
X-IronPort-AV: E=Sophos;i="4.77,785,1336363200"; d="scan'208";a="205494879"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 10:53:45 -0400
Received: from REDBLD-XS.ad.xensource.com (10.232.6.25) by
	smtprelay.citrix.com (10.13.107.65) with Microsoft SMTP Server id
	8.3.213.0; Fri, 17 Aug 2012 10:53:45 -0400
MIME-Version: 1.0
X-Mercurial-Node: e26e2d1e5642fa8aeada2dec48a5ce6a5d3a2d8a
Message-ID: <e26e2d1e5642fa8aeada.1345215224@REDBLD-XS.ad.xensource.com>
User-Agent: Mercurial-patchbomb/1.6.4
Date: Fri, 17 Aug 2012 07:53:44 -0700
From: Santosh Jodh <santosh.jodh@citrix.com>
To: <xen-devel@lists.xensource.com>
Cc: wei.wang2@amd.com, tim@xen.org, JBeulich@suse.com, xiantao.zhang@intel.com
Subject: [Xen-devel] [PATCH] Dump IOMMU p2m table
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

New key handler 'o' to dump the IOMMU p2m table for each domain.
Skips dumping the table for domain0.
Intel and AMD specific iommu_ops handlers for dumping the p2m table.

Incorporated feedback from Jan Beulich and Wei Wang.
Fixed indent printing with %*s.
Removed superfluous superpage and other attribute prints.
Made next_level usage consistent for AMD IOMMU dumps; warn if inconsistent.
The AMD IOMMU does skip levels. Handle 2MB and 1GB IOMMU page sizes for AMD.

Signed-off-by: Santosh Jodh <santosh.jodh@citrix.com>

diff -r 6d56e31fe1e1 -r e26e2d1e5642 xen/drivers/passthrough/amd/pci_amd_iommu.c
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c	Wed Aug 15 09:41:21 2012 +0100
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c	Fri Aug 17 07:53:38 2012 -0700
@@ -22,6 +22,7 @@
 #include <xen/pci.h>
 #include <xen/pci_regs.h>
 #include <xen/paging.h>
+#include <xen/softirq.h>
 #include <asm/hvm/iommu.h>
 #include <asm/amd-iommu.h>
 #include <asm/hvm/svm/amd-iommu-proto.h>
@@ -512,6 +513,80 @@ static int amd_iommu_group_id(u16 seg, u
 
 #include <asm/io_apic.h>
 
+static void amd_dump_p2m_table_level(struct page_info* pg, int level, 
+                                     paddr_t gpa, int indent)
+{
+    paddr_t address;
+    void *table_vaddr, *pde;
+    paddr_t next_table_maddr;
+    int index, next_level, present;
+    u32 *entry;
+
+    if ( level < 1 )
+        return;
+
+    table_vaddr = __map_domain_page(pg);
+    if ( table_vaddr == NULL )
+    {
+        printk("Failed to map IOMMU domain page %"PRIpaddr"\n", 
+                page_to_maddr(pg));
+        return;
+    }
+
+    for ( index = 0; index < PTE_PER_TABLE_SIZE; index++ )
+    {
+        if ( !(index % 2) )
+            process_pending_softirqs();
+
+        pde = table_vaddr + (index * IOMMU_PAGE_TABLE_ENTRY_SIZE);
+        next_table_maddr = amd_iommu_get_next_table_from_pte(pde);
+        entry = (u32*)pde;
+
+        present = get_field_from_reg_u32(entry[0],
+                                         IOMMU_PDE_PRESENT_MASK,
+                                         IOMMU_PDE_PRESENT_SHIFT);
+
+        if ( !present )
+            continue;
+
+        next_level = get_field_from_reg_u32(entry[0],
+                                            IOMMU_PDE_NEXT_LEVEL_MASK,
+                                            IOMMU_PDE_NEXT_LEVEL_SHIFT);
+
+        if ( next_level && (next_level != (level - 1)) )
+        {
+            printk("IOMMU p2m table error. next_level = %d, expected %d\n",
+                   next_level, level - 1);
+
+            continue;
+        }
+
+        address = gpa + amd_offset_level_address(index, level);
+        if ( next_level >= 1 )
+            amd_dump_p2m_table_level(
+                maddr_to_page(next_table_maddr), next_level,
+                address, indent + 1);
+        else
+            printk("%*sgfn: %08lx  mfn: %08lx\n",
+                   indent, "",
+                   (unsigned long)PFN_DOWN(address),
+                   (unsigned long)PFN_DOWN(next_table_maddr));
+    }
+
+    unmap_domain_page(table_vaddr);
+}
+
+static void amd_dump_p2m_table(struct domain *d)
+{
+    struct hvm_iommu *hd  = domain_hvm_iommu(d);
+
+    if ( !hd->root_table ) 
+        return;
+
+    printk("p2m table has %d levels\n", hd->paging_mode);
+    amd_dump_p2m_table_level(hd->root_table, hd->paging_mode, 0, 0);
+}
+
 const struct iommu_ops amd_iommu_ops = {
     .init = amd_iommu_domain_init,
     .dom0_init = amd_iommu_dom0_init,
@@ -531,4 +606,5 @@ const struct iommu_ops amd_iommu_ops = {
     .resume = amd_iommu_resume,
     .share_p2m = amd_iommu_share_p2m,
     .crash_shutdown = amd_iommu_suspend,
+    .dump_p2m_table = amd_dump_p2m_table,
 };
diff -r 6d56e31fe1e1 -r e26e2d1e5642 xen/drivers/passthrough/iommu.c
--- a/xen/drivers/passthrough/iommu.c	Wed Aug 15 09:41:21 2012 +0100
+++ b/xen/drivers/passthrough/iommu.c	Fri Aug 17 07:53:38 2012 -0700
@@ -19,10 +19,12 @@
 #include <xen/paging.h>
 #include <xen/guest_access.h>
 #include <xen/softirq.h>
+#include <xen/keyhandler.h>
 #include <xsm/xsm.h>
 
 static void parse_iommu_param(char *s);
 static int iommu_populate_page_table(struct domain *d);
+static void iommu_dump_p2m_table(unsigned char key);
 
 /*
  * The 'iommu' parameter enables the IOMMU.  Optional comma separated
@@ -54,6 +56,12 @@ bool_t __read_mostly amd_iommu_perdev_in
 
 DEFINE_PER_CPU(bool_t, iommu_dont_flush_iotlb);
 
+static struct keyhandler iommu_p2m_table = {
+    .diagnostic = 0,
+    .u.fn = iommu_dump_p2m_table,
+    .desc = "dump iommu p2m table"
+};
+
 static void __init parse_iommu_param(char *s)
 {
     char *ss;
@@ -119,6 +127,7 @@ void __init iommu_dom0_init(struct domai
     if ( !iommu_enabled )
         return;
 
+    register_keyhandler('o', &iommu_p2m_table);
     d->need_iommu = !!iommu_dom0_strict;
     if ( need_iommu(d) )
     {
@@ -654,6 +663,34 @@ int iommu_do_domctl(
     return ret;
 }
 
+static void iommu_dump_p2m_table(unsigned char key)
+{
+    struct domain *d;
+    const struct iommu_ops *ops;
+
+    if ( !iommu_enabled )
+    {
+        printk("IOMMU not enabled!\n");
+        return;
+    }
+
+    ops = iommu_get_ops();
+    for_each_domain(d)
+    {
+        if ( !d->domain_id )
+            continue;
+
+        if ( iommu_use_hap_pt(d) )
+        {
+            printk("\ndomain%d IOMMU p2m table shared with MMU: \n", d->domain_id);
+            continue;
+        }
+
+        printk("\ndomain%d IOMMU p2m table: \n", d->domain_id);
+        ops->dump_p2m_table(d);
+    }
+}
+
 /*
  * Local variables:
  * mode: C
diff -r 6d56e31fe1e1 -r e26e2d1e5642 xen/drivers/passthrough/vtd/iommu.c
--- a/xen/drivers/passthrough/vtd/iommu.c	Wed Aug 15 09:41:21 2012 +0100
+++ b/xen/drivers/passthrough/vtd/iommu.c	Fri Aug 17 07:53:38 2012 -0700
@@ -31,6 +31,7 @@
 #include <xen/pci.h>
 #include <xen/pci_regs.h>
 #include <xen/keyhandler.h>
+#include <xen/softirq.h>
 #include <asm/msi.h>
 #include <asm/irq.h>
 #if defined(__i386__) || defined(__x86_64__)
@@ -2365,6 +2366,60 @@ static void vtd_resume(void)
     }
 }
 
+static void vtd_dump_p2m_table_level(paddr_t pt_maddr, int level, paddr_t gpa, 
+                                     int indent)
+{
+    paddr_t address;
+    int i;
+    struct dma_pte *pt_vaddr, *pte;
+    int next_level;
+
+    if ( level < 1 )
+        return;
+
+    pt_vaddr = map_vtd_domain_page(pt_maddr);
+    if ( pt_vaddr == NULL )
+    {
+        printk("Failed to map VT-D domain page %"PRIpaddr"\n", pt_maddr);
+        return;
+    }
+
+    next_level = level - 1;
+    for ( i = 0; i < PTE_NUM; i++ )
+    {
+        if ( !(i % 2) )
+            process_pending_softirqs();
+
+        pte = &pt_vaddr[i];
+        if ( !dma_pte_present(*pte) )
+            continue;
+
+        address = gpa + offset_level_address(i, level);
+        if ( next_level >= 1 ) 
+            vtd_dump_p2m_table_level(dma_pte_addr(*pte), next_level, 
+                                     address, indent + 1);
+        else
+            printk("%*sgfn: %08lx mfn: %08lx\n",
+                   indent, "",
+                   (unsigned long)(address >> PAGE_SHIFT_4K),
+                   (unsigned long)(pte->val >> PAGE_SHIFT_4K));
+    }
+
+    unmap_vtd_domain_page(pt_vaddr);
+}
+
+static void vtd_dump_p2m_table(struct domain *d)
+{
+    struct hvm_iommu *hd;
+
+    if ( list_empty(&acpi_drhd_units) )
+        return;
+
+    hd = domain_hvm_iommu(d);
+    printk("p2m table has %d levels\n", agaw_to_level(hd->agaw));
+    vtd_dump_p2m_table_level(hd->pgd_maddr, agaw_to_level(hd->agaw), 0, 0);
+}
+
 const struct iommu_ops intel_iommu_ops = {
     .init = intel_iommu_domain_init,
     .dom0_init = intel_iommu_dom0_init,
@@ -2387,6 +2442,7 @@ const struct iommu_ops intel_iommu_ops =
     .crash_shutdown = vtd_crash_shutdown,
     .iotlb_flush = intel_iommu_iotlb_flush,
     .iotlb_flush_all = intel_iommu_iotlb_flush_all,
+    .dump_p2m_table = vtd_dump_p2m_table,
 };
 
 /*
diff -r 6d56e31fe1e1 -r e26e2d1e5642 xen/drivers/passthrough/vtd/iommu.h
--- a/xen/drivers/passthrough/vtd/iommu.h	Wed Aug 15 09:41:21 2012 +0100
+++ b/xen/drivers/passthrough/vtd/iommu.h	Fri Aug 17 07:53:38 2012 -0700
@@ -248,6 +248,8 @@ struct context_entry {
 #define level_to_offset_bits(l) (12 + (l - 1) * LEVEL_STRIDE)
 #define address_level_offset(addr, level) \
             ((addr >> level_to_offset_bits(level)) & LEVEL_MASK)
+#define offset_level_address(offset, level) \
+            ((u64)(offset) << level_to_offset_bits(level))
 #define level_mask(l) (((u64)(-1)) << level_to_offset_bits(l))
 #define level_size(l) (1 << level_to_offset_bits(l))
 #define align_to_level(addr, l) ((addr + level_size(l) - 1) & level_mask(l))
diff -r 6d56e31fe1e1 -r e26e2d1e5642 xen/include/asm-x86/hvm/svm/amd-iommu-defs.h
--- a/xen/include/asm-x86/hvm/svm/amd-iommu-defs.h	Wed Aug 15 09:41:21 2012 +0100
+++ b/xen/include/asm-x86/hvm/svm/amd-iommu-defs.h	Fri Aug 17 07:53:38 2012 -0700
@@ -38,6 +38,10 @@
 #define PTE_PER_TABLE_ALLOC(entries)	\
 	PAGE_SIZE * (PTE_PER_TABLE_ALIGN(entries) >> PTE_PER_TABLE_SHIFT)
 
+#define amd_offset_level_address(offset, level) \
+      	((u64)(offset) << (12 + (PTE_PER_TABLE_SHIFT * \
+                                (level - IOMMU_PAGING_MODE_LEVEL_1))))
+
 #define PCI_MIN_CAP_OFFSET	0x40
 #define PCI_MAX_CAP_BLOCKS	48
 #define PCI_CAP_PTR_MASK	0xFC
diff -r 6d56e31fe1e1 -r e26e2d1e5642 xen/include/xen/iommu.h
--- a/xen/include/xen/iommu.h	Wed Aug 15 09:41:21 2012 +0100
+++ b/xen/include/xen/iommu.h	Fri Aug 17 07:53:38 2012 -0700
@@ -141,6 +141,7 @@ struct iommu_ops {
     void (*crash_shutdown)(void);
     void (*iotlb_flush)(struct domain *d, unsigned long gfn, unsigned int page_count);
     void (*iotlb_flush_all)(struct domain *d);
+    void (*dump_p2m_table)(struct domain *d);
 };
 
 void iommu_update_ire_from_apic(unsigned int apic, unsigned int reg, unsigned int value);

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 14:54:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 14:54:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2Nvq-0003dX-P3; Fri, 17 Aug 2012 14:53:50 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Santosh.Jodh@citrix.com>) id 1T2Nvo-0003dM-NB
	for xen-devel@lists.xensource.com; Fri, 17 Aug 2012 14:53:49 +0000
Received: from [85.158.139.83:10483] by server-10.bemta-5.messagelabs.com id
	5E/63-13125-BFA5E205; Fri, 17 Aug 2012 14:53:47 +0000
X-Env-Sender: Santosh.Jodh@citrix.com
X-Msg-Ref: server-16.tower-182.messagelabs.com!1345215225!21344522!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzIwNzA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17393 invoked from network); 17 Aug 2012 14:53:47 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 14:53:47 -0000
X-IronPort-AV: E=Sophos;i="4.77,785,1336363200"; d="scan'208";a="205494879"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 10:53:45 -0400
Received: from REDBLD-XS.ad.xensource.com (10.232.6.25) by
	smtprelay.citrix.com (10.13.107.65) with Microsoft SMTP Server id
	8.3.213.0; Fri, 17 Aug 2012 10:53:45 -0400
MIME-Version: 1.0
X-Mercurial-Node: e26e2d1e5642fa8aeada2dec48a5ce6a5d3a2d8a
Message-ID: <e26e2d1e5642fa8aeada.1345215224@REDBLD-XS.ad.xensource.com>
User-Agent: Mercurial-patchbomb/1.6.4
Date: Fri, 17 Aug 2012 07:53:44 -0700
From: Santosh Jodh <santosh.jodh@citrix.com>
To: <xen-devel@lists.xensource.com>
Cc: wei.wang2@amd.com, tim@xen.org, JBeulich@suse.com, xiantao.zhang@intel.com
Subject: [Xen-devel] [PATCH] Dump IOMMU p2m table
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

New key handler 'o' to dump the IOMMU p2m table for each domain.
Skips dumping table for domain0.
Intel and AMD specific iommu_ops handler for dumping p2m table.

Incorporated feedback from Jan Beulich and Wei Wang.
Fixed indent printing with %*s.
Removed superfluous superpage and other attribute prints.
Made next_level usage consistent for AMD IOMMU dumps; warn if an inconsistency is found.
The AMD IOMMU can skip levels; handle 2MB and 1GB IOMMU page sizes for AMD.

Signed-off-by: Santosh Jodh <santosh.jodh@citrix.com>

diff -r 6d56e31fe1e1 -r e26e2d1e5642 xen/drivers/passthrough/amd/pci_amd_iommu.c
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c	Wed Aug 15 09:41:21 2012 +0100
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c	Fri Aug 17 07:53:38 2012 -0700
@@ -22,6 +22,7 @@
 #include <xen/pci.h>
 #include <xen/pci_regs.h>
 #include <xen/paging.h>
+#include <xen/softirq.h>
 #include <asm/hvm/iommu.h>
 #include <asm/amd-iommu.h>
 #include <asm/hvm/svm/amd-iommu-proto.h>
@@ -512,6 +513,80 @@ static int amd_iommu_group_id(u16 seg, u
 
 #include <asm/io_apic.h>
 
+static void amd_dump_p2m_table_level(struct page_info* pg, int level, 
+                                     paddr_t gpa, int indent)
+{
+    paddr_t address;
+    void *table_vaddr, *pde;
+    paddr_t next_table_maddr;
+    int index, next_level, present;
+    u32 *entry;
+
+    if ( level < 1 )
+        return;
+
+    table_vaddr = __map_domain_page(pg);
+    if ( table_vaddr == NULL )
+    {
+        printk("Failed to map IOMMU domain page %"PRIpaddr"\n", 
+                page_to_maddr(pg));
+        return;
+    }
+
+    for ( index = 0; index < PTE_PER_TABLE_SIZE; index++ )
+    {
+        if ( !(index % 2) )
+            process_pending_softirqs();
+
+        pde = table_vaddr + (index * IOMMU_PAGE_TABLE_ENTRY_SIZE);
+        next_table_maddr = amd_iommu_get_next_table_from_pte(pde);
+        entry = (u32*)pde;
+
+        present = get_field_from_reg_u32(entry[0],
+                                         IOMMU_PDE_PRESENT_MASK,
+                                         IOMMU_PDE_PRESENT_SHIFT);
+
+        if ( !present )
+            continue;
+
+        next_level = get_field_from_reg_u32(entry[0],
+                                            IOMMU_PDE_NEXT_LEVEL_MASK,
+                                            IOMMU_PDE_NEXT_LEVEL_SHIFT);
+
+        if ( next_level && (next_level != (level - 1)) )
+        {
+            printk("IOMMU p2m table error. next_level = %d, expected %d\n",
+                   next_level, level - 1);
+
+            continue;
+        }
+
+        address = gpa + amd_offset_level_address(index, level);
+        if ( next_level >= 1 )
+            amd_dump_p2m_table_level(
+                maddr_to_page(next_table_maddr), next_level,
+                address, indent + 1);
+        else
+            printk("%*sgfn: %08lx  mfn: %08lx\n",
+                   indent, "",
+                   (unsigned long)PFN_DOWN(address),
+                   (unsigned long)PFN_DOWN(next_table_maddr));
+    }
+
+    unmap_domain_page(table_vaddr);
+}
+
+static void amd_dump_p2m_table(struct domain *d)
+{
+    struct hvm_iommu *hd  = domain_hvm_iommu(d);
+
+    if ( !hd->root_table ) 
+        return;
+
+    printk("p2m table has %d levels\n", hd->paging_mode);
+    amd_dump_p2m_table_level(hd->root_table, hd->paging_mode, 0, 0);
+}
+
 const struct iommu_ops amd_iommu_ops = {
     .init = amd_iommu_domain_init,
     .dom0_init = amd_iommu_dom0_init,
@@ -531,4 +606,5 @@ const struct iommu_ops amd_iommu_ops = {
     .resume = amd_iommu_resume,
     .share_p2m = amd_iommu_share_p2m,
     .crash_shutdown = amd_iommu_suspend,
+    .dump_p2m_table = amd_dump_p2m_table,
 };
diff -r 6d56e31fe1e1 -r e26e2d1e5642 xen/drivers/passthrough/iommu.c
--- a/xen/drivers/passthrough/iommu.c	Wed Aug 15 09:41:21 2012 +0100
+++ b/xen/drivers/passthrough/iommu.c	Fri Aug 17 07:53:38 2012 -0700
@@ -19,10 +19,12 @@
 #include <xen/paging.h>
 #include <xen/guest_access.h>
 #include <xen/softirq.h>
+#include <xen/keyhandler.h>
 #include <xsm/xsm.h>
 
 static void parse_iommu_param(char *s);
 static int iommu_populate_page_table(struct domain *d);
+static void iommu_dump_p2m_table(unsigned char key);
 
 /*
  * The 'iommu' parameter enables the IOMMU.  Optional comma separated
@@ -54,6 +56,12 @@ bool_t __read_mostly amd_iommu_perdev_in
 
 DEFINE_PER_CPU(bool_t, iommu_dont_flush_iotlb);
 
+static struct keyhandler iommu_p2m_table = {
+    .diagnostic = 0,
+    .u.fn = iommu_dump_p2m_table,
+    .desc = "dump iommu p2m table"
+};
+
 static void __init parse_iommu_param(char *s)
 {
     char *ss;
@@ -119,6 +127,7 @@ void __init iommu_dom0_init(struct domai
     if ( !iommu_enabled )
         return;
 
+    register_keyhandler('o', &iommu_p2m_table);
     d->need_iommu = !!iommu_dom0_strict;
     if ( need_iommu(d) )
     {
@@ -654,6 +663,34 @@ int iommu_do_domctl(
     return ret;
 }
 
+static void iommu_dump_p2m_table(unsigned char key)
+{
+    struct domain *d;
+    const struct iommu_ops *ops;
+
+    if ( !iommu_enabled )
+    {
+        printk("IOMMU not enabled!\n");
+        return;
+    }
+
+    ops = iommu_get_ops();
+    for_each_domain(d)
+    {
+        if ( !d->domain_id )
+            continue;
+
+        if ( iommu_use_hap_pt(d) )
+        {
+            printk("\ndomain%d IOMMU p2m table shared with MMU: \n", d->domain_id);
+            continue;
+        }
+
+        printk("\ndomain%d IOMMU p2m table: \n", d->domain_id);
+        ops->dump_p2m_table(d);
+    }
+}
+
 /*
  * Local variables:
  * mode: C
diff -r 6d56e31fe1e1 -r e26e2d1e5642 xen/drivers/passthrough/vtd/iommu.c
--- a/xen/drivers/passthrough/vtd/iommu.c	Wed Aug 15 09:41:21 2012 +0100
+++ b/xen/drivers/passthrough/vtd/iommu.c	Fri Aug 17 07:53:38 2012 -0700
@@ -31,6 +31,7 @@
 #include <xen/pci.h>
 #include <xen/pci_regs.h>
 #include <xen/keyhandler.h>
+#include <xen/softirq.h>
 #include <asm/msi.h>
 #include <asm/irq.h>
 #if defined(__i386__) || defined(__x86_64__)
@@ -2365,6 +2366,60 @@ static void vtd_resume(void)
     }
 }
 
+static void vtd_dump_p2m_table_level(paddr_t pt_maddr, int level, paddr_t gpa, 
+                                     int indent)
+{
+    paddr_t address;
+    int i;
+    struct dma_pte *pt_vaddr, *pte;
+    int next_level;
+
+    if ( level < 1 )
+        return;
+
+    pt_vaddr = map_vtd_domain_page(pt_maddr);
+    if ( pt_vaddr == NULL )
+    {
+        printk("Failed to map VT-D domain page %"PRIpaddr"\n", pt_maddr);
+        return;
+    }
+
+    next_level = level - 1;
+    for ( i = 0; i < PTE_NUM; i++ )
+    {
+        if ( !(i % 2) )
+            process_pending_softirqs();
+
+        pte = &pt_vaddr[i];
+        if ( !dma_pte_present(*pte) )
+            continue;
+
+        address = gpa + offset_level_address(i, level);
+        if ( next_level >= 1 ) 
+            vtd_dump_p2m_table_level(dma_pte_addr(*pte), next_level, 
+                                     address, indent + 1);
+        else
+            printk("%*sgfn: %08lx mfn: %08lx\n",
+                   indent, "",
+                   (unsigned long)(address >> PAGE_SHIFT_4K),
+                   (unsigned long)(pte->val >> PAGE_SHIFT_4K));
+    }
+
+    unmap_vtd_domain_page(pt_vaddr);
+}
+
+static void vtd_dump_p2m_table(struct domain *d)
+{
+    struct hvm_iommu *hd;
+
+    if ( list_empty(&acpi_drhd_units) )
+        return;
+
+    hd = domain_hvm_iommu(d);
+    printk("p2m table has %d levels\n", agaw_to_level(hd->agaw));
+    vtd_dump_p2m_table_level(hd->pgd_maddr, agaw_to_level(hd->agaw), 0, 0);
+}
+
 const struct iommu_ops intel_iommu_ops = {
     .init = intel_iommu_domain_init,
     .dom0_init = intel_iommu_dom0_init,
@@ -2387,6 +2442,7 @@ const struct iommu_ops intel_iommu_ops =
     .crash_shutdown = vtd_crash_shutdown,
     .iotlb_flush = intel_iommu_iotlb_flush,
     .iotlb_flush_all = intel_iommu_iotlb_flush_all,
+    .dump_p2m_table = vtd_dump_p2m_table,
 };
 
 /*
diff -r 6d56e31fe1e1 -r e26e2d1e5642 xen/drivers/passthrough/vtd/iommu.h
--- a/xen/drivers/passthrough/vtd/iommu.h	Wed Aug 15 09:41:21 2012 +0100
+++ b/xen/drivers/passthrough/vtd/iommu.h	Fri Aug 17 07:53:38 2012 -0700
@@ -248,6 +248,8 @@ struct context_entry {
 #define level_to_offset_bits(l) (12 + (l - 1) * LEVEL_STRIDE)
 #define address_level_offset(addr, level) \
             ((addr >> level_to_offset_bits(level)) & LEVEL_MASK)
+#define offset_level_address(offset, level) \
+            ((u64)(offset) << level_to_offset_bits(level))
 #define level_mask(l) (((u64)(-1)) << level_to_offset_bits(l))
 #define level_size(l) (1 << level_to_offset_bits(l))
 #define align_to_level(addr, l) ((addr + level_size(l) - 1) & level_mask(l))
diff -r 6d56e31fe1e1 -r e26e2d1e5642 xen/include/asm-x86/hvm/svm/amd-iommu-defs.h
--- a/xen/include/asm-x86/hvm/svm/amd-iommu-defs.h	Wed Aug 15 09:41:21 2012 +0100
+++ b/xen/include/asm-x86/hvm/svm/amd-iommu-defs.h	Fri Aug 17 07:53:38 2012 -0700
@@ -38,6 +38,10 @@
 #define PTE_PER_TABLE_ALLOC(entries)	\
 	PAGE_SIZE * (PTE_PER_TABLE_ALIGN(entries) >> PTE_PER_TABLE_SHIFT)
 
+#define amd_offset_level_address(offset, level) \
+      	((u64)(offset) << (12 + (PTE_PER_TABLE_SHIFT * \
+                                (level - IOMMU_PAGING_MODE_LEVEL_1))))
+
 #define PCI_MIN_CAP_OFFSET	0x40
 #define PCI_MAX_CAP_BLOCKS	48
 #define PCI_CAP_PTR_MASK	0xFC
diff -r 6d56e31fe1e1 -r e26e2d1e5642 xen/include/xen/iommu.h
--- a/xen/include/xen/iommu.h	Wed Aug 15 09:41:21 2012 +0100
+++ b/xen/include/xen/iommu.h	Fri Aug 17 07:53:38 2012 -0700
@@ -141,6 +141,7 @@ struct iommu_ops {
     void (*crash_shutdown)(void);
     void (*iotlb_flush)(struct domain *d, unsigned long gfn, unsigned int page_count);
     void (*iotlb_flush_all)(struct domain *d);
+    void (*dump_p2m_table)(struct domain *d);
 };
 
 void iommu_update_ire_from_apic(unsigned int apic, unsigned int reg, unsigned int value);

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 14:54:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 14:54:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2Nvz-0003eA-5i; Fri, 17 Aug 2012 14:53:59 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T2Nvy-0003e2-1L
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 14:53:58 +0000
Received: from [85.158.138.51:16810] by server-11.bemta-3.messagelabs.com id
	1C/E2-23152-50B5E205; Fri, 17 Aug 2012 14:53:57 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-13.tower-174.messagelabs.com!1345215236!10151316!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk0MzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12506 invoked from network); 17 Aug 2012 14:53:56 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-13.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 14:53:56 -0000
X-IronPort-AV: E=Sophos;i="4.77,785,1336348800"; d="scan'208";a="14062524"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 14:53:56 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 17 Aug 2012 15:53:56 +0100
Date: Fri, 17 Aug 2012 15:53:38 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1345215020.10161.64.camel@zakaz.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1208171553060.15568@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208161523420.4850@kaball.uk.xensource.com>
	<1345128612-10323-4-git-send-email-stefano.stabellini@eu.citrix.com>
	<502D33B8020000780009596B@nat28.tlf.novell.com>
	<502D37D702000078000959F7@nat28.tlf.novell.com> 
	<alpine.DEB.2.02.1208161801210.15568@kaball.uk.xensource.com>
	<1345190532.30865.67.camel@zakaz.uk.xensource.com>
	<502E2F090200007800095D62@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208171443580.15568@kaball.uk.xensource.com>
	<502E6A200200007800095E9A@nat28.tlf.novell.com>
	<1345215020.10161.64.camel@zakaz.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: xen-devel <xen-devel@lists.xen.org>, "Tim \(Xen.org\)" <tim@xen.org>,
	Jan Beulich <JBeulich@suse.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH v3 4/6] xen: introduce XEN_GUEST_HANDLE_PARAM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 17 Aug 2012, Ian Campbell wrote:
> On Fri, 2012-08-17 at 14:58 +0100, Jan Beulich wrote:
> > >>> On 17.08.12 at 15:47, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > wrote:
> > > On Fri, 17 Aug 2012, Jan Beulich wrote:
> > >> >>> On 17.08.12 at 10:02, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > >> > On Thu, 2012-08-16 at 18:10 +0100, Stefano Stabellini wrote:
> > >> >> On Thu, 16 Aug 2012, Jan Beulich wrote:
> > >> >> > >>> On 16.08.12 at 17:54, "Jan Beulich" <JBeulich@suse.com> wrote:
> > >> >> > > Seeing the patch I btw realized that there's no easy way to
> > >> >> > > avoid having the type as a second argument in the conversion
> > >> >> > > macros. Nevertheless I still don't like the explicitly specified type
> > >> >> > > there.
> > >> >> > 
> > >> >> > Btw - on the architecture(s) where the two handles are identical
> > >> >> > I would prefer you to make the conversion functions trivial (and
> > >> >> > thus avoid making use of the "type" parameter), thus allowing
> > >> >> > the type checking to occur that you currently circumvent.
> > >> >> 
> > >> >> OK, I can do that.
> > >> > 
> > >> > Will this result in the type parameter potentially becoming stale?
> > >> > 
> > >> > Adding a redundant pointer compare is a good way to get the compiler to
> > >> > catch this. Smth like;
> > >> > 
> > >> >         /* Cast a XEN_GUEST_HANDLE_PARAM to XEN_GUEST_HANDLE */
> > >> >         #define guest_handle_from_param(hnd, type) ({
> > >> >             typeof((hnd).p) _x = (hnd).p;
> > >> >             XEN_GUEST_HANDLE(type) _y;
> > >> >             &_y == &_x;
> > >> >             hnd;
> > >> >          })
> > >> 
> > >> Ah yes, that's a good suggestion.
> > >> 
> > >> > I'm not sure which two pointers of members of the various structs need
> > >> > to be compared, maybe it's actually &_y.p and &hnd.p, but you get the
> > >> > idea...
> > >> 
> > >> Right, comparing (hnd).p with _y.p would be the right thing; no
> > >> need for _x, but some other (mechanical) adjustments would be
> > >> necessary.
> > > 
> > > The _x variable is still useful to avoid multiple evaluations of hnd,
> > > even though I know that this is not a public header.
> > 
> > But we had settled on returning hnd unmodified when both
> > handle types are the same.
> > 
> > > What about the following:
> > > 
> > > /* Cast a XEN_GUEST_HANDLE to XEN_GUEST_HANDLE_PARAM */
> > > #define guest_handle_to_param(hnd, type) ({                \
> > >     typeof((hnd).p) _x = (hnd).p;                          \
> > >     XEN_GUEST_HANDLE_PARAM(type) _y = { _x };              \
> > >     if (&_x != &_y.p) BUG();                               \
> > >     _y;                                                    \
> > > })
> > 
> > Since this is not a public header, something like this (untested,
> > so may not compile as is)
> > 
> > #define guest_handle_to_param(hnd, type) ({                \
> >     (void)(typeof((hnd).p)0 == (XEN_GUEST_HANDLE_PARAM(type){}).p); \
> >     (hnd);                                                    \
> > })
> > 
> > is what I was thinking of.
> 
> This evaluates hnd twice, or do we only care about that in public
> headers for some reason? (personally I think principle of least surprise
> suggests avoiding it wherever possible)

The ARM version evaluates hnd only once and would work for x86 too.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 14:57:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 14:57:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2Nz1-0003wB-4P; Fri, 17 Aug 2012 14:57:07 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T2Nyz-0003vv-8M
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 14:57:05 +0000
Received: from [85.158.143.35:35108] by server-2.bemta-4.messagelabs.com id
	EA/7D-31966-0CB5E205; Fri, 17 Aug 2012 14:57:04 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1345215418!15940103!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27384 invoked from network); 17 Aug 2012 14:56:58 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-13.tower-21.messagelabs.com with SMTP;
	17 Aug 2012 14:56:58 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 17 Aug 2012 15:56:57 +0100
Message-Id: <502E78010200007800095F56@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 17 Aug 2012 15:57:37 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>
References: <alpine.DEB.2.02.1208161523420.4850@kaball.uk.xensource.com>
	<1345128612-10323-4-git-send-email-stefano.stabellini@eu.citrix.com>
	<502D33B8020000780009596B@nat28.tlf.novell.com>
	<502D37D702000078000959F7@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208161801210.15568@kaball.uk.xensource.com>
	<1345190532.30865.67.camel@zakaz.uk.xensource.com>
	<502E2F090200007800095D62@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208171443580.15568@kaball.uk.xensource.com>
	<502E6A200200007800095E9A@nat28.tlf.novell.com>
	<1345215020.10161.64.camel@zakaz.uk.xensource.com>
In-Reply-To: <1345215020.10161.64.camel@zakaz.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xen.org>, "Tim \(Xen.org\)" <tim@xen.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH v3 4/6] xen: introduce XEN_GUEST_HANDLE_PARAM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 17.08.12 at 16:50, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Fri, 2012-08-17 at 14:58 +0100, Jan Beulich wrote:
>> >>> On 17.08.12 at 15:47, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
>> wrote:
>> > On Fri, 17 Aug 2012, Jan Beulich wrote:
>> >> >>> On 17.08.12 at 10:02, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>> >> > On Thu, 2012-08-16 at 18:10 +0100, Stefano Stabellini wrote:
>> >> >> On Thu, 16 Aug 2012, Jan Beulich wrote:
>> >> >> > >>> On 16.08.12 at 17:54, "Jan Beulich" <JBeulich@suse.com> wrote:
>> >> >> > > Seeing the patch I btw realized that there's no easy way to
>> >> >> > > avoid having the type as a second argument in the conversion
>> >> >> > > macros. Nevertheless I still don't like the explicitly specified type
>> >> >> > > there.
>> >> >> > 
>> >> >> > Btw - on the architecture(s) where the two handles are identical
>> >> >> > I would prefer you to make the conversion functions trivial (and
>> >> >> > thus avoid making use of the "type" parameter), thus allowing
>> >> >> > the type checking to occur that you currently circumvent.
>> >> >> 
>> >> >> OK, I can do that.
>> >> > 
>> >> > Will this result in the type parameter potentially becoming stale?
>> >> > 
>> >> > Adding a redundant pointer compare is a good way to get the compiler to
>> >> > catch this. Smth like;
>> >> > 
>> >> >         /* Cast a XEN_GUEST_HANDLE_PARAM to XEN_GUEST_HANDLE */
>> >> >         #define guest_handle_from_param(hnd, type) ({
>> >> >             typeof((hnd).p) _x = (hnd).p;
>> >> >             XEN_GUEST_HANDLE(type) _y;
>> >> >             &_y == &_x;
>> >> >             hnd;
>> >> >          })
>> >> 
>> >> Ah yes, that's a good suggestion.
>> >> 
>> >> > I'm not sure which two pointers of members of the various structs need
>> >> > to be compared, maybe it's actually &_y.p and &hnd.p, but you get the
>> >> > idea...
>> >> 
>> >> Right, comparing (hnd).p with _y.p would be the right thing; no
>> >> need for _x, but some other (mechanical) adjustments would be
>> >> necessary.
>> > 
>> > The _x variable is still useful to avoid multiple evaluations of hnd,
>> > even though I know that this is not a public header.
>> 
>> But we had settled on returning hnd unmodified when both
>> handle types are the same.
>> 
>> > What about the following:
>> > 
>> > /* Cast a XEN_GUEST_HANDLE to XEN_GUEST_HANDLE_PARAM */
>> > #define guest_handle_to_param(hnd, type) ({                \
>> >     typeof((hnd).p) _x = (hnd).p;                          \
>> >     XEN_GUEST_HANDLE_PARAM(type) _y = { _x };              \
>> >     if (&_x != &_y.p) BUG();                               \
>> >     _y;                                                    \
>> > })
>> 
>> Since this is not a public header, something like this (untested,
>> so may not compile as is)
>> 
>> #define guest_handle_to_param(hnd, type) ({                \
>>     (void)(typeof((hnd).p)0 == (XEN_GUEST_HANDLE_PARAM(type){}).p); \
>>     (hnd);                                                    \
>> })
>> 
>> is what I was thinking of.
> 
> This evaluates hnd twice, or do we only care about that in public
> headers for some reason? (personally I think principle of least surprise
> suggests avoiding it wherever possible)

No, it doesn't - like sizeof(), typeof() doesn't evaluate its
argument.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Subject: Re: [Xen-devel] [PATCH v3 4/6] xen: introduce XEN_GUEST_HANDLE_PARAM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 17.08.12 at 16:50, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Fri, 2012-08-17 at 14:58 +0100, Jan Beulich wrote:
>> >>> On 17.08.12 at 15:47, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
>> wrote:
>> > On Fri, 17 Aug 2012, Jan Beulich wrote:
>> >> >>> On 17.08.12 at 10:02, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>> >> > On Thu, 2012-08-16 at 18:10 +0100, Stefano Stabellini wrote:
>> >> >> On Thu, 16 Aug 2012, Jan Beulich wrote:
>> >> >> > >>> On 16.08.12 at 17:54, "Jan Beulich" <JBeulich@suse.com> wrote:
>> >> >> > > Seeing the patch I btw realized that there's no easy way to
>> >> >> > > avoid having the type as a second argument in the conversion
>> >> >> > > macros. Nevertheless I still don't like the explicitly specified type
>> >> >> > > there.
>> >> >> > 
>> >> >> > Btw - on the architecture(s) where the two handles are identical
>> >> >> > I would prefer you to make the conversion functions trivial (and
>> >> >> > thus avoid making use of the "type" parameter), thus allowing
>> >> >> > the type checking to occur that you currently circumvent.
>> >> >> 
>> >> >> OK, I can do that.
>> >> > 
>> >> > Will this result in the type parameter potentially becoming stale?
>> >> > 
>> >> > Adding a redundant pointer compare is a good way to get the compiler to
>> >> > catch this. Something like:
>> >> > 
>> >> >         /* Cast a XEN_GUEST_HANDLE_PARAM to XEN_GUEST_HANDLE */
>> >> >         #define guest_handle_from_param(hnd, type) ({
>> >> >             typeof((hnd).p) _x = (hnd).p;
>> >> >             XEN_GUEST_HANDLE(type) _y;
>> >> >             &_y == &_x;
>> >> >             hnd;
>> >> >          })
>> >> 
>> >> Ah yes, that's a good suggestion.
>> >> 
>> >> > I'm not sure which two pointers of members of the various structs need
>> >> > to be compared, maybe it's actually &_y.p and &hnd.p, but you get the
>> >> > idea...
>> >> 
>> >> Right, comparing (hnd).p with _y.p would be the right thing; no
>> >> need for _x, but some other (mechanical) adjustments would be
>> >> necessary.
>> > 
>> > The _x variable is still useful to avoid multiple evaluations of hnd,
>> > even though I know that this is not a public header.
>> 
>> But we had settled on returning hnd unmodified when both
>> handle types are the same.
>> 
>> > What about the following:
>> > 
>> > /* Cast a XEN_GUEST_HANDLE to XEN_GUEST_HANDLE_PARAM */
>> > #define guest_handle_to_param(hnd, type) ({                \
>> >     typeof((hnd).p) _x = (hnd).p;                          \
>> >     XEN_GUEST_HANDLE_PARAM(type) _y = { _x };              \
>> >     if (&_x != &_y.p) BUG();                               \
>> >     _y;                                                    \
>> > })
>> 
>> Since this is not a public header, something like this (untested,
>> so may not compile as is)
>> 
>> #define guest_handle_to_param(hnd, type) ({                \
>>     (void)(typeof((hnd).p)0 == (XEN_GUEST_HANDLE_PARAM(type){}).p); \
>>     (hnd);                                                    \
>> })
>> 
>> is what I was thinking of.
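[Editor's aside: the compile-time check Jan sketches can be reproduced outside Xen. The sketch below uses hypothetical handle_t/handle_param_t wrappers (not Xen's real types) and GNU C statement expressions; the comparison exists only so the compiler diagnoses mismatched pointer types, while the handle itself is returned unmodified.]

```c
#include <assert.h>

/* Hypothetical stand-ins for XEN_GUEST_HANDLE / XEN_GUEST_HANDLE_PARAM
 * on an architecture where both wrap the same pointer type. */
typedef struct { int *p; } handle_t;
typedef struct { int *p; } handle_param_t;

/* Type check in the spirit of Jan's macro: comparing the two .p members
 * makes the compiler warn on incompatible pointer types. Wrapping the
 * comparison in sizeof keeps it unevaluated at run time, and the handle
 * itself is evaluated once and returned unmodified. */
#define handle_to_param(hnd) ({                                  \
    (void)sizeof((hnd).p == ((handle_param_t){ 0 }).p);          \
    (hnd);                                                       \
})
```

If handle_t's .p were, say, a long * while handle_param_t's stayed int *, the comparison would no longer compile cleanly, which is exactly the staleness check Ian asked for.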
> 
> This evaluates hnd twice, or do we only care about that in public
> headers for some reason? (personally I think the principle of least surprise
> suggests avoiding it wherever possible)

No, it doesn't - like sizeof(), typeof() doesn't evaluate its
argument.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 14:57:22 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 14:57:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2Nz3-0003wU-HM; Fri, 17 Aug 2012 14:57:09 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Santosh.Jodh@citrix.com>) id 1T2Nz1-0003wA-GD
	for xen-devel@lists.xensource.com; Fri, 17 Aug 2012 14:57:07 +0000
Received: from [85.158.143.35:35251] by server-1.bemta-4.messagelabs.com id
	E2/C1-07754-2CB5E205; Fri, 17 Aug 2012 14:57:06 +0000
X-Env-Sender: Santosh.Jodh@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1345215424!6169363!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjQ1NzE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12326 invoked from network); 17 Aug 2012 14:57:06 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 14:57:06 -0000
X-IronPort-AV: E=Sophos;i="4.77,785,1336363200"; d="scan'208";a="34989648"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 10:57:04 -0400
Received: from REDBLD-XS.ad.xensource.com (10.232.6.25) by
	smtprelay.citrix.com (10.13.107.65) with Microsoft SMTP Server id
	8.3.213.0; Fri, 17 Aug 2012 10:57:03 -0400
MIME-Version: 1.0
X-Mercurial-Node: 995803806158d2dfce2d963bd372132e51b292fc
Message-ID: <995803806158d2dfce2d.1345215423@REDBLD-XS.ad.xensource.com>
User-Agent: Mercurial-patchbomb/1.6.4
Date: Fri, 17 Aug 2012 07:57:03 -0700
From: Santosh Jodh <santosh.jodh@citrix.com>
To: <xen-devel@lists.xensource.com>
Cc: wei.wang2@amd.com, tim@xen.org, JBeulich@suse.com, xiantao.zhang@intel.com
Subject: [Xen-devel] [PATCH] Dump IOMMU p2m table
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

New key handler 'o' to dump the IOMMU p2m table for each domain.
Dumping the table for domain0 is skipped.
Intel- and AMD-specific iommu_ops handlers for dumping the p2m table.

Incorporated feedback from Jan Beulich and Wei Wang.
Fixed indent printing with %*s.
Removed superfluous superpage and other attribute prints.
Made next_level use consistent for AMD IOMMU dumps; warn if an inconsistency is found.
The AMD IOMMU does not skip levels. Handle 2MB and 1GB IOMMU page sizes for AMD.

Signed-off-by: Santosh Jodh <santosh.jodh@citrix.com>
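[Editor's aside: the "%*s" item in the commit message refers to the printf idiom of printing a string in a caller-supplied field width, so passing the indent level together with an empty string emits that many spaces. A minimal userspace sketch, using snprintf instead of printk and a hypothetical helper name:]

```c
#include <stdio.h>

/* Render one leaf line the way the dump code does: "%*s" with an empty
 * string pads with 'indent' spaces before the gfn/mfn pair. */
static int render_leaf(char *buf, size_t len, int indent,
                       unsigned long gfn, unsigned long mfn)
{
    return snprintf(buf, len, "%*sgfn: %08lx  mfn: %08lx",
                    indent, "", gfn, mfn);
}
```

This is why the recursion below can simply pass indent + 1 per level instead of building a spaces string by hand.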

diff -r 6d56e31fe1e1 -r 995803806158 xen/drivers/passthrough/amd/pci_amd_iommu.c
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c	Wed Aug 15 09:41:21 2012 +0100
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c	Fri Aug 17 07:56:55 2012 -0700
@@ -22,6 +22,7 @@
 #include <xen/pci.h>
 #include <xen/pci_regs.h>
 #include <xen/paging.h>
+#include <xen/softirq.h>
 #include <asm/hvm/iommu.h>
 #include <asm/amd-iommu.h>
 #include <asm/hvm/svm/amd-iommu-proto.h>
@@ -512,6 +513,80 @@ static int amd_iommu_group_id(u16 seg, u
 
 #include <asm/io_apic.h>
 
+static void amd_dump_p2m_table_level(struct page_info* pg, int level, 
+                                     paddr_t gpa, int indent)
+{
+    paddr_t address;
+    void *table_vaddr, *pde;
+    paddr_t next_table_maddr;
+    int index, next_level, present;
+    u32 *entry;
+
+    if ( level < 1 )
+        return;
+
+    table_vaddr = __map_domain_page(pg);
+    if ( table_vaddr == NULL )
+    {
+        printk("Failed to map IOMMU domain page %"PRIpaddr"\n", 
+                page_to_maddr(pg));
+        return;
+    }
+
+    for ( index = 0; index < PTE_PER_TABLE_SIZE; index++ )
+    {
+        if ( !(index % 2) )
+            process_pending_softirqs();
+
+        pde = table_vaddr + (index * IOMMU_PAGE_TABLE_ENTRY_SIZE);
+        next_table_maddr = amd_iommu_get_next_table_from_pte(pde);
+        entry = (u32*)pde;
+
+        present = get_field_from_reg_u32(entry[0],
+                                         IOMMU_PDE_PRESENT_MASK,
+                                         IOMMU_PDE_PRESENT_SHIFT);
+
+        if ( !present )
+            continue;
+
+        next_level = get_field_from_reg_u32(entry[0],
+                                            IOMMU_PDE_NEXT_LEVEL_MASK,
+                                            IOMMU_PDE_NEXT_LEVEL_SHIFT);
+
+        if ( next_level && (next_level != (level - 1)) )
+        {
+            printk("IOMMU p2m table error. next_level = %d, expected %d\n",
+                   next_level, level - 1);
+
+            continue;
+        }
+
+        address = gpa + amd_offset_level_address(index, level);
+        if ( next_level >= 1 )
+            amd_dump_p2m_table_level(
+                maddr_to_page(next_table_maddr), next_level,
+                address, indent + 1);
+        else
+            printk("%*sgfn: %08lx  mfn: %08lx\n",
+                   indent, "",
+                   (unsigned long)PFN_DOWN(address),
+                   (unsigned long)PFN_DOWN(next_table_maddr));
+    }
+
+    unmap_domain_page(table_vaddr);
+}
+
+static void amd_dump_p2m_table(struct domain *d)
+{
+    struct hvm_iommu *hd  = domain_hvm_iommu(d);
+
+    if ( !hd->root_table ) 
+        return;
+
+    printk("p2m table has %d levels\n", hd->paging_mode);
+    amd_dump_p2m_table_level(hd->root_table, hd->paging_mode, 0, 0);
+}
+
 const struct iommu_ops amd_iommu_ops = {
     .init = amd_iommu_domain_init,
     .dom0_init = amd_iommu_dom0_init,
@@ -531,4 +606,5 @@ const struct iommu_ops amd_iommu_ops = {
     .resume = amd_iommu_resume,
     .share_p2m = amd_iommu_share_p2m,
     .crash_shutdown = amd_iommu_suspend,
+    .dump_p2m_table = amd_dump_p2m_table,
 };
diff -r 6d56e31fe1e1 -r 995803806158 xen/drivers/passthrough/iommu.c
--- a/xen/drivers/passthrough/iommu.c	Wed Aug 15 09:41:21 2012 +0100
+++ b/xen/drivers/passthrough/iommu.c	Fri Aug 17 07:56:55 2012 -0700
@@ -19,10 +19,12 @@
 #include <xen/paging.h>
 #include <xen/guest_access.h>
 #include <xen/softirq.h>
+#include <xen/keyhandler.h>
 #include <xsm/xsm.h>
 
 static void parse_iommu_param(char *s);
 static int iommu_populate_page_table(struct domain *d);
+static void iommu_dump_p2m_table(unsigned char key);
 
 /*
  * The 'iommu' parameter enables the IOMMU.  Optional comma separated
@@ -54,6 +56,12 @@ bool_t __read_mostly amd_iommu_perdev_in
 
 DEFINE_PER_CPU(bool_t, iommu_dont_flush_iotlb);
 
+static struct keyhandler iommu_p2m_table = {
+    .diagnostic = 0,
+    .u.fn = iommu_dump_p2m_table,
+    .desc = "dump iommu p2m table"
+};
+
 static void __init parse_iommu_param(char *s)
 {
     char *ss;
@@ -119,6 +127,7 @@ void __init iommu_dom0_init(struct domai
     if ( !iommu_enabled )
         return;
 
+    register_keyhandler('o', &iommu_p2m_table);
     d->need_iommu = !!iommu_dom0_strict;
     if ( need_iommu(d) )
     {
@@ -654,6 +663,34 @@ int iommu_do_domctl(
     return ret;
 }
 
+static void iommu_dump_p2m_table(unsigned char key)
+{
+    struct domain *d;
+    const struct iommu_ops *ops;
+
+    if ( !iommu_enabled )
+    {
+        printk("IOMMU not enabled!\n");
+        return;
+    }
+
+    ops = iommu_get_ops();
+    for_each_domain(d)
+    {
+        if ( !d->domain_id )
+            continue;
+
+        if ( iommu_use_hap_pt(d) )
+        {
+            printk("\ndomain%d IOMMU p2m table shared with MMU: \n", d->domain_id);
+            continue;
+        }
+
+        printk("\ndomain%d IOMMU p2m table: \n", d->domain_id);
+        ops->dump_p2m_table(d);
+    }
+}
+
 /*
  * Local variables:
  * mode: C
diff -r 6d56e31fe1e1 -r 995803806158 xen/drivers/passthrough/vtd/iommu.c
--- a/xen/drivers/passthrough/vtd/iommu.c	Wed Aug 15 09:41:21 2012 +0100
+++ b/xen/drivers/passthrough/vtd/iommu.c	Fri Aug 17 07:56:55 2012 -0700
@@ -31,6 +31,7 @@
 #include <xen/pci.h>
 #include <xen/pci_regs.h>
 #include <xen/keyhandler.h>
+#include <xen/softirq.h>
 #include <asm/msi.h>
 #include <asm/irq.h>
 #if defined(__i386__) || defined(__x86_64__)
@@ -2365,6 +2366,60 @@ static void vtd_resume(void)
     }
 }
 
+static void vtd_dump_p2m_table_level(paddr_t pt_maddr, int level, paddr_t gpa, 
+                                     int indent)
+{
+    paddr_t address;
+    int i;
+    struct dma_pte *pt_vaddr, *pte;
+    int next_level;
+
+    if ( level < 1 )
+        return;
+
+    pt_vaddr = map_vtd_domain_page(pt_maddr);
+    if ( pt_vaddr == NULL )
+    {
+        printk("Failed to map VT-D domain page %"PRIpaddr"\n", pt_maddr);
+        return;
+    }
+
+    next_level = level - 1;
+    for ( i = 0; i < PTE_NUM; i++ )
+    {
+        if ( !(i % 2) )
+            process_pending_softirqs();
+
+        pte = &pt_vaddr[i];
+        if ( !dma_pte_present(*pte) )
+            continue;
+
+        address = gpa + offset_level_address(i, level);
+        if ( next_level >= 1 ) 
+            vtd_dump_p2m_table_level(dma_pte_addr(*pte), next_level, 
+                                     address, indent + 1);
+        else
+            printk("%*sgfn: %08lx mfn: %08lx\n",
+                   indent, "",
+                   (unsigned long)(address >> PAGE_SHIFT_4K),
+                   (unsigned long)(pte->val >> PAGE_SHIFT_4K));
+    }
+
+    unmap_vtd_domain_page(pt_vaddr);
+}
+
+static void vtd_dump_p2m_table(struct domain *d)
+{
+    struct hvm_iommu *hd;
+
+    if ( list_empty(&acpi_drhd_units) )
+        return;
+
+    hd = domain_hvm_iommu(d);
+    printk("p2m table has %d levels\n", agaw_to_level(hd->agaw));
+    vtd_dump_p2m_table_level(hd->pgd_maddr, agaw_to_level(hd->agaw), 0, 0);
+}
+
 const struct iommu_ops intel_iommu_ops = {
     .init = intel_iommu_domain_init,
     .dom0_init = intel_iommu_dom0_init,
@@ -2387,6 +2442,7 @@ const struct iommu_ops intel_iommu_ops =
     .crash_shutdown = vtd_crash_shutdown,
     .iotlb_flush = intel_iommu_iotlb_flush,
     .iotlb_flush_all = intel_iommu_iotlb_flush_all,
+    .dump_p2m_table = vtd_dump_p2m_table,
 };
 
 /*
diff -r 6d56e31fe1e1 -r 995803806158 xen/drivers/passthrough/vtd/iommu.h
--- a/xen/drivers/passthrough/vtd/iommu.h	Wed Aug 15 09:41:21 2012 +0100
+++ b/xen/drivers/passthrough/vtd/iommu.h	Fri Aug 17 07:56:55 2012 -0700
@@ -248,6 +248,8 @@ struct context_entry {
 #define level_to_offset_bits(l) (12 + (l - 1) * LEVEL_STRIDE)
 #define address_level_offset(addr, level) \
             ((addr >> level_to_offset_bits(level)) & LEVEL_MASK)
+#define offset_level_address(offset, level) \
+            ((u64)(offset) << level_to_offset_bits(level))
 #define level_mask(l) (((u64)(-1)) << level_to_offset_bits(l))
 #define level_size(l) (1 << level_to_offset_bits(l))
 #define align_to_level(addr, l) ((addr + level_size(l) - 1) & level_mask(l))
diff -r 6d56e31fe1e1 -r 995803806158 xen/include/asm-x86/hvm/svm/amd-iommu-defs.h
--- a/xen/include/asm-x86/hvm/svm/amd-iommu-defs.h	Wed Aug 15 09:41:21 2012 +0100
+++ b/xen/include/asm-x86/hvm/svm/amd-iommu-defs.h	Fri Aug 17 07:56:55 2012 -0700
@@ -38,6 +38,10 @@
 #define PTE_PER_TABLE_ALLOC(entries)	\
 	PAGE_SIZE * (PTE_PER_TABLE_ALIGN(entries) >> PTE_PER_TABLE_SHIFT)
 
+#define amd_offset_level_address(offset, level) \
+      	((u64)(offset) << (12 + (PTE_PER_TABLE_SHIFT * \
+                                (level - IOMMU_PAGING_MODE_LEVEL_1))))
+
 #define PCI_MIN_CAP_OFFSET	0x40
 #define PCI_MAX_CAP_BLOCKS	48
 #define PCI_CAP_PTR_MASK	0xFC
diff -r 6d56e31fe1e1 -r 995803806158 xen/include/xen/iommu.h
--- a/xen/include/xen/iommu.h	Wed Aug 15 09:41:21 2012 +0100
+++ b/xen/include/xen/iommu.h	Fri Aug 17 07:56:55 2012 -0700
@@ -141,6 +141,7 @@ struct iommu_ops {
     void (*crash_shutdown)(void);
     void (*iotlb_flush)(struct domain *d, unsigned long gfn, unsigned int page_count);
     void (*iotlb_flush_all)(struct domain *d);
+    void (*dump_p2m_table)(struct domain *d);
 };
 
 void iommu_update_ire_from_apic(unsigned int apic, unsigned int reg, unsigned int value);

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 14:59:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 14:59:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2O1M-0004AE-35; Fri, 17 Aug 2012 14:59:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T2O1L-0004A8-9S
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 14:59:31 +0000
Received: from [85.158.143.35:48196] by server-1.bemta-4.messagelabs.com id
	2F/35-07754-25C5E205; Fri, 17 Aug 2012 14:59:30 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1345215556!14671915!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28543 invoked from network); 17 Aug 2012 14:59:18 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-16.tower-21.messagelabs.com with SMTP;
	17 Aug 2012 14:59:18 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 17 Aug 2012 15:59:16 +0100
Message-Id: <502E788B0200007800095F59@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 17 Aug 2012 15:59:55 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Stefano Stabellini" <stefano.stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1208161523420.4850@kaball.uk.xensource.com>
	<1345128612-10323-4-git-send-email-stefano.stabellini@eu.citrix.com>
	<502D33B8020000780009596B@nat28.tlf.novell.com>
	<502D37D702000078000959F7@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208161801210.15568@kaball.uk.xensource.com>
	<1345190532.30865.67.camel@zakaz.uk.xensource.com>
	<502E2F090200007800095D62@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208171443580.15568@kaball.uk.xensource.com>
	<502E6A200200007800095E9A@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208171540350.15568@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1208171540350.15568@kaball.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "Tim\(Xen.org\)" <tim@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v3 4/6] xen: introduce XEN_GUEST_HANDLE_PARAM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 17.08.12 at 16:45, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
wrote:
> On Fri, 17 Aug 2012, Jan Beulich wrote:
>> >>> On 17.08.12 at 15:47, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
>> wrote:
>> > On Fri, 17 Aug 2012, Jan Beulich wrote:
>> >> >>> On 17.08.12 at 10:02, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>> >> > On Thu, 2012-08-16 at 18:10 +0100, Stefano Stabellini wrote:
>> >> >> On Thu, 16 Aug 2012, Jan Beulich wrote:
>> >> >> > >>> On 16.08.12 at 17:54, "Jan Beulich" <JBeulich@suse.com> wrote:
>> >> >> > > Seeing the patch I btw realized that there's no easy way to
>> >> >> > > avoid having the type as a second argument in the conversion
>> >> >> > > macros. Nevertheless I still don't like the explicitly specified type
>> >> >> > > there.
>> >> >> > 
>> >> >> > Btw - on the architecture(s) where the two handles are identical
>> >> >> > I would prefer you to make the conversion functions trivial (and
>> >> >> > thus avoid making use of the "type" parameter), thus allowing
>> >> >> > the type checking to occur that you currently circumvent.
>> >> >> 
>> >> >> OK, I can do that.
>> >> > 
>> >> > Will this result in the type parameter potentially becoming stale?
>> >> > 
>> >> > Adding a redundant pointer compare is a good way to get the compiler to
>> >> > catch this. Smth like;
>> >> > 
>> >> >         /* Cast a XEN_GUEST_HANDLE_PARAM to XEN_GUEST_HANDLE */
>> >> >         #define guest_handle_from_param(hnd, type) ({
>> >> >             typeof((hnd).p) _x = (hnd).p;
>> >> >             XEN_GUEST_HANDLE(type) _y;
>> >> >             &_y == &_x;
>> >> >             hnd;
>> >> >          })
>> >> 
>> >> Ah yes, that's a good suggestion.
>> >> 
>> >> > I'm not sure which two pointers of members of the various structs need
>> >> > to be compared, maybe it's actually &_y.p and &hnd.p, but you get the
>> >> > idea...
>> >> 
>> >> Right, comparing (hnd).p with _y.p would be the right thing; no
>> >> need for _x, but some other (mechanical) adjustments would be
>> >> necessary.
>> > 
>> > The _x variable is still useful to avoid multiple evaluations of hnd,
>> > even though I know that this is not a public header.
>> 
>> But we had settled on returning hnd unmodified when both
>> handle types are the same.
>> 
>> > What about the following:
>> > 
>> > /* Cast a XEN_GUEST_HANDLE to XEN_GUEST_HANDLE_PARAM */
>> > #define guest_handle_to_param(hnd, type) ({                \
>> >     typeof((hnd).p) _x = (hnd).p;                          \
>> >     XEN_GUEST_HANDLE_PARAM(type) _y = { _x };              \
>> >     if (&_x != &_y.p) BUG();                               \
>> >     _y;                                                    \
>> > })
>> 
>> Since this is not a public header, something like this (untested,
>> so may not compile as is)
>> 
>> #define guest_handle_to_param(hnd, type) ({                \
>>     (void)(typeof((hnd).p)0 == (XEN_GUEST_HANDLE_PARAM(type){}).p); \
>>     (hnd);                                                    \
>> })
>> 
>> is what I was thinking of.
>> 
> 
> this is how it would look:
> 
> #define guest_handle_to_param(hnd, type) ({                  \
>     /* type checking: make sure that the pointers inside     \
>      * XEN_GUEST_HANDLE and XEN_GUEST_HANDLE_PARAM are of    \
>      * the same type, than return hnd */                     \
>     (void)((typeof(&(hnd).p)) 0 ==                           \
>         (typeof(&((XEN_GUEST_HANDLE_PARAM(type)) {}).p)) 0); \
>     (hnd);                                                   \
> })
> 
> 
> Honestly I have very rarely seen anything less readable, but at least it is
> very compact.
> For ARM I was going to go with the following, which is only slightly more
> readable:
> 
> #define guest_handle_to_param(hnd, type) ({                  \
>     typeof((hnd).p) _x = (hnd).p;                            \
>     XEN_GUEST_HANDLE_PARAM(type) _y = { _x };                \
>     /* type checking: make sure that the pointers inside     \
>      * XEN_GUEST_HANDLE and XEN_GUEST_HANDLE_PARAM are of    \
>      * the same type, than return hnd */                     \
>     (void)(&_x == &_y.p);                                    \
>     _y;                                                      \
> })

Yes, this looks good now to me (perhaps correct the minor
spelling mistake - should be "then", not "than" I think).

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 15:06:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 15:06:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2O7V-0004WG-W7; Fri, 17 Aug 2012 15:05:53 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T2O7U-0004Vv-6l
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 15:05:52 +0000
Received: from [85.158.139.83:56888] by server-8.bemta-5.messagelabs.com id
	4F/CB-02481-FCD5E205; Fri, 17 Aug 2012 15:05:51 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-182.messagelabs.com!1345215950!28156207!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk0MzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25496 invoked from network); 17 Aug 2012 15:05:50 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-13.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 15:05:50 -0000
X-IronPort-AV: E=Sophos;i="4.77,785,1336348800"; d="scan'208";a="14062839"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 15:05:50 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 17 Aug 2012 16:05:50 +0100
Message-ID: <1345215949.10161.76.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Fri, 17 Aug 2012 16:05:49 +0100
In-Reply-To: <20515.49901.369414.638893@mariner.uk.xensource.com>
References: <1343657037-28495-1-git-send-email-ian.campbell@citrix.com>
	<1344506462.32142.96.camel@zakaz.uk.xensource.com>
	<20515.49901.369414.638893@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [DOCDAY PATCH] docs: initial documentation for
	xenstore paths
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-09 at 15:02 +0100, Ian Jackson wrote:
> Ian Campbell writes ("Re: [DOCDAY PATCH] docs: initial documentation for xenstore paths"):
> ...
> > > --- a/docs/misc/xenstore-paths.markdown
> > > +++ b/docs/misc/xenstore-paths.markdown
> > > @@ -0,0 +1,294 @@
> ...
> > > +PATH can contain simple regex constructs following the POSIX regexp
> > > +syntax described in regexp(7). In addition the following additional
> > > +wild card names are defined and are evaluated before regexp expansion:
> 
> Can we use a restricted perl re syntax ?  That avoids weirdness with
> the rules for \.

Is "restricted perl re syntax" a well defined thing (reference?) or do
you just mean perlre(1)--?

What's the weirdness with \.?

> Also how does this interact with markdown ?

The html version looks ok after a brief inspection.

> > > +#### ~/image/device-model-pid = INTEGER   [r]
> 
> This [r] tag is not defined above.  I assume you mean "readonly to the
> domain" but that's the default.  Left over from an earlier version ?

Yes, it's vestigial. Remove it.

> 
> > > +The process ID of the device model associated with this domain, if it
> > > +has one.
> > > +
> > > +XXX why is this visible to the guest?
> 
> I think some of these things were put here just because there wasn't
> another place for the toolstack to store things.  See also the
> arbitrary junk stored by scripts in the device backend directories.

Should we define a proper home for these? e.g. /$toolstack/$domid?

> > > +#### ~/cpu/[0-9]+/availability = ("online"|"offline") [PV]
> > > +
> > > +One node for each virtual CPU up to the guest's configured
> > > +maximum. Valid values are "online" and "offline". 
> 
> Should have a cross-reference to the cpu online/offline protocol,
> which appears to be in xen/include/public/vcpu.h.  It doesn't seem to
> be fully documented yet.

vcpu.h has the hypercalls which are the mechanism by which a guest
brings a cpu up/down but nothing on the xenstore protocol which might
cause it to do so.

I don't think a reference currently exists for that protocol. This
probably belongs in the same (non-existent) protocol doc as
~/control/shutdown in so much as it is a toolstack<->guest kernel
protocol.

> > > +#### ~/memory/static-max = MEMKB []
> > > +
> > > +Specifies a static maximum amount memory which this domain should
> > > +expect to be given. In the absence of in-guest memory hotplug support
> > > +this set on domain boot and is usually the maximum amount of RAM which
> > > +a guest can make use of .
> 
> This should have a cross-reference to the documentation defining
> static-max etc.  I thought we had some in tree but I can't seem to
> find it.  The best I can find is docs/man/xl.cfg.pod.5.

I think you might be thinking of tools/libxl/libxl_memory.txt.

Shall we move that doc to docs/misc?

> 
> > > +#### ~/memory/target = MEMKB []
> > > +
> > > +The current balloon target for the domain. The balloon driver within the guest is expected to make every effort 
> 
> every effort to ... ?

Err, yes. I appear to have got distracted there ...

Perhaps:

        every effort to ... reach this target

? but I'm not sure that is strictly correct; a guest can use less if it
wants to. So perhaps

        every effort to ... not use more than this
        
? seems clumsy though.

> 
> The interaction with the Xen maximum should be stated, preferably by
> cross-reference.  In general it might be better to have a single place
> where all these values and their semantics are written down ?
> 
> > > +#### ~/device/suspend/event-channel = ""|EVTCHN [w]
> > > +
> > > +The domain's suspend event channel. The use of a suspend event channel
> > > +is optional at the domain's discression. If it is not used then this
> > > +path will be left blank.
> 
> May it be ENOENT ?  Does the toolstack create it as "" then ?

libxl seems to *mkdir* it:
    libxl__xs_mkdir(gc, t,
                    libxl__sprintf(gc, "%s/device/suspend/event-channel", dom_path),
                    rwperm, ARRAY_SIZE(rwperm));

which I suppose is the same as writing it as "" (unless there is some
subtle xenstore semantic difference I'm not thinking of).

If xend writes this key then I can't find it. I rather suspect the
~/device/suspend is guest writeable in that case (but I can't find that
either).

While grepping around I noticed xs_suspend_evtchn_port which reads this.
Seems like an odd place for it...

> 
> > > +#### ~/device/serial/$DEVID/* [HVM]
> > > +
> > > +An emulated serial device
> 
> You should presumably add
>     XXX documentation for the protocol needed
> here.

I think this is in docs/misc/console.txt along with the PV stuff, so
I've added that as a reference.

> 
> > > +#### ~/store/port = EVTCHN []
> > > +
> > > +The event channel used by the domains connection to XenStore.
> 
> Apostrophe.
> 
> > > +XXX why is this exposed to the guest?
> 
> Is there really only one event channel ?  Ie the same evtchn is used
> to signal to xenstore that the guest has sent a command, and to signal
> the guest that xenstore has written the response ?

Yes, event channels are bidirectional so that's quite common.

> Anyway surely this is something the guest needs to know.  Why it's in
> xenstore is a bit of a mystery since you can't use xenstore without it
> and it's in the start_info.

I should have written "why is this exposed to the guest via xenstore?"

> Is this the same value as start_info.store_evtchn ?  Cross reference ?

I'd be semi-inclined to ditch/deprecate it unless we can figure out what
it is for -- as you say there is something of a chicken and egg problem
with using it.

> 
> > > +#### ~/store/ring-ref = GNTREF []
> > > +
> > > +The grant reference of the domain's XenStore ring.
> > > +
> > > +XXX why is this exposed to the guest?
> 
> See above.

Yup, the same issues.

> > > +#### ~/device-model/$DOMID/* []
> > > +
> > > +Information relating to device models running in the domain. $DOMID is
> > > +the target domain of the device model.
> > > +
> > > +XXX where is the contents of this directory specified?
> 
> I think it's not specified anywhere.  It's ad-hoc.  The guest
> shouldn't need to see it but exposing it readonly is probably
> harmless.  Except perhaps for the vnc password ?

vnc password appears to go into /vm/$uuid/vncpass (looking at libxl code
only).

AFAIK it does nothing special with the perms, but /vm/$uuid is not guest
readable (perms are "n0") so I think that works out ok.

I wonder if that's part of the point of /vm/$uuid.

> > > +### /vm/$UUID/uuid = UUID []
> > > +
> > > +Value is the same UUID as the path.
> > > +
> > > +### /vm/$UUID/name = STRING []
> > > +
> > > +The domains name.
> 
> IMO this should be
>   (a) in /local/domain/$DOMID
>   (b) also a copy in /byname/$NAME = $DOMID   for fast lookup
> but not in 4.2.
> 
> Guests shouldn't rely on it.  In fact do guests actually need anything
> from here ?

I'd say definitely not, but it has existed with xend for many years so
I'd be surprised if something hadn't crept in somewhere :-(

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 15:06:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 15:06:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2O7V-0004WG-W7; Fri, 17 Aug 2012 15:05:53 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T2O7U-0004Vv-6l
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 15:05:52 +0000
Received: from [85.158.139.83:56888] by server-8.bemta-5.messagelabs.com id
	4F/CB-02481-FCD5E205; Fri, 17 Aug 2012 15:05:51 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-182.messagelabs.com!1345215950!28156207!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk0MzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25496 invoked from network); 17 Aug 2012 15:05:50 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-13.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 15:05:50 -0000
X-IronPort-AV: E=Sophos;i="4.77,785,1336348800"; d="scan'208";a="14062839"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 15:05:50 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 17 Aug 2012 16:05:50 +0100
Message-ID: <1345215949.10161.76.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Fri, 17 Aug 2012 16:05:49 +0100
In-Reply-To: <20515.49901.369414.638893@mariner.uk.xensource.com>
References: <1343657037-28495-1-git-send-email-ian.campbell@citrix.com>
	<1344506462.32142.96.camel@zakaz.uk.xensource.com>
	<20515.49901.369414.638893@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [DOCDAY PATCH] docs: initial documentation for
	xenstore paths
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-09 at 15:02 +0100, Ian Jackson wrote:
> Ian Campbell writes ("Re: [DOCDAY PATCH] docs: initial documentation for xenstore paths"):
> ...
> > > --- a/docs/misc/xenstore-paths.markdown
> > > +++ b/docs/misc/xenstore-paths.markdown
> > > @@ -0,0 +1,294 @@
> ...
> > > +PATH can contain simple regex constructs following the POSIX regexp
> > > +syntax described in regexp(7). In addition the following additional
> > > +wild card names are defined and are evaluated before regexp expansion:
> 
> Can we use a restricted perl re syntax ?  That avoids weirdness with
> the rules for \.

Is "restricted perl re syntax" a well defined thing (reference?) or do
you just mean perlre(1)--?

What's the weirdness with \.?

> Also how does this interact with markdown ?

The html version looks ok after a brief inspection.

> > > +#### ~/image/device-model-pid = INTEGER   [r]
> 
> This [r] tag is not defined above.  I assume you mean "readonly to the
> domain" but that's the default.  Left over from an earlier version ?

Yes, it's vestigial. Remove it.

> 
> > > +The process ID of the device model associated with this domain, if it
> > > +has one.
> > > +
> > > +XXX why is this visible to the guest?
> 
> I think some of these things were put here just because there wasn't
> another place for the toolstack to store things.  See also the
> arbitrary junk stored by scripts in the device backend directories.

Should we define a proper home for these? e.g. /$toolstack/$domid?

> > > +#### ~/cpu/[0-9]+/availability = ("online"|"offline") [PV]
> > > +
> > > +One node for each virtual CPU up to the guest's configured
> > > +maximum. Valid values are "online" and "offline". 
> 
> Should have a cross-reference to the cpu online/offline protocol,
> which appears to be in xen/include/public/vcpu.h.  It doesn't seem to
> be fully documented yet.

vcpu.h has the hypercalls which are the mechanism by which a guest
brings a cpu up/down but nothing on the xenstore protocol which might
cause it to do so.

I don't think a reference currently exists for that protocol. This
probably belongs in the same (non-existent) protocol doc as
~/control/shutdown in so much as it is a toolstack<->guest kernel
protocol.

> > > +#### ~/memory/static-max = MEMKB []
> > > +
> > > +Specifies a static maximum amount memory which this domain should
> > > +expect to be given. In the absence of in-guest memory hotplug support
> > > +this set on domain boot and is usually the maximum amount of RAM which
> > > +a guest can make use of .
> 
> This should have a cross-reference to the documentation defining
> static-max etc.  I thought we had some in tree but I can't seem to
> find it.  The best I can find is docs/man/xl.cfg.pod.5.

I think you might be thinking of tools/libxl/libxl_memory.txt.

Shall we move that doc to docs/misc?

> 
> > > +#### ~/memory/target = MEMKB []
> > > +
> > > +The current balloon target for the domain. The balloon driver within the guest is expected to make every effort 
> 
> every effort to ... ?

err. yes. I appear  to have got distracted there ...

Perhaps:

        every effort to ... reach this target

? but I'm not sure that is strictly correct, a guest can use less if it
wants to. So perhaps

        every effort to ... not use more than this
        
? seems clumsy though.

> 
> The interaction with the Xen maximum should be stated, preferably by
> cross-reference.  In general it might be better to have a single place
> where all these values and their semantics are written down ?
> 
> > > +#### ~/device/suspend/event-channel = ""|EVTCHN [w]
> > > +
> > > +The domain's suspend event channel. The use of a suspend event channel
> > > +is optional at the domain's discression. If it is not used then this
> > > +path will be left blank.
> 
> May it be ENOENT ?  Does the toolstack create it as "" then ?

libxl seems to *mkdir* it:
    libxl__xs_mkdir(gc, t,
                    libxl__sprintf(gc, "%s/device/suspend/event-channel", dom_path),
                    rwperm, ARRAY_SIZE(rwperm));

which I suppose is the same as writing it as "" (unless there is some
subtle xenstore semantic difference I'm not thinking of)

If xend writes this key then I can't find it. I rather suspect that
~/device/suspend is guest writeable in that case (but I can't find that
either).

While grepping around I noticed xs_suspend_evtchn_port which reads this.
Seems like an odd place for it...

> 
> > > +#### ~/device/serial/$DEVID/* [HVM]
> > > +
> > > +An emulated serial device
> 
> You should presumably add
>     XXX documentation for the protocol needed
> here.

I think this is in docs/misc/console.txt along with the PV stuff, so
I've added that as a reference.

> 
> > > +#### ~/store/port = EVTCHN []
> > > +
> > > +The event channel used by the domains connection to XenStore.
> 
> Apostrophe.
> 
> > > +XXX why is this exposed to the guest?
> 
> Is there really only one event channel ?  Ie the same evtchn is used
> to signal to xenstore that the guest has sent a command, and to signal
> the guest that xenstore has written the response ?

Yes, event channels are bidirectional so that's quite common.

> Anyway surely this is something the guest needs to know.  Why it's in
> xenstore is a bit of a mystery since you can't use xenstore without it
> and it's in the start_info.

I should have written "why is this exposed to the guest via xenstore?"

> Is this the same value as start_info.store_evtchn ?  Cross reference ?

I'd be semi-inclined to ditch/deprecate it unless we can figure out what
it is for -- as you say, there is something of a chicken-and-egg problem
with using it.

> 
> > > +#### ~/store/ring-ref = GNTREF []
> > > +
> > > +The grant reference of the domain's XenStore ring.
> > > +
> > > +XXX why is this exposed to the guest?
> 
> See above.

Yup, the same issues.

> > > +#### ~/device-model/$DOMID/* []
> > > +
> > > +Information relating to device models running in the domain. $DOMID is
> > > +the target domain of the device model.
> > > +
> > > +XXX where is the contents of this directory specified?
> 
> I think it's not specified anywhere.  It's ad-hoc.  The guest
> shouldn't need to see it but exposing it readonly is probably
> harmless.  Except perhaps for the vnc password ?

vnc password appears to go into /vm/$uuid/vncpass (looking at libxl code
only).

AFAIK it does nothing special with the perms, but /vm/$uuid is not guest
readable (perms are "n0") so I think that works out ok.

I wonder if that's part of the point of /vm/$uuid.

> > > +### /vm/$UUID/uuid = UUID []
> > > +
> > > +Value is the same UUID as the path.
> > > +
> > > +### /vm/$UUID/name = STRING []
> > > +
> > > +The domains name.
> 
> IMO this should be
>   (a) in /local/domain/$DOMID
>   (b) also a copy in /byname/$NAME = $DOMID   for fast lookup
> but not in 4.2.
> 
> Guests shouldn't rely on it.  In fact do guests actually need anything
> from here ?

I'd say definitely not, but it has existed with xend for many years so
I'd be surprised if something hadn't crept in somewhere :-(

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 15:07:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 15:07:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2O9G-0004f1-KX; Fri, 17 Aug 2012 15:07:42 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T2O9F-0004es-7z
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 15:07:41 +0000
Received: from [85.158.143.35:35502] by server-3.bemta-4.messagelabs.com id
	09/BA-09529-C3E5E205; Fri, 17 Aug 2012 15:07:40 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1345216023!15942248!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk0MzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9219 invoked from network); 17 Aug 2012 15:07:04 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 15:07:04 -0000
X-IronPort-AV: E=Sophos;i="4.77,785,1336348800"; d="scan'208";a="14062858"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 15:06:33 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 17 Aug 2012 16:06:33 +0100
Message-ID: <1345215992.10161.77.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Fri, 17 Aug 2012 16:06:32 +0100
In-Reply-To: <502E78010200007800095F56@nat28.tlf.novell.com>
References: <alpine.DEB.2.02.1208161523420.4850@kaball.uk.xensource.com>
	<1345128612-10323-4-git-send-email-stefano.stabellini@eu.citrix.com>
	<502D33B8020000780009596B@nat28.tlf.novell.com>
	<502D37D702000078000959F7@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208161801210.15568@kaball.uk.xensource.com>
	<1345190532.30865.67.camel@zakaz.uk.xensource.com>
	<502E2F090200007800095D62@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208171443580.15568@kaball.uk.xensource.com>
	<502E6A200200007800095E9A@nat28.tlf.novell.com>
	<1345215020.10161.64.camel@zakaz.uk.xensource.com>
	<502E78010200007800095F56@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: xen-devel <xen-devel@lists.xen.org>, "Tim \(Xen.org\)" <tim@xen.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH v3 4/6] xen: introduce XEN_GUEST_HANDLE_PARAM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-08-17 at 15:57 +0100, Jan Beulich wrote:
> >>> On 17.08.12 at 16:50, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > On Fri, 2012-08-17 at 14:58 +0100, Jan Beulich wrote:
> >> >>> On 17.08.12 at 15:47, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> >> wrote:
> >> > On Fri, 17 Aug 2012, Jan Beulich wrote:
> >> >> >>> On 17.08.12 at 10:02, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> >> >> > On Thu, 2012-08-16 at 18:10 +0100, Stefano Stabellini wrote:
> >> >> >> On Thu, 16 Aug 2012, Jan Beulich wrote:
> >> >> >> > >>> On 16.08.12 at 17:54, "Jan Beulich" <JBeulich@suse.com> wrote:
> >> >> >> > > Seeing the patch I btw realized that there's no easy way to
> >> >> >> > > avoid having the type as a second argument in the conversion
> >> >> >> > > macros. Nevertheless I still don't like the explicitly specified type
> >> >> >> > > there.
> >> >> >> > 
> >> >> >> > Btw - on the architecture(s) where the two handles are identical
> >> >> >> > I would prefer you to make the conversion functions trivial (and
> >> >> >> > thus avoid making use of the "type" parameter), thus allowing
> >> >> >> > the type checking to occur that you currently circumvent.
> >> >> >> 
> >> >> >> OK, I can do that.
> >> >> > 
> >> >> > Will this result in the type parameter potentially becoming stale?
> >> >> > 
> >> >> > Adding a redundant pointer compare is a good way to get the compiler to
> >> >> > catch this. Smth like;
> >> >> > 
> >> >> >         /* Cast a XEN_GUEST_HANDLE_PARAM to XEN_GUEST_HANDLE */
> >> >> >         #define guest_handle_from_param(hnd, type) ({
> >> >> >             typeof((hnd).p) _x = (hnd).p;
> >> >> >             XEN_GUEST_HANDLE(type) _y;
> >> >> >             &_y == &_x;
> >> >> >             hnd;
> >> >> >          })
> >> >> 
> >> >> Ah yes, that's a good suggestion.
> >> >> 
> >> >> > I'm not sure which two pointers of members of the various structs need
> >> >> > to be compared, maybe it's actually &_y.p and &hnd.p, but you get the
> >> >> > idea...
> >> >> 
> >> >> Right, comparing (hnd).p with _y.p would be the right thing; no
> >> >> need for _x, but some other (mechanical) adjustments would be
> >> >> necessary.
> >> > 
> >> > The _x variable is still useful to avoid multiple evaluations of hnd,
> >> > even though I know that this is not a public header.
> >> 
> >> But we had settled on returning hnd unmodified when both
> >> handle types are the same.
> >> 
> >> > What about the following:
> >> > 
> >> > /* Cast a XEN_GUEST_HANDLE to XEN_GUEST_HANDLE_PARAM */
> >> > #define guest_handle_to_param(hnd, type) ({                \
> >> >     typeof((hnd).p) _x = (hnd).p;                          \
> >> >     XEN_GUEST_HANDLE_PARAM(type) _y = { _x };              \
> >> >     if (&_x != &_y.p) BUG();                               \
> >> >     _y;                                                    \
> >> > })
> >> 
> >> Since this is not a public header, something like this (untested,
> >> so may not compile as is)
> >> 
> >> #define guest_handle_to_param(hnd, type) ({                \
> >>     (void)(typeof((hnd).p)0 == (XEN_GUEST_HANDLE_PARAM(type){}).p); \
> >>     (hnd);                                                    \
> >> })
> >> 
> >> is what I was thinking of.
> > 
> > This evaluates hnd twice, or do we only care about that in public
> > headers for some reason? (personally I think the principle of least surprise
> > suggests avoiding it wherever possible)
> 
> No, it doesn't - like sizeof(), typeof() doesn't evaluate its
> argument.

Right, of course, silly me.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 15:12:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 15:12:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2ODv-0004sH-Dj; Fri, 17 Aug 2012 15:12:31 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1T2ODt-0004sC-QA
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 15:12:29 +0000
Received: from [85.158.143.35:59043] by server-3.bemta-4.messagelabs.com id
	B9/B2-09529-D5F5E205; Fri, 17 Aug 2012 15:12:29 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-11.tower-21.messagelabs.com!1345216305!12802809!1
X-Originating-IP: [81.169.146.162]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MiA9PiAzNzQyMzU=\n,sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MiA9PiAzNzQyMzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14271 invoked from network); 17 Aug 2012 15:11:46 -0000
Received: from mo-p00-ob.rzone.de (HELO mo-p00-ob.rzone.de) (81.169.146.162)
	by server-11.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Aug 2012 15:11:46 -0000
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+zrwiavkK6tmQaLfmwtM48/lr3c7pEgsE
X-RZG-CLASS-ID: mo00
Received: from probook.site
	(dslb-084-057-064-208.pools.arcor-ip.net [84.57.64.208])
	by smtp.strato.de (jored mo36) (RZmta 30.9 DYNA|AUTH)
	with (DHE-RSA-AES256-SHA encrypted) ESMTPA id x01cb3o7HEErLC ;
	Fri, 17 Aug 2012 17:11:39 +0200 (CEST)
Received: by probook.site (Postfix, from userid 1000)
	id 3AF891836E; Fri, 17 Aug 2012 17:11:37 +0200 (CEST)
Date: Fri, 17 Aug 2012 17:11:36 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Keir Fraser <keir@xen.org>
Message-ID: <20120817151136.GA25138@aepfle.de>
References: <4FD881CB0200007800089ADB@nat28.tlf.novell.com>
	<CC04F47B.431CE%keir@xen.org>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CC04F47B.431CE%keir@xen.org>
User-Agent: Mutt/1.5.21.rev5555 (2012-07-20)
Cc: Jan Beulich <JBeulich@suse.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] x86-64: refine the XSA-9 fix
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Jun 18, Keir Fraser wrote:

> On 13/06/2012 11:04, "Jan Beulich" <JBeulich@suse.com> wrote:
> 
> > Our product management wasn't happy with the "solution" for XSA-9, and
> > demanded that customer systems must continue to boot. Rather than
> > having our and perhaps other distros carry non-trivial patches, allow
> > for more fine grained control (panic on boot, deny guest creation, or
> > merely warn) by means of a single line change.
> 
> All this seems to allow is to boot but not create domU-s. Which seems a bit
> pointless.

Refusing to boot into dom0 with no good reason is a good way to lose
remote control of a system without serial console. Not funny.

Fortunately I booted and tested with sles11 Xen first before ruining the
box with plain xen-unstable.

So, please apply this patch and remove the panic() from ./xen/arch/x86/cpu/amd.c

Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Fri Aug 17 15:31:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 15:31:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2OW7-000536-99; Fri, 17 Aug 2012 15:31:19 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <gbtju85@gmail.com>) id 1T2OW5-000531-Dw
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 15:31:17 +0000
Received: from [85.158.143.35:24987] by server-2.bemta-4.messagelabs.com id
	23/1E-31966-4C36E205; Fri, 17 Aug 2012 15:31:16 +0000
X-Env-Sender: gbtju85@gmail.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1345217472!14602871!1
X-Originating-IP: [209.85.215.45]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30754 invoked from network); 17 Aug 2012 15:31:12 -0000
Received: from mail-lpp01m010-f45.google.com (HELO
	mail-lpp01m010-f45.google.com) (209.85.215.45)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 15:31:12 -0000
Received: by lagz14 with SMTP id z14so2382380lag.32
	for <xen-devel@lists.xen.org>; Fri, 17 Aug 2012 08:31:11 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=eAZigW7InVrb9GIfDxmpr57i4O40Q8N0RZ/P9HKB1Lw=;
	b=mRGIYjduaa3CoNwqmOwF6m80sWUfb52uwRo2RdhW/s7MeE9h5OXYxQpdKM72BH6kL1
	bjBfHSzM0NL4m3jGJ6ukJvr0lYHzPk/ESI51FpJdzpxWThJQ/poLt6roHNqD2yY9p5Nv
	hL9MY5JlhMYlJrdRWXCAjYZP3mQx7YqdxuInn7GtXfZ+bo+QuCmR5VmmNNuuZYtaR2SV
	t2mzolnuOmNEcc5Dj5r+pWt7EN6Rz2DFdkBcSICvz0otTmgU67V98cucWSmzVaZJwSXP
	xc6i4cE3BfgiFmQIQMnbKfgScYh3U7beMVBGhwZ1Vx5DYV5/Vr0GYQc0CzwTogmdqlGa
	KzWw==
MIME-Version: 1.0
Received: by 10.112.85.200 with SMTP id j8mr2426122lbz.41.1345217471735; Fri,
	17 Aug 2012 08:31:11 -0700 (PDT)
Received: by 10.114.2.193 with HTTP; Fri, 17 Aug 2012 08:31:11 -0700 (PDT)
In-Reply-To: <1344441333.32142.48.camel@zakaz.uk.xensource.com>
References: <CAEQjb-Rd2=DaxrxiK2TYzNNBH01w_5OgPeKxugpS26n4tGw4Yg@mail.gmail.com>
	<1344439371.32142.46.camel@zakaz.uk.xensource.com>
	<CAEQjb-T9Jg_9EjHi7fKUABBW3_PGNYgjU+cVGTLZ-ZiLA29AqA@mail.gmail.com>
	<1344441333.32142.48.camel@zakaz.uk.xensource.com>
Date: Fri, 17 Aug 2012 23:31:11 +0800
Message-ID: <CAEQjb-QvWYJtQu1vWJatitOgk6mA4TuUzFX2EdWSX3F3w6jGpQ@mail.gmail.com>
From: Bei Guan <gbtju85@gmail.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] How to initialize the grant table in a HVM guest OS
 and its bios
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2152305181264139810=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2152305181264139810==
Content-Type: multipart/alternative; boundary=bcaec554d60c8c691104c777d9ec

--bcaec554d60c8c691104c777d9ec
Content-Type: text/plain; charset=ISO-8859-1

2012/8/8 Ian Campbell <Ian.Campbell@citrix.com>

> On Wed, 2012-08-08 at 16:48 +0100, Bei Guan wrote:
>
> > Thank you very much for your help.
> > Is there any example code of initialization of grant table in HVM that
> > I can refer to?
>
> The PVHVM support in upstream Linux would be a good place to look.
>
> So might the code in the xen tree in unmodified_drivers/linux-2.6/
>
> IIRC Daniel got grant tables working in SeaBIOS last summer for GSoC so
> you might also find some useful examples in
> git://github.com/evildani/seabios_patch.git

Hi Ian,

Thank you very much for this information. It's very useful to me.

However, I'm still confused about the initialization of the grant table
in HVM.

The call chain for grant table initialization in the Linux source code
(drivers/xen) is:
platform_pci_init() --> gnttab_init() --> gnttab_resume() --> gnttab_map() --> arch_gnttab_map_shared() --> apply_to_page_range().

So, I am not sure what the function apply_to_page_range(), which is
implemented in code file [1], actually does. It is a little complex. Is
there a simpler way to do this? Thank you for your time.

[1] http://lxr.free-electrons.com/source/mm/memory.c#L2412


Best Regards,
Bei Guan

--bcaec554d60c8c691104c777d9ec--


--===============2152305181264139810==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2152305181264139810==--



From xen-devel-bounces@lists.xen.org Fri Aug 17 15:44:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 15:44:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2Oj1-0005NG-Jp; Fri, 17 Aug 2012 15:44:39 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T2Oiz-0005NB-Ob
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 15:44:37 +0000
Received: from [85.158.143.99:43696] by server-3.bemta-4.messagelabs.com id
	17/FB-09529-5E66E205; Fri, 17 Aug 2012 15:44:37 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-216.messagelabs.com!1345218276!18952320!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk0MzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12706 invoked from network); 17 Aug 2012 15:44:36 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 15:44:36 -0000
X-IronPort-AV: E=Sophos;i="4.77,785,1336348800"; d="scan'208";a="14063513"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 15:44:36 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 17 Aug 2012 16:44:36 +0100
Message-ID: <1345218274.10161.86.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Bei Guan <gbtju85@gmail.com>
Date: Fri, 17 Aug 2012 16:44:34 +0100
In-Reply-To: <CAEQjb-QvWYJtQu1vWJatitOgk6mA4TuUzFX2EdWSX3F3w6jGpQ@mail.gmail.com>
References: <CAEQjb-Rd2=DaxrxiK2TYzNNBH01w_5OgPeKxugpS26n4tGw4Yg@mail.gmail.com>
	<1344439371.32142.46.camel@zakaz.uk.xensource.com>
	<CAEQjb-T9Jg_9EjHi7fKUABBW3_PGNYgjU+cVGTLZ-ZiLA29AqA@mail.gmail.com>
	<1344441333.32142.48.camel@zakaz.uk.xensource.com>
	<CAEQjb-QvWYJtQu1vWJatitOgk6mA4TuUzFX2EdWSX3F3w6jGpQ@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] How to initialize the grant table in a HVM guest OS
 and its bios
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-08-17 at 16:31 +0100, Bei Guan wrote:
> 
> 2012/8/8 Ian Campbell <Ian.Campbell@citrix.com>
>         On Wed, 2012-08-08 at 16:48 +0100, Bei Guan wrote:
>         
>         > Thank you very much for your help.
>         > Is there any example code of initialization of grant table
>         in HVM that
>         > I can refer to?
>         
>         
>         The PVHVM support in upstream Linux would be a good place to
>         look.
>         
>         So might the code in the xen tree in
>         unmodified_drivers/linux-2.6/
>         
>         IIRC Daniel got grant tables working in SeaBIOS last summer
>         for GSoC so
>         you might also find some useful examples in
>         git://github.com/evildani/seabios_patch.git
> Hi Ian,
> 
> 
> Thank you very much for this information. It's very useful to me.
> 
> 
> However, I'm still confused with the initialization of the grant table
> in HVM. 
> 
> 
> The relationship of the methods in the initialization of the grant
> table in linux source code (driver/xen) is:
> platform_pci_init()-->gnttab_init()-->gnttab_resume()-->gnttab_map()-->arch_gnttab_map_shared()-->apply_to_page_range().
> 
> 
> So, I am not sure that what's the function of the method
> apply_to_page_range(), which is implemented in code file [1]. 
> This function is a little complex. Is there any simple method to do
> this? Thank you for your time.

This function is the simple method ;-)

All it basically does is iterate over the page tables corresponding to a
range of addresses and call a user-supplied function on each leaf PTE. In
the case of gnttab_map this user-supplied function simply sets the leaf
PTEs to point to the right grant table page.

I suppose you are working on tianocore? I've no idea what the page table
layout is in that environment; I suppose it either has a linear map or
some other way of getting at the leaf PTEs. Anyway, since the method to
use is specific to the OS (or firmware) environment you are running in, I
think you'll have to ask on the tianocore development list.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 15:48:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 15:48:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2Omt-0005Ug-8E; Fri, 17 Aug 2012 15:48:39 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1T2Oms-0005Ua-GV
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 15:48:38 +0000
Received: from [85.158.143.99:6959] by server-1.bemta-4.messagelabs.com id
	B9/36-07754-5D76E205; Fri, 17 Aug 2012 15:48:37 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-14.tower-216.messagelabs.com!1345218517!19104616!1
X-Originating-IP: [74.125.83.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18261 invoked from network); 17 Aug 2012 15:48:37 -0000
Received: from mail-ee0-f45.google.com (HELO mail-ee0-f45.google.com)
	(74.125.83.45)
	by server-14.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 15:48:37 -0000
Received: by eeke53 with SMTP id e53so1106386eek.32
	for <xen-devel@lists.xen.org>; Fri, 17 Aug 2012 08:48:37 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:cc:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=p6l2jnzTH7zz/xvQyYFWbJ+GkMoCE0QSu1zwcLwUvVw=;
	b=EirfW9uZ83HQMOBlSIvW3DjBVrCvBOfIBpE7nfHUijp5OVU5hgFEVgW1eDYMvpeMVN
	mWGDmoC0lRpYxgXQbb9pqfC2vwuHjSRZUTmkJ96y1fbp2HEo/RZhR1HFo75aQi5Yrvbq
	hQEO93L93DdTQnSUJfleKWMaV/3mG071yMHZS8izwc8PornK8tZscAJBOYwD5SaHCb8D
	7izV7CVdJtmm6UlQm5ajASzKxwRChUM7gRvrb5yFlfbW6zSSlByOvi3CJQ5o8SaBCnvd
	+XIphDbCj75loI8mv48lqXvWahqgmnpYVccrgvZEMc9ms316kCr2sf1odfw3acymGYNO
	rodg==
Received: by 10.14.180.68 with SMTP id i44mr6984844eem.20.1345218516961;
	Fri, 17 Aug 2012 08:48:36 -0700 (PDT)
Received: from [192.168.1.68]
	(host86-157-166-190.range86-157.btcentralplus.com. [86.157.166.190])
	by mx.google.com with ESMTPS id z3sm18718903eel.15.2012.08.17.08.48.27
	(version=SSLv3 cipher=OTHER); Fri, 17 Aug 2012 08:48:36 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.32.0.111121
Date: Fri, 17 Aug 2012 16:48:22 +0100
From: Keir Fraser <keir.xen@gmail.com>
To: Olaf Hering <olaf@aepfle.de>,
	Keir Fraser <keir@xen.org>
Message-ID: <CC542656.3C452%keir.xen@gmail.com>
Thread-Topic: [Xen-devel] [PATCH] x86-64: refine the XSA-9 fix
Thread-Index: Ac18j7rQ2+dwkjB/P0WQYEAJ5jDPZQ==
In-Reply-To: <20120817151136.GA25138@aepfle.de>
Mime-version: 1.0
Cc: Jan Beulich <JBeulich@suse.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] x86-64: refine the XSA-9 fix
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 17/08/2012 16:11, "Olaf Hering" <olaf@aepfle.de> wrote:

> On Mon, Jun 18, Keir Fraser wrote:
> 
>> On 13/06/2012 11:04, "Jan Beulich" <JBeulich@suse.com> wrote:
>> 
>>> Our product management wasn't happy with the "solution" for XSA-9, and
>>> demanded that customer systems must continue to boot. Rather than
>>> having our and perhaps other distros carry non-trivial patches, allow
>>> for more fine grained control (panic on boot, deny guest creation, or
>>> merely warn) by means of a single line change.
>> 
>> All this seems to allow is to boot but not create domU-s. Which seems a bit
>> pointless.
> 
> Refusing to boot into dom0 with no good reason is a good way to lose
> remote control of a system without serial console. Not funny.
> 
> Fortunately I booted and tested with sles11 Xen first before ruining the
> box with plain xen-unstable.
> 
> So, please apply this patch and remove the panic() from
> ./xen/arch/x86/cpu/amd.c

Okay, that's a good argument for that patch.

 -- Keir

> Olaf



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 15:49:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 15:49:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2Onn-0005Ym-MZ; Fri, 17 Aug 2012 15:49:35 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1T2Onl-0005YX-W6
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 15:49:34 +0000
Received: from [85.158.139.83:63111] by server-12.bemta-5.messagelabs.com id
	99/9B-22359-D086E205; Fri, 17 Aug 2012 15:49:33 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-16.tower-182.messagelabs.com!1345218572!21353884!1
X-Originating-IP: [74.125.82.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11630 invoked from network); 17 Aug 2012 15:49:32 -0000
Received: from mail-we0-f173.google.com (HELO mail-we0-f173.google.com)
	(74.125.82.173)
	by server-16.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 15:49:32 -0000
Received: by weyz53 with SMTP id z53so3016847wey.32
	for <xen-devel@lists.xen.org>; Fri, 17 Aug 2012 08:49:32 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=TL8rEEKpAb1pRnAFcObYRLliM5bq4iAhg1/5vCwHb4I=;
	b=ISoa1iXXERYiZbUO4aJCodf78IG618MRBpIQ/R+ta9KPPN21mcnS7yeTNfr7wiyW+E
	BtZ9VtBHZBno8Y2UIXarqJow1Of+G7l4511bnMw3oNscisZfOIq5bU4G91Sj0OdGWy7D
	rZUCamyUFNkmoSrk55OZCjgngajE5M+IM7yLb92IP5ZkU+z3CJuDfp1iT510jV+bbqnB
	vxujGozVHYQNv4Bg+D4dEj3UGi5byohfy/addQkYJ77NMow/3wxFWGoilSEI3juDlCgn
	a4tAyVpA1GrH7DX5ZW6fx2mKM9YTWcmeHgUOSsFeQcq1E9m0xe/LBg+hdAK++XGkchzb
	r9Tw==
Received: by 10.180.104.200 with SMTP id gg8mr5947138wib.14.1345218572344;
	Fri, 17 Aug 2012 08:49:32 -0700 (PDT)
Received: from [192.168.1.68]
	(host86-157-166-190.range86-157.btcentralplus.com. [86.157.166.190])
	by mx.google.com with ESMTPS id cu1sm10399779wib.6.2012.08.17.08.49.30
	(version=SSLv3 cipher=OTHER); Fri, 17 Aug 2012 08:49:31 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.32.0.111121
Date: Fri, 17 Aug 2012 16:49:21 +0100
From: Keir Fraser <keir.xen@gmail.com>
To: Jan Beulich <JBeulich@suse.com>,
	xen-devel <xen-devel@lists.xen.org>
Message-ID: <CC542691.3C453%keir.xen@gmail.com>
Thread-Topic: [Xen-devel] [PATCH] x86-64: refine the XSA-9 fix
Thread-Index: Ac18j937ex7DCv2+KEmUpJTPD9zUNA==
In-Reply-To: <4FD881CB0200007800089ADB@nat28.tlf.novell.com>
Mime-version: 1.0
Subject: Re: [Xen-devel] [PATCH] x86-64: refine the XSA-9 fix
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 13/06/2012 11:04, "Jan Beulich" <JBeulich@suse.com> wrote:

> Our product management wasn't happy with the "solution" for XSA-9, and
> demanded that customer systems must continue to boot. Rather than
> having our and perhaps other distros carry non-trivial patches, allow
> for more fine grained control (panic on boot, deny guest creation, or
> merely warn) by means of a single line change.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Keir Fraser <keir@xen.org>

> --- a/xen/arch/x86/cpu/amd.c
> +++ b/xen/arch/x86/cpu/amd.c
> @@ -32,8 +32,11 @@
>  static char opt_famrev[14];
>  string_param("cpuid_mask_cpu", opt_famrev);
>  
> -static bool_t opt_allow_unsafe;
> +#ifdef __x86_64__
> +/* 1 = allow, 0 = don't allow guest creation, -1 = don't allow boot */
> +s8 __read_mostly opt_allow_unsafe = -1;
>  boolean_param("allow_unsafe", opt_allow_unsafe);
> +#endif
>  
>  static inline void wrmsr_amd(unsigned int index, unsigned int lo, unsigned int hi)
> @@ -496,10 +499,19 @@ static void __devinit init_amd(struct cp
> clear_bit(X86_FEATURE_MWAIT, c->x86_capability);
>  
>  #ifdef __x86_64__
> - if (cpu_has_amd_erratum(c, AMD_ERRATUM_121) && !opt_allow_unsafe)
> + if (!cpu_has_amd_erratum(c, AMD_ERRATUM_121))
> +  opt_allow_unsafe = 1;
> + else if (opt_allow_unsafe < 0)
> panic("Xen will not boot on this CPU for security reasons.\n"
>      "Pass \"allow_unsafe\" if you're trusting all your"
>      " (PV) guest kernels.\n");
> + else if (!opt_allow_unsafe && c == &boot_cpu_data)
> +  printk(KERN_WARNING
> +         "*** Xen will not allow creation of DomU-s on"
> +         " this CPU for security reasons. ***\n"
> +         KERN_WARNING
> +         "*** Pass \"allow_unsafe\" if you're trusting"
> +         " all your (PV) guest kernels. ***\n");
>  
> /* AMD CPUs do not support SYSENTER outside of legacy mode. */
> clear_bit(X86_FEATURE_SEP, c->x86_capability);
> --- a/xen/arch/x86/domain.c
> +++ b/xen/arch/x86/domain.c
> @@ -55,6 +55,7 @@
>  #include <asm/traps.h>
>  #include <asm/nmi.h>
>  #include <asm/mce.h>
> +#include <asm/amd.h>
>  #include <xen/numa.h>
>  #include <xen/iommu.h>
>  #ifdef CONFIG_COMPAT
> @@ -531,6 +532,20 @@ int arch_domain_create(struct domain *d,
>  
>  #else /* __x86_64__ */
>  
> +    if ( d->domain_id && !is_idle_domain(d) &&
> +         cpu_has_amd_erratum(&boot_cpu_data, AMD_ERRATUM_121) )
> +    {
> +        if ( !opt_allow_unsafe )
> +        {
> +            printk(XENLOG_G_ERR "Xen does not allow DomU creation on this CPU"
> +                   " for security reasons.\n");
> +            return -EPERM;
> +        }
> +        printk(XENLOG_G_WARNING
> +               "Dom%d may compromise security on this CPU.\n",
> +               d->domain_id);
> +    }
> +
>      BUILD_BUG_ON(PDPT_L2_ENTRIES * sizeof(*d->arch.mm_perdomain_pt_pages)
>                   != PAGE_SIZE);
>      pg = alloc_domheap_page(NULL, MEMF_node(domain_to_node(d)));
> --- a/xen/include/asm-x86/amd.h
> +++ b/xen/include/asm-x86/amd.h
> @@ -147,6 +147,8 @@ struct cpuinfo_x86;
>  int cpu_has_amd_erratum(const struct cpuinfo_x86 *, int, ...);
>  
>  #ifdef __x86_64__
> +extern s8 opt_allow_unsafe;
> +
>  void fam10h_check_enable_mmcfg(void);
>  void check_enable_amd_mmconf_dmi(void);
>  #endif
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
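[Editorial note: the tri-state `allow_unsafe` semantics in the patch above (1 = allow guests, 0 = boot but refuse DomU creation, -1 = refuse to boot) can be modelled in a small stand-alone sketch. This is a hypothetical illustration with made-up function names, not the actual Xen source; it only mirrors the decision logic of the quoted `init_amd()` and `arch_domain_create()` hunks.]

```c
#include <stdio.h>

/*
 * Hypothetical stand-alone model of the tri-state "allow_unsafe" option
 * from the quoted patch (not the real Xen code):
 *    1 = allow guest creation
 *    0 = boot, but refuse DomU creation
 *   -1 = refuse to boot entirely (the patch's default)
 */
signed char opt_allow_unsafe = -1;

/* Boot-time check: returns 0 if boot may proceed, 1 if we would panic. */
int check_boot(int cpu_has_erratum_121)
{
    if (!cpu_has_erratum_121) {
        /* Unaffected CPU: promote the option to "fully allowed". */
        opt_allow_unsafe = 1;
        return 0;
    }
    if (opt_allow_unsafe < 0)
        return 1; /* default on an affected CPU: panic at boot */
    if (!opt_allow_unsafe)
        printf("warning: DomU creation will be refused on this CPU\n");
    return 0;
}

/* Domain-creation check: 0 on success, -1 if creation is denied. */
int check_domain_create(int domid)
{
    if (domid == 0 || opt_allow_unsafe > 0)
        return 0;
    return -1; /* corresponds to the patch's -EPERM path */
}
```

As in the patch, an unaffected CPU silently upgrades the setting to 1, so the boot-time and per-domain checks only ever restrict anything on hardware with erratum 121.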

From xen-devel-bounces@lists.xen.org Fri Aug 17 15:56:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 15:56:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2OuW-0005px-OH; Fri, 17 Aug 2012 15:56:32 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1T2OuU-0005ps-Hv
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 15:56:30 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-9.tower-27.messagelabs.com!1345218983!9799578!1
X-Originating-IP: [81.169.146.162]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MiA9PiAzNzQyMzU=\n,sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MiA9PiAzNzQyMzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4516 invoked from network); 17 Aug 2012 15:56:24 -0000
Received: from mo-p00-ob.rzone.de (HELO mo-p00-ob.rzone.de) (81.169.146.162)
	by server-9.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Aug 2012 15:56:24 -0000
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+zrwiavkK6tmQaLfmwtM48/lr3c7pEgsE
X-RZG-CLASS-ID: mo00
Received: from probook.site
	(dslb-084-057-064-208.pools.arcor-ip.net [84.57.64.208])
	by smtp.strato.de (josoe mo94) (RZmta 30.9 DYNA|AUTH)
	with (DHE-RSA-AES256-SHA encrypted) ESMTPA id 503631o7HCu0fY ;
	Fri, 17 Aug 2012 17:56:18 +0200 (CEST)
Received: by probook.site (Postfix, from userid 1000)
	id D20381836E; Fri, 17 Aug 2012 17:56:17 +0200 (CEST)
Date: Fri, 17 Aug 2012 17:56:17 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Keir Fraser <keir.xen@gmail.com>
Message-ID: <20120817155617.GA32537@aepfle.de>
References: <20120817151136.GA25138@aepfle.de>
	<CC542656.3C452%keir.xen@gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CC542656.3C452%keir.xen@gmail.com>
User-Agent: Mutt/1.5.21.rev5555 (2012-07-20)
Cc: Keir Fraser <keir@xen.org>, Jan Beulich <JBeulich@suse.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] x86-64: refine the XSA-9 fix
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Aug 17, Keir Fraser wrote:

> On 17/08/2012 16:11, "Olaf Hering" <olaf@aepfle.de> wrote:
> 
> > On Mon, Jun 18, Keir Fraser wrote:
> > 
> >> On 13/06/2012 11:04, "Jan Beulich" <JBeulich@suse.com> wrote:
> >> 
> >>> Our product management wasn't happy with the "solution" for XSA-9, and
> >>> demanded that customer systems must continue to boot. Rather than
> >>> having our and perhaps other distros carry non-trivial patches, allow
> >>> for more fine grained control (panic on boot, deny guest creation, or
> >>> merely warn) by means of a single line change.
> >> 
> >> All this seems to allow is to boot but not create domU-s. Which seems a bit
> >> pointless.
> > 
> > Refusing to boot into dom0 with no good reason is a good way to lose
> > remote control of a system without serial console. Not funny.
> > 
> > Fortunately I booted and tested with sles11 Xen first before ruining the
> > box with plain xen-unstable.
> > 
> > So, please apply this patch and remove the panic() from
> > ./xen/arch/x86/cpu/amd.c
> 
> Okay, that's a good argument for that patch.

Oh, now that the context was posted again:
With the patch the box would still panic by default. Leaving it at zero to
refuse guest creation looks like a sensible default.

Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 16:24:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 16:24:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2PKn-0006lY-DU; Fri, 17 Aug 2012 16:23:41 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1T2PKm-0006l5-7J
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 16:23:40 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-14.tower-27.messagelabs.com!1345220613!2931315!1
X-Originating-IP: [63.239.67.10]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16470 invoked from network); 17 Aug 2012 16:23:34 -0000
Received: from emvm-gh1-uea09.nsa.gov (HELO nsa.gov) (63.239.67.10)
	by server-14.tower-27.messagelabs.com with SMTP;
	17 Aug 2012 16:23:34 -0000
X-TM-IMSS-Message-ID: <b41234e80001c966@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov
	([63.239.67.10]) with ESMTP (TREND IMSS SMTP Service 7.1) id
	b41234e80001c966 ; Fri, 17 Aug 2012 12:24:35 -0400
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id q7HGNSpd015709; 
	Fri, 17 Aug 2012 12:23:29 -0400
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: keir@xen.org
Date: Fri, 17 Aug 2012 12:23:27 -0400
Message-Id: <1345220607-15625-4-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.2
In-Reply-To: <1345220607-15625-1-git-send-email-dgdegra@tycho.nsa.gov>
References: <1345220607-15625-1-git-send-email-dgdegra@tycho.nsa.gov>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH 3/3] flask/policy: add accesses used by newer
	dom0s
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 tools/flask/policy/policy/modules/xen/xen.if | 2 +-
 tools/flask/policy/policy/modules/xen/xen.te | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/tools/flask/policy/policy/modules/xen/xen.if b/tools/flask/policy/policy/modules/xen/xen.if
index 87ef165..3f58909 100644
--- a/tools/flask/policy/policy/modules/xen/xen.if
+++ b/tools/flask/policy/policy/modules/xen/xen.if
@@ -100,7 +100,7 @@ define(`use_device', `
 # admin_device(domain, device)
 #   Allow a device to be used and delegated by a domain
 define(`admin_device', `
-    allow $1 $2:resource { setup stat_device add_device add_irq add_iomem add_ioport remove_device remove_irq remove_iomem remove_ioport };
+    allow $1 $2:resource { setup stat_device add_device add_irq add_iomem add_ioport remove_device remove_irq remove_iomem remove_ioport plug unplug };
     allow $1 $2:hvm bind_irq;
     use_device($1, $2)
 ')
diff --git a/tools/flask/policy/policy/modules/xen/xen.te b/tools/flask/policy/policy/modules/xen/xen.te
index 29885c4..e175d4b 100644
--- a/tools/flask/policy/policy/modules/xen/xen.te
+++ b/tools/flask/policy/policy/modules/xen/xen.te
@@ -55,8 +55,8 @@ type device_t, resource_type;
 allow xen_t dom0_t:domain { create };
 
 allow dom0_t xen_t:xen { kexec readapic writeapic mtrr_read mtrr_add mtrr_del
-	scheduler physinfo heap quirk readconsole writeconsole settime
-	microcode cpupool_op sched_op };
+	scheduler physinfo heap quirk readconsole writeconsole settime getcpuinfo
+	microcode cpupool_op sched_op pm_op };
 allow dom0_t xen_t:mmu { memorymap };
 allow dom0_t security_t:security { check_context compute_av compute_create
 	compute_member load_policy compute_relabel compute_user setenforce
-- 
1.7.11.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 16:24:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 16:24:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2PKm-0006lK-Mo; Fri, 17 Aug 2012 16:23:40 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1T2PKl-0006l3-CB
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 16:23:39 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-6.tower-27.messagelabs.com!1345220612!3447611!1
X-Originating-IP: [63.239.67.9]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27130 invoked from network); 17 Aug 2012 16:23:33 -0000
Received: from emvm-gh1-uea08.nsa.gov (HELO nsa.gov) (63.239.67.9)
	by server-6.tower-27.messagelabs.com with SMTP;
	17 Aug 2012 16:23:33 -0000
X-TM-IMSS-Message-ID: <b4126a3b0001e76e@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov ([63.239.67.9])
	with ESMTP (TREND IMSS SMTP Service 7.1) id b4126a3b0001e76e ;
	Fri, 17 Aug 2012 12:23:16 -0400
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id q7HGNSpa015709; 
	Fri, 17 Aug 2012 12:23:28 -0400
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: keir@xen.org
Date: Fri, 17 Aug 2012 12:23:24 -0400
Message-Id: <1345220607-15625-1-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.2
Cc: xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH 0/3] XSM, FLASK fixes for 4.2
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The first and third patches are simple cleanups; #1 also fixes NULL
function pointer dereferences that occur if you compile with XSM but
don't use it.

The second patch addresses the issue partially fixed by changeset
25605:9950f2dc2ee6; the XSM hooks made a number of incorrect
assumptions about the presence and validity of struct page_info fields
while validating domU-provided MFN values. This patch removes those
checks and uses the pg_owner domain to validate access to other
domains' pages.

This series supersedes the two patches sent on 2012-07-31 that applied
the incomplete fix to flask_mmu_machphys_update; patch #1 here is
identical to that post's #2.

[PATCH 1/3] xsm: Add missing dummy hooks
[PATCH 2/3] xsm/flask: remove page-to-domain lookups from XSM hooks
[PATCH 3/3] flask/policy: add accesses used by newer dom0s

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 16:24:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 16:24:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2PKp-0006lu-Pc; Fri, 17 Aug 2012 16:23:43 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1T2PKo-0006l7-70
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 16:23:42 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-11.tower-27.messagelabs.com!1345220612!1732760!1
X-Originating-IP: [63.239.67.9]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10524 invoked from network); 17 Aug 2012 16:23:33 -0000
Received: from emvm-gh1-uea08.nsa.gov (HELO nsa.gov) (63.239.67.9)
	by server-11.tower-27.messagelabs.com with SMTP;
	17 Aug 2012 16:23:33 -0000
X-TM-IMSS-Message-ID: <b4126c3d0001e770@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov ([63.239.67.9])
	with ESMTP (TREND IMSS SMTP Service 7.1) id b4126c3d0001e770 ;
	Fri, 17 Aug 2012 12:23:16 -0400
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id q7HGNSpc015709; 
	Fri, 17 Aug 2012 12:23:29 -0400
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: keir@xen.org
Date: Fri, 17 Aug 2012 12:23:26 -0400
Message-Id: <1345220607-15625-3-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.2
In-Reply-To: <1345220607-15625-1-git-send-email-dgdegra@tycho.nsa.gov>
References: <1345220607-15625-1-git-send-email-dgdegra@tycho.nsa.gov>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH 2/3] xsm/flask: remove page-to-domain lookups
	from XSM hooks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Doing a reverse lookup from MFN to its owning domain is redundant with
the internal checks Xen does on pages. Change the checks to operate
directly on the domain owning the pages for normal memory; MMIO areas
are still checked with security_iomem_sid.

This fixes a hypervisor crash when a domU attempts to map an MFN that
is free in Xen's heap: the XSM hook is called before the validity check,
and page_get_owner returns garbage when called on these pages. While
explicitly checking for such pages using page_get_owner_and_reference is
a possible solution, it would end up duplicating parts of
get_page_from_l1e.

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 xen/arch/x86/domctl.c |  23 +++---
 xen/arch/x86/mm.c     |   4 +-
 xen/include/xsm/xsm.h |  23 +++---
 xen/xsm/dummy.c       |   6 +-
 xen/xsm/flask/hooks.c | 189 +++++++++++++-------------------------------------
 5 files changed, 80 insertions(+), 165 deletions(-)

diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 135ea6e..97a13fb 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -114,7 +114,7 @@ long arch_do_domctl(
 
         page = mfn_to_page(mfn);
 
-        ret = xsm_getpageframeinfo(page);
+        ret = xsm_getpageframeinfo(d);
         if ( ret )
         {
             rcu_unlock_domain(d);
@@ -170,6 +170,13 @@ long arch_do_domctl(
             if ( unlikely((d = rcu_lock_domain_by_id(dom)) == NULL) )
                 break;
 
+            ret = xsm_getpageframeinfo(d);
+            if ( ret )
+            {
+                rcu_unlock_domain(d);
+                break;
+            }
+
             if ( unlikely(num > 1024) ||
                  unlikely(num != domctl->u.getpageframeinfo3.num) )
             {
@@ -209,8 +216,6 @@ long arch_do_domctl(
                     if ( unlikely(!page) ||
                          unlikely(is_xen_heap_page(page)) )
                         type = XEN_DOMCTL_PFINFO_XTAB;
-                    else if ( xsm_getpageframeinfo(page) != 0 )
-                        ;
                     else
                     {
                         switch( page->u.inuse.type_info & PGT_type_mask )
@@ -267,6 +272,13 @@ long arch_do_domctl(
         if ( unlikely((d = rcu_lock_domain_by_id(dom)) == NULL) )
             break;
 
+        ret = xsm_getpageframeinfo(d);
+        if ( ret )
+        {
+            rcu_unlock_domain(d);
+            break;
+        }
+
         if ( unlikely(num > 1024) )
         {
             ret = -E2BIG;
@@ -310,11 +322,6 @@ long arch_do_domctl(
                 if ( unlikely(!page) ||
                      unlikely(is_xen_heap_page(page)) )
                     arr32[j] |= XEN_DOMCTL_PFINFO_XTAB;
-                else if ( xsm_getpageframeinfo(page) != 0 )
-                {
-                    put_page(page);
-                    continue;
-                }
                 else
                 {
                     unsigned long type = 0;
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 84820f1..d02fe97 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -3073,7 +3073,7 @@ long do_mmuext_op(
                 break;
             }
 
-            if ( (rc = xsm_memory_pin_page(d, page)) != 0 )
+            if ( (rc = xsm_memory_pin_page(d, pg_owner, page)) != 0 )
             {
                 put_page_and_type(page);
                 okay = 0;
@@ -3643,7 +3643,7 @@ long do_mmu_update(
             mfn = req.ptr >> PAGE_SHIFT;
             gpfn = req.val;
 
-            rc = xsm_mmu_machphys_update(d, mfn);
+            rc = xsm_mmu_machphys_update(d, pg_owner, mfn);
             if ( rc )
                 break;
 
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index bef79df..593cdbd 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -105,7 +105,7 @@ struct xsm_operations {
     int (*set_pod_target) (struct domain *d);
     int (*memory_adjust_reservation) (struct domain *d1, struct domain *d2);
     int (*memory_stat_reservation) (struct domain *d1, struct domain *d2);
-    int (*memory_pin_page) (struct domain *d, struct page_info *page);
+    int (*memory_pin_page) (struct domain *d1, struct domain *d2, struct page_info *page);
     int (*remove_from_physmap) (struct domain *d1, struct domain *d2);
 
     int (*console_io) (struct domain *d, int cmd);
@@ -143,7 +143,7 @@ struct xsm_operations {
 
 #ifdef CONFIG_X86
     int (*shadow_control) (struct domain *d, uint32_t op);
-    int (*getpageframeinfo) (struct page_info *page);
+    int (*getpageframeinfo) (struct domain *d);
     int (*getmemlist) (struct domain *d);
     int (*hypercall_init) (struct domain *d);
     int (*hvmcontext) (struct domain *d, uint32_t op);
@@ -171,9 +171,8 @@ struct xsm_operations {
     int (*domain_memory_map) (struct domain *d);
     int (*mmu_normal_update) (struct domain *d, struct domain *t,
                               struct domain *f, intpte_t fpte);
-    int (*mmu_machphys_update) (struct domain *d, unsigned long mfn);
-    int (*update_va_mapping) (struct domain *d, struct domain *f, 
-                                                            l1_pgentry_t pte);
+    int (*mmu_machphys_update) (struct domain *d1, struct domain *d2, unsigned long mfn);
+    int (*update_va_mapping) (struct domain *d, struct domain *f, l1_pgentry_t pte);
     int (*add_to_physmap) (struct domain *d1, struct domain *d2);
     int (*sendtrigger) (struct domain *d);
     int (*bind_pt_irq) (struct domain *d, struct xen_domctl_bind_pt_irq *bind);
@@ -455,9 +454,10 @@ static inline int xsm_memory_stat_reservation (struct domain *d1,
     return xsm_call(memory_stat_reservation(d1, d2));
 }
 
-static inline int xsm_memory_pin_page(struct domain *d, struct page_info *page)
+static inline int xsm_memory_pin_page(struct domain *d1, struct domain *d2,
+                                      struct page_info *page)
 {
-    return xsm_call(memory_pin_page(d, page));
+    return xsm_call(memory_pin_page(d1, d2, page));
 }
 
 static inline int xsm_remove_from_physmap(struct domain *d1, struct domain *d2)
@@ -617,9 +617,9 @@ static inline int xsm_shadow_control (struct domain *d, uint32_t op)
     return xsm_call(shadow_control(d, op));
 }
 
-static inline int xsm_getpageframeinfo (struct page_info *page)
+static inline int xsm_getpageframeinfo (struct domain *d)
 {
-    return xsm_call(getpageframeinfo(page));
+    return xsm_call(getpageframeinfo(d));
 }
 
 static inline int xsm_getmemlist (struct domain *d)
@@ -753,9 +753,10 @@ static inline int xsm_mmu_normal_update (struct domain *d, struct domain *t,
     return xsm_call(mmu_normal_update(d, t, f, fpte));
 }
 
-static inline int xsm_mmu_machphys_update (struct domain *d, unsigned long mfn)
+static inline int xsm_mmu_machphys_update (struct domain *d1, struct domain *d2,
+                                           unsigned long mfn)
 {
-    return xsm_call(mmu_machphys_update(d, mfn));
+    return xsm_call(mmu_machphys_update(d1, d2, mfn));
 }
 
 static inline int xsm_update_va_mapping(struct domain *d, struct domain *f, 
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index 5d35342..4836fc0 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -243,7 +243,7 @@ static int dummy_schedop_shutdown (struct domain *d1, struct domain *d2)
     return 0;
 }
 
-static int dummy_memory_pin_page(struct domain *d, struct page_info *page)
+static int dummy_memory_pin_page(struct domain *d1, struct domain *d2, struct page_info *page)
 {
     return 0;
 }
@@ -418,7 +418,7 @@ static int dummy_shadow_control (struct domain *d, uint32_t op)
     return 0;
 }
 
-static int dummy_getpageframeinfo (struct page_info *page)
+static int dummy_getpageframeinfo (struct domain *d)
 {
     return 0;
 }
@@ -554,7 +554,7 @@ static int dummy_mmu_normal_update (struct domain *d, struct domain *t,
     return 0;
 }
 
-static int dummy_mmu_machphys_update (struct domain *d, unsigned long mfn)
+static int dummy_mmu_machphys_update (struct domain *d, struct domain *f, unsigned long mfn)
 {
     return 0;
 }
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index de79d66..8c853de 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -108,15 +108,21 @@ static int flask_domain_alloc_security(struct domain *d)
 
     memset(dsec, 0, sizeof(struct domain_security_struct));
 
-    if ( is_idle_domain(d) )
+    dsec->create_sid = SECSID_NULL;
+    switch ( d->domain_id )
     {
+    case DOMID_IDLE:
         dsec->sid = SECINITSID_XEN;
         dsec->create_sid = SECINITSID_DOM0;
-    }
-    else
-    {
+        break;
+    case DOMID_XEN:
+        dsec->sid = SECINITSID_DOMXEN;
+        break;
+    case DOMID_IO:
+        dsec->sid = SECINITSID_DOMIO;
+        break;
+    default:
         dsec->sid = SECINITSID_UNLABELED;
-        dsec->create_sid = SECSID_NULL;
     }
 
     d->ssid = dsec;
@@ -361,64 +367,6 @@ static int flask_grant_query_size(struct domain *d1, struct domain *d2)
     return domain_has_perm(d1, d2, SECCLASS_GRANT, GRANT__QUERY);
 }
 
-static int get_page_sid(struct page_info *page, u32 *sid)
-{
-    int rc = 0;
-    struct domain *d;
-    struct domain_security_struct *dsec;
-    unsigned long mfn;
-
-    d = page_get_owner(page);
-
-    if ( d == NULL )
-    {
-        mfn = page_to_mfn(page);
-        rc = security_iomem_sid(mfn, sid);
-        return rc;
-    }
-
-    switch ( d->domain_id )
-    {
-    case DOMID_IO:
-        /*A tracked IO page?*/
-        *sid = SECINITSID_DOMIO;
-        break;
-
-    case DOMID_XEN:
-        /*A page from Xen's private heap?*/
-        *sid = SECINITSID_DOMXEN;
-        break;
-
-    default:
-        /*Pages are implicitly labeled by domain ownership!*/
-        dsec = d->ssid;
-        *sid = dsec ? dsec->sid : SECINITSID_UNLABELED;
-        break;
-    }
-
-    return rc;
-}
-
-static int get_mfn_sid(unsigned long mfn, u32 *sid)
-{
-    int rc = 0;
-    struct page_info *page;
-
-    if ( mfn_valid(mfn) )
-    {
-        /*mfn is valid if this is a page that Xen is tracking!*/
-        page = mfn_to_page(mfn);
-        rc = get_page_sid(page, sid);
-    }
-    else
-    {
-        /*Possibly an untracked IO page?*/
-        rc = security_iomem_sid(mfn, sid);
-    }
-
-    return rc;    
-}
-
 static int flask_get_pod_target(struct domain *d)
 {
     return domain_has_perm(current->domain, d, SECCLASS_DOMAIN, DOMAIN__GETPODTARGET);
@@ -439,18 +387,10 @@ static int flask_memory_stat_reservation(struct domain *d1, struct domain *d2)
     return domain_has_perm(d1, d2, SECCLASS_MMU, MMU__STAT);
 }
 
-static int flask_memory_pin_page(struct domain *d, struct page_info *page)
+static int flask_memory_pin_page(struct domain *d1, struct domain *d2,
+                                 struct page_info *page)
 {
-    int rc = 0;
-    u32 sid;
-    struct domain_security_struct *dsec;
-    dsec = d->ssid;
-
-    rc = get_page_sid(page, &sid);
-    if ( rc )
-        return rc;
-
-    return avc_has_perm(dsec->sid, sid, SECCLASS_MMU, MMU__PINPAGE, NULL);
+    return domain_has_perm(d1, d2, SECCLASS_MMU, MMU__PINPAGE);
 }
 
 static int flask_console_io(struct domain *d, int cmd)
@@ -1095,19 +1035,9 @@ static int flask_ioport_permission(struct domain *d, uint32_t start, uint32_t en
     return security_iterate_ioport_sids(start, end, _ioport_has_perm, &data);
 }
 
-static int flask_getpageframeinfo(struct page_info *page)
+static int flask_getpageframeinfo(struct domain *d)
 {
-    int rc = 0;
-    u32 tsid;
-    struct domain_security_struct *dsec;
-
-    dsec = current->domain->ssid;
-
-    rc = get_page_sid(page, &tsid);
-    if ( rc )
-        return rc;
-
-    return avc_has_perm(dsec->sid, tsid, SECCLASS_MMU, MMU__PAGEINFO, NULL);    
+    return domain_has_perm(current->domain, d, SECCLASS_MMU, MMU__PAGEINFO);
 }
 
 static int flask_getmemlist(struct domain *d)
@@ -1314,88 +1244,65 @@ static int flask_domain_memory_map(struct domain *d)
     return domain_has_perm(current->domain, d, SECCLASS_MMU, MMU__MEMORYMAP);
 }
 
-static int flask_mmu_normal_update(struct domain *d, struct domain *t,
-                                   struct domain *f, intpte_t fpte)
+static int domain_memory_perm(struct domain *d, struct domain *f, l1_pgentry_t pte)
 {
     int rc = 0;
     u32 map_perms = MMU__MAP_READ;
     unsigned long fgfn, fmfn;
-    struct domain_security_struct *dsec;
-    u32 fsid;
-    struct avc_audit_data ad;
     p2m_type_t p2mt;
 
-    if (d != t)
-        rc = domain_has_perm(d, t, SECCLASS_MMU, MMU__REMOTE_REMAP);
-    if ( rc )
-        return rc;
-
-    if ( !(l1e_get_flags(l1e_from_intpte(fpte)) & _PAGE_PRESENT) )
+    if ( !(l1e_get_flags(pte) & _PAGE_PRESENT) )
         return 0;
 
-    dsec = d->ssid;
-
-    if ( l1e_get_flags(l1e_from_intpte(fpte)) & _PAGE_RW )
+    if ( l1e_get_flags(pte) & _PAGE_RW )
         map_perms |= MMU__MAP_WRITE;
 
-    AVC_AUDIT_DATA_INIT(&ad, MEMORY);
-    fgfn = l1e_get_pfn(l1e_from_intpte(fpte));
+    fgfn = l1e_get_pfn(pte);
     fmfn = mfn_x(get_gfn_query(f, fgfn, &p2mt));
-
-    ad.sdom = d;
-    ad.tdom = f;
-    ad.memory.pte = fpte;
-    ad.memory.mfn = fmfn;
-
-    rc = get_mfn_sid(fmfn, &fsid);
-
     put_gfn(f, fgfn);
 
-    if ( rc )
-        return rc;
+    if ( f->domain_id == DOMID_IO || !mfn_valid(fmfn) )
+    {
+        struct avc_audit_data ad;
+        struct domain_security_struct *dsec = d->ssid;
+        u32 fsid;
+        AVC_AUDIT_DATA_INIT(&ad, MEMORY);
+        ad.sdom = d;
+        ad.tdom = f;
+        ad.memory.pte = pte.l1;
+        ad.memory.mfn = fmfn;
+        rc = security_iomem_sid(fmfn, &fsid);
+        if ( rc )
+            return rc;
+        return avc_has_perm(dsec->sid, fsid, SECCLASS_MMU, map_perms, &ad);
+    }
 
-    return avc_has_perm(dsec->sid, fsid, SECCLASS_MMU, map_perms, &ad);
+    return domain_has_perm(d, f, SECCLASS_MMU, map_perms);
 }
 
-static int flask_mmu_machphys_update(struct domain *d, unsigned long mfn)
+static int flask_mmu_normal_update(struct domain *d, struct domain *t,
+                                   struct domain *f, intpte_t fpte)
 {
     int rc = 0;
-    u32 psid;
-    struct domain_security_struct *dsec;
-    dsec = d->ssid;
 
-    rc = get_mfn_sid(mfn, &psid);
+    if (d != t)
+        rc = domain_has_perm(d, t, SECCLASS_MMU, MMU__REMOTE_REMAP);
     if ( rc )
         return rc;
 
-    return avc_has_perm(dsec->sid, psid, SECCLASS_MMU, MMU__UPDATEMP, NULL);
+    return domain_memory_perm(d, f, l1e_from_intpte(fpte));
+}
+
+static int flask_mmu_machphys_update(struct domain *d1, struct domain *d2,
+                                     unsigned long mfn)
+{
+    return domain_has_perm(d1, d2, SECCLASS_MMU, MMU__UPDATEMP);
 }
 
 static int flask_update_va_mapping(struct domain *d, struct domain *f,
                                    l1_pgentry_t pte)
 {
-    int rc = 0;
-    u32 psid;
-    u32 map_perms = MMU__MAP_READ;
-    struct page_info *page = NULL;
-    struct domain_security_struct *dsec;
-
-    if ( !(l1e_get_flags(pte) & _PAGE_PRESENT) )
-        return 0;
-
-    if ( l1e_get_flags(pte) & _PAGE_RW )
-        map_perms |= MMU__MAP_WRITE;
-
-    dsec = d->ssid;
-
-    page = get_page_from_gfn(f, l1e_get_pfn(pte), NULL, P2M_ALLOC);
-    rc = get_mfn_sid(page ? page_to_mfn(page) : INVALID_MFN, &psid);
-    if ( page )
-        put_page(page);
-    if ( rc )
-        return rc;
-
-    return avc_has_perm(dsec->sid, psid, SECCLASS_MMU, map_perms, NULL);
+    return domain_memory_perm(d, f, pte);
 }
 
 static int flask_add_to_physmap(struct domain *d1, struct domain *d2)
-- 
1.7.11.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 {
     return 0;
 }
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index de79d66..8c853de 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -108,15 +108,21 @@ static int flask_domain_alloc_security(struct domain *d)
 
     memset(dsec, 0, sizeof(struct domain_security_struct));
 
-    if ( is_idle_domain(d) )
+    dsec->create_sid = SECSID_NULL;
+    switch ( d->domain_id )
     {
+    case DOMID_IDLE:
         dsec->sid = SECINITSID_XEN;
         dsec->create_sid = SECINITSID_DOM0;
-    }
-    else
-    {
+        break;
+    case DOMID_XEN:
+        dsec->sid = SECINITSID_DOMXEN;
+        break;
+    case DOMID_IO:
+        dsec->sid = SECINITSID_DOMIO;
+        break;
+    default:
         dsec->sid = SECINITSID_UNLABELED;
-        dsec->create_sid = SECSID_NULL;
     }
 
     d->ssid = dsec;
@@ -361,64 +367,6 @@ static int flask_grant_query_size(struct domain *d1, struct domain *d2)
     return domain_has_perm(d1, d2, SECCLASS_GRANT, GRANT__QUERY);
 }
 
-static int get_page_sid(struct page_info *page, u32 *sid)
-{
-    int rc = 0;
-    struct domain *d;
-    struct domain_security_struct *dsec;
-    unsigned long mfn;
-
-    d = page_get_owner(page);
-
-    if ( d == NULL )
-    {
-        mfn = page_to_mfn(page);
-        rc = security_iomem_sid(mfn, sid);
-        return rc;
-    }
-
-    switch ( d->domain_id )
-    {
-    case DOMID_IO:
-        /*A tracked IO page?*/
-        *sid = SECINITSID_DOMIO;
-        break;
-
-    case DOMID_XEN:
-        /*A page from Xen's private heap?*/
-        *sid = SECINITSID_DOMXEN;
-        break;
-
-    default:
-        /*Pages are implicitly labeled by domain ownership!*/
-        dsec = d->ssid;
-        *sid = dsec ? dsec->sid : SECINITSID_UNLABELED;
-        break;
-    }
-
-    return rc;
-}
-
-static int get_mfn_sid(unsigned long mfn, u32 *sid)
-{
-    int rc = 0;
-    struct page_info *page;
-
-    if ( mfn_valid(mfn) )
-    {
-        /*mfn is valid if this is a page that Xen is tracking!*/
-        page = mfn_to_page(mfn);
-        rc = get_page_sid(page, sid);
-    }
-    else
-    {
-        /*Possibly an untracked IO page?*/
-        rc = security_iomem_sid(mfn, sid);
-    }
-
-    return rc;    
-}
-
 static int flask_get_pod_target(struct domain *d)
 {
     return domain_has_perm(current->domain, d, SECCLASS_DOMAIN, DOMAIN__GETPODTARGET);
@@ -439,18 +387,10 @@ static int flask_memory_stat_reservation(struct domain *d1, struct domain *d2)
     return domain_has_perm(d1, d2, SECCLASS_MMU, MMU__STAT);
 }
 
-static int flask_memory_pin_page(struct domain *d, struct page_info *page)
+static int flask_memory_pin_page(struct domain *d1, struct domain *d2,
+                                 struct page_info *page)
 {
-    int rc = 0;
-    u32 sid;
-    struct domain_security_struct *dsec;
-    dsec = d->ssid;
-
-    rc = get_page_sid(page, &sid);
-    if ( rc )
-        return rc;
-
-    return avc_has_perm(dsec->sid, sid, SECCLASS_MMU, MMU__PINPAGE, NULL);
+    return domain_has_perm(d1, d2, SECCLASS_MMU, MMU__PINPAGE);
 }
 
 static int flask_console_io(struct domain *d, int cmd)
@@ -1095,19 +1035,9 @@ static int flask_ioport_permission(struct domain *d, uint32_t start, uint32_t en
     return security_iterate_ioport_sids(start, end, _ioport_has_perm, &data);
 }
 
-static int flask_getpageframeinfo(struct page_info *page)
+static int flask_getpageframeinfo(struct domain *d)
 {
-    int rc = 0;
-    u32 tsid;
-    struct domain_security_struct *dsec;
-
-    dsec = current->domain->ssid;
-
-    rc = get_page_sid(page, &tsid);
-    if ( rc )
-        return rc;
-
-    return avc_has_perm(dsec->sid, tsid, SECCLASS_MMU, MMU__PAGEINFO, NULL);    
+    return domain_has_perm(current->domain, d, SECCLASS_MMU, MMU__PAGEINFO);
 }
 
 static int flask_getmemlist(struct domain *d)
@@ -1314,88 +1244,65 @@ static int flask_domain_memory_map(struct domain *d)
     return domain_has_perm(current->domain, d, SECCLASS_MMU, MMU__MEMORYMAP);
 }
 
-static int flask_mmu_normal_update(struct domain *d, struct domain *t,
-                                   struct domain *f, intpte_t fpte)
+static int domain_memory_perm(struct domain *d, struct domain *f, l1_pgentry_t pte)
 {
     int rc = 0;
     u32 map_perms = MMU__MAP_READ;
     unsigned long fgfn, fmfn;
-    struct domain_security_struct *dsec;
-    u32 fsid;
-    struct avc_audit_data ad;
     p2m_type_t p2mt;
 
-    if (d != t)
-        rc = domain_has_perm(d, t, SECCLASS_MMU, MMU__REMOTE_REMAP);
-    if ( rc )
-        return rc;
-
-    if ( !(l1e_get_flags(l1e_from_intpte(fpte)) & _PAGE_PRESENT) )
+    if ( !(l1e_get_flags(pte) & _PAGE_PRESENT) )
         return 0;
 
-    dsec = d->ssid;
-
-    if ( l1e_get_flags(l1e_from_intpte(fpte)) & _PAGE_RW )
+    if ( l1e_get_flags(pte) & _PAGE_RW )
         map_perms |= MMU__MAP_WRITE;
 
-    AVC_AUDIT_DATA_INIT(&ad, MEMORY);
-    fgfn = l1e_get_pfn(l1e_from_intpte(fpte));
+    fgfn = l1e_get_pfn(pte);
     fmfn = mfn_x(get_gfn_query(f, fgfn, &p2mt));
-
-    ad.sdom = d;
-    ad.tdom = f;
-    ad.memory.pte = fpte;
-    ad.memory.mfn = fmfn;
-
-    rc = get_mfn_sid(fmfn, &fsid);
-
     put_gfn(f, fgfn);
 
-    if ( rc )
-        return rc;
+    if ( f->domain_id == DOMID_IO || !mfn_valid(fmfn) )
+    {
+        struct avc_audit_data ad;
+        struct domain_security_struct *dsec = d->ssid;
+        u32 fsid;
+        AVC_AUDIT_DATA_INIT(&ad, MEMORY);
+        ad.sdom = d;
+        ad.tdom = f;
+        ad.memory.pte = pte.l1;
+        ad.memory.mfn = fmfn;
+        rc = security_iomem_sid(fmfn, &fsid);
+        if ( rc )
+            return rc;
+        return avc_has_perm(dsec->sid, fsid, SECCLASS_MMU, map_perms, &ad);
+    }
 
-    return avc_has_perm(dsec->sid, fsid, SECCLASS_MMU, map_perms, &ad);
+    return domain_has_perm(d, f, SECCLASS_MMU, map_perms);
 }
 
-static int flask_mmu_machphys_update(struct domain *d, unsigned long mfn)
+static int flask_mmu_normal_update(struct domain *d, struct domain *t,
+                                   struct domain *f, intpte_t fpte)
 {
     int rc = 0;
-    u32 psid;
-    struct domain_security_struct *dsec;
-    dsec = d->ssid;
 
-    rc = get_mfn_sid(mfn, &psid);
+    if (d != t)
+        rc = domain_has_perm(d, t, SECCLASS_MMU, MMU__REMOTE_REMAP);
     if ( rc )
         return rc;
 
-    return avc_has_perm(dsec->sid, psid, SECCLASS_MMU, MMU__UPDATEMP, NULL);
+    return domain_memory_perm(d, f, l1e_from_intpte(fpte));
+}
+
+static int flask_mmu_machphys_update(struct domain *d1, struct domain *d2,
+                                     unsigned long mfn)
+{
+    return domain_has_perm(d1, d2, SECCLASS_MMU, MMU__UPDATEMP);
 }
 
 static int flask_update_va_mapping(struct domain *d, struct domain *f,
                                    l1_pgentry_t pte)
 {
-    int rc = 0;
-    u32 psid;
-    u32 map_perms = MMU__MAP_READ;
-    struct page_info *page = NULL;
-    struct domain_security_struct *dsec;
-
-    if ( !(l1e_get_flags(pte) & _PAGE_PRESENT) )
-        return 0;
-
-    if ( l1e_get_flags(pte) & _PAGE_RW )
-        map_perms |= MMU__MAP_WRITE;
-
-    dsec = d->ssid;
-
-    page = get_page_from_gfn(f, l1e_get_pfn(pte), NULL, P2M_ALLOC);
-    rc = get_mfn_sid(page ? page_to_mfn(page) : INVALID_MFN, &psid);
-    if ( page )
-        put_page(page);
-    if ( rc )
-        return rc;
-
-    return avc_has_perm(dsec->sid, psid, SECCLASS_MMU, map_perms, NULL);
+    return domain_memory_perm(d, f, pte);
 }
 
 static int flask_add_to_physmap(struct domain *d1, struct domain *d2)
-- 
1.7.11.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 16:24:07 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 16:24:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2PKn-0006lR-1Y; Fri, 17 Aug 2012 16:23:41 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1T2PKl-0006l4-Dx
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 16:23:39 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-13.tower-27.messagelabs.com!1345220612!9777357!1
X-Originating-IP: [63.239.67.9]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10322 invoked from network); 17 Aug 2012 16:23:33 -0000
Received: from emvm-gh1-uea08.nsa.gov (HELO nsa.gov) (63.239.67.9)
	by server-13.tower-27.messagelabs.com with SMTP;
	17 Aug 2012 16:23:33 -0000
X-TM-IMSS-Message-ID: <b4126b920001e76f@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.3.1]) by nsa.gov ([63.239.67.9])
	with ESMTP (TREND IMSS SMTP Service 7.1) id b4126b920001e76f ;
	Fri, 17 Aug 2012 12:23:16 -0400
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [144.51.25.48])
	by tarius.tycho.ncsc.mil (8.13.1/8.13.1) with ESMTP id q7HGNSpb015709; 
	Fri, 17 Aug 2012 12:23:29 -0400
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
To: keir@xen.org
Date: Fri, 17 Aug 2012 12:23:25 -0400
Message-Id: <1345220607-15625-2-git-send-email-dgdegra@tycho.nsa.gov>
X-Mailer: git-send-email 1.7.11.2
In-Reply-To: <1345220607-15625-1-git-send-email-dgdegra@tycho.nsa.gov>
References: <1345220607-15625-1-git-send-email-dgdegra@tycho.nsa.gov>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH 1/3] xsm: Add missing dummy hooks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

A few XSM hooks have been defined without implementation in dummy.c;
these will cause a null function pointer dereference if called. Also
implement the efi_call hook, which was incorrectly added without any
implementations.

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 xen/xsm/dummy.c       | 30 ++++++++++++++++++++++++++++++
 xen/xsm/flask/hooks.c |  6 ++++++
 2 files changed, 36 insertions(+)

diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index 7027ee7..5d35342 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -295,6 +295,21 @@ static char *dummy_show_security_evtchn (struct domain *d, const struct evtchn *
     return NULL;
 }
 
+static int dummy_get_pod_target(struct domain *d)
+{
+    return 0;
+}
+
+static int dummy_set_pod_target(struct domain *d)
+{
+    return 0;
+}
+
+static int dummy_get_device_group (uint32_t machine_bdf)
+{
+    return 0;
+}
+
 static int dummy_test_assign_device (uint32_t machine_bdf)
 {
     return 0;
@@ -503,6 +518,11 @@ static int dummy_firmware_info (void)
     return 0;
 }
 
+static int dummy_efi_call(void)
+{
+    return 0;
+}
+
 static int dummy_acpi_sleep (void)
 {
     return 0;
@@ -565,6 +585,11 @@ static int dummy_bind_pt_irq (struct domain *d, struct xen_domctl_bind_pt_irq *b
     return 0;
 }
 
+static int dummy_unbind_pt_irq (struct domain *d)
+{
+    return 0;
+}
+
 static int dummy_pin_mem_cacheattr (struct domain *d)
 {
     return 0;
@@ -652,6 +677,8 @@ void xsm_fixup_ops (struct xsm_operations *ops)
     set_to_dummy_if_null(ops, alloc_security_evtchn);
     set_to_dummy_if_null(ops, free_security_evtchn);
     set_to_dummy_if_null(ops, show_security_evtchn);
+    set_to_dummy_if_null(ops, get_pod_target);
+    set_to_dummy_if_null(ops, set_pod_target);
 
     set_to_dummy_if_null(ops, memory_adjust_reservation);
     set_to_dummy_if_null(ops, memory_stat_reservation);
@@ -670,6 +697,7 @@ void xsm_fixup_ops (struct xsm_operations *ops)
     set_to_dummy_if_null(ops, iomem_permission);
     set_to_dummy_if_null(ops, pci_config_permission);
 
+    set_to_dummy_if_null(ops, get_device_group);
     set_to_dummy_if_null(ops, test_assign_device);
     set_to_dummy_if_null(ops, assign_device);
     set_to_dummy_if_null(ops, deassign_device);
@@ -711,6 +739,7 @@ void xsm_fixup_ops (struct xsm_operations *ops)
     set_to_dummy_if_null(ops, physinfo);
     set_to_dummy_if_null(ops, platform_quirk);
     set_to_dummy_if_null(ops, firmware_info);
+    set_to_dummy_if_null(ops, efi_call);
     set_to_dummy_if_null(ops, acpi_sleep);
     set_to_dummy_if_null(ops, change_freq);
     set_to_dummy_if_null(ops, getidletime);
@@ -723,6 +752,7 @@ void xsm_fixup_ops (struct xsm_operations *ops)
     set_to_dummy_if_null(ops, remove_from_physmap);
     set_to_dummy_if_null(ops, sendtrigger);
     set_to_dummy_if_null(ops, bind_pt_irq);
+    set_to_dummy_if_null(ops, unbind_pt_irq);
     set_to_dummy_if_null(ops, pin_mem_cacheattr);
     set_to_dummy_if_null(ops, ext_vcpucontext);
     set_to_dummy_if_null(ops, vcpuextstate);
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 23b84f3..de79d66 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -1280,6 +1280,11 @@ static int flask_firmware_info(void)
     return domain_has_xen(current->domain, XEN__FIRMWARE);
 }
 
+static int flask_efi_call(void)
+{
+    return domain_has_xen(current->domain, XEN__FIRMWARE);
+}
+
 static int flask_acpi_sleep(void)
 {
     return domain_has_xen(current->domain, XEN__SLEEP);
@@ -1663,6 +1668,7 @@ static struct xsm_operations flask_ops = {
     .physinfo = flask_physinfo,
     .platform_quirk = flask_platform_quirk,
     .firmware_info = flask_firmware_info,
+    .efi_call = flask_efi_call,
     .acpi_sleep = flask_acpi_sleep,
     .change_freq = flask_change_freq,
     .getidletime = flask_getidletime,
-- 
1.7.11.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 16:24:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 16:24:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2PLf-0006xg-Cz; Fri, 17 Aug 2012 16:24:35 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T2PLe-0006xQ-2X
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 16:24:34 +0000
Received: from [85.158.143.99:25667] by server-1.bemta-4.messagelabs.com id
	19/1D-07754-1407E205; Fri, 17 Aug 2012 16:24:33 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-13.tower-216.messagelabs.com!1345220672!27834900!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29372 invoked from network); 17 Aug 2012 16:24:32 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-13.tower-216.messagelabs.com with SMTP;
	17 Aug 2012 16:24:32 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 17 Aug 2012 17:24:31 +0100
Message-Id: <502E8C89020000780009600F@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 17 Aug 2012 17:25:13 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part1F2E6C79.0__="
Subject: [Xen-devel] [PATCH] x86: don't expose SYSENTER on unknown CPUs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part1F2E6C79.0__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

So far we only ever set up the respective MSRs on Intel CPUs, yet we
hide the feature only on a 32-bit hypervisor. That prevents booting of
PV guests on top of a 64-bit hypervisor making use of the instruction
on unknown CPUs (VIA in this case).

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/cpu/common.c
+++ b/xen/arch/x86/cpu/common.c
@@ -55,6 +55,7 @@ static void default_init(struct cpuinfo_
 	/* Not much we can do here... */
 	/* Check if at least it has cpuid */
 	BUG_ON(c->cpuid_level == -1);
+	__clear_bit(X86_FEATURE_SEP, c->x86_capability);
 }
 
 static struct cpu_dev default_cpu = {




--=__Part1F2E6C79.0__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part1F2E6C79.0__=--


From xen-devel-bounces@lists.xen.org Fri Aug 17 16:24:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 16:24:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2PLj-0006yo-Qm; Fri, 17 Aug 2012 16:24:39 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <gbtju85@gmail.com>) id 1T2PLi-0006xE-DC
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 16:24:38 +0000
X-Env-Sender: gbtju85@gmail.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1345220671!9645425!1
X-Originating-IP: [209.85.215.45]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_20_30,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27388 invoked from network); 17 Aug 2012 16:24:32 -0000
Received: from mail-lpp01m010-f45.google.com (HELO
	mail-lpp01m010-f45.google.com) (209.85.215.45)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 16:24:32 -0000
Received: by lagz14 with SMTP id z14so2416666lag.32
	for <xen-devel@lists.xen.org>; Fri, 17 Aug 2012 09:24:31 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=ldH0/ia7Axyr55RuvlH7teKEuTSek+G+xspnYhgMjiQ=;
	b=Fz+CzwPw+qQQYWWvhkWhCDmeRMLpTquDsF+aRoRfAf3KKHMrVBWDsZiSfRpwBDWG2m
	c2ELz6n/xhHEDgv/sojhqh4GQ5fjfp6MQVNiz4eIPkCpGaHyB0019vv1zmUCG+11FE4K
	5O12t/k+w7NondHtL/Tl5PzNW/mUFjl1m3kad7w+7hF8H9I4I4Qb/E93w7g0crQCGXIs
	vmtdm2QUAQHVHqHOs+HtNCTerDZTII5u9iQ+OICVHo3imcwLJvu7WvB7QDsT2NgQ41SI
	FH/XY59rhpuWjQaFYZBfzTZ1Ho2UXMd5/NbxVXNPb2q1bUDE6PaE5N2lg746qa7FHoBD
	dFJQ==
MIME-Version: 1.0
Received: by 10.112.25.4 with SMTP id y4mr2528612lbf.61.1345220671230; Fri, 17
	Aug 2012 09:24:31 -0700 (PDT)
Received: by 10.114.2.193 with HTTP; Fri, 17 Aug 2012 09:24:31 -0700 (PDT)
In-Reply-To: <1345218274.10161.86.camel@zakaz.uk.xensource.com>
References: <CAEQjb-Rd2=DaxrxiK2TYzNNBH01w_5OgPeKxugpS26n4tGw4Yg@mail.gmail.com>
	<1344439371.32142.46.camel@zakaz.uk.xensource.com>
	<CAEQjb-T9Jg_9EjHi7fKUABBW3_PGNYgjU+cVGTLZ-ZiLA29AqA@mail.gmail.com>
	<1344441333.32142.48.camel@zakaz.uk.xensource.com>
	<CAEQjb-QvWYJtQu1vWJatitOgk6mA4TuUzFX2EdWSX3F3w6jGpQ@mail.gmail.com>
	<1345218274.10161.86.camel@zakaz.uk.xensource.com>
Date: Sat, 18 Aug 2012 00:24:31 +0800
Message-ID: <CAEQjb-S6c_0rjmymHFC7=Nft0bcPrVQbguQ2zpqHFsHmk9wn3w@mail.gmail.com>
From: Bei Guan <gbtju85@gmail.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] How to initialize the grant table in a HVM guest OS
 and its bios
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6199115711736849820=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============6199115711736849820==
Content-Type: multipart/alternative; boundary=bcaec554dc2840d55a04c77898b6

--bcaec554dc2840d55a04c77898b6
Content-Type: text/plain; charset=ISO-8859-1

Hi Ian,

Yes, I'm working on the PV support on tianocore UEFI bios.

I have read the code in drivers/xen/grant-table.c, and my understanding of
the initialization of the grant table in HVM is as follows. Please
correct me if I'm wrong.

1. DomU sets up the grant table for itself. (Uses the hypercall
HYPERVISOR_grant_table_op)
2. DomU maps the shared grant table to its address space. (Uses the
hypercall HYPERVISOR_memory_op)
3. DomU needs to map the shared grant table to the grant table installed
in step 1 (maybe it's done in the method apply_to_page_range()).
Then, DomU can operate the shared grant table through the installed grant
table.

Thank you very much.

Best Regards,
Bei Guan




> > Hi Ian,
> >
> >
> > Thank you very much for this information. It's very useful to me.
> >
> >
> > However, I'm still confused with the initialization of the grant table
> > in HVM.
> >
> >
> > The relationship of the methods in the initialization of the grant
> > table in linux source code (driver/xen) is:
> >
> platform_pci_init()-->gnttab_init()-->gnttab_resume()-->gnttab_map()-->arch_gnttab_map_shared()-->apply_to_page_range().
> >
> >
> > So, I am not sure what the function of the method
> > apply_to_page_range(), which is implemented in code file [1].
> > This function is a little complex. Is there any simple method to do
> > this? Thank you for your time.
>
> This function is the simple method ;-)
>
> All it basically does is iterate over the page tables corresponding to a
> range of addresses and call a user-supplied function on each leaf
> PTE. In the case of gnttab_map this user-supplied function simply
> sets the leaf PTEs to point to the right grant table page.
>
> I suppose you are working on tianocore? I've no idea what the page table
> layout is in that environment; I suppose it either has a linear map or
> some other way of getting at the leaf PTEs. Anyway, since the method to
> use is specific to the OS (or firmware) environment you are running in, I
> think you'll have to ask on the tianocore development list.
>
> Ian.
>
>
>

--bcaec554dc2840d55a04c77898b6--


--===============6199115711736849820==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6199115711736849820==--


From xen-devel-bounces@lists.xen.org Fri Aug 17 16:28:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 16:28:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2PP8-0007XY-Fi; Fri, 17 Aug 2012 16:28:10 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T2PP7-0007XM-CP
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 16:28:09 +0000
Received: from [85.158.143.99:40017] by server-1.bemta-4.messagelabs.com id
	42/70-07754-8117E205; Fri, 17 Aug 2012 16:28:08 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-7.tower-216.messagelabs.com!1345220888!25316220!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17470 invoked from network); 17 Aug 2012 16:28:08 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-7.tower-216.messagelabs.com with SMTP;
	17 Aug 2012 16:28:08 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 17 Aug 2012 17:28:07 +0100
Message-Id: <502E8D600200007800096034@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 17 Aug 2012 17:28:48 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Keir Fraser" <keir.xen@gmail.com>
References: <20120817151136.GA25138@aepfle.de>
	<CC542656.3C452%keir.xen@gmail.com> <20120817155617.GA32537@aepfle.de>
In-Reply-To: <20120817155617.GA32537@aepfle.de>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Olaf Hering <olaf@aepfle.de>, Keir Fraser <keir@xen.org>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] x86-64: refine the XSA-9 fix
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 17.08.12 at 17:56, Olaf Hering <olaf@aepfle.de> wrote:
> On Fri, Aug 17, Keir Fraser wrote:
> 
>> On 17/08/2012 16:11, "Olaf Hering" <olaf@aepfle.de> wrote:
>> 
>> > On Mon, Jun 18, Keir Fraser wrote:
>> > 
>> >> On 13/06/2012 11:04, "Jan Beulich" <JBeulich@suse.com> wrote:
>> >> 
>> >>> Our product management wasn't happy with the "solution" for XSA-9, and
>> >>> demanded that customer systems must continue to boot. Rather than
>> >>> having our and perhaps other distros carry non-trivial patches, allow
>> >>> for more fine grained control (panic on boot, deny guest creation, or
>> >>> merely warn) by means of a single line change.
>> >> 
>> >> All this seems to allow is to boot but not create domU-s. Which seems a bit
>> >> pointless.
>> > 
>> > Refusing to boot into dom0 with no good reason is a good way to lose
>> > remote control of a system without serial console. Not funny.
>> > 
>> > Fortunately I booted and tested with sles11 Xen first before ruining the
>> > box with plain xen-unstable.
>> > 
>> > So, please apply this patch and remove the panic() from
>> > ./xen/arch/x86/cpu/amd.c
>> 
>> Okay, that's a good argument for that patch.
> 
> Oh, now that the context was posted again:
> With the patch the box would still panic by default. Leaving it zero to
> refuse guest creation looks like a sensible default.

Keir, should I change the default then before committing?

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 16:42:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 16:42:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2PcZ-0007yt-7Z; Fri, 17 Aug 2012 16:42:03 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1T2PcX-0007yo-MN
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 16:42:01 +0000
Received: from [85.158.143.99:21296] by server-3.bemta-4.messagelabs.com id
	65/F6-09529-9547E205; Fri, 17 Aug 2012 16:42:01 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-16.tower-216.messagelabs.com!1345221720!16154391!1
X-Originating-IP: [209.85.212.169]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31863 invoked from network); 17 Aug 2012 16:42:00 -0000
Received: from mail-wi0-f169.google.com (HELO mail-wi0-f169.google.com)
	(209.85.212.169)
	by server-16.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 16:42:00 -0000
Received: by wibhm2 with SMTP id hm2so1640115wib.2
	for <xen-devel@lists.xen.org>; Fri, 17 Aug 2012 09:42:00 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:cc:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=o4j70zvCRtwLfCT0Ok+r228OQIqL3Sd2X2muEWozvow=;
	b=AK9VppQW635AJaIpO1ts98r6vngAzfQVr2u2tRdP7HuFZ3vw5B+T6hbcvUd6a40jDe
	mmG1IcUtRnU8dpz2uhWdH5KjhsMm05IZrct131DL/bnUbaveUVO9PZFiuIFGAz/XEs1r
	YLn53ZXqhAJcF+s3DWRMxc2JzJb9GkHpRa6t43yWqOLGj4GfcQpg07ffaDrvqaquFHTQ
	r6R6dWHR0vpz095mjZ8KxXVSuD//P47twGdT9ZDM2fdRt8L29nijjVwf8XUI4KXZTBoF
	n7hOAIfiJzC7q3H6+Y7Th7mjCYKZ5zSz1EKji3Udyaqm9f/sWInIbui8etwN4aOfhmHV
	msig==
Received: by 10.180.84.1 with SMTP id u1mr6285059wiy.15.1345221720146;
	Fri, 17 Aug 2012 09:42:00 -0700 (PDT)
Received: from [192.168.1.68]
	(host86-157-166-190.range86-157.btcentralplus.com. [86.157.166.190])
	by mx.google.com with ESMTPS id fr4sm15847630wib.8.2012.08.17.09.41.58
	(version=SSLv3 cipher=OTHER); Fri, 17 Aug 2012 09:41:59 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.32.0.111121
Date: Fri, 17 Aug 2012 17:41:56 +0100
From: Keir Fraser <keir.xen@gmail.com>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <CC5432E4.3C46E%keir.xen@gmail.com>
Thread-Topic: [Xen-devel] [PATCH] x86-64: refine the XSA-9 fix
Thread-Index: Ac18lzaB5AbOR1d/30Wyh4ZF713eBw==
In-Reply-To: <502E8D600200007800096034@nat28.tlf.novell.com>
Mime-version: 1.0
Cc: Olaf Hering <olaf@aepfle.de>, Keir Fraser <keir@xen.org>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] x86-64: refine the XSA-9 fix
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 17/08/2012 17:28, "Jan Beulich" <JBeulich@suse.com> wrote:

>>>> On 17.08.12 at 17:56, Olaf Hering <olaf@aepfle.de> wrote:
>> On Fri, Aug 17, Keir Fraser wrote:
>> 
>>> On 17/08/2012 16:11, "Olaf Hering" <olaf@aepfle.de> wrote:
>>> 
>>>> On Mon, Jun 18, Keir Fraser wrote:
>>>> 
>>>>> On 13/06/2012 11:04, "Jan Beulich" <JBeulich@suse.com> wrote:
>>>>> 
>>>>>> Our product management wasn't happy with the "solution" for XSA-9, and
>>>>>> demanded that customer systems must continue to boot. Rather than
>>>>>> having our and perhaps other distros carry non-trivial patches, allow
>>>>>> for more fine grained control (panic on boot, deny guest creation, or
>>>>>> merely warn) by means of a single line change.
>>>>> 
>>>>> All this seems to allow is to boot but not create domU-s. Which seems a
>>>>> bit
>>>>> pointless.
>>>> 
>>>> Refusing to boot into dom0 with no good reason is a good way to lose
>>>> remote control of a system without serial console. Not funny.
>>>> 
>>>> Fortunately I booted and tested with sles11 Xen first before ruining the
>>>> box with plain xen-unstable.
>>>> 
>>>> So, please apply this patch and remove the panic() from
>>>> ./xen/arch/x86/cpu/amd.c
>>> 
>>> Okay, that's a good argument for that patch.
>> 
>> Oh, now that the context was posted again:
>> With the patch the box would still panic by default. Leaving it zero to
>> refuse guest creation looks like a sensible default.
> 
> Keir, should I change the default then before committing?

Yes please.

 -- Keir

> Jan
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	msig==
Received: by 10.180.84.1 with SMTP id u1mr6285059wiy.15.1345221720146;
	Fri, 17 Aug 2012 09:42:00 -0700 (PDT)
Received: from [192.168.1.68]
	(host86-157-166-190.range86-157.btcentralplus.com. [86.157.166.190])
	by mx.google.com with ESMTPS id fr4sm15847630wib.8.2012.08.17.09.41.58
	(version=SSLv3 cipher=OTHER); Fri, 17 Aug 2012 09:41:59 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.32.0.111121
Date: Fri, 17 Aug 2012 17:41:56 +0100
From: Keir Fraser <keir.xen@gmail.com>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <CC5432E4.3C46E%keir.xen@gmail.com>
Thread-Topic: [Xen-devel] [PATCH] x86-64: refine the XSA-9 fix
Thread-Index: Ac18lzaB5AbOR1d/30Wyh4ZF713eBw==
In-Reply-To: <502E8D600200007800096034@nat28.tlf.novell.com>
Mime-version: 1.0
Cc: Olaf Hering <olaf@aepfle.de>, Keir Fraser <keir@xen.org>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] x86-64: refine the XSA-9 fix
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 17/08/2012 17:28, "Jan Beulich" <JBeulich@suse.com> wrote:

>>>> On 17.08.12 at 17:56, Olaf Hering <olaf@aepfle.de> wrote:
>> On Fri, Aug 17, Keir Fraser wrote:
>> 
>>> On 17/08/2012 16:11, "Olaf Hering" <olaf@aepfle.de> wrote:
>>> 
>>>> On Mon, Jun 18, Keir Fraser wrote:
>>>> 
>>>>> On 13/06/2012 11:04, "Jan Beulich" <JBeulich@suse.com> wrote:
>>>>> 
>>>>>> Our product management wasn't happy with the "solution" for XSA-9, and
>>>>>> demanded that customer systems must continue to boot. Rather than
>>>>>> having our and perhaps other distros carry non-trivial patches, allow
>>>>>> for more fine grained control (panic on boot, deny guest creation, or
>>>>>> merely warn) by means of a single line change.
>>>>> 
>>>>> All this seems to allow is to boot but not create domU-s, which seems
>>>>> a bit pointless.
>>>> 
>>>> Refusing to boot into dom0 with no good reason is a good way to lose
>>>> remote control of a system without serial console. Not funny.
>>>> 
>>>> Fortunately I booted and tested with sles11 Xen first before ruining the
>>>> box with plain xen-unstable.
>>>> 
>>>> So, please apply this patch and remove the panic() from
>>>> ./xen/arch/x86/cpu/amd.c
>>> 
>>> Okay, that's a good argument for that patch.
>> 
>> Oh, now that the context was posted again:
>> With the patch the box would still panic by default. Leaving it at zero to
>> refuse guest creation looks like a sensible default.
> 
> Keir, should I change the default then before committing?

Yes please.

 -- Keir

> Jan
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
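For readers following the thread: the patch under discussion replaces an unconditional panic() with a single tunable selecting one of three behaviours (panic at boot, refuse guest creation, or merely warn), with refusing guest creation as the agreed default. A minimal user-space sketch of such a tri-state control follows; every name here is hypothetical and not taken from the actual Xen patch.

```c
/* Hypothetical sketch of a tri-state workaround control, not the actual
 * Xen patch: warn only, refuse guest creation, or panic at boot. */
enum xsa9_action { XSA9_WARN = 0, XSA9_DENY_GUESTS = 1, XSA9_PANIC = 2 };

/* Default per the thread: keep dom0 bootable, refuse domU creation. */
static enum xsa9_action xsa9_mode = XSA9_DENY_GUESTS;

/* Returns 1 if boot should abort on an affected CPU, 0 otherwise. */
int xsa9_check_boot(int cpu_affected)
{
    if (!cpu_affected)
        return 0;
    return xsa9_mode == XSA9_PANIC;  /* panic only when explicitly requested */
}

/* Returns 1 if creating an unprivileged guest must be refused. */
int xsa9_check_guest_create(int cpu_affected)
{
    if (!cpu_affected)
        return 0;
    return xsa9_mode >= XSA9_DENY_GUESTS;
}
```

With the default above, a box on an affected CPU still boots (no panic) but denies domU creation, which is the behaviour Olaf and Keir converge on.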

From xen-devel-bounces@lists.xen.org Fri Aug 17 16:43:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 16:43:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2PeC-00083i-N7; Fri, 17 Aug 2012 16:43:44 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1T2PeA-00083Z-KI
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 16:43:42 +0000
Received: from [85.158.139.83:4129] by server-3.bemta-5.messagelabs.com id
	82/F0-27237-DB47E205; Fri, 17 Aug 2012 16:43:41 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-4.tower-182.messagelabs.com!1345221821!25994103!1
X-Originating-IP: [74.125.82.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18496 invoked from network); 17 Aug 2012 16:43:41 -0000
Received: from mail-we0-f173.google.com (HELO mail-we0-f173.google.com)
	(74.125.82.173)
	by server-4.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 16:43:41 -0000
Received: by weyz53 with SMTP id z53so3057333wey.32
	for <xen-devel@lists.xen.org>; Fri, 17 Aug 2012 09:43:41 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=FoWZ8UcTeHALcifd2/LmmCPu58P9rfdBZMv/K+128eA=;
	b=grroKzfTdslnHBbBC69liX/TWfzgGgwRebEw/Mmwa0RuQh7CIEmslBCjEojaJLQY5p
	R8gQYraJOzfiz7ZbUvYQYOqXwBVc17Z4/sE3g+fH5VURNrGHgkRNYA4AoCVnrsJ9NrCh
	lWVYb6wNj5HnQ2SQkrV6l+OfcL+6FjHYeKYDoevrnwXEkVSpGyY6yqG+xJWGx+qJanRy
	I1yRD1ivW/+9jHraQQppTr5tDkjzeF9muxLP0QNlqWF3wjTrRKA+Ejo2RS0ZJqmLCb5g
	hSJ3esGkILsY9fFtV/1U5/FXmNRBQvHYduE1dPvr64qmccX6cYpqdh9Z6MCpWvG+Ecn+
	Na6w==
Received: by 10.216.138.13 with SMTP id z13mr2779907wei.65.1345221821214;
	Fri, 17 Aug 2012 09:43:41 -0700 (PDT)
Received: from [192.168.1.68]
	(host86-157-166-190.range86-157.btcentralplus.com. [86.157.166.190])
	by mx.google.com with ESMTPS id bc2sm15857776wib.0.2012.08.17.09.43.39
	(version=SSLv3 cipher=OTHER); Fri, 17 Aug 2012 09:43:40 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.32.0.111121
Date: Fri, 17 Aug 2012 17:43:36 +0100
From: Keir Fraser <keir.xen@gmail.com>
To: Jan Beulich <JBeulich@suse.com>,
	xen-devel <xen-devel@lists.xen.org>
Message-ID: <CC543348.3C46F%keir.xen@gmail.com>
Thread-Topic: [Xen-devel] [PATCH] x86: don't expose SYSENTER on unknown CPUs
Thread-Index: Ac18l3IcnnfTxNSj/0ygbGkwHoXmgQ==
In-Reply-To: <502E8C89020000780009600F@nat28.tlf.novell.com>
Mime-version: 1.0
Subject: Re: [Xen-devel] [PATCH] x86: don't expose SYSENTER on unknown CPUs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 17/08/2012 17:25, "Jan Beulich" <JBeulich@suse.com> wrote:

> So far we only ever set up the respective MSRs on Intel CPUs, yet we
> hide the feature only on a 32-bit hypervisor. That prevents booting of
> PV guests on top of a 64-bit hypervisor making use of the instruction
> on unknown CPUs (VIA in this case).
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Keir Fraser <keir@xen.org>

> --- a/xen/arch/x86/cpu/common.c
> +++ b/xen/arch/x86/cpu/common.c
> @@ -55,6 +55,7 @@ static void default_init(struct cpuinfo_
> /* Not much we can do here... */
> /* Check if at least it has cpuid */
> BUG_ON(c->cpuid_level == -1);
> + __clear_bit(X86_FEATURE_SEP, c->x86_capability);
>  }
>  
>  static struct cpu_dev default_cpu = {
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
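The quoted one-liner clears the SEP (SYSENTER) capability bit in default_init(), so CPUs with no vendor-specific setup never advertise an instruction whose MSRs were never programmed. A small user-space model of that bitmap operation is below; the bit index and helper names are illustrative, not Xen's actual definitions.

```c
#include <limits.h>

#define X86_FEATURE_SEP 11  /* illustrative bit index, not Xen's feature table */
#define NCAPINTS 3          /* illustrative capability-word count */

/* Clear one feature bit in a capability bitmap, as __clear_bit() does. */
static void clear_feature(unsigned int bit, unsigned int *caps)
{
    caps[bit / (sizeof(unsigned int) * CHAR_BIT)] &=
        ~(1u << (bit % (sizeof(unsigned int) * CHAR_BIT)));
}

/* Test one feature bit in a capability bitmap. */
static int has_feature(unsigned int bit, const unsigned int *caps)
{
    return (caps[bit / (sizeof(unsigned int) * CHAR_BIT)] >>
            (bit % (sizeof(unsigned int) * CHAR_BIT))) & 1u;
}
```

Clearing only the targeted bit leaves every other advertised capability intact, which is why a single __clear_bit() call suffices in the patch.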

From xen-devel-bounces@lists.xen.org Fri Aug 17 17:09:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 17:09:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2Q2W-0008SQ-2h; Fri, 17 Aug 2012 17:08:52 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jljusten@gmail.com>) id 1T2Q2T-0008SL-VR
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 17:08:50 +0000
Received: from [85.158.143.35:17861] by server-3.bemta-4.messagelabs.com id
	F2/CA-09529-1AA7E205; Fri, 17 Aug 2012 17:08:49 +0000
X-Env-Sender: jljusten@gmail.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1345223327!14616616!1
X-Originating-IP: [209.85.212.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2585 invoked from network); 17 Aug 2012 17:08:48 -0000
Received: from mail-vb0-f45.google.com (HELO mail-vb0-f45.google.com)
	(209.85.212.45)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 17:08:48 -0000
Received: by vbip1 with SMTP id p1so4300940vbi.32
	for <xen-devel@lists.xen.org>; Fri, 17 Aug 2012 10:08:47 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=fIiflhoOtX+Y6C2X3Mk9BnkW6iK/wduU83UXjsnP18w=;
	b=oydopP89kh/fy9yx5yxbDPg2YVx1z508aJ87cyKzyH02F7AIVlIZfbe1lBF8YL+Bqs
	2FBF83NGxXXGD9NRcTWknMhVTz7e3X6K5r5pQ414A8Kuv8vumceAfhL6Oljthcl2n665
	DbuHPZ4CFRlhhMHV6nOnumfKZRM+z3rVFmZhQyVEohe+nPDrZxhmKBrcanp2+cSJQ903
	YmPMuX7zXMFTm7fEuU4NGN5oUyU4xHYKITzlKO2GjujN7IIuBiTKZfRd3vbDbAk9Pe+Z
	ayuRRqCV7WZ+C4hYTv0Pa/FO5MwMbgVGYPXhNQd8DuXHNfdoC7r2J0WwW2A0MV3CfSyG
	E2Cg==
MIME-Version: 1.0
Received: by 10.52.97.230 with SMTP id ed6mr2537885vdb.65.1345223327144; Fri,
	17 Aug 2012 10:08:47 -0700 (PDT)
Received: by 10.58.152.5 with HTTP; Fri, 17 Aug 2012 10:08:47 -0700 (PDT)
In-Reply-To: <1345218274.10161.86.camel@zakaz.uk.xensource.com>
References: <CAEQjb-Rd2=DaxrxiK2TYzNNBH01w_5OgPeKxugpS26n4tGw4Yg@mail.gmail.com>
	<1344439371.32142.46.camel@zakaz.uk.xensource.com>
	<CAEQjb-T9Jg_9EjHi7fKUABBW3_PGNYgjU+cVGTLZ-ZiLA29AqA@mail.gmail.com>
	<1344441333.32142.48.camel@zakaz.uk.xensource.com>
	<CAEQjb-QvWYJtQu1vWJatitOgk6mA4TuUzFX2EdWSX3F3w6jGpQ@mail.gmail.com>
	<1345218274.10161.86.camel@zakaz.uk.xensource.com>
Date: Fri, 17 Aug 2012 10:08:47 -0700
Message-ID: <CAFe8ug_YBeupJf9Kdq100WAYoKv_-otRppL0ACejJimjsTu-Nw@mail.gmail.com>
From: Jordan Justen <jljusten@gmail.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: Bei Guan <gbtju85@gmail.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] How to initialize the grant table in a HVM guest OS
 and its bios
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Aug 17, 2012 at 8:44 AM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Fri, 2012-08-17 at 16:31 +0100, Bei Guan wrote:
>>
>> 2012/8/8 Ian Campbell <Ian.Campbell@citrix.com>
>>         On Wed, 2012-08-08 at 16:48 +0100, Bei Guan wrote:
>>
>>         > Thank you very much for your help.
>>         > Is there any example code of initialization of grant table
>>         in HVM that
>>         > I can refer to?
>>
>>
>>         The PVHVM support in upstream Linux would be a good place to
>>         look.
>>
>>         So might the code in the xen tree in
>>         unmodified_drivers/linux-2.6/
>>
>>         IIRC Daniel got grant tables working in SeaBIOS last summer
>>         for GSoC so
>>         you might also find some useful examples in
>>         git://github.com/evildani/seabios_patch.git
>> Hi Ian,
>>
>>
>> Thank you very much for this information. It's very useful to me.
>>
>>
>> However, I'm still confused with the initialization of the grant table
>> in HVM.
>>
>>
>> The relationship of the methods in the initialization of the grant
>> table in linux source code (driver/xen) is:
>> platform_pci_init()-->gnttab_init()-->gnttab_resume()-->gnttab_map()-->arch_gnttab_map_shared()-->apply_to_page_range().
>>
>>
>> So, I am not sure what the method apply_to_page_range(), which is
>> implemented in code file [1], actually does.
>> This function is a little complex. Is there a simpler way to do
>> this? Thank you for your time.
>
> This function is the simple method ;-)
>
> All it basically does is iterate over the page tables corresponding to a
> range of addresses and call a user-supplied function on each leaf
> PTE. In the case of gnttab_map this user-supplied function simply
> sets the leaf PTEs to point to the right grant table page.
>
> I suppose you are working on tianocore? I've no idea what the page table
> layout is in that environment; I suppose it either has a linear map or
> some other way of getting at the leaf PTEs. Anyway, since the method to
> use is specific to the OS (or firmware) environment you are running in, I
> think you'll have to ask on the tianocore development list.

At boot time all pages are identity mapped. I don't think we need this
mapping step for our firmware. Does that sound right?

-Jordan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
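Ian's description of apply_to_page_range(), iterating over the page tables covering an address range and invoking a caller-supplied function on each leaf PTE, can be modelled in miniature as follows. A flat single-level array stands in for the real multi-level page-table walk, and every name here is illustrative rather than the kernel's.

```c
#define PAGE_SHIFT 12
#define PAGE_SIZE  (1ul << PAGE_SHIFT)

typedef unsigned long pte_t;

/* Callback applied to each leaf PTE in the range, as in apply_to_page_range(). */
typedef int (*pte_fn_t)(pte_t *pte, unsigned long addr, void *data);

/* Miniature single-level walk: call fn on the PTE of every page in
 * [addr, addr + size), stopping early if the callback reports an error. */
static int apply_to_range(pte_t *table, unsigned long addr, unsigned long size,
                          pte_fn_t fn, void *data)
{
    unsigned long a;
    for (a = addr; a < addr + size; a += PAGE_SIZE) {
        int rc = fn(&table[a >> PAGE_SHIFT], a, data);
        if (rc)
            return rc;
    }
    return 0;
}

/* Example callback: point each PTE at successive "grant table" frames. */
static int map_grant_frame(pte_t *pte, unsigned long addr, void *data)
{
    unsigned long *next_mfn = data;
    (void)addr;
    *pte = (*next_mfn)++ << PAGE_SHIFT;
    return 0;
}
```

In the real gnttab_map path the user-supplied function plays the role of map_grant_frame here: it rewrites each leaf PTE to point at the next grant-table page.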

From xen-devel-bounces@lists.xen.org Fri Aug 17 17:09:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 17:09:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2Q2p-0008TK-FZ; Fri, 17 Aug 2012 17:09:11 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T2Q2o-0008TA-Jp
	for xen-devel@lists.xensource.com; Fri, 17 Aug 2012 17:09:10 +0000
Received: from [85.158.143.35:51553] by server-1.bemta-4.messagelabs.com id
	45/21-07754-5BA7E205; Fri, 17 Aug 2012 17:09:09 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-10.tower-21.messagelabs.com!1345223349!11087720!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk0MzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23507 invoked from network); 17 Aug 2012 17:09:09 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-10.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 17:09:09 -0000
X-IronPort-AV: E=Sophos;i="4.77,785,1336348800"; d="scan'208";a="14064569"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 17:09:09 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 17 Aug 2012 18:09:09 +0100
Date: Fri, 17 Aug 2012 18:08:51 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <1345132488-27323-2-git-send-email-konrad.wilk@oracle.com>
Message-ID: <alpine.DEB.2.02.1208171808350.15568@kaball.uk.xensource.com>
References: <1345132488-27323-1-git-send-email-konrad.wilk@oracle.com>
	<1345132488-27323-2-git-send-email-konrad.wilk@oracle.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: [Xen-devel] [PATCH 1/5] xen/swiotlb: Simplify the logic.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 16 Aug 2012, Konrad Rzeszutek Wilk wrote:
> It's pretty easy:
>  1). We only check to see if we need Xen SWIOTLB for PV guests.
>  2). If swiotlb=force or iommu=soft is set, then Xen SWIOTLB will
>      be enabled.
>  3). If it is an initial domain, then Xen SWIOTLB will be enabled.
>  4). Native SWIOTLB must be disabled for PV guests.
> 
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>


>  arch/x86/xen/pci-swiotlb-xen.c |    9 +++++----
>  1 files changed, 5 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/x86/xen/pci-swiotlb-xen.c b/arch/x86/xen/pci-swiotlb-xen.c
> index 967633a..b6a5340 100644
> --- a/arch/x86/xen/pci-swiotlb-xen.c
> +++ b/arch/x86/xen/pci-swiotlb-xen.c
> @@ -34,19 +34,20 @@ static struct dma_map_ops xen_swiotlb_dma_ops = {
>  int __init pci_xen_swiotlb_detect(void)
>  {
>  
> +	if (!xen_pv_domain())
> +		return 0;
> +
>  	/* If running as PV guest, either iommu=soft, or swiotlb=force will
>  	 * activate this IOMMU. If running as PV privileged, activate it
>  	 * irregardless.
>  	 */
> -	if ((xen_initial_domain() || swiotlb || swiotlb_force) &&
> -	    (xen_pv_domain()))
> +	if ((xen_initial_domain() || swiotlb || swiotlb_force))
>  		xen_swiotlb = 1;
>  
>  	/* If we are running under Xen, we MUST disable the native SWIOTLB.
>  	 * Don't worry about swiotlb_force flag activating the native, as
>  	 * the 'swiotlb' flag is the only one turning it on. */
> -	if (xen_pv_domain())
> -		swiotlb = 0;
> +	swiotlb = 0;
>  
>  	return xen_swiotlb;
>  }
> -- 
> 1.7.7.6
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
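The simplification being acked hoists the xen_pv_domain() test into an early return, so the two later checks no longer need to repeat it. The resulting control flow can be sketched with stand-in flags; none of these names are the kernel's actual globals.

```c
/* Stand-in flags modelling the globals consulted by pci_xen_swiotlb_detect(). */
struct env {
    int pv_domain;       /* stands in for xen_pv_domain() */
    int initial_domain;  /* stands in for xen_initial_domain() */
    int swiotlb;         /* native SWIOTLB requested */
    int swiotlb_force;   /* swiotlb=force on the command line */
};

/* Returns 1 if Xen SWIOTLB should be enabled; for PV guests it also
 * unconditionally disables the native SWIOTLB, as in the patch. */
static int detect_xen_swiotlb(struct env *e)
{
    int xen_swiotlb = 0;

    if (!e->pv_domain)          /* only PV guests ever need Xen SWIOTLB */
        return 0;

    /* iommu=soft, swiotlb=force, or being dom0 turns it on. */
    if (e->initial_domain || e->swiotlb || e->swiotlb_force)
        xen_swiotlb = 1;

    e->swiotlb = 0;             /* native SWIOTLB must be off for PV guests */
    return xen_swiotlb;
}
```

The early return makes points 1 and 4 of the quoted commit message structural: non-PV guests fall out immediately, and the PV-only tail no longer needs its own xen_pv_domain() guards.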

	<1345132488-27323-2-git-send-email-konrad.wilk@oracle.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: [Xen-devel] [PATCH 1/5] xen/swiotlb: Simplify the logic.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 16 Aug 2012, Konrad Rzeszutek Wilk wrote:
> It's pretty easy:
>  1). We only check to see if we need Xen SWIOTLB for PV guests.
>  2). If swiotlb=force or iommu=soft is set, then Xen SWIOTLB will
>      be enabled.
>  3). If it is an initial domain, then Xen SWIOTLB will be enabled.
>  4). Native SWIOTLB must be disabled for PV guests.
> 
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>


>  arch/x86/xen/pci-swiotlb-xen.c |    9 +++++----
>  1 files changed, 5 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/x86/xen/pci-swiotlb-xen.c b/arch/x86/xen/pci-swiotlb-xen.c
> index 967633a..b6a5340 100644
> --- a/arch/x86/xen/pci-swiotlb-xen.c
> +++ b/arch/x86/xen/pci-swiotlb-xen.c
> @@ -34,19 +34,20 @@ static struct dma_map_ops xen_swiotlb_dma_ops = {
>  int __init pci_xen_swiotlb_detect(void)
>  {
>  
> +	if (!xen_pv_domain())
> +		return 0;
> +
>  	/* If running as PV guest, either iommu=soft, or swiotlb=force will
>  	 * activate this IOMMU. If running as PV privileged, activate it
>  	 * irregardless.
>  	 */
> -	if ((xen_initial_domain() || swiotlb || swiotlb_force) &&
> -	    (xen_pv_domain()))
> +	if ((xen_initial_domain() || swiotlb || swiotlb_force))
>  		xen_swiotlb = 1;
>  
>  	/* If we are running under Xen, we MUST disable the native SWIOTLB.
>  	 * Don't worry about swiotlb_force flag activating the native, as
>  	 * the 'swiotlb' flag is the only one turning it on. */
> -	if (xen_pv_domain())
> -		swiotlb = 0;
> +	swiotlb = 0;
>  
>  	return xen_swiotlb;
>  }
> -- 
> 1.7.7.6
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 17:15:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 17:15:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2Q8f-0000Ks-FO; Fri, 17 Aug 2012 17:15:13 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T2Q8e-0000Kn-B0
	for xen-devel@lists.xensource.com; Fri, 17 Aug 2012 17:15:12 +0000
Received: from [85.158.143.35:35533] by server-1.bemta-4.messagelabs.com id
	B8/D4-07754-F1C7E205; Fri, 17 Aug 2012 17:15:11 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1345223710!14692184!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk0MzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 934 invoked from network); 17 Aug 2012 17:15:10 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 17:15:10 -0000
X-IronPort-AV: E=Sophos;i="4.77,785,1336348800"; d="scan'208";a="14064622"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 17:15:09 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 17 Aug 2012 18:15:09 +0100
Date: Fri, 17 Aug 2012 18:14:52 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <1345132488-27323-6-git-send-email-konrad.wilk@oracle.com>
Message-ID: <alpine.DEB.2.02.1208171814440.15568@kaball.uk.xensource.com>
References: <1345132488-27323-1-git-send-email-konrad.wilk@oracle.com>
	<1345132488-27323-6-git-send-email-konrad.wilk@oracle.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: [Xen-devel] [PATCH 5/5] xen/pcifront: Use Xen-SWIOTLB when
 initting if required.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 16 Aug 2012, Konrad Rzeszutek Wilk wrote:
> We piggyback on "xen/swiotlb: Use the swiotlb_late_init_with_tbl to init
> Xen-SWIOTLB late when PV PCI is used." functionality to start up
> the Xen-SWIOTLB if we are hot-plugged. This allows us to bypass
> the need to supply 'iommu=soft' on the Linux command line (mostly).
> With this patch, if a user forgets 'iommu=soft' on the command line
> and hot-plugs a PCI device, they will get:
> 
> pcifront pci-0: Installing PCI frontend
> Warning: only able to allocate 4 MB for software IO TLB
> software IO TLB [mem 0x2a000000-0x2a3fffff] (4MB) mapped at [ffff88002a000000-ffff88002a3fffff]
> pcifront pci-0: Creating PCI Frontend Bus 0000:00
> pcifront pci-0: PCI host bridge to bus 0000:00
> pci_bus 0000:00: root bus resource [io  0x0000-0xffff]
> pci_bus 0000:00: root bus resource [mem 0x00000000-0xfffffffff]
> pci 0000:00:00.0: [8086:10d3] type 00 class 0x020000
> pci 0000:00:00.0: reg 10: [mem 0xfe5c0000-0xfe5dffff]
> pci 0000:00:00.0: reg 14: [mem 0xfe500000-0xfe57ffff]
> pci 0000:00:00.0: reg 18: [io  0xe000-0xe01f]
> pci 0000:00:00.0: reg 1c: [mem 0xfe5e0000-0xfe5e3fff]
> pcifront pci-0: claiming resource 0000:00:00.0/0
> pcifront pci-0: claiming resource 0000:00:00.0/1
> pcifront pci-0: claiming resource 0000:00:00.0/2
> pcifront pci-0: claiming resource 0000:00:00.0/3
> e1000e: Intel(R) PRO/1000 Network Driver - 2.0.0-k
> e1000e: Copyright(c) 1999 - 2012 Intel Corporation.
> e1000e 0000:00:00.0: Disabling ASPM L0s L1
> e1000e 0000:00:00.0: enabling device (0000 -> 0002)
> e1000e 0000:00:00.0: Xen PCI mapped GSI16 to IRQ34
> e1000e 0000:00:00.0: (unregistered net_device): Interrupt Throttling Rate (ints/sec) set to dynamic conservative mode
> e1000e 0000:00:00.0: eth0: (PCI Express:2.5GT/s:Width x1) 00:1b:21:ab:c6:13
> e1000e 0000:00:00.0: eth0: Intel(R) PRO/1000 Network Connection
> e1000e 0000:00:00.0: eth0: MAC: 3, PHY: 8, PBA No: E46981-005
> 
> The "Warning: only able to allocate ..." message will go away if one
> supplies 'iommu=soft' instead, as we then have a higher chance of being
> able to allocate large swaths of memory.
> 
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>


Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

>  drivers/pci/xen-pcifront.c |   14 ++++++++++----
>  1 files changed, 10 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/pci/xen-pcifront.c b/drivers/pci/xen-pcifront.c
> index d6cc62c..ca92801 100644
> --- a/drivers/pci/xen-pcifront.c
> +++ b/drivers/pci/xen-pcifront.c
> @@ -21,6 +21,7 @@
>  #include <linux/bitops.h>
>  #include <linux/time.h>
>  
> +#include <asm/xen/swiotlb-xen.h>
>  #define INVALID_GRANT_REF (0)
>  #define INVALID_EVTCHN    (-1)
>  
> @@ -668,7 +669,7 @@ static irqreturn_t pcifront_handler_aer(int irq, void *dev)
>  	schedule_pcifront_aer_op(pdev);
>  	return IRQ_HANDLED;
>  }
> -static int pcifront_connect(struct pcifront_device *pdev)
> +static int pcifront_connect_and_init_dma(struct pcifront_device *pdev)
>  {
>  	int err = 0;
>  
> @@ -681,9 +682,13 @@ static int pcifront_connect(struct pcifront_device *pdev)
>  		dev_err(&pdev->xdev->dev, "PCI frontend already installed!\n");
>  		err = -EEXIST;
>  	}
> -
>  	spin_unlock(&pcifront_dev_lock);
>  
> +	if (!err && !swiotlb_nr_tbl()) {
> +		err = pci_xen_swiotlb_init_late();
> +		if (err)
> +			dev_err(&pdev->xdev->dev, "Could not setup SWIOTLB!\n");
> +	}
>  	return err;
>  }
>  
> @@ -699,6 +704,7 @@ static void pcifront_disconnect(struct pcifront_device *pdev)
>  
>  	spin_unlock(&pcifront_dev_lock);
>  }
> +
>  static struct pcifront_device *alloc_pdev(struct xenbus_device *xdev)
>  {
>  	struct pcifront_device *pdev;
> @@ -842,10 +848,10 @@ static int __devinit pcifront_try_connect(struct pcifront_device *pdev)
>  	    XenbusStateInitialised)
>  		goto out;
>  
> -	err = pcifront_connect(pdev);
> +	err = pcifront_connect_and_init_dma(pdev);
>  	if (err) {
>  		xenbus_dev_fatal(pdev->xdev, err,
> -				 "Error connecting PCI Frontend");
> +				 "Error setting up PCI Frontend");
>  		goto out;
>  	}
>  
> -- 
> 1.7.7.6
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 17:26:42 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 17:26:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2QJT-0000VL-Q2; Fri, 17 Aug 2012 17:26:23 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T2QJR-0000VD-RE
	for xen-devel@lists.xensource.com; Fri, 17 Aug 2012 17:26:22 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1345224374!9891242!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk0MzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6672 invoked from network); 17 Aug 2012 17:26:14 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 17:26:14 -0000
X-IronPort-AV: E=Sophos;i="4.77,785,1336348800"; d="scan'208";a="14064737"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 17:26:14 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 17 Aug 2012 18:26:14 +0100
Date: Fri, 17 Aug 2012 18:25:56 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <1345132488-27323-5-git-send-email-konrad.wilk@oracle.com>
Message-ID: <alpine.DEB.2.02.1208171818260.15568@kaball.uk.xensource.com>
References: <1345132488-27323-1-git-send-email-konrad.wilk@oracle.com>
	<1345132488-27323-5-git-send-email-konrad.wilk@oracle.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: [Xen-devel] [PATCH 4/5] xen/swiotlb: Use the
 swiotlb_late_init_with_tbl to init Xen-SWIOTLB late when PV PCI is used.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 16 Aug 2012, Konrad Rzeszutek Wilk wrote:
> With this patch we provide the functionality to initialize the
> Xen-SWIOTLB late in the bootup cycle - specifically for
> Xen PCI-frontend. It will still work if the user has
> supplied 'iommu=soft' on the Linux command line.
> 
> CC: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
> [v1: Fix smatch warnings]
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> ---
>  arch/x86/include/asm/xen/swiotlb-xen.h |    2 +
>  arch/x86/xen/pci-swiotlb-xen.c         |   17 +++++++++-
>  drivers/xen/swiotlb-xen.c              |   54 ++++++++++++++++++++++++++-----
>  include/xen/swiotlb-xen.h              |    1 +
>  4 files changed, 64 insertions(+), 10 deletions(-)
> 
> diff --git a/arch/x86/include/asm/xen/swiotlb-xen.h b/arch/x86/include/asm/xen/swiotlb-xen.h
> index 1be1ab7..ee52fca 100644
> --- a/arch/x86/include/asm/xen/swiotlb-xen.h
> +++ b/arch/x86/include/asm/xen/swiotlb-xen.h
> @@ -5,10 +5,12 @@
>  extern int xen_swiotlb;
>  extern int __init pci_xen_swiotlb_detect(void);
>  extern void __init pci_xen_swiotlb_init(void);
> +extern int pci_xen_swiotlb_init_late(void);
>  #else
>  #define xen_swiotlb (0)
>  static inline int __init pci_xen_swiotlb_detect(void) { return 0; }
>  static inline void __init pci_xen_swiotlb_init(void) { }
> +static inline int pci_xen_swiotlb_init_late(void) { return -ENXIO; }
>  #endif
>  
>  #endif /* _ASM_X86_SWIOTLB_XEN_H */
> diff --git a/arch/x86/xen/pci-swiotlb-xen.c b/arch/x86/xen/pci-swiotlb-xen.c
> index 1c17227..031d8bc 100644
> --- a/arch/x86/xen/pci-swiotlb-xen.c
> +++ b/arch/x86/xen/pci-swiotlb-xen.c
> @@ -12,7 +12,7 @@
>  #include <asm/iommu.h>
>  #include <asm/dma.h>
>  #endif
> -
> +#include <linux/export.h>
>  int xen_swiotlb __read_mostly;
>  
>  static struct dma_map_ops xen_swiotlb_dma_ops = {
> @@ -76,6 +76,21 @@ void __init pci_xen_swiotlb_init(void)
>  		pci_request_acs();
>  	}
>  }
> +
> +int pci_xen_swiotlb_init_late(void)
> +{
> +	int rc = xen_swiotlb_late_init(1);
> +	if (rc)
> +		return rc;
> +
> +	dma_ops = &xen_swiotlb_dma_ops;
> +	/* Make sure ACS will be enabled */
> +	pci_request_acs();
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL_GPL(pci_xen_swiotlb_init_late);

shouldn't we be checking whether the xen_swiotlb has already been
initialized?


>  IOMMU_INIT_FINISH(pci_xen_swiotlb_detect,
>  		  0,
>  		  pci_xen_swiotlb_init,
> diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
> index 1afb4fb..1942a3e 100644
> --- a/drivers/xen/swiotlb-xen.c
> +++ b/drivers/xen/swiotlb-xen.c
> @@ -145,13 +145,14 @@ xen_swiotlb_fixup(void *buf, size_t size, unsigned long nslabs)
>  	return 0;
>  }
>  
> -void __init xen_swiotlb_init(int verbose)
> +static int __ref __xen_swiotlb_init(int verbose, bool early)
>  {
>  	unsigned long bytes;
>  	int rc = -ENOMEM;
>  	unsigned long nr_tbl;
>  	char *m = NULL;
>  	unsigned int repeat = 3;
> +	unsigned long order;
>
>  	nr_tbl = swiotlb_nr_tbl();
>  	if (nr_tbl)
> @@ -161,12 +162,31 @@ void __init xen_swiotlb_init(int verbose)
>  		xen_io_tlb_nslabs = ALIGN(xen_io_tlb_nslabs, IO_TLB_SEGSIZE);
>  	}
>  retry:
> +	order = get_order(xen_io_tlb_nslabs << IO_TLB_SHIFT);
>  	bytes = xen_io_tlb_nslabs << IO_TLB_SHIFT;
>  
>  	/*
>  	 * Get IO TLB memory from any location.
>  	 */
> -	xen_io_tlb_start = alloc_bootmem_pages(PAGE_ALIGN(bytes));
> +	if (early)
> +		xen_io_tlb_start = alloc_bootmem_pages(PAGE_ALIGN(bytes));
> +	else {
> +#define SLABS_PER_PAGE (1 << (PAGE_SHIFT - IO_TLB_SHIFT))
> +#define IO_TLB_MIN_SLABS ((1<<20) >> IO_TLB_SHIFT)


> +		while ((SLABS_PER_PAGE << order) > IO_TLB_MIN_SLABS) {
> +			xen_io_tlb_start = (void *)__get_free_pages(__GFP_NOWARN, order);
> +			if (xen_io_tlb_start)
> +				break;
> +			order--;
> +		}
> +		if (order != get_order(bytes)) {
> +			pr_warn("Warning: only able to allocate %ld MB "
> +				"for software IO TLB\n", (PAGE_SIZE << order) >> 20);
> +			xen_io_tlb_nslabs = SLABS_PER_PAGE << order;
> +			bytes = xen_io_tlb_nslabs << IO_TLB_SHIFT;
> +		}
> +	}
>  	if (!xen_io_tlb_start) {
>  		m = "Cannot allocate Xen-SWIOTLB buffer!\n";
>  		goto error;
> @@ -179,17 +199,22 @@ retry:
>  			       bytes,
>  			       xen_io_tlb_nslabs);
>  	if (rc) {
> -		free_bootmem(__pa(xen_io_tlb_start), PAGE_ALIGN(bytes));
> +		if (early)
> +			free_bootmem(__pa(xen_io_tlb_start), PAGE_ALIGN(bytes));
>  		m = "Failed to get contiguous memory for DMA from Xen!\n"\
>  		    "You either: don't have the permissions, do not have"\
>  		    " enough free memory under 4GB, or the hypervisor memory"\
> -		    "is too fragmented!";
> +		    " is too fragmented!";
>  		goto error;
>  	}
>  	start_dma_addr = xen_virt_to_bus(xen_io_tlb_start);
> -	swiotlb_init_with_tbl(xen_io_tlb_start, xen_io_tlb_nslabs, verbose);
>  
> -	return;
> +	rc = 0;
> +	if (early)
> +		swiotlb_init_with_tbl(xen_io_tlb_start, xen_io_tlb_nslabs, verbose);
> +	else
> +		rc = swiotlb_late_init_with_tbl(xen_io_tlb_start, xen_io_tlb_nslabs);
> +	return rc;
>  error:
>  	if (repeat--) {
>  		xen_io_tlb_nslabs = max(1024UL, /* Min is 2MB */
> @@ -198,10 +223,21 @@ error:
>  		      (xen_io_tlb_nslabs << IO_TLB_SHIFT) >> 20);
>  		goto retry;
>  	}
> -	xen_raw_printk("%s (rc:%d)", m, rc);
> -	panic("%s (rc:%d)", m, rc);
> +	pr_err("%s (rc:%d)", m, rc);
> +	if (early)
> +		panic("%s (rc:%d)", m, rc);
> +	else
> +		free_pages((unsigned long)xen_io_tlb_start, order);
> +	return rc;
> +}

All these "if"s make the code harder to read. Would it be possible at
least to unify the error paths and just check after_bootmem to decide whether
we need to call free_pages or free_bootmem?
In fact using after_bootmem you might get away without an early
parameter.


> +void __init xen_swiotlb_init(int verbose)
> +{
> +	__xen_swiotlb_init(verbose, true /* early */);
> +}
> +int xen_swiotlb_late_init(int verbose)
> +{
> +	return __xen_swiotlb_init(verbose, false /* late */);
>  }
>  void *
>  xen_swiotlb_alloc_coherent(struct device *hwdev, size_t size,
>  			   dma_addr_t *dma_handle, gfp_t flags,
> diff --git a/include/xen/swiotlb-xen.h b/include/xen/swiotlb-xen.h
> index 4f4d449..d38d984 100644
> --- a/include/xen/swiotlb-xen.h
> +++ b/include/xen/swiotlb-xen.h
> @@ -4,6 +4,7 @@
>  #include <linux/swiotlb.h>
>  
>  extern void xen_swiotlb_init(int verbose);
> +extern int  xen_swiotlb_late_init(int verbose);
>  
>  extern void
>  *xen_swiotlb_alloc_coherent(struct device *hwdev, size_t size,
> -- 
> 1.7.7.6
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

>  extern void __init pci_xen_swiotlb_init(void);
> +extern int pci_xen_swiotlb_init_late(void);
>  #else
>  #define xen_swiotlb (0)
>  static inline int __init pci_xen_swiotlb_detect(void) { return 0; }
>  static inline void __init pci_xen_swiotlb_init(void) { }
> +static inline int pci_xen_swiotlb_init_late(void) { return -ENXIO; }
>  #endif
>  
>  #endif /* _ASM_X86_SWIOTLB_XEN_H */
> diff --git a/arch/x86/xen/pci-swiotlb-xen.c b/arch/x86/xen/pci-swiotlb-xen.c
> index 1c17227..031d8bc 100644
> --- a/arch/x86/xen/pci-swiotlb-xen.c
> +++ b/arch/x86/xen/pci-swiotlb-xen.c
> @@ -12,7 +12,7 @@
>  #include <asm/iommu.h>
>  #include <asm/dma.h>
>  #endif
> -
> +#include <linux/export.h>
>  int xen_swiotlb __read_mostly;
>  
>  static struct dma_map_ops xen_swiotlb_dma_ops = {
> @@ -76,6 +76,21 @@ void __init pci_xen_swiotlb_init(void)
>  		pci_request_acs();
>  	}
>  }
> +
> +int pci_xen_swiotlb_init_late(void)
> +{
> +	int rc = xen_swiotlb_late_init(1);
> +	if (rc)
> +		return rc;
> +
> +	dma_ops = &xen_swiotlb_dma_ops;
> +	/* Make sure ACS will be enabled */
> +	pci_request_acs();
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL_GPL(pci_xen_swiotlb_init_late);

shouldn't we be checking whether xen_swiotlb has already been
initialized?


>  IOMMU_INIT_FINISH(pci_xen_swiotlb_detect,
>  		  0,
>  		  pci_xen_swiotlb_init,
> diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
> index 1afb4fb..1942a3e 100644
> --- a/drivers/xen/swiotlb-xen.c
> +++ b/drivers/xen/swiotlb-xen.c
> @@ -145,13 +145,14 @@ xen_swiotlb_fixup(void *buf, size_t size, unsigned long nslabs)
>  	return 0;
>  }
>  
> -void __init xen_swiotlb_init(int verbose)
> +static int __ref __xen_swiotlb_init(int verbose, bool early)
>  {
>  	unsigned long bytes;
>  	int rc = -ENOMEM;
>  	unsigned long nr_tbl;
>  	char *m = NULL;
>  	unsigned int repeat = 3;
> +	unsigned long order;
>
>  	nr_tbl = swiotlb_nr_tbl();
>  	if (nr_tbl)
> @@ -161,12 +162,31 @@ void __init xen_swiotlb_init(int verbose)
>  		xen_io_tlb_nslabs = ALIGN(xen_io_tlb_nslabs, IO_TLB_SEGSIZE);
>  	}
>  retry:
> +	order = get_order(xen_io_tlb_nslabs << IO_TLB_SHIFT);
>  	bytes = xen_io_tlb_nslabs << IO_TLB_SHIFT;
>  
>  	/*
>  	 * Get IO TLB memory from any location.
>  	 */
> -	xen_io_tlb_start = alloc_bootmem_pages(PAGE_ALIGN(bytes));
> +	if (early)
> +		xen_io_tlb_start = alloc_bootmem_pages(PAGE_ALIGN(bytes));
> +	else {
> +#define SLABS_PER_PAGE (1 << (PAGE_SHIFT - IO_TLB_SHIFT))
> +#define IO_TLB_MIN_SLABS ((1<<20) >> IO_TLB_SHIFT)


> +		while ((SLABS_PER_PAGE << order) > IO_TLB_MIN_SLABS) {
> +			xen_io_tlb_start = (void *)__get_free_pages(__GFP_NOWARN, order);
> +			if (xen_io_tlb_start)
> +				break;
> +			order--;
> +		}
> +		if (order != get_order(bytes)) {
> +			pr_warn("Warning: only able to allocate %ld MB "
> +				"for software IO TLB\n", (PAGE_SIZE << order) >> 20);
> +			xen_io_tlb_nslabs = SLABS_PER_PAGE << order;
> +			bytes = xen_io_tlb_nslabs << IO_TLB_SHIFT;
> +		}
> +	}
>  	if (!xen_io_tlb_start) {
>  		m = "Cannot allocate Xen-SWIOTLB buffer!\n";
>  		goto error;
> @@ -179,17 +199,22 @@ retry:
>  			       bytes,
>  			       xen_io_tlb_nslabs);
>  	if (rc) {
> -		free_bootmem(__pa(xen_io_tlb_start), PAGE_ALIGN(bytes));
> +		if (early)
> +			free_bootmem(__pa(xen_io_tlb_start), PAGE_ALIGN(bytes));
>  		m = "Failed to get contiguous memory for DMA from Xen!\n"\
>  		    "You either: don't have the permissions, do not have"\
>  		    " enough free memory under 4GB, or the hypervisor memory"\
> -		    "is too fragmented!";
> +		    " is too fragmented!";
>  		goto error;
>  	}
>  	start_dma_addr = xen_virt_to_bus(xen_io_tlb_start);
> -	swiotlb_init_with_tbl(xen_io_tlb_start, xen_io_tlb_nslabs, verbose);
>  
> -	return;
> +	rc = 0;
> +	if (early)
> +		swiotlb_init_with_tbl(xen_io_tlb_start, xen_io_tlb_nslabs, verbose);
> +	else
> +		rc = swiotlb_late_init_with_tbl(xen_io_tlb_start, xen_io_tlb_nslabs);
> +	return rc;
>  error:
>  	if (repeat--) {
>  		xen_io_tlb_nslabs = max(1024UL, /* Min is 2MB */
> @@ -198,10 +223,21 @@ error:
>  		      (xen_io_tlb_nslabs << IO_TLB_SHIFT) >> 20);
>  		goto retry;
>  	}
> -	xen_raw_printk("%s (rc:%d)", m, rc);
> -	panic("%s (rc:%d)", m, rc);
> +	pr_err("%s (rc:%d)", m, rc);
> +	if (early)
> +		panic("%s (rc:%d)", m, rc);
> +	else
> +		free_pages((unsigned long)xen_io_tlb_start, order);
> +	return rc;
> +}

All these "if"s make the code a bit harder to read. Would it be possible
at least to unify the error paths and just check after_bootmem to decide
whether we need to call free_pages or free_bootmem?
In fact, using after_bootmem you might get away without the early
parameter entirely.


> +void __init xen_swiotlb_init(int verbose)
> +{
> +	__xen_swiotlb_init(verbose, true /* early */);
> +}
> +int xen_swiotlb_late_init(int verbose)
> +{
> +	return __xen_swiotlb_init(verbose, false /* late */);
>  }
>  void *
>  xen_swiotlb_alloc_coherent(struct device *hwdev, size_t size,
>  			   dma_addr_t *dma_handle, gfp_t flags,
> diff --git a/include/xen/swiotlb-xen.h b/include/xen/swiotlb-xen.h
> index 4f4d449..d38d984 100644
> --- a/include/xen/swiotlb-xen.h
> +++ b/include/xen/swiotlb-xen.h
> @@ -4,6 +4,7 @@
>  #include <linux/swiotlb.h>
>  
>  extern void xen_swiotlb_init(int verbose);
> +extern int  xen_swiotlb_late_init(int verbose);
>  
>  extern void
>  *xen_swiotlb_alloc_coherent(struct device *hwdev, size_t size,
> -- 
> 1.7.7.6
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 17:30:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 17:30:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2QMr-0000ck-HM; Fri, 17 Aug 2012 17:29:53 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T2QMp-0000cc-NS
	for xen-devel@lists.xensource.com; Fri, 17 Aug 2012 17:29:51 +0000
Received: from [85.158.138.51:37912] by server-12.bemta-3.messagelabs.com id
	7B/70-04073-E8F7E205; Fri, 17 Aug 2012 17:29:50 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-15.tower-174.messagelabs.com!1345224590!27099786!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk0MzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6060 invoked from network); 17 Aug 2012 17:29:50 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 17:29:50 -0000
X-IronPort-AV: E=Sophos;i="4.77,785,1336348800"; d="scan'208";a="14064774"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 17:29:50 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 17 Aug 2012 18:29:50 +0100
Date: Fri, 17 Aug 2012 18:29:32 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <1345133009-21941-2-git-send-email-konrad.wilk@oracle.com>
Message-ID: <alpine.DEB.2.02.1208171829230.15568@kaball.uk.xensource.com>
References: <1345133009-21941-1-git-send-email-konrad.wilk@oracle.com>
	<1345133009-21941-2-git-send-email-konrad.wilk@oracle.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: [Xen-devel] [PATCH 01/11] xen/p2m: Fix the comment describing
 the P2M tree.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 16 Aug 2012, Konrad Rzeszutek Wilk wrote:
> It mixed up the p2m_mid_missing with p2m_missing. Also
> remove some extra spaces.
> 
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>


Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

>  arch/x86/xen/p2m.c |   14 +++++++-------
>  1 files changed, 7 insertions(+), 7 deletions(-)
> 
> diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
> index 64effdc..e4adbfb 100644
> --- a/arch/x86/xen/p2m.c
> +++ b/arch/x86/xen/p2m.c
> @@ -22,7 +22,7 @@
>   *
>   * P2M_PER_PAGE depends on the architecture, as a mfn is always
>   * unsigned long (8 bytes on 64-bit, 4 bytes on 32), leading to
> - * 512 and 1024 entries respectively. 
> + * 512 and 1024 entries respectively.
>   *
>   * In short, these structures contain the Machine Frame Number (MFN) of the PFN.
>   *
> @@ -139,11 +139,11 @@
>   *      /    | ~0, ~0, ....  |
>   *     |     \---------------/
>   *     |
> - *     p2m_missing             p2m_missing
> - * /------------------\     /------------\
> - * | [p2m_mid_missing]+---->| ~0, ~0, ~0 |
> - * | [p2m_mid_missing]+---->| ..., ~0    |
> - * \------------------/     \------------/
> + *   p2m_mid_missing           p2m_missing
> + * /-----------------\     /------------\
> + * | [p2m_missing]   +---->| ~0, ~0, ~0 |
> + * | [p2m_missing]   +---->| ..., ~0    |
> + * \-----------------/     \------------/
>   *
>   * where ~0 is INVALID_P2M_ENTRY. IDENTITY is (PFN | IDENTITY_BIT)
>   */
> @@ -423,7 +423,7 @@ static void free_p2m_page(void *p)
>  	free_page((unsigned long)p);
>  }
>  
> -/* 
> +/*
>   * Fully allocate the p2m structure for a given pfn.  We need to check
>   * that both the top and mid levels are allocated, and make sure the
>   * parallel mfn tree is kept in sync.  We may race with other cpus, so
> -- 
> 1.7.7.6
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 17:35:47 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 17:35:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2QSL-0000qA-9Y; Fri, 17 Aug 2012 17:35:33 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T2QSJ-0000q5-VF
	for xen-devel@lists.xensource.com; Fri, 17 Aug 2012 17:35:32 +0000
Received: from [85.158.143.99:10962] by server-3.bemta-4.messagelabs.com id
	1B/8B-09529-3E08E205; Fri, 17 Aug 2012 17:35:31 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-2.tower-216.messagelabs.com!1345224930!23426277!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk0MzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 953 invoked from network); 17 Aug 2012 17:35:30 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-2.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 17:35:30 -0000
X-IronPort-AV: E=Sophos;i="4.77,785,1336348800"; d="scan'208";a="14064812"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 17:35:29 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 17 Aug 2012 18:35:29 +0100
Date: Fri, 17 Aug 2012 18:35:12 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <1345133009-21941-3-git-send-email-konrad.wilk@oracle.com>
Message-ID: <alpine.DEB.2.02.1208171835010.15568@kaball.uk.xensource.com>
References: <1345133009-21941-1-git-send-email-konrad.wilk@oracle.com>
	<1345133009-21941-3-git-send-email-konrad.wilk@oracle.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: [Xen-devel] [PATCH 02/11] xen/x86: Use memblock_reserve for
 sensitive areas.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 16 Aug 2012, Konrad Rzeszutek Wilk wrote:
> instead of a big memblock_reserve. This way we can be more
> selective in freeing regions (and it also makes it easier
> to understand where is what).
> 
> [v1: Move the auto_translate_physmap to proper line]
> [v2: Per Stefano suggestion add more comments]
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

much better now!

>  arch/x86/xen/enlighten.c |   48 ++++++++++++++++++++++++++++++++++++++++++++++
>  arch/x86/xen/p2m.c       |    5 ++++
>  arch/x86/xen/setup.c     |    9 --------
>  3 files changed, 53 insertions(+), 9 deletions(-)
> 
> diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
> index ff962d4..e532eb5 100644
> --- a/arch/x86/xen/enlighten.c
> +++ b/arch/x86/xen/enlighten.c
> @@ -998,7 +998,54 @@ static int xen_write_msr_safe(unsigned int msr, unsigned low, unsigned high)
>  
>  	return ret;
>  }
> +/*
> + * If the MFN is not in the m2p (provided to us by the hypervisor) this
> + * function won't do anything. In practice this means that the XenBus
> + * MFN won't be available for the initial domain. */
> +static void __init xen_reserve_mfn(unsigned long mfn)
> +{
> +	unsigned long pfn;
> +
> +	if (!mfn)
> +		return;
> +	pfn = mfn_to_pfn(mfn);
> +	if (phys_to_machine_mapping_valid(pfn))
> +		memblock_reserve(PFN_PHYS(pfn), PAGE_SIZE);
> +}
> +static void __init xen_reserve_internals(void)
> +{
> +	unsigned long size;
> +
> +	if (!xen_pv_domain())
> +		return;
> +
> +	/* xen_start_info does not exist in the M2P, hence can't use
> +	 * xen_reserve_mfn. */
> +	memblock_reserve(__pa(xen_start_info), PAGE_SIZE);
> +
> +	xen_reserve_mfn(PFN_DOWN(xen_start_info->shared_info));
> +	xen_reserve_mfn(xen_start_info->store_mfn);
>  
> +	if (!xen_initial_domain())
> +		xen_reserve_mfn(xen_start_info->console.domU.mfn);
> +
> +	if (xen_feature(XENFEAT_auto_translated_physmap))
> +		return;
> +
> +	/*
> +	 * ALIGN up to compensate for the p2m_page pointing to an array that
> +	 * can partially filled (look in xen_build_dynamic_phys_to_machine).
> +	 */
> +
> +	size = PAGE_ALIGN(xen_start_info->nr_pages * sizeof(unsigned long));
> +
> +	/* We could use xen_reserve_mfn here, but would end up looping quite
> +	 * a lot (and call memblock_reserve for each PAGE), so lets just use
> +	 * the easy way and reserve it wholesale. */
> +	memblock_reserve(__pa(xen_start_info->mfn_list), size);
> +
> +	/* The pagetables are reserved in mmu.c */
> +}
>  void xen_setup_shared_info(void)
>  {
>  	if (!xen_feature(XENFEAT_auto_translated_physmap)) {
> @@ -1362,6 +1409,7 @@ asmlinkage void __init xen_start_kernel(void)
>  	xen_raw_console_write("mapping kernel into physical memory\n");
>  	pgd = xen_setup_kernel_pagetable(pgd, xen_start_info->nr_pages);
>  
> +	xen_reserve_internals();
>  	/* Allocate and initialize top and mid mfn levels for p2m structure */
>  	xen_build_mfn_list_list();
>  
> diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
> index e4adbfb..6a2bfa4 100644
> --- a/arch/x86/xen/p2m.c
> +++ b/arch/x86/xen/p2m.c
> @@ -388,6 +388,11 @@ void __init xen_build_dynamic_phys_to_machine(void)
>  	}
>  
>  	m2p_override_init();
> +
> +	/* NOTE: We cannot call memblock_reserve here for the mfn_list as there
> +	 * isn't enough pieces to make it work (for one - we are still using the
> +	 * Xen provided pagetable). Do it later in xen_reserve_internals.
> +	 */
>  }
>  
>  unsigned long get_phys_to_machine(unsigned long pfn)
> diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
> index a4790bf..9efca75 100644
> --- a/arch/x86/xen/setup.c
> +++ b/arch/x86/xen/setup.c
> @@ -424,15 +424,6 @@ char * __init xen_memory_setup(void)
>  	e820_add_region(ISA_START_ADDRESS, ISA_END_ADDRESS - ISA_START_ADDRESS,
>  			E820_RESERVED);
>  
> -	/*
> -	 * Reserve Xen bits:
> -	 *  - mfn_list
> -	 *  - xen_start_info
> -	 * See comment above "struct start_info" in <xen/interface/xen.h>
> -	 */
> -	memblock_reserve(__pa(xen_start_info->mfn_list),
> -			 xen_start_info->pt_base - xen_start_info->mfn_list);
> -
>  	sanitize_e820_map(e820.map, ARRAY_SIZE(e820.map), &e820.nr_map);
>  
>  	return "Xen";
> -- 
> 1.7.7.6
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 17:35:47 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 17:35:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2QSL-0000qA-9Y; Fri, 17 Aug 2012 17:35:33 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T2QSJ-0000q5-VF
	for xen-devel@lists.xensource.com; Fri, 17 Aug 2012 17:35:32 +0000
Received: from [85.158.143.99:10962] by server-3.bemta-4.messagelabs.com id
	1B/8B-09529-3E08E205; Fri, 17 Aug 2012 17:35:31 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-2.tower-216.messagelabs.com!1345224930!23426277!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk0MzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 953 invoked from network); 17 Aug 2012 17:35:30 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-2.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 17:35:30 -0000
X-IronPort-AV: E=Sophos;i="4.77,785,1336348800"; d="scan'208";a="14064812"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 17:35:29 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 17 Aug 2012 18:35:29 +0100
Date: Fri, 17 Aug 2012 18:35:12 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <1345133009-21941-3-git-send-email-konrad.wilk@oracle.com>
Message-ID: <alpine.DEB.2.02.1208171835010.15568@kaball.uk.xensource.com>
References: <1345133009-21941-1-git-send-email-konrad.wilk@oracle.com>
	<1345133009-21941-3-git-send-email-konrad.wilk@oracle.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: [Xen-devel] [PATCH 02/11] xen/x86: Use memblock_reserve for
 sensitive areas.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 16 Aug 2012, Konrad Rzeszutek Wilk wrote:
> instead of a big memblock_reserve. This way we can be more
> selective in freeing regions (and it also makes it easier
> to understand where is what).
> 
> [v1: Move the auto_translate_physmap to proper line]
> [v2: Per Stefano suggestion add more comments]
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

much better now!

>  arch/x86/xen/enlighten.c |   48 ++++++++++++++++++++++++++++++++++++++++++++++
>  arch/x86/xen/p2m.c       |    5 ++++
>  arch/x86/xen/setup.c     |    9 --------
>  3 files changed, 53 insertions(+), 9 deletions(-)
> 
> diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
> index ff962d4..e532eb5 100644
> --- a/arch/x86/xen/enlighten.c
> +++ b/arch/x86/xen/enlighten.c
> @@ -998,7 +998,54 @@ static int xen_write_msr_safe(unsigned int msr, unsigned low, unsigned high)
>  
>  	return ret;
>  }
> +/*
> + * If the MFN is not in the m2p (provided to us by the hypervisor) this
> + * function won't do anything. In practice this means that the XenBus
> + * MFN won't be available for the initial domain. */
> +static void __init xen_reserve_mfn(unsigned long mfn)
> +{
> +	unsigned long pfn;
> +
> +	if (!mfn)
> +		return;
> +	pfn = mfn_to_pfn(mfn);
> +	if (phys_to_machine_mapping_valid(pfn))
> +		memblock_reserve(PFN_PHYS(pfn), PAGE_SIZE);
> +}
> +static void __init xen_reserve_internals(void)
> +{
> +	unsigned long size;
> +
> +	if (!xen_pv_domain())
> +		return;
> +
> +	/* xen_start_info does not exist in the M2P, hence can't use
> +	 * xen_reserve_mfn. */
> +	memblock_reserve(__pa(xen_start_info), PAGE_SIZE);
> +
> +	xen_reserve_mfn(PFN_DOWN(xen_start_info->shared_info));
> +	xen_reserve_mfn(xen_start_info->store_mfn);
>  
> +	if (!xen_initial_domain())
> +		xen_reserve_mfn(xen_start_info->console.domU.mfn);
> +
> +	if (xen_feature(XENFEAT_auto_translated_physmap))
> +		return;
> +
> +	/*
> +	 * ALIGN up to compensate for the p2m_page pointing to an array that
> +	 * can partially filled (look in xen_build_dynamic_phys_to_machine).
> +	 */
> +
> +	size = PAGE_ALIGN(xen_start_info->nr_pages * sizeof(unsigned long));
> +
> +	/* We could use xen_reserve_mfn here, but would end up looping quite
> +	 * a lot (and call memblock_reserve for each PAGE), so lets just use
> +	 * the easy way and reserve it wholesale. */
> +	memblock_reserve(__pa(xen_start_info->mfn_list), size);
> +
> +	/* The pagetables are reserved in mmu.c */
> +}
>  void xen_setup_shared_info(void)
>  {
>  	if (!xen_feature(XENFEAT_auto_translated_physmap)) {
> @@ -1362,6 +1409,7 @@ asmlinkage void __init xen_start_kernel(void)
>  	xen_raw_console_write("mapping kernel into physical memory\n");
>  	pgd = xen_setup_kernel_pagetable(pgd, xen_start_info->nr_pages);
>  
> +	xen_reserve_internals();
>  	/* Allocate and initialize top and mid mfn levels for p2m structure */
>  	xen_build_mfn_list_list();
>  
> diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
> index e4adbfb..6a2bfa4 100644
> --- a/arch/x86/xen/p2m.c
> +++ b/arch/x86/xen/p2m.c
> @@ -388,6 +388,11 @@ void __init xen_build_dynamic_phys_to_machine(void)
>  	}
>  
>  	m2p_override_init();
> +
> +	/* NOTE: We cannot call memblock_reserve here for the mfn_list as there
> +	 * aren't enough pieces in place to make it work (for one, we are still
> +	 * using the Xen-provided pagetable). Do it later in xen_reserve_internals.
> +	 */
>  }
>  
>  unsigned long get_phys_to_machine(unsigned long pfn)
> diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
> index a4790bf..9efca75 100644
> --- a/arch/x86/xen/setup.c
> +++ b/arch/x86/xen/setup.c
> @@ -424,15 +424,6 @@ char * __init xen_memory_setup(void)
>  	e820_add_region(ISA_START_ADDRESS, ISA_END_ADDRESS - ISA_START_ADDRESS,
>  			E820_RESERVED);
>  
> -	/*
> -	 * Reserve Xen bits:
> -	 *  - mfn_list
> -	 *  - xen_start_info
> -	 * See comment above "struct start_info" in <xen/interface/xen.h>
> -	 */
> -	memblock_reserve(__pa(xen_start_info->mfn_list),
> -			 xen_start_info->pt_base - xen_start_info->mfn_list);
> -
>  	sanitize_e820_map(e820.map, ARRAY_SIZE(e820.map), &e820.nr_map);
>  
>  	return "Xen";
> -- 
> 1.7.7.6
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 17:41:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 17:41:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2QYK-00013d-3i; Fri, 17 Aug 2012 17:41:44 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T2QYJ-00013Y-3N
	for xen-devel@lists.xensource.com; Fri, 17 Aug 2012 17:41:43 +0000
Received: from [85.158.138.51:30139] by server-11.bemta-3.messagelabs.com id
	0E/AF-23152-6528E205; Fri, 17 Aug 2012 17:41:42 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-3.tower-174.messagelabs.com!1345225301!20773995!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk0MzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16683 invoked from network); 17 Aug 2012 17:41:41 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-3.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 17:41:41 -0000
X-IronPort-AV: E=Sophos;i="4.77,785,1336348800"; d="scan'208";a="14064852"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 17:41:41 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 17 Aug 2012 18:41:41 +0100
Date: Fri, 17 Aug 2012 18:41:23 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <1345133009-21941-7-git-send-email-konrad.wilk@oracle.com>
Message-ID: <alpine.DEB.2.02.1208171839590.15568@kaball.uk.xensource.com>
References: <1345133009-21941-1-git-send-email-konrad.wilk@oracle.com>
	<1345133009-21941-7-git-send-email-konrad.wilk@oracle.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: [Xen-devel] [PATCH 06/11] xen/mmu: For 64-bit do not call
 xen_map_identity_early
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 16 Aug 2012, Konrad Rzeszutek Wilk wrote:
> Because we do not need it. During startup Xen provides us with all
> the memory mappings that we need to function.

Shouldn't we check to make sure that is actually true (I am thinking of
nr_pt_frames)?
Or is it actually stated somewhere in the Xen headers?



> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> ---
>  arch/x86/xen/mmu.c |   11 +++++------
>  1 files changed, 5 insertions(+), 6 deletions(-)
> 
> diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
> index 7247e5a..a59070b 100644
> --- a/arch/x86/xen/mmu.c
> +++ b/arch/x86/xen/mmu.c
> @@ -84,6 +84,7 @@
>   */
>  DEFINE_SPINLOCK(xen_reservation_lock);
>  
> +#ifdef CONFIG_X86_32
>  /*
>   * Identity map, in addition to plain kernel map.  This needs to be
>   * large enough to allocate page table pages to allocate the rest.
> @@ -91,7 +92,7 @@ DEFINE_SPINLOCK(xen_reservation_lock);
>   */
>  #define LEVEL1_IDENT_ENTRIES	(PTRS_PER_PTE * 4)
>  static RESERVE_BRK_ARRAY(pte_t, level1_ident_pgt, LEVEL1_IDENT_ENTRIES);
> -
> +#endif
>  #ifdef CONFIG_X86_64
>  /* l3 pud for userspace vsyscall mapping */
>  static pud_t level3_user_vsyscall[PTRS_PER_PUD] __page_aligned_bss;
> @@ -1628,7 +1629,7 @@ static void set_page_prot(void *addr, pgprot_t prot)
>  	if (HYPERVISOR_update_va_mapping((unsigned long)addr, pte, 0))
>  		BUG();
>  }
> -
> +#ifdef CONFIG_X86_32
>  static void __init xen_map_identity_early(pmd_t *pmd, unsigned long max_pfn)
>  {
>  	unsigned pmdidx, pteidx;
> @@ -1679,7 +1680,7 @@ static void __init xen_map_identity_early(pmd_t *pmd, unsigned long max_pfn)
>  
>  	set_page_prot(pmd, PAGE_KERNEL_RO);
>  }
> -
> +#endif
>  void __init xen_setup_machphys_mapping(void)
>  {
>  	struct xen_machphys_mapping mapping;
> @@ -1765,14 +1766,12 @@ void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
>  	/* Note that we don't do anything with level1_fixmap_pgt which
>  	 * we don't need. */
>  
> -	/* Set up identity map */
> -	xen_map_identity_early(level2_ident_pgt, max_pfn);
> -
>  	/* Make pagetable pieces RO */
>  	set_page_prot(init_level4_pgt, PAGE_KERNEL_RO);
>  	set_page_prot(level3_ident_pgt, PAGE_KERNEL_RO);
>  	set_page_prot(level3_kernel_pgt, PAGE_KERNEL_RO);
>  	set_page_prot(level3_user_vsyscall, PAGE_KERNEL_RO);
> +	set_page_prot(level2_ident_pgt, PAGE_KERNEL_RO);
>  	set_page_prot(level2_kernel_pgt, PAGE_KERNEL_RO);
>  	set_page_prot(level2_fixmap_pgt, PAGE_KERNEL_RO);
>  
> -- 
> 1.7.7.6
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 17:46:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 17:46:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2Qcv-0001E2-1u; Fri, 17 Aug 2012 17:46:29 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T2Qct-0001Dv-80
	for xen-devel@lists.xensource.com; Fri, 17 Aug 2012 17:46:27 +0000
Received: from [85.158.139.83:35043] by server-10.bemta-5.messagelabs.com id
	77/14-13125-2738E205; Fri, 17 Aug 2012 17:46:26 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-6.tower-182.messagelabs.com!1345225584!24853700!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjk2OTUy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1402 invoked from network); 17 Aug 2012 17:46:25 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-6.tower-182.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Aug 2012 17:46:25 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7HHkLnp014468
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 17 Aug 2012 17:46:22 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7HHkK32016257
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 17 Aug 2012 17:46:21 GMT
Received: from abhmt106.oracle.com (abhmt106.oracle.com [141.146.116.58])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7HHkKfu002933; Fri, 17 Aug 2012 12:46:20 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 17 Aug 2012 10:46:20 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id A98EB402D7; Fri, 17 Aug 2012 13:36:31 -0400 (EDT)
Date: Fri, 17 Aug 2012 13:36:31 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: David Vrabel <david.vrabel@citrix.com>
Message-ID: <20120817173631.GA11688@phenom.dumpdata.com>
References: <1345132214-15298-1-git-send-email-konrad.wilk@oracle.com>
	<1345132214-15298-2-git-send-email-konrad.wilk@oracle.com>
	<20120816173215.GB9790@phenom.dumpdata.com>
	<20120816210206.GA17966@phenom.dumpdata.com>
	<502E2784.8060806@citrix.com>
	<20120817130621.GC31903@phenom.dumpdata.com>
	<502E4713.9050000@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <502E4713.9050000@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: [Xen-devel] [PATCH 1/2] xen/p2m: Fix for 32-bit builds the
 "Reserve 8MB of _brk space for P2M"
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Aug 17, 2012 at 02:28:51PM +0100, David Vrabel wrote:
> On 17/08/12 14:06, Konrad Rzeszutek Wilk wrote:
> > On Fri, Aug 17, 2012 at 12:14:12PM +0100, David Vrabel wrote:
> >> On 16/08/12 22:02, Konrad Rzeszutek Wilk wrote:
> >>>
> >>> So I thought about this some more and came up with this patch. It's
> >>> RFC and I am going to run it through some overnight tests to see how they fare.
> >>>
> >>>
> >>> commit da858a92dbeb52fb3246e3d0f1dd57989b5b1734
> >>> Author: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> >>> Date:   Fri Jul 27 16:05:47 2012 -0400
> >>>
> >>>     xen/p2m: Reuse existing P2M leafs if they are filled with 1:1 PFNs or INVALID.
> >>>     
> >>>     If a P2M leaf is completely packed with INVALID_P2M_ENTRY or with
> >>>     1:1 PFNs (so IDENTITY_FRAME type PFNs), we can swap the P2M leaf
> >>>     with either a p2m_missing or p2m_identity respectively. The old
> >>>     page (which was created via extend_brk or was grafted on from the
> >>>     mfn_list) can be re-used for setting new PFNs.
> >>
> >> Does this actually find any p2m pages to reclaim?
> > 
> > Very much so. When I run the kernel without dom0_mem, we end up returning
> > around 372300 pages and then populating them back - they (mostly)
> > all get to re-use the transplanted mfn_list.
> > 
> > The ones in the 9a-100 obviously don't.
> >>
> >> xen_set_identity_and_release() is careful to set the largest possible
> >> range as 1:1 and the comments at the top of p2m.c suggest the mid
> >> entries will be made to point to p2m_identity already.
> > 
> > Right, and that is still true - for cases where there are no mid entries
> > (so P2M[3][400] for example can point in the middle of the MMIO region).
> > 
> > But if you boot without dom0_mem=max, that region (P2M[3][400]) would at
> > the start be backed by the &mfn_list, so when we call 1-1 on that region
> > it ends up sticking in the &mfn_list a whole bunch of IDENTITY_FRAME(pfn).
> 
> Ah, I see.  This makes sense now.
> 
> > This patch harvests those chunks of &mfn_list that have that and re-uses them.
> > 
> > And without any dom0_mem= I seem to call extend_brk at most twice (to
> > allocate the top leafs P2M[4] and P2M[5]). Hm, to be on the safe side I should
> > probably do 'reserve_brk(p2m_populated, 3 * PAGE_SIZE)' in case we
> > end up transplanting 3GB of PFNs into the P2M[4], P2M[5] and P2M[6] nodes.
> 
> That sounds sensible.

Here is an updated one (just changed to scale the reserve_brk down)
that I was thinking of sending to Linus next week.

>From 250a41e0ecc433cdd553a364d0fc74c766425209 Mon Sep 17 00:00:00 2001
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Fri, 17 Aug 2012 09:27:35 -0400
Subject: [PATCH] xen/p2m: Reuse existing P2M leafs if they are filled with
 1:1 PFNs or INVALID.

If a P2M leaf is completely packed with INVALID_P2M_ENTRY or with
1:1 PFNs (so IDENTITY_FRAME type PFNs), we can swap the P2M leaf
with either a p2m_missing or p2m_identity respectively. The old
page (which was created via extend_brk or was grafted on from the
mfn_list) can be re-used for setting new PFNs.

This also means we can remove git commit:
5bc6f9888db5739abfa0cae279b4b442e4db8049
xen/p2m: Reserve 8MB of _brk space for P2M leafs when populating back
which tried to fix this.

and make the amount that is required to be reserved much smaller.

CC: stable@vger.kernel.org # for 3.5 only.
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/xen/p2m.c |   95 ++++++++++++++++++++++++++++++++++++++++++++++++++--
 1 files changed, 92 insertions(+), 3 deletions(-)

diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
index b2e91d4..d4b25546 100644
--- a/arch/x86/xen/p2m.c
+++ b/arch/x86/xen/p2m.c
@@ -196,9 +196,11 @@ RESERVE_BRK(p2m_mid_identity, PAGE_SIZE * 2 * 3);
 
 /* When we populate back during bootup, the number of pages can vary. The
  * max we have seen is 395979, but that does not mean it can't be more.
- * But some machines can have 3GB I/O holes even. So lets reserve enough
- * for 4GB of I/O and E820 holes. */
-RESERVE_BRK(p2m_populated, PMD_SIZE * 4);
+ * Some machines can even have 3GB I/O holes. With early_can_reuse_p2m_middle
+ * we can re-use the Xen-provided mfn_list array, so we only need to allocate
+ * at most three P2M top nodes. */
+RESERVE_BRK(p2m_populated, PAGE_SIZE * 3);
+
 static inline unsigned p2m_top_index(unsigned long pfn)
 {
 	BUG_ON(pfn >= MAX_P2M_PFN);
@@ -575,12 +577,99 @@ static bool __init early_alloc_p2m(unsigned long pfn)
 	}
 	return true;
 }
+
+/*
+ * Skim over the P2M tree looking at pages that are either filled with
+ * INVALID_P2M_ENTRY or with 1:1 PFNs. If found, re-use that page and
+ * replace the P2M leaf with a p2m_missing or p2m_identity.
+ * Stick the old page in the new P2M tree location.
+ */
+bool __init early_can_reuse_p2m_middle(unsigned long set_pfn, unsigned long set_mfn)
+{
+	unsigned topidx;
+	unsigned mididx;
+	unsigned ident_pfns;
+	unsigned inv_pfns;
+	unsigned long *p2m;
+	unsigned long *mid_mfn_p;
+	unsigned idx;
+	unsigned long pfn;
+
+	/* We only look when this entails a P2M middle layer */
+	if (p2m_index(set_pfn))
+		return false;
+
+	for (pfn = 0; pfn <= MAX_DOMAIN_PAGES; pfn += P2M_PER_PAGE) {
+		topidx = p2m_top_index(pfn);
+
+		if (!p2m_top[topidx])
+			continue;
+
+		if (p2m_top[topidx] == p2m_mid_missing)
+			continue;
+
+		mididx = p2m_mid_index(pfn);
+		p2m = p2m_top[topidx][mididx];
+		if (!p2m)
+			continue;
+
+		if ((p2m == p2m_missing) || (p2m == p2m_identity))
+			continue;
+
+		if ((unsigned long)p2m == INVALID_P2M_ENTRY)
+			continue;
+
+		ident_pfns = 0;
+		inv_pfns = 0;
+		for (idx = 0; idx < P2M_PER_PAGE; idx++) {
+			/* IDENTITY_PFNs are 1:1 */
+			if (p2m[idx] == IDENTITY_FRAME(pfn + idx))
+				ident_pfns++;
+			else if (p2m[idx] == INVALID_P2M_ENTRY)
+				inv_pfns++;
+			else
+				break;
+		}
+		if ((ident_pfns == P2M_PER_PAGE) || (inv_pfns == P2M_PER_PAGE))
+			goto found;
+	}
+	return false;
+found:
+	/* Found one, replace old with p2m_identity or p2m_missing */
+	p2m_top[topidx][mididx] = (ident_pfns ? p2m_identity : p2m_missing);
+	/* And the other for save/restore.. */
+	mid_mfn_p = p2m_top_mfn_p[topidx];
+	/* NOTE: Even if it is a p2m_identity it should still point to
+	 * a page filled with INVALID_P2M_ENTRY entries. */
+	mid_mfn_p[mididx] = virt_to_mfn(p2m_missing);
+
+	/* Reset where we want to stick the old page in. */
+	topidx = p2m_top_index(set_pfn);
+	mididx = p2m_mid_index(set_pfn);
+
+	/* This shouldn't happen */
+	if (WARN_ON(p2m_top[topidx] == p2m_mid_missing))
+		early_alloc_p2m(set_pfn);
+
+	if (WARN_ON(p2m_top[topidx][mididx] != p2m_missing))
+		return false;
+
+	p2m_init(p2m);
+	p2m_top[topidx][mididx] = p2m;
+	mid_mfn_p = p2m_top_mfn_p[topidx];
+	mid_mfn_p[mididx] = virt_to_mfn(p2m);
+
+	return true;
+}
 bool __init early_set_phys_to_machine(unsigned long pfn, unsigned long mfn)
 {
 	if (unlikely(!__set_phys_to_machine(pfn, mfn)))  {
 		if (!early_alloc_p2m(pfn))
 			return false;
 
+		if (early_can_reuse_p2m_middle(pfn, mfn))
+			return __set_phys_to_machine(pfn, mfn);
+
 		if (!early_alloc_p2m_middle(pfn, false /* boundary crossover OK!*/))
 			return false;
 
-- 
1.7.7.6


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 17:46:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 17:46:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2Qcv-0001E2-1u; Fri, 17 Aug 2012 17:46:29 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T2Qct-0001Dv-80
	for xen-devel@lists.xensource.com; Fri, 17 Aug 2012 17:46:27 +0000
Received: from [85.158.139.83:35043] by server-10.bemta-5.messagelabs.com id
	77/14-13125-2738E205; Fri, 17 Aug 2012 17:46:26 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-6.tower-182.messagelabs.com!1345225584!24853700!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjk2OTUy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1402 invoked from network); 17 Aug 2012 17:46:25 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-6.tower-182.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Aug 2012 17:46:25 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7HHkLnp014468
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 17 Aug 2012 17:46:22 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7HHkK32016257
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 17 Aug 2012 17:46:21 GMT
Received: from abhmt106.oracle.com (abhmt106.oracle.com [141.146.116.58])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7HHkKfu002933; Fri, 17 Aug 2012 12:46:20 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 17 Aug 2012 10:46:20 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id A98EB402D7; Fri, 17 Aug 2012 13:36:31 -0400 (EDT)
Date: Fri, 17 Aug 2012 13:36:31 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: David Vrabel <david.vrabel@citrix.com>
Message-ID: <20120817173631.GA11688@phenom.dumpdata.com>
References: <1345132214-15298-1-git-send-email-konrad.wilk@oracle.com>
	<1345132214-15298-2-git-send-email-konrad.wilk@oracle.com>
	<20120816173215.GB9790@phenom.dumpdata.com>
	<20120816210206.GA17966@phenom.dumpdata.com>
	<502E2784.8060806@citrix.com>
	<20120817130621.GC31903@phenom.dumpdata.com>
	<502E4713.9050000@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <502E4713.9050000@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: [Xen-devel] [PATCH 1/2] xen/p2m: Fix for 32-bit builds the
 "Reserve 8MB of _brk space for P2M"
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Aug 17, 2012 at 02:28:51PM +0100, David Vrabel wrote:
> On 17/08/12 14:06, Konrad Rzeszutek Wilk wrote:
> > On Fri, Aug 17, 2012 at 12:14:12PM +0100, David Vrabel wrote:
> >> On 16/08/12 22:02, Konrad Rzeszutek Wilk wrote:
> >>>
> >>> So I thought about this some more and came up with this patch. Its
> >>> RFC and going to run it through some overnight tests to see how they fare.
> >>>
> >>>
> >>> commit da858a92dbeb52fb3246e3d0f1dd57989b5b1734
> >>> Author: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> >>> Date:   Fri Jul 27 16:05:47 2012 -0400
> >>>
> >>>     xen/p2m: Reuse existing P2M leafs if they are filled with 1:1 PFNs or INVALID.
> >>>     
> >>>     If P2M leaf is completly packed with INVALID_P2M_ENTRY or with
> >>>     1:1 PFNs (so IDENTITY_FRAME type PFNs), we can swap the P2M leaf
> >>>     with either a p2m_missing or p2m_identity respectively. The old
> >>>     page (which was created via extend_brk or was grafted on from the
> >>>     mfn_list) can be re-used for setting new PFNs.
> >>
> >> Does this actually find any p2m pages to reclaim?
> > 
> > Very much so. When I run the kernel without dom0_mem, and end up returning
> > around 372300 pages back, and then populating them back - they (mostly)
> > all get to re-use the transplanted mfn_list.
> > 
> > The ones in the 9a-100 obviously don't.
> >>
> >> xen_set_identity_and_release() is careful to set the largest possible
> >> range as 1:1 and the comments at the top of p2m.c suggest the mid
> >> entries will be made to point to p2m_identity already.
> > 
> > Right, and that is still true - for cases where the are no mid entries
> > (so P2M[3][400] for example can point in the middle of the MMIO region).
> > 
> > But if you boot without dom0_mem=max, that region (P2M[3][400]) would at
> > the start be backed by the &mfn_list, so when we call 1-1 on that region
> > it ends up sticking in the &mfn_list a whole bunch of IDENTITY_FRAME(pfn).
> 
> Ah, I see.  This makes sense now.
> 
> > This patch harvests those chunks of &mfn_list that have that and re-uses them.
> > 
> > And without any dom0_mem= I seem to at most call extend_bkr twice (to
> > allocate the top leafs P2M[4] and P2M[5]). Hm, to be on a safe side I should
> > probably do 'reserve_brk(p2m_popualated, 3 * PAGE_SIZE)' in case we
> > end up transplanting 3GB of PFNs in in the P2M[4], P2M[5] and P2M[6] nodes.
> 
> That sounds sensible.

Here is an updated (just made so to scale the reserve_brk down)
one that I was thinking to send to Linus next week.

>From 250a41e0ecc433cdd553a364d0fc74c766425209 Mon Sep 17 00:00:00 2001
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Fri, 17 Aug 2012 09:27:35 -0400
Subject: [PATCH] xen/p2m: Reuse existing P2M leafs if they are filled with
 1:1 PFNs or INVALID.

If P2M leaf is completly packed with INVALID_P2M_ENTRY or with
1:1 PFNs (so IDENTITY_FRAME type PFNs), we can swap the P2M leaf
with either a p2m_missing or p2m_identity respectively. The old
page (which was created via extend_brk or was grafted on from the
mfn_list) can be re-used for setting new PFNs.

This also means we can remove git commit:
5bc6f9888db5739abfa0cae279b4b442e4db8049
xen/p2m: Reserve 8MB of _brk space for P2M leafs when populating back
which tried to fix this.

and make the amount that is required to be reserved much smaller.

CC: stable@vger.kernel.org # for 3.5 only.
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/xen/p2m.c |   95 ++++++++++++++++++++++++++++++++++++++++++++++++++--
 1 files changed, 92 insertions(+), 3 deletions(-)

diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
index b2e91d4..d4b25546 100644
--- a/arch/x86/xen/p2m.c
+++ b/arch/x86/xen/p2m.c
@@ -196,9 +196,11 @@ RESERVE_BRK(p2m_mid_identity, PAGE_SIZE * 2 * 3);
 
 /* When we populate back during bootup, the amount of pages can vary. The
  * max we have is seen is 395979, but that does not mean it can't be more.
- * But some machines can have 3GB I/O holes even. So lets reserve enough
- * for 4GB of I/O and E820 holes. */
-RESERVE_BRK(p2m_populated, PMD_SIZE * 4);
+ * Some machines can even have 3GB I/O holes. With early_can_reuse_p2m_middle
+ * we can re-use the Xen-provided mfn_list array, so we only need to allocate
+ * at most three P2M top nodes. */
+RESERVE_BRK(p2m_populated, PAGE_SIZE * 3);
+
 static inline unsigned p2m_top_index(unsigned long pfn)
 {
 	BUG_ON(pfn >= MAX_P2M_PFN);
@@ -575,12 +577,99 @@ static bool __init early_alloc_p2m(unsigned long pfn)
 	}
 	return true;
 }
+
+/*
+ * Skim over the P2M tree looking at pages that are either filled with
+ * INVALID_P2M_ENTRY or with 1:1 PFNs. If found, re-use that page and
+ * replace the P2M leaf with a p2m_missing or p2m_identity.
+ * Stick the old page in the new P2M tree location.
+ */
+bool __init early_can_reuse_p2m_middle(unsigned long set_pfn, unsigned long set_mfn)
+{
+	unsigned topidx;
+	unsigned mididx;
+	unsigned ident_pfns;
+	unsigned inv_pfns;
+	unsigned long *p2m;
+	unsigned long *mid_mfn_p;
+	unsigned idx;
+	unsigned long pfn;
+
+	/* We only look when this entails a P2M middle layer */
+	if (p2m_index(set_pfn))
+		return false;
+
+	for (pfn = 0; pfn <= MAX_DOMAIN_PAGES; pfn += P2M_PER_PAGE) {
+		topidx = p2m_top_index(pfn);
+
+		if (!p2m_top[topidx])
+			continue;
+
+		if (p2m_top[topidx] == p2m_mid_missing)
+			continue;
+
+		mididx = p2m_mid_index(pfn);
+		p2m = p2m_top[topidx][mididx];
+		if (!p2m)
+			continue;
+
+		if ((p2m == p2m_missing) || (p2m == p2m_identity))
+			continue;
+
+		if ((unsigned long)p2m == INVALID_P2M_ENTRY)
+			continue;
+
+		ident_pfns = 0;
+		inv_pfns = 0;
+		for (idx = 0; idx < P2M_PER_PAGE; idx++) {
+			/* IDENTITY_PFNs are 1:1 */
+			if (p2m[idx] == IDENTITY_FRAME(pfn + idx))
+				ident_pfns++;
+			else if (p2m[idx] == INVALID_P2M_ENTRY)
+				inv_pfns++;
+			else
+				break;
+		}
+		if ((ident_pfns == P2M_PER_PAGE) || (inv_pfns == P2M_PER_PAGE))
+			goto found;
+	}
+	return false;
+found:
+	/* Found one, replace old with p2m_identity or p2m_missing */
+	p2m_top[topidx][mididx] = (ident_pfns ? p2m_identity : p2m_missing);
+	/* And the other for save/restore.. */
+	mid_mfn_p = p2m_top_mfn_p[topidx];
+	/* NOTE: Even if it is a p2m_identity, it should still point to
+	 * a page filled with INVALID_P2M_ENTRY entries. */
+	mid_mfn_p[mididx] = virt_to_mfn(p2m_missing);
+
+	/* Reset where we want to stick the old page in. */
+	topidx = p2m_top_index(set_pfn);
+	mididx = p2m_mid_index(set_pfn);
+
+	/* This shouldn't happen */
+	if (WARN_ON(p2m_top[topidx] == p2m_mid_missing))
+		early_alloc_p2m(set_pfn);
+
+	if (WARN_ON(p2m_top[topidx][mididx] != p2m_missing))
+		return false;
+
+	p2m_init(p2m);
+	p2m_top[topidx][mididx] = p2m;
+	mid_mfn_p = p2m_top_mfn_p[topidx];
+	mid_mfn_p[mididx] = virt_to_mfn(p2m);
+
+	return true;
+}
 bool __init early_set_phys_to_machine(unsigned long pfn, unsigned long mfn)
 {
 	if (unlikely(!__set_phys_to_machine(pfn, mfn)))  {
 		if (!early_alloc_p2m(pfn))
 			return false;
 
+		if (early_can_reuse_p2m_middle(pfn, mfn))
+			return __set_phys_to_machine(pfn, mfn);
+
 		if (!early_alloc_p2m_middle(pfn, false /* boundary crossover OK!*/))
 			return false;
 
-- 
1.7.7.6


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 17:49:19 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 17:49:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2QfU-0001Kw-KG; Fri, 17 Aug 2012 17:49:08 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T2QfT-0001Kl-00
	for xen-devel@lists.xensource.com; Fri, 17 Aug 2012 17:49:07 +0000
Received: from [85.158.143.35:61230] by server-3.bemta-4.messagelabs.com id
	8F/F2-09529-2148E205; Fri, 17 Aug 2012 17:49:06 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1345225744!6283034!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDcwNDM2NA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11513 invoked from network); 17 Aug 2012 17:49:05 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-2.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Aug 2012 17:49:05 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7HHn1mM005325
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 17 Aug 2012 17:49:02 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7HHn1v3001039
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 17 Aug 2012 17:49:01 GMT
Received: from abhmt102.oracle.com (abhmt102.oracle.com [141.146.116.54])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7HHn0li004654; Fri, 17 Aug 2012 12:49:00 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 17 Aug 2012 10:49:00 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id D9420402D7; Fri, 17 Aug 2012 13:39:11 -0400 (EDT)
Date: Fri, 17 Aug 2012 13:39:11 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xensource.com
Message-ID: <20120817173911.GB11688@phenom.dumpdata.com>
References: <1345133009-21941-1-git-send-email-konrad.wilk@oracle.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1345133009-21941-1-git-send-email-konrad.wilk@oracle.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Subject: Re: [Xen-devel] [PATCH] Boot PV guests with more than 128GB (v3)
	for v3.7.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Aug 16, 2012 at 12:03:18PM -0400, Konrad Rzeszutek Wilk wrote:
> Since (v2): http://lists.xen.org/archives/html/xen-devel/2012-07/msg01864.html
>  - fixed a bug if guest booted with non-PMD aligned size (say, 899MB).
>  - fixed smack warnings
>  - moved a memset(xen_start_info->mfn_list, 0xff,.. ) from one patch to another.

And two more bug-fixes:

>From f042050664c97a365e98daf5783f682d734e35f8 Mon Sep 17 00:00:00 2001
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Thu, 16 Aug 2012 16:38:55 -0400
Subject: [PATCH 1/2] xen/p2m: When revectoring deal with holes in the P2M
 array.

When we free the PFNs and then subsequently populate them back
during bootup:

Freeing 20000-20200 pfn range: 512 pages freed
1-1 mapping on 20000->20200
Freeing 40000-40200 pfn range: 512 pages freed
1-1 mapping on 40000->40200
Freeing bad80-badf4 pfn range: 116 pages freed
1-1 mapping on bad80->badf4
Freeing badf6-bae7f pfn range: 137 pages freed
1-1 mapping on badf6->bae7f
Freeing bb000-100000 pfn range: 282624 pages freed
1-1 mapping on bb000->100000
Released 283999 pages of unused memory
Set 283999 page(s) to 1-1 mapping
Populating 1acb8a-1f20e9 pfn range: 283999 pages added

We end up having the P2M array (that is, the one that was
grafted on the P2M tree) filled with IDENTITY_FRAME or
INVALID_P2M_ENTRY entries. The patch titled

"xen/p2m: Reuse existing P2M leafs if they are filled with 1:1 PFNs or INVALID."
recycles said slots and replaces the P2M tree leaf's with
 &mfn_list[xx] with p2m_identity or p2m_missing.

And re-uses the P2M array sections for other P2M tree leaf's.
For the above mentioned bootup excerpt, the PFNs at
0x20000->0x20200 are going to be IDENTITY based:

P2M[0][256][0] -> P2M[0][257][0] get turned into IDENTITY_FRAME entries.

We can re-use that and make P2M[0][256] point to p2m_identity.
The "old" page (the grafted P2M array provided by Xen) that was at
P2M[0][256] gets put somewhere else - specifically at P2M[6][358],
because when we populate back:

Populating 1acb8a-1f20e9 pfn range: 283999 pages added

we fill P2M[6][358][0] (and then P2M[6][359], P2M[6][360], ...) with
the new MFNs.

That is all OK, except that when we revector we assume that the PFN
count would be the same in the grafted P2M array and in the newly
allocated one. That is no longer the case: since the P2M now has
holes that point to p2m_missing or p2m_identity, we have to take
that into account.

[v2: Check for overflow]
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/xen/p2m.c |   14 +++++++++++---
 1 files changed, 11 insertions(+), 3 deletions(-)

diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
index bbfd085..3b5bd7e 100644
--- a/arch/x86/xen/p2m.c
+++ b/arch/x86/xen/p2m.c
@@ -401,6 +401,7 @@ unsigned long __init xen_revector_p2m_tree(void)
 	unsigned long va_start;
 	unsigned long va_end;
 	unsigned long pfn;
+	unsigned long pfn_free = 0;
 	unsigned long *mfn_list = NULL;
 	unsigned long size;
 
@@ -443,15 +444,22 @@ unsigned long __init xen_revector_p2m_tree(void)
 		if ((unsigned long)mid_p == INVALID_P2M_ENTRY)
 			continue;
 
+		if ((pfn_free + P2M_PER_PAGE) * PAGE_SIZE > size) {
+			WARN(1, "Only allocated for %ld pages, but we want %ld!\n",
+			     size / PAGE_SIZE, pfn_free + P2M_PER_PAGE);
+			return 0;
+		}
 		/* The old va. Rebase it on mfn_list */
 		if (mid_p >= (unsigned long *)va_start && mid_p <= (unsigned long *)va_end) {
 			unsigned long *new;
 
-			new = &mfn_list[pfn];
+			new = &mfn_list[pfn_free];
 
 			copy_page(new, mid_p);
-			p2m_top[topidx][mididx] = &mfn_list[pfn];
-			p2m_top_mfn_p[topidx][mididx] = virt_to_mfn(&mfn_list[pfn]);
+			p2m_top[topidx][mididx] = &mfn_list[pfn_free];
+			p2m_top_mfn_p[topidx][mididx] = virt_to_mfn(&mfn_list[pfn_free]);
+
+			pfn_free += P2M_PER_PAGE;
 
 		}
 		/* This should be the leafs allocated for identity from _brk. */
-- 
1.7.7.6



>From 60a9b396456a2990bb3490671ca832d36e8dd6aa Mon Sep 17 00:00:00 2001
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Fri, 17 Aug 2012 09:35:31 -0400
Subject: [PATCH 2/2] xen/mmu: If the revector fails, don't attempt to
 revector anything else.

If the P2M revectoring fails, we would try to continue on by
cleaning the PMDs for the L1 (PTE) page-tables. But xen_cleanhighmap
is greedy and erases the PMDs on both boundaries. Since the P2M
array can share a PMD, we would wipe out part of the __ka range
that is still used in the P2M tree to point to P2M leafs.

Fix this by bypassing the revectoring and continuing on. If the
revector fails, a nice WARN is printed so we can still
troubleshoot this.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/xen/mmu.c |    4 +++-
 1 files changed, 3 insertions(+), 1 deletions(-)

diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index 6019c22..0dac3d2 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -1238,7 +1238,8 @@ static void __init xen_pagetable_setup_done(pgd_t *base)
 			memblock_free(__pa(xen_start_info->mfn_list), size);
 			/* And revector! Bye bye old array */
 			xen_start_info->mfn_list = new_mfn_list;
-		}
+		} else
+			goto skip;
 	}
 	/* At this stage, cleanup_highmap has already cleaned __ka space
 	 * from _brk_limit way up to the max_pfn_mapped (which is the end of
@@ -1259,6 +1260,7 @@ static void __init xen_pagetable_setup_done(pgd_t *base)
 	 * anything at this stage. */
 	xen_cleanhighmap(MODULES_VADDR, roundup(MODULES_VADDR, PUD_SIZE) - 1);
 #endif
+skip:
 #endif
 	xen_post_allocator_init();
 }
-- 
1.7.7.6


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 17:56:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 17:56:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2Qlw-0001ay-He; Fri, 17 Aug 2012 17:55:48 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T2Qlv-0001aq-1e
	for xen-devel@lists.xensource.com; Fri, 17 Aug 2012 17:55:47 +0000
Received: from [85.158.139.83:56026] by server-7.bemta-5.messagelabs.com id
	09/BB-32634-2A58E205; Fri, 17 Aug 2012 17:55:46 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-11.tower-182.messagelabs.com!1345226144!21493654!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjk2OTUy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18993 invoked from network); 17 Aug 2012 17:55:45 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-11.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 17 Aug 2012 17:55:45 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7HHtevZ022752
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 17 Aug 2012 17:55:41 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7HHtd2P023403
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 17 Aug 2012 17:55:40 GMT
Received: from abhmt110.oracle.com (abhmt110.oracle.com [141.146.116.62])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7HHtdvp008840; Fri, 17 Aug 2012 12:55:39 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 17 Aug 2012 10:55:39 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 4D046402D7; Fri, 17 Aug 2012 13:45:49 -0400 (EDT)
Date: Fri, 17 Aug 2012 13:45:49 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20120817174549.GA14257@phenom.dumpdata.com>
References: <1345133009-21941-1-git-send-email-konrad.wilk@oracle.com>
	<1345133009-21941-7-git-send-email-konrad.wilk@oracle.com>
	<alpine.DEB.2.02.1208171839590.15568@kaball.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.02.1208171839590.15568@kaball.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: [Xen-devel] [PATCH 06/11] xen/mmu: For 64-bit do not call
 xen_map_identity_early
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Aug 17, 2012 at 06:41:23PM +0100, Stefano Stabellini wrote:
> On Thu, 16 Aug 2012, Konrad Rzeszutek Wilk wrote:
> > Because we do not need it. During startup Xen provides us
> > with all the memory mappings that we need to function.
> 
> Shouldn't we check to make sure that is actually true (I am thinking of
> nr_pt_frames)?

I was looking at the source code (hypervisor) to figure it out and
that is certainly true.


> Or is it actually stated somewhere in the Xen headers?

Couldn't find it, but after looking so long at the source code
I didn't even bother looking for it.

Though to be honest - I only looked at how the 64-bit pagetables
were set up, so I didn't dare touch the 32-bit side. Hence the #ifdef.

> 
> 
> 
> > Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > ---
> >  arch/x86/xen/mmu.c |   11 +++++------
> >  1 files changed, 5 insertions(+), 6 deletions(-)
> > 
> > diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
> > index 7247e5a..a59070b 100644
> > --- a/arch/x86/xen/mmu.c
> > +++ b/arch/x86/xen/mmu.c
> > @@ -84,6 +84,7 @@
> >   */
> >  DEFINE_SPINLOCK(xen_reservation_lock);
> >  
> > +#ifdef CONFIG_X86_32
> >  /*
> >   * Identity map, in addition to plain kernel map.  This needs to be
> >   * large enough to allocate page table pages to allocate the rest.
> > @@ -91,7 +92,7 @@ DEFINE_SPINLOCK(xen_reservation_lock);
> >   */
> >  #define LEVEL1_IDENT_ENTRIES	(PTRS_PER_PTE * 4)
> >  static RESERVE_BRK_ARRAY(pte_t, level1_ident_pgt, LEVEL1_IDENT_ENTRIES);
> > -
> > +#endif
> >  #ifdef CONFIG_X86_64
> >  /* l3 pud for userspace vsyscall mapping */
> >  static pud_t level3_user_vsyscall[PTRS_PER_PUD] __page_aligned_bss;
> > @@ -1628,7 +1629,7 @@ static void set_page_prot(void *addr, pgprot_t prot)
> >  	if (HYPERVISOR_update_va_mapping((unsigned long)addr, pte, 0))
> >  		BUG();
> >  }
> > -
> > +#ifdef CONFIG_X86_32
> >  static void __init xen_map_identity_early(pmd_t *pmd, unsigned long max_pfn)
> >  {
> >  	unsigned pmdidx, pteidx;
> > @@ -1679,7 +1680,7 @@ static void __init xen_map_identity_early(pmd_t *pmd, unsigned long max_pfn)
> >  
> >  	set_page_prot(pmd, PAGE_KERNEL_RO);
> >  }
> > -
> > +#endif
> >  void __init xen_setup_machphys_mapping(void)
> >  {
> >  	struct xen_machphys_mapping mapping;
> > @@ -1765,14 +1766,12 @@ void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
> >  	/* Note that we don't do anything with level1_fixmap_pgt which
> >  	 * we don't need. */
> >  
> > -	/* Set up identity map */
> > -	xen_map_identity_early(level2_ident_pgt, max_pfn);
> > -
> >  	/* Make pagetable pieces RO */
> >  	set_page_prot(init_level4_pgt, PAGE_KERNEL_RO);
> >  	set_page_prot(level3_ident_pgt, PAGE_KERNEL_RO);
> >  	set_page_prot(level3_kernel_pgt, PAGE_KERNEL_RO);
> >  	set_page_prot(level3_user_vsyscall, PAGE_KERNEL_RO);
> > +	set_page_prot(level2_ident_pgt, PAGE_KERNEL_RO);
> >  	set_page_prot(level2_kernel_pgt, PAGE_KERNEL_RO);
> >  	set_page_prot(level2_fixmap_pgt, PAGE_KERNEL_RO);
> >  
> > -- 
> > 1.7.7.6
> > 
> > 
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
> > 
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 17:56:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 17:56:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2Qlw-0001ay-He; Fri, 17 Aug 2012 17:55:48 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T2Qlv-0001aq-1e
	for xen-devel@lists.xensource.com; Fri, 17 Aug 2012 17:55:47 +0000
Received: from [85.158.139.83:56026] by server-7.bemta-5.messagelabs.com id
	09/BB-32634-2A58E205; Fri, 17 Aug 2012 17:55:46 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-11.tower-182.messagelabs.com!1345226144!21493654!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjk2OTUy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18993 invoked from network); 17 Aug 2012 17:55:45 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-11.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 17 Aug 2012 17:55:45 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7HHtevZ022752
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 17 Aug 2012 17:55:41 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7HHtd2P023403
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 17 Aug 2012 17:55:40 GMT
Received: from abhmt110.oracle.com (abhmt110.oracle.com [141.146.116.62])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7HHtdvp008840; Fri, 17 Aug 2012 12:55:39 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 17 Aug 2012 10:55:39 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 4D046402D7; Fri, 17 Aug 2012 13:45:49 -0400 (EDT)
Date: Fri, 17 Aug 2012 13:45:49 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20120817174549.GA14257@phenom.dumpdata.com>
References: <1345133009-21941-1-git-send-email-konrad.wilk@oracle.com>
	<1345133009-21941-7-git-send-email-konrad.wilk@oracle.com>
	<alpine.DEB.2.02.1208171839590.15568@kaball.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.02.1208171839590.15568@kaball.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: [Xen-devel] [PATCH 06/11] xen/mmu: For 64-bit do not call
 xen_map_identity_early
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Aug 17, 2012 at 06:41:23PM +0100, Stefano Stabellini wrote:
> On Thu, 16 Aug 2012, Konrad Rzeszutek Wilk wrote:
> > Because we do not need it. During startup, Xen provides
> > us with all the memory mappings that we need to function.
> 
> Shouldn't we check to make sure that is actually true (I am thinking of
> nr_pt_frames)?

I was looking at the hypervisor source code to figure it out, and
it is certainly true.


> Or is it actually stated somewhere in the Xen headers?

Couldn't find it, but after looking at the source code for so long
I didn't even bother looking for it.

Though to be honest - I only looked at how the 64-bit pagetables
were set up, so I didn't dare touch the 32-bit side. Hence the #ifdef.

> 
> 
> 
> > Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > ---
> >  arch/x86/xen/mmu.c |   11 +++++------
> >  1 files changed, 5 insertions(+), 6 deletions(-)
> > 
> > diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
> > index 7247e5a..a59070b 100644
> > --- a/arch/x86/xen/mmu.c
> > +++ b/arch/x86/xen/mmu.c
> > @@ -84,6 +84,7 @@
> >   */
> >  DEFINE_SPINLOCK(xen_reservation_lock);
> >  
> > +#ifdef CONFIG_X86_32
> >  /*
> >   * Identity map, in addition to plain kernel map.  This needs to be
> >   * large enough to allocate page table pages to allocate the rest.
> > @@ -91,7 +92,7 @@ DEFINE_SPINLOCK(xen_reservation_lock);
> >   */
> >  #define LEVEL1_IDENT_ENTRIES	(PTRS_PER_PTE * 4)
> >  static RESERVE_BRK_ARRAY(pte_t, level1_ident_pgt, LEVEL1_IDENT_ENTRIES);
> > -
> > +#endif
> >  #ifdef CONFIG_X86_64
> >  /* l3 pud for userspace vsyscall mapping */
> >  static pud_t level3_user_vsyscall[PTRS_PER_PUD] __page_aligned_bss;
> > @@ -1628,7 +1629,7 @@ static void set_page_prot(void *addr, pgprot_t prot)
> >  	if (HYPERVISOR_update_va_mapping((unsigned long)addr, pte, 0))
> >  		BUG();
> >  }
> > -
> > +#ifdef CONFIG_X86_32
> >  static void __init xen_map_identity_early(pmd_t *pmd, unsigned long max_pfn)
> >  {
> >  	unsigned pmdidx, pteidx;
> > @@ -1679,7 +1680,7 @@ static void __init xen_map_identity_early(pmd_t *pmd, unsigned long max_pfn)
> >  
> >  	set_page_prot(pmd, PAGE_KERNEL_RO);
> >  }
> > -
> > +#endif
> >  void __init xen_setup_machphys_mapping(void)
> >  {
> >  	struct xen_machphys_mapping mapping;
> > @@ -1765,14 +1766,12 @@ void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
> >  	/* Note that we don't do anything with level1_fixmap_pgt which
> >  	 * we don't need. */
> >  
> > -	/* Set up identity map */
> > -	xen_map_identity_early(level2_ident_pgt, max_pfn);
> > -
> >  	/* Make pagetable pieces RO */
> >  	set_page_prot(init_level4_pgt, PAGE_KERNEL_RO);
> >  	set_page_prot(level3_ident_pgt, PAGE_KERNEL_RO);
> >  	set_page_prot(level3_kernel_pgt, PAGE_KERNEL_RO);
> >  	set_page_prot(level3_user_vsyscall, PAGE_KERNEL_RO);
> > +	set_page_prot(level2_ident_pgt, PAGE_KERNEL_RO);
> >  	set_page_prot(level2_kernel_pgt, PAGE_KERNEL_RO);
> >  	set_page_prot(level2_fixmap_pgt, PAGE_KERNEL_RO);
> >  
> > -- 
> > 1.7.7.6
> > 
> > 
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
> > 
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 18:08:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 18:08:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2Qxd-0001sH-Q3; Fri, 17 Aug 2012 18:07:53 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T2Qxc-0001sA-2R
	for xen-devel@lists.xensource.com; Fri, 17 Aug 2012 18:07:52 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1345226865!9651684!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk0MzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19202 invoked from network); 17 Aug 2012 18:07:46 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 18:07:46 -0000
X-IronPort-AV: E=Sophos;i="4.77,785,1336348800"; d="scan'208";a="14065042"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 18:07:45 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 17 Aug 2012 19:07:45 +0100
Date: Fri, 17 Aug 2012 19:07:28 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <1345133009-21941-8-git-send-email-konrad.wilk@oracle.com>
Message-ID: <alpine.DEB.2.02.1208171902320.15568@kaball.uk.xensource.com>
References: <1345133009-21941-1-git-send-email-konrad.wilk@oracle.com>
	<1345133009-21941-8-git-send-email-konrad.wilk@oracle.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: [Xen-devel] [PATCH 07/11] xen/mmu: Recycle the Xen provided L4,
 L3, and L2 pages
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 16 Aug 2012, Konrad Rzeszutek Wilk wrote:
> As we are not using them; we end up only using the L1 pagetables
> and grafting those onto our page-tables.
> 
> [v1: Per Stefano's suggestion squashed two commits]
> [v2: Per Stefano's suggestion simplified loop]
> [v3: Fix smatch warnings]
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> ---
>  arch/x86/xen/mmu.c |   40 +++++++++++++++++++++++++++++++++-------
>  1 files changed, 33 insertions(+), 7 deletions(-)
> 
> diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
> index a59070b..bd92c82 100644
> --- a/arch/x86/xen/mmu.c
> +++ b/arch/x86/xen/mmu.c
> @@ -1708,7 +1708,20 @@ static void convert_pfn_mfn(void *v)
>  	for (i = 0; i < PTRS_PER_PTE; i++)
>  		pte[i] = xen_make_pte(pte[i].pte);
>  }
> -
> +static void __init check_pt_base(unsigned long *pt_base, unsigned long *pt_end,
> +				 unsigned long addr)
> +{
> +	if (*pt_base == PFN_DOWN(__pa(addr))) {
> +		set_page_prot((void *)addr, PAGE_KERNEL);
> +		clear_page((void *)addr);
> +		(*pt_base)++;
> +	}
> +	if (*pt_end == PFN_DOWN(__pa(addr))) {
> +		set_page_prot((void *)addr, PAGE_KERNEL);
> +		clear_page((void *)addr);
> +		(*pt_end)--;
> +	}
> +}
>  /*
>   * Set up the initial kernel pagetable.
>   *
> @@ -1724,6 +1737,9 @@ void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
>  {
>  	pud_t *l3;
>  	pmd_t *l2;
> +	unsigned long addr[3];
> +	unsigned long pt_base, pt_end;
> +	unsigned i;
>  
>  	/* max_pfn_mapped is the last pfn mapped in the initial memory
>  	 * mappings. Considering that on Xen after the kernel mappings we
> @@ -1731,6 +1747,9 @@ void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
>  	 * set max_pfn_mapped to the last real pfn mapped. */
>  	max_pfn_mapped = PFN_DOWN(__pa(xen_start_info->mfn_list));
>  
> +	pt_base = PFN_DOWN(__pa(xen_start_info->pt_base));
> +	pt_end = PFN_DOWN(__pa(xen_start_info->pt_base + (xen_start_info->nr_pt_frames * PAGE_SIZE)));

code style

>  	/* Zap identity mapping */
>  	init_level4_pgt[0] = __pgd(0);
>  
> @@ -1749,6 +1768,9 @@ void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
>  	l3 = m2v(pgd[pgd_index(__START_KERNEL_map)].pgd);
>  	l2 = m2v(l3[pud_index(__START_KERNEL_map)].pud);
>  
> +	addr[0] = (unsigned long)pgd;
> +	addr[1] = (unsigned long)l3;
> +	addr[2] = (unsigned long)l2;
>  	/* Graft it onto L4[272][0]. Note that we creating an aliasing problem:
>  	 * Both L4[272][0] and L4[511][511] have entries that point to the same
>  	 * L2 (PMD) tables. Meaning that if you modify it in __va space
> @@ -1782,20 +1804,24 @@ void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
>  	/* Unpin Xen-provided one */
>  	pin_pagetable_pfn(MMUEXT_UNPIN_TABLE, PFN_DOWN(__pa(pgd)));
>  
> -	/* Switch over */
> -	pgd = init_level4_pgt;
> -
>  	/*
>  	 * At this stage there can be no user pgd, and no page
>  	 * structure to attach it to, so make sure we just set kernel
>  	 * pgd.
>  	 */
>  	xen_mc_batch();
> -	__xen_write_cr3(true, __pa(pgd));
> +	__xen_write_cr3(true, __pa(init_level4_pgt));
>  	xen_mc_issue(PARAVIRT_LAZY_CPU);
>  
> -	memblock_reserve(__pa(xen_start_info->pt_base),
> -			 xen_start_info->nr_pt_frames * PAGE_SIZE);
> +	/* We can't that easily rip out L3 and L2, as the Xen pagetables are
> +	 * set out this way: [L4], [L1], [L2], [L3], [L1], [L1] ...  for
> +	 * the initial domain. For guests using the toolstack, they are in:
> +	 * [L4], [L3], [L2], [L1], [L1], order .. */
> +	for (i = 0; i < ARRAY_SIZE(addr); i++)
> +		check_pt_base(&pt_base, &pt_end, addr[i]);

It is much clearer now, but if the comment is correct, doesn't it mean
that we are going to be able to free pgd, l3 and l2 only in the non-dom0
case?
If so it might be worth saying it explicitly.

Other than that, it is fine by me.


> +	/* Our (by three pages) smaller Xen pagetable that we are using */
> +	memblock_reserve(PFN_PHYS(pt_base), (pt_end - pt_base) * PAGE_SIZE);
>  }
>  #else	/* !CONFIG_X86_64 */
>  static RESERVE_BRK_ARRAY(pmd_t, initial_kernel_pmd, PTRS_PER_PMD);
> -- 
> 1.7.7.6
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 18:16:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 18:16:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2R5P-00023m-Vo; Fri, 17 Aug 2012 18:15:55 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T2R5O-00023h-Vu
	for xen-devel@lists.xensource.com; Fri, 17 Aug 2012 18:15:55 +0000
Received: from [85.158.143.99:26465] by server-2.bemta-4.messagelabs.com id
	50/B1-31966-A5A8E205; Fri, 17 Aug 2012 18:15:54 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-11.tower-216.messagelabs.com!1345227352!21219154!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDcwNDM2NA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31151 invoked from network); 17 Aug 2012 18:15:53 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-11.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 17 Aug 2012 18:15:53 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7HIFlSO029866
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 17 Aug 2012 18:15:47 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7HIFkYK007116
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 17 Aug 2012 18:15:47 GMT
Received: from abhmt110.oracle.com (abhmt110.oracle.com [141.146.116.62])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7HIFkim008098; Fri, 17 Aug 2012 13:15:46 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 17 Aug 2012 11:15:46 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 8107F402D7; Fri, 17 Aug 2012 14:05:57 -0400 (EDT)
Date: Fri, 17 Aug 2012 14:05:57 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20120817180557.GA18579@phenom.dumpdata.com>
References: <1345133009-21941-1-git-send-email-konrad.wilk@oracle.com>
	<1345133009-21941-8-git-send-email-konrad.wilk@oracle.com>
	<alpine.DEB.2.02.1208171902320.15568@kaball.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.02.1208171902320.15568@kaball.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: [Xen-devel] [PATCH 07/11] xen/mmu: Recycle the Xen provided L4,
 L3, and L2 pages
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Aug 17, 2012 at 07:07:28PM +0100, Stefano Stabellini wrote:
> On Thu, 16 Aug 2012, Konrad Rzeszutek Wilk wrote:
> > As we are not using them; we end up only using the L1 pagetables
> > and grafting those onto our page-tables.
> > 
> > [v1: Per Stefano's suggestion squashed two commits]
> > [v2: Per Stefano's suggestion simplified loop]
> > [v3: Fix smatch warnings]
> > Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > ---
> >  arch/x86/xen/mmu.c |   40 +++++++++++++++++++++++++++++++++-------
> >  1 files changed, 33 insertions(+), 7 deletions(-)
> > 
> > diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
> > index a59070b..bd92c82 100644
> > --- a/arch/x86/xen/mmu.c
> > +++ b/arch/x86/xen/mmu.c
> > @@ -1708,7 +1708,20 @@ static void convert_pfn_mfn(void *v)
> >  	for (i = 0; i < PTRS_PER_PTE; i++)
> >  		pte[i] = xen_make_pte(pte[i].pte);
> >  }
> > -
> > +static void __init check_pt_base(unsigned long *pt_base, unsigned long *pt_end,
> > +				 unsigned long addr)
> > +{
> > +	if (*pt_base == PFN_DOWN(__pa(addr))) {
> > +		set_page_prot((void *)addr, PAGE_KERNEL);
> > +		clear_page((void *)addr);
> > +		(*pt_base)++;
> > +	}
> > +	if (*pt_end == PFN_DOWN(__pa(addr))) {
> > +		set_page_prot((void *)addr, PAGE_KERNEL);
> > +		clear_page((void *)addr);
> > +		(*pt_end)--;
> > +	}
> > +}
> >  /*
> >   * Set up the initial kernel pagetable.
> >   *
> > @@ -1724,6 +1737,9 @@ void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
> >  {
> >  	pud_t *l3;
> >  	pmd_t *l2;
> > +	unsigned long addr[3];
> > +	unsigned long pt_base, pt_end;
> > +	unsigned i;
> >  
> >  	/* max_pfn_mapped is the last pfn mapped in the initial memory
> >  	 * mappings. Considering that on Xen after the kernel mappings we
> > @@ -1731,6 +1747,9 @@ void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
> >  	 * set max_pfn_mapped to the last real pfn mapped. */
> >  	max_pfn_mapped = PFN_DOWN(__pa(xen_start_info->mfn_list));
> >  
> > +	pt_base = PFN_DOWN(__pa(xen_start_info->pt_base));
> > +	pt_end = PFN_DOWN(__pa(xen_start_info->pt_base + (xen_start_info->nr_pt_frames * PAGE_SIZE)));
> 

or just do:

	pt_end = pt_base + xen_start_info->nr_pt_frames;

> code style
> 
> >  	/* Zap identity mapping */
> >  	init_level4_pgt[0] = __pgd(0);
> >  
> > @@ -1749,6 +1768,9 @@ void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
> >  	l3 = m2v(pgd[pgd_index(__START_KERNEL_map)].pgd);
> >  	l2 = m2v(l3[pud_index(__START_KERNEL_map)].pud);
> >  
> > +	addr[0] = (unsigned long)pgd;
> > +	addr[1] = (unsigned long)l3;
> > +	addr[2] = (unsigned long)l2;
> >  	/* Graft it onto L4[272][0]. Note that we creating an aliasing problem:
> >  	 * Both L4[272][0] and L4[511][511] have entries that point to the same
> >  	 * L2 (PMD) tables. Meaning that if you modify it in __va space
> > @@ -1782,20 +1804,24 @@ void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
> >  	/* Unpin Xen-provided one */
> >  	pin_pagetable_pfn(MMUEXT_UNPIN_TABLE, PFN_DOWN(__pa(pgd)));
> >  
> > -	/* Switch over */
> > -	pgd = init_level4_pgt;
> > -
> >  	/*
> >  	 * At this stage there can be no user pgd, and no page
> >  	 * structure to attach it to, so make sure we just set kernel
> >  	 * pgd.
> >  	 */
> >  	xen_mc_batch();
> > -	__xen_write_cr3(true, __pa(pgd));
> > +	__xen_write_cr3(true, __pa(init_level4_pgt));
> >  	xen_mc_issue(PARAVIRT_LAZY_CPU);
> >  
> > -	memblock_reserve(__pa(xen_start_info->pt_base),
> > -			 xen_start_info->nr_pt_frames * PAGE_SIZE);
> > +	/* We can't that easily rip out L3 and L2, as the Xen pagetables are
> > +	 * set out this way: [L4], [L1], [L2], [L3], [L1], [L1] ...  for
> > +	 * the initial domain. For guests using the toolstack, they are in:
> > +	 * [L4], [L3], [L2], [L1], [L1], order .. */
> > +	for (i = 0; i < ARRAY_SIZE(addr); i++)
> > +		check_pt_base(&pt_base, &pt_end, addr[i]);
> 
> It is much clearer now, but if the comment is correct, doesn't it mean
> that we are going to be able to free pgd, l3 and l2 only in the non-dom0
> case?

And in the dom0 case, only the PGD.

> If so it might be worth saying it explicitly.

OK.
> 
> Other than that, it is fine by me.
> 
> 
> > +	/* Our (by three pages) smaller Xen pagetable that we are using */
> > +	memblock_reserve(PFN_PHYS(pt_base), (pt_end - pt_base) * PAGE_SIZE);
> >  }
> >  #else	/* !CONFIG_X86_64 */
> >  static RESERVE_BRK_ARRAY(pmd_t, initial_kernel_pmd, PTRS_PER_PMD);
> > -- 
> > 1.7.7.6
> > 
> > 
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
> > 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> >  {
> >  	pud_t *l3;
> >  	pmd_t *l2;
> > +	unsigned long addr[3];
> > +	unsigned long pt_base, pt_end;
> > +	unsigned i;
> >  
> >  	/* max_pfn_mapped is the last pfn mapped in the initial memory
> >  	 * mappings. Considering that on Xen after the kernel mappings we
> > @@ -1731,6 +1747,9 @@ void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
> >  	 * set max_pfn_mapped to the last real pfn mapped. */
> >  	max_pfn_mapped = PFN_DOWN(__pa(xen_start_info->mfn_list));
> >  
> > +	pt_base = PFN_DOWN(__pa(xen_start_info->pt_base));
> > +	pt_end = PFN_DOWN(__pa(xen_start_info->pt_base + (xen_start_info->nr_pt_frames * PAGE_SIZE)));
> 

or just do:

	pt_end = pt_base + xen_start_info->nr_pt_frames;

> code style
> 
> >  	/* Zap identity mapping */
> >  	init_level4_pgt[0] = __pgd(0);
> >  
> > @@ -1749,6 +1768,9 @@ void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
> >  	l3 = m2v(pgd[pgd_index(__START_KERNEL_map)].pgd);
> >  	l2 = m2v(l3[pud_index(__START_KERNEL_map)].pud);
> >  
> > +	addr[0] = (unsigned long)pgd;
> > +	addr[1] = (unsigned long)l3;
> > +	addr[2] = (unsigned long)l2;
> >  	/* Graft it onto L4[272][0]. Note that we creating an aliasing problem:
> >  	 * Both L4[272][0] and L4[511][511] have entries that point to the same
> >  	 * L2 (PMD) tables. Meaning that if you modify it in __va space
> > @@ -1782,20 +1804,24 @@ void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
> >  	/* Unpin Xen-provided one */
> >  	pin_pagetable_pfn(MMUEXT_UNPIN_TABLE, PFN_DOWN(__pa(pgd)));
> >  
> > -	/* Switch over */
> > -	pgd = init_level4_pgt;
> > -
> >  	/*
> >  	 * At this stage there can be no user pgd, and no page
> >  	 * structure to attach it to, so make sure we just set kernel
> >  	 * pgd.
> >  	 */
> >  	xen_mc_batch();
> > -	__xen_write_cr3(true, __pa(pgd));
> > +	__xen_write_cr3(true, __pa(init_level4_pgt));
> >  	xen_mc_issue(PARAVIRT_LAZY_CPU);
> >  
> > -	memblock_reserve(__pa(xen_start_info->pt_base),
> > -			 xen_start_info->nr_pt_frames * PAGE_SIZE);
> > +	/* We can't that easily rip out L3 and L2, as the Xen pagetables are
> > +	 * set out this way: [L4], [L1], [L2], [L3], [L1], [L1] ...  for
> > +	 * the initial domain. For guests using the toolstack, they are in:
> > +	 * [L4], [L3], [L2], [L1], [L1], order .. */
> > +	for (i = 0; i < ARRAY_SIZE(addr); i++)
> > +		check_pt_base(&pt_base, &pt_end, addr[i]);
> 
> It is much clearer now, but if the comment is correct, doesn't it mean
> that we are going to be able to free pgd, l3 and l2 only in the non-dom0
> case?

And in the dom0 case, only the PGD.

> If so it might be worth saying it explicitly.

OK.
> 
> Other than that, it is fine by me.
> 
> 
> > +	/* Our (by three pages) smaller Xen pagetable that we are using */
> > +	memblock_reserve(PFN_PHYS(pt_base), (pt_end - pt_base) * PAGE_SIZE);
> >  }
> >  #else	/* !CONFIG_X86_64 */
> >  static RESERVE_BRK_ARRAY(pmd_t, initial_kernel_pmd, PTRS_PER_PMD);
> > -- 
> > 1.7.7.6
> > 
> > 
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
> > 
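[Editorial note: the reclaim logic under review can be modelled in user space by mirroring the patch's check_pt_base() arithmetic: the reserved [pt_base, pt_end] frame range shrinks only when a pagetable page sits exactly at one of its boundaries. This is a standalone sketch with made-up frame numbers, not the kernel code; the real function also remaps the page with set_page_prot() and clears it before releasing it.]

```c
#include <stddef.h>

/* Model of the patch's check_pt_base() bookkeeping: if the given
 * pagetable frame sits at either boundary of the still-reserved
 * range, shrink the range so the later memblock_reserve() call
 * skips that page. (The kernel version also calls set_page_prot()
 * and clear_page() on the reclaimed page.) */
void check_pt_base(unsigned long *pt_base, unsigned long *pt_end,
                   unsigned long frame)
{
    if (*pt_base == frame)
        (*pt_base)++;   /* reclaimed from the front of the range */
    if (*pt_end == frame)
        (*pt_end)--;    /* reclaimed from the back of the range */
}
```

With the dom0 layout described in the patch comment ([L4], [L1], [L2], [L3], ...), only the L4 frame sits at a boundary, which matches the remark above that in the dom0 case only the PGD can be freed.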

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 18:31:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 18:31:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2RK0-0002FD-H6; Fri, 17 Aug 2012 18:31:00 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T2RJy-0002F8-Km
	for xen-devel@lists.xensource.com; Fri, 17 Aug 2012 18:30:58 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1345228251!1747785!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk0MzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17429 invoked from network); 17 Aug 2012 18:30:52 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 18:30:52 -0000
X-IronPort-AV: E=Sophos;i="4.77,785,1336348800"; d="scan'208";a="14065148"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 18:30:51 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 17 Aug 2012 19:30:51 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1T2RJr-00047A-F8;
	Fri, 17 Aug 2012 18:30:51 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1T2RJr-00055I-3c;
	Fri, 17 Aug 2012 19:30:51 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13613-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 17 Aug 2012 19:30:51 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13613: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13613 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13613/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13612
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13612
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13612
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13612

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  71a672765111
baseline version:
 xen                  3468a834be8d

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Michael Young <m.a.young@durham.ac.uk>
  Tim Deegan <tim@xen.org>
  Yongjie Ren <yongjie.ren@intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=71a672765111
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable 71a672765111
+ branch=xen-unstable
+ revision=71a672765111
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/staging/xen-unstable.hg
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-unstable.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-unstable.hg
+ hg push -r 71a672765111 ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 3 changesets with 3 changes to 3 files

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 18:47:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 18:47:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2RZW-0002Yo-8c; Fri, 17 Aug 2012 18:47:02 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T2RZU-0002Yj-4o
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 18:47:00 +0000
Received: from [85.158.143.35:23272] by server-1.bemta-4.messagelabs.com id
	4D/A6-07754-3A19E205; Fri, 17 Aug 2012 18:46:59 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1345229212!14055506!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk0MzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18345 invoked from network); 17 Aug 2012 18:46:52 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 18:46:52 -0000
X-IronPort-AV: E=Sophos;i="4.77,785,1336348800"; d="scan'208";a="14065233"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 18:46:52 +0000
Received: from [127.0.0.1] (10.80.16.67) by smtprelay.citrix.com
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 17 Aug 2012 19:46:52 +0100
Message-ID: <1345229211.23624.0.camel@dagon.hellion.org.uk>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jordan Justen <jljusten@gmail.com>
Date: Fri, 17 Aug 2012 19:46:51 +0100
In-Reply-To: <CAFe8ug_YBeupJf9Kdq100WAYoKv_-otRppL0ACejJimjsTu-Nw@mail.gmail.com>
References: <CAEQjb-Rd2=DaxrxiK2TYzNNBH01w_5OgPeKxugpS26n4tGw4Yg@mail.gmail.com>
	<1344439371.32142.46.camel@zakaz.uk.xensource.com>
	<CAEQjb-T9Jg_9EjHi7fKUABBW3_PGNYgjU+cVGTLZ-ZiLA29AqA@mail.gmail.com>
	<1344441333.32142.48.camel@zakaz.uk.xensource.com>
	<CAEQjb-QvWYJtQu1vWJatitOgk6mA4TuUzFX2EdWSX3F3w6jGpQ@mail.gmail.com>
	<1345218274.10161.86.camel@zakaz.uk.xensource.com>
	<CAFe8ug_YBeupJf9Kdq100WAYoKv_-otRppL0ACejJimjsTu-Nw@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Bei Guan <gbtju85@gmail.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] How to initialize the grant table in a HVM guest OS
 and its bios
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-08-17 at 18:08 +0100, Jordan Justen wrote:
> On Fri, Aug 17, 2012 at 8:44 AM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > On Fri, 2012-08-17 at 16:31 +0100, Bei Guan wrote:
> >>
> >> 2012/8/8 Ian Campbell <Ian.Campbell@citrix.com>
> >>         On Wed, 2012-08-08 at 16:48 +0100, Bei Guan wrote:
> >>
> >>         > Thank you very much for your help.
> >>         > Is there any example code of initialization of grant table
> >>         in HVM that
> >>         > I can refer to?
> >>
> >>
> >>         The PVHVM support in upstream Linux would be a good place to
> >>         look.
> >>
> >>         So might the code in the xen tree in
> >>         unmodified_drivers/linux-2.6/
> >>
> >>         IIRC Daniel got grant tables working in SeaBIOS last summer
> >>         for GSoC so
> >>         you might also find some useful examples in
> >>         git://github.com/evildani/seabios_patch.git
> >> Hi Ian,
> >>
> >>
> >> Thank you very much for this information. It's very useful to me.
> >>
> >>
> >> However, I'm still confused about the initialization of the grant table
> >> in HVM.
> >>
> >>
> >> The call chain for grant table initialization in the Linux source
> >> code (drivers/xen) is:
> >>
> >>
> >> So, I am not sure what the function apply_to_page_range(), which is
> >> implemented in code file [1], actually does. This function is a little
> >> complex. Is there any simpler way to do this? Thank you for your time.
> >
> > This function is the simple method ;-)
> >
> > All it basically does is iterate over the page tables corresponding to a
> > range of addresses and calling a user supplied function on each leaf
> > PTE. In the case of the gnttab_map this user supplied function simply
> > sets the leaf PTEs to point to the right grant table page.
> >
> > I suppose you are working on tianocore? I've no idea what the page table
> > layout is in that environment, I suppose it either has a linear map or
> > some other way of getting at the leaf ptes. Anyway since the method to
> > use is specific to the OS (or firmware) environment you are running in I
> > think you'll have to ask on the tianocore development list.
> 
> At boot time all pages are identity mapped. I don't think we need this
> mapping step for our firmware. Does that sound right?

In that case you can skip the page table setup bit, you just need the
step where you add the grant table to your physical address space.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 18:47:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 18:47:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2RZW-0002Yo-8c; Fri, 17 Aug 2012 18:47:02 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T2RZU-0002Yj-4o
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 18:47:00 +0000
Received: from [85.158.143.35:23272] by server-1.bemta-4.messagelabs.com id
	4D/A6-07754-3A19E205; Fri, 17 Aug 2012 18:46:59 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1345229212!14055506!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk0MzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18345 invoked from network); 17 Aug 2012 18:46:52 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 18:46:52 -0000
X-IronPort-AV: E=Sophos;i="4.77,785,1336348800"; d="scan'208";a="14065233"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 18:46:52 +0000
Received: from [127.0.0.1] (10.80.16.67) by smtprelay.citrix.com
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 17 Aug 2012 19:46:52 +0100
Message-ID: <1345229211.23624.0.camel@dagon.hellion.org.uk>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jordan Justen <jljusten@gmail.com>
Date: Fri, 17 Aug 2012 19:46:51 +0100
In-Reply-To: <CAFe8ug_YBeupJf9Kdq100WAYoKv_-otRppL0ACejJimjsTu-Nw@mail.gmail.com>
References: <CAEQjb-Rd2=DaxrxiK2TYzNNBH01w_5OgPeKxugpS26n4tGw4Yg@mail.gmail.com>
	<1344439371.32142.46.camel@zakaz.uk.xensource.com>
	<CAEQjb-T9Jg_9EjHi7fKUABBW3_PGNYgjU+cVGTLZ-ZiLA29AqA@mail.gmail.com>
	<1344441333.32142.48.camel@zakaz.uk.xensource.com>
	<CAEQjb-QvWYJtQu1vWJatitOgk6mA4TuUzFX2EdWSX3F3w6jGpQ@mail.gmail.com>
	<1345218274.10161.86.camel@zakaz.uk.xensource.com>
	<CAFe8ug_YBeupJf9Kdq100WAYoKv_-otRppL0ACejJimjsTu-Nw@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Bei Guan <gbtju85@gmail.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] How to initialize the grant table in a HVM guest OS
 and its bios
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-08-17 at 18:08 +0100, Jordan Justen wrote:
> On Fri, Aug 17, 2012 at 8:44 AM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > On Fri, 2012-08-17 at 16:31 +0100, Bei Guan wrote:
> >>
> >> 2012/8/8 Ian Campbell <Ian.Campbell@citrix.com>
> >>         On Wed, 2012-08-08 at 16:48 +0100, Bei Guan wrote:
> >>
> >>         > Thank you very much for your help.
> >>         > Is there any example code of initialization of grant table
> >>         in HVM that
> >>         > I can refer to?
> >>
> >>
> >>         The PVHVM support in upstream Linux would be a good place to
> >>         look.
> >>
> >>         So might the code in the xen tree in
> >>         unmodified_drivers/linux-2.6/
> >>
> >>         IIRC Daniel got grant tables working in SeaBIOS last summer
> >>         for GSoC so
> >>         you might also find some useful examples in
> >>         git://github.com/evildani/seabios_patch.git
> >> Hi Ian,
> >>
> >>
> >> Thank you very much for this information. It's very useful to me.
> >>
> >>
> >> However, I'm still confused with the initialization of the grant table
> >> in HVM.
> >>
> >>
> >> The relationship of the methods in the initialization of the grant
> >> table in linux source code (drivers/xen) is:
> >> platform_pci_init()-->gnttab_init()-->gnttab_resume()-->gnttab_map()-->arch_gnttab_map_shared()-->apply_to_page_range().
> >>
> >>
> >> So, I am not sure what the method apply_to_page_range(), which is
> >> implemented in code file [1], actually does.
> >> This function is a little complex. Is there any simple method to do
> >> this? Thank you for your time.
> >
> > This function is the simple method ;-)
> >
> > All it basically does is iterate over the page tables corresponding to a
> > range of addresses and call a user-supplied function on each leaf
> > PTE. In the case of gnttab_map this user-supplied function simply
> > sets the leaf PTEs to point to the right grant table page.
> >
> > I suppose you are working on tianocore? I've no idea what the page table
> > layout is in that environment; I suppose it either has a linear map or
> > some other way of getting at the leaf PTEs. Anyway, since the method to
> > use is specific to the OS (or firmware) environment you are running in, I
> > think you'll have to ask on the tianocore development list.
> 
> At boot time all pages are identity mapped. I don't think we need this
> mapping step for our firmware. Does that sound right?

In that case you can skip the page table setup bit; you just need the
step where you add the grant table to your physical address space.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 19:18:22 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 19:18:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2S3Y-0002qm-4O; Fri, 17 Aug 2012 19:18:04 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1T2S3W-0002qh-Id
	for Xen-devel@lists.xensource.com; Fri, 17 Aug 2012 19:18:02 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1345231075!3467525!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDcwNDM2NA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3806 invoked from network); 17 Aug 2012 19:17:56 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-6.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Aug 2012 19:17:56 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7HJHmm5021241
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 17 Aug 2012 19:17:48 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7HJHk2b000866
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 17 Aug 2012 19:17:47 GMT
Received: from abhmt104.oracle.com (abhmt104.oracle.com [141.146.116.56])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7HJHkHs015060; Fri, 17 Aug 2012 14:17:46 -0500
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 17 Aug 2012 12:17:46 -0700
Date: Fri, 17 Aug 2012 12:17:45 -0700
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20120817121745.55dc7c11@mantra.us.oracle.com>
In-Reply-To: <alpine.DEB.2.02.1208171151070.15568@kaball.uk.xensource.com>
References: <20120815175724.3405043a@mantra.us.oracle.com>
	<alpine.DEB.2.02.1208161454430.4850@kaball.uk.xensource.com>
	<20120816114650.4db2079f@mantra.us.oracle.com>
	<1345192115.30865.86.camel@zakaz.uk.xensource.com>
	<alpine.DEB.2.02.1208171151070.15568@kaball.uk.xensource.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.7.6 (GTK+ 2.18.9; x86_64-redhat-linux-gnu)
Mime-Version: 1.0
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [RFC PATCH 1/8]: PVH: Basic and preparatory changes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 17 Aug 2012 11:56:43 +0100
Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:

> On Fri, 17 Aug 2012, Ian Campbell wrote:
> > > > > diff --git a/include/xen/interface/xen.h b/include/xen/interface/xen.h
> > > > > index 0801468..1d5bc36 100644
> > > > > --- a/include/xen/interface/xen.h
> > > > > +++ b/include/xen/interface/xen.h
> > > > > @@ -493,6 +493,7 @@ struct dom0_vga_console_info {
> > > > >  /* These flags are passed in the 'flags' field of start_info_t. */
> > > > >  #define SIF_PRIVILEGED    (1<<0)  /* Is the domain privileged? */
> > > > >  #define SIF_INITDOMAIN    (1<<1)  /* Is this the initial control domain? */
> > > > > +#define SIF_IS_PVINHVM    (1<<4)  /* Is it a PV running in HVM container? */
> > > > >  #define SIF_PM_MASK       (0xFF<<8) /* reserve 1 byte for xen-pm options */
> > > > >  typedef uint64_t cpumap_t;
> > > > 
> > > > I would avoid adding SIF_IS_PVINHVM, an x86 specific concept,
> > > > into a generic xen.h interface file. 
> > 
> > Is PVH actually more like a XENFEAT style thing?
> > 
> > Is there actually anywhere which wants to know specifically about
> > PVH rather than some more specific property which a PVH domain
> > happens to have?
> 
> That's exactly the point.
> 
> 
> > > > > +/* xen_pv_domain check is necessary as start_info ptr is null in HVM. Also,
> > > > > + * note, xen PVH domain shares lot of HVM code */
> > > > > +#define xen_pvh_domain()       (xen_pv_domain() &&                     \
> > > > > +				(xen_start_info->flags & SIF_IS_PVINHVM))
> > > >  
> > > > Also here.
> > > 
> > > Hmm.. I can move '#define xen_pvh_domain()' to x86 header, easy.
> > > But, not sure how to define SIF_IS_PVINHVM then? I could put
> > > SIF_IS_RESVD in include/xen/interface/xen.h, and then do 
> > > #define SIF_IS_PVINHVM SIF_IS_RESVD in an x86 file.
> > > 
> > > What do you think about that?
> > 
> > Should PVH actually be a new value in the xen_domain_type enum?
> 
> I don't think we should have a xen_domain_type pvh at all.
> If we really need it we should define it as a set of individual
> properties:
> 
> #define xen_pvh_domain() (xen_pv_domain() && \
>                           xen_feature(XENFEAT_auto_translated_physmap) && \
>                           xen_have_vector_callback)

No, I had started with an enum, but that led to too many unnecessary
code changes. I like the above #define. It eliminates the need for the
SIF flag. I'll see if that works.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 19:20:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 19:20:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2S5s-0002wK-M9; Fri, 17 Aug 2012 19:20:28 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1T2S5q-0002wD-L4
	for Xen-devel@lists.xensource.com; Fri, 17 Aug 2012 19:20:26 +0000
Received: from [85.158.143.35:46954] by server-2.bemta-4.messagelabs.com id
	D6/B5-31966-9799E205; Fri, 17 Aug 2012 19:20:25 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1345231223!14058847!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDcwNDM2NA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22974 invoked from network); 17 Aug 2012 19:20:24 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-15.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Aug 2012 19:20:24 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7HJKJnc024452
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 17 Aug 2012 19:20:20 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7HJKI9U004319
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 17 Aug 2012 19:20:19 GMT
Received: from abhmt114.oracle.com (abhmt114.oracle.com [141.146.116.66])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7HJKInE030754; Fri, 17 Aug 2012 14:20:18 -0500
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 17 Aug 2012 12:20:18 -0700
Date: Fri, 17 Aug 2012 12:20:14 -0700
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20120817122014.3c3387b5@mantra.us.oracle.com>
In-Reply-To: <alpine.DEB.2.02.1208171104230.15568@kaball.uk.xensource.com>
References: <20120815175724.3405043a@mantra.us.oracle.com>
	<alpine.DEB.2.02.1208161454430.4850@kaball.uk.xensource.com>
	<20120816114650.4db2079f@mantra.us.oracle.com>
	<alpine.DEB.2.02.1208171104230.15568@kaball.uk.xensource.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.7.6 (GTK+ 2.18.9; x86_64-redhat-linux-gnu)
Mime-Version: 1.0
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [RFC PATCH 1/8]: PVH: Basic and preparatory changes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 17 Aug 2012 11:15:32 +0100
Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:

> On Thu, 16 Aug 2012, Mukesh Rathor wrote:
> > > > diff --git a/include/xen/interface/xen.h b/include/xen/interface/xen.h
> > > > index 0801468..1d5bc36 100644
> > > > --- a/include/xen/interface/xen.h
> > > > +++ b/include/xen/interface/xen.h
> > > > @@ -493,6 +493,7 @@ struct dom0_vga_console_info {
> > > >  /* These flags are passed in the 'flags' field of start_info_t. */
> > > >  #define SIF_PRIVILEGED    (1<<0)  /* Is the domain privileged? */
> > > >  #define SIF_INITDOMAIN    (1<<1)  /* Is this the initial control domain? */
> > > > +#define SIF_IS_PVINHVM    (1<<4)  /* Is it a PV running in HVM container? */
> > > >  #define SIF_PM_MASK       (0xFF<<8) /* reserve 1 byte for xen-pm options */
> > > >  typedef uint64_t cpumap_t;
> > > 
> > > I would avoid adding SIF_IS_PVINHVM, an x86 specific concept,
> > > into a generic xen.h interface file. 
> > 
> > > > +/* xen_pv_domain check is necessary as start_info ptr is null in HVM. Also,
> > > > + * note, xen PVH domain shares lot of HVM code */
> > > > +#define xen_pvh_domain()       (xen_pv_domain() &&                     \
> > > > +				(xen_start_info->flags & SIF_IS_PVINHVM))
> > >  
> > > Also here.
> > 
> > Hmm.. I can move '#define xen_pvh_domain()' to x86 header, easy.
> > But, not sure how to define SIF_IS_PVINHVM then? I could put
> > SIF_IS_RESVD in include/xen/interface/xen.h, and then do 
> > #define SIF_IS_PVINHVM SIF_IS_RESVD in an x86 file.
> > 
> > What do you think about that?
> 
> I am not particularly fussed about the location of SIF_IS_PVINHVM.
> We could define it in asm/xen/hypervisor.h for example.
> 
> The very important bit is to avoid xen_pvh_domain() in generic code
> because it reduces code reusability.
> xen_pvh_domain() covers too many different concepts (a PV guest, in an
> HVM container, using nested paging in hardware); if we bundle them
> together it becomes much harder to reuse them.
> So we should avoid checking for xen_pvh_domain() and check for the
> relevant sub-property we are actually interested in.
> 
> For example in balloon.c we are probably only interested in memory
> related behavior, so checking for XENFEAT_auto_translated_physmap
> should be enough.  In other parts of the code we might want to check
> for xen_pv_domain(). If xen_pv_domain() and
> XENFEAT_auto_translated_physmap are not enough, we could introduce
> another small XENFEAT that specifies that the domain is running in an
> HVM container. This way they are all reusable.

Yeah, I thought about that, but wasn't sure what the implications would
be for a guest that's not PVH but has an auto-translated physmap, if
there's such a possibility. If you guys think that's not an issue, I can
change it.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 19:20:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 19:20:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2S5s-0002wK-M9; Fri, 17 Aug 2012 19:20:28 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1T2S5q-0002wD-L4
	for Xen-devel@lists.xensource.com; Fri, 17 Aug 2012 19:20:26 +0000
Received: from [85.158.143.35:46954] by server-2.bemta-4.messagelabs.com id
	D6/B5-31966-9799E205; Fri, 17 Aug 2012 19:20:25 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1345231223!14058847!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDcwNDM2NA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22974 invoked from network); 17 Aug 2012 19:20:24 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-15.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Aug 2012 19:20:24 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7HJKJnc024452
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 17 Aug 2012 19:20:20 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7HJKI9U004319
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 17 Aug 2012 19:20:19 GMT
Received: from abhmt114.oracle.com (abhmt114.oracle.com [141.146.116.66])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7HJKInE030754; Fri, 17 Aug 2012 14:20:18 -0500
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 17 Aug 2012 12:20:18 -0700
Date: Fri, 17 Aug 2012 12:20:14 -0700
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20120817122014.3c3387b5@mantra.us.oracle.com>
In-Reply-To: <alpine.DEB.2.02.1208171104230.15568@kaball.uk.xensource.com>
References: <20120815175724.3405043a@mantra.us.oracle.com>
	<alpine.DEB.2.02.1208161454430.4850@kaball.uk.xensource.com>
	<20120816114650.4db2079f@mantra.us.oracle.com>
	<alpine.DEB.2.02.1208171104230.15568@kaball.uk.xensource.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.7.6 (GTK+ 2.18.9; x86_64-redhat-linux-gnu)
Mime-Version: 1.0
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [RFC PATCH 1/8]: PVH: Basic and preparatory changes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 17 Aug 2012 11:15:32 +0100
Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:

> On Thu, 16 Aug 2012, Mukesh Rathor wrote:
> > > > diff --git a/include/xen/interface/xen.h
> > > > b/include/xen/interface/xen.h index 0801468..1d5bc36 100644
> > > > --- a/include/xen/interface/xen.h
> > > > +++ b/include/xen/interface/xen.h
> > > > @@ -493,6 +493,7 @@ struct dom0_vga_console_info {
> > > >  /* These flags are passed in the 'flags' field of
> > > > start_info_t. */ #define SIF_PRIVILEGED    (1<<0)  /* Is the
> > > > domain privileged? */ #define SIF_INITDOMAIN    (1<<1)  /* Is
> > > > this the initial control domain? */ +#define SIF_IS_PVINHVM
> > > > (1<<4)  /* Is it a PV running in HVM container? */ #define
> > > > SIF_PM_MASK       (0xFF<<8) /* reserve 1 byte for xen-pm
> > > > options */ typedef uint64_t cpumap_t;
> > > 
> > > I would avoid adding SIF_IS_PVINHVM, an x86 specific concept,
> > > into a generic xen.h interface file. 
> > 
> > > > +/* xen_pv_domain check is necessary as start_info ptr is null
> > > > in HVM. Also,
> > > > + * note, xen PVH domain shares lot of HVM code */
> > > > +#define xen_pvh_domain()       (xen_pv_domain()
> > > > &&                     \
> > > > +				(xen_start_info->flags &
> > > > SIF_IS_PVINHVM))
> > >  
> > > Also here.
> > 
> > Hmm.. I can move '#define xen_pvh_domain()' to x86 header, easy.
> > But, not sure how to define SIF_IS_PVINHVM then? I could put
> > SIF_IS_RESVD in include/xen/interface/xen.h, and then do 
> > #define SIF_IS_PVINHVM SIF_IS_RESVD in an x86 file.
> > 
> > What do you think about that?
> 
> I am not particularly fussed about the location of SIF_IS_PVINHVM.
> We could define it in asm/xen/hypervisor.h for example.
> 
> The very important bit is to avoid xen_pvh_domain() in generic code
> because it reduces code reusability.
> xen_pvh_domain() covers too many different concepts (a PV guest, in an
> HVM container, using nested paging in hardware); if we bundle them
> together it becomes much harder to reuse them.
> So we should avoid checking for xen_pvh_domain() and instead check for
> the relevant sub-property we are actually interested in.
> 
> For example in balloon.c we are probably only interested in memory
> related behavior, so checking for XENFEAT_auto_translated_physmap
> should be enough.  In other parts of the code we might want to check
> for xen_pv_domain(). If xen_pv_domain() and
> XENFEAT_auto_translated_physmap are not enough, we could introduce
> another small XENFEAT that specifies that the domain is running in a
> HVM container. This way they are all reusable.

yeah, I thought about that, but wasn't sure what the implications would
be for a guest that's not PVH but has an auto-translated physmap, if
there's such a possibility. If you guys think that's not an issue, I can
change it.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 19:24:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 19:24:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2S9u-00039S-H0; Fri, 17 Aug 2012 19:24:38 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1T2S9s-00039L-Tc
	for Xen-devel@lists.xensource.com; Fri, 17 Aug 2012 19:24:37 +0000
Received: from [85.158.143.99:48166] by server-1.bemta-4.messagelabs.com id
	72/7D-07754-47A9E205; Fri, 17 Aug 2012 19:24:36 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-13.tower-216.messagelabs.com!1345231474!27855228!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDcwNDM2NA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22480 invoked from network); 17 Aug 2012 19:24:35 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-13.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 17 Aug 2012 19:24:35 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7HJOVNN028020
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 17 Aug 2012 19:24:32 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7HJOUCJ009197
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 17 Aug 2012 19:24:30 GMT
Received: from abhmt105.oracle.com (abhmt105.oracle.com [141.146.116.57])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7HJOUKp000836; Fri, 17 Aug 2012 14:24:30 -0500
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 17 Aug 2012 12:24:29 -0700
Date: Fri, 17 Aug 2012 12:24:28 -0700
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20120817122428.7704f87f@mantra.us.oracle.com>
In-Reply-To: <1345192554.30865.93.camel@zakaz.uk.xensource.com>
References: <20120815175724.3405043a@mantra.us.oracle.com>
	<1345192554.30865.93.camel@zakaz.uk.xensource.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.7.6 (GTK+ 2.18.9; x86_64-redhat-linux-gnu)
Mime-Version: 1.0
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [RFC PATCH 1/8]: PVH: Basic and preparatory changes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 17 Aug 2012 09:35:54 +0100
Ian Campbell <Ian.Campbell@citrix.com> wrote:

> On Thu, 2012-08-16 at 01:57 +0100, Mukesh Rathor wrote:
> 
> > LDT (linear address, # ents) */
> > -    unsigned long gdt_frames[16], gdt_ents; /* GDT (machine frames, # ents) */
> > +    unsigned long gdt_frames[16], gdt_ents; /* GDT (machine frames, # ents).
> > +                                             * PV in HVM: it's GDTR addr/sz */
> 
> I'm not sure I understand this comment. What is "GDTR addr/sz"? Do you
> mean that gdt_frames/gdt_ents have different semantics here?
> 
> Might be worthy of a union? Or finding some other way to expand this
> struct.

In the PVH case, the field is used to send down the GDTR address and
size. Perhaps it's better to just leave the comment out.

> > 
> > -void __init xen_arch_setup(void)
> > +/* Normal PV domain not running in HVM container */
> 
> It's a bit of a shame to overload the "HVM" term this way, to mean
> both the traditional "providing a full PC like environment" and "PV
> using hardware virtualisation facilities".
> 
> Perhaps:
>         /* Normal PV domain without PVH extensions */

OK, HVM == Hardware Virtual Machine seems more appropriate here, but I
can remove the word HVM and go with 'PVH extensions'.


> > +static __init void inline xen_non_pvh_arch_setup(void)
> > +       xen_panic_handler_init();
> > +
> > +       if (!xen_pvh_domain())
> > +               xen_non_pvh_arch_setup();
> 
> The negative in the fn name here strikes me as a bit weird. Can't this
> just be xen_pv_arch_setup?
> 
> Or even just have:
> 	/* Everything else is specific to PV without hardware support */
> 	if (xen_pvh_domain())
> 		return;

OK.

> >  
> >  #ifdef CONFIG_ACPI
> >         if (!(xen_start_info->flags & SIF_INITDOMAIN)) {
> > diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
> > index f58dca7..cdf269d 100644
> > --- a/arch/x86/xen/smp.c
> > +++ b/arch/x86/xen/smp.c
> > @@ -300,8 +300,6 @@ cpu_initialize_context(unsigned int cpu, struct task_struct *idle)
> >         gdt = get_cpu_gdt_table(cpu);
> > -       BUG_ON((unsigned long)gdt & ~PAGE_MASK);
> > +               ctxt->ldt_ents = 0;
> 
> Something odd is going on with the indentation here (and below I've
> just noticed). I suspect lots of the changes aren't really changing
> anything other than whitespace?

Konrad wanted the indentation done first, without code changes, where it
made sense, so that the subsequent patch makes it easy to see which
statements were added.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 19:36:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 19:36:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2SKx-0003Nx-MU; Fri, 17 Aug 2012 19:36:03 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T2SKw-0003Ns-EA
	for Xen-devel@lists.xensource.com; Fri, 17 Aug 2012 19:36:02 +0000
Received: from [85.158.138.51:43517] by server-7.bemta-3.messagelabs.com id
	0B/55-01906-12D9E205; Fri, 17 Aug 2012 19:36:01 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-174.messagelabs.com!1345232161!28831264!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk0MzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5176 invoked from network); 17 Aug 2012 19:36:01 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 19:36:01 -0000
X-IronPort-AV: E=Sophos;i="4.77,785,1336348800"; d="scan'208";a="14065644"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 19:36:00 +0000
Received: from [127.0.0.1] (10.80.16.66) by smtprelay.citrix.com
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 17 Aug 2012 20:36:00 +0100
Message-ID: <1345232160.23624.1.camel@dagon.hellion.org.uk>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Mukesh Rathor <mukesh.rathor@oracle.com>
Date: Fri, 17 Aug 2012 20:36:00 +0100
In-Reply-To: <20120817122014.3c3387b5@mantra.us.oracle.com>
References: <20120815175724.3405043a@mantra.us.oracle.com>
	<alpine.DEB.2.02.1208161454430.4850@kaball.uk.xensource.com>
	<20120816114650.4db2079f@mantra.us.oracle.com>
	<alpine.DEB.2.02.1208171104230.15568@kaball.uk.xensource.com>
	<20120817122014.3c3387b5@mantra.us.oracle.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>, Konrad
	Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [RFC PATCH 1/8]: PVH: Basic and preparatory changes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-08-17 at 20:20 +0100, Mukesh Rathor wrote:
> yeah, I thought about that, but wasn't sure what the implications would
> be for a guest that's not PVH but has an auto-translated physmap, if
> there's such a possibility. If you guys think that's not an issue, I can
> change it.

I think you are basically never going to come across that situation in
real life. But if you are worried, you could check for that combination
at boot and panic, or something like that, which would mean you would
not have to worry about it at runtime.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 19:46:19 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 19:46:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2SUc-0003Xd-Or; Fri, 17 Aug 2012 19:46:02 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T2SUa-0003XY-TX
	for Xen-devel@lists.xensource.com; Fri, 17 Aug 2012 19:46:01 +0000
Received: from [85.158.138.51:9909] by server-9.bemta-3.messagelabs.com id
	E0/B8-23952-77F9E205; Fri, 17 Aug 2012 19:45:59 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-15.tower-174.messagelabs.com!1345232758!27114167!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjk2OTUy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25819 invoked from network); 17 Aug 2012 19:45:59 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-15.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 17 Aug 2012 19:45:59 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7HJjs79023649
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 17 Aug 2012 19:45:55 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7HJjrLh016056
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 17 Aug 2012 19:45:54 GMT
Received: from abhmt116.oracle.com (abhmt116.oracle.com [141.146.116.68])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7HJjrUK031479; Fri, 17 Aug 2012 14:45:53 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 17 Aug 2012 12:45:53 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 62A1B402EC; Fri, 17 Aug 2012 15:36:04 -0400 (EDT)
Date: Fri, 17 Aug 2012 15:36:04 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Mukesh Rathor <mukesh.rathor@oracle.com>
Message-ID: <20120817193604.GA4573@phenom.dumpdata.com>
References: <20120815175724.3405043a@mantra.us.oracle.com>
	<alpine.DEB.2.02.1208161454430.4850@kaball.uk.xensource.com>
	<20120816114650.4db2079f@mantra.us.oracle.com>
	<alpine.DEB.2.02.1208171104230.15568@kaball.uk.xensource.com>
	<20120817122014.3c3387b5@mantra.us.oracle.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20120817122014.3c3387b5@mantra.us.oracle.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [RFC PATCH 1/8]: PVH: Basic and preparatory changes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> > For example in balloon.c we are probably only interested in memory
> > related behavior, so checking for XENFEAT_auto_translated_physmap
> > should be enough.  In other parts of the code we might want to check
> > for xen_pv_domain(). If xen_pv_domain() and
> > XENFEAT_auto_translated_physmap are not enough, we could introduce
> > another small XENFEAT that specifies that the domain is running in a
> > HVM container. This way they are all reusable.
> 
> yeah, I thought about that, but wasn't sure what the implications would
> be for a guest that's not PVH but has an auto-translated physmap, if
> there's such a possibility. If you guys think that's not an issue, I can
> change it.

dom0_shadow=on on the hypervisor command line enables that in PV mode.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 19:47:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 19:47:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2SVn-0003bm-6u; Fri, 17 Aug 2012 19:47:15 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T2SVl-0003bJ-Vi
	for Xen-devel@lists.xensource.com; Fri, 17 Aug 2012 19:47:14 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1345232823!2493398!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk0MzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5604 invoked from network); 17 Aug 2012 19:47:03 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 19:47:03 -0000
X-IronPort-AV: E=Sophos;i="4.77,785,1336348800"; d="scan'208";a="14065735"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 19:47:03 +0000
Received: from [127.0.0.1] (10.80.16.67) by smtprelay.citrix.com
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 17 Aug 2012 20:47:02 +0100
Message-ID: <1345232822.23624.8.camel@dagon.hellion.org.uk>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Mukesh Rathor <mukesh.rathor@oracle.com>
Date: Fri, 17 Aug 2012 20:47:02 +0100
In-Reply-To: <20120817122428.7704f87f@mantra.us.oracle.com>
References: <20120815175724.3405043a@mantra.us.oracle.com>
	<1345192554.30865.93.camel@zakaz.uk.xensource.com>
	<20120817122428.7704f87f@mantra.us.oracle.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [RFC PATCH 1/8]: PVH: Basic and preparatory changes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-08-17 at 20:24 +0100, Mukesh Rathor wrote:
> On Fri, 17 Aug 2012 09:35:54 +0100
> Ian Campbell <Ian.Campbell@citrix.com> wrote:
> 
> > On Thu, 2012-08-16 at 01:57 +0100, Mukesh Rathor wrote:
> > 
> > > LDT (linear address, # ents) */
> > > -    unsigned long gdt_frames[16], gdt_ents; /* GDT (machine
> > > frames, # ents) */
> > > +    unsigned long gdt_frames[16], gdt_ents; /* GDT (machine
> > > frames, # ents).*
> > > +                                            * PV in HVM: it's GDTR
> > > addr/sz */
> > 
> > I'm not sure I understand this comment. What is "GDTR addr/sz" do you
> > mean that gdtframes/gdt_ents has a different semantics here?
> > 
> > Might be worthy of a union? Or finding some other way to expand this
> > struct.
> 
> In the case of PVH, the field is used to send down the GDTR address and
> size. Perhaps it's better to just leave the comment out.

If the semantics of this field are totally different in the two modes
then I think a union is warranted.

> > > -void __init xen_arch_setup(void)
> > > +/* Normal PV domain not running in HVM container */
> > 
> > It's a bit of a shame to overload the "HVM" term this way, to mean
> > both the traditional "providing a full PC like environment" and "PV
> > using hardware virtualisation facilities".
> > 
> > Perhaps:
> >         /* Normal PV domain without PVH extensions */
> 
> Ok, HVM==Hardware Virtual Machine seems more appropriate here,

HVM in the context of Xen means more than that though, it implies a Qemu
and emulation of a complete "PC-like" environment and all of that stuff,
which is why I think it is inappropriate/confusing to be overloading it
to mean "PV with hardware assistance" too.

Xen's use of the term HVM is a bit unhelpful, exactly because it is a
broad sounding term but with a very specific meaning, but we are kind of
stuck with it.

> but I can remove the word HVM and go with 'PVH extensions'.

Please ;-)

> 
> > >  
> > >  #ifdef CONFIG_ACPI
> > >         if (!(xen_start_info->flags & SIF_INITDOMAIN)) {
> > > diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
> > > index f58dca7..cdf269d 100644
> > > --- a/arch/x86/xen/smp.c
> > > +++ b/arch/x86/xen/smp.c
> > > @@ -300,8 +300,6 @@ cpu_initialize_context(unsigned int cpu, struct
> > > task_struct *idle) gdt = get_cpu_gdt_table(cpu);
> > > -       BUG_ON((unsigned long)gdt & ~PAGE_MASK);
> > > +               ctxt->ldt_ents = 0;
> > 
> > Something odd is going on with the indentation here (and below I've
> > just noticed). I suspect lots of the changes aren't really changing
> > anything other than whitespace?
> 
> Konrad wanted the indentation done first, without code changes, where
> it made sense, so that a later patch makes it easy to see the if
> statements being added.

I disagree that this is a useful way to structure a series unless the
whitespace change is in a patch of *only* whitespace changes (and even
then this would be an uncommon way to do things IMO).

But putting the whitespace changes associated with adding an if
alongside unrelated actual semantic changes in a totally different patch
is probably the most confusing and least helpful of all the possible
options!

My personal preference would be to do the indent when adding the if and
let people who want to see the differences without the indentation
change use "diff -b".

Konrad is maintainer though, so if he likes this then fine.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 20:16:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 20:16:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2SxI-0004Dd-9W; Fri, 17 Aug 2012 20:15:40 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T2SxG-0004DV-6Q
	for Xen-devel@lists.xensource.com; Fri, 17 Aug 2012 20:15:38 +0000
Received: from [85.158.143.99:3640] by server-3.bemta-4.messagelabs.com id
	88/83-09529-966AE205; Fri, 17 Aug 2012 20:15:37 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-7.tower-216.messagelabs.com!1345234532!25340662!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjk2OTUy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21542 invoked from network); 17 Aug 2012 20:15:36 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-7.tower-216.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Aug 2012 20:15:36 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7HKFSRr016310
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 17 Aug 2012 20:15:29 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7HKFSFc010286
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 17 Aug 2012 20:15:28 GMT
Received: from abhmt115.oracle.com (abhmt115.oracle.com [141.146.116.67])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7HKFRqe031648; Fri, 17 Aug 2012 15:15:27 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 17 Aug 2012 13:15:27 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 06C15402EC; Fri, 17 Aug 2012 16:05:39 -0400 (EDT)
Date: Fri, 17 Aug 2012 16:05:38 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20120817200538.GA20731@phenom.dumpdata.com>
References: <20120815175724.3405043a@mantra.us.oracle.com>
	<1345192554.30865.93.camel@zakaz.uk.xensource.com>
	<20120817122428.7704f87f@mantra.us.oracle.com>
	<1345232822.23624.8.camel@dagon.hellion.org.uk>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1345232822.23624.8.camel@dagon.hellion.org.uk>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [RFC PATCH 1/8]: PVH: Basic and preparatory changes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Aug 17, 2012 at 08:47:02PM +0100, Ian Campbell wrote:
> On Fri, 2012-08-17 at 20:24 +0100, Mukesh Rathor wrote:
> > On Fri, 17 Aug 2012 09:35:54 +0100
> > Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > 
> > > On Thu, 2012-08-16 at 01:57 +0100, Mukesh Rathor wrote:
> > > 
> > > > LDT (linear address, # ents) */
> > > > -    unsigned long gdt_frames[16], gdt_ents; /* GDT (machine
> > > > frames, # ents) */
> > > > +    unsigned long gdt_frames[16], gdt_ents; /* GDT (machine
> > > > frames, # ents).*
> > > > +                                            * PV in HVM: it's GDTR
> > > > addr/sz */
> > > 
> > > I'm not sure I understand this comment. What is "GDTR addr/sz" do you
> > > mean that gdtframes/gdt_ents has a different semantics here?
> > > 
> > > Might be worthy of a union? Or finding some other way to expand this
> > > struct.
> > 
> > In case of PVH, the field is used to send down GDTR address and size.
> > perhaps better to just leave the comment out.
> 
> I think if the semantics of this field are totally different in the two
> modes then I think a union is warranted.
> 
> > > > -void __init xen_arch_setup(void)
> > > > +/* Normal PV domain not running in HVM container */
> > > 
> > > It's a bit of a shame to overload the "HVM" term this way, to mean
> > > both the traditional "providing a full PC like environment" and "PV
> > > using hardware virtualisation facilities".
> > > 
> > > Perhaps:
> > >         /* Normal PV domain without PVH extensions */
> > 
> > Ok, HVM==Hardware Virtual Machine seems more appropriate here,
> 
> HVM in the context of Xen means more than that though, it implies a Qemu
> and emulation of a complete "PC-like" environment and all of that stuff,
> which is why I think it is inappropriate/confusing to be overloading it
> to mean "PV with hardware assistance" too.
> 
> Xen's use of the term HVM is a bit unhelpful, exactly because it is a
> broad sounding term but with a very specific meaning, but we are kind of
> stuck with it.
> 
> > but I can remove the word HVM and go with 'PVH extensions'.
> 
> Please ;-)
> 
> > 
> > > >  
> > > >  #ifdef CONFIG_ACPI
> > > >         if (!(xen_start_info->flags & SIF_INITDOMAIN)) {
> > > > diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
> > > > index f58dca7..cdf269d 100644
> > > > --- a/arch/x86/xen/smp.c
> > > > +++ b/arch/x86/xen/smp.c
> > > > @@ -300,8 +300,6 @@ cpu_initialize_context(unsigned int cpu, struct
> > > > task_struct *idle) gdt = get_cpu_gdt_table(cpu);
> > > > -       BUG_ON((unsigned long)gdt & ~PAGE_MASK);
> > > > +               ctxt->ldt_ents = 0;
> > > 
> > > Something odd is going on with the indentation here (and below I've
> > > just noticed). I suspect lots of the changes aren't really changing
> > > anything other than whitespace?
> > 
> > Konrad wanted just doing the indentation without code change first
> > where it made sense so that further patch makes it easy to see if
> > statements added.
> 
> I disagree that this is a useful way to structure a series unless the
> whitespace change is in a patch of *only* whitespace changes (and even
> then this would be an uncommon way to do things IMO).

That is the intention - as otherwise it's darn hard to read what has
changed in the further patches.

> 
> But putting the whitespace changes associated with adding an if
> alongside unrelated actual semantic changes in a totally different patch
> is probably the most confusing and least helpful of all the possible
> options!

<nods> The patch should have been separate.
> 
> My personal preference would be to do the indent when adding the if and
> let people who want to see the differences without the indentation
> change use "diff -b".
> 
> Konrad is maintainer though, so if he likes this then fine.
> 
> Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 20:39:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 20:39:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2TJY-0004aI-An; Fri, 17 Aug 2012 20:38:40 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pmoody@google.com>) id 1T2TJW-0004aD-P5
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 20:38:38 +0000
Received: from [85.158.139.83:27977] by server-4.bemta-5.messagelabs.com id
	ED/7D-12386-DCBAE205; Fri, 17 Aug 2012 20:38:37 +0000
X-Env-Sender: pmoody@google.com
X-Msg-Ref: server-4.tower-182.messagelabs.com!1345235917!26019182!1
X-Originating-IP: [74.125.83.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31143 invoked from network); 17 Aug 2012 20:38:37 -0000
Received: from mail-ee0-f45.google.com (HELO mail-ee0-f45.google.com)
	(74.125.83.45)
	by server-4.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 20:38:37 -0000
Received: by eeke53 with SMTP id e53so1175358eek.32
	for <xen-devel@lists.xen.org>; Fri, 17 Aug 2012 13:38:37 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type:x-system-of-record;
	bh=hE5sDpu54kG5B3giBVJJMOW8MqrMiTiMP+i5C6hrfLw=;
	b=gbK1D45JSBUycq/zrdNsq1JCNvy9RIZu3GZIRfN7jUK5ZythX+WuOJk21tjGt2g+gR
	lcJL0uzAJV/30X2kqTrD/ZGa4g8IgeBOnc8/yu574GRaVy6k5VcZ1/cA6Z03RLlfNkR0
	IMWhOSnoqhLuxDwx+wOGFq66oKrOHaUCs/315O37tTtL1uMxGah1xGuMlH3kTLGVvm0f
	sNesWa/SaRAyAL1tZjyUaPEp0W0TGzqjVBJte9j96R7K5D5XOX94Czt7MySAhALvBh7W
	6I4LN8qeOaxmYu1iHunugERcwAoQIzg/FI71UmtF7K8fzgPfuwi+w8rsSxCCT5UhmQgt
	h+rw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type:x-system-of-record:x-gm-message-state;
	bh=hE5sDpu54kG5B3giBVJJMOW8MqrMiTiMP+i5C6hrfLw=;
	b=HHVslZi3DA8I0u4o5ERyH6g54aMNOqFtWZW2VR3pD916tgSObny2heoSiTvx3f7/6G
	+qG5tL+ZZ0MGRY9krq6jNHN18yTRdiFGzSHhLH/oCCTeSbJqXz4Vk+9mrS8yXXb7lWcI
	vQ64I4cwkwec6HSz3kCe2D0ZZEEdd4FHM44cGn7v8LfO3YKYpU//YIeZdXDkt7J6u3lW
	P+hd3BFbwRlw85bOoB/7rds1yCOoe+xJCAUjV/1XTBWg1u8IwwbItfejJosyqEUBISqv
	Ndui7f1fVfTSkKg7xtuZqFXWxrUQ2TIZhSNAqvkyCUHXj481tVr3KAe+S3TCkZe/kkW/
	6gug==
Received: by 10.14.206.200 with SMTP id l48mr8185995eeo.41.1345235917392;
	Fri, 17 Aug 2012 13:38:37 -0700 (PDT)
Received: by 10.14.206.200 with SMTP id l48mr8185985eeo.41.1345235917268; Fri,
	17 Aug 2012 13:38:37 -0700 (PDT)
MIME-Version: 1.0
Received: by 10.14.22.3 with HTTP; Fri, 17 Aug 2012 13:38:07 -0700 (PDT)
In-Reply-To: <502A98640200007800094E1B@nat28.tlf.novell.com>
References: <CALnj_=4U+6n2-KSaLgbS5OpP3mNKwPredFYjG-0dEUNQpSSwxg@mail.gmail.com>
	<1344935951.5926.9.camel@zakaz.uk.xensource.com>
	<CALnj_=6WZmXO7Stu-B79Z30=hsSQW0FOQTLdHSH88OyQG6z5nw@mail.gmail.com>
	<502A81130200007800094D38@nat28.tlf.novell.com>
	<CALnj_=4cSr+Bz=o3jajHq2z8qMyKerYPwDOen-ivBNnHBLGq0Q@mail.gmail.com>
	<502A946A0200007800094DE6@nat28.tlf.novell.com>
	<CALnj_=4V4veVndiVY5A085u0Bh0dzHo48zDf6gm24nFPQnGY+A@mail.gmail.com>
	<502A98640200007800094E1B@nat28.tlf.novell.com>
From: Peter Moody <pmoody@google.com>
Date: Fri, 17 Aug 2012 13:38:07 -0700
Message-ID: <CALnj_=72=bV3h6_sthaugPfk=-cWAUczjXfQxCJx3FS8ZPrV9g@mail.gmail.com>
To: Jan Beulich <JBeulich@suse.com>
X-System-Of-Record: true
X-Gm-Message-State: ALoCoQnGB5mbUPXgrL20Dkgcb1uFzwmtjaVodfMR5fc9xc7Eb7AQMBAp7GpdJvfcoSRdOhPXCwGvc/9CsHRV7rh8lM6DAd/kP/7udBpCpgCugfXau1PtZhXny/OisoBSrfGtqrhPW4HOV0eCjOuw7yG6mWgV7WelkS4W+ken2Qy+fgwMrvA9BrT/v6eDNLjgrRai9C8NhpyM
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] 100% reliable Oops on xen 4.0.1
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Just to close the loop over here. This is an audit bug, not a xen bug.

https://www.redhat.com/archives/linux-audit/2012-August/msg00018.html

Cheers,
peter

On Tue, Aug 14, 2012 at 9:26 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>>> On 14.08.12 at 18:16, Peter Moody <pmoody@google.com> wrote:
>> On Tue, Aug 14, 2012 at 9:09 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>
>>> From the above, as well as your indication that the traces are
>>> highly variable between instances, I'd suppose this is memory
>>> corruption of some sort, which can easily be hidden by all sorts
>>> of factors.
>>>
>>> Until you can find a pattern, I don't think much can be done by
>>> anyone who doesn't have an affected system available for
>>> debugging.
>>
>> So I have such a system :).
>
> That's what I implied.
>
>> Are there any pointers or tips you can give me to help me track down
>> the root cause? I realize that's a broad question, and a perfectly
>> justifiable answer is "read the memory management chapter of
>> understanding linux device drivers" but at this point basically any
>> advice you can give me is appreciated (and will most likely get me
>> closer to the solution).
>
> As I said, figuring out a pattern in the crashes would likely help
> with placing debug prints, breakpoints, or anything similar to aid in
> detecting the presumed corruption earlier. Without a pattern,
> there's regretfully not much I can suggest.
>
> Jan
>



-- 
Peter Moody      Google    1.650.253.7306
Security Engineer  pgp:0xC3410038

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 21:02:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 21:02:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2TgN-0004p7-IX; Fri, 17 Aug 2012 21:02:15 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T2TgL-0004p2-MU
	for xen-devel@lists.xensource.com; Fri, 17 Aug 2012 21:02:13 +0000
Received: from [85.158.143.99:18575] by server-2.bemta-4.messagelabs.com id
	C1/A3-31966-551BE205; Fri, 17 Aug 2012 21:02:13 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-12.tower-216.messagelabs.com!1345237327!23509784!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDcwNDM2NA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15459 invoked from network); 17 Aug 2012 21:02:08 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-12.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 17 Aug 2012 21:02:08 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7HL20M8010639
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 17 Aug 2012 21:02:00 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7HL1w5b003861
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 17 Aug 2012 21:01:59 GMT
Received: from abhmt106.oracle.com (abhmt106.oracle.com [141.146.116.58])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7HL1uef012193; Fri, 17 Aug 2012 16:01:56 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 17 Aug 2012 14:01:56 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 6FA9F402EC; Fri, 17 Aug 2012 16:52:07 -0400 (EDT)
Date: Fri, 17 Aug 2012 16:52:07 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Andre Przywara <andre.przywara@amd.com>
Message-ID: <20120817205207.GA3002@phenom.dumpdata.com>
References: <4FE1C423.6070001@amd.com>
	<20120620145127.GD12787@phenom.dumpdata.com>
	<4FE32DDA.10204@amd.com>
	<20120630014825.GA7003@phenom.dumpdata.com>
	<20120630021936.GA27100@phenom.dumpdata.com>
	<50114002.2030700@amd.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50114002.2030700@amd.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	xen-devel <xen-devel@lists.xensource.com>,
	david.vrabel@citrix.com, Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] acpidump crashes on some machines
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jul 26, 2012 at 03:02:58PM +0200, Andre Przywara wrote:
> On 06/30/2012 04:19 AM, Konrad Rzeszutek Wilk wrote:
> 
> Konrad, David,
> 
> getting back to this issue. Thanks for your input; I was able to do
> some more debugging (see below for a refresh):
> 
> It seems like it affects only the first page of the 1:1 mapping. I
> didn't have any issues with the last PFN or the page behind it (which
> failed properly).
> 
> David, thanks for the hint about varying the dom0_mem parameter. I
> thought I had already checked this, but I did it once again and it
> turned out that it is only an issue when dom0_mem is smaller than the
> ACPI area, which generates a hole in the memory map. So we have
> (simplified):
> * 1:1 mapping to 1 MB
> * normal mapping till dom0_mem
> * unmapped area till ACPI E820 area
> * ACPI E820 1:1 mapping
> 
> As far as I could chase it down, the 1:1 mapping itself looks OK; I
> couldn't find any off-by-one bugs there. So maybe it is code that
> later invalidates areas between the normal guest mapping and the
> ACPI memory?

I think I found it. Can you try this, please? [If you can't find
early_set_phys_to_machine, just use the __set_phys_to_machine call.]

>From ab915d98f321b0fcca1932747c632b5f0f299f55 Mon Sep 17 00:00:00 2001
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Fri, 17 Aug 2012 16:43:28 -0400
Subject: [PATCH] xen/setup: Fix off-by-one error when adding for-balloon PFNs
 to the P2M.

After we have finished returning PFNs to the hypervisor, populating
them back, and marking the E820 MMIO regions and E820 gaps as
IDENTITY_FRAMEs, we call into the P2M code to set the areas that can
be used for ballooning. We were off by one, and ended up over-writing
a P2M entry that most likely was an IDENTITY_FRAME. For example:
For example:

1-1 mapping on 40000->40200
1-1 mapping on bc558->bc5ac
1-1 mapping on bc5b4->bc8c5
1-1 mapping on bc8c6->bcb7c
1-1 mapping on bcd00->100000
Released 614 pages of unused memory
Set 277889 page(s) to 1-1 mapping
Populating 40200-40466 pfn range: 614 pages added

=> here we set the P2M tree from 40466 up to bc559 to
INVALID_P2M_ENTRY. We should have done it only up to bc558.

The end result is that if anybody tries to construct a PTE for
PFN bc558, they end up with ~PAGE_PRESENT.

CC: stable@vger.kernel.org
Reported-by: Andre Przywara <andre.przywara@amd.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/xen/setup.c |   11 +++++++++--
 1 files changed, 9 insertions(+), 2 deletions(-)

diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
index ead8557..030a55a 100644
--- a/arch/x86/xen/setup.c
+++ b/arch/x86/xen/setup.c
@@ -78,9 +78,16 @@ static void __init xen_add_extra_mem(u64 start, u64 size)
 	memblock_reserve(start, size);
 
 	xen_max_p2m_pfn = PFN_DOWN(start + size);
+	for (pfn = PFN_DOWN(start); pfn < xen_max_p2m_pfn; pfn++) {
+		unsigned long mfn = pfn_to_mfn(pfn);
+
+		if (WARN(mfn == pfn, "Trying to over-write 1-1 mapping (pfn: %lx)\n", pfn))
+			continue;
+		WARN(mfn != INVALID_P2M_ENTRY, "Trying to remove %lx which has %lx mfn!\n",
+			pfn, mfn);
 
-	for (pfn = PFN_DOWN(start); pfn <= xen_max_p2m_pfn; pfn++)
-		__set_phys_to_machine(pfn, INVALID_P2M_ENTRY);
+		early_set_phys_to_machine(pfn, INVALID_P2M_ENTRY);
+	}
 }
 
 static unsigned long __init xen_do_chunk(unsigned long start,
-- 
1.7.7.6
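To make the failure mode concrete, here is a minimal sketch of the bug class the patch fixes, with a toy array standing in for the P2M table (the names and constants are illustrative, not Xen's): an inclusive loop bound (`<=`) invalidates one entry more than the half-open bound (`<`) used above.

```c
#include <assert.h>
#include <stddef.h>

/* Toy stand-ins -- NOT Xen's real types or constants. */
#define INVALID_ENTRY 0xFFFFFFFFUL   /* plays the role of INVALID_P2M_ENTRY */
#define NENTRIES 16

unsigned long p2m[NENTRIES];         /* miniature P2M table */

/* Reset to an identity ("1-1") mapping: entry i maps frame i. */
void reset(void)
{
    for (size_t i = 0; i < NENTRIES; i++)
        p2m[i] = i;
}

/* Buggy variant: the inclusive bound (<=) also clobbers p2m[end]. */
void clear_buggy(size_t start, size_t end)
{
    for (size_t pfn = start; pfn <= end; pfn++)
        p2m[pfn] = INVALID_ENTRY;
}

/* Fixed variant: half-open [start, end), like pfn < xen_max_p2m_pfn. */
void clear_fixed(size_t start, size_t end)
{
    for (size_t pfn = start; pfn < end; pfn++)
        p2m[pfn] = INVALID_ENTRY;
}
```

With start=4 and end=10, the buggy variant also invalidates entry 10, the analogue of clobbering the IDENTITY_FRAME at bc558 in the log above.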


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 21:28:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 21:28:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2U5g-00053z-Tu; Fri, 17 Aug 2012 21:28:24 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <junjie.wei@oracle.com>) id 1T2U5f-00053u-7x
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 21:28:23 +0000
Received: from [85.158.138.51:45833] by server-5.bemta-3.messagelabs.com id
	BC/94-08865-677BE205; Fri, 17 Aug 2012 21:28:22 +0000
X-Env-Sender: junjie.wei@oracle.com
X-Msg-Ref: server-13.tower-174.messagelabs.com!1345238898!10196975!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDcwNDM2NA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4572 invoked from network); 17 Aug 2012 21:28:19 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-13.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 17 Aug 2012 21:28:19 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7HLSGFH030623
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK)
	for <xen-devel@lists.xen.org>; Fri, 17 Aug 2012 21:28:16 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7HLSFfv005567
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO)
	for <xen-devel@lists.xen.org>; Fri, 17 Aug 2012 21:28:16 GMT
Received: from abhmt112.oracle.com (abhmt112.oracle.com [141.146.116.64])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7HLSF0H026729
	for <xen-devel@lists.xen.org>; Fri, 17 Aug 2012 16:28:15 -0500
Received: from [10.149.237.224] (/10.149.237.224)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 17 Aug 2012 14:28:15 -0700
Message-ID: <502EB799.9040002@oracle.com>
Date: Fri, 17 Aug 2012 17:28:57 -0400
From: Junjie Wei <junjie.wei@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120714 Thunderbird/14.0
MIME-Version: 1.0
To: xen-devel@lists.xen.org
Content-Type: multipart/mixed; boundary="------------020100050906040603030803"
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Subject: [Xen-devel] VM save/restore
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.
--------------020100050906040603030803
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit

Hello,

There is a problem with VM save/restore in Xen 4.1.2 and earlier versions.
When a VM is configured with more than 64 VCPUs, it can be started or
stopped, but cannot be saved. This happens to both PVM and HVM guests.

# xm vcpu-list 3 | grep OVM_OL5U7_X86_64_PVM_10GB | wc -l
65

# xm save 3 vm.save
Error: /usr/lib64/xen/bin/xc_save 27 3 0 0 0 failed

/var/log/xen/xend.log: INFO (XendCheckpoint:416)
xc: error: Too many VCPUS in guest!: Internal error

It was caused by a hard-coded limit in tools/libxc/xc_domain_save.c:

if ( info.max_vcpu_id >= 64 )
{
      ERROR("Too many VCPUS in guest!");
      goto out;
}

And also in tools/libxc/xc_domain_restore.c:

case XC_SAVE_ID_VCPU_INFO:
      buf->new_ctxt_format = 1;
      if ( RDEXACT(fd, &buf->max_vcpu_id, sizeof(buf->max_vcpu_id)) ||
          buf->max_vcpu_id >= 64 || RDEXACT(fd, &buf->vcpumap,
                                            sizeof(uint64_t)) ) {
          PERROR("Error when reading max_vcpu_id");
          return -1;
      }

The code above is in both xen-4.1.2 and xen-unstable.
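For background, the hard-coded 64 in both checks comes from the save format carrying the vcpumap in a single uint64_t, one bit per VCPU. A sketch of the obvious widening, a word array indexed by vcpu/64 (this only illustrates the idea; it is not the actual patch or Xen's on-disk layout):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical widened map -- illustrative cap, not Xen's. */
#define MAX_VCPUS 256U
#define MAP_WORDS ((MAX_VCPUS + 63U) / 64U)

typedef struct {
    uint64_t bits[MAP_WORDS];   /* one bit per VCPU, LSB first */
} vcpumap_t;

/* Mark a VCPU as present in the map. */
void vcpumap_set(vcpumap_t *m, unsigned int vcpu)
{
    m->bits[vcpu / 64] |= UINT64_C(1) << (vcpu % 64);
}

/* Query whether a VCPU is present in the map. */
int vcpumap_test(const vcpumap_t *m, unsigned int vcpu)
{
    return (int)((m->bits[vcpu / 64] >> (vcpu % 64)) & 1U);
}
```

On the wire this would mean writing MAP_WORDS words instead of one, so the restore side's single `RDEXACT(fd, &buf->vcpumap, sizeof(uint64_t))` would have to change along with the `>= 64` checks.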

I think if a VM can be successfully started, then save/restore should
also work. So I made a patch and did some testing.

The above problem is gone but there are new ones.

Let me summarize the result here.

With the patch, save/restore works fine as long as the VM can be started,
except in two cases.

1) 32-bit guests can be configured with VCPUs > 32 and started,
    but the guest can only make use of 32 of them.

2) 32-bit PVM guests can be configured with VCPUs > 64 and started,
    but `xm save' does not work.

See the testing below for details. The limit of 128 VCPUs for HVM
guests is already taken into account.

Could you please review the patch and help with these two cases?


Thanks,
Junjie

-= Test environment =-

[root@ovs087 HVM_X86_64]# cat /etc/ovs-release
Oracle VM server release 3.2.1

[root@ovs087 HVM_X86_64]# uname -a
Linux ovs087 2.6.39-200.30.1.el5uek #1 SMP Thu Jul 12 21:47:09 EDT 2012 x86_64 x86_64 x86_64 GNU/Linux

[root@ovs087 HVM_X86_64]# rpm -qa | grep xen
xenpvboot-0.1-8.el5
xen-devel-4.1.2-39
xen-tools-4.1.2-39
xen-4.1.2-39

-= PVM x86_64, 128 VCPUs =-

[root@ovs087 PVM_X86_64]# xm list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0   511 8     r-----   6916.9
OVM_OL5U7_X86_64_PVM_10GB                    9  2048 128     r-----     48.1

[root@ovs087 PVM_X86_64]# xm save 9 vm.save

[root@ovs087 PVM_X86_64]# xm restore vm.save

[root@ovs087 PVM_X86_64]# xm list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0   511 8     r-----   7076.7
OVM_OL5U7_X86_64_PVM_10GB                   10  2048 128     r-----     51.6

-= PVM x86_64, 256 VCPUs =-

[root@ovs087 PVM_X86_64]# xm list
Name                                        ID   Mem VCPUs State   Time(s)
Domain-0                                     0   511     8 r-----  10398.1
OVM_OL5U7_X86_64_PVM_10GB                   35  2048   256 r-----     30.4

[root@ovs087 PVM_X86_64]# xm save 35 vm.save

[root@ovs087 PVM_X86_64]# xm restore vm.save

[root@ovs087 PVM_X86_64]# xm list
Name                                        ID   Mem VCPUs State   Time(s)
Domain-0                                     0   511     8 r-----  10572.1
OVM_OL5U7_X86_64_PVM_10GB                   36  2048   256 r-----   1466.9

-= HVM x86_64, 128 VCPUs =-

[root@ovs087 HVM_X86_64]# xm list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0   511 8     r-----   8017.4
OVM_OL5U7_X86_64_PVHVM_10GB                 19  2048 128     r-----    343.7

[root@ovs087 HVM_X86_64]# xm save 19 vm.save

[root@ovs087 HVM_X86_64]# xm restore vm.save

[root@ovs087 HVM_X86_64]# xm list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0   511 8     r-----   8241.1
OVM_OL5U7_X86_64_PVHVM_10GB                 20  2048 128     r-----    121.7

-= PVM x86, 64 VCPUs =-

[root@ovs087 PVM_X86]# xm list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0   511 8     r-----  36798.0
OVM_OL5U7_X86_PVM_10GB                      54  2048 32     r-----     92.8

[root@ovs087 PVM_X86]# xm vcpu-list 54 | grep OVM_OL5U7_X86_PVM_10GB | wc -l
64

[root@ovs087 PVM_X86]# xm save 54 vm.save

[root@ovs087 PVM_X86]# xm restore vm.save

[root@ovs087 PVM_X86]# xm list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0   511 8     r-----  36959.3
OVM_OL5U7_X86_PVM_10GB                      55  2048 32     r-----     51.0

[root@ovs087 PVM_X86]# xm vcpu-list 55 | grep OVM_OL5U7_X86_PVM_10GB | wc -l
64

32-bit PVM, 65 VCPUs:

[root@ovs087 PVM_X86]# xm list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0   511 8     r-----  36975.9
OVM_OL5U7_X86_PVM_10GB                      56  2048 32     r-----      8.6

[root@ovs087 PVM_X86]# xm vcpu-list 56 | grep OVM_OL5U7_X86_PVM_10GB | wc -l
65

[root@ovs087 PVM_X86]# xm list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0   511 8     r-----  36977.7
OVM_OL5U7_X86_PVM_10GB                      56  2048 32     r-----     24.8

[root@ovs087 PVM_X86]# xm vcpu-list 56 | grep OVM_OL5U7_X86_PVM_10GB | wc -l
65

[root@ovs087 PVM_X86]# xm save 56 vm.save
Error: /usr/lib64/xen/bin/xc_save 26 56 0 0 0 failed

/var/log/xen/xend.log: INFO (XendCheckpoint:416)
xc: error: No context for VCPU64 (61 = No data available): Internal error

-= HVM x86, 64 VCPUs =-

[root@ovs087 HVM_X86]# xm list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0   511 8     r-----  36506.1
OVM_OL5U7_X86_PVHVM_10GB                    52  2048 32     r-----     68.6

[root@ovs087 HVM_X86]# xm vcpu-list 52 | grep OVM_OL5U7_X86_PVHVM_10GB | 
wc -l
64

[root@ovs087 HVM_X86]# xm save 52 vm.save

[root@ovs087 HVM_X86]# xm restore vm.save

[root@ovs087 HVM_X86]# xm list
Name                                        ID   Mem VCPUs      State   
Time(s)
Domain-0                                     0   511 8     r-----  36730.5
OVM_OL5U7_X86_PVHVM_10GB                    53  2048 32     r-----     19.8

[root@ovs087 HVM_X86]# xm vcpu-list 53 | grep OVM_OL5U7_X86_PVHVM_10GB | 
wc -l
64

-= HVM x86, 128 VCPUs =-

[root@ovs087 HVM_X86]# xm list
Name                                        ID   Mem VCPUs      State   
Time(s)
Domain-0                                     0   511 8     r-----  36261.1
OVM_OL5U7_X86_PVHVM_10GB                    50  2048 32     r-----     34.9

[root@ovs087 HVM_X86]# xm vcpu-list 50 | grep OVM_OL5U7_X86_PVHVM_10GB | 
wc -l
128

[root@ovs087 HVM_X86]# xm save 50 vm.save

[root@ovs087 HVM_X86]# xm restore vm.save

[root@ovs087 HVM_X86]# xm list
Name                                        ID   Mem VCPUs      State   
Time(s)
Domain-0                                     0   511 8     r-----  36480.5
OVM_OL5U7_X86_PVHVM_10GB                    51  2048 32     r-----     20.3

[root@ovs087 HVM_X86]# xm vcpu-list 51 | grep OVM_OL5U7_X86_PVHVM_10GB | 
wc -l
128

--------------020100050906040603030803
Content-Type: text/x-patch;
 name="skip-max-vcpu-id-check.patch"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename="skip-max-vcpu-id-check.patch"

Index: tools/libxc/xc_domain_restore.c
===================================================================
--- tools/libxc/xc_domain_restore.c	(revision 3415)
+++ tools/libxc/xc_domain_restore.c	(working copy)
@@ -771,8 +771,7 @@
     case XC_SAVE_ID_VCPU_INFO:
         buf->new_ctxt_format = 1;
         if ( RDEXACT(fd, &buf->max_vcpu_id, sizeof(buf->max_vcpu_id)) ||
-             buf->max_vcpu_id >= 64 || RDEXACT(fd, &buf->vcpumap,
-                                               sizeof(uint64_t)) ) {
+             RDEXACT(fd, &buf->vcpumap, sizeof(uint64_t)) ) {
             PERROR("Error when reading max_vcpu_id");
             return -1;
         }
Index: tools/libxc/xc_domain_save.c
===================================================================
--- tools/libxc/xc_domain_save.c	(revision 3415)
+++ tools/libxc/xc_domain_save.c	(working copy)
@@ -1566,12 +1566,6 @@
             uint64_t vcpumap;
         } chunk = { XC_SAVE_ID_VCPU_INFO, info.max_vcpu_id };
 
-        if ( info.max_vcpu_id >= 64 )
-        {
-            ERROR("Too many VCPUS in guest!");
-            goto out;
-        }
-
         for ( i = 1; i <= info.max_vcpu_id; i++ )
         {
             xc_vcpuinfo_t vinfo;


--------------020100050906040603030803
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--------------020100050906040603030803--


From xen-devel-bounces@lists.xen.org Fri Aug 17 21:28:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 21:28:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2U5g-00053z-Tu; Fri, 17 Aug 2012 21:28:24 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <junjie.wei@oracle.com>) id 1T2U5f-00053u-7x
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 21:28:23 +0000
Received: from [85.158.138.51:45833] by server-5.bemta-3.messagelabs.com id
	BC/94-08865-677BE205; Fri, 17 Aug 2012 21:28:22 +0000
X-Env-Sender: junjie.wei@oracle.com
X-Msg-Ref: server-13.tower-174.messagelabs.com!1345238898!10196975!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDcwNDM2NA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4572 invoked from network); 17 Aug 2012 21:28:19 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-13.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 17 Aug 2012 21:28:19 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7HLSGFH030623
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK)
	for <xen-devel@lists.xen.org>; Fri, 17 Aug 2012 21:28:16 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7HLSFfv005567
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO)
	for <xen-devel@lists.xen.org>; Fri, 17 Aug 2012 21:28:16 GMT
Received: from abhmt112.oracle.com (abhmt112.oracle.com [141.146.116.64])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7HLSF0H026729
	for <xen-devel@lists.xen.org>; Fri, 17 Aug 2012 16:28:15 -0500
Received: from [10.149.237.224] (/10.149.237.224)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 17 Aug 2012 14:28:15 -0700
Message-ID: <502EB799.9040002@oracle.com>
Date: Fri, 17 Aug 2012 17:28:57 -0400
From: Junjie Wei <junjie.wei@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120714 Thunderbird/14.0
MIME-Version: 1.0
To: xen-devel@lists.xen.org
Content-Type: multipart/mixed; boundary="------------020100050906040603030803"
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Subject: [Xen-devel] VM save/restore
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.
--------------020100050906040603030803
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit

Hello,

There is a problem with VM save/restore in Xen 4.1.2 and earlier versions.
When a VM is configured with more than 64 VCPUs, it can be started and
stopped, but cannot be saved. This affects both PVM and HVM guests.

# xm vcpu-list 3 | grep OVM_OL5U7_X86_64_PVM_10GB | wc -l
65

# xm save 3 vm.save
Error: /usr/lib64/xen/bin/xc_save 27 3 0 0 0 failed

/var/log/xen/xend.log: INFO (XendCheckpoint:416)
xc: error: Too many VCPUS in guest!: Internal error

It was caused by a hard-coded limit in tools/libxc/xc_domain_save.c:

if ( info.max_vcpu_id >= 64 )
{
      ERROR("Too many VCPUS in guest!");
      goto out;
}

And also in tools/libxc/xc_domain_restore.c:

case XC_SAVE_ID_VCPU_INFO:
      buf->new_ctxt_format = 1;
      if ( RDEXACT(fd, &buf->max_vcpu_id, sizeof(buf->max_vcpu_id)) ||
          buf->max_vcpu_id >= 64 || RDEXACT(fd, &buf->vcpumap,
                                            sizeof(uint64_t)) ) {
          PERROR("Error when reading max_vcpu_id");
          return -1;
      }

The code above is in both xen-4.1.2 and xen-unstable.

I think if a VM can be successfully started, then save/restore should
also work. So I made a patch and did some testing.

The original problem is gone, but there are new issues.

Let me summarize the results here.

With the patch, save/restore works as long as the VM can be started,
except in two cases:

1) 32-bit guests can be configured with VCPUs > 32 and started,
    but the guest can only make use of 32 of them.

2) 32-bit PVM guests can be configured with VCPUs > 64 and started,
    but `xm save' does not work.

See the testing below for details. The limit of 128 VCPUs for HVM
guests is already taken into account.

Could you please review the patch and help with these two cases?


Thanks,
Junjie

-= Test environment =-

[root@ovs087 HVM_X86_64]# cat /etc/ovs-release
Oracle VM server release 3.2.1

[root@ovs087 HVM_X86_64]# uname -a
Linux ovs087 2.6.39-200.30.1.el5uek #1 SMP Thu Jul 12 21:47:09 EDT 2012 x86_64 x86_64 x86_64 GNU/Linux

[root@ovs087 HVM_X86_64]# rpm -qa | grep xen
xenpvboot-0.1-8.el5
xen-devel-4.1.2-39
xen-tools-4.1.2-39
xen-4.1.2-39

-= PVM x86_64, 128 VCPUs =-

[root@ovs087 PVM_X86_64]# xm list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0   511 8     r-----   6916.9
OVM_OL5U7_X86_64_PVM_10GB                    9  2048 128     r-----     48.1

[root@ovs087 PVM_X86_64]# xm save 9 vm.save

[root@ovs087 PVM_X86_64]# xm restore vm.save

[root@ovs087 PVM_X86_64]# xm list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0   511 8     r-----   7076.7
OVM_OL5U7_X86_64_PVM_10GB                   10  2048 128     r-----     51.6

-= PVM x86_64, 256 VCPUs =-

[root@ovs087 PVM_X86_64]# xm list
Name                                        ID   Mem VCPUs State   Time(s)
Domain-0                                     0   511     8 r-----  10398.1
OVM_OL5U7_X86_64_PVM_10GB                   35  2048   256 r-----     30.4

[root@ovs087 PVM_X86_64]# xm save 35 vm.save

[root@ovs087 PVM_X86_64]# xm restore vm.save

[root@ovs087 PVM_X86_64]# xm list
Name                                        ID   Mem VCPUs State   Time(s)
Domain-0                                     0   511     8 r-----  10572.1
OVM_OL5U7_X86_64_PVM_10GB                   36  2048   256 r-----   1466.9

-= HVM x86_64, 128 VCPUs =-

[root@ovs087 HVM_X86_64]# xm list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0   511 8     r-----   8017.4
OVM_OL5U7_X86_64_PVHVM_10GB                 19  2048 128     r-----    343.7

[root@ovs087 HVM_X86_64]# xm save 19 vm.save

[root@ovs087 HVM_X86_64]# xm restore vm.save

[root@ovs087 HVM_X86_64]# xm list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0   511 8     r-----   8241.1
OVM_OL5U7_X86_64_PVHVM_10GB                 20  2048 128     r-----    121.7

-= PVM x86, 64 VCPUs =-

[root@ovs087 PVM_X86]# xm list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0   511 8     r-----  36798.0
OVM_OL5U7_X86_PVM_10GB                      54  2048 32     r-----     92.8

[root@ovs087 PVM_X86]# xm vcpu-list 54 | grep OVM_OL5U7_X86_PVM_10GB | wc -l
64

[root@ovs087 PVM_X86]# xm save 54 vm.save

[root@ovs087 PVM_X86]# xm restore vm.save

[root@ovs087 PVM_X86]# xm list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0   511 8     r-----  36959.3
OVM_OL5U7_X86_PVM_10GB                      55  2048 32     r-----     51.0

[root@ovs087 PVM_X86]# xm vcpu-list 55 | grep OVM_OL5U7_X86_PVM_10GB | wc -l
64

-= PVM x86, 65 VCPUs =-

[root@ovs087 PVM_X86]# xm list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0   511 8     r-----  36975.9
OVM_OL5U7_X86_PVM_10GB                      56  2048 32     r-----      8.6

[root@ovs087 PVM_X86]# xm vcpu-list 56 | grep OVM_OL5U7_X86_PVM_10GB | wc -l
65

[root@ovs087 PVM_X86]# xm list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0   511 8     r-----  36977.7
OVM_OL5U7_X86_PVM_10GB                      56  2048 32     r-----     24.8

[root@ovs087 PVM_X86]# xm vcpu-list 56 | grep OVM_OL5U7_X86_PVM_10GB | wc -l
65

[root@ovs087 PVM_X86]# xm save 56 vm.save
Error: /usr/lib64/xen/bin/xc_save 26 56 0 0 0 failed

/var/log/xen/xend.log: INFO (XendCheckpoint:416)
xc: error: No context for VCPU64 (61 = No data available): Internal error

-= HVM x86, 64 VCPUs =-

[root@ovs087 HVM_X86]# xm list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0   511 8     r-----  36506.1
OVM_OL5U7_X86_PVHVM_10GB                    52  2048 32     r-----     68.6

[root@ovs087 HVM_X86]# xm vcpu-list 52 | grep OVM_OL5U7_X86_PVHVM_10GB | wc -l
64

[root@ovs087 HVM_X86]# xm save 52 vm.save

[root@ovs087 HVM_X86]# xm restore vm.save

[root@ovs087 HVM_X86]# xm list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0   511 8     r-----  36730.5
OVM_OL5U7_X86_PVHVM_10GB                    53  2048 32     r-----     19.8

[root@ovs087 HVM_X86]# xm vcpu-list 53 | grep OVM_OL5U7_X86_PVHVM_10GB | wc -l
64

-= HVM x86, 128 VCPUs =-

[root@ovs087 HVM_X86]# xm list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0   511 8     r-----  36261.1
OVM_OL5U7_X86_PVHVM_10GB                    50  2048 32     r-----     34.9

[root@ovs087 HVM_X86]# xm vcpu-list 50 | grep OVM_OL5U7_X86_PVHVM_10GB | wc -l
128

[root@ovs087 HVM_X86]# xm save 50 vm.save

[root@ovs087 HVM_X86]# xm restore vm.save

[root@ovs087 HVM_X86]# xm list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0   511 8     r-----  36480.5
OVM_OL5U7_X86_PVHVM_10GB                    51  2048 32     r-----     20.3

[root@ovs087 HVM_X86]# xm vcpu-list 51 | grep OVM_OL5U7_X86_PVHVM_10GB | wc -l
128

--------------020100050906040603030803
Content-Type: text/x-patch;
 name="skip-max-vcpu-id-check.patch"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename="skip-max-vcpu-id-check.patch"

Index: tools/libxc/xc_domain_restore.c
===================================================================
--- tools/libxc/xc_domain_restore.c	(revision 3415)
+++ tools/libxc/xc_domain_restore.c	(working copy)
@@ -771,8 +771,7 @@
     case XC_SAVE_ID_VCPU_INFO:
         buf->new_ctxt_format = 1;
         if ( RDEXACT(fd, &buf->max_vcpu_id, sizeof(buf->max_vcpu_id)) ||
-             buf->max_vcpu_id >= 64 || RDEXACT(fd, &buf->vcpumap,
-                                               sizeof(uint64_t)) ) {
+             RDEXACT(fd, &buf->vcpumap, sizeof(uint64_t)) ) {
             PERROR("Error when reading max_vcpu_id");
             return -1;
         }
Index: tools/libxc/xc_domain_save.c
===================================================================
--- tools/libxc/xc_domain_save.c	(revision 3415)
+++ tools/libxc/xc_domain_save.c	(working copy)
@@ -1566,12 +1566,6 @@
             uint64_t vcpumap;
         } chunk = { XC_SAVE_ID_VCPU_INFO, info.max_vcpu_id };
 
-        if ( info.max_vcpu_id >= 64 )
-        {
-            ERROR("Too many VCPUS in guest!");
-            goto out;
-        }
-
         for ( i = 1; i <= info.max_vcpu_id; i++ )
         {
             xc_vcpuinfo_t vinfo;


--------------020100050906040603030803
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--------------020100050906040603030803--


From xen-devel-bounces@lists.xen.org Fri Aug 17 22:26:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 22:26:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2Uzu-0005SE-Ov; Fri, 17 Aug 2012 22:26:30 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1T2Uzt-0005S9-H5
	for Xen-devel@lists.xensource.com; Fri, 17 Aug 2012 22:26:29 +0000
Received: from [85.158.143.35:23697] by server-1.bemta-4.messagelabs.com id
	60/22-07754-415CE205; Fri, 17 Aug 2012 22:26:28 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1345242386!13933729!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjk4MDc5\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23725 invoked from network); 17 Aug 2012 22:26:27 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-12.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Aug 2012 22:26:27 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7HMQKuu014745
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 17 Aug 2012 22:26:21 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7HMQJib014060
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 17 Aug 2012 22:26:19 GMT
Received: from abhmt112.oracle.com (abhmt112.oracle.com [141.146.116.64])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7HMQItE010508; Fri, 17 Aug 2012 17:26:18 -0500
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 17 Aug 2012 15:26:18 -0700
Date: Fri, 17 Aug 2012 15:26:17 -0700
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Message-ID: <20120817152617.64e2fe5e@mantra.us.oracle.com>
In-Reply-To: <20120817193604.GA4573@phenom.dumpdata.com>
References: <20120815175724.3405043a@mantra.us.oracle.com>
	<alpine.DEB.2.02.1208161454430.4850@kaball.uk.xensource.com>
	<20120816114650.4db2079f@mantra.us.oracle.com>
	<alpine.DEB.2.02.1208171104230.15568@kaball.uk.xensource.com>
	<20120817122014.3c3387b5@mantra.us.oracle.com>
	<20120817193604.GA4573@phenom.dumpdata.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.7.6 (GTK+ 2.18.9; x86_64-redhat-linux-gnu)
Mime-Version: 1.0
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [RFC PATCH 1/8]: PVH: Basic and preparatory changes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 17 Aug 2012 15:36:04 -0400
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:

> > > For example in balloon.c we are probably only interested in memory
> > > related behavior, so checking for XENFEAT_auto_translated_physmap
> > > should be enough.  In other parts of the code we might want to
> > > check for xen_pv_domain(). If xen_pv_domain() and
> > > XENFEAT_auto_translated_physmap are not enough, we could introduce
> > > another small XENFEAT that specifies that the domain is running
> > > in a HVM container. This way they are all reusable.
> > 
> > yeah, I thought about that, but wasn't sure what the implications
> > would be for a guest thats not PVH but has auto xlated physmap, if
> > there's such a possibility. If you guys think thats not an issue, I
> > can change it.
> 
> dom0_shadow=on on the hypervisor mode enables that in PV mode.

So, if I just add checks for auto_translated_physmap as suggested,
wouldn't I be changing and breaking the code paths for a dom0_shadow
boot of a PV guest? Is dom0_shadow deprecated?

The following would be true for both PVH and dom0_shadow:

#define xen_pvh_domain() (xen_pv_domain() && \
                          xen_feature(XENFEAT_auto_translated_physmap) && \
                          xen_have_vector_callback)  


Also, the SIF flag allows PVH to be enabled via the config file, which
the tool parses before setting the flag for the guest.

At present:
  dom0: put pvh=true at grub command line
  domU: put pvh=1 in the vm.cfg file.


thanks,
Mukesh



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 22:56:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 22:56:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2VSU-0005lh-Fz; Fri, 17 Aug 2012 22:56:02 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T2VST-0005lc-Ac
	for xen-devel@lists.xensource.com; Fri, 17 Aug 2012 22:56:01 +0000
Received: from [85.158.143.99:40030] by server-1.bemta-4.messagelabs.com id
	EA/1A-07754-00CCE205; Fri, 17 Aug 2012 22:56:00 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-9.tower-216.messagelabs.com!1345244159!28796959!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk0MzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19026 invoked from network); 17 Aug 2012 22:56:00 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-9.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 22:56:00 -0000
X-IronPort-AV: E=Sophos;i="4.77,788,1336348800"; d="scan'208";a="14066880"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 22:55:59 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 17 Aug 2012 23:55:59 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1T2VSR-0005qZ-6s;
	Fri, 17 Aug 2012 22:55:59 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1T2VSQ-0001iK-M9;
	Fri, 17 Aug 2012 23:55:58 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13614-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 17 Aug 2012 23:55:58 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13614: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13614 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13614/


From xen-devel-bounces@lists.xen.org Fri Aug 17 22:56:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 22:56:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2VSU-0005lh-Fz; Fri, 17 Aug 2012 22:56:02 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T2VST-0005lc-Ac
	for xen-devel@lists.xensource.com; Fri, 17 Aug 2012 22:56:01 +0000
Received: from [85.158.143.99:40030] by server-1.bemta-4.messagelabs.com id
	EA/1A-07754-00CCE205; Fri, 17 Aug 2012 22:56:00 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-9.tower-216.messagelabs.com!1345244159!28796959!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk0MzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19026 invoked from network); 17 Aug 2012 22:56:00 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-9.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Aug 2012 22:56:00 -0000
X-IronPort-AV: E=Sophos;i="4.77,788,1336348800"; d="scan'208";a="14066880"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	17 Aug 2012 22:55:59 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 17 Aug 2012 23:55:59 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1T2VSR-0005qZ-6s;
	Fri, 17 Aug 2012 22:55:59 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1T2VSQ-0001iK-M9;
	Fri, 17 Aug 2012 23:55:58 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13614-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 17 Aug 2012 23:55:58 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13614: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13614 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13614/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13613
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13613
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13613
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13613

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  73ac4b7ad2e1
baseline version:
 xen                  71a672765111

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Michael Young <m.a.young@durham.ac.uk>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Xiaowei Yang <xiaowei.yang@huawei.com>
  Zhenguo Wang <wangzhenguo@huawei.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=73ac4b7ad2e1
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable 73ac4b7ad2e1
+ branch=xen-unstable
+ revision=73ac4b7ad2e1
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/staging/xen-unstable.hg
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-unstable.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-unstable.hg
+ hg push -r 73ac4b7ad2e1 ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 3 changesets with 6 changes to 4 files

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 23:38:22 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 23:38:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2W6w-00068Z-1m; Fri, 17 Aug 2012 23:37:50 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1T2W6u-00068U-SJ
	for Xen-devel@lists.xensource.com; Fri, 17 Aug 2012 23:37:49 +0000
Received: from [85.158.143.35:38572] by server-2.bemta-4.messagelabs.com id
	25/54-31966-CC5DE205; Fri, 17 Aug 2012 23:37:48 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1345246666!12857174!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjk4MDc5\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3342 invoked from network); 17 Aug 2012 23:37:47 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-11.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Aug 2012 23:37:47 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7HNbhND022827
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 17 Aug 2012 23:37:44 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7HNbhH6006708
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 17 Aug 2012 23:37:43 GMT
Received: from abhmt101.oracle.com (abhmt101.oracle.com [141.146.116.53])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7HNbgFH015090; Fri, 17 Aug 2012 18:37:42 -0500
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 17 Aug 2012 16:37:42 -0700
Date: Fri, 17 Aug 2012 16:37:39 -0700
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20120817163739.386fce5d@mantra.us.oracle.com>
In-Reply-To: <1345193780.30865.109.camel@zakaz.uk.xensource.com>
References: <20120815180131.24aaa5ce@mantra.us.oracle.com>
	<1345193780.30865.109.camel@zakaz.uk.xensource.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.7.6 (GTK+ 2.18.9; x86_64-redhat-linux-gnu)
Mime-Version: 1.0
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [[RFC PATCH 2/8]: PVH: changes related to initial
 boot and irq rewiring
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 17 Aug 2012 09:56:20 +0100
Ian Campbell <Ian.Campbell@citrix.com> wrote:

> On Thu, 2012-08-16 at 02:01 +0100, Mukesh Rathor wrote:
> [...]
> > @@ -1034,6 +1039,10 @@ static int xen_write_msr_safe(unsigned int
> > msr, unsigned low, unsigned high) 
> >  void xen_setup_shared_info(void)
> >  {
> > +	/* do later in xen_pvh_guest_init() when extend_brk is
> > properly setup*/
> > +	if (xen_pvh_domain() && xen_initial_domain())
> > +		return;
> 
> Could we push this setup later for a pv guest too and reduce the
> divergence?

I'm a bit nervous about changing the PV paths until I have the bandwidth to
test them thoroughly with various memory configs, so I'll put in a TBD for now.

> > +
> >  	if (!xen_feature(XENFEAT_auto_translated_physmap)) {
> >  		set_fixmap(FIX_PARAVIRT_BOOTMAP,
> >  			   xen_start_info->shared_info);
> [...]
> > @@ -1274,6 +1287,10 @@ static const struct machine_ops
> > xen_machine_ops __initconst = { */
> >  static void __init xen_setup_stackprotector(void)
> >  {
> > +	if (xen_pvh_domain()) {
> > +		switch_to_new_gdt(0);
> 
> This seems to skip calling setup_stack_canary_segment too?
> 
> Assuming that's not deliberate I'd be tempted to just put "if
> (xen_pv_domain())" around the updates of pv_cpus_ops and leave the
> main flow of the code the same. If it was deliberate a comment might
> be in order.

I meant to add a comment that this will be done in phase II. I'm not very
familiar with the setup_stack_canary_segment code and will need to learn it first.
  

> >  }
> >  
> > +static void __init xen_pvh_guest_init(void)
> > +{
> > +#ifndef __HAVE_ARCH_PTE_SPECIAL
> > +	("__HAVE_ARCH_PTE_SPECIAL is required for PVH for now\n");
> > +	#error("__HAVE_ARCH_PTE_SPECIAL is required for PVH\n");
> > +#endif
> 
> Isn't this an unconditional feature of arch/x86?

Right. I can remove it now. I had started from much older linux.

> > diff --git a/arch/x86/xen/irq.c b/arch/x86/xen/irq.c
> > index 1573376..7c7dfd1 100644
> > --- a/arch/x86/xen/irq.c
> > +++ b/arch/x86/xen/irq.c
> > @@ -100,6 +100,10 @@ PV_CALLEE_SAVE_REGS_THUNK(xen_irq_enable);
> >  
> >  static void xen_safe_halt(void)
> >  {
> > +	/* so event channel can be delivered to us, since in HVM
> > container */
> > +	if (xen_pvh_domain())
> > +		local_irq_enable();
> > +
> >  	/* Blocking includes an implicit local_irq_enable(). */
> 
> So this comment isn't true for a PVH guest? Why not? Should it be?
 
I need to make sure EFLAGS.IF is enabled. IIRC, the comment is saying that
Xen will clear the event channel mask bit. For PVH, there is the additional
EFLAGS.IF flag to consider.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 23:40:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 23:40:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2W9V-0006EC-KJ; Fri, 17 Aug 2012 23:40:29 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1T2W9U-0006E4-L2
	for Xen-devel@lists.xensource.com; Fri, 17 Aug 2012 23:40:28 +0000
Received: from [85.158.139.83:39753] by server-10.bemta-5.messagelabs.com id
	11/CC-13125-B66DE205; Fri, 17 Aug 2012 23:40:27 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-13.tower-182.messagelabs.com!1345246825!28210745!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDcwNDM2NA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13575 invoked from network); 17 Aug 2012 23:40:26 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-13.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 17 Aug 2012 23:40:26 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7HNeLqn013847
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 17 Aug 2012 23:40:21 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7HNeK8u010410
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 17 Aug 2012 23:40:20 GMT
Received: from abhmt101.oracle.com (abhmt101.oracle.com [141.146.116.53])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7HNeKQY016113; Fri, 17 Aug 2012 18:40:20 -0500
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 17 Aug 2012 16:40:20 -0700
Date: Fri, 17 Aug 2012 16:40:19 -0700
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20120817164019.73600ee1@mantra.us.oracle.com>
In-Reply-To: <alpine.DEB.2.02.1208161507020.4850@kaball.uk.xensource.com>
References: <20120815180250.1e068d10@mantra.us.oracle.com>
	<alpine.DEB.2.02.1208161507020.4850@kaball.uk.xensource.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.7.6 (GTK+ 2.18.9; x86_64-redhat-linux-gnu)
Mime-Version: 1.0
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [RFC PATCH 3/8]: PVH: memory manager and paging
 related changes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 16 Aug 2012 15:10:03 +0100
Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:

> >  void __init xen_init_mmu_ops(void)
> >  {
> > +       x86_init.paging.pagetable_setup_done = xen_pagetable_setup_done;
> > +
> > +       if (xen_pvh_domain()) {
> > +               pv_mmu_ops.flush_tlb_others = xen_flush_tlb_others;
> > +
> > +               /* set_pte* for PCI devices to map iomem. */
> > +               if (xen_initial_domain()) {
> > +                       pv_mmu_ops.set_pte = xen_dom0pvh_set_pte;
> > +                       pv_mmu_ops.set_pte_at = xen_dom0pvh_set_pte_at;
> > +               }
> > +               return;
> > +       }
> 
> Considering that the implementation of xen_dom0pvh_set_pte is
> native_set_pte, can't we just leave it to the default that is
> native_set_pte?

We can. I had more code in the function that I removed when I mapped
the entire IO space up front. After that, only a lot of debug code was
left there. I can change it to native now.
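The point being agreed to here can be shown with a small userspace sketch
(plain C; the type and struct names are simplified stand-ins, not the real
kernel definitions): installing a hook that is identical to the default is
observably a no-op, so the assignment can simply be dropped.

```c
#include <assert.h>

/* Hypothetical stand-ins for pte_t, native_set_pte and pv_mmu_ops. */
typedef unsigned long pte_t;

static void native_set_pte(pte_t *ptep, pte_t val)
{
	*ptep = val;
}

struct mmu_ops {
	void (*set_pte)(pte_t *ptep, pte_t val);
};

/* Default ops table, as the paravirt layer would initialise it. */
static struct mmu_ops ops = { .set_pte = native_set_pte };

static pte_t install_and_use(void)
{
	pte_t pte = 0;

	ops.set_pte = native_set_pte;	/* redundant: already the default */
	ops.set_pte(&pte, 0x42);
	return pte;
}
```

Since the override and the default are the same function pointer, removing
the assignment leaves behaviour unchanged.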


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 23:50:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 23:50:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2WIP-0006Qt-Ku; Fri, 17 Aug 2012 23:49:41 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1T2WIO-0006Qo-0s
	for Xen-devel@lists.xensource.com; Fri, 17 Aug 2012 23:49:40 +0000
Received: from [85.158.139.83:59328] by server-4.bemta-5.messagelabs.com id
	44/ED-12386-398DE205; Fri, 17 Aug 2012 23:49:39 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-3.tower-182.messagelabs.com!1345247377!28743462!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjk4MDc5\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29084 invoked from network); 17 Aug 2012 23:49:38 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-3.tower-182.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Aug 2012 23:49:38 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7HNnYpI028670
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 17 Aug 2012 23:49:35 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7HNnXtp014735
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 17 Aug 2012 23:49:34 GMT
Received: from abhmt106.oracle.com (abhmt106.oracle.com [141.146.116.58])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7HNnXwT019889; Fri, 17 Aug 2012 18:49:33 -0500
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 17 Aug 2012 16:49:33 -0700
Date: Fri, 17 Aug 2012 16:49:30 -0700
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20120817164930.78c0abd7@mantra.us.oracle.com>
In-Reply-To: <1345195575.30865.128.camel@zakaz.uk.xensource.com>
References: <20120815180250.1e068d10@mantra.us.oracle.com>
	<1345195575.30865.128.camel@zakaz.uk.xensource.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.7.6 (GTK+ 2.18.9; x86_64-redhat-linux-gnu)
Mime-Version: 1.0
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [RFC PATCH 3/8]: PVH: memory manager and paging
 related changes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 17 Aug 2012 10:26:15 +0100
Ian Campbell <Ian.Campbell@citrix.com> wrote:

> On Thu, 2012-08-16 at 02:02 +0100, Mukesh Rathor wrote:
> > +
> >         if (HYPERVISOR_update_va_mapping((unsigned long)addr, pte, 0))
> >                 BUG();
> >  }
> > @@ -1745,6 +1785,7 @@ static void convert_pfn_mfn(void *v)
> >   * but that's enough to get __va working.  We need to fill in the
> > rest
> >   * of the physical mapping once some sort of allocator has been set
> >   * up.
> > + * NOTE: for PVH, the page tables are native with HAP required.
> 
> OOI does this mean shadow doesn't work?

Most likely, for now. Will need to explore that later.


> > +
> > +       if (xen_pvh_domain()) {
> > +               pv_mmu_ops.flush_tlb_others = xen_flush_tlb_others;
> > +
> > +               /* set_pte* for PCI devices to map iomem. */
> > +               if (xen_initial_domain()) {
> > +                       pv_mmu_ops.set_pte = xen_dom0pvh_set_pte;
> > +                       pv_mmu_ops.set_pte_at = xen_dom0pvh_set_pte_at;
> 
> Is this wrong for domU or have you just not tried/implemented
> passthrough support yet?

No; passthrough is phase II/III. It's too big a project for one person to
do in a single shot as it is ;).


> > + * exported function, so no need to export this.
> > + */
> > +static int pvh_add_to_xen_p2m(unsigned long lpfn, unsigned long fgmfn,
> > +                             unsigned int domid)
> > +{
> > +       int rc;
> > +       struct xen_add_to_physmap pmb = {.foreign_domid = domid};
> 
> I'm not sure but I think CodingStyle would want spaces inside the {}s.
> 
> What is the b in pmb? Phys Map B??? (every other user of this
> interface says xatp, FWIW)

Because originally it was a batch function, but that got changed after
the earlier code review. I will rename it.
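The two style points above (the xatp name and spaces inside the braces)
come out as below. The struct here is a simplified local mock for
illustration; the real struct xen_add_to_physmap lives in the Xen public
headers, and foreign_domid is the field the patch adds.

```c
#include <assert.h>
#include <stdint.h>

/* Simplified mock of the hypercall argument structure. */
struct xen_add_to_physmap {
	uint16_t foreign_domid;
	unsigned int space;
	uint64_t idx;
	uint64_t gpfn;
};

/* Renamed per review: xatp, with spaces inside the braces per
 * Documentation/CodingStyle.  The designated initialiser zeroes the
 * remaining fields. */
static struct xen_add_to_physmap make_xatp(uint16_t domid)
{
	struct xen_add_to_physmap xatp = { .foreign_domid = domid };

	return xatp;
}
```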


> > HYPERVISOR_memory_op(XENMEM_remove_from_physmap, &xrp);
> > +               if (rc) {
> > +                       pr_warn("Failed to unmap pfn:%lx rc:%d done:%d\n",
> > +                               spfn+i, rc, i);
> > +                       return 1;
> > +               }
> > +       }
> > +       return 0;
> > +}
> > +EXPORT_SYMBOL_GPL(pvh_rem_xen_p2m);
> 
> What is the external/modular user of this?

privcmd.

> I guess this is why you noted that pvh_add_to_xen_p2m didn't need
> exporting, which struck me as unnecessary at the time.
> 
> pvh_add_to_xen_p2m seems to have an exported a wrapper, why not rem
> too? e.g. xen_unmap_domain_mfn_range?

I believe unmapping happens through native code, so we just need to remove
the entries from the EPT. Or rather, the HAP tables.

> > +
> > +       native_set_pte(ptep, pteval);
> > +       if (pvh_add_to_xen_p2m(pfn, pvhp->fgmfn, pvhp->domid))
> > +               return -EFAULT;
> 
> Is there a little window here where we've setup the page table entry
> but the p2m entry behind it is uninitialised?
> 
> I suppose even if an interrupt occurred we can rely on the virtual
> address not having been "exposed" anywhere yet and therefore there is
> no chance of anyone dereferencing it. But is there any harm in just
> flipping the ordering here?

Should be OK flipping the order.
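The flipped ordering can be sketched as a toy model (all names below are
illustrative mocks, not the real kernel API): the backing p2m entry is
made visible first, and the page-table entry that depends on it is
written only after that succeeds, so a PTE never points at an
uninitialised p2m slot.

```c
#include <assert.h>

#define NPAGES 16

static unsigned long p2m[NPAGES];	/* 0 = no backing entry */
static unsigned long pte[NPAGES];	/* 0 = not present */

/* Mock for pvh_add_to_xen_p2m(). */
static int add_p2m_entry(unsigned long pfn, unsigned long mfn)
{
	if (pfn >= NPAGES)
		return -1;
	p2m[pfn] = mfn;
	return 0;
}

/* Fixed ordering: p2m first, PTE second. */
static int map_foreign_page(unsigned long pfn, unsigned long mfn)
{
	int rc = add_p2m_entry(pfn, mfn);

	if (rc)
		return rc;
	pte[pfn] = mfn;		/* stands in for native_set_pte() */
	return 0;
}
```

On failure nothing has been published in the page table, which also
removes the window the review asks about.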

> Why EFAULT? Seems a bit random, I think HYPERVISOR_memory_op returns a
> -ve errno which you could propagate.

Ok, sounds good.
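The error-propagation point reduces to the following sketch, where
memory_op_mock stands in for HYPERVISOR_memory_op (it is not a real API):
rather than collapsing every failure to -EFAULT, the caller returns
whatever negative errno the hypercall produced.

```c
#include <assert.h>
#include <errno.h>

/* Mock hypercall: returns 0 on success, -errno on failure. */
static int memory_op_mock(int fail_with)
{
	return fail_with ? -fail_with : 0;
}

/* Before: the real reason for the failure is lost. */
static int add_to_p2m_old(int fail_with)
{
	if (memory_op_mock(fail_with))
		return -EFAULT;
	return 0;
}

/* After: propagate the hypercall's own negative errno. */
static int add_to_p2m_new(int fail_with)
{
	return memory_op_mock(fail_with);
}
```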


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 17 23:55:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Aug 2012 23:55:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2WNr-0006Zp-Dj; Fri, 17 Aug 2012 23:55:19 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1T2WNp-0006Zh-Fe
	for Xen-devel@lists.xensource.com; Fri, 17 Aug 2012 23:55:17 +0000
Received: from [85.158.143.99:39705] by server-2.bemta-4.messagelabs.com id
	79/29-31966-4E9DE205; Fri, 17 Aug 2012 23:55:16 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-9.tower-216.messagelabs.com!1345247713!28801058!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDcwNDM2NA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19955 invoked from network); 17 Aug 2012 23:55:14 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-9.tower-216.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Aug 2012 23:55:14 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7HNt8RH021315
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 17 Aug 2012 23:55:09 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7HNt7YA018479
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 17 Aug 2012 23:55:08 GMT
Received: from abhmt113.oracle.com (abhmt113.oracle.com [141.146.116.65])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7HNt67E007307; Fri, 17 Aug 2012 18:55:06 -0500
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 17 Aug 2012 16:55:06 -0700
Date: Fri, 17 Aug 2012 16:55:05 -0700
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20120817165505.05981567@mantra.us.oracle.com>
In-Reply-To: <1345196083.30865.133.camel@zakaz.uk.xensource.com>
References: <20120815180356.08d4d2e4@mantra.us.oracle.com>
	<1345196083.30865.133.camel@zakaz.uk.xensource.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.7.6 (GTK+ 2.18.9; x86_64-redhat-linux-gnu)
Mime-Version: 1.0
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [RFC PATCH 4/8]: identity map, events,
 and xenbus related changes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> > diff --git a/drivers/xen/events.c b/drivers/xen/events.c
> > index 7595581..260113e 100644
> > --- a/drivers/xen/events.c
> > +++ b/drivers/xen/events.c
> > @@ -1814,6 +1814,13 @@ void __init xen_init_IRQ(void)
> >  		if (xen_initial_domain())
> >  			pci_xen_initial_domain();
> >  
> > +		if (xen_pvh_domain()) {
> > +			xen_callback_vector();
> 
> The definition of this function is surrounded by CONFIG_XEN_PVHVM, or
> did I miss where you removed that and/or the appropriate Kconfig runes
> to make it so?

Right. I really dislike the zillion config options. Since this is used by
PVH now, we can remove the PVHVM restriction on it.
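The Kconfig point can be illustrated with a mock (everything below is
hypothetical; CONFIG_XEN_PVH in particular is an assumed option name): if
xen_callback_vector() is only compiled under CONFIG_XEN_PVHVM, a PVH
caller fails to link unless the guard is widened or dropped.

```c
#include <assert.h>

#define CONFIG_XEN_PVHVM 1	/* pretend the option is enabled */

static int callback_vector_installed;

/* Widened guard: available to both PVHVM and (assumed) PVH configs. */
#if defined(CONFIG_XEN_PVHVM) || defined(CONFIG_XEN_PVH)
static void xen_callback_vector(void)
{
	callback_vector_installed = 1;
}
#endif

/* Mock of the xen_init_IRQ() hunk quoted above. */
static void init_irq_mock(int is_pvh)
{
	if (is_pvh)
		xen_callback_vector();
}
```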



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Aug 18 00:07:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 18 Aug 2012 00:07:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2WZ0-0007Hq-OI; Sat, 18 Aug 2012 00:06:50 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1T2WYz-0007Hl-Sx
	for Xen-devel@lists.xensource.com; Sat, 18 Aug 2012 00:06:50 +0000
Received: from [85.158.138.51:40435] by server-1.bemta-3.messagelabs.com id
	D8/5A-09327-99CDE205; Sat, 18 Aug 2012 00:06:49 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-10.tower-174.messagelabs.com!1345248406!24852956!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDcwNTUyMA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16005 invoked from network); 18 Aug 2012 00:06:48 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-10.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 18 Aug 2012 00:06:48 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7I06hNr027701
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Sat, 18 Aug 2012 00:06:44 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7I06gKw013323
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Sat, 18 Aug 2012 00:06:43 GMT
Received: from abhmt103.oracle.com (abhmt103.oracle.com [141.146.116.55])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7I06gWn012272; Fri, 17 Aug 2012 19:06:42 -0500
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 17 Aug 2012 17:06:42 -0700
Date: Fri, 17 Aug 2012 17:06:41 -0700
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20120817170641.4261229b@mantra.us.oracle.com>
In-Reply-To: <1345197150.30865.147.camel@zakaz.uk.xensource.com>
References: <20120815180622.0c988f48@mantra.us.oracle.com>
	<1345197150.30865.147.camel@zakaz.uk.xensource.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.7.6 (GTK+ 2.18.9; x86_64-redhat-linux-gnu)
Mime-Version: 1.0
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [RFC PATCH 7/8]: PVH: grant changes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 17 Aug 2012 10:52:30 +0100
Ian Campbell <Ian.Campbell@citrix.com> wrote:

> On Thu, 2012-08-16 at 02:06 +0100, Mukesh Rathor wrote:
> > diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
> > index 0bfc1ef..2430133 100644
> > --- a/drivers/xen/grant-table.c
> > +++ b/drivers/xen/grant-table.c


> > 	rc = HYPERVISOR_memory_op(XENMEM_add_to_physmap, &xatp);
> > 	if (rc != 0) {
> >  				printk(KERN_WARNING
> > @@ -1053,7 +1058,7 @@ static void gnttab_request_version(void)
> >  	int rc;
> >  	struct gnttab_set_version gsv;
> >  
> > -	if (xen_hvm_domain())
> > +	if (xen_hvm_domain() || xen_pvh_domain())
> 
> Does something stop pvh using v2?

I had an issue related to the grstatus field that was added, so I punted it
for now. It's phase II, which is a big phase now :) :)... 


> >  
> > -	if (xen_pv_domain())
> > +	/* PVH note: xen will free existing kmalloc'd mfn in
> > +	 * XENMEM_add_to_physmap */
> > +	if (xen_pvh_domain() && !gnttab_shared.addr) {
> > +		gnttab_shared.addr =
> > +			kmalloc(max_nr_gframes * PAGE_SIZE,
> > GFP_KERNEL);
> > +		if ( !gnttab_shared.addr ) {
> > +			printk(KERN_WARNING "%s", kmsg);
> 
> Why this construct instead of just the string literal?
 
To avoid line overflow. I don't like code spanning 80 columns, but if you
split the string, then you can't grep for it.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Aug 18 00:48:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 18 Aug 2012 00:48:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2XCx-0007Y1-66; Sat, 18 Aug 2012 00:48:07 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mike@flyn.org>) id 1T2UES-0005Hu-7C
	for xen-devel@lists.xen.org; Fri, 17 Aug 2012 21:37:28 +0000
Received: from [85.158.139.83:56629] by server-9.bemta-5.messagelabs.com id
	46/0C-26123-799BE205; Fri, 17 Aug 2012 21:37:27 +0000
X-Env-Sender: mike@flyn.org
X-Msg-Ref: server-7.tower-182.messagelabs.com!1345239446!24793216!1
X-Originating-IP: [72.251.202.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17299 invoked from network); 17 Aug 2012 21:37:26 -0000
Received: from cust-smtp2.lga6.us.voxel.net (HELO
	cust-smtp2.lga6.us.voxel.net) (72.251.202.115)
	by server-7.tower-182.messagelabs.com with SMTP;
	17 Aug 2012 21:37:26 -0000
Received: from vhost2.lga6.us.voxel.net (vhost2.lga6.us.voxel.net
	[72.251.193.170])
	by cust-smtp2.lga6.us.voxel.net (Postfix) with ESMTP id 4BD58F00E8
	for <xen-devel@lists.xen.org>; Fri, 17 Aug 2012 17:37:27 -0400 (EDT)
Received: (qmail 8143 invoked by uid 108); 17 Aug 2012 17:37:26 -0400
Received: from unknown (HELO imp.flyn.org) (mike@flyn.org@131.193.36.95)
	by vhost2.lga6.us.voxel.net with ESMTPSA; 17 Aug 2012 17:37:26 -0400
Received: by imp.flyn.org (Postfix, from userid 1101)
	id E29AED0A002; Fri, 17 Aug 2012 16:37:24 -0500 (CDT)
Date: Fri, 17 Aug 2012 16:37:24 -0500
From: "W. Michael Petullo" <mike@flyn.org>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20120817213723.GA3640@imp.flyn.org>
References: <20120531045040.GA16124@imp.flyn.org>
	<20425.863.785362.721748@mariner.uk.xensource.com>
	<20120602141903.GA2903@imp.flyn.org>
	<20431.13200.114406.125096@mariner.uk.xensource.com>
	<20432.59361.773966.629519@mariner.uk.xensource.com>
	<20120608183106.GA12674@imp.flyn.org>
	<20437.62868.542710.418122@mariner.uk.xensource.com>
	<20120611202224.GA22246@imp.flyn.org>
	<1341413324.31696.74.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1341413324.31696.74.camel@zakaz.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Mailman-Approved-At: Sat, 18 Aug 2012 00:48:05 +0000
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Using "xl create" without domain config file
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>> # HG changeset patch
>> # Parent 435493696053a079ec17d6e1a63e5f2be3a2c9d0
>> xl: Allow use of /dev/null with xl create to enable command-line definition
>> 
>> xm allows specifying /dev/null as the domain configuration argument to its
>> create option; add same functionality to xl. xl treats the configuration
>> argument /dev/null as a special case.  This allows specifying an entire
>> domain configuration on the command line.
>> 
>> Signed-off-by: W. Michael Petullo <mike@flyn.org>
 
> I have applied this patch but it seems to be against the xen 4.1 tree
> and not against xen-unstable. In this case I think I was able to
> trivially resolve the conflicts, but please do check I got it right.

Things work nicely in Xen 4.2.0 RC2. Thank you.
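For illustration, the usage the patch enables can be sketched like this (the guest name, kernel path, and memory size below are hypothetical, not from the patch). xl already merges trailing key=value arguments into the configuration, so passing /dev/null as the config file means the whole domain is defined on the command line:

```shell
# Hypothetical invocation; name/kernel/memory values are illustrative:
#
#   xl create /dev/null 'name="testvm"' 'kernel="/boot/vmlinuz"' 'memory=512'
#
# The special case relies on /dev/null reading as an empty file,
# which we can verify directly:
test ! -s /dev/null && echo "empty config"
```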

-- 
Mike

:wq

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Aug 18 05:43:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 18 Aug 2012 05:43:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2bnp-0004Za-B3; Sat, 18 Aug 2012 05:42:29 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T2bno-0004ZV-48
	for xen-devel@lists.xensource.com; Sat, 18 Aug 2012 05:42:28 +0000
Received: from [85.158.138.51:23240] by server-1.bemta-3.messagelabs.com id
	69/E0-09327-34B2F205; Sat, 18 Aug 2012 05:42:27 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-4.tower-174.messagelabs.com!1345268545!28880469!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk1NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13032 invoked from network); 18 Aug 2012 05:42:26 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-4.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Aug 2012 05:42:26 -0000
X-IronPort-AV: E=Sophos;i="4.77,790,1336348800"; d="scan'208";a="14068137"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	18 Aug 2012 05:42:24 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Sat, 18 Aug 2012 06:42:24 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1T2bnk-0008V6-2k;
	Sat, 18 Aug 2012 05:42:24 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1T2bnj-0001hG-VF;
	Sat, 18 Aug 2012 06:42:23 +0100
To: xen-devel@lists.xensource.com
Message-ID: <osstest-13615-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 18 Aug 2012 06:42:23 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13615: tolerable trouble:
	broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13615 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13615/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-rhel6hvm-intel  3 host-install(3)         broken pass in 13614

Regressions which are regarded as allowable (not blocking):
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13614
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13614
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13614
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13614

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  73ac4b7ad2e1
baseline version:
 xen                  73ac4b7ad2e1

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               broken  
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Aug 18 06:39:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 18 Aug 2012 06:39:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2cgc-00050z-9E; Sat, 18 Aug 2012 06:39:06 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1T2cga-00050q-Ez
	for xen-devel@lists.xen.org; Sat, 18 Aug 2012 06:39:04 +0000
Received: from [85.158.139.83:55672] by server-11.bemta-5.messagelabs.com id
	17/C0-29296-2883F205; Sat, 18 Aug 2012 06:38:58 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-16.tower-182.messagelabs.com!1345271936!21430638!1
X-Originating-IP: [209.85.212.179]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5190 invoked from network); 18 Aug 2012 06:38:56 -0000
Received: from mail-wi0-f179.google.com (HELO mail-wi0-f179.google.com)
	(209.85.212.179)
	by server-16.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Aug 2012 06:38:56 -0000
Received: by wibhq4 with SMTP id hq4so1881745wib.14
	for <xen-devel@lists.xen.org>; Fri, 17 Aug 2012 23:38:56 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=nkB9/y0OVnlL5BGG1Mn9pc0lKnHOK7Z0pXiZi5aRKs0=;
	b=nFK4egopsaq1xBssU1yoAD1thPj/SYrljq3vADe+PFkfTr/bLBNMmoM58u/hMwNcCc
	dfrUwlPUOJo6jmFM6umoy6sV4N4sGT6r1iA5r4hAudaq4woES3cP5Rx/vt53k07iIdnH
	6svhE5qNuZxmRV2D8B2scMWG0BUm5DXYFRrgyCnq9/gkRZ/UqMWQKpX5CeUftj4OyILP
	xPg6ZTNsoRDIIuo3opb7nwsloJbuNaTu28NTVudUvR2oY773cQUUSVFo6mJCCJij+FwY
	TEdl2tclhQgbe5OB+rYuAVTzGPQLdNk4ZBt26JLb67Nj5GfZLWAV/MQjw6FxG5UsIJPj
	2UDQ==
Received: by 10.216.122.203 with SMTP id t53mr4195421weh.5.1345271935975;
	Fri, 17 Aug 2012 23:38:55 -0700 (PDT)
Received: from [192.168.1.68]
	(host86-157-166-190.range86-157.btcentralplus.com. [86.157.166.190])
	by mx.google.com with ESMTPS id ep14sm14329378wid.0.2012.08.17.23.38.53
	(version=SSLv3 cipher=OTHER); Fri, 17 Aug 2012 23:38:55 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.32.0.111121
Date: Sat, 18 Aug 2012 07:38:44 +0100
From: Keir Fraser <keir.xen@gmail.com>
To: Junjie Wei <junjie.wei@oracle.com>,
	<xen-devel@lists.xen.org>
Message-ID: <CC54F704.3C5C9%keir.xen@gmail.com>
Thread-Topic: [Xen-devel] VM save/restore
Thread-Index: Ac19DBzOEz0X+5UCukWS7vOcOCUzrQ==
In-Reply-To: <502EB799.9040002@oracle.com>
Mime-version: 1.0
Subject: Re: [Xen-devel] VM save/restore
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 17/08/2012 22:28, "Junjie Wei" <junjie.wei@oracle.com> wrote:

> Hello,
> 
> There is a problem with VM save/restore in Xen-4.1.2 and earlier versions.
> When a VM is configured with VCPUs > 64, it can be started or stopped,
> but cannot be saved. It happens to both PVM and HVM guests.
> 
> # xm vcpu-list 3 | grep OVM_OL5U7_X86_64_PVM_10GB | wc -l
> 65
> 
> # xm save 3 vm.save
> Error: /usr/lib64/xen/bin/xc_save 27 3 0 0 0 failed
> 
> /var/log/xen/xend.log: INFO (XendCheckpoint:416)
> xc: error: Too many VCPUS in guest!: Internal error
> 
> It was caused by a hard-coded limit in tools/libxc/xc_domain_save.c:
> 
> if ( info.max_vcpu_id >= 64 )
> {
>       ERROR("Too many VCPUS in guest!");
>       goto out;
> }
> 
> And also in tools/libxc/xc_domain_restore.c:
> 
> case XC_SAVE_ID_VCPU_INFO:
>       buf->new_ctxt_format = 1;
>       if ( RDEXACT(fd, &buf->max_vcpu_id, sizeof(buf->max_vcpu_id)) ||
>           buf->max_vcpu_id >= 64 || RDEXACT(fd, &buf->vcpumap,
>                                             sizeof(uint64_t)) ) {
>           PERROR("Error when reading max_vcpu_id");
>           return -1;
>       }
> 
> The code above is in both xen-4.1.2 and xen-unstable.
> 
> I think if a VM can be successfully started, then save/restore should
> also work. So I made a patch and did some testing.

The check for 64 VCPUs is to cover the fact that we only save/restore a 64-bit
vcpumap. That would need fixing too, surely, or CPUs > 64 would be offline
after restore, I would imagine.

And what is a PVM guest?

 -- Keir

> The above problem is gone but there are new ones.
> 
> Let me summarize the result here.
> 
> With the patch, save/restore works fine as long as the VM can be started,
> except in two cases.
> 
> 1) 32-bit guests can be configured with VCPUs > 32 and started,
>     but the guest can only make use of 32 of them.
> 
> 2) 32-bit PVM guests can be configured with VCPUs > 64 and started,
>     but `xm save' does not work.
> 
> See the testing below for details. The limit of 128 VCPUs for HVM
> guests is already taken into account.
> 
> Could you please review the patch and help with these two cases?
> 
> 
> Thanks,
> Junjie
> 
> -= Test environment =-
> 
> [root@ovs087 HVM_X86_64]# cat /etc/ovs-release
> Oracle VM server release 3.2.1
> 
> [root@ovs087 HVM_X86_64]# uname -a
> Linux ovs087 2.6.39-200.30.1.el5uek #1 SMP Thu Jul 12 21:47:09 EDT 2012
> x86_64 x86_64 x86_64 GNU/Linux
> 
> [root@ovs087 HVM_X86_64]# rpm -qa | grep xen
> xenpvboot-0.1-8.el5
> xen-devel-4.1.2-39
> xen-tools-4.1.2-39
> xen-4.1.2-39
> 
> -= PVM x86_64, 128 VCPUs =-
> 
> [root@ovs087 PVM_X86_64]# xm list
> Name                                        ID   Mem VCPUs      State
> Time(s)
> Domain-0                                     0   511 8     r-----   6916.9
> OVM_OL5U7_X86_64_PVM_10GB                    9  2048 128     r-----     48.1
> 
> [root@ovs087 PVM_X86_64]# xm save 9 vm.save
> 
> [root@ovs087 PVM_X86_64]# xm restore vm.save
> 
> [root@ovs087 PVM_X86_64]# xm list
> Name                                        ID   Mem VCPUs      State
> Time(s)
> Domain-0                                     0   511 8     r-----   7076.7
> OVM_OL5U7_X86_64_PVM_10GB                   10  2048 128     r-----     51.6
> 
> -= PVM x86_64, 256 VCPUs =-
> 
> [root@ovs087 PVM_X86_64]# xm list
> Name                                        ID   Mem VCPUs State   Time(s)
> Domain-0                                     0   511     8 r-----  10398.1
> OVM_OL5U7_X86_64_PVM_10GB                   35  2048   256 r-----     30.4
> 
> [root@ovs087 PVM_X86_64]# xm save 35 vm.save
> 
> [root@ovs087 PVM_X86_64]# xm restore vm.save
> 
> [root@ovs087 PVM_X86_64]# xm list
> Name                                        ID   Mem VCPUs State   Time(s)
> Domain-0                                     0   511     8 r-----  10572.1
> OVM_OL5U7_X86_64_PVM_10GB                   36  2048   256 r-----   1466.9
> 
> -= HVM x86_64, 128 VCPUs =-
> 
> [root@ovs087 HVM_X86_64]# xm list
> Name                                        ID   Mem VCPUs      State
> Time(s)
> Domain-0                                     0   511 8     r-----   8017.4
> OVM_OL5U7_X86_64_PVHVM_10GB                 19  2048 128     r-----    343.7
> 
> [root@ovs087 HVM_X86_64]# xm save 19 vm.save
> 
> [root@ovs087 HVM_X86_64]# xm restore vm.save
> 
> [root@ovs087 HVM_X86_64]# xm list
> Name                                        ID   Mem VCPUs      State
> Time(s)
> Domain-0                                     0   511 8     r-----   8241.1
> OVM_OL5U7_X86_64_PVHVM_10GB                 20  2048 128     r-----    121.7
> 
> -= PVM x86, 64 VCPUs =-
> 
> [root@ovs087 PVM_X86]# xm list
> Name                                        ID   Mem VCPUs      State
> Time(s)
> Domain-0                                     0   511 8     r-----  36798.0
> OVM_OL5U7_X86_PVM_10GB                      54  2048 32     r-----     92.8
> 
> [root@ovs087 PVM_X86]# xm vcpu-list 54 | grep OVM_OL5U7_X86_PVM_10GB | wc -l
> 64
> 
> [root@ovs087 PVM_X86]# xm save 54 vm.save
> 
> [root@ovs087 PVM_X86]# xm restore vm.save
> 
> [root@ovs087 PVM_X86]# xm list
> Name                                        ID   Mem VCPUs      State
> Time(s)
> Domain-0                                     0   511 8     r-----  36959.3
> OVM_OL5U7_X86_PVM_10GB                      55  2048 32     r-----     51.0
> 
> [root@ovs087 PVM_X86]# xm vcpu-list 55 | grep OVM_OL5U7_X86_PVM_10GB | wc -l
> 64
> 
> 32-bit PVM, 65 VCPUs:
> 
> [root@ovs087 PVM_X86]# xm list
> Name                                        ID   Mem VCPUs      State
> Time(s)
> Domain-0                                     0   511 8     r-----  36975.9
> OVM_OL5U7_X86_PVM_10GB                      56  2048 32     r-----      8.6
> 
> [root@ovs087 PVM_X86]# xm vcpu-list 56 | grep OVM_OL5U7_X86_PVM_10GB | wc -l
> 65
> 
> [root@ovs087 PVM_X86]# xm list
> Name                                        ID   Mem VCPUs      State
> Time(s)
> Domain-0                                     0   511 8     r-----  36977.7
> OVM_OL5U7_X86_PVM_10GB                      56  2048 32     r-----     24.8
> 
> [root@ovs087 PVM_X86]# xm vcpu-list 56 | grep OVM_OL5U7_X86_PVM_10GB | wc -l
> 65
> 
> [root@ovs087 PVM_X86]# xm save 56 vm.save
> Error: /usr/lib64/xen/bin/xc_save 26 56 0 0 0 failed
> 
> /var/log/xen/xend.log: INFO (XendCheckpoint:416)
> xc: error: No context for VCPU64 (61 = No data available): Internal error
> 
> -= HVM x86, 64 VCPUs =-
> 
> [root@ovs087 HVM_X86]# xm list
> Name                                        ID   Mem VCPUs      State
> Time(s)
> Domain-0                                     0   511 8     r-----  36506.1
> OVM_OL5U7_X86_PVHVM_10GB                    52  2048 32     r-----     68.6
> 
> [root@ovs087 HVM_X86]# xm vcpu-list 52 | grep OVM_OL5U7_X86_PVHVM_10GB |
> wc -l
> 64
> 
> [root@ovs087 HVM_X86]# xm save 52 vm.save
> 
> [root@ovs087 HVM_X86]# xm restore vm.save
> 
> [root@ovs087 HVM_X86]# xm list
> Name                                        ID   Mem VCPUs      State
> Time(s)
> Domain-0                                     0   511 8     r-----  36730.5
> OVM_OL5U7_X86_PVHVM_10GB                    53  2048 32     r-----     19.8
> 
> [root@ovs087 HVM_X86]# xm vcpu-list 53 | grep OVM_OL5U7_X86_PVHVM_10GB |
> wc -l
> 64
> 
> -= HVM x86, 128 VCPUs =-
> 
> [root@ovs087 HVM_X86]# xm list
> Name                                        ID   Mem VCPUs      State
> Time(s)
> Domain-0                                     0   511 8     r-----  36261.1
> OVM_OL5U7_X86_PVHVM_10GB                    50  2048 32     r-----     34.9
> 
> [root@ovs087 HVM_X86]# xm vcpu-list 50 | grep OVM_OL5U7_X86_PVHVM_10GB |
> wc -l
> 128
> 
> [root@ovs087 HVM_X86]# xm save 50 vm.save
> 
> [root@ovs087 HVM_X86]# xm restore vm.save
> 
> [root@ovs087 HVM_X86]# xm list
> Name                                        ID   Mem VCPUs      State
> Time(s)
> Domain-0                                     0   511 8     r-----  36480.5
> OVM_OL5U7_X86_PVHVM_10GB                    51  2048 32     r-----     20.3
> 
> [root@ovs087 HVM_X86]# xm vcpu-list 51 | grep OVM_OL5U7_X86_PVHVM_10GB |
> wc -l
> 128
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Aug 18 07:35:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 18 Aug 2012 07:35:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2dYn-0005ZI-FJ; Sat, 18 Aug 2012 07:35:05 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1T2dYl-0005ZD-Hv
	for xen-devel@lists.xen.org; Sat, 18 Aug 2012 07:35:03 +0000
Received: from [85.158.143.35:59877] by server-3.bemta-4.messagelabs.com id
	A6/9A-09529-6A54F205; Sat, 18 Aug 2012 07:35:02 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1345275301!16338686!1
X-Originating-IP: [209.85.212.179]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7340 invoked from network); 18 Aug 2012 07:35:01 -0000
Received: from mail-wi0-f179.google.com (HELO mail-wi0-f179.google.com)
	(209.85.212.179)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Aug 2012 07:35:01 -0000
Received: by wibhq4 with SMTP id hq4so1902933wib.14
	for <xen-devel@lists.xen.org>; Sat, 18 Aug 2012 00:35:01 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:user-agent:date:subject:from:to:cc:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type;
	bh=R+RSsE7+QDy6+Ff5EwJ5ssRlZTQip7LQxoRGYftS2Ok=;
	b=hk4vCWu8hbErv3iSvzVx9oBVIWYUj4+l53A1w0ElwIHvbMBPS/iJQpDKxKT7BI2FyR
	kZPVmY+Nk0U9vajhkqG1bn9KBij1PEmwsZo6n1msMOvmzk4T6ywEbQfKTkn1McSyTtxl
	yy8Gs6VkW1PB3LX/dLUgLbvSNKwgKvenxsJ3sZnAprjy63VN6pppABYldt8eOWBAqKhN
	0p/DwFbtmDYg7PhHLYF/Jw38sKornq11MgMkeal9V5Imo8B5qBTAeE5k6VEaElerHJaM
	wOtSwedwkJYr+wFEUbTSbpYLDhUfa3NEY0aSs+nJCaVXQGJnpHQQeS+ayE6HoGrVvNf9
	ZeBg==
Received: by 10.217.0.145 with SMTP id l17mr3886461wes.133.1345275301127;
	Sat, 18 Aug 2012 00:35:01 -0700 (PDT)
Received: from [192.168.1.3] (host86-157-166-190.range86-157.btcentralplus.com.
	[86.157.166.190])
	by mx.google.com with ESMTPS id h9sm14230742wiz.1.2012.08.18.00.34.58
	(version=SSLv3 cipher=OTHER); Sat, 18 Aug 2012 00:34:59 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.33.0.120411
Date: Sat, 18 Aug 2012 08:34:49 +0100
From: Keir Fraser <keir@xen.org>
To: Junjie Wei <junjie.wei@oracle.com>,
	<xen-devel@lists.xen.org>
Message-ID: <CC55042A.4992C%keir@xen.org>
Thread-Topic: [Xen-devel] VM save/restore
Thread-Index: Ac19DBzOEz0X+5UCukWS7vOcOCUzrQAB9Wxz
In-Reply-To: <CC54F704.3C5C9%keir.xen@gmail.com>
Mime-version: 1.0
Content-type: multipart/mixed;
	boundary="B_3428123698_36200098"
Cc: Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] VM save/restore
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--B_3428123698_36200098
Content-type: text/plain;
	charset="US-ASCII"
Content-transfer-encoding: 7bit

On 18/08/2012 07:38, "Keir Fraser" <keir.xen@gmail.com> wrote:

>
>> I think if a VM can be successfully started, then save/restore should
>> also work. So I made a patch and did some testing.
> 
> The check for 64 VCPUs is to cover the fact that we only save/restore a 64-bit
> vcpumap. That would need fixing too, surely, or CPUs > 64 would be offline
> after restore, I would imagine.

How about the attached patch? It might actually work properly, unlike yours.
;)

>> The above problem is gone but there are new ones.
>> 
>> Let me summarize the result here.
>> 
>> With the patch, save/restore works fine as long as the VM can be started,
>> except in two cases.
>> 
>> 1) 32-bit guests can be configured with VCPUs > 32 and started,
>>     but the guest can only make use of 32 of them.

HVM guest? I don't know why this is. You will have to investigate some more
what has happened to the rest of your VCPUs! I think it should definitely
work. Cc Jan in case he has any thoughts.

>> 2) 32-bit PVM guests can be configured with VCPUs > 64 and started,
>>     but `xm save' does not work.

That's because your changes to the save/restore code were wrong. Try my
patch instead.
 
 -- Keir

>> See the testing below for details. The limit of 128 VCPUs for HVM
>> guests is already taken into account.
>> 
>> Could you please review the patch and help with these two cases?
>> 
>> 
>> Thanks,
>> Junjie
>> 
>> -= Test environment =-
>> 
>> [root@ovs087 HVM_X86_64]# cat /etc/ovs-release
>> Oracle VM server release 3.2.1
>> 
>> [root@ovs087 HVM_X86_64]# uname -a
>> Linux ovs087 2.6.39-200.30.1.el5uek #1 SMP Thu Jul 12 21:47:09 EDT 2012
>> x86_64 x86_64 x86_64 GNU/Linux
>> 
>> [root@ovs087 HVM_X86_64]# rpm -qa | grep xen
>> xenpvboot-0.1-8.el5
>> xen-devel-4.1.2-39
>> xen-tools-4.1.2-39
>> xen-4.1.2-39
>> 
>> -= PVM x86_64, 128 VCPUs =-
>> 
>> [root@ovs087 PVM_X86_64]# xm list
>> Name                                        ID   Mem VCPUs      State
>> Time(s)
>> Domain-0                                     0   511 8     r-----   6916.9
>> OVM_OL5U7_X86_64_PVM_10GB                    9  2048 128     r-----     48.1
>> 
>> [root@ovs087 PVM_X86_64]# xm save 9 vm.save
>> 
>> [root@ovs087 PVM_X86_64]# xm restore vm.save
>> 
>> [root@ovs087 PVM_X86_64]# xm list
>> Name                                        ID   Mem VCPUs      State
>> Time(s)
>> Domain-0                                     0   511 8     r-----   7076.7
>> OVM_OL5U7_X86_64_PVM_10GB                   10  2048 128     r-----     51.6
>> 
>> -= PVM x86_64, 256 VCPUs =-
>> 
>> [root@ovs087 PVM_X86_64]# xm list
>> Name                                        ID   Mem VCPUs State   Time(s)
>> Domain-0                                     0   511     8 r-----  10398.1
>> OVM_OL5U7_X86_64_PVM_10GB                   35  2048   256 r-----     30.4
>> 
>> [root@ovs087 PVM_X86_64]# xm save 35 vm.save
>> 
>> [root@ovs087 PVM_X86_64]# xm restore vm.save
>> 
>> [root@ovs087 PVM_X86_64]# xm list
>> Name                                        ID   Mem VCPUs State   Time(s)
>> Domain-0                                     0   511     8 r-----  10572.1
>> OVM_OL5U7_X86_64_PVM_10GB                   36  2048   256 r-----   1466.9
>> 
>> -= HVM x86_64, 128 VCPUs =-
>> 
>> [root@ovs087 HVM_X86_64]# xm list
>> Name                                        ID   Mem VCPUs      State
>> Time(s)
>> Domain-0                                     0   511 8     r-----   8017.4
>> OVM_OL5U7_X86_64_PVHVM_10GB                 19  2048 128     r-----    343.7
>> 
>> [root@ovs087 HVM_X86_64]# xm save 19 vm.save
>> 
>> [root@ovs087 HVM_X86_64]# xm restore vm.save
>> 
>> [root@ovs087 HVM_X86_64]# xm list
>> Name                                        ID   Mem VCPUs      State
>> Time(s)
>> Domain-0                                     0   511 8     r-----   8241.1
>> OVM_OL5U7_X86_64_PVHVM_10GB                 20  2048 128     r-----    121.7
>> 
>> -= PVM x86, 64 VCPUs =-
>> 
>> [root@ovs087 PVM_X86]# xm list
>> Name                                        ID   Mem VCPUs      State
>> Time(s)
>> Domain-0                                     0   511 8     r-----  36798.0
>> OVM_OL5U7_X86_PVM_10GB                      54  2048 32     r-----     92.8
>> 
>> [root@ovs087 PVM_X86]# xm vcpu-list 54 | grep OVM_OL5U7_X86_PVM_10GB | wc -l
>> 64
>> 
>> [root@ovs087 PVM_X86]# xm save 54 vm.save
>> 
>> [root@ovs087 PVM_X86]# xm restore vm.save
>> 
>> [root@ovs087 PVM_X86]# xm list
>> Name                                        ID   Mem VCPUs      State
>> Time(s)
>> Domain-0                                     0   511 8     r-----  36959.3
>> OVM_OL5U7_X86_PVM_10GB                      55  2048 32     r-----     51.0
>> 
>> [root@ovs087 PVM_X86]# xm vcpu-list 55 | grep OVM_OL5U7_X86_PVM_10GB | wc -l
>> 64
>> 
>> 32-bit PVM, 65 VCPUs:
>> 
>> [root@ovs087 PVM_X86]# xm list
>> Name                                        ID   Mem VCPUs      State
>> Time(s)
>> Domain-0                                     0   511 8     r-----  36975.9
>> OVM_OL5U7_X86_PVM_10GB                      56  2048 32     r-----      8.6
>> 
>> [root@ovs087 PVM_X86]# xm vcpu-list 56 | grep OVM_OL5U7_X86_PVM_10GB | wc -l
>> 65
>> 
>> [root@ovs087 PVM_X86]# xm list
>> Name                                        ID   Mem VCPUs      State
>> Time(s)
>> Domain-0                                     0   511 8     r-----  36977.7
>> OVM_OL5U7_X86_PVM_10GB                      56  2048 32     r-----     24.8
>> 
>> [root@ovs087 PVM_X86]# xm vcpu-list 56 | grep OVM_OL5U7_X86_PVM_10GB | wc -l
>> 65
>> 
>> [root@ovs087 PVM_X86]# xm save 56 vm.save
>> Error: /usr/lib64/xen/bin/xc_save 26 56 0 0 0 failed
>> 
>> /var/log/xen/xend.log: INFO (XendCheckpoint:416)
>> xc: error: No context for VCPU64 (61 = No data available): Internal error
>> 
>> -= HVM x86, 64 VCPUs =-
>> 
>> [root@ovs087 HVM_X86]# xm list
>> Name                                        ID   Mem VCPUs      State
>> Time(s)
>> Domain-0                                     0   511 8     r-----  36506.1
>> OVM_OL5U7_X86_PVHVM_10GB                    52  2048 32     r-----     68.6
>> 
>> [root@ovs087 HVM_X86]# xm vcpu-list 52 | grep OVM_OL5U7_X86_PVHVM_10GB |
>> wc -l
>> 64
>> 
>> [root@ovs087 HVM_X86]# xm save 52 vm.save
>> 
>> [root@ovs087 HVM_X86]# xm restore vm.save
>> 
>> [root@ovs087 HVM_X86]# xm list
>> Name                                        ID   Mem VCPUs      State
>> Time(s)
>> Domain-0                                     0   511 8     r-----  36730.5
>> OVM_OL5U7_X86_PVHVM_10GB                    53  2048 32     r-----     19.8
>> 
>> [root@ovs087 HVM_X86]# xm vcpu-list 53 | grep OVM_OL5U7_X86_PVHVM_10GB |
>> wc -l
>> 64
>> 
>> -= HVM x86, 128 VCPUs =-
>> 
>> [root@ovs087 HVM_X86]# xm list
>> Name                                        ID   Mem VCPUs      State
>> Time(s)
>> Domain-0                                     0   511 8     r-----  36261.1
>> OVM_OL5U7_X86_PVHVM_10GB                    50  2048 32     r-----     34.9
>> 
>> [root@ovs087 HVM_X86]# xm vcpu-list 50 | grep OVM_OL5U7_X86_PVHVM_10GB |
>> wc -l
>> 128
>> 
>> [root@ovs087 HVM_X86]# xm save 50 vm.save
>> 
>> [root@ovs087 HVM_X86]# xm restore vm.save
>> 
>> [root@ovs087 HVM_X86]# xm list
>> Name                                        ID   Mem VCPUs      State
>> Time(s)
>> Domain-0                                     0   511 8     r-----  36480.5
>> OVM_OL5U7_X86_PVHVM_10GB                    51  2048 32     r-----     20.3
>> 
>> [root@ovs087 HVM_X86]# xm vcpu-list 51 | grep OVM_OL5U7_X86_PVHVM_10GB |
>> wc -l
>> 128
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel
> 
> 


--B_3428123698_36200098
Content-type: application/octet-stream; name="00-sr-extend-vcpus"
Content-disposition: attachment;
	filename="00-sr-extend-vcpus"
Content-transfer-encoding: base64


ZGlmZiAtciA2NDAxN2Q0ZGY5ZGEgdG9vbHMvbGlieGMveGNfZG9tYWluX3Jlc3RvcmUuYwot
LS0gYS90b29scy9saWJ4Yy94Y19kb21haW5fcmVzdG9yZS5jCUZyaSBBdWcgMTcgMTE6MzY6
MDggMjAxMiArMDIwMAorKysgYi90b29scy9saWJ4Yy94Y19kb21haW5fcmVzdG9yZS5jCVNh
dCBBdWcgMTggMDg6MjU6NTIgMjAxMiArMDEwMApAQCAtNDYyLDcgKzQ2Miw3IEBAIHN0YXRp
YyBpbnQgZHVtcF9xZW11KHhjX2ludGVyZmFjZSAqeGNoLCAKIAogc3RhdGljIGludCBidWZm
ZXJfdGFpbF9odm0oeGNfaW50ZXJmYWNlICp4Y2gsIHN0cnVjdCByZXN0b3JlX2N0eCAqY3R4
LAogICAgICAgICAgICAgICAgICAgICAgICAgICAgc3RydWN0IHRhaWxidWZfaHZtICpidWYs
IGludCBmZCwKLSAgICAgICAgICAgICAgICAgICAgICAgICAgIHVuc2lnbmVkIGludCBtYXhf
dmNwdV9pZCwgdWludDY0X3QgdmNwdW1hcCwKKyAgICAgICAgICAgICAgICAgICAgICAgICAg
IHVuc2lnbmVkIGludCBtYXhfdmNwdV9pZCwgdWludDY0X3QgKnZjcHVtYXAsCiAgICAgICAg
ICAgICAgICAgICAgICAgICAgICBpbnQgZXh0X3ZjcHVjb250ZXh0LAogICAgICAgICAgICAg
ICAgICAgICAgICAgICAgaW50IHZjcHVleHRzdGF0ZSwgdWludDMyX3QgdmNwdWV4dHN0YXRl
X3NpemUpCiB7CkBAIC01MzAsNyArNTMwLDcgQEAgc3RhdGljIGludCBidWZmZXJfdGFpbF9o
dm0oeGNfaW50ZXJmYWNlIAogCiBzdGF0aWMgaW50IGJ1ZmZlcl90YWlsX3B2KHhjX2ludGVy
ZmFjZSAqeGNoLCBzdHJ1Y3QgcmVzdG9yZV9jdHggKmN0eCwKICAgICAgICAgICAgICAgICAg
ICAgICAgICAgc3RydWN0IHRhaWxidWZfcHYgKmJ1ZiwgaW50IGZkLAotICAgICAgICAgICAg
ICAgICAgICAgICAgICB1bnNpZ25lZCBpbnQgbWF4X3ZjcHVfaWQsIHVpbnQ2NF90IHZjcHVt
YXAsCisgICAgICAgICAgICAgICAgICAgICAgICAgIHVuc2lnbmVkIGludCBtYXhfdmNwdV9p
ZCwgdWludDY0X3QgKnZjcHVtYXAsCiAgICAgICAgICAgICAgICAgICAgICAgICAgIGludCBl
eHRfdmNwdWNvbnRleHQsCiAgICAgICAgICAgICAgICAgICAgICAgICAgIGludCB2Y3B1ZXh0
c3RhdGUsCiAgICAgICAgICAgICAgICAgICAgICAgICAgIHVpbnQzMl90IHZjcHVleHRzdGF0
ZV9zaXplKQpAQCAtNTYzLDggKzU2Myw4IEBAIHN0YXRpYyBpbnQgYnVmZmVyX3RhaWxfcHYo
eGNfaW50ZXJmYWNlICoKICAgICAvKiBWQ1BVIGNvbnRleHRzICovCiAgICAgYnVmLT52Y3B1
Y291bnQgPSAwOwogICAgIGZvciAoaSA9IDA7IGkgPD0gbWF4X3ZjcHVfaWQ7IGkrKykgewot
ICAgICAgICAvLyBEUFJJTlRGKCJ2Y3B1bWFwOiAlbGx4LCBjcHU6ICVkLCBiaXQ6ICVsbHVc
biIsIHZjcHVtYXAsIGksICh2Y3B1bWFwICUgKDFVTEwgPDwgaSkpKTsKLSAgICAgICAgaWYg
KCAoISh2Y3B1bWFwICYgKDFVTEwgPDwgaSkpKSApCisgICAgICAgIC8vIERQUklOVEYoInZj
cHVtYXA6ICVsbHgsIGNwdTogJWQsIGJpdDogJWxsdVxuIiwgdmNwdW1hcFtpLzY0XSwgaSwg
KHZjcHVtYXBbaS82NF0gJiAoMVVMTCA8PCAoaSU2NCkpKSk7CisgICAgICAgIGlmICggKCEo
dmNwdW1hcFtpLzY0XSAmICgxVUxMIDw8IChpJTY0KSkpKSApCiAgICAgICAgICAgICBjb250
aW51ZTsKICAgICAgICAgYnVmLT52Y3B1Y291bnQrKzsKICAgICB9CkBAIC02MTQsNyArNjE0
LDcgQEAgc3RhdGljIGludCBidWZmZXJfdGFpbF9wdih4Y19pbnRlcmZhY2UgKgogCiBzdGF0
aWMgaW50IGJ1ZmZlcl90YWlsKHhjX2ludGVyZmFjZSAqeGNoLCBzdHJ1Y3QgcmVzdG9yZV9j
dHggKmN0eCwKICAgICAgICAgICAgICAgICAgICAgICAgdGFpbGJ1Zl90ICpidWYsIGludCBm
ZCwgdW5zaWduZWQgaW50IG1heF92Y3B1X2lkLAotICAgICAgICAgICAgICAgICAgICAgICB1
aW50NjRfdCB2Y3B1bWFwLCBpbnQgZXh0X3ZjcHVjb250ZXh0LAorICAgICAgICAgICAgICAg
ICAgICAgICB1aW50NjRfdCAqdmNwdW1hcCwgaW50IGV4dF92Y3B1Y29udGV4dCwKICAgICAg
ICAgICAgICAgICAgICAgICAgaW50IHZjcHVleHRzdGF0ZSwgdWludDMyX3QgdmNwdWV4dHN0
YXRlX3NpemUpCiB7CiAgICAgaWYgKCBidWYtPmlzaHZtICkKQEAgLTY4MCw3ICs2ODAsNyBA
QCB0eXBlZGVmIHN0cnVjdCB7CiAKICAgICBpbnQgbmV3X2N0eHRfZm9ybWF0OwogICAgIGlu
dCBtYXhfdmNwdV9pZDsKLSAgICB1aW50NjRfdCB2Y3B1bWFwOworICAgIHVpbnQ2NF90IHZj
cHVtYXBbWENfU1JfTUFYX1ZDUFVTLzY0XTsKICAgICB1aW50NjRfdCBpZGVudHB0OwogICAg
IHVpbnQ2NF90IHBhZ2luZ19yaW5nX3BmbjsKICAgICB1aW50NjRfdCBhY2Nlc3NfcmluZ19w
Zm47CkBAIC03NDUsMTIgKzc0NSwxMiBAQCBzdGF0aWMgaW50IHBhZ2VidWZfZ2V0X29uZSh4
Y19pbnRlcmZhY2UgCiAgICAgY2FzZSBYQ19TQVZFX0lEX1ZDUFVfSU5GTzoKICAgICAgICAg
YnVmLT5uZXdfY3R4dF9mb3JtYXQgPSAxOwogICAgICAgICBpZiAoIFJERVhBQ1QoZmQsICZi
dWYtPm1heF92Y3B1X2lkLCBzaXplb2YoYnVmLT5tYXhfdmNwdV9pZCkpIHx8Ci0gICAgICAg
ICAgICAgYnVmLT5tYXhfdmNwdV9pZCA+PSA2NCB8fCBSREVYQUNUKGZkLCAmYnVmLT52Y3B1
bWFwLAotICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBz
aXplb2YodWludDY0X3QpKSApIHsKKyAgICAgICAgICAgICBidWYtPm1heF92Y3B1X2lkID49
IFhDX1NSX01BWF9WQ1BVUyB8fAorICAgICAgICAgICAgIFJERVhBQ1QoZmQsIGJ1Zi0+dmNw
dW1hcCwgdmNwdW1hcF9zeihidWYtPm1heF92Y3B1X2lkKSkgKSB7CiAgICAgICAgICAgICBQ
RVJST1IoIkVycm9yIHdoZW4gcmVhZGluZyBtYXhfdmNwdV9pZCIpOwogICAgICAgICAgICAg
cmV0dXJuIC0xOwogICAgICAgICB9Ci0gICAgICAgIC8vIERQUklOVEYoIk1heCBWQ1BVIElE
OiAlZCwgdmNwdW1hcDogJWxseFxuIiwgYnVmLT5tYXhfdmNwdV9pZCwgYnVmLT52Y3B1bWFw
KTsKKyAgICAgICAgLy8gRFBSSU5URigiTWF4IFZDUFUgSUQ6ICVkLCB2Y3B1bWFwOiAlbGx4
XG4iLCBidWYtPm1heF92Y3B1X2lkLCBidWYtPnZjcHVtYXBbMF0pOwogICAgICAgICByZXR1
cm4gcGFnZWJ1Zl9nZXRfb25lKHhjaCwgY3R4LCBidWYsIGZkLCBkb20pOwogCiAgICAgY2Fz
ZSBYQ19TQVZFX0lEX0hWTV9JREVOVF9QVDoKQEAgLTEzNjYsNyArMTM2Niw3IEBAIGludCB4
Y19kb21haW5fcmVzdG9yZSh4Y19pbnRlcmZhY2UgKnhjaCwKICAgICBzdHJ1Y3QgbW11ZXh0
X29wIHBpbltNQVhfUElOX0JBVENIXTsKICAgICB1bnNpZ25lZCBpbnQgbnJfcGluczsKIAot
ICAgIHVpbnQ2NF90IHZjcHVtYXAgPSAxVUxMOworICAgIHVpbnQ2NF90IHZjcHVtYXBbWENf
U1JfTUFYX1ZDUFVTLzY0XSA9IHsgMVVMTCB9OwogICAgIHVuc2lnbmVkIGludCBtYXhfdmNw
dV9pZCA9IDA7CiAgICAgaW50IG5ld19jdHh0X2Zvcm1hdCA9IDA7CiAKQEAgLTE1MTcsOCAr
MTUxNyw4IEBAIGludCB4Y19kb21haW5fcmVzdG9yZSh4Y19pbnRlcmZhY2UgKnhjaCwKICAg
ICAgICAgaWYgKCBqID09IDAgKSB7CiAgICAgICAgICAgICAvKiBjYXRjaCB2Y3B1IHVwZGF0
ZXMgKi8KICAgICAgICAgICAgIGlmIChwYWdlYnVmLm5ld19jdHh0X2Zvcm1hdCkgewotICAg
ICAgICAgICAgICAgIHZjcHVtYXAgPSBwYWdlYnVmLnZjcHVtYXA7CiAgICAgICAgICAgICAg
ICAgbWF4X3ZjcHVfaWQgPSBwYWdlYnVmLm1heF92Y3B1X2lkOworICAgICAgICAgICAgICAg
IG1lbWNweSh2Y3B1bWFwLCBwYWdlYnVmLnZjcHVtYXAsIHZjcHVtYXBfc3oobWF4X3ZjcHVf
aWQpKTsKICAgICAgICAgICAgIH0KICAgICAgICAgICAgIC8qIHNob3VsZCB0aGlzIGJlIGRl
ZmVycmVkPyBkb2VzIGl0IGNoYW5nZT8gKi8KICAgICAgICAgICAgIGlmICggcGFnZWJ1Zi5p
ZGVudHB0ICkKQEAgLTE4ODAsNyArMTg4MCw3IEBAIGludCB4Y19kb21haW5fcmVzdG9yZSh4
Y19pbnRlcmZhY2UgKnhjaCwKICAgICB2Y3B1cCA9IHRhaWxidWYudS5wdi52Y3B1YnVmOwog
ICAgIGZvciAoIGkgPSAwOyBpIDw9IG1heF92Y3B1X2lkOyBpKysgKQogICAgIHsKLSAgICAg
ICAgaWYgKCAhKHZjcHVtYXAgJiAoMVVMTCA8PCBpKSkgKQorICAgICAgICBpZiAoICEodmNw
dW1hcFtpLzY0XSAmICgxVUxMIDw8IChpJTY0KSkpICkKICAgICAgICAgICAgIGNvbnRpbnVl
OwogCiAgICAgICAgIG1lbWNweShjdHh0LCB2Y3B1cCwgKChkaW5mby0+Z3Vlc3Rfd2lkdGgg
PT0gOCkgPyBzaXplb2YoY3R4dC0+eDY0KQpkaWZmIC1yIDY0MDE3ZDRkZjlkYSB0b29scy9s
aWJ4Yy94Y19kb21haW5fc2F2ZS5jCi0tLSBhL3Rvb2xzL2xpYnhjL3hjX2RvbWFpbl9zYXZl
LmMJRnJpIEF1ZyAxNyAxMTozNjowOCAyMDEyICswMjAwCisrKyBiL3Rvb2xzL2xpYnhjL3hj
X2RvbWFpbl9zYXZlLmMJU2F0IEF1ZyAxOCAwODoyNTo1MiAyMDEyICswMTAwCkBAIC04NTUs
NyArODU1LDcgQEAgaW50IHhjX2RvbWFpbl9zYXZlKHhjX2ludGVyZmFjZSAqeGNoLCBpbgog
ICAgIHVuc2lnbmVkIGxvbmcgbmVlZGVkX3RvX2ZpeCA9IDA7CiAgICAgdW5zaWduZWQgbG9u
ZyB0b3RhbF9zZW50ICAgID0gMDsKIAotICAgIHVpbnQ2NF90IHZjcHVtYXAgPSAxVUxMOwor
ICAgIHVpbnQ2NF90IHZjcHVtYXBbWENfU1JfTUFYX1ZDUFVTLzY0XSA9IHsgMVVMTCB9Owog
CiAgICAgLyogSFZNOiBhIGJ1ZmZlciBmb3IgaG9sZGluZyBIVk0gY29udGV4dCAqLwogICAg
IHVpbnQzMl90IGh2bV9idWZfc2l6ZSA9IDA7CkBAIC0xNTgxLDEzICsxNTgxLDEzIEBAIGlu
dCB4Y19kb21haW5fc2F2ZSh4Y19pbnRlcmZhY2UgKnhjaCwgaW4KICAgICB9CiAKICAgICB7
Ci0gICAgICAgIHN0cnVjdCB7CisgICAgICAgIHN0cnVjdCBjaHVuayB7CiAgICAgICAgICAg
ICBpbnQgaWQ7CiAgICAgICAgICAgICBpbnQgbWF4X3ZjcHVfaWQ7Ci0gICAgICAgICAgICB1
aW50NjRfdCB2Y3B1bWFwOworICAgICAgICAgICAgdWludDY0X3QgdmNwdW1hcFtYQ19TUl9N
QVhfVkNQVVMvNjRdOwogICAgICAgICB9IGNodW5rID0geyBYQ19TQVZFX0lEX1ZDUFVfSU5G
TywgaW5mby5tYXhfdmNwdV9pZCB9OwogCi0gICAgICAgIGlmICggaW5mby5tYXhfdmNwdV9p
ZCA+PSA2NCApCisgICAgICAgIGlmICggaW5mby5tYXhfdmNwdV9pZCA+PSBYQ19TUl9NQVhf
VkNQVVMgKQogICAgICAgICB7CiAgICAgICAgICAgICBFUlJPUigiVG9vIG1hbnkgVkNQVVMg
aW4gZ3Vlc3QhIik7CiAgICAgICAgICAgICBnb3RvIG91dDsKQEAgLTE1OTgsMTEgKzE1OTgs
MTIgQEAgaW50IHhjX2RvbWFpbl9zYXZlKHhjX2ludGVyZmFjZSAqeGNoLCBpbgogICAgICAg
ICAgICAgeGNfdmNwdWluZm9fdCB2aW5mbzsKICAgICAgICAgICAgIGlmICggKHhjX3ZjcHVf
Z2V0aW5mbyh4Y2gsIGRvbSwgaSwgJnZpbmZvKSA9PSAwKSAmJgogICAgICAgICAgICAgICAg
ICB2aW5mby5vbmxpbmUgKQotICAgICAgICAgICAgICAgIHZjcHVtYXAgfD0gMVVMTCA8PCBp
OworICAgICAgICAgICAgICAgIHZjcHVtYXBbaS82NF0gfD0gMVVMTCA8PCAoaSU2NCk7CiAg
ICAgICAgIH0KIAotICAgICAgICBjaHVuay52Y3B1bWFwID0gdmNwdW1hcDsKLSAgICAgICAg
aWYgKCB3cmV4YWN0KGlvX2ZkLCAmY2h1bmssIHNpemVvZihjaHVuaykpICkKKyAgICAgICAg
bWVtY3B5KGNodW5rLnZjcHVtYXAsIHZjcHVtYXAsIHZjcHVtYXBfc3ooaW5mby5tYXhfdmNw
dV9pZCkpOworICAgICAgICBpZiAoIHdyZXhhY3QoaW9fZmQsICZjaHVuaywgb2Zmc2V0b2Yo
c3RydWN0IGNodW5rLCB2Y3B1bWFwKQorICAgICAgICAgICAgICAgICAgICAgKyB2Y3B1bWFw
X3N6KGluZm8ubWF4X3ZjcHVfaWQpKSApCiAgICAgICAgIHsKICAgICAgICAgICAgIFBFUlJP
UigiRXJyb3Igd2hlbiB3cml0aW5nIHRvIHN0YXRlIGZpbGUiKTsKICAgICAgICAgICAgIGdv
dG8gb3V0OwpAQCAtMTg3OCw3ICsxODc5LDcgQEAgaW50IHhjX2RvbWFpbl9zYXZlKHhjX2lu
dGVyZmFjZSAqeGNoLCBpbgogCiAgICAgZm9yICggaSA9IDA7IGkgPD0gaW5mby5tYXhfdmNw
dV9pZDsgaSsrICkKICAgICB7Ci0gICAgICAgIGlmICggISh2Y3B1bWFwICYgKDFVTEwgPDwg
aSkpICkKKyAgICAgICAgaWYgKCAhKHZjcHVtYXBbaS82NF0gJiAoMVVMTCA8PCAoaSU2NCkp
KSApCiAgICAgICAgICAgICBjb250aW51ZTsKIAogICAgICAgICBpZiAoIChpICE9IDApICYm
IHhjX3ZjcHVfZ2V0Y29udGV4dCh4Y2gsIGRvbSwgaSwgJmN0eHQpICkKZGlmZiAtciA2NDAx
N2Q0ZGY5ZGEgdG9vbHMvbGlieGMveGdfc2F2ZV9yZXN0b3JlLmgKLS0tIGEvdG9vbHMvbGli
eGMveGdfc2F2ZV9yZXN0b3JlLmgJRnJpIEF1ZyAxNyAxMTozNjowOCAyMDEyICswMjAwCisr
KyBiL3Rvb2xzL2xpYnhjL3hnX3NhdmVfcmVzdG9yZS5oCVNhdCBBdWcgMTggMDg6MjU6NTIg
MjAxMiArMDEwMApAQCAtMjY5LDYgKzI2OSw5IEBACiAvKiBXaGVuIHBpbm5pbmcgcGFnZSB0
YWJsZXMgYXQgdGhlIGVuZCBvZiByZXN0b3JlLCB3ZSBhbHNvIHVzZSBiYXRjaGluZy4gKi8K
ICNkZWZpbmUgTUFYX1BJTl9CQVRDSCAgMTAyNAogCisvKiBNYXhpbXVtICNWQ1BVcyBjdXJy
ZW50bHkgc3VwcG9ydGVkIGZvciBzYXZlL3Jlc3RvcmUuICovCisjZGVmaW5lIFhDX1NSX01B
WF9WQ1BVUyA0MDk2CisjZGVmaW5lIHZjcHVtYXBfc3oobWF4X2lkKSAoKChtYXhfaWQpLzY0
KzEpKnNpemVvZih1aW50NjRfdCkpCiAKIAogLyoK
--B_3428123698_36200098
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--B_3428123698_36200098--




From xen-devel-bounces@lists.xen.org Sat Aug 18 07:35:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 18 Aug 2012 07:35:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2dYn-0005ZI-FJ; Sat, 18 Aug 2012 07:35:05 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1T2dYl-0005ZD-Hv
	for xen-devel@lists.xen.org; Sat, 18 Aug 2012 07:35:03 +0000
Received: from [85.158.143.35:59877] by server-3.bemta-4.messagelabs.com id
	A6/9A-09529-6A54F205; Sat, 18 Aug 2012 07:35:02 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1345275301!16338686!1
X-Originating-IP: [209.85.212.179]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7340 invoked from network); 18 Aug 2012 07:35:01 -0000
Received: from mail-wi0-f179.google.com (HELO mail-wi0-f179.google.com)
	(209.85.212.179)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Aug 2012 07:35:01 -0000
Received: by wibhq4 with SMTP id hq4so1902933wib.14
	for <xen-devel@lists.xen.org>; Sat, 18 Aug 2012 00:35:01 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:user-agent:date:subject:from:to:cc:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type;
	bh=R+RSsE7+QDy6+Ff5EwJ5ssRlZTQip7LQxoRGYftS2Ok=;
	b=hk4vCWu8hbErv3iSvzVx9oBVIWYUj4+l53A1w0ElwIHvbMBPS/iJQpDKxKT7BI2FyR
	kZPVmY+Nk0U9vajhkqG1bn9KBij1PEmwsZo6n1msMOvmzk4T6ywEbQfKTkn1McSyTtxl
	yy8Gs6VkW1PB3LX/dLUgLbvSNKwgKvenxsJ3sZnAprjy63VN6pppABYldt8eOWBAqKhN
	0p/DwFbtmDYg7PhHLYF/Jw38sKornq11MgMkeal9V5Imo8B5qBTAeE5k6VEaElerHJaM
	wOtSwedwkJYr+wFEUbTSbpYLDhUfa3NEY0aSs+nJCaVXQGJnpHQQeS+ayE6HoGrVvNf9
	ZeBg==
Received: by 10.217.0.145 with SMTP id l17mr3886461wes.133.1345275301127;
	Sat, 18 Aug 2012 00:35:01 -0700 (PDT)
Received: from [192.168.1.3] (host86-157-166-190.range86-157.btcentralplus.com.
	[86.157.166.190])
	by mx.google.com with ESMTPS id h9sm14230742wiz.1.2012.08.18.00.34.58
	(version=SSLv3 cipher=OTHER); Sat, 18 Aug 2012 00:34:59 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.33.0.120411
Date: Sat, 18 Aug 2012 08:34:49 +0100
From: Keir Fraser <keir@xen.org>
To: Junjie Wei <junjie.wei@oracle.com>,
	<xen-devel@lists.xen.org>
Message-ID: <CC55042A.4992C%keir@xen.org>
Thread-Topic: [Xen-devel] VM save/restore
Thread-Index: Ac19DBzOEz0X+5UCukWS7vOcOCUzrQAB9Wxz
In-Reply-To: <CC54F704.3C5C9%keir.xen@gmail.com>
Mime-version: 1.0
Content-type: multipart/mixed;
	boundary="B_3428123698_36200098"
Cc: Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] VM save/restore
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--B_3428123698_36200098
Content-type: text/plain;
	charset="US-ASCII"
Content-transfer-encoding: 7bit

On 18/08/2012 07:38, "Keir Fraser" <keir.xen@gmail.com> wrote:

>
>> I think if a VM can be successfully started, then save/restore should
>> also work. So I made a patch and did some testing.
> 
> The check for 64 VCPUs is to cover the fact we only save/restore a 64-bit
> vcpumap. That would need fixing too, surely, or CPUs > 64 would be offline
> after restore I would imagine.

How about the attached patch? It might actually work properly, unlike yours.
;)

>> The above problem is gone but there are new ones.
>> 
>> Let me summarize the result here.
>> 
>> With the patch, save/restore works fine as long as it can be started,
>> except two cases.
>> 
>> 1) 32-bit guests can be configured with VCPUs > 32 and started,
>>     but the guest can only make use of 32 of them.

HVM guest? I don't know why this is. You will have to investigate some more
what has happened to the rest of your VCPUs! I think it should definitely
work. Cc Jan in case he has any thoughts.

>> 2) 32-bit PVM guests can be configured with VCPUs > 64 and started,
>>     but `xm save' does not work.

That's because your changes to the save/restore code were wrong. Try my
patch instead.
 
 -- Keir

>> See the testing below for details. The limit of 128 VCPUs for HVM
>> guests is already considered.
>> 
>> Could you please review the patch and help with these two cases?
>> 
>> 
>> Thanks,
>> Junjie
>> 
>> -= Test environment =-
>> 
>> [root@ovs087 HVM_X86_64]# cat /etc/ovs-release
>> Oracle VM server release 3.2.1
>> 
>> [root@ovs087 HVM_X86_64]# uname -a
>> Linux ovs087 2.6.39-200.30.1.el5uek #1 SMP Thu Jul 12 21:47:09 EDT 2012
>> x86_64 x86_64 x86_64 GNU/Linux
>> 
>> [root@ovs087 HVM_X86_64]# rpm -qa | grep xen
>> xenpvboot-0.1-8.el5
>> xen-devel-4.1.2-39
>> xen-tools-4.1.2-39
>> xen-4.1.2-39
>> 
>> -= PVM x86_64, 128 VCPUs =-
>> 
>> [root@ovs087 PVM_X86_64]# xm list
>> Name                                        ID   Mem VCPUs State   Time(s)
>> Domain-0                                     0   511     8 r-----   6916.9
>> OVM_OL5U7_X86_64_PVM_10GB                    9  2048   128 r-----     48.1
>> 
>> [root@ovs087 PVM_X86_64]# xm save 9 vm.save
>> 
>> [root@ovs087 PVM_X86_64]# xm restore vm.save
>> 
>> [root@ovs087 PVM_X86_64]# xm list
>> Name                                        ID   Mem VCPUs State   Time(s)
>> Domain-0                                     0   511     8 r-----   7076.7
>> OVM_OL5U7_X86_64_PVM_10GB                   10  2048   128 r-----     51.6
>> 
>> -= PVM x86_64, 256 VCPUs =-
>> 
>> [root@ovs087 PVM_X86_64]# xm list
>> Name                                        ID   Mem VCPUs State   Time(s)
>> Domain-0                                     0   511     8 r-----  10398.1
>> OVM_OL5U7_X86_64_PVM_10GB                   35  2048   256 r-----     30.4
>> 
>> [root@ovs087 PVM_X86_64]# xm save 35 vm.save
>> 
>> [root@ovs087 PVM_X86_64]# xm restore vm.save
>> 
>> [root@ovs087 PVM_X86_64]# xm list
>> Name                                        ID   Mem VCPUs State   Time(s)
>> Domain-0                                     0   511     8 r-----  10572.1
>> OVM_OL5U7_X86_64_PVM_10GB                   36  2048   256 r-----   1466.9
>> 
>> -= HVM x86_64, 128 VCPUs =-
>> 
>> [root@ovs087 HVM_X86_64]# xm list
>> Name                                        ID   Mem VCPUs State   Time(s)
>> Domain-0                                     0   511     8 r-----   8017.4
>> OVM_OL5U7_X86_64_PVHVM_10GB                 19  2048   128 r-----    343.7
>> 
>> [root@ovs087 HVM_X86_64]# xm save 19 vm.save
>> 
>> [root@ovs087 HVM_X86_64]# xm restore vm.save
>> 
>> [root@ovs087 HVM_X86_64]# xm list
>> Name                                        ID   Mem VCPUs State   Time(s)
>> Domain-0                                     0   511     8 r-----   8241.1
>> OVM_OL5U7_X86_64_PVHVM_10GB                 20  2048   128 r-----    121.7
>> 
>> -= PVM x86, 64 VCPUs =-
>> 
>> [root@ovs087 PVM_X86]# xm list
>> Name                                        ID   Mem VCPUs State   Time(s)
>> Domain-0                                     0   511     8 r-----  36798.0
>> OVM_OL5U7_X86_PVM_10GB                      54  2048    32 r-----     92.8
>> 
>> [root@ovs087 PVM_X86]# xm vcpu-list 54 | grep OVM_OL5U7_X86_PVM_10GB | wc -l
>> 64
>> 
>> [root@ovs087 PVM_X86]# xm save 54 vm.save
>> 
>> [root@ovs087 PVM_X86]# xm restore vm.save
>> 
>> [root@ovs087 PVM_X86]# xm list
>> Name                                        ID   Mem VCPUs State   Time(s)
>> Domain-0                                     0   511     8 r-----  36959.3
>> OVM_OL5U7_X86_PVM_10GB                      55  2048    32 r-----     51.0
>> 
>> [root@ovs087 PVM_X86]# xm vcpu-list 55 | grep OVM_OL5U7_X86_PVM_10GB | wc -l
>> 64
>> 
>> 32-bit PVM, 65 VCPUs:
>> 
>> [root@ovs087 PVM_X86]# xm list
>> Name                                        ID   Mem VCPUs State   Time(s)
>> Domain-0                                     0   511     8 r-----  36975.9
>> OVM_OL5U7_X86_PVM_10GB                      56  2048    32 r-----      8.6
>> 
>> [root@ovs087 PVM_X86]# xm vcpu-list 56 | grep OVM_OL5U7_X86_PVM_10GB | wc -l
>> 65
>> 
>> [root@ovs087 PVM_X86]# xm list
>> Name                                        ID   Mem VCPUs State   Time(s)
>> Domain-0                                     0   511     8 r-----  36977.7
>> OVM_OL5U7_X86_PVM_10GB                      56  2048    32 r-----     24.8
>> 
>> [root@ovs087 PVM_X86]# xm vcpu-list 56 | grep OVM_OL5U7_X86_PVM_10GB | wc -l
>> 65
>> 
>> [root@ovs087 PVM_X86]# xm save 56 vm.save
>> Error: /usr/lib64/xen/bin/xc_save 26 56 0 0 0 failed
>> 
>> /var/log/xen/xend.log: INFO (XendCheckpoint:416)
>> xc: error: No context for VCPU64 (61 = No data available): Internal error
>> 
>> -= HVM x86, 64 VCPUs =-
>> 
>> [root@ovs087 HVM_X86]# xm list
>> Name                                        ID   Mem VCPUs State   Time(s)
>> Domain-0                                     0   511     8 r-----  36506.1
>> OVM_OL5U7_X86_PVHVM_10GB                    52  2048    32 r-----     68.6
>> 
>> [root@ovs087 HVM_X86]# xm vcpu-list 52 | grep OVM_OL5U7_X86_PVHVM_10GB | wc -l
>> 64
>> 
>> [root@ovs087 HVM_X86]# xm save 52 vm.save
>> 
>> [root@ovs087 HVM_X86]# xm restore vm.save
>> 
>> [root@ovs087 HVM_X86]# xm list
>> Name                                        ID   Mem VCPUs State   Time(s)
>> Domain-0                                     0   511     8 r-----  36730.5
>> OVM_OL5U7_X86_PVHVM_10GB                    53  2048    32 r-----     19.8
>> 
>> [root@ovs087 HVM_X86]# xm vcpu-list 53 | grep OVM_OL5U7_X86_PVHVM_10GB | wc -l
>> 64
>> 
>> -= HVM x86, 128 VCPUs =-
>> 
>> [root@ovs087 HVM_X86]# xm list
>> Name                                        ID   Mem VCPUs State   Time(s)
>> Domain-0                                     0   511     8 r-----  36261.1
>> OVM_OL5U7_X86_PVHVM_10GB                    50  2048    32 r-----     34.9
>> 
>> [root@ovs087 HVM_X86]# xm vcpu-list 50 | grep OVM_OL5U7_X86_PVHVM_10GB | wc -l
>> 128
>> 
>> [root@ovs087 HVM_X86]# xm save 50 vm.save
>> 
>> [root@ovs087 HVM_X86]# xm restore vm.save
>> 
>> [root@ovs087 HVM_X86]# xm list
>> Name                                        ID   Mem VCPUs State   Time(s)
>> Domain-0                                     0   511     8 r-----  36480.5
>> OVM_OL5U7_X86_PVHVM_10GB                    51  2048    32 r-----     20.3
>> 
>> [root@ovs087 HVM_X86]# xm vcpu-list 51 | grep OVM_OL5U7_X86_PVHVM_10GB | wc -l
>> 128
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel
> 
> 


--B_3428123698_36200098
Content-type: application/octet-stream; name="00-sr-extend-vcpus"
Content-disposition: attachment;
	filename="00-sr-extend-vcpus"
Content-transfer-encoding: base64


ZGlmZiAtciA2NDAxN2Q0ZGY5ZGEgdG9vbHMvbGlieGMveGNfZG9tYWluX3Jlc3RvcmUuYwot
LS0gYS90b29scy9saWJ4Yy94Y19kb21haW5fcmVzdG9yZS5jCUZyaSBBdWcgMTcgMTE6MzY6
MDggMjAxMiArMDIwMAorKysgYi90b29scy9saWJ4Yy94Y19kb21haW5fcmVzdG9yZS5jCVNh
dCBBdWcgMTggMDg6MjU6NTIgMjAxMiArMDEwMApAQCAtNDYyLDcgKzQ2Miw3IEBAIHN0YXRp
YyBpbnQgZHVtcF9xZW11KHhjX2ludGVyZmFjZSAqeGNoLCAKIAogc3RhdGljIGludCBidWZm
ZXJfdGFpbF9odm0oeGNfaW50ZXJmYWNlICp4Y2gsIHN0cnVjdCByZXN0b3JlX2N0eCAqY3R4
LAogICAgICAgICAgICAgICAgICAgICAgICAgICAgc3RydWN0IHRhaWxidWZfaHZtICpidWYs
IGludCBmZCwKLSAgICAgICAgICAgICAgICAgICAgICAgICAgIHVuc2lnbmVkIGludCBtYXhf
dmNwdV9pZCwgdWludDY0X3QgdmNwdW1hcCwKKyAgICAgICAgICAgICAgICAgICAgICAgICAg
IHVuc2lnbmVkIGludCBtYXhfdmNwdV9pZCwgdWludDY0X3QgKnZjcHVtYXAsCiAgICAgICAg
ICAgICAgICAgICAgICAgICAgICBpbnQgZXh0X3ZjcHVjb250ZXh0LAogICAgICAgICAgICAg
ICAgICAgICAgICAgICAgaW50IHZjcHVleHRzdGF0ZSwgdWludDMyX3QgdmNwdWV4dHN0YXRl
X3NpemUpCiB7CkBAIC01MzAsNyArNTMwLDcgQEAgc3RhdGljIGludCBidWZmZXJfdGFpbF9o
dm0oeGNfaW50ZXJmYWNlIAogCiBzdGF0aWMgaW50IGJ1ZmZlcl90YWlsX3B2KHhjX2ludGVy
ZmFjZSAqeGNoLCBzdHJ1Y3QgcmVzdG9yZV9jdHggKmN0eCwKICAgICAgICAgICAgICAgICAg
ICAgICAgICAgc3RydWN0IHRhaWxidWZfcHYgKmJ1ZiwgaW50IGZkLAotICAgICAgICAgICAg
ICAgICAgICAgICAgICB1bnNpZ25lZCBpbnQgbWF4X3ZjcHVfaWQsIHVpbnQ2NF90IHZjcHVt
YXAsCisgICAgICAgICAgICAgICAgICAgICAgICAgIHVuc2lnbmVkIGludCBtYXhfdmNwdV9p
ZCwgdWludDY0X3QgKnZjcHVtYXAsCiAgICAgICAgICAgICAgICAgICAgICAgICAgIGludCBl
eHRfdmNwdWNvbnRleHQsCiAgICAgICAgICAgICAgICAgICAgICAgICAgIGludCB2Y3B1ZXh0
c3RhdGUsCiAgICAgICAgICAgICAgICAgICAgICAgICAgIHVpbnQzMl90IHZjcHVleHRzdGF0
ZV9zaXplKQpAQCAtNTYzLDggKzU2Myw4IEBAIHN0YXRpYyBpbnQgYnVmZmVyX3RhaWxfcHYo
eGNfaW50ZXJmYWNlICoKICAgICAvKiBWQ1BVIGNvbnRleHRzICovCiAgICAgYnVmLT52Y3B1
Y291bnQgPSAwOwogICAgIGZvciAoaSA9IDA7IGkgPD0gbWF4X3ZjcHVfaWQ7IGkrKykgewot
ICAgICAgICAvLyBEUFJJTlRGKCJ2Y3B1bWFwOiAlbGx4LCBjcHU6ICVkLCBiaXQ6ICVsbHVc
biIsIHZjcHVtYXAsIGksICh2Y3B1bWFwICUgKDFVTEwgPDwgaSkpKTsKLSAgICAgICAgaWYg
KCAoISh2Y3B1bWFwICYgKDFVTEwgPDwgaSkpKSApCisgICAgICAgIC8vIERQUklOVEYoInZj
cHVtYXA6ICVsbHgsIGNwdTogJWQsIGJpdDogJWxsdVxuIiwgdmNwdW1hcFtpLzY0XSwgaSwg
KHZjcHVtYXBbaS82NF0gJiAoMVVMTCA8PCAoaSU2NCkpKSk7CisgICAgICAgIGlmICggKCEo
dmNwdW1hcFtpLzY0XSAmICgxVUxMIDw8IChpJTY0KSkpKSApCiAgICAgICAgICAgICBjb250
aW51ZTsKICAgICAgICAgYnVmLT52Y3B1Y291bnQrKzsKICAgICB9CkBAIC02MTQsNyArNjE0
LDcgQEAgc3RhdGljIGludCBidWZmZXJfdGFpbF9wdih4Y19pbnRlcmZhY2UgKgogCiBzdGF0
aWMgaW50IGJ1ZmZlcl90YWlsKHhjX2ludGVyZmFjZSAqeGNoLCBzdHJ1Y3QgcmVzdG9yZV9j
dHggKmN0eCwKICAgICAgICAgICAgICAgICAgICAgICAgdGFpbGJ1Zl90ICpidWYsIGludCBm
ZCwgdW5zaWduZWQgaW50IG1heF92Y3B1X2lkLAotICAgICAgICAgICAgICAgICAgICAgICB1
aW50NjRfdCB2Y3B1bWFwLCBpbnQgZXh0X3ZjcHVjb250ZXh0LAorICAgICAgICAgICAgICAg
ICAgICAgICB1aW50NjRfdCAqdmNwdW1hcCwgaW50IGV4dF92Y3B1Y29udGV4dCwKICAgICAg
ICAgICAgICAgICAgICAgICAgaW50IHZjcHVleHRzdGF0ZSwgdWludDMyX3QgdmNwdWV4dHN0
YXRlX3NpemUpCiB7CiAgICAgaWYgKCBidWYtPmlzaHZtICkKQEAgLTY4MCw3ICs2ODAsNyBA
QCB0eXBlZGVmIHN0cnVjdCB7CiAKICAgICBpbnQgbmV3X2N0eHRfZm9ybWF0OwogICAgIGlu
dCBtYXhfdmNwdV9pZDsKLSAgICB1aW50NjRfdCB2Y3B1bWFwOworICAgIHVpbnQ2NF90IHZj
cHVtYXBbWENfU1JfTUFYX1ZDUFVTLzY0XTsKICAgICB1aW50NjRfdCBpZGVudHB0OwogICAg
IHVpbnQ2NF90IHBhZ2luZ19yaW5nX3BmbjsKICAgICB1aW50NjRfdCBhY2Nlc3NfcmluZ19w
Zm47CkBAIC03NDUsMTIgKzc0NSwxMiBAQCBzdGF0aWMgaW50IHBhZ2VidWZfZ2V0X29uZSh4
Y19pbnRlcmZhY2UgCiAgICAgY2FzZSBYQ19TQVZFX0lEX1ZDUFVfSU5GTzoKICAgICAgICAg
YnVmLT5uZXdfY3R4dF9mb3JtYXQgPSAxOwogICAgICAgICBpZiAoIFJERVhBQ1QoZmQsICZi
dWYtPm1heF92Y3B1X2lkLCBzaXplb2YoYnVmLT5tYXhfdmNwdV9pZCkpIHx8Ci0gICAgICAg
ICAgICAgYnVmLT5tYXhfdmNwdV9pZCA+PSA2NCB8fCBSREVYQUNUKGZkLCAmYnVmLT52Y3B1
bWFwLAotICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBz
aXplb2YodWludDY0X3QpKSApIHsKKyAgICAgICAgICAgICBidWYtPm1heF92Y3B1X2lkID49
IFhDX1NSX01BWF9WQ1BVUyB8fAorICAgICAgICAgICAgIFJERVhBQ1QoZmQsIGJ1Zi0+dmNw
dW1hcCwgdmNwdW1hcF9zeihidWYtPm1heF92Y3B1X2lkKSkgKSB7CiAgICAgICAgICAgICBQ
RVJST1IoIkVycm9yIHdoZW4gcmVhZGluZyBtYXhfdmNwdV9pZCIpOwogICAgICAgICAgICAg
cmV0dXJuIC0xOwogICAgICAgICB9Ci0gICAgICAgIC8vIERQUklOVEYoIk1heCBWQ1BVIElE
OiAlZCwgdmNwdW1hcDogJWxseFxuIiwgYnVmLT5tYXhfdmNwdV9pZCwgYnVmLT52Y3B1bWFw
KTsKKyAgICAgICAgLy8gRFBSSU5URigiTWF4IFZDUFUgSUQ6ICVkLCB2Y3B1bWFwOiAlbGx4
XG4iLCBidWYtPm1heF92Y3B1X2lkLCBidWYtPnZjcHVtYXBbMF0pOwogICAgICAgICByZXR1
cm4gcGFnZWJ1Zl9nZXRfb25lKHhjaCwgY3R4LCBidWYsIGZkLCBkb20pOwogCiAgICAgY2Fz
ZSBYQ19TQVZFX0lEX0hWTV9JREVOVF9QVDoKQEAgLTEzNjYsNyArMTM2Niw3IEBAIGludCB4
Y19kb21haW5fcmVzdG9yZSh4Y19pbnRlcmZhY2UgKnhjaCwKICAgICBzdHJ1Y3QgbW11ZXh0
X29wIHBpbltNQVhfUElOX0JBVENIXTsKICAgICB1bnNpZ25lZCBpbnQgbnJfcGluczsKIAot
ICAgIHVpbnQ2NF90IHZjcHVtYXAgPSAxVUxMOworICAgIHVpbnQ2NF90IHZjcHVtYXBbWENf
U1JfTUFYX1ZDUFVTLzY0XSA9IHsgMVVMTCB9OwogICAgIHVuc2lnbmVkIGludCBtYXhfdmNw
dV9pZCA9IDA7CiAgICAgaW50IG5ld19jdHh0X2Zvcm1hdCA9IDA7CiAKQEAgLTE1MTcsOCAr
MTUxNyw4IEBAIGludCB4Y19kb21haW5fcmVzdG9yZSh4Y19pbnRlcmZhY2UgKnhjaCwKICAg
ICAgICAgaWYgKCBqID09IDAgKSB7CiAgICAgICAgICAgICAvKiBjYXRjaCB2Y3B1IHVwZGF0
ZXMgKi8KICAgICAgICAgICAgIGlmIChwYWdlYnVmLm5ld19jdHh0X2Zvcm1hdCkgewotICAg
ICAgICAgICAgICAgIHZjcHVtYXAgPSBwYWdlYnVmLnZjcHVtYXA7CiAgICAgICAgICAgICAg
ICAgbWF4X3ZjcHVfaWQgPSBwYWdlYnVmLm1heF92Y3B1X2lkOworICAgICAgICAgICAgICAg
IG1lbWNweSh2Y3B1bWFwLCBwYWdlYnVmLnZjcHVtYXAsIHZjcHVtYXBfc3oobWF4X3ZjcHVf
aWQpKTsKICAgICAgICAgICAgIH0KICAgICAgICAgICAgIC8qIHNob3VsZCB0aGlzIGJlIGRl
ZmVycmVkPyBkb2VzIGl0IGNoYW5nZT8gKi8KICAgICAgICAgICAgIGlmICggcGFnZWJ1Zi5p
ZGVudHB0ICkKQEAgLTE4ODAsNyArMTg4MCw3IEBAIGludCB4Y19kb21haW5fcmVzdG9yZSh4
Y19pbnRlcmZhY2UgKnhjaCwKICAgICB2Y3B1cCA9IHRhaWxidWYudS5wdi52Y3B1YnVmOwog
ICAgIGZvciAoIGkgPSAwOyBpIDw9IG1heF92Y3B1X2lkOyBpKysgKQogICAgIHsKLSAgICAg
ICAgaWYgKCAhKHZjcHVtYXAgJiAoMVVMTCA8PCBpKSkgKQorICAgICAgICBpZiAoICEodmNw
dW1hcFtpLzY0XSAmICgxVUxMIDw8IChpJTY0KSkpICkKICAgICAgICAgICAgIGNvbnRpbnVl
OwogCiAgICAgICAgIG1lbWNweShjdHh0LCB2Y3B1cCwgKChkaW5mby0+Z3Vlc3Rfd2lkdGgg
PT0gOCkgPyBzaXplb2YoY3R4dC0+eDY0KQpkaWZmIC1yIDY0MDE3ZDRkZjlkYSB0b29scy9s
aWJ4Yy94Y19kb21haW5fc2F2ZS5jCi0tLSBhL3Rvb2xzL2xpYnhjL3hjX2RvbWFpbl9zYXZl
LmMJRnJpIEF1ZyAxNyAxMTozNjowOCAyMDEyICswMjAwCisrKyBiL3Rvb2xzL2xpYnhjL3hj
X2RvbWFpbl9zYXZlLmMJU2F0IEF1ZyAxOCAwODoyNTo1MiAyMDEyICswMTAwCkBAIC04NTUs
NyArODU1LDcgQEAgaW50IHhjX2RvbWFpbl9zYXZlKHhjX2ludGVyZmFjZSAqeGNoLCBpbgog
ICAgIHVuc2lnbmVkIGxvbmcgbmVlZGVkX3RvX2ZpeCA9IDA7CiAgICAgdW5zaWduZWQgbG9u
ZyB0b3RhbF9zZW50ICAgID0gMDsKIAotICAgIHVpbnQ2NF90IHZjcHVtYXAgPSAxVUxMOwor
ICAgIHVpbnQ2NF90IHZjcHVtYXBbWENfU1JfTUFYX1ZDUFVTLzY0XSA9IHsgMVVMTCB9Owog
CiAgICAgLyogSFZNOiBhIGJ1ZmZlciBmb3IgaG9sZGluZyBIVk0gY29udGV4dCAqLwogICAg
IHVpbnQzMl90IGh2bV9idWZfc2l6ZSA9IDA7CkBAIC0xNTgxLDEzICsxNTgxLDEzIEBAIGlu
dCB4Y19kb21haW5fc2F2ZSh4Y19pbnRlcmZhY2UgKnhjaCwgaW4KICAgICB9CiAKICAgICB7
Ci0gICAgICAgIHN0cnVjdCB7CisgICAgICAgIHN0cnVjdCBjaHVuayB7CiAgICAgICAgICAg
ICBpbnQgaWQ7CiAgICAgICAgICAgICBpbnQgbWF4X3ZjcHVfaWQ7Ci0gICAgICAgICAgICB1
aW50NjRfdCB2Y3B1bWFwOworICAgICAgICAgICAgdWludDY0X3QgdmNwdW1hcFtYQ19TUl9N
QVhfVkNQVVMvNjRdOwogICAgICAgICB9IGNodW5rID0geyBYQ19TQVZFX0lEX1ZDUFVfSU5G
TywgaW5mby5tYXhfdmNwdV9pZCB9OwogCi0gICAgICAgIGlmICggaW5mby5tYXhfdmNwdV9p
ZCA+PSA2NCApCisgICAgICAgIGlmICggaW5mby5tYXhfdmNwdV9pZCA+PSBYQ19TUl9NQVhf
VkNQVVMgKQogICAgICAgICB7CiAgICAgICAgICAgICBFUlJPUigiVG9vIG1hbnkgVkNQVVMg
aW4gZ3Vlc3QhIik7CiAgICAgICAgICAgICBnb3RvIG91dDsKQEAgLTE1OTgsMTEgKzE1OTgs
MTIgQEAgaW50IHhjX2RvbWFpbl9zYXZlKHhjX2ludGVyZmFjZSAqeGNoLCBpbgogICAgICAg
ICAgICAgeGNfdmNwdWluZm9fdCB2aW5mbzsKICAgICAgICAgICAgIGlmICggKHhjX3ZjcHVf
Z2V0aW5mbyh4Y2gsIGRvbSwgaSwgJnZpbmZvKSA9PSAwKSAmJgogICAgICAgICAgICAgICAg
ICB2aW5mby5vbmxpbmUgKQotICAgICAgICAgICAgICAgIHZjcHVtYXAgfD0gMVVMTCA8PCBp
OworICAgICAgICAgICAgICAgIHZjcHVtYXBbaS82NF0gfD0gMVVMTCA8PCAoaSU2NCk7CiAg
ICAgICAgIH0KIAotICAgICAgICBjaHVuay52Y3B1bWFwID0gdmNwdW1hcDsKLSAgICAgICAg
aWYgKCB3cmV4YWN0KGlvX2ZkLCAmY2h1bmssIHNpemVvZihjaHVuaykpICkKKyAgICAgICAg
bWVtY3B5KGNodW5rLnZjcHVtYXAsIHZjcHVtYXAsIHZjcHVtYXBfc3ooaW5mby5tYXhfdmNw
dV9pZCkpOworICAgICAgICBpZiAoIHdyZXhhY3QoaW9fZmQsICZjaHVuaywgb2Zmc2V0b2Yo
c3RydWN0IGNodW5rLCB2Y3B1bWFwKQorICAgICAgICAgICAgICAgICAgICAgKyB2Y3B1bWFw
X3N6KGluZm8ubWF4X3ZjcHVfaWQpKSApCiAgICAgICAgIHsKICAgICAgICAgICAgIFBFUlJP
UigiRXJyb3Igd2hlbiB3cml0aW5nIHRvIHN0YXRlIGZpbGUiKTsKICAgICAgICAgICAgIGdv
dG8gb3V0OwpAQCAtMTg3OCw3ICsxODc5LDcgQEAgaW50IHhjX2RvbWFpbl9zYXZlKHhjX2lu
dGVyZmFjZSAqeGNoLCBpbgogCiAgICAgZm9yICggaSA9IDA7IGkgPD0gaW5mby5tYXhfdmNw
dV9pZDsgaSsrICkKICAgICB7Ci0gICAgICAgIGlmICggISh2Y3B1bWFwICYgKDFVTEwgPDwg
aSkpICkKKyAgICAgICAgaWYgKCAhKHZjcHVtYXBbaS82NF0gJiAoMVVMTCA8PCAoaSU2NCkp
KSApCiAgICAgICAgICAgICBjb250aW51ZTsKIAogICAgICAgICBpZiAoIChpICE9IDApICYm
IHhjX3ZjcHVfZ2V0Y29udGV4dCh4Y2gsIGRvbSwgaSwgJmN0eHQpICkKZGlmZiAtciA2NDAx
N2Q0ZGY5ZGEgdG9vbHMvbGlieGMveGdfc2F2ZV9yZXN0b3JlLmgKLS0tIGEvdG9vbHMvbGli
eGMveGdfc2F2ZV9yZXN0b3JlLmgJRnJpIEF1ZyAxNyAxMTozNjowOCAyMDEyICswMjAwCisr
KyBiL3Rvb2xzL2xpYnhjL3hnX3NhdmVfcmVzdG9yZS5oCVNhdCBBdWcgMTggMDg6MjU6NTIg
MjAxMiArMDEwMApAQCAtMjY5LDYgKzI2OSw5IEBACiAvKiBXaGVuIHBpbm5pbmcgcGFnZSB0
YWJsZXMgYXQgdGhlIGVuZCBvZiByZXN0b3JlLCB3ZSBhbHNvIHVzZSBiYXRjaGluZy4gKi8K
ICNkZWZpbmUgTUFYX1BJTl9CQVRDSCAgMTAyNAogCisvKiBNYXhpbXVtICNWQ1BVcyBjdXJy
ZW50bHkgc3VwcG9ydGVkIGZvciBzYXZlL3Jlc3RvcmUuICovCisjZGVmaW5lIFhDX1NSX01B
WF9WQ1BVUyA0MDk2CisjZGVmaW5lIHZjcHVtYXBfc3oobWF4X2lkKSAoKChtYXhfaWQpLzY0
KzEpKnNpemVvZih1aW50NjRfdCkpCiAKIAogLyoK
--B_3428123698_36200098
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--B_3428123698_36200098--




From xen-devel-bounces@lists.xen.org Sat Aug 18 08:22:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 18 Aug 2012 08:22:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2eIQ-0006H5-9R; Sat, 18 Aug 2012 08:22:14 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T2eIM-0006Gx-8b
	for Xen-devel@lists.xensource.com; Sat, 18 Aug 2012 08:22:13 +0000
Received: from [85.158.139.83:52968] by server-9.bemta-5.messagelabs.com id
	5F/95-26123-EA05F205; Sat, 18 Aug 2012 08:22:06 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-182.messagelabs.com!1345278126!29038895!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk1NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7951 invoked from network); 18 Aug 2012 08:22:06 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-10.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Aug 2012 08:22:06 -0000
X-IronPort-AV: E=Sophos;i="4.77,790,1336348800"; d="scan'208";a="14068663"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	18 Aug 2012 08:22:05 +0000
Received: from [127.0.0.1] (10.80.16.67) by smtprelay.citrix.com
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Sat, 18 Aug 2012 09:22:05 +0100
Message-ID: <1345278124.23624.11.camel@dagon.hellion.org.uk>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Mukesh Rathor <mukesh.rathor@oracle.com>
Date: Sat, 18 Aug 2012 09:22:04 +0100
In-Reply-To: <20120817170641.4261229b@mantra.us.oracle.com>
References: <20120815180622.0c988f48@mantra.us.oracle.com>
	<1345197150.30865.147.camel@zakaz.uk.xensource.com>
	<20120817170641.4261229b@mantra.us.oracle.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [RFC PATCH 7/8]: PVH: grant changes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> > > HYPERVISOR_memory_op(XENMEM_add_to_physmap, &xatp); if (rc != 0) {
> > >  				printk(KERN_WARNING
> > > @@ -1053,7 +1058,7 @@ static void gnttab_request_version(void)
> > >  	int rc;
> > >  	struct gnttab_set_version gsv;
> > >  
> > > -	if (xen_hvm_domain())
> > > +	if (xen_hvm_domain() || xen_pvh_domain())
> > 
> > Does something stop pvh using v2?
> 
> I had some issue related to grstatus field that was added, so punted it
> for now. It's phase II which is a big phase now :) :)...

It seems like it's got a lot of mostly independent bits in it, so
hopefully you should get some help ;-)

> > >  
> > > -	if (xen_pv_domain())
> > > +	/* PVH note: xen will free existing kmalloc'd mfn in
> > > +	 * XENMEM_add_to_physmap */
> > > +	if (xen_pvh_domain() && !gnttab_shared.addr) {
> > > +		gnttab_shared.addr =
> > > +			kmalloc(max_nr_gframes * PAGE_SIZE,
> > > GFP_KERNEL);
> > > +		if ( !gnttab_shared.addr ) {
> > > +			printk(KERN_WARNING "%s", kmsg);
> > 
> > Why this construct instead of just the string literal?
>  
> To avoid line overflow. I don't like code spanning 80 columns. If you split
> the string, then you can't grep for it.

FWIW CodingStyle relaxes the 80 column limit for literal strings for
exactly this reason.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Aug 18 08:23:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 18 Aug 2012 08:23:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2eJY-0006KE-Nk; Sat, 18 Aug 2012 08:23:24 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T2eJX-0006K8-Ln
	for Xen-devel@lists.xensource.com; Sat, 18 Aug 2012 08:23:23 +0000
Received: from [85.158.143.99:23722] by server-2.bemta-4.messagelabs.com id
	43/87-31966-BF05F205; Sat, 18 Aug 2012 08:23:23 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1345278202!28245168!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk1NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16526 invoked from network); 18 Aug 2012 08:23:22 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-3.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Aug 2012 08:23:22 -0000
X-IronPort-AV: E=Sophos;i="4.77,790,1336348800"; d="scan'208";a="14068672"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	18 Aug 2012 08:23:22 +0000
Received: from [127.0.0.1] (10.80.16.67) by smtprelay.citrix.com
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Sat, 18 Aug 2012 09:23:22 +0100
Message-ID: <1345278201.23624.13.camel@dagon.hellion.org.uk>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Mukesh Rathor <mukesh.rathor@oracle.com>
Date: Sat, 18 Aug 2012 09:23:21 +0100
In-Reply-To: <20120817163739.386fce5d@mantra.us.oracle.com>
References: <20120815180131.24aaa5ce@mantra.us.oracle.com>
	<1345193780.30865.109.camel@zakaz.uk.xensource.com>
	<20120817163739.386fce5d@mantra.us.oracle.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [[RFC PATCH 2/8]: PVH: changes related to initial
 boot and irq rewiring
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> > > diff --git a/arch/x86/xen/irq.c b/arch/x86/xen/irq.c
> > > index 1573376..7c7dfd1 100644
> > > --- a/arch/x86/xen/irq.c
> > > +++ b/arch/x86/xen/irq.c
> > > @@ -100,6 +100,10 @@ PV_CALLEE_SAVE_REGS_THUNK(xen_irq_enable);
> > >  
> > >  static void xen_safe_halt(void)
> > >  {
> > > +	/* so event channel can be delivered to us, since in HVM
> > > container */
> > > +	if (xen_pvh_domain())
> > > +		local_irq_enable();
> > > +
> > >  	/* Blocking includes an implicit local_irq_enable(). */
> > 
> > So this comment isn't true for a PVH guest? Why not? Should it be?
>  
> I need to make sure the EFLAGS.IF is enabled. IIRC, the comment is saying
> that xen will clear event channel mask bit. For PVH, there's the additional
> EFLAGS.IF flag.
> 

My reading of the hypercall semantics would be that it reenables
whichever event delivery mechanism the guest is using and therefore it
should enable EFLAGS.IF for a PVH guest since manipulating the evtchn
mask in this case is pointless.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Aug 18 08:57:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 18 Aug 2012 08:57:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2eq7-0006d9-IU; Sat, 18 Aug 2012 08:57:03 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T2eq6-0006d4-60
	for Xen-devel@lists.xensource.com; Sat, 18 Aug 2012 08:57:02 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1345280215!9968637!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk1NDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10511 invoked from network); 18 Aug 2012 08:56:55 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Aug 2012 08:56:55 -0000
X-IronPort-AV: E=Sophos;i="4.77,790,1336348800"; d="scan'208";a="14068781"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	18 Aug 2012 08:56:55 +0000
Received: from [127.0.0.1] (10.80.16.67) by smtprelay.citrix.com
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Sat, 18 Aug 2012 09:56:54 +0100
Message-ID: <1345280214.23624.20.camel@dagon.hellion.org.uk>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Mukesh Rathor <mukesh.rathor@oracle.com>
Date: Sat, 18 Aug 2012 09:56:54 +0100
In-Reply-To: <20120817152617.64e2fe5e@mantra.us.oracle.com>
References: <20120815175724.3405043a@mantra.us.oracle.com>
	<alpine.DEB.2.02.1208161454430.4850@kaball.uk.xensource.com>
	<20120816114650.4db2079f@mantra.us.oracle.com>
	<alpine.DEB.2.02.1208171104230.15568@kaball.uk.xensource.com>
	<20120817122014.3c3387b5@mantra.us.oracle.com>
	<20120817193604.GA4573@phenom.dumpdata.com>
	<20120817152617.64e2fe5e@mantra.us.oracle.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [RFC PATCH 1/8]: PVH: Basic and preparatory changes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-08-17 at 23:26 +0100, Mukesh Rathor wrote:
> On Fri, 17 Aug 2012 15:36:04 -0400
> Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> 
> > > > For example in balloon.c we are probably only interested in memory
> > > > related behavior, so checking for XENFEAT_auto_translated_physmap
> > > > should be enough.  In other parts of the code we might want to
> > > > check for xen_pv_domain(). If xen_pv_domain() and
> > > > XENFEAT_auto_translated_physmap are not enough, we could introduce
> > > > another small XENFEAT that specifies that the domain is running
> > > > in a HVM container. This way they are all reusable.
> > > 
> > > yeah, I thought about that, but wasn't sure what the implications
> > > would be for a guest that's not PVH but has auto-xlated physmap, if
> > > there's such a possibility. If you guys think that's not an issue, I
> > > can change it.
> > 
> > dom0_shadow=on on the hypervisor mode enables that in PV mode.
> 
> So, if I just add checks for auto_translated_physmap like suggested,
> wouldn't I be changing and breaking the code paths for dom0_shadow boot
> of PV guest?

Changing, but not breaking, I think. Assuming auto_translated_physmap is
used in the logically correct way.

If anything I think you'd be making dom0_shadow work better, since you
are making stuff actually work.

>  is dom0_shadow deprecated?

I hadn't even heard of it until today.

> 
> Following would be true for both, pvh and dom0_shadow:
> 
> #define xen_pvh_domain() (xen_pv_domain() && \
>                           xen_feature(XENFEAT_auto_translated_physmap) && \
>                           xen_have_vector_callback)  

FWIW I don't think dom0 shadow has vector callback support.

But even if it did we could add a new XENFEAT to allow you to
distinguish if necessary. Let's wait and see which uses of xen_pvh_domain
remain once you've converted the easy ones to XENFEAT_writable etc. The
remaining uses may show some pattern which we can use to name the new
XENFEAT something more specific than XENFEAT_pvh.

I wonder if PVH deserves a new entry in the XENVER_capabilities string?

> Also, the SIF flag allows PVH to be enabled via config file where the
> tool parses and sets it for the guest.
> 
> At present:
>   dom0: put pvh=true at grub command line
>   domU: put pvh=1 in the vm.cfg file.

I guess these turn into something like a new XEN_DOMCTL_CDF_* which is
passed to XEN_DOMCTL_createdomain? 

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Aug 19 05:23:24 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 19 Aug 2012 05:23:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T2xyD-0007sL-3R; Sun, 19 Aug 2012 05:22:41 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T2xyB-0007sG-5u
	for xen-devel@lists.xensource.com; Sun, 19 Aug 2012 05:22:39 +0000
Received: from [85.158.143.99:28465] by server-3.bemta-4.messagelabs.com id
	EE/2F-09529-E1870305; Sun, 19 Aug 2012 05:22:38 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-15.tower-216.messagelabs.com!1345353757!28885663!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk2MDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18970 invoked from network); 19 Aug 2012 05:22:37 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Aug 2012 05:22:37 -0000
X-IronPort-AV: E=Sophos;i="4.77,792,1336348800"; d="scan'208";a="14071893"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	19 Aug 2012 05:22:35 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Sun, 19 Aug 2012 06:22:35 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1T2xy7-00012Z-DM;
	Sun, 19 Aug 2012 05:22:35 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1T2xy7-0002tW-3N;
	Sun, 19 Aug 2012 06:22:35 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13616-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 19 Aug 2012 06:22:35 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13616: tolerable FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13616 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13616/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13615
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13615
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13615
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13615

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  73ac4b7ad2e1
baseline version:
 xen                  73ac4b7ad2e1

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Aug 19 13:28:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 19 Aug 2012 13:28:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T35XQ-0001j2-70; Sun, 19 Aug 2012 13:27:32 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <shandivo@gmail.com>) id 1T35XO-0001ix-18
	for xen-devel@lists.xen.org; Sun, 19 Aug 2012 13:27:30 +0000
Received: from [85.158.139.83:62197] by server-4.bemta-5.messagelabs.com id
	FF/14-12386-1C9E0305; Sun, 19 Aug 2012 13:27:29 +0000
X-Env-Sender: shandivo@gmail.com
X-Msg-Ref: server-10.tower-182.messagelabs.com!1345382845!29163779!1
X-Originating-IP: [209.85.212.45]
X-SpamReason: No, hits=1.7 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_20_30, HTML_MESSAGE, ML_RADAR_SPEW_LINKS_14, RCVD_BY_IP,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11495 invoked from network); 19 Aug 2012 13:27:26 -0000
Received: from mail-vb0-f45.google.com (HELO mail-vb0-f45.google.com)
	(209.85.212.45)
	by server-10.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Aug 2012 13:27:26 -0000
Received: by vbip1 with SMTP id p1so5693236vbi.32
	for <xen-devel@lists.xen.org>; Sun, 19 Aug 2012 06:27:25 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:from:date:message-id:subject:to:content-type;
	bh=oDxI0XLoIFxghs0RcfUOcmgEv/DqKDT9KZMYCEv6yFI=;
	b=MH6dbfnL5yrcwqI8l9vd1UidYgjaHwhqMGsOGExR1dY5Ba3IPwTsG3w8oL7hmOYmo3
	TjFTnveo2o13o75olVx4zY/spje9BsOspbjDU5hWT5SffLpo6I7yCuJ4/xYXifSnPwHT
	tBJr1BTkE2jPY8AwqesdB+uRe0EiRbro9KpiKPzzoNiAr45PxJKHtCuz++poDIQXmWHU
	OoSaBGbpivORGCBrLyGo50d0rGrw9fUPVNF4MgFGCvj7NKARZ8LILBmtouRu3WfNQUJy
	6Dtl0+lAK2Yr27kR878LQ5y8i0lnf5xMo9DM2YlJqJtF8ltKsCqBGekHzRw2l9ddUDNm
	olNQ==
Received: by 10.58.221.66 with SMTP id qc2mr8310685vec.30.1345382845153; Sun,
	19 Aug 2012 06:27:25 -0700 (PDT)
MIME-Version: 1.0
Received: by 10.58.211.169 with HTTP; Sun, 19 Aug 2012 06:27:05 -0700 (PDT)
From: ivo <shandivo@gmail.com>
Date: Sun, 19 Aug 2012 15:27:05 +0200
Message-ID: <CACA08DwJjqWXy92aCezSr9eoV4wDxWiVQEPJkyGiGJ+=W=Wf=g@mail.gmail.com>
To: xen-devel@lists.xen.org, p.d@gmx.de
Subject: Re: [Xen-devel] SATA controller passthrough - option rom
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============9146503509356630763=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============9146503509356630763==
Content-Type: multipart/alternative; boundary=047d7bf162e492821504c79e5a77

--047d7bf162e492821504c79e5a77
Content-Type: text/plain; charset=ISO-8859-1

On Sun, Aug 19, 2012 at 3:08 PM, <p.d@gmx.de> wrote:

> Hello, Ivo,
>
> I hope I found the right email address. If not, I beg your pardon.
>
>
> I found Your post here:
> http://lists.xen.org/archives/html/xen-devel/2012-07/msg00529.html
> I don't know how I can post there, so I am writing to you personally.
>
> Similar situation: I'm trying to pass through a SATA controller (Asus
> U3S6, without RAID), but the controller's BIOS doesn't find the disk in
> DomU. I think I'm doing something wrong.
> Maybe you can give me some tips on how you did it?
>
>
> Ubuntu 12.04 + self-compiled kernel 3.4.9 + xen-unstable, rev. 25753,
> from 15.08.12
> GPU passthrough works fine.
>
> If I pass through to DomU only "09:00.0 SATA controller: Marvell
> Technology", I see that after the normal BIOS the controller's BIOS
> starts, but it doesn't find the disk.
> If I pass through all the PCI functions of the physical device to the
> guest, these are the devices:
> -----------------------------------
> lspci -k:
> 06:00.0 PCI bridge: PLX Technology, Inc. PEX 8608 8-lane, 8-Port PCI
> Express Gen 2 (5.0 GT/s) Switch (rev ba)
>         Kernel driver in use: pciback
>         Kernel modules: shpchp
> 07:01.0 PCI bridge: PLX Technology, Inc. PEX 8608 8-lane, 8-Port PCI
> Express Gen 2 (5.0 GT/s) Switch (rev ba)
>         Kernel driver in use: pciback
>         Kernel modules: shpchp
> 07:05.0 PCI bridge: PLX Technology, Inc. PEX 8608 8-lane, 8-Port PCI
> Express Gen 2 (5.0 GT/s) Switch (rev ba)
>         Kernel driver in use: pciback
>         Kernel modules: shpchp
> 07:07.0 PCI bridge: PLX Technology, Inc. PEX 8608 8-lane, 8-Port PCI
> Express Gen 2 (5.0 GT/s) Switch (rev ba)
>         Kernel driver in use: pciback
>         Kernel modules: shpchp
> 07:09.0 PCI bridge: PLX Technology, Inc. PEX 8608 8-lane, 8-Port PCI
> Express Gen 2 (5.0 GT/s) Switch (rev ba)
>         Kernel driver in use: pciback
>         Kernel modules: shpchp
> 08:00.0 USB controller: NEC Corporation uPD720200 USB 3.0 Host Controller
> (rev 03)
>         Subsystem: ASUSTeK Computer Inc. P8P67 Deluxe Motherboard
>         Kernel driver in use: pciback
> 09:00.0 SATA controller: Marvell Technology Group Ltd. 88SE9120 SATA 6Gb/s
> Controller (rev 12)
>         Subsystem: ASUSTeK Computer Inc. Device 8400
>         Kernel driver in use: pciback
> -----------------------------------
>
> my guest will not start. I get an error like: kernel can not reset
> device from sysfs.
>
> from /etc/default/grub:
> GRUB_CMDLINE_LINUX_DEFAULT="ipv6.disable=1 intel_iommu=on vt.handoff=7
> xen-pciback.passthrough=1
> xen-pciback.hide=(0000:04:00.0)(0000:04:00.1)(0000:06:00.0)(0000:07:01.0)(0000:07:05.0)(0000:07:07.0)(0000:07:09.0)(0000:08:00.0)(0000:09:00.0)"
>
> I attach my DomU configuration file for xl: "winxp3".
>
>
> Best regards,
>
> Panschinski Denis
>
>
> --
> Panschinski Denis
> Wielandstr. 36
> 65187 Wiesbaden
> Germany
>
> Tel.:  +49(0)-611-20 57 639
> Mobil: +49(0)-1777-19 79 61
> Skype: panschinski
> mailto:p.d@gmx.de <p.d@gmx.de>


I honestly think that if you can pass through the device alone and it
works, the problem may be related to something else.
It's better if you ask for help on the whole list. I'm now forwarding this
to xen-devel; just make sure to reply to "xen-devel@lists.xen.org" so
everyone can read and help.
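
One generic way to double-check the xen-pciback.hide setup above, not from
the original mail: after boot, confirm which driver owns each hidden PCI
function. The helper takes the sysfs root as a parameter (use /sys on a
real system); the function name pci_driver_of is made up for this sketch.

```shell
# Print the driver bound to a PCI function, or "no-driver" if unbound.
# Usage: pci_driver_of /sys 0000:09:00.0   -> should print "pciback"
#        when the xen-pciback.hide list took effect.
pci_driver_of() {
    local sysroot=$1 bdf=$2
    local link="$sysroot/bus/pci/devices/$bdf/driver"
    if [ -L "$link" ] || [ -d "$link" ]; then
        basename "$(readlink -f "$link")"
    else
        echo "no-driver"
    fi
}
```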

--047d7bf162e492821504c79e5a77--


--===============9146503509356630763==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============9146503509356630763==--


From xen-devel-bounces@lists.xen.org Sun Aug 19 13:28:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 19 Aug 2012 13:28:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T35XQ-0001j2-70; Sun, 19 Aug 2012 13:27:32 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <shandivo@gmail.com>) id 1T35XO-0001ix-18
	for xen-devel@lists.xen.org; Sun, 19 Aug 2012 13:27:30 +0000
Received: from [85.158.139.83:62197] by server-4.bemta-5.messagelabs.com id
	FF/14-12386-1C9E0305; Sun, 19 Aug 2012 13:27:29 +0000
X-Env-Sender: shandivo@gmail.com
X-Msg-Ref: server-10.tower-182.messagelabs.com!1345382845!29163779!1
X-Originating-IP: [209.85.212.45]
X-SpamReason: No, hits=1.7 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_20_30, HTML_MESSAGE, ML_RADAR_SPEW_LINKS_14, RCVD_BY_IP,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11495 invoked from network); 19 Aug 2012 13:27:26 -0000
Received: from mail-vb0-f45.google.com (HELO mail-vb0-f45.google.com)
	(209.85.212.45)
	by server-10.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Aug 2012 13:27:26 -0000
Received: by vbip1 with SMTP id p1so5693236vbi.32
	for <xen-devel@lists.xen.org>; Sun, 19 Aug 2012 06:27:25 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:from:date:message-id:subject:to:content-type;
	bh=oDxI0XLoIFxghs0RcfUOcmgEv/DqKDT9KZMYCEv6yFI=;
	b=MH6dbfnL5yrcwqI8l9vd1UidYgjaHwhqMGsOGExR1dY5Ba3IPwTsG3w8oL7hmOYmo3
	TjFTnveo2o13o75olVx4zY/spje9BsOspbjDU5hWT5SffLpo6I7yCuJ4/xYXifSnPwHT
	tBJr1BTkE2jPY8AwqesdB+uRe0EiRbro9KpiKPzzoNiAr45PxJKHtCuz++poDIQXmWHU
	OoSaBGbpivORGCBrLyGo50d0rGrw9fUPVNF4MgFGCvj7NKARZ8LILBmtouRu3WfNQUJy
	6Dtl0+lAK2Yr27kR878LQ5y8i0lnf5xMo9DM2YlJqJtF8ltKsCqBGekHzRw2l9ddUDNm
	olNQ==
Received: by 10.58.221.66 with SMTP id qc2mr8310685vec.30.1345382845153; Sun,
	19 Aug 2012 06:27:25 -0700 (PDT)
MIME-Version: 1.0
Received: by 10.58.211.169 with HTTP; Sun, 19 Aug 2012 06:27:05 -0700 (PDT)
From: ivo <shandivo@gmail.com>
Date: Sun, 19 Aug 2012 15:27:05 +0200
Message-ID: <CACA08DwJjqWXy92aCezSr9eoV4wDxWiVQEPJkyGiGJ+=W=Wf=g@mail.gmail.com>
To: xen-devel@lists.xen.org, p.d@gmx.de
Subject: Re: [Xen-devel] SATA controller passthrough - option rom
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============9146503509356630763=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============9146503509356630763==
Content-Type: multipart/alternative; boundary=047d7bf162e492821504c79e5a77

--047d7bf162e492821504c79e5a77
Content-Type: text/plain; charset=ISO-8859-1

On Sun, Aug 19, 2012 at 3:08 PM, <p.d@gmx.de> wrote:

> Hello, Ivo,
>
> I hope I found the right email address. If not, I beg your pardon.
>
>
> I found your post here:
> http://lists.xen.org/archives/html/xen-devel/2012-07/msg00529.html
> I don't know how to post there, so I am writing to you personally.
>
> My situation is similar: I'm trying to pass through a SATA controller
> (Asus U3S6, without RAID), but the controller's BIOS doesn't find the
> disk in the DomU. I think I'm doing something wrong.
> Maybe you can give me some tips on how you did it?
>
>
> Ubuntu 12.04, self-compiled kernel 3.4.9, and xen-unstable rev. 25753
> from 2012-08-15.
> GPU passthrough works fine.
>
> If I pass through to the DomU only "09:00.0 SATA controller: Marvell
> Technology", I see that after the normal BIOS the controller's BIOS
> starts, but it doesn't find the disk.
> If I pass through to the guest all PCI functions of the physical device,
> these are the devices:
> -----------------------------------
> lspci -k:
> 06:00.0 PCI bridge: PLX Technology, Inc. PEX 8608 8-lane, 8-Port PCI
> Express Gen 2 (5.0 GT/s) Switch (rev ba)
>         Kernel driver in use: pciback
>         Kernel modules: shpchp
> 07:01.0 PCI bridge: PLX Technology, Inc. PEX 8608 8-lane, 8-Port PCI
> Express Gen 2 (5.0 GT/s) Switch (rev ba)
>         Kernel driver in use: pciback
>         Kernel modules: shpchp
> 07:05.0 PCI bridge: PLX Technology, Inc. PEX 8608 8-lane, 8-Port PCI
> Express Gen 2 (5.0 GT/s) Switch (rev ba)
>         Kernel driver in use: pciback
>         Kernel modules: shpchp
> 07:07.0 PCI bridge: PLX Technology, Inc. PEX 8608 8-lane, 8-Port PCI
> Express Gen 2 (5.0 GT/s) Switch (rev ba)
>         Kernel driver in use: pciback
>         Kernel modules: shpchp
> 07:09.0 PCI bridge: PLX Technology, Inc. PEX 8608 8-lane, 8-Port PCI
> Express Gen 2 (5.0 GT/s) Switch (rev ba)
>         Kernel driver in use: pciback
>         Kernel modules: shpchp
> 08:00.0 USB controller: NEC Corporation uPD720200 USB 3.0 Host Controller
> (rev 03)
>         Subsystem: ASUSTeK Computer Inc. P8P67 Deluxe Motherboard
>         Kernel driver in use: pciback
> 09:00.0 SATA controller: Marvell Technology Group Ltd. 88SE9120 SATA 6Gb/s
> Controller (rev 12)
>         Subsystem: ASUSTeK Computer Inc. Device 8400
>         Kernel driver in use: pciback
> -----------------------------------
>
> my guest will not start. I get an error like: kernel cannot reset device
> from sysfs.
>
> from /etc/default/grub:
> GRUB_CMDLINE_LINUX_DEFAULT="ipv6.disable=1 intel_iommu=on vt.handoff=7
> xen-pciback.passthrough=1
> xen-pciback.hide=(0000:04:00.0)(0000:04:00.1)(0000:06:00.0)(0000:07:01.0)(0000:07:05.0)(0000:07:07.0)(0000:07:09.0)(0000:08:00.0)(0000:09:00.0)"
>
> I attach my DomU configuration file for xl: "winxp3".
>
>
> Best regards,
>
> Panschinski Denis
>
>
> --
> Panschinski Denis
> Wielandstr. 36
> 65187 Wiesbaden
> Germany
>
> Tel.:  +49(0)-611-20 57 639
> Mobil: +49(0)-1777-19 79 61
> Skype: panschinski
> mailto:p.d@gmx.de <p.d@gmx.de>


I honestly think that if you can pass through the device alone and it works,
the problem may be related to something else.
It's better to ask the whole list for help. I'm now forwarding this
to xen-devel; just make sure to reply to "xen-devel@lists.xen.org" so
everyone can read and help.
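Since the reported failure is the kernel refusing to reset the device from sysfs, one quick first check is whether the kernel exposes a reset method for that device at all. A minimal sketch (the helper name is ours, and the BDF 0000:09:00.0 is taken from the report above; adjust it for your system):

```python
import os

def has_sysfs_reset(bdf: str) -> bool:
    """Return True if the kernel exposes a PCI reset method (FLR or
    similar) for the device at the given BDF. pciback relies on this
    to reset the device between guest runs; its absence is a common
    cause of "cannot reset device" errors like the one above."""
    return os.path.exists(f"/sys/bus/pci/devices/{bdf}/reset")

# Example BDF from the report above (hypothetical elsewhere):
print(has_sysfs_reset("0000:09:00.0"))
```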


--047d7bf162e492821504c79e5a77--


--===============9146503509356630763==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============9146503509356630763==--


From xen-devel-bounces@lists.xen.org Mon Aug 20 01:31:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Aug 2012 01:31:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3GpJ-0000v7-4P; Mon, 20 Aug 2012 01:30:45 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wangzhenguo@huawei.com>) id 1T3GpH-0000v2-V2
	for xen-devel@lists.xen.org; Mon, 20 Aug 2012 01:30:44 +0000
Received: from [85.158.139.83:51635] by server-6.bemta-5.messagelabs.com id
	BB/88-22415-34391305; Mon, 20 Aug 2012 01:30:43 +0000
X-Env-Sender: wangzhenguo@huawei.com
X-Msg-Ref: server-8.tower-182.messagelabs.com!1345426239!17703525!1
X-Originating-IP: [119.145.14.65]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTE5LjE0NS4xNC42NSA9PiAzNDYwOA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26340 invoked from network); 20 Aug 2012 01:30:41 -0000
Received: from szxga02-in.huawei.com (HELO szxga02-in.huawei.com)
	(119.145.14.65) by server-8.tower-182.messagelabs.com with SMTP;
	20 Aug 2012 01:30:41 -0000
Received: from 172.24.2.119 (EHLO szxeml211-edg.china.huawei.com)
	([172.24.2.119])
	by szxrg02-dlp.huawei.com (MOS 4.3.4-GA FastPath queued)
	with ESMTP id ANS83259; Mon, 20 Aug 2012 09:30:30 +0800 (CST)
Received: from SZXEML431-HUB.china.huawei.com (10.72.61.39) by
	szxeml211-edg.china.huawei.com (172.24.2.182) with Microsoft SMTP
	Server (TLS) id 14.1.323.3; Mon, 20 Aug 2012 09:30:13 +0800
Received: from SZXEML528-MBS.china.huawei.com ([169.254.5.217]) by
	szxeml431-hub.china.huawei.com ([10.72.61.39]) with mapi id
	14.01.0323.003; Mon, 20 Aug 2012 09:30:04 +0800
From: Wangzhenguo <wangzhenguo@huawei.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Thread-Topic: [Xen-devel] The hypercall will fail and return EFAULT when the
	page becomes COW by forking process in linux
Thread-Index: Ac1YHX79yLSELb+4TLqeIKwBXZDIFf//q9kA//3DDTCAJQongP/9/iEAgAOV5ICADfI9AP/6ldlwAUsPRwD//lly8P/8/06A//lxU9D/80ShAP/lChpQ/8o++QD/k88DUP8oGQ+A/k5x8JD8kGK1APkcWzqw
Date: Mon, 20 Aug 2012 01:30:04 +0000
Message-ID: <B44CA5218606DC4FA941D19CCEB27B53306BA567@szxeml528-mbs.china.huawei.com>
References: <B44CA5218606DC4FA941D19CCEB27B5323BEF12C@szxeml528-mbs.china.huawei.com>
	<1341221926.4625.12.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B5323BEF99A@szxeml528-mbs.china.huawei.com>
	<1343135163.18971.15.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B532CF769F2@szxeml528-mbx.china.huawei.com>
	<1343221925.18971.93.camel@zakaz.uk.xensource.com>
	<1343988628.21372.46.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B532CF76ACE@szxeml528-mbx.china.huawei.com>
	<1344259710.11339.39.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B532CF76B32@szxeml528-mbx.china.huawei.com>
	<1344334043.11339.85.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B532CF76B4F@szxeml528-mbx.china.huawei.com>
	<1344343342.11339.96.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B532CF76B9E@szxeml528-mbx.china.huawei.com>
	<1344416440.32142.7.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B532CF76BEE@szxeml528-mbx.china.huawei.com>
	<1344427585.32142.34.camel@zakaz.uk.xensource.com>
	<B44CA5218606DC4FA941D19CCEB27B532CF76C40@szxeml528-mbx.china.huawei.com>
	<1345211276.10161.26.camel@zakaz.uk.xensource.com>
In-Reply-To: <1345211276.10161.26.camel@zakaz.uk.xensource.com>
Accept-Language: zh-CN, en-US
Content-Language: zh-CN
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.135.65.30]
MIME-Version: 1.0
X-CFilter-Loop: Reflected
Cc: Yangxiaowei <xiaowei.yang@huawei.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Hongtao <bobby.hong@huawei.com>, Yechuan <yechuan@huawei.com>
Subject: Re: [Xen-devel] The hypercall will fail and return EFAULT when the
 page becomes COW by forking process in linux
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

PiAtLS0tLU9yaWdpbmFsIE1lc3NhZ2UtLS0tLQ0KPiBGcm9tOiBJYW4gQ2FtcGJlbGwgW21haWx0
bzpJYW4uQ2FtcGJlbGxAY2l0cml4LmNvbV0NCj4gU2VudDogRnJpZGF5LCBBdWd1c3QgMTcsIDIw
MTIgOTo0OCBQTQ0KPiBUbzogV2FuZ3poZW5ndW8NCj4gQ2M6IFlhbmd4aWFvd2VpOyBZZWNodWFu
OyBIb25ndGFvOyB4ZW4tZGV2ZWxAbGlzdHMueGVuLm9yZzsgSWFuIEphY2tzb24NCj4gU3ViamVj
dDogUmU6IFtYZW4tZGV2ZWxdIFRoZSBoeXBlcmNhbGwgd2lsbCBmYWlsIGFuZCByZXR1cm4gRUZB
VUxUIHdoZW4NCj4gdGhlIHBhZ2UgYmVjb21lcyBDT1cgYnkgZm9ya2luZyBwcm9jZXNzIGluIGxp
bnV4DQo+IA0KPiBPbiBUaHUsIDIwMTItMDgtMDkgYXQgMDc6NTYgKzAxMDAsIFdhbmd6aGVuZ3Vv
IHdyb3RlOg0KPiANCj4gPiAjIEhHIGNoYW5nZXNldCBwYXRjaA0KPiA+ICMgUGFyZW50IGE1ZGZk
OTI0ZmNkYjE3M2ExNTRkYWQ5ZjM3MDczYzFkZTEzMDIwNjUNCj4gPiBsaWJ4YzogQWRkIFZNX0RP
TlRDT1BZIGZsYWcgb2YgdGhlIFZNQSBvZiB0aGUgaHlwZXJjYWxsIGJ1ZmZlciwgdG8NCj4gYXZv
aWQgdGhlDQo+ID4gICAgICAgIGh5cGVyY2FsbCBidWZmZXIgYmVjb21pbmcgQ09XIG9uIGh5cGVy
Y2FsbC4NCj4gDQo+IFdoZW4gSSBjYW1lIHRvIGNvbW1pdCB0aGlzIEkgZ290Og0KPiB4Y19saW51
eF9vc2RlcC5jOiBJbiBmdW5jdGlvbiDigJhsaW51eF9wcml2Y21kX2FsbG9jX2h5cGVyY2FsbF9i
dWZmZXLigJk6DQo+IHhjX2xpbnV4X29zZGVwLmM6MTAxOiBlcnJvcjog4oCYcHRy4oCZIHVuZGVj
bGFyZWQgKGZpcnN0IHVzZSBpbiB0aGlzDQo+IGZ1bmN0aW9uKQ0KPiB4Y19saW51eF9vc2RlcC5j
OjEwMTogZXJyb3I6IChFYWNoIHVuZGVjbGFyZWQgaWRlbnRpZmllciBpcyByZXBvcnRlZA0KPiBv
bmx5IG9uY2UNCj4geGNfbGludXhfb3NkZXAuYzoxMDE6IGVycm9yOiBmb3IgZWFjaCBmdW5jdGlv
biBpdCBhcHBlYXJzIGluLikNCj4gDQo+IEkndmUgZml4ZWQgdGhpcyB1cCBmb3IgeW91IHRoaXMg
dGltZSBidXQgcGxlYXNlIGJlIG1vcmUgY2FyZWZ1bCBpbg0KPiBmdXR1cmUgYW5kIGNvbXBpbGUv
dGVzdCB5b3VyIHBhdGNoZXMuDQpUaGFua3MgSWFuLg0KSSdtIHNvcnJ5LCBJIHByb21pc2UgdG8g
Y29tcGlsZS90ZXN0IG15IHBhdGNoZXMuDQoNCj4gDQo+IEkgYWxzbyB0d2Vha2VkIHRoZSB3b3Jk
aW5nIG9mIHlvdXIgY29tbWVudHMgYSBsaXR0bGUgYml0LCBJIGhvcGUgdGhhdCdzDQo+IG9rLg0K
PiANCj4gSWFuLg0KDQpfX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fXwpYZW4tZGV2ZWwgbWFpbGluZyBsaXN0Clhlbi1kZXZlbEBsaXN0cy54ZW4ub3JnCmh0dHA6
Ly9saXN0cy54ZW4ub3JnL3hlbi1kZXZlbAo=

From xen-devel-bounces@lists.xen.org Mon Aug 20 03:24:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Aug 2012 03:24:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3IaS-00023n-AD; Mon, 20 Aug 2012 03:23:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xudong.hao@intel.com>) id 1T3IaR-00023f-3o
	for xen-devel@lists.xen.org; Mon, 20 Aug 2012 03:23:31 +0000
Received: from [85.158.143.35:20386] by server-1.bemta-4.messagelabs.com id
	1B/57-07754-2BDA1305; Mon, 20 Aug 2012 03:23:30 +0000
X-Env-Sender: xudong.hao@intel.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1345433009!14890550!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzU5Mjg2\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30239 invoked from network); 20 Aug 2012 03:23:29 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-3.tower-21.messagelabs.com with SMTP;
	20 Aug 2012 03:23:29 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga102.jf.intel.com with ESMTP; 19 Aug 2012 20:23:28 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.77,795,1336374000"; d="scan'208";a="188513588"
Received: from fmsmsx104.amr.corp.intel.com ([10.19.9.35])
	by orsmga002.jf.intel.com with ESMTP; 19 Aug 2012 20:23:27 -0700
Received: from fmsmsx102.amr.corp.intel.com (10.19.9.53) by
	FMSMSX104.amr.corp.intel.com (10.19.9.35) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Sun, 19 Aug 2012 20:23:26 -0700
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	FMSMSX102.amr.corp.intel.com (10.19.9.53) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Sun, 19 Aug 2012 20:23:26 -0700
Received: from shsmsx102.ccr.corp.intel.com ([169.254.2.92]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.175]) with mapi id
	14.01.0355.002; Mon, 20 Aug 2012 11:22:57 +0800
From: "Hao, Xudong" <xudong.hao@intel.com>
To: Jan Beulich <JBeulich@suse.com>
Thread-Topic: [Xen-devel] [PATCH 1/3] xen/tools: Add 64 bits big bar support
Thread-Index: AQHNerKKEOT7HWjBmke6+J0K3zDOkJdaEb8AgAItqqD//4LEgIAB4cJw//+X9gCABNO7cA==
Date: Mon, 20 Aug 2012 03:22:56 +0000
Message-ID: <403610A45A2B5242BD291EDAE8B37D300FEA36B0@SHSMSX102.ccr.corp.intel.com>
References: <1345013682-20618-1-git-send-email-xudong.hao@intel.com>
	<502B85020200007800095036@nat28.tlf.novell.com>
	<403610A45A2B5242BD291EDAE8B37D300FE8AB13@SHSMSX102.ccr.corp.intel.com>
	<502CEFC10200007800095727@nat28.tlf.novell.com>
	<403610A45A2B5242BD291EDAE8B37D300FEA18DC@SHSMSX102.ccr.corp.intel.com>
	<502E2C9C0200007800095D33@nat28.tlf.novell.com>
In-Reply-To: <502E2C9C0200007800095D33@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "ian.jackson@eu.citrix.com" <ian.jackson@eu.citrix.com>, "Zhang,
	Xiantao" <xiantao.zhang@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 1/3] xen/tools: Add 64 bits big bar support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: Friday, August 17, 2012 5:36 PM
> To: Hao, Xudong
> Cc: ian.jackson@eu.citrix.com; Zhang, Xiantao; xen-devel@lists.xen.org
> Subject: RE: [Xen-devel] [PATCH 1/3] xen/tools: Add 64 bits big bar support
> 
> >>> On 17.08.12 at 11:24, "Hao, Xudong" <xudong.hao@intel.com> wrote:
> >>  -----Original Message-----
> >> From: Jan Beulich [mailto:JBeulich@suse.com]
> >> Sent: Thursday, August 16, 2012 7:04 PM
> >> To: Hao, Xudong
> >> Cc: ian.jackson@eu.citrix.com; Zhang, Xiantao; xen-devel@lists.xen.org
> >> Subject: RE: [Xen-devel] [PATCH 1/3] xen/tools: Add 64 bits big bar support
> >>
> >> >>> On 16.08.12 at 12:48, "Hao, Xudong" <xudong.hao@intel.com> wrote:
> >> >> From: Jan Beulich [mailto:JBeulich@suse.com]
> >> >> >>> On 15.08.12 at 08:54, Xudong Hao <xudong.hao@intel.com> wrote:
> >> >> > --- a/tools/firmware/hvmloader/config.h	Tue Jul 24 17:02:04 2012
> >> +0200
> >> >> > +++ b/tools/firmware/hvmloader/config.h	Thu Jul 26 15:40:01 2012
> >> +0800
> >> >> > @@ -53,6 +53,10 @@ extern struct bios_config ovmf_config;
> >> >> >  /* MMIO hole: Hardcoded defaults, which can be dynamically
> expanded.
> >> */
> >> >> >  #define PCI_MEM_START       0xf0000000
> >> >> >  #define PCI_MEM_END         0xfc000000
> >> >> > +#define PCI_HIGH_MEM_START  0xa000000000ULL
> >> >> > +#define PCI_HIGH_MEM_END    0xf000000000ULL
> >> >>
> >> >> With such hard coded values, this is hardly meant to be anything
> >> >> more than an RFC, is it? These values should not exist in the first
> >> >> place, and the variables below should be determined from VM
> >> >> characteristics (best would presumably be to allocate them top
> >> >> down from the end of physical address space, making sure you
> >> >> don't run into RAM).
> >>
> >> No comment on this part?
> >>
> >
> > The high MMIO memory starts at 640G, which is already very high; I think
> > we don't need to allocate MMIO top-down from the top of the physical
> > address space.
> > Another thing you remind me of: maybe we can skip this high MMIO hole
> > when setting up the p2m table during HVM build in libxc (setup_guest()),
> > like the handling of MMIO below 4G.
> 
> That would be an option, but any fixed address you pick here
> will look arbitrary (and will sooner or later raise questions). Plus
> by allowing the RAM above 4G to remain contiguous even for
> huge guests, we'd retain maximum compatibility with all sorts
> of guest OSes. Furthermore, did you check that we can use 40-bit
> (guest) physical addresses in all cases? (I would think that 36 is
> the biggest common value.) Bottom line - please don't use a
> fixed number here.
> 

Hi, Jan

Where is the 36-bit physical address limit reflected in the current Xen code? Could you help point it out?

Thanks,
-Xudong

> Jan
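Jan's concern about address width can be checked with quick arithmetic: a guest limited to 36 physical address bits can only address 64 GiB, far below the hardcoded 640 GiB window start, and even the proposed constants themselves require 40 address bits. A minimal sketch using the values from the patch under discussion:

```python
# Constants from the patch under discussion (tools/firmware/hvmloader/config.h)
PCI_HIGH_MEM_START = 0xa000000000
PCI_HIGH_MEM_END   = 0xf000000000

print(PCI_HIGH_MEM_START >> 30)             # window start in GiB: 640
print((PCI_HIGH_MEM_END - 1).bit_length())  # address bits needed to reach the window end: 40
print(PCI_HIGH_MEM_START < (1 << 36))       # reachable with 36 address bits? False
```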

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> >> >> > +#define PCI_HIGH_MEM_END    0xf000000000ULL
> >> >>
> >> >> With such hard coded values, this is hardly meant to be anything
> >> >> more than an RFC, is it? These values should not exist in the first
> >> >> place, and the variables below should be determined from VM
> >> >> characteristics (best would presumably be to allocate them top
> >> >> down from the end of physical address space, making sure you
> >> >> don't run into RAM).
> >>
> >> No comment on this part?
> >>
> >
> > The high MMIO region starts at 640G, which is already very high; I don't
> > think we need to allocate MMIO top down from the top of the physical
> > address space. Another thing you remind me of: maybe we can skip this
> > high MMIO hole when setting up the p2m table in libxc's HVM builder
> > (setup_guest()), like the handling for MMIO below 4G.
> 
> That would be an option, but any fixed address you pick here
> will look arbitrary (and will sooner or later raise questions). Plus,
> by allowing the RAM above 4G to remain contiguous even for
> huge guests, we'd retain maximum compatibility with all sorts
> of guest OSes. Furthermore, did you check that we can in all cases
> use 40-bit (guest) physical addresses? (I would think that 36
> is the biggest common value.) Bottom line: please don't use a
> fixed number here.
> 

Hi Jan,

Where is the 36-bit physical address limit represented? Could you help point it out in the current Xen code?

Thanks,
-Xudong

> Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 20 05:19:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Aug 2012 05:19:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3KNg-00033B-D4; Mon, 20 Aug 2012 05:18:28 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T3KNe-000336-OR
	for xen-devel@lists.xensource.com; Mon, 20 Aug 2012 05:18:27 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1345439900!10081605!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk2MDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7999 invoked from network); 20 Aug 2012 05:18:20 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Aug 2012 05:18:20 -0000
X-IronPort-AV: E=Sophos;i="4.77,795,1336348800"; d="scan'208";a="14076137"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	20 Aug 2012 05:18:19 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Mon, 20 Aug 2012 06:18:19 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1T3KNX-00029u-8L;
	Mon, 20 Aug 2012 05:18:19 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1T3KNW-0002Ye-Ry;
	Mon, 20 Aug 2012 06:18:18 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13617-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 20 Aug 2012 06:18:18 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13617: tolerable FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13617 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13617/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-sedf     14 guest-localmigrate/x10      fail pass in 13616

Regressions which are regarded as allowable (not blocking):
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13616
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13616
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13616
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13616

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  73ac4b7ad2e1
baseline version:
 xen                  73ac4b7ad2e1

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 20 06:12:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Aug 2012 06:12:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3LD6-0003Pw-TJ; Mon, 20 Aug 2012 06:11:36 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <imammedo@redhat.com>) id 1T3F93-0004rp-AX
	for xen-devel@lists.xensource.com; Sun, 19 Aug 2012 23:43:01 +0000
Received: from [85.158.143.35:12211] by server-3.bemta-4.messagelabs.com id
	47/6A-09529-40A71305; Sun, 19 Aug 2012 23:43:00 +0000
X-Env-Sender: imammedo@redhat.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1345419779!14300116!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQ5OTc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15748 invoked from network); 19 Aug 2012 23:42:59 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-15.tower-21.messagelabs.com with SMTP;
	19 Aug 2012 23:42:59 -0000
Received: from int-mx10.intmail.prod.int.phx2.redhat.com
	(int-mx10.intmail.prod.int.phx2.redhat.com [10.5.11.23])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id q7JNgk9L016966
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Sun, 19 Aug 2012 19:42:47 -0400
Received: from dell-pet610-01.lab.eng.brq.redhat.com
	(dell-pet610-01.lab.eng.brq.redhat.com [10.34.42.20])
	by int-mx10.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP
	id q7JNgdH1020926; Sun, 19 Aug 2012 19:42:40 -0400
From: Igor Mammedov <imammedo@redhat.com>
To: qemu-devel@nongnu.org
Date: Mon, 20 Aug 2012 01:39:34 +0200
Message-Id: <1345419579-25499-1-git-send-email-imammedo@redhat.com>
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.23
X-Mailman-Approved-At: Mon, 20 Aug 2012 06:11:35 +0000
Cc: peter.maydell@linaro.org, jan.kiszka@siemens.com, mjt@tls.msk.ru,
	armbru@redhat.com, blauwirbel@gmail.com, kraxel@redhat.com,
	xen-devel@lists.xensource.com, i.mitsyanko@samsung.com,
	mdroth@linux.vnet.ibm.com, avi@redhat.com,
	anthony.perard@citrix.com, lersek@redhat.com,
	stefanha@linux.vnet.ibm.com, stefano.stabellini@eu.citrix.com,
	sw@weilnetz.de, lcapitulino@redhat.com, rth@twiddle.net,
	kwolf@redhat.com, aliguori@us.ibm.com, mtosatti@redhat.com,
	pbonzini@redhat.com, afaerber@suse.de
Subject: [Xen-devel] [PATCH 0/5 v2] cpu: make a child of DeviceState
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is the 3rd approach to making CPU a child of DeviceState
for both kinds of targets, *-user and *-softmmu. It seems that
with the current state of QEMU it doesn't take too much effort
to make it compile. Please check that it doesn't break
anything on targets/archs/hosts other than i386.

What's tested:
  - compile-tested building all targets on an FC17 x64 host.
  - briefly tested the i386 user and softmmu targets.

Anthony Liguori (1):
  qdev: split up header so it can be used in cpu.h

Igor Mammedov (4):
  move qemu_irq typedef out of cpu-common.h
  qapi-types.h doesn't really need to include qemu-common.h
  cleanup error.h, included qapi-types.h already has stdbool.h
  make CPU a child of DeviceState

 error.h               |    1 -
 hw/arm-misc.h         |    1 +
 hw/bt.h               |    2 +
 hw/devices.h          |    2 +
 hw/irq.h              |    2 +
 hw/mc146818rtc.c      |    1 +
 hw/omap.h             |    1 +
 hw/qdev-addr.c        |    1 +
 hw/qdev-core.h        |  240 ++++++++++++++++++++++++++++++++
 hw/qdev-monitor.h     |   16 ++
 hw/qdev-properties.c  |    1 +
 hw/qdev-properties.h  |  128 +++++++++++++++++
 hw/qdev.c             |    1 +
 hw/qdev.h             |  371 +------------------------------------------------
 hw/soc_dma.h          |    1 +
 hw/xen.h              |    1 +
 include/qemu/cpu.h    |    6 +-
 qemu-common.h         |    1 -
 scripts/qapi-types.py |    2 +-
 sysemu.h              |    1 +
 20 files changed, 407 insertions(+), 373 deletions(-)
 create mode 100644 hw/qdev-core.h
 create mode 100644 hw/qdev-monitor.h
 create mode 100644 hw/qdev-properties.h


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 20 06:12:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Aug 2012 06:12:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3LD8-0003QP-Br; Mon, 20 Aug 2012 06:11:38 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <imammedo@redhat.com>) id 1T3F9L-0004t2-Mv
	for xen-devel@lists.xensource.com; Sun, 19 Aug 2012 23:43:19 +0000
X-Env-Sender: imammedo@redhat.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1345419792!4921128!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQ5OTc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 799 invoked from network); 19 Aug 2012 23:43:13 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-10.tower-27.messagelabs.com with SMTP;
	19 Aug 2012 23:43:13 -0000
Received: from int-mx10.intmail.prod.int.phx2.redhat.com
	(int-mx10.intmail.prod.int.phx2.redhat.com [10.5.11.23])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id q7JNh6bF016999
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Sun, 19 Aug 2012 19:43:06 -0400
Received: from dell-pet610-01.lab.eng.brq.redhat.com
	(dell-pet610-01.lab.eng.brq.redhat.com [10.34.42.20])
	by int-mx10.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP
	id q7JNgdH5020926; Sun, 19 Aug 2012 19:43:02 -0400
From: Igor Mammedov <imammedo@redhat.com>
To: qemu-devel@nongnu.org
Date: Mon, 20 Aug 2012 01:39:38 +0200
Message-Id: <1345419579-25499-5-git-send-email-imammedo@redhat.com>
In-Reply-To: <1345419579-25499-1-git-send-email-imammedo@redhat.com>
References: <1345419579-25499-1-git-send-email-imammedo@redhat.com>
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.23
X-Mailman-Approved-At: Mon, 20 Aug 2012 06:11:35 +0000
Cc: peter.maydell@linaro.org, jan.kiszka@siemens.com, mjt@tls.msk.ru,
	armbru@redhat.com, blauwirbel@gmail.com, kraxel@redhat.com,
	xen-devel@lists.xensource.com, i.mitsyanko@samsung.com,
	mdroth@linux.vnet.ibm.com, avi@redhat.com,
	anthony.perard@citrix.com, lersek@redhat.com,
	stefanha@linux.vnet.ibm.com, stefano.stabellini@eu.citrix.com,
	sw@weilnetz.de, lcapitulino@redhat.com, rth@twiddle.net,
	kwolf@redhat.com, aliguori@us.ibm.com, mtosatti@redhat.com,
	pbonzini@redhat.com, afaerber@suse.de
Subject: [Xen-devel] [PATCH 4/5] cleanup error.h,
	included qapi-types.h already has stdbool.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
---
 error.h |    1 -
 1 files changed, 0 insertions(+), 1 deletions(-)

diff --git a/error.h b/error.h
index 96fc203..643a372 100644
--- a/error.h
+++ b/error.h
@@ -14,7 +14,6 @@
 
 #include "compiler.h"
 #include "qapi-types.h"
-#include <stdbool.h>
 
 /**
  * A class representing internal errors within QEMU.  An error has a ErrorClass
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 20 06:12:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Aug 2012 06:12:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3LD7-0003QB-L0; Mon, 20 Aug 2012 06:11:37 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <imammedo@redhat.com>) id 1T3F98-0004sL-K2
	for xen-devel@lists.xensource.com; Sun, 19 Aug 2012 23:43:07 +0000
Received: from [85.158.138.51:20511] by server-12.bemta-3.messagelabs.com id
	D6/8E-04073-90A71305; Sun, 19 Aug 2012 23:43:05 +0000
X-Env-Sender: imammedo@redhat.com
X-Msg-Ref: server-14.tower-174.messagelabs.com!1345419784!22759023!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTUwMDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21187 invoked from network); 19 Aug 2012 23:43:04 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-14.tower-174.messagelabs.com with SMTP;
	19 Aug 2012 23:43:04 -0000
Received: from int-mx10.intmail.prod.int.phx2.redhat.com
	(int-mx10.intmail.prod.int.phx2.redhat.com [10.5.11.23])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id q7JNguQC016986
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Sun, 19 Aug 2012 19:42:56 -0400
Received: from dell-pet610-01.lab.eng.brq.redhat.com
	(dell-pet610-01.lab.eng.brq.redhat.com [10.34.42.20])
	by int-mx10.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP
	id q7JNgdH3020926; Sun, 19 Aug 2012 19:42:52 -0400
From: Igor Mammedov <imammedo@redhat.com>
To: qemu-devel@nongnu.org
Date: Mon, 20 Aug 2012 01:39:36 +0200
Message-Id: <1345419579-25499-3-git-send-email-imammedo@redhat.com>
In-Reply-To: <1345419579-25499-1-git-send-email-imammedo@redhat.com>
References: <1345419579-25499-1-git-send-email-imammedo@redhat.com>
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.23
X-Mailman-Approved-At: Mon, 20 Aug 2012 06:11:35 +0000
Cc: peter.maydell@linaro.org, jan.kiszka@siemens.com, mjt@tls.msk.ru,
	armbru@redhat.com, blauwirbel@gmail.com, kraxel@redhat.com,
	xen-devel@lists.xensource.com, i.mitsyanko@samsung.com,
	mdroth@linux.vnet.ibm.com, avi@redhat.com,
	anthony.perard@citrix.com, lersek@redhat.com,
	stefanha@linux.vnet.ibm.com, stefano.stabellini@eu.citrix.com,
	sw@weilnetz.de, lcapitulino@redhat.com, rth@twiddle.net,
	kwolf@redhat.com, aliguori@us.ibm.com, mtosatti@redhat.com,
	pbonzini@redhat.com, afaerber@suse.de
Subject: [Xen-devel] [PATCH 2/5] qdev: split up header so it can be used in
	cpu.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Anthony Liguori <aliguori@us.ibm.com>

Header file dependencies are a frickin' nightmare right now.  cpu.h tends to get
included in our 'include everything' header files, but qdev also needs to include
those headers, mainly for qdev-properties, since it knows about CharDriverState
and friends.

We can solve this for now by splitting up qdev.h along the same lines that we
previously split the C file.  Then cpu.h just needs to include qdev-core.h.

v1->v2:
  moved the qemu_irq typedef out of this patch into a separate one, with an
  additional header cleanup to fix build breakage

Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
---
 hw/mc146818rtc.c     |    1 +
 hw/qdev-addr.c       |    1 +
 hw/qdev-core.h       |  240 ++++++++++++++++++++++++++++++++
 hw/qdev-monitor.h    |   16 ++
 hw/qdev-properties.c |    1 +
 hw/qdev-properties.h |  128 +++++++++++++++++
 hw/qdev.c            |    1 +
 hw/qdev.h            |  371 +-------------------------------------------------
 8 files changed, 392 insertions(+), 367 deletions(-)
 create mode 100644 hw/qdev-core.h
 create mode 100644 hw/qdev-monitor.h
 create mode 100644 hw/qdev-properties.h

diff --git a/hw/mc146818rtc.c b/hw/mc146818rtc.c
index 3777f85..3780617 100644
--- a/hw/mc146818rtc.c
+++ b/hw/mc146818rtc.c
@@ -25,6 +25,7 @@
 #include "qemu-timer.h"
 #include "sysemu.h"
 #include "mc146818rtc.h"
+#include "qapi/qapi-visit-core.h"
 
 #ifdef TARGET_I386
 #include "apic.h"
diff --git a/hw/qdev-addr.c b/hw/qdev-addr.c
index b711b6b..5b5d38f 100644
--- a/hw/qdev-addr.c
+++ b/hw/qdev-addr.c
@@ -1,6 +1,7 @@
 #include "qdev.h"
 #include "qdev-addr.h"
 #include "targphys.h"
+#include "qapi/qapi-visit-core.h"
 
 /* --- target physical address --- */
 
diff --git a/hw/qdev-core.h b/hw/qdev-core.h
new file mode 100644
index 0000000..ca205fc
--- /dev/null
+++ b/hw/qdev-core.h
@@ -0,0 +1,240 @@
+#ifndef QDEV_CORE_H
+#define QDEV_CORE_H
+
+#include "qemu-queue.h"
+#include "qemu-option.h"
+#include "qemu/object.h"
+#include "hw/irq.h"
+#include "error.h"
+
+typedef struct Property Property;
+
+typedef struct PropertyInfo PropertyInfo;
+
+typedef struct CompatProperty CompatProperty;
+
+typedef struct BusState BusState;
+
+typedef struct BusClass BusClass;
+
+enum DevState {
+    DEV_STATE_CREATED = 1,
+    DEV_STATE_INITIALIZED,
+};
+
+enum {
+    DEV_NVECTORS_UNSPECIFIED = -1,
+};
+
+#define TYPE_DEVICE "device"
+#define DEVICE(obj) OBJECT_CHECK(DeviceState, (obj), TYPE_DEVICE)
+#define DEVICE_CLASS(klass) OBJECT_CLASS_CHECK(DeviceClass, (klass), TYPE_DEVICE)
+#define DEVICE_GET_CLASS(obj) OBJECT_GET_CLASS(DeviceClass, (obj), TYPE_DEVICE)
+
+typedef int (*qdev_initfn)(DeviceState *dev);
+typedef int (*qdev_event)(DeviceState *dev);
+typedef void (*qdev_resetfn)(DeviceState *dev);
+
+struct VMStateDescription;
+
+typedef struct DeviceClass {
+    ObjectClass parent_class;
+
+    const char *fw_name;
+    const char *desc;
+    Property *props;
+    int no_user;
+
+    /* callbacks */
+    void (*reset)(DeviceState *dev);
+
+    /* device state */
+    const struct VMStateDescription *vmsd;
+
+    /* Private to qdev / bus.  */
+    qdev_initfn init;
+    qdev_event unplug;
+    qdev_event exit;
+    const char *bus_type;
+} DeviceClass;
+
+/* This structure should not be accessed directly.  We declare it here
+   so that it can be embedded in individual device state structures.  */
+struct DeviceState {
+    Object parent_obj;
+
+    const char *id;
+    enum DevState state;
+    struct QemuOpts *opts;
+    int hotplugged;
+    BusState *parent_bus;
+    int num_gpio_out;
+    qemu_irq *gpio_out;
+    int num_gpio_in;
+    qemu_irq *gpio_in;
+    QLIST_HEAD(, BusState) child_bus;
+    int num_child_bus;
+    int instance_id_alias;
+    int alias_required_for_version;
+};
+
+/*
+ * This callback is used to create Open Firmware device path in accordance with
+ * OF spec http://forthworks.com/standards/of1275.pdf. Individual bus bindings
+ * can be found at http://playground.sun.com/1275/bindings/.
+ */
+
+#define TYPE_BUS "bus"
+#define BUS(obj) OBJECT_CHECK(BusState, (obj), TYPE_BUS)
+#define BUS_CLASS(klass) OBJECT_CLASS_CHECK(BusClass, (klass), TYPE_BUS)
+#define BUS_GET_CLASS(obj) OBJECT_GET_CLASS(BusClass, (obj), TYPE_BUS)
+
+struct BusClass {
+    ObjectClass parent_class;
+
+    /* FIXME first arg should be BusState */
+    void (*print_dev)(Monitor *mon, DeviceState *dev, int indent);
+    char *(*get_dev_path)(DeviceState *dev);
+    char *(*get_fw_dev_path)(DeviceState *dev);
+    int (*reset)(BusState *bus);
+};
+
+typedef struct BusChild {
+    DeviceState *child;
+    int index;
+    QTAILQ_ENTRY(BusChild) sibling;
+} BusChild;
+
+/**
+ * BusState:
+ * @qom_allocated: Indicates whether the object was allocated by QOM.
+ * @glib_allocated: Indicates whether the object was initialized in-place
+ * yet is expected to be freed with g_free().
+ */
+struct BusState {
+    Object obj;
+    DeviceState *parent;
+    const char *name;
+    int allow_hotplug;
+    bool qom_allocated;
+    bool glib_allocated;
+    int max_index;
+    QTAILQ_HEAD(ChildrenHead, BusChild) children;
+    QLIST_ENTRY(BusState) sibling;
+};
+
+struct Property {
+    const char   *name;
+    PropertyInfo *info;
+    int          offset;
+    uint8_t      bitnr;
+    uint8_t      qtype;
+    int64_t      defval;
+};
+
+struct PropertyInfo {
+    const char *name;
+    const char *legacy_name;
+    const char **enum_table;
+    int (*parse)(DeviceState *dev, Property *prop, const char *str);
+    int (*print)(DeviceState *dev, Property *prop, char *dest, size_t len);
+    ObjectPropertyAccessor *get;
+    ObjectPropertyAccessor *set;
+    ObjectPropertyRelease *release;
+};
+
+typedef struct GlobalProperty {
+    const char *driver;
+    const char *property;
+    const char *value;
+    QTAILQ_ENTRY(GlobalProperty) next;
+} GlobalProperty;
+
+/*** Board API.  This should go away once we have a machine config file.  ***/
+
+DeviceState *qdev_create(BusState *bus, const char *name);
+DeviceState *qdev_try_create(BusState *bus, const char *name);
+bool qdev_exists(const char *name);
+int qdev_init(DeviceState *dev) QEMU_WARN_UNUSED_RESULT;
+void qdev_init_nofail(DeviceState *dev);
+void qdev_set_legacy_instance_id(DeviceState *dev, int alias_id,
+                                 int required_for_version);
+void qdev_unplug(DeviceState *dev, Error **errp);
+void qdev_free(DeviceState *dev);
+int qdev_simple_unplug_cb(DeviceState *dev);
+void qdev_machine_creation_done(void);
+bool qdev_machine_modified(void);
+
+qemu_irq qdev_get_gpio_in(DeviceState *dev, int n);
+void qdev_connect_gpio_out(DeviceState *dev, int n, qemu_irq pin);
+
+BusState *qdev_get_child_bus(DeviceState *dev, const char *name);
+
+/*** Device API.  ***/
+
+/* Register device properties.  */
+/* GPIO inputs also double as IRQ sinks.  */
+void qdev_init_gpio_in(DeviceState *dev, qemu_irq_handler handler, int n);
+void qdev_init_gpio_out(DeviceState *dev, qemu_irq *pins, int n);
+
+BusState *qdev_get_parent_bus(DeviceState *dev);
+
+/*** BUS API. ***/
+
+DeviceState *qdev_find_recursive(BusState *bus, const char *id);
+
+/* Returns 0 to walk children, > 0 to skip walk, < 0 to terminate walk. */
+typedef int (qbus_walkerfn)(BusState *bus, void *opaque);
+typedef int (qdev_walkerfn)(DeviceState *dev, void *opaque);
+
+void qbus_create_inplace(BusState *bus, const char *typename,
+                         DeviceState *parent, const char *name);
+BusState *qbus_create(const char *typename, DeviceState *parent, const char *name);
+/* Returns > 0 if either devfn or busfn skip walk somewhere in recursion,
+ *         < 0 if either devfn or busfn terminate walk somewhere in recursion,
+ *           0 otherwise. */
+int qbus_walk_children(BusState *bus, qdev_walkerfn *devfn,
+                       qbus_walkerfn *busfn, void *opaque);
+int qdev_walk_children(DeviceState *dev, qdev_walkerfn *devfn,
+                       qbus_walkerfn *busfn, void *opaque);
+void qdev_reset_all(DeviceState *dev);
+void qbus_reset_all_fn(void *opaque);
+
+void qbus_free(BusState *bus);
+
+#define FROM_QBUS(type, dev) DO_UPCAST(type, qbus, dev)
+
+/* This should go away once we get rid of the NULL bus hack */
+BusState *sysbus_get_default(void);
+
+char *qdev_get_fw_dev_path(DeviceState *dev);
+
+/**
+ * @qdev_machine_init
+ *
+ * Initialize platform devices before machine init.  This is a hack until full
+ * support for composition is added.
+ */
+void qdev_machine_init(void);
+
+/**
+ * @device_reset
+ *
+ * Reset a single device (by calling the reset method).
+ */
+void device_reset(DeviceState *dev);
+
+const struct VMStateDescription *qdev_get_vmsd(DeviceState *dev);
+
+const char *qdev_fw_name(DeviceState *dev);
+
+Object *qdev_get_machine(void);
+
+/* FIXME: make this a link<> */
+void qdev_set_parent_bus(DeviceState *dev, BusState *bus);
+
+extern int qdev_hotplug;
+
+char *qdev_get_dev_path(DeviceState *dev);
+
+#endif
diff --git a/hw/qdev-monitor.h b/hw/qdev-monitor.h
new file mode 100644
index 0000000..220ceba
--- /dev/null
+++ b/hw/qdev-monitor.h
@@ -0,0 +1,16 @@
+#ifndef QEMU_QDEV_MONITOR_H
+#define QEMU_QDEV_MONITOR_H
+
+#include "qdev-core.h"
+#include "monitor.h"
+
+/*** monitor commands ***/
+
+void do_info_qtree(Monitor *mon);
+void do_info_qdm(Monitor *mon);
+int do_device_add(Monitor *mon, const QDict *qdict, QObject **ret_data);
+int do_device_del(Monitor *mon, const QDict *qdict, QObject **ret_data);
+int qdev_device_help(QemuOpts *opts);
+DeviceState *qdev_device_add(QemuOpts *opts);
+
+#endif
diff --git a/hw/qdev-properties.c b/hw/qdev-properties.c
index 8aca0d4..81d901c 100644
--- a/hw/qdev-properties.c
+++ b/hw/qdev-properties.c
@@ -4,6 +4,7 @@
 #include "blockdev.h"
 #include "hw/block-common.h"
 #include "net/hub.h"
+#include "qapi/qapi-visit-core.h"
 
 void *qdev_get_prop_ptr(DeviceState *dev, Property *prop)
 {
diff --git a/hw/qdev-properties.h b/hw/qdev-properties.h
new file mode 100644
index 0000000..e93336a
--- /dev/null
+++ b/hw/qdev-properties.h
@@ -0,0 +1,128 @@
+#ifndef QEMU_QDEV_PROPERTIES_H
+#define QEMU_QDEV_PROPERTIES_H
+
+#include "qdev-core.h"
+
+/*** qdev-properties.c ***/
+
+extern PropertyInfo qdev_prop_bit;
+extern PropertyInfo qdev_prop_uint8;
+extern PropertyInfo qdev_prop_uint16;
+extern PropertyInfo qdev_prop_uint32;
+extern PropertyInfo qdev_prop_int32;
+extern PropertyInfo qdev_prop_uint64;
+extern PropertyInfo qdev_prop_hex8;
+extern PropertyInfo qdev_prop_hex32;
+extern PropertyInfo qdev_prop_hex64;
+extern PropertyInfo qdev_prop_string;
+extern PropertyInfo qdev_prop_chr;
+extern PropertyInfo qdev_prop_ptr;
+extern PropertyInfo qdev_prop_macaddr;
+extern PropertyInfo qdev_prop_losttickpolicy;
+extern PropertyInfo qdev_prop_bios_chs_trans;
+extern PropertyInfo qdev_prop_drive;
+extern PropertyInfo qdev_prop_netdev;
+extern PropertyInfo qdev_prop_vlan;
+extern PropertyInfo qdev_prop_pci_devfn;
+extern PropertyInfo qdev_prop_blocksize;
+
+#define DEFINE_PROP(_name, _state, _field, _prop, _type) { \
+        .name      = (_name),                                    \
+        .info      = &(_prop),                                   \
+        .offset    = offsetof(_state, _field)                    \
+            + type_check(_type,typeof_field(_state, _field)),    \
+        }
+#define DEFINE_PROP_DEFAULT(_name, _state, _field, _defval, _prop, _type) { \
+        .name      = (_name),                                           \
+        .info      = &(_prop),                                          \
+        .offset    = offsetof(_state, _field)                           \
+            + type_check(_type,typeof_field(_state, _field)),           \
+        .qtype     = QTYPE_QINT,                                        \
+        .defval    = (_type)_defval,                                    \
+        }
+#define DEFINE_PROP_BIT(_name, _state, _field, _bit, _defval) {  \
+        .name      = (_name),                                    \
+        .info      = &(qdev_prop_bit),                           \
+        .bitnr    = (_bit),                                      \
+        .offset    = offsetof(_state, _field)                    \
+            + type_check(uint32_t,typeof_field(_state, _field)), \
+        .qtype     = QTYPE_QBOOL,                                \
+        .defval    = (bool)_defval,                              \
+        }
+
+#define DEFINE_PROP_UINT8(_n, _s, _f, _d)                       \
+    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_uint8, uint8_t)
+#define DEFINE_PROP_UINT16(_n, _s, _f, _d)                      \
+    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_uint16, uint16_t)
+#define DEFINE_PROP_UINT32(_n, _s, _f, _d)                      \
+    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_uint32, uint32_t)
+#define DEFINE_PROP_INT32(_n, _s, _f, _d)                      \
+    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_int32, int32_t)
+#define DEFINE_PROP_UINT64(_n, _s, _f, _d)                      \
+    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_uint64, uint64_t)
+#define DEFINE_PROP_HEX8(_n, _s, _f, _d)                       \
+    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_hex8, uint8_t)
+#define DEFINE_PROP_HEX32(_n, _s, _f, _d)                       \
+    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_hex32, uint32_t)
+#define DEFINE_PROP_HEX64(_n, _s, _f, _d)                       \
+    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_hex64, uint64_t)
+#define DEFINE_PROP_PCI_DEVFN(_n, _s, _f, _d)                   \
+    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_pci_devfn, int32_t)
+
+#define DEFINE_PROP_PTR(_n, _s, _f)             \
+    DEFINE_PROP(_n, _s, _f, qdev_prop_ptr, void*)
+#define DEFINE_PROP_CHR(_n, _s, _f)             \
+    DEFINE_PROP(_n, _s, _f, qdev_prop_chr, CharDriverState*)
+#define DEFINE_PROP_STRING(_n, _s, _f)             \
+    DEFINE_PROP(_n, _s, _f, qdev_prop_string, char*)
+#define DEFINE_PROP_NETDEV(_n, _s, _f)             \
+    DEFINE_PROP(_n, _s, _f, qdev_prop_netdev, NetClientState*)
+#define DEFINE_PROP_VLAN(_n, _s, _f)             \
+    DEFINE_PROP(_n, _s, _f, qdev_prop_vlan, NetClientState*)
+#define DEFINE_PROP_DRIVE(_n, _s, _f) \
+    DEFINE_PROP(_n, _s, _f, qdev_prop_drive, BlockDriverState *)
+#define DEFINE_PROP_MACADDR(_n, _s, _f)         \
+    DEFINE_PROP(_n, _s, _f, qdev_prop_macaddr, MACAddr)
+#define DEFINE_PROP_LOSTTICKPOLICY(_n, _s, _f, _d) \
+    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_losttickpolicy, \
+                        LostTickPolicy)
+#define DEFINE_PROP_BIOS_CHS_TRANS(_n, _s, _f, _d) \
+    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_bios_chs_trans, int)
+#define DEFINE_PROP_BLOCKSIZE(_n, _s, _f, _d) \
+    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_blocksize, uint16_t)
+
+#define DEFINE_PROP_END_OF_LIST()               \
+    {}
+
+/* Set properties between creation and init.  */
+void *qdev_get_prop_ptr(DeviceState *dev, Property *prop);
+int qdev_prop_parse(DeviceState *dev, const char *name, const char *value);
+void qdev_prop_set_bit(DeviceState *dev, const char *name, bool value);
+void qdev_prop_set_uint8(DeviceState *dev, const char *name, uint8_t value);
+void qdev_prop_set_uint16(DeviceState *dev, const char *name, uint16_t value);
+void qdev_prop_set_uint32(DeviceState *dev, const char *name, uint32_t value);
+void qdev_prop_set_int32(DeviceState *dev, const char *name, int32_t value);
+void qdev_prop_set_uint64(DeviceState *dev, const char *name, uint64_t value);
+void qdev_prop_set_string(DeviceState *dev, const char *name, const char *value);
+void qdev_prop_set_chr(DeviceState *dev, const char *name, CharDriverState *value);
+void qdev_prop_set_netdev(DeviceState *dev, const char *name, NetClientState *value);
+void qdev_prop_set_vlan(DeviceState *dev, const char *name, NetClientState *value);
+int qdev_prop_set_drive(DeviceState *dev, const char *name, BlockDriverState *value) QEMU_WARN_UNUSED_RESULT;
+void qdev_prop_set_drive_nofail(DeviceState *dev, const char *name, BlockDriverState *value);
+void qdev_prop_set_macaddr(DeviceState *dev, const char *name, uint8_t *value);
+void qdev_prop_set_enum(DeviceState *dev, const char *name, int value);
+/* FIXME: Remove opaque pointer properties.  */
+void qdev_prop_set_ptr(DeviceState *dev, const char *name, void *value);
+
+void qdev_prop_register_global_list(GlobalProperty *props);
+void qdev_prop_set_globals(DeviceState *dev);
+void error_set_from_qdev_prop_error(Error **errp, int ret, DeviceState *dev,
+                                    Property *prop, const char *value);
+
+/**
+ * @qdev_property_add_static - add a @Property to a device referencing a
+ * field in a struct.
+ */
+void qdev_property_add_static(DeviceState *dev, Property *prop, Error **errp);
+
+#endif
diff --git a/hw/qdev.c b/hw/qdev.c
index b5b74b9..36c3e4b 100644
--- a/hw/qdev.c
+++ b/hw/qdev.c
@@ -29,6 +29,7 @@
 #include "qdev.h"
 #include "sysemu.h"
 #include "error.h"
+#include "qapi/qapi-visit-core.h"
 
 int qdev_hotplug = 0;
 static bool qdev_hot_added = false;
diff --git a/hw/qdev.h b/hw/qdev.h
index d699194..365b8d6 100644
--- a/hw/qdev.h
+++ b/hw/qdev.h
@@ -1,372 +1,9 @@
 #ifndef QDEV_H
 #define QDEV_H
 
-#include "hw.h"
-#include "qemu-queue.h"
-#include "qemu-char.h"
-#include "qemu-option.h"
-#include "qapi/qapi-visit-core.h"
-#include "qemu/object.h"
-#include "error.h"
-
-typedef struct Property Property;
-
-typedef struct PropertyInfo PropertyInfo;
-
-typedef struct CompatProperty CompatProperty;
-
-typedef struct BusState BusState;
-
-typedef struct BusClass BusClass;
-
-enum DevState {
-    DEV_STATE_CREATED = 1,
-    DEV_STATE_INITIALIZED,
-};
-
-enum {
-    DEV_NVECTORS_UNSPECIFIED = -1,
-};
-
-#define TYPE_DEVICE "device"
-#define DEVICE(obj) OBJECT_CHECK(DeviceState, (obj), TYPE_DEVICE)
-#define DEVICE_CLASS(klass) OBJECT_CLASS_CHECK(DeviceClass, (klass), TYPE_DEVICE)
-#define DEVICE_GET_CLASS(obj) OBJECT_GET_CLASS(DeviceClass, (obj), TYPE_DEVICE)
-
-typedef int (*qdev_initfn)(DeviceState *dev);
-typedef int (*qdev_event)(DeviceState *dev);
-typedef void (*qdev_resetfn)(DeviceState *dev);
-
-typedef struct DeviceClass {
-    ObjectClass parent_class;
-
-    const char *fw_name;
-    const char *desc;
-    Property *props;
-    int no_user;
-
-    /* callbacks */
-    void (*reset)(DeviceState *dev);
-
-    /* device state */
-    const VMStateDescription *vmsd;
-
-    /* Private to qdev / bus.  */
-    qdev_initfn init;
-    qdev_event unplug;
-    qdev_event exit;
-    const char *bus_type;
-} DeviceClass;
-
-/* This structure should not be accessed directly.  We declare it here
-   so that it can be embedded in individual device state structures.  */
-struct DeviceState {
-    Object parent_obj;
-
-    const char *id;
-    enum DevState state;
-    QemuOpts *opts;
-    int hotplugged;
-    BusState *parent_bus;
-    int num_gpio_out;
-    qemu_irq *gpio_out;
-    int num_gpio_in;
-    qemu_irq *gpio_in;
-    QLIST_HEAD(, BusState) child_bus;
-    int num_child_bus;
-    int instance_id_alias;
-    int alias_required_for_version;
-};
-
-#define TYPE_BUS "bus"
-#define BUS(obj) OBJECT_CHECK(BusState, (obj), TYPE_BUS)
-#define BUS_CLASS(klass) OBJECT_CLASS_CHECK(BusClass, (klass), TYPE_BUS)
-#define BUS_GET_CLASS(obj) OBJECT_GET_CLASS(BusClass, (obj), TYPE_BUS)
-
-struct BusClass {
-    ObjectClass parent_class;
-
-    /* FIXME first arg should be BusState */
-    void (*print_dev)(Monitor *mon, DeviceState *dev, int indent);
-    char *(*get_dev_path)(DeviceState *dev);
-    /*
-     * This callback is used to create Open Firmware device path in accordance
-     * with OF spec http://forthworks.com/standards/of1275.pdf. Individual bus
-     * bindings can be found at http://playground.sun.com/1275/bindings/.
-     */
-    char *(*get_fw_dev_path)(DeviceState *dev);
-    int (*reset)(BusState *bus);
-};
-
-typedef struct BusChild {
-    DeviceState *child;
-    int index;
-    QTAILQ_ENTRY(BusChild) sibling;
-} BusChild;
-
-/**
- * BusState:
- * @qom_allocated: Indicates whether the object was allocated by QOM.
- * @glib_allocated: Indicates whether the object was initialized in-place
- * yet is expected to be freed with g_free().
- */
-struct BusState {
-    Object obj;
-    DeviceState *parent;
-    const char *name;
-    int allow_hotplug;
-    bool qom_allocated;
-    bool glib_allocated;
-    int max_index;
-    QTAILQ_HEAD(ChildrenHead, BusChild) children;
-    QLIST_ENTRY(BusState) sibling;
-};
-
-struct Property {
-    const char   *name;
-    PropertyInfo *info;
-    int          offset;
-    uint8_t      bitnr;
-    uint8_t      qtype;
-    int64_t      defval;
-};
-
-struct PropertyInfo {
-    const char *name;
-    const char *legacy_name;
-    const char **enum_table;
-    int (*parse)(DeviceState *dev, Property *prop, const char *str);
-    int (*print)(DeviceState *dev, Property *prop, char *dest, size_t len);
-    ObjectPropertyAccessor *get;
-    ObjectPropertyAccessor *set;
-    ObjectPropertyRelease *release;
-};
-
-typedef struct GlobalProperty {
-    const char *driver;
-    const char *property;
-    const char *value;
-    QTAILQ_ENTRY(GlobalProperty) next;
-} GlobalProperty;
-
-/*** Board API.  This should go away once we have a machine config file.  ***/
-
-DeviceState *qdev_create(BusState *bus, const char *name);
-DeviceState *qdev_try_create(BusState *bus, const char *name);
-bool qdev_exists(const char *name);
-int qdev_device_help(QemuOpts *opts);
-DeviceState *qdev_device_add(QemuOpts *opts);
-int qdev_init(DeviceState *dev) QEMU_WARN_UNUSED_RESULT;
-void qdev_init_nofail(DeviceState *dev);
-void qdev_set_legacy_instance_id(DeviceState *dev, int alias_id,
-                                 int required_for_version);
-void qdev_unplug(DeviceState *dev, Error **errp);
-void qdev_free(DeviceState *dev);
-int qdev_simple_unplug_cb(DeviceState *dev);
-void qdev_machine_creation_done(void);
-bool qdev_machine_modified(void);
-
-qemu_irq qdev_get_gpio_in(DeviceState *dev, int n);
-void qdev_connect_gpio_out(DeviceState *dev, int n, qemu_irq pin);
-
-BusState *qdev_get_child_bus(DeviceState *dev, const char *name);
-
-/*** Device API.  ***/
-
-/* Register device properties.  */
-/* GPIO inputs also double as IRQ sinks.  */
-void qdev_init_gpio_in(DeviceState *dev, qemu_irq_handler handler, int n);
-void qdev_init_gpio_out(DeviceState *dev, qemu_irq *pins, int n);
-
-BusState *qdev_get_parent_bus(DeviceState *dev);
-
-/*** BUS API. ***/
-
-DeviceState *qdev_find_recursive(BusState *bus, const char *id);
-
-/* Returns 0 to walk children, > 0 to skip walk, < 0 to terminate walk. */
-typedef int (qbus_walkerfn)(BusState *bus, void *opaque);
-typedef int (qdev_walkerfn)(DeviceState *dev, void *opaque);
-
-void qbus_create_inplace(BusState *bus, const char *typename,
-                         DeviceState *parent, const char *name);
-BusState *qbus_create(const char *typename, DeviceState *parent, const char *name);
-/* Returns > 0 if either devfn or busfn skip walk somewhere in cursion,
- *         < 0 if either devfn or busfn terminate walk somewhere in cursion,
- *           0 otherwise. */
-int qbus_walk_children(BusState *bus, qdev_walkerfn *devfn,
-                       qbus_walkerfn *busfn, void *opaque);
-int qdev_walk_children(DeviceState *dev, qdev_walkerfn *devfn,
-                       qbus_walkerfn *busfn, void *opaque);
-void qdev_reset_all(DeviceState *dev);
-void qbus_reset_all_fn(void *opaque);
-
-void qbus_free(BusState *bus);
-
-#define FROM_QBUS(type, dev) DO_UPCAST(type, qbus, dev)
-
-/* This should go away once we get rid of the NULL bus hack */
-BusState *sysbus_get_default(void);
-
-/*** monitor commands ***/
-
-void do_info_qtree(Monitor *mon);
-void do_info_qdm(Monitor *mon);
-int do_device_add(Monitor *mon, const QDict *qdict, QObject **ret_data);
-int do_device_del(Monitor *mon, const QDict *qdict, QObject **ret_data);
-
-/*** qdev-properties.c ***/
-
-extern PropertyInfo qdev_prop_bit;
-extern PropertyInfo qdev_prop_uint8;
-extern PropertyInfo qdev_prop_uint16;
-extern PropertyInfo qdev_prop_uint32;
-extern PropertyInfo qdev_prop_int32;
-extern PropertyInfo qdev_prop_uint64;
-extern PropertyInfo qdev_prop_hex8;
-extern PropertyInfo qdev_prop_hex32;
-extern PropertyInfo qdev_prop_hex64;
-extern PropertyInfo qdev_prop_string;
-extern PropertyInfo qdev_prop_chr;
-extern PropertyInfo qdev_prop_ptr;
-extern PropertyInfo qdev_prop_macaddr;
-extern PropertyInfo qdev_prop_losttickpolicy;
-extern PropertyInfo qdev_prop_bios_chs_trans;
-extern PropertyInfo qdev_prop_drive;
-extern PropertyInfo qdev_prop_netdev;
-extern PropertyInfo qdev_prop_vlan;
-extern PropertyInfo qdev_prop_pci_devfn;
-extern PropertyInfo qdev_prop_blocksize;
-extern PropertyInfo qdev_prop_pci_host_devaddr;
-
-#define DEFINE_PROP(_name, _state, _field, _prop, _type) { \
-        .name      = (_name),                                    \
-        .info      = &(_prop),                                   \
-        .offset    = offsetof(_state, _field)                    \
-            + type_check(_type,typeof_field(_state, _field)),    \
-        }
-#define DEFINE_PROP_DEFAULT(_name, _state, _field, _defval, _prop, _type) { \
-        .name      = (_name),                                           \
-        .info      = &(_prop),                                          \
-        .offset    = offsetof(_state, _field)                           \
-            + type_check(_type,typeof_field(_state, _field)),           \
-        .qtype     = QTYPE_QINT,                                        \
-        .defval    = (_type)_defval,                                    \
-        }
-#define DEFINE_PROP_BIT(_name, _state, _field, _bit, _defval) {  \
-        .name      = (_name),                                    \
-        .info      = &(qdev_prop_bit),                           \
-        .bitnr    = (_bit),                                      \
-        .offset    = offsetof(_state, _field)                    \
-            + type_check(uint32_t,typeof_field(_state, _field)), \
-        .qtype     = QTYPE_QBOOL,                                \
-        .defval    = (bool)_defval,                              \
-        }
-
-#define DEFINE_PROP_UINT8(_n, _s, _f, _d)                       \
-    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_uint8, uint8_t)
-#define DEFINE_PROP_UINT16(_n, _s, _f, _d)                      \
-    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_uint16, uint16_t)
-#define DEFINE_PROP_UINT32(_n, _s, _f, _d)                      \
-    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_uint32, uint32_t)
-#define DEFINE_PROP_INT32(_n, _s, _f, _d)                      \
-    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_int32, int32_t)
-#define DEFINE_PROP_UINT64(_n, _s, _f, _d)                      \
-    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_uint64, uint64_t)
-#define DEFINE_PROP_HEX8(_n, _s, _f, _d)                       \
-    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_hex8, uint8_t)
-#define DEFINE_PROP_HEX32(_n, _s, _f, _d)                       \
-    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_hex32, uint32_t)
-#define DEFINE_PROP_HEX64(_n, _s, _f, _d)                       \
-    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_hex64, uint64_t)
-#define DEFINE_PROP_PCI_DEVFN(_n, _s, _f, _d)                   \
-    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_pci_devfn, int32_t)
-
-#define DEFINE_PROP_PTR(_n, _s, _f)             \
-    DEFINE_PROP(_n, _s, _f, qdev_prop_ptr, void*)
-#define DEFINE_PROP_CHR(_n, _s, _f)             \
-    DEFINE_PROP(_n, _s, _f, qdev_prop_chr, CharDriverState*)
-#define DEFINE_PROP_STRING(_n, _s, _f)             \
-    DEFINE_PROP(_n, _s, _f, qdev_prop_string, char*)
-#define DEFINE_PROP_NETDEV(_n, _s, _f)             \
-    DEFINE_PROP(_n, _s, _f, qdev_prop_netdev, NetClientState*)
-#define DEFINE_PROP_VLAN(_n, _s, _f)             \
-    DEFINE_PROP(_n, _s, _f, qdev_prop_vlan, NetClientState*)
-#define DEFINE_PROP_DRIVE(_n, _s, _f) \
-    DEFINE_PROP(_n, _s, _f, qdev_prop_drive, BlockDriverState *)
-#define DEFINE_PROP_MACADDR(_n, _s, _f)         \
-    DEFINE_PROP(_n, _s, _f, qdev_prop_macaddr, MACAddr)
-#define DEFINE_PROP_LOSTTICKPOLICY(_n, _s, _f, _d) \
-    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_losttickpolicy, \
-                        LostTickPolicy)
-#define DEFINE_PROP_BIOS_CHS_TRANS(_n, _s, _f, _d) \
-    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_bios_chs_trans, int)
-#define DEFINE_PROP_BLOCKSIZE(_n, _s, _f, _d) \
-    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_blocksize, uint16_t)
-#define DEFINE_PROP_PCI_HOST_DEVADDR(_n, _s, _f) \
-    DEFINE_PROP(_n, _s, _f, qdev_prop_pci_host_devaddr, PCIHostDeviceAddress)
-
-#define DEFINE_PROP_END_OF_LIST()               \
-    {}
-
-/* Set properties between creation and init.  */
-void *qdev_get_prop_ptr(DeviceState *dev, Property *prop);
-int qdev_prop_parse(DeviceState *dev, const char *name, const char *value);
-void qdev_prop_set_bit(DeviceState *dev, const char *name, bool value);
-void qdev_prop_set_uint8(DeviceState *dev, const char *name, uint8_t value);
-void qdev_prop_set_uint16(DeviceState *dev, const char *name, uint16_t value);
-void qdev_prop_set_uint32(DeviceState *dev, const char *name, uint32_t value);
-void qdev_prop_set_int32(DeviceState *dev, const char *name, int32_t value);
-void qdev_prop_set_uint64(DeviceState *dev, const char *name, uint64_t value);
-void qdev_prop_set_string(DeviceState *dev, const char *name, const char *value);
-void qdev_prop_set_chr(DeviceState *dev, const char *name, CharDriverState *value);
-void qdev_prop_set_netdev(DeviceState *dev, const char *name, NetClientState *value);
-int qdev_prop_set_drive(DeviceState *dev, const char *name, BlockDriverState *value) QEMU_WARN_UNUSED_RESULT;
-void qdev_prop_set_drive_nofail(DeviceState *dev, const char *name, BlockDriverState *value);
-void qdev_prop_set_macaddr(DeviceState *dev, const char *name, uint8_t *value);
-void qdev_prop_set_enum(DeviceState *dev, const char *name, int value);
-/* FIXME: Remove opaque pointer properties.  */
-void qdev_prop_set_ptr(DeviceState *dev, const char *name, void *value);
-
-void qdev_prop_register_global_list(GlobalProperty *props);
-void qdev_prop_set_globals(DeviceState *dev);
-void error_set_from_qdev_prop_error(Error **errp, int ret, DeviceState *dev,
-                                    Property *prop, const char *value);
-
-char *qdev_get_fw_dev_path(DeviceState *dev);
-
-/**
- * @qdev_property_add_static - add a @Property to a device referencing a
- * field in a struct.
- */
-void qdev_property_add_static(DeviceState *dev, Property *prop, Error **errp);
-
-/**
- * @qdev_machine_init
- *
- * Initialize platform devices before machine init.  This is a hack until full
- * support for composition is added.
- */
-void qdev_machine_init(void);
-
-/**
- * @device_reset
- *
- * Reset a single device (by calling the reset method).
- */
-void device_reset(DeviceState *dev);
-
-const VMStateDescription *qdev_get_vmsd(DeviceState *dev);
-
-const char *qdev_fw_name(DeviceState *dev);
-
-Object *qdev_get_machine(void);
-
-/* FIXME: make this a link<> */
-void qdev_set_parent_bus(DeviceState *dev, BusState *bus);
-
-extern int qdev_hotplug;
-
-char *qdev_get_dev_path(DeviceState *dev);
+#include "hw/hw.h"
+#include "qdev-core.h"
+#include "qdev-properties.h"
+#include "qdev-monitor.h"
 
 #endif
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 20 06:12:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Aug 2012 06:12:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3LD9-0003Qj-30; Mon, 20 Aug 2012 06:11:39 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <sw@weilnetz.de>) id 1T3Jne-0002lS-KZ
	for xen-devel@lists.xensource.com; Mon, 20 Aug 2012 04:41:14 +0000
X-Env-Sender: sw@weilnetz.de
X-Msg-Ref: server-14.tower-27.messagelabs.com!1345437666!3205499!1
X-Originating-IP: [78.47.199.172]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2230 invoked from network); 20 Aug 2012 04:41:07 -0000
Received: from v220110690675601.yourvserver.net (HELO
	v220110690675601.yourvserver.net) (78.47.199.172)
	by server-14.tower-27.messagelabs.com with SMTP;
	20 Aug 2012 04:41:07 -0000
Received: from localhost (v220110690675601.yourvserver.net.local [127.0.0.1])
	by v220110690675601.yourvserver.net (Postfix) with ESMTP id
	928907280029; Mon, 20 Aug 2012 06:41:06 +0200 (CEST)
X-Virus-Scanned: Debian amavisd-new at weilnetz.de
Received: from v220110690675601.yourvserver.net ([127.0.0.1])
	by localhost (v220110690675601.yourvserver.net [127.0.0.1])
	(amavisd-new, port 10024)
	with ESMTP id jg-Y46d+2Fir; Mon, 20 Aug 2012 06:41:05 +0200 (CEST)
Received: from [192.168.178.20] (p5086EF7B.dip.t-dialin.net [80.134.239.123])
	by v220110690675601.yourvserver.net (Postfix) with ESMTPSA id
	BAAEF7280028; Mon, 20 Aug 2012 06:41:04 +0200 (CEST)
Message-ID: <5031BFE0.1070909@weilnetz.de>
Date: Mon, 20 Aug 2012 06:41:04 +0200
From: Stefan Weil <sw@weilnetz.de>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:14.0) Gecko/20120714 Thunderbird/14.0
MIME-Version: 1.0
To: Igor Mammedov <imammedo@redhat.com>
References: <1345419579-25499-1-git-send-email-imammedo@redhat.com>
	<1345419579-25499-2-git-send-email-imammedo@redhat.com>
In-Reply-To: <1345419579-25499-2-git-send-email-imammedo@redhat.com>
X-Mailman-Approved-At: Mon, 20 Aug 2012 06:11:34 +0000
Cc: peter.maydell@linaro.org, jan.kiszka@siemens.com, mjt@tls.msk.ru,
	qemu-devel@nongnu.org, armbru@redhat.com, blauwirbel@gmail.com,
	kraxel@redhat.com, xen-devel@lists.xensource.com,
	i.mitsyanko@samsung.com, mdroth@linux.vnet.ibm.com,
	avi@redhat.com, anthony.perard@citrix.com, lersek@redhat.com,
	stefanha@linux.vnet.ibm.com, stefano.stabellini@eu.citrix.com,
	lcapitulino@redhat.com, rth@twiddle.net, kwolf@redhat.com,
	aliguori@us.ibm.com, mtosatti@redhat.com, pbonzini@redhat.com,
	afaerber@suse.de
Subject: Re: [Xen-devel] [PATCH 1/5] move qemu_irq typedef out of
	cpu-common.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 20.08.2012 01:39, Igor Mammedov wrote:
> It's necessary for making CPU a child of DEVICE without
> causing circular header deps.
>
> Signed-off-by: Igor Mammedov <imammedo@redhat.com>
> ---
>   hw/arm-misc.h |    1 +
>   hw/bt.h       |    2 ++
>   hw/devices.h  |    2 ++
>   hw/irq.h      |    2 ++
>   hw/omap.h     |    1 +
>   hw/soc_dma.h  |    1 +
>   hw/xen.h      |    1 +
>   qemu-common.h |    1 -
>   sysemu.h      |    1 +
>   9 files changed, 11 insertions(+), 1 deletions(-)
>
> diff --git a/hw/arm-misc.h b/hw/arm-misc.h
> index bdd8fec..b13aa59 100644
> --- a/hw/arm-misc.h
> +++ b/hw/arm-misc.h
> @@ -12,6 +12,7 @@
>   #define ARM_MISC_H 1
>   
>   #include "memory.h"
> +#include "hw/irq.h"
>   
>   /* The CPU is also modeled as an interrupt controller.  */
>   #define ARM_PIC_CPU_IRQ 0
> diff --git a/hw/bt.h b/hw/bt.h
> index a48b8d4..ebf6a37 100644
> --- a/hw/bt.h
> +++ b/hw/bt.h
> @@ -23,6 +23,8 @@
>    * along with this program; if not, see <http://www.gnu.org/licenses/>.
>    */
>   
> +#include "hw/irq.h"
> +
>   /* BD Address */
>   typedef struct {
>       uint8_t b[6];
> diff --git a/hw/devices.h b/hw/devices.h
> index 1a55c1e..c60bcab 100644
> --- a/hw/devices.h
> +++ b/hw/devices.h
> @@ -1,6 +1,8 @@
>   #ifndef QEMU_DEVICES_H
>   #define QEMU_DEVICES_H
>   
> +#include "hw/irq.h"
> +
>   /* ??? Not all users of this file can include cpu-common.h.  */
>   struct MemoryRegion;
>   
> diff --git a/hw/irq.h b/hw/irq.h
> index 56c55f0..1339a3a 100644
> --- a/hw/irq.h
> +++ b/hw/irq.h
> @@ -3,6 +3,8 @@
>   
>   /* Generic IRQ/GPIO pin infrastructure.  */
>   
> +typedef struct IRQState *qemu_irq;
> +
>   typedef void (*qemu_irq_handler)(void *opaque, int n, int level);
>   
>   void qemu_set_irq(qemu_irq irq, int level);
> diff --git a/hw/omap.h b/hw/omap.h
> index 413851b..8b08462 100644
> --- a/hw/omap.h
> +++ b/hw/omap.h
> @@ -19,6 +19,7 @@
>   #ifndef hw_omap_h
>   #include "memory.h"
>   # define hw_omap_h		"omap.h"
> +#include "hw/irq.h"
>   
>   # define OMAP_EMIFS_BASE	0x00000000
>   # define OMAP2_Q0_BASE		0x00000000
> diff --git a/hw/soc_dma.h b/hw/soc_dma.h
> index 904b26c..e386ace 100644
> --- a/hw/soc_dma.h
> +++ b/hw/soc_dma.h
> @@ -19,6 +19,7 @@
>    */
>   
>   #include "memory.h"
> +#include "hw/irq.h"
>   
>   struct soc_dma_s;
>   struct soc_dma_ch_s;
> diff --git a/hw/xen.h b/hw/xen.h
> index e5926b7..ff11dfd 100644
> --- a/hw/xen.h
> +++ b/hw/xen.h
> @@ -8,6 +8,7 @@
>    */
>   #include <inttypes.h>
>   
> +#include "hw/irq.h"
>   #include "qemu-common.h"
>   
>   /* xen-machine.c */
> diff --git a/qemu-common.h b/qemu-common.h
> index e5c2bcd..6677a30 100644
> --- a/qemu-common.h
> +++ b/qemu-common.h
> @@ -273,7 +273,6 @@ typedef struct PCIEPort PCIEPort;
>   typedef struct PCIESlot PCIESlot;
>   typedef struct MSIMessage MSIMessage;
>   typedef struct SerialState SerialState;
> -typedef struct IRQState *qemu_irq;
>   typedef struct PCMCIACardState PCMCIACardState;
>   typedef struct MouseTransformInfo MouseTransformInfo;
>   typedef struct uWireSlave uWireSlave;

Just move the declaration of qemu_irq to the beginning of qemu-common.h
and leave the rest of the files untouched. That also fixes the circular
dependency.

I already have a patch that does this, so you can integrate it into your
series instead of this one.
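A minimal sketch of why the forward typedef alone suffices (hypothetical stub, simplified from QEMU's actual headers; set_irq_stub stands in for qemu_set_irq): struct IRQState can stay incomplete wherever a qemu_irq is merely passed around, so qemu-common.h never needs to pull in hw/irq.h.

```c
#include <assert.h>
#include <stddef.h>

/* Forward typedef as it could appear near the top of qemu-common.h:
 * struct IRQState is deliberately left incomplete here. */
typedef struct IRQState *qemu_irq;

static int observed_level;

/* Hypothetical stand-in for qemu_set_irq: it passes the opaque qemu_irq
 * handle without dereferencing it, so no complete struct definition is
 * ever required in this translation unit. */
static void set_irq_stub(qemu_irq irq, int level)
{
    (void)irq;              /* opaque handle; never dereferenced */
    observed_level = level;
}
```

Any header that only declares functions taking or returning qemu_irq can compile against the typedef alone; only code that dereferences the pointer needs the full struct from hw/irq.h.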


> diff --git a/sysemu.h b/sysemu.h
> index 65552ac..f765821 100644
> --- a/sysemu.h
> +++ b/sysemu.h
> @@ -9,6 +9,7 @@
>   #include "qapi-types.h"
>   #include "notify.h"
>   #include "main-loop.h"
> +#include "hw/irq.h"
>   
>   /* vl.c */
>   


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 20 06:12:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Aug 2012 06:12:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3LD6-0003Pw-TJ; Mon, 20 Aug 2012 06:11:36 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <imammedo@redhat.com>) id 1T3F93-0004rp-AX
	for xen-devel@lists.xensource.com; Sun, 19 Aug 2012 23:43:01 +0000
Received: from [85.158.143.35:12211] by server-3.bemta-4.messagelabs.com id
	47/6A-09529-40A71305; Sun, 19 Aug 2012 23:43:00 +0000
X-Env-Sender: imammedo@redhat.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1345419779!14300116!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQ5OTc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15748 invoked from network); 19 Aug 2012 23:42:59 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-15.tower-21.messagelabs.com with SMTP;
	19 Aug 2012 23:42:59 -0000
Received: from int-mx10.intmail.prod.int.phx2.redhat.com
	(int-mx10.intmail.prod.int.phx2.redhat.com [10.5.11.23])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id q7JNgk9L016966
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Sun, 19 Aug 2012 19:42:47 -0400
Received: from dell-pet610-01.lab.eng.brq.redhat.com
	(dell-pet610-01.lab.eng.brq.redhat.com [10.34.42.20])
	by int-mx10.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP
	id q7JNgdH1020926; Sun, 19 Aug 2012 19:42:40 -0400
From: Igor Mammedov <imammedo@redhat.com>
To: qemu-devel@nongnu.org
Date: Mon, 20 Aug 2012 01:39:34 +0200
Message-Id: <1345419579-25499-1-git-send-email-imammedo@redhat.com>
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.23
X-Mailman-Approved-At: Mon, 20 Aug 2012 06:11:35 +0000
Cc: peter.maydell@linaro.org, jan.kiszka@siemens.com, mjt@tls.msk.ru,
	armbru@redhat.com, blauwirbel@gmail.com, kraxel@redhat.com,
	xen-devel@lists.xensource.com, i.mitsyanko@samsung.com,
	mdroth@linux.vnet.ibm.com, avi@redhat.com,
	anthony.perard@citrix.com, lersek@redhat.com,
	stefanha@linux.vnet.ibm.com, stefano.stabellini@eu.citrix.com,
	sw@weilnetz.de, lcapitulino@redhat.com, rth@twiddle.net,
	kwolf@redhat.com, aliguori@us.ibm.com, mtosatti@redhat.com,
	pbonzini@redhat.com, afaerber@suse.de
Subject: [Xen-devel] [PATCH 0/5 v2] cpu: make a child of DeviceState
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is the 3rd approach to making CPU a child of DeviceState
for both kinds of targets, *-user and *-softmmu. It seems that
with the current state of qemu it doesn't take too much effort
to make it compile. Please check that it doesn't break
anything on targets/archs/hosts other than i386.

What's been tested:
  - compile-tested by building all targets on an FC17 x86_64 host.
  - briefly tested the i386 user and softmmu targets.

Anthony Liguori (1):
  qdev: split up header so it can be used in cpu.h

Igor Mammedov (4):
  move qemu_irq typedef out of cpu-common.h
  qapi-types.h doesn't really need to include qemu-common.h
  cleanup error.h, included qapi-types.h already has stdbool.h
  make CPU a child of DeviceState

 error.h               |    1 -
 hw/arm-misc.h         |    1 +
 hw/bt.h               |    2 +
 hw/devices.h          |    2 +
 hw/irq.h              |    2 +
 hw/mc146818rtc.c      |    1 +
 hw/omap.h             |    1 +
 hw/qdev-addr.c        |    1 +
 hw/qdev-core.h        |  240 ++++++++++++++++++++++++++++++++
 hw/qdev-monitor.h     |   16 ++
 hw/qdev-properties.c  |    1 +
 hw/qdev-properties.h  |  128 +++++++++++++++++
 hw/qdev.c             |    1 +
 hw/qdev.h             |  371 +------------------------------------------------
 hw/soc_dma.h          |    1 +
 hw/xen.h              |    1 +
 include/qemu/cpu.h    |    6 +-
 qemu-common.h         |    1 -
 scripts/qapi-types.py |    2 +-
 sysemu.h              |    1 +
 20 files changed, 407 insertions(+), 373 deletions(-)
 create mode 100644 hw/qdev-core.h
 create mode 100644 hw/qdev-monitor.h
 create mode 100644 hw/qdev-properties.h
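
A structural sketch of what "CPU as a child of DeviceState" amounts to (hypothetical, heavily simplified stand-ins for the QOM types; field names trimmed): the CPU state embeds DeviceState as its first member, so the device portion sits at offset 0 and an upcast is a plain pointer view.

```c
#include <assert.h>
#include <stddef.h>

/* Heavily simplified stand-ins for the QOM types involved (hypothetical;
 * the real definitions live in qemu/object.h and hw/qdev-core.h). */
typedef struct Object { const char *type_name; } Object;
typedef struct DeviceState { Object parent_obj; int hotplugged; } DeviceState;

/* With CPU a child of DEVICE, CPUState embeds DeviceState first. */
typedef struct CPUState { DeviceState parent_obj; int cpu_index; } CPUState;

/* Upcast: valid precisely because parent_obj is the first member, so it
 * lives at offset 0 within CPUState. */
static DeviceState *cpu_to_device(CPUState *cpu)
{
    return &cpu->parent_obj;
}
```

This is the layering that requires cpu.h to see the DeviceState definition, hence the header split so it can include qdev-core.h without the monitor/property machinery.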


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 20 06:12:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Aug 2012 06:12:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3LD8-0003QP-Br; Mon, 20 Aug 2012 06:11:38 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <imammedo@redhat.com>) id 1T3F9L-0004t2-Mv
	for xen-devel@lists.xensource.com; Sun, 19 Aug 2012 23:43:19 +0000
X-Env-Sender: imammedo@redhat.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1345419792!4921128!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQ5OTc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 799 invoked from network); 19 Aug 2012 23:43:13 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-10.tower-27.messagelabs.com with SMTP;
	19 Aug 2012 23:43:13 -0000
Received: from int-mx10.intmail.prod.int.phx2.redhat.com
	(int-mx10.intmail.prod.int.phx2.redhat.com [10.5.11.23])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id q7JNh6bF016999
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Sun, 19 Aug 2012 19:43:06 -0400
Received: from dell-pet610-01.lab.eng.brq.redhat.com
	(dell-pet610-01.lab.eng.brq.redhat.com [10.34.42.20])
	by int-mx10.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP
	id q7JNgdH5020926; Sun, 19 Aug 2012 19:43:02 -0400
From: Igor Mammedov <imammedo@redhat.com>
To: qemu-devel@nongnu.org
Date: Mon, 20 Aug 2012 01:39:38 +0200
Message-Id: <1345419579-25499-5-git-send-email-imammedo@redhat.com>
In-Reply-To: <1345419579-25499-1-git-send-email-imammedo@redhat.com>
References: <1345419579-25499-1-git-send-email-imammedo@redhat.com>
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.23
X-Mailman-Approved-At: Mon, 20 Aug 2012 06:11:35 +0000
Cc: peter.maydell@linaro.org, jan.kiszka@siemens.com, mjt@tls.msk.ru,
	armbru@redhat.com, blauwirbel@gmail.com, kraxel@redhat.com,
	xen-devel@lists.xensource.com, i.mitsyanko@samsung.com,
	mdroth@linux.vnet.ibm.com, avi@redhat.com,
	anthony.perard@citrix.com, lersek@redhat.com,
	stefanha@linux.vnet.ibm.com, stefano.stabellini@eu.citrix.com,
	sw@weilnetz.de, lcapitulino@redhat.com, rth@twiddle.net,
	kwolf@redhat.com, aliguori@us.ibm.com, mtosatti@redhat.com,
	pbonzini@redhat.com, afaerber@suse.de
Subject: [Xen-devel] [PATCH 4/5] cleanup error.h,
	included qapi-types.h already has stdbool.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
---
 error.h |    1 -
 1 files changed, 0 insertions(+), 1 deletions(-)

diff --git a/error.h b/error.h
index 96fc203..643a372 100644
--- a/error.h
+++ b/error.h
@@ -14,7 +14,6 @@
 
 #include "compiler.h"
 #include "qapi-types.h"
-#include <stdbool.h>
 
 /**
 * A class representing internal errors within QEMU.  An error has an ErrorClass
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 20 06:12:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Aug 2012 06:12:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3LD7-0003QB-L0; Mon, 20 Aug 2012 06:11:37 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <imammedo@redhat.com>) id 1T3F98-0004sL-K2
	for xen-devel@lists.xensource.com; Sun, 19 Aug 2012 23:43:07 +0000
Received: from [85.158.138.51:20511] by server-12.bemta-3.messagelabs.com id
	D6/8E-04073-90A71305; Sun, 19 Aug 2012 23:43:05 +0000
X-Env-Sender: imammedo@redhat.com
X-Msg-Ref: server-14.tower-174.messagelabs.com!1345419784!22759023!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTUwMDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21187 invoked from network); 19 Aug 2012 23:43:04 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-14.tower-174.messagelabs.com with SMTP;
	19 Aug 2012 23:43:04 -0000
Received: from int-mx10.intmail.prod.int.phx2.redhat.com
	(int-mx10.intmail.prod.int.phx2.redhat.com [10.5.11.23])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id q7JNguQC016986
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Sun, 19 Aug 2012 19:42:56 -0400
Received: from dell-pet610-01.lab.eng.brq.redhat.com
	(dell-pet610-01.lab.eng.brq.redhat.com [10.34.42.20])
	by int-mx10.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP
	id q7JNgdH3020926; Sun, 19 Aug 2012 19:42:52 -0400
From: Igor Mammedov <imammedo@redhat.com>
To: qemu-devel@nongnu.org
Date: Mon, 20 Aug 2012 01:39:36 +0200
Message-Id: <1345419579-25499-3-git-send-email-imammedo@redhat.com>
In-Reply-To: <1345419579-25499-1-git-send-email-imammedo@redhat.com>
References: <1345419579-25499-1-git-send-email-imammedo@redhat.com>
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.23
X-Mailman-Approved-At: Mon, 20 Aug 2012 06:11:35 +0000
Cc: peter.maydell@linaro.org, jan.kiszka@siemens.com, mjt@tls.msk.ru,
	armbru@redhat.com, blauwirbel@gmail.com, kraxel@redhat.com,
	xen-devel@lists.xensource.com, i.mitsyanko@samsung.com,
	mdroth@linux.vnet.ibm.com, avi@redhat.com,
	anthony.perard@citrix.com, lersek@redhat.com,
	stefanha@linux.vnet.ibm.com, stefano.stabellini@eu.citrix.com,
	sw@weilnetz.de, lcapitulino@redhat.com, rth@twiddle.net,
	kwolf@redhat.com, aliguori@us.ibm.com, mtosatti@redhat.com,
	pbonzini@redhat.com, afaerber@suse.de
Subject: [Xen-devel] [PATCH 2/5] qdev: split up header so it can be used in
	cpu.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Anthony Liguori <aliguori@us.ibm.com>

Header file dependency is a frickin' nightmare right now.  cpu.h tends to get
included in our 'include everything' header files but qdev also needs to include
those headers mainly for qdev-properties since it knows about CharDriverState
and friends.

We can solve this for now by splitting out qdev.h along the same lines that we
previously split the C file.  Then cpu.h just needs to include qdev-core.h.

v1->v2:
  move qemu_irq typedef out of this patch into a separate one with an additional
  cleanup of headers to fix build breakage

Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
---
 hw/mc146818rtc.c     |    1 +
 hw/qdev-addr.c       |    1 +
 hw/qdev-core.h       |  240 ++++++++++++++++++++++++++++++++
 hw/qdev-monitor.h    |   16 ++
 hw/qdev-properties.c |    1 +
 hw/qdev-properties.h |  128 +++++++++++++++++
 hw/qdev.c            |    1 +
 hw/qdev.h            |  371 +-------------------------------------------------
 8 files changed, 392 insertions(+), 367 deletions(-)
 create mode 100644 hw/qdev-core.h
 create mode 100644 hw/qdev-monitor.h
 create mode 100644 hw/qdev-properties.h

diff --git a/hw/mc146818rtc.c b/hw/mc146818rtc.c
index 3777f85..3780617 100644
--- a/hw/mc146818rtc.c
+++ b/hw/mc146818rtc.c
@@ -25,6 +25,7 @@
 #include "qemu-timer.h"
 #include "sysemu.h"
 #include "mc146818rtc.h"
+#include "qapi/qapi-visit-core.h"
 
 #ifdef TARGET_I386
 #include "apic.h"
diff --git a/hw/qdev-addr.c b/hw/qdev-addr.c
index b711b6b..5b5d38f 100644
--- a/hw/qdev-addr.c
+++ b/hw/qdev-addr.c
@@ -1,6 +1,7 @@
 #include "qdev.h"
 #include "qdev-addr.h"
 #include "targphys.h"
+#include "qapi/qapi-visit-core.h"
 
 /* --- target physical address --- */
 
diff --git a/hw/qdev-core.h b/hw/qdev-core.h
new file mode 100644
index 0000000..ca205fc
--- /dev/null
+++ b/hw/qdev-core.h
@@ -0,0 +1,240 @@
+#ifndef QDEV_CORE_H
+#define QDEV_CORE_H
+
+#include "qemu-queue.h"
+#include "qemu-option.h"
+#include "qemu/object.h"
+#include "hw/irq.h"
+#include "error.h"
+
+typedef struct Property Property;
+
+typedef struct PropertyInfo PropertyInfo;
+
+typedef struct CompatProperty CompatProperty;
+
+typedef struct BusState BusState;
+
+typedef struct BusClass BusClass;
+
+enum DevState {
+    DEV_STATE_CREATED = 1,
+    DEV_STATE_INITIALIZED,
+};
+
+enum {
+    DEV_NVECTORS_UNSPECIFIED = -1,
+};
+
+#define TYPE_DEVICE "device"
+#define DEVICE(obj) OBJECT_CHECK(DeviceState, (obj), TYPE_DEVICE)
+#define DEVICE_CLASS(klass) OBJECT_CLASS_CHECK(DeviceClass, (klass), TYPE_DEVICE)
+#define DEVICE_GET_CLASS(obj) OBJECT_GET_CLASS(DeviceClass, (obj), TYPE_DEVICE)
+
+typedef int (*qdev_initfn)(DeviceState *dev);
+typedef int (*qdev_event)(DeviceState *dev);
+typedef void (*qdev_resetfn)(DeviceState *dev);
+
+struct VMStateDescription;
+
+typedef struct DeviceClass {
+    ObjectClass parent_class;
+
+    const char *fw_name;
+    const char *desc;
+    Property *props;
+    int no_user;
+
+    /* callbacks */
+    void (*reset)(DeviceState *dev);
+
+    /* device state */
+    const struct VMStateDescription *vmsd;
+
+    /* Private to qdev / bus.  */
+    qdev_initfn init;
+    qdev_event unplug;
+    qdev_event exit;
+    const char *bus_type;
+} DeviceClass;
+
+/* This structure should not be accessed directly.  We declare it here
+   so that it can be embedded in individual device state structures.  */
+struct DeviceState {
+    Object parent_obj;
+
+    const char *id;
+    enum DevState state;
+    struct QemuOpts *opts;
+    int hotplugged;
+    BusState *parent_bus;
+    int num_gpio_out;
+    qemu_irq *gpio_out;
+    int num_gpio_in;
+    qemu_irq *gpio_in;
+    QLIST_HEAD(, BusState) child_bus;
+    int num_child_bus;
+    int instance_id_alias;
+    int alias_required_for_version;
+};
+
+/*
+ * This callback is used to create an Open Firmware device path in accordance
+ * with the OF spec http://forthworks.com/standards/of1275.pdf. Individual bus
+ * bindings can be found here: http://playground.sun.com/1275/bindings/.
+ */
+
+#define TYPE_BUS "bus"
+#define BUS(obj) OBJECT_CHECK(BusState, (obj), TYPE_BUS)
+#define BUS_CLASS(klass) OBJECT_CLASS_CHECK(BusClass, (klass), TYPE_BUS)
+#define BUS_GET_CLASS(obj) OBJECT_GET_CLASS(BusClass, (obj), TYPE_BUS)
+
+struct BusClass {
+    ObjectClass parent_class;
+
+    /* FIXME first arg should be BusState */
+    void (*print_dev)(Monitor *mon, DeviceState *dev, int indent);
+    char *(*get_dev_path)(DeviceState *dev);
+    char *(*get_fw_dev_path)(DeviceState *dev);
+    int (*reset)(BusState *bus);
+};
+
+typedef struct BusChild {
+    DeviceState *child;
+    int index;
+    QTAILQ_ENTRY(BusChild) sibling;
+} BusChild;
+
+/**
+ * BusState:
+ * @qom_allocated: Indicates whether the object was allocated by QOM.
+ * @glib_allocated: Indicates whether the object was initialized in-place
+ * yet is expected to be freed with g_free().
+ */
+struct BusState {
+    Object obj;
+    DeviceState *parent;
+    const char *name;
+    int allow_hotplug;
+    bool qom_allocated;
+    bool glib_allocated;
+    int max_index;
+    QTAILQ_HEAD(ChildrenHead, BusChild) children;
+    QLIST_ENTRY(BusState) sibling;
+};
+
+struct Property {
+    const char   *name;
+    PropertyInfo *info;
+    int          offset;
+    uint8_t      bitnr;
+    uint8_t      qtype;
+    int64_t      defval;
+};
+
+struct PropertyInfo {
+    const char *name;
+    const char *legacy_name;
+    const char **enum_table;
+    int (*parse)(DeviceState *dev, Property *prop, const char *str);
+    int (*print)(DeviceState *dev, Property *prop, char *dest, size_t len);
+    ObjectPropertyAccessor *get;
+    ObjectPropertyAccessor *set;
+    ObjectPropertyRelease *release;
+};
+
+typedef struct GlobalProperty {
+    const char *driver;
+    const char *property;
+    const char *value;
+    QTAILQ_ENTRY(GlobalProperty) next;
+} GlobalProperty;
+
+/*** Board API.  This should go away once we have a machine config file.  ***/
+
+DeviceState *qdev_create(BusState *bus, const char *name);
+DeviceState *qdev_try_create(BusState *bus, const char *name);
+bool qdev_exists(const char *name);
+int qdev_init(DeviceState *dev) QEMU_WARN_UNUSED_RESULT;
+void qdev_init_nofail(DeviceState *dev);
+void qdev_set_legacy_instance_id(DeviceState *dev, int alias_id,
+                                 int required_for_version);
+void qdev_unplug(DeviceState *dev, Error **errp);
+void qdev_free(DeviceState *dev);
+int qdev_simple_unplug_cb(DeviceState *dev);
+void qdev_machine_creation_done(void);
+bool qdev_machine_modified(void);
+
+qemu_irq qdev_get_gpio_in(DeviceState *dev, int n);
+void qdev_connect_gpio_out(DeviceState *dev, int n, qemu_irq pin);
+
+BusState *qdev_get_child_bus(DeviceState *dev, const char *name);
+
+/*** Device API.  ***/
+
+/* Register device properties.  */
+/* GPIO inputs also double as IRQ sinks.  */
+void qdev_init_gpio_in(DeviceState *dev, qemu_irq_handler handler, int n);
+void qdev_init_gpio_out(DeviceState *dev, qemu_irq *pins, int n);
+
+BusState *qdev_get_parent_bus(DeviceState *dev);
+
+/*** BUS API. ***/
+
+DeviceState *qdev_find_recursive(BusState *bus, const char *id);
+
+/* Returns 0 to walk children, > 0 to skip walk, < 0 to terminate walk. */
+typedef int (qbus_walkerfn)(BusState *bus, void *opaque);
+typedef int (qdev_walkerfn)(DeviceState *dev, void *opaque);
+
+void qbus_create_inplace(BusState *bus, const char *typename,
+                         DeviceState *parent, const char *name);
+BusState *qbus_create(const char *typename, DeviceState *parent, const char *name);
+/* Returns > 0 if either devfn or busfn skips the walk somewhere in the recursion,
+ *         < 0 if either devfn or busfn terminates the walk somewhere in the recursion,
+ *           0 otherwise. */
+int qbus_walk_children(BusState *bus, qdev_walkerfn *devfn,
+                       qbus_walkerfn *busfn, void *opaque);
+int qdev_walk_children(DeviceState *dev, qdev_walkerfn *devfn,
+                       qbus_walkerfn *busfn, void *opaque);
+void qdev_reset_all(DeviceState *dev);
+void qbus_reset_all_fn(void *opaque);
+
+void qbus_free(BusState *bus);
+
+#define FROM_QBUS(type, dev) DO_UPCAST(type, qbus, dev)
+
+/* This should go away once we get rid of the NULL bus hack */
+BusState *sysbus_get_default(void);
+
+char *qdev_get_fw_dev_path(DeviceState *dev);
+
+/**
+ * @qdev_machine_init
+ *
+ * Initialize platform devices before machine init.  This is a hack until full
+ * support for composition is added.
+ */
+void qdev_machine_init(void);
+
+/**
+ * @device_reset
+ *
+ * Reset a single device (by calling the reset method).
+ */
+void device_reset(DeviceState *dev);
+
+const struct VMStateDescription *qdev_get_vmsd(DeviceState *dev);
+
+const char *qdev_fw_name(DeviceState *dev);
+
+Object *qdev_get_machine(void);
+
+/* FIXME: make this a link<> */
+void qdev_set_parent_bus(DeviceState *dev, BusState *bus);
+
+extern int qdev_hotplug;
+
+char *qdev_get_dev_path(DeviceState *dev);
+
+#endif
diff --git a/hw/qdev-monitor.h b/hw/qdev-monitor.h
new file mode 100644
index 0000000..220ceba
--- /dev/null
+++ b/hw/qdev-monitor.h
@@ -0,0 +1,16 @@
+#ifndef QEMU_QDEV_MONITOR_H
+#define QEMU_QDEV_MONITOR_H
+
+#include "qdev-core.h"
+#include "monitor.h"
+
+/*** monitor commands ***/
+
+void do_info_qtree(Monitor *mon);
+void do_info_qdm(Monitor *mon);
+int do_device_add(Monitor *mon, const QDict *qdict, QObject **ret_data);
+int do_device_del(Monitor *mon, const QDict *qdict, QObject **ret_data);
+int qdev_device_help(QemuOpts *opts);
+DeviceState *qdev_device_add(QemuOpts *opts);
+
+#endif
diff --git a/hw/qdev-properties.c b/hw/qdev-properties.c
index 8aca0d4..81d901c 100644
--- a/hw/qdev-properties.c
+++ b/hw/qdev-properties.c
@@ -4,6 +4,7 @@
 #include "blockdev.h"
 #include "hw/block-common.h"
 #include "net/hub.h"
+#include "qapi/qapi-visit-core.h"
 
 void *qdev_get_prop_ptr(DeviceState *dev, Property *prop)
 {
diff --git a/hw/qdev-properties.h b/hw/qdev-properties.h
new file mode 100644
index 0000000..e93336a
--- /dev/null
+++ b/hw/qdev-properties.h
@@ -0,0 +1,128 @@
+#ifndef QEMU_QDEV_PROPERTIES_H
+#define QEMU_QDEV_PROPERTIES_H
+
+#include "qdev-core.h"
+
+/*** qdev-properties.c ***/
+
+extern PropertyInfo qdev_prop_bit;
+extern PropertyInfo qdev_prop_uint8;
+extern PropertyInfo qdev_prop_uint16;
+extern PropertyInfo qdev_prop_uint32;
+extern PropertyInfo qdev_prop_int32;
+extern PropertyInfo qdev_prop_uint64;
+extern PropertyInfo qdev_prop_hex8;
+extern PropertyInfo qdev_prop_hex32;
+extern PropertyInfo qdev_prop_hex64;
+extern PropertyInfo qdev_prop_string;
+extern PropertyInfo qdev_prop_chr;
+extern PropertyInfo qdev_prop_ptr;
+extern PropertyInfo qdev_prop_macaddr;
+extern PropertyInfo qdev_prop_losttickpolicy;
+extern PropertyInfo qdev_prop_bios_chs_trans;
+extern PropertyInfo qdev_prop_drive;
+extern PropertyInfo qdev_prop_netdev;
+extern PropertyInfo qdev_prop_vlan;
+extern PropertyInfo qdev_prop_pci_devfn;
+extern PropertyInfo qdev_prop_blocksize;
+
+#define DEFINE_PROP(_name, _state, _field, _prop, _type) { \
+        .name      = (_name),                                    \
+        .info      = &(_prop),                                   \
+        .offset    = offsetof(_state, _field)                    \
+            + type_check(_type,typeof_field(_state, _field)),    \
+        }
+#define DEFINE_PROP_DEFAULT(_name, _state, _field, _defval, _prop, _type) { \
+        .name      = (_name),                                           \
+        .info      = &(_prop),                                          \
+        .offset    = offsetof(_state, _field)                           \
+            + type_check(_type,typeof_field(_state, _field)),           \
+        .qtype     = QTYPE_QINT,                                        \
+        .defval    = (_type)_defval,                                    \
+        }
+#define DEFINE_PROP_BIT(_name, _state, _field, _bit, _defval) {  \
+        .name      = (_name),                                    \
+        .info      = &(qdev_prop_bit),                           \
+        .bitnr    = (_bit),                                      \
+        .offset    = offsetof(_state, _field)                    \
+            + type_check(uint32_t,typeof_field(_state, _field)), \
+        .qtype     = QTYPE_QBOOL,                                \
+        .defval    = (bool)_defval,                              \
+        }
+
+#define DEFINE_PROP_UINT8(_n, _s, _f, _d)                       \
+    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_uint8, uint8_t)
+#define DEFINE_PROP_UINT16(_n, _s, _f, _d)                      \
+    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_uint16, uint16_t)
+#define DEFINE_PROP_UINT32(_n, _s, _f, _d)                      \
+    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_uint32, uint32_t)
+#define DEFINE_PROP_INT32(_n, _s, _f, _d)                      \
+    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_int32, int32_t)
+#define DEFINE_PROP_UINT64(_n, _s, _f, _d)                      \
+    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_uint64, uint64_t)
+#define DEFINE_PROP_HEX8(_n, _s, _f, _d)                       \
+    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_hex8, uint8_t)
+#define DEFINE_PROP_HEX32(_n, _s, _f, _d)                       \
+    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_hex32, uint32_t)
+#define DEFINE_PROP_HEX64(_n, _s, _f, _d)                       \
+    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_hex64, uint64_t)
+#define DEFINE_PROP_PCI_DEVFN(_n, _s, _f, _d)                   \
+    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_pci_devfn, int32_t)
+
+#define DEFINE_PROP_PTR(_n, _s, _f)             \
+    DEFINE_PROP(_n, _s, _f, qdev_prop_ptr, void*)
+#define DEFINE_PROP_CHR(_n, _s, _f)             \
+    DEFINE_PROP(_n, _s, _f, qdev_prop_chr, CharDriverState*)
+#define DEFINE_PROP_STRING(_n, _s, _f)             \
+    DEFINE_PROP(_n, _s, _f, qdev_prop_string, char*)
+#define DEFINE_PROP_NETDEV(_n, _s, _f)             \
+    DEFINE_PROP(_n, _s, _f, qdev_prop_netdev, NetClientState*)
+#define DEFINE_PROP_VLAN(_n, _s, _f)             \
+    DEFINE_PROP(_n, _s, _f, qdev_prop_vlan, NetClientState*)
+#define DEFINE_PROP_DRIVE(_n, _s, _f) \
+    DEFINE_PROP(_n, _s, _f, qdev_prop_drive, BlockDriverState *)
+#define DEFINE_PROP_MACADDR(_n, _s, _f)         \
+    DEFINE_PROP(_n, _s, _f, qdev_prop_macaddr, MACAddr)
+#define DEFINE_PROP_LOSTTICKPOLICY(_n, _s, _f, _d) \
+    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_losttickpolicy, \
+                        LostTickPolicy)
+#define DEFINE_PROP_BIOS_CHS_TRANS(_n, _s, _f, _d) \
+    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_bios_chs_trans, int)
+#define DEFINE_PROP_BLOCKSIZE(_n, _s, _f, _d) \
+    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_blocksize, uint16_t)
+
+#define DEFINE_PROP_END_OF_LIST()               \
+    {}
+
+/* Set properties between creation and init.  */
+void *qdev_get_prop_ptr(DeviceState *dev, Property *prop);
+int qdev_prop_parse(DeviceState *dev, const char *name, const char *value);
+void qdev_prop_set_bit(DeviceState *dev, const char *name, bool value);
+void qdev_prop_set_uint8(DeviceState *dev, const char *name, uint8_t value);
+void qdev_prop_set_uint16(DeviceState *dev, const char *name, uint16_t value);
+void qdev_prop_set_uint32(DeviceState *dev, const char *name, uint32_t value);
+void qdev_prop_set_int32(DeviceState *dev, const char *name, int32_t value);
+void qdev_prop_set_uint64(DeviceState *dev, const char *name, uint64_t value);
+void qdev_prop_set_string(DeviceState *dev, const char *name, const char *value);
+void qdev_prop_set_chr(DeviceState *dev, const char *name, CharDriverState *value);
+void qdev_prop_set_netdev(DeviceState *dev, const char *name, NetClientState *value);
+void qdev_prop_set_vlan(DeviceState *dev, const char *name, NetClientState *value);
+int qdev_prop_set_drive(DeviceState *dev, const char *name, BlockDriverState *value) QEMU_WARN_UNUSED_RESULT;
+void qdev_prop_set_drive_nofail(DeviceState *dev, const char *name, BlockDriverState *value);
+void qdev_prop_set_macaddr(DeviceState *dev, const char *name, uint8_t *value);
+void qdev_prop_set_enum(DeviceState *dev, const char *name, int value);
+/* FIXME: Remove opaque pointer properties.  */
+void qdev_prop_set_ptr(DeviceState *dev, const char *name, void *value);
+
+void qdev_prop_register_global_list(GlobalProperty *props);
+void qdev_prop_set_globals(DeviceState *dev);
+void error_set_from_qdev_prop_error(Error **errp, int ret, DeviceState *dev,
+                                    Property *prop, const char *value);
+
+/**
+ * @qdev_property_add_static - add a @Property to a device referencing a
+ * field in a struct.
+ */
+void qdev_property_add_static(DeviceState *dev, Property *prop, Error **errp);
+
+#endif
diff --git a/hw/qdev.c b/hw/qdev.c
index b5b74b9..36c3e4b 100644
--- a/hw/qdev.c
+++ b/hw/qdev.c
@@ -29,6 +29,7 @@
 #include "qdev.h"
 #include "sysemu.h"
 #include "error.h"
+#include "qapi/qapi-visit-core.h"
 
 int qdev_hotplug = 0;
 static bool qdev_hot_added = false;
diff --git a/hw/qdev.h b/hw/qdev.h
index d699194..365b8d6 100644
--- a/hw/qdev.h
+++ b/hw/qdev.h
@@ -1,372 +1,9 @@
 #ifndef QDEV_H
 #define QDEV_H
 
-#include "hw.h"
-#include "qemu-queue.h"
-#include "qemu-char.h"
-#include "qemu-option.h"
-#include "qapi/qapi-visit-core.h"
-#include "qemu/object.h"
-#include "error.h"
-
-typedef struct Property Property;
-
-typedef struct PropertyInfo PropertyInfo;
-
-typedef struct CompatProperty CompatProperty;
-
-typedef struct BusState BusState;
-
-typedef struct BusClass BusClass;
-
-enum DevState {
-    DEV_STATE_CREATED = 1,
-    DEV_STATE_INITIALIZED,
-};
-
-enum {
-    DEV_NVECTORS_UNSPECIFIED = -1,
-};
-
-#define TYPE_DEVICE "device"
-#define DEVICE(obj) OBJECT_CHECK(DeviceState, (obj), TYPE_DEVICE)
-#define DEVICE_CLASS(klass) OBJECT_CLASS_CHECK(DeviceClass, (klass), TYPE_DEVICE)
-#define DEVICE_GET_CLASS(obj) OBJECT_GET_CLASS(DeviceClass, (obj), TYPE_DEVICE)
-
-typedef int (*qdev_initfn)(DeviceState *dev);
-typedef int (*qdev_event)(DeviceState *dev);
-typedef void (*qdev_resetfn)(DeviceState *dev);
-
-typedef struct DeviceClass {
-    ObjectClass parent_class;
-
-    const char *fw_name;
-    const char *desc;
-    Property *props;
-    int no_user;
-
-    /* callbacks */
-    void (*reset)(DeviceState *dev);
-
-    /* device state */
-    const VMStateDescription *vmsd;
-
-    /* Private to qdev / bus.  */
-    qdev_initfn init;
-    qdev_event unplug;
-    qdev_event exit;
-    const char *bus_type;
-} DeviceClass;
-
-/* This structure should not be accessed directly.  We declare it here
-   so that it can be embedded in individual device state structures.  */
-struct DeviceState {
-    Object parent_obj;
-
-    const char *id;
-    enum DevState state;
-    QemuOpts *opts;
-    int hotplugged;
-    BusState *parent_bus;
-    int num_gpio_out;
-    qemu_irq *gpio_out;
-    int num_gpio_in;
-    qemu_irq *gpio_in;
-    QLIST_HEAD(, BusState) child_bus;
-    int num_child_bus;
-    int instance_id_alias;
-    int alias_required_for_version;
-};
-
-#define TYPE_BUS "bus"
-#define BUS(obj) OBJECT_CHECK(BusState, (obj), TYPE_BUS)
-#define BUS_CLASS(klass) OBJECT_CLASS_CHECK(BusClass, (klass), TYPE_BUS)
-#define BUS_GET_CLASS(obj) OBJECT_GET_CLASS(BusClass, (obj), TYPE_BUS)
-
-struct BusClass {
-    ObjectClass parent_class;
-
-    /* FIXME first arg should be BusState */
-    void (*print_dev)(Monitor *mon, DeviceState *dev, int indent);
-    char *(*get_dev_path)(DeviceState *dev);
-    /*
-     * This callback is used to create Open Firmware device path in accordance
-     * with OF spec http://forthworks.com/standards/of1275.pdf. Individual bus
-     * bindings can be found at http://playground.sun.com/1275/bindings/.
-     */
-    char *(*get_fw_dev_path)(DeviceState *dev);
-    int (*reset)(BusState *bus);
-};
-
-typedef struct BusChild {
-    DeviceState *child;
-    int index;
-    QTAILQ_ENTRY(BusChild) sibling;
-} BusChild;
-
-/**
- * BusState:
- * @qom_allocated: Indicates whether the object was allocated by QOM.
- * @glib_allocated: Indicates whether the object was initialized in-place
- * yet is expected to be freed with g_free().
- */
-struct BusState {
-    Object obj;
-    DeviceState *parent;
-    const char *name;
-    int allow_hotplug;
-    bool qom_allocated;
-    bool glib_allocated;
-    int max_index;
-    QTAILQ_HEAD(ChildrenHead, BusChild) children;
-    QLIST_ENTRY(BusState) sibling;
-};
-
-struct Property {
-    const char   *name;
-    PropertyInfo *info;
-    int          offset;
-    uint8_t      bitnr;
-    uint8_t      qtype;
-    int64_t      defval;
-};
-
-struct PropertyInfo {
-    const char *name;
-    const char *legacy_name;
-    const char **enum_table;
-    int (*parse)(DeviceState *dev, Property *prop, const char *str);
-    int (*print)(DeviceState *dev, Property *prop, char *dest, size_t len);
-    ObjectPropertyAccessor *get;
-    ObjectPropertyAccessor *set;
-    ObjectPropertyRelease *release;
-};
-
-typedef struct GlobalProperty {
-    const char *driver;
-    const char *property;
-    const char *value;
-    QTAILQ_ENTRY(GlobalProperty) next;
-} GlobalProperty;
-
-/*** Board API.  This should go away once we have a machine config file.  ***/
-
-DeviceState *qdev_create(BusState *bus, const char *name);
-DeviceState *qdev_try_create(BusState *bus, const char *name);
-bool qdev_exists(const char *name);
-int qdev_device_help(QemuOpts *opts);
-DeviceState *qdev_device_add(QemuOpts *opts);
-int qdev_init(DeviceState *dev) QEMU_WARN_UNUSED_RESULT;
-void qdev_init_nofail(DeviceState *dev);
-void qdev_set_legacy_instance_id(DeviceState *dev, int alias_id,
-                                 int required_for_version);
-void qdev_unplug(DeviceState *dev, Error **errp);
-void qdev_free(DeviceState *dev);
-int qdev_simple_unplug_cb(DeviceState *dev);
-void qdev_machine_creation_done(void);
-bool qdev_machine_modified(void);
-
-qemu_irq qdev_get_gpio_in(DeviceState *dev, int n);
-void qdev_connect_gpio_out(DeviceState *dev, int n, qemu_irq pin);
-
-BusState *qdev_get_child_bus(DeviceState *dev, const char *name);
-
-/*** Device API.  ***/
-
-/* Register device properties.  */
-/* GPIO inputs also double as IRQ sinks.  */
-void qdev_init_gpio_in(DeviceState *dev, qemu_irq_handler handler, int n);
-void qdev_init_gpio_out(DeviceState *dev, qemu_irq *pins, int n);
-
-BusState *qdev_get_parent_bus(DeviceState *dev);
-
-/*** BUS API. ***/
-
-DeviceState *qdev_find_recursive(BusState *bus, const char *id);
-
-/* Returns 0 to walk children, > 0 to skip walk, < 0 to terminate walk. */
-typedef int (qbus_walkerfn)(BusState *bus, void *opaque);
-typedef int (qdev_walkerfn)(DeviceState *dev, void *opaque);
-
-void qbus_create_inplace(BusState *bus, const char *typename,
-                         DeviceState *parent, const char *name);
-BusState *qbus_create(const char *typename, DeviceState *parent, const char *name);
-/* Returns > 0 if either devfn or busfn skip walk somewhere in cursion,
- *         < 0 if either devfn or busfn terminate walk somewhere in cursion,
- *           0 otherwise. */
-int qbus_walk_children(BusState *bus, qdev_walkerfn *devfn,
-                       qbus_walkerfn *busfn, void *opaque);
-int qdev_walk_children(DeviceState *dev, qdev_walkerfn *devfn,
-                       qbus_walkerfn *busfn, void *opaque);
-void qdev_reset_all(DeviceState *dev);
-void qbus_reset_all_fn(void *opaque);
-
-void qbus_free(BusState *bus);
-
-#define FROM_QBUS(type, dev) DO_UPCAST(type, qbus, dev)
-
-/* This should go away once we get rid of the NULL bus hack */
-BusState *sysbus_get_default(void);
-
-/*** monitor commands ***/
-
-void do_info_qtree(Monitor *mon);
-void do_info_qdm(Monitor *mon);
-int do_device_add(Monitor *mon, const QDict *qdict, QObject **ret_data);
-int do_device_del(Monitor *mon, const QDict *qdict, QObject **ret_data);
-
-/*** qdev-properties.c ***/
-
-extern PropertyInfo qdev_prop_bit;
-extern PropertyInfo qdev_prop_uint8;
-extern PropertyInfo qdev_prop_uint16;
-extern PropertyInfo qdev_prop_uint32;
-extern PropertyInfo qdev_prop_int32;
-extern PropertyInfo qdev_prop_uint64;
-extern PropertyInfo qdev_prop_hex8;
-extern PropertyInfo qdev_prop_hex32;
-extern PropertyInfo qdev_prop_hex64;
-extern PropertyInfo qdev_prop_string;
-extern PropertyInfo qdev_prop_chr;
-extern PropertyInfo qdev_prop_ptr;
-extern PropertyInfo qdev_prop_macaddr;
-extern PropertyInfo qdev_prop_losttickpolicy;
-extern PropertyInfo qdev_prop_bios_chs_trans;
-extern PropertyInfo qdev_prop_drive;
-extern PropertyInfo qdev_prop_netdev;
-extern PropertyInfo qdev_prop_vlan;
-extern PropertyInfo qdev_prop_pci_devfn;
-extern PropertyInfo qdev_prop_blocksize;
-extern PropertyInfo qdev_prop_pci_host_devaddr;
-
-#define DEFINE_PROP(_name, _state, _field, _prop, _type) { \
-        .name      = (_name),                                    \
-        .info      = &(_prop),                                   \
-        .offset    = offsetof(_state, _field)                    \
-            + type_check(_type,typeof_field(_state, _field)),    \
-        }
-#define DEFINE_PROP_DEFAULT(_name, _state, _field, _defval, _prop, _type) { \
-        .name      = (_name),                                           \
-        .info      = &(_prop),                                          \
-        .offset    = offsetof(_state, _field)                           \
-            + type_check(_type,typeof_field(_state, _field)),           \
-        .qtype     = QTYPE_QINT,                                        \
-        .defval    = (_type)_defval,                                    \
-        }
-#define DEFINE_PROP_BIT(_name, _state, _field, _bit, _defval) {  \
-        .name      = (_name),                                    \
-        .info      = &(qdev_prop_bit),                           \
-        .bitnr    = (_bit),                                      \
-        .offset    = offsetof(_state, _field)                    \
-            + type_check(uint32_t,typeof_field(_state, _field)), \
-        .qtype     = QTYPE_QBOOL,                                \
-        .defval    = (bool)_defval,                              \
-        }
-
-#define DEFINE_PROP_UINT8(_n, _s, _f, _d)                       \
-    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_uint8, uint8_t)
-#define DEFINE_PROP_UINT16(_n, _s, _f, _d)                      \
-    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_uint16, uint16_t)
-#define DEFINE_PROP_UINT32(_n, _s, _f, _d)                      \
-    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_uint32, uint32_t)
-#define DEFINE_PROP_INT32(_n, _s, _f, _d)                      \
-    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_int32, int32_t)
-#define DEFINE_PROP_UINT64(_n, _s, _f, _d)                      \
-    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_uint64, uint64_t)
-#define DEFINE_PROP_HEX8(_n, _s, _f, _d)                       \
-    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_hex8, uint8_t)
-#define DEFINE_PROP_HEX32(_n, _s, _f, _d)                       \
-    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_hex32, uint32_t)
-#define DEFINE_PROP_HEX64(_n, _s, _f, _d)                       \
-    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_hex64, uint64_t)
-#define DEFINE_PROP_PCI_DEVFN(_n, _s, _f, _d)                   \
-    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_pci_devfn, int32_t)
-
-#define DEFINE_PROP_PTR(_n, _s, _f)             \
-    DEFINE_PROP(_n, _s, _f, qdev_prop_ptr, void*)
-#define DEFINE_PROP_CHR(_n, _s, _f)             \
-    DEFINE_PROP(_n, _s, _f, qdev_prop_chr, CharDriverState*)
-#define DEFINE_PROP_STRING(_n, _s, _f)             \
-    DEFINE_PROP(_n, _s, _f, qdev_prop_string, char*)
-#define DEFINE_PROP_NETDEV(_n, _s, _f)             \
-    DEFINE_PROP(_n, _s, _f, qdev_prop_netdev, NetClientState*)
-#define DEFINE_PROP_VLAN(_n, _s, _f)             \
-    DEFINE_PROP(_n, _s, _f, qdev_prop_vlan, NetClientState*)
-#define DEFINE_PROP_DRIVE(_n, _s, _f) \
-    DEFINE_PROP(_n, _s, _f, qdev_prop_drive, BlockDriverState *)
-#define DEFINE_PROP_MACADDR(_n, _s, _f)         \
-    DEFINE_PROP(_n, _s, _f, qdev_prop_macaddr, MACAddr)
-#define DEFINE_PROP_LOSTTICKPOLICY(_n, _s, _f, _d) \
-    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_losttickpolicy, \
-                        LostTickPolicy)
-#define DEFINE_PROP_BIOS_CHS_TRANS(_n, _s, _f, _d) \
-    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_bios_chs_trans, int)
-#define DEFINE_PROP_BLOCKSIZE(_n, _s, _f, _d) \
-    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_blocksize, uint16_t)
-#define DEFINE_PROP_PCI_HOST_DEVADDR(_n, _s, _f) \
-    DEFINE_PROP(_n, _s, _f, qdev_prop_pci_host_devaddr, PCIHostDeviceAddress)
-
-#define DEFINE_PROP_END_OF_LIST()               \
-    {}
-
-/* Set properties between creation and init.  */
-void *qdev_get_prop_ptr(DeviceState *dev, Property *prop);
-int qdev_prop_parse(DeviceState *dev, const char *name, const char *value);
-void qdev_prop_set_bit(DeviceState *dev, const char *name, bool value);
-void qdev_prop_set_uint8(DeviceState *dev, const char *name, uint8_t value);
-void qdev_prop_set_uint16(DeviceState *dev, const char *name, uint16_t value);
-void qdev_prop_set_uint32(DeviceState *dev, const char *name, uint32_t value);
-void qdev_prop_set_int32(DeviceState *dev, const char *name, int32_t value);
-void qdev_prop_set_uint64(DeviceState *dev, const char *name, uint64_t value);
-void qdev_prop_set_string(DeviceState *dev, const char *name, const char *value);
-void qdev_prop_set_chr(DeviceState *dev, const char *name, CharDriverState *value);
-void qdev_prop_set_netdev(DeviceState *dev, const char *name, NetClientState *value);
-int qdev_prop_set_drive(DeviceState *dev, const char *name, BlockDriverState *value) QEMU_WARN_UNUSED_RESULT;
-void qdev_prop_set_drive_nofail(DeviceState *dev, const char *name, BlockDriverState *value);
-void qdev_prop_set_macaddr(DeviceState *dev, const char *name, uint8_t *value);
-void qdev_prop_set_enum(DeviceState *dev, const char *name, int value);
-/* FIXME: Remove opaque pointer properties.  */
-void qdev_prop_set_ptr(DeviceState *dev, const char *name, void *value);
-
-void qdev_prop_register_global_list(GlobalProperty *props);
-void qdev_prop_set_globals(DeviceState *dev);
-void error_set_from_qdev_prop_error(Error **errp, int ret, DeviceState *dev,
-                                    Property *prop, const char *value);
-
-char *qdev_get_fw_dev_path(DeviceState *dev);
-
-/**
- * @qdev_property_add_static - add a @Property to a device referencing a
- * field in a struct.
- */
-void qdev_property_add_static(DeviceState *dev, Property *prop, Error **errp);
-
-/**
- * @qdev_machine_init
- *
- * Initialize platform devices before machine init.  This is a hack until full
- * support for composition is added.
- */
-void qdev_machine_init(void);
-
-/**
- * @device_reset
- *
- * Reset a single device (by calling the reset method).
- */
-void device_reset(DeviceState *dev);
-
-const VMStateDescription *qdev_get_vmsd(DeviceState *dev);
-
-const char *qdev_fw_name(DeviceState *dev);
-
-Object *qdev_get_machine(void);
-
-/* FIXME: make this a link<> */
-void qdev_set_parent_bus(DeviceState *dev, BusState *bus);
-
-extern int qdev_hotplug;
-
-char *qdev_get_dev_path(DeviceState *dev);
+#include "hw/hw.h"
+#include "qdev-core.h"
+#include "qdev-properties.h"
+#include "qdev-monitor.h"
 
 #endif
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 20 06:12:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Aug 2012 06:12:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3LD9-0003Qj-30; Mon, 20 Aug 2012 06:11:39 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <sw@weilnetz.de>) id 1T3Jne-0002lS-KZ
	for xen-devel@lists.xensource.com; Mon, 20 Aug 2012 04:41:14 +0000
X-Env-Sender: sw@weilnetz.de
X-Msg-Ref: server-14.tower-27.messagelabs.com!1345437666!3205499!1
X-Originating-IP: [78.47.199.172]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2230 invoked from network); 20 Aug 2012 04:41:07 -0000
Received: from v220110690675601.yourvserver.net (HELO
	v220110690675601.yourvserver.net) (78.47.199.172)
	by server-14.tower-27.messagelabs.com with SMTP;
	20 Aug 2012 04:41:07 -0000
Received: from localhost (v220110690675601.yourvserver.net.local [127.0.0.1])
	by v220110690675601.yourvserver.net (Postfix) with ESMTP id
	928907280029; Mon, 20 Aug 2012 06:41:06 +0200 (CEST)
X-Virus-Scanned: Debian amavisd-new at weilnetz.de
Received: from v220110690675601.yourvserver.net ([127.0.0.1])
	by localhost (v220110690675601.yourvserver.net [127.0.0.1])
	(amavisd-new, port 10024)
	with ESMTP id jg-Y46d+2Fir; Mon, 20 Aug 2012 06:41:05 +0200 (CEST)
Received: from [192.168.178.20] (p5086EF7B.dip.t-dialin.net [80.134.239.123])
	by v220110690675601.yourvserver.net (Postfix) with ESMTPSA id
	BAAEF7280028; Mon, 20 Aug 2012 06:41:04 +0200 (CEST)
Message-ID: <5031BFE0.1070909@weilnetz.de>
Date: Mon, 20 Aug 2012 06:41:04 +0200
From: Stefan Weil <sw@weilnetz.de>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:14.0) Gecko/20120714 Thunderbird/14.0
MIME-Version: 1.0
To: Igor Mammedov <imammedo@redhat.com>
References: <1345419579-25499-1-git-send-email-imammedo@redhat.com>
	<1345419579-25499-2-git-send-email-imammedo@redhat.com>
In-Reply-To: <1345419579-25499-2-git-send-email-imammedo@redhat.com>
X-Mailman-Approved-At: Mon, 20 Aug 2012 06:11:34 +0000
Cc: peter.maydell@linaro.org, jan.kiszka@siemens.com, mjt@tls.msk.ru,
	qemu-devel@nongnu.org, armbru@redhat.com, blauwirbel@gmail.com,
	kraxel@redhat.com, xen-devel@lists.xensource.com,
	i.mitsyanko@samsung.com, mdroth@linux.vnet.ibm.com,
	avi@redhat.com, anthony.perard@citrix.com, lersek@redhat.com,
	stefanha@linux.vnet.ibm.com, stefano.stabellini@eu.citrix.com,
	lcapitulino@redhat.com, rth@twiddle.net, kwolf@redhat.com,
	aliguori@us.ibm.com, mtosatti@redhat.com, pbonzini@redhat.com,
	afaerber@suse.de
Subject: Re: [Xen-devel] [PATCH 1/5] move qemu_irq typedef out of
	cpu-common.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Am 20.08.2012 01:39, schrieb Igor Mammedov:
> it's necessary for making CPU child of DEVICE without
> causing circular header deps.
>
> Signed-off-by: Igor Mammedov <imammedo@redhat.com>
> ---
>   hw/arm-misc.h |    1 +
>   hw/bt.h       |    2 ++
>   hw/devices.h  |    2 ++
>   hw/irq.h      |    2 ++
>   hw/omap.h     |    1 +
>   hw/soc_dma.h  |    1 +
>   hw/xen.h      |    1 +
>   qemu-common.h |    1 -
>   sysemu.h      |    1 +
>   9 files changed, 11 insertions(+), 1 deletions(-)
>
> diff --git a/hw/arm-misc.h b/hw/arm-misc.h
> index bdd8fec..b13aa59 100644
> --- a/hw/arm-misc.h
> +++ b/hw/arm-misc.h
> @@ -12,6 +12,7 @@
>   #define ARM_MISC_H 1
>   
>   #include "memory.h"
> +#include "hw/irq.h"
>   
>   /* The CPU is also modeled as an interrupt controller.  */
>   #define ARM_PIC_CPU_IRQ 0
> diff --git a/hw/bt.h b/hw/bt.h
> index a48b8d4..ebf6a37 100644
> --- a/hw/bt.h
> +++ b/hw/bt.h
> @@ -23,6 +23,8 @@
>    * along with this program; if not, see <http://www.gnu.org/licenses/>.
>    */
>   
> +#include "hw/irq.h"
> +
>   /* BD Address */
>   typedef struct {
>       uint8_t b[6];
> diff --git a/hw/devices.h b/hw/devices.h
> index 1a55c1e..c60bcab 100644
> --- a/hw/devices.h
> +++ b/hw/devices.h
> @@ -1,6 +1,8 @@
>   #ifndef QEMU_DEVICES_H
>   #define QEMU_DEVICES_H
>   
> +#include "hw/irq.h"
> +
>   /* ??? Not all users of this file can include cpu-common.h.  */
>   struct MemoryRegion;
>   
> diff --git a/hw/irq.h b/hw/irq.h
> index 56c55f0..1339a3a 100644
> --- a/hw/irq.h
> +++ b/hw/irq.h
> @@ -3,6 +3,8 @@
>   
>   /* Generic IRQ/GPIO pin infrastructure.  */
>   
> +typedef struct IRQState *qemu_irq;
> +
>   typedef void (*qemu_irq_handler)(void *opaque, int n, int level);
>   
>   void qemu_set_irq(qemu_irq irq, int level);
> diff --git a/hw/omap.h b/hw/omap.h
> index 413851b..8b08462 100644
> --- a/hw/omap.h
> +++ b/hw/omap.h
> @@ -19,6 +19,7 @@
>   #ifndef hw_omap_h
>   #include "memory.h"
>   # define hw_omap_h		"omap.h"
> +#include "hw/irq.h"
>   
>   # define OMAP_EMIFS_BASE	0x00000000
>   # define OMAP2_Q0_BASE		0x00000000
> diff --git a/hw/soc_dma.h b/hw/soc_dma.h
> index 904b26c..e386ace 100644
> --- a/hw/soc_dma.h
> +++ b/hw/soc_dma.h
> @@ -19,6 +19,7 @@
>    */
>   
>   #include "memory.h"
> +#include "hw/irq.h"
>   
>   struct soc_dma_s;
>   struct soc_dma_ch_s;
> diff --git a/hw/xen.h b/hw/xen.h
> index e5926b7..ff11dfd 100644
> --- a/hw/xen.h
> +++ b/hw/xen.h
> @@ -8,6 +8,7 @@
>    */
>   #include <inttypes.h>
>   
> +#include "hw/irq.h"
>   #include "qemu-common.h"
>   
>   /* xen-machine.c */
> diff --git a/qemu-common.h b/qemu-common.h
> index e5c2bcd..6677a30 100644
> --- a/qemu-common.h
> +++ b/qemu-common.h
> @@ -273,7 +273,6 @@ typedef struct PCIEPort PCIEPort;
>   typedef struct PCIESlot PCIESlot;
>   typedef struct MSIMessage MSIMessage;
>   typedef struct SerialState SerialState;
> -typedef struct IRQState *qemu_irq;
>   typedef struct PCMCIACardState PCMCIACardState;
>   typedef struct MouseTransformInfo MouseTransformInfo;
>   typedef struct uWireSlave uWireSlave;

Just move the declaration of qemu_irq to the beginning of qemu-common.h
and leave the rest of the files untouched. That also fixes the circular
dependency.

I already have a patch that does this, so you can integrate it in your
series instead of this one.


> diff --git a/sysemu.h b/sysemu.h
> index 65552ac..f765821 100644
> --- a/sysemu.h
> +++ b/sysemu.h
> @@ -9,6 +9,7 @@
>   #include "qapi-types.h"
>   #include "notify.h"
>   #include "main-loop.h"
> +#include "hw/irq.h"
>   
>   /* vl.c */
>   



From xen-devel-bounces@lists.xen.org Mon Aug 20 06:12:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Aug 2012 06:12:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3LD9-0003Qx-GP; Mon, 20 Aug 2012 06:11:39 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <sw@weilnetz.de>) id 1T3Jyx-0002n1-5K
	for xen-devel@lists.xensource.com; Mon, 20 Aug 2012 04:52:55 +0000
Received: from [85.158.139.83:27175] by server-7.bemta-5.messagelabs.com id
	92/11-32634-6A2C1305; Mon, 20 Aug 2012 04:52:54 +0000
X-Env-Sender: sw@weilnetz.de
X-Msg-Ref: server-16.tower-182.messagelabs.com!1345438373!21635735!1
X-Originating-IP: [78.47.199.172]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27676 invoked from network); 20 Aug 2012 04:52:53 -0000
Received: from v220110690675601.yourvserver.net (HELO
	v220110690675601.yourvserver.net) (78.47.199.172)
	by server-16.tower-182.messagelabs.com with SMTP;
	20 Aug 2012 04:52:53 -0000
Received: from localhost (v220110690675601.yourvserver.net.local [127.0.0.1])
	by v220110690675601.yourvserver.net (Postfix) with ESMTP id
	64E96728002A; Mon, 20 Aug 2012 06:52:53 +0200 (CEST)
X-Virus-Scanned: Debian amavisd-new at weilnetz.de
Received: from v220110690675601.yourvserver.net ([127.0.0.1])
	by localhost (v220110690675601.yourvserver.net [127.0.0.1])
	(amavisd-new, port 10024)
	with ESMTP id g6PZrzKUGmL4; Mon, 20 Aug 2012 06:52:52 +0200 (CEST)
Received: from [192.168.178.20] (p5086EF7B.dip.t-dialin.net [80.134.239.123])
	by v220110690675601.yourvserver.net (Postfix) with ESMTPSA id
	009357280028; Mon, 20 Aug 2012 06:52:51 +0200 (CEST)
Message-ID: <5031C2A3.3060800@weilnetz.de>
Date: Mon, 20 Aug 2012 06:52:51 +0200
From: Stefan Weil <sw@weilnetz.de>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:14.0) Gecko/20120714 Thunderbird/14.0
MIME-Version: 1.0
To: Igor Mammedov <imammedo@redhat.com>
References: <1345419579-25499-1-git-send-email-imammedo@redhat.com>
In-Reply-To: <1345419579-25499-1-git-send-email-imammedo@redhat.com>
X-Mailman-Approved-At: Mon, 20 Aug 2012 06:11:34 +0000
Cc: peter.maydell@linaro.org, jan.kiszka@siemens.com, mjt@tls.msk.ru,
	qemu-devel@nongnu.org, armbru@redhat.com, blauwirbel@gmail.com,
	kraxel@redhat.com, xen-devel@lists.xensource.com,
	i.mitsyanko@samsung.com, mdroth@linux.vnet.ibm.com,
	avi@redhat.com, anthony.perard@citrix.com, lersek@redhat.com,
	stefanha@linux.vnet.ibm.com, stefano.stabellini@eu.citrix.com,
	lcapitulino@redhat.com, rth@twiddle.net, kwolf@redhat.com,
	aliguori@us.ibm.com, mtosatti@redhat.com, pbonzini@redhat.com,
	afaerber@suse.de
Subject: Re: [Xen-devel] [PATCH 0/5 v2] cpu: make a child of DeviceState
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Am 20.08.2012 01:39, schrieb Igor Mammedov:
> this is the 3rd approach to make CPU a child of DeviceState
> for both kinds of targets *-user and *-softmmu. It seems
> with current state of qemu it doesn't take too much effort
> to make it compile. Please check if it doesn't break
> something on other targets/archs/hosts than i386.
>
> what's tested:
>    - compile tested building all targets on FC17x64 host.
>    - briefly tested i386 user and softmmu targets
>
> Anthony Liguori (1):
>    qdev: split up header so it can be used in cpu.h
>
> Igor Mammedov (4):
>    move qemu_irq typedef out of cpu-common.h
>    qapi-types.h doesn't really need to include qemu-common.h
>    cleanup error.h, included qapi-types.h aready has stdbool.h
>    make CPU a child of DeviceState
>
>   error.h               |    1 -
>   hw/arm-misc.h         |    1 +
>   hw/bt.h               |    2 +
>   hw/devices.h          |    2 +
>   hw/irq.h              |    2 +
>   hw/mc146818rtc.c      |    1 +
>   hw/omap.h             |    1 +
>   hw/qdev-addr.c        |    1 +
>   hw/qdev-core.h        |  240 ++++++++++++++++++++++++++++++++
>   hw/qdev-monitor.h     |   16 ++
>   hw/qdev-properties.c  |    1 +
>   hw/qdev-properties.h  |  128 +++++++++++++++++
>   hw/qdev.c             |    1 +
>   hw/qdev.h             |  371 +------------------------------------------------
>   hw/soc_dma.h          |    1 +
>   hw/xen.h              |    1 +
>   include/qemu/cpu.h    |    6 +-
>   qemu-common.h         |    1 -
>   scripts/qapi-types.py |    2 +-
>   sysemu.h              |    1 +
>   20 files changed, 407 insertions(+), 373 deletions(-)
>   create mode 100644 hw/qdev-core.h
>   create mode 100644 hw/qdev-monitor.h
>   create mode 100644 hw/qdev-properties.h
>

I'd prefer if you could keep the following simple pattern:

* Start includes in *.c files with config.h (optionally)
   and qemu-common.h.

* Don't include standard include files which are already
   included in qemu-common.h.

* Don't include qemu-common.h in *.h files.

Regards,

Stefan Weil


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
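Stefan's three rules amount to the following layout for a hypothetical foo.c (a sketch only, not a file from the series; the header names follow QEMU's conventions):

```c
/* foo.c -- hypothetical file following the suggested include pattern */
#include "config.h"       /* rule 1: optional, and always first when present */
#include "qemu-common.h"  /* rule 1: second; pulls in the standard headers   */

/* rule 2: no <stdint.h>, <stdbool.h>, ... here -- qemu-common.h
 *         already includes them                                             */

/* rule 3: the matching foo.h must NOT include qemu-common.h itself;
 *         it forward-declares or includes only what it strictly needs       */
```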

From xen-devel-bounces@lists.xen.org Mon Aug 20 06:12:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Aug 2012 06:12:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3LD8-0003QI-0U; Mon, 20 Aug 2012 06:11:38 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <imammedo@redhat.com>) id 1T3F9B-0004sU-5B
	for xen-devel@lists.xensource.com; Sun, 19 Aug 2012 23:43:09 +0000
Received: from [85.158.143.35:12362] by server-3.bemta-4.messagelabs.com id
	3F/6A-09529-C0A71305; Sun, 19 Aug 2012 23:43:08 +0000
X-Env-Sender: imammedo@redhat.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1345419787!16215494!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTUwMDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24475 invoked from network); 19 Aug 2012 23:43:07 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-13.tower-21.messagelabs.com with SMTP;
	19 Aug 2012 23:43:07 -0000
Received: from int-mx10.intmail.prod.int.phx2.redhat.com
	(int-mx10.intmail.prod.int.phx2.redhat.com [10.5.11.23])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id q7JNh1qM027141
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Sun, 19 Aug 2012 19:43:01 -0400
Received: from dell-pet610-01.lab.eng.brq.redhat.com
	(dell-pet610-01.lab.eng.brq.redhat.com [10.34.42.20])
	by int-mx10.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP
	id q7JNgdH4020926; Sun, 19 Aug 2012 19:42:57 -0400
From: Igor Mammedov <imammedo@redhat.com>
To: qemu-devel@nongnu.org
Date: Mon, 20 Aug 2012 01:39:37 +0200
Message-Id: <1345419579-25499-4-git-send-email-imammedo@redhat.com>
In-Reply-To: <1345419579-25499-1-git-send-email-imammedo@redhat.com>
References: <1345419579-25499-1-git-send-email-imammedo@redhat.com>
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.23
X-Mailman-Approved-At: Mon, 20 Aug 2012 06:11:35 +0000
Cc: peter.maydell@linaro.org, jan.kiszka@siemens.com, mjt@tls.msk.ru,
	armbru@redhat.com, blauwirbel@gmail.com, kraxel@redhat.com,
	xen-devel@lists.xensource.com, i.mitsyanko@samsung.com,
	mdroth@linux.vnet.ibm.com, avi@redhat.com,
	anthony.perard@citrix.com, lersek@redhat.com,
	stefanha@linux.vnet.ibm.com, stefano.stabellini@eu.citrix.com,
	sw@weilnetz.de, lcapitulino@redhat.com, rth@twiddle.net,
	kwolf@redhat.com, aliguori@us.ibm.com, mtosatti@redhat.com,
	pbonzini@redhat.com, afaerber@suse.de
Subject: [Xen-devel] [PATCH 3/5] qapi-types.h doesn't really need to include
	qemu-common.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

needed to prevent build breakage when CPU becomes a child of DeviceState

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
---
 scripts/qapi-types.py |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/scripts/qapi-types.py b/scripts/qapi-types.py
index cf601ae..f34addb 100644
--- a/scripts/qapi-types.py
+++ b/scripts/qapi-types.py
@@ -263,7 +263,7 @@ fdecl.write(mcgen('''
 #ifndef %(guard)s
 #define %(guard)s
 
-#include "qemu-common.h"
+#include <stdbool.h>
 
 ''',
                   guard=guardname(h_file)))
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 20 06:12:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Aug 2012 06:12:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3LD7-0003Q4-8z; Mon, 20 Aug 2012 06:11:37 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <imammedo@redhat.com>) id 1T3F93-0004rq-UB
	for xen-devel@lists.xensource.com; Sun, 19 Aug 2012 23:43:02 +0000
Received: from [85.158.143.99:14072] by server-2.bemta-4.messagelabs.com id
	D5/51-31966-50A71305; Sun, 19 Aug 2012 23:43:01 +0000
X-Env-Sender: imammedo@redhat.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1345419780!21948019!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTUwMDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19790 invoked from network); 19 Aug 2012 23:43:00 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-6.tower-216.messagelabs.com with SMTP;
	19 Aug 2012 23:43:00 -0000
Received: from int-mx10.intmail.prod.int.phx2.redhat.com
	(int-mx10.intmail.prod.int.phx2.redhat.com [10.5.11.23])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id q7JNgpFY022881
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Sun, 19 Aug 2012 19:42:51 -0400
Received: from dell-pet610-01.lab.eng.brq.redhat.com
	(dell-pet610-01.lab.eng.brq.redhat.com [10.34.42.20])
	by int-mx10.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP
	id q7JNgdH2020926; Sun, 19 Aug 2012 19:42:47 -0400
From: Igor Mammedov <imammedo@redhat.com>
To: qemu-devel@nongnu.org
Date: Mon, 20 Aug 2012 01:39:35 +0200
Message-Id: <1345419579-25499-2-git-send-email-imammedo@redhat.com>
In-Reply-To: <1345419579-25499-1-git-send-email-imammedo@redhat.com>
References: <1345419579-25499-1-git-send-email-imammedo@redhat.com>
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.23
X-Mailman-Approved-At: Mon, 20 Aug 2012 06:11:35 +0000
Cc: peter.maydell@linaro.org, jan.kiszka@siemens.com, mjt@tls.msk.ru,
	armbru@redhat.com, blauwirbel@gmail.com, kraxel@redhat.com,
	xen-devel@lists.xensource.com, i.mitsyanko@samsung.com,
	mdroth@linux.vnet.ibm.com, avi@redhat.com,
	anthony.perard@citrix.com, lersek@redhat.com,
	stefanha@linux.vnet.ibm.com, stefano.stabellini@eu.citrix.com,
	sw@weilnetz.de, lcapitulino@redhat.com, rth@twiddle.net,
	kwolf@redhat.com, aliguori@us.ibm.com, mtosatti@redhat.com,
	pbonzini@redhat.com, afaerber@suse.de
Subject: [Xen-devel] [PATCH 1/5] move qemu_irq typedef out of cpu-common.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

it's necessary for making CPU a child of DeviceState without
causing circular header dependencies.

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
---
 hw/arm-misc.h |    1 +
 hw/bt.h       |    2 ++
 hw/devices.h  |    2 ++
 hw/irq.h      |    2 ++
 hw/omap.h     |    1 +
 hw/soc_dma.h  |    1 +
 hw/xen.h      |    1 +
 qemu-common.h |    1 -
 sysemu.h      |    1 +
 9 files changed, 11 insertions(+), 1 deletions(-)

diff --git a/hw/arm-misc.h b/hw/arm-misc.h
index bdd8fec..b13aa59 100644
--- a/hw/arm-misc.h
+++ b/hw/arm-misc.h
@@ -12,6 +12,7 @@
 #define ARM_MISC_H 1
 
 #include "memory.h"
+#include "hw/irq.h"
 
 /* The CPU is also modeled as an interrupt controller.  */
 #define ARM_PIC_CPU_IRQ 0
diff --git a/hw/bt.h b/hw/bt.h
index a48b8d4..ebf6a37 100644
--- a/hw/bt.h
+++ b/hw/bt.h
@@ -23,6 +23,8 @@
  * along with this program; if not, see <http://www.gnu.org/licenses/>.
  */
 
+#include "hw/irq.h"
+
 /* BD Address */
 typedef struct {
     uint8_t b[6];
diff --git a/hw/devices.h b/hw/devices.h
index 1a55c1e..c60bcab 100644
--- a/hw/devices.h
+++ b/hw/devices.h
@@ -1,6 +1,8 @@
 #ifndef QEMU_DEVICES_H
 #define QEMU_DEVICES_H
 
+#include "hw/irq.h"
+
 /* ??? Not all users of this file can include cpu-common.h.  */
 struct MemoryRegion;
 
diff --git a/hw/irq.h b/hw/irq.h
index 56c55f0..1339a3a 100644
--- a/hw/irq.h
+++ b/hw/irq.h
@@ -3,6 +3,8 @@
 
 /* Generic IRQ/GPIO pin infrastructure.  */
 
+typedef struct IRQState *qemu_irq;
+
 typedef void (*qemu_irq_handler)(void *opaque, int n, int level);
 
 void qemu_set_irq(qemu_irq irq, int level);
diff --git a/hw/omap.h b/hw/omap.h
index 413851b..8b08462 100644
--- a/hw/omap.h
+++ b/hw/omap.h
@@ -19,6 +19,7 @@
 #ifndef hw_omap_h
 #include "memory.h"
 # define hw_omap_h		"omap.h"
+#include "hw/irq.h"
 
 # define OMAP_EMIFS_BASE	0x00000000
 # define OMAP2_Q0_BASE		0x00000000
diff --git a/hw/soc_dma.h b/hw/soc_dma.h
index 904b26c..e386ace 100644
--- a/hw/soc_dma.h
+++ b/hw/soc_dma.h
@@ -19,6 +19,7 @@
  */
 
 #include "memory.h"
+#include "hw/irq.h"
 
 struct soc_dma_s;
 struct soc_dma_ch_s;
diff --git a/hw/xen.h b/hw/xen.h
index e5926b7..ff11dfd 100644
--- a/hw/xen.h
+++ b/hw/xen.h
@@ -8,6 +8,7 @@
  */
 #include <inttypes.h>
 
+#include "hw/irq.h"
 #include "qemu-common.h"
 
 /* xen-machine.c */
diff --git a/qemu-common.h b/qemu-common.h
index e5c2bcd..6677a30 100644
--- a/qemu-common.h
+++ b/qemu-common.h
@@ -273,7 +273,6 @@ typedef struct PCIEPort PCIEPort;
 typedef struct PCIESlot PCIESlot;
 typedef struct MSIMessage MSIMessage;
 typedef struct SerialState SerialState;
-typedef struct IRQState *qemu_irq;
 typedef struct PCMCIACardState PCMCIACardState;
 typedef struct MouseTransformInfo MouseTransformInfo;
 typedef struct uWireSlave uWireSlave;
diff --git a/sysemu.h b/sysemu.h
index 65552ac..f765821 100644
--- a/sysemu.h
+++ b/sysemu.h
@@ -9,6 +9,7 @@
 #include "qapi-types.h"
 #include "notify.h"
 #include "main-loop.h"
+#include "hw/irq.h"
 
 /* vl.c */
 
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 20 06:12:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Aug 2012 06:12:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3LD8-0003QW-Nk; Mon, 20 Aug 2012 06:11:38 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <imammedo@redhat.com>) id 1T3F9O-0004t8-Ij
	for xen-devel@lists.xensource.com; Sun, 19 Aug 2012 23:43:22 +0000
Received: from [85.158.139.83:30205] by server-8.bemta-5.messagelabs.com id
	70/10-02481-91A71305; Sun, 19 Aug 2012 23:43:21 +0000
X-Env-Sender: imammedo@redhat.com
X-Msg-Ref: server-12.tower-182.messagelabs.com!1345419800!28954378!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTUwMDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4875 invoked from network); 19 Aug 2012 23:43:21 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-12.tower-182.messagelabs.com with SMTP;
	19 Aug 2012 23:43:21 -0000
Received: from int-mx10.intmail.prod.int.phx2.redhat.com
	(int-mx10.intmail.prod.int.phx2.redhat.com [10.5.11.23])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id q7JNhBbK004773
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Sun, 19 Aug 2012 19:43:11 -0400
Received: from dell-pet610-01.lab.eng.brq.redhat.com
	(dell-pet610-01.lab.eng.brq.redhat.com [10.34.42.20])
	by int-mx10.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP
	id q7JNgdH6020926; Sun, 19 Aug 2012 19:43:06 -0400
From: Igor Mammedov <imammedo@redhat.com>
To: qemu-devel@nongnu.org
Date: Mon, 20 Aug 2012 01:39:39 +0200
Message-Id: <1345419579-25499-6-git-send-email-imammedo@redhat.com>
In-Reply-To: <1345419579-25499-1-git-send-email-imammedo@redhat.com>
References: <1345419579-25499-1-git-send-email-imammedo@redhat.com>
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.23
X-Mailman-Approved-At: Mon, 20 Aug 2012 06:11:35 +0000
Cc: peter.maydell@linaro.org, jan.kiszka@siemens.com, mjt@tls.msk.ru,
	armbru@redhat.com, blauwirbel@gmail.com, kraxel@redhat.com,
	xen-devel@lists.xensource.com, i.mitsyanko@samsung.com,
	mdroth@linux.vnet.ibm.com, avi@redhat.com,
	anthony.perard@citrix.com, lersek@redhat.com,
	stefanha@linux.vnet.ibm.com, stefano.stabellini@eu.citrix.com,
	sw@weilnetz.de, lcapitulino@redhat.com, rth@twiddle.net,
	kwolf@redhat.com, aliguori@us.ibm.com, mtosatti@redhat.com,
	pbonzini@redhat.com, afaerber@suse.de
Subject: [Xen-devel] [PATCH 5/5] make CPU a child of DeviceState
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
---
 include/qemu/cpu.h |    6 +++---
 1 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/include/qemu/cpu.h b/include/qemu/cpu.h
index ad706a6..ac44057 100644
--- a/include/qemu/cpu.h
+++ b/include/qemu/cpu.h
@@ -20,7 +20,7 @@
 #ifndef QEMU_CPU_H
 #define QEMU_CPU_H
 
-#include "qemu/object.h"
+#include "hw/qdev-core.h"
 #include "qemu-thread.h"
 
 /**
@@ -46,7 +46,7 @@ typedef struct CPUState CPUState;
  */
 typedef struct CPUClass {
     /*< private >*/
-    ObjectClass parent_class;
+    DeviceClass parent_class;
     /*< public >*/
 
     void (*reset)(CPUState *cpu);
@@ -59,7 +59,7 @@ typedef struct CPUClass {
  */
 struct CPUState {
     /*< private >*/
-    Object parent_obj;
+    DeviceState parent_obj;
     /*< public >*/
 
     struct QemuThread *thread;
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 20 07:49:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Aug 2012 07:49:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3Mjk-00052n-TL; Mon, 20 Aug 2012 07:49:24 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lars.kurth.xen@gmail.com>)
	id 1T3Mji-00052P-9H; Mon, 20 Aug 2012 07:49:22 +0000
X-Env-Sender: lars.kurth.xen@gmail.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1345448955!3230836!1
X-Originating-IP: [209.85.215.173]
X-SpamReason: No, hits=0.9 required=7.0 tests=HTML_40_50,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22715 invoked from network); 20 Aug 2012 07:49:15 -0000
Received: from mail-ey0-f173.google.com (HELO mail-ey0-f173.google.com)
	(209.85.215.173)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Aug 2012 07:49:15 -0000
Received: by eaac13 with SMTP id c13so1789361eaa.32
	for <multiple recipients>; Mon, 20 Aug 2012 00:49:15 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:date:from:reply-to:user-agent:mime-version:to
	:subject:content-type;
	bh=oNeiw0yrThK+63gXCcTxUTzs9vuMRpFBH8uDrvZlSPU=;
	b=vL1+GGa8h+a9MQy3m6Ek0SN/tCWfs05LUdTPNikdh6PFm26JmytnzOMev9CBCAhM5n
	TSObiHv/YGqqoCESYRF0fNagYKKPQ85KG5hu250a5Ndn3uOrfwUx1LUxoKWMzNvlGHPz
	sfwIOjKF54IsJOUeYbrKCeyxa97gtEg64cjZ0a6HS1kp3fgFmXx2A25ai0SyzPHhLywp
	N2he6DIepFYNyjkI5h8wi+49nVGQnZPWBaJia5XyFMcgIqEWLtA1ICmq4N5HHp/8dLdN
	yRsQ9xl3ORaoSXsZdXviP6zmPbkQuI1gTSuH1H4idcY4Kxy6h13KihsUwQxjNYY4gFVY
	94BA==
Received: by 10.14.223.9 with SMTP id u9mr7598919eep.10.1345448955309;
	Mon, 20 Aug 2012 00:49:15 -0700 (PDT)
Received: from [172.16.26.11] (b0fb50b8.bb.sky.com. [176.251.80.184])
	by mx.google.com with ESMTPS id v3sm13545548eep.10.2012.08.20.00.49.13
	(version=SSLv3 cipher=OTHER); Mon, 20 Aug 2012 00:49:14 -0700 (PDT)
Message-ID: <5031EBF8.7000105@xen.org>
Date: Mon, 20 Aug 2012 08:49:12 +0100
From: Lars Kurth <lars.kurth@xen.org>
User-Agent: Mozilla/5.0 (Windows NT 6.1;
	rv:14.0) Gecko/20120713 Thunderbird/14.0
MIME-Version: 1.0
To: xen-users@lists.xen.org, xen-arm@lists.xen.org, 
	"xen-api@lists.xen.org" <xen-api@lists.xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: [Xen-devel] Reminder: Xen Document Day is Today
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: lars.kurth@xen.org
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6912866376630282552=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.
--===============6912866376630282552==
Content-Type: multipart/alternative;
 boundary="------------030205030701080207090108"

This is a multi-part message in MIME format.
--------------030205030701080207090108
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit

Hi everybody,

A quick reminder that the next Xen Document Day is 
happening today. More info on document days at 
http://wiki.xen.org/wiki/Xen_Document_Days

Hope to see you on IRC! Feel free to add stuff to the TODO list 
(http://wiki.xen.org/wiki/Xen_Document_Days/TODO) or put your name 
beside an item if you intend to work on it.

Best Regards
Lars

*********************
* Xen Document Days *
*********************

We have another Xen document day coming up next Monday. Xen Document Days 
are for people who care about Xen documentation and want to improve it. 
We introduced Documentation Days because working on documentation in 
parallel with like-minded people is just more fun than working alone! 
Everybody who can contribute is welcome to join!

For a list of items that need work, check out the community maintained 
TODO list (http://wiki.xen.org/wiki/Xen_Document_Days/TODO 
<http://wiki.xen.org/wiki/Xen_Document_Days/TODO>). Of course, you can 
work on anything you like: the list just provides suggestions.

How do I participate?
=====================

- Join us on IRC: freenode channel #xendocs
- Tell people what you intend to work on (to avoid doing something somebody
   else is already working on)
- Fix some documentation
- Help others
- And above all: have fun!

--------------030205030701080207090108
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: 7bit

<html>
  <head>

    <meta http-equiv="content-type" content="text/html; charset=ISO-8859-1">
  </head>
  <body bgcolor="#FFFFFF" text="#000000">
    <div class="moz-text-flowed" style="font-family: -moz-fixed;
      font-size: 14px;" lang="x-western">Hi everybody,
      <br>
      <br>
      A quick reminder that the next Xen Document Day is
      happening today. More info on document days at <a
        class="moz-txt-link-freetext"
        href="http://wiki.xen.org/wiki/Xen_Document_Days">http://wiki.xen.org/wiki/Xen_Document_Days</a>
      <br>
      <br>
      Hope to see you on IRC! Feel free to add stuff to the TODO list (<a
        class="moz-txt-link-freetext"
        href="http://wiki.xen.org/wiki/Xen_Document_Days/TODO">http://wiki.xen.org/wiki/Xen_Document_Days/TODO</a>)
      or put your name beside an item if you intend to work on it.
      <br>
      <br>
      Best Regards
      <br>
      Lars
      <br>
      <br>
      *********************
      <br>
      * Xen Document Days *
      <br>
      *********************
      <br>
      <br>
      We have another Xen document day coming up next Monday. Xen Document
      Days are for people who care about Xen documentation and want to
      improve it. We introduced Documentation Days because working on
      documentation in parallel with like-minded people is just more
      fun than working alone! Everybody who can contribute is welcome to
      join!
      <br>
      <br>
      For a list of items that need work, check out the community
      maintained TODO list (<a class="moz-txt-link-freetext"
        href="http://wiki.xen.org/wiki/Xen_Document_Days/TODO">http://wiki.xen.org/wiki/Xen_Document_Days/TODO</a>
      <a class="moz-txt-link-rfc2396E"
        href="http://wiki.xen.org/wiki/Xen_Document_Days/TODO">&lt;http://wiki.xen.org/wiki/Xen_Document_Days/TODO&gt;</a>).
      Of course, you can work on anything you like: the list just
      provides suggestions.
      <br>
      <br>
      How do I participate?
      <br>
      =====================
      <br>
      <br>
      - Join us on IRC: freenode channel #xendocs
      <br>
      - Tell people what you intend to work on (to avoid doing something
      somebody
      <br>
      &nbsp; else is already working on)
      <br>
      - Fix some documentation
      <br>
      - Help others
      <br>
      - And above all: have fun!
      <br>
    </div>
  </body>
</html>

--------------030205030701080207090108--


--===============6912866376630282552==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6912866376630282552==--


From xen-devel-bounces@lists.xen.org Mon Aug 20 09:03:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Aug 2012 09:03:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3Nst-0006cw-1U; Mon, 20 Aug 2012 09:02:55 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <aware.why@gmail.com>) id 1T3Nss-0006cm-5v
	for xen-devel@lists.xensource.com; Mon, 20 Aug 2012 09:02:54 +0000
Received: from [85.158.138.51:51764] by server-3.bemta-3.messagelabs.com id
	C9/E5-13809-D3DF1305; Mon, 20 Aug 2012 09:02:53 +0000
X-Env-Sender: aware.why@gmail.com
X-Msg-Ref: server-6.tower-174.messagelabs.com!1345453372!21145468!1
X-Originating-IP: [74.125.83.43]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_20_30,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26126 invoked from network); 20 Aug 2012 09:02:52 -0000
Received: from mail-ee0-f43.google.com (HELO mail-ee0-f43.google.com)
	(74.125.83.43)
	by server-6.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Aug 2012 09:02:52 -0000
Received: by eekd4 with SMTP id d4so1584451eek.30
	for <xen-devel@lists.xensource.com>;
	Mon, 20 Aug 2012 02:02:52 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=+pRnCEpAqlvQIzW8yUpgHQ0pePNvMyekB5ISGtIKKW0=;
	b=K+mQIefZZ3Cow9eiNF7ONxiDg2chXd3W7p6dpldBnGLN21cRmoS4Oa1QhhA60GW3Q5
	y5rpwcvYSHVrEV40XtWH869LgZvDG6zAyey+YzPunHmm7ODEuC1lGkOATecefIALiHup
	fdU0elIe/3hXHoeoUs1FSc30Q82T3ABNKgPCJVqmUtDO2Y9ceHsuO5Mrq6and6qKguxL
	UyIx4U7rS1tz7/qLYQgQRGkrRs1tZjjNDCOAuyh4Aw40QVl8gAzr9YQnqFxJs/vbWmo4
	Adr4jTo1DUHC9UKswPs1u8EIxOomRyq7EruLfo2Jra9t56ZKtczAZunjSNPN2AnnVJ41
	+SXQ==
MIME-Version: 1.0
Received: by 10.14.172.193 with SMTP id t41mr7843644eel.25.1345453372437; Mon,
	20 Aug 2012 02:02:52 -0700 (PDT)
Received: by 10.14.100.71 with HTTP; Mon, 20 Aug 2012 02:02:52 -0700 (PDT)
Date: Mon, 20 Aug 2012 17:02:52 +0800
Message-ID: <CA+ePHTAPTrt4EcS512a5Bv+Q-7j1=YWc3S-mYW_ER4VKUptAeQ@mail.gmail.com>
From: =?UTF-8?B?6ams56OK?= <aware.why@gmail.com>
To: xen-devel@lists.xensource.com
Subject: [Xen-devel] [help] What's the relationship between tap*.* and vif
	*.* ?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8392506533864985937=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8392506533864985937==
Content-Type: multipart/alternative; boundary=047d7b603ef253742f04c7aec683

--047d7b603ef253742f04c7aec683
Content-Type: text/plain; charset=ISO-8859-1

Hi all,

    You can see a list of interfaces, including tap and vif devices, using
`ifconfig`, and also using `brctl show`.
    My virtual machine runs in HVM mode. In this case, what is the
relationship between this pair of interfaces, and when
and where were they created?
    Looking forward to your reply.

Thanks

--047d7b603ef253742f04c7aec683
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

Hi all,<div><br><div>=A0 =A0 You can see a list of interfaces containing su=
ch as tap and vif using `ifconfig`, and also using `brctl show`.</div><div>=
=A0 =A0 My virtual machine is of HVM mode, in this case, what&#39;s the rel=
ationship between the pair of interfaces, when</div>
<div>and where were they created?</div><div>=A0 =A0 Looking forward to your=
 reply.</div><div><br></div><div>Thanks</div></div>

--047d7b603ef253742f04c7aec683--


--===============8392506533864985937==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8392506533864985937==--


From xen-devel-bounces@lists.xen.org Mon Aug 20 09:18:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Aug 2012 09:18:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3O7C-0006pv-Nt; Mon, 20 Aug 2012 09:17:42 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T3O7A-0006pm-S9
	for xen-devel@lists.xen.org; Mon, 20 Aug 2012 09:17:41 +0000
Received: from [85.158.139.83:9611] by server-6.bemta-5.messagelabs.com id
	58/DA-22415-4B002305; Mon, 20 Aug 2012 09:17:40 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-182.messagelabs.com!1345454259!29074403!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk2MTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25268 invoked from network); 20 Aug 2012 09:17:39 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-5.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Aug 2012 09:17:39 -0000
X-IronPort-AV: E=Sophos;i="4.77,796,1336348800"; d="scan'208";a="14079543"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	20 Aug 2012 09:17:23 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Mon, 20 Aug 2012 10:17:23 +0100
Message-ID: <1345454240.28762.18.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: xen-devel <xen-devel@lists.xen.org>, xen-users <xen-users@lists.xen.org>
Date: Mon, 20 Aug 2012 10:17:20 +0100
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Subject: [Xen-devel] Xen 4.2 TODO / Release Plan
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

UGxhbiBmb3IgYSA0LjIgcmVsZWFzZToKaHR0cDovL2xpc3RzLnhlbi5vcmcvYXJjaGl2ZXMvaHRt
bC94ZW4tZGV2ZWwvMjAxMi0wMy9tc2cwMDc5My5odG1sCgpUaGUgdGltZSBsaW5lIGlzIGFzIGZv
bGxvd3M6CgoxOSBNYXJjaCAgICAgICAgLS0gVE9ETyBsaXN0IGxvY2tlZCBkb3duCjIgQXByaWwg
ICAgICAgICAtLSBGZWF0dXJlIEZyZWV6ZQozMCBKdWx5ICAgICAgICAgLS0gRmlyc3QgcmVsZWFz
ZSBjYW5kaWRhdGUKV2Vla2x5ICAgICAgICAgIC0tIFJDTisxIHVudGlsIHJlbGVhc2UgICAgICAg
ICAgPDwgV0UgQVJFIEhFUkUKCgpBIGhhbmRmdWwgb2YgaXNzdWVzIGlkZW50aWZpZWQgYnkgdGhl
IHRlc3QgZGF5IGxhc3Qgd2VlayBhcmUgaW5jbHVkZWQsCnRoYW5rcyB0byBhbGwgd2hvIHRvb2sg
cGFydC4KClRoZSB1cGRhdGVkIFRPRE8gbGlzdCBmb2xsb3dzLgoKaHlwZXJ2aXNvciwgYmxvY2tl
cnM6CgogICAgKiBOb25lCgp0b29scywgYmxvY2tlcnM6CgogICAgKiBsaWJ4bCBzdGFibGUgQVBJ
IC0tIHdlIHdvdWxkIGxpa2UgNC4yIHRvIGRlZmluZSBhIHN0YWJsZSBBUEkKICAgICAgd2hpY2gg
ZG93bnN0cmVhbSdzIGNhbiBzdGFydCB0byByZWx5IG9uIG5vdCBjaGFuZ2luZy4gQXNwZWN0cyBv
ZgogICAgICB0aGlzIGFyZToKCiAgICAgICAgKiBOb25lIGtub3duCgogICAgKiB4bCBjb21wYXRp
YmlsaXR5IHdpdGggeG06CgogICAgICAgICogTm8ga25vd24gaXNzdWVzCgogICAgKiBbQ0hFQ0td
IE1vcmUgZm9ybWFsbHkgZGVwcmVjYXRlIHhtL3hlbmQuIE1hbnBhZ2UgcGF0Y2hlcyBhbHJlYWR5
CiAgICAgIGluIHRyZWUuIE5lZWRzIHJlbGVhc2Ugbm90aW5nIGFuZCBjb21tdW5pY2F0aW9uIGFy
b3VuZCAtcmMxIHRvCiAgICAgIHJlbWluZCBwZW9wbGUgdG8gdGVzdCB4bC4KCiAgICAqIFtDSEVD
S10gQ29uZmlybSB0aGF0IG1pZ3JhdGlvbiBmcm9tIFhlbiA0LjEgLT4gNC4yIHdvcmtzLgoKICAg
ICogQnVtcCBsaWJyYXJ5IFNPTkFNRVMgYXMgbmVjZXNzYXJ5LgogICAgICA8MjA1MDIuMzk0NDAu
OTY5NjE5LjgyNDk3NkBtYXJpbmVyLnVrLnhlbnNvdXJjZS5jb20+CgogICAgKiBbQlVHXSBxZW11
LXRyYWRpdGlvbmFsIGhhcyA1MCUgY3B1IHV0aWxpemF0aW9uIG9uIGFuIGlkbGUKICAgICAgV2lu
ZG93cyBzeXN0ZW0gaWYgVVNCIGlzIGVuYWJsZWQuIE5vdCAxMDAlIGNsZWFyIHdoZXRoZXIgdGhp
cyBpcwogICAgICBYZW4gb3IgcWVtdS4gIEdlb3JnZSBEdW5sYXAgaXMgcGVyZm9ybWluZyBpbml0
aWFsCiAgICAgIGludmVzdGlnYXRpb25zLgoKaHlwZXJ2aXNvciwgbmljZSB0byBoYXZlOgoKICAg
ICogW0JVRyg/KV0gVW5kZXIgY2VydGFpbiBjb25kaXRpb25zLCB0aGUgcDJtX3BvZF9zd2VlcCBj
b2RlIHdpbGwKICAgICAgc3RvcCBoYWxmd2F5IHRocm91Z2ggc2VhcmNoaW5nLCBjYXVzaW5nIGEg
Z3Vlc3QgdG8gY3Jhc2ggZXZlbiBpZgogICAgICB0aGVyZSB3YXMgemVyb2VkIG1lbW9yeSBhdmFp
bGFibGUuICBUaGlzIGlzIE5PVCBhIHJlZ3Jlc3Npb24KICAgICAgZnJvbSA0LjEsIGFuZCBpcyBh
IHZlcnkgcmFyZSBjYXNlLCBzbyBwcm9iYWJseSBzaG91bGRuJ3QgYmUgYQogICAgICBibG9ja2Vy
LiAgKEluIGZhY3QsIEknZCBiZSBvcGVuIHRvIHRoZSBpZGVhIHRoYXQgaXQgc2hvdWxkIHdhaXQK
ICAgICAgdW50aWwgYWZ0ZXIgdGhlIHJlbGVhc2UgdG8gZ2V0IG1vcmUgdGVzdGluZy4pCiAgICAg
IAkgICAgKEdlb3JnZSBEdW5sYXApCgogICAgKiBTMyByZWdyZXNzaW9uKHM/KSByZXBvcnRlZCBi
eSBCZW4gR3V0aHJvIChCZW4gJiBKYW4gQmV1bGljaCkKCiAgICAqIGZpeCBoaWdoIGNoYW5nZSBy
YXRlIHRvIENNT1MgUlRDIHBlcmlvZGljIGludGVycnVwdCBjYXVzaW5nCiAgICAgIGd1ZXN0IHdh
bGwgY2xvY2sgdGltZSB0byBsYWcgKHBvc3NpYmxlIGZpeCBvdXRsaW5lZCwgbmVlZHMgdG8gYmUK
ICAgICAgcHV0IGluIHBhdGNoIGZvcm0gYW5kIHRob3JvdWdobHkgcmV2aWV3ZWQvdGVzdGVkIGZv
ciB1bndhbnRlZAogICAgICBzaWRlIGVmZmVjdHMsIEphbiBCZXVsaWNoKQoKdG9vbHMsIG5pY2Ug
dG8gaGF2ZToKCiAgICAqIHhsIGNvbXBhdGliaWxpdHkgd2l0aCB4bToKCiAgICAgICAgKiB0aGUg
cGFyYW1ldGVyIGlvIGFuZCBpcnEgaW4gZG9tVSBjb25maWcgZmlsZXMgYXJlIG5vdAogICAgICAg
ICAgZXZhbHVhdGVkIGJ5IHhsLiAgU28gaXQgaXMgbm90IHBvc3NpYmxlIHRvIHBhc3N0aHJvdWdo
IGEKICAgICAgICAgIHBhcmFsbGVsIHBvcnQgZm9yIG15IHByaW50ZXIgdG8gZG9tVSB3aGVuIEkg
c3RhcnQgdGhlIGRvbVUKICAgICAgICAgIHdpdGggeGwgY29tbWFuZC4gKHJlcG9ydGVkIGJ5IERp
ZXRlciBCbG9tcywKICAgICAgICAgIDwyMDEyMDgxNDEwMDcwNC5HQTE5NzA0QGJsb21zLmRlPikK
CiAgICAqIHhsLmNmZyg1KSBkb2N1bWVudGF0aW9uIHBhdGNoIGZvciBxZW11LXVwc3RyZWFtCiAg
ICAgIHZpZGVvcmFtL3ZpZGVvbWVtIHN1cHBvcnQ6CiAgICAgIGh0dHA6Ly9saXN0cy54ZW4ub3Jn
L2FyY2hpdmVzL2h0bWwveGVuLWRldmVsLzIwMTItMDUvbXNnMDAyNTAuaHRtbAogICAgICBxZW11
LXVwc3RyZWFtIGRvZXNuJ3Qgc3VwcG9ydCBzcGVjaWZ5aW5nIHZpZGVvbWVtIHNpemUgZm9yIHRo
ZQogICAgICBIVk0gZ3Vlc3QgY2lycnVzL3N0ZHZnYS4gIChidXQgdGhpcyB3b3JrcyB3aXRoCiAg
ICAgIHFlbXUteGVuLXRyYWRpdGlvbmFsKS4gKFBhc2kgS8Okcmtrw6RpbmVuKQoKICAgICogW0JV
R10gbG9uZyBzdG9wIGR1cmluZyB0aGUgZ3Vlc3QgYm9vdCBwcm9jZXNzIHdpdGggcWNvdyBpbWFn
ZSwKICAgICAgcmVwb3J0ZWQgYnkgSW50ZWw6IGh0dHA6Ly9idWd6aWxsYS54ZW4ub3JnL2J1Z3pp
bGxhL3Nob3dfYnVnLmNnaT9pZD0xODIxCgogICAgKiBbQlVHXSB2Y3B1LXNldCBkb2Vzbid0IHRh
a2UgZWZmZWN0IG9uIGd1ZXN0LCByZXBvcnRlZCBieSBJbnRlbDoKICAgICAgaHR0cDovL2J1Z3pp
bGxhLnhlbi5vcmcvYnVnemlsbGEvc2hvd19idWcuY2dpP2lkPTE4MjIKCiAgICAqIExvYWQgYmxr
dGFwIGRyaXZlciBmcm9tIHhlbmNvbW1vbnMgaW5pdHNjcmlwdCBpZiBhdmFpbGFibGUsIHRocmVh
ZCBhdDoKICAgICAgPGRiNjE0ZTkyZmFmNzQzZTIwYjNmLjEzMzcwOTY5NzdAa29kbzI+LiBUbyBi
ZSBmaXhlZCBtb3JlCiAgICAgIHByb3Blcmx5IGluIDQuMy4gKFBhdGNoIHBvc3RlZCwgZGlzY3Vz
c2lvbiwgcGxhbiB0byB0YWtlIHNpbXBsZQogICAgICB4ZW5jb21tb25zIHBhdGNoIGZvciA0LjIg
YW5kIHJldmlzdCBmb3IgNC4zLiBQaW5nIHNlbnQpCgogICAgKiBbQlVHXSB4bCBhbGxvd3Mgc2Ft
ZSBQQ0kgZGV2aWNlIHRvIGJlIGFzc2lnbmVkIHRvIG11bHRpcGxlCiAgICAgIGd1ZXN0cy4gaHR0
cDovL2J1Z3ppbGxhLnhlbi5vcmcvYnVnemlsbGEvc2hvd19idWcuY2dpP2lkPTE4MjYKICAgICAg
KDxFNDU1OEMwQzk2Njg4NzQ4ODM3RUIxQjA1QkVFRDc1QTBGRDU1NzRBQFNIU01TWDEwMi5jY3Iu
Y29ycC5pbnRlbC5jb20+KQoKICAgICogYWRkcmVzcyBQb0QgcHJvYmxlbXMgd2l0aCBlYXJseSBo
b3N0IHNpZGUgYWNjZXNzZXMgdG8gZ3Vlc3QKICAgICAgYWRkcmVzcyBzcGFjZSAoSmFuIEJldWxp
Y2gsIERPTkUpCgogICAgKiBmaXggaXB4ZSBidWlsZCBwcm9ibGVtcyB3aXRoIGdjYyA0LjcgKGZl
ZG9yYSAxNykuCiAgICAgIFRoZSBmb2xsb3dpbmcgZmlsZXMgZmFpbCB0byBidWlsZDoKICAgICAg
ICAtIGlweGUvc3JjL2RyaXZlcnMvYnVzL2lzYS5jCiAgICAgICAgLSBpcHhlL3NyYy9kcml2ZXJz
L25ldC9teXJpMTBnZS5jCiAgICAgICAgLSBpcHhlL3NyYy9kcml2ZXJzL2luZmluaWJhbmQvcWli
NzMyMi5jCiAgICAgIFBhdGNoZXMgaGF2ZSBiZWVuIHBvc3RlZCB0byBpcHhlLWRldmVsIG1haWxp
bmdsaXN0LCBzbyB3ZSBuZWVkCiAgICAgIHRvIHVwZGF0ZSBvdXIgaXB4ZSB2ZXJzaW9uIG9yIGdy
YWIgdGhlIHBhdGNoZXMuIChET05FLCBLZWlyKQoKICAgICogInhsIGxpc3QgLWwiIGRvZXMgbm90
IHByb2R1Y2UgcHJvcGVyIGpzb24uIFNob3VsZCBiZSBwb3NzaWJsZSB0bwogICAgICBtYWtlIGl0
IGludG8gYW4gYXJyYXkuIFJlcG9ydGVkIGJ5IEJhc3RpYW4gQmxhbmssCiAgICAgIDwyMDEyMDgx
NDEyMTc0MS5HQTEwMjE0QHdhdmVoYW1tZXIud2FsZGkuZXUub3JnPi4gKElhbiBDYW1wYmVsbCwK
ICAgICAgcGF0Y2ggcG9zdGVkKQoKICAgICogInhsIGNwdXBvb2wtY3JlYXRlIiBzZWdmYXVsdCBv
biBpbmNvcnJlY3QgaW5wdXQuIFJlcG9ydGVkIGJ5CiAgICAgIEdlb3JnZSBEdW5sYXAsCiAgICAg
IDxDQUZMQnhaYUVjaTBtT2NEQ2dGWDl6az13aDN6NE5mMUxENUU1RmN5N1kzPWlvREFNPWdAbWFp
bC5nbWFpbC5jb20+CiAgICAgIChJYW4gQ2FtcGJlbGwsIHBhdGNoIHBvc3RlZCkKCgoKX19fX19f
X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX18KWGVuLWRldmVsIG1haWxp
bmcgbGlzdApYZW4tZGV2ZWxAbGlzdHMueGVuLm9yZwpodHRwOi8vbGlzdHMueGVuLm9yZy94ZW4t
ZGV2ZWwK

From xen-devel-bounces@lists.xen.org Mon Aug 20 09:28:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Aug 2012 09:28:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3OHa-0007Ez-Gf; Mon, 20 Aug 2012 09:28:26 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <aware.why@gmail.com>) id 1T3OHZ-0007Et-2X
	for xen-devel@lists.xensource.com; Mon, 20 Aug 2012 09:28:25 +0000
Received: from [85.158.143.99:38326] by server-1.bemta-4.messagelabs.com id
	A7/F4-07754-83302305; Mon, 20 Aug 2012 09:28:24 +0000
X-Env-Sender: aware.why@gmail.com
X-Msg-Ref: server-5.tower-216.messagelabs.com!1345454894!28712245!1
X-Originating-IP: [74.125.83.43]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22199 invoked from network); 20 Aug 2012 09:28:15 -0000
Received: from mail-ee0-f43.google.com (HELO mail-ee0-f43.google.com)
	(74.125.83.43)
	by server-5.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Aug 2012 09:28:15 -0000
Received: by eekd4 with SMTP id d4so1594752eek.30
	for <xen-devel@lists.xensource.com>;
	Mon, 20 Aug 2012 02:28:14 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=8hr2q8Lsf4CcbO2Ksw6PI4fHMZBR09ETxgc4GM7dtuE=;
	b=rsVWKndNUl5/V60InHksLyCKP8huWNjjiHkJjjnokJXjEcfxbk4zm/4MV4AT15m5GB
	MV73ZxsFZrgMXS/NegxWupbKa6O9jyoZw4NWEUAHiiNRZbxFl8FE5N+ZQ7KWTgZBVR/t
	J6w+ppwHkrg/xzwmul2MEEVIBS6nEL6E8uDcQDPkP6kpQ2htXZzYpR0BAK+kTivh+omi
	0TGvqhQ1iVbYnUowb5tVlDWqb3tTtYn5EX/ms0htyh75MdhXKTG9MSXfjN82hxIzYuaJ
	GGuuwNuT2oL6anhCoZwazvky6KN8vvTPrD0ZyTsWNtwF5IWDKZtti5WDK6iFG/xlePR7
	z4Cg==
MIME-Version: 1.0
Received: by 10.14.5.78 with SMTP id 54mr7977261eek.1.1345454894760; Mon, 20
	Aug 2012 02:28:14 -0700 (PDT)
Received: by 10.14.100.71 with HTTP; Mon, 20 Aug 2012 02:28:14 -0700 (PDT)
In-Reply-To: <3682d11b08c19cad2fa06aa6b9ace1f5@abpni.co.uk>
References: <CA+ePHTAPTrt4EcS512a5Bv+Q-7j1=YWc3S-mYW_ER4VKUptAeQ@mail.gmail.com>
	<3682d11b08c19cad2fa06aa6b9ace1f5@abpni.co.uk>
Date: Mon, 20 Aug 2012 17:28:14 +0800
Message-ID: <CA+ePHTAuR9LBBNxW9Qem7af0N7cRnH+2SsCU=JD50aX9nTJ+ew@mail.gmail.com>
From: =?UTF-8?B?6ams56OK?= <aware.why@gmail.com>
To: Jonathan Tripathy <jonnyt@abpni.co.uk>
Cc: xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [help] What's the relationship between tap*.* and
 vif *.* ?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2822935770625338520=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2822935770625338520==
Content-Type: multipart/alternative; boundary=047d7b6dc2d210422b04c7af21b0

--047d7b6dc2d210422b04c7af21b0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

On Mon, Aug 20, 2012 at 5:16 PM, Jonathan Tripathy <jonnyt@abpni.co.uk> wrote:

>
>
> On 20.08.2012 10:02, 马磊 wrote:
>
>> Hi all,
>>
>>     You can see a list of interfaces, such as tap* and vif*,
>> using `ifconfig` and also `brctl show`.
>>     My virtual machine runs in HVM mode. In this case, what is the
>> relationship between each pair of interfaces, and when and where
>> were they created?
>>     Looking forward to your reply.
>>
>> Thanks
>>
>
> You are probably using HVM mode. TAP network devices are used by the full
> emulation devices (e.g. e1000). vif* devices are used if you install the
> GPLPV drivers.
>
> HTH
>
> Cheers
>
>
I want to disable the vif*.* interfaces, that is to say, not create vif
devices in HVM mode.
Is there any existing solution?

In src/tools/libxl/libxl_create.c I found the following lines,
which appear to create the vif devices:

         for (i = 0; i < d_config->num_vifs; i++) {
             d_config->vifs[i].domid = domid;
             ret = libxl_device_nic_add(ctx, domid, &d_config->vifs[i]);
Is there something wrong with skipping that?

--047d7b6dc2d210422b04c7af21b0
Content-Type: text/html; charset=UTF-8
Content-Transfer-Encoding: base64

PGJyPjxicj48ZGl2IGNsYXNzPSJnbWFpbF9xdW90ZSI+T24gTW9uLCBBdWcgMjAsIDIwMTIgYXQg
NToxNiBQTSwgSm9uYXRoYW4gVHJpcGF0aHkgPHNwYW4gZGlyPSJsdHIiPiZsdDs8YSBocmVmPSJt
YWlsdG86am9ubnl0QGFicG5pLmNvLnVrIiB0YXJnZXQ9Il9ibGFuayI+am9ubnl0QGFicG5pLmNv
LnVrPC9hPiZndDs8L3NwYW4+IHdyb3RlOjxicj48YmxvY2txdW90ZSBjbGFzcz0iZ21haWxfcXVv
dGUiIHN0eWxlPSJtYXJnaW46MCAwIDAgLjhleDtib3JkZXItbGVmdDoxcHggI2NjYyBzb2xpZDtw
YWRkaW5nLWxlZnQ6MWV4Ij4KPGRpdiBjbGFzcz0iSE9FblpiIj48ZGl2IGNsYXNzPSJoNSI+PGJy
Pgo8YnI+Ck9uIDIwLjA4LjIwMTIgMTA6MDIsIOmprOejiiB3cm90ZTo8YnI+CjxibG9ja3F1b3Rl
IGNsYXNzPSJnbWFpbF9xdW90ZSIgc3R5bGU9Im1hcmdpbjowIDAgMCAuOGV4O2JvcmRlci1sZWZ0
OjFweCAjY2NjIHNvbGlkO3BhZGRpbmctbGVmdDoxZXgiPgpIaSBhbGwsPGJyPgo8YnI+CsKgIMKg
IFlvdSBjYW4gc2VlIGEgbGlzdCBvZiBpbnRlcmZhY2VzIGNvbnRhaW5pbmcgc3VjaCBhcyB0YXAg
YW5kIHZpZjxicj4KdXNpbmcgYGlmY29uZmlnYCwgYW5kIGFsc28gdXNpbmcgYGJyY3RsIHNob3dg
Ljxicj4KwqAgwqAgTXkgdmlydHVhbCBtYWNoaW5lIGlzIG9mIEhWTSBtb2RlLCBpbiB0aGlzIGNh
c2UsIHdoYXQmIzM5O3MgdGhlPGJyPgpyZWxhdGlvbnNoaXAgYmV0d2VlbiB0aGUgcGFpciBvZiBp
bnRlcmZhY2VzLCB3aGVuPGJyPgphbmQgd2hlcmUgd2VyZSB0aGV5IGNyZWF0ZWQ/PGJyPgrCoCDC
oCBMb29raW5nIGZvcndhcmQgdG8geW91ciByZXBseS48YnI+Cjxicj4KVGhhbmtzPGJyPgo8L2Js
b2NrcXVvdGU+Cjxicj48L2Rpdj48L2Rpdj4KWW91IGFyZSBwcm9iYWJseSB1c2luZyBIVk0gbW9k
ZS4gVEFQIG5ldHdvcmsgZGV2aWNlcyBhcmUgdXNlZCBieSB0aGUgZnVsbCBlbXVsYXRpb24gZGV2
aWNlcyAoZS5nLiBlMTAwMCkuIHZpZiogZGV2aWNlcyBhcmUgdXNlZCBpZiB5b3UgaW5zdGFsbCB0
aGUgR1BMUFYgZHJpdmVycy48YnI+Cjxicj4KSFRIPGJyPgo8YnI+CkNoZWVyczxicj4KPGJyPgo8
L2Jsb2NrcXVvdGU+PC9kaXY+PGJyPjxkaXY+SSB3YW5uYXIgZGlzYWJsZSB0aGUgdmlmKi4qLCB0
aGF0IGlzIHRvIHNheSwgbm90IHRvIGNyZWF0ZSB2aWYgZGV2aWNlcyB3aGVuIEhWTSBtb2RlO8Kg
PC9kaXY+PGRpdj5pcyB0aGVyZSBhbnkgZXhzaXRpbmcgcmVzb2x1dGlvbj88L2Rpdj48ZGl2Pjxi
cj48L2Rpdj48ZGl2PjxkaXY+aW4gc3JjL3Rvb2xzL2xpYnhsL2xpYnhsX2NyZWF0ZS5jLCBJIGZv
dW5kIHNldmVyYWwgbGluZXM6PC9kaXY+CjxkaXY+KGl0IHNlZW1zIHRvIGNyZWF0ZSB2aWYgZGV2
aWNlcyk8L2Rpdj48ZGl2Pjxicj48L2Rpdj48ZGl2PsKgIMKgIMKgIMKgIMKgZm9yIChpID0gMDsg
aSAmbHQ7IGRfY29uZmlnLSZndDtudW1fdmlmczsgaSsrKSB7IMKgIMKgIMKgIMKgIMKgIMKgIMKg
IMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKg
IMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKg
IMKgIMKgIMKgIMKgIMKgwqDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDC
oCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDC
oMKgPC9kaXY+CjxkaXY+wqAgwqAgwqAgwqAgwqAgwqAgwqBkX2NvbmZpZy0mZ3Q7dmlmc1tpXS5k
b21pZCA9IGRvbWlkOyDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDC
oCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDC
oCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoMKgwqAg
wqAgwqAgwqAgwqAgwqAgwqAgwqAgwqAgwqAgwqAgwqAgwqAgwqAgwqAgwqAgwqAgwqAgwqAgwqAg
wqAgwqAgwqAgwqAgwqAgwqAgwqAgwqAgwqAgwqAgwqAgwqAgwqAgwqDCoDwvZGl2Pgo8ZGl2PsKg
IMKgIMKgIMKgIMKgIMKgIMKgIHJldCA9IGxpYnhsX2RldmljZV9uaWNfYWRkKGN0eCwgZG9taWQs
ICZhbXA7ZF9jb25maWctJmd0O3ZpZnNbaV0pPC9kaXY+PC9kaXY+PGRpdj48YnI+PC9kaXY+PGRp
dj5JcyB0aGVyZSBzb21ldGhpbmcgd3Jvbmcgd2l0aCBza2lwcGluZyB0aGF0PzwvZGl2Pgo=
--047d7b6dc2d210422b04c7af21b0--


--===============2822935770625338520==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2822935770625338520==--


From xen-devel-bounces@lists.xen.org Mon Aug 20 10:46:07 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Aug 2012 10:46:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3PUF-00086c-7S; Mon, 20 Aug 2012 10:45:35 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T3PUD-00086X-MO
	for xen-devel@lists.xen.org; Mon, 20 Aug 2012 10:45:33 +0000
Received: from [85.158.143.35:11720] by server-3.bemta-4.messagelabs.com id
	6E/80-09529-C4512305; Mon, 20 Aug 2012 10:45:32 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1345459529!14965799!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11568 invoked from network); 20 Aug 2012 10:45:30 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-3.tower-21.messagelabs.com with SMTP;
	20 Aug 2012 10:45:30 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 20 Aug 2012 11:47:37 +0100
Message-Id: <5032316202000078000965DF@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Mon, 20 Aug 2012 11:45:22 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Xudong Hao" <xudong.hao@intel.com>
References: <1345013682-20618-1-git-send-email-xudong.hao@intel.com>
	<502B85020200007800095036@nat28.tlf.novell.com>
	<403610A45A2B5242BD291EDAE8B37D300FE8AB13@SHSMSX102.ccr.corp.intel.com>
	<502CEFC10200007800095727@nat28.tlf.novell.com>
	<403610A45A2B5242BD291EDAE8B37D300FEA18DC@SHSMSX102.ccr.corp.intel.com>
	<502E2C9C0200007800095D33@nat28.tlf.novell.com>
	<403610A45A2B5242BD291EDAE8B37D300FEA36B0@SHSMSX102.ccr.corp.intel.com>
In-Reply-To: <403610A45A2B5242BD291EDAE8B37D300FEA36B0@SHSMSX102.ccr.corp.intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "ian.jackson@eu.citrix.com" <ian.jackson@eu.citrix.com>,
	Xiantao Zhang <xiantao.zhang@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 1/3] xen/tools: Add 64 bits big bar support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 20.08.12 at 05:22, "Hao, Xudong" <xudong.hao@intel.com> wrote:

>> -----Original Message-----
>> From: Jan Beulich [mailto:JBeulich@suse.com]
>> Sent: Friday, August 17, 2012 5:36 PM
>> To: Hao, Xudong
>> Cc: ian.jackson@eu.citrix.com; Zhang, Xiantao; xen-devel@lists.xen.org 
>> Subject: RE: [Xen-devel] [PATCH 1/3] xen/tools: Add 64 bits big bar support
>> 
>> >>> On 17.08.12 at 11:24, "Hao, Xudong" <xudong.hao@intel.com> wrote:
>> >>  -----Original Message-----
>> >> From: Jan Beulich [mailto:JBeulich@suse.com]
>> >> Sent: Thursday, August 16, 2012 7:04 PM
>> >> To: Hao, Xudong
>> >> Cc: ian.jackson@eu.citrix.com; Zhang, Xiantao; xen-devel@lists.xen.org 
>> >> Subject: RE: [Xen-devel] [PATCH 1/3] xen/tools: Add 64 bits big bar support
>> >>
>> >> >>> On 16.08.12 at 12:48, "Hao, Xudong" <xudong.hao@intel.com> wrote:
>> >> >> From: Jan Beulich [mailto:JBeulich@suse.com]
>> >> >> >>> On 15.08.12 at 08:54, Xudong Hao <xudong.hao@intel.com> wrote:
>> >> >> > --- a/tools/firmware/hvmloader/config.h	Tue Jul 24 17:02:04 2012
>> >> +0200
>> >> >> > +++ b/tools/firmware/hvmloader/config.h	Thu Jul 26 15:40:01 2012
>> >> +0800
>> >> >> > @@ -53,6 +53,10 @@ extern struct bios_config ovmf_config;
>> >> >> >  /* MMIO hole: Hardcoded defaults, which can be dynamically
>> expanded.
>> >> */
>> >> >> >  #define PCI_MEM_START       0xf0000000
>> >> >> >  #define PCI_MEM_END         0xfc000000
>> >> >> > +#define PCI_HIGH_MEM_START  0xa000000000ULL
>> >> >> > +#define PCI_HIGH_MEM_END    0xf000000000ULL
>> >> >>
>> >> >> With such hard coded values, this is hardly meant to be anything
>> >> >> more than an RFC, is it? These values should not exist in the first
>> >> >> place, and the variables below should be determined from VM
>> >> >> characteristics (best would presumably be to allocate them top
>> >> >> down from the end of physical address space, making sure you
>> >> >> don't run into RAM).
>> >>
>> >> No comment on this part?
>> >>
>> >
>> > The high MMIO memory starts at 640G, which is already very high; I
>> > think we don't need to allocate MMIO top-down from the top of the
>> > physical address space. Another thing you reminded me of: maybe we
>> > can skip this high MMIO hole when setting up the p2m table in the
>> > HVM build in libxc (setup_guest()), like the handling of MMIO
>> > below 4G.
>> 
>> That would be an option, but any fixed address you pick here
>> will look arbitrary (and will sooner or later raise questions). Plus
>> by allowing the RAM above 4G to remain contiguous even for
>> huge guests, we'd retain maximum compatibility with all sorts
>> of guest OSes. Furthermore, did you check that we can in all cases
>> use 40-bit (guest) physical addresses? (I would think that 36 is
>> the biggest common value.) Bottom line - please don't use a
>> fixed number here.
>> 
> 
> Where is the 36-bit physical address limit present? Could you help
> point it out in the current Xen code?

Look at xen/arch/x86/hvm/mtrr.c, e.g. hvm_mtrr_pat_init() or
mtrr_var_range_msr_set().

Jan



From xen-devel-bounces@lists.xen.org Mon Aug 20 11:03:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Aug 2012 11:03:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3PlS-0008Js-Tp; Mon, 20 Aug 2012 11:03:22 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T3PlR-0008Jn-Ab
	for Xen-devel@lists.xensource.com; Mon, 20 Aug 2012 11:03:21 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1345460595!9308301!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk2MTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27129 invoked from network); 20 Aug 2012 11:03:15 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Aug 2012 11:03:15 -0000
X-IronPort-AV: E=Sophos;i="4.77,796,1336348800"; d="scan'208";a="14082136"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	20 Aug 2012 11:03:14 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Mon, 20 Aug 2012 12:03:14 +0100
Date: Mon, 20 Aug 2012 12:02:55 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Mukesh Rathor <mukesh.rathor@oracle.com>
In-Reply-To: <20120817152617.64e2fe5e@mantra.us.oracle.com>
Message-ID: <alpine.DEB.2.02.1208201152510.15568@kaball.uk.xensource.com>
References: <20120815175724.3405043a@mantra.us.oracle.com>
	<alpine.DEB.2.02.1208161454430.4850@kaball.uk.xensource.com>
	<20120816114650.4db2079f@mantra.us.oracle.com>
	<alpine.DEB.2.02.1208171104230.15568@kaball.uk.xensource.com>
	<20120817122014.3c3387b5@mantra.us.oracle.com>
	<20120817193604.GA4573@phenom.dumpdata.com>
	<20120817152617.64e2fe5e@mantra.us.oracle.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [RFC PATCH 1/8]: PVH: Basic and preparatory changes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 17 Aug 2012, Mukesh Rathor wrote:
> On Fri, 17 Aug 2012 15:36:04 -0400
> Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> 
> > > > For example in balloon.c we are probably only interested in memory
> > > > related behavior, so checking for XENFEAT_auto_translated_physmap
> > > > should be enough.  In other parts of the code we might want to
> > > > check for xen_pv_domain(). If xen_pv_domain() and
> > > > XENFEAT_auto_translated_physmap are not enough, we could introduce
> > > > another small XENFEAT that specifies that the domain is running
> > > > in a HVM container. This way they are all reusable.
> > > 
> > > yeah, I thought about that, but wasn't sure what the implications
> > > would be for a guest that's not PVH but has an auto-translated
> > > physmap, if there's such a possibility. If you guys think that's
> > > not an issue, I can change it.
> > 
> > dom0_shadow=on on the hypervisor command line enables that in PV mode.
> 
> So, if I just add checks for auto_translated_physmap as suggested,
> wouldn't I be changing and breaking the code paths for a dom0_shadow
> boot of a PV guest? Is dom0_shadow deprecated?

I think that it is just a debugging option. The most recent reference to
dom0_shadow is in 2005, according to Google. Not many people would miss
it.


> The following would be true for both pvh and dom0_shadow:
> 
> #define xen_pvh_domain() (xen_pv_domain() && \
>                           xen_feature(XENFEAT_auto_translated_physmap) && \
>                           xen_have_vector_callback)  

If I understand dom0_shadow correctly, it wouldn't have
xen_have_vector_callback set, so the above #define would still work as
you expect.
But if all the above characteristics are actually true for dom0_shadow
guests too, then it might make sense to call them pvh domains anyway.


> Also, the SIF flag allows PVH to be enabled via the config file, where
> the tool parses it and sets it for the guest.
> 
> At present:
>   dom0: put pvh=true at grub command line
>   domU: put pvh=1 in the vm.cfg file.

We can still have a pvh option in the VM config file or as a Xen
parameter for dom0: it doesn't have to be exported as a SIF flag
to the Linux kernel though.
If xen_have_vector_callback is enabled and
XENFEAT_auto_translated_physmap is also set, then we are effectively
running as a PVH domain, otherwise we are not. As a consequence only the
toolstack needs to know about the pvh option in the config file to build
the guest correctly.


From xen-devel-bounces@lists.xen.org Mon Aug 20 11:06:42 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Aug 2012 11:06:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3PoG-0008PY-GE; Mon, 20 Aug 2012 11:06:16 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T3PoF-0008PS-2y
	for xen-devel@lists.xen.org; Mon, 20 Aug 2012 11:06:15 +0000
Received: from [85.158.139.83:19249] by server-3.bemta-5.messagelabs.com id
	E3/33-27237-52A12305; Mon, 20 Aug 2012 11:06:13 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-6.tower-182.messagelabs.com!1345460772!25188224!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13893 invoked from network); 20 Aug 2012 11:06:12 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-6.tower-182.messagelabs.com with SMTP;
	20 Aug 2012 11:06:12 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 20 Aug 2012 12:08:19 +0100
Message-Id: <50323640020000780009662A@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Mon, 20 Aug 2012 12:06:08 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>
References: <1345454240.28762.18.camel@zakaz.uk.xensource.com>
In-Reply-To: <1345454240.28762.18.camel@zakaz.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.2 TODO / Release Plan
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 20.08.12 at 11:17, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>     * fix high change rate to CMOS RTC periodic interrupt causing
>       guest wall clock time to lag (possible fix outlined, needs to be
>       put in patch form and thoroughly reviewed/tested for unwanted
>       side effects, Jan Beulich)

Patch was posted, but no comments or approval to commit so far.
Also, reportedly the patch only improves the situation, it doesn't
completely eliminate the problem. For the moment I'm out of ideas,
though, and hence would hope some others could help here.

Jan



From xen-devel-bounces@lists.xen.org Mon Aug 20 11:41:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Aug 2012 11:41:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3QM1-0000Rg-7P; Mon, 20 Aug 2012 11:41:09 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1T3QLz-0000RT-Vv
	for xen-devel@lists.xen.org; Mon, 20 Aug 2012 11:41:08 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1345462859!9779122!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzU5MzIz\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6140 invoked from network); 20 Aug 2012 11:41:00 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-3.tower-27.messagelabs.com with SMTP;
	20 Aug 2012 11:41:00 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga102.jf.intel.com with ESMTP; 20 Aug 2012 04:40:59 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.77,796,1336374000"; 
	d="scan'208,223";a="182856863"
Received: from fmsmsx104.amr.corp.intel.com ([10.19.9.35])
	by orsmga001.jf.intel.com with ESMTP; 20 Aug 2012 04:40:59 -0700
Received: from shsmsx101.ccr.corp.intel.com (10.239.4.153) by
	FMSMSX104.amr.corp.intel.com (10.19.9.35) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Mon, 20 Aug 2012 04:40:58 -0700
Received: from shsmsx102.ccr.corp.intel.com ([169.254.2.92]) by
	SHSMSX101.ccr.corp.intel.com ([169.254.1.89]) with mapi id
	14.01.0355.002; Mon, 20 Aug 2012 19:40:57 +0800
From: "Xu, Dongxiao" <dongxiao.xu@intel.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Thread-Topic: [PATCH] QEMU/helper2.c: Fix multiply issue for int and uint types
Thread-Index: Ac1+yKMvaYimh/KqShCBCOFlig8iDg==
Date: Mon, 20 Aug 2012 11:40:56 +0000
Message-ID: <40776A41FC278F40B59438AD47D147A90FE17CC0@SHSMSX102.ccr.corp.intel.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: yes
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
Content-Type: multipart/mixed;
	boundary="_002_40776A41FC278F40B59438AD47D147A90FE17CC0SHSMSX102ccrcor_"
MIME-Version: 1.0
Subject: [Xen-devel] [PATCH] QEMU/helper2.c: Fix multiply issue for int and
	uint types
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--_002_40776A41FC278F40B59438AD47D147A90FE17CC0SHSMSX102ccrcor_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable

Hi,

The following patch fixes a (uint * int) multiplication issue in QEMU.
The bug causes a Level 1 (L1) Xen to take more than 30 minutes to boot
on a Level 0 (L0) Xen (the nested virtualization case).
Please help to review and pull.

Upstream QEMU also has a similar piece of code; I will send a patch
there as well.

Thanks,
Dongxiao


>From 442297c3f5073b11033e6af2338f571418eff024 Mon Sep 17 00:00:00 2001
From: Dongxiao Xu <dongxiao.xu@intel.com>
Date: Mon, 20 Aug 2012 16:45:04 +0800
Subject: [PATCH] helper2: fix multiply issue for int and uint types

If one multiplication operand is an int and the other a uint, the int
operand is converted to uint first, which is not what this code
intends. The fix is to cast the uint operand to (int) before the
multiplication.

This fixes the slow booting issue (boots take more than 30 minutes)
of a Xen hypervisor running on another Xen hypervisor
(the nested virtualization case).

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 i386-dm/helper2.c |   16 ++++++++--------
 1 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/i386-dm/helper2.c b/i386-dm/helper2.c
index c6d049c..2a66086 100644
--- a/i386-dm/helper2.c
+++ b/i386-dm/helper2.c
@@ -364,7 +364,7 @@ static void cpu_ioreq_pio(CPUState *env, ioreq_t *req)
             for (i = 0; i < req->count; i++) {
                 tmp = do_inp(env, req->addr, req->size);
                 write_physical((target_phys_addr_t) req->data
-                  + (sign * i * req->size),
+                  + (sign * i * (int)req->size),
                   req->size, &tmp);
             }
         }
@@ -376,7 +376,7 @@ static void cpu_ioreq_pio(CPUState *env, ioreq_t *req)
                 unsigned long tmp = 0;
 
                 read_physical((target_phys_addr_t) req->data
-                  + (sign * i * req->size),
+                  + (sign * i * (int)req->size),
                   req->size, &tmp);
                 do_outp(env, req->addr, req->size, tmp);
             }
@@ -394,13 +394,13 @@ static void cpu_ioreq_move(CPUState *env, ioreq_t *req)
         if (req->dir == IOREQ_READ) {
             for (i = 0; i < req->count; i++) {
                 read_physical(req->addr
-                  + (sign * i * req->size),
+                  + (sign * i * (int)req->size),
                   req->size, &req->data);
             }
         } else if (req->dir == IOREQ_WRITE) {
             for (i = 0; i < req->count; i++) {
                 write_physical(req->addr
-                  + (sign * i * req->size),
+                  + (sign * i * (int)req->size),
                   req->size, &req->data);
             }
         }
@@ -410,19 +410,19 @@ static void cpu_ioreq_move(CPUState *env, ioreq_t *req)
         if (req->dir == IOREQ_READ) {
             for (i = 0; i < req->count; i++) {
                 read_physical(req->addr
-                  + (sign * i * req->size),
+                  + (sign * i * (int)req->size),
                   req->size, &tmp);
                 write_physical((target_phys_addr_t) req->data
-                  + (sign * i * req->size),
+                  + (sign * i * (int)req->size),
                   req->size, &tmp);
             }
         } else if (req->dir == IOREQ_WRITE) {
             for (i = 0; i < req->count; i++) {
                 read_physical((target_phys_addr_t) req->data
-                  + (sign * i * req->size),
+                  + (sign * i * (int)req->size),
                   req->size, &tmp);
                 write_physical(req->addr
-                  + (sign * i * req->size),
+                  + (sign * i * (int)req->size),
                   req->size, &tmp);
             }
         }
-- 
1.7.1



--_002_40776A41FC278F40B59438AD47D147A90FE17CC0SHSMSX102ccrcor_
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--_002_40776A41FC278F40B59438AD47D147A90FE17CC0SHSMSX102ccrcor_--


From xen-devel-bounces@lists.xen.org Mon Aug 20 11:45:18 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Aug 2012 11:45:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3QPk-0000cs-1E; Mon, 20 Aug 2012 11:45:00 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <imammedo@redhat.com>) id 1T3Pvh-0000BK-SE
	for xen-devel@lists.xensource.com; Mon, 20 Aug 2012 11:13:58 +0000
X-Env-Sender: imammedo@redhat.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1345461230!2815039!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTUwMDY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19441 invoked from network); 20 Aug 2012 11:13:51 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-15.tower-27.messagelabs.com with SMTP;
	20 Aug 2012 11:13:51 -0000
Received: from int-mx10.intmail.prod.int.phx2.redhat.com
	(int-mx10.intmail.prod.int.phx2.redhat.com [10.5.11.23])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id q7KBDbKN005778
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 20 Aug 2012 07:13:37 -0400
Received: from thinkpad.mammed.net (vpn-238-235.phx2.redhat.com [10.3.238.235])
	by int-mx10.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP
	id q7KBDR9e006097
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES128-SHA bits=128 verify=NO);
	Mon, 20 Aug 2012 07:13:29 -0400
Date: Mon, 20 Aug 2012 13:13:26 +0200
From: Igor Mammedov <imammedo@redhat.com>
To: Stefan Weil <sw@weilnetz.de>
Message-ID: <20120820131326.22ea454f@thinkpad.mammed.net>
In-Reply-To: <5031BFE0.1070909@weilnetz.de>
References: <1345419579-25499-1-git-send-email-imammedo@redhat.com>
	<1345419579-25499-2-git-send-email-imammedo@redhat.com>
	<5031BFE0.1070909@weilnetz.de>
Mime-Version: 1.0
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.23
X-Mailman-Approved-At: Mon, 20 Aug 2012 11:44:58 +0000
Cc: peter.maydell@linaro.org, jan.kiszka@siemens.com, mjt@tls.msk.ru,
	qemu-devel@nongnu.org, armbru@redhat.com, blauwirbel@gmail.com,
	kraxel@redhat.com, xen-devel@lists.xensource.com,
	i.mitsyanko@samsung.com, mdroth@linux.vnet.ibm.com,
	avi@redhat.com, anthony.perard@citrix.com, lersek@redhat.com,
	stefanha@linux.vnet.ibm.com, stefano.stabellini@eu.citrix.com,
	lcapitulino@redhat.com, rth@twiddle.net, kwolf@redhat.com,
	aliguori@us.ibm.com, mtosatti@redhat.com, pbonzini@redhat.com,
	afaerber@suse.de
Subject: Re: [Xen-devel] [PATCH 1/5] move qemu_irq typedef out of
	cpu-common.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 20 Aug 2012 06:41:04 +0200
Stefan Weil <sw@weilnetz.de> wrote:

> Am 20.08.2012 01:39, schrieb Igor Mammedov:
> > it's necessary for making CPU child of DEVICE without
> > causing circular header deps.
> >
> > Signed-off-by: Igor Mammedov <imammedo@redhat.com>
> > ---
> >   hw/arm-misc.h |    1 +
> >   hw/bt.h       |    2 ++
> >   hw/devices.h  |    2 ++
> >   hw/irq.h      |    2 ++
> >   hw/omap.h     |    1 +
> >   hw/soc_dma.h  |    1 +
> >   hw/xen.h      |    1 +
> >   qemu-common.h |    1 -
> >   sysemu.h      |    1 +
> >   9 files changed, 11 insertions(+), 1 deletions(-)
> >
> > diff --git a/hw/arm-misc.h b/hw/arm-misc.h
> > index bdd8fec..b13aa59 100644
> > --- a/hw/arm-misc.h
> > +++ b/hw/arm-misc.h
> > @@ -12,6 +12,7 @@
> >   #define ARM_MISC_H 1
> >   
> >   #include "memory.h"
> > +#include "hw/irq.h"
> >   
> >   /* The CPU is also modeled as an interrupt controller.  */
> >   #define ARM_PIC_CPU_IRQ 0
> > diff --git a/hw/bt.h b/hw/bt.h
> > index a48b8d4..ebf6a37 100644
> > --- a/hw/bt.h
> > +++ b/hw/bt.h
> > @@ -23,6 +23,8 @@
> >    * along with this program; if not, see <http://www.gnu.org/licenses/>.
> >    */
> >   
> > +#include "hw/irq.h"
> > +
> >   /* BD Address */
> >   typedef struct {
> >       uint8_t b[6];
> > diff --git a/hw/devices.h b/hw/devices.h
> > index 1a55c1e..c60bcab 100644
> > --- a/hw/devices.h
> > +++ b/hw/devices.h
> > @@ -1,6 +1,8 @@
> >   #ifndef QEMU_DEVICES_H
> >   #define QEMU_DEVICES_H
> >   
> > +#include "hw/irq.h"
> > +
> >   /* ??? Not all users of this file can include cpu-common.h.  */
> >   struct MemoryRegion;
> >   
> > diff --git a/hw/irq.h b/hw/irq.h
> > index 56c55f0..1339a3a 100644
> > --- a/hw/irq.h
> > +++ b/hw/irq.h
> > @@ -3,6 +3,8 @@
> >   
> >   /* Generic IRQ/GPIO pin infrastructure.  */
> >   
> > +typedef struct IRQState *qemu_irq;
> > +
> >   typedef void (*qemu_irq_handler)(void *opaque, int n, int level);
> >   
> >   void qemu_set_irq(qemu_irq irq, int level);
> > diff --git a/hw/omap.h b/hw/omap.h
> > index 413851b..8b08462 100644
> > --- a/hw/omap.h
> > +++ b/hw/omap.h
> > @@ -19,6 +19,7 @@
> >   #ifndef hw_omap_h
> >   #include "memory.h"
> >   # define hw_omap_h		"omap.h"
> > +#include "hw/irq.h"
> >   
> >   # define OMAP_EMIFS_BASE	0x00000000
> >   # define OMAP2_Q0_BASE		0x00000000
> > diff --git a/hw/soc_dma.h b/hw/soc_dma.h
> > index 904b26c..e386ace 100644
> > --- a/hw/soc_dma.h
> > +++ b/hw/soc_dma.h
> > @@ -19,6 +19,7 @@
> >    */
> >   
> >   #include "memory.h"
> > +#include "hw/irq.h"
> >   
> >   struct soc_dma_s;
> >   struct soc_dma_ch_s;
> > diff --git a/hw/xen.h b/hw/xen.h
> > index e5926b7..ff11dfd 100644
> > --- a/hw/xen.h
> > +++ b/hw/xen.h
> > @@ -8,6 +8,7 @@
> >    */
> >   #include <inttypes.h>
> >   
> > +#include "hw/irq.h"
> >   #include "qemu-common.h"
> >   
> >   /* xen-machine.c */
> > diff --git a/qemu-common.h b/qemu-common.h
> > index e5c2bcd..6677a30 100644
> > --- a/qemu-common.h
> > +++ b/qemu-common.h
> > @@ -273,7 +273,6 @@ typedef struct PCIEPort PCIEPort;
> >   typedef struct PCIESlot PCIESlot;
> >   typedef struct MSIMessage MSIMessage;
> >   typedef struct SerialState SerialState;
> > -typedef struct IRQState *qemu_irq;
> >   typedef struct PCMCIACardState PCMCIACardState;
> >   typedef struct MouseTransformInfo MouseTransformInfo;
> >   typedef struct uWireSlave uWireSlave;
> 
> Just move the declaration of qemu_irq to the beginning of qemu-common.h
> and leave the rest of files untouched. That also fixes the circular 
> dependency.
> 
> I already have a patch that does this, so you can integrate it in your 
> series
> instead of this one.
No doubt that's the simpler way, but IMHO it's more of a hack than a fix
for the underlying problem.
It works for now, but it doesn't alleviate the header nightmare in QEMU,
where everything is included in qemu-common.h and everything includes it as
well.

Anyway, if the majority prefers the simple move, I'll drop this patch in
favor of yours.
 
> 
> 
> > diff --git a/sysemu.h b/sysemu.h
> > index 65552ac..f765821 100644
> > --- a/sysemu.h
> > +++ b/sysemu.h
> > @@ -9,6 +9,7 @@
> >   #include "qapi-types.h"
> >   #include "notify.h"
> >   #include "main-loop.h"
> > +#include "hw/irq.h"
> >   
> >   /* vl.c */
> >   
> 


-- 
Regards,
  Igor

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 20 11:46:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Aug 2012 11:46:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3QR7-0000k9-P1; Mon, 20 Aug 2012 11:46:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T3QR6-0000jz-S5
	for xen-devel@lists.xensource.com; Mon, 20 Aug 2012 11:46:25 +0000
Received: from [85.158.143.35:17559] by server-3.bemta-4.messagelabs.com id
	6D/D3-09529-09322305; Mon, 20 Aug 2012 11:46:24 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1345463165!14261204!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk2MTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17798 invoked from network); 20 Aug 2012 11:46:07 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Aug 2012 11:46:07 -0000
X-IronPort-AV: E=Sophos;i="4.77,796,1336348800"; d="scan'208";a="14082988"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	20 Aug 2012 11:46:05 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Mon, 20 Aug 2012 12:46:05 +0100
Date: Mon, 20 Aug 2012 12:45:45 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <20120817174549.GA14257@phenom.dumpdata.com>
Message-ID: <alpine.DEB.2.02.1208201243180.15568@kaball.uk.xensource.com>
References: <1345133009-21941-1-git-send-email-konrad.wilk@oracle.com>
	<1345133009-21941-7-git-send-email-konrad.wilk@oracle.com>
	<alpine.DEB.2.02.1208171839590.15568@kaball.uk.xensource.com>
	<20120817174549.GA14257@phenom.dumpdata.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 06/11] xen/mmu: For 64-bit do not call
 xen_map_identity_early
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 17 Aug 2012, Konrad Rzeszutek Wilk wrote:
> On Fri, Aug 17, 2012 at 06:41:23PM +0100, Stefano Stabellini wrote:
> > On Thu, 16 Aug 2012, Konrad Rzeszutek Wilk wrote:
> > > B/c we do not need it. During the startup the Xen provides
> > > us with all the memory mapped that we need to function.
> > 
> > Shouldn't we check to make sure that is actually true (I am thinking at
> > nr_pt_frames)?
> 
> I was looking at the source code (hypervisor) to figure it out and
> that is certainly true.
> 
> 
> > Or is it actually stated somewhere in the Xen headers?
> 
> Couldn't find it, but after looking so long at the source code
> I didn't even bother looking for it.
> 
> Thought to be honest - I only looked at how the 64-bit pagetables
> were set up, so I didn't dare to touch the 32-bit. Hence the #ifdef

I think that we need to involve some Xen maintainers and get this
written down somewhere in the public headers, otherwise we have no
guarantees that it is going to stay as it is in the next Xen versions.

Maybe we just need to add a couple of lines of comment to
xen/include/public/xen.h.
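[Editor's note: to make the question concrete, the field mentioned above lives
in struct start_info, declared in xen/include/public/xen.h, and nr_pt_frames
says how many frames the bootstrap page tables occupy. A rough, hypothetical
sketch of the kind of check being discussed, using a mocked-up start_info
(struct start_info_mock, MOCK_PAGE_SIZE and both helpers are invented here),
not real kernel or hypervisor code:]

```c
#include <assert.h>

/* Mock of the start_info fields relevant here; the real structure is
 * declared in xen/include/public/xen.h. */
struct start_info_mock {
    unsigned long nr_pages;      /* total pages granted to the domain */
    unsigned long pt_base;       /* VA of the initial page-table root */
    unsigned long nr_pt_frames;  /* frames used by the initial tables */
};

#define MOCK_PAGE_SIZE 4096UL

/* Assuming the initial mapping extends through the bootstrap page
 * tables, the first virtual address NOT mapped at startup would be: */
static unsigned long initial_mapping_end(const struct start_info_mock *si)
{
    return si->pt_base + si->nr_pt_frames * MOCK_PAGE_SIZE;
}

/* A guest wanting to validate the "everything we touch early is
 * already mapped" assumption (rather than calling something like
 * xen_map_identity_early()) could compare early addresses against
 * this boundary. */
static int is_initially_mapped(const struct start_info_mock *si,
                               unsigned long vaddr)
{
    return vaddr < initial_mapping_end(si);
}
```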

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 20 11:50:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Aug 2012 11:50:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3QUZ-000107-Fg; Mon, 20 Aug 2012 11:49:59 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <imammedo@redhat.com>) id 1T3QSc-0000sz-CO
	for xen-devel@lists.xensource.com; Mon, 20 Aug 2012 11:47:58 +0000
Received: from [85.158.138.51:8661] by server-11.bemta-3.messagelabs.com id
	F5/A8-23152-DE322305; Mon, 20 Aug 2012 11:47:57 +0000
X-Env-Sender: imammedo@redhat.com
X-Msg-Ref: server-11.tower-174.messagelabs.com!1345463276!29204385!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTUwMDY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13537 invoked from network); 20 Aug 2012 11:47:56 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-11.tower-174.messagelabs.com with SMTP;
	20 Aug 2012 11:47:56 -0000
Received: from int-mx01.intmail.prod.int.phx2.redhat.com
	(int-mx01.intmail.prod.int.phx2.redhat.com [10.5.11.11])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id q7KBlmlP017861
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 20 Aug 2012 07:47:48 -0400
Received: from thinkpad.mammed.net (vpn-238-235.phx2.redhat.com [10.3.238.235])
	by int-mx01.intmail.prod.int.phx2.redhat.com (8.13.8/8.13.8) with ESMTP
	id q7KBlfEG009523
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES128-SHA bits=128 verify=NO);
	Mon, 20 Aug 2012 07:47:43 -0400
Date: Mon, 20 Aug 2012 13:47:39 +0200
From: Igor Mammedov <imammedo@redhat.com>
To: Stefan Weil <sw@weilnetz.de>
Message-ID: <20120820134739.08bf6de4@thinkpad.mammed.net>
In-Reply-To: <5031C2A3.3060800@weilnetz.de>
References: <1345419579-25499-1-git-send-email-imammedo@redhat.com>
	<5031C2A3.3060800@weilnetz.de>
Mime-Version: 1.0
X-Scanned-By: MIMEDefang 2.67 on 10.5.11.11
X-Mailman-Approved-At: Mon, 20 Aug 2012 11:49:58 +0000
Cc: peter.maydell@linaro.org, jan.kiszka@siemens.com, mjt@tls.msk.ru,
	qemu-devel@nongnu.org, armbru@redhat.com, blauwirbel@gmail.com,
	kraxel@redhat.com, xen-devel@lists.xensource.com,
	i.mitsyanko@samsung.com, mdroth@linux.vnet.ibm.com,
	avi@redhat.com, anthony.perard@citrix.com, lersek@redhat.com,
	stefanha@linux.vnet.ibm.com, stefano.stabellini@eu.citrix.com,
	lcapitulino@redhat.com, rth@twiddle.net, kwolf@redhat.com,
	aliguori@us.ibm.com, mtosatti@redhat.com, pbonzini@redhat.com,
	afaerber@suse.de
Subject: Re: [Xen-devel] [PATCH 0/5 v2] cpu: make a child of DeviceState
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 20 Aug 2012 06:52:51 +0200
Stefan Weil <sw@weilnetz.de> wrote:

> Am 20.08.2012 01:39, schrieb Igor Mammedov:
> > this is th 3rd approach to make CPU a child of DeviceState
> > for both kinds of targets *-user and *-softmmu. It seems
> > with current state of qemu it doesn't take too much effort
> > to make it compile. Please check if it doesn't break
> > something on other targets/archs/hosts than i386.
> >
> > what's tested:
> >    - compile tested building all targets on FC17x64 host.
> >    - briefly tested i386 user and softmmu targets
> >
> > Anthony Liguori (1):
> >    qdev: split up header so it can be used in cpu.h
> >
> > Igor Mammedov (4):
> >    move qemu_irq typedef out of cpu-common.h
> >    qapi-types.h doesn't really need to include qemu-common.h
> >    cleanup error.h, included qapi-types.h aready has stdbool.h
> >    make CPU a child of DeviceState
> >
> >   error.h               |    1 -
> >   hw/arm-misc.h         |    1 +
> >   hw/bt.h               |    2 +
> >   hw/devices.h          |    2 +
> >   hw/irq.h              |    2 +
> >   hw/mc146818rtc.c      |    1 +
> >   hw/omap.h             |    1 +
> >   hw/qdev-addr.c        |    1 +
> >   hw/qdev-core.h        |  240 ++++++++++++++++++++++++++++++++
> >   hw/qdev-monitor.h     |   16 ++
> >   hw/qdev-properties.c  |    1 +
> >   hw/qdev-properties.h  |  128 +++++++++++++++++
> >   hw/qdev.c             |    1 +
> >   hw/qdev.h             |  371 +------------------------------------------------
> >   hw/soc_dma.h          |    1 +
> >   hw/xen.h              |    1 +
> >   include/qemu/cpu.h    |    6 +-
> >   qemu-common.h         |    1 -
> >   scripts/qapi-types.py |    2 +-
> >   sysemu.h              |    1 +
> >   20 files changed, 407 insertions(+), 373 deletions(-)
> >   create mode 100644 hw/qdev-core.h
> >   create mode 100644 hw/qdev-monitor.h
> >   create mode 100644 hw/qdev-properties.h
> >
> 
> I'd prefer if you could keep the following simple pattern:
> 
> * Start includes in *.c files with config.h (optionally)
>    and qemu-common.h.
I can't agree with you on this. I'd say that every header should be
self-sufficient: it should include the other headers whose types it uses and
NOT depend on the position where it's included; it should provide all its own
deps.
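[Editor's note: a self-sufficient header in this sense looks like the
following sketch. hw/foo.h, FooDevice and foo_device_reset() are invented for
illustration, and the qemu_irq typedef is inlined where real code would
#include "hw/irq.h", so the sketch stands alone:]

```c
#ifndef HW_FOO_H
#define HW_FOO_H

#include <stdint.h>   /* uint32_t: included here, not assumed from the .c */

/* Real code would #include "hw/irq.h"; the typedef is inlined so this
 * sketch is self-contained. */
typedef struct IRQState *qemu_irq;

/* Every type this header uses is visible above, so the header compiles
 * no matter what was (or wasn't) included before it. */
typedef struct FooDevice {
    uint32_t regs[4];
    qemu_irq irq;
} FooDevice;

static inline void foo_device_reset(FooDevice *dev)
{
    for (int i = 0; i < 4; i++) {
        dev->regs[i] = 0;
    }
    dev->irq = 0;
}

#endif /* HW_FOO_H */
```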

> 
> * Don't include standard include files which are already
>    included in qemu-common.h
qemu-common.h was probably intended originally to simplify inclusion of
standard headers and glue stuff in a multi-OS build environment, but it seems
to have become misused. It now includes a lot of stuff that is not common to
every file it's included in.
Perhaps we should split the std includes and glue layer out into something
like std/host-common.h and, on a case-by-case basis, move the other type
definitions that are not common into their appropriate places, like with
qemu_irq.

> 
> * Don't include qemu-common.h in *.h files.
I'm all for it, but it's difficult right now because qemu-common.h tends to
be included in a lot of headers that don't really need it, and other headers
then happen to depend on its prior inclusion. It's difficult to untangle this
in one take, but it should be possible in small steps.

[1/1] is a direction that could be used to put custom types in their proper
places. It would be the better way long term and would help avoid problems
with including one header in another.

> 
> Regards,
> 
> Stefan Weil
> 


-- 
Regards,
  Igor

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 20 11:54:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Aug 2012 11:54:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3QYR-0001BA-4x; Mon, 20 Aug 2012 11:53:59 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T3QYP-0001Ay-Nl
	for xen-devel@lists.xensource.com; Mon, 20 Aug 2012 11:53:57 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1345463626!9781860!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk2MTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22630 invoked from network); 20 Aug 2012 11:53:46 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Aug 2012 11:53:46 -0000
X-IronPort-AV: E=Sophos;i="4.77,796,1336348800"; d="scan'208";a="14083254"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	20 Aug 2012 11:53:46 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Mon, 20 Aug 2012 12:53:45 +0100
Message-ID: <1345463624.28762.67.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Mon, 20 Aug 2012 12:53:44 +0100
In-Reply-To: <alpine.DEB.2.02.1208201243180.15568@kaball.uk.xensource.com>
References: <1345133009-21941-1-git-send-email-konrad.wilk@oracle.com>
	<1345133009-21941-7-git-send-email-konrad.wilk@oracle.com>
	<alpine.DEB.2.02.1208171839590.15568@kaball.uk.xensource.com>
	<20120817174549.GA14257@phenom.dumpdata.com>
	<alpine.DEB.2.02.1208201243180.15568@kaball.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [PATCH 06/11] xen/mmu: For 64-bit do not call
 xen_map_identity_early
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2012-08-20 at 12:45 +0100, Stefano Stabellini wrote:
> On Fri, 17 Aug 2012, Konrad Rzeszutek Wilk wrote:
> > On Fri, Aug 17, 2012 at 06:41:23PM +0100, Stefano Stabellini wrote:
> > > On Thu, 16 Aug 2012, Konrad Rzeszutek Wilk wrote:
> > > > B/c we do not need it. During the startup the Xen provides
> > > > us with all the memory mapped that we need to function.
> > > 
> > > Shouldn't we check to make sure that is actually true (I am thinking at
> > > nr_pt_frames)?
> > 
> > I was looking at the source code (hypervisor) to figure it out and
> > that is certainly true.
> > 
> > 
> > > Or is it actually stated somewhere in the Xen headers?
> > 
> > Couldn't find it, but after looking so long at the source code
> > I didn't even bother looking for it.
> > 
> > Thought to be honest - I only looked at how the 64-bit pagetables
> > were set up, so I didn't dare to touch the 32-bit. Hence the #ifdef
> 
> I think that we need to involve some Xen maintainers and get this
> written down somewhere in the public headers, otherwise we have no
> guarantees that it is going to stay as it is in the next Xen versions.
> 
> Maybe we just need to add a couple of lines of comment to
> xen/include/public/xen.h.

The start-of-day memory layout for PV guests is written down in the
comment just before struct start_info at
http://xenbits.xen.org/docs/unstable/hypercall/include,public,xen.h.html#Struct_start_info

(I haven't read this thread to determine if what is documented matches
what you guys are talking about relying on)

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 20 11:59:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Aug 2012 11:59:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3QdL-0001L0-Sd; Mon, 20 Aug 2012 11:59:03 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T3QdK-0001Ku-UG
	for xen-devel@lists.xensource.com; Mon, 20 Aug 2012 11:59:03 +0000
Received: from [85.158.143.35:3256] by server-1.bemta-4.messagelabs.com id
	D8/F8-07754-68622305; Mon, 20 Aug 2012 11:59:02 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1345463937!14391474!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk2MTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25666 invoked from network); 20 Aug 2012 11:58:57 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Aug 2012 11:58:57 -0000
X-IronPort-AV: E=Sophos;i="4.77,796,1336348800"; d="scan'208";a="14083372"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	20 Aug 2012 11:58:57 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Mon, 20 Aug 2012 12:58:57 +0100
Date: Mon, 20 Aug 2012 12:58:37 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1345463624.28762.67.camel@zakaz.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1208201257420.15568@kaball.uk.xensource.com>
References: <1345133009-21941-1-git-send-email-konrad.wilk@oracle.com>
	<1345133009-21941-7-git-send-email-konrad.wilk@oracle.com>
	<alpine.DEB.2.02.1208171839590.15568@kaball.uk.xensource.com>
	<20120817174549.GA14257@phenom.dumpdata.com> 
	<alpine.DEB.2.02.1208201243180.15568@kaball.uk.xensource.com>
	<1345463624.28762.67.camel@zakaz.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 06/11] xen/mmu: For 64-bit do not call
 xen_map_identity_early
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 20 Aug 2012, Ian Campbell wrote:
> On Mon, 2012-08-20 at 12:45 +0100, Stefano Stabellini wrote:
> > On Fri, 17 Aug 2012, Konrad Rzeszutek Wilk wrote:
> > > On Fri, Aug 17, 2012 at 06:41:23PM +0100, Stefano Stabellini wrote:
> > > > On Thu, 16 Aug 2012, Konrad Rzeszutek Wilk wrote:
> > > > > B/c we do not need it. During the startup the Xen provides
> > > > > us with all the memory mapped that we need to function.
> > > > 
> > > > Shouldn't we check to make sure that is actually true (I am thinking at
> > > > nr_pt_frames)?
> > > 
> > > I was looking at the source code (hypervisor) to figure it out and
> > > that is certainly true.
> > > 
> > > 
> > > > Or is it actually stated somewhere in the Xen headers?
> > > 
> > > Couldn't find it, but after looking so long at the source code
> > > I didn't even bother looking for it.
> > > 
> > > Thought to be honest - I only looked at how the 64-bit pagetables
> > > were set up, so I didn't dare to touch the 32-bit. Hence the #ifdef
> > 
> > I think that we need to involve some Xen maintainers and get this
> > written down somewhere in the public headers, otherwise we have no
> > guarantees that it is going to stay as it is in the next Xen versions.
> > 
> > Maybe we just need to add a couple of lines of comment to
> > xen/include/public/xen.h.
> 
> The start of day memory layout for PV guests is written down in the
> comment just before struct start_info at
> http://xenbits.xen.org/docs/unstable/hypercall/include,public,xen.h.html#Struct_start_info
> 
> (I haven't read this thread to determine if what is documented matches
> what you guys are talking about relying on)

but it is not written down how much physical memory is going to be
mapped in the bootstrap page tables.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 20 12:07:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Aug 2012 12:07:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3QlD-0001hf-Jq; Mon, 20 Aug 2012 12:07:11 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T3QlC-0001hX-Ru
	for xen-devel@lists.xensource.com; Mon, 20 Aug 2012 12:07:11 +0000
Received: from [85.158.139.83:3266] by server-12.bemta-5.messagelabs.com id
	02/99-22359-E6822305; Mon, 20 Aug 2012 12:07:10 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-13.tower-182.messagelabs.com!1345464425!28526248!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDcwNzI2NA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2450 invoked from network); 20 Aug 2012 12:07:07 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-13.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 20 Aug 2012 12:07:07 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7KC6xD2028229
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 20 Aug 2012 12:07:00 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7KC6wBQ002996
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 20 Aug 2012 12:06:59 GMT
Received: from abhmt114.oracle.com (abhmt114.oracle.com [141.146.116.66])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7KC6vH9016635; Mon, 20 Aug 2012 07:06:57 -0500
Received: from localhost.localdomain (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 20 Aug 2012 05:06:57 -0700
Date: Mon, 20 Aug 2012 08:06:49 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20120820120649.GI13755@localhost.localdomain>
References: <1345133009-21941-1-git-send-email-konrad.wilk@oracle.com>
	<1345133009-21941-7-git-send-email-konrad.wilk@oracle.com>
	<alpine.DEB.2.02.1208171839590.15568@kaball.uk.xensource.com>
	<20120817174549.GA14257@phenom.dumpdata.com>
	<alpine.DEB.2.02.1208201243180.15568@kaball.uk.xensource.com>
	<1345463624.28762.67.camel@zakaz.uk.xensource.com>
	<alpine.DEB.2.02.1208201257420.15568@kaball.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.02.1208201257420.15568@kaball.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: [Xen-devel] [PATCH 06/11] xen/mmu: For 64-bit do not call
 xen_map_identity_early
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Aug 20, 2012 at 12:58:37PM +0100, Stefano Stabellini wrote:
> On Mon, 20 Aug 2012, Ian Campbell wrote:
> > On Mon, 2012-08-20 at 12:45 +0100, Stefano Stabellini wrote:
> > > On Fri, 17 Aug 2012, Konrad Rzeszutek Wilk wrote:
> > > > On Fri, Aug 17, 2012 at 06:41:23PM +0100, Stefano Stabellini wrote:
> > > > > On Thu, 16 Aug 2012, Konrad Rzeszutek Wilk wrote:
> > > > > > B/c we do not need it. During the startup the Xen provides
> > > > > > us with all the memory mapped that we need to function.
> > > > > 
> > > > > Shouldn't we check to make sure that is actually true (I am thinking at
> > > > > nr_pt_frames)?
> > > > 
> > > > I was looking at the source code (hypervisor) to figure it out and
> > > > that is certainly true.
> > > > 
> > > > 
> > > > > Or is it actually stated somewhere in the Xen headers?
> > > > 
> > > > Couldn't find it, but after looking so long at the source code
> > > > I didn't even bother looking for it.
> > > > 
> > > > Thought to be honest - I only looked at how the 64-bit pagetables
> > > > were set up, so I didn't dare to touch the 32-bit. Hence the #ifdef
> > > 
> > > I think that we need to involve some Xen maintainers and get this
> > > written down somewhere in the public headers, otherwise we have no
> > > guarantees that it is going to stay as it is in the next Xen versions.
> > > 
> > > Maybe we just need to add a couple of lines of comment to
> > > xen/include/public/xen.h.
> > 
> > The start of day memory layout for PV guests is written down in the
> > comment just before struct start_info at
> > http://xenbits.xen.org/docs/unstable/hypercall/include,public,xen.h.html#Struct_start_info
> > 
> > (I haven't read this thread to determine if what is documented matches
> > what you guys are talking about relying on)
> 
> but it is not written down how much physical memory is going to be
> mapped in the bootstrap page tables.

Considering that only the pvops kernel has this change, I think we are OK?
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 20 12:07:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Aug 2012 12:07:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3QlD-0001hf-Jq; Mon, 20 Aug 2012 12:07:11 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T3QlC-0001hX-Ru
	for xen-devel@lists.xensource.com; Mon, 20 Aug 2012 12:07:11 +0000
Received: from [85.158.139.83:3266] by server-12.bemta-5.messagelabs.com id
	02/99-22359-E6822305; Mon, 20 Aug 2012 12:07:10 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-13.tower-182.messagelabs.com!1345464425!28526248!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDcwNzI2NA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2450 invoked from network); 20 Aug 2012 12:07:07 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-13.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 20 Aug 2012 12:07:07 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7KC6xD2028229
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 20 Aug 2012 12:07:00 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7KC6wBQ002996
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 20 Aug 2012 12:06:59 GMT
Received: from abhmt114.oracle.com (abhmt114.oracle.com [141.146.116.66])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7KC6vH9016635; Mon, 20 Aug 2012 07:06:57 -0500
Received: from localhost.localdomain (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 20 Aug 2012 05:06:57 -0700
Date: Mon, 20 Aug 2012 08:06:49 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20120820120649.GI13755@localhost.localdomain>
References: <1345133009-21941-1-git-send-email-konrad.wilk@oracle.com>
	<1345133009-21941-7-git-send-email-konrad.wilk@oracle.com>
	<alpine.DEB.2.02.1208171839590.15568@kaball.uk.xensource.com>
	<20120817174549.GA14257@phenom.dumpdata.com>
	<alpine.DEB.2.02.1208201243180.15568@kaball.uk.xensource.com>
	<1345463624.28762.67.camel@zakaz.uk.xensource.com>
	<alpine.DEB.2.02.1208201257420.15568@kaball.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.02.1208201257420.15568@kaball.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: [Xen-devel] [PATCH 06/11] xen/mmu: For 64-bit do not call
 xen_map_identity_early
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Aug 20, 2012 at 12:58:37PM +0100, Stefano Stabellini wrote:
> On Mon, 20 Aug 2012, Ian Campbell wrote:
> > On Mon, 2012-08-20 at 12:45 +0100, Stefano Stabellini wrote:
> > > On Fri, 17 Aug 2012, Konrad Rzeszutek Wilk wrote:
> > > > On Fri, Aug 17, 2012 at 06:41:23PM +0100, Stefano Stabellini wrote:
> > > > > On Thu, 16 Aug 2012, Konrad Rzeszutek Wilk wrote:
> > > > > > B/c we do not need it. During the startup the Xen provides
> > > > > > us with all the memory mapped that we need to function.
> > > > > 
> > > > > Shouldn't we check to make sure that is actually true (I am thinking at
> > > > > nr_pt_frames)?
> > > > 
> > > > I was looking at the source code (hypervisor) to figure it out and
> > > > that is certainly true.
> > > > 
> > > > 
> > > > > Or is it actually stated somewhere in the Xen headers?
> > > > 
> > > > Couldn't find it, but after looking so long at the source code
> > > > I didn't even bother looking for it.
> > > > 
> > > > Thought to be honest - I only looked at how the 64-bit pagetables
> > > > were set up, so I didn't dare to touch the 32-bit. Hence the #ifdef
> > > 
> > > I think that we need to involve some Xen maintainers and get this
> > > written down somewhere in the public headers, otherwise we have no
> > > guarantees that it is going to stay as it is in the next Xen versions.
> > > 
> > > Maybe we just need to add a couple of lines of comment to
> > > xen/include/public/xen.h.
> > 
> > The start of day memory layout for PV guests is written down in the
> > comment just before struct start_info at
> > http://xenbits.xen.org/docs/unstable/hypercall/include,public,xen.h.html#Struct_start_info
> > 
> > (I haven't read this thread to determine if what is documented matches
> > what you guys are talking about relying on)
> 
> but it is not written down how much physical memory is going to be
> mapped in the bootstrap page tables.

Considering that only the pvops kernel has this change, I think we are OK?
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 20 12:18:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Aug 2012 12:18:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3Qw2-0001sL-Oy; Mon, 20 Aug 2012 12:18:22 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T3Qw1-0001sG-PA
	for xen-devel@lists.xen.org; Mon, 20 Aug 2012 12:18:21 +0000
Received: from [85.158.143.99:32738] by server-2.bemta-4.messagelabs.com id
	95/D1-31966-D0B22305; Mon, 20 Aug 2012 12:18:21 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-11.tower-216.messagelabs.com!1345465099!21555167!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk2MTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21539 invoked from network); 20 Aug 2012 12:18:20 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-11.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Aug 2012 12:18:20 -0000
X-IronPort-AV: E=Sophos;i="4.77,796,1336348800"; d="scan'208";a="14084144"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	20 Aug 2012 12:18:19 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Mon, 20 Aug 2012 13:18:19 +0100
Date: Mon, 20 Aug 2012 13:17:59 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: "Xu, Dongxiao" <dongxiao.xu@intel.com>
In-Reply-To: <40776A41FC278F40B59438AD47D147A90FE17CC0@SHSMSX102.ccr.corp.intel.com>
Message-ID: <alpine.DEB.2.02.1208201312170.15568@kaball.uk.xensource.com>
References: <40776A41FC278F40B59438AD47D147A90FE17CC0@SHSMSX102.ccr.corp.intel.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] QEMU/helper2.c: Fix multiply issue for int
 and uint types
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 20 Aug 2012, Xu, Dongxiao wrote:
> Hi, 
> 
> The following patch fixes an issue of (uint * int) multiply in QEMU.
> The symptom is that a Level 1 (L1) Xen takes more than 30 minutes to boot on a Level 0 (L0) Xen (the nested virtualization case).
> Please help to review and pull.
> 
> I saw that upstream QEMU also has a similar piece of code; I will send a patch there as well.
> 
> Thanks,
> Dongxiao
> 
> 
> >From 442297c3f5073b11033e6af2338f571418eff024 Mon Sep 17 00:00:00 2001
> From: Dongxiao Xu <dongxiao.xu@intel.com>
> Date: Mon, 20 Aug 2012 16:45:04 +0800
> Subject: [PATCH] helper2: fix multiply issue for int and uint types
> 
> If the two operands of a multiply are of int and uint types,
> the int operand is first converted to uint, which is not the
> intent of this code. The fix is to cast the uint operand to
> (int) before the multiply.

well spotted!


> This helps to fix the Xen hypervisor slow booting issue (boots more
> than 30 minutes) on another Xen hypervisor
> (the nested virtualization case).

If we do that, we suddenly restrict the maximum req->size from UINT_MAX to
INT_MAX.
I would rather cast req->size to int64_t instead.


> Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
> ---
>  i386-dm/helper2.c |   16 ++++++++--------
>  1 files changed, 8 insertions(+), 8 deletions(-)
> 
> diff --git a/i386-dm/helper2.c b/i386-dm/helper2.c
> index c6d049c..2a66086 100644
> --- a/i386-dm/helper2.c
> +++ b/i386-dm/helper2.c
> @@ -364,7 +364,7 @@ static void cpu_ioreq_pio(CPUState *env, ioreq_t *req)
>              for (i = 0; i < req->count; i++) {
>                  tmp = do_inp(env, req->addr, req->size);
>                  write_physical((target_phys_addr_t) req->data
> -                  + (sign * i * req->size),
> +                  + (sign * i * (int)req->size),
>                    req->size, &tmp);
>              }
>          }
> @@ -376,7 +376,7 @@ static void cpu_ioreq_pio(CPUState *env, ioreq_t *req)
>                  unsigned long tmp = 0;
>  
>                  read_physical((target_phys_addr_t) req->data
> -                  + (sign * i * req->size),
> +                  + (sign * i * (int)req->size),
>                    req->size, &tmp);
>                  do_outp(env, req->addr, req->size, tmp);
>              }
> @@ -394,13 +394,13 @@ static void cpu_ioreq_move(CPUState *env, ioreq_t *req)
>          if (req->dir == IOREQ_READ) {
>              for (i = 0; i < req->count; i++) {
>                  read_physical(req->addr
> -                  + (sign * i * req->size),
> +                  + (sign * i * (int)req->size),
>                    req->size, &req->data);
>              }
>          } else if (req->dir == IOREQ_WRITE) {
>              for (i = 0; i < req->count; i++) {
>                  write_physical(req->addr
> -                  + (sign * i * req->size),
> +                  + (sign * i * (int)req->size),
>                    req->size, &req->data);
>              }
>          }
> @@ -410,19 +410,19 @@ static void cpu_ioreq_move(CPUState *env, ioreq_t *req)
>          if (req->dir == IOREQ_READ) {
>              for (i = 0; i < req->count; i++) {
>                  read_physical(req->addr
> -                  + (sign * i * req->size),
> +                  + (sign * i * (int)req->size),
>                    req->size, &tmp);
>                  write_physical((target_phys_addr_t )req->data
> -                  + (sign * i * req->size),
> +                  + (sign * i * (int)req->size),
>                    req->size, &tmp);
>              }
>          } else if (req->dir == IOREQ_WRITE) {
>              for (i = 0; i < req->count; i++) {
>                  read_physical((target_phys_addr_t) req->data
> -                  + (sign * i * req->size),
> +                  + (sign * i * (int)req->size),
>                    req->size, &tmp);
>                  write_physical(req->addr
> -                  + (sign * i * req->size),
> +                  + (sign * i * (int)req->size),
>                    req->size, &tmp);
>              }
>          }
> -- 
> 1.7.1
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 20 12:20:47 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Aug 2012 12:20:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3Qy8-0001xW-AD; Mon, 20 Aug 2012 12:20:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T3Qy6-0001xQ-VM
	for xen-devel@lists.xensource.com; Mon, 20 Aug 2012 12:20:31 +0000
Received: from [85.158.143.35:37037] by server-3.bemta-4.messagelabs.com id
	69/BD-09529-E8B22305; Mon, 20 Aug 2012 12:20:30 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1345465225!6639132!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk2MTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26377 invoked from network); 20 Aug 2012 12:20:26 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Aug 2012 12:20:26 -0000
X-IronPort-AV: E=Sophos;i="4.77,796,1336348800"; d="scan'208";a="14084201"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	20 Aug 2012 12:19:42 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Mon, 20 Aug 2012 13:19:41 +0100
Date: Mon, 20 Aug 2012 13:19:22 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <20120820120649.GI13755@localhost.localdomain>
Message-ID: <alpine.DEB.2.02.1208201318240.15568@kaball.uk.xensource.com>
References: <1345133009-21941-1-git-send-email-konrad.wilk@oracle.com>
	<1345133009-21941-7-git-send-email-konrad.wilk@oracle.com>
	<alpine.DEB.2.02.1208171839590.15568@kaball.uk.xensource.com>
	<20120817174549.GA14257@phenom.dumpdata.com>
	<alpine.DEB.2.02.1208201243180.15568@kaball.uk.xensource.com>
	<1345463624.28762.67.camel@zakaz.uk.xensource.com>
	<alpine.DEB.2.02.1208201257420.15568@kaball.uk.xensource.com>
	<20120820120649.GI13755@localhost.localdomain>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 06/11] xen/mmu: For 64-bit do not call
 xen_map_identity_early
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 20 Aug 2012, Konrad Rzeszutek Wilk wrote:
> On Mon, Aug 20, 2012 at 12:58:37PM +0100, Stefano Stabellini wrote:
> > On Mon, 20 Aug 2012, Ian Campbell wrote:
> > > On Mon, 2012-08-20 at 12:45 +0100, Stefano Stabellini wrote:
> > > > On Fri, 17 Aug 2012, Konrad Rzeszutek Wilk wrote:
> > > > > On Fri, Aug 17, 2012 at 06:41:23PM +0100, Stefano Stabellini wrote:
> > > > > > On Thu, 16 Aug 2012, Konrad Rzeszutek Wilk wrote:
> > > > > > > Because we do not need it. During startup, Xen provides us
> > > > > > > with all the memory mappings that we need to function.
> > > > > > 
> > > > > > Shouldn't we check to make sure that is actually true (I am thinking of
> > > > > > nr_pt_frames)?
> > > > > 
> > > > > I was looking at the source code (hypervisor) to figure it out and
> > > > > that is certainly true.
> > > > > 
> > > > > 
> > > > > > Or is it actually stated somewhere in the Xen headers?
> > > > > 
> > > > > Couldn't find it, but after looking so long at the source code
> > > > > I didn't even bother looking for it.
> > > > > 
> > > > > Though to be honest, I only looked at how the 64-bit pagetables
> > > > > were set up, so I didn't dare to touch the 32-bit side. Hence the #ifdef
> > > > 
> > > > I think that we need to involve some Xen maintainers and get this
> > > > written down somewhere in the public headers, otherwise we have no
> > > > guarantees that it is going to stay as it is in the next Xen versions.
> > > > 
> > > > Maybe we just need to add a couple of lines of comment to
> > > > xen/include/public/xen.h.
> > > 
> > > The start of day memory layout for PV guests is written down in the
> > > comment just before struct start_info at
> > > http://xenbits.xen.org/docs/unstable/hypercall/include,public,xen.h.html#Struct_start_info
> > > 
> > > (I haven't read this thread to determine if what is documented matches
> > > what you guys are talking about relying on)
> > 
> > but it is not written down how much physical memory is going to be
> > mapped in the bootstrap page tables.
> 
> Considering that only the pvops kernel has this change, I think we are OK?

We are OK if we write it down :)
Otherwise it might change in the future and we won't even know what the
correct behavior is supposed to be.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 20 12:26:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Aug 2012 12:26:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3R3n-0002AB-47; Mon, 20 Aug 2012 12:26:23 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1T3R3l-0002A6-7v
	for xen-devel@lists.xen.org; Mon, 20 Aug 2012 12:26:22 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1345465572!2829070!1
X-Originating-IP: [143.182.124.37]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMzcgPT4gMjcwMDQw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22623 invoked from network); 20 Aug 2012 12:26:13 -0000
Received: from mga14.intel.com (HELO mga14.intel.com) (143.182.124.37)
	by server-15.tower-27.messagelabs.com with SMTP;
	20 Aug 2012 12:26:13 -0000
Received: from azsmga001.ch.intel.com ([10.2.17.19])
	by azsmga102.ch.intel.com with ESMTP; 20 Aug 2012 05:26:12 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.77,796,1336374000"; d="scan'208";a="183115247"
Received: from fmsmsx107.amr.corp.intel.com ([10.19.9.54])
	by azsmga001.ch.intel.com with ESMTP; 20 Aug 2012 05:26:12 -0700
Received: from shsmsx101.ccr.corp.intel.com (10.239.4.153) by
	FMSMSX107.amr.corp.intel.com (10.19.9.54) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Mon, 20 Aug 2012 05:26:11 -0700
Received: from shsmsx102.ccr.corp.intel.com ([169.254.2.92]) by
	SHSMSX101.ccr.corp.intel.com ([169.254.1.89]) with mapi id
	14.01.0355.002; Mon, 20 Aug 2012 20:26:10 +0800
From: "Xu, Dongxiao" <dongxiao.xu@intel.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Thread-Topic: [Xen-devel] [PATCH] QEMU/helper2.c: Fix multiply issue for int
	and uint types
Thread-Index: AQHNfs3ygx8kst9AtkGQ/zuE3O5MzJdioBIw
Date: Mon, 20 Aug 2012 12:26:09 +0000
Message-ID: <40776A41FC278F40B59438AD47D147A90FE17D7F@SHSMSX102.ccr.corp.intel.com>
References: <40776A41FC278F40B59438AD47D147A90FE17CC0@SHSMSX102.ccr.corp.intel.com>
	<alpine.DEB.2.02.1208201312170.15568@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1208201312170.15568@kaball.uk.xensource.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] QEMU/helper2.c: Fix multiply issue for int
 and uint types
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Stefano Stabellini [mailto:stefano.stabellini@eu.citrix.com]
> Sent: Monday, August 20, 2012 8:18 PM
> To: Xu, Dongxiao
> Cc: xen-devel@lists.xen.org
> Subject: Re: [Xen-devel] [PATCH] QEMU/helper2.c: Fix multiply issue for int and
> uint types
> 
> On Mon, 20 Aug 2012, Xu, Dongxiao wrote:
> > Hi,
> >
> > The following patch fixes an issue of (uint * int) multiply in QEMU.
> > The symptom is that a Level 1 (L1) Xen takes more than 30 minutes to boot
> > on a Level 0 (L0) Xen (the nested virtualization case).
> > Please help to review and pull.
> >
> > I saw that upstream QEMU also has a similar piece of code; I will send a
> > patch there as well.
> >
> > Thanks,
> > Dongxiao
> >
> >
> > >From 442297c3f5073b11033e6af2338f571418eff024 Mon Sep 17 00:00:00
> > >2001
> > From: Dongxiao Xu <dongxiao.xu@intel.com>
> > Date: Mon, 20 Aug 2012 16:45:04 +0800
> > Subject: [PATCH] helper2: fix multiply issue for int and uint types
> >
> > If the two operands of a multiply are of int and uint types, the
> > int operand is first converted to uint, which is not the intent
> > of this code. The fix is to cast the uint operand to (int)
> > before the multiply.
> 
> well spotted!
> 
> 
> > This helps to fix the Xen hypervisor slow booting issue (boots more
> > than 30 minutes) on another Xen hypervisor (the nested virtualization
> > case).
> 
> If we do that, we suddenly restrict the maximum req->size from UINT_MAX
> to INT_MAX.
> I would rather cast req->size to int64_t instead.

Yes, this is reasonable. I will update my patch.

Thanks,
Dongxiao

> 
> 
> > Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
> > ---
> >  i386-dm/helper2.c |   16 ++++++++--------
> >  1 files changed, 8 insertions(+), 8 deletions(-)
> >
> > diff --git a/i386-dm/helper2.c b/i386-dm/helper2.c index
> > c6d049c..2a66086 100644
> > --- a/i386-dm/helper2.c
> > +++ b/i386-dm/helper2.c
> > @@ -364,7 +364,7 @@ static void cpu_ioreq_pio(CPUState *env, ioreq_t
> *req)
> >              for (i = 0; i < req->count; i++) {
> >                  tmp = do_inp(env, req->addr, req->size);
> >                  write_physical((target_phys_addr_t) req->data
> > -                  + (sign * i * req->size),
> > +                  + (sign * i * (int)req->size),
> >                    req->size, &tmp);
> >              }
> >          }
> > @@ -376,7 +376,7 @@ static void cpu_ioreq_pio(CPUState *env, ioreq_t
> *req)
> >                  unsigned long tmp = 0;
> >
> >                  read_physical((target_phys_addr_t) req->data
> > -                  + (sign * i * req->size),
> > +                  + (sign * i * (int)req->size),
> >                    req->size, &tmp);
> >                  do_outp(env, req->addr, req->size, tmp);
> >              }
> > @@ -394,13 +394,13 @@ static void cpu_ioreq_move(CPUState *env,
> ioreq_t *req)
> >          if (req->dir == IOREQ_READ) {
> >              for (i = 0; i < req->count; i++) {
> >                  read_physical(req->addr
> > -                  + (sign * i * req->size),
> > +                  + (sign * i * (int)req->size),
> >                    req->size, &req->data);
> >              }
> >          } else if (req->dir == IOREQ_WRITE) {
> >              for (i = 0; i < req->count; i++) {
> >                  write_physical(req->addr
> > -                  + (sign * i * req->size),
> > +                  + (sign * i * (int)req->size),
> >                    req->size, &req->data);
> >              }
> >          }
> > @@ -410,19 +410,19 @@ static void cpu_ioreq_move(CPUState *env,
> ioreq_t *req)
> >          if (req->dir == IOREQ_READ) {
> >              for (i = 0; i < req->count; i++) {
> >                  read_physical(req->addr
> > -                  + (sign * i * req->size),
> > +                  + (sign * i * (int)req->size),
> >                    req->size, &tmp);
> >                  write_physical((target_phys_addr_t )req->data
> > -                  + (sign * i * req->size),
> > +                  + (sign * i * (int)req->size),
> >                    req->size, &tmp);
> >              }
> >          } else if (req->dir == IOREQ_WRITE) {
> >              for (i = 0; i < req->count; i++) {
> >                  read_physical((target_phys_addr_t) req->data
> > -                  + (sign * i * req->size),
> > +                  + (sign * i * (int)req->size),
> >                    req->size, &tmp);
> >                  write_physical(req->addr
> > -                  + (sign * i * req->size),
> > +                  + (sign * i * (int)req->size),
> >                    req->size, &tmp);
> >              }
> >          }
> > --
> > 1.7.1
> >
> >

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Mon Aug 20 12:35:24 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Aug 2012 12:35:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3RCI-0002K2-4d; Mon, 20 Aug 2012 12:35:10 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1T3RCG-0002Jx-SU
	for xen-devel@lists.xen.org; Mon, 20 Aug 2012 12:35:09 +0000
Received: from [85.158.139.83:54835] by server-2.bemta-5.messagelabs.com id
	4A/E2-10142-CFE22305; Mon, 20 Aug 2012 12:35:08 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-15.tower-182.messagelabs.com!1345466105!28982477!1
X-Originating-IP: [143.182.124.21]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMjEgPT4gMzE1OTEy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21393 invoked from network); 20 Aug 2012 12:35:06 -0000
Received: from mga03.intel.com (HELO mga03.intel.com) (143.182.124.21)
	by server-15.tower-182.messagelabs.com with SMTP;
	20 Aug 2012 12:35:06 -0000
Received: from azsmga002.ch.intel.com ([10.2.17.35])
	by azsmga101.ch.intel.com with ESMTP; 20 Aug 2012 05:35:04 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.77,796,1336374000"; 
	d="scan'208,223";a="136191507"
Received: from fmsmsx108.amr.corp.intel.com ([10.19.9.228])
	by AZSMGA002.ch.intel.com with ESMTP; 20 Aug 2012 05:35:04 -0700
Received: from fmsmsx152.amr.corp.intel.com (10.19.17.221) by
	FMSMSX108.amr.corp.intel.com (10.19.9.228) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Mon, 20 Aug 2012 05:35:02 -0700
Received: from shsmsx101.ccr.corp.intel.com (10.239.4.153) by
	fmsmsx152.amr.corp.intel.com (10.19.17.221) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Mon, 20 Aug 2012 05:35:01 -0700
Received: from shsmsx102.ccr.corp.intel.com ([169.254.2.92]) by
	SHSMSX101.ccr.corp.intel.com ([169.254.1.89]) with mapi id
	14.01.0355.002; Mon, 20 Aug 2012 20:35:00 +0800
From: "Xu, Dongxiao" <dongxiao.xu@intel.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Thread-Topic: [PATCH v2] QEMU/helper2.c: Fix multiply issue for int and uint
	types
Thread-Index: Ac1+0DYuDTlRVN/dQjO7DF9GvQD4JQ==
Date: Mon, 20 Aug 2012 12:34:59 +0000
Message-ID: <40776A41FC278F40B59438AD47D147A90FE17D93@SHSMSX102.ccr.corp.intel.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: yes
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
Content-Type: multipart/mixed;
	boundary="_002_40776A41FC278F40B59438AD47D147A90FE17D93SHSMSX102ccrcor_"
MIME-Version: 1.0
Subject: [Xen-devel] [PATCH v2] QEMU/helper2.c: Fix multiply issue for int
	and uint types
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--_002_40776A41FC278F40B59438AD47D147A90FE17D93SHSMSX102ccrcor_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit

Changes from v1: use int64_t to cast the uint32_t value to a signed type.

This is the updated patch fixing the multiply issue for int and uint type
operands; it reflects Stefano's comments.

The following patch fixes a (uint * int) multiply issue in QEMU.
The symptom is that a Level 1 (L1) Xen takes more than 30 minutes to
boot on a Level 0 (L0) Xen (the nested virtualization case).
Please help to review and pull.

Upstream QEMU has a similar piece of code; I will send a patch there
as well.

Thanks,
Dongxiao

>From d71f9be82ec0079aa88f779dea90e475b177e32f Mon Sep 17 00:00:00 2001
From: Dongxiao Xu <dongxiao.xu@intel.com>
Date: Mon, 20 Aug 2012 16:45:04 +0800
Subject: [PATCH] helper2: fix multiply issue for int and uint types

If one multiply operand is int and the other is uint, the int
operand is converted to uint first, which is not the intent of
this code. The fix is to cast the uint operand to (int64_t)
before the multiply.

This helps to fix the slow booting issue (more than 30 minutes)
of a Xen hypervisor running on another Xen hypervisor
(the nested virtualization case).

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 i386-dm/helper2.c |   16 ++++++++--------
 1 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/i386-dm/helper2.c b/i386-dm/helper2.c
index c6d049c..c093249 100644
--- a/i386-dm/helper2.c
+++ b/i386-dm/helper2.c
@@ -364,7 +364,7 @@ static void cpu_ioreq_pio(CPUState *env, ioreq_t *req)
             for (i = 0; i < req->count; i++) {
                 tmp = do_inp(env, req->addr, req->size);
                 write_physical((target_phys_addr_t) req->data
-                  + (sign * i * req->size),
+                  + (sign * i * (int64_t)req->size),
                   req->size, &tmp);
             }
         }
@@ -376,7 +376,7 @@ static void cpu_ioreq_pio(CPUState *env, ioreq_t *req)
                 unsigned long tmp = 0;
 
                 read_physical((target_phys_addr_t) req->data
-                  + (sign * i * req->size),
+                  + (sign * i * (int64_t)req->size),
                   req->size, &tmp);
                 do_outp(env, req->addr, req->size, tmp);
             }
@@ -394,13 +394,13 @@ static void cpu_ioreq_move(CPUState *env, ioreq_t *req)
         if (req->dir == IOREQ_READ) {
             for (i = 0; i < req->count; i++) {
                 read_physical(req->addr
-                  + (sign * i * req->size),
+                  + (sign * i * (int64_t)req->size),
                   req->size, &req->data);
             }
         } else if (req->dir == IOREQ_WRITE) {
             for (i = 0; i < req->count; i++) {
                 write_physical(req->addr
-                  + (sign * i * req->size),
+                  + (sign * i * (int64_t)req->size),
                   req->size, &req->data);
             }
         }
@@ -410,19 +410,19 @@ static void cpu_ioreq_move(CPUState *env, ioreq_t *req)
         if (req->dir == IOREQ_READ) {
             for (i = 0; i < req->count; i++) {
                 read_physical(req->addr
-                  + (sign * i * req->size),
+                  + (sign * i * (int64_t)req->size),
                   req->size, &tmp);
                 write_physical((target_phys_addr_t )req->data
-                  + (sign * i * req->size),
+                  + (sign * i * (int64_t)req->size),
                   req->size, &tmp);
             }
         } else if (req->dir == IOREQ_WRITE) {
             for (i = 0; i < req->count; i++) {
                 read_physical((target_phys_addr_t) req->data
-                  + (sign * i * req->size),
+                  + (sign * i * (int64_t)req->size),
                   req->size, &tmp);
                 write_physical(req->addr
-                  + (sign * i * req->size),
+                  + (sign * i * (int64_t)req->size),
                   req->size, &tmp);
             }
         }
-- 
1.7.1


--_002_40776A41FC278F40B59438AD47D147A90FE17D93SHSMSX102ccrcor_
Content-Type: application/octet-stream;
	name="0001-helper2-fix-multiply-issue-for-int-and-uint-types.patch"
Content-Description: 0001-helper2-fix-multiply-issue-for-int-and-uint-types.patch
Content-Disposition: attachment;
	filename="0001-helper2-fix-multiply-issue-for-int-and-uint-types.patch";
	size=3404; creation-date="Mon, 20 Aug 2012 12:29:12 GMT";
	modification-date="Mon, 20 Aug 2012 12:28:45 GMT"
Content-Transfer-Encoding: base64

RnJvbSBkNzFmOWJlODJlYzAwNzlhYTg4Zjc3OWRlYTkwZTQ3NWIxNzdlMzJmIE1vbiBTZXAgMTcg
MDA6MDA6MDAgMjAwMQpGcm9tOiBEb25neGlhbyBYdSA8ZG9uZ3hpYW8ueHVAaW50ZWwuY29tPgpE
YXRlOiBNb24sIDIwIEF1ZyAyMDEyIDE2OjQ1OjA0ICswODAwClN1YmplY3Q6IFtQQVRDSF0gaGVs
cGVyMjogZml4IG11bHRpcGx5IGlzc3VlIGZvciBpbnQgYW5kIHVpbnQgdHlwZXMKCklmIHRoZSB0
d28gbXVsdGlwbHkgb3BlcmFuZHMgYXJlIGludCBhbmQgdWludCB0eXBlcyBzZXBhcmF0ZWx5LAp0
aGUgaW50IHR5cGUgd2lsbCBiZSB0cmFuc2Zvcm1lZCB0byB1aW50IGZpcnN0bHksIHdoaWNoIGlz
IG5vdCB0aGUKaW50ZW50IGluIG91ciBjb2RlIHBpZWNlLiBUaGUgZml4IGlzIHRvIGFkZCAoaW50
NjRfdCkgdHJhbnNmb3JtCmZvciB0aGUgdWludCB0eXBlIGJlZm9yZSB0aGUgbXVsdGlwbHkuCgpU
aGlzIGhlbHBzIHRvIGZpeCB0aGUgWGVuIGh5cGV2aXNvciBzbG93IGJvb3RpbmcgaXNzdWUgKGJv
b3RzIG1vcmUKdGhhbiAzMCBtaW51dGVzKSBvbiBhbm90aGVyIFhlbiBoeXBlcnZpc29yCih0aGUg
bmVzdGVkIHZpcnR1YWxpemF0aW9uIGNhc2UpLgoKU2lnbmVkLW9mZi1ieTogRG9uZ3hpYW8gWHUg
PGRvbmd4aWFvLnh1QGludGVsLmNvbT4KLS0tCiBpMzg2LWRtL2hlbHBlcjIuYyB8ICAgMTYgKysr
KysrKystLS0tLS0tLQogMSBmaWxlcyBjaGFuZ2VkLCA4IGluc2VydGlvbnMoKyksIDggZGVsZXRp
b25zKC0pCgpkaWZmIC0tZ2l0IGEvaTM4Ni1kbS9oZWxwZXIyLmMgYi9pMzg2LWRtL2hlbHBlcjIu
YwppbmRleCBjNmQwNDljLi5jMDkzMjQ5IDEwMDY0NAotLS0gYS9pMzg2LWRtL2hlbHBlcjIuYwor
KysgYi9pMzg2LWRtL2hlbHBlcjIuYwpAQCAtMzY0LDcgKzM2NCw3IEBAIHN0YXRpYyB2b2lkIGNw
dV9pb3JlcV9waW8oQ1BVU3RhdGUgKmVudiwgaW9yZXFfdCAqcmVxKQogICAgICAgICAgICAgZm9y
IChpID0gMDsgaSA8IHJlcS0+Y291bnQ7IGkrKykgewogICAgICAgICAgICAgICAgIHRtcCA9IGRv
X2lucChlbnYsIHJlcS0+YWRkciwgcmVxLT5zaXplKTsKICAgICAgICAgICAgICAgICB3cml0ZV9w
aHlzaWNhbCgodGFyZ2V0X3BoeXNfYWRkcl90KSByZXEtPmRhdGEKLSAgICAgICAgICAgICAgICAg
ICsgKHNpZ24gKiBpICogcmVxLT5zaXplKSwKKyAgICAgICAgICAgICAgICAgICsgKHNpZ24gKiBp
ICogKGludDY0X3QpcmVxLT5zaXplKSwKICAgICAgICAgICAgICAgICAgIHJlcS0+c2l6ZSwgJnRt
cCk7CiAgICAgICAgICAgICB9CiAgICAgICAgIH0KQEAgLTM3Niw3ICszNzYsNyBAQCBzdGF0aWMg
dm9pZCBjcHVfaW9yZXFfcGlvKENQVVN0YXRlICplbnYsIGlvcmVxX3QgKnJlcSkKICAgICAgICAg
ICAgICAgICB1bnNpZ25lZCBsb25nIHRtcCA9IDA7CiAKICAgICAgICAgICAgICAgICByZWFkX3Bo
eXNpY2FsKCh0YXJnZXRfcGh5c19hZGRyX3QpIHJlcS0+ZGF0YQotICAgICAgICAgICAgICAgICAg
KyAoc2lnbiAqIGkgKiByZXEtPnNpemUpLAorICAgICAgICAgICAgICAgICAgKyAoc2lnbiAqIGkg
KiAoaW50NjRfdClyZXEtPnNpemUpLAogICAgICAgICAgICAgICAgICAgcmVxLT5zaXplLCAmdG1w
KTsKICAgICAgICAgICAgICAgICBkb19vdXRwKGVudiwgcmVxLT5hZGRyLCByZXEtPnNpemUsIHRt
cCk7CiAgICAgICAgICAgICB9CkBAIC0zOTQsMTMgKzM5NCwxMyBAQCBzdGF0aWMgdm9pZCBjcHVf
aW9yZXFfbW92ZShDUFVTdGF0ZSAqZW52LCBpb3JlcV90ICpyZXEpCiAgICAgICAgIGlmIChyZXEt
PmRpciA9PSBJT1JFUV9SRUFEKSB7CiAgICAgICAgICAgICBmb3IgKGkgPSAwOyBpIDwgcmVxLT5j
b3VudDsgaSsrKSB7CiAgICAgICAgICAgICAgICAgcmVhZF9waHlzaWNhbChyZXEtPmFkZHIKLSAg
ICAgICAgICAgICAgICAgICsgKHNpZ24gKiBpICogcmVxLT5zaXplKSwKKyAgICAgICAgICAgICAg
ICAgICsgKHNpZ24gKiBpICogKGludDY0X3QpcmVxLT5zaXplKSwKICAgICAgICAgICAgICAgICAg
IHJlcS0+c2l6ZSwgJnJlcS0+ZGF0YSk7CiAgICAgICAgICAgICB9CiAgICAgICAgIH0gZWxzZSBp
ZiAocmVxLT5kaXIgPT0gSU9SRVFfV1JJVEUpIHsKICAgICAgICAgICAgIGZvciAoaSA9IDA7IGkg
PCByZXEtPmNvdW50OyBpKyspIHsKICAgICAgICAgICAgICAgICB3cml0ZV9waHlzaWNhbChyZXEt
PmFkZHIKLSAgICAgICAgICAgICAgICAgICsgKHNpZ24gKiBpICogcmVxLT5zaXplKSwKKyAgICAg
ICAgICAgICAgICAgICsgKHNpZ24gKiBpICogKGludDY0X3QpcmVxLT5zaXplKSwKICAgICAgICAg
ICAgICAgICAgIHJlcS0+c2l6ZSwgJnJlcS0+ZGF0YSk7CiAgICAgICAgICAgICB9CiAgICAgICAg
IH0KQEAgLTQxMCwxOSArNDEwLDE5IEBAIHN0YXRpYyB2b2lkIGNwdV9pb3JlcV9tb3ZlKENQVVN0
YXRlICplbnYsIGlvcmVxX3QgKnJlcSkKICAgICAgICAgaWYgKHJlcS0+ZGlyID09IElPUkVRX1JF
QUQpIHsKICAgICAgICAgICAgIGZvciAoaSA9IDA7IGkgPCByZXEtPmNvdW50OyBpKyspIHsKICAg
ICAgICAgICAgICAgICByZWFkX3BoeXNpY2FsKHJlcS0+YWRkcgotICAgICAgICAgICAgICAgICAg
KyAoc2lnbiAqIGkgKiByZXEtPnNpemUpLAorICAgICAgICAgICAgICAgICAgKyAoc2lnbiAqIGkg
KiAoaW50NjRfdClyZXEtPnNpemUpLAogICAgICAgICAgICAgICAgICAgcmVxLT5zaXplLCAmdG1w
KTsKICAgICAgICAgICAgICAgICB3cml0ZV9waHlzaWNhbCgodGFyZ2V0X3BoeXNfYWRkcl90ICly
ZXEtPmRhdGEKLSAgICAgICAgICAgICAgICAgICsgKHNpZ24gKiBpICogcmVxLT5zaXplKSwKKyAg
ICAgICAgICAgICAgICAgICsgKHNpZ24gKiBpICogKGludDY0X3QpcmVxLT5zaXplKSwKICAgICAg
ICAgICAgICAgICAgIHJlcS0+c2l6ZSwgJnRtcCk7CiAgICAgICAgICAgICB9CiAgICAgICAgIH0g
ZWxzZSBpZiAocmVxLT5kaXIgPT0gSU9SRVFfV1JJVEUpIHsKICAgICAgICAgICAgIGZvciAoaSA9
IDA7IGkgPCByZXEtPmNvdW50OyBpKyspIHsKICAgICAgICAgICAgICAgICByZWFkX3BoeXNpY2Fs
KCh0YXJnZXRfcGh5c19hZGRyX3QpIHJlcS0+ZGF0YQotICAgICAgICAgICAgICAgICAgKyAoc2ln
biAqIGkgKiByZXEtPnNpemUpLAorICAgICAgICAgICAgICAgICAgKyAoc2lnbiAqIGkgKiAoaW50
NjRfdClyZXEtPnNpemUpLAogICAgICAgICAgICAgICAgICAgcmVxLT5zaXplLCAmdG1wKTsKICAg
ICAgICAgICAgICAgICB3cml0ZV9waHlzaWNhbChyZXEtPmFkZHIKLSAgICAgICAgICAgICAgICAg
ICsgKHNpZ24gKiBpICogcmVxLT5zaXplKSwKKyAgICAgICAgICAgICAgICAgICsgKHNpZ24gKiBp
ICogKGludDY0X3QpcmVxLT5zaXplKSwKICAgICAgICAgICAgICAgICAgIHJlcS0+c2l6ZSwgJnRt
cCk7CiAgICAgICAgICAgICB9CiAgICAgICAgIH0KLS0gCjEuNy4xCgo=

--_002_40776A41FC278F40B59438AD47D147A90FE17D93SHSMSX102ccrcor_
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--_002_40776A41FC278F40B59438AD47D147A90FE17D93SHSMSX102ccrcor_--


From xen-devel-bounces@lists.xen.org Mon Aug 20 12:35:24 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Aug 2012 12:35:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3RCI-0002K2-4d; Mon, 20 Aug 2012 12:35:10 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1T3RCG-0002Jx-SU
	for xen-devel@lists.xen.org; Mon, 20 Aug 2012 12:35:09 +0000
Received: from [85.158.139.83:54835] by server-2.bemta-5.messagelabs.com id
	4A/E2-10142-CFE22305; Mon, 20 Aug 2012 12:35:08 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-15.tower-182.messagelabs.com!1345466105!28982477!1
X-Originating-IP: [143.182.124.21]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMjEgPT4gMzE1OTEy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21393 invoked from network); 20 Aug 2012 12:35:06 -0000
Received: from mga03.intel.com (HELO mga03.intel.com) (143.182.124.21)
	by server-15.tower-182.messagelabs.com with SMTP;
	20 Aug 2012 12:35:06 -0000
Received: from azsmga002.ch.intel.com ([10.2.17.35])
	by azsmga101.ch.intel.com with ESMTP; 20 Aug 2012 05:35:04 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.77,796,1336374000"; 
	d="scan'208,223";a="136191507"
Received: from fmsmsx108.amr.corp.intel.com ([10.19.9.228])
	by AZSMGA002.ch.intel.com with ESMTP; 20 Aug 2012 05:35:04 -0700
Received: from fmsmsx152.amr.corp.intel.com (10.19.17.221) by
	FMSMSX108.amr.corp.intel.com (10.19.9.228) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Mon, 20 Aug 2012 05:35:02 -0700
Received: from shsmsx101.ccr.corp.intel.com (10.239.4.153) by
	fmsmsx152.amr.corp.intel.com (10.19.17.221) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Mon, 20 Aug 2012 05:35:01 -0700
Received: from shsmsx102.ccr.corp.intel.com ([169.254.2.92]) by
	SHSMSX101.ccr.corp.intel.com ([169.254.1.89]) with mapi id
	14.01.0355.002; Mon, 20 Aug 2012 20:35:00 +0800
From: "Xu, Dongxiao" <dongxiao.xu@intel.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Thread-Topic: [PATCH v2] QEMU/helper2.c: Fix multiply issue for int and uint
	types
Thread-Index: Ac1+0DYuDTlRVN/dQjO7DF9GvQD4JQ==
Date: Mon, 20 Aug 2012 12:34:59 +0000
Message-ID: <40776A41FC278F40B59438AD47D147A90FE17D93@SHSMSX102.ccr.corp.intel.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: yes
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
Content-Type: multipart/mixed;
	boundary="_002_40776A41FC278F40B59438AD47D147A90FE17D93SHSMSX102ccrcor_"
MIME-Version: 1.0
Subject: [Xen-devel] [PATCH v2] QEMU/helper2.c: Fix multiply issue for int
	and uint types
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--_002_40776A41FC278F40B59438AD47D147A90FE17D93SHSMSX102ccrcor_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable

Changes from v1: use int64_t to cast the uint32_t type to the signed type.

This is the updated version patch to fix the multiply issue for int and uin=
t types operands, which reflects Stefano's comments.

The following patch fixes an issue of (uint * int) multiply in QEMU.
The bug phenomenon is that, it causes the Level 1 (L1) Xen boots more than =
30 minutes on Level 0 (L0) Xen (The nested virtualization case).
Please help to review and pull.

I saw the upstream QEMU also have a similar code piece, I will also send a =
patch there.

Thanks,
Dongxiao

>From d71f9be82ec0079aa88f779dea90e475b177e32f Mon Sep 17 00:00:00 2001
From: Dongxiao Xu <dongxiao.xu@intel.com>
Date: Mon, 20 Aug 2012 16:45:04 +0800
Subject: [PATCH] helper2: fix multiply issue for int and uint types

If the two multiply operands are int and uint types separately,
the int type will be transformed to uint firstly, which is not the
intent in our code piece. The fix is to add (int64_t) transform
for the uint type before the multiply.

This helps to fix the Xen hypevisor slow booting issue (boots more
than 30 minutes) on another Xen hypervisor
(the nested virtualization case).

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 i386-dm/helper2.c |   16 ++++++++--------
 1 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/i386-dm/helper2.c b/i386-dm/helper2.c
index c6d049c..c093249 100644
--- a/i386-dm/helper2.c
+++ b/i386-dm/helper2.c
@@ -364,7 +364,7 @@ static void cpu_ioreq_pio(CPUState *env, ioreq_t *req)
             for (i = 0; i < req->count; i++) {
                 tmp = do_inp(env, req->addr, req->size);
                 write_physical((target_phys_addr_t) req->data
-                  + (sign * i * req->size),
+                  + (sign * i * (int64_t)req->size),
                   req->size, &tmp);
             }
         }
@@ -376,7 +376,7 @@ static void cpu_ioreq_pio(CPUState *env, ioreq_t *req)
                 unsigned long tmp = 0;
 
                 read_physical((target_phys_addr_t) req->data
-                  + (sign * i * req->size),
+                  + (sign * i * (int64_t)req->size),
                   req->size, &tmp);
                 do_outp(env, req->addr, req->size, tmp);
             }
@@ -394,13 +394,13 @@ static void cpu_ioreq_move(CPUState *env, ioreq_t *req)
         if (req->dir == IOREQ_READ) {
             for (i = 0; i < req->count; i++) {
                 read_physical(req->addr
-                  + (sign * i * req->size),
+                  + (sign * i * (int64_t)req->size),
                   req->size, &req->data);
             }
         } else if (req->dir == IOREQ_WRITE) {
             for (i = 0; i < req->count; i++) {
                 write_physical(req->addr
-                  + (sign * i * req->size),
+                  + (sign * i * (int64_t)req->size),
                   req->size, &req->data);
             }
         }
@@ -410,19 +410,19 @@ static void cpu_ioreq_move(CPUState *env, ioreq_t *req)
         if (req->dir == IOREQ_READ) {
             for (i = 0; i < req->count; i++) {
                 read_physical(req->addr
-                  + (sign * i * req->size),
+                  + (sign * i * (int64_t)req->size),
                   req->size, &tmp);
                 write_physical((target_phys_addr_t )req->data
-                  + (sign * i * req->size),
+                  + (sign * i * (int64_t)req->size),
                   req->size, &tmp);
             }
         } else if (req->dir == IOREQ_WRITE) {
             for (i = 0; i < req->count; i++) {
                 read_physical((target_phys_addr_t) req->data
-                  + (sign * i * req->size),
+                  + (sign * i * (int64_t)req->size),
                   req->size, &tmp);
                 write_physical(req->addr
-                  + (sign * i * req->size),
+                  + (sign * i * (int64_t)req->size),
                   req->size, &tmp);
             }
         }
-- 
1.7.1


--_002_40776A41FC278F40B59438AD47D147A90FE17D93SHSMSX102ccrcor_
Content-Type: application/octet-stream;
	name="0001-helper2-fix-multiply-issue-for-int-and-uint-types.patch"
Content-Description: 0001-helper2-fix-multiply-issue-for-int-and-uint-types.patch
Content-Disposition: attachment;
	filename="0001-helper2-fix-multiply-issue-for-int-and-uint-types.patch";
	size=3404; creation-date="Mon, 20 Aug 2012 12:29:12 GMT";
	modification-date="Mon, 20 Aug 2012 12:28:45 GMT"
Content-Transfer-Encoding: 8bit

From d71f9be82ec0079aa88f779dea90e475b177e32f Mon Sep 17 00:00:00 2001
From: Dongxiao Xu <dongxiao.xu@intel.com>
Date: Mon, 20 Aug 2012 16:45:04 +0800
Subject: [PATCH] helper2: fix multiply issue for int and uint types

If the two multiply operands are int and uint types separately,
the int type will be transformed to uint firstly, which is not the
intent in our code piece. The fix is to add (int64_t) transform
for the uint type before the multiply.

This helps to fix the Xen hypevisor slow booting issue (boots more
than 30 minutes) on another Xen hypervisor
(the nested virtualization case).

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 i386-dm/helper2.c |   16 ++++++++--------
 1 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/i386-dm/helper2.c b/i386-dm/helper2.c
index c6d049c..c093249 100644
--- a/i386-dm/helper2.c
+++ b/i386-dm/helper2.c
@@ -364,7 +364,7 @@ static void cpu_ioreq_pio(CPUState *env, ioreq_t *req)
             for (i = 0; i < req->count; i++) {
                 tmp = do_inp(env, req->addr, req->size);
                 write_physical((target_phys_addr_t) req->data
-                  + (sign * i * req->size),
+                  + (sign * i * (int64_t)req->size),
                   req->size, &tmp);
             }
         }
@@ -376,7 +376,7 @@ static void cpu_ioreq_pio(CPUState *env, ioreq_t *req)
                 unsigned long tmp = 0;
 
                 read_physical((target_phys_addr_t) req->data
-                  + (sign * i * req->size),
+                  + (sign * i * (int64_t)req->size),
                   req->size, &tmp);
                 do_outp(env, req->addr, req->size, tmp);
             }
@@ -394,13 +394,13 @@ static void cpu_ioreq_move(CPUState *env, ioreq_t *req)
         if (req->dir == IOREQ_READ) {
             for (i = 0; i < req->count; i++) {
                 read_physical(req->addr
-                  + (sign * i * req->size),
+                  + (sign * i * (int64_t)req->size),
                   req->size, &req->data);
             }
         } else if (req->dir == IOREQ_WRITE) {
             for (i = 0; i < req->count; i++) {
                 write_physical(req->addr
-                  + (sign * i * req->size),
+                  + (sign * i * (int64_t)req->size),
                   req->size, &req->data);
             }
         }
@@ -410,19 +410,19 @@ static void cpu_ioreq_move(CPUState *env, ioreq_t *req)
         if (req->dir == IOREQ_READ) {
             for (i = 0; i < req->count; i++) {
                 read_physical(req->addr
-                  + (sign * i * req->size),
+                  + (sign * i * (int64_t)req->size),
                   req->size, &tmp);
                 write_physical((target_phys_addr_t )req->data
-                  + (sign * i * req->size),
+                  + (sign * i * (int64_t)req->size),
                   req->size, &tmp);
             }
         } else if (req->dir == IOREQ_WRITE) {
             for (i = 0; i < req->count; i++) {
                 read_physical((target_phys_addr_t) req->data
-                  + (sign * i * req->size),
+                  + (sign * i * (int64_t)req->size),
                   req->size, &tmp);
                 write_physical(req->addr
-                  + (sign * i * req->size),
+                  + (sign * i * (int64_t)req->size),
                   req->size, &tmp);
             }
         }
-- 
1.7.1

--_002_40776A41FC278F40B59438AD47D147A90FE17D93SHSMSX102ccrcor_
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--_002_40776A41FC278F40B59438AD47D147A90FE17D93SHSMSX102ccrcor_--


From xen-devel-bounces@lists.xen.org Mon Aug 20 14:13:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Aug 2012 14:13:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3Siq-0002qd-P2; Mon, 20 Aug 2012 14:12:52 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T3Sip-0002qY-1H
	for xen-devel@lists.xensource.com; Mon, 20 Aug 2012 14:12:51 +0000
Received: from [85.158.143.99:41960] by server-1.bemta-4.messagelabs.com id
	7D/07-07754-0E542305; Mon, 20 Aug 2012 14:12:48 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-16.tower-216.messagelabs.com!1345471965!16524551!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDcwNzI2NA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5057 invoked from network); 20 Aug 2012 14:12:46 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-16.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 20 Aug 2012 14:12:46 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7KECcws031519
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 20 Aug 2012 14:12:39 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7KECcW3000338
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 20 Aug 2012 14:12:38 GMT
Received: from abhmt113.oracle.com (abhmt113.oracle.com [141.146.116.65])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7KECbqg003162; Mon, 20 Aug 2012 09:12:37 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 20 Aug 2012 07:12:37 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 300A94029A; Mon, 20 Aug 2012 10:02:42 -0400 (EDT)
Date: Mon, 20 Aug 2012 10:02:41 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Wei Xu <wei.xu.prc@gmail.com>, xen-devel@lists.xensource.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20120820140241.GD7847@phenom.dumpdata.com>
References: <CAH=9XOZC46yEKjoQKRgD3aEUaqvNcNF4Jcs=0HDdxJbAOpWEXw@mail.gmail.com>
	<CAH=9XObhC86Cg7kWdrUvPtPQEX94RcGiLH621pLbye3Q02=9ww@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAH=9XObhC86Cg7kWdrUvPtPQEX94RcGiLH621pLbye3Q02=9ww@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: xen <xen@lists.fedoraproject.org>
Subject: Re: [Xen-devel] [Fedora-xen] DomU console driver not works for
 Fedora17 in HVM mode with Xen 4.1.2
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Aug 20, 2012 at 09:39:56PM +0800, Wei Xu wrote:
> > Hi All,
> > I'm try to set up and verify Xen console driver base on Fedora 17 and Xen
> > 4.1.2 with hvm guest mode,
> > i searched around and got a link, it give steps both for PV and HVM mode,
> > I followed the HVM guide
> > and upgraded my kernel to 3.5.0.
> >
> >            http://www.dedoimedo.com/computers/xen-console.html
> >
> > After that, I can got console output with "xm console <dom_id>", but the
> > console driver is not used when I tracing the driver

.. I am not sure I understand: "when I tracing the driver" ? Are
you referring to the PV driver?

> > with "crash" utility, by examing the "console_drivers", the console driver
> > is still "serial8250 console", so i wonder if I didn't
> > set up it properly or something else, is there someone ever experienced
> > it, thanks.

Hm, it should be the hvc one. Perhaps Stefano knows..

CC-ing him and xen-devel here.
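
[Editorial note: for anyone hitting the same symptom, the guest only registers hvc0 as a kernel console if the kernel is built and booted for it. The fragment below is illustrative guest-side configuration, not taken from the reporter's setup.]

```
# 1) Guest kernel config: the Xen PV console frontend must be enabled.
CONFIG_HVC_XEN=y

# 2) Guest kernel command line (e.g. in the guest's grub config):
#    make hvc0 a kernel console, keeping the VGA console as well.
#      console=hvc0 console=tty0
#
# With the PV console active, the guest's /proc/consoles (and the
# crash(8) console_drivers list) should show an hvc entry rather
# than only the emulated serial8250 "ttyS" one.
```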

> >
> > crash> p *console_drivers
> > $10 = {
> >   name = "ttyS\000\000\000\000\000\000\000\000\000\000\000",
> >   write = 0xffffffff8138d5a0 <serial8250_console_write>,
> >   read = 0,
> >   device = 0xffffffff8138c350 <uart_console_device>,
> >   unblank = 0,
> >   setup = 0xffffffff81d29926 <serial8250_console_setup>,
> >   early_setup = 0xffffffff8138ca10 <serial8250_console_early_setup>,
> >   flags = 22,
> >   index = 0,
> >   cflag = 0,
> >   data = 0xffffffff81c82640,
> >   next = 0x0
> > }
> >
> > Thanks,
> > Wei
> >

> --
> xen mailing list
> xen@lists.fedoraproject.org
> https://admin.fedoraproject.org/mailman/listinfo/xen


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Mon Aug 20 14:23:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Aug 2012 14:23:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3St1-00030N-6P; Mon, 20 Aug 2012 14:23:23 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T3Ssz-00030I-Lv
	for xen-devel@lists.xensource.com; Mon, 20 Aug 2012 14:23:21 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1345472587!2853154!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjk5ODQx\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1320 invoked from network); 20 Aug 2012 14:23:12 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-15.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 20 Aug 2012 14:23:12 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7KEN3UY006990
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 20 Aug 2012 14:23:04 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7KEN2e6007076
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 20 Aug 2012 14:23:03 GMT
Received: from abhmt116.oracle.com (abhmt116.oracle.com [141.146.116.68])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7KEN1o9001850; Mon, 20 Aug 2012 09:23:02 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 20 Aug 2012 07:23:01 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id C71694029A; Mon, 20 Aug 2012 10:13:05 -0400 (EDT)
Date: Mon, 20 Aug 2012 10:13:05 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20120820141305.GA2713@phenom.dumpdata.com>
References: <1345133009-21941-1-git-send-email-konrad.wilk@oracle.com>
	<1345133009-21941-3-git-send-email-konrad.wilk@oracle.com>
	<alpine.DEB.2.02.1208171835010.15568@kaball.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.02.1208171835010.15568@kaball.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: [Xen-devel] [PATCH 02/11] xen/x86: Use memblock_reserve for
 sensitive areas.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Aug 17, 2012 at 06:35:12PM +0100, Stefano Stabellini wrote:
> On Thu, 16 Aug 2012, Konrad Rzeszutek Wilk wrote:
> > instead of a big memblock_reserve. This way we can be more
> > selective in freeing regions (and it also makes it easier
> > to understand where is what).
> > 
> > [v1: Move the auto_translate_physmap to proper line]
> > [v2: Per Stefano suggestion add more comments]
> > Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> 
> much better now!

Though, interestingly enough, it breaks 32-bit dom0s (and only dom0s).
Will have a revised patch posted shortly.

> 
> >  arch/x86/xen/enlighten.c |   48 ++++++++++++++++++++++++++++++++++++++++++++++
> >  arch/x86/xen/p2m.c       |    5 ++++
> >  arch/x86/xen/setup.c     |    9 --------
> >  3 files changed, 53 insertions(+), 9 deletions(-)
> > 
> > diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
> > index ff962d4..e532eb5 100644
> > --- a/arch/x86/xen/enlighten.c
> > +++ b/arch/x86/xen/enlighten.c
> > @@ -998,7 +998,54 @@ static int xen_write_msr_safe(unsigned int msr, unsigned low, unsigned high)
> >  
> >  	return ret;
> >  }
> > +/*
> > + * If the MFN is not in the m2p (provided to us by the hypervisor) this
> > + * function won't do anything. In practice this means that the XenBus
> > + * MFN won't be available for the initial domain. */
> > +static void __init xen_reserve_mfn(unsigned long mfn)
> > +{
> > +	unsigned long pfn;
> > +
> > +	if (!mfn)
> > +		return;
> > +	pfn = mfn_to_pfn(mfn);
> > +	if (phys_to_machine_mapping_valid(pfn))
> > +		memblock_reserve(PFN_PHYS(pfn), PAGE_SIZE);
> > +}
> > +static void __init xen_reserve_internals(void)
> > +{
> > +	unsigned long size;
> > +
> > +	if (!xen_pv_domain())
> > +		return;
> > +
> > +	/* xen_start_info does not exist in the M2P, hence can't use
> > +	 * xen_reserve_mfn. */
> > +	memblock_reserve(__pa(xen_start_info), PAGE_SIZE);
> > +
> > +	xen_reserve_mfn(PFN_DOWN(xen_start_info->shared_info));
> > +	xen_reserve_mfn(xen_start_info->store_mfn);
> >  
> > +	if (!xen_initial_domain())
> > +		xen_reserve_mfn(xen_start_info->console.domU.mfn);
> > +
> > +	if (xen_feature(XENFEAT_auto_translated_physmap))
> > +		return;
> > +
> > +	/*
> > +	 * ALIGN up to compensate for the p2m_page pointing to an array that
> > +	 * can partially filled (look in xen_build_dynamic_phys_to_machine).
> > +	 */
> > +
> > +	size = PAGE_ALIGN(xen_start_info->nr_pages * sizeof(unsigned long));
> > +
> > +	/* We could use xen_reserve_mfn here, but would end up looping quite
> > +	 * a lot (and call memblock_reserve for each PAGE), so lets just use
> > +	 * the easy way and reserve it wholesale. */
> > +	memblock_reserve(__pa(xen_start_info->mfn_list), size);
> > +
> > +	/* The pagetables are reserved in mmu.c */
> > +}
> >  void xen_setup_shared_info(void)
> >  {
> >  	if (!xen_feature(XENFEAT_auto_translated_physmap)) {
> > @@ -1362,6 +1409,7 @@ asmlinkage void __init xen_start_kernel(void)
> >  	xen_raw_console_write("mapping kernel into physical memory\n");
> >  	pgd = xen_setup_kernel_pagetable(pgd, xen_start_info->nr_pages);
> >  
> > +	xen_reserve_internals();
> >  	/* Allocate and initialize top and mid mfn levels for p2m structure */
> >  	xen_build_mfn_list_list();
> >  
> > diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
> > index e4adbfb..6a2bfa4 100644
> > --- a/arch/x86/xen/p2m.c
> > +++ b/arch/x86/xen/p2m.c
> > @@ -388,6 +388,11 @@ void __init xen_build_dynamic_phys_to_machine(void)
> >  	}
> >  
> >  	m2p_override_init();
> > +
> > +	/* NOTE: We cannot call memblock_reserve here for the mfn_list as there
> > +	 * isn't enough pieces to make it work (for one - we are still using the
> > +	 * Xen provided pagetable). Do it later in xen_reserve_internals.
> > +	 */
> >  }
> >  
> >  unsigned long get_phys_to_machine(unsigned long pfn)
> > diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
> > index a4790bf..9efca75 100644
> > --- a/arch/x86/xen/setup.c
> > +++ b/arch/x86/xen/setup.c
> > @@ -424,15 +424,6 @@ char * __init xen_memory_setup(void)
> >  	e820_add_region(ISA_START_ADDRESS, ISA_END_ADDRESS - ISA_START_ADDRESS,
> >  			E820_RESERVED);
> >  
> > -	/*
> > -	 * Reserve Xen bits:
> > -	 *  - mfn_list
> > -	 *  - xen_start_info
> > -	 * See comment above "struct start_info" in <xen/interface/xen.h>
> > -	 */
> > -	memblock_reserve(__pa(xen_start_info->mfn_list),
> > -			 xen_start_info->pt_base - xen_start_info->mfn_list);
> > -
> >  	sanitize_e820_map(e820.map, ARRAY_SIZE(e820.map), &e820.nr_map);
> >  
> >  	return "Xen";
> > -- 
> > 1.7.7.6
> > 
> > 
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
> > 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 20 14:23:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Aug 2012 14:23:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3St1-00030N-6P; Mon, 20 Aug 2012 14:23:23 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T3Ssz-00030I-Lv
	for xen-devel@lists.xensource.com; Mon, 20 Aug 2012 14:23:21 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1345472587!2853154!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjk5ODQx\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1320 invoked from network); 20 Aug 2012 14:23:12 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-15.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 20 Aug 2012 14:23:12 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7KEN3UY006990
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 20 Aug 2012 14:23:04 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7KEN2e6007076
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 20 Aug 2012 14:23:03 GMT
Received: from abhmt116.oracle.com (abhmt116.oracle.com [141.146.116.68])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7KEN1o9001850; Mon, 20 Aug 2012 09:23:02 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 20 Aug 2012 07:23:01 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id C71694029A; Mon, 20 Aug 2012 10:13:05 -0400 (EDT)
Date: Mon, 20 Aug 2012 10:13:05 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20120820141305.GA2713@phenom.dumpdata.com>
References: <1345133009-21941-1-git-send-email-konrad.wilk@oracle.com>
	<1345133009-21941-3-git-send-email-konrad.wilk@oracle.com>
	<alpine.DEB.2.02.1208171835010.15568@kaball.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.02.1208171835010.15568@kaball.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: [Xen-devel] [PATCH 02/11] xen/x86: Use memblock_reserve for
 sensitive areas.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Aug 17, 2012 at 06:35:12PM +0100, Stefano Stabellini wrote:
> On Thu, 16 Aug 2012, Konrad Rzeszutek Wilk wrote:
> > instead of a big memblock_reserve. This way we can be more
> > selective in freeing regions (and it also makes it easier
> > to understand where is what).
> > 
> > [v1: Move the auto_translate_physmap to proper line]
> > [v2: Per Stefano suggestion add more comments]
> > Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> 
> much better now!

Thought interestingly enough it breaks 32-bit dom0s (and only dom0s).
Will have a revised patch posted shortly.

> 
> >  arch/x86/xen/enlighten.c |   48 ++++++++++++++++++++++++++++++++++++++++++++++
> >  arch/x86/xen/p2m.c       |    5 ++++
> >  arch/x86/xen/setup.c     |    9 --------
> >  3 files changed, 53 insertions(+), 9 deletions(-)
> > 
> > diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
> > index ff962d4..e532eb5 100644
> > --- a/arch/x86/xen/enlighten.c
> > +++ b/arch/x86/xen/enlighten.c
> > @@ -998,7 +998,54 @@ static int xen_write_msr_safe(unsigned int msr, unsigned low, unsigned high)
> >  
> >  	return ret;
> >  }
> > +/*
> > + * If the MFN is not in the m2p (provided to us by the hypervisor) this
> > + * function won't do anything. In practice this means that the XenBus
> > + * MFN won't be available for the initial domain. */
> > +static void __init xen_reserve_mfn(unsigned long mfn)
> > +{
> > +	unsigned long pfn;
> > +
> > +	if (!mfn)
> > +		return;
> > +	pfn = mfn_to_pfn(mfn);
> > +	if (phys_to_machine_mapping_valid(pfn))
> > +		memblock_reserve(PFN_PHYS(pfn), PAGE_SIZE);
> > +}
> > +static void __init xen_reserve_internals(void)
> > +{
> > +	unsigned long size;
> > +
> > +	if (!xen_pv_domain())
> > +		return;
> > +
> > +	/* xen_start_info does not exist in the M2P, hence can't use
> > +	 * xen_reserve_mfn. */
> > +	memblock_reserve(__pa(xen_start_info), PAGE_SIZE);
> > +
> > +	xen_reserve_mfn(PFN_DOWN(xen_start_info->shared_info));
> > +	xen_reserve_mfn(xen_start_info->store_mfn);
> >  
> > +	if (!xen_initial_domain())
> > +		xen_reserve_mfn(xen_start_info->console.domU.mfn);
> > +
> > +	if (xen_feature(XENFEAT_auto_translated_physmap))
> > +		return;
> > +
> > +	/*
> > +	 * ALIGN up to compensate for the p2m_page pointing to an array that
> > +	 * can partially filled (look in xen_build_dynamic_phys_to_machine).
> > +	 */
> > +
> > +	size = PAGE_ALIGN(xen_start_info->nr_pages * sizeof(unsigned long));
> > +
> > +	/* We could use xen_reserve_mfn here, but would end up looping quite
> > +	 * a lot (and call memblock_reserve for each PAGE), so lets just use
> > +	 * the easy way and reserve it wholesale. */
> > +	memblock_reserve(__pa(xen_start_info->mfn_list), size);
> > +
> > +	/* The pagetables are reserved in mmu.c */
> > +}
> >  void xen_setup_shared_info(void)
> >  {
> >  	if (!xen_feature(XENFEAT_auto_translated_physmap)) {
> > @@ -1362,6 +1409,7 @@ asmlinkage void __init xen_start_kernel(void)
> >  	xen_raw_console_write("mapping kernel into physical memory\n");
> >  	pgd = xen_setup_kernel_pagetable(pgd, xen_start_info->nr_pages);
> >  
> > +	xen_reserve_internals();
> >  	/* Allocate and initialize top and mid mfn levels for p2m structure */
> >  	xen_build_mfn_list_list();
> >  
> > diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
> > index e4adbfb..6a2bfa4 100644
> > --- a/arch/x86/xen/p2m.c
> > +++ b/arch/x86/xen/p2m.c
> > @@ -388,6 +388,11 @@ void __init xen_build_dynamic_phys_to_machine(void)
> >  	}
> >  
> >  	m2p_override_init();
> > +
> > +	/* NOTE: We cannot call memblock_reserve here for the mfn_list as there
> > +	 * aren't enough pieces in place to make it work (for one, we are still
> > +	 * using the Xen-provided pagetable). Do it later in xen_reserve_internals.
> > +	 */
> >  }
> >  
> >  unsigned long get_phys_to_machine(unsigned long pfn)
> > diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
> > index a4790bf..9efca75 100644
> > --- a/arch/x86/xen/setup.c
> > +++ b/arch/x86/xen/setup.c
> > @@ -424,15 +424,6 @@ char * __init xen_memory_setup(void)
> >  	e820_add_region(ISA_START_ADDRESS, ISA_END_ADDRESS - ISA_START_ADDRESS,
> >  			E820_RESERVED);
> >  
> > -	/*
> > -	 * Reserve Xen bits:
> > -	 *  - mfn_list
> > -	 *  - xen_start_info
> > -	 * See comment above "struct start_info" in <xen/interface/xen.h>
> > -	 */
> > -	memblock_reserve(__pa(xen_start_info->mfn_list),
> > -			 xen_start_info->pt_base - xen_start_info->mfn_list);
> > -
> >  	sanitize_e820_map(e820.map, ARRAY_SIZE(e820.map), &e820.nr_map);
> >  
> >  	return "Xen";
> > -- 
> > 1.7.7.6
> > 
> > 
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
> > 


From xen-devel-bounces@lists.xen.org Mon Aug 20 14:44:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Aug 2012 14:44:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3TD5-0003CK-6v; Mon, 20 Aug 2012 14:44:07 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.xu.prc@gmail.com>) id 1T3SwS-00036j-WB
	for xen-devel@lists.xensource.com; Mon, 20 Aug 2012 14:26:57 +0000
Received: from [85.158.143.99:38744] by server-1.bemta-4.messagelabs.com id
	3F/BE-07754-03942305; Mon, 20 Aug 2012 14:26:56 +0000
X-Env-Sender: wei.xu.prc@gmail.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1345472813!22079095!1
X-Originating-IP: [209.85.210.171]
X-SpamReason: No, hits=0.9 required=7.0 tests=HTML_40_50,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27689 invoked from network); 20 Aug 2012 14:26:55 -0000
Received: from mail-iy0-f171.google.com (HELO mail-iy0-f171.google.com)
	(209.85.210.171)
	by server-6.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Aug 2012 14:26:55 -0000
Received: by iabz25 with SMTP id z25so3463946iab.30
	for <xen-devel@lists.xensource.com>;
	Mon, 20 Aug 2012 07:26:53 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=P+mMyuBnjdlivAnR4H3ROjd9qmFgPWQ0TOjdGOYXyko=;
	b=uKbCwbfm5zlkD8WtnNwtmqOebIYlC5SXhPE0MeSLtkSxd/enaApc3aBFPeL3vzLoeN
	BmdnvIVtN+hGY03/3o3/Kk/0N5Ce1ru+eBpiP+myT4Y0xjn7Xdm55UkILKeAShOM8CWx
	dEc13F6xk6c3Q6AxZomYQD9p1jFcAhyXjSLA+qo8Eja5jR2gJ5/s93626GjEtZRVLxrO
	ePEFXHJCTH7rIL4tv0GAo8gyuFWr2J+gK5R1bU2ILXnOYpe/z2j/7vP4u99HmBvn8tp2
	fN9PlfWhLQE2C5LeTIfcLlHNamdYWx8w3lWHaGDI7SVG7/aMwAyM/2ULpcWQhPqcgi+5
	ukNw==
MIME-Version: 1.0
Received: by 10.42.19.2 with SMTP id z2mr11176173ica.33.1345472813191; Mon, 20
	Aug 2012 07:26:53 -0700 (PDT)
Received: by 10.64.100.139 with HTTP; Mon, 20 Aug 2012 07:26:53 -0700 (PDT)
In-Reply-To: <20120820140241.GD7847@phenom.dumpdata.com>
References: <CAH=9XOZC46yEKjoQKRgD3aEUaqvNcNF4Jcs=0HDdxJbAOpWEXw@mail.gmail.com>
	<CAH=9XObhC86Cg7kWdrUvPtPQEX94RcGiLH621pLbye3Q02=9ww@mail.gmail.com>
	<20120820140241.GD7847@phenom.dumpdata.com>
Date: Mon, 20 Aug 2012 22:26:53 +0800
Message-ID: <CAH=9XOYhTAcKNvsfmMy6Wvu+JNC_yyasb9b8SZymjR2XMy-agA@mail.gmail.com>
From: Wei Xu <wei.xu.prc@gmail.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
X-Mailman-Approved-At: Mon, 20 Aug 2012 14:44:05 +0000
Cc: xen <xen@lists.fedoraproject.org>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] DomU console driver not works for Fedora17 in HVM
	mode with Xen 4.1.2
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8169799964048888993=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8169799964048888993==
Content-Type: multipart/alternative; boundary=20cf3040ee5c15d38604c7b34d36

--20cf3040ee5c15d38604c7b34d36
Content-Type: text/plain; charset=ISO-8859-1

Thanks.

On Monday, August 20, 2012, Konrad Rzeszutek Wilk wrote:

> On Mon, Aug 20, 2012 at 09:39:56PM +0800, Wei Xu wrote:
> > > Hi All,
> > > I'm trying to set up and verify the Xen console driver on Fedora 17
> > > and Xen 4.1.2 in HVM guest mode. I searched around and found a link
> > > that gives steps for both PV and HVM mode; I followed the HVM guide
> > > and upgraded my kernel to 3.5.0.
> > >
> > >            http://www.dedoimedo.com/computers/xen-console.html
> > >
> > > After that, I can get console output with "xm console <dom_id>", but
> > > the console driver is not used when I tracing the driver
>
> .. I am not sure I understand: "when I tracing the driver" ? Are
> you referring to the PV driver?


No, I mean the HVM driver. I added some debug output to the HVM console
read/write functions, but they are never invoked. Then I examined
"console_drivers" and found that no HVM-like driver was registered.


>
> > > with the "crash" utility: examining "console_drivers", the console
> > > driver is still the "serial8250 console". So I wonder if I didn't set
> > > it up properly, or if it's something else. Has anyone experienced
> > > this? Thanks.
>
> Hm, it should be the hvc one. Perhaps Stefano knows..
>
> CC-ing him and xen-devel here.
>
> > >
> > > crash> p *console_drivers
> > > $10 = {
> > >   name = "ttyS\000\000\000\000\000\000\000\000\000\000\000",
> > >   write = 0xffffffff8138d5a0 <serial8250_console_write>,
> > >   read = 0,
> > >   device = 0xffffffff8138c350 <uart_console_device>,
> > >   unblank = 0,
> > >   setup = 0xffffffff81d29926 <serial8250_console_setup>,
> > >   early_setup = 0xffffffff8138ca10 <serial8250_console_early_setup>,
> > >   flags = 22,
> > >   index = 0,
> > >   cflag = 0,
> > >   data = 0xffffffff81c82640,
> > >   next = 0x0
> > > }
> > >
> > > Thanks,
> > > Wei
> > >
>
> > --
> > xen mailing list
> > xen@lists.fedoraproject.org <javascript:;>
> > https://admin.fedoraproject.org/mailman/listinfo/xen
>
>

--20cf3040ee5c15d38604c7b34d36--


--===============8169799964048888993==--



From xen-devel-bounces@lists.xen.org Mon Aug 20 15:22:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Aug 2012 15:22:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3Tna-0003SJ-G9; Mon, 20 Aug 2012 15:21:50 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lcapitulino@redhat.com>) id 1T3TnY-0003SC-Lh
	for xen-devel@lists.xensource.com; Mon, 20 Aug 2012 15:21:48 +0000
Received: from [85.158.143.35:25953] by server-3.bemta-4.messagelabs.com id
	13/8E-09529-B0652305; Mon, 20 Aug 2012 15:21:47 +0000
X-Env-Sender: lcapitulino@redhat.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1345476104!14429046!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTUwMDY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17268 invoked from network); 20 Aug 2012 15:21:45 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-8.tower-21.messagelabs.com with SMTP;
	20 Aug 2012 15:21:45 -0000
Received: from int-mx09.intmail.prod.int.phx2.redhat.com
	(int-mx09.intmail.prod.int.phx2.redhat.com [10.5.11.22])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id q7KFLVRc029283
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 20 Aug 2012 11:21:31 -0400
Received: from doriath.home (ovpn-113-59.phx2.redhat.com [10.3.113.59])
	by int-mx09.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP
	id q7KFLOOJ002660; Mon, 20 Aug 2012 11:21:25 -0400
Date: Mon, 20 Aug 2012 12:22:10 -0300
From: Luiz Capitulino <lcapitulino@redhat.com>
To: Igor Mammedov <imammedo@redhat.com>
Message-ID: <20120820122210.572a79b5@doriath.home>
In-Reply-To: <1345419579-25499-4-git-send-email-imammedo@redhat.com>
References: <1345419579-25499-1-git-send-email-imammedo@redhat.com>
	<1345419579-25499-4-git-send-email-imammedo@redhat.com>
Organization: Red Hat
Mime-Version: 1.0
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.22
Cc: peter.maydell@linaro.org, jan.kiszka@siemens.com, mjt@tls.msk.ru,
	qemu-devel@nongnu.org, armbru@redhat.com, blauwirbel@gmail.com,
	kraxel@redhat.com, xen-devel@lists.xensource.com,
	i.mitsyanko@samsung.com, mdroth@linux.vnet.ibm.com,
	avi@redhat.com, anthony.perard@citrix.com, lersek@redhat.com,
	stefanha@linux.vnet.ibm.com, stefano.stabellini@eu.citrix.com,
	sw@weilnetz.de, rth@twiddle.net, kwolf@redhat.com,
	aliguori@us.ibm.com, mtosatti@redhat.com, pbonzini@redhat.com,
	afaerber@suse.de
Subject: Re: [Xen-devel] [PATCH 3/5] qapi-types.h doesn't really need to
 include qemu-common.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 20 Aug 2012 01:39:37 +0200
Igor Mammedov <imammedo@redhat.com> wrote:

> needed to prevent build breakage when CPU becomes a child of DeviceState
> 
> Signed-off-by: Igor Mammedov <imammedo@redhat.com>
> ---
>  scripts/qapi-types.py |    2 +-
>  1 files changed, 1 insertions(+), 1 deletions(-)
> 
> diff --git a/scripts/qapi-types.py b/scripts/qapi-types.py
> index cf601ae..f34addb 100644
> --- a/scripts/qapi-types.py
> +++ b/scripts/qapi-types.py
> @@ -263,7 +263,7 @@ fdecl.write(mcgen('''
>  #ifndef %(guard)s
>  #define %(guard)s
>  
> -#include "qemu-common.h"
> +#include <stdbool.h>

Please, also include <stdint.h>, as int64_t is used in qapi-types.h.

The build probably doesn't break only because the files that include
qapi-types.h happen to include <stdint.h> first.

>  
>  ''',
>                    guard=guardname(h_file)))



From xen-devel-bounces@lists.xen.org Mon Aug 20 15:23:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Aug 2012 15:23:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3Tof-0003VM-Up; Mon, 20 Aug 2012 15:22:57 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T3Tof-0003VH-7i
	for xen-devel@lists.xen.org; Mon, 20 Aug 2012 15:22:57 +0000
Received: from [85.158.138.51:27913] by server-10.bemta-3.messagelabs.com id
	9C/B6-20518-05652305; Mon, 20 Aug 2012 15:22:56 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-7.tower-174.messagelabs.com!1345476175!20347928!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk2MTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 486 invoked from network); 20 Aug 2012 15:22:55 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-7.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Aug 2012 15:22:55 -0000
X-IronPort-AV: E=Sophos;i="4.77,797,1336348800"; d="scan'208";a="14088735"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	20 Aug 2012 15:22:34 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Mon, 20 Aug 2012 16:22:34 +0100
Date: Mon, 20 Aug 2012 16:22:14 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: "Xu, Dongxiao" <dongxiao.xu@intel.com>
In-Reply-To: <40776A41FC278F40B59438AD47D147A90FE17D93@SHSMSX102.ccr.corp.intel.com>
Message-ID: <alpine.DEB.2.02.1208201621330.15568@kaball.uk.xensource.com>
References: <40776A41FC278F40B59438AD47D147A90FE17D93@SHSMSX102.ccr.corp.intel.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v2] QEMU/helper2.c: Fix multiply issue for
 int and uint types
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 20 Aug 2012, Xu, Dongxiao wrote:
> Changes from v1: use int64_t to cast the uint32_t operand to a signed type.
> 
> This is the updated patch to fix the multiplication issue for int and uint operands; it reflects Stefano's comments.
> 
> The following patch fixes a (uint * int) multiplication issue in QEMU.
> The symptom is that a Level 1 (L1) Xen takes more than 30 minutes to boot on a Level 0 (L0) Xen (the nested virtualization case).
> Please help to review and pull.
> 
> Upstream QEMU has a similar piece of code; I will send a patch there as well.

Please send the corresponding patch for upstream QEMU.


> Thanks,
> Dongxiao
> 
> >From d71f9be82ec0079aa88f779dea90e475b177e32f Mon Sep 17 00:00:00 2001
> From: Dongxiao Xu <dongxiao.xu@intel.com>
> Date: Mon, 20 Aug 2012 16:45:04 +0800
> Subject: [PATCH] helper2: fix multiply issue for int and uint types
> 
> If the two operands of a multiplication are of int and uint types
> respectively, the int operand is converted to uint first, which is
> not the intent in this piece of code. The fix is to cast the uint
> operand to (int64_t) before the multiplication.
> 
> This helps to fix the slow booting issue (boots take more than
> 30 minutes) of a Xen hypervisor running on another Xen hypervisor
> (the nested virtualization case).
> 
> Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>

Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>


> ---
>  i386-dm/helper2.c |   16 ++++++++--------
>  1 files changed, 8 insertions(+), 8 deletions(-)
> 
> diff --git a/i386-dm/helper2.c b/i386-dm/helper2.c
> index c6d049c..c093249 100644
> --- a/i386-dm/helper2.c
> +++ b/i386-dm/helper2.c
> @@ -364,7 +364,7 @@ static void cpu_ioreq_pio(CPUState *env, ioreq_t *req)
>              for (i = 0; i < req->count; i++) {
>                  tmp = do_inp(env, req->addr, req->size);
>                  write_physical((target_phys_addr_t) req->data
> -                  + (sign * i * req->size),
> +                  + (sign * i * (int64_t)req->size),
>                    req->size, &tmp);
>              }
>          }
> @@ -376,7 +376,7 @@ static void cpu_ioreq_pio(CPUState *env, ioreq_t *req)
>                  unsigned long tmp = 0;
>  
>                  read_physical((target_phys_addr_t) req->data
> -                  + (sign * i * req->size),
> +                  + (sign * i * (int64_t)req->size),
>                    req->size, &tmp);
>                  do_outp(env, req->addr, req->size, tmp);
>              }
> @@ -394,13 +394,13 @@ static void cpu_ioreq_move(CPUState *env, ioreq_t *req)
>          if (req->dir == IOREQ_READ) {
>              for (i = 0; i < req->count; i++) {
>                  read_physical(req->addr
> -                  + (sign * i * req->size),
> +                  + (sign * i * (int64_t)req->size),
>                    req->size, &req->data);
>              }
>          } else if (req->dir == IOREQ_WRITE) {
>              for (i = 0; i < req->count; i++) {
>                  write_physical(req->addr
> -                  + (sign * i * req->size),
> +                  + (sign * i * (int64_t)req->size),
>                    req->size, &req->data);
>              }
>          }
> @@ -410,19 +410,19 @@ static void cpu_ioreq_move(CPUState *env, ioreq_t *req)
>          if (req->dir == IOREQ_READ) {
>              for (i = 0; i < req->count; i++) {
>                  read_physical(req->addr
> -                  + (sign * i * req->size),
> +                  + (sign * i * (int64_t)req->size),
>                    req->size, &tmp);
>                  write_physical((target_phys_addr_t )req->data
> -                  + (sign * i * req->size),
> +                  + (sign * i * (int64_t)req->size),
>                    req->size, &tmp);
>              }
>          } else if (req->dir == IOREQ_WRITE) {
>              for (i = 0; i < req->count; i++) {
>                  read_physical((target_phys_addr_t) req->data
> -                  + (sign * i * req->size),
> +                  + (sign * i * (int64_t)req->size),
>                    req->size, &tmp);
>                  write_physical(req->addr
> -                  + (sign * i * req->size),
> +                  + (sign * i * (int64_t)req->size),
>                    req->size, &tmp);
>              }
>          }
> -- 
> 1.7.1
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 20 15:28:12 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Aug 2012 15:28:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3TtP-0003kb-SG; Mon, 20 Aug 2012 15:27:51 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lcapitulino@redhat.com>) id 1T3TtO-0003kP-1m
	for xen-devel@lists.xensource.com; Mon, 20 Aug 2012 15:27:50 +0000
Received: from [85.158.143.99:18153] by server-2.bemta-4.messagelabs.com id
	50/A0-31966-57752305; Mon, 20 Aug 2012 15:27:49 +0000
X-Env-Sender: lcapitulino@redhat.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1345476467!22090670!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTUwMDY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1206 invoked from network); 20 Aug 2012 15:27:48 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-6.tower-216.messagelabs.com with SMTP;
	20 Aug 2012 15:27:48 -0000
Received: from int-mx10.intmail.prod.int.phx2.redhat.com
	(int-mx10.intmail.prod.int.phx2.redhat.com [10.5.11.23])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id q7KFRPaE009445
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 20 Aug 2012 11:27:26 -0400
Received: from doriath.home (ovpn-113-59.phx2.redhat.com [10.3.113.59])
	by int-mx10.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP
	id q7KFRInl022699; Mon, 20 Aug 2012 11:27:19 -0400
Date: Mon, 20 Aug 2012 12:28:05 -0300
From: Luiz Capitulino <lcapitulino@redhat.com>
To: Igor Mammedov <imammedo@redhat.com>
Message-ID: <20120820122805.0cc63a8c@doriath.home>
In-Reply-To: <1345419579-25499-5-git-send-email-imammedo@redhat.com>
References: <1345419579-25499-1-git-send-email-imammedo@redhat.com>
	<1345419579-25499-5-git-send-email-imammedo@redhat.com>
Organization: Red Hat
Mime-Version: 1.0
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.23
Cc: peter.maydell@linaro.org, jan.kiszka@siemens.com, mjt@tls.msk.ru,
	qemu-devel@nongnu.org, armbru@redhat.com, blauwirbel@gmail.com,
	kraxel@redhat.com, xen-devel@lists.xensource.com,
	i.mitsyanko@samsung.com, mdroth@linux.vnet.ibm.com,
	avi@redhat.com, anthony.perard@citrix.com, lersek@redhat.com,
	stefanha@linux.vnet.ibm.com, stefano.stabellini@eu.citrix.com,
	sw@weilnetz.de, rth@twiddle.net, kwolf@redhat.com,
	aliguori@us.ibm.com, mtosatti@redhat.com, pbonzini@redhat.com,
	afaerber@suse.de
Subject: Re: [Xen-devel] [PATCH 4/5] cleanup error.h,
 included qapi-types.h already has stdbool.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 20 Aug 2012 01:39:38 +0200
Igor Mammedov <imammedo@redhat.com> wrote:

> Signed-off-by: Igor Mammedov <imammedo@redhat.com>
> ---
>  error.h |    1 -
>  1 files changed, 0 insertions(+), 1 deletions(-)
> 
> diff --git a/error.h b/error.h
> index 96fc203..643a372 100644
> --- a/error.h
> +++ b/error.h
> @@ -14,7 +14,6 @@
>  
>  #include "compiler.h"
>  #include "qapi-types.h"
> -#include <stdbool.h>

Hmm, not good. qapi-types.h includes <stdbool.h> for its own internal
needs; files that include qapi-types.h shouldn't rely on that (they
would break if qapi-types.h were changed to no longer include
<stdbool.h>).

Please keep this code as it is.

>  
>  /**
>   * A class representing internal errors within QEMU.  An error has a ErrorClass


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 20 15:47:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Aug 2012 15:47:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3UCP-0003xg-P5; Mon, 20 Aug 2012 15:47:29 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T3UCO-0003xb-1y
	for xen-devel@lists.xensource.com; Mon, 20 Aug 2012 15:47:28 +0000
Received: from [85.158.138.51:17293] by server-7.bemta-3.messagelabs.com id
	6F/52-01906-F0C52305; Mon, 20 Aug 2012 15:47:27 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-6.tower-174.messagelabs.com!1345477644!21229777!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk2MTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18557 invoked from network); 20 Aug 2012 15:47:25 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-6.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Aug 2012 15:47:25 -0000
X-IronPort-AV: E=Sophos;i="4.77,797,1336348800"; d="scan'208";a="14089335"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	20 Aug 2012 15:47:24 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Mon, 20 Aug 2012 16:47:24 +0100
Date: Mon, 20 Aug 2012 16:47:04 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <20120820140241.GD7847@phenom.dumpdata.com>
Message-ID: <alpine.DEB.2.02.1208201639390.15568@kaball.uk.xensource.com>
References: <CAH=9XOZC46yEKjoQKRgD3aEUaqvNcNF4Jcs=0HDdxJbAOpWEXw@mail.gmail.com>
	<CAH=9XObhC86Cg7kWdrUvPtPQEX94RcGiLH621pLbye3Q02=9ww@mail.gmail.com>
	<20120820140241.GD7847@phenom.dumpdata.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: xen <xen@lists.fedoraproject.org>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Wei Xu <wei.xu.prc@gmail.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [Fedora-xen] DomU console driver does not work for
 Fedora17 in HVM mode with Xen 4.1.2
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 20 Aug 2012, Konrad Rzeszutek Wilk wrote:
> On Mon, Aug 20, 2012 at 09:39:56PM +0800, Wei Xu wrote:
> > > Hi All,
> > > I'm try to set up and verify Xen console driver base on Fedora 17 and Xen
> > > 4.1.2 with hvm guest mode,
> > > i searched around and got a link, it give steps both for PV and HVM mode,
> > > I followed the HVM guide
> > > and upgraded my kernel to 3.5.0.
> > >
> > >            http://www.dedoimedo.com/computers/xen-console.html
> > >
> > > After that, I can got console output with "xm console <dom_id>", but the
> > > console driver is not used when I tracing the driver
> 
> .. I am not sure I understand: "when I tracing the driver" ? Are
> you referring to the PV driver?
> 
> > > with "crash" utility, by examing the "console_drivers", the console driver
> > > is still "serial8250 console", so i wonder if I didn't
> > > set up it properly or something else, is there someone ever experienced
> > > it, thanks.
> 
> Hm, it should be the hvc one. Perhaps Stefano knows..
> 
> CC-ing him and xen-devel here.

An HVM guest has access to both an emulated serial (if a serial="pty"
parameter is present in the VM config file) and a PV console.
However, with libxl the default primary console is the emulated serial
(see libxl__primary_console_find); that is what you get when you
execute "xl console" without -t.

But if you edit inittab to spawn a getty on hvc0 and then execute "xl
console -t pv", you should get access to the PV console.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 20 16:17:22 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Aug 2012 16:17:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3Uf1-0004ZS-8I; Mon, 20 Aug 2012 16:17:03 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T3Uez-0004ZN-LJ
	for xen-devel@lists.xensource.com; Mon, 20 Aug 2012 16:17:01 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1345479393!6233858!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk2MTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23433 invoked from network); 20 Aug 2012 16:16:33 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Aug 2012 16:16:33 -0000
X-IronPort-AV: E=Sophos;i="4.77,797,1336348800"; d="scan'208";a="14090071"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	20 Aug 2012 16:16:32 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Mon, 20 Aug 2012 17:16:32 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1T3UeW-0007NM-De;
	Mon, 20 Aug 2012 16:16:32 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1T3UeW-0000cP-2A;
	Mon, 20 Aug 2012 17:16:32 +0100
To: xen-devel@lists.xensource.com
Message-ID: <osstest-13618-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 20 Aug 2012 17:16:32 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13618: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13618 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13618/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-i386-i386-xl-win        12 guest-localmigrate/x10    fail REGR. vs. 13617

Regressions which are regarded as allowable (not blocking):
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13617
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13617
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13617
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13617

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  e6ca45ca03c2
baseline version:
 xen                  73ac4b7ad2e1

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   25765:e6ca45ca03c2
tag:         tip
user:        Jan Beulich <jbeulich@suse.com>
date:        Mon Aug 20 08:46:47 2012 +0200
    
    x86-64: refine the XSA-9 fix
    
    Our product management wasn't happy with the "solution" for XSA-9, and
    demanded that customer systems must continue to boot. Rather than
    having our and perhaps other distros carry non-trivial patches, allow
    for more fine grained control (panic on boot, deny guest creation, or
    merely warn) by means of a single line change.
    
    Also, as this was found to be a problem with remotely managed systems,
    don't default to boot denial (just deny guest creation).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    
    
changeset:   25764:4b0d263008cd
user:        Jan Beulich <jbeulich@suse.com>
date:        Mon Aug 20 08:40:01 2012 +0200
    
    x86: don't expose SYSENTER on unknown CPUs
    
    So far we only ever set up the respective MSRs on Intel CPUs, yet we
    hide the feature only on a 32-bit hypervisor. That prevents booting of
    PV guests on top of a 64-bit hypervisor making use of the instruction
    on unknown CPUs (VIA in this case).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    
    
changeset:   25763:73ac4b7ad2e1
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Fri Aug 17 14:57:29 2012 +0100
    
    docs: console: correct example console type definition
    
    I think this is intended to be under the specific console's directory.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
    Committed-by: Ian Campbell <ian.campbell@citrix.com>
    
    
========================================
commit effd5676225761abdab90becac519716515c3be4
Author: Ian Jackson <ian.jackson@eu.citrix.com>
Date:   Tue Aug 14 15:57:49 2012 +0100

    Revert "qemu-xen-traditional: use O_DIRECT to open disk images for IDE"
    
    This reverts commit 1307e42a4b3c1102d75401bc0cffb4eb6c9b7a38.
    
    In fact after a lengthy discussion, we came up with the conclusion
    that WRITEBACK is OK for IDE.
    See: http://marc.info/?l=xen-devel&m=133311527009773
    
    Therefore revert this which was committed in error.


From xen-devel-bounces@lists.xen.org Mon Aug 20 16:47:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Aug 2012 16:47:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3V82-0004ma-Up; Mon, 20 Aug 2012 16:47:02 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1T3V81-0004mV-BY
	for xen-devel@lists.xen.org; Mon, 20 Aug 2012 16:47:01 +0000
Received: from [85.158.139.83:49818] by server-7.bemta-5.messagelabs.com id
	3E/20-32634-40A62305; Mon, 20 Aug 2012 16:47:00 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-3.tower-182.messagelabs.com!1345481219!29112534!1
X-Originating-IP: [209.85.215.173]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
  RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17777 invoked from network); 20 Aug 2012 16:46:59 -0000
Received: from mail-ey0-f173.google.com (HELO mail-ey0-f173.google.com)
	(209.85.215.173)
	by server-3.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Aug 2012 16:46:59 -0000
Received: by eaac13 with SMTP id c13so1989003eaa.32
	for <xen-devel@lists.xen.org>; Mon, 20 Aug 2012 09:46:59 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:date:x-google-sender-auth:message-id:subject
	:from:to:content-type;
	bh=6g1eLgDsd7iNMyOlS6DWiKBgj9B856n02FYOKlgYMDA=;
	b=fo+sI7BpSkWufMt/kOra9DzEFUW2qn4tBYakibWuSC7faSAwY1scWeh6yAdEjJ4sX4
	5Hu58x3PwgJQTKMxMAxZt6d12dzti2d/qZfcPKr2SILahrETkrmKUgz9X0vA7wg8j6W8
	z2ny8EvJMyryhjpuUJ9rv6Po1XO7VFVusB5INuVYMd5COglUtJpY+pxg80s0u9stntGK
	UjvKRG9FVtCCEjykjxMTLEzsiM0WbrkNhDnQbIAK7SnDtbOFj5pOfjq3mWwX64pgIZsN
	Un96A9G9OKE5WGsAEh+vsjpPTuD9b1MLzYgBbYlfCgVVTOFtU74zH4eGVQ68hBOwHw6P
	Xusg==
MIME-Version: 1.0
Received: by 10.14.4.201 with SMTP id 49mr9599134eej.0.1345481219417; Mon, 20
	Aug 2012 09:46:59 -0700 (PDT)
Received: by 10.14.215.195 with HTTP; Mon, 20 Aug 2012 09:46:59 -0700 (PDT)
Date: Mon, 20 Aug 2012 17:46:59 +0100
X-Google-Sender-Auth: pkfj2Pn-JOy-wuZAbUSCL3MuIQQ
Message-ID: <CAFLBxZaGmvSYgL4DcHcSwE_0shqKC0DRbf7ab=uhFrerPCxRig@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: xen-devel@lists.xen.org
Subject: [Xen-devel] Xen 4.3 release planning proposal
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello everyone!  With the completion of our first few release candidates
for 4.2, it's time to look forward and start planning for the 4.3
release.  I've volunteered to step up and help coordinate the release
for this cycle.

The 4.2 release cycle this time has been nearly a year and a half.
One of the problems with having such a long cycle is that people who
get features in early have to wait a long time for that feature to
appear in a published version; they then have to wait even longer for
it to be part of a released distribution.  Historically the cycle has
been around 9 months, but this has never been made explicit.  Many
people (including me) think that a 9-month release cycle is a good
cadence that we should aim for.

So I propose that we move to a time-based release schedule.  Rather
than aiming for a release date, I propose that we aim to do a "feature
freeze" six months after the 4.2 release -- that would be around March
1, 2013.  That way we'll probably end up releasing in 9 months' time,
around June 2013.  This is one of the things we can discuss at the Dev
Meeting before the Xen Summit next week.  If you have other opinions,
please let us know.

I will also be tracking ahead of time many of the features and
improvements that we want to try to get into 4.3.  Below is a list of
high-level features and improvements that we on the Citrix Xen.org
team are either planning to work on ourselves, or are aware of other
people working on, and that we should reasonably be able to get into
the 4.3 release.  Most of them have owners, but many do not yet;
volunteers are welcome.

If you are planning on working on any features not listed that you
would like to have tracked, please let me know.

I will be sending tracking updates similar to Ian Campbell's 4.2
release updates.  I think to begin with, weekly may be a bit
excessive.  I'll probably go for bi-weekly, and switch to weekly after
the feature freeze.

It should be noted this is not an exhaustive list, nor an immutable
one.  Our main priority will be to release within 9 months; only a
very important feature indeed would cause us to slip the release.

Features and improvements not on this list are of course welcome at
any time before the feature freeze.

Any questions and feedback are welcome!

Your 4.3 release coordinator,
 George Dunlap

* Event channel scalability
  owner: attilio@citrix
  Increase limit on event channels (currently 1024 for 32-bit guests,
  4096 for 64-bit guests)
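
  As I understand it, those limits fall out of the 2-level event
  channel ABI: a one-word selector picks a word of the pending bitmap,
  so the limit is bits-per-long squared.  A quick sketch of that
  arithmetic (my illustration, not taken from this thread):

```shell
# 2-level event channels: BITS_PER_LONG selector bits, each covering
# a BITS_PER_LONG-bit word of the pending bitmap.
echo "32-bit guest limit: $((32 * 32))"   # 1024
echo "64-bit guest limit: $((64 * 64))"   # 4096
```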

* NUMA scheduler affinity
  owner: dario@citrix

* NUMA Memory migration
  owner: dario@citrix

* PVH mode, domU (w/ Linux)
  owner: mukesh@oracle

* PVH mode, dom0 (w/ Linux)
  owner: mukesh@oracle

* ARM server port
  owner: @citrix

* blktap3
  owner: @citrix

* Default to QEMU upstream
 - qemu-based stubdom (Linux or BSD libc)
    owner: anthony@citrix
    qemu-upstream needs a more fully-featured libc than exists in
    minios.  Either work on a minimalist linux-based stubdom with
    glibc, or port one of the BSD libcs to minios.

 - pci pass-thru
    owner: anthony@citrix

* Persistent grants
  owner: @citrix

* Multi-page blk rings
 - blkback in kernel (@intel)
 - qemu blkback

* Multi-page net protocol
  owner: ?
  expand the network ring protocol to allow multiple pages for
  increased throughput

* xl vm-{export,import}
  owner: ?
  Allow xl to import and export VMs to other formats; particularly
  ovf, perhaps the XenServer format, or more.


* xl USB pass-through for PV guests
  owner: ?
  Port the xend PV pass-through functionality to xl.

* openvswitch toolstack integration
  owner: roger@citrix

* Rationalized backend scripts (incl. driver domains)
  owner: roger@citrix

* Full-VM snapshotting
  owner: ?
  Have a way of coordinating the taking and restoring of VM memory and
  disk snapshots.  This would involve some investigation into the best
  way to accomplish this.

* VM Cloning
  owner: ?
  Again, a way of coordinating the memory and disk aspects.  Research
  into the best way to do this would probably go along with the
  snapshotting feature.

* Make storage migration possible
  owner: ?
  There needs to be a way, either via command-line or via some hooks,
  that someone can build a "storage migration" feature on top of libxl
  or xl.

* PV audio (audio for stubdom qemu)
  owner: stefano.panella@citrix

* Memory: Replace PoD with paging mechanism
  owner: george@citrix

* Managed domains?


From xen-devel-bounces@lists.xen.org Mon Aug 20 16:47:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Aug 2012 16:47:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3V82-0004ma-Up; Mon, 20 Aug 2012 16:47:02 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1T3V81-0004mV-BY
	for xen-devel@lists.xen.org; Mon, 20 Aug 2012 16:47:01 +0000
Received: from [85.158.139.83:49818] by server-7.bemta-5.messagelabs.com id
	3E/20-32634-40A62305; Mon, 20 Aug 2012 16:47:00 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-3.tower-182.messagelabs.com!1345481219!29112534!1
X-Originating-IP: [209.85.215.173]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
  RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17777 invoked from network); 20 Aug 2012 16:46:59 -0000
Received: from mail-ey0-f173.google.com (HELO mail-ey0-f173.google.com)
	(209.85.215.173)
	by server-3.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Aug 2012 16:46:59 -0000
Received: by eaac13 with SMTP id c13so1989003eaa.32
	for <xen-devel@lists.xen.org>; Mon, 20 Aug 2012 09:46:59 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:date:x-google-sender-auth:message-id:subject
	:from:to:content-type;
	bh=6g1eLgDsd7iNMyOlS6DWiKBgj9B856n02FYOKlgYMDA=;
	b=fo+sI7BpSkWufMt/kOra9DzEFUW2qn4tBYakibWuSC7faSAwY1scWeh6yAdEjJ4sX4
	5Hu58x3PwgJQTKMxMAxZt6d12dzti2d/qZfcPKr2SILahrETkrmKUgz9X0vA7wg8j6W8
	z2ny8EvJMyryhjpuUJ9rv6Po1XO7VFVusB5INuVYMd5COglUtJpY+pxg80s0u9stntGK
	UjvKRG9FVtCCEjykjxMTLEzsiM0WbrkNhDnQbIAK7SnDtbOFj5pOfjq3mWwX64pgIZsN
	Un96A9G9OKE5WGsAEh+vsjpPTuD9b1MLzYgBbYlfCgVVTOFtU74zH4eGVQ68hBOwHw6P
	Xusg==
MIME-Version: 1.0
Received: by 10.14.4.201 with SMTP id 49mr9599134eej.0.1345481219417; Mon, 20
	Aug 2012 09:46:59 -0700 (PDT)
Received: by 10.14.215.195 with HTTP; Mon, 20 Aug 2012 09:46:59 -0700 (PDT)
Date: Mon, 20 Aug 2012 17:46:59 +0100
X-Google-Sender-Auth: pkfj2Pn-JOy-wuZAbUSCL3MuIQQ
Message-ID: <CAFLBxZaGmvSYgL4DcHcSwE_0shqKC0DRbf7ab=uhFrerPCxRig@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: xen-devel@lists.xen.org
Subject: [Xen-devel] Xen 4.3 release planning proposal
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello everyone!  With the completion of our first few release candidates
for 4.2, it's time to look forward and start planning for the 4.3
release.  I've volunteered to step up and help coordinate the release
for this cycle.

The 4.2 release cycle this time has run nearly a year and a half.
One of the problems with such a long cycle is that people who get
their features in early have to wait a long time for those features to
appear in a published version; they then have to wait even longer for
them to be part of a released distribution.  Historically the cycle
has been around 9 months, but this has never been made explicit.  Many
people (including myself) think that a 9-month release cycle is a good
cadence that we should aim for.

So I propose that we move to a time-based release schedule.  Rather
than aiming for a release date, I propose that we aim to do a "feature
freeze" six months after the 4.2 release -- that would be around March
1, 2013.  That way we'll probably end up releasing in 9 months' time,
around June 2013.  This is one of the things we can discuss at the Dev
Meeting before the Xen Summit next week.  If you have other opinions,
please let us know.

I will also be tracking, ahead of time, many of the features and
improvements that we want to try to get into 4.3.  Below is a list of
high-level features and improvements that we on the Citrix Xen.org
team are either planning to work on ourselves, or are aware of other
people working on, and that we should reasonably be able to get into
the 4.3 release.  Most of them have owners, but many do not yet;
volunteers are welcome.

If you are planning on working on any features not listed that you would
like to have tracked, please let me know.

I will be sending tracking updates similar to Ian Campbell's 4.2
release updates.  I think to begin with, weekly may be a bit
excessive.  I'll probably go for bi-weekly, and switch to weekly after
the feature freeze.

It should be noted this is not an exhaustive list, nor an immutable
one.  Our main priority will be to release within 9 months; only a
very important feature indeed would cause us to slip the release.

Features and improvements not on this list are of course welcome at
any time before the feature freeze.

Any questions and feedback are welcome!

Your 4.3 release coordinator,
 George Dunlap

* Event channel scalability
  owner: attilio@citrix
  Increase limit on event channels (currently 1024 for 32-bit guests,
  4096 for 64-bit guests)

* NUMA scheduler affinity
  owner: dario@citrix

* NUMA Memory migration
  owner: dario@citrix

* PVH mode, domU (w/ Linux)
  owner: mukesh@oracle

* PVH mode, dom0 (w/ Linux)
  owner: mukesh@oracle

* ARM server port
  owner: @citrix

* blktap3
  owner: @citrix

* Default to QEMU upstream
 - qemu-based stubdom (Linux or BSD libc)
    owner: anthony@citrix
    qemu-upstream needs a more fully-featured libc than exists in
    minios.  Either work on a minimalist linux-based stubdom with
    glibc, or port one of the BSD libcs to minios.

 - pci pass-thru
    owner: anthony@citrix

* Persistent grants
  owner: @citrix

* Multi-page blk rings
 - blkback in kernel (@intel)
 - qemu blkback

* Multi-page net protocol
  owner: ?
  expand the network ring protocol to allow multiple pages for
  increased throughput

* xl vm-{export,import}
  owner: ?
  Allow xl to import and export VMs to and from other formats;
  particularly OVF, perhaps the XenServer format, or others.


* xl USB pass-through for PV guests
  owner: ?
  Port the xend PV pass-through functionality to xl.

* openvswitch toolstack integration
  owner: roger@citrix

* Rationalized backend scripts (incl. driver domains)
  owner: roger@citrix

* Full-VM snapshotting
  owner: ?
  Have a way of coordinating the taking and restoring of VM memory and
  disk snapshots.  This would involve some investigation into the best
  way to accomplish this.

* VM Cloning
  owner: ?
  Again, a way of coordinating the memory and disk aspects.  Research
  into the best way to do this would probably go along with the
  snapshotting feature.

* Make storage migration possible
  owner: ?
  There needs to be a way, either via command-line or via some hooks,
  that someone can build a "storage migration" feature on top of libxl
  or xl.

* PV audio (audio for stubdom qemu)
  owner: stefano.panella@citrix

* Memory: Replace PoD with paging mechanism
  owner: george@citrix

* Managed domains?

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 20 17:50:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Aug 2012 17:50:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3W7T-00055w-Rg; Mon, 20 Aug 2012 17:50:31 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1T3W7S-00055r-Rv
	for Xen-devel@lists.xensource.com; Mon, 20 Aug 2012 17:50:31 +0000
Received: from [85.158.143.99:43492] by server-2.bemta-4.messagelabs.com id
	8E/0C-31966-6E872305; Mon, 20 Aug 2012 17:50:30 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-16.tower-216.messagelabs.com!1345485026!16555753!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDcwNzI2NA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12734 invoked from network); 20 Aug 2012 17:50:28 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-16.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 20 Aug 2012 17:50:28 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7KHoJen019281
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 20 Aug 2012 17:50:20 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7KHoJsq010160
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 20 Aug 2012 17:50:19 GMT
Received: from abhmt103.oracle.com (abhmt103.oracle.com [141.146.116.55])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7KHoI3n006623; Mon, 20 Aug 2012 12:50:18 -0500
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 20 Aug 2012 10:50:18 -0700
Date: Mon, 20 Aug 2012 10:50:17 -0700
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20120820105017.65b6e35e@mantra.us.oracle.com>
In-Reply-To: <alpine.DEB.2.02.1208201152510.15568@kaball.uk.xensource.com>
References: <20120815175724.3405043a@mantra.us.oracle.com>
	<alpine.DEB.2.02.1208161454430.4850@kaball.uk.xensource.com>
	<20120816114650.4db2079f@mantra.us.oracle.com>
	<alpine.DEB.2.02.1208171104230.15568@kaball.uk.xensource.com>
	<20120817122014.3c3387b5@mantra.us.oracle.com>
	<20120817193604.GA4573@phenom.dumpdata.com>
	<20120817152617.64e2fe5e@mantra.us.oracle.com>
	<alpine.DEB.2.02.1208201152510.15568@kaball.uk.xensource.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.7.6 (GTK+ 2.18.9; x86_64-redhat-linux-gnu)
Mime-Version: 1.0
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [RFC PATCH 1/8]: PVH: Basic and preparatory changes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 20 Aug 2012 12:02:55 +0100
Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:

> On Fri, 17 Aug 2012, Mukesh Rathor wrote:
> > On Fri, 17 Aug 2012 15:36:04 -0400
> > Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> > 
> > So, if I just add checks for auto_translated_physmap like suggested,
> > wouldn't I be changing and breaking the code paths for dom0_shadow
> > boot of a PV guest? Is dom0_shadow deprecated?
> 
> I think that it is just a debugging option. The most recent reference
> to dom0_shadow is in 2005, according to Google. Not many people would
> miss it.

Agree.

> 
> If I understand dom0_shadow correctly, it wouldn't have
> xen_have_vector_callback set, so the above #define would still work as
> you expect.
> But if all the above characteristics are actually true for dom0_shadow
> guests too, then it might make sense to call them pvh domains anyway.
  
Right.

> 
> We can still have a pvh option in the VM config file or as a Xen
> parameter for dom0: it doesn't have to be exported as a SIF flag
> to the Linux kernel though.
> If xen_have_vector_callback is enabled and
> XENFEAT_auto_translated_physmap is also set, then we are effectively
> running as a PVH domain, otherwise we are not. As a consequence only
> the toolstack needs to know about the pvh option in the config file
> to build the guest correctly.

Ok, getting rid of SIF flag. The guest will check for above conditions. 

Thanks for the feedback.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 20 19:15:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Aug 2012 19:15:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3XQm-0005WK-Qd; Mon, 20 Aug 2012 19:14:32 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1T3XQl-0005WF-SH
	for xen-devel@lists.xen.org; Mon, 20 Aug 2012 19:14:32 +0000
Received: from [85.158.139.83:27239] by server-4.bemta-5.messagelabs.com id
	6E/62-12386-79C82305; Mon, 20 Aug 2012 19:14:31 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-8.tower-182.messagelabs.com!1345490070!17867215!1
X-Originating-IP: [192.89.123.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjg5LjEyMy4yNSA9PiA0ODU3MTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26356 invoked from network); 20 Aug 2012 19:14:30 -0000
Received: from smtp.tele.fi (HELO smtp.tele.fi) (192.89.123.25)
	by server-8.tower-182.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 20 Aug 2012 19:14:30 -0000
X-Originating-Ip: [194.89.68.22]
Received: from ydin.reaktio.net (reaktio.net [194.89.68.22])
	by smtp.tele.fi (Postfix) with ESMTP id 0732926D8;
	Mon, 20 Aug 2012 22:14:29 +0300 (EEST)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id A6F532005D; Mon, 20 Aug 2012 22:14:29 +0300 (EEST)
Date: Mon, 20 Aug 2012 22:14:29 +0300
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: George Dunlap <George.Dunlap@eu.citrix.com>
Message-ID: <20120820191429.GY19851@reaktio.net>
References: <CAFLBxZaGmvSYgL4DcHcSwE_0shqKC0DRbf7ab=uhFrerPCxRig@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAFLBxZaGmvSYgL4DcHcSwE_0shqKC0DRbf7ab=uhFrerPCxRig@mail.gmail.com>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen 4.3 release planning proposal
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Aug 20, 2012 at 05:46:59PM +0100, George Dunlap wrote:
> 
> Features and improvements not on this list are of course welcome at
> any time before the feature freeze.
> 
> Any questions and feedback are welcome!
> 
> Your 4.3 release coordinator,
>  George Dunlap
> 

<snip>

> 
> * xl USB pass-through for PV guests
>   owner: ?
>   Port the xend PV pass-through functionality to xl.
> 

xm/xend PVUSB works for both PV and HVM guests, so xl should support PVUSB for both PV and HVM guests as well.
James Harper's GPLPV drivers actually do have a PVUSB frontend driver for Windows.

Also, Suse's xenlinux forward-ported patches have PVUSB support in unmodified_drivers for HVM guests.


Another USB item:

* xl support for USB device passthru using QEMU emulated USB for HVM guests (no need for PVUSB drivers in the HVM guest).
  This works today in xm/xend with qemu-traditional, but is limited to USB 1.1, probably because
  the old version of qemu-dm-traditional lacks USB 2.0/3.0 support.
  So: xl support for emulated USB device passthru for both qemu-upstream and qemu-traditional.


More wishlist items:

* Nested hardware virtualization. Important for easier testing and development of Xen (Xen-on-Xen),
  and for running other hypervisors in Xen VMs. Interesting for labs, POCs, etc.

* VGA/GPU passthru support for AMD/NVIDIA; there are lots of patches in the xen-devel archives,
  but no one has yet stepped up to clean them up and get them merged.
  Currently the Intel gfx passthru patches are merged into Xen, but primary passthru for ATI/NVIDIA requires extra patches.
  This is actually something that a LOT of users ask for often; it's discussed almost every day on ##xen on IRC.
  I wonder if the XenClient folks could help here?

* Dom0 keyboard/mouse sharing with HVM guests; mainly needed by VGA/GPU passthru users.
  The Fujitsu guys posted some patches for this in 2010, and the XenClient guys in 2009 (iirc),
  but nothing got developed further and merged into upstream Xen.

* QXL virtual GPU support for SPICE. Someone was already developing this, 
  and posted patches earlier during 4.2 development cycle to xen-devel. 
  Upstream Qemu includes QXL support.

* PVSCSI support in xl. James Harper was (semi-)interested in working on this,
  because he has a PVSCSI frontend driver in the Windows GPLPV drivers, and he uses PVSCSI for tape backups himself.

* libvirt libxl driver improvements; support for more Xen features.
  This would allow better use of the Ubuntu/Debian/Fedora/RHEL/CentOS "default" virtualization GUIs with Xen as well.


Hopefully we'll find interested developers for these items :)


-- Pasi


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 20 19:46:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Aug 2012 19:46:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3Xuq-0005kg-Re; Mon, 20 Aug 2012 19:45:36 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <m.a.young@durham.ac.uk>) id 1T3Xup-0005kb-Bi
	for xen-devel@lists.xen.org; Mon, 20 Aug 2012 19:45:35 +0000
Received: from [85.158.138.51:39885] by server-12.bemta-3.messagelabs.com id
	ED/B7-04073-ED392305; Mon, 20 Aug 2012 19:45:34 +0000
X-Env-Sender: m.a.young@durham.ac.uk
X-Msg-Ref: server-5.tower-174.messagelabs.com!1345491932!28044700!1
X-Originating-IP: [129.234.248.2]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTI5LjIzNC4yNDguMiA9PiA4OTk1NA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27872 invoked from network); 20 Aug 2012 19:45:33 -0000
Received: from hermes2.dur.ac.uk (HELO hermes2.dur.ac.uk) (129.234.248.2)
	by server-5.tower-174.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 20 Aug 2012 19:45:33 -0000
Received: from smtphost1.dur.ac.uk (smtphost1.dur.ac.uk [129.234.252.1])
	by hermes2.dur.ac.uk (8.13.8/8.13.8) with ESMTP id q7KJj5gc006518;
	Mon, 20 Aug 2012 20:45:09 +0100
Received: from procyon.dur.ac.uk (procyon.dur.ac.uk [129.234.250.129])
	by smtphost1.dur.ac.uk (8.13.8/8.13.7) with ESMTP id q7KJimaO007690
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 20 Aug 2012 20:44:48 +0100
Received: from procyon.dur.ac.uk (localhost [127.0.0.1])
	by procyon.dur.ac.uk (8.14.3/8.11.1) with ESMTP id q7KJimuh023259;
	Mon, 20 Aug 2012 20:44:48 +0100
Received: from localhost (dcl0may@localhost)
	by procyon.dur.ac.uk (8.14.3/8.14.3/Submit) with ESMTP id
	q7KJimQX023254; Mon, 20 Aug 2012 20:44:48 +0100
Date: Mon, 20 Aug 2012 20:44:48 +0100 (BST)
From: M A Young <m.a.young@durham.ac.uk>
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1345209224.10161.21.camel@zakaz.uk.xensource.com>
Message-ID: <alpine.DEB.2.00.1208202011290.8591@procyon.dur.ac.uk>
References: <alpine.DEB.2.00.1207241956230.14506@vega-c.dur.ac.uk>
	<20120724193604.GB29124@phenom.dumpdata.com>
	<1343205815.18971.43.camel@zakaz.uk.xensource.com>
	<alpine.DEB.2.00.1208121853070.7313@vega-a.dur.ac.uk>
	<1345209224.10161.21.camel@zakaz.uk.xensource.com>
User-Agent: Alpine 2.00 (DEB 1167 2008-08-23)
MIME-Version: 1.0
From xen-devel-bounces@lists.xen.org Mon Aug 20 19:46:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Aug 2012 19:46:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3Xuq-0005kg-Re; Mon, 20 Aug 2012 19:45:36 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <m.a.young@durham.ac.uk>) id 1T3Xup-0005kb-Bi
	for xen-devel@lists.xen.org; Mon, 20 Aug 2012 19:45:35 +0000
Received: from [85.158.138.51:39885] by server-12.bemta-3.messagelabs.com id
	ED/B7-04073-ED392305; Mon, 20 Aug 2012 19:45:34 +0000
X-Env-Sender: m.a.young@durham.ac.uk
X-Msg-Ref: server-5.tower-174.messagelabs.com!1345491932!28044700!1
X-Originating-IP: [129.234.248.2]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTI5LjIzNC4yNDguMiA9PiA4OTk1NA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27872 invoked from network); 20 Aug 2012 19:45:33 -0000
Received: from hermes2.dur.ac.uk (HELO hermes2.dur.ac.uk) (129.234.248.2)
	by server-5.tower-174.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 20 Aug 2012 19:45:33 -0000
Received: from smtphost1.dur.ac.uk (smtphost1.dur.ac.uk [129.234.252.1])
	by hermes2.dur.ac.uk (8.13.8/8.13.8) with ESMTP id q7KJj5gc006518;
	Mon, 20 Aug 2012 20:45:09 +0100
Received: from procyon.dur.ac.uk (procyon.dur.ac.uk [129.234.250.129])
	by smtphost1.dur.ac.uk (8.13.8/8.13.7) with ESMTP id q7KJimaO007690
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 20 Aug 2012 20:44:48 +0100
Received: from procyon.dur.ac.uk (localhost [127.0.0.1])
	by procyon.dur.ac.uk (8.14.3/8.11.1) with ESMTP id q7KJimuh023259;
	Mon, 20 Aug 2012 20:44:48 +0100
Received: from localhost (dcl0may@localhost)
	by procyon.dur.ac.uk (8.14.3/8.14.3/Submit) with ESMTP id
	q7KJimQX023254; Mon, 20 Aug 2012 20:44:48 +0100
Date: Mon, 20 Aug 2012 20:44:48 +0100 (BST)
From: M A Young <m.a.young@durham.ac.uk>
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1345209224.10161.21.camel@zakaz.uk.xensource.com>
Message-ID: <alpine.DEB.2.00.1208202011290.8591@procyon.dur.ac.uk>
References: <alpine.DEB.2.00.1207241956230.14506@vega-c.dur.ac.uk>
	<20120724193604.GB29124@phenom.dumpdata.com>
	<1343205815.18971.43.camel@zakaz.uk.xensource.com>
	<alpine.DEB.2.00.1208121853070.7313@vega-a.dur.ac.uk>
	<1345209224.10161.21.camel@zakaz.uk.xensource.com>
User-Agent: Alpine 2.00 (DEB 1167 2008-08-23)
MIME-Version: 1.0
X-DurhamAcUk-MailScanner: Found to be clean, Found to be clean
X-DurhamAcUk-MailScanner-ID: q7KJj5gc006518
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [PATCH] Re:  remove dependency on PyXML from xen?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 17 Aug 2012, Ian Campbell wrote:

> ...
> and pushed. Is the following section of README still accurate? At least
> the mention of "PyXML" seems wrong to me.
>
>        Python Runtime Libraries
>        ========================
>
>        Xend (the Xen daemon) has the following runtime dependencies:
>
>            * Python 2.3 or later.
>              In some distros, the XML-aspects to the standard library
>              (xml.dom.minidom etc) are broken out into a separate python-xml package.
>              This is also required.
>              In more recent versions of Debian and Ubuntu the XML-aspects are included
>              in the base python package however (python-xml has been removed
>              from Debian in squeeze and from Ubuntu in intrepid).
>
>                  URL:    http://www.python.org/
>                  Debian: python
>
>            * For optional SSL support, pyOpenSSL:
>                  URL:    http://pyopenssl.sourceforge.net/
>                  Debian: python-pyopenssl
>
>            * For optional PAM support, PyPAM:
>                  URL:    http://www.pangalactic.org/PyPAM/
>                  Debian: python-pam
>
>            * For optional XenAPI support in XM, PyXML:
>                  URL:    http://codespeak.net/lxml/
>                  Debian: python-lxml
>                  YUM:    python-lxml

Yes, it should be lxml, not PyXML. The link could be updated as well, since 
http://codespeak.net/lxml/ redirects to http://lxml.de/ .
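
The corrected dependency is easy to probe at runtime. A minimal sketch (the
helper name and fallback are illustrative, not taken from xend) that prefers
lxml when installed and falls back to the stdlib minidom that the README
excerpt mentions:

```python
# Sketch: detect lxml (the actual dependency, not PyXML) and fall back
# to xml.dom.minidom, which ships in the base python package on recent
# Debian/Ubuntu as the README excerpt notes.
try:
    from lxml import etree
    HAVE_LXML = True
except ImportError:
    HAVE_LXML = False

def root_tag(text):
    """Return the root element's tag, via lxml when present, else minidom."""
    if HAVE_LXML:
        return etree.fromstring(text).tag
    from xml.dom import minidom
    return minidom.parseString(text).documentElement.tagName

print(root_tag("<vm name='mgaca'/>"))  # prints "vm" either way
```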

I also found mention of PyXML in tools/python/logging/logging-0.4.9.2/ in 
the files README.txt, python_logging.html, and test/logrecv.py (in an 
error message) as a dependency for ZSI though I don't think xen ever uses 
the code.

 	Michael Young

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 20 19:47:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Aug 2012 19:47:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3Xw4-0005nt-AH; Mon, 20 Aug 2012 19:46:52 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <blauwirbel@gmail.com>) id 1T3Xw2-0005nj-Ga
	for xen-devel@lists.xensource.com; Mon, 20 Aug 2012 19:46:50 +0000
Received: from [85.158.138.51:39317] by server-8.bemta-3.messagelabs.com id
	0B/77-29583-92492305; Mon, 20 Aug 2012 19:46:49 +0000
X-Env-Sender: blauwirbel@gmail.com
X-Msg-Ref: server-5.tower-174.messagelabs.com!1345492007!28044826!1
X-Originating-IP: [209.85.160.171]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30650 invoked from network); 20 Aug 2012 19:46:48 -0000
Received: from mail-gh0-f171.google.com (HELO mail-gh0-f171.google.com)
	(209.85.160.171)
	by server-5.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Aug 2012 19:46:48 -0000
Received: by ghy16 with SMTP id 16so6758649ghy.30
	for <xen-devel@lists.xensource.com>;
	Mon, 20 Aug 2012 12:46:47 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type;
	bh=U0IsZGGnwcUrZT23I/z2jeCB++qp7DYlpbxVAhFLZGs=;
	b=dGedAXHAjxQXmkiGA10IwSaGlh07rQaX1KahVa0pTvH7napEjLl/bmHAfLUbZZfVjs
	3bTvi0oYVP9fyfdDSVz9cswNmzvSAtqBsX8OZ9F7tlYz5mwjmXzpCgSci29FvUaREES4
	voIRRpZLIqPyOY9tZJCFau0/Pd2QB9oFkhqeKdgvNJL16GpbmRyMwD//mvTHuWVEGdRU
	4NaIbtQyzSsvdU7J3eZo7MbKqxzLC6yD1X3/Or7KL/jOrDOXnseATR0QLm0lCPm1iPdd
	Uo9GPFyaZ2hkP42Tg/uudwH9bKOwD0Uxh9LAxtsFnhEQBwBayRpQDGiJ0gCAOJ44ZCfp
	u4WA==
Received: by 10.50.17.133 with SMTP id o5mr11053148igd.41.1345492006497; Mon,
	20 Aug 2012 12:46:46 -0700 (PDT)
MIME-Version: 1.0
Received: by 10.64.78.161 with HTTP; Mon, 20 Aug 2012 12:46:26 -0700 (PDT)
In-Reply-To: <20120820131326.22ea454f@thinkpad.mammed.net>
References: <1345419579-25499-1-git-send-email-imammedo@redhat.com>
	<1345419579-25499-2-git-send-email-imammedo@redhat.com>
	<5031BFE0.1070909@weilnetz.de>
	<20120820131326.22ea454f@thinkpad.mammed.net>
From: Blue Swirl <blauwirbel@gmail.com>
Date: Mon, 20 Aug 2012 19:46:26 +0000
Message-ID: <CAAu8pHt9n_61t0=R2MGAmJ8yo4=CfcriZFqHfxjZkFuj5zYQgw@mail.gmail.com>
To: Igor Mammedov <imammedo@redhat.com>
Cc: peter.maydell@linaro.org, jan.kiszka@siemens.com, mjt@tls.msk.ru,
	qemu-devel@nongnu.org, lcapitulino@redhat.com, kraxel@redhat.com,
	xen-devel@lists.xensource.com, i.mitsyanko@samsung.com,
	mdroth@linux.vnet.ibm.com, avi@redhat.com,
	anthony.perard@citrix.com, lersek@redhat.com,
	stefanha@linux.vnet.ibm.com, stefano.stabellini@eu.citrix.com,
	Stefan Weil <sw@weilnetz.de>, rth@twiddle.net, kwolf@redhat.com,
	aliguori@us.ibm.com, armbru@redhat.com, mtosatti@redhat.com,
	pbonzini@redhat.com, afaerber@suse.de
Subject: Re: [Xen-devel] [PATCH 1/5] move qemu_irq typedef out of
	cpu-common.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Aug 20, 2012 at 11:13 AM, Igor Mammedov <imammedo@redhat.com> wrote:
> On Mon, 20 Aug 2012 06:41:04 +0200
> Stefan Weil <sw@weilnetz.de> wrote:
>
>> Am 20.08.2012 01:39, schrieb Igor Mammedov:
>> > it's necessary for making CPU child of DEVICE without
>> > causing circular header deps.
>> >
>> > Signed-off-by: Igor Mammedov <imammedo@redhat.com>
>> > ---
>> >   hw/arm-misc.h |    1 +
>> >   hw/bt.h       |    2 ++
>> >   hw/devices.h  |    2 ++
>> >   hw/irq.h      |    2 ++
>> >   hw/omap.h     |    1 +
>> >   hw/soc_dma.h  |    1 +
>> >   hw/xen.h      |    1 +
>> >   qemu-common.h |    1 -
>> >   sysemu.h      |    1 +
>> >   9 files changed, 11 insertions(+), 1 deletions(-)
>> >
>> > diff --git a/hw/arm-misc.h b/hw/arm-misc.h
>> > index bdd8fec..b13aa59 100644
>> > --- a/hw/arm-misc.h
>> > +++ b/hw/arm-misc.h
>> > @@ -12,6 +12,7 @@
>> >   #define ARM_MISC_H 1
>> >
>> >   #include "memory.h"
>> > +#include "hw/irq.h"
>> >
>> >   /* The CPU is also modeled as an interrupt controller.  */
>> >   #define ARM_PIC_CPU_IRQ 0
>> > diff --git a/hw/bt.h b/hw/bt.h
>> > index a48b8d4..ebf6a37 100644
>> > --- a/hw/bt.h
>> > +++ b/hw/bt.h
>> > @@ -23,6 +23,8 @@
>> >    * along with this program; if not, see <http://www.gnu.org/licenses/>.
>> >    */
>> >
>> > +#include "hw/irq.h"
>> > +
>> >   /* BD Address */
>> >   typedef struct {
>> >       uint8_t b[6];
>> > diff --git a/hw/devices.h b/hw/devices.h
>> > index 1a55c1e..c60bcab 100644
>> > --- a/hw/devices.h
>> > +++ b/hw/devices.h
>> > @@ -1,6 +1,8 @@
>> >   #ifndef QEMU_DEVICES_H
>> >   #define QEMU_DEVICES_H
>> >
>> > +#include "hw/irq.h"
>> > +
>> >   /* ??? Not all users of this file can include cpu-common.h.  */
>> >   struct MemoryRegion;
>> >
>> > diff --git a/hw/irq.h b/hw/irq.h
>> > index 56c55f0..1339a3a 100644
>> > --- a/hw/irq.h
>> > +++ b/hw/irq.h
>> > @@ -3,6 +3,8 @@
>> >
>> >   /* Generic IRQ/GPIO pin infrastructure.  */
>> >
>> > +typedef struct IRQState *qemu_irq;
>> > +
>> >   typedef void (*qemu_irq_handler)(void *opaque, int n, int level);
>> >
>> >   void qemu_set_irq(qemu_irq irq, int level);
>> > diff --git a/hw/omap.h b/hw/omap.h
>> > index 413851b..8b08462 100644
>> > --- a/hw/omap.h
>> > +++ b/hw/omap.h
>> > @@ -19,6 +19,7 @@
>> >   #ifndef hw_omap_h
>> >   #include "memory.h"
>> >   # define hw_omap_h                "omap.h"
>> > +#include "hw/irq.h"
>> >
>> >   # define OMAP_EMIFS_BASE  0x00000000
>> >   # define OMAP2_Q0_BASE            0x00000000
>> > diff --git a/hw/soc_dma.h b/hw/soc_dma.h
>> > index 904b26c..e386ace 100644
>> > --- a/hw/soc_dma.h
>> > +++ b/hw/soc_dma.h
>> > @@ -19,6 +19,7 @@
>> >    */
>> >
>> >   #include "memory.h"
>> > +#include "hw/irq.h"
>> >
>> >   struct soc_dma_s;
>> >   struct soc_dma_ch_s;
>> > diff --git a/hw/xen.h b/hw/xen.h
>> > index e5926b7..ff11dfd 100644
>> > --- a/hw/xen.h
>> > +++ b/hw/xen.h
>> > @@ -8,6 +8,7 @@
>> >    */
>> >   #include <inttypes.h>
>> >
>> > +#include "hw/irq.h"
>> >   #include "qemu-common.h"
>> >
>> >   /* xen-machine.c */
>> > diff --git a/qemu-common.h b/qemu-common.h
>> > index e5c2bcd..6677a30 100644
>> > --- a/qemu-common.h
>> > +++ b/qemu-common.h
>> > @@ -273,7 +273,6 @@ typedef struct PCIEPort PCIEPort;
>> >   typedef struct PCIESlot PCIESlot;
>> >   typedef struct MSIMessage MSIMessage;
>> >   typedef struct SerialState SerialState;
>> > -typedef struct IRQState *qemu_irq;
>> >   typedef struct PCMCIACardState PCMCIACardState;
>> >   typedef struct MouseTransformInfo MouseTransformInfo;
>> >   typedef struct uWireSlave uWireSlave;
>>
>> Just move the declaration of qemu_irq to the beginning of qemu-common.h
>> and leave the rest of files untouched. That also fixes the circular
>> dependency.
>>
>> I already have a patch that does this, so you can integrate it in your
>> series
>> instead of this one.
> No doubt it's a simpler way, but IMHO it's more of a hack than a fix for
> the problem.
> It works for now, but it doesn't alleviate the header nightmare in qemu,
> where everything is included in qemu-common.h and everything includes it as
> well.
>
> Anyway, if the majority prefers the simple move, I'll drop this patch in favor of yours.

I like Igor's approach more.

>
>>
>>
>> > diff --git a/sysemu.h b/sysemu.h
>> > index 65552ac..f765821 100644
>> > --- a/sysemu.h
>> > +++ b/sysemu.h
>> > @@ -9,6 +9,7 @@
>> >   #include "qapi-types.h"
>> >   #include "notify.h"
>> >   #include "main-loop.h"
>> > +#include "hw/irq.h"
>> >
>> >   /* vl.c */
>> >
>>
>
>
> --
> Regards,
>   Igor

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 20 19:49:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Aug 2012 19:49:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3Xya-0005x4-Se; Mon, 20 Aug 2012 19:49:28 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <tparker@cbnco.com>) id 1T3XyY-0005we-Jw
	for xen-devel@lists.xen.org; Mon, 20 Aug 2012 19:49:27 +0000
X-Env-Sender: tparker@cbnco.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1345492126!10223014!1
X-Originating-IP: [207.164.182.72]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22696 invoked from network); 20 Aug 2012 19:48:47 -0000
Received: from smtp.cbnco.com (HELO smtp.cbnco.com) (207.164.182.72)
	by server-2.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 20 Aug 2012 19:48:47 -0000
Received: from localhost (localhost [127.0.0.1])
	by smtp.cbnco.com (Postfix) with ESMTP id 9D69510958DF;
	Mon, 20 Aug 2012 15:48:45 -0400 (EDT)
Received: from smtp.cbnco.com ([127.0.0.1])
	by localhost (mail.cbnco.com [127.0.0.1]) (amavisd-new, port 10024)
	with ESMTP id 05350-08; Mon, 20 Aug 2012 15:48:45 -0400 (EDT)
Received: from [172.20.23.60] (dmzgw2.cbnco.com [207.164.182.65])
	by smtp.cbnco.com (Postfix) with ESMTPSA id 4EFDE109553C;
	Mon, 20 Aug 2012 15:48:45 -0400 (EDT)
Message-ID: <5032949D.5010805@cbnco.com>
Date: Mon, 20 Aug 2012 15:48:45 -0400
From: Tom Parker <tparker@cbnco.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120713 Thunderbird/14.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <502BD75B.9040301@cbnco.com>
	<1345109102.27489.38.camel@zakaz.uk.xensource.com>
In-Reply-To: <1345109102.27489.38.camel@zakaz.uk.xensource.com>
Content-Type: multipart/mixed; boundary="------------090001030600030704090908"
X-Virus-Scanned: amavisd-new at cbnco.com
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] PV USB Use Case for Xen 4.x
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.
--------------090001030600030704090908
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: 7bit

Hi Ian

Sorry to be slow to respond.  I missed this e-mail when you sent it.  I
will try to get the information you are looking for.
On 08/16/2012 05:25 AM, Ian Campbell wrote:
> On Wed, 2012-08-15 at 18:07 +0100, Tom Parker wrote:
>> Good Afternoon.  My colleague Stefan (sstan) was asked on the IRC
>> channel to provide our use case for PV USB in our environment.  This
>> is possible with the current xm stack but not available with the xl
>> stack.
> Thanks for doing this.
>
> At first glance this doesn't seem like something which we could do for
> 4.2.0 at this stage, although we should do it for 4.3 and potentially
> consider it for 4.2.1.
>
> Is it something which you guys might be interested in providing patches
> for? It is at heart a moderately simple C coding exercise, I'm more than
> happy to provide guidance etc. Much of the generic framework already
> exists and there are examples in the form of other device types.
At this time we can't provide patches.  We are a sysadmin group and have
no programmers with experience in this.  It would take me a long time to
get back into C programming :)  Sorry.
>
>> Currently we use PVUSB to attach a USB Smartcard reader through our
>> dom0 (SLES 11 SP1) running on an HP Blade Server with the Token
>> mounted on an internal USB Port to our domU CA server (SLES 11)
>>
>> The config file syntax is broken so we have to manually attach (I have
>> it scripted) whenever our hosts reboot (which is almost never.)
> Can you give an example of what the syntax *should* be?
There used to be some data in the wiki or in an initial presentation on
PVUSB, but as it has never worked for me, I don't remember how it worked.
>
>> On the dom0 server I have to do the following steps:
>>
>> /usr/sbin/xm usb-list-assignable-devices (get the bus-id of the USB
>> device)
>> /usr/sbin/xm usb-hc-create $Domain 2 2 (Create a USB 2.0 Root Hub with
>> 2 ports in $Domain)
>> /usr/sbin/xm usb-attach $Domain $DevId $PortNumber $BusId (Attach the
>> USB bus-id found in step 1 to the hub created in step 2)
> What (if anything) is the output of these commands?
Nothing.  They return silently if there is no error.
>
> Do you need to do anything to make a device "assignable"? (I get no
> output from the list command for example)
You just have to have a device that is not attached anywhere else.   For
example:

mgaxen1:~ # xm usb-list-assignable-devices
6-1          : ID 03f0:1027 HP Virtual Keyboard

That means that for the xm usb-attach step (/usr/sbin/xm usb-attach $Domain
$DevId $PortNumber $BusId) I would use 6-1 as the $BusId.
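
The manual steps above lend themselves to a small wrapper, as mentioned ("I
have it scripted"). A hypothetical sketch built only around the xm commands
quoted in this thread; the DevId 0 and port 1 in usb-attach are guesses to be
adjusted for your usb-hc-create output:

```shell
#!/bin/sh
# Hypothetical wrapper around the xm PVUSB commands quoted in this thread.

find_busid() {
    # Extract the bus-id column ("6-1" in the example above) for the
    # assignable device whose "ID vendor:product" matches $1.
    grep "ID $1" | awk '{print $1}'
}

attach_usb() {  # usage: attach_usb <domain> <vendor:product>
    busid=$(/usr/sbin/xm usb-list-assignable-devices | find_busid "$2")
    [ -n "$busid" ] || { echo "no assignable device matching $2" >&2; return 1; }
    /usr/sbin/xm usb-hc-create "$1" 2 2       # USB 2.0 root hub, 2 ports
    /usr/sbin/xm usb-attach "$1" 0 1 "$busid" # DevId 0, port 1: adjust as needed
}

# Example (only meaningful on a dom0 with the xm PVUSB support):
#   attach_usb mgaca 03f0:1027
```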

>
>> On the domU the lsusb looks like this after the above (before it
>> returns nothing)
>>
>> mgaca:~ # lsusb 
>> Bus 001 Device 002: ID 04e6:5116 SCM Microsystems, Inc. SCR331-LC1
>> SmartCard Reader
>> Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
> Can you post the output of "xenstore-ls -fp" while the device is
> connected?
This is the part of the output that refers to the vusb:
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vusb = ""   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vusb/0 = ""   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vusb/0/frontend =
"/local/domain/3/device/vusb/0"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vusb/0/frontend-id =
"3"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vusb/0/backend-id =
"0"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vusb/0/backend =
"/local/domain/0/backend/vusb/3/0"   (n0)

The rest of the output I have attached as a file.
>
> Do you happen to know if this uses the PVUSB drivers or some other
> mechanism? "lsmod" in both dom0 and domU should provide a clue if the
> drivers are loaded.
Looks like it: 

dom0
mgaxen1:~ # lsmod | grep usb
usbbk                  23503  0
xenbus_be               3952  4 usbbk,netbk,blkbk,blktap
usbhid                 50900  0
hid                    83977  1 usbhid
usbcore               221920  5 usbbk,usbhid,uhci_hcd,ehci_hcd

domU
mgaca:~ # lsmod | grep usb
usbcore               220777  3 xen_hcd
>
> Does this work for both PV and HVM guests or do you only use one or the
> other?
I only use PV guests.
>
>> Once I have done this I can use the USB device in the domU as if it were
>> directly connected.
>>
>> Thanks for your time.
> Thank you for describing the functionality.
>
> Ian.
>
>


--------------090001030600030704090908
Content-Type: text/plain; charset=UTF-8;
 name="xenstore-ls.fp.out"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename="xenstore-ls.fp.out"

/tool = ""   (n0)
/tool/xenstored = ""   (n0)
/local = ""   (n0)
/local/domain = ""   (n0)
/local/domain/0 = ""   (r0)
/local/domain/0/vm = "/vm/00000000-0000-0000-0000-000000000000"   (r0)
/local/domain/0/device = ""   (n0)
/local/domain/0/control = ""   (n0)
/local/domain/0/control/platform-feature-multiprocessor-suspend = "1"   (n0)
/local/domain/0/error = ""   (n0)
/local/domain/0/memory = ""   (n0)
/local/domain/0/memory/target = "510464"   (n0)
/local/domain/0/guest = ""   (n0)
/local/domain/0/hvmpv = ""   (n0)
/local/domain/0/data = ""   (n0)
/local/domain/0/cpu = ""   (r0)
/local/domain/0/cpu/1 = ""   (r0)
/local/domain/0/cpu/1/availability = "offline"   (r0)
/local/domain/0/cpu/3 = ""   (r0)
/local/domain/0/cpu/3/availability = "offline"   (r0)
/local/domain/0/cpu/2 = ""   (r0)
/local/domain/0/cpu/2/availability = "offline"   (r0)
/local/domain/0/cpu/0 = ""   (r0)
/local/domain/0/cpu/0/availability = "online"   (r0)
/local/domain/0/description = ""   (r0)
/local/domain/0/console = ""   (r0)
/local/domain/0/console/limit = "1048576"   (r0)
/local/domain/0/console/type = "xenconsoled"   (r0)
/local/domain/0/domid = "0"   (r0)
/local/domain/0/name = "Domain-0"   (r0)
/local/domain/0/backend = ""   (r0)
/local/domain/0/backend/vkbd = ""   (r0)
/local/domain/0/backend/vkbd/3 = ""   (r0)
/local/domain/0/backend/vkbd/3/0 = ""   (n0,r3)
/local/domain/0/backend/vkbd/3/0/frontend-id = "3"   (n0,r3)
/local/domain/0/backend/vkbd/3/0/domain = "mgaca"   (n0,r3)
/local/domain/0/backend/vkbd/3/0/frontend = "/local/domain/3/device/vkbd/0"   (n0,r3)
/local/domain/0/backend/vkbd/3/0/state = "4"   (n0,r3)
/local/domain/0/backend/vkbd/3/0/online = "1"   (n0,r3)
/local/domain/0/backend/vkbd/3/0/feature-abs-pointer = "1"   (n0,r3)
/local/domain/0/backend/vkbd/3/0/hotplug-status = "connected"   (n0,r3)
/local/domain/0/backend/vkbd/4 = ""   (r0)
/local/domain/0/backend/vkbd/4/0 = ""   (n0,r4)
/local/domain/0/backend/vkbd/4/0/frontend-id = "4"   (n0,r4)
/local/domain/0/backend/vkbd/4/0/domain = "mgatrain"   (n0,r4)
/local/domain/0/backend/vkbd/4/0/frontend = "/local/domain/4/device/vkbd/0"   (n0,r4)
/local/domain/0/backend/vkbd/4/0/state = "4"   (n0,r4)
/local/domain/0/backend/vkbd/4/0/online = "1"   (n0,r4)
/local/domain/0/backend/vkbd/4/0/feature-abs-pointer = "1"   (n0,r4)
/local/domain/0/backend/vkbd/4/0/hotplug-status = "connected"   (n0,r4)
/local/domain/0/backend/vkbd/5 = ""   (r0)
/local/domain/0/backend/vkbd/5/0 = ""   (n0,r5)
/local/domain/0/backend/vkbd/5/0/frontend-id = "5"   (n0,r5)
/local/domain/0/backend/vkbd/5/0/domain = "mgaextws"   (n0,r5)
/local/domain/0/backend/vkbd/5/0/frontend = "/local/domain/5/device/vkbd/0"   (n0,r5)
/local/domain/0/backend/vkbd/5/0/state = "4"   (n0,r5)
/local/domain/0/backend/vkbd/5/0/online = "1"   (n0,r5)
/local/domain/0/backend/vkbd/5/0/feature-abs-pointer = "1"   (n0,r5)
/local/domain/0/backend/vkbd/5/0/hotplug-status = "connected"   (n0,r5)
/local/domain/0/backend/vkbd/6 = ""   (r0)
/local/domain/0/backend/vkbd/6/0 = ""   (n0,r6)
/local/domain/0/backend/vkbd/6/0/frontend-id = "6"   (n0,r6)
/local/domain/0/backend/vkbd/6/0/domain = "mgaweb1"   (n0,r6)
/local/domain/0/backend/vkbd/6/0/frontend = "/local/domain/6/device/vkbd/0"   (n0,r6)
/local/domain/0/backend/vkbd/6/0/state = "4"   (n0,r6)
/local/domain/0/backend/vkbd/6/0/online = "1"   (n0,r6)
/local/domain/0/backend/vkbd/6/0/feature-abs-pointer = "1"   (n0,r6)
/local/domain/0/backend/vkbd/6/0/hotplug-status = "connected"   (n0,r6)
/local/domain/0/backend/vfb = ""   (r0)
/local/domain/0/backend/vfb/3 = ""   (r0)
/local/domain/0/backend/vfb/3/0 = ""   (n0,r3)
/local/domain/0/backend/vfb/3/0/vncunused = "1"   (n0,r3)
/local/domain/0/backend/vfb/3/0/domain = "mgaca"   (n0,r3)
/local/domain/0/backend/vfb/3/0/vnc = "1"   (n0,r3)
/local/domain/0/backend/vfb/3/0/xauthority = "//.Xauthority"   (n0,r3)
/local/domain/0/backend/vfb/3/0/frontend = "/local/domain/3/device/vfb/0"   (n0,r3)
/local/domain/0/backend/vfb/3/0/state = "4"   (n0,r3)
/local/domain/0/backend/vfb/3/0/keymap = "en-us"   (n0,r3)
/local/domain/0/backend/vfb/3/0/online = "1"   (n0,r3)
/local/domain/0/backend/vfb/3/0/frontend-id = "3"   (n0,r3)
/local/domain/0/backend/vfb/3/0/uuid = "7e434190-c8a6-a205-2f4b-12b2c98dc056"   (n0,r3)
/local/domain/0/backend/vfb/3/0/feature-resize = "1"   (n0,r3)
/local/domain/0/backend/vfb/3/0/hotplug-status = "connected"   (n0,r3)
/local/domain/0/backend/vfb/3/0/request-update = "1"   (n0,r3)
/local/domain/0/backend/vfb/3/0/location = "127.0.0.1:5901"   (n0,r3)
/local/domain/0/backend/vfb/4 = ""   (r0)
/local/domain/0/backend/vfb/4/0 = ""   (n0,r4)
/local/domain/0/backend/vfb/4/0/vncunused = "1"   (n0,r4)
/local/domain/0/backend/vfb/4/0/domain = "mgatrain"   (n0,r4)
/local/domain/0/backend/vfb/4/0/vnc = "1"   (n0,r4)
/local/domain/0/backend/vfb/4/0/xauthority = "//.Xauthority"   (n0,r4)
/local/domain/0/backend/vfb/4/0/frontend = "/local/domain/4/device/vfb/0"   (n0,r4)
/local/domain/0/backend/vfb/4/0/state = "4"   (n0,r4)
/local/domain/0/backend/vfb/4/0/keymap = "en-us"   (n0,r4)
/local/domain/0/backend/vfb/4/0/online = "1"   (n0,r4)
/local/domain/0/backend/vfb/4/0/frontend-id = "4"   (n0,r4)
/local/domain/0/backend/vfb/4/0/uuid = "2aaf7c89-aac3-73df-900f-e679de9174c2"   (n0,r4)
/local/domain/0/backend/vfb/4/0/feature-resize = "1"   (n0,r4)
/local/domain/0/backend/vfb/4/0/hotplug-status = "connected"   (n0,r4)
/local/domain/0/backend/vfb/4/0/location = "127.0.0.1:5900"   (n0,r4)
/local/domain/0/backend/vfb/4/0/request-update = "1"   (n0,r4)
/local/domain/0/backend/vfb/5 = ""   (r0)
/local/domain/0/backend/vfb/5/0 = ""   (n0,r5)
/local/domain/0/backend/vfb/5/0/vncunused = "1"   (n0,r5)
/local/domain/0/backend/vfb/5/0/domain = "mgaextws"   (n0,r5)
/local/domain/0/backend/vfb/5/0/vnc = "1"   (n0,r5)
/local/domain/0/backend/vfb/5/0/xauthority = "//.Xauthority"   (n0,r5)
/local/domain/0/backend/vfb/5/0/frontend = "/local/domain/5/device/vfb/0"   (n0,r5)
/local/domain/0/backend/vfb/5/0/state = "4"   (n0,r5)
/local/domain/0/backend/vfb/5/0/keymap = "en-us"   (n0,r5)
/local/domain/0/backend/vfb/5/0/online = "1"   (n0,r5)
/local/domain/0/backend/vfb/5/0/frontend-id = "5"   (n0,r5)
/local/domain/0/backend/vfb/5/0/uuid = "7c1ffff8-6c55-0b90-5670-5a1bca0e1e2e"   (n0,r5)
/local/domain/0/backend/vfb/5/0/feature-resize = "1"   (n0,r5)
/local/domain/0/backend/vfb/5/0/hotplug-status = "connected"   (n0,r5)
/local/domain/0/backend/vfb/5/0/location = "127.0.0.1:5902"   (n0,r5)
/local/domain/0/backend/vfb/5/0/request-update = "1"   (n0,r5)
/local/domain/0/backend/vfb/6 = ""   (r0)
/local/domain/0/backend/vfb/6/0 = ""   (n0,r6)
/local/domain/0/backend/vfb/6/0/vncunused = "1"   (n0,r6)
/local/domain/0/backend/vfb/6/0/domain = "mgaweb1"   (n0,r6)
/local/domain/0/backend/vfb/6/0/vnc = "1"   (n0,r6)
/local/domain/0/backend/vfb/6/0/xauthority = "//.Xauthority"   (n0,r6)
/local/domain/0/backend/vfb/6/0/frontend = "/local/domain/6/device/vfb/0"   (n0,r6)
/local/domain/0/backend/vfb/6/0/state = "4"   (n0,r6)
/local/domain/0/backend/vfb/6/0/keymap = "en-us"   (n0,r6)
/local/domain/0/backend/vfb/6/0/online = "1"   (n0,r6)
/local/domain/0/backend/vfb/6/0/frontend-id = "6"   (n0,r6)
/local/domain/0/backend/vfb/6/0/uuid = "f486c7a3-a900-a671-b24c-365ce03b5b03"   (n0,r6)
/local/domain/0/backend/vfb/6/0/feature-resize = "1"   (n0,r6)
/local/domain/0/backend/vfb/6/0/hotplug-status = "connected"   (n0,r6)
/local/domain/0/backend/vfb/6/0/location = "127.0.0.1:5903"   (n0,r6)
/local/domain/0/backend/vfb/6/0/request-update = "1"   (n0,r6)
/local/domain/0/backend/vbd = ""   (r0)
/local/domain/0/backend/vbd/3 = ""   (r0)
/local/domain/0/backend/vbd/3/51713 = ""   (n0,r3)
/local/domain/0/backend/vbd/3/51713/domain = "mgaca"   (n0,r3)
/local/domain/0/backend/vbd/3/51713/frontend = "/local/domain/3/device/vbd/51713"   (n0,r3)
/local/domain/0/backend/vbd/3/51713/uuid = "4da405c9-6afe-e5d4-3351-02945da036c6"   (n0,r3)
/local/domain/0/backend/vbd/3/51713/bootable = "1"   (n0,r3)
/local/domain/0/backend/vbd/3/51713/dev = "xvda1"   (n0,r3)
/local/domain/0/backend/vbd/3/51713/state = "4"   (n0,r3)
/local/domain/0/backend/vbd/3/51713/params = "/dev/vg-mgadomu2/mgaca_os"   (n0,r3)
/local/domain/0/backend/vbd/3/51713/mode = "w"   (n0,r3)
/local/domain/0/backend/vbd/3/51713/online = "1"   (n0,r3)
/local/domain/0/backend/vbd/3/51713/frontend-id = "3"   (n0,r3)
/local/domain/0/backend/vbd/3/51713/type = "phy"   (n0,r3)
/local/domain/0/backend/vbd/3/51713/physical-device = "fd:4"   (n0,r3)
/local/domain/0/backend/vbd/3/51713/hotplug-status = "connected"   (n0,r3)
/local/domain/0/backend/vbd/3/51713/feature-barrier = "1"   (n0,r3)
/local/domain/0/backend/vbd/3/51713/sectors = "62914560"   (n0,r3)
/local/domain/0/backend/vbd/3/51713/info = "0"   (n0,r3)
/local/domain/0/backend/vbd/3/51713/sector-size = "512"   (n0,r3)
/local/domain/0/backend/vbd/4 = ""   (r0)
/local/domain/0/backend/vbd/4/51713 = ""   (n0,r4)
/local/domain/0/backend/vbd/4/51713/domain = "mgatrain"   (n0,r4)
/local/domain/0/backend/vbd/4/51713/frontend = "/local/domain/4/device/vbd/51713"   (n0,r4)
/local/domain/0/backend/vbd/4/51713/uuid = "c78be812-74d5-1ace-fd61-caba4cc21937"   (n0,r4)
/local/domain/0/backend/vbd/4/51713/bootable = "1"   (n0,r4)
/local/domain/0/backend/vbd/4/51713/dev = "xvda1"   (n0,r4)
/local/domain/0/backend/vbd/4/51713/state = "4"   (n0,r4)
/local/domain/0/backend/vbd/4/51713/params = "/dev/vg-mgadomu2/mgatrain_os"   (n0,r4)
/local/domain/0/backend/vbd/4/51713/mode = "w"   (n0,r4)
/local/domain/0/backend/vbd/4/51713/online = "1"   (n0,r4)
/local/domain/0/backend/vbd/4/51713/frontend-id = "4"   (n0,r4)
/local/domain/0/backend/vbd/4/51713/type = "phy"   (n0,r4)
/local/domain/0/backend/vbd/4/51713/physical-device = "fd:5"   (n0,r4)
/local/domain/0/backend/vbd/4/51713/hotplug-status = "connected"   (n0,r4)
/local/domain/0/backend/vbd/4/51713/feature-barrier = "1"   (n0,r4)
/local/domain/0/backend/vbd/4/51713/sectors = "62914560"   (n0,r4)
/local/domain/0/backend/vbd/4/51713/info = "0"   (n0,r4)
/local/domain/0/backend/vbd/4/51713/sector-size = "512"   (n0,r4)
/local/domain/0/backend/vbd/4/51714 = ""   (n0,r4)
/local/domain/0/backend/vbd/4/51714/domain = "mgatrain"   (n0,r4)
/local/domain/0/backend/vbd/4/51714/frontend = "/local/domain/4/device/vbd/51714"   (n0,r4)
/local/domain/0/backend/vbd/4/51714/uuid = "dc438cec-0611-c938-009f-3c97428aa18d"   (n0,r4)
/local/domain/0/backend/vbd/4/51714/bootable = "0"   (n0,r4)
/local/domain/0/backend/vbd/4/51714/dev = "xvda2"   (n0,r4)
/local/domain/0/backend/vbd/4/51714/state = "4"   (n0,r4)
/local/domain/0/backend/vbd/4/51714/params = "/dev/vg-mgadomu2/mgatrain_swap"   (n0,r4)
/local/domain/0/backend/vbd/4/51714/mode = "w"   (n0,r4)
/local/domain/0/backend/vbd/4/51714/online = "1"   (n0,r4)
/local/domain/0/backend/vbd/4/51714/frontend-id = "4"   (n0,r4)
/local/domain/0/backend/vbd/4/51714/type = "phy"   (n0,r4)
/local/domain/0/backend/vbd/4/51714/physical-device = "fd:6"   (n0,r4)
/local/domain/0/backend/vbd/4/51714/hotplug-status = "connected"   (n0,r4)
/local/domain/0/backend/vbd/4/51714/feature-barrier = "1"   (n0,r4)
/local/domain/0/backend/vbd/4/51714/sectors = "4194304"   (n0,r4)
/local/domain/0/backend/vbd/4/51714/info = "0"   (n0,r4)
/local/domain/0/backend/vbd/4/51714/sector-size = "512"   (n0,r4)
/local/domain/0/backend/vbd/5 = ""   (r0)
/local/domain/0/backend/vbd/5/51713 = ""   (n0,r5)
/local/domain/0/backend/vbd/5/51713/domain = "mgaextws"   (n0,r5)
/local/domain/0/backend/vbd/5/51713/frontend = "/local/domain/5/device/vbd/51713"   (n0,r5)
/local/domain/0/backend/vbd/5/51713/uuid = "7237d864-04ca-1a4e-964c-107aa25c4615"   (n0,r5)
/local/domain/0/backend/vbd/5/51713/bootable = "1"   (n0,r5)
/local/domain/0/backend/vbd/5/51713/dev = "xvda1"   (n0,r5)
/local/domain/0/backend/vbd/5/51713/state = "4"   (n0,r5)
/local/domain/0/backend/vbd/5/51713/params = "/dev/vg-mgadomu2/mgaextws_os"   (n0,r5)
/local/domain/0/backend/vbd/5/51713/mode = "w"   (n0,r5)
/local/domain/0/backend/vbd/5/51713/online = "1"   (n0,r5)
/local/domain/0/backend/vbd/5/51713/frontend-id = "5"   (n0,r5)
/local/domain/0/backend/vbd/5/51713/type = "phy"   (n0,r5)
/local/domain/0/backend/vbd/5/51713/physical-device = "fd:16"   (n0,r5)
/local/domain/0/backend/vbd/5/51713/hotplug-status = "connected"   (n0,r5)
/local/domain/0/backend/vbd/5/51713/feature-barrier = "1"   (n0,r5)
/local/domain/0/backend/vbd/5/51713/sectors = "62914560"   (n0,r5)
/local/domain/0/backend/vbd/5/51713/info = "0"   (n0,r5)
/local/domain/0/backend/vbd/5/51713/sector-size = "512"   (n0,r5)
/local/domain/0/backend/vbd/5/51714 = ""   (n0,r5)
/local/domain/0/backend/vbd/5/51714/domain = "mgaextws"   (n0,r5)
/local/domain/0/backend/vbd/5/51714/frontend = "/local/domain/5/device/vbd/51714"   (n0,r5)
/local/domain/0/backend/vbd/5/51714/uuid = "0baaa3a5-3f58-3e13-5896-0cf37dc9d533"   (n0,r5)
/local/domain/0/backend/vbd/5/51714/bootable = "0"   (n0,r5)
/local/domain/0/backend/vbd/5/51714/dev = "xvda2"   (n0,r5)
/local/domain/0/backend/vbd/5/51714/state = "4"   (n0,r5)
/local/domain/0/backend/vbd/5/51714/params = "/dev/vg-mgadomu2/mgaextws_swap"   (n0,r5)
/local/domain/0/backend/vbd/5/51714/mode = "w"   (n0,r5)
/local/domain/0/backend/vbd/5/51714/online = "1"   (n0,r5)
/local/domain/0/backend/vbd/5/51714/frontend-id = "5"   (n0,r5)
/local/domain/0/backend/vbd/5/51714/type = "phy"   (n0,r5)
/local/domain/0/backend/vbd/5/51714/physical-device = "fd:17"   (n0,r5)
/local/domain/0/backend/vbd/5/51714/hotplug-status = "connected"   (n0,r5)
/local/domain/0/backend/vbd/5/51714/feature-barrier = "1"   (n0,r5)
/local/domain/0/backend/vbd/5/51714/sectors = "4194304"   (n0,r5)
/local/domain/0/backend/vbd/5/51714/info = "0"   (n0,r5)
/local/domain/0/backend/vbd/5/51714/sector-size = "512"   (n0,r5)
/local/domain/0/backend/vbd/6 = ""   (r0)
/local/domain/0/backend/vbd/6/51713 = ""   (n0,r6)
/local/domain/0/backend/vbd/6/51713/domain = "mgaweb1"   (n0,r6)
/local/domain/0/backend/vbd/6/51713/frontend = "/local/domain/6/device/vbd/51713"   (n0,r6)
/local/domain/0/backend/vbd/6/51713/uuid = "e068d988-7285-46aa-7bbe-f1325c0c55cb"   (n0,r6)
/local/domain/0/backend/vbd/6/51713/bootable = "1"   (n0,r6)
/local/domain/0/backend/vbd/6/51713/dev = "xvda1"   (n0,r6)
/local/domain/0/backend/vbd/6/51713/state = "4"   (n0,r6)
/local/domain/0/backend/vbd/6/51713/params = "/dev/vg-mgadomu1/mgaweb1_os"   (n0,r6)
/local/domain/0/backend/vbd/6/51713/mode = "w"   (n0,r6)
/local/domain/0/backend/vbd/6/51713/online = "1"   (n0,r6)
/local/domain/0/backend/vbd/6/51713/frontend-id = "6"   (n0,r6)
/local/domain/0/backend/vbd/6/51713/type = "phy"   (n0,r6)
/local/domain/0/backend/vbd/6/51713/physical-device = "fd:20"   (n0,r6)
/local/domain/0/backend/vbd/6/51713/hotplug-status = "connected"   (n0,r6)
/local/domain/0/backend/vbd/6/51713/feature-barrier = "1"   (n0,r6)
/local/domain/0/backend/vbd/6/51713/sectors = "104857600"   (n0,r6)
/local/domain/0/backend/vbd/6/51713/info = "0"   (n0,r6)
/local/domain/0/backend/vbd/6/51713/sector-size = "512"   (n0,r6)
/local/domain/0/backend/vbd/6/51714 = ""   (n0,r6)
/local/domain/0/backend/vbd/6/51714/domain = "mgaweb1"   (n0,r6)
/local/domain/0/backend/vbd/6/51714/frontend = "/local/domain/6/device/vbd/51714"   (n0,r6)
/local/domain/0/backend/vbd/6/51714/uuid = "f925c4cb-ef72-d505-7c86-a72309d480a2"   (n0,r6)
/local/domain/0/backend/vbd/6/51714/bootable = "0"   (n0,r6)
/local/domain/0/backend/vbd/6/51714/dev = "xvda2"   (n0,r6)
/local/domain/0/backend/vbd/6/51714/state = "4"   (n0,r6)
/local/domain/0/backend/vbd/6/51714/params = "/dev/vg-mgadomu1/mgaweb1_swap"   (n0,r6)
/local/domain/0/backend/vbd/6/51714/mode = "w"   (n0,r6)
/local/domain/0/backend/vbd/6/51714/online = "1"   (n0,r6)
/local/domain/0/backend/vbd/6/51714/frontend-id = "6"   (n0,r6)
/local/domain/0/backend/vbd/6/51714/type = "phy"   (n0,r6)
/local/domain/0/backend/vbd/6/51714/physical-device = "fd:21"   (n0,r6)
/local/domain/0/backend/vbd/6/51714/hotplug-status = "connected"   (n0,r6)
/local/domain/0/backend/vbd/6/51714/feature-barrier = "1"   (n0,r6)
/local/domain/0/backend/vbd/6/51714/sectors = "4194304"   (n0,r6)
/local/domain/0/backend/vbd/6/51714/info = "0"   (n0,r6)
/local/domain/0/backend/vbd/6/51714/sector-size = "512"   (n0,r6)
/local/domain/0/backend/vif = ""   (r0)
/local/domain/0/backend/vif/3 = ""   (r0)
/local/domain/0/backend/vif/3/0 = ""   (n0,r3)
/local/domain/0/backend/vif/3/0/bridge = "br60"   (n0,r3)
/local/domain/0/backend/vif/3/0/domain = "mgaca"   (n0,r3)
/local/domain/0/backend/vif/3/0/handle = "0"   (n0,r3)
/local/domain/0/backend/vif/3/0/uuid = "ae3badb7-b3e9-0ec3-1472-e312f547028b"   (n0,r3)
/local/domain/0/backend/vif/3/0/script = "/etc/xen/scripts/vif-bridge"   (n0,r3)
/local/domain/0/backend/vif/3/0/state = "4"   (n0,r3)
/local/domain/0/backend/vif/3/0/frontend = "/local/domain/3/device/vif/0"   (n0,r3)
/local/domain/0/backend/vif/3/0/mac = "00:16:3e:7a:30:25"   (n0,r3)
/local/domain/0/backend/vif/3/0/online = "1"   (n0,r3)
/local/domain/0/backend/vif/3/0/frontend-id = "3"   (n0,r3)
/local/domain/0/backend/vif/3/0/feature-sg = "1"   (n0,r3)
/local/domain/0/backend/vif/3/0/feature-gso-tcpv4 = "1"   (n0,r3)
/local/domain/0/backend/vif/3/0/feature-rx-copy = "1"   (n0,r3)
/local/domain/0/backend/vif/3/0/feature-rx-flip = "0"   (n0,r3)
/local/domain/0/backend/vif/3/0/hotplug-status = "connected"   (n0,r3)
/local/domain/0/backend/vif/3/1 = ""   (n0,r3)
/local/domain/0/backend/vif/3/1/bridge = "br250"   (n0,r3)
/local/domain/0/backend/vif/3/1/domain = "mgaca"   (n0,r3)
/local/domain/0/backend/vif/3/1/handle = "1"   (n0,r3)
/local/domain/0/backend/vif/3/1/uuid = "8412664d-562d-8603-bbb0-b2ed30a47649"   (n0,r3)
/local/domain/0/backend/vif/3/1/script = "/etc/xen/scripts/vif-bridge"   (n0,r3)
/local/domain/0/backend/vif/3/1/state = "4"   (n0,r3)
/local/domain/0/backend/vif/3/1/frontend = "/local/domain/3/device/vif/1"   (n0,r3)
/local/domain/0/backend/vif/3/1/mac = "00:16:3e:7a:30:26"   (n0,r3)
/local/domain/0/backend/vif/3/1/online = "1"   (n0,r3)
/local/domain/0/backend/vif/3/1/frontend-id = "3"   (n0,r3)
/local/domain/0/backend/vif/3/1/feature-sg = "1"   (n0,r3)
/local/domain/0/backend/vif/3/1/feature-gso-tcpv4 = "1"   (n0,r3)
/local/domain/0/backend/vif/3/1/feature-rx-copy = "1"   (n0,r3)
/local/domain/0/backend/vif/3/1/feature-rx-flip = "0"   (n0,r3)
/local/domain/0/backend/vif/3/1/hotplug-status = "connected"   (n0,r3)
/local/domain/0/backend/vif/4 = ""   (r0)
/local/domain/0/backend/vif/4/0 = ""   (n0,r4)
/local/domain/0/backend/vif/4/0/bridge = "br250"   (n0,r4)
/local/domain/0/backend/vif/4/0/domain = "mgatrain"   (n0,r4)
/local/domain/0/backend/vif/4/0/handle = "0"   (n0,r4)
/local/domain/0/backend/vif/4/0/uuid = "38b78e88-115a-1c04-e5bf-27af19064574"   (n0,r4)
/local/domain/0/backend/vif/4/0/script = "/etc/xen/scripts/vif-bridge"   (n0,r4)
/local/domain/0/backend/vif/4/0/state = "4"   (n0,r4)
/local/domain/0/backend/vif/4/0/frontend = "/local/domain/4/device/vif/0"   (n0,r4)
/local/domain/0/backend/vif/4/0/mac = "00:16:3e:1c:00:d4"   (n0,r4)
/local/domain/0/backend/vif/4/0/online = "1"   (n0,r4)
/local/domain/0/backend/vif/4/0/frontend-id = "4"   (n0,r4)
/local/domain/0/backend/vif/4/0/feature-sg = "1"   (n0,r4)
/local/domain/0/backend/vif/4/0/feature-gso-tcpv4 = "1"   (n0,r4)
/local/domain/0/backend/vif/4/0/feature-rx-copy = "1"   (n0,r4)
/local/domain/0/backend/vif/4/0/feature-rx-flip = "0"   (n0,r4)
/local/domain/0/backend/vif/4/0/hotplug-status = "connected"   (n0,r4)
/local/domain/0/backend/vif/4/1 = ""   (n0,r4)
/local/domain/0/backend/vif/4/1/bridge = "br225"   (n0,r4)
/local/domain/0/backend/vif/4/1/domain = "mgatrain"   (n0,r4)
/local/domain/0/backend/vif/4/1/handle = "1"   (n0,r4)
/local/domain/0/backend/vif/4/1/uuid = "e81a6ae9-2112-f89a-e515-fec3b2f3e0be"   (n0,r4)
/local/domain/0/backend/vif/4/1/script = "/etc/xen/scripts/vif-bridge"   (n0,r4)
/local/domain/0/backend/vif/4/1/state = "4"   (n0,r4)
/local/domain/0/backend/vif/4/1/frontend = "/local/domain/4/device/vif/1"   (n0,r4)
/local/domain/0/backend/vif/4/1/mac = "00:16:3e:1c:00:d6"   (n0,r4)
/local/domain/0/backend/vif/4/1/online = "1"   (n0,r4)
/local/domain/0/backend/vif/4/1/frontend-id = "4"   (n0,r4)
/local/domain/0/backend/vif/4/1/feature-sg = "1"   (n0,r4)
/local/domain/0/backend/vif/4/1/feature-gso-tcpv4 = "1"   (n0,r4)
/local/domain/0/backend/vif/4/1/feature-rx-copy = "1"   (n0,r4)
/local/domain/0/backend/vif/4/1/feature-rx-flip = "0"   (n0,r4)
/local/domain/0/backend/vif/4/1/hotplug-status = "connected"   (n0,r4)
/local/domain/0/backend/vif/5 = ""   (r0)
/local/domain/0/backend/vif/5/0 = ""   (n0,r5)
/local/domain/0/backend/vif/5/0/bridge = "br260"   (n0,r5)
/local/domain/0/backend/vif/5/0/domain = "mgaextws"   (n0,r5)
/local/domain/0/backend/vif/5/0/handle = "0"   (n0,r5)
/local/domain/0/backend/vif/5/0/uuid = "26017add-8695-477b-d239-d260fef76bec"   (n0,r5)
/local/domain/0/backend/vif/5/0/script = "/etc/xen/scripts/vif-bridge"   (n0,r5)
/local/domain/0/backend/vif/5/0/state = "4"   (n0,r5)
/local/domain/0/backend/vif/5/0/frontend = "/local/domain/5/device/vif/0"   (n0,r5)
/local/domain/0/backend/vif/5/0/mac = "00:16:3e:87:06:56"   (n0,r5)
/local/domain/0/backend/vif/5/0/online = "1"   (n0,r5)
/local/domain/0/backend/vif/5/0/frontend-id = "5"   (n0,r5)
/local/domain/0/backend/vif/5/0/feature-sg = "1"   (n0,r5)
/local/domain/0/backend/vif/5/0/feature-gso-tcpv4 = "1"   (n0,r5)
/local/domain/0/backend/vif/5/0/feature-rx-copy = "1"   (n0,r5)
/local/domain/0/backend/vif/5/0/feature-rx-flip = "0"   (n0,r5)
/local/domain/0/backend/vif/5/0/hotplug-status = "connected"   (n0,r5)
/local/domain/0/backend/vif/5/1 = ""   (n0,r5)
/local/domain/0/backend/vif/5/1/bridge = "br405"   (n0,r5)
/local/domain/0/backend/vif/5/1/domain = "mgaextws"   (n0,r5)
/local/domain/0/backend/vif/5/1/handle = "1"   (n0,r5)
/local/domain/0/backend/vif/5/1/uuid = "82263a3a-3eb2-c86c-239c-9a03506ba51e"   (n0,r5)
/local/domain/0/backend/vif/5/1/script = "/etc/xen/scripts/vif-bridge"   (n0,r5)
/local/domain/0/backend/vif/5/1/state = "4"   (n0,r5)
/local/domain/0/backend/vif/5/1/frontend = "/local/domain/5/device/vif/1"   (n0,r5)
/local/domain/0/backend/vif/5/1/mac = "00:16:3e:87:06:58"   (n0,r5)
/local/domain/0/backend/vif/5/1/online = "1"   (n0,r5)
/local/domain/0/backend/vif/5/1/frontend-id = "5"   (n0,r5)
/local/domain/0/backend/vif/5/1/feature-sg = "1"   (n0,r5)
/local/domain/0/backend/vif/5/1/feature-gso-tcpv4 = "1"   (n0,r5)
/local/domain/0/backend/vif/5/1/feature-rx-copy = "1"   (n0,r5)
/local/domain/0/backend/vif/5/1/feature-rx-flip = "0"   (n0,r5)
/local/domain/0/backend/vif/5/1/hotplug-status = "connected"   (n0,r5)
/local/domain/0/backend/vif/6 = ""   (r0)
/local/domain/0/backend/vif/6/0 = ""   (n0,r6)
/local/domain/0/backend/vif/6/0/bridge = "br250"   (n0,r6)
/local/domain/0/backend/vif/6/0/domain = "mgaweb1"   (n0,r6)
/local/domain/0/backend/vif/6/0/handle = "0"   (n0,r6)
/local/domain/0/backend/vif/6/0/uuid = "741a4b1d-6257-45f1-55f2-6a3dc29cee31"   (n0,r6)
/local/domain/0/backend/vif/6/0/script = "/etc/xen/scripts/vif-bridge"   (n0,r6)
/local/domain/0/backend/vif/6/0/state = "4"   (n0,r6)
/local/domain/0/backend/vif/6/0/frontend = "/local/domain/6/device/vif/0"   (n0,r6)
/local/domain/0/backend/vif/6/0/mac = "00:16:3e:03:52:b9"   (n0,r6)
/local/domain/0/backend/vif/6/0/online = "1"   (n0,r6)
/local/domain/0/backend/vif/6/0/frontend-id = "6"   (n0,r6)
/local/domain/0/backend/vif/6/0/feature-sg = "1"   (n0,r6)
/local/domain/0/backend/vif/6/0/feature-gso-tcpv4 = "1"   (n0,r6)
/local/domain/0/backend/vif/6/0/feature-rx-copy = "1"   (n0,r6)
/local/domain/0/backend/vif/6/0/feature-rx-flip = "0"   (n0,r6)
/local/domain/0/backend/vif/6/0/hotplug-status = "connected"   (n0,r6)
/local/domain/0/backend/vif/6/1 = ""   (n0,r6)
/local/domain/0/backend/vif/6/1/bridge = "br260"   (n0,r6)
/local/domain/0/backend/vif/6/1/domain = "mgaweb1"   (n0,r6)
/local/domain/0/backend/vif/6/1/handle = "1"   (n0,r6)
/local/domain/0/backend/vif/6/1/uuid = "ff3eb0cb-9193-e5a5-f1c7-56606c44a4bc"   (n0,r6)
/local/domain/0/backend/vif/6/1/script = "/etc/xen/scripts/vif-bridge"   (n0,r6)
/local/domain/0/backend/vif/6/1/state = "4"   (n0,r6)
/local/domain/0/backend/vif/6/1/frontend = "/local/domain/6/device/vif/1"   (n0,r6)
/local/domain/0/backend/vif/6/1/mac = "00:16:3e:3f:83:2a"   (n0,r6)
/local/domain/0/backend/vif/6/1/online = "1"   (n0,r6)
/local/domain/0/backend/vif/6/1/frontend-id = "6"   (n0,r6)
/local/domain/0/backend/vif/6/1/feature-sg = "1"   (n0,r6)
/local/domain/0/backend/vif/6/1/feature-gso-tcpv4 = "1"   (n0,r6)
/local/domain/0/backend/vif/6/1/feature-rx-copy = "1"   (n0,r6)
/local/domain/0/backend/vif/6/1/feature-rx-flip = "0"   (n0,r6)
/local/domain/0/backend/vif/6/1/hotplug-status = "connected"   (n0,r6)
/local/domain/0/backend/console = ""   (r0)
/local/domain/0/backend/console/3 = ""   (r0)
/local/domain/0/backend/console/3/0 = ""   (n0,r3)
/local/domain/0/backend/console/3/0/domain = "mgaca"   (n0,r3)
/local/domain/0/backend/console/3/0/protocol = "vt100"   (n0,r3)
/local/domain/0/backend/console/3/0/uuid = "02f74b54-4174-74aa-4324-f2436c730a64"   (n0,r3)
/local/domain/0/backend/console/3/0/frontend = "/local/domain/3/device/console/0"   (n0,r3)
/local/domain/0/backend/console/3/0/state = "4"   (n0,r3)
/local/domain/0/backend/console/3/0/location = "2"   (n0,r3)
/local/domain/0/backend/console/3/0/online = "1"   (n0,r3)
/local/domain/0/backend/console/3/0/frontend-id = "3"   (n0,r3)
/local/domain/0/backend/console/3/0/hotplug-status = "connected"   (n0,r3)
/local/domain/0/backend/console/4 = ""   (r0)
/local/domain/0/backend/console/4/0 = ""   (n0,r4)
/local/domain/0/backend/console/4/0/domain = "mgatrain"   (n0,r4)
/local/domain/0/backend/console/4/0/protocol = "vt100"   (n0,r4)
/local/domain/0/backend/console/4/0/uuid = "01a6c026-56a4-9d8a-43b7-6389a783e825"   (n0,r4)
/local/domain/0/backend/console/4/0/frontend = "/local/domain/4/device/console/0"   (n0,r4)
/local/domain/0/backend/console/4/0/state = "4"   (n0,r4)
/local/domain/0/backend/console/4/0/location = "2"   (n0,r4)
/local/domain/0/backend/console/4/0/online = "1"   (n0,r4)
/local/domain/0/backend/console/4/0/frontend-id = "4"   (n0,r4)
/local/domain/0/backend/console/4/0/hotplug-status = "connected"   (n0,r4)
/local/domain/0/backend/console/5 = ""   (r0)
/local/domain/0/backend/console/5/0 = ""   (n0,r5)
/local/domain/0/backend/console/5/0/domain = "mgaextws"   (n0,r5)
/local/domain/0/backend/console/5/0/protocol = "vt100"   (n0,r5)
/local/domain/0/backend/console/5/0/uuid = "e3f73836-13ad-6372-0d82-31cea623bdd4"   (n0,r5)
/local/domain/0/backend/console/5/0/frontend = "/local/domain/5/device/console/0"   (n0,r5)
/local/domain/0/backend/console/5/0/state = "4"   (n0,r5)
/local/domain/0/backend/console/5/0/location = "2"   (n0,r5)
/local/domain/0/backend/console/5/0/online = "1"   (n0,r5)
/local/domain/0/backend/console/5/0/frontend-id = "5"   (n0,r5)
/local/domain/0/backend/console/5/0/hotplug-status = "connected"   (n0,r5)
/local/domain/0/backend/console/6 = ""   (r0)
/local/domain/0/backend/console/6/0 = ""   (n0,r6)
/local/domain/0/backend/console/6/0/domain = "mgaweb1"   (n0,r6)
/local/domain/0/backend/console/6/0/protocol = "vt100"   (n0,r6)
/local/domain/0/backend/console/6/0/uuid = "1d96e799-7de9-1d6c-1c26-a543dacf250a"   (n0,r6)
/local/domain/0/backend/console/6/0/frontend = "/local/domain/6/device/console/0"   (n0,r6)
/local/domain/0/backend/console/6/0/state = "4"   (n0,r6)
/local/domain/0/backend/console/6/0/location = "2"   (n0,r6)
/local/domain/0/backend/console/6/0/online = "1"   (n0,r6)
/local/domain/0/backend/console/6/0/frontend-id = "6"   (n0,r6)
/local/domain/0/backend/console/6/0/hotplug-status = "connected"   (n0,r6)
/local/domain/0/backend/vusb = ""   (r0)
/local/domain/0/backend/vusb/3 = ""   (r0)
/local/domain/0/backend/vusb/3/0 = ""   (n0,r3)
/local/domain/0/backend/vusb/3/0/domain = "mgaca"   (n0,r3)
/local/domain/0/backend/vusb/3/0/frontend = "/local/domain/3/device/vusb/0"   (n0,r3)
/local/domain/0/backend/vusb/3/0/online = "1"   (n0,r3)
/local/domain/0/backend/vusb/3/0/state = "4"   (n0,r3)
/local/domain/0/backend/vusb/3/0/frontend-id = "3"   (n0,r3)
/local/domain/0/backend/vusb/3/0/port = ""   (n0,r3)
/local/domain/0/backend/vusb/3/0/port/2 = ""   (n0,r3)
/local/domain/0/backend/vusb/3/0/port/1 = "2-1"   (n0,r3)
/local/domain/0/backend/vusb/3/0/usb-ver = "2"   (n0,r3)
/local/domain/0/backend/vusb/3/0/num-ports = "2"   (n0,r3)
/local/domain/0/device-model = ""   (r0)
/local/domain/0/device-model/3 = ""   (n0,b3)
/local/domain/0/device-model/3/state = "running"   (n0,b3)
/local/domain/0/device-model/4 = ""   (n0,b4)
/local/domain/0/device-model/4/state = "running"   (n0,b4)
/local/domain/0/device-model/5 = ""   (n0,b5)
/local/domain/0/device-model/5/state = "running"   (n0,b5)
/local/domain/0/device-model/6 = ""   (n0,b6)
/local/domain/0/device-model/6/state = "running"   (n0,b6)
/local/domain/3 = ""   (n0,r3)
/local/domain/3/vm = "/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2"   (n0,r3)
/local/domain/3/device = ""   (n3)
/local/domain/3/device/vkbd = ""   (n3)
/local/domain/3/device/vkbd/0 = ""   (n3,r0)
/local/domain/3/device/vkbd/0/protocol = "x86_64-abi"   (n3,r0)
/local/domain/3/device/vkbd/0/state = "4"   (n3,r0)
/local/domain/3/device/vkbd/0/backend-id = "0"   (n3,r0)
/local/domain/3/device/vkbd/0/backend = "/local/domain/0/backend/vkbd/3/0"   (n3,r0)
/local/domain/3/device/vkbd/0/page-ref = "1473690"   (n3,r0)
/local/domain/3/device/vkbd/0/event-channel = "11"   (n3,r0)
/local/domain/3/device/vkbd/0/request-abs-pointer = "1"   (n3,r0)
/local/domain/3/device/vfb = ""   (n3)
/local/domain/3/device/vfb/0 = ""   (n3,r0)
/local/domain/3/device/vfb/0/protocol = "x86_64-abi"   (n3,r0)
/local/domain/3/device/vfb/0/state = "4"   (n3,r0)
/local/domain/3/device/vfb/0/backend-id = "0"   (n3,r0)
/local/domain/3/device/vfb/0/backend = "/local/domain/0/backend/vfb/3/0"   (n3,r0)
/local/domain/3/device/vfb/0/page-ref = "1480004"   (n3,r0)
/local/domain/3/device/vfb/0/event-channel = "10"   (n3,r0)
/local/domain/3/device/vfb/0/feature-update = "1"   (n3,r0)
/local/domain/3/device/vbd = ""   (n3)
/local/domain/3/device/vbd/51713 = ""   (n3,r0)
/local/domain/3/device/vbd/51713/virtual-device = "51713"   (n3,r0)
/local/domain/3/device/vbd/51713/device-type = "disk"   (n3,r0)
/local/domain/3/device/vbd/51713/protocol = "x86_64-abi"   (n3,r0)
/local/domain/3/device/vbd/51713/backend-id = "0"   (n3,r0)
/local/domain/3/device/vbd/51713/state = "4"   (n3,r0)
/local/domain/3/device/vbd/51713/backend = "/local/domain/0/backend/vbd/3/51713"   (n3,r0)
/local/domain/3/device/vbd/51713/ring-ref = "8"   (n3,r0)
/local/domain/3/device/vbd/51713/event-channel = "12"   (n3,r0)
/local/domain/3/device/vif = ""   (n3)
/local/domain/3/device/vif/0 = ""   (n3,r0)
/local/domain/3/device/vif/0/mac = "00:16:3e:7a:30:25"   (n3,r0)
/local/domain/3/device/vif/0/handle = "0"   (n3,r0)
/local/domain/3/device/vif/0/protocol = "x86_64-abi"   (n3,r0)
/local/domain/3/device/vif/0/backend-id = "0"   (n3,r0)
/local/domain/3/device/vif/0/state = "4"   (n3,r0)
/local/domain/3/device/vif/0/backend = "/local/domain/0/backend/vif/3/0"   (n3,r0)
/local/domain/3/device/vif/0/tx-ring-ref = "1280"   (n3,r0)
/local/domain/3/device/vif/0/rx-ring-ref = "1281"   (n3,r0)
/local/domain/3/device/vif/0/event-channel = "13"   (n3,r0)
/local/domain/3/device/vif/0/request-rx-copy = "1"   (n3,r0)
/local/domain/3/device/vif/0/feature-rx-notify = "1"   (n3,r0)
/local/domain/3/device/vif/0/feature-no-csum-offload = "0"   (n3,r0)
/local/domain/3/device/vif/0/feature-sg = "1"   (n3,r0)
/local/domain/3/device/vif/0/feature-gso-tcpv4 = "1"   (n3,r0)
/local/domain/3/device/vif/1 = ""   (n3,r0)
/local/domain/3/device/vif/1/mac = "00:16:3e:7a:30:26"   (n3,r0)
/local/domain/3/device/vif/1/handle = "1"   (n3,r0)
/local/domain/3/device/vif/1/protocol = "x86_64-abi"   (n3,r0)
/local/domain/3/device/vif/1/backend-id = "0"   (n3,r0)
/local/domain/3/device/vif/1/state = "4"   (n3,r0)
/local/domain/3/device/vif/1/backend = "/local/domain/0/backend/vif/3/1"   (n3,r0)
/local/domain/3/device/vif/1/tx-ring-ref = "1282"   (n3,r0)
/local/domain/3/device/vif/1/rx-ring-ref = "1283"   (n3,r0)
/local/domain/3/device/vif/1/event-channel = "14"   (n3,r0)
/local/domain/3/device/vif/1/request-rx-copy = "1"   (n3,r0)
/local/domain/3/device/vif/1/feature-rx-notify = "1"   (n3,r0)
/local/domain/3/device/vif/1/feature-no-csum-offload = "0"   (n3,r0)
/local/domain/3/device/vif/1/feature-sg = "1"   (n3,r0)
/local/domain/3/device/vif/1/feature-gso-tcpv4 = "1"   (n3,r0)
/local/domain/3/device/console = ""   (n3)
/local/domain/3/device/console/0 = ""   (n3,r0)
/local/domain/3/device/console/0/protocol = "x86_64-abi"   (n3,r0)
/local/domain/3/device/console/0/state = "1"   (n3,r0)
/local/domain/3/device/console/0/backend-id = "0"   (n3,r0)
/local/domain/3/device/console/0/backend = "/local/domain/0/backend/console/3/0"   (n3,r0)
/local/domain/3/device/suspend = ""   (n3)
/local/domain/3/device/suspend/event-channel = "9"   (n3)
/local/domain/3/device/vusb = ""   (n3)
/local/domain/3/device/vusb/0 = ""   (n3,r0)
/local/domain/3/device/vusb/0/protocol = "x86_64-abi"   (n3,r0)
/local/domain/3/device/vusb/0/state = "4"   (n3,r0)
/local/domain/3/device/vusb/0/backend-id = "0"   (n3,r0)
/local/domain/3/device/vusb/0/backend = "/local/domain/0/backend/vusb/3/0"   (n3,r0)
/local/domain/3/device/vusb/0/urb-ring-ref = "1316"   (n3,r0)
/local/domain/3/device/vusb/0/conn-ring-ref = "1311"   (n3,r0)
/local/domain/3/device/vusb/0/event-channel = "15"   (n3,r0)
/local/domain/3/control = ""   (n3)
/local/domain/3/control/platform-feature-multiprocessor-suspend = "1"   (n3)
/local/domain/3/error = ""   (n3)
/local/domain/3/memory = ""   (n3)
/local/domain/3/memory/target = "1048576"   (n3)
/local/domain/3/guest = ""   (n3)
/local/domain/3/hvmpv = ""   (n3)
/local/domain/3/data = ""   (n3)
/local/domain/3/serial = ""   (n0,r3)
/local/domain/3/serial/0 = ""   (n0,r3)
/local/domain/3/serial/0/tty = "/dev/pts/1"   (n0,r3)
/local/domain/3/device-misc = ""   (n0,r3)
/local/domain/3/device-misc/vif = ""   (n0,r3)
/local/domain/3/device-misc/vif/nextDeviceID = "2"   (n0,r3)
/local/domain/3/device-misc/console = ""   (n0,r3)
/local/domain/3/device-misc/console/nextDeviceID = "1"   (n0,r3)
/local/domain/3/device-misc/vusb = ""   (n0,r3)
/local/domain/3/device-misc/vusb/nextDeviceID = "1"   (n0,r3)
/local/domain/3/image = ""   (n0,r3)
/local/domain/3/image/device-model-fifo = "/var/run/xend/dm-3-1340009524.fifo"   (n0,r3)
/local/domain/3/image/device-model-pid = "19562"   (n0,r3)
/local/domain/3/image/entry = "18446744071562076160"   (n0,r3)
/local/domain/3/image/loader = "generic"   (n0,r3)
/local/domain/3/image/guest-os = "linux"   (n0,r3)
/local/domain/3/image/features = ""   (n0,r3)
/local/domain/3/image/features/writable-descriptor-tables = "1"   (n0,r3)
/local/domain/3/image/features/supervisor-mode-kernel = "1"   (n0,r3)
/local/domain/3/image/features/pae-pgdir-above-4gb = "1"   (n0,r3)
/local/domain/3/image/features/writable-page-tables = "1"   (n0,r3)
/local/domain/3/image/features/auto-translated-physmap = "1"   (n0,r3)
/local/domain/3/image/hypercall-page = "18446744071562080256"   (n0,r3)
/local/domain/3/image/guest-version = "2.6"   (n0,r3)
/local/domain/3/image/paddr-offset = "0"   (n0,r3)
/local/domain/3/image/virt-base = "18446744071562067968"   (n0,r3)
/local/domain/3/image/suspend-cancel = "1"   (n0,r3)
/local/domain/3/image/xen-version = "xen-3.0"   (n0,r3)
/local/domain/3/image/init-p2m = "18446719884453740544"   (n0,r3)
/local/domain/3/console = ""   (n0,r3)
/local/domain/3/console/ring-ref = "4087985"   (n0,r3)
/local/domain/3/console/port = "2"   (n0,r3)
/local/domain/3/console/limit = "1048576"   (n0,r3)
/local/domain/3/console/type = "ioemu"   (n0,r3)
/local/domain/3/console/vnc-port = "5901"   (n0,r3)
/local/domain/3/console/tty = "/dev/pts/1"   (n0,r3)
/local/domain/3/store = ""   (n0,r3)
/local/domain/3/store/ring-ref = "4087986"   (n0,r3)
/local/domain/3/store/port = "1"   (n0,r3)
/local/domain/3/cpu = ""   (n0,r3)
/local/domain/3/cpu/1 = ""   (n0,r3)
/local/domain/3/cpu/1/availability = "online"   (n0,r3)
/local/domain/3/cpu/0 = ""   (n0,r3)
/local/domain/3/cpu/0/availability = "online"   (n0,r3)
/local/domain/3/description = "Managua Certification Server"   (n0,r3)
/local/domain/3/name = "mgaca"   (n0,r3)
/local/domain/3/domid = "3"   (n0,r3)
/local/domain/4 = ""   (n0,r4)
/local/domain/4/vm = "/vm/b38923a5-69b1-493e-9220-e81440b63d4e"   (n0,r4)
/local/domain/4/device = ""   (n4)
/local/domain/4/device/vkbd = ""   (n4)
/local/domain/4/device/vkbd/0 = ""   (n4,r0)
/local/domain/4/device/vkbd/0/protocol = "x86_64-abi"   (n4,r0)
/local/domain/4/device/vkbd/0/state = "4"   (n4,r0)
/local/domain/4/device/vkbd/0/backend-id = "0"   (n4,r0)
/local/domain/4/device/vkbd/0/backend = "/local/domain/0/backend/vkbd/4/0"   (n4,r0)
/local/domain/4/device/vkbd/0/page-ref = "3804588"   (n4,r0)
/local/domain/4/device/vkbd/0/event-channel = "17"   (n4,r0)
/local/domain/4/device/vkbd/0/request-abs-pointer = "1"   (n4,r0)
/local/domain/4/device/vfb = ""   (n4)
/local/domain/4/device/vfb/0 = ""   (n4,r0)
/local/domain/4/device/vfb/0/protocol = "x86_64-abi"   (n4,r0)
/local/domain/4/device/vfb/0/state = "4"   (n4,r0)
/local/domain/4/device/vfb/0/backend-id = "0"   (n4,r0)
/local/domain/4/device/vfb/0/backend = "/local/domain/0/backend/vfb/4/0"   (n4,r0)
/local/domain/4/device/vfb/0/page-ref = "3808785"   (n4,r0)
/local/domain/4/device/vfb/0/event-channel = "16"   (n4,r0)
/local/domain/4/device/vfb/0/feature-update = "1"   (n4,r0)
/local/domain/4/device/vbd = ""   (n4)
/local/domain/4/device/vbd/51713 = ""   (n4,r0)
/local/domain/4/device/vbd/51713/virtual-device = "51713"   (n4,r0)
/local/domain/4/device/vbd/51713/device-type = "disk"   (n4,r0)
/local/domain/4/device/vbd/51713/protocol = "x86_64-abi"   (n4,r0)
/local/domain/4/device/vbd/51713/backend-id = "0"   (n4,r0)
/local/domain/4/device/vbd/51713/state = "4"   (n4,r0)
/local/domain/4/device/vbd/51713/backend = "/local/domain/0/backend/vbd/4/51713"   (n4,r0)
/local/domain/4/device/vbd/51713/ring-ref = "8"   (n4,r0)
/local/domain/4/device/vbd/51713/event-channel = "18"   (n4,r0)
/local/domain/4/device/vbd/51714 = ""   (n4,r0)
/local/domain/4/device/vbd/51714/virtual-device = "51714"   (n4,r0)
/local/domain/4/device/vbd/51714/device-type = "disk"   (n4,r0)
/local/domain/4/device/vbd/51714/protocol = "x86_64-abi"   (n4,r0)
/local/domain/4/device/vbd/51714/backend-id = "0"   (n4,r0)
/local/domain/4/device/vbd/51714/state = "4"   (n4,r0)
/local/domain/4/device/vbd/51714/backend = "/local/domain/0/backend/vbd/4/51714"   (n4,r0)
/local/domain/4/device/vbd/51714/ring-ref = "9"   (n4,r0)
/local/domain/4/device/vbd/51714/event-channel = "19"   (n4,r0)
/local/domain/4/device/vif = ""   (n4)
/local/domain/4/device/vif/0 = ""   (n4,r0)
/local/domain/4/device/vif/0/mac = "00:16:3e:1c:00:d4"   (n4,r0)
/local/domain/4/device/vif/0/handle = "0"   (n4,r0)
/local/domain/4/device/vif/0/protocol = "x86_64-abi"   (n4,r0)
/local/domain/4/device/vif/0/backend-id = "0"   (n4,r0)
/local/domain/4/device/vif/0/state = "4"   (n4,r0)
/local/domain/4/device/vif/0/backend = "/local/domain/0/backend/vif/4/0"   (n4,r0)
/local/domain/4/device/vif/0/tx-ring-ref = "1280"   (n4,r0)
/local/domain/4/device/vif/0/rx-ring-ref = "1281"   (n4,r0)
/local/domain/4/device/vif/0/event-channel = "20"   (n4,r0)
/local/domain/4/device/vif/0/request-rx-copy = "1"   (n4,r0)
/local/domain/4/device/vif/0/feature-rx-notify = "1"   (n4,r0)
/local/domain/4/device/vif/0/feature-no-csum-offload = "0"   (n4,r0)
/local/domain/4/device/vif/0/feature-sg = "1"   (n4,r0)
/local/domain/4/device/vif/0/feature-gso-tcpv4 = "1"   (n4,r0)
/local/domain/4/device/vif/1 = ""   (n4,r0)
/local/domain/4/device/vif/1/mac = "00:16:3e:1c:00:d6"   (n4,r0)
/local/domain/4/device/vif/1/handle = "1"   (n4,r0)
/local/domain/4/device/vif/1/protocol = "x86_64-abi"   (n4,r0)
/local/domain/4/device/vif/1/backend-id = "0"   (n4,r0)
/local/domain/4/device/vif/1/state = "4"   (n4,r0)
/local/domain/4/device/vif/1/backend = "/local/domain/0/backend/vif/4/1"   (n4,r0)
/local/domain/4/device/vif/1/tx-ring-ref = "1282"   (n4,r0)
/local/domain/4/device/vif/1/rx-ring-ref = "1283"   (n4,r0)
/local/domain/4/device/vif/1/event-channel = "21"   (n4,r0)
/local/domain/4/device/vif/1/request-rx-copy = "1"   (n4,r0)
/local/domain/4/device/vif/1/feature-rx-notify = "1"   (n4,r0)
/local/domain/4/device/vif/1/feature-no-csum-offload = "0"   (n4,r0)
/local/domain/4/device/vif/1/feature-sg = "1"   (n4,r0)
/local/domain/4/device/vif/1/feature-gso-tcpv4 = "1"   (n4,r0)
/local/domain/4/device/console = ""   (n4)
/local/domain/4/device/console/0 = ""   (n4,r0)
/local/domain/4/device/console/0/protocol = "x86_64-abi"   (n4,r0)
/local/domain/4/device/console/0/state = "1"   (n4,r0)
/local/domain/4/device/console/0/backend-id = "0"   (n4,r0)
/local/domain/4/device/console/0/backend = "/local/domain/0/backend/console/4/0"   (n4,r0)
/local/domain/4/device/suspend = ""   (n4)
/local/domain/4/device/suspend/event-channel = "15"   (n4)
/local/domain/4/control = ""   (n4)
/local/domain/4/control/platform-feature-multiprocessor-suspend = "1"   (n4)
/local/domain/4/error = ""   (n4)
/local/domain/4/memory = ""   (n4)
/local/domain/4/memory/target = "1048576"   (n4)
/local/domain/4/guest = ""   (n4)
/local/domain/4/hvmpv = ""   (n4)
/local/domain/4/data = ""   (n4)
/local/domain/4/serial = ""   (n0,r4)
/local/domain/4/serial/0 = ""   (n0,r4)
/local/domain/4/serial/0/tty = "/dev/pts/0"   (n0,r4)
/local/domain/4/device-misc = ""   (n0,r4)
/local/domain/4/device-misc/vif = ""   (n0,r4)
/local/domain/4/device-misc/vif/nextDeviceID = "2"   (n0,r4)
/local/domain/4/device-misc/console = ""   (n0,r4)
/local/domain/4/device-misc/console/nextDeviceID = "1"   (n0,r4)
/local/domain/4/image = ""   (n0,r4)
/local/domain/4/image/device-model-fifo = "/var/run/xend/dm-4-1342477947.fifo"   (n0,r4)
/local/domain/4/image/device-model-pid = "11578"   (n0,r4)
/local/domain/4/image/entry = "18446744071562076160"   (n0,r4)
/local/domain/4/image/loader = "generic"   (n0,r4)
/local/domain/4/image/guest-os = "linux"   (n0,r4)
/local/domain/4/image/features = ""   (n0,r4)
/local/domain/4/image/features/writable-descriptor-tables = "1"   (n0,r4)
/local/domain/4/image/features/supervisor-mode-kernel = "1"   (n0,r4)
/local/domain/4/image/features/pae-pgdir-above-4gb = "1"   (n0,r4)
/local/domain/4/image/features/writable-page-tables = "1"   (n0,r4)
/local/domain/4/image/features/auto-translated-physmap = "1"   (n0,r4)
/local/domain/4/image/hypercall-page = "18446744071562080256"   (n0,r4)
/local/domain/4/image/guest-version = "2.6"   (n0,r4)
/local/domain/4/image/paddr-offset = "0"   (n0,r4)
/local/domain/4/image/virt-base = "18446744071562067968"   (n0,r4)
/local/domain/4/image/suspend-cancel = "1"   (n0,r4)
/local/domain/4/image/xen-version = "xen-3.0"   (n0,r4)
/local/domain/4/image/init-p2m = "18446719884453740544"   (n0,r4)
/local/domain/4/console = ""   (n0,r4)
/local/domain/4/console/ring-ref = "4342118"   (n0,r4)
/local/domain/4/console/port = "2"   (n0,r4)
/local/domain/4/console/limit = "1048576"   (n0,r4)
/local/domain/4/console/type = "ioemu"   (n0,r4)
/local/domain/4/console/vnc-port = "5900"   (n0,r4)
/local/domain/4/console/tty = "/dev/pts/0"   (n0,r4)
/local/domain/4/cpu = ""   (n0,r4)
/local/domain/4/cpu/3 = ""   (n0,r4)
/local/domain/4/cpu/3/availability = "online"   (n0,r4)
/local/domain/4/cpu/1 = ""   (n0,r4)
/local/domain/4/cpu/1/availability = "online"   (n0,r4)
/local/domain/4/cpu/2 = ""   (n0,r4)
/local/domain/4/cpu/2/availability = "online"   (n0,r4)
/local/domain/4/cpu/0 = ""   (n0,r4)
/local/domain/4/cpu/0/availability = "online"   (n0,r4)
/local/domain/4/store = ""   (n0,r4)
/local/domain/4/store/ring-ref = "4342119"   (n0,r4)
/local/domain/4/store/port = "1"   (n0,r4)
/local/domain/4/description = ""   (n0,r4)
/local/domain/4/name = "mgatrain"   (n0,r4)
/local/domain/4/domid = "4"   (n0,r4)
/local/domain/5 = ""   (n0,r5)
/local/domain/5/vm = "/vm/3b498296-a711-40b7-b9db-e0f257e5bb44"   (n0,r5)
/local/domain/5/device = ""   (n5)
/local/domain/5/device/vkbd = ""   (n5)
/local/domain/5/device/vkbd/0 = ""   (n5,r0)
/local/domain/5/device/vkbd/0/protocol = "x86_64-abi"   (n5,r0)
/local/domain/5/device/vkbd/0/state = "4"   (n5,r0)
/local/domain/5/device/vkbd/0/backend-id = "0"   (n5,r0)
/local/domain/5/device/vkbd/0/backend = "/local/domain/0/backend/vkbd/5/0"   (n5,r0)
/local/domain/5/device/vkbd/0/request-abs-pointer = "1"   (n5,r0)
/local/domain/5/device/vkbd/0/page-ref = "2753702"   (n5,r0)
/local/domain/5/device/vkbd/0/event-channel = "17"   (n5,r0)
/local/domain/5/device/vfb = ""   (n5)
/local/domain/5/device/vfb/0 = ""   (n5,r0)
/local/domain/5/device/vfb/0/protocol = "x86_64-abi"   (n5,r0)
/local/domain/5/device/vfb/0/state = "4"   (n5,r0)
/local/domain/5/device/vfb/0/backend-id = "0"   (n5,r0)
/local/domain/5/device/vfb/0/backend = "/local/domain/0/backend/vfb/5/0"   (n5,r0)
/local/domain/5/device/vfb/0/page-ref = "2760197"   (n5,r0)
/local/domain/5/device/vfb/0/event-channel = "16"   (n5,r0)
/local/domain/5/device/vfb/0/feature-update = "1"   (n5,r0)
/local/domain/5/device/vbd = ""   (n5)
/local/domain/5/device/vbd/51713 = ""   (n5,r0)
/local/domain/5/device/vbd/51713/virtual-device = "51713"   (n5,r0)
/local/domain/5/device/vbd/51713/device-type = "disk"   (n5,r0)
/local/domain/5/device/vbd/51713/protocol = "x86_64-abi"   (n5,r0)
/local/domain/5/device/vbd/51713/backend-id = "0"   (n5,r0)
/local/domain/5/device/vbd/51713/state = "4"   (n5,r0)
/local/domain/5/device/vbd/51713/backend = "/local/domain/0/backend/vbd/5/51713"   (n5,r0)
/local/domain/5/device/vbd/51713/ring-ref = "8"   (n5,r0)
/local/domain/5/device/vbd/51713/event-channel = "18"   (n5,r0)
/local/domain/5/device/vbd/51714 = ""   (n5,r0)
/local/domain/5/device/vbd/51714/virtual-device = "51714"   (n5,r0)
/local/domain/5/device/vbd/51714/device-type = "disk"   (n5,r0)
/local/domain/5/device/vbd/51714/protocol = "x86_64-abi"   (n5,r0)
/local/domain/5/device/vbd/51714/backend-id = "0"   (n5,r0)
/local/domain/5/device/vbd/51714/state = "4"   (n5,r0)
/local/domain/5/device/vbd/51714/backend = "/local/domain/0/backend/vbd/5/51714"   (n5,r0)
/local/domain/5/device/vbd/51714/ring-ref = "9"   (n5,r0)
/local/domain/5/device/vbd/51714/event-channel = "19"   (n5,r0)
/local/domain/5/device/vif = ""   (n5)
/local/domain/5/device/vif/0 = ""   (n5,r0)
/local/domain/5/device/vif/0/mac = "00:16:3e:87:06:56"   (n5,r0)
/local/domain/5/device/vif/0/handle = "0"   (n5,r0)
/local/domain/5/device/vif/0/protocol = "x86_64-abi"   (n5,r0)
/local/domain/5/device/vif/0/backend-id = "0"   (n5,r0)
/local/domain/5/device/vif/0/state = "4"   (n5,r0)
/local/domain/5/device/vif/0/backend = "/local/domain/0/backend/vif/5/0"   (n5,r0)
/local/domain/5/device/vif/0/tx-ring-ref = "1280"   (n5,r0)
/local/domain/5/device/vif/0/rx-ring-ref = "1281"   (n5,r0)
/local/domain/5/device/vif/0/event-channel = "20"   (n5,r0)
/local/domain/5/device/vif/0/request-rx-copy = "1"   (n5,r0)
/local/domain/5/device/vif/0/feature-rx-notify = "1"   (n5,r0)
/local/domain/5/device/vif/0/feature-no-csum-offload = "0"   (n5,r0)
/local/domain/5/device/vif/0/feature-sg = "1"   (n5,r0)
/local/domain/5/device/vif/0/feature-gso-tcpv4 = "1"   (n5,r0)
/local/domain/5/device/vif/1 = ""   (n5,r0)
/local/domain/5/device/vif/1/mac = "00:16:3e:87:06:58"   (n5,r0)
/local/domain/5/device/vif/1/handle = "1"   (n5,r0)
/local/domain/5/device/vif/1/protocol = "x86_64-abi"   (n5,r0)
/local/domain/5/device/vif/1/backend-id = "0"   (n5,r0)
/local/domain/5/device/vif/1/state = "4"   (n5,r0)
/local/domain/5/device/vif/1/backend = "/local/domain/0/backend/vif/5/1"   (n5,r0)
/local/domain/5/device/vif/1/tx-ring-ref = "1282"   (n5,r0)
/local/domain/5/device/vif/1/rx-ring-ref = "1283"   (n5,r0)
/local/domain/5/device/vif/1/event-channel = "21"   (n5,r0)
/local/domain/5/device/vif/1/request-rx-copy = "1"   (n5,r0)
/local/domain/5/device/vif/1/feature-rx-notify = "1"   (n5,r0)
/local/domain/5/device/vif/1/feature-no-csum-offload = "0"   (n5,r0)
/local/domain/5/device/vif/1/feature-sg = "1"   (n5,r0)
/local/domain/5/device/vif/1/feature-gso-tcpv4 = "1"   (n5,r0)
/local/domain/5/device/console = ""   (n5)
/local/domain/5/device/console/0 = ""   (n5,r0)
/local/domain/5/device/console/0/protocol = "x86_64-abi"   (n5,r0)
/local/domain/5/device/console/0/state = "1"   (n5,r0)
/local/domain/5/device/console/0/backend-id = "0"   (n5,r0)
/local/domain/5/device/console/0/backend = "/local/domain/0/backend/console/5/0"   (n5,r0)
/local/domain/5/device/suspend = ""   (n5)
/local/domain/5/device/suspend/event-channel = "15"   (n5)
/local/domain/5/control = ""   (n5)
/local/domain/5/control/platform-feature-multiprocessor-suspend = "1"   (n5)
/local/domain/5/error = ""   (n5)
/local/domain/5/memory = ""   (n5)
/local/domain/5/memory/target = "1048576"   (n5)
/local/domain/5/guest = ""   (n5)
/local/domain/5/hvmpv = ""   (n5)
/local/domain/5/data = ""   (n5)
/local/domain/5/serial = ""   (n0,r5)
/local/domain/5/serial/0 = ""   (n0,r5)
/local/domain/5/serial/0/tty = "/dev/pts/2"   (n0,r5)
/local/domain/5/device-misc = ""   (n0,r5)
/local/domain/5/device-misc/vif = ""   (n0,r5)
/local/domain/5/device-misc/vif/nextDeviceID = "2"   (n0,r5)
/local/domain/5/device-misc/console = ""   (n0,r5)
/local/domain/5/device-misc/console/nextDeviceID = "1"   (n0,r5)
/local/domain/5/image = ""   (n0,r5)
/local/domain/5/image/device-model-fifo = "/var/run/xend/dm-5-1342477948.fifo"   (n0,r5)
/local/domain/5/image/device-model-pid = "11768"   (n0,r5)
/local/domain/5/image/entry = "18446744071562076160"   (n0,r5)
/local/domain/5/image/loader = "generic"   (n0,r5)
/local/domain/5/image/guest-os = "linux"   (n0,r5)
/local/domain/5/image/features = ""   (n0,r5)
/local/domain/5/image/features/writable-descriptor-tables = "1"   (n0,r5)
/local/domain/5/image/features/supervisor-mode-kernel = "1"   (n0,r5)
/local/domain/5/image/features/pae-pgdir-above-4gb = "1"   (n0,r5)
/local/domain/5/image/features/writable-page-tables = "1"   (n0,r5)
/local/domain/5/image/features/auto-translated-physmap = "1"   (n0,r5)
/local/domain/5/image/hypercall-page = "18446744071562080256"   (n0,r5)
/local/domain/5/image/guest-version = "2.6"   (n0,r5)
/local/domain/5/image/paddr-offset = "0"   (n0,r5)
/local/domain/5/image/virt-base = "18446744071562067968"   (n0,r5)
/local/domain/5/image/suspend-cancel = "1"   (n0,r5)
/local/domain/5/image/xen-version = "xen-3.0"   (n0,r5)
/local/domain/5/image/init-p2m = "18446719884453740544"   (n0,r5)
/local/domain/5/console = ""   (n0,r5)
/local/domain/5/console/ring-ref = "3795445"   (n0,r5)
/local/domain/5/console/port = "2"   (n0,r5)
/local/domain/5/console/limit = "1048576"   (n0,r5)
/local/domain/5/console/type = "ioemu"   (n0,r5)
/local/domain/5/console/vnc-port = "5902"   (n0,r5)
/local/domain/5/console/tty = "/dev/pts/2"   (n0,r5)
/local/domain/5/cpu = ""   (n0,r5)
/local/domain/5/cpu/3 = ""   (n0,r5)
/local/domain/5/cpu/3/availability = "online"   (n0,r5)
/local/domain/5/cpu/1 = ""   (n0,r5)
/local/domain/5/cpu/1/availability = "online"   (n0,r5)
/local/domain/5/cpu/2 = ""   (n0,r5)
/local/domain/5/cpu/2/availability = "online"   (n0,r5)
/local/domain/5/cpu/0 = ""   (n0,r5)
/local/domain/5/cpu/0/availability = "online"   (n0,r5)
/local/domain/5/store = ""   (n0,r5)
/local/domain/5/store/ring-ref = "3795446"   (n0,r5)
/local/domain/5/store/port = "1"   (n0,r5)
/local/domain/5/description = ""   (n0,r5)
/local/domain/5/name = "mgaextws"   (n0,r5)
/local/domain/5/domid = "5"   (n0,r5)
/local/domain/6 = ""   (n0,r6)
/local/domain/6/vm = "/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe"   (n0,r6)
/local/domain/6/device = ""   (n6)
/local/domain/6/device/vkbd = ""   (n6)
/local/domain/6/device/vkbd/0 = ""   (n6,r0)
/local/domain/6/device/vkbd/0/protocol = "x86_64-abi"   (n6,r0)
/local/domain/6/device/vkbd/0/state = "4"   (n6,r0)
/local/domain/6/device/vkbd/0/backend-id = "0"   (n6,r0)
/local/domain/6/device/vkbd/0/backend = "/local/domain/0/backend/vkbd/6/0"   (n6,r0)
/local/domain/6/device/vkbd/0/request-abs-pointer = "1"   (n6,r0)
/local/domain/6/device/vkbd/0/page-ref = "3278054"   (n6,r0)
/local/domain/6/device/vkbd/0/event-channel = "17"   (n6,r0)
/local/domain/6/device/vfb = ""   (n6)
/local/domain/6/device/vfb/0 = ""   (n6,r0)
/local/domain/6/device/vfb/0/protocol = "x86_64-abi"   (n6,r0)
/local/domain/6/device/vfb/0/state = "4"   (n6,r0)
/local/domain/6/device/vfb/0/backend-id = "0"   (n6,r0)
/local/domain/6/device/vfb/0/backend = "/local/domain/0/backend/vfb/6/0"   (n6,r0)
/local/domain/6/device/vfb/0/page-ref = "3285571"   (n6,r0)
/local/domain/6/device/vfb/0/event-channel = "16"   (n6,r0)
/local/domain/6/device/vfb/0/feature-update = "1"   (n6,r0)
/local/domain/6/device/vbd = ""   (n6)
/local/domain/6/device/vbd/51713 = ""   (n6,r0)
/local/domain/6/device/vbd/51713/virtual-device = "51713"   (n6,r0)
/local/domain/6/device/vbd/51713/device-type = "disk"   (n6,r0)
/local/domain/6/device/vbd/51713/protocol = "x86_64-abi"   (n6,r0)
/local/domain/6/device/vbd/51713/backend-id = "0"   (n6,r0)
/local/domain/6/device/vbd/51713/state = "4"   (n6,r0)
/local/domain/6/device/vbd/51713/backend = "/local/domain/0/backend/vbd/6/51713"   (n6,r0)
/local/domain/6/device/vbd/51713/ring-ref = "8"   (n6,r0)
/local/domain/6/device/vbd/51713/event-channel = "18"   (n6,r0)
/local/domain/6/device/vbd/51714 = ""   (n6,r0)
/local/domain/6/device/vbd/51714/virtual-device = "51714"   (n6,r0)
/local/domain/6/device/vbd/51714/device-type = "disk"   (n6,r0)
/local/domain/6/device/vbd/51714/protocol = "x86_64-abi"   (n6,r0)
/local/domain/6/device/vbd/51714/backend-id = "0"   (n6,r0)
/local/domain/6/device/vbd/51714/state = "4"   (n6,r0)
/local/domain/6/device/vbd/51714/backend = "/local/domain/0/backend/vbd/6/51714"   (n6,r0)
/local/domain/6/device/vbd/51714/ring-ref = "9"   (n6,r0)
/local/domain/6/device/vbd/51714/event-channel = "19"   (n6,r0)
/local/domain/6/device/vif = ""   (n6)
/local/domain/6/device/vif/0 = ""   (n6,r0)
/local/domain/6/device/vif/0/mac = "00:16:3e:03:52:b9"   (n6,r0)
/local/domain/6/device/vif/0/handle = "0"   (n6,r0)
/local/domain/6/device/vif/0/protocol = "x86_64-abi"   (n6,r0)
/local/domain/6/device/vif/0/backend-id = "0"   (n6,r0)
/local/domain/6/device/vif/0/state = "4"   (n6,r0)
/local/domain/6/device/vif/0/backend = "/local/domain/0/backend/vif/6/0"   (n6,r0)
/local/domain/6/device/vif/0/tx-ring-ref = "1280"   (n6,r0)
/local/domain/6/device/vif/0/rx-ring-ref = "1281"   (n6,r0)
/local/domain/6/device/vif/0/event-channel = "20"   (n6,r0)
/local/domain/6/device/vif/0/request-rx-copy = "1"   (n6,r0)
/local/domain/6/device/vif/0/feature-rx-notify = "1"   (n6,r0)
/local/domain/6/device/vif/0/feature-no-csum-offload = "0"   (n6,r0)
/local/domain/6/device/vif/0/feature-sg = "1"   (n6,r0)
/local/domain/6/device/vif/0/feature-gso-tcpv4 = "1"   (n6,r0)
/local/domain/6/device/vif/1 = ""   (n6,r0)
/local/domain/6/device/vif/1/mac = "00:16:3e:3f:83:2a"   (n6,r0)
/local/domain/6/device/vif/1/handle = "1"   (n6,r0)
/local/domain/6/device/vif/1/protocol = "x86_64-abi"   (n6,r0)
/local/domain/6/device/vif/1/backend-id = "0"   (n6,r0)
/local/domain/6/device/vif/1/state = "4"   (n6,r0)
/local/domain/6/device/vif/1/backend = "/local/domain/0/backend/vif/6/1"   (n6,r0)
/local/domain/6/device/vif/1/tx-ring-ref = "1282"   (n6,r0)
/local/domain/6/device/vif/1/rx-ring-ref = "1283"   (n6,r0)
/local/domain/6/device/vif/1/event-channel = "21"   (n6,r0)
/local/domain/6/device/vif/1/request-rx-copy = "1"   (n6,r0)
/local/domain/6/device/vif/1/feature-rx-notify = "1"   (n6,r0)
/local/domain/6/device/vif/1/feature-no-csum-offload = "0"   (n6,r0)
/local/domain/6/device/vif/1/feature-sg = "1"   (n6,r0)
/local/domain/6/device/vif/1/feature-gso-tcpv4 = "1"   (n6,r0)
/local/domain/6/device/console = ""   (n6)
/local/domain/6/device/console/0 = ""   (n6,r0)
/local/domain/6/device/console/0/protocol = "x86_64-abi"   (n6,r0)
/local/domain/6/device/console/0/state = "1"   (n6,r0)
/local/domain/6/device/console/0/backend-id = "0"   (n6,r0)
/local/domain/6/device/console/0/backend = "/local/domain/0/backend/console/6/0"   (n6,r0)
/local/domain/6/device/suspend = ""   (n6)
/local/domain/6/device/suspend/event-channel = "15"   (n6)
/local/domain/6/control = ""   (n6)
/local/domain/6/control/platform-feature-multiprocessor-suspend = "1"   (n6)
/local/domain/6/error = ""   (n6)
/local/domain/6/memory = ""   (n6)
/local/domain/6/memory/target = "5242880"   (n6)
/local/domain/6/guest = ""   (n6)
/local/domain/6/hvmpv = ""   (n6)
/local/domain/6/data = ""   (n6)
/local/domain/6/serial = ""   (n0,r6)
/local/domain/6/serial/0 = ""   (n0,r6)
/local/domain/6/serial/0/tty = "/dev/pts/3"   (n0,r6)
/local/domain/6/device-misc = ""   (n0,r6)
/local/domain/6/device-misc/vif = ""   (n0,r6)
/local/domain/6/device-misc/vif/nextDeviceID = "2"   (n0,r6)
/local/domain/6/device-misc/console = ""   (n0,r6)
/local/domain/6/device-misc/console/nextDeviceID = "1"   (n0,r6)
/local/domain/6/image = ""   (n0,r6)
/local/domain/6/image/device-model-fifo = "/var/run/xend/dm-6-1342477950.fifo"   (n0,r6)
/local/domain/6/image/device-model-pid = "12020"   (n0,r6)
/local/domain/6/image/entry = "18446744071562076160"   (n0,r6)
/local/domain/6/image/loader = "generic"   (n0,r6)
/local/domain/6/image/guest-os = "linux"   (n0,r6)
/local/domain/6/image/features = ""   (n0,r6)
/local/domain/6/image/features/writable-descriptor-tables = "1"   (n0,r6)
/local/domain/6/image/features/supervisor-mode-kernel = "1"   (n0,r6)
/local/domain/6/image/features/pae-pgdir-above-4gb = "1"   (n0,r6)
/local/domain/6/image/features/writable-page-tables = "1"   (n0,r6)
/local/domain/6/image/features/auto-translated-physmap = "1"   (n0,r6)
/local/domain/6/image/hypercall-page = "18446744071562080256"   (n0,r6)
/local/domain/6/image/guest-version = "2.6"   (n0,r6)
/local/domain/6/image/paddr-offset = "0"   (n0,r6)
/local/domain/6/image/virt-base = "18446744071562067968"   (n0,r6)
/local/domain/6/image/suspend-cancel = "1"   (n0,r6)
/local/domain/6/image/xen-version = "xen-3.0"   (n0,r6)
/local/domain/6/image/init-p2m = "18446719884453740544"   (n0,r6)
/local/domain/6/console = ""   (n0,r6)
/local/domain/6/console/ring-ref = "2744821"   (n0,r6)
/local/domain/6/console/port = "2"   (n0,r6)
/local/domain/6/console/limit = "1048576"   (n0,r6)
/local/domain/6/console/type = "ioemu"   (n0,r6)
/local/domain/6/console/vnc-port = "5903"   (n0,r6)
/local/domain/6/console/tty = "/dev/pts/3"   (n0,r6)
/local/domain/6/cpu = ""   (n0,r6)
/local/domain/6/cpu/3 = ""   (n0,r6)
/local/domain/6/cpu/3/availability = "online"   (n0,r6)
/local/domain/6/cpu/1 = ""   (n0,r6)
/local/domain/6/cpu/1/availability = "online"   (n0,r6)
/local/domain/6/cpu/2 = ""   (n0,r6)
/local/domain/6/cpu/2/availability = "online"   (n0,r6)
/local/domain/6/cpu/0 = ""   (n0,r6)
/local/domain/6/cpu/0/availability = "online"   (n0,r6)
/local/domain/6/store = ""   (n0,r6)
/local/domain/6/store/ring-ref = "2744822"   (n0,r6)
/local/domain/6/store/port = "1"   (n0,r6)
/local/domain/6/description = ""   (n0,r6)
/local/domain/6/name = "mgaweb1"   (n0,r6)
/local/domain/6/domid = "6"   (n0,r6)
/local/pool = ""   (n0)
/local/pool/0 = ""   (n0)
/local/pool/0/other_config = ""   (n0)
/local/pool/0/description = "Pool-0"   (n0)
/local/pool/0/uuid = "f3e0f00a-ef58-45e7-0d95-422cac5d8970"   (n0)
/local/pool/0/name = "Pool-0"   (n0)
/vm = ""   (n0)
/vm/00000000-0000-0000-0000-000000000000 = ""   (n0)
/vm/00000000-0000-0000-0000-000000000000/on_xend_stop = "ignore"   (n0)
/vm/00000000-0000-0000-0000-000000000000/pool_name = "Pool-0"   (n0)
/vm/00000000-0000-0000-0000-000000000000/shadow_memory = "0"   (n0)
/vm/00000000-0000-0000-0000-000000000000/uuid = "00000000-0000-0000-0000-000000000000"   (r0)
/vm/00000000-0000-0000-0000-000000000000/on_reboot = "restart"   (n0)
/vm/00000000-0000-0000-0000-000000000000/image = "(linux (kernel ) (superpages 0) (nomigrate 0) (tsc_mode 0))"   (n0)
/vm/00000000-0000-0000-0000-000000000000/image/ostype = "linux"   (n0)
/vm/00000000-0000-0000-0000-000000000000/image/kernel = ""   (n0)
/vm/00000000-0000-0000-0000-000000000000/image/cmdline = ""   (r0)
/vm/00000000-0000-0000-0000-000000000000/image/ramdisk = ""   (n0)
/vm/00000000-0000-0000-0000-000000000000/on_poweroff = "destroy"   (n0)
/vm/00000000-0000-0000-0000-000000000000/bootloader_args = ""   (n0)
/vm/00000000-0000-0000-0000-000000000000/on_xend_start = "ignore"   (n0)
/vm/00000000-0000-0000-0000-000000000000/on_crash = "restart"   (n0)
/vm/00000000-0000-0000-0000-000000000000/xend = ""   (n0)
/vm/00000000-0000-0000-0000-000000000000/xend/restart_count = "0"   (n0)
/vm/00000000-0000-0000-0000-000000000000/vcpus = "4"   (n0)
/vm/00000000-0000-0000-0000-000000000000/vcpu_avail = "1"   (n0)
/vm/00000000-0000-0000-0000-000000000000/bootloader = ""   (n0)
/vm/00000000-0000-0000-0000-000000000000/name = "Domain-0"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2 = ""   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/image = "(linux (kernel ) (superpages 0) (videoram 4) (pci ()) (nomigrate 0) (tsc_mode 0) (device_model /usr/lib/xen/bin/qemu-dm) (notes (FEATURES 'writable_page_tables|writable_descriptor_tables|auto_translated_physmap|pae_pgdir_above_4gb|supervisor_mode_kernel') (VIRT_BASE 18446744071562067968) (GUEST_VERSION 2.6) (PADDR_OFFSET 0) (GUEST_OS linux) (HYPERCALL_PAGE 18446744071562080256) (LOADER generic) (INIT_P2M 18446719884453740544) (SUSPEND_CANCEL 1) (ENTRY 18446744071562076160) (XEN_VERSION xen-3.0)))"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/image/ostype = "linux"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/image/kernel = "/var/run/xend/boot/boot_kernel.b_lsKP"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/image/cmdline = "root=/dev/xvda1 resume=/dev/xvda1 splash=silent showopts console=tty1 console=xvc0 "   (n0,r3)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/image/ramdisk = "/var/run/xend/boot/boot_ramdisk._ivX_c"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device = ""   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vkbd = ""   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vkbd/0 = ""   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vkbd/0/frontend = "/local/domain/3/device/vkbd/0"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vkbd/0/frontend-id = "3"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vkbd/0/backend-id = "0"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vkbd/0/backend = "/local/domain/0/backend/vkbd/3/0"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vfb = ""   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vfb/0 = ""   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vfb/0/frontend = "/local/domain/3/device/vfb/0"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vfb/0/frontend-id = "3"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vfb/0/backend-id = "0"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vfb/0/backend = "/local/domain/0/backend/vfb/3/0"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vbd = ""   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vbd/51713 = ""   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vbd/51713/frontend = "/local/domain/3/device/vbd/51713"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vbd/51713/frontend-id = "3"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vbd/51713/backend-id = "0"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vbd/51713/backend = "/local/domain/0/backend/vbd/3/51713"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vif = ""   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vif/0 = ""   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vif/0/frontend = "/local/domain/3/device/vif/0"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vif/0/frontend-id = "3"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vif/0/backend-id = "0"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vif/0/backend = "/local/domain/0/backend/vif/3/0"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vif/1 = ""   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vif/1/frontend = "/local/domain/3/device/vif/1"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vif/1/frontend-id = "3"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vif/1/backend-id = "0"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vif/1/backend = "/local/domain/0/backend/vif/3/1"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/console = ""   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/console/0 = ""   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/console/0/frontend = "/local/domain/3/device/console/0"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/console/0/frontend-id = "3"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/console/0/backend-id = "0"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/console/0/backend = "/local/domain/0/backend/console/3/0"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vusb = ""   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vusb/0 = ""   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vusb/0/frontend = "/local/domain/3/device/vusb/0"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vusb/0/frontend-id = "3"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vusb/0/backend-id = "0"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vusb/0/backend = "/local/domain/0/backend/vusb/3/0"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/on_xend_stop = "ignore"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/pool_name = "Pool-0"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/shadow_memory = "0"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/uuid = "99a002ad-7c36-4a40-b9f5-545770f2d1b2"   (n0,r3)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/on_reboot = "restart"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/start_time = "1340009524.72"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/on_poweroff = "destroy"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/bootloader_args = "-q"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/on_xend_start = "ignore"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/on_crash = "destroy"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/xend = ""   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/xend/restart_count = "0"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/vcpus = "2"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/vcpu_avail = "3"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/bootloader = "/usr/bin/pygrub"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/name = "mgaca"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e = ""   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/image = "(linux (kernel ) (superpages 0) (videoram 4) (pci ()) (nomigrate 0) (tsc_mode 0) (device_model /usr/lib/xen/bin/qemu-dm) (notes (FEATURES 'writable_page_tables|writable_descriptor_tables|auto_translated_physmap|pae_pgdir_above_4gb|supervisor_mode_kernel') (VIRT_BASE 18446744071562067968) (GUEST_VERSION 2.6) (PADDR_OFFSET 0) (GUEST_OS linux) (HYPERCALL_PAGE 18446744071562080256) (LOADER generic) (INIT_P2M 18446719884453740544) (SUSPEND_CANCEL 1) (ENTRY 18446744071562076160) (XEN_VERSION xen-3.0)))"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/image/ostype = "linux"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/image/kernel = "/var/run/xend/boot/boot_kernel.SQr0Vi"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/image/cmdline = "root=/dev/xvda1 resume=/dev/xvda1 splash=silent showopts console=tty1 console=xvc0 "   (n0,r4)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/image/ramdisk = "/var/run/xend/boot/boot_ramdisk.SLsDmK"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device = ""   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/vkbd = ""   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/vkbd/0 = ""   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/vkbd/0/frontend = "/local/domain/4/device/vkbd/0"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/vkbd/0/frontend-id = "4"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/vkbd/0/backend-id = "0"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/vkbd/0/backend = "/local/domain/0/backend/vkbd/4/0"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/vfb = ""   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/vfb/0 = ""   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/vfb/0/frontend = "/local/domain/4/device/vfb/0"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/vfb/0/frontend-id = "4"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/vfb/0/backend-id = "0"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/vfb/0/backend = "/local/domain/0/backend/vfb/4/0"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/vbd = ""   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/vbd/51713 = ""   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/vbd/51713/frontend = "/local/domain/4/device/vbd/51713"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/vbd/51713/frontend-id = "4"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/vbd/51713/backend-id = "0"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/vbd/51713/backend = "/local/domain/0/backend/vbd/4/51713"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/vbd/51714 = ""   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/vbd/51714/frontend = "/local/domain/4/device/vbd/51714"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/vbd/51714/frontend-id = "4"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/vbd/51714/backend-id = "0"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/vbd/51714/backend = "/local/domain/0/backend/vbd/4/51714"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/vif = ""   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/vif/0 = ""   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/vif/0/frontend = "/local/domain/4/device/vif/0"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/vif/0/frontend-id = "4"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/vif/0/backend-id = "0"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/vif/0/backend = "/local/domain/0/backend/vif/4/0"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/vif/1 = ""   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/vif/1/frontend = "/local/domain/4/device/vif/1"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/vif/1/frontend-id = "4"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/vif/1/backend-id = "0"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/vif/1/backend = "/local/domain/0/backend/vif/4/1"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/console = ""   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/console/0 = ""   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/console/0/frontend = "/local/domain/4/device/console/0"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/console/0/frontend-id = "4"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/console/0/backend-id = "0"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/console/0/backend = "/local/domain/0/backend/console/4/0"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/on_xend_stop = "ignore"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/pool_name = "Pool-0"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/shadow_memory = "0"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/uuid = "b38923a5-69b1-493e-9220-e81440b63d4e"   (n0,r4)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/on_reboot = "restart"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/start_time = "1342477947.59"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/on_poweroff = "destroy"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/bootloader_args = "-q"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/on_xend_start = "ignore"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/on_crash = "destroy"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/xend = ""   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/xend/restart_count = "0"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/vcpus = "4"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/vcpu_avail = "15"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/bootloader = "/usr/bin/pygrub"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/name = "mgatrain"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44 = ""   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/image = "(linux (kernel ) (superpages 0) (videoram 4) (pci ()) (nomigrate 0) (tsc_mode 0) (device_model /usr/lib/xen/bin/qemu-dm) (notes (FEATURES 'writable_page_tables|writable_descriptor_tables|auto_translated_physmap|pae_pgdir_above_4gb|supervisor_mode_kernel') (VIRT_BASE 18446744071562067968) (GUEST_VERSION 2.6) (PADDR_OFFSET 0) (GUEST_OS linux) (HYPERCALL_PAGE 18446744071562080256) (LOADER generic) (INIT_P2M 18446719884453740544) (SUSPEND_CANCEL 1) (ENTRY 18446744071562076160) (XEN_VERSION xen-3.0)))"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/image/ostype = "linux"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/image/kernel = "/var/run/xend/boot/boot_kernel.U6MAIh"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/image/cmdline = "root=/dev/xvda1 resume=/dev/xvda2 splash=silent showopts console=tty1 console=xvc0 "   (n0,r5)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/image/ramdisk = "/var/run/xend/boot/boot_ramdisk.464MmW"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device = ""   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/vkbd = ""   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/vkbd/0 = ""   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/vkbd/0/frontend = "/local/domain/5/device/vkbd/0"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/vkbd/0/frontend-id = "5"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/vkbd/0/backend-id = "0"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/vkbd/0/backend = "/local/domain/0/backend/vkbd/5/0"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/vfb = ""   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/vfb/0 = ""   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/vfb/0/frontend = "/local/domain/5/device/vfb/0"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/vfb/0/frontend-id = "5"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/vfb/0/backend-id = "0"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/vfb/0/backend = "/local/domain/0/backend/vfb/5/0"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/vbd = ""   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/vbd/51713 = ""   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/vbd/51713/frontend = "/local/domain/5/device/vbd/51713"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/vbd/51713/frontend-id = "5"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/vbd/51713/backend-id = "0"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/vbd/51713/backend = "/local/domain/0/backend/vbd/5/51713"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/vbd/51714 = ""   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/vbd/51714/frontend = "/local/domain/5/device/vbd/51714"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/vbd/51714/frontend-id = "5"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/vbd/51714/backend-id = "0"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/vbd/51714/backend = "/local/domain/0/backend/vbd/5/51714"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/vif = ""   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/vif/0 = ""   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/vif/0/frontend = "/local/domain/5/device/vif/0"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/vif/0/frontend-id = "5"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/vif/0/backend-id = "0"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/vif/0/backend = "/local/domain/0/backend/vif/5/0"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/vif/1 = ""   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/vif/1/frontend = "/local/domain/5/device/vif/1"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/vif/1/frontend-id = "5"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/vif/1/backend-id = "0"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/vif/1/backend = "/local/domain/0/backend/vif/5/1"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/console = ""   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/console/0 = ""   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/console/0/frontend = "/local/domain/5/device/console/0"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/console/0/frontend-id = "5"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/console/0/backend-id = "0"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/console/0/backend = "/local/domain/0/backend/console/5/0"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/on_xend_stop = "ignore"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/pool_name = "Pool-0"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/shadow_memory = "0"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/uuid = "3b498296-a711-40b7-b9db-e0f257e5bb44"   (n0,r5)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/on_reboot = "restart"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/start_time = "1342477948.64"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/on_poweroff = "destroy"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/bootloader_args = "-q"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/on_xend_start = "ignore"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/on_crash = "destroy"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/xend = ""   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/xend/restart_count = "0"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/vcpus = "4"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/vcpu_avail = "15"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/bootloader = "/usr/bin/pygrub"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/name = "mgaextws"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe = ""   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/image = "(linux (kernel ) (superpages 0) (videoram 4) (pci ()) (nomigrate 0) (tsc_mode 0) (device_model /usr/lib/xen/bin/qemu-dm) (notes (FEATURES 'writable_page_tables|writable_descriptor_tables|auto_translated_physmap|pae_pgdir_above_4gb|supervisor_mode_kernel') (VIRT_BASE 18446744071562067968) (GUEST_VERSION 2.6) (PADDR_OFFSET 0) (GUEST_OS linux) (HYPERCALL_PAGE 18446744071562080256) (LOADER generic) (INIT_P2M 18446719884453740544) (SUSPEND_CANCEL 1) (ENTRY 18446744071562076160) (XEN_VERSION xen-3.0)))"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/image/ostype = "linux"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/image/kernel = "/var/run/xend/boot/boot_kernel.4uMBhR"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/image/cmdline = "root=/dev/xvda1 resume=/dev/xvda2 splash=silent showopts console=tty1 console=xvc0 "   (n0,r6)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/image/ramdisk = "/var/run/xend/boot/boot_ramdisk.BBHNyj"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device = ""   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/vkbd = ""   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/vkbd/0 = ""   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/vkbd/0/frontend = "/local/domain/6/device/vkbd/0"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/vkbd/0/frontend-id = "6"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/vkbd/0/backend-id = "0"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/vkbd/0/backend = "/local/domain/0/backend/vkbd/6/0"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/vfb = ""   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/vfb/0 = ""   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/vfb/0/frontend = "/local/domain/6/device/vfb/0"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/vfb/0/frontend-id = "6"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/vfb/0/backend-id = "0"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/vfb/0/backend = "/local/domain/0/backend/vfb/6/0"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/vbd = ""   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/vbd/51713 = ""   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/vbd/51713/frontend = "/local/domain/6/device/vbd/51713"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/vbd/51713/frontend-id = "6"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/vbd/51713/backend-id = "0"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/vbd/51713/backend = "/local/domain/0/backend/vbd/6/51713"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/vbd/51714 = ""   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/vbd/51714/frontend = "/local/domain/6/device/vbd/51714"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/vbd/51714/frontend-id = "6"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/vbd/51714/backend-id = "0"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/vbd/51714/backend = "/local/domain/0/backend/vbd/6/51714"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/vif = ""   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/vif/0 = ""   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/vif/0/frontend = "/local/domain/6/device/vif/0"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/vif/0/frontend-id = "6"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/vif/0/backend-id = "0"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/vif/0/backend = "/local/domain/0/backend/vif/6/0"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/vif/1 = ""   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/vif/1/frontend = "/local/domain/6/device/vif/1"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/vif/1/frontend-id = "6"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/vif/1/backend-id = "0"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/vif/1/backend = "/local/domain/0/backend/vif/6/1"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/console = ""   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/console/0 = ""   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/console/0/frontend = "/local/domain/6/device/console/0"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/console/0/frontend-id = "6"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/console/0/backend-id = "0"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/console/0/backend = "/local/domain/0/backend/console/6/0"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/on_xend_stop = "ignore"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/pool_name = "Pool-0"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/shadow_memory = "0"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/uuid = "93284ae3-45f6-4308-adc1-be4cdd5c6cbe"   (n0,r6)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/on_reboot = "restart"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/start_time = "1342477950.11"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/on_poweroff = "destroy"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/bootloader_args = "-q"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/on_xend_start = "ignore"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/on_crash = "destroy"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/xend = ""   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/xend/restart_count = "0"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/vcpus = "4"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/vcpu_avail = "15"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/bootloader = "/usr/bin/pygrub"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/name = "mgaweb1"   (n0)

--------------090001030600030704090908
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--------------090001030600030704090908--


From xen-devel-bounces@lists.xen.org Mon Aug 20 19:49:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Aug 2012 19:49:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3Xya-0005x4-Se; Mon, 20 Aug 2012 19:49:28 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <tparker@cbnco.com>) id 1T3XyY-0005we-Jw
	for xen-devel@lists.xen.org; Mon, 20 Aug 2012 19:49:27 +0000
X-Env-Sender: tparker@cbnco.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1345492126!10223014!1
X-Originating-IP: [207.164.182.72]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22696 invoked from network); 20 Aug 2012 19:48:47 -0000
Received: from smtp.cbnco.com (HELO smtp.cbnco.com) (207.164.182.72)
	by server-2.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 20 Aug 2012 19:48:47 -0000
Received: from localhost (localhost [127.0.0.1])
	by smtp.cbnco.com (Postfix) with ESMTP id 9D69510958DF;
	Mon, 20 Aug 2012 15:48:45 -0400 (EDT)
Received: from smtp.cbnco.com ([127.0.0.1])
	by localhost (mail.cbnco.com [127.0.0.1]) (amavisd-new, port 10024)
	with ESMTP id 05350-08; Mon, 20 Aug 2012 15:48:45 -0400 (EDT)
Received: from [172.20.23.60] (dmzgw2.cbnco.com [207.164.182.65])
	by smtp.cbnco.com (Postfix) with ESMTPSA id 4EFDE109553C;
	Mon, 20 Aug 2012 15:48:45 -0400 (EDT)
Message-ID: <5032949D.5010805@cbnco.com>
Date: Mon, 20 Aug 2012 15:48:45 -0400
From: Tom Parker <tparker@cbnco.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120713 Thunderbird/14.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <502BD75B.9040301@cbnco.com>
	<1345109102.27489.38.camel@zakaz.uk.xensource.com>
In-Reply-To: <1345109102.27489.38.camel@zakaz.uk.xensource.com>
Content-Type: multipart/mixed; boundary="------------090001030600030704090908"
X-Virus-Scanned: amavisd-new at cbnco.com
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] PV USB Use Case for Xen 4.x
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.
--------------090001030600030704090908
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: 7bit

Hi Ian

Sorry to be slow to respond.  I missed this e-mail when you sent it.  I
will try to get the information you are looking for.
On 08/16/2012 05:25 AM, Ian Campbell wrote:
> On Wed, 2012-08-15 at 18:07 +0100, Tom Parker wrote:
>> Good Afternoon.  My colleague Stefan (sstan) was asked on the IRC
>> channel to provide our use case for PV USB in our environment.  This
>> is possible with the current xm stack but not available with the xl
>> stack.
> Thanks for doing this.
>
> At first glance this doesn't seem like something which we could do for
> 4.2.0 at this stage, although we should do it for 4.3 and potentially
> consider it for 4.2.1.
>
> Is it something which you guys might be interested in providing patches
> for? It is at heart a moderately simple C coding exercise, I'm more than
> happy to provide guidance etc. Much of the generic framework already
> exists and there are examples in the form of other device types.
At this time we can't provide patches.  We are a sysadmin group and have
no programmers with experience in this.  It would take me a long time to
get back into C programming :)  Sorry.
>
>> Currently we use PVUSB to attach a USB Smartcard reader through our
>> dom0 (SLES 11 SP1) running on an HP Blade Server with the Token
>> mounted on an internal USB Port to our domU CA server (SLES 11)
>>
>> The config file syntax is broken so we have to manually attach (I have
>> it scripted) whenever our hosts reboot (which is almost never.)
> Can you give an example of what the syntax *should* be?
There used to be some data in the wiki, or in an initial presentation on
PVUSB, but as it has never worked for me I don't remember how it worked.
>
>> On the dom0 server I have to do the following steps:
>>
>> /usr/sbin/xm usb-list-assignable-devices (get the bus-id of the USB
>> device)
>> /usr/sbin/xm usb-hc-create $Domain 2 2 (Create a USB 2.0 Root Hub with
>> 2 ports in $Domain)
>> /usr/sbin/xm usb-attach $Domain $DevId $PortNumber $BusId (Attach the
>> USB bus-id found in step 1 to the hub created in step 2)
> What (if anything) is the output of these commands?
Nothing.  They return silently if there is no error.
>
> Do you need to do anything to make a device "assignable"? (I get no
> output from the list command for example)
You just have to have a device that is not attached anywhere else.   For
example:

mgaxen1:~ # xm usb-list-assignable-devices
6-1          : ID 03f0:1027 HP Virtual Keyboard

That means that for the xm usb-attach step (/usr/sbin/xm usb-attach $Domain
$DevId $PortNumber $BusId) I would use 6-1 as the $BusId.
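The three xm steps quoted above can be collected into a small wrapper script (a hedged sketch only; the domain name "mgaca" and bus-id "6-1" are illustrative assumptions taken from the example output, and the XM indirection is added so the script can be dry-run on a host without xm installed):

```shell
#!/bin/sh
# Sketch of the attach wrapper mentioned earlier ("I have it scripted").
# Substitute the bus-id that "xm usb-list-assignable-devices" reports.
XM="${XM:-/usr/sbin/xm}"
command -v "$XM" >/dev/null 2>&1 || XM="echo xm"   # dry-run fallback off a Xen host

Domain="mgaca"     # target domU (assumed name)
BusId="6-1"        # bus-id from step 1 (usb-list-assignable-devices)
DevId="0"          # the virtual root hub created below
PortNumber="1"     # port on that hub

# Step 2: create a USB 2.0 root hub with 2 ports in the domain
$XM usb-hc-create "$Domain" 2 2

# Step 3: attach the assignable device to the hub's port
$XM usb-attach "$Domain" "$DevId" "$PortNumber" "$BusId"
```

Like the real commands, the xm invocations return silently on success, so the script produces no output when everything works.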

>
>> On the domU the lsusb looks like this after the above (before it
>> returns nothing)
>>
>> mgaca:~ # lsusb 
>> Bus 001 Device 002: ID 04e6:5116 SCM Microsystems, Inc. SCR331-LC1
>> SmartCard Reader
>> Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
> Can you post the output of "xenstore-ls -fp" while the device is
> connected?
This is the part of the output that refers to the vusb:
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vusb = ""   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vusb/0 = ""   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vusb/0/frontend =
"/local/domain/3/device/vusb/0"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vusb/0/frontend-id =
"3"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vusb/0/backend-id =
"0"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vusb/0/backend =
"/local/domain/0/backend/vusb/3/0"   (n0)

The rest of the output I have attached as a file.
>
> Do you happen to know if this uses the PVUSB drivers or some other
> mechanism? "lsmod" in both dom0 and domU should provide a clue if the
> drivers are loaded.
Looks like it: 

dom0
mgaxen1:~ # lsmod | grep usb
usbbk                  23503  0
xenbus_be               3952  4 usbbk,netbk,blkbk,blktap
usbhid                 50900  0
hid                    83977  1 usbhid
usbcore               221920  5 usbbk,usbhid,uhci_hcd,ehci_hcd

domU
mgaca:~ # lsmod | grep usb
usbcore               220777  3 xen_hcd
>
> Does this work for both PV and HVM guests or do you only use one or the
> other?
I only use PV guests.
>
>> Once I have done this I can use the USB device in the domU as if it was
>> directly connected. 
>>
>> Thanks for your time.
> Thank you for describing the functionality.
>
> Ian.
>
>


--------------090001030600030704090908
Content-Type: text/plain; charset=UTF-8;
 name="xenstore-ls.fp.out"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename="xenstore-ls.fp.out"

/tool = ""   (n0)
/tool/xenstored = ""   (n0)
/local = ""   (n0)
/local/domain = ""   (n0)
/local/domain/0 = ""   (r0)
/local/domain/0/vm = "/vm/00000000-0000-0000-0000-000000000000"   (r0)
/local/domain/0/device = ""   (n0)
/local/domain/0/control = ""   (n0)
/local/domain/0/control/platform-feature-multiprocessor-suspend = "1"   (n0)
/local/domain/0/error = ""   (n0)
/local/domain/0/memory = ""   (n0)
/local/domain/0/memory/target = "510464"   (n0)
/local/domain/0/guest = ""   (n0)
/local/domain/0/hvmpv = ""   (n0)
/local/domain/0/data = ""   (n0)
/local/domain/0/cpu = ""   (r0)
/local/domain/0/cpu/1 = ""   (r0)
/local/domain/0/cpu/1/availability = "offline"   (r0)
/local/domain/0/cpu/3 = ""   (r0)
/local/domain/0/cpu/3/availability = "offline"   (r0)
/local/domain/0/cpu/2 = ""   (r0)
/local/domain/0/cpu/2/availability = "offline"   (r0)
/local/domain/0/cpu/0 = ""   (r0)
/local/domain/0/cpu/0/availability = "online"   (r0)
/local/domain/0/description = ""   (r0)
/local/domain/0/console = ""   (r0)
/local/domain/0/console/limit = "1048576"   (r0)
/local/domain/0/console/type = "xenconsoled"   (r0)
/local/domain/0/domid = "0"   (r0)
/local/domain/0/name = "Domain-0"   (r0)
/local/domain/0/backend = ""   (r0)
/local/domain/0/backend/vkbd = ""   (r0)
/local/domain/0/backend/vkbd/3 = ""   (r0)
/local/domain/0/backend/vkbd/3/0 = ""   (n0,r3)
/local/domain/0/backend/vkbd/3/0/frontend-id = "3"   (n0,r3)
/local/domain/0/backend/vkbd/3/0/domain = "mgaca"   (n0,r3)
/local/domain/0/backend/vkbd/3/0/frontend = "/local/domain/3/device/vkbd/0"   (n0,r3)
/local/domain/0/backend/vkbd/3/0/state = "4"   (n0,r3)
/local/domain/0/backend/vkbd/3/0/online = "1"   (n0,r3)
/local/domain/0/backend/vkbd/3/0/feature-abs-pointer = "1"   (n0,r3)
/local/domain/0/backend/vkbd/3/0/hotplug-status = "connected"   (n0,r3)
/local/domain/0/backend/vkbd/4 = ""   (r0)
/local/domain/0/backend/vkbd/4/0 = ""   (n0,r4)
/local/domain/0/backend/vkbd/4/0/frontend-id = "4"   (n0,r4)
/local/domain/0/backend/vkbd/4/0/domain = "mgatrain"   (n0,r4)
/local/domain/0/backend/vkbd/4/0/frontend = "/local/domain/4/device/vkbd/0"   (n0,r4)
/local/domain/0/backend/vkbd/4/0/state = "4"   (n0,r4)
/local/domain/0/backend/vkbd/4/0/online = "1"   (n0,r4)
/local/domain/0/backend/vkbd/4/0/feature-abs-pointer = "1"   (n0,r4)
/local/domain/0/backend/vkbd/4/0/hotplug-status = "connected"   (n0,r4)
/local/domain/0/backend/vkbd/5 = ""   (r0)
/local/domain/0/backend/vkbd/5/0 = ""   (n0,r5)
/local/domain/0/backend/vkbd/5/0/frontend-id = "5"   (n0,r5)
/local/domain/0/backend/vkbd/5/0/domain = "mgaextws"   (n0,r5)
/local/domain/0/backend/vkbd/5/0/frontend = "/local/domain/5/device/vkbd/0"   (n0,r5)
/local/domain/0/backend/vkbd/5/0/state = "4"   (n0,r5)
/local/domain/0/backend/vkbd/5/0/online = "1"   (n0,r5)
/local/domain/0/backend/vkbd/5/0/feature-abs-pointer = "1"   (n0,r5)
/local/domain/0/backend/vkbd/5/0/hotplug-status = "connected"   (n0,r5)
/local/domain/0/backend/vkbd/6 = ""   (r0)
/local/domain/0/backend/vkbd/6/0 = ""   (n0,r6)
/local/domain/0/backend/vkbd/6/0/frontend-id = "6"   (n0,r6)
/local/domain/0/backend/vkbd/6/0/domain = "mgaweb1"   (n0,r6)
/local/domain/0/backend/vkbd/6/0/frontend = "/local/domain/6/device/vkbd/0"   (n0,r6)
/local/domain/0/backend/vkbd/6/0/state = "4"   (n0,r6)
/local/domain/0/backend/vkbd/6/0/online = "1"   (n0,r6)
/local/domain/0/backend/vkbd/6/0/feature-abs-pointer = "1"   (n0,r6)
/local/domain/0/backend/vkbd/6/0/hotplug-status = "connected"   (n0,r6)
/local/domain/0/backend/vfb = ""   (r0)
/local/domain/0/backend/vfb/3 = ""   (r0)
/local/domain/0/backend/vfb/3/0 = ""   (n0,r3)
/local/domain/0/backend/vfb/3/0/vncunused = "1"   (n0,r3)
/local/domain/0/backend/vfb/3/0/domain = "mgaca"   (n0,r3)
/local/domain/0/backend/vfb/3/0/vnc = "1"   (n0,r3)
/local/domain/0/backend/vfb/3/0/xauthority = "//.Xauthority"   (n0,r3)
/local/domain/0/backend/vfb/3/0/frontend = "/local/domain/3/device/vfb/0"   (n0,r3)
/local/domain/0/backend/vfb/3/0/state = "4"   (n0,r3)
/local/domain/0/backend/vfb/3/0/keymap = "en-us"   (n0,r3)
/local/domain/0/backend/vfb/3/0/online = "1"   (n0,r3)
/local/domain/0/backend/vfb/3/0/frontend-id = "3"   (n0,r3)
/local/domain/0/backend/vfb/3/0/uuid = "7e434190-c8a6-a205-2f4b-12b2c98dc056"   (n0,r3)
/local/domain/0/backend/vfb/3/0/feature-resize = "1"   (n0,r3)
/local/domain/0/backend/vfb/3/0/hotplug-status = "connected"   (n0,r3)
/local/domain/0/backend/vfb/3/0/request-update = "1"   (n0,r3)
/local/domain/0/backend/vfb/3/0/location = "127.0.0.1:5901"   (n0,r3)
/local/domain/0/backend/vfb/4 = ""   (r0)
/local/domain/0/backend/vfb/4/0 = ""   (n0,r4)
/local/domain/0/backend/vfb/4/0/vncunused = "1"   (n0,r4)
/local/domain/0/backend/vfb/4/0/domain = "mgatrain"   (n0,r4)
/local/domain/0/backend/vfb/4/0/vnc = "1"   (n0,r4)
/local/domain/0/backend/vfb/4/0/xauthority = "//.Xauthority"   (n0,r4)
/local/domain/0/backend/vfb/4/0/frontend = "/local/domain/4/device/vfb/0"   (n0,r4)
/local/domain/0/backend/vfb/4/0/state = "4"   (n0,r4)
/local/domain/0/backend/vfb/4/0/keymap = "en-us"   (n0,r4)
/local/domain/0/backend/vfb/4/0/online = "1"   (n0,r4)
/local/domain/0/backend/vfb/4/0/frontend-id = "4"   (n0,r4)
/local/domain/0/backend/vfb/4/0/uuid = "2aaf7c89-aac3-73df-900f-e679de9174c2"   (n0,r4)
/local/domain/0/backend/vfb/4/0/feature-resize = "1"   (n0,r4)
/local/domain/0/backend/vfb/4/0/hotplug-status = "connected"   (n0,r4)
/local/domain/0/backend/vfb/4/0/location = "127.0.0.1:5900"   (n0,r4)
/local/domain/0/backend/vfb/4/0/request-update = "1"   (n0,r4)
/local/domain/0/backend/vfb/5 = ""   (r0)
/local/domain/0/backend/vfb/5/0 = ""   (n0,r5)
/local/domain/0/backend/vfb/5/0/vncunused = "1"   (n0,r5)
/local/domain/0/backend/vfb/5/0/domain = "mgaextws"   (n0,r5)
/local/domain/0/backend/vfb/5/0/vnc = "1"   (n0,r5)
/local/domain/0/backend/vfb/5/0/xauthority = "//.Xauthority"   (n0,r5)
/local/domain/0/backend/vfb/5/0/frontend = "/local/domain/5/device/vfb/0"   (n0,r5)
/local/domain/0/backend/vfb/5/0/state = "4"   (n0,r5)
/local/domain/0/backend/vfb/5/0/keymap = "en-us"   (n0,r5)
/local/domain/0/backend/vfb/5/0/online = "1"   (n0,r5)
/local/domain/0/backend/vfb/5/0/frontend-id = "5"   (n0,r5)
/local/domain/0/backend/vfb/5/0/uuid = "7c1ffff8-6c55-0b90-5670-5a1bca0e1e2e"   (n0,r5)
/local/domain/0/backend/vfb/5/0/feature-resize = "1"   (n0,r5)
/local/domain/0/backend/vfb/5/0/hotplug-status = "connected"   (n0,r5)
/local/domain/0/backend/vfb/5/0/location = "127.0.0.1:5902"   (n0,r5)
/local/domain/0/backend/vfb/5/0/request-update = "1"   (n0,r5)
/local/domain/0/backend/vfb/6 = ""   (r0)
/local/domain/0/backend/vfb/6/0 = ""   (n0,r6)
/local/domain/0/backend/vfb/6/0/vncunused = "1"   (n0,r6)
/local/domain/0/backend/vfb/6/0/domain = "mgaweb1"   (n0,r6)
/local/domain/0/backend/vfb/6/0/vnc = "1"   (n0,r6)
/local/domain/0/backend/vfb/6/0/xauthority = "//.Xauthority"   (n0,r6)
/local/domain/0/backend/vfb/6/0/frontend = "/local/domain/6/device/vfb/0"   (n0,r6)
/local/domain/0/backend/vfb/6/0/state = "4"   (n0,r6)
/local/domain/0/backend/vfb/6/0/keymap = "en-us"   (n0,r6)
/local/domain/0/backend/vfb/6/0/online = "1"   (n0,r6)
/local/domain/0/backend/vfb/6/0/frontend-id = "6"   (n0,r6)
/local/domain/0/backend/vfb/6/0/uuid = "f486c7a3-a900-a671-b24c-365ce03b5b03"   (n0,r6)
/local/domain/0/backend/vfb/6/0/feature-resize = "1"   (n0,r6)
/local/domain/0/backend/vfb/6/0/hotplug-status = "connected"   (n0,r6)
/local/domain/0/backend/vfb/6/0/location = "127.0.0.1:5903"   (n0,r6)
/local/domain/0/backend/vfb/6/0/request-update = "1"   (n0,r6)
/local/domain/0/backend/vbd = ""   (r0)
/local/domain/0/backend/vbd/3 = ""   (r0)
/local/domain/0/backend/vbd/3/51713 = ""   (n0,r3)
/local/domain/0/backend/vbd/3/51713/domain = "mgaca"   (n0,r3)
/local/domain/0/backend/vbd/3/51713/frontend = "/local/domain/3/device/vbd/51713"   (n0,r3)
/local/domain/0/backend/vbd/3/51713/uuid = "4da405c9-6afe-e5d4-3351-02945da036c6"   (n0,r3)
/local/domain/0/backend/vbd/3/51713/bootable = "1"   (n0,r3)
/local/domain/0/backend/vbd/3/51713/dev = "xvda1"   (n0,r3)
/local/domain/0/backend/vbd/3/51713/state = "4"   (n0,r3)
/local/domain/0/backend/vbd/3/51713/params = "/dev/vg-mgadomu2/mgaca_os"   (n0,r3)
/local/domain/0/backend/vbd/3/51713/mode = "w"   (n0,r3)
/local/domain/0/backend/vbd/3/51713/online = "1"   (n0,r3)
/local/domain/0/backend/vbd/3/51713/frontend-id = "3"   (n0,r3)
/local/domain/0/backend/vbd/3/51713/type = "phy"   (n0,r3)
/local/domain/0/backend/vbd/3/51713/physical-device = "fd:4"   (n0,r3)
/local/domain/0/backend/vbd/3/51713/hotplug-status = "connected"   (n0,r3)
/local/domain/0/backend/vbd/3/51713/feature-barrier = "1"   (n0,r3)
/local/domain/0/backend/vbd/3/51713/sectors = "62914560"   (n0,r3)
/local/domain/0/backend/vbd/3/51713/info = "0"   (n0,r3)
/local/domain/0/backend/vbd/3/51713/sector-size = "512"   (n0,r3)
/local/domain/0/backend/vbd/4 = ""   (r0)
/local/domain/0/backend/vbd/4/51713 = ""   (n0,r4)
/local/domain/0/backend/vbd/4/51713/domain = "mgatrain"   (n0,r4)
/local/domain/0/backend/vbd/4/51713/frontend = "/local/domain/4/device/vbd/51713"   (n0,r4)
/local/domain/0/backend/vbd/4/51713/uuid = "c78be812-74d5-1ace-fd61-caba4cc21937"   (n0,r4)
/local/domain/0/backend/vbd/4/51713/bootable = "1"   (n0,r4)
/local/domain/0/backend/vbd/4/51713/dev = "xvda1"   (n0,r4)
/local/domain/0/backend/vbd/4/51713/state = "4"   (n0,r4)
/local/domain/0/backend/vbd/4/51713/params = "/dev/vg-mgadomu2/mgatrain_os"   (n0,r4)
/local/domain/0/backend/vbd/4/51713/mode = "w"   (n0,r4)
/local/domain/0/backend/vbd/4/51713/online = "1"   (n0,r4)
/local/domain/0/backend/vbd/4/51713/frontend-id = "4"   (n0,r4)
/local/domain/0/backend/vbd/4/51713/type = "phy"   (n0,r4)
/local/domain/0/backend/vbd/4/51713/physical-device = "fd:5"   (n0,r4)
/local/domain/0/backend/vbd/4/51713/hotplug-status = "connected"   (n0,r4)
/local/domain/0/backend/vbd/4/51713/feature-barrier = "1"   (n0,r4)
/local/domain/0/backend/vbd/4/51713/sectors = "62914560"   (n0,r4)
/local/domain/0/backend/vbd/4/51713/info = "0"   (n0,r4)
/local/domain/0/backend/vbd/4/51713/sector-size = "512"   (n0,r4)
/local/domain/0/backend/vbd/4/51714 = ""   (n0,r4)
/local/domain/0/backend/vbd/4/51714/domain = "mgatrain"   (n0,r4)
/local/domain/0/backend/vbd/4/51714/frontend = "/local/domain/4/device/vbd/51714"   (n0,r4)
/local/domain/0/backend/vbd/4/51714/uuid = "dc438cec-0611-c938-009f-3c97428aa18d"   (n0,r4)
/local/domain/0/backend/vbd/4/51714/bootable = "0"   (n0,r4)
/local/domain/0/backend/vbd/4/51714/dev = "xvda2"   (n0,r4)
/local/domain/0/backend/vbd/4/51714/state = "4"   (n0,r4)
/local/domain/0/backend/vbd/4/51714/params = "/dev/vg-mgadomu2/mgatrain_swap"   (n0,r4)
/local/domain/0/backend/vbd/4/51714/mode = "w"   (n0,r4)
/local/domain/0/backend/vbd/4/51714/online = "1"   (n0,r4)
/local/domain/0/backend/vbd/4/51714/frontend-id = "4"   (n0,r4)
/local/domain/0/backend/vbd/4/51714/type = "phy"   (n0,r4)
/local/domain/0/backend/vbd/4/51714/physical-device = "fd:6"   (n0,r4)
/local/domain/0/backend/vbd/4/51714/hotplug-status = "connected"   (n0,r4)
/local/domain/0/backend/vbd/4/51714/feature-barrier = "1"   (n0,r4)
/local/domain/0/backend/vbd/4/51714/sectors = "4194304"   (n0,r4)
/local/domain/0/backend/vbd/4/51714/info = "0"   (n0,r4)
/local/domain/0/backend/vbd/4/51714/sector-size = "512"   (n0,r4)
/local/domain/0/backend/vbd/5 = ""   (r0)
/local/domain/0/backend/vbd/5/51713 = ""   (n0,r5)
/local/domain/0/backend/vbd/5/51713/domain = "mgaextws"   (n0,r5)
/local/domain/0/backend/vbd/5/51713/frontend = "/local/domain/5/device/vbd/51713"   (n0,r5)
/local/domain/0/backend/vbd/5/51713/uuid = "7237d864-04ca-1a4e-964c-107aa25c4615"   (n0,r5)
/local/domain/0/backend/vbd/5/51713/bootable = "1"   (n0,r5)
/local/domain/0/backend/vbd/5/51713/dev = "xvda1"   (n0,r5)
/local/domain/0/backend/vbd/5/51713/state = "4"   (n0,r5)
/local/domain/0/backend/vbd/5/51713/params = "/dev/vg-mgadomu2/mgaextws_os"   (n0,r5)
/local/domain/0/backend/vbd/5/51713/mode = "w"   (n0,r5)
/local/domain/0/backend/vbd/5/51713/online = "1"   (n0,r5)
/local/domain/0/backend/vbd/5/51713/frontend-id = "5"   (n0,r5)
/local/domain/0/backend/vbd/5/51713/type = "phy"   (n0,r5)
/local/domain/0/backend/vbd/5/51713/physical-device = "fd:16"   (n0,r5)
/local/domain/0/backend/vbd/5/51713/hotplug-status = "connected"   (n0,r5)
/local/domain/0/backend/vbd/5/51713/feature-barrier = "1"   (n0,r5)
/local/domain/0/backend/vbd/5/51713/sectors = "62914560"   (n0,r5)
/local/domain/0/backend/vbd/5/51713/info = "0"   (n0,r5)
/local/domain/0/backend/vbd/5/51713/sector-size = "512"   (n0,r5)
/local/domain/0/backend/vbd/5/51714 = ""   (n0,r5)
/local/domain/0/backend/vbd/5/51714/domain = "mgaextws"   (n0,r5)
/local/domain/0/backend/vbd/5/51714/frontend = "/local/domain/5/device/vbd/51714"   (n0,r5)
/local/domain/0/backend/vbd/5/51714/uuid = "0baaa3a5-3f58-3e13-5896-0cf37dc9d533"   (n0,r5)
/local/domain/0/backend/vbd/5/51714/bootable = "0"   (n0,r5)
/local/domain/0/backend/vbd/5/51714/dev = "xvda2"   (n0,r5)
/local/domain/0/backend/vbd/5/51714/state = "4"   (n0,r5)
/local/domain/0/backend/vbd/5/51714/params = "/dev/vg-mgadomu2/mgaextws_swap"   (n0,r5)
/local/domain/0/backend/vbd/5/51714/mode = "w"   (n0,r5)
/local/domain/0/backend/vbd/5/51714/online = "1"   (n0,r5)
/local/domain/0/backend/vbd/5/51714/frontend-id = "5"   (n0,r5)
/local/domain/0/backend/vbd/5/51714/type = "phy"   (n0,r5)
/local/domain/0/backend/vbd/5/51714/physical-device = "fd:17"   (n0,r5)
/local/domain/0/backend/vbd/5/51714/hotplug-status = "connected"   (n0,r5)
/local/domain/0/backend/vbd/5/51714/feature-barrier = "1"   (n0,r5)
/local/domain/0/backend/vbd/5/51714/sectors = "4194304"   (n0,r5)
/local/domain/0/backend/vbd/5/51714/info = "0"   (n0,r5)
/local/domain/0/backend/vbd/5/51714/sector-size = "512"   (n0,r5)
/local/domain/0/backend/vbd/6 = ""   (r0)
/local/domain/0/backend/vbd/6/51713 = ""   (n0,r6)
/local/domain/0/backend/vbd/6/51713/domain = "mgaweb1"   (n0,r6)
/local/domain/0/backend/vbd/6/51713/frontend = "/local/domain/6/device/vbd/51713"   (n0,r6)
/local/domain/0/backend/vbd/6/51713/uuid = "e068d988-7285-46aa-7bbe-f1325c0c55cb"   (n0,r6)
/local/domain/0/backend/vbd/6/51713/bootable = "1"   (n0,r6)
/local/domain/0/backend/vbd/6/51713/dev = "xvda1"   (n0,r6)
/local/domain/0/backend/vbd/6/51713/state = "4"   (n0,r6)
/local/domain/0/backend/vbd/6/51713/params = "/dev/vg-mgadomu1/mgaweb1_os"   (n0,r6)
/local/domain/0/backend/vbd/6/51713/mode = "w"   (n0,r6)
/local/domain/0/backend/vbd/6/51713/online = "1"   (n0,r6)
/local/domain/0/backend/vbd/6/51713/frontend-id = "6"   (n0,r6)
/local/domain/0/backend/vbd/6/51713/type = "phy"   (n0,r6)
/local/domain/0/backend/vbd/6/51713/physical-device = "fd:20"   (n0,r6)
/local/domain/0/backend/vbd/6/51713/hotplug-status = "connected"   (n0,r6)
/local/domain/0/backend/vbd/6/51713/feature-barrier = "1"   (n0,r6)
/local/domain/0/backend/vbd/6/51713/sectors = "104857600"   (n0,r6)
/local/domain/0/backend/vbd/6/51713/info = "0"   (n0,r6)
/local/domain/0/backend/vbd/6/51713/sector-size = "512"   (n0,r6)
/local/domain/0/backend/vbd/6/51714 = ""   (n0,r6)
/local/domain/0/backend/vbd/6/51714/domain = "mgaweb1"   (n0,r6)
/local/domain/0/backend/vbd/6/51714/frontend = "/local/domain/6/device/vbd/51714"   (n0,r6)
/local/domain/0/backend/vbd/6/51714/uuid = "f925c4cb-ef72-d505-7c86-a72309d480a2"   (n0,r6)
/local/domain/0/backend/vbd/6/51714/bootable = "0"   (n0,r6)
/local/domain/0/backend/vbd/6/51714/dev = "xvda2"   (n0,r6)
/local/domain/0/backend/vbd/6/51714/state = "4"   (n0,r6)
/local/domain/0/backend/vbd/6/51714/params = "/dev/vg-mgadomu1/mgaweb1_swap"   (n0,r6)
/local/domain/0/backend/vbd/6/51714/mode = "w"   (n0,r6)
/local/domain/0/backend/vbd/6/51714/online = "1"   (n0,r6)
/local/domain/0/backend/vbd/6/51714/frontend-id = "6"   (n0,r6)
/local/domain/0/backend/vbd/6/51714/type = "phy"   (n0,r6)
/local/domain/0/backend/vbd/6/51714/physical-device = "fd:21"   (n0,r6)
/local/domain/0/backend/vbd/6/51714/hotplug-status = "connected"   (n0,r6)
/local/domain/0/backend/vbd/6/51714/feature-barrier = "1"   (n0,r6)
/local/domain/0/backend/vbd/6/51714/sectors = "4194304"   (n0,r6)
/local/domain/0/backend/vbd/6/51714/info = "0"   (n0,r6)
/local/domain/0/backend/vbd/6/51714/sector-size = "512"   (n0,r6)
/local/domain/0/backend/vif = ""   (r0)
/local/domain/0/backend/vif/3 = ""   (r0)
/local/domain/0/backend/vif/3/0 = ""   (n0,r3)
/local/domain/0/backend/vif/3/0/bridge = "br60"   (n0,r3)
/local/domain/0/backend/vif/3/0/domain = "mgaca"   (n0,r3)
/local/domain/0/backend/vif/3/0/handle = "0"   (n0,r3)
/local/domain/0/backend/vif/3/0/uuid = "ae3badb7-b3e9-0ec3-1472-e312f547028b"   (n0,r3)
/local/domain/0/backend/vif/3/0/script = "/etc/xen/scripts/vif-bridge"   (n0,r3)
/local/domain/0/backend/vif/3/0/state = "4"   (n0,r3)
/local/domain/0/backend/vif/3/0/frontend = "/local/domain/3/device/vif/0"   (n0,r3)
/local/domain/0/backend/vif/3/0/mac = "00:16:3e:7a:30:25"   (n0,r3)
/local/domain/0/backend/vif/3/0/online = "1"   (n0,r3)
/local/domain/0/backend/vif/3/0/frontend-id = "3"   (n0,r3)
/local/domain/0/backend/vif/3/0/feature-sg = "1"   (n0,r3)
/local/domain/0/backend/vif/3/0/feature-gso-tcpv4 = "1"   (n0,r3)
/local/domain/0/backend/vif/3/0/feature-rx-copy = "1"   (n0,r3)
/local/domain/0/backend/vif/3/0/feature-rx-flip = "0"   (n0,r3)
/local/domain/0/backend/vif/3/0/hotplug-status = "connected"   (n0,r3)
/local/domain/0/backend/vif/3/1 = ""   (n0,r3)
/local/domain/0/backend/vif/3/1/bridge = "br250"   (n0,r3)
/local/domain/0/backend/vif/3/1/domain = "mgaca"   (n0,r3)
/local/domain/0/backend/vif/3/1/handle = "1"   (n0,r3)
/local/domain/0/backend/vif/3/1/uuid = "8412664d-562d-8603-bbb0-b2ed30a47649"   (n0,r3)
/local/domain/0/backend/vif/3/1/script = "/etc/xen/scripts/vif-bridge"   (n0,r3)
/local/domain/0/backend/vif/3/1/state = "4"   (n0,r3)
/local/domain/0/backend/vif/3/1/frontend = "/local/domain/3/device/vif/1"   (n0,r3)
/local/domain/0/backend/vif/3/1/mac = "00:16:3e:7a:30:26"   (n0,r3)
/local/domain/0/backend/vif/3/1/online = "1"   (n0,r3)
/local/domain/0/backend/vif/3/1/frontend-id = "3"   (n0,r3)
/local/domain/0/backend/vif/3/1/feature-sg = "1"   (n0,r3)
/local/domain/0/backend/vif/3/1/feature-gso-tcpv4 = "1"   (n0,r3)
/local/domain/0/backend/vif/3/1/feature-rx-copy = "1"   (n0,r3)
/local/domain/0/backend/vif/3/1/feature-rx-flip = "0"   (n0,r3)
/local/domain/0/backend/vif/3/1/hotplug-status = "connected"   (n0,r3)
/local/domain/0/backend/vif/4 = ""   (r0)
/local/domain/0/backend/vif/4/0 = ""   (n0,r4)
/local/domain/0/backend/vif/4/0/bridge = "br250"   (n0,r4)
/local/domain/0/backend/vif/4/0/domain = "mgatrain"   (n0,r4)
/local/domain/0/backend/vif/4/0/handle = "0"   (n0,r4)
/local/domain/0/backend/vif/4/0/uuid = "38b78e88-115a-1c04-e5bf-27af19064574"   (n0,r4)
/local/domain/0/backend/vif/4/0/script = "/etc/xen/scripts/vif-bridge"   (n0,r4)
/local/domain/0/backend/vif/4/0/state = "4"   (n0,r4)
/local/domain/0/backend/vif/4/0/frontend = "/local/domain/4/device/vif/0"   (n0,r4)
/local/domain/0/backend/vif/4/0/mac = "00:16:3e:1c:00:d4"   (n0,r4)
/local/domain/0/backend/vif/4/0/online = "1"   (n0,r4)
/local/domain/0/backend/vif/4/0/frontend-id = "4"   (n0,r4)
/local/domain/0/backend/vif/4/0/feature-sg = "1"   (n0,r4)
/local/domain/0/backend/vif/4/0/feature-gso-tcpv4 = "1"   (n0,r4)
/local/domain/0/backend/vif/4/0/feature-rx-copy = "1"   (n0,r4)
/local/domain/0/backend/vif/4/0/feature-rx-flip = "0"   (n0,r4)
/local/domain/0/backend/vif/4/0/hotplug-status = "connected"   (n0,r4)
/local/domain/0/backend/vif/4/1 = ""   (n0,r4)
/local/domain/0/backend/vif/4/1/bridge = "br225"   (n0,r4)
/local/domain/0/backend/vif/4/1/domain = "mgatrain"   (n0,r4)
/local/domain/0/backend/vif/4/1/handle = "1"   (n0,r4)
/local/domain/0/backend/vif/4/1/uuid = "e81a6ae9-2112-f89a-e515-fec3b2f3e0be"   (n0,r4)
/local/domain/0/backend/vif/4/1/script = "/etc/xen/scripts/vif-bridge"   (n0,r4)
/local/domain/0/backend/vif/4/1/state = "4"   (n0,r4)
/local/domain/0/backend/vif/4/1/frontend = "/local/domain/4/device/vif/1"   (n0,r4)
/local/domain/0/backend/vif/4/1/mac = "00:16:3e:1c:00:d6"   (n0,r4)
/local/domain/0/backend/vif/4/1/online = "1"   (n0,r4)
/local/domain/0/backend/vif/4/1/frontend-id = "4"   (n0,r4)
/local/domain/0/backend/vif/4/1/feature-sg = "1"   (n0,r4)
/local/domain/0/backend/vif/4/1/feature-gso-tcpv4 = "1"   (n0,r4)
/local/domain/0/backend/vif/4/1/feature-rx-copy = "1"   (n0,r4)
/local/domain/0/backend/vif/4/1/feature-rx-flip = "0"   (n0,r4)
/local/domain/0/backend/vif/4/1/hotplug-status = "connected"   (n0,r4)
/local/domain/0/backend/vif/5 = ""   (r0)
/local/domain/0/backend/vif/5/0 = ""   (n0,r5)
/local/domain/0/backend/vif/5/0/bridge = "br260"   (n0,r5)
/local/domain/0/backend/vif/5/0/domain = "mgaextws"   (n0,r5)
/local/domain/0/backend/vif/5/0/handle = "0"   (n0,r5)
/local/domain/0/backend/vif/5/0/uuid = "26017add-8695-477b-d239-d260fef76bec"   (n0,r5)
/local/domain/0/backend/vif/5/0/script = "/etc/xen/scripts/vif-bridge"   (n0,r5)
/local/domain/0/backend/vif/5/0/state = "4"   (n0,r5)
/local/domain/0/backend/vif/5/0/frontend = "/local/domain/5/device/vif/0"   (n0,r5)
/local/domain/0/backend/vif/5/0/mac = "00:16:3e:87:06:56"   (n0,r5)
/local/domain/0/backend/vif/5/0/online = "1"   (n0,r5)
/local/domain/0/backend/vif/5/0/frontend-id = "5"   (n0,r5)
/local/domain/0/backend/vif/5/0/feature-sg = "1"   (n0,r5)
/local/domain/0/backend/vif/5/0/feature-gso-tcpv4 = "1"   (n0,r5)
/local/domain/0/backend/vif/5/0/feature-rx-copy = "1"   (n0,r5)
/local/domain/0/backend/vif/5/0/feature-rx-flip = "0"   (n0,r5)
/local/domain/0/backend/vif/5/0/hotplug-status = "connected"   (n0,r5)
/local/domain/0/backend/vif/5/1 = ""   (n0,r5)
/local/domain/0/backend/vif/5/1/bridge = "br405"   (n0,r5)
/local/domain/0/backend/vif/5/1/domain = "mgaextws"   (n0,r5)
/local/domain/0/backend/vif/5/1/handle = "1"   (n0,r5)
/local/domain/0/backend/vif/5/1/uuid = "82263a3a-3eb2-c86c-239c-9a03506ba51e"   (n0,r5)
/local/domain/0/backend/vif/5/1/script = "/etc/xen/scripts/vif-bridge"   (n0,r5)
/local/domain/0/backend/vif/5/1/state = "4"   (n0,r5)
/local/domain/0/backend/vif/5/1/frontend = "/local/domain/5/device/vif/1"   (n0,r5)
/local/domain/0/backend/vif/5/1/mac = "00:16:3e:87:06:58"   (n0,r5)
/local/domain/0/backend/vif/5/1/online = "1"   (n0,r5)
/local/domain/0/backend/vif/5/1/frontend-id = "5"   (n0,r5)
/local/domain/0/backend/vif/5/1/feature-sg = "1"   (n0,r5)
/local/domain/0/backend/vif/5/1/feature-gso-tcpv4 = "1"   (n0,r5)
/local/domain/0/backend/vif/5/1/feature-rx-copy = "1"   (n0,r5)
/local/domain/0/backend/vif/5/1/feature-rx-flip = "0"   (n0,r5)
/local/domain/0/backend/vif/5/1/hotplug-status = "connected"   (n0,r5)
/local/domain/0/backend/vif/6 = ""   (r0)
/local/domain/0/backend/vif/6/0 = ""   (n0,r6)
/local/domain/0/backend/vif/6/0/bridge = "br250"   (n0,r6)
/local/domain/0/backend/vif/6/0/domain = "mgaweb1"   (n0,r6)
/local/domain/0/backend/vif/6/0/handle = "0"   (n0,r6)
/local/domain/0/backend/vif/6/0/uuid = "741a4b1d-6257-45f1-55f2-6a3dc29cee31"   (n0,r6)
/local/domain/0/backend/vif/6/0/script = "/etc/xen/scripts/vif-bridge"   (n0,r6)
/local/domain/0/backend/vif/6/0/state = "4"   (n0,r6)
/local/domain/0/backend/vif/6/0/frontend = "/local/domain/6/device/vif/0"   (n0,r6)
/local/domain/0/backend/vif/6/0/mac = "00:16:3e:03:52:b9"   (n0,r6)
/local/domain/0/backend/vif/6/0/online = "1"   (n0,r6)
/local/domain/0/backend/vif/6/0/frontend-id = "6"   (n0,r6)
/local/domain/0/backend/vif/6/0/feature-sg = "1"   (n0,r6)
/local/domain/0/backend/vif/6/0/feature-gso-tcpv4 = "1"   (n0,r6)
/local/domain/0/backend/vif/6/0/feature-rx-copy = "1"   (n0,r6)
/local/domain/0/backend/vif/6/0/feature-rx-flip = "0"   (n0,r6)
/local/domain/0/backend/vif/6/0/hotplug-status = "connected"   (n0,r6)
/local/domain/0/backend/vif/6/1 = ""   (n0,r6)
/local/domain/0/backend/vif/6/1/bridge = "br260"   (n0,r6)
/local/domain/0/backend/vif/6/1/domain = "mgaweb1"   (n0,r6)
/local/domain/0/backend/vif/6/1/handle = "1"   (n0,r6)
/local/domain/0/backend/vif/6/1/uuid = "ff3eb0cb-9193-e5a5-f1c7-56606c44a4bc"   (n0,r6)
/local/domain/0/backend/vif/6/1/script = "/etc/xen/scripts/vif-bridge"   (n0,r6)
/local/domain/0/backend/vif/6/1/state = "4"   (n0,r6)
/local/domain/0/backend/vif/6/1/frontend = "/local/domain/6/device/vif/1"   (n0,r6)
/local/domain/0/backend/vif/6/1/mac = "00:16:3e:3f:83:2a"   (n0,r6)
/local/domain/0/backend/vif/6/1/online = "1"   (n0,r6)
/local/domain/0/backend/vif/6/1/frontend-id = "6"   (n0,r6)
/local/domain/0/backend/vif/6/1/feature-sg = "1"   (n0,r6)
/local/domain/0/backend/vif/6/1/feature-gso-tcpv4 = "1"   (n0,r6)
/local/domain/0/backend/vif/6/1/feature-rx-copy = "1"   (n0,r6)
/local/domain/0/backend/vif/6/1/feature-rx-flip = "0"   (n0,r6)
/local/domain/0/backend/vif/6/1/hotplug-status = "connected"   (n0,r6)
/local/domain/0/backend/console = ""   (r0)
/local/domain/0/backend/console/3 = ""   (r0)
/local/domain/0/backend/console/3/0 = ""   (n0,r3)
/local/domain/0/backend/console/3/0/domain = "mgaca"   (n0,r3)
/local/domain/0/backend/console/3/0/protocol = "vt100"   (n0,r3)
/local/domain/0/backend/console/3/0/uuid = "02f74b54-4174-74aa-4324-f2436c730a64"   (n0,r3)
/local/domain/0/backend/console/3/0/frontend = "/local/domain/3/device/console/0"   (n0,r3)
/local/domain/0/backend/console/3/0/state = "4"   (n0,r3)
/local/domain/0/backend/console/3/0/location = "2"   (n0,r3)
/local/domain/0/backend/console/3/0/online = "1"   (n0,r3)
/local/domain/0/backend/console/3/0/frontend-id = "3"   (n0,r3)
/local/domain/0/backend/console/3/0/hotplug-status = "connected"   (n0,r3)
/local/domain/0/backend/console/4 = ""   (r0)
/local/domain/0/backend/console/4/0 = ""   (n0,r4)
/local/domain/0/backend/console/4/0/domain = "mgatrain"   (n0,r4)
/local/domain/0/backend/console/4/0/protocol = "vt100"   (n0,r4)
/local/domain/0/backend/console/4/0/uuid = "01a6c026-56a4-9d8a-43b7-6389a783e825"   (n0,r4)
/local/domain/0/backend/console/4/0/frontend = "/local/domain/4/device/console/0"   (n0,r4)
/local/domain/0/backend/console/4/0/state = "4"   (n0,r4)
/local/domain/0/backend/console/4/0/location = "2"   (n0,r4)
/local/domain/0/backend/console/4/0/online = "1"   (n0,r4)
/local/domain/0/backend/console/4/0/frontend-id = "4"   (n0,r4)
/local/domain/0/backend/console/4/0/hotplug-status = "connected"   (n0,r4)
/local/domain/0/backend/console/5 = ""   (r0)
/local/domain/0/backend/console/5/0 = ""   (n0,r5)
/local/domain/0/backend/console/5/0/domain = "mgaextws"   (n0,r5)
/local/domain/0/backend/console/5/0/protocol = "vt100"   (n0,r5)
/local/domain/0/backend/console/5/0/uuid = "e3f73836-13ad-6372-0d82-31cea623bdd4"   (n0,r5)
/local/domain/0/backend/console/5/0/frontend = "/local/domain/5/device/console/0"   (n0,r5)
/local/domain/0/backend/console/5/0/state = "4"   (n0,r5)
/local/domain/0/backend/console/5/0/location = "2"   (n0,r5)
/local/domain/0/backend/console/5/0/online = "1"   (n0,r5)
/local/domain/0/backend/console/5/0/frontend-id = "5"   (n0,r5)
/local/domain/0/backend/console/5/0/hotplug-status = "connected"   (n0,r5)
/local/domain/0/backend/console/6 = ""   (r0)
/local/domain/0/backend/console/6/0 = ""   (n0,r6)
/local/domain/0/backend/console/6/0/domain = "mgaweb1"   (n0,r6)
/local/domain/0/backend/console/6/0/protocol = "vt100"   (n0,r6)
/local/domain/0/backend/console/6/0/uuid = "1d96e799-7de9-1d6c-1c26-a543dacf250a"   (n0,r6)
/local/domain/0/backend/console/6/0/frontend = "/local/domain/6/device/console/0"   (n0,r6)
/local/domain/0/backend/console/6/0/state = "4"   (n0,r6)
/local/domain/0/backend/console/6/0/location = "2"   (n0,r6)
/local/domain/0/backend/console/6/0/online = "1"   (n0,r6)
/local/domain/0/backend/console/6/0/frontend-id = "6"   (n0,r6)
/local/domain/0/backend/console/6/0/hotplug-status = "connected"   (n0,r6)
/local/domain/0/backend/vusb = ""   (r0)
/local/domain/0/backend/vusb/3 = ""   (r0)
/local/domain/0/backend/vusb/3/0 = ""   (n0,r3)
/local/domain/0/backend/vusb/3/0/domain = "mgaca"   (n0,r3)
/local/domain/0/backend/vusb/3/0/frontend = "/local/domain/3/device/vusb/0"   (n0,r3)
/local/domain/0/backend/vusb/3/0/online = "1"   (n0,r3)
/local/domain/0/backend/vusb/3/0/state = "4"   (n0,r3)
/local/domain/0/backend/vusb/3/0/frontend-id = "3"   (n0,r3)
/local/domain/0/backend/vusb/3/0/port = ""   (n0,r3)
/local/domain/0/backend/vusb/3/0/port/2 = ""   (n0,r3)
/local/domain/0/backend/vusb/3/0/port/1 = "2-1"   (n0,r3)
/local/domain/0/backend/vusb/3/0/usb-ver = "2"   (n0,r3)
/local/domain/0/backend/vusb/3/0/num-ports = "2"   (n0,r3)
/local/domain/0/device-model = ""   (r0)
/local/domain/0/device-model/3 = ""   (n0,b3)
/local/domain/0/device-model/3/state = "running"   (n0,b3)
/local/domain/0/device-model/4 = ""   (n0,b4)
/local/domain/0/device-model/4/state = "running"   (n0,b4)
/local/domain/0/device-model/5 = ""   (n0,b5)
/local/domain/0/device-model/5/state = "running"   (n0,b5)
/local/domain/0/device-model/6 = ""   (n0,b6)
/local/domain/0/device-model/6/state = "running"   (n0,b6)
/local/domain/3 = ""   (n0,r3)
/local/domain/3/vm = "/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2"   (n0,r3)
/local/domain/3/device = ""   (n3)
/local/domain/3/device/vkbd = ""   (n3)
/local/domain/3/device/vkbd/0 = ""   (n3,r0)
/local/domain/3/device/vkbd/0/protocol = "x86_64-abi"   (n3,r0)
/local/domain/3/device/vkbd/0/state = "4"   (n3,r0)
/local/domain/3/device/vkbd/0/backend-id = "0"   (n3,r0)
/local/domain/3/device/vkbd/0/backend = "/local/domain/0/backend/vkbd/3/0"   (n3,r0)
/local/domain/3/device/vkbd/0/page-ref = "1473690"   (n3,r0)
/local/domain/3/device/vkbd/0/event-channel = "11"   (n3,r0)
/local/domain/3/device/vkbd/0/request-abs-pointer = "1"   (n3,r0)
/local/domain/3/device/vfb = ""   (n3)
/local/domain/3/device/vfb/0 = ""   (n3,r0)
/local/domain/3/device/vfb/0/protocol = "x86_64-abi"   (n3,r0)
/local/domain/3/device/vfb/0/state = "4"   (n3,r0)
/local/domain/3/device/vfb/0/backend-id = "0"   (n3,r0)
/local/domain/3/device/vfb/0/backend = "/local/domain/0/backend/vfb/3/0"   (n3,r0)
/local/domain/3/device/vfb/0/page-ref = "1480004"   (n3,r0)
/local/domain/3/device/vfb/0/event-channel = "10"   (n3,r0)
/local/domain/3/device/vfb/0/feature-update = "1"   (n3,r0)
/local/domain/3/device/vbd = ""   (n3)
/local/domain/3/device/vbd/51713 = ""   (n3,r0)
/local/domain/3/device/vbd/51713/virtual-device = "51713"   (n3,r0)
/local/domain/3/device/vbd/51713/device-type = "disk"   (n3,r0)
/local/domain/3/device/vbd/51713/protocol = "x86_64-abi"   (n3,r0)
/local/domain/3/device/vbd/51713/backend-id = "0"   (n3,r0)
/local/domain/3/device/vbd/51713/state = "4"   (n3,r0)
/local/domain/3/device/vbd/51713/backend = "/local/domain/0/backend/vbd/3/51713"   (n3,r0)
/local/domain/3/device/vbd/51713/ring-ref = "8"   (n3,r0)
/local/domain/3/device/vbd/51713/event-channel = "12"   (n3,r0)
/local/domain/3/device/vif = ""   (n3)
/local/domain/3/device/vif/0 = ""   (n3,r0)
/local/domain/3/device/vif/0/mac = "00:16:3e:7a:30:25"   (n3,r0)
/local/domain/3/device/vif/0/handle = "0"   (n3,r0)
/local/domain/3/device/vif/0/protocol = "x86_64-abi"   (n3,r0)
/local/domain/3/device/vif/0/backend-id = "0"   (n3,r0)
/local/domain/3/device/vif/0/state = "4"   (n3,r0)
/local/domain/3/device/vif/0/backend = "/local/domain/0/backend/vif/3/0"   (n3,r0)
/local/domain/3/device/vif/0/tx-ring-ref = "1280"   (n3,r0)
/local/domain/3/device/vif/0/rx-ring-ref = "1281"   (n3,r0)
/local/domain/3/device/vif/0/event-channel = "13"   (n3,r0)
/local/domain/3/device/vif/0/request-rx-copy = "1"   (n3,r0)
/local/domain/3/device/vif/0/feature-rx-notify = "1"   (n3,r0)
/local/domain/3/device/vif/0/feature-no-csum-offload = "0"   (n3,r0)
/local/domain/3/device/vif/0/feature-sg = "1"   (n3,r0)
/local/domain/3/device/vif/0/feature-gso-tcpv4 = "1"   (n3,r0)
/local/domain/3/device/vif/1 = ""   (n3,r0)
/local/domain/3/device/vif/1/mac = "00:16:3e:7a:30:26"   (n3,r0)
/local/domain/3/device/vif/1/handle = "1"   (n3,r0)
/local/domain/3/device/vif/1/protocol = "x86_64-abi"   (n3,r0)
/local/domain/3/device/vif/1/backend-id = "0"   (n3,r0)
/local/domain/3/device/vif/1/state = "4"   (n3,r0)
/local/domain/3/device/vif/1/backend = "/local/domain/0/backend/vif/3/1"   (n3,r0)
/local/domain/3/device/vif/1/tx-ring-ref = "1282"   (n3,r0)
/local/domain/3/device/vif/1/rx-ring-ref = "1283"   (n3,r0)
/local/domain/3/device/vif/1/event-channel = "14"   (n3,r0)
/local/domain/3/device/vif/1/request-rx-copy = "1"   (n3,r0)
/local/domain/3/device/vif/1/feature-rx-notify = "1"   (n3,r0)
/local/domain/3/device/vif/1/feature-no-csum-offload = "0"   (n3,r0)
/local/domain/3/device/vif/1/feature-sg = "1"   (n3,r0)
/local/domain/3/device/vif/1/feature-gso-tcpv4 = "1"   (n3,r0)
/local/domain/3/device/console = ""   (n3)
/local/domain/3/device/console/0 = ""   (n3,r0)
/local/domain/3/device/console/0/protocol = "x86_64-abi"   (n3,r0)
/local/domain/3/device/console/0/state = "1"   (n3,r0)
/local/domain/3/device/console/0/backend-id = "0"   (n3,r0)
/local/domain/3/device/console/0/backend = "/local/domain/0/backend/console/3/0"   (n3,r0)
/local/domain/3/device/suspend = ""   (n3)
/local/domain/3/device/suspend/event-channel = "9"   (n3)
/local/domain/3/device/vusb = ""   (n3)
/local/domain/3/device/vusb/0 = ""   (n3,r0)
/local/domain/3/device/vusb/0/protocol = "x86_64-abi"   (n3,r0)
/local/domain/3/device/vusb/0/state = "4"   (n3,r0)
/local/domain/3/device/vusb/0/backend-id = "0"   (n3,r0)
/local/domain/3/device/vusb/0/backend = "/local/domain/0/backend/vusb/3/0"   (n3,r0)
/local/domain/3/device/vusb/0/urb-ring-ref = "1316"   (n3,r0)
/local/domain/3/device/vusb/0/conn-ring-ref = "1311"   (n3,r0)
/local/domain/3/device/vusb/0/event-channel = "15"   (n3,r0)
/local/domain/3/control = ""   (n3)
/local/domain/3/control/platform-feature-multiprocessor-suspend = "1"   (n3)
/local/domain/3/error = ""   (n3)
/local/domain/3/memory = ""   (n3)
/local/domain/3/memory/target = "1048576"   (n3)
/local/domain/3/guest = ""   (n3)
/local/domain/3/hvmpv = ""   (n3)
/local/domain/3/data = ""   (n3)
/local/domain/3/serial = ""   (n0,r3)
/local/domain/3/serial/0 = ""   (n0,r3)
/local/domain/3/serial/0/tty = "/dev/pts/1"   (n0,r3)
/local/domain/3/device-misc = ""   (n0,r3)
/local/domain/3/device-misc/vif = ""   (n0,r3)
/local/domain/3/device-misc/vif/nextDeviceID = "2"   (n0,r3)
/local/domain/3/device-misc/console = ""   (n0,r3)
/local/domain/3/device-misc/console/nextDeviceID = "1"   (n0,r3)
/local/domain/3/device-misc/vusb = ""   (n0,r3)
/local/domain/3/device-misc/vusb/nextDeviceID = "1"   (n0,r3)
/local/domain/3/image = ""   (n0,r3)
/local/domain/3/image/device-model-fifo = "/var/run/xend/dm-3-1340009524.fifo"   (n0,r3)
/local/domain/3/image/device-model-pid = "19562"   (n0,r3)
/local/domain/3/image/entry = "18446744071562076160"   (n0,r3)
/local/domain/3/image/loader = "generic"   (n0,r3)
/local/domain/3/image/guest-os = "linux"   (n0,r3)
/local/domain/3/image/features = ""   (n0,r3)
/local/domain/3/image/features/writable-descriptor-tables = "1"   (n0,r3)
/local/domain/3/image/features/supervisor-mode-kernel = "1"   (n0,r3)
/local/domain/3/image/features/pae-pgdir-above-4gb = "1"   (n0,r3)
/local/domain/3/image/features/writable-page-tables = "1"   (n0,r3)
/local/domain/3/image/features/auto-translated-physmap = "1"   (n0,r3)
/local/domain/3/image/hypercall-page = "18446744071562080256"   (n0,r3)
/local/domain/3/image/guest-version = "2.6"   (n0,r3)
/local/domain/3/image/paddr-offset = "0"   (n0,r3)
/local/domain/3/image/virt-base = "18446744071562067968"   (n0,r3)
/local/domain/3/image/suspend-cancel = "1"   (n0,r3)
/local/domain/3/image/xen-version = "xen-3.0"   (n0,r3)
/local/domain/3/image/init-p2m = "18446719884453740544"   (n0,r3)
/local/domain/3/console = ""   (n0,r3)
/local/domain/3/console/ring-ref = "4087985"   (n0,r3)
/local/domain/3/console/port = "2"   (n0,r3)
/local/domain/3/console/limit = "1048576"   (n0,r3)
/local/domain/3/console/type = "ioemu"   (n0,r3)
/local/domain/3/console/vnc-port = "5901"   (n0,r3)
/local/domain/3/console/tty = "/dev/pts/1"   (n0,r3)
/local/domain/3/store = ""   (n0,r3)
/local/domain/3/store/ring-ref = "4087986"   (n0,r3)
/local/domain/3/store/port = "1"   (n0,r3)
/local/domain/3/cpu = ""   (n0,r3)
/local/domain/3/cpu/1 = ""   (n0,r3)
/local/domain/3/cpu/1/availability = "online"   (n0,r3)
/local/domain/3/cpu/0 = ""   (n0,r3)
/local/domain/3/cpu/0/availability = "online"   (n0,r3)
/local/domain/3/description = "Managua Certification Server"   (n0,r3)
/local/domain/3/name = "mgaca"   (n0,r3)
/local/domain/3/domid = "3"   (n0,r3)
/local/domain/4 = ""   (n0,r4)
/local/domain/4/vm = "/vm/b38923a5-69b1-493e-9220-e81440b63d4e"   (n0,r4)
/local/domain/4/device = ""   (n4)
/local/domain/4/device/vkbd = ""   (n4)
/local/domain/4/device/vkbd/0 = ""   (n4,r0)
/local/domain/4/device/vkbd/0/protocol = "x86_64-abi"   (n4,r0)
/local/domain/4/device/vkbd/0/state = "4"   (n4,r0)
/local/domain/4/device/vkbd/0/backend-id = "0"   (n4,r0)
/local/domain/4/device/vkbd/0/backend = "/local/domain/0/backend/vkbd/4/0"   (n4,r0)
/local/domain/4/device/vkbd/0/page-ref = "3804588"   (n4,r0)
/local/domain/4/device/vkbd/0/event-channel = "17"   (n4,r0)
/local/domain/4/device/vkbd/0/request-abs-pointer = "1"   (n4,r0)
/local/domain/4/device/vfb = ""   (n4)
/local/domain/4/device/vfb/0 = ""   (n4,r0)
/local/domain/4/device/vfb/0/protocol = "x86_64-abi"   (n4,r0)
/local/domain/4/device/vfb/0/state = "4"   (n4,r0)
/local/domain/4/device/vfb/0/backend-id = "0"   (n4,r0)
/local/domain/4/device/vfb/0/backend = "/local/domain/0/backend/vfb/4/0"   (n4,r0)
/local/domain/4/device/vfb/0/page-ref = "3808785"   (n4,r0)
/local/domain/4/device/vfb/0/event-channel = "16"   (n4,r0)
/local/domain/4/device/vfb/0/feature-update = "1"   (n4,r0)
/local/domain/4/device/vbd = ""   (n4)
/local/domain/4/device/vbd/51713 = ""   (n4,r0)
/local/domain/4/device/vbd/51713/virtual-device = "51713"   (n4,r0)
/local/domain/4/device/vbd/51713/device-type = "disk"   (n4,r0)
/local/domain/4/device/vbd/51713/protocol = "x86_64-abi"   (n4,r0)
/local/domain/4/device/vbd/51713/backend-id = "0"   (n4,r0)
/local/domain/4/device/vbd/51713/state = "4"   (n4,r0)
/local/domain/4/device/vbd/51713/backend = "/local/domain/0/backend/vbd/4/51713"   (n4,r0)
/local/domain/4/device/vbd/51713/ring-ref = "8"   (n4,r0)
/local/domain/4/device/vbd/51713/event-channel = "18"   (n4,r0)
/local/domain/4/device/vbd/51714 = ""   (n4,r0)
/local/domain/4/device/vbd/51714/virtual-device = "51714"   (n4,r0)
/local/domain/4/device/vbd/51714/device-type = "disk"   (n4,r0)
/local/domain/4/device/vbd/51714/protocol = "x86_64-abi"   (n4,r0)
/local/domain/4/device/vbd/51714/backend-id = "0"   (n4,r0)
/local/domain/4/device/vbd/51714/state = "4"   (n4,r0)
/local/domain/4/device/vbd/51714/backend = "/local/domain/0/backend/vbd/4/51714"   (n4,r0)
/local/domain/4/device/vbd/51714/ring-ref = "9"   (n4,r0)
/local/domain/4/device/vbd/51714/event-channel = "19"   (n4,r0)
/local/domain/4/device/vif = ""   (n4)
/local/domain/4/device/vif/0 = ""   (n4,r0)
/local/domain/4/device/vif/0/mac = "00:16:3e:1c:00:d4"   (n4,r0)
/local/domain/4/device/vif/0/handle = "0"   (n4,r0)
/local/domain/4/device/vif/0/protocol = "x86_64-abi"   (n4,r0)
/local/domain/4/device/vif/0/backend-id = "0"   (n4,r0)
/local/domain/4/device/vif/0/state = "4"   (n4,r0)
/local/domain/4/device/vif/0/backend = "/local/domain/0/backend/vif/4/0"   (n4,r0)
/local/domain/4/device/vif/0/tx-ring-ref = "1280"   (n4,r0)
/local/domain/4/device/vif/0/rx-ring-ref = "1281"   (n4,r0)
/local/domain/4/device/vif/0/event-channel = "20"   (n4,r0)
/local/domain/4/device/vif/0/request-rx-copy = "1"   (n4,r0)
/local/domain/4/device/vif/0/feature-rx-notify = "1"   (n4,r0)
/local/domain/4/device/vif/0/feature-no-csum-offload = "0"   (n4,r0)
/local/domain/4/device/vif/0/feature-sg = "1"   (n4,r0)
/local/domain/4/device/vif/0/feature-gso-tcpv4 = "1"   (n4,r0)
/local/domain/4/device/vif/1 = ""   (n4,r0)
/local/domain/4/device/vif/1/mac = "00:16:3e:1c:00:d6"   (n4,r0)
/local/domain/4/device/vif/1/handle = "1"   (n4,r0)
/local/domain/4/device/vif/1/protocol = "x86_64-abi"   (n4,r0)
/local/domain/4/device/vif/1/backend-id = "0"   (n4,r0)
/local/domain/4/device/vif/1/state = "4"   (n4,r0)
/local/domain/4/device/vif/1/backend = "/local/domain/0/backend/vif/4/1"   (n4,r0)
/local/domain/4/device/vif/1/tx-ring-ref = "1282"   (n4,r0)
/local/domain/4/device/vif/1/rx-ring-ref = "1283"   (n4,r0)
/local/domain/4/device/vif/1/event-channel = "21"   (n4,r0)
/local/domain/4/device/vif/1/request-rx-copy = "1"   (n4,r0)
/local/domain/4/device/vif/1/feature-rx-notify = "1"   (n4,r0)
/local/domain/4/device/vif/1/feature-no-csum-offload = "0"   (n4,r0)
/local/domain/4/device/vif/1/feature-sg = "1"   (n4,r0)
/local/domain/4/device/vif/1/feature-gso-tcpv4 = "1"   (n4,r0)
/local/domain/4/device/console = ""   (n4)
/local/domain/4/device/console/0 = ""   (n4,r0)
/local/domain/4/device/console/0/protocol = "x86_64-abi"   (n4,r0)
/local/domain/4/device/console/0/state = "1"   (n4,r0)
/local/domain/4/device/console/0/backend-id = "0"   (n4,r0)
/local/domain/4/device/console/0/backend = "/local/domain/0/backend/console/4/0"   (n4,r0)
/local/domain/4/device/suspend = ""   (n4)
/local/domain/4/device/suspend/event-channel = "15"   (n4)
/local/domain/4/control = ""   (n4)
/local/domain/4/control/platform-feature-multiprocessor-suspend = "1"   (n4)
/local/domain/4/error = ""   (n4)
/local/domain/4/memory = ""   (n4)
/local/domain/4/memory/target = "1048576"   (n4)
/local/domain/4/guest = ""   (n4)
/local/domain/4/hvmpv = ""   (n4)
/local/domain/4/data = ""   (n4)
/local/domain/4/serial = ""   (n0,r4)
/local/domain/4/serial/0 = ""   (n0,r4)
/local/domain/4/serial/0/tty = "/dev/pts/0"   (n0,r4)
/local/domain/4/device-misc = ""   (n0,r4)
/local/domain/4/device-misc/vif = ""   (n0,r4)
/local/domain/4/device-misc/vif/nextDeviceID = "2"   (n0,r4)
/local/domain/4/device-misc/console = ""   (n0,r4)
/local/domain/4/device-misc/console/nextDeviceID = "1"   (n0,r4)
/local/domain/4/image = ""   (n0,r4)
/local/domain/4/image/device-model-fifo = "/var/run/xend/dm-4-1342477947.fifo"   (n0,r4)
/local/domain/4/image/device-model-pid = "11578"   (n0,r4)
/local/domain/4/image/entry = "18446744071562076160"   (n0,r4)
/local/domain/4/image/loader = "generic"   (n0,r4)
/local/domain/4/image/guest-os = "linux"   (n0,r4)
/local/domain/4/image/features = ""   (n0,r4)
/local/domain/4/image/features/writable-descriptor-tables = "1"   (n0,r4)
/local/domain/4/image/features/supervisor-mode-kernel = "1"   (n0,r4)
/local/domain/4/image/features/pae-pgdir-above-4gb = "1"   (n0,r4)
/local/domain/4/image/features/writable-page-tables = "1"   (n0,r4)
/local/domain/4/image/features/auto-translated-physmap = "1"   (n0,r4)
/local/domain/4/image/hypercall-page = "18446744071562080256"   (n0,r4)
/local/domain/4/image/guest-version = "2.6"   (n0,r4)
/local/domain/4/image/paddr-offset = "0"   (n0,r4)
/local/domain/4/image/virt-base = "18446744071562067968"   (n0,r4)
/local/domain/4/image/suspend-cancel = "1"   (n0,r4)
/local/domain/4/image/xen-version = "xen-3.0"   (n0,r4)
/local/domain/4/image/init-p2m = "18446719884453740544"   (n0,r4)
/local/domain/4/console = ""   (n0,r4)
/local/domain/4/console/ring-ref = "4342118"   (n0,r4)
/local/domain/4/console/port = "2"   (n0,r4)
/local/domain/4/console/limit = "1048576"   (n0,r4)
/local/domain/4/console/type = "ioemu"   (n0,r4)
/local/domain/4/console/vnc-port = "5900"   (n0,r4)
/local/domain/4/console/tty = "/dev/pts/0"   (n0,r4)
/local/domain/4/cpu = ""   (n0,r4)
/local/domain/4/cpu/3 = ""   (n0,r4)
/local/domain/4/cpu/3/availability = "online"   (n0,r4)
/local/domain/4/cpu/1 = ""   (n0,r4)
/local/domain/4/cpu/1/availability = "online"   (n0,r4)
/local/domain/4/cpu/2 = ""   (n0,r4)
/local/domain/4/cpu/2/availability = "online"   (n0,r4)
/local/domain/4/cpu/0 = ""   (n0,r4)
/local/domain/4/cpu/0/availability = "online"   (n0,r4)
/local/domain/4/store = ""   (n0,r4)
/local/domain/4/store/ring-ref = "4342119"   (n0,r4)
/local/domain/4/store/port = "1"   (n0,r4)
/local/domain/4/description = ""   (n0,r4)
/local/domain/4/name = "mgatrain"   (n0,r4)
/local/domain/4/domid = "4"   (n0,r4)
/local/domain/5 = ""   (n0,r5)
/local/domain/5/vm = "/vm/3b498296-a711-40b7-b9db-e0f257e5bb44"   (n0,r5)
/local/domain/5/device = ""   (n5)
/local/domain/5/device/vkbd = ""   (n5)
/local/domain/5/device/vkbd/0 = ""   (n5,r0)
/local/domain/5/device/vkbd/0/protocol = "x86_64-abi"   (n5,r0)
/local/domain/5/device/vkbd/0/state = "4"   (n5,r0)
/local/domain/5/device/vkbd/0/backend-id = "0"   (n5,r0)
/local/domain/5/device/vkbd/0/backend = "/local/domain/0/backend/vkbd/5/0"   (n5,r0)
/local/domain/5/device/vkbd/0/request-abs-pointer = "1"   (n5,r0)
/local/domain/5/device/vkbd/0/page-ref = "2753702"   (n5,r0)
/local/domain/5/device/vkbd/0/event-channel = "17"   (n5,r0)
/local/domain/5/device/vfb = ""   (n5)
/local/domain/5/device/vfb/0 = ""   (n5,r0)
/local/domain/5/device/vfb/0/protocol = "x86_64-abi"   (n5,r0)
/local/domain/5/device/vfb/0/state = "4"   (n5,r0)
/local/domain/5/device/vfb/0/backend-id = "0"   (n5,r0)
/local/domain/5/device/vfb/0/backend = "/local/domain/0/backend/vfb/5/0"   (n5,r0)
/local/domain/5/device/vfb/0/page-ref = "2760197"   (n5,r0)
/local/domain/5/device/vfb/0/event-channel = "16"   (n5,r0)
/local/domain/5/device/vfb/0/feature-update = "1"   (n5,r0)
/local/domain/5/device/vbd = ""   (n5)
/local/domain/5/device/vbd/51713 = ""   (n5,r0)
/local/domain/5/device/vbd/51713/virtual-device = "51713"   (n5,r0)
/local/domain/5/device/vbd/51713/device-type = "disk"   (n5,r0)
/local/domain/5/device/vbd/51713/protocol = "x86_64-abi"   (n5,r0)
/local/domain/5/device/vbd/51713/backend-id = "0"   (n5,r0)
/local/domain/5/device/vbd/51713/state = "4"   (n5,r0)
/local/domain/5/device/vbd/51713/backend = "/local/domain/0/backend/vbd/5/51713"   (n5,r0)
/local/domain/5/device/vbd/51713/ring-ref = "8"   (n5,r0)
/local/domain/5/device/vbd/51713/event-channel = "18"   (n5,r0)
/local/domain/5/device/vbd/51714 = ""   (n5,r0)
/local/domain/5/device/vbd/51714/virtual-device = "51714"   (n5,r0)
/local/domain/5/device/vbd/51714/device-type = "disk"   (n5,r0)
/local/domain/5/device/vbd/51714/protocol = "x86_64-abi"   (n5,r0)
/local/domain/5/device/vbd/51714/backend-id = "0"   (n5,r0)
/local/domain/5/device/vbd/51714/state = "4"   (n5,r0)
/local/domain/5/device/vbd/51714/backend = "/local/domain/0/backend/vbd/5/51714"   (n5,r0)
/local/domain/5/device/vbd/51714/ring-ref = "9"   (n5,r0)
/local/domain/5/device/vbd/51714/event-channel = "19"   (n5,r0)
/local/domain/5/device/vif = ""   (n5)
/local/domain/5/device/vif/0 = ""   (n5,r0)
/local/domain/5/device/vif/0/mac = "00:16:3e:87:06:56"   (n5,r0)
/local/domain/5/device/vif/0/handle = "0"   (n5,r0)
/local/domain/5/device/vif/0/protocol = "x86_64-abi"   (n5,r0)
/local/domain/5/device/vif/0/backend-id = "0"   (n5,r0)
/local/domain/5/device/vif/0/state = "4"   (n5,r0)
/local/domain/5/device/vif/0/backend = "/local/domain/0/backend/vif/5/0"   (n5,r0)
/local/domain/5/device/vif/0/tx-ring-ref = "1280"   (n5,r0)
/local/domain/5/device/vif/0/rx-ring-ref = "1281"   (n5,r0)
/local/domain/5/device/vif/0/event-channel = "20"   (n5,r0)
/local/domain/5/device/vif/0/request-rx-copy = "1"   (n5,r0)
/local/domain/5/device/vif/0/feature-rx-notify = "1"   (n5,r0)
/local/domain/5/device/vif/0/feature-no-csum-offload = "0"   (n5,r0)
/local/domain/5/device/vif/0/feature-sg = "1"   (n5,r0)
/local/domain/5/device/vif/0/feature-gso-tcpv4 = "1"   (n5,r0)
/local/domain/5/device/vif/1 = ""   (n5,r0)
/local/domain/5/device/vif/1/mac = "00:16:3e:87:06:58"   (n5,r0)
/local/domain/5/device/vif/1/handle = "1"   (n5,r0)
/local/domain/5/device/vif/1/protocol = "x86_64-abi"   (n5,r0)
/local/domain/5/device/vif/1/backend-id = "0"   (n5,r0)
/local/domain/5/device/vif/1/state = "4"   (n5,r0)
/local/domain/5/device/vif/1/backend = "/local/domain/0/backend/vif/5/1"   (n5,r0)
/local/domain/5/device/vif/1/tx-ring-ref = "1282"   (n5,r0)
/local/domain/5/device/vif/1/rx-ring-ref = "1283"   (n5,r0)
/local/domain/5/device/vif/1/event-channel = "21"   (n5,r0)
/local/domain/5/device/vif/1/request-rx-copy = "1"   (n5,r0)
/local/domain/5/device/vif/1/feature-rx-notify = "1"   (n5,r0)
/local/domain/5/device/vif/1/feature-no-csum-offload = "0"   (n5,r0)
/local/domain/5/device/vif/1/feature-sg = "1"   (n5,r0)
/local/domain/5/device/vif/1/feature-gso-tcpv4 = "1"   (n5,r0)
/local/domain/5/device/console = ""   (n5)
/local/domain/5/device/console/0 = ""   (n5,r0)
/local/domain/5/device/console/0/protocol = "x86_64-abi"   (n5,r0)
/local/domain/5/device/console/0/state = "1"   (n5,r0)
/local/domain/5/device/console/0/backend-id = "0"   (n5,r0)
/local/domain/5/device/console/0/backend = "/local/domain/0/backend/console/5/0"   (n5,r0)
/local/domain/5/device/suspend = ""   (n5)
/local/domain/5/device/suspend/event-channel = "15"   (n5)
/local/domain/5/control = ""   (n5)
/local/domain/5/control/platform-feature-multiprocessor-suspend = "1"   (n5)
/local/domain/5/error = ""   (n5)
/local/domain/5/memory = ""   (n5)
/local/domain/5/memory/target = "1048576"   (n5)
/local/domain/5/guest = ""   (n5)
/local/domain/5/hvmpv = ""   (n5)
/local/domain/5/data = ""   (n5)
/local/domain/5/serial = ""   (n0,r5)
/local/domain/5/serial/0 = ""   (n0,r5)
/local/domain/5/serial/0/tty = "/dev/pts/2"   (n0,r5)
/local/domain/5/device-misc = ""   (n0,r5)
/local/domain/5/device-misc/vif = ""   (n0,r5)
/local/domain/5/device-misc/vif/nextDeviceID = "2"   (n0,r5)
/local/domain/5/device-misc/console = ""   (n0,r5)
/local/domain/5/device-misc/console/nextDeviceID = "1"   (n0,r5)
/local/domain/5/image = ""   (n0,r5)
/local/domain/5/image/device-model-fifo = "/var/run/xend/dm-5-1342477948.fifo"   (n0,r5)
/local/domain/5/image/device-model-pid = "11768"   (n0,r5)
/local/domain/5/image/entry = "18446744071562076160"   (n0,r5)
/local/domain/5/image/loader = "generic"   (n0,r5)
/local/domain/5/image/guest-os = "linux"   (n0,r5)
/local/domain/5/image/features = ""   (n0,r5)
/local/domain/5/image/features/writable-descriptor-tables = "1"   (n0,r5)
/local/domain/5/image/features/supervisor-mode-kernel = "1"   (n0,r5)
/local/domain/5/image/features/pae-pgdir-above-4gb = "1"   (n0,r5)
/local/domain/5/image/features/writable-page-tables = "1"   (n0,r5)
/local/domain/5/image/features/auto-translated-physmap = "1"   (n0,r5)
/local/domain/5/image/hypercall-page = "18446744071562080256"   (n0,r5)
/local/domain/5/image/guest-version = "2.6"   (n0,r5)
/local/domain/5/image/paddr-offset = "0"   (n0,r5)
/local/domain/5/image/virt-base = "18446744071562067968"   (n0,r5)
/local/domain/5/image/suspend-cancel = "1"   (n0,r5)
/local/domain/5/image/xen-version = "xen-3.0"   (n0,r5)
/local/domain/5/image/init-p2m = "18446719884453740544"   (n0,r5)
/local/domain/5/console = ""   (n0,r5)
/local/domain/5/console/ring-ref = "3795445"   (n0,r5)
/local/domain/5/console/port = "2"   (n0,r5)
/local/domain/5/console/limit = "1048576"   (n0,r5)
/local/domain/5/console/type = "ioemu"   (n0,r5)
/local/domain/5/console/vnc-port = "5902"   (n0,r5)
/local/domain/5/console/tty = "/dev/pts/2"   (n0,r5)
/local/domain/5/cpu = ""   (n0,r5)
/local/domain/5/cpu/3 = ""   (n0,r5)
/local/domain/5/cpu/3/availability = "online"   (n0,r5)
/local/domain/5/cpu/1 = ""   (n0,r5)
/local/domain/5/cpu/1/availability = "online"   (n0,r5)
/local/domain/5/cpu/2 = ""   (n0,r5)
/local/domain/5/cpu/2/availability = "online"   (n0,r5)
/local/domain/5/cpu/0 = ""   (n0,r5)
/local/domain/5/cpu/0/availability = "online"   (n0,r5)
/local/domain/5/store = ""   (n0,r5)
/local/domain/5/store/ring-ref = "3795446"   (n0,r5)
/local/domain/5/store/port = "1"   (n0,r5)
/local/domain/5/description = ""   (n0,r5)
/local/domain/5/name = "mgaextws"   (n0,r5)
/local/domain/5/domid = "5"   (n0,r5)
/local/domain/6 = ""   (n0,r6)
/local/domain/6/vm = "/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe"   (n0,r6)
/local/domain/6/device = ""   (n6)
/local/domain/6/device/vkbd = ""   (n6)
/local/domain/6/device/vkbd/0 = ""   (n6,r0)
/local/domain/6/device/vkbd/0/protocol = "x86_64-abi"   (n6,r0)
/local/domain/6/device/vkbd/0/state = "4"   (n6,r0)
/local/domain/6/device/vkbd/0/backend-id = "0"   (n6,r0)
/local/domain/6/device/vkbd/0/backend = "/local/domain/0/backend/vkbd/6/0"   (n6,r0)
/local/domain/6/device/vkbd/0/request-abs-pointer = "1"   (n6,r0)
/local/domain/6/device/vkbd/0/page-ref = "3278054"   (n6,r0)
/local/domain/6/device/vkbd/0/event-channel = "17"   (n6,r0)
/local/domain/6/device/vfb = ""   (n6)
/local/domain/6/device/vfb/0 = ""   (n6,r0)
/local/domain/6/device/vfb/0/protocol = "x86_64-abi"   (n6,r0)
/local/domain/6/device/vfb/0/state = "4"   (n6,r0)
/local/domain/6/device/vfb/0/backend-id = "0"   (n6,r0)
/local/domain/6/device/vfb/0/backend = "/local/domain/0/backend/vfb/6/0"   (n6,r0)
/local/domain/6/device/vfb/0/page-ref = "3285571"   (n6,r0)
/local/domain/6/device/vfb/0/event-channel = "16"   (n6,r0)
/local/domain/6/device/vfb/0/feature-update = "1"   (n6,r0)
/local/domain/6/device/vbd = ""   (n6)
/local/domain/6/device/vbd/51713 = ""   (n6,r0)
/local/domain/6/device/vbd/51713/virtual-device = "51713"   (n6,r0)
/local/domain/6/device/vbd/51713/device-type = "disk"   (n6,r0)
/local/domain/6/device/vbd/51713/protocol = "x86_64-abi"   (n6,r0)
/local/domain/6/device/vbd/51713/backend-id = "0"   (n6,r0)
/local/domain/6/device/vbd/51713/state = "4"   (n6,r0)
/local/domain/6/device/vbd/51713/backend = "/local/domain/0/backend/vbd/6/51713"   (n6,r0)
/local/domain/6/device/vbd/51713/ring-ref = "8"   (n6,r0)
/local/domain/6/device/vbd/51713/event-channel = "18"   (n6,r0)
/local/domain/6/device/vbd/51714 = ""   (n6,r0)
/local/domain/6/device/vbd/51714/virtual-device = "51714"   (n6,r0)
/local/domain/6/device/vbd/51714/device-type = "disk"   (n6,r0)
/local/domain/6/device/vbd/51714/protocol = "x86_64-abi"   (n6,r0)
/local/domain/6/device/vbd/51714/backend-id = "0"   (n6,r0)
/local/domain/6/device/vbd/51714/state = "4"   (n6,r0)
/local/domain/6/device/vbd/51714/backend = "/local/domain/0/backend/vbd/6/51714"   (n6,r0)
/local/domain/6/device/vbd/51714/ring-ref = "9"   (n6,r0)
/local/domain/6/device/vbd/51714/event-channel = "19"   (n6,r0)
/local/domain/6/device/vif = ""   (n6)
/local/domain/6/device/vif/0 = ""   (n6,r0)
/local/domain/6/device/vif/0/mac = "00:16:3e:03:52:b9"   (n6,r0)
/local/domain/6/device/vif/0/handle = "0"   (n6,r0)
/local/domain/6/device/vif/0/protocol = "x86_64-abi"   (n6,r0)
/local/domain/6/device/vif/0/backend-id = "0"   (n6,r0)
/local/domain/6/device/vif/0/state = "4"   (n6,r0)
/local/domain/6/device/vif/0/backend = "/local/domain/0/backend/vif/6/0"   (n6,r0)
/local/domain/6/device/vif/0/tx-ring-ref = "1280"   (n6,r0)
/local/domain/6/device/vif/0/rx-ring-ref = "1281"   (n6,r0)
/local/domain/6/device/vif/0/event-channel = "20"   (n6,r0)
/local/domain/6/device/vif/0/request-rx-copy = "1"   (n6,r0)
/local/domain/6/device/vif/0/feature-rx-notify = "1"   (n6,r0)
/local/domain/6/device/vif/0/feature-no-csum-offload = "0"   (n6,r0)
/local/domain/6/device/vif/0/feature-sg = "1"   (n6,r0)
/local/domain/6/device/vif/0/feature-gso-tcpv4 = "1"   (n6,r0)
/local/domain/6/device/vif/1 = ""   (n6,r0)
/local/domain/6/device/vif/1/mac = "00:16:3e:3f:83:2a"   (n6,r0)
/local/domain/6/device/vif/1/handle = "1"   (n6,r0)
/local/domain/6/device/vif/1/protocol = "x86_64-abi"   (n6,r0)
/local/domain/6/device/vif/1/backend-id = "0"   (n6,r0)
/local/domain/6/device/vif/1/state = "4"   (n6,r0)
/local/domain/6/device/vif/1/backend = "/local/domain/0/backend/vif/6/1"   (n6,r0)
/local/domain/6/device/vif/1/tx-ring-ref = "1282"   (n6,r0)
/local/domain/6/device/vif/1/rx-ring-ref = "1283"   (n6,r0)
/local/domain/6/device/vif/1/event-channel = "21"   (n6,r0)
/local/domain/6/device/vif/1/request-rx-copy = "1"   (n6,r0)
/local/domain/6/device/vif/1/feature-rx-notify = "1"   (n6,r0)
/local/domain/6/device/vif/1/feature-no-csum-offload = "0"   (n6,r0)
/local/domain/6/device/vif/1/feature-sg = "1"   (n6,r0)
/local/domain/6/device/vif/1/feature-gso-tcpv4 = "1"   (n6,r0)
/local/domain/6/device/console = ""   (n6)
/local/domain/6/device/console/0 = ""   (n6,r0)
/local/domain/6/device/console/0/protocol = "x86_64-abi"   (n6,r0)
/local/domain/6/device/console/0/state = "1"   (n6,r0)
/local/domain/6/device/console/0/backend-id = "0"   (n6,r0)
/local/domain/6/device/console/0/backend = "/local/domain/0/backend/console/6/0"   (n6,r0)
/local/domain/6/device/suspend = ""   (n6)
/local/domain/6/device/suspend/event-channel = "15"   (n6)
/local/domain/6/control = ""   (n6)
/local/domain/6/control/platform-feature-multiprocessor-suspend = "1"   (n6)
/local/domain/6/error = ""   (n6)
/local/domain/6/memory = ""   (n6)
/local/domain/6/memory/target = "5242880"   (n6)
/local/domain/6/guest = ""   (n6)
/local/domain/6/hvmpv = ""   (n6)
/local/domain/6/data = ""   (n6)
/local/domain/6/serial = ""   (n0,r6)
/local/domain/6/serial/0 = ""   (n0,r6)
/local/domain/6/serial/0/tty = "/dev/pts/3"   (n0,r6)
/local/domain/6/device-misc = ""   (n0,r6)
/local/domain/6/device-misc/vif = ""   (n0,r6)
/local/domain/6/device-misc/vif/nextDeviceID = "2"   (n0,r6)
/local/domain/6/device-misc/console = ""   (n0,r6)
/local/domain/6/device-misc/console/nextDeviceID = "1"   (n0,r6)
/local/domain/6/image = ""   (n0,r6)
/local/domain/6/image/device-model-fifo = "/var/run/xend/dm-6-1342477950.fifo"   (n0,r6)
/local/domain/6/image/device-model-pid = "12020"   (n0,r6)
/local/domain/6/image/entry = "18446744071562076160"   (n0,r6)
/local/domain/6/image/loader = "generic"   (n0,r6)
/local/domain/6/image/guest-os = "linux"   (n0,r6)
/local/domain/6/image/features = ""   (n0,r6)
/local/domain/6/image/features/writable-descriptor-tables = "1"   (n0,r6)
/local/domain/6/image/features/supervisor-mode-kernel = "1"   (n0,r6)
/local/domain/6/image/features/pae-pgdir-above-4gb = "1"   (n0,r6)
/local/domain/6/image/features/writable-page-tables = "1"   (n0,r6)
/local/domain/6/image/features/auto-translated-physmap = "1"   (n0,r6)
/local/domain/6/image/hypercall-page = "18446744071562080256"   (n0,r6)
/local/domain/6/image/guest-version = "2.6"   (n0,r6)
/local/domain/6/image/paddr-offset = "0"   (n0,r6)
/local/domain/6/image/virt-base = "18446744071562067968"   (n0,r6)
/local/domain/6/image/suspend-cancel = "1"   (n0,r6)
/local/domain/6/image/xen-version = "xen-3.0"   (n0,r6)
/local/domain/6/image/init-p2m = "18446719884453740544"   (n0,r6)
/local/domain/6/console = ""   (n0,r6)
/local/domain/6/console/ring-ref = "2744821"   (n0,r6)
/local/domain/6/console/port = "2"   (n0,r6)
/local/domain/6/console/limit = "1048576"   (n0,r6)
/local/domain/6/console/type = "ioemu"   (n0,r6)
/local/domain/6/console/vnc-port = "5903"   (n0,r6)
/local/domain/6/console/tty = "/dev/pts/3"   (n0,r6)
/local/domain/6/cpu = ""   (n0,r6)
/local/domain/6/cpu/3 = ""   (n0,r6)
/local/domain/6/cpu/3/availability = "online"   (n0,r6)
/local/domain/6/cpu/1 = ""   (n0,r6)
/local/domain/6/cpu/1/availability = "online"   (n0,r6)
/local/domain/6/cpu/2 = ""   (n0,r6)
/local/domain/6/cpu/2/availability = "online"   (n0,r6)
/local/domain/6/cpu/0 = ""   (n0,r6)
/local/domain/6/cpu/0/availability = "online"   (n0,r6)
/local/domain/6/store = ""   (n0,r6)
/local/domain/6/store/ring-ref = "2744822"   (n0,r6)
/local/domain/6/store/port = "1"   (n0,r6)
/local/domain/6/description = ""   (n0,r6)
/local/domain/6/name = "mgaweb1"   (n0,r6)
/local/domain/6/domid = "6"   (n0,r6)
/local/pool = ""   (n0)
/local/pool/0 = ""   (n0)
/local/pool/0/other_config = ""   (n0)
/local/pool/0/description = "Pool-0"   (n0)
/local/pool/0/uuid = "f3e0f00a-ef58-45e7-0d95-422cac5d8970"   (n0)
/local/pool/0/name = "Pool-0"   (n0)
/vm = ""   (n0)
/vm/00000000-0000-0000-0000-000000000000 = ""   (n0)
/vm/00000000-0000-0000-0000-000000000000/on_xend_stop = "ignore"   (n0)
/vm/00000000-0000-0000-0000-000000000000/pool_name = "Pool-0"   (n0)
/vm/00000000-0000-0000-0000-000000000000/shadow_memory = "0"   (n0)
/vm/00000000-0000-0000-0000-000000000000/uuid = "00000000-0000-0000-0000-000000000000"   (r0)
/vm/00000000-0000-0000-0000-000000000000/on_reboot = "restart"   (n0)
/vm/00000000-0000-0000-0000-000000000000/image = "(linux (kernel ) (superpages 0) (nomigrate 0) (tsc_mode 0))"   (n0)
/vm/00000000-0000-0000-0000-000000000000/image/ostype = "linux"   (n0)
/vm/00000000-0000-0000-0000-000000000000/image/kernel = ""   (n0)
/vm/00000000-0000-0000-0000-000000000000/image/cmdline = ""   (r0)
/vm/00000000-0000-0000-0000-000000000000/image/ramdisk = ""   (n0)
/vm/00000000-0000-0000-0000-000000000000/on_poweroff = "destroy"   (n0)
/vm/00000000-0000-0000-0000-000000000000/bootloader_args = ""   (n0)
/vm/00000000-0000-0000-0000-000000000000/on_xend_start = "ignore"   (n0)
/vm/00000000-0000-0000-0000-000000000000/on_crash = "restart"   (n0)
/vm/00000000-0000-0000-0000-000000000000/xend = ""   (n0)
/vm/00000000-0000-0000-0000-000000000000/xend/restart_count = "0"   (n0)
/vm/00000000-0000-0000-0000-000000000000/vcpus = "4"   (n0)
/vm/00000000-0000-0000-0000-000000000000/vcpu_avail = "1"   (n0)
/vm/00000000-0000-0000-0000-000000000000/bootloader = ""   (n0)
/vm/00000000-0000-0000-0000-000000000000/name = "Domain-0"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2 = ""   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/image = "(linux (kernel ) (superpages 0) (videoram 4) (pci ()) (nomigrate 0) (tsc_mode 0) (device_model /usr/lib/xen/bin/qemu-dm) (notes (FEATURES 'writable_page_tables|writable_descriptor_tables|auto_translated_physmap|pae_pgdir_above_4gb|supervisor_mode_kernel') (VIRT_BASE 18446744071562067968) (GUEST_VERSION 2.6) (PADDR_OFFSET 0) (GUEST_OS linux) (HYPERCALL_PAGE 18446744071562080256) (LOADER generic) (INIT_P2M 18446719884453740544) (SUSPEND_CANCEL 1) (ENTRY 18446744071562076160) (XEN_VERSION xen-3.0)))"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/image/ostype = "linux"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/image/kernel = "/var/run/xend/boot/boot_kernel.b_lsKP"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/image/cmdline = "root=/dev/xvda1 resume=/dev/xvda1 splash=silent showopts console=tty1 console=xvc0 "   (n0,r3)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/image/ramdisk = "/var/run/xend/boot/boot_ramdisk._ivX_c"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device = ""   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vkbd = ""   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vkbd/0 = ""   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vkbd/0/frontend = "/local/domain/3/device/vkbd/0"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vkbd/0/frontend-id = "3"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vkbd/0/backend-id = "0"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vkbd/0/backend = "/local/domain/0/backend/vkbd/3/0"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vfb = ""   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vfb/0 = ""   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vfb/0/frontend = "/local/domain/3/device/vfb/0"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vfb/0/frontend-id = "3"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vfb/0/backend-id = "0"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vfb/0/backend = "/local/domain/0/backend/vfb/3/0"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vbd = ""   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vbd/51713 = ""   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vbd/51713/frontend = "/local/domain/3/device/vbd/51713"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vbd/51713/frontend-id = "3"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vbd/51713/backend-id = "0"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vbd/51713/backend = "/local/domain/0/backend/vbd/3/51713"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vif = ""   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vif/0 = ""   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vif/0/frontend = "/local/domain/3/device/vif/0"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vif/0/frontend-id = "3"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vif/0/backend-id = "0"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vif/0/backend = "/local/domain/0/backend/vif/3/0"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vif/1 = ""   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vif/1/frontend = "/local/domain/3/device/vif/1"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vif/1/frontend-id = "3"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vif/1/backend-id = "0"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vif/1/backend = "/local/domain/0/backend/vif/3/1"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/console = ""   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/console/0 = ""   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/console/0/frontend = "/local/domain/3/device/console/0"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/console/0/frontend-id = "3"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/console/0/backend-id = "0"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/console/0/backend = "/local/domain/0/backend/console/3/0"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vusb = ""   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vusb/0 = ""   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vusb/0/frontend = "/local/domain/3/device/vusb/0"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vusb/0/frontend-id = "3"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vusb/0/backend-id = "0"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/device/vusb/0/backend = "/local/domain/0/backend/vusb/3/0"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/on_xend_stop = "ignore"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/pool_name = "Pool-0"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/shadow_memory = "0"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/uuid = "99a002ad-7c36-4a40-b9f5-545770f2d1b2"   (n0,r3)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/on_reboot = "restart"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/start_time = "1340009524.72"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/on_poweroff = "destroy"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/bootloader_args = "-q"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/on_xend_start = "ignore"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/on_crash = "destroy"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/xend = ""   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/xend/restart_count = "0"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/vcpus = "2"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/vcpu_avail = "3"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/bootloader = "/usr/bin/pygrub"   (n0)
/vm/99a002ad-7c36-4a40-b9f5-545770f2d1b2/name = "mgaca"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e = ""   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/image = "(linux (kernel ) (superpages 0) (videoram 4) (pci ()) (nomigrate 0) (tsc_mode 0) (device_model /usr/lib/xen/bin/qemu-dm) (notes (FEATURES 'writable_page_tables|writable_descriptor_tables|auto_translated_physmap|pae_pgdir_above_4gb|supervisor_mode_kernel') (VIRT_BASE 18446744071562067968) (GUEST_VERSION 2.6) (PADDR_OFFSET 0) (GUEST_OS linux) (HYPERCALL_PAGE 18446744071562080256) (LOADER generic) (INIT_P2M 18446719884453740544) (SUSPEND_CANCEL 1) (ENTRY 18446744071562076160) (XEN_VERSION xen-3.0)))"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/image/ostype = "linux"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/image/kernel = "/var/run/xend/boot/boot_kernel.SQr0Vi"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/image/cmdline = "root=/dev/xvda1 resume=/dev/xvda1 splash=silent showopts console=tty1 console=xvc0 "   (n0,r4)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/image/ramdisk = "/var/run/xend/boot/boot_ramdisk.SLsDmK"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device = ""   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/vkbd = ""   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/vkbd/0 = ""   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/vkbd/0/frontend = "/local/domain/4/device/vkbd/0"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/vkbd/0/frontend-id = "4"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/vkbd/0/backend-id = "0"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/vkbd/0/backend = "/local/domain/0/backend/vkbd/4/0"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/vfb = ""   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/vfb/0 = ""   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/vfb/0/frontend = "/local/domain/4/device/vfb/0"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/vfb/0/frontend-id = "4"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/vfb/0/backend-id = "0"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/vfb/0/backend = "/local/domain/0/backend/vfb/4/0"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/vbd = ""   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/vbd/51713 = ""   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/vbd/51713/frontend = "/local/domain/4/device/vbd/51713"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/vbd/51713/frontend-id = "4"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/vbd/51713/backend-id = "0"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/vbd/51713/backend = "/local/domain/0/backend/vbd/4/51713"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/vbd/51714 = ""   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/vbd/51714/frontend = "/local/domain/4/device/vbd/51714"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/vbd/51714/frontend-id = "4"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/vbd/51714/backend-id = "0"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/vbd/51714/backend = "/local/domain/0/backend/vbd/4/51714"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/vif = ""   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/vif/0 = ""   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/vif/0/frontend = "/local/domain/4/device/vif/0"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/vif/0/frontend-id = "4"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/vif/0/backend-id = "0"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/vif/0/backend = "/local/domain/0/backend/vif/4/0"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/vif/1 = ""   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/vif/1/frontend = "/local/domain/4/device/vif/1"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/vif/1/frontend-id = "4"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/vif/1/backend-id = "0"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/vif/1/backend = "/local/domain/0/backend/vif/4/1"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/console = ""   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/console/0 = ""   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/console/0/frontend = "/local/domain/4/device/console/0"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/console/0/frontend-id = "4"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/console/0/backend-id = "0"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/device/console/0/backend = "/local/domain/0/backend/console/4/0"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/on_xend_stop = "ignore"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/pool_name = "Pool-0"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/shadow_memory = "0"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/uuid = "b38923a5-69b1-493e-9220-e81440b63d4e"   (n0,r4)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/on_reboot = "restart"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/start_time = "1342477947.59"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/on_poweroff = "destroy"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/bootloader_args = "-q"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/on_xend_start = "ignore"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/on_crash = "destroy"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/xend = ""   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/xend/restart_count = "0"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/vcpus = "4"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/vcpu_avail = "15"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/bootloader = "/usr/bin/pygrub"   (n0)
/vm/b38923a5-69b1-493e-9220-e81440b63d4e/name = "mgatrain"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44 = ""   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/image = "(linux (kernel ) (superpages 0) (videoram 4) (pci ()) (nomigrate 0) (tsc_mode 0) (device_model /usr/lib/xen/bin/qemu-dm) (notes (FEATURES 'writable_page_tables|writable_descriptor_tables|auto_translated_physmap|pae_pgdir_above_4gb|supervisor_mode_kernel') (VIRT_BASE 18446744071562067968) (GUEST_VERSION 2.6) (PADDR_OFFSET 0) (GUEST_OS linux) (HYPERCALL_PAGE 18446744071562080256) (LOADER generic) (INIT_P2M 18446719884453740544) (SUSPEND_CANCEL 1) (ENTRY 18446744071562076160) (XEN_VERSION xen-3.0)))"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/image/ostype = "linux"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/image/kernel = "/var/run/xend/boot/boot_kernel.U6MAIh"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/image/cmdline = "root=/dev/xvda1 resume=/dev/xvda2 splash=silent showopts console=tty1 console=xvc0 "   (n0,r5)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/image/ramdisk = "/var/run/xend/boot/boot_ramdisk.464MmW"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device = ""   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/vkbd = ""   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/vkbd/0 = ""   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/vkbd/0/frontend = "/local/domain/5/device/vkbd/0"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/vkbd/0/frontend-id = "5"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/vkbd/0/backend-id = "0"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/vkbd/0/backend = "/local/domain/0/backend/vkbd/5/0"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/vfb = ""   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/vfb/0 = ""   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/vfb/0/frontend = "/local/domain/5/device/vfb/0"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/vfb/0/frontend-id = "5"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/vfb/0/backend-id = "0"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/vfb/0/backend = "/local/domain/0/backend/vfb/5/0"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/vbd = ""   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/vbd/51713 = ""   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/vbd/51713/frontend = "/local/domain/5/device/vbd/51713"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/vbd/51713/frontend-id = "5"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/vbd/51713/backend-id = "0"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/vbd/51713/backend = "/local/domain/0/backend/vbd/5/51713"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/vbd/51714 = ""   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/vbd/51714/frontend = "/local/domain/5/device/vbd/51714"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/vbd/51714/frontend-id = "5"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/vbd/51714/backend-id = "0"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/vbd/51714/backend = "/local/domain/0/backend/vbd/5/51714"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/vif = ""   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/vif/0 = ""   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/vif/0/frontend = "/local/domain/5/device/vif/0"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/vif/0/frontend-id = "5"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/vif/0/backend-id = "0"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/vif/0/backend = "/local/domain/0/backend/vif/5/0"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/vif/1 = ""   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/vif/1/frontend = "/local/domain/5/device/vif/1"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/vif/1/frontend-id = "5"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/vif/1/backend-id = "0"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/vif/1/backend = "/local/domain/0/backend/vif/5/1"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/console = ""   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/console/0 = ""   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/console/0/frontend = "/local/domain/5/device/console/0"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/console/0/frontend-id = "5"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/console/0/backend-id = "0"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/device/console/0/backend = "/local/domain/0/backend/console/5/0"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/on_xend_stop = "ignore"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/pool_name = "Pool-0"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/shadow_memory = "0"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/uuid = "3b498296-a711-40b7-b9db-e0f257e5bb44"   (n0,r5)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/on_reboot = "restart"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/start_time = "1342477948.64"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/on_poweroff = "destroy"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/bootloader_args = "-q"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/on_xend_start = "ignore"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/on_crash = "destroy"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/xend = ""   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/xend/restart_count = "0"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/vcpus = "4"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/vcpu_avail = "15"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/bootloader = "/usr/bin/pygrub"   (n0)
/vm/3b498296-a711-40b7-b9db-e0f257e5bb44/name = "mgaextws"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe = ""   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/image = "(linux (kernel ) (superpages 0) (videoram 4) (pci ()) (nomigrate 0) (tsc_mode 0) (device_model /usr/lib/xen/bin/qemu-dm) (notes (FEATURES 'writable_page_tables|writable_descriptor_tables|auto_translated_physmap|pae_pgdir_above_4gb|supervisor_mode_kernel') (VIRT_BASE 18446744071562067968) (GUEST_VERSION 2.6) (PADDR_OFFSET 0) (GUEST_OS linux) (HYPERCALL_PAGE 18446744071562080256) (LOADER generic) (INIT_P2M 18446719884453740544) (SUSPEND_CANCEL 1) (ENTRY 18446744071562076160) (XEN_VERSION xen-3.0)))"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/image/ostype = "linux"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/image/kernel = "/var/run/xend/boot/boot_kernel.4uMBhR"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/image/cmdline = "root=/dev/xvda1 resume=/dev/xvda2 splash=silent showopts console=tty1 console=xvc0 "   (n0,r6)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/image/ramdisk = "/var/run/xend/boot/boot_ramdisk.BBHNyj"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device = ""   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/vkbd = ""   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/vkbd/0 = ""   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/vkbd/0/frontend = "/local/domain/6/device/vkbd/0"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/vkbd/0/frontend-id = "6"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/vkbd/0/backend-id = "0"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/vkbd/0/backend = "/local/domain/0/backend/vkbd/6/0"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/vfb = ""   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/vfb/0 = ""   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/vfb/0/frontend = "/local/domain/6/device/vfb/0"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/vfb/0/frontend-id = "6"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/vfb/0/backend-id = "0"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/vfb/0/backend = "/local/domain/0/backend/vfb/6/0"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/vbd = ""   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/vbd/51713 = ""   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/vbd/51713/frontend = "/local/domain/6/device/vbd/51713"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/vbd/51713/frontend-id = "6"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/vbd/51713/backend-id = "0"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/vbd/51713/backend = "/local/domain/0/backend/vbd/6/51713"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/vbd/51714 = ""   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/vbd/51714/frontend = "/local/domain/6/device/vbd/51714"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/vbd/51714/frontend-id = "6"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/vbd/51714/backend-id = "0"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/vbd/51714/backend = "/local/domain/0/backend/vbd/6/51714"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/vif = ""   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/vif/0 = ""   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/vif/0/frontend = "/local/domain/6/device/vif/0"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/vif/0/frontend-id = "6"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/vif/0/backend-id = "0"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/vif/0/backend = "/local/domain/0/backend/vif/6/0"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/vif/1 = ""   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/vif/1/frontend = "/local/domain/6/device/vif/1"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/vif/1/frontend-id = "6"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/vif/1/backend-id = "0"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/vif/1/backend = "/local/domain/0/backend/vif/6/1"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/console = ""   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/console/0 = ""   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/console/0/frontend = "/local/domain/6/device/console/0"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/console/0/frontend-id = "6"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/console/0/backend-id = "0"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/device/console/0/backend = "/local/domain/0/backend/console/6/0"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/on_xend_stop = "ignore"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/pool_name = "Pool-0"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/shadow_memory = "0"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/uuid = "93284ae3-45f6-4308-adc1-be4cdd5c6cbe"   (n0,r6)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/on_reboot = "restart"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/start_time = "1342477950.11"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/on_poweroff = "destroy"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/bootloader_args = "-q"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/on_xend_start = "ignore"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/on_crash = "destroy"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/xend = ""   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/xend/restart_count = "0"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/vcpus = "4"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/vcpu_avail = "15"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/bootloader = "/usr/bin/pygrub"   (n0)
/vm/93284ae3-45f6-4308-adc1-be4cdd5c6cbe/name = "mgaweb1"   (n0)

--------------090001030600030704090908
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--------------090001030600030704090908--


From xen-devel-bounces@lists.xen.org Mon Aug 20 20:00:12 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Aug 2012 20:00:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3Y8f-0006DT-8h; Mon, 20 Aug 2012 19:59:53 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <imammedo@redhat.com>) id 1T3Y8e-0006DN-Aq
	for xen-devel@lists.xensource.com; Mon, 20 Aug 2012 19:59:52 +0000
X-Env-Sender: imammedo@redhat.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1345492785!10313570!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTUwMDY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24613 invoked from network); 20 Aug 2012 19:59:45 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-8.tower-27.messagelabs.com with SMTP;
	20 Aug 2012 19:59:45 -0000
Received: from int-mx01.intmail.prod.int.phx2.redhat.com
	(int-mx01.intmail.prod.int.phx2.redhat.com [10.5.11.11])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id q7KJxbRX022936
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 20 Aug 2012 15:59:37 -0400
Received: from thinkpad.mammed.net (vpn-239-69.phx2.redhat.com [10.3.239.69])
	by int-mx01.intmail.prod.int.phx2.redhat.com (8.13.8/8.13.8) with
	ESMTP id q7KJxTGv023252
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES128-SHA bits=128 verify=NO);
	Mon, 20 Aug 2012 15:59:31 -0400
Date: Mon, 20 Aug 2012 21:59:28 +0200
From: Igor Mammedov <imammedo@redhat.com>
To: Luiz Capitulino <lcapitulino@redhat.com>
Message-ID: <20120820215928.4f430207@thinkpad.mammed.net>
In-Reply-To: <20120820122210.572a79b5@doriath.home>
References: <1345419579-25499-1-git-send-email-imammedo@redhat.com>
	<1345419579-25499-4-git-send-email-imammedo@redhat.com>
	<20120820122210.572a79b5@doriath.home>
Mime-Version: 1.0
X-Scanned-By: MIMEDefang 2.67 on 10.5.11.11
Cc: peter.maydell@linaro.org, jan.kiszka@siemens.com, mjt@tls.msk.ru,
	qemu-devel@nongnu.org, armbru@redhat.com, blauwirbel@gmail.com,
	kraxel@redhat.com, xen-devel@lists.xensource.com,
	i.mitsyanko@samsung.com, mdroth@linux.vnet.ibm.com,
	avi@redhat.com, anthony.perard@citrix.com, lersek@redhat.com,
	stefanha@linux.vnet.ibm.com, stefano.stabellini@eu.citrix.com,
	sw@weilnetz.de, rth@twiddle.net, kwolf@redhat.com,
	aliguori@us.ibm.com, mtosatti@redhat.com, pbonzini@redhat.com,
	afaerber@suse.de
Subject: Re: [Xen-devel] [PATCH 3/5] qapi-types.h doesn't really need to
 include qemu-common.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 20 Aug 2012 12:22:10 -0300
Luiz Capitulino <lcapitulino@redhat.com> wrote:

> On Mon, 20 Aug 2012 01:39:37 +0200
> Igor Mammedov <imammedo@redhat.com> wrote:
> 
> > needed to prevent build breakage when CPU becomes a child of DeviceState
> > 
> > Signed-off-by: Igor Mammedov <imammedo@redhat.com>
> > ---
> >  scripts/qapi-types.py |    2 +-
> >  1 files changed, 1 insertions(+), 1 deletions(-)
> > 
> > diff --git a/scripts/qapi-types.py b/scripts/qapi-types.py
> > index cf601ae..f34addb 100644
> > --- a/scripts/qapi-types.py
> > +++ b/scripts/qapi-types.py
> > @@ -263,7 +263,7 @@ fdecl.write(mcgen('''
> >  #ifndef %(guard)s
> >  #define %(guard)s
> >  
> > -#include "qemu-common.h"
> > +#include <stdbool.h>
> 
> Please, also include <stdint.h>, as int64_t is used in qapi-types.h.
> 
> The build doesn't break probably because files including qapi-types.h are
> including <stdint.h> first.
Thanks for suggestion. I'll fix it for next respin.

> 
> >  
> >  ''',
> >                    guard=guardname(h_file)))
> 


-- 
Regards,
  Igor

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 20 20:01:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Aug 2012 20:01:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3Y9y-0006L5-OT; Mon, 20 Aug 2012 20:01:14 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <imammedo@redhat.com>) id 1T3Y9x-0006Kv-Ir
	for xen-devel@lists.xensource.com; Mon, 20 Aug 2012 20:01:13 +0000
Received: from [85.158.143.35:42440] by server-1.bemta-4.messagelabs.com id
	3A/AC-07754-88792305; Mon, 20 Aug 2012 20:01:12 +0000
X-Env-Sender: imammedo@redhat.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1345492868!6638378!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTUwMDY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32122 invoked from network); 20 Aug 2012 20:01:09 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-9.tower-21.messagelabs.com with SMTP;
	20 Aug 2012 20:01:09 -0000
Received: from int-mx12.intmail.prod.int.phx2.redhat.com
	(int-mx12.intmail.prod.int.phx2.redhat.com [10.5.11.25])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id q7KK12Wq029178
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 20 Aug 2012 16:01:02 -0400
Received: from thinkpad.mammed.net (vpn-239-69.phx2.redhat.com [10.3.239.69])
	by int-mx12.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with
	ESMTP id q7KK0soL014761
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES128-SHA bits=128 verify=NO);
	Mon, 20 Aug 2012 16:00:56 -0400
Date: Mon, 20 Aug 2012 22:00:54 +0200
From: Igor Mammedov <imammedo@redhat.com>
To: Luiz Capitulino <lcapitulino@redhat.com>
Message-ID: <20120820220054.38e5aac0@thinkpad.mammed.net>
In-Reply-To: <20120820122805.0cc63a8c@doriath.home>
References: <1345419579-25499-1-git-send-email-imammedo@redhat.com>
	<1345419579-25499-5-git-send-email-imammedo@redhat.com>
	<20120820122805.0cc63a8c@doriath.home>
Mime-Version: 1.0
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.25
Cc: peter.maydell@linaro.org, jan.kiszka@siemens.com, mjt@tls.msk.ru,
	qemu-devel@nongnu.org, armbru@redhat.com, blauwirbel@gmail.com,
	kraxel@redhat.com, xen-devel@lists.xensource.com,
	i.mitsyanko@samsung.com, mdroth@linux.vnet.ibm.com,
	avi@redhat.com, anthony.perard@citrix.com, lersek@redhat.com,
	stefanha@linux.vnet.ibm.com, stefano.stabellini@eu.citrix.com,
	sw@weilnetz.de, rth@twiddle.net, kwolf@redhat.com,
	aliguori@us.ibm.com, mtosatti@redhat.com, pbonzini@redhat.com,
	afaerber@suse.de
Subject: Re: [Xen-devel] [PATCH 4/5] cleanup error.h,
 included qapi-types.h aready has stdbool.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 20 Aug 2012 12:28:05 -0300
Luiz Capitulino <lcapitulino@redhat.com> wrote:

> On Mon, 20 Aug 2012 01:39:38 +0200
> Igor Mammedov <imammedo@redhat.com> wrote:
> 
> > Signed-off-by: Igor Mammedov <imammedo@redhat.com>
> > ---
> >  error.h |    1 -
> >  1 files changed, 0 insertions(+), 1 deletions(-)
> > 
> > diff --git a/error.h b/error.h
> > index 96fc203..643a372 100644
> > --- a/error.h
> > +++ b/error.h
> > @@ -14,7 +14,6 @@
> >  
> >  #include "compiler.h"
> >  #include "qapi-types.h"
> > -#include <stdbool.h>
> 
> Hmm, not good. qapi-types.h includes <stdbool.h> for internal matters, files
> including qapi-types.h shouldn't rely on this (as they can break if qapi-types.h
> is changed not to include <stdbool.h>).
> 
> You can keep this code as it is.

Agreed, I'll drop this patch.
> 
> >  
> >  /**
> >   * A class representing internal errors within QEMU.  An error has a ErrorClass
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


-- 
Regards,
  Igor

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 20 20:01:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Aug 2012 20:01:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3Y9y-0006L5-OT; Mon, 20 Aug 2012 20:01:14 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <imammedo@redhat.com>) id 1T3Y9x-0006Kv-Ir
	for xen-devel@lists.xensource.com; Mon, 20 Aug 2012 20:01:13 +0000
Received: from [85.158.143.35:42440] by server-1.bemta-4.messagelabs.com id
	3A/AC-07754-88792305; Mon, 20 Aug 2012 20:01:12 +0000
X-Env-Sender: imammedo@redhat.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1345492868!6638378!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTUwMDY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32122 invoked from network); 20 Aug 2012 20:01:09 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-9.tower-21.messagelabs.com with SMTP;
	20 Aug 2012 20:01:09 -0000
Received: from int-mx12.intmail.prod.int.phx2.redhat.com
	(int-mx12.intmail.prod.int.phx2.redhat.com [10.5.11.25])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id q7KK12Wq029178
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 20 Aug 2012 16:01:02 -0400
Received: from thinkpad.mammed.net (vpn-239-69.phx2.redhat.com [10.3.239.69])
	by int-mx12.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with
	ESMTP id q7KK0soL014761
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES128-SHA bits=128 verify=NO);
	Mon, 20 Aug 2012 16:00:56 -0400
Date: Mon, 20 Aug 2012 22:00:54 +0200
From: Igor Mammedov <imammedo@redhat.com>
To: Luiz Capitulino <lcapitulino@redhat.com>
Message-ID: <20120820220054.38e5aac0@thinkpad.mammed.net>
In-Reply-To: <20120820122805.0cc63a8c@doriath.home>
References: <1345419579-25499-1-git-send-email-imammedo@redhat.com>
	<1345419579-25499-5-git-send-email-imammedo@redhat.com>
	<20120820122805.0cc63a8c@doriath.home>
Mime-Version: 1.0
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.25
Cc: peter.maydell@linaro.org, jan.kiszka@siemens.com, mjt@tls.msk.ru,
	qemu-devel@nongnu.org, armbru@redhat.com, blauwirbel@gmail.com,
	kraxel@redhat.com, xen-devel@lists.xensource.com,
	i.mitsyanko@samsung.com, mdroth@linux.vnet.ibm.com,
	avi@redhat.com, anthony.perard@citrix.com, lersek@redhat.com,
	stefanha@linux.vnet.ibm.com, stefano.stabellini@eu.citrix.com,
	sw@weilnetz.de, rth@twiddle.net, kwolf@redhat.com,
	aliguori@us.ibm.com, mtosatti@redhat.com, pbonzini@redhat.com,
	afaerber@suse.de
Subject: Re: [Xen-devel] [PATCH 4/5] cleanup error.h,
 included qapi-types.h aready has stdbool.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 20 Aug 2012 12:28:05 -0300
Luiz Capitulino <lcapitulino@redhat.com> wrote:

> On Mon, 20 Aug 2012 01:39:38 +0200
> Igor Mammedov <imammedo@redhat.com> wrote:
> 
> > Signed-off-by: Igor Mammedov <imammedo@redhat.com>
> > ---
> >  error.h |    1 -
> >  1 files changed, 0 insertions(+), 1 deletions(-)
> > 
> > diff --git a/error.h b/error.h
> > index 96fc203..643a372 100644
> > --- a/error.h
> > +++ b/error.h
> > @@ -14,7 +14,6 @@
> >  
> >  #include "compiler.h"
> >  #include "qapi-types.h"
> > -#include <stdbool.h>
> 
> Hmm, not good. qapi-types.h includes <stdbool.h> for its own internal use;
> files including qapi-types.h shouldn't rely on that (they can break if
> qapi-types.h is changed to no longer include <stdbool.h>).
> 
> You can keep this code as it is.

Agreed, I'll drop this patch.
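The rule Luiz applies is "include what you use": a header whose own declarations use `bool` should include <stdbool.h> directly rather than rely on a transitive include that may disappear. A minimal self-contained sketch (the `Error`/`error_is_set` shapes below are illustrative stand-ins, not QEMU's exact definitions):

```c
#include <stdbool.h>  /* included directly: the declaration below uses
                         'bool', so this code must not depend on some
                         other header happening to pull <stdbool.h> in */
#include <stddef.h>

/* Illustrative stand-in for the real QEMU Error type. */
typedef struct Error Error;
struct Error {
    const char *msg;
};

/* Uses 'bool' in its signature -- the reason <stdbool.h> is needed here. */
bool error_is_set(Error **errp)
{
    return errp != NULL && *errp != NULL;
}
```

If error.h dropped its own `#include <stdbool.h>` and qapi-types.h later stopped providing it, every declaration like the one above would fail to compile.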
> 
> >  
> >  /**
> >   * A class representing internal errors within QEMU.  An error has a ErrorClass
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


-- 
Regards,
  Igor

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 20 20:10:42 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Aug 2012 20:10:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3YIt-0006aK-SR; Mon, 20 Aug 2012 20:10:27 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1T3YIs-0006aF-7y
	for xen-devel@lists.xen.org; Mon, 20 Aug 2012 20:10:26 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-7.tower-27.messagelabs.com!1345493419!2597027!1
X-Originating-IP: [192.89.123.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjg5LjEyMy4yNSA9PiA0ODU3MTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16607 invoked from network); 20 Aug 2012 20:10:20 -0000
Received: from smtp.tele.fi (HELO smtp.tele.fi) (192.89.123.25)
	by server-7.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 20 Aug 2012 20:10:20 -0000
X-Originating-Ip: [194.89.68.22]
Received: from ydin.reaktio.net (reaktio.net [194.89.68.22])
	by smtp.tele.fi (Postfix) with ESMTP id F2D461134;
	Mon, 20 Aug 2012 23:10:18 +0300 (EEST)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id 0422D2005D; Mon, 20 Aug 2012 23:10:18 +0300 (EEST)
Date: Mon, 20 Aug 2012 23:10:17 +0300
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: Tom Parker <tparker@cbnco.com>
Message-ID: <20120820201017.GZ19851@reaktio.net>
References: <502BD75B.9040301@cbnco.com>
	<1345109102.27489.38.camel@zakaz.uk.xensource.com>
	<5032949D.5010805@cbnco.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <5032949D.5010805@cbnco.com>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] PV USB Use Case for Xen 4.x
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Aug 20, 2012 at 03:48:45PM -0400, Tom Parker wrote:
> >
> >> Currently we use PVUSB to attach a USB Smartcard reader through our
> >> dom0 (SLES 11 SP1) running on an HP Blade Server with the Token
> >> mounted on an internal USB Port to our domU CA server (SLES 11)
> >>
> >> The config file syntax is broken, so we have to manually attach (I have
> >> it scripted) whenever our hosts reboot (which is almost never).
> > Can you give an example of what the syntax *should* be?
> There used to be some data in the wiki or in an initial presentation on
> PVUSB, but since it has never worked for me I don't remember how it worked.
>

http://wiki.xen.org/wiki/Xen_USB_Passthrough
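For reference, the manual attach that Tom scripts can be done with the xm PVUSB commands described on that wiki page. A rough sketch (the domain name, port count, and bus IDs below are placeholders, not taken from this thread):

```shell
# Create a virtual USB 2.0 host controller with 4 ports in the guest
# (domain name "mgaca" and port count are example values).
xm usb-hc-create mgaca 2 4

# List host devices that can be assigned, to find the bus-port ID.
xm usb-list-assignable-devices

# Attach host device 4-2 (a bus-port ID from the listing above)
# to port 1 of the guest's virtual controller 0.
xm usb-attach mgaca 0 1 4-2

# Detach it again when done.
xm usb-detach mgaca 0 1
```

This is exactly the attach sequence that a working `vusb=[...]` config-file stanza would be expected to perform automatically at domain start.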

> >
> > Do you happen to know if this uses the PVUSB drivers or some other
> > mechanism? "lsmod" in both dom0 and domU should provide a clue if the
> > drivers are loaded.
> Looks like it: 
> 
> dom0
> mgaxen1:~ # lsmod | grep usb
> usbbk                  23503  0
> xenbus_be               3952  4 usbbk,netbk,blkbk,blktap
> usbhid                 50900  0
> hid                    83977  1 usbhid
> usbcore               221920  5 usbbk,usbhid,uhci_hcd,ehci_hcd
> 
> domU
> mgaca:~ # lsmod | grep usb
> usbcore               220777  3 xen_hcd
>

So this looks like the PVUSB drivers (usbback/usbfront) from Xen's
unmodified_drivers and/or from Suse's forward-ported xenlinux patches.

There's also a PVUSB port for pvops kernels; it's available in Konrad's git tree.


> >
> > Does this work for both PV and HVM guests or do you only use one or the
> > other?
> I only use PV guests.
>

PVUSB works for both PV and HVM guests,
and James Harper's GPLPV Windows drivers contain a PVUSB frontend driver.


-- Pasi


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 20 20:15:26 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Aug 2012 20:15:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3YNT-0006iE-IX; Mon, 20 Aug 2012 20:15:11 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <aliguori@us.ibm.com>) id 1T3YNR-0006i8-KZ
	for xen-devel@lists.xensource.com; Mon, 20 Aug 2012 20:15:09 +0000
Received: from [85.158.138.51:28296] by server-10.bemta-3.messagelabs.com id
	7D/41-20518-CCA92305; Mon, 20 Aug 2012 20:15:08 +0000
X-Env-Sender: aliguori@us.ibm.com
X-Msg-Ref: server-14.tower-174.messagelabs.com!1345493705!22939519!1
X-Originating-IP: [32.97.110.151]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMzIuOTcuMTEwLjE1MSA9PiA0NzMwMjk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10506 invoked from network); 20 Aug 2012 20:15:07 -0000
Received: from e33.co.us.ibm.com (HELO e33.co.us.ibm.com) (32.97.110.151)
	by server-14.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 20 Aug 2012 20:15:07 -0000
Received: from /spool/local
	by e33.co.us.ibm.com with IBM ESMTP SMTP Gateway: Authorized Use Only!
	Violators will be prosecuted
	for <xen-devel@lists.xensource.com> from <aliguori@us.ibm.com>;
	Mon, 20 Aug 2012 14:15:05 -0600
Received: from d03dlp03.boulder.ibm.com (9.17.202.179)
	by e33.co.us.ibm.com (192.168.1.133) with IBM ESMTP SMTP Gateway:
	Authorized Use Only! Violators will be prosecuted; 
	Mon, 20 Aug 2012 14:15:03 -0600
Received: from d03relay03.boulder.ibm.com (d03relay03.boulder.ibm.com
	[9.17.195.228])
	by d03dlp03.boulder.ibm.com (Postfix) with ESMTP id BCA4719D8040
	for <xen-devel@lists.xensource.com>;
	Mon, 20 Aug 2012 14:14:59 -0600 (MDT)
Received: from d03av02.boulder.ibm.com (d03av02.boulder.ibm.com [9.17.195.168])
	by d03relay03.boulder.ibm.com (8.13.8/8.13.8/NCO v10.0) with ESMTP id
	q7KKEeOc130320
	for <xen-devel@lists.xensource.com>; Mon, 20 Aug 2012 14:14:42 -0600
Received: from d03av02.boulder.ibm.com (loopback [127.0.0.1])
	by d03av02.boulder.ibm.com (8.14.4/8.13.1/NCO v10.0 AVout) with ESMTP
	id q7KKEdKl004672
	for <xen-devel@lists.xensource.com>; Mon, 20 Aug 2012 14:14:40 -0600
Received: from titi.na.relay.ibm.com ([9.80.38.70])
	by d03av02.boulder.ibm.com (8.14.4/8.13.1/NCO v10.0 AVin) with ESMTP id
	q7KKEQvP003203; Mon, 20 Aug 2012 14:14:30 -0600
From: Anthony Liguori <aliguori@us.ibm.com>
To: Blue Swirl <blauwirbel@gmail.com>, Igor Mammedov <imammedo@redhat.com>
In-Reply-To: <CAAu8pHt9n_61t0=R2MGAmJ8yo4=CfcriZFqHfxjZkFuj5zYQgw@mail.gmail.com>
References: <1345419579-25499-1-git-send-email-imammedo@redhat.com>
	<1345419579-25499-2-git-send-email-imammedo@redhat.com>
	<5031BFE0.1070909@weilnetz.de>
	<20120820131326.22ea454f@thinkpad.mammed.net>
	<CAAu8pHt9n_61t0=R2MGAmJ8yo4=CfcriZFqHfxjZkFuj5zYQgw@mail.gmail.com>
User-Agent: Notmuch/0.13.2+93~ged93d79 (http://notmuchmail.org) Emacs/23.3.1
	(x86_64-pc-linux-gnu)
Date: Mon, 20 Aug 2012 15:14:25 -0500
Message-ID: <87zk5p2ue6.fsf@codemonkey.ws>
MIME-Version: 1.0
X-Content-Scanned: Fidelis XPS MAILER
x-cbid: 12082020-2398-0000-0000-0000099D6D63
Cc: kwolf@redhat.com, peter.maydell@linaro.org, kraxel@redhat.com,
	xen-devel@lists.xensource.com, i.mitsyanko@samsung.com,
	stefanha@linux.vnet.ibm.com, stefano.stabellini@eu.citrix.com,
	mdroth@linux.vnet.ibm.com, Stefan Weil <sw@weilnetz.de>,
	mtosatti@redhat.com, mjt@tls.msk.ru, qemu-devel@nongnu.org,
	lcapitulino@redhat.com, avi@redhat.com, jan.kiszka@siemens.com,
	anthony.perard@citrix.com, pbonzini@redhat.com,
	lersek@redhat.com, afaerber@suse.de, armbru@redhat.com, rth@twiddle.net
Subject: Re: [Xen-devel] [Qemu-devel] [PATCH 1/5] move qemu_irq typedef out
	of cpu-common.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Blue Swirl <blauwirbel@gmail.com> writes:

> On Mon, Aug 20, 2012 at 11:13 AM, Igor Mammedov <imammedo@redhat.com> wrote:
>> On Mon, 20 Aug 2012 06:41:04 +0200
>> Stefan Weil <sw@weilnetz.de> wrote:
>>
>>> On 20.08.2012 01:39, Igor Mammedov wrote:
>>> > it's necessary for making CPU child of DEVICE without
>>> > causing circular header deps.
>>> >
>>> > Signed-off-by: Igor Mammedov <imammedo@redhat.com>
>>> > ---
>>> >   hw/arm-misc.h |    1 +
>>> >   hw/bt.h       |    2 ++
>>> >   hw/devices.h  |    2 ++
>>> >   hw/irq.h      |    2 ++
>>> >   hw/omap.h     |    1 +
>>> >   hw/soc_dma.h  |    1 +
>>> >   hw/xen.h      |    1 +
>>> >   qemu-common.h |    1 -
>>> >   sysemu.h      |    1 +
>>> >   9 files changed, 11 insertions(+), 1 deletions(-)
>>> >
>>> > diff --git a/hw/arm-misc.h b/hw/arm-misc.h
>>> > index bdd8fec..b13aa59 100644
>>> > --- a/hw/arm-misc.h
>>> > +++ b/hw/arm-misc.h
>>> > @@ -12,6 +12,7 @@
>>> >   #define ARM_MISC_H 1
>>> >
>>> >   #include "memory.h"
>>> > +#include "hw/irq.h"
>>> >
>>> >   /* The CPU is also modeled as an interrupt controller.  */
>>> >   #define ARM_PIC_CPU_IRQ 0
>>> > diff --git a/hw/bt.h b/hw/bt.h
>>> > index a48b8d4..ebf6a37 100644
>>> > --- a/hw/bt.h
>>> > +++ b/hw/bt.h
>>> > @@ -23,6 +23,8 @@
>>> >    * along with this program; if not, see <http://www.gnu.org/licenses/>.
>>> >    */
>>> >
>>> > +#include "hw/irq.h"
>>> > +
>>> >   /* BD Address */
>>> >   typedef struct {
>>> >       uint8_t b[6];
>>> > diff --git a/hw/devices.h b/hw/devices.h
>>> > index 1a55c1e..c60bcab 100644
>>> > --- a/hw/devices.h
>>> > +++ b/hw/devices.h
>>> > @@ -1,6 +1,8 @@
>>> >   #ifndef QEMU_DEVICES_H
>>> >   #define QEMU_DEVICES_H
>>> >
>>> > +#include "hw/irq.h"
>>> > +
>>> >   /* ??? Not all users of this file can include cpu-common.h.  */
>>> >   struct MemoryRegion;
>>> >
>>> > diff --git a/hw/irq.h b/hw/irq.h
>>> > index 56c55f0..1339a3a 100644
>>> > --- a/hw/irq.h
>>> > +++ b/hw/irq.h
>>> > @@ -3,6 +3,8 @@
>>> >
>>> >   /* Generic IRQ/GPIO pin infrastructure.  */
>>> >
>>> > +typedef struct IRQState *qemu_irq;
>>> > +
>>> >   typedef void (*qemu_irq_handler)(void *opaque, int n, int level);
>>> >
>>> >   void qemu_set_irq(qemu_irq irq, int level);
>>> > diff --git a/hw/omap.h b/hw/omap.h
>>> > index 413851b..8b08462 100644
>>> > --- a/hw/omap.h
>>> > +++ b/hw/omap.h
>>> > @@ -19,6 +19,7 @@
>>> >   #ifndef hw_omap_h
>>> >   #include "memory.h"
>>> >   # define hw_omap_h                "omap.h"
>>> > +#include "hw/irq.h"
>>> >
>>> >   # define OMAP_EMIFS_BASE  0x00000000
>>> >   # define OMAP2_Q0_BASE            0x00000000
>>> > diff --git a/hw/soc_dma.h b/hw/soc_dma.h
>>> > index 904b26c..e386ace 100644
>>> > --- a/hw/soc_dma.h
>>> > +++ b/hw/soc_dma.h
>>> > @@ -19,6 +19,7 @@
>>> >    */
>>> >
>>> >   #include "memory.h"
>>> > +#include "hw/irq.h"
>>> >
>>> >   struct soc_dma_s;
>>> >   struct soc_dma_ch_s;
>>> > diff --git a/hw/xen.h b/hw/xen.h
>>> > index e5926b7..ff11dfd 100644
>>> > --- a/hw/xen.h
>>> > +++ b/hw/xen.h
>>> > @@ -8,6 +8,7 @@
>>> >    */
>>> >   #include <inttypes.h>
>>> >
>>> > +#include "hw/irq.h"
>>> >   #include "qemu-common.h"
>>> >
>>> >   /* xen-machine.c */
>>> > diff --git a/qemu-common.h b/qemu-common.h
>>> > index e5c2bcd..6677a30 100644
>>> > --- a/qemu-common.h
>>> > +++ b/qemu-common.h
>>> > @@ -273,7 +273,6 @@ typedef struct PCIEPort PCIEPort;
>>> >   typedef struct PCIESlot PCIESlot;
>>> >   typedef struct MSIMessage MSIMessage;
>>> >   typedef struct SerialState SerialState;
>>> > -typedef struct IRQState *qemu_irq;
>>> >   typedef struct PCMCIACardState PCMCIACardState;
>>> >   typedef struct MouseTransformInfo MouseTransformInfo;
>>> >   typedef struct uWireSlave uWireSlave;
>>>
>>> Just move the declaration of qemu_irq to the beginning of qemu-common.h
>>> and leave the rest of the files untouched. That also fixes the circular
>>> dependency.
>>>
>>> I already have a patch that does this, so you can integrate it into your
>>> series instead of this one.
>> No doubt it's the simpler way, but IMHO it's more of a hack than a real fix.
>> It works for now, but it doesn't alleviate the header nightmare in qemu,
>> where everything is included in qemu-common.h and everything includes it as
>> well.
>>
>> Anyway, if the majority prefers the simple move, I'll drop this patch in
>> favor of yours.
>
> I like Igor's approach more.

Ack.

We should move away from dumping ground includes like qemu-common in
favor of meaningful headers that are explicitly included where needed.

Regards,

Anthony Liguori

>
>>
>>>
>>>
>>> > diff --git a/sysemu.h b/sysemu.h
>>> > index 65552ac..f765821 100644
>>> > --- a/sysemu.h
>>> > +++ b/sysemu.h
>>> > @@ -9,6 +9,7 @@
>>> >   #include "qapi-types.h"
>>> >   #include "notify.h"
>>> >   #include "main-loop.h"
>>> > +#include "hw/irq.h"
>>> >
>>> >   /* vl.c */
>>> >
>>>
>>
>>
>> --
>> Regards,
>>   Igor
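The reason Igor's patch can move the typedef into hw/irq.h without dragging the IRQState definition along is that `qemu_irq` is a pointer to an incomplete type: headers can declare functions taking a `qemu_irq` without ever seeing `struct IRQState`. A sketch of the pattern (the struct layout and handler call mirror QEMU's hw/irq.c of that era, but treat the details as illustrative):

```c
#include <stddef.h>

/* Forward declaration: a pointer to an incomplete struct is enough
 * for every consumer of the API -- no circular header dependency. */
typedef struct IRQState *qemu_irq;

typedef void (*qemu_irq_handler)(void *opaque, int n, int level);

void qemu_set_irq(qemu_irq irq, int level);

/* Only the implementation side needs the full layout. */
struct IRQState {
    qemu_irq_handler handler;
    void *opaque;
    int n;
};

void qemu_set_irq(qemu_irq irq, int level)
{
    if (irq != NULL && irq->handler != NULL) {
        irq->handler(irq->opaque, irq->n, level);
    }
}

/* Tiny demo: record the last level delivered to a pin. */
static int last_level = -1;

static void record_handler(void *opaque, int n, int level)
{
    (void)opaque;
    (void)n;
    last_level = level;
}

int demo_raise(void)
{
    struct IRQState pin = { record_handler, NULL, 0 };
    qemu_set_irq(&pin, 1);
    return last_level;
}
```

Because callers only ever hold the opaque pointer, the typedef can live in a small leaf header (hw/irq.h) that everything else includes cheaply, which is the direction Anthony endorses above.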


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 20 20:39:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Aug 2012 20:39:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3YkX-0006v3-O2; Mon, 20 Aug 2012 20:39:01 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ketuzsezr@gmail.com>) id 1T3YkW-0006uy-IQ
	for xen-devel@lists.xen.org; Mon, 20 Aug 2012 20:39:00 +0000
Received: from [85.158.138.51:52805] by server-5.bemta-3.messagelabs.com id
	C2/C2-08865-360A2305; Mon, 20 Aug 2012 20:38:59 +0000
X-Env-Sender: ketuzsezr@gmail.com
X-Msg-Ref: server-2.tower-174.messagelabs.com!1345495137!29323039!1
X-Originating-IP: [209.85.212.45]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
  RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15262 invoked from network); 20 Aug 2012 20:38:58 -0000
Received: from mail-vb0-f45.google.com (HELO mail-vb0-f45.google.com)
	(209.85.212.45)
	by server-2.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Aug 2012 20:38:58 -0000
Received: by vbip1 with SMTP id p1so7134787vbi.32
	for <xen-devel@lists.xen.org>; Mon, 20 Aug 2012 13:38:57 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:date:from:to:cc:subject:message-id:references:mime-version
	:content-type:content-disposition:in-reply-to:user-agent;
	bh=yHgrl7KBJWaem3zF9Um0S1zsnxo51sZUKlk4/EeVp/Q=;
	b=X6sUEH5WBwwp/3BDYmTIUCgnLn4hn5E+/unjq0PpRh7mrRUOeiu9b5ncQoGs8v10Jo
	Vr9OyKiz6fY5LTHWWx4tQw1jahbETesMOdRtXTVVzBTsUulDgZ1ruzsfUkS+A+MBvllE
	o9GHTQXZW0YyqQUtanDrxjxr+2JuLxan4eQ24FeQyA9JoM0CU2uYZYBHxOCmjigD4sZC
	fdtEcRt9NeriCY1L/82v8GPWyayjxRV6PfDnkZL6RzlaG1VlOiYx1dMpEAndhp0e0tkm
	SbulVZcGUF9Up+VwVU80ilX76q5z25dhwWxX9pvY2u0WdT45/VBF4TqkAkdL2WQiZodO
	Tc5A==
Received: by 10.52.18.143 with SMTP id w15mr9665465vdd.28.1345495137138;
	Mon, 20 Aug 2012 13:38:57 -0700 (PDT)
Received: from phenom.dumpdata.com
	(209-6-85-33.c3-0.smr-ubr2.sbo-smr.ma.cable.rcn.com. [209.6.85.33])
	by mx.google.com with ESMTPS id b2sm5838456vdu.3.2012.08.20.13.38.56
	(version=TLSv1/SSLv3 cipher=OTHER);
	Mon, 20 Aug 2012 13:38:56 -0700 (PDT)
Date: Mon, 20 Aug 2012 16:28:58 -0400
From: Konrad Rzeszutek Wilk <konrad@kernel.org>
To: George Dunlap <George.Dunlap@eu.citrix.com>
Message-ID: <20120820202856.GA11485@phenom.dumpdata.com>
References: <CAFLBxZaGmvSYgL4DcHcSwE_0shqKC0DRbf7ab=uhFrerPCxRig@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAFLBxZaGmvSYgL4DcHcSwE_0shqKC0DRbf7ab=uhFrerPCxRig@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen 4.3 release planning proposal
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Aug 20, 2012 at 05:46:59PM +0100, George Dunlap wrote:
> Hello everyone!  With the completion of our first few release candidates
> for 4.2, it's time to look forward and start planning for the 4.3
> release.  I've volunteered to step up and help coordinate the release
> for this cycle.
> 
> The 4.2 release cycle this time has been nearly a year and a half.
> One of the problems with having such a long release is that people who
> get in features early have to wait a long time for that feature to be
> in a published version; they then have to wait even longer for it to
> be part of a released distribution.  Historically the cycle has been
> around 9 months, but this has not been made explicit.  Many people
> (including myself) think that the 9 month release cycle was a good
> cadence that we should aim for.
> 
> So I propose that we move to a time-based release schedule.  Rather
> than aiming for a release date, I propose that we aim to do a "feature
> freeze" six months after the 4.2 release -- that would be around March
> 1, 2013.  That way we'll probably end up releasing in 9 months' time,
> around June 2013.  This is one of the things we can discuss at the Dev
> Meeting before the Xen Summit next week.  If you have other opinions,
> please let us know.
> 
> I will also be tracking ahead of time many of the features and
> improvements that we want to try to get into 4.3.  Below is a list of
> high-level features and improvements that the Citrix Xen.org team are
> either planning on working on ourselves, or are aware of other people
> working on, and that we should reasonably be able to get into the 4.3
> release.  Most of them have owners, but many do not yet; volunteers
> are welcome.
> 
> If you are planning on working on any features not listed that you would
> like to have tracked, please let me know.
> 
> I will be sending tracking updates similar to Ian Campbell's 4.2
> release updates.  I think to begin with, weekly may be a bit
> excessive.  I'll probably go for bi-weekly, and switch to weekly after
> the feature freeze.
> 
> It should be noted this is not an exhaustive list, nor an immutable
> one.  Our main priority will be to release within 9 months; only a
> very important feature indeed would cause us to slip the release.
> 
> Features and improvements not on this list are of course welcome at
> any time before the feature freeze.
> 
> Any questions and feedback are welcome!
> 
> Your 4.3 release coordinator,
>  George Dunlap
> 
> * Event channel scalability
>   owner: attilio@citrix
>   Increase limit on event channels (currently 1024 for 32-bit guests,
>   4096 for 64-bit guests)
> 
> * NUMA scheduler affinity
>   owner: dario@citrix
> 
> * NUMA Memory migration
>   owner: dario@citrix
> 
> * PVH mode, domU (w/ Linux)
>   owner: mukesh@oracle
> 
> * PVH mode, dom0 (w/ Linux)
>   owner: mukesh@oracle
> 
> * ARM server port
>   owner: @citrix
> 
> * blktap3
>   owner: @citrix
> 
> * Default to QEMU upstream
>  - qemu-based stubdom (Linux or BSD libc)
>     owner: anthony@citrix
>     qemu-upstream needs a more fully-featured libc than exists in
>     minios.  Either work on a minimalist linux-based stubdom with
>     glibc, or port one of the BSD libcs to minios.
> 
>  - pci pass-thru
>     owner: anthony@citrix
> 
> * Persistent grants
>   owner: @citrix
> 
> * Multi-page blk rings
>  - blkback in kernel (@intel)

.. and me as well.

>  - qemu blkback
> 
> * Multi-page net protocol
>   owner: ?
>   expand the network ring protocol to allow multiple pages for
>   increased throughput

Multiple people working on this. Ian, me, Annie, and Wei I believe.
> 
> * xl vm-{export,import}
>   owner: ?
>   Allow xl to import and export VMs to other formats; particularly
>   ovf, perhaps the XenServer format, or more.
> 
> 
> * xl USB pass-through for PV guests
>   owner: ?
>   Port the xend PV pass-through functionality to xl.

Well, what about the Linux side? The frontend/backend drivers haven't
really been proposed for upstream.

> 
> * openvswitch toolstack integration
>   owner: roger@citrix
> 
> * Rationalized backend scripts (incl. driver domains)
>   owner: roger@citrix
> 
> * Full-VM snapshotting
>   owner: ?
>   Have a way of coordinating the taking and restoring of VM memory and
>   disk snapshots.  This would involve some investigation into the best
>   way to accomplish this.
> 
> * VM Cloning
>   owner: ?
>   Again, a way of coordinating the memory and disk aspects.  Research
>   into the best way to do this would probably go along with the
>   snapshotting feature.
> 
> * Make storage migration possible
>   owner: ?
>   There needs to be a way, either via command-line or via some hooks,
>   that someone can build a "storage migration" feature on top of libxl
>   or xl.
> 
> * PV audio (audio for stubdom qemu)
>   owner: stefano.panella@citrix

Is this a new person? Anyhow, there was PV audio support in pulseaudio as
part of the GSoC.
> 
> * Memory: Replace PoD with paging mechanism
>   owner: george@citrix
> 
> * Managed domains?
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 20 20:54:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Aug 2012 20:54:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3Yyp-00075P-7d; Mon, 20 Aug 2012 20:53:47 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <junjie.wei@oracle.com>) id 1T3Yyn-00075H-AD
	for xen-devel@lists.xen.org; Mon, 20 Aug 2012 20:53:45 +0000
Received: from [85.158.143.99:37476] by server-3.bemta-4.messagelabs.com id
	29/42-09529-8D3A2305; Mon, 20 Aug 2012 20:53:44 +0000
X-Env-Sender: junjie.wei@oracle.com
X-Msg-Ref: server-16.tower-216.messagelabs.com!1345496022!16573069!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDcwNzI2NA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12818 invoked from network); 20 Aug 2012 20:53:44 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-16.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 20 Aug 2012 20:53:44 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7KKreS6020834
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 20 Aug 2012 20:53:41 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7KKregi005560
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 20 Aug 2012 20:53:40 GMT
Received: from abhmt118.oracle.com (abhmt118.oracle.com [141.146.116.70])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7KKrdwQ021721; Mon, 20 Aug 2012 15:53:39 -0500
Received: from [10.149.237.224] (/10.149.237.224)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 20 Aug 2012 13:53:39 -0700
Message-ID: <5032A3FE.30402@oracle.com>
Date: Mon, 20 Aug 2012 16:54:22 -0400
From: Junjie Wei <junjie.wei@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120714 Thunderbird/14.0
MIME-Version: 1.0
To: Keir Fraser <keir@xen.org>
References: <CC55042A.4992C%keir@xen.org>
In-Reply-To: <CC55042A.4992C%keir@xen.org>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Jan Beulich <JBeulich@suse.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] VM save/restore
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/18/2012 03:34 AM, Keir Fraser wrote:
> On 18/08/2012 07:38, "Keir Fraser" <keir.xen@gmail.com> wrote:
>
>>
>>> I think if a VM can be successfully started, then save/restore should
>>> also work. So I made a patch and did some testing.
>>
>> The check for 64 VCPUs is to cover the fact we only save/restore a 64-bit
>> vcpumap. That would need fixing too surely, or CPUs > 64 would be offline
>> after restore I would imagine.
>
> How about the attached patch? It might actually work properly, unlike yours.
> ;)
>
>>> The above problem is gone but there are new ones.
>>>
>>> Let me summarize the result here.
>>>
>>> With the patch, save/restore works fine as long as it can be started,
>>> except two cases.
>>>
>>> 1) 32-bit guests can be configured with VCPUs > 32 and started,
>>>      but the guest can only make use of 32 of them.
>
> HVM guest? I don't know why this is. You will have to investigate some more
> what has happened to the rest of your VCPUs! I think it should definitely
> work. Cc Jan in case he has any thoughts.
>
>>> 2) 32-bit PVM guests can be configured with VCPUs > 64 and started,
>>>      but `xm save' does not work.
>
> That's because your changes to the save/restore code were wrong. Try my
> patch instead.
>
>   -- Keir
>

Tested. Your patch works perfectly for all cases. :)


Thanks,
Junjie


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 20 20:57:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Aug 2012 20:57:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3Z2Q-0007EL-1L; Mon, 20 Aug 2012 20:57:30 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <junjie.wei@oracle.com>) id 1T3Z2O-0007EB-HG
	for xen-devel@lists.xen.org; Mon, 20 Aug 2012 20:57:28 +0000
Received: from [85.158.139.83:3489] by server-4.bemta-5.messagelabs.com id
	B3/A0-12386-7B4A2305; Mon, 20 Aug 2012 20:57:27 +0000
X-Env-Sender: junjie.wei@oracle.com
X-Msg-Ref: server-7.tower-182.messagelabs.com!1345496245!25196892!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDcwNzI2NA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17225 invoked from network); 20 Aug 2012 20:57:27 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-7.tower-182.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 20 Aug 2012 20:57:27 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7KKvNZ6023962
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 20 Aug 2012 20:57:23 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7KKvM1t026024
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 20 Aug 2012 20:57:23 GMT
Received: from abhmt116.oracle.com (abhmt116.oracle.com [141.146.116.68])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7KKvMrH002889; Mon, 20 Aug 2012 15:57:22 -0500
Received: from [10.149.237.224] (/10.149.237.224)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 20 Aug 2012 13:57:22 -0700
Message-ID: <5032A4DE.5040307@oracle.com>
Date: Mon, 20 Aug 2012 16:58:06 -0400
From: Junjie Wei <junjie.wei@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120714 Thunderbird/14.0
MIME-Version: 1.0
To: Keir Fraser <keir.xen@gmail.com>
References: <CC54F704.3C5C9%keir.xen@gmail.com>
In-Reply-To: <CC54F704.3C5C9%keir.xen@gmail.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] VM save/restore
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/18/2012 02:38 AM, Keir Fraser wrote:
>> It was caused by a hard-coded limit in tools/libxc/xc_domain_save.c:
>>
>> if ( info.max_vcpu_id >= 64 )
>> {
>>        ERROR("Too many VCPUS in guest!");
>>        goto out;
>> }
>>
>> And also in tools/libxc/xc_domain_restore.c:
>>
>> case XC_SAVE_ID_VCPU_INFO:
>>        buf->new_ctxt_format = 1;
>>        if ( RDEXACT(fd, &buf->max_vcpu_id, sizeof(buf->max_vcpu_id)) ||
>>            buf->max_vcpu_id >= 64 || RDEXACT(fd, &buf->vcpumap,
>>                                              sizeof(uint64_t)) ) {
>>            PERROR("Error when reading max_vcpu_id");
>>            return -1;
>>        }
>>
>> The code above is in both xen-4.1.2 and xen-unstable.
>>
>> I think if a VM can be successfully started, then save/restore should
>> also work. So I made a patch and did some testing.
>
> The check for 64 VCPUs is to cover the fact we only save/restore a 64-bit
> vcpumap. That would need fixing too surely, or CPUs > 64 would be offline
> after restore I would imagine.
>
> And what is a PVM guest?
>
>   -- Keir
>

Paravirtualization / modified kernel. Am I using the wrong term "PVM"?

Thanks,
Junjie


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

>>                                              sizeof(uint64_t)) ) {
>>            PERROR("Error when reading max_vcpu_id");
>>            return -1;
>>        }
>>
>> The code above is in both xen-4.1.2 and xen-unstable.
>>
>> I think if a VM can be successfully started, then save/restore should
>> also work. So I made a patch and did some testing.
>
> The check for 64 VCPUs is to cover the fact we only save/restore a 64-bit
> vcpumap. That would need fixing too surely, or CPUs > 64 would be offline
> after restore I would imagine.
>
> And what is a PVM guest?
>
>   -- Keir
>

Paravirtualization / modified kernel. Am I using the wrong term "PVM"?

Thanks,
Junjie


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 20 21:05:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Aug 2012 21:05:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3Z9h-0007RW-Vy; Mon, 20 Aug 2012 21:05:01 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <junjie.wei@oracle.com>) id 1T3Z9g-0007RR-Mo
	for xen-devel@lists.xen.org; Mon, 20 Aug 2012 21:05:00 +0000
Received: from [85.158.139.83:51193] by server-9.bemta-5.messagelabs.com id
	08/B2-26123-B76A2305; Mon, 20 Aug 2012 21:04:59 +0000
X-Env-Sender: junjie.wei@oracle.com
X-Msg-Ref: server-15.tower-182.messagelabs.com!1345496698!29052916!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNjk5ODQx\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28479 invoked from network); 20 Aug 2012 21:04:59 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-15.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 20 Aug 2012 21:04:59 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7KL4uT5025703
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 20 Aug 2012 21:04:57 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7KL4tGW021798
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 20 Aug 2012 21:04:55 GMT
Received: from abhmt119.oracle.com (abhmt119.oracle.com [141.146.116.71])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7KL4tm3007448; Mon, 20 Aug 2012 16:04:55 -0500
Received: from [10.149.237.224] (/10.149.237.224)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 20 Aug 2012 14:04:55 -0700
Message-ID: <5032A6A2.7070802@oracle.com>
Date: Mon, 20 Aug 2012 17:05:38 -0400
From: Junjie Wei <junjie.wei@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120714 Thunderbird/14.0
MIME-Version: 1.0
To: Keir Fraser <keir@xen.org>
References: <CC55042A.4992C%keir@xen.org>
In-Reply-To: <CC55042A.4992C%keir@xen.org>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Jan Beulich <JBeulich@suse.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] VM save/restore
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/18/2012 03:34 AM, Keir Fraser wrote:
>>>
>>> 1) 32-bit guests can be configured with VCPUs > 32 and started,
>>>      but the guest can only make use of 32 of them.
>
> HVM guest? I don't know why this is. You will have to investigate some more
> what has happened to the rest of your VCPUs! I think it should definitely
> work. Cc Jan in case he has any thoughts.
>

It looks like, for 32-bit guests, the VCPU -> CPU mapping only works
for the first 32 VCPUs. It can be reliably reproduced.

Thanks,
Junjie


[root@ovs087 HVM_X86]# rpm -qa | grep xen
xenpvboot-0.1-8.el5
xen-tools-4.1.2-39
xen-devel-4.1.2-39
xen-4.1.2-39

[root@ovs087 HVM_X86]# uname -a
Linux ovs087 2.6.39-200.30.1.el5uek #1 SMP Thu Jul 12 21:47:09 EDT 2012 x86_64 x86_64 x86_64 GNU/Linux

[root@ovs087 HVM_X86]# cat vm.cfg | grep vcpus
vcpus = 36

[root@ovs087 HVM_X86]# xm list 65
Name                                        ID   Mem VCPUs      State   Time(s)
OVM_OL5U7_X86_PVHVM_10GB                    65  2048    32     r-----      33.3

[root@ovs087 HVM_X86]# xm vcpu-list 65
Name                                ID  VCPU   CPU State   Time(s) CPU Affinity
OVM_OL5U7_X86_PVHVM_10GB            65     0     4   -b-      10.7 any cpu
OVM_OL5U7_X86_PVHVM_10GB            65     1     0   -b-       1.8 any cpu
OVM_OL5U7_X86_PVHVM_10GB            65     2     4   -b-       1.0 any cpu
OVM_OL5U7_X86_PVHVM_10GB            65     3     2   -b-       1.0 any cpu
OVM_OL5U7_X86_PVHVM_10GB            65     4     7   -b-       1.1 any cpu
OVM_OL5U7_X86_PVHVM_10GB            65     5     5   -b-       1.0 any cpu
OVM_OL5U7_X86_PVHVM_10GB            65     6     0   -b-       1.0 any cpu
OVM_OL5U7_X86_PVHVM_10GB            65     7     3   r--      10.6 any cpu
OVM_OL5U7_X86_PVHVM_10GB            65     8     4   -b-       0.9 any cpu
OVM_OL5U7_X86_PVHVM_10GB            65     9     7   -b-       1.0 any cpu
OVM_OL5U7_X86_PVHVM_10GB            65    10     7   -b-       1.0 any cpu
OVM_OL5U7_X86_PVHVM_10GB            65    11     2   -b-       1.0 any cpu
OVM_OL5U7_X86_PVHVM_10GB            65    12     6   -b-       1.0 any cpu
OVM_OL5U7_X86_PVHVM_10GB            65    13     4   -b-       0.9 any cpu
OVM_OL5U7_X86_PVHVM_10GB            65    14     7   -b-       1.0 any cpu
OVM_OL5U7_X86_PVHVM_10GB            65    15     5   -b-       0.9 any cpu
OVM_OL5U7_X86_PVHVM_10GB            65    16     0   -b-       1.0 any cpu
OVM_OL5U7_X86_PVHVM_10GB            65    17     6   -b-       1.0 any cpu
OVM_OL5U7_X86_PVHVM_10GB            65    18     4   -b-       0.9 any cpu
OVM_OL5U7_X86_PVHVM_10GB            65    19     5   -b-       1.0 any cpu
OVM_OL5U7_X86_PVHVM_10GB            65    20     7   -b-       0.9 any cpu
OVM_OL5U7_X86_PVHVM_10GB            65    21     0   -b-       1.0 any cpu
OVM_OL5U7_X86_PVHVM_10GB            65    22     5   -b-       0.9 any cpu
OVM_OL5U7_X86_PVHVM_10GB            65    23     4   -b-       0.9 any cpu
OVM_OL5U7_X86_PVHVM_10GB            65    24     4   -b-       0.9 any cpu
OVM_OL5U7_X86_PVHVM_10GB            65    25     0   -b-       1.0 any cpu
OVM_OL5U7_X86_PVHVM_10GB            65    26     2   -b-       1.0 any cpu
OVM_OL5U7_X86_PVHVM_10GB            65    27     6   -b-       1.0 any cpu
OVM_OL5U7_X86_PVHVM_10GB            65    28     4   -b-       1.0 any cpu
OVM_OL5U7_X86_PVHVM_10GB            65    29     7   -b-       0.9 any cpu
OVM_OL5U7_X86_PVHVM_10GB            65    30     2   -b-       1.0 any cpu
OVM_OL5U7_X86_PVHVM_10GB            65    31     0   -b-       1.0 any cpu
OVM_OL5U7_X86_PVHVM_10GB            65    32     -   --p       0.0 any cpu
OVM_OL5U7_X86_PVHVM_10GB            65    33     -   --p       0.0 any cpu
OVM_OL5U7_X86_PVHVM_10GB            65    34     -   --p       0.0 any cpu
OVM_OL5U7_X86_PVHVM_10GB            65    35     -   --p       0.0 any cpu
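The stuck VCPUs can be picked out of such a listing mechanically. A small
sketch against sample rows in the same layout as above (field 5 is the
State column; "--p" marks a VCPU that was created but never came online):

```shell
# Classify VCPU states from `xm vcpu-list`-style rows. The sample
# rows mirror the listing above: name, domid, vcpu, cpu, state, time.
printf '%s\n' \
  'OVM_OL5U7_X86_PVHVM_10GB 65 31 0 -b- 1.0 any cpu' \
  'OVM_OL5U7_X86_PVHVM_10GB 65 32 - --p 0.0 any cpu' \
  'OVM_OL5U7_X86_PVHVM_10GB 65 33 - --p 0.0 any cpu' |
awk '{ if ($5 == "--p") paused++; else online++ }
     END { printf "online=%d paused=%d\n", online, paused }'
```

On the listing above this reports 32 online and 4 paused VCPUs.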

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Mon Aug 20 21:21:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Aug 2012 21:21:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3ZPI-0007bh-He; Mon, 20 Aug 2012 21:21:08 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T3ZPG-0007bc-8J
	for xen-devel@lists.xensource.com; Mon, 20 Aug 2012 21:21:06 +0000
Received: from [85.158.139.83:13669] by server-4.bemta-5.messagelabs.com id
	AB/61-12386-14AA2305; Mon, 20 Aug 2012 21:21:05 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-15.tower-182.messagelabs.com!1345497664!29054239!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk2MTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25053 invoked from network); 20 Aug 2012 21:21:05 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Aug 2012 21:21:05 -0000
X-IronPort-AV: E=Sophos;i="4.77,799,1336348800"; d="scan'208";a="14093274"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	20 Aug 2012 21:21:03 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Mon, 20 Aug 2012 22:21:03 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1T3ZPD-0001cq-Dh;
	Mon, 20 Aug 2012 21:21:03 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1T3ZPD-0001ST-1i;
	Mon, 20 Aug 2012 22:21:03 +0100
To: xen-devel@lists.xensource.com
Message-ID: <osstest-13619-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 20 Aug 2012 22:21:03 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13619: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13619 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13619/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13617
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13617
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13617
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13617

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  e6ca45ca03c2
baseline version:
 xen                  73ac4b7ad2e1

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=e6ca45ca03c2
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable e6ca45ca03c2
+ branch=xen-unstable
+ revision=e6ca45ca03c2
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/staging/xen-unstable.hg
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-unstable.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-unstable.hg
+ hg push -r e6ca45ca03c2 ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 2 changesets with 4 changes to 4 files

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=e6ca45ca03c2
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable e6ca45ca03c2
+ branch=xen-unstable
+ revision=e6ca45ca03c2
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/staging/xen-unstable.hg
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-unstable.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-unstable.hg
+ hg push -r e6ca45ca03c2 ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 2 changesets with 4 changes to 4 files
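
The trace above takes an exclusive lock on the shared repos directory (via with-lock-ex) before re-executing itself to run ap-push, so concurrent flights cannot push at the same time. A minimal sketch of that pattern, assuming util-linux flock(1) as a stand-in for with-lock-ex and an example lock path:

```shell
# Sketch of the serialization above: flock(1) stands in for osstest's
# with-lock-ex; the lock path and the echoed "push" are examples only.
# flock takes an exclusive lock on the file, runs the command, and
# releases the lock when the command exits.
lock=/tmp/demo-repos.lock
msg=$(flock -x "$lock" sh -c 'echo "lock held: pushing e6ca45ca03c2"')
echo "$msg"
```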

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 20 23:40:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Aug 2012 23:40:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3bZz-00087h-8x; Mon, 20 Aug 2012 23:40:19 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T3bZx-00087c-Op
	for xen-devel@lists.xen.org; Mon, 20 Aug 2012 23:40:17 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1345506002!9412668!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDcwODQwNQ==\n,
	ML_RADAR_SPEW_LINKS_8,spamassassin: ,async_handler: 
	YXN5bmNfZGVsYXk6IDcwNjExMjMgKHRpbWVvdXQp\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30137 invoked from network); 20 Aug 2012 23:40:03 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-12.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 20 Aug 2012 23:40:03 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7KNdxue021031
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 20 Aug 2012 23:40:00 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7KNdwOK001731
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 20 Aug 2012 23:39:58 GMT
Received: from abhmt109.oracle.com (abhmt109.oracle.com [141.146.116.61])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7KNdvA6020571; Mon, 20 Aug 2012 18:39:58 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 20 Aug 2012 16:39:57 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id ECDC4402D4; Mon, 20 Aug 2012 19:30:01 -0400 (EDT)
Date: Mon, 20 Aug 2012 19:30:01 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Tobias Geiger <tobias.geiger@vido.info>
Message-ID: <20120820233001.GA19360@phenom.dumpdata.com>
References: <d19a37de141b07add2cd49f3d1f63f2b@vido.info>
	<20120725134357.GA8959@phenom.dumpdata.com>
	<20120806161632.GA11007@phenom.dumpdata.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20120806161632.GA11007@phenom.dumpdata.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Regression in kernel 3.5 as Dom0 regarding PCI
 Passthrough?!
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Aug 06, 2012 at 12:16:33PM -0400, Konrad Rzeszutek Wilk wrote:
> On Wed, Jul 25, 2012 at 09:43:57AM -0400, Konrad Rzeszutek Wilk wrote:
> > On Wed, Jul 25, 2012 at 02:30:00PM +0200, Tobias Geiger wrote:
> > > Hi!
> > > 
> > > I notice a serious regression with 3.5 as the Dom0 kernel (3.4 was rock
> > > stable):
> > > 
> > > 1st: only the GPU PCI Passthrough works, the PCI USB Controller is
> > > not recognized within the DomU (HVM Win7 64)
> > > Dom0 cmdline is:
> > > ro root=LABEL=dom0root xen-pciback.hide=(08:00.0)(08:00.1)(00:1d.0)(00:1d.1)(00:1d.2)(00:1d.7)
> > > security=apparmor noirqdebug nouveau.msi=1
> > > 
> > > Only 8:00.0 and 8:00.1 get passed through without problems; all the
> > > USB controller IDs are not correctly passed through and get an
> > > exclamation mark within the Win7 device manager ("could not be
> > > started").
> > 
> > Ok, but they do get passed in though? As in, QEMU sees them.
> > If you boot a Live Ubuntu/Fedora CD within the guest with the PCI
> > passed in devices do you see them? Meaning lspci shows them?
> > 
> > 
> > Is the lspci -vvv output in dom0 different from 3.4 vs 3.5?
> > 
> > > 
> > > 
> > > 2nd: After DomU shutdown, Dom0 panics (100% reproducible) - sorry
> > > that I have no full stacktrace; all I have is a "screenshot" which I
> > > uploaded here:
> > > http://imageshack.us/photo/my-images/52/img20120724235921.jpg/
> > 
> > Ugh, that looks like somebody removed a large chunk of a pagetable.
> > 
> > Hmm. Are you using dom0_mem=max parameter? If not, can you try
> > that and also disable ballooning in the xm/xl config file pls?
> > 
> > > 
> > > 
> > > With 3.4 both issues were not there - everything worked perfectly.
> > > Tell me which debugging info you need; I may be able to re-install
> > > my netconsole to get the full stacktrace (but I had not much luck
> > > with netconsole regarding kernel panics - rarely does this info get
> > > sent before the "panic"...)
> 
> So I am able to reproduce this with a Windows 7 with an ATI 4870 and
> an Intel 82574L NIC. The video card still works, but the NIC stopped
> working. Same version of hypervisor/toolstack/etc, only change is the
> kernel (v3.4.6->v3.5).
> 
> Time to get my hands greasy with this..

And it's due to a patch I added in v3.4 (cd9db80e5257682a7f7ab245a2459648b3c8d268)
- it did not work properly in v3.4, but v3.5 got it working
(977f857ca566a1e68045fcbb7cfc9c4acb077cf0), which in turn causes v3.5 to not
work anymore.

Anyhow, for right now just revert cd9db80e5257682a7f7ab245a2459648b3c8d268
and it should work for you.
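
For anyone wanting to try the suggested fix, here is a self-contained sketch of a git revert in a throwaway repository; in a real v3.5 tree the argument would be the commit id above rather than HEAD:

```shell
# Demonstrates `git revert`: create a throwaway repo, make an "offending"
# commit, then revert it. In a real v3.5 tree the argument to git revert
# would be cd9db80e5257682a7f7ab245a2459648b3c8d268 instead of HEAD.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email demo@example.com
git config user.name demo
echo good > pciback.txt
git add pciback.txt
git commit -qm "baseline"
echo broken > pciback.txt
git commit -qam "offending change"
git revert --no-edit HEAD   # adds a new commit undoing the change
cat pciback.txt             # file content is back to the baseline
```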
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 20 23:43:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Aug 2012 23:43:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3bcB-0008Ci-QV; Mon, 20 Aug 2012 23:42:35 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1T3bcA-0008CZ-2q
	for Xen-devel@lists.xensource.com; Mon, 20 Aug 2012 23:42:34 +0000
Received: from [85.158.143.35:49608] by server-3.bemta-4.messagelabs.com id
	42/39-09529-96BC2305; Mon, 20 Aug 2012 23:42:33 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1345506149!16729239!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDcwODQwNQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30646 invoked from network); 20 Aug 2012 23:42:30 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-14.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 20 Aug 2012 23:42:30 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7KNgQmq022609
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 20 Aug 2012 23:42:27 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7KNgPro004625
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 20 Aug 2012 23:42:26 GMT
Received: from abhmt109.oracle.com (abhmt109.oracle.com [141.146.116.61])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7KNgNKQ000733; Mon, 20 Aug 2012 18:42:23 -0500
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 20 Aug 2012 16:42:22 -0700
Date: Mon, 20 Aug 2012 16:42:21 -0700
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20120820164221.3ce8f879@mantra.us.oracle.com>
In-Reply-To: <1345192554.30865.93.camel@zakaz.uk.xensource.com>
References: <20120815175724.3405043a@mantra.us.oracle.com>
	<1345192554.30865.93.camel@zakaz.uk.xensource.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.7.6 (GTK+ 2.18.9; x86_64-redhat-linux-gnu)
Mime-Version: 1.0
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [RFC PATCH 1/8]: PVH: Basic and preparatory changes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 17 Aug 2012 09:35:54 +0100
Ian Campbell <Ian.Campbell@citrix.com> wrote:

> On Thu, 2012-08-16 at 01:57 +0100, Mukesh Rathor wrote:
> 
> > +void __init xen_arch_setup(void)
> > +{
> > +       xen_panic_handler_init();
> > +
> > +       if (!xen_pvh_domain())
> > +               xen_non_pvh_arch_setup();
> 
> The negative in the fn name here strikes me as a bit weird. Can't this
> just be xen_pv_arch_setup?

Well, PVH is PV, so xen_pv_arch_setup would be confusing. Thus I can't
say if (xen_pv_domain()). The negative logic tells the reader right
away that the PV code doesn't apply to PVH. I earlier had
xen_pure_pv_arch_setup(), but I like non_pvh better.

> Or even just have:
> 	/* Everything else is specific to PV without hardware support */
> 	if (xen_pvh_domain())
> 		return;

No, the code following the if statement is common to both PV and PVH.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 20 23:52:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Aug 2012 23:52:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3bl2-0008Rb-1B; Mon, 20 Aug 2012 23:51:44 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
From xen-devel-bounces@lists.xen.org Mon Aug 20 23:52:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Aug 2012 23:52:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3bl2-0008Rb-1B; Mon, 20 Aug 2012 23:51:44 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1T3bl1-0008RW-6h
	for Xen-devel@lists.xensource.com; Mon, 20 Aug 2012 23:51:43 +0000
Received: from [85.158.139.83:13731] by server-2.bemta-5.messagelabs.com id
	8C/93-10142-E8DC2305; Mon, 20 Aug 2012 23:51:42 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-4.tower-182.messagelabs.com!1345506700!26437915!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDcwODQwNQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15131 invoked from network); 20 Aug 2012 23:51:41 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-4.tower-182.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 20 Aug 2012 23:51:41 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7KNpast028144
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 20 Aug 2012 23:51:37 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7KNpab8011530
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 20 Aug 2012 23:51:36 GMT
Received: from abhmt120.oracle.com (abhmt120.oracle.com [141.146.116.72])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7KNpZic005494; Mon, 20 Aug 2012 18:51:35 -0500
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 20 Aug 2012 16:51:35 -0700
Date: Mon, 20 Aug 2012 16:51:34 -0700
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: "Gao, Ping" <Ping.Gao@amd.com>
Message-ID: <20120820165134.7810b1e3@mantra.us.oracle.com>
In-Reply-To: <E3CB3CDCA634EB4D92AA8B78492C79E517BFFE@SCYBEXDAG03.amd.com>
References: <E3CB3CDCA634EB4D92AA8B78492C79E517BFFE@SCYBEXDAG03.amd.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.7.6 (GTK+ 2.18.9; x86_64-redhat-linux-gnu)
Mime-Version: 1.0
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] gdbsx bug : fail on a breakpoint
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hmm... I don't know what's the real cause. Is it during boot of the HVM
guest, or is the guest completely up and you are trying to set a
breakpoint?

For immediate use, can you just rebuild the kernel with the module built
in? I don't have a setup right now to debug/fix it, and am very busy
with PVH and preparing for the Xen summit, so I need some time to debug
and fix it.

thanks,
mukesh

PS: please cc xen-devel on all such issues.



On Mon, 20 Aug 2012 07:46:45 +0000
"Gao, Ping" <Ping.Gao@amd.com> wrote:

> Hi Mukesh,
> 
> Is any progress on this gdbsx issue reported by Andras? I'm interest
> to take use of the gdbsx but suffer the problem seems the same.
> 
> Description:
> 
>         Set gdbsx to debug a HVM kernel module, fail to set a
> breakpoint.
> 
>         Kernel :
> konrad/xen.git<http://git.kernel.org/?p=linux/kernel/git/konrad/xen.git;a=summary>
> testing<http://git.kernel.org/?p=linux/kernel/git/konrad/xen.git;a=shortlog;h=refs/heads/testing>
> Xen    :  stable 4.1.2 (ubuntu 12.04) Result :  breakpoint failed,
> gdbsx show ERROR:xg_write_mem:ERROR: failed to write 0 bytes.
> errno:14 rc:-1
> 
> I'm not the sure the issue in the below link is due to the same root
> cause or not https://bugzilla.redhat.com/show_bug.cgi?id=678571
> 
> Wait for your good news.
> 
> Br,
> Ping
> 
> -----Original Message-----
> On Wed, 25 Jul 2012 16:55:36 +0200
> András Méhes <andras.mehes@xxxxxxxxxxxx> wrote:
> 
> > Hi Mukesh,
> >
> >
> > Thanks for the good work on Xen debugging!
> >
> > I've just stumbled upon what I believe is a bug in 'xg_write_mem';
> > specifically, the assumption that 'iop->remain == 0' implies success
> > seems to be false.  First, the 'memset' call sets 'iop->remain' to
> > 0. Then, the hypercall fails without touching 'iop->remain' and
> > 'xg_write_mem' reports success....  Here's the relevant debug output
> > from gdbsx:
> >
> Hi Andras,
> 
> I am currently out of office, and will look at it next week sometime
> after I return and catch up. Thanks for pointing it out.
> 
> > That would be a much better alternative for setting breakpoints
> > *before* the kernel is actually mapped in.  The very brute-force
> > workaround with only software breakpoints -after the patch- is to
> > keep single stepping until setting the breakpoint finally
> > succeeds.. ;-(
> 
> Yeah, it's on the list, but I've been so busy with hybrid, and other
> stuff. Besides, haven't had many people ask for it, so priority
> remains low. Feel free to work on it tho, my top pri after I return
> is hybrid.
> 
> thanks,
> Mukesh
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 01:27:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 01:27:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3dFn-0004ae-KK; Tue, 21 Aug 2012 01:27:35 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <attilio.rao@citrix.com>) id 1T3dFm-0004aL-Ky
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 01:27:34 +0000
Received: from [85.158.139.83:49101] by server-5.bemta-5.messagelabs.com id
	63/A5-31019-504E2305; Tue, 21 Aug 2012 01:27:33 +0000
X-Env-Sender: attilio.rao@citrix.com
X-Msg-Ref: server-16.tower-182.messagelabs.com!1345512447!21822991!4
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk3ODg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10637 invoked from network); 21 Aug 2012 01:27:31 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Aug 2012 01:27:31 -0000
X-IronPort-AV: E=Sophos;i="4.77,799,1336348800"; d="scan'208";a="14095096"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Aug 2012 01:27:31 +0000
Received: from dhcp-3-145.uk.xensource.com (10.80.3.221) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 21 Aug 2012 02:27:31 +0100
From: Attilio Rao <attilio.rao@citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, Ian Campbell
	<Ian.Campbell@citrix.com>, Stefano Stabellini
	<Stefano.Stabellini@eu.citrix.com>, Ingo Molnar <mingo@redhat.com>, "H.
	Peter Anvin" <hpa@zytor.com>, Thomas Gleixner <tglx@linutronix.de>,
	<linux-kernel@vger.kernel.org>, <x86@kernel.org>,
	<xen-devel@lists.xensource.com>
Date: Tue, 21 Aug 2012 02:14:04 +0100
Message-ID: <1345511646-12427-4-git-send-email-attilio.rao@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1345511646-12427-1-git-send-email-attilio.rao@citrix.com>
References: <1345511646-12427-1-git-send-email-attilio.rao@citrix.com>
MIME-Version: 1.0
Cc: Attilio Rao <attilio.rao@citrix.com>
Subject: [Xen-devel] [PATCH 3/5] X86/XEN: Introduce the
	x86_init.paging.pagetable_init PVOPS
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This new PVOPS is responsible for setting up the kernel pagetables, and
entirely replaces the work of the x86_init.paging.pagetable_setup_start
and x86_init.paging.pagetable_setup_done PVOPS.

For performance, the x86_64 stub is implemented as a macro expanding to
paging_init() rather than an actual function stub.

Signed-off-by: Attilio Rao <attilio.rao@citrix.com>
---
 arch/x86/include/asm/pgtable_types.h |    2 +
 arch/x86/include/asm/x86_init.h      |    2 +
 arch/x86/kernel/x86_init.c           |    1 +
 arch/x86/mm/init_32.c                |   35 ++++++++++++++++++++++++++++++++++
 arch/x86/xen/mmu.c                   |   12 +++++++++-
 5 files changed, 50 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index 9b1c1f7..55f24b5 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -303,9 +303,11 @@ void set_pte_vaddr(unsigned long vaddr, pte_t pte);
 
 extern void native_pagetable_reserve(u64 start, u64 end);
 #ifdef CONFIG_X86_32
+extern void native_pagetable_init(void);
 extern void native_pagetable_setup_start(void);
 extern void native_pagetable_setup_done(void);
 #else
+#define native_pagetable_init        paging_init
 #define native_pagetable_setup_start x86_init_pgd_noop
 #define native_pagetable_setup_done  x86_init_pgd_noop
 #endif
diff --git a/arch/x86/include/asm/x86_init.h b/arch/x86/include/asm/x86_init.h
index efd0075..a74cc19 100644
--- a/arch/x86/include/asm/x86_init.h
+++ b/arch/x86/include/asm/x86_init.h
@@ -81,10 +81,12 @@ struct x86_init_mapping {
 
 /**
  * struct x86_init_paging - platform specific paging functions
+ * @pagetable_init:		platform specific paging initialization call
  * @pagetable_setup_start:	platform specific pre paging_init() call
  * @pagetable_setup_done:	platform specific post paging_init() call
  */
 struct x86_init_paging {
+	void (*pagetable_init)(void);
 	void (*pagetable_setup_start)(void);
 	void (*pagetable_setup_done)(void);
 };
diff --git a/arch/x86/kernel/x86_init.c b/arch/x86/kernel/x86_init.c
index 849be14..c1e910a 100644
--- a/arch/x86/kernel/x86_init.c
+++ b/arch/x86/kernel/x86_init.c
@@ -68,6 +68,7 @@ struct x86_init_ops x86_init __initdata = {
 	},
 
 	.paging = {
+		.pagetable_init		= native_pagetable_init,
 		.pagetable_setup_start	= native_pagetable_setup_start,
 		.pagetable_setup_done	= native_pagetable_setup_done,
 	},
diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c
index 7999cef..2ff4790 100644
--- a/arch/x86/mm/init_32.c
+++ b/arch/x86/mm/init_32.c
@@ -445,6 +445,41 @@ static inline void permanent_kmaps_init(pgd_t *pgd_base)
 }
 #endif /* CONFIG_HIGHMEM */
 
+void __init native_pagetable_init(void)
+{
+	unsigned long pfn, va;
+	pgd_t *base;
+	pgd_t *pgd;
+	pud_t *pud;
+	pmd_t *pmd;
+	pte_t *pte;
+
+	base = swapper_pg_dir;
+
+	/*
+	 * Remove any mappings which extend past the end of physical
+	 * memory from the boot time page table:
+	 */
+	for (pfn = max_low_pfn + 1; pfn < 1<<(32-PAGE_SHIFT); pfn++) {
+		va = PAGE_OFFSET + (pfn<<PAGE_SHIFT);
+		pgd = base + pgd_index(va);
+		if (!pgd_present(*pgd))
+			break;
+
+		pud = pud_offset(pgd, va);
+		pmd = pmd_offset(pud, va);
+		if (!pmd_present(*pmd))
+			break;
+
+		pte = pte_offset_kernel(pmd, va);
+		if (!pte_present(*pte))
+			break;
+
+		pte_clear(NULL, va, pte);
+	}
+	paravirt_alloc_pmd(&init_mm, __pa(base) >> PAGE_SHIFT);
+	paging_init();
+}
 void __init native_pagetable_setup_start(void)
 {
 	unsigned long pfn, va;
diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index 04a0a8f..68466ce 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -1174,6 +1174,15 @@ static void xen_exit_mmap(struct mm_struct *mm)
 	spin_unlock(&mm->page_table_lock);
 }
 
+static void xen_post_allocator_init(void);
+
+static void __init xen_pagetable_init(void)
+{
+	paging_init();
+	xen_setup_shared_info();
+	xen_post_allocator_init();
+}
+
 static void __init xen_pagetable_setup_start(void)
 {
 }
@@ -1192,8 +1201,6 @@ static __init void xen_mapping_pagetable_reserve(u64 start, u64 end)
 	}
 }
 
-static void xen_post_allocator_init(void);
-
 static void __init xen_pagetable_setup_done(void)
 {
 	xen_setup_shared_info();
@@ -2068,6 +2075,7 @@ static const struct pv_mmu_ops xen_mmu_ops __initconst = {
 void __init xen_init_mmu_ops(void)
 {
 	x86_init.mapping.pagetable_reserve = xen_mapping_pagetable_reserve;
+	x86_init.paging.pagetable_init = xen_pagetable_init;
 	x86_init.paging.pagetable_setup_start = xen_pagetable_setup_start;
 	x86_init.paging.pagetable_setup_done = xen_pagetable_setup_done;
 	pv_mmu_ops = xen_mmu_ops;
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 01:27:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 01:27:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3dFh-0004Zl-Fg; Tue, 21 Aug 2012 01:27:29 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <attilio.rao@citrix.com>) id 1T3dFg-0004Zf-Fx
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 01:27:28 +0000
Received: from [85.158.139.83:8921] by server-8.bemta-5.messagelabs.com id
	7C/AA-02481-FF3E2305; Tue, 21 Aug 2012 01:27:27 +0000
X-Env-Sender: attilio.rao@citrix.com
X-Msg-Ref: server-16.tower-182.messagelabs.com!1345512447!21822991!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk3ODg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10330 invoked from network); 21 Aug 2012 01:27:27 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Aug 2012 01:27:27 -0000
X-IronPort-AV: E=Sophos;i="4.77,799,1336348800"; d="scan'208";a="14095093"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Aug 2012 01:27:26 +0000
Received: from dhcp-3-145.uk.xensource.com (10.80.3.221) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 21 Aug 2012 02:27:26 +0100
From: Attilio Rao <attilio.rao@citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, Ian Campbell
	<Ian.Campbell@citrix.com>, Stefano Stabellini
	<Stefano.Stabellini@eu.citrix.com>, Ingo Molnar <mingo@redhat.com>, "H.
	Peter Anvin" <hpa@zytor.com>, Thomas Gleixner <tglx@linutronix.de>,
	<linux-kernel@vger.kernel.org>, <x86@kernel.org>,
	<xen-devel@lists.xensource.com>
Date: Tue, 21 Aug 2012 02:14:01 +0100
Message-ID: <1345511646-12427-1-git-send-email-attilio.rao@citrix.com>
X-Mailer: git-send-email 1.7.2.5
MIME-Version: 1.0
Cc: Attilio Rao <attilio.rao@citrix.com>
Subject: [Xen-devel] [PATCH 0/5] X86/XEN: Merge
	x86_init.paging.pagetable_setup_start and
	x86_init.paging.pagetable_setup_done PVOPS and document the semantic
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Currently the definitions of x86_init.paging.pagetable_setup_start and
x86_init.paging.pagetable_setup_done are twisted and not really well
defined (in terms of the desired prototypes). More specifically:
pagetable_setup_start:
 * cleans up the boot-time page table in the x86_32 case
 * it is a nop on x86_64
 * it is a nop for the XEN case

pagetable_setup_done:
 * it is a nop on x86_32
 * it is a nop on x86_64
 * sets up accessor functions for pagetable manipulation, for the
   XEN case

Most of this logic can be skipped by creating a single new PVOPS that
handles the pagetable setup and the pre/post operations around it.
The new PVOPS must be called only once, during boot-time setup and
after the direct mapping for physical memory is available.

Attilio Rao (5):
  XEN: Remove the base argument from
    x86_init.paging.pagetable_setup_done PVOPS
  XEN: Remove the base argument from
    x86_init.paging.pagetable_setup_start PVOPS
  X86/XEN: Introduce the x86_init.paging.pagetable_init() PVOPS
  X86/XEN: Retire now unused x86_init.paging.pagetable_setup_start and
    x86_init.paging.pagetable_setup_done PVOPS
  X86/XEN: Add few lines explaining simple semantic for
    x86_init.paging.pagetable_init PVOPS

 arch/x86/include/asm/pgtable_types.h |    6 ++----
 arch/x86/include/asm/x86_init.h      |   11 +++++++----
 arch/x86/kernel/setup.c              |    4 +---
 arch/x86/kernel/x86_init.c           |    4 +---
 arch/x86/mm/init_32.c                |   12 ++++++------
 arch/x86/xen/mmu.c                   |   18 +++++++-----------
 6 files changed, 24 insertions(+), 31 deletions(-)

-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 01:27:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 01:27:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3dFr-0004bm-Hy; Tue, 21 Aug 2012 01:27:39 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <attilio.rao@citrix.com>) id 1T3dFp-0004ad-9h
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 01:27:37 +0000
Received: from [85.158.139.83:49192] by server-1.bemta-5.messagelabs.com id
	8F/F1-09980-704E2305; Tue, 21 Aug 2012 01:27:35 +0000
X-Env-Sender: attilio.rao@citrix.com
X-Msg-Ref: server-16.tower-182.messagelabs.com!1345512447!21822991!6
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk3ODg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10818 invoked from network); 21 Aug 2012 01:27:34 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Aug 2012 01:27:34 -0000
X-IronPort-AV: E=Sophos;i="4.77,799,1336348800"; d="scan'208";a="14095098"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Aug 2012 01:27:34 +0000
Received: from dhcp-3-145.uk.xensource.com (10.80.3.221) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 21 Aug 2012 02:27:34 +0100
From: Attilio Rao <attilio.rao@citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, Ian Campbell
	<Ian.Campbell@citrix.com>, Stefano Stabellini
	<Stefano.Stabellini@eu.citrix.com>, Ingo Molnar <mingo@redhat.com>, "H.
	Peter Anvin" <hpa@zytor.com>, Thomas Gleixner <tglx@linutronix.de>,
	<linux-kernel@vger.kernel.org>, <x86@kernel.org>,
	<xen-devel@lists.xensource.com>
Date: Tue, 21 Aug 2012 02:14:06 +0100
Message-ID: <1345511646-12427-6-git-send-email-attilio.rao@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1345511646-12427-1-git-send-email-attilio.rao@citrix.com>
References: <1345511646-12427-1-git-send-email-attilio.rao@citrix.com>
MIME-Version: 1.0
Cc: Attilio Rao <attilio.rao@citrix.com>
Subject: [Xen-devel] [PATCH 5/5] X86/XEN: Add few lines explaining simple
	semantic for x86_init.paging.pagetable_init PVOPS
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

- Explain the purpose of the PVOPS
- Document its execution constraints

Signed-off-by: Attilio Rao <attilio.rao@citrix.com>
---
 arch/x86/include/asm/x86_init.h |    5 +++++
 1 files changed, 5 insertions(+), 0 deletions(-)

diff --git a/arch/x86/include/asm/x86_init.h b/arch/x86/include/asm/x86_init.h
index 995ea5c..7ea4186 100644
--- a/arch/x86/include/asm/x86_init.h
+++ b/arch/x86/include/asm/x86_init.h
@@ -82,6 +82,11 @@ struct x86_init_mapping {
 /**
  * struct x86_init_paging - platform specific paging functions
  * @pagetable_init:		platform specific paging initialization call
+ *
+ * Sets up the kernel pagetables and prepares the accessor functions used
+ * to manipulate them.
+ * It must be called only once, during the boot sequence, after the direct
+ * mapping for physical memory has been set up.
  */
 struct x86_init_paging {
 	void (*pagetable_init)(void);
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 01:27:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 01:27:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3dFh-0004Zl-Fg; Tue, 21 Aug 2012 01:27:29 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <attilio.rao@citrix.com>) id 1T3dFg-0004Zf-Fx
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 01:27:28 +0000
Received: from [85.158.139.83:8921] by server-8.bemta-5.messagelabs.com id
	7C/AA-02481-FF3E2305; Tue, 21 Aug 2012 01:27:27 +0000
X-Env-Sender: attilio.rao@citrix.com
X-Msg-Ref: server-16.tower-182.messagelabs.com!1345512447!21822991!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk3ODg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10330 invoked from network); 21 Aug 2012 01:27:27 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Aug 2012 01:27:27 -0000
X-IronPort-AV: E=Sophos;i="4.77,799,1336348800"; d="scan'208";a="14095093"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Aug 2012 01:27:26 +0000
Received: from dhcp-3-145.uk.xensource.com (10.80.3.221) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 21 Aug 2012 02:27:26 +0100
From: Attilio Rao <attilio.rao@citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, Ian Campbell
	<Ian.Campbell@citrix.com>, Stefano Stabellini
	<Stefano.Stabellini@eu.citrix.com>, Ingo Molnar <mingo@redhat.com>, "H.
	Peter Anvin" <hpa@zytor.com>, Thomas Gleixner <tglx@linutronix.de>,
	<linux-kernel@vger.kernel.org>, <x86@kernel.org>,
	<xen-devel@lists.xensource.com>
Date: Tue, 21 Aug 2012 02:14:01 +0100
Message-ID: <1345511646-12427-1-git-send-email-attilio.rao@citrix.com>
X-Mailer: git-send-email 1.7.2.5
MIME-Version: 1.0
Cc: Attilio Rao <attilio.rao@citrix.com>
Subject: [Xen-devel] [PATCH 0/5] X86/XEN: Merge
	x86_init.paging.pagetable_setup_start and
	x86_init.paging.pagetable_setup_done PVOPS and document the semantic
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Currently the definitions of x86_init.paging.pagetable_setup_start and
x86_init.paging.pagetable_setup_done are convoluted and not well
defined (in terms of the desired prototypes). More specifically:
pagetable_setup_start:
 * cleans up the boot time page table in the x86_32 case
 * it is a nop on x86_64
 * it is a nop for the XEN case

pagetable_setup_done:
 * it is a nop on x86_32
 * it is a nop on x86_64
 * sets up the accessor functions for pagetable manipulation, for the
   XEN case

Most of this logic can be dropped by introducing a single new PVOPS that
handles the pagetable setup together with any pre/post operations on it.
The new PVOPS must be called exactly once, during boot-time setup, and
only after the direct mapping for physical memory is available.

Attilio Rao (5):
  XEN: Remove the base argument from
    x86_init.paging.pagetable_setup_done PVOPS
  XEN: Remove the base argument from
    x86_init.paging.pagetable_setup_start PVOPS
  X86/XEN: Introduce the x86_init.paging.pagetable_init() PVOPS
  X86/XEN: Retire now unused x86_init.paging.pagetable_setup_start and
    x86_init.paging.pagetable_setup_done PVOPS
  X86/XEN: Add few lines explaining simple semantic for
    x86_init.paging.pagetable_init PVOPS

 arch/x86/include/asm/pgtable_types.h |    6 ++----
 arch/x86/include/asm/x86_init.h      |   11 +++++++----
 arch/x86/kernel/setup.c              |    4 +---
 arch/x86/kernel/x86_init.c           |    4 +---
 arch/x86/mm/init_32.c                |   12 ++++++------
 arch/x86/xen/mmu.c                   |   18 +++++++-----------
 6 files changed, 24 insertions(+), 31 deletions(-)

-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 01:27:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 01:27:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3dFn-0004ae-KK; Tue, 21 Aug 2012 01:27:35 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <attilio.rao@citrix.com>) id 1T3dFm-0004aL-Ky
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 01:27:34 +0000
Received: from [85.158.139.83:49101] by server-5.bemta-5.messagelabs.com id
	63/A5-31019-504E2305; Tue, 21 Aug 2012 01:27:33 +0000
X-Env-Sender: attilio.rao@citrix.com
X-Msg-Ref: server-16.tower-182.messagelabs.com!1345512447!21822991!4
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk3ODg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10637 invoked from network); 21 Aug 2012 01:27:31 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Aug 2012 01:27:31 -0000
X-IronPort-AV: E=Sophos;i="4.77,799,1336348800"; d="scan'208";a="14095096"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Aug 2012 01:27:31 +0000
Received: from dhcp-3-145.uk.xensource.com (10.80.3.221) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 21 Aug 2012 02:27:31 +0100
From: Attilio Rao <attilio.rao@citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, Ian Campbell
	<Ian.Campbell@citrix.com>, Stefano Stabellini
	<Stefano.Stabellini@eu.citrix.com>, Ingo Molnar <mingo@redhat.com>, "H.
	Peter Anvin" <hpa@zytor.com>, Thomas Gleixner <tglx@linutronix.de>,
	<linux-kernel@vger.kernel.org>, <x86@kernel.org>,
	<xen-devel@lists.xensource.com>
Date: Tue, 21 Aug 2012 02:14:04 +0100
Message-ID: <1345511646-12427-4-git-send-email-attilio.rao@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1345511646-12427-1-git-send-email-attilio.rao@citrix.com>
References: <1345511646-12427-1-git-send-email-attilio.rao@citrix.com>
MIME-Version: 1.0
Cc: Attilio Rao <attilio.rao@citrix.com>
Subject: [Xen-devel] [PATCH 3/5] X86/XEN: Introduce the
	x86_init.paging.pagetable_init PVOPS
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This new PVOPS is responsible for setting up the kernel pagetables and
entirely replaces the work done by the
x86_init.paging.pagetable_setup_start and
x86_init.paging.pagetable_setup_done PVOPS.

For performance, the x86_64 stub is implemented as a macro aliasing
paging_init() rather than as an actual function stub.

Signed-off-by: Attilio Rao <attilio.rao@citrix.com>
---
 arch/x86/include/asm/pgtable_types.h |    2 +
 arch/x86/include/asm/x86_init.h      |    2 +
 arch/x86/kernel/x86_init.c           |    1 +
 arch/x86/mm/init_32.c                |   35 ++++++++++++++++++++++++++++++++++
 arch/x86/xen/mmu.c                   |   12 +++++++++-
 5 files changed, 50 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index 9b1c1f7..55f24b5 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -303,9 +303,11 @@ void set_pte_vaddr(unsigned long vaddr, pte_t pte);
 
 extern void native_pagetable_reserve(u64 start, u64 end);
 #ifdef CONFIG_X86_32
+extern void native_pagetable_init(void);
 extern void native_pagetable_setup_start(void);
 extern void native_pagetable_setup_done(void);
 #else
+#define native_pagetable_init        paging_init
 #define native_pagetable_setup_start x86_init_pgd_noop
 #define native_pagetable_setup_done  x86_init_pgd_noop
 #endif
diff --git a/arch/x86/include/asm/x86_init.h b/arch/x86/include/asm/x86_init.h
index efd0075..a74cc19 100644
--- a/arch/x86/include/asm/x86_init.h
+++ b/arch/x86/include/asm/x86_init.h
@@ -81,10 +81,12 @@ struct x86_init_mapping {
 
 /**
  * struct x86_init_paging - platform specific paging functions
+ * @pagetable_init:		platform specific paging initialization call
  * @pagetable_setup_start:	platform specific pre paging_init() call
  * @pagetable_setup_done:	platform specific post paging_init() call
  */
 struct x86_init_paging {
+	void (*pagetable_init)(void);
 	void (*pagetable_setup_start)(void);
 	void (*pagetable_setup_done)(void);
 };
diff --git a/arch/x86/kernel/x86_init.c b/arch/x86/kernel/x86_init.c
index 849be14..c1e910a 100644
--- a/arch/x86/kernel/x86_init.c
+++ b/arch/x86/kernel/x86_init.c
@@ -68,6 +68,7 @@ struct x86_init_ops x86_init __initdata = {
 	},
 
 	.paging = {
+		.pagetable_init		= native_pagetable_init,
 		.pagetable_setup_start	= native_pagetable_setup_start,
 		.pagetable_setup_done	= native_pagetable_setup_done,
 	},
diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c
index 7999cef..2ff4790 100644
--- a/arch/x86/mm/init_32.c
+++ b/arch/x86/mm/init_32.c
@@ -445,6 +445,41 @@ static inline void permanent_kmaps_init(pgd_t *pgd_base)
 }
 #endif /* CONFIG_HIGHMEM */
 
+void __init native_pagetable_init(void)
+{
+	unsigned long pfn, va;
+	pgd_t *base;
+	pgd_t *pgd;
+	pud_t *pud;
+	pmd_t *pmd;
+	pte_t *pte;
+
+	base = swapper_pg_dir;
+
+	/*
+	 * Remove any mappings which extend past the end of physical
+	 * memory from the boot time page table:
+	 */
+	for (pfn = max_low_pfn + 1; pfn < 1<<(32-PAGE_SHIFT); pfn++) {
+		va = PAGE_OFFSET + (pfn<<PAGE_SHIFT);
+		pgd = base + pgd_index(va);
+		if (!pgd_present(*pgd))
+			break;
+
+		pud = pud_offset(pgd, va);
+		pmd = pmd_offset(pud, va);
+		if (!pmd_present(*pmd))
+			break;
+
+		pte = pte_offset_kernel(pmd, va);
+		if (!pte_present(*pte))
+			break;
+
+		pte_clear(NULL, va, pte);
+	}
+	paravirt_alloc_pmd(&init_mm, __pa(base) >> PAGE_SHIFT);
+	paging_init();
+}
 void __init native_pagetable_setup_start(void)
 {
 	unsigned long pfn, va;
diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index 04a0a8f..68466ce 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -1174,6 +1174,15 @@ static void xen_exit_mmap(struct mm_struct *mm)
 	spin_unlock(&mm->page_table_lock);
 }
 
+static void xen_post_allocator_init(void);
+
+static void __init xen_pagetable_init(void)
+{
+	paging_init();
+	xen_setup_shared_info();
+	xen_post_allocator_init();
+}
+
 static void __init xen_pagetable_setup_start(void)
 {
 }
@@ -1192,8 +1201,6 @@ static __init void xen_mapping_pagetable_reserve(u64 start, u64 end)
 	}
 }
 
-static void xen_post_allocator_init(void);
-
 static void __init xen_pagetable_setup_done(void)
 {
 	xen_setup_shared_info();
@@ -2068,6 +2075,7 @@ static const struct pv_mmu_ops xen_mmu_ops __initconst = {
 void __init xen_init_mmu_ops(void)
 {
 	x86_init.mapping.pagetable_reserve = xen_mapping_pagetable_reserve;
+	x86_init.paging.pagetable_init = xen_pagetable_init;
 	x86_init.paging.pagetable_setup_start = xen_pagetable_setup_start;
 	x86_init.paging.pagetable_setup_done = xen_pagetable_setup_done;
 	pv_mmu_ops = xen_mmu_ops;
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 01:27:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 01:27:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3dFr-0004bm-Hy; Tue, 21 Aug 2012 01:27:39 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <attilio.rao@citrix.com>) id 1T3dFp-0004ad-9h
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 01:27:37 +0000
Received: from [85.158.139.83:49192] by server-1.bemta-5.messagelabs.com id
	8F/F1-09980-704E2305; Tue, 21 Aug 2012 01:27:35 +0000
X-Env-Sender: attilio.rao@citrix.com
X-Msg-Ref: server-16.tower-182.messagelabs.com!1345512447!21822991!6
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk3ODg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10818 invoked from network); 21 Aug 2012 01:27:34 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Aug 2012 01:27:34 -0000
X-IronPort-AV: E=Sophos;i="4.77,799,1336348800"; d="scan'208";a="14095098"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Aug 2012 01:27:34 +0000
Received: from dhcp-3-145.uk.xensource.com (10.80.3.221) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 21 Aug 2012 02:27:34 +0100
From: Attilio Rao <attilio.rao@citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, Ian Campbell
	<Ian.Campbell@citrix.com>, Stefano Stabellini
	<Stefano.Stabellini@eu.citrix.com>, Ingo Molnar <mingo@redhat.com>, "H.
	Peter Anvin" <hpa@zytor.com>, Thomas Gleixner <tglx@linutronix.de>,
	<linux-kernel@vger.kernel.org>, <x86@kernel.org>,
	<xen-devel@lists.xensource.com>
Date: Tue, 21 Aug 2012 02:14:06 +0100
Message-ID: <1345511646-12427-6-git-send-email-attilio.rao@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1345511646-12427-1-git-send-email-attilio.rao@citrix.com>
References: <1345511646-12427-1-git-send-email-attilio.rao@citrix.com>
MIME-Version: 1.0
Cc: Attilio Rao <attilio.rao@citrix.com>
Subject: [Xen-devel] [PATCH 5/5] X86/XEN: Add few lines explaining simple
	semantic for x86_init.paging.pagetable_init PVOPS
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

- Explain the purpose of the PVOPS
- Document its execution constraints

Signed-off-by: Attilio Rao <attilio.rao@citrix.com>
---
 arch/x86/include/asm/x86_init.h |    5 +++++
 1 files changed, 5 insertions(+), 0 deletions(-)

diff --git a/arch/x86/include/asm/x86_init.h b/arch/x86/include/asm/x86_init.h
index 995ea5c..7ea4186 100644
--- a/arch/x86/include/asm/x86_init.h
+++ b/arch/x86/include/asm/x86_init.h
@@ -82,6 +82,11 @@ struct x86_init_mapping {
 /**
  * struct x86_init_paging - platform specific paging functions
  * @pagetable_init:		platform specific paging initialization call
+ *
+ * It does setup the kernel pagetables and prepares accessors functions to
+ * manipulate them.
+ * It must be called once, during the boot sequence and after the direct
+ * mapping for phys memory is setup.
  */
 struct x86_init_paging {
 	void (*pagetable_init)(void);
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 01:27:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 01:27:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3dFl-0004aB-7f; Tue, 21 Aug 2012 01:27:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <attilio.rao@citrix.com>) id 1T3dFj-0004Zw-DH
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 01:27:31 +0000
Received: from [85.158.139.83:9023] by server-4.bemta-5.messagelabs.com id
	2D/76-12386-204E2305; Tue, 21 Aug 2012 01:27:30 +0000
X-Env-Sender: attilio.rao@citrix.com
X-Msg-Ref: server-16.tower-182.messagelabs.com!1345512447!21822991!3
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk3ODg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10519 invoked from network); 21 Aug 2012 01:27:30 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Aug 2012 01:27:30 -0000
X-IronPort-AV: E=Sophos;i="4.77,799,1336348800"; d="scan'208";a="14095095"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Aug 2012 01:27:30 +0000
Received: from dhcp-3-145.uk.xensource.com (10.80.3.221) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 21 Aug 2012 02:27:30 +0100
From: Attilio Rao <attilio.rao@citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, Ian Campbell
	<Ian.Campbell@citrix.com>, Stefano Stabellini
	<Stefano.Stabellini@eu.citrix.com>, Ingo Molnar <mingo@redhat.com>, "H.
	Peter Anvin" <hpa@zytor.com>, Thomas Gleixner <tglx@linutronix.de>,
	<linux-kernel@vger.kernel.org>, <x86@kernel.org>,
	<xen-devel@lists.xensource.com>
Date: Tue, 21 Aug 2012 02:14:03 +0100
Message-ID: <1345511646-12427-3-git-send-email-attilio.rao@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1345511646-12427-1-git-send-email-attilio.rao@citrix.com>
References: <1345511646-12427-1-git-send-email-attilio.rao@citrix.com>
MIME-Version: 1.0
Cc: Attilio Rao <attilio.rao@citrix.com>
Subject: [Xen-devel] [PATCH 2/5] XEN: Remove the base argument from
	x86_init.paging.pagetable_setup_start PVOPS
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

x86_init.paging.pagetable_setup_start is always passed swapper_pg_dir
in the single place where it is used, while for XEN the argument is
simply unused. Additionally, the comments already point to
swapper_pg_dir as the sole base touched.
Finally, this will help with the further merging of the
x86_init.paging.pagetable_setup_start and
x86_init.paging.pagetable_setup_done PVOPS.

Signed-off-by: Attilio Rao <attilio.rao@citrix.com>
---
 arch/x86/include/asm/pgtable_types.h |    6 +++---
 arch/x86/include/asm/x86_init.h      |    2 +-
 arch/x86/kernel/setup.c              |    2 +-
 arch/x86/kernel/x86_init.c           |    3 +--
 arch/x86/mm/init_32.c                |    5 ++++-
 arch/x86/xen/mmu.c                   |    2 +-
 6 files changed, 11 insertions(+), 9 deletions(-)

diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index f9e07b0..9b1c1f7 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -303,11 +303,11 @@ void set_pte_vaddr(unsigned long vaddr, pte_t pte);
 
 extern void native_pagetable_reserve(u64 start, u64 end);
 #ifdef CONFIG_X86_32
-extern void native_pagetable_setup_start(pgd_t *base);
+extern void native_pagetable_setup_start(void);
 extern void native_pagetable_setup_done(void);
 #else
-#define native_pagetable_setup_start x86_init_pgd_start_noop
-#define native_pagetable_setup_done  x86_init_pgd_stop_noop
+#define native_pagetable_setup_start x86_init_pgd_noop
+#define native_pagetable_setup_done  x86_init_pgd_noop
 #endif
 
 struct seq_file;
diff --git a/arch/x86/include/asm/x86_init.h b/arch/x86/include/asm/x86_init.h
index 439a4c3..efd0075 100644
--- a/arch/x86/include/asm/x86_init.h
+++ b/arch/x86/include/asm/x86_init.h
@@ -85,7 +85,7 @@ struct x86_init_mapping {
  * @pagetable_setup_done:	platform specific post paging_init() call
  */
 struct x86_init_paging {
-	void (*pagetable_setup_start)(pgd_t *base);
+	void (*pagetable_setup_start)(void);
 	void (*pagetable_setup_done)(void);
 };
 
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index ed9094d..d3d8f00 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -961,7 +961,7 @@ void __init setup_arch(char **cmdline_p)
 	kvmclock_init();
 #endif
 
-	x86_init.paging.pagetable_setup_start(swapper_pg_dir);
+	x86_init.paging.pagetable_setup_start();
 	paging_init();
 	x86_init.paging.pagetable_setup_done();
 
diff --git a/arch/x86/kernel/x86_init.c b/arch/x86/kernel/x86_init.c
index b27b30d..849be14 100644
--- a/arch/x86/kernel/x86_init.c
+++ b/arch/x86/kernel/x86_init.c
@@ -26,8 +26,7 @@
 
 void __cpuinit x86_init_noop(void) { }
 void __init x86_init_uint_noop(unsigned int unused) { }
-void __init x86_init_pgd_start_noop(pgd_t *unused) { }
-void __init x86_init_pgd_stop_noop(void) { }
+void __init x86_init_pgd_noop(void) { }
 int __init iommu_init_noop(void) { return 0; }
 void iommu_shutdown_noop(void) { }
 
diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c
index 1019156..7999cef 100644
--- a/arch/x86/mm/init_32.c
+++ b/arch/x86/mm/init_32.c
@@ -445,14 +445,17 @@ static inline void permanent_kmaps_init(pgd_t *pgd_base)
 }
 #endif /* CONFIG_HIGHMEM */
 
-void __init native_pagetable_setup_start(pgd_t *base)
+void __init native_pagetable_setup_start(void)
 {
 	unsigned long pfn, va;
+	pgd_t *base;
 	pgd_t *pgd;
 	pud_t *pud;
 	pmd_t *pmd;
 	pte_t *pte;
 
+	base = swapper_pg_dir;
+
 	/*
 	 * Remove any mappings which extend past the end of physical
 	 * memory from the boot time page table:
diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index d847548..04a0a8f 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -1174,7 +1174,7 @@ static void xen_exit_mmap(struct mm_struct *mm)
 	spin_unlock(&mm->page_table_lock);
 }
 
-static void __init xen_pagetable_setup_start(pgd_t *base)
+static void __init xen_pagetable_setup_start(void)
 {
 }
 
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 01:27:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 01:27:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3dFj-0004a3-RP; Tue, 21 Aug 2012 01:27:31 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <attilio.rao@citrix.com>) id 1T3dFh-0004Zk-PS
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 01:27:30 +0000
Received: from [85.158.139.83:8972] by server-2.bemta-5.messagelabs.com id
	01/6F-10142-104E2305; Tue, 21 Aug 2012 01:27:29 +0000
X-Env-Sender: attilio.rao@citrix.com
X-Msg-Ref: server-16.tower-182.messagelabs.com!1345512447!21822991!2
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk3ODg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10426 invoked from network); 21 Aug 2012 01:27:28 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Aug 2012 01:27:28 -0000
X-IronPort-AV: E=Sophos;i="4.77,799,1336348800"; d="scan'208";a="14095094"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Aug 2012 01:27:28 +0000
Received: from dhcp-3-145.uk.xensource.com (10.80.3.221) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 21 Aug 2012 02:27:28 +0100
From: Attilio Rao <attilio.rao@citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, Ian Campbell
	<Ian.Campbell@citrix.com>, Stefano Stabellini
	<Stefano.Stabellini@eu.citrix.com>, Ingo Molnar <mingo@redhat.com>, "H.
	Peter Anvin" <hpa@zytor.com>, Thomas Gleixner <tglx@linutronix.de>,
	<linux-kernel@vger.kernel.org>, <x86@kernel.org>,
	<xen-devel@lists.xensource.com>
Date: Tue, 21 Aug 2012 02:14:02 +0100
Message-ID: <1345511646-12427-2-git-send-email-attilio.rao@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1345511646-12427-1-git-send-email-attilio.rao@citrix.com>
References: <1345511646-12427-1-git-send-email-attilio.rao@citrix.com>
MIME-Version: 1.0
Cc: Attilio Rao <attilio.rao@citrix.com>
Subject: [Xen-devel] [PATCH 1/5] XEN: Remove the base argument from
	x86_init.paging.pagetable_setup_done PVOPS
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The x86_init.paging.pagetable_setup_done PVOPS does not need to know
about the base argument, so just remove it.

Signed-off-by: Attilio Rao <attilio.rao@citrix.com>
---
 arch/x86/include/asm/pgtable_types.h |    6 +++---
 arch/x86/include/asm/x86_init.h      |    2 +-
 arch/x86/kernel/setup.c              |    2 +-
 arch/x86/kernel/x86_init.c           |    3 ++-
 arch/x86/mm/init_32.c                |    2 +-
 arch/x86/xen/mmu.c                   |    2 +-
 6 files changed, 9 insertions(+), 8 deletions(-)

diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index 013286a..f9e07b0 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -304,10 +304,10 @@ void set_pte_vaddr(unsigned long vaddr, pte_t pte);
 extern void native_pagetable_reserve(u64 start, u64 end);
 #ifdef CONFIG_X86_32
 extern void native_pagetable_setup_start(pgd_t *base);
-extern void native_pagetable_setup_done(pgd_t *base);
+extern void native_pagetable_setup_done(void);
 #else
-#define native_pagetable_setup_start x86_init_pgd_noop
-#define native_pagetable_setup_done  x86_init_pgd_noop
+#define native_pagetable_setup_start x86_init_pgd_start_noop
+#define native_pagetable_setup_done  x86_init_pgd_stop_noop
 #endif
 
 struct seq_file;
diff --git a/arch/x86/include/asm/x86_init.h b/arch/x86/include/asm/x86_init.h
index 38155f6..439a4c3 100644
--- a/arch/x86/include/asm/x86_init.h
+++ b/arch/x86/include/asm/x86_init.h
@@ -86,7 +86,7 @@ struct x86_init_mapping {
  */
 struct x86_init_paging {
 	void (*pagetable_setup_start)(pgd_t *base);
-	void (*pagetable_setup_done)(pgd_t *base);
+	void (*pagetable_setup_done)(void);
 };
 
 /**
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index f4b9b80..ed9094d 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -963,7 +963,7 @@ void __init setup_arch(char **cmdline_p)
 
 	x86_init.paging.pagetable_setup_start(swapper_pg_dir);
 	paging_init();
-	x86_init.paging.pagetable_setup_done(swapper_pg_dir);
+	x86_init.paging.pagetable_setup_done();
 
 	if (boot_cpu_data.cpuid_level >= 0) {
 		/* A CPU has %cr4 if and only if it has CPUID */
diff --git a/arch/x86/kernel/x86_init.c b/arch/x86/kernel/x86_init.c
index 9f3167e..b27b30d 100644
--- a/arch/x86/kernel/x86_init.c
+++ b/arch/x86/kernel/x86_init.c
@@ -26,7 +26,8 @@
 
 void __cpuinit x86_init_noop(void) { }
 void __init x86_init_uint_noop(unsigned int unused) { }
-void __init x86_init_pgd_noop(pgd_t *unused) { }
+void __init x86_init_pgd_start_noop(pgd_t *unused) { }
+void __init x86_init_pgd_stop_noop(void) { }
 int __init iommu_init_noop(void) { return 0; }
 void iommu_shutdown_noop(void) { }
 
diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c
index 575d86f..1019156 100644
--- a/arch/x86/mm/init_32.c
+++ b/arch/x86/mm/init_32.c
@@ -477,7 +477,7 @@ void __init native_pagetable_setup_start(pgd_t *base)
 	paravirt_alloc_pmd(&init_mm, __pa(base) >> PAGE_SHIFT);
 }
 
-void __init native_pagetable_setup_done(pgd_t *base)
+void __init native_pagetable_setup_done(void)
 {
 }
 
diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index b65a761..d847548 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -1194,7 +1194,7 @@ static __init void xen_mapping_pagetable_reserve(u64 start, u64 end)
 
 static void xen_post_allocator_init(void);
 
-static void __init xen_pagetable_setup_done(pgd_t *base)
+static void __init xen_pagetable_setup_done(void)
 {
 	xen_setup_shared_info();
 	xen_post_allocator_init();
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 01:27:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 01:27:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3dFo-0004am-0g; Tue, 21 Aug 2012 01:27:36 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <attilio.rao@citrix.com>) id 1T3dFm-0004aO-JG
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 01:27:34 +0000
Received: from [85.158.139.83:9136] by server-12.bemta-5.messagelabs.com id
	12/25-22359-504E2305; Tue, 21 Aug 2012 01:27:33 +0000
X-Env-Sender: attilio.rao@citrix.com
X-Msg-Ref: server-16.tower-182.messagelabs.com!1345512447!21822991!5
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk3ODg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10730 invoked from network); 21 Aug 2012 01:27:33 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Aug 2012 01:27:33 -0000
X-IronPort-AV: E=Sophos;i="4.77,799,1336348800"; d="scan'208";a="14095097"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Aug 2012 01:27:33 +0000
Received: from dhcp-3-145.uk.xensource.com (10.80.3.221) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 21 Aug 2012 02:27:33 +0100
From: Attilio Rao <attilio.rao@citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, Ian Campbell
	<Ian.Campbell@citrix.com>, Stefano Stabellini
	<Stefano.Stabellini@eu.citrix.com>, Ingo Molnar <mingo@redhat.com>, "H.
	Peter Anvin" <hpa@zytor.com>, Thomas Gleixner <tglx@linutronix.de>,
	<linux-kernel@vger.kernel.org>, <x86@kernel.org>,
	<xen-devel@lists.xensource.com>
Date: Tue, 21 Aug 2012 02:14:05 +0100
Message-ID: <1345511646-12427-5-git-send-email-attilio.rao@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1345511646-12427-1-git-send-email-attilio.rao@citrix.com>
References: <1345511646-12427-1-git-send-email-attilio.rao@citrix.com>
MIME-Version: 1.0
Cc: Attilio Rao <attilio.rao@citrix.com>
Subject: [Xen-devel] [PATCH 4/5] X86/XEN: Retire now unused
	x86_init.paging.pagetable_setup_start and
	x86_init.paging.pagetable_setup_done PVOPS
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Now that the x86_init.paging.pagetable_init PVOPS is implemented, retire
the now unused x86_init.paging.pagetable_setup_start and
x86_init.paging.pagetable_setup_done PVOPS and replace their usage with
it.

Signed-off-by: Attilio Rao <attilio.rao@citrix.com>
---
 arch/x86/include/asm/pgtable_types.h |    4 ---
 arch/x86/include/asm/x86_init.h      |    4 ---
 arch/x86/kernel/setup.c              |    4 +--
 arch/x86/kernel/x86_init.c           |    3 --
 arch/x86/mm/init_32.c                |   40 +---------------------------------
 arch/x86/xen/mmu.c                   |   12 ----------
 6 files changed, 2 insertions(+), 65 deletions(-)

diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index 55f24b5..db8fec6 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -304,12 +304,8 @@ void set_pte_vaddr(unsigned long vaddr, pte_t pte);
 extern void native_pagetable_reserve(u64 start, u64 end);
 #ifdef CONFIG_X86_32
 extern void native_pagetable_init(void);
-extern void native_pagetable_setup_start(void);
-extern void native_pagetable_setup_done(void);
 #else
 #define native_pagetable_init        paging_init
-#define native_pagetable_setup_start x86_init_pgd_noop
-#define native_pagetable_setup_done  x86_init_pgd_noop
 #endif
 
 struct seq_file;
diff --git a/arch/x86/include/asm/x86_init.h b/arch/x86/include/asm/x86_init.h
index a74cc19..995ea5c 100644
--- a/arch/x86/include/asm/x86_init.h
+++ b/arch/x86/include/asm/x86_init.h
@@ -82,13 +82,9 @@ struct x86_init_mapping {
 /**
  * struct x86_init_paging - platform specific paging functions
  * @pagetable_init:		platform specific paging initialization call
- * @pagetable_setup_start:	platform specific pre paging_init() call
- * @pagetable_setup_done:	platform specific post paging_init() call
  */
 struct x86_init_paging {
 	void (*pagetable_init)(void);
-	void (*pagetable_setup_start)(void);
-	void (*pagetable_setup_done)(void);
 };
 
 /**
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index d3d8f00..4f16547 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -961,9 +961,7 @@ void __init setup_arch(char **cmdline_p)
 	kvmclock_init();
 #endif
 
-	x86_init.paging.pagetable_setup_start();
-	paging_init();
-	x86_init.paging.pagetable_setup_done();
+	x86_init.paging.pagetable_init();
 
 	if (boot_cpu_data.cpuid_level >= 0) {
 		/* A CPU has %cr4 if and only if it has CPUID */
diff --git a/arch/x86/kernel/x86_init.c b/arch/x86/kernel/x86_init.c
index c1e910a..7a3d075 100644
--- a/arch/x86/kernel/x86_init.c
+++ b/arch/x86/kernel/x86_init.c
@@ -26,7 +26,6 @@
 
 void __cpuinit x86_init_noop(void) { }
 void __init x86_init_uint_noop(unsigned int unused) { }
-void __init x86_init_pgd_noop(void) { }
 int __init iommu_init_noop(void) { return 0; }
 void iommu_shutdown_noop(void) { }
 
@@ -69,8 +68,6 @@ struct x86_init_ops x86_init __initdata = {
 
 	.paging = {
 		.pagetable_init		= native_pagetable_init,
-		.pagetable_setup_start	= native_pagetable_setup_start,
-		.pagetable_setup_done	= native_pagetable_setup_done,
 	},
 
 	.timers = {
diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c
index 2ff4790..f8771ad 100644
--- a/arch/x86/mm/init_32.c
+++ b/arch/x86/mm/init_32.c
@@ -480,44 +480,6 @@ void __init native_pagetable_init(void)
 	paravirt_alloc_pmd(&init_mm, __pa(base) >> PAGE_SHIFT);
 	paging_init();
 }
-void __init native_pagetable_setup_start(void)
-{
-	unsigned long pfn, va;
-	pgd_t *base;
-	pgd_t *pgd;
-	pud_t *pud;
-	pmd_t *pmd;
-	pte_t *pte;
-
-	base = swapper_pg_dir;
-
-	/*
-	 * Remove any mappings which extend past the end of physical
-	 * memory from the boot time page table:
-	 */
-	for (pfn = max_low_pfn + 1; pfn < 1<<(32-PAGE_SHIFT); pfn++) {
-		va = PAGE_OFFSET + (pfn<<PAGE_SHIFT);
-		pgd = base + pgd_index(va);
-		if (!pgd_present(*pgd))
-			break;
-
-		pud = pud_offset(pgd, va);
-		pmd = pmd_offset(pud, va);
-		if (!pmd_present(*pmd))
-			break;
-
-		pte = pte_offset_kernel(pmd, va);
-		if (!pte_present(*pte))
-			break;
-
-		pte_clear(NULL, va, pte);
-	}
-	paravirt_alloc_pmd(&init_mm, __pa(base) >> PAGE_SHIFT);
-}
-
-void __init native_pagetable_setup_done(void)
-{
-}
 
 /*
  * Build a proper pagetable for the kernel mappings.  Up until this
@@ -531,7 +493,7 @@ void __init native_pagetable_setup_done(void)
  * If we're booting paravirtualized under a hypervisor, then there are
  * more options: we may already be running PAE, and the pagetable may
  * or may not be based in swapper_pg_dir.  In any case,
- * paravirt_pagetable_setup_start() will set up swapper_pg_dir
+ * paravirt_pagetable_init() will set up swapper_pg_dir
  * appropriately for the rest of the initialization to work.
  *
  * In general, pagetable_init() assumes that the pagetable may already
diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index 68466ce..4290d83 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -1183,10 +1183,6 @@ static void __init xen_pagetable_init(void)
 	xen_post_allocator_init();
 }
 
-static void __init xen_pagetable_setup_start(void)
-{
-}
-
 static __init void xen_mapping_pagetable_reserve(u64 start, u64 end)
 {
 	/* reserve the range used */
@@ -1201,12 +1197,6 @@ static __init void xen_mapping_pagetable_reserve(u64 start, u64 end)
 	}
 }
 
-static void __init xen_pagetable_setup_done(void)
-{
-	xen_setup_shared_info();
-	xen_post_allocator_init();
-}
-
 static void xen_write_cr2(unsigned long cr2)
 {
 	this_cpu_read(xen_vcpu)->arch.cr2 = cr2;
@@ -2076,8 +2066,6 @@ void __init xen_init_mmu_ops(void)
 {
 	x86_init.mapping.pagetable_reserve = xen_mapping_pagetable_reserve;
 	x86_init.paging.pagetable_init = xen_pagetable_init;
-	x86_init.paging.pagetable_setup_start = xen_pagetable_setup_start;
-	x86_init.paging.pagetable_setup_done = xen_pagetable_setup_done;
 	pv_mmu_ops = xen_mmu_ops;
 
 	memset(dummy_mapping, 0xff, PAGE_SIZE);
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 01:27:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 01:27:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3dFl-0004aB-7f; Tue, 21 Aug 2012 01:27:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <attilio.rao@citrix.com>) id 1T3dFj-0004Zw-DH
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 01:27:31 +0000
Received: from [85.158.139.83:9023] by server-4.bemta-5.messagelabs.com id
	2D/76-12386-204E2305; Tue, 21 Aug 2012 01:27:30 +0000
X-Env-Sender: attilio.rao@citrix.com
X-Msg-Ref: server-16.tower-182.messagelabs.com!1345512447!21822991!3
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk3ODg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10519 invoked from network); 21 Aug 2012 01:27:30 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Aug 2012 01:27:30 -0000
X-IronPort-AV: E=Sophos;i="4.77,799,1336348800"; d="scan'208";a="14095095"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Aug 2012 01:27:30 +0000
Received: from dhcp-3-145.uk.xensource.com (10.80.3.221) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 21 Aug 2012 02:27:30 +0100
From: Attilio Rao <attilio.rao@citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, Ian Campbell
	<Ian.Campbell@citrix.com>, Stefano Stabellini
	<Stefano.Stabellini@eu.citrix.com>, Ingo Molnar <mingo@redhat.com>, "H.
	Peter Anvin" <hpa@zytor.com>, Thomas Gleixner <tglx@linutronix.de>,
	<linux-kernel@vger.kernel.org>, <x86@kernel.org>,
	<xen-devel@lists.xensource.com>
Date: Tue, 21 Aug 2012 02:14:03 +0100
Message-ID: <1345511646-12427-3-git-send-email-attilio.rao@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1345511646-12427-1-git-send-email-attilio.rao@citrix.com>
References: <1345511646-12427-1-git-send-email-attilio.rao@citrix.com>
MIME-Version: 1.0
Cc: Attilio Rao <attilio.rao@citrix.com>
Subject: [Xen-devel] [PATCH 2/5] XEN: Remove the base argument from
	x86_init.paging.pagetable_setup_start PVOPS
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

x86_init.paging.pagetable_setup_start is always passed swapper_pg_dir
in the single place where it is used, and for Xen the argument is
simply unused. Additionally, the comments already point to
swapper_pg_dir as the sole base touched.
Finally, this will help with further merging of
x86_init.paging.pagetable_setup_start with
x86_init.paging.pagetable_setup_done PVOPS.

Signed-off-by: Attilio Rao <attilio.rao@citrix.com>
---
 arch/x86/include/asm/pgtable_types.h |    6 +++---
 arch/x86/include/asm/x86_init.h      |    2 +-
 arch/x86/kernel/setup.c              |    2 +-
 arch/x86/kernel/x86_init.c           |    3 +--
 arch/x86/mm/init_32.c                |    5 ++++-
 arch/x86/xen/mmu.c                   |    2 +-
 6 files changed, 11 insertions(+), 9 deletions(-)

diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index f9e07b0..9b1c1f7 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -303,11 +303,11 @@ void set_pte_vaddr(unsigned long vaddr, pte_t pte);
 
 extern void native_pagetable_reserve(u64 start, u64 end);
 #ifdef CONFIG_X86_32
-extern void native_pagetable_setup_start(pgd_t *base);
+extern void native_pagetable_setup_start(void);
 extern void native_pagetable_setup_done(void);
 #else
-#define native_pagetable_setup_start x86_init_pgd_start_noop
-#define native_pagetable_setup_done  x86_init_pgd_stop_noop
+#define native_pagetable_setup_start x86_init_pgd_noop
+#define native_pagetable_setup_done  x86_init_pgd_noop
 #endif
 
 struct seq_file;
diff --git a/arch/x86/include/asm/x86_init.h b/arch/x86/include/asm/x86_init.h
index 439a4c3..efd0075 100644
--- a/arch/x86/include/asm/x86_init.h
+++ b/arch/x86/include/asm/x86_init.h
@@ -85,7 +85,7 @@ struct x86_init_mapping {
  * @pagetable_setup_done:	platform specific post paging_init() call
  */
 struct x86_init_paging {
-	void (*pagetable_setup_start)(pgd_t *base);
+	void (*pagetable_setup_start)(void);
 	void (*pagetable_setup_done)(void);
 };
 
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index ed9094d..d3d8f00 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -961,7 +961,7 @@ void __init setup_arch(char **cmdline_p)
 	kvmclock_init();
 #endif
 
-	x86_init.paging.pagetable_setup_start(swapper_pg_dir);
+	x86_init.paging.pagetable_setup_start();
 	paging_init();
 	x86_init.paging.pagetable_setup_done();
 
diff --git a/arch/x86/kernel/x86_init.c b/arch/x86/kernel/x86_init.c
index b27b30d..849be14 100644
--- a/arch/x86/kernel/x86_init.c
+++ b/arch/x86/kernel/x86_init.c
@@ -26,8 +26,7 @@
 
 void __cpuinit x86_init_noop(void) { }
 void __init x86_init_uint_noop(unsigned int unused) { }
-void __init x86_init_pgd_start_noop(pgd_t *unused) { }
-void __init x86_init_pgd_stop_noop(void) { }
+void __init x86_init_pgd_noop(void) { }
 int __init iommu_init_noop(void) { return 0; }
 void iommu_shutdown_noop(void) { }
 
diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c
index 1019156..7999cef 100644
--- a/arch/x86/mm/init_32.c
+++ b/arch/x86/mm/init_32.c
@@ -445,14 +445,17 @@ static inline void permanent_kmaps_init(pgd_t *pgd_base)
 }
 #endif /* CONFIG_HIGHMEM */
 
-void __init native_pagetable_setup_start(pgd_t *base)
+void __init native_pagetable_setup_start(void)
 {
 	unsigned long pfn, va;
+	pgd_t *base;
 	pgd_t *pgd;
 	pud_t *pud;
 	pmd_t *pmd;
 	pte_t *pte;
 
+	base = swapper_pg_dir;
+
 	/*
 	 * Remove any mappings which extend past the end of physical
 	 * memory from the boot time page table:
diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index d847548..04a0a8f 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -1174,7 +1174,7 @@ static void xen_exit_mmap(struct mm_struct *mm)
 	spin_unlock(&mm->page_table_lock);
 }
 
-static void __init xen_pagetable_setup_start(pgd_t *base)
+static void __init xen_pagetable_setup_start(void)
 {
 }
 
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 01:27:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 01:27:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3dFj-0004a3-RP; Tue, 21 Aug 2012 01:27:31 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <attilio.rao@citrix.com>) id 1T3dFh-0004Zk-PS
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 01:27:30 +0000
Received: from [85.158.139.83:8972] by server-2.bemta-5.messagelabs.com id
	01/6F-10142-104E2305; Tue, 21 Aug 2012 01:27:29 +0000
X-Env-Sender: attilio.rao@citrix.com
X-Msg-Ref: server-16.tower-182.messagelabs.com!1345512447!21822991!2
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk3ODg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10426 invoked from network); 21 Aug 2012 01:27:28 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Aug 2012 01:27:28 -0000
X-IronPort-AV: E=Sophos;i="4.77,799,1336348800"; d="scan'208";a="14095094"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Aug 2012 01:27:28 +0000
Received: from dhcp-3-145.uk.xensource.com (10.80.3.221) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 21 Aug 2012 02:27:28 +0100
From: Attilio Rao <attilio.rao@citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, Ian Campbell
	<Ian.Campbell@citrix.com>, Stefano Stabellini
	<Stefano.Stabellini@eu.citrix.com>, Ingo Molnar <mingo@redhat.com>, "H.
	Peter Anvin" <hpa@zytor.com>, Thomas Gleixner <tglx@linutronix.de>,
	<linux-kernel@vger.kernel.org>, <x86@kernel.org>,
	<xen-devel@lists.xensource.com>
Date: Tue, 21 Aug 2012 02:14:02 +0100
Message-ID: <1345511646-12427-2-git-send-email-attilio.rao@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1345511646-12427-1-git-send-email-attilio.rao@citrix.com>
References: <1345511646-12427-1-git-send-email-attilio.rao@citrix.com>
MIME-Version: 1.0
Cc: Attilio Rao <attilio.rao@citrix.com>
Subject: [Xen-devel] [PATCH 1/5] XEN: Remove the base argument from
	x86_init.paging.pagetable_setup_done PVOPS
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The x86_init.paging.pagetable_setup_done PVOPS does not actually need
the base argument, so just remove it.

Signed-off-by: Attilio Rao <attilio.rao@citrix.com>
---
 arch/x86/include/asm/pgtable_types.h |    6 +++---
 arch/x86/include/asm/x86_init.h      |    2 +-
 arch/x86/kernel/setup.c              |    2 +-
 arch/x86/kernel/x86_init.c           |    3 ++-
 arch/x86/mm/init_32.c                |    2 +-
 arch/x86/xen/mmu.c                   |    2 +-
 6 files changed, 9 insertions(+), 8 deletions(-)

diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index 013286a..f9e07b0 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -304,10 +304,10 @@ void set_pte_vaddr(unsigned long vaddr, pte_t pte);
 extern void native_pagetable_reserve(u64 start, u64 end);
 #ifdef CONFIG_X86_32
 extern void native_pagetable_setup_start(pgd_t *base);
-extern void native_pagetable_setup_done(pgd_t *base);
+extern void native_pagetable_setup_done(void);
 #else
-#define native_pagetable_setup_start x86_init_pgd_noop
-#define native_pagetable_setup_done  x86_init_pgd_noop
+#define native_pagetable_setup_start x86_init_pgd_start_noop
+#define native_pagetable_setup_done  x86_init_pgd_stop_noop
 #endif
 
 struct seq_file;
diff --git a/arch/x86/include/asm/x86_init.h b/arch/x86/include/asm/x86_init.h
index 38155f6..439a4c3 100644
--- a/arch/x86/include/asm/x86_init.h
+++ b/arch/x86/include/asm/x86_init.h
@@ -86,7 +86,7 @@ struct x86_init_mapping {
  */
 struct x86_init_paging {
 	void (*pagetable_setup_start)(pgd_t *base);
-	void (*pagetable_setup_done)(pgd_t *base);
+	void (*pagetable_setup_done)(void);
 };
 
 /**
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index f4b9b80..ed9094d 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -963,7 +963,7 @@ void __init setup_arch(char **cmdline_p)
 
 	x86_init.paging.pagetable_setup_start(swapper_pg_dir);
 	paging_init();
-	x86_init.paging.pagetable_setup_done(swapper_pg_dir);
+	x86_init.paging.pagetable_setup_done();
 
 	if (boot_cpu_data.cpuid_level >= 0) {
 		/* A CPU has %cr4 if and only if it has CPUID */
diff --git a/arch/x86/kernel/x86_init.c b/arch/x86/kernel/x86_init.c
index 9f3167e..b27b30d 100644
--- a/arch/x86/kernel/x86_init.c
+++ b/arch/x86/kernel/x86_init.c
@@ -26,7 +26,8 @@
 
 void __cpuinit x86_init_noop(void) { }
 void __init x86_init_uint_noop(unsigned int unused) { }
-void __init x86_init_pgd_noop(pgd_t *unused) { }
+void __init x86_init_pgd_start_noop(pgd_t *unused) { }
+void __init x86_init_pgd_stop_noop(void) { }
 int __init iommu_init_noop(void) { return 0; }
 void iommu_shutdown_noop(void) { }
 
diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c
index 575d86f..1019156 100644
--- a/arch/x86/mm/init_32.c
+++ b/arch/x86/mm/init_32.c
@@ -477,7 +477,7 @@ void __init native_pagetable_setup_start(pgd_t *base)
 	paravirt_alloc_pmd(&init_mm, __pa(base) >> PAGE_SHIFT);
 }
 
-void __init native_pagetable_setup_done(pgd_t *base)
+void __init native_pagetable_setup_done(void)
 {
 }
 
diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index b65a761..d847548 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -1194,7 +1194,7 @@ static __init void xen_mapping_pagetable_reserve(u64 start, u64 end)
 
 static void xen_post_allocator_init(void);
 
-static void __init xen_pagetable_setup_done(pgd_t *base)
+static void __init xen_pagetable_setup_done(void)
 {
 	xen_setup_shared_info();
 	xen_post_allocator_init();
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 02:42:26 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 02:42:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3ePm-0005vd-9H; Tue, 21 Aug 2012 02:41:58 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <yongjie.ren@intel.com>) id 1T3ePl-0005vY-9G
	for xen-devel@lists.xen.org; Tue, 21 Aug 2012 02:41:57 +0000
X-Env-Sender: yongjie.ren@intel.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1345516900!2631581!1
X-Originating-IP: [192.55.52.88]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjg4ID0+IDMxMTgxMg==\n,
	ML_RADAR_SPEW_LINKS_8, spamassassin: ,
	async_handler: YXN5bmNfZGVsYXk6IDcwNTYxOTQgKHRpbWVvdXQp\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13679 invoked from network); 21 Aug 2012 02:41:41 -0000
Received: from mga01.intel.com (HELO mga01.intel.com) (192.55.52.88)
	by server-7.tower-27.messagelabs.com with SMTP;
	21 Aug 2012 02:41:41 -0000
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
	by fmsmga101.fm.intel.com with ESMTP; 20 Aug 2012 19:41:39 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.77,799,1336374000"; d="scan'208";a="211411923"
Received: from fmsmsx107.amr.corp.intel.com ([10.19.9.54])
	by fmsmga002.fm.intel.com with ESMTP; 20 Aug 2012 19:41:39 -0700
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	FMSMSX107.amr.corp.intel.com (10.19.9.54) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Mon, 20 Aug 2012 19:41:37 -0700
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.89]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.175]) with mapi id
	14.01.0355.002; Tue, 21 Aug 2012 10:41:36 +0800
From: "Ren, Yongjie" <yongjie.ren@intel.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, Tobias Geiger
	<tobias.geiger@vido.info>
Thread-Topic: [Xen-devel] Regression in kernel 3.5 as Dom0 regarding PCI
	Passthrough?!
Thread-Index: Ac1/Rnq4WagzHVFkR6e3kbFh20+Rzg==
Date: Tue, 21 Aug 2012 02:41:36 +0000
Message-ID: <1B4B44D9196EFF41AE41FDA404FC0A101593BA@SHSMSX101.ccr.corp.intel.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Regression in kernel 3.5 as Dom0 regarding PCI
 Passthrough?!
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: xen-devel-bounces@lists.xen.org
> [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Konrad Rzeszutek
> Wilk
> Sent: Tuesday, August 21, 2012 7:30 AM
> To: Tobias Geiger
> Cc: xen-devel@lists.xen.org
> Subject: Re: [Xen-devel] Regression in kernel 3.5 as Dom0 regarding PCI
> Passthrough?!
> 
> On Mon, Aug 06, 2012 at 12:16:33PM -0400, Konrad Rzeszutek Wilk wrote:
> > On Wed, Jul 25, 2012 at 09:43:57AM -0400, Konrad Rzeszutek Wilk
> wrote:
> > > On Wed, Jul 25, 2012 at 02:30:00PM +0200, Tobias Geiger wrote:
> > > > Hi!
> > > >
> > > > i notice a serious regression with 3.5 as Dom0 kernel (3.4 was rock
> > > > stable):
> > > >
> > > > 1st: only the GPU PCI Passthrough works, the PCI USB Controller is
> > > > not recognized within the DomU (HVM Win7 64)
> > > > Dom0 cmdline is:
> > > > ro root=LABEL=dom0root
> xen-pciback.hide=(08:00.0)(08:00.1)(00:1d.0)(00:1d.1)(00:1d.2)(00:1d.7)
> > > > security=apparmor noirqdebug nouveau.msi=1
> > > >
> > > > Only 8:00.0 and 8:00.1 get passed through without problems, all the
> > > > USB Controller IDs are not correctly passed through and get an
> > > > exclamation mark within the win7 device manager ("could not be
> > > > started").
> > >
> > > Ok, but they do get passed in though? As in, QEMU sees them.
> > > If you boot a Live Ubuntu/Fedora CD within the guest with the PCI
> > > passed in devices do you see them? Meaning lspci shows them?
> > >
> > >
> > > Is the lspci -vvv output in dom0 different from 3.4 vs 3.5?
> > >
> > > >
> > > >
> > > > 2nd: After DomU shutdown, Dom0 panics (100% reproducible) - sorry
> > > > that i have no full stacktrace, all i have is a "screenshot" which i
> > > > uploaded here:
> > > >
> http://imageshack.us/photo/my-images/52/img20120724235921.jpg/
> > >
> > > Ugh, that looks like somebody removed a large chunk of a pagetable.
> > >
> > > Hmm. Are you using dom0_mem=max parameter? If not, can you try
> > > that and also disable ballooning in the xm/xl config file pls?
> > >
> > > >
> > > >
> > > > With 3.4 both issues were not there - everything worked perfectly.
> > > > Tell me which debugging info you need, i may be able to re-install
> > > > my netconsole to get the full stacktrace (but i had not much luck
> > > > with netconsole regarding kernel panics - rarely this info gets sent
> > > > before the "panic"...)
> >
> > So I am able to reproduce this with a Windows 7 with an ATI 4870 and
> > an Intel 82574L NIC. The video card still works, but the NIC stopped
> > working. Same version of hypervisor/toolstack/etc, only change is the
> > kernel (v3.4.6->v3.5).
> >
> > Time to get my hands greasy with this..
> 
> And it's due to a patch I added in v3.4
> (cd9db80e5257682a7f7ab245a2459648b3c8d268)
> - which did not work properly in v3.4, but v3.5 got it working
> (977f857ca566a1e68045fcbb7cfc9c4acb077cf0), which causes v3.5 to not
> work anymore.
> 
> Anyhow, for right now just revert
> cd9db80e5257682a7f7ab245a2459648b3c8d268
> and it should work for you.
> 
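The one-line workaround above can be rehearsed as a short shell sketch. Everything here is a stand-in: a scratch repository plays the part of the kernel tree, and a dummy commit plays the part of cd9db80e5257682a7f7ab245a2459648b3c8d268; on a real dom0 you would run the same `git revert` against your v3.5 source tree and rebuild.

```shell
# Rehearse `git revert` on a throwaway repo: it adds a new commit that
# undoes the named one, which is exactly the suggested workaround.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git -c user.name=t -c user.email=t@example.com commit -q --allow-empty -m base
echo broken > pciback.c && git add pciback.c     # dummy "bad" change
git -c user.name=t -c user.email=t@example.com commit -q -m "bad change"
bad=$(git rev-parse HEAD)        # stands in for cd9db80e5257682a...
git -c user.name=t -c user.email=t@example.com revert --no-edit "$bad"
test ! -e pciback.c && echo "bad change undone"
```

Note that the revert leaves the offending commit in history and records a new commit undoing it, so the change can be re-applied cleanly once a proper fix lands upstream.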
Also, our team reported a VT-d bug 2 months ago:
http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1824
We found commit "3bb07f1b73ea6313b843807063e183e168c9182a" to be the bad
commit in the Linux tree.
Linux 3.4.7 works fine, but Linux 3.5 has this issue.
It seems Tobias has the same issue as the one in the bug report,
but we didn't hit the Dom0 panic when shutting down the DomU.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 05:11:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 05:11:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3gjU-0006gE-PI; Tue, 21 Aug 2012 05:10:28 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <sw@weilnetz.de>) id 1T3YG7-0006ZW-An
	for xen-devel@lists.xensource.com; Mon, 20 Aug 2012 20:07:35 +0000
Received: from [85.158.143.99:61665] by server-3.bemta-4.messagelabs.com id
	69/5A-09529-60992305; Mon, 20 Aug 2012 20:07:34 +0000
X-Env-Sender: sw@weilnetz.de
X-Msg-Ref: server-15.tower-216.messagelabs.com!1345493253!29137131!1
X-Originating-IP: [78.47.199.172]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25400 invoked from network); 20 Aug 2012 20:07:33 -0000
Received: from v220110690675601.yourvserver.net (HELO
	v220110690675601.yourvserver.net) (78.47.199.172)
	by server-15.tower-216.messagelabs.com with SMTP;
	20 Aug 2012 20:07:33 -0000
Received: from localhost (v220110690675601.yourvserver.net.local [127.0.0.1])
	by v220110690675601.yourvserver.net (Postfix) with ESMTP id
	DCC897280029; Mon, 20 Aug 2012 22:07:32 +0200 (CEST)
X-Virus-Scanned: Debian amavisd-new at weilnetz.de
Received: from v220110690675601.yourvserver.net ([127.0.0.1])
	by localhost (v220110690675601.yourvserver.net [127.0.0.1])
	(amavisd-new, port 10024)
	with ESMTP id 54rPGZT2hnKo; Mon, 20 Aug 2012 22:07:32 +0200 (CEST)
Received: from flocke.weilnetz.de (p5086EF7B.dip.t-dialin.net [80.134.239.123])
	by v220110690675601.yourvserver.net (Postfix) with ESMTPSA id
	4985F7280028; Mon, 20 Aug 2012 22:07:32 +0200 (CEST)
Received: from localhost ([127.0.0.1] ident=stefan)
	by flocke.weilnetz.de with esmtp (Exim 4.72)
	(envelope-from <sw@weilnetz.de>)
	id 1T3YG3-0001m5-ME; Mon, 20 Aug 2012 22:07:31 +0200
Message-ID: <50329903.8000608@weilnetz.de>
Date: Mon, 20 Aug 2012 22:07:31 +0200
From: Stefan Weil <sw@weilnetz.de>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20120613 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: qemu-devel@nongnu.org
References: <1345419579-25499-1-git-send-email-imammedo@redhat.com>	<5031C2A3.3060800@weilnetz.de>
	<20120820134739.08bf6de4@thinkpad.mammed.net>
In-Reply-To: <20120820134739.08bf6de4@thinkpad.mammed.net>
X-Mailman-Approved-At: Tue, 21 Aug 2012 05:10:27 +0000
Cc: peter.maydell@linaro.org, jan.kiszka@siemens.com, mjt@tls.msk.ru,
	armbru@redhat.com, blauwirbel@gmail.com, kraxel@redhat.com,
	xen-devel@lists.xensource.com, i.mitsyanko@samsung.com,
	mdroth@linux.vnet.ibm.com, avi@redhat.com,
	anthony.perard@citrix.com, lersek@redhat.com,
	stefanha@linux.vnet.ibm.com, stefano.stabellini@eu.citrix.com,
	Igor Mammedov <imammedo@redhat.com>, lcapitulino@redhat.com,
	rth@twiddle.net, kwolf@redhat.com, aliguori@us.ibm.com,
	mtosatti@redhat.com, pbonzini@redhat.com, afaerber@suse.de
Subject: [Xen-devel] [RFC] How should QEMU code handle include statements
 (was: Re: [PATCH 0/5 v2] cpu: make a child of DeviceState)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Am 20.08.2012 13:47, schrieb Igor Mammedov:
> On Mon, 20 Aug 2012 06:52:51 +0200
> Stefan Weil<sw@weilnetz.de>  wrote:
>> I'd prefer if you could keep the following simple pattern:
>>
>> * Start includes in *.c files with config.h (optionally)
>>     and qemu-common.h.
> Can't agree with you on this. I'd say that every header should be
> self-sufficient, include other headers if it uses types from them, and NOT
> depend on the position where it's included; it should provide all its own deps.
>> * Don't include standard include files which are already
>>     included in qemu-common.h
> Probably qemu-common.h was initially intended to simplify inclusion
> of standard headers and glue stuff in a multi-OS build environment, but it
> seems to have become misused. It now includes a lot of stuff that is not
> common to every file it's included in.
> Perhaps we should split the std includes and glue layer out of it into
> something like std/host-common.h and, on a case-by-case basis, move other
> type definitions that are not common into their appropriate places. Like
> with qemu_irq.


There are several possible strategies regarding include statements:

1. Each header file which represents some public interface must include
    any header file which it depends on, so applications which include
    xxx.h can rely on the fact that they won't need anything else to
    compile xxx.h.

    To minimize dependencies, this rule can be extended: each header
    file must not include unneeded header files.

    Usually header files in /usr/include are built according to these rules.

2. There is one header file (or a small set of header files) which includes
    a basic set of features which normal code needs. Any C code file starts
    by including this header file.

    Other header files can rely on the fact that the basic set of features
    are already available.

3. In a modification of (2), each header file can include the basic
    header file(s).

4. The status quo of QEMU is a wild mixture of those strategies.

IMHO, there should be a consensus about the strategy which is used
for QEMU code.

While I personally prefer (1) and used it for my first contributions,
QEMU introduced qemu-common.h. I had the impression that from then on
QEMU preferred strategy (2). Obviously not everybody shares that
impression.

Which strategy / rule do we want to use for QEMU code?

Regards,

Stefan Weil


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 06:19:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 06:19:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3hnl-000711-7q; Tue, 21 Aug 2012 06:18:57 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <james.harper@bendigoit.com.au>) id 1T3hnk-00070w-Gx
	for xen-devel@lists.xen.org; Tue, 21 Aug 2012 06:18:56 +0000
Received: from [85.158.143.99:16370] by server-1.bemta-4.messagelabs.com id
	B3/96-07754-F4823305; Tue, 21 Aug 2012 06:18:55 +0000
X-Env-Sender: james.harper@bendigoit.com.au
X-Msg-Ref: server-13.tower-216.messagelabs.com!1345529932!28306196!1
X-Originating-IP: [203.16.224.4]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19877 invoked from network); 21 Aug 2012 06:18:55 -0000
Received: from smtp1.bendigoit.com.au (HELO smtp1.bendigoit.com.au)
	(203.16.224.4)
	by server-13.tower-216.messagelabs.com with AES256-SHA encrypted SMTP;
	21 Aug 2012 06:18:55 -0000
Received: from mail.bendigoit.com.au ([203.16.207.99])
	by smtp1.bendigoit.com.au with esmtp (Exim 4.69)
	(envelope-from <james.harper@bendigoit.com.au>)
	id 1T3hnX-0005bx-Dx; Tue, 21 Aug 2012 16:18:43 +1000
Received: from BITCOM1.int.sbss.com.au ([192.168.200.237]) by
	mail.bendigoit.com.au with Microsoft SMTPSVC(6.0.3790.4675); 
	Tue, 21 Aug 2012 16:18:43 +1000
Received: from BITCOM1.int.sbss.com.au ([fe80::a5ca:4fd3:14f:ad5d]) by
	BITCOM1.int.sbss.com.au ([fe80::a5ca:4fd3:14f:ad5d%12]) with mapi id
	14.01.0379.000; Tue, 21 Aug 2012 16:18:42 +1000
From: James Harper <james.harper@bendigoit.com.au>
To: =?iso-8859-1?Q?Pasi_K=E4rkk=E4inen?= <pasik@iki.fi>, Tom Parker
	<tparker@cbnco.com>
Thread-Topic: [Xen-devel] PV USB Use Case for Xen 4.x
Thread-Index: AQHNewlQ72Mvtw+zjUu8KTeLRPYln5dbhFAAgAb3l4CAAAYEgIABUWHA
Date: Tue, 21 Aug 2012 06:18:40 +0000
Message-ID: <6035A0D088A63A46850C3988ED045A4B29A5CF03@BITCOM1.int.sbss.com.au>
References: <502BD75B.9040301@cbnco.com>
	<1345109102.27489.38.camel@zakaz.uk.xensource.com>
	<5032949D.5010805@cbnco.com> <20120820201017.GZ19851@reaktio.net>
In-Reply-To: <20120820201017.GZ19851@reaktio.net>
Accept-Language: en-AU, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [122.110.168.98]
x-tm-as-product-ver: SMEX-10.2.0.1135-7.000.1014-19128.004
x-tm-as-result: No--27.188900-0.000000-31
x-tm-as-user-approved-sender: Yes
x-tm-as-user-blocked-sender: No
MIME-Version: 1.0
X-OriginalArrivalTime: 21 Aug 2012 06:18:43.0461 (UTC)
	FILETIME=[D0783B50:01CD7F64]
X-Really-From-Bendigo-IT: magichashvalue
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] PV USB Use Case for Xen 4.x
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


> 
> PVUSB works for both PV and HVM guests.
> And James Harper's GPLPV Windows drivers contain a PVUSB frontend driver
> for Windows.
> 

GPLPV PVUSB is considered experimental... just in case anyone thinks it's production ready!

James

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 06:47:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 06:47:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3iEr-0007E4-Pk; Tue, 21 Aug 2012 06:46:57 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T3iEq-0007Dz-KP
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 06:46:56 +0000
Received: from [85.158.143.35:5311] by server-2.bemta-4.messagelabs.com id
	B7/4B-31966-FDE23305; Tue, 21 Aug 2012 06:46:55 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1345531615!6699361!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk3ODg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10103 invoked from network); 21 Aug 2012 06:46:55 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Aug 2012 06:46:55 -0000
X-IronPort-AV: E=Sophos;i="4.77,801,1336348800"; d="scan'208";a="14097093"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Aug 2012 06:46:55 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 21 Aug 2012 07:46:55 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1T3iEo-0006Iq-SG;
	Tue, 21 Aug 2012 06:46:54 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1T3iEo-0001b3-MB;
	Tue, 21 Aug 2012 07:46:54 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13620-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 21 Aug 2012 07:46:54 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13620: tolerable FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13620 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13620/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13619
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13619
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13619
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13619

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  e6ca45ca03c2
baseline version:
 xen                  e6ca45ca03c2

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 08:32:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 08:32:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3jrt-0008G5-OS; Tue, 21 Aug 2012 08:31:21 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T3jrs-0008G0-En
	for xen-devel@lists.xen.org; Tue, 21 Aug 2012 08:31:20 +0000
Received: from [85.158.143.99:54980] by server-3.bemta-4.messagelabs.com id
	9D/D2-09529-75743305; Tue, 21 Aug 2012 08:31:19 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-216.messagelabs.com!1345537878!16644095!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk3ODg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22773 invoked from network); 21 Aug 2012 08:31:18 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Aug 2012 08:31:18 -0000
X-IronPort-AV: E=Sophos;i="4.77,801,1336348800"; d="scan'208";a="14099220"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Aug 2012 08:31:18 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Tue, 21 Aug 2012 09:31:18 +0100
Message-ID: <1345537876.28762.136.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: M A Young <m.a.young@durham.ac.uk>
Date: Tue, 21 Aug 2012 09:31:16 +0100
In-Reply-To: <alpine.DEB.2.00.1208202011290.8591@procyon.dur.ac.uk>
References: <alpine.DEB.2.00.1207241956230.14506@vega-c.dur.ac.uk>
	<20120724193604.GB29124@phenom.dumpdata.com>
	<1343205815.18971.43.camel@zakaz.uk.xensource.com>
	<alpine.DEB.2.00.1208121853070.7313@vega-a.dur.ac.uk>
	<1345209224.10161.21.camel@zakaz.uk.xensource.com>
	<alpine.DEB.2.00.1208202011290.8591@procyon.dur.ac.uk>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH] README: Update references to PyXML to lxml
 (Was: Re: [PATCH] Re: remove dependency on PyXML from xen?)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2012-08-20 at 20:44 +0100, M A Young wrote:
> On Fri, 17 Aug 2012, Ian Campbell wrote:
> 
> > ...
> > and pushed. Is the following section of README still accurate? At least
> > the mention of "PyXML" seems wrong to me.
> >
> >        Python Runtime Libraries
> >        ========================
> >
> >        Xend (the Xen daemon) has the following runtime dependencies:
> >
> >            * Python 2.3 or later.
> >              In some distros, the XML-aspects to the standard library
> >              (xml.dom.minidom etc) are broken out into a separate python-xml package.
> >              This is also required.
> >              In more recent versions of Debian and Ubuntu the XML-aspects are included
> >              in the base python package however (python-xml has been removed
> >              from Debian in squeeze and from Ubuntu in intrepid).
> >
> >                  URL:    http://www.python.org/
> >                  Debian: python
> >
> >            * For optional SSL support, pyOpenSSL:
> >                  URL:    http://pyopenssl.sourceforge.net/
> >                  Debian: python-pyopenssl
> >
> >            * For optional PAM support, PyPAM:
> >                  URL:    http://www.pangalactic.org/PyPAM/
> >                  Debian: python-pam
> >
> >            * For optional XenAPI support in XM, PyXML:
> >                  URL:    http://codespeak.net/lxml/
> >                  Debian: python-lxml
> >                  YUM:    python-lxml
> 
> Yes, it should be lxml not PyXML. The link could be upgraded as well as 
> http://codespeak.net/lxml/ redirects to http://lxml.de/ .

Thanks, patch below.

> I also found mention of PyXML in tools/python/logging/logging-0.4.9.2/ in 
> the files README.txt, python_logging.html, and test/logrecv.py (in an 
> error message) as a dependency for ZSI though I don't think xen ever uses 
> the code.

Great, yet another imported source base!

It seems like the functionality is now part of mainline python. From
looking at Debian it seems that python 2.5+ contains this module
directly.

According to tools/python/logging/setup.py it is at least only installed
if the system version of python does not support it.

AFAICT this is only used by xend and the bit which uses pyxml is not
used.
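
[Archive editor's note: the version-gating behaviour described above, installing the bundled logging package only when the interpreter does not already provide it, can be sketched roughly as below. This is an illustrative reconstruction, not the actual contents of tools/python/logging/setup.py.]

```python
# Hypothetical sketch of a version-gated install check, in the spirit of
# tools/python/logging/setup.py: only install the bundled 'logging'
# package when the running Python does not already ship it.
import sys


def needs_bundled_logging():
    # The 'logging' module entered the standard library in Python 2.3,
    # so any modern interpreter already provides it and the bundled
    # copy should be skipped.
    try:
        import logging  # noqa: F401  (import check only)
        return False
    except ImportError:
        return True


if __name__ == "__main__":
    if needs_bundled_logging():
        print("system python lacks logging; would install bundled copy")
    else:
        print("system python already provides logging; skipping install")
```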

8<--------------------------

# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1345537817 -3600
# Node ID 7ba9eaf898afafac6cf3956d01c86b861ec5d853
# Parent  b51f4c5a08480297d13439e66793a9002d9b49e9
README: Update references to PyXML to lxml

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

diff -r b51f4c5a0848 -r 7ba9eaf898af README
--- a/README	Tue Aug 21 09:20:41 2012 +0100
+++ b/README	Tue Aug 21 09:30:17 2012 +0100
@@ -148,9 +148,9 @@ Xend (the Xen daemon) has the following 
           URL:    http://www.pangalactic.org/PyPAM/
           Debian: python-pam
 
-    * For optional XenAPI support in XM, PyXML:
-          URL:    http://codespeak.net/lxml/
-	  Debian: python-lxml
+    * For optional XenAPI support in XM, lxml:
+          URL:    http://lxml.de/
+          Debian: python-lxml
           YUM:    python-lxml
 
 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 10:05:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 10:05:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3lJu-0000Ke-CN; Tue, 21 Aug 2012 10:04:22 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Wei.Wang2@amd.com>) id 1T3lJt-0000KZ-8T
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 10:04:21 +0000
Received: from [85.158.143.99:27382] by server-2.bemta-4.messagelabs.com id
	60/F0-21239-42D53305; Tue, 21 Aug 2012 10:04:20 +0000
X-Env-Sender: Wei.Wang2@amd.com
X-Msg-Ref: server-8.tower-216.messagelabs.com!1345543458!19474184!1
X-Originating-IP: [216.32.181.186]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26322 invoked from network); 21 Aug 2012 10:04:19 -0000
Received: from ch1ehsobe006.messaging.microsoft.com (HELO
	ch1outboundpool.messaging.microsoft.com) (216.32.181.186)
	by server-8.tower-216.messagelabs.com with AES128-SHA encrypted SMTP;
	21 Aug 2012 10:04:19 -0000
Received: from mail16-ch1-R.bigfish.com (10.43.68.227) by
	CH1EHSOBE008.bigfish.com (10.43.70.58) with Microsoft SMTP Server id
	14.1.225.23; Tue, 21 Aug 2012 10:04:17 +0000
Received: from mail16-ch1 (localhost [127.0.0.1])	by mail16-ch1-R.bigfish.com
	(Postfix) with ESMTP id A31F22C0148;
	Tue, 21 Aug 2012 10:04:17 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:163.181.249.108; KIP:(null); UIP:(null);
	IPV:NLI; H:ausb3twp01.amd.com; RD:none; EFVD:NLI
X-SpamScore: 0
X-BigFish: VPS0(zzbb2dI98dI9371I1432I4015I78fbmzz1202hzz8275bhz2dh668h839hd25he5bhf0ah107ah)
Received: from mail16-ch1 (localhost.localdomain [127.0.0.1]) by mail16-ch1
	(MessageSwitch) id 1345543455141055_32439;
	Tue, 21 Aug 2012 10:04:15 +0000 (UTC)
Received: from CH1EHSMHS004.bigfish.com (snatpool1.int.messaging.microsoft.com
	[10.43.68.240])	by mail16-ch1.bigfish.com (Postfix) with ESMTP id
	15FBD120079;	Tue, 21 Aug 2012 10:04:15 +0000 (UTC)
Received: from ausb3twp01.amd.com (163.181.249.108) by
	CH1EHSMHS004.bigfish.com (10.43.70.4) with Microsoft SMTP Server id
	14.1.225.23; Tue, 21 Aug 2012 10:04:14 +0000
X-WSS-ID: 0M93NZ0-01-3EY-02
X-M-MSG: 
Received: from sausexedgep01.amd.com (sausexedgep01-ext.amd.com
	[163.181.249.72])	(using TLSv1 with cipher AES128-SHA (128/128
	bits))	(No
	client certificate requested)	by ausb3twp01.amd.com (Axway MailGate
	3.8.1)
	with ESMTP id 2CFBD1028053;	Tue, 21 Aug 2012 05:04:11 -0500 (CDT)
Received: from SAUSEXDAG06.amd.com (163.181.55.7) by sausexedgep01.amd.com
	(163.181.36.54) with Microsoft SMTP Server (TLS) id 8.3.192.1;
	Tue, 21 Aug 2012 05:04:19 -0500
Received: from storexhtp02.amd.com (172.24.4.4) by sausexdag06.amd.com
	(163.181.55.7) with Microsoft SMTP Server (TLS) id 14.1.323.3;
	Tue, 21 Aug 2012 05:04:11 -0500
Received: from gwo.osrc.amd.com (165.204.16.204) by storexhtp02.amd.com
	(172.24.4.4) with Microsoft SMTP Server id 8.3.213.0; Tue, 21 Aug 2012
	06:04:10 -0400
Received: from [165.204.15.57] (gran.osrc.amd.com [165.204.15.57])	by
	gwo.osrc.amd.com (Postfix) with ESMTP id 3057849C1E6; Tue, 21 Aug 2012
	11:04:09 +0100 (BST)
Message-ID: <50335D2F.5040705@amd.com>
Date: Tue, 21 Aug 2012 12:04:31 +0200
From: Wei Wang <wei.wang2@amd.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:12.0) Gecko/20120421 Thunderbird/12.0
MIME-Version: 1.0
To: Santosh Jodh <santosh.jodh@citrix.com>
References: <995803806158d2dfce2d.1345215423@REDBLD-XS.ad.xensource.com>
In-Reply-To: <995803806158d2dfce2d.1345215423@REDBLD-XS.ad.xensource.com>
X-OriginatorOrg: amd.com
Cc: xen-devel@lists.xensource.com, tim@xen.org, JBeulich@suse.com,
	xiantao.zhang@intel.com
Subject: Re: [Xen-devel] [PATCH] Dump IOMMU p2m table
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Tested and Acked.
Thanks,
Wei

On 08/17/2012 04:57 PM, Santosh Jodh wrote:
> New key handler 'o' to dump the IOMMU p2m table for each domain.
> Skips dumping table for domain0.
> Intel and AMD specific iommu_ops handler for dumping p2m table.
>
> Incorporated feedback from Jan Beulich and Wei Wang.
> Fixed indent printing with %*s.
> Removed superfluous superpage and other attribute prints.
> Made next_level use consistent for AMD IOMMU dumps; warn if inconsistent.
> AMD IOMMU does not skip levels. Handle 2MB and 1GB IOMMU page sizes for AMD.
>
> Signed-off-by: Santosh Jodh <santosh.jodh@citrix.com>
>
> diff -r 6d56e31fe1e1 -r 995803806158 xen/drivers/passthrough/amd/pci_amd_iommu.c
> --- a/xen/drivers/passthrough/amd/pci_amd_iommu.c	Wed Aug 15 09:41:21 2012 +0100
> +++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c	Fri Aug 17 07:56:55 2012 -0700
> @@ -22,6 +22,7 @@
>   #include <xen/pci.h>
>   #include <xen/pci_regs.h>
>   #include <xen/paging.h>
> +#include <xen/softirq.h>
>   #include <asm/hvm/iommu.h>
>   #include <asm/amd-iommu.h>
>   #include <asm/hvm/svm/amd-iommu-proto.h>
> @@ -512,6 +513,80 @@ static int amd_iommu_group_id(u16 seg, u
>
>   #include <asm/io_apic.h>
>
> +static void amd_dump_p2m_table_level(struct page_info* pg, int level,
> +                                     paddr_t gpa, int indent)
> +{
> +    paddr_t address;
> +    void *table_vaddr, *pde;
> +    paddr_t next_table_maddr;
> +    int index, next_level, present;
> +    u32 *entry;
> +
> +    if ( level < 1 )
> +        return;
> +
> +    table_vaddr = __map_domain_page(pg);
> +    if ( table_vaddr == NULL )
> +    {
> +        printk("Failed to map IOMMU domain page %"PRIpaddr"\n",
> +                page_to_maddr(pg));
> +        return;
> +    }
> +
> +    for ( index = 0; index < PTE_PER_TABLE_SIZE; index++ )
> +    {
> +        if ( !(index % 2) )
> +            process_pending_softirqs();
> +
> +        pde = table_vaddr + (index * IOMMU_PAGE_TABLE_ENTRY_SIZE);
> +        next_table_maddr = amd_iommu_get_next_table_from_pte(pde);
> +        entry = (u32*)pde;
> +
> +        present = get_field_from_reg_u32(entry[0],
> +                                         IOMMU_PDE_PRESENT_MASK,
> +                                         IOMMU_PDE_PRESENT_SHIFT);
> +
> +        if ( !present )
> +            continue;
> +
> +        next_level = get_field_from_reg_u32(entry[0],
> +                                            IOMMU_PDE_NEXT_LEVEL_MASK,
> +                                            IOMMU_PDE_NEXT_LEVEL_SHIFT);
> +
> +        if ( next_level && (next_level != (level - 1)) )
> +        {
> +            printk("IOMMU p2m table error. next_level = %d, expected %d\n",
> +                   next_level, level - 1);
> +
> +            continue;
> +        }
> +
> +        address = gpa + amd_offset_level_address(index, level);
> +        if ( next_level >= 1 )
> +            amd_dump_p2m_table_level(
> +                maddr_to_page(next_table_maddr), next_level,
> +                address, indent + 1);
> +        else
> +            printk("%*sgfn: %08lx  mfn: %08lx\n",
> +                   indent, "",
> +                   (unsigned long)PFN_DOWN(address),
> +                   (unsigned long)PFN_DOWN(next_table_maddr));
> +    }
> +
> +    unmap_domain_page(table_vaddr);
> +}
> +
> +static void amd_dump_p2m_table(struct domain *d)
> +{
> +    struct hvm_iommu *hd  = domain_hvm_iommu(d);
> +
> +    if ( !hd->root_table )
> +        return;
> +
> +    printk("p2m table has %d levels\n", hd->paging_mode);
> +    amd_dump_p2m_table_level(hd->root_table, hd->paging_mode, 0, 0);
> +}
> +
>   const struct iommu_ops amd_iommu_ops = {
>       .init = amd_iommu_domain_init,
>       .dom0_init = amd_iommu_dom0_init,
> @@ -531,4 +606,5 @@ const struct iommu_ops amd_iommu_ops = {
>       .resume = amd_iommu_resume,
>       .share_p2m = amd_iommu_share_p2m,
>       .crash_shutdown = amd_iommu_suspend,
> +    .dump_p2m_table = amd_dump_p2m_table,
>   };
> diff -r 6d56e31fe1e1 -r 995803806158 xen/drivers/passthrough/iommu.c
> --- a/xen/drivers/passthrough/iommu.c	Wed Aug 15 09:41:21 2012 +0100
> +++ b/xen/drivers/passthrough/iommu.c	Fri Aug 17 07:56:55 2012 -0700
> @@ -19,10 +19,12 @@
>   #include <xen/paging.h>
>   #include <xen/guest_access.h>
>   #include <xen/softirq.h>
> +#include <xen/keyhandler.h>
>   #include <xsm/xsm.h>
>
>   static void parse_iommu_param(char *s);
>   static int iommu_populate_page_table(struct domain *d);
> +static void iommu_dump_p2m_table(unsigned char key);
>
>   /*
>    * The 'iommu' parameter enables the IOMMU.  Optional comma separated
> @@ -54,6 +56,12 @@ bool_t __read_mostly amd_iommu_perdev_in
>
>   DEFINE_PER_CPU(bool_t, iommu_dont_flush_iotlb);
>
> +static struct keyhandler iommu_p2m_table = {
> +    .diagnostic = 0,
> +    .u.fn = iommu_dump_p2m_table,
> +    .desc = "dump iommu p2m table"
> +};
> +
>   static void __init parse_iommu_param(char *s)
>   {
>       char *ss;
> @@ -119,6 +127,7 @@ void __init iommu_dom0_init(struct domai
>       if ( !iommu_enabled )
>           return;
>
> +    register_keyhandler('o', &iommu_p2m_table);
>       d->need_iommu = !!iommu_dom0_strict;
>       if ( need_iommu(d) )
>       {
> @@ -654,6 +663,34 @@ int iommu_do_domctl(
>       return ret;
>   }
>
> +static void iommu_dump_p2m_table(unsigned char key)
> +{
> +    struct domain *d;
> +    const struct iommu_ops *ops;
> +
> +    if ( !iommu_enabled )
> +    {
> +        printk("IOMMU not enabled!\n");
> +        return;
> +    }
> +
> +    ops = iommu_get_ops();
> +    for_each_domain(d)
> +    {
> +        if ( !d->domain_id )
> +            continue;
> +
> +        if ( iommu_use_hap_pt(d) )
> +        {
> +            printk("\ndomain%d IOMMU p2m table shared with MMU: \n", d->domain_id);
> +            continue;
> +        }
> +
> +        printk("\ndomain%d IOMMU p2m table: \n", d->domain_id);
> +        ops->dump_p2m_table(d);
> +    }
> +}
> +
>   /*
>    * Local variables:
>    * mode: C
> diff -r 6d56e31fe1e1 -r 995803806158 xen/drivers/passthrough/vtd/iommu.c
> --- a/xen/drivers/passthrough/vtd/iommu.c	Wed Aug 15 09:41:21 2012 +0100
> +++ b/xen/drivers/passthrough/vtd/iommu.c	Fri Aug 17 07:56:55 2012 -0700
> @@ -31,6 +31,7 @@
>   #include <xen/pci.h>
>   #include <xen/pci_regs.h>
>   #include <xen/keyhandler.h>
> +#include <xen/softirq.h>
>   #include<asm/msi.h>
>   #include<asm/irq.h>
>   #if defined(__i386__) || defined(__x86_64__)
> @@ -2365,6 +2366,60 @@ static void vtd_resume(void)
>       }
>   }
>
> +static void vtd_dump_p2m_table_level(paddr_t pt_maddr, int level, paddr_t gpa,
> +                                     int indent)
> +{
> +    paddr_t address;
> +    int i;
> +    struct dma_pte *pt_vaddr, *pte;
> +    int next_level;
> +
> +    if ( level < 1 )
> +        return;
> +
> +    pt_vaddr = map_vtd_domain_page(pt_maddr);
> +    if ( pt_vaddr == NULL )
> +    {
> +        printk("Failed to map VT-D domain page %"PRIpaddr"\n", pt_maddr);
> +        return;
> +    }
> +
> +    next_level = level - 1;
> +    for ( i = 0; i < PTE_NUM; i++ )
> +    {
> +        if ( !(i % 2) )
> +            process_pending_softirqs();
> +
> +        pte = &pt_vaddr[i];
> +        if ( !dma_pte_present(*pte) )
> +            continue;
> +
> +        address = gpa + offset_level_address(i, level);
> +        if ( next_level >= 1 )
> +            vtd_dump_p2m_table_level(dma_pte_addr(*pte), next_level,
> +                                     address, indent + 1);
> +        else
> +            printk("%*sgfn: %08lx mfn: %08lx\n",
> +                   indent, "",
> +                   (unsigned long)(address >> PAGE_SHIFT_4K),
> +                   (unsigned long)(pte->val >> PAGE_SHIFT_4K));
> +    }
> +
> +    unmap_vtd_domain_page(pt_vaddr);
> +}
> +
> +static void vtd_dump_p2m_table(struct domain *d)
> +{
> +    struct hvm_iommu *hd;
> +
> +    if ( list_empty(&acpi_drhd_units) )
> +        return;
> +
> +    hd = domain_hvm_iommu(d);
> +    printk("p2m table has %d levels\n", agaw_to_level(hd->agaw));
> +    vtd_dump_p2m_table_level(hd->pgd_maddr, agaw_to_level(hd->agaw), 0, 0);
> +}
> +
>   const struct iommu_ops intel_iommu_ops = {
>       .init = intel_iommu_domain_init,
>       .dom0_init = intel_iommu_dom0_init,
> @@ -2387,6 +2442,7 @@ const struct iommu_ops intel_iommu_ops =
>       .crash_shutdown = vtd_crash_shutdown,
>       .iotlb_flush = intel_iommu_iotlb_flush,
>       .iotlb_flush_all = intel_iommu_iotlb_flush_all,
> +    .dump_p2m_table = vtd_dump_p2m_table,
>   };
>
>   /*
> diff -r 6d56e31fe1e1 -r 995803806158 xen/drivers/passthrough/vtd/iommu.h
> --- a/xen/drivers/passthrough/vtd/iommu.h	Wed Aug 15 09:41:21 2012 +0100
> +++ b/xen/drivers/passthrough/vtd/iommu.h	Fri Aug 17 07:56:55 2012 -0700
> @@ -248,6 +248,8 @@ struct context_entry {
>   #define level_to_offset_bits(l) (12 + (l - 1) * LEVEL_STRIDE)
>   #define address_level_offset(addr, level) \
>               ((addr >> level_to_offset_bits(level)) & LEVEL_MASK)
> +#define offset_level_address(offset, level) \
> +            ((u64)(offset) << level_to_offset_bits(level))
>   #define level_mask(l) (((u64)(-1)) << level_to_offset_bits(l))
>   #define level_size(l) (1 << level_to_offset_bits(l))
>   #define align_to_level(addr, l) ((addr + level_size(l) - 1) & level_mask(l))
> diff -r 6d56e31fe1e1 -r 995803806158 xen/include/asm-x86/hvm/svm/amd-iommu-defs.h
> --- a/xen/include/asm-x86/hvm/svm/amd-iommu-defs.h	Wed Aug 15 09:41:21 2012 +0100
> +++ b/xen/include/asm-x86/hvm/svm/amd-iommu-defs.h	Fri Aug 17 07:56:55 2012 -0700
> @@ -38,6 +38,10 @@
>   #define PTE_PER_TABLE_ALLOC(entries)	\
>   	PAGE_SIZE * (PTE_PER_TABLE_ALIGN(entries) >> PTE_PER_TABLE_SHIFT)
>
> +#define amd_offset_level_address(offset, level) \
> +      	((u64)(offset) << (12 + (PTE_PER_TABLE_SHIFT * \
> +                                (level - IOMMU_PAGING_MODE_LEVEL_1))))
> +
>   #define PCI_MIN_CAP_OFFSET	0x40
>   #define PCI_MAX_CAP_BLOCKS	48
>   #define PCI_CAP_PTR_MASK	0xFC
> diff -r 6d56e31fe1e1 -r 995803806158 xen/include/xen/iommu.h
> --- a/xen/include/xen/iommu.h	Wed Aug 15 09:41:21 2012 +0100
> +++ b/xen/include/xen/iommu.h	Fri Aug 17 07:56:55 2012 -0700
> @@ -141,6 +141,7 @@ struct iommu_ops {
>       void (*crash_shutdown)(void);
>       void (*iotlb_flush)(struct domain *d, unsigned long gfn, unsigned int page_count);
>       void (*iotlb_flush_all)(struct domain *d);
> +    void (*dump_p2m_table)(struct domain *d);
>   };
>
>   void iommu_update_ire_from_apic(unsigned int apic, unsigned int reg, unsigned int value);
>



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 10:10:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 10:10:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3lPa-0000Ta-Bj; Tue, 21 Aug 2012 10:10:14 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@eu.citrix.com>) id 1T3lPZ-0000TS-I4
	for xen-devel@lists.xen.org; Tue, 21 Aug 2012 10:10:13 +0000
Received: from [85.158.143.99:47694] by server-1.bemta-4.messagelabs.com id
	3F/A5-07754-48E53305; Tue, 21 Aug 2012 10:10:12 +0000
X-Env-Sender: George.Dunlap@eu.citrix.com
X-Msg-Ref: server-7.tower-216.messagelabs.com!1345543810!25826155!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzMwNjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3645 invoked from network); 21 Aug 2012 10:10:12 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Aug 2012 10:10:12 -0000
X-IronPort-AV: E=Sophos;i="4.77,801,1336363200"; d="scan'208";a="205743886"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
From xen-devel-bounces@lists.xen.org Tue Aug 21 10:10:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 10:10:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3lPa-0000Ta-Bj; Tue, 21 Aug 2012 10:10:14 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@eu.citrix.com>) id 1T3lPZ-0000TS-I4
	for xen-devel@lists.xen.org; Tue, 21 Aug 2012 10:10:13 +0000
Received: from [85.158.143.99:47694] by server-1.bemta-4.messagelabs.com id
	3F/A5-07754-48E53305; Tue, 21 Aug 2012 10:10:12 +0000
X-Env-Sender: George.Dunlap@eu.citrix.com
X-Msg-Ref: server-7.tower-216.messagelabs.com!1345543810!25826155!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzMwNjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3645 invoked from network); 21 Aug 2012 10:10:12 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Aug 2012 10:10:12 -0000
X-IronPort-AV: E=Sophos;i="4.77,801,1336363200"; d="scan'208";a="205743886"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Aug 2012 06:09:42 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Tue, 21 Aug 2012 06:09:41 -0400
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1T3lP3-0003yI-Gi;
	Tue, 21 Aug 2012 11:09:41 +0100
Message-ID: <50335D8F.6000603@eu.citrix.com>
Date: Tue, 21 Aug 2012 11:06:07 +0100
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120714 Thunderbird/14.0
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad@kernel.org>
References: <CAFLBxZaGmvSYgL4DcHcSwE_0shqKC0DRbf7ab=uhFrerPCxRig@mail.gmail.com>
	<20120820202856.GA11485@phenom.dumpdata.com>
In-Reply-To: <20120820202856.GA11485@phenom.dumpdata.com>
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.3 release planning proposal
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 20/08/12 21:28, Konrad Rzeszutek Wilk wrote:
> On Mon, Aug 20, 2012 at 05:46:59PM +0100, George Dunlap wrote:
>> * Multi-page blk rings
>>   - blkback in kernel (@intel)
> .. and me as well.
OK -- I think what I really need is just one person to be a 
point-of-contact, who can give updates on progress.  Shall I put you 
down for that, or should I find out who at Intel is working on it?
>>   - qemu blkback
>>
>> * Multi-page net protocol
>>    owner: ?
>>    expand the network ring protocol to allow multiple pages for
>>    increased throughput
> Multiple people working on this. Ian, me, Annie, and Wei I believe.
OK -- Ian C said he'd be the primary point of contact for this one until 
Wei gets back; if you or Annie want to be the contact directly, let me 
know. :-)
>> * xl USB pass-through for PV guests
>>    owner: ?
>>    Port the xend PV pass-through functionality to xl.
> Well, what about the Linux side? The frontend/backend drivers haven't
> really been proposed for upstream.
OK, I'll put that down as well.
>> * PV audio (audio for stubdom qemu)
>>    owner: stefano.panella@citrix
> Is this a new person? Anyhow, there was a PV audio in pulseaudio as part
> of the GSOC.
Stefano Panella is on the XenClient team at Citrix.  I think he probably 
knows about the previous work, but I'll make sure.  One thing I know 
he's been particularly concerned about is audio latency, especially 
when using GoToMeeting or Skype inside a VM.

Thanks,
  -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 10:19:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 10:19:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3lYX-0000dt-Bs; Tue, 21 Aug 2012 10:19:29 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <avi@redhat.com>) id 1T3lYV-0000do-Ro
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 10:19:28 +0000
Received: from [85.158.139.83:29980] by server-2.bemta-5.messagelabs.com id
	45/00-10142-FA063305; Tue, 21 Aug 2012 10:19:27 +0000
X-Env-Sender: avi@redhat.com
X-Msg-Ref: server-13.tower-182.messagelabs.com!1345544364!28704309!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQ0MzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14838 invoked from network); 21 Aug 2012 10:19:25 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-13.tower-182.messagelabs.com with SMTP;
	21 Aug 2012 10:19:25 -0000
Received: from int-mx10.intmail.prod.int.phx2.redhat.com
	(int-mx10.intmail.prod.int.phx2.redhat.com [10.5.11.23])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id q7LAJEBX020476
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 21 Aug 2012 06:19:14 -0400
Received: from balrog.usersys.tlv.redhat.com (dhcp-4-121.tlv.redhat.com
	[10.35.4.121])
	by int-mx10.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP
	id q7LAJ5Sh024973; Tue, 21 Aug 2012 06:19:06 -0400
Message-ID: <50336099.3040206@redhat.com>
Date: Tue, 21 Aug 2012 13:19:05 +0300
From: Avi Kivity <avi@redhat.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120717 Thunderbird/14.0
MIME-Version: 1.0
To: Stefan Weil <sw@weilnetz.de>
References: <1345419579-25499-1-git-send-email-imammedo@redhat.com>	<5031C2A3.3060800@weilnetz.de>
	<20120820134739.08bf6de4@thinkpad.mammed.net>
	<50329903.8000608@weilnetz.de>
In-Reply-To: <50329903.8000608@weilnetz.de>
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.23
Cc: peter.maydell@linaro.org, jan.kiszka@siemens.com, mjt@tls.msk.ru,
	qemu-devel@nongnu.org, armbru@redhat.com, blauwirbel@gmail.com,
	kraxel@redhat.com, xen-devel@lists.xensource.com,
	i.mitsyanko@samsung.com, mdroth@linux.vnet.ibm.com,
	anthony.perard@citrix.com, lersek@redhat.com,
	stefanha@linux.vnet.ibm.com, stefano.stabellini@eu.citrix.com,
	Igor Mammedov <imammedo@redhat.com>, lcapitulino@redhat.com,
	rth@twiddle.net, kwolf@redhat.com, aliguori@us.ibm.com,
	mtosatti@redhat.com, pbonzini@redhat.com, afaerber@suse.de
Subject: Re: [Xen-devel] [RFC] How should QEMU code handle include statements
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/20/2012 11:07 PM, Stefan Weil wrote:

> While I personally prefer (1) and used it for my first contributions,
> QEMU introduced qemu-common.h. I had the impression that from then on
> QEMU preferred strategy (2). Obviously not everybody shares that
> impression.
> 
> Which strategy / rule do we want to use for QEMU code?

(1).


-- 
error compiling committee.c: too many arguments to function

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 11:10:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 11:10:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3mLQ-0000uG-IZ; Tue, 21 Aug 2012 11:10:00 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T3mLO-0000uB-UW
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 11:09:59 +0000
Received: from [85.158.143.99:56986] by server-1.bemta-4.messagelabs.com id
	7D/25-07754-68C63305; Tue, 21 Aug 2012 11:09:58 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-15.tower-216.messagelabs.com!1345547397!29244821!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk4OTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 779 invoked from network); 21 Aug 2012 11:09:57 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Aug 2012 11:09:57 -0000
X-IronPort-AV: E=Sophos;i="4.77,802,1336348800"; d="scan'208";a="14103823"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Aug 2012 11:09:57 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 21 Aug 2012 12:09:57 +0100
Date: Tue, 21 Aug 2012 12:09:36 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Attilio Rao <attilio.rao@citrix.com>
In-Reply-To: <1345511646-12427-1-git-send-email-attilio.rao@citrix.com>
Message-ID: <alpine.DEB.2.02.1208211207550.15568@kaball.uk.xensource.com>
References: <1345511646-12427-1-git-send-email-attilio.rao@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "x86@kernel.org" <x86@kernel.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Ingo Molnar <mingo@redhat.com>, "H.
	Peter Anvin" <hpa@zytor.com>, Thomas Gleixner <tglx@linutronix.de>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [PATCH 0/5] X86/XEN: Merge
 x86_init.paging.pagetable_setup_start and
 x86_init.paging.pagetable_setup_done PVOPS and document the semantic
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 21 Aug 2012, Attilio Rao wrote:
> Currently the definition of x86_init.paging.pagetable_setup_start and
> x86_init.paging.pagetable_setup_done is twisted and not really well
> defined (in terms of prototypes desired). More specifically:
> pagetable_setup_start:
>  * it is a nop on x86_32
>  * it is a nop for the XEN case
>  * cleans up the boot time page table in the x86_64 case

The description is wrong: you swapped x86_32 and x86_64 here.


> pagetable_setup_done:
>  * it is a nop on x86_32
>  * sets up accessor functions for pagetable manipulation, for the
>    XEN case
>  * it is a nop on x86_64
> 
> Most of this logic can be skipped by creating a new PVOPS that can handle
> pagetable setup and pre/post operations on it.
> The new PVOPS must be called only once, during boot-time setup and
> after the direct mapping for physical memory is available.
> 
> Attilio Rao (5):
>   XEN: Remove the base argument from
>     x86_init.paging.pagetable_setup_done PVOPS
>   XEN: Remove the base argument from
>     x86_init.paging.pagetable_setup_start PVOPS
>   X86/XEN: Introduce the x86_init.paging.pagetable_init() PVOPS
>   X86/XEN: Retire now unused x86_init.paging.pagetable_setup_start and
>     x86_init.paging.pagetable_setup_done PVOPS
>   X86/XEN: Add few lines explaining simple semantic for
>     x86_init.paging.pagetable_init PVOPS
> 
>  arch/x86/include/asm/pgtable_types.h |    6 ++----
>  arch/x86/include/asm/x86_init.h      |   11 +++++++----
>  arch/x86/kernel/setup.c              |    4 +---
>  arch/x86/kernel/x86_init.c           |    4 +---
>  arch/x86/mm/init_32.c                |   12 ++++++------
>  arch/x86/xen/mmu.c                   |   18 +++++++-----------
>  6 files changed, 24 insertions(+), 31 deletions(-)
> 
> -- 
> 1.7.2.5
> 

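The hook merge the cover letter proposes can be sketched as follows. This is a minimal model, not the kernel code: names like `make_pagetable_init` and the dict standing in for the `x86_init.paging` ops table are hypothetical, chosen only to illustrate collapsing the two pre/post hooks into a single `pagetable_init` entry that is invoked exactly once at boot.

```python
# Hypothetical model (NOT the kernel API) of the PVOPS refactoring:
# pagetable_setup_start/_done are folded into one pagetable_init entry
# that runs start, the actual pagetable construction, then done.

calls = []

def setup_start():
    # e.g. native x86_32: clean up the boot-time page table
    calls.append("start")

def setup_done():
    # e.g. Xen: install the pagetable accessor functions
    calls.append("done")

def make_pagetable_init(start, build, done):
    """Merge the pre/post hooks and the build step into one entry."""
    def pagetable_init():
        start()
        build()
        done()
    return pagetable_init

# Stand-in for the x86_init.paging ops table after the series:
# each platform (native or Xen) installs its own single callback.
x86_init_paging = {
    "pagetable_init": make_pagetable_init(
        setup_start,
        lambda: calls.append("build"),
        setup_done,
    ),
}

# Called once, during boot-time setup, after the direct mapping exists.
x86_init_paging["pagetable_init"]()
print(calls)  # ['start', 'build', 'done']
```

A single hook avoids the old interface's asymmetric semantics, where each of the two hooks was a nop on some architecture/platform combinations and did real work on others.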
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 13:00:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 13:00:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3o40-0002k6-Tc; Tue, 21 Aug 2012 13:00:08 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@eu.citrix.com>) id 1T3o3z-0002k1-Kq
	for xen-devel@lists.xen.org; Tue, 21 Aug 2012 13:00:07 +0000
Received: from [85.158.138.51:50288] by server-1.bemta-3.messagelabs.com id
	03/44-09327-65683305; Tue, 21 Aug 2012 13:00:06 +0000
X-Env-Sender: George.Dunlap@eu.citrix.com
X-Msg-Ref: server-9.tower-174.messagelabs.com!1345554003!29485838!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjU1NjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29414 invoked from network); 21 Aug 2012 13:00:05 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Aug 2012 13:00:05 -0000
X-IronPort-AV: E=Sophos;i="4.77,802,1336363200"; d="scan'208";a="35285239"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Aug 2012 09:00:03 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Tue, 21 Aug 2012 09:00:02 -0400
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1T3o3u-0006qS-8H;
	Tue, 21 Aug 2012 14:00:02 +0100
Message-ID: <5033857B.2000804@eu.citrix.com>
Date: Tue, 21 Aug 2012 13:56:27 +0100
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120714 Thunderbird/14.0
MIME-Version: 1.0
To: Pasi Kärkkäinen <pasik@iki.fi>
References: <CAFLBxZaGmvSYgL4DcHcSwE_0shqKC0DRbf7ab=uhFrerPCxRig@mail.gmail.com>
	<20120820191429.GY19851@reaktio.net>
In-Reply-To: <20120820191429.GY19851@reaktio.net>
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.3 release planning proposal
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="iso-8859-1"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 20/08/12 20:14, Pasi Kärkkäinen wrote:
> Another USB item:
>
> * xl support for USB device passthru using QEMU emulated USB for HVM
>   guests (no need for PVUSB drivers in the HVM guest).
>    This works today in xm/xend with qemu-traditional, but is limited
>    to USB 1.1, probably because the old version of qemu-dm-traditional
>    lacks USB 2.0/3.0 support.
>    So xl support for emulated USB device passthru for both
>    qemu-upstream and qemu-traditional.

OK, I'll put that on the list.  Thanks.

> More wishlist items:

I'm going to be experimenting with a website designed for user feedback,
both to suggest features and to help prioritize what you think are the
important features.  Expect an e-mail in a day or two with the
announcement.

I think the Citrix team is probably mostly full for this release cycle,
so anything that requires significant development work but doesn't
already have developers working on it will probably have to be put off
until later, unless you can convince the regular devs it's something to
prioritize, or you can bring in more developers to work on it. :-)

> * Nested hardware virtualization. Important for easier testing and
>   development of Xen (Xen-on-Xen), and for running other hypervisors
>   in Xen VMs. Interesting for labs, POCs, etc.

The ball is really in Intel / AMD's court on this one.  It's on our
list of "things that might be nice", but given the other things we
really do want to try to make into 4.3, it wasn't that big of a
priority for us.  If Intel or AMD want to make this a priority for
them for 4.3, I can track it.

> * VGA/GPU passthru support for AMD/NVIDIA; lots of patches in the
>   xen-devel archives, but no one has yet stepped up to clean them up
>   and get them merged.
>    Currently Intel gfx passthru patches are merged into Xen, but
>    primary ATI/NVIDIA require extra patches.
>    This is actually something that a LOT of users ask for; it's
>    discussed almost every day in ##xen on IRC.
>    I wonder if XenClient folks could help here?

What kind of patches are these?  Are they mostly to pvops Linux?

> * Dom0 keyboard/mouse sharing to HVM guests; mainly needed by VGA/GPU
>   passthru users.
>    Fujitsu guys posted some patches for this in 2010, and XenClient
>    guys in 2009 (iirc), but nothing got further developed and merged
>    into upstream Xen.

What exactly is involved here?  If the work is mostly done, but just
needs someone to dust it off and submit it, then it's probably
something we can add to the list.

> * QXL virtual GPU support for SPICE. Someone was already developing
>   this, and posted patches to xen-devel earlier during the 4.2
>   development cycle.
>    Upstream Qemu includes QXL support.

If "someone" wants to step up and claim responsibility, I'll put it on
the list of things to track. :-)

> * PVSCSI support in xl. James Harper was (semi) interested in working
>   on this, because he has a PVSCSI frontend driver in the Windows
>   GPLPV drivers, and he's using PVSCSI for tape backups himself.
>
> * libvirt libxl driver improvements; support more Xen features.
>    Would allow the "default" Ubuntu/Debian/Fedora/RHEL/CentOS
>    virtualization GUI to be used with Xen as well.

This is pretty important.  Feature parity between libvirt+KVM and
libvirt+Xen is on the Citrix xen.org to-do list, but to begin with it's
more of a research item than a feature, so it wasn't on this particular
list.  If you happen to know a specific list of missing features, that
might be something we could try to fit in for 4.3.

  -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 13:00:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 13:00:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3o40-0002k6-Tc; Tue, 21 Aug 2012 13:00:08 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@eu.citrix.com>) id 1T3o3z-0002k1-Kq
	for xen-devel@lists.xen.org; Tue, 21 Aug 2012 13:00:07 +0000
Received: from [85.158.138.51:50288] by server-1.bemta-3.messagelabs.com id
	03/44-09327-65683305; Tue, 21 Aug 2012 13:00:06 +0000
X-Env-Sender: George.Dunlap@eu.citrix.com
X-Msg-Ref: server-9.tower-174.messagelabs.com!1345554003!29485838!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjU1NjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29414 invoked from network); 21 Aug 2012 13:00:05 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Aug 2012 13:00:05 -0000
X-IronPort-AV: E=Sophos;i="4.77,802,1336363200"; d="scan'208";a="35285239"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Aug 2012 09:00:03 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Tue, 21 Aug 2012 09:00:02 -0400
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1T3o3u-0006qS-8H;
	Tue, 21 Aug 2012 14:00:02 +0100
Message-ID: <5033857B.2000804@eu.citrix.com>
Date: Tue, 21 Aug 2012 13:56:27 +0100
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120714 Thunderbird/14.0
MIME-Version: 1.0
To: =?ISO-8859-1?Q?Pasi_K=E4rkk=E4inen?= <pasik@iki.fi>
References: <CAFLBxZaGmvSYgL4DcHcSwE_0shqKC0DRbf7ab=uhFrerPCxRig@mail.gmail.com>
	<20120820191429.GY19851@reaktio.net>
In-Reply-To: <20120820191429.GY19851@reaktio.net>
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.3 release planning proposal
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="iso-8859-1"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 20/08/12 20:14, Pasi Kärkkäinen wrote:
> Another USB item:
>
> * xl support for USB device passthru using QEMU-emulated USB for HVM guests (no need for PVUSB drivers in the HVM guest).
>    This works today in xm/xend with qemu-traditional, but is limited to USB 1.1, probably because
>    the old version of qemu-dm-traditional lacks USB 2.0/3.0 support.
>    So: xl support for emulated USB device passthru for both qemu-upstream and qemu-traditional.

OK, I'll put that on the list.  Thanks.
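
For reference, the existing xm/qemu-traditional mechanism being described is driven by HVM guest config keys along these lines (a sketch, not a tested config; the vendor:product ID is a placeholder):

```
# HVM guest config fragment for xm/xend with qemu-traditional:
usb = 1                       # enable the emulated USB controller (UHCI, hence USB 1.1 only)
usbdevice = 'host:046d:c016'  # pass a host device through by vendor:product ID
```

The feature request above is essentially to expose the same keys through xl, for both qemu-upstream and qemu-traditional device models.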

> More wishlist items:

I'm going to be experimenting with a website designed for user feedback, both to suggest features and to help prioritize which features you think are important. Expect an e-mail in a day or two with the announcement.

I think the Citrix team is probably mostly full for this release cycle; so anything that requires significant development work but doesn't already have developers working on it will probably have to be put off until later, unless you can convince the regular devs it's something to prioritize, or you can bring in more developers to work on it. :-)

> * Nested hardware virtualization. Important for easier testing and development of Xen (Xen-on-Xen),
>    and for running other hypervisors in Xen VMs. Interesting for labs, POCs, etc.

The ball is really in Intel / AMD's court on this one.  It's on our list of "things that might be nice", but given the other things we really do want to try to make into 4.3, it wasn't that big of a priority for us.  If Intel or AMD want to make this a priority for them for 4.3, I can track it.

> * VGA/GPU passthru support for AMD/NVIDIA; lots of patches in the xen-devel archives,
>    but no one has yet stepped up to clean them up and get them merged.
>    Currently the Intel gfx passthru patches are merged into Xen, but primarily ATI/NVIDIA require extra patches.
>    This is actually something that a LOT of users ask about; it's discussed almost every day in ##xen on IRC.
>    I wonder if the XenClient folks could help here?

What kind of patches are these? Are they mostly to pvops Linux?

> * Dom0 keyboard/mouse sharing to HVM guests; mainly needed by VGA/GPU passthru users.
>    Fujitsu posted some patches for this in 2010, and the XenClient guys in 2009 (IIRC),
>    but nothing was developed further or merged into upstream Xen.

What exactly is involved here? If the work is mostly done, but just needs someone to dust it off and submit it, then it's probably something we can add to the list.

> * QXL virtual GPU support for SPICE. Someone was already developing this,
>    and posted patches earlier during 4.2 development cycle to xen-devel.
>    Upstream Qemu includes QXL support.

If "someone" wants to step up and claim responsibility, I'll put it on the list of things to track. :-)

> * PVSCSI support in xl. James Harper was (semi-)interested in working on this,
>    because he has a PVSCSI frontend driver in the Windows GPLPV drivers, and he's using PVSCSI for tape backups himself.
>
> * libvirt libxl driver improvements; support more Xen features.
>    Allows the Ubuntu/Debian/Fedora/RHEL/CentOS "default" virtualization GUI to be used with Xen as well.

This is pretty important.  Looking at feature parity between libvirt+KVM and libvirt+Xen is on the Citrix xen.org to-do list, but to begin with it's more of a research item than a feature, so it wasn't on this particular list.  If you happen to know a specific list of missing features, that might be something we could try to fit in for 4.3.

  -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 13:15:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 13:15:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3oIh-0002uz-CQ; Tue, 21 Aug 2012 13:15:19 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T3oIf-0002uu-Ux
	for xen-devel@lists.xen.org; Tue, 21 Aug 2012 13:15:18 +0000
Received: from [85.158.138.51:52932] by server-4.bemta-3.messagelabs.com id
	B0/9F-04276-5E983305; Tue, 21 Aug 2012 13:15:17 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-174.messagelabs.com!1345554915!29384371!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk4OTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9595 invoked from network); 21 Aug 2012 13:15:16 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Aug 2012 13:15:16 -0000
X-IronPort-AV: E=Sophos;i="4.77,802,1336348800"; d="scan'208";a="14106815"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Aug 2012 13:14:49 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Tue, 21 Aug 2012 14:14:49 +0100
Message-ID: <1345554888.6821.56.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "Palagummi, Siva" <Siva.Palagummi@ca.com>
Date: Tue, 21 Aug 2012 14:14:48 +0100
In-Reply-To: <7D7C26B1462EB14CB0E7246697A18C13119012@INHYMS111B.ca.com>
References: <7D7C26B1462EB14CB0E7246697A18C1310F8E2@INHYMS111B.ca.com>
	<5028D6AC0200007800094651@nat28.tlf.novell.com>
	<7D7C26B1462EB14CB0E7246697A18C13119012@INHYMS111B.ca.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Jan Beulich <JBeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH RFC] xen/netback: Count ring slots properly
 when larger MTU sizes are used
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-08-14 at 22:17 +0100, Palagummi, Siva wrote:
> 
> > -----Original Message-----
> > From: Jan Beulich [mailto:JBeulich@suse.com]
> > Sent: Monday, August 13, 2012 1:58 PM
> > To: Palagummi, Siva
> > Cc: xen-devel@lists.xen.org
> > Subject: Re: [Xen-devel] [PATCH RFC] xen/netback: Count ring slots
> > properly when larger MTU sizes are used
> > 
> > >>> On 13.08.12 at 02:12, "Palagummi, Siva" <Siva.Palagummi@ca.com>
> > wrote:
> > >--- a/drivers/net/xen-netback/netback.c	2012-01-25 19:39:32.000000000
> > -0500
> > >+++ b/drivers/net/xen-netback/netback.c	2012-08-12 15:50:50.000000000
> > -0400
> > >@@ -623,6 +623,24 @@ static void xen_netbk_rx_action(struct x
> > >
> > > 		count += nr_frags + 1;
> > >
> > >+		/*
> > >+		 * The logic here should be somewhat similar to
> > >+		 * xen_netbk_count_skb_slots. In case of larger MTU size,
> > 
> > Is there a reason why you can't simply use that function then?
> > Afaict it's being used on the very same skb before it gets put on
> > rx_queue already anyway.
> > 
> 
> I did think about it. But this would mean iterating through a similar
> piece of code twice, with additional function calls. The
> netbk_gop_skb-->netbk_gop_frag_copy sequence actually executes
> similar code. And I'm also not sure about any other implications. So I
> decided to fix it by adding a few lines of code inline.

I wonder if we could cache the result of the call to
xen_netbk_count_skb_slots in xenvif_start_xmit somewhere?

> > >+		 * skb head length may be more than a PAGE_SIZE. We need to
> > >+		 * consider ring slots consumed by that data. If we do not,
> > >+		 * then within this loop itself we end up consuming more
> > meta
> > >+		 * slots turning the BUG_ON below. With this fix we may end
> > up
> > >+		 * iterating through xen_netbk_rx_action multiple times
> > >+		 * instead of crashing netback thread.
> > >+		 */
> > >+
> > >+
> > >+		count += DIV_ROUND_UP(skb_headlen(skb), PAGE_SIZE);
> > 
> > This now over-accounts by one I think (due to the "+ 1" above;
> > the calculation here really is to replace that increment).
> > 
> > Jan
> > 
> I also wasn't sure about the actual purpose of the "+1" above: whether it
> is meant to take care of skb_headlen, the non-zero gso_size case, or some
> other case.

I think its intention was to account for skb_headlen, and therefore it
should be replaced.

>   That's why I left it like that, so that I can exit the loop on the safer
> side. If someone who knows this area of the code can confirm that we do
> not need it, I will create a new patch. In my environment I did
> observe that "count" is always greater than the
> actual number of meta slots produced, because of this additional "+1" in my
> patch. When I took out this extra addition, count was always equal
> to the actual meta slots produced and the loop exited safely, with more
> meta slots produced under heavy traffic.

I think that's an argument for removing it as well.

The + 1 leading to an early exit seems benign when you think about one
largish skb, but imagine if you had 200 small (single-page) skbs -- then
you have effectively halved the size of the ring (or at least the
batch).

This:
		/* Filled the batch queue? */
		if (count + MAX_SKB_FRAGS >= XEN_NETIF_RX_RING_SIZE)
			break;
seems a bit iffy to me too. I wonder if MAX_SKB_FRAGS should be
max_required_rx_slots(vif)? Or maybe the preflight checks from 
xenvif_start_xmit save us from this fate?

Ian.

> 
> Thanks
> Siva
>  
> > >+
> > >+		if (skb_shinfo(skb)->gso_size)
> > >+			count++;
> > >+
> > > 		__skb_queue_tail(&rxq, skb);
> > >
> > > 		/* Filled the batch queue? */
> > 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 14:12:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 14:12:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3pBZ-0003F4-4K; Tue, 21 Aug 2012 14:12:01 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <ehabkost@redhat.com>) id 1T3p2h-0003Db-BJ
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 14:02:51 +0000
X-Env-Sender: ehabkost@redhat.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1345557729!3051650!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQ0MzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7643 invoked from network); 21 Aug 2012 14:02:10 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-15.tower-27.messagelabs.com with SMTP;
	21 Aug 2012 14:02:10 -0000
Received: from int-mx12.intmail.prod.int.phx2.redhat.com
	(int-mx12.intmail.prod.int.phx2.redhat.com [10.5.11.25])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id q7LE1wJd020297
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 21 Aug 2012 10:01:58 -0400
Received: from blackpad.lan.raisama.net (vpn1-6-72.gru2.redhat.com
	[10.97.6.72])
	by int-mx12.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP
	id q7LE1tIR006568; Tue, 21 Aug 2012 10:01:56 -0400
Received: by blackpad.lan.raisama.net (Postfix, from userid 500)
	id 6D101200CDE; Tue, 21 Aug 2012 11:03:02 -0300 (BRT)
Date: Tue, 21 Aug 2012 11:03:00 -0300
From: Eduardo Habkost <ehabkost@redhat.com>
To: Igor Mammedov <imammedo@redhat.com>
Message-ID: <20120821140300.GA4638@otherpad.lan.raisama.net>
References: <1345419579-25499-1-git-send-email-imammedo@redhat.com>
	<1345419579-25499-6-git-send-email-imammedo@redhat.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1345419579-25499-6-git-send-email-imammedo@redhat.com>
X-Fnord: you can see the fnord
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.25
X-Mailman-Approved-At: Tue, 21 Aug 2012 14:11:59 +0000
Cc: peter.maydell@linaro.org, jan.kiszka@siemens.com, mjt@tls.msk.ru,
	qemu-devel@nongnu.org, armbru@redhat.com, blauwirbel@gmail.com,
	kraxel@redhat.com, xen-devel@lists.xensource.com,
	i.mitsyanko@samsung.com, mdroth@linux.vnet.ibm.com,
	avi@redhat.com, anthony.perard@citrix.com, lersek@redhat.com,
	stefanha@linux.vnet.ibm.com, stefano.stabellini@eu.citrix.com,
	sw@weilnetz.de, lcapitulino@redhat.com, rth@twiddle.net,
	kwolf@redhat.com, aliguori@us.ibm.com, mtosatti@redhat.com,
	pbonzini@redhat.com, afaerber@suse.de
Subject: Re: [Xen-devel] [Qemu-devel] [PATCH 5/5] make CPU a child of
	DeviceState
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Aug 20, 2012 at 01:39:39AM +0200, Igor Mammedov wrote:
> Signed-off-by: Igor Mammedov <imammedo@redhat.com>
> ---
>  include/qemu/cpu.h |    6 +++---
>  1 files changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/include/qemu/cpu.h b/include/qemu/cpu.h
> index ad706a6..ac44057 100644
> --- a/include/qemu/cpu.h
> +++ b/include/qemu/cpu.h
> @@ -20,7 +20,7 @@
>  #ifndef QEMU_CPU_H
>  #define QEMU_CPU_H
>  
> -#include "qemu/object.h"
> +#include "hw/qdev-core.h"
>  #include "qemu-thread.h"
>  
>  /**
> @@ -46,7 +46,7 @@ typedef struct CPUState CPUState;
>   */
>  typedef struct CPUClass {
>      /*< private >*/
> -    ObjectClass parent_class;
> +    DeviceClass parent_class;
>      /*< public >*/
>  
>      void (*reset)(CPUState *cpu);
> @@ -59,7 +59,7 @@ typedef struct CPUClass {
>   */
>  struct CPUState {
>      /*< private >*/
> -    Object parent_obj;
> +    DeviceState parent_obj;
>      /*< public >*/
>  
>      struct QemuThread *thread;

Don't you need to update cpu_type_info in qom/cpu.c as well? It still
has .parent = TYPE_OBJECT.

-- 
Eduardo

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 14:19:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 14:19:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3pID-0003OS-1D; Tue, 21 Aug 2012 14:18:53 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T3pI9-0003OJ-Pz
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 14:18:51 +0000
Received: from [85.158.138.51:57035] by server-11.bemta-3.messagelabs.com id
	30/F1-23152-8C893305; Tue, 21 Aug 2012 14:18:48 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-10.tower-174.messagelabs.com!1345558728!23406160!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk4OTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20634 invoked from network); 21 Aug 2012 14:18:48 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-10.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Aug 2012 14:18:48 -0000
X-IronPort-AV: E=Sophos;i="4.77,802,1336348800"; d="scan'208";a="14108281"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Aug 2012 14:18:47 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 21 Aug 2012 15:18:47 +0100
Date: Tue, 21 Aug 2012 15:18:26 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <1345133009-21941-12-git-send-email-konrad.wilk@oracle.com>
Message-ID: <alpine.DEB.2.02.1208211514130.15568@kaball.uk.xensource.com>
References: <1345133009-21941-1-git-send-email-konrad.wilk@oracle.com>
	<1345133009-21941-12-git-send-email-konrad.wilk@oracle.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: [Xen-devel] [PATCH 11/11] xen/mmu: Release just the MFN list,
 not MFN list and part of pagetables.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 16 Aug 2012, Konrad Rzeszutek Wilk wrote:
> We call memblock_reserve for [start of mfn list] -> [PMD aligned end
> of mfn list] instead of [start of mfn list] -> [page aligned end of mfn list].
> 
> This has the disastrous effect that if at bootup the end of mfn_list is
> not PMD aligned we end up returning to memblock parts of the region
> past the mfn_list array. And those parts are the PTE tables with
> the disastrous effect of seeing this at bootup:

This patch looks wrong to me.

Aren't you changing the way mfn_list is reserved using memblock in patch
#3? Moreover it really seems to me that you are PAGE_ALIGN'ing size
rather than PMD_ALIGN'ing it there.


> Write protecting the kernel read-only data: 10240k
> Freeing unused kernel memory: 1860k freed
> Freeing unused kernel memory: 200k freed
> (XEN) mm.c:2429:d0 Bad type (saw 1400000000000002 != exp 7000000000000000) for mfn 116a80 (pfn 14e26)
> ...
> (XEN) mm.c:908:d0 Error getting mfn 116a83 (pfn 14e2a) from L1 entry 8000000116a83067 for l1e_owner=0, pg_owner=0
> (XEN) mm.c:908:d0 Error getting mfn 4040 (pfn 5555555555555555) from L1 entry 0000000004040601 for l1e_owner=0, pg_owner=0
> .. and so on.
>
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> ---
>  arch/x86/xen/mmu.c |    2 +-
>  1 files changed, 1 insertions(+), 1 deletions(-)
> 
> diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
> index 5a880b8..6019c22 100644
> --- a/arch/x86/xen/mmu.c
> +++ b/arch/x86/xen/mmu.c
> @@ -1227,7 +1227,6 @@ static void __init xen_pagetable_setup_done(pgd_t *base)
>  			/* We should be in __ka space. */
>  			BUG_ON(xen_start_info->mfn_list < __START_KERNEL_map);
>  			addr = xen_start_info->mfn_list;
> -			size = PAGE_ALIGN(xen_start_info->nr_pages * sizeof(unsigned long));
>  			/* We roundup to the PMD, which means that if anybody at this stage is
>  			 * using the __ka address of xen_start_info or xen_start_info->shared_info
>  			 * they are in going to crash. Fortunatly we have already revectored
> @@ -1235,6 +1234,7 @@ static void __init xen_pagetable_setup_done(pgd_t *base)
>  			size = roundup(size, PMD_SIZE);
>  			xen_cleanhighmap(addr, addr + size);
>  
> +			size = PAGE_ALIGN(xen_start_info->nr_pages * sizeof(unsigned long));
>  			memblock_free(__pa(xen_start_info->mfn_list), size);
>  			/* And revector! Bye bye old array */
>  			xen_start_info->mfn_list = new_mfn_list;
> -- 
> 1.7.7.6
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 14:34:19 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 14:34:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3pWb-0003do-WC; Tue, 21 Aug 2012 14:33:45 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ketuzsezr@gmail.com>) id 1T3pWa-0003dj-F6
	for xen-devel@lists.xen.org; Tue, 21 Aug 2012 14:33:44 +0000
X-Env-Sender: ketuzsezr@gmail.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1345559595!10379870!1
X-Originating-IP: [209.85.220.193]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_8,
	RCVD_BY_IP,spamassassin: ,async_handler: 
	YXN5bmNfZGVsYXk6IDcwNjcwMTUgKHRpbWVvdXQp\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2844 invoked from network); 21 Aug 2012 14:33:17 -0000
Received: from mail-vc0-f193.google.com (HELO mail-vc0-f193.google.com)
	(209.85.220.193)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Aug 2012 14:33:17 -0000
Received: by vcbf13 with SMTP id f13so1034168vcb.8
	for <xen-devel@lists.xen.org>; Tue, 21 Aug 2012 07:33:15 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:date:from:to:cc:subject:message-id:references:mime-version
	:content-type:content-disposition:in-reply-to:user-agent;
	bh=D9/+JYjcljpVlnsa1CtNkDPavRuEyBB6koWB0Tu5JXQ=;
	b=eHsQPM/RFM9HZ4lnnFA06zpt4F1W6/iTTLVlHtDVuSYouflWrJtjW9Tlv98o1e38Y3
	eWc/Ag8Amr3HTCpSjyLF6xcazphdY2/Pq0uO2TdiV3pQOnzs5Kob7pzF+tVHd/W6ktqU
	mOU3XCXy3QlVuwVNMT1kHpBiocD1srfV8c7hnfl6beLHDzVrBS08fglF5eKL77gRfOCi
	9MFWa5t9HeI029kf/Dqm+KFBy+jJCPuR5C4Q25g7fIdJkzmP15wzhJ003JW4SHYZRFAH
	fjGjnefQitarLbDBtohVamztf/+20igzak8DYhY7nqPJoBlI17odM82+kQ9AZTLkPuxz
	m19g==
Received: by 10.52.66.165 with SMTP id g5mr11551957vdt.59.1345559595044;
	Tue, 21 Aug 2012 07:33:15 -0700 (PDT)
Received: from phenom.dumpdata.com
	(209-6-85-33.c3-0.smr-ubr2.sbo-smr.ma.cable.rcn.com. [209.6.85.33])
	by mx.google.com with ESMTPS id bm15sm732706vdb.22.2012.08.21.07.33.13
	(version=TLSv1/SSLv3 cipher=OTHER);
	Tue, 21 Aug 2012 07:33:14 -0700 (PDT)
Date: Tue, 21 Aug 2012 10:23:14 -0400
From: Konrad Rzeszutek Wilk <konrad@kernel.org>
To: "Ren, Yongjie" <yongjie.ren@intel.com>
Message-ID: <20120821142312.GA20097@phenom.dumpdata.com>
References: <1B4B44D9196EFF41AE41FDA404FC0A101593BA@SHSMSX101.ccr.corp.intel.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1B4B44D9196EFF41AE41FDA404FC0A101593BA@SHSMSX101.ccr.corp.intel.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: Tobias Geiger <tobias.geiger@vido.info>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] Regression in kernel 3.5 as Dom0 regarding PCI
 Passthrough?!
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Aug 21, 2012 at 02:41:36AM +0000, Ren, Yongjie wrote:
> > -----Original Message-----
> > From: xen-devel-bounces@lists.xen.org
> > [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Konrad Rzeszutek
> > Wilk
> > Sent: Tuesday, August 21, 2012 7:30 AM
> > To: Tobias Geiger
> > Cc: xen-devel@lists.xen.org
> > Subject: Re: [Xen-devel] Regression in kernel 3.5 as Dom0 regarding PCI
> > Passthrough?!
> > 
> > On Mon, Aug 06, 2012 at 12:16:33PM -0400, Konrad Rzeszutek Wilk wrote:
> > > On Wed, Jul 25, 2012 at 09:43:57AM -0400, Konrad Rzeszutek Wilk
> > wrote:
> > > > On Wed, Jul 25, 2012 at 02:30:00PM +0200, Tobias Geiger wrote:
> > > > > Hi!
> > > > >
> > > > > I notice a serious regression with 3.5 as Dom0 kernel (3.4 was rock
> > > > > stable):
> > > > >
> > > > > 1st: only the GPU PCI Passthrough works, the PCI USB Controller is
> > > > > not recognized within the DomU (HVM Win7 64)
> > > > > Dom0 cmdline is:
> > > > > ro root=LABEL=dom0root
> > xen-pciback.hide=(08:00.0)(08:00.1)(00:1d.0)(00:1d.1)(00:1d.2)(00:1d.7)
> > > > > security=apparmor noirqdebug nouveau.msi=1
> > > > >
> > > > > Only 8:00.0 and 8:00.1 get passed through without problems, all the
> > > > > USB Controller IDs are not correctly passed through and get an
> > > > > exclamation mark within the win7 device manager ("could not be
> > > > > started").
> > > >
> > > > Ok, but they do get passed in though? As in, QEMU sees them.
> > > > If you boot a Live Ubuntu/Fedora CD within the guest with the PCI
> > > > passed in devices do you see them? Meaning lspci shows them?
> > > >
> > > >
> > > > Is the lspci -vvv output in dom0 different from 3.4 vs 3.5?
> > > >
> > > > >
> > > > >
> > > > > 2nd: After DomU shutdown, Dom0 panics (100% reproducible) -
> > sorry
> > > > > that I have no full stacktrace, all I have is a "screenshot" which I
> > > > > uploaded here:
> > > > >
> > http://imageshack.us/photo/my-images/52/img20120724235921.jpg/
> > > >
> > > > Ugh, that looks like somebody removed a large chunk of a pagetable.
> > > >
> > > > Hmm. Are you using dom0_mem=max parameter? If not, can you try
> > > > that and also disable ballooning in the xm/xl config file pls?
> > > >
> > > > >
> > > > >
> > > > > With 3.4 both issues were not there - everything worked perfectly.
> > > > > Tell me which debugging info you need, I may be able to re-install
> > > > > my netconsole to get the full stacktrace (but I had not much luck
> > > > > with netconsole regarding kernel panics - rarely this info gets sent
> > > > > before the "panic"...)
> > >
> > > So I am able to reproduce this with a Windows 7 with an ATI 4870 and
> > > an Intel 82574L NIC. The video card still works, but the NIC stopped
> > > working. Same version of hypervisor/toolstack/etc, only change is the
> > > kernel (v3.4.6->v3.5).
> > >
> > > Time to get my hands greasy with this..
> > 
> > And it's due to a patch I added in v3.4
> > (cd9db80e5257682a7f7ab245a2459648b3c8d268)
> > - which did not work properly in v3.4, but with v3.5 got it working
> > (977f857ca566a1e68045fcbb7cfc9c4acb077cf0), which causes v3.5 to not
> > work
> > anymore.
> > 
> > Anyhow, for right now just revert
> > cd9db80e5257682a7f7ab245a2459648b3c8d268
> > and it should work for you.
> > 
> Also, our team reported a VT-d bug 2 months ago.
> http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1824
> We found "3bb07f1b73ea6313b843807063e183e168c9182a" is the bad commit in the Linux tree.
> Linux 3.4.7 works fine, but Linux 3.5 has this issue.

Oh, I wish I saw that earlier.

> Seems Tobias has the same issue as the one in the bug.
> But we didn't hit a Dom0 panic when shutting down the DomU.

Neither do I - not sure why he sees that.
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 14:36:47 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 14:36:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3pZ8-0003jG-I2; Tue, 21 Aug 2012 14:36:22 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ketuzsezr@gmail.com>) id 1T3pZ7-0003j5-RH
	for xen-devel@lists.xen.org; Tue, 21 Aug 2012 14:36:22 +0000
X-Env-Sender: ketuzsezr@gmail.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1345559773!6421775!1
X-Originating-IP: [209.85.212.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13962 invoked from network); 21 Aug 2012 14:36:15 -0000
Received: from mail-vb0-f45.google.com (HELO mail-vb0-f45.google.com)
	(209.85.212.45)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Aug 2012 14:36:15 -0000
Received: by vbip1 with SMTP id p1so8146362vbi.32
	for <xen-devel@lists.xen.org>; Tue, 21 Aug 2012 07:36:13 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:date:from:to:cc:subject:message-id:references:mime-version
	:content-type:content-disposition:in-reply-to:user-agent;
	bh=qutwON6O1QeUfLEhKeXyjv/v1d6ytqSS2TW9UhUXwj0=;
	b=n9Y3Q2qjWGlfOSSg7rXYwA6myOC2YhixP/Yu6a95vJWJaFtjgUjgJBjQEfdVh62fn/
	71FU2u/AHM37Ryl7OXNxIJ93TH5trcSTOgufEZlorFiUJDAk6PLrDIUoAC/UBndQeE5M
	bTXfHobWQ9eq7LFJFcKiAWTKdJyI3EmX4ETEOzeGzuXlwWE3Pd5INjoBcOl+Ln+UEL0v
	k4i+P5koG7rjjydvboHUfs9W9jTlKEKWOY/Ti0b5uJDAzpBbDA0SxV4Ebj8HGEcCMuJA
	YYluPft4K6YPMiMrZ2sBJX2IiGmA2CjMh3vJ1TssgBVb8xaTrUAy710dpRA4WxvZv5Mu
	2muQ==
Received: by 10.58.229.166 with SMTP id sr6mr14627651vec.52.1345559773144;
	Tue, 21 Aug 2012 07:36:13 -0700 (PDT)
Received: from phenom.dumpdata.com
	(209-6-85-33.c3-0.smr-ubr2.sbo-smr.ma.cable.rcn.com. [209.6.85.33])
	by mx.google.com with ESMTPS id b2sm761173vdu.3.2012.08.21.07.36.12
	(version=TLSv1/SSLv3 cipher=OTHER);
	Tue, 21 Aug 2012 07:36:12 -0700 (PDT)
Date: Tue, 21 Aug 2012 10:26:12 -0400
From: Konrad Rzeszutek Wilk <konrad@kernel.org>
To: George Dunlap <george.dunlap@eu.citrix.com>
Message-ID: <20120821142611.GB20097@phenom.dumpdata.com>
References: <CAFLBxZaGmvSYgL4DcHcSwE_0shqKC0DRbf7ab=uhFrerPCxRig@mail.gmail.com>
	<20120820202856.GA11485@phenom.dumpdata.com>
	<50335D8F.6000603@eu.citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50335D8F.6000603@eu.citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.3 release planning proposal
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Aug 21, 2012 at 11:06:07AM +0100, George Dunlap wrote:
> On 20/08/12 21:28, Konrad Rzeszutek Wilk wrote:
> >On Mon, Aug 20, 2012 at 05:46:59PM +0100, George Dunlap wrote:
> >>* Multi-page blk rings
> >>  - blkback in kernel (@intel)
> >.. and me as well.
> OK -- I think what I really need is just one person to be a
> point-of-contact, who can give updates on progress.  Shall I put you
> down for that, or should I find out who at Intel is working on it?

You can put me down as the contact person and I will keep track of
the status.
> >>  - qemu blkback
> >>
> >>* Multi-page net protocol
> >>   owner: ?
> >>   expand the network ring protocol to allow multiple pages for
> >>   increased throughput
> >Multiple people working on this. Ian, me, Annie, and Wei I believe.
> OK -- Ian C said he'd be the primary point of contact for this one
> until Wei gets back; if you or Annie want to be the contact
> directly, let me know. :-)

Ian would be a better choice.

> >>* xl USB pass-through for PV guests
> >>   owner: ?
> >>   Port the xend PV pass-through functionality to xl.
> >Well, what about the Linux side? The frontend/backend drivers haven't
> >really been proposed for upstream.
> OK, I'll put that down as well.
> >>* PV audio (audio for stubdom qemu)
> >>   owner: stefano.panella@citrix
> >Is this a new person? Anyhow, there was a PV audio in pulseaudio as part
> >of the GSOC.
> Stefano Panella is on the XenClient team at Citrix.  I think he
> probably knows about the previous work, but I'll make sure.  One
> thing I know he's been particularly concerned about is audio
> latency, particularly with using GoToMeeting or Skype inside a VM.
> 
> Thanks,
>  -George
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.3 release planning proposal
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Aug 21, 2012 at 11:06:07AM +0100, George Dunlap wrote:
> On 20/08/12 21:28, Konrad Rzeszutek Wilk wrote:
> >On Mon, Aug 20, 2012 at 05:46:59PM +0100, George Dunlap wrote:
> >>* Multi-page blk rings
> >>  - blkback in kernel (@intel)
> >.. and me as well.
> OK -- I think what I really need is just one person to be a
> point-of-contact, who can give updates on progress.  Shall I put you
> down for that, or should I find out who at Intel is working on it?

You can put me down as the contact person and I will keep track of
the status.
> >>  - qemu blkback
> >>
> >>* Multi-page net protocol
> >>   owner: ?
> >>   expand the network ring protocol to allow multiple pages for
> >>   increased throughput
> >Multiple people working on this. Ian, me, Annie, and Wei I believe.
> OK -- Ian C said he'd be the primary point of contact for this one
> until Wei gets back; if you or Annie want to be the contact
> directly, let me know. :-)

Ian would be a better choice.

> >>* xl USB pass-through for PV guests
> >>   owner: ?
> >>   Port the xend PV pass-through functionality to xl.
> >Well, what about the Linux side? The frontend/backend drivers haven't
> >really been proposed for upstream.
> OK, I'll put that down as well.
> >>* PV audio (audio for stubdom qemu)
> >>   owner: stefano.panella@citrix
> >Is this a new person? Anyhow, there was a PV audio in pulseaudio as part
> >of the GSOC.
> Stefano Panella is on the XenClient team at Citrix.  I think he
> probably knows about the previous work, but I'll make sure.  One
> thing I know he's been particularly concerned about is audio
> latency, particularly with using GoToMeeting or Skype inside a VM.
> 
> Thanks,
>  -George
> 

From xen-devel-bounces@lists.xen.org Tue Aug 21 14:42:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 14:42:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3pf7-0003xc-HV; Tue, 21 Aug 2012 14:42:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T3pf6-0003xX-3e
	for xen-devel@lists.xen.org; Tue, 21 Aug 2012 14:42:32 +0000
Received: from [85.158.139.83:21365] by server-2.bemta-5.messagelabs.com id
	6D/17-10142-75E93305; Tue, 21 Aug 2012 14:42:31 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-9.tower-182.messagelabs.com!1345560150!28613267!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12280 invoked from network); 21 Aug 2012 14:42:30 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-9.tower-182.messagelabs.com with SMTP;
	21 Aug 2012 14:42:30 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 21 Aug 2012 15:42:30 +0100
Message-Id: <5033BAA00200007800096BB0@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Tue, 21 Aug 2012 15:43:12 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "George Dunlap" <George.Dunlap@eu.citrix.com>
References: <CAFLBxZaGmvSYgL4DcHcSwE_0shqKC0DRbf7ab=uhFrerPCxRig@mail.gmail.com>
In-Reply-To: <CAFLBxZaGmvSYgL4DcHcSwE_0shqKC0DRbf7ab=uhFrerPCxRig@mail.gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen 4.3 release planning proposal
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 20.08.12 at 18:46, George Dunlap <George.Dunlap@eu.citrix.com> wrote:
> So I propose that we move to a time-based release schedule.  Rather
> than aiming for a release date, I propose that we aim to do a "feature
> freeze" six months after the 4.2 release -- that would be around March
> 1, 2013.  That way we'll probably end up releasing in 9 months' time,
> around June 2013.  This is one of the things we can discuss at the Dev
> Meeting before the Xen Summit next week.  If you have other opinions,
> please let us know.

That would make for a scheduled 3-month feature freeze. I don't
recall for how long we've been in feature freeze for 4.2 now, but
from my perspective this is already definitely too long. Over that
time I accumulated around 50 patches (not counting the bug fixes
that I keep posting), and I'm sure it was never that bad during
earlier feature freeze periods.

> * Event channel scalability
>   owner: attilio@citrix
>   Increase limit on event channels (currently 1024 for 32-bit guests,
>   4096 for 64-bit guests)

This one I have on my todo list as well.

Bigger things that I have ready to be submitted (and that aren't
cleanup in a more or less strong sense) are an EHCI debug port
driver for the console and a port of the CPUID based (i.e. not
requiring ACPI data to be passed from Dom0) idle driver from
Linux.

Larger things I have on my todo list and didn't see in yours are
- breaking the 5TB memory barrier (for the moment aiming at
  16TB due to certain implementation details)
- an xHCI debug port driver for the console (no Linux driver to
  clone from so far, so needs someone with good knowledge of
  the device and access to hardware, neither of which is the
  case for me, or finding out whether this is already in the works
  on the Linux side)
- if possible, a FireWire based console driver (but I've never done
  anything with FireWire, so it would depend on me finding ample
  time to first play with and then work on this)
- getting the public headers x32-clean

Plus there's one tools-side item that I was asked to raise in the
course of 4.3 planning/development, but which I'm unlikely to
work on myself: removal of the hard-coded modprobe-s from
xencommons, properly loading the needed modules on demand
from the tool stack.

Jan



From xen-devel-bounces@lists.xen.org Tue Aug 21 14:50:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 14:50:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3pmf-00047S-Fi; Tue, 21 Aug 2012 14:50:21 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <attilio.rao@citrix.com>) id 1T3pmd-00047N-Ij
	for xen-devel@lists.xen.org; Tue, 21 Aug 2012 14:50:19 +0000
Received: from [85.158.138.51:56865] by server-11.bemta-3.messagelabs.com id
	59/6E-23152-A20A3305; Tue, 21 Aug 2012 14:50:18 +0000
X-Env-Sender: attilio.rao@citrix.com
X-Msg-Ref: server-3.tower-174.messagelabs.com!1345560616!21358281!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk4OTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16259 invoked from network); 21 Aug 2012 14:50:16 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-3.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Aug 2012 14:50:16 -0000
X-IronPort-AV: E=Sophos;i="4.77,802,1336348800"; d="scan'208";a="14109047"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Aug 2012 14:50:16 +0000
Received: from [10.80.3.221] (10.80.3.221) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Tue, 21 Aug 2012 15:50:16 +0100
Message-ID: <50339D0A.2070909@citrix.com>
Date: Tue, 21 Aug 2012 15:36:58 +0100
From: Attilio Rao <attilio.rao@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US;
	rv:1.9.1.16) Gecko/20111110 Icedove/3.0.11
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <CAFLBxZaGmvSYgL4DcHcSwE_0shqKC0DRbf7ab=uhFrerPCxRig@mail.gmail.com>
	<5033BAA00200007800096BB0@nat28.tlf.novell.com>
In-Reply-To: <5033BAA00200007800096BB0@nat28.tlf.novell.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.3 release planning proposal
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 21/08/12 15:43, Jan Beulich wrote:
>>>> On 20.08.12 at 18:46, George Dunlap<George.Dunlap@eu.citrix.com>  wrote:
>>>>          
>> So I propose that we move to a time-based release schedule.  Rather
>> than aiming for a release date, I propose that we aim to do a "feature
>> freeze" six months after the 4.2 release -- that would be around March
>> 1, 2013.  That way we'll probably end up releasing in 9 months' time,
>> around June 2013.  This is one of the things we can discuss at the Dev
>> Meeting before the Xen Summit next week.  If you have other opinions,
>> please let us know.
>>      
> That would make for a scheduled 3 month feature freeze. I don't
> recall for how long we've been in feature freeze for 4.2 now, but
> from my perspective this is already definitely too long. Over that
> time I accumulated around 50 patches (not counting the bug fixes
> that I keep posting), and I'm sure it was never that bad during
> earlier feature freeze periods.
>
>    
>> * Event channel scalability
>>    owner: attilio@citrix
>>    Increase limit on event channels (currently 1024 for 32-bit guests,
>>    4096 for 64-bit guests)
>>      
> This one I have on my todo list as well.
>
>    

Do you mean that you already have a patch for that?
I'm starting to put together a complete plan for it. I discussed some
aspects with Ian Campbell and plan to send a detailed e-mail in the next
few days.

Attilio


From xen-devel-bounces@lists.xen.org Tue Aug 21 14:51:22 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 14:51:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3pnP-00049x-Tx; Tue, 21 Aug 2012 14:51:07 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andres@lagarcavilla.org>) id 1T3pnO-00049q-SH
	for xen-devel@lists.xen.org; Tue, 21 Aug 2012 14:51:07 +0000
Received: from [85.158.143.99:42098] by server-3.bemta-4.messagelabs.com id
	73/19-09529-A50A3305; Tue, 21 Aug 2012 14:51:06 +0000
X-Env-Sender: andres@lagarcavilla.org
X-Msg-Ref: server-8.tower-216.messagelabs.com!1345560665!19530693!1
X-Originating-IP: [208.97.132.202]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA4Ljk3LjEzMi4yMDIgPT4gMjgxNzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30534 invoked from network); 21 Aug 2012 14:51:05 -0000
Received: from caiajhbdccac.dreamhost.com (HELO homiemail-a15.g.dreamhost.com)
	(208.97.132.202) by server-8.tower-216.messagelabs.com with SMTP;
	21 Aug 2012 14:51:05 -0000
Received: from homiemail-a15.g.dreamhost.com (localhost [127.0.0.1])
	by homiemail-a15.g.dreamhost.com (Postfix) with ESMTP id 710AF76C058;
	Tue, 21 Aug 2012 07:51:04 -0700 (PDT)
DomainKey-Signature: a=rsa-sha1; c=nofws; d=lagarcavilla.org; h=message-id
	:in-reply-to:references:date:subject:from:to:cc:reply-to
	:mime-version:content-type:content-transfer-encoding; q=dns; s=
	lagarcavilla.org; b=m1ve2rzK5CxKg5v/6oW7kF10VpRdeR7iB4XpS9nIc2HW
	UK2b9zjZ8KjScIgoZqaSuVS4qXN5UFVA3ZCQsBiTeyXgYkBEelrKvWLtTFjl3/+7
	0bh1393iwypkujM/NlKeBZ3uPJ7WLr38AJGRkWAzt5z8x01iaGig1cY7W3WJ/w4=
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed; d=lagarcavilla.org; h=
	message-id:in-reply-to:references:date:subject:from:to:cc
	:reply-to:mime-version:content-type:content-transfer-encoding;
	s=lagarcavilla.org; bh=cFBtYQX3TUAb7UhwNbr7wvkj/bI=; b=uVTmSeMw
	rPBequ4oiFVXdj3AKw+6Npp02+TQkqqeGRAfweLMWwNlHam11eyqoh2ErLKs4Mpx
	sLwXh4mr3YyIpF0FvqF5bYGtNZC1LqHl3niQUwhIyQ0NwToilrYSYf2yId8xvnjy
	CWFF4guCwMq/KvHVRhw96zJZ4fQqvCzEwYM=
Received: from webmail.lagarcavilla.org (caiajhbihbdd.dreamhost.com
	[208.97.187.133]) (Authenticated sender: andres@lagarcavilla.com)
	by homiemail-a15.g.dreamhost.com (Postfix) with ESMTPA id 3527576C06F; 
	Tue, 21 Aug 2012 07:51:04 -0700 (PDT)
Received: from 206.223.182.18 (proxying for 206.223.182.18)
	(SquirrelMail authenticated user andres@lagarcavilla.com)
	by webmail.lagarcavilla.org with HTTP;
	Tue, 21 Aug 2012 07:50:53 -0700
Message-ID: <b9f5103b2989ceb9c4b07da85405307e.squirrel@webmail.lagarcavilla.org>
In-Reply-To: <mailman.11058.1345490072.1399.xen-devel@lists.xen.org>
References: <mailman.11058.1345490072.1399.xen-devel@lists.xen.org>
Date: Tue, 21 Aug 2012 07:50:53 -0700
From: "Andres Lagar-Cavilla" <andres@lagarcavilla.org>
To: xen-devel@lists.xen.org
User-Agent: SquirrelMail/1.4.21
MIME-Version: 1.0
Cc: george.dunlap@eu.citrix.com
Subject: Re: [Xen-devel] Xen 4.3 release planning proposal
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: andres@lagarcavilla.org
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>
> Hello everyone!  With the completion of our first few release candidates
> for 4.2, it's time to look forward and start planning for the 4.3
> release.  I've volunteered to step up and help coordinate the release
> for this cycle.
Hi George. Great idea. Cutting to the chase below

>
An observation: the three below really sound like xapi or libvirt tasks.
>
> * Full-VM snapshotting
>   owner: ?
>   Have a way of coordinating the taking and restoring of VM memory and
>   disk snapshots.  This would involve some investigation into the best
>   way to accomplish this.
>
> * VM Cloning
>   owner: ?
>   Again, a way of coordinating the memory and disk aspects.  Research
>   into the best way to do this would probably go along with the
>   snapshotting feature.
>
> * Make storage migration possible
>   owner: ?
>   There needs to be a way, either via command-line or via some hooks,
>   that someone can build a "storage migration" feature on top of libxl
>   or xl.
>
> * PV audio (audio for stubdom qemu)
>   owner: stefano.panella@citrix
>
> * Memory: Replace PoD with paging mechanism
>   owner: george@citrix
This is one visible tip of the paging iceberg.

More generally, we need full wait-queue support for gfn translation
resolution. Working on this might include any of the folks who touched x86
mm code during 4.2.

Due to wait-queue locking limitations that we decided not to tackle in the
4.2 time-frame, the hypervisor is unable to put a vcpu on a wait-queue in
hypervisor context at will. This means that right now, gfn->mfn
translations don't automagically hide all fixup conditions that require
helper intervention (paged out, unshare enomem). Fixing this will make the
p2m code a lot more self-contained and easier to parse, paging will be
completely transparent to guests (right now you can crash a guest with a
skillfully chosen paged out gfn), and will help you towards your goal of
s/pod/paging/

Andres
>
> * Managed domains?




> for 4.2, it's time to look forward and start planning for the 4.3
> release.  I've volunteered to step up and help coordinate the release
> for this cycle.
Hi George. Great idea. Cutting to the chase below:

>
An observation: the three below really sound like xapi or libvirt tasks.
>
> * Full-VM snapshotting
>   owner: ?
>   Have a way of coordinating the taking and restoring of VM memory and
>   disk snapshots.  This would involve some investigation into the best
>   way to accomplish this.
>
> * VM Cloning
>   owner: ?
>   Again, a way of coordinating the memory and disk aspects.  Research
>   into the best way to do this would probably go along with the
>   snapshotting feature.
>
> * Make storage migration possible
>   owner: ?
>   There needs to be a way, either via command-line or via some hooks,
>   that someone can build a "storage migration" feature on top of libxl
>   or xl.
>
> * PV audio (audio for stubdom qemu)
>   owner: stefano.panella@citrix
>
> * Memory: Replace PoD with paging mechanism
>   owner: george@citrix
This is one visible tip of the paging iceberg.

More generally, we need full wait-queue support for gfn translation
resolution. Working on this might include any of the folks who touched x86
mm code during 4.2.

Due to wait-queue locking limitations that we decided not to tackle in the
4.2 time-frame, the hypervisor is unable to put a vcpu on a wait-queue in
hypervisor context at will. This means that, right now, gfn->mfn
translations don't automagically hide all the fixup conditions that require
helper intervention (paged out, unshare ENOMEM). Fixing this will make the
p2m code a lot more self-contained and easier to follow, make paging
completely transparent to guests (right now you can crash a guest with a
skillfully chosen paged-out gfn), and help you towards your goal of
s/pod/paging/

Andres
>
> * Managed domains?



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 14:55:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 14:55:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3prA-0004Mg-Iy; Tue, 21 Aug 2012 14:55:00 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T3pr9-0004Ma-Kr
	for xen-devel@lists.xen.org; Tue, 21 Aug 2012 14:54:59 +0000
Received: from [85.158.139.83:22541] by server-6.bemta-5.messagelabs.com id
	21/3A-22415-241A3305; Tue, 21 Aug 2012 14:54:58 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-6.tower-182.messagelabs.com!1345560898!25439254!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22615 invoked from network); 21 Aug 2012 14:54:58 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-6.tower-182.messagelabs.com with SMTP;
	21 Aug 2012 14:54:58 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 21 Aug 2012 15:54:56 +0100
Message-Id: <5033BD8A0200007800096BF4@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Tue, 21 Aug 2012 15:55:38 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Attilio Rao" <attilio.rao@citrix.com>
References: <CAFLBxZaGmvSYgL4DcHcSwE_0shqKC0DRbf7ab=uhFrerPCxRig@mail.gmail.com>
	<5033BAA00200007800096BB0@nat28.tlf.novell.com>
	<50339D0A.2070909@citrix.com>
In-Reply-To: <50339D0A.2070909@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.3 release planning proposal
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 21.08.12 at 16:36, Attilio Rao <attilio.rao@citrix.com> wrote:
> On 21/08/12 15:43, Jan Beulich wrote:
>>>>> On 20.08.12 at 18:46, George Dunlap<George.Dunlap@eu.citrix.com>  wrote:
>>> * Event channel scalability
>>>    owner: attilio@citrix
>>>    Increase limit on event channels (currently 1024 for 32-bit guests,
>>>    4096 for 64-bit guests)
>>>      
>> This one I have on my todo list as well.
> 
> Do you mean that you already have a patch for that?

Oh, no, I didn't mean to say that.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 15:05:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 15:05:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3q0m-0004bZ-O0; Tue, 21 Aug 2012 15:04:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1T3q0k-0004bU-Pb
	for xen-devel@lists.xen.org; Tue, 21 Aug 2012 15:04:54 +0000
Received: from [85.158.143.99:4513] by server-2.bemta-4.messagelabs.com id
	68/E4-21239-693A3305; Tue, 21 Aug 2012 15:04:54 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-15.tower-216.messagelabs.com!1345561493!29291034!1
X-Originating-IP: [74.125.83.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17726 invoked from network); 21 Aug 2012 15:04:53 -0000
Received: from mail-ee0-f45.google.com (HELO mail-ee0-f45.google.com)
	(74.125.83.45)
	by server-15.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Aug 2012 15:04:53 -0000
Received: by eeke53 with SMTP id e53so2111149eek.32
	for <xen-devel@lists.xen.org>; Tue, 21 Aug 2012 08:04:53 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=et2fqzDnyJpvVAzsZamWPu09Yzd0vZH2cZ9lf7Kr1UM=;
	b=gs9FXVQBY69xwyHP2trOt8tzejHddJn4xsmGuwvBEXnqbqfBUZPQ1MbWyIttobqBQx
	Bvl7xXq7Ad+bp/0+D8ydltTAc8PBesFygYStHNKLdTfneECkQ+dgvHW3bv6NHnTi6NT5
	hcfwp5qAsU+mCdWQ06lj9QQtKEpFYlzFl+mCOi4ngsQ+3f/mdR99ffXD7wrFU9p8tvrA
	bqz+yK8RVAhbBoTYdGk7gBfLCT56V+khF8ShyovqBo9xBXkz7T53vExdXeA12U6+axI3
	cA2IpL02FhQ7zbef0EAxMSwBtU9T4J+j7FQgESZ/XxCrmgbeNpizLcN0MDv+pnPrqu6H
	3PAQ==
MIME-Version: 1.0
Received: by 10.14.180.68 with SMTP id i44mr7972860eem.20.1345561493207; Tue,
	21 Aug 2012 08:04:53 -0700 (PDT)
Received: by 10.14.215.195 with HTTP; Tue, 21 Aug 2012 08:04:53 -0700 (PDT)
In-Reply-To: <5033BAA00200007800096BB0@nat28.tlf.novell.com>
References: <CAFLBxZaGmvSYgL4DcHcSwE_0shqKC0DRbf7ab=uhFrerPCxRig@mail.gmail.com>
	<5033BAA00200007800096BB0@nat28.tlf.novell.com>
Date: Tue, 21 Aug 2012 16:04:53 +0100
X-Google-Sender-Auth: 0Ts1wYDunX7PgFx1mtXj9k_VKF8
Message-ID: <CAFLBxZYan0VPoxsPGr-YFGEod-z6P4=3BpERkO2y6qA8K4v05g@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen 4.3 release planning proposal
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Aug 21, 2012 at 3:43 PM, Jan Beulich <JBeulich@suse.com> wrote:
>>>> On 20.08.12 at 18:46, George Dunlap <George.Dunlap@eu.citrix.com> wrote:
>> So I propose that we move to a time-based release schedule.  Rather
>> than aiming for a release date, I propose that we aim to do a "feature
>> freeze" six months after the 4.2 release -- that would be around March
>> 1, 2013.  That way we'll probably end up releasing in 9 months' time,
>> around June 2013.  This is one of the things we can discuss at the Dev
>> Meeting before the Xen Summit next week.  If you have other opinions,
>> please let us know.
>
> That would make for a scheduled 3 month feature freeze. I don't
> recall for how long we've been in feature freeze for 4.2 now, but
> from my perspective this is already definitely too long. Over that
> time I accumulated around 50 patches (not counting the bug fixes
> that I keep posting), and I'm sure it was never that bad during
> earlier feature freeze periods.

I think it's been nearly 5 months.  The main reason for the long
feature freeze, as far as I can tell, is that we did the freeze too
early; there were several major components (namely a stable libxl api)
that were nowhere near ready.  I'm hoping to avoid that this time by
having a clearer idea what we want from the release; features that
will be blockers must be in a reasonable state before we freeze.

A 3-month freeze is basically 6 weeks of clean-up and bug-fixing, and
6 weeks of RCs, which seems pretty realistic to me.  Feel free to
suggest another break-down if you want. :-)

>> * Event channel scalability
>>   owner: attilio@citrix
>>   Increase limit on event channels (currently 1024 for 32-bit guests,
>>   4096 for 64-bit guests)
>
> This one I have on my todo list as well.

I would say, "You should coordinate with Attilio", but I see Attilio
has already responded. :-)

>
> Bigger things that I have ready to be submitted (and that aren't
> cleanup in a more or less strong sense) are an EHCI debug port
> driver for the console and a port of the CPUID based (i.e. not
> requiring ACPI data to be passed from Dom0) idle driver from
> Linux.
>
> Larger things I have on my todo list and didn't see in yours are
> - breaking the 5Tb memory barrier (for the moment aiming at
>   16Tb due to certain implementation details)
> - an xHCI debug port driver for the console (no Linux driver to
>   clone from so far, so needs someone with good knowledge of
>   the device and access to hardware, neither of which is the
>   case for me, or finding out whether this is already in the works
>   on the Linux side)
> - if possible, a FireWire based console driver (but I've never done
>   anything with FireWire, so it would depend on me finding ample
>   time to first play with and then work on this)
> - getting the public headers x32-clean
>
> Plus there's one tools side item that I was asked to raise in the
> course of 4.3 planning/development, but which I'm unlikely to
> work on myself - removal of the hard-coded modprobes from
> xencommons, properly loading the needed modules on demand
> from the tool stack.

Great, thanks Jan -- I've put all of these on my list.

 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 15:07:24 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 15:07:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3q2x-0004gV-8N; Tue, 21 Aug 2012 15:07:11 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T3q2w-0004gQ-72
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 15:07:10 +0000
Received: from [85.158.143.35:24631] by server-2.bemta-4.messagelabs.com id
	8F/98-21239-D14A3305; Tue, 21 Aug 2012 15:07:09 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1345561599!12315139!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNzAyMzU2\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18826 invoked from network); 21 Aug 2012 15:06:41 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-7.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Aug 2012 15:06:41 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7LF388M011068
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 21 Aug 2012 15:03:09 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7LF379Q020024
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 21 Aug 2012 15:03:07 GMT
Received: from abhmt103.oracle.com (abhmt103.oracle.com [141.146.116.55])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7LF36Xl005621; Tue, 21 Aug 2012 10:03:06 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 21 Aug 2012 08:03:06 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 99D544031E; Tue, 21 Aug 2012 10:53:07 -0400 (EDT)
Date: Tue, 21 Aug 2012 10:53:07 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Attilio Rao <attilio.rao@citrix.com>
Message-ID: <20120821145307.GE20289@phenom.dumpdata.com>
References: <1345511646-12427-1-git-send-email-attilio.rao@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1345511646-12427-1-git-send-email-attilio.rao@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: xen-devel@lists.xensource.com, Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	x86@kernel.org, linux-kernel@vger.kernel.org,
	Ingo Molnar <mingo@redhat.com>, "H. Peter Anvin" <hpa@zytor.com>,
	Thomas Gleixner <tglx@linutronix.de>
Subject: Re: [Xen-devel] [PATCH 0/5] X86/XEN: Merge
 x86_init.paging.pagetable_setup_start and
 x86_init.paging.pagetable_setup_done PVOPS and document the semantic
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Aug 21, 2012 at 02:14:01AM +0100, Attilio Rao wrote:
> Currently the definition of x86_init.paging.pagetable_setup_start and
> x86_init.paging.pagetable_setup_done is twisted and not really well
> defined (in terms of prototypes desired). More specifically:
> pagetable_setup_start:
>  * it is a nop on x86_32
>  * it is a nop for the XEN case
>  * cleans up the boot time page table in the x86_64 case

Is it safe to call that a 'boot time page table' in the Xen case,
since that is what it would be doing? Did you test it with dom0 and a
PV guest, with 2GB, 3GB, 4GB, and 8GB layouts? I think those were
the ones that caught earlier mistakes.
> 
> pagetable_setup_done:
>  * it is a nop on x86_32
>  * sets up accessor functions for pagetable manipulation, for the
>    XEN case
>  * it is a nop on x86_64
> 
> Most of this logic can be skipped by creating a new PVOPS that can handle
> pagetable setup and pre/post operations on it.
> The new PVOPS must be called only once, during boot-time setup and
> after the direct mapping for physical memory is available.

Looks like you are missing the other crucial bit of information:
It removes two of the pvops and replaces them with just one.

> 
> Attilio Rao (5):
>   XEN: Remove the base argument from
>     x86_init.paging.pagetable_setup_done PVOPS
>   XEN: Remove the base argument from
>     x86_init.paging.pagetable_setup_start PVOPS
>   X86/XEN: Introduce the x86_init.paging.pagetable_init() PVOPS
>   X86/XEN: Retire now unused x86_init.paging.pagetable_setup_start and
>     x86_init.paging.pagetable_setup_done PVOPS
>   X86/XEN: Add few lines explaining simple semantic for
>     x86_init.paging.pagetable_init PVOPS
> 
>  arch/x86/include/asm/pgtable_types.h |    6 ++----
>  arch/x86/include/asm/x86_init.h      |   11 +++++++----
>  arch/x86/kernel/setup.c              |    4 +---
>  arch/x86/kernel/x86_init.c           |    4 +---
>  arch/x86/mm/init_32.c                |   12 ++++++------
>  arch/x86/xen/mmu.c                   |   18 +++++++-----------
>  6 files changed, 24 insertions(+), 31 deletions(-)
> 
> -- 
> 1.7.2.5

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Aug 21, 2012 at 02:14:01AM +0100, Attilio Rao wrote:
> Currently the definition of x86_init.paging.pagetable_setup_start and
> x86_init.paging.pagetable_setup_done is twisted and not really well
> defined (in terms of prototypes desired). More specifically:
> pagetable_setup_start:
>  * it is a nop on x86_32
>  * it is a nop for the XEN case
>  * cleans up the boot time page table in the x86_64 case

Is it safe to call that 'boot time page table' cleanup in the Xen case,
since that is what it would be doing? Did you test it with dom0 and a
PV guest, and with 2GB, 3GB, 4GB, and 8GB memory layouts? I think those
were the configurations that caught earlier mistakes.
> 
> pagetable_setup_done:
>  * it is a nop on x86_32
>  * sets up accessor functions for pagetable manipulation, for the
>    XEN case
>  * it is a nop on x86_64
> 
> Most of this logic can be skipped by creating a new PVOPS that can handle
> pagetable setup and pre/post operations on it.
> The new PVOPS must be called only once, during boot-time setup and
> after the direct mapping for physical memory is available.

Looks like you are missing the other crucial bit of information:
It removes two of the pvops and replaces them with just one.

> 
> Attilio Rao (5):
>   XEN: Remove the base argument from
>     x86_init.paging.pagetable_setup_done PVOPS
>   XEN: Remove the base argument from
>     x86_init.paging.pagetable_setup_start PVOPS
>   X86/XEN: Introduce the x86_init.paging.pagetable_init() PVOPS
>   X86/XEN: Retire now unused x86_init.paging.pagetable_setup_start and
>     x86_init.paging.pagetable_setup_done PVOPS
>   X86/XEN: Add few lines explaining simple semantic for
>     x86_init.paging.pagetable_init PVOPS
> 
>  arch/x86/include/asm/pgtable_types.h |    6 ++----
>  arch/x86/include/asm/x86_init.h      |   11 +++++++----
>  arch/x86/kernel/setup.c              |    4 +---
>  arch/x86/kernel/x86_init.c           |    4 +---
>  arch/x86/mm/init_32.c                |   12 ++++++------
>  arch/x86/xen/mmu.c                   |   18 +++++++-----------
>  6 files changed, 24 insertions(+), 31 deletions(-)
> 
> -- 
> 1.7.2.5

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 15:07:42 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 15:07:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3q3E-0004i3-Lm; Tue, 21 Aug 2012 15:07:28 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T3q3D-0004hs-6t
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 15:07:27 +0000
Received: from [85.158.138.51:28323] by server-11.bemta-3.messagelabs.com id
	C5/A6-23152-E24A3305; Tue, 21 Aug 2012 15:07:26 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-14.tower-174.messagelabs.com!1345561644!23099216!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNzAyMzU2\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4193 invoked from network); 21 Aug 2012 15:07:25 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-14.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 21 Aug 2012 15:07:25 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7LF7KT5019205
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 21 Aug 2012 15:07:22 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7LF7CAj029749
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 21 Aug 2012 15:07:12 GMT
Received: from abhmt115.oracle.com (abhmt115.oracle.com [141.146.116.67])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7LF7BKl030219; Tue, 21 Aug 2012 10:07:12 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 21 Aug 2012 08:07:11 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 296684031E; Tue, 21 Aug 2012 10:57:13 -0400 (EDT)
Date: Tue, 21 Aug 2012 10:57:13 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20120821145713.GG20289@phenom.dumpdata.com>
References: <1345133009-21941-1-git-send-email-konrad.wilk@oracle.com>
	<1345133009-21941-12-git-send-email-konrad.wilk@oracle.com>
	<alpine.DEB.2.02.1208211514130.15568@kaball.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.02.1208211514130.15568@kaball.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: [Xen-devel] [PATCH 11/11] xen/mmu: Release just the MFN list,
 not MFN list and part of pagetables.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Aug 21, 2012 at 03:18:26PM +0100, Stefano Stabellini wrote:
> On Thu, 16 Aug 2012, Konrad Rzeszutek Wilk wrote:
> > We call memblock_reserve for [start of mfn list] -> [PMD aligned end
> > of mfn list] instead of <start of mfn list> -> <page aligned end of mfn list].
> > 
> > This has the disastrous effect that if at bootup the end of mfn_list is
> > not PMD aligned we end up returning to memblock parts of the region
> > past the mfn_list array. And those parts are the PTE tables with
> > the disastrous effect of seeing this at bootup:
> 
> This patch looks wrong to me.

It's easier to see if you apply the patch to the code. The 'size ='
assignment was actually also done earlier.
> 
> Aren't you changing the way mfn_list is reserved using memblock in patch
> #3? Moreover it really seems to me that you are PAGE_ALIGN'ing size
> rather than PMD_ALIGN'ing it there.

Correct. That is the proper way of doing it. We want PMD_ALIGN for xen_cleanhighmap
to remove the pesky virtual addresses, but we want PAGE_ALIGN (so it matches
exactly the way memblock_reserve was called) for memblock_free.
> 
> 
> > Write protecting the kernel read-only data: 10240k
> > Freeing unused kernel memory: 1860k freed
> > Freeing unused kernel memory: 200k freed
> > (XEN) mm.c:2429:d0 Bad type (saw 1400000000000002 != exp 7000000000000000) for mfn 116a80 (pfn 14e26)
> > ...
> > (XEN) mm.c:908:d0 Error getting mfn 116a83 (pfn 14e2a) from L1 entry 8000000116a83067 for l1e_owner=0, pg_owner=0
> > (XEN) mm.c:908:d0 Error getting mfn 4040 (pfn 5555555555555555) from L1 entry 0000000004040601 for l1e_owner=0, pg_owner=0
> > .. and so on.
> >
> > Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > ---
> >  arch/x86/xen/mmu.c |    2 +-
> >  1 files changed, 1 insertions(+), 1 deletions(-)
> > 
> > diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
> > index 5a880b8..6019c22 100644
> > --- a/arch/x86/xen/mmu.c
> > +++ b/arch/x86/xen/mmu.c
> > @@ -1227,7 +1227,6 @@ static void __init xen_pagetable_setup_done(pgd_t *base)
> >  			/* We should be in __ka space. */
> >  			BUG_ON(xen_start_info->mfn_list < __START_KERNEL_map);
> >  			addr = xen_start_info->mfn_list;
> > -			size = PAGE_ALIGN(xen_start_info->nr_pages * sizeof(unsigned long));
> >  			/* We roundup to the PMD, which means that if anybody at this stage is
> >  			 * using the __ka address of xen_start_info or xen_start_info->shared_info
> >  			 * they are in going to crash. Fortunatly we have already revectored
> > @@ -1235,6 +1234,7 @@ static void __init xen_pagetable_setup_done(pgd_t *base)
> >  			size = roundup(size, PMD_SIZE);
> >  			xen_cleanhighmap(addr, addr + size);
> >  
> > +			size = PAGE_ALIGN(xen_start_info->nr_pages * sizeof(unsigned long));
> >  			memblock_free(__pa(xen_start_info->mfn_list), size);
> >  			/* And revector! Bye bye old array */
> >  			xen_start_info->mfn_list = new_mfn_list;
> > -- 
> > 1.7.7.6
> > 
> > 
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
> > 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 15:18:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 15:18:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3qE7-00054k-44; Tue, 21 Aug 2012 15:18:43 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yongjie.ren@intel.com>) id 1T3qE5-00054b-TX
	for xen-devel@lists.xen.org; Tue, 21 Aug 2012 15:18:42 +0000
Received: from [85.158.143.99:17742] by server-2.bemta-4.messagelabs.com id
	5C/BB-21239-1D6A3305; Tue, 21 Aug 2012 15:18:41 +0000
X-Env-Sender: yongjie.ren@intel.com
X-Msg-Ref: server-14.tower-216.messagelabs.com!1345562320!19688426!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzEwMTAz\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29594 invoked from network); 21 Aug 2012 15:18:40 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-14.tower-216.messagelabs.com with SMTP;
	21 Aug 2012 15:18:40 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga101.jf.intel.com with ESMTP; 21 Aug 2012 08:14:38 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.77,802,1336374000"; d="scan'208";a="189329589"
Received: from fmsmsx106.amr.corp.intel.com ([10.19.9.37])
	by orsmga002.jf.intel.com with ESMTP; 21 Aug 2012 08:14:32 -0700
Received: from fmsmsx151.amr.corp.intel.com (10.19.17.220) by
	FMSMSX106.amr.corp.intel.com (10.19.9.37) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Tue, 21 Aug 2012 08:14:29 -0700
Received: from shsmsx102.ccr.corp.intel.com (10.239.4.154) by
	FMSMSX151.amr.corp.intel.com (10.19.17.220) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Tue, 21 Aug 2012 08:14:29 -0700
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.89]) by
	SHSMSX102.ccr.corp.intel.com ([169.254.2.92]) with mapi id
	14.01.0355.002; Tue, 21 Aug 2012 23:14:28 +0800
From: "Ren, Yongjie" <yongjie.ren@intel.com>
To: Ian Campbell <Ian.Campbell@citrix.com>, xen-devel
	<xen-devel@lists.xen.org>, xen-users <xen-users@lists.xen.org>
Thread-Topic: [Xen-devel] Xen 4.2 TODO / Release Plan
Thread-Index: AQHNfrToCh14keoOOEq7wdrohrpVCZdkWMOw
Date: Tue, 21 Aug 2012 15:14:27 +0000
Message-ID: <1B4B44D9196EFF41AE41FDA404FC0A101674E6@SHSMSX101.ccr.corp.intel.com>
References: <1345454240.28762.18.camel@zakaz.uk.xensource.com>
In-Reply-To: <1345454240.28762.18.camel@zakaz.uk.xensource.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Subject: Re: [Xen-devel] Xen 4.2 TODO / Release Plan
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: xen-devel-bounces@lists.xen.org
> [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Ian Campbell
> Sent: Monday, August 20, 2012 5:17 PM
> To: xen-devel; xen-users
> Subject: [Xen-devel] Xen 4.2 TODO / Release Plan
> 
>     * [BUG] long stop during the guest boot process with qcow image,
>       reported by Intel:
> http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1821
> 
After reverting the "O_DIRECT for IDE" patch in the qemu-xen tree, it works as well as it did before the bug.
But there is still some stall time (about 8~10s) after GRUB loads for a RHEL6.x guest.
I found that even an old changeset, CS #23124 (about a year old), shows the same phenomenon.
Currently, a RHEL6.2 or RHEL6.3 guest can boot up in 30~40s (for either a RAW or QCOW2 image).
And a RHEL5.5 guest (which has no stall issue) boots up in 50~60s.

I also commented in the bugzilla; you can have a look there for more details.

Do you still want to track or fix this old issue? If not, I would like to mark it as "fixed and verified".

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 15:23:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 15:23:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3qIQ-0005DM-VC; Tue, 21 Aug 2012 15:23:10 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <tglx@linutronix.de>) id 1T3qIO-0005DE-PA
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 15:23:08 +0000
Received: from [85.158.143.99:39020] by server-3.bemta-4.messagelabs.com id
	5F/CE-09529-CD7A3305; Tue, 21 Aug 2012 15:23:08 +0000
X-Env-Sender: tglx@linutronix.de
X-Msg-Ref: server-11.tower-216.messagelabs.com!1345562587!21783135!1
X-Originating-IP: [62.245.132.108]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27178 invoked from network); 21 Aug 2012 15:23:07 -0000
Received: from www.linutronix.de (HELO Galois.linutronix.de) (62.245.132.108)
	by server-11.tower-216.messagelabs.com with AES256-SHA encrypted
	SMTP; 21 Aug 2012 15:23:07 -0000
Received: from localhost ([127.0.0.1]) by Galois.linutronix.de with esmtps
	(TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32) (Exim 4.72)
	(envelope-from <tglx@linutronix.de>)
	id 1T3qI7-0006bl-Ev; Tue, 21 Aug 2012 17:22:51 +0200
Date: Tue, 21 Aug 2012 17:22:50 +0200 (CEST)
From: Thomas Gleixner <tglx@linutronix.de>
To: Attilio Rao <attilio.rao@citrix.com>
In-Reply-To: <1345511646-12427-1-git-send-email-attilio.rao@citrix.com>
Message-ID: <alpine.LFD.2.02.1208211721250.2856@ionos>
References: <1345511646-12427-1-git-send-email-attilio.rao@citrix.com>
User-Agent: Alpine 2.02 (LFD 1266 2009-07-14)
MIME-Version: 1.0
X-Linutronix-Spam-Score: -1.0
From xen-devel-bounces@lists.xen.org Tue Aug 21 15:23:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 15:23:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3qIQ-0005DM-VC; Tue, 21 Aug 2012 15:23:10 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <tglx@linutronix.de>) id 1T3qIO-0005DE-PA
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 15:23:08 +0000
Received: from [85.158.143.99:39020] by server-3.bemta-4.messagelabs.com id
	5F/CE-09529-CD7A3305; Tue, 21 Aug 2012 15:23:08 +0000
X-Env-Sender: tglx@linutronix.de
X-Msg-Ref: server-11.tower-216.messagelabs.com!1345562587!21783135!1
X-Originating-IP: [62.245.132.108]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27178 invoked from network); 21 Aug 2012 15:23:07 -0000
Received: from www.linutronix.de (HELO Galois.linutronix.de) (62.245.132.108)
	by server-11.tower-216.messagelabs.com with AES256-SHA encrypted
	SMTP; 21 Aug 2012 15:23:07 -0000
Received: from localhost ([127.0.0.1]) by Galois.linutronix.de with esmtps
	(TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32) (Exim 4.72)
	(envelope-from <tglx@linutronix.de>)
	id 1T3qI7-0006bl-Ev; Tue, 21 Aug 2012 17:22:51 +0200
Date: Tue, 21 Aug 2012 17:22:50 +0200 (CEST)
From: Thomas Gleixner <tglx@linutronix.de>
To: Attilio Rao <attilio.rao@citrix.com>
In-Reply-To: <1345511646-12427-1-git-send-email-attilio.rao@citrix.com>
Message-ID: <alpine.LFD.2.02.1208211721250.2856@ionos>
References: <1345511646-12427-1-git-send-email-attilio.rao@citrix.com>
User-Agent: Alpine 2.02 (LFD 1266 2009-07-14)
MIME-Version: 1.0
X-Linutronix-Spam-Score: -1.0
X-Linutronix-Spam-Level: -
X-Linutronix-Spam-Status: No , -1.0 points, 5.0 required, ALL_TRUSTED=-1,
	SHORTCIRCUIT=-0.0001
Cc: x86@kernel.org, Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	linux-kernel@vger.kernel.org, Ingo Molnar <mingo@redhat.com>,
	"H. Peter Anvin" <hpa@zytor.com>, xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [PATCH 0/5] X86/XEN: Merge
 x86_init.paging.pagetable_setup_start and
 x86_init.paging.pagetable_setup_done PVOPS and document the semantic
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 21 Aug 2012, Attilio Rao wrote:

> Currently the definitions of x86_init.paging.pagetable_setup_start and
> x86_init.paging.pagetable_setup_done are twisted and not really well
> defined (in terms of the desired prototypes). More specifically:
> pagetable_setup_start:
>  * it is a nop on x86_32
>  * it is a nop for the XEN case
>  * cleans up the boot time page table in the x86_64 case
> 
> pagetable_setup_done:
>  * it is a nop on x86_32
>  * sets up accessor functions for pagetable manipulation, for the
>    XEN case
>  * it is a nop on x86_64
> 
> Most of this logic can be avoided by creating a new PVOPS that handles
> pagetable setup and the pre/post operations on it.
> The new PVOPS must be called only once, during boot-time setup and
> after the direct mapping for physical memory is available.

Can you please refrain from naming that PVOPS? The setup function
pointers have nothing to do with PVOPS.

They are explicitly meant for platforms, and XEN is just another
platform, as are 32-bit and 64-bit.

Thanks,

	tglx

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 15:28:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 15:28:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3qMs-0005N3-L5; Tue, 21 Aug 2012 15:27:46 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T3qMr-0005Mt-Ft
	for xen-devel@lists.xen.org; Tue, 21 Aug 2012 15:27:45 +0000
Received: from [85.158.143.35:33643] by server-2.bemta-4.messagelabs.com id
	E4/E8-21239-0F8A3305; Tue, 21 Aug 2012 15:27:44 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1345562863!12318789!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk4OTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6738 invoked from network); 21 Aug 2012 15:27:44 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Aug 2012 15:27:44 -0000
X-IronPort-AV: E=Sophos;i="4.77,802,1336348800"; d="scan'208";a="14110053"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Aug 2012 15:27:23 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Tue, 21 Aug 2012 16:27:23 +0100
Message-ID: <1345562841.6821.94.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "Ren, Yongjie" <yongjie.ren@intel.com>
Date: Tue, 21 Aug 2012 16:27:21 +0100
In-Reply-To: <1B4B44D9196EFF41AE41FDA404FC0A101674E6@SHSMSX101.ccr.corp.intel.com>
References: <1345454240.28762.18.camel@zakaz.uk.xensource.com>
	<1B4B44D9196EFF41AE41FDA404FC0A101674E6@SHSMSX101.ccr.corp.intel.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.2 TODO / Release Plan
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Users list to BCC.

On Tue, 2012-08-21 at 16:14 +0100, Ren, Yongjie wrote:
> > -----Original Message-----
> > From: xen-devel-bounces@lists.xen.org
> > [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Ian Campbell
> > Sent: Monday, August 20, 2012 5:17 PM
> > To: xen-devel; xen-users
> > Subject: [Xen-devel] Xen 4.2 TODO / Release Plan
> > 
> >     * [BUG] long stop during the guest boot process with qcow image,
> >       reported by Intel:
> > http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1821
> > 
> After reverting the "O_DIRECT for IDE" patch in the qemu-xen tree, it works as well as it did before the bug.
> But there is still some stop time (about 8~10s) after loading GRUB for a RHEL6.x guest.
> I found that even an old changeset, CS #23124 (from about 1 year ago), shows the same phenomenon.

23124 here is e3d4c34b14a3112481b5e86ff2406cd1bb5e1548 which is some
sort of tools build fix. What is the long hash of the changeset you are
referring to?

> Currently, a RHEL6.2 or RHEL6.3 guest can boot up in 30~40s (with either a RAW or a QCOW2 image).
> And a RHEL5.5 guest (which has no stop-time issue) can boot up in 50~60s.
> 
> I also commented in the bugzilla entry; have a look there for more details.
> 
> Do you still want to track or fix this old issue?  If not, I want to mark it as "fixed and verified".

If you think the issue is fixed then I will mark it done on the todo
list.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 15:28:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 15:28:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3qNO-0005Q2-8s; Tue, 21 Aug 2012 15:28:18 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T3qNM-0005Pj-FC
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 15:28:16 +0000
Received: from [85.158.143.35:17566] by server-3.bemta-4.messagelabs.com id
	D8/36-09529-F09A3305; Tue, 21 Aug 2012 15:28:15 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1345562895!13441933!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk4OTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17755 invoked from network); 21 Aug 2012 15:28:15 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Aug 2012 15:28:15 -0000
X-IronPort-AV: E=Sophos;i="4.77,802,1336348800"; d="scan'208";a="14110072"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Aug 2012 15:28:14 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 21 Aug 2012 16:28:14 +0100
Date: Tue, 21 Aug 2012 16:27:53 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <20120821145713.GG20289@phenom.dumpdata.com>
Message-ID: <alpine.DEB.2.02.1208211626470.15568@kaball.uk.xensource.com>
References: <1345133009-21941-1-git-send-email-konrad.wilk@oracle.com>
	<1345133009-21941-12-git-send-email-konrad.wilk@oracle.com>
	<alpine.DEB.2.02.1208211514130.15568@kaball.uk.xensource.com>
	<20120821145713.GG20289@phenom.dumpdata.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 11/11] xen/mmu: Release just the MFN list,
 not MFN list and part of pagetables.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 21 Aug 2012, Konrad Rzeszutek Wilk wrote:
> On Tue, Aug 21, 2012 at 03:18:26PM +0100, Stefano Stabellini wrote:
> > On Thu, 16 Aug 2012, Konrad Rzeszutek Wilk wrote:
> > > We call memblock_reserve for [start of mfn list] -> [PMD aligned end
> > > of mfn list] instead of [start of mfn list] -> [page aligned end of mfn list].
> > > 
> > > This has the disastrous effect that if at bootup the end of mfn_list is
> > > not PMD aligned, we end up returning to memblock parts of the region
> > > past the mfn_list array. Those parts are the PTE tables, with the
> > > result that we see this at bootup:
> > 
> > This patch looks wrong to me.
> 
> It's easier to see if you stick the patch in the code. The size = part
> was actually also done earlier.

Yes, you are right, I see that now.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 15:39:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 15:39:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3qY6-0005hf-Fy; Tue, 21 Aug 2012 15:39:22 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ben.guthro@gmail.com>) id 1T3qY5-0005ha-30
	for xen-devel@lists.xen.org; Tue, 21 Aug 2012 15:39:21 +0000
Received: from [85.158.143.99:44117] by server-3.bemta-4.messagelabs.com id
	FD/77-09529-7ABA3305; Tue, 21 Aug 2012 15:39:19 +0000
X-Env-Sender: ben.guthro@gmail.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1345563557!22284980!1
X-Originating-IP: [209.85.213.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7519 invoked from network); 21 Aug 2012 15:39:18 -0000
Received: from mail-yw0-f45.google.com (HELO mail-yw0-f45.google.com)
	(209.85.213.45)
	by server-6.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Aug 2012 15:39:18 -0000
Received: by yhpp34 with SMTP id p34so6676103yhp.32
	for <xen-devel@lists.xen.org>; Tue, 21 Aug 2012 08:39:17 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=KW8GwaBdP6u6f32kr8m0hNw6GUztG5N4SNOU9dS4Ohw=;
	b=On94dqZ21iAF4hukQYZnzI1gZCwdFzjo8HTdGT7rtr8zKcg+bi9B7Mm9OEC9D8CRta
	R0Hjj5pp7/aMA5l6d9MMY529huAaEQHLbhvxBm+JixACqEQdEJR161ni98OV2nojNNb0
	CX5eideqbhJJBHN2MtY9sxbaBRvRKT/Q3I452suuIKRfIXhwvWzRiojEl+LBgxWWMptV
	IjrevM2A/A1F3pulJ0McpXJAjEfBdzhirXDNNv3y/KHr1rAq3elhWtw5IMekrur3R8Br
	t6cRHM8N/MR9DsLRHZyeEyKX1oYy5pRClBnSbmvk0cBVE2uwH+LzK4054N9vQm4miUq2
	WMFQ==
MIME-Version: 1.0
Received: by 10.58.211.71 with SMTP id na7mr15081670vec.39.1345563557051; Tue,
	21 Aug 2012 08:39:17 -0700 (PDT)
Received: by 10.58.127.232 with HTTP; Tue, 21 Aug 2012 08:39:16 -0700 (PDT)
In-Reply-To: <1345454240.28762.18.camel@zakaz.uk.xensource.com>
References: <1345454240.28762.18.camel@zakaz.uk.xensource.com>
Date: Tue, 21 Aug 2012 11:39:16 -0400
X-Google-Sender-Auth: npuNZzIBPz3I-P7MpQ3HJ6_gLhQ
Message-ID: <CAOvdn6WM96ne9uTRuZN1Exgp7MMOUmmsQWrV8KJ8GZaumMG_Fw@mail.gmail.com>
From: Ben Guthro <ben@guthro.net>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.2 TODO / Release Plan
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Aug 20, 2012 at 5:17 AM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>
>     * S3 regression(s?) reported by Ben Guthro (Ben & Jan Beulich)
>

No significant progress made on this since the last update.
More questions raised than answers found. I'm having trouble making
sense of the current test results.

More debugging is needed. Jan is working on a debug patch to give to me.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 15:42:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 15:42:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3qau-0005pP-DE; Tue, 21 Aug 2012 15:42:16 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <tglx@linutronix.de>) id 1T3qar-0005nd-Rl
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 15:42:14 +0000
X-Env-Sender: tglx@linutronix.de
X-Msg-Ref: server-10.tower-27.messagelabs.com!1345563720!5264138!1
X-Originating-IP: [62.245.132.108]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3896 invoked from network); 21 Aug 2012 15:42:00 -0000
Received: from www.linutronix.de (HELO Galois.linutronix.de) (62.245.132.108)
	by server-10.tower-27.messagelabs.com with AES256-SHA encrypted SMTP;
	21 Aug 2012 15:42:00 -0000
Received: from localhost ([127.0.0.1]) by Galois.linutronix.de with esmtps
	(TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32) (Exim 4.72)
	(envelope-from <tglx@linutronix.de>)
	id 1T3qaR-0006pm-Ii; Tue, 21 Aug 2012 17:41:47 +0200
Date: Tue, 21 Aug 2012 17:41:46 +0200 (CEST)
From: Thomas Gleixner <tglx@linutronix.de>
To: Attilio Rao <attilio.rao@citrix.com>
In-Reply-To: <1345511646-12427-3-git-send-email-attilio.rao@citrix.com>
Message-ID: <alpine.LFD.2.02.1208211740340.2856@ionos>
References: <1345511646-12427-1-git-send-email-attilio.rao@citrix.com>
	<1345511646-12427-3-git-send-email-attilio.rao@citrix.com>
User-Agent: Alpine 2.02 (LFD 1266 2009-07-14)
MIME-Version: 1.0
X-Linutronix-Spam-Score: -1.0
X-Linutronix-Spam-Level: -
X-Linutronix-Spam-Status: No , -1.0 points, 5.0 required, ALL_TRUSTED=-1,
	SHORTCIRCUIT=-0.0001
Cc: x86@kernel.org, Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	linux-kernel@vger.kernel.org, Ingo Molnar <mingo@redhat.com>,
	"H. Peter Anvin" <hpa@zytor.com>, xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [PATCH 2/5] XEN: Remove the base argument from
 x86_init.paging.pagetable_setup_start PVOPS
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 21 Aug 2012, Attilio Rao wrote:
> diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c
> index 1019156..7999cef 100644
> --- a/arch/x86/mm/init_32.c
> +++ b/arch/x86/mm/init_32.c
> @@ -445,14 +445,17 @@ static inline void permanent_kmaps_init(pgd_t *pgd_base)
>  }
>  #endif /* CONFIG_HIGHMEM */
>  
> -void __init native_pagetable_setup_start(pgd_t *base)
> +void __init native_pagetable_setup_start(void)
>  {
>  	unsigned long pfn, va;
> +	pgd_t *base;
>  	pgd_t *pgd;

	pgd_t *pgd, *base = swapper_pg_dir;

Please. No need to add 5 lines just for this.

Thanks,

	tglx



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 15:44:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 15:44:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3qd1-0005yR-Bi; Tue, 21 Aug 2012 15:44:27 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <tglx@linutronix.de>) id 1T3qcz-0005yJ-FE
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 15:44:25 +0000
Received: from [85.158.139.83:62657] by server-2.bemta-5.messagelabs.com id
	47/10-10142-8DCA3305; Tue, 21 Aug 2012 15:44:24 +0000
X-Env-Sender: tglx@linutronix.de
X-Msg-Ref: server-8.tower-182.messagelabs.com!1345563864!18043121!1
X-Originating-IP: [62.245.132.108]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17842 invoked from network); 21 Aug 2012 15:44:24 -0000
Received: from www.linutronix.de (HELO Galois.linutronix.de) (62.245.132.108)
	by server-8.tower-182.messagelabs.com with AES256-SHA encrypted SMTP;
	21 Aug 2012 15:44:24 -0000
Received: from localhost ([127.0.0.1]) by Galois.linutronix.de with esmtps
	(TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32) (Exim 4.72)
	(envelope-from <tglx@linutronix.de>)
	id 1T3qco-0006rR-P9; Tue, 21 Aug 2012 17:44:14 +0200
Date: Tue, 21 Aug 2012 17:44:13 +0200 (CEST)
From: Thomas Gleixner <tglx@linutronix.de>
To: Attilio Rao <attilio.rao@citrix.com>
In-Reply-To: <1345511646-12427-4-git-send-email-attilio.rao@citrix.com>
Message-ID: <alpine.LFD.2.02.1208211722560.2856@ionos>
References: <1345511646-12427-1-git-send-email-attilio.rao@citrix.com>
	<1345511646-12427-4-git-send-email-attilio.rao@citrix.com>
User-Agent: Alpine 2.02 (LFD 1266 2009-07-14)
MIME-Version: 1.0
X-Linutronix-Spam-Score: -1.0
X-Linutronix-Spam-Level: -
X-Linutronix-Spam-Status: No , -1.0 points, 5.0 required, ALL_TRUSTED=-1,
	SHORTCIRCUIT=-0.0001
Cc: x86@kernel.org, Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	linux-kernel@vger.kernel.org, Ingo Molnar <mingo@redhat.com>,
	"H. Peter Anvin" <hpa@zytor.com>, xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [PATCH 3/5] X86/XEN: Introduce the
 x86_init.paging.pagetable_init PVOPS
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 21 Aug 2012, Attilio Rao wrote:

> This new PVOPS is responsible to setup the kernel pagetables and
> replace entirely x86_init.paging.pagetable_setup_start and
> x86_init.paging.pagetable_setup_done PVOPS work.
 
> For performance the x86_64 stub is implemented as a macro to paging_init()
> rather than an actual function stub.

Huch, using a macro for a once-per-boot call is really a massive
performance improvement.

It's confusing and wrong. You just use a macro because x86_64 does not
need any extra setup aside from paging_init().
 
> diff --git a/arch/x86/kernel/x86_init.c b/arch/x86/kernel/x86_init.c
> index 849be14..c1e910a 100644
> --- a/arch/x86/kernel/x86_init.c
> +++ b/arch/x86/kernel/x86_init.c
> @@ -68,6 +68,7 @@ struct x86_init_ops x86_init __initdata = {
>  	},
>  
>  	.paging = {
> +		.pagetable_init		= native_pagetable_init,

I'd prefer to see these patches implemented differently.

 #1 Remove the base argument from pagetable_setup_start (leave
    pagetable_setup_done() alone).

 #2 Rename pagetable_setup_start to pagetable_init,
    native_pagetable_setup_start to native_pagetable_init and
    xen_pagetable_setup_start to xen_pagetable_init
 
 #3 Instead of copying the whole native_pagetable_setup_start()
    function and deleting it later, move the paging_init() call from
    setup.c to native_pagetable_init() and xen_pagetable_init()
    and define native_pagetable_init as paging_init() for x86_64

 #4 Move the code from xen_pagetable_setup_done() into
    xen_pagetable_init() and remove the now unused
    pagetable_setup_done().

That's less code shuffling and pointless copying which makes the
review way easier.

Thanks,

	tglx

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 15:49:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 15:49:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3qi4-0006B7-3p; Tue, 21 Aug 2012 15:49:40 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yongjie.ren@intel.com>) id 1T3qi2-0006B2-N3
	for xen-devel@lists.xen.org; Tue, 21 Aug 2012 15:49:38 +0000
Received: from [85.158.143.35:58173] by server-3.bemta-4.messagelabs.com id
	45/66-09529-21EA3305; Tue, 21 Aug 2012 15:49:38 +0000
X-Env-Sender: yongjie.ren@intel.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1345564176!16979390!1
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDMzODYxNw==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29362 invoked from network); 21 Aug 2012 15:49:37 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-6.tower-21.messagelabs.com with SMTP;
	21 Aug 2012 15:49:37 -0000
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
	by fmsmga102.fm.intel.com with ESMTP; 21 Aug 2012 08:49:36 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.77,802,1336374000"; d="scan'208";a="211722595"
Received: from fmsmsx104.amr.corp.intel.com ([10.19.9.35])
	by fmsmga002.fm.intel.com with ESMTP; 21 Aug 2012 08:49:35 -0700
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	FMSMSX104.amr.corp.intel.com (10.19.9.35) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Tue, 21 Aug 2012 08:49:35 -0700
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.89]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.175]) with mapi id
	14.01.0355.002; Tue, 21 Aug 2012 23:49:34 +0800
From: "Ren, Yongjie" <yongjie.ren@intel.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Thread-Topic: [Xen-devel] Xen 4.2 TODO / Release Plan
Thread-Index: AQHNfrToCh14keoOOEq7wdrohrpVCZdkWMOw//+GoICAAIlekA==
Date: Tue, 21 Aug 2012 15:49:33 +0000
Message-ID: <1B4B44D9196EFF41AE41FDA404FC0A101675C4@SHSMSX101.ccr.corp.intel.com>
References: <1345454240.28762.18.camel@zakaz.uk.xensource.com>
	<1B4B44D9196EFF41AE41FDA404FC0A101674E6@SHSMSX101.ccr.corp.intel.com>
	<1345562841.6821.94.camel@zakaz.uk.xensource.com>
In-Reply-To: <1345562841.6821.94.camel@zakaz.uk.xensource.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.2 TODO / Release Plan
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Ian Campbell [mailto:Ian.Campbell@citrix.com]
> Sent: Tuesday, August 21, 2012 11:27 PM
> To: Ren, Yongjie
> Cc: xen-devel
> Subject: Re: [Xen-devel] Xen 4.2 TODO / Release Plan
> 
> Users list to BCC.
> 
> On Tue, 2012-08-21 at 16:14 +0100, Ren, Yongjie wrote:
> > > -----Original Message-----
> > > From: xen-devel-bounces@lists.xen.org
> > > [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Ian Campbell
> > > Sent: Monday, August 20, 2012 5:17 PM
> > > To: xen-devel; xen-users
> > > Subject: [Xen-devel] Xen 4.2 TODO / Release Plan
> > >
> > >     * [BUG] long stop during the guest boot process with qcow
> image,
> > >       reported by Intel:
> > > http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1821
> > >
> > After reverting the "O_DIRECT for IDE" patch in qemu-xen tree, it works
> as fine as that before the bug.
> > But it still have some stop time (about 8~10s) after loading the Grub for a
> RHEL6.x guest.
> > I found even an old CS #23124 (about 1 year ago) also has the same
> phenomenon.
> 
> 23124 here is e3d4c34b14a3112481b5e86ff2406cd1bb5e1548 which is
> some
> sort of tools build fix. What is the long hash of the changeset you are
> referring to?
> 
Yes, it's "changeset:  23124:e3d4c34b14a3".
That changeset was picked at random just to confirm that the stop time (8~10s) has already existed for a long time.

> > Currently, RHEL6.2 or RHEL6.3 guest can bootup in 30~40s (for either
> RAW or QCOW2 image).
> > And, RHEL5.5 guest (which has no stop time issue) can bootup in 50~60s.
> >
> > I also commented in the bugzilla. You can also have a look for more
> details.
> >
> > Will you want still track or fix this old issue ?  If not, I want to marked it
> as "fixed and verified".
> 
> If you think the issue is fixed then I will mark it done on the todo
> list.
> 
I think so. The regression causing the long stop time (about 50s) during bootup with a qcow2 image has already been fixed.
Now, 30s for a guest to boot up is acceptable from my viewpoint.
If the 8~10s stop time really matters, we can start another thread to discuss it.
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 15:53:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 15:53:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3qlM-0006MY-DU; Tue, 21 Aug 2012 15:53:04 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <ehabkost@redhat.com>) id 1T3qb0-0005pj-De
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 15:42:22 +0000
X-Env-Sender: ehabkost@redhat.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1345563727!6434030!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQ0MzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30524 invoked from network); 21 Aug 2012 15:42:08 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-5.tower-27.messagelabs.com with SMTP;
	21 Aug 2012 15:42:08 -0000
Received: from int-mx02.intmail.prod.int.phx2.redhat.com
	(int-mx02.intmail.prod.int.phx2.redhat.com [10.5.11.12])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id q7LFfw6p023820
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 21 Aug 2012 11:41:58 -0400
Received: from blackpad.lan.raisama.net (vpn1-6-72.gru2.redhat.com
	[10.97.6.72])
	by int-mx02.intmail.prod.int.phx2.redhat.com (8.13.8/8.13.8) with ESMTP
	id q7LFfvIx005688; Tue, 21 Aug 2012 11:41:57 -0400
Received: by blackpad.lan.raisama.net (Postfix, from userid 500)
	id 27CE8200CDE; Tue, 21 Aug 2012 12:43:07 -0300 (BRT)
From: Eduardo Habkost <ehabkost@redhat.com>
To: qemu-devel@nongnu.org
Date: Tue, 21 Aug 2012 12:42:54 -0300
Message-Id: <1345563782-11224-1-git-send-email-ehabkost@redhat.com>
X-Scanned-By: MIMEDefang 2.67 on 10.5.11.12
X-Mailman-Approved-At: Tue, 21 Aug 2012 15:52:59 +0000
Cc: peter.maydell@linaro.org, jan.kiszka@siemens.com, mjt@tls.msk.ru,
	mdroth@linux.vnet.ibm.com, blauwirbel@gmail.com,
	kraxel@redhat.com, xen-devel@lists.xensource.com,
	i.mitsyanko@samsung.com, armbru@redhat.com, avi@redhat.com,
	anthony.perard@citrix.com, lersek@redhat.com,
	stefanha@linux.vnet.ibm.com, stefano.stabellini@eu.citrix.com,
	sw@weilnetz.de, imammedo@redhat.com, lcapitulino@redhat.com,
	rth@twiddle.net, kwolf@redhat.com, aliguori@us.ibm.com,
	mtosatti@redhat.com, pbonzini@redhat.com, afaerber@suse.de
Subject: [Xen-devel] [RFC 0/8] include qdev core in *-user,
	make CPU child of DeviceState
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

So, here's a third suggestion to the CPU/DeviceState problem. Basically I split
the qdev code into a core (that can be easily compiled into *-user), and a part
specific to qemu-system-*.

There are two remaining parts that forced me to use #ifdefs in the "core" .c
files:

 - vmstate handling on qdev_init()
 - the qemu_register_reset() hack

If we address those two issues inside qemu-system-* (instead of inside the qdev
core code), using DeviceState inside *-user shouldn't be a big problem anymore.


Anthony Liguori (1):
  qdev: split up header so it can be used in cpu.h

Eduardo Habkost (3):
  split qdev into a core and code used only by qemu-system-*
  qdev: use full qdev.h include path on qdev*.c
  include core qdev code into *-user, too

Igor Mammedov (4):
  move qemu_irq typedef out of cpu-common.h
  qapi-types.h doesn't really need to include qemu-common.h
  cleanup error.h, included qapi-types.h aready has stdbool.h
  make CPU a child of DeviceState

 Makefile.objs                                   |   1 +
 error.h                                         |   1 -
 hw/Makefile.objs                                |   3 +-
 hw/arm-misc.h                                   |   1 +
 hw/bt.h                                         |   2 +
 hw/devices.h                                    |   2 +
 hw/irq.h                                        |   2 +
 hw/mc146818rtc.c                                |   1 +
 hw/omap.h                                       |   1 +
 hw/qdev-addr.c                                  |   1 +
 hw/qdev-core.h                                  | 240 +++++++++++++++
 hw/qdev-monitor.h                               |  16 +
 hw/qdev-properties-system.c                     | 329 +++++++++++++++++++++
 hw/qdev-properties.h                            | 129 ++++++++
 hw/qdev-system.c                                |  93 ++++++
 hw/qdev.h                                       | 371 +-----------------------
 hw/soc_dma.h                                    |   1 +
 hw/xen.h                                        |   1 +
 include/qemu/cpu.h                              |   6 +-
 qemu-common.h                                   |   1 -
 qom/Makefile.objs                               |   2 +-
 qom/cpu.c                                       |   3 +-
 hw/qdev-properties.c => qom/device-properties.c | 323 +--------------------
 hw/qdev.c => qom/device.c                       | 106 +------
 scripts/qapi-types.py                           |   2 +-
 sysemu.h                                        |   1 +
 26 files changed, 849 insertions(+), 790 deletions(-)
 create mode 100644 hw/qdev-core.h
 create mode 100644 hw/qdev-monitor.h
 create mode 100644 hw/qdev-properties-system.c
 create mode 100644 hw/qdev-properties.h
 create mode 100644 hw/qdev-system.c
 rename hw/qdev-properties.c => qom/device-properties.c (75%)
 rename hw/qdev.c => qom/device.c (89%)

-- 
1.7.11.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 15:53:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 15:53:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3qlM-0006MY-DU; Tue, 21 Aug 2012 15:53:04 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <ehabkost@redhat.com>) id 1T3qb0-0005pj-De
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 15:42:22 +0000
X-Env-Sender: ehabkost@redhat.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1345563727!6434030!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQ0MzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30524 invoked from network); 21 Aug 2012 15:42:08 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-5.tower-27.messagelabs.com with SMTP;
	21 Aug 2012 15:42:08 -0000
Received: from int-mx02.intmail.prod.int.phx2.redhat.com
	(int-mx02.intmail.prod.int.phx2.redhat.com [10.5.11.12])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id q7LFfw6p023820
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 21 Aug 2012 11:41:58 -0400
Received: from blackpad.lan.raisama.net (vpn1-6-72.gru2.redhat.com
	[10.97.6.72])
	by int-mx02.intmail.prod.int.phx2.redhat.com (8.13.8/8.13.8) with ESMTP
	id q7LFfvIx005688; Tue, 21 Aug 2012 11:41:57 -0400
Received: by blackpad.lan.raisama.net (Postfix, from userid 500)
	id 27CE8200CDE; Tue, 21 Aug 2012 12:43:07 -0300 (BRT)
From: Eduardo Habkost <ehabkost@redhat.com>
To: qemu-devel@nongnu.org
Date: Tue, 21 Aug 2012 12:42:54 -0300
Message-Id: <1345563782-11224-1-git-send-email-ehabkost@redhat.com>
X-Scanned-By: MIMEDefang 2.67 on 10.5.11.12
X-Mailman-Approved-At: Tue, 21 Aug 2012 15:52:59 +0000
Cc: peter.maydell@linaro.org, jan.kiszka@siemens.com, mjt@tls.msk.ru,
	mdroth@linux.vnet.ibm.com, blauwirbel@gmail.com,
	kraxel@redhat.com, xen-devel@lists.xensource.com,
	i.mitsyanko@samsung.com, armbru@redhat.com, avi@redhat.com,
	anthony.perard@citrix.com, lersek@redhat.com,
	stefanha@linux.vnet.ibm.com, stefano.stabellini@eu.citrix.com,
	sw@weilnetz.de, imammedo@redhat.com, lcapitulino@redhat.com,
	rth@twiddle.net, kwolf@redhat.com, aliguori@us.ibm.com,
	mtosatti@redhat.com, pbonzini@redhat.com, afaerber@suse.de
Subject: [Xen-devel] [RFC 0/8] include qdev core in *-user,
	make CPU child of DeviceState
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

So, here's a third suggestion for the CPU/DeviceState problem. Basically I split
the qdev code into a core (that can be easily compiled into *-user), and a part
specific to qemu-system-*.

There are two remaining parts that forced me to use #ifdefs in the "core" .c
files:

 - vmstate handling on qdev_init()
 - the qemu_register_reset() hack

If we address those two issues inside qemu-system-* (instead of inside the qdev
core code), using DeviceState inside *-user shouldn't be a big problem anymore.


Anthony Liguori (1):
  qdev: split up header so it can be used in cpu.h

Eduardo Habkost (3):
  split qdev into a core and code used only by qemu-system-*
  qdev: use full qdev.h include path on qdev*.c
  include core qdev code into *-user, too

Igor Mammedov (4):
  move qemu_irq typedef out of cpu-common.h
  qapi-types.h doesn't really need to include qemu-common.h
  cleanup error.h, included qapi-types.h already has stdbool.h
  make CPU a child of DeviceState

 Makefile.objs                                   |   1 +
 error.h                                         |   1 -
 hw/Makefile.objs                                |   3 +-
 hw/arm-misc.h                                   |   1 +
 hw/bt.h                                         |   2 +
 hw/devices.h                                    |   2 +
 hw/irq.h                                        |   2 +
 hw/mc146818rtc.c                                |   1 +
 hw/omap.h                                       |   1 +
 hw/qdev-addr.c                                  |   1 +
 hw/qdev-core.h                                  | 240 +++++++++++++++
 hw/qdev-monitor.h                               |  16 +
 hw/qdev-properties-system.c                     | 329 +++++++++++++++++++++
 hw/qdev-properties.h                            | 129 ++++++++
 hw/qdev-system.c                                |  93 ++++++
 hw/qdev.h                                       | 371 +-----------------------
 hw/soc_dma.h                                    |   1 +
 hw/xen.h                                        |   1 +
 include/qemu/cpu.h                              |   6 +-
 qemu-common.h                                   |   1 -
 qom/Makefile.objs                               |   2 +-
 qom/cpu.c                                       |   3 +-
 hw/qdev-properties.c => qom/device-properties.c | 323 +--------------------
 hw/qdev.c => qom/device.c                       | 106 +------
 scripts/qapi-types.py                           |   2 +-
 sysemu.h                                        |   1 +
 26 files changed, 849 insertions(+), 790 deletions(-)
 create mode 100644 hw/qdev-core.h
 create mode 100644 hw/qdev-monitor.h
 create mode 100644 hw/qdev-properties-system.c
 create mode 100644 hw/qdev-properties.h
 create mode 100644 hw/qdev-system.c
 rename hw/qdev-properties.c => qom/device-properties.c (75%)
 rename hw/qdev.c => qom/device.c (89%)

-- 
1.7.11.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 15:53:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 15:53:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3qlM-0006MI-19; Tue, 21 Aug 2012 15:53:04 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ehabkost@redhat.com>) id 1T3qav-0005pl-UU
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 15:42:18 +0000
Received: from [85.158.143.35:23683] by server-1.bemta-4.messagelabs.com id
	DC/CF-07754-95CA3305; Tue, 21 Aug 2012 15:42:17 +0000
X-Env-Sender: ehabkost@redhat.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1345563731!6813400!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQ0MzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18427 invoked from network); 21 Aug 2012 15:42:12 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-9.tower-21.messagelabs.com with SMTP;
	21 Aug 2012 15:42:12 -0000
Received: from int-mx11.intmail.prod.int.phx2.redhat.com
	(int-mx11.intmail.prod.int.phx2.redhat.com [10.5.11.24])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id q7LFfxCL017854
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 21 Aug 2012 11:42:00 -0400
Received: from blackpad.lan.raisama.net (vpn1-6-72.gru2.redhat.com
	[10.97.6.72])
	by int-mx11.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP
	id q7LFfv8r006333; Tue, 21 Aug 2012 11:41:57 -0400
Received: by blackpad.lan.raisama.net (Postfix, from userid 500)
	id E1DF720112C; Tue, 21 Aug 2012 12:43:08 -0300 (BRT)
From: Eduardo Habkost <ehabkost@redhat.com>
To: qemu-devel@nongnu.org
Date: Tue, 21 Aug 2012 12:42:56 -0300
Message-Id: <1345563782-11224-3-git-send-email-ehabkost@redhat.com>
In-Reply-To: <1345563782-11224-1-git-send-email-ehabkost@redhat.com>
References: <1345563782-11224-1-git-send-email-ehabkost@redhat.com>
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.24
X-Mailman-Approved-At: Tue, 21 Aug 2012 15:52:59 +0000
Cc: peter.maydell@linaro.org, jan.kiszka@siemens.com, mjt@tls.msk.ru,
	mdroth@linux.vnet.ibm.com, blauwirbel@gmail.com,
	kraxel@redhat.com, xen-devel@lists.xensource.com,
	i.mitsyanko@samsung.com, armbru@redhat.com, avi@redhat.com,
	anthony.perard@citrix.com, lersek@redhat.com,
	stefanha@linux.vnet.ibm.com, stefano.stabellini@eu.citrix.com,
	sw@weilnetz.de, imammedo@redhat.com, lcapitulino@redhat.com,
	rth@twiddle.net, kwolf@redhat.com, aliguori@us.ibm.com,
	mtosatti@redhat.com, pbonzini@redhat.com, afaerber@suse.de
Subject: [Xen-devel] [RFC 2/8] qdev: split up header so it can be used in
	cpu.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Anthony Liguori <aliguori@us.ibm.com>

Header file dependency is a frickin' nightmare right now.  cpu.h tends to get
included in our 'include everything' header files, but qdev also needs to
include those headers, mainly for qdev-properties, since it knows about
CharDriverState and friends.

We can solve this for now by splitting out qdev.h along the same lines that we
previously split the C file.  Then cpu.h just needs to include qdev-core.h.

v1->v2:
  move qemu_irq typedef out of this patch into a separate one with an additional
  cleanup of headers to fix build breakage

Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
---
 hw/mc146818rtc.c     |   1 +
 hw/qdev-addr.c       |   1 +
 hw/qdev-core.h       | 240 +++++++++++++++++++++++++++++++++
 hw/qdev-monitor.h    |  16 +++
 hw/qdev-properties.c |   1 +
 hw/qdev-properties.h | 128 ++++++++++++++++++
 hw/qdev.c            |   1 +
 hw/qdev.h            | 371 +--------------------------------------------------
 8 files changed, 392 insertions(+), 367 deletions(-)
 create mode 100644 hw/qdev-core.h
 create mode 100644 hw/qdev-monitor.h
 create mode 100644 hw/qdev-properties.h

diff --git a/hw/mc146818rtc.c b/hw/mc146818rtc.c
index 3777f85..3780617 100644
--- a/hw/mc146818rtc.c
+++ b/hw/mc146818rtc.c
@@ -25,6 +25,7 @@
 #include "qemu-timer.h"
 #include "sysemu.h"
 #include "mc146818rtc.h"
+#include "qapi/qapi-visit-core.h"
 
 #ifdef TARGET_I386
 #include "apic.h"
diff --git a/hw/qdev-addr.c b/hw/qdev-addr.c
index b711b6b..5b5d38f 100644
--- a/hw/qdev-addr.c
+++ b/hw/qdev-addr.c
@@ -1,6 +1,7 @@
 #include "qdev.h"
 #include "qdev-addr.h"
 #include "targphys.h"
+#include "qapi/qapi-visit-core.h"
 
 /* --- target physical address --- */
 
diff --git a/hw/qdev-core.h b/hw/qdev-core.h
new file mode 100644
index 0000000..ca205fc
--- /dev/null
+++ b/hw/qdev-core.h
@@ -0,0 +1,240 @@
+#ifndef QDEV_CORE_H
+#define QDEV_CORE_H
+
+#include "qemu-queue.h"
+#include "qemu-option.h"
+#include "qemu/object.h"
+#include "hw/irq.h"
+#include "error.h"
+
+typedef struct Property Property;
+
+typedef struct PropertyInfo PropertyInfo;
+
+typedef struct CompatProperty CompatProperty;
+
+typedef struct BusState BusState;
+
+typedef struct BusClass BusClass;
+
+enum DevState {
+    DEV_STATE_CREATED = 1,
+    DEV_STATE_INITIALIZED,
+};
+
+enum {
+    DEV_NVECTORS_UNSPECIFIED = -1,
+};
+
+#define TYPE_DEVICE "device"
+#define DEVICE(obj) OBJECT_CHECK(DeviceState, (obj), TYPE_DEVICE)
+#define DEVICE_CLASS(klass) OBJECT_CLASS_CHECK(DeviceClass, (klass), TYPE_DEVICE)
+#define DEVICE_GET_CLASS(obj) OBJECT_GET_CLASS(DeviceClass, (obj), TYPE_DEVICE)
+
+typedef int (*qdev_initfn)(DeviceState *dev);
+typedef int (*qdev_event)(DeviceState *dev);
+typedef void (*qdev_resetfn)(DeviceState *dev);
+
+struct VMStateDescription;
+
+typedef struct DeviceClass {
+    ObjectClass parent_class;
+
+    const char *fw_name;
+    const char *desc;
+    Property *props;
+    int no_user;
+
+    /* callbacks */
+    void (*reset)(DeviceState *dev);
+
+    /* device state */
+    const struct VMStateDescription *vmsd;
+
+    /* Private to qdev / bus.  */
+    qdev_initfn init;
+    qdev_event unplug;
+    qdev_event exit;
+    const char *bus_type;
+} DeviceClass;
+
+/* This structure should not be accessed directly.  We declare it here
+   so that it can be embedded in individual device state structures.  */
+struct DeviceState {
+    Object parent_obj;
+
+    const char *id;
+    enum DevState state;
+    struct QemuOpts *opts;
+    int hotplugged;
+    BusState *parent_bus;
+    int num_gpio_out;
+    qemu_irq *gpio_out;
+    int num_gpio_in;
+    qemu_irq *gpio_in;
+    QLIST_HEAD(, BusState) child_bus;
+    int num_child_bus;
+    int instance_id_alias;
+    int alias_required_for_version;
+};
+
+/*
+ * This callback is used to create Open Firmware device path in accordance with
+ * OF spec http://forthworks.com/standards/of1275.pdf. Individual bus bindings
+ * can be found here http://playground.sun.com/1275/bindings/.
+ */
+
+#define TYPE_BUS "bus"
+#define BUS(obj) OBJECT_CHECK(BusState, (obj), TYPE_BUS)
+#define BUS_CLASS(klass) OBJECT_CLASS_CHECK(BusClass, (klass), TYPE_BUS)
+#define BUS_GET_CLASS(obj) OBJECT_GET_CLASS(BusClass, (obj), TYPE_BUS)
+
+struct BusClass {
+    ObjectClass parent_class;
+
+    /* FIXME first arg should be BusState */
+    void (*print_dev)(Monitor *mon, DeviceState *dev, int indent);
+    char *(*get_dev_path)(DeviceState *dev);
+    char *(*get_fw_dev_path)(DeviceState *dev);
+    int (*reset)(BusState *bus);
+};
+
+typedef struct BusChild {
+    DeviceState *child;
+    int index;
+    QTAILQ_ENTRY(BusChild) sibling;
+} BusChild;
+
+/**
+ * BusState:
+ * @qom_allocated: Indicates whether the object was allocated by QOM.
+ * @glib_allocated: Indicates whether the object was initialized in-place
+ * yet is expected to be freed with g_free().
+ */
+struct BusState {
+    Object obj;
+    DeviceState *parent;
+    const char *name;
+    int allow_hotplug;
+    bool qom_allocated;
+    bool glib_allocated;
+    int max_index;
+    QTAILQ_HEAD(ChildrenHead, BusChild) children;
+    QLIST_ENTRY(BusState) sibling;
+};
+
+struct Property {
+    const char   *name;
+    PropertyInfo *info;
+    int          offset;
+    uint8_t      bitnr;
+    uint8_t      qtype;
+    int64_t      defval;
+};
+
+struct PropertyInfo {
+    const char *name;
+    const char *legacy_name;
+    const char **enum_table;
+    int (*parse)(DeviceState *dev, Property *prop, const char *str);
+    int (*print)(DeviceState *dev, Property *prop, char *dest, size_t len);
+    ObjectPropertyAccessor *get;
+    ObjectPropertyAccessor *set;
+    ObjectPropertyRelease *release;
+};
+
+typedef struct GlobalProperty {
+    const char *driver;
+    const char *property;
+    const char *value;
+    QTAILQ_ENTRY(GlobalProperty) next;
+} GlobalProperty;
+
+/*** Board API.  This should go away once we have a machine config file.  ***/
+
+DeviceState *qdev_create(BusState *bus, const char *name);
+DeviceState *qdev_try_create(BusState *bus, const char *name);
+bool qdev_exists(const char *name);
+int qdev_init(DeviceState *dev) QEMU_WARN_UNUSED_RESULT;
+void qdev_init_nofail(DeviceState *dev);
+void qdev_set_legacy_instance_id(DeviceState *dev, int alias_id,
+                                 int required_for_version);
+void qdev_unplug(DeviceState *dev, Error **errp);
+void qdev_free(DeviceState *dev);
+int qdev_simple_unplug_cb(DeviceState *dev);
+void qdev_machine_creation_done(void);
+bool qdev_machine_modified(void);
+
+qemu_irq qdev_get_gpio_in(DeviceState *dev, int n);
+void qdev_connect_gpio_out(DeviceState *dev, int n, qemu_irq pin);
+
+BusState *qdev_get_child_bus(DeviceState *dev, const char *name);
+
+/*** Device API.  ***/
+
+/* Register device properties.  */
+/* GPIO inputs also double as IRQ sinks.  */
+void qdev_init_gpio_in(DeviceState *dev, qemu_irq_handler handler, int n);
+void qdev_init_gpio_out(DeviceState *dev, qemu_irq *pins, int n);
+
+BusState *qdev_get_parent_bus(DeviceState *dev);
+
+/*** BUS API. ***/
+
+DeviceState *qdev_find_recursive(BusState *bus, const char *id);
+
+/* Returns 0 to walk children, > 0 to skip walk, < 0 to terminate walk. */
+typedef int (qbus_walkerfn)(BusState *bus, void *opaque);
+typedef int (qdev_walkerfn)(DeviceState *dev, void *opaque);
+
+void qbus_create_inplace(BusState *bus, const char *typename,
+                         DeviceState *parent, const char *name);
+BusState *qbus_create(const char *typename, DeviceState *parent, const char *name);
+/* Returns > 0 if either devfn or busfn skip walk somewhere in recursion,
+ *         < 0 if either devfn or busfn terminate walk somewhere in recursion,
+ *           0 otherwise. */
+int qbus_walk_children(BusState *bus, qdev_walkerfn *devfn,
+                       qbus_walkerfn *busfn, void *opaque);
+int qdev_walk_children(DeviceState *dev, qdev_walkerfn *devfn,
+                       qbus_walkerfn *busfn, void *opaque);
+void qdev_reset_all(DeviceState *dev);
+void qbus_reset_all_fn(void *opaque);
+
+void qbus_free(BusState *bus);
+
+#define FROM_QBUS(type, dev) DO_UPCAST(type, qbus, dev)
+
+/* This should go away once we get rid of the NULL bus hack */
+BusState *sysbus_get_default(void);
+
+char *qdev_get_fw_dev_path(DeviceState *dev);
+
+/**
+ * @qdev_machine_init
+ *
+ * Initialize platform devices before machine init.  This is a hack until full
+ * support for composition is added.
+ */
+void qdev_machine_init(void);
+
+/**
+ * @device_reset
+ *
+ * Reset a single device (by calling the reset method).
+ */
+void device_reset(DeviceState *dev);
+
+const struct VMStateDescription *qdev_get_vmsd(DeviceState *dev);
+
+const char *qdev_fw_name(DeviceState *dev);
+
+Object *qdev_get_machine(void);
+
+/* FIXME: make this a link<> */
+void qdev_set_parent_bus(DeviceState *dev, BusState *bus);
+
+extern int qdev_hotplug;
+
+char *qdev_get_dev_path(DeviceState *dev);
+
+#endif
diff --git a/hw/qdev-monitor.h b/hw/qdev-monitor.h
new file mode 100644
index 0000000..220ceba
--- /dev/null
+++ b/hw/qdev-monitor.h
@@ -0,0 +1,16 @@
+#ifndef QEMU_QDEV_MONITOR_H
+#define QEMU_QDEV_MONITOR_H
+
+#include "qdev-core.h"
+#include "monitor.h"
+
+/*** monitor commands ***/
+
+void do_info_qtree(Monitor *mon);
+void do_info_qdm(Monitor *mon);
+int do_device_add(Monitor *mon, const QDict *qdict, QObject **ret_data);
+int do_device_del(Monitor *mon, const QDict *qdict, QObject **ret_data);
+int qdev_device_help(QemuOpts *opts);
+DeviceState *qdev_device_add(QemuOpts *opts);
+
+#endif
diff --git a/hw/qdev-properties.c b/hw/qdev-properties.c
index 8aca0d4..81d901c 100644
--- a/hw/qdev-properties.c
+++ b/hw/qdev-properties.c
@@ -4,6 +4,7 @@
 #include "blockdev.h"
 #include "hw/block-common.h"
 #include "net/hub.h"
+#include "qapi/qapi-visit-core.h"
 
 void *qdev_get_prop_ptr(DeviceState *dev, Property *prop)
 {
diff --git a/hw/qdev-properties.h b/hw/qdev-properties.h
new file mode 100644
index 0000000..e93336a
--- /dev/null
+++ b/hw/qdev-properties.h
@@ -0,0 +1,128 @@
+#ifndef QEMU_QDEV_PROPERTIES_H
+#define QEMU_QDEV_PROPERTIES_H
+
+#include "qdev-core.h"
+
+/*** qdev-properties.c ***/
+
+extern PropertyInfo qdev_prop_bit;
+extern PropertyInfo qdev_prop_uint8;
+extern PropertyInfo qdev_prop_uint16;
+extern PropertyInfo qdev_prop_uint32;
+extern PropertyInfo qdev_prop_int32;
+extern PropertyInfo qdev_prop_uint64;
+extern PropertyInfo qdev_prop_hex8;
+extern PropertyInfo qdev_prop_hex32;
+extern PropertyInfo qdev_prop_hex64;
+extern PropertyInfo qdev_prop_string;
+extern PropertyInfo qdev_prop_chr;
+extern PropertyInfo qdev_prop_ptr;
+extern PropertyInfo qdev_prop_macaddr;
+extern PropertyInfo qdev_prop_losttickpolicy;
+extern PropertyInfo qdev_prop_bios_chs_trans;
+extern PropertyInfo qdev_prop_drive;
+extern PropertyInfo qdev_prop_netdev;
+extern PropertyInfo qdev_prop_vlan;
+extern PropertyInfo qdev_prop_pci_devfn;
+extern PropertyInfo qdev_prop_blocksize;
+
+#define DEFINE_PROP(_name, _state, _field, _prop, _type) { \
+        .name      = (_name),                                    \
+        .info      = &(_prop),                                   \
+        .offset    = offsetof(_state, _field)                    \
+            + type_check(_type,typeof_field(_state, _field)),    \
+        }
+#define DEFINE_PROP_DEFAULT(_name, _state, _field, _defval, _prop, _type) { \
+        .name      = (_name),                                           \
+        .info      = &(_prop),                                          \
+        .offset    = offsetof(_state, _field)                           \
+            + type_check(_type,typeof_field(_state, _field)),           \
+        .qtype     = QTYPE_QINT,                                        \
+        .defval    = (_type)_defval,                                    \
+        }
+#define DEFINE_PROP_BIT(_name, _state, _field, _bit, _defval) {  \
+        .name      = (_name),                                    \
+        .info      = &(qdev_prop_bit),                           \
+        .bitnr    = (_bit),                                      \
+        .offset    = offsetof(_state, _field)                    \
+            + type_check(uint32_t,typeof_field(_state, _field)), \
+        .qtype     = QTYPE_QBOOL,                                \
+        .defval    = (bool)_defval,                              \
+        }
+
+#define DEFINE_PROP_UINT8(_n, _s, _f, _d)                       \
+    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_uint8, uint8_t)
+#define DEFINE_PROP_UINT16(_n, _s, _f, _d)                      \
+    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_uint16, uint16_t)
+#define DEFINE_PROP_UINT32(_n, _s, _f, _d)                      \
+    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_uint32, uint32_t)
+#define DEFINE_PROP_INT32(_n, _s, _f, _d)                      \
+    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_int32, int32_t)
+#define DEFINE_PROP_UINT64(_n, _s, _f, _d)                      \
+    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_uint64, uint64_t)
+#define DEFINE_PROP_HEX8(_n, _s, _f, _d)                       \
+    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_hex8, uint8_t)
+#define DEFINE_PROP_HEX32(_n, _s, _f, _d)                       \
+    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_hex32, uint32_t)
+#define DEFINE_PROP_HEX64(_n, _s, _f, _d)                       \
+    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_hex64, uint64_t)
+#define DEFINE_PROP_PCI_DEVFN(_n, _s, _f, _d)                   \
+    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_pci_devfn, int32_t)
+
+#define DEFINE_PROP_PTR(_n, _s, _f)             \
+    DEFINE_PROP(_n, _s, _f, qdev_prop_ptr, void*)
+#define DEFINE_PROP_CHR(_n, _s, _f)             \
+    DEFINE_PROP(_n, _s, _f, qdev_prop_chr, CharDriverState*)
+#define DEFINE_PROP_STRING(_n, _s, _f)             \
+    DEFINE_PROP(_n, _s, _f, qdev_prop_string, char*)
+#define DEFINE_PROP_NETDEV(_n, _s, _f)             \
+    DEFINE_PROP(_n, _s, _f, qdev_prop_netdev, NetClientState*)
+#define DEFINE_PROP_VLAN(_n, _s, _f)             \
+    DEFINE_PROP(_n, _s, _f, qdev_prop_vlan, NetClientState*)
+#define DEFINE_PROP_DRIVE(_n, _s, _f) \
+    DEFINE_PROP(_n, _s, _f, qdev_prop_drive, BlockDriverState *)
+#define DEFINE_PROP_MACADDR(_n, _s, _f)         \
+    DEFINE_PROP(_n, _s, _f, qdev_prop_macaddr, MACAddr)
+#define DEFINE_PROP_LOSTTICKPOLICY(_n, _s, _f, _d) \
+    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_losttickpolicy, \
+                        LostTickPolicy)
+#define DEFINE_PROP_BIOS_CHS_TRANS(_n, _s, _f, _d) \
+    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_bios_chs_trans, int)
+#define DEFINE_PROP_BLOCKSIZE(_n, _s, _f, _d) \
+    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_blocksize, uint16_t)
+
+#define DEFINE_PROP_END_OF_LIST()               \
+    {}
+
+/* Set properties between creation and init.  */
+void *qdev_get_prop_ptr(DeviceState *dev, Property *prop);
+int qdev_prop_parse(DeviceState *dev, const char *name, const char *value);
+void qdev_prop_set_bit(DeviceState *dev, const char *name, bool value);
+void qdev_prop_set_uint8(DeviceState *dev, const char *name, uint8_t value);
+void qdev_prop_set_uint16(DeviceState *dev, const char *name, uint16_t value);
+void qdev_prop_set_uint32(DeviceState *dev, const char *name, uint32_t value);
+void qdev_prop_set_int32(DeviceState *dev, const char *name, int32_t value);
+void qdev_prop_set_uint64(DeviceState *dev, const char *name, uint64_t value);
+void qdev_prop_set_string(DeviceState *dev, const char *name, const char *value);
+void qdev_prop_set_chr(DeviceState *dev, const char *name, CharDriverState *value);
+void qdev_prop_set_netdev(DeviceState *dev, const char *name, NetClientState *value);
+void qdev_prop_set_vlan(DeviceState *dev, const char *name, NetClientState *value);
+int qdev_prop_set_drive(DeviceState *dev, const char *name, BlockDriverState *value) QEMU_WARN_UNUSED_RESULT;
+void qdev_prop_set_drive_nofail(DeviceState *dev, const char *name, BlockDriverState *value);
+void qdev_prop_set_macaddr(DeviceState *dev, const char *name, uint8_t *value);
+void qdev_prop_set_enum(DeviceState *dev, const char *name, int value);
+/* FIXME: Remove opaque pointer properties.  */
+void qdev_prop_set_ptr(DeviceState *dev, const char *name, void *value);
+
+void qdev_prop_register_global_list(GlobalProperty *props);
+void qdev_prop_set_globals(DeviceState *dev);
+void error_set_from_qdev_prop_error(Error **errp, int ret, DeviceState *dev,
+                                    Property *prop, const char *value);
+
+/**
+ * @qdev_property_add_static - add a @Property to a device referencing a
+ * field in a struct.
+ */
+void qdev_property_add_static(DeviceState *dev, Property *prop, Error **errp);
+
+#endif
diff --git a/hw/qdev.c b/hw/qdev.c
index b5b74b9..36c3e4b 100644
--- a/hw/qdev.c
+++ b/hw/qdev.c
@@ -29,6 +29,7 @@
 #include "qdev.h"
 #include "sysemu.h"
 #include "error.h"
+#include "qapi/qapi-visit-core.h"
 
 int qdev_hotplug = 0;
 static bool qdev_hot_added = false;
diff --git a/hw/qdev.h b/hw/qdev.h
index d699194..365b8d6 100644
--- a/hw/qdev.h
+++ b/hw/qdev.h
@@ -1,372 +1,9 @@
 #ifndef QDEV_H
 #define QDEV_H
 
-#include "hw.h"
-#include "qemu-queue.h"
-#include "qemu-char.h"
-#include "qemu-option.h"
-#include "qapi/qapi-visit-core.h"
-#include "qemu/object.h"
-#include "error.h"
-
-typedef struct Property Property;
-
-typedef struct PropertyInfo PropertyInfo;
-
-typedef struct CompatProperty CompatProperty;
-
-typedef struct BusState BusState;
-
-typedef struct BusClass BusClass;
-
-enum DevState {
-    DEV_STATE_CREATED = 1,
-    DEV_STATE_INITIALIZED,
-};
-
-enum {
-    DEV_NVECTORS_UNSPECIFIED = -1,
-};
-
-#define TYPE_DEVICE "device"
-#define DEVICE(obj) OBJECT_CHECK(DeviceState, (obj), TYPE_DEVICE)
-#define DEVICE_CLASS(klass) OBJECT_CLASS_CHECK(DeviceClass, (klass), TYPE_DEVICE)
-#define DEVICE_GET_CLASS(obj) OBJECT_GET_CLASS(DeviceClass, (obj), TYPE_DEVICE)
-
-typedef int (*qdev_initfn)(DeviceState *dev);
-typedef int (*qdev_event)(DeviceState *dev);
-typedef void (*qdev_resetfn)(DeviceState *dev);
-
-typedef struct DeviceClass {
-    ObjectClass parent_class;
-
-    const char *fw_name;
-    const char *desc;
-    Property *props;
-    int no_user;
-
-    /* callbacks */
-    void (*reset)(DeviceState *dev);
-
-    /* device state */
-    const VMStateDescription *vmsd;
-
-    /* Private to qdev / bus.  */
-    qdev_initfn init;
-    qdev_event unplug;
-    qdev_event exit;
-    const char *bus_type;
-} DeviceClass;
-
-/* This structure should not be accessed directly.  We declare it here
-   so that it can be embedded in individual device state structures.  */
-struct DeviceState {
-    Object parent_obj;
-
-    const char *id;
-    enum DevState state;
-    QemuOpts *opts;
-    int hotplugged;
-    BusState *parent_bus;
-    int num_gpio_out;
-    qemu_irq *gpio_out;
-    int num_gpio_in;
-    qemu_irq *gpio_in;
-    QLIST_HEAD(, BusState) child_bus;
-    int num_child_bus;
-    int instance_id_alias;
-    int alias_required_for_version;
-};
-
-#define TYPE_BUS "bus"
-#define BUS(obj) OBJECT_CHECK(BusState, (obj), TYPE_BUS)
-#define BUS_CLASS(klass) OBJECT_CLASS_CHECK(BusClass, (klass), TYPE_BUS)
-#define BUS_GET_CLASS(obj) OBJECT_GET_CLASS(BusClass, (obj), TYPE_BUS)
-
-struct BusClass {
-    ObjectClass parent_class;
-
-    /* FIXME first arg should be BusState */
-    void (*print_dev)(Monitor *mon, DeviceState *dev, int indent);
-    char *(*get_dev_path)(DeviceState *dev);
-    /*
-     * This callback is used to create Open Firmware device path in accordance
-     * with OF spec http://forthworks.com/standards/of1275.pdf. Individual bus
-     * bindings can be found at http://playground.sun.com/1275/bindings/.
-     */
-    char *(*get_fw_dev_path)(DeviceState *dev);
-    int (*reset)(BusState *bus);
-};
-
-typedef struct BusChild {
-    DeviceState *child;
-    int index;
-    QTAILQ_ENTRY(BusChild) sibling;
-} BusChild;
-
-/**
- * BusState:
- * @qom_allocated: Indicates whether the object was allocated by QOM.
- * @glib_allocated: Indicates whether the object was initialized in-place
- * yet is expected to be freed with g_free().
- */
-struct BusState {
-    Object obj;
-    DeviceState *parent;
-    const char *name;
-    int allow_hotplug;
-    bool qom_allocated;
-    bool glib_allocated;
-    int max_index;
-    QTAILQ_HEAD(ChildrenHead, BusChild) children;
-    QLIST_ENTRY(BusState) sibling;
-};
-
-struct Property {
-    const char   *name;
-    PropertyInfo *info;
-    int          offset;
-    uint8_t      bitnr;
-    uint8_t      qtype;
-    int64_t      defval;
-};
-
-struct PropertyInfo {
-    const char *name;
-    const char *legacy_name;
-    const char **enum_table;
-    int (*parse)(DeviceState *dev, Property *prop, const char *str);
-    int (*print)(DeviceState *dev, Property *prop, char *dest, size_t len);
-    ObjectPropertyAccessor *get;
-    ObjectPropertyAccessor *set;
-    ObjectPropertyRelease *release;
-};
-
-typedef struct GlobalProperty {
-    const char *driver;
-    const char *property;
-    const char *value;
-    QTAILQ_ENTRY(GlobalProperty) next;
-} GlobalProperty;
-
-/*** Board API.  This should go away once we have a machine config file.  ***/
-
-DeviceState *qdev_create(BusState *bus, const char *name);
-DeviceState *qdev_try_create(BusState *bus, const char *name);
-bool qdev_exists(const char *name);
-int qdev_device_help(QemuOpts *opts);
-DeviceState *qdev_device_add(QemuOpts *opts);
-int qdev_init(DeviceState *dev) QEMU_WARN_UNUSED_RESULT;
-void qdev_init_nofail(DeviceState *dev);
-void qdev_set_legacy_instance_id(DeviceState *dev, int alias_id,
-                                 int required_for_version);
-void qdev_unplug(DeviceState *dev, Error **errp);
-void qdev_free(DeviceState *dev);
-int qdev_simple_unplug_cb(DeviceState *dev);
-void qdev_machine_creation_done(void);
-bool qdev_machine_modified(void);
-
-qemu_irq qdev_get_gpio_in(DeviceState *dev, int n);
-void qdev_connect_gpio_out(DeviceState *dev, int n, qemu_irq pin);
-
-BusState *qdev_get_child_bus(DeviceState *dev, const char *name);
-
-/*** Device API.  ***/
-
-/* Register device properties.  */
-/* GPIO inputs also double as IRQ sinks.  */
-void qdev_init_gpio_in(DeviceState *dev, qemu_irq_handler handler, int n);
-void qdev_init_gpio_out(DeviceState *dev, qemu_irq *pins, int n);
-
-BusState *qdev_get_parent_bus(DeviceState *dev);
-
-/*** BUS API. ***/
-
-DeviceState *qdev_find_recursive(BusState *bus, const char *id);
-
-/* Returns 0 to walk children, > 0 to skip walk, < 0 to terminate walk. */
-typedef int (qbus_walkerfn)(BusState *bus, void *opaque);
-typedef int (qdev_walkerfn)(DeviceState *dev, void *opaque);
-
-void qbus_create_inplace(BusState *bus, const char *typename,
-                         DeviceState *parent, const char *name);
-BusState *qbus_create(const char *typename, DeviceState *parent, const char *name);
-/* Returns > 0 if either devfn or busfn skip walk somewhere in cursion,
- *         < 0 if either devfn or busfn terminate walk somewhere in cursion,
- *           0 otherwise. */
-int qbus_walk_children(BusState *bus, qdev_walkerfn *devfn,
-                       qbus_walkerfn *busfn, void *opaque);
-int qdev_walk_children(DeviceState *dev, qdev_walkerfn *devfn,
-                       qbus_walkerfn *busfn, void *opaque);
-void qdev_reset_all(DeviceState *dev);
-void qbus_reset_all_fn(void *opaque);
-
-void qbus_free(BusState *bus);
-
-#define FROM_QBUS(type, dev) DO_UPCAST(type, qbus, dev)
-
-/* This should go away once we get rid of the NULL bus hack */
-BusState *sysbus_get_default(void);
-
-/*** monitor commands ***/
-
-void do_info_qtree(Monitor *mon);
-void do_info_qdm(Monitor *mon);
-int do_device_add(Monitor *mon, const QDict *qdict, QObject **ret_data);
-int do_device_del(Monitor *mon, const QDict *qdict, QObject **ret_data);
-
-/*** qdev-properties.c ***/
-
-extern PropertyInfo qdev_prop_bit;
-extern PropertyInfo qdev_prop_uint8;
-extern PropertyInfo qdev_prop_uint16;
-extern PropertyInfo qdev_prop_uint32;
-extern PropertyInfo qdev_prop_int32;
-extern PropertyInfo qdev_prop_uint64;
-extern PropertyInfo qdev_prop_hex8;
-extern PropertyInfo qdev_prop_hex32;
-extern PropertyInfo qdev_prop_hex64;
-extern PropertyInfo qdev_prop_string;
-extern PropertyInfo qdev_prop_chr;
-extern PropertyInfo qdev_prop_ptr;
-extern PropertyInfo qdev_prop_macaddr;
-extern PropertyInfo qdev_prop_losttickpolicy;
-extern PropertyInfo qdev_prop_bios_chs_trans;
-extern PropertyInfo qdev_prop_drive;
-extern PropertyInfo qdev_prop_netdev;
-extern PropertyInfo qdev_prop_vlan;
-extern PropertyInfo qdev_prop_pci_devfn;
-extern PropertyInfo qdev_prop_blocksize;
-extern PropertyInfo qdev_prop_pci_host_devaddr;
-
-#define DEFINE_PROP(_name, _state, _field, _prop, _type) { \
-        .name      = (_name),                                    \
-        .info      = &(_prop),                                   \
-        .offset    = offsetof(_state, _field)                    \
-            + type_check(_type,typeof_field(_state, _field)),    \
-        }
-#define DEFINE_PROP_DEFAULT(_name, _state, _field, _defval, _prop, _type) { \
-        .name      = (_name),                                           \
-        .info      = &(_prop),                                          \
-        .offset    = offsetof(_state, _field)                           \
-            + type_check(_type,typeof_field(_state, _field)),           \
-        .qtype     = QTYPE_QINT,                                        \
-        .defval    = (_type)_defval,                                    \
-        }
-#define DEFINE_PROP_BIT(_name, _state, _field, _bit, _defval) {  \
-        .name      = (_name),                                    \
-        .info      = &(qdev_prop_bit),                           \
-        .bitnr    = (_bit),                                      \
-        .offset    = offsetof(_state, _field)                    \
-            + type_check(uint32_t,typeof_field(_state, _field)), \
-        .qtype     = QTYPE_QBOOL,                                \
-        .defval    = (bool)_defval,                              \
-        }
-
-#define DEFINE_PROP_UINT8(_n, _s, _f, _d)                       \
-    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_uint8, uint8_t)
-#define DEFINE_PROP_UINT16(_n, _s, _f, _d)                      \
-    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_uint16, uint16_t)
-#define DEFINE_PROP_UINT32(_n, _s, _f, _d)                      \
-    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_uint32, uint32_t)
-#define DEFINE_PROP_INT32(_n, _s, _f, _d)                      \
-    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_int32, int32_t)
-#define DEFINE_PROP_UINT64(_n, _s, _f, _d)                      \
-    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_uint64, uint64_t)
-#define DEFINE_PROP_HEX8(_n, _s, _f, _d)                       \
-    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_hex8, uint8_t)
-#define DEFINE_PROP_HEX32(_n, _s, _f, _d)                       \
-    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_hex32, uint32_t)
-#define DEFINE_PROP_HEX64(_n, _s, _f, _d)                       \
-    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_hex64, uint64_t)
-#define DEFINE_PROP_PCI_DEVFN(_n, _s, _f, _d)                   \
-    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_pci_devfn, int32_t)
-
-#define DEFINE_PROP_PTR(_n, _s, _f)             \
-    DEFINE_PROP(_n, _s, _f, qdev_prop_ptr, void*)
-#define DEFINE_PROP_CHR(_n, _s, _f)             \
-    DEFINE_PROP(_n, _s, _f, qdev_prop_chr, CharDriverState*)
-#define DEFINE_PROP_STRING(_n, _s, _f)             \
-    DEFINE_PROP(_n, _s, _f, qdev_prop_string, char*)
-#define DEFINE_PROP_NETDEV(_n, _s, _f)             \
-    DEFINE_PROP(_n, _s, _f, qdev_prop_netdev, NetClientState*)
-#define DEFINE_PROP_VLAN(_n, _s, _f)             \
-    DEFINE_PROP(_n, _s, _f, qdev_prop_vlan, NetClientState*)
-#define DEFINE_PROP_DRIVE(_n, _s, _f) \
-    DEFINE_PROP(_n, _s, _f, qdev_prop_drive, BlockDriverState *)
-#define DEFINE_PROP_MACADDR(_n, _s, _f)         \
-    DEFINE_PROP(_n, _s, _f, qdev_prop_macaddr, MACAddr)
-#define DEFINE_PROP_LOSTTICKPOLICY(_n, _s, _f, _d) \
-    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_losttickpolicy, \
-                        LostTickPolicy)
-#define DEFINE_PROP_BIOS_CHS_TRANS(_n, _s, _f, _d) \
-    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_bios_chs_trans, int)
-#define DEFINE_PROP_BLOCKSIZE(_n, _s, _f, _d) \
-    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_blocksize, uint16_t)
-#define DEFINE_PROP_PCI_HOST_DEVADDR(_n, _s, _f) \
-    DEFINE_PROP(_n, _s, _f, qdev_prop_pci_host_devaddr, PCIHostDeviceAddress)
-
-#define DEFINE_PROP_END_OF_LIST()               \
-    {}
-
-/* Set properties between creation and init.  */
-void *qdev_get_prop_ptr(DeviceState *dev, Property *prop);
-int qdev_prop_parse(DeviceState *dev, const char *name, const char *value);
-void qdev_prop_set_bit(DeviceState *dev, const char *name, bool value);
-void qdev_prop_set_uint8(DeviceState *dev, const char *name, uint8_t value);
-void qdev_prop_set_uint16(DeviceState *dev, const char *name, uint16_t value);
-void qdev_prop_set_uint32(DeviceState *dev, const char *name, uint32_t value);
-void qdev_prop_set_int32(DeviceState *dev, const char *name, int32_t value);
-void qdev_prop_set_uint64(DeviceState *dev, const char *name, uint64_t value);
-void qdev_prop_set_string(DeviceState *dev, const char *name, const char *value);
-void qdev_prop_set_chr(DeviceState *dev, const char *name, CharDriverState *value);
-void qdev_prop_set_netdev(DeviceState *dev, const char *name, NetClientState *value);
-int qdev_prop_set_drive(DeviceState *dev, const char *name, BlockDriverState *value) QEMU_WARN_UNUSED_RESULT;
-void qdev_prop_set_drive_nofail(DeviceState *dev, const char *name, BlockDriverState *value);
-void qdev_prop_set_macaddr(DeviceState *dev, const char *name, uint8_t *value);
-void qdev_prop_set_enum(DeviceState *dev, const char *name, int value);
-/* FIXME: Remove opaque pointer properties.  */
-void qdev_prop_set_ptr(DeviceState *dev, const char *name, void *value);
-
-void qdev_prop_register_global_list(GlobalProperty *props);
-void qdev_prop_set_globals(DeviceState *dev);
-void error_set_from_qdev_prop_error(Error **errp, int ret, DeviceState *dev,
-                                    Property *prop, const char *value);
-
-char *qdev_get_fw_dev_path(DeviceState *dev);
-
-/**
- * @qdev_property_add_static - add a @Property to a device referencing a
- * field in a struct.
- */
-void qdev_property_add_static(DeviceState *dev, Property *prop, Error **errp);
-
-/**
- * @qdev_machine_init
- *
- * Initialize platform devices before machine init.  This is a hack until full
- * support for composition is added.
- */
-void qdev_machine_init(void);
-
-/**
- * @device_reset
- *
- * Reset a single device (by calling the reset method).
- */
-void device_reset(DeviceState *dev);
-
-const VMStateDescription *qdev_get_vmsd(DeviceState *dev);
-
-const char *qdev_fw_name(DeviceState *dev);
-
-Object *qdev_get_machine(void);
-
-/* FIXME: make this a link<> */
-void qdev_set_parent_bus(DeviceState *dev, BusState *bus);
-
-extern int qdev_hotplug;
-
-char *qdev_get_dev_path(DeviceState *dev);
+#include "hw/hw.h"
+#include "qdev-core.h"
+#include "qdev-properties.h"
+#include "qdev-monitor.h"
 
 #endif
-- 
1.7.11.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 15:53:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 15:53:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3qlM-0006MI-19; Tue, 21 Aug 2012 15:53:04 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ehabkost@redhat.com>) id 1T3qav-0005pl-UU
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 15:42:18 +0000
Received: from [85.158.143.35:23683] by server-1.bemta-4.messagelabs.com id
	DC/CF-07754-95CA3305; Tue, 21 Aug 2012 15:42:17 +0000
X-Env-Sender: ehabkost@redhat.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1345563731!6813400!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQ0MzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18427 invoked from network); 21 Aug 2012 15:42:12 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-9.tower-21.messagelabs.com with SMTP;
	21 Aug 2012 15:42:12 -0000
Received: from int-mx11.intmail.prod.int.phx2.redhat.com
	(int-mx11.intmail.prod.int.phx2.redhat.com [10.5.11.24])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id q7LFfxCL017854
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 21 Aug 2012 11:42:00 -0400
Received: from blackpad.lan.raisama.net (vpn1-6-72.gru2.redhat.com
	[10.97.6.72])
	by int-mx11.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP
	id q7LFfv8r006333; Tue, 21 Aug 2012 11:41:57 -0400
Received: by blackpad.lan.raisama.net (Postfix, from userid 500)
	id E1DF720112C; Tue, 21 Aug 2012 12:43:08 -0300 (BRT)
From: Eduardo Habkost <ehabkost@redhat.com>
To: qemu-devel@nongnu.org
Date: Tue, 21 Aug 2012 12:42:56 -0300
Message-Id: <1345563782-11224-3-git-send-email-ehabkost@redhat.com>
In-Reply-To: <1345563782-11224-1-git-send-email-ehabkost@redhat.com>
References: <1345563782-11224-1-git-send-email-ehabkost@redhat.com>
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.24
X-Mailman-Approved-At: Tue, 21 Aug 2012 15:52:59 +0000
Cc: peter.maydell@linaro.org, jan.kiszka@siemens.com, mjt@tls.msk.ru,
	mdroth@linux.vnet.ibm.com, blauwirbel@gmail.com,
	kraxel@redhat.com, xen-devel@lists.xensource.com,
	i.mitsyanko@samsung.com, armbru@redhat.com, avi@redhat.com,
	anthony.perard@citrix.com, lersek@redhat.com,
	stefanha@linux.vnet.ibm.com, stefano.stabellini@eu.citrix.com,
	sw@weilnetz.de, imammedo@redhat.com, lcapitulino@redhat.com,
	rth@twiddle.net, kwolf@redhat.com, aliguori@us.ibm.com,
	mtosatti@redhat.com, pbonzini@redhat.com, afaerber@suse.de
Subject: [Xen-devel] [RFC 2/8] qdev: split up header so it can be used in
	cpu.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Anthony Liguori <aliguori@us.ibm.com>

Header file dependencies are a frickin' nightmare right now.  cpu.h tends to get
included in our 'include everything' header files, but qdev also needs to
include those headers, mainly for qdev-properties, since it knows about
CharDriverState and friends.

We can solve this for now by splitting up qdev.h along the same lines that we
previously split the C file.  Then cpu.h only needs to include qdev-core.h.

v1->v2:
  moved the qemu_irq typedef out of this patch into a separate one, with an
  additional header cleanup to fix build breakage

Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
---
 hw/mc146818rtc.c     |   1 +
 hw/qdev-addr.c       |   1 +
 hw/qdev-core.h       | 240 +++++++++++++++++++++++++++++++++
 hw/qdev-monitor.h    |  16 +++
 hw/qdev-properties.c |   1 +
 hw/qdev-properties.h | 128 ++++++++++++++++++
 hw/qdev.c            |   1 +
 hw/qdev.h            | 371 +--------------------------------------------------
 8 files changed, 392 insertions(+), 367 deletions(-)
 create mode 100644 hw/qdev-core.h
 create mode 100644 hw/qdev-monitor.h
 create mode 100644 hw/qdev-properties.h

diff --git a/hw/mc146818rtc.c b/hw/mc146818rtc.c
index 3777f85..3780617 100644
--- a/hw/mc146818rtc.c
+++ b/hw/mc146818rtc.c
@@ -25,6 +25,7 @@
 #include "qemu-timer.h"
 #include "sysemu.h"
 #include "mc146818rtc.h"
+#include "qapi/qapi-visit-core.h"
 
 #ifdef TARGET_I386
 #include "apic.h"
diff --git a/hw/qdev-addr.c b/hw/qdev-addr.c
index b711b6b..5b5d38f 100644
--- a/hw/qdev-addr.c
+++ b/hw/qdev-addr.c
@@ -1,6 +1,7 @@
 #include "qdev.h"
 #include "qdev-addr.h"
 #include "targphys.h"
+#include "qapi/qapi-visit-core.h"
 
 /* --- target physical address --- */
 
diff --git a/hw/qdev-core.h b/hw/qdev-core.h
new file mode 100644
index 0000000..ca205fc
--- /dev/null
+++ b/hw/qdev-core.h
@@ -0,0 +1,240 @@
+#ifndef QDEV_CORE_H
+#define QDEV_CORE_H
+
+#include "qemu-queue.h"
+#include "qemu-option.h"
+#include "qemu/object.h"
+#include "hw/irq.h"
+#include "error.h"
+
+typedef struct Property Property;
+
+typedef struct PropertyInfo PropertyInfo;
+
+typedef struct CompatProperty CompatProperty;
+
+typedef struct BusState BusState;
+
+typedef struct BusClass BusClass;
+
+enum DevState {
+    DEV_STATE_CREATED = 1,
+    DEV_STATE_INITIALIZED,
+};
+
+enum {
+    DEV_NVECTORS_UNSPECIFIED = -1,
+};
+
+#define TYPE_DEVICE "device"
+#define DEVICE(obj) OBJECT_CHECK(DeviceState, (obj), TYPE_DEVICE)
+#define DEVICE_CLASS(klass) OBJECT_CLASS_CHECK(DeviceClass, (klass), TYPE_DEVICE)
+#define DEVICE_GET_CLASS(obj) OBJECT_GET_CLASS(DeviceClass, (obj), TYPE_DEVICE)
+
+typedef int (*qdev_initfn)(DeviceState *dev);
+typedef int (*qdev_event)(DeviceState *dev);
+typedef void (*qdev_resetfn)(DeviceState *dev);
+
+struct VMStateDescription;
+
+typedef struct DeviceClass {
+    ObjectClass parent_class;
+
+    const char *fw_name;
+    const char *desc;
+    Property *props;
+    int no_user;
+
+    /* callbacks */
+    void (*reset)(DeviceState *dev);
+
+    /* device state */
+    const struct VMStateDescription *vmsd;
+
+    /* Private to qdev / bus.  */
+    qdev_initfn init;
+    qdev_event unplug;
+    qdev_event exit;
+    const char *bus_type;
+} DeviceClass;
+
+/* This structure should not be accessed directly.  We declare it here
+   so that it can be embedded in individual device state structures.  */
+struct DeviceState {
+    Object parent_obj;
+
+    const char *id;
+    enum DevState state;
+    struct QemuOpts *opts;
+    int hotplugged;
+    BusState *parent_bus;
+    int num_gpio_out;
+    qemu_irq *gpio_out;
+    int num_gpio_in;
+    qemu_irq *gpio_in;
+    QLIST_HEAD(, BusState) child_bus;
+    int num_child_bus;
+    int instance_id_alias;
+    int alias_required_for_version;
+};
+
+/*
+ * This callback is used to create Open Firmware device path in accordance with
+ * OF spec http://forthworks.com/standards/of1275.pdf. Individual bus bindings
+ * can be found at http://playground.sun.com/1275/bindings/.
+ */
+
+#define TYPE_BUS "bus"
+#define BUS(obj) OBJECT_CHECK(BusState, (obj), TYPE_BUS)
+#define BUS_CLASS(klass) OBJECT_CLASS_CHECK(BusClass, (klass), TYPE_BUS)
+#define BUS_GET_CLASS(obj) OBJECT_GET_CLASS(BusClass, (obj), TYPE_BUS)
+
+struct BusClass {
+    ObjectClass parent_class;
+
+    /* FIXME first arg should be BusState */
+    void (*print_dev)(Monitor *mon, DeviceState *dev, int indent);
+    char *(*get_dev_path)(DeviceState *dev);
+    char *(*get_fw_dev_path)(DeviceState *dev);
+    int (*reset)(BusState *bus);
+};
+
+typedef struct BusChild {
+    DeviceState *child;
+    int index;
+    QTAILQ_ENTRY(BusChild) sibling;
+} BusChild;
+
+/**
+ * BusState:
+ * @qom_allocated: Indicates whether the object was allocated by QOM.
+ * @glib_allocated: Indicates whether the object was initialized in-place
+ * yet is expected to be freed with g_free().
+ */
+struct BusState {
+    Object obj;
+    DeviceState *parent;
+    const char *name;
+    int allow_hotplug;
+    bool qom_allocated;
+    bool glib_allocated;
+    int max_index;
+    QTAILQ_HEAD(ChildrenHead, BusChild) children;
+    QLIST_ENTRY(BusState) sibling;
+};
+
+struct Property {
+    const char   *name;
+    PropertyInfo *info;
+    int          offset;
+    uint8_t      bitnr;
+    uint8_t      qtype;
+    int64_t      defval;
+};
+
+struct PropertyInfo {
+    const char *name;
+    const char *legacy_name;
+    const char **enum_table;
+    int (*parse)(DeviceState *dev, Property *prop, const char *str);
+    int (*print)(DeviceState *dev, Property *prop, char *dest, size_t len);
+    ObjectPropertyAccessor *get;
+    ObjectPropertyAccessor *set;
+    ObjectPropertyRelease *release;
+};
+
+typedef struct GlobalProperty {
+    const char *driver;
+    const char *property;
+    const char *value;
+    QTAILQ_ENTRY(GlobalProperty) next;
+} GlobalProperty;
+
+/*** Board API.  This should go away once we have a machine config file.  ***/
+
+DeviceState *qdev_create(BusState *bus, const char *name);
+DeviceState *qdev_try_create(BusState *bus, const char *name);
+bool qdev_exists(const char *name);
+int qdev_init(DeviceState *dev) QEMU_WARN_UNUSED_RESULT;
+void qdev_init_nofail(DeviceState *dev);
+void qdev_set_legacy_instance_id(DeviceState *dev, int alias_id,
+                                 int required_for_version);
+void qdev_unplug(DeviceState *dev, Error **errp);
+void qdev_free(DeviceState *dev);
+int qdev_simple_unplug_cb(DeviceState *dev);
+void qdev_machine_creation_done(void);
+bool qdev_machine_modified(void);
+
+qemu_irq qdev_get_gpio_in(DeviceState *dev, int n);
+void qdev_connect_gpio_out(DeviceState *dev, int n, qemu_irq pin);
+
+BusState *qdev_get_child_bus(DeviceState *dev, const char *name);
+
+/*** Device API.  ***/
+
+/* Register device properties.  */
+/* GPIO inputs also double as IRQ sinks.  */
+void qdev_init_gpio_in(DeviceState *dev, qemu_irq_handler handler, int n);
+void qdev_init_gpio_out(DeviceState *dev, qemu_irq *pins, int n);
+
+BusState *qdev_get_parent_bus(DeviceState *dev);
+
+/*** BUS API. ***/
+
+DeviceState *qdev_find_recursive(BusState *bus, const char *id);
+
+/* Returns 0 to walk children, > 0 to skip walk, < 0 to terminate walk. */
+typedef int (qbus_walkerfn)(BusState *bus, void *opaque);
+typedef int (qdev_walkerfn)(DeviceState *dev, void *opaque);
+
+void qbus_create_inplace(BusState *bus, const char *typename,
+                         DeviceState *parent, const char *name);
+BusState *qbus_create(const char *typename, DeviceState *parent, const char *name);
+/* Returns > 0 if either devfn or busfn skip walk somewhere in recursion,
+ *         < 0 if either devfn or busfn terminate walk somewhere in recursion,
+ *           0 otherwise. */
+int qbus_walk_children(BusState *bus, qdev_walkerfn *devfn,
+                       qbus_walkerfn *busfn, void *opaque);
+int qdev_walk_children(DeviceState *dev, qdev_walkerfn *devfn,
+                       qbus_walkerfn *busfn, void *opaque);
+void qdev_reset_all(DeviceState *dev);
+void qbus_reset_all_fn(void *opaque);
+
+void qbus_free(BusState *bus);
+
+#define FROM_QBUS(type, dev) DO_UPCAST(type, qbus, dev)
+
+/* This should go away once we get rid of the NULL bus hack */
+BusState *sysbus_get_default(void);
+
+char *qdev_get_fw_dev_path(DeviceState *dev);
+
+/**
+ * @qdev_machine_init
+ *
+ * Initialize platform devices before machine init.  This is a hack until full
+ * support for composition is added.
+ */
+void qdev_machine_init(void);
+
+/**
+ * @device_reset
+ *
+ * Reset a single device (by calling the reset method).
+ */
+void device_reset(DeviceState *dev);
+
+const struct VMStateDescription *qdev_get_vmsd(DeviceState *dev);
+
+const char *qdev_fw_name(DeviceState *dev);
+
+Object *qdev_get_machine(void);
+
+/* FIXME: make this a link<> */
+void qdev_set_parent_bus(DeviceState *dev, BusState *bus);
+
+extern int qdev_hotplug;
+
+char *qdev_get_dev_path(DeviceState *dev);
+
+#endif
diff --git a/hw/qdev-monitor.h b/hw/qdev-monitor.h
new file mode 100644
index 0000000..220ceba
--- /dev/null
+++ b/hw/qdev-monitor.h
@@ -0,0 +1,16 @@
+#ifndef QEMU_QDEV_MONITOR_H
+#define QEMU_QDEV_MONITOR_H
+
+#include "qdev-core.h"
+#include "monitor.h"
+
+/*** monitor commands ***/
+
+void do_info_qtree(Monitor *mon);
+void do_info_qdm(Monitor *mon);
+int do_device_add(Monitor *mon, const QDict *qdict, QObject **ret_data);
+int do_device_del(Monitor *mon, const QDict *qdict, QObject **ret_data);
+int qdev_device_help(QemuOpts *opts);
+DeviceState *qdev_device_add(QemuOpts *opts);
+
+#endif
diff --git a/hw/qdev-properties.c b/hw/qdev-properties.c
index 8aca0d4..81d901c 100644
--- a/hw/qdev-properties.c
+++ b/hw/qdev-properties.c
@@ -4,6 +4,7 @@
 #include "blockdev.h"
 #include "hw/block-common.h"
 #include "net/hub.h"
+#include "qapi/qapi-visit-core.h"
 
 void *qdev_get_prop_ptr(DeviceState *dev, Property *prop)
 {
diff --git a/hw/qdev-properties.h b/hw/qdev-properties.h
new file mode 100644
index 0000000..e93336a
--- /dev/null
+++ b/hw/qdev-properties.h
@@ -0,0 +1,128 @@
+#ifndef QEMU_QDEV_PROPERTIES_H
+#define QEMU_QDEV_PROPERTIES_H
+
+#include "qdev-core.h"
+
+/*** qdev-properties.c ***/
+
+extern PropertyInfo qdev_prop_bit;
+extern PropertyInfo qdev_prop_uint8;
+extern PropertyInfo qdev_prop_uint16;
+extern PropertyInfo qdev_prop_uint32;
+extern PropertyInfo qdev_prop_int32;
+extern PropertyInfo qdev_prop_uint64;
+extern PropertyInfo qdev_prop_hex8;
+extern PropertyInfo qdev_prop_hex32;
+extern PropertyInfo qdev_prop_hex64;
+extern PropertyInfo qdev_prop_string;
+extern PropertyInfo qdev_prop_chr;
+extern PropertyInfo qdev_prop_ptr;
+extern PropertyInfo qdev_prop_macaddr;
+extern PropertyInfo qdev_prop_losttickpolicy;
+extern PropertyInfo qdev_prop_bios_chs_trans;
+extern PropertyInfo qdev_prop_drive;
+extern PropertyInfo qdev_prop_netdev;
+extern PropertyInfo qdev_prop_vlan;
+extern PropertyInfo qdev_prop_pci_devfn;
+extern PropertyInfo qdev_prop_blocksize;
+
+#define DEFINE_PROP(_name, _state, _field, _prop, _type) { \
+        .name      = (_name),                                    \
+        .info      = &(_prop),                                   \
+        .offset    = offsetof(_state, _field)                    \
+            + type_check(_type,typeof_field(_state, _field)),    \
+        }
+#define DEFINE_PROP_DEFAULT(_name, _state, _field, _defval, _prop, _type) { \
+        .name      = (_name),                                           \
+        .info      = &(_prop),                                          \
+        .offset    = offsetof(_state, _field)                           \
+            + type_check(_type,typeof_field(_state, _field)),           \
+        .qtype     = QTYPE_QINT,                                        \
+        .defval    = (_type)_defval,                                    \
+        }
+#define DEFINE_PROP_BIT(_name, _state, _field, _bit, _defval) {  \
+        .name      = (_name),                                    \
+        .info      = &(qdev_prop_bit),                           \
+        .bitnr     = (_bit),                                     \
+        .offset    = offsetof(_state, _field)                    \
+            + type_check(uint32_t,typeof_field(_state, _field)), \
+        .qtype     = QTYPE_QBOOL,                                \
+        .defval    = (bool)_defval,                              \
+        }
+
+#define DEFINE_PROP_UINT8(_n, _s, _f, _d)                       \
+    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_uint8, uint8_t)
+#define DEFINE_PROP_UINT16(_n, _s, _f, _d)                      \
+    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_uint16, uint16_t)
+#define DEFINE_PROP_UINT32(_n, _s, _f, _d)                      \
+    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_uint32, uint32_t)
+#define DEFINE_PROP_INT32(_n, _s, _f, _d)                      \
+    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_int32, int32_t)
+#define DEFINE_PROP_UINT64(_n, _s, _f, _d)                      \
+    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_uint64, uint64_t)
+#define DEFINE_PROP_HEX8(_n, _s, _f, _d)                       \
+    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_hex8, uint8_t)
+#define DEFINE_PROP_HEX32(_n, _s, _f, _d)                       \
+    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_hex32, uint32_t)
+#define DEFINE_PROP_HEX64(_n, _s, _f, _d)                       \
+    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_hex64, uint64_t)
+#define DEFINE_PROP_PCI_DEVFN(_n, _s, _f, _d)                   \
+    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_pci_devfn, int32_t)
+
+#define DEFINE_PROP_PTR(_n, _s, _f)             \
+    DEFINE_PROP(_n, _s, _f, qdev_prop_ptr, void*)
+#define DEFINE_PROP_CHR(_n, _s, _f)             \
+    DEFINE_PROP(_n, _s, _f, qdev_prop_chr, CharDriverState*)
+#define DEFINE_PROP_STRING(_n, _s, _f)             \
+    DEFINE_PROP(_n, _s, _f, qdev_prop_string, char*)
+#define DEFINE_PROP_NETDEV(_n, _s, _f)             \
+    DEFINE_PROP(_n, _s, _f, qdev_prop_netdev, NetClientState*)
+#define DEFINE_PROP_VLAN(_n, _s, _f)             \
+    DEFINE_PROP(_n, _s, _f, qdev_prop_vlan, NetClientState*)
+#define DEFINE_PROP_DRIVE(_n, _s, _f) \
+    DEFINE_PROP(_n, _s, _f, qdev_prop_drive, BlockDriverState *)
+#define DEFINE_PROP_MACADDR(_n, _s, _f)         \
+    DEFINE_PROP(_n, _s, _f, qdev_prop_macaddr, MACAddr)
+#define DEFINE_PROP_LOSTTICKPOLICY(_n, _s, _f, _d) \
+    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_losttickpolicy, \
+                        LostTickPolicy)
+#define DEFINE_PROP_BIOS_CHS_TRANS(_n, _s, _f, _d) \
+    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_bios_chs_trans, int)
+#define DEFINE_PROP_BLOCKSIZE(_n, _s, _f, _d) \
+    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_blocksize, uint16_t)
+
+#define DEFINE_PROP_END_OF_LIST()               \
+    {}
+
+/* Set properties between creation and init.  */
+void *qdev_get_prop_ptr(DeviceState *dev, Property *prop);
+int qdev_prop_parse(DeviceState *dev, const char *name, const char *value);
+void qdev_prop_set_bit(DeviceState *dev, const char *name, bool value);
+void qdev_prop_set_uint8(DeviceState *dev, const char *name, uint8_t value);
+void qdev_prop_set_uint16(DeviceState *dev, const char *name, uint16_t value);
+void qdev_prop_set_uint32(DeviceState *dev, const char *name, uint32_t value);
+void qdev_prop_set_int32(DeviceState *dev, const char *name, int32_t value);
+void qdev_prop_set_uint64(DeviceState *dev, const char *name, uint64_t value);
+void qdev_prop_set_string(DeviceState *dev, const char *name, const char *value);
+void qdev_prop_set_chr(DeviceState *dev, const char *name, CharDriverState *value);
+void qdev_prop_set_netdev(DeviceState *dev, const char *name, NetClientState *value);
+void qdev_prop_set_vlan(DeviceState *dev, const char *name, NetClientState *value);
+int qdev_prop_set_drive(DeviceState *dev, const char *name, BlockDriverState *value) QEMU_WARN_UNUSED_RESULT;
+void qdev_prop_set_drive_nofail(DeviceState *dev, const char *name, BlockDriverState *value);
+void qdev_prop_set_macaddr(DeviceState *dev, const char *name, uint8_t *value);
+void qdev_prop_set_enum(DeviceState *dev, const char *name, int value);
+/* FIXME: Remove opaque pointer properties.  */
+void qdev_prop_set_ptr(DeviceState *dev, const char *name, void *value);
+
+void qdev_prop_register_global_list(GlobalProperty *props);
+void qdev_prop_set_globals(DeviceState *dev);
+void error_set_from_qdev_prop_error(Error **errp, int ret, DeviceState *dev,
+                                    Property *prop, const char *value);
+
+/**
+ * @qdev_property_add_static - add a @Property to a device referencing a
+ * field in a struct.
+ */
+void qdev_property_add_static(DeviceState *dev, Property *prop, Error **errp);
+
+#endif
diff --git a/hw/qdev.c b/hw/qdev.c
index b5b74b9..36c3e4b 100644
--- a/hw/qdev.c
+++ b/hw/qdev.c
@@ -29,6 +29,7 @@
 #include "qdev.h"
 #include "sysemu.h"
 #include "error.h"
+#include "qapi/qapi-visit-core.h"
 
 int qdev_hotplug = 0;
 static bool qdev_hot_added = false;
diff --git a/hw/qdev.h b/hw/qdev.h
index d699194..365b8d6 100644
--- a/hw/qdev.h
+++ b/hw/qdev.h
@@ -1,372 +1,9 @@
 #ifndef QDEV_H
 #define QDEV_H
 
-#include "hw.h"
-#include "qemu-queue.h"
-#include "qemu-char.h"
-#include "qemu-option.h"
-#include "qapi/qapi-visit-core.h"
-#include "qemu/object.h"
-#include "error.h"
-
-typedef struct Property Property;
-
-typedef struct PropertyInfo PropertyInfo;
-
-typedef struct CompatProperty CompatProperty;
-
-typedef struct BusState BusState;
-
-typedef struct BusClass BusClass;
-
-enum DevState {
-    DEV_STATE_CREATED = 1,
-    DEV_STATE_INITIALIZED,
-};
-
-enum {
-    DEV_NVECTORS_UNSPECIFIED = -1,
-};
-
-#define TYPE_DEVICE "device"
-#define DEVICE(obj) OBJECT_CHECK(DeviceState, (obj), TYPE_DEVICE)
-#define DEVICE_CLASS(klass) OBJECT_CLASS_CHECK(DeviceClass, (klass), TYPE_DEVICE)
-#define DEVICE_GET_CLASS(obj) OBJECT_GET_CLASS(DeviceClass, (obj), TYPE_DEVICE)
-
-typedef int (*qdev_initfn)(DeviceState *dev);
-typedef int (*qdev_event)(DeviceState *dev);
-typedef void (*qdev_resetfn)(DeviceState *dev);
-
-typedef struct DeviceClass {
-    ObjectClass parent_class;
-
-    const char *fw_name;
-    const char *desc;
-    Property *props;
-    int no_user;
-
-    /* callbacks */
-    void (*reset)(DeviceState *dev);
-
-    /* device state */
-    const VMStateDescription *vmsd;
-
-    /* Private to qdev / bus.  */
-    qdev_initfn init;
-    qdev_event unplug;
-    qdev_event exit;
-    const char *bus_type;
-} DeviceClass;
-
-/* This structure should not be accessed directly.  We declare it here
-   so that it can be embedded in individual device state structures.  */
-struct DeviceState {
-    Object parent_obj;
-
-    const char *id;
-    enum DevState state;
-    QemuOpts *opts;
-    int hotplugged;
-    BusState *parent_bus;
-    int num_gpio_out;
-    qemu_irq *gpio_out;
-    int num_gpio_in;
-    qemu_irq *gpio_in;
-    QLIST_HEAD(, BusState) child_bus;
-    int num_child_bus;
-    int instance_id_alias;
-    int alias_required_for_version;
-};
-
-#define TYPE_BUS "bus"
-#define BUS(obj) OBJECT_CHECK(BusState, (obj), TYPE_BUS)
-#define BUS_CLASS(klass) OBJECT_CLASS_CHECK(BusClass, (klass), TYPE_BUS)
-#define BUS_GET_CLASS(obj) OBJECT_GET_CLASS(BusClass, (obj), TYPE_BUS)
-
-struct BusClass {
-    ObjectClass parent_class;
-
-    /* FIXME first arg should be BusState */
-    void (*print_dev)(Monitor *mon, DeviceState *dev, int indent);
-    char *(*get_dev_path)(DeviceState *dev);
-    /*
-     * This callback is used to create Open Firmware device path in accordance
-     * with OF spec http://forthworks.com/standards/of1275.pdf. Individual bus
-     * bindings can be found at http://playground.sun.com/1275/bindings/.
-     */
-    char *(*get_fw_dev_path)(DeviceState *dev);
-    int (*reset)(BusState *bus);
-};
-
-typedef struct BusChild {
-    DeviceState *child;
-    int index;
-    QTAILQ_ENTRY(BusChild) sibling;
-} BusChild;
-
-/**
- * BusState:
- * @qom_allocated: Indicates whether the object was allocated by QOM.
- * @glib_allocated: Indicates whether the object was initialized in-place
- * yet is expected to be freed with g_free().
- */
-struct BusState {
-    Object obj;
-    DeviceState *parent;
-    const char *name;
-    int allow_hotplug;
-    bool qom_allocated;
-    bool glib_allocated;
-    int max_index;
-    QTAILQ_HEAD(ChildrenHead, BusChild) children;
-    QLIST_ENTRY(BusState) sibling;
-};
-
-struct Property {
-    const char   *name;
-    PropertyInfo *info;
-    int          offset;
-    uint8_t      bitnr;
-    uint8_t      qtype;
-    int64_t      defval;
-};
-
-struct PropertyInfo {
-    const char *name;
-    const char *legacy_name;
-    const char **enum_table;
-    int (*parse)(DeviceState *dev, Property *prop, const char *str);
-    int (*print)(DeviceState *dev, Property *prop, char *dest, size_t len);
-    ObjectPropertyAccessor *get;
-    ObjectPropertyAccessor *set;
-    ObjectPropertyRelease *release;
-};
-
-typedef struct GlobalProperty {
-    const char *driver;
-    const char *property;
-    const char *value;
-    QTAILQ_ENTRY(GlobalProperty) next;
-} GlobalProperty;
-
-/*** Board API.  This should go away once we have a machine config file.  ***/
-
-DeviceState *qdev_create(BusState *bus, const char *name);
-DeviceState *qdev_try_create(BusState *bus, const char *name);
-bool qdev_exists(const char *name);
-int qdev_device_help(QemuOpts *opts);
-DeviceState *qdev_device_add(QemuOpts *opts);
-int qdev_init(DeviceState *dev) QEMU_WARN_UNUSED_RESULT;
-void qdev_init_nofail(DeviceState *dev);
-void qdev_set_legacy_instance_id(DeviceState *dev, int alias_id,
-                                 int required_for_version);
-void qdev_unplug(DeviceState *dev, Error **errp);
-void qdev_free(DeviceState *dev);
-int qdev_simple_unplug_cb(DeviceState *dev);
-void qdev_machine_creation_done(void);
-bool qdev_machine_modified(void);
-
-qemu_irq qdev_get_gpio_in(DeviceState *dev, int n);
-void qdev_connect_gpio_out(DeviceState *dev, int n, qemu_irq pin);
-
-BusState *qdev_get_child_bus(DeviceState *dev, const char *name);
-
-/*** Device API.  ***/
-
-/* Register device properties.  */
-/* GPIO inputs also double as IRQ sinks.  */
-void qdev_init_gpio_in(DeviceState *dev, qemu_irq_handler handler, int n);
-void qdev_init_gpio_out(DeviceState *dev, qemu_irq *pins, int n);
-
-BusState *qdev_get_parent_bus(DeviceState *dev);
-
-/*** BUS API. ***/
-
-DeviceState *qdev_find_recursive(BusState *bus, const char *id);
-
-/* Returns 0 to walk children, > 0 to skip walk, < 0 to terminate walk. */
-typedef int (qbus_walkerfn)(BusState *bus, void *opaque);
-typedef int (qdev_walkerfn)(DeviceState *dev, void *opaque);
-
-void qbus_create_inplace(BusState *bus, const char *typename,
-                         DeviceState *parent, const char *name);
-BusState *qbus_create(const char *typename, DeviceState *parent, const char *name);
-/* Returns > 0 if either devfn or busfn skip walk somewhere in cursion,
- *         < 0 if either devfn or busfn terminate walk somewhere in cursion,
- *           0 otherwise. */
-int qbus_walk_children(BusState *bus, qdev_walkerfn *devfn,
-                       qbus_walkerfn *busfn, void *opaque);
-int qdev_walk_children(DeviceState *dev, qdev_walkerfn *devfn,
-                       qbus_walkerfn *busfn, void *opaque);
-void qdev_reset_all(DeviceState *dev);
-void qbus_reset_all_fn(void *opaque);
-
-void qbus_free(BusState *bus);
-
-#define FROM_QBUS(type, dev) DO_UPCAST(type, qbus, dev)
-
-/* This should go away once we get rid of the NULL bus hack */
-BusState *sysbus_get_default(void);
-
-/*** monitor commands ***/
-
-void do_info_qtree(Monitor *mon);
-void do_info_qdm(Monitor *mon);
-int do_device_add(Monitor *mon, const QDict *qdict, QObject **ret_data);
-int do_device_del(Monitor *mon, const QDict *qdict, QObject **ret_data);
-
-/*** qdev-properties.c ***/
-
-extern PropertyInfo qdev_prop_bit;
-extern PropertyInfo qdev_prop_uint8;
-extern PropertyInfo qdev_prop_uint16;
-extern PropertyInfo qdev_prop_uint32;
-extern PropertyInfo qdev_prop_int32;
-extern PropertyInfo qdev_prop_uint64;
-extern PropertyInfo qdev_prop_hex8;
-extern PropertyInfo qdev_prop_hex32;
-extern PropertyInfo qdev_prop_hex64;
-extern PropertyInfo qdev_prop_string;
-extern PropertyInfo qdev_prop_chr;
-extern PropertyInfo qdev_prop_ptr;
-extern PropertyInfo qdev_prop_macaddr;
-extern PropertyInfo qdev_prop_losttickpolicy;
-extern PropertyInfo qdev_prop_bios_chs_trans;
-extern PropertyInfo qdev_prop_drive;
-extern PropertyInfo qdev_prop_netdev;
-extern PropertyInfo qdev_prop_vlan;
-extern PropertyInfo qdev_prop_pci_devfn;
-extern PropertyInfo qdev_prop_blocksize;
-extern PropertyInfo qdev_prop_pci_host_devaddr;
-
-#define DEFINE_PROP(_name, _state, _field, _prop, _type) { \
-        .name      = (_name),                                    \
-        .info      = &(_prop),                                   \
-        .offset    = offsetof(_state, _field)                    \
-            + type_check(_type,typeof_field(_state, _field)),    \
-        }
-#define DEFINE_PROP_DEFAULT(_name, _state, _field, _defval, _prop, _type) { \
-        .name      = (_name),                                           \
-        .info      = &(_prop),                                          \
-        .offset    = offsetof(_state, _field)                           \
-            + type_check(_type,typeof_field(_state, _field)),           \
-        .qtype     = QTYPE_QINT,                                        \
-        .defval    = (_type)_defval,                                    \
-        }
-#define DEFINE_PROP_BIT(_name, _state, _field, _bit, _defval) {  \
-        .name      = (_name),                                    \
-        .info      = &(qdev_prop_bit),                           \
-        .bitnr    = (_bit),                                      \
-        .offset    = offsetof(_state, _field)                    \
-            + type_check(uint32_t,typeof_field(_state, _field)), \
-        .qtype     = QTYPE_QBOOL,                                \
-        .defval    = (bool)_defval,                              \
-        }
-
-#define DEFINE_PROP_UINT8(_n, _s, _f, _d)                       \
-    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_uint8, uint8_t)
-#define DEFINE_PROP_UINT16(_n, _s, _f, _d)                      \
-    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_uint16, uint16_t)
-#define DEFINE_PROP_UINT32(_n, _s, _f, _d)                      \
-    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_uint32, uint32_t)
-#define DEFINE_PROP_INT32(_n, _s, _f, _d)                      \
-    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_int32, int32_t)
-#define DEFINE_PROP_UINT64(_n, _s, _f, _d)                      \
-    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_uint64, uint64_t)
-#define DEFINE_PROP_HEX8(_n, _s, _f, _d)                       \
-    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_hex8, uint8_t)
-#define DEFINE_PROP_HEX32(_n, _s, _f, _d)                       \
-    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_hex32, uint32_t)
-#define DEFINE_PROP_HEX64(_n, _s, _f, _d)                       \
-    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_hex64, uint64_t)
-#define DEFINE_PROP_PCI_DEVFN(_n, _s, _f, _d)                   \
-    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_pci_devfn, int32_t)
-
-#define DEFINE_PROP_PTR(_n, _s, _f)             \
-    DEFINE_PROP(_n, _s, _f, qdev_prop_ptr, void*)
-#define DEFINE_PROP_CHR(_n, _s, _f)             \
-    DEFINE_PROP(_n, _s, _f, qdev_prop_chr, CharDriverState*)
-#define DEFINE_PROP_STRING(_n, _s, _f)             \
-    DEFINE_PROP(_n, _s, _f, qdev_prop_string, char*)
-#define DEFINE_PROP_NETDEV(_n, _s, _f)             \
-    DEFINE_PROP(_n, _s, _f, qdev_prop_netdev, NetClientState*)
-#define DEFINE_PROP_VLAN(_n, _s, _f)             \
-    DEFINE_PROP(_n, _s, _f, qdev_prop_vlan, NetClientState*)
-#define DEFINE_PROP_DRIVE(_n, _s, _f) \
-    DEFINE_PROP(_n, _s, _f, qdev_prop_drive, BlockDriverState *)
-#define DEFINE_PROP_MACADDR(_n, _s, _f)         \
-    DEFINE_PROP(_n, _s, _f, qdev_prop_macaddr, MACAddr)
-#define DEFINE_PROP_LOSTTICKPOLICY(_n, _s, _f, _d) \
-    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_losttickpolicy, \
-                        LostTickPolicy)
-#define DEFINE_PROP_BIOS_CHS_TRANS(_n, _s, _f, _d) \
-    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_bios_chs_trans, int)
-#define DEFINE_PROP_BLOCKSIZE(_n, _s, _f, _d) \
-    DEFINE_PROP_DEFAULT(_n, _s, _f, _d, qdev_prop_blocksize, uint16_t)
-#define DEFINE_PROP_PCI_HOST_DEVADDR(_n, _s, _f) \
-    DEFINE_PROP(_n, _s, _f, qdev_prop_pci_host_devaddr, PCIHostDeviceAddress)
-
-#define DEFINE_PROP_END_OF_LIST()               \
-    {}
-
-/* Set properties between creation and init.  */
-void *qdev_get_prop_ptr(DeviceState *dev, Property *prop);
-int qdev_prop_parse(DeviceState *dev, const char *name, const char *value);
-void qdev_prop_set_bit(DeviceState *dev, const char *name, bool value);
-void qdev_prop_set_uint8(DeviceState *dev, const char *name, uint8_t value);
-void qdev_prop_set_uint16(DeviceState *dev, const char *name, uint16_t value);
-void qdev_prop_set_uint32(DeviceState *dev, const char *name, uint32_t value);
-void qdev_prop_set_int32(DeviceState *dev, const char *name, int32_t value);
-void qdev_prop_set_uint64(DeviceState *dev, const char *name, uint64_t value);
-void qdev_prop_set_string(DeviceState *dev, const char *name, const char *value);
-void qdev_prop_set_chr(DeviceState *dev, const char *name, CharDriverState *value);
-void qdev_prop_set_netdev(DeviceState *dev, const char *name, NetClientState *value);
-int qdev_prop_set_drive(DeviceState *dev, const char *name, BlockDriverState *value) QEMU_WARN_UNUSED_RESULT;
-void qdev_prop_set_drive_nofail(DeviceState *dev, const char *name, BlockDriverState *value);
-void qdev_prop_set_macaddr(DeviceState *dev, const char *name, uint8_t *value);
-void qdev_prop_set_enum(DeviceState *dev, const char *name, int value);
-/* FIXME: Remove opaque pointer properties.  */
-void qdev_prop_set_ptr(DeviceState *dev, const char *name, void *value);
-
-void qdev_prop_register_global_list(GlobalProperty *props);
-void qdev_prop_set_globals(DeviceState *dev);
-void error_set_from_qdev_prop_error(Error **errp, int ret, DeviceState *dev,
-                                    Property *prop, const char *value);
-
-char *qdev_get_fw_dev_path(DeviceState *dev);
-
-/**
- * @qdev_property_add_static - add a @Property to a device referencing a
- * field in a struct.
- */
-void qdev_property_add_static(DeviceState *dev, Property *prop, Error **errp);
-
-/**
- * @qdev_machine_init
- *
- * Initialize platform devices before machine init.  This is a hack until full
- * support for composition is added.
- */
-void qdev_machine_init(void);
-
-/**
- * @device_reset
- *
- * Reset a single device (by calling the reset method).
- */
-void device_reset(DeviceState *dev);
-
-const VMStateDescription *qdev_get_vmsd(DeviceState *dev);
-
-const char *qdev_fw_name(DeviceState *dev);
-
-Object *qdev_get_machine(void);
-
-/* FIXME: make this a link<> */
-void qdev_set_parent_bus(DeviceState *dev, BusState *bus);
-
-extern int qdev_hotplug;
-
-char *qdev_get_dev_path(DeviceState *dev);
+#include "hw/hw.h"
+#include "qdev-core.h"
+#include "qdev-properties.h"
+#include "qdev-monitor.h"
 
 #endif
-- 
1.7.11.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 15:53:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 15:53:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3qlJ-0006LA-Ur; Tue, 21 Aug 2012 15:53:01 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ehabkost@redhat.com>) id 1T3qao-0005nj-06
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 15:42:10 +0000
Received: from [85.158.138.51:47237] by server-9.bemta-3.messagelabs.com id
	E9/09-23952-15CA3305; Tue, 21 Aug 2012 15:42:09 +0000
X-Env-Sender: ehabkost@redhat.com
X-Msg-Ref: server-6.tower-174.messagelabs.com!1345563727!21428883!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQ0MzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20092 invoked from network); 21 Aug 2012 15:42:08 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-6.tower-174.messagelabs.com with SMTP;
	21 Aug 2012 15:42:08 -0000
Received: from int-mx01.intmail.prod.int.phx2.redhat.com
	(int-mx01.intmail.prod.int.phx2.redhat.com [10.5.11.11])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id q7LFg0T5010162
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 21 Aug 2012 11:42:00 -0400
Received: from blackpad.lan.raisama.net (vpn1-6-72.gru2.redhat.com
	[10.97.6.72])
	by int-mx01.intmail.prod.int.phx2.redhat.com (8.13.8/8.13.8) with ESMTP
	id q7LFfx6t012327; Tue, 21 Aug 2012 11:41:59 -0400
Received: by blackpad.lan.raisama.net (Postfix, from userid 500)
	id 7A3AF202868; Tue, 21 Aug 2012 12:43:09 -0300 (BRT)
From: Eduardo Habkost <ehabkost@redhat.com>
To: qemu-devel@nongnu.org
Date: Tue, 21 Aug 2012 12:43:01 -0300
Message-Id: <1345563782-11224-8-git-send-email-ehabkost@redhat.com>
In-Reply-To: <1345563782-11224-1-git-send-email-ehabkost@redhat.com>
References: <1345563782-11224-1-git-send-email-ehabkost@redhat.com>
X-Scanned-By: MIMEDefang 2.67 on 10.5.11.11
X-Mailman-Approved-At: Tue, 21 Aug 2012 15:52:59 +0000
Cc: peter.maydell@linaro.org, jan.kiszka@siemens.com, mjt@tls.msk.ru,
	mdroth@linux.vnet.ibm.com, blauwirbel@gmail.com,
	kraxel@redhat.com, xen-devel@lists.xensource.com,
	i.mitsyanko@samsung.com, armbru@redhat.com, avi@redhat.com,
	anthony.perard@citrix.com, lersek@redhat.com,
	stefanha@linux.vnet.ibm.com, stefano.stabellini@eu.citrix.com,
	sw@weilnetz.de, imammedo@redhat.com, lcapitulino@redhat.com,
	rth@twiddle.net, kwolf@redhat.com, aliguori@us.ibm.com,
	mtosatti@redhat.com, pbonzini@redhat.com, afaerber@suse.de
Subject: [Xen-devel] [RFC 7/8] include core qdev code into *-user, too
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The code depends on some functions from qemu-option.o, so add
qemu-option.o to qom-obj-y to make sure it's included.

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
---
 Makefile.objs                                   | 1 +
 hw/Makefile.objs                                | 2 +-
 qom/Makefile.objs                               | 2 +-
 hw/qdev-properties.c => qom/device-properties.c | 0
 hw/qdev.c => qom/device.c                       | 0
 5 files changed, 3 insertions(+), 2 deletions(-)
 rename hw/qdev-properties.c => qom/device-properties.c (100%)
 rename hw/qdev.c => qom/device.c (100%)

diff --git a/Makefile.objs b/Makefile.objs
index 4412757..2cf91c2 100644
--- a/Makefile.objs
+++ b/Makefile.objs
@@ -14,6 +14,7 @@ universal-obj-y += $(qobject-obj-y)
 #######################################################################
 # QOM
 qom-obj-y = qom/
+qom-obj-y += qemu-option.o
 
 universal-obj-y += $(qom-obj-y)
 
diff --git a/hw/Makefile.objs b/hw/Makefile.objs
index 04d3b5e..c09e291 100644
--- a/hw/Makefile.objs
+++ b/hw/Makefile.objs
@@ -176,7 +176,7 @@ common-obj-$(CONFIG_SD) += sd.o
 common-obj-y += bt.o bt-l2cap.o bt-sdp.o bt-hci.o bt-hid.o
 common-obj-y += bt-hci-csr.o
 common-obj-y += msmouse.o ps2.o
-common-obj-y += qdev.o qdev-properties.o qdev-monitor.o
+common-obj-y += qdev-monitor.o
 common-obj-y += qdev-system.o qdev-properties-system.o
 common-obj-$(CONFIG_BRLAPI) += baum.o
 
diff --git a/qom/Makefile.objs b/qom/Makefile.objs
index 5ef060a..9d86d88 100644
--- a/qom/Makefile.objs
+++ b/qom/Makefile.objs
@@ -1,4 +1,4 @@
 qom-obj-y = object.o container.o qom-qobject.o
-qom-obj-twice-y = cpu.o
+qom-obj-twice-y = cpu.o device.o device-properties.o
 common-obj-y = $(qom-obj-twice-y)
 user-obj-y = $(qom-obj-twice-y)
diff --git a/hw/qdev-properties.c b/qom/device-properties.c
similarity index 100%
rename from hw/qdev-properties.c
rename to qom/device-properties.c
diff --git a/hw/qdev.c b/qom/device.c
similarity index 100%
rename from hw/qdev.c
rename to qom/device.c
-- 
1.7.11.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 15:53:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 15:53:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3qlI-0006Kw-Th; Tue, 21 Aug 2012 15:53:00 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ehabkost@redhat.com>) id 1T3qan-0005nh-GJ
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 15:42:09 +0000
Received: from [85.158.143.35:23346] by server-1.bemta-4.messagelabs.com id
	B1/AF-07754-05CA3305; Tue, 21 Aug 2012 15:42:08 +0000
X-Env-Sender: ehabkost@redhat.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1345563727!15303658!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQ0MzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22103 invoked from network); 21 Aug 2012 15:42:07 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-16.tower-21.messagelabs.com with SMTP;
	21 Aug 2012 15:42:07 -0000
Received: from int-mx11.intmail.prod.int.phx2.redhat.com
	(int-mx11.intmail.prod.int.phx2.redhat.com [10.5.11.24])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id q7LFg0No017856
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 21 Aug 2012 11:42:00 -0400
Received: from blackpad.lan.raisama.net (vpn1-6-72.gru2.redhat.com
	[10.97.6.72])
	by int-mx11.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP
	id q7LFfx4Y006368; Tue, 21 Aug 2012 11:41:59 -0400
Received: by blackpad.lan.raisama.net (Postfix, from userid 500)
	id 9AE2920286A; Tue, 21 Aug 2012 12:43:09 -0300 (BRT)
From: Eduardo Habkost <ehabkost@redhat.com>
To: qemu-devel@nongnu.org
Date: Tue, 21 Aug 2012 12:43:02 -0300
Message-Id: <1345563782-11224-9-git-send-email-ehabkost@redhat.com>
In-Reply-To: <1345563782-11224-1-git-send-email-ehabkost@redhat.com>
References: <1345563782-11224-1-git-send-email-ehabkost@redhat.com>
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.24
X-Mailman-Approved-At: Tue, 21 Aug 2012 15:52:59 +0000
Cc: peter.maydell@linaro.org, jan.kiszka@siemens.com, mjt@tls.msk.ru,
	mdroth@linux.vnet.ibm.com, blauwirbel@gmail.com,
	kraxel@redhat.com, xen-devel@lists.xensource.com,
	i.mitsyanko@samsung.com, armbru@redhat.com, avi@redhat.com,
	anthony.perard@citrix.com, lersek@redhat.com,
	stefanha@linux.vnet.ibm.com, stefano.stabellini@eu.citrix.com,
	sw@weilnetz.de, imammedo@redhat.com, lcapitulino@redhat.com,
	rth@twiddle.net, kwolf@redhat.com, aliguori@us.ibm.com,
	mtosatti@redhat.com, pbonzini@redhat.com, afaerber@suse.de
Subject: [Xen-devel] [RFC 8/8] make CPU a child of DeviceState
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Igor Mammedov <imammedo@redhat.com>

[ehabkost: change CPU type declaration to have TYPE_DEVICE as parent]

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
---
 include/qemu/cpu.h | 6 +++---
 qom/cpu.c          | 3 ++-
 2 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/include/qemu/cpu.h b/include/qemu/cpu.h
index ad706a6..ac44057 100644
--- a/include/qemu/cpu.h
+++ b/include/qemu/cpu.h
@@ -20,7 +20,7 @@
 #ifndef QEMU_CPU_H
 #define QEMU_CPU_H
 
-#include "qemu/object.h"
+#include "hw/qdev-core.h"
 #include "qemu-thread.h"
 
 /**
@@ -46,7 +46,7 @@ typedef struct CPUState CPUState;
  */
 typedef struct CPUClass {
     /*< private >*/
-    ObjectClass parent_class;
+    DeviceClass parent_class;
     /*< public >*/
 
     void (*reset)(CPUState *cpu);
@@ -59,7 +59,7 @@ typedef struct CPUClass {
  */
 struct CPUState {
     /*< private >*/
-    Object parent_obj;
+    DeviceState parent_obj;
     /*< public >*/
 
     struct QemuThread *thread;
diff --git a/qom/cpu.c b/qom/cpu.c
index 5b36046..f59db7d 100644
--- a/qom/cpu.c
+++ b/qom/cpu.c
@@ -20,6 +20,7 @@
 
 #include "qemu/cpu.h"
 #include "qemu-common.h"
+#include "hw/qdev-core.h"
 
 void cpu_reset(CPUState *cpu)
 {
@@ -43,7 +44,7 @@ static void cpu_class_init(ObjectClass *klass, void *data)
 
 static TypeInfo cpu_type_info = {
     .name = TYPE_CPU,
-    .parent = TYPE_OBJECT,
+    .parent = TYPE_DEVICE,
     .instance_size = sizeof(CPUState),
     .abstract = true,
     .class_size = sizeof(CPUClass),
-- 
1.7.11.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 15:53:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 15:53:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3qlL-0006M1-KJ; Tue, 21 Aug 2012 15:53:03 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <ehabkost@redhat.com>) id 1T3qav-0005oi-RD
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 15:42:18 +0000
X-Env-Sender: ehabkost@redhat.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1345563730!8606005!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQ0MzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15416 invoked from network); 21 Aug 2012 15:42:11 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-8.tower-27.messagelabs.com with SMTP;
	21 Aug 2012 15:42:11 -0000
Received: from int-mx09.intmail.prod.int.phx2.redhat.com
	(int-mx09.intmail.prod.int.phx2.redhat.com [10.5.11.22])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id q7LFfwFM002787
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 21 Aug 2012 11:41:58 -0400
Received: from blackpad.lan.raisama.net (vpn1-6-72.gru2.redhat.com
	[10.97.6.72])
	by int-mx09.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP
	id q7LFfvtV017579; Tue, 21 Aug 2012 11:41:57 -0400
Received: by blackpad.lan.raisama.net (Postfix, from userid 500)
	id 210A32019DD; Tue, 21 Aug 2012 12:43:09 -0300 (BRT)
From: Eduardo Habkost <ehabkost@redhat.com>
To: qemu-devel@nongnu.org
Date: Tue, 21 Aug 2012 12:42:57 -0300
Message-Id: <1345563782-11224-4-git-send-email-ehabkost@redhat.com>
In-Reply-To: <1345563782-11224-1-git-send-email-ehabkost@redhat.com>
References: <1345563782-11224-1-git-send-email-ehabkost@redhat.com>
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.22
X-Mailman-Approved-At: Tue, 21 Aug 2012 15:52:59 +0000
Cc: peter.maydell@linaro.org, jan.kiszka@siemens.com, mjt@tls.msk.ru,
	mdroth@linux.vnet.ibm.com, blauwirbel@gmail.com,
	kraxel@redhat.com, xen-devel@lists.xensource.com,
	i.mitsyanko@samsung.com, armbru@redhat.com, avi@redhat.com,
	anthony.perard@citrix.com, lersek@redhat.com,
	stefanha@linux.vnet.ibm.com, stefano.stabellini@eu.citrix.com,
	sw@weilnetz.de, imammedo@redhat.com, lcapitulino@redhat.com,
	rth@twiddle.net, kwolf@redhat.com, aliguori@us.ibm.com,
	mtosatti@redhat.com, pbonzini@redhat.com, afaerber@suse.de
Subject: [Xen-devel] [RFC 3/8] qapi-types.h doesn't really need to include
	qemu-common.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Igor Mammedov <imammedo@redhat.com>

This change is needed to prevent build breakage when CPUState becomes a child of DeviceState.

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
---
 scripts/qapi-types.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/scripts/qapi-types.py b/scripts/qapi-types.py
index cf601ae..f34addb 100644
--- a/scripts/qapi-types.py
+++ b/scripts/qapi-types.py
@@ -263,7 +263,7 @@ fdecl.write(mcgen('''
 #ifndef %(guard)s
 #define %(guard)s
 
-#include "qemu-common.h"
+#include <stdbool.h>
 
 ''',
                   guard=guardname(h_file)))
-- 
1.7.11.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 15:53:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 15:53:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3qlK-0006LL-Bd; Tue, 21 Aug 2012 15:53:02 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ehabkost@redhat.com>) id 1T3qao-0005nt-MB
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 15:42:10 +0000
Received: from [85.158.139.83:58408] by server-12.bemta-5.messagelabs.com id
	11/46-22359-15CA3305; Tue, 21 Aug 2012 15:42:09 +0000
X-Env-Sender: ehabkost@redhat.com
X-Msg-Ref: server-12.tower-182.messagelabs.com!1345563728!29307115!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQ0MzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31042 invoked from network); 21 Aug 2012 15:42:08 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-12.tower-182.messagelabs.com with SMTP;
	21 Aug 2012 15:42:08 -0000
Received: from int-mx12.intmail.prod.int.phx2.redhat.com
	(int-mx12.intmail.prod.int.phx2.redhat.com [10.5.11.25])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id q7LFfxHs008604
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 21 Aug 2012 11:41:59 -0400
Received: from blackpad.lan.raisama.net (vpn1-6-72.gru2.redhat.com
	[10.97.6.72])
	by int-mx12.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP
	id q7LFfvQU024379; Tue, 21 Aug 2012 11:41:57 -0400
Received: by blackpad.lan.raisama.net (Postfix, from userid 500)
	id 7BA30200965; Tue, 21 Aug 2012 12:43:08 -0300 (BRT)
From: Eduardo Habkost <ehabkost@redhat.com>
To: qemu-devel@nongnu.org
Date: Tue, 21 Aug 2012 12:42:55 -0300
Message-Id: <1345563782-11224-2-git-send-email-ehabkost@redhat.com>
In-Reply-To: <1345563782-11224-1-git-send-email-ehabkost@redhat.com>
References: <1345563782-11224-1-git-send-email-ehabkost@redhat.com>
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.25
X-Mailman-Approved-At: Tue, 21 Aug 2012 15:52:59 +0000
Cc: peter.maydell@linaro.org, jan.kiszka@siemens.com, mjt@tls.msk.ru,
	mdroth@linux.vnet.ibm.com, blauwirbel@gmail.com,
	kraxel@redhat.com, xen-devel@lists.xensource.com,
	i.mitsyanko@samsung.com, armbru@redhat.com, avi@redhat.com,
	anthony.perard@citrix.com, lersek@redhat.com,
	stefanha@linux.vnet.ibm.com, stefano.stabellini@eu.citrix.com,
	sw@weilnetz.de, imammedo@redhat.com, lcapitulino@redhat.com,
	rth@twiddle.net, kwolf@redhat.com, aliguori@us.ibm.com,
	mtosatti@redhat.com, pbonzini@redhat.com, afaerber@suse.de
Subject: [Xen-devel] [RFC 1/8] move qemu_irq typedef out of cpu-common.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Igor Mammedov <imammedo@redhat.com>

This is necessary for making CPUState a child of DeviceState without
causing circular header dependencies.

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
---
 hw/arm-misc.h | 1 +
 hw/bt.h       | 2 ++
 hw/devices.h  | 2 ++
 hw/irq.h      | 2 ++
 hw/omap.h     | 1 +
 hw/soc_dma.h  | 1 +
 hw/xen.h      | 1 +
 qemu-common.h | 1 -
 sysemu.h      | 1 +
 9 files changed, 11 insertions(+), 1 deletion(-)

diff --git a/hw/arm-misc.h b/hw/arm-misc.h
index bdd8fec..b13aa59 100644
--- a/hw/arm-misc.h
+++ b/hw/arm-misc.h
@@ -12,6 +12,7 @@
 #define ARM_MISC_H 1
 
 #include "memory.h"
+#include "hw/irq.h"
 
 /* The CPU is also modeled as an interrupt controller.  */
 #define ARM_PIC_CPU_IRQ 0
diff --git a/hw/bt.h b/hw/bt.h
index a48b8d4..ebf6a37 100644
--- a/hw/bt.h
+++ b/hw/bt.h
@@ -23,6 +23,8 @@
  * along with this program; if not, see <http://www.gnu.org/licenses/>.
  */
 
+#include "hw/irq.h"
+
 /* BD Address */
 typedef struct {
     uint8_t b[6];
diff --git a/hw/devices.h b/hw/devices.h
index 1a55c1e..c60bcab 100644
--- a/hw/devices.h
+++ b/hw/devices.h
@@ -1,6 +1,8 @@
 #ifndef QEMU_DEVICES_H
 #define QEMU_DEVICES_H
 
+#include "hw/irq.h"
+
 /* ??? Not all users of this file can include cpu-common.h.  */
 struct MemoryRegion;
 
diff --git a/hw/irq.h b/hw/irq.h
index 56c55f0..1339a3a 100644
--- a/hw/irq.h
+++ b/hw/irq.h
@@ -3,6 +3,8 @@
 
 /* Generic IRQ/GPIO pin infrastructure.  */
 
+typedef struct IRQState *qemu_irq;
+
 typedef void (*qemu_irq_handler)(void *opaque, int n, int level);
 
 void qemu_set_irq(qemu_irq irq, int level);
diff --git a/hw/omap.h b/hw/omap.h
index 413851b..8b08462 100644
--- a/hw/omap.h
+++ b/hw/omap.h
@@ -19,6 +19,7 @@
 #ifndef hw_omap_h
 #include "memory.h"
 # define hw_omap_h		"omap.h"
+#include "hw/irq.h"
 
 # define OMAP_EMIFS_BASE	0x00000000
 # define OMAP2_Q0_BASE		0x00000000
diff --git a/hw/soc_dma.h b/hw/soc_dma.h
index 904b26c..e386ace 100644
--- a/hw/soc_dma.h
+++ b/hw/soc_dma.h
@@ -19,6 +19,7 @@
  */
 
 #include "memory.h"
+#include "hw/irq.h"
 
 struct soc_dma_s;
 struct soc_dma_ch_s;
diff --git a/hw/xen.h b/hw/xen.h
index e5926b7..ff11dfd 100644
--- a/hw/xen.h
+++ b/hw/xen.h
@@ -8,6 +8,7 @@
  */
 #include <inttypes.h>
 
+#include "hw/irq.h"
 #include "qemu-common.h"
 
 /* xen-machine.c */
diff --git a/qemu-common.h b/qemu-common.h
index e5c2bcd..6677a30 100644
--- a/qemu-common.h
+++ b/qemu-common.h
@@ -273,7 +273,6 @@ typedef struct PCIEPort PCIEPort;
 typedef struct PCIESlot PCIESlot;
 typedef struct MSIMessage MSIMessage;
 typedef struct SerialState SerialState;
-typedef struct IRQState *qemu_irq;
 typedef struct PCMCIACardState PCMCIACardState;
 typedef struct MouseTransformInfo MouseTransformInfo;
 typedef struct uWireSlave uWireSlave;
diff --git a/sysemu.h b/sysemu.h
index 65552ac..f765821 100644
--- a/sysemu.h
+++ b/sysemu.h
@@ -9,6 +9,7 @@
 #include "qapi-types.h"
 #include "notify.h"
 #include "main-loop.h"
+#include "hw/irq.h"
 
 /* vl.c */
 
-- 
1.7.11.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 15:53:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 15:53:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3qlL-0006Li-5h; Tue, 21 Aug 2012 15:53:03 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ehabkost@redhat.com>) id 1T3qar-0005nt-Oo
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 15:42:14 +0000
Received: from [85.158.139.83:38175] by server-12.bemta-5.messagelabs.com id
	44/66-22359-55CA3305; Tue, 21 Aug 2012 15:42:13 +0000
X-Env-Sender: ehabkost@redhat.com
X-Msg-Ref: server-9.tower-182.messagelabs.com!1345563730!28624150!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQ0MzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32316 invoked from network); 21 Aug 2012 15:42:11 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-9.tower-182.messagelabs.com with SMTP;
	21 Aug 2012 15:42:11 -0000
Received: from int-mx01.intmail.prod.int.phx2.redhat.com
	(int-mx01.intmail.prod.int.phx2.redhat.com [10.5.11.11])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id q7LFg0eE008612
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 21 Aug 2012 11:42:00 -0400
Received: from blackpad.lan.raisama.net (vpn1-6-72.gru2.redhat.com
	[10.97.6.72])
	by int-mx01.intmail.prod.int.phx2.redhat.com (8.13.8/8.13.8) with ESMTP
	id q7LFfxxJ012328; Tue, 21 Aug 2012 11:41:59 -0400
Received: by blackpad.lan.raisama.net (Postfix, from userid 500)
	id 4CACC202864; Tue, 21 Aug 2012 12:43:09 -0300 (BRT)
From: Eduardo Habkost <ehabkost@redhat.com>
To: qemu-devel@nongnu.org
Date: Tue, 21 Aug 2012 12:42:59 -0300
Message-Id: <1345563782-11224-6-git-send-email-ehabkost@redhat.com>
In-Reply-To: <1345563782-11224-1-git-send-email-ehabkost@redhat.com>
References: <1345563782-11224-1-git-send-email-ehabkost@redhat.com>
X-Scanned-By: MIMEDefang 2.67 on 10.5.11.11
X-Mailman-Approved-At: Tue, 21 Aug 2012 15:52:59 +0000
Cc: peter.maydell@linaro.org, jan.kiszka@siemens.com, mjt@tls.msk.ru,
	mdroth@linux.vnet.ibm.com, blauwirbel@gmail.com,
	kraxel@redhat.com, xen-devel@lists.xensource.com,
	i.mitsyanko@samsung.com, armbru@redhat.com, avi@redhat.com,
	anthony.perard@citrix.com, lersek@redhat.com,
	stefanha@linux.vnet.ibm.com, stefano.stabellini@eu.citrix.com,
	sw@weilnetz.de, imammedo@redhat.com, lcapitulino@redhat.com,
	rth@twiddle.net, kwolf@redhat.com, aliguori@us.ibm.com,
	mtosatti@redhat.com, pbonzini@redhat.com, afaerber@suse.de
Subject: [Xen-devel] [RFC 5/8] split qdev into a core and code used only by
	qemu-system-*
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This also makes visible which parts of qdev we may want to split more
cleanly (they currently rely on #ifdefs).

Two parts are specific to qemu-system-* but still live inside qdev.c
(guarded by "#ifndef CONFIG_USER_ONLY"):

- vmstate handling
- reset function registration

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
---
 hw/Makefile.objs            |   1 +
 hw/qdev-properties-system.c | 329 ++++++++++++++++++++++++++++++++++++++++++++
 hw/qdev-properties.c        | 320 +-----------------------------------------
 hw/qdev-properties.h        |   1 +
 hw/qdev-system.c            |  93 +++++++++++++
 hw/qdev.c                   | 103 ++------------
 6 files changed, 435 insertions(+), 412 deletions(-)
 create mode 100644 hw/qdev-properties-system.c
 create mode 100644 hw/qdev-system.c

diff --git a/hw/Makefile.objs b/hw/Makefile.objs
index 7f57ed5..04d3b5e 100644
--- a/hw/Makefile.objs
+++ b/hw/Makefile.objs
@@ -177,6 +177,7 @@ common-obj-y += bt.o bt-l2cap.o bt-sdp.o bt-hci.o bt-hid.o
 common-obj-y += bt-hci-csr.o
 common-obj-y += msmouse.o ps2.o
 common-obj-y += qdev.o qdev-properties.o qdev-monitor.o
+common-obj-y += qdev-system.o qdev-properties-system.o
 common-obj-$(CONFIG_BRLAPI) += baum.o
 
 # xen backend driver support
diff --git a/hw/qdev-properties-system.c b/hw/qdev-properties-system.c
new file mode 100644
index 0000000..c42e656
--- /dev/null
+++ b/hw/qdev-properties-system.c
@@ -0,0 +1,329 @@
+#include "net.h"
+#include "qdev.h"
+#include "qerror.h"
+#include "blockdev.h"
+#include "hw/block-common.h"
+#include "net/hub.h"
+#include "qapi/qapi-visit-core.h"
+
+static void get_pointer(Object *obj, Visitor *v, Property *prop,
+                        const char *(*print)(void *ptr),
+                        const char *name, Error **errp)
+{
+    DeviceState *dev = DEVICE(obj);
+    void **ptr = qdev_get_prop_ptr(dev, prop);
+    char *p;
+
+    p = (char *) (*ptr ? print(*ptr) : "");
+    visit_type_str(v, &p, name, errp);
+}
+
+static void set_pointer(Object *obj, Visitor *v, Property *prop,
+                        int (*parse)(DeviceState *dev, const char *str,
+                                     void **ptr),
+                        const char *name, Error **errp)
+{
+    DeviceState *dev = DEVICE(obj);
+    Error *local_err = NULL;
+    void **ptr = qdev_get_prop_ptr(dev, prop);
+    char *str;
+    int ret;
+
+    if (dev->state != DEV_STATE_CREATED) {
+        error_set(errp, QERR_PERMISSION_DENIED);
+        return;
+    }
+
+    visit_type_str(v, &str, name, &local_err);
+    if (local_err) {
+        error_propagate(errp, local_err);
+        return;
+    }
+    if (!*str) {
+        g_free(str);
+        *ptr = NULL;
+        return;
+    }
+    ret = parse(dev, str, ptr);
+    error_set_from_qdev_prop_error(errp, ret, dev, prop, str);
+    g_free(str);
+}
+
+
+/* --- drive --- */
+
+static int parse_drive(DeviceState *dev, const char *str, void **ptr)
+{
+    BlockDriverState *bs;
+
+    bs = bdrv_find(str);
+    if (bs == NULL)
+        return -ENOENT;
+    if (bdrv_attach_dev(bs, dev) < 0)
+        return -EEXIST;
+    *ptr = bs;
+    return 0;
+}
+
+static void release_drive(Object *obj, const char *name, void *opaque)
+{
+    DeviceState *dev = DEVICE(obj);
+    Property *prop = opaque;
+    BlockDriverState **ptr = qdev_get_prop_ptr(dev, prop);
+
+    if (*ptr) {
+        bdrv_detach_dev(*ptr, dev);
+        blockdev_auto_del(*ptr);
+    }
+}
+
+static const char *print_drive(void *ptr)
+{
+    return bdrv_get_device_name(ptr);
+}
+
+static void get_drive(Object *obj, Visitor *v, void *opaque,
+                      const char *name, Error **errp)
+{
+    get_pointer(obj, v, opaque, print_drive, name, errp);
+}
+
+static void set_drive(Object *obj, Visitor *v, void *opaque,
+                      const char *name, Error **errp)
+{
+    set_pointer(obj, v, opaque, parse_drive, name, errp);
+}
+
+PropertyInfo qdev_prop_drive = {
+    .name  = "drive",
+    .get   = get_drive,
+    .set   = set_drive,
+    .release = release_drive,
+};
+
+/* --- character device --- */
+
+static int parse_chr(DeviceState *dev, const char *str, void **ptr)
+{
+    CharDriverState *chr = qemu_chr_find(str);
+    if (chr == NULL) {
+        return -ENOENT;
+    }
+    if (chr->avail_connections < 1) {
+        return -EEXIST;
+    }
+    *ptr = chr;
+    --chr->avail_connections;
+    return 0;
+}
+
+static void release_chr(Object *obj, const char *name, void *opaque)
+{
+    DeviceState *dev = DEVICE(obj);
+    Property *prop = opaque;
+    CharDriverState **ptr = qdev_get_prop_ptr(dev, prop);
+
+    if (*ptr) {
+        qemu_chr_add_handlers(*ptr, NULL, NULL, NULL, NULL);
+    }
+}
+
+
+static const char *print_chr(void *ptr)
+{
+    CharDriverState *chr = ptr;
+
+    return chr->label ? chr->label : "";
+}
+
+static void get_chr(Object *obj, Visitor *v, void *opaque,
+                    const char *name, Error **errp)
+{
+    get_pointer(obj, v, opaque, print_chr, name, errp);
+}
+
+static void set_chr(Object *obj, Visitor *v, void *opaque,
+                    const char *name, Error **errp)
+{
+    set_pointer(obj, v, opaque, parse_chr, name, errp);
+}
+
+PropertyInfo qdev_prop_chr = {
+    .name  = "chr",
+    .get   = get_chr,
+    .set   = set_chr,
+    .release = release_chr,
+};
+
+/* --- netdev device --- */
+
+static int parse_netdev(DeviceState *dev, const char *str, void **ptr)
+{
+    NetClientState *netdev = qemu_find_netdev(str);
+
+    if (netdev == NULL) {
+        return -ENOENT;
+    }
+    if (netdev->peer) {
+        return -EEXIST;
+    }
+    *ptr = netdev;
+    return 0;
+}
+
+static const char *print_netdev(void *ptr)
+{
+    NetClientState *netdev = ptr;
+
+    return netdev->name ? netdev->name : "";
+}
+
+static void get_netdev(Object *obj, Visitor *v, void *opaque,
+                       const char *name, Error **errp)
+{
+    get_pointer(obj, v, opaque, print_netdev, name, errp);
+}
+
+static void set_netdev(Object *obj, Visitor *v, void *opaque,
+                       const char *name, Error **errp)
+{
+    set_pointer(obj, v, opaque, parse_netdev, name, errp);
+}
+
+PropertyInfo qdev_prop_netdev = {
+    .name  = "netdev",
+    .get   = get_netdev,
+    .set   = set_netdev,
+};
+
+/* --- vlan --- */
+
+static int print_vlan(DeviceState *dev, Property *prop, char *dest, size_t len)
+{
+    NetClientState **ptr = qdev_get_prop_ptr(dev, prop);
+
+    if (*ptr) {
+        int id;
+        if (!net_hub_id_for_client(*ptr, &id)) {
+            return snprintf(dest, len, "%d", id);
+        }
+    }
+
+    return snprintf(dest, len, "<null>");
+}
+
+static void get_vlan(Object *obj, Visitor *v, void *opaque,
+                     const char *name, Error **errp)
+{
+    DeviceState *dev = DEVICE(obj);
+    Property *prop = opaque;
+    NetClientState **ptr = qdev_get_prop_ptr(dev, prop);
+    int32_t id = -1;
+
+    if (*ptr) {
+        int hub_id;
+        if (!net_hub_id_for_client(*ptr, &hub_id)) {
+            id = hub_id;
+        }
+    }
+
+    visit_type_int32(v, &id, name, errp);
+}
+
+static void set_vlan(Object *obj, Visitor *v, void *opaque,
+                     const char *name, Error **errp)
+{
+    DeviceState *dev = DEVICE(obj);
+    Property *prop = opaque;
+    NetClientState **ptr = qdev_get_prop_ptr(dev, prop);
+    Error *local_err = NULL;
+    int32_t id;
+    NetClientState *hubport;
+
+    if (dev->state != DEV_STATE_CREATED) {
+        error_set(errp, QERR_PERMISSION_DENIED);
+        return;
+    }
+
+    visit_type_int32(v, &id, name, &local_err);
+    if (local_err) {
+        error_propagate(errp, local_err);
+        return;
+    }
+    if (id == -1) {
+        *ptr = NULL;
+        return;
+    }
+
+    hubport = net_hub_port_find(id);
+    if (!hubport) {
+        error_set(errp, QERR_INVALID_PARAMETER_VALUE,
+                  name, prop->info->name);
+        return;
+    }
+    *ptr = hubport;
+}
+
+PropertyInfo qdev_prop_vlan = {
+    .name  = "vlan",
+    .print = print_vlan,
+    .get   = get_vlan,
+    .set   = set_vlan,
+};
+
+
+int qdev_prop_set_drive(DeviceState *dev, const char *name, BlockDriverState *value)
+{
+    Error *errp = NULL;
+    const char *bdrv_name = value ? bdrv_get_device_name(value) : "";
+    object_property_set_str(OBJECT(dev), bdrv_name,
+                            name, &errp);
+    if (errp) {
+        qerror_report_err(errp);
+        error_free(errp);
+        return -1;
+    }
+    return 0;
+}
+
+void qdev_prop_set_drive_nofail(DeviceState *dev, const char *name, BlockDriverState *value)
+{
+    if (qdev_prop_set_drive(dev, name, value) < 0) {
+        exit(1);
+    }
+}
+
+void qdev_prop_set_chr(DeviceState *dev, const char *name, CharDriverState *value)
+{
+    Error *errp = NULL;
+    assert(!value || value->label);
+    object_property_set_str(OBJECT(dev),
+                            value ? value->label : "", name, &errp);
+    assert_no_error(errp);
+}
+
+void qdev_prop_set_netdev(DeviceState *dev, const char *name, NetClientState *value)
+{
+    Error *errp = NULL;
+    assert(!value || value->name);
+    object_property_set_str(OBJECT(dev),
+                            value ? value->name : "", name, &errp);
+    assert_no_error(errp);
+}
+
+static int qdev_add_one_global(QemuOpts *opts, void *opaque)
+{
+    GlobalProperty *g;
+
+    g = g_malloc0(sizeof(*g));
+    g->driver   = qemu_opt_get(opts, "driver");
+    g->property = qemu_opt_get(opts, "property");
+    g->value    = qemu_opt_get(opts, "value");
+    qdev_prop_register_global(g);
+    return 0;
+}
+
+void qemu_add_globals(void)
+{
+    qemu_opts_foreach(qemu_find_opts("global"), qdev_add_one_global, NULL, 0);
+}
diff --git a/hw/qdev-properties.c b/hw/qdev-properties.c
index 81d901c..917d986 100644
--- a/hw/qdev-properties.c
+++ b/hw/qdev-properties.c
@@ -13,49 +13,6 @@ void *qdev_get_prop_ptr(DeviceState *dev, Property *prop)
     return ptr;
 }
 
-static void get_pointer(Object *obj, Visitor *v, Property *prop,
-                        const char *(*print)(void *ptr),
-                        const char *name, Error **errp)
-{
-    DeviceState *dev = DEVICE(obj);
-    void **ptr = qdev_get_prop_ptr(dev, prop);
-    char *p;
-
-    p = (char *) (*ptr ? print(*ptr) : "");
-    visit_type_str(v, &p, name, errp);
-}
-
-static void set_pointer(Object *obj, Visitor *v, Property *prop,
-                        int (*parse)(DeviceState *dev, const char *str,
-                                     void **ptr),
-                        const char *name, Error **errp)
-{
-    DeviceState *dev = DEVICE(obj);
-    Error *local_err = NULL;
-    void **ptr = qdev_get_prop_ptr(dev, prop);
-    char *str;
-    int ret;
-
-    if (dev->state != DEV_STATE_CREATED) {
-        error_set(errp, QERR_PERMISSION_DENIED);
-        return;
-    }
-
-    visit_type_str(v, &str, name, &local_err);
-    if (local_err) {
-        error_propagate(errp, local_err);
-        return;
-    }
-    if (!*str) {
-        g_free(str);
-        *ptr = NULL;
-        return;
-    }
-    ret = parse(dev, str, ptr);
-    error_set_from_qdev_prop_error(errp, ret, dev, prop, str);
-    g_free(str);
-}
-
 static void get_enum(Object *obj, Visitor *v, void *opaque,
                      const char *name, Error **errp)
 {
@@ -476,227 +433,6 @@ PropertyInfo qdev_prop_string = {
     .set   = set_string,
 };
 
-/* --- drive --- */
-
-static int parse_drive(DeviceState *dev, const char *str, void **ptr)
-{
-    BlockDriverState *bs;
-
-    bs = bdrv_find(str);
-    if (bs == NULL)
-        return -ENOENT;
-    if (bdrv_attach_dev(bs, dev) < 0)
-        return -EEXIST;
-    *ptr = bs;
-    return 0;
-}
-
-static void release_drive(Object *obj, const char *name, void *opaque)
-{
-    DeviceState *dev = DEVICE(obj);
-    Property *prop = opaque;
-    BlockDriverState **ptr = qdev_get_prop_ptr(dev, prop);
-
-    if (*ptr) {
-        bdrv_detach_dev(*ptr, dev);
-        blockdev_auto_del(*ptr);
-    }
-}
-
-static const char *print_drive(void *ptr)
-{
-    return bdrv_get_device_name(ptr);
-}
-
-static void get_drive(Object *obj, Visitor *v, void *opaque,
-                      const char *name, Error **errp)
-{
-    get_pointer(obj, v, opaque, print_drive, name, errp);
-}
-
-static void set_drive(Object *obj, Visitor *v, void *opaque,
-                      const char *name, Error **errp)
-{
-    set_pointer(obj, v, opaque, parse_drive, name, errp);
-}
-
-PropertyInfo qdev_prop_drive = {
-    .name  = "drive",
-    .get   = get_drive,
-    .set   = set_drive,
-    .release = release_drive,
-};
-
-/* --- character device --- */
-
-static int parse_chr(DeviceState *dev, const char *str, void **ptr)
-{
-    CharDriverState *chr = qemu_chr_find(str);
-    if (chr == NULL) {
-        return -ENOENT;
-    }
-    if (chr->avail_connections < 1) {
-        return -EEXIST;
-    }
-    *ptr = chr;
-    --chr->avail_connections;
-    return 0;
-}
-
-static void release_chr(Object *obj, const char *name, void *opaque)
-{
-    DeviceState *dev = DEVICE(obj);
-    Property *prop = opaque;
-    CharDriverState **ptr = qdev_get_prop_ptr(dev, prop);
-
-    if (*ptr) {
-        qemu_chr_add_handlers(*ptr, NULL, NULL, NULL, NULL);
-    }
-}
-
-
-static const char *print_chr(void *ptr)
-{
-    CharDriverState *chr = ptr;
-
-    return chr->label ? chr->label : "";
-}
-
-static void get_chr(Object *obj, Visitor *v, void *opaque,
-                    const char *name, Error **errp)
-{
-    get_pointer(obj, v, opaque, print_chr, name, errp);
-}
-
-static void set_chr(Object *obj, Visitor *v, void *opaque,
-                    const char *name, Error **errp)
-{
-    set_pointer(obj, v, opaque, parse_chr, name, errp);
-}
-
-PropertyInfo qdev_prop_chr = {
-    .name  = "chr",
-    .get   = get_chr,
-    .set   = set_chr,
-    .release = release_chr,
-};
-
-/* --- netdev device --- */
-
-static int parse_netdev(DeviceState *dev, const char *str, void **ptr)
-{
-    NetClientState *netdev = qemu_find_netdev(str);
-
-    if (netdev == NULL) {
-        return -ENOENT;
-    }
-    if (netdev->peer) {
-        return -EEXIST;
-    }
-    *ptr = netdev;
-    return 0;
-}
-
-static const char *print_netdev(void *ptr)
-{
-    NetClientState *netdev = ptr;
-
-    return netdev->name ? netdev->name : "";
-}
-
-static void get_netdev(Object *obj, Visitor *v, void *opaque,
-                       const char *name, Error **errp)
-{
-    get_pointer(obj, v, opaque, print_netdev, name, errp);
-}
-
-static void set_netdev(Object *obj, Visitor *v, void *opaque,
-                       const char *name, Error **errp)
-{
-    set_pointer(obj, v, opaque, parse_netdev, name, errp);
-}
-
-PropertyInfo qdev_prop_netdev = {
-    .name  = "netdev",
-    .get   = get_netdev,
-    .set   = set_netdev,
-};
-
-/* --- vlan --- */
-
-static int print_vlan(DeviceState *dev, Property *prop, char *dest, size_t len)
-{
-    NetClientState **ptr = qdev_get_prop_ptr(dev, prop);
-
-    if (*ptr) {
-        int id;
-        if (!net_hub_id_for_client(*ptr, &id)) {
-            return snprintf(dest, len, "%d", id);
-        }
-    }
-
-    return snprintf(dest, len, "<null>");
-}
-
-static void get_vlan(Object *obj, Visitor *v, void *opaque,
-                     const char *name, Error **errp)
-{
-    DeviceState *dev = DEVICE(obj);
-    Property *prop = opaque;
-    NetClientState **ptr = qdev_get_prop_ptr(dev, prop);
-    int32_t id = -1;
-
-    if (*ptr) {
-        int hub_id;
-        if (!net_hub_id_for_client(*ptr, &hub_id)) {
-            id = hub_id;
-        }
-    }
-
-    visit_type_int32(v, &id, name, errp);
-}
-
-static void set_vlan(Object *obj, Visitor *v, void *opaque,
-                     const char *name, Error **errp)
-{
-    DeviceState *dev = DEVICE(obj);
-    Property *prop = opaque;
-    NetClientState **ptr = qdev_get_prop_ptr(dev, prop);
-    Error *local_err = NULL;
-    int32_t id;
-    NetClientState *hubport;
-
-    if (dev->state != DEV_STATE_CREATED) {
-        error_set(errp, QERR_PERMISSION_DENIED);
-        return;
-    }
-
-    visit_type_int32(v, &id, name, &local_err);
-    if (local_err) {
-        error_propagate(errp, local_err);
-        return;
-    }
-    if (id == -1) {
-        *ptr = NULL;
-        return;
-    }
-
-    hubport = net_hub_port_find(id);
-    if (!hubport) {
-        error_set(errp, QERR_INVALID_PARAMETER_VALUE,
-                  name, prop->info->name);
-        return;
-    }
-    *ptr = hubport;
-}
-
-PropertyInfo qdev_prop_vlan = {
-    .name  = "vlan",
-    .print = print_vlan,
-    .get   = get_vlan,
-    .set   = set_vlan,
-};
-
 /* --- pointer --- */
 
 /* Not a proper property, just for dirty hacks.  TODO Remove it!  */
@@ -1158,44 +894,6 @@ void qdev_prop_set_string(DeviceState *dev, const char *name, const char *value)
     assert_no_error(errp);
 }
 
-int qdev_prop_set_drive(DeviceState *dev, const char *name, BlockDriverState *value)
-{
-    Error *errp = NULL;
-    const char *bdrv_name = value ? bdrv_get_device_name(value) : "";
-    object_property_set_str(OBJECT(dev), bdrv_name,
-                            name, &errp);
-    if (errp) {
-        qerror_report_err(errp);
-        error_free(errp);
-        return -1;
-    }
-    return 0;
-}
-
-void qdev_prop_set_drive_nofail(DeviceState *dev, const char *name, BlockDriverState *value)
-{
-    if (qdev_prop_set_drive(dev, name, value) < 0) {
-        exit(1);
-    }
-}
-void qdev_prop_set_chr(DeviceState *dev, const char *name, CharDriverState *value)
-{
-    Error *errp = NULL;
-    assert(!value || value->label);
-    object_property_set_str(OBJECT(dev),
-                            value ? value->label : "", name, &errp);
-    assert_no_error(errp);
-}
-
-void qdev_prop_set_netdev(DeviceState *dev, const char *name, NetClientState *value)
-{
-    Error *errp = NULL;
-    assert(!value || value->name);
-    object_property_set_str(OBJECT(dev),
-                            value ? value->name : "", name, &errp);
-    assert_no_error(errp);
-}
-
 void qdev_prop_set_macaddr(DeviceState *dev, const char *name, uint8_t *value)
 {
     Error *errp = NULL;
@@ -1231,7 +929,7 @@ void qdev_prop_set_ptr(DeviceState *dev, const char *name, void *value)
 
 static QTAILQ_HEAD(, GlobalProperty) global_props = QTAILQ_HEAD_INITIALIZER(global_props);
 
-static void qdev_prop_register_global(GlobalProperty *prop)
+void qdev_prop_register_global(GlobalProperty *prop)
 {
     QTAILQ_INSERT_TAIL(&global_props, prop, next);
 }
@@ -1263,19 +961,3 @@ void qdev_prop_set_globals(DeviceState *dev)
     } while (class);
 }
 
-static int qdev_add_one_global(QemuOpts *opts, void *opaque)
-{
-    GlobalProperty *g;
-
-    g = g_malloc0(sizeof(*g));
-    g->driver   = qemu_opt_get(opts, "driver");
-    g->property = qemu_opt_get(opts, "property");
-    g->value    = qemu_opt_get(opts, "value");
-    qdev_prop_register_global(g);
-    return 0;
-}
-
-void qemu_add_globals(void)
-{
-    qemu_opts_foreach(qemu_find_opts("global"), qdev_add_one_global, NULL, 0);
-}
diff --git a/hw/qdev-properties.h b/hw/qdev-properties.h
index e93336a..a145084 100644
--- a/hw/qdev-properties.h
+++ b/hw/qdev-properties.h
@@ -114,6 +114,7 @@ void qdev_prop_set_enum(DeviceState *dev, const char *name, int value);
 /* FIXME: Remove opaque pointer properties.  */
 void qdev_prop_set_ptr(DeviceState *dev, const char *name, void *value);
 
+void qdev_prop_register_global(GlobalProperty *prop);
 void qdev_prop_register_global_list(GlobalProperty *props);
 void qdev_prop_set_globals(DeviceState *dev);
 void error_set_from_qdev_prop_error(Error **errp, int ret, DeviceState *dev,
diff --git a/hw/qdev-system.c b/hw/qdev-system.c
new file mode 100644
index 0000000..4891d2f
--- /dev/null
+++ b/hw/qdev-system.c
@@ -0,0 +1,93 @@
+#include "net.h"
+#include "qdev.h"
+
+void qdev_init_gpio_in(DeviceState *dev, qemu_irq_handler handler, int n)
+{
+    assert(dev->num_gpio_in == 0);
+    dev->num_gpio_in = n;
+    dev->gpio_in = qemu_allocate_irqs(handler, dev, n);
+}
+
+void qdev_init_gpio_out(DeviceState *dev, qemu_irq *pins, int n)
+{
+    assert(dev->num_gpio_out == 0);
+    dev->num_gpio_out = n;
+    dev->gpio_out = pins;
+}
+
+qemu_irq qdev_get_gpio_in(DeviceState *dev, int n)
+{
+    assert(n >= 0 && n < dev->num_gpio_in);
+    return dev->gpio_in[n];
+}
+
+void qdev_connect_gpio_out(DeviceState * dev, int n, qemu_irq pin)
+{
+    assert(n >= 0 && n < dev->num_gpio_out);
+    dev->gpio_out[n] = pin;
+}
+
+void qdev_set_nic_properties(DeviceState *dev, NICInfo *nd)
+{
+    qdev_prop_set_macaddr(dev, "mac", nd->macaddr.a);
+    if (nd->netdev)
+        qdev_prop_set_netdev(dev, "netdev", nd->netdev);
+    if (nd->nvectors != DEV_NVECTORS_UNSPECIFIED &&
+        object_property_find(OBJECT(dev), "vectors", NULL)) {
+        qdev_prop_set_uint32(dev, "vectors", nd->nvectors);
+    }
+    nd->instantiated = 1;
+}
+
+BusState *qdev_get_child_bus(DeviceState *dev, const char *name)
+{
+    BusState *bus;
+
+    QLIST_FOREACH(bus, &dev->child_bus, sibling) {
+        if (strcmp(name, bus->name) == 0) {
+            return bus;
+        }
+    }
+    return NULL;
+}
+
+/* Create a new device.  This only initializes the device state structure
+   and allows properties to be set.  qdev_init should be called to
+   initialize the actual device emulation.  */
+DeviceState *qdev_create(BusState *bus, const char *name)
+{
+    DeviceState *dev;
+
+    dev = qdev_try_create(bus, name);
+    if (!dev) {
+        if (bus) {
+            hw_error("Unknown device '%s' for bus '%s'\n", name,
+                     object_get_typename(OBJECT(bus)));
+        } else {
+            hw_error("Unknown device '%s' for default sysbus\n", name);
+        }
+    }
+
+    return dev;
+}
+
+DeviceState *qdev_try_create(BusState *bus, const char *type)
+{
+    DeviceState *dev;
+
+    if (object_class_by_name(type) == NULL) {
+        return NULL;
+    }
+    dev = DEVICE(object_new(type));
+    if (!dev) {
+        return NULL;
+    }
+
+    if (!bus) {
+        bus = sysbus_get_default();
+    }
+
+    qdev_set_parent_bus(dev, bus);
+
+    return dev;
+}
diff --git a/hw/qdev.c b/hw/qdev.c
index 36c3e4b..3dc38f7 100644
--- a/hw/qdev.c
+++ b/hw/qdev.c
@@ -25,7 +25,6 @@
    inherit from a particular bus (e.g. PCI or I2C) rather than
    this API directly.  */
 
-#include "net.h"
 #include "qdev.h"
 #include "sysemu.h"
 #include "error.h"
@@ -105,47 +104,6 @@ void qdev_set_parent_bus(DeviceState *dev, BusState *bus)
     bus_add_child(bus, dev);
 }
 
-/* Create a new device.  This only initializes the device state structure
-   and allows properties to be set.  qdev_init should be called to
-   initialize the actual device emulation.  */
-DeviceState *qdev_create(BusState *bus, const char *name)
-{
-    DeviceState *dev;
-
-    dev = qdev_try_create(bus, name);
-    if (!dev) {
-        if (bus) {
-            hw_error("Unknown device '%s' for bus '%s'\n", name,
-                     object_get_typename(OBJECT(bus)));
-        } else {
-            hw_error("Unknown device '%s' for default sysbus\n", name);
-        }
-    }
-
-    return dev;
-}
-
-DeviceState *qdev_try_create(BusState *bus, const char *type)
-{
-    DeviceState *dev;
-
-    if (object_class_by_name(type) == NULL) {
-        return NULL;
-    }
-    dev = DEVICE(object_new(type));
-    if (!dev) {
-        return NULL;
-    }
-
-    if (!bus) {
-        bus = sysbus_get_default();
-    }
-
-    qdev_set_parent_bus(dev, bus);
-
-    return dev;
-}
-
 /* Initialize a device.  Device properties should be set before calling
    this function.  IRQs and MMIO regions should be connected/mapped after
    calling this function.
@@ -175,11 +133,13 @@ int qdev_init(DeviceState *dev)
         g_free(name);
     }
 
+#ifndef CONFIG_USER_ONLY
     if (qdev_get_vmsd(dev)) {
         vmstate_register_with_alias_id(dev, -1, qdev_get_vmsd(dev), dev,
                                        dev->instance_id_alias,
                                        dev->alias_required_for_version);
     }
+#endif
     dev->state = DEV_STATE_INITIALIZED;
     if (dev->hotplugged) {
         device_reset(dev);
@@ -292,56 +252,6 @@ BusState *qdev_get_parent_bus(DeviceState *dev)
     return dev->parent_bus;
 }
 
-void qdev_init_gpio_in(DeviceState *dev, qemu_irq_handler handler, int n)
-{
-    assert(dev->num_gpio_in == 0);
-    dev->num_gpio_in = n;
-    dev->gpio_in = qemu_allocate_irqs(handler, dev, n);
-}
-
-void qdev_init_gpio_out(DeviceState *dev, qemu_irq *pins, int n)
-{
-    assert(dev->num_gpio_out == 0);
-    dev->num_gpio_out = n;
-    dev->gpio_out = pins;
-}
-
-qemu_irq qdev_get_gpio_in(DeviceState *dev, int n)
-{
-    assert(n >= 0 && n < dev->num_gpio_in);
-    return dev->gpio_in[n];
-}
-
-void qdev_connect_gpio_out(DeviceState * dev, int n, qemu_irq pin)
-{
-    assert(n >= 0 && n < dev->num_gpio_out);
-    dev->gpio_out[n] = pin;
-}
-
-void qdev_set_nic_properties(DeviceState *dev, NICInfo *nd)
-{
-    qdev_prop_set_macaddr(dev, "mac", nd->macaddr.a);
-    if (nd->netdev)
-        qdev_prop_set_netdev(dev, "netdev", nd->netdev);
-    if (nd->nvectors != DEV_NVECTORS_UNSPECIFIED &&
-        object_property_find(OBJECT(dev), "vectors", NULL)) {
-        qdev_prop_set_uint32(dev, "vectors", nd->nvectors);
-    }
-    nd->instantiated = 1;
-}
-
-BusState *qdev_get_child_bus(DeviceState *dev, const char *name)
-{
-    BusState *bus;
-
-    QLIST_FOREACH(bus, &dev->child_bus, sibling) {
-        if (strcmp(name, bus->name) == 0) {
-            return bus;
-        }
-    }
-    return NULL;
-}
-
 int qbus_walk_children(BusState *bus, qdev_walkerfn *devfn,
                        qbus_walkerfn *busfn, void *opaque)
 {
@@ -440,11 +350,14 @@ static void qbus_realize(BusState *bus)
         QLIST_INSERT_HEAD(&bus->parent->child_bus, bus, sibling);
         bus->parent->num_child_bus++;
         object_property_add_child(OBJECT(bus->parent), bus->name, OBJECT(bus), NULL);
-    } else if (bus != sysbus_get_default()) {
+    }
+#ifndef CONFIG_USER_ONLY
+    else if (bus != sysbus_get_default()) {
         /* TODO: once all bus devices are qdevified,
            only reset handler for main_system_bus should be registered here. */
         qemu_register_reset(qbus_reset_all_fn, bus);
     }
+#endif
 }
 
 void qbus_create_inplace(BusState *bus, const char *typename,
@@ -703,9 +616,11 @@ static void device_finalize(Object *obj)
             bus = QLIST_FIRST(&dev->child_bus);
             qbus_free(bus);
         }
+#ifndef CONFIG_USER_ONLY
         if (qdev_get_vmsd(dev)) {
             vmstate_unregister(dev, qdev_get_vmsd(dev), dev);
         }
+#endif
         if (dc->exit) {
             dc->exit(dev);
         }
@@ -779,8 +694,10 @@ static void qbus_finalize(Object *obj)
         QLIST_REMOVE(bus, sibling);
         bus->parent->num_child_bus--;
     } else {
+#ifndef CONFIG_USER_ONLY
         assert(bus != sysbus_get_default()); /* main_system_bus is never freed */
         qemu_unregister_reset(qbus_reset_all_fn, bus);
+#endif
     }
     g_free((char *)bus->name);
 }
-- 
1.7.11.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 15:53:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 15:53:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3qlI-0006Kw-Th; Tue, 21 Aug 2012 15:53:00 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ehabkost@redhat.com>) id 1T3qan-0005nh-GJ
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 15:42:09 +0000
Received: from [85.158.143.35:23346] by server-1.bemta-4.messagelabs.com id
	B1/AF-07754-05CA3305; Tue, 21 Aug 2012 15:42:08 +0000
X-Env-Sender: ehabkost@redhat.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1345563727!15303658!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQ0MzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22103 invoked from network); 21 Aug 2012 15:42:07 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-16.tower-21.messagelabs.com with SMTP;
	21 Aug 2012 15:42:07 -0000
Received: from int-mx11.intmail.prod.int.phx2.redhat.com
	(int-mx11.intmail.prod.int.phx2.redhat.com [10.5.11.24])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id q7LFg0No017856
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 21 Aug 2012 11:42:00 -0400
Received: from blackpad.lan.raisama.net (vpn1-6-72.gru2.redhat.com
	[10.97.6.72])
	by int-mx11.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP
	id q7LFfx4Y006368; Tue, 21 Aug 2012 11:41:59 -0400
Received: by blackpad.lan.raisama.net (Postfix, from userid 500)
	id 9AE2920286A; Tue, 21 Aug 2012 12:43:09 -0300 (BRT)
From: Eduardo Habkost <ehabkost@redhat.com>
To: qemu-devel@nongnu.org
Date: Tue, 21 Aug 2012 12:43:02 -0300
Message-Id: <1345563782-11224-9-git-send-email-ehabkost@redhat.com>
In-Reply-To: <1345563782-11224-1-git-send-email-ehabkost@redhat.com>
References: <1345563782-11224-1-git-send-email-ehabkost@redhat.com>
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.24
X-Mailman-Approved-At: Tue, 21 Aug 2012 15:52:59 +0000
Cc: peter.maydell@linaro.org, jan.kiszka@siemens.com, mjt@tls.msk.ru,
	mdroth@linux.vnet.ibm.com, blauwirbel@gmail.com,
	kraxel@redhat.com, xen-devel@lists.xensource.com,
	i.mitsyanko@samsung.com, armbru@redhat.com, avi@redhat.com,
	anthony.perard@citrix.com, lersek@redhat.com,
	stefanha@linux.vnet.ibm.com, stefano.stabellini@eu.citrix.com,
	sw@weilnetz.de, imammedo@redhat.com, lcapitulino@redhat.com,
	rth@twiddle.net, kwolf@redhat.com, aliguori@us.ibm.com,
	mtosatti@redhat.com, pbonzini@redhat.com, afaerber@suse.de
Subject: [Xen-devel] [RFC 8/8] make CPU a child of DeviceState
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Igor Mammedov <imammedo@redhat.com>

[ehabkost: change CPU type declaration to have TYPE_DEVICE as parent]

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
---
 include/qemu/cpu.h | 6 +++---
 qom/cpu.c          | 3 ++-
 2 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/include/qemu/cpu.h b/include/qemu/cpu.h
index ad706a6..ac44057 100644
--- a/include/qemu/cpu.h
+++ b/include/qemu/cpu.h
@@ -20,7 +20,7 @@
 #ifndef QEMU_CPU_H
 #define QEMU_CPU_H
 
-#include "qemu/object.h"
+#include "hw/qdev-core.h"
 #include "qemu-thread.h"
 
 /**
@@ -46,7 +46,7 @@ typedef struct CPUState CPUState;
  */
 typedef struct CPUClass {
     /*< private >*/
-    ObjectClass parent_class;
+    DeviceClass parent_class;
     /*< public >*/
 
     void (*reset)(CPUState *cpu);
@@ -59,7 +59,7 @@ typedef struct CPUClass {
  */
 struct CPUState {
     /*< private >*/
-    Object parent_obj;
+    DeviceState parent_obj;
     /*< public >*/
 
     struct QemuThread *thread;
diff --git a/qom/cpu.c b/qom/cpu.c
index 5b36046..f59db7d 100644
--- a/qom/cpu.c
+++ b/qom/cpu.c
@@ -20,6 +20,7 @@
 
 #include "qemu/cpu.h"
 #include "qemu-common.h"
+#include "hw/qdev-core.h"
 
 void cpu_reset(CPUState *cpu)
 {
@@ -43,7 +44,7 @@ static void cpu_class_init(ObjectClass *klass, void *data)
 
 static TypeInfo cpu_type_info = {
     .name = TYPE_CPU,
-    .parent = TYPE_OBJECT,
+    .parent = TYPE_DEVICE,
     .instance_size = sizeof(CPUState),
     .abstract = true,
     .class_size = sizeof(CPUClass),
-- 
1.7.11.4



From xen-devel-bounces@lists.xen.org Tue Aug 21 15:53:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 15:53:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3qlL-0006M1-KJ; Tue, 21 Aug 2012 15:53:03 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <ehabkost@redhat.com>) id 1T3qav-0005oi-RD
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 15:42:18 +0000
X-Env-Sender: ehabkost@redhat.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1345563730!8606005!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQ0MzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15416 invoked from network); 21 Aug 2012 15:42:11 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-8.tower-27.messagelabs.com with SMTP;
	21 Aug 2012 15:42:11 -0000
Received: from int-mx09.intmail.prod.int.phx2.redhat.com
	(int-mx09.intmail.prod.int.phx2.redhat.com [10.5.11.22])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id q7LFfwFM002787
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 21 Aug 2012 11:41:58 -0400
Received: from blackpad.lan.raisama.net (vpn1-6-72.gru2.redhat.com
	[10.97.6.72])
	by int-mx09.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP
	id q7LFfvtV017579; Tue, 21 Aug 2012 11:41:57 -0400
Received: by blackpad.lan.raisama.net (Postfix, from userid 500)
	id 210A32019DD; Tue, 21 Aug 2012 12:43:09 -0300 (BRT)
From: Eduardo Habkost <ehabkost@redhat.com>
To: qemu-devel@nongnu.org
Date: Tue, 21 Aug 2012 12:42:57 -0300
Message-Id: <1345563782-11224-4-git-send-email-ehabkost@redhat.com>
In-Reply-To: <1345563782-11224-1-git-send-email-ehabkost@redhat.com>
References: <1345563782-11224-1-git-send-email-ehabkost@redhat.com>
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.22
X-Mailman-Approved-At: Tue, 21 Aug 2012 15:52:59 +0000
Cc: peter.maydell@linaro.org, jan.kiszka@siemens.com, mjt@tls.msk.ru,
	mdroth@linux.vnet.ibm.com, blauwirbel@gmail.com,
	kraxel@redhat.com, xen-devel@lists.xensource.com,
	i.mitsyanko@samsung.com, armbru@redhat.com, avi@redhat.com,
	anthony.perard@citrix.com, lersek@redhat.com,
	stefanha@linux.vnet.ibm.com, stefano.stabellini@eu.citrix.com,
	sw@weilnetz.de, imammedo@redhat.com, lcapitulino@redhat.com,
	rth@twiddle.net, kwolf@redhat.com, aliguori@us.ibm.com,
	mtosatti@redhat.com, pbonzini@redhat.com, afaerber@suse.de
Subject: [Xen-devel] [RFC 3/8] qapi-types.h doesn't really need to include
	qemu-common.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Igor Mammedov <imammedo@redhat.com>

needed to prevent build breakage when CPU becomes a child of DeviceState

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
---
 scripts/qapi-types.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/scripts/qapi-types.py b/scripts/qapi-types.py
index cf601ae..f34addb 100644
--- a/scripts/qapi-types.py
+++ b/scripts/qapi-types.py
@@ -263,7 +263,7 @@ fdecl.write(mcgen('''
 #ifndef %(guard)s
 #define %(guard)s
 
-#include "qemu-common.h"
+#include <stdbool.h>
 
 ''',
                   guard=guardname(h_file)))
-- 
1.7.11.4



From xen-devel-bounces@lists.xen.org Tue Aug 21 15:53:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 15:53:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3qlK-0006LL-Bd; Tue, 21 Aug 2012 15:53:02 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ehabkost@redhat.com>) id 1T3qao-0005nt-MB
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 15:42:10 +0000
Received: from [85.158.139.83:58408] by server-12.bemta-5.messagelabs.com id
	11/46-22359-15CA3305; Tue, 21 Aug 2012 15:42:09 +0000
X-Env-Sender: ehabkost@redhat.com
X-Msg-Ref: server-12.tower-182.messagelabs.com!1345563728!29307115!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQ0MzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31042 invoked from network); 21 Aug 2012 15:42:08 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-12.tower-182.messagelabs.com with SMTP;
	21 Aug 2012 15:42:08 -0000
Received: from int-mx12.intmail.prod.int.phx2.redhat.com
	(int-mx12.intmail.prod.int.phx2.redhat.com [10.5.11.25])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id q7LFfxHs008604
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 21 Aug 2012 11:41:59 -0400
Received: from blackpad.lan.raisama.net (vpn1-6-72.gru2.redhat.com
	[10.97.6.72])
	by int-mx12.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP
	id q7LFfvQU024379; Tue, 21 Aug 2012 11:41:57 -0400
Received: by blackpad.lan.raisama.net (Postfix, from userid 500)
	id 7BA30200965; Tue, 21 Aug 2012 12:43:08 -0300 (BRT)
From: Eduardo Habkost <ehabkost@redhat.com>
To: qemu-devel@nongnu.org
Date: Tue, 21 Aug 2012 12:42:55 -0300
Message-Id: <1345563782-11224-2-git-send-email-ehabkost@redhat.com>
In-Reply-To: <1345563782-11224-1-git-send-email-ehabkost@redhat.com>
References: <1345563782-11224-1-git-send-email-ehabkost@redhat.com>
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.25
X-Mailman-Approved-At: Tue, 21 Aug 2012 15:52:59 +0000
Cc: peter.maydell@linaro.org, jan.kiszka@siemens.com, mjt@tls.msk.ru,
	mdroth@linux.vnet.ibm.com, blauwirbel@gmail.com,
	kraxel@redhat.com, xen-devel@lists.xensource.com,
	i.mitsyanko@samsung.com, armbru@redhat.com, avi@redhat.com,
	anthony.perard@citrix.com, lersek@redhat.com,
	stefanha@linux.vnet.ibm.com, stefano.stabellini@eu.citrix.com,
	sw@weilnetz.de, imammedo@redhat.com, lcapitulino@redhat.com,
	rth@twiddle.net, kwolf@redhat.com, aliguori@us.ibm.com,
	mtosatti@redhat.com, pbonzini@redhat.com, afaerber@suse.de
Subject: [Xen-devel] [RFC 1/8] move qemu_irq typedef out of cpu-common.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Igor Mammedov <imammedo@redhat.com>

It's necessary for making CPU a child of DEVICE without
causing circular header dependencies.

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
---
 hw/arm-misc.h | 1 +
 hw/bt.h       | 2 ++
 hw/devices.h  | 2 ++
 hw/irq.h      | 2 ++
 hw/omap.h     | 1 +
 hw/soc_dma.h  | 1 +
 hw/xen.h      | 1 +
 qemu-common.h | 1 -
 sysemu.h      | 1 +
 9 files changed, 11 insertions(+), 1 deletion(-)

diff --git a/hw/arm-misc.h b/hw/arm-misc.h
index bdd8fec..b13aa59 100644
--- a/hw/arm-misc.h
+++ b/hw/arm-misc.h
@@ -12,6 +12,7 @@
 #define ARM_MISC_H 1
 
 #include "memory.h"
+#include "hw/irq.h"
 
 /* The CPU is also modeled as an interrupt controller.  */
 #define ARM_PIC_CPU_IRQ 0
diff --git a/hw/bt.h b/hw/bt.h
index a48b8d4..ebf6a37 100644
--- a/hw/bt.h
+++ b/hw/bt.h
@@ -23,6 +23,8 @@
  * along with this program; if not, see <http://www.gnu.org/licenses/>.
  */
 
+#include "hw/irq.h"
+
 /* BD Address */
 typedef struct {
     uint8_t b[6];
diff --git a/hw/devices.h b/hw/devices.h
index 1a55c1e..c60bcab 100644
--- a/hw/devices.h
+++ b/hw/devices.h
@@ -1,6 +1,8 @@
 #ifndef QEMU_DEVICES_H
 #define QEMU_DEVICES_H
 
+#include "hw/irq.h"
+
 /* ??? Not all users of this file can include cpu-common.h.  */
 struct MemoryRegion;
 
diff --git a/hw/irq.h b/hw/irq.h
index 56c55f0..1339a3a 100644
--- a/hw/irq.h
+++ b/hw/irq.h
@@ -3,6 +3,8 @@
 
 /* Generic IRQ/GPIO pin infrastructure.  */
 
+typedef struct IRQState *qemu_irq;
+
 typedef void (*qemu_irq_handler)(void *opaque, int n, int level);
 
 void qemu_set_irq(qemu_irq irq, int level);
diff --git a/hw/omap.h b/hw/omap.h
index 413851b..8b08462 100644
--- a/hw/omap.h
+++ b/hw/omap.h
@@ -19,6 +19,7 @@
 #ifndef hw_omap_h
 #include "memory.h"
 # define hw_omap_h		"omap.h"
+#include "hw/irq.h"
 
 # define OMAP_EMIFS_BASE	0x00000000
 # define OMAP2_Q0_BASE		0x00000000
diff --git a/hw/soc_dma.h b/hw/soc_dma.h
index 904b26c..e386ace 100644
--- a/hw/soc_dma.h
+++ b/hw/soc_dma.h
@@ -19,6 +19,7 @@
  */
 
 #include "memory.h"
+#include "hw/irq.h"
 
 struct soc_dma_s;
 struct soc_dma_ch_s;
diff --git a/hw/xen.h b/hw/xen.h
index e5926b7..ff11dfd 100644
--- a/hw/xen.h
+++ b/hw/xen.h
@@ -8,6 +8,7 @@
  */
 #include <inttypes.h>
 
+#include "hw/irq.h"
 #include "qemu-common.h"
 
 /* xen-machine.c */
diff --git a/qemu-common.h b/qemu-common.h
index e5c2bcd..6677a30 100644
--- a/qemu-common.h
+++ b/qemu-common.h
@@ -273,7 +273,6 @@ typedef struct PCIEPort PCIEPort;
 typedef struct PCIESlot PCIESlot;
 typedef struct MSIMessage MSIMessage;
 typedef struct SerialState SerialState;
-typedef struct IRQState *qemu_irq;
 typedef struct PCMCIACardState PCMCIACardState;
 typedef struct MouseTransformInfo MouseTransformInfo;
 typedef struct uWireSlave uWireSlave;
diff --git a/sysemu.h b/sysemu.h
index 65552ac..f765821 100644
--- a/sysemu.h
+++ b/sysemu.h
@@ -9,6 +9,7 @@
 #include "qapi-types.h"
 #include "notify.h"
 #include "main-loop.h"
+#include "hw/irq.h"
 
 /* vl.c */
 
-- 
1.7.11.4



From xen-devel-bounces@lists.xen.org Tue Aug 21 15:53:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 15:53:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3qlL-0006Li-5h; Tue, 21 Aug 2012 15:53:03 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ehabkost@redhat.com>) id 1T3qar-0005nt-Oo
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 15:42:14 +0000
Received: from [85.158.139.83:38175] by server-12.bemta-5.messagelabs.com id
	44/66-22359-55CA3305; Tue, 21 Aug 2012 15:42:13 +0000
X-Env-Sender: ehabkost@redhat.com
X-Msg-Ref: server-9.tower-182.messagelabs.com!1345563730!28624150!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQ0MzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32316 invoked from network); 21 Aug 2012 15:42:11 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-9.tower-182.messagelabs.com with SMTP;
	21 Aug 2012 15:42:11 -0000
Received: from int-mx01.intmail.prod.int.phx2.redhat.com
	(int-mx01.intmail.prod.int.phx2.redhat.com [10.5.11.11])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id q7LFg0eE008612
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 21 Aug 2012 11:42:00 -0400
Received: from blackpad.lan.raisama.net (vpn1-6-72.gru2.redhat.com
	[10.97.6.72])
	by int-mx01.intmail.prod.int.phx2.redhat.com (8.13.8/8.13.8) with ESMTP
	id q7LFfxxJ012328; Tue, 21 Aug 2012 11:41:59 -0400
Received: by blackpad.lan.raisama.net (Postfix, from userid 500)
	id 4CACC202864; Tue, 21 Aug 2012 12:43:09 -0300 (BRT)
From: Eduardo Habkost <ehabkost@redhat.com>
To: qemu-devel@nongnu.org
Date: Tue, 21 Aug 2012 12:42:59 -0300
Message-Id: <1345563782-11224-6-git-send-email-ehabkost@redhat.com>
In-Reply-To: <1345563782-11224-1-git-send-email-ehabkost@redhat.com>
References: <1345563782-11224-1-git-send-email-ehabkost@redhat.com>
X-Scanned-By: MIMEDefang 2.67 on 10.5.11.11
X-Mailman-Approved-At: Tue, 21 Aug 2012 15:52:59 +0000
Cc: peter.maydell@linaro.org, jan.kiszka@siemens.com, mjt@tls.msk.ru,
	mdroth@linux.vnet.ibm.com, blauwirbel@gmail.com,
	kraxel@redhat.com, xen-devel@lists.xensource.com,
	i.mitsyanko@samsung.com, armbru@redhat.com, avi@redhat.com,
	anthony.perard@citrix.com, lersek@redhat.com,
	stefanha@linux.vnet.ibm.com, stefano.stabellini@eu.citrix.com,
	sw@weilnetz.de, imammedo@redhat.com, lcapitulino@redhat.com,
	rth@twiddle.net, kwolf@redhat.com, aliguori@us.ibm.com,
	mtosatti@redhat.com, pbonzini@redhat.com, afaerber@suse.de
Subject: [Xen-devel] [RFC 5/8] split qdev into a core and code used only by
	qemu-system-*
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This also makes visible which parts of qdev we may want to split more
cleanly (as they are now marked with #ifdefs).

There are basically two parts that are specific to qemu-system-* but
still live inside qdev.c (guarded by "#ifndef CONFIG_USER_ONLY"):

- vmstate handling
- reset function registration

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
---
 hw/Makefile.objs            |   1 +
 hw/qdev-properties-system.c | 329 ++++++++++++++++++++++++++++++++++++++++++++
 hw/qdev-properties.c        | 320 +-----------------------------------------
 hw/qdev-properties.h        |   1 +
 hw/qdev-system.c            |  93 +++++++++++++
 hw/qdev.c                   | 103 ++------------
 6 files changed, 435 insertions(+), 412 deletions(-)
 create mode 100644 hw/qdev-properties-system.c
 create mode 100644 hw/qdev-system.c

diff --git a/hw/Makefile.objs b/hw/Makefile.objs
index 7f57ed5..04d3b5e 100644
--- a/hw/Makefile.objs
+++ b/hw/Makefile.objs
@@ -177,6 +177,7 @@ common-obj-y += bt.o bt-l2cap.o bt-sdp.o bt-hci.o bt-hid.o
 common-obj-y += bt-hci-csr.o
 common-obj-y += msmouse.o ps2.o
 common-obj-y += qdev.o qdev-properties.o qdev-monitor.o
+common-obj-y += qdev-system.o qdev-properties-system.o
 common-obj-$(CONFIG_BRLAPI) += baum.o
 
 # xen backend driver support
diff --git a/hw/qdev-properties-system.c b/hw/qdev-properties-system.c
new file mode 100644
index 0000000..c42e656
--- /dev/null
+++ b/hw/qdev-properties-system.c
@@ -0,0 +1,329 @@
+#include "net.h"
+#include "qdev.h"
+#include "qerror.h"
+#include "blockdev.h"
+#include "hw/block-common.h"
+#include "net/hub.h"
+#include "qapi/qapi-visit-core.h"
+
+static void get_pointer(Object *obj, Visitor *v, Property *prop,
+                        const char *(*print)(void *ptr),
+                        const char *name, Error **errp)
+{
+    DeviceState *dev = DEVICE(obj);
+    void **ptr = qdev_get_prop_ptr(dev, prop);
+    char *p;
+
+    p = (char *) (*ptr ? print(*ptr) : "");
+    visit_type_str(v, &p, name, errp);
+}
+
+static void set_pointer(Object *obj, Visitor *v, Property *prop,
+                        int (*parse)(DeviceState *dev, const char *str,
+                                     void **ptr),
+                        const char *name, Error **errp)
+{
+    DeviceState *dev = DEVICE(obj);
+    Error *local_err = NULL;
+    void **ptr = qdev_get_prop_ptr(dev, prop);
+    char *str;
+    int ret;
+
+    if (dev->state != DEV_STATE_CREATED) {
+        error_set(errp, QERR_PERMISSION_DENIED);
+        return;
+    }
+
+    visit_type_str(v, &str, name, &local_err);
+    if (local_err) {
+        error_propagate(errp, local_err);
+        return;
+    }
+    if (!*str) {
+        g_free(str);
+        *ptr = NULL;
+        return;
+    }
+    ret = parse(dev, str, ptr);
+    error_set_from_qdev_prop_error(errp, ret, dev, prop, str);
+    g_free(str);
+}
+
+
+/* --- drive --- */
+
+static int parse_drive(DeviceState *dev, const char *str, void **ptr)
+{
+    BlockDriverState *bs;
+
+    bs = bdrv_find(str);
+    if (bs == NULL)
+        return -ENOENT;
+    if (bdrv_attach_dev(bs, dev) < 0)
+        return -EEXIST;
+    *ptr = bs;
+    return 0;
+}
+
+static void release_drive(Object *obj, const char *name, void *opaque)
+{
+    DeviceState *dev = DEVICE(obj);
+    Property *prop = opaque;
+    BlockDriverState **ptr = qdev_get_prop_ptr(dev, prop);
+
+    if (*ptr) {
+        bdrv_detach_dev(*ptr, dev);
+        blockdev_auto_del(*ptr);
+    }
+}
+
+static const char *print_drive(void *ptr)
+{
+    return bdrv_get_device_name(ptr);
+}
+
+static void get_drive(Object *obj, Visitor *v, void *opaque,
+                      const char *name, Error **errp)
+{
+    get_pointer(obj, v, opaque, print_drive, name, errp);
+}
+
+static void set_drive(Object *obj, Visitor *v, void *opaque,
+                      const char *name, Error **errp)
+{
+    set_pointer(obj, v, opaque, parse_drive, name, errp);
+}
+
+PropertyInfo qdev_prop_drive = {
+    .name  = "drive",
+    .get   = get_drive,
+    .set   = set_drive,
+    .release = release_drive,
+};
+
+/* --- character device --- */
+
+static int parse_chr(DeviceState *dev, const char *str, void **ptr)
+{
+    CharDriverState *chr = qemu_chr_find(str);
+    if (chr == NULL) {
+        return -ENOENT;
+    }
+    if (chr->avail_connections < 1) {
+        return -EEXIST;
+    }
+    *ptr = chr;
+    --chr->avail_connections;
+    return 0;
+}
+
+static void release_chr(Object *obj, const char *name, void *opaque)
+{
+    DeviceState *dev = DEVICE(obj);
+    Property *prop = opaque;
+    CharDriverState **ptr = qdev_get_prop_ptr(dev, prop);
+
+    if (*ptr) {
+        qemu_chr_add_handlers(*ptr, NULL, NULL, NULL, NULL);
+    }
+}
+
+
+static const char *print_chr(void *ptr)
+{
+    CharDriverState *chr = ptr;
+
+    return chr->label ? chr->label : "";
+}
+
+static void get_chr(Object *obj, Visitor *v, void *opaque,
+                    const char *name, Error **errp)
+{
+    get_pointer(obj, v, opaque, print_chr, name, errp);
+}
+
+static void set_chr(Object *obj, Visitor *v, void *opaque,
+                    const char *name, Error **errp)
+{
+    set_pointer(obj, v, opaque, parse_chr, name, errp);
+}
+
+PropertyInfo qdev_prop_chr = {
+    .name  = "chr",
+    .get   = get_chr,
+    .set   = set_chr,
+    .release = release_chr,
+};
+
+/* --- netdev device --- */
+
+static int parse_netdev(DeviceState *dev, const char *str, void **ptr)
+{
+    NetClientState *netdev = qemu_find_netdev(str);
+
+    if (netdev == NULL) {
+        return -ENOENT;
+    }
+    if (netdev->peer) {
+        return -EEXIST;
+    }
+    *ptr = netdev;
+    return 0;
+}
+
+static const char *print_netdev(void *ptr)
+{
+    NetClientState *netdev = ptr;
+
+    return netdev->name ? netdev->name : "";
+}
+
+static void get_netdev(Object *obj, Visitor *v, void *opaque,
+                       const char *name, Error **errp)
+{
+    get_pointer(obj, v, opaque, print_netdev, name, errp);
+}
+
+static void set_netdev(Object *obj, Visitor *v, void *opaque,
+                       const char *name, Error **errp)
+{
+    set_pointer(obj, v, opaque, parse_netdev, name, errp);
+}
+
+PropertyInfo qdev_prop_netdev = {
+    .name  = "netdev",
+    .get   = get_netdev,
+    .set   = set_netdev,
+};
+
+/* --- vlan --- */
+
+static int print_vlan(DeviceState *dev, Property *prop, char *dest, size_t len)
+{
+    NetClientState **ptr = qdev_get_prop_ptr(dev, prop);
+
+    if (*ptr) {
+        int id;
+        if (!net_hub_id_for_client(*ptr, &id)) {
+            return snprintf(dest, len, "%d", id);
+        }
+    }
+
+    return snprintf(dest, len, "<null>");
+}
+
+static void get_vlan(Object *obj, Visitor *v, void *opaque,
+                     const char *name, Error **errp)
+{
+    DeviceState *dev = DEVICE(obj);
+    Property *prop = opaque;
+    NetClientState **ptr = qdev_get_prop_ptr(dev, prop);
+    int32_t id = -1;
+
+    if (*ptr) {
+        int hub_id;
+        if (!net_hub_id_for_client(*ptr, &hub_id)) {
+            id = hub_id;
+        }
+    }
+
+    visit_type_int32(v, &id, name, errp);
+}
+
+static void set_vlan(Object *obj, Visitor *v, void *opaque,
+                     const char *name, Error **errp)
+{
+    DeviceState *dev = DEVICE(obj);
+    Property *prop = opaque;
+    NetClientState **ptr = qdev_get_prop_ptr(dev, prop);
+    Error *local_err = NULL;
+    int32_t id;
+    NetClientState *hubport;
+
+    if (dev->state != DEV_STATE_CREATED) {
+        error_set(errp, QERR_PERMISSION_DENIED);
+        return;
+    }
+
+    visit_type_int32(v, &id, name, &local_err);
+    if (local_err) {
+        error_propagate(errp, local_err);
+        return;
+    }
+    if (id == -1) {
+        *ptr = NULL;
+        return;
+    }
+
+    hubport = net_hub_port_find(id);
+    if (!hubport) {
+        error_set(errp, QERR_INVALID_PARAMETER_VALUE,
+                  name, prop->info->name);
+        return;
+    }
+    *ptr = hubport;
+}
+
+PropertyInfo qdev_prop_vlan = {
+    .name  = "vlan",
+    .print = print_vlan,
+    .get   = get_vlan,
+    .set   = set_vlan,
+};
+
+
+int qdev_prop_set_drive(DeviceState *dev, const char *name, BlockDriverState *value)
+{
+    Error *errp = NULL;
+    const char *bdrv_name = value ? bdrv_get_device_name(value) : "";
+    object_property_set_str(OBJECT(dev), bdrv_name,
+                            name, &errp);
+    if (errp) {
+        qerror_report_err(errp);
+        error_free(errp);
+        return -1;
+    }
+    return 0;
+}
+
+void qdev_prop_set_drive_nofail(DeviceState *dev, const char *name, BlockDriverState *value)
+{
+    if (qdev_prop_set_drive(dev, name, value) < 0) {
+        exit(1);
+    }
+}
+
+void qdev_prop_set_chr(DeviceState *dev, const char *name, CharDriverState *value)
+{
+    Error *errp = NULL;
+    assert(!value || value->label);
+    object_property_set_str(OBJECT(dev),
+                            value ? value->label : "", name, &errp);
+    assert_no_error(errp);
+}
+
+void qdev_prop_set_netdev(DeviceState *dev, const char *name, NetClientState *value)
+{
+    Error *errp = NULL;
+    assert(!value || value->name);
+    object_property_set_str(OBJECT(dev),
+                            value ? value->name : "", name, &errp);
+    assert_no_error(errp);
+}
+
+static int qdev_add_one_global(QemuOpts *opts, void *opaque)
+{
+    GlobalProperty *g;
+
+    g = g_malloc0(sizeof(*g));
+    g->driver   = qemu_opt_get(opts, "driver");
+    g->property = qemu_opt_get(opts, "property");
+    g->value    = qemu_opt_get(opts, "value");
+    qdev_prop_register_global(g);
+    return 0;
+}
+
+void qemu_add_globals(void)
+{
+    qemu_opts_foreach(qemu_find_opts("global"), qdev_add_one_global, NULL, 0);
+}
diff --git a/hw/qdev-properties.c b/hw/qdev-properties.c
index 81d901c..917d986 100644
--- a/hw/qdev-properties.c
+++ b/hw/qdev-properties.c
@@ -13,49 +13,6 @@ void *qdev_get_prop_ptr(DeviceState *dev, Property *prop)
     return ptr;
 }
 
-static void get_pointer(Object *obj, Visitor *v, Property *prop,
-                        const char *(*print)(void *ptr),
-                        const char *name, Error **errp)
-{
-    DeviceState *dev = DEVICE(obj);
-    void **ptr = qdev_get_prop_ptr(dev, prop);
-    char *p;
-
-    p = (char *) (*ptr ? print(*ptr) : "");
-    visit_type_str(v, &p, name, errp);
-}
-
-static void set_pointer(Object *obj, Visitor *v, Property *prop,
-                        int (*parse)(DeviceState *dev, const char *str,
-                                     void **ptr),
-                        const char *name, Error **errp)
-{
-    DeviceState *dev = DEVICE(obj);
-    Error *local_err = NULL;
-    void **ptr = qdev_get_prop_ptr(dev, prop);
-    char *str;
-    int ret;
-
-    if (dev->state != DEV_STATE_CREATED) {
-        error_set(errp, QERR_PERMISSION_DENIED);
-        return;
-    }
-
-    visit_type_str(v, &str, name, &local_err);
-    if (local_err) {
-        error_propagate(errp, local_err);
-        return;
-    }
-    if (!*str) {
-        g_free(str);
-        *ptr = NULL;
-        return;
-    }
-    ret = parse(dev, str, ptr);
-    error_set_from_qdev_prop_error(errp, ret, dev, prop, str);
-    g_free(str);
-}
-
 static void get_enum(Object *obj, Visitor *v, void *opaque,
                      const char *name, Error **errp)
 {
@@ -476,227 +433,6 @@ PropertyInfo qdev_prop_string = {
     .set   = set_string,
 };
 
-/* --- drive --- */
-
-static int parse_drive(DeviceState *dev, const char *str, void **ptr)
-{
-    BlockDriverState *bs;
-
-    bs = bdrv_find(str);
-    if (bs == NULL)
-        return -ENOENT;
-    if (bdrv_attach_dev(bs, dev) < 0)
-        return -EEXIST;
-    *ptr = bs;
-    return 0;
-}
-
-static void release_drive(Object *obj, const char *name, void *opaque)
-{
-    DeviceState *dev = DEVICE(obj);
-    Property *prop = opaque;
-    BlockDriverState **ptr = qdev_get_prop_ptr(dev, prop);
-
-    if (*ptr) {
-        bdrv_detach_dev(*ptr, dev);
-        blockdev_auto_del(*ptr);
-    }
-}
-
-static const char *print_drive(void *ptr)
-{
-    return bdrv_get_device_name(ptr);
-}
-
-static void get_drive(Object *obj, Visitor *v, void *opaque,
-                      const char *name, Error **errp)
-{
-    get_pointer(obj, v, opaque, print_drive, name, errp);
-}
-
-static void set_drive(Object *obj, Visitor *v, void *opaque,
-                      const char *name, Error **errp)
-{
-    set_pointer(obj, v, opaque, parse_drive, name, errp);
-}
-
-PropertyInfo qdev_prop_drive = {
-    .name  = "drive",
-    .get   = get_drive,
-    .set   = set_drive,
-    .release = release_drive,
-};
-
-/* --- character device --- */
-
-static int parse_chr(DeviceState *dev, const char *str, void **ptr)
-{
-    CharDriverState *chr = qemu_chr_find(str);
-    if (chr == NULL) {
-        return -ENOENT;
-    }
-    if (chr->avail_connections < 1) {
-        return -EEXIST;
-    }
-    *ptr = chr;
-    --chr->avail_connections;
-    return 0;
-}
-
-static void release_chr(Object *obj, const char *name, void *opaque)
-{
-    DeviceState *dev = DEVICE(obj);
-    Property *prop = opaque;
-    CharDriverState **ptr = qdev_get_prop_ptr(dev, prop);
-
-    if (*ptr) {
-        qemu_chr_add_handlers(*ptr, NULL, NULL, NULL, NULL);
-    }
-}
-
-
-static const char *print_chr(void *ptr)
-{
-    CharDriverState *chr = ptr;
-
-    return chr->label ? chr->label : "";
-}
-
-static void get_chr(Object *obj, Visitor *v, void *opaque,
-                    const char *name, Error **errp)
-{
-    get_pointer(obj, v, opaque, print_chr, name, errp);
-}
-
-static void set_chr(Object *obj, Visitor *v, void *opaque,
-                    const char *name, Error **errp)
-{
-    set_pointer(obj, v, opaque, parse_chr, name, errp);
-}
-
-PropertyInfo qdev_prop_chr = {
-    .name  = "chr",
-    .get   = get_chr,
-    .set   = set_chr,
-    .release = release_chr,
-};
-
-/* --- netdev device --- */
-
-static int parse_netdev(DeviceState *dev, const char *str, void **ptr)
-{
-    NetClientState *netdev = qemu_find_netdev(str);
-
-    if (netdev == NULL) {
-        return -ENOENT;
-    }
-    if (netdev->peer) {
-        return -EEXIST;
-    }
-    *ptr = netdev;
-    return 0;
-}
-
-static const char *print_netdev(void *ptr)
-{
-    NetClientState *netdev = ptr;
-
-    return netdev->name ? netdev->name : "";
-}
-
-static void get_netdev(Object *obj, Visitor *v, void *opaque,
-                       const char *name, Error **errp)
-{
-    get_pointer(obj, v, opaque, print_netdev, name, errp);
-}
-
-static void set_netdev(Object *obj, Visitor *v, void *opaque,
-                       const char *name, Error **errp)
-{
-    set_pointer(obj, v, opaque, parse_netdev, name, errp);
-}
-
-PropertyInfo qdev_prop_netdev = {
-    .name  = "netdev",
-    .get   = get_netdev,
-    .set   = set_netdev,
-};
-
-/* --- vlan --- */
-
-static int print_vlan(DeviceState *dev, Property *prop, char *dest, size_t len)
-{
-    NetClientState **ptr = qdev_get_prop_ptr(dev, prop);
-
-    if (*ptr) {
-        int id;
-        if (!net_hub_id_for_client(*ptr, &id)) {
-            return snprintf(dest, len, "%d", id);
-        }
-    }
-
-    return snprintf(dest, len, "<null>");
-}
-
-static void get_vlan(Object *obj, Visitor *v, void *opaque,
-                     const char *name, Error **errp)
-{
-    DeviceState *dev = DEVICE(obj);
-    Property *prop = opaque;
-    NetClientState **ptr = qdev_get_prop_ptr(dev, prop);
-    int32_t id = -1;
-
-    if (*ptr) {
-        int hub_id;
-        if (!net_hub_id_for_client(*ptr, &hub_id)) {
-            id = hub_id;
-        }
-    }
-
-    visit_type_int32(v, &id, name, errp);
-}
-
-static void set_vlan(Object *obj, Visitor *v, void *opaque,
-                     const char *name, Error **errp)
-{
-    DeviceState *dev = DEVICE(obj);
-    Property *prop = opaque;
-    NetClientState **ptr = qdev_get_prop_ptr(dev, prop);
-    Error *local_err = NULL;
-    int32_t id;
-    NetClientState *hubport;
-
-    if (dev->state != DEV_STATE_CREATED) {
-        error_set(errp, QERR_PERMISSION_DENIED);
-        return;
-    }
-
-    visit_type_int32(v, &id, name, &local_err);
-    if (local_err) {
-        error_propagate(errp, local_err);
-        return;
-    }
-    if (id == -1) {
-        *ptr = NULL;
-        return;
-    }
-
-    hubport = net_hub_port_find(id);
-    if (!hubport) {
-        error_set(errp, QERR_INVALID_PARAMETER_VALUE,
-                  name, prop->info->name);
-        return;
-    }
-    *ptr = hubport;
-}
-
-PropertyInfo qdev_prop_vlan = {
-    .name  = "vlan",
-    .print = print_vlan,
-    .get   = get_vlan,
-    .set   = set_vlan,
-};
-
 /* --- pointer --- */
 
 /* Not a proper property, just for dirty hacks.  TODO Remove it!  */
@@ -1158,44 +894,6 @@ void qdev_prop_set_string(DeviceState *dev, const char *name, const char *value)
     assert_no_error(errp);
 }
 
-int qdev_prop_set_drive(DeviceState *dev, const char *name, BlockDriverState *value)
-{
-    Error *errp = NULL;
-    const char *bdrv_name = value ? bdrv_get_device_name(value) : "";
-    object_property_set_str(OBJECT(dev), bdrv_name,
-                            name, &errp);
-    if (errp) {
-        qerror_report_err(errp);
-        error_free(errp);
-        return -1;
-    }
-    return 0;
-}
-
-void qdev_prop_set_drive_nofail(DeviceState *dev, const char *name, BlockDriverState *value)
-{
-    if (qdev_prop_set_drive(dev, name, value) < 0) {
-        exit(1);
-    }
-}
-void qdev_prop_set_chr(DeviceState *dev, const char *name, CharDriverState *value)
-{
-    Error *errp = NULL;
-    assert(!value || value->label);
-    object_property_set_str(OBJECT(dev),
-                            value ? value->label : "", name, &errp);
-    assert_no_error(errp);
-}
-
-void qdev_prop_set_netdev(DeviceState *dev, const char *name, NetClientState *value)
-{
-    Error *errp = NULL;
-    assert(!value || value->name);
-    object_property_set_str(OBJECT(dev),
-                            value ? value->name : "", name, &errp);
-    assert_no_error(errp);
-}
-
 void qdev_prop_set_macaddr(DeviceState *dev, const char *name, uint8_t *value)
 {
     Error *errp = NULL;
@@ -1231,7 +929,7 @@ void qdev_prop_set_ptr(DeviceState *dev, const char *name, void *value)
 
 static QTAILQ_HEAD(, GlobalProperty) global_props = QTAILQ_HEAD_INITIALIZER(global_props);
 
-static void qdev_prop_register_global(GlobalProperty *prop)
+void qdev_prop_register_global(GlobalProperty *prop)
 {
     QTAILQ_INSERT_TAIL(&global_props, prop, next);
 }
@@ -1263,19 +961,3 @@ void qdev_prop_set_globals(DeviceState *dev)
     } while (class);
 }
 
-static int qdev_add_one_global(QemuOpts *opts, void *opaque)
-{
-    GlobalProperty *g;
-
-    g = g_malloc0(sizeof(*g));
-    g->driver   = qemu_opt_get(opts, "driver");
-    g->property = qemu_opt_get(opts, "property");
-    g->value    = qemu_opt_get(opts, "value");
-    qdev_prop_register_global(g);
-    return 0;
-}
-
-void qemu_add_globals(void)
-{
-    qemu_opts_foreach(qemu_find_opts("global"), qdev_add_one_global, NULL, 0);
-}
diff --git a/hw/qdev-properties.h b/hw/qdev-properties.h
index e93336a..a145084 100644
--- a/hw/qdev-properties.h
+++ b/hw/qdev-properties.h
@@ -114,6 +114,7 @@ void qdev_prop_set_enum(DeviceState *dev, const char *name, int value);
 /* FIXME: Remove opaque pointer properties.  */
 void qdev_prop_set_ptr(DeviceState *dev, const char *name, void *value);
 
+void qdev_prop_register_global(GlobalProperty *prop);
 void qdev_prop_register_global_list(GlobalProperty *props);
 void qdev_prop_set_globals(DeviceState *dev);
 void error_set_from_qdev_prop_error(Error **errp, int ret, DeviceState *dev,
diff --git a/hw/qdev-system.c b/hw/qdev-system.c
new file mode 100644
index 0000000..4891d2f
--- /dev/null
+++ b/hw/qdev-system.c
@@ -0,0 +1,93 @@
+#include "net.h"
+#include "qdev.h"
+
+void qdev_init_gpio_in(DeviceState *dev, qemu_irq_handler handler, int n)
+{
+    assert(dev->num_gpio_in == 0);
+    dev->num_gpio_in = n;
+    dev->gpio_in = qemu_allocate_irqs(handler, dev, n);
+}
+
+void qdev_init_gpio_out(DeviceState *dev, qemu_irq *pins, int n)
+{
+    assert(dev->num_gpio_out == 0);
+    dev->num_gpio_out = n;
+    dev->gpio_out = pins;
+}
+
+qemu_irq qdev_get_gpio_in(DeviceState *dev, int n)
+{
+    assert(n >= 0 && n < dev->num_gpio_in);
+    return dev->gpio_in[n];
+}
+
+void qdev_connect_gpio_out(DeviceState * dev, int n, qemu_irq pin)
+{
+    assert(n >= 0 && n < dev->num_gpio_out);
+    dev->gpio_out[n] = pin;
+}
+
+void qdev_set_nic_properties(DeviceState *dev, NICInfo *nd)
+{
+    qdev_prop_set_macaddr(dev, "mac", nd->macaddr.a);
+    if (nd->netdev)
+        qdev_prop_set_netdev(dev, "netdev", nd->netdev);
+    if (nd->nvectors != DEV_NVECTORS_UNSPECIFIED &&
+        object_property_find(OBJECT(dev), "vectors", NULL)) {
+        qdev_prop_set_uint32(dev, "vectors", nd->nvectors);
+    }
+    nd->instantiated = 1;
+}
+
+BusState *qdev_get_child_bus(DeviceState *dev, const char *name)
+{
+    BusState *bus;
+
+    QLIST_FOREACH(bus, &dev->child_bus, sibling) {
+        if (strcmp(name, bus->name) == 0) {
+            return bus;
+        }
+    }
+    return NULL;
+}
+
+/* Create a new device.  This only initializes the device state structure
+   and allows properties to be set.  qdev_init should be called to
+   initialize the actual device emulation.  */
+DeviceState *qdev_create(BusState *bus, const char *name)
+{
+    DeviceState *dev;
+
+    dev = qdev_try_create(bus, name);
+    if (!dev) {
+        if (bus) {
+            hw_error("Unknown device '%s' for bus '%s'\n", name,
+                     object_get_typename(OBJECT(bus)));
+        } else {
+            hw_error("Unknown device '%s' for default sysbus\n", name);
+        }
+    }
+
+    return dev;
+}
+
+DeviceState *qdev_try_create(BusState *bus, const char *type)
+{
+    DeviceState *dev;
+
+    if (object_class_by_name(type) == NULL) {
+        return NULL;
+    }
+    dev = DEVICE(object_new(type));
+    if (!dev) {
+        return NULL;
+    }
+
+    if (!bus) {
+        bus = sysbus_get_default();
+    }
+
+    qdev_set_parent_bus(dev, bus);
+
+    return dev;
+}
diff --git a/hw/qdev.c b/hw/qdev.c
index 36c3e4b..3dc38f7 100644
--- a/hw/qdev.c
+++ b/hw/qdev.c
@@ -25,7 +25,6 @@
    inherit from a particular bus (e.g. PCI or I2C) rather than
    this API directly.  */
 
-#include "net.h"
 #include "qdev.h"
 #include "sysemu.h"
 #include "error.h"
@@ -105,47 +104,6 @@ void qdev_set_parent_bus(DeviceState *dev, BusState *bus)
     bus_add_child(bus, dev);
 }
 
-/* Create a new device.  This only initializes the device state structure
-   and allows properties to be set.  qdev_init should be called to
-   initialize the actual device emulation.  */
-DeviceState *qdev_create(BusState *bus, const char *name)
-{
-    DeviceState *dev;
-
-    dev = qdev_try_create(bus, name);
-    if (!dev) {
-        if (bus) {
-            hw_error("Unknown device '%s' for bus '%s'\n", name,
-                     object_get_typename(OBJECT(bus)));
-        } else {
-            hw_error("Unknown device '%s' for default sysbus\n", name);
-        }
-    }
-
-    return dev;
-}
-
-DeviceState *qdev_try_create(BusState *bus, const char *type)
-{
-    DeviceState *dev;
-
-    if (object_class_by_name(type) == NULL) {
-        return NULL;
-    }
-    dev = DEVICE(object_new(type));
-    if (!dev) {
-        return NULL;
-    }
-
-    if (!bus) {
-        bus = sysbus_get_default();
-    }
-
-    qdev_set_parent_bus(dev, bus);
-
-    return dev;
-}
-
 /* Initialize a device.  Device properties should be set before calling
    this function.  IRQs and MMIO regions should be connected/mapped after
    calling this function.
@@ -175,11 +133,13 @@ int qdev_init(DeviceState *dev)
         g_free(name);
     }
 
+#ifndef CONFIG_USER_ONLY
     if (qdev_get_vmsd(dev)) {
         vmstate_register_with_alias_id(dev, -1, qdev_get_vmsd(dev), dev,
                                        dev->instance_id_alias,
                                        dev->alias_required_for_version);
     }
+#endif
     dev->state = DEV_STATE_INITIALIZED;
     if (dev->hotplugged) {
         device_reset(dev);
@@ -292,56 +252,6 @@ BusState *qdev_get_parent_bus(DeviceState *dev)
     return dev->parent_bus;
 }
 
-void qdev_init_gpio_in(DeviceState *dev, qemu_irq_handler handler, int n)
-{
-    assert(dev->num_gpio_in == 0);
-    dev->num_gpio_in = n;
-    dev->gpio_in = qemu_allocate_irqs(handler, dev, n);
-}
-
-void qdev_init_gpio_out(DeviceState *dev, qemu_irq *pins, int n)
-{
-    assert(dev->num_gpio_out == 0);
-    dev->num_gpio_out = n;
-    dev->gpio_out = pins;
-}
-
-qemu_irq qdev_get_gpio_in(DeviceState *dev, int n)
-{
-    assert(n >= 0 && n < dev->num_gpio_in);
-    return dev->gpio_in[n];
-}
-
-void qdev_connect_gpio_out(DeviceState * dev, int n, qemu_irq pin)
-{
-    assert(n >= 0 && n < dev->num_gpio_out);
-    dev->gpio_out[n] = pin;
-}
-
-void qdev_set_nic_properties(DeviceState *dev, NICInfo *nd)
-{
-    qdev_prop_set_macaddr(dev, "mac", nd->macaddr.a);
-    if (nd->netdev)
-        qdev_prop_set_netdev(dev, "netdev", nd->netdev);
-    if (nd->nvectors != DEV_NVECTORS_UNSPECIFIED &&
-        object_property_find(OBJECT(dev), "vectors", NULL)) {
-        qdev_prop_set_uint32(dev, "vectors", nd->nvectors);
-    }
-    nd->instantiated = 1;
-}
-
-BusState *qdev_get_child_bus(DeviceState *dev, const char *name)
-{
-    BusState *bus;
-
-    QLIST_FOREACH(bus, &dev->child_bus, sibling) {
-        if (strcmp(name, bus->name) == 0) {
-            return bus;
-        }
-    }
-    return NULL;
-}
-
 int qbus_walk_children(BusState *bus, qdev_walkerfn *devfn,
                        qbus_walkerfn *busfn, void *opaque)
 {
@@ -440,11 +350,14 @@ static void qbus_realize(BusState *bus)
         QLIST_INSERT_HEAD(&bus->parent->child_bus, bus, sibling);
         bus->parent->num_child_bus++;
         object_property_add_child(OBJECT(bus->parent), bus->name, OBJECT(bus), NULL);
-    } else if (bus != sysbus_get_default()) {
+    }
+#ifndef CONFIG_USER_ONLY
+    else if (bus != sysbus_get_default()) {
         /* TODO: once all bus devices are qdevified,
            only reset handler for main_system_bus should be registered here. */
         qemu_register_reset(qbus_reset_all_fn, bus);
     }
+#endif
 }
 
 void qbus_create_inplace(BusState *bus, const char *typename,
@@ -703,9 +616,11 @@ static void device_finalize(Object *obj)
             bus = QLIST_FIRST(&dev->child_bus);
             qbus_free(bus);
         }
+#ifndef CONFIG_USER_ONLY
         if (qdev_get_vmsd(dev)) {
             vmstate_unregister(dev, qdev_get_vmsd(dev), dev);
         }
+#endif
         if (dc->exit) {
             dc->exit(dev);
         }
@@ -779,8 +694,10 @@ static void qbus_finalize(Object *obj)
         QLIST_REMOVE(bus, sibling);
         bus->parent->num_child_bus--;
     } else {
+#ifndef CONFIG_USER_ONLY
         assert(bus != sysbus_get_default()); /* main_system_bus is never freed */
         qemu_unregister_reset(qbus_reset_all_fn, bus);
+#endif
     }
     g_free((char *)bus->name);
 }
-- 
1.7.11.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 15:53:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 15:53:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3qlK-0006LX-Ox; Tue, 21 Aug 2012 15:53:02 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ehabkost@redhat.com>) id 1T3qaq-0005oH-Kt
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 15:42:12 +0000
Received: from [85.158.138.51:47432] by server-1.bemta-3.messagelabs.com id
	26/70-09327-35CA3305; Tue, 21 Aug 2012 15:42:11 +0000
X-Env-Sender: ehabkost@redhat.com
X-Msg-Ref: server-6.tower-174.messagelabs.com!1345563730!21428892!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQ0MzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20189 invoked from network); 21 Aug 2012 15:42:11 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-6.tower-174.messagelabs.com with SMTP;
	21 Aug 2012 15:42:11 -0000
Received: from int-mx11.intmail.prod.int.phx2.redhat.com
	(int-mx11.intmail.prod.int.phx2.redhat.com [10.5.11.24])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id q7LFfxWi010156
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 21 Aug 2012 11:41:59 -0400
Received: from blackpad.lan.raisama.net (vpn1-6-72.gru2.redhat.com
	[10.97.6.72])
	by int-mx11.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP
	id q7LFfvXa006340; Tue, 21 Aug 2012 11:41:58 -0400
Received: by blackpad.lan.raisama.net (Postfix, from userid 500)
	id 35D1B202862; Tue, 21 Aug 2012 12:43:09 -0300 (BRT)
From: Eduardo Habkost <ehabkost@redhat.com>
To: qemu-devel@nongnu.org
Date: Tue, 21 Aug 2012 12:42:58 -0300
Message-Id: <1345563782-11224-5-git-send-email-ehabkost@redhat.com>
In-Reply-To: <1345563782-11224-1-git-send-email-ehabkost@redhat.com>
References: <1345563782-11224-1-git-send-email-ehabkost@redhat.com>
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.24
X-Mailman-Approved-At: Tue, 21 Aug 2012 15:52:59 +0000
Cc: peter.maydell@linaro.org, jan.kiszka@siemens.com, mjt@tls.msk.ru,
	mdroth@linux.vnet.ibm.com, blauwirbel@gmail.com,
	kraxel@redhat.com, xen-devel@lists.xensource.com,
	i.mitsyanko@samsung.com, armbru@redhat.com, avi@redhat.com,
	anthony.perard@citrix.com, lersek@redhat.com,
	stefanha@linux.vnet.ibm.com, stefano.stabellini@eu.citrix.com,
	sw@weilnetz.de, imammedo@redhat.com, lcapitulino@redhat.com,
	rth@twiddle.net, kwolf@redhat.com, aliguori@us.ibm.com,
	mtosatti@redhat.com, pbonzini@redhat.com, afaerber@suse.de
Subject: [Xen-devel] [RFC 4/8] cleanup error.h,
	included qapi-types.h already has stdbool.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Igor Mammedov <imammedo@redhat.com>

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
---
 error.h | 1 -
 1 file changed, 1 deletion(-)

diff --git a/error.h b/error.h
index 96fc203..643a372 100644
--- a/error.h
+++ b/error.h
@@ -14,7 +14,6 @@
 
 #include "compiler.h"
 #include "qapi-types.h"
-#include <stdbool.h>
 
 /**
  * A class representing internal errors within QEMU.  An error has a ErrorClass
-- 
1.7.11.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 15:53:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 15:53:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3qlJ-0006L3-I9; Tue, 21 Aug 2012 15:53:01 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ehabkost@redhat.com>) id 1T3qan-0005ng-Hm
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 15:42:09 +0000
Received: from [85.158.143.99:3194] by server-2.bemta-4.messagelabs.com id
	7D/5D-21239-05CA3305; Tue, 21 Aug 2012 15:42:08 +0000
X-Env-Sender: ehabkost@redhat.com
X-Msg-Ref: server-8.tower-216.messagelabs.com!1345563727!19538118!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQ0MzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1988 invoked from network); 21 Aug 2012 15:42:08 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-8.tower-216.messagelabs.com with SMTP;
	21 Aug 2012 15:42:08 -0000
Received: from int-mx09.intmail.prod.int.phx2.redhat.com
	(int-mx09.intmail.prod.int.phx2.redhat.com [10.5.11.22])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id q7LFg0Mx010161
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 21 Aug 2012 11:42:00 -0400
Received: from blackpad.lan.raisama.net (vpn1-6-72.gru2.redhat.com
	[10.97.6.72])
	by int-mx09.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP
	id q7LFfxTE017603; Tue, 21 Aug 2012 11:41:59 -0400
Received: by blackpad.lan.raisama.net (Postfix, from userid 500)
	id 6183D202866; Tue, 21 Aug 2012 12:43:09 -0300 (BRT)
From: Eduardo Habkost <ehabkost@redhat.com>
To: qemu-devel@nongnu.org
Date: Tue, 21 Aug 2012 12:43:00 -0300
Message-Id: <1345563782-11224-7-git-send-email-ehabkost@redhat.com>
In-Reply-To: <1345563782-11224-1-git-send-email-ehabkost@redhat.com>
References: <1345563782-11224-1-git-send-email-ehabkost@redhat.com>
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.22
X-Mailman-Approved-At: Tue, 21 Aug 2012 15:52:59 +0000
Cc: peter.maydell@linaro.org, jan.kiszka@siemens.com, mjt@tls.msk.ru,
	mdroth@linux.vnet.ibm.com, blauwirbel@gmail.com,
	kraxel@redhat.com, xen-devel@lists.xensource.com,
	i.mitsyanko@samsung.com, armbru@redhat.com, avi@redhat.com,
	anthony.perard@citrix.com, lersek@redhat.com,
	stefanha@linux.vnet.ibm.com, stefano.stabellini@eu.citrix.com,
	sw@weilnetz.de, imammedo@redhat.com, lcapitulino@redhat.com,
	rth@twiddle.net, kwolf@redhat.com, aliguori@us.ibm.com,
	mtosatti@redhat.com, pbonzini@redhat.com, afaerber@suse.de
Subject: [Xen-devel] [RFC 6/8] qdev: use full qdev.h include path on qdev*.c
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
---
 hw/qdev-properties.c | 2 +-
 hw/qdev.c            | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/hw/qdev-properties.c b/hw/qdev-properties.c
index 917d986..2e82cb9 100644
--- a/hw/qdev-properties.c
+++ b/hw/qdev-properties.c
@@ -1,5 +1,5 @@
 #include "net.h"
-#include "qdev.h"
+#include "hw/qdev.h"
 #include "qerror.h"
 #include "blockdev.h"
 #include "hw/block-common.h"
diff --git a/hw/qdev.c b/hw/qdev.c
index 3dc38f7..ebe6671 100644
--- a/hw/qdev.c
+++ b/hw/qdev.c
@@ -25,7 +25,7 @@
    inherit from a particular bus (e.g. PCI or I2C) rather than
    this API directly.  */
 
-#include "qdev.h"
+#include "hw/qdev.h"
 #include "sysemu.h"
 #include "error.h"
 #include "qapi/qapi-visit-core.h"
-- 
1.7.11.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 15:55:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 15:55:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3qnq-0007JS-Cp; Tue, 21 Aug 2012 15:55:38 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Paul.Durrant@citrix.com>) id 1T3qno-0007Ho-4Z
	for xen-devel@lists.xen.org; Tue, 21 Aug 2012 15:55:36 +0000
X-Env-Sender: Paul.Durrant@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1345564527!10032316!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzMwNjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27365 invoked from network); 21 Aug 2012 15:55:29 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Aug 2012 15:55:29 -0000
X-IronPort-AV: E=Sophos;i="4.77,802,1336363200"; d="scan'208";a="205786232"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Aug 2012 11:55:27 -0400
Received: from cosworth.uk.xensource.com (10.80.16.52) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Tue, 21 Aug 2012 11:55:27 -0400
MIME-Version: 1.0
X-Mercurial-Node: 4b1f399193f5e363c2b47a3079ac4d3f61ee9a8f
Message-ID: <4b1f399193f5e363c2b4.1345564497@cosworth.uk.xensource.com>
User-Agent: Mercurial-patchbomb/1.6.4
Date: Tue, 21 Aug 2012 16:54:57 +0100
From: Paul Durrant <paul.durrant@citrix.com>
To: xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH] Remove VM generation ID device and
 incr_generationid from build_info
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

# HG changeset patch
# User Paul Durrant <paul.durrant@citrix.com>
# Date 1345564202 -3600
# Node ID 4b1f399193f5e363c2b47a3079ac4d3f61ee9a8f
# Parent  6d56e31fe1e1dc793379d662a36ff1731760eb0c
Remove VM generation ID device and incr_generationid from build_info.

Microsoft have now published their VM generation ID specification at
https://www.microsoft.com/en-us/download/details.aspx?id=30707.
It differs from the original specification upon which I based my
implementation in several key areas. In particular, it is no longer
an incrementing 64-bit counter, so this patch removes the
incr_generationid field from build_info and also disables the
ACPI device before 4.2 is released.

I will follow up with further patches to implement the VM generation
ID according to the new specification.

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>

diff -r 6d56e31fe1e1 -r 4b1f399193f5 tools/firmware/hvmloader/acpi/dsdt.asl
--- a/tools/firmware/hvmloader/acpi/dsdt.asl	Wed Aug 15 09:41:21 2012 +0100
+++ b/tools/firmware/hvmloader/acpi/dsdt.asl	Tue Aug 21 16:50:02 2012 +0100
@@ -398,6 +398,7 @@ DefinitionBlock ("DSDT.aml", "DSDT", 2, 
                     })
                 } 
 
+                /*
                 Device(VGID) {
                     Name(_HID, EisaID ("XEN0000"))
                     Name(_UID, 0x00)
@@ -422,6 +423,7 @@ DefinitionBlock ("DSDT.aml", "DSDT", 2, 
                         Return(PKG)
                     }
                 }
+                */
             }
         }
     }
diff -r 6d56e31fe1e1 -r 4b1f399193f5 tools/libxl/libxl_create.c
--- a/tools/libxl/libxl_create.c	Wed Aug 15 09:41:21 2012 +0100
+++ b/tools/libxl/libxl_create.c	Tue Aug 21 16:50:02 2012 +0100
@@ -248,7 +248,6 @@ int libxl__domain_build_info_setdefault(
         libxl_defbool_setdefault(&b_info->u.hvm.hpet,               true);
         libxl_defbool_setdefault(&b_info->u.hvm.vpt_align,          true);
         libxl_defbool_setdefault(&b_info->u.hvm.nested_hvm,         false);
-        libxl_defbool_setdefault(&b_info->u.hvm.incr_generationid,  false);
         libxl_defbool_setdefault(&b_info->u.hvm.usb,                false);
         libxl_defbool_setdefault(&b_info->u.hvm.xen_platform_pci,   true);
 
@@ -758,27 +757,24 @@ static void domcreate_bootloader_done(li
 
     /* read signature */
     int hvm, pae, superpages;
-    int no_incr_generationid;
     switch (info->type) {
     case LIBXL_DOMAIN_TYPE_HVM:
         hvm = 1;
         superpages = 1;
         pae = libxl_defbool_val(info->u.hvm.pae);
-        no_incr_generationid = !libxl_defbool_val(info->u.hvm.incr_generationid);
         callbacks->toolstack_restore = libxl__toolstack_restore;
         break;
     case LIBXL_DOMAIN_TYPE_PV:
         hvm = 0;
         superpages = 0;
         pae = 1;
-        no_incr_generationid = 0;
         break;
     default:
         rc = ERROR_INVAL;
         goto out;
     }
     libxl__xc_domain_restore(egc, dcs,
-                             hvm, pae, superpages, no_incr_generationid);
+                             hvm, pae, superpages, 1);
     return;
 
  out:
diff -r 6d56e31fe1e1 -r 4b1f399193f5 tools/libxl/libxl_types.idl
--- a/tools/libxl/libxl_types.idl	Wed Aug 15 09:41:21 2012 +0100
+++ b/tools/libxl/libxl_types.idl	Tue Aug 21 16:50:02 2012 +0100
@@ -292,7 +292,6 @@ libxl_domain_build_info = Struct("domain
                                        ("vpt_align",        libxl_defbool),
                                        ("timer_mode",       libxl_timer_mode),
                                        ("nested_hvm",       libxl_defbool),
-                                       ("incr_generationid",libxl_defbool),
                                        ("nographic",        libxl_defbool),
                                        ("vga",              libxl_vga_interface_info),
                                        ("vnc",              libxl_vnc_info),
diff -r 6d56e31fe1e1 -r 4b1f399193f5 tools/libxl/xl_cmdimpl.c
--- a/tools/libxl/xl_cmdimpl.c	Wed Aug 15 09:41:21 2012 +0100
+++ b/tools/libxl/xl_cmdimpl.c	Tue Aug 21 16:50:02 2012 +0100
@@ -139,7 +139,6 @@ struct domain_create {
     const char *restore_file;
     int migrate_fd; /* -1 means none */
     char **migration_domname_r; /* from malloc */
-    int incr_generationid;
 };
 
 
@@ -1759,10 +1758,6 @@ static int create_domain(struct domain_c
         }
     }
 
-    if (d_config.c_info.type == LIBXL_DOMAIN_TYPE_HVM)
-        libxl_defbool_set(&d_config.b_info.u.hvm.incr_generationid,
-                          dom_info->incr_generationid);
-
     if (debug || dom_info->dryrun)
         printf_info(default_output_format, -1, &d_config);
 
@@ -3183,7 +3178,6 @@ static void migrate_receive(int debug, i
     dom_info.paused = 1;
     dom_info.migrate_fd = recv_fd;
     dom_info.migration_domname_r = &migration_domname;
-    dom_info.incr_generationid = 0;
 
     rc = create_domain(&dom_info);
     if (rc < 0) {
@@ -3364,7 +3358,6 @@ int main_restore(int argc, char **argv)
     dom_info.vnc = vnc;
     dom_info.vncautopass = vncautopass;
     dom_info.console_autoconnect = console_autoconnect;
-    dom_info.incr_generationid = 1;
 
     rc = create_domain(&dom_info);
     if (rc < 0)
@@ -3766,7 +3759,6 @@ int main_create(int argc, char **argv)
     dom_info.vnc = vnc;
     dom_info.vncautopass = vncautopass;
     dom_info.console_autoconnect = console_autoconnect;
-    dom_info.incr_generationid = 0;
 
     rc = create_domain(&dom_info);
     if (rc < 0)
diff -r 6d56e31fe1e1 -r 4b1f399193f5 tools/libxl/xl_sxp.c
--- a/tools/libxl/xl_sxp.c	Wed Aug 15 09:41:21 2012 +0100
+++ b/tools/libxl/xl_sxp.c	Tue Aug 21 16:50:02 2012 +0100
@@ -108,8 +108,6 @@ void printf_info_sexp(int domid, libxl_d
                libxl_timer_mode_to_string(b_info->u.hvm.timer_mode));
         printf("\t\t\t(nestedhvm %s)\n",
                libxl_defbool_to_string(b_info->u.hvm.nested_hvm));
-        printf("\t\t\t(no_incr_generationid %s)\n",
-               libxl_defbool_to_string(b_info->u.hvm.incr_generationid));
         printf("\t\t\t(stdvga %s)\n", b_info->u.hvm.vga.kind ==
                                       LIBXL_VGA_INTERFACE_TYPE_STD ?
                                       "True" : "False");

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 16:03:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 16:03:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3qv4-0008ET-AH; Tue, 21 Aug 2012 16:03:06 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <attilio.rao@citrix.com>) id 1T3qv2-0008EO-K0
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 16:03:04 +0000
Received: from [85.158.143.35:64035] by server-3.bemta-4.messagelabs.com id
	36/2A-09529-731B3305; Tue, 21 Aug 2012 16:03:03 +0000
X-Env-Sender: attilio.rao@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1345564980!14516119!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk4OTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11322 invoked from network); 21 Aug 2012 16:03:01 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Aug 2012 16:03:01 -0000
X-IronPort-AV: E=Sophos;i="4.77,802,1336348800"; d="scan'208";a="14110784"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Aug 2012 16:03:00 +0000
Received: from [10.80.3.221] (10.80.3.221) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Tue, 21 Aug 2012 17:02:57 +0100
Message-ID: <5033AE12.20609@citrix.com>
Date: Tue, 21 Aug 2012 16:49:38 +0100
From: Attilio Rao <attilio.rao@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US;
	rv:1.9.1.16) Gecko/20111110 Icedove/3.0.11
MIME-Version: 1.0
To: Thomas Gleixner <tglx@linutronix.de>
References: <1345511646-12427-1-git-send-email-attilio.rao@citrix.com>
	<1345511646-12427-3-git-send-email-attilio.rao@citrix.com>
	<alpine.LFD.2.02.1208211740340.2856@ionos>
In-Reply-To: <alpine.LFD.2.02.1208211740340.2856@ionos>
Cc: "x86@kernel.org" <x86@kernel.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Ingo Molnar <mingo@redhat.com>, "H. Peter Anvin" <hpa@zytor.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [PATCH 2/5] XEN: Remove the base argument from
 x86_init.paging.pagetable_setup_start PVOPS
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 21/08/12 16:41, Thomas Gleixner wrote:
> On Tue, 21 Aug 2012, Attilio Rao wrote:
>    
>> diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c
>> index 1019156..7999cef 100644
>> --- a/arch/x86/mm/init_32.c
>> +++ b/arch/x86/mm/init_32.c
>> @@ -445,14 +445,17 @@ static inline void permanent_kmaps_init(pgd_t *pgd_base)
>>   }
>>   #endif /* CONFIG_HIGHMEM */
>>
>> -void __init native_pagetable_setup_start(pgd_t *base)
>> +void __init native_pagetable_setup_start(void)
>>   {
>>   	unsigned long pfn, va;
>> +	pgd_t *base;
>>   	pgd_t *pgd;
>>      
> 	pgd_t *pgd, *base = swapper_pg_dir;
>
> Please. No need to add 5 lines just for this.
>
>    

I honestly thought it was cleaner -- but what exactly is your preferred 
choice? Just use swapper_pg_dir directly in the two places that need it?

Attilio

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 16:04:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 16:04:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3qwf-0008MA-QF; Tue, 21 Aug 2012 16:04:45 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <tglx@linutronix.de>) id 1T3qwd-0008M0-Qj
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 16:04:44 +0000
Received: from [85.158.138.51:22716] by server-7.bemta-3.messagelabs.com id
	86/B9-01906-B91B3305; Tue, 21 Aug 2012 16:04:43 +0000
X-Env-Sender: tglx@linutronix.de
X-Msg-Ref: server-11.tower-174.messagelabs.com!1345565080!29453831!1
X-Originating-IP: [62.245.132.108]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4475 invoked from network); 21 Aug 2012 16:04:41 -0000
Received: from www.linutronix.de (HELO Galois.linutronix.de) (62.245.132.108)
	by server-11.tower-174.messagelabs.com with AES256-SHA encrypted
	SMTP; 21 Aug 2012 16:04:41 -0000
Received: from localhost ([127.0.0.1]) by Galois.linutronix.de with esmtps
	(TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32) (Exim 4.72)
	(envelope-from <tglx@linutronix.de>)
	id 1T3qwG-00075Y-SN; Tue, 21 Aug 2012 18:04:21 +0200
Date: Tue, 21 Aug 2012 18:04:19 +0200 (CEST)
From: Thomas Gleixner <tglx@linutronix.de>
To: Attilio Rao <attilio.rao@citrix.com>
In-Reply-To: <5033AE12.20609@citrix.com>
Message-ID: <alpine.LFD.2.02.1208211804020.2856@ionos>
References: <1345511646-12427-1-git-send-email-attilio.rao@citrix.com>
	<1345511646-12427-3-git-send-email-attilio.rao@citrix.com>
	<alpine.LFD.2.02.1208211740340.2856@ionos>
	<5033AE12.20609@citrix.com>
User-Agent: Alpine 2.02 (LFD 1266 2009-07-14)
MIME-Version: 1.0
X-Linutronix-Spam-Score: -1.0
X-Linutronix-Spam-Level: -
X-Linutronix-Spam-Status: No , -1.0 points, 5.0 required, ALL_TRUSTED=-1,
	SHORTCIRCUIT=-0.0001
Cc: "x86@kernel.org" <x86@kernel.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Ingo Molnar <mingo@redhat.com>, "H. Peter Anvin" <hpa@zytor.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [PATCH 2/5] XEN: Remove the base argument from
 x86_init.paging.pagetable_setup_start PVOPS
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 21 Aug 2012, Attilio Rao wrote:
> On 21/08/12 16:41, Thomas Gleixner wrote:
> > On Tue, 21 Aug 2012, Attilio Rao wrote:
> >    
> > > diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c
> > > index 1019156..7999cef 100644
> > > --- a/arch/x86/mm/init_32.c
> > > +++ b/arch/x86/mm/init_32.c
> > > @@ -445,14 +445,17 @@ static inline void permanent_kmaps_init(pgd_t
> > > *pgd_base)
> > >   }
> > >   #endif /* CONFIG_HIGHMEM */
> > > 
> > > -void __init native_pagetable_setup_start(pgd_t *base)
> > > +void __init native_pagetable_setup_start(void)
> > >   {
> > >   	unsigned long pfn, va;
> > > +	pgd_t *base;
> > >   	pgd_t *pgd;
> > >      
> > 	pgd_t *pgd, *base = swapper_pg_dir;
> > 
> > Please. No need to add 5 lines just for this.
> > 
> >    
> 
> I honestly thought it was cleaner -- but what exactly is your preferred
> choice? Just use swapper_pg_dir directly in the two places that need it?

Either that or the line I wrote above.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 16:08:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 16:08:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3qzw-00006g-DR; Tue, 21 Aug 2012 16:08:08 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T3qzu-00006X-LX
	for xen-devel@lists.xen.org; Tue, 21 Aug 2012 16:08:06 +0000
Received: from [85.158.143.35:45153] by server-3.bemta-4.messagelabs.com id
	9C/91-09529-562B3305; Tue, 21 Aug 2012 16:08:05 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1345565285!13448809!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4759 invoked from network); 21 Aug 2012 16:08:05 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-11.tower-21.messagelabs.com with SMTP;
	21 Aug 2012 16:08:05 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 21 Aug 2012 17:08:04 +0100
Message-Id: <5033CEAE0200007800096C99@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Tue, 21 Aug 2012 17:08:46 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ben Guthro" <ben@guthro.net>
References: <1345454240.28762.18.camel@zakaz.uk.xensource.com>
	<CAOvdn6WM96ne9uTRuZN1Exgp7MMOUmmsQWrV8KJ8GZaumMG_Fw@mail.gmail.com>
In-Reply-To: <CAOvdn6WM96ne9uTRuZN1Exgp7MMOUmmsQWrV8KJ8GZaumMG_Fw@mail.gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Ian Campbell <Ian.Campbell@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.2 TODO / Release Plan
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 21.08.12 at 17:39, Ben Guthro <ben@guthro.net> wrote:
> On Mon, Aug 20, 2012 at 5:17 AM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>>
>>     * S3 regression(s?) reported by Ben Guthro (Ben & Jan Beulich)
>>
> 
> No significant progress made on this since the last update.
> More questions raised than answers found. I'm having trouble making
> sense of the current test results.
> 
> More debugging is needed. Jan is working on a debug patch to give to me.

I wasn't able to get to this yesterday or today, and I will only be
working a half day tomorrow. After that I'll be traveling, so it may
well be that I can only get you something after I return from the
summit.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 16:11:07 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 16:11:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3r2f-0000F9-0X; Tue, 21 Aug 2012 16:10:57 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <peter.maydell@linaro.org>) id 1T3r2e-0000F1-2a
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 16:10:56 +0000
Received: from [85.158.143.35:37692] by server-1.bemta-4.messagelabs.com id
	E1/57-07754-F03B3305; Tue, 21 Aug 2012 16:10:55 +0000
X-Env-Sender: peter.maydell@linaro.org
X-Msg-Ref: server-9.tower-21.messagelabs.com!1345565448!6818282!1
X-Originating-IP: [209.85.210.171]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7087 invoked from network); 21 Aug 2012 16:10:53 -0000
Received: from mail-iy0-f171.google.com (HELO mail-iy0-f171.google.com)
	(209.85.210.171)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Aug 2012 16:10:53 -0000
Received: by iabz25 with SMTP id z25so4648081iab.30
	for <xen-devel@lists.xensource.com>;
	Tue, 21 Aug 2012 09:10:48 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type:x-gm-message-state;
	bh=UXtOhRGx5+gZL7ZB5QukJ2NULXPhwt37LWfZiufE5jM=;
	b=LyWpw9XpcWlSTVzH9w5CM8pOpuKBZKB5EZE6rUvDLr+ZN2/4VcGafBEFueyia3/3Bf
	mFJPzA6IRpG+eI7nvAQwD95sMjOm4RDmgYWwErMjGfICn46P8BJldCd+LRT2N93sTbTa
	bumi6LGlyTq2ISAYfZXW4QAe954Rbz/nsIMwnOmX0Pnix3tUefW1F7fuwh/QgMcYaFxA
	n1mTPPNOTcimtCUbHeqOrwP4TAOkVP2vTTh3D8AlAB3osmelPGssfLygJn7P4qwchELJ
	nU7H9BJkcsYBgMtZ35Kvxw4EJavjYirHACLvLDXU0z9kTLtlZMM/TKos8a8CVHaJ8cqA
	l/Ag==
MIME-Version: 1.0
Received: by 10.43.106.147 with SMTP id du19mr14511218icc.56.1345565448486;
	Tue, 21 Aug 2012 09:10:48 -0700 (PDT)
Received: by 10.50.160.161 with HTTP; Tue, 21 Aug 2012 09:10:48 -0700 (PDT)
In-Reply-To: <1345563782-11224-2-git-send-email-ehabkost@redhat.com>
References: <1345563782-11224-1-git-send-email-ehabkost@redhat.com>
	<1345563782-11224-2-git-send-email-ehabkost@redhat.com>
Date: Tue, 21 Aug 2012 17:10:48 +0100
Message-ID: <CAFEAcA9NHk3wdExh=EaQRLs7=txF=3HXLRK+bxik1X8UuskUXA@mail.gmail.com>
From: Peter Maydell <peter.maydell@linaro.org>
To: Eduardo Habkost <ehabkost@redhat.com>
X-Gm-Message-State: ALoCoQl0P/f+OncEqDk/MHZM0HEg+X9CXSRnR8ZSOp6oCoVvHChWK0O6NE3wQG6Gko36owWZXJKc
Cc: jan.kiszka@siemens.com, mjt@tls.msk.ru, qemu-devel@nongnu.org,
	armbru@redhat.com, blauwirbel@gmail.com, kraxel@redhat.com,
	xen-devel@lists.xensource.com, i.mitsyanko@samsung.com,
	mdroth@linux.vnet.ibm.com, avi@redhat.com,
	anthony.perard@citrix.com, lersek@redhat.com,
	stefanha@linux.vnet.ibm.com, stefano.stabellini@eu.citrix.com,
	sw@weilnetz.de, imammedo@redhat.com, lcapitulino@redhat.com,
	rth@twiddle.net, kwolf@redhat.com, aliguori@us.ibm.com,
	mtosatti@redhat.com, pbonzini@redhat.com, afaerber@suse.de
Subject: Re: [Xen-devel] [RFC 1/8] move qemu_irq typedef out of cpu-common.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 21 August 2012 16:42, Eduardo Habkost <ehabkost@redhat.com> wrote:
> diff --git a/qemu-common.h b/qemu-common.h
> index e5c2bcd..6677a30 100644
> --- a/qemu-common.h
> +++ b/qemu-common.h
> @@ -273,7 +273,6 @@ typedef struct PCIEPort PCIEPort;
>  typedef struct PCIESlot PCIESlot;
>  typedef struct MSIMessage MSIMessage;
>  typedef struct SerialState SerialState;
> -typedef struct IRQState *qemu_irq;
>  typedef struct PCMCIACardState PCMCIACardState;
>  typedef struct MouseTransformInfo MouseTransformInfo;
>  typedef struct uWireSlave uWireSlave;
> diff --git a/sysemu.h b/sysemu.h
> index 65552ac..f765821 100644
> --- a/sysemu.h
> +++ b/sysemu.h
> @@ -9,6 +9,7 @@
>  #include "qapi-types.h"
>  #include "notify.h"
>  #include "main-loop.h"
> +#include "hw/irq.h"
>
>  /* vl.c */

I'm not objecting to this patch if it helps us move forwards,
but adding the #include to sysemu.h is effectively just adding
the definition to another grabbag header (183 files include
sysemu.h). It would be nicer long-term to separate out the
one thing in this header that cares about qemu_irq (the extern
declaration of qemu_system_powerdown).
[I'm not really convinced that a qemu_irq is even the right
way to signal "hey the system has actually powered down now"...]

-- PMM

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 16:24:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 16:24:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3rG0-0000TG-Hm; Tue, 21 Aug 2012 16:24:44 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1T3rFy-0000T2-Db
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 16:24:42 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1345566275!2345964!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjU1NjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1262 invoked from network); 21 Aug 2012 16:24:36 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Aug 2012 16:24:36 -0000
X-IronPort-AV: E=Sophos;i="4.77,802,1336363200"; d="scan'208";a="35320016"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Aug 2012 12:24:34 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Tue, 21 Aug 2012 12:24:34 -0400
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1T3rFq-0002wA-0O;
	Tue, 21 Aug 2012 17:24:34 +0100
From: David Vrabel <david.vrabel@citrix.com>
To: xen-devel@lists.xensource.com
Date: Tue, 21 Aug 2012 17:24:25 +0100
Message-ID: <1345566265-11618-1-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
MIME-Version: 1.0
Cc: David Vrabel <david.vrabel@citrix.com>
Subject: [Xen-devel] [PATCH] xenconsoled: clean-up after all dead domains
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: David Vrabel <david.vrabel@citrix.com>

xenconsoled expected domains that are being shut down to end up in
the DYING state and would only clean up such domains.  HVM domains
either didn't enter the DYING state or weren't in it long enough for
xenconsoled to notice.

For every shut-down HVM domain, xenconsoled would leak memory, grow
its list of domains and (if guest console logging was enabled) leak
the log file descriptor.  If enough HVM domains were shut down and
file descriptors leaked, no more console connections would work as
the evtchn device could not be opened.  Guests would then block
waiting to send console output.

Fix this by tagging domains that exist in enum_domains().  Afterwards,
all untagged domains are assumed to be dead and are shut down and
cleaned up.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 tools/console/daemon/io.c |   12 +++++++++++-
 1 files changed, 11 insertions(+), 1 deletions(-)

diff --git a/tools/console/daemon/io.c b/tools/console/daemon/io.c
index f09d63a..592085f 100644
--- a/tools/console/daemon/io.c
+++ b/tools/console/daemon/io.c
@@ -84,6 +84,7 @@ struct domain {
 	int slave_fd;
 	int log_fd;
 	bool is_dead;
+	unsigned last_seen;
 	struct buffer buffer;
 	struct domain *next;
 	char *conspath;
@@ -727,12 +728,16 @@ static void shutdown_domain(struct domain *d)
 	d->xce_handle = NULL;
 }
 
+static unsigned enum_pass = 0;
+
 void enum_domains(void)
 {
 	int domid = 1;
 	xc_dominfo_t dominfo;
 	struct domain *dom;
 
+	enum_pass++;
+
 	while (xc_domain_getinfo(xc, domid, 1, &dominfo) == 1) {
 		dom = lookup_domain(dominfo.domid);
 		if (dominfo.dying) {
@@ -740,8 +745,10 @@ void enum_domains(void)
 				shutdown_domain(dom);
 		} else {
 			if (dom == NULL)
-				create_domain(dominfo.domid);
+				dom = create_domain(dominfo.domid);
 		}
+		if (dom)
+			dom->last_seen = enum_pass;
 		domid = dominfo.domid + 1;
 	}
 }
@@ -1068,6 +1075,9 @@ void handle_io(void)
 			if (d->master_fd != -1 && FD_ISSET(d->master_fd,
 							   &writefds))
 				handle_tty_write(d);
+			
+			if (d->last_seen != enum_pass)
+				shutdown_domain(d);
 
 			if (d->is_dead)
 				cleanup_domain(d);
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Tue Aug 21 16:29:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 16:29:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3rKr-0000jw-El; Tue, 21 Aug 2012 16:29:45 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <peter.maydell@linaro.org>) id 1T3rKq-0000jr-EB
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 16:29:44 +0000
Received: from [85.158.143.35:3489] by server-1.bemta-4.messagelabs.com id
	FF/6C-07754-777B3305; Tue, 21 Aug 2012 16:29:43 +0000
X-Env-Sender: peter.maydell@linaro.org
X-Msg-Ref: server-3.tower-21.messagelabs.com!1345566582!15241230!1
X-Originating-IP: [209.85.210.171]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16910 invoked from network); 21 Aug 2012 16:29:43 -0000
Received: from mail-iy0-f171.google.com (HELO mail-iy0-f171.google.com)
	(209.85.210.171)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Aug 2012 16:29:43 -0000
Received: by iabz25 with SMTP id z25so4663596iab.30
	for <xen-devel@lists.xensource.com>;
	Tue, 21 Aug 2012 09:29:41 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type:x-gm-message-state;
	bh=JABsVdiUwwCGaKYfsf6PM7UjAMkmdCtm8baVVdfMtPw=;
	b=pEYO/M44Ho9tpw6DG6s4X9Jjr4IpMw2H6LAgt9rbzKw2gKhFQG8yZIJu6I9QJ9jzpA
	AS4Rqd6raLcJ1TCVBNySdIr8h2zt3qADIsYSK1519scD87wfxja9q9jXnswbS5Xrngwv
	fzMeqp4C3nviDGM4JQJhNEU9km9lERa/lkCSgOD+T0PDxPb1V/fuLVYVCEBJSlaB+i3f
	iuhKCfcnUeSY3Igj2PFrJowGNesQysgDSuvNZPl28d7+rLJtAKk9aT+HgYcBSUMQNMM8
	Gd4cen1/vyCSyqCfbvJolbFn7zm+h0TMiCODtLOeRrkd3bU3SSPJfjTv+Q8lu6VlOz9I
	Mv9w==
MIME-Version: 1.0
Received: by 10.43.106.147 with SMTP id du19mr14569919icc.56.1345566581837;
	Tue, 21 Aug 2012 09:29:41 -0700 (PDT)
Received: by 10.50.160.161 with HTTP; Tue, 21 Aug 2012 09:29:41 -0700 (PDT)
In-Reply-To: <1345563782-11224-7-git-send-email-ehabkost@redhat.com>
References: <1345563782-11224-1-git-send-email-ehabkost@redhat.com>
	<1345563782-11224-7-git-send-email-ehabkost@redhat.com>
Date: Tue, 21 Aug 2012 17:29:41 +0100
Message-ID: <CAFEAcA_uTP+MkAduqLuKFmx=fd5EdXeT6=s1WqA92sC+12EkNQ@mail.gmail.com>
From: Peter Maydell <peter.maydell@linaro.org>
To: Eduardo Habkost <ehabkost@redhat.com>
X-Gm-Message-State: ALoCoQmgW6lunzCa8HgA9g7Pfjen+164CsbYVdy+uHJfUc8lk5wkl6Q2Uiag1H94vlXI0tJwLGrA
Cc: jan.kiszka@siemens.com, mjt@tls.msk.ru, qemu-devel@nongnu.org,
	armbru@redhat.com, blauwirbel@gmail.com, kraxel@redhat.com,
	xen-devel@lists.xensource.com, i.mitsyanko@samsung.com,
	mdroth@linux.vnet.ibm.com, avi@redhat.com,
	anthony.perard@citrix.com, lersek@redhat.com,
	stefanha@linux.vnet.ibm.com, stefano.stabellini@eu.citrix.com,
	sw@weilnetz.de, imammedo@redhat.com, lcapitulino@redhat.com,
	rth@twiddle.net, kwolf@redhat.com, aliguori@us.ibm.com,
	mtosatti@redhat.com, pbonzini@redhat.com, afaerber@suse.de
Subject: Re: [Xen-devel] [RFC 6/8] qdev: use full qdev.h include path on
	qdev*.c
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 21 August 2012 16:43, Eduardo Habkost <ehabkost@redhat.com> wrote:
> Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>

This could use a commit message saying why rather than merely
what the patch does.

-- PMM

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Tue Aug 21 16:45:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 16:45:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3ra8-0000xv-VT; Tue, 21 Aug 2012 16:45:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <peter.maydell@linaro.org>) id 1T3ra7-0000xq-PZ
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 16:45:32 +0000
Received: from [85.158.143.99:35878] by server-2.bemta-4.messagelabs.com id
	C3/E9-21239-B2BB3305; Tue, 21 Aug 2012 16:45:31 +0000
X-Env-Sender: peter.maydell@linaro.org
X-Msg-Ref: server-6.tower-216.messagelabs.com!1345567527!22295180!1
X-Originating-IP: [209.85.161.171]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9359 invoked from network); 21 Aug 2012 16:45:28 -0000
Received: from mail-gg0-f171.google.com (HELO mail-gg0-f171.google.com)
	(209.85.161.171)
	by server-6.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Aug 2012 16:45:28 -0000
Received: by ggnp1 with SMTP id p1so11559ggn.30
	for <xen-devel@lists.xensource.com>;
	Tue, 21 Aug 2012 09:45:27 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type:x-gm-message-state;
	bh=hUf+qgpGuP/2eT2EHqs3PTF2+yg0TjuLeVrc5lLD/0I=;
	b=KljoeHUwAyHshFT9FbbvOxAKpWgMWQ6ySsPSlGCisrhn8obFDGVoZIC0Gp04f953US
	bBLNRj7BVbX7Sv1hJvYYkT6r5NYukVoUkLpcaHAOkrSh5Pd3S/wIxtPAwFr1qrkEnRui
	4IGo6FygOhc+jVpd76KdVRKDUx6S+8qvsIZVZtMiWAa1owbt2B196gFTPaVEj927JZuw
	aDGaC43dzcGtP1l4FavonNrVMxbY2wpcCSur+6gUs2yZq00EWPOfunmBGgvRofhXhYwr
	s9r5lEjm63BcFqOb8xx+fY5kkp+JBqqnS1JsT5OkE4kEMwUVnLi5/Y60a9vB5JOvM9wv
	00YQ==
MIME-Version: 1.0
Received: by 10.50.186.130 with SMTP id fk2mr14207918igc.60.1345567526741;
	Tue, 21 Aug 2012 09:45:26 -0700 (PDT)
Received: by 10.50.160.161 with HTTP; Tue, 21 Aug 2012 09:45:26 -0700 (PDT)
In-Reply-To: <1345563782-11224-5-git-send-email-ehabkost@redhat.com>
References: <1345563782-11224-1-git-send-email-ehabkost@redhat.com>
	<1345563782-11224-5-git-send-email-ehabkost@redhat.com>
Date: Tue, 21 Aug 2012 17:45:26 +0100
Message-ID: <CAFEAcA8ji9VE6+r5ybpQq7Miri7hbt9GgpMom8aMx4XQ4h6pCg@mail.gmail.com>
From: Peter Maydell <peter.maydell@linaro.org>
To: Eduardo Habkost <ehabkost@redhat.com>
X-Gm-Message-State: ALoCoQl4aGV8J8xugB4lh9Qt3E8XaBYKNlL4ODaXt/R71N6yomc1ESgQ+/3h02O7dpt0tpK3EJA2
Cc: jan.kiszka@siemens.com, mjt@tls.msk.ru, qemu-devel@nongnu.org,
	armbru@redhat.com, blauwirbel@gmail.com, kraxel@redhat.com,
	xen-devel@lists.xensource.com, i.mitsyanko@samsung.com,
	mdroth@linux.vnet.ibm.com, avi@redhat.com,
	anthony.perard@citrix.com, lersek@redhat.com,
	stefanha@linux.vnet.ibm.com, stefano.stabellini@eu.citrix.com,
	sw@weilnetz.de, imammedo@redhat.com, lcapitulino@redhat.com,
	rth@twiddle.net, kwolf@redhat.com, aliguori@us.ibm.com,
	mtosatti@redhat.com, pbonzini@redhat.com, afaerber@suse.de
Subject: Re: [Xen-devel] [RFC 4/8] cleanup error.h,
	included qapi-types.h aready has stdbool.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 21 August 2012 16:42, Eduardo Habkost <ehabkost@redhat.com> wrote:
> From: Igor Mammedov <imammedo@redhat.com>
>
> Signed-off-by: Igor Mammedov <imammedo@redhat.com>

I thought we'd agreed to drop this patch?
http://lists.gnu.org/archive/html/qemu-devel/2012-08/msg03644.html

-- PMM

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Tue Aug 21 16:59:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 16:59:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3rna-00018N-Ey; Tue, 21 Aug 2012 16:59:26 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <peter.maydell@linaro.org>) id 1T3rnZ-00018I-5P
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 16:59:25 +0000
Received: from [85.158.143.99:14668] by server-3.bemta-4.messagelabs.com id
	D0/C7-09529-C6EB3305; Tue, 21 Aug 2012 16:59:24 +0000
X-Env-Sender: peter.maydell@linaro.org
X-Msg-Ref: server-5.tower-216.messagelabs.com!1345568362!28983649!1
X-Originating-IP: [209.85.213.171]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29863 invoked from network); 21 Aug 2012 16:59:23 -0000
Received: from mail-yx0-f171.google.com (HELO mail-yx0-f171.google.com)
	(209.85.213.171)
	by server-5.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Aug 2012 16:59:23 -0000
Received: by yenl4 with SMTP id l4so27185yen.30
	for <xen-devel@lists.xensource.com>;
	Tue, 21 Aug 2012 09:59:22 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type:x-gm-message-state;
	bh=Pb+aahCoS3xhC8eQqkE/d6KQAmBg9oClcjoiF/ZkjIc=;
	b=Z96Sy4nPfu14v4pn7VQCeEiOMHgYRNbVU6+KpSsi6DXDnqjzaZFg/y0qaf6RrGBOwx
	GchbSOdlXoXaaNgB/PzrIJ97AUmjegfAWAM3alPUpIGSrSsYl38zmFte4kpM33kVDBDd
	XpaqtSj3n3PCBh22EXiS1W5GltqdeeAJkHUi+rF6V5ixFuvFOHAobaPyX5LQNzO8rnji
	CcQub5ws0WBh60BJ2kuhjVmGRXubTwp/Z8jrj7Y7GPxTVWfu1bd7TCHsQQHxi2ocNqLp
	1EbMyB8XZX0WXKBpMP+aQLVnM3lQnTqfQmCpwYrzTLiDuZ4oMcn/TlBa+FrVHfWtHIC2
	GjEg==
MIME-Version: 1.0
Received: by 10.50.95.200 with SMTP id dm8mr14249938igb.60.1345568362235; Tue,
	21 Aug 2012 09:59:22 -0700 (PDT)
Received: by 10.50.160.161 with HTTP; Tue, 21 Aug 2012 09:59:22 -0700 (PDT)
In-Reply-To: <1345563782-11224-8-git-send-email-ehabkost@redhat.com>
References: <1345563782-11224-1-git-send-email-ehabkost@redhat.com>
	<1345563782-11224-8-git-send-email-ehabkost@redhat.com>
Date: Tue, 21 Aug 2012 17:59:22 +0100
Message-ID: <CAFEAcA9R6GBZB_B1=sn4iAVXBGL8CyPjxNCaL=6fF+wAiyEwSA@mail.gmail.com>
From: Peter Maydell <peter.maydell@linaro.org>
To: Eduardo Habkost <ehabkost@redhat.com>
X-Gm-Message-State: ALoCoQk6DsVoLWIchVyPrSJtESONe5Z8Rk6d1rVNE8g93Ri5DrW2Dvt5pEHiDywW+CFB3twlOhyz
Cc: jan.kiszka@siemens.com, mjt@tls.msk.ru, qemu-devel@nongnu.org,
	armbru@redhat.com, blauwirbel@gmail.com, kraxel@redhat.com,
	xen-devel@lists.xensource.com, i.mitsyanko@samsung.com,
	mdroth@linux.vnet.ibm.com, avi@redhat.com,
	anthony.perard@citrix.com, lersek@redhat.com,
	stefanha@linux.vnet.ibm.com, stefano.stabellini@eu.citrix.com,
	sw@weilnetz.de, imammedo@redhat.com, lcapitulino@redhat.com,
	rth@twiddle.net, kwolf@redhat.com, aliguori@us.ibm.com,
	mtosatti@redhat.com, pbonzini@redhat.com, afaerber@suse.de
Subject: Re: [Xen-devel] [RFC 7/8] include core qdev code into *-user, too
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 21 August 2012 16:43, Eduardo Habkost <ehabkost@redhat.com> wrote:
> The code depends on some functions from qemu-option.o, so add
> qemu-option.o to qom-obj-y to make sure it's included.
>
> Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
> ---
>  Makefile.objs                                   | 1 +
>  hw/Makefile.objs                                | 2 +-
>  qom/Makefile.objs                               | 2 +-
>  hw/qdev-properties.c => qom/device-properties.c | 0
>  hw/qdev.c => qom/device.c                       | 0
>  5 files changed, 3 insertions(+), 2 deletions(-)
>  rename hw/qdev-properties.c => qom/device-properties.c (100%)
>  rename hw/qdev.c => qom/device.c (100%)
>
> diff --git a/Makefile.objs b/Makefile.objs
> index 4412757..2cf91c2 100644
> --- a/Makefile.objs
> +++ b/Makefile.objs
> @@ -14,6 +14,7 @@ universal-obj-y += $(qobject-obj-y)
>  #######################################################################
>  # QOM
>  qom-obj-y = qom/
> +qom-obj-y += qemu-option.o

qemu-option.c isn't actually QOM-related code...

-- PMM

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 17:38:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 17:38:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3sOd-0001PX-ST; Tue, 21 Aug 2012 17:37:43 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T3sOb-0001PS-UA
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 17:37:42 +0000
Received: from [85.158.143.99:30547] by server-2.bemta-4.messagelabs.com id
	FF/44-21239-567C3305; Tue, 21 Aug 2012 17:37:41 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1345570659!28754333!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNzAyMzU2\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27507 invoked from network); 21 Aug 2012 17:37:40 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-3.tower-216.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Aug 2012 17:37:40 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7LHbWIK007606
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 21 Aug 2012 17:37:33 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7LHbV82003047
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 21 Aug 2012 17:37:32 GMT
Received: from abhmt119.oracle.com (abhmt119.oracle.com [141.146.116.71])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7LHbVDQ016351; Tue, 21 Aug 2012 12:37:31 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 21 Aug 2012 10:37:31 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id E2EC24031E; Tue, 21 Aug 2012 13:27:32 -0400 (EDT)
Date: Tue, 21 Aug 2012 13:27:32 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>, JBeulich@suse.com
Message-ID: <20120821172732.GA23715@phenom.dumpdata.com>
References: <1345133009-21941-1-git-send-email-konrad.wilk@oracle.com>
	<1345133009-21941-3-git-send-email-konrad.wilk@oracle.com>
	<alpine.DEB.2.02.1208171835010.15568@kaball.uk.xensource.com>
	<20120820141305.GA2713@phenom.dumpdata.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20120820141305.GA2713@phenom.dumpdata.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: [Xen-devel] Q:pt_base in COMPAT mode offset by two pages. Was:Re:
 [PATCH 02/11] xen/x86: Use memblock_reserve for sensitive areas.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Aug 20, 2012 at 10:13:05AM -0400, Konrad Rzeszutek Wilk wrote:
> On Fri, Aug 17, 2012 at 06:35:12PM +0100, Stefano Stabellini wrote:
> > On Thu, 16 Aug 2012, Konrad Rzeszutek Wilk wrote:
> > > instead of a big memblock_reserve. This way we can be more
> > > selective in freeing regions (and it also makes it easier
> > > to understand where is what).
> > > 
> > > [v1: Move the auto_translate_physmap to proper line]
> > > [v2: Per Stefano suggestion add more comments]
> > > Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > 
> > much better now!
> 
> Though, interestingly enough, it breaks 32-bit dom0s (and only dom0s).
> Will have a revised patch posted shortly.

Jan, I noticed something odd. Part of this code replaces this:

	memblock_reserve(__pa(xen_start_info->mfn_list),
		xen_start_info->pt_base - xen_start_info->mfn_list);

with more selective region-by-region reservations. What I found out is that
if I boot this as a 32-bit guest under a 64-bit hypervisor,
xen_start_info->pt_base is actually wrong.

Specifically this is what bootup says:

(good working case - 32-bit hypervisor with 32-bit dom0):
(XEN)  Loaded kernel: c1000000->c1a23000
(XEN)  Init. ramdisk: c1a23000->cf730e00
(XEN)  Phys-Mach map: cf731000->cf831000
(XEN)  Start info:    cf831000->cf83147c
(XEN)  Page tables:   cf832000->cf8b5000
(XEN)  Boot stack:    cf8b5000->cf8b6000
(XEN)  TOTAL:         c0000000->cfc00000

[    0.000000] PT: cf832000 (f832000)
[    0.000000] Reserving PT: f832000->f8b5000

And with a 64-bit hypervisor:

(XEN) VIRTUAL MEMORY ARRANGEMENT:
(XEN)  Loaded kernel: 00000000c1000000->00000000c1a23000
(XEN)  Init. ramdisk: 00000000c1a23000->00000000cf730e00
(XEN)  Phys-Mach map: 00000000cf731000->00000000cf831000
(XEN)  Start info:    00000000cf831000->00000000cf8314b4
(XEN)  Page tables:   00000000cf832000->00000000cf8b6000
(XEN)  Boot stack:    00000000cf8b6000->00000000cf8b7000
(XEN)  TOTAL:         00000000c0000000->00000000cfc00000
(XEN)  ENTRY ADDRESS: 00000000c16bb22c

[    0.000000] PT: cf834000 (f834000)
[    0.000000] Reserving PT: f834000->f8b8000

So pt_base is offset by two pages. And looking at c/s 13257,
it's not clear to me why this two-page offset was added.

The toolstack works fine - launching 32-bit guests under either
a 32-bit or a 64-bit hypervisor works:
] domainbuilder: detail: xc_dom_alloc_segment:   page tables  : 0xcf805000 -> 0xcf885000  (pfn 0xf805 + 0x80 pages)
[    0.000000] PT: cf805000 (f805000)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 18:22:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 18:22:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3t5z-0002MF-Oh; Tue, 21 Aug 2012 18:22:31 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lcapitulino@redhat.com>) id 1T3t5y-0002MA-3U
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 18:22:30 +0000
Received: from [85.158.143.35:63747] by server-1.bemta-4.messagelabs.com id
	EE/46-07754-5E1D3305; Tue, 21 Aug 2012 18:22:29 +0000
X-Env-Sender: lcapitulino@redhat.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1345573347!15324665!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQ0MzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16077 invoked from network); 21 Aug 2012 18:22:28 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-16.tower-21.messagelabs.com with SMTP;
	21 Aug 2012 18:22:28 -0000
Received: from int-mx01.intmail.prod.int.phx2.redhat.com
	(int-mx01.intmail.prod.int.phx2.redhat.com [10.5.11.11])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id q7LIL8sx014027
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 21 Aug 2012 14:21:08 -0400
Received: from doriath.home (ovpn-113-75.phx2.redhat.com [10.3.113.75])
	by int-mx01.intmail.prod.int.phx2.redhat.com (8.13.8/8.13.8) with ESMTP
	id q7LIL2Cd005993; Tue, 21 Aug 2012 14:21:02 -0400
Date: Tue, 21 Aug 2012 15:21:48 -0300
From: Luiz Capitulino <lcapitulino@redhat.com>
To: Eduardo Habkost <ehabkost@redhat.com>
Message-ID: <20120821152148.3bcff810@doriath.home>
In-Reply-To: <1345563782-11224-4-git-send-email-ehabkost@redhat.com>
References: <1345563782-11224-1-git-send-email-ehabkost@redhat.com>
	<1345563782-11224-4-git-send-email-ehabkost@redhat.com>
Organization: Red Hat
Mime-Version: 1.0
X-Scanned-By: MIMEDefang 2.67 on 10.5.11.11
Cc: peter.maydell@linaro.org, jan.kiszka@siemens.com, mjt@tls.msk.ru,
	qemu-devel@nongnu.org, armbru@redhat.com, blauwirbel@gmail.com,
	kraxel@redhat.com, xen-devel@lists.xensource.com,
	i.mitsyanko@samsung.com, mdroth@linux.vnet.ibm.com,
	avi@redhat.com, anthony.perard@citrix.com, lersek@redhat.com,
	stefanha@linux.vnet.ibm.com, stefano.stabellini@eu.citrix.com,
	sw@weilnetz.de, imammedo@redhat.com, rth@twiddle.net,
	kwolf@redhat.com, aliguori@us.ibm.com, mtosatti@redhat.com,
	pbonzini@redhat.com, afaerber@suse.de
Subject: Re: [Xen-devel] [RFC 3/8] qapi-types.h doesn't really need to
 include qemu-common.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 21 Aug 2012 12:42:57 -0300
Eduardo Habkost <ehabkost@redhat.com> wrote:

> From: Igor Mammedov <imammedo@redhat.com>
> 
> needed to prevent build breakage when CPU becomes a child of DeviceState
> 
> Signed-off-by: Igor Mammedov <imammedo@redhat.com>
> ---
>  scripts/qapi-types.py | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/scripts/qapi-types.py b/scripts/qapi-types.py
> index cf601ae..f34addb 100644
> --- a/scripts/qapi-types.py
> +++ b/scripts/qapi-types.py
> @@ -263,7 +263,7 @@ fdecl.write(mcgen('''
>  #ifndef %(guard)s
>  #define %(guard)s
>  
> -#include "qemu-common.h"
> +#include <stdbool.h>

In case you didn't notice my last review: we should also include <stdint.h> here.

>  
>  ''',
>                    guard=guardname(h_file)))


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 18:23:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 18:23:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3t6B-0002Ml-55; Tue, 21 Aug 2012 18:22:43 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <ehabkost@redhat.com>) id 1T3t69-0002MR-Mn
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 18:22:41 +0000
X-Env-Sender: ehabkost@redhat.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1345573354!2360406!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQ0MzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3547 invoked from network); 21 Aug 2012 18:22:35 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-11.tower-27.messagelabs.com with SMTP;
	21 Aug 2012 18:22:35 -0000
Received: from int-mx11.intmail.prod.int.phx2.redhat.com
	(int-mx11.intmail.prod.int.phx2.redhat.com [10.5.11.24])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id q7LILYv3014959
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 21 Aug 2012 14:22:28 -0400
Received: from blackpad.lan.raisama.net (vpn1-6-72.gru2.redhat.com
	[10.97.6.72])
	by int-mx11.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP
	id q7LGq9Me002453; Tue, 21 Aug 2012 12:52:10 -0400
Received: by blackpad.lan.raisama.net (Postfix, from userid 500)
	id 602B2200CDE; Tue, 21 Aug 2012 13:53:21 -0300 (BRT)
Date: Tue, 21 Aug 2012 13:53:21 -0300
From: Eduardo Habkost <ehabkost@redhat.com>
To: Peter Maydell <peter.maydell@linaro.org>
Message-ID: <20120821165321.GA2886@otherpad.lan.raisama.net>
References: <1345563782-11224-1-git-send-email-ehabkost@redhat.com>
	<1345563782-11224-5-git-send-email-ehabkost@redhat.com>
	<CAFEAcA8ji9VE6+r5ybpQq7Miri7hbt9GgpMom8aMx4XQ4h6pCg@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAFEAcA8ji9VE6+r5ybpQq7Miri7hbt9GgpMom8aMx4XQ4h6pCg@mail.gmail.com>
X-Fnord: you can see the fnord
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.24
Cc: jan.kiszka@siemens.com, mjt@tls.msk.ru, qemu-devel@nongnu.org,
	armbru@redhat.com, blauwirbel@gmail.com, kraxel@redhat.com,
	xen-devel@lists.xensource.com, i.mitsyanko@samsung.com,
	mdroth@linux.vnet.ibm.com, avi@redhat.com,
	anthony.perard@citrix.com, lersek@redhat.com,
	stefanha@linux.vnet.ibm.com, stefano.stabellini@eu.citrix.com,
	sw@weilnetz.de, imammedo@redhat.com, lcapitulino@redhat.com,
	rth@twiddle.net, kwolf@redhat.com, aliguori@us.ibm.com,
	mtosatti@redhat.com, pbonzini@redhat.com, afaerber@suse.de
Subject: Re: [Xen-devel] [RFC 4/8] cleanup error.h,
 included qapi-types.h aready has stdbool.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Aug 21, 2012 at 05:45:26PM +0100, Peter Maydell wrote:
> On 21 August 2012 16:42, Eduardo Habkost <ehabkost@redhat.com> wrote:
> > From: Igor Mammedov <imammedo@redhat.com>
> >
> > Signed-off-by: Igor Mammedov <imammedo@redhat.com>
> 
> I thought we'd agreed to drop this patch?
> http://lists.gnu.org/archive/html/qemu-devel/2012-08/msg03644.html

Sure. I just used the existing series as a base. The main point is to ask
for comments on the last 4 patches.

-- 
Eduardo

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	<CAFEAcA8ji9VE6+r5ybpQq7Miri7hbt9GgpMom8aMx4XQ4h6pCg@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAFEAcA8ji9VE6+r5ybpQq7Miri7hbt9GgpMom8aMx4XQ4h6pCg@mail.gmail.com>
X-Fnord: you can see the fnord
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.24
Cc: jan.kiszka@siemens.com, mjt@tls.msk.ru, qemu-devel@nongnu.org,
	armbru@redhat.com, blauwirbel@gmail.com, kraxel@redhat.com,
	xen-devel@lists.xensource.com, i.mitsyanko@samsung.com,
	mdroth@linux.vnet.ibm.com, avi@redhat.com,
	anthony.perard@citrix.com, lersek@redhat.com,
	stefanha@linux.vnet.ibm.com, stefano.stabellini@eu.citrix.com,
	sw@weilnetz.de, imammedo@redhat.com, lcapitulino@redhat.com,
	rth@twiddle.net, kwolf@redhat.com, aliguori@us.ibm.com,
	mtosatti@redhat.com, pbonzini@redhat.com, afaerber@suse.de
Subject: Re: [Xen-devel] [RFC 4/8] cleanup error.h,
 included qapi-types.h aready has stdbool.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Aug 21, 2012 at 05:45:26PM +0100, Peter Maydell wrote:
> On 21 August 2012 16:42, Eduardo Habkost <ehabkost@redhat.com> wrote:
> > From: Igor Mammedov <imammedo@redhat.com>
> >
> > Signed-off-by: Igor Mammedov <imammedo@redhat.com>
> 
> I thought we'd agreed to drop this patch?
> http://lists.gnu.org/archive/html/qemu-devel/2012-08/msg03644.html

Sure. I just used the existing series as a base. The main point is to ask
for comments about the last 4 patches.

-- 
Eduardo

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 18:28:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 18:28:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3tAz-0002cT-Sz; Tue, 21 Aug 2012 18:27:41 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1T3tAy-0002cJ-BF
	for xen-devel@lists.xen.org; Tue, 21 Aug 2012 18:27:40 +0000
Received: from [85.158.143.99:54464] by server-1.bemta-4.messagelabs.com id
	B7/59-07754-B13D3305; Tue, 21 Aug 2012 18:27:39 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-14.tower-216.messagelabs.com!1345573658!19712438!1
X-Originating-IP: [192.89.123.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjg5LjEyMy4yNSA9PiA0ODY3NDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27659 invoked from network); 21 Aug 2012 18:27:38 -0000
Received: from smtp.tele.fi (HELO smtp.tele.fi) (192.89.123.25)
	by server-14.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 21 Aug 2012 18:27:38 -0000
X-Originating-Ip: [194.89.68.22]
Received: from ydin.reaktio.net (reaktio.net [194.89.68.22])
	by smtp.tele.fi (Postfix) with ESMTP id A34F11DF3;
	Tue, 21 Aug 2012 21:27:37 +0300 (EEST)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id 2BA6B2005D; Tue, 21 Aug 2012 21:27:37 +0300 (EEST)
Date: Tue, 21 Aug 2012 21:27:37 +0300
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: George Dunlap <george.dunlap@eu.citrix.com>
Message-ID: <20120821182736.GC19851@reaktio.net>
References: <CAFLBxZaGmvSYgL4DcHcSwE_0shqKC0DRbf7ab=uhFrerPCxRig@mail.gmail.com>
	<20120820191429.GY19851@reaktio.net>
	<5033857B.2000804@eu.citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <5033857B.2000804@eu.citrix.com>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.3 release planning proposal
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Aug 21, 2012 at 01:56:27PM +0100, George Dunlap wrote:
> On 20/08/12 20:14, Pasi Kärkkäinen wrote:
> >Another USB item:
> >
> >* xl support for USB device passthru using QEMU emulated USB for HVM guests (no need for PVUSB drivers in the HVM guest).
> >   This works today in xm/xend with qemu-traditional, but is limited to USB 1.1, probably because
> >   the old version of Qemu-dm-traditional lacks USB 2.0/3.0.
> >   So xl support for emulated USB device passthru for both qemu-upstream and qemu-traditional.
>
> OK, I'll put that on the list.  Thanks.
>

Thanks!

> >More wishlist items:
>
> I'm going to be experimenting with a website designed for user
> feedback, both to suggest features and to help prioritize what
> you think are the important features. Expect an e-mail in a day or
> two with the announcement.
>

Good idea.

> I think the Citrix team is probably mostly full for this release
> cycle; so anything that requires significant development work but
> doesn't already have developers working on it will probably have to
> be put off until later, unless you can convince the regular devs
> it's something to prioritize, or you can bring in more developers to
> work on it. :-)
>

Yeah, I can see that :) If we aren't able to get these done for 4.3,
some of these might be good GSoC projects as well..

> >* Nested hardware virtualization. Important for easier testing and development of Xen (Xen-on-Xen),
> >   and for running other hypervisors in Xen VMs. Interesting for labs, POCs, etc.
>
> The ball is really in Intel / AMD's court on this one.  It's on our
> list of "things that might be nice", but given the other things we
> really do want to try to make into 4.3, it wasn't that big of a
> priority for us.  If Intel or AMD want to make this a priority for
> them for 4.3, I can track it.
>

I think AMD nested SVM is in pretty good shape already,
but Intel nested VMX is not there yet..

I think we'll know more about this after XenSummit.

> >* VGA/GPU passthru support for AMD/NVIDIA; lots of patches in the xen-devel archives,
> >   but no one has yet stepped up to clean them up and get them merged.
> >   Currently Intel gfx passthru patches are merged into Xen, but primary ATI/NVIDIA require extra patches.
> >   This is actually something that a LOT of users ask for often; it's discussed almost every day on ##xen on IRC.
> >   I wonder if the XenClient folks could help here?
>
> What kind of patches are these? Are they mostly to pvops Linux?
>

I think mostly Qemu-dm patches.. some "tricks" are required to
fetch the VBIOS from the physical gfx card so it can be copied to the HVM guest
(some vendor-specific hacks are needed to get a properly working copy of the VBIOS).
Also legacy IO ports, legacy VGA memory ranges, MMIOs, and VESA/VBE need to be passed thru, etc.

Intel-specific tweaks are already in xen-unstable, but the AMD/NVIDIA-specific ones aren't.

> >* Dom0 keyboard/mouse sharing to HVM guests; mainly needed by VGA/GPU passthru users.
> >   Fujitsu guys posted some patches for this in 2010, and XenClient guys in 2009 (IIRC),
> >   but nothing got further developed and merged into upstream Xen.
>
> What exactly is involved here? If the work is mostly done, but just
> needs someone to dust it off and submit it, then it's probably
> something we can add to the list.
>

Here's the patch from Dietmar Hahn / Fujitsu:
http://lists.xen.org/archives/html/xen-devel/2010-03/msg01292.html

And here's some discussion about the XenClient keyboard/mouse sharing patch from Jean Guyader:

http://old-list-archives.xen.org/archives/html/xen-devel/2010-03/msg00979.html
and
http://lists.xen.org/archives/html/xen-devel/2012-01/msg00585.html

> >* QXL virtual GPU support for SPICE. Someone was already developing this,
> >   and posted patches to xen-devel earlier, during the 4.2 development cycle.
> >   Upstream Qemu includes QXL support.
>
> If "someone" wants to step up and claim responsibility, I'll put it
> on the list of things to track. :-)
>

Hehe, hopefully the "someone" notices this thread.. if not, we can go through the xen-devel archives.

> >* PVSCSI support in xl. James Harper was (semi-)interested in working on this,
> >   because he has a PVSCSI frontend driver in the Windows GPLPV drivers, and he's using PVSCSI for tape backups himself.
> >

Hopefully James is still interested in this :)

> >* libvirt libxl driver improvements; support more Xen features.
> >   Allows better use of the Ubuntu/Debian/Fedora/RHEL/CentOS "default" virtualization GUI also with Xen.
>
> This is pretty important.  Looking at feature parity between
> libvirt+KVM and libvirt+Xen is on the Citrix xen.org to-do list, but to
> begin with it's more of a research item than a feature, so it wasn't on this
> particular list.  If you happen to know a specific list of missing
> features, that might be something we could try to fit in for 4.3.
>

Yeah, I've been planning to test the latest libvirt libxl driver and
see what works and what doesn't, but haven't gotten there yet.

-- Pasi


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 18:30:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 18:30:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3tDR-0002lS-Js; Tue, 21 Aug 2012 18:30:13 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ehabkost@redhat.com>) id 1T3tDQ-0002lK-Au
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 18:30:12 +0000
Received: from [85.158.143.99:61051] by server-2.bemta-4.messagelabs.com id
	35/26-21239-3B3D3305; Tue, 21 Aug 2012 18:30:11 +0000
X-Env-Sender: ehabkost@redhat.com
X-Msg-Ref: server-13.tower-216.messagelabs.com!1345573807!28434734!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQ0MzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9561 invoked from network); 21 Aug 2012 18:30:08 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-13.tower-216.messagelabs.com with SMTP;
	21 Aug 2012 18:30:08 -0000
Received: from int-mx09.intmail.prod.int.phx2.redhat.com
	(int-mx09.intmail.prod.int.phx2.redhat.com [10.5.11.22])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id q7LIObXb016702
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 21 Aug 2012 14:24:37 -0400
Received: from blackpad.lan.raisama.net (vpn1-6-72.gru2.redhat.com
	[10.97.6.72])
	by int-mx09.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP
	id q7LIOapJ015039; Tue, 21 Aug 2012 14:24:37 -0400
Received: by blackpad.lan.raisama.net (Postfix, from userid 500)
	id 23208200CDE; Tue, 21 Aug 2012 15:25:48 -0300 (BRT)
Date: Tue, 21 Aug 2012 15:25:48 -0300
From: Eduardo Habkost <ehabkost@redhat.com>
To: Peter Maydell <peter.maydell@linaro.org>
Message-ID: <20120821182547.GC2886@otherpad.lan.raisama.net>
References: <1345563782-11224-1-git-send-email-ehabkost@redhat.com>
	<1345563782-11224-7-git-send-email-ehabkost@redhat.com>
	<CAFEAcA_uTP+MkAduqLuKFmx=fd5EdXeT6=s1WqA92sC+12EkNQ@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAFEAcA_uTP+MkAduqLuKFmx=fd5EdXeT6=s1WqA92sC+12EkNQ@mail.gmail.com>
X-Fnord: you can see the fnord
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.22
Cc: jan.kiszka@siemens.com, mjt@tls.msk.ru, qemu-devel@nongnu.org,
	armbru@redhat.com, blauwirbel@gmail.com, kraxel@redhat.com,
	xen-devel@lists.xensource.com, i.mitsyanko@samsung.com,
	mdroth@linux.vnet.ibm.com, avi@redhat.com,
	anthony.perard@citrix.com, lersek@redhat.com,
	stefanha@linux.vnet.ibm.com, stefano.stabellini@eu.citrix.com,
	sw@weilnetz.de, imammedo@redhat.com, lcapitulino@redhat.com,
	rth@twiddle.net, kwolf@redhat.com, aliguori@us.ibm.com,
	mtosatti@redhat.com, pbonzini@redhat.com, afaerber@suse.de
Subject: Re: [Xen-devel] [RFC 6/8] qdev: use full qdev.h include path on
	qdev*.c
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Aug 21, 2012 at 05:29:41PM +0100, Peter Maydell wrote:
> On 21 August 2012 16:43, Eduardo Habkost <ehabkost@redhat.com> wrote:
> > Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
> 
> This could use a commit message saying why rather than merely
> what the patch does.

Sorry. The reason is to allow the files to be moved inside "qom/" in the
next commit.

-- 
Eduardo

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 18:44:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 18:44:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3tQw-00030K-0p; Tue, 21 Aug 2012 18:44:10 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1T3tQu-00030F-Gw
	for xen-devel@lists.xen.org; Tue, 21 Aug 2012 18:44:08 +0000
Received: from [85.158.143.35:26783] by server-2.bemta-4.messagelabs.com id
	87/CE-21239-7F6D3305; Tue, 21 Aug 2012 18:44:07 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-12.tower-21.messagelabs.com!1345574646!14535607!1
X-Originating-IP: [192.89.123.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjg5LjEyMy4yNSA9PiA0ODY3NDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6393 invoked from network); 21 Aug 2012 18:44:07 -0000
Received: from smtp.tele.fi (HELO smtp.tele.fi) (192.89.123.25)
	by server-12.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Aug 2012 18:44:07 -0000
X-Originating-Ip: [194.89.68.22]
Received: from ydin.reaktio.net (reaktio.net [194.89.68.22])
	by smtp.tele.fi (Postfix) with ESMTP id EDAA51FB3;
	Tue, 21 Aug 2012 21:44:05 +0300 (EEST)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id B69092005D; Tue, 21 Aug 2012 21:44:05 +0300 (EEST)
Date: Tue, 21 Aug 2012 21:44:05 +0300
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: Andres Lagar-Cavilla <andres@lagarcavilla.org>
Message-ID: <20120821184405.GD19851@reaktio.net>
References: <mailman.11058.1345490072.1399.xen-devel@lists.xen.org>
	<b9f5103b2989ceb9c4b07da85405307e.squirrel@webmail.lagarcavilla.org>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <b9f5103b2989ceb9c4b07da85405307e.squirrel@webmail.lagarcavilla.org>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: george.dunlap@eu.citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen 4.3 release planning proposal
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Aug 21, 2012 at 07:50:53AM -0700, Andres Lagar-Cavilla wrote:
> >
> > Hello everyone!  With the completion of our first few release candidates
> > for 4.2, it's time to look forward and start planning for the 4.3
> > release.  I've volunteered to step up and help coordinate the release
> > for this cycle.
> Hi George. Great idea. Cutting to the chase below
> 
> >
> An observation: the three below really sound like xapi or libvirt tasks.
>

libxl probably needs to provide the necessary "hooks" so that snapshots 
can have synchronized CPU/memory/disk state. Or what do you think? 

... and the necessary bits for being able to do storage live migration;
this is an often-requested feature. 

If you have thoughts on how best to implement these, let us know! 
It'd be good to have these available in xl too, not only in the higher-level toolstacks.


> >
> > * Full-VM snapshotting
> >   owner: ?
> >   Have a way of coordinating the taking and restoring of VM memory and
> >   disk snapshots.  This would involve some investigation into the best
> >   way to accomplish this.
> >
> > * VM Cloning
> >   owner: ?
> >   Again, a way of coordinating the memory and disk aspects.  Research
> >   into the best way to do this would probably go along with the
> >   snapshotting feature.
> >
> > * Make storage migration possible
> >   owner: ?
> >   There needs to be a way, either via command-line or via some hooks,
> >   that someone can build a "storage migration" feature on top of libxl
> >   or xl.
> >


-- Pasi


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 18:45:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 18:45:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3tRS-00032B-Eu; Tue, 21 Aug 2012 18:44:42 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <blauwirbel@gmail.com>) id 1T3tRQ-00031n-NY
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 18:44:41 +0000
X-Env-Sender: blauwirbel@gmail.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1345574672!3095080!1
X-Originating-IP: [209.85.161.171]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16505 invoked from network); 21 Aug 2012 18:44:33 -0000
Received: from mail-gg0-f171.google.com (HELO mail-gg0-f171.google.com)
	(209.85.161.171)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Aug 2012 18:44:33 -0000
Received: by ggnp1 with SMTP id p1so148602ggn.30
	for <xen-devel@lists.xensource.com>;
	Tue, 21 Aug 2012 11:44:31 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type;
	bh=2gPo7WPGaQnwHqqE/pwBv5g+bkrBr3d3tIr1jGfhu70=;
	b=DILeKKcS4fdMx3YYd63i89X7obfBuqRNwm1cryKoAlRlEZvnE6HIYcm2bsuoN5hF+B
	13d72bokrEujA/ZbaPIi2yDKpXB2IKT9N8RbJHHa7Vuy8tF6OpQ3FbzQLPvTzQO2gTHw
	3h+bWuggAAKXedFKLAxaKBCZui/PfZb9tfkoeasN+3jLlvqvAooqvYOvpIZgCZ78AyRK
	fKYgyvMrTu7fhVMmeLOyFfJSluLSl9UF6V7yVf+apz6NiVksJv4ARRoQIyl59xBeOoy3
	/lAy+nZ8EV6MxbXVJKCWAEdJ7diC00rWeGlZXPYSX/WgcVbSGiBrJtDXO5GI46yvsp8H
	DV/A==
Received: by 10.50.6.163 with SMTP id c3mr14337958iga.35.1345574671298; Tue,
	21 Aug 2012 11:44:31 -0700 (PDT)
MIME-Version: 1.0
Received: by 10.64.78.161 with HTTP; Tue, 21 Aug 2012 11:44:10 -0700 (PDT)
In-Reply-To: <1345563782-11224-6-git-send-email-ehabkost@redhat.com>
References: <1345563782-11224-1-git-send-email-ehabkost@redhat.com>
	<1345563782-11224-6-git-send-email-ehabkost@redhat.com>
From: Blue Swirl <blauwirbel@gmail.com>
Date: Tue, 21 Aug 2012 18:44:10 +0000
Message-ID: <CAAu8pHv5VyVnx7sow-UMLvG3L0HExgRVSP0pXLvGi5aYh+GrqA@mail.gmail.com>
To: Eduardo Habkost <ehabkost@redhat.com>
Cc: peter.maydell@linaro.org, jan.kiszka@siemens.com, mjt@tls.msk.ru,
	qemu-devel@nongnu.org, armbru@redhat.com, kraxel@redhat.com,
	xen-devel@lists.xensource.com, i.mitsyanko@samsung.com,
	mdroth@linux.vnet.ibm.com, avi@redhat.com,
	anthony.perard@citrix.com, lersek@redhat.com,
	stefanha@linux.vnet.ibm.com, stefano.stabellini@eu.citrix.com,
	sw@weilnetz.de, imammedo@redhat.com, lcapitulino@redhat.com,
	rth@twiddle.net, kwolf@redhat.com, aliguori@us.ibm.com,
	mtosatti@redhat.com, pbonzini@redhat.com, afaerber@suse.de
Subject: Re: [Xen-devel] [RFC 5/8] split qdev into a core and code used only
	by qemu-system-*
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Aug 21, 2012 at 3:42 PM, Eduardo Habkost <ehabkost@redhat.com> wrote:
> This also makes it visible what are the parts of qdev that we may want
> to split more cleanly (as they are using #ifdefs, now).

Nice.

>
> There are basically two parts that are specific to qemu-system-*, but
> are still inside qdev.c (but inside a "#ifndef CONFIG_USER_ONLY").
>
> - vmstate handling
> - reset function registration
>
> Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
> ---
>  hw/Makefile.objs            |   1 +
>  hw/qdev-properties-system.c | 329 ++++++++++++++++++++++++++++++++++++++++++++
>  hw/qdev-properties.c        | 320 +-----------------------------------------
>  hw/qdev-properties.h        |   1 +
>  hw/qdev-system.c            |  93 +++++++++++++
>  hw/qdev.c                   | 103 ++------------
>  6 files changed, 435 insertions(+), 412 deletions(-)
>  create mode 100644 hw/qdev-properties-system.c
>  create mode 100644 hw/qdev-system.c
>
> diff --git a/hw/Makefile.objs b/hw/Makefile.objs
> index 7f57ed5..04d3b5e 100644
> --- a/hw/Makefile.objs
> +++ b/hw/Makefile.objs
> @@ -177,6 +177,7 @@ common-obj-y += bt.o bt-l2cap.o bt-sdp.o bt-hci.o bt-hid.o
>  common-obj-y += bt-hci-csr.o
>  common-obj-y += msmouse.o ps2.o
>  common-obj-y += qdev.o qdev-properties.o qdev-monitor.o
> +common-obj-y += qdev-system.o qdev-properties-system.o
>  common-obj-$(CONFIG_BRLAPI) += baum.o
>
>  # xen backend driver support
> diff --git a/hw/qdev-properties-system.c b/hw/qdev-properties-system.c
> new file mode 100644
> index 0000000..c42e656
> --- /dev/null
> +++ b/hw/qdev-properties-system.c
> @@ -0,0 +1,329 @@

There's no license header, but then qdev-properties.c doesn't have one
either. Agreeing on a license would need some detective work.
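For reference, once a license is agreed, the new file could start with the kind of header most QEMU source files carry. The wording below assumes GPLv2-or-later, which is only a guess pending that detective work:

```c
/*
 * qdev property handling for qemu-system-* targets
 *
 * This work is licensed under the terms of the GNU GPL, version 2 or
 * later.  See the COPYING file in the top-level directory.
 */
```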

> +#include "net.h"
> +#include "qdev.h"
> +#include "qerror.h"
> +#include "blockdev.h"
> +#include "hw/block-common.h"
> +#include "net/hub.h"
> +#include "qapi/qapi-visit-core.h"
> +
> +static void get_pointer(Object *obj, Visitor *v, Property *prop,
> +                        const char *(*print)(void *ptr),
> +                        const char *name, Error **errp)
> +{
> +    DeviceState *dev = DEVICE(obj);
> +    void **ptr = qdev_get_prop_ptr(dev, prop);
> +    char *p;
> +
> +    p = (char *) (*ptr ? print(*ptr) : "");
> +    visit_type_str(v, &p, name, errp);
> +}
> +
> +static void set_pointer(Object *obj, Visitor *v, Property *prop,
> +                        int (*parse)(DeviceState *dev, const char *str,
> +                                     void **ptr),
> +                        const char *name, Error **errp)
> +{
> +    DeviceState *dev = DEVICE(obj);
> +    Error *local_err = NULL;
> +    void **ptr = qdev_get_prop_ptr(dev, prop);
> +    char *str;
> +    int ret;
> +
> +    if (dev->state != DEV_STATE_CREATED) {
> +        error_set(errp, QERR_PERMISSION_DENIED);
> +        return;
> +    }
> +
> +    visit_type_str(v, &str, name, &local_err);
> +    if (local_err) {
> +        error_propagate(errp, local_err);
> +        return;
> +    }
> +    if (!*str) {
> +        g_free(str);
> +        *ptr = NULL;
> +        return;
> +    }
> +    ret = parse(dev, str, ptr);
> +    error_set_from_qdev_prop_error(errp, ret, dev, prop, str);
> +    g_free(str);
> +}
> +
> +
> +/* --- drive --- */
> +
> +static int parse_drive(DeviceState *dev, const char *str, void **ptr)
> +{
> +    BlockDriverState *bs;
> +
> +    bs = bdrv_find(str);
> +    if (bs == NULL)
> +        return -ENOENT;

Before moving, please add braces here and on the if below. That way the
new file gets a fresh start.
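That is, both single-statement ifs in parse_drive() would get braces. A minimal self-contained sketch of the same control flow (struct demo_bs, demo_find() and demo_attach() are stub stand-ins invented here so the sketch compiles; the real code uses BlockDriverState, bdrv_find() and bdrv_attach_dev()):

```c
#include <errno.h>
#include <stddef.h>
#include <string.h>

/* Stub stand-ins for QEMU's BlockDriverState, bdrv_find() and
 * bdrv_attach_dev(); invented only to make this sketch compile. */
struct demo_bs {
    const char *name;
    int attached;
};

static struct demo_bs demo_table[] = { { "disk0", 0 }, { "disk1", 0 } };

static struct demo_bs *demo_find(const char *str)
{
    for (size_t i = 0; i < sizeof(demo_table) / sizeof(demo_table[0]); i++) {
        if (strcmp(demo_table[i].name, str) == 0) {
            return &demo_table[i];
        }
    }
    return NULL;
}

static int demo_attach(struct demo_bs *bs)
{
    if (bs->attached) {
        return -1;
    }
    bs->attached = 1;
    return 0;
}

/* Same control flow as parse_drive(), but with braces on both
 * single-statement ifs, as QEMU's CODING_STYLE asks for. */
static int parse_drive_braced(const char *str, void **ptr)
{
    struct demo_bs *bs = demo_find(str);

    if (bs == NULL) {
        return -ENOENT;
    }
    if (demo_attach(bs) < 0) {
        return -EEXIST;
    }
    *ptr = bs;
    return 0;
}
```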

> +    if (bdrv_attach_dev(bs, dev) < 0)
> +        return -EEXIST;
> +    *ptr = bs;
> +    return 0;
> +}
> +
> +static void release_drive(Object *obj, const char *name, void *opaque)
> +{
> +    DeviceState *dev = DEVICE(obj);
> +    Property *prop = opaque;
> +    BlockDriverState **ptr = qdev_get_prop_ptr(dev, prop);
> +
> +    if (*ptr) {
> +        bdrv_detach_dev(*ptr, dev);
> +        blockdev_auto_del(*ptr);
> +    }
> +}
> +
> +static const char *print_drive(void *ptr)
> +{
> +    return bdrv_get_device_name(ptr);
> +}
> +
> +static void get_drive(Object *obj, Visitor *v, void *opaque,
> +                      const char *name, Error **errp)
> +{
> +    get_pointer(obj, v, opaque, print_drive, name, errp);
> +}
> +
> +static void set_drive(Object *obj, Visitor *v, void *opaque,
> +                      const char *name, Error **errp)
> +{
> +    set_pointer(obj, v, opaque, parse_drive, name, errp);
> +}
> +
> +PropertyInfo qdev_prop_drive = {
> +    .name  = "drive",
> +    .get   = get_drive,
> +    .set   = set_drive,
> +    .release = release_drive,
> +};
> +
> +/* --- character device --- */
> +
> +static int parse_chr(DeviceState *dev, const char *str, void **ptr)
> +{
> +    CharDriverState *chr = qemu_chr_find(str);
> +    if (chr == NULL) {
> +        return -ENOENT;
> +    }
> +    if (chr->avail_connections < 1) {
> +        return -EEXIST;
> +    }
> +    *ptr = chr;
> +    --chr->avail_connections;
> +    return 0;
> +}
> +
> +static void release_chr(Object *obj, const char *name, void *opaque)
> +{
> +    DeviceState *dev = DEVICE(obj);
> +    Property *prop = opaque;
> +    CharDriverState **ptr = qdev_get_prop_ptr(dev, prop);
> +
> +    if (*ptr) {
> +        qemu_chr_add_handlers(*ptr, NULL, NULL, NULL, NULL);
> +    }
> +}
> +
> +
> +static const char *print_chr(void *ptr)
> +{
> +    CharDriverState *chr = ptr;
> +
> +    return chr->label ? chr->label : "";
> +}
> +
> +static void get_chr(Object *obj, Visitor *v, void *opaque,
> +                    const char *name, Error **errp)
> +{
> +    get_pointer(obj, v, opaque, print_chr, name, errp);
> +}
> +
> +static void set_chr(Object *obj, Visitor *v, void *opaque,
> +                    const char *name, Error **errp)
> +{
> +    set_pointer(obj, v, opaque, parse_chr, name, errp);
> +}
> +
> +PropertyInfo qdev_prop_chr = {
> +    .name  = "chr",
> +    .get   = get_chr,
> +    .set   = set_chr,
> +    .release = release_chr,
> +};
> +
> +/* --- netdev device --- */
> +
> +static int parse_netdev(DeviceState *dev, const char *str, void **ptr)
> +{
> +    NetClientState *netdev = qemu_find_netdev(str);
> +
> +    if (netdev == NULL) {
> +        return -ENOENT;
> +    }
> +    if (netdev->peer) {
> +        return -EEXIST;
> +    }
> +    *ptr = netdev;
> +    return 0;
> +}
> +
> +static const char *print_netdev(void *ptr)
> +{
> +    NetClientState *netdev = ptr;
> +
> +    return netdev->name ? netdev->name : "";
> +}
> +
> +static void get_netdev(Object *obj, Visitor *v, void *opaque,
> +                       const char *name, Error **errp)
> +{
> +    get_pointer(obj, v, opaque, print_netdev, name, errp);
> +}
> +
> +static void set_netdev(Object *obj, Visitor *v, void *opaque,
> +                       const char *name, Error **errp)
> +{
> +    set_pointer(obj, v, opaque, parse_netdev, name, errp);
> +}
> +
> +PropertyInfo qdev_prop_netdev = {
> +    .name  = "netdev",
> +    .get   = get_netdev,
> +    .set   = set_netdev,
> +};
> +
> +/* --- vlan --- */
> +
> +static int print_vlan(DeviceState *dev, Property *prop, char *dest, size_t len)
> +{
> +    NetClientState **ptr = qdev_get_prop_ptr(dev, prop);
> +
> +    if (*ptr) {
> +        int id;
> +        if (!net_hub_id_for_client(*ptr, &id)) {
> +            return snprintf(dest, len, "%d", id);
> +        }
> +    }
> +
> +    return snprintf(dest, len, "<null>");
> +}
> +
> +static void get_vlan(Object *obj, Visitor *v, void *opaque,
> +                     const char *name, Error **errp)
> +{
> +    DeviceState *dev = DEVICE(obj);
> +    Property *prop = opaque;
> +    NetClientState **ptr = qdev_get_prop_ptr(dev, prop);
> +    int32_t id = -1;
> +
> +    if (*ptr) {
> +        int hub_id;
> +        if (!net_hub_id_for_client(*ptr, &hub_id)) {
> +            id = hub_id;
> +        }
> +    }
> +
> +    visit_type_int32(v, &id, name, errp);
> +}
> +
> +static void set_vlan(Object *obj, Visitor *v, void *opaque,
> +                     const char *name, Error **errp)
> +{
> +    DeviceState *dev = DEVICE(obj);
> +    Property *prop = opaque;
> +    NetClientState **ptr = qdev_get_prop_ptr(dev, prop);
> +    Error *local_err = NULL;
> +    int32_t id;
> +    NetClientState *hubport;
> +
> +    if (dev->state != DEV_STATE_CREATED) {
> +        error_set(errp, QERR_PERMISSION_DENIED);
> +        return;
> +    }
> +
> +    visit_type_int32(v, &id, name, &local_err);
> +    if (local_err) {
> +        error_propagate(errp, local_err);
> +        return;
> +    }
> +    if (id == -1) {
> +        *ptr = NULL;
> +        return;
> +    }
> +
> +    hubport = net_hub_port_find(id);
> +    if (!hubport) {
> +        error_set(errp, QERR_INVALID_PARAMETER_VALUE,
> +                  name, prop->info->name);
> +        return;
> +    }
> +    *ptr = hubport;
> +}
> +
> +PropertyInfo qdev_prop_vlan = {
> +    .name  = "vlan",
> +    .print = print_vlan,
> +    .get   = get_vlan,
> +    .set   = set_vlan,
> +};
> +
> +
> +int qdev_prop_set_drive(DeviceState *dev, const char *name, BlockDriverState *value)
> +{
> +    Error *errp = NULL;
> +    const char *bdrv_name = value ? bdrv_get_device_name(value) : "";
> +    object_property_set_str(OBJECT(dev), bdrv_name,
> +                            name, &errp);
> +    if (errp) {
> +        qerror_report_err(errp);
> +        error_free(errp);
> +        return -1;
> +    }
> +    return 0;
> +}
> +
> +void qdev_prop_set_drive_nofail(DeviceState *dev, const char *name, BlockDriverState *value)
> +{
> +    if (qdev_prop_set_drive(dev, name, value) < 0) {
> +        exit(1);
> +    }
> +}
> +
> +void qdev_prop_set_chr(DeviceState *dev, const char *name, CharDriverState *value)
> +{
> +    Error *errp = NULL;
> +    assert(!value || value->label);
> +    object_property_set_str(OBJECT(dev),
> +                            value ? value->label : "", name, &errp);
> +    assert_no_error(errp);
> +}
> +
> +void qdev_prop_set_netdev(DeviceState *dev, const char *name, NetClientState *value)
> +{
> +    Error *errp = NULL;
> +    assert(!value || value->name);
> +    object_property_set_str(OBJECT(dev),
> +                            value ? value->name : "", name, &errp);
> +    assert_no_error(errp);
> +}
> +
> +static int qdev_add_one_global(QemuOpts *opts, void *opaque)
> +{
> +    GlobalProperty *g;
> +
> +    g = g_malloc0(sizeof(*g));
> +    g->driver   = qemu_opt_get(opts, "driver");
> +    g->property = qemu_opt_get(opts, "property");
> +    g->value    = qemu_opt_get(opts, "value");
> +    qdev_prop_register_global(g);
> +    return 0;
> +}
> +
> +void qemu_add_globals(void)
> +{
> +    qemu_opts_foreach(qemu_find_opts("global"), qdev_add_one_global, NULL, 0);
> +}
> diff --git a/hw/qdev-properties.c b/hw/qdev-properties.c
> index 81d901c..917d986 100644
> --- a/hw/qdev-properties.c
> +++ b/hw/qdev-properties.c
> @@ -13,49 +13,6 @@ void *qdev_get_prop_ptr(DeviceState *dev, Property *prop)
>      return ptr;
>  }
>
> -static void get_pointer(Object *obj, Visitor *v, Property *prop,
> -                        const char *(*print)(void *ptr),
> -                        const char *name, Error **errp)
> -{
> -    DeviceState *dev = DEVICE(obj);
> -    void **ptr = qdev_get_prop_ptr(dev, prop);
> -    char *p;
> -
> -    p = (char *) (*ptr ? print(*ptr) : "");
> -    visit_type_str(v, &p, name, errp);
> -}
> -
> -static void set_pointer(Object *obj, Visitor *v, Property *prop,
> -                        int (*parse)(DeviceState *dev, const char *str,
> -                                     void **ptr),
> -                        const char *name, Error **errp)
> -{
> -    DeviceState *dev = DEVICE(obj);
> -    Error *local_err = NULL;
> -    void **ptr = qdev_get_prop_ptr(dev, prop);
> -    char *str;
> -    int ret;
> -
> -    if (dev->state != DEV_STATE_CREATED) {
> -        error_set(errp, QERR_PERMISSION_DENIED);
> -        return;
> -    }
> -
> -    visit_type_str(v, &str, name, &local_err);
> -    if (local_err) {
> -        error_propagate(errp, local_err);
> -        return;
> -    }
> -    if (!*str) {
> -        g_free(str);
> -        *ptr = NULL;
> -        return;
> -    }
> -    ret = parse(dev, str, ptr);
> -    error_set_from_qdev_prop_error(errp, ret, dev, prop, str);
> -    g_free(str);
> -}
> -
>  static void get_enum(Object *obj, Visitor *v, void *opaque,
>                       const char *name, Error **errp)
>  {
> @@ -476,227 +433,6 @@ PropertyInfo qdev_prop_string = {
>      .set   = set_string,
>  };
>
> -/* --- drive --- */
> -
> -static int parse_drive(DeviceState *dev, const char *str, void **ptr)
> -{
> -    BlockDriverState *bs;
> -
> -    bs = bdrv_find(str);
> -    if (bs == NULL)
> -        return -ENOENT;
> -    if (bdrv_attach_dev(bs, dev) < 0)
> -        return -EEXIST;
> -    *ptr = bs;
> -    return 0;
> -}
> -
> -static void release_drive(Object *obj, const char *name, void *opaque)
> -{
> -    DeviceState *dev = DEVICE(obj);
> -    Property *prop = opaque;
> -    BlockDriverState **ptr = qdev_get_prop_ptr(dev, prop);
> -
> -    if (*ptr) {
> -        bdrv_detach_dev(*ptr, dev);
> -        blockdev_auto_del(*ptr);
> -    }
> -}
> -
> -static const char *print_drive(void *ptr)
> -{
> -    return bdrv_get_device_name(ptr);
> -}
> -
> -static void get_drive(Object *obj, Visitor *v, void *opaque,
> -                      const char *name, Error **errp)
> -{
> -    get_pointer(obj, v, opaque, print_drive, name, errp);
> -}
> -
> -static void set_drive(Object *obj, Visitor *v, void *opaque,
> -                      const char *name, Error **errp)
> -{
> -    set_pointer(obj, v, opaque, parse_drive, name, errp);
> -}
> -
> -PropertyInfo qdev_prop_drive = {
> -    .name  = "drive",
> -    .get   = get_drive,
> -    .set   = set_drive,
> -    .release = release_drive,
> -};
> -
> -/* --- character device --- */
> -
> -static int parse_chr(DeviceState *dev, const char *str, void **ptr)
> -{
> -    CharDriverState *chr = qemu_chr_find(str);
> -    if (chr == NULL) {
> -        return -ENOENT;
> -    }
> -    if (chr->avail_connections < 1) {
> -        return -EEXIST;
> -    }
> -    *ptr = chr;
> -    --chr->avail_connections;
> -    return 0;
> -}
> -
> -static void release_chr(Object *obj, const char *name, void *opaque)
> -{
> -    DeviceState *dev = DEVICE(obj);
> -    Property *prop = opaque;
> -    CharDriverState **ptr = qdev_get_prop_ptr(dev, prop);
> -
> -    if (*ptr) {
> -        qemu_chr_add_handlers(*ptr, NULL, NULL, NULL, NULL);
> -    }
> -}
> -
> -
> -static const char *print_chr(void *ptr)
> -{
> -    CharDriverState *chr = ptr;
> -
> -    return chr->label ? chr->label : "";
> -}
> -
> -static void get_chr(Object *obj, Visitor *v, void *opaque,
> -                    const char *name, Error **errp)
> -{
> -    get_pointer(obj, v, opaque, print_chr, name, errp);
> -}
> -
> -static void set_chr(Object *obj, Visitor *v, void *opaque,
> -                    const char *name, Error **errp)
> -{
> -    set_pointer(obj, v, opaque, parse_chr, name, errp);
> -}
> -
> -PropertyInfo qdev_prop_chr = {
> -    .name  = "chr",
> -    .get   = get_chr,
> -    .set   = set_chr,
> -    .release = release_chr,
> -};
> -
> -/* --- netdev device --- */
> -
> -static int parse_netdev(DeviceState *dev, const char *str, void **ptr)
> -{
> -    NetClientState *netdev = qemu_find_netdev(str);
> -
> -    if (netdev == NULL) {
> -        return -ENOENT;
> -    }
> -    if (netdev->peer) {
> -        return -EEXIST;
> -    }
> -    *ptr = netdev;
> -    return 0;
> -}
> -
> -static const char *print_netdev(void *ptr)
> -{
> -    NetClientState *netdev = ptr;
> -
> -    return netdev->name ? netdev->name : "";
> -}
> -
> -static void get_netdev(Object *obj, Visitor *v, void *opaque,
> -                       const char *name, Error **errp)
> -{
> -    get_pointer(obj, v, opaque, print_netdev, name, errp);
> -}
> -
> -static void set_netdev(Object *obj, Visitor *v, void *opaque,
> -                       const char *name, Error **errp)
> -{
> -    set_pointer(obj, v, opaque, parse_netdev, name, errp);
> -}
> -
> -PropertyInfo qdev_prop_netdev = {
> -    .name  = "netdev",
> -    .get   = get_netdev,
> -    .set   = set_netdev,
> -};
> -
> -/* --- vlan --- */
> -
> -static int print_vlan(DeviceState *dev, Property *prop, char *dest, size_t len)
> -{
> -    NetClientState **ptr = qdev_get_prop_ptr(dev, prop);
> -
> -    if (*ptr) {
> -        int id;
> -        if (!net_hub_id_for_client(*ptr, &id)) {
> -            return snprintf(dest, len, "%d", id);
> -        }
> -    }
> -
> -    return snprintf(dest, len, "<null>");
> -}
> -
> -static void get_vlan(Object *obj, Visitor *v, void *opaque,
> -                     const char *name, Error **errp)
> -{
> -    DeviceState *dev = DEVICE(obj);
> -    Property *prop = opaque;
> -    NetClientState **ptr = qdev_get_prop_ptr(dev, prop);
> -    int32_t id = -1;
> -
> -    if (*ptr) {
> -        int hub_id;
> -        if (!net_hub_id_for_client(*ptr, &hub_id)) {
> -            id = hub_id;
> -        }
> -    }
> -
> -    visit_type_int32(v, &id, name, errp);
> -}
> -
> -static void set_vlan(Object *obj, Visitor *v, void *opaque,
> -                     const char *name, Error **errp)
> -{
> -    DeviceState *dev = DEVICE(obj);
> -    Property *prop = opaque;
> -    NetClientState **ptr = qdev_get_prop_ptr(dev, prop);
> -    Error *local_err = NULL;
> -    int32_t id;
> -    NetClientState *hubport;
> -
> -    if (dev->state != DEV_STATE_CREATED) {
> -        error_set(errp, QERR_PERMISSION_DENIED);
> -        return;
> -    }
> -
> -    visit_type_int32(v, &id, name, &local_err);
> -    if (local_err) {
> -        error_propagate(errp, local_err);
> -        return;
> -    }
> -    if (id == -1) {
> -        *ptr = NULL;
> -        return;
> -    }
> -
> -    hubport = net_hub_port_find(id);
> -    if (!hubport) {
> -        error_set(errp, QERR_INVALID_PARAMETER_VALUE,
> -                  name, prop->info->name);
> -        return;
> -    }
> -    *ptr = hubport;
> -}
> -
> -PropertyInfo qdev_prop_vlan = {
> -    .name  = "vlan",
> -    .print = print_vlan,
> -    .get   = get_vlan,
> -    .set   = set_vlan,
> -};
> -
>  /* --- pointer --- */
>
>  /* Not a proper property, just for dirty hacks.  TODO Remove it!  */
> @@ -1158,44 +894,6 @@ void qdev_prop_set_string(DeviceState *dev, const char *name, const char *value)
>      assert_no_error(errp);
>  }
>
> -int qdev_prop_set_drive(DeviceState *dev, const char *name, BlockDriverState *value)
> -{
> -    Error *errp = NULL;
> -    const char *bdrv_name = value ? bdrv_get_device_name(value) : "";
> -    object_property_set_str(OBJECT(dev), bdrv_name,
> -                            name, &errp);
> -    if (errp) {
> -        qerror_report_err(errp);
> -        error_free(errp);
> -        return -1;
> -    }
> -    return 0;
> -}
> -
> -void qdev_prop_set_drive_nofail(DeviceState *dev, const char *name, BlockDriverState *value)
> -{
> -    if (qdev_prop_set_drive(dev, name, value) < 0) {
> -        exit(1);
> -    }
> -}
> -void qdev_prop_set_chr(DeviceState *dev, const char *name, CharDriverState *value)
> -{
> -    Error *errp = NULL;
> -    assert(!value || value->label);
> -    object_property_set_str(OBJECT(dev),
> -                            value ? value->label : "", name, &errp);
> -    assert_no_error(errp);
> -}
> -
> -void qdev_prop_set_netdev(DeviceState *dev, const char *name, NetClientState *value)
> -{
> -    Error *errp = NULL;
> -    assert(!value || value->name);
> -    object_property_set_str(OBJECT(dev),
> -                            value ? value->name : "", name, &errp);
> -    assert_no_error(errp);
> -}
> -
>  void qdev_prop_set_macaddr(DeviceState *dev, const char *name, uint8_t *value)
>  {
>      Error *errp = NULL;
> @@ -1231,7 +929,7 @@ void qdev_prop_set_ptr(DeviceState *dev, const char *name, void *value)
>
>  static QTAILQ_HEAD(, GlobalProperty) global_props = QTAILQ_HEAD_INITIALIZER(global_props);
>
> -static void qdev_prop_register_global(GlobalProperty *prop)
> +void qdev_prop_register_global(GlobalProperty *prop)
>  {
>      QTAILQ_INSERT_TAIL(&global_props, prop, next);
>  }
> @@ -1263,19 +961,3 @@ void qdev_prop_set_globals(DeviceState *dev)
>      } while (class);
>  }
>
> -static int qdev_add_one_global(QemuOpts *opts, void *opaque)
> -{
> -    GlobalProperty *g;
> -
> -    g = g_malloc0(sizeof(*g));
> -    g->driver   = qemu_opt_get(opts, "driver");
> -    g->property = qemu_opt_get(opts, "property");
> -    g->value    = qemu_opt_get(opts, "value");
> -    qdev_prop_register_global(g);
> -    return 0;
> -}
> -
> -void qemu_add_globals(void)
> -{
> -    qemu_opts_foreach(qemu_find_opts("global"), qdev_add_one_global, NULL, 0);
> -}
> diff --git a/hw/qdev-properties.h b/hw/qdev-properties.h
> index e93336a..a145084 100644
> --- a/hw/qdev-properties.h
> +++ b/hw/qdev-properties.h
> @@ -114,6 +114,7 @@ void qdev_prop_set_enum(DeviceState *dev, const char *name, int value);
>  /* FIXME: Remove opaque pointer properties.  */
>  void qdev_prop_set_ptr(DeviceState *dev, const char *name, void *value);
>
> +void qdev_prop_register_global(GlobalProperty *prop);
>  void qdev_prop_register_global_list(GlobalProperty *props);
>  void qdev_prop_set_globals(DeviceState *dev);
>  void error_set_from_qdev_prop_error(Error **errp, int ret, DeviceState *dev,
> diff --git a/hw/qdev-system.c b/hw/qdev-system.c
> new file mode 100644
> index 0000000..4891d2f
> --- /dev/null
> +++ b/hw/qdev-system.c
> @@ -0,0 +1,93 @@
> +#include "net.h"
> +#include "qdev.h"
> +
> +void qdev_init_gpio_in(DeviceState *dev, qemu_irq_handler handler, int n)
> +{
> +    assert(dev->num_gpio_in == 0);
> +    dev->num_gpio_in = n;
> +    dev->gpio_in = qemu_allocate_irqs(handler, dev, n);
> +}
> +
> +void qdev_init_gpio_out(DeviceState *dev, qemu_irq *pins, int n)
> +{
> +    assert(dev->num_gpio_out == 0);
> +    dev->num_gpio_out = n;
> +    dev->gpio_out = pins;
> +}
> +
> +qemu_irq qdev_get_gpio_in(DeviceState *dev, int n)
> +{
> +    assert(n >= 0 && n < dev->num_gpio_in);
> +    return dev->gpio_in[n];
> +}
> +
> +void qdev_connect_gpio_out(DeviceState * dev, int n, qemu_irq pin)
> +{
> +    assert(n >= 0 && n < dev->num_gpio_out);
> +    dev->gpio_out[n] = pin;
> +}
> +
> +void qdev_set_nic_properties(DeviceState *dev, NICInfo *nd)
> +{
> +    qdev_prop_set_macaddr(dev, "mac", nd->macaddr.a);
> +    if (nd->netdev)
> +        qdev_prop_set_netdev(dev, "netdev", nd->netdev);

Also here.

> +    if (nd->nvectors != DEV_NVECTORS_UNSPECIFIED &&
> +        object_property_find(OBJECT(dev), "vectors", NULL)) {
> +        qdev_prop_set_uint32(dev, "vectors", nd->nvectors);
> +    }
> +    nd->instantiated = 1;
> +}
> +
> +BusState *qdev_get_child_bus(DeviceState *dev, const char *name)
> +{
> +    BusState *bus;
> +
> +    QLIST_FOREACH(bus, &dev->child_bus, sibling) {
> +        if (strcmp(name, bus->name) == 0) {
> +            return bus;
> +        }
> +    }
> +    return NULL;
> +}
> +
> +/* Create a new device.  This only initializes the device state structure
> +   and allows properties to be set.  qdev_init should be called to
> +   initialize the actual device emulation.  */
> +DeviceState *qdev_create(BusState *bus, const char *name)
> +{
> +    DeviceState *dev;
> +
> +    dev = qdev_try_create(bus, name);
> +    if (!dev) {
> +        if (bus) {
> +            hw_error("Unknown device '%s' for bus '%s'\n", name,
> +                     object_get_typename(OBJECT(bus)));
> +        } else {
> +            hw_error("Unknown device '%s' for default sysbus\n", name);
> +        }
> +    }
> +
> +    return dev;
> +}
> +
> +DeviceState *qdev_try_create(BusState *bus, const char *type)
> +{
> +    DeviceState *dev;
> +
> +    if (object_class_by_name(type) == NULL) {
> +        return NULL;
> +    }
> +    dev = DEVICE(object_new(type));
> +    if (!dev) {
> +        return NULL;
> +    }
> +
> +    if (!bus) {
> +        bus = sysbus_get_default();
> +    }
> +
> +    qdev_set_parent_bus(dev, bus);
> +
> +    return dev;
> +}
> diff --git a/hw/qdev.c b/hw/qdev.c
> index 36c3e4b..3dc38f7 100644
> --- a/hw/qdev.c
> +++ b/hw/qdev.c
> @@ -25,7 +25,6 @@
>     inherit from a particular bus (e.g. PCI or I2C) rather than
>     this API directly.  */
>
> -#include "net.h"
>  #include "qdev.h"
>  #include "sysemu.h"
>  #include "error.h"
> @@ -105,47 +104,6 @@ void qdev_set_parent_bus(DeviceState *dev, BusState *bus)
>      bus_add_child(bus, dev);
>  }
>
> -/* Create a new device.  This only initializes the device state structure
> -   and allows properties to be set.  qdev_init should be called to
> -   initialize the actual device emulation.  */
> -DeviceState *qdev_create(BusState *bus, const char *name)
> -{
> -    DeviceState *dev;
> -
> -    dev = qdev_try_create(bus, name);
> -    if (!dev) {
> -        if (bus) {
> -            hw_error("Unknown device '%s' for bus '%s'\n", name,
> -                     object_get_typename(OBJECT(bus)));
> -        } else {
> -            hw_error("Unknown device '%s' for default sysbus\n", name);
> -        }
> -    }
> -
> -    return dev;
> -}
> -
> -DeviceState *qdev_try_create(BusState *bus, const char *type)
> -{
> -    DeviceState *dev;
> -
> -    if (object_class_by_name(type) == NULL) {
> -        return NULL;
> -    }
> -    dev = DEVICE(object_new(type));
> -    if (!dev) {
> -        return NULL;
> -    }
> -
> -    if (!bus) {
> -        bus = sysbus_get_default();
> -    }
> -
> -    qdev_set_parent_bus(dev, bus);
> -
> -    return dev;
> -}
> -
>  /* Initialize a device.  Device properties should be set before calling
>     this function.  IRQs and MMIO regions should be connected/mapped after
>     calling this function.
> @@ -175,11 +133,13 @@ int qdev_init(DeviceState *dev)
>          g_free(name);
>      }
>
> +#ifndef CONFIG_USER_ONLY
>      if (qdev_get_vmsd(dev)) {
>          vmstate_register_with_alias_id(dev, -1, qdev_get_vmsd(dev), dev,
>                                         dev->instance_id_alias,
>                                         dev->alias_required_for_version);
>      }
> +#endif
>      dev->state = DEV_STATE_INITIALIZED;
>      if (dev->hotplugged) {
>          device_reset(dev);
> @@ -292,56 +252,6 @@ BusState *qdev_get_parent_bus(DeviceState *dev)
>      return dev->parent_bus;
>  }
>
> -void qdev_init_gpio_in(DeviceState *dev, qemu_irq_handler handler, int n)
> -{
> -    assert(dev->num_gpio_in == 0);
> -    dev->num_gpio_in = n;
> -    dev->gpio_in = qemu_allocate_irqs(handler, dev, n);
> -}
> -
> -void qdev_init_gpio_out(DeviceState *dev, qemu_irq *pins, int n)
> -{
> -    assert(dev->num_gpio_out == 0);
> -    dev->num_gpio_out = n;
> -    dev->gpio_out = pins;
> -}
> -
> -qemu_irq qdev_get_gpio_in(DeviceState *dev, int n)
> -{
> -    assert(n >= 0 && n < dev->num_gpio_in);
> -    return dev->gpio_in[n];
> -}
> -
> -void qdev_connect_gpio_out(DeviceState * dev, int n, qemu_irq pin)
> -{
> -    assert(n >= 0 && n < dev->num_gpio_out);
> -    dev->gpio_out[n] = pin;
> -}
> -
> -void qdev_set_nic_properties(DeviceState *dev, NICInfo *nd)
> -{
> -    qdev_prop_set_macaddr(dev, "mac", nd->macaddr.a);
> -    if (nd->netdev)
> -        qdev_prop_set_netdev(dev, "netdev", nd->netdev);
> -    if (nd->nvectors != DEV_NVECTORS_UNSPECIFIED &&
> -        object_property_find(OBJECT(dev), "vectors", NULL)) {
> -        qdev_prop_set_uint32(dev, "vectors", nd->nvectors);
> -    }
> -    nd->instantiated = 1;
> -}
> -
> -BusState *qdev_get_child_bus(DeviceState *dev, const char *name)
> -{
> -    BusState *bus;
> -
> -    QLIST_FOREACH(bus, &dev->child_bus, sibling) {
> -        if (strcmp(name, bus->name) == 0) {
> -            return bus;
> -        }
> -    }
> -    return NULL;
> -}
> -
>  int qbus_walk_children(BusState *bus, qdev_walkerfn *devfn,
>                         qbus_walkerfn *busfn, void *opaque)
>  {
> @@ -440,11 +350,14 @@ static void qbus_realize(BusState *bus)
>          QLIST_INSERT_HEAD(&bus->parent->child_bus, bus, sibling);
>          bus->parent->num_child_bus++;
>          object_property_add_child(OBJECT(bus->parent), bus->name, OBJECT(bus), NULL);
> -    } else if (bus != sysbus_get_default()) {
> +    }
> +#ifndef CONFIG_USER_ONLY
> +    else if (bus != sysbus_get_default()) {
>          /* TODO: once all bus devices are qdevified,
>             only reset handler for main_system_bus should be registered here. */
>          qemu_register_reset(qbus_reset_all_fn, bus);
>      }
> +#endif
>  }
>
>  void qbus_create_inplace(BusState *bus, const char *typename,
> @@ -703,9 +616,11 @@ static void device_finalize(Object *obj)
>              bus = QLIST_FIRST(&dev->child_bus);
>              qbus_free(bus);
>          }
> +#ifndef CONFIG_USER_ONLY
>          if (qdev_get_vmsd(dev)) {
>              vmstate_unregister(dev, qdev_get_vmsd(dev), dev);
>          }
> +#endif
>          if (dc->exit) {
>              dc->exit(dev);
>          }
> @@ -779,8 +694,10 @@ static void qbus_finalize(Object *obj)
>          QLIST_REMOVE(bus, sibling);
>          bus->parent->num_child_bus--;
>      } else {
> +#ifndef CONFIG_USER_ONLY
>          assert(bus != sysbus_get_default()); /* main_system_bus is never freed */
>          qemu_unregister_reset(qbus_reset_all_fn, bus);
> +#endif
>      }
>      g_free((char *)bus->name);
>  }
> --
> 1.7.11.4
>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 18:45:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 18:45:03 +0000
MIME-Version: 1.0
In-Reply-To: <1345563782-11224-6-git-send-email-ehabkost@redhat.com>
References: <1345563782-11224-1-git-send-email-ehabkost@redhat.com>
	<1345563782-11224-6-git-send-email-ehabkost@redhat.com>
From: Blue Swirl <blauwirbel@gmail.com>
Date: Tue, 21 Aug 2012 18:44:10 +0000
Message-ID: <CAAu8pHv5VyVnx7sow-UMLvG3L0HExgRVSP0pXLvGi5aYh+GrqA@mail.gmail.com>
To: Eduardo Habkost <ehabkost@redhat.com>
Cc: peter.maydell@linaro.org, jan.kiszka@siemens.com, mjt@tls.msk.ru,
	qemu-devel@nongnu.org, armbru@redhat.com, kraxel@redhat.com,
	xen-devel@lists.xensource.com, i.mitsyanko@samsung.com,
	mdroth@linux.vnet.ibm.com, avi@redhat.com,
	anthony.perard@citrix.com, lersek@redhat.com,
	stefanha@linux.vnet.ibm.com, stefano.stabellini@eu.citrix.com,
	sw@weilnetz.de, imammedo@redhat.com, lcapitulino@redhat.com,
	rth@twiddle.net, kwolf@redhat.com, aliguori@us.ibm.com,
	mtosatti@redhat.com, pbonzini@redhat.com, afaerber@suse.de
Subject: Re: [Xen-devel] [RFC 5/8] split qdev into a core and code used only
	by qemu-system-*
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit

On Tue, Aug 21, 2012 at 3:42 PM, Eduardo Habkost <ehabkost@redhat.com> wrote:
> This also makes visible which parts of qdev we may want to split out
> more cleanly (as they are now using #ifdefs).

Nice.

>
> There are basically two parts that are specific to qemu-system-* but
> are still inside qdev.c (guarded by "#ifndef CONFIG_USER_ONLY"):
>
> - vmstate handling
> - reset function registration
>
> Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
> ---
>  hw/Makefile.objs            |   1 +
>  hw/qdev-properties-system.c | 329 ++++++++++++++++++++++++++++++++++++++++++++
>  hw/qdev-properties.c        | 320 +-----------------------------------------
>  hw/qdev-properties.h        |   1 +
>  hw/qdev-system.c            |  93 +++++++++++++
>  hw/qdev.c                   | 103 ++------------
>  6 files changed, 435 insertions(+), 412 deletions(-)
>  create mode 100644 hw/qdev-properties-system.c
>  create mode 100644 hw/qdev-system.c
>
> diff --git a/hw/Makefile.objs b/hw/Makefile.objs
> index 7f57ed5..04d3b5e 100644
> --- a/hw/Makefile.objs
> +++ b/hw/Makefile.objs
> @@ -177,6 +177,7 @@ common-obj-y += bt.o bt-l2cap.o bt-sdp.o bt-hci.o bt-hid.o
>  common-obj-y += bt-hci-csr.o
>  common-obj-y += msmouse.o ps2.o
>  common-obj-y += qdev.o qdev-properties.o qdev-monitor.o
> +common-obj-y += qdev-system.o qdev-properties-system.o
>  common-obj-$(CONFIG_BRLAPI) += baum.o
>
>  # xen backend driver support
> diff --git a/hw/qdev-properties-system.c b/hw/qdev-properties-system.c
> new file mode 100644
> index 0000000..c42e656
> --- /dev/null
> +++ b/hw/qdev-properties-system.c
> @@ -0,0 +1,329 @@

There's no license header; then again, neither does qdev-properties.c
have one. Agreeing on a license would need some detective work.
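
For the record, the usual shape of such a header in QEMU's hw/ files is
GPLv2-or-later boilerplate along these lines (copyright line is a
placeholder; which license actually applies here is exactly the
detective work in question):

```c
/*
 * qdev property handling specific to qemu-system-* targets.
 *
 * Copyright (c) YEAR AUTHOR  (placeholder)
 *
 * This work is licensed under the terms of the GNU GPL, version 2 or
 * later.  See the COPYING file in the top-level directory.
 */
```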

> +#include "net.h"
> +#include "qdev.h"
> +#include "qerror.h"
> +#include "blockdev.h"
> +#include "hw/block-common.h"
> +#include "net/hub.h"
> +#include "qapi/qapi-visit-core.h"
> +
> +static void get_pointer(Object *obj, Visitor *v, Property *prop,
> +                        const char *(*print)(void *ptr),
> +                        const char *name, Error **errp)
> +{
> +    DeviceState *dev = DEVICE(obj);
> +    void **ptr = qdev_get_prop_ptr(dev, prop);
> +    char *p;
> +
> +    p = (char *) (*ptr ? print(*ptr) : "");
> +    visit_type_str(v, &p, name, errp);
> +}
> +
> +static void set_pointer(Object *obj, Visitor *v, Property *prop,
> +                        int (*parse)(DeviceState *dev, const char *str,
> +                                     void **ptr),
> +                        const char *name, Error **errp)
> +{
> +    DeviceState *dev = DEVICE(obj);
> +    Error *local_err = NULL;
> +    void **ptr = qdev_get_prop_ptr(dev, prop);
> +    char *str;
> +    int ret;
> +
> +    if (dev->state != DEV_STATE_CREATED) {
> +        error_set(errp, QERR_PERMISSION_DENIED);
> +        return;
> +    }
> +
> +    visit_type_str(v, &str, name, &local_err);
> +    if (local_err) {
> +        error_propagate(errp, local_err);
> +        return;
> +    }
> +    if (!*str) {
> +        g_free(str);
> +        *ptr = NULL;
> +        return;
> +    }
> +    ret = parse(dev, str, ptr);
> +    error_set_from_qdev_prop_error(errp, ret, dev, prop, str);
> +    g_free(str);
> +}
> +
> +
> +/* --- drive --- */
> +
> +static int parse_drive(DeviceState *dev, const char *str, void **ptr)
> +{
> +    BlockDriverState *bs;
> +
> +    bs = bdrv_find(str);
> +    if (bs == NULL)
> +        return -ENOENT;

Before moving the code, please add braces here and to the if below.
That way the new file gets a fresh start.

> +    if (bdrv_attach_dev(bs, dev) < 0)
> +        return -EEXIST;
> +    *ptr = bs;
> +    return 0;
> +}
> +
> +static void release_drive(Object *obj, const char *name, void *opaque)
> +{
> +    DeviceState *dev = DEVICE(obj);
> +    Property *prop = opaque;
> +    BlockDriverState **ptr = qdev_get_prop_ptr(dev, prop);
> +
> +    if (*ptr) {
> +        bdrv_detach_dev(*ptr, dev);
> +        blockdev_auto_del(*ptr);
> +    }
> +}
> +
> +static const char *print_drive(void *ptr)
> +{
> +    return bdrv_get_device_name(ptr);
> +}
> +
> +static void get_drive(Object *obj, Visitor *v, void *opaque,
> +                      const char *name, Error **errp)
> +{
> +    get_pointer(obj, v, opaque, print_drive, name, errp);
> +}
> +
> +static void set_drive(Object *obj, Visitor *v, void *opaque,
> +                      const char *name, Error **errp)
> +{
> +    set_pointer(obj, v, opaque, parse_drive, name, errp);
> +}
> +
> +PropertyInfo qdev_prop_drive = {
> +    .name  = "drive",
> +    .get   = get_drive,
> +    .set   = set_drive,
> +    .release = release_drive,
> +};
> +
> +/* --- character device --- */
> +
> +static int parse_chr(DeviceState *dev, const char *str, void **ptr)
> +{
> +    CharDriverState *chr = qemu_chr_find(str);
> +    if (chr == NULL) {
> +        return -ENOENT;
> +    }
> +    if (chr->avail_connections < 1) {
> +        return -EEXIST;
> +    }
> +    *ptr = chr;
> +    --chr->avail_connections;
> +    return 0;
> +}
> +
> +static void release_chr(Object *obj, const char *name, void *opaque)
> +{
> +    DeviceState *dev = DEVICE(obj);
> +    Property *prop = opaque;
> +    CharDriverState **ptr = qdev_get_prop_ptr(dev, prop);
> +
> +    if (*ptr) {
> +        qemu_chr_add_handlers(*ptr, NULL, NULL, NULL, NULL);
> +    }
> +}
> +
> +
> +static const char *print_chr(void *ptr)
> +{
> +    CharDriverState *chr = ptr;
> +
> +    return chr->label ? chr->label : "";
> +}
> +
> +static void get_chr(Object *obj, Visitor *v, void *opaque,
> +                    const char *name, Error **errp)
> +{
> +    get_pointer(obj, v, opaque, print_chr, name, errp);
> +}
> +
> +static void set_chr(Object *obj, Visitor *v, void *opaque,
> +                    const char *name, Error **errp)
> +{
> +    set_pointer(obj, v, opaque, parse_chr, name, errp);
> +}
> +
> +PropertyInfo qdev_prop_chr = {
> +    .name  = "chr",
> +    .get   = get_chr,
> +    .set   = set_chr,
> +    .release = release_chr,
> +};
> +
> +/* --- netdev device --- */
> +
> +static int parse_netdev(DeviceState *dev, const char *str, void **ptr)
> +{
> +    NetClientState *netdev = qemu_find_netdev(str);
> +
> +    if (netdev == NULL) {
> +        return -ENOENT;
> +    }
> +    if (netdev->peer) {
> +        return -EEXIST;
> +    }
> +    *ptr = netdev;
> +    return 0;
> +}
> +
> +static const char *print_netdev(void *ptr)
> +{
> +    NetClientState *netdev = ptr;
> +
> +    return netdev->name ? netdev->name : "";
> +}
> +
> +static void get_netdev(Object *obj, Visitor *v, void *opaque,
> +                       const char *name, Error **errp)
> +{
> +    get_pointer(obj, v, opaque, print_netdev, name, errp);
> +}
> +
> +static void set_netdev(Object *obj, Visitor *v, void *opaque,
> +                       const char *name, Error **errp)
> +{
> +    set_pointer(obj, v, opaque, parse_netdev, name, errp);
> +}
> +
> +PropertyInfo qdev_prop_netdev = {
> +    .name  = "netdev",
> +    .get   = get_netdev,
> +    .set   = set_netdev,
> +};
> +
> +/* --- vlan --- */
> +
> +static int print_vlan(DeviceState *dev, Property *prop, char *dest, size_t len)
> +{
> +    NetClientState **ptr = qdev_get_prop_ptr(dev, prop);
> +
> +    if (*ptr) {
> +        int id;
> +        if (!net_hub_id_for_client(*ptr, &id)) {
> +            return snprintf(dest, len, "%d", id);
> +        }
> +    }
> +
> +    return snprintf(dest, len, "<null>");
> +}
> +
> +static void get_vlan(Object *obj, Visitor *v, void *opaque,
> +                     const char *name, Error **errp)
> +{
> +    DeviceState *dev = DEVICE(obj);
> +    Property *prop = opaque;
> +    NetClientState **ptr = qdev_get_prop_ptr(dev, prop);
> +    int32_t id = -1;
> +
> +    if (*ptr) {
> +        int hub_id;
> +        if (!net_hub_id_for_client(*ptr, &hub_id)) {
> +            id = hub_id;
> +        }
> +    }
> +
> +    visit_type_int32(v, &id, name, errp);
> +}
> +
> +static void set_vlan(Object *obj, Visitor *v, void *opaque,
> +                     const char *name, Error **errp)
> +{
> +    DeviceState *dev = DEVICE(obj);
> +    Property *prop = opaque;
> +    NetClientState **ptr = qdev_get_prop_ptr(dev, prop);
> +    Error *local_err = NULL;
> +    int32_t id;
> +    NetClientState *hubport;
> +
> +    if (dev->state != DEV_STATE_CREATED) {
> +        error_set(errp, QERR_PERMISSION_DENIED);
> +        return;
> +    }
> +
> +    visit_type_int32(v, &id, name, &local_err);
> +    if (local_err) {
> +        error_propagate(errp, local_err);
> +        return;
> +    }
> +    if (id == -1) {
> +        *ptr = NULL;
> +        return;
> +    }
> +
> +    hubport = net_hub_port_find(id);
> +    if (!hubport) {
> +        error_set(errp, QERR_INVALID_PARAMETER_VALUE,
> +                  name, prop->info->name);
> +        return;
> +    }
> +    *ptr = hubport;
> +}
> +
> +PropertyInfo qdev_prop_vlan = {
> +    .name  = "vlan",
> +    .print = print_vlan,
> +    .get   = get_vlan,
> +    .set   = set_vlan,
> +};
> +
> +
> +int qdev_prop_set_drive(DeviceState *dev, const char *name, BlockDriverState *value)
> +{
> +    Error *errp = NULL;
> +    const char *bdrv_name = value ? bdrv_get_device_name(value) : "";
> +    object_property_set_str(OBJECT(dev), bdrv_name,
> +                            name, &errp);
> +    if (errp) {
> +        qerror_report_err(errp);
> +        error_free(errp);
> +        return -1;
> +    }
> +    return 0;
> +}
> +
> +void qdev_prop_set_drive_nofail(DeviceState *dev, const char *name, BlockDriverState *value)
> +{
> +    if (qdev_prop_set_drive(dev, name, value) < 0) {
> +        exit(1);
> +    }
> +}
> +
> +void qdev_prop_set_chr(DeviceState *dev, const char *name, CharDriverState *value)
> +{
> +    Error *errp = NULL;
> +    assert(!value || value->label);
> +    object_property_set_str(OBJECT(dev),
> +                            value ? value->label : "", name, &errp);
> +    assert_no_error(errp);
> +}
> +
> +void qdev_prop_set_netdev(DeviceState *dev, const char *name, NetClientState *value)
> +{
> +    Error *errp = NULL;
> +    assert(!value || value->name);
> +    object_property_set_str(OBJECT(dev),
> +                            value ? value->name : "", name, &errp);
> +    assert_no_error(errp);
> +}
> +
> +static int qdev_add_one_global(QemuOpts *opts, void *opaque)
> +{
> +    GlobalProperty *g;
> +
> +    g = g_malloc0(sizeof(*g));
> +    g->driver   = qemu_opt_get(opts, "driver");
> +    g->property = qemu_opt_get(opts, "property");
> +    g->value    = qemu_opt_get(opts, "value");
> +    qdev_prop_register_global(g);
> +    return 0;
> +}
> +
> +void qemu_add_globals(void)
> +{
> +    qemu_opts_foreach(qemu_find_opts("global"), qdev_add_one_global, NULL, 0);
> +}
> diff --git a/hw/qdev-properties.c b/hw/qdev-properties.c
> index 81d901c..917d986 100644
> --- a/hw/qdev-properties.c
> +++ b/hw/qdev-properties.c
> @@ -13,49 +13,6 @@ void *qdev_get_prop_ptr(DeviceState *dev, Property *prop)
>      return ptr;
>  }
>
> -static void get_pointer(Object *obj, Visitor *v, Property *prop,
> -                        const char *(*print)(void *ptr),
> -                        const char *name, Error **errp)
> -{
> -    DeviceState *dev = DEVICE(obj);
> -    void **ptr = qdev_get_prop_ptr(dev, prop);
> -    char *p;
> -
> -    p = (char *) (*ptr ? print(*ptr) : "");
> -    visit_type_str(v, &p, name, errp);
> -}
> -
> -static void set_pointer(Object *obj, Visitor *v, Property *prop,
> -                        int (*parse)(DeviceState *dev, const char *str,
> -                                     void **ptr),
> -                        const char *name, Error **errp)
> -{
> -    DeviceState *dev = DEVICE(obj);
> -    Error *local_err = NULL;
> -    void **ptr = qdev_get_prop_ptr(dev, prop);
> -    char *str;
> -    int ret;
> -
> -    if (dev->state != DEV_STATE_CREATED) {
> -        error_set(errp, QERR_PERMISSION_DENIED);
> -        return;
> -    }
> -
> -    visit_type_str(v, &str, name, &local_err);
> -    if (local_err) {
> -        error_propagate(errp, local_err);
> -        return;
> -    }
> -    if (!*str) {
> -        g_free(str);
> -        *ptr = NULL;
> -        return;
> -    }
> -    ret = parse(dev, str, ptr);
> -    error_set_from_qdev_prop_error(errp, ret, dev, prop, str);
> -    g_free(str);
> -}
> -
>  static void get_enum(Object *obj, Visitor *v, void *opaque,
>                       const char *name, Error **errp)
>  {
> @@ -476,227 +433,6 @@ PropertyInfo qdev_prop_string = {
>      .set   = set_string,
>  };
>
> -/* --- drive --- */
> -
> -static int parse_drive(DeviceState *dev, const char *str, void **ptr)
> -{
> -    BlockDriverState *bs;
> -
> -    bs = bdrv_find(str);
> -    if (bs == NULL)
> -        return -ENOENT;
> -    if (bdrv_attach_dev(bs, dev) < 0)
> -        return -EEXIST;
> -    *ptr = bs;
> -    return 0;
> -}
> -
> -static void release_drive(Object *obj, const char *name, void *opaque)
> -{
> -    DeviceState *dev = DEVICE(obj);
> -    Property *prop = opaque;
> -    BlockDriverState **ptr = qdev_get_prop_ptr(dev, prop);
> -
> -    if (*ptr) {
> -        bdrv_detach_dev(*ptr, dev);
> -        blockdev_auto_del(*ptr);
> -    }
> -}
> -
> -static const char *print_drive(void *ptr)
> -{
> -    return bdrv_get_device_name(ptr);
> -}
> -
> -static void get_drive(Object *obj, Visitor *v, void *opaque,
> -                      const char *name, Error **errp)
> -{
> -    get_pointer(obj, v, opaque, print_drive, name, errp);
> -}
> -
> -static void set_drive(Object *obj, Visitor *v, void *opaque,
> -                      const char *name, Error **errp)
> -{
> -    set_pointer(obj, v, opaque, parse_drive, name, errp);
> -}
> -
> -PropertyInfo qdev_prop_drive = {
> -    .name  = "drive",
> -    .get   = get_drive,
> -    .set   = set_drive,
> -    .release = release_drive,
> -};
> -
> -/* --- character device --- */
> -
> -static int parse_chr(DeviceState *dev, const char *str, void **ptr)
> -{
> -    CharDriverState *chr = qemu_chr_find(str);
> -    if (chr == NULL) {
> -        return -ENOENT;
> -    }
> -    if (chr->avail_connections < 1) {
> -        return -EEXIST;
> -    }
> -    *ptr = chr;
> -    --chr->avail_connections;
> -    return 0;
> -}
> -
> -static void release_chr(Object *obj, const char *name, void *opaque)
> -{
> -    DeviceState *dev = DEVICE(obj);
> -    Property *prop = opaque;
> -    CharDriverState **ptr = qdev_get_prop_ptr(dev, prop);
> -
> -    if (*ptr) {
> -        qemu_chr_add_handlers(*ptr, NULL, NULL, NULL, NULL);
> -    }
> -}
> -
> -
> -static const char *print_chr(void *ptr)
> -{
> -    CharDriverState *chr = ptr;
> -
> -    return chr->label ? chr->label : "";
> -}
> -
> -static void get_chr(Object *obj, Visitor *v, void *opaque,
> -                    const char *name, Error **errp)
> -{
> -    get_pointer(obj, v, opaque, print_chr, name, errp);
> -}
> -
> -static void set_chr(Object *obj, Visitor *v, void *opaque,
> -                    const char *name, Error **errp)
> -{
> -    set_pointer(obj, v, opaque, parse_chr, name, errp);
> -}
> -
> -PropertyInfo qdev_prop_chr = {
> -    .name  = "chr",
> -    .get   = get_chr,
> -    .set   = set_chr,
> -    .release = release_chr,
> -};
> -
> -/* --- netdev device --- */
> -
> -static int parse_netdev(DeviceState *dev, const char *str, void **ptr)
> -{
> -    NetClientState *netdev = qemu_find_netdev(str);
> -
> -    if (netdev == NULL) {
> -        return -ENOENT;
> -    }
> -    if (netdev->peer) {
> -        return -EEXIST;
> -    }
> -    *ptr = netdev;
> -    return 0;
> -}
> -
> -static const char *print_netdev(void *ptr)
> -{
> -    NetClientState *netdev = ptr;
> -
> -    return netdev->name ? netdev->name : "";
> -}
> -
> -static void get_netdev(Object *obj, Visitor *v, void *opaque,
> -                       const char *name, Error **errp)
> -{
> -    get_pointer(obj, v, opaque, print_netdev, name, errp);
> -}
> -
> -static void set_netdev(Object *obj, Visitor *v, void *opaque,
> -                       const char *name, Error **errp)
> -{
> -    set_pointer(obj, v, opaque, parse_netdev, name, errp);
> -}
> -
> -PropertyInfo qdev_prop_netdev = {
> -    .name  = "netdev",
> -    .get   = get_netdev,
> -    .set   = set_netdev,
> -};
> -
> -/* --- vlan --- */
> -
> -static int print_vlan(DeviceState *dev, Property *prop, char *dest, size_t len)
> -{
> -    NetClientState **ptr = qdev_get_prop_ptr(dev, prop);
> -
> -    if (*ptr) {
> -        int id;
> -        if (!net_hub_id_for_client(*ptr, &id)) {
> -            return snprintf(dest, len, "%d", id);
> -        }
> -    }
> -
> -    return snprintf(dest, len, "<null>");
> -}
> -
> -static void get_vlan(Object *obj, Visitor *v, void *opaque,
> -                     const char *name, Error **errp)
> -{
> -    DeviceState *dev = DEVICE(obj);
> -    Property *prop = opaque;
> -    NetClientState **ptr = qdev_get_prop_ptr(dev, prop);
> -    int32_t id = -1;
> -
> -    if (*ptr) {
> -        int hub_id;
> -        if (!net_hub_id_for_client(*ptr, &hub_id)) {
> -            id = hub_id;
> -        }
> -    }
> -
> -    visit_type_int32(v, &id, name, errp);
> -}
> -
> -static void set_vlan(Object *obj, Visitor *v, void *opaque,
> -                     const char *name, Error **errp)
> -{
> -    DeviceState *dev = DEVICE(obj);
> -    Property *prop = opaque;
> -    NetClientState **ptr = qdev_get_prop_ptr(dev, prop);
> -    Error *local_err = NULL;
> -    int32_t id;
> -    NetClientState *hubport;
> -
> -    if (dev->state != DEV_STATE_CREATED) {
> -        error_set(errp, QERR_PERMISSION_DENIED);
> -        return;
> -    }
> -
> -    visit_type_int32(v, &id, name, &local_err);
> -    if (local_err) {
> -        error_propagate(errp, local_err);
> -        return;
> -    }
> -    if (id == -1) {
> -        *ptr = NULL;
> -        return;
> -    }
> -
> -    hubport = net_hub_port_find(id);
> -    if (!hubport) {
> -        error_set(errp, QERR_INVALID_PARAMETER_VALUE,
> -                  name, prop->info->name);
> -        return;
> -    }
> -    *ptr = hubport;
> -}
> -
> -PropertyInfo qdev_prop_vlan = {
> -    .name  = "vlan",
> -    .print = print_vlan,
> -    .get   = get_vlan,
> -    .set   = set_vlan,
> -};
> -
>  /* --- pointer --- */
>
>  /* Not a proper property, just for dirty hacks.  TODO Remove it!  */
> @@ -1158,44 +894,6 @@ void qdev_prop_set_string(DeviceState *dev, const char *name, const char *value)
>      assert_no_error(errp);
>  }
>
> -int qdev_prop_set_drive(DeviceState *dev, const char *name, BlockDriverState *value)
> -{
> -    Error *errp = NULL;
> -    const char *bdrv_name = value ? bdrv_get_device_name(value) : "";
> -    object_property_set_str(OBJECT(dev), bdrv_name,
> -                            name, &errp);
> -    if (errp) {
> -        qerror_report_err(errp);
> -        error_free(errp);
> -        return -1;
> -    }
> -    return 0;
> -}
> -
> -void qdev_prop_set_drive_nofail(DeviceState *dev, const char *name, BlockDriverState *value)
> -{
> -    if (qdev_prop_set_drive(dev, name, value) < 0) {
> -        exit(1);
> -    }
> -}
> -void qdev_prop_set_chr(DeviceState *dev, const char *name, CharDriverState *value)
> -{
> -    Error *errp = NULL;
> -    assert(!value || value->label);
> -    object_property_set_str(OBJECT(dev),
> -                            value ? value->label : "", name, &errp);
> -    assert_no_error(errp);
> -}
> -
> -void qdev_prop_set_netdev(DeviceState *dev, const char *name, NetClientState *value)
> -{
> -    Error *errp = NULL;
> -    assert(!value || value->name);
> -    object_property_set_str(OBJECT(dev),
> -                            value ? value->name : "", name, &errp);
> -    assert_no_error(errp);
> -}
> -
>  void qdev_prop_set_macaddr(DeviceState *dev, const char *name, uint8_t *value)
>  {
>      Error *errp = NULL;
> @@ -1231,7 +929,7 @@ void qdev_prop_set_ptr(DeviceState *dev, const char *name, void *value)
>
>  static QTAILQ_HEAD(, GlobalProperty) global_props = QTAILQ_HEAD_INITIALIZER(global_props);
>
> -static void qdev_prop_register_global(GlobalProperty *prop)
> +void qdev_prop_register_global(GlobalProperty *prop)
>  {
>      QTAILQ_INSERT_TAIL(&global_props, prop, next);
>  }
> @@ -1263,19 +961,3 @@ void qdev_prop_set_globals(DeviceState *dev)
>      } while (class);
>  }
>
> -static int qdev_add_one_global(QemuOpts *opts, void *opaque)
> -{
> -    GlobalProperty *g;
> -
> -    g = g_malloc0(sizeof(*g));
> -    g->driver   = qemu_opt_get(opts, "driver");
> -    g->property = qemu_opt_get(opts, "property");
> -    g->value    = qemu_opt_get(opts, "value");
> -    qdev_prop_register_global(g);
> -    return 0;
> -}
> -
> -void qemu_add_globals(void)
> -{
> -    qemu_opts_foreach(qemu_find_opts("global"), qdev_add_one_global, NULL, 0);
> -}
> diff --git a/hw/qdev-properties.h b/hw/qdev-properties.h
> index e93336a..a145084 100644
> --- a/hw/qdev-properties.h
> +++ b/hw/qdev-properties.h
> @@ -114,6 +114,7 @@ void qdev_prop_set_enum(DeviceState *dev, const char *name, int value);
>  /* FIXME: Remove opaque pointer properties.  */
>  void qdev_prop_set_ptr(DeviceState *dev, const char *name, void *value);
>
> +void qdev_prop_register_global(GlobalProperty *prop);
>  void qdev_prop_register_global_list(GlobalProperty *props);
>  void qdev_prop_set_globals(DeviceState *dev);
>  void error_set_from_qdev_prop_error(Error **errp, int ret, DeviceState *dev,
> diff --git a/hw/qdev-system.c b/hw/qdev-system.c
> new file mode 100644
> index 0000000..4891d2f
> --- /dev/null
> +++ b/hw/qdev-system.c
> @@ -0,0 +1,93 @@
> +#include "net.h"
> +#include "qdev.h"
> +
> +void qdev_init_gpio_in(DeviceState *dev, qemu_irq_handler handler, int n)
> +{
> +    assert(dev->num_gpio_in == 0);
> +    dev->num_gpio_in = n;
> +    dev->gpio_in = qemu_allocate_irqs(handler, dev, n);
> +}
> +
> +void qdev_init_gpio_out(DeviceState *dev, qemu_irq *pins, int n)
> +{
> +    assert(dev->num_gpio_out == 0);
> +    dev->num_gpio_out = n;
> +    dev->gpio_out = pins;
> +}
> +
> +qemu_irq qdev_get_gpio_in(DeviceState *dev, int n)
> +{
> +    assert(n >= 0 && n < dev->num_gpio_in);
> +    return dev->gpio_in[n];
> +}
> +
> +void qdev_connect_gpio_out(DeviceState * dev, int n, qemu_irq pin)
> +{
> +    assert(n >= 0 && n < dev->num_gpio_out);
> +    dev->gpio_out[n] = pin;
> +}
> +
> +void qdev_set_nic_properties(DeviceState *dev, NICInfo *nd)
> +{
> +    qdev_prop_set_macaddr(dev, "mac", nd->macaddr.a);
> +    if (nd->netdev)
> +        qdev_prop_set_netdev(dev, "netdev", nd->netdev);

Also here.

> +    if (nd->nvectors != DEV_NVECTORS_UNSPECIFIED &&
> +        object_property_find(OBJECT(dev), "vectors", NULL)) {
> +        qdev_prop_set_uint32(dev, "vectors", nd->nvectors);
> +    }
> +    nd->instantiated = 1;
> +}
> +
> +BusState *qdev_get_child_bus(DeviceState *dev, const char *name)
> +{
> +    BusState *bus;
> +
> +    QLIST_FOREACH(bus, &dev->child_bus, sibling) {
> +        if (strcmp(name, bus->name) == 0) {
> +            return bus;
> +        }
> +    }
> +    return NULL;
> +}
> +
> +/* Create a new device.  This only initializes the device state structure
> +   and allows properties to be set.  qdev_init should be called to
> +   initialize the actual device emulation.  */
> +DeviceState *qdev_create(BusState *bus, const char *name)
> +{
> +    DeviceState *dev;
> +
> +    dev = qdev_try_create(bus, name);
> +    if (!dev) {
> +        if (bus) {
> +            hw_error("Unknown device '%s' for bus '%s'\n", name,
> +                     object_get_typename(OBJECT(bus)));
> +        } else {
> +            hw_error("Unknown device '%s' for default sysbus\n", name);
> +        }
> +    }
> +
> +    return dev;
> +}
> +
> +DeviceState *qdev_try_create(BusState *bus, const char *type)
> +{
> +    DeviceState *dev;
> +
> +    if (object_class_by_name(type) == NULL) {
> +        return NULL;
> +    }
> +    dev = DEVICE(object_new(type));
> +    if (!dev) {
> +        return NULL;
> +    }
> +
> +    if (!bus) {
> +        bus = sysbus_get_default();
> +    }
> +
> +    qdev_set_parent_bus(dev, bus);
> +
> +    return dev;
> +}
> diff --git a/hw/qdev.c b/hw/qdev.c
> index 36c3e4b..3dc38f7 100644
> --- a/hw/qdev.c
> +++ b/hw/qdev.c
> @@ -25,7 +25,6 @@
>     inherit from a particular bus (e.g. PCI or I2C) rather than
>     this API directly.  */
>
> -#include "net.h"
>  #include "qdev.h"
>  #include "sysemu.h"
>  #include "error.h"
> @@ -105,47 +104,6 @@ void qdev_set_parent_bus(DeviceState *dev, BusState *bus)
>      bus_add_child(bus, dev);
>  }
>
> -/* Create a new device.  This only initializes the device state structure
> -   and allows properties to be set.  qdev_init should be called to
> -   initialize the actual device emulation.  */
> -DeviceState *qdev_create(BusState *bus, const char *name)
> -{
> -    DeviceState *dev;
> -
> -    dev = qdev_try_create(bus, name);
> -    if (!dev) {
> -        if (bus) {
> -            hw_error("Unknown device '%s' for bus '%s'\n", name,
> -                     object_get_typename(OBJECT(bus)));
> -        } else {
> -            hw_error("Unknown device '%s' for default sysbus\n", name);
> -        }
> -    }
> -
> -    return dev;
> -}
> -
> -DeviceState *qdev_try_create(BusState *bus, const char *type)
> -{
> -    DeviceState *dev;
> -
> -    if (object_class_by_name(type) == NULL) {
> -        return NULL;
> -    }
> -    dev = DEVICE(object_new(type));
> -    if (!dev) {
> -        return NULL;
> -    }
> -
> -    if (!bus) {
> -        bus = sysbus_get_default();
> -    }
> -
> -    qdev_set_parent_bus(dev, bus);
> -
> -    return dev;
> -}
> -
>  /* Initialize a device.  Device properties should be set before calling
>     this function.  IRQs and MMIO regions should be connected/mapped after
>     calling this function.
> @@ -175,11 +133,13 @@ int qdev_init(DeviceState *dev)
>          g_free(name);
>      }
>
> +#ifndef CONFIG_USER_ONLY
>      if (qdev_get_vmsd(dev)) {
>          vmstate_register_with_alias_id(dev, -1, qdev_get_vmsd(dev), dev,
>                                         dev->instance_id_alias,
>                                         dev->alias_required_for_version);
>      }
> +#endif
>      dev->state = DEV_STATE_INITIALIZED;
>      if (dev->hotplugged) {
>          device_reset(dev);
> @@ -292,56 +252,6 @@ BusState *qdev_get_parent_bus(DeviceState *dev)
>      return dev->parent_bus;
>  }
>
> -void qdev_init_gpio_in(DeviceState *dev, qemu_irq_handler handler, int n)
> -{
> -    assert(dev->num_gpio_in == 0);
> -    dev->num_gpio_in = n;
> -    dev->gpio_in = qemu_allocate_irqs(handler, dev, n);
> -}
> -
> -void qdev_init_gpio_out(DeviceState *dev, qemu_irq *pins, int n)
> -{
> -    assert(dev->num_gpio_out == 0);
> -    dev->num_gpio_out = n;
> -    dev->gpio_out = pins;
> -}
> -
> -qemu_irq qdev_get_gpio_in(DeviceState *dev, int n)
> -{
> -    assert(n >= 0 && n < dev->num_gpio_in);
> -    return dev->gpio_in[n];
> -}
> -
> -void qdev_connect_gpio_out(DeviceState * dev, int n, qemu_irq pin)
> -{
> -    assert(n >= 0 && n < dev->num_gpio_out);
> -    dev->gpio_out[n] = pin;
> -}
> -
> -void qdev_set_nic_properties(DeviceState *dev, NICInfo *nd)
> -{
> -    qdev_prop_set_macaddr(dev, "mac", nd->macaddr.a);
> -    if (nd->netdev)
> -        qdev_prop_set_netdev(dev, "netdev", nd->netdev);
> -    if (nd->nvectors != DEV_NVECTORS_UNSPECIFIED &&
> -        object_property_find(OBJECT(dev), "vectors", NULL)) {
> -        qdev_prop_set_uint32(dev, "vectors", nd->nvectors);
> -    }
> -    nd->instantiated = 1;
> -}
> -
> -BusState *qdev_get_child_bus(DeviceState *dev, const char *name)
> -{
> -    BusState *bus;
> -
> -    QLIST_FOREACH(bus, &dev->child_bus, sibling) {
> -        if (strcmp(name, bus->name) == 0) {
> -            return bus;
> -        }
> -    }
> -    return NULL;
> -}
> -
>  int qbus_walk_children(BusState *bus, qdev_walkerfn *devfn,
>                         qbus_walkerfn *busfn, void *opaque)
>  {
> @@ -440,11 +350,14 @@ static void qbus_realize(BusState *bus)
>          QLIST_INSERT_HEAD(&bus->parent->child_bus, bus, sibling);
>          bus->parent->num_child_bus++;
>          object_property_add_child(OBJECT(bus->parent), bus->name, OBJECT(bus), NULL);
> -    } else if (bus != sysbus_get_default()) {
> +    }
> +#ifndef CONFIG_USER_ONLY
> +    else if (bus != sysbus_get_default()) {
>          /* TODO: once all bus devices are qdevified,
>             only reset handler for main_system_bus should be registered here. */
>          qemu_register_reset(qbus_reset_all_fn, bus);
>      }
> +#endif
>  }
>
>  void qbus_create_inplace(BusState *bus, const char *typename,
> @@ -703,9 +616,11 @@ static void device_finalize(Object *obj)
>              bus = QLIST_FIRST(&dev->child_bus);
>              qbus_free(bus);
>          }
> +#ifndef CONFIG_USER_ONLY
>          if (qdev_get_vmsd(dev)) {
>              vmstate_unregister(dev, qdev_get_vmsd(dev), dev);
>          }
> +#endif
>          if (dc->exit) {
>              dc->exit(dev);
>          }
> @@ -779,8 +694,10 @@ static void qbus_finalize(Object *obj)
>          QLIST_REMOVE(bus, sibling);
>          bus->parent->num_child_bus--;
>      } else {
> +#ifndef CONFIG_USER_ONLY
>          assert(bus != sysbus_get_default()); /* main_system_bus is never freed */
>          qemu_unregister_reset(qbus_reset_all_fn, bus);
> +#endif
>      }
>      g_free((char *)bus->name);
>  }
> --
> 1.7.11.4
>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 18:55:26 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 18:55:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3tbY-0003KN-TX; Tue, 21 Aug 2012 18:55:08 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <ehabkost@redhat.com>) id 1T3tbX-0003KI-1k
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 18:55:07 +0000
X-Env-Sender: ehabkost@redhat.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1345575298!10265967!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQ0MzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27542 invoked from network); 21 Aug 2012 18:54:58 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-16.tower-27.messagelabs.com with SMTP;
	21 Aug 2012 18:54:58 -0000
Received: from int-mx10.intmail.prod.int.phx2.redhat.com
	(int-mx10.intmail.prod.int.phx2.redhat.com [10.5.11.23])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id q7LIshfE022777
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 21 Aug 2012 14:54:43 -0400
Received: from blackpad.lan.raisama.net (vpn1-6-72.gru2.redhat.com
	[10.97.6.72])
	by int-mx10.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP
	id q7LIsgTC020627; Tue, 21 Aug 2012 14:54:42 -0400
Received: by blackpad.lan.raisama.net (Postfix, from userid 500)
	id C1A16200CDE; Tue, 21 Aug 2012 15:55:53 -0300 (BRT)
Date: Tue, 21 Aug 2012 15:55:53 -0300
From: Eduardo Habkost <ehabkost@redhat.com>
To: Peter Maydell <peter.maydell@linaro.org>
Message-ID: <20120821185553.GD2886@otherpad.lan.raisama.net>
References: <1345563782-11224-1-git-send-email-ehabkost@redhat.com>
	<1345563782-11224-8-git-send-email-ehabkost@redhat.com>
	<CAFEAcA9R6GBZB_B1=sn4iAVXBGL8CyPjxNCaL=6fF+wAiyEwSA@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAFEAcA9R6GBZB_B1=sn4iAVXBGL8CyPjxNCaL=6fF+wAiyEwSA@mail.gmail.com>
X-Fnord: you can see the fnord
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.23
Cc: jan.kiszka@siemens.com, mjt@tls.msk.ru, qemu-devel@nongnu.org,
	armbru@redhat.com, blauwirbel@gmail.com, kraxel@redhat.com,
	xen-devel@lists.xensource.com, i.mitsyanko@samsung.com,
	mdroth@linux.vnet.ibm.com, avi@redhat.com,
	anthony.perard@citrix.com, lersek@redhat.com,
	stefanha@linux.vnet.ibm.com, stefano.stabellini@eu.citrix.com,
	sw@weilnetz.de, imammedo@redhat.com, lcapitulino@redhat.com,
	rth@twiddle.net, kwolf@redhat.com, aliguori@us.ibm.com,
	mtosatti@redhat.com, pbonzini@redhat.com, afaerber@suse.de
Subject: Re: [Xen-devel] [RFC 7/8] include core qdev code into *-user, too
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Aug 21, 2012 at 05:59:22PM +0100, Peter Maydell wrote:
> On 21 August 2012 16:43, Eduardo Habkost <ehabkost@redhat.com> wrote:
> > The code depends on some functions from qemu-option.o, so add
> > qemu-option.o to qom-obj-y to make sure it's included.
> >
> > Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
> > ---
> >  Makefile.objs                                   | 1 +
> >  hw/Makefile.objs                                | 2 +-
> >  qom/Makefile.objs                               | 2 +-
> >  hw/qdev-properties.c => qom/device-properties.c | 0
> >  hw/qdev.c => qom/device.c                       | 0
> >  5 files changed, 3 insertions(+), 2 deletions(-)
> >  rename hw/qdev-properties.c => qom/device-properties.c (100%)
> >  rename hw/qdev.c => qom/device.c (100%)
> >
> > diff --git a/Makefile.objs b/Makefile.objs
> > index 4412757..2cf91c2 100644
> > --- a/Makefile.objs
> > +++ b/Makefile.objs
> > @@ -14,6 +14,7 @@ universal-obj-y += $(qobject-obj-y)
> >  #######################################################################
> >  # QOM
> >  qom-obj-y = qom/
> > +qom-obj-y += qemu-option.o
> 
> qemu-option.c isn't actually QOM related code...

True, but it's a dependency of the QOM DeviceState code. I don't know if
qom-obj-y is for "the QOM code" or "QOM code + dependencies".

I simply added it to qom-obj-y to avoid having to repeat myself
(otherwise I would need to add qemu-option.o to both common-obj-y and
user-obj-y).
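
A minimal sketch of the two options contrasted here (fragment only, not a complete Makefile; the variable names match the Makefile.objs hunk quoted above):

```make
# Option taken in the patch: fold the dependency into qom-obj-y once.
qom-obj-y = qom/
qom-obj-y += qemu-option.o

# The alternative would repeat the object in both link lists:
# common-obj-y += qemu-option.o
# user-obj-y   += qemu-option.o
```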

-- 
Eduardo

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 19:13:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 19:13:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3ttJ-0003YP-Jg; Tue, 21 Aug 2012 19:13:29 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T3ttH-0003YK-7o
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 19:13:27 +0000
Received: from [85.158.139.83:36164] by server-9.bemta-5.messagelabs.com id
	D1/5B-26123-6DDD3305; Tue, 21 Aug 2012 19:13:26 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-2.tower-182.messagelabs.com!1345576404!29326099!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNzAyMzU2\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11789 invoked from network); 21 Aug 2012 19:13:25 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-2.tower-182.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Aug 2012 19:13:25 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7LJDHZH008413
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 21 Aug 2012 19:13:18 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7LJDG0c004897
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 21 Aug 2012 19:13:16 GMT
Received: from abhmt120.oracle.com (abhmt120.oracle.com [141.146.116.72])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7LJDFMk031735; Tue, 21 Aug 2012 14:13:15 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 21 Aug 2012 12:13:16 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 7AFE64031E; Tue, 21 Aug 2012 15:03:17 -0400 (EDT)
Date: Tue, 21 Aug 2012 15:03:17 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>, JBeulich@suse.com
Message-ID: <20120821190317.GA13035@phenom.dumpdata.com>
References: <1345133009-21941-1-git-send-email-konrad.wilk@oracle.com>
	<1345133009-21941-3-git-send-email-konrad.wilk@oracle.com>
	<alpine.DEB.2.02.1208171835010.15568@kaball.uk.xensource.com>
	<20120820141305.GA2713@phenom.dumpdata.com>
	<20120821172732.GA23715@phenom.dumpdata.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20120821172732.GA23715@phenom.dumpdata.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: [Xen-devel] Q:pt_base in COMPAT mode offset by two pages.
 Was:Re: [PATCH 02/11] xen/x86: Use memblock_reserve for sensitive areas.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Aug 21, 2012 at 01:27:32PM -0400, Konrad Rzeszutek Wilk wrote:
> On Mon, Aug 20, 2012 at 10:13:05AM -0400, Konrad Rzeszutek Wilk wrote:
> > On Fri, Aug 17, 2012 at 06:35:12PM +0100, Stefano Stabellini wrote:
> > > On Thu, 16 Aug 2012, Konrad Rzeszutek Wilk wrote:
> > > > instead of a big memblock_reserve. This way we can be more
> > > > selective in freeing regions (and it also makes it easier
> > > > to understand where is what).
> > > > 
> > > > [v1: Move the auto_translate_physmap to proper line]
> > > > [v2: Per Stefano suggestion add more comments]
> > > > Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > > 
> > > much better now!
> > 
> > Though, interestingly enough, it breaks 32-bit dom0s (and only dom0s).
> > Will have a revised patch posted shortly.
> 
> Jan, I noticed something odd. Part of this code replaces this:
> 
> 	memblock_reserve(__pa(xen_start_info->mfn_list),
> 		xen_start_info->pt_base - xen_start_info->mfn_list);
> 
> with a more region-by-region approach. What I found is that if I boot this
> as a 32-bit guest under a 64-bit hypervisor, xen_start_info->pt_base is
> actually wrong.
> 
> Specifically this is what bootup says:
> 
> (good working case - 32bit hypervisor with 32-bit dom0):
> (XEN)  Loaded kernel: c1000000->c1a23000
> (XEN)  Init. ramdisk: c1a23000->cf730e00
> (XEN)  Phys-Mach map: cf731000->cf831000
> (XEN)  Start info:    cf831000->cf83147c
> (XEN)  Page tables:   cf832000->cf8b5000
> (XEN)  Boot stack:    cf8b5000->cf8b6000
> (XEN)  TOTAL:         c0000000->cfc00000
> 
> [    0.000000] PT: cf832000 (f832000)
> [    0.000000] Reserving PT: f832000->f8b5000
> 
> And with a 64-bit hypervisor:
> 
> XEN) VIRTUAL MEMORY ARRANGEMENT:
> (XEN)  Loaded kernel: 00000000c1000000->00000000c1a23000
> (XEN)  Init. ramdisk: 00000000c1a23000->00000000cf730e00
> (XEN)  Phys-Mach map: 00000000cf731000->00000000cf831000
> (XEN)  Start info:    00000000cf831000->00000000cf8314b4
> (XEN)  Page tables:   00000000cf832000->00000000cf8b6000
> (XEN)  Boot stack:    00000000cf8b6000->00000000cf8b7000
> (XEN)  TOTAL:         00000000c0000000->00000000cfc00000
> (XEN)  ENTRY ADDRESS: 00000000c16bb22c
> 
> [    0.000000] PT: cf834000 (f834000)
> [    0.000000] Reserving PT: f834000->f8b8000
> 
> So pt_base is offset by two pages, and looking at c/s 13257
> it's not clear to me why this two-page offset was added.
> 
> The toolstack is unaffected - launching 32-bit guests either
> under a 32-bit or a 64-bit hypervisor works fine:
> ] domainbuilder: detail: xc_dom_alloc_segment:   page tables  : 0xcf805000 -> 0xcf885000  (pfn 0xf805 + 0x80 pages)
> [    0.000000] PT: cf805000 (f805000)
> 

And this patch on top of the others fixes it:


>From 806c312e50f122c47913145cf884f53dd09d9199 Mon Sep 17 00:00:00 2001
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Tue, 21 Aug 2012 14:31:24 -0400
Subject: [PATCH] xen/x86: Workaround 64-bit hypervisor and 32-bit initial
 domain.

If a 64-bit hypervisor is booted with a 32-bit initial domain,
the hypervisor treats the initial domain as "compat" and makes
some extra adjustments (for example, page-table entries are 4 bytes
instead of 8). It also adjusts xen_start_info->pt_base incorrectly.

When booted with a 32-bit hypervisor (32-bit initial domain):
..
(XEN)  Start info:    cf831000->cf83147c
(XEN)  Page tables:   cf832000->cf8b5000
..
[    0.000000] PT: cf832000 (f832000)
[    0.000000] Reserving PT: f832000->f8b5000

And with a 64-bit hypervisor:
(XEN)  Start info:    00000000cf831000->00000000cf8314b4
(XEN)  Page tables:   00000000cf832000->00000000cf8b6000

[    0.000000] PT: cf834000 (f834000)
[    0.000000] Reserving PT: f834000->f8b8000

To deal with this, we keep track of the highest physical
address we have reserved via memblock_reserve. If that address
falls below pt_base, there is a gap, which we reserve.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/xen/enlighten.c |   30 +++++++++++++++++++++---------
 1 files changed, 21 insertions(+), 9 deletions(-)

diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index e532eb5..511f92d 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -1002,19 +1002,24 @@ static int xen_write_msr_safe(unsigned int msr, unsigned low, unsigned high)
  * If the MFN is not in the m2p (provided to us by the hypervisor) this
  * function won't do anything. In practice this means that the XenBus
  * MFN won't be available for the initial domain. */
-static void __init xen_reserve_mfn(unsigned long mfn)
+static unsigned long __init xen_reserve_mfn(unsigned long mfn)
 {
-	unsigned long pfn;
+	unsigned long pfn, end_pfn = 0;
 
 	if (!mfn)
-		return;
+		return end_pfn;
+
 	pfn = mfn_to_pfn(mfn);
-	if (phys_to_machine_mapping_valid(pfn))
-		memblock_reserve(PFN_PHYS(pfn), PAGE_SIZE);
+	if (phys_to_machine_mapping_valid(pfn)) {
+		end_pfn = PFN_PHYS(pfn) + PAGE_SIZE;
+		memblock_reserve(PFN_PHYS(pfn), PAGE_SIZE);
+	}
+	return end_pfn;
 }
 static void __init xen_reserve_internals(void)
 {
 	unsigned long size;
+	unsigned long last_phys = 0;
 
 	if (!xen_pv_domain())
 		return;
@@ -1022,12 +1027,13 @@ static void __init xen_reserve_internals(void)
 	/* xen_start_info does not exist in the M2P, hence can't use
 	 * xen_reserve_mfn. */
 	memblock_reserve(__pa(xen_start_info), PAGE_SIZE);
+	last_phys = __pa(xen_start_info) + PAGE_SIZE;
 
-	xen_reserve_mfn(PFN_DOWN(xen_start_info->shared_info));
-	xen_reserve_mfn(xen_start_info->store_mfn);
+	last_phys = max(xen_reserve_mfn(PFN_DOWN(xen_start_info->shared_info)), last_phys);
+	last_phys = max(xen_reserve_mfn(xen_start_info->store_mfn), last_phys);
 
 	if (!xen_initial_domain())
-		xen_reserve_mfn(xen_start_info->console.domU.mfn);
+		last_phys = max(xen_reserve_mfn(xen_start_info->console.domU.mfn), last_phys);
 
 	if (xen_feature(XENFEAT_auto_translated_physmap))
 		return;
@@ -1043,8 +1049,14 @@ static void __init xen_reserve_internals(void)
 	 * a lot (and call memblock_reserve for each PAGE), so lets just use
 	 * the easy way and reserve it wholesale. */
 	memblock_reserve(__pa(xen_start_info->mfn_list), size);
-
+	last_phys = max(__pa(xen_start_info->mfn_list) + size, last_phys);
 	/* The pagetables are reserved in mmu.c */
+
+	/* Under 64-bit hypervisor with a 32-bit domain, the hypervisor
+	 * offsets the pt_base by two pages. Hence the reservation that is done
+	 * in mmu.c misses two pages. We correct it here if we detect this. */
+	if (last_phys < __pa(xen_start_info->pt_base))
+		memblock_reserve(last_phys, __pa(xen_start_info->pt_base) - last_phys);
 }
 void xen_setup_shared_info(void)
 {
-- 
1.7.7.6


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Aug 21, 2012 at 01:27:32PM -0400, Konrad Rzeszutek Wilk wrote:
> On Mon, Aug 20, 2012 at 10:13:05AM -0400, Konrad Rzeszutek Wilk wrote:
> > On Fri, Aug 17, 2012 at 06:35:12PM +0100, Stefano Stabellini wrote:
> > > On Thu, 16 Aug 2012, Konrad Rzeszutek Wilk wrote:
> > > > instead of a big memblock_reserve. This way we can be more
> > > > selective in freeing regions (and it also makes it easier
> > > > to understand what is where).
> > > > 
> > > > [v1: Move the auto_translate_physmap to proper line]
> > > > [v2: Per Stefano suggestion add more comments]
> > > > Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > > 
> > > much better now!
> > 
> > Though interestingly enough it breaks 32-bit dom0s (and only dom0s).
> > Will have a revised patch posted shortly.
> 
> Jan, I noticed something odd. Part of this code replaces this:
> 
> 	memblock_reserve(__pa(xen_start_info->mfn_list),
> 		xen_start_info->pt_base - xen_start_info->mfn_list);
> 
> with more region-by-region reservations. What I found is that if I boot this
> as 32-bit guest with a 64-bit hypervisor the xen_start_info->pt_base is
> actually wrong.
> 
> Specifically this is what bootup says:
> 
> (good working case - 32bit hypervisor with 32-bit dom0):
> (XEN)  Loaded kernel: c1000000->c1a23000
> (XEN)  Init. ramdisk: c1a23000->cf730e00
> (XEN)  Phys-Mach map: cf731000->cf831000
> (XEN)  Start info:    cf831000->cf83147c
> (XEN)  Page tables:   cf832000->cf8b5000
> (XEN)  Boot stack:    cf8b5000->cf8b6000
> (XEN)  TOTAL:         c0000000->cfc00000
> 
> [    0.000000] PT: cf832000 (f832000)
> [    0.000000] Reserving PT: f832000->f8b5000
> 
> And with a 64-bit hypervisor:
> 
> XEN) VIRTUAL MEMORY ARRANGEMENT:
> (XEN)  Loaded kernel: 00000000c1000000->00000000c1a23000
> (XEN)  Init. ramdisk: 00000000c1a23000->00000000cf730e00
> (XEN)  Phys-Mach map: 00000000cf731000->00000000cf831000
> (XEN)  Start info:    00000000cf831000->00000000cf8314b4
> (XEN)  Page tables:   00000000cf832000->00000000cf8b6000
> (XEN)  Boot stack:    00000000cf8b6000->00000000cf8b7000
> (XEN)  TOTAL:         00000000c0000000->00000000cfc00000
> (XEN)  ENTRY ADDRESS: 00000000c16bb22c
> 
> [    0.000000] PT: cf834000 (f834000)
> [    0.000000] Reserving PT: f834000->f8b8000
> 
> So the pt_base is offset by two pages. And looking at c/s 13257,
> it's not clear to me why this two-page offset was added.
> 
> The toolstack is fine - launching 32-bit guests under either
> a 32-bit or a 64-bit hypervisor works:
> ] domainbuilder: detail: xc_dom_alloc_segment:   page tables  : 0xcf805000 -> 0xcf885000  (pfn 0xf805 + 0x80 pages)
> [    0.000000] PT: cf805000 (f805000)
> 

And this patch on top of the others fixes this..


>From 806c312e50f122c47913145cf884f53dd09d9199 Mon Sep 17 00:00:00 2001
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Tue, 21 Aug 2012 14:31:24 -0400
Subject: [PATCH] xen/x86: Workaround 64-bit hypervisor and 32-bit initial
 domain.

If a 64-bit hypervisor is booted with a 32-bit initial domain,
the hypervisor treats the initial domain as "compat" and
makes some extra adjustments (for example, page table entries
are 4 bytes instead of 8). It also adjusts xen_start_info->pt_base
incorrectly.

When booted with a 32-bit hypervisor (32-bit initial domain):
..
(XEN)  Start info:    cf831000->cf83147c
(XEN)  Page tables:   cf832000->cf8b5000
..
[    0.000000] PT: cf832000 (f832000)
[    0.000000] Reserving PT: f832000->f8b5000

And with a 64-bit hypervisor:
(XEN)  Start info:    00000000cf831000->00000000cf8314b4
(XEN)  Page tables:   00000000cf832000->00000000cf8b6000

[    0.000000] PT: cf834000 (f834000)
[    0.000000] Reserving PT: f834000->f8b8000

To deal with this, we keep track of the highest physical
address we have reserved via memblock_reserve. If that address
does not reach pt_base, we have a gap which we also reserve.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/xen/enlighten.c |   30 +++++++++++++++++++++---------
 1 files changed, 21 insertions(+), 9 deletions(-)

diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index e532eb5..511f92d 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -1002,19 +1002,24 @@ static int xen_write_msr_safe(unsigned int msr, unsigned low, unsigned high)
  * If the MFN is not in the m2p (provided to us by the hypervisor) this
  * function won't do anything. In practice this means that the XenBus
  * MFN won't be available for the initial domain. */
-static void __init xen_reserve_mfn(unsigned long mfn)
+static unsigned long __init xen_reserve_mfn(unsigned long mfn)
 {
-	unsigned long pfn;
+	unsigned long pfn, end_pfn = 0;
 
 	if (!mfn)
-		return;
+		return end_pfn;
+
 	pfn = mfn_to_pfn(mfn);
-	if (phys_to_machine_mapping_valid(pfn))
-		memblock_reserve(PFN_PHYS(pfn), PAGE_SIZE);
+	if (phys_to_machine_mapping_valid(pfn)) {
+		end_pfn = PFN_PHYS(pfn) + PAGE_SIZE;
+		memblock_reserve(PFN_PHYS(pfn), PAGE_SIZE);
+	}
+	return end_pfn;
 }
 static void __init xen_reserve_internals(void)
 {
 	unsigned long size;
+	unsigned long last_phys = 0;
 
 	if (!xen_pv_domain())
 		return;
@@ -1022,12 +1027,13 @@ static void __init xen_reserve_internals(void)
 	/* xen_start_info does not exist in the M2P, hence can't use
 	 * xen_reserve_mfn. */
 	memblock_reserve(__pa(xen_start_info), PAGE_SIZE);
+	last_phys = __pa(xen_start_info) + PAGE_SIZE;
 
-	xen_reserve_mfn(PFN_DOWN(xen_start_info->shared_info));
-	xen_reserve_mfn(xen_start_info->store_mfn);
+	last_phys = max(xen_reserve_mfn(PFN_DOWN(xen_start_info->shared_info)), last_phys);
+	last_phys = max(xen_reserve_mfn(xen_start_info->store_mfn), last_phys);
 
 	if (!xen_initial_domain())
-		xen_reserve_mfn(xen_start_info->console.domU.mfn);
+		last_phys = max(xen_reserve_mfn(xen_start_info->console.domU.mfn), last_phys);
 
 	if (xen_feature(XENFEAT_auto_translated_physmap))
 		return;
@@ -1043,8 +1049,14 @@ static void __init xen_reserve_internals(void)
 	 * a lot (and call memblock_reserve for each PAGE), so lets just use
 	 * the easy way and reserve it wholesale. */
 	memblock_reserve(__pa(xen_start_info->mfn_list), size);
-
+	last_phys = max(__pa(xen_start_info->mfn_list) + size, last_phys);
 	/* The pagetables are reserved in mmu.c */
+
+	/* Under a 64-bit hypervisor with a 32-bit domain, the hypervisor
+	 * offsets pt_base by two pages. Hence the reservation done in
+	 * mmu.c misses two pages; we correct that here if we detect it. */
+	if (last_phys < __pa(xen_start_info->pt_base))
+		memblock_reserve(last_phys, __pa(xen_start_info->pt_base) - last_phys);
 }
 void xen_setup_shared_info(void)
 {
-- 
1.7.7.6


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 20:10:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 20:10:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3ulu-0003v6-8L; Tue, 21 Aug 2012 20:09:54 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andres@lagarcavilla.org>) id 1T3ulr-0003v1-Vc
	for xen-devel@lists.xen.org; Tue, 21 Aug 2012 20:09:52 +0000
Received: from [85.158.139.83:64765] by server-6.bemta-5.messagelabs.com id
	98/1C-22415-F0BE3305; Tue, 21 Aug 2012 20:09:51 +0000
X-Env-Sender: andres@lagarcavilla.org
X-Msg-Ref: server-5.tower-182.messagelabs.com!1345579789!29388270!1
X-Originating-IP: [208.97.132.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiAyMDguOTcuMTMyLjgxID0+IDMyNDAx\n,sa_preprocessor: 
	QmFkIElQOiAyMDguOTcuMTMyLjgxID0+IDMyNDAx\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15641 invoked from network); 21 Aug 2012 20:09:50 -0000
Received: from caiajhbdcaib.dreamhost.com (HELO homiemail-a18.g.dreamhost.com)
	(208.97.132.81) by server-5.tower-182.messagelabs.com with SMTP;
	21 Aug 2012 20:09:50 -0000
Received: from homiemail-a18.g.dreamhost.com (localhost [127.0.0.1])
	by homiemail-a18.g.dreamhost.com (Postfix) with ESMTP id 7858925006C;
	Tue, 21 Aug 2012 13:09:49 -0700 (PDT)
DomainKey-Signature: a=rsa-sha1; c=nofws; d=lagarcavilla.org; h=content-type
	:mime-version:content-transfer-encoding:subject:message-id:date
	:from:to:cc; q=dns; s=lagarcavilla.org; b=knNHmQxRuc9fMu3kqtSxn6
	WxpSi13bv5FcuPkt0osUTFpjgDNFOqjx/OJfqG5rxEN6Q8tgtKzWwQPRgHp3ygol
	8OlIUTzv3n1MLbaWmwTFPGnqKeIWRJZv/F0iQe0k8QgQMLRI49UANHuXdQkEYJO2
	9zcXdGKKpkcOcxdmeuRd0=
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed; d=lagarcavilla.org; h=
	content-type:mime-version:content-transfer-encoding:subject
	:message-id:date:from:to:cc; s=lagarcavilla.org; bh=zaoJB4iVbnZU
	Vor7j8y/Edw/UFc=; b=DaRWBCzAHj/zreO8EAwtKH5CaxdTZfdg26WUS3E7ioJU
	dBDV7Up97g7vgaAI6Ky/6HXmeRavLazcFXVDnoqosarVMf+0tamswrAi/MXjm4jn
	T0qrZu7FZTyqeSFOpgpk4A64rL1NKIa1fdRhzZhFcDkhBn9W4PvRhvolgFjNHh8=
Received: from xdev.gridcentric.ca (unknown [206.223.182.18])
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(No client certificate requested)
	(Authenticated sender: andres@lagarcavilla.com)
	by homiemail-a18.g.dreamhost.com (Postfix) with ESMTPSA id CA2F9250069; 
	Tue, 21 Aug 2012 13:09:48 -0700 (PDT)
MIME-Version: 1.0
X-Mercurial-Node: 84a23ae853a53e39ebd1d58a5b2cc65bce2b5281
Message-Id: <84a23ae853a53e39ebd1.1345580037@xdev.gridcentric.ca>
User-Agent: Mercurial-patchbomb/1.8.4
Date: Tue, 21 Aug 2012 16:13:57 -0400
From: Andres Lagar-Cavilla <andres@lagarcavilla.org>
To: xen-devel@lists.xen.org
Cc: keir@xen.org, andres@gridcentric.ca, tim@xen.org, JBeulich@suse.com
Subject: [Xen-devel] [PATCH] Fix shared entry status for grant copy
 operation on paged out gfn
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

 xen/common/grant_table.c |  33 ++++++++++++++++++++++-----------
 1 files changed, 22 insertions(+), 11 deletions(-)


The unwind path was not clearing the shared entry status bits. This was
BSOD-ing guests on network activity under certain configurations.

Also:
 * sed the fixup method name to signal it's related to grant copy.
 * use atomic clear flag ops during fixup.

Signed-off-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>

diff -r 5267f78c8a1e -r 84a23ae853a5 xen/common/grant_table.c
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -1751,14 +1751,14 @@ __release_grant_for_copy(
    under the domain's grant table lock. */
 /* Only safe on transitive grants.  Even then, note that we don't
    attempt to drop any pin on the referent grant. */
-static void __fixup_status_for_pin(const struct active_grant_entry *act,
+static void __fixup_status_for_copy_pin(const struct active_grant_entry *act,
                                    uint16_t *status)
 {
     if ( !(act->pin & GNTPIN_hstw_mask) )
-        *status &= ~GTF_writing;
+        gnttab_clear_flag(_GTF_writing, status);
 
     if ( !(act->pin & GNTPIN_hstr_mask) )
-        *status &= ~GTF_reading;
+        gnttab_clear_flag(_GTF_reading, status);
 }
 
 /* Grab a frame number from a grant entry and update the flags and pin
@@ -1834,7 +1834,7 @@ __acquire_grant_for_copy(
         if ( sha2 && (shah->flags & GTF_type_mask) == GTF_transitive )
         {
             if ( !allow_transitive )
-                PIN_FAIL(unlock_out, GNTST_general_error,
+                PIN_FAIL(unlock_out_clear, GNTST_general_error,
                          "transitive grant when transitivity not allowed\n");
 
             trans_domid = sha2->transitive.trans_domid;
@@ -1842,7 +1842,7 @@ __acquire_grant_for_copy(
             barrier(); /* Stop the compiler from re-loading
                           trans_domid from shared memory */
             if ( trans_domid == rd->domain_id )
-                PIN_FAIL(unlock_out, GNTST_general_error,
+                PIN_FAIL(unlock_out_clear, GNTST_general_error,
                          "transitive grants cannot be self-referential\n");
 
             /* We allow the trans_domid == ldom case, which
@@ -1855,7 +1855,7 @@ __acquire_grant_for_copy(
             /* We need to leave the rrd locked during the grant copy */
             td = rcu_lock_domain_by_id(trans_domid);
             if ( td == NULL )
-                PIN_FAIL(unlock_out, GNTST_general_error,
+                PIN_FAIL(unlock_out_clear, GNTST_general_error,
                          "transitive grant referenced bad domain %d\n",
                          trans_domid);
             spin_unlock(&rgt->lock);
@@ -1866,7 +1866,7 @@ __acquire_grant_for_copy(
 
             spin_lock(&rgt->lock);
             if ( rc != GNTST_okay ) {
-                __fixup_status_for_pin(act, status);
+                __fixup_status_for_copy_pin(act, status);
                 rcu_unlock_domain(td);
                 spin_unlock(&rgt->lock);
                 return rc;
@@ -1878,7 +1878,7 @@ __acquire_grant_for_copy(
                and try again. */
             if ( act->pin != old_pin )
             {
-                __fixup_status_for_pin(act, status);
+                __fixup_status_for_copy_pin(act, status);
                 rcu_unlock_domain(td);
                 spin_unlock(&rgt->lock);
                 put_page(*page);
@@ -1897,7 +1897,7 @@ __acquire_grant_for_copy(
         {
             rc = __get_paged_frame(sha1->frame, &grant_frame, page, readonly, rd);
             if ( rc != GNTST_okay )
-                goto unlock_out;
+                goto unlock_out_clear;
             act->gfn = sha1->frame;
             is_sub_page = 0;
             trans_page_off = 0;
@@ -1907,7 +1907,7 @@ __acquire_grant_for_copy(
         {
             rc = __get_paged_frame(sha2->full_page.frame, &grant_frame, page, readonly, rd);
             if ( rc != GNTST_okay )
-                goto unlock_out;
+                goto unlock_out_clear;
             act->gfn = sha2->full_page.frame;
             is_sub_page = 0;
             trans_page_off = 0;
@@ -1917,7 +1917,7 @@ __acquire_grant_for_copy(
         {
             rc = __get_paged_frame(sha2->sub_page.frame, &grant_frame, page, readonly, rd);
             if ( rc != GNTST_okay )
-                goto unlock_out;
+                goto unlock_out_clear;
             act->gfn = sha2->sub_page.frame;
             is_sub_page = 1;
             trans_page_off = sha2->sub_page.page_off;
@@ -1948,6 +1948,17 @@ __acquire_grant_for_copy(
     *length = act->length;
     *frame = act->frame;
 
+    spin_unlock(&rgt->lock);
+    return rc;
+ 
+ unlock_out_clear:
+    if ( !(readonly) &&
+         !(act->pin & GNTPIN_hstw_mask) )
+        gnttab_clear_flag(_GTF_writing, status);
+
+    if ( !act->pin )
+        gnttab_clear_flag(_GTF_reading, status);
+
  unlock_out:
     spin_unlock(&rgt->lock);
     return rc;

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 20:11:47 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 20:11:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3unP-0003zJ-O1; Tue, 21 Aug 2012 20:11:27 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andreslc@gridcentric.ca>) id 1T3unN-0003z8-4H
	for xen-devel@lists.xen.org; Tue, 21 Aug 2012 20:11:27 +0000
Received: from [85.158.139.83:43807] by server-2.bemta-5.messagelabs.com id
	52/A1-10142-C6BE3305; Tue, 21 Aug 2012 20:11:24 +0000
X-Env-Sender: andreslc@gridcentric.ca
X-Msg-Ref: server-16.tower-182.messagelabs.com!1345579882!21994733!1
X-Originating-IP: [209.85.160.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20491 invoked from network); 21 Aug 2012 20:11:23 -0000
Received: from mail-gh0-f173.google.com (HELO mail-gh0-f173.google.com)
	(209.85.160.173)
	by server-16.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Aug 2012 20:11:23 -0000
Received: by ghrr17 with SMTP id r17so229437ghr.32
	for <xen-devel@lists.xen.org>; Tue, 21 Aug 2012 13:11:22 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=subject:mime-version:content-type:from:in-reply-to:date:cc
	:content-transfer-encoding:message-id:references:to:x-mailer
	:x-gm-message-state;
	bh=lXv+O5NS5XXN5A7HQvJp/h+BX+EDSvXQ2YscFoOrw6E=;
	b=AKjzez27DMjJCDBxYNcZbdHadsZ5nUthpDmhaghYBJXnHBUq0Dj7VcWMQ9xUUpk2N8
	bMWVeAdFuCubbL4ama5yKzQNzIm0rlgPScK+W4JePdM6irnMF6pCc2HgoCw9EwxS1slo
	bUrf/4Lhoit/E9uG+3Gf6Yqsw1YVmTJWZQs+Yav2CS9B0L1+Nncd4oONe4FOuTqCNCiO
	jg6MNnVuLfL/S7DtmbwEWUU8RVcwvDwOQ19/t3SfHpbLYkDUhrNAmglaNpFL9AC4AEbk
	UwHvKVwuxgralibXe7ry5tLKNT6kUIBvzYCbVogS8syERU72NWP7hiYUsjZfVeabjVeh
	r0aA==
Received: by 10.50.76.202 with SMTP id m10mr14719893igw.52.1345579881638;
	Tue, 21 Aug 2012 13:11:21 -0700 (PDT)
Received: from [192.168.5.251] ([206.223.182.18])
	by mx.google.com with ESMTPS id k6sm16732099igz.9.2012.08.21.13.11.20
	(version=TLSv1/SSLv3 cipher=OTHER);
	Tue, 21 Aug 2012 13:11:21 -0700 (PDT)
Mime-Version: 1.0 (Apple Message framework v1278)
From: Andres Lagar-Cavilla <andreslc@gridcentric.ca>
In-Reply-To: <84a23ae853a53e39ebd1.1345580037@xdev.gridcentric.ca>
Date: Tue, 21 Aug 2012 16:11:19 -0400
Message-Id: <1EAB87C2-B963-4832-B249-D02193FFF197@gridcentric.ca>
References: <84a23ae853a53e39ebd1.1345580037@xdev.gridcentric.ca>
To: Andres Lagar-Cavilla <andres@lagarcavilla.org>
X-Mailer: Apple Mail (2.1278)
X-Gm-Message-State: ALoCoQkChqq+2+uCsmjfwxrFmxTvNGVALXE3s4hdENbYL9AbljYKi8iyt9vG4gUIRGNXFvrp7NYB
Cc: Keir Fraser <keir@xen.org>, Tim Deegan <tim@xen.org>,
	Ian Campbell <ian.campbell@citrix.com>,
	Jan Beulich <JBeulich@suse.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] Fix shared entry status for grant copy
	operation on paged out gfn
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

NB: this is intended for the 4.2 window, as it solves a paging-related BUG
Andres
On Aug 21, 2012, at 4:13 PM, Andres Lagar-Cavilla wrote:

> xen/common/grant_table.c |  33 ++++++++++++++++++++++-----------
> 1 files changed, 22 insertions(+), 11 deletions(-)
> 
> 
> The unwind path was not clearing the shared entry status bits. This was
> BSOD-ing guests on network activity under certain configurations.
> 
> Also:
> * sed the fixup method name to signal it's related to grant copy.
> * use atomic clear flag ops during fixup.
> 
> Signed-off-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
> 
> diff -r 5267f78c8a1e -r 84a23ae853a5 xen/common/grant_table.c
> --- a/xen/common/grant_table.c
> +++ b/xen/common/grant_table.c
> @@ -1751,14 +1751,14 @@ __release_grant_for_copy(
>    under the domain's grant table lock. */
> /* Only safe on transitive grants.  Even then, note that we don't
>    attempt to drop any pin on the referent grant. */
> -static void __fixup_status_for_pin(const struct active_grant_entry *act,
> +static void __fixup_status_for_copy_pin(const struct active_grant_entry *act,
>                                    uint16_t *status)
> {
>     if ( !(act->pin & GNTPIN_hstw_mask) )
> -        *status &= ~GTF_writing;
> +        gnttab_clear_flag(_GTF_writing, status);
> 
>     if ( !(act->pin & GNTPIN_hstr_mask) )
> -        *status &= ~GTF_reading;
> +        gnttab_clear_flag(_GTF_reading, status);
> }
> 
> /* Grab a frame number from a grant entry and update the flags and pin
> @@ -1834,7 +1834,7 @@ __acquire_grant_for_copy(
>         if ( sha2 && (shah->flags & GTF_type_mask) == GTF_transitive )
>         {
>             if ( !allow_transitive )
> -                PIN_FAIL(unlock_out, GNTST_general_error,
> +                PIN_FAIL(unlock_out_clear, GNTST_general_error,
>                          "transitive grant when transitivity not allowed\n");
> 
>             trans_domid = sha2->transitive.trans_domid;
> @@ -1842,7 +1842,7 @@ __acquire_grant_for_copy(
>             barrier(); /* Stop the compiler from re-loading
>                           trans_domid from shared memory */
>             if ( trans_domid == rd->domain_id )
> -                PIN_FAIL(unlock_out, GNTST_general_error,
> +                PIN_FAIL(unlock_out_clear, GNTST_general_error,
>                          "transitive grants cannot be self-referential\n");
> 
>             /* We allow the trans_domid == ldom case, which
> @@ -1855,7 +1855,7 @@ __acquire_grant_for_copy(
>             /* We need to leave the rrd locked during the grant copy */
>             td = rcu_lock_domain_by_id(trans_domid);
>             if ( td == NULL )
> -                PIN_FAIL(unlock_out, GNTST_general_error,
> +                PIN_FAIL(unlock_out_clear, GNTST_general_error,
>                          "transitive grant referenced bad domain %d\n",
>                          trans_domid);
>             spin_unlock(&rgt->lock);
> @@ -1866,7 +1866,7 @@ __acquire_grant_for_copy(
> 
>             spin_lock(&rgt->lock);
>             if ( rc != GNTST_okay ) {
> -                __fixup_status_for_pin(act, status);
> +                __fixup_status_for_copy_pin(act, status);
>                 rcu_unlock_domain(td);
>                 spin_unlock(&rgt->lock);
>                 return rc;
> @@ -1878,7 +1878,7 @@ __acquire_grant_for_copy(
>                and try again. */
>             if ( act->pin != old_pin )
>             {
> -                __fixup_status_for_pin(act, status);
> +                __fixup_status_for_copy_pin(act, status);
>                 rcu_unlock_domain(td);
>                 spin_unlock(&rgt->lock);
>                 put_page(*page);
> @@ -1897,7 +1897,7 @@ __acquire_grant_for_copy(
>         {
>             rc = __get_paged_frame(sha1->frame, &grant_frame, page, readonly, rd);
>             if ( rc != GNTST_okay )
> -                goto unlock_out;
> +                goto unlock_out_clear;
>             act->gfn = sha1->frame;
>             is_sub_page = 0;
>             trans_page_off = 0;
> @@ -1907,7 +1907,7 @@ __acquire_grant_for_copy(
>         {
>             rc = __get_paged_frame(sha2->full_page.frame, &grant_frame, page, readonly, rd);
>             if ( rc != GNTST_okay )
> -                goto unlock_out;
> +                goto unlock_out_clear;
>             act->gfn = sha2->full_page.frame;
>             is_sub_page = 0;
>             trans_page_off = 0;
> @@ -1917,7 +1917,7 @@ __acquire_grant_for_copy(
>         {
>             rc = __get_paged_frame(sha2->sub_page.frame, &grant_frame, page, readonly, rd);
>             if ( rc != GNTST_okay )
> -                goto unlock_out;
> +                goto unlock_out_clear;
>             act->gfn = sha2->sub_page.frame;
>             is_sub_page = 1;
>             trans_page_off = sha2->sub_page.page_off;
> @@ -1948,6 +1948,17 @@ __acquire_grant_for_copy(
>     *length = act->length;
>     *frame = act->frame;
> 
> +    spin_unlock(&rgt->lock);
> +    return rc;
> + 
> + unlock_out_clear:
> +    if ( !(readonly) &&
> +         !(act->pin & GNTPIN_hstw_mask) )
> +        gnttab_clear_flag(_GTF_writing, status);
> +
> +    if ( !act->pin )
> +        gnttab_clear_flag(_GTF_reading, status);
> +
>  unlock_out:
>     spin_unlock(&rgt->lock);
>     return rc;


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 20:11:47 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 20:11:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3unP-0003zJ-O1; Tue, 21 Aug 2012 20:11:27 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andreslc@gridcentric.ca>) id 1T3unN-0003z8-4H
	for xen-devel@lists.xen.org; Tue, 21 Aug 2012 20:11:27 +0000
Received: from [85.158.139.83:43807] by server-2.bemta-5.messagelabs.com id
	52/A1-10142-C6BE3305; Tue, 21 Aug 2012 20:11:24 +0000
X-Env-Sender: andreslc@gridcentric.ca
X-Msg-Ref: server-16.tower-182.messagelabs.com!1345579882!21994733!1
X-Originating-IP: [209.85.160.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20491 invoked from network); 21 Aug 2012 20:11:23 -0000
Received: from mail-gh0-f173.google.com (HELO mail-gh0-f173.google.com)
	(209.85.160.173)
	by server-16.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Aug 2012 20:11:23 -0000
Received: by ghrr17 with SMTP id r17so229437ghr.32
	for <xen-devel@lists.xen.org>; Tue, 21 Aug 2012 13:11:22 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=subject:mime-version:content-type:from:in-reply-to:date:cc
	:content-transfer-encoding:message-id:references:to:x-mailer
	:x-gm-message-state;
	bh=lXv+O5NS5XXN5A7HQvJp/h+BX+EDSvXQ2YscFoOrw6E=;
	b=AKjzez27DMjJCDBxYNcZbdHadsZ5nUthpDmhaghYBJXnHBUq0Dj7VcWMQ9xUUpk2N8
	bMWVeAdFuCubbL4ama5yKzQNzIm0rlgPScK+W4JePdM6irnMF6pCc2HgoCw9EwxS1slo
	bUrf/4Lhoit/E9uG+3Gf6Yqsw1YVmTJWZQs+Yav2CS9B0L1+Nncd4oONe4FOuTqCNCiO
	jg6MNnVuLfL/S7DtmbwEWUU8RVcwvDwOQ19/t3SfHpbLYkDUhrNAmglaNpFL9AC4AEbk
	UwHvKVwuxgralibXe7ry5tLKNT6kUIBvzYCbVogS8syERU72NWP7hiYUsjZfVeabjVeh
	r0aA==
Received: by 10.50.76.202 with SMTP id m10mr14719893igw.52.1345579881638;
	Tue, 21 Aug 2012 13:11:21 -0700 (PDT)
Received: from [192.168.5.251] ([206.223.182.18])
	by mx.google.com with ESMTPS id k6sm16732099igz.9.2012.08.21.13.11.20
	(version=TLSv1/SSLv3 cipher=OTHER);
	Tue, 21 Aug 2012 13:11:21 -0700 (PDT)
Mime-Version: 1.0 (Apple Message framework v1278)
From: Andres Lagar-Cavilla <andreslc@gridcentric.ca>
In-Reply-To: <84a23ae853a53e39ebd1.1345580037@xdev.gridcentric.ca>
Date: Tue, 21 Aug 2012 16:11:19 -0400
Message-Id: <1EAB87C2-B963-4832-B249-D02193FFF197@gridcentric.ca>
References: <84a23ae853a53e39ebd1.1345580037@xdev.gridcentric.ca>
To: Andres Lagar-Cavilla <andres@lagarcavilla.org>
X-Mailer: Apple Mail (2.1278)
X-Gm-Message-State: ALoCoQkChqq+2+uCsmjfwxrFmxTvNGVALXE3s4hdENbYL9AbljYKi8iyt9vG4gUIRGNXFvrp7NYB
Cc: Keir Fraser <keir@xen.org>, Tim Deegan <tim@xen.org>,
	Ian Campbell <ian.campbell@citrix.com>,
	Jan Beulich <JBeulich@suse.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] Fix shared entry status for grant copy
	operation on paged out gfn
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

NB: this is intended for the 4.2 window, as it solves a paging-related BUG
Andres
On Aug 21, 2012, at 4:13 PM, Andres Lagar-Cavilla wrote:

> xen/common/grant_table.c |  33 ++++++++++++++++++++++-----------
> 1 files changed, 22 insertions(+), 11 deletions(-)
> 
> 
> The unwind path was not clearing the shared entry status bits. This was
> BSOD-ing guests on network activity under certain configurations.
> 
> Also:
> * rename (via sed) the fixup method to signal it's related to grant copy.
> * use atomic clear flag ops during fixup.
> 
> Signed-off-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
> 
> diff -r 5267f78c8a1e -r 84a23ae853a5 xen/common/grant_table.c
> --- a/xen/common/grant_table.c
> +++ b/xen/common/grant_table.c
> @@ -1751,14 +1751,14 @@ __release_grant_for_copy(
>    under the domain's grant table lock. */
> /* Only safe on transitive grants.  Even then, note that we don't
>    attempt to drop any pin on the referent grant. */
> -static void __fixup_status_for_pin(const struct active_grant_entry *act,
> +static void __fixup_status_for_copy_pin(const struct active_grant_entry *act,
>                                    uint16_t *status)
> {
>     if ( !(act->pin & GNTPIN_hstw_mask) )
> -        *status &= ~GTF_writing;
> +        gnttab_clear_flag(_GTF_writing, status);
> 
>     if ( !(act->pin & GNTPIN_hstr_mask) )
> -        *status &= ~GTF_reading;
> +        gnttab_clear_flag(_GTF_reading, status);
> }
> 
> /* Grab a frame number from a grant entry and update the flags and pin
> @@ -1834,7 +1834,7 @@ __acquire_grant_for_copy(
>         if ( sha2 && (shah->flags & GTF_type_mask) == GTF_transitive )
>         {
>             if ( !allow_transitive )
> -                PIN_FAIL(unlock_out, GNTST_general_error,
> +                PIN_FAIL(unlock_out_clear, GNTST_general_error,
>                          "transitive grant when transitivity not allowed\n");
> 
>             trans_domid = sha2->transitive.trans_domid;
> @@ -1842,7 +1842,7 @@ __acquire_grant_for_copy(
>             barrier(); /* Stop the compiler from re-loading
>                           trans_domid from shared memory */
>             if ( trans_domid == rd->domain_id )
> -                PIN_FAIL(unlock_out, GNTST_general_error,
> +                PIN_FAIL(unlock_out_clear, GNTST_general_error,
>                          "transitive grants cannot be self-referential\n");
> 
>             /* We allow the trans_domid == ldom case, which
> @@ -1855,7 +1855,7 @@ __acquire_grant_for_copy(
>             /* We need to leave the rrd locked during the grant copy */
>             td = rcu_lock_domain_by_id(trans_domid);
>             if ( td == NULL )
> -                PIN_FAIL(unlock_out, GNTST_general_error,
> +                PIN_FAIL(unlock_out_clear, GNTST_general_error,
>                          "transitive grant referenced bad domain %d\n",
>                          trans_domid);
>             spin_unlock(&rgt->lock);
> @@ -1866,7 +1866,7 @@ __acquire_grant_for_copy(
> 
>             spin_lock(&rgt->lock);
>             if ( rc != GNTST_okay ) {
> -                __fixup_status_for_pin(act, status);
> +                __fixup_status_for_copy_pin(act, status);
>                 rcu_unlock_domain(td);
>                 spin_unlock(&rgt->lock);
>                 return rc;
> @@ -1878,7 +1878,7 @@ __acquire_grant_for_copy(
>                and try again. */
>             if ( act->pin != old_pin )
>             {
> -                __fixup_status_for_pin(act, status);
> +                __fixup_status_for_copy_pin(act, status);
>                 rcu_unlock_domain(td);
>                 spin_unlock(&rgt->lock);
>                 put_page(*page);
> @@ -1897,7 +1897,7 @@ __acquire_grant_for_copy(
>         {
>             rc = __get_paged_frame(sha1->frame, &grant_frame, page, readonly, rd);
>             if ( rc != GNTST_okay )
> -                goto unlock_out;
> +                goto unlock_out_clear;
>             act->gfn = sha1->frame;
>             is_sub_page = 0;
>             trans_page_off = 0;
> @@ -1907,7 +1907,7 @@ __acquire_grant_for_copy(
>         {
>             rc = __get_paged_frame(sha2->full_page.frame, &grant_frame, page, readonly, rd);
>             if ( rc != GNTST_okay )
> -                goto unlock_out;
> +                goto unlock_out_clear;
>             act->gfn = sha2->full_page.frame;
>             is_sub_page = 0;
>             trans_page_off = 0;
> @@ -1917,7 +1917,7 @@ __acquire_grant_for_copy(
>         {
>             rc = __get_paged_frame(sha2->sub_page.frame, &grant_frame, page, readonly, rd);
>             if ( rc != GNTST_okay )
> -                goto unlock_out;
> +                goto unlock_out_clear;
>             act->gfn = sha2->sub_page.frame;
>             is_sub_page = 1;
>             trans_page_off = sha2->sub_page.page_off;
> @@ -1948,6 +1948,17 @@ __acquire_grant_for_copy(
>     *length = act->length;
>     *frame = act->frame;
> 
> +    spin_unlock(&rgt->lock);
> +    return rc;
> + 
> + unlock_out_clear:
> +    if ( !(readonly) &&
> +         !(act->pin & GNTPIN_hstw_mask) )
> +        gnttab_clear_flag(_GTF_writing, status);
> +
> +    if ( !act->pin )
> +        gnttab_clear_flag(_GTF_reading, status);
> +
>  unlock_out:
>     spin_unlock(&rgt->lock);
>     return rc;


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 20:21:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 20:21:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3uwv-0004Fr-5t; Tue, 21 Aug 2012 20:21:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T3uwu-0004FW-1w
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 20:21:16 +0000
Received: from [85.158.143.99:7811] by server-3.bemta-4.messagelabs.com id
	1D/6E-09529-BBDE3305; Tue, 21 Aug 2012 20:21:15 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1345580464!22315720!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNzAyMzU2\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8590 invoked from network); 21 Aug 2012 20:21:14 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-6.tower-216.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Aug 2012 20:21:14 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7LKL3N1029936
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK)
	for <xen-devel@lists.xensource.com>; Tue, 21 Aug 2012 20:21:04 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7LKL2n2026118
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO)
	for <xen-devel@lists.xensource.com>; Tue, 21 Aug 2012 20:21:03 GMT
Received: from abhmt115.oracle.com (abhmt115.oracle.com [141.146.116.67])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7LKL28F014139
	for <xen-devel@lists.xensource.com>; Tue, 21 Aug 2012 15:21:02 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 21 Aug 2012 13:21:02 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id B304440357; Tue, 21 Aug 2012 16:11:03 -0400 (EDT)
MIME-Version: 1.0
X-Mercurial-Node: 74bedb086c5b72447262e087c0218b89f8bc9140
Message-Id: <74bedb086c5b72447262.1345579713@phenom.dumpdata.com>
In-Reply-To: <patchbomb.1345579710@phenom.dumpdata.com>
References: <patchbomb.1345579710@phenom.dumpdata.com>
User-Agent: Mercurial-patchbomb/1.9.3
Date: Tue, 21 Aug 2012 16:08:33 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: xen-devel@lists.xensource.com
Status: RO
Lines: 37
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Subject: [Xen-devel] [PATCH 3 of 4] xen/pagetables: Document that all of the
 initial regions are mapped
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

# HG changeset patch
# User Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
# Date 1345579709 14400
# Node ID 74bedb086c5b72447262e087c0218b89f8bc9140
# Parent  8ed3eef706710c9c476a8d984bfb2861d92bedfb
xen/pagetables: Document that all of the initial regions are mapped.

The documentation states that the layout of the initial region is
as follows:
   a. relocated kernel image
   b. initial ram disk              [mod_start, mod_len]
   c. list of allocated page frames [mfn_list, nr_pages]
      (unless relocated due to XEN_ELFNOTE_INIT_P2M)
   d. start_info_t structure        [register ESI (x86)]
   e. bootstrap page tables         [pt_base, CR3 (x86)]
   f. bootstrap stack               [register ESP (x86)]

But it does not clarify that the virtual addresses of all of
those areas are initially mapped by the bootstrap page tables
at pt_base (or CR3). Let's fix that.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

diff -r 8ed3eef70671 -r 74bedb086c5b xen/include/public/xen.h
--- a/xen/include/public/xen.h	Tue Aug 21 16:08:29 2012 -0400
+++ b/xen/include/public/xen.h	Tue Aug 21 16:08:29 2012 -0400
@@ -675,6 +675,9 @@ typedef struct shared_info shared_info_t
  *  8. There is guaranteed to be at least 512kB padding after the final
  *     bootstrap element. If necessary, the bootstrap virtual region is
  *     extended by an extra 4MB to ensure this.
+ *
+ *  NOTE: The initial virtual regions (3a -> 3f) are all mapped by the initial
+ *  pagetables [pt_base, CR3 (x86)].
  */
 
 #define MAX_GUEST_CMDLINE 1024



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 20:21:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 20:21:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3uwu-0004Fd-CH; Tue, 21 Aug 2012 20:21:16 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T3uws-0004FB-1t
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 20:21:14 +0000
Received: from [85.158.139.83:44181] by server-12.bemta-5.messagelabs.com id
	1F/E7-22359-9BDE3305; Tue, 21 Aug 2012 20:21:13 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-15.tower-182.messagelabs.com!1345580471!29249594!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDcwOTc1OQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12690 invoked from network); 21 Aug 2012 20:21:12 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-15.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 21 Aug 2012 20:21:12 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7LKL3pB030592
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK)
	for <xen-devel@lists.xensource.com>; Tue, 21 Aug 2012 20:21:04 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7LKL2Ta013777
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO)
	for <xen-devel@lists.xensource.com>; Tue, 21 Aug 2012 20:21:03 GMT
Received: from abhmt101.oracle.com (abhmt101.oracle.com [141.146.116.53])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7LKL26n015862
	for <xen-devel@lists.xensource.com>; Tue, 21 Aug 2012 15:21:02 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 21 Aug 2012 13:21:02 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id ABCB5402B8; Tue, 21 Aug 2012 16:11:03 -0400 (EDT)
MIME-Version: 1.0
X-Mercurial-Node: 8ed3eef706710c9c476a8d984bfb2861d92bedfb
Message-Id: <8ed3eef706710c9c476a.1345579712@phenom.dumpdata.com>
In-Reply-To: <patchbomb.1345579710@phenom.dumpdata.com>
References: <patchbomb.1345579710@phenom.dumpdata.com>
User-Agent: Mercurial-patchbomb/1.9.3
Date: Tue, 21 Aug 2012 16:08:32 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: xen-devel@lists.xensource.com
Status: RO
Lines: 176
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Subject: [Xen-devel] [PATCH 2 of 4] get_page_type: Print out extra
 information when failing to get page_type
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

# HG changeset patch
# User Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
# Date 1345579709 14400
# Node ID 8ed3eef706710c9c476a8d984bfb2861d92bedfb
# Parent  635917c6dac4ab8748572fcbeb3e745428684e15
get_page_type: Print out extra information when failing to get page_type.

When a call to __get_page_type fails, we get:

(XEN) mm.c:2429:d0 Bad type (saw 1400000000000002 != exp 7000000000000000) for mfn 10e392 (pfn 1bf6c)

With this patch we get some extra details, such as:
(XEN) debug.c:127:d0 cr3: 10d80b000, searching for 10e392
(XEN) debug.c:111:d0 cr3(10d80b000) has mfn(10e392) in level PUD/L3: [258][272]
(XEN) debug.c:111:d0 cr3(10d80b000) has mfn(10e392) in level PMD/L2: [258][511][511]
(XEN) debug.c:111:d0 cr3(10d80b000) has mfn(10e392) in level PGD/L4: [272]
(XEN) debug.c:111:d0 cr3(10d80b000) has mfn(10e392) in level PUD/L3: [511][511]

showing where the frame actually sits in the guest's pagetables.
This is useful because we can see where the frame is mapped, and
compare that with where the OS thinks it is.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

diff -r 635917c6dac4 -r 8ed3eef70671 xen/arch/x86/debug.c
--- a/xen/arch/x86/debug.c	Tue Aug 21 16:08:29 2012 -0400
+++ b/xen/arch/x86/debug.c	Tue Aug 21 16:08:29 2012 -0400
@@ -70,8 +70,127 @@ dbg_hvm_va2mfn(dbgva_t vaddr, struct dom
     return mfn;
 }
 
+#define LEVEL_L4 4
+#define LEVEL_L3 3
+#define LEVEL_L2 2
+#define LEVEL_L1 1
+#define UNDEFINED 0
+static void dbg_print_mfn(struct domain *dp, unsigned long mfn,
+                          int l4i, int l3i, int l2i, int l1i)
+{
+	char s[32];
+	char *p;
+	static const char *const names[] = {
+		[LEVEL_L4] = "PGD/L4",
+		[LEVEL_L3] = "PUD/L3",
+		[LEVEL_L2] = "PMD/L2",
+		[LEVEL_L1] = "PTE/L1",
+		[UNDEFINED] = "unknown",
+	};
+	unsigned level = 0;
+	p = s;
+	if (l4i >= 0) {
+		p += snprintf(p, ARRAY_SIZE(s), "[%d]", l4i);
+		level = LEVEL_L4;
+	}
+	if (l3i >= 0) {
+		p += snprintf(p, ARRAY_SIZE(s) - (p - s), "[%d]", l3i);
+		level = LEVEL_L3;
+	}
+	if (l2i >= 0) {
+		p += snprintf(p, ARRAY_SIZE(s) - (p - s), "[%d]", l2i);
+		level = LEVEL_L2;
+	}
+	if (l1i >= 0) {
+		p += snprintf(p, ARRAY_SIZE(s) - (p - s), "[%d]", l1i);
+		level = LEVEL_L1;
+	}
+	gdprintk(XENLOG_WARNING , "cr3(%lx) has mfn(%lx) in level %s: %s\n",
+			dp->vcpu[0]->arch.cr3, mfn, names[level], s);
+#undef LEVEL_L4
+#undef LEVEL_L3
+#undef LEVEL_L2
+#undef LEVEL_L1
+#undef UNDEFINED
+}
+void
+dbg_pv_mfn(unsigned long find_mfn, struct domain *dp)
+{
 #if defined(__x86_64__)
+    l4_pgentry_t l4e, *l4t;
+#endif
+    l3_pgentry_t l3e, *l3t;
+    l2_pgentry_t l2e, *l2t;
+    l1_pgentry_t l1e, *l1t;
+    unsigned long cr3 = dp->vcpu[0]->arch.cr3;
+    int l4i, l3i, l2i, l1i;
+    unsigned long mfn;
 
+	gdprintk(XENLOG_WARNING , "cr3: %lx, searching for %lx\n",
+			dp->vcpu[0]->arch.cr3, find_mfn);
+
+	l4i = l3i = l2i = l1i = 0;
+#if defined(__x86_64__)
+	for ( l4i = 0; l4i < L4_PAGETABLE_ENTRIES; l4i++ )
+	{
+
+        l4t = map_domain_page(cr3 >> PAGE_SHIFT);
+        l4e = l4t[l4i];
+        mfn = l4e_get_pfn(l4e);
+        if (mfn == find_mfn)
+            dbg_print_mfn(dp, mfn, l4i, /* L3 */-1, /* L2 */-1, /* L1 */-1);
+
+        if ( !(l4e_get_flags(l4e) & _PAGE_PRESENT) )
+            continue;
+        mfn = l4e_get_pfn(l4e);
+        unmap_domain_page(l4t);
+        l3t = map_domain_page(mfn);
+#else
+		/* 32-bit start */
+        l3t = map_domain_page(cr3 >> PAGE_SHIFT);
+        l3t += (cr3 & 0xFE0UL) >> 3;
+#endif
+        for ( l3i = 0; l3i < L3_PAGETABLE_ENTRIES; l3i++ )
+        {
+            l3e = l3t[l3i];
+            mfn = l3e_get_pfn(l3e);
+            if ( mfn == find_mfn )
+                dbg_print_mfn(dp, mfn, l4i, l3i, /* L2 */-1, /* L1 */-1);
+
+            if ( !(l3e_get_flags(l3e) & _PAGE_PRESENT) )
+                continue;
+            l2t = map_domain_page(mfn);
+            for ( l2i = 0; l2i < L2_PAGETABLE_ENTRIES; l2i++ )
+            {
+                l2e = l2t[l2i];
+                mfn = l2e_get_pfn(l2e);
+                if (mfn == find_mfn )
+                    dbg_print_mfn(dp, mfn, l4i, l3i, l2i, /* L1 */-1);
+
+                if ( !(l2e_get_flags(l2e) & _PAGE_PRESENT) ||
+                    (l2e_get_flags(l2e) & _PAGE_PSE) )
+                    continue;
+                l1t = map_domain_page(mfn);
+                for ( l1i = 0; l1i < L1_PAGETABLE_ENTRIES; l1i++ )
+                {
+                    l1e = l1t[l1i];
+                    mfn = l1e_get_pfn(l1e);
+                    if ( !mfn_valid(mfn) )
+                        continue;
+                    if ( mfn == find_mfn )
+                        dbg_print_mfn(dp, mfn, l4i, l3i, l2i, l1i);
+                }
+                unmap_domain_page(l1t);
+            }
+            unmap_domain_page(l2t);
+        }
+        unmap_domain_page(l3t);
+#if defined(__x86_64__)
+	}
+#endif
+}
+
+#if defined(__x86_64__)
 /* 
  * pgd3val: this is the value of init_mm.pgd[3] in a PV guest. It is optional.
  *          This to assist debug of modules in the guest. The kernel address 
diff -r 635917c6dac4 -r 8ed3eef70671 xen/arch/x86/mm.c
--- a/xen/arch/x86/mm.c	Tue Aug 21 16:08:29 2012 -0400
+++ b/xen/arch/x86/mm.c	Tue Aug 21 16:08:29 2012 -0400
@@ -2422,6 +2422,8 @@ static int __put_page_type(struct page_i
 }
 
 
+extern void dbg_pv_mfn(unsigned long mfn, struct domain *d);
+
 static int __get_page_type(struct page_info *page, unsigned long type,
                            int preemptible)
 {
@@ -2503,6 +2505,7 @@ static int __get_page_type(struct page_i
                     "for mfn %lx (pfn %lx)",
                     x, type, page_to_mfn(page),
                     get_gpfn_from_mfn(page_to_mfn(page)));
+            dbg_pv_mfn(page_to_mfn(page), page_get_owner(page));
             return -EINVAL;
         }
         else if ( unlikely(!(x & PGT_validated)) )



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 20:21:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 20:21:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3uwu-0004Fd-CH; Tue, 21 Aug 2012 20:21:16 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T3uws-0004FB-1t
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 20:21:14 +0000
Received: from [85.158.139.83:44181] by server-12.bemta-5.messagelabs.com id
	1F/E7-22359-9BDE3305; Tue, 21 Aug 2012 20:21:13 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-15.tower-182.messagelabs.com!1345580471!29249594!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDcwOTc1OQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12690 invoked from network); 21 Aug 2012 20:21:12 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-15.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 21 Aug 2012 20:21:12 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7LKL3pB030592
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK)
	for <xen-devel@lists.xensource.com>; Tue, 21 Aug 2012 20:21:04 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7LKL2Ta013777
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO)
	for <xen-devel@lists.xensource.com>; Tue, 21 Aug 2012 20:21:03 GMT
Received: from abhmt101.oracle.com (abhmt101.oracle.com [141.146.116.53])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7LKL26n015862
	for <xen-devel@lists.xensource.com>; Tue, 21 Aug 2012 15:21:02 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 21 Aug 2012 13:21:02 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id ABCB5402B8; Tue, 21 Aug 2012 16:11:03 -0400 (EDT)
MIME-Version: 1.0
X-Mercurial-Node: 8ed3eef706710c9c476a8d984bfb2861d92bedfb
Message-Id: <8ed3eef706710c9c476a.1345579712@phenom.dumpdata.com>
In-Reply-To: <patchbomb.1345579710@phenom.dumpdata.com>
References: <patchbomb.1345579710@phenom.dumpdata.com>
User-Agent: Mercurial-patchbomb/1.9.3
Date: Tue, 21 Aug 2012 16:08:32 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: xen-devel@lists.xensource.com
Status: RO
Lines: 176
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Subject: [Xen-devel] [PATCH 2 of 4] get_page_type: Print out extra
 information when failing to get page_type
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

# HG changeset patch
# User Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
# Date 1345579709 14400
# Node ID 8ed3eef706710c9c476a8d984bfb2861d92bedfb
# Parent  635917c6dac4ab8748572fcbeb3e745428684e15
get_page_type: Print out extra information when failing to get page_type.

When a call to __get_page_type() fails, we get:

(XEN) mm.c:2429:d0 Bad type (saw 1400000000000002 != exp 7000000000000000) for mfn 10e392 (pfn 1bf6c)

With this patch we get extra details such as:
(XEN) debug.c:127:d0 cr3: 10d80b000, searching for 10e392
(XEN) debug.c:111:d0 cr3(10d80b000) has mfn(10e392) in level PUD/L3: [258][272]
(XEN) debug.c:111:d0 cr3(10d80b000) has mfn(10e392) in level PMD/L2: [258][511][511]
(XEN) debug.c:111:d0 cr3(10d80b000) has mfn(10e392) in level PGD/L4: [272]
(XEN) debug.c:111:d0 cr3(10d80b000) has mfn(10e392) in level PUD/L3: [511][511]

showing where the MFN actually sits in the guest's pagetables. This is
useful because we can see where the hypervisor finds it, and compare that
with where the OS thinks it is.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

diff -r 635917c6dac4 -r 8ed3eef70671 xen/arch/x86/debug.c
--- a/xen/arch/x86/debug.c	Tue Aug 21 16:08:29 2012 -0400
+++ b/xen/arch/x86/debug.c	Tue Aug 21 16:08:29 2012 -0400
@@ -70,8 +70,127 @@ dbg_hvm_va2mfn(dbgva_t vaddr, struct dom
     return mfn;
 }
 
+#define LEVEL_L4 4
+#define LEVEL_L3 3
+#define LEVEL_L2 2
+#define LEVEL_L1 1
+#define UNDEFINED 0
+static void dbg_print_mfn(struct domain *dp, unsigned long mfn,
+                          int l4i, int l3i, int l2i, int l1i)
+{
+	char s[32];
+	char *p;
+	static const char *const names[] = {
+		[LEVEL_L4] = "PGD/L4",
+		[LEVEL_L3] = "PUD/L3",
+		[LEVEL_L2] = "PMD/L2",
+		[LEVEL_L1] = "PTE/L1",
+		[UNDEFINED] = "unknown",
+	};
+	unsigned level = 0;
+	p = s;
+	if (l4i >= 0) {
+		p += snprintf(p, ARRAY_SIZE(s), "[%d]", l4i);
+		level = LEVEL_L4;
+	}
+	if (l3i >= 0) {
+		p += snprintf(p, ARRAY_SIZE(s) - (p - s), "[%d]", l3i);
+		level = LEVEL_L3;
+	}
+	if (l2i >= 0) {
+		p += snprintf(p, ARRAY_SIZE(s) - (p - s), "[%d]", l2i);
+		level = LEVEL_L2;
+	}
+	if (l1i >= 0) {
+		p += snprintf(p, ARRAY_SIZE(s) - (p - s), "[%d]", l1i);
+		level = LEVEL_L1;
+	}
+	gdprintk(XENLOG_WARNING, "cr3(%lx) has mfn(%lx) in level %s: %s\n",
+			dp->vcpu[0]->arch.cr3, mfn, names[level], s);
+#undef LEVEL_L4
+#undef LEVEL_L3
+#undef LEVEL_L2
+#undef LEVEL_L1
+#undef UNDEFINED
+}
+void
+dbg_pv_mfn(unsigned long find_mfn, struct domain *dp)
+{
 #if defined(__x86_64__)
+    l4_pgentry_t l4e, *l4t;
+#endif
+    l3_pgentry_t l3e, *l3t;
+    l2_pgentry_t l2e, *l2t;
+    l1_pgentry_t l1e, *l1t;
+    unsigned long cr3 = dp->vcpu[0]->arch.cr3;
+    int l4i, l3i, l2i, l1i;
+    unsigned long mfn;
 
+	gdprintk(XENLOG_WARNING, "cr3: %lx, searching for %lx\n",
+			dp->vcpu[0]->arch.cr3, find_mfn);
+
+	l4i = l3i = l2i = l1i = 0;
+#if defined(__x86_64__)
+	for ( l4i = 0; l4i < L4_PAGETABLE_ENTRIES; l4i++ )
+	{
+
+        l4t = map_domain_page(cr3 >> PAGE_SHIFT);
+        l4e = l4t[l4i];
+        mfn = l4e_get_pfn(l4e);
+        /* Unmap now so the 'continue' below cannot leak the mapping. */
+        unmap_domain_page(l4t);
+        if ( mfn == find_mfn )
+            dbg_print_mfn(dp, mfn, l4i, /* L3 */-1, /* L2 */-1, /* L1 */-1);
+
+        if ( !(l4e_get_flags(l4e) & _PAGE_PRESENT) )
+            continue;
+        l3t = map_domain_page(mfn);
+#else
+		/* 32-bit start */
+        l3t = map_domain_page(cr3 >> PAGE_SHIFT);
+        l3t += (cr3 & 0xFE0UL) >> 3;
+#endif
+        for ( l3i = 0; l3i < L3_PAGETABLE_ENTRIES; l3i++ )
+        {
+            l3e = l3t[l3i];
+            mfn = l3e_get_pfn(l3e);
+            if ( mfn == find_mfn )
+                dbg_print_mfn(dp, mfn, l4i, l3i, /* L2 */-1, /* L1 */-1);
+
+            if ( !(l3e_get_flags(l3e) & _PAGE_PRESENT) )
+                continue;
+            l2t = map_domain_page(mfn);
+            for ( l2i = 0; l2i < L2_PAGETABLE_ENTRIES; l2i++ )
+            {
+                l2e = l2t[l2i];
+                mfn = l2e_get_pfn(l2e);
+                if ( mfn == find_mfn )
+                    dbg_print_mfn(dp, mfn, l4i, l3i, l2i, /* L1 */-1);
+
+                if ( !(l2e_get_flags(l2e) & _PAGE_PRESENT) ||
+                    (l2e_get_flags(l2e) & _PAGE_PSE) )
+                    continue;
+                l1t = map_domain_page(mfn);
+                for ( l1i = 0; l1i < L1_PAGETABLE_ENTRIES; l1i++ )
+                {
+                    l1e = l1t[l1i];
+                    mfn = l1e_get_pfn(l1e);
+                    if ( !mfn_valid(mfn) )
+                        continue;
+                    if ( mfn == find_mfn )
+                        dbg_print_mfn(dp, mfn, l4i, l3i, l2i, l1i);
+                }
+                unmap_domain_page(l1t);
+            }
+            unmap_domain_page(l2t);
+        }
+        unmap_domain_page(l3t);
+#if defined(__x86_64__)
+	}
+#endif
+}
+
+#if defined(__x86_64__)
 /* 
  * pgd3val: this is the value of init_mm.pgd[3] in a PV guest. It is optional.
  *          This to assist debug of modules in the guest. The kernel address 
diff -r 635917c6dac4 -r 8ed3eef70671 xen/arch/x86/mm.c
--- a/xen/arch/x86/mm.c	Tue Aug 21 16:08:29 2012 -0400
+++ b/xen/arch/x86/mm.c	Tue Aug 21 16:08:29 2012 -0400
@@ -2422,6 +2422,8 @@ static int __put_page_type(struct page_i
 }
 
 
+extern void dbg_pv_mfn(unsigned long mfn, struct domain *d);
+
 static int __get_page_type(struct page_info *page, unsigned long type,
                            int preemptible)
 {
@@ -2503,6 +2505,7 @@ static int __get_page_type(struct page_i
                     "for mfn %lx (pfn %lx)",
                     x, type, page_to_mfn(page),
                     get_gpfn_from_mfn(page_to_mfn(page)));
+            dbg_pv_mfn(page_to_mfn(page), page_get_owner(page));
             return -EINVAL;
         }
         else if ( unlikely(!(x & PGT_validated)) )



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 20:21:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 20:21:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3uwt-0004FP-00; Tue, 21 Aug 2012 20:21:15 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T3uwr-0004FA-BW
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 20:21:13 +0000
Received: from [85.158.138.51:38178] by server-1.bemta-3.messagelabs.com id
	62/ED-09327-8BDE3305; Tue, 21 Aug 2012 20:21:12 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-8.tower-174.messagelabs.com!1345580470!29444506!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDcwOTc1OQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30391 invoked from network); 21 Aug 2012 20:21:11 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-8.tower-174.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Aug 2012 20:21:11 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7LKL2jT030584
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK)
	for <xen-devel@lists.xensource.com>; Tue, 21 Aug 2012 20:21:03 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7LKL2eP027118
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO)
	for <xen-devel@lists.xensource.com>; Tue, 21 Aug 2012 20:21:02 GMT
Received: from abhmt106.oracle.com (abhmt106.oracle.com [141.146.116.58])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7LKL2go002738
	for <xen-devel@lists.xensource.com>; Tue, 21 Aug 2012 15:21:02 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 21 Aug 2012 13:21:02 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id AE9384032D; Tue, 21 Aug 2012 16:11:03 -0400 (EDT)
MIME-Version: 1.0
X-Mercurial-Node: 635917c6dac4ab8748572fcbeb3e745428684e15
Message-Id: <635917c6dac4ab874857.1345579711@phenom.dumpdata.com>
In-Reply-To: <patchbomb.1345579710@phenom.dumpdata.com>
References: <patchbomb.1345579710@phenom.dumpdata.com>
User-Agent: Mercurial-patchbomb/1.9.3
Date: Tue, 21 Aug 2012 16:08:31 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: xen-devel@lists.xensource.com
Status: RO
Lines: 69
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Subject: [Xen-devel] [PATCH 1 of 4] xen/vga: Add 'vga_delay' parameter to
 delay screen output by X milliseconds per line
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

# HG changeset patch
# User Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
# Date 1345579709 14400
# Node ID 635917c6dac4ab8748572fcbeb3e745428684e15
# Parent  e6ca45ca03c2e08af3a74b404166527b68fd1218
xen/vga: Add 'vga_delay' parameter to delay screen output by X milliseconds per line.

This is useful if you find yourself on a machine that has no serial console
and no PCI or PCIe slot for a serial card. Nothing really fancy, except that
it allows you to capture a screenshot of the screen using a camera.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

diff -r e6ca45ca03c2 -r 635917c6dac4 docs/misc/xen-command-line.markdown
--- a/docs/misc/xen-command-line.markdown	Mon Aug 20 08:46:47 2012 +0200
+++ b/docs/misc/xen-command-line.markdown	Tue Aug 21 16:08:29 2012 -0400
@@ -606,6 +606,15 @@ The optional `keep` parameter causes Xen
 console even after dom0 has been started.  The default behaviour is to
 relinquish control to dom0.
 
+### vga_delay
+> `= <milliseconds>`
+
+> Default: `vga_delay=0`
+
+Defines the delay, in milliseconds, taken to print each line to the screen.
+'2' is a good value for readable screen output. Note: if you need to use
+this, do so with care as it will screw up time handling.
+
 ### vpid (Intel)
 > `= <boolean>`
 
diff -r e6ca45ca03c2 -r 635917c6dac4 xen/drivers/video/vga.c
--- a/xen/drivers/video/vga.c	Mon Aug 20 08:46:47 2012 +0200
+++ b/xen/drivers/video/vga.c	Tue Aug 21 16:08:29 2012 -0400
@@ -10,7 +10,7 @@
 #include <xen/mm.h>
 #include <xen/vga.h>
 #include <asm/io.h>
-
+#include <xen/delay.h>
 /* Filled in by arch boot code. */
 struct xen_vga_console_info vga_console_info;
 
@@ -49,6 +49,15 @@ void (*vga_puts)(const char *) = vga_noo
 static char __initdata opt_vga[30] = "";
 string_param("vga", opt_vga);
 
+/*
+ * 'vga_delay=<milliseconds>' defines the delay taken to print each line
+ * to the screen. 2 is a good value for readable screen output.
+ * NOTE: If you need to use this, do so with care as it will screw up
+ * time handling.
+ */
+static unsigned int __read_mostly vga_delay;
+integer_param("vga_delay", vga_delay);
+
 /* VGA text-mode definitions. */
 static unsigned int columns, lines;
 #define ATTRIBUTE   7
@@ -135,6 +144,7 @@ static void vga_text_puts(const char *s)
                 ypos = lines - 1;
                 memmove(video, video + 2 * columns, ypos * 2 * columns);
                 memset(video + ypos * 2 * columns, 0, 2 * xpos);
+                mdelay(vga_delay);
             }
             xpos = 0;
         }



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 20:21:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 20:21:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3uwu-0004Fk-P6; Tue, 21 Aug 2012 20:21:16 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T3uws-0004FG-Na
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 20:21:14 +0000
Received: from [85.158.143.99:45018] by server-2.bemta-4.messagelabs.com id
	B1/9F-21239-ABDE3305; Tue, 21 Aug 2012 20:21:14 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-14.tower-216.messagelabs.com!1345580471!19723028!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDcwOTc1OQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31631 invoked from network); 21 Aug 2012 20:21:12 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-14.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 21 Aug 2012 20:21:12 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7LKL3Bf030591
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK)
	for <xen-devel@lists.xensource.com>; Tue, 21 Aug 2012 20:21:04 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7LKL2af013778
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO)
	for <xen-devel@lists.xensource.com>; Tue, 21 Aug 2012 20:21:03 GMT
Received: from abhmt107.oracle.com (abhmt107.oracle.com [141.146.116.59])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7LKL2GH002739
	for <xen-devel@lists.xensource.com>; Tue, 21 Aug 2012 15:21:02 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 21 Aug 2012 13:21:02 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id BBBE64035A; Tue, 21 Aug 2012 16:11:03 -0400 (EDT)
MIME-Version: 1.0
X-Mercurial-Node: b992f8e613a3401b9ddd140ce436c840d412beb7
Message-Id: <b992f8e613a3401b9ddd.1345579714@phenom.dumpdata.com>
In-Reply-To: <patchbomb.1345579710@phenom.dumpdata.com>
References: <patchbomb.1345579710@phenom.dumpdata.com>
User-Agent: Mercurial-patchbomb/1.9.3
Date: Tue, 21 Aug 2012 16:08:34 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: xen-devel@lists.xensource.com
Status: RO
Lines: 37
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Subject: [Xen-devel] [PATCH 4 of 4] xen/pagetables: Document pt_base
 inconsistency when running in COMPAT mode
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

# HG changeset patch
# User Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
# Date 1345579709 14400
# Node ID b992f8e613a3401b9ddd140ce436c840d412beb7
# Parent  74bedb086c5b72447262e087c0218b89f8bc9140
xen/pagetables: Document pt_base inconsistency when running in COMPAT mode.

c/s 13257 added the COMPAT mode wherein a 64-bit hypervisor can
run with a 32-bit initial domain. One of the things it added was
that the pt_base is offset by two pages. This inconsistency
is only present in COMPAT mode, so let's document it.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

diff -r 74bedb086c5b -r b992f8e613a3 xen/include/public/xen.h
--- a/xen/include/public/xen.h	Tue Aug 21 16:08:29 2012 -0400
+++ b/xen/include/public/xen.h	Tue Aug 21 16:08:29 2012 -0400
@@ -663,7 +663,7 @@ typedef struct shared_info shared_info_t
  *      c. list of allocated page frames [mfn_list, nr_pages]
  *         (unless relocated due to XEN_ELFNOTE_INIT_P2M)
  *      d. start_info_t structure        [register ESI (x86)]
- *      e. bootstrap page tables         [pt_base, CR3 (x86)]
+ *      e. bootstrap page tables         [pt_base, CR3 (x86)] *1
  *      f. bootstrap stack               [register ESP (x86)]
  *  4. Bootstrap elements are packed together, but each is 4kB-aligned.
  *  5. The initial ram disk may be omitted.
@@ -678,6 +678,9 @@ typedef struct shared_info shared_info_t
  *
  *  NOTE: The initial virtual region (3a -> 3f) are all mapped by the initial
  *  pagetables [pt_base, CR3 (x86)].
+ *
+ *  *1: When booting under a 64-bit hypervisor with a 32-bit initial domain
+ *  pt_base is offset by two pages (pt_base += PAGE_SIZE*2).
  */
 
 #define MAX_GUEST_CMDLINE 1024



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 20:21:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 20:21:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3uwu-0004Fk-P6; Tue, 21 Aug 2012 20:21:16 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T3uws-0004FG-Na
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 20:21:14 +0000
Received: from [85.158.143.99:45018] by server-2.bemta-4.messagelabs.com id
	B1/9F-21239-ABDE3305; Tue, 21 Aug 2012 20:21:14 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-14.tower-216.messagelabs.com!1345580471!19723028!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDcwOTc1OQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31631 invoked from network); 21 Aug 2012 20:21:12 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-14.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 21 Aug 2012 20:21:12 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7LKL3Bf030591
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK)
	for <xen-devel@lists.xensource.com>; Tue, 21 Aug 2012 20:21:04 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7LKL2af013778
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO)
	for <xen-devel@lists.xensource.com>; Tue, 21 Aug 2012 20:21:03 GMT
Received: from abhmt107.oracle.com (abhmt107.oracle.com [141.146.116.59])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7LKL2GH002739
	for <xen-devel@lists.xensource.com>; Tue, 21 Aug 2012 15:21:02 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 21 Aug 2012 13:21:02 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id BBBE64035A; Tue, 21 Aug 2012 16:11:03 -0400 (EDT)
MIME-Version: 1.0
X-Mercurial-Node: b992f8e613a3401b9ddd140ce436c840d412beb7
Message-Id: <b992f8e613a3401b9ddd.1345579714@phenom.dumpdata.com>
In-Reply-To: <patchbomb.1345579710@phenom.dumpdata.com>
References: <patchbomb.1345579710@phenom.dumpdata.com>
User-Agent: Mercurial-patchbomb/1.9.3
Date: Tue, 21 Aug 2012 16:08:34 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: xen-devel@lists.xensource.com
Status: RO
Lines: 37
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Subject: [Xen-devel] [PATCH 4 of 4] xen/pagetables: Document pt_base
 inconsistency when running in COMPAT mode
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

# HG changeset patch
# User Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
# Date 1345579709 14400
# Node ID b992f8e613a3401b9ddd140ce436c840d412beb7
# Parent  74bedb086c5b72447262e087c0218b89f8bc9140
xen/pagetables: Document pt_base inconsistency when running in COMPAT mode.

c/s 13257 added the COMPAT mode, wherein a 64-bit hypervisor can
run with a 32-bit initial domain. One of the things it added is
that pt_base is offset by two pages. This inconsistency is present
only in COMPAT mode, so let's document it.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

diff -r 74bedb086c5b -r b992f8e613a3 xen/include/public/xen.h
--- a/xen/include/public/xen.h	Tue Aug 21 16:08:29 2012 -0400
+++ b/xen/include/public/xen.h	Tue Aug 21 16:08:29 2012 -0400
@@ -663,7 +663,7 @@ typedef struct shared_info shared_info_t
  *      c. list of allocated page frames [mfn_list, nr_pages]
  *         (unless relocated due to XEN_ELFNOTE_INIT_P2M)
  *      d. start_info_t structure        [register ESI (x86)]
- *      e. bootstrap page tables         [pt_base, CR3 (x86)]
+ *      e. bootstrap page tables         [pt_base, CR3 (x86)] *1
  *      f. bootstrap stack               [register ESP (x86)]
  *  4. Bootstrap elements are packed together, but each is 4kB-aligned.
  *  5. The initial ram disk may be omitted.
@@ -678,6 +678,9 @@ typedef struct shared_info shared_info_t
  *
  *  NOTE: The initial virtual region (3a -> 3f) are all mapped by the initial
  *  pagetables [pt_base, CR3 (x86)].
+ *
+ *  *1: When booting under a 64-bit hypervisor with a 32-bit initial domain,
+ *  pt_base is offset by two pages (pt_base += PAGE_SIZE*2).
  */
 
 #define MAX_GUEST_CMDLINE 1024
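
As a hedged sketch (not kernel code), the *1 footnote above amounts to the
following adjustment: a 32-bit initial domain booted under a 64-bit
hypervisor (COMPAT mode) finds its bootstrap page tables two pages past the
advertised pt_base. The function name `effective_pt_base` and the `compat`
flag are hypothetical, introduced only for illustration.

```c
#include <assert.h>

#define PAGE_SIZE 4096UL

/* Illustrative only: returns where the bootstrap page tables actually
 * start, given the pt_base advertised in start_info. */
static unsigned long effective_pt_base(unsigned long pt_base, int compat)
{
    /* COMPAT mode: pt_base += PAGE_SIZE*2, per the comment above. */
    return compat ? pt_base + 2 * PAGE_SIZE : pt_base;
}
```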



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 20:21:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 20:21:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3uxG-0004LE-K8; Tue, 21 Aug 2012 20:21:38 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T3uxF-0004Kd-5d
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 20:21:37 +0000
Received: from [85.158.143.35:27002] by server-3.bemta-4.messagelabs.com id
	42/AE-09529-0DDE3305; Tue, 21 Aug 2012 20:21:36 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1345580494!15336458!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNzAyMzU2\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5175 invoked from network); 21 Aug 2012 20:21:36 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-16.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Aug 2012 20:21:36 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7LKL30b029941
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK)
	for <xen-devel@lists.xensource.com>; Tue, 21 Aug 2012 20:21:04 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7LKL2Lu013776
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO)
	for <xen-devel@lists.xensource.com>; Tue, 21 Aug 2012 20:21:03 GMT
Received: from abhmt111.oracle.com (abhmt111.oracle.com [141.146.116.63])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7LKL2YC014136
	for <xen-devel@lists.xensource.com>; Tue, 21 Aug 2012 15:21:02 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 21 Aug 2012 13:21:02 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id A35B04032C; Tue, 21 Aug 2012 16:11:03 -0400 (EDT)
MIME-Version: 1.0
Message-Id: <patchbomb.1345579710@phenom.dumpdata.com>
User-Agent: Mercurial-patchbomb/1.9.3
Date: Tue, 21 Aug 2012 16:08:30 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: xen-devel@lists.xensource.com
Status: RO
Lines: 8
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Subject: [Xen-devel] [PATCH 0 of 4] [RFC PATCH] Various docs and in-the-field
	help patches.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

A couple of patches that I've had in my hg tree
and neglected to send out. The vga_delay one I posted
some time ago and have since fixed per review comments.

The other ones are newer and pertain to documentation and
to making it easier in the field to figure out what is going
wrong with a guest.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 20:36:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 20:36:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3vBW-0004zj-CW; Tue, 21 Aug 2012 20:36:22 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <attilio.rao@citrix.com>) id 1T3vBT-0004y6-RC
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 20:36:20 +0000
X-Env-Sender: attilio.rao@citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1345581373!2373277!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk4OTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14386 invoked from network); 21 Aug 2012 20:36:13 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Aug 2012 20:36:13 -0000
X-IronPort-AV: E=Sophos;i="4.77,804,1336348800"; d="scan'208";a="14113783"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Aug 2012 20:36:10 +0000
Received: from dhcp-3-145.uk.xensource.com (10.80.3.221) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 21 Aug 2012 21:36:10 +0100
From: Attilio Rao <attilio.rao@citrix.com>
To: <konrad.wilk@oracle.com>, <Ian.Campbell@citrix.com>,
	<Stefano.Stabellini@eu.citrix.com>, <mingo@redhat.com>, <hpa@zytor.com>,
	<tglx@linutronix.de>, <linux-kernel@vger.kernel.org>, <x86@kernel.org>, 
	<xen-devel@lists.xensource.com>
Date: Tue, 21 Aug 2012 21:22:36 +0100
Message-ID: <1345580561-8506-1-git-send-email-attilio.rao@citrix.com>
X-Mailer: git-send-email 1.7.2.5
MIME-Version: 1.0
Cc: Attilio Rao <attilio.rao@citrix.com>
Subject: [Xen-devel] [PATCH v2 0/5] X86/XEN: Merge
	x86_init.paging.pagetable_setup_start and
	x86_init.paging.pagetable_setup_done setup functions and
	document its semantic
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Currently the definitions of x86_init.paging.pagetable_setup_start and
x86_init.paging.pagetable_setup_done are convoluted and not really well
defined (in terms of the desired prototypes). More specifically:
pagetable_setup_start:
 * cleans up the boot time page table in the x86_32 case
 * it is a nop for the XEN case
 * it is a nop on x86_64

pagetable_setup_done:
 * it is a nop on x86_32
 * sets up accessor functions for pagetable manipulation, for the
   XEN case
 * it is a nop on x86_64

Most of this logic can be avoided by creating a single setup function
that handles pagetable setup together with the pre/post operations on it.
This means the above-mentioned functions will be removed and only one
will be used for the whole operation.
The new function must be called only once, during boot-time setup and
after the direct mapping for physical memory is available.
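
The merge described above can be sketched as follows. This is a hedged,
standalone illustration, not the series' actual code: the hook name
`pagetable_init` mirrors the series, but the struct contents, the
`native_pagetable_init` body, and the `pagetable_inited` counter are
invented here for demonstration.

```c
#include <assert.h>

/* Single hook replacing pagetable_setup_start()/pagetable_setup_done(). */
struct x86_init_paging {
    void (*pagetable_init)(void);
};

static int pagetable_inited;

static void native_pagetable_init(void)
{
    /* Would clean up the boot-time page tables (the old _setup_start
     * work), build the kernel page tables, and run any post-setup
     * fixups (the old _setup_done work), all in one place. */
    pagetable_inited++;
}

static struct x86_init_paging paging_hooks = { native_pagetable_init };

static void setup_arch_paging(void)
{
    /* One call, made once, after the direct mapping is available. */
    paging_hooks.pagetable_init();
}
```

A platform (e.g. Xen) would then override just the one function pointer
instead of coordinating two separate start/done callbacks.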

Differences with v1:
- The patch series is re-arranged in a way that helps review, following
  a plan by Thomas Gleixner
- The PVOPS nomenclature is not used, as it is not correct
- The front-end message is adjusted per feedback from Thomas Gleixner,
  Stefano Stabellini and Konrad Rzeszutek Wilk


Attilio Rao (5):
  X86/XEN: Remove the base argument from
    x86_init.paging.pagetable_setup_start
  X86/XEN: Rename pagetable_setup_start() setup functions into
    pagetable_init()
  X86/XEN: Allow setup function x86_init.paging.pagetable_init to setup
    kernel pagetables
  X86/XEN: Move content of xen_pagetable_setup_done() into
    xen_pagetable_init() and retire now unused
    x86_init.paging.pagetable_setup_done
  X86/XEN: Add few lines explaining simple semantic for
    x86_init.paging.pagetable_init setup function

 arch/x86/include/asm/pgtable_types.h |    6 ++----
 arch/x86/include/asm/x86_init.h      |   11 +++++++----
 arch/x86/kernel/setup.c              |    4 +---
 arch/x86/kernel/x86_init.c           |    4 +---
 arch/x86/mm/init_32.c                |   11 ++++-------
 arch/x86/xen/mmu.c                   |   18 +++++++-----------
 6 files changed, 22 insertions(+), 32 deletions(-)

-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 20:36:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 20:36:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3vBV-0004zX-Vh; Tue, 21 Aug 2012 20:36:21 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <attilio.rao@citrix.com>) id 1T3vBT-0004y7-RA
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 20:36:20 +0000
X-Env-Sender: attilio.rao@citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1345581373!2373277!2
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk4OTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14391 invoked from network); 21 Aug 2012 20:36:13 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Aug 2012 20:36:13 -0000
X-IronPort-AV: E=Sophos;i="4.77,804,1336348800"; d="scan'208";a="14113784"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Aug 2012 20:36:10 +0000
Received: from dhcp-3-145.uk.xensource.com (10.80.3.221) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 21 Aug 2012 21:36:10 +0100
From: Attilio Rao <attilio.rao@citrix.com>
To: <konrad.wilk@oracle.com>, <Ian.Campbell@citrix.com>,
	<Stefano.Stabellini@eu.citrix.com>, <mingo@redhat.com>, <hpa@zytor.com>,
	<tglx@linutronix.de>, <linux-kernel@vger.kernel.org>, <x86@kernel.org>, 
	<xen-devel@lists.xensource.com>
Date: Tue, 21 Aug 2012 21:22:37 +0100
Message-ID: <1345580561-8506-2-git-send-email-attilio.rao@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1345580561-8506-1-git-send-email-attilio.rao@citrix.com>
References: <1345580561-8506-1-git-send-email-attilio.rao@citrix.com>
MIME-Version: 1.0
Cc: Attilio Rao <attilio.rao@citrix.com>
Subject: [Xen-devel] [PATCH v2 1/5] X86/XEN: Remove the base argument from
	x86_init.paging.pagetable_setup_start
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The native x86_init.paging.pagetable_setup_start uses swapper_pg_dir in
the single place where the argument is referenced, and elsewhere the
argument is simply unused. Additionally, the comments already point to
swapper_pg_dir as the sole base touched.
Finally, this will help with the further merging of
x86_init.paging.pagetable_setup_start with
x86_init.paging.pagetable_setup_done.

Signed-off-by: Attilio Rao <attilio.rao@citrix.com>
---
 arch/x86/include/asm/pgtable_types.h |    6 +++---
 arch/x86/include/asm/x86_init.h      |    2 +-
 arch/x86/kernel/setup.c              |    2 +-
 arch/x86/kernel/x86_init.c           |    3 ++-
 arch/x86/mm/init_32.c                |    4 ++--
 arch/x86/xen/mmu.c                   |    2 +-
 6 files changed, 10 insertions(+), 9 deletions(-)

diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index 013286a..e02b875 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -303,11 +303,11 @@ void set_pte_vaddr(unsigned long vaddr, pte_t pte);
 
 extern void native_pagetable_reserve(u64 start, u64 end);
 #ifdef CONFIG_X86_32
-extern void native_pagetable_setup_start(pgd_t *base);
+extern void native_pagetable_setup_start(void);
 extern void native_pagetable_setup_done(pgd_t *base);
 #else
-#define native_pagetable_setup_start x86_init_pgd_noop
-#define native_pagetable_setup_done  x86_init_pgd_noop
+#define native_pagetable_setup_start x86_init_pgd_start_noop
+#define native_pagetable_setup_done  x86_init_pgd_done_noop
 #endif
 
 struct seq_file;
diff --git a/arch/x86/include/asm/x86_init.h b/arch/x86/include/asm/x86_init.h
index 38155f6..782ba0c 100644
--- a/arch/x86/include/asm/x86_init.h
+++ b/arch/x86/include/asm/x86_init.h
@@ -85,7 +85,7 @@ struct x86_init_mapping {
  * @pagetable_setup_done:	platform specific post paging_init() call
  */
 struct x86_init_paging {
-	void (*pagetable_setup_start)(pgd_t *base);
+	void (*pagetable_setup_start)(void);
 	void (*pagetable_setup_done)(pgd_t *base);
 };
 
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index f4b9b80..90cbbe0 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -961,7 +961,7 @@ void __init setup_arch(char **cmdline_p)
 	kvmclock_init();
 #endif
 
-	x86_init.paging.pagetable_setup_start(swapper_pg_dir);
+	x86_init.paging.pagetable_setup_start();
 	paging_init();
 	x86_init.paging.pagetable_setup_done(swapper_pg_dir);
 
diff --git a/arch/x86/kernel/x86_init.c b/arch/x86/kernel/x86_init.c
index 9f3167e..3b88493 100644
--- a/arch/x86/kernel/x86_init.c
+++ b/arch/x86/kernel/x86_init.c
@@ -26,7 +26,8 @@
 
 void __cpuinit x86_init_noop(void) { }
 void __init x86_init_uint_noop(unsigned int unused) { }
-void __init x86_init_pgd_noop(pgd_t *unused) { }
+void __init x86_init_pgd_start_noop(void) { }
+void __init x86_init_pgd_done_noop(pgd_t *unused) { }
 int __init iommu_init_noop(void) { return 0; }
 void iommu_shutdown_noop(void) { }
 
diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c
index 575d86f..c4aa1b2 100644
--- a/arch/x86/mm/init_32.c
+++ b/arch/x86/mm/init_32.c
@@ -445,10 +445,10 @@ static inline void permanent_kmaps_init(pgd_t *pgd_base)
 }
 #endif /* CONFIG_HIGHMEM */
 
-void __init native_pagetable_setup_start(pgd_t *base)
+void __init native_pagetable_setup_start(void)
 {
 	unsigned long pfn, va;
-	pgd_t *pgd;
+	pgd_t *pgd, *base = swapper_pg_dir;
 	pud_t *pud;
 	pmd_t *pmd;
 	pte_t *pte;
diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index b65a761..d89ea5c 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -1174,7 +1174,7 @@ static void xen_exit_mmap(struct mm_struct *mm)
 	spin_unlock(&mm->page_table_lock);
 }
 
-static void __init xen_pagetable_setup_start(pgd_t *base)
+static void __init xen_pagetable_setup_start(void)
 {
 }
 
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 20:36:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 20:36:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3vBQ-0004yK-2Q; Tue, 21 Aug 2012 20:36:16 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <attilio.rao@citrix.com>) id 1T3vBP-0004yB-Dq
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 20:36:15 +0000
Received: from [85.158.138.51:47452] by server-9.bemta-3.messagelabs.com id
	8A/38-23952-E31F3305; Tue, 21 Aug 2012 20:36:14 +0000
X-Env-Sender: attilio.rao@citrix.com
X-Msg-Ref: server-8.tower-174.messagelabs.com!1345581373!29445968!2
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk4OTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26647 invoked from network); 21 Aug 2012 20:36:14 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Aug 2012 20:36:14 -0000
X-IronPort-AV: E=Sophos;i="4.77,804,1336348800"; d="scan'208";a="14113788"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
From xen-devel-bounces@lists.xen.org Tue Aug 21 20:36:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 20:36:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3vBV-0004zX-Vh; Tue, 21 Aug 2012 20:36:21 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <attilio.rao@citrix.com>) id 1T3vBT-0004y7-RA
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 20:36:20 +0000
X-Env-Sender: attilio.rao@citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1345581373!2373277!2
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk4OTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14391 invoked from network); 21 Aug 2012 20:36:13 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Aug 2012 20:36:13 -0000
X-IronPort-AV: E=Sophos;i="4.77,804,1336348800"; d="scan'208";a="14113784"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Aug 2012 20:36:10 +0000
Received: from dhcp-3-145.uk.xensource.com (10.80.3.221) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 21 Aug 2012 21:36:10 +0100
From: Attilio Rao <attilio.rao@citrix.com>
To: <konrad.wilk@oracle.com>, <Ian.Campbell@citrix.com>,
	<Stefano.Stabellini@eu.citrix.com>, <mingo@redhat.com>, <hpa@zytor.com>,
	<tglx@linutronix.de>, <linux-kernel@vger.kernel.org>, <x86@kernel.org>, 
	<xen-devel@lists.xensource.com>
Date: Tue, 21 Aug 2012 21:22:37 +0100
Message-ID: <1345580561-8506-2-git-send-email-attilio.rao@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1345580561-8506-1-git-send-email-attilio.rao@citrix.com>
References: <1345580561-8506-1-git-send-email-attilio.rao@citrix.com>
MIME-Version: 1.0
Cc: Attilio Rao <attilio.rao@citrix.com>
Subject: [Xen-devel] [PATCH v2 1/5] X86/XEN: Remove the base argument from
	x86_init.paging.pagetable_setup_start
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The argument passed to x86_init.paging.pagetable_setup_start is always
swapper_pg_dir, so the native implementation can use swapper_pg_dir directly
in the single place where the argument is used, while the Xen implementation
leaves the argument entirely unused. Additionally, the comments already point
to swapper_pg_dir as the sole base touched.
Finally, this will help with further merging of
x86_init.paging.pagetable_setup_start with
x86_init.paging.pagetable_setup_done.

Signed-off-by: Attilio Rao <attilio.rao@citrix.com>
---
 arch/x86/include/asm/pgtable_types.h |    6 +++---
 arch/x86/include/asm/x86_init.h      |    2 +-
 arch/x86/kernel/setup.c              |    2 +-
 arch/x86/kernel/x86_init.c           |    3 ++-
 arch/x86/mm/init_32.c                |    4 ++--
 arch/x86/xen/mmu.c                   |    2 +-
 6 files changed, 10 insertions(+), 9 deletions(-)

diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index 013286a..e02b875 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -303,11 +303,11 @@ void set_pte_vaddr(unsigned long vaddr, pte_t pte);
 
 extern void native_pagetable_reserve(u64 start, u64 end);
 #ifdef CONFIG_X86_32
-extern void native_pagetable_setup_start(pgd_t *base);
+extern void native_pagetable_setup_start(void);
 extern void native_pagetable_setup_done(pgd_t *base);
 #else
-#define native_pagetable_setup_start x86_init_pgd_noop
-#define native_pagetable_setup_done  x86_init_pgd_noop
+#define native_pagetable_setup_start x86_init_pgd_start_noop
+#define native_pagetable_setup_done  x86_init_pgd_done_noop
 #endif
 
 struct seq_file;
diff --git a/arch/x86/include/asm/x86_init.h b/arch/x86/include/asm/x86_init.h
index 38155f6..782ba0c 100644
--- a/arch/x86/include/asm/x86_init.h
+++ b/arch/x86/include/asm/x86_init.h
@@ -85,7 +85,7 @@ struct x86_init_mapping {
  * @pagetable_setup_done:	platform specific post paging_init() call
  */
 struct x86_init_paging {
-	void (*pagetable_setup_start)(pgd_t *base);
+	void (*pagetable_setup_start)(void);
 	void (*pagetable_setup_done)(pgd_t *base);
 };
 
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index f4b9b80..90cbbe0 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -961,7 +961,7 @@ void __init setup_arch(char **cmdline_p)
 	kvmclock_init();
 #endif
 
-	x86_init.paging.pagetable_setup_start(swapper_pg_dir);
+	x86_init.paging.pagetable_setup_start();
 	paging_init();
 	x86_init.paging.pagetable_setup_done(swapper_pg_dir);
 
diff --git a/arch/x86/kernel/x86_init.c b/arch/x86/kernel/x86_init.c
index 9f3167e..3b88493 100644
--- a/arch/x86/kernel/x86_init.c
+++ b/arch/x86/kernel/x86_init.c
@@ -26,7 +26,8 @@
 
 void __cpuinit x86_init_noop(void) { }
 void __init x86_init_uint_noop(unsigned int unused) { }
-void __init x86_init_pgd_noop(pgd_t *unused) { }
+void __init x86_init_pgd_start_noop(void) { }
+void __init x86_init_pgd_done_noop(pgd_t *unused) { }
 int __init iommu_init_noop(void) { return 0; }
 void iommu_shutdown_noop(void) { }
 
diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c
index 575d86f..c4aa1b2 100644
--- a/arch/x86/mm/init_32.c
+++ b/arch/x86/mm/init_32.c
@@ -445,10 +445,10 @@ static inline void permanent_kmaps_init(pgd_t *pgd_base)
 }
 #endif /* CONFIG_HIGHMEM */
 
-void __init native_pagetable_setup_start(pgd_t *base)
+void __init native_pagetable_setup_start(void)
 {
 	unsigned long pfn, va;
-	pgd_t *pgd;
+	pgd_t *pgd, *base = swapper_pg_dir;
 	pud_t *pud;
 	pmd_t *pmd;
 	pte_t *pte;
diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index b65a761..d89ea5c 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -1174,7 +1174,7 @@ static void xen_exit_mmap(struct mm_struct *mm)
 	spin_unlock(&mm->page_table_lock);
 }
 
-static void __init xen_pagetable_setup_start(pgd_t *base)
+static void __init xen_pagetable_setup_start(void)
 {
 }
 
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 20:36:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 20:36:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3vBQ-0004yK-2Q; Tue, 21 Aug 2012 20:36:16 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <attilio.rao@citrix.com>) id 1T3vBP-0004yB-Dq
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 20:36:15 +0000
Received: from [85.158.138.51:47452] by server-9.bemta-3.messagelabs.com id
	8A/38-23952-E31F3305; Tue, 21 Aug 2012 20:36:14 +0000
X-Env-Sender: attilio.rao@citrix.com
X-Msg-Ref: server-8.tower-174.messagelabs.com!1345581373!29445968!2
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk4OTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26647 invoked from network); 21 Aug 2012 20:36:14 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Aug 2012 20:36:14 -0000
X-IronPort-AV: E=Sophos;i="4.77,804,1336348800"; d="scan'208";a="14113788"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Aug 2012 20:36:11 +0000
Received: from dhcp-3-145.uk.xensource.com (10.80.3.221) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 21 Aug 2012 21:36:12 +0100
From: Attilio Rao <attilio.rao@citrix.com>
To: <konrad.wilk@oracle.com>, <Ian.Campbell@citrix.com>,
	<Stefano.Stabellini@eu.citrix.com>, <mingo@redhat.com>, <hpa@zytor.com>,
	<tglx@linutronix.de>, <linux-kernel@vger.kernel.org>, <x86@kernel.org>, 
	<xen-devel@lists.xensource.com>
Date: Tue, 21 Aug 2012 21:22:41 +0100
Message-ID: <1345580561-8506-6-git-send-email-attilio.rao@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1345580561-8506-1-git-send-email-attilio.rao@citrix.com>
References: <1345580561-8506-1-git-send-email-attilio.rao@citrix.com>
MIME-Version: 1.0
Cc: Attilio Rao <attilio.rao@citrix.com>
Subject: [Xen-devel] [PATCH v2 5/5] X86/XEN: Add few lines explaining simple
	semantic for x86_init.paging.pagetable_init setup function
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

- Explain the purpose of the hook
- Report execution constraints

Signed-off-by: Attilio Rao <attilio.rao@citrix.com>
---
 arch/x86/include/asm/x86_init.h |    5 +++++
 1 files changed, 5 insertions(+), 0 deletions(-)

diff --git a/arch/x86/include/asm/x86_init.h b/arch/x86/include/asm/x86_init.h
index 995ea5c..7ea4186 100644
--- a/arch/x86/include/asm/x86_init.h
+++ b/arch/x86/include/asm/x86_init.h
@@ -82,6 +82,11 @@ struct x86_init_mapping {
 /**
  * struct x86_init_paging - platform specific paging functions
  * @pagetable_init:		platform specific paging initialization call
+ *
+ * It sets up the kernel pagetables and prepares accessor functions to
+ * manipulate them.
+ * It must be called once during the boot sequence, after the direct
+ * mapping for physical memory is set up.
  */
 struct x86_init_paging {
 	void (*pagetable_init)(void);
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 20:36:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 20:36:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3vBX-000503-3x; Tue, 21 Aug 2012 20:36:23 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <attilio.rao@citrix.com>) id 1T3vBU-0004y9-5b
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 20:36:20 +0000
X-Env-Sender: attilio.rao@citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1345581373!2373277!4
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk4OTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14417 invoked from network); 21 Aug 2012 20:36:14 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Aug 2012 20:36:14 -0000
X-IronPort-AV: E=Sophos;i="4.77,804,1336348800"; d="scan'208";a="14113787"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Aug 2012 20:36:11 +0000
Received: from dhcp-3-145.uk.xensource.com (10.80.3.221) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 21 Aug 2012 21:36:11 +0100
From: Attilio Rao <attilio.rao@citrix.com>
To: <konrad.wilk@oracle.com>, <Ian.Campbell@citrix.com>,
	<Stefano.Stabellini@eu.citrix.com>, <mingo@redhat.com>, <hpa@zytor.com>,
	<tglx@linutronix.de>, <linux-kernel@vger.kernel.org>, <x86@kernel.org>, 
	<xen-devel@lists.xensource.com>
Date: Tue, 21 Aug 2012 21:22:40 +0100
Message-ID: <1345580561-8506-5-git-send-email-attilio.rao@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1345580561-8506-1-git-send-email-attilio.rao@citrix.com>
References: <1345580561-8506-1-git-send-email-attilio.rao@citrix.com>
MIME-Version: 1.0
Cc: Attilio Rao <attilio.rao@citrix.com>
Subject: [Xen-devel] [PATCH v2 4/5] X86/XEN: Move content of
	xen_pagetable_setup_done() into xen_pagetable_init() and
	retire now unused x86_init.paging.pagetable_setup_done
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At this stage x86_init.paging.pagetable_setup_done is only used in the
Xen case. Move its content into the x86_init.paging.pagetable_init setup
function and remove the remaining, now unused
x86_init.paging.pagetable_setup_done infrastructure.

Signed-off-by: Attilio Rao <attilio.rao@citrix.com>
---
 arch/x86/include/asm/pgtable_types.h |    2 --
 arch/x86/include/asm/x86_init.h      |    2 --
 arch/x86/kernel/setup.c              |    1 -
 arch/x86/kernel/x86_init.c           |    2 --
 arch/x86/mm/init_32.c                |    4 ----
 arch/x86/xen/mmu.c                   |   13 ++++---------
 6 files changed, 4 insertions(+), 20 deletions(-)

diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index c93cb8e..db8fec6 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -304,10 +304,8 @@ void set_pte_vaddr(unsigned long vaddr, pte_t pte);
 extern void native_pagetable_reserve(u64 start, u64 end);
 #ifdef CONFIG_X86_32
 extern void native_pagetable_init(void);
-extern void native_pagetable_setup_done(pgd_t *base);
 #else
 #define native_pagetable_init        paging_init
-#define native_pagetable_setup_done  x86_init_pgd_done_noop
 #endif
 
 struct seq_file;
diff --git a/arch/x86/include/asm/x86_init.h b/arch/x86/include/asm/x86_init.h
index 24084b2..995ea5c 100644
--- a/arch/x86/include/asm/x86_init.h
+++ b/arch/x86/include/asm/x86_init.h
@@ -82,11 +82,9 @@ struct x86_init_mapping {
 /**
  * struct x86_init_paging - platform specific paging functions
  * @pagetable_init:		platform specific paging initialization call
- * @pagetable_setup_done:	platform specific post paging_init() call
  */
 struct x86_init_paging {
 	void (*pagetable_init)(void);
-	void (*pagetable_setup_done)(pgd_t *base);
 };
 
 /**
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index 315fd24..4f16547 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -962,7 +962,6 @@ void __init setup_arch(char **cmdline_p)
 #endif
 
 	x86_init.paging.pagetable_init();
-	x86_init.paging.pagetable_setup_done(swapper_pg_dir);
 
 	if (boot_cpu_data.cpuid_level >= 0) {
 		/* A CPU has %cr4 if and only if it has CPUID */
diff --git a/arch/x86/kernel/x86_init.c b/arch/x86/kernel/x86_init.c
index 5f2478f..7a3d075 100644
--- a/arch/x86/kernel/x86_init.c
+++ b/arch/x86/kernel/x86_init.c
@@ -26,7 +26,6 @@
 
 void __cpuinit x86_init_noop(void) { }
 void __init x86_init_uint_noop(unsigned int unused) { }
-void __init x86_init_pgd_done_noop(pgd_t *unused) { }
 int __init iommu_init_noop(void) { return 0; }
 void iommu_shutdown_noop(void) { }
 
@@ -69,7 +68,6 @@ struct x86_init_ops x86_init __initdata = {
 
 	.paging = {
 		.pagetable_init		= native_pagetable_init,
-		.pagetable_setup_done	= native_pagetable_setup_done,
 	},
 
 	.timers = {
diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c
index e35b4b1..4f04db1 100644
--- a/arch/x86/mm/init_32.c
+++ b/arch/x86/mm/init_32.c
@@ -478,10 +478,6 @@ void __init native_pagetable_init(void)
 	paging_init();
 }
 
-void __init native_pagetable_setup_done(pgd_t *base)
-{
-}
-
 /*
  * Build a proper pagetable for the kernel mappings.  Up until this
  * point, we've been running on some set of pagetables constructed by
diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index 4f47b87..4290d83 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -1174,9 +1174,13 @@ static void xen_exit_mmap(struct mm_struct *mm)
 	spin_unlock(&mm->page_table_lock);
 }
 
+static void xen_post_allocator_init(void);
+
 static void __init xen_pagetable_init(void)
 {
 	paging_init();
+	xen_setup_shared_info();
+	xen_post_allocator_init();
 }
 
 static __init void xen_mapping_pagetable_reserve(u64 start, u64 end)
@@ -1193,14 +1197,6 @@ static __init void xen_mapping_pagetable_reserve(u64 start, u64 end)
 	}
 }
 
-static void xen_post_allocator_init(void);
-
-static void __init xen_pagetable_setup_done(pgd_t *base)
-{
-	xen_setup_shared_info();
-	xen_post_allocator_init();
-}
-
 static void xen_write_cr2(unsigned long cr2)
 {
 	this_cpu_read(xen_vcpu)->arch.cr2 = cr2;
@@ -2070,7 +2066,6 @@ void __init xen_init_mmu_ops(void)
 {
 	x86_init.mapping.pagetable_reserve = xen_mapping_pagetable_reserve;
 	x86_init.paging.pagetable_init = xen_pagetable_init;
-	x86_init.paging.pagetable_setup_done = xen_pagetable_setup_done;
 	pv_mmu_ops = xen_mmu_ops;
 
 	memset(dummy_mapping, 0xff, PAGE_SIZE);
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 20:36:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 20:36:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3vBQ-0004yR-Dr; Tue, 21 Aug 2012 20:36:16 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <attilio.rao@citrix.com>) id 1T3vBP-0004yA-DK
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 20:36:15 +0000
Received: from [85.158.138.51:2837] by server-7.bemta-3.messagelabs.com id
	CD/1F-01906-E31F3305; Tue, 21 Aug 2012 20:36:14 +0000
X-Env-Sender: attilio.rao@citrix.com
X-Msg-Ref: server-8.tower-174.messagelabs.com!1345581373!29445968!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk4OTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26642 invoked from network); 21 Aug 2012 20:36:14 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Aug 2012 20:36:14 -0000
X-IronPort-AV: E=Sophos;i="4.77,804,1336348800"; d="scan'208";a="14113786"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Aug 2012 20:36:11 +0000
Received: from dhcp-3-145.uk.xensource.com (10.80.3.221) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 21 Aug 2012 21:36:11 +0100
From: Attilio Rao <attilio.rao@citrix.com>
To: <konrad.wilk@oracle.com>, <Ian.Campbell@citrix.com>,
	<Stefano.Stabellini@eu.citrix.com>, <mingo@redhat.com>, <hpa@zytor.com>,
	<tglx@linutronix.de>, <linux-kernel@vger.kernel.org>, <x86@kernel.org>, 
	<xen-devel@lists.xensource.com>
Date: Tue, 21 Aug 2012 21:22:39 +0100
Message-ID: <1345580561-8506-4-git-send-email-attilio.rao@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1345580561-8506-1-git-send-email-attilio.rao@citrix.com>
References: <1345580561-8506-1-git-send-email-attilio.rao@citrix.com>
MIME-Version: 1.0
Cc: Attilio Rao <attilio.rao@citrix.com>
Subject: [Xen-devel] [PATCH v2 3/5] X86/XEN: Allow setup function
	x86_init.paging.pagetable_init to setup kernel pagetables
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Currently, x86_init.paging.pagetable_init relies on its callers to set up
the kernel pagetables.  In order to unify the functionality of
x86_init.paging.pagetable_setup_start and x86_init.paging.pagetable_setup_done,
allow the new setup function to perform the operation itself.

Signed-off-by: Attilio Rao <attilio.rao@citrix.com>
---
 arch/x86/include/asm/pgtable_types.h |    2 +-
 arch/x86/kernel/setup.c              |    1 -
 arch/x86/kernel/x86_init.c           |    1 -
 arch/x86/mm/init_32.c                |    1 +
 arch/x86/xen/mmu.c                   |    1 +
 5 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index 0c01e07..c93cb8e 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -306,7 +306,7 @@ extern void native_pagetable_reserve(u64 start, u64 end);
 extern void native_pagetable_init(void);
 extern void native_pagetable_setup_done(pgd_t *base);
 #else
-#define native_pagetable_init        x86_init_pgd_init_noop
+#define native_pagetable_init        paging_init
 #define native_pagetable_setup_done  x86_init_pgd_done_noop
 #endif
 
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index 61b7d98..315fd24 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -962,7 +962,6 @@ void __init setup_arch(char **cmdline_p)
 #endif
 
 	x86_init.paging.pagetable_init();
-	paging_init();
 	x86_init.paging.pagetable_setup_done(swapper_pg_dir);
 
 	if (boot_cpu_data.cpuid_level >= 0) {
diff --git a/arch/x86/kernel/x86_init.c b/arch/x86/kernel/x86_init.c
index 0e1e950..5f2478f 100644
--- a/arch/x86/kernel/x86_init.c
+++ b/arch/x86/kernel/x86_init.c
@@ -26,7 +26,6 @@
 
 void __cpuinit x86_init_noop(void) { }
 void __init x86_init_uint_noop(unsigned int unused) { }
-void __init x86_init_pgd_init_noop(void) { }
 void __init x86_init_pgd_done_noop(pgd_t *unused) { }
 int __init iommu_init_noop(void) { return 0; }
 void iommu_shutdown_noop(void) { }
diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c
index 0e38e0e..e35b4b1 100644
--- a/arch/x86/mm/init_32.c
+++ b/arch/x86/mm/init_32.c
@@ -475,6 +475,7 @@ void __init native_pagetable_init(void)
 		pte_clear(NULL, va, pte);
 	}
 	paravirt_alloc_pmd(&init_mm, __pa(base) >> PAGE_SHIFT);
+	paging_init();
 }
 
 void __init native_pagetable_setup_done(pgd_t *base)
diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index ff1af97..4f47b87 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -1176,6 +1176,7 @@ static void xen_exit_mmap(struct mm_struct *mm)
 
 static void __init xen_pagetable_init(void)
 {
+	paging_init();
 }
 
 static __init void xen_mapping_pagetable_reserve(u64 start, u64 end)
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 20:36:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 20:36:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3vBW-0004zs-OT; Tue, 21 Aug 2012 20:36:22 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <attilio.rao@citrix.com>) id 1T3vBT-0004y8-Tu
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 20:36:20 +0000
X-Env-Sender: attilio.rao@citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1345581373!2373277!3
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk4OTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14400 invoked from network); 21 Aug 2012 20:36:14 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Aug 2012 20:36:14 -0000
X-IronPort-AV: E=Sophos;i="4.77,804,1336348800"; d="scan'208";a="14113785"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Aug 2012 20:36:10 +0000
Received: from dhcp-3-145.uk.xensource.com (10.80.3.221) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 21 Aug 2012 21:36:11 +0100
From: Attilio Rao <attilio.rao@citrix.com>
To: <konrad.wilk@oracle.com>, <Ian.Campbell@citrix.com>,
	<Stefano.Stabellini@eu.citrix.com>, <mingo@redhat.com>, <hpa@zytor.com>,
	<tglx@linutronix.de>, <linux-kernel@vger.kernel.org>, <x86@kernel.org>, 
	<xen-devel@lists.xensource.com>
Date: Tue, 21 Aug 2012 21:22:38 +0100
Message-ID: <1345580561-8506-3-git-send-email-attilio.rao@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1345580561-8506-1-git-send-email-attilio.rao@citrix.com>
References: <1345580561-8506-1-git-send-email-attilio.rao@citrix.com>
MIME-Version: 1.0
Cc: Attilio Rao <attilio.rao@citrix.com>
Subject: [Xen-devel] [PATCH v2 2/5] X86/XEN: Rename pagetable_setup_start()
	setup functions into pagetable_init()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

In preparation for unifying the pagetable_setup_start() and
pagetable_setup_done() setup functions, rename all the infrastructure
related to pagetable_setup_start() accordingly.

Signed-off-by: Attilio Rao <attilio.rao@citrix.com>
---
 arch/x86/include/asm/pgtable_types.h |    4 ++--
 arch/x86/include/asm/x86_init.h      |    4 ++--
 arch/x86/kernel/setup.c              |    2 +-
 arch/x86/kernel/x86_init.c           |    4 ++--
 arch/x86/mm/init_32.c                |    4 ++--
 arch/x86/xen/mmu.c                   |    4 ++--
 6 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index e02b875..0c01e07 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -303,10 +303,10 @@ void set_pte_vaddr(unsigned long vaddr, pte_t pte);
 
 extern void native_pagetable_reserve(u64 start, u64 end);
 #ifdef CONFIG_X86_32
-extern void native_pagetable_setup_start(void);
+extern void native_pagetable_init(void);
 extern void native_pagetable_setup_done(pgd_t *base);
 #else
-#define native_pagetable_setup_start x86_init_pgd_start_noop
+#define native_pagetable_init        x86_init_pgd_init_noop
 #define native_pagetable_setup_done  x86_init_pgd_done_noop
 #endif
 
diff --git a/arch/x86/include/asm/x86_init.h b/arch/x86/include/asm/x86_init.h
index 782ba0c..24084b2 100644
--- a/arch/x86/include/asm/x86_init.h
+++ b/arch/x86/include/asm/x86_init.h
@@ -81,11 +81,11 @@ struct x86_init_mapping {
 
 /**
  * struct x86_init_paging - platform specific paging functions
- * @pagetable_setup_start:	platform specific pre paging_init() call
+ * @pagetable_init:		platform specific paging initialization call
  * @pagetable_setup_done:	platform specific post paging_init() call
  */
 struct x86_init_paging {
-	void (*pagetable_setup_start)(void);
+	void (*pagetable_init)(void);
 	void (*pagetable_setup_done)(pgd_t *base);
 };
 
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index 90cbbe0..61b7d98 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -961,7 +961,7 @@ void __init setup_arch(char **cmdline_p)
 	kvmclock_init();
 #endif
 
-	x86_init.paging.pagetable_setup_start();
+	x86_init.paging.pagetable_init();
 	paging_init();
 	x86_init.paging.pagetable_setup_done(swapper_pg_dir);
 
diff --git a/arch/x86/kernel/x86_init.c b/arch/x86/kernel/x86_init.c
index 3b88493..0e1e950 100644
--- a/arch/x86/kernel/x86_init.c
+++ b/arch/x86/kernel/x86_init.c
@@ -26,7 +26,7 @@
 
 void __cpuinit x86_init_noop(void) { }
 void __init x86_init_uint_noop(unsigned int unused) { }
-void __init x86_init_pgd_start_noop(void) { }
+void __init x86_init_pgd_init_noop(void) { }
 void __init x86_init_pgd_done_noop(pgd_t *unused) { }
 int __init iommu_init_noop(void) { return 0; }
 void iommu_shutdown_noop(void) { }
@@ -69,7 +69,7 @@ struct x86_init_ops x86_init __initdata = {
 	},
 
 	.paging = {
-		.pagetable_setup_start	= native_pagetable_setup_start,
+		.pagetable_init		= native_pagetable_init,
 		.pagetable_setup_done	= native_pagetable_setup_done,
 	},
 
diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c
index c4aa1b2..0e38e0e 100644
--- a/arch/x86/mm/init_32.c
+++ b/arch/x86/mm/init_32.c
@@ -445,7 +445,7 @@ static inline void permanent_kmaps_init(pgd_t *pgd_base)
 }
 #endif /* CONFIG_HIGHMEM */
 
-void __init native_pagetable_setup_start(void)
+void __init native_pagetable_init(void)
 {
 	unsigned long pfn, va;
 	pgd_t *pgd, *base = swapper_pg_dir;
@@ -493,7 +493,7 @@ void __init native_pagetable_setup_done(pgd_t *base)
  * If we're booting paravirtualized under a hypervisor, then there are
  * more options: we may already be running PAE, and the pagetable may
  * or may not be based in swapper_pg_dir.  In any case,
- * paravirt_pagetable_setup_start() will set up swapper_pg_dir
+ * paravirt_pagetable_init() will set up swapper_pg_dir
  * appropriately for the rest of the initialization to work.
  *
  * In general, pagetable_init() assumes that the pagetable may already
diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index d89ea5c..ff1af97 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -1174,7 +1174,7 @@ static void xen_exit_mmap(struct mm_struct *mm)
 	spin_unlock(&mm->page_table_lock);
 }
 
-static void __init xen_pagetable_setup_start(void)
+static void __init xen_pagetable_init(void)
 {
 }
 
@@ -2068,7 +2068,7 @@ static const struct pv_mmu_ops xen_mmu_ops __initconst = {
 void __init xen_init_mmu_ops(void)
 {
 	x86_init.mapping.pagetable_reserve = xen_mapping_pagetable_reserve;
-	x86_init.paging.pagetable_setup_start = xen_pagetable_setup_start;
+	x86_init.paging.pagetable_init = xen_pagetable_init;
 	x86_init.paging.pagetable_setup_done = xen_pagetable_setup_done;
 	pv_mmu_ops = xen_mmu_ops;
 
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 20:36:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 20:36:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3vBX-000503-3x; Tue, 21 Aug 2012 20:36:23 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <attilio.rao@citrix.com>) id 1T3vBU-0004y9-5b
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 20:36:20 +0000
X-Env-Sender: attilio.rao@citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1345581373!2373277!4
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk4OTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14417 invoked from network); 21 Aug 2012 20:36:14 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Aug 2012 20:36:14 -0000
X-IronPort-AV: E=Sophos;i="4.77,804,1336348800"; d="scan'208";a="14113787"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Aug 2012 20:36:11 +0000
Received: from dhcp-3-145.uk.xensource.com (10.80.3.221) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 21 Aug 2012 21:36:11 +0100
From: Attilio Rao <attilio.rao@citrix.com>
To: <konrad.wilk@oracle.com>, <Ian.Campbell@citrix.com>,
	<Stefano.Stabellini@eu.citrix.com>, <mingo@redhat.com>, <hpa@zytor.com>,
	<tglx@linutronix.de>, <linux-kernel@vger.kernel.org>, <x86@kernel.org>, 
	<xen-devel@lists.xensource.com>
Date: Tue, 21 Aug 2012 21:22:40 +0100
Message-ID: <1345580561-8506-5-git-send-email-attilio.rao@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1345580561-8506-1-git-send-email-attilio.rao@citrix.com>
References: <1345580561-8506-1-git-send-email-attilio.rao@citrix.com>
MIME-Version: 1.0
Cc: Attilio Rao <attilio.rao@citrix.com>
Subject: [Xen-devel] [PATCH v2 4/5] X86/XEN: Move content of
	xen_pagetable_setup_done() into xen_pagetable_init() and
	retire now unused x86_init.paging.pagetable_setup_done
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At this stage x86_init.paging.pagetable_setup_done is only used in the
XEN case. Move its content into the x86_init.paging.pagetable_init setup
function and remove the remaining, now unused,
x86_init.paging.pagetable_setup_done infrastructure.

Signed-off-by: Attilio Rao <attilio.rao@citrix.com>
---
 arch/x86/include/asm/pgtable_types.h |    2 --
 arch/x86/include/asm/x86_init.h      |    2 --
 arch/x86/kernel/setup.c              |    1 -
 arch/x86/kernel/x86_init.c           |    2 --
 arch/x86/mm/init_32.c                |    4 ----
 arch/x86/xen/mmu.c                   |   13 ++++---------
 6 files changed, 4 insertions(+), 20 deletions(-)

diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index c93cb8e..db8fec6 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -304,10 +304,8 @@ void set_pte_vaddr(unsigned long vaddr, pte_t pte);
 extern void native_pagetable_reserve(u64 start, u64 end);
 #ifdef CONFIG_X86_32
 extern void native_pagetable_init(void);
-extern void native_pagetable_setup_done(pgd_t *base);
 #else
 #define native_pagetable_init        paging_init
-#define native_pagetable_setup_done  x86_init_pgd_done_noop
 #endif
 
 struct seq_file;
diff --git a/arch/x86/include/asm/x86_init.h b/arch/x86/include/asm/x86_init.h
index 24084b2..995ea5c 100644
--- a/arch/x86/include/asm/x86_init.h
+++ b/arch/x86/include/asm/x86_init.h
@@ -82,11 +82,9 @@ struct x86_init_mapping {
 /**
  * struct x86_init_paging - platform specific paging functions
  * @pagetable_init:		platform specific paging initialization call
- * @pagetable_setup_done:	platform specific post paging_init() call
  */
 struct x86_init_paging {
 	void (*pagetable_init)(void);
-	void (*pagetable_setup_done)(pgd_t *base);
 };
 
 /**
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index 315fd24..4f16547 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -962,7 +962,6 @@ void __init setup_arch(char **cmdline_p)
 #endif
 
 	x86_init.paging.pagetable_init();
-	x86_init.paging.pagetable_setup_done(swapper_pg_dir);
 
 	if (boot_cpu_data.cpuid_level >= 0) {
 		/* A CPU has %cr4 if and only if it has CPUID */
diff --git a/arch/x86/kernel/x86_init.c b/arch/x86/kernel/x86_init.c
index 5f2478f..7a3d075 100644
--- a/arch/x86/kernel/x86_init.c
+++ b/arch/x86/kernel/x86_init.c
@@ -26,7 +26,6 @@
 
 void __cpuinit x86_init_noop(void) { }
 void __init x86_init_uint_noop(unsigned int unused) { }
-void __init x86_init_pgd_done_noop(pgd_t *unused) { }
 int __init iommu_init_noop(void) { return 0; }
 void iommu_shutdown_noop(void) { }
 
@@ -69,7 +68,6 @@ struct x86_init_ops x86_init __initdata = {
 
 	.paging = {
 		.pagetable_init		= native_pagetable_init,
-		.pagetable_setup_done	= native_pagetable_setup_done,
 	},
 
 	.timers = {
diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c
index e35b4b1..4f04db1 100644
--- a/arch/x86/mm/init_32.c
+++ b/arch/x86/mm/init_32.c
@@ -478,10 +478,6 @@ void __init native_pagetable_init(void)
 	paging_init();
 }
 
-void __init native_pagetable_setup_done(pgd_t *base)
-{
-}
-
 /*
  * Build a proper pagetable for the kernel mappings.  Up until this
  * point, we've been running on some set of pagetables constructed by
diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index 4f47b87..4290d83 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -1174,9 +1174,13 @@ static void xen_exit_mmap(struct mm_struct *mm)
 	spin_unlock(&mm->page_table_lock);
 }
 
+static void xen_post_allocator_init(void);
+
 static void __init xen_pagetable_init(void)
 {
 	paging_init();
+	xen_setup_shared_info();
+	xen_post_allocator_init();
 }
 
 static __init void xen_mapping_pagetable_reserve(u64 start, u64 end)
@@ -1193,14 +1197,6 @@ static __init void xen_mapping_pagetable_reserve(u64 start, u64 end)
 	}
 }
 
-static void xen_post_allocator_init(void);
-
-static void __init xen_pagetable_setup_done(pgd_t *base)
-{
-	xen_setup_shared_info();
-	xen_post_allocator_init();
-}
-
 static void xen_write_cr2(unsigned long cr2)
 {
 	this_cpu_read(xen_vcpu)->arch.cr2 = cr2;
@@ -2070,7 +2066,6 @@ void __init xen_init_mmu_ops(void)
 {
 	x86_init.mapping.pagetable_reserve = xen_mapping_pagetable_reserve;
 	x86_init.paging.pagetable_init = xen_pagetable_init;
-	x86_init.paging.pagetable_setup_done = xen_pagetable_setup_done;
 	pv_mmu_ops = xen_mmu_ops;
 
 	memset(dummy_mapping, 0xff, PAGE_SIZE);
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 20:36:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 20:36:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3vBQ-0004yR-Dr; Tue, 21 Aug 2012 20:36:16 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <attilio.rao@citrix.com>) id 1T3vBP-0004yA-DK
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 20:36:15 +0000
Received: from [85.158.138.51:2837] by server-7.bemta-3.messagelabs.com id
	CD/1F-01906-E31F3305; Tue, 21 Aug 2012 20:36:14 +0000
X-Env-Sender: attilio.rao@citrix.com
X-Msg-Ref: server-8.tower-174.messagelabs.com!1345581373!29445968!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk4OTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26642 invoked from network); 21 Aug 2012 20:36:14 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Aug 2012 20:36:14 -0000
X-IronPort-AV: E=Sophos;i="4.77,804,1336348800"; d="scan'208";a="14113786"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Aug 2012 20:36:11 +0000
Received: from dhcp-3-145.uk.xensource.com (10.80.3.221) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 21 Aug 2012 21:36:11 +0100
From: Attilio Rao <attilio.rao@citrix.com>
To: <konrad.wilk@oracle.com>, <Ian.Campbell@citrix.com>,
	<Stefano.Stabellini@eu.citrix.com>, <mingo@redhat.com>, <hpa@zytor.com>,
	<tglx@linutronix.de>, <linux-kernel@vger.kernel.org>, <x86@kernel.org>, 
	<xen-devel@lists.xensource.com>
Date: Tue, 21 Aug 2012 21:22:39 +0100
Message-ID: <1345580561-8506-4-git-send-email-attilio.rao@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1345580561-8506-1-git-send-email-attilio.rao@citrix.com>
References: <1345580561-8506-1-git-send-email-attilio.rao@citrix.com>
MIME-Version: 1.0
Cc: Attilio Rao <attilio.rao@citrix.com>
Subject: [Xen-devel] [PATCH v2 3/5] X86/XEN: Allow setup function
	x86_init.paging.pagetable_init to setup kernel pagetables
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Currently, x86_init.paging.pagetable_init relies on callers to set up the
kernel pagetables.  In order to unify the functionality of
x86_init.paging.pagetable_setup_start and x86_init.paging.pagetable_setup_done,
allow the new setup function to perform the operation itself.

Signed-off-by: Attilio Rao <attilio.rao@citrix.com>
---
 arch/x86/include/asm/pgtable_types.h |    2 +-
 arch/x86/kernel/setup.c              |    1 -
 arch/x86/kernel/x86_init.c           |    1 -
 arch/x86/mm/init_32.c                |    1 +
 arch/x86/xen/mmu.c                   |    1 +
 5 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index 0c01e07..c93cb8e 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -306,7 +306,7 @@ extern void native_pagetable_reserve(u64 start, u64 end);
 extern void native_pagetable_init(void);
 extern void native_pagetable_setup_done(pgd_t *base);
 #else
-#define native_pagetable_init        x86_init_pgd_init_noop
+#define native_pagetable_init        paging_init
 #define native_pagetable_setup_done  x86_init_pgd_done_noop
 #endif
 
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index 61b7d98..315fd24 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -962,7 +962,6 @@ void __init setup_arch(char **cmdline_p)
 #endif
 
 	x86_init.paging.pagetable_init();
-	paging_init();
 	x86_init.paging.pagetable_setup_done(swapper_pg_dir);
 
 	if (boot_cpu_data.cpuid_level >= 0) {
diff --git a/arch/x86/kernel/x86_init.c b/arch/x86/kernel/x86_init.c
index 0e1e950..5f2478f 100644
--- a/arch/x86/kernel/x86_init.c
+++ b/arch/x86/kernel/x86_init.c
@@ -26,7 +26,6 @@
 
 void __cpuinit x86_init_noop(void) { }
 void __init x86_init_uint_noop(unsigned int unused) { }
-void __init x86_init_pgd_init_noop(void) { }
 void __init x86_init_pgd_done_noop(pgd_t *unused) { }
 int __init iommu_init_noop(void) { return 0; }
 void iommu_shutdown_noop(void) { }
diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c
index 0e38e0e..e35b4b1 100644
--- a/arch/x86/mm/init_32.c
+++ b/arch/x86/mm/init_32.c
@@ -475,6 +475,7 @@ void __init native_pagetable_init(void)
 		pte_clear(NULL, va, pte);
 	}
 	paravirt_alloc_pmd(&init_mm, __pa(base) >> PAGE_SHIFT);
+	paging_init();
 }
 
 void __init native_pagetable_setup_done(pgd_t *base)
diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index ff1af97..4f47b87 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -1176,6 +1176,7 @@ static void xen_exit_mmap(struct mm_struct *mm)
 
 static void __init xen_pagetable_init(void)
 {
+	paging_init();
 }
 
 static __init void xen_mapping_pagetable_reserve(u64 start, u64 end)
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 20:36:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 20:36:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3vBW-0004zs-OT; Tue, 21 Aug 2012 20:36:22 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <attilio.rao@citrix.com>) id 1T3vBT-0004y8-Tu
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 20:36:20 +0000
X-Env-Sender: attilio.rao@citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1345581373!2373277!3
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk4OTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14400 invoked from network); 21 Aug 2012 20:36:14 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Aug 2012 20:36:14 -0000
X-IronPort-AV: E=Sophos;i="4.77,804,1336348800"; d="scan'208";a="14113785"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Aug 2012 20:36:10 +0000
Received: from dhcp-3-145.uk.xensource.com (10.80.3.221) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Tue, 21 Aug 2012 21:36:11 +0100
From: Attilio Rao <attilio.rao@citrix.com>
To: <konrad.wilk@oracle.com>, <Ian.Campbell@citrix.com>,
	<Stefano.Stabellini@eu.citrix.com>, <mingo@redhat.com>, <hpa@zytor.com>,
	<tglx@linutronix.de>, <linux-kernel@vger.kernel.org>, <x86@kernel.org>, 
	<xen-devel@lists.xensource.com>
Date: Tue, 21 Aug 2012 21:22:38 +0100
Message-ID: <1345580561-8506-3-git-send-email-attilio.rao@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1345580561-8506-1-git-send-email-attilio.rao@citrix.com>
References: <1345580561-8506-1-git-send-email-attilio.rao@citrix.com>
MIME-Version: 1.0
Cc: Attilio Rao <attilio.rao@citrix.com>
Subject: [Xen-devel] [PATCH v2 2/5] X86/XEN: Rename pagetable_setup_start()
	setup functions into pagetable_init()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

In preparation for unifying the pagetable_setup_start() and
pagetable_setup_done() setup functions, rename all the infrastructure
related to pagetable_setup_start() accordingly.

Signed-off-by: Attilio Rao <attilio.rao@citrix.com>
---
 arch/x86/include/asm/pgtable_types.h |    4 ++--
 arch/x86/include/asm/x86_init.h      |    4 ++--
 arch/x86/kernel/setup.c              |    2 +-
 arch/x86/kernel/x86_init.c           |    4 ++--
 arch/x86/mm/init_32.c                |    4 ++--
 arch/x86/xen/mmu.c                   |    4 ++--
 6 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index e02b875..0c01e07 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -303,10 +303,10 @@ void set_pte_vaddr(unsigned long vaddr, pte_t pte);
 
 extern void native_pagetable_reserve(u64 start, u64 end);
 #ifdef CONFIG_X86_32
-extern void native_pagetable_setup_start(void);
+extern void native_pagetable_init(void);
 extern void native_pagetable_setup_done(pgd_t *base);
 #else
-#define native_pagetable_setup_start x86_init_pgd_start_noop
+#define native_pagetable_init        x86_init_pgd_init_noop
 #define native_pagetable_setup_done  x86_init_pgd_done_noop
 #endif
 
diff --git a/arch/x86/include/asm/x86_init.h b/arch/x86/include/asm/x86_init.h
index 782ba0c..24084b2 100644
--- a/arch/x86/include/asm/x86_init.h
+++ b/arch/x86/include/asm/x86_init.h
@@ -81,11 +81,11 @@ struct x86_init_mapping {
 
 /**
  * struct x86_init_paging - platform specific paging functions
- * @pagetable_setup_start:	platform specific pre paging_init() call
+ * @pagetable_init:		platform specific paging initialization call
  * @pagetable_setup_done:	platform specific post paging_init() call
  */
 struct x86_init_paging {
-	void (*pagetable_setup_start)(void);
+	void (*pagetable_init)(void);
 	void (*pagetable_setup_done)(pgd_t *base);
 };
 
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index 90cbbe0..61b7d98 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -961,7 +961,7 @@ void __init setup_arch(char **cmdline_p)
 	kvmclock_init();
 #endif
 
-	x86_init.paging.pagetable_setup_start();
+	x86_init.paging.pagetable_init();
 	paging_init();
 	x86_init.paging.pagetable_setup_done(swapper_pg_dir);
 
diff --git a/arch/x86/kernel/x86_init.c b/arch/x86/kernel/x86_init.c
index 3b88493..0e1e950 100644
--- a/arch/x86/kernel/x86_init.c
+++ b/arch/x86/kernel/x86_init.c
@@ -26,7 +26,7 @@
 
 void __cpuinit x86_init_noop(void) { }
 void __init x86_init_uint_noop(unsigned int unused) { }
-void __init x86_init_pgd_start_noop(void) { }
+void __init x86_init_pgd_init_noop(void) { }
 void __init x86_init_pgd_done_noop(pgd_t *unused) { }
 int __init iommu_init_noop(void) { return 0; }
 void iommu_shutdown_noop(void) { }
@@ -69,7 +69,7 @@ struct x86_init_ops x86_init __initdata = {
 	},
 
 	.paging = {
-		.pagetable_setup_start	= native_pagetable_setup_start,
+		.pagetable_init		= native_pagetable_init,
 		.pagetable_setup_done	= native_pagetable_setup_done,
 	},
 
diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c
index c4aa1b2..0e38e0e 100644
--- a/arch/x86/mm/init_32.c
+++ b/arch/x86/mm/init_32.c
@@ -445,7 +445,7 @@ static inline void permanent_kmaps_init(pgd_t *pgd_base)
 }
 #endif /* CONFIG_HIGHMEM */
 
-void __init native_pagetable_setup_start(void)
+void __init native_pagetable_init(void)
 {
 	unsigned long pfn, va;
 	pgd_t *pgd, *base = swapper_pg_dir;
@@ -493,7 +493,7 @@ void __init native_pagetable_setup_done(pgd_t *base)
  * If we're booting paravirtualized under a hypervisor, then there are
  * more options: we may already be running PAE, and the pagetable may
  * or may not be based in swapper_pg_dir.  In any case,
- * paravirt_pagetable_setup_start() will set up swapper_pg_dir
+ * paravirt_pagetable_init() will set up swapper_pg_dir
  * appropriately for the rest of the initialization to work.
  *
  * In general, pagetable_init() assumes that the pagetable may already
diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index d89ea5c..ff1af97 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -1174,7 +1174,7 @@ static void xen_exit_mmap(struct mm_struct *mm)
 	spin_unlock(&mm->page_table_lock);
 }
 
-static void __init xen_pagetable_setup_start(void)
+static void __init xen_pagetable_init(void)
 {
 }
 
@@ -2068,7 +2068,7 @@ static const struct pv_mmu_ops xen_mmu_ops __initconst = {
 void __init xen_init_mmu_ops(void)
 {
 	x86_init.mapping.pagetable_reserve = xen_mapping_pagetable_reserve;
-	x86_init.paging.pagetable_setup_start = xen_pagetable_setup_start;
+	x86_init.paging.pagetable_init = xen_pagetable_init;
 	x86_init.paging.pagetable_setup_done = xen_pagetable_setup_done;
 	pv_mmu_ops = xen_mmu_ops;
 
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 20:39:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 20:39:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3vEi-0005hd-TE; Tue, 21 Aug 2012 20:39:40 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <attilio.rao@citrix.com>) id 1T3vEh-0005hK-UJ
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 20:39:40 +0000
Received: from [85.158.138.51:20693] by server-7.bemta-3.messagelabs.com id
	9A/61-01906-B02F3305; Tue, 21 Aug 2012 20:39:39 +0000
X-Env-Sender: attilio.rao@citrix.com
X-Msg-Ref: server-3.tower-174.messagelabs.com!1345581578!21401811!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk4OTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13704 invoked from network); 21 Aug 2012 20:39:38 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-3.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Aug 2012 20:39:38 -0000
X-IronPort-AV: E=Sophos;i="4.77,804,1336348800"; d="scan'208";a="14113832"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Aug 2012 20:39:38 +0000
Received: from [10.80.3.221] (10.80.3.221) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Tue, 21 Aug 2012 21:39:38 +0100
Message-ID: <5033EEE9.70806@citrix.com>
Date: Tue, 21 Aug 2012 21:26:17 +0100
From: Attilio Rao <attilio.rao@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US;
	rv:1.9.1.16) Gecko/20111110 Icedove/3.0.11
MIME-Version: 1.0
To: Thomas Gleixner <tglx@linutronix.de>
References: <1345511646-12427-1-git-send-email-attilio.rao@citrix.com>
	<1345511646-12427-4-git-send-email-attilio.rao@citrix.com>
	<alpine.LFD.2.02.1208211722560.2856@ionos>
In-Reply-To: <alpine.LFD.2.02.1208211722560.2856@ionos>
Cc: "x86@kernel.org" <x86@kernel.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Ingo Molnar <mingo@redhat.com>, "H. Peter Anvin" <hpa@zytor.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [PATCH 3/5] X86/XEN: Introduce the
 x86_init.paging.pagetable_init PVOPS
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 21/08/12 16:44, Thomas Gleixner wrote:
> On Tue, 21 Aug 2012, Attilio Rao wrote:
>
>    
>> This new PVOPS is responsible to setup the kernel pagetables and
>> replace entirely x86_init.paging.pagetable_setup_start and
>> x86_init.paging.pagetable_setup_done PVOPS work.
>>      
>
>    
>> For performance the x86_64 stub is implemented as a macro to paging_init()
>> rather than an actual function stub.
>>      
> Huch, using a macro for an once per boot time call is really a massive
> performance improvement.
>
> It's confusing and wrong. You just use a macro because x86_64 does not
> need any extra setups aside of paging_init().
>
>    
>> diff --git a/arch/x86/kernel/x86_init.c b/arch/x86/kernel/x86_init.c
>> index 849be14..c1e910a 100644
>> --- a/arch/x86/kernel/x86_init.c
>> +++ b/arch/x86/kernel/x86_init.c
>> @@ -68,6 +68,7 @@ struct x86_init_ops x86_init __initdata = {
>>   	},
>>
>>   	.paging = {
>> +		.pagetable_init		= native_pagetable_init,
>>      
> I'd prefer to see these patches implemented differently.
>
>   #1 Remove the base argument from pagetable_setup_start (leave
>      pagetable_setup_done() alone).
>
>   #2 Rename pagetable_setup_start to pagetable_init,
>      native_pagetable_setup_start to native_pagetable_init and
>      xen_pagetable_setup_start to xen_pagetable_init
>
>   #3 Instead of copying the whole native_pagetable_setup_start()
>      function and deleting it later, move the paging_init() call from
>      setup.c to native_pagetable_init() and xen_pagetable_init()
>      and define native_pagetable_init as paging_init() for x86_64
>
>   #4 Move the code from xen_pagetable_setup_done() into
>      xen_pagetable_init() and remove the now unused
>      pagetable_setup_done().
>
> That's less code shuffling and pointless copying which makes the
> review way easier.
>    

I've followed these steps in a new patch series (integrating suggestions 
from Konrad and Stefano too).

Attilio

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 20:39:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 20:39:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3vEi-0005hd-TE; Tue, 21 Aug 2012 20:39:40 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <attilio.rao@citrix.com>) id 1T3vEh-0005hK-UJ
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 20:39:40 +0000
Received: from [85.158.138.51:20693] by server-7.bemta-3.messagelabs.com id
	9A/61-01906-B02F3305; Tue, 21 Aug 2012 20:39:39 +0000
X-Env-Sender: attilio.rao@citrix.com
X-Msg-Ref: server-3.tower-174.messagelabs.com!1345581578!21401811!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMDk4OTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13704 invoked from network); 21 Aug 2012 20:39:38 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-3.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Aug 2012 20:39:38 -0000
X-IronPort-AV: E=Sophos;i="4.77,804,1336348800"; d="scan'208";a="14113832"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	21 Aug 2012 20:39:38 +0000
Received: from [10.80.3.221] (10.80.3.221) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Tue, 21 Aug 2012 21:39:38 +0100
Message-ID: <5033EEE9.70806@citrix.com>
Date: Tue, 21 Aug 2012 21:26:17 +0100
From: Attilio Rao <attilio.rao@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US;
	rv:1.9.1.16) Gecko/20111110 Icedove/3.0.11
MIME-Version: 1.0
To: Thomas Gleixner <tglx@linutronix.de>
References: <1345511646-12427-1-git-send-email-attilio.rao@citrix.com>
	<1345511646-12427-4-git-send-email-attilio.rao@citrix.com>
	<alpine.LFD.2.02.1208211722560.2856@ionos>
In-Reply-To: <alpine.LFD.2.02.1208211722560.2856@ionos>
Cc: "x86@kernel.org" <x86@kernel.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Ingo Molnar <mingo@redhat.com>, "H. Peter Anvin" <hpa@zytor.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [PATCH 3/5] X86/XEN: Introduce the
 x86_init.paging.pagetable_init PVOPS
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 21/08/12 16:44, Thomas Gleixner wrote:
> On Tue, 21 Aug 2012, Attilio Rao wrote:
>
>    
>> This new PVOPS is responsible for setting up the kernel pagetables and
>> entirely replaces the work of the x86_init.paging.pagetable_setup_start
>> and x86_init.paging.pagetable_setup_done PVOPS.
>>      
>
>    
>> For performance, the x86_64 stub is implemented as a macro aliasing
>> paging_init() rather than an actual function stub.
>>      
> Huch, using a macro for a once-per-boot call is really a massive
> performance improvement.
>
> It's confusing and wrong. You just use a macro because x86_64 does not
> need any extra setup aside from paging_init().
>
>    
>> diff --git a/arch/x86/kernel/x86_init.c b/arch/x86/kernel/x86_init.c
>> index 849be14..c1e910a 100644
>> --- a/arch/x86/kernel/x86_init.c
>> +++ b/arch/x86/kernel/x86_init.c
>> @@ -68,6 +68,7 @@ struct x86_init_ops x86_init __initdata = {
>>   	},
>>
>>   	.paging = {
>> +		.pagetable_init		= native_pagetable_init,
>>      
> I'd prefer to see these patches implemented differently.
>
>   #1 Remove the base argument from pagetable_setup_start (leave
>      pagetable_setup_done() alone).
>
>   #2 Rename pagetable_setup_start to pagetable_init,
>      native_pagetable_setup_start to native_pagetable_init and
>      xen_pagetable_setup_start to xen_pagetable_init
>
>   #3 Instead of copying the whole native_pagetable_setup_start()
>      function and deleting it later, move the paging_init() call from
>      setup.c to native_pagetable_init() and xen_pagetable_init()
>      and define native_pagetable_init as paging_init() for x86_64
>
>   #4 Move the code from xen_pagetable_setup_done() into
>      xen_pagetable_init() and remove the now unused
>      pagetable_setup_done().
>
> That's less code shuffling and pointless copying which makes the
> review way easier.
>    

I've followed these steps in a new patch series (integrating suggestions 
from Konrad and Stefano too).

Attilio

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 21 21:23:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Aug 2012 21:23:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3vuL-00069f-EB; Tue, 21 Aug 2012 21:22:41 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <tglx@linutronix.de>) id 1T3vuJ-00069a-Hi
	for xen-devel@lists.xensource.com; Tue, 21 Aug 2012 21:22:39 +0000
X-Env-Sender: tglx@linutronix.de
X-Msg-Ref: server-10.tower-27.messagelabs.com!1345584136!5301821!1
X-Originating-IP: [62.245.132.108]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21278 invoked from network); 21 Aug 2012 21:22:17 -0000
Received: from www.linutronix.de (HELO Galois.linutronix.de) (62.245.132.108)
	by server-10.tower-27.messagelabs.com with AES256-SHA encrypted SMTP;
	21 Aug 2012 21:22:17 -0000
Received: from localhost ([127.0.0.1]) by Galois.linutronix.de with esmtps
	(TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32) (Exim 4.72)
	(envelope-from <tglx@linutronix.de>)
	id 1T3vtk-0000lj-Ik; Tue, 21 Aug 2012 23:22:04 +0200
Date: Tue, 21 Aug 2012 23:22:03 +0200 (CEST)
From: Thomas Gleixner <tglx@linutronix.de>
To: Attilio Rao <attilio.rao@citrix.com>
In-Reply-To: <1345580561-8506-1-git-send-email-attilio.rao@citrix.com>
Message-ID: <alpine.LFD.2.02.1208212315330.2856@ionos>
References: <1345580561-8506-1-git-send-email-attilio.rao@citrix.com>
User-Agent: Alpine 2.02 (LFD 1266 2009-07-14)
MIME-Version: 1.0
X-Linutronix-Spam-Score: -1.0
X-Linutronix-Spam-Level: -
X-Linutronix-Spam-Status: No , -1.0 points, 5.0 required, ALL_TRUSTED=-1,
	SHORTCIRCUIT=-0.0001
Cc: x86@kernel.org, Ian.Campbell@citrix.com, Stefano.Stabellini@eu.citrix.com,
	konrad.wilk@oracle.com, linux-kernel@vger.kernel.org,
	mingo@redhat.com, hpa@zytor.com, xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [PATCH v2 0/5] X86/XEN: Merge
 x86_init.paging.pagetable_setup_start and
 x86_init.paging.pagetable_setup_done setup functions and document its
 semantic
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 21 Aug 2012, Attilio Rao wrote:
> Differences with v1:
> - The patch series is re-arranged in a way that helps review, following
>   a plan by Thomas Gleixner
> - The PVOPS nomenclature is dropped, as it is not correct
> - The front-end message is adjusted with feedback from Thomas Gleixner,
>   Stefano Stabellini and Konrad Rzeszutek Wilk

This is much simpler to read and review. Just have a look at the
diffstats of the two series:

 6 files changed,  9 insertions(+),  8 deletions(-)
 6 files changed, 11 insertions(+),  9 deletions(-)
 5 files changed, 50 insertions(+),  2 deletions(-)
 6 files changed,  2 insertions(+), 65 deletions(-)
 1 files changed,  5 insertions(+),  0 deletions(-)

versus

 6 files changed, 10 insertions(+),  9 deletions(-)
 6 files changed, 11 insertions(+), 11 deletions(-)
 5 files changed,  3 insertions(+),  3 deletions(-)
 6 files changed,  4 insertions(+), 20 deletions(-)
 1 files changed,  5 insertions(+),  0 deletions(-)

The overall result is basically the same, but it's way simpler to look
at obvious and well done patches than checking whether a subtle copy
and paste bug happened in 3/5 of the first version. Copy and paste is
the #1 cause for subtle bugs. :)

I'm waiting for the ack of Xen folks before taking it into tip.

Thanks for following up!

	tglx

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 00:24:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 00:24:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3yjN-0007fZ-KX; Wed, 22 Aug 2012 00:23:33 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1T3yjM-0007fU-0O
	for Xen-devel@lists.xensource.com; Wed, 22 Aug 2012 00:23:32 +0000
Received: from [85.158.143.99:49084] by server-1.bemta-4.messagelabs.com id
	F1/96-07754-38624305; Wed, 22 Aug 2012 00:23:31 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-11.tower-216.messagelabs.com!1345595009!21835737!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNzAyMzU2\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27159 invoked from network); 22 Aug 2012 00:23:30 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-11.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 22 Aug 2012 00:23:30 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7M0NQI3004790
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 22 Aug 2012 00:23:26 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7M0NPim014102
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 22 Aug 2012 00:23:26 GMT
Received: from abhmt113.oracle.com (abhmt113.oracle.com [141.146.116.65])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7M0NOGr028320; Tue, 21 Aug 2012 19:23:25 -0500
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 21 Aug 2012 17:23:24 -0700
Date: Tue, 21 Aug 2012 17:23:23 -0700
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20120821172323.38aec925@mantra.us.oracle.com>
In-Reply-To: <1345196531.30865.139.camel@zakaz.uk.xensource.com>
References: <20120815180449.50410028@mantra.us.oracle.com>
	<1345196531.30865.139.camel@zakaz.uk.xensource.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.7.6 (GTK+ 2.18.9; x86_64-redhat-linux-gnu)
Mime-Version: 1.0
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [RFC PATCH 5/8]: PVH: smp changes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 17 Aug 2012 10:42:11 +0100
Ian Campbell <Ian.Campbell@citrix.com> wrote:

> > @@ -339,7 +343,20 @@ cpu_initialize_context(unsigned int cpu,
> > struct task_struct *idle) (unsigned long)xen_hypervisor_callback;
> >  		ctxt->failsafe_callback_eip = 
> >  					(unsigned
> > long)xen_failsafe_callback; -
> > +	} else {
> > +		ctxt->user_regs.ds = __KERNEL_DS;
> > +		ctxt->user_regs.es = 0;
> > +		ctxt->user_regs.gs = 0;
> 
> Not __KERNEL_DS for es too?

64-bit: es is ignored, right?

> Not sure about gs -- shouldn't that point to some per-cpu segment or
> something? Maybe that happens somewhere else? (in which case a
> comment?)

Gets set later. A comment is a good idea.

> > +
> > +		ctxt->gdt_frames[0] = (unsigned long)gdt;
> > +		ctxt->gdt_ents = (unsigned long)(GDT_SIZE - 1);
> > +
> > +		/* Note: PVH is not supported on x86_32. */
> > +#ifdef __x86_64__
> 
> ITYM CONFIG_X86_64?

Yup.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 01:03:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 01:03:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T3zM1-0003Hk-UW; Wed, 22 Aug 2012 01:03:29 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xudong.hao@intel.com>) id 1T3zM0-0003Hf-FP
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 01:03:28 +0000
Received: from [85.158.139.83:4930] by server-2.bemta-5.messagelabs.com id
	3F/A1-10142-FDF24305; Wed, 22 Aug 2012 01:03:27 +0000
X-Env-Sender: xudong.hao@intel.com
X-Msg-Ref: server-9.tower-182.messagelabs.com!1345597406!28680579!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzYwMDI5\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3138 invoked from network); 22 Aug 2012 01:03:26 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-9.tower-182.messagelabs.com with SMTP;
	22 Aug 2012 01:03:26 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga102.jf.intel.com with ESMTP; 21 Aug 2012 18:03:25 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.77,806,1336374000"; d="scan'208";a="189588088"
Received: from fmsmsx104.amr.corp.intel.com ([10.19.9.35])
	by orsmga002.jf.intel.com with ESMTP; 21 Aug 2012 18:03:25 -0700
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	FMSMSX104.amr.corp.intel.com (10.19.9.35) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Tue, 21 Aug 2012 18:03:24 -0700
Received: from shsmsx102.ccr.corp.intel.com ([169.254.2.92]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.175]) with mapi id
	14.01.0355.002; Wed, 22 Aug 2012 09:03:22 +0800
From: "Hao, Xudong" <xudong.hao@intel.com>
To: Jan Beulich <JBeulich@suse.com>
Thread-Topic: [Xen-devel] [PATCH 1/3] xen/tools: Add 64 bits big bar support
Thread-Index: AQHNerKKEOT7HWjBmke6+J0K3zDOkJdaEb8AgAItqqD//4LEgIAB4cJw//+X9gCABNO7cP//9qoAAGC+ZBA=
Date: Wed, 22 Aug 2012 01:03:22 +0000
Message-ID: <403610A45A2B5242BD291EDAE8B37D300FEA5CBD@SHSMSX102.ccr.corp.intel.com>
References: <1345013682-20618-1-git-send-email-xudong.hao@intel.com>
	<502B85020200007800095036@nat28.tlf.novell.com>
	<403610A45A2B5242BD291EDAE8B37D300FE8AB13@SHSMSX102.ccr.corp.intel.com>
	<502CEFC10200007800095727@nat28.tlf.novell.com>
	<403610A45A2B5242BD291EDAE8B37D300FEA18DC@SHSMSX102.ccr.corp.intel.com>
	<502E2C9C0200007800095D33@nat28.tlf.novell.com>
	<403610A45A2B5242BD291EDAE8B37D300FEA36B0@SHSMSX102.ccr.corp.intel.com>
	<5032316202000078000965DF@nat28.tlf.novell.com>
In-Reply-To: <5032316202000078000965DF@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "ian.jackson@eu.citrix.com" <ian.jackson@eu.citrix.com>, "Zhang,
	Xiantao" <xiantao.zhang@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 1/3] xen/tools: Add 64 bits big bar support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: Monday, August 20, 2012 6:45 PM
> To: Hao, Xudong
> Cc: ian.jackson@eu.citrix.com; Zhang, Xiantao; xen-devel@lists.xen.org
> Subject: RE: [Xen-devel] [PATCH 1/3] xen/tools: Add 64 bits big bar support
> 
> >>> On 20.08.12 at 05:22, "Hao, Xudong" <xudong.hao@intel.com> wrote:
> 
> >> -----Original Message-----
> >> From: Jan Beulich [mailto:JBeulich@suse.com]
> >> Sent: Friday, August 17, 2012 5:36 PM
> >> To: Hao, Xudong
> >> Cc: ian.jackson@eu.citrix.com; Zhang, Xiantao; xen-devel@lists.xen.org
> >> Subject: RE: [Xen-devel] [PATCH 1/3] xen/tools: Add 64 bits big bar support
> >>
> >> >>> On 17.08.12 at 11:24, "Hao, Xudong" <xudong.hao@intel.com> wrote:
> >> >>  -----Original Message-----
> >> >> From: Jan Beulich [mailto:JBeulich@suse.com]
> >> >> Sent: Thursday, August 16, 2012 7:04 PM
> >> >> To: Hao, Xudong
> >> >> Cc: ian.jackson@eu.citrix.com; Zhang, Xiantao; xen-devel@lists.xen.org
> >> >> Subject: RE: [Xen-devel] [PATCH 1/3] xen/tools: Add 64 bits big bar
> support
> >> >>
> >> >> >>> On 16.08.12 at 12:48, "Hao, Xudong" <xudong.hao@intel.com>
> wrote:
> >> >> >> From: Jan Beulich [mailto:JBeulich@suse.com]
> >> >> >> >>> On 15.08.12 at 08:54, Xudong Hao <xudong.hao@intel.com>
> wrote:
> >> >> >> > --- a/tools/firmware/hvmloader/config.h	Tue Jul 24 17:02:04 2012
> >> >> +0200
> >> >> >> > +++ b/tools/firmware/hvmloader/config.h	Thu Jul 26 15:40:01 2012
> >> >> +0800
> >> >> >> > @@ -53,6 +53,10 @@ extern struct bios_config ovmf_config;
> >> >> >> >  /* MMIO hole: Hardcoded defaults, which can be dynamically
> >> expanded.
> >> >> */
> >> >> >> >  #define PCI_MEM_START       0xf0000000
> >> >> >> >  #define PCI_MEM_END         0xfc000000
> >> >> >> > +#define PCI_HIGH_MEM_START  0xa000000000ULL
> >> >> >> > +#define PCI_HIGH_MEM_END    0xf000000000ULL
> >> >> >>
> >> >> >> With such hard coded values, this is hardly meant to be anything
> >> >> >> more than an RFC, is it? These values should not exist in the first
> >> >> >> place, and the variables below should be determined from VM
> >> >> >> characteristics (best would presumably be to allocate them top
> >> >> >> down from the end of physical address space, making sure you
> >> >> >> don't run into RAM).
> >> >>
> >> >> No comment on this part?
> >> >>
> >> >
> >> > The MMIO high memory starts at 640G, which is already very high; I
> >> > think we don't need to allocate MMIO top down from the top of the
> >> > physical address space. Another thing you reminded me of: maybe we
> >> > can skip this high MMIO hole when setting up the p2m table in the
> >> > HVM-build path of libxc (setup_guest()), like the handling for
> >> > MMIO below 4G.
> >>
> >> That would be an option, but any fixed address you pick here
> >> will look arbitrary (and will sooner or later raise questions). Plus
> >> by allowing the RAM above 4G to remain contiguous even for
> >> huge guests, we'd retain maximum compatibility with all sorts
> >> of guest OSes. Furthermore, did you check that we can in all cases
> >> use 40-bit (guest) physical addresses? (I would think that 36
> >> is the biggest common value.) Bottom line - please don't use a
> >> fixed number here.
> >>
> >
> > Where is the 36-bit physical address limit imposed? Could you help
> > point it out in the current Xen code?
> 
> Look at xen/arch/x86/hvm/mtrr.c, e.g. hvm_mtrr_pat_init() or
> mtrr_var_range_msr_set().

So if the common 36-bit (guest) physical address limit cannot change, can we
allocate top down from 64G? Jan, do you have any suggestion?

Thanks,
-Xudong

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> >> >> >> place, and the variables below should be determined from VM
> >> >> >> characteristics (best would presumably be to allocate them top
> >> >> >> down from the end of physical address space, making sure you
> >> >> >> don't run into RAM).
> >> >>
> >> >> No comment on this part?
> >> >>
> >> >
> >> > The MMIO high memory start from 640G, it's already very high, I think we
> >> > don't need allocate MMIO top down from the high of physical address
> space.
> >> > Another thing you remind me that maybe we can skip this high MMIO hole
> >> when
> >> > setup p2m table in build hvm of libxc(setup_guest()), like the handles for
> >> > MMIO below 4G.
> >>
> >> That would be an option, but any fixed address you pick here
> >> will look arbitrary (and will sooner or later raise questions). Plus
> >> by allowing the RAM above 4G to remain contiguous even for
> >> huge guests, we'd retain maximum compatibility with all sorts
> >> of guest OSes. Furthermore, did you check that we in all cases
> >> can use 40-bit (guest) physical addresses (I would think that 36
> >> is the biggest common value). Bottom line - please don't use a
> >> fixed number here.
> >>
> >
> > Where does present the 36-bit physical addresses limit, could you help to
> > point out in the current Xen code?
> 
> Look at xen/arch/x86/hvm/mtrr.c, e.g. hvm_mtrr_pat_init() or
> mtrr_var_range_msr_set().

So if the common 36-bit (guest) physical address limit cannot change, can we allocate top down from 64G? Jan, do you have any suggestion? 

Thanks,
-Xudong

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 05:57:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 05:57:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T43vu-0005id-5w; Wed, 22 Aug 2012 05:56:50 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <haitao.shan@intel.com>) id 1T43vs-0005iL-J0
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 05:56:48 +0000
Received: from [85.158.139.83:40411] by server-7.bemta-5.messagelabs.com id
	3E/C3-32634-F9474305; Wed, 22 Aug 2012 05:56:47 +0000
X-Env-Sender: haitao.shan@intel.com
X-Msg-Ref: server-3.tower-182.messagelabs.com!1345615006!29385304!1
X-Originating-IP: [192.55.52.88]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjg4ID0+IDMxMjYzMg==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18887 invoked from network); 22 Aug 2012 05:56:47 -0000
Received: from mga01.intel.com (HELO mga01.intel.com) (192.55.52.88)
	by server-3.tower-182.messagelabs.com with SMTP;
	22 Aug 2012 05:56:47 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga101.fm.intel.com with ESMTP; 21 Aug 2012 22:56:45 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.77,808,1336374000"; d="scan'208";a="208210278"
Received: from fmsmsx105.amr.corp.intel.com ([10.19.9.36])
	by fmsmga001.fm.intel.com with ESMTP; 21 Aug 2012 22:56:45 -0700
Received: from FMSMSX110.amr.corp.intel.com (10.19.9.29) by
	FMSMSX105.amr.corp.intel.com (10.19.9.36) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Tue, 21 Aug 2012 22:56:44 -0700
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	fmsmsx110.amr.corp.intel.com (10.19.9.29) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Tue, 21 Aug 2012 22:56:44 -0700
Received: from shsmsx102.ccr.corp.intel.com ([169.254.2.92]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.175]) with mapi id
	14.01.0355.002; Wed, 22 Aug 2012 13:56:40 +0800
From: "Shan, Haitao" <haitao.shan@intel.com>
To: "Zhang, Yang Z" <yang.z.zhang@intel.com>, "Zhou, Chao"
	<chao.zhou@intel.com>, Ian Campbell <Ian.Campbell@citrix.com>, Stefano
	Stabellini <Stefano.Stabellini@eu.citrix.com>
Thread-Topic: [Xen-devel] error when pass through device to guest	with
	qemu-xen-dir-remote
Thread-Index: AQHNcWQHo7KALelyBkWancTZpqx1/5dREAmQgAeA1jCADOEbAA==
Date: Wed, 22 Aug 2012 05:56:39 +0000
Message-ID: <6E28F413615167448D5D4146F2D9370B0FE063EB@SHSMSX102.ccr.corp.intel.com>
References: <A9667DDFB95DB7438FA9D7D576C3D87E203D54@SHSMSX101.ccr.corp.intel.com>
	<alpine.DEB.2.02.1208031122170.4645@kaball.uk.xensource.com>
	<1343990187.21372.48.camel@zakaz.uk.xensource.com>
	<40352EBA8B4DF841A9907B883F22B59B0FDBCE1E@SHSMSX102.ccr.corp.intel.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E207A21@SHSMSX101.ccr.corp.intel.com>
In-Reply-To: <A9667DDFB95DB7438FA9D7D576C3D87E207A21@SHSMSX101.ccr.corp.intel.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: Anthony Perard <anthony.perard@citrix.com>, "Dugger,
	Donald D" <donald.d.dugger@intel.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] error when pass through device to
	guest	with	qemu-xen-dir-remote
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi,

Do we have any conclusion on this? Is it a bug?
We are trying to contribute code to Xen, but this bug has blocked us for some time. It would be great to see it fixed.

Shan Haitao

-----Original Message-----
From: xen-devel-bounces@lists.xen.org [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Zhang, Yang Z
Sent: Tuesday, August 14, 2012 9:18 AM
To: Zhou, Chao; Ian Campbell; Stefano Stabellini
Cc: Anthony Perard; xen-devel
Subject: Re: [Xen-devel] error when pass through device to guest with qemu-xen-dir-remote

Zhou, Chao wrote on 2012-08-09:
> I rebuilt the upstream QEMU according to the wiki, but static device
> assignment doesn't work: no lspci output in the guest. However, hotplug & unplug work fine.

Hi Anthony,

We cannot see the device (via lspci) after the guest boots up with upstream QEMU. Did you see the same problem? 
We followed the steps from the wiki to build upstream QEMU and did the device assignment the old way (1. hide the device, 2. set the BDF in the config file). 

> -----Original Message----- From: xen-devel-bounces@lists.xen.org
> [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Ian Campbell Sent:
> Friday, August 03, 2012 6:36 PM To: Stefano Stabellini Cc: Zhang, Yang
> Z; Anthony Perard; xen-devel Subject: Re: [Xen-devel] error when pass
> through device to guest with qemu-xen-dir-remote
> 
> On Fri, 2012-08-03 at 11:29 +0100, Stefano Stabellini wrote:
>> On Fri, 3 Aug 2012, Zhang, Yang Z wrote:
>>> When create guest with device assigned, it shows the error and the
>>> device wasn't able to work inside guest: libxl: error:
>>> libxl_qmp.c:288:qmp_handle_error_response: received an error message
>>> from QMP server: Parameter 'driver' expects a driver name
>>> 
>>> It only fails with qemu-xen-dir-remote(Is this tree more close to
>>> upstream qemu?). I don't see the error with the traditional Qemu. I
>>> also tried qemu-upstream, but it fails when I try to enable pci
>>> pass-through
> for xen. I think Anthony's patch to add pci pass-through support for Xen is
> accepted by qemu-upstream, am I right?
>> 
>> Yes, it was accepted, but it is present only in upstream QEMU (from
>> git://git.qemu.org/qemu.git), not the tree we are currently using in
>> xen-unstable for development
>> (git://xenbits.xensource.com/qemu-upstream-unstable.git).
>> Make sure you are using the right tree!
> 
> http://wiki.xen.org/wiki/QEMU_Upstream has some notes on how to use the
> upstream qemu tree instead of our stable branch of upstream.
> 
>> 
>> Anthony is currently on vacation and is going to be back in about a
>> week.
>> 
>>> Another question:
>>> Now I am trying to add some features (relevant to pass through device) to
> Qemu, which tree should I use? Since traditional qemu is great different from
> qemu-upstream, it is too old to develop patch base on it. But besides the old one,
> I cannot find a working qemu.
>> 
>> You should use upstream QEMU, I am going to rebase our tree on that
>> early on in the 4.3 release cycle.
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


Best regards,
Yang



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 06:11:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 06:11:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T449b-0006Ff-N5; Wed, 22 Aug 2012 06:10:59 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T449a-0006F8-3U
	for Xen-devel@lists.xensource.com; Wed, 22 Aug 2012 06:10:58 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1345615851!6517119!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTAxNzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16014 invoked from network); 22 Aug 2012 06:10:51 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 06:10:51 -0000
X-IronPort-AV: E=Sophos;i="4.77,808,1336348800"; d="scan'208";a="14117135"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 06:10:50 +0000
Received: from [127.0.0.1] (10.80.16.67) by smtprelay.citrix.com
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 22 Aug 2012 07:10:50 +0100
Message-ID: <1345615849.23624.42.camel@dagon.hellion.org.uk>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Mukesh Rathor <mukesh.rathor@oracle.com>
Date: Wed, 22 Aug 2012 07:10:49 +0100
In-Reply-To: <20120821172323.38aec925@mantra.us.oracle.com>
References: <20120815180449.50410028@mantra.us.oracle.com>
	<1345196531.30865.139.camel@zakaz.uk.xensource.com>
	<20120821172323.38aec925@mantra.us.oracle.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [RFC PATCH 5/8]: PVH: smp changes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-08-22 at 01:23 +0100, Mukesh Rathor wrote:
> On Fri, 17 Aug 2012 10:42:11 +0100
> Ian Campbell <Ian.Campbell@citrix.com> wrote:
> 
> > > @@ -339,7 +343,20 @@ cpu_initialize_context(unsigned int cpu,
> > > struct task_struct *idle) (unsigned long)xen_hypervisor_callback;
> > >  		ctxt->failsafe_callback_eip = 
> > >  					(unsigned
> > > long)xen_failsafe_callback; -
> > > +	} else {
> > > +		ctxt->user_regs.ds = __KERNEL_DS;
> > > +		ctxt->user_regs.es = 0;
> > > +		ctxt->user_regs.gs = 0;
> > 
> > Not __KERNEL_DS for es too?
> 
> 64bit, es ignored, right?

Not 100% sure; the limit certainly is, but the base and the privilege
level might not be? 

The reason I mentioned it was that I could see in the !pvh case:
                ctxt->user_regs.es = __USER_DS;

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 07:32:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 07:32:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T45QA-0006lw-O3; Wed, 22 Aug 2012 07:32:10 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <jbeulich@suse.com>) id 1T45Q9-0006lr-3y
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 07:32:09 +0000
X-Env-Sender: jbeulich@suse.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1345620722!9660423!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14631 invoked from network); 22 Aug 2012 07:32:02 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-27.messagelabs.com with SMTP;
	22 Aug 2012 07:32:02 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 22 Aug 2012 08:32:01 +0100
Message-Id: <50349901020000780008A4A2@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Wed, 22 Aug 2012 08:32:01 +0100
From: "Jan Beulich" <jbeulich@suse.com>
To: <xudong.hao@intel.com>
References: <1345013682-20618-1-git-send-email-xudong.hao@intel.com>
	<502B85020200007800095036@nat28.tlf.novell.com>
	<403610A45A2B5242BD291EDAE8B37D300FE8AB13@SHSMSX102.ccr.corp.intel.com>
	<502CEFC10200007800095727@nat28.tlf.novell.com>
	<403610A45A2B5242BD291EDAE8B37D300FEA18DC@SHSMSX102.ccr.corp.intel.com>
	<502E2C9C0200007800095D33@nat28.tlf.novell.com>
	<403610A45A2B5242BD291EDAE8B37D300FEA36B0@SHSMSX102.ccr.corp.intel.com>
	<5032316202000078000965DF@nat28.tlf.novell.com>
	<403610A45A2B5242BD291EDAE8B37D300FEA5CBD@SHSMSX102.ccr.corp.intel.com>
In-Reply-To: <403610A45A2B5242BD291EDAE8B37D300FEA5CBD@SHSMSX102.ccr.corp.intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: ian.jackson@eu.citrix.com, xiantao.zhang@intel.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 1/3] xen/tools: Add 64 bits big bar support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> "Hao, Xudong" <xudong.hao@intel.com> 08/22/12 3:03 AM >>>
>> > Where does present the 36-bit physical addresses limit, could you help to
>> > point out in the current Xen code?
>> 
>> Look at xen/arch/x86/hvm/mtrr.c, e.g. hvm_mtrr_pat_init() or
>> mtrr_var_range_msr_set().
>
> So if common 36-bit(guest) physical address could not change, can we use
> top down from 64G, Jan, do you have any suggestion? 

Sorry, I already said that I think the only viable option is top down from
the top of the physical address space. No new address space holes please, if
at all possible - just do it the way real firmware would (which would be
unlikely to alter the RAM layout for this purpose).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 07:32:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 07:32:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T45QA-0006lw-O3; Wed, 22 Aug 2012 07:32:10 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <jbeulich@suse.com>) id 1T45Q9-0006lr-3y
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 07:32:09 +0000
X-Env-Sender: jbeulich@suse.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1345620722!9660423!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14631 invoked from network); 22 Aug 2012 07:32:02 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-27.messagelabs.com with SMTP;
	22 Aug 2012 07:32:02 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 22 Aug 2012 08:32:01 +0100
Message-Id: <50349901020000780008A4A2@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Wed, 22 Aug 2012 08:32:01 +0100
From: "Jan Beulich" <jbeulich@suse.com>
To: <xudong.hao@intel.com>
References: <1345013682-20618-1-git-send-email-xudong.hao@intel.com>
	<502B85020200007800095036@nat28.tlf.novell.com>
	<403610A45A2B5242BD291EDAE8B37D300FE8AB13@SHSMSX102.ccr.corp.intel.com>
	<502CEFC10200007800095727@nat28.tlf.novell.com>
	<403610A45A2B5242BD291EDAE8B37D300FEA18DC@SHSMSX102.ccr.corp.intel.com>
	<502E2C9C0200007800095D33@nat28.tlf.novell.com>
	<403610A45A2B5242BD291EDAE8B37D300FEA36B0@SHSMSX102.ccr.corp.intel.com>
	<5032316202000078000965DF@nat28.tlf.novell.com>
	<403610A45A2B5242BD291EDAE8B37D300FEA5CBD@SHSMSX102.ccr.corp.intel.com>
In-Reply-To: <403610A45A2B5242BD291EDAE8B37D300FEA5CBD@SHSMSX102.ccr.corp.intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: ian.jackson@eu.citrix.com, xiantao.zhang@intel.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 1/3] xen/tools: Add 64 bits big bar support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> "Hao, Xudong" <xudong.hao@intel.com> 08/22/12 3:03 AM >>>
>> > Where is the 36-bit physical address limit enforced? Could you help
>> > point it out in the current Xen code?
>> 
>> Look at xen/arch/x86/hvm/mtrr.c, e.g. hvm_mtrr_pat_init() or
>> mtrr_var_range_msr_set().
>
> So if the common 36-bit (guest) physical address limit cannot change, can we
> use top-down from 64G? Jan, do you have any suggestion? 

Sorry, I already said that I think the only viable option is top-down from
the top of the physical address space. No new address space holes please, if at
all possible - just do it the way real firmware would (which would be
unlikely to alter the RAM layout for this purpose).
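As a rough illustration of the scheme Jan describes - allocating high BARs top-down from the top of the physical address space instead of carving a new hole at a fixed 64G boundary - consider this minimal sketch. All names, the 36-bit width, and the interface are assumptions for illustration, not the hvmloader API:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical top-down BAR placement: start at the top of the
 * physical address space (36 address bits assumed here, matching the
 * limit discussed above) and grow downwards, keeping each BAR
 * naturally aligned to its power-of-two size. */
static uint64_t mmio_next;   /* next free address, growing down */

static void mmio_init(unsigned int phys_bits)
{
    mmio_next = 1ULL << phys_bits;   /* top of physical address space */
}

static uint64_t bar_alloc(uint64_t size)
{
    /* PCI BARs are naturally aligned to their (power-of-two) size. */
    mmio_next = (mmio_next - size) & ~(size - 1);   /* align down */
    return mmio_next;
}
```

Because allocation only ever moves downwards from the architectural limit, no new hole appears at an arbitrary boundary such as 64G.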

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 08:05:07 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 08:05:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T45vS-0007TG-0P; Wed, 22 Aug 2012 08:04:30 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jbeulich@suse.com>) id 1T45vP-0007TB-Tw
	for xen-devel@lists.xensource.com; Wed, 22 Aug 2012 08:04:28 +0000
Received: from [85.158.143.99:37913] by server-3.bemta-4.messagelabs.com id
	70/64-09529-B8294305; Wed, 22 Aug 2012 08:04:27 +0000
X-Env-Sender: jbeulich@suse.com
X-Msg-Ref: server-9.tower-216.messagelabs.com!1345622666!27329402!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19353 invoked from network); 22 Aug 2012 08:04:26 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-9.tower-216.messagelabs.com with SMTP;
	22 Aug 2012 08:04:26 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 22 Aug 2012 09:04:25 +0100
Message-Id: <5034A097020000780008A4D3@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Wed, 22 Aug 2012 09:04:23 +0100
From: "Jan Beulich" <jbeulich@suse.com>
To: <xen-devel@lists.xensource.com>,<konrad.wilk@oracle.com>
References: <patchbomb.1345579710@phenom.dumpdata.com>
	<8ed3eef706710c9c476a.1345579712@phenom.dumpdata.com>
In-Reply-To: <8ed3eef706710c9c476a.1345579712@phenom.dumpdata.com>
Mime-Version: 1.0
Content-Disposition: inline
Subject: Re: [Xen-devel] [PATCH 2 of 4] get_page_type: Print out extra
 information when failing to get page_type
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> 08/21/12 10:22 PM >>>
>get_page_type: Print out extra information when failing to get page_type.
>
>When any reference to __get_page_type is called and it fails, we get:
>
>(XEN) mm.c:2429:d0 Bad type (saw 1400000000000002 != exp 7000000000000000) for mfn 10e392 (pfn 1bf6c)
>
>with this patch we get some extra details such as:
>(XEN) debug.c:127:d0 cr3: 10d80b000, searching for 10e392
>(XEN) debug.c:111:d0 cr3(10d80b000) has mfn(10e392) in level PUD/L3: [258][272]
>(XEN) debug.c:111:d0 cr3(10d80b000) has mfn(10e392) in level PMD/L2: [258][511][511]
>(XEN) debug.c:111:d0 cr3(10d80b000) has mfn(10e392) in level PGD/L4: [272]
>(XEN) debug.c:111:d0 cr3(10d80b000) has mfn(10e392) in level PUD/L3: [511][511]
>
>where it actually is in the pagetable of the guest. This is useful
>b/c we can figure out where it is, and use that to figure out where
>the OS thinks it is.

Hmm, I'm not sure how useful this is: to me it is, first of all, non-obvious why
there are two instances of L3 here, what the inconsistent indexes really
mean, and what the repeated printing of CR3 is good for.
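For reference, the kind of search the patch performs - walking the guest's page-table tree and reporting the level and index at which a given frame is referenced - can be modelled with this toy sketch. It is purely illustrative (real Xen maps guest MFNs via map_domain_page() rather than following C pointers, and prints every match rather than stopping at the first):

```c
#include <assert.h>
#include <stdint.h>

#define PT_ENTRIES 512   /* entries per x86-64 page-table page */

/* In-memory stand-in for a page-table tree. */
struct pt {
    uint64_t e[PT_ENTRIES];          /* frame number each slot references */
    struct pt *next[PT_ENTRIES];     /* lower-level table, if any */
};

/* Depth-first search: report the first (level, index) referencing mfn. */
static int find_mfn(const struct pt *t, int level, uint64_t mfn,
                    int *lvl_out, int *idx_out)
{
    for (int i = 0; i < PT_ENTRIES; i++) {
        if (t->e[i] == mfn) {
            *lvl_out = level;
            *idx_out = i;
            return 1;
        }
        if (t->next[i] &&
            find_mfn(t->next[i], level - 1, mfn, lvl_out, idx_out))
            return 1;
    }
    return 0;
}
```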

>--- a/xen/arch/x86/debug.c    Tue Aug 21 16:08:29 2012 -0400
>+++ b/xen/arch/x86/debug.c    Tue Aug 21 16:08:29 2012 -0400

Further, putting the code in this file is likely wrong - as it says near its
top, it's a helper file for debuggers.

>--- a/xen/arch/x86/mm.c    Tue Aug 21 16:08:29 2012 -0400
>+++ b/xen/arch/x86/mm.c    Tue Aug 21 16:08:29 2012 -0400
>@@ -2422,6 +2422,8 @@ static int __put_page_type(struct page_i
>}
>
>
>+extern void dbg_pv_mfn(unsigned long mfn, struct domain *d);
>+
>static int __get_page_type(struct page_info *page, unsigned long type,
>int preemptible)
>{

No extern declarations in C files please.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 09:07:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 09:07:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T46tc-0007n5-Gm; Wed, 22 Aug 2012 09:06:40 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <avi@redhat.com>) id 1T46tb-0007my-Eo
	for xen-devel@lists.xensource.com; Wed, 22 Aug 2012 09:06:39 +0000
Received: from [85.158.143.99:60855] by server-1.bemta-4.messagelabs.com id
	67/CE-07754-E11A4305; Wed, 22 Aug 2012 09:06:38 +0000
X-Env-Sender: avi@redhat.com
X-Msg-Ref: server-9.tower-216.messagelabs.com!1345626395!27342689!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQ3MTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4821 invoked from network); 22 Aug 2012 09:06:36 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-9.tower-216.messagelabs.com with SMTP;
	22 Aug 2012 09:06:36 -0000
Received: from int-mx09.intmail.prod.int.phx2.redhat.com
	(int-mx09.intmail.prod.int.phx2.redhat.com [10.5.11.22])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id q7M962Xe013935
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 22 Aug 2012 05:06:02 -0400
Received: from balrog.usersys.tlv.redhat.com (dhcp-4-121.tlv.redhat.com
	[10.35.4.121])
	by int-mx09.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP
	id q7M95iIE027627; Wed, 22 Aug 2012 05:05:45 -0400
Message-ID: <5034A0E8.1080507@redhat.com>
Date: Wed, 22 Aug 2012 12:05:44 +0300
From: Avi Kivity <avi@redhat.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120717 Thunderbird/14.0
MIME-Version: 1.0
To: Eduardo Habkost <ehabkost@redhat.com>
References: <1345563782-11224-1-git-send-email-ehabkost@redhat.com>
In-Reply-To: <1345563782-11224-1-git-send-email-ehabkost@redhat.com>
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.22
Cc: peter.maydell@linaro.org, jan.kiszka@siemens.com, mjt@tls.msk.ru,
	qemu-devel@nongnu.org, armbru@redhat.com, blauwirbel@gmail.com,
	kraxel@redhat.com, xen-devel@lists.xensource.com,
	i.mitsyanko@samsung.com, mdroth@linux.vnet.ibm.com,
	anthony.perard@citrix.com, lersek@redhat.com,
	stefanha@linux.vnet.ibm.com, stefano.stabellini@eu.citrix.com,
	sw@weilnetz.de, imammedo@redhat.com, lcapitulino@redhat.com,
	rth@twiddle.net, kwolf@redhat.com, aliguori@us.ibm.com,
	mtosatti@redhat.com, pbonzini@redhat.com, afaerber@suse.de
Subject: Re: [Xen-devel] [RFC 0/8] include qdev core in *-user,
	make CPU child of DeviceState
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/21/2012 06:42 PM, Eduardo Habkost wrote:
> So, here's a third suggestion to the CPU/DeviceState problem. Basically I split
> the qdev code into a core (that can be easily compiled into *-user), and a part
> specific to qemu-system-*.
> 

I'm barging in late here, so sorry if this has been suggested and shot
down: is it not possible to use composition here?

  typedef ... CPU;

  typedef struct CPUState {
      DeviceState qdev;
      CPU cpu;
  } CPUState;

But I guess bringing qdev to -user is inevitable.
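As a sketch of what such composition buys (made-up struct members, not the real QEMU definitions): with DeviceState embedded as the first member, code can move between the qdev view and the CPU view with plain pointer arithmetic, no class hierarchy needed.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-ins for the real types. */
typedef struct DeviceState { int realized; } DeviceState;
typedef struct CPU { int halted; } CPU;

typedef struct CPUState {
    DeviceState qdev;   /* qdev "base" part        */
    CPU cpu;            /* architectural CPU state */
} CPUState;

/* Upcast: &s->qdev equals (void *)s because qdev is the first member. */
static DeviceState *cpu_to_qdev(CPUState *s) { return &s->qdev; }

/* Downcast, container_of-style. */
static CPUState *qdev_to_cpu(DeviceState *d)
{
    return (CPUState *)((char *)d - offsetof(CPUState, qdev));
}
```

In-tree code would normally hide the downcast behind a checked cast macro; the open-coded offsetof form above is just the underlying mechanism.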

-- 
error compiling committee.c: too many arguments to function

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 09:21:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 09:21:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T477m-0008E0-UD; Wed, 22 Aug 2012 09:21:18 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T477k-0008Du-WB
	for xen-devel@lists.xensource.com; Wed, 22 Aug 2012 09:21:17 +0000
Received: from [85.158.143.99:49919] by server-1.bemta-4.messagelabs.com id
	A9/DA-07754-C84A4305; Wed, 22 Aug 2012 09:21:16 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1345627272!26760651!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTAzMzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14150 invoked from network); 22 Aug 2012 09:21:12 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-3.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 09:21:12 -0000
X-IronPort-AV: E=Sophos;i="4.77,808,1336348800"; d="scan'208";a="14120699"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 09:21:12 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 22 Aug 2012 10:21:12 +0100
Message-ID: <1345627270.6821.125.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Wed, 22 Aug 2012 10:21:10 +0100
In-Reply-To: <b992f8e613a3401b9ddd.1345579714@phenom.dumpdata.com>
References: <patchbomb.1345579710@phenom.dumpdata.com>
	<b992f8e613a3401b9ddd.1345579714@phenom.dumpdata.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>, Jan
	Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH 4 of 4] xen/pagetables: Document pt_base
 inconsistency when running in COMPAT mode
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-08-21 at 21:08 +0100, Konrad Rzeszutek Wilk wrote:
> # HG changeset patch
> # User Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> # Date 1345579709 14400
> # Node ID b992f8e613a3401b9ddd140ce436c840d412beb7
> # Parent  74bedb086c5b72447262e087c0218b89f8bc9140
> xen/pagetables: Document pt_base inconsistency when running in COMPAT mode.
> 
> c/s 13257 added the COMPAT mode wherein a 64-bit hypervisor can
> run with a 32-bit initial domain. One of the things it added was
> that the pt_base is offset by two pages. This inconsistency
> is only present in the COMPAT mode so lets document it.
> 
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> 
> diff -r 74bedb086c5b -r b992f8e613a3 xen/include/public/xen.h
> --- a/xen/include/public/xen.h	Tue Aug 21 16:08:29 2012 -0400
> +++ b/xen/include/public/xen.h	Tue Aug 21 16:08:29 2012 -0400
> @@ -663,7 +663,7 @@ typedef struct shared_info shared_info_t
>   *      c. list of allocated page frames [mfn_list, nr_pages]
>   *         (unless relocated due to XEN_ELFNOTE_INIT_P2M)
>   *      d. start_info_t structure        [register ESI (x86)]
> - *      e. bootstrap page tables         [pt_base, CR3 (x86)]
> + *      e. bootstrap page tables         [pt_base, CR3 (x86)] *1
>   *      f. bootstrap stack               [register ESP (x86)]
>   *  4. Bootstrap elements are packed together, but each is 4kB-aligned.
>   *  5. The initial ram disk may be omitted.
> @@ -678,6 +678,9 @@ typedef struct shared_info shared_info_t
>   *
>   *  NOTE: The initial virtual region (3a -> 3f) are all mapped by the initial
>   *  pagetables [pt_base, CR3 (x86)].
> + *
> + *  *1: When booting under a 64-bit hypervisor with a 32-bit initial domain
> + *  it is offset by two pages (pt_base += PAGE_SIZE*2).

"it" here being the page which you would have to load into cr3?

What, if anything, ends up in the other two pages?
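For concreteness, the adjustment the hunk documents amounts to no more than the following (a minimal sketch with assumed names; it deliberately leaves open what the two skipped pages contain):

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SIZE 4096u

/* Sketch of the documented COMPAT-mode quirk: the pt_base seen by a
 * 32-bit initial domain under a 64-bit hypervisor is advanced past the
 * first two bootstrap page-table pages.  Names are illustrative. */
static uint64_t guest_pt_base(uint64_t pt_alloc_base, int is_compat)
{
    return is_compat ? pt_alloc_base + 2 * PAGE_SIZE : pt_alloc_base;
}
```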

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 10:04:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 10:04:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T47ms-00008E-Cn; Wed, 22 Aug 2012 10:03:46 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xudong.hao@intel.com>) id 1T47mr-000089-Jz
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 10:03:45 +0000
Received: from [85.158.143.35:38650] by server-3.bemta-4.messagelabs.com id
	00/25-09529-08EA4305; Wed, 22 Aug 2012 10:03:44 +0000
X-Env-Sender: xudong.hao@intel.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1345629586!15750038!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzYwMzc0\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26745 invoked from network); 22 Aug 2012 09:59:46 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-14.tower-21.messagelabs.com with SMTP;
	22 Aug 2012 09:59:46 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga102.jf.intel.com with ESMTP; 22 Aug 2012 02:59:43 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.77,808,1336374000"; d="scan'208";a="189766222"
Received: from fmsmsx103.amr.corp.intel.com ([10.19.9.34])
	by orsmga002.jf.intel.com with ESMTP; 22 Aug 2012 02:59:30 -0700
Received: from shsmsx101.ccr.corp.intel.com (10.239.4.153) by
	FMSMSX103.amr.corp.intel.com (10.19.9.34) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Wed, 22 Aug 2012 02:59:29 -0700
Received: from shsmsx102.ccr.corp.intel.com ([169.254.2.92]) by
	SHSMSX101.ccr.corp.intel.com ([169.254.1.89]) with mapi id
	14.01.0355.002; Wed, 22 Aug 2012 17:59:28 +0800
From: "Hao, Xudong" <xudong.hao@intel.com>
To: Jan Beulich <jbeulich@suse.com>
Thread-Topic: [Xen-devel] [PATCH 1/3] xen/tools: Add 64 bits big bar support
Thread-Index: AQHNerKKEOT7HWjBmke6+J0K3zDOkJdaEb8AgAItqqD//4LEgIAB4cJw//+X9gCABNO7cP//9qoAAGC+ZBD//+ixgP//VqIw
Date: Wed, 22 Aug 2012 09:59:27 +0000
Message-ID: <403610A45A2B5242BD291EDAE8B37D300FEA5EAB@SHSMSX102.ccr.corp.intel.com>
References: <1345013682-20618-1-git-send-email-xudong.hao@intel.com>
	<502B85020200007800095036@nat28.tlf.novell.com>
	<403610A45A2B5242BD291EDAE8B37D300FE8AB13@SHSMSX102.ccr.corp.intel.com>
	<502CEFC10200007800095727@nat28.tlf.novell.com>
	<403610A45A2B5242BD291EDAE8B37D300FEA18DC@SHSMSX102.ccr.corp.intel.com>
	<502E2C9C0200007800095D33@nat28.tlf.novell.com>
	<403610A45A2B5242BD291EDAE8B37D300FEA36B0@SHSMSX102.ccr.corp.intel.com>
	<5032316202000078000965DF@nat28.tlf.novell.com>
	<403610A45A2B5242BD291EDAE8B37D300FEA5CBD@SHSMSX102.ccr.corp.intel.com>
	<50349901020000780008A4A2@nat28.tlf.novell.com>
In-Reply-To: <50349901020000780008A4A2@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "ian.jackson@eu.citrix.com" <ian.jackson@eu.citrix.com>, "Zhang,
	Xiantao" <xiantao.zhang@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 1/3] xen/tools: Add 64 bits big bar support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Jan Beulich [mailto:jbeulich@suse.com]
> Sent: Wednesday, August 22, 2012 3:32 PM
> To: Hao, Xudong
> Cc: ian.jackson@eu.citrix.com; Zhang, Xiantao; xen-devel@lists.xen.org
> Subject: RE: [Xen-devel] [PATCH 1/3] xen/tools: Add 64 bits big bar support
> 
> >>> "Hao, Xudong" <xudong.hao@intel.com> 08/22/12 3:03 AM >>>
> >> > Where does the 36-bit physical address limit show up? Could you help
> >> > point it out in the current Xen code?
> >>
> >> Look at xen/arch/x86/hvm/mtrr.c, e.g. hvm_mtrr_pat_init() or
> >> mtrr_var_range_msr_set().
> >
> > So if the common 36-bit (guest) physical address width cannot change, can we
> > allocate top down from 64G? Jan, do you have any suggestions?
> 
> Sorry, I already said that I think the only viable option is top down from
> the top of the physical address space. No new address space holes, please, if
> at all possible - just do it the way real firmware would (which would be
> unlikely to alter the RAM layout for this purpose).
> 

If a PCIe device has a BAR of 64G or larger, how can we place it within the current 36-bit limit? Should we consider extending the guest physical address width?
On hosts whose real physical address space is wider than 40 bits, this issue does not arise.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 10:25:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 10:25:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4877-0000JG-Br; Wed, 22 Aug 2012 10:24:41 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T4876-0000JB-GR
	for xen-devel@lists.xensource.com; Wed, 22 Aug 2012 10:24:40 +0000
Received: from [85.158.139.83:60004] by server-6.bemta-5.messagelabs.com id
	7E/83-22415-763B4305; Wed, 22 Aug 2012 10:24:39 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-10.tower-182.messagelabs.com!1345631077!27727644!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTAzMzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11369 invoked from network); 22 Aug 2012 10:24:37 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-10.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 10:24:37 -0000
X-IronPort-AV: E=Sophos;i="4.77,808,1336348800"; d="scan'208";a="14122292"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 10:24:06 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 22 Aug 2012 11:24:06 +0100
Date: Wed, 22 Aug 2012 11:23:44 +0100
From: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: "qemu-devel@nongnu.org" <qemu-devel@nongnu.org>
Message-ID: <alpine.DEB.2.02.1208221120260.15568@kaball.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	dongxiao.xu@intel.com, frediano.ziglio@citrix.com,
	Anthony Liguori <anthony@codemonkey.ws>,
	Anthony Perard <anthony.perard@citrix.com>
Subject: [Xen-devel] [PULL] Xen fixes 2012-08-22
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Anthony,
please pull two fixes to xen-all.c:

git://xenbits.xen.org/people/sstabellini/qemu-dm.git xen-fixes-20120822


Dongxiao Xu (1):
      xen-all.c: fix multiply issue for int and uint types

Frediano Ziglio (1):
      Fix invalidate if memory requested was not bucket aligned

 xen-all.c      |   24 ++++++++++++++++--------
 xen-mapcache.c |    9 +++++----
 2 files changed, 21 insertions(+), 12 deletions(-)


Cheers,

Stefano

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 10:27:19 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 10:27:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T489G-0000Oc-Su; Wed, 22 Aug 2012 10:26:54 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T489E-0000OX-T8
	for xen-devel@lists.xensource.com; Wed, 22 Aug 2012 10:26:53 +0000
Received: from [85.158.139.83:23046] by server-2.bemta-5.messagelabs.com id
	B3/DB-10142-CE3B4305; Wed, 22 Aug 2012 10:26:52 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-182.messagelabs.com!1345631210!28908469!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTAzMzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17953 invoked from network); 22 Aug 2012 10:26:51 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-13.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 10:26:51 -0000
X-IronPort-AV: E=Sophos;i="4.77,808,1336348800"; d="scan'208";a="14122351"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 10:26:50 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 22 Aug 2012 11:26:50 +0100
Message-ID: <1345631207.6821.140.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: David Miller <davem@davemloft.net>
Date: Wed, 22 Aug 2012 11:26:47 +0100
In-Reply-To: <20120808.155046.820543563969484712.davem@davemloft.net>
References: <20120807085554.GF29814@suse.de>
	<20120808.155046.820543563969484712.davem@davemloft.net>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-mm@kvack.org" <linux-mm@kvack.org>,
	"mgorman@suse.de" <mgorman@suse.de>,
	"konrad@darnok.org" <konrad@darnok.org>,
	"akpm@linux-foundation.org" <akpm@linux-foundation.org>
Subject: Re: [Xen-devel] [PATCH] netvm: check for page == NULL when
 propogating the skb->pfmemalloc flag
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-08-08 at 23:50 +0100, David Miller wrote:
> Just use something like a call to __pskb_pull_tail(skb, len) and all
> that other crap around that area can simply be deleted.

I think you mean something like this, which works for me, although I've
only lightly tested it.

Ian.

8<----------------------------------------

>From 9e47e3e87a33b45974448649a97859a479183041 Mon Sep 17 00:00:00 2001
From: Ian Campbell <ian.campbell@citrix.com>
Date: Wed, 22 Aug 2012 10:15:29 +0100
Subject: [PATCH] xen-netfront: use __pskb_pull_tail to ensure linear area is big enough on RX

I'm slightly concerned by the "only in exceptional circumstances"
comment on __pskb_pull_tail but the structure of an skb just created
by netfront shouldn't hit any of the especially slow cases.

This approach still does slightly more work than the old way: if we pull up
the entire first frag we now have to shuffle everything down, whereas before
we simply received the data into the right place to begin with.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: xen-devel@lists.xensource.com
Cc: netdev@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
---
 drivers/net/xen-netfront.c |   39 ++++++++++-----------------------------
 1 files changed, 10 insertions(+), 29 deletions(-)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index 3089990..650f79a 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -57,8 +57,7 @@
 static const struct ethtool_ops xennet_ethtool_ops;
 
 struct netfront_cb {
-	struct page *page;
-	unsigned offset;
+	int pull_to;
 };
 
 #define NETFRONT_SKB_CB(skb)	((struct netfront_cb *)((skb)->cb))
@@ -867,15 +866,9 @@ static int handle_incoming_queue(struct net_device *dev,
 	struct sk_buff *skb;
 
 	while ((skb = __skb_dequeue(rxq)) != NULL) {
-		struct page *page = NETFRONT_SKB_CB(skb)->page;
-		void *vaddr = page_address(page);
-		unsigned offset = NETFRONT_SKB_CB(skb)->offset;
-
-		memcpy(skb->data, vaddr + offset,
-		       skb_headlen(skb));
+		int pull_to = NETFRONT_SKB_CB(skb)->pull_to;
 
-		if (page != skb_frag_page(&skb_shinfo(skb)->frags[0]))
-			__free_page(page);
+		__pskb_pull_tail(skb, pull_to - skb_headlen(skb));
 
 		/* Ethernet work: Delayed to here as it peeks the header. */
 		skb->protocol = eth_type_trans(skb, dev);
@@ -913,7 +906,6 @@ static int xennet_poll(struct napi_struct *napi, int budget)
 	struct sk_buff_head errq;
 	struct sk_buff_head tmpq;
 	unsigned long flags;
-	unsigned int len;
 	int err;
 
 	spin_lock(&np->rx_lock);
@@ -955,24 +947,13 @@ err:
 			}
 		}
 
-		NETFRONT_SKB_CB(skb)->page =
-			skb_frag_page(&skb_shinfo(skb)->frags[0]);
-		NETFRONT_SKB_CB(skb)->offset = rx->offset;
-
-		len = rx->status;
-		if (len > RX_COPY_THRESHOLD)
-			len = RX_COPY_THRESHOLD;
-		skb_put(skb, len);
+		NETFRONT_SKB_CB(skb)->pull_to = rx->status;
+		if (NETFRONT_SKB_CB(skb)->pull_to > RX_COPY_THRESHOLD)
+			NETFRONT_SKB_CB(skb)->pull_to = RX_COPY_THRESHOLD;
 
-		if (rx->status > len) {
-			skb_shinfo(skb)->frags[0].page_offset =
-				rx->offset + len;
-			skb_frag_size_set(&skb_shinfo(skb)->frags[0], rx->status - len);
-			skb->data_len = rx->status - len;
-		} else {
-			__skb_fill_page_desc(skb, 0, NULL, 0, 0);
-			skb_shinfo(skb)->nr_frags = 0;
-		}
+		skb_shinfo(skb)->frags[0].page_offset = rx->offset;
+		skb_frag_size_set(&skb_shinfo(skb)->frags[0], rx->status);
+		skb->data_len = rx->status;
 
 		i = xennet_fill_frags(np, skb, &tmpq);
 
@@ -999,7 +980,7 @@ err:
 		 * receive throughout using the standard receive
 		 * buffer size was cut by 25%(!!!).
 		 */
-		skb->truesize += skb->data_len - (RX_COPY_THRESHOLD - len);
+		skb->truesize += skb->data_len - RX_COPY_THRESHOLD;
 		skb->len += skb->data_len;
 
 		if (rx->flags & XEN_NETRXF_csum_blank)
-- 
1.7.2.5




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 10:30:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 10:30:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T48Co-0000Z2-Gw; Wed, 22 Aug 2012 10:30:34 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T48Cn-0000Yr-2e
	for xen-devel@lists.xensource.com; Wed, 22 Aug 2012 10:30:33 +0000
Received: from [85.158.139.83:62178] by server-5.bemta-5.messagelabs.com id
	10/B2-31019-8C4B4305; Wed, 22 Aug 2012 10:30:32 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-14.tower-182.messagelabs.com!1345631431!25190112!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTAzMzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24159 invoked from network); 22 Aug 2012 10:30:31 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-14.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 10:30:31 -0000
X-IronPort-AV: E=Sophos;i="4.77,808,1336348800"; d="scan'208";a="14122459"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 10:30:31 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 22 Aug 2012 11:30:30 +0100
Date: Wed, 22 Aug 2012 11:30:09 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Thomas Gleixner <tglx@linutronix.de>
In-Reply-To: <alpine.LFD.2.02.1208212315330.2856@ionos>
Message-ID: <alpine.DEB.2.02.1208221129490.15568@kaball.uk.xensource.com>
References: <1345580561-8506-1-git-send-email-attilio.rao@citrix.com>
	<alpine.LFD.2.02.1208212315330.2856@ionos>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "x86@kernel.org" <x86@kernel.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"konrad.wilk@oracle.com" <konrad.wilk@oracle.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"mingo@redhat.com" <mingo@redhat.com>, "hpa@zytor.com" <hpa@zytor.com>,
	Attilio Rao <attilio.rao@citrix.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [PATCH v2 0/5] X86/XEN: Merge
 x86_init.paging.pagetable_setup_start and
 x86_init.paging.pagetable_setup_done setup functions and document its
 semantic
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 21 Aug 2012, Thomas Gleixner wrote:
> On Tue, 21 Aug 2012, Attilio Rao wrote:
> > Differences with v1:
> > - The patch series is re-arranged in a way that helps review, following
> >   a plan by Thomas Gleixner
> > - The PVOPS nomenclature is not used as it is not correct
> > - The front-end message is adjusted with feedback by Thomas Gleixner,
> >   Stefano Stabellini and Konrad Rzeszutek Wilk 
> 
> This is much simpler to read and review. Just have a look at the
> diffstats of the two series:
> 
>  6 files changed,  9 insertions(+),  8 deletions(-)
>  6 files changed, 11 insertions(+),  9 deletions(-)
>  5 files changed, 50 insertions(+),  2 deletions(-)
>  6 files changed,  2 insertions(+), 65 deletions(-)
>  1 files changed,  5 insertions(+),  0 deletions(-)
> 
> versus
> 
>  6 files changed, 10 insertions(+),  9 deletions(-)
>  6 files changed, 11 insertions(+), 11 deletions(-)
>  5 files changed,  3 insertions(+),  3 deletions(-)
>  6 files changed,  4 insertions(+), 20 deletions(-)
>  1 files changed,  5 insertions(+),  0 deletions(-)
> 
> The overall result is basically the same, but it's way simpler to look
> at obvious and well done patches than checking whether a subtle copy
> and paste bug happened in 3/5 of the first version. Copy and paste is
> the #1 cause for subtle bugs. :)
> 
> I'm waiting for the ack of Xen folks before taking it into tip.

They are OK by me. Konrad?

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 10:50:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 10:50:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T48W8-0000p4-HE; Wed, 22 Aug 2012 10:50:32 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T48W6-0000oz-PL
	for xen-devel@lists.xensource.com; Wed, 22 Aug 2012 10:50:31 +0000
Received: from [85.158.139.83:20257] by server-4.bemta-5.messagelabs.com id
	BC/9D-12386-579B4305; Wed, 22 Aug 2012 10:50:29 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-8.tower-182.messagelabs.com!1345632628!18184919!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTAzMzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8852 invoked from network); 22 Aug 2012 10:50:29 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 10:50:29 -0000
X-IronPort-AV: E=Sophos;i="4.77,808,1336348800"; d="scan'208";a="14122906"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 10:49:02 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 22 Aug 2012 11:49:02 +0100
Date: Wed, 22 Aug 2012 11:48:40 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <20120821190317.GA13035@phenom.dumpdata.com>
Message-ID: <alpine.DEB.2.02.1208221146230.15568@kaball.uk.xensource.com>
References: <1345133009-21941-1-git-send-email-konrad.wilk@oracle.com>
	<1345133009-21941-3-git-send-email-konrad.wilk@oracle.com>
	<alpine.DEB.2.02.1208171835010.15568@kaball.uk.xensource.com>
	<20120820141305.GA2713@phenom.dumpdata.com>
	<20120821172732.GA23715@phenom.dumpdata.com>
	<20120821190317.GA13035@phenom.dumpdata.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"JBeulich@suse.com" <JBeulich@suse.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] Q:pt_base in COMPAT mode offset by two pages.
 Was:Re: [PATCH 02/11] xen/x86: Use memblock_reserve for sensitive areas.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 21 Aug 2012, Konrad Rzeszutek Wilk wrote:
> On Tue, Aug 21, 2012 at 01:27:32PM -0400, Konrad Rzeszutek Wilk wrote:
> > On Mon, Aug 20, 2012 at 10:13:05AM -0400, Konrad Rzeszutek Wilk wrote:
> > > On Fri, Aug 17, 2012 at 06:35:12PM +0100, Stefano Stabellini wrote:
> > > > On Thu, 16 Aug 2012, Konrad Rzeszutek Wilk wrote:
> > > > > instead of a big memblock_reserve. This way we can be more
> > > > > selective in freeing regions (and it also makes it easier
> > > > > to understand where is what).
> > > > > 
> > > > > [v1: Move the auto_translate_physmap to proper line]
> > > > > [v2: Per Stefano suggestion add more comments]
> > > > > Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > > > 
> > > > much better now!
> > > 
> > > Though, interestingly enough, it breaks 32-bit dom0s (and only dom0s).
> > > Will have a revised patch posted shortly.
> > 
> > Jan, I noticed something odd. Part of this code replaces this:
> > 
> > 	memblock_reserve(__pa(xen_start_info->mfn_list),
> > 		xen_start_info->pt_base - xen_start_info->mfn_list);
> > 
> > with a more region-by-region approach. What I found out is that if I boot this
> > as a 32-bit guest with a 64-bit hypervisor, the xen_start_info->pt_base is
> > actually wrong.
> > 
> > Specifically this is what bootup says:
> > 
> > (good working case - 32bit hypervisor with 32-bit dom0):
> > (XEN)  Loaded kernel: c1000000->c1a23000
> > (XEN)  Init. ramdisk: c1a23000->cf730e00
> > (XEN)  Phys-Mach map: cf731000->cf831000
> > (XEN)  Start info:    cf831000->cf83147c
> > (XEN)  Page tables:   cf832000->cf8b5000
> > (XEN)  Boot stack:    cf8b5000->cf8b6000
> > (XEN)  TOTAL:         c0000000->cfc00000
> > 
> > [    0.000000] PT: cf832000 (f832000)
> > [    0.000000] Reserving PT: f832000->f8b5000
> > 
> > And with a 64-bit hypervisor:
> > 
> > (XEN) VIRTUAL MEMORY ARRANGEMENT:
> > (XEN)  Loaded kernel: 00000000c1000000->00000000c1a23000
> > (XEN)  Init. ramdisk: 00000000c1a23000->00000000cf730e00
> > (XEN)  Phys-Mach map: 00000000cf731000->00000000cf831000
> > (XEN)  Start info:    00000000cf831000->00000000cf8314b4
> > (XEN)  Page tables:   00000000cf832000->00000000cf8b6000
> > (XEN)  Boot stack:    00000000cf8b6000->00000000cf8b7000
> > (XEN)  TOTAL:         00000000c0000000->00000000cfc00000
> > (XEN)  ENTRY ADDRESS: 00000000c16bb22c
> > 
> > [    0.000000] PT: cf834000 (f834000)
> > [    0.000000] Reserving PT: f834000->f8b8000
> > 
> > So the pt_base is offset by two pages. And looking at c/s 13257,
> > it's not clear to me why this two-page offset was added.
> > 
> > The toolstack works fine - launching 32-bit guests under either
> > a 32-bit or a 64-bit hypervisor works fine:
> > ] domainbuilder: detail: xc_dom_alloc_segment:   page tables  : 0xcf805000 -> 0xcf885000  (pfn 0xf805 + 0x80 pages)
> > [    0.000000] PT: cf805000 (f805000)
> > 
> 
> And this patch on top of the others fixes this..
> 
> 
> >From 806c312e50f122c47913145cf884f53dd09d9199 Mon Sep 17 00:00:00 2001
> From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Date: Tue, 21 Aug 2012 14:31:24 -0400
> Subject: [PATCH] xen/x86: Workaround 64-bit hypervisor and 32-bit initial
>  domain.
> 
> If a 64-bit hypervisor is booted with a 32-bit initial domain,
> the hypervisor deals with the initial domain as "compat" and
> does some extra adjustments (like pagetables are 4 bytes instead
> of 8). It also adjusts the xen_start_info->pt_base incorrectly.
> 
> When booted with a 32-bit hypervisor (32-bit initial domain):
> ..
> (XEN)  Start info:    cf831000->cf83147c
> (XEN)  Page tables:   cf832000->cf8b5000
> ..
> [    0.000000] PT: cf832000 (f832000)
> [    0.000000] Reserving PT: f832000->f8b5000
> 
> And with a 64-bit hypervisor:
> (XEN)  Start info:    00000000cf831000->00000000cf8314b4
> (XEN)  Page tables:   00000000cf832000->00000000cf8b6000
> 
> [    0.000000] PT: cf834000 (f834000)
> [    0.000000] Reserving PT: f834000->f8b8000
> 
> To deal with this, we keep track of the highest physical
> address we have reserved via memblock_reserve. If that address
> does not overlap with pt_base, we have a gap which we reserve.
> 
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> ---
>  arch/x86/xen/enlighten.c |   30 +++++++++++++++++++++---------
>  1 files changed, 21 insertions(+), 9 deletions(-)
> 
> diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
> index e532eb5..511f92d 100644
> --- a/arch/x86/xen/enlighten.c
> +++ b/arch/x86/xen/enlighten.c
> @@ -1002,19 +1002,24 @@ static int xen_write_msr_safe(unsigned int msr, unsigned low, unsigned high)
>   * If the MFN is not in the m2p (provided to us by the hypervisor) this
>   * function won't do anything. In practice this means that the XenBus
>   * MFN won't be available for the initial domain. */
> -static void __init xen_reserve_mfn(unsigned long mfn)
> +static unsigned long __init xen_reserve_mfn(unsigned long mfn)
>  {
> -	unsigned long pfn;
> +	unsigned long pfn, end_pfn = 0;
>  
>  	if (!mfn)
> -		return;
> +		return end_pfn;
> +
>  	pfn = mfn_to_pfn(mfn);
> -	if (phys_to_machine_mapping_valid(pfn))
> -		memblock_reserve(PFN_PHYS(pfn), PAGE_SIZE);
> +	if (phys_to_machine_mapping_valid(pfn)) {
> +		end_pfn = PFN_PHYS(pfn) + PAGE_SIZE;
> +		memblock_reserve(PFN_PHYS(pfn), PAGE_SIZE);
> +	}
> +	return end_pfn;
>  }
>  static void __init xen_reserve_internals(void)
>  {
>  	unsigned long size;
> +	unsigned long last_phys = 0;
>  
>  	if (!xen_pv_domain())
>  		return;
> @@ -1022,12 +1027,13 @@ static void __init xen_reserve_internals(void)
>  	/* xen_start_info does not exist in the M2P, hence can't use
>  	 * xen_reserve_mfn. */
>  	memblock_reserve(__pa(xen_start_info), PAGE_SIZE);
> +	last_phys = __pa(xen_start_info) + PAGE_SIZE;
>  
> -	xen_reserve_mfn(PFN_DOWN(xen_start_info->shared_info));
> -	xen_reserve_mfn(xen_start_info->store_mfn);
> +	last_phys = max(xen_reserve_mfn(PFN_DOWN(xen_start_info->shared_info)), last_phys);
> +	last_phys = max(xen_reserve_mfn(xen_start_info->store_mfn), last_phys);
>  
>  	if (!xen_initial_domain())
> -		xen_reserve_mfn(xen_start_info->console.domU.mfn);
> +		last_phys = max(xen_reserve_mfn(xen_start_info->console.domU.mfn), last_phys);
>  
>  	if (xen_feature(XENFEAT_auto_translated_physmap))
>  		return;
> @@ -1043,8 +1049,14 @@ static void __init xen_reserve_internals(void)
>  	 * a lot (and call memblock_reserve for each PAGE), so lets just use
>  	 * the easy way and reserve it wholesale. */
>  	memblock_reserve(__pa(xen_start_info->mfn_list), size);
> -
> +	last_phys = max(__pa(xen_start_info->mfn_list) + size, last_phys);
>  	/* The pagetables are reserved in mmu.c */
> +
> +	/* Under a 64-bit hypervisor with a 32-bit domain, the hypervisor
> +	 * offsets the pt_base by two pages. Hence the reservation that is done
> +	 * in mmu.c misses two pages. We correct it here if we detect this. */
> +	if (last_phys < __pa(xen_start_info->pt_base))
> +		memblock_reserve(last_phys, __pa(xen_start_info->pt_base) - last_phys);
>  }

What are these two pages used for? They are not documented in xen.h; why
should we reserve them?

After all we still have:

memblock_reserve(PFN_PHYS(pt_base), (pt_end - pt_base) * PAGE_SIZE);

that should protect what we are interested in anyway...
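[Editor's note: the gap-detection logic the patch adds, and the pt_base reservation Stefano refers to, can be sketched outside the kernel as below. memblock_reserve() is replaced by a hypothetical recorder that only tracks the highest reserved end address, so this is an illustration of the overlap reasoning, not the kernel implementation.]

```c
#include <assert.h>

/* Hypothetical stand-in for memblock_reserve(): remember only the
 * highest end address reserved so far. */
static unsigned long reserved_end;

static void fake_memblock_reserve(unsigned long base, unsigned long size)
{
	if (base + size > reserved_end)
		reserved_end = base + size;
}

/* Mirrors the patch: after the per-region reservations, if the highest
 * reserved address (last_phys) still lies below pt_base, the gap the
 * 64-bit hypervisor introduces (two pages in the reported case) is
 * reserved explicitly. */
static void reserve_gap(unsigned long last_phys, unsigned long pt_base_phys)
{
	if (last_phys < pt_base_phys)
		fake_memblock_reserve(last_phys, pt_base_phys - last_phys);
}
```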

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 10:55:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 10:55:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T48aG-0000wR-72; Wed, 22 Aug 2012 10:54:48 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <anthony.perard@gmail.com>) id 1T48aE-0000wK-A1
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 10:54:46 +0000
Received: from [85.158.143.35:59639] by server-2.bemta-4.messagelabs.com id
	05/A4-21239-57AB4305; Wed, 22 Aug 2012 10:54:45 +0000
X-Env-Sender: anthony.perard@gmail.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1345632874!12482221!1
X-Originating-IP: [209.85.213.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14027 invoked from network); 22 Aug 2012 10:54:35 -0000
Received: from mail-yw0-f45.google.com (HELO mail-yw0-f45.google.com)
	(209.85.213.45)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 10:54:35 -0000
Received: by yhpp34 with SMTP id p34so598375yhp.32
	for <xen-devel@lists.xen.org>; Wed, 22 Aug 2012 03:54:34 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:from:date
	:x-google-sender-auth:message-id:subject:to:cc:content-type;
	bh=tEK5Zv3M125gjf7NSdMbKnVjJi8A50mfMcpkagbTWjY=;
	b=fb/8zF4YSNACJkQt2bQMG3iWW1JN233InlNsq39WB9gmZHLL3xSbkW0064J22pMbH2
	QvyLP8Vft6l73YHMkSbyh2rhsIZvy+O0Vh2ouuYt18Ol3ed35vb8E3YXstGEkuDxGq/U
	q9rXO4NdA8TBpV+3YCiFd1MZQR2nW9HKIq7BU9RPZyObmodIBvoQYOvPNZEktB0idlaa
	SUSlMhmz0OWoUI3p15NEHXmaK+PyU8azX71eg8zcB7OeX7QVyKSD+PQkS9LrT+n0Lxm5
	Ybwb+u9kSdHABpJ60bFlSDofur1VTLzwestNaDL/YALcdnMJx4b7qFRMHhCKLOif76/w
	72iQ==
Received: by 10.50.202.5 with SMTP id ke5mr1560427igc.64.1345632873572; Wed,
	22 Aug 2012 03:54:33 -0700 (PDT)
MIME-Version: 1.0
Received: by 10.64.126.7 with HTTP; Wed, 22 Aug 2012 03:54:03 -0700 (PDT)
In-Reply-To: <6E28F413615167448D5D4146F2D9370B0FE063EB@SHSMSX102.ccr.corp.intel.com>
References: <A9667DDFB95DB7438FA9D7D576C3D87E203D54@SHSMSX101.ccr.corp.intel.com>
	<alpine.DEB.2.02.1208031122170.4645@kaball.uk.xensource.com>
	<1343990187.21372.48.camel@zakaz.uk.xensource.com>
	<40352EBA8B4DF841A9907B883F22B59B0FDBCE1E@SHSMSX102.ccr.corp.intel.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E207A21@SHSMSX101.ccr.corp.intel.com>
	<6E28F413615167448D5D4146F2D9370B0FE063EB@SHSMSX102.ccr.corp.intel.com>
From: Anthony PERARD <anthony.perard@citrix.com>
Date: Wed, 22 Aug 2012 11:54:03 +0100
X-Google-Sender-Auth: 5xOeBnrT8rfDTg4UzIUhr6zFwj4
Message-ID: <CAJJyHjKOXSQB65jk3am_0z=_25G7KSyd+i4MKDoZowcTfNW+=w@mail.gmail.com>
To: "Shan, Haitao" <haitao.shan@intel.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>, "Dugger,
	Donald D" <donald.d.dugger@intel.com>,
	xen-devel <xen-devel@lists.xen.org>, "Zhou,
	Chao" <chao.zhou@intel.com>, "Zhang, Yang Z" <yang.z.zhang@intel.com>
Subject: Re: [Xen-devel] error when pass through device to guest with
	qemu-xen-dir-remote
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Aug 22, 2012 at 6:56 AM, Shan, Haitao <haitao.shan@intel.com> wrote:
> Do we have any conclusion on this? A bug?
> We are trying to contribute codes to Xen. However, this bug has blocked us for some time. It would be great to see this fixed.

I just tried to start a guest with a network card passed through, and it
seems to work fine (the card is present in `lspci`). Some debug output
could help us understand the issue. Try running `xl -vvv create
....`.

Regards,

-- 
Anthony PERARD

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 11:08:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 11:08:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T48mq-0001BP-H2; Wed, 22 Aug 2012 11:07:48 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T48mp-0001BH-5n
	for xen-devel@lists.xensource.com; Wed, 22 Aug 2012 11:07:47 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1345633647!10176431!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTAzMzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2207 invoked from network); 22 Aug 2012 11:07:27 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 11:07:27 -0000
X-IronPort-AV: E=Sophos;i="4.77,808,1336348800"; d="scan'208";a="14123292"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 11:07:27 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 22 Aug 2012 12:07:27 +0100
Date: Wed, 22 Aug 2012 12:07:05 +0100
From: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Message-ID: <alpine.DEB.2.02.1208221159560.15568@kaball.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Content-ID: <alpine.DEB.2.02.1208221159562.15568@kaball.uk.xensource.com>
Cc: Tim Deegan <tim@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH v4 0/6] ARM hypercall ABI: 64 bit ready
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi all,
this patch series makes the necessary changes to make sure that the
current ARM hypercall ABI can be used as-is on 64 bit ARM platforms:

- it defines xen_ulong_t as uint64_t on ARM;
- it introduces a new macro to handle guest pointers, called
XEN_GUEST_HANDLE_PARAM (that has size 4 bytes on aarch and is going to
have size 8 bytes on aarch64);
- it introduces two new macros to convert XEN_GUEST_HANDLE_PARAM to
XEN_GUEST_HANDLE and vice versa;
- it replaces all the occurrences of XEN_GUEST_HANDLE in hypercall
parameters with XEN_GUEST_HANDLE_PARAM.


On x86 and ia64 things should stay exactly the same.

On ARM, all the unsigned longs and guest pointers that are members of
a struct become 8 bytes in size (on both aarch and aarch64).
However, guest pointers passed as hypercall arguments in
registers are 4 bytes on aarch and 8 bytes on aarch64.

It is based on Ian's arm-for-4.3 branch. 


Changes in v4:
- make both XEN_GUEST_HANDLE and XEN_GUEST_HANDLE_PARAM unions on ARM;
- simplify set_xen_guest_handle_raw on ARM;
- add type checking in guest_handle_to_param and guest_handle_from_param
(all architectures).

Changes in v3:
- default all the guest_handle_* conversion macros to
  XEN_GUEST_HANDLE_PARAM as return type;
- add two new guest_handle_to_param and guest_handle_from_param macros
  to do conversions.

Changes in v2:
- do not use an anonymous union in struct xen_add_to_physmap; 
- do not replace the unsigned long in x86 specific calls;
- do not replace the unsigned long in multicall_entry;
- add missing include "xen.h" in version.h;
- use proper printf flag for xen_ulong_t in python/xen/lowlevel/xc/xc;
- add 2 missing #define _XEN_GUEST_HANDLE_PARAM for the compilation of
the compat code;
- add a patch to limit the maximum number of extents handled by
do_memory_op;
- remove the patch "introduce __lshrdi3 and __aeabi_llsr" that is
already in the for-4.3 branch.



Stefano Stabellini (6):
      xen: improve changes to xen_add_to_physmap
      xen: xen_ulong_t substitution
      xen: change the limit of nr_extents to UINT_MAX >> MEMOP_EXTENT_SHIFT
      xen: introduce XEN_GUEST_HANDLE_PARAM
      xen: replace XEN_GUEST_HANDLE with XEN_GUEST_HANDLE_PARAM when appropriate
      xen: more substitutions

 tools/firmware/hvmloader/pci.c           |    2 +-
 tools/python/xen/lowlevel/xc/xc.c        |    2 +-
 xen/arch/arm/domain.c                    |    2 +-
 xen/arch/arm/domctl.c                    |    2 +-
 xen/arch/arm/hvm.c                       |    2 +-
 xen/arch/arm/mm.c                        |    4 +-
 xen/arch/arm/physdev.c                   |    2 +-
 xen/arch/arm/sysctl.c                    |    2 +-
 xen/arch/x86/compat.c                    |    2 +-
 xen/arch/x86/cpu/mcheck/mce.c            |    2 +-
 xen/arch/x86/domain.c                    |    2 +-
 xen/arch/x86/domctl.c                    |    2 +-
 xen/arch/x86/efi/runtime.c               |    2 +-
 xen/arch/x86/hvm/hvm.c                   |   26 +++++++-------
 xen/arch/x86/microcode.c                 |    2 +-
 xen/arch/x86/mm.c                        |   36 ++++++++++++--------
 xen/arch/x86/mm/hap/hap.c                |    2 +-
 xen/arch/x86/mm/mem_event.c              |    2 +-
 xen/arch/x86/mm/paging.c                 |    2 +-
 xen/arch/x86/mm/shadow/common.c          |    2 +-
 xen/arch/x86/oprofile/backtrace.c        |    4 ++-
 xen/arch/x86/oprofile/xenoprof.c         |    6 ++--
 xen/arch/x86/physdev.c                   |    2 +-
 xen/arch/x86/platform_hypercall.c        |   10 ++++--
 xen/arch/x86/sysctl.c                    |    2 +-
 xen/arch/x86/traps.c                     |    2 +-
 xen/arch/x86/x86_32/mm.c                 |    2 +-
 xen/arch/x86/x86_32/traps.c              |    2 +-
 xen/arch/x86/x86_64/compat/mm.c          |   16 ++++++---
 xen/arch/x86/x86_64/cpu_idle.c           |    4 ++-
 xen/arch/x86/x86_64/cpufreq.c            |    4 ++-
 xen/arch/x86/x86_64/domain.c             |    2 +-
 xen/arch/x86/x86_64/mm.c                 |    2 +-
 xen/arch/x86/x86_64/platform_hypercall.c |    1 +
 xen/arch/x86/x86_64/traps.c              |    2 +-
 xen/common/compat/domain.c               |    2 +-
 xen/common/compat/grant_table.c          |    8 ++--
 xen/common/compat/memory.c               |    4 +-
 xen/common/compat/multicall.c            |    1 +
 xen/common/domain.c                      |    2 +-
 xen/common/domctl.c                      |    2 +-
 xen/common/event_channel.c               |    2 +-
 xen/common/grant_table.c                 |   36 ++++++++++----------
 xen/common/kernel.c                      |    4 +-
 xen/common/kexec.c                       |   16 ++++----
 xen/common/memory.c                      |    6 ++--
 xen/common/multicall.c                   |    2 +-
 xen/common/schedule.c                    |    2 +-
 xen/common/sysctl.c                      |    2 +-
 xen/common/xenoprof.c                    |    8 ++--
 xen/drivers/acpi/pmstat.c                |    2 +-
 xen/drivers/char/console.c               |    6 ++--
 xen/drivers/passthrough/iommu.c          |    2 +-
 xen/include/asm-arm/guest_access.h       |   32 ++++++++++++++++--
 xen/include/asm-arm/hypercall.h          |    2 +-
 xen/include/asm-arm/mm.h                 |    2 +-
 xen/include/asm-x86/guest_access.h       |   29 ++++++++++++++--
 xen/include/asm-x86/hap.h                |    2 +-
 xen/include/asm-x86/hypercall.h          |   24 +++++++-------
 xen/include/asm-x86/mem_event.h          |    2 +-
 xen/include/asm-x86/mm.h                 |    8 ++--
 xen/include/asm-x86/paging.h             |    2 +-
 xen/include/asm-x86/processor.h          |    2 +-
 xen/include/asm-x86/shadow.h             |    2 +-
 xen/include/asm-x86/xenoprof.h           |    6 ++--
 xen/include/public/arch-arm.h            |   32 ++++++++++++++----
 xen/include/public/arch-ia64.h           |    9 +++++
 xen/include/public/arch-x86/xen.h        |    9 +++++
 xen/include/public/memory.h              |   11 ++++--
 xen/include/public/version.h             |    4 ++-
 xen/include/xen/acpi.h                   |    4 +-
 xen/include/xen/hypercall.h              |   52 +++++++++++++++---------------
 xen/include/xen/iommu.h                  |    2 +-
 xen/include/xen/tmem_xen.h               |    2 +-
 xen/include/xen/xencomm.h                |   22 ++++++++++++-
 xen/include/xsm/xsm.h                    |    4 +-
 xen/xsm/dummy.c                          |    2 +-
 xen/xsm/flask/flask_op.c                 |    4 +-
 xen/xsm/flask/hooks.c                    |    2 +-
 xen/xsm/xsm_core.c                       |    2 +-
 80 files changed, 336 insertions(+), 206 deletions(-)

Cheers,

Stefano

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 11:08:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 11:08:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T48nX-0001DN-Uj; Wed, 22 Aug 2012 11:08:31 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T48nW-0001DE-Lh
	for xen-devel@lists.xensource.com; Wed, 22 Aug 2012 11:08:30 +0000
Received: from [85.158.143.99:27349] by server-2.bemta-4.messagelabs.com id
	EF/53-21239-EADB4305; Wed, 22 Aug 2012 11:08:30 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-15.tower-216.messagelabs.com!1345633707!27328660!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjU4OTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6188 invoked from network); 22 Aug 2012 11:08:29 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 11:08:29 -0000
X-IronPort-AV: E=Sophos;i="4.77,808,1336363200"; d="scan'208";a="35423308"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 07:08:21 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 22 Aug 2012 07:08:21 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1T48nH-0003lL-RK;
	Wed, 22 Aug 2012 12:08:15 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Wed, 22 Aug 2012 12:08:08 +0100
Message-ID: <1345633688-31684-6-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208221159560.15568@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208221159560.15568@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: tim@xen.org, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH v4 6/6] xen: more substitutions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

More substitutions in this patch, not as obvious as the ones in the
previous patch.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/arch/x86/mm.c                        |   12 +++++++++---
 xen/arch/x86/oprofile/backtrace.c        |    4 +++-
 xen/arch/x86/platform_hypercall.c        |    8 ++++++--
 xen/arch/x86/x86_64/cpu_idle.c           |    4 +++-
 xen/arch/x86/x86_64/cpufreq.c            |    4 +++-
 xen/arch/x86/x86_64/platform_hypercall.c |    1 +
 xen/common/compat/multicall.c            |    1 +
 7 files changed, 26 insertions(+), 8 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 4d72700..088db11 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -3198,7 +3198,9 @@ int do_mmuext_op(
         {
             cpumask_t pmask;
 
-            if ( unlikely(vcpumask_to_pcpumask(d, op.arg2.vcpumask, &pmask)) )
+            if ( unlikely(vcpumask_to_pcpumask(d,
+                            guest_handle_to_param(op.arg2.vcpumask, const_void),
+                            &pmask)) )
             {
                 okay = 0;
                 break;
@@ -4484,6 +4486,7 @@ static int handle_iomem_range(unsigned long s, unsigned long e, void *p)
     if ( s > ctxt->s )
     {
         e820entry_t ent;
+        XEN_GUEST_HANDLE_PARAM(e820entry_t) buffer_t;
         XEN_GUEST_HANDLE(e820entry_t) buffer;
 
         if ( ctxt->n + 1 >= ctxt->map.nr_entries )
@@ -4491,7 +4494,8 @@ static int handle_iomem_range(unsigned long s, unsigned long e, void *p)
         ent.addr = (uint64_t)ctxt->s << PAGE_SHIFT;
         ent.size = (uint64_t)(s - ctxt->s) << PAGE_SHIFT;
         ent.type = E820_RESERVED;
-        buffer = guest_handle_cast(ctxt->map.buffer, e820entry_t);
+        buffer_t = guest_handle_cast(ctxt->map.buffer, e820entry_t);
+        buffer = guest_handle_from_param(buffer_t, e820entry_t);
         if ( __copy_to_guest_offset(buffer, ctxt->n, &ent, 1) )
             return -EFAULT;
         ctxt->n++;
@@ -4790,6 +4794,7 @@ long arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg)
     {
         struct memory_map_context ctxt;
         XEN_GUEST_HANDLE(e820entry_t) buffer;
+        XEN_GUEST_HANDLE_PARAM(e820entry_t) buffer_t;
         unsigned int i;
 
         if ( !IS_PRIV(current->domain) )
@@ -4804,7 +4809,8 @@ long arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg)
         if ( ctxt.map.nr_entries < e820.nr_map + 1 )
             return -EINVAL;
 
-        buffer = guest_handle_cast(ctxt.map.buffer, e820entry_t);
+        buffer_t = guest_handle_cast(ctxt.map.buffer, e820entry_t);
+        buffer = guest_handle_from_param(buffer_t, e820entry_t);
         if ( !guest_handle_okay(buffer, ctxt.map.nr_entries) )
             return -EFAULT;
 
diff --git a/xen/arch/x86/oprofile/backtrace.c b/xen/arch/x86/oprofile/backtrace.c
index 33fd142..699cd28 100644
--- a/xen/arch/x86/oprofile/backtrace.c
+++ b/xen/arch/x86/oprofile/backtrace.c
@@ -80,8 +80,10 @@ dump_guest_backtrace(struct vcpu *vcpu, const struct frame_head *head,
     else
 #endif
     {
-        XEN_GUEST_HANDLE(const_frame_head_t) guest_head =
+        XEN_GUEST_HANDLE(const_frame_head_t) guest_head;
+        XEN_GUEST_HANDLE_PARAM(const_frame_head_t) guest_head_t =
             const_guest_handle_from_ptr(head, frame_head_t);
+        guest_head = guest_handle_from_param(guest_head_t, const_frame_head_t);
 
         /* Also check accessibility of one struct frame_head beyond */
         if (!guest_handle_okay(guest_head, 2))
diff --git a/xen/arch/x86/platform_hypercall.c b/xen/arch/x86/platform_hypercall.c
index a32e0a2..2994b12 100644
--- a/xen/arch/x86/platform_hypercall.c
+++ b/xen/arch/x86/platform_hypercall.c
@@ -185,7 +185,9 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PARAM(xen_platform_op_t) u_xenpf_op)
             }
         }
 
-        ret = microcode_update(data, op->u.microcode.length);
+        ret = microcode_update(
+                guest_handle_to_param(data, const_void),
+                op->u.microcode.length);
         spin_unlock(&vcpu_alloc_lock);
     }
     break;
@@ -448,7 +450,9 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PARAM(xen_platform_op_t) u_xenpf_op)
             XEN_GUEST_HANDLE(uint32) pdc;
 
             guest_from_compat_handle(pdc, op->u.set_pminfo.u.pdc);
-            ret = acpi_set_pdc_bits(op->u.set_pminfo.id, pdc);
+            ret = acpi_set_pdc_bits(
+                    op->u.set_pminfo.id,
+                    guest_handle_to_param(pdc, uint32));
         }
         break;
 
diff --git a/xen/arch/x86/x86_64/cpu_idle.c b/xen/arch/x86/x86_64/cpu_idle.c
index 3e7422f..1cdaf96 100644
--- a/xen/arch/x86/x86_64/cpu_idle.c
+++ b/xen/arch/x86/x86_64/cpu_idle.c
@@ -57,10 +57,12 @@ static int copy_from_compat_state(xen_processor_cx_t *xen_state,
 {
 #define XLAT_processor_cx_HNDL_dp(_d_, _s_) do { \
     XEN_GUEST_HANDLE(compat_processor_csd_t) dps; \
+    XEN_GUEST_HANDLE_PARAM(xen_processor_csd_t) dps_t; \
     if ( unlikely(!compat_handle_okay((_s_)->dp, (_s_)->dpcnt)) ) \
             return -EFAULT; \
     guest_from_compat_handle(dps, (_s_)->dp); \
-    (_d_)->dp = guest_handle_cast(dps, xen_processor_csd_t); \
+    dps_t = guest_handle_cast(dps, xen_processor_csd_t); \
+    (_d_)->dp = guest_handle_from_param(dps_t, xen_processor_csd_t); \
 } while (0)
     XLAT_processor_cx(xen_state, state);
 #undef XLAT_processor_cx_HNDL_dp
diff --git a/xen/arch/x86/x86_64/cpufreq.c b/xen/arch/x86/x86_64/cpufreq.c
index ce9864e..1956777 100644
--- a/xen/arch/x86/x86_64/cpufreq.c
+++ b/xen/arch/x86/x86_64/cpufreq.c
@@ -45,10 +45,12 @@ compat_set_px_pminfo(uint32_t cpu, struct compat_processor_performance *perf)
 
 #define XLAT_processor_performance_HNDL_states(_d_, _s_) do { \
     XEN_GUEST_HANDLE(compat_processor_px_t) states; \
+    XEN_GUEST_HANDLE_PARAM(xen_processor_px_t) states_t; \
     if ( unlikely(!compat_handle_okay((_s_)->states, (_s_)->state_count)) ) \
         return -EFAULT; \
     guest_from_compat_handle(states, (_s_)->states); \
-    (_d_)->states = guest_handle_cast(states, xen_processor_px_t); \
+    states_t = guest_handle_cast(states, xen_processor_px_t); \
+    (_d_)->states = guest_handle_from_param(states_t, xen_processor_px_t); \
 } while (0)
 
     XLAT_processor_performance(xen_perf, perf);
diff --git a/xen/arch/x86/x86_64/platform_hypercall.c b/xen/arch/x86/x86_64/platform_hypercall.c
index 188aa37..f577761 100644
--- a/xen/arch/x86/x86_64/platform_hypercall.c
+++ b/xen/arch/x86/x86_64/platform_hypercall.c
@@ -38,6 +38,7 @@ CHECK_pf_pcpu_version;
 
 #define COMPAT
 #define _XEN_GUEST_HANDLE(t) XEN_GUEST_HANDLE(t)
+#define _XEN_GUEST_HANDLE_PARAM(t) XEN_GUEST_HANDLE(t)
 typedef int ret_t;
 
 #include "../platform_hypercall.c"
diff --git a/xen/common/compat/multicall.c b/xen/common/compat/multicall.c
index 0eb1212..72db213 100644
--- a/xen/common/compat/multicall.c
+++ b/xen/common/compat/multicall.c
@@ -24,6 +24,7 @@ DEFINE_XEN_GUEST_HANDLE(multicall_entry_compat_t);
 #define call                 compat_call
 #define do_multicall(l, n)   compat_multicall(_##l, n)
 #define _XEN_GUEST_HANDLE(t) XEN_GUEST_HANDLE(t)
+#define _XEN_GUEST_HANDLE_PARAM(t) XEN_GUEST_HANDLE(t)
 
 #include "../multicall.c"
 
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 11:08:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 11:08:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T48nX-0001DN-Uj; Wed, 22 Aug 2012 11:08:31 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T48nW-0001DE-Lh
	for xen-devel@lists.xensource.com; Wed, 22 Aug 2012 11:08:30 +0000
Received: from [85.158.143.99:27349] by server-2.bemta-4.messagelabs.com id
	EF/53-21239-EADB4305; Wed, 22 Aug 2012 11:08:30 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-15.tower-216.messagelabs.com!1345633707!27328660!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjU4OTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6188 invoked from network); 22 Aug 2012 11:08:29 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 11:08:29 -0000
X-IronPort-AV: E=Sophos;i="4.77,808,1336363200"; d="scan'208";a="35423308"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 07:08:21 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 22 Aug 2012 07:08:21 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1T48nH-0003lL-RK;
	Wed, 22 Aug 2012 12:08:15 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Wed, 22 Aug 2012 12:08:08 +0100
Message-ID: <1345633688-31684-6-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208221159560.15568@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208221159560.15568@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: tim@xen.org, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH v4 6/6] xen: more substitutions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

More substitutions in this patch, not as obvious as the ones in the
previous patch.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/arch/x86/mm.c                        |   12 +++++++++---
 xen/arch/x86/oprofile/backtrace.c        |    4 +++-
 xen/arch/x86/platform_hypercall.c        |    8 ++++++--
 xen/arch/x86/x86_64/cpu_idle.c           |    4 +++-
 xen/arch/x86/x86_64/cpufreq.c            |    4 +++-
 xen/arch/x86/x86_64/platform_hypercall.c |    1 +
 xen/common/compat/multicall.c            |    1 +
 7 files changed, 26 insertions(+), 8 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 4d72700..088db11 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -3198,7 +3198,9 @@ int do_mmuext_op(
         {
             cpumask_t pmask;
 
-            if ( unlikely(vcpumask_to_pcpumask(d, op.arg2.vcpumask, &pmask)) )
+            if ( unlikely(vcpumask_to_pcpumask(d,
+                            guest_handle_to_param(op.arg2.vcpumask, const_void),
+                            &pmask)) )
             {
                 okay = 0;
                 break;
@@ -4484,6 +4486,7 @@ static int handle_iomem_range(unsigned long s, unsigned long e, void *p)
     if ( s > ctxt->s )
     {
         e820entry_t ent;
+        XEN_GUEST_HANDLE_PARAM(e820entry_t) buffer_t;
         XEN_GUEST_HANDLE(e820entry_t) buffer;
 
         if ( ctxt->n + 1 >= ctxt->map.nr_entries )
@@ -4491,7 +4494,8 @@ static int handle_iomem_range(unsigned long s, unsigned long e, void *p)
         ent.addr = (uint64_t)ctxt->s << PAGE_SHIFT;
         ent.size = (uint64_t)(s - ctxt->s) << PAGE_SHIFT;
         ent.type = E820_RESERVED;
-        buffer = guest_handle_cast(ctxt->map.buffer, e820entry_t);
+        buffer_t = guest_handle_cast(ctxt->map.buffer, e820entry_t);
+        buffer = guest_handle_from_param(buffer_t, e820entry_t);
         if ( __copy_to_guest_offset(buffer, ctxt->n, &ent, 1) )
             return -EFAULT;
         ctxt->n++;
@@ -4790,6 +4794,7 @@ long arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg)
     {
         struct memory_map_context ctxt;
         XEN_GUEST_HANDLE(e820entry_t) buffer;
+        XEN_GUEST_HANDLE_PARAM(e820entry_t) buffer_t;
         unsigned int i;
 
         if ( !IS_PRIV(current->domain) )
@@ -4804,7 +4809,8 @@ long arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg)
         if ( ctxt.map.nr_entries < e820.nr_map + 1 )
             return -EINVAL;
 
-        buffer = guest_handle_cast(ctxt.map.buffer, e820entry_t);
+        buffer_t = guest_handle_cast(ctxt.map.buffer, e820entry_t);
+        buffer = guest_handle_from_param(buffer_t, e820entry_t);
         if ( !guest_handle_okay(buffer, ctxt.map.nr_entries) )
             return -EFAULT;
 
diff --git a/xen/arch/x86/oprofile/backtrace.c b/xen/arch/x86/oprofile/backtrace.c
index 33fd142..699cd28 100644
--- a/xen/arch/x86/oprofile/backtrace.c
+++ b/xen/arch/x86/oprofile/backtrace.c
@@ -80,8 +80,10 @@ dump_guest_backtrace(struct vcpu *vcpu, const struct frame_head *head,
     else
 #endif
     {
-        XEN_GUEST_HANDLE(const_frame_head_t) guest_head =
+        XEN_GUEST_HANDLE(const_frame_head_t) guest_head;
+        XEN_GUEST_HANDLE_PARAM(const_frame_head_t) guest_head_t =
             const_guest_handle_from_ptr(head, frame_head_t);
+        guest_head = guest_handle_from_param(guest_head_t, const_frame_head_t);
 
         /* Also check accessibility of one struct frame_head beyond */
         if (!guest_handle_okay(guest_head, 2))
diff --git a/xen/arch/x86/platform_hypercall.c b/xen/arch/x86/platform_hypercall.c
index a32e0a2..2994b12 100644
--- a/xen/arch/x86/platform_hypercall.c
+++ b/xen/arch/x86/platform_hypercall.c
@@ -185,7 +185,9 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PARAM(xen_platform_op_t) u_xenpf_op)
             }
         }
 
-        ret = microcode_update(data, op->u.microcode.length);
+        ret = microcode_update(
+                guest_handle_to_param(data, const_void),
+                op->u.microcode.length);
         spin_unlock(&vcpu_alloc_lock);
     }
     break;
@@ -448,7 +450,9 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PARAM(xen_platform_op_t) u_xenpf_op)
             XEN_GUEST_HANDLE(uint32) pdc;
 
             guest_from_compat_handle(pdc, op->u.set_pminfo.u.pdc);
-            ret = acpi_set_pdc_bits(op->u.set_pminfo.id, pdc);
+            ret = acpi_set_pdc_bits(
+                    op->u.set_pminfo.id,
+                    guest_handle_to_param(pdc, uint32));
         }
         break;
 
diff --git a/xen/arch/x86/x86_64/cpu_idle.c b/xen/arch/x86/x86_64/cpu_idle.c
index 3e7422f..1cdaf96 100644
--- a/xen/arch/x86/x86_64/cpu_idle.c
+++ b/xen/arch/x86/x86_64/cpu_idle.c
@@ -57,10 +57,12 @@ static int copy_from_compat_state(xen_processor_cx_t *xen_state,
 {
 #define XLAT_processor_cx_HNDL_dp(_d_, _s_) do { \
     XEN_GUEST_HANDLE(compat_processor_csd_t) dps; \
+    XEN_GUEST_HANDLE_PARAM(xen_processor_csd_t) dps_t; \
     if ( unlikely(!compat_handle_okay((_s_)->dp, (_s_)->dpcnt)) ) \
             return -EFAULT; \
     guest_from_compat_handle(dps, (_s_)->dp); \
-    (_d_)->dp = guest_handle_cast(dps, xen_processor_csd_t); \
+    dps_t = guest_handle_cast(dps, xen_processor_csd_t); \
+    (_d_)->dp = guest_handle_from_param(dps_t, xen_processor_csd_t); \
 } while (0)
     XLAT_processor_cx(xen_state, state);
 #undef XLAT_processor_cx_HNDL_dp
diff --git a/xen/arch/x86/x86_64/cpufreq.c b/xen/arch/x86/x86_64/cpufreq.c
index ce9864e..1956777 100644
--- a/xen/arch/x86/x86_64/cpufreq.c
+++ b/xen/arch/x86/x86_64/cpufreq.c
@@ -45,10 +45,12 @@ compat_set_px_pminfo(uint32_t cpu, struct compat_processor_performance *perf)
 
 #define XLAT_processor_performance_HNDL_states(_d_, _s_) do { \
     XEN_GUEST_HANDLE(compat_processor_px_t) states; \
+    XEN_GUEST_HANDLE_PARAM(xen_processor_px_t) states_t; \
     if ( unlikely(!compat_handle_okay((_s_)->states, (_s_)->state_count)) ) \
         return -EFAULT; \
     guest_from_compat_handle(states, (_s_)->states); \
-    (_d_)->states = guest_handle_cast(states, xen_processor_px_t); \
+    states_t = guest_handle_cast(states, xen_processor_px_t); \
+    (_d_)->states = guest_handle_from_param(states_t, xen_processor_px_t); \
 } while (0)
 
     XLAT_processor_performance(xen_perf, perf);
diff --git a/xen/arch/x86/x86_64/platform_hypercall.c b/xen/arch/x86/x86_64/platform_hypercall.c
index 188aa37..f577761 100644
--- a/xen/arch/x86/x86_64/platform_hypercall.c
+++ b/xen/arch/x86/x86_64/platform_hypercall.c
@@ -38,6 +38,7 @@ CHECK_pf_pcpu_version;
 
 #define COMPAT
 #define _XEN_GUEST_HANDLE(t) XEN_GUEST_HANDLE(t)
+#define _XEN_GUEST_HANDLE_PARAM(t) XEN_GUEST_HANDLE(t)
 typedef int ret_t;
 
 #include "../platform_hypercall.c"
diff --git a/xen/common/compat/multicall.c b/xen/common/compat/multicall.c
index 0eb1212..72db213 100644
--- a/xen/common/compat/multicall.c
+++ b/xen/common/compat/multicall.c
@@ -24,6 +24,7 @@ DEFINE_XEN_GUEST_HANDLE(multicall_entry_compat_t);
 #define call                 compat_call
 #define do_multicall(l, n)   compat_multicall(_##l, n)
 #define _XEN_GUEST_HANDLE(t) XEN_GUEST_HANDLE(t)
+#define _XEN_GUEST_HANDLE_PARAM(t) XEN_GUEST_HANDLE(t)
 
 #include "../multicall.c"
 
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 11:08:47 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 11:08:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T48nZ-0001Dr-Fm; Wed, 22 Aug 2012 11:08:33 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T48nX-0001DK-Sl
	for xen-devel@lists.xensource.com; Wed, 22 Aug 2012 11:08:32 +0000
Received: from [85.158.143.99:48686] by server-1.bemta-4.messagelabs.com id
	64/28-07754-FADB4305; Wed, 22 Aug 2012 11:08:31 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-15.tower-216.messagelabs.com!1345633707!27328660!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjU4OTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6278 invoked from network); 22 Aug 2012 11:08:30 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 11:08:30 -0000
X-IronPort-AV: E=Sophos;i="4.77,808,1336363200"; d="scan'208";a="35423309"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 07:08:21 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 22 Aug 2012 07:08:21 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1T48nH-0003lL-Ou;
	Wed, 22 Aug 2012 12:08:15 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Wed, 22 Aug 2012 12:08:06 +0100
Message-ID: <1345633688-31684-4-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208221159560.15568@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208221159560.15568@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: tim@xen.org, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH v4 4/6] xen: introduce XEN_GUEST_HANDLE_PARAM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

XEN_GUEST_HANDLE_PARAM is going to be used to distinguish guest pointers
stored in memory from guest pointers as hypercall parameters.

guest_handle_* macros default to XEN_GUEST_HANDLE_PARAM as return type.
Two new guest_handle_to_param and guest_handle_from_param macros are
introduced to do conversions.

Changes in v2:

- add 2 missing #define _XEN_GUEST_HANDLE_PARAM needed to compile the
  compat code.

Changes in v3:

- move the guest_handle_cast change into this patch;
- add a clear comment on top of guest_handle_cast;
- also s/XEN_GUEST_HANDLE/XEN_GUEST_HANDLE_PARAM in
  guest_handle_from_ptr and const_guest_handle_from_ptr;
- introduce guest_handle_from_param and guest_handle_to_param.

Changes in v4:

- make both XEN_GUEST_HANDLE and XEN_GUEST_HANDLE_PARAM unions on ARM;
- simplify set_xen_guest_handle_raw on ARM;
- add trivial type checking in guest_handle_to_param and
  guest_handle_from_param.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/include/asm-arm/guest_access.h |   32 ++++++++++++++++++++++++++++----
 xen/include/asm-x86/guest_access.h |   29 +++++++++++++++++++++++++----
 xen/include/public/arch-arm.h      |   26 ++++++++++++++++++++++----
 xen/include/public/arch-ia64.h     |    9 +++++++++
 xen/include/public/arch-x86/xen.h  |    9 +++++++++
 xen/include/xen/xencomm.h          |   22 +++++++++++++++++++++-
 6 files changed, 114 insertions(+), 13 deletions(-)

diff --git a/xen/include/asm-arm/guest_access.h b/xen/include/asm-arm/guest_access.h
index 0fceae6..5686217 100644
--- a/xen/include/asm-arm/guest_access.h
+++ b/xen/include/asm-arm/guest_access.h
@@ -27,16 +27,40 @@ unsigned long raw_clear_guest(void *to, unsigned len);
 #define guest_handle_add_offset(hnd, nr) ((hnd).p += (nr))
 #define guest_handle_subtract_offset(hnd, nr) ((hnd).p -= (nr))
 
-/* Cast a guest handle to the specified type of handle. */
+/* Cast a guest handle (either XEN_GUEST_HANDLE or XEN_GUEST_HANDLE_PARAM)
+ * to the specified type of XEN_GUEST_HANDLE_PARAM. */
 #define guest_handle_cast(hnd, type) ({         \
     type *_x = (hnd).p;                         \
-    (XEN_GUEST_HANDLE(type)) { _x };            \
+    (XEN_GUEST_HANDLE_PARAM(type)) { _x };            \
+})
+
+/* Cast a XEN_GUEST_HANDLE to XEN_GUEST_HANDLE_PARAM */
+#define guest_handle_to_param(hnd, type) ({                  \
+    typeof((hnd).p) _x = (hnd).p;                            \
+    XEN_GUEST_HANDLE_PARAM(type) _y = { _x };                \
+    /* type checking: make sure that the pointers inside     \
+     * XEN_GUEST_HANDLE and XEN_GUEST_HANDLE_PARAM are of    \
+     * the same type, then return hnd */                     \
+    (void)(&_x == &_y.p);                                    \
+    _y;                                                      \
+})
+
+
+/* Cast a XEN_GUEST_HANDLE_PARAM to XEN_GUEST_HANDLE */
+#define guest_handle_from_param(hnd, type) ({               \
+    typeof((hnd).p) _x = (hnd).p;                           \
+    XEN_GUEST_HANDLE(type) _y = { _x };                     \
+    /* type checking: make sure that the pointers inside    \
+     * XEN_GUEST_HANDLE and XEN_GUEST_HANDLE_PARAM are of   \
+     * the same type, then return hnd */                    \
+    (void)(&_x == &_y.p);                                   \
+    _y;                                                     \
 })
 
 #define guest_handle_from_ptr(ptr, type)        \
-    ((XEN_GUEST_HANDLE(type)) { (type *)ptr })
+    ((XEN_GUEST_HANDLE_PARAM(type)) { (type *)ptr })
 #define const_guest_handle_from_ptr(ptr, type)  \
-    ((XEN_GUEST_HANDLE(const_##type)) { (const type *)ptr })
+    ((XEN_GUEST_HANDLE_PARAM(const_##type)) { (const type *)ptr })
 
 /*
  * Copy an array of objects to guest context via a guest handle,
diff --git a/xen/include/asm-x86/guest_access.h b/xen/include/asm-x86/guest_access.h
index 2b429c2..e67ed82 100644
--- a/xen/include/asm-x86/guest_access.h
+++ b/xen/include/asm-x86/guest_access.h
@@ -45,16 +45,37 @@
 #define guest_handle_add_offset(hnd, nr) ((hnd).p += (nr))
 #define guest_handle_subtract_offset(hnd, nr) ((hnd).p -= (nr))
 
-/* Cast a guest handle to the specified type of handle. */
+/* Cast a guest handle (either XEN_GUEST_HANDLE or XEN_GUEST_HANDLE_PARAM)
+ * to the specified type of XEN_GUEST_HANDLE_PARAM. */
 #define guest_handle_cast(hnd, type) ({         \
     type *_x = (hnd).p;                         \
-    (XEN_GUEST_HANDLE(type)) { _x };            \
+    (XEN_GUEST_HANDLE_PARAM(type)) { _x };            \
+})
+
+/* Cast a XEN_GUEST_HANDLE to XEN_GUEST_HANDLE_PARAM */
+#define guest_handle_to_param(hnd, type) ({                  \
+    /* type checking: make sure that the pointers inside     \
+     * XEN_GUEST_HANDLE and XEN_GUEST_HANDLE_PARAM are of    \
+     * the same type, then return hnd */                     \
+    (void)((typeof(&(hnd).p)) 0 ==                           \
+        (typeof(&((XEN_GUEST_HANDLE_PARAM(type)) {}).p)) 0); \
+    (hnd);                                                   \
+})
+
+/* Cast a XEN_GUEST_HANDLE_PARAM to XEN_GUEST_HANDLE */
+#define guest_handle_from_param(hnd, type) ({                \
+    /* type checking: make sure that the pointers inside     \
+     * XEN_GUEST_HANDLE and XEN_GUEST_HANDLE_PARAM are of    \
+     * the same type, then return hnd */                     \
+    (void)((typeof(&(hnd).p)) 0 ==                           \
+        (typeof(&((XEN_GUEST_HANDLE_PARAM(type)) {}).p)) 0); \
+    (hnd);                                                   \
 })
 
 #define guest_handle_from_ptr(ptr, type)        \
-    ((XEN_GUEST_HANDLE(type)) { (type *)ptr })
+    ((XEN_GUEST_HANDLE_PARAM(type)) { (type *)ptr })
 #define const_guest_handle_from_ptr(ptr, type)  \
-    ((XEN_GUEST_HANDLE(const_##type)) { (const type *)ptr })
+    ((XEN_GUEST_HANDLE_PARAM(const_##type)) { (const type *)ptr })
 
 /*
  * Copy an array of objects to guest context via a guest handle,
diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
index 2ae6548..4c6d607 100644
--- a/xen/include/public/arch-arm.h
+++ b/xen/include/public/arch-arm.h
@@ -51,18 +51,36 @@
 
 #define XEN_HYPERCALL_TAG   0XEA1
 
+#define uint64_aligned_t uint64_t __attribute__((aligned(8)))
 
 #ifndef __ASSEMBLY__
-#define ___DEFINE_XEN_GUEST_HANDLE(name, type) \
-    typedef struct { type *p; } __guest_handle_ ## name
+#define ___DEFINE_XEN_GUEST_HANDLE(name, type)                  \
+    typedef union { type *p; unsigned long q; }                 \
+        __guest_handle_ ## name;                                \
+    typedef union { type *p; uint64_aligned_t q; }              \
+        __guest_handle_64_ ## name;
 
+/*
+ * XEN_GUEST_HANDLE represents a guest pointer, when passed as a field
+ * in a struct in memory. On ARM it is always 8 bytes in size and
+ * 8-byte aligned.
+ * XEN_GUEST_HANDLE_PARAM represents a guest pointer, when passed as a
+ * hypercall argument. It is 4 bytes on aarch32 and 8 bytes on aarch64.
+ */
 #define __DEFINE_XEN_GUEST_HANDLE(name, type) \
     ___DEFINE_XEN_GUEST_HANDLE(name, type);   \
     ___DEFINE_XEN_GUEST_HANDLE(const_##name, const type)
 #define DEFINE_XEN_GUEST_HANDLE(name)   __DEFINE_XEN_GUEST_HANDLE(name, name)
-#define __XEN_GUEST_HANDLE(name)        __guest_handle_ ## name
+#define __XEN_GUEST_HANDLE(name)        __guest_handle_64_ ## name
 #define XEN_GUEST_HANDLE(name)          __XEN_GUEST_HANDLE(name)
-#define set_xen_guest_handle_raw(hnd, val)  do { (hnd).p = val; } while (0)
+/* this is going to be changed on 64-bit */
+#define XEN_GUEST_HANDLE_PARAM(name)    __guest_handle_ ## name
+#define set_xen_guest_handle_raw(hnd, val)                  \
+    do {                                                    \
+        typeof(&(hnd)) _t = &(hnd);                         \
+        _t->q = 0;                                          \
+        _t->p = val;                                        \
+    } while ( 0 )
 #ifdef __XEN_TOOLS__
 #define get_xen_guest_handle(val, hnd)  do { val = (hnd).p; } while (0)
 #endif
diff --git a/xen/include/public/arch-ia64.h b/xen/include/public/arch-ia64.h
index c9da5d4..e4e9688 100644
--- a/xen/include/public/arch-ia64.h
+++ b/xen/include/public/arch-ia64.h
@@ -45,8 +45,17 @@
     ___DEFINE_XEN_GUEST_HANDLE(name, type);   \
     ___DEFINE_XEN_GUEST_HANDLE(const_##name, const type)
 
+/*
+ * XEN_GUEST_HANDLE represents a guest pointer, when passed as a field
+ * in a struct in memory.
+ * XEN_GUEST_HANDLE_PARAM represents a guest pointer, when passed as a
+ * hypercall argument.
+ * XEN_GUEST_HANDLE_PARAM and XEN_GUEST_HANDLE are the same on ia64 but
+ * they might not be on other architectures.
+ */
 #define DEFINE_XEN_GUEST_HANDLE(name)   __DEFINE_XEN_GUEST_HANDLE(name, name)
 #define XEN_GUEST_HANDLE(name)          __guest_handle_ ## name
+#define XEN_GUEST_HANDLE_PARAM(name)    XEN_GUEST_HANDLE(name)
 #define XEN_GUEST_HANDLE_64(name)       XEN_GUEST_HANDLE(name)
 #define uint64_aligned_t                uint64_t
 #define set_xen_guest_handle_raw(hnd, val)  do { (hnd).p = val; } while (0)
diff --git a/xen/include/public/arch-x86/xen.h b/xen/include/public/arch-x86/xen.h
index 1c186d7..0e10260 100644
--- a/xen/include/public/arch-x86/xen.h
+++ b/xen/include/public/arch-x86/xen.h
@@ -38,12 +38,21 @@
     typedef type * __guest_handle_ ## name
 #endif
 
+/*
+ * XEN_GUEST_HANDLE represents a guest pointer, when passed as a field
+ * in a struct in memory.
+ * XEN_GUEST_HANDLE_PARAM represents a guest pointer, when passed as a
+ * hypercall argument.
+ * XEN_GUEST_HANDLE_PARAM and XEN_GUEST_HANDLE are the same on X86 but
+ * they might not be on other architectures.
+ */
 #define __DEFINE_XEN_GUEST_HANDLE(name, type) \
     ___DEFINE_XEN_GUEST_HANDLE(name, type);   \
     ___DEFINE_XEN_GUEST_HANDLE(const_##name, const type)
 #define DEFINE_XEN_GUEST_HANDLE(name)   __DEFINE_XEN_GUEST_HANDLE(name, name)
 #define __XEN_GUEST_HANDLE(name)        __guest_handle_ ## name
 #define XEN_GUEST_HANDLE(name)          __XEN_GUEST_HANDLE(name)
+#define XEN_GUEST_HANDLE_PARAM(name)    XEN_GUEST_HANDLE(name)
 #define set_xen_guest_handle_raw(hnd, val)  do { (hnd).p = val; } while (0)
 #ifdef __XEN_TOOLS__
 #define get_xen_guest_handle(val, hnd)  do { val = (hnd).p; } while (0)
diff --git a/xen/include/xen/xencomm.h b/xen/include/xen/xencomm.h
index 730da7c..3426b8a 100644
--- a/xen/include/xen/xencomm.h
+++ b/xen/include/xen/xencomm.h
@@ -66,11 +66,31 @@ static inline unsigned long xencomm_inline_addr(const void *handle)
 /* Cast a guest handle to the specified type of handle. */
 #define guest_handle_cast(hnd, type) ({         \
     type *_x = (hnd).p;                         \
-    XEN_GUEST_HANDLE(type) _y;                  \
+    XEN_GUEST_HANDLE_PARAM(type) _y;            \
     set_xen_guest_handle(_y, _x);               \
     _y;                                         \
 })
 
+/* Cast a XEN_GUEST_HANDLE to XEN_GUEST_HANDLE_PARAM */
+#define guest_handle_to_param(hnd, type) ({                  \
+    /* type checking: make sure that the pointers inside     \
+     * XEN_GUEST_HANDLE and XEN_GUEST_HANDLE_PARAM are of    \
+     * the same type, then return hnd */                     \
+    (void)((typeof(&(hnd).p)) 0 ==                           \
+        (typeof(&((XEN_GUEST_HANDLE_PARAM(type)) {}).p)) 0); \
+    (hnd);                                                   \
+})
+
+/* Cast a XEN_GUEST_HANDLE_PARAM to XEN_GUEST_HANDLE */
+#define guest_handle_from_param(hnd, type) ({                \
+    /* type checking: make sure that the pointers inside     \
+     * XEN_GUEST_HANDLE and XEN_GUEST_HANDLE_PARAM are of    \
+     * the same type, then return hnd */                     \
+    (void)((typeof(&(hnd).p)) 0 ==                           \
+        (typeof(&((XEN_GUEST_HANDLE_PARAM(type)) {}).p)) 0); \
+    (hnd);                                                   \
+})
+
 /* Since we run in real mode, we can safely access all addresses. That also
  * means our __routines are identical to our "normal" routines. */
 #define guest_handle_okay(hnd, nr) 1
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 11:08:47 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 11:08:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T48nZ-0001Dr-Fm; Wed, 22 Aug 2012 11:08:33 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T48nX-0001DK-Sl
	for xen-devel@lists.xensource.com; Wed, 22 Aug 2012 11:08:32 +0000
Received: from [85.158.143.99:48686] by server-1.bemta-4.messagelabs.com id
	64/28-07754-FADB4305; Wed, 22 Aug 2012 11:08:31 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-15.tower-216.messagelabs.com!1345633707!27328660!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjU4OTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6278 invoked from network); 22 Aug 2012 11:08:30 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 11:08:30 -0000
X-IronPort-AV: E=Sophos;i="4.77,808,1336363200"; d="scan'208";a="35423309"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 07:08:21 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 22 Aug 2012 07:08:21 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1T48nH-0003lL-Ou;
	Wed, 22 Aug 2012 12:08:15 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Wed, 22 Aug 2012 12:08:06 +0100
Message-ID: <1345633688-31684-4-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208221159560.15568@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208221159560.15568@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: tim@xen.org, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH v4 4/6] xen: introduce XEN_GUEST_HANDLE_PARAM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

XEN_GUEST_HANDLE_PARAM is going to be used to distinguish guest pointers
stored in memory from guest pointers as hypercall parameters.

The guest_handle_* macros now default to XEN_GUEST_HANDLE_PARAM as their
return type. Two new macros, guest_handle_to_param and
guest_handle_from_param, are introduced to convert between the two
handle types.

Changes in v2:

- add 2 missing #define _XEN_GUEST_HANDLE_PARAM entries for the
compilation of the compat code.

Changes in v3:

- move the guest_handle_cast change into this patch;
- add a clear comment on top of guest_handle_cast;
- also s/XEN_GUEST_HANDLE/XEN_GUEST_HANDLE_PARAM in
  guest_handle_from_ptr and const_guest_handle_from_ptr;
- introduce guest_handle_from_param and guest_handle_to_param.

Changes in v4:

- make both XEN_GUEST_HANDLE and XEN_GUEST_HANDLE_PARAM unions on ARM;
- simplify set_xen_guest_handle_raw on ARM;
- add type checking in guest_handle_to_param and guest_handle_from_param.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/include/asm-arm/guest_access.h |   32 ++++++++++++++++++++++++++++----
 xen/include/asm-x86/guest_access.h |   29 +++++++++++++++++++++++++----
 xen/include/public/arch-arm.h      |   26 ++++++++++++++++++++++----
 xen/include/public/arch-ia64.h     |    9 +++++++++
 xen/include/public/arch-x86/xen.h  |    9 +++++++++
 xen/include/xen/xencomm.h          |   22 +++++++++++++++++++++-
 6 files changed, 114 insertions(+), 13 deletions(-)

diff --git a/xen/include/asm-arm/guest_access.h b/xen/include/asm-arm/guest_access.h
index 0fceae6..5686217 100644
--- a/xen/include/asm-arm/guest_access.h
+++ b/xen/include/asm-arm/guest_access.h
@@ -27,16 +27,40 @@ unsigned long raw_clear_guest(void *to, unsigned len);
 #define guest_handle_add_offset(hnd, nr) ((hnd).p += (nr))
 #define guest_handle_subtract_offset(hnd, nr) ((hnd).p -= (nr))
 
-/* Cast a guest handle to the specified type of handle. */
+/* Cast a guest handle (either XEN_GUEST_HANDLE or XEN_GUEST_HANDLE_PARAM)
+ * to the specified type of XEN_GUEST_HANDLE_PARAM. */
 #define guest_handle_cast(hnd, type) ({         \
     type *_x = (hnd).p;                         \
-    (XEN_GUEST_HANDLE(type)) { _x };            \
+    (XEN_GUEST_HANDLE_PARAM(type)) { _x };            \
+})
+
+/* Cast a XEN_GUEST_HANDLE to XEN_GUEST_HANDLE_PARAM */
+#define guest_handle_to_param(hnd, type) ({                  \
+    typeof((hnd).p) _x = (hnd).p;                            \
+    XEN_GUEST_HANDLE_PARAM(type) _y = { _x };                \
+    /* type checking: make sure that the pointers inside     \
+     * XEN_GUEST_HANDLE and XEN_GUEST_HANDLE_PARAM are of    \
+     * the same type, then return hnd */                     \
+    (void)(&_x == &_y.p);                                    \
+    _y;                                                      \
+})
+
+
+/* Cast a XEN_GUEST_HANDLE_PARAM to XEN_GUEST_HANDLE */
+#define guest_handle_from_param(hnd, type) ({               \
+    typeof((hnd).p) _x = (hnd).p;                           \
+    XEN_GUEST_HANDLE(type) _y = { _x };                     \
+    /* type checking: make sure that the pointers inside    \
+     * XEN_GUEST_HANDLE and XEN_GUEST_HANDLE_PARAM are of   \
+     * the same type, then return hnd */                    \
+    (void)(&_x == &_y.p);                                   \
+    _y;                                                     \
 })
 
 #define guest_handle_from_ptr(ptr, type)        \
-    ((XEN_GUEST_HANDLE(type)) { (type *)ptr })
+    ((XEN_GUEST_HANDLE_PARAM(type)) { (type *)ptr })
 #define const_guest_handle_from_ptr(ptr, type)  \
-    ((XEN_GUEST_HANDLE(const_##type)) { (const type *)ptr })
+    ((XEN_GUEST_HANDLE_PARAM(const_##type)) { (const type *)ptr })
 
 /*
  * Copy an array of objects to guest context via a guest handle,
diff --git a/xen/include/asm-x86/guest_access.h b/xen/include/asm-x86/guest_access.h
index 2b429c2..e67ed82 100644
--- a/xen/include/asm-x86/guest_access.h
+++ b/xen/include/asm-x86/guest_access.h
@@ -45,16 +45,37 @@
 #define guest_handle_add_offset(hnd, nr) ((hnd).p += (nr))
 #define guest_handle_subtract_offset(hnd, nr) ((hnd).p -= (nr))
 
-/* Cast a guest handle to the specified type of handle. */
+/* Cast a guest handle (either XEN_GUEST_HANDLE or XEN_GUEST_HANDLE_PARAM)
+ * to the specified type of XEN_GUEST_HANDLE_PARAM. */
 #define guest_handle_cast(hnd, type) ({         \
     type *_x = (hnd).p;                         \
-    (XEN_GUEST_HANDLE(type)) { _x };            \
+    (XEN_GUEST_HANDLE_PARAM(type)) { _x };            \
+})
+
+/* Cast a XEN_GUEST_HANDLE to XEN_GUEST_HANDLE_PARAM */
+#define guest_handle_to_param(hnd, type) ({                  \
+    /* type checking: make sure that the pointers inside     \
+     * XEN_GUEST_HANDLE and XEN_GUEST_HANDLE_PARAM are of    \
+     * the same type, then return hnd */                     \
+    (void)((typeof(&(hnd).p)) 0 ==                           \
+        (typeof(&((XEN_GUEST_HANDLE_PARAM(type)) {}).p)) 0); \
+    (hnd);                                                   \
+})
+
+/* Cast a XEN_GUEST_HANDLE_PARAM to XEN_GUEST_HANDLE */
+#define guest_handle_from_param(hnd, type) ({                \
+    /* type checking: make sure that the pointers inside     \
+     * XEN_GUEST_HANDLE and XEN_GUEST_HANDLE_PARAM are of    \
+     * the same type, then return hnd */                     \
+    (void)((typeof(&(hnd).p)) 0 ==                           \
+        (typeof(&((XEN_GUEST_HANDLE_PARAM(type)) {}).p)) 0); \
+    (hnd);                                                   \
 })
 
 #define guest_handle_from_ptr(ptr, type)        \
-    ((XEN_GUEST_HANDLE(type)) { (type *)ptr })
+    ((XEN_GUEST_HANDLE_PARAM(type)) { (type *)ptr })
 #define const_guest_handle_from_ptr(ptr, type)  \
-    ((XEN_GUEST_HANDLE(const_##type)) { (const type *)ptr })
+    ((XEN_GUEST_HANDLE_PARAM(const_##type)) { (const type *)ptr })
 
 /*
  * Copy an array of objects to guest context via a guest handle,
diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
index 2ae6548..4c6d607 100644
--- a/xen/include/public/arch-arm.h
+++ b/xen/include/public/arch-arm.h
@@ -51,18 +51,36 @@
 
 #define XEN_HYPERCALL_TAG   0XEA1
 
+#define uint64_aligned_t uint64_t __attribute__((aligned(8)))
 
 #ifndef __ASSEMBLY__
-#define ___DEFINE_XEN_GUEST_HANDLE(name, type) \
-    typedef struct { type *p; } __guest_handle_ ## name
+#define ___DEFINE_XEN_GUEST_HANDLE(name, type)                  \
+    typedef union { type *p; unsigned long q; }                 \
+        __guest_handle_ ## name;                                \
+    typedef union { type *p; uint64_aligned_t q; }              \
+        __guest_handle_64_ ## name;
 
+/*
+ * XEN_GUEST_HANDLE represents a guest pointer when passed as a field
+ * in a struct in memory. On ARM it is always 8 bytes in size and
+ * 8 bytes aligned.
+ * XEN_GUEST_HANDLE_PARAM represents a guest pointer when passed as a
+ * hypercall argument. It is 4 bytes on aarch32 and 8 bytes on aarch64.
+ */
 #define __DEFINE_XEN_GUEST_HANDLE(name, type) \
     ___DEFINE_XEN_GUEST_HANDLE(name, type);   \
     ___DEFINE_XEN_GUEST_HANDLE(const_##name, const type)
 #define DEFINE_XEN_GUEST_HANDLE(name)   __DEFINE_XEN_GUEST_HANDLE(name, name)
-#define __XEN_GUEST_HANDLE(name)        __guest_handle_ ## name
+#define __XEN_GUEST_HANDLE(name)        __guest_handle_64_ ## name
 #define XEN_GUEST_HANDLE(name)          __XEN_GUEST_HANDLE(name)
-#define set_xen_guest_handle_raw(hnd, val)  do { (hnd).p = val; } while (0)
+/* this is going to be changed on 64 bit */
+#define XEN_GUEST_HANDLE_PARAM(name)    __guest_handle_ ## name
+#define set_xen_guest_handle_raw(hnd, val)                  \
+    do {                                                    \
+        typeof(&(hnd)) _t = &(hnd);                         \
+        _t->q = 0;                                          \
+        _t->p = val;                                        \
+    } while ( 0 )
 #ifdef __XEN_TOOLS__
 #define get_xen_guest_handle(val, hnd)  do { val = (hnd).p; } while (0)
 #endif
diff --git a/xen/include/public/arch-ia64.h b/xen/include/public/arch-ia64.h
index c9da5d4..e4e9688 100644
--- a/xen/include/public/arch-ia64.h
+++ b/xen/include/public/arch-ia64.h
@@ -45,8 +45,17 @@
     ___DEFINE_XEN_GUEST_HANDLE(name, type);   \
     ___DEFINE_XEN_GUEST_HANDLE(const_##name, const type)
 
+/*
+ * XEN_GUEST_HANDLE represents a guest pointer when passed as a field
+ * in a struct in memory.
+ * XEN_GUEST_HANDLE_PARAM represents a guest pointer when passed as a
+ * hypercall argument.
+ * XEN_GUEST_HANDLE_PARAM and XEN_GUEST_HANDLE are the same on ia64 but
+ * they might not be on other architectures.
+ */
 #define DEFINE_XEN_GUEST_HANDLE(name)   __DEFINE_XEN_GUEST_HANDLE(name, name)
 #define XEN_GUEST_HANDLE(name)          __guest_handle_ ## name
+#define XEN_GUEST_HANDLE_PARAM(name)    XEN_GUEST_HANDLE(name)
 #define XEN_GUEST_HANDLE_64(name)       XEN_GUEST_HANDLE(name)
 #define uint64_aligned_t                uint64_t
 #define set_xen_guest_handle_raw(hnd, val)  do { (hnd).p = val; } while (0)
diff --git a/xen/include/public/arch-x86/xen.h b/xen/include/public/arch-x86/xen.h
index 1c186d7..0e10260 100644
--- a/xen/include/public/arch-x86/xen.h
+++ b/xen/include/public/arch-x86/xen.h
@@ -38,12 +38,21 @@
     typedef type * __guest_handle_ ## name
 #endif
 
+/*
+ * XEN_GUEST_HANDLE represents a guest pointer when passed as a field
+ * in a struct in memory.
+ * XEN_GUEST_HANDLE_PARAM represents a guest pointer when passed as a
+ * hypercall argument.
+ * XEN_GUEST_HANDLE_PARAM and XEN_GUEST_HANDLE are the same on X86 but
+ * they might not be on other architectures.
+ */
 #define __DEFINE_XEN_GUEST_HANDLE(name, type) \
     ___DEFINE_XEN_GUEST_HANDLE(name, type);   \
     ___DEFINE_XEN_GUEST_HANDLE(const_##name, const type)
 #define DEFINE_XEN_GUEST_HANDLE(name)   __DEFINE_XEN_GUEST_HANDLE(name, name)
 #define __XEN_GUEST_HANDLE(name)        __guest_handle_ ## name
 #define XEN_GUEST_HANDLE(name)          __XEN_GUEST_HANDLE(name)
+#define XEN_GUEST_HANDLE_PARAM(name)    XEN_GUEST_HANDLE(name)
 #define set_xen_guest_handle_raw(hnd, val)  do { (hnd).p = val; } while (0)
 #ifdef __XEN_TOOLS__
 #define get_xen_guest_handle(val, hnd)  do { val = (hnd).p; } while (0)
diff --git a/xen/include/xen/xencomm.h b/xen/include/xen/xencomm.h
index 730da7c..3426b8a 100644
--- a/xen/include/xen/xencomm.h
+++ b/xen/include/xen/xencomm.h
@@ -66,11 +66,31 @@ static inline unsigned long xencomm_inline_addr(const void *handle)
 /* Cast a guest handle to the specified type of handle. */
 #define guest_handle_cast(hnd, type) ({         \
     type *_x = (hnd).p;                         \
-    XEN_GUEST_HANDLE(type) _y;                  \
+    XEN_GUEST_HANDLE_PARAM(type) _y;            \
     set_xen_guest_handle(_y, _x);               \
     _y;                                         \
 })
 
+/* Cast a XEN_GUEST_HANDLE to XEN_GUEST_HANDLE_PARAM */
+#define guest_handle_to_param(hnd, type) ({                  \
+    /* type checking: make sure that the pointers inside     \
+     * XEN_GUEST_HANDLE and XEN_GUEST_HANDLE_PARAM are of    \
+     * the same type, then return hnd */                     \
+    (void)((typeof(&(hnd).p)) 0 ==                           \
+        (typeof(&((XEN_GUEST_HANDLE_PARAM(type)) {}).p)) 0); \
+    (hnd);                                                   \
+})
+
+/* Cast a XEN_GUEST_HANDLE_PARAM to XEN_GUEST_HANDLE */
+#define guest_handle_from_param(hnd, type) ({                \
+    /* type checking: make sure that the pointers inside     \
+     * XEN_GUEST_HANDLE and XEN_GUEST_HANDLE_PARAM are of    \
+     * the same type, then return hnd */                     \
+    (void)((typeof(&(hnd).p)) 0 ==                           \
+        (typeof(&((XEN_GUEST_HANDLE_PARAM(type)) {}).p)) 0); \
+    (hnd);                                                   \
+})
+
 /* Since we run in real mode, we can safely access all addresses. That also
  * means our __routines are identical to our "normal" routines. */
 #define guest_handle_okay(hnd, nr) 1
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 11:09:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 11:09:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T48nu-0001IQ-Le; Wed, 22 Aug 2012 11:08:54 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T48ns-0001HH-J1
	for xen-devel@lists.xensource.com; Wed, 22 Aug 2012 11:08:52 +0000
Received: from [85.158.138.51:16091] by server-5.bemta-3.messagelabs.com id
	4C/42-08865-3CDB4305; Wed, 22 Aug 2012 11:08:51 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-15.tower-174.messagelabs.com!1345633727!25641644!3
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzMzNjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27504 invoked from network); 22 Aug 2012 11:08:50 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 11:08:50 -0000
X-IronPort-AV: E=Sophos;i="4.77,808,1336363200"; d="scan'208";a="205885718"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 07:08:21 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 22 Aug 2012 07:08:21 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1T48nH-0003lL-MW;
	Wed, 22 Aug 2012 12:08:15 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Wed, 22 Aug 2012 12:08:04 +0100
Message-ID: <1345633688-31684-2-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208221159560.15568@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208221159560.15568@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: tim@xen.org, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH v4 2/6] xen: xen_ulong_t substitution
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

There is still an unwanted unsigned long in the xen public interface:
replace it with xen_ulong_t.

Also typedef xen_ulong_t to uint64_t on ARM.

Changes in v2:

- do not replace the unsigned long in x86 specific calls;
- do not replace the unsigned long in multicall_entry;
- add missing include "xen.h" in version.h;
- use proper printf flag for xen_ulong_t.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 tools/python/xen/lowlevel/xc/xc.c |    2 +-
 xen/include/public/arch-arm.h     |    4 ++--
 xen/include/public/version.h      |    4 +++-
 3 files changed, 6 insertions(+), 4 deletions(-)

diff --git a/tools/python/xen/lowlevel/xc/xc.c b/tools/python/xen/lowlevel/xc/xc.c
index 7c89756..e220f68 100644
--- a/tools/python/xen/lowlevel/xc/xc.c
+++ b/tools/python/xen/lowlevel/xc/xc.c
@@ -1439,7 +1439,7 @@ static PyObject *pyxc_xeninfo(XcObject *self)
     if ( xc_version(self->xc_handle, XENVER_commandline, &xen_commandline) != 0 )
         return pyxc_error_to_exception(self->xc_handle);
 
-    snprintf(str, sizeof(str), "virt_start=0x%lx", p_parms.virt_start);
+    snprintf(str, sizeof(str), "virt_start=0x%"PRI_xen_ulong, p_parms.virt_start);
 
     xen_pagesize = xc_version(self->xc_handle, XENVER_pagesize, NULL);
     if (xen_pagesize < 0 )
diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
index 14ad0ab..2ae6548 100644
--- a/xen/include/public/arch-arm.h
+++ b/xen/include/public/arch-arm.h
@@ -122,8 +122,8 @@ typedef uint64_t xen_pfn_t;
 /* Only one. All other VCPUS must use VCPUOP_register_vcpu_info */
 #define XEN_LEGACY_MAX_VCPUS 1
 
-typedef uint32_t xen_ulong_t;
-#define PRI_xen_ulong PRIx32
+typedef uint64_t xen_ulong_t;
+#define PRI_xen_ulong PRIx64
 
 struct vcpu_guest_context {
 #define _VGCF_online                   0
diff --git a/xen/include/public/version.h b/xen/include/public/version.h
index 8742c2b..c7e6f8c 100644
--- a/xen/include/public/version.h
+++ b/xen/include/public/version.h
@@ -28,6 +28,8 @@
 #ifndef __XEN_PUBLIC_VERSION_H__
 #define __XEN_PUBLIC_VERSION_H__
 
+#include "xen.h"
+
 /* NB. All ops return zero on success, except XENVER_{version,pagesize} */
 
 /* arg == NULL; returns major:minor (16:16). */
@@ -58,7 +60,7 @@ typedef char xen_changeset_info_t[64];
 
 #define XENVER_platform_parameters 5
 struct xen_platform_parameters {
-    unsigned long virt_start;
+    xen_ulong_t virt_start;
 };
 typedef struct xen_platform_parameters xen_platform_parameters_t;
 
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 11:09:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 11:09:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T48nr-0001HP-TB; Wed, 22 Aug 2012 11:08:51 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T48nq-0001Gy-N9
	for xen-devel@lists.xensource.com; Wed, 22 Aug 2012 11:08:50 +0000
Received: from [85.158.138.51:13918] by server-2.bemta-3.messagelabs.com id
	A6/3E-09157-1CDB4305; Wed, 22 Aug 2012 11:08:49 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-15.tower-174.messagelabs.com!1345633727!25641644!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzMzNjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27313 invoked from network); 22 Aug 2012 11:08:49 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 11:08:49 -0000
X-IronPort-AV: E=Sophos;i="4.77,808,1336363200"; d="scan'208";a="205885717"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 07:08:21 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 22 Aug 2012 07:08:21 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1T48nH-0003lL-OK;
	Wed, 22 Aug 2012 12:08:15 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Wed, 22 Aug 2012 12:08:05 +0100
Message-ID: <1345633688-31684-3-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208221159560.15568@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208221159560.15568@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: tim@xen.org, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH v4 3/6] xen: change the limit of nr_extents to
	UINT_MAX >> MEMOP_EXTENT_SHIFT
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Currently do_memory_op has a different maximum limit for nr_extents on
32 bit and 64 bit.
Change the limit to UINT_MAX >> MEMOP_EXTENT_SHIFT, so that it is the
same in both cases.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/common/memory.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/xen/common/memory.c b/xen/common/memory.c
index 5d64cb6..7e58cc4 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -553,7 +553,7 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE(void) arg)
             return start_extent;
 
         /* Is size too large for us to encode a continuation? */
-        if ( reservation.nr_extents > (ULONG_MAX >> MEMOP_EXTENT_SHIFT) )
+        if ( reservation.nr_extents > (UINT_MAX >> MEMOP_EXTENT_SHIFT) )
             return start_extent;
 
         if ( unlikely(start_extent >= reservation.nr_extents) )
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Subject: [Xen-devel] [PATCH v4 3/6] xen: change the limit of nr_extents to
	UINT_MAX >> MEMOP_EXTENT_SHIFT
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Currently do_memory_op has a different maximum limit for nr_extents on
32-bit and 64-bit builds.
Change the limit to UINT_MAX >> MEMOP_EXTENT_SHIFT, so that it is the
same in both cases.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/common/memory.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/xen/common/memory.c b/xen/common/memory.c
index 5d64cb6..7e58cc4 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -553,7 +553,7 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE(void) arg)
             return start_extent;
 
         /* Is size too large for us to encode a continuation? */
-        if ( reservation.nr_extents > (ULONG_MAX >> MEMOP_EXTENT_SHIFT) )
+        if ( reservation.nr_extents > (UINT_MAX >> MEMOP_EXTENT_SHIFT) )
             return start_extent;
 
         if ( unlikely(start_extent >= reservation.nr_extents) )
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 11:09:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 11:09:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T48nt-0001Hr-8d; Wed, 22 Aug 2012 11:08:53 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T48nr-0001HC-Ii
	for xen-devel@lists.xensource.com; Wed, 22 Aug 2012 11:08:51 +0000
Received: from [85.158.138.51:63288] by server-9.bemta-3.messagelabs.com id
	26/54-23952-2CDB4305; Wed, 22 Aug 2012 11:08:50 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-15.tower-174.messagelabs.com!1345633727!25641644!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzMzNjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27424 invoked from network); 22 Aug 2012 11:08:50 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 11:08:50 -0000
X-IronPort-AV: E=Sophos;i="4.77,808,1336363200"; d="scan'208";a="205885719"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 07:08:21 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 22 Aug 2012 07:08:21 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1T48nH-0003lL-Lx;
	Wed, 22 Aug 2012 12:08:15 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Wed, 22 Aug 2012 12:08:03 +0100
Message-ID: <1345633688-31684-1-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208221159560.15568@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208221159560.15568@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: tim@xen.org, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH v4 1/6] xen: improve changes to
	xen_add_to_physmap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is an incremental patch on top of
c0bc926083b5987a3e9944eec2c12ad0580100e2: in order to retain binary
compatibility, it is better to introduce foreign_domid as part of a
union containing both size and foreign_domid.

Changes in v2:

- do not use an anonymous union.
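
A sketch of the layout argument behind this change (field widths as in the diff below; unrelated fields omitted, and the struct name is shortened for illustration). Because size and foreign_domid are both 16 bits wide and overlay the same slot, the offsets and overall size of the structure, and hence the hypercall ABI, are unchanged by introducing the union.

```c
#include <stddef.h>
#include <stdint.h>

typedef uint16_t domid_t;  /* domid_t is a 16-bit type in the public headers */

/* Sketch of the changed portion of struct xen_add_to_physmap. */
struct xatp_sketch {
    domid_t domid;              /* which domain to change the mapping for */
    union {
        uint16_t size;          /* number of pages, XENMAPSPACE_gmfn_range */
        domid_t foreign_domid;  /* IFF XENMAPSPACE_gmfn_foreign */
    } u;                        /* both members are 16 bits: layout unchanged */
    unsigned int space;         /* source mapping space */
};
```

Which union member is meaningful is selected by `space`, as the compat translation in x86_64/compat/mm.c does below.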

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 tools/firmware/hvmloader/pci.c  |    2 +-
 xen/arch/arm/mm.c               |    2 +-
 xen/arch/x86/mm.c               |   10 +++++-----
 xen/arch/x86/x86_64/compat/mm.c |    6 ++++++
 xen/include/public/memory.h     |   11 +++++++----
 5 files changed, 20 insertions(+), 11 deletions(-)

diff --git a/tools/firmware/hvmloader/pci.c b/tools/firmware/hvmloader/pci.c
index fd56e50..6375989 100644
--- a/tools/firmware/hvmloader/pci.c
+++ b/tools/firmware/hvmloader/pci.c
@@ -212,7 +212,7 @@ void pci_setup(void)
         xatp.space = XENMAPSPACE_gmfn_range;
         xatp.idx   = hvm_info->low_mem_pgend;
         xatp.gpfn  = hvm_info->high_mem_pgend;
-        xatp.size  = nr_pages;
+        xatp.u.size  = nr_pages;
         if ( hypercall_memory_op(XENMEM_add_to_physmap, &xatp) != 0 )
             BUG();
         hvm_info->high_mem_pgend += nr_pages;
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 08bc55b..2400e1c 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -506,7 +506,7 @@ static int xenmem_add_to_physmap_once(
         paddr_t maddr;
         struct domain *od;
 
-        rc = rcu_lock_target_domain_by_id(xatp->foreign_domid, &od);
+        rc = rcu_lock_target_domain_by_id(xatp->u.foreign_domid, &od);
         if ( rc < 0 )
             return rc;
         maddr = p2m_lookup(od, xatp->idx << PAGE_SHIFT);
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 9f63974..f5c704e 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4630,7 +4630,7 @@ static int xenmem_add_to_physmap(struct domain *d,
             this_cpu(iommu_dont_flush_iotlb) = 1;
 
         start_xatp = *xatp;
-        while ( xatp->size > 0 )
+        while ( xatp->u.size > 0 )
         {
             rc = xenmem_add_to_physmap_once(d, xatp);
             if ( rc < 0 )
@@ -4638,10 +4638,10 @@ static int xenmem_add_to_physmap(struct domain *d,
 
             xatp->idx++;
             xatp->gpfn++;
-            xatp->size--;
+            xatp->u.size--;
 
             /* Check for continuation if it's not the last iteration */
-            if ( xatp->size > 0 && hypercall_preempt_check() )
+            if ( xatp->u.size > 0 && hypercall_preempt_check() )
             {
                 rc = -EAGAIN;
                 break;
@@ -4651,8 +4651,8 @@ static int xenmem_add_to_physmap(struct domain *d,
         if ( need_iommu(d) )
         {
             this_cpu(iommu_dont_flush_iotlb) = 0;
-            iommu_iotlb_flush(d, start_xatp.idx, start_xatp.size - xatp->size);
-            iommu_iotlb_flush(d, start_xatp.gpfn, start_xatp.size - xatp->size);
+            iommu_iotlb_flush(d, start_xatp.idx, start_xatp.u.size - xatp->u.size);
+            iommu_iotlb_flush(d, start_xatp.gpfn, start_xatp.u.size - xatp->u.size);
         }
 
         return rc;
diff --git a/xen/arch/x86/x86_64/compat/mm.c b/xen/arch/x86/x86_64/compat/mm.c
index f497503..5bcd2fd 100644
--- a/xen/arch/x86/x86_64/compat/mm.c
+++ b/xen/arch/x86/x86_64/compat/mm.c
@@ -59,10 +59,16 @@ int compat_arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
     {
         struct compat_add_to_physmap cmp;
         struct xen_add_to_physmap *nat = COMPAT_ARG_XLAT_VIRT_BASE;
+        enum XLAT_add_to_physmap_u u;
 
         if ( copy_from_guest(&cmp, arg, 1) )
             return -EFAULT;
 
+        if ( cmp.space == XENMAPSPACE_gmfn_range )
+            u = XLAT_add_to_physmap_u_size;
+        if ( cmp.space == XENMAPSPACE_gmfn_foreign )
+            u = XLAT_add_to_physmap_u_foreign_domid;
+
         XLAT_add_to_physmap(nat, &cmp);
         rc = arch_memory_op(op, guest_handle_from_ptr(nat, void));
 
diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
index b2adfbe..7d4ee26 100644
--- a/xen/include/public/memory.h
+++ b/xen/include/public/memory.h
@@ -208,8 +208,12 @@ struct xen_add_to_physmap {
     /* Which domain to change the mapping for. */
     domid_t domid;
 
-    /* Number of pages to go through for gmfn_range */
-    uint16_t    size;
+    union {
+        /* Number of pages to go through for gmfn_range */
+        uint16_t    size;
+        /* IFF gmfn_foreign */
+        domid_t foreign_domid;
+    } u;
 
     /* Source mapping space. */
 #define XENMAPSPACE_shared_info  0 /* shared info page */
@@ -217,8 +221,7 @@ struct xen_add_to_physmap {
 #define XENMAPSPACE_gmfn         2 /* GMFN */
 #define XENMAPSPACE_gmfn_range   3 /* GMFN range */
 #define XENMAPSPACE_gmfn_foreign 4 /* GMFN from another guest */
-    uint16_t space;
-    domid_t foreign_domid; /* IFF gmfn_foreign */
+    unsigned int space;
 
 #define XENMAPIDX_grant_table_status 0x80000000
 
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 11:09:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 11:09:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T48nv-0001Iz-9L; Wed, 22 Aug 2012 11:08:55 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T48nt-0001Hq-Qq
	for xen-devel@lists.xensource.com; Wed, 22 Aug 2012 11:08:54 +0000
Received: from [85.158.139.83:49007] by server-4.bemta-5.messagelabs.com id
	B4/45-12386-4CDB4305; Wed, 22 Aug 2012 11:08:52 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-5.tower-182.messagelabs.com!1345633729!29499859!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzMzNjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32216 invoked from network); 22 Aug 2012 11:08:51 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 11:08:51 -0000
X-IronPort-AV: E=Sophos;i="4.77,808,1336363200"; d="scan'208";a="205885720"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 07:08:22 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 22 Aug 2012 07:08:21 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1T48nH-0003lL-Qe;
	Wed, 22 Aug 2012 12:08:15 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Wed, 22 Aug 2012 12:08:07 +0100
Message-ID: <1345633688-31684-5-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208221159560.15568@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208221159560.15568@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: tim@xen.org, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH v4 5/6] xen: replace XEN_GUEST_HANDLE with
	XEN_GUEST_HANDLE_PARAM when appropriate
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Note: these changes don't make any difference on x86 and ia64.


Replace XEN_GUEST_HANDLE with XEN_GUEST_HANDLE_PARAM when it is used as
a hypercall argument.
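
An illustrative model of why the two handle types can differ (assumption: this mirrors the ARM rationale for the series; the type names below are stand-ins, not the real macros). A handle embedded in a structure in guest memory is padded to a fixed 64 bits so 32-bit and 64-bit guests share one layout, while a handle passed directly as a hypercall argument need only be the native register width. On x86 and ia64 both expand to the same representation, which is why the note above says these changes make no difference there.

```c
#include <stdint.h>

/* Stand-in for XEN_GUEST_HANDLE(void): lives in guest-visible structs,
 * so it is padded to a fixed 64-bit slot regardless of guest bitness. */
typedef struct { uint64_t p; } guest_handle_void;

/* Stand-in for XEN_GUEST_HANDLE_PARAM(void): passed in a register as a
 * hypercall argument, so it is only the native pointer width. */
typedef struct { uintptr_t p; } guest_handle_param_void;
```

On a 32-bit target the two sizes diverge (8 vs 4 bytes), which is why hypercall argument declarations must use the PARAM variant consistently.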

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/arch/arm/domain.c             |    2 +-
 xen/arch/arm/domctl.c             |    2 +-
 xen/arch/arm/hvm.c                |    2 +-
 xen/arch/arm/mm.c                 |    2 +-
 xen/arch/arm/physdev.c            |    2 +-
 xen/arch/arm/sysctl.c             |    2 +-
 xen/arch/x86/compat.c             |    2 +-
 xen/arch/x86/cpu/mcheck/mce.c     |    2 +-
 xen/arch/x86/domain.c             |    2 +-
 xen/arch/x86/domctl.c             |    2 +-
 xen/arch/x86/efi/runtime.c        |    2 +-
 xen/arch/x86/hvm/hvm.c            |   26 +++++++++---------
 xen/arch/x86/microcode.c          |    2 +-
 xen/arch/x86/mm.c                 |   14 +++++-----
 xen/arch/x86/mm/hap/hap.c         |    2 +-
 xen/arch/x86/mm/mem_event.c       |    2 +-
 xen/arch/x86/mm/paging.c          |    2 +-
 xen/arch/x86/mm/shadow/common.c   |    2 +-
 xen/arch/x86/oprofile/xenoprof.c  |    6 ++--
 xen/arch/x86/physdev.c            |    2 +-
 xen/arch/x86/platform_hypercall.c |    2 +-
 xen/arch/x86/sysctl.c             |    2 +-
 xen/arch/x86/traps.c              |    2 +-
 xen/arch/x86/x86_32/mm.c          |    2 +-
 xen/arch/x86/x86_32/traps.c       |    2 +-
 xen/arch/x86/x86_64/compat/mm.c   |   10 +++---
 xen/arch/x86/x86_64/domain.c      |    2 +-
 xen/arch/x86/x86_64/mm.c          |    2 +-
 xen/arch/x86/x86_64/traps.c       |    2 +-
 xen/common/compat/domain.c        |    2 +-
 xen/common/compat/grant_table.c   |    8 +++---
 xen/common/compat/memory.c        |    4 +-
 xen/common/domain.c               |    2 +-
 xen/common/domctl.c               |    2 +-
 xen/common/event_channel.c        |    2 +-
 xen/common/grant_table.c          |   36 +++++++++++++-------------
 xen/common/kernel.c               |    4 +-
 xen/common/kexec.c                |   16 +++++-----
 xen/common/memory.c               |    4 +-
 xen/common/multicall.c            |    2 +-
 xen/common/schedule.c             |    2 +-
 xen/common/sysctl.c               |    2 +-
 xen/common/xenoprof.c             |    8 +++---
 xen/drivers/acpi/pmstat.c         |    2 +-
 xen/drivers/char/console.c        |    6 ++--
 xen/drivers/passthrough/iommu.c   |    2 +-
 xen/include/asm-arm/hypercall.h   |    2 +-
 xen/include/asm-arm/mm.h          |    2 +-
 xen/include/asm-x86/hap.h         |    2 +-
 xen/include/asm-x86/hypercall.h   |   24 ++++++++--------
 xen/include/asm-x86/mem_event.h   |    2 +-
 xen/include/asm-x86/mm.h          |    8 +++---
 xen/include/asm-x86/paging.h      |    2 +-
 xen/include/asm-x86/processor.h   |    2 +-
 xen/include/asm-x86/shadow.h      |    2 +-
 xen/include/asm-x86/xenoprof.h    |    6 ++--
 xen/include/xen/acpi.h            |    4 +-
 xen/include/xen/hypercall.h       |   52 ++++++++++++++++++------------------
 xen/include/xen/iommu.h           |    2 +-
 xen/include/xen/tmem_xen.h        |    2 +-
 xen/include/xsm/xsm.h             |    4 +-
 xen/xsm/dummy.c                   |    2 +-
 xen/xsm/flask/flask_op.c          |    4 +-
 xen/xsm/flask/hooks.c             |    2 +-
 xen/xsm/xsm_core.c                |    2 +-
 65 files changed, 168 insertions(+), 168 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index ee58d68..07b50e2 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -515,7 +515,7 @@ void arch_dump_domain_info(struct domain *d)
 {
 }
 
-long arch_do_vcpu_op(int cmd, struct vcpu *v, XEN_GUEST_HANDLE(void) arg)
+long arch_do_vcpu_op(int cmd, struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     return -ENOSYS;
 }
diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
index 1a5f79f..cf16791 100644
--- a/xen/arch/arm/domctl.c
+++ b/xen/arch/arm/domctl.c
@@ -11,7 +11,7 @@
 #include <public/domctl.h>
 
 long arch_do_domctl(struct xen_domctl *domctl,
-                    XEN_GUEST_HANDLE(xen_domctl_t) u_domctl)
+                    XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
     return -ENOSYS;
 }
diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
index c11378d..40f519e 100644
--- a/xen/arch/arm/hvm.c
+++ b/xen/arch/arm/hvm.c
@@ -11,7 +11,7 @@
 
 #include <asm/hypercall.h>
 
-long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE(void) arg)
+long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
 
 {
     long rc = 0;
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 2400e1c..3e8b6cc 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -541,7 +541,7 @@ static int xenmem_add_to_physmap(struct domain *d,
     return xenmem_add_to_physmap_once(d, xatp);
 }
 
-long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
+long arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     int rc;
 
diff --git a/xen/arch/arm/physdev.c b/xen/arch/arm/physdev.c
index bcf4337..0801e8c 100644
--- a/xen/arch/arm/physdev.c
+++ b/xen/arch/arm/physdev.c
@@ -11,7 +11,7 @@
 #include <asm/hypercall.h>
 
 
-int do_physdev_op(int cmd, XEN_GUEST_HANDLE(void) arg)
+int do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     printk("%s %d cmd=%d: not implemented yet\n", __func__, __LINE__, cmd);
     return -ENOSYS;
diff --git a/xen/arch/arm/sysctl.c b/xen/arch/arm/sysctl.c
index e8e1c0d..a286abe 100644
--- a/xen/arch/arm/sysctl.c
+++ b/xen/arch/arm/sysctl.c
@@ -13,7 +13,7 @@
 #include <public/sysctl.h>
 
 long arch_do_sysctl(struct xen_sysctl *sysctl,
-                    XEN_GUEST_HANDLE(xen_sysctl_t) u_sysctl)
+                    XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
 {
     return -ENOSYS;
 }
diff --git a/xen/arch/x86/compat.c b/xen/arch/x86/compat.c
index a4fda06..2d05867 100644
--- a/xen/arch/x86/compat.c
+++ b/xen/arch/x86/compat.c
@@ -27,7 +27,7 @@ ret_t do_physdev_op_compat(XEN_GUEST_HANDLE(physdev_op_t) uop)
 #ifndef COMPAT
 
 /* Legacy hypercall (as of 0x00030202). */
-long do_event_channel_op_compat(XEN_GUEST_HANDLE(evtchn_op_t) uop)
+long do_event_channel_op_compat(XEN_GUEST_HANDLE_PARAM(evtchn_op_t) uop)
 {
     struct evtchn_op op;
 
diff --git a/xen/arch/x86/cpu/mcheck/mce.c b/xen/arch/x86/cpu/mcheck/mce.c
index a89df6d..0f122b3 100644
--- a/xen/arch/x86/cpu/mcheck/mce.c
+++ b/xen/arch/x86/cpu/mcheck/mce.c
@@ -1357,7 +1357,7 @@ CHECK_mcinfo_recovery;
 #endif
 
 /* Machine Check Architecture Hypercall */
-long do_mca(XEN_GUEST_HANDLE(xen_mc_t) u_xen_mc)
+long do_mca(XEN_GUEST_HANDLE_PARAM(xen_mc_t) u_xen_mc)
 {
     long ret = 0;
     struct xen_mc curop, *op = &curop;
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 5bba4b9..13ff776 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1138,7 +1138,7 @@ map_vcpu_info(struct vcpu *v, unsigned long gfn, unsigned offset)
 
 long
 arch_do_vcpu_op(
-    int cmd, struct vcpu *v, XEN_GUEST_HANDLE(void) arg)
+    int cmd, struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     long rc = 0;
 
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 135ea6e..663bfe4 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -48,7 +48,7 @@ static int gdbsx_guest_mem_io(
 
 long arch_do_domctl(
     struct xen_domctl *domctl,
-    XEN_GUEST_HANDLE(xen_domctl_t) u_domctl)
+    XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
     long ret = 0;
 
diff --git a/xen/arch/x86/efi/runtime.c b/xen/arch/x86/efi/runtime.c
index 1dbe2db..b2ff495 100644
--- a/xen/arch/x86/efi/runtime.c
+++ b/xen/arch/x86/efi/runtime.c
@@ -184,7 +184,7 @@ int efi_get_info(uint32_t idx, union xenpf_efi_info *info)
     return 0;
 }
 
-static long gwstrlen(XEN_GUEST_HANDLE(CHAR16) str)
+static long gwstrlen(XEN_GUEST_HANDLE_PARAM(CHAR16) str)
 {
     unsigned long len;
 
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 7f8a025c..e2bf831 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -3047,14 +3047,14 @@ static int grant_table_op_is_allowed(unsigned int cmd)
 }
 
 static long hvm_grant_table_op(
-    unsigned int cmd, XEN_GUEST_HANDLE(void) uop, unsigned int count)
+    unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) uop, unsigned int count)
 {
     if ( !grant_table_op_is_allowed(cmd) )
         return -ENOSYS; /* all other commands need auditing */
     return do_grant_table_op(cmd, uop, count);
 }
 
-static long hvm_memory_op(int cmd, XEN_GUEST_HANDLE(void) arg)
+static long hvm_memory_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     long rc;
 
@@ -3072,7 +3072,7 @@ static long hvm_memory_op(int cmd, XEN_GUEST_HANDLE(void) arg)
     return do_memory_op(cmd, arg);
 }
 
-static long hvm_physdev_op(int cmd, XEN_GUEST_HANDLE(void) arg)
+static long hvm_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     switch ( cmd )
     {
@@ -3088,7 +3088,7 @@ static long hvm_physdev_op(int cmd, XEN_GUEST_HANDLE(void) arg)
 }
 
 static long hvm_vcpu_op(
-    int cmd, int vcpuid, XEN_GUEST_HANDLE(void) arg)
+    int cmd, int vcpuid, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     long rc;
 
@@ -3137,7 +3137,7 @@ static hvm_hypercall_t *hvm_hypercall32_table[NR_hypercalls] = {
 #else /* defined(__x86_64__) */
 
 static long hvm_grant_table_op_compat32(unsigned int cmd,
-                                        XEN_GUEST_HANDLE(void) uop,
+                                        XEN_GUEST_HANDLE_PARAM(void) uop,
                                         unsigned int count)
 {
     if ( !grant_table_op_is_allowed(cmd) )
@@ -3145,7 +3145,7 @@ static long hvm_grant_table_op_compat32(unsigned int cmd,
     return compat_grant_table_op(cmd, uop, count);
 }
 
-static long hvm_memory_op_compat32(int cmd, XEN_GUEST_HANDLE(void) arg)
+static long hvm_memory_op_compat32(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     int rc;
 
@@ -3164,7 +3164,7 @@ static long hvm_memory_op_compat32(int cmd, XEN_GUEST_HANDLE(void) arg)
 }
 
 static long hvm_vcpu_op_compat32(
-    int cmd, int vcpuid, XEN_GUEST_HANDLE(void) arg)
+    int cmd, int vcpuid, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     long rc;
 
@@ -3188,7 +3188,7 @@ static long hvm_vcpu_op_compat32(
 }
 
 static long hvm_physdev_op_compat32(
-    int cmd, XEN_GUEST_HANDLE(void) arg)
+    int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     switch ( cmd )
     {
@@ -3360,7 +3360,7 @@ void hvm_hypercall_page_initialise(struct domain *d,
 }
 
 static int hvmop_set_pci_intx_level(
-    XEN_GUEST_HANDLE(xen_hvm_set_pci_intx_level_t) uop)
+    XEN_GUEST_HANDLE_PARAM(xen_hvm_set_pci_intx_level_t) uop)
 {
     struct xen_hvm_set_pci_intx_level op;
     struct domain *d;
@@ -3525,7 +3525,7 @@ static void hvm_s3_resume(struct domain *d)
 }
 
 static int hvmop_set_isa_irq_level(
-    XEN_GUEST_HANDLE(xen_hvm_set_isa_irq_level_t) uop)
+    XEN_GUEST_HANDLE_PARAM(xen_hvm_set_isa_irq_level_t) uop)
 {
     struct xen_hvm_set_isa_irq_level op;
     struct domain *d;
@@ -3569,7 +3569,7 @@ static int hvmop_set_isa_irq_level(
 }
 
 static int hvmop_set_pci_link_route(
-    XEN_GUEST_HANDLE(xen_hvm_set_pci_link_route_t) uop)
+    XEN_GUEST_HANDLE_PARAM(xen_hvm_set_pci_link_route_t) uop)
 {
     struct xen_hvm_set_pci_link_route op;
     struct domain *d;
@@ -3602,7 +3602,7 @@ static int hvmop_set_pci_link_route(
 }
 
 static int hvmop_inject_msi(
-    XEN_GUEST_HANDLE(xen_hvm_inject_msi_t) uop)
+    XEN_GUEST_HANDLE_PARAM(xen_hvm_inject_msi_t) uop)
 {
     struct xen_hvm_inject_msi op;
     struct domain *d;
@@ -3686,7 +3686,7 @@ static int hvm_replace_event_channel(struct vcpu *v, domid_t remote_domid,
     return 0;
 }
 
-long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE(void) arg)
+long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
 
 {
     struct domain *curr_d = current->domain;
diff --git a/xen/arch/x86/microcode.c b/xen/arch/x86/microcode.c
index bdda3f5..1477481 100644
--- a/xen/arch/x86/microcode.c
+++ b/xen/arch/x86/microcode.c
@@ -192,7 +192,7 @@ static long do_microcode_update(void *_info)
     return error;
 }
 
-int microcode_update(XEN_GUEST_HANDLE(const_void) buf, unsigned long len)
+int microcode_update(XEN_GUEST_HANDLE_PARAM(const_void) buf, unsigned long len)
 {
     int ret;
     struct microcode_info *info;
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index f5c704e..4d72700 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -2914,7 +2914,7 @@ static void put_pg_owner(struct domain *pg_owner)
 }
 
 static inline int vcpumask_to_pcpumask(
-    struct domain *d, XEN_GUEST_HANDLE(const_void) bmap, cpumask_t *pmask)
+    struct domain *d, XEN_GUEST_HANDLE_PARAM(const_void) bmap, cpumask_t *pmask)
 {
     unsigned int vcpu_id, vcpu_bias, offs;
     unsigned long vmask;
@@ -2974,9 +2974,9 @@ static inline void fixunmap_domain_page(const void *ptr)
 #endif
 
 int do_mmuext_op(
-    XEN_GUEST_HANDLE(mmuext_op_t) uops,
+    XEN_GUEST_HANDLE_PARAM(mmuext_op_t) uops,
     unsigned int count,
-    XEN_GUEST_HANDLE(uint) pdone,
+    XEN_GUEST_HANDLE_PARAM(uint) pdone,
     unsigned int foreigndom)
 {
     struct mmuext_op op;
@@ -3438,9 +3438,9 @@ int do_mmuext_op(
 }
 
 int do_mmu_update(
-    XEN_GUEST_HANDLE(mmu_update_t) ureqs,
+    XEN_GUEST_HANDLE_PARAM(mmu_update_t) ureqs,
     unsigned int count,
-    XEN_GUEST_HANDLE(uint) pdone,
+    XEN_GUEST_HANDLE_PARAM(uint) pdone,
     unsigned int foreigndom)
 {
     struct mmu_update req;
@@ -4387,7 +4387,7 @@ long set_gdt(struct vcpu *v,
 }
 
 
-long do_set_gdt(XEN_GUEST_HANDLE(ulong) frame_list, unsigned int entries)
+long do_set_gdt(XEN_GUEST_HANDLE_PARAM(ulong) frame_list, unsigned int entries)
 {
     int nr_pages = (entries + 511) / 512;
     unsigned long frames[16];
@@ -4661,7 +4661,7 @@ static int xenmem_add_to_physmap(struct domain *d,
     return xenmem_add_to_physmap_once(d, xatp);
 }
 
-long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
+long arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     int rc;
 
diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 13b4be2..67e48a3 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -690,7 +690,7 @@ void hap_teardown(struct domain *d)
 }
 
 int hap_domctl(struct domain *d, xen_domctl_shadow_op_t *sc,
-               XEN_GUEST_HANDLE(void) u_domctl)
+               XEN_GUEST_HANDLE_PARAM(void) u_domctl)
 {
     int rc, preempted = 0;
 
diff --git a/xen/arch/x86/mm/mem_event.c b/xen/arch/x86/mm/mem_event.c
index d728889..d3dac14 100644
--- a/xen/arch/x86/mm/mem_event.c
+++ b/xen/arch/x86/mm/mem_event.c
@@ -512,7 +512,7 @@ void mem_event_cleanup(struct domain *d)
 }
 
 int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
-                     XEN_GUEST_HANDLE(void) u_domctl)
+                     XEN_GUEST_HANDLE_PARAM(void) u_domctl)
 {
     int rc;
 
diff --git a/xen/arch/x86/mm/paging.c b/xen/arch/x86/mm/paging.c
index ca879f9..ea44e39 100644
--- a/xen/arch/x86/mm/paging.c
+++ b/xen/arch/x86/mm/paging.c
@@ -654,7 +654,7 @@ void paging_vcpu_init(struct vcpu *v)
 
 
 int paging_domctl(struct domain *d, xen_domctl_shadow_op_t *sc,
-                  XEN_GUEST_HANDLE(void) u_domctl)
+                  XEN_GUEST_HANDLE_PARAM(void) u_domctl)
 {
     int rc;
 
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index dc245be..bd47f03 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -3786,7 +3786,7 @@ out:
 
 int shadow_domctl(struct domain *d, 
                   xen_domctl_shadow_op_t *sc,
-                  XEN_GUEST_HANDLE(void) u_domctl)
+                  XEN_GUEST_HANDLE_PARAM(void) u_domctl)
 {
     int rc, preempted = 0;
 
diff --git a/xen/arch/x86/oprofile/xenoprof.c b/xen/arch/x86/oprofile/xenoprof.c
index 71f00ef..5d286a2 100644
--- a/xen/arch/x86/oprofile/xenoprof.c
+++ b/xen/arch/x86/oprofile/xenoprof.c
@@ -19,7 +19,7 @@
 
 #include "op_counter.h"
 
-int xenoprof_arch_counter(XEN_GUEST_HANDLE(void) arg)
+int xenoprof_arch_counter(XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct xenoprof_counter counter;
 
@@ -39,7 +39,7 @@ int xenoprof_arch_counter(XEN_GUEST_HANDLE(void) arg)
     return 0;
 }
 
-int xenoprof_arch_ibs_counter(XEN_GUEST_HANDLE(void) arg)
+int xenoprof_arch_ibs_counter(XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct xenoprof_ibs_counter ibs_counter;
 
@@ -57,7 +57,7 @@ int xenoprof_arch_ibs_counter(XEN_GUEST_HANDLE(void) arg)
 }
 
 #ifdef CONFIG_COMPAT
-int compat_oprof_arch_counter(XEN_GUEST_HANDLE(void) arg)
+int compat_oprof_arch_counter(XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct compat_oprof_counter counter;
 
diff --git a/xen/arch/x86/physdev.c b/xen/arch/x86/physdev.c
index b0458fd..b6474ef 100644
--- a/xen/arch/x86/physdev.c
+++ b/xen/arch/x86/physdev.c
@@ -255,7 +255,7 @@ int physdev_unmap_pirq(domid_t domid, int pirq)
 }
 #endif /* COMPAT */
 
-ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE(void) arg)
+ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     int irq;
     ret_t ret;
diff --git a/xen/arch/x86/platform_hypercall.c b/xen/arch/x86/platform_hypercall.c
index 88880b0..a32e0a2 100644
--- a/xen/arch/x86/platform_hypercall.c
+++ b/xen/arch/x86/platform_hypercall.c
@@ -60,7 +60,7 @@ long cpu_down_helper(void *data);
 long core_parking_helper(void *data);
 uint32_t get_cur_idle_nums(void);
 
-ret_t do_platform_op(XEN_GUEST_HANDLE(xen_platform_op_t) u_xenpf_op)
+ret_t do_platform_op(XEN_GUEST_HANDLE_PARAM(xen_platform_op_t) u_xenpf_op)
 {
     ret_t ret = 0;
     struct xen_platform_op curop, *op = &curop;
diff --git a/xen/arch/x86/sysctl.c b/xen/arch/x86/sysctl.c
index 379f071..b84dd34 100644
--- a/xen/arch/x86/sysctl.c
+++ b/xen/arch/x86/sysctl.c
@@ -58,7 +58,7 @@ long cpu_down_helper(void *data)
 }
 
 long arch_do_sysctl(
-    struct xen_sysctl *sysctl, XEN_GUEST_HANDLE(xen_sysctl_t) u_sysctl)
+    struct xen_sysctl *sysctl, XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
 {
     long ret = 0;
 
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 767be86..281d9e7 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -3700,7 +3700,7 @@ int send_guest_trap(struct domain *d, uint16_t vcpuid, unsigned int trap_nr)
 }
 
 
-long do_set_trap_table(XEN_GUEST_HANDLE(const_trap_info_t) traps)
+long do_set_trap_table(XEN_GUEST_HANDLE_PARAM(const_trap_info_t) traps)
 {
     struct trap_info cur;
     struct vcpu *curr = current;
diff --git a/xen/arch/x86/x86_32/mm.c b/xen/arch/x86/x86_32/mm.c
index 37efa3c..f6448fb 100644
--- a/xen/arch/x86/x86_32/mm.c
+++ b/xen/arch/x86/x86_32/mm.c
@@ -203,7 +203,7 @@ void __init subarch_init_memory(void)
     }
 }
 
-long subarch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
+long subarch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct xen_machphys_mfn_list xmml;
     unsigned long mfn, last_mfn;
diff --git a/xen/arch/x86/x86_32/traps.c b/xen/arch/x86/x86_32/traps.c
index 8f68808..0c7c860 100644
--- a/xen/arch/x86/x86_32/traps.c
+++ b/xen/arch/x86/x86_32/traps.c
@@ -492,7 +492,7 @@ static long unregister_guest_callback(struct callback_unregister *unreg)
 }
 
 
-long do_callback_op(int cmd, XEN_GUEST_HANDLE(const_void) arg)
+long do_callback_op(int cmd, XEN_GUEST_HANDLE_PARAM(const_void) arg)
 {
     long ret;
 
diff --git a/xen/arch/x86/x86_64/compat/mm.c b/xen/arch/x86/x86_64/compat/mm.c
index 5bcd2fd..1de93b7 100644
--- a/xen/arch/x86/x86_64/compat/mm.c
+++ b/xen/arch/x86/x86_64/compat/mm.c
@@ -5,7 +5,7 @@
 #include <asm/mem_event.h>
 #include <asm/mem_sharing.h>
 
-int compat_set_gdt(XEN_GUEST_HANDLE(uint) frame_list, unsigned int entries)
+int compat_set_gdt(XEN_GUEST_HANDLE_PARAM(uint) frame_list, unsigned int entries)
 {
     unsigned int i, nr_pages = (entries + 511) / 512;
     unsigned long frames[16];
@@ -44,7 +44,7 @@ int compat_update_descriptor(u32 pa_lo, u32 pa_hi, u32 desc_lo, u32 desc_hi)
                                 desc_lo | ((u64)desc_hi << 32));
 }
 
-int compat_arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
+int compat_arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct compat_machphys_mfn_list xmml;
     l2_pgentry_t l2e;
@@ -266,14 +266,14 @@ int compat_update_va_mapping_otherdomain(unsigned long va, u32 lo, u32 hi,
 
 DEFINE_XEN_GUEST_HANDLE(mmuext_op_compat_t);
 
-int compat_mmuext_op(XEN_GUEST_HANDLE(mmuext_op_compat_t) cmp_uops,
+int compat_mmuext_op(XEN_GUEST_HANDLE_PARAM(mmuext_op_compat_t) cmp_uops,
                      unsigned int count,
-                     XEN_GUEST_HANDLE(uint) pdone,
+                     XEN_GUEST_HANDLE_PARAM(uint) pdone,
                      unsigned int foreigndom)
 {
     unsigned int i, preempt_mask;
     int rc = 0;
-    XEN_GUEST_HANDLE(mmuext_op_t) nat_ops;
+    XEN_GUEST_HANDLE_PARAM(mmuext_op_t) nat_ops;
 
     preempt_mask = count & MMU_UPDATE_PREEMPTED;
     count ^= preempt_mask;
diff --git a/xen/arch/x86/x86_64/domain.c b/xen/arch/x86/x86_64/domain.c
index e746c89..144ca2d 100644
--- a/xen/arch/x86/x86_64/domain.c
+++ b/xen/arch/x86/x86_64/domain.c
@@ -23,7 +23,7 @@ CHECK_vcpu_get_physid;
 
 int
 arch_compat_vcpu_op(
-    int cmd, struct vcpu *v, XEN_GUEST_HANDLE(void) arg)
+    int cmd, struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     int rc = -ENOSYS;
 
diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index 635a499..17c46a1 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -1043,7 +1043,7 @@ void __init subarch_init_memory(void)
     }
 }
 
-long subarch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
+long subarch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct xen_machphys_mfn_list xmml;
     l3_pgentry_t l3e;
diff --git a/xen/arch/x86/x86_64/traps.c b/xen/arch/x86/x86_64/traps.c
index 806cf2e..6ead813 100644
--- a/xen/arch/x86/x86_64/traps.c
+++ b/xen/arch/x86/x86_64/traps.c
@@ -518,7 +518,7 @@ static long unregister_guest_callback(struct callback_unregister *unreg)
 }
 
 
-long do_callback_op(int cmd, XEN_GUEST_HANDLE(const_void) arg)
+long do_callback_op(int cmd, XEN_GUEST_HANDLE_PARAM(const_void) arg)
 {
     long ret;
 
diff --git a/xen/common/compat/domain.c b/xen/common/compat/domain.c
index 40a0287..e4c8ceb 100644
--- a/xen/common/compat/domain.c
+++ b/xen/common/compat/domain.c
@@ -15,7 +15,7 @@
 CHECK_vcpu_set_periodic_timer;
 #undef xen_vcpu_set_periodic_timer
 
-int compat_vcpu_op(int cmd, int vcpuid, XEN_GUEST_HANDLE(void) arg)
+int compat_vcpu_op(int cmd, int vcpuid, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct domain *d = current->domain;
     struct vcpu *v;
diff --git a/xen/common/compat/grant_table.c b/xen/common/compat/grant_table.c
index edd20c6..b524955 100644
--- a/xen/common/compat/grant_table.c
+++ b/xen/common/compat/grant_table.c
@@ -52,12 +52,12 @@ CHECK_gnttab_swap_grant_ref;
 #undef xen_gnttab_swap_grant_ref
 
 int compat_grant_table_op(unsigned int cmd,
-                          XEN_GUEST_HANDLE(void) cmp_uop,
+                          XEN_GUEST_HANDLE_PARAM(void) cmp_uop,
                           unsigned int count)
 {
     int rc = 0;
     unsigned int i;
-    XEN_GUEST_HANDLE(void) cnt_uop;
+    XEN_GUEST_HANDLE_PARAM(void) cnt_uop;
 
     set_xen_guest_handle(cnt_uop, NULL);
     switch ( cmd )
@@ -206,7 +206,7 @@ int compat_grant_table_op(unsigned int cmd,
             }
             if ( rc >= 0 )
             {
-                XEN_GUEST_HANDLE(gnttab_transfer_compat_t) xfer;
+                XEN_GUEST_HANDLE_PARAM(gnttab_transfer_compat_t) xfer;
 
                 xfer = guest_handle_cast(cmp_uop, gnttab_transfer_compat_t);
                 guest_handle_add_offset(xfer, i);
@@ -251,7 +251,7 @@ int compat_grant_table_op(unsigned int cmd,
             }
             if ( rc >= 0 )
             {
-                XEN_GUEST_HANDLE(gnttab_copy_compat_t) copy;
+                XEN_GUEST_HANDLE_PARAM(gnttab_copy_compat_t) copy;
 
                 copy = guest_handle_cast(cmp_uop, gnttab_copy_compat_t);
                 guest_handle_add_offset(copy, i);
diff --git a/xen/common/compat/memory.c b/xen/common/compat/memory.c
index e7257cc..996151c 100644
--- a/xen/common/compat/memory.c
+++ b/xen/common/compat/memory.c
@@ -13,7 +13,7 @@ CHECK_TYPE(domid);
 #undef compat_domid_t
 #undef xen_domid_t
 
-int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE(void) compat)
+int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) compat)
 {
     int rc, split, op = cmd & MEMOP_CMD_MASK;
     unsigned int start_extent = cmd >> MEMOP_EXTENT_SHIFT;
@@ -22,7 +22,7 @@ int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE(void) compat)
     {
         unsigned int i, end_extent = 0;
         union {
-            XEN_GUEST_HANDLE(void) hnd;
+            XEN_GUEST_HANDLE_PARAM(void) hnd;
             struct xen_memory_reservation *rsrv;
             struct xen_memory_exchange *xchg;
             struct xen_remove_from_physmap *xrfp;
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 4c5d241..d7cd135 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -804,7 +804,7 @@ void vcpu_reset(struct vcpu *v)
 }
 
 
-long do_vcpu_op(int cmd, int vcpuid, XEN_GUEST_HANDLE(void) arg)
+long do_vcpu_op(int cmd, int vcpuid, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct domain *d = current->domain;
     struct vcpu *v;
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index 7ca6b08..527c5ad 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -238,7 +238,7 @@ void domctl_lock_release(void)
     spin_unlock(&current->domain->hypercall_deadlock_mutex);
 }
 
-long do_domctl(XEN_GUEST_HANDLE(xen_domctl_t) u_domctl)
+long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
     long ret = 0;
     struct xen_domctl curop, *op = &curop;
diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index 53777f8..a80a0d1 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -970,7 +970,7 @@ out:
 }
 
 
-long do_event_channel_op(int cmd, XEN_GUEST_HANDLE(void) arg)
+long do_event_channel_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     long rc;
 
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 9961e83..d780dc6 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -771,7 +771,7 @@ __gnttab_map_grant_ref(
 
 static long
 gnttab_map_grant_ref(
-    XEN_GUEST_HANDLE(gnttab_map_grant_ref_t) uop, unsigned int count)
+    XEN_GUEST_HANDLE_PARAM(gnttab_map_grant_ref_t) uop, unsigned int count)
 {
     int i;
     struct gnttab_map_grant_ref op;
@@ -1040,7 +1040,7 @@ __gnttab_unmap_grant_ref(
 
 static long
 gnttab_unmap_grant_ref(
-    XEN_GUEST_HANDLE(gnttab_unmap_grant_ref_t) uop, unsigned int count)
+    XEN_GUEST_HANDLE_PARAM(gnttab_unmap_grant_ref_t) uop, unsigned int count)
 {
     int i, c, partial_done, done = 0;
     struct gnttab_unmap_grant_ref op;
@@ -1102,7 +1102,7 @@ __gnttab_unmap_and_replace(
 
 static long
 gnttab_unmap_and_replace(
-    XEN_GUEST_HANDLE(gnttab_unmap_and_replace_t) uop, unsigned int count)
+    XEN_GUEST_HANDLE_PARAM(gnttab_unmap_and_replace_t) uop, unsigned int count)
 {
     int i, c, partial_done, done = 0;
     struct gnttab_unmap_and_replace op;
@@ -1254,7 +1254,7 @@ active_alloc_failed:
 
 static long 
 gnttab_setup_table(
-    XEN_GUEST_HANDLE(gnttab_setup_table_t) uop, unsigned int count)
+    XEN_GUEST_HANDLE_PARAM(gnttab_setup_table_t) uop, unsigned int count)
 {
     struct gnttab_setup_table op;
     struct domain *d;
@@ -1348,7 +1348,7 @@ gnttab_setup_table(
 
 static long 
 gnttab_query_size(
-    XEN_GUEST_HANDLE(gnttab_query_size_t) uop, unsigned int count)
+    XEN_GUEST_HANDLE_PARAM(gnttab_query_size_t) uop, unsigned int count)
 {
     struct gnttab_query_size op;
     struct domain *d;
@@ -1485,7 +1485,7 @@ gnttab_prepare_for_transfer(
 
 static long
 gnttab_transfer(
-    XEN_GUEST_HANDLE(gnttab_transfer_t) uop, unsigned int count)
+    XEN_GUEST_HANDLE_PARAM(gnttab_transfer_t) uop, unsigned int count)
 {
     struct domain *d = current->domain;
     struct domain *e;
@@ -2082,7 +2082,7 @@ __gnttab_copy(
 
 static long
 gnttab_copy(
-    XEN_GUEST_HANDLE(gnttab_copy_t) uop, unsigned int count)
+    XEN_GUEST_HANDLE_PARAM(gnttab_copy_t) uop, unsigned int count)
 {
     int i;
     struct gnttab_copy op;
@@ -2101,7 +2101,7 @@ gnttab_copy(
 }
 
 static long
-gnttab_set_version(XEN_GUEST_HANDLE(gnttab_set_version_t uop))
+gnttab_set_version(XEN_GUEST_HANDLE_PARAM(gnttab_set_version_t uop))
 {
     gnttab_set_version_t op;
     struct domain *d = current->domain;
@@ -2220,7 +2220,7 @@ out:
 }
 
 static long
-gnttab_get_status_frames(XEN_GUEST_HANDLE(gnttab_get_status_frames_t) uop,
+gnttab_get_status_frames(XEN_GUEST_HANDLE_PARAM(gnttab_get_status_frames_t) uop,
                          int count)
 {
     gnttab_get_status_frames_t op;
@@ -2289,7 +2289,7 @@ out1:
 }
 
 static long
-gnttab_get_version(XEN_GUEST_HANDLE(gnttab_get_version_t uop))
+gnttab_get_version(XEN_GUEST_HANDLE_PARAM(gnttab_get_version_t uop))
 {
     gnttab_get_version_t op;
     struct domain *d;
@@ -2368,7 +2368,7 @@ out:
 }
 
 static long
-gnttab_swap_grant_ref(XEN_GUEST_HANDLE(gnttab_swap_grant_ref_t uop),
+gnttab_swap_grant_ref(XEN_GUEST_HANDLE_PARAM(gnttab_swap_grant_ref_t uop),
                       unsigned int count)
 {
     int i;
@@ -2389,7 +2389,7 @@ gnttab_swap_grant_ref(XEN_GUEST_HANDLE(gnttab_swap_grant_ref_t uop),
 
 long
 do_grant_table_op(
-    unsigned int cmd, XEN_GUEST_HANDLE(void) uop, unsigned int count)
+    unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) uop, unsigned int count)
 {
     long rc;
     
@@ -2401,7 +2401,7 @@ do_grant_table_op(
     {
     case GNTTABOP_map_grant_ref:
     {
-        XEN_GUEST_HANDLE(gnttab_map_grant_ref_t) map =
+        XEN_GUEST_HANDLE_PARAM(gnttab_map_grant_ref_t) map =
             guest_handle_cast(uop, gnttab_map_grant_ref_t);
         if ( unlikely(!guest_handle_okay(map, count)) )
             goto out;
@@ -2415,7 +2415,7 @@ do_grant_table_op(
     }
     case GNTTABOP_unmap_grant_ref:
     {
-        XEN_GUEST_HANDLE(gnttab_unmap_grant_ref_t) unmap =
+        XEN_GUEST_HANDLE_PARAM(gnttab_unmap_grant_ref_t) unmap =
             guest_handle_cast(uop, gnttab_unmap_grant_ref_t);
         if ( unlikely(!guest_handle_okay(unmap, count)) )
             goto out;
@@ -2429,7 +2429,7 @@ do_grant_table_op(
     }
     case GNTTABOP_unmap_and_replace:
     {
-        XEN_GUEST_HANDLE(gnttab_unmap_and_replace_t) unmap =
+        XEN_GUEST_HANDLE_PARAM(gnttab_unmap_and_replace_t) unmap =
             guest_handle_cast(uop, gnttab_unmap_and_replace_t);
         if ( unlikely(!guest_handle_okay(unmap, count)) )
             goto out;
@@ -2453,7 +2453,7 @@ do_grant_table_op(
     }
     case GNTTABOP_transfer:
     {
-        XEN_GUEST_HANDLE(gnttab_transfer_t) transfer =
+        XEN_GUEST_HANDLE_PARAM(gnttab_transfer_t) transfer =
             guest_handle_cast(uop, gnttab_transfer_t);
         if ( unlikely(!guest_handle_okay(transfer, count)) )
             goto out;
@@ -2467,7 +2467,7 @@ do_grant_table_op(
     }
     case GNTTABOP_copy:
     {
-        XEN_GUEST_HANDLE(gnttab_copy_t) copy =
+        XEN_GUEST_HANDLE_PARAM(gnttab_copy_t) copy =
             guest_handle_cast(uop, gnttab_copy_t);
         if ( unlikely(!guest_handle_okay(copy, count)) )
             goto out;
@@ -2504,7 +2504,7 @@ do_grant_table_op(
     }
     case GNTTABOP_swap_grant_ref:
     {
-        XEN_GUEST_HANDLE(gnttab_swap_grant_ref_t) swap =
+        XEN_GUEST_HANDLE_PARAM(gnttab_swap_grant_ref_t) swap =
             guest_handle_cast(uop, gnttab_swap_grant_ref_t);
         if ( unlikely(!guest_handle_okay(swap, count)) )
             goto out;
diff --git a/xen/common/kernel.c b/xen/common/kernel.c
index c915bbc..55caff6 100644
--- a/xen/common/kernel.c
+++ b/xen/common/kernel.c
@@ -204,7 +204,7 @@ void __init do_initcalls(void)
  * Simple hypercalls.
  */
 
-DO(xen_version)(int cmd, XEN_GUEST_HANDLE(void) arg)
+DO(xen_version)(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     switch ( cmd )
     {
@@ -332,7 +332,7 @@ DO(xen_version)(int cmd, XEN_GUEST_HANDLE(void) arg)
     return -ENOSYS;
 }
 
-DO(nmi_op)(unsigned int cmd, XEN_GUEST_HANDLE(void) arg)
+DO(nmi_op)(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct xennmi_callback cb;
     long rc = 0;
diff --git a/xen/common/kexec.c b/xen/common/kexec.c
index 09a5624..03389eb 100644
--- a/xen/common/kexec.c
+++ b/xen/common/kexec.c
@@ -613,7 +613,7 @@ static int kexec_get_range_internal(xen_kexec_range_t *range)
     return ret;
 }
 
-static int kexec_get_range(XEN_GUEST_HANDLE(void) uarg)
+static int kexec_get_range(XEN_GUEST_HANDLE_PARAM(void) uarg)
 {
     xen_kexec_range_t range;
     int ret = -EINVAL;
@@ -629,7 +629,7 @@ static int kexec_get_range(XEN_GUEST_HANDLE(void) uarg)
     return ret;
 }
 
-static int kexec_get_range_compat(XEN_GUEST_HANDLE(void) uarg)
+static int kexec_get_range_compat(XEN_GUEST_HANDLE_PARAM(void) uarg)
 {
 #ifdef CONFIG_COMPAT
     xen_kexec_range_t range;
@@ -777,7 +777,7 @@ static int kexec_load_unload_internal(unsigned long op, xen_kexec_load_t *load)
     return ret;
 }
 
-static int kexec_load_unload(unsigned long op, XEN_GUEST_HANDLE(void) uarg)
+static int kexec_load_unload(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) uarg)
 {
     xen_kexec_load_t load;
 
@@ -788,7 +788,7 @@ static int kexec_load_unload(unsigned long op, XEN_GUEST_HANDLE(void) uarg)
 }
 
 static int kexec_load_unload_compat(unsigned long op,
-                                    XEN_GUEST_HANDLE(void) uarg)
+                                    XEN_GUEST_HANDLE_PARAM(void) uarg)
 {
 #ifdef CONFIG_COMPAT
     compat_kexec_load_t compat_load;
@@ -813,7 +813,7 @@ static int kexec_load_unload_compat(unsigned long op,
 #endif /* CONFIG_COMPAT */
 }
 
-static int kexec_exec(XEN_GUEST_HANDLE(void) uarg)
+static int kexec_exec(XEN_GUEST_HANDLE_PARAM(void) uarg)
 {
     xen_kexec_exec_t exec;
     xen_kexec_image_t *image;
@@ -845,7 +845,7 @@ static int kexec_exec(XEN_GUEST_HANDLE(void) uarg)
     return -EINVAL; /* never reached */
 }
 
-int do_kexec_op_internal(unsigned long op, XEN_GUEST_HANDLE(void) uarg,
+int do_kexec_op_internal(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) uarg,
                            int compat)
 {
     unsigned long flags;
@@ -886,13 +886,13 @@ int do_kexec_op_internal(unsigned long op, XEN_GUEST_HANDLE(void) uarg,
     return ret;
 }
 
-long do_kexec_op(unsigned long op, XEN_GUEST_HANDLE(void) uarg)
+long do_kexec_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) uarg)
 {
     return do_kexec_op_internal(op, uarg, 0);
 }
 
 #ifdef CONFIG_COMPAT
-int compat_kexec_op(unsigned long op, XEN_GUEST_HANDLE(void) uarg)
+int compat_kexec_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) uarg)
 {
     return do_kexec_op_internal(op, uarg, 1);
 }
diff --git a/xen/common/memory.c b/xen/common/memory.c
index 7e58cc4..a683954 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -277,7 +277,7 @@ static void decrease_reservation(struct memop_args *a)
     a->nr_done = i;
 }
 
-static long memory_exchange(XEN_GUEST_HANDLE(xen_memory_exchange_t) arg)
+static long memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t) arg)
 {
     struct xen_memory_exchange exch;
     PAGE_LIST_HEAD(in_chunk_list);
@@ -530,7 +530,7 @@ static long memory_exchange(XEN_GUEST_HANDLE(xen_memory_exchange_t) arg)
     return rc;
 }
 
-long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE(void) arg)
+long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct domain *d;
     int rc, op;
diff --git a/xen/common/multicall.c b/xen/common/multicall.c
index 6c1a9d7..5de5f8d 100644
--- a/xen/common/multicall.c
+++ b/xen/common/multicall.c
@@ -21,7 +21,7 @@ typedef long ret_t;
 
 ret_t
 do_multicall(
-    XEN_GUEST_HANDLE(multicall_entry_t) call_list, unsigned int nr_calls)
+    XEN_GUEST_HANDLE_PARAM(multicall_entry_t) call_list, unsigned int nr_calls)
 {
     struct mc_state *mcs = &current->mc_state;
     unsigned int     i;
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index 0854f55..c26eac4 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -836,7 +836,7 @@ typedef long ret_t;
 
 #endif /* !COMPAT */
 
-ret_t do_sched_op(int cmd, XEN_GUEST_HANDLE(void) arg)
+ret_t do_sched_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     ret_t ret = 0;
 
diff --git a/xen/common/sysctl.c b/xen/common/sysctl.c
index ea68278..47142f4 100644
--- a/xen/common/sysctl.c
+++ b/xen/common/sysctl.c
@@ -27,7 +27,7 @@
 #include <xsm/xsm.h>
 #include <xen/pmstat.h>
 
-long do_sysctl(XEN_GUEST_HANDLE(xen_sysctl_t) u_sysctl)
+long do_sysctl(XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
 {
     long ret = 0;
     struct xen_sysctl curop, *op = &curop;
diff --git a/xen/common/xenoprof.c b/xen/common/xenoprof.c
index e571fea..c001b38 100644
--- a/xen/common/xenoprof.c
+++ b/xen/common/xenoprof.c
@@ -404,7 +404,7 @@ static int add_active_list(domid_t domid)
     return 0;
 }
 
-static int add_passive_list(XEN_GUEST_HANDLE(void) arg)
+static int add_passive_list(XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct xenoprof_passive passive;
     struct domain *d;
@@ -585,7 +585,7 @@ void xenoprof_log_event(struct vcpu *vcpu, const struct cpu_user_regs *regs,
 
 
 
-static int xenoprof_op_init(XEN_GUEST_HANDLE(void) arg)
+static int xenoprof_op_init(XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct domain *d = current->domain;
     struct xenoprof_init xenoprof_init;
@@ -609,7 +609,7 @@ static int xenoprof_op_init(XEN_GUEST_HANDLE(void) arg)
 
 #endif /* !COMPAT */
 
-static int xenoprof_op_get_buffer(XEN_GUEST_HANDLE(void) arg)
+static int xenoprof_op_get_buffer(XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct xenoprof_get_buffer xenoprof_get_buffer;
     struct domain *d = current->domain;
@@ -660,7 +660,7 @@ static int xenoprof_op_get_buffer(XEN_GUEST_HANDLE(void) arg)
                       || (op == XENOPROF_disable_virq)  \
                       || (op == XENOPROF_get_buffer))
  
-int do_xenoprof_op(int op, XEN_GUEST_HANDLE(void) arg)
+int do_xenoprof_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     int ret = 0;
     
diff --git a/xen/drivers/acpi/pmstat.c b/xen/drivers/acpi/pmstat.c
index 698711e..f8d62f2 100644
--- a/xen/drivers/acpi/pmstat.c
+++ b/xen/drivers/acpi/pmstat.c
@@ -515,7 +515,7 @@ int do_pm_op(struct xen_sysctl_pm_op *op)
     return ret;
 }
 
-int acpi_set_pdc_bits(u32 acpi_id, XEN_GUEST_HANDLE(uint32) pdc)
+int acpi_set_pdc_bits(u32 acpi_id, XEN_GUEST_HANDLE_PARAM(uint32) pdc)
 {
     u32 bits[3];
     int ret;
diff --git a/xen/drivers/char/console.c b/xen/drivers/char/console.c
index e10bed5..b0f2334 100644
--- a/xen/drivers/char/console.c
+++ b/xen/drivers/char/console.c
@@ -182,7 +182,7 @@ static void putchar_console_ring(int c)
 
 long read_console_ring(struct xen_sysctl_readconsole *op)
 {
-    XEN_GUEST_HANDLE(char) str;
+    XEN_GUEST_HANDLE_PARAM(char) str;
     uint32_t idx, len, max, sofar, c;
 
     str   = guest_handle_cast(op->buffer, char),
@@ -320,7 +320,7 @@ static void notify_dom0_con_ring(unsigned long unused)
 static DECLARE_SOFTIRQ_TASKLET(notify_dom0_con_ring_tasklet,
                                notify_dom0_con_ring, 0);
 
-static long guest_console_write(XEN_GUEST_HANDLE(char) buffer, int count)
+static long guest_console_write(XEN_GUEST_HANDLE_PARAM(char) buffer, int count)
 {
     char kbuf[128], *kptr;
     int kcount;
@@ -358,7 +358,7 @@ static long guest_console_write(XEN_GUEST_HANDLE(char) buffer, int count)
     return 0;
 }
 
-long do_console_io(int cmd, int count, XEN_GUEST_HANDLE(char) buffer)
+long do_console_io(int cmd, int count, XEN_GUEST_HANDLE_PARAM(char) buffer)
 {
     long rc;
     unsigned int idx, len;
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 64f5fd1..396461f 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -518,7 +518,7 @@ void iommu_crash_shutdown(void)
 
 int iommu_do_domctl(
     struct xen_domctl *domctl,
-    XEN_GUEST_HANDLE(xen_domctl_t) u_domctl)
+    XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
     struct domain *d;
     u16 seg;
diff --git a/xen/include/asm-arm/hypercall.h b/xen/include/asm-arm/hypercall.h
index 454f02e..090e620 100644
--- a/xen/include/asm-arm/hypercall.h
+++ b/xen/include/asm-arm/hypercall.h
@@ -2,7 +2,7 @@
 #define __ASM_ARM_HYPERCALL_H__
 
 #include <public/domctl.h> /* for arch_do_domctl */
-int do_physdev_op(int cmd, XEN_GUEST_HANDLE(void) arg);
+int do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg);
 
 #endif /* __ASM_ARM_HYPERCALL_H__ */
 /*
diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
index b37bd35..8bf45ba 100644
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -267,7 +267,7 @@ static inline int relinquish_shared_pages(struct domain *d)
 
 
 /* Arch-specific portion of memory_op hypercall. */
-long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg);
+long arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg);
 
 int steal_page(
     struct domain *d, struct page_info *page, unsigned int memflags);
diff --git a/xen/include/asm-x86/hap.h b/xen/include/asm-x86/hap.h
index a2532a4..916a35b 100644
--- a/xen/include/asm-x86/hap.h
+++ b/xen/include/asm-x86/hap.h
@@ -51,7 +51,7 @@ hap_unmap_domain_page(void *p)
 /************************************************/
 void  hap_domain_init(struct domain *d);
 int   hap_domctl(struct domain *d, xen_domctl_shadow_op_t *sc,
-                 XEN_GUEST_HANDLE(void) u_domctl);
+                 XEN_GUEST_HANDLE_PARAM(void) u_domctl);
 int   hap_enable(struct domain *d, u32 mode);
 void  hap_final_teardown(struct domain *d);
 void  hap_teardown(struct domain *d);
diff --git a/xen/include/asm-x86/hypercall.h b/xen/include/asm-x86/hypercall.h
index 9e136c3..55b5ca2 100644
--- a/xen/include/asm-x86/hypercall.h
+++ b/xen/include/asm-x86/hypercall.h
@@ -18,22 +18,22 @@
 
 extern long
 do_event_channel_op_compat(
-    XEN_GUEST_HANDLE(evtchn_op_t) uop);
+    XEN_GUEST_HANDLE_PARAM(evtchn_op_t) uop);
 
 extern long
 do_set_trap_table(
-    XEN_GUEST_HANDLE(const_trap_info_t) traps);
+    XEN_GUEST_HANDLE_PARAM(const_trap_info_t) traps);
 
 extern int
 do_mmu_update(
-    XEN_GUEST_HANDLE(mmu_update_t) ureqs,
+    XEN_GUEST_HANDLE_PARAM(mmu_update_t) ureqs,
     unsigned int count,
-    XEN_GUEST_HANDLE(uint) pdone,
+    XEN_GUEST_HANDLE_PARAM(uint) pdone,
     unsigned int foreigndom);
 
 extern long
 do_set_gdt(
-    XEN_GUEST_HANDLE(ulong) frame_list,
+    XEN_GUEST_HANDLE_PARAM(ulong) frame_list,
     unsigned int entries);
 
 extern long
@@ -60,7 +60,7 @@ do_update_descriptor(
     u64 desc);
 
 extern long
-do_mca(XEN_GUEST_HANDLE(xen_mc_t) u_xen_mc);
+do_mca(XEN_GUEST_HANDLE_PARAM(xen_mc_t) u_xen_mc);
 
 extern int
 do_update_va_mapping(
@@ -70,7 +70,7 @@ do_update_va_mapping(
 
 extern long
 do_physdev_op(
-    int cmd, XEN_GUEST_HANDLE(void) arg);
+    int cmd, XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern int
 do_update_va_mapping_otherdomain(
@@ -81,9 +81,9 @@ do_update_va_mapping_otherdomain(
 
 extern int
 do_mmuext_op(
-    XEN_GUEST_HANDLE(mmuext_op_t) uops,
+    XEN_GUEST_HANDLE_PARAM(mmuext_op_t) uops,
     unsigned int count,
-    XEN_GUEST_HANDLE(uint) pdone,
+    XEN_GUEST_HANDLE_PARAM(uint) pdone,
     unsigned int foreigndom);
 
 extern unsigned long
@@ -92,7 +92,7 @@ do_iret(
 
 extern int
 do_kexec(
-    unsigned long op, unsigned arg1, XEN_GUEST_HANDLE(void) uarg);
+    unsigned long op, unsigned arg1, XEN_GUEST_HANDLE_PARAM(void) uarg);
 
 #ifdef __x86_64__
 
@@ -110,11 +110,11 @@ do_set_segment_base(
 extern int
 compat_physdev_op(
     int cmd,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern int
 arch_compat_vcpu_op(
-    int cmd, struct vcpu *v, XEN_GUEST_HANDLE(void) arg);
+    int cmd, struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg);
 
 #else
 
diff --git a/xen/include/asm-x86/mem_event.h b/xen/include/asm-x86/mem_event.h
index 23d71c1..e17f36b 100644
--- a/xen/include/asm-x86/mem_event.h
+++ b/xen/include/asm-x86/mem_event.h
@@ -65,7 +65,7 @@ int mem_event_get_response(struct domain *d, struct mem_event_domain *med,
 struct domain *get_mem_event_op_target(uint32_t domain, int *rc);
 int do_mem_event_op(int op, uint32_t domain, void *arg);
 int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
-                     XEN_GUEST_HANDLE(void) u_domctl);
+                     XEN_GUEST_HANDLE_PARAM(void) u_domctl);
 
 #endif /* __MEM_EVENT_H__ */
 
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index 4cba276..6373b3b 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -604,10 +604,10 @@ void *do_page_walk(struct vcpu *v, unsigned long addr);
 int __sync_local_execstate(void);
 
 /* Arch-specific portion of memory_op hypercall. */
-long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg);
-long subarch_memory_op(int op, XEN_GUEST_HANDLE(void) arg);
-int compat_arch_memory_op(int op, XEN_GUEST_HANDLE(void));
-int compat_subarch_memory_op(int op, XEN_GUEST_HANDLE(void));
+long arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg);
+long subarch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg);
+int compat_arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void));
+int compat_subarch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void));
 
 int steal_page(
     struct domain *d, struct page_info *page, unsigned int memflags);
diff --git a/xen/include/asm-x86/paging.h b/xen/include/asm-x86/paging.h
index c432a97..1cd0e3f 100644
--- a/xen/include/asm-x86/paging.h
+++ b/xen/include/asm-x86/paging.h
@@ -215,7 +215,7 @@ int paging_domain_init(struct domain *d, unsigned int domcr_flags);
  * and disable ephemeral shadow modes (test mode and log-dirty mode) and
  * manipulate the log-dirty bitmap. */
 int paging_domctl(struct domain *d, xen_domctl_shadow_op_t *sc,
-                  XEN_GUEST_HANDLE(void) u_domctl);
+                  XEN_GUEST_HANDLE_PARAM(void) u_domctl);
 
 /* Call when destroying a domain */
 void paging_teardown(struct domain *d);
diff --git a/xen/include/asm-x86/processor.h b/xen/include/asm-x86/processor.h
index 7164a50..efdbddd 100644
--- a/xen/include/asm-x86/processor.h
+++ b/xen/include/asm-x86/processor.h
@@ -598,7 +598,7 @@ int rdmsr_hypervisor_regs(uint32_t idx, uint64_t *val);
 int wrmsr_hypervisor_regs(uint32_t idx, uint64_t val);
 
 void microcode_set_module(unsigned int);
-int microcode_update(XEN_GUEST_HANDLE(const_void), unsigned long len);
+int microcode_update(XEN_GUEST_HANDLE_PARAM(const_void), unsigned long len);
 int microcode_resume_cpu(int cpu);
 
 unsigned long *get_x86_gpr(struct cpu_user_regs *regs, unsigned int modrm_reg);
diff --git a/xen/include/asm-x86/shadow.h b/xen/include/asm-x86/shadow.h
index 88a8cd2..2eb6efc 100644
--- a/xen/include/asm-x86/shadow.h
+++ b/xen/include/asm-x86/shadow.h
@@ -73,7 +73,7 @@ int shadow_track_dirty_vram(struct domain *d,
  * manipulate the log-dirty bitmap. */
 int shadow_domctl(struct domain *d, 
                   xen_domctl_shadow_op_t *sc,
-                  XEN_GUEST_HANDLE(void) u_domctl);
+                  XEN_GUEST_HANDLE_PARAM(void) u_domctl);
 
 /* Call when destroying a domain */
 void shadow_teardown(struct domain *d);
diff --git a/xen/include/asm-x86/xenoprof.h b/xen/include/asm-x86/xenoprof.h
index c03f8c8..3f5ea15 100644
--- a/xen/include/asm-x86/xenoprof.h
+++ b/xen/include/asm-x86/xenoprof.h
@@ -40,9 +40,9 @@ int xenoprof_arch_init(int *num_events, char *cpu_type);
 #define xenoprof_arch_disable_virq()            nmi_disable_virq()
 #define xenoprof_arch_release_counters()        nmi_release_counters()
 
-int xenoprof_arch_counter(XEN_GUEST_HANDLE(void) arg);
-int compat_oprof_arch_counter(XEN_GUEST_HANDLE(void) arg);
-int xenoprof_arch_ibs_counter(XEN_GUEST_HANDLE(void) arg);
+int xenoprof_arch_counter(XEN_GUEST_HANDLE_PARAM(void) arg);
+int compat_oprof_arch_counter(XEN_GUEST_HANDLE_PARAM(void) arg);
+int xenoprof_arch_ibs_counter(XEN_GUEST_HANDLE_PARAM(void) arg);
 
 struct vcpu;
 struct cpu_user_regs;
diff --git a/xen/include/xen/acpi.h b/xen/include/xen/acpi.h
index d7e2f94..8f3cdca 100644
--- a/xen/include/xen/acpi.h
+++ b/xen/include/xen/acpi.h
@@ -145,8 +145,8 @@ static inline unsigned int acpi_get_cstate_limit(void) { return 0; }
 static inline void acpi_set_cstate_limit(unsigned int new_limit) { return; }
 #endif
 
-#ifdef XEN_GUEST_HANDLE
-int acpi_set_pdc_bits(u32 acpi_id, XEN_GUEST_HANDLE(uint32));
+#ifdef XEN_GUEST_HANDLE_PARAM
+int acpi_set_pdc_bits(u32 acpi_id, XEN_GUEST_HANDLE_PARAM(uint32));
 #endif
 int arch_acpi_set_pdc_bits(u32 acpi_id, u32 *, u32 mask);
 
diff --git a/xen/include/xen/hypercall.h b/xen/include/xen/hypercall.h
index 73b1598..e335037 100644
--- a/xen/include/xen/hypercall.h
+++ b/xen/include/xen/hypercall.h
@@ -29,29 +29,29 @@ do_sched_op_compat(
 extern long
 do_sched_op(
     int cmd,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern long
 do_domctl(
-    XEN_GUEST_HANDLE(xen_domctl_t) u_domctl);
+    XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl);
 
 extern long
 arch_do_domctl(
     struct xen_domctl *domctl,
-    XEN_GUEST_HANDLE(xen_domctl_t) u_domctl);
+    XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl);
 
 extern long
 do_sysctl(
-    XEN_GUEST_HANDLE(xen_sysctl_t) u_sysctl);
+    XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl);
 
 extern long
 arch_do_sysctl(
     struct xen_sysctl *sysctl,
-    XEN_GUEST_HANDLE(xen_sysctl_t) u_sysctl);
+    XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl);
 
 extern long
 do_platform_op(
-    XEN_GUEST_HANDLE(xen_platform_op_t) u_xenpf_op);
+    XEN_GUEST_HANDLE_PARAM(xen_platform_op_t) u_xenpf_op);
 
 /*
  * To allow safe resume of do_memory_op() after preemption, we need to know
@@ -64,11 +64,11 @@ do_platform_op(
 extern long
 do_memory_op(
     unsigned long cmd,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern long
 do_multicall(
-    XEN_GUEST_HANDLE(multicall_entry_t) call_list,
+    XEN_GUEST_HANDLE_PARAM(multicall_entry_t) call_list,
     unsigned int nr_calls);
 
 extern long
@@ -77,23 +77,23 @@ do_set_timer_op(
 
 extern long
 do_event_channel_op(
-    int cmd, XEN_GUEST_HANDLE(void) arg);
+    int cmd, XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern long
 do_xen_version(
     int cmd,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern long
 do_console_io(
     int cmd,
     int count,
-    XEN_GUEST_HANDLE(char) buffer);
+    XEN_GUEST_HANDLE_PARAM(char) buffer);
 
 extern long
 do_grant_table_op(
     unsigned int cmd,
-    XEN_GUEST_HANDLE(void) uop,
+    XEN_GUEST_HANDLE_PARAM(void) uop,
     unsigned int count);
 
 extern long
@@ -105,72 +105,72 @@ extern long
 do_vcpu_op(
     int cmd,
     int vcpuid,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 struct vcpu;
 extern long
 arch_do_vcpu_op(int cmd,
     struct vcpu *v,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern long
 do_nmi_op(
     unsigned int cmd,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern long
 do_hvm_op(
     unsigned long op,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern long
 do_kexec_op(
     unsigned long op,
     int arg1,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern long
 do_xsm_op(
-    XEN_GUEST_HANDLE(xsm_op_t) u_xsm_op);
+    XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_xsm_op);
 
 extern long
 do_tmem_op(
-    XEN_GUEST_HANDLE(tmem_op_t) uops);
+    XEN_GUEST_HANDLE_PARAM(tmem_op_t) uops);
 
 extern int
-do_xenoprof_op(int op, XEN_GUEST_HANDLE(void) arg);
+do_xenoprof_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg);
 
 #ifdef CONFIG_COMPAT
 
 extern int
 compat_memory_op(
     unsigned int cmd,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern int
 compat_grant_table_op(
     unsigned int cmd,
-    XEN_GUEST_HANDLE(void) uop,
+    XEN_GUEST_HANDLE_PARAM(void) uop,
     unsigned int count);
 
 extern int
 compat_vcpu_op(
     int cmd,
     int vcpuid,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern int
-compat_xenoprof_op(int op, XEN_GUEST_HANDLE(void) arg);
+compat_xenoprof_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern int
 compat_xen_version(
     int cmd,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern int
 compat_sched_op(
     int cmd,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern int
 compat_set_timer_op(
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 6f7fbf7..bd19e23 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -155,7 +155,7 @@ void iommu_crash_shutdown(void);
 void iommu_set_dom0_mapping(struct domain *d);
 void iommu_share_p2m_table(struct domain *d);
 
-int iommu_do_domctl(struct xen_domctl *, XEN_GUEST_HANDLE(xen_domctl_t));
+int iommu_do_domctl(struct xen_domctl *, XEN_GUEST_HANDLE_PARAM(xen_domctl_t));
 
 void iommu_iotlb_flush(struct domain *d, unsigned long gfn, unsigned int page_count);
 void iommu_iotlb_flush_all(struct domain *d);
diff --git a/xen/include/xen/tmem_xen.h b/xen/include/xen/tmem_xen.h
index 4a35760..2e7199a 100644
--- a/xen/include/xen/tmem_xen.h
+++ b/xen/include/xen/tmem_xen.h
@@ -448,7 +448,7 @@ static inline void tmh_tze_copy_from_pfp(void *tva, pfp_t *pfp, pagesize_t len)
 typedef XEN_GUEST_HANDLE(void) cli_mfn_t;
 typedef XEN_GUEST_HANDLE(char) cli_va_t;
 */
-typedef XEN_GUEST_HANDLE(tmem_op_t) tmem_cli_op_t;
+typedef XEN_GUEST_HANDLE_PARAM(tmem_op_t) tmem_cli_op_t;
 
 static inline int tmh_get_tmemop_from_client(tmem_op_t *op, tmem_cli_op_t uops)
 {
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index bef79df..3e4a47f 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -139,7 +139,7 @@ struct xsm_operations {
     int (*cpupool_op)(void);
     int (*sched_op)(void);
 
-    long (*__do_xsm_op) (XEN_GUEST_HANDLE(xsm_op_t) op);
+    long (*__do_xsm_op) (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op);
 
 #ifdef CONFIG_X86
     int (*shadow_control) (struct domain *d, uint32_t op);
@@ -585,7 +585,7 @@ static inline int xsm_sched_op(void)
     return xsm_call(sched_op());
 }
 
-static inline long __do_xsm_op (XEN_GUEST_HANDLE(xsm_op_t) op)
+static inline long __do_xsm_op (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op)
 {
 #ifdef XSM_ENABLE
     return xsm_ops->__do_xsm_op(op);
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index 7027ee7..5ef6529 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -365,7 +365,7 @@ static int dummy_sched_op (void)
     return 0;
 }
 
-static long dummy___do_xsm_op(XEN_GUEST_HANDLE(xsm_op_t) op)
+static long dummy___do_xsm_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) op)
 {
     return -ENOSYS;
 }
diff --git a/xen/xsm/flask/flask_op.c b/xen/xsm/flask/flask_op.c
index bd4db37..23e7d34 100644
--- a/xen/xsm/flask/flask_op.c
+++ b/xen/xsm/flask/flask_op.c
@@ -71,7 +71,7 @@ static int domain_has_security(struct domain *d, u32 perms)
                         perms, NULL);
 }
 
-static int flask_copyin_string(XEN_GUEST_HANDLE(char) u_buf, char **buf, uint32_t size)
+static int flask_copyin_string(XEN_GUEST_HANDLE_PARAM(char) u_buf, char **buf, uint32_t size)
 {
     char *tmp = xmalloc_bytes(size + 1);
     if ( !tmp )
@@ -573,7 +573,7 @@ static int flask_get_peer_sid(struct xen_flask_peersid *arg)
     return rv;
 }
 
-long do_flask_op(XEN_GUEST_HANDLE(xsm_op_t) u_flask_op)
+long do_flask_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_flask_op)
 {
     xen_flask_op_t op;
     int rv;
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 23b84f3..0fc299c 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -1553,7 +1553,7 @@ static int flask_vcpuextstate (struct domain *d, uint32_t cmd)
 }
 #endif
 
-long do_flask_op(XEN_GUEST_HANDLE(xsm_op_t) u_flask_op);
+long do_flask_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_flask_op);
 
 static struct xsm_operations flask_ops = {
     .security_domaininfo = flask_security_domaininfo,
diff --git a/xen/xsm/xsm_core.c b/xen/xsm/xsm_core.c
index 96c8669..46287cb 100644
--- a/xen/xsm/xsm_core.c
+++ b/xen/xsm/xsm_core.c
@@ -111,7 +111,7 @@ int unregister_xsm(struct xsm_operations *ops)
 
 #endif
 
-long do_xsm_op (XEN_GUEST_HANDLE(xsm_op_t) op)
+long do_xsm_op (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op)
 {
     return __do_xsm_op(op);
 }
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 11:09:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 11:09:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T48nv-0001Iz-9L; Wed, 22 Aug 2012 11:08:55 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T48nt-0001Hq-Qq
	for xen-devel@lists.xensource.com; Wed, 22 Aug 2012 11:08:54 +0000
Received: from [85.158.139.83:49007] by server-4.bemta-5.messagelabs.com id
	B4/45-12386-4CDB4305; Wed, 22 Aug 2012 11:08:52 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-5.tower-182.messagelabs.com!1345633729!29499859!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzMzNjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32216 invoked from network); 22 Aug 2012 11:08:51 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 11:08:51 -0000
X-IronPort-AV: E=Sophos;i="4.77,808,1336363200"; d="scan'208";a="205885720"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 07:08:22 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 22 Aug 2012 07:08:21 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1T48nH-0003lL-Qe;
	Wed, 22 Aug 2012 12:08:15 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Wed, 22 Aug 2012 12:08:07 +0100
Message-ID: <1345633688-31684-5-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208221159560.15568@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208221159560.15568@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: tim@xen.org, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH v4 5/6] xen: replace XEN_GUEST_HANDLE with
	XEN_GUEST_HANDLE_PARAM when appropriate
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Note: these changes don't make any difference on x86 and ia64.

Replace XEN_GUEST_HANDLE with XEN_GUEST_HANDLE_PARAM when it is used as
a hypercall argument.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/arch/arm/domain.c             |    2 +-
 xen/arch/arm/domctl.c             |    2 +-
 xen/arch/arm/hvm.c                |    2 +-
 xen/arch/arm/mm.c                 |    2 +-
 xen/arch/arm/physdev.c            |    2 +-
 xen/arch/arm/sysctl.c             |    2 +-
 xen/arch/x86/compat.c             |    2 +-
 xen/arch/x86/cpu/mcheck/mce.c     |    2 +-
 xen/arch/x86/domain.c             |    2 +-
 xen/arch/x86/domctl.c             |    2 +-
 xen/arch/x86/efi/runtime.c        |    2 +-
 xen/arch/x86/hvm/hvm.c            |   26 +++++++++---------
 xen/arch/x86/microcode.c          |    2 +-
 xen/arch/x86/mm.c                 |   14 +++++-----
 xen/arch/x86/mm/hap/hap.c         |    2 +-
 xen/arch/x86/mm/mem_event.c       |    2 +-
 xen/arch/x86/mm/paging.c          |    2 +-
 xen/arch/x86/mm/shadow/common.c   |    2 +-
 xen/arch/x86/oprofile/xenoprof.c  |    6 ++--
 xen/arch/x86/physdev.c            |    2 +-
 xen/arch/x86/platform_hypercall.c |    2 +-
 xen/arch/x86/sysctl.c             |    2 +-
 xen/arch/x86/traps.c              |    2 +-
 xen/arch/x86/x86_32/mm.c          |    2 +-
 xen/arch/x86/x86_32/traps.c       |    2 +-
 xen/arch/x86/x86_64/compat/mm.c   |   10 +++---
 xen/arch/x86/x86_64/domain.c      |    2 +-
 xen/arch/x86/x86_64/mm.c          |    2 +-
 xen/arch/x86/x86_64/traps.c       |    2 +-
 xen/common/compat/domain.c        |    2 +-
 xen/common/compat/grant_table.c   |    8 +++---
 xen/common/compat/memory.c        |    4 +-
 xen/common/domain.c               |    2 +-
 xen/common/domctl.c               |    2 +-
 xen/common/event_channel.c        |    2 +-
 xen/common/grant_table.c          |   36 +++++++++++++-------------
 xen/common/kernel.c               |    4 +-
 xen/common/kexec.c                |   16 +++++-----
 xen/common/memory.c               |    4 +-
 xen/common/multicall.c            |    2 +-
 xen/common/schedule.c             |    2 +-
 xen/common/sysctl.c               |    2 +-
 xen/common/xenoprof.c             |    8 +++---
 xen/drivers/acpi/pmstat.c         |    2 +-
 xen/drivers/char/console.c        |    6 ++--
 xen/drivers/passthrough/iommu.c   |    2 +-
 xen/include/asm-arm/hypercall.h   |    2 +-
 xen/include/asm-arm/mm.h          |    2 +-
 xen/include/asm-x86/hap.h         |    2 +-
 xen/include/asm-x86/hypercall.h   |   24 ++++++++--------
 xen/include/asm-x86/mem_event.h   |    2 +-
 xen/include/asm-x86/mm.h          |    8 +++---
 xen/include/asm-x86/paging.h      |    2 +-
 xen/include/asm-x86/processor.h   |    2 +-
 xen/include/asm-x86/shadow.h      |    2 +-
 xen/include/asm-x86/xenoprof.h    |    6 ++--
 xen/include/xen/acpi.h            |    4 +-
 xen/include/xen/hypercall.h       |   52 ++++++++++++++++++------------------
 xen/include/xen/iommu.h           |    2 +-
 xen/include/xen/tmem_xen.h        |    2 +-
 xen/include/xsm/xsm.h             |    4 +-
 xen/xsm/dummy.c                   |    2 +-
 xen/xsm/flask/flask_op.c          |    4 +-
 xen/xsm/flask/hooks.c             |    2 +-
 xen/xsm/xsm_core.c                |    2 +-
 65 files changed, 168 insertions(+), 168 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index ee58d68..07b50e2 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -515,7 +515,7 @@ void arch_dump_domain_info(struct domain *d)
 {
 }
 
-long arch_do_vcpu_op(int cmd, struct vcpu *v, XEN_GUEST_HANDLE(void) arg)
+long arch_do_vcpu_op(int cmd, struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     return -ENOSYS;
 }
diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
index 1a5f79f..cf16791 100644
--- a/xen/arch/arm/domctl.c
+++ b/xen/arch/arm/domctl.c
@@ -11,7 +11,7 @@
 #include <public/domctl.h>
 
 long arch_do_domctl(struct xen_domctl *domctl,
-                    XEN_GUEST_HANDLE(xen_domctl_t) u_domctl)
+                    XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
     return -ENOSYS;
 }
diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
index c11378d..40f519e 100644
--- a/xen/arch/arm/hvm.c
+++ b/xen/arch/arm/hvm.c
@@ -11,7 +11,7 @@
 
 #include <asm/hypercall.h>
 
-long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE(void) arg)
+long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
 
 {
     long rc = 0;
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 2400e1c..3e8b6cc 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -541,7 +541,7 @@ static int xenmem_add_to_physmap(struct domain *d,
     return xenmem_add_to_physmap_once(d, xatp);
 }
 
-long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
+long arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     int rc;
 
diff --git a/xen/arch/arm/physdev.c b/xen/arch/arm/physdev.c
index bcf4337..0801e8c 100644
--- a/xen/arch/arm/physdev.c
+++ b/xen/arch/arm/physdev.c
@@ -11,7 +11,7 @@
 #include <asm/hypercall.h>
 
 
-int do_physdev_op(int cmd, XEN_GUEST_HANDLE(void) arg)
+int do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     printk("%s %d cmd=%d: not implemented yet\n", __func__, __LINE__, cmd);
     return -ENOSYS;
diff --git a/xen/arch/arm/sysctl.c b/xen/arch/arm/sysctl.c
index e8e1c0d..a286abe 100644
--- a/xen/arch/arm/sysctl.c
+++ b/xen/arch/arm/sysctl.c
@@ -13,7 +13,7 @@
 #include <public/sysctl.h>
 
 long arch_do_sysctl(struct xen_sysctl *sysctl,
-                    XEN_GUEST_HANDLE(xen_sysctl_t) u_sysctl)
+                    XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
 {
     return -ENOSYS;
 }
diff --git a/xen/arch/x86/compat.c b/xen/arch/x86/compat.c
index a4fda06..2d05867 100644
--- a/xen/arch/x86/compat.c
+++ b/xen/arch/x86/compat.c
@@ -27,7 +27,7 @@ ret_t do_physdev_op_compat(XEN_GUEST_HANDLE(physdev_op_t) uop)
 #ifndef COMPAT
 
 /* Legacy hypercall (as of 0x00030202). */
-long do_event_channel_op_compat(XEN_GUEST_HANDLE(evtchn_op_t) uop)
+long do_event_channel_op_compat(XEN_GUEST_HANDLE_PARAM(evtchn_op_t) uop)
 {
     struct evtchn_op op;
 
diff --git a/xen/arch/x86/cpu/mcheck/mce.c b/xen/arch/x86/cpu/mcheck/mce.c
index a89df6d..0f122b3 100644
--- a/xen/arch/x86/cpu/mcheck/mce.c
+++ b/xen/arch/x86/cpu/mcheck/mce.c
@@ -1357,7 +1357,7 @@ CHECK_mcinfo_recovery;
 #endif
 
 /* Machine Check Architecture Hypercall */
-long do_mca(XEN_GUEST_HANDLE(xen_mc_t) u_xen_mc)
+long do_mca(XEN_GUEST_HANDLE_PARAM(xen_mc_t) u_xen_mc)
 {
     long ret = 0;
     struct xen_mc curop, *op = &curop;
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 5bba4b9..13ff776 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1138,7 +1138,7 @@ map_vcpu_info(struct vcpu *v, unsigned long gfn, unsigned offset)
 
 long
 arch_do_vcpu_op(
-    int cmd, struct vcpu *v, XEN_GUEST_HANDLE(void) arg)
+    int cmd, struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     long rc = 0;
 
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 135ea6e..663bfe4 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -48,7 +48,7 @@ static int gdbsx_guest_mem_io(
 
 long arch_do_domctl(
     struct xen_domctl *domctl,
-    XEN_GUEST_HANDLE(xen_domctl_t) u_domctl)
+    XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
     long ret = 0;
 
diff --git a/xen/arch/x86/efi/runtime.c b/xen/arch/x86/efi/runtime.c
index 1dbe2db..b2ff495 100644
--- a/xen/arch/x86/efi/runtime.c
+++ b/xen/arch/x86/efi/runtime.c
@@ -184,7 +184,7 @@ int efi_get_info(uint32_t idx, union xenpf_efi_info *info)
     return 0;
 }
 
-static long gwstrlen(XEN_GUEST_HANDLE(CHAR16) str)
+static long gwstrlen(XEN_GUEST_HANDLE_PARAM(CHAR16) str)
 {
     unsigned long len;
 
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 7f8a025c..e2bf831 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -3047,14 +3047,14 @@ static int grant_table_op_is_allowed(unsigned int cmd)
 }
 
 static long hvm_grant_table_op(
-    unsigned int cmd, XEN_GUEST_HANDLE(void) uop, unsigned int count)
+    unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) uop, unsigned int count)
 {
     if ( !grant_table_op_is_allowed(cmd) )
         return -ENOSYS; /* all other commands need auditing */
     return do_grant_table_op(cmd, uop, count);
 }
 
-static long hvm_memory_op(int cmd, XEN_GUEST_HANDLE(void) arg)
+static long hvm_memory_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     long rc;
 
@@ -3072,7 +3072,7 @@ static long hvm_memory_op(int cmd, XEN_GUEST_HANDLE(void) arg)
     return do_memory_op(cmd, arg);
 }
 
-static long hvm_physdev_op(int cmd, XEN_GUEST_HANDLE(void) arg)
+static long hvm_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     switch ( cmd )
     {
@@ -3088,7 +3088,7 @@ static long hvm_physdev_op(int cmd, XEN_GUEST_HANDLE(void) arg)
 }
 
 static long hvm_vcpu_op(
-    int cmd, int vcpuid, XEN_GUEST_HANDLE(void) arg)
+    int cmd, int vcpuid, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     long rc;
 
@@ -3137,7 +3137,7 @@ static hvm_hypercall_t *hvm_hypercall32_table[NR_hypercalls] = {
 #else /* defined(__x86_64__) */
 
 static long hvm_grant_table_op_compat32(unsigned int cmd,
-                                        XEN_GUEST_HANDLE(void) uop,
+                                        XEN_GUEST_HANDLE_PARAM(void) uop,
                                         unsigned int count)
 {
     if ( !grant_table_op_is_allowed(cmd) )
@@ -3145,7 +3145,7 @@ static long hvm_grant_table_op_compat32(unsigned int cmd,
     return compat_grant_table_op(cmd, uop, count);
 }
 
-static long hvm_memory_op_compat32(int cmd, XEN_GUEST_HANDLE(void) arg)
+static long hvm_memory_op_compat32(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     int rc;
 
@@ -3164,7 +3164,7 @@ static long hvm_memory_op_compat32(int cmd, XEN_GUEST_HANDLE(void) arg)
 }
 
 static long hvm_vcpu_op_compat32(
-    int cmd, int vcpuid, XEN_GUEST_HANDLE(void) arg)
+    int cmd, int vcpuid, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     long rc;
 
@@ -3188,7 +3188,7 @@ static long hvm_vcpu_op_compat32(
 }
 
 static long hvm_physdev_op_compat32(
-    int cmd, XEN_GUEST_HANDLE(void) arg)
+    int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     switch ( cmd )
     {
@@ -3360,7 +3360,7 @@ void hvm_hypercall_page_initialise(struct domain *d,
 }
 
 static int hvmop_set_pci_intx_level(
-    XEN_GUEST_HANDLE(xen_hvm_set_pci_intx_level_t) uop)
+    XEN_GUEST_HANDLE_PARAM(xen_hvm_set_pci_intx_level_t) uop)
 {
     struct xen_hvm_set_pci_intx_level op;
     struct domain *d;
@@ -3525,7 +3525,7 @@ static void hvm_s3_resume(struct domain *d)
 }
 
 static int hvmop_set_isa_irq_level(
-    XEN_GUEST_HANDLE(xen_hvm_set_isa_irq_level_t) uop)
+    XEN_GUEST_HANDLE_PARAM(xen_hvm_set_isa_irq_level_t) uop)
 {
     struct xen_hvm_set_isa_irq_level op;
     struct domain *d;
@@ -3569,7 +3569,7 @@ static int hvmop_set_isa_irq_level(
 }
 
 static int hvmop_set_pci_link_route(
-    XEN_GUEST_HANDLE(xen_hvm_set_pci_link_route_t) uop)
+    XEN_GUEST_HANDLE_PARAM(xen_hvm_set_pci_link_route_t) uop)
 {
     struct xen_hvm_set_pci_link_route op;
     struct domain *d;
@@ -3602,7 +3602,7 @@ static int hvmop_set_pci_link_route(
 }
 
 static int hvmop_inject_msi(
-    XEN_GUEST_HANDLE(xen_hvm_inject_msi_t) uop)
+    XEN_GUEST_HANDLE_PARAM(xen_hvm_inject_msi_t) uop)
 {
     struct xen_hvm_inject_msi op;
     struct domain *d;
@@ -3686,7 +3686,7 @@ static int hvm_replace_event_channel(struct vcpu *v, domid_t remote_domid,
     return 0;
 }
 
-long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE(void) arg)
+long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
 
 {
     struct domain *curr_d = current->domain;
diff --git a/xen/arch/x86/microcode.c b/xen/arch/x86/microcode.c
index bdda3f5..1477481 100644
--- a/xen/arch/x86/microcode.c
+++ b/xen/arch/x86/microcode.c
@@ -192,7 +192,7 @@ static long do_microcode_update(void *_info)
     return error;
 }
 
-int microcode_update(XEN_GUEST_HANDLE(const_void) buf, unsigned long len)
+int microcode_update(XEN_GUEST_HANDLE_PARAM(const_void) buf, unsigned long len)
 {
     int ret;
     struct microcode_info *info;
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index f5c704e..4d72700 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -2914,7 +2914,7 @@ static void put_pg_owner(struct domain *pg_owner)
 }
 
 static inline int vcpumask_to_pcpumask(
-    struct domain *d, XEN_GUEST_HANDLE(const_void) bmap, cpumask_t *pmask)
+    struct domain *d, XEN_GUEST_HANDLE_PARAM(const_void) bmap, cpumask_t *pmask)
 {
     unsigned int vcpu_id, vcpu_bias, offs;
     unsigned long vmask;
@@ -2974,9 +2974,9 @@ static inline void fixunmap_domain_page(const void *ptr)
 #endif
 
 int do_mmuext_op(
-    XEN_GUEST_HANDLE(mmuext_op_t) uops,
+    XEN_GUEST_HANDLE_PARAM(mmuext_op_t) uops,
     unsigned int count,
-    XEN_GUEST_HANDLE(uint) pdone,
+    XEN_GUEST_HANDLE_PARAM(uint) pdone,
     unsigned int foreigndom)
 {
     struct mmuext_op op;
@@ -3438,9 +3438,9 @@ int do_mmuext_op(
 }
 
 int do_mmu_update(
-    XEN_GUEST_HANDLE(mmu_update_t) ureqs,
+    XEN_GUEST_HANDLE_PARAM(mmu_update_t) ureqs,
     unsigned int count,
-    XEN_GUEST_HANDLE(uint) pdone,
+    XEN_GUEST_HANDLE_PARAM(uint) pdone,
     unsigned int foreigndom)
 {
     struct mmu_update req;
@@ -4387,7 +4387,7 @@ long set_gdt(struct vcpu *v,
 }
 
 
-long do_set_gdt(XEN_GUEST_HANDLE(ulong) frame_list, unsigned int entries)
+long do_set_gdt(XEN_GUEST_HANDLE_PARAM(ulong) frame_list, unsigned int entries)
 {
     int nr_pages = (entries + 511) / 512;
     unsigned long frames[16];
@@ -4661,7 +4661,7 @@ static int xenmem_add_to_physmap(struct domain *d,
     return xenmem_add_to_physmap_once(d, xatp);
 }
 
-long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
+long arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     int rc;
 
diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 13b4be2..67e48a3 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -690,7 +690,7 @@ void hap_teardown(struct domain *d)
 }
 
 int hap_domctl(struct domain *d, xen_domctl_shadow_op_t *sc,
-               XEN_GUEST_HANDLE(void) u_domctl)
+               XEN_GUEST_HANDLE_PARAM(void) u_domctl)
 {
     int rc, preempted = 0;
 
diff --git a/xen/arch/x86/mm/mem_event.c b/xen/arch/x86/mm/mem_event.c
index d728889..d3dac14 100644
--- a/xen/arch/x86/mm/mem_event.c
+++ b/xen/arch/x86/mm/mem_event.c
@@ -512,7 +512,7 @@ void mem_event_cleanup(struct domain *d)
 }
 
 int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
-                     XEN_GUEST_HANDLE(void) u_domctl)
+                     XEN_GUEST_HANDLE_PARAM(void) u_domctl)
 {
     int rc;
 
diff --git a/xen/arch/x86/mm/paging.c b/xen/arch/x86/mm/paging.c
index ca879f9..ea44e39 100644
--- a/xen/arch/x86/mm/paging.c
+++ b/xen/arch/x86/mm/paging.c
@@ -654,7 +654,7 @@ void paging_vcpu_init(struct vcpu *v)
 
 
 int paging_domctl(struct domain *d, xen_domctl_shadow_op_t *sc,
-                  XEN_GUEST_HANDLE(void) u_domctl)
+                  XEN_GUEST_HANDLE_PARAM(void) u_domctl)
 {
     int rc;
 
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index dc245be..bd47f03 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -3786,7 +3786,7 @@ out:
 
 int shadow_domctl(struct domain *d, 
                   xen_domctl_shadow_op_t *sc,
-                  XEN_GUEST_HANDLE(void) u_domctl)
+                  XEN_GUEST_HANDLE_PARAM(void) u_domctl)
 {
     int rc, preempted = 0;
 
diff --git a/xen/arch/x86/oprofile/xenoprof.c b/xen/arch/x86/oprofile/xenoprof.c
index 71f00ef..5d286a2 100644
--- a/xen/arch/x86/oprofile/xenoprof.c
+++ b/xen/arch/x86/oprofile/xenoprof.c
@@ -19,7 +19,7 @@
 
 #include "op_counter.h"
 
-int xenoprof_arch_counter(XEN_GUEST_HANDLE(void) arg)
+int xenoprof_arch_counter(XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct xenoprof_counter counter;
 
@@ -39,7 +39,7 @@ int xenoprof_arch_counter(XEN_GUEST_HANDLE(void) arg)
     return 0;
 }
 
-int xenoprof_arch_ibs_counter(XEN_GUEST_HANDLE(void) arg)
+int xenoprof_arch_ibs_counter(XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct xenoprof_ibs_counter ibs_counter;
 
@@ -57,7 +57,7 @@ int xenoprof_arch_ibs_counter(XEN_GUEST_HANDLE(void) arg)
 }
 
 #ifdef CONFIG_COMPAT
-int compat_oprof_arch_counter(XEN_GUEST_HANDLE(void) arg)
+int compat_oprof_arch_counter(XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct compat_oprof_counter counter;
 
diff --git a/xen/arch/x86/physdev.c b/xen/arch/x86/physdev.c
index b0458fd..b6474ef 100644
--- a/xen/arch/x86/physdev.c
+++ b/xen/arch/x86/physdev.c
@@ -255,7 +255,7 @@ int physdev_unmap_pirq(domid_t domid, int pirq)
 }
 #endif /* COMPAT */
 
-ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE(void) arg)
+ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     int irq;
     ret_t ret;
diff --git a/xen/arch/x86/platform_hypercall.c b/xen/arch/x86/platform_hypercall.c
index 88880b0..a32e0a2 100644
--- a/xen/arch/x86/platform_hypercall.c
+++ b/xen/arch/x86/platform_hypercall.c
@@ -60,7 +60,7 @@ long cpu_down_helper(void *data);
 long core_parking_helper(void *data);
 uint32_t get_cur_idle_nums(void);
 
-ret_t do_platform_op(XEN_GUEST_HANDLE(xen_platform_op_t) u_xenpf_op)
+ret_t do_platform_op(XEN_GUEST_HANDLE_PARAM(xen_platform_op_t) u_xenpf_op)
 {
     ret_t ret = 0;
     struct xen_platform_op curop, *op = &curop;
diff --git a/xen/arch/x86/sysctl.c b/xen/arch/x86/sysctl.c
index 379f071..b84dd34 100644
--- a/xen/arch/x86/sysctl.c
+++ b/xen/arch/x86/sysctl.c
@@ -58,7 +58,7 @@ long cpu_down_helper(void *data)
 }
 
 long arch_do_sysctl(
-    struct xen_sysctl *sysctl, XEN_GUEST_HANDLE(xen_sysctl_t) u_sysctl)
+    struct xen_sysctl *sysctl, XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
 {
     long ret = 0;
 
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 767be86..281d9e7 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -3700,7 +3700,7 @@ int send_guest_trap(struct domain *d, uint16_t vcpuid, unsigned int trap_nr)
 }
 
 
-long do_set_trap_table(XEN_GUEST_HANDLE(const_trap_info_t) traps)
+long do_set_trap_table(XEN_GUEST_HANDLE_PARAM(const_trap_info_t) traps)
 {
     struct trap_info cur;
     struct vcpu *curr = current;
diff --git a/xen/arch/x86/x86_32/mm.c b/xen/arch/x86/x86_32/mm.c
index 37efa3c..f6448fb 100644
--- a/xen/arch/x86/x86_32/mm.c
+++ b/xen/arch/x86/x86_32/mm.c
@@ -203,7 +203,7 @@ void __init subarch_init_memory(void)
     }
 }
 
-long subarch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
+long subarch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct xen_machphys_mfn_list xmml;
     unsigned long mfn, last_mfn;
diff --git a/xen/arch/x86/x86_32/traps.c b/xen/arch/x86/x86_32/traps.c
index 8f68808..0c7c860 100644
--- a/xen/arch/x86/x86_32/traps.c
+++ b/xen/arch/x86/x86_32/traps.c
@@ -492,7 +492,7 @@ static long unregister_guest_callback(struct callback_unregister *unreg)
 }
 
 
-long do_callback_op(int cmd, XEN_GUEST_HANDLE(const_void) arg)
+long do_callback_op(int cmd, XEN_GUEST_HANDLE_PARAM(const_void) arg)
 {
     long ret;
 
diff --git a/xen/arch/x86/x86_64/compat/mm.c b/xen/arch/x86/x86_64/compat/mm.c
index 5bcd2fd..1de93b7 100644
--- a/xen/arch/x86/x86_64/compat/mm.c
+++ b/xen/arch/x86/x86_64/compat/mm.c
@@ -5,7 +5,7 @@
 #include <asm/mem_event.h>
 #include <asm/mem_sharing.h>
 
-int compat_set_gdt(XEN_GUEST_HANDLE(uint) frame_list, unsigned int entries)
+int compat_set_gdt(XEN_GUEST_HANDLE_PARAM(uint) frame_list, unsigned int entries)
 {
     unsigned int i, nr_pages = (entries + 511) / 512;
     unsigned long frames[16];
@@ -44,7 +44,7 @@ int compat_update_descriptor(u32 pa_lo, u32 pa_hi, u32 desc_lo, u32 desc_hi)
                                 desc_lo | ((u64)desc_hi << 32));
 }
 
-int compat_arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
+int compat_arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct compat_machphys_mfn_list xmml;
     l2_pgentry_t l2e;
@@ -266,14 +266,14 @@ int compat_update_va_mapping_otherdomain(unsigned long va, u32 lo, u32 hi,
 
 DEFINE_XEN_GUEST_HANDLE(mmuext_op_compat_t);
 
-int compat_mmuext_op(XEN_GUEST_HANDLE(mmuext_op_compat_t) cmp_uops,
+int compat_mmuext_op(XEN_GUEST_HANDLE_PARAM(mmuext_op_compat_t) cmp_uops,
                      unsigned int count,
-                     XEN_GUEST_HANDLE(uint) pdone,
+                     XEN_GUEST_HANDLE_PARAM(uint) pdone,
                      unsigned int foreigndom)
 {
     unsigned int i, preempt_mask;
     int rc = 0;
-    XEN_GUEST_HANDLE(mmuext_op_t) nat_ops;
+    XEN_GUEST_HANDLE_PARAM(mmuext_op_t) nat_ops;
 
     preempt_mask = count & MMU_UPDATE_PREEMPTED;
     count ^= preempt_mask;
diff --git a/xen/arch/x86/x86_64/domain.c b/xen/arch/x86/x86_64/domain.c
index e746c89..144ca2d 100644
--- a/xen/arch/x86/x86_64/domain.c
+++ b/xen/arch/x86/x86_64/domain.c
@@ -23,7 +23,7 @@ CHECK_vcpu_get_physid;
 
 int
 arch_compat_vcpu_op(
-    int cmd, struct vcpu *v, XEN_GUEST_HANDLE(void) arg)
+    int cmd, struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     int rc = -ENOSYS;
 
diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index 635a499..17c46a1 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -1043,7 +1043,7 @@ void __init subarch_init_memory(void)
     }
 }
 
-long subarch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
+long subarch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct xen_machphys_mfn_list xmml;
     l3_pgentry_t l3e;
diff --git a/xen/arch/x86/x86_64/traps.c b/xen/arch/x86/x86_64/traps.c
index 806cf2e..6ead813 100644
--- a/xen/arch/x86/x86_64/traps.c
+++ b/xen/arch/x86/x86_64/traps.c
@@ -518,7 +518,7 @@ static long unregister_guest_callback(struct callback_unregister *unreg)
 }
 
 
-long do_callback_op(int cmd, XEN_GUEST_HANDLE(const_void) arg)
+long do_callback_op(int cmd, XEN_GUEST_HANDLE_PARAM(const_void) arg)
 {
     long ret;
 
diff --git a/xen/common/compat/domain.c b/xen/common/compat/domain.c
index 40a0287..e4c8ceb 100644
--- a/xen/common/compat/domain.c
+++ b/xen/common/compat/domain.c
@@ -15,7 +15,7 @@
 CHECK_vcpu_set_periodic_timer;
 #undef xen_vcpu_set_periodic_timer
 
-int compat_vcpu_op(int cmd, int vcpuid, XEN_GUEST_HANDLE(void) arg)
+int compat_vcpu_op(int cmd, int vcpuid, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct domain *d = current->domain;
     struct vcpu *v;
diff --git a/xen/common/compat/grant_table.c b/xen/common/compat/grant_table.c
index edd20c6..b524955 100644
--- a/xen/common/compat/grant_table.c
+++ b/xen/common/compat/grant_table.c
@@ -52,12 +52,12 @@ CHECK_gnttab_swap_grant_ref;
 #undef xen_gnttab_swap_grant_ref
 
 int compat_grant_table_op(unsigned int cmd,
-                          XEN_GUEST_HANDLE(void) cmp_uop,
+                          XEN_GUEST_HANDLE_PARAM(void) cmp_uop,
                           unsigned int count)
 {
     int rc = 0;
     unsigned int i;
-    XEN_GUEST_HANDLE(void) cnt_uop;
+    XEN_GUEST_HANDLE_PARAM(void) cnt_uop;
 
     set_xen_guest_handle(cnt_uop, NULL);
     switch ( cmd )
@@ -206,7 +206,7 @@ int compat_grant_table_op(unsigned int cmd,
             }
             if ( rc >= 0 )
             {
-                XEN_GUEST_HANDLE(gnttab_transfer_compat_t) xfer;
+                XEN_GUEST_HANDLE_PARAM(gnttab_transfer_compat_t) xfer;
 
                 xfer = guest_handle_cast(cmp_uop, gnttab_transfer_compat_t);
                 guest_handle_add_offset(xfer, i);
@@ -251,7 +251,7 @@ int compat_grant_table_op(unsigned int cmd,
             }
             if ( rc >= 0 )
             {
-                XEN_GUEST_HANDLE(gnttab_copy_compat_t) copy;
+                XEN_GUEST_HANDLE_PARAM(gnttab_copy_compat_t) copy;
 
                 copy = guest_handle_cast(cmp_uop, gnttab_copy_compat_t);
                 guest_handle_add_offset(copy, i);
diff --git a/xen/common/compat/memory.c b/xen/common/compat/memory.c
index e7257cc..996151c 100644
--- a/xen/common/compat/memory.c
+++ b/xen/common/compat/memory.c
@@ -13,7 +13,7 @@ CHECK_TYPE(domid);
 #undef compat_domid_t
 #undef xen_domid_t
 
-int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE(void) compat)
+int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) compat)
 {
     int rc, split, op = cmd & MEMOP_CMD_MASK;
     unsigned int start_extent = cmd >> MEMOP_EXTENT_SHIFT;
@@ -22,7 +22,7 @@ int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE(void) compat)
     {
         unsigned int i, end_extent = 0;
         union {
-            XEN_GUEST_HANDLE(void) hnd;
+            XEN_GUEST_HANDLE_PARAM(void) hnd;
             struct xen_memory_reservation *rsrv;
             struct xen_memory_exchange *xchg;
             struct xen_remove_from_physmap *xrfp;
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 4c5d241..d7cd135 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -804,7 +804,7 @@ void vcpu_reset(struct vcpu *v)
 }
 
 
-long do_vcpu_op(int cmd, int vcpuid, XEN_GUEST_HANDLE(void) arg)
+long do_vcpu_op(int cmd, int vcpuid, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct domain *d = current->domain;
     struct vcpu *v;
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index 7ca6b08..527c5ad 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -238,7 +238,7 @@ void domctl_lock_release(void)
     spin_unlock(&current->domain->hypercall_deadlock_mutex);
 }
 
-long do_domctl(XEN_GUEST_HANDLE(xen_domctl_t) u_domctl)
+long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
     long ret = 0;
     struct xen_domctl curop, *op = &curop;
diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index 53777f8..a80a0d1 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -970,7 +970,7 @@ out:
 }
 
 
-long do_event_channel_op(int cmd, XEN_GUEST_HANDLE(void) arg)
+long do_event_channel_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     long rc;
 
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 9961e83..d780dc6 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -771,7 +771,7 @@ __gnttab_map_grant_ref(
 
 static long
 gnttab_map_grant_ref(
-    XEN_GUEST_HANDLE(gnttab_map_grant_ref_t) uop, unsigned int count)
+    XEN_GUEST_HANDLE_PARAM(gnttab_map_grant_ref_t) uop, unsigned int count)
 {
     int i;
     struct gnttab_map_grant_ref op;
@@ -1040,7 +1040,7 @@ __gnttab_unmap_grant_ref(
 
 static long
 gnttab_unmap_grant_ref(
-    XEN_GUEST_HANDLE(gnttab_unmap_grant_ref_t) uop, unsigned int count)
+    XEN_GUEST_HANDLE_PARAM(gnttab_unmap_grant_ref_t) uop, unsigned int count)
 {
     int i, c, partial_done, done = 0;
     struct gnttab_unmap_grant_ref op;
@@ -1102,7 +1102,7 @@ __gnttab_unmap_and_replace(
 
 static long
 gnttab_unmap_and_replace(
-    XEN_GUEST_HANDLE(gnttab_unmap_and_replace_t) uop, unsigned int count)
+    XEN_GUEST_HANDLE_PARAM(gnttab_unmap_and_replace_t) uop, unsigned int count)
 {
     int i, c, partial_done, done = 0;
     struct gnttab_unmap_and_replace op;
@@ -1254,7 +1254,7 @@ active_alloc_failed:
 
 static long 
 gnttab_setup_table(
-    XEN_GUEST_HANDLE(gnttab_setup_table_t) uop, unsigned int count)
+    XEN_GUEST_HANDLE_PARAM(gnttab_setup_table_t) uop, unsigned int count)
 {
     struct gnttab_setup_table op;
     struct domain *d;
@@ -1348,7 +1348,7 @@ gnttab_setup_table(
 
 static long 
 gnttab_query_size(
-    XEN_GUEST_HANDLE(gnttab_query_size_t) uop, unsigned int count)
+    XEN_GUEST_HANDLE_PARAM(gnttab_query_size_t) uop, unsigned int count)
 {
     struct gnttab_query_size op;
     struct domain *d;
@@ -1485,7 +1485,7 @@ gnttab_prepare_for_transfer(
 
 static long
 gnttab_transfer(
-    XEN_GUEST_HANDLE(gnttab_transfer_t) uop, unsigned int count)
+    XEN_GUEST_HANDLE_PARAM(gnttab_transfer_t) uop, unsigned int count)
 {
     struct domain *d = current->domain;
     struct domain *e;
@@ -2082,7 +2082,7 @@ __gnttab_copy(
 
 static long
 gnttab_copy(
-    XEN_GUEST_HANDLE(gnttab_copy_t) uop, unsigned int count)
+    XEN_GUEST_HANDLE_PARAM(gnttab_copy_t) uop, unsigned int count)
 {
     int i;
     struct gnttab_copy op;
@@ -2101,7 +2101,7 @@ gnttab_copy(
 }
 
 static long
-gnttab_set_version(XEN_GUEST_HANDLE(gnttab_set_version_t uop))
+gnttab_set_version(XEN_GUEST_HANDLE_PARAM(gnttab_set_version_t) uop)
 {
     gnttab_set_version_t op;
     struct domain *d = current->domain;
@@ -2220,7 +2220,7 @@ out:
 }
 
 static long
-gnttab_get_status_frames(XEN_GUEST_HANDLE(gnttab_get_status_frames_t) uop,
+gnttab_get_status_frames(XEN_GUEST_HANDLE_PARAM(gnttab_get_status_frames_t) uop,
                          int count)
 {
     gnttab_get_status_frames_t op;
@@ -2289,7 +2289,7 @@ out1:
 }
 
 static long
-gnttab_get_version(XEN_GUEST_HANDLE(gnttab_get_version_t uop))
+gnttab_get_version(XEN_GUEST_HANDLE_PARAM(gnttab_get_version_t) uop)
 {
     gnttab_get_version_t op;
     struct domain *d;
@@ -2368,7 +2368,7 @@ out:
 }
 
 static long
-gnttab_swap_grant_ref(XEN_GUEST_HANDLE(gnttab_swap_grant_ref_t uop),
+gnttab_swap_grant_ref(XEN_GUEST_HANDLE_PARAM(gnttab_swap_grant_ref_t) uop,
                       unsigned int count)
 {
     int i;
@@ -2389,7 +2389,7 @@ gnttab_swap_grant_ref(XEN_GUEST_HANDLE(gnttab_swap_grant_ref_t uop),
 
 long
 do_grant_table_op(
-    unsigned int cmd, XEN_GUEST_HANDLE(void) uop, unsigned int count)
+    unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) uop, unsigned int count)
 {
     long rc;
     
@@ -2401,7 +2401,7 @@ do_grant_table_op(
     {
     case GNTTABOP_map_grant_ref:
     {
-        XEN_GUEST_HANDLE(gnttab_map_grant_ref_t) map =
+        XEN_GUEST_HANDLE_PARAM(gnttab_map_grant_ref_t) map =
             guest_handle_cast(uop, gnttab_map_grant_ref_t);
         if ( unlikely(!guest_handle_okay(map, count)) )
             goto out;
@@ -2415,7 +2415,7 @@ do_grant_table_op(
     }
     case GNTTABOP_unmap_grant_ref:
     {
-        XEN_GUEST_HANDLE(gnttab_unmap_grant_ref_t) unmap =
+        XEN_GUEST_HANDLE_PARAM(gnttab_unmap_grant_ref_t) unmap =
             guest_handle_cast(uop, gnttab_unmap_grant_ref_t);
         if ( unlikely(!guest_handle_okay(unmap, count)) )
             goto out;
@@ -2429,7 +2429,7 @@ do_grant_table_op(
     }
     case GNTTABOP_unmap_and_replace:
     {
-        XEN_GUEST_HANDLE(gnttab_unmap_and_replace_t) unmap =
+        XEN_GUEST_HANDLE_PARAM(gnttab_unmap_and_replace_t) unmap =
             guest_handle_cast(uop, gnttab_unmap_and_replace_t);
         if ( unlikely(!guest_handle_okay(unmap, count)) )
             goto out;
@@ -2453,7 +2453,7 @@ do_grant_table_op(
     }
     case GNTTABOP_transfer:
     {
-        XEN_GUEST_HANDLE(gnttab_transfer_t) transfer =
+        XEN_GUEST_HANDLE_PARAM(gnttab_transfer_t) transfer =
             guest_handle_cast(uop, gnttab_transfer_t);
         if ( unlikely(!guest_handle_okay(transfer, count)) )
             goto out;
@@ -2467,7 +2467,7 @@ do_grant_table_op(
     }
     case GNTTABOP_copy:
     {
-        XEN_GUEST_HANDLE(gnttab_copy_t) copy =
+        XEN_GUEST_HANDLE_PARAM(gnttab_copy_t) copy =
             guest_handle_cast(uop, gnttab_copy_t);
         if ( unlikely(!guest_handle_okay(copy, count)) )
             goto out;
@@ -2504,7 +2504,7 @@ do_grant_table_op(
     }
     case GNTTABOP_swap_grant_ref:
     {
-        XEN_GUEST_HANDLE(gnttab_swap_grant_ref_t) swap =
+        XEN_GUEST_HANDLE_PARAM(gnttab_swap_grant_ref_t) swap =
             guest_handle_cast(uop, gnttab_swap_grant_ref_t);
         if ( unlikely(!guest_handle_okay(swap, count)) )
             goto out;
diff --git a/xen/common/kernel.c b/xen/common/kernel.c
index c915bbc..55caff6 100644
--- a/xen/common/kernel.c
+++ b/xen/common/kernel.c
@@ -204,7 +204,7 @@ void __init do_initcalls(void)
  * Simple hypercalls.
  */
 
-DO(xen_version)(int cmd, XEN_GUEST_HANDLE(void) arg)
+DO(xen_version)(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     switch ( cmd )
     {
@@ -332,7 +332,7 @@ DO(xen_version)(int cmd, XEN_GUEST_HANDLE(void) arg)
     return -ENOSYS;
 }
 
-DO(nmi_op)(unsigned int cmd, XEN_GUEST_HANDLE(void) arg)
+DO(nmi_op)(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct xennmi_callback cb;
     long rc = 0;
diff --git a/xen/common/kexec.c b/xen/common/kexec.c
index 09a5624..03389eb 100644
--- a/xen/common/kexec.c
+++ b/xen/common/kexec.c
@@ -613,7 +613,7 @@ static int kexec_get_range_internal(xen_kexec_range_t *range)
     return ret;
 }
 
-static int kexec_get_range(XEN_GUEST_HANDLE(void) uarg)
+static int kexec_get_range(XEN_GUEST_HANDLE_PARAM(void) uarg)
 {
     xen_kexec_range_t range;
     int ret = -EINVAL;
@@ -629,7 +629,7 @@ static int kexec_get_range(XEN_GUEST_HANDLE(void) uarg)
     return ret;
 }
 
-static int kexec_get_range_compat(XEN_GUEST_HANDLE(void) uarg)
+static int kexec_get_range_compat(XEN_GUEST_HANDLE_PARAM(void) uarg)
 {
 #ifdef CONFIG_COMPAT
     xen_kexec_range_t range;
@@ -777,7 +777,7 @@ static int kexec_load_unload_internal(unsigned long op, xen_kexec_load_t *load)
     return ret;
 }
 
-static int kexec_load_unload(unsigned long op, XEN_GUEST_HANDLE(void) uarg)
+static int kexec_load_unload(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) uarg)
 {
     xen_kexec_load_t load;
 
@@ -788,7 +788,7 @@ static int kexec_load_unload(unsigned long op, XEN_GUEST_HANDLE(void) uarg)
 }
 
 static int kexec_load_unload_compat(unsigned long op,
-                                    XEN_GUEST_HANDLE(void) uarg)
+                                    XEN_GUEST_HANDLE_PARAM(void) uarg)
 {
 #ifdef CONFIG_COMPAT
     compat_kexec_load_t compat_load;
@@ -813,7 +813,7 @@ static int kexec_load_unload_compat(unsigned long op,
 #endif /* CONFIG_COMPAT */
 }
 
-static int kexec_exec(XEN_GUEST_HANDLE(void) uarg)
+static int kexec_exec(XEN_GUEST_HANDLE_PARAM(void) uarg)
 {
     xen_kexec_exec_t exec;
     xen_kexec_image_t *image;
@@ -845,7 +845,7 @@ static int kexec_exec(XEN_GUEST_HANDLE(void) uarg)
     return -EINVAL; /* never reached */
 }
 
-int do_kexec_op_internal(unsigned long op, XEN_GUEST_HANDLE(void) uarg,
+int do_kexec_op_internal(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) uarg,
                            int compat)
 {
     unsigned long flags;
@@ -886,13 +886,13 @@ int do_kexec_op_internal(unsigned long op, XEN_GUEST_HANDLE(void) uarg,
     return ret;
 }
 
-long do_kexec_op(unsigned long op, XEN_GUEST_HANDLE(void) uarg)
+long do_kexec_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) uarg)
 {
     return do_kexec_op_internal(op, uarg, 0);
 }
 
 #ifdef CONFIG_COMPAT
-int compat_kexec_op(unsigned long op, XEN_GUEST_HANDLE(void) uarg)
+int compat_kexec_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) uarg)
 {
     return do_kexec_op_internal(op, uarg, 1);
 }
diff --git a/xen/common/memory.c b/xen/common/memory.c
index 7e58cc4..a683954 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -277,7 +277,7 @@ static void decrease_reservation(struct memop_args *a)
     a->nr_done = i;
 }
 
-static long memory_exchange(XEN_GUEST_HANDLE(xen_memory_exchange_t) arg)
+static long memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t) arg)
 {
     struct xen_memory_exchange exch;
     PAGE_LIST_HEAD(in_chunk_list);
@@ -530,7 +530,7 @@ static long memory_exchange(XEN_GUEST_HANDLE(xen_memory_exchange_t) arg)
     return rc;
 }
 
-long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE(void) arg)
+long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct domain *d;
     int rc, op;
diff --git a/xen/common/multicall.c b/xen/common/multicall.c
index 6c1a9d7..5de5f8d 100644
--- a/xen/common/multicall.c
+++ b/xen/common/multicall.c
@@ -21,7 +21,7 @@ typedef long ret_t;
 
 ret_t
 do_multicall(
-    XEN_GUEST_HANDLE(multicall_entry_t) call_list, unsigned int nr_calls)
+    XEN_GUEST_HANDLE_PARAM(multicall_entry_t) call_list, unsigned int nr_calls)
 {
     struct mc_state *mcs = &current->mc_state;
     unsigned int     i;
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index 0854f55..c26eac4 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -836,7 +836,7 @@ typedef long ret_t;
 
 #endif /* !COMPAT */
 
-ret_t do_sched_op(int cmd, XEN_GUEST_HANDLE(void) arg)
+ret_t do_sched_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     ret_t ret = 0;
 
diff --git a/xen/common/sysctl.c b/xen/common/sysctl.c
index ea68278..47142f4 100644
--- a/xen/common/sysctl.c
+++ b/xen/common/sysctl.c
@@ -27,7 +27,7 @@
 #include <xsm/xsm.h>
 #include <xen/pmstat.h>
 
-long do_sysctl(XEN_GUEST_HANDLE(xen_sysctl_t) u_sysctl)
+long do_sysctl(XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
 {
     long ret = 0;
     struct xen_sysctl curop, *op = &curop;
diff --git a/xen/common/xenoprof.c b/xen/common/xenoprof.c
index e571fea..c001b38 100644
--- a/xen/common/xenoprof.c
+++ b/xen/common/xenoprof.c
@@ -404,7 +404,7 @@ static int add_active_list(domid_t domid)
     return 0;
 }
 
-static int add_passive_list(XEN_GUEST_HANDLE(void) arg)
+static int add_passive_list(XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct xenoprof_passive passive;
     struct domain *d;
@@ -585,7 +585,7 @@ void xenoprof_log_event(struct vcpu *vcpu, const struct cpu_user_regs *regs,
 
 
 
-static int xenoprof_op_init(XEN_GUEST_HANDLE(void) arg)
+static int xenoprof_op_init(XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct domain *d = current->domain;
     struct xenoprof_init xenoprof_init;
@@ -609,7 +609,7 @@ static int xenoprof_op_init(XEN_GUEST_HANDLE(void) arg)
 
 #endif /* !COMPAT */
 
-static int xenoprof_op_get_buffer(XEN_GUEST_HANDLE(void) arg)
+static int xenoprof_op_get_buffer(XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct xenoprof_get_buffer xenoprof_get_buffer;
     struct domain *d = current->domain;
@@ -660,7 +660,7 @@ static int xenoprof_op_get_buffer(XEN_GUEST_HANDLE(void) arg)
                       || (op == XENOPROF_disable_virq)  \
                       || (op == XENOPROF_get_buffer))
  
-int do_xenoprof_op(int op, XEN_GUEST_HANDLE(void) arg)
+int do_xenoprof_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     int ret = 0;
     
diff --git a/xen/drivers/acpi/pmstat.c b/xen/drivers/acpi/pmstat.c
index 698711e..f8d62f2 100644
--- a/xen/drivers/acpi/pmstat.c
+++ b/xen/drivers/acpi/pmstat.c
@@ -515,7 +515,7 @@ int do_pm_op(struct xen_sysctl_pm_op *op)
     return ret;
 }
 
-int acpi_set_pdc_bits(u32 acpi_id, XEN_GUEST_HANDLE(uint32) pdc)
+int acpi_set_pdc_bits(u32 acpi_id, XEN_GUEST_HANDLE_PARAM(uint32) pdc)
 {
     u32 bits[3];
     int ret;
diff --git a/xen/drivers/char/console.c b/xen/drivers/char/console.c
index e10bed5..b0f2334 100644
--- a/xen/drivers/char/console.c
+++ b/xen/drivers/char/console.c
@@ -182,7 +182,7 @@ static void putchar_console_ring(int c)
 
 long read_console_ring(struct xen_sysctl_readconsole *op)
 {
-    XEN_GUEST_HANDLE(char) str;
+    XEN_GUEST_HANDLE_PARAM(char) str;
     uint32_t idx, len, max, sofar, c;
 
     str   = guest_handle_cast(op->buffer, char),
@@ -320,7 +320,7 @@ static void notify_dom0_con_ring(unsigned long unused)
 static DECLARE_SOFTIRQ_TASKLET(notify_dom0_con_ring_tasklet,
                                notify_dom0_con_ring, 0);
 
-static long guest_console_write(XEN_GUEST_HANDLE(char) buffer, int count)
+static long guest_console_write(XEN_GUEST_HANDLE_PARAM(char) buffer, int count)
 {
     char kbuf[128], *kptr;
     int kcount;
@@ -358,7 +358,7 @@ static long guest_console_write(XEN_GUEST_HANDLE(char) buffer, int count)
     return 0;
 }
 
-long do_console_io(int cmd, int count, XEN_GUEST_HANDLE(char) buffer)
+long do_console_io(int cmd, int count, XEN_GUEST_HANDLE_PARAM(char) buffer)
 {
     long rc;
     unsigned int idx, len;
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 64f5fd1..396461f 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -518,7 +518,7 @@ void iommu_crash_shutdown(void)
 
 int iommu_do_domctl(
     struct xen_domctl *domctl,
-    XEN_GUEST_HANDLE(xen_domctl_t) u_domctl)
+    XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
     struct domain *d;
     u16 seg;
diff --git a/xen/include/asm-arm/hypercall.h b/xen/include/asm-arm/hypercall.h
index 454f02e..090e620 100644
--- a/xen/include/asm-arm/hypercall.h
+++ b/xen/include/asm-arm/hypercall.h
@@ -2,7 +2,7 @@
 #define __ASM_ARM_HYPERCALL_H__
 
 #include <public/domctl.h> /* for arch_do_domctl */
-int do_physdev_op(int cmd, XEN_GUEST_HANDLE(void) arg);
+int do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg);
 
 #endif /* __ASM_ARM_HYPERCALL_H__ */
 /*
diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
index b37bd35..8bf45ba 100644
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -267,7 +267,7 @@ static inline int relinquish_shared_pages(struct domain *d)
 
 
 /* Arch-specific portion of memory_op hypercall. */
-long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg);
+long arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg);
 
 int steal_page(
     struct domain *d, struct page_info *page, unsigned int memflags);
diff --git a/xen/include/asm-x86/hap.h b/xen/include/asm-x86/hap.h
index a2532a4..916a35b 100644
--- a/xen/include/asm-x86/hap.h
+++ b/xen/include/asm-x86/hap.h
@@ -51,7 +51,7 @@ hap_unmap_domain_page(void *p)
 /************************************************/
 void  hap_domain_init(struct domain *d);
 int   hap_domctl(struct domain *d, xen_domctl_shadow_op_t *sc,
-                 XEN_GUEST_HANDLE(void) u_domctl);
+                 XEN_GUEST_HANDLE_PARAM(void) u_domctl);
 int   hap_enable(struct domain *d, u32 mode);
 void  hap_final_teardown(struct domain *d);
 void  hap_teardown(struct domain *d);
diff --git a/xen/include/asm-x86/hypercall.h b/xen/include/asm-x86/hypercall.h
index 9e136c3..55b5ca2 100644
--- a/xen/include/asm-x86/hypercall.h
+++ b/xen/include/asm-x86/hypercall.h
@@ -18,22 +18,22 @@
 
 extern long
 do_event_channel_op_compat(
-    XEN_GUEST_HANDLE(evtchn_op_t) uop);
+    XEN_GUEST_HANDLE_PARAM(evtchn_op_t) uop);
 
 extern long
 do_set_trap_table(
-    XEN_GUEST_HANDLE(const_trap_info_t) traps);
+    XEN_GUEST_HANDLE_PARAM(const_trap_info_t) traps);
 
 extern int
 do_mmu_update(
-    XEN_GUEST_HANDLE(mmu_update_t) ureqs,
+    XEN_GUEST_HANDLE_PARAM(mmu_update_t) ureqs,
     unsigned int count,
-    XEN_GUEST_HANDLE(uint) pdone,
+    XEN_GUEST_HANDLE_PARAM(uint) pdone,
     unsigned int foreigndom);
 
 extern long
 do_set_gdt(
-    XEN_GUEST_HANDLE(ulong) frame_list,
+    XEN_GUEST_HANDLE_PARAM(ulong) frame_list,
     unsigned int entries);
 
 extern long
@@ -60,7 +60,7 @@ do_update_descriptor(
     u64 desc);
 
 extern long
-do_mca(XEN_GUEST_HANDLE(xen_mc_t) u_xen_mc);
+do_mca(XEN_GUEST_HANDLE_PARAM(xen_mc_t) u_xen_mc);
 
 extern int
 do_update_va_mapping(
@@ -70,7 +70,7 @@ do_update_va_mapping(
 
 extern long
 do_physdev_op(
-    int cmd, XEN_GUEST_HANDLE(void) arg);
+    int cmd, XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern int
 do_update_va_mapping_otherdomain(
@@ -81,9 +81,9 @@ do_update_va_mapping_otherdomain(
 
 extern int
 do_mmuext_op(
-    XEN_GUEST_HANDLE(mmuext_op_t) uops,
+    XEN_GUEST_HANDLE_PARAM(mmuext_op_t) uops,
     unsigned int count,
-    XEN_GUEST_HANDLE(uint) pdone,
+    XEN_GUEST_HANDLE_PARAM(uint) pdone,
     unsigned int foreigndom);
 
 extern unsigned long
@@ -92,7 +92,7 @@ do_iret(
 
 extern int
 do_kexec(
-    unsigned long op, unsigned arg1, XEN_GUEST_HANDLE(void) uarg);
+    unsigned long op, unsigned arg1, XEN_GUEST_HANDLE_PARAM(void) uarg);
 
 #ifdef __x86_64__
 
@@ -110,11 +110,11 @@ do_set_segment_base(
 extern int
 compat_physdev_op(
     int cmd,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern int
 arch_compat_vcpu_op(
-    int cmd, struct vcpu *v, XEN_GUEST_HANDLE(void) arg);
+    int cmd, struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg);
 
 #else
 
diff --git a/xen/include/asm-x86/mem_event.h b/xen/include/asm-x86/mem_event.h
index 23d71c1..e17f36b 100644
--- a/xen/include/asm-x86/mem_event.h
+++ b/xen/include/asm-x86/mem_event.h
@@ -65,7 +65,7 @@ int mem_event_get_response(struct domain *d, struct mem_event_domain *med,
 struct domain *get_mem_event_op_target(uint32_t domain, int *rc);
 int do_mem_event_op(int op, uint32_t domain, void *arg);
 int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
-                     XEN_GUEST_HANDLE(void) u_domctl);
+                     XEN_GUEST_HANDLE_PARAM(void) u_domctl);
 
 #endif /* __MEM_EVENT_H__ */
 
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index 4cba276..6373b3b 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -604,10 +604,10 @@ void *do_page_walk(struct vcpu *v, unsigned long addr);
 int __sync_local_execstate(void);
 
 /* Arch-specific portion of memory_op hypercall. */
-long arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg);
-long subarch_memory_op(int op, XEN_GUEST_HANDLE(void) arg);
-int compat_arch_memory_op(int op, XEN_GUEST_HANDLE(void));
-int compat_subarch_memory_op(int op, XEN_GUEST_HANDLE(void));
+long arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg);
+long subarch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg);
+int compat_arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void));
+int compat_subarch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void));
 
 int steal_page(
     struct domain *d, struct page_info *page, unsigned int memflags);
diff --git a/xen/include/asm-x86/paging.h b/xen/include/asm-x86/paging.h
index c432a97..1cd0e3f 100644
--- a/xen/include/asm-x86/paging.h
+++ b/xen/include/asm-x86/paging.h
@@ -215,7 +215,7 @@ int paging_domain_init(struct domain *d, unsigned int domcr_flags);
  * and disable ephemeral shadow modes (test mode and log-dirty mode) and
  * manipulate the log-dirty bitmap. */
 int paging_domctl(struct domain *d, xen_domctl_shadow_op_t *sc,
-                  XEN_GUEST_HANDLE(void) u_domctl);
+                  XEN_GUEST_HANDLE_PARAM(void) u_domctl);
 
 /* Call when destroying a domain */
 void paging_teardown(struct domain *d);
diff --git a/xen/include/asm-x86/processor.h b/xen/include/asm-x86/processor.h
index 7164a50..efdbddd 100644
--- a/xen/include/asm-x86/processor.h
+++ b/xen/include/asm-x86/processor.h
@@ -598,7 +598,7 @@ int rdmsr_hypervisor_regs(uint32_t idx, uint64_t *val);
 int wrmsr_hypervisor_regs(uint32_t idx, uint64_t val);
 
 void microcode_set_module(unsigned int);
-int microcode_update(XEN_GUEST_HANDLE(const_void), unsigned long len);
+int microcode_update(XEN_GUEST_HANDLE_PARAM(const_void), unsigned long len);
 int microcode_resume_cpu(int cpu);
 
 unsigned long *get_x86_gpr(struct cpu_user_regs *regs, unsigned int modrm_reg);
diff --git a/xen/include/asm-x86/shadow.h b/xen/include/asm-x86/shadow.h
index 88a8cd2..2eb6efc 100644
--- a/xen/include/asm-x86/shadow.h
+++ b/xen/include/asm-x86/shadow.h
@@ -73,7 +73,7 @@ int shadow_track_dirty_vram(struct domain *d,
  * manipulate the log-dirty bitmap. */
 int shadow_domctl(struct domain *d, 
                   xen_domctl_shadow_op_t *sc,
-                  XEN_GUEST_HANDLE(void) u_domctl);
+                  XEN_GUEST_HANDLE_PARAM(void) u_domctl);
 
 /* Call when destroying a domain */
 void shadow_teardown(struct domain *d);
diff --git a/xen/include/asm-x86/xenoprof.h b/xen/include/asm-x86/xenoprof.h
index c03f8c8..3f5ea15 100644
--- a/xen/include/asm-x86/xenoprof.h
+++ b/xen/include/asm-x86/xenoprof.h
@@ -40,9 +40,9 @@ int xenoprof_arch_init(int *num_events, char *cpu_type);
 #define xenoprof_arch_disable_virq()            nmi_disable_virq()
 #define xenoprof_arch_release_counters()        nmi_release_counters()
 
-int xenoprof_arch_counter(XEN_GUEST_HANDLE(void) arg);
-int compat_oprof_arch_counter(XEN_GUEST_HANDLE(void) arg);
-int xenoprof_arch_ibs_counter(XEN_GUEST_HANDLE(void) arg);
+int xenoprof_arch_counter(XEN_GUEST_HANDLE_PARAM(void) arg);
+int compat_oprof_arch_counter(XEN_GUEST_HANDLE_PARAM(void) arg);
+int xenoprof_arch_ibs_counter(XEN_GUEST_HANDLE_PARAM(void) arg);
 
 struct vcpu;
 struct cpu_user_regs;
diff --git a/xen/include/xen/acpi.h b/xen/include/xen/acpi.h
index d7e2f94..8f3cdca 100644
--- a/xen/include/xen/acpi.h
+++ b/xen/include/xen/acpi.h
@@ -145,8 +145,8 @@ static inline unsigned int acpi_get_cstate_limit(void) { return 0; }
 static inline void acpi_set_cstate_limit(unsigned int new_limit) { return; }
 #endif
 
-#ifdef XEN_GUEST_HANDLE
-int acpi_set_pdc_bits(u32 acpi_id, XEN_GUEST_HANDLE(uint32));
+#ifdef XEN_GUEST_HANDLE_PARAM
+int acpi_set_pdc_bits(u32 acpi_id, XEN_GUEST_HANDLE_PARAM(uint32));
 #endif
 int arch_acpi_set_pdc_bits(u32 acpi_id, u32 *, u32 mask);
 
diff --git a/xen/include/xen/hypercall.h b/xen/include/xen/hypercall.h
index 73b1598..e335037 100644
--- a/xen/include/xen/hypercall.h
+++ b/xen/include/xen/hypercall.h
@@ -29,29 +29,29 @@ do_sched_op_compat(
 extern long
 do_sched_op(
     int cmd,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern long
 do_domctl(
-    XEN_GUEST_HANDLE(xen_domctl_t) u_domctl);
+    XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl);
 
 extern long
 arch_do_domctl(
     struct xen_domctl *domctl,
-    XEN_GUEST_HANDLE(xen_domctl_t) u_domctl);
+    XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl);
 
 extern long
 do_sysctl(
-    XEN_GUEST_HANDLE(xen_sysctl_t) u_sysctl);
+    XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl);
 
 extern long
 arch_do_sysctl(
     struct xen_sysctl *sysctl,
-    XEN_GUEST_HANDLE(xen_sysctl_t) u_sysctl);
+    XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl);
 
 extern long
 do_platform_op(
-    XEN_GUEST_HANDLE(xen_platform_op_t) u_xenpf_op);
+    XEN_GUEST_HANDLE_PARAM(xen_platform_op_t) u_xenpf_op);
 
 /*
  * To allow safe resume of do_memory_op() after preemption, we need to know
@@ -64,11 +64,11 @@ do_platform_op(
 extern long
 do_memory_op(
     unsigned long cmd,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern long
 do_multicall(
-    XEN_GUEST_HANDLE(multicall_entry_t) call_list,
+    XEN_GUEST_HANDLE_PARAM(multicall_entry_t) call_list,
     unsigned int nr_calls);
 
 extern long
@@ -77,23 +77,23 @@ do_set_timer_op(
 
 extern long
 do_event_channel_op(
-    int cmd, XEN_GUEST_HANDLE(void) arg);
+    int cmd, XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern long
 do_xen_version(
     int cmd,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern long
 do_console_io(
     int cmd,
     int count,
-    XEN_GUEST_HANDLE(char) buffer);
+    XEN_GUEST_HANDLE_PARAM(char) buffer);
 
 extern long
 do_grant_table_op(
     unsigned int cmd,
-    XEN_GUEST_HANDLE(void) uop,
+    XEN_GUEST_HANDLE_PARAM(void) uop,
     unsigned int count);
 
 extern long
@@ -105,72 +105,72 @@ extern long
 do_vcpu_op(
     int cmd,
     int vcpuid,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 struct vcpu;
 extern long
 arch_do_vcpu_op(int cmd,
     struct vcpu *v,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern long
 do_nmi_op(
     unsigned int cmd,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern long
 do_hvm_op(
     unsigned long op,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern long
 do_kexec_op(
     unsigned long op,
     int arg1,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern long
 do_xsm_op(
-    XEN_GUEST_HANDLE(xsm_op_t) u_xsm_op);
+    XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_xsm_op);
 
 extern long
 do_tmem_op(
-    XEN_GUEST_HANDLE(tmem_op_t) uops);
+    XEN_GUEST_HANDLE_PARAM(tmem_op_t) uops);
 
 extern int
-do_xenoprof_op(int op, XEN_GUEST_HANDLE(void) arg);
+do_xenoprof_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg);
 
 #ifdef CONFIG_COMPAT
 
 extern int
 compat_memory_op(
     unsigned int cmd,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern int
 compat_grant_table_op(
     unsigned int cmd,
-    XEN_GUEST_HANDLE(void) uop,
+    XEN_GUEST_HANDLE_PARAM(void) uop,
     unsigned int count);
 
 extern int
 compat_vcpu_op(
     int cmd,
     int vcpuid,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern int
-compat_xenoprof_op(int op, XEN_GUEST_HANDLE(void) arg);
+compat_xenoprof_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern int
 compat_xen_version(
     int cmd,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern int
 compat_sched_op(
     int cmd,
-    XEN_GUEST_HANDLE(void) arg);
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 
 extern int
 compat_set_timer_op(
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 6f7fbf7..bd19e23 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -155,7 +155,7 @@ void iommu_crash_shutdown(void);
 void iommu_set_dom0_mapping(struct domain *d);
 void iommu_share_p2m_table(struct domain *d);
 
-int iommu_do_domctl(struct xen_domctl *, XEN_GUEST_HANDLE(xen_domctl_t));
+int iommu_do_domctl(struct xen_domctl *, XEN_GUEST_HANDLE_PARAM(xen_domctl_t));
 
 void iommu_iotlb_flush(struct domain *d, unsigned long gfn, unsigned int page_count);
 void iommu_iotlb_flush_all(struct domain *d);
diff --git a/xen/include/xen/tmem_xen.h b/xen/include/xen/tmem_xen.h
index 4a35760..2e7199a 100644
--- a/xen/include/xen/tmem_xen.h
+++ b/xen/include/xen/tmem_xen.h
@@ -448,7 +448,7 @@ static inline void tmh_tze_copy_from_pfp(void *tva, pfp_t *pfp, pagesize_t len)
 typedef XEN_GUEST_HANDLE(void) cli_mfn_t;
 typedef XEN_GUEST_HANDLE(char) cli_va_t;
 */
-typedef XEN_GUEST_HANDLE(tmem_op_t) tmem_cli_op_t;
+typedef XEN_GUEST_HANDLE_PARAM(tmem_op_t) tmem_cli_op_t;
 
 static inline int tmh_get_tmemop_from_client(tmem_op_t *op, tmem_cli_op_t uops)
 {
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index bef79df..3e4a47f 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -139,7 +139,7 @@ struct xsm_operations {
     int (*cpupool_op)(void);
     int (*sched_op)(void);
 
-    long (*__do_xsm_op) (XEN_GUEST_HANDLE(xsm_op_t) op);
+    long (*__do_xsm_op) (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op);
 
 #ifdef CONFIG_X86
     int (*shadow_control) (struct domain *d, uint32_t op);
@@ -585,7 +585,7 @@ static inline int xsm_sched_op(void)
     return xsm_call(sched_op());
 }
 
-static inline long __do_xsm_op (XEN_GUEST_HANDLE(xsm_op_t) op)
+static inline long __do_xsm_op (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op)
 {
 #ifdef XSM_ENABLE
     return xsm_ops->__do_xsm_op(op);
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index 7027ee7..5ef6529 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -365,7 +365,7 @@ static int dummy_sched_op (void)
     return 0;
 }
 
-static long dummy___do_xsm_op(XEN_GUEST_HANDLE(xsm_op_t) op)
+static long dummy___do_xsm_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) op)
 {
     return -ENOSYS;
 }
diff --git a/xen/xsm/flask/flask_op.c b/xen/xsm/flask/flask_op.c
index bd4db37..23e7d34 100644
--- a/xen/xsm/flask/flask_op.c
+++ b/xen/xsm/flask/flask_op.c
@@ -71,7 +71,7 @@ static int domain_has_security(struct domain *d, u32 perms)
                         perms, NULL);
 }
 
-static int flask_copyin_string(XEN_GUEST_HANDLE(char) u_buf, char **buf, uint32_t size)
+static int flask_copyin_string(XEN_GUEST_HANDLE_PARAM(char) u_buf, char **buf, uint32_t size)
 {
     char *tmp = xmalloc_bytes(size + 1);
     if ( !tmp )
@@ -573,7 +573,7 @@ static int flask_get_peer_sid(struct xen_flask_peersid *arg)
     return rv;
 }
 
-long do_flask_op(XEN_GUEST_HANDLE(xsm_op_t) u_flask_op)
+long do_flask_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_flask_op)
 {
     xen_flask_op_t op;
     int rv;
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 23b84f3..0fc299c 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -1553,7 +1553,7 @@ static int flask_vcpuextstate (struct domain *d, uint32_t cmd)
 }
 #endif
 
-long do_flask_op(XEN_GUEST_HANDLE(xsm_op_t) u_flask_op);
+long do_flask_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_flask_op);
 
 static struct xsm_operations flask_ops = {
     .security_domaininfo = flask_security_domaininfo,
diff --git a/xen/xsm/xsm_core.c b/xen/xsm/xsm_core.c
index 96c8669..46287cb 100644
--- a/xen/xsm/xsm_core.c
+++ b/xen/xsm/xsm_core.c
@@ -111,7 +111,7 @@ int unregister_xsm(struct xsm_operations *ops)
 
 #endif
 
-long do_xsm_op (XEN_GUEST_HANDLE(xsm_op_t) op)
+long do_xsm_op (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op)
 {
     return __do_xsm_op(op);
 }
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 11:12:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 11:12:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T48r4-00020X-5K; Wed, 22 Aug 2012 11:12:10 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T48r2-0001ze-Mt
	for xen-devel@lists.xensource.com; Wed, 22 Aug 2012 11:12:08 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1345633880!9702886!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTAzMzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14981 invoked from network); 22 Aug 2012 11:11:20 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 11:11:20 -0000
X-IronPort-AV: E=Sophos;i="4.77,808,1336348800"; d="scan'208";a="14123383"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 11:11:20 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 22 Aug 2012 12:11:19 +0100
Date: Wed, 22 Aug 2012 12:10:58 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
In-Reply-To: <alpine.DEB.2.02.1206291211320.27860@kaball.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1208221209563.15568@kaball.uk.xensource.com>
References: <CABs9EjnLJ4ibhFEU0niEeciqu+y9GhqzD7YPkoTg-DijZqaqXw@mail.gmail.com>
	<4FE06671020000780008A946@nat28.tlf.novell.com>
	<CABs9EjnqSSnZ5dY-bGTVVn9kq+w3rnBGiNEKWQizZFDe_AXV1w@mail.gmail.com>
	<4FE21084020000780008AE60@nat28.tlf.novell.com>
	<CABs9Ejk2Fz9VWup26X32JkCYzQTN2pke3NGUiXfjCZczuGC+tw@mail.gmail.com>
	<4FE83444020000780008BA00@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1206251212120.27860@kaball.uk.xensource.com>
	<CABs9EjkibDNKkbXz4Gq_hNuPkgYU9NZYzhcvZySa6vX-7Bp2SQ@mail.gmail.com>
	<alpine.DEB.2.02.1206261349200.27860@kaball.uk.xensource.com>
	<CABs9Ej=3JPgRexVaTSFj3A1WX_O9+USAZy4XMnHxRHWJFzia=w@mail.gmail.com>
	<alpine.DEB.2.02.1206271401520.27860@kaball.uk.xensource.com>
	<CABs9Ej=63Coc9Soi4wbbM63d-Cm=AmcDj1AsRg62dYpicoqvvQ@mail.gmail.com>
	<alpine.DEB.2.02.1206291211320.27860@kaball.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "rolu@roce.org" <rolu@roce.org>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>, Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH] qemu-xen-trad: fix msi_translate with PV
 event delivery
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 29 Jun 2012, Stefano Stabellini wrote:
> When switching from msitranslate to straight msi we need to make sure
> that we respect PV event delivery for the msi if the guest asked for it:
> 
> - completely disable MSI on the device in pt_disable_msi_translate;
> - then enable MSI again (pt_msi_setup), mapping the correct pirq to it.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> Tested-by: Rolu <rolu@roce.org>

ping

> ---
> 
> diff --git a/hw/pass-through.c b/hw/pass-through.c
> index 8581253..fc8c49f 100644
> --- a/hw/pass-through.c
> +++ b/hw/pass-through.c
> @@ -3841,21 +3841,18 @@ static int pt_msgctrl_reg_write(struct pt_dev *ptdev,
>                  PT_LOG("guest enabling MSI, disable MSI-INTx translation\n");
>                  pt_disable_msi_translate(ptdev);
>              }
> -            else
> +            /* Init physical one */
> +            PT_LOG("setup msi for dev %x\n", pd->devfn);
> +            if (pt_msi_setup(ptdev))
>              {
> -                /* Init physical one */
> -                PT_LOG("setup msi for dev %x\n", pd->devfn);
> -                if (pt_msi_setup(ptdev))
> -                {
> -		    /* We do not broadcast the error to the framework code, so
> -		     * that MSI errors are contained in MSI emulation code and
> -		     * QEMU can go on running.
> -		     * Guest MSI would be actually not working.
> -		     */
> -		    *value &= ~PCI_MSI_FLAGS_ENABLE;
> -		    PT_LOG("Warning: Can not map MSI for dev %x\n", pd->devfn);
> -		    return 0;
> -                }
> +                /* We do not broadcast the error to the framework code, so
> +                 * that MSI errors are contained in MSI emulation code and
> +                 * QEMU can go on running.
> +                 * Guest MSI would be actually not working.
> +                 */
> +                *value &= ~PCI_MSI_FLAGS_ENABLE;
> +                PT_LOG("Warning: Can not map MSI for dev %x\n", pd->devfn);
> +                return 0;
>              }
>              if (pt_msi_update(ptdev))
>              {
> diff --git a/hw/pt-msi.c b/hw/pt-msi.c
> index 70c4023..73f737d 100644
> --- a/hw/pt-msi.c
> +++ b/hw/pt-msi.c
> @@ -263,16 +263,8 @@ void pt_disable_msi_translate(struct pt_dev *dev)
>      uint8_t e_device = 0;
>      uint8_t e_intx = 0;
>  
> -    /* MSI_ENABLE bit should be disabed until the new handler is set */
> -    msi_set_enable(dev, 0);
> -
> -    e_device = PCI_SLOT(dev->dev.devfn);
> -    e_intx = pci_intx(dev);
> -
> -    if (xc_domain_unbind_pt_irq(xc_handle, domid, dev->msi->pirq,
> -                                 PT_IRQ_TYPE_MSI_TRANSLATE, 0,
> -                                 e_device, e_intx, 0))
> -        PT_LOG("Error: Unbinding pt irq for MSI-INTx failed!\n");
> +    pt_msi_disable(dev);
> +    dev->msi->flags |= MSI_FLAG_UNINIT;
>  
>      if (dev->machine_irq)
>      {
> @@ -280,8 +272,6 @@ void pt_disable_msi_translate(struct pt_dev *dev)
>                                         0, e_device, e_intx))
>              PT_LOG("Error: Rebinding of interrupt failed!\n");
>      }
> -
> -    dev->msi_trans_en = 0;
>  }
>  
>  static int pt_msix_update_one(struct pt_dev *dev, int entry_nr)
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 11:13:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 11:13:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T48sL-0002Dl-Ku; Wed, 22 Aug 2012 11:13:29 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T48sJ-0002DJ-FL
	for xen-devel@lists.xensource.com; Wed, 22 Aug 2012 11:13:27 +0000
Received: from [85.158.139.83:44524] by server-3.bemta-5.messagelabs.com id
	4E/5A-27237-6DEB4305; Wed, 22 Aug 2012 11:13:26 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-6.tower-182.messagelabs.com!1345634005!25596495!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTAzMzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32248 invoked from network); 22 Aug 2012 11:13:26 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-6.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 11:13:26 -0000
X-IronPort-AV: E=Sophos;i="4.77,808,1336348800"; d="scan'208";a="14123417"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 11:13:25 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 22 Aug 2012 12:13:25 +0100
Date: Wed, 22 Aug 2012 12:13:13 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
In-Reply-To: <alpine.DEB.2.02.1208141559360.21096@kaball.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1208221213070.15568@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208141559360.21096@kaball.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [PATCH] libxl/qemu-xen: use cache=writeback for IDE
 and cache=none for SCSI
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 14 Aug 2012, Stefano Stabellini wrote:
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

ping

> diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
> index 0c0084f..1c94e80 100644
> --- a/tools/libxl/libxl_dm.c
> +++ b/tools/libxl/libxl_dm.c
> @@ -549,10 +549,10 @@ static char ** libxl__build_device_model_args_new(libxl__gc *gc,
>              if (disks[i].is_cdrom) {
>                  if (disks[i].format == LIBXL_DISK_FORMAT_EMPTY)
>                      drive = libxl__sprintf
> -                        (gc, "if=ide,index=%d,media=cdrom", disk);
> +                        (gc, "if=ide,index=%d,media=cdrom,cache=writeback", disk);
>                  else
>                      drive = libxl__sprintf
> -                        (gc, "file=%s,if=ide,index=%d,media=cdrom,format=%s",
> +                        (gc, "file=%s,if=ide,index=%d,media=cdrom,format=%s,cache=writeback",
>                           disks[i].pdev_path, disk, format);
>              } else {
>                  if (disks[i].format == LIBXL_DISK_FORMAT_EMPTY) {
> @@ -575,11 +575,11 @@ static char ** libxl__build_device_model_args_new(libxl__gc *gc,
>                   */
>                  if (strncmp(disks[i].vdev, "sd", 2) == 0)
>                      drive = libxl__sprintf
> -                        (gc, "file=%s,if=scsi,bus=0,unit=%d,format=%s",
> +                        (gc, "file=%s,if=scsi,bus=0,unit=%d,format=%s,cache=none",
>                           disks[i].pdev_path, disk, format);
>                  else if (disk < 4)
>                      drive = libxl__sprintf
> -                        (gc, "file=%s,if=ide,index=%d,media=disk,format=%s",
> +                        (gc, "file=%s,if=ide,index=%d,media=disk,format=%s,cache=writeback",
>                           disks[i].pdev_path, disk, format);
>                  else
>                      continue; /* Do not emulate this disk */
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Subject: Re: [Xen-devel] [PATCH] libxl/qemu-xen: use cache=writeback for IDE
 and cache=none for SCSI
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 14 Aug 2012, Stefano Stabellini wrote:
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

ping

> diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
> index 0c0084f..1c94e80 100644
> --- a/tools/libxl/libxl_dm.c
> +++ b/tools/libxl/libxl_dm.c
> @@ -549,10 +549,10 @@ static char ** libxl__build_device_model_args_new(libxl__gc *gc,
>              if (disks[i].is_cdrom) {
>                  if (disks[i].format == LIBXL_DISK_FORMAT_EMPTY)
>                      drive = libxl__sprintf
> -                        (gc, "if=ide,index=%d,media=cdrom", disk);
> +                        (gc, "if=ide,index=%d,media=cdrom,cache=writeback", disk);
>                  else
>                      drive = libxl__sprintf
> -                        (gc, "file=%s,if=ide,index=%d,media=cdrom,format=%s",
> +                        (gc, "file=%s,if=ide,index=%d,media=cdrom,format=%s,cache=writeback",
>                           disks[i].pdev_path, disk, format);
>              } else {
>                  if (disks[i].format == LIBXL_DISK_FORMAT_EMPTY) {
> @@ -575,11 +575,11 @@ static char ** libxl__build_device_model_args_new(libxl__gc *gc,
>                   */
>                  if (strncmp(disks[i].vdev, "sd", 2) == 0)
>                      drive = libxl__sprintf
> -                        (gc, "file=%s,if=scsi,bus=0,unit=%d,format=%s",
> +                        (gc, "file=%s,if=scsi,bus=0,unit=%d,format=%s,cache=none",
>                           disks[i].pdev_path, disk, format);
>                  else if (disk < 4)
>                      drive = libxl__sprintf
> -                        (gc, "file=%s,if=ide,index=%d,media=disk,format=%s",
> +                        (gc, "file=%s,if=ide,index=%d,media=disk,format=%s,cache=writeback",
>                           disks[i].pdev_path, disk, format);
>                  else
>                      continue; /* Do not emulate this disk */
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 11:20:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 11:20:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T48zH-0002ot-Kl; Wed, 22 Aug 2012 11:20:39 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T48zF-0002on-Ra
	for xen-devel@lists.xensource.com; Wed, 22 Aug 2012 11:20:38 +0000
Received: from [85.158.143.35:32501] by server-2.bemta-4.messagelabs.com id
	CA/DA-21239-580C4305; Wed, 22 Aug 2012 11:20:37 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1345634431!5244696!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTAzMzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32733 invoked from network); 22 Aug 2012 11:20:33 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 11:20:33 -0000
X-IronPort-AV: E=Sophos;i="4.77,808,1336348800"; d="scan'208";a="14123572"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 11:20:31 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 22 Aug 2012 12:20:31 +0100
Date: Wed, 22 Aug 2012 12:20:09 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
In-Reply-To: <alpine.DEB.2.02.1207181851010.23783@kaball.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1208221217290.15568@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1206221644370.27860@kaball.uk.xensource.com>
	<1340381685-22529-1-git-send-email-stefano.stabellini@eu.citrix.com>
	<alpine.DEB.2.02.1206221722040.27860@kaball.uk.xensource.com>
	<20120709141915.GB9580@phenom.dumpdata.com>
	<alpine.DEB.2.02.1207131821250.23783@kaball.uk.xensource.com>
	<20120716151441.GD552@phenom.dumpdata.com>
	<alpine.DEB.2.02.1207181851010.23783@kaball.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [PATCH] xen/events: fix unmask_evtchn for PV on HVM
 guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Konrad,
I cannot see this patch anywhere in your trees. Did I miss it?
Or maybe it just fell through the cracks?
Let me know if you want me to do anything more.
Cheers,
Stefano

On Wed, 18 Jul 2012, Stefano Stabellini wrote:
> xen/events: fix unmask_evtchn for PV on HVM guests
> 
> When unmask_evtchn is called, if we already have an event pending, we
> just set evtchn_pending_sel waiting for local_irq_enable to be called.
> That is because PV guests set the irq_enable pvops to
> xen_irq_enable_direct in xen_setup_vcpu_info_placement:
> xen_irq_enable_direct is implemented in assembly in
> arch/x86/xen/xen-asm.S and calls xen_force_evtchn_callback if
> XEN_vcpu_info_pending is set.
> 
> However, HVM guests (and ARM guests) either do not change the
> irq_enable pvop or do not have one at all, so evtchn_unmask cannot
> work properly for them.
> 
> Considering that having the pending_irq bit set when unmask_evtchn is
> called is not very common, and it is simpler to keep the
> native_irq_enable implementation for HVM guests (and ARM guests), the
> best thing to do is just use the EVTCHNOP_unmask hypercall (Xen
> re-injects pending events in response).
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> ---
>  drivers/xen/events.c |   17 ++++++++++++++---
>  1 files changed, 14 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/xen/events.c b/drivers/xen/events.c
> index 0a8a17c..d75cc39 100644
> --- a/drivers/xen/events.c
> +++ b/drivers/xen/events.c
> @@ -373,11 +373,22 @@ static void unmask_evtchn(int port)
>  {
>  	struct shared_info *s = HYPERVISOR_shared_info;
>  	unsigned int cpu = get_cpu();
> +	int do_hypercall = 0, evtchn_pending = 0;
>  
>  	BUG_ON(!irqs_disabled());
>  
> -	/* Slow path (hypercall) if this is a non-local port. */
> -	if (unlikely(cpu != cpu_from_evtchn(port))) {
> +	if (unlikely((cpu != cpu_from_evtchn(port))))
> +		do_hypercall = 1;
> +	else
> +		evtchn_pending = sync_test_bit(port, &s->evtchn_pending[0]);
> +
> +	if (unlikely(evtchn_pending && xen_hvm_domain()))
> +		do_hypercall = 1;
> +
> +	/* Slow path (hypercall) if this is a non-local port or if this is
> +	 * an hvm domain and an event is pending (hvm domains don't have
> +	 * their own implementation of irq_enable). */
> +	if (do_hypercall) {
>  		struct evtchn_unmask unmask = { .port = port };
>  		(void)HYPERVISOR_event_channel_op(EVTCHNOP_unmask, &unmask);
>  	} else {
> @@ -390,7 +401,7 @@ static void unmask_evtchn(int port)
>  		 * 'hw_resend_irq'. Just like a real IO-APIC we 'lose
>  		 * the interrupt edge' if the channel is masked.
>  		 */
> -		if (sync_test_bit(port, &s->evtchn_pending[0]) &&
> +		if (evtchn_pending &&
>  		    !sync_test_and_set_bit(port / BITS_PER_LONG,
>  					   &vcpu_info->evtchn_pending_sel))
>  			vcpu_info->evtchn_upcall_pending = 1;
> -- 
> 1.7.2.5
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 11:27:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 11:27:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T495y-00032O-GW; Wed, 22 Aug 2012 11:27:34 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <conor.winchcombe@sap.com>) id 1T495w-00032I-Ly
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 11:27:33 +0000
X-Env-Sender: conor.winchcombe@sap.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1345634845!10363984!1
X-Originating-IP: [155.56.66.98]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU1LjU2LjY2Ljk4ID0+IDQ2NDE1Nw==\n,
	received_headers: No Received headers
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9860 invoked from network); 22 Aug 2012 11:27:26 -0000
Received: from smtpgw03.sap-ag.de (HELO smtpgw.sap-ag.de) (155.56.66.98)
	by server-4.tower-27.messagelabs.com with AES128-SHA encrypted SMTP;
	22 Aug 2012 11:27:26 -0000
From: "conor.winchcombe@sap.com" <conor.winchcombe@sap.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Date: Wed, 22 Aug 2012 13:27:23 +0200
Thread-Topic: Failure to boot Xen 4.1.2 kernel 2.6.32.40 with Ubuntu 10.04.4
Thread-Index: Ac2AVo2XdfeUuaSmQw+wptXOAI1/OgAAmX9Q
Message-ID: <3ED5771B034E314C8AC54D893482B5F51A783DA1F5@DEWDFECCR08.wdf.sap.corp>
Accept-Language: en-US, de-DE
Content-Language: en-US
X-MS-Has-Attach: yes
X-MS-TNEF-Correlator: 
acceptlanguage: en-US, de-DE
Content-Type: multipart/mixed;
	boundary="_004_3ED5771B034E314C8AC54D893482B5F51A783DA1F5DEWDFECCR08wd_"
MIME-Version: 1.0
Cc: "Khalid, Omer" <omer.khalid@sap.com>
Subject: [Xen-devel] Failure to boot Xen 4.1.2 kernel 2.6.32.40 with Ubuntu
	10.04.4
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--_004_3ED5771B034E314C8AC54D893482B5F51A783DA1F5DEWDFECCR08wd_
Content-Type: multipart/alternative;
	boundary="_000_3ED5771B034E314C8AC54D893482B5F51A783DA1F5DEWDFECCR08wd_"

--_000_3ED5771B034E314C8AC54D893482B5F51A783DA1F5DEWDFECCR08wd_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit

After successful creation of the Xen kernel and entry into grub, whenever I
try and boot into the Xen kernel I receive the following error:

Gave up waiting for root device. Common problems:
   - Boot args (cat /proc/cmdline)
                - Check rootdelay= (did the system wait long enough?)
                - Check root= (did the system wait for the right device?)
   - Missing modules (cat /proc/modules; ls /dev)
ALERT! /dev/sda6 does not exist. Dropping to a shell!

Here is the entry in /boot/grub/menu.lst
---------------------------------------------------------------------------------
title           Xen 4.1.2 / Ubuntu 10.04.4 kernel 2.6.32.40 (root=sda6)
uuid            8edf0e1b-5f9c-4ca0-8f88-77d35af87093
#root           (hd0,1)
kernel          /xen-4.1.2.gz dom0_mem=4096M,max:4096M loglvl=all guest_loglvl=all
module          /vmlinuz-2.6.32.40 dummy=dummy root=/dev/sda6 ro console=tty0 nomodeset rootdelay=50
module          /initrd.img-2.6.32.40
---------------------------------------------------------------------------------
I have included the results of the boot info script in the attached file;
please let me know if you would prefer I placed the full text of that in an
email.


Conor Winchcombe
SAP Research Belfast
SAP (UK) Limited   I   The Concourse   I   Queen's Road   I   Queen's Island   I   Belfast BT3 9DT

conor.winchcombe@sap.com   I   www.sap.com/research

--------------------------------------------------------------------------------------------------------------------------
This communication contains information which is confidential and may also
be privileged. It is for the exclusive use of the addressee. If you are not
the addressee please contact us immediately and also delete the
communication from your computer. Steps have been taken to ensure this
e-mail is free from computer viruses but the recipient is responsible for
ensuring that it is actually virus free before opening it or any
attachments. Any views and/or opinions expressed in this e-mail are of the
author only and do not represent the views of SAP.

SAP (UK) Limited, Registered in England No. 2152073. Registered Office:
Clockhouse Place, Bedfont Road, Feltham, Middlesex, TW14 8HD
--------------------------------------------------------------------------------------------------------------------------


--_000_3ED5771B034E314C8AC54D893482B5F51A783DA1F5DEWDFECCR08wd_--

--_004_3ED5771B034E314C8AC54D893482B5F51A783DA1F5DEWDFECCR08wd_
Content-Type: text/plain; name="RESULTS.txt"
Content-Description: RESULTS.txt
Content-Disposition: attachment; filename="RESULTS.txt"; size=37479;
	creation-date="Wed, 22 Aug 2012 11:11:58 GMT";
	modification-date="Wed, 22 Aug 2012 10:47:24 GMT"
Content-Transfer-Encoding: base64

PT09PT09PT09PT09PT09PT09PT09PT09PT09PT0gQm9vdCBJbmZvIFN1bW1hcnk6ID09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PQ0KDQogPT4gR3J1YjAuOTcgaXMgaW5zdGFsbGVkIGluIHRo
ZSBNQlIgb2YgL2Rldi9zZGEgYW5kIGxvb2tzIG9uIHRoZSBzYW1lIGRyaXZlIA0KICAgIGluIHBh
cnRpdGlvbiAjMiBmb3IgL2dydWIvc3RhZ2UyIGFuZCAvZ3J1Yi9tZW51LmxzdC4NCg0Kc2RhMTog
X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fX19fX19fX19fX19fXw0KDQogICAgRmlsZSBzeXN0ZW06ICAgICAgIGV4dDINCiAgICBCb290
IHNlY3RvciB0eXBlOiAgR3J1Yg0KICAgIEJvb3Qgc2VjdG9yIGluZm86ICBHcnViMC45NyBpcyBp
bnN0YWxsZWQgaW4gdGhlIGJvb3Qgc2VjdG9yIG9mIHNkYTEgYW5kIA0KICAgICAgICAgICAgICAg
ICAgICAgICBsb29rcyBhdCBzZWN0b3IgNDQ2MjY4OCBvZiB0aGUgc2FtZSBoYXJkIGRyaXZlIGZv
ciB0aGUgDQogICAgICAgICAgICAgICAgICAgICAgIHN0YWdlMiBmaWxlLiBBIHN0YWdlMiBmaWxl
IGlzIGF0IHRoaXMgbG9jYXRpb24gb24gDQogICAgICAgICAgICAgICAgICAgICAgIC9kZXYvc2Rh
LiBTdGFnZTIgbG9va3Mgb24gcGFydGl0aW9uICMyIGZvciANCiAgICAgICAgICAgICAgICAgICAg
ICAgL2dydWIvbWVudS5sc3QuDQogICAgT3BlcmF0aW5nIFN5c3RlbTogIA0KICAgIEJvb3QgZmls
ZXMvZGlyczogICANCg0Kc2RhMjogX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fXw0KDQogICAgRmlsZSBzeXN0ZW06
ICAgICAgIGV4dDQNCiAgICBCb290IHNlY3RvciB0eXBlOiAgR3J1Yg0KICAgIEJvb3Qgc2VjdG9y
IGluZm86ICBHcnViMC45NyBpcyBpbnN0YWxsZWQgaW4gdGhlIGJvb3Qgc2VjdG9yIG9mIHNkYTIg
YW5kIA0KICAgICAgICAgICAgICAgICAgICAgICBsb29rcyBhdCBzZWN0b3IgNDQ2MjY4OCBvZiB0
aGUgc2FtZSBoYXJkIGRyaXZlIGZvciB0aGUgDQogICAgICAgICAgICAgICAgICAgICAgIHN0YWdl
MiBmaWxlLiBBIHN0YWdlMiBmaWxlIGlzIGF0IHRoaXMgbG9jYXRpb24gb24gDQogICAgICAgICAg
ICAgICAgICAgICAgIC9kZXYvc2RhLiBTdGFnZTIgbG9va3Mgb24gcGFydGl0aW9uICMyIGZvciAN
CiAgICAgICAgICAgICAgICAgICAgICAgL2dydWIvbWVudS5sc3QuDQogICAgT3BlcmF0aW5nIFN5
c3RlbTogIA0KICAgIEJvb3QgZmlsZXMvZGlyczogICAvZ3J1Yi9tZW51LmxzdCAvZ3J1Yi9ncnVi
LmNmZyAvZ3J1Yi9jb3JlLmltZw0KDQpzZGEzOiBfX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fDQoNCiAgICBGaWxl
IHN5c3RlbTogICAgICAgRXh0ZW5kZWQgUGFydGl0aW9uDQogICAgQm9vdCBzZWN0b3IgdHlwZTog
IC0NCiAgICBCb290IHNlY3RvciBpbmZvOiAgDQoNCnNkYTU6IF9fX19fX19fX19fX19fX19fX19f
X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX18NCg0K
ICAgIEZpbGUgc3lzdGVtOiAgICAgICBzd2FwDQogICAgQm9vdCBzZWN0b3IgdHlwZTogIC0NCiAg
ICBCb290IHNlY3RvciBpbmZvOiAgDQoNCnNkYTY6IF9fX19fX19fX19fX19fX19fX19fX19fX19f
X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX18NCg0KICAgIEZp
bGUgc3lzdGVtOiAgICAgICBleHQ0DQogICAgQm9vdCBzZWN0b3IgdHlwZTogIC0NCiAgICBCb290
IHNlY3RvciBpbmZvOiAgDQogICAgT3BlcmF0aW5nIFN5c3RlbTogIFVidW50dSAxMC4wNC40IExU
Uw0KICAgIEJvb3QgZmlsZXMvZGlyczogICAvYm9vdC9ncnViL21lbnUubHN0IC9ib290L2dydWIv
Z3J1Yi5jZmcgL2V0Yy9mc3RhYiANCiAgICAgICAgICAgICAgICAgICAgICAgL2Jvb3QvZ3J1Yi9j
b3JlLmltZw0KDQpzZGE3OiBfX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fDQoNCiAgICBGaWxlIHN5c3RlbTogICAg
ICAgZXh0NA0KICAgIEJvb3Qgc2VjdG9yIHR5cGU6ICAtDQogICAgQm9vdCBzZWN0b3IgaW5mbzog
IA0KICAgIE9wZXJhdGluZyBTeXN0ZW06ICBVYnVudHUgMTAuMDQuNCBMVFMNCiAgICBCb290IGZp
bGVzL2RpcnM6ICAgL2Jvb3QvZ3J1Yi9ncnViLmNmZyAvZXRjL2ZzdGFiIC9ib290L2dydWIvY29y
ZS5pbWcNCg0Kc2RhODogX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fXw0KDQogICAgRmlsZSBzeXN0ZW06ICAgICAg
IGV4dDQNCiAgICBCb290IHNlY3RvciB0eXBlOiAgLQ0KICAgIEJvb3Qgc2VjdG9yIGluZm86ICAN
CiAgICBPcGVyYXRpbmcgU3lzdGVtOiAgDQogICAgQm9vdCBmaWxlcy9kaXJzOiAgIA0KDQpzZGE5
OiBfX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fX19fX19fX19fX19fX19fDQoNCiAgICBGaWxlIHN5c3RlbTogICAgICAgDQogICAgQm9vdCBz
ZWN0b3IgdHlwZTogIC0NCiAgICBCb290IHNlY3RvciBpbmZvOiAgDQogICAgTW91bnRpbmcgZmFp
bGVkOg0KbW91bnQ6IHVua25vd24gZmlsZXN5c3RlbSB0eXBlICcnDQoNCj09PT09PT09PT09PT09
PT09PT09PT09PT09PSBEcml2ZS9QYXJ0aXRpb24gSW5mbzogPT09PT09PT09PT09PT09PT09PT09
PT09PT09PT0NCg0KRHJpdmU6IHNkYSBfX19fX19fX19fX19fX19fX19fIF9fX19fX19fX19fX19f
X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fDQoNCkRpc2sgL2Rldi9zZGE6
IDE3ODguNiBHQiwgMTc4ODYxODk5Nzc2MCBieXRlcw0KMjU1IGhlYWRzLCA2MyBzZWN0b3JzL3Ry
YWNrLCAyMTc0NTMgY3lsaW5kZXJzLCB0b3RhbCAzNDkzMzk2NDgwIHNlY3RvcnMNClVuaXRzID0g
c2VjdG9ycyBvZiAxICogNTEyID0gNTEyIGJ5dGVzDQpTZWN0b3Igc2l6ZSAobG9naWNhbC9waHlz
aWNhbCk6IDUxMiBieXRlcyAvIDUxMiBieXRlcw0KDQpQYXJ0aXRpb24gIEJvb3QgICAgICAgICBT
dGFydCAgICAgICAgICAgRW5kICAgICAgICAgIFNpemUgIElkIFN5c3RlbQ0KDQovZGV2L3NkYTEg
ICAgICAgICAgICAgICAyLDA0OCAgICAgICAgIDQsMDk1ICAgICAgICAgMiwwNDggIDgzIExpbnV4
DQovZGV2L3NkYTIgICAgKiAgICAgICAgICA0LDA5NiAgICAxOSw1MzUsODcxICAgIDE5LDUzMSw3
NzYgIDgzIExpbnV4DQovZGV2L3NkYTMgICAgICAgICAgMTksNTM3LDkxOCAzLDQ5MywzOTQsNDMx
IDMsNDczLDg1Niw1MTQgICA1IEV4dGVuZGVkDQovZGV2L3NkYTUgICAgICAgICAgMTksNTM3LDky
MCAgICAzOSwwNjcsNjQ3ICAgIDE5LDUyOSw3MjggIDgyIExpbnV4IHN3YXAgLyBTb2xhcmlzDQov
ZGV2L3NkYTYgICAgICAgICAgMzksMDY5LDY5NiAgIDEzNiw3MjQsNDc5ICAgIDk3LDY1NCw3ODQg
IDgzIExpbnV4DQovZGV2L3NkYTcgICAgICAgICAxMzYsNzI2LDUyOCAgIDIzNCwzODEsMzExICAg
IDk3LDY1NCw3ODQgIDgzIExpbnV4DQovZGV2L3NkYTggICAgICAgICAyMzQsMzgzLDM2MCAgIDMz
MiwwMzgsMTQzICAgIDk3LDY1NCw3ODQgIDgzIExpbnV4DQovZGV2L3NkYTkgICAgICAgICAzMzIs
MDQwLDE5MiAzLDQ5MywzOTQsNDMxIDMsMTYxLDM1NCwyNDAgIDgzIExpbnV4DQoNCg0KYmxraWQg
LWMgL2Rldi9udWxsOiBfX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fX19fX19fX19fX19fX18NCg0KL2Rldi9zZGExOiBVVUlEPSIxZGQwMTJiYS0wNGU4LTRjODkt
YmQyNS0wZTlmODllOTkxZWIiIFRZUEU9ImV4dDIiIA0KL2Rldi9zZGEyOiBVVUlEPSI4ZWRmMGUx
Yi01ZjljLTRjYTAtOGY4OC03N2QzNWFmODcwOTMiIFRZUEU9ImV4dDQiIA0KL2Rldi9zZGE1OiBV
VUlEPSJiNWU1YzQwZS0xOTNjLTRjNmQtOTA2OC00NWNjMDMzYjY2YTkiIFRZUEU9InN3YXAiIA0K
L2Rldi9zZGE2OiBVVUlEPSIyNjZlNzFhZi1lMTQ1LTQ5NWItYjM4Zi0yZGExZjQ0NDg4NWQiIFRZ
UEU9ImV4dDQiIA0KL2Rldi9zZGE3OiBVVUlEPSI1MmYzMmU1ZC05YjQzLTQ5NzItYmIyZS1lOTcx
MzNkZDJjODAiIFRZUEU9ImV4dDQiIA0KL2Rldi9zZGE4OiBVVUlEPSIwMTVhNDBkZS04Zjk2LTRj
MWItOGZiMS0xYTIzNTc1MDU0YTYiIFRZUEU9ImV4dDQiIA0KDQo9PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09ICJtb3VudCIgb3V0cHV0OiA9PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09DQoNCi9kZXYvc2RhNiBvbiAvIHR5cGUgZXh0NCAocncsZXJyb3JzPXJlbW91bnQtcm8pDQpw
cm9jIG9uIC9wcm9jIHR5cGUgcHJvYyAocncsbm9leGVjLG5vc3VpZCxub2RldikNCm5vbmUgb24g
L3N5cyB0eXBlIHN5c2ZzIChydyxub2V4ZWMsbm9zdWlkLG5vZGV2KQ0Kbm9uZSBvbiAvc3lzL2Zz
L2Z1c2UvY29ubmVjdGlvbnMgdHlwZSBmdXNlY3RsIChydykNCm5vbmUgb24gL3N5cy9rZXJuZWwv
ZGVidWcgdHlwZSBkZWJ1Z2ZzIChydykNCm5vbmUgb24gL3N5cy9rZXJuZWwvc2VjdXJpdHkgdHlw
ZSBzZWN1cml0eWZzIChydykNCm5vbmUgb24gL2RldiB0eXBlIGRldnRtcGZzIChydyxtb2RlPTA3
NTUpDQpub25lIG9uIC9kZXYvcHRzIHR5cGUgZGV2cHRzIChydyxub2V4ZWMsbm9zdWlkLGdpZD01
LG1vZGU9MDYyMCkNCm5vbmUgb24gL2Rldi9zaG0gdHlwZSB0bXBmcyAocncsbm9zdWlkLG5vZGV2
KQ0Kbm9uZSBvbiAvdmFyL3J1biB0eXBlIHRtcGZzIChydyxub3N1aWQsbW9kZT0wNzU1KQ0Kbm9u
ZSBvbiAvdmFyL2xvY2sgdHlwZSB0bXBmcyAocncsbm9leGVjLG5vc3VpZCxub2RldikNCm5vbmUg
b24gL2xpYi9pbml0L3J3IHR5cGUgdG1wZnMgKHJ3LG5vc3VpZCxtb2RlPTA3NTUpDQovZGV2L3Nk
YTIgb24gL2Jvb3QgdHlwZSBleHQ0IChydykNCmJpbmZtdF9taXNjIG9uIC9wcm9jL3N5cy9mcy9i
aW5mbXRfbWlzYyB0eXBlIGJpbmZtdF9taXNjIChydyxub2V4ZWMsbm9zdWlkLG5vZGV2KQ0KZ3Zm
cy1mdXNlLWRhZW1vbiBvbiAvaG9tZS9zYXAvLmd2ZnMgdHlwZSBmdXNlLmd2ZnMtZnVzZS1kYWVt
b24gKHJ3LG5vc3VpZCxub2Rldix1c2VyPXNhcCkNCm5hcy0xZzovZXhwb3J0L3V0aWxzL3NjcmF0
Y2ggb24gL3NhcG1udC9zY3JhdGNoIHR5cGUgbmZzIChydyxhZGRyPTEwLjU1LjE2OC4xNTApDQpu
YXMtMWc6L2V4cG9ydC92aXJ0dWFsX21hY2hpbmVzIG9uIC9zYXBtbnQvdmlydHVhbF9tYWNoaW5l
cyB0eXBlIG5mcyAocncsYWRkcj0xMC41NS4xNjguMTUwKQ0KDQoNCj09PT09PT09PT09PT09PT09
PT09PT09PT09PT09IHNkYTIvZ3J1Yi9tZW51LmxzdDogPT09PT09PT09PT09PT09PT09PT09PT09
PT09PT0NCg0KIyBtZW51LmxzdCAtIFNlZTogZ3J1Yig4KSwgaW5mbyBncnViLCB1cGRhdGUtZ3J1
Yig4KQ0KIyAgICAgICAgICAgIGdydWItaW5zdGFsbCg4KSwgZ3J1Yi1mbG9wcHkoOCksDQojICAg
ICAgICAgICAgZ3J1Yi1tZDUtY3J5cHQsIC91c3Ivc2hhcmUvZG9jL2dydWINCiMgICAgICAgICAg
ICBhbmQgL3Vzci9zaGFyZS9kb2MvZ3J1Yi1sZWdhY3ktZG9jLy4NCg0KIyMgZGVmYXVsdCBudW0N
CiMgU2V0IHRoZSBkZWZhdWx0IGVudHJ5IHRvIHRoZSBlbnRyeSBudW1iZXIgTlVNLiBOdW1iZXJp
bmcgc3RhcnRzIGZyb20gMCwgYW5kDQojIHRoZSBlbnRyeSBudW1iZXIgMCBpcyB0aGUgZGVmYXVs
dCBpZiB0aGUgY29tbWFuZCBpcyBub3QgdXNlZC4NCiMNCiMgWW91IGNhbiBzcGVjaWZ5ICdzYXZl
ZCcgaW5zdGVhZCBvZiBhIG51bWJlci4gSW4gdGhpcyBjYXNlLCB0aGUgZGVmYXVsdCBlbnRyeQ0K
IyBpcyB0aGUgZW50cnkgc2F2ZWQgd2l0aCB0aGUgY29tbWFuZCAnc2F2ZWRlZmF1bHQnLg0KIyBX
QVJOSU5HOiBJZiB5b3UgYXJlIHVzaW5nIGRtcmFpZCBkbyBub3QgdXNlICdzYXZlZGVmYXVsdCcg
b3IgeW91cg0KIyBhcnJheSB3aWxsIGRlc3luYyBhbmQgd2lsbCBub3QgbGV0IHlvdSBib290IHlv
dXIgc3lzdGVtLg0KZGVmYXVsdAkJMA0KZmFsbGJhY2sJMg0KIyMgdGltZW91dCBzZWMNCiMgU2V0
IGEgdGltZW91dCwgaW4gU0VDIHNlY29uZHMsIGJlZm9yZSBhdXRvbWF0aWNhbGx5IGJvb3Rpbmcg
dGhlIGRlZmF1bHQgZW50cnkNCiMgKG5vcm1hbGx5IHRoZSBmaXJzdCBlbnRyeSBkZWZpbmVkKS4N
CnRpbWVvdXQJCTEwDQoNCiMjIGhpZGRlbm1lbnUNCiMgSGlkZXMgdGhlIG1lbnUgYnkgZGVmYXVs
dCAocHJlc3MgRVNDIHRvIHNlZSB0aGUgbWVudSkNCiNoaWRkZW5tZW51DQoNCiMgUHJldHR5IGNv
bG91cnMNCiNjb2xvciBjeWFuL2JsdWUgd2hpdGUvYmx1ZQ0KDQojIyBwYXNzd29yZCBbJy0tbWQ1
J10gcGFzc3dkDQojIElmIHVzZWQgaW4gdGhlIGZpcnN0IHNlY3Rpb24gb2YgYSBtZW51IGZpbGUs
IGRpc2FibGUgYWxsIGludGVyYWN0aXZlIGVkaXRpbmcNCiMgY29udHJvbCAobWVudSBlbnRyeSBl
ZGl0b3IgYW5kIGNvbW1hbmQtbGluZSkgIGFuZCBlbnRyaWVzIHByb3RlY3RlZCBieSB0aGUNCiMg
Y29tbWFuZCAnbG9jaycNCiMgZS5nLiBwYXNzd29yZCB0b3BzZWNyZXQNCiMgICAgICBwYXNzd29y
ZCAtLW1kNSAkMSRnTGhVMC8kYVc3OGtISzFRZlYzUDJiMnpuVW9lLw0KIyBwYXNzd29yZCB0b3Bz
ZWNyZXQNCg0KIw0KIyBleGFtcGxlcw0KIw0KIyB0aXRsZQkJV2luZG93cyA5NS85OC9OVC8yMDAw
DQojIHJvb3QJCShoZDAsMCkNCiMgbWFrZWFjdGl2ZQ0KIyBjaGFpbmxvYWRlcgkrMQ0KIw0KIyB0
aXRsZQkJTGludXgNCiMgcm9vdAkJKGhkMCwxKQ0KIyBrZXJuZWwJL3ZtbGludXogcm9vdD0vZGV2
L2hkYTIgcm8NCiMNCg0KIw0KIyBQdXQgc3RhdGljIGJvb3Qgc3RhbnphcyBiZWZvcmUgYW5kL29y
IGFmdGVyIEFVVE9NQUdJQyBLRVJORUwgTElTVA0KDQojIyMgQkVHSU4gQVVUT01BR0lDIEtFUk5F
TFMgTElTVA0KIyMgbGluZXMgYmV0d2VlbiB0aGUgQVVUT01BR0lDIEtFUk5FTFMgTElTVCBtYXJr
ZXJzIHdpbGwgYmUgbW9kaWZpZWQNCiMjIGJ5IHRoZSBkZWJpYW4gdXBkYXRlLWdydWIgc2NyaXB0
IGV4Y2VwdCBmb3IgdGhlIGRlZmF1bHQgb3B0aW9ucyBiZWxvdw0KDQojIyBETyBOT1QgVU5DT01N
RU5UIFRIRU0sIEp1c3QgZWRpdCB0aGVtIHRvIHlvdXIgbmVlZHMNCg0KIyMgIyMgU3RhcnQgRGVm
YXVsdCBPcHRpb25zICMjDQojIyBkZWZhdWx0IGtlcm5lbCBvcHRpb25zDQojIyBkZWZhdWx0IGtl
cm5lbCBvcHRpb25zIGZvciBhdXRvbWFnaWMgYm9vdCBvcHRpb25zDQojIyBJZiB5b3Ugd2FudCBz
cGVjaWFsIG9wdGlvbnMgZm9yIHNwZWNpZmljIGtlcm5lbHMgdXNlIGtvcHRfeF95X3oNCiMjIHdo
ZXJlIHgueS56IGlzIGtlcm5lbCB2ZXJzaW9uLiBNaW5vciB2ZXJzaW9ucyBjYW4gYmUgb21pdHRl
ZC4NCiMjIGUuZy4ga29wdD1yb290PS9kZXYvaGRhMSBybw0KIyMgICAgICBrb3B0XzJfNl84PXJv
b3Q9L2Rldi9oZGMxIHJvDQojIyAgICAgIGtvcHRfMl82XzhfMl82ODY9cm9vdD0vZGV2L2hkYzIg
cm8NCiMga29wdD1yb290PVVVSUQ9MjY2ZTcxYWYtZTE0NS00OTViLWIzOGYtMmRhMWY0NDQ4ODVk
IHJvDQoNCiMjIGRlZmF1bHQgZ3J1YiByb290IGRldmljZQ0KIyMgZS5nLiBncm9vdD0oaGQwLDAp
DQojIGdyb290PThlZGYwZTFiLTVmOWMtNGNhMC04Zjg4LTc3ZDM1YWY4NzA5Mw0KDQojIyBzaG91
bGQgdXBkYXRlLWdydWIgY3JlYXRlIGFsdGVybmF0aXZlIGF1dG9tYWdpYyBib290IG9wdGlvbnMN
CiMjIGUuZy4gYWx0ZXJuYXRpdmU9dHJ1ZQ0KIyMgICAgICBhbHRlcm5hdGl2ZT1mYWxzZQ0KIyBh
bHRlcm5hdGl2ZT10cnVlDQoNCiMjIHNob3VsZCB1cGRhdGUtZ3J1YiBsb2NrIGFsdGVybmF0aXZl
IGF1dG9tYWdpYyBib290IG9wdGlvbnMNCiMjIGUuZy4gbG9ja2FsdGVybmF0aXZlPXRydWUNCiMj
ICAgICAgbG9ja2FsdGVybmF0aXZlPWZhbHNlDQojIGxvY2thbHRlcm5hdGl2ZT1mYWxzZQ0KDQoj
IyBhZGRpdGlvbmFsIG9wdGlvbnMgdG8gdXNlIHdpdGggdGhlIGRlZmF1bHQgYm9vdCBvcHRpb24s
IGJ1dCBub3Qgd2l0aCB0aGUNCiMjIGFsdGVybmF0aXZlcw0KIyMgZS5nLiBkZWZvcHRpb25zPXZn
YT03OTEgcmVzdW1lPS9kZXYvaGRhNQ0KIyBkZWZvcHRpb25zPXF1aWV0IHNwbGFzaA0KDQojIyBz
aG91bGQgdXBkYXRlLWdydWIgbG9jayBvbGQgYXV0b21hZ2ljIGJvb3Qgb3B0aW9ucw0KIyMgZS5n
LiBsb2Nrb2xkPWZhbHNlDQojIyAgICAgIGxvY2tvbGQ9dHJ1ZQ0KIyBsb2Nrb2xkPWZhbHNlDQoN
CiMjIFhlbiBoeXBlcnZpc29yIG9wdGlvbnMgdG8gdXNlIHdpdGggdGhlIGRlZmF1bHQgWGVuIGJv
b3Qgb3B0aW9uDQojIHhlbmhvcHQ9DQoNCiMjIFhlbiBMaW51eCBrZXJuZWwgb3B0aW9ucyB0byB1
c2Ugd2l0aCB0aGUgZGVmYXVsdCBYZW4gYm9vdCBvcHRpb24NCiMgeGVua29wdD1jb25zb2xlPXR0
eTANCg0KIyMgYWx0b3B0aW9uIGJvb3QgdGFyZ2V0cyBvcHRpb24NCiMjIG11bHRpcGxlIGFsdG9w
dGlvbnMgbGluZXMgYXJlIGFsbG93ZWQNCiMjIGUuZy4gYWx0b3B0aW9ucz0oZXh0cmEgbWVudSBz
dWZmaXgpIGV4dHJhIGJvb3Qgb3B0aW9ucw0KIyMgICAgICBhbHRvcHRpb25zPShyZWNvdmVyeSkg
c2luZ2xlDQojIGFsdG9wdGlvbnM9KHJlY292ZXJ5IG1vZGUpIHNpbmdsZQ0KDQojIyBjb250cm9s
cyBob3cgbWFueSBrZXJuZWxzIHNob3VsZCBiZSBwdXQgaW50byB0aGUgbWVudS5sc3QNCiMjIG9u
bHkgY291bnRzIHRoZSBmaXJzdCBvY2N1cmVuY2Ugb2YgYSBrZXJuZWwsIG5vdCB0aGUNCiMjIGFs
dGVybmF0aXZlIGtlcm5lbCBvcHRpb25zDQojIyBlLmcuIGhvd21hbnk9YWxsDQojIyAgICAgIGhv
d21hbnk9Nw0KIyBob3dtYW55PWFsbA0KDQojIyBzcGVjaWZ5IGlmIHJ1bm5pbmcgaW4gWGVuIGRv
bVUgb3IgaGF2ZSBncnViIGRldGVjdCBhdXRvbWF0aWNhbGx5DQojIyB1cGRhdGUtZ3J1YiB3aWxs
IGlnbm9yZSBub24teGVuIGtlcm5lbHMgd2hlbiBydW5uaW5nIGluIGRvbVUgYW5kIHZpY2UgdmVy
c2ENCiMjIGUuZy4gaW5kb21VPWRldGVjdA0KIyMgICAgICBpbmRvbVU9dHJ1ZQ0KIyMgICAgICBp
bmRvbVU9ZmFsc2UNCiMgaW5kb21VPWRldGVjdA0KDQojIyBzaG91bGQgdXBkYXRlLWdydWIgY3Jl
YXRlIG1lbXRlc3Q4NiBib290IG9wdGlvbg0KIyMgZS5nLiBtZW10ZXN0ODY9dHJ1ZQ0KIyMgICAg
ICBtZW10ZXN0ODY9ZmFsc2UNCiMgbWVtdGVzdDg2PXRydWUNCg0KIyMgc2hvdWxkIHVwZGF0ZS1n
cnViIGFkanVzdCB0aGUgdmFsdWUgb2YgdGhlIGRlZmF1bHQgYm9vdGVkIHN5c3RlbQ0KIyMgY2Fu
IGJlIHRydWUgb3IgZmFsc2UNCiMgdXBkYXRlZGVmYXVsdGVudHJ5PWZhbHNlDQoNCiMjIHNob3Vs
ZCB1cGRhdGUtZ3J1YiBhZGQgc2F2ZWRlZmF1bHQgdG8gdGhlIGRlZmF1bHQgb3B0aW9ucw0KIyMg
Y2FuIGJlIHRydWUgb3IgZmFsc2UNCiMgc2F2ZWRlZmF1bHQ9ZmFsc2UNCg0KIyMgIyMgRW5kIERl
ZmF1bHQgT3B0aW9ucyAjIw0KDQoNCnRpdGxlCQlYZW4gNC4xLjIgLyBVYnVudHUgMTAuMDQuNCBr
ZXJuZWwgMi42LjMyLjQwIChyb290PXNkYTYpDQp1dWlkCQk4ZWRmMGUxYi01ZjljLTRjYTAtOGY4
OC03N2QzNWFmODcwOTMNCiNyb290CQkoaGQwLDEpDQprZXJuZWwJCS94ZW4tNC4xLjIuZ3ogZG9t
MF9tZW09NDA5Nk0sbWF4OjQwOTZNIGxvZ2x2bD1hbGwgZ3Vlc3RfbG9nbHZsPWFsbA0KbW9kdWxl
CQkvdm1saW51ei0yLjYuMzIuNDAgZHVtbXk9ZHVtbXkgcm9vdD0vZGV2L3NkYTYgcm8gY29uc29s
ZT10dHkwIG5vbW9kZXNldCByb290ZGVsYXk9NTANCm1vZHVsZQkJL2luaXRyZC5pbWctMi42LjMy
LjQwDQoNCnRpdGxlCQlYZW4gNC4yLjAtcmMzIC8gVWJ1bnR1IDEwLjA0LjQga2VybmVsIDIuNi4z
Mi40MCAocm9vdD1zZGE2KQ0KdXVpZAkJOGVkZjBlMWItNWY5Yy00Y2EwLThmODgtNzdkMzVhZjg3
MDkzDQojcm9vdAkJKGhkMCwxKQ0Ka2VybmVsCQkveGVuLTQuMi4wLXJjMy1wcmUuZ3ogZG9tMF9t
ZW09NDA5Nk0sbWF4OjQwOTZNIGxvZ2x2bD1hbGwgZ3Vlc3RfbG9nbHZsPWFsbCBjb20xPTk2MDAs
OG4xIGNvbnNvbGU9Y29tMSx2Z2ENCm1vZHVsZQkJL3ZtbGludXotMi42LjMyLjQwIHJvb3Q9VVVJ
RD0yNjZlNzFhZi1lMTQ1LTQ5NWItYjM4Zi0yZGExZjQ0NDg4NWQgcm8gY29uc29sZT10dHkwIGNv
bnNvbGU9aHZjMCBlYXJseXByaW50az14ZW4gbm9tb2Rlc2V0DQptb2R1bGUJCS9pbml0cmQuaW1n
LTIuNi4zMi40MA0KDQp0aXRsZQkJVWJ1bnR1IDEwLjA0LjQgTFRTLCBrZXJuZWwgMi42LjMyLTQy
LWdlbmVyaWMgKHNkYTYpDQp1dWlkCQk4ZWRmMGUxYi01ZjljLTRjYTAtOGY4OC03N2QzNWFmODcw
OTMNCmtlcm5lbAkJL3ZtbGludXotMi42LjMyLTQyLWdlbmVyaWMgcm9vdD0vZGV2L3NkYTYgcm8g
cXVpZXQgc3BsYXNoIA0KaW5pdHJkCQkvaW5pdHJkLmltZy0yLjYuMzItNDItZ2VuZXJpYw0KDQp0
aXRsZQkJVWJ1bnR1IDEwLjA0LjQgTFRTLCBrZXJuZWwgMi42LjMyLTQyLWdlbmVyaWMgKHNkYTcp
DQp1dWlkCQk4ZWRmMGUxYi01ZjljLTRjYTAtOGY4OC03N2QzNWFmODcwOTMNCmtlcm5lbAkJL3Zt
bGludXotMi42LjMyLTQyLWdlbmVyaWMgcm9vdD0vZGV2L3NkYTcgcm8gcXVpZXQgc3BsYXNoIA0K
aW5pdHJkCQkvaW5pdHJkLmltZy0yLjYuMzItNDItZ2VuZXJpYw0KDQp0aXRsZQkJVWJ1bnR1IDEw
LjA0LjQgTFRTLCBrZXJuZWwgMy4xLjAtcmM5Kw0KdXVpZAkJOGVkZjBlMWItNWY5Yy00Y2EwLThm
ODgtNzdkMzVhZjg3MDkzDQprZXJuZWwJCS92bWxpbnV6LTMuMS4wLXJjOSsgcm9vdD1VVUlEPTI2
NmU3MWFmLWUxNDUtNDk1Yi1iMzhmLTJkYTFmNDQ0ODg1ZCBybyBxdWlldCBzcGxhc2ggDQoNCnRp
dGxlCQlVYnVudHUgMTAuMDQuNCBMVFMsIGtlcm5lbCAzLjEuMC1yYzkrIChyZWNvdmVyeSBtb2Rl
KQ0KdXVpZAkJOGVkZjBlMWItNWY5Yy00Y2EwLThmODgtNzdkMzVhZjg3MDkzDQprZXJuZWwJCS92
bWxpbnV6LTMuMS4wLXJjOSsgcm9vdD1VVUlEPTI2NmU3MWFmLWUxNDUtNDk1Yi1iMzhmLTJkYTFm
NDQ0ODg1ZCBybyAgc2luZ2xlDQoNCnRpdGxlCQlVYnVudHUgMTAuMDQuNCBMVFMsIGtlcm5lbCAy
LjYuMzIuNDANCnV1aWQJCThlZGYwZTFiLTVmOWMtNGNhMC04Zjg4LTc3ZDM1YWY4NzA5Mw0Ka2Vy
bmVsCQkvdm1saW51ei0yLjYuMzIuNDAgcm9vdD1VVUlEPTI2NmU3MWFmLWUxNDUtNDk1Yi1iMzhm
LTJkYTFmNDQ0ODg1ZCBybyBxdWlldCBzcGxhc2ggDQppbml0cmQJCS9pbml0cmQuaW1nLTIuNi4z
Mi40MA0KDQp0aXRsZQkJVWJ1bnR1IDEwLjA0LjQgTFRTLCBrZXJuZWwgMi42LjMyLjQwIChyZWNv
dmVyeSBtb2RlKQ0KdXVpZAkJOGVkZjBlMWItNWY5Yy00Y2EwLThmODgtNzdkMzVhZjg3MDkzDQpr
ZXJuZWwJCS92bWxpbnV6LTIuNi4zMi40MCByb290PVVVSUQ9MjY2ZTcxYWYtZTE0NS00OTViLWIz
OGYtMmRhMWY0NDQ4ODVkIHJvICBzaW5nbGUNCmluaXRyZAkJL2luaXRyZC5pbWctMi42LjMyLjQw
DQoNCg0KdGl0bGUJCVVidW50dSAxMC4wNC40IExUUywga2VybmVsIDIuNi4zMi00Mi1nZW5lcmlj
IChyZWNvdmVyeSBtb2RlKQ0KdXVpZAkJOGVkZjBlMWItNWY5Yy00Y2EwLThmODgtNzdkMzVhZjg3
MDkzDQprZXJuZWwJCS92bWxpbnV6LTIuNi4zMi00Mi1nZW5lcmljIHJvb3Q9VVVJRD0yNjZlNzFh
Zi1lMTQ1LTQ5NWItYjM4Zi0yZGExZjQ0NDg4NWQgcm8gIHNpbmdsZQ0KaW5pdHJkCQkvaW5pdHJk
LmltZy0yLjYuMzItNDItZ2VuZXJpYw0KDQp0aXRsZQkJVWJ1bnR1IDEwLjA0LjQgTFRTLCBrZXJu
ZWwgMi42LjMyLTM4LWdlbmVyaWMNCnV1aWQJCThlZGYwZTFiLTVmOWMtNGNhMC04Zjg4LTc3ZDM1
YWY4NzA5Mw0Ka2VybmVsCQkvdm1saW51ei0yLjYuMzItMzgtZ2VuZXJpYyByb290PVVVSUQ9MjY2
ZTcxYWYtZTE0NS00OTViLWIzOGYtMmRhMWY0NDQ4ODVkIHJvIHF1aWV0IHNwbGFzaCANCmluaXRy
ZAkJL2luaXRyZC5pbWctMi42LjMyLTM4LWdlbmVyaWMNCg0KdGl0bGUJCVVidW50dSAxMC4wNC40
IExUUywga2VybmVsIDIuNi4zMi0zOC1nZW5lcmljIChyZWNvdmVyeSBtb2RlKQ0KdXVpZAkJOGVk
ZjBlMWItNWY5Yy00Y2EwLThmODgtNzdkMzVhZjg3MDkzDQprZXJuZWwJCS92bWxpbnV6LTIuNi4z
Mi0zOC1nZW5lcmljIHJvb3Q9VVVJRD0yNjZlNzFhZi1lMTQ1LTQ5NWItYjM4Zi0yZGExZjQ0NDg4
NWQgcm8gIHNpbmdsZQ0KaW5pdHJkCQkvaW5pdHJkLmltZy0yLjYuMzItMzgtZ2VuZXJpYw0KDQp0
aXRsZQkJQ2hhaW5sb2FkIGludG8gR1JVQiAyDQpyb290CQk4ZWRmMGUxYi01ZjljLTRjYTAtOGY4
OC03N2QzNWFmODcwOTMNCmtlcm5lbAkJL2Jvb3QvZ3J1Yi9jb3JlLmltZw0KDQojdGl0bGUJCVVi
dW50dSAxMC4wNC40IExUUywgbWVtdGVzdDg2Kw0KI3V1aWQJCThlZGYwZTFiLTVmOWMtNGNhMC04
Zjg4LTc3ZDM1YWY4NzA5Mw0KI2tlcm5lbAkJL21lbXRlc3Q4NisuYmluDQoNCiMjIyBFTkQgREVC
SUFOIEFVVE9NQUdJQyBLRVJORUxTIExJU1QNCg0KPT09PT09PT09PT09PT09PT09PT09PT09PT09
PT0gc2RhMi9ncnViL2dydWIuY2ZnOiA9PT09PT09PT09PT09PT09PT09PT09PT09PT09PQ0KDQoj
DQojIERPIE5PVCBFRElUIFRISVMgRklMRQ0KIw0KIyBJdCBpcyBhdXRvbWF0aWNhbGx5IGdlbmVy
YXRlZCBieSAvdXNyL3NiaW4vZ3J1Yi1ta2NvbmZpZyB1c2luZyB0ZW1wbGF0ZXMNCiMgZnJvbSAv
ZXRjL2dydWIuZCBhbmQgc2V0dGluZ3MgZnJvbSAvZXRjL2RlZmF1bHQvZ3J1Yg0KIw0KDQojIyMg
QkVHSU4gL2V0Yy9ncnViLmQvMDBfaGVhZGVyICMjIw0KaWYgWyAtcyAkcHJlZml4L2dydWJlbnYg
XTsgdGhlbg0KICBsb2FkX2Vudg0KZmkNCnNldCBkZWZhdWx0PSIwIg0KaWYgWyAke3ByZXZfc2F2
ZWRfZW50cnl9IF07IHRoZW4NCiAgc2V0IHNhdmVkX2VudHJ5PSR7cHJldl9zYXZlZF9lbnRyeX0N
CiAgc2F2ZV9lbnYgc2F2ZWRfZW50cnkNCiAgc2V0IHByZXZfc2F2ZWRfZW50cnk9DQogIHNhdmVf
ZW52IHByZXZfc2F2ZWRfZW50cnkNCiAgc2V0IGJvb3Rfb25jZT10cnVlDQpmaQ0KDQpmdW5jdGlv
biBzYXZlZGVmYXVsdCB7DQogIGlmIFsgLXogJHtib290X29uY2V9IF07IHRoZW4NCiAgICBzYXZl
ZF9lbnRyeT0ke2Nob3Nlbn0NCiAgICBzYXZlX2VudiBzYXZlZF9lbnRyeQ0KICBmaQ0KfQ0KDQpm
dW5jdGlvbiByZWNvcmRmYWlsIHsNCiAgc2V0IHJlY29yZGZhaWw9MQ0KICBpZiBbIC1uICR7aGF2
ZV9ncnViZW52fSBdOyB0aGVuIGlmIFsgLXogJHtib290X29uY2V9IF07IHRoZW4gc2F2ZV9lbnYg
cmVjb3JkZmFpbDsgZmk7IGZpDQp9DQppbnNtb2QgZXh0Mg0Kc2V0IHJvb3Q9JyhoZDAsNiknDQpz
ZWFyY2ggLS1uby1mbG9wcHkgLS1mcy11dWlkIC0tc2V0IDI2NmU3MWFmLWUxNDUtNDk1Yi1iMzhm
LTJkYTFmNDQ0ODg1ZA0KaWYgbG9hZGZvbnQgL3Vzci9zaGFyZS9ncnViL3VuaWNvZGUucGYyIDsg
dGhlbg0KICBzZXQgZ2Z4bW9kZT02NDB4NDgwDQogIGluc21vZCBnZnh0ZXJtDQogIGluc21vZCB2
YmUNCiAgaWYgdGVybWluYWxfb3V0cHV0IGdmeHRlcm0gOyB0aGVuIHRydWUgOyBlbHNlDQogICAg
IyBGb3IgYmFja3dhcmQgY29tcGF0aWJpbGl0eSB3aXRoIHZlcnNpb25zIG9mIHRlcm1pbmFsLm1v
ZCB0aGF0IGRvbid0DQogICAgIyB1bmRlcnN0YW5kIHRlcm1pbmFsX291dHB1dA0KICAgIHRlcm1p
bmFsIGdmeHRlcm0NCiAgZmkNCmZpDQppbnNtb2QgZXh0Mg0Kc2V0IHJvb3Q9JyhoZDAsMiknDQpz
ZWFyY2ggLS1uby1mbG9wcHkgLS1mcy11dWlkIC0tc2V0IDhlZGYwZTFiLTVmOWMtNGNhMC04Zjg4
LTc3ZDM1YWY4NzA5Mw0Kc2V0IGxvY2FsZV9kaXI9KCRyb290KS9ncnViL2xvY2FsZQ0Kc2V0IGxh
bmc9ZW4NCmluc21vZCBnZXR0ZXh0DQppZiBbICR7cmVjb3JkZmFpbH0gPSAxIF07IHRoZW4NCiAg
c2V0IHRpbWVvdXQ9LTENCmVsc2UNCiAgc2V0IHRpbWVvdXQ9MTANCmZpDQojIyMgRU5EIC9ldGMv
Z3J1Yi5kLzAwX2hlYWRlciAjIyMNCg0KIyMjIEJFR0lOIC9ldGMvZ3J1Yi5kLzA1X2RlYmlhbl90
aGVtZSAjIyMNCnNldCBtZW51X2NvbG9yX25vcm1hbD13aGl0ZS9ibGFjaw0Kc2V0IG1lbnVfY29s
b3JfaGlnaGxpZ2h0PWJsYWNrL2xpZ2h0LWdyYXkNCiMjIyBFTkQgL2V0Yy9ncnViLmQvMDVfZGVi
aWFuX3RoZW1lICMjIw0KDQojIyMgQkVHSU4gL2V0Yy9ncnViLmQvMDhfeGVuICMjIw0KbWVudWVu
dHJ5ICJYZW4gVW5zdGFibGUgNC4yIFJDMyAvIERlYmlhbiBTcXVlZXplIGtlcm5lbCAyLjYuMzIu
NDAiIHsNCiAgICAgICAgaW5zbW9kIGV4dDINCiAgICAgICAgc2V0IHJvb3Q9JyhoZDAsNCknDQog
ICAgICAgIG11bHRpYm9vdCAoaGQwLDEpL3hlbi00LjIuMC1yYzMtcHJlLmd6IGR1bW15DQogICAg
ICAgIG1vZHVsZSAoaGQwLDEpL3ZtbGludXotMi42LjMyLjQwIGR1bW15IHJvb3Q9VVVJRD0yNjZl
NzFhZi1lMTQ1LTQ5NWItYjM4Zi0yZGExZjQ0NDg4NWQgcm8gcXVpZXQgY29uc29sZT10dHkwIG5v
bW9kZXNldCByb290ZGVsYXk9MTMwDQogICAgICAgIG1vZHVsZSAoaGQwLDEpL2luaXRyZC5pbWct
Mi42LjMyLjQwDQp9DQojIyMgRU5EIC9ldGMvZ3J1Yi5kLzA4X3hlbiAjIyMNCg0KIyMjIEJFR0lO
IC9ldGMvZ3J1Yi5kLzEwX2xpbnV4ICMjIw0KbWVudWVudHJ5ICdVYnVudHUsIHdpdGggTGludXgg
My4xLjAtcmM5KycgLS1jbGFzcyB1YnVudHUgLS1jbGFzcyBnbnUtbGludXggLS1jbGFzcyBnbnUg
LS1jbGFzcyBvcyB7DQoJcmVjb3JkZmFpbA0KCWluc21vZCBleHQyDQoJc2V0IHJvb3Q9JyhoZDAs
MiknDQoJc2VhcmNoIC0tbm8tZmxvcHB5IC0tZnMtdXVpZCAtLXNldCA4ZWRmMGUxYi01ZjljLTRj
YTAtOGY4OC03N2QzNWFmODcwOTMNCglsaW51eAkvdm1saW51ei0zLjEuMC1yYzkrIHJvb3Q9L2Rl
di9zZGE2IHJvICAgcXVpZXQgc3BsYXNoDQp9DQptZW51ZW50cnkgJ1VidW50dSwgd2l0aCBMaW51
eCAyLjYuMzIuNDAnIC0tY2xhc3MgdWJ1bnR1IC0tY2xhc3MgZ251LWxpbnV4IC0tY2xhc3MgZ251
IC0tY2xhc3Mgb3Mgew0KCXJlY29yZGZhaWwNCglpbnNtb2QgZXh0Mg0KCXNldCByb290PScoaGQw
LDIpJw0KCXNlYXJjaCAtLW5vLWZsb3BweSAtLWZzLXV1aWQgLS1zZXQgOGVkZjBlMWItNWY5Yy00
Y2EwLThmODgtNzdkMzVhZjg3MDkzDQoJbGludXgJL3ZtbGludXotMi42LjMyLjQwIHJvb3Q9VVVJ
RD0yNjZlNzFhZi1lMTQ1LTQ5NWItYjM4Zi0yZGExZjQ0NDg4NWQgcm8gICBxdWlldCBzcGxhc2gN
Cglpbml0cmQJL2luaXRyZC5pbWctMi42LjMyLjQwDQp9DQptZW51ZW50cnkgJ1VidW50dSwgd2l0
aCBMaW51eCAyLjYuMzItNDItZ2VuZXJpYycgLS1jbGFzcyB1YnVudHUgLS1jbGFzcyBnbnUtbGlu
dXggLS1jbGFzcyBnbnUgLS1jbGFzcyBvcyB7DQoJcmVjb3JkZmFpbA0KCWluc21vZCBleHQyDQoJ
c2V0IHJvb3Q9JyhoZDAsMiknDQoJc2VhcmNoIC0tbm8tZmxvcHB5IC0tZnMtdXVpZCAtLXNldCA4
ZWRmMGUxYi01ZjljLTRjYTAtOGY4OC03N2QzNWFmODcwOTMNCglsaW51eAkvdm1saW51ei0yLjYu
MzItNDItZ2VuZXJpYyByb290PVVVSUQ9MjY2ZTcxYWYtZTE0NS00OTViLWIzOGYtMmRhMWY0NDQ4
ODVkIHJvICAgcXVpZXQgc3BsYXNoDQoJaW5pdHJkCS9pbml0cmQuaW1nLTIuNi4zMi00Mi1nZW5l
cmljDQp9DQptZW51ZW50cnkgJ1VidW50dSwgd2l0aCBMaW51eCAyLjYuMzItMzgtZ2VuZXJpYycg
LS1jbGFzcyB1YnVudHUgLS1jbGFzcyBnbnUtbGludXggLS1jbGFzcyBnbnUgLS1jbGFzcyBvcyB7
DQoJcmVjb3JkZmFpbA0KCWluc21vZCBleHQyDQoJc2V0IHJvb3Q9JyhoZDAsMiknDQoJc2VhcmNo
IC0tbm8tZmxvcHB5IC0tZnMtdXVpZCAtLXNldCA4ZWRmMGUxYi01ZjljLTRjYTAtOGY4OC03N2Qz
NWFmODcwOTMNCglsaW51eAkvdm1saW51ei0yLjYuMzItMzgtZ2VuZXJpYyByb290PVVVSUQ9MjY2
ZTcxYWYtZTE0NS00OTViLWIzOGYtMmRhMWY0NDQ4ODVkIHJvICAgcXVpZXQgc3BsYXNoDQoJaW5p
dHJkCS9pbml0cmQuaW1nLTIuNi4zMi0zOC1nZW5lcmljDQp9DQojIyMgRU5EIC9ldGMvZ3J1Yi5k
LzEwX2xpbnV4ICMjIw0KDQojIyMgQkVHSU4gL2V0Yy9ncnViLmQvMjBfbWVtdGVzdDg2KyAjIyMN
Cm1lbnVlbnRyeSAiTWVtb3J5IHRlc3QgKG1lbXRlc3Q4NispIiB7DQoJaW5zbW9kIGV4dDINCglz
ZXQgcm9vdD0nKGhkMCwyKScNCglzZWFyY2ggLS1uby1mbG9wcHkgLS1mcy11dWlkIC0tc2V0IDhl
ZGYwZTFiLTVmOWMtNGNhMC04Zjg4LTc3ZDM1YWY4NzA5Mw0KCWxpbnV4MTYJL21lbXRlc3Q4Nisu
YmluDQp9DQptZW51ZW50cnkgIk1lbW9yeSB0ZXN0IChtZW10ZXN0ODYrLCBzZXJpYWwgY29uc29s
ZSAxMTUyMDApIiB7DQoJaW5zbW9kIGV4dDINCglzZXQgcm9vdD0nKGhkMCwyKScNCglzZWFyY2gg
LS1uby1mbG9wcHkgLS1mcy11dWlkIC0tc2V0IDhlZGYwZTFiLTVmOWMtNGNhMC04Zjg4LTc3ZDM1
YWY4NzA5Mw0KCWxpbnV4MTYJL21lbXRlc3Q4NisuYmluIGNvbnNvbGU9dHR5UzAsMTE1MjAwbjgN
Cn0NCiMjIyBFTkQgL2V0Yy9ncnViLmQvMjBfbWVtdGVzdDg2KyAjIyMNCg0KIyMjIEJFR0lOIC9l
dGMvZ3J1Yi5kLzMwX29zLXByb2JlciAjIyMNCiMjIyBFTkQgL2V0Yy9ncnViLmQvMzBfb3MtcHJv
YmVyICMjIw0KDQojIyMgQkVHSU4gL2V0Yy9ncnViLmQvNDBfY3VzdG9tICMjIw0KIyBUaGlzIGZp
bGUgcHJvdmlkZXMgYW4gZWFzeSB3YXkgdG8gYWRkIGN1c3RvbSBtZW51IGVudHJpZXMuICBTaW1w
bHkgdHlwZSB0aGUNCiMgbWVudSBlbnRyaWVzIHlvdSB3YW50IHRvIGFkZCBhZnRlciB0aGlzIGNv
bW1lbnQuICBCZSBjYXJlZnVsIG5vdCB0byBjaGFuZ2UNCiMgdGhlICdleGVjIHRhaWwnIGxpbmUg
YWJvdmUuDQojIyMgRU5EIC9ldGMvZ3J1Yi5kLzQwX2N1c3RvbSAjIyMNCg0KPT09PT09PT09PT09
PT09PT09PSBzZGEyOiBMb2NhdGlvbiBvZiBmaWxlcyBsb2FkZWQgYnkgR3J1YjogPT09PT09PT09
PT09PT09PT09PQ0KDQoNCiAgICAuMEdCOiBncnViL2dydWIuY2ZnDQogICAgLjBHQjogZ3J1Yi9t
ZW51LmxzdA0KICAgIC4wR0I6IGdydWIvc3RhZ2UyDQogICAgLjBHQjogaW5pdHJkLmltZy0yLjYu
MzItMzgtZ2VuZXJpYw0KICAgIC4wR0I6IGluaXRyZC5pbWctMi42LjMyLjQwDQogICAgLjBHQjog
aW5pdHJkLmltZy0yLjYuMzItNDItZ2VuZXJpYw0KICAgIC4wR0I6IHZtbGludXotMi42LjMyLTM4
LWdlbmVyaWMNCiAgICAuMEdCOiB2bWxpbnV6LTIuNi4zMi40MA0KICAgIC4wR0I6IHZtbGludXot
Mi42LjMyLTQyLWdlbmVyaWMNCiAgICAuMEdCOiB2bWxpbnV6LTMuMS4wLXJjOSsNCg0KPT09PT09
PT09PT09PT09PT09PT09PT09PT09IHNkYTYvYm9vdC9ncnViL21lbnUubHN0OiA9PT09PT09PT09
PT09PT09PT09PT09PT09PT0NCg0KIyBtZW51LmxzdCAtIFNlZTogZ3J1Yig4KSwgaW5mbyBncnVi
LCB1cGRhdGUtZ3J1Yig4KQ0KIyAgICAgICAgICAgIGdydWItaW5zdGFsbCg4KSwgZ3J1Yi1mbG9w
cHkoOCksDQojICAgICAgICAgICAgZ3J1Yi1tZDUtY3J5cHQsIC91c3Ivc2hhcmUvZG9jL2dydWIN
CiMgICAgICAgICAgICBhbmQgL3Vzci9zaGFyZS9kb2MvZ3J1Yi1sZWdhY3ktZG9jLy4NCg0KIyMg
ZGVmYXVsdCBudW0NCiMgU2V0IHRoZSBkZWZhdWx0IGVudHJ5IHRvIHRoZSBlbnRyeSBudW1iZXIg
TlVNLiBOdW1iZXJpbmcgc3RhcnRzIGZyb20gMCwgYW5kDQojIHRoZSBlbnRyeSBudW1iZXIgMCBp
cyB0aGUgZGVmYXVsdCBpZiB0aGUgY29tbWFuZCBpcyBub3QgdXNlZC4NCiMNCiMgWW91IGNhbiBz
cGVjaWZ5ICdzYXZlZCcgaW5zdGVhZCBvZiBhIG51bWJlci4gSW4gdGhpcyBjYXNlLCB0aGUgZGVm
YXVsdCBlbnRyeQ0KIyBpcyB0aGUgZW50cnkgc2F2ZWQgd2l0aCB0aGUgY29tbWFuZCAnc2F2ZWRl
ZmF1bHQnLg0KIyBXQVJOSU5HOiBJZiB5b3UgYXJlIHVzaW5nIGRtcmFpZCBkbyBub3QgdXNlICdz
YXZlZGVmYXVsdCcgb3IgeW91cg0KIyBhcnJheSB3aWxsIGRlc3luYyBhbmQgd2lsbCBub3QgbGV0
IHlvdSBib290IHlvdXIgc3lzdGVtLg0KZGVmYXVsdAkJMA0KZmFsbGJhY2sJMg0KIyMgdGltZW91
dCBzZWMNCiMgU2V0IGEgdGltZW91dCwgaW4gU0VDIHNlY29uZHMsIGJlZm9yZSBhdXRvbWF0aWNh
bGx5IGJvb3RpbmcgdGhlIGRlZmF1bHQgZW50cnkNCiMgKG5vcm1hbGx5IHRoZSBmaXJzdCBlbnRy
eSBkZWZpbmVkKS4NCnRpbWVvdXQJCTEwDQoNCiMjIGhpZGRlbm1lbnUNCiMgSGlkZXMgdGhlIG1l
bnUgYnkgZGVmYXVsdCAocHJlc3MgRVNDIHRvIHNlZSB0aGUgbWVudSkNCiNoaWRkZW5tZW51DQoN
CiMgUHJldHR5IGNvbG91cnMNCiNjb2xvciBjeWFuL2JsdWUgd2hpdGUvYmx1ZQ0KDQojIyBwYXNz
d29yZCBbJy0tbWQ1J10gcGFzc3dkDQojIElmIHVzZWQgaW4gdGhlIGZpcnN0IHNlY3Rpb24gb2Yg
YSBtZW51IGZpbGUsIGRpc2FibGUgYWxsIGludGVyYWN0aXZlIGVkaXRpbmcNCiMgY29udHJvbCAo
bWVudSBlbnRyeSBlZGl0b3IgYW5kIGNvbW1hbmQtbGluZSkgIGFuZCBlbnRyaWVzIHByb3RlY3Rl
ZCBieSB0aGUNCiMgY29tbWFuZCAnbG9jaycNCiMgZS5nLiBwYXNzd29yZCB0b3BzZWNyZXQNCiMg
ICAgICBwYXNzd29yZCAtLW1kNSAkMSRnTGhVMC8kYVc3OGtISzFRZlYzUDJiMnpuVW9lLw0KIyBw
YXNzd29yZCB0b3BzZWNyZXQNCg0KIw0KIyBleGFtcGxlcw0KIw0KIyB0aXRsZQkJV2luZG93cyA5
NS85OC9OVC8yMDAwDQojIHJvb3QJCShoZDAsMCkNCiMgbWFrZWFjdGl2ZQ0KIyBjaGFpbmxvYWRl
cgkrMQ0KIw0KIyB0aXRsZQkJTGludXgNCiMgcm9vdAkJKGhkMCwxKQ0KIyBrZXJuZWwJL3ZtbGlu
dXogcm9vdD0vZGV2L2hkYTIgcm8NCiMNCg0KIw0KIyBQdXQgc3RhdGljIGJvb3Qgc3RhbnphcyBi
ZWZvcmUgYW5kL29yIGFmdGVyIEFVVE9NQUdJQyBLRVJORUwgTElTVA0KDQojIyMgQkVHSU4gQVVU
T01BR0lDIEtFUk5FTFMgTElTVA0KIyMgbGluZXMgYmV0d2VlbiB0aGUgQVVUT01BR0lDIEtFUk5F
TFMgTElTVCBtYXJrZXJzIHdpbGwgYmUgbW9kaWZpZWQNCiMjIGJ5IHRoZSBkZWJpYW4gdXBkYXRl
LWdydWIgc2NyaXB0IGV4Y2VwdCBmb3IgdGhlIGRlZmF1bHQgb3B0aW9ucyBiZWxvdw0KDQojIyBE
TyBOT1QgVU5DT01NRU5UIFRIRU0sIEp1c3QgZWRpdCB0aGVtIHRvIHlvdXIgbmVlZHMNCg0KIyMg
IyMgU3RhcnQgRGVmYXVsdCBPcHRpb25zICMjDQojIyBkZWZhdWx0IGtlcm5lbCBvcHRpb25zDQoj
IyBkZWZhdWx0IGtlcm5lbCBvcHRpb25zIGZvciBhdXRvbWFnaWMgYm9vdCBvcHRpb25zDQojIyBJ
ZiB5b3Ugd2FudCBzcGVjaWFsIG9wdGlvbnMgZm9yIHNwZWNpZmljIGtlcm5lbHMgdXNlIGtvcHRf
eF95X3oNCiMjIHdoZXJlIHgueS56IGlzIGtlcm5lbCB2ZXJzaW9uLiBNaW5vciB2ZXJzaW9ucyBj
YW4gYmUgb21pdHRlZC4NCiMjIGUuZy4ga29wdD1yb290PS9kZXYvaGRhMSBybw0KIyMgICAgICBr
b3B0XzJfNl84PXJvb3Q9L2Rldi9oZGMxIHJvDQojIyAgICAgIGtvcHRfMl82XzhfMl82ODY9cm9v
dD0vZGV2L2hkYzIgcm8NCiMga29wdD1yb290PVVVSUQ9MjY2ZTcxYWYtZTE0NS00OTViLWIzOGYt
MmRhMWY0NDQ4ODVkIHJvDQoNCiMjIGRlZmF1bHQgZ3J1YiByb290IGRldmljZQ0KIyMgZS5nLiBn
cm9vdD0oaGQwLDApDQojIGdyb290PThlZGYwZTFiLTVmOWMtNGNhMC04Zjg4LTc3ZDM1YWY4NzA5
Mw0KDQojIyBzaG91bGQgdXBkYXRlLWdydWIgY3JlYXRlIGFsdGVybmF0aXZlIGF1dG9tYWdpYyBi
b290IG9wdGlvbnMNCiMjIGUuZy4gYWx0ZXJuYXRpdmU9dHJ1ZQ0KIyMgICAgICBhbHRlcm5hdGl2
ZT1mYWxzZQ0KIyBhbHRlcm5hdGl2ZT10cnVlDQoNCiMjIHNob3VsZCB1cGRhdGUtZ3J1YiBsb2Nr
IGFsdGVybmF0aXZlIGF1dG9tYWdpYyBib290IG9wdGlvbnMNCiMjIGUuZy4gbG9ja2FsdGVybmF0
aXZlPXRydWUNCiMjICAgICAgbG9ja2FsdGVybmF0aXZlPWZhbHNlDQojIGxvY2thbHRlcm5hdGl2
ZT1mYWxzZQ0KDQojIyBhZGRpdGlvbmFsIG9wdGlvbnMgdG8gdXNlIHdpdGggdGhlIGRlZmF1bHQg
Ym9vdCBvcHRpb24sIGJ1dCBub3Qgd2l0aCB0aGUNCiMjIGFsdGVybmF0aXZlcw0KIyMgZS5nLiBk
ZWZvcHRpb25zPXZnYT03OTEgcmVzdW1lPS9kZXYvaGRhNQ0KIyBkZWZvcHRpb25zPXF1aWV0IHNw
bGFzaA0KDQojIyBzaG91bGQgdXBkYXRlLWdydWIgbG9jayBvbGQgYXV0b21hZ2ljIGJvb3Qgb3B0
aW9ucw0KIyMgZS5nLiBsb2Nrb2xkPWZhbHNlDQojIyAgICAgIGxvY2tvbGQ9dHJ1ZQ0KIyBsb2Nr
b2xkPWZhbHNlDQoNCiMjIFhlbiBoeXBlcnZpc29yIG9wdGlvbnMgdG8gdXNlIHdpdGggdGhlIGRl
ZmF1bHQgWGVuIGJvb3Qgb3B0aW9uDQojIHhlbmhvcHQ9DQoNCiMjIFhlbiBMaW51eCBrZXJuZWwg
b3B0aW9ucyB0byB1c2Ugd2l0aCB0aGUgZGVmYXVsdCBYZW4gYm9vdCBvcHRpb24NCiMgeGVua29w
dD1jb25zb2xlPXR0eTANCg0KIyMgYWx0b3B0aW9uIGJvb3QgdGFyZ2V0cyBvcHRpb24NCiMjIG11
bHRpcGxlIGFsdG9wdGlvbnMgbGluZXMgYXJlIGFsbG93ZWQNCiMjIGUuZy4gYWx0b3B0aW9ucz0o
ZXh0cmEgbWVudSBzdWZmaXgpIGV4dHJhIGJvb3Qgb3B0aW9ucw0KIyMgICAgICBhbHRvcHRpb25z
PShyZWNvdmVyeSkgc2luZ2xlDQojIGFsdG9wdGlvbnM9KHJlY292ZXJ5IG1vZGUpIHNpbmdsZQ0K
DQojIyBjb250cm9scyBob3cgbWFueSBrZXJuZWxzIHNob3VsZCBiZSBwdXQgaW50byB0aGUgbWVu
dS5sc3QNCiMjIG9ubHkgY291bnRzIHRoZSBmaXJzdCBvY2N1cmVuY2Ugb2YgYSBrZXJuZWwsIG5v
dCB0aGUNCiMjIGFsdGVybmF0aXZlIGtlcm5lbCBvcHRpb25zDQojIyBlLmcuIGhvd21hbnk9YWxs
DQojIyAgICAgIGhvd21hbnk9Nw0KIyBob3dtYW55PWFsbA0KDQojIyBzcGVjaWZ5IGlmIHJ1bm5p
bmcgaW4gWGVuIGRvbVUgb3IgaGF2ZSBncnViIGRldGVjdCBhdXRvbWF0aWNhbGx5DQojIyB1cGRh
dGUtZ3J1YiB3aWxsIGlnbm9yZSBub24teGVuIGtlcm5lbHMgd2hlbiBydW5uaW5nIGluIGRvbVUg
YW5kIHZpY2UgdmVyc2ENCiMjIGUuZy4gaW5kb21VPWRldGVjdA0KIyMgICAgICBpbmRvbVU9dHJ1
ZQ0KIyMgICAgICBpbmRvbVU9ZmFsc2UNCiMgaW5kb21VPWRldGVjdA0KDQojIyBzaG91bGQgdXBk
YXRlLWdydWIgY3JlYXRlIG1lbXRlc3Q4NiBib290IG9wdGlvbg0KIyMgZS5nLiBtZW10ZXN0ODY9
dHJ1ZQ0KIyMgICAgICBtZW10ZXN0ODY9ZmFsc2UNCiMgbWVtdGVzdDg2PXRydWUNCg0KIyMgc2hv
dWxkIHVwZGF0ZS1ncnViIGFkanVzdCB0aGUgdmFsdWUgb2YgdGhlIGRlZmF1bHQgYm9vdGVkIHN5
c3RlbQ0KIyMgY2FuIGJlIHRydWUgb3IgZmFsc2UNCiMgdXBkYXRlZGVmYXVsdGVudHJ5PWZhbHNl
DQoNCiMjIHNob3VsZCB1cGRhdGUtZ3J1YiBhZGQgc2F2ZWRlZmF1bHQgdG8gdGhlIGRlZmF1bHQg
b3B0aW9ucw0KIyMgY2FuIGJlIHRydWUgb3IgZmFsc2UNCiMgc2F2ZWRlZmF1bHQ9ZmFsc2UNCg0K
IyMgIyMgRW5kIERlZmF1bHQgT3B0aW9ucyAjIw0KDQoNCnRpdGxlCQlYZW4gNC4xLjIgLyBVYnVu
dHUgMTAuMDQuNCBrZXJuZWwgMi42LjMyLjQwIChyb290PXNkYTYpDQp1dWlkCQk4ZWRmMGUxYi01
ZjljLTRjYTAtOGY4OC03N2QzNWFmODcwOTMNCiNyb290CQkoaGQwLDEpDQprZXJuZWwJCS94ZW4t
NC4xLjIuZ3ogZG9tMF9tZW09NDA5Nk0sbWF4OjQwOTZNIGxvZ2x2bD1hbGwgZ3Vlc3RfbG9nbHZs
PWFsbA0KbW9kdWxlCQkvdm1saW51ei0yLjYuMzIuNDAgZHVtbXk9ZHVtbXkgcm9vdD0vZGV2L3Nk
YTYgcm8gY29uc29sZT10dHkwIG5vbW9kZXNldCByb290ZGVsYXk9NTANCm1vZHVsZQkJL2luaXRy
ZC5pbWctMi42LjMyLjQwDQoNCnRpdGxlCQlYZW4gNC4yLjAtcmMzIC8gVWJ1bnR1IDEwLjA0LjQg
a2VybmVsIDIuNi4zMi40MCAocm9vdD1zZGE2KQ0KdXVpZAkJOGVkZjBlMWItNWY5Yy00Y2EwLThm
ODgtNzdkMzVhZjg3MDkzDQojcm9vdAkJKGhkMCwxKQ0Ka2VybmVsCQkveGVuLTQuMi4wLXJjMy1w
cmUuZ3ogZG9tMF9tZW09NDA5Nk0sbWF4OjQwOTZNIGxvZ2x2bD1hbGwgZ3Vlc3RfbG9nbHZsPWFs
bCBjb20xPTk2MDAsOG4xIGNvbnNvbGU9Y29tMSx2Z2ENCm1vZHVsZQkJL3ZtbGludXotMi42LjMy
LjQwIHJvb3Q9VVVJRD0yNjZlNzFhZi1lMTQ1LTQ5NWItYjM4Zi0yZGExZjQ0NDg4NWQgcm8gY29u
c29sZT10dHkwIGNvbnNvbGU9aHZjMCBlYXJseXByaW50az14ZW4gbm9tb2Rlc2V0DQptb2R1bGUJ
CS9pbml0cmQuaW1nLTIuNi4zMi40MA0KDQp0aXRsZQkJVWJ1bnR1IDEwLjA0LjQgTFRTLCBrZXJu
ZWwgMi42LjMyLTQyLWdlbmVyaWMgKHNkYTYpDQp1dWlkCQk4ZWRmMGUxYi01ZjljLTRjYTAtOGY4
OC03N2QzNWFmODcwOTMNCmtlcm5lbAkJL3ZtbGludXotMi42LjMyLTQyLWdlbmVyaWMgcm9vdD0v
ZGV2L3NkYTYgcm8gcXVpZXQgc3BsYXNoIA0KaW5pdHJkCQkvaW5pdHJkLmltZy0yLjYuMzItNDIt
Z2VuZXJpYw0KDQp0aXRsZQkJVWJ1bnR1IDEwLjA0LjQgTFRTLCBrZXJuZWwgMi42LjMyLTQyLWdl
bmVyaWMgKHNkYTcpDQp1dWlkCQk4ZWRmMGUxYi01ZjljLTRjYTAtOGY4OC03N2QzNWFmODcwOTMN
Cmtlcm5lbAkJL3ZtbGludXotMi42LjMyLTQyLWdlbmVyaWMgcm9vdD0vZGV2L3NkYTcgcm8gcXVp
ZXQgc3BsYXNoIA0KaW5pdHJkCQkvaW5pdHJkLmltZy0yLjYuMzItNDItZ2VuZXJpYw0KDQp0aXRs
ZQkJVWJ1bnR1IDEwLjA0LjQgTFRTLCBrZXJuZWwgMy4xLjAtcmM5Kw0KdXVpZAkJOGVkZjBlMWIt
NWY5Yy00Y2EwLThmODgtNzdkMzVhZjg3MDkzDQprZXJuZWwJCS92bWxpbnV6LTMuMS4wLXJjOSsg
cm9vdD1VVUlEPTI2NmU3MWFmLWUxNDUtNDk1Yi1iMzhmLTJkYTFmNDQ0ODg1ZCBybyBxdWlldCBz
cGxhc2ggDQoNCnRpdGxlCQlVYnVudHUgMTAuMDQuNCBMVFMsIGtlcm5lbCAzLjEuMC1yYzkrIChy
ZWNvdmVyeSBtb2RlKQ0KdXVpZAkJOGVkZjBlMWItNWY5Yy00Y2EwLThmODgtNzdkMzVhZjg3MDkz
DQprZXJuZWwJCS92bWxpbnV6LTMuMS4wLXJjOSsgcm9vdD1VVUlEPTI2NmU3MWFmLWUxNDUtNDk1
Yi1iMzhmLTJkYTFmNDQ0ODg1ZCBybyAgc2luZ2xlDQoNCnRpdGxlCQlVYnVudHUgMTAuMDQuNCBM
VFMsIGtlcm5lbCAyLjYuMzIuNDANCnV1aWQJCThlZGYwZTFiLTVmOWMtNGNhMC04Zjg4LTc3ZDM1
YWY4NzA5Mw0Ka2VybmVsCQkvdm1saW51ei0yLjYuMzIuNDAgcm9vdD1VVUlEPTI2NmU3MWFmLWUx
NDUtNDk1Yi1iMzhmLTJkYTFmNDQ0ODg1ZCBybyBxdWlldCBzcGxhc2ggDQppbml0cmQJCS9pbml0
cmQuaW1nLTIuNi4zMi40MA0KDQp0aXRsZQkJVWJ1bnR1IDEwLjA0LjQgTFRTLCBrZXJuZWwgMi42
LjMyLjQwIChyZWNvdmVyeSBtb2RlKQ0KdXVpZAkJOGVkZjBlMWItNWY5Yy00Y2EwLThmODgtNzdk
MzVhZjg3MDkzDQprZXJuZWwJCS92bWxpbnV6LTIuNi4zMi40MCByb290PVVVSUQ9MjY2ZTcxYWYt
ZTE0NS00OTViLWIzOGYtMmRhMWY0NDQ4ODVkIHJvICBzaW5nbGUNCmluaXRyZAkJL2luaXRyZC5p
bWctMi42LjMyLjQwDQoNCg0KdGl0bGUJCVVidW50dSAxMC4wNC40IExUUywga2VybmVsIDIuNi4z
Mi00Mi1nZW5lcmljIChyZWNvdmVyeSBtb2RlKQ0KdXVpZAkJOGVkZjBlMWItNWY5Yy00Y2EwLThm
ODgtNzdkMzVhZjg3MDkzDQprZXJuZWwJCS92bWxpbnV6LTIuNi4zMi00Mi1nZW5lcmljIHJvb3Q9
VVVJRD0yNjZlNzFhZi1lMTQ1LTQ5NWItYjM4Zi0yZGExZjQ0NDg4NWQgcm8gIHNpbmdsZQ0KaW5p
dHJkCQkvaW5pdHJkLmltZy0yLjYuMzItNDItZ2VuZXJpYw0KDQp0aXRsZQkJVWJ1bnR1IDEwLjA0
LjQgTFRTLCBrZXJuZWwgMi42LjMyLTM4LWdlbmVyaWMNCnV1aWQJCThlZGYwZTFiLTVmOWMtNGNh
MC04Zjg4LTc3ZDM1YWY4NzA5Mw0Ka2VybmVsCQkvdm1saW51ei0yLjYuMzItMzgtZ2VuZXJpYyBy
b290PVVVSUQ9MjY2ZTcxYWYtZTE0NS00OTViLWIzOGYtMmRhMWY0NDQ4ODVkIHJvIHF1aWV0IHNw
bGFzaCANCmluaXRyZAkJL2luaXRyZC5pbWctMi42LjMyLTM4LWdlbmVyaWMNCg0KdGl0bGUJCVVi
dW50dSAxMC4wNC40IExUUywga2VybmVsIDIuNi4zMi0zOC1nZW5lcmljIChyZWNvdmVyeSBtb2Rl
KQ0KdXVpZAkJOGVkZjBlMWItNWY5Yy00Y2EwLThmODgtNzdkMzVhZjg3MDkzDQprZXJuZWwJCS92
bWxpbnV6LTIuNi4zMi0zOC1nZW5lcmljIHJvb3Q9VVVJRD0yNjZlNzFhZi1lMTQ1LTQ5NWItYjM4
Zi0yZGExZjQ0NDg4NWQgcm8gIHNpbmdsZQ0KaW5pdHJkCQkvaW5pdHJkLmltZy0yLjYuMzItMzgt
Z2VuZXJpYw0KDQp0aXRsZQkJQ2hhaW5sb2FkIGludG8gR1JVQiAyDQpyb290CQk4ZWRmMGUxYi01
ZjljLTRjYTAtOGY4OC03N2QzNWFmODcwOTMNCmtlcm5lbAkJL2Jvb3QvZ3J1Yi9jb3JlLmltZw0K
DQojdGl0bGUJCVVidW50dSAxMC4wNC40IExUUywgbWVtdGVzdDg2Kw0KI3V1aWQJCThlZGYwZTFi
LTVmOWMtNGNhMC04Zjg4LTc3ZDM1YWY4NzA5Mw0KI2tlcm5lbAkJL21lbXRlc3Q4NisuYmluDQoN
CiMjIyBFTkQgREVCSUFOIEFVVE9NQUdJQyBLRVJORUxTIExJU1QNCg0KPT09PT09PT09PT09PT09
PT09PT09PT09PT09IHNkYTYvYm9vdC9ncnViL2dydWIuY2ZnOiA9PT09PT09PT09PT09PT09PT09
PT09PT09PT0NCg0KIw0KIyBETyBOT1QgRURJVCBUSElTIEZJTEUNCiMNCiMgSXQgaXMgYXV0b21h
dGljYWxseSBnZW5lcmF0ZWQgYnkgL3Vzci9zYmluL2dydWItbWtjb25maWcgdXNpbmcgdGVtcGxh
dGVzDQojIGZyb20gL2V0Yy9ncnViLmQgYW5kIHNldHRpbmdzIGZyb20gL2V0Yy9kZWZhdWx0L2dy
dWINCiMNCg0KIyMjIEJFR0lOIC9ldGMvZ3J1Yi5kLzAwX2hlYWRlciAjIyMNCmlmIFsgLXMgJHBy
ZWZpeC9ncnViZW52IF07IHRoZW4NCiAgbG9hZF9lbnYNCmZpDQpzZXQgZGVmYXVsdD0iMCINCmlm
IFsgJHtwcmV2X3NhdmVkX2VudHJ5fSBdOyB0aGVuDQogIHNldCBzYXZlZF9lbnRyeT0ke3ByZXZf
c2F2ZWRfZW50cnl9DQogIHNhdmVfZW52IHNhdmVkX2VudHJ5DQogIHNldCBwcmV2X3NhdmVkX2Vu
dHJ5PQ0KICBzYXZlX2VudiBwcmV2X3NhdmVkX2VudHJ5DQogIHNldCBib290X29uY2U9dHJ1ZQ0K
ZmkNCg0KZnVuY3Rpb24gc2F2ZWRlZmF1bHQgew0KICBpZiBbIC16ICR7Ym9vdF9vbmNlfSBdOyB0
aGVuDQogICAgc2F2ZWRfZW50cnk9JHtjaG9zZW59DQogICAgc2F2ZV9lbnYgc2F2ZWRfZW50cnkN
CiAgZmkNCn0NCg0KZnVuY3Rpb24gcmVjb3JkZmFpbCB7DQogIHNldCByZWNvcmRmYWlsPTENCiAg
aWYgWyAtbiAke2hhdmVfZ3J1YmVudn0gXTsgdGhlbiBpZiBbIC16ICR7Ym9vdF9vbmNlfSBdOyB0
aGVuIHNhdmVfZW52IHJlY29yZGZhaWw7IGZpOyBmaQ0KfQ0KaW5zbW9kIGV4dDINCnNldCByb290
PScoaGQwLDYpJw0Kc2VhcmNoIC0tbm8tZmxvcHB5IC0tZnMtdXVpZCAtLXNldCAyNjZlNzFhZi1l
MTQ1LTQ5NWItYjM4Zi0yZGExZjQ0NDg4NWQNCmlmIGxvYWRmb250IC91c3Ivc2hhcmUvZ3J1Yi91
bmljb2RlLnBmMiA7IHRoZW4NCiAgc2V0IGdmeG1vZGU9NjQweDQ4MA0KICBpbnNtb2QgZ2Z4dGVy
bQ0KICBpbnNtb2QgdmJlDQogIGlmIHRlcm1pbmFsX291dHB1dCBnZnh0ZXJtIDsgdGhlbiB0cnVl
IDsgZWxzZQ0KICAgICMgRm9yIGJhY2t3YXJkIGNvbXBhdGliaWxpdHkgd2l0aCB2ZXJzaW9ucyBv
ZiB0ZXJtaW5hbC5tb2QgdGhhdCBkb24ndA0KICAgICMgdW5kZXJzdGFuZCB0ZXJtaW5hbF9vdXRw
dXQNCiAgICB0ZXJtaW5hbCBnZnh0ZXJtDQogIGZpDQpmaQ0KaW5zbW9kIGV4dDINCnNldCByb290
PScoaGQwLDIpJw0Kc2VhcmNoIC0tbm8tZmxvcHB5IC0tZnMtdXVpZCAtLXNldCA4ZWRmMGUxYi01
ZjljLTRjYTAtOGY4OC03N2QzNWFmODcwOTMNCnNldCBsb2NhbGVfZGlyPSgkcm9vdCkvZ3J1Yi9s
b2NhbGUNCnNldCBsYW5nPWVuDQppbnNtb2QgZ2V0dGV4dA0KaWYgWyAke3JlY29yZGZhaWx9ID0g
MSBdOyB0aGVuDQogIHNldCB0aW1lb3V0PS0xDQplbHNlDQogIHNldCB0aW1lb3V0PTEwDQpmaQ0K
IyMjIEVORCAvZXRjL2dydWIuZC8wMF9oZWFkZXIgIyMjDQoNCiMjIyBCRUdJTiAvZXRjL2dydWIu
ZC8wNV9kZWJpYW5fdGhlbWUgIyMjDQpzZXQgbWVudV9jb2xvcl9ub3JtYWw9d2hpdGUvYmxhY2sN
CnNldCBtZW51X2NvbG9yX2hpZ2hsaWdodD1ibGFjay9saWdodC1ncmF5DQojIyMgRU5EIC9ldGMv
Z3J1Yi5kLzA1X2RlYmlhbl90aGVtZSAjIyMNCg0KIyMjIEJFR0lOIC9ldGMvZ3J1Yi5kLzA4X3hl
biAjIyMNCm1lbnVlbnRyeSAiWGVuIFVuc3RhYmxlIDQuMiBSQzMgLyBEZWJpYW4gU3F1ZWV6ZSBr
ZXJuZWwgMi42LjMyLjQwIiB7DQogICAgICAgIGluc21vZCBleHQyDQogICAgICAgIHNldCByb290
PScoaGQwLDQpJw0KICAgICAgICBtdWx0aWJvb3QgKGhkMCwxKS94ZW4tNC4yLjAtcmMzLXByZS5n
eiBkdW1teQ0KICAgICAgICBtb2R1bGUgKGhkMCwxKS92bWxpbnV6LTIuNi4zMi40MCBkdW1teSBy
b290PVVVSUQ9MjY2ZTcxYWYtZTE0NS00OTViLWIzOGYtMmRhMWY0NDQ4ODVkIHJvIHF1aWV0IGNv
bnNvbGU9dHR5MCBub21vZGVzZXQgcm9vdGRlbGF5PTEzMA0KICAgICAgICBtb2R1bGUgKGhkMCwx
KS9pbml0cmQuaW1nLTIuNi4zMi40MA0KfQ0KIyMjIEVORCAvZXRjL2dydWIuZC8wOF94ZW4gIyMj
DQoNCiMjIyBCRUdJTiAvZXRjL2dydWIuZC8xMF9saW51eCAjIyMNCm1lbnVlbnRyeSAnVWJ1bnR1
LCB3aXRoIExpbnV4IDMuMS4wLXJjOSsnIC0tY2xhc3MgdWJ1bnR1IC0tY2xhc3MgZ251LWxpbnV4
IC0tY2xhc3MgZ251IC0tY2xhc3Mgb3Mgew0KCXJlY29yZGZhaWwNCglpbnNtb2QgZXh0Mg0KCXNl
dCByb290PScoaGQwLDIpJw0KCXNlYXJjaCAtLW5vLWZsb3BweSAtLWZzLXV1aWQgLS1zZXQgOGVk
ZjBlMWItNWY5Yy00Y2EwLThmODgtNzdkMzVhZjg3MDkzDQoJbGludXgJL3ZtbGludXotMy4xLjAt
cmM5KyByb290PS9kZXYvc2RhNiBybyAgIHF1aWV0IHNwbGFzaA0KfQ0KbWVudWVudHJ5ICdVYnVu
dHUsIHdpdGggTGludXggMi42LjMyLjQwJyAtLWNsYXNzIHVidW50dSAtLWNsYXNzIGdudS1saW51
eCAtLWNsYXNzIGdudSAtLWNsYXNzIG9zIHsNCglyZWNvcmRmYWlsDQoJaW5zbW9kIGV4dDINCglz
ZXQgcm9vdD0nKGhkMCwyKScNCglzZWFyY2ggLS1uby1mbG9wcHkgLS1mcy11dWlkIC0tc2V0IDhl
ZGYwZTFiLTVmOWMtNGNhMC04Zjg4LTc3ZDM1YWY4NzA5Mw0KCWxpbnV4CS92bWxpbnV6LTIuNi4z
Mi40MCByb290PVVVSUQ9MjY2ZTcxYWYtZTE0NS00OTViLWIzOGYtMmRhMWY0NDQ4ODVkIHJvICAg
cXVpZXQgc3BsYXNoDQoJaW5pdHJkCS9pbml0cmQuaW1nLTIuNi4zMi40MA0KfQ0KbWVudWVudHJ5
ICdVYnVudHUsIHdpdGggTGludXggMi42LjMyLTQyLWdlbmVyaWMnIC0tY2xhc3MgdWJ1bnR1IC0t
Y2xhc3MgZ251LWxpbnV4IC0tY2xhc3MgZ251IC0tY2xhc3Mgb3Mgew0KCXJlY29yZGZhaWwNCglp
bnNtb2QgZXh0Mg0KCXNldCByb290PScoaGQwLDIpJw0KCXNlYXJjaCAtLW5vLWZsb3BweSAtLWZz
LXV1aWQgLS1zZXQgOGVkZjBlMWItNWY5Yy00Y2EwLThmODgtNzdkMzVhZjg3MDkzDQoJbGludXgJ
L3ZtbGludXotMi42LjMyLTQyLWdlbmVyaWMgcm9vdD1VVUlEPTI2NmU3MWFmLWUxNDUtNDk1Yi1i
MzhmLTJkYTFmNDQ0ODg1ZCBybyAgIHF1aWV0IHNwbGFzaA0KCWluaXRyZAkvaW5pdHJkLmltZy0y
LjYuMzItNDItZ2VuZXJpYw0KfQ0KbWVudWVudHJ5ICdVYnVudHUsIHdpdGggTGludXggMi42LjMy
LTM4LWdlbmVyaWMnIC0tY2xhc3MgdWJ1bnR1IC0tY2xhc3MgZ251LWxpbnV4IC0tY2xhc3MgZ251
IC0tY2xhc3Mgb3Mgew0KCXJlY29yZGZhaWwNCglpbnNtb2QgZXh0Mg0KCXNldCByb290PScoaGQw
LDIpJw0KCXNlYXJjaCAtLW5vLWZsb3BweSAtLWZzLXV1aWQgLS1zZXQgOGVkZjBlMWItNWY5Yy00
Y2EwLThmODgtNzdkMzVhZjg3MDkzDQoJbGludXgJL3ZtbGludXotMi42LjMyLTM4LWdlbmVyaWMg
cm9vdD1VVUlEPTI2NmU3MWFmLWUxNDUtNDk1Yi1iMzhmLTJkYTFmNDQ0ODg1ZCBybyAgIHF1aWV0
IHNwbGFzaA0KCWluaXRyZAkvaW5pdHJkLmltZy0yLjYuMzItMzgtZ2VuZXJpYw0KfQ0KIyMjIEVO
RCAvZXRjL2dydWIuZC8xMF9saW51eCAjIyMNCg0KIyMjIEJFR0lOIC9ldGMvZ3J1Yi5kLzIwX21l
bXRlc3Q4NisgIyMjDQptZW51ZW50cnkgIk1lbW9yeSB0ZXN0IChtZW10ZXN0ODYrKSIgew0KCWlu
c21vZCBleHQyDQoJc2V0IHJvb3Q9JyhoZDAsMiknDQoJc2VhcmNoIC0tbm8tZmxvcHB5IC0tZnMt
dXVpZCAtLXNldCA4ZWRmMGUxYi01ZjljLTRjYTAtOGY4OC03N2QzNWFmODcwOTMNCglsaW51eDE2
CS9tZW10ZXN0ODYrLmJpbg0KfQ0KbWVudWVudHJ5ICJNZW1vcnkgdGVzdCAobWVtdGVzdDg2Kywg
c2VyaWFsIGNvbnNvbGUgMTE1MjAwKSIgew0KCWluc21vZCBleHQyDQoJc2V0IHJvb3Q9JyhoZDAs
MiknDQoJc2VhcmNoIC0tbm8tZmxvcHB5IC0tZnMtdXVpZCAtLXNldCA4ZWRmMGUxYi01ZjljLTRj
YTAtOGY4OC03N2QzNWFmODcwOTMNCglsaW51eDE2CS9tZW10ZXN0ODYrLmJpbiBjb25zb2xlPXR0
eVMwLDExNTIwMG44DQp9DQojIyMgRU5EIC9ldGMvZ3J1Yi5kLzIwX21lbXRlc3Q4NisgIyMjDQoN
CiMjIyBCRUdJTiAvZXRjL2dydWIuZC8zMF9vcy1wcm9iZXIgIyMjDQojIyMgRU5EIC9ldGMvZ3J1
Yi5kLzMwX29zLXByb2JlciAjIyMNCg0KIyMjIEJFR0lOIC9ldGMvZ3J1Yi5kLzQwX2N1c3RvbSAj
IyMNCiMgVGhpcyBmaWxlIHByb3ZpZGVzIGFuIGVhc3kgd2F5IHRvIGFkZCBjdXN0b20gbWVudSBl
bnRyaWVzLiAgU2ltcGx5IHR5cGUgdGhlDQojIG1lbnUgZW50cmllcyB5b3Ugd2FudCB0byBhZGQg
YWZ0ZXIgdGhpcyBjb21tZW50LiAgQmUgY2FyZWZ1bCBub3QgdG8gY2hhbmdlDQojIHRoZSAnZXhl
YyB0YWlsJyBsaW5lIGFib3ZlLg0KIyMjIEVORCAvZXRjL2dydWIuZC80MF9jdXN0b20gIyMjDQoN
Cj09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT0gc2RhNi9ldGMvZnN0YWI6ID09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT0NCg0KIyAvZXRjL2ZzdGFiOiBzdGF0aWMgZmlsZSBzeXN0
ZW0gaW5mb3JtYXRpb24uDQojDQojIFVzZSAnYmxraWQgLW8gdmFsdWUgLXMgVVVJRCcgdG8gcHJp
bnQgdGhlIHVuaXZlcnNhbGx5IHVuaXF1ZSBpZGVudGlmaWVyDQojIGZvciBhIGRldmljZTsgdGhp
cyBtYXkgYmUgdXNlZCB3aXRoIFVVSUQ9IGFzIGEgbW9yZSByb2J1c3Qgd2F5IHRvIG5hbWUNCiMg
ZGV2aWNlcyB0aGF0IHdvcmtzIGV2ZW4gaWYgZGlza3MgYXJlIGFkZGVkIGFuZCByZW1vdmVkLiBT
ZWUgZnN0YWIoNSkuDQojDQojIDxmaWxlIHN5c3RlbT4gPG1vdW50IHBvaW50PiAgIDx0eXBlPiAg
PG9wdGlvbnM+ICAgICAgIDxkdW1wPiAgPHBhc3M+DQpwcm9jICAgICAgICAgICAgL3Byb2MgICAg
ICAgICAgIHByb2MgICAgbm9kZXYsbm9leGVjLG5vc3VpZCAwICAgICAgIDANCiMgLyB3YXMgb24g
L2Rldi9zZGE2IGR1cmluZyBpbnN0YWxsYXRpb24NClVVSUQ9MjY2ZTcxYWYtZTE0NS00OTViLWIz
OGYtMmRhMWY0NDQ4ODVkIC8gICAgICAgICAgICAgICBleHQ0ICAgIGVycm9ycz1yZW1vdW50LXJv
IDAgICAgICAgMQ0KIyAvYm9vdCB3YXMgb24gL2Rldi9zZGEyIGR1cmluZyBpbnN0YWxsYXRpb24N
ClVVSUQ9OGVkZjBlMWItNWY5Yy00Y2EwLThmODgtNzdkMzVhZjg3MDkzIC9ib290ICAgICAgICAg
ICBleHQ0ICAgIGRlZmF1bHRzICAgICAgICAwICAgICAgIDINCiMgc3dhcCB3YXMgb24gL2Rldi9z
ZGE1IGR1cmluZyBpbnN0YWxsYXRpb24NClVVSUQ9YjVlNWM0MGUtMTkzYy00YzZkLTkwNjgtNDVj
YzAzM2I2NmE5IG5vbmUgICAgICAgICAgICBzd2FwICAgIHN3ICAgICAgICAgICAgICAwICAgICAg
IDANCg0KbmFzLTFnOi9leHBvcnQvdXRpbHMvc2NyYXRjaAkvc2FwbW50L3NjcmF0Y2gJCW5mcyAJ
ZGVmYXVsdHMgMCAwDQpuYXMtMWc6L2V4cG9ydC92aXJ0dWFsX21hY2hpbmVzIAkvc2FwbW50L3Zp
cnR1YWxfbWFjaGluZXMgCW5mcyAJZGVmYXVsdHMgMCAwDQoNCj09PT09PT09PT09PT09PT09PT0g
c2RhNjogTG9jYXRpb24gb2YgZmlsZXMgbG9hZGVkIGJ5IEdydWI6ID09PT09PT09PT09PT09PT09
PT0NCg0KDQogIDIwLjBHQjogYm9vdC9ncnViL2dydWIuY2ZnDQogIDIwLjBHQjogYm9vdC9ncnVi
L21lbnUubHN0DQogIDIwLjBHQjogYm9vdC9ncnViL3N0YWdlMg0KICAyMC4wR0I6IGJvb3QvaW5p
dHJkLmltZy0yLjYuMzItMzgtZ2VuZXJpYw0KICAyMC4wR0I6IGJvb3QvaW5pdHJkLmltZy0yLjYu
MzIuNDANCiAgMjAuMEdCOiBib290L2luaXRyZC5pbWctMi42LjMyLTQyLWdlbmVyaWMNCiAgMjAu
MEdCOiBib290L3ZtbGludXotMi42LjMyLTM4LWdlbmVyaWMNCiAgMjAuMEdCOiBib290L3ZtbGlu
dXotMi42LjMyLjQwDQogIDIwLjBHQjogYm9vdC92bWxpbnV6LTIuNi4zMi00Mi1nZW5lcmljDQog
IDIwLjBHQjogYm9vdC92bWxpbnV6LTMuMS4wLXJjOSsNCiAgMjAuMEdCOiBpbml0cmQuaW1nDQog
IDIwLjBHQjogaW5pdHJkLmltZy5vbGQNCiAgMjAuMEdCOiB2bWxpbnV6DQogIDIwLjBHQjogdm1s
aW51ei5vbGQNCg0KPT09PT09PT09PT09PT09PT09PT09PT09PT09IHNkYTcvYm9vdC9ncnViL2dy
dWIuY2ZnOiA9PT09PT09PT09PT09PT09PT09PT09PT09PT0NCg0KIw0KIyBETyBOT1QgRURJVCBU
SElTIEZJTEUNCiMNCiMgSXQgaXMgYXV0b21hdGljYWxseSBnZW5lcmF0ZWQgYnkgL3Vzci9zYmlu
L2dydWItbWtjb25maWcgdXNpbmcgdGVtcGxhdGVzDQojIGZyb20gL2V0Yy9ncnViLmQgYW5kIHNl
dHRpbmdzIGZyb20gL2V0Yy9kZWZhdWx0L2dydWINCiMNCg0KIyMjIEJFR0lOIC9ldGMvZ3J1Yi5k
LzAwX2hlYWRlciAjIyMNCmlmIFsgLXMgJHByZWZpeC9ncnViZW52IF07IHRoZW4NCiAgbG9hZF9l
bnYNCmZpDQpzZXQgZGVmYXVsdD0iMCINCmlmIFsgJHtwcmV2X3NhdmVkX2VudHJ5fSBdOyB0aGVu
DQogIHNldCBzYXZlZF9lbnRyeT0ke3ByZXZfc2F2ZWRfZW50cnl9DQogIHNhdmVfZW52IHNhdmVk
X2VudHJ5DQogIHNldCBwcmV2X3NhdmVkX2VudHJ5PQ0KICBzYXZlX2VudiBwcmV2X3NhdmVkX2Vu
dHJ5DQogIHNldCBib290X29uY2U9dHJ1ZQ0KZmkNCg0KZnVuY3Rpb24gc2F2ZWRlZmF1bHQgew0K
ICBpZiBbIC16ICR7Ym9vdF9vbmNlfSBdOyB0aGVuDQogICAgc2F2ZWRfZW50cnk9JHtjaG9zZW59
DQogICAgc2F2ZV9lbnYgc2F2ZWRfZW50cnkNCiAgZmkNCn0NCg0KZnVuY3Rpb24gcmVjb3JkZmFp
bCB7DQogIHNldCByZWNvcmRmYWlsPTENCiAgaWYgWyAtbiAke2hhdmVfZ3J1YmVudn0gXTsgdGhl
biBpZiBbIC16ICR7Ym9vdF9vbmNlfSBdOyB0aGVuIHNhdmVfZW52IHJlY29yZGZhaWw7IGZpOyBm
aQ0KfQ0KaW5zbW9kIGV4dDINCnNldCByb290PScoaGQwLDcpJw0Kc2VhcmNoIC0tbm8tZmxvcHB5
IC0tZnMtdXVpZCAtLXNldCA1MmYzMmU1ZC05YjQzLTQ5NzItYmIyZS1lOTcxMzNkZDJjODANCmlm
IGxvYWRmb250IC91c3Ivc2hhcmUvZ3J1Yi91bmljb2RlLnBmMiA7IHRoZW4NCiAgc2V0IGdmeG1v
ZGU9NjQweDQ4MA0KICBpbnNtb2QgZ2Z4dGVybQ0KICBpbnNtb2QgdmJlDQogIGlmIHRlcm1pbmFs
X291dHB1dCBnZnh0ZXJtIDsgdGhlbiB0cnVlIDsgZWxzZQ0KICAgICMgRm9yIGJhY2t3YXJkIGNv
bXBhdGliaWxpdHkgd2l0aCB2ZXJzaW9ucyBvZiB0ZXJtaW5hbC5tb2QgdGhhdCBkb24ndA0KICAg
ICMgdW5kZXJzdGFuZCB0ZXJtaW5hbF9vdXRwdXQNCiAgICB0ZXJtaW5hbCBnZnh0ZXJtDQogIGZp
DQpmaQ0KaW5zbW9kIGV4dDINCnNldCByb290PScoaGQwLDcpJw0Kc2VhcmNoIC0tbm8tZmxvcHB5
IC0tZnMtdXVpZCAtLXNldCA1MmYzMmU1ZC05YjQzLTQ5NzItYmIyZS1lOTcxMzNkZDJjODANCnNl
dCBsb2NhbGVfZGlyPSgkcm9vdCkvYm9vdC9ncnViL2xvY2FsZQ0Kc2V0IGxhbmc9ZW4NCmluc21v
ZCBnZXR0ZXh0DQppZiBbICR7cmVjb3JkZmFpbH0gPSAxIF07IHRoZW4NCiAgc2V0IHRpbWVvdXQ9
LTENCmVsc2UNCiAgc2V0IHRpbWVvdXQ9LTENCmZpDQojIyMgRU5EIC9ldGMvZ3J1Yi5kLzAwX2hl
YWRlciAjIyMNCg0KIyMjIEJFR0lOIC9ldGMvZ3J1Yi5kLzA1X2RlYmlhbl90aGVtZSAjIyMNCnNl
dCBtZW51X2NvbG9yX25vcm1hbD13aGl0ZS9ibGFjaw0Kc2V0IG1lbnVfY29sb3JfaGlnaGxpZ2h0
PWJsYWNrL2xpZ2h0LWdyYXkNCiMjIyBFTkQgL2V0Yy9ncnViLmQvMDVfZGViaWFuX3RoZW1lICMj
Iw0KDQojIyMgQkVHSU4gL2V0Yy9ncnViLmQvMDhfeGVuICMjIw0KbWVudWVudHJ5ICJYZW4gVW5z
dGFibGUgNC4yIFJDMyAvIERlYmlhbiBTcXVlZXplIGtlcm5lbCAyLjYuMzIuNDAiIHsNCiAgICAg
ICAgaW5zbW9kIGV4dDINCiAgICAgICAgaW5zbW9kIGV4dDMNCiAgICAgICAgc2V0IHJvb3Q9Jyho
ZDAsNyknDQogICAgICAgIG11bHRpYm9vdCAvYm9vdC94ZW4tNC4yLjAtcmMzLXByZS5neiBkdW1t
eQ0KICAgICAgICBtb2R1bGUgL2Jvb3Qvdm1saW51ei0yLjYuMzIuNDAgZHVtbXkgcm9vdD0vZGV2
L3NkYTcgcm8gcXVpZXQgY29uc29sZT10dHkwIG5vbW9kZXNldCByb290ZGVsYXk9NTANCiAgICAg
ICAgbW9kdWxlIC9ib290L2luaXRyZC5pbWctMi42LjMyLjQwDQp9DQojIyMgRU5EIC9ldGMvZ3J1
Yi5kLzA4X3hlbiAjIyMNCg0KIyMjIEJFR0lOIC9ldGMvZ3J1Yi5kLzEwX2xpbnV4ICMjIw0KbWVu
dWVudHJ5ICdVYnVudHUsIHdpdGggTGludXggMy4xLjAtcmM5KycgLS1jbGFzcyB1YnVudHUgLS1j
bGFzcyBnbnUtbGludXggLS1jbGFzcyBnbnUgLS1jbGFzcyBvcyB7DQoJcmVjb3JkZmFpbA0KCWlu
c21vZCBleHQyDQoJc2V0IHJvb3Q9JyhoZDAsNyknDQoJc2VhcmNoIC0tbm8tZmxvcHB5IC0tZnMt
dXVpZCAtLXNldCA1MmYzMmU1ZC05YjQzLTQ5NzItYmIyZS1lOTcxMzNkZDJjODANCglsaW51eAkv
Ym9vdC92bWxpbnV6LTMuMS4wLXJjOSsgcm9vdD0vZGV2L3NkYTcgcm8gICBxdWlldCBzcGxhc2gN
Cn0NCm1lbnVlbnRyeSAnVWJ1bnR1LCB3aXRoIExpbnV4IDMuMS4wLXJjOSsgKHJlY292ZXJ5IG1v
ZGUpJyAtLWNsYXNzIHVidW50dSAtLWNsYXNzIGdudS1saW51eCAtLWNsYXNzIGdudSAtLWNsYXNz
IG9zIHsNCglyZWNvcmRmYWlsDQoJaW5zbW9kIGV4dDINCglzZXQgcm9vdD0nKGhkMCw3KScNCglz
ZWFyY2ggLS1uby1mbG9wcHkgLS1mcy11dWlkIC0tc2V0IDUyZjMyZTVkLTliNDMtNDk3Mi1iYjJl
LWU5NzEzM2RkMmM4MA0KCWVjaG8JJ0xvYWRpbmcgTGludXggMy4xLjAtcmM5KyAuLi4nDQoJbGlu
dXgJL2Jvb3Qvdm1saW51ei0zLjEuMC1yYzkrIHJvb3Q9L2Rldi9zZGE3IHJvIHNpbmdsZSANCgll
Y2hvCSdMb2FkaW5nIGluaXRpYWwgcmFtZGlzayAuLi4nDQp9DQptZW51ZW50cnkgJ1VidW50dSwg
d2l0aCBMaW51eCAyLjYuMzIuNDAnIC0tY2xhc3MgdWJ1bnR1IC0tY2xhc3MgZ251LWxpbnV4IC0t
Y2xhc3MgZ251IC0tY2xhc3Mgb3Mgew0KCXJlY29yZGZhaWwNCglpbnNtb2QgZXh0Mg0KCXNldCBy
b290PScoaGQwLDcpJw0KCXNlYXJjaCAtLW5vLWZsb3BweSAtLWZzLXV1aWQgLS1zZXQgNTJmMzJl
NWQtOWI0My00OTcyLWJiMmUtZTk3MTMzZGQyYzgwDQoJbGludXgJL2Jvb3Qvdm1saW51ei0yLjYu
MzIuNDAgcm9vdD1VVUlEPTUyZjMyZTVkLTliNDMtNDk3Mi1iYjJlLWU5NzEzM2RkMmM4MCBybyAg
IHF1aWV0IHNwbGFzaA0KCWluaXRyZAkvYm9vdC9pbml0cmQuaW1nLTIuNi4zMi40MA0KfQ0KbWVu
dWVudHJ5ICdVYnVudHUsIHdpdGggTGludXggMi42LjMyLjQwIChyZWNvdmVyeSBtb2RlKScgLS1j
bGFzcyB1YnVudHUgLS1jbGFzcyBnbnUtbGludXggLS1jbGFzcyBnbnUgLS1jbGFzcyBvcyB7DQoJ
cmVjb3JkZmFpbA0KCWluc21vZCBleHQyDQoJc2V0IHJvb3Q9JyhoZDAsNyknDQoJc2VhcmNoIC0t
bm8tZmxvcHB5IC0tZnMtdXVpZCAtLXNldCA1MmYzMmU1ZC05YjQzLTQ5NzItYmIyZS1lOTcxMzNk
ZDJjODANCgllY2hvCSdMb2FkaW5nIExpbnV4IDIuNi4zMi40MCAuLi4nDQoJbGludXgJL2Jvb3Qv
dm1saW51ei0yLjYuMzIuNDAgcm9vdD1VVUlEPTUyZjMyZTVkLTliNDMtNDk3Mi1iYjJlLWU5NzEz
M2RkMmM4MCBybyBzaW5nbGUgDQoJZWNobwknTG9hZGluZyBpbml0aWFsIHJhbWRpc2sgLi4uJw0K
CWluaXRyZAkvYm9vdC9pbml0cmQuaW1nLTIuNi4zMi40MA0KfQ0KbWVudWVudHJ5ICdVYnVudHUs
IHdpdGggTGludXggMi42LjMyLTQyLWdlbmVyaWMnIC0tY2xhc3MgdWJ1bnR1IC0tY2xhc3MgZ251
LWxpbnV4IC0tY2xhc3MgZ251IC0tY2xhc3Mgb3Mgew0KCXJlY29yZGZhaWwNCglpbnNtb2QgZXh0
Mg0KCXNldCByb290PScoaGQwLDcpJw0KCXNlYXJjaCAtLW5vLWZsb3BweSAtLWZzLXV1aWQgLS1z
ZXQgNTJmMzJlNWQtOWI0My00OTcyLWJiMmUtZTk3MTMzZGQyYzgwDQoJbGludXgJL2Jvb3Qvdm1s
aW51ei0yLjYuMzItNDItZ2VuZXJpYyByb290PVVVSUQ9NTJmMzJlNWQtOWI0My00OTcyLWJiMmUt
ZTk3MTMzZGQyYzgwIHJvICAgcXVpZXQgc3BsYXNoDQoJaW5pdHJkCS9ib290L2luaXRyZC5pbWct
Mi42LjMyLTQyLWdlbmVyaWMNCn0NCm1lbnVlbnRyeSAnVWJ1bnR1LCB3aXRoIExpbnV4IDIuNi4z
Mi00Mi1nZW5lcmljIChyZWNvdmVyeSBtb2RlKScgLS1jbGFzcyB1YnVudHUgLS1jbGFzcyBnbnUt
bGludXggLS1jbGFzcyBnbnUgLS1jbGFzcyBvcyB7DQoJcmVjb3JkZmFpbA0KCWluc21vZCBleHQy
DQoJc2V0IHJvb3Q9JyhoZDAsNyknDQoJc2VhcmNoIC0tbm8tZmxvcHB5IC0tZnMtdXVpZCAtLXNl
dCA1MmYzMmU1ZC05YjQzLTQ5NzItYmIyZS1lOTcxMzNkZDJjODANCgllY2hvCSdMb2FkaW5nIExp
bnV4IDIuNi4zMi00Mi1nZW5lcmljIC4uLicNCglsaW51eAkvYm9vdC92bWxpbnV6LTIuNi4zMi00
Mi1nZW5lcmljIHJvb3Q9VVVJRD01MmYzMmU1ZC05YjQzLTQ5NzItYmIyZS1lOTcxMzNkZDJjODAg
cm8gc2luZ2xlIA0KCWVjaG8JJ0xvYWRpbmcgaW5pdGlhbCByYW1kaXNrIC4uLicNCglpbml0cmQJ
L2Jvb3QvaW5pdHJkLmltZy0yLjYuMzItNDItZ2VuZXJpYw0KfQ0KbWVudWVudHJ5ICdVYnVudHUs
IHdpdGggTGludXggMi42LjMyLTM4LWdlbmVyaWMnIC0tY2xhc3MgdWJ1bnR1IC0tY2xhc3MgZ251
LWxpbnV4IC0tY2xhc3MgZ251IC0tY2xhc3Mgb3Mgew0KCXJlY29yZGZhaWwNCglpbnNtb2QgZXh0
Mg0KCXNldCByb290PScoaGQwLDcpJw0KCXNlYXJjaCAtLW5vLWZsb3BweSAtLWZzLXV1aWQgLS1z
ZXQgNTJmMzJlNWQtOWI0My00OTcyLWJiMmUtZTk3MTMzZGQyYzgwDQoJbGludXgJL2Jvb3Qvdm1s
aW51ei0yLjYuMzItMzgtZ2VuZXJpYyByb290PVVVSUQ9NTJmMzJlNWQtOWI0My00OTcyLWJiMmUt
ZTk3MTMzZGQyYzgwIHJvICAgcXVpZXQgc3BsYXNoDQoJaW5pdHJkCS9ib290L2luaXRyZC5pbWct
Mi42LjMyLTM4LWdlbmVyaWMNCn0NCm1lbnVlbnRyeSAnVWJ1bnR1LCB3aXRoIExpbnV4IDIuNi4z
Mi0zOC1nZW5lcmljIChyZWNvdmVyeSBtb2RlKScgLS1jbGFzcyB1YnVudHUgLS1jbGFzcyBnbnUt
bGludXggLS1jbGFzcyBnbnUgLS1jbGFzcyBvcyB7DQoJcmVjb3JkZmFpbA0KCWluc21vZCBleHQy
DQoJc2V0IHJvb3Q9JyhoZDAsNyknDQoJc2VhcmNoIC0tbm8tZmxvcHB5IC0tZnMtdXVpZCAtLXNl
dCA1MmYzMmU1ZC05YjQzLTQ5NzItYmIyZS1lOTcxMzNkZDJjODANCgllY2hvCSdMb2FkaW5nIExp
bnV4IDIuNi4zMi0zOC1nZW5lcmljIC4uLicNCglsaW51eAkvYm9vdC92bWxpbnV6LTIuNi4zMi0z
OC1nZW5lcmljIHJvb3Q9VVVJRD01MmYzMmU1ZC05YjQzLTQ5NzItYmIyZS1lOTcxMzNkZDJjODAg
cm8gc2luZ2xlIA0KCWVjaG8JJ0xvYWRpbmcgaW5pdGlhbCByYW1kaXNrIC4uLicNCglpbml0cmQJ
L2Jvb3QvaW5pdHJkLmltZy0yLjYuMzItMzgtZ2VuZXJpYw0KfQ0KIyMjIEVORCAvZXRjL2dydWIu
ZC8xMF9saW51eCAjIyMNCg0KIyMjIEJFR0lOIC9ldGMvZ3J1Yi5kLzMwX29zLXByb2JlciAjIyMN
CiMjIyBFTkQgL2V0Yy9ncnViLmQvMzBfb3MtcHJvYmVyICMjIw0KDQojIyMgQkVHSU4gL2V0Yy9n
cnViLmQvNDBfY3VzdG9tICMjIw0KIyBUaGlzIGZpbGUgcHJvdmlkZXMgYW4gZWFzeSB3YXkgdG8g
YWRkIGN1c3RvbSBtZW51IGVudHJpZXMuICBTaW1wbHkgdHlwZSB0aGUNCiMgbWVudSBlbnRyaWVz
IHlvdSB3YW50IHRvIGFkZCBhZnRlciB0aGlzIGNvbW1lbnQuICBCZSBjYXJlZnVsIG5vdCB0byBj
aGFuZ2UNCiMgdGhlICdleGVjIHRhaWwnIGxpbmUgYWJvdmUuDQoNCg0KIyMjIEVORCAvZXRjL2dy
dWIuZC80MF9jdXN0b20gIyMjDQoNCj09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT0gc2Rh
Ny9ldGMvZnN0YWI6ID09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT0NCg0KIyAvZXRjL2Zz
dGFiOiBzdGF0aWMgZmlsZSBzeXN0ZW0gaW5mb3JtYXRpb24uDQojDQojIFVzZSAnYmxraWQgLW8g
dmFsdWUgLXMgVVVJRCcgdG8gcHJpbnQgdGhlIHVuaXZlcnNhbGx5IHVuaXF1ZSBpZGVudGlmaWVy
DQojIGZvciBhIGRldmljZTsgdGhpcyBtYXkgYmUgdXNlZCB3aXRoIFVVSUQ9IGFzIGEgbW9yZSBy
b2J1c3Qgd2F5IHRvIG5hbWUNCiMgZGV2aWNlcyB0aGF0IHdvcmtzIGV2ZW4gaWYgZGlza3MgYXJl
IGFkZGVkIGFuZCByZW1vdmVkLiBTZWUgZnN0YWIoNSkuDQojDQojIDxmaWxlIHN5c3RlbT4gPG1v
dW50IHBvaW50PiAgIDx0eXBlPiAgPG9wdGlvbnM+ICAgICAgIDxkdW1wPiAgPHBhc3M+DQpwcm9j
ICAgICAgICAgICAgL3Byb2MgICAgICAgICAgIHByb2MgICAgbm9kZXYsbm9leGVjLG5vc3VpZCAw
ICAgICAgIDANCiMgLyB3YXMgb24gL2Rldi9zZGE2IGR1cmluZyBpbnN0YWxsYXRpb24NClVVSUQ9
MjY2ZTcxYWYtZTE0NS00OTViLWIzOGYtMmRhMWY0NDQ4ODVkIC8gICAgICAgICAgICAgICBleHQ0
ICAgIGVycm9ycz1yZW1vdW50LXJvIDAgICAgICAgMQ0KIyAvYm9vdCB3YXMgb24gL2Rldi9zZGEy
IGR1cmluZyBpbnN0YWxsYXRpb24NClVVSUQ9OGVkZjBlMWItNWY5Yy00Y2EwLThmODgtNzdkMzVh
Zjg3MDkzIC9ib290ICAgICAgICAgICBleHQ0ICAgIGRlZmF1bHRzICAgICAgICAwICAgICAgIDIN
CiMgc3dhcCB3YXMgb24gL2Rldi9zZGE1IGR1cmluZyBpbnN0YWxsYXRpb24NClVVSUQ9YjVlNWM0
MGUtMTkzYy00YzZkLTkwNjgtNDVjYzAzM2I2NmE5IG5vbmUgICAgICAgICAgICBzd2FwICAgIHN3
ICAgICAgICAgICAgICAwICAgICAgIDANCg0KbmFzLTFnOi9leHBvcnQvdXRpbHMvc2NyYXRjaAkv
c2FwbW50L3NjcmF0Y2gJCW5mcyAJZGVmYXVsdHMgMCAwDQpuYXMtMWc6L2V4cG9ydC92aXJ0dWFs
X21hY2hpbmVzIAkvc2FwbW50L3ZpcnR1YWxfbWFjaGluZXMgCW5mcyAJZGVmYXVsdHMgMCAwDQoN
Cj09PT09PT09PT09PT09PT09PT0gc2RhNzogTG9jYXRpb24gb2YgZmlsZXMgbG9hZGVkIGJ5IEdy
dWI6ID09PT09PT09PT09PT09PT09PT0NCg0KDQogIDcwLjBHQjogYm9vdC9ncnViL2dydWIuY2Zn
DQogIDcwLjBHQjogYm9vdC9pbml0cmQuaW1nLTIuNi4zMi0zOC1nZW5lcmljDQogIDcwLjBHQjog
Ym9vdC9pbml0cmQuaW1nLTIuNi4zMi40MA0KICA3MC4wR0I6IGJvb3QvaW5pdHJkLmltZy0yLjYu
MzItNDItZ2VuZXJpYw0KICA3MC4wR0I6IGJvb3Qvdm1saW51ei0yLjYuMzItMzgtZ2VuZXJpYw0K
ICA3MC4wR0I6IGJvb3Qvdm1saW51ei0yLjYuMzIuNDANCiAgNzAuMEdCOiBib290L3ZtbGludXot
Mi42LjMyLTQyLWdlbmVyaWMNCiAgNzAuMEdCOiBib290L3ZtbGludXotMy4xLjAtcmM5Kw0KICA3
MC4wR0I6IGluaXRyZC5pbWcNCiAgNzAuMEdCOiBpbml0cmQuaW1nLm9sZA0KICA3MC4wR0I6IHZt
bGludXoNCiAgNzAuMEdCOiB2bWxpbnV6Lm9sZA0K

--_004_3ED5771B034E314C8AC54D893482B5F51A783DA1F5DEWDFECCR08wd_
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--_004_3ED5771B034E314C8AC54D893482B5F51A783DA1F5DEWDFECCR08wd_--


From xen-devel-bounces@lists.xen.org Wed Aug 22 11:27:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 11:27:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T495y-00032O-GW; Wed, 22 Aug 2012 11:27:34 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <conor.winchcombe@sap.com>) id 1T495w-00032I-Ly
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 11:27:33 +0000
X-Env-Sender: conor.winchcombe@sap.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1345634845!10363984!1
X-Originating-IP: [155.56.66.98]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU1LjU2LjY2Ljk4ID0+IDQ2NDE1Nw==\n,
	received_headers: No Received headers
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9860 invoked from network); 22 Aug 2012 11:27:26 -0000
Received: from smtpgw03.sap-ag.de (HELO smtpgw.sap-ag.de) (155.56.66.98)
	by server-4.tower-27.messagelabs.com with AES128-SHA encrypted SMTP;
	22 Aug 2012 11:27:26 -0000
From: "conor.winchcombe@sap.com" <conor.winchcombe@sap.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Date: Wed, 22 Aug 2012 13:27:23 +0200
Thread-Topic: Failure to boot Xen 4.1.2 kernel 2.6.32.40 with Ubuntu 10.04.4
Thread-Index: Ac2AVo2XdfeUuaSmQw+wptXOAI1/OgAAmX9Q
Message-ID: <3ED5771B034E314C8AC54D893482B5F51A783DA1F5@DEWDFECCR08.wdf.sap.corp>
Accept-Language: en-US, de-DE
Content-Language: en-US
X-MS-Has-Attach: yes
X-MS-TNEF-Correlator: 
acceptlanguage: en-US, de-DE
Content-Type: multipart/mixed;
	boundary="_004_3ED5771B034E314C8AC54D893482B5F51A783DA1F5DEWDFECCR08wd_"
MIME-Version: 1.0
Cc: "Khalid, Omer" <omer.khalid@sap.com>
Subject: [Xen-devel] Failure to boot Xen 4.1.2 kernel 2.6.32.40 with Ubuntu
	10.04.4
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--_004_3ED5771B034E314C8AC54D893482B5F51A783DA1F5DEWDFECCR08wd_
Content-Type: multipart/alternative;
	boundary="_000_3ED5771B034E314C8AC54D893482B5F51A783DA1F5DEWDFECCR08wd_"

--_000_3ED5771B034E314C8AC54D893482B5F51A783DA1F5DEWDFECCR08wd_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable

After successful creation of the Xen kernel and entry into GRUB, whenever I=
 try to boot into the Xen kernel I receive the following error:

Gave up waiting for root device. Common problems:
   - Boot args (cat /proc/cmdline)
                - Check rootdelay=3D (did the system wait long enough?)
                - Check root=3D (did the system wait for the right device?)
   - Missing modules (cat /proc/modules; ls /dev)
ALERT! /dev/sda6 does not exist. Dropping to a shell!

Here is the entry in /boot/grub/menu.lst
---------------------------------------------------------------------------=
------
title           Xen 4.1.2 / Ubuntu 10.04.4 kernel 2.6.32.40 (root=3Dsda6)
uuid            8edf0e1b-5f9c-4ca0-8f88-77d35af87093
#root           (hd0,1)
kernel          /xen-4.1.2.gz dom0_mem=3D4096M,max:4096M loglvl=3Dall guest=
_loglvl=3Dall
module          /vmlinuz-2.6.32.40 dummy=3Ddummy root=3D/dev/sda6 ro consol=
e=3Dtty0 nomodeset rootdelay=3D50
module          /initrd.img-2.6.32.40
---------------------------------------------------------------------------=
------
I have included the results of the boot info script in the attached file;
please let me know if you would prefer that I place the full text of it in
an email.
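
For comparison, the GRUB 2 entries in the attached grub.cfg pass the root
filesystem by UUID rather than by device name, which avoids depending on
/dev/sda6 appearing before the rootdelay expires. An equivalent menu.lst
stanza (a sketch only, assuming the root UUID shown in the attached fstab,
266e71af-e145-495b-b38f-2da1f444885d, is correct for this system) would be:
---------------------------------------------------------------------------=
------
title           Xen 4.1.2 / Ubuntu 10.04.4 kernel 2.6.32.40 (root by UUID)
uuid            8edf0e1b-5f9c-4ca0-8f88-77d35af87093
kernel          /xen-4.1.2.gz dom0_mem=3D4096M,max:4096M loglvl=3Dall guest=
_loglvl=3Dall
module          /vmlinuz-2.6.32.40 dummy=3Ddummy root=3DUUID=3D266e71af-e14=
5-495b-b38f-2da1f444885d ro console=3Dtty0 nomodeset rootdelay=3D50
module          /initrd.img-2.6.32.40
---------------------------------------------------------------------------=
------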


Conor Winchcombe
SAP Research Belfast
SAP (UK) Limited   I   The Concourse   I   Queen's Road   I   Queen's Islan=
d   I   Belfast BT3 9DT

mailto: conor.winchcombe@sap.com<mailto:conor.winchcombe@sap.com>  I   www.=
sap.com/research<http://www.sap.com/research>

---------------------------------------------------------------------------=
-----------------------------------------------
This communication contains information which is confidential and may also =
be privileged. It is for the exclusive use of the addressee. If you are not=
 the addressee please contact us immediately and also delete the communicat=
ion from your computer. Steps have been taken to ensure this e-mail is free=
 from computer viruses but the recipient is responsible for ensuring that i=
t is actually virus free before opening it or any attachments. Any views an=
d/or opinions expressed in this e-mail are of the author only and do not re=
present the views of SAP.

SAP (UK) Limited, Registered in England No. 2152073. Registered Office: Clo=
ckhouse Place, Bedfont Road, Feltham, Middlesex, TW14 8HD
---------------------------------------------------------------------------=
------------------------------------------------



--_000_3ED5771B034E314C8AC54D893482B5F51A783DA1F5DEWDFECCR08wd_--

--_004_3ED5771B034E314C8AC54D893482B5F51A783DA1F5DEWDFECCR08wd_
Content-Type: text/plain; name="RESULTS.txt"
Content-Description: RESULTS.txt
Content-Disposition: attachment; filename="RESULTS.txt"; size=37479;
	creation-date="Wed, 22 Aug 2012 11:11:58 GMT";
	modification-date="Wed, 22 Aug 2012 10:47:24 GMT"
Content-Transfer-Encoding: base64

PT09PT09PT09PT09PT09PT09PT09PT09PT09PT0gQm9vdCBJbmZvIFN1bW1hcnk6ID09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PQ0KDQogPT4gR3J1YjAuOTcgaXMgaW5zdGFsbGVkIGluIHRo
ZSBNQlIgb2YgL2Rldi9zZGEgYW5kIGxvb2tzIG9uIHRoZSBzYW1lIGRyaXZlIA0KICAgIGluIHBh
cnRpdGlvbiAjMiBmb3IgL2dydWIvc3RhZ2UyIGFuZCAvZ3J1Yi9tZW51LmxzdC4NCg0Kc2RhMTog
X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fX19fX19fX19fX19fXw0KDQogICAgRmlsZSBzeXN0ZW06ICAgICAgIGV4dDINCiAgICBCb290
IHNlY3RvciB0eXBlOiAgR3J1Yg0KICAgIEJvb3Qgc2VjdG9yIGluZm86ICBHcnViMC45NyBpcyBp
bnN0YWxsZWQgaW4gdGhlIGJvb3Qgc2VjdG9yIG9mIHNkYTEgYW5kIA0KICAgICAgICAgICAgICAg
ICAgICAgICBsb29rcyBhdCBzZWN0b3IgNDQ2MjY4OCBvZiB0aGUgc2FtZSBoYXJkIGRyaXZlIGZv
ciB0aGUgDQogICAgICAgICAgICAgICAgICAgICAgIHN0YWdlMiBmaWxlLiBBIHN0YWdlMiBmaWxl
IGlzIGF0IHRoaXMgbG9jYXRpb24gb24gDQogICAgICAgICAgICAgICAgICAgICAgIC9kZXYvc2Rh
LiBTdGFnZTIgbG9va3Mgb24gcGFydGl0aW9uICMyIGZvciANCiAgICAgICAgICAgICAgICAgICAg
ICAgL2dydWIvbWVudS5sc3QuDQogICAgT3BlcmF0aW5nIFN5c3RlbTogIA0KICAgIEJvb3QgZmls
ZXMvZGlyczogICANCg0Kc2RhMjogX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fXw0KDQogICAgRmlsZSBzeXN0ZW06
ICAgICAgIGV4dDQNCiAgICBCb290IHNlY3RvciB0eXBlOiAgR3J1Yg0KICAgIEJvb3Qgc2VjdG9y
IGluZm86ICBHcnViMC45NyBpcyBpbnN0YWxsZWQgaW4gdGhlIGJvb3Qgc2VjdG9yIG9mIHNkYTIg
YW5kIA0KICAgICAgICAgICAgICAgICAgICAgICBsb29rcyBhdCBzZWN0b3IgNDQ2MjY4OCBvZiB0
aGUgc2FtZSBoYXJkIGRyaXZlIGZvciB0aGUgDQogICAgICAgICAgICAgICAgICAgICAgIHN0YWdl
MiBmaWxlLiBBIHN0YWdlMiBmaWxlIGlzIGF0IHRoaXMgbG9jYXRpb24gb24gDQogICAgICAgICAg
ICAgICAgICAgICAgIC9kZXYvc2RhLiBTdGFnZTIgbG9va3Mgb24gcGFydGl0aW9uICMyIGZvciAN
CiAgICAgICAgICAgICAgICAgICAgICAgL2dydWIvbWVudS5sc3QuDQogICAgT3BlcmF0aW5nIFN5
c3RlbTogIA0KICAgIEJvb3QgZmlsZXMvZGlyczogICAvZ3J1Yi9tZW51LmxzdCAvZ3J1Yi9ncnVi
LmNmZyAvZ3J1Yi9jb3JlLmltZw0KDQpzZGEzOiBfX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fDQoNCiAgICBGaWxl
IHN5c3RlbTogICAgICAgRXh0ZW5kZWQgUGFydGl0aW9uDQogICAgQm9vdCBzZWN0b3IgdHlwZTog
IC0NCiAgICBCb290IHNlY3RvciBpbmZvOiAgDQoNCnNkYTU6IF9fX19fX19fX19fX19fX19fX19f
X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX18NCg0K
ICAgIEZpbGUgc3lzdGVtOiAgICAgICBzd2FwDQogICAgQm9vdCBzZWN0b3IgdHlwZTogIC0NCiAg
ICBCb290IHNlY3RvciBpbmZvOiAgDQoNCnNkYTY6IF9fX19fX19fX19fX19fX19fX19fX19fX19f
X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX18NCg0KICAgIEZp
bGUgc3lzdGVtOiAgICAgICBleHQ0DQogICAgQm9vdCBzZWN0b3IgdHlwZTogIC0NCiAgICBCb290
IHNlY3RvciBpbmZvOiAgDQogICAgT3BlcmF0aW5nIFN5c3RlbTogIFVidW50dSAxMC4wNC40IExU
Uw0KICAgIEJvb3QgZmlsZXMvZGlyczogICAvYm9vdC9ncnViL21lbnUubHN0IC9ib290L2dydWIv
Z3J1Yi5jZmcgL2V0Yy9mc3RhYiANCiAgICAgICAgICAgICAgICAgICAgICAgL2Jvb3QvZ3J1Yi9j
b3JlLmltZw0KDQpzZGE3OiBfX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fDQoNCiAgICBGaWxlIHN5c3RlbTogICAg
ICAgZXh0NA0KICAgIEJvb3Qgc2VjdG9yIHR5cGU6ICAtDQogICAgQm9vdCBzZWN0b3IgaW5mbzog
IA0KICAgIE9wZXJhdGluZyBTeXN0ZW06ICBVYnVudHUgMTAuMDQuNCBMVFMNCiAgICBCb290IGZp
bGVzL2RpcnM6ICAgL2Jvb3QvZ3J1Yi9ncnViLmNmZyAvZXRjL2ZzdGFiIC9ib290L2dydWIvY29y
ZS5pbWcNCg0Kc2RhODogX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fXw0KDQogICAgRmlsZSBzeXN0ZW06ICAgICAg
IGV4dDQNCiAgICBCb290IHNlY3RvciB0eXBlOiAgLQ0KICAgIEJvb3Qgc2VjdG9yIGluZm86ICAN
CiAgICBPcGVyYXRpbmcgU3lzdGVtOiAgDQogICAgQm9vdCBmaWxlcy9kaXJzOiAgIA0KDQpzZGE5
OiBfX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fX19fX19fX19fX19fX19fDQoNCiAgICBGaWxlIHN5c3RlbTogICAgICAgDQogICAgQm9vdCBz
ZWN0b3IgdHlwZTogIC0NCiAgICBCb290IHNlY3RvciBpbmZvOiAgDQogICAgTW91bnRpbmcgZmFp
bGVkOg0KbW91bnQ6IHVua25vd24gZmlsZXN5c3RlbSB0eXBlICcnDQoNCj09PT09PT09PT09PT09
PT09PT09PT09PT09PSBEcml2ZS9QYXJ0aXRpb24gSW5mbzogPT09PT09PT09PT09PT09PT09PT09
PT09PT09PT0NCg0KRHJpdmU6IHNkYSBfX19fX19fX19fX19fX19fX19fIF9fX19fX19fX19fX19f
X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fDQoNCkRpc2sgL2Rldi9zZGE6
IDE3ODguNiBHQiwgMTc4ODYxODk5Nzc2MCBieXRlcw0KMjU1IGhlYWRzLCA2MyBzZWN0b3JzL3Ry
YWNrLCAyMTc0NTMgY3lsaW5kZXJzLCB0b3RhbCAzNDkzMzk2NDgwIHNlY3RvcnMNClVuaXRzID0g
c2VjdG9ycyBvZiAxICogNTEyID0gNTEyIGJ5dGVzDQpTZWN0b3Igc2l6ZSAobG9naWNhbC9waHlz
aWNhbCk6IDUxMiBieXRlcyAvIDUxMiBieXRlcw0KDQpQYXJ0aXRpb24gIEJvb3QgICAgICAgICBT
dGFydCAgICAgICAgICAgRW5kICAgICAgICAgIFNpemUgIElkIFN5c3RlbQ0KDQovZGV2L3NkYTEg
ICAgICAgICAgICAgICAyLDA0OCAgICAgICAgIDQsMDk1ICAgICAgICAgMiwwNDggIDgzIExpbnV4
DQovZGV2L3NkYTIgICAgKiAgICAgICAgICA0LDA5NiAgICAxOSw1MzUsODcxICAgIDE5LDUzMSw3
NzYgIDgzIExpbnV4DQovZGV2L3NkYTMgICAgICAgICAgMTksNTM3LDkxOCAzLDQ5MywzOTQsNDMx
IDMsNDczLDg1Niw1MTQgICA1IEV4dGVuZGVkDQovZGV2L3NkYTUgICAgICAgICAgMTksNTM3LDky
MCAgICAzOSwwNjcsNjQ3ICAgIDE5LDUyOSw3MjggIDgyIExpbnV4IHN3YXAgLyBTb2xhcmlzDQov
ZGV2L3NkYTYgICAgICAgICAgMzksMDY5LDY5NiAgIDEzNiw3MjQsNDc5ICAgIDk3LDY1NCw3ODQg
IDgzIExpbnV4DQovZGV2L3NkYTcgICAgICAgICAxMzYsNzI2LDUyOCAgIDIzNCwzODEsMzExICAg
IDk3LDY1NCw3ODQgIDgzIExpbnV4DQovZGV2L3NkYTggICAgICAgICAyMzQsMzgzLDM2MCAgIDMz
MiwwMzgsMTQzICAgIDk3LDY1NCw3ODQgIDgzIExpbnV4DQovZGV2L3NkYTkgICAgICAgICAzMzIs
MDQwLDE5MiAzLDQ5MywzOTQsNDMxIDMsMTYxLDM1NCwyNDAgIDgzIExpbnV4DQoNCg0KYmxraWQg
LWMgL2Rldi9udWxsOiBfX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fX19fX19fX19fX19fX18NCg0KL2Rldi9zZGExOiBVVUlEPSIxZGQwMTJiYS0wNGU4LTRjODkt
YmQyNS0wZTlmODllOTkxZWIiIFRZUEU9ImV4dDIiIA0KL2Rldi9zZGEyOiBVVUlEPSI4ZWRmMGUx
Yi01ZjljLTRjYTAtOGY4OC03N2QzNWFmODcwOTMiIFRZUEU9ImV4dDQiIA0KL2Rldi9zZGE1OiBV
VUlEPSJiNWU1YzQwZS0xOTNjLTRjNmQtOTA2OC00NWNjMDMzYjY2YTkiIFRZUEU9InN3YXAiIA0K
L2Rldi9zZGE2OiBVVUlEPSIyNjZlNzFhZi1lMTQ1LTQ5NWItYjM4Zi0yZGExZjQ0NDg4NWQiIFRZ
UEU9ImV4dDQiIA0KL2Rldi9zZGE3OiBVVUlEPSI1MmYzMmU1ZC05YjQzLTQ5NzItYmIyZS1lOTcx
MzNkZDJjODAiIFRZUEU9ImV4dDQiIA0KL2Rldi9zZGE4OiBVVUlEPSIwMTVhNDBkZS04Zjk2LTRj
MWItOGZiMS0xYTIzNTc1MDU0YTYiIFRZUEU9ImV4dDQiIA0KDQo9PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09ICJtb3VudCIgb3V0cHV0OiA9PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09DQoNCi9kZXYvc2RhNiBvbiAvIHR5cGUgZXh0NCAocncsZXJyb3JzPXJlbW91bnQtcm8pDQpw
cm9jIG9uIC9wcm9jIHR5cGUgcHJvYyAocncsbm9leGVjLG5vc3VpZCxub2RldikNCm5vbmUgb24g
L3N5cyB0eXBlIHN5c2ZzIChydyxub2V4ZWMsbm9zdWlkLG5vZGV2KQ0Kbm9uZSBvbiAvc3lzL2Zz
L2Z1c2UvY29ubmVjdGlvbnMgdHlwZSBmdXNlY3RsIChydykNCm5vbmUgb24gL3N5cy9rZXJuZWwv
ZGVidWcgdHlwZSBkZWJ1Z2ZzIChydykNCm5vbmUgb24gL3N5cy9rZXJuZWwvc2VjdXJpdHkgdHlw
ZSBzZWN1cml0eWZzIChydykNCm5vbmUgb24gL2RldiB0eXBlIGRldnRtcGZzIChydyxtb2RlPTA3
NTUpDQpub25lIG9uIC9kZXYvcHRzIHR5cGUgZGV2cHRzIChydyxub2V4ZWMsbm9zdWlkLGdpZD01
LG1vZGU9MDYyMCkNCm5vbmUgb24gL2Rldi9zaG0gdHlwZSB0bXBmcyAocncsbm9zdWlkLG5vZGV2
KQ0Kbm9uZSBvbiAvdmFyL3J1biB0eXBlIHRtcGZzIChydyxub3N1aWQsbW9kZT0wNzU1KQ0Kbm9u
ZSBvbiAvdmFyL2xvY2sgdHlwZSB0bXBmcyAocncsbm9leGVjLG5vc3VpZCxub2RldikNCm5vbmUg
b24gL2xpYi9pbml0L3J3IHR5cGUgdG1wZnMgKHJ3LG5vc3VpZCxtb2RlPTA3NTUpDQovZGV2L3Nk
YTIgb24gL2Jvb3QgdHlwZSBleHQ0IChydykNCmJpbmZtdF9taXNjIG9uIC9wcm9jL3N5cy9mcy9i
aW5mbXRfbWlzYyB0eXBlIGJpbmZtdF9taXNjIChydyxub2V4ZWMsbm9zdWlkLG5vZGV2KQ0KZ3Zm
cy1mdXNlLWRhZW1vbiBvbiAvaG9tZS9zYXAvLmd2ZnMgdHlwZSBmdXNlLmd2ZnMtZnVzZS1kYWVt
b24gKHJ3LG5vc3VpZCxub2Rldix1c2VyPXNhcCkNCm5hcy0xZzovZXhwb3J0L3V0aWxzL3NjcmF0
Y2ggb24gL3NhcG1udC9zY3JhdGNoIHR5cGUgbmZzIChydyxhZGRyPTEwLjU1LjE2OC4xNTApDQpu
YXMtMWc6L2V4cG9ydC92aXJ0dWFsX21hY2hpbmVzIG9uIC9zYXBtbnQvdmlydHVhbF9tYWNoaW5l
cyB0eXBlIG5mcyAocncsYWRkcj0xMC41NS4xNjguMTUwKQ0KDQoNCj09PT09PT09PT09PT09PT09
PT09PT09PT09PT09IHNkYTIvZ3J1Yi9tZW51LmxzdDogPT09PT09PT09PT09PT09PT09PT09PT09
PT09PT0NCg0KIyBtZW51LmxzdCAtIFNlZTogZ3J1Yig4KSwgaW5mbyBncnViLCB1cGRhdGUtZ3J1
Yig4KQ0KIyAgICAgICAgICAgIGdydWItaW5zdGFsbCg4KSwgZ3J1Yi1mbG9wcHkoOCksDQojICAg
ICAgICAgICAgZ3J1Yi1tZDUtY3J5cHQsIC91c3Ivc2hhcmUvZG9jL2dydWINCiMgICAgICAgICAg
ICBhbmQgL3Vzci9zaGFyZS9kb2MvZ3J1Yi1sZWdhY3ktZG9jLy4NCg0KIyMgZGVmYXVsdCBudW0N
CiMgU2V0IHRoZSBkZWZhdWx0IGVudHJ5IHRvIHRoZSBlbnRyeSBudW1iZXIgTlVNLiBOdW1iZXJp
bmcgc3RhcnRzIGZyb20gMCwgYW5kDQojIHRoZSBlbnRyeSBudW1iZXIgMCBpcyB0aGUgZGVmYXVs
dCBpZiB0aGUgY29tbWFuZCBpcyBub3QgdXNlZC4NCiMNCiMgWW91IGNhbiBzcGVjaWZ5ICdzYXZl
ZCcgaW5zdGVhZCBvZiBhIG51bWJlci4gSW4gdGhpcyBjYXNlLCB0aGUgZGVmYXVsdCBlbnRyeQ0K
IyBpcyB0aGUgZW50cnkgc2F2ZWQgd2l0aCB0aGUgY29tbWFuZCAnc2F2ZWRlZmF1bHQnLg0KIyBX
QVJOSU5HOiBJZiB5b3UgYXJlIHVzaW5nIGRtcmFpZCBkbyBub3QgdXNlICdzYXZlZGVmYXVsdCcg
b3IgeW91cg0KIyBhcnJheSB3aWxsIGRlc3luYyBhbmQgd2lsbCBub3QgbGV0IHlvdSBib290IHlv
dXIgc3lzdGVtLg0KZGVmYXVsdAkJMA0KZmFsbGJhY2sJMg0KIyMgdGltZW91dCBzZWMNCiMgU2V0
IGEgdGltZW91dCwgaW4gU0VDIHNlY29uZHMsIGJlZm9yZSBhdXRvbWF0aWNhbGx5IGJvb3Rpbmcg
dGhlIGRlZmF1bHQgZW50cnkNCiMgKG5vcm1hbGx5IHRoZSBmaXJzdCBlbnRyeSBkZWZpbmVkKS4N
CnRpbWVvdXQJCTEwDQoNCiMjIGhpZGRlbm1lbnUNCiMgSGlkZXMgdGhlIG1lbnUgYnkgZGVmYXVs
dCAocHJlc3MgRVNDIHRvIHNlZSB0aGUgbWVudSkNCiNoaWRkZW5tZW51DQoNCiMgUHJldHR5IGNv
bG91cnMNCiNjb2xvciBjeWFuL2JsdWUgd2hpdGUvYmx1ZQ0KDQojIyBwYXNzd29yZCBbJy0tbWQ1
J10gcGFzc3dkDQojIElmIHVzZWQgaW4gdGhlIGZpcnN0IHNlY3Rpb24gb2YgYSBtZW51IGZpbGUs
IGRpc2FibGUgYWxsIGludGVyYWN0aXZlIGVkaXRpbmcNCiMgY29udHJvbCAobWVudSBlbnRyeSBl
ZGl0b3IgYW5kIGNvbW1hbmQtbGluZSkgIGFuZCBlbnRyaWVzIHByb3RlY3RlZCBieSB0aGUNCiMg
Y29tbWFuZCAnbG9jaycNCiMgZS5nLiBwYXNzd29yZCB0b3BzZWNyZXQNCiMgICAgICBwYXNzd29y
ZCAtLW1kNSAkMSRnTGhVMC8kYVc3OGtISzFRZlYzUDJiMnpuVW9lLw0KIyBwYXNzd29yZCB0b3Bz
ZWNyZXQNCg0KIw0KIyBleGFtcGxlcw0KIw0KIyB0aXRsZQkJV2luZG93cyA5NS85OC9OVC8yMDAw
DQojIHJvb3QJCShoZDAsMCkNCiMgbWFrZWFjdGl2ZQ0KIyBjaGFpbmxvYWRlcgkrMQ0KIw0KIyB0
aXRsZQkJTGludXgNCiMgcm9vdAkJKGhkMCwxKQ0KIyBrZXJuZWwJL3ZtbGludXogcm9vdD0vZGV2
L2hkYTIgcm8NCiMNCg0KIw0KIyBQdXQgc3RhdGljIGJvb3Qgc3RhbnphcyBiZWZvcmUgYW5kL29y
IGFmdGVyIEFVVE9NQUdJQyBLRVJORUwgTElTVA0KDQojIyMgQkVHSU4gQVVUT01BR0lDIEtFUk5F
TFMgTElTVA0KIyMgbGluZXMgYmV0d2VlbiB0aGUgQVVUT01BR0lDIEtFUk5FTFMgTElTVCBtYXJr
ZXJzIHdpbGwgYmUgbW9kaWZpZWQNCiMjIGJ5IHRoZSBkZWJpYW4gdXBkYXRlLWdydWIgc2NyaXB0
IGV4Y2VwdCBmb3IgdGhlIGRlZmF1bHQgb3B0aW9ucyBiZWxvdw0KDQojIyBETyBOT1QgVU5DT01N
RU5UIFRIRU0sIEp1c3QgZWRpdCB0aGVtIHRvIHlvdXIgbmVlZHMNCg0KIyMgIyMgU3RhcnQgRGVm
YXVsdCBPcHRpb25zICMjDQojIyBkZWZhdWx0IGtlcm5lbCBvcHRpb25zDQojIyBkZWZhdWx0IGtl
cm5lbCBvcHRpb25zIGZvciBhdXRvbWFnaWMgYm9vdCBvcHRpb25zDQojIyBJZiB5b3Ugd2FudCBz
cGVjaWFsIG9wdGlvbnMgZm9yIHNwZWNpZmljIGtlcm5lbHMgdXNlIGtvcHRfeF95X3oNCiMjIHdo
ZXJlIHgueS56IGlzIGtlcm5lbCB2ZXJzaW9uLiBNaW5vciB2ZXJzaW9ucyBjYW4gYmUgb21pdHRl
ZC4NCiMjIGUuZy4ga29wdD1yb290PS9kZXYvaGRhMSBybw0KIyMgICAgICBrb3B0XzJfNl84PXJv
b3Q9L2Rldi9oZGMxIHJvDQojIyAgICAgIGtvcHRfMl82XzhfMl82ODY9cm9vdD0vZGV2L2hkYzIg
cm8NCiMga29wdD1yb290PVVVSUQ9MjY2ZTcxYWYtZTE0NS00OTViLWIzOGYtMmRhMWY0NDQ4ODVk
IHJvDQoNCiMjIGRlZmF1bHQgZ3J1YiByb290IGRldmljZQ0KIyMgZS5nLiBncm9vdD0oaGQwLDAp
DQojIGdyb290PThlZGYwZTFiLTVmOWMtNGNhMC04Zjg4LTc3ZDM1YWY4NzA5Mw0KDQojIyBzaG91
bGQgdXBkYXRlLWdydWIgY3JlYXRlIGFsdGVybmF0aXZlIGF1dG9tYWdpYyBib290IG9wdGlvbnMN
CiMjIGUuZy4gYWx0ZXJuYXRpdmU9dHJ1ZQ0KIyMgICAgICBhbHRlcm5hdGl2ZT1mYWxzZQ0KIyBh
bHRlcm5hdGl2ZT10cnVlDQoNCiMjIHNob3VsZCB1cGRhdGUtZ3J1YiBsb2NrIGFsdGVybmF0aXZl
IGF1dG9tYWdpYyBib290IG9wdGlvbnMNCiMjIGUuZy4gbG9ja2FsdGVybmF0aXZlPXRydWUNCiMj
ICAgICAgbG9ja2FsdGVybmF0aXZlPWZhbHNlDQojIGxvY2thbHRlcm5hdGl2ZT1mYWxzZQ0KDQoj
IyBhZGRpdGlvbmFsIG9wdGlvbnMgdG8gdXNlIHdpdGggdGhlIGRlZmF1bHQgYm9vdCBvcHRpb24s
IGJ1dCBub3Qgd2l0aCB0aGUNCiMjIGFsdGVybmF0aXZlcw0KIyMgZS5nLiBkZWZvcHRpb25zPXZn
YT03OTEgcmVzdW1lPS9kZXYvaGRhNQ0KIyBkZWZvcHRpb25zPXF1aWV0IHNwbGFzaA0KDQojIyBz
aG91bGQgdXBkYXRlLWdydWIgbG9jayBvbGQgYXV0b21hZ2ljIGJvb3Qgb3B0aW9ucw0KIyMgZS5n
LiBsb2Nrb2xkPWZhbHNlDQojIyAgICAgIGxvY2tvbGQ9dHJ1ZQ0KIyBsb2Nrb2xkPWZhbHNlDQoN
CiMjIFhlbiBoeXBlcnZpc29yIG9wdGlvbnMgdG8gdXNlIHdpdGggdGhlIGRlZmF1bHQgWGVuIGJv
b3Qgb3B0aW9uDQojIHhlbmhvcHQ9DQoNCiMjIFhlbiBMaW51eCBrZXJuZWwgb3B0aW9ucyB0byB1
c2Ugd2l0aCB0aGUgZGVmYXVsdCBYZW4gYm9vdCBvcHRpb24NCiMgeGVua29wdD1jb25zb2xlPXR0
eTANCg0KIyMgYWx0b3B0aW9uIGJvb3QgdGFyZ2V0cyBvcHRpb24NCiMjIG11bHRpcGxlIGFsdG9w
dGlvbnMgbGluZXMgYXJlIGFsbG93ZWQNCiMjIGUuZy4gYWx0b3B0aW9ucz0oZXh0cmEgbWVudSBz
dWZmaXgpIGV4dHJhIGJvb3Qgb3B0aW9ucw0KIyMgICAgICBhbHRvcHRpb25zPShyZWNvdmVyeSkg
c2luZ2xlDQojIGFsdG9wdGlvbnM9KHJlY292ZXJ5IG1vZGUpIHNpbmdsZQ0KDQojIyBjb250cm9s
cyBob3cgbWFueSBrZXJuZWxzIHNob3VsZCBiZSBwdXQgaW50byB0aGUgbWVudS5sc3QNCiMjIG9u
bHkgY291bnRzIHRoZSBmaXJzdCBvY2N1cmVuY2Ugb2YgYSBrZXJuZWwsIG5vdCB0aGUNCiMjIGFs
dGVybmF0aXZlIGtlcm5lbCBvcHRpb25zDQojIyBlLmcuIGhvd21hbnk9YWxsDQojIyAgICAgIGhv
d21hbnk9Nw0KIyBob3dtYW55PWFsbA0KDQojIyBzcGVjaWZ5IGlmIHJ1bm5pbmcgaW4gWGVuIGRv
bVUgb3IgaGF2ZSBncnViIGRldGVjdCBhdXRvbWF0aWNhbGx5DQojIyB1cGRhdGUtZ3J1YiB3aWxs
IGlnbm9yZSBub24teGVuIGtlcm5lbHMgd2hlbiBydW5uaW5nIGluIGRvbVUgYW5kIHZpY2UgdmVy
c2ENCiMjIGUuZy4gaW5kb21VPWRldGVjdA0KIyMgICAgICBpbmRvbVU9dHJ1ZQ0KIyMgICAgICBp
bmRvbVU9ZmFsc2UNCiMgaW5kb21VPWRldGVjdA0KDQojIyBzaG91bGQgdXBkYXRlLWdydWIgY3Jl
YXRlIG1lbXRlc3Q4NiBib290IG9wdGlvbg0KIyMgZS5nLiBtZW10ZXN0ODY9dHJ1ZQ0KIyMgICAg
ICBtZW10ZXN0ODY9ZmFsc2UNCiMgbWVtdGVzdDg2PXRydWUNCg0KIyMgc2hvdWxkIHVwZGF0ZS1n
cnViIGFkanVzdCB0aGUgdmFsdWUgb2YgdGhlIGRlZmF1bHQgYm9vdGVkIHN5c3RlbQ0KIyMgY2Fu
IGJlIHRydWUgb3IgZmFsc2UNCiMgdXBkYXRlZGVmYXVsdGVudHJ5PWZhbHNlDQoNCiMjIHNob3Vs
ZCB1cGRhdGUtZ3J1YiBhZGQgc2F2ZWRlZmF1bHQgdG8gdGhlIGRlZmF1bHQgb3B0aW9ucw0KIyMg
Y2FuIGJlIHRydWUgb3IgZmFsc2UNCiMgc2F2ZWRlZmF1bHQ9ZmFsc2UNCg0KIyMgIyMgRW5kIERl
ZmF1bHQgT3B0aW9ucyAjIw0KDQoNCnRpdGxlCQlYZW4gNC4xLjIgLyBVYnVudHUgMTAuMDQuNCBr
ZXJuZWwgMi42LjMyLjQwIChyb290PXNkYTYpDQp1dWlkCQk4ZWRmMGUxYi01ZjljLTRjYTAtOGY4
OC03N2QzNWFmODcwOTMNCiNyb290CQkoaGQwLDEpDQprZXJuZWwJCS94ZW4tNC4xLjIuZ3ogZG9t
MF9tZW09NDA5Nk0sbWF4OjQwOTZNIGxvZ2x2bD1hbGwgZ3Vlc3RfbG9nbHZsPWFsbA0KbW9kdWxl
CQkvdm1saW51ei0yLjYuMzIuNDAgZHVtbXk9ZHVtbXkgcm9vdD0vZGV2L3NkYTYgcm8gY29uc29s
ZT10dHkwIG5vbW9kZXNldCByb290ZGVsYXk9NTANCm1vZHVsZQkJL2luaXRyZC5pbWctMi42LjMy
LjQwDQoNCnRpdGxlCQlYZW4gNC4yLjAtcmMzIC8gVWJ1bnR1IDEwLjA0LjQga2VybmVsIDIuNi4z
Mi40MCAocm9vdD1zZGE2KQ0KdXVpZAkJOGVkZjBlMWItNWY5Yy00Y2EwLThmODgtNzdkMzVhZjg3
MDkzDQojcm9vdAkJKGhkMCwxKQ0Ka2VybmVsCQkveGVuLTQuMi4wLXJjMy1wcmUuZ3ogZG9tMF9t
ZW09NDA5Nk0sbWF4OjQwOTZNIGxvZ2x2bD1hbGwgZ3Vlc3RfbG9nbHZsPWFsbCBjb20xPTk2MDAs
OG4xIGNvbnNvbGU9Y29tMSx2Z2ENCm1vZHVsZQkJL3ZtbGludXotMi42LjMyLjQwIHJvb3Q9VVVJ
RD0yNjZlNzFhZi1lMTQ1LTQ5NWItYjM4Zi0yZGExZjQ0NDg4NWQgcm8gY29uc29sZT10dHkwIGNv
bnNvbGU9aHZjMCBlYXJseXByaW50az14ZW4gbm9tb2Rlc2V0DQptb2R1bGUJCS9pbml0cmQuaW1n
LTIuNi4zMi40MA0KDQp0aXRsZQkJVWJ1bnR1IDEwLjA0LjQgTFRTLCBrZXJuZWwgMi42LjMyLTQy
LWdlbmVyaWMgKHNkYTYpDQp1dWlkCQk4ZWRmMGUxYi01ZjljLTRjYTAtOGY4OC03N2QzNWFmODcw
OTMNCmtlcm5lbAkJL3ZtbGludXotMi42LjMyLTQyLWdlbmVyaWMgcm9vdD0vZGV2L3NkYTYgcm8g
cXVpZXQgc3BsYXNoIA0KaW5pdHJkCQkvaW5pdHJkLmltZy0yLjYuMzItNDItZ2VuZXJpYw0KDQp0
aXRsZQkJVWJ1bnR1IDEwLjA0LjQgTFRTLCBrZXJuZWwgMi42LjMyLTQyLWdlbmVyaWMgKHNkYTcp
DQp1dWlkCQk4ZWRmMGUxYi01ZjljLTRjYTAtOGY4OC03N2QzNWFmODcwOTMNCmtlcm5lbAkJL3Zt
bGludXotMi42LjMyLTQyLWdlbmVyaWMgcm9vdD0vZGV2L3NkYTcgcm8gcXVpZXQgc3BsYXNoIA0K
aW5pdHJkCQkvaW5pdHJkLmltZy0yLjYuMzItNDItZ2VuZXJpYw0KDQp0aXRsZQkJVWJ1bnR1IDEw
LjA0LjQgTFRTLCBrZXJuZWwgMy4xLjAtcmM5Kw0KdXVpZAkJOGVkZjBlMWItNWY5Yy00Y2EwLThm
ODgtNzdkMzVhZjg3MDkzDQprZXJuZWwJCS92bWxpbnV6LTMuMS4wLXJjOSsgcm9vdD1VVUlEPTI2
NmU3MWFmLWUxNDUtNDk1Yi1iMzhmLTJkYTFmNDQ0ODg1ZCBybyBxdWlldCBzcGxhc2ggDQoNCnRp
dGxlCQlVYnVudHUgMTAuMDQuNCBMVFMsIGtlcm5lbCAzLjEuMC1yYzkrIChyZWNvdmVyeSBtb2Rl
KQ0KdXVpZAkJOGVkZjBlMWItNWY5Yy00Y2EwLThmODgtNzdkMzVhZjg3MDkzDQprZXJuZWwJCS92
bWxpbnV6LTMuMS4wLXJjOSsgcm9vdD1VVUlEPTI2NmU3MWFmLWUxNDUtNDk1Yi1iMzhmLTJkYTFm
NDQ0ODg1ZCBybyAgc2luZ2xlDQoNCnRpdGxlCQlVYnVudHUgMTAuMDQuNCBMVFMsIGtlcm5lbCAy
LjYuMzIuNDANCnV1aWQJCThlZGYwZTFiLTVmOWMtNGNhMC04Zjg4LTc3ZDM1YWY4NzA5Mw0Ka2Vy
bmVsCQkvdm1saW51ei0yLjYuMzIuNDAgcm9vdD1VVUlEPTI2NmU3MWFmLWUxNDUtNDk1Yi1iMzhm
LTJkYTFmNDQ0ODg1ZCBybyBxdWlldCBzcGxhc2ggDQppbml0cmQJCS9pbml0cmQuaW1nLTIuNi4z
Mi40MA0KDQp0aXRsZQkJVWJ1bnR1IDEwLjA0LjQgTFRTLCBrZXJuZWwgMi42LjMyLjQwIChyZWNv
dmVyeSBtb2RlKQ0KdXVpZAkJOGVkZjBlMWItNWY5Yy00Y2EwLThmODgtNzdkMzVhZjg3MDkzDQpr
ZXJuZWwJCS92bWxpbnV6LTIuNi4zMi40MCByb290PVVVSUQ9MjY2ZTcxYWYtZTE0NS00OTViLWIz
OGYtMmRhMWY0NDQ4ODVkIHJvICBzaW5nbGUNCmluaXRyZAkJL2luaXRyZC5pbWctMi42LjMyLjQw
DQoNCg0KdGl0bGUJCVVidW50dSAxMC4wNC40IExUUywga2VybmVsIDIuNi4zMi00Mi1nZW5lcmlj
IChyZWNvdmVyeSBtb2RlKQ0KdXVpZAkJOGVkZjBlMWItNWY5Yy00Y2EwLThmODgtNzdkMzVhZjg3
MDkzDQprZXJuZWwJCS92bWxpbnV6LTIuNi4zMi00Mi1nZW5lcmljIHJvb3Q9VVVJRD0yNjZlNzFh
Zi1lMTQ1LTQ5NWItYjM4Zi0yZGExZjQ0NDg4NWQgcm8gIHNpbmdsZQ0KaW5pdHJkCQkvaW5pdHJk
LmltZy0yLjYuMzItNDItZ2VuZXJpYw0KDQp0aXRsZQkJVWJ1bnR1IDEwLjA0LjQgTFRTLCBrZXJu
ZWwgMi42LjMyLTM4LWdlbmVyaWMNCnV1aWQJCThlZGYwZTFiLTVmOWMtNGNhMC04Zjg4LTc3ZDM1
YWY4NzA5Mw0Ka2VybmVsCQkvdm1saW51ei0yLjYuMzItMzgtZ2VuZXJpYyByb290PVVVSUQ9MjY2
ZTcxYWYtZTE0NS00OTViLWIzOGYtMmRhMWY0NDQ4ODVkIHJvIHF1aWV0IHNwbGFzaCANCmluaXRy
ZAkJL2luaXRyZC5pbWctMi42LjMyLTM4LWdlbmVyaWMNCg0KdGl0bGUJCVVidW50dSAxMC4wNC40
IExUUywga2VybmVsIDIuNi4zMi0zOC1nZW5lcmljIChyZWNvdmVyeSBtb2RlKQ0KdXVpZAkJOGVk
ZjBlMWItNWY5Yy00Y2EwLThmODgtNzdkMzVhZjg3MDkzDQprZXJuZWwJCS92bWxpbnV6LTIuNi4z
Mi0zOC1nZW5lcmljIHJvb3Q9VVVJRD0yNjZlNzFhZi1lMTQ1LTQ5NWItYjM4Zi0yZGExZjQ0NDg4
NWQgcm8gIHNpbmdsZQ0KaW5pdHJkCQkvaW5pdHJkLmltZy0yLjYuMzItMzgtZ2VuZXJpYw0KDQp0
aXRsZQkJQ2hhaW5sb2FkIGludG8gR1JVQiAyDQpyb290CQk4ZWRmMGUxYi01ZjljLTRjYTAtOGY4
OC03N2QzNWFmODcwOTMNCmtlcm5lbAkJL2Jvb3QvZ3J1Yi9jb3JlLmltZw0KDQojdGl0bGUJCVVi
dW50dSAxMC4wNC40IExUUywgbWVtdGVzdDg2Kw0KI3V1aWQJCThlZGYwZTFiLTVmOWMtNGNhMC04
Zjg4LTc3ZDM1YWY4NzA5Mw0KI2tlcm5lbAkJL21lbXRlc3Q4NisuYmluDQoNCiMjIyBFTkQgREVC
SUFOIEFVVE9NQUdJQyBLRVJORUxTIExJU1QNCg0KPT09PT09PT09PT09PT09PT09PT09PT09PT09
PT0gc2RhMi9ncnViL2dydWIuY2ZnOiA9PT09PT09PT09PT09PT09PT09PT09PT09PT09PQ0KDQoj
DQojIERPIE5PVCBFRElUIFRISVMgRklMRQ0KIw0KIyBJdCBpcyBhdXRvbWF0aWNhbGx5IGdlbmVy
YXRlZCBieSAvdXNyL3NiaW4vZ3J1Yi1ta2NvbmZpZyB1c2luZyB0ZW1wbGF0ZXMNCiMgZnJvbSAv
ZXRjL2dydWIuZCBhbmQgc2V0dGluZ3MgZnJvbSAvZXRjL2RlZmF1bHQvZ3J1Yg0KIw0KDQojIyMg
QkVHSU4gL2V0Yy9ncnViLmQvMDBfaGVhZGVyICMjIw0KaWYgWyAtcyAkcHJlZml4L2dydWJlbnYg
XTsgdGhlbg0KICBsb2FkX2Vudg0KZmkNCnNldCBkZWZhdWx0PSIwIg0KaWYgWyAke3ByZXZfc2F2
ZWRfZW50cnl9IF07IHRoZW4NCiAgc2V0IHNhdmVkX2VudHJ5PSR7cHJldl9zYXZlZF9lbnRyeX0N
CiAgc2F2ZV9lbnYgc2F2ZWRfZW50cnkNCiAgc2V0IHByZXZfc2F2ZWRfZW50cnk9DQogIHNhdmVf
ZW52IHByZXZfc2F2ZWRfZW50cnkNCiAgc2V0IGJvb3Rfb25jZT10cnVlDQpmaQ0KDQpmdW5jdGlv
biBzYXZlZGVmYXVsdCB7DQogIGlmIFsgLXogJHtib290X29uY2V9IF07IHRoZW4NCiAgICBzYXZl
ZF9lbnRyeT0ke2Nob3Nlbn0NCiAgICBzYXZlX2VudiBzYXZlZF9lbnRyeQ0KICBmaQ0KfQ0KDQpm
dW5jdGlvbiByZWNvcmRmYWlsIHsNCiAgc2V0IHJlY29yZGZhaWw9MQ0KICBpZiBbIC1uICR7aGF2
ZV9ncnViZW52fSBdOyB0aGVuIGlmIFsgLXogJHtib290X29uY2V9IF07IHRoZW4gc2F2ZV9lbnYg
cmVjb3JkZmFpbDsgZmk7IGZpDQp9DQppbnNtb2QgZXh0Mg0Kc2V0IHJvb3Q9JyhoZDAsNiknDQpz
ZWFyY2ggLS1uby1mbG9wcHkgLS1mcy11dWlkIC0tc2V0IDI2NmU3MWFmLWUxNDUtNDk1Yi1iMzhm
LTJkYTFmNDQ0ODg1ZA0KaWYgbG9hZGZvbnQgL3Vzci9zaGFyZS9ncnViL3VuaWNvZGUucGYyIDsg
dGhlbg0KICBzZXQgZ2Z4bW9kZT02NDB4NDgwDQogIGluc21vZCBnZnh0ZXJtDQogIGluc21vZCB2
YmUNCiAgaWYgdGVybWluYWxfb3V0cHV0IGdmeHRlcm0gOyB0aGVuIHRydWUgOyBlbHNlDQogICAg
IyBGb3IgYmFja3dhcmQgY29tcGF0aWJpbGl0eSB3aXRoIHZlcnNpb25zIG9mIHRlcm1pbmFsLm1v
ZCB0aGF0IGRvbid0DQogICAgIyB1bmRlcnN0YW5kIHRlcm1pbmFsX291dHB1dA0KICAgIHRlcm1p
bmFsIGdmeHRlcm0NCiAgZmkNCmZpDQppbnNtb2QgZXh0Mg0Kc2V0IHJvb3Q9JyhoZDAsMiknDQpz
ZWFyY2ggLS1uby1mbG9wcHkgLS1mcy11dWlkIC0tc2V0IDhlZGYwZTFiLTVmOWMtNGNhMC04Zjg4
LTc3ZDM1YWY4NzA5Mw0Kc2V0IGxvY2FsZV9kaXI9KCRyb290KS9ncnViL2xvY2FsZQ0Kc2V0IGxh
bmc9ZW4NCmluc21vZCBnZXR0ZXh0DQppZiBbICR7cmVjb3JkZmFpbH0gPSAxIF07IHRoZW4NCiAg
c2V0IHRpbWVvdXQ9LTENCmVsc2UNCiAgc2V0IHRpbWVvdXQ9MTANCmZpDQojIyMgRU5EIC9ldGMv
Z3J1Yi5kLzAwX2hlYWRlciAjIyMNCg0KIyMjIEJFR0lOIC9ldGMvZ3J1Yi5kLzA1X2RlYmlhbl90
aGVtZSAjIyMNCnNldCBtZW51X2NvbG9yX25vcm1hbD13aGl0ZS9ibGFjaw0Kc2V0IG1lbnVfY29s
b3JfaGlnaGxpZ2h0PWJsYWNrL2xpZ2h0LWdyYXkNCiMjIyBFTkQgL2V0Yy9ncnViLmQvMDVfZGVi
aWFuX3RoZW1lICMjIw0KDQojIyMgQkVHSU4gL2V0Yy9ncnViLmQvMDhfeGVuICMjIw0KbWVudWVu
dHJ5ICJYZW4gVW5zdGFibGUgNC4yIFJDMyAvIERlYmlhbiBTcXVlZXplIGtlcm5lbCAyLjYuMzIu
NDAiIHsNCiAgICAgICAgaW5zbW9kIGV4dDINCiAgICAgICAgc2V0IHJvb3Q9JyhoZDAsNCknDQog
ICAgICAgIG11bHRpYm9vdCAoaGQwLDEpL3hlbi00LjIuMC1yYzMtcHJlLmd6IGR1bW15DQogICAg
ICAgIG1vZHVsZSAoaGQwLDEpL3ZtbGludXotMi42LjMyLjQwIGR1bW15IHJvb3Q9VVVJRD0yNjZl
NzFhZi1lMTQ1LTQ5NWItYjM4Zi0yZGExZjQ0NDg4NWQgcm8gcXVpZXQgY29uc29sZT10dHkwIG5v
bW9kZXNldCByb290ZGVsYXk9MTMwDQogICAgICAgIG1vZHVsZSAoaGQwLDEpL2luaXRyZC5pbWct
Mi42LjMyLjQwDQp9DQojIyMgRU5EIC9ldGMvZ3J1Yi5kLzA4X3hlbiAjIyMNCg0KIyMjIEJFR0lO
IC9ldGMvZ3J1Yi5kLzEwX2xpbnV4ICMjIw0KbWVudWVudHJ5ICdVYnVudHUsIHdpdGggTGludXgg
My4xLjAtcmM5KycgLS1jbGFzcyB1YnVudHUgLS1jbGFzcyBnbnUtbGludXggLS1jbGFzcyBnbnUg
LS1jbGFzcyBvcyB7DQoJcmVjb3JkZmFpbA0KCWluc21vZCBleHQyDQoJc2V0IHJvb3Q9JyhoZDAs
MiknDQoJc2VhcmNoIC0tbm8tZmxvcHB5IC0tZnMtdXVpZCAtLXNldCA4ZWRmMGUxYi01ZjljLTRj
YTAtOGY4OC03N2QzNWFmODcwOTMNCglsaW51eAkvdm1saW51ei0zLjEuMC1yYzkrIHJvb3Q9L2Rl
di9zZGE2IHJvICAgcXVpZXQgc3BsYXNoDQp9DQptZW51ZW50cnkgJ1VidW50dSwgd2l0aCBMaW51
eCAyLjYuMzIuNDAnIC0tY2xhc3MgdWJ1bnR1IC0tY2xhc3MgZ251LWxpbnV4IC0tY2xhc3MgZ251
IC0tY2xhc3Mgb3Mgew0KCXJlY29yZGZhaWwNCglpbnNtb2QgZXh0Mg0KCXNldCByb290PScoaGQw
LDIpJw0KCXNlYXJjaCAtLW5vLWZsb3BweSAtLWZzLXV1aWQgLS1zZXQgOGVkZjBlMWItNWY5Yy00
Y2EwLThmODgtNzdkMzVhZjg3MDkzDQoJbGludXgJL3ZtbGludXotMi42LjMyLjQwIHJvb3Q9VVVJ
RD0yNjZlNzFhZi1lMTQ1LTQ5NWItYjM4Zi0yZGExZjQ0NDg4NWQgcm8gICBxdWlldCBzcGxhc2gN
Cglpbml0cmQJL2luaXRyZC5pbWctMi42LjMyLjQwDQp9DQptZW51ZW50cnkgJ1VidW50dSwgd2l0
aCBMaW51eCAyLjYuMzItNDItZ2VuZXJpYycgLS1jbGFzcyB1YnVudHUgLS1jbGFzcyBnbnUtbGlu
dXggLS1jbGFzcyBnbnUgLS1jbGFzcyBvcyB7DQoJcmVjb3JkZmFpbA0KCWluc21vZCBleHQyDQoJ
c2V0IHJvb3Q9JyhoZDAsMiknDQoJc2VhcmNoIC0tbm8tZmxvcHB5IC0tZnMtdXVpZCAtLXNldCA4
ZWRmMGUxYi01ZjljLTRjYTAtOGY4OC03N2QzNWFmODcwOTMNCglsaW51eAkvdm1saW51ei0yLjYu
MzItNDItZ2VuZXJpYyByb290PVVVSUQ9MjY2ZTcxYWYtZTE0NS00OTViLWIzOGYtMmRhMWY0NDQ4
ODVkIHJvICAgcXVpZXQgc3BsYXNoDQoJaW5pdHJkCS9pbml0cmQuaW1nLTIuNi4zMi00Mi1nZW5l
cmljDQp9DQptZW51ZW50cnkgJ1VidW50dSwgd2l0aCBMaW51eCAyLjYuMzItMzgtZ2VuZXJpYycg
LS1jbGFzcyB1YnVudHUgLS1jbGFzcyBnbnUtbGludXggLS1jbGFzcyBnbnUgLS1jbGFzcyBvcyB7
DQoJcmVjb3JkZmFpbA0KCWluc21vZCBleHQyDQoJc2V0IHJvb3Q9JyhoZDAsMiknDQoJc2VhcmNo
IC0tbm8tZmxvcHB5IC0tZnMtdXVpZCAtLXNldCA4ZWRmMGUxYi01ZjljLTRjYTAtOGY4OC03N2Qz
NWFmODcwOTMNCglsaW51eAkvdm1saW51ei0yLjYuMzItMzgtZ2VuZXJpYyByb290PVVVSUQ9MjY2
ZTcxYWYtZTE0NS00OTViLWIzOGYtMmRhMWY0NDQ4ODVkIHJvICAgcXVpZXQgc3BsYXNoDQoJaW5p
dHJkCS9pbml0cmQuaW1nLTIuNi4zMi0zOC1nZW5lcmljDQp9DQojIyMgRU5EIC9ldGMvZ3J1Yi5k
LzEwX2xpbnV4ICMjIw0KDQojIyMgQkVHSU4gL2V0Yy9ncnViLmQvMjBfbWVtdGVzdDg2KyAjIyMN
Cm1lbnVlbnRyeSAiTWVtb3J5IHRlc3QgKG1lbXRlc3Q4NispIiB7DQoJaW5zbW9kIGV4dDINCglz
ZXQgcm9vdD0nKGhkMCwyKScNCglzZWFyY2ggLS1uby1mbG9wcHkgLS1mcy11dWlkIC0tc2V0IDhl
ZGYwZTFiLTVmOWMtNGNhMC04Zjg4LTc3ZDM1YWY4NzA5Mw0KCWxpbnV4MTYJL21lbXRlc3Q4Nisu
YmluDQp9DQptZW51ZW50cnkgIk1lbW9yeSB0ZXN0IChtZW10ZXN0ODYrLCBzZXJpYWwgY29uc29s
ZSAxMTUyMDApIiB7DQoJaW5zbW9kIGV4dDINCglzZXQgcm9vdD0nKGhkMCwyKScNCglzZWFyY2gg
LS1uby1mbG9wcHkgLS1mcy11dWlkIC0tc2V0IDhlZGYwZTFiLTVmOWMtNGNhMC04Zjg4LTc3ZDM1
YWY4NzA5Mw0KCWxpbnV4MTYJL21lbXRlc3Q4NisuYmluIGNvbnNvbGU9dHR5UzAsMTE1MjAwbjgN
Cn0NCiMjIyBFTkQgL2V0Yy9ncnViLmQvMjBfbWVtdGVzdDg2KyAjIyMNCg0KIyMjIEJFR0lOIC9l
dGMvZ3J1Yi5kLzMwX29zLXByb2JlciAjIyMNCiMjIyBFTkQgL2V0Yy9ncnViLmQvMzBfb3MtcHJv
YmVyICMjIw0KDQojIyMgQkVHSU4gL2V0Yy9ncnViLmQvNDBfY3VzdG9tICMjIw0KIyBUaGlzIGZp
bGUgcHJvdmlkZXMgYW4gZWFzeSB3YXkgdG8gYWRkIGN1c3RvbSBtZW51IGVudHJpZXMuICBTaW1w
bHkgdHlwZSB0aGUNCiMgbWVudSBlbnRyaWVzIHlvdSB3YW50IHRvIGFkZCBhZnRlciB0aGlzIGNv
bW1lbnQuICBCZSBjYXJlZnVsIG5vdCB0byBjaGFuZ2UNCiMgdGhlICdleGVjIHRhaWwnIGxpbmUg
YWJvdmUuDQojIyMgRU5EIC9ldGMvZ3J1Yi5kLzQwX2N1c3RvbSAjIyMNCg0KPT09PT09PT09PT09
PT09PT09PSBzZGEyOiBMb2NhdGlvbiBvZiBmaWxlcyBsb2FkZWQgYnkgR3J1YjogPT09PT09PT09
PT09PT09PT09PQ0KDQoNCiAgICAuMEdCOiBncnViL2dydWIuY2ZnDQogICAgLjBHQjogZ3J1Yi9t
ZW51LmxzdA0KICAgIC4wR0I6IGdydWIvc3RhZ2UyDQogICAgLjBHQjogaW5pdHJkLmltZy0yLjYu
MzItMzgtZ2VuZXJpYw0KICAgIC4wR0I6IGluaXRyZC5pbWctMi42LjMyLjQwDQogICAgLjBHQjog
aW5pdHJkLmltZy0yLjYuMzItNDItZ2VuZXJpYw0KICAgIC4wR0I6IHZtbGludXotMi42LjMyLTM4
LWdlbmVyaWMNCiAgICAuMEdCOiB2bWxpbnV6LTIuNi4zMi40MA0KICAgIC4wR0I6IHZtbGludXot
Mi42LjMyLTQyLWdlbmVyaWMNCiAgICAuMEdCOiB2bWxpbnV6LTMuMS4wLXJjOSsNCg0KPT09PT09
PT09PT09PT09PT09PT09PT09PT09IHNkYTYvYm9vdC9ncnViL21lbnUubHN0OiA9PT09PT09PT09
PT09PT09PT09PT09PT09PT0NCg0KIyBtZW51LmxzdCAtIFNlZTogZ3J1Yig4KSwgaW5mbyBncnVi
LCB1cGRhdGUtZ3J1Yig4KQ0KIyAgICAgICAgICAgIGdydWItaW5zdGFsbCg4KSwgZ3J1Yi1mbG9w
cHkoOCksDQojICAgICAgICAgICAgZ3J1Yi1tZDUtY3J5cHQsIC91c3Ivc2hhcmUvZG9jL2dydWIN
CiMgICAgICAgICAgICBhbmQgL3Vzci9zaGFyZS9kb2MvZ3J1Yi1sZWdhY3ktZG9jLy4NCg0KIyMg
ZGVmYXVsdCBudW0NCiMgU2V0IHRoZSBkZWZhdWx0IGVudHJ5IHRvIHRoZSBlbnRyeSBudW1iZXIg
TlVNLiBOdW1iZXJpbmcgc3RhcnRzIGZyb20gMCwgYW5kDQojIHRoZSBlbnRyeSBudW1iZXIgMCBp
cyB0aGUgZGVmYXVsdCBpZiB0aGUgY29tbWFuZCBpcyBub3QgdXNlZC4NCiMNCiMgWW91IGNhbiBz
cGVjaWZ5ICdzYXZlZCcgaW5zdGVhZCBvZiBhIG51bWJlci4gSW4gdGhpcyBjYXNlLCB0aGUgZGVm
YXVsdCBlbnRyeQ0KIyBpcyB0aGUgZW50cnkgc2F2ZWQgd2l0aCB0aGUgY29tbWFuZCAnc2F2ZWRl
ZmF1bHQnLg0KIyBXQVJOSU5HOiBJZiB5b3UgYXJlIHVzaW5nIGRtcmFpZCBkbyBub3QgdXNlICdz
YXZlZGVmYXVsdCcgb3IgeW91cg0KIyBhcnJheSB3aWxsIGRlc3luYyBhbmQgd2lsbCBub3QgbGV0
IHlvdSBib290IHlvdXIgc3lzdGVtLg0KZGVmYXVsdAkJMA0KZmFsbGJhY2sJMg0KIyMgdGltZW91
dCBzZWMNCiMgU2V0IGEgdGltZW91dCwgaW4gU0VDIHNlY29uZHMsIGJlZm9yZSBhdXRvbWF0aWNh
bGx5IGJvb3RpbmcgdGhlIGRlZmF1bHQgZW50cnkNCiMgKG5vcm1hbGx5IHRoZSBmaXJzdCBlbnRy
eSBkZWZpbmVkKS4NCnRpbWVvdXQJCTEwDQoNCiMjIGhpZGRlbm1lbnUNCiMgSGlkZXMgdGhlIG1l
bnUgYnkgZGVmYXVsdCAocHJlc3MgRVNDIHRvIHNlZSB0aGUgbWVudSkNCiNoaWRkZW5tZW51DQoN
CiMgUHJldHR5IGNvbG91cnMNCiNjb2xvciBjeWFuL2JsdWUgd2hpdGUvYmx1ZQ0KDQojIyBwYXNz
d29yZCBbJy0tbWQ1J10gcGFzc3dkDQojIElmIHVzZWQgaW4gdGhlIGZpcnN0IHNlY3Rpb24gb2Yg
YSBtZW51IGZpbGUsIGRpc2FibGUgYWxsIGludGVyYWN0aXZlIGVkaXRpbmcNCiMgY29udHJvbCAo
bWVudSBlbnRyeSBlZGl0b3IgYW5kIGNvbW1hbmQtbGluZSkgIGFuZCBlbnRyaWVzIHByb3RlY3Rl
ZCBieSB0aGUNCiMgY29tbWFuZCAnbG9jaycNCiMgZS5nLiBwYXNzd29yZCB0b3BzZWNyZXQNCiMg
ICAgICBwYXNzd29yZCAtLW1kNSAkMSRnTGhVMC8kYVc3OGtISzFRZlYzUDJiMnpuVW9lLw0KIyBw
YXNzd29yZCB0b3BzZWNyZXQNCg0KIw0KIyBleGFtcGxlcw0KIw0KIyB0aXRsZQkJV2luZG93cyA5
NS85OC9OVC8yMDAwDQojIHJvb3QJCShoZDAsMCkNCiMgbWFrZWFjdGl2ZQ0KIyBjaGFpbmxvYWRl
cgkrMQ0KIw0KIyB0aXRsZQkJTGludXgNCiMgcm9vdAkJKGhkMCwxKQ0KIyBrZXJuZWwJL3ZtbGlu
dXogcm9vdD0vZGV2L2hkYTIgcm8NCiMNCg0KIw0KIyBQdXQgc3RhdGljIGJvb3Qgc3RhbnphcyBi
ZWZvcmUgYW5kL29yIGFmdGVyIEFVVE9NQUdJQyBLRVJORUwgTElTVA0KDQojIyMgQkVHSU4gQVVU
T01BR0lDIEtFUk5FTFMgTElTVA0KIyMgbGluZXMgYmV0d2VlbiB0aGUgQVVUT01BR0lDIEtFUk5F
TFMgTElTVCBtYXJrZXJzIHdpbGwgYmUgbW9kaWZpZWQNCiMjIGJ5IHRoZSBkZWJpYW4gdXBkYXRl
LWdydWIgc2NyaXB0IGV4Y2VwdCBmb3IgdGhlIGRlZmF1bHQgb3B0aW9ucyBiZWxvdw0KDQojIyBE
TyBOT1QgVU5DT01NRU5UIFRIRU0sIEp1c3QgZWRpdCB0aGVtIHRvIHlvdXIgbmVlZHMNCg0KIyMg
IyMgU3RhcnQgRGVmYXVsdCBPcHRpb25zICMjDQojIyBkZWZhdWx0IGtlcm5lbCBvcHRpb25zDQoj
IyBkZWZhdWx0IGtlcm5lbCBvcHRpb25zIGZvciBhdXRvbWFnaWMgYm9vdCBvcHRpb25zDQojIyBJ
ZiB5b3Ugd2FudCBzcGVjaWFsIG9wdGlvbnMgZm9yIHNwZWNpZmljIGtlcm5lbHMgdXNlIGtvcHRf
eF95X3oNCiMjIHdoZXJlIHgueS56IGlzIGtlcm5lbCB2ZXJzaW9uLiBNaW5vciB2ZXJzaW9ucyBj
YW4gYmUgb21pdHRlZC4NCiMjIGUuZy4ga29wdD1yb290PS9kZXYvaGRhMSBybw0KIyMgICAgICBr
b3B0XzJfNl84PXJvb3Q9L2Rldi9oZGMxIHJvDQojIyAgICAgIGtvcHRfMl82XzhfMl82ODY9cm9v
dD0vZGV2L2hkYzIgcm8NCiMga29wdD1yb290PVVVSUQ9MjY2ZTcxYWYtZTE0NS00OTViLWIzOGYt
MmRhMWY0NDQ4ODVkIHJvDQoNCiMjIGRlZmF1bHQgZ3J1YiByb290IGRldmljZQ0KIyMgZS5nLiBn
cm9vdD0oaGQwLDApDQojIGdyb290PThlZGYwZTFiLTVmOWMtNGNhMC04Zjg4LTc3ZDM1YWY4NzA5
Mw0KDQojIyBzaG91bGQgdXBkYXRlLWdydWIgY3JlYXRlIGFsdGVybmF0aXZlIGF1dG9tYWdpYyBi
b290IG9wdGlvbnMNCiMjIGUuZy4gYWx0ZXJuYXRpdmU9dHJ1ZQ0KIyMgICAgICBhbHRlcm5hdGl2
ZT1mYWxzZQ0KIyBhbHRlcm5hdGl2ZT10cnVlDQoNCiMjIHNob3VsZCB1cGRhdGUtZ3J1YiBsb2Nr
IGFsdGVybmF0aXZlIGF1dG9tYWdpYyBib290IG9wdGlvbnMNCiMjIGUuZy4gbG9ja2FsdGVybmF0
aXZlPXRydWUNCiMjICAgICAgbG9ja2FsdGVybmF0aXZlPWZhbHNlDQojIGxvY2thbHRlcm5hdGl2
ZT1mYWxzZQ0KDQojIyBhZGRpdGlvbmFsIG9wdGlvbnMgdG8gdXNlIHdpdGggdGhlIGRlZmF1bHQg
Ym9vdCBvcHRpb24sIGJ1dCBub3Qgd2l0aCB0aGUNCiMjIGFsdGVybmF0aXZlcw0KIyMgZS5nLiBk
ZWZvcHRpb25zPXZnYT03OTEgcmVzdW1lPS9kZXYvaGRhNQ0KIyBkZWZvcHRpb25zPXF1aWV0IHNw
bGFzaA0KDQojIyBzaG91bGQgdXBkYXRlLWdydWIgbG9jayBvbGQgYXV0b21hZ2ljIGJvb3Qgb3B0
aW9ucw0KIyMgZS5nLiBsb2Nrb2xkPWZhbHNlDQojIyAgICAgIGxvY2tvbGQ9dHJ1ZQ0KIyBsb2Nr
b2xkPWZhbHNlDQoNCiMjIFhlbiBoeXBlcnZpc29yIG9wdGlvbnMgdG8gdXNlIHdpdGggdGhlIGRl
ZmF1bHQgWGVuIGJvb3Qgb3B0aW9uDQojIHhlbmhvcHQ9DQoNCiMjIFhlbiBMaW51eCBrZXJuZWwg
b3B0aW9ucyB0byB1c2Ugd2l0aCB0aGUgZGVmYXVsdCBYZW4gYm9vdCBvcHRpb24NCiMgeGVua29w
dD1jb25zb2xlPXR0eTANCg0KIyMgYWx0b3B0aW9uIGJvb3QgdGFyZ2V0cyBvcHRpb24NCiMjIG11
bHRpcGxlIGFsdG9wdGlvbnMgbGluZXMgYXJlIGFsbG93ZWQNCiMjIGUuZy4gYWx0b3B0aW9ucz0o
ZXh0cmEgbWVudSBzdWZmaXgpIGV4dHJhIGJvb3Qgb3B0aW9ucw0KIyMgICAgICBhbHRvcHRpb25z
PShyZWNvdmVyeSkgc2luZ2xlDQojIGFsdG9wdGlvbnM9KHJlY292ZXJ5IG1vZGUpIHNpbmdsZQ0K
DQojIyBjb250cm9scyBob3cgbWFueSBrZXJuZWxzIHNob3VsZCBiZSBwdXQgaW50byB0aGUgbWVu
dS5sc3QNCiMjIG9ubHkgY291bnRzIHRoZSBmaXJzdCBvY2N1cmVuY2Ugb2YgYSBrZXJuZWwsIG5v
dCB0aGUNCiMjIGFsdGVybmF0aXZlIGtlcm5lbCBvcHRpb25zDQojIyBlLmcuIGhvd21hbnk9YWxs
DQojIyAgICAgIGhvd21hbnk9Nw0KIyBob3dtYW55PWFsbA0KDQojIyBzcGVjaWZ5IGlmIHJ1bm5p
bmcgaW4gWGVuIGRvbVUgb3IgaGF2ZSBncnViIGRldGVjdCBhdXRvbWF0aWNhbGx5DQojIyB1cGRh
dGUtZ3J1YiB3aWxsIGlnbm9yZSBub24teGVuIGtlcm5lbHMgd2hlbiBydW5uaW5nIGluIGRvbVUg
YW5kIHZpY2UgdmVyc2ENCiMjIGUuZy4gaW5kb21VPWRldGVjdA0KIyMgICAgICBpbmRvbVU9dHJ1
ZQ0KIyMgICAgICBpbmRvbVU9ZmFsc2UNCiMgaW5kb21VPWRldGVjdA0KDQojIyBzaG91bGQgdXBk
YXRlLWdydWIgY3JlYXRlIG1lbXRlc3Q4NiBib290IG9wdGlvbg0KIyMgZS5nLiBtZW10ZXN0ODY9
dHJ1ZQ0KIyMgICAgICBtZW10ZXN0ODY9ZmFsc2UNCiMgbWVtdGVzdDg2PXRydWUNCg0KIyMgc2hv
dWxkIHVwZGF0ZS1ncnViIGFkanVzdCB0aGUgdmFsdWUgb2YgdGhlIGRlZmF1bHQgYm9vdGVkIHN5
c3RlbQ0KIyMgY2FuIGJlIHRydWUgb3IgZmFsc2UNCiMgdXBkYXRlZGVmYXVsdGVudHJ5PWZhbHNl
DQoNCiMjIHNob3VsZCB1cGRhdGUtZ3J1YiBhZGQgc2F2ZWRlZmF1bHQgdG8gdGhlIGRlZmF1bHQg
b3B0aW9ucw0KIyMgY2FuIGJlIHRydWUgb3IgZmFsc2UNCiMgc2F2ZWRlZmF1bHQ9ZmFsc2UNCg0K
IyMgIyMgRW5kIERlZmF1bHQgT3B0aW9ucyAjIw0KDQoNCnRpdGxlCQlYZW4gNC4xLjIgLyBVYnVu
dHUgMTAuMDQuNCBrZXJuZWwgMi42LjMyLjQwIChyb290PXNkYTYpDQp1dWlkCQk4ZWRmMGUxYi01
ZjljLTRjYTAtOGY4OC03N2QzNWFmODcwOTMNCiNyb290CQkoaGQwLDEpDQprZXJuZWwJCS94ZW4t
NC4xLjIuZ3ogZG9tMF9tZW09NDA5Nk0sbWF4OjQwOTZNIGxvZ2x2bD1hbGwgZ3Vlc3RfbG9nbHZs
PWFsbA0KbW9kdWxlCQkvdm1saW51ei0yLjYuMzIuNDAgZHVtbXk9ZHVtbXkgcm9vdD0vZGV2L3Nk
YTYgcm8gY29uc29sZT10dHkwIG5vbW9kZXNldCByb290ZGVsYXk9NTANCm1vZHVsZQkJL2luaXRy
ZC5pbWctMi42LjMyLjQwDQoNCnRpdGxlCQlYZW4gNC4yLjAtcmMzIC8gVWJ1bnR1IDEwLjA0LjQg
a2VybmVsIDIuNi4zMi40MCAocm9vdD1zZGE2KQ0KdXVpZAkJOGVkZjBlMWItNWY5Yy00Y2EwLThm
ODgtNzdkMzVhZjg3MDkzDQojcm9vdAkJKGhkMCwxKQ0Ka2VybmVsCQkveGVuLTQuMi4wLXJjMy1w
cmUuZ3ogZG9tMF9tZW09NDA5Nk0sbWF4OjQwOTZNIGxvZ2x2bD1hbGwgZ3Vlc3RfbG9nbHZsPWFs
bCBjb20xPTk2MDAsOG4xIGNvbnNvbGU9Y29tMSx2Z2ENCm1vZHVsZQkJL3ZtbGludXotMi42LjMy
LjQwIHJvb3Q9VVVJRD0yNjZlNzFhZi1lMTQ1LTQ5NWItYjM4Zi0yZGExZjQ0NDg4NWQgcm8gY29u
c29sZT10dHkwIGNvbnNvbGU9aHZjMCBlYXJseXByaW50az14ZW4gbm9tb2Rlc2V0DQptb2R1bGUJ
CS9pbml0cmQuaW1nLTIuNi4zMi40MA0KDQp0aXRsZQkJVWJ1bnR1IDEwLjA0LjQgTFRTLCBrZXJu
ZWwgMi42LjMyLTQyLWdlbmVyaWMgKHNkYTYpDQp1dWlkCQk4ZWRmMGUxYi01ZjljLTRjYTAtOGY4
OC03N2QzNWFmODcwOTMNCmtlcm5lbAkJL3ZtbGludXotMi42LjMyLTQyLWdlbmVyaWMgcm9vdD0v
ZGV2L3NkYTYgcm8gcXVpZXQgc3BsYXNoIA0KaW5pdHJkCQkvaW5pdHJkLmltZy0yLjYuMzItNDIt
Z2VuZXJpYw0KDQp0aXRsZQkJVWJ1bnR1IDEwLjA0LjQgTFRTLCBrZXJuZWwgMi42LjMyLTQyLWdl
bmVyaWMgKHNkYTcpDQp1dWlkCQk4ZWRmMGUxYi01ZjljLTRjYTAtOGY4OC03N2QzNWFmODcwOTMN
Cmtlcm5lbAkJL3ZtbGludXotMi42LjMyLTQyLWdlbmVyaWMgcm9vdD0vZGV2L3NkYTcgcm8gcXVp
ZXQgc3BsYXNoIA0KaW5pdHJkCQkvaW5pdHJkLmltZy0yLjYuMzItNDItZ2VuZXJpYw0KDQp0aXRs
ZQkJVWJ1bnR1IDEwLjA0LjQgTFRTLCBrZXJuZWwgMy4xLjAtcmM5Kw0KdXVpZAkJOGVkZjBlMWIt
NWY5Yy00Y2EwLThmODgtNzdkMzVhZjg3MDkzDQprZXJuZWwJCS92bWxpbnV6LTMuMS4wLXJjOSsg
cm9vdD1VVUlEPTI2NmU3MWFmLWUxNDUtNDk1Yi1iMzhmLTJkYTFmNDQ0ODg1ZCBybyBxdWlldCBz
cGxhc2ggDQoNCnRpdGxlCQlVYnVudHUgMTAuMDQuNCBMVFMsIGtlcm5lbCAzLjEuMC1yYzkrIChy
ZWNvdmVyeSBtb2RlKQ0KdXVpZAkJOGVkZjBlMWItNWY5Yy00Y2EwLThmODgtNzdkMzVhZjg3MDkz
DQprZXJuZWwJCS92bWxpbnV6LTMuMS4wLXJjOSsgcm9vdD1VVUlEPTI2NmU3MWFmLWUxNDUtNDk1
Yi1iMzhmLTJkYTFmNDQ0ODg1ZCBybyAgc2luZ2xlDQoNCnRpdGxlCQlVYnVudHUgMTAuMDQuNCBM
VFMsIGtlcm5lbCAyLjYuMzIuNDANCnV1aWQJCThlZGYwZTFiLTVmOWMtNGNhMC04Zjg4LTc3ZDM1
YWY4NzA5Mw0Ka2VybmVsCQkvdm1saW51ei0yLjYuMzIuNDAgcm9vdD1VVUlEPTI2NmU3MWFmLWUx
NDUtNDk1Yi1iMzhmLTJkYTFmNDQ0ODg1ZCBybyBxdWlldCBzcGxhc2ggDQppbml0cmQJCS9pbml0
cmQuaW1nLTIuNi4zMi40MA0KDQp0aXRsZQkJVWJ1bnR1IDEwLjA0LjQgTFRTLCBrZXJuZWwgMi42
LjMyLjQwIChyZWNvdmVyeSBtb2RlKQ0KdXVpZAkJOGVkZjBlMWItNWY5Yy00Y2EwLThmODgtNzdk
MzVhZjg3MDkzDQprZXJuZWwJCS92bWxpbnV6LTIuNi4zMi40MCByb290PVVVSUQ9MjY2ZTcxYWYt
ZTE0NS00OTViLWIzOGYtMmRhMWY0NDQ4ODVkIHJvICBzaW5nbGUNCmluaXRyZAkJL2luaXRyZC5p
bWctMi42LjMyLjQwDQoNCg0KdGl0bGUJCVVidW50dSAxMC4wNC40IExUUywga2VybmVsIDIuNi4z
Mi00Mi1nZW5lcmljIChyZWNvdmVyeSBtb2RlKQ0KdXVpZAkJOGVkZjBlMWItNWY5Yy00Y2EwLThm
ODgtNzdkMzVhZjg3MDkzDQprZXJuZWwJCS92bWxpbnV6LTIuNi4zMi00Mi1nZW5lcmljIHJvb3Q9
VVVJRD0yNjZlNzFhZi1lMTQ1LTQ5NWItYjM4Zi0yZGExZjQ0NDg4NWQgcm8gIHNpbmdsZQ0KaW5p
dHJkCQkvaW5pdHJkLmltZy0yLjYuMzItNDItZ2VuZXJpYw0KDQp0aXRsZQkJVWJ1bnR1IDEwLjA0
LjQgTFRTLCBrZXJuZWwgMi42LjMyLTM4LWdlbmVyaWMNCnV1aWQJCThlZGYwZTFiLTVmOWMtNGNh
MC04Zjg4LTc3ZDM1YWY4NzA5Mw0Ka2VybmVsCQkvdm1saW51ei0yLjYuMzItMzgtZ2VuZXJpYyBy
b290PVVVSUQ9MjY2ZTcxYWYtZTE0NS00OTViLWIzOGYtMmRhMWY0NDQ4ODVkIHJvIHF1aWV0IHNw
bGFzaCANCmluaXRyZAkJL2luaXRyZC5pbWctMi42LjMyLTM4LWdlbmVyaWMNCg0KdGl0bGUJCVVi
dW50dSAxMC4wNC40IExUUywga2VybmVsIDIuNi4zMi0zOC1nZW5lcmljIChyZWNvdmVyeSBtb2Rl
KQ0KdXVpZAkJOGVkZjBlMWItNWY5Yy00Y2EwLThmODgtNzdkMzVhZjg3MDkzDQprZXJuZWwJCS92
bWxpbnV6LTIuNi4zMi0zOC1nZW5lcmljIHJvb3Q9VVVJRD0yNjZlNzFhZi1lMTQ1LTQ5NWItYjM4
Zi0yZGExZjQ0NDg4NWQgcm8gIHNpbmdsZQ0KaW5pdHJkCQkvaW5pdHJkLmltZy0yLjYuMzItMzgt
Z2VuZXJpYw0KDQp0aXRsZQkJQ2hhaW5sb2FkIGludG8gR1JVQiAyDQpyb290CQk4ZWRmMGUxYi01
ZjljLTRjYTAtOGY4OC03N2QzNWFmODcwOTMNCmtlcm5lbAkJL2Jvb3QvZ3J1Yi9jb3JlLmltZw0K
DQojdGl0bGUJCVVidW50dSAxMC4wNC40IExUUywgbWVtdGVzdDg2Kw0KI3V1aWQJCThlZGYwZTFi
LTVmOWMtNGNhMC04Zjg4LTc3ZDM1YWY4NzA5Mw0KI2tlcm5lbAkJL21lbXRlc3Q4NisuYmluDQoN
CiMjIyBFTkQgREVCSUFOIEFVVE9NQUdJQyBLRVJORUxTIExJU1QNCg0KPT09PT09PT09PT09PT09
PT09PT09PT09PT09IHNkYTYvYm9vdC9ncnViL2dydWIuY2ZnOiA9PT09PT09PT09PT09PT09PT09
PT09PT09PT0NCg0KIw0KIyBETyBOT1QgRURJVCBUSElTIEZJTEUNCiMNCiMgSXQgaXMgYXV0b21h
dGljYWxseSBnZW5lcmF0ZWQgYnkgL3Vzci9zYmluL2dydWItbWtjb25maWcgdXNpbmcgdGVtcGxh
dGVzDQojIGZyb20gL2V0Yy9ncnViLmQgYW5kIHNldHRpbmdzIGZyb20gL2V0Yy9kZWZhdWx0L2dy
dWINCiMNCg0KIyMjIEJFR0lOIC9ldGMvZ3J1Yi5kLzAwX2hlYWRlciAjIyMNCmlmIFsgLXMgJHBy
ZWZpeC9ncnViZW52IF07IHRoZW4NCiAgbG9hZF9lbnYNCmZpDQpzZXQgZGVmYXVsdD0iMCINCmlm
IFsgJHtwcmV2X3NhdmVkX2VudHJ5fSBdOyB0aGVuDQogIHNldCBzYXZlZF9lbnRyeT0ke3ByZXZf
c2F2ZWRfZW50cnl9DQogIHNhdmVfZW52IHNhdmVkX2VudHJ5DQogIHNldCBwcmV2X3NhdmVkX2Vu
dHJ5PQ0KICBzYXZlX2VudiBwcmV2X3NhdmVkX2VudHJ5DQogIHNldCBib290X29uY2U9dHJ1ZQ0K
ZmkNCg0KZnVuY3Rpb24gc2F2ZWRlZmF1bHQgew0KICBpZiBbIC16ICR7Ym9vdF9vbmNlfSBdOyB0
aGVuDQogICAgc2F2ZWRfZW50cnk9JHtjaG9zZW59DQogICAgc2F2ZV9lbnYgc2F2ZWRfZW50cnkN
CiAgZmkNCn0NCg0KZnVuY3Rpb24gcmVjb3JkZmFpbCB7DQogIHNldCByZWNvcmRmYWlsPTENCiAg
aWYgWyAtbiAke2hhdmVfZ3J1YmVudn0gXTsgdGhlbiBpZiBbIC16ICR7Ym9vdF9vbmNlfSBdOyB0
aGVuIHNhdmVfZW52IHJlY29yZGZhaWw7IGZpOyBmaQ0KfQ0KaW5zbW9kIGV4dDINCnNldCByb290
PScoaGQwLDYpJw0Kc2VhcmNoIC0tbm8tZmxvcHB5IC0tZnMtdXVpZCAtLXNldCAyNjZlNzFhZi1l
MTQ1LTQ5NWItYjM4Zi0yZGExZjQ0NDg4NWQNCmlmIGxvYWRmb250IC91c3Ivc2hhcmUvZ3J1Yi91
bmljb2RlLnBmMiA7IHRoZW4NCiAgc2V0IGdmeG1vZGU9NjQweDQ4MA0KICBpbnNtb2QgZ2Z4dGVy
bQ0KICBpbnNtb2QgdmJlDQogIGlmIHRlcm1pbmFsX291dHB1dCBnZnh0ZXJtIDsgdGhlbiB0cnVl
IDsgZWxzZQ0KICAgICMgRm9yIGJhY2t3YXJkIGNvbXBhdGliaWxpdHkgd2l0aCB2ZXJzaW9ucyBv
ZiB0ZXJtaW5hbC5tb2QgdGhhdCBkb24ndA0KICAgICMgdW5kZXJzdGFuZCB0ZXJtaW5hbF9vdXRw
dXQNCiAgICB0ZXJtaW5hbCBnZnh0ZXJtDQogIGZpDQpmaQ0KaW5zbW9kIGV4dDINCnNldCByb290
PScoaGQwLDIpJw0Kc2VhcmNoIC0tbm8tZmxvcHB5IC0tZnMtdXVpZCAtLXNldCA4ZWRmMGUxYi01
ZjljLTRjYTAtOGY4OC03N2QzNWFmODcwOTMNCnNldCBsb2NhbGVfZGlyPSgkcm9vdCkvZ3J1Yi9s
b2NhbGUNCnNldCBsYW5nPWVuDQppbnNtb2QgZ2V0dGV4dA0KaWYgWyAke3JlY29yZGZhaWx9ID0g
MSBdOyB0aGVuDQogIHNldCB0aW1lb3V0PS0xDQplbHNlDQogIHNldCB0aW1lb3V0PTEwDQpmaQ0K
IyMjIEVORCAvZXRjL2dydWIuZC8wMF9oZWFkZXIgIyMjDQoNCiMjIyBCRUdJTiAvZXRjL2dydWIu
ZC8wNV9kZWJpYW5fdGhlbWUgIyMjDQpzZXQgbWVudV9jb2xvcl9ub3JtYWw9d2hpdGUvYmxhY2sN
CnNldCBtZW51X2NvbG9yX2hpZ2hsaWdodD1ibGFjay9saWdodC1ncmF5DQojIyMgRU5EIC9ldGMv
Z3J1Yi5kLzA1X2RlYmlhbl90aGVtZSAjIyMNCg0KIyMjIEJFR0lOIC9ldGMvZ3J1Yi5kLzA4X3hl
biAjIyMNCm1lbnVlbnRyeSAiWGVuIFVuc3RhYmxlIDQuMiBSQzMgLyBEZWJpYW4gU3F1ZWV6ZSBr
ZXJuZWwgMi42LjMyLjQwIiB7DQogICAgICAgIGluc21vZCBleHQyDQogICAgICAgIHNldCByb290
PScoaGQwLDQpJw0KICAgICAgICBtdWx0aWJvb3QgKGhkMCwxKS94ZW4tNC4yLjAtcmMzLXByZS5n
eiBkdW1teQ0KICAgICAgICBtb2R1bGUgKGhkMCwxKS92bWxpbnV6LTIuNi4zMi40MCBkdW1teSBy
b290PVVVSUQ9MjY2ZTcxYWYtZTE0NS00OTViLWIzOGYtMmRhMWY0NDQ4ODVkIHJvIHF1aWV0IGNv
bnNvbGU9dHR5MCBub21vZGVzZXQgcm9vdGRlbGF5PTEzMA0KICAgICAgICBtb2R1bGUgKGhkMCwx
KS9pbml0cmQuaW1nLTIuNi4zMi40MA0KfQ0KIyMjIEVORCAvZXRjL2dydWIuZC8wOF94ZW4gIyMj
DQoNCiMjIyBCRUdJTiAvZXRjL2dydWIuZC8xMF9saW51eCAjIyMNCm1lbnVlbnRyeSAnVWJ1bnR1
LCB3aXRoIExpbnV4IDMuMS4wLXJjOSsnIC0tY2xhc3MgdWJ1bnR1IC0tY2xhc3MgZ251LWxpbnV4
IC0tY2xhc3MgZ251IC0tY2xhc3Mgb3Mgew0KCXJlY29yZGZhaWwNCglpbnNtb2QgZXh0Mg0KCXNl
dCByb290PScoaGQwLDIpJw0KCXNlYXJjaCAtLW5vLWZsb3BweSAtLWZzLXV1aWQgLS1zZXQgOGVk
ZjBlMWItNWY5Yy00Y2EwLThmODgtNzdkMzVhZjg3MDkzDQoJbGludXgJL3ZtbGludXotMy4xLjAt
cmM5KyByb290PS9kZXYvc2RhNiBybyAgIHF1aWV0IHNwbGFzaA0KfQ0KbWVudWVudHJ5ICdVYnVu
dHUsIHdpdGggTGludXggMi42LjMyLjQwJyAtLWNsYXNzIHVidW50dSAtLWNsYXNzIGdudS1saW51
eCAtLWNsYXNzIGdudSAtLWNsYXNzIG9zIHsNCglyZWNvcmRmYWlsDQoJaW5zbW9kIGV4dDINCglz
ZXQgcm9vdD0nKGhkMCwyKScNCglzZWFyY2ggLS1uby1mbG9wcHkgLS1mcy11dWlkIC0tc2V0IDhl
ZGYwZTFiLTVmOWMtNGNhMC04Zjg4LTc3ZDM1YWY4NzA5Mw0KCWxpbnV4CS92bWxpbnV6LTIuNi4z
Mi40MCByb290PVVVSUQ9MjY2ZTcxYWYtZTE0NS00OTViLWIzOGYtMmRhMWY0NDQ4ODVkIHJvICAg
cXVpZXQgc3BsYXNoDQoJaW5pdHJkCS9pbml0cmQuaW1nLTIuNi4zMi40MA0KfQ0KbWVudWVudHJ5
ICdVYnVudHUsIHdpdGggTGludXggMi42LjMyLTQyLWdlbmVyaWMnIC0tY2xhc3MgdWJ1bnR1IC0t
Y2xhc3MgZ251LWxpbnV4IC0tY2xhc3MgZ251IC0tY2xhc3Mgb3Mgew0KCXJlY29yZGZhaWwNCglp
bnNtb2QgZXh0Mg0KCXNldCByb290PScoaGQwLDIpJw0KCXNlYXJjaCAtLW5vLWZsb3BweSAtLWZz
LXV1aWQgLS1zZXQgOGVkZjBlMWItNWY5Yy00Y2EwLThmODgtNzdkMzVhZjg3MDkzDQoJbGludXgJ
L3ZtbGludXotMi42LjMyLTQyLWdlbmVyaWMgcm9vdD1VVUlEPTI2NmU3MWFmLWUxNDUtNDk1Yi1i
MzhmLTJkYTFmNDQ0ODg1ZCBybyAgIHF1aWV0IHNwbGFzaA0KCWluaXRyZAkvaW5pdHJkLmltZy0y
LjYuMzItNDItZ2VuZXJpYw0KfQ0KbWVudWVudHJ5ICdVYnVudHUsIHdpdGggTGludXggMi42LjMy
LTM4LWdlbmVyaWMnIC0tY2xhc3MgdWJ1bnR1IC0tY2xhc3MgZ251LWxpbnV4IC0tY2xhc3MgZ251
IC0tY2xhc3Mgb3Mgew0KCXJlY29yZGZhaWwNCglpbnNtb2QgZXh0Mg0KCXNldCByb290PScoaGQw
LDIpJw0KCXNlYXJjaCAtLW5vLWZsb3BweSAtLWZzLXV1aWQgLS1zZXQgOGVkZjBlMWItNWY5Yy00
Y2EwLThmODgtNzdkMzVhZjg3MDkzDQoJbGludXgJL3ZtbGludXotMi42LjMyLTM4LWdlbmVyaWMg
cm9vdD1VVUlEPTI2NmU3MWFmLWUxNDUtNDk1Yi1iMzhmLTJkYTFmNDQ0ODg1ZCBybyAgIHF1aWV0
IHNwbGFzaA0KCWluaXRyZAkvaW5pdHJkLmltZy0yLjYuMzItMzgtZ2VuZXJpYw0KfQ0KIyMjIEVO
RCAvZXRjL2dydWIuZC8xMF9saW51eCAjIyMNCg0KIyMjIEJFR0lOIC9ldGMvZ3J1Yi5kLzIwX21l
bXRlc3Q4NisgIyMjDQptZW51ZW50cnkgIk1lbW9yeSB0ZXN0IChtZW10ZXN0ODYrKSIgew0KCWlu
c21vZCBleHQyDQoJc2V0IHJvb3Q9JyhoZDAsMiknDQoJc2VhcmNoIC0tbm8tZmxvcHB5IC0tZnMt
dXVpZCAtLXNldCA4ZWRmMGUxYi01ZjljLTRjYTAtOGY4OC03N2QzNWFmODcwOTMNCglsaW51eDE2
CS9tZW10ZXN0ODYrLmJpbg0KfQ0KbWVudWVudHJ5ICJNZW1vcnkgdGVzdCAobWVtdGVzdDg2Kywg
c2VyaWFsIGNvbnNvbGUgMTE1MjAwKSIgew0KCWluc21vZCBleHQyDQoJc2V0IHJvb3Q9JyhoZDAs
MiknDQoJc2VhcmNoIC0tbm8tZmxvcHB5IC0tZnMtdXVpZCAtLXNldCA4ZWRmMGUxYi01ZjljLTRj
YTAtOGY4OC03N2QzNWFmODcwOTMNCglsaW51eDE2CS9tZW10ZXN0ODYrLmJpbiBjb25zb2xlPXR0
eVMwLDExNTIwMG44DQp9DQojIyMgRU5EIC9ldGMvZ3J1Yi5kLzIwX21lbXRlc3Q4NisgIyMjDQoN
CiMjIyBCRUdJTiAvZXRjL2dydWIuZC8zMF9vcy1wcm9iZXIgIyMjDQojIyMgRU5EIC9ldGMvZ3J1
Yi5kLzMwX29zLXByb2JlciAjIyMNCg0KIyMjIEJFR0lOIC9ldGMvZ3J1Yi5kLzQwX2N1c3RvbSAj
IyMNCiMgVGhpcyBmaWxlIHByb3ZpZGVzIGFuIGVhc3kgd2F5IHRvIGFkZCBjdXN0b20gbWVudSBl
bnRyaWVzLiAgU2ltcGx5IHR5cGUgdGhlDQojIG1lbnUgZW50cmllcyB5b3Ugd2FudCB0byBhZGQg
YWZ0ZXIgdGhpcyBjb21tZW50LiAgQmUgY2FyZWZ1bCBub3QgdG8gY2hhbmdlDQojIHRoZSAnZXhl
YyB0YWlsJyBsaW5lIGFib3ZlLg0KIyMjIEVORCAvZXRjL2dydWIuZC80MF9jdXN0b20gIyMjDQoN
Cj09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT0gc2RhNi9ldGMvZnN0YWI6ID09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT0NCg0KIyAvZXRjL2ZzdGFiOiBzdGF0aWMgZmlsZSBzeXN0
ZW0gaW5mb3JtYXRpb24uDQojDQojIFVzZSAnYmxraWQgLW8gdmFsdWUgLXMgVVVJRCcgdG8gcHJp
bnQgdGhlIHVuaXZlcnNhbGx5IHVuaXF1ZSBpZGVudGlmaWVyDQojIGZvciBhIGRldmljZTsgdGhp
cyBtYXkgYmUgdXNlZCB3aXRoIFVVSUQ9IGFzIGEgbW9yZSByb2J1c3Qgd2F5IHRvIG5hbWUNCiMg
ZGV2aWNlcyB0aGF0IHdvcmtzIGV2ZW4gaWYgZGlza3MgYXJlIGFkZGVkIGFuZCByZW1vdmVkLiBT
ZWUgZnN0YWIoNSkuDQojDQojIDxmaWxlIHN5c3RlbT4gPG1vdW50IHBvaW50PiAgIDx0eXBlPiAg
PG9wdGlvbnM+ICAgICAgIDxkdW1wPiAgPHBhc3M+DQpwcm9jICAgICAgICAgICAgL3Byb2MgICAg
ICAgICAgIHByb2MgICAgbm9kZXYsbm9leGVjLG5vc3VpZCAwICAgICAgIDANCiMgLyB3YXMgb24g
L2Rldi9zZGE2IGR1cmluZyBpbnN0YWxsYXRpb24NClVVSUQ9MjY2ZTcxYWYtZTE0NS00OTViLWIz
OGYtMmRhMWY0NDQ4ODVkIC8gICAgICAgICAgICAgICBleHQ0ICAgIGVycm9ycz1yZW1vdW50LXJv
IDAgICAgICAgMQ0KIyAvYm9vdCB3YXMgb24gL2Rldi9zZGEyIGR1cmluZyBpbnN0YWxsYXRpb24N
ClVVSUQ9OGVkZjBlMWItNWY5Yy00Y2EwLThmODgtNzdkMzVhZjg3MDkzIC9ib290ICAgICAgICAg
ICBleHQ0ICAgIGRlZmF1bHRzICAgICAgICAwICAgICAgIDINCiMgc3dhcCB3YXMgb24gL2Rldi9z
ZGE1IGR1cmluZyBpbnN0YWxsYXRpb24NClVVSUQ9YjVlNWM0MGUtMTkzYy00YzZkLTkwNjgtNDVj
YzAzM2I2NmE5IG5vbmUgICAgICAgICAgICBzd2FwICAgIHN3ICAgICAgICAgICAgICAwICAgICAg
IDANCg0KbmFzLTFnOi9leHBvcnQvdXRpbHMvc2NyYXRjaAkvc2FwbW50L3NjcmF0Y2gJCW5mcyAJ
ZGVmYXVsdHMgMCAwDQpuYXMtMWc6L2V4cG9ydC92aXJ0dWFsX21hY2hpbmVzIAkvc2FwbW50L3Zp
cnR1YWxfbWFjaGluZXMgCW5mcyAJZGVmYXVsdHMgMCAwDQoNCj09PT09PT09PT09PT09PT09PT0g
c2RhNjogTG9jYXRpb24gb2YgZmlsZXMgbG9hZGVkIGJ5IEdydWI6ID09PT09PT09PT09PT09PT09
PT0NCg0KDQogIDIwLjBHQjogYm9vdC9ncnViL2dydWIuY2ZnDQogIDIwLjBHQjogYm9vdC9ncnVi
L21lbnUubHN0DQogIDIwLjBHQjogYm9vdC9ncnViL3N0YWdlMg0KICAyMC4wR0I6IGJvb3QvaW5p
dHJkLmltZy0yLjYuMzItMzgtZ2VuZXJpYw0KICAyMC4wR0I6IGJvb3QvaW5pdHJkLmltZy0yLjYu
MzIuNDANCiAgMjAuMEdCOiBib290L2luaXRyZC5pbWctMi42LjMyLTQyLWdlbmVyaWMNCiAgMjAu
MEdCOiBib290L3ZtbGludXotMi42LjMyLTM4LWdlbmVyaWMNCiAgMjAuMEdCOiBib290L3ZtbGlu
dXotMi42LjMyLjQwDQogIDIwLjBHQjogYm9vdC92bWxpbnV6LTIuNi4zMi00Mi1nZW5lcmljDQog
IDIwLjBHQjogYm9vdC92bWxpbnV6LTMuMS4wLXJjOSsNCiAgMjAuMEdCOiBpbml0cmQuaW1nDQog
IDIwLjBHQjogaW5pdHJkLmltZy5vbGQNCiAgMjAuMEdCOiB2bWxpbnV6DQogIDIwLjBHQjogdm1s
aW51ei5vbGQNCg0KPT09PT09PT09PT09PT09PT09PT09PT09PT09IHNkYTcvYm9vdC9ncnViL2dy
dWIuY2ZnOiA9PT09PT09PT09PT09PT09PT09PT09PT09PT0NCg0KIw0KIyBETyBOT1QgRURJVCBU
SElTIEZJTEUNCiMNCiMgSXQgaXMgYXV0b21hdGljYWxseSBnZW5lcmF0ZWQgYnkgL3Vzci9zYmlu
L2dydWItbWtjb25maWcgdXNpbmcgdGVtcGxhdGVzDQojIGZyb20gL2V0Yy9ncnViLmQgYW5kIHNl
dHRpbmdzIGZyb20gL2V0Yy9kZWZhdWx0L2dydWINCiMNCg0KIyMjIEJFR0lOIC9ldGMvZ3J1Yi5k
LzAwX2hlYWRlciAjIyMNCmlmIFsgLXMgJHByZWZpeC9ncnViZW52IF07IHRoZW4NCiAgbG9hZF9l
bnYNCmZpDQpzZXQgZGVmYXVsdD0iMCINCmlmIFsgJHtwcmV2X3NhdmVkX2VudHJ5fSBdOyB0aGVu
DQogIHNldCBzYXZlZF9lbnRyeT0ke3ByZXZfc2F2ZWRfZW50cnl9DQogIHNhdmVfZW52IHNhdmVk
X2VudHJ5DQogIHNldCBwcmV2X3NhdmVkX2VudHJ5PQ0KICBzYXZlX2VudiBwcmV2X3NhdmVkX2Vu
dHJ5DQogIHNldCBib290X29uY2U9dHJ1ZQ0KZmkNCg0KZnVuY3Rpb24gc2F2ZWRlZmF1bHQgew0K
ICBpZiBbIC16ICR7Ym9vdF9vbmNlfSBdOyB0aGVuDQogICAgc2F2ZWRfZW50cnk9JHtjaG9zZW59
DQogICAgc2F2ZV9lbnYgc2F2ZWRfZW50cnkNCiAgZmkNCn0NCg0KZnVuY3Rpb24gcmVjb3JkZmFp
bCB7DQogIHNldCByZWNvcmRmYWlsPTENCiAgaWYgWyAtbiAke2hhdmVfZ3J1YmVudn0gXTsgdGhl
biBpZiBbIC16ICR7Ym9vdF9vbmNlfSBdOyB0aGVuIHNhdmVfZW52IHJlY29yZGZhaWw7IGZpOyBm
aQ0KfQ0KaW5zbW9kIGV4dDINCnNldCByb290PScoaGQwLDcpJw0Kc2VhcmNoIC0tbm8tZmxvcHB5
IC0tZnMtdXVpZCAtLXNldCA1MmYzMmU1ZC05YjQzLTQ5NzItYmIyZS1lOTcxMzNkZDJjODANCmlm
IGxvYWRmb250IC91c3Ivc2hhcmUvZ3J1Yi91bmljb2RlLnBmMiA7IHRoZW4NCiAgc2V0IGdmeG1v
ZGU9NjQweDQ4MA0KICBpbnNtb2QgZ2Z4dGVybQ0KICBpbnNtb2QgdmJlDQogIGlmIHRlcm1pbmFs
X291dHB1dCBnZnh0ZXJtIDsgdGhlbiB0cnVlIDsgZWxzZQ0KICAgICMgRm9yIGJhY2t3YXJkIGNv
bXBhdGliaWxpdHkgd2l0aCB2ZXJzaW9ucyBvZiB0ZXJtaW5hbC5tb2QgdGhhdCBkb24ndA0KICAg
ICMgdW5kZXJzdGFuZCB0ZXJtaW5hbF9vdXRwdXQNCiAgICB0ZXJtaW5hbCBnZnh0ZXJtDQogIGZp
DQpmaQ0KaW5zbW9kIGV4dDINCnNldCByb290PScoaGQwLDcpJw0Kc2VhcmNoIC0tbm8tZmxvcHB5
IC0tZnMtdXVpZCAtLXNldCA1MmYzMmU1ZC05YjQzLTQ5NzItYmIyZS1lOTcxMzNkZDJjODANCnNl
dCBsb2NhbGVfZGlyPSgkcm9vdCkvYm9vdC9ncnViL2xvY2FsZQ0Kc2V0IGxhbmc9ZW4NCmluc21v
ZCBnZXR0ZXh0DQppZiBbICR7cmVjb3JkZmFpbH0gPSAxIF07IHRoZW4NCiAgc2V0IHRpbWVvdXQ9
LTENCmVsc2UNCiAgc2V0IHRpbWVvdXQ9LTENCmZpDQojIyMgRU5EIC9ldGMvZ3J1Yi5kLzAwX2hl
YWRlciAjIyMNCg0KIyMjIEJFR0lOIC9ldGMvZ3J1Yi5kLzA1X2RlYmlhbl90aGVtZSAjIyMNCnNl
dCBtZW51X2NvbG9yX25vcm1hbD13aGl0ZS9ibGFjaw0Kc2V0IG1lbnVfY29sb3JfaGlnaGxpZ2h0
PWJsYWNrL2xpZ2h0LWdyYXkNCiMjIyBFTkQgL2V0Yy9ncnViLmQvMDVfZGViaWFuX3RoZW1lICMj
Iw0KDQojIyMgQkVHSU4gL2V0Yy9ncnViLmQvMDhfeGVuICMjIw0KbWVudWVudHJ5ICJYZW4gVW5z
dGFibGUgNC4yIFJDMyAvIERlYmlhbiBTcXVlZXplIGtlcm5lbCAyLjYuMzIuNDAiIHsNCiAgICAg
ICAgaW5zbW9kIGV4dDINCiAgICAgICAgaW5zbW9kIGV4dDMNCiAgICAgICAgc2V0IHJvb3Q9Jyho
ZDAsNyknDQogICAgICAgIG11bHRpYm9vdCAvYm9vdC94ZW4tNC4yLjAtcmMzLXByZS5neiBkdW1t
eQ0KICAgICAgICBtb2R1bGUgL2Jvb3Qvdm1saW51ei0yLjYuMzIuNDAgZHVtbXkgcm9vdD0vZGV2
L3NkYTcgcm8gcXVpZXQgY29uc29sZT10dHkwIG5vbW9kZXNldCByb290ZGVsYXk9NTANCiAgICAg
ICAgbW9kdWxlIC9ib290L2luaXRyZC5pbWctMi42LjMyLjQwDQp9DQojIyMgRU5EIC9ldGMvZ3J1
Yi5kLzA4X3hlbiAjIyMNCg0KIyMjIEJFR0lOIC9ldGMvZ3J1Yi5kLzEwX2xpbnV4ICMjIw0KbWVu
dWVudHJ5ICdVYnVudHUsIHdpdGggTGludXggMy4xLjAtcmM5KycgLS1jbGFzcyB1YnVudHUgLS1j
bGFzcyBnbnUtbGludXggLS1jbGFzcyBnbnUgLS1jbGFzcyBvcyB7DQoJcmVjb3JkZmFpbA0KCWlu
c21vZCBleHQyDQoJc2V0IHJvb3Q9JyhoZDAsNyknDQoJc2VhcmNoIC0tbm8tZmxvcHB5IC0tZnMt
dXVpZCAtLXNldCA1MmYzMmU1ZC05YjQzLTQ5NzItYmIyZS1lOTcxMzNkZDJjODANCglsaW51eAkv
Ym9vdC92bWxpbnV6LTMuMS4wLXJjOSsgcm9vdD0vZGV2L3NkYTcgcm8gICBxdWlldCBzcGxhc2gN
Cn0NCm1lbnVlbnRyeSAnVWJ1bnR1LCB3aXRoIExpbnV4IDMuMS4wLXJjOSsgKHJlY292ZXJ5IG1v
ZGUpJyAtLWNsYXNzIHVidW50dSAtLWNsYXNzIGdudS1saW51eCAtLWNsYXNzIGdudSAtLWNsYXNz
IG9zIHsNCglyZWNvcmRmYWlsDQoJaW5zbW9kIGV4dDINCglzZXQgcm9vdD0nKGhkMCw3KScNCglz
ZWFyY2ggLS1uby1mbG9wcHkgLS1mcy11dWlkIC0tc2V0IDUyZjMyZTVkLTliNDMtNDk3Mi1iYjJl
LWU5NzEzM2RkMmM4MA0KCWVjaG8JJ0xvYWRpbmcgTGludXggMy4xLjAtcmM5KyAuLi4nDQoJbGlu
dXgJL2Jvb3Qvdm1saW51ei0zLjEuMC1yYzkrIHJvb3Q9L2Rldi9zZGE3IHJvIHNpbmdsZSANCgll
Y2hvCSdMb2FkaW5nIGluaXRpYWwgcmFtZGlzayAuLi4nDQp9DQptZW51ZW50cnkgJ1VidW50dSwg
d2l0aCBMaW51eCAyLjYuMzIuNDAnIC0tY2xhc3MgdWJ1bnR1IC0tY2xhc3MgZ251LWxpbnV4IC0t
Y2xhc3MgZ251IC0tY2xhc3Mgb3Mgew0KCXJlY29yZGZhaWwNCglpbnNtb2QgZXh0Mg0KCXNldCBy
b290PScoaGQwLDcpJw0KCXNlYXJjaCAtLW5vLWZsb3BweSAtLWZzLXV1aWQgLS1zZXQgNTJmMzJl
NWQtOWI0My00OTcyLWJiMmUtZTk3MTMzZGQyYzgwDQoJbGludXgJL2Jvb3Qvdm1saW51ei0yLjYu
MzIuNDAgcm9vdD1VVUlEPTUyZjMyZTVkLTliNDMtNDk3Mi1iYjJlLWU5NzEzM2RkMmM4MCBybyAg
IHF1aWV0IHNwbGFzaA0KCWluaXRyZAkvYm9vdC9pbml0cmQuaW1nLTIuNi4zMi40MA0KfQ0KbWVu
dWVudHJ5ICdVYnVudHUsIHdpdGggTGludXggMi42LjMyLjQwIChyZWNvdmVyeSBtb2RlKScgLS1j
bGFzcyB1YnVudHUgLS1jbGFzcyBnbnUtbGludXggLS1jbGFzcyBnbnUgLS1jbGFzcyBvcyB7DQoJ
cmVjb3JkZmFpbA0KCWluc21vZCBleHQyDQoJc2V0IHJvb3Q9JyhoZDAsNyknDQoJc2VhcmNoIC0t
bm8tZmxvcHB5IC0tZnMtdXVpZCAtLXNldCA1MmYzMmU1ZC05YjQzLTQ5NzItYmIyZS1lOTcxMzNk
ZDJjODANCgllY2hvCSdMb2FkaW5nIExpbnV4IDIuNi4zMi40MCAuLi4nDQoJbGludXgJL2Jvb3Qv
dm1saW51ei0yLjYuMzIuNDAgcm9vdD1VVUlEPTUyZjMyZTVkLTliNDMtNDk3Mi1iYjJlLWU5NzEz
M2RkMmM4MCBybyBzaW5nbGUgDQoJZWNobwknTG9hZGluZyBpbml0aWFsIHJhbWRpc2sgLi4uJw0K
CWluaXRyZAkvYm9vdC9pbml0cmQuaW1nLTIuNi4zMi40MA0KfQ0KbWVudWVudHJ5ICdVYnVudHUs
IHdpdGggTGludXggMi42LjMyLTQyLWdlbmVyaWMnIC0tY2xhc3MgdWJ1bnR1IC0tY2xhc3MgZ251
LWxpbnV4IC0tY2xhc3MgZ251IC0tY2xhc3Mgb3Mgew0KCXJlY29yZGZhaWwNCglpbnNtb2QgZXh0
Mg0KCXNldCByb290PScoaGQwLDcpJw0KCXNlYXJjaCAtLW5vLWZsb3BweSAtLWZzLXV1aWQgLS1z
ZXQgNTJmMzJlNWQtOWI0My00OTcyLWJiMmUtZTk3MTMzZGQyYzgwDQoJbGludXgJL2Jvb3Qvdm1s
aW51ei0yLjYuMzItNDItZ2VuZXJpYyByb290PVVVSUQ9NTJmMzJlNWQtOWI0My00OTcyLWJiMmUt
ZTk3MTMzZGQyYzgwIHJvICAgcXVpZXQgc3BsYXNoDQoJaW5pdHJkCS9ib290L2luaXRyZC5pbWct
Mi42LjMyLTQyLWdlbmVyaWMNCn0NCm1lbnVlbnRyeSAnVWJ1bnR1LCB3aXRoIExpbnV4IDIuNi4z
Mi00Mi1nZW5lcmljIChyZWNvdmVyeSBtb2RlKScgLS1jbGFzcyB1YnVudHUgLS1jbGFzcyBnbnUt
bGludXggLS1jbGFzcyBnbnUgLS1jbGFzcyBvcyB7DQoJcmVjb3JkZmFpbA0KCWluc21vZCBleHQy
DQoJc2V0IHJvb3Q9JyhoZDAsNyknDQoJc2VhcmNoIC0tbm8tZmxvcHB5IC0tZnMtdXVpZCAtLXNl
dCA1MmYzMmU1ZC05YjQzLTQ5NzItYmIyZS1lOTcxMzNkZDJjODANCgllY2hvCSdMb2FkaW5nIExp
bnV4IDIuNi4zMi00Mi1nZW5lcmljIC4uLicNCglsaW51eAkvYm9vdC92bWxpbnV6LTIuNi4zMi00
Mi1nZW5lcmljIHJvb3Q9VVVJRD01MmYzMmU1ZC05YjQzLTQ5NzItYmIyZS1lOTcxMzNkZDJjODAg
cm8gc2luZ2xlIA0KCWVjaG8JJ0xvYWRpbmcgaW5pdGlhbCByYW1kaXNrIC4uLicNCglpbml0cmQJ
L2Jvb3QvaW5pdHJkLmltZy0yLjYuMzItNDItZ2VuZXJpYw0KfQ0KbWVudWVudHJ5ICdVYnVudHUs
IHdpdGggTGludXggMi42LjMyLTM4LWdlbmVyaWMnIC0tY2xhc3MgdWJ1bnR1IC0tY2xhc3MgZ251
LWxpbnV4IC0tY2xhc3MgZ251IC0tY2xhc3Mgb3Mgew0KCXJlY29yZGZhaWwNCglpbnNtb2QgZXh0
Mg0KCXNldCByb290PScoaGQwLDcpJw0KCXNlYXJjaCAtLW5vLWZsb3BweSAtLWZzLXV1aWQgLS1z
ZXQgNTJmMzJlNWQtOWI0My00OTcyLWJiMmUtZTk3MTMzZGQyYzgwDQoJbGludXgJL2Jvb3Qvdm1s
aW51ei0yLjYuMzItMzgtZ2VuZXJpYyByb290PVVVSUQ9NTJmMzJlNWQtOWI0My00OTcyLWJiMmUt
ZTk3MTMzZGQyYzgwIHJvICAgcXVpZXQgc3BsYXNoDQoJaW5pdHJkCS9ib290L2luaXRyZC5pbWct
Mi42LjMyLTM4LWdlbmVyaWMNCn0NCm1lbnVlbnRyeSAnVWJ1bnR1LCB3aXRoIExpbnV4IDIuNi4z
Mi0zOC1nZW5lcmljIChyZWNvdmVyeSBtb2RlKScgLS1jbGFzcyB1YnVudHUgLS1jbGFzcyBnbnUt
bGludXggLS1jbGFzcyBnbnUgLS1jbGFzcyBvcyB7DQoJcmVjb3JkZmFpbA0KCWluc21vZCBleHQy
DQoJc2V0IHJvb3Q9JyhoZDAsNyknDQoJc2VhcmNoIC0tbm8tZmxvcHB5IC0tZnMtdXVpZCAtLXNl
dCA1MmYzMmU1ZC05YjQzLTQ5NzItYmIyZS1lOTcxMzNkZDJjODANCgllY2hvCSdMb2FkaW5nIExp
bnV4IDIuNi4zMi0zOC1nZW5lcmljIC4uLicNCglsaW51eAkvYm9vdC92bWxpbnV6LTIuNi4zMi0z
OC1nZW5lcmljIHJvb3Q9VVVJRD01MmYzMmU1ZC05YjQzLTQ5NzItYmIyZS1lOTcxMzNkZDJjODAg
cm8gc2luZ2xlIA0KCWVjaG8JJ0xvYWRpbmcgaW5pdGlhbCByYW1kaXNrIC4uLicNCglpbml0cmQJ
L2Jvb3QvaW5pdHJkLmltZy0yLjYuMzItMzgtZ2VuZXJpYw0KfQ0KIyMjIEVORCAvZXRjL2dydWIu
ZC8xMF9saW51eCAjIyMNCg0KIyMjIEJFR0lOIC9ldGMvZ3J1Yi5kLzMwX29zLXByb2JlciAjIyMN
CiMjIyBFTkQgL2V0Yy9ncnViLmQvMzBfb3MtcHJvYmVyICMjIw0KDQojIyMgQkVHSU4gL2V0Yy9n
cnViLmQvNDBfY3VzdG9tICMjIw0KIyBUaGlzIGZpbGUgcHJvdmlkZXMgYW4gZWFzeSB3YXkgdG8g
YWRkIGN1c3RvbSBtZW51IGVudHJpZXMuICBTaW1wbHkgdHlwZSB0aGUNCiMgbWVudSBlbnRyaWVz
IHlvdSB3YW50IHRvIGFkZCBhZnRlciB0aGlzIGNvbW1lbnQuICBCZSBjYXJlZnVsIG5vdCB0byBj
aGFuZ2UNCiMgdGhlICdleGVjIHRhaWwnIGxpbmUgYWJvdmUuDQoNCg0KIyMjIEVORCAvZXRjL2dy
dWIuZC80MF9jdXN0b20gIyMjDQoNCj09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT0gc2Rh
Ny9ldGMvZnN0YWI6ID09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT0NCg0KIyAvZXRjL2Zz
dGFiOiBzdGF0aWMgZmlsZSBzeXN0ZW0gaW5mb3JtYXRpb24uDQojDQojIFVzZSAnYmxraWQgLW8g
dmFsdWUgLXMgVVVJRCcgdG8gcHJpbnQgdGhlIHVuaXZlcnNhbGx5IHVuaXF1ZSBpZGVudGlmaWVy
DQojIGZvciBhIGRldmljZTsgdGhpcyBtYXkgYmUgdXNlZCB3aXRoIFVVSUQ9IGFzIGEgbW9yZSBy
b2J1c3Qgd2F5IHRvIG5hbWUNCiMgZGV2aWNlcyB0aGF0IHdvcmtzIGV2ZW4gaWYgZGlza3MgYXJl
IGFkZGVkIGFuZCByZW1vdmVkLiBTZWUgZnN0YWIoNSkuDQojDQojIDxmaWxlIHN5c3RlbT4gPG1v
dW50IHBvaW50PiAgIDx0eXBlPiAgPG9wdGlvbnM+ICAgICAgIDxkdW1wPiAgPHBhc3M+DQpwcm9j
ICAgICAgICAgICAgL3Byb2MgICAgICAgICAgIHByb2MgICAgbm9kZXYsbm9leGVjLG5vc3VpZCAw
ICAgICAgIDANCiMgLyB3YXMgb24gL2Rldi9zZGE2IGR1cmluZyBpbnN0YWxsYXRpb24NClVVSUQ9
MjY2ZTcxYWYtZTE0NS00OTViLWIzOGYtMmRhMWY0NDQ4ODVkIC8gICAgICAgICAgICAgICBleHQ0
ICAgIGVycm9ycz1yZW1vdW50LXJvIDAgICAgICAgMQ0KIyAvYm9vdCB3YXMgb24gL2Rldi9zZGEy
IGR1cmluZyBpbnN0YWxsYXRpb24NClVVSUQ9OGVkZjBlMWItNWY5Yy00Y2EwLThmODgtNzdkMzVh
Zjg3MDkzIC9ib290ICAgICAgICAgICBleHQ0ICAgIGRlZmF1bHRzICAgICAgICAwICAgICAgIDIN
CiMgc3dhcCB3YXMgb24gL2Rldi9zZGE1IGR1cmluZyBpbnN0YWxsYXRpb24NClVVSUQ9YjVlNWM0
MGUtMTkzYy00YzZkLTkwNjgtNDVjYzAzM2I2NmE5IG5vbmUgICAgICAgICAgICBzd2FwICAgIHN3
ICAgICAgICAgICAgICAwICAgICAgIDANCg0KbmFzLTFnOi9leHBvcnQvdXRpbHMvc2NyYXRjaAkv
c2FwbW50L3NjcmF0Y2gJCW5mcyAJZGVmYXVsdHMgMCAwDQpuYXMtMWc6L2V4cG9ydC92aXJ0dWFs
X21hY2hpbmVzIAkvc2FwbW50L3ZpcnR1YWxfbWFjaGluZXMgCW5mcyAJZGVmYXVsdHMgMCAwDQoN
Cj09PT09PT09PT09PT09PT09PT0gc2RhNzogTG9jYXRpb24gb2YgZmlsZXMgbG9hZGVkIGJ5IEdy
dWI6ID09PT09PT09PT09PT09PT09PT0NCg0KDQogIDcwLjBHQjogYm9vdC9ncnViL2dydWIuY2Zn
DQogIDcwLjBHQjogYm9vdC9pbml0cmQuaW1nLTIuNi4zMi0zOC1nZW5lcmljDQogIDcwLjBHQjog
Ym9vdC9pbml0cmQuaW1nLTIuNi4zMi40MA0KICA3MC4wR0I6IGJvb3QvaW5pdHJkLmltZy0yLjYu
MzItNDItZ2VuZXJpYw0KICA3MC4wR0I6IGJvb3Qvdm1saW51ei0yLjYuMzItMzgtZ2VuZXJpYw0K
ICA3MC4wR0I6IGJvb3Qvdm1saW51ei0yLjYuMzIuNDANCiAgNzAuMEdCOiBib290L3ZtbGludXot
Mi42LjMyLTQyLWdlbmVyaWMNCiAgNzAuMEdCOiBib290L3ZtbGludXotMy4xLjAtcmM5Kw0KICA3
MC4wR0I6IGluaXRyZC5pbWcNCiAgNzAuMEdCOiBpbml0cmQuaW1nLm9sZA0KICA3MC4wR0I6IHZt
bGludXoNCiAgNzAuMEdCOiB2bWxpbnV6Lm9sZA0K

--_004_3ED5771B034E314C8AC54D893482B5F51A783DA1F5DEWDFECCR08wd_
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--_004_3ED5771B034E314C8AC54D893482B5F51A783DA1F5DEWDFECCR08wd_--


From xen-devel-bounces@lists.xen.org Wed Aug 22 13:15:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 13:15:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4Ale-0005C2-5i; Wed, 22 Aug 2012 13:14:42 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Siva.Palagummi@ca.com>) id 1T4Alc-0005Bx-U9
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 13:14:41 +0000
Received: from [85.158.138.51:55053] by server-3.bemta-3.messagelabs.com id
	ED/10-13809-04BD4305; Wed, 22 Aug 2012 13:14:40 +0000
X-Env-Sender: Siva.Palagummi@ca.com
X-Msg-Ref: server-2.tower-174.messagelabs.com!1345641276!27456748!1
X-Originating-IP: [74.125.149.151]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9736 invoked from network); 22 Aug 2012 13:14:38 -0000
Received: from na3sys009aog124.obsmtp.com (HELO na3sys009aog124.obsmtp.com)
	(74.125.149.151)
	by server-2.tower-174.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 22 Aug 2012 13:14:38 -0000
Received: from INHYMS190.ca.com ([155.35.46.47]) (using TLSv1) by
	na3sys009aob124.postini.com ([74.125.148.12]) with SMTP
	ID DSNKUDTbO+kbBQACdkjMcTVPdFKwQVrJ2l3q@postini.com;
	Wed, 22 Aug 2012 06:14:38 PDT
Received: from INHYMS173.ca.com (155.35.35.47) by INHYMS190.ca.com
	(155.35.46.47) with Microsoft SMTP Server (TLS) id 14.1.355.2;
	Wed, 22 Aug 2012 18:44:33 +0530
Received: from INHYMS111B.ca.com ([169.254.4.84]) by INHYMS173.ca.com
	([155.35.35.47]) with mapi id 14.01.0355.002;
	Wed, 22 Aug 2012 18:44:33 +0530
From: "Palagummi, Siva" <Siva.Palagummi@ca.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Thread-Topic: [Xen-devel] [PATCH RFC] xen/netback: Count ring slots properly
	when larger MTU sizes are used
Thread-Index: Ac146Cc2rIyVmg4PT4CbrdsCnnwOsQAF0kYAAFVDoZABRxX9AAAzUKyA
Date: Wed, 22 Aug 2012 13:14:32 +0000
Message-ID: <7D7C26B1462EB14CB0E7246697A18C1311B4BD@INHYMS111B.ca.com>
References: <7D7C26B1462EB14CB0E7246697A18C1310F8E2@INHYMS111B.ca.com>
	<5028D6AC0200007800094651@nat28.tlf.novell.com>
	<7D7C26B1462EB14CB0E7246697A18C13119012@INHYMS111B.ca.com>
	<1345554888.6821.56.camel@zakaz.uk.xensource.com>
In-Reply-To: <1345554888.6821.56.camel@zakaz.uk.xensource.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-cr-puzzleid: {004484A1-D73C-4963-9928-D1F77BDA7281}
x-cr-hashedpuzzle: NT0= ABoF BQoA EQxX EfBr FyvX GaGU IrUV J+kb KDIZ Upoi
	WI5S X6PB Y7vh dZna h0gh; 3;
	aQBhAG4ALgBjAGEAbQBwAGIAZQBsAGwAQABjAGkAdAByAGkAeAAuAGMAbwBtADsAagBiAGUAdQBsAGkAYwBoAEAAcwB1AHMAZQAuAGMAbwBtADsAeABlAG4ALQBkAGUAdgBlAGwAQABsAGkAcwB0AHMALgB4AGUAbgAuAG8AcgBnAA==;
	Sosha1_v1; 7; {004484A1-D73C-4963-9928-D1F77BDA7281};
	cwBpAHYAYQAuAHAAYQBsAGEAZwB1AG0AbQBpAEAAYwBhAC4AYwBvAG0A; Wed,
	22 Aug 2012 13:12:27 GMT;
	UgBFADoAIABbAFgAZQBuAC0AZABlAHYAZQBsAF0AIABbAFAAQQBUAEMASAAgAFIARgBDAF0AIAB4AGUAbgAvAG4AZQB0AGIAYQBjAGsAOgAgAEMAbwB1AG4AdAAgAHIAaQBuAGcAIABzAGwAbwB0AHMAIABwAHIAbwBwAGUAcgBsAHkAIAB3AGgAZQBuACAAbABhAHIAZwBlAHIAIABNAFQAVQAgAHMAaQB6AGUAcwAgAGEAcgBlACAAdQBzAGUAZAA=
x-originating-ip: [10.134.16.218]
MIME-Version: 1.0
Cc: Jan Beulich <JBeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH RFC] xen/netback: Count ring slots properly
 when larger MTU sizes are used
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



> -----Original Message-----
> From: Ian Campbell [mailto:Ian.Campbell@citrix.com]
> Sent: Tuesday, August 21, 2012 6:45 PM
> To: Palagummi, Siva
> Cc: Jan Beulich; xen-devel@lists.xen.org
> Subject: Re: [Xen-devel] [PATCH RFC] xen/netback: Count ring slots
> properly when larger MTU sizes are used
> 
> On Tue, 2012-08-14 at 22:17 +0100, Palagummi, Siva wrote:
> >
> > > -----Original Message-----
> > > From: Jan Beulich [mailto:JBeulich@suse.com]
> > > Sent: Monday, August 13, 2012 1:58 PM
> > > To: Palagummi, Siva
> > > Cc: xen-devel@lists.xen.org
> > > Subject: Re: [Xen-devel] [PATCH RFC] xen/netback: Count ring slots
> > > properly when larger MTU sizes are used
> > >
> > > >>> On 13.08.12 at 02:12, "Palagummi, Siva" <Siva.Palagummi@ca.com>
> > > wrote:
> > > >--- a/drivers/net/xen-netback/netback.c	2012-01-25
> 19:39:32.000000000
> > > -0500
> > > >+++ b/drivers/net/xen-netback/netback.c	2012-08-12
> 15:50:50.000000000
> > > -0400
> > > >@@ -623,6 +623,24 @@ static void xen_netbk_rx_action(struct x
> > > >
> > > > 		count += nr_frags + 1;
> > > >
> > > >+		/*
> > > >+		 * The logic here should be somewhat similar to
> > > >+		 * xen_netbk_count_skb_slots. In case of larger MTU
> size,
> > >
> > > Is there a reason why you can't simply use that function then?
> > > Afaict it's being used on the very same skb before it gets put on
> > > rx_queue already anyway.
> > >
> >
> > I did think about it. But this would mean iterating through similar
> > piece of code twice with additional function calls.
> > netbk_gop_skb-->netbk_gop_frag_copy sequence is actually executing
> > similar code. And also not sure about any other implications. So
> > decided to fix it by adding few lines of code in line.
> 
> I wonder if we could cache the result of the call to
> xen_netbk_count_skb_slots in xenvif_start_xmit somewhere?
> 
> > > >+		 * skb head length may be more than a PAGE_SIZE. We
> need to
> > > >+		 * consider ring slots consumed by that data. If we
> do not,
> > > >+		 * then within this loop itself we end up consuming
> more
> > > meta
> > > >+		 * slots turning the BUG_ON below. With this fix we
> may end
> > > up
> > > >+		 * iterating through xen_netbk_rx_action multiple
> times
> > > >+		 * instead of crashing netback thread.
> > > >+		 */
> > > >+
> > > >+
> > > >+		count += DIV_ROUND_UP(skb_headlen(skb), PAGE_SIZE);
> > >
> > > This now over-accounts by one I think (due to the "+ 1" above;
> > > the calculation here really is to replace that increment).
> > >
> > > Jan
> > >
> > I also wasn't sure about the actual purpose of "+1" above whether it
> > is meant to take care of skb_headlen or non zero gso_size case or
> some
> > other case.
> 
> I think it's intention was to account for skb_headlen and therefore it
> should be replaced.
> 
> >   That's why I left it like that so that I can exit the loop on safer
> > side. If someone who knows this area of code can confirm that we do
> > not need it, I will create a new patch. In my environment I did
> > observe that "count" is always greater than
> > actual meta slots produced because of this additional "+1" with my
> > patch. When I took out this extra addition then count is always equal
> > to actual meta slots produced and loop is exiting safely with more
> > meta slots produced under heavy traffic.
> 
> I think that's an argument for removing it as well.
> 
> The + 1 leading to an early exit seems benign when you think about one
> largish skb but imagine if you had 200 small (single page) skbs -- then
> you have effectively halved the size of the ring (or at least the
> batch).
> 
I agree, and that is what I have observed as well: xen_netbk_kick_thread gets invoked to finish the work in the next iteration. I will create a patch replacing the "+1" with DIV_ROUND_UP(skb_headlen(skb), PAGE_SIZE).
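For reference, the accounting change being proposed can be sketched in isolation like this. This is only an illustration of the arithmetic, not the actual netback code: count_slots, head_len and the PAGE_SIZE value are stand-ins for skb_headlen(skb), skb_shinfo(skb)->nr_frags and skb_shinfo(skb)->gso_size in xen_netbk_rx_action.

```c
#include <assert.h>
#include <stddef.h>

#define PAGE_SIZE 4096UL
/* Same rounding the kernel's DIV_ROUND_UP macro performs. */
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/*
 * Illustrative slot count for one skb: the old code charged
 * "nr_frags + 1", which under-counts when the linear area spans
 * more than one page (large MTU), and the extra "+1" over-counts
 * once DIV_ROUND_UP is used for the head.
 */
static unsigned long count_slots(size_t head_len, unsigned int nr_frags,
                                 unsigned int gso_size)
{
        unsigned long count = nr_frags;

        /* Linear area may need several ring slots with large MTUs. */
        count += DIV_ROUND_UP(head_len, PAGE_SIZE);

        if (gso_size)
                count++;        /* extra slot for the GSO metadata */

        return count;
}
```

With a 9000-byte linear area the head alone needs three 4 KiB slots, which the old "nr_frags + 1" estimate missed.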


> This:
> 		/* Filled the batch queue? */
> 		if (count + MAX_SKB_FRAGS >= XEN_NETIF_RX_RING_SIZE)
> 			break;
> seems a bit iffy to me too. I wonder if MAX_SKB_FRAGS should be
> max_required_rx_slots(vif)? Or maybe the preflight checks from
> xenvif_start_xmit save us from this fate?
> 
> Ian.

You are right, Ian. The intention of this check seems to be to ensure that enough slots are left before picking up the next skb. But instead of invoking max_required_rx_slots against the skb we have already dequeued, we may have to skb_peek and invoke max_required_rx_slots on the skb we are about to dequeue. Is there a better way?
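The peek-before-dequeue flow suggested above can be modelled in miniature as follows. This is a toy sketch, not netback code: the array-based queue, queue_peek, fill_batch and the per-packet "slots" field are hypothetical stand-ins for skb_peek/__skb_dequeue and whatever max_required_rx_slots(vif) would report for the peeked skb.

```c
#include <assert.h>
#include <stddef.h>

#define RING_SIZE 256

/* Each queued packet knows how many ring slots it would consume. */
struct pkt {
        unsigned int slots;
};

static const struct pkt *queue_peek(const struct pkt *q, size_t head,
                                    size_t len)
{
        return head < len ? &q[head] : NULL;
}

/*
 * Peek at the next packet, compute the slots it needs, and only
 * dequeue it if the batch still has room; otherwise stop and leave
 * it for the next pass. Returns how many packets were taken.
 */
static size_t fill_batch(const struct pkt *q, size_t len)
{
        size_t head = 0;
        unsigned int count = 0;
        const struct pkt *next;

        while ((next = queue_peek(q, head, len)) != NULL) {
                if (count + next->slots > RING_SIZE)
                        break;          /* not enough slots for this one */
                count += next->slots;   /* safe to dequeue and process */
                head++;
        }
        return head;
}
```

The design point is that the capacity test uses the requirement of the packet still on the queue, rather than a fixed worst-case constant like MAX_SKB_FRAGS applied after the fact.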

Thanks
Siva


> 
> >
> > Thanks
> > Siva
> >
> > > >+
> > > >+		if (skb_shinfo(skb)->gso_size)
> > > >+			count++;
> > > >+
> > > > 		__skb_queue_tail(&rxq, skb);
> > > >
> > > > 		/* Filled the batch queue? */
> > >
> >
> >
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> may end
> > > up
> > > >+		 * iterating through xen_netbk_rx_action multiple
> times
> > > >+		 * instead of crashing netback thread.
> > > >+		 */
> > > >+
> > > >+
> > > >+		count += DIV_ROUND_UP(skb_headlen(skb), PAGE_SIZE);
> > >
> > > This now over-accounts by one I think (due to the "+ 1" above;
> > > the calculation here really is to replace that increment).
> > >
> > > Jan
> > >
> > I also wasn't sure about the actual purpose of "+1" above whether it
> > is meant to take care of skb_headlen or non zero gso_size case or
> some
> > other case.
> 
> I think its intention was to account for skb_headlen and therefore it
> should be replaced.
> 
> >   That's why I left it like that so that I can exit the loop on safer
> > side. If someone who knows this area of code can confirm that we do
> > not need it, I will create a new patch. In my environment I did
> > observe that "count" is always greater than
> > actual meta slots produced because of this additional "+1" with my
> > patch. When I took out this extra addition then count is always equal
> > to actual meta slots produced and loop is exiting safely with more
> > meta slots produced under heavy traffic.
> 
> I think that's an argument for removing it as well.
> 
> The + 1 leading to an early exit seems benign when you think about one
> largish skb but imagine if you had 200 small (single page) skbs -- then
> you have effectively halved the size of the ring (or at least the
> batch).
> 
I agree, and that's what I have observed as well: xen_netbk_kick_thread gets invoked to finish the work in the next iteration. I will create a patch replacing the "+1" with DIV_ROUND_UP(skb_headlen(skb), PAGE_SIZE).


> This:
> 		/* Filled the batch queue? */
> 		if (count + MAX_SKB_FRAGS >= XEN_NETIF_RX_RING_SIZE)
> 			break;
> seems a bit iffy to me too. I wonder if MAX_SKB_FRAGS should be
> max_required_rx_slots(vif)? Or maybe the preflight checks from
> xenvif_start_xmit save us from this fate?
> 
> Ian.

You are right, Ian. The intention of this check seems to be to ensure that enough slots remain before picking up the next skb. But instead of invoking max_required_rx_slots with the already-dequeued skb, we may have to skb_peek the skb we are about to dequeue and invoke max_required_rx_slots on that. Is there a better way?

Thanks
Siva


> 
> >
> > Thanks
> > Siva
> >
> > > >+
> > > >+		if (skb_shinfo(skb)->gso_size)
> > > >+			count++;
> > > >+
> > > > 		__skb_queue_tail(&rxq, skb);
> > > >
> > > > 		/* Filled the batch queue? */
> > >
> >
> >
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 13:16:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 13:16:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4AnS-0005GL-MQ; Wed, 22 Aug 2012 13:16:34 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ehabkost@redhat.com>) id 1T4AnQ-0005GE-V1
	for xen-devel@lists.xensource.com; Wed, 22 Aug 2012 13:16:33 +0000
Received: from [85.158.143.35:63467] by server-2.bemta-4.messagelabs.com id
	08/34-21239-0BBD4305; Wed, 22 Aug 2012 13:16:32 +0000
X-Env-Sender: ehabkost@redhat.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1345640957!11305063!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQ3MTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20883 invoked from network); 22 Aug 2012 13:09:17 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-11.tower-21.messagelabs.com with SMTP;
	22 Aug 2012 13:09:17 -0000
Received: from int-mx10.intmail.prod.int.phx2.redhat.com
	(int-mx10.intmail.prod.int.phx2.redhat.com [10.5.11.23])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id q7MD91EF010982
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 22 Aug 2012 09:09:01 -0400
Received: from blackpad.lan.raisama.net (vpn1-6-84.gru2.redhat.com
	[10.97.6.84])
	by int-mx10.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP
	id q7MD8xYt010559; Wed, 22 Aug 2012 09:08:59 -0400
Received: by blackpad.lan.raisama.net (Postfix, from userid 500)
	id 9D3A9202862; Wed, 22 Aug 2012 10:08:48 -0300 (BRT)
Date: Wed, 22 Aug 2012 10:08:48 -0300
From: Eduardo Habkost <ehabkost@redhat.com>
To: Avi Kivity <avi@redhat.com>
Message-ID: <20120822130848.GF2886@otherpad.lan.raisama.net>
References: <1345563782-11224-1-git-send-email-ehabkost@redhat.com>
	<5034A0E8.1080507@redhat.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <5034A0E8.1080507@redhat.com>
X-Fnord: you can see the fnord
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.23
Cc: peter.maydell@linaro.org, jan.kiszka@siemens.com, mjt@tls.msk.ru,
	qemu-devel@nongnu.org, armbru@redhat.com, blauwirbel@gmail.com,
	kraxel@redhat.com, xen-devel@lists.xensource.com,
	i.mitsyanko@samsung.com, mdroth@linux.vnet.ibm.com,
	anthony.perard@citrix.com, lersek@redhat.com,
	stefanha@linux.vnet.ibm.com, stefano.stabellini@eu.citrix.com,
	sw@weilnetz.de, imammedo@redhat.com, lcapitulino@redhat.com,
	rth@twiddle.net, kwolf@redhat.com, aliguori@us.ibm.com,
	mtosatti@redhat.com, pbonzini@redhat.com, afaerber@suse.de
Subject: Re: [Xen-devel] [RFC 0/8] include qdev core in *-user,
 make CPU child of DeviceState
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Aug 22, 2012 at 12:05:44PM +0300, Avi Kivity wrote:
> On 08/21/2012 06:42 PM, Eduardo Habkost wrote:
> > So, here's a third suggestion to the CPU/DeviceState problem. Basically I split
> > the qdev code into a core (that can be easily compiled into *-user), and a part
> > specific to qemu-system-*.
> > 
> 
> I'm barging in late here, so sorry if this has been suggested and shot
> down: is it not possible to use composition here?
> 
>   typedef ... CPU;
> 
>   typedef struct CPUState {
>       DeviceState qdev;
>       CPU cpu;
>   } CPUState;
> 
> But I guess bringing qdev to -user is inevitable.

I guess it would be OK, and almost equivalent to the suggestion by
Anthony (use a different parent class for the CPU class on system-* and
*-user), as most state today is in the arch-specific classes.

The only problem I see is when some part of the CPU code starts using a
DeviceState feature (e.g. calling x86_cpu_realize() only at
qdev_init() time). Then we would have to duplicate some code to make
*-user work differently (not much code, I guess, but having two
implementations would still make it easier to break).

-- 
Eduardo

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 13:22:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 13:22:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4AtB-0005U0-Fz; Wed, 22 Aug 2012 13:22:29 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T4At9-0005Tu-8I
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 13:22:27 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1345641725!3384864!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTAzMzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21225 invoked from network); 22 Aug 2012 13:22:10 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 13:22:10 -0000
X-IronPort-AV: E=Sophos;i="4.77,808,1336348800"; d="scan'208";a="14126829"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 13:22:05 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 22 Aug 2012 14:22:05 +0100
Message-ID: <1345641723.12501.2.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "Palagummi, Siva" <Siva.Palagummi@ca.com>
Date: Wed, 22 Aug 2012 14:22:03 +0100
In-Reply-To: <7D7C26B1462EB14CB0E7246697A18C1311B4BD@INHYMS111B.ca.com>
References: <7D7C26B1462EB14CB0E7246697A18C1310F8E2@INHYMS111B.ca.com>
	<5028D6AC0200007800094651@nat28.tlf.novell.com>
	<7D7C26B1462EB14CB0E7246697A18C13119012@INHYMS111B.ca.com>
	<1345554888.6821.56.camel@zakaz.uk.xensource.com>
	<7D7C26B1462EB14CB0E7246697A18C1311B4BD@INHYMS111B.ca.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Jan Beulich <JBeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH RFC] xen/netback: Count ring slots properly
 when larger MTU sizes are used
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-08-22 at 14:14 +0100, Palagummi, Siva wrote:
> 
> > -----Original Message-----
> > From: Ian Campbell [mailto:Ian.Campbell@citrix.com]
> > Sent: Tuesday, August 21, 2012 6:45 PM
> > To: Palagummi, Siva
> > Cc: Jan Beulich; xen-devel@lists.xen.org
> > Subject: Re: [Xen-devel] [PATCH RFC] xen/netback: Count ring slots
> > properly when larger MTU sizes are used
> > 
> > On Tue, 2012-08-14 at 22:17 +0100, Palagummi, Siva wrote:
> > >
> > > > -----Original Message-----
> > > > From: Jan Beulich [mailto:JBeulich@suse.com]
> > > > Sent: Monday, August 13, 2012 1:58 PM
> > > > To: Palagummi, Siva
> > > > Cc: xen-devel@lists.xen.org
> > > > Subject: Re: [Xen-devel] [PATCH RFC] xen/netback: Count ring slots
> > > > properly when larger MTU sizes are used
> > > >
> > > > >>> On 13.08.12 at 02:12, "Palagummi, Siva" <Siva.Palagummi@ca.com>
> > > > wrote:
> > > > >--- a/drivers/net/xen-netback/netback.c	2012-01-25
> > 19:39:32.000000000
> > > > -0500
> > > > >+++ b/drivers/net/xen-netback/netback.c	2012-08-12
> > 15:50:50.000000000
> > > > -0400
> > > > >@@ -623,6 +623,24 @@ static void xen_netbk_rx_action(struct x
> > > > >
> > > > > 		count += nr_frags + 1;
> > > > >
> > > > >+		/*
> > > > >+		 * The logic here should be somewhat similar to
> > > > >+		 * xen_netbk_count_skb_slots. In case of larger MTU
> > size,
> > > >
> > > > Is there a reason why you can't simply use that function then?
> > > > Afaict it's being used on the very same skb before it gets put on
> > > > rx_queue already anyway.
> > > >
> > >
> > > I did think about it. But this would mean iterating through similar
> > > piece of code twice with additional function calls.
> > > netbk_gop_skb-->netbk_gop_frag_copy sequence is actually executing
> > > similar code. And also not sure about any other implications. So
> > > decided to fix it by adding few lines of code in line.
> > 
> > I wonder if we could cache the result of the call to
> > xen_netbk_count_skb_slots in xenvif_start_xmit somewhere?
> > 
> > > > >+		 * skb head length may be more than a PAGE_SIZE. We
> > need to
> > > > >+		 * consider ring slots consumed by that data. If we
> > do not,
> > > > >+		 * then within this loop itself we end up consuming
> > more
> > > > meta
> > > > >+		 * slots turning the BUG_ON below. With this fix we
> > may end
> > > > up
> > > > >+		 * iterating through xen_netbk_rx_action multiple
> > times
> > > > >+		 * instead of crashing netback thread.
> > > > >+		 */
> > > > >+
> > > > >+
> > > > >+		count += DIV_ROUND_UP(skb_headlen(skb), PAGE_SIZE);
> > > >
> > > > This now over-accounts by one I think (due to the "+ 1" above;
> > > > the calculation here really is to replace that increment).
> > > >
> > > > Jan
> > > >
> > > I also wasn't sure about the actual purpose of "+1" above whether it
> > > is meant to take care of skb_headlen or non zero gso_size case or
> > some
> > > other case.
> > 
> > I think its intention was to account for skb_headlen and therefore it
> > should be replaced.
> > 
> > >   That's why I left it like that so that I can exit the loop on safer
> > > side. If someone who knows this area of code can confirm that we do
> > > not need it, I will create a new patch. In my environment I did
> > > observe that "count" is always greater than
> > > actual meta slots produced because of this additional "+1" with my
> > > patch. When I took out this extra addition then count is always equal
> > > to actual meta slots produced and loop is exiting safely with more
> > > meta slots produced under heavy traffic.
> > 
> > I think that's an argument for removing it as well.
> > 
> > The + 1 leading to an early exit seems benign when you think about one
> > largish skb but imagine if you had 200 small (single page) skbs -- then
> > you have effectively halved the size of the ring (or at least the
> > batch).
> > 
> I agree, and that's what I have observed as well: xen_netbk_kick_thread gets invoked to finish the work in the next iteration. I will create a patch replacing the "+1" with DIV_ROUND_UP(skb_headlen(skb), PAGE_SIZE).
> 
> 
> > This:
> > 		/* Filled the batch queue? */
> > 		if (count + MAX_SKB_FRAGS >= XEN_NETIF_RX_RING_SIZE)
> > 			break;
> > seems a bit iffy to me too. I wonder if MAX_SKB_FRAGS should be
> > max_required_rx_slots(vif)? Or maybe the preflight checks from
> > xenvif_start_xmit save us from this fate?
> > 
> > Ian.
> 
> You are right, Ian. The intention of this check seems to be to ensure
> that enough slots remain before picking up the next skb. But instead
> of invoking max_required_rx_slots with the already-dequeued skb, we
> may have to skb_peek the skb we are about to dequeue and invoke
> max_required_rx_slots on that. Is there a better way?

max_required_rx_slots doesn't take an skb as an argument, just a vif. It
returns the worst case number of slots for any skb on that vif.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 13:29:07 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 13:29:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4AzK-0005d2-AY; Wed, 22 Aug 2012 13:28:50 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T4AzJ-0005cw-5l
	for xen-devel@lists.xensource.com; Wed, 22 Aug 2012 13:28:49 +0000
Received: from [85.158.143.99:41118] by server-3.bemta-4.messagelabs.com id
	97/3A-09529-09ED4305; Wed, 22 Aug 2012 13:28:48 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-216.messagelabs.com!1345642127!20996932!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTAzMzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22891 invoked from network); 22 Aug 2012 13:28:47 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-10.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 13:28:47 -0000
X-IronPort-AV: E=Sophos;i="4.77,808,1336348800"; d="scan'208";a="14126983"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 13:27:47 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 22 Aug 2012 14:27:47 +0100
Message-ID: <1345642066.12501.6.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Wed, 22 Aug 2012 14:27:46 +0100
In-Reply-To: <alpine.DEB.2.02.1208221213070.15568@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208141559360.21096@kaball.uk.xensource.com>
	<alpine.DEB.2.02.1208221213070.15568@kaball.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [PATCH] libxl/qemu-xen: use cache=writeback for IDE
 and cache=none for SCSI
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-08-22 at 12:13 +0100, Stefano Stabellini wrote:
> On Tue, 14 Aug 2012, Stefano Stabellini wrote:
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> 
> ping

Is this an appropriate change during the RCs? Is it critical for 4.2,
or should it wait for 4.3?

I think the changelog here is rather lacking, which makes it hard for me
to decide what to do, e.g. the usual stuff: why are you making this
change? What is the impact? etc.

What is the default for all these cases?

> > diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
> > index 0c0084f..1c94e80 100644
> > --- a/tools/libxl/libxl_dm.c
> > +++ b/tools/libxl/libxl_dm.c
> > @@ -549,10 +549,10 @@ static char ** libxl__build_device_model_args_new(libxl__gc *gc,
> >              if (disks[i].is_cdrom) {
> >                  if (disks[i].format == LIBXL_DISK_FORMAT_EMPTY)
> >                      drive = libxl__sprintf
> > -                        (gc, "if=ide,index=%d,media=cdrom", disk);
> > +                        (gc, "if=ide,index=%d,media=cdrom,cache=writeback", disk);

Why does the cacheability matter for an empty device?

> >                  else
> >                      drive = libxl__sprintf
> > -                        (gc, "file=%s,if=ide,index=%d,media=cdrom,format=%s",
> > +                        (gc, "file=%s,if=ide,index=%d,media=cdrom,format=%s,cache=writeback",

Does writeback mean anything for a r/o device?

> >                           disks[i].pdev_path, disk, format);
> >              } else {
> >                  if (disks[i].format == LIBXL_DISK_FORMAT_EMPTY) {
> > @@ -575,11 +575,11 @@ static char ** libxl__build_device_model_args_new(libxl__gc *gc,
> >                   */
> >                  if (strncmp(disks[i].vdev, "sd", 2) == 0)
> >                      drive = libxl__sprintf
> > -                        (gc, "file=%s,if=scsi,bus=0,unit=%d,format=%s",
> > +                        (gc, "file=%s,if=scsi,bus=0,unit=%d,format=%s,cache=none",
> >                           disks[i].pdev_path, disk, format);
> >                  else if (disk < 4)
> >                      drive = libxl__sprintf
> > -                        (gc, "file=%s,if=ide,index=%d,media=disk,format=%s",
> > +                        (gc, "file=%s,if=ide,index=%d,media=disk,format=%s,cache=writeback",

Why is a SCSI disk treated differently from an IDE one wrt caching?

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 13:29:26 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 13:29:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4Aze-0005ec-OV; Wed, 22 Aug 2012 13:29:10 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T4Azd-0005eN-Cz
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 13:29:09 +0000
Received: from [85.158.143.35:38881] by server-3.bemta-4.messagelabs.com id
	A3/3B-09529-4AED4305; Wed, 22 Aug 2012 13:29:08 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1345642146!4770852!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26407 invoked from network); 22 Aug 2012 13:29:06 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-21.messagelabs.com with SMTP;
	22 Aug 2012 13:29:06 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 22 Aug 2012 14:28:20 +0100
Message-Id: <5034FAC00200007800096EA8@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Wed, 22 Aug 2012 14:29:04 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Xudong Hao" <xudong.hao@intel.com>
References: <1345013682-20618-1-git-send-email-xudong.hao@intel.com>
	<502B85020200007800095036@nat28.tlf.novell.com>
	<403610A45A2B5242BD291EDAE8B37D300FE8AB13@SHSMSX102.ccr.corp.intel.com>
	<502CEFC10200007800095727@nat28.tlf.novell.com>
	<403610A45A2B5242BD291EDAE8B37D300FEA18DC@SHSMSX102.ccr.corp.intel.com>
	<502E2C9C0200007800095D33@nat28.tlf.novell.com>
	<403610A45A2B5242BD291EDAE8B37D300FEA36B0@SHSMSX102.ccr.corp.intel.com>
	<5032316202000078000965DF@nat28.tlf.novell.com>
	<403610A45A2B5242BD291EDAE8B37D300FEA5CBD@SHSMSX102.ccr.corp.intel.com>
	<50349901020000780008A4A2@nat28.tlf.novell.com>
	<403610A45A2B5242BD291EDAE8B37D300FEA5EAB@SHSMSX102.ccr.corp.intel.com>
In-Reply-To: <403610A45A2B5242BD291EDAE8B37D300FEA5EAB@SHSMSX102.ccr.corp.intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "ian.jackson@eu.citrix.com" <ian.jackson@eu.citrix.com>,
	Xiantao Zhang <xiantao.zhang@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 1/3] xen/tools: Add 64 bits big bar support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 22.08.12 at 11:59, "Hao, Xudong" <xudong.hao@intel.com> wrote:
>>  -----Original Message-----
>> From: Jan Beulich [mailto:jbeulich@suse.com]
>> Sent: Wednesday, August 22, 2012 3:32 PM
>> To: Hao, Xudong
>> Cc: ian.jackson@eu.citrix.com; Zhang, Xiantao; xen-devel@lists.xen.org 
>> Subject: RE: [Xen-devel] [PATCH 1/3] xen/tools: Add 64 bits big bar support
>> 
>> >>> "Hao, Xudong" <xudong.hao@intel.com> 08/22/12 3:03 AM >>>
>> >> > Where does present the 36-bit physical addresses limit, could you help to
>> >> > point out in the current Xen code?
>> >>
>> >> Look at xen/arch/x86/hvm/mtrr.c, e.g. hvm_mtrr_pat_init() or
>> >> mtrr_var_range_msr_set().
>> >
>> > So if common 36-bit(guest) physical address could not change, can we use
>> > top down from 64G, Jan, do you have any suggestion?
>> 
>> Sorry, I already said that I think the only viable option is top down from
>> top of physical address space. No new address space holes please if at
>> all possible - just do it in ways real firmware would do it (which would
>> unlikely alter the RAM layout for this purpose).
>> 
> 
> If the PCIe device has 64G bar size or more, how to do in current 36 bit, 

Such a device makes no sense on a machine with a 36-bit physical
address limit.

> will we consider to extend the guest physical address? 

???

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 13:30:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 13:30:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4B0p-0005lz-Ej; Wed, 22 Aug 2012 13:30:23 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Siva.Palagummi@ca.com>) id 1T4B0o-0005ln-Ja
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 13:30:22 +0000
Received: from [85.158.138.51:35974] by server-11.bemta-3.messagelabs.com id
	18/33-23152-DEED4305; Wed, 22 Aug 2012 13:30:21 +0000
X-Env-Sender: Siva.Palagummi@ca.com
X-Msg-Ref: server-10.tower-174.messagelabs.com!1345642217!23337398!1
X-Originating-IP: [74.125.149.246]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18179 invoked from network); 22 Aug 2012 13:30:20 -0000
Received: from na3sys009aog119.obsmtp.com (HELO na3sys009aog119.obsmtp.com)
	(74.125.149.246)
	by server-10.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 22 Aug 2012 13:30:20 -0000
Received: from INHYMS190.ca.com ([155.35.46.47]) (using TLSv1) by
	na3sys009aob119.postini.com ([74.125.148.12]) with SMTP
	ID DSNKUDTe6PKLIy4ePQZnyKwk+kPYecZxAS0J@postini.com;
	Wed, 22 Aug 2012 06:30:19 PDT
Received: from INHYMS171.ca.com (155.35.35.45) by INHYMS190.ca.com
	(155.35.46.47) with Microsoft SMTP Server (TLS) id 14.1.355.2;
	Wed, 22 Aug 2012 19:00:14 +0530
Received: from INHYMS111B.ca.com ([169.254.4.84]) by INHYMS171.ca.com
	([155.35.35.45]) with mapi id 14.01.0355.002;
	Wed, 22 Aug 2012 19:00:14 +0530
From: "Palagummi, Siva" <Siva.Palagummi@ca.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Thread-Topic: [Xen-devel] [PATCH RFC] xen/netback: Count ring slots properly
	when larger MTU sizes are used
Thread-Index: Ac146Cc2rIyVmg4PT4CbrdsCnnwOsQAF0kYAAFVDoZABRxX9AAAzUKyA///51oD//6PjEA==
Date: Wed, 22 Aug 2012 13:30:13 +0000
Message-ID: <7D7C26B1462EB14CB0E7246697A18C1311B4D9@INHYMS111B.ca.com>
References: <7D7C26B1462EB14CB0E7246697A18C1310F8E2@INHYMS111B.ca.com>
	<5028D6AC0200007800094651@nat28.tlf.novell.com>
	<7D7C26B1462EB14CB0E7246697A18C13119012@INHYMS111B.ca.com>
	<1345554888.6821.56.camel@zakaz.uk.xensource.com>
	<7D7C26B1462EB14CB0E7246697A18C1311B4BD@INHYMS111B.ca.com>
	<1345641723.12501.2.camel@zakaz.uk.xensource.com>
In-Reply-To: <1345641723.12501.2.camel@zakaz.uk.xensource.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-cr-puzzleid: {5EC0BB75-1073-4D0F-8B59-34838435978D}
x-cr-hashedpuzzle: CyEC JvcZ KVa0 Ke5i Ovnp O5u1 PEFR RD6J VRCz VVyP WqCl
	ZEzN Zkxp eZJW fq4S gd0/; 3;
	aQBhAG4ALgBjAGEAbQBwAGIAZQBsAGwAQABjAGkAdAByAGkAeAAuAGMAbwBtADsAagBiAGUAdQBsAGkAYwBoAEAAcwB1AHMAZQAuAGMAbwBtADsAeABlAG4ALQBkAGUAdgBlAGwAQABsAGkAcwB0AHMALgB4AGUAbgAuAG8AcgBnAA==;
	Sosha1_v1; 7; {5EC0BB75-1073-4D0F-8B59-34838435978D};
	cwBpAHYAYQAuAHAAYQBsAGEAZwB1AG0AbQBpAEAAYwBhAC4AYwBvAG0A; Wed,
	22 Aug 2012 13:28:09 GMT;
	UgBFADoAIABbAFgAZQBuAC0AZABlAHYAZQBsAF0AIABbAFAAQQBUAEMASAAgAFIARgBDAF0AIAB4AGUAbgAvAG4AZQB0AGIAYQBjAGsAOgAgAEMAbwB1AG4AdAAgAHIAaQBuAGcAIABzAGwAbwB0AHMAIABwAHIAbwBwAGUAcgBsAHkAIAB3AGgAZQBuACAAbABhAHIAZwBlAHIAIABNAFQAVQAgAHMAaQB6AGUAcwAgAGEAcgBlACAAdQBzAGUAZAA=
x-originating-ip: [10.134.16.218]
MIME-Version: 1.0
Cc: Jan Beulich <JBeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH RFC] xen/netback: Count ring slots properly
 when larger MTU sizes are used
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


> -----Original Message-----
> From: Ian Campbell [mailto:Ian.Campbell@citrix.com]
> Sent: Wednesday, August 22, 2012 6:52 PM
> To: Palagummi, Siva
> Cc: Jan Beulich; xen-devel@lists.xen.org
> Subject: Re: [Xen-devel] [PATCH RFC] xen/netback: Count ring slots
> properly when larger MTU sizes are used
> 
> On Wed, 2012-08-22 at 14:14 +0100, Palagummi, Siva wrote:
> >
> > > -----Original Message-----
> > > From: Ian Campbell [mailto:Ian.Campbell@citrix.com]
> > > Sent: Tuesday, August 21, 2012 6:45 PM
> > > To: Palagummi, Siva
> > > Cc: Jan Beulich; xen-devel@lists.xen.org
> > > Subject: Re: [Xen-devel] [PATCH RFC] xen/netback: Count ring slots
> > > properly when larger MTU sizes are used
> > >
> > > On Tue, 2012-08-14 at 22:17 +0100, Palagummi, Siva wrote:
> > > >
> > > > > -----Original Message-----
> > > > > From: Jan Beulich [mailto:JBeulich@suse.com]
> > > > > Sent: Monday, August 13, 2012 1:58 PM
> > > > > To: Palagummi, Siva
> > > > > Cc: xen-devel@lists.xen.org
> > > > > Subject: Re: [Xen-devel] [PATCH RFC] xen/netback: Count ring
> slots
> > > > > properly when larger MTU sizes are used
> > > > >
> > > > > >>> On 13.08.12 at 02:12, "Palagummi, Siva"
> <Siva.Palagummi@ca.com>
> > > > > wrote:
> > > > > >--- a/drivers/net/xen-netback/netback.c	2012-01-25
> > > 19:39:32.000000000
> > > > > -0500
> > > > > >+++ b/drivers/net/xen-netback/netback.c	2012-08-12
> > > 15:50:50.000000000
> > > > > -0400
> > > > > >@@ -623,6 +623,24 @@ static void xen_netbk_rx_action(struct x
> > > > > >
> > > > > > 		count += nr_frags + 1;
> > > > > >
> > > > > >+		/*
> > > > > >+		 * The logic here should be somewhat similar to
> > > > > >+		 * xen_netbk_count_skb_slots. In case of larger MTU
> > > size,
> > > > >
> > > > > Is there a reason why you can't simply use that function then?
> > > > > Afaict it's being used on the very same skb before it gets put
> on
> > > > > rx_queue already anyway.
> > > > >
> > > >
> > > > I did think about it. But this would mean iterating through
> similar
> > > > piece of code twice with additional function calls.
> > > > netbk_gop_skb-->netbk_gop_frag_copy sequence is actually
> executing
> > > > similar code. And also not sure about any other implications. So
> > > > decided to fix it by adding few lines of code in line.
> > >
> > > I wonder if we could cache the result of the call to
> > > xen_netbk_count_skb_slots in xenvif_start_xmit somewhere?
> > >
> > > > > >+		 * skb head length may be more than a PAGE_SIZE. We
> > > need to
> > > > > >+		 * consider ring slots consumed by that data. If we
> > > do not,
> > > > > >+		 * then within this loop itself we end up consuming
> > > more
> > > > > meta
> > > > > >+		 * slots turning the BUG_ON below. With this fix we
> > > may end
> > > > > up
> > > > > >+		 * iterating through xen_netbk_rx_action multiple
> > > times
> > > > > >+		 * instead of crashing netback thread.
> > > > > >+		 */
> > > > > >+
> > > > > >+
> > > > > >+		count += DIV_ROUND_UP(skb_headlen(skb), PAGE_SIZE);
> > > > >
> > > > > This now over-accounts by one I think (due to the "+ 1" above;
> > > > > the calculation here really is to replace that increment).
> > > > >
> > > > > Jan
> > > > >
> > > > I also wasn't sure about the actual purpose of "+1" above whether
> it
> > > > is meant to take care of skb_headlen or non zero gso_size case or
> > > some
> > > > other case.
> > >
> > > I think it's intention was to account for skb_headlen and therefore
> it
> > > should be replaced.
> > >
> > > >   That's why I left it like that so that I can exit the loop on
> safer
> > > > side. If someone who knows this area of code can confirm that we
> do
> > > > not need it, I will create a new patch. In my environment I did
> > > > observe that "count" is always greater than
> > > > actual meta slots produced because of this additional "+1" with
> my
> > > > patch. When I took out this extra addition then count is always
> equal
> > > > to actual meta slots produced and loop is exiting safely with
> more
> > > > meta slots produced under heavy traffic.
> > >
> > > I think that's an argument for removing it as well.
> > >
> > > The + 1 leading to an early exit seems benign when you think about
> one
> > > largish skb but imagine if you had 200 small (single page) skbs --
> then
> > > you have effectively halved the size of the ring (or at least the
> > > batch).
> > >
> > I agree and that's what even I have observed where
> xen_netbk_kick_thread is getting invoked to finish work in next
> iteration.  I will create a patch replacing "+1" with
> DIV_ROUND_UP(skb_headlen(skb), PAGE_SIZE).
> >
> >
> > > This:
> > > 		/* Filled the batch queue? */
> > > 		if (count + MAX_SKB_FRAGS >= XEN_NETIF_RX_RING_SIZE)
> > > 			break;
> > > seems a bit iffy to me too. I wonder if MAX_SKB_FRAGS should be
> > > max_required_rx_slots(vif)? Or maybe the preflight checks from
> > > xenvif_start_xmit save us from this fate?
> > >
> > > Ian.
> >
> > You are right Ian. The intention of this check seems to be to ensure
> > that enough slots are still left prior to picking up next skb. But
> > instead of invoking max_required_rx_slots with already received skb,
> > we may have to do skb_peek and invoke max_required_rx_slots on skb
> > that we are about to dequeue. Is there any better way?
> 
> max_required_rx_slots doesn't take an skb as an argument, just a vif.
> It
> returns the worst case number of slots for any skb on that vif.
> 
> Ian.

That's true. What I meant is to peek to next skb and get vif from that structure to invoke max_required_rx_slots. Don't you think we need to do like that?

Thanks
Siva

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 13:30:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 13:30:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4B0p-0005lz-Ej; Wed, 22 Aug 2012 13:30:23 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Siva.Palagummi@ca.com>) id 1T4B0o-0005ln-Ja
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 13:30:22 +0000
Received: from [85.158.138.51:35974] by server-11.bemta-3.messagelabs.com id
	18/33-23152-DEED4305; Wed, 22 Aug 2012 13:30:21 +0000
X-Env-Sender: Siva.Palagummi@ca.com
X-Msg-Ref: server-10.tower-174.messagelabs.com!1345642217!23337398!1
X-Originating-IP: [74.125.149.246]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18179 invoked from network); 22 Aug 2012 13:30:20 -0000
Received: from na3sys009aog119.obsmtp.com (HELO na3sys009aog119.obsmtp.com)
	(74.125.149.246)
	by server-10.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 22 Aug 2012 13:30:20 -0000
Received: from INHYMS190.ca.com ([155.35.46.47]) (using TLSv1) by
	na3sys009aob119.postini.com ([74.125.148.12]) with SMTP
	ID DSNKUDTe6PKLIy4ePQZnyKwk+kPYecZxAS0J@postini.com;
	Wed, 22 Aug 2012 06:30:19 PDT
Received: from INHYMS171.ca.com (155.35.35.45) by INHYMS190.ca.com
	(155.35.46.47) with Microsoft SMTP Server (TLS) id 14.1.355.2;
	Wed, 22 Aug 2012 19:00:14 +0530
Received: from INHYMS111B.ca.com ([169.254.4.84]) by INHYMS171.ca.com
	([155.35.35.45]) with mapi id 14.01.0355.002;
	Wed, 22 Aug 2012 19:00:14 +0530
From: "Palagummi, Siva" <Siva.Palagummi@ca.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Thread-Topic: [Xen-devel] [PATCH RFC] xen/netback: Count ring slots properly
	when larger MTU sizes are used
Thread-Index: Ac146Cc2rIyVmg4PT4CbrdsCnnwOsQAF0kYAAFVDoZABRxX9AAAzUKyA///51oD//6PjEA==
Date: Wed, 22 Aug 2012 13:30:13 +0000
Message-ID: <7D7C26B1462EB14CB0E7246697A18C1311B4D9@INHYMS111B.ca.com>
References: <7D7C26B1462EB14CB0E7246697A18C1310F8E2@INHYMS111B.ca.com>
	<5028D6AC0200007800094651@nat28.tlf.novell.com>
	<7D7C26B1462EB14CB0E7246697A18C13119012@INHYMS111B.ca.com>
	<1345554888.6821.56.camel@zakaz.uk.xensource.com>
	<7D7C26B1462EB14CB0E7246697A18C1311B4BD@INHYMS111B.ca.com>
	<1345641723.12501.2.camel@zakaz.uk.xensource.com>
In-Reply-To: <1345641723.12501.2.camel@zakaz.uk.xensource.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-cr-puzzleid: {5EC0BB75-1073-4D0F-8B59-34838435978D}
x-cr-hashedpuzzle: CyEC JvcZ KVa0 Ke5i Ovnp O5u1 PEFR RD6J VRCz VVyP WqCl
	ZEzN Zkxp eZJW fq4S gd0/; 3;
	aQBhAG4ALgBjAGEAbQBwAGIAZQBsAGwAQABjAGkAdAByAGkAeAAuAGMAbwBtADsAagBiAGUAdQBsAGkAYwBoAEAAcwB1AHMAZQAuAGMAbwBtADsAeABlAG4ALQBkAGUAdgBlAGwAQABsAGkAcwB0AHMALgB4AGUAbgAuAG8AcgBnAA==;
	Sosha1_v1; 7; {5EC0BB75-1073-4D0F-8B59-34838435978D};
	cwBpAHYAYQAuAHAAYQBsAGEAZwB1AG0AbQBpAEAAYwBhAC4AYwBvAG0A; Wed,
	22 Aug 2012 13:28:09 GMT;
	UgBFADoAIABbAFgAZQBuAC0AZABlAHYAZQBsAF0AIABbAFAAQQBUAEMASAAgAFIARgBDAF0AIAB4AGUAbgAvAG4AZQB0AGIAYQBjAGsAOgAgAEMAbwB1AG4AdAAgAHIAaQBuAGcAIABzAGwAbwB0AHMAIABwAHIAbwBwAGUAcgBsAHkAIAB3AGgAZQBuACAAbABhAHIAZwBlAHIAIABNAFQAVQAgAHMAaQB6AGUAcwAgAGEAcgBlACAAdQBzAGUAZAA=
x-originating-ip: [10.134.16.218]
MIME-Version: 1.0
Cc: Jan Beulich <JBeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH RFC] xen/netback: Count ring slots properly
 when larger MTU sizes are used
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

DQoNCj4gLS0tLS1PcmlnaW5hbCBNZXNzYWdlLS0tLS0NCj4gRnJvbTogSWFuIENhbXBiZWxsIFtt
YWlsdG86SWFuLkNhbXBiZWxsQGNpdHJpeC5jb21dDQo+IFNlbnQ6IFdlZG5lc2RheSwgQXVndXN0
IDIyLCAyMDEyIDY6NTIgUE0NCj4gVG86IFBhbGFndW1taSwgU2l2YQ0KPiBDYzogSmFuIEJldWxp
Y2g7IHhlbi1kZXZlbEBsaXN0cy54ZW4ub3JnDQo+IFN1YmplY3Q6IFJlOiBbWGVuLWRldmVsXSBb
UEFUQ0ggUkZDXSB4ZW4vbmV0YmFjazogQ291bnQgcmluZyBzbG90cw0KPiBwcm9wZXJseSB3aGVu
IGxhcmdlciBNVFUgc2l6ZXMgYXJlIHVzZWQNCj4gDQo+IE9uIFdlZCwgMjAxMi0wOC0yMiBhdCAx
NDoxNCArMDEwMCwgUGFsYWd1bW1pLCBTaXZhIHdyb3RlOg0KPiA+DQo+ID4gPiAtLS0tLU9yaWdp
bmFsIE1lc3NhZ2UtLS0tLQ0KPiA+ID4gRnJvbTogSWFuIENhbXBiZWxsIFttYWlsdG86SWFuLkNh
bXBiZWxsQGNpdHJpeC5jb21dDQo+ID4gPiBTZW50OiBUdWVzZGF5LCBBdWd1c3QgMjEsIDIwMTIg
Njo0NSBQTQ0KPiA+ID4gVG86IFBhbGFndW1taSwgU2l2YQ0KPiA+ID4gQ2M6IEphbiBCZXVsaWNo
OyB4ZW4tZGV2ZWxAbGlzdHMueGVuLm9yZw0KPiA+ID4gU3ViamVjdDogUmU6IFtYZW4tZGV2ZWxd
IFtQQVRDSCBSRkNdIHhlbi9uZXRiYWNrOiBDb3VudCByaW5nIHNsb3RzDQo+ID4gPiBwcm9wZXJs
eSB3aGVuIGxhcmdlciBNVFUgc2l6ZXMgYXJlIHVzZWQNCj4gPiA+DQo+ID4gPiBPbiBUdWUsIDIw
MTItMDgtMTQgYXQgMjI6MTcgKzAxMDAsIFBhbGFndW1taSwgU2l2YSB3cm90ZToNCj4gPiA+ID4N
Cj4gPiA+ID4gPiAtLS0tLU9yaWdpbmFsIE1lc3NhZ2UtLS0tLQ0KPiA+ID4gPiA+IEZyb206IEph
biBCZXVsaWNoIFttYWlsdG86SkJldWxpY2hAc3VzZS5jb21dDQo+ID4gPiA+ID4gU2VudDogTW9u
ZGF5LCBBdWd1c3QgMTMsIDIwMTIgMTo1OCBQTQ0KPiA+ID4gPiA+IFRvOiBQYWxhZ3VtbWksIFNp
dmENCj4gPiA+ID4gPiBDYzogeGVuLWRldmVsQGxpc3RzLnhlbi5vcmcNCj4gPiA+ID4gPiBTdWJq
ZWN0OiBSZTogW1hlbi1kZXZlbF0gW1BBVENIIFJGQ10geGVuL25ldGJhY2s6IENvdW50IHJpbmcN
Cj4gc2xvdHMNCj4gPiA+ID4gPiBwcm9wZXJseSB3aGVuIGxhcmdlciBNVFUgc2l6ZXMgYXJlIHVz
ZWQNCj4gPiA+ID4gPg0KPiA+ID4gPiA+ID4+PiBPbiAxMy4wOC4xMiBhdCAwMjoxMiwgIlBhbGFn
dW1taSwgU2l2YSINCj4gPFNpdmEuUGFsYWd1bW1pQGNhLmNvbT4NCj4gPiA+ID4gPiB3cm90ZToN
Cj4gPiA+ID4gPiA+LS0tIGEvZHJpdmVycy9uZXQveGVuLW5ldGJhY2svbmV0YmFjay5jCTIwMTIt
MDEtMjUNCj4gPiA+IDE5OjM5OjMyLjAwMDAwMDAwMA0KPiA+ID4gPiA+IC0wNTAwDQo+ID4gPiA+
ID4gPisrKyBiL2RyaXZlcnMvbmV0L3hlbi1uZXRiYWNrL25ldGJhY2suYwkyMDEyLTA4LTEyDQo+
ID4gPiAxNTo1MDo1MC4wMDAwMDAwMDANCj4gPiA+ID4gPiAtMDQwMA0KPiA+ID4gPiA+ID5AQCAt
NjIzLDYgKzYyMywyNCBAQCBzdGF0aWMgdm9pZCB4ZW5fbmV0YmtfcnhfYWN0aW9uKHN0cnVjdCB4
DQo+ID4gPiA+ID4gPg0KPiA+ID4gPiA+ID4gCQljb3VudCArPSBucl9mcmFncyArIDE7DQo+ID4g
PiA+ID4gPg0KPiA+ID4gPiA+ID4rCQkvKg0KPiA+ID4gPiA+ID4rCQkgKiBUaGUgbG9naWMgaGVy
ZSBzaG91bGQgYmUgc29tZXdoYXQgc2ltaWxhciB0bw0KPiA+ID4gPiA+ID4rCQkgKiB4ZW5fbmV0
YmtfY291bnRfc2tiX3Nsb3RzLiBJbiBjYXNlIG9mIGxhcmdlciBNVFUNCj4gPiA+IHNpemUsDQo+
ID4gPiA+ID4NCj4gPiA+ID4gPiBJcyB0aGVyZSBhIHJlYXNvbiB3aHkgeW91IGNhbid0IHNpbXBs
eSB1c2UgdGhhdCBmdW5jdGlvbiB0aGVuPw0KPiA+ID4gPiA+IEFmYWljdCBpdCdzIGJlaW5nIHVz
ZWQgb24gdGhlIHZlcnkgc2FtZSBza2IgYmVmb3JlIGl0IGdldHMgcHV0DQo+IG9uDQo+ID4gPiA+
ID4gcnhfcXVldWUgYWxyZWFkeSBhbnl3YXkuDQo+ID4gPiA+ID4NCj4gPiA+ID4NCj4gPiA+ID4g
SSBkaWQgdGhpbmsgYWJvdXQgaXQuIEJ1dCB0aGlzIHdvdWxkIG1lYW4gaXRlcmF0aW5nIHRocm91
Z2gNCj4gc2ltaWxhcg0KPiA+ID4gPiBwaWVjZSBvZiBjb2RlIHR3aWNlIHdpdGggYWRkaXRpb25h
bCBmdW5jdGlvbiBjYWxscy4NCj4gPiA+ID4gbmV0YmtfZ29wX3NrYi0tPm5ldGJrX2dvcF9mcmFn
X2NvcHkgc2VxdWVuY2UgaXMgYWN0dWFsbHkNCj4gZXhlY3V0aW5nDQo+ID4gPiA+IHNpbWlsYXIg
Y29kZS4gQW5kIGFsc28gbm90IHN1cmUgYWJvdXQgYW55IG90aGVyIGltcGxpY2F0aW9ucy4gU28N
Cj4gPiA+ID4gZGVjaWRlZCB0byBmaXggaXQgYnkgYWRkaW5nIGZldyBsaW5lcyBvZiBjb2RlIGlu
IGxpbmUuDQo+ID4gPg0KPiA+ID4gSSB3b25kZXIgaWYgd2UgY291bGQgY2FjaGUgdGhlIHJlc3Vs
dCBvZiB0aGUgY2FsbCB0bw0KPiA+ID4geGVuX25ldGJrX2NvdW50X3NrYl9zbG90cyBpbiB4ZW52
aWZfc3RhcnRfeG1pdCBzb21ld2hlcmU/DQo+ID4gPg0KPiA+ID4gPiA+ID4rCQkgKiBza2IgaGVh
ZCBsZW5ndGggbWF5IGJlIG1vcmUgdGhhbiBhIFBBR0VfU0laRS4gV2UNCj4gPiA+IG5lZWQgdG8N
Cj4gPiA+ID4gPiA+KwkJICogY29uc2lkZXIgcmluZyBzbG90cyBjb25zdW1lZCBieSB0aGF0IGRh
dGEuIElmIHdlDQo+ID4gPiBkbyBub3QsDQo+ID4gPiA+ID4gPisJCSAqIHRoZW4gd2l0aGluIHRo
aXMgbG9vcCBpdHNlbGYgd2UgZW5kIHVwIGNvbnN1bWluZw0KPiA+ID4gbW9yZQ0KPiA+ID4gPiA+
IG1ldGENCj4gPiA+ID4gPiA+KwkJICogc2xvdHMgdHVybmluZyB0aGUgQlVHX09OIGJlbG93LiBX
aXRoIHRoaXMgZml4IHdlDQo+ID4gPiBtYXkgZW5kDQo+ID4gPiA+ID4gdXANCj4gPiA+ID4gPiA+
KwkJICogaXRlcmF0aW5nIHRocm91Z2ggeGVuX25ldGJrX3J4X2FjdGlvbiBtdWx0aXBsZQ0KPiA+
ID4gdGltZXMNCj4gPiA+ID4gPiA+KwkJICogaW5zdGVhZCBvZiBjcmFzaGluZyBuZXRiYWNrIHRo
cmVhZC4NCj4gPiA+ID4gPiA+KwkJICovDQo+ID4gPiA+ID4gPisNCj4gPiA+ID4gPiA+Kw0KPiA+
ID4gPiA+ID4rCQljb3VudCArPSBESVZfUk9VTkRfVVAoc2tiX2hlYWRsZW4oc2tiKSwgUEFHRV9T
SVpFKTsNCj4gPiA+ID4gPg0KPiA+ID4gPiA+IFRoaXMgbm93IG92ZXItYWNjb3VudHMgYnkgb25l
IEkgdGhpbmsgKGR1ZSB0byB0aGUgIisgMSIgYWJvdmU7DQo+ID4gPiA+ID4gdGhlIGNhbGN1bGF0
aW9uIGhlcmUgcmVhbGx5IGlzIHRvIHJlcGxhY2UgdGhhdCBpbmNyZW1lbnQpLg0KPiA+ID4gPiA+
DQo+ID4gPiA+ID4gSmFuDQo+ID4gPiA+ID4NCj4gPiA+ID4gSSBhbHNvIHdhc24ndCBzdXJlIGFi
b3V0IHRoZSBhY3R1YWwgcHVycG9zZSBvZiAiKzEiIGFib3ZlIHdoZXRoZXINCj4gaXQNCj4gPiA+
ID4gaXMgbWVhbnQgdG8gdGFrZSBjYXJlIG9mIHNrYl9oZWFkbGVuIG9yIG5vbiB6ZXJvIGdzb19z
aXplIGNhc2Ugb3INCj4gPiA+IHNvbWUNCj4gPiA+ID4gb3RoZXIgY2FzZS4NCj4gPiA+DQo+ID4g
PiBJIHRoaW5rIGl0J3MgaW50ZW50aW9uIHdhcyB0byBhY2NvdW50IGZvciBza2JfaGVhZGxlbiBh
bmQgdGhlcmVmb3JlDQo+IGl0DQo+ID4gPiBzaG91bGQgYmUgcmVwbGFjZWQuDQo+ID4gPg0KPiA+
ID4gPiAgIFRoYXQncyB3aHkgSSBsZWZ0IGl0IGxpa2UgdGhhdCBzbyB0aGF0IEkgY2FuIGV4aXQg
dGhlIGxvb3Agb24NCj4gc2FmZXINCj4gPiA+ID4gc2lkZS4gSWYgc29tZW9uZSB3aG8ga25vd3Mg
dGhpcyBhcmVhIG9mIGNvZGUgY2FuIGNvbmZpcm0gdGhhdCB3ZQ0KPiBkbw0KPiA+ID4gPiBub3Qg
bmVlZCBpdCwgSSB3aWxsIGNyZWF0ZSBhIG5ldyBwYXRjaC4gSW4gbXkgZW52aXJvbm1lbnQgSSBk
aWQNCj4gPiA+ID4gb2JzZXJ2ZSB0aGF0ICJjb3VudCIgaXMgYWx3YXlzIGdyZWF0ZXIgdGhhbg0K
PiA+ID4gPiBhY3R1YWwgbWV0YSBzbG90cyBwcm9kdWNlZCBiZWNhdXNlIG9mIHRoaXMgYWRkaXRp
b25hbCAiKzEiIHdpdGgNCj4gbXkNCj4gPiA+ID4gcGF0Y2guIFdoZW4gSSB0b29rIG91dCB0aGlz
IGV4dHJhIGFkZGl0aW9uIHRoZW4gY291bnQgaXMgYWx3YXlzDQo+IGVxdWFsDQo+ID4gPiA+IHRv
IGFjdHVhbCBtZXRhIHNsb3RzIHByb2R1Y2VkIGFuZCBsb29wIGlzIGV4aXRpbmcgc2FmZWx5IHdp
dGgNCj4gbW9yZQ0KPiA+ID4gPiBtZXRhIHNsb3RzIHByb2R1Y2VkIHVuZGVyIGhlYXZ5IHRyYWZm
aWMuDQo+ID4gPg0KPiA+ID4gSSB0aGluayB0aGF0J3MgYW4gYXJndW1lbnQgZm9yIHJlbW92aW5n
IGl0IGFzIHdlbGwuDQo+ID4gPg0KPiA+ID4gVGhlICsgMSBsZWFkaW5nIHRvIGFuIGVhcmx5IGV4
aXQgc2VlbXMgYmVuaWduIHdoZW4geW91IHRoaW5rIGFib3V0DQo+IG9uZQ0KPiA+ID4gbGFyZ2lz
aCBza2IgYnV0IGltYWdpbmUgaWYgeW91IGhhZCAyMDAgc21hbGwgKHNpbmdsZSBwYWdlKSBza2Jz
IC0tDQo+IHRoZW4NCj4gPiA+IHlvdSBoYXZlIGVmZmVjdGl2ZWx5IGhhbHZlZCB0aGUgc2l6ZSBv
ZiB0aGUgcmluZyAob3IgYXQgbGVhc3QgdGhlDQo+ID4gPiBiYXRjaCkuDQo+ID4gPg0KPiA+IEkg
YWdyZWUgYW5kIHRoYXQncyB3aGF0IGV2ZW4gSSBoYXZlIG9ic2VydmVkIHdoZXJlDQo+IHhlbl9u
ZXRia19raWNrX3RocmVhZCBpcyBnZXR0aW5nIGludm9rZWQgdG8gZmluaXNoIHdvcmsgaW4gbmV4
dA0KPiBpdGVyYXRpb24uICBJIHdpbGwgY3JlYXRlIGEgcGF0Y2ggcmVwbGFjaW5nICIrMSIgd2l0
aA0KPiBESVZfUk9VTkRfVVAoc2tiX2hlYWRsZW4oc2tiKSwgUEFHRV9TSVpFKS4NCj4gPg0KPiA+
DQo+ID4gPiBUaGlzOg0KPiA+ID4gCQkvKiBGaWxsZWQgdGhlIGJhdGNoIHF1ZXVlPyAqLw0KPiA+
ID4gCQlpZiAoY291bnQgKyBNQVhfU0tCX0ZSQUdTID49IFhFTl9ORVRJRl9SWF9SSU5HX1NJWkUp
DQo+ID4gPiAJCQlicmVhazsNCj4gPiA+IHNlZW1zIGEgYml0IGlmZnkgdG8gbWUgdG9vLiBJIHdv
bmRlciBpZiBNQVhfU0tCX0ZSQUdTIHNob3VsZCBiZQ0KPiA+ID4gbWF4X3JlcXVpcmVkX3J4X3Ns
b3RzKHZpZik/IE9yIG1heWJlIHRoZSBwcmVmbGlnaHQgY2hlY2tzIGZyb20NCj4gPiA+IHhlbnZp
Zl9zdGFydF94bWl0IHNhdmUgdXMgZnJvbSB0aGlzIGZhdGU/DQo+ID4gPg0KPiA+ID4gSWFuLg0K
PiA+DQo+ID4gWW91IGFyZSByaWdodCBJYW4uIFRoZSBpbnRlbnRpb24gb2YgdGhpcyBjaGVjayBz
ZWVtcyB0byBiZSB0byBlbnN1cmUNCj4gPiB0aGF0IGVub3VnaCBzbG90cyBhcmUgc3RpbGwgbGVm
dCBwcmlvciB0byBwaWNraW5nIHVwIG5leHQgc2tiLiBCdXQNCj4gPiBpbnN0ZWFkIG9mIGludm9r
aW5nIG1heF9yZXF1aXJlZF9yeF9zbG90cyB3aXRoIGFscmVhZHkgcmVjZWl2ZWQgc2tiLA0KPiA+
IHdlIG1heSBoYXZlIHRvIGRvIHNrYl9wZWVrIGFuZCBpbnZva2UgbWF4X3JlcXVpcmVkX3J4X3Ns
b3RzIG9uIHNrYg0KPiA+IHRoYXQgd2UgYXJlIGFib3V0IHRvIGRlcXVldWUuIElzIHRoZXJlIGFu
eSBiZXR0ZXIgd2F5Pw0KPiANCj4gbWF4X3JlcXVpcmVkX3J4X3Nsb3RzIGRvZXNuJ3QgdGFrZSBh
biBza2IgYXMgYW4gYXJndW1lbnQsIGp1c3QgYSB2aWYuDQo+IEl0DQo+IHJldHVybnMgdGhlIHdv
cnN0IGNhc2UgbnVtYmVyIG9mIHNsb3RzIGZvciBhbnkgc2tiIG9uIHRoYXQgdmlmLg0KPiANCj4g
SWFuLg0KDQpUaGF04oCZcyB0cnVlLiBXaGF0IEkgbWVhbnQgaXMgdG8gcGVlayB0byBuZXh0IHNr
YiBhbmQgZ2V0IHZpZiBmcm9tIHRoYXQgc3RydWN0dXJlIHRvIGludm9rZSBtYXhfcmVxdWlyZWRf
cnhfc2xvdHMuIERvbid0IHlvdSB0aGluayB3ZSBuZWVkIHRvIGRvIGxpa2UgdGhhdD8NCg0KVGhh
bmtzDQpTaXZhDQoNCl9fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fClhlbi1kZXZlbCBtYWlsaW5nIGxpc3QKWGVuLWRldmVsQGxpc3RzLnhlbi5vcmcKaHR0cDov
L2xpc3RzLnhlbi5vcmcveGVuLWRldmVsCg==

From xen-devel-bounces@lists.xen.org Wed Aug 22 13:41:39 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 13:41:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4BBU-00068O-Ka; Wed, 22 Aug 2012 13:41:24 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T4BBS-00068J-Tr
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 13:41:23 +0000
Received: from [85.158.143.35:50422] by server-1.bemta-4.messagelabs.com id
	BB/A7-07754-281E4305; Wed, 22 Aug 2012 13:41:22 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1345642801!13906431!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTAzMzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13499 invoked from network); 22 Aug 2012 13:40:02 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 13:40:02 -0000
X-IronPort-AV: E=Sophos;i="4.77,808,1336348800"; d="scan'208";a="14127292"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 13:40:01 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 22 Aug 2012 14:40:01 +0100
Message-ID: <1345642800.12501.13.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "Palagummi, Siva" <Siva.Palagummi@ca.com>
Date: Wed, 22 Aug 2012 14:40:00 +0100
In-Reply-To: <7D7C26B1462EB14CB0E7246697A18C1311B4D9@INHYMS111B.ca.com>
References: <7D7C26B1462EB14CB0E7246697A18C1310F8E2@INHYMS111B.ca.com>
	<5028D6AC0200007800094651@nat28.tlf.novell.com>
	<7D7C26B1462EB14CB0E7246697A18C13119012@INHYMS111B.ca.com>
	<1345554888.6821.56.camel@zakaz.uk.xensource.com>
	<7D7C26B1462EB14CB0E7246697A18C1311B4BD@INHYMS111B.ca.com>
	<1345641723.12501.2.camel@zakaz.uk.xensource.com>
	<7D7C26B1462EB14CB0E7246697A18C1311B4D9@INHYMS111B.ca.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Jan Beulich <JBeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH RFC] xen/netback: Count ring slots properly
 when larger MTU sizes are used
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

T24gV2VkLCAyMDEyLTA4LTIyIGF0IDE0OjMwICswMTAwLCBQYWxhZ3VtbWksIFNpdmEgd3JvdGU6
Cj4gPiA+Cj4gPiA+Cj4gPiA+ID4gVGhpczoKPiA+ID4gPiAJCS8qIEZpbGxlZCB0aGUgYmF0Y2gg
cXVldWU/ICovCj4gPiA+ID4gCQlpZiAoY291bnQgKyBNQVhfU0tCX0ZSQUdTID49IFhFTl9ORVRJ
Rl9SWF9SSU5HX1NJWkUpCj4gPiA+ID4gCQkJYnJlYWs7Cj4gPiA+ID4gc2VlbXMgYSBiaXQgaWZm
eSB0byBtZSB0b28uIEkgd29uZGVyIGlmIE1BWF9TS0JfRlJBR1Mgc2hvdWxkIGJlCj4gPiA+ID4g
bWF4X3JlcXVpcmVkX3J4X3Nsb3RzKHZpZik/IE9yIG1heWJlIHRoZSBwcmVmbGlnaHQgY2hlY2tz
IGZyb20KPiA+ID4gPiB4ZW52aWZfc3RhcnRfeG1pdCBzYXZlIHVzIGZyb20gdGhpcyBmYXRlPwo+
ID4gPiA+Cj4gPiA+ID4gSWFuLgo+ID4gPgo+ID4gPiBZb3UgYXJlIHJpZ2h0IElhbi4gVGhlIGlu
dGVudGlvbiBvZiB0aGlzIGNoZWNrIHNlZW1zIHRvIGJlIHRvIGVuc3VyZQo+ID4gPiB0aGF0IGVu
b3VnaCBzbG90cyBhcmUgc3RpbGwgbGVmdCBwcmlvciB0byBwaWNraW5nIHVwIG5leHQgc2tiLiBC
dXQKPiA+ID4gaW5zdGVhZCBvZiBpbnZva2luZyBtYXhfcmVxdWlyZWRfcnhfc2xvdHMgd2l0aCBh
bHJlYWR5IHJlY2VpdmVkIHNrYiwKPiA+ID4gd2UgbWF5IGhhdmUgdG8gZG8gc2tiX3BlZWsgYW5k
IGludm9rZSBtYXhfcmVxdWlyZWRfcnhfc2xvdHMgb24gc2tiCj4gPiA+IHRoYXQgd2UgYXJlIGFi
b3V0IHRvIGRlcXVldWUuIElzIHRoZXJlIGFueSBiZXR0ZXIgd2F5Pwo+ID4gCj4gPiBtYXhfcmVx
dWlyZWRfcnhfc2xvdHMgZG9lc24ndCB0YWtlIGFuIHNrYiBhcyBhbiBhcmd1bWVudCwganVzdCBh
IHZpZi4KPiA+IEl0Cj4gPiByZXR1cm5zIHRoZSB3b3JzdCBjYXNlIG51bWJlciBvZiBzbG90cyBm
b3IgYW55IHNrYiBvbiB0aGF0IHZpZi4KPiA+IAo+ID4gSWFuLgo+IAo+IFRoYXTigJlzIHRydWUu
IFdoYXQgSSBtZWFudCBpcyB0byBwZWVrIHRvIG5leHQgc2tiIGFuZCBnZXQgdmlmIGZyb20gdGhh
dAo+IHN0cnVjdHVyZSB0byBpbnZva2UgbWF4X3JlcXVpcmVkX3J4X3Nsb3RzLiBEb24ndCB5b3Ug
dGhpbmsgd2UgbmVlZCB0bwo+IGRvIGxpa2UgdGhhdD8KCkRvIHlvdSBtZWFuIHNvbWV0aGluZyBv
dGhlciB0aGFuIG1heF9yZXF1aXJlZF9yeF9zbG90cz8gQmVjYXVzZSB0aGUKcHJvdG90eXBlIG9m
IHRoYXQgZnVuY3Rpb24gaXMKICAgICAgICBzdGF0aWMgaW50IG1heF9yZXF1aXJlZF9yeF9zbG90
cyhzdHJ1Y3QgeGVudmlmICp2aWYpCmkuZS4gaXQgZG9lc24ndCBuZWVkIGFuIHNrYi4KCkkgdGhp
bmsgaXQgaXMgYWNjZXB0YWJsZSB0byBjaGVjayBmb3IgdGhlIHdvcnN0IGNhc2UgbnVtYmVyIG9m
IHNsb3RzLgpUaGF0J3Mgd2hhdCB3ZSBkbyBlLmcuIGluIHhlbl9uZXRia19yeF9yaW5nX2Z1bGwu
CgpVc2luZyBza2JfcGVlayBtaWdodCB3b3JrIHRvbyB0aG91Z2gsIGFzc3VtaW5nIGFsbCB0aGUg
bG9ja2luZyBldGMgaXMgb2sKLS0gdGhpcyBpcyBhIHByaXZhdGUgcXVldWUgc28gSSB0aGluayBp
dCBpcyBwcm9iYWJseSBvay4gUmF0aGVyIHRoYW4KY2FsY3VsYXRpbmcgdGhlIG51bWJlciBvZiBz
bG90cyBpbiB4ZW5fbmV0YmtfcnhfYWN0aW9uIHlvdSBwcm9iYWJseSB3YW50CnRvIHJlbWVtYmVy
IHRoZSB2YWx1ZSBmcm9tIHRoZSBjYWxsIHRvIHhlbl9uZXRia19jb3VudF9za2Jfc2xvdHMgaW4K
c3RhcnRfeG1pdC4gUGVyaGFwcyBieSBzdGFzaGluZyBpdCBpbiBza2ItPmNiPyAoc2VlIE5FVEZS
T05UX1NLQl9DQiBmb3IKYW4gZXhhbXBsZSBvZiBob3cgdG8gZG8gdGhpcykKCklhbi4KCgpfX19f
X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fXwpYZW4tZGV2ZWwgbWFp
bGluZyBsaXN0Clhlbi1kZXZlbEBsaXN0cy54ZW4ub3JnCmh0dHA6Ly9saXN0cy54ZW4ub3JnL3hl
bi1kZXZlbAo=

From xen-devel-bounces@lists.xen.org Wed Aug 22 14:08:39 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 14:08:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4BbX-0006Pw-Ur; Wed, 22 Aug 2012 14:08:19 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T4BbW-0006Pr-OJ
	for xen-devel@lists.xensource.com; Wed, 22 Aug 2012 14:08:18 +0000
Received: from [85.158.143.99:9880] by server-3.bemta-4.messagelabs.com id
	BB/D0-09529-2D7E4305; Wed, 22 Aug 2012 14:08:18 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-16.tower-216.messagelabs.com!1345644496!16193655!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNzA1MDk5\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21511 invoked from network); 22 Aug 2012 14:08:17 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-16.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 22 Aug 2012 14:08:17 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7ME7ur9024418
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 22 Aug 2012 14:07:57 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7ME7tYW010493
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 22 Aug 2012 14:07:56 GMT
Received: from abhmt104.oracle.com (abhmt104.oracle.com [141.146.116.56])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7ME7sin014116; Wed, 22 Aug 2012 09:07:54 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 22 Aug 2012 07:07:54 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 0E57A4031E; Wed, 22 Aug 2012 09:57:54 -0400 (EDT)
Date: Wed, 22 Aug 2012 09:57:53 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Thomas Gleixner <tglx@linutronix.de>
Message-ID: <20120822135753.GA30964@phenom.dumpdata.com>
References: <1345580561-8506-1-git-send-email-attilio.rao@citrix.com>
	<alpine.LFD.2.02.1208212315330.2856@ionos>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <alpine.LFD.2.02.1208212315330.2856@ionos>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: xen-devel@lists.xensource.com, Ian.Campbell@citrix.com,
	Stefano.Stabellini@eu.citrix.com, x86@kernel.org,
	linux-kernel@vger.kernel.org, mingo@redhat.com, hpa@zytor.com,
	Attilio Rao <attilio.rao@citrix.com>
Subject: Re: [Xen-devel] [PATCH v2 0/5] X86/XEN: Merge
 x86_init.paging.pagetable_setup_start and
 x86_init.paging.pagetable_setup_done setup functions and document its
 semantic
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Aug 21, 2012 at 11:22:03PM +0200, Thomas Gleixner wrote:
> On Tue, 21 Aug 2012, Attilio Rao wrote:
> > Differences with v1:
> > - The patch series is re-arranged in a way that helps review, following
> >   a plan by Thomas Gleixner
> > - The PVOPS nomenclature is not used as it is not correct
> > - The front-end message is adjusted with feedback by Thomas Gleixner,
> >   Stefano Stabellini and Konrad Rzeszutek Wilk 
> 
> This is much simpler to read and review. Just have a look at the
> diffstats of the two series:
> 
>  6 files changed,  9 insertions(+),  8 deletions(-)
>  6 files changed, 11 insertions(+),  9 deletions(-)
>  5 files changed, 50 insertions(+),  2 deletions(-)
>  6 files changed,  2 insertions(+), 65 deletions(-)
>  1 files changed,  5 insertions(+),  0 deletions(-)
> 
> versus
> 
>  6 files changed, 10 insertions(+),  9 deletions(-)
>  6 files changed, 11 insertions(+), 11 deletions(-)
>  5 files changed,  3 insertions(+),  3 deletions(-)
>  6 files changed,  4 insertions(+), 20 deletions(-)
>  1 files changed,  5 insertions(+),  0 deletions(-)
> 
> The overall result is basically the same, but it's way simpler to look
> at obvious and well done patches than checking whether a subtle copy
> and paste bug happened in 3/5 of the first version. Copy and paste is
> the #1 cause for subtle bugs. :)
> 
> I'm waiting for the ack of Xen folks before taking it into tip.

I've some extra patches that modify the new "paging_init" in the Xen
code that I am going to propose for v3.7 - so there will be some merge
conflicts. Let me figure that out and also run this set of patches
(and also the previous one .. which I think you didn't have a
chance to look at since you were on vacation?) on an overnight
test to make sure there is no fallout.

With the merge issues that are going to crop up (x86 tip tree
and my tree in linux-next), should I just take these patches
in my tree with your Ack? Or should I just ingest your tip tree
in my tree and that way solve the merge issue? What's your
preference?
> 
> Thanks for following up!

Thank you for providing valuable feedback! Much appreciated.
> 
> 	tglx

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Aug 21, 2012 at 11:22:03PM +0200, Thomas Gleixner wrote:
> On Tue, 21 Aug 2012, Attilio Rao wrote:
> > Differences with v1:
> > - The patch series is re-arranged in a way that helps reviews, following
> >   a plan by Thomas Gleixner
> > - The PVOPS nomenclature is not used as it is not correct
> > - The front-end message is adjusted with feedback by Thomas Gleixner,
> >   Stefano Stabellini and Konrad Rzeszutek Wilk 
> 
> This is much simpler to read and review. Just have a look at the
> diffstats of the two series:
> 
>  6 files changed,  9 insertions(+),  8 deletions(-)
>  6 files changed, 11 insertions(+),  9 deletions(-)
>  5 files changed, 50 insertions(+),  2 deletions(-)
>  6 files changed,  2 insertions(+), 65 deletions(-)
>  1 files changed,  5 insertions(+),  0 deletions(-)
> 
> versus
> 
>  6 files changed, 10 insertions(+),  9 deletions(-)
>  6 files changed, 11 insertions(+), 11 deletions(-)
>  5 files changed,  3 insertions(+),  3 deletions(-)
>  6 files changed,  4 insertions(+), 20 deletions(-)
>  1 files changed,  5 insertions(+),  0 deletions(-)
> 
> The overall result is basically the same, but it's way simpler to look
> at obvious and well done patches than checking whether a subtle copy
> and paste bug happened in 3/5 of the first version. Copy and paste is
> the #1 cause for subtle bugs. :)
> 
> I'm waiting for the ack of Xen folks before taking it into tip.

I've some extra patches that modify the new "paging_init" in the Xen
code that I am going to propose for v3.7 - so there will be some merge
conflicts. Let me figure that out and also run this set of patches
(and also the previous one .. which I think you didn't have a
chance to look at since you were on vacation?) through an overnight
test to make sure there is no fallout.

With the merge issues that are going to crop up (x86 tip tree
and my tree in linux-next) should I just take these patches
in my tree with your Ack? Or should I just ingest your tip tree
into my tree and solve the merge issue that way? What's your
preference?
> 
> Thanks for following up!

Thank you for providing valuable feedback! Much appreciated.
> 
> 	tglx

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 14:11:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 14:11:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4Be0-0006VT-Ga; Wed, 22 Aug 2012 14:10:52 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T4Bdy-0006VL-OQ
	for xen-devel@lists.xensource.com; Wed, 22 Aug 2012 14:10:50 +0000
Received: from [85.158.143.99:61011] by server-1.bemta-4.messagelabs.com id
	48/57-07754-A68E4305; Wed, 22 Aug 2012 14:10:50 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-4.tower-216.messagelabs.com!1345644648!22112845!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNzA1MDk5\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25317 invoked from network); 22 Aug 2012 14:10:49 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-4.tower-216.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 22 Aug 2012 14:10:49 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7MEAi6J027684
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 22 Aug 2012 14:10:44 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7MEAhxs011546
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 22 Aug 2012 14:10:43 GMT
Received: from abhmt113.oracle.com (abhmt113.oracle.com [141.146.116.65])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7MEAg58016293; Wed, 22 Aug 2012 09:10:42 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 22 Aug 2012 07:10:42 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 93A424031E; Wed, 22 Aug 2012 10:00:39 -0400 (EDT)
Date: Wed, 22 Aug 2012 10:00:39 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20120822140039.GB30964@phenom.dumpdata.com>
References: <1345133009-21941-1-git-send-email-konrad.wilk@oracle.com>
	<1345133009-21941-3-git-send-email-konrad.wilk@oracle.com>
	<alpine.DEB.2.02.1208171835010.15568@kaball.uk.xensource.com>
	<20120820141305.GA2713@phenom.dumpdata.com>
	<20120821172732.GA23715@phenom.dumpdata.com>
	<20120821190317.GA13035@phenom.dumpdata.com>
	<alpine.DEB.2.02.1208221146230.15568@kaball.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.02.1208221146230.15568@kaball.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"JBeulich@suse.com" <JBeulich@suse.com>
Subject: Re: [Xen-devel] Q:pt_base in COMPAT mode offset by two pages.
 Was:Re: [PATCH 02/11] xen/x86: Use memblock_reserve for sensitive areas.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> > +	/* Under 64-bit hypervisor with a 32-bit domain, the hypervisor
> > +	 * offsets the pt_base by two pages. Hence the reservation that is done
> > +	 * in mmu.c misses two pages. We correct it here if we detect this. */
> > +	if (last_phys < __pa(xen_start_info->pt_base))
> > +		memblock_reserve(last_phys, __pa(xen_start_info->pt_base) - last_phys);
> >  }
> 
> What are these two pages used for? They are not documented in xen.h, why
> should we reserve them?
> 
> After all we still have:
> 
> memblock_reserve(PFN_PHYS(pt_base), (pt_end - pt_base) * PAGE_SIZE);
> 
> that should protect what we are interested in anyway...

You are looking at the x86_64 piece of code. This issue only appears
on 32-bit, which was not modified by my patches and has:

2003         memblock_reserve(__pa(xen_start_info->pt_base),
2004                          xen_start_info->nr_pt_frames * PAGE_SIZE);

and as I found out, the pt_base is wrong. The cr3 we load and use is actually
two pages back!

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 14:12:22 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 14:12:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4BfB-0006aL-Vk; Wed, 22 Aug 2012 14:12:05 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T4BfB-0006a9-1P
	for xen-devel@lists.xensource.com; Wed, 22 Aug 2012 14:12:05 +0000
Received: from [85.158.139.83:57324] by server-9.bemta-5.messagelabs.com id
	EC/27-26123-4B8E4305; Wed, 22 Aug 2012 14:12:04 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-13.tower-182.messagelabs.com!1345644723!28959171!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13836 invoked from network); 22 Aug 2012 14:12:03 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-13.tower-182.messagelabs.com with SMTP;
	22 Aug 2012 14:12:03 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 22 Aug 2012 15:12:03 +0100
Message-Id: <503504FE0200007800096F08@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Wed, 22 Aug 2012 15:12:46 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Konrad Rzeszutek Wilk" <konrad.wilk@oracle.com>
References: <1345133009-21941-1-git-send-email-konrad.wilk@oracle.com>
	<1345133009-21941-3-git-send-email-konrad.wilk@oracle.com>
	<alpine.DEB.2.02.1208171835010.15568@kaball.uk.xensource.com>
	<20120820141305.GA2713@phenom.dumpdata.com>
	<20120821172732.GA23715@phenom.dumpdata.com>
	<20120821190317.GA13035@phenom.dumpdata.com>
In-Reply-To: <20120821190317.GA13035@phenom.dumpdata.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] Q:pt_base in COMPAT mode offset by two pages.
 Was:Re: [PATCH 02/11] xen/x86: Use memblock_reserve for sensitive areas.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 21.08.12 at 21:03, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> On Tue, Aug 21, 2012 at 01:27:32PM -0400, Konrad Rzeszutek Wilk wrote:
>> Jan, I noticed something odd. Part of this code replaces this:
>> 
>> 	memblock_reserve(__pa(xen_start_info->mfn_list),
>> 		xen_start_info->pt_base - xen_start_info->mfn_list);
>> 
>> with a more region-by-region reservation. What I found out is that if I boot
>> this as a 32-bit guest with a 64-bit hypervisor, the xen_start_info->pt_base
>> is actually wrong.
>> 
>> Specifically this is what bootup says:
>> 
>> (good working case - 32bit hypervisor with 32-bit dom0):
>> (XEN)  Loaded kernel: c1000000->c1a23000
>> (XEN)  Init. ramdisk: c1a23000->cf730e00
>> (XEN)  Phys-Mach map: cf731000->cf831000
>> (XEN)  Start info:    cf831000->cf83147c
>> (XEN)  Page tables:   cf832000->cf8b5000
>> (XEN)  Boot stack:    cf8b5000->cf8b6000
>> (XEN)  TOTAL:         c0000000->cfc00000
>> 
>> [    0.000000] PT: cf832000 (f832000)
>> [    0.000000] Reserving PT: f832000->f8b5000
>> 
>> And with a 64-bit hypervisor:
>> 
>> (XEN) VIRTUAL MEMORY ARRANGEMENT:
>> (XEN)  Loaded kernel: 00000000c1000000->00000000c1a23000
>> (XEN)  Init. ramdisk: 00000000c1a23000->00000000cf730e00
>> (XEN)  Phys-Mach map: 00000000cf731000->00000000cf831000
>> (XEN)  Start info:    00000000cf831000->00000000cf8314b4
>> (XEN)  Page tables:   00000000cf832000->00000000cf8b6000
>> (XEN)  Boot stack:    00000000cf8b6000->00000000cf8b7000
>> (XEN)  TOTAL:         00000000c0000000->00000000cfc00000
>> (XEN)  ENTRY ADDRESS: 00000000c16bb22c
>> 
>> [    0.000000] PT: cf834000 (f834000)
>> [    0.000000] Reserving PT: f834000->f8b8000
>> 
>> So the pt_base is offset by two pages. And looking at c/s 13257
>> it's not clear to me why this two-page offset was added.

Honestly, without looking through this in greater detail I don't
recall. That'll have to wait possibly until after the summit, though.
I can't exclude that this is just a forgotten leftover from an earlier
version of the patch. I would have thought this was to account
for the L4 tables that the guest doesn't see, but
- this should only be a single page
- this should then also (or rather instead) be subtracted from
  nr_pt_frames
so that's likely not it.

>> The toolstack works fine - so launching 32-bit guests either
>> under a 32-bit hypervisor or 64-bit works fine:
>> ] domainbuilder: detail: xc_dom_alloc_segment:   page tables  : 0xcf805000 -> 
> 0xcf885000  (pfn 0xf805 + 0x80 pages)
>> [    0.000000] PT: cf805000 (f805000)
>> 
> 
> And this patch on top of the others fixes this..

I didn't look at this in much detail, but I am getting concerned
that you might be making the code dependent on too many hypervisor
implementation details. And should the above turn out to be a bug
in the hypervisor, I hope your kernel-side changes won't make it
impossible to fix that bug.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 14:13:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 14:13:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4BgT-0006iQ-Kw; Wed, 22 Aug 2012 14:13:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T4BgS-0006iA-7C
	for xen-devel@lists.xensource.com; Wed, 22 Aug 2012 14:13:24 +0000
Received: from [85.158.143.99:19038] by server-1.bemta-4.messagelabs.com id
	10/DB-07754-309E4305; Wed, 22 Aug 2012 14:13:23 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-8.tower-216.messagelabs.com!1345644801!17624502!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDcxMjQ2Nw==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5127 invoked from network); 22 Aug 2012 14:13:23 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-8.tower-216.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 22 Aug 2012 14:13:23 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7MEDGBL001298
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 22 Aug 2012 14:13:17 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7MEDGZ2004514
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 22 Aug 2012 14:13:16 GMT
Received: from abhmt113.oracle.com (abhmt113.oracle.com [141.146.116.65])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7MEDGKe018399; Wed, 22 Aug 2012 09:13:16 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 22 Aug 2012 07:13:16 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id D570B4031E; Wed, 22 Aug 2012 10:03:15 -0400 (EDT)
Date: Wed, 22 Aug 2012 10:03:15 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20120822140315.GC30964@phenom.dumpdata.com>
References: <alpine.DEB.2.02.1206221644370.27860@kaball.uk.xensource.com>
	<1340381685-22529-1-git-send-email-stefano.stabellini@eu.citrix.com>
	<alpine.DEB.2.02.1206221722040.27860@kaball.uk.xensource.com>
	<20120709141915.GB9580@phenom.dumpdata.com>
	<alpine.DEB.2.02.1207131821250.23783@kaball.uk.xensource.com>
	<20120716151441.GD552@phenom.dumpdata.com>
	<alpine.DEB.2.02.1207181851010.23783@kaball.uk.xensource.com>
	<alpine.DEB.2.02.1208221217290.15568@kaball.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.02.1208221217290.15568@kaball.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: [Xen-devel] [PATCH] xen/events: fix unmask_evtchn for PV on HVM
 guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Aug 22, 2012 at 12:20:09PM +0100, Stefano Stabellini wrote:
> Konrad,
> I cannot see this patch anywhere in your trees. Did I miss it?
> Or maybe it just fell through the cracks?

Fell through the cracks. Was there a new version of this? Can you
send me the subset of the patches you want me to pick up?

> Let me know if you want me to do anything more.
> Cheers,
> Stefano
> 
> On Wed, 18 Jul 2012, Stefano Stabellini wrote:
> > xen/events: fix unmask_evtchn for PV on HVM guests
> > 
> > When unmask_evtchn is called, if we already have an event pending, we
> > just set evtchn_pending_sel waiting for local_irq_enable to be called.
> > That is because PV guests set the irq_enable pvops to
> > xen_irq_enable_direct in xen_setup_vcpu_info_placement:
> > xen_irq_enable_direct is implemented in assembly in
> > arch/x86/xen/xen-asm.S and calls xen_force_evtchn_callback if
> > XEN_vcpu_info_pending is set.
> > 
> > However HVM guests (and ARM guests) do not change or do not have the
> > irq_enable pvop, so evtchn_unmask cannot work properly for them.
> > 
> > Considering that having the pending_irq bit set when unmask_evtchn is
> > called is not very common, and it is simpler to keep the
> > native_irq_enable implementation for HVM guests (and ARM guests), the
> > best thing to do is just use the EVTCHNOP_unmask hypercall (Xen
> > re-injects pending events in response).
> > 
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > ---
> >  drivers/xen/events.c |   17 ++++++++++++++---
> >  1 files changed, 14 insertions(+), 3 deletions(-)
> > 
> > diff --git a/drivers/xen/events.c b/drivers/xen/events.c
> > index 0a8a17c..d75cc39 100644
> > --- a/drivers/xen/events.c
> > +++ b/drivers/xen/events.c
> > @@ -373,11 +373,22 @@ static void unmask_evtchn(int port)
> >  {
> >  	struct shared_info *s = HYPERVISOR_shared_info;
> >  	unsigned int cpu = get_cpu();
> > +	int do_hypercall = 0, evtchn_pending = 0;
> >  
> >  	BUG_ON(!irqs_disabled());
> >  
> > -	/* Slow path (hypercall) if this is a non-local port. */
> > -	if (unlikely(cpu != cpu_from_evtchn(port))) {
> > +	if (unlikely((cpu != cpu_from_evtchn(port))))
> > +		do_hypercall = 1;
> > +	else
> > +		evtchn_pending = sync_test_bit(port, &s->evtchn_pending[0]);
> > +
> > +	if (unlikely(evtchn_pending && xen_hvm_domain()))
> > +		do_hypercall = 1;
> > +
> > +	/* Slow path (hypercall) if this is a non-local port or if this is
> > +	 * an hvm domain and an event is pending (hvm domains don't have
> > +	 * their own implementation of irq_enable). */
> > +	if (do_hypercall) {
> >  		struct evtchn_unmask unmask = { .port = port };
> >  		(void)HYPERVISOR_event_channel_op(EVTCHNOP_unmask, &unmask);
> >  	} else {
> > @@ -390,7 +401,7 @@ static void unmask_evtchn(int port)
> >  		 * 'hw_resend_irq'. Just like a real IO-APIC we 'lose
> >  		 * the interrupt edge' if the channel is masked.
> >  		 */
> > -		if (sync_test_bit(port, &s->evtchn_pending[0]) &&
> > +		if (evtchn_pending &&
> >  		    !sync_test_and_set_bit(port / BITS_PER_LONG,
> >  					   &vcpu_info->evtchn_pending_sel))
> >  			vcpu_info->evtchn_upcall_pending = 1;
> > -- 
> > 1.7.2.5
> > 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 14:14:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 14:14:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4BhA-0006nd-2K; Wed, 22 Aug 2012 14:14:08 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T4Bh8-0006ma-7y
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 14:14:06 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1345644838!10512319!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24041 invoked from network); 22 Aug 2012 14:13:59 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-13.tower-27.messagelabs.com with SMTP;
	22 Aug 2012 14:13:59 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 22 Aug 2012 15:13:57 +0100
Message-Id: <503505710200007800096F0B@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Wed, 22 Aug 2012 15:14:41 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Konrad Rzeszutek Wilk" <konrad.wilk@oracle.com>
References: <patchbomb.1345579710@phenom.dumpdata.com>
	<8ed3eef706710c9c476a.1345579712@phenom.dumpdata.com>
In-Reply-To: <8ed3eef706710c9c476a.1345579712@phenom.dumpdata.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 2 of 4] get_page_type: Print out extra
 information when failing to get page_type
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 21.08.12 at 22:08, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> # HG changeset patch
> # User Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> # Date 1345579709 14400
> # Node ID 8ed3eef706710c9c476a8d984bfb2861d92bedfb
> # Parent  635917c6dac4ab8748572fcbeb3e745428684e15
> get_page_type: Print out extra information when failing to get page_type.
> 
> When __get_page_type is called and it fails, we get:
> 
> (XEN) mm.c:2429:d0 Bad type (saw 1400000000000002 != exp 7000000000000000) 
> for mfn 10e392 (pfn 1bf6c)
> 
> with this patch we get some extra details such as:
> (XEN) debug.c:127:d0 cr3: 10d80b000, searching for 10e392
> (XEN) debug.c:111:d0 cr3(10d80b000) has mfn(10e392) in level PUD/L3: [258][272]
> (XEN) debug.c:111:d0 cr3(10d80b000) has mfn(10e392) in level PMD/L2: [258][511][511]
> (XEN) debug.c:111:d0 cr3(10d80b000) has mfn(10e392) in level PGD/L4: [272]
> (XEN) debug.c:111:d0 cr3(10d80b000) has mfn(10e392) in level PUD/L3: [511][511]
> 
> showing where the mfn actually is in the guest's pagetables. This is useful
> because we can figure out where it is, and use that to figure out where
> the OS thinks it is.

In addition to my earlier reply, I also think that the printing
should be done at info level, so that nothing would get
additionally printed without special command line options.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 14:16:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 14:16:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4BjM-000720-Jw; Wed, 22 Aug 2012 14:16:24 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T4BjL-00071k-2l
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 14:16:23 +0000
Received: from [85.158.143.99:36870] by server-3.bemta-4.messagelabs.com id
	55/DF-09529-6B9E4305; Wed, 22 Aug 2012 14:16:22 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-4.tower-216.messagelabs.com!1345644981!22114321!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28244 invoked from network); 22 Aug 2012 14:16:21 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-4.tower-216.messagelabs.com with SMTP;
	22 Aug 2012 14:16:21 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 22 Aug 2012 15:16:20 +0100
Message-Id: <503506000200007800096F30@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Wed, 22 Aug 2012 15:17:04 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Konrad Rzeszutek Wilk" <konrad.wilk@oracle.com>
References: <patchbomb.1345579710@phenom.dumpdata.com>
	<74bedb086c5b72447262.1345579713@phenom.dumpdata.com>
In-Reply-To: <74bedb086c5b72447262.1345579713@phenom.dumpdata.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 3 of 4] xen/pagetables: Document that all of
 the initial regions are mapped
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 21.08.12 at 22:08, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> # HG changeset patch
> # User Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> # Date 1345579709 14400
> # Node ID 74bedb086c5b72447262e087c0218b89f8bc9140
> # Parent  8ed3eef706710c9c476a8d984bfb2861d92bedfb
> xen/pagetables: Document that all of the initial regions are mapped.
> 
> The documentation states that the layout of the initial region is
> as follows:
>    a. relocated kernel image
>    b. initial ram disk              [mod_start, mod_len]
>    c. list of allocated page frames [mfn_list, nr_pages]
>       (unless relocated due to XEN_ELFNOTE_INIT_P2M)
>    d. start_info_t structure        [register ESI (x86)]
>    e. bootstrap page tables         [pt_base, CR3 (x86)]
>    f. bootstrap stack               [register ESP (x86)]
> 
> But it does not clarify that the virtual addresses of all of
> those areas are initially mapped by the pagetables at pt_base (or CR3).
> Let's fix that.

To me this is already being said by "This is the order of bootstrap
elements in the initial virtual region".

Jan

> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> 
> diff -r 8ed3eef70671 -r 74bedb086c5b xen/include/public/xen.h
> --- a/xen/include/public/xen.h	Tue Aug 21 16:08:29 2012 -0400
> +++ b/xen/include/public/xen.h	Tue Aug 21 16:08:29 2012 -0400
> @@ -675,6 +675,9 @@ typedef struct shared_info shared_info_t
>   *  8. There is guaranteed to be at least 512kB padding after the final
>   *     bootstrap element. If necessary, the bootstrap virtual region is
>   *     extended by an extra 4MB to ensure this.
> + *
> + *  NOTE: The initial virtual region (3a -> 3f) are all mapped by the initial
> + *  pagetables [pt_base, CR3 (x86)].
>   */
>  
>  #define MAX_GUEST_CMDLINE 1024
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org 
> http://lists.xen.org/xen-devel 




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 14:17:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 14:17:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4Bju-00076Z-1U; Wed, 22 Aug 2012 14:16:58 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ketuzsezr@gmail.com>) id 1T4Bjs-000760-EP
	for xen-devel@lists.xensource.com; Wed, 22 Aug 2012 14:16:56 +0000
Received: from [85.158.143.99:41325] by server-1.bemta-4.messagelabs.com id
	84/B0-07754-7D9E4305; Wed, 22 Aug 2012 14:16:55 +0000
X-Env-Sender: ketuzsezr@gmail.com
X-Msg-Ref: server-9.tower-216.messagelabs.com!1345645012!27416347!1
X-Originating-IP: [209.85.212.65]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 686 invoked from network); 22 Aug 2012 14:16:53 -0000
Received: from mail-vb0-f65.google.com (HELO mail-vb0-f65.google.com)
	(209.85.212.65)
	by server-9.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 14:16:53 -0000
Received: by vbme21 with SMTP id e21so100795vbm.0
	for <xen-devel@lists.xensource.com>;
	Wed, 22 Aug 2012 07:16:52 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:date:from:to:cc:subject:message-id:references:mime-version
	:content-type:content-disposition:in-reply-to:user-agent;
	bh=LKX63LsC67wQrJShXnZwHRPrEKTqyAZAC74dhv31/gE=;
	b=QUqLoXTU3run57nfztCUMqem24EhubE8EhcgA9ydApnOKpXE6ZYV9LdRgdy695UXRq
	ljidAQMjPnzmXbvs61y0Wa5soejnvJjGpJz4eKtHoycMhK01zAg2K32Umao9mLUKCjuX
	qD/wfreJfL+V6zC08QXM544+fWzvUnRhi7Pl3RFJv3EYnqYNmMmSd21A23KlEQbG84jj
	c4I6XYWgxUvYtAh60zOWXnJ5PvB2bGKjAV+gP8d1eI69im7ewsdqkDB22kMB0Njr+AQv
	mDU1WGiFUqc2qJjg08GRMHblb60eU2SMxHvb7lL7Krb3pqMVz3uSuIhU87K2CtMO4hz8
	3qWA==
Received: by 10.52.26.104 with SMTP id k8mr7754967vdg.79.1345645012335;
	Wed, 22 Aug 2012 07:16:52 -0700 (PDT)
Received: from phenom.dumpdata.com
	(209-6-85-33.c3-0.smr-ubr2.sbo-smr.ma.cable.rcn.com. [209.6.85.33])
	by mx.google.com with ESMTPS id dt8sm1997336vec.3.2012.08.22.07.16.51
	(version=TLSv1/SSLv3 cipher=OTHER);
	Wed, 22 Aug 2012 07:16:51 -0700 (PDT)
Date: Wed, 22 Aug 2012 10:06:49 -0400
From: Konrad Rzeszutek Wilk <konrad@kernel.org>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20120822140649.GA31112@phenom.dumpdata.com>
References: <patchbomb.1345579710@phenom.dumpdata.com>
	<b992f8e613a3401b9ddd.1345579714@phenom.dumpdata.com>
	<1345627270.6821.125.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1345627270.6821.125.camel@zakaz.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Jan Beulich <JBeulich@suse.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [PATCH 4 of 4] xen/pagetables: Document pt_base
 inconsistency when running in COMPAT mode
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Aug 22, 2012 at 10:21:10AM +0100, Ian Campbell wrote:
> On Tue, 2012-08-21 at 21:08 +0100, Konrad Rzeszutek Wilk wrote:
> > # HG changeset patch
> > # User Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > # Date 1345579709 14400
> > # Node ID b992f8e613a3401b9ddd140ce436c840d412beb7
> > # Parent  74bedb086c5b72447262e087c0218b89f8bc9140
> > xen/pagetables: Document pt_base inconsistency when running in COMPAT mode.
> > 
> > c/s 13257 added the COMPAT mode wherein a 64-bit hypervisor can
> > run with a 32-bit initial domain. One of the things it added was
> > that the pt_base is offset by two pages. This inconsistency
> > is only present in the COMPAT mode so lets document it.
> > 
> > Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > 
> > diff -r 74bedb086c5b -r b992f8e613a3 xen/include/public/xen.h
> > --- a/xen/include/public/xen.h	Tue Aug 21 16:08:29 2012 -0400
> > +++ b/xen/include/public/xen.h	Tue Aug 21 16:08:29 2012 -0400
> > @@ -663,7 +663,7 @@ typedef struct shared_info shared_info_t
> >   *      c. list of allocated page frames [mfn_list, nr_pages]
> >   *         (unless relocated due to XEN_ELFNOTE_INIT_P2M)
> >   *      d. start_info_t structure        [register ESI (x86)]
> > - *      e. bootstrap page tables         [pt_base, CR3 (x86)]
> > + *      e. bootstrap page tables         [pt_base, CR3 (x86)] *1
> >   *      f. bootstrap stack               [register ESP (x86)]
> >   *  4. Bootstrap elements are packed together, but each is 4kB-aligned.
> >   *  5. The initial ram disk may be omitted.
> > @@ -678,6 +678,9 @@ typedef struct shared_info shared_info_t
> >   *
> >   *  NOTE: The initial virtual region (3a -> 3f) are all mapped by the initial
> >   *  pagetables [pt_base, CR3 (x86)].
> > + *
> > + *  *1: When booting under a 64-bit hypervisor with a 32-bit initial domain
> > + *  it is offset by two pages (pt_base += PAGE_SIZE*2).
> 
> "it" here being the page which you would have to load into cr3?

No. The pt_base value.
> 
> What, if anything, ends up in the other two pages?

The L3 and the first L1. The pt_base value is incorrect: it points to the second L1 instead
of the L3.

It's laid out like this: [L3] [L1] [L1] ... [L2]
                          a    b    c

and pt_base has the phys_addr of 'c' instead of 'a'.
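[Editorial sketch, not code from the thread or from Xen/Linux: the layout above implies that a 32-bit PV initial domain under a 64-bit hypervisor must subtract two pages from the reported pt_base to find the real L3 root. The helper name compat_l3_base and the sample addresses are invented for illustration.]

```c
#include <stdint.h>

#define PAGE_SIZE 4096UL

/* Memory layout in COMPAT mode: [L3] [L1] [L1] ... [L2]
 * pt_base names the second L1 ('c'); the real top-level L3 ('a')
 * sits two pages earlier, because the hypervisor applied
 * pt_base += PAGE_SIZE * 2. */
static uint64_t compat_l3_base(uint64_t pt_base, int compat_mode)
{
    /* Undo the two-page offset only in COMPAT mode; a 64-bit
     * initial domain sees pt_base pointing at the root directly. */
    return compat_mode ? pt_base - 2 * PAGE_SIZE : pt_base;
}
```

Under these assumptions, a guest that needs the L3's physical address would pass the start_info pt_base through such a helper before loading anything derived from it.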


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 14:17:19 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 14:17:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4Bk4-000788-El; Wed, 22 Aug 2012 14:17:08 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T4Bk2-00077o-NT
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 14:17:06 +0000
Received: from [85.158.143.35:25156] by server-1.bemta-4.messagelabs.com id
	3B/F0-07754-2E9E4305; Wed, 22 Aug 2012 14:17:06 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1345645017!13824000!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24122 invoked from network); 22 Aug 2012 14:16:57 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-15.tower-21.messagelabs.com with SMTP;
	22 Aug 2012 14:16:57 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 22 Aug 2012 15:16:56 +0100
Message-Id: <503506250200007800096F33@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Wed, 22 Aug 2012 15:17:41 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Konrad Rzeszutek Wilk" <konrad.wilk@oracle.com>
References: <patchbomb.1345579710@phenom.dumpdata.com>
	<b992f8e613a3401b9ddd.1345579714@phenom.dumpdata.com>
In-Reply-To: <b992f8e613a3401b9ddd.1345579714@phenom.dumpdata.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 4 of 4] xen/pagetables: Document pt_base
 inconsistency when running in COMPAT mode
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 21.08.12 at 22:08, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> # HG changeset patch
> # User Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> # Date 1345579709 14400
> # Node ID b992f8e613a3401b9ddd140ce436c840d412beb7
> # Parent  74bedb086c5b72447262e087c0218b89f8bc9140
> xen/pagetables: Document pt_base inconsistency when running in COMPAT mode.
> 
> c/s 13257 added the COMPAT mode wherein a 64-bit hypervisor can
> run with a 32-bit initial domain. One of the things it added was
> that the pt_base is offset by two pages. This inconsistency
> is only present in the COMPAT mode so lets document it.

Let's first understand whether that's really a bug.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 14:20:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 14:20:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4Bn9-0007U8-1v; Wed, 22 Aug 2012 14:20:19 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <tglx@linutronix.de>) id 1T4Bn8-0007Tw-2P
	for xen-devel@lists.xensource.com; Wed, 22 Aug 2012 14:20:18 +0000
Received: from [85.158.143.35:21814] by server-2.bemta-4.messagelabs.com id
	77/E5-21239-1AAE4305; Wed, 22 Aug 2012 14:20:17 +0000
X-Env-Sender: tglx@linutronix.de
X-Msg-Ref: server-12.tower-21.messagelabs.com!1345645198!12340537!1
X-Originating-IP: [62.245.132.108]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6370 invoked from network); 22 Aug 2012 14:19:59 -0000
Received: from www.linutronix.de (HELO Galois.linutronix.de) (62.245.132.108)
	by server-12.tower-21.messagelabs.com with AES256-SHA encrypted SMTP;
	22 Aug 2012 14:19:59 -0000
Received: from localhost ([127.0.0.1]) by Galois.linutronix.de with esmtps
	(TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32) (Exim 4.72)
	(envelope-from <tglx@linutronix.de>)
	id 1T4Bme-000678-Mk; Wed, 22 Aug 2012 16:19:48 +0200
Date: Wed, 22 Aug 2012 16:19:47 +0200 (CEST)
From: Thomas Gleixner <tglx@linutronix.de>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <20120822135753.GA30964@phenom.dumpdata.com>
Message-ID: <alpine.LFD.2.02.1208221618380.2856@ionos>
References: <1345580561-8506-1-git-send-email-attilio.rao@citrix.com>
	<alpine.LFD.2.02.1208212315330.2856@ionos>
	<20120822135753.GA30964@phenom.dumpdata.com>
User-Agent: Alpine 2.02 (LFD 1266 2009-07-14)
MIME-Version: 1.0
X-Linutronix-Spam-Score: -1.0
X-Linutronix-Spam-Level: -
X-Linutronix-Spam-Status: No , -1.0 points, 5.0 required, ALL_TRUSTED=-1,
	SHORTCIRCUIT=-0.0001
Cc: xen-devel@lists.xensource.com, Ian.Campbell@citrix.com,
	Stefano.Stabellini@eu.citrix.com, x86@kernel.org,
	linux-kernel@vger.kernel.org, mingo@redhat.com, hpa@zytor.com,
	Attilio Rao <attilio.rao@citrix.com>
Subject: Re: [Xen-devel] [PATCH v2 0/5] X86/XEN: Merge
 x86_init.paging.pagetable_setup_start and
 x86_init.paging.pagetable_setup_done setup functions and document its
 semantic
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 22 Aug 2012, Konrad Rzeszutek Wilk wrote:
> On Tue, Aug 21, 2012 at 11:22:03PM +0200, Thomas Gleixner wrote:
> > On Tue, 21 Aug 2012, Attilio Rao wrote:
> > > Differences with v1:
> > > - The patch serie is re-arranged in a way that it helps reviews, following
> > >   a plan by Thomas Gleixner
> > > - The PVOPS nomenclature is not used as it is not correct
> > > - The front-end message is adjusted with feedback by Thomas Gleixner,
> > >   Stefano Stabellini and Konrad Rzeszutek Wilk 
> > 
> > This is much simpler to read and review. Just have a look at the
> > diffstats of the two series:
> > 
> >  6 files changed,  9 insertions(+),  8 deletions(-)
> >  6 files changed, 11 insertions(+),  9 deletions(-)
> >  5 files changed, 50 insertions(+),  2 deletions(-)
> >  6 files changed,  2 insertions(+), 65 deletions(-)
> >  1 files changed,  5 insertions(+),  0 deletions(-)
> > 
> > versus
> > 
> >  6 files changed, 10 insertions(+),  9 deletions(-)
> >  6 files changed, 11 insertions(+), 11 deletions(-)
> >  5 files changed,  3 insertions(+),  3 deletions(-)
> >  6 files changed,  4 insertions(+), 20 deletions(-)
> >  1 files changed,  5 insertions(+),  0 deletions(-)
> > 
> > The overall result is basically the same, but it's way simpler to look
> > at obvious and well done patches than checking whether a subtle copy
> > and paste bug happened in 3/5 of the first version. Copy and paste is
> > the #1 cause for subtle bugs. :)
> > 
> > I'm waiting for the ack of Xen folks before taking it into tip.
> 
> I've some extra patches that modify the new "paginig_init" in the Xen
> code that I am going to propose for v3.7 - so will have some merge
> conflicts. Let me figure that out and also run this set of patches
> (and also the previous one .. which I think you didn't have a
> chance to look since you were on vacation?) on an overnight

Which previous one ?

> test to make sure there are no fallout.
> 
> With the merge issues that are going to prop up (x86 tip tree
> and my tree in linux-next) should I just take these patches
> in my tree with your Ack? Or should I just ingest your tiptree
> in my tree and that way solve the merge issue? What's your
> preference!

Having it in tip in an extra branch which you pull into your
tree. That's the easiest one.

Thanks,

	tglx

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 14:21:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 14:21:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4Bnk-0007Xo-F9; Wed, 22 Aug 2012 14:20:56 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T4Bni-0007Xa-N0
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 14:20:55 +0000
Received: from [85.158.138.51:10122] by server-3.bemta-3.messagelabs.com id
	1C/95-13809-5CAE4305; Wed, 22 Aug 2012 14:20:53 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-174.messagelabs.com!1345645253!27470824!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23308 invoked from network); 22 Aug 2012 14:20:53 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-174.messagelabs.com with SMTP;
	22 Aug 2012 14:20:53 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 22 Aug 2012 15:20:52 +0100
Message-Id: <5035070F0200007800096F57@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Wed, 22 Aug 2012 15:21:35 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andres Lagar-Cavilla" <andres@lagarcavilla.org>
References: <84a23ae853a53e39ebd1.1345580037@xdev.gridcentric.ca>
In-Reply-To: <84a23ae853a53e39ebd1.1345580037@xdev.gridcentric.ca>
Mime-Version: 1.0
Content-Disposition: inline
Cc: tim@xen.org, andres@gridcentric.ca, keir@xen.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] Fix shared entry status for grant copy
 operation on paged out gfn
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

 >>> On 21.08.12 at 22:13, Andres Lagar-Cavilla <andres@lagarcavilla.org> wrote:
> xen/common/grant_table.c |  33 ++++++++++++++++++++++-----------
>  1 files changed, 22 insertions(+), 11 deletions(-)
> 
> 
> The unwind path was not clearing the shared entry status bits. This was
> BSOD-ing guests on network activity under certain configurations.
> 
> Also:
>  * sed the fixup method name to signal it's related to grant copy.
>  * use atomic clear flag ops during fixup.

Is that last thing really needed? I remember having looked at
these non-atomic operations too a little while back, and came to
the conclusion that probably the authors intentionally coded it
that way.
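[Editorial sketch of the distinction Jan raises, not Xen code: a plain `*status &= ~GTF_writing` is a read-modify-write that can lose a concurrent update to another bit in the same word, while an atomic clear cannot. The bit position of GTF_writing below is chosen for illustration only, and gnttab_clear_flag is modeled here with a C11 atomic rather than Xen's own primitive.]

```c
#include <stdatomic.h>
#include <stdint.h>

#define GTF_writing (1u << 5)   /* illustrative bit position, not Xen's */

/* Non-atomic clear: load, mask, store. A concurrent writer touching
 * another bit between the load and the store can be overwritten. */
static void clear_flag_plain(uint16_t *status)
{
    *status &= (uint16_t)~GTF_writing;
}

/* Atomic clear: the read-modify-write is indivisible, so concurrent
 * updates to other bits in the same status word are preserved. */
static void clear_flag_atomic(_Atomic uint16_t *status)
{
    atomic_fetch_and(status, (uint16_t)~GTF_writing);
}
```

Whether the atomic form is required in the fixup path is exactly the question in this subthread; the sketch only shows what the two spellings guarantee, not which one the grant-table code needs.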

jan

> Signed-off-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
> 
> diff -r 5267f78c8a1e -r 84a23ae853a5 xen/common/grant_table.c
> --- a/xen/common/grant_table.c
> +++ b/xen/common/grant_table.c
> @@ -1751,14 +1751,14 @@ __release_grant_for_copy(
>     under the domain's grant table lock. */
>  /* Only safe on transitive grants.  Even then, note that we don't
>     attempt to drop any pin on the referent grant. */
> -static void __fixup_status_for_pin(const struct active_grant_entry *act,
> +static void __fixup_status_for_copy_pin(const struct active_grant_entry 
> *act,
>                                     uint16_t *status)
>  {
>      if ( !(act->pin & GNTPIN_hstw_mask) )
> -        *status &= ~GTF_writing;
> +        gnttab_clear_flag(_GTF_writing, status);
>  
>      if ( !(act->pin & GNTPIN_hstr_mask) )
> -        *status &= ~GTF_reading;
> +        gnttab_clear_flag(_GTF_reading, status);
>  }
>  
>  /* Grab a frame number from a grant entry and update the flags and pin
> @@ -1834,7 +1834,7 @@ __acquire_grant_for_copy(
>          if ( sha2 && (shah->flags & GTF_type_mask) == GTF_transitive )
>          {
>              if ( !allow_transitive )
> -                PIN_FAIL(unlock_out, GNTST_general_error,
> +                PIN_FAIL(unlock_out_clear, GNTST_general_error,
>                           "transitive grant when transitivity not 
> allowed\n");
>  
>              trans_domid = sha2->transitive.trans_domid;
> @@ -1842,7 +1842,7 @@ __acquire_grant_for_copy(
>              barrier(); /* Stop the compiler from re-loading
>                            trans_domid from shared memory */
>              if ( trans_domid == rd->domain_id )
> -                PIN_FAIL(unlock_out, GNTST_general_error,
> +                PIN_FAIL(unlock_out_clear, GNTST_general_error,
>                           "transitive grants cannot be self-referential\n");
>  
>              /* We allow the trans_domid == ldom case, which
> @@ -1855,7 +1855,7 @@ __acquire_grant_for_copy(
>              /* We need to leave the rrd locked during the grant copy */
>              td = rcu_lock_domain_by_id(trans_domid);
>              if ( td == NULL )
> -                PIN_FAIL(unlock_out, GNTST_general_error,
> +                PIN_FAIL(unlock_out_clear, GNTST_general_error,
>                           "transitive grant referenced bad domain %d\n",
>                           trans_domid);
>              spin_unlock(&rgt->lock);
> @@ -1866,7 +1866,7 @@ __acquire_grant_for_copy(
>  
>              spin_lock(&rgt->lock);
>              if ( rc != GNTST_okay ) {
> -                __fixup_status_for_pin(act, status);
> +                __fixup_status_for_copy_pin(act, status);
>                  rcu_unlock_domain(td);
>                  spin_unlock(&rgt->lock);
>                  return rc;
> @@ -1878,7 +1878,7 @@ __acquire_grant_for_copy(
>                 and try again. */
>              if ( act->pin != old_pin )
>              {
> -                __fixup_status_for_pin(act, status);
> +                __fixup_status_for_copy_pin(act, status);
>                  rcu_unlock_domain(td);
>                  spin_unlock(&rgt->lock);
>                  put_page(*page);
> @@ -1897,7 +1897,7 @@ __acquire_grant_for_copy(
>          {
>              rc = __get_paged_frame(sha1->frame, &grant_frame, page, readonly, 
> rd);
>              if ( rc != GNTST_okay )
> -                goto unlock_out;
> +                goto unlock_out_clear;
>              act->gfn = sha1->frame;
>              is_sub_page = 0;
>              trans_page_off = 0;
> @@ -1907,7 +1907,7 @@ __acquire_grant_for_copy(
>          {
>              rc = __get_paged_frame(sha2->full_page.frame, &grant_frame, page, 
> readonly, rd);
>              if ( rc != GNTST_okay )
> -                goto unlock_out;
> +                goto unlock_out_clear;
>              act->gfn = sha2->full_page.frame;
>              is_sub_page = 0;
>              trans_page_off = 0;
> @@ -1917,7 +1917,7 @@ __acquire_grant_for_copy(
>          {
>              rc = __get_paged_frame(sha2->sub_page.frame, &grant_frame, page, 
> readonly, rd);
>              if ( rc != GNTST_okay )
> -                goto unlock_out;
> +                goto unlock_out_clear;
>              act->gfn = sha2->sub_page.frame;
>              is_sub_page = 1;
>              trans_page_off = sha2->sub_page.page_off;
> @@ -1948,6 +1948,17 @@ __acquire_grant_for_copy(
>      *length = act->length;
>      *frame = act->frame;
>  
> +    spin_unlock(&rgt->lock);
> +    return rc;
> + 
> + unlock_out_clear:
> +    if ( !(readonly) &&
> +         !(act->pin & GNTPIN_hstw_mask) )
> +        gnttab_clear_flag(_GTF_writing, status);
> +
> +    if ( !act->pin )
> +        gnttab_clear_flag(_GTF_reading, status);
> +
>   unlock_out:
>      spin_unlock(&rgt->lock);
>      return rc;



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 14:21:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 14:21:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4Bnk-0007Xo-F9; Wed, 22 Aug 2012 14:20:56 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T4Bni-0007Xa-N0
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 14:20:55 +0000
Received: from [85.158.138.51:10122] by server-3.bemta-3.messagelabs.com id
	1C/95-13809-5CAE4305; Wed, 22 Aug 2012 14:20:53 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-174.messagelabs.com!1345645253!27470824!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23308 invoked from network); 22 Aug 2012 14:20:53 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-174.messagelabs.com with SMTP;
	22 Aug 2012 14:20:53 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 22 Aug 2012 15:20:52 +0100
Message-Id: <5035070F0200007800096F57@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Wed, 22 Aug 2012 15:21:35 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andres Lagar-Cavilla" <andres@lagarcavilla.org>
References: <84a23ae853a53e39ebd1.1345580037@xdev.gridcentric.ca>
In-Reply-To: <84a23ae853a53e39ebd1.1345580037@xdev.gridcentric.ca>
Mime-Version: 1.0
Content-Disposition: inline
Cc: tim@xen.org, andres@gridcentric.ca, keir@xen.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] Fix shared entry status for grant copy
 operation on paged out gfn
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

 >>> On 21.08.12 at 22:13, Andres Lagar-Cavilla <andres@lagarcavilla.org> wrote:
> xen/common/grant_table.c |  33 ++++++++++++++++++++++-----------
>  1 files changed, 22 insertions(+), 11 deletions(-)
> 
> 
> The unwind path was not clearing the shared entry status bits. This was
> BSOD-ing guests on network activity under certain configurations.
> 
> Also:
>  * sed the fixup method name to signal it's related to grant copy.
>  * use atomic clear flag ops during fixup.

Is that last thing really needed? I remember having looked at
these non-atomic operations too a little while back, and came to
the conclusion that probably the authors intentionally coded it
that way.

jan

> Signed-off-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
> 
> diff -r 5267f78c8a1e -r 84a23ae853a5 xen/common/grant_table.c
> --- a/xen/common/grant_table.c
> +++ b/xen/common/grant_table.c
> @@ -1751,14 +1751,14 @@ __release_grant_for_copy(
>     under the domain's grant table lock. */
>  /* Only safe on transitive grants.  Even then, note that we don't
>     attempt to drop any pin on the referent grant. */
> -static void __fixup_status_for_pin(const struct active_grant_entry *act,
> +static void __fixup_status_for_copy_pin(const struct active_grant_entry 
> *act,
>                                     uint16_t *status)
>  {
>      if ( !(act->pin & GNTPIN_hstw_mask) )
> -        *status &= ~GTF_writing;
> +        gnttab_clear_flag(_GTF_writing, status);
>  
>      if ( !(act->pin & GNTPIN_hstr_mask) )
> -        *status &= ~GTF_reading;
> +        gnttab_clear_flag(_GTF_reading, status);
>  }
>  
>  /* Grab a frame number from a grant entry and update the flags and pin
> @@ -1834,7 +1834,7 @@ __acquire_grant_for_copy(
>          if ( sha2 && (shah->flags & GTF_type_mask) == GTF_transitive )
>          {
>              if ( !allow_transitive )
> -                PIN_FAIL(unlock_out, GNTST_general_error,
> +                PIN_FAIL(unlock_out_clear, GNTST_general_error,
>                           "transitive grant when transitivity not 
> allowed\n");
>  
>              trans_domid = sha2->transitive.trans_domid;
> @@ -1842,7 +1842,7 @@ __acquire_grant_for_copy(
>              barrier(); /* Stop the compiler from re-loading
>                            trans_domid from shared memory */
>              if ( trans_domid == rd->domain_id )
> -                PIN_FAIL(unlock_out, GNTST_general_error,
> +                PIN_FAIL(unlock_out_clear, GNTST_general_error,
>                           "transitive grants cannot be self-referential\n");
>  
>              /* We allow the trans_domid == ldom case, which
> @@ -1855,7 +1855,7 @@ __acquire_grant_for_copy(
>              /* We need to leave the rrd locked during the grant copy */
>              td = rcu_lock_domain_by_id(trans_domid);
>              if ( td == NULL )
> -                PIN_FAIL(unlock_out, GNTST_general_error,
> +                PIN_FAIL(unlock_out_clear, GNTST_general_error,
>                           "transitive grant referenced bad domain %d\n",
>                           trans_domid);
>              spin_unlock(&rgt->lock);
> @@ -1866,7 +1866,7 @@ __acquire_grant_for_copy(
>  
>              spin_lock(&rgt->lock);
>              if ( rc != GNTST_okay ) {
> -                __fixup_status_for_pin(act, status);
> +                __fixup_status_for_copy_pin(act, status);
>                  rcu_unlock_domain(td);
>                  spin_unlock(&rgt->lock);
>                  return rc;
> @@ -1878,7 +1878,7 @@ __acquire_grant_for_copy(
>                 and try again. */
>              if ( act->pin != old_pin )
>              {
> -                __fixup_status_for_pin(act, status);
> +                __fixup_status_for_copy_pin(act, status);
>                  rcu_unlock_domain(td);
>                  spin_unlock(&rgt->lock);
>                  put_page(*page);
> @@ -1897,7 +1897,7 @@ __acquire_grant_for_copy(
>          {
>              rc = __get_paged_frame(sha1->frame, &grant_frame, page, readonly, 
> rd);
>              if ( rc != GNTST_okay )
> -                goto unlock_out;
> +                goto unlock_out_clear;
>              act->gfn = sha1->frame;
>              is_sub_page = 0;
>              trans_page_off = 0;
> @@ -1907,7 +1907,7 @@ __acquire_grant_for_copy(
>          {
>              rc = __get_paged_frame(sha2->full_page.frame, &grant_frame, page, 
> readonly, rd);
>              if ( rc != GNTST_okay )
> -                goto unlock_out;
> +                goto unlock_out_clear;
>              act->gfn = sha2->full_page.frame;
>              is_sub_page = 0;
>              trans_page_off = 0;
> @@ -1917,7 +1917,7 @@ __acquire_grant_for_copy(
>          {
>              rc = __get_paged_frame(sha2->sub_page.frame, &grant_frame, page, 
> readonly, rd);
>              if ( rc != GNTST_okay )
> -                goto unlock_out;
> +                goto unlock_out_clear;
>              act->gfn = sha2->sub_page.frame;
>              is_sub_page = 1;
>              trans_page_off = sha2->sub_page.page_off;
> @@ -1948,6 +1948,17 @@ __acquire_grant_for_copy(
>      *length = act->length;
>      *frame = act->frame;
>  
> +    spin_unlock(&rgt->lock);
> +    return rc;
> + 
> + unlock_out_clear:
> +    if ( !(readonly) &&
> +         !(act->pin & GNTPIN_hstw_mask) )
> +        gnttab_clear_flag(_GTF_writing, status);
> +
> +    if ( !act->pin )
> +        gnttab_clear_flag(_GTF_reading, status);
> +
>   unlock_out:
>      spin_unlock(&rgt->lock);
>      return rc;




From xen-devel-bounces@lists.xen.org Wed Aug 22 14:33:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 14:33:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4BzG-0007tD-Rw; Wed, 22 Aug 2012 14:32:50 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andreslc@gridcentric.ca>) id 1T4BzE-0007su-S7
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 14:32:49 +0000
Received: from [85.158.143.35:21698] by server-2.bemta-4.messagelabs.com id
	55/5B-21239-09DE4305; Wed, 22 Aug 2012 14:32:48 +0000
X-Env-Sender: andreslc@gridcentric.ca
X-Msg-Ref: server-12.tower-21.messagelabs.com!1345645957!12343166!1
X-Originating-IP: [209.85.213.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4262 invoked from network); 22 Aug 2012 14:32:39 -0000
Received: from mail-yx0-f173.google.com (HELO mail-yx0-f173.google.com)
	(209.85.213.173)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 14:32:39 -0000
Received: by yenm4 with SMTP id m4so803380yen.32
	for <xen-devel@lists.xen.org>; Wed, 22 Aug 2012 07:32:37 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=subject:mime-version:content-type:from:in-reply-to:date:cc
	:content-transfer-encoding:message-id:references:to:x-mailer
	:x-gm-message-state;
	bh=yAEcGvgWH8RutDmArJBjt/0Y5q1ct2PkTqdJtXfqH/I=;
	b=QdKuZCrs65cvaiXGSzlnl9nAAYMv3n4XH3yatolhKiu4iy/2QJ+fLaHn6P25PdKWdi
	1Q3YbkXB95NAaNHCuxx7Ie7Aidn9SJ1JhLiXNbZZxidPbAd/pt3U6yckXqpXhDRDlCGx
	v57uVJbE5dccNDc78CelAsKATI3fLUIUegRyIUehzHaGXW2FI78nrOrpyDBaZnCBZfNe
	FnwP+B6Y5Qroa10KQfiLRDkrnrPIeqMPY1FzOqTE0AktpWJKVzExbJft6OmVNl/EpspB
	f7bo/F0tMAdx+swtvXms9rnpLX3Vb09cjCht1mNS5fmKWY6HIVKYTpitE0TN1h+HWJjz
	fCtw==
Received: by 10.50.192.199 with SMTP id hi7mr2341231igc.68.1345645957030;
	Wed, 22 Aug 2012 07:32:37 -0700 (PDT)
Received: from [192.168.7.210] ([206.223.182.18])
	by mx.google.com with ESMTPS id ua5sm3046716igb.10.2012.08.22.07.32.35
	(version=TLSv1/SSLv3 cipher=OTHER);
	Wed, 22 Aug 2012 07:32:36 -0700 (PDT)
Mime-Version: 1.0 (Apple Message framework v1278)
From: Andres Lagar-Cavilla <andreslc@gridcentric.ca>
In-Reply-To: <5035070F0200007800096F57@nat28.tlf.novell.com>
Date: Wed, 22 Aug 2012 10:32:36 -0400
Message-Id: <F0E0A8D1-8BC1-4E1C-BEE3-516C9C592E29@gridcentric.ca>
References: <84a23ae853a53e39ebd1.1345580037@xdev.gridcentric.ca>
	<5035070F0200007800096F57@nat28.tlf.novell.com>
To: Jan Beulich <JBeulich@suse.com>
X-Mailer: Apple Mail (2.1278)
X-Gm-Message-State: ALoCoQl68I/5BoFRWUszZjrb/YJN2JYDm35FECdHfNCTiXy5zzvikEoldaesNzeSgZdaqoA8RnZD
Cc: tim@xen.org, keir@xen.org, Andres Lagar-Cavilla <andres@lagarcavilla.org>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] Fix shared entry status for grant copy
	operation on paged out gfn
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On Aug 22, 2012, at 10:21 AM, Jan Beulich wrote:

>>>> On 21.08.12 at 22:13, Andres Lagar-Cavilla <andres@lagarcavilla.org> wrote:
>> xen/common/grant_table.c |  33 ++++++++++++++++++++++-----------
>> 1 files changed, 22 insertions(+), 11 deletions(-)
>> 
>> 
>> The unwind path was not clearing the shared entry status bits. This was
>> BSOD-ing guests on network activity under certain configurations.
>> 
>> Also:
>> * sed the fixup method name to signal it's related to grant copy.
>> * use atomic clear flag ops during fixup.
> 
> Is that last thing really needed? I remember having looked at
> these non-atomic operations too a little while back, and came to
> the conclusion that probably the authors intentionally coded it
> that way.

Due to some obscure property of transitive grants? All other flag clearing/setting in shared grant entries by the hypervisor is done atomically (GTF_transfer_completed being an exception).

From my p.o.v. there is no downside. But I am not 100% certain and I can back it off.

Thanks
Andres

> 
> jan
> 
>> Signed-off-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
>> 
>> diff -r 5267f78c8a1e -r 84a23ae853a5 xen/common/grant_table.c
>> --- a/xen/common/grant_table.c
>> +++ b/xen/common/grant_table.c
>> @@ -1751,14 +1751,14 @@ __release_grant_for_copy(
>>    under the domain's grant table lock. */
>> /* Only safe on transitive grants.  Even then, note that we don't
>>    attempt to drop any pin on the referent grant. */
>> -static void __fixup_status_for_pin(const struct active_grant_entry *act,
>> +static void __fixup_status_for_copy_pin(const struct active_grant_entry 
>> *act,
>>                                    uint16_t *status)
>> {
>>     if ( !(act->pin & GNTPIN_hstw_mask) )
>> -        *status &= ~GTF_writing;
>> +        gnttab_clear_flag(_GTF_writing, status);
>> 
>>     if ( !(act->pin & GNTPIN_hstr_mask) )
>> -        *status &= ~GTF_reading;
>> +        gnttab_clear_flag(_GTF_reading, status);
>> }
>> 
>> /* Grab a frame number from a grant entry and update the flags and pin
>> @@ -1834,7 +1834,7 @@ __acquire_grant_for_copy(
>>         if ( sha2 && (shah->flags & GTF_type_mask) == GTF_transitive )
>>         {
>>             if ( !allow_transitive )
>> -                PIN_FAIL(unlock_out, GNTST_general_error,
>> +                PIN_FAIL(unlock_out_clear, GNTST_general_error,
>>                          "transitive grant when transitivity not 
>> allowed\n");
>> 
>>             trans_domid = sha2->transitive.trans_domid;
>> @@ -1842,7 +1842,7 @@ __acquire_grant_for_copy(
>>             barrier(); /* Stop the compiler from re-loading
>>                           trans_domid from shared memory */
>>             if ( trans_domid == rd->domain_id )
>> -                PIN_FAIL(unlock_out, GNTST_general_error,
>> +                PIN_FAIL(unlock_out_clear, GNTST_general_error,
>>                          "transitive grants cannot be self-referential\n");
>> 
>>             /* We allow the trans_domid == ldom case, which
>> @@ -1855,7 +1855,7 @@ __acquire_grant_for_copy(
>>             /* We need to leave the rrd locked during the grant copy */
>>             td = rcu_lock_domain_by_id(trans_domid);
>>             if ( td == NULL )
>> -                PIN_FAIL(unlock_out, GNTST_general_error,
>> +                PIN_FAIL(unlock_out_clear, GNTST_general_error,
>>                          "transitive grant referenced bad domain %d\n",
>>                          trans_domid);
>>             spin_unlock(&rgt->lock);
>> @@ -1866,7 +1866,7 @@ __acquire_grant_for_copy(
>> 
>>             spin_lock(&rgt->lock);
>>             if ( rc != GNTST_okay ) {
>> -                __fixup_status_for_pin(act, status);
>> +                __fixup_status_for_copy_pin(act, status);
>>                 rcu_unlock_domain(td);
>>                 spin_unlock(&rgt->lock);
>>                 return rc;
>> @@ -1878,7 +1878,7 @@ __acquire_grant_for_copy(
>>                and try again. */
>>             if ( act->pin != old_pin )
>>             {
>> -                __fixup_status_for_pin(act, status);
>> +                __fixup_status_for_copy_pin(act, status);
>>                 rcu_unlock_domain(td);
>>                 spin_unlock(&rgt->lock);
>>                 put_page(*page);
>> @@ -1897,7 +1897,7 @@ __acquire_grant_for_copy(
>>         {
>>             rc = __get_paged_frame(sha1->frame, &grant_frame, page, readonly, 
>> rd);
>>             if ( rc != GNTST_okay )
>> -                goto unlock_out;
>> +                goto unlock_out_clear;
>>             act->gfn = sha1->frame;
>>             is_sub_page = 0;
>>             trans_page_off = 0;
>> @@ -1907,7 +1907,7 @@ __acquire_grant_for_copy(
>>         {
>>             rc = __get_paged_frame(sha2->full_page.frame, &grant_frame, page, 
>> readonly, rd);
>>             if ( rc != GNTST_okay )
>> -                goto unlock_out;
>> +                goto unlock_out_clear;
>>             act->gfn = sha2->full_page.frame;
>>             is_sub_page = 0;
>>             trans_page_off = 0;
>> @@ -1917,7 +1917,7 @@ __acquire_grant_for_copy(
>>         {
>>             rc = __get_paged_frame(sha2->sub_page.frame, &grant_frame, page, 
>> readonly, rd);
>>             if ( rc != GNTST_okay )
>> -                goto unlock_out;
>> +                goto unlock_out_clear;
>>             act->gfn = sha2->sub_page.frame;
>>             is_sub_page = 1;
>>             trans_page_off = sha2->sub_page.page_off;
>> @@ -1948,6 +1948,17 @@ __acquire_grant_for_copy(
>>     *length = act->length;
>>     *frame = act->frame;
>> 
>> +    spin_unlock(&rgt->lock);
>> +    return rc;
>> + 
>> + unlock_out_clear:
>> +    if ( !(readonly) &&
>> +         !(act->pin & GNTPIN_hstw_mask) )
>> +        gnttab_clear_flag(_GTF_writing, status);
>> +
>> +    if ( !act->pin )
>> +        gnttab_clear_flag(_GTF_reading, status);
>> +
>>  unlock_out:
>>     spin_unlock(&rgt->lock);
>>     return rc;
> 
> 



From xen-devel-bounces@lists.xen.org Wed Aug 22 14:33:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 14:33:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4BzH-0007tK-7a; Wed, 22 Aug 2012 14:32:51 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Siva.Palagummi@ca.com>) id 1T4BzF-0007su-Di
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 14:32:49 +0000
Received: from [85.158.143.35:60106] by server-2.bemta-4.messagelabs.com id
	66/5B-21239-09DE4305; Wed, 22 Aug 2012 14:32:48 +0000
X-Env-Sender: Siva.Palagummi@ca.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1345645956!11249766!1
X-Originating-IP: [74.125.149.143]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22178 invoked from network); 22 Aug 2012 14:32:41 -0000
Received: from na3sys009aog130.obsmtp.com (HELO na3sys009aog130.obsmtp.com)
	(74.125.149.143)
	by server-11.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 22 Aug 2012 14:32:41 -0000
Received: from INHYMS190.ca.com ([155.35.46.47]) (using TLSv1) by
	na3sys009aob130.postini.com ([74.125.148.12]) with SMTP
	ID DSNKUDTtg1nOqV30sMk2HDj6kERdsUyYq4Yb@postini.com;
	Wed, 22 Aug 2012 07:32:41 PDT
Received: from INHYMS172.ca.com (155.35.35.46) by INHYMS190.ca.com
	(155.35.46.47) with Microsoft SMTP Server (TLS) id 14.1.355.2;
	Wed, 22 Aug 2012 20:02:32 +0530
Received: from INHYMS111B.ca.com ([169.254.4.84]) by INHYMS172.ca.com
	([155.35.35.46]) with mapi id 14.01.0355.002;
	Wed, 22 Aug 2012 20:02:32 +0530
From: "Palagummi, Siva" <Siva.Palagummi@ca.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Thread-Topic: [Xen-devel] [PATCH RFC] xen/netback: Count ring slots properly
	when larger MTU sizes are used
Thread-Index: Ac146Cc2rIyVmg4PT4CbrdsCnnwOsQAF0kYAAFVDoZABRxX9AAAzUKyA///51oD//6PjEIAAYSAA//+jGZA=
Date: Wed, 22 Aug 2012 14:32:32 +0000
Message-ID: <7D7C26B1462EB14CB0E7246697A18C1311B547@INHYMS111B.ca.com>
References: <7D7C26B1462EB14CB0E7246697A18C1310F8E2@INHYMS111B.ca.com>
	<5028D6AC0200007800094651@nat28.tlf.novell.com>
	<7D7C26B1462EB14CB0E7246697A18C13119012@INHYMS111B.ca.com>
	<1345554888.6821.56.camel@zakaz.uk.xensource.com>
	<7D7C26B1462EB14CB0E7246697A18C1311B4BD@INHYMS111B.ca.com>
	<1345641723.12501.2.camel@zakaz.uk.xensource.com>
	<7D7C26B1462EB14CB0E7246697A18C1311B4D9@INHYMS111B.ca.com>
	<1345642800.12501.13.camel@zakaz.uk.xensource.com>
In-Reply-To: <1345642800.12501.13.camel@zakaz.uk.xensource.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.134.16.218]
MIME-Version: 1.0
Cc: Jan Beulich <JBeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH RFC] xen/netback: Count ring slots properly
 when larger MTU sizes are used
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


> -----Original Message-----
> From: Ian Campbell [mailto:Ian.Campbell@citrix.com]
> Sent: Wednesday, August 22, 2012 7:10 PM
> To: Palagummi, Siva
> Cc: Jan Beulich; xen-devel@lists.xen.org
> Subject: Re: [Xen-devel] [PATCH RFC] xen/netback: Count ring slots
> properly when larger MTU sizes are used
> 
> On Wed, 2012-08-22 at 14:30 +0100, Palagummi, Siva wrote:
> > > >
> > > >
> > > > > This:
> > > > > 		/* Filled the batch queue? */
> > > > > 		if (count + MAX_SKB_FRAGS >= XEN_NETIF_RX_RING_SIZE)
> > > > > 			break;
> > > > > seems a bit iffy to me too. I wonder if MAX_SKB_FRAGS should be
> > > > > max_required_rx_slots(vif)? Or maybe the preflight checks from
> > > > > xenvif_start_xmit save us from this fate?
> > > > >
> > > > > Ian.
> > > >
> > > > You are right Ian. The intention of this check seems to be to
> ensure
> > > > that enough slots are still left prior to picking up next skb.
> But
> > > > instead of invoking max_required_rx_slots with already received
> skb,
> > > > we may have to do skb_peek and invoke max_required_rx_slots on
> skb
> > > > that we are about to dequeue. Is there any better way?
> > >
> > > max_required_rx_slots doesn't take an skb as an argument, just a
> vif.
> > > It
> > > returns the worst case number of slots for any skb on that vif.
> > >
> > > Ian.
> >
> > That's true. What I meant is to peek to next skb and get vif from
> that
> > structure to invoke max_required_rx_slots. Don't you think we need to
> > do like that?
> 
> Do you mean something other than max_required_rx_slots? Because the
> prototype of that function is
>         static int max_required_rx_slots(struct xenvif *vif)
> i.e. it doesn't need an skb.
> 
> I think it is acceptable to check for the worst case number of slots.
> That's what we do e.g. in xen_netbk_rx_ring_full
> 
> Using skb_peek might work too though, assuming all the locking etc is
> ok o

I want to use max_required_rx_slots only. So the code will look somewhat like this:

skb = skb_peek(&netbk->rx_queue);
if (skb == NULL)
	break;
vif = netdev_priv(skb->dev);

/* Filled the batch queue? */
if (count + max_required_rx_slots(vif) >= XEN_NETIF_RX_RING_SIZE)
	break;


> -- this is a private queue so I think it is probably ok. Rather than
> calculating the number of slots in xen_netbk_rx_action you probably
> want
> to remember the value from the call to xen_netbk_count_skb_slots in
> start_xmit. Perhaps by stashing it in skb->cb? (see NETFRONT_SKB_CB for
> an example of how to do this)
> 
> Ian.

Ok. I will look into this as well. This will definitely save some cycles in xen_netbk_rx_action. Apart from that, the calculations we already discussed should work fine, right?

Thanks for all your input Ian.

Siva

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
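
Ian's closing suggestion in the message above, counting the slots once in start_xmit and remembering the result in skb->cb, can be sketched as below. This is a self-contained mock, not netback code: `struct sk_buff` is reduced to its 48-byte control buffer, and the `XENVIF_RX_CB` name is a hypothetical mirror of the `NETFRONT_SKB_CB` pattern Ian refers to.

```c
#include <assert.h>
#include <string.h>

/* Mock of the per-skb scratch space: in Linux, struct sk_buff carries a
 * 48-byte cb[] array that each layer holding the skb may use privately. */
struct sk_buff {
    char cb[48];
};

/* Private per-skb state for netback's rx path (hypothetical name,
 * mirroring NETFRONT_SKB_CB in netfront). */
struct xenvif_rx_cb {
    int meta_slots;             /* slots counted once in start_xmit */
};

#define XENVIF_RX_CB(skb) ((struct xenvif_rx_cb *)&(skb)->cb[0])

/* start_xmit side: count the required ring slots once and stash them. */
static void stash_slot_count(struct sk_buff *skb, int slots)
{
    memset(skb->cb, 0, sizeof(skb->cb));
    XENVIF_RX_CB(skb)->meta_slots = slots;
}

/* rx_action side: read the stashed count instead of recomputing it. */
static int stashed_slot_count(const struct sk_buff *skb)
{
    return ((const struct xenvif_rx_cb *)&skb->cb[0])->meta_slots;
}
```

The batching loop can then compare `count + stashed_slot_count(skb)` against the ring size without walking the skb's fragments again.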

From xen-devel-bounces@lists.xen.org Wed Aug 22 14:35:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 14:35:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4C1S-00085W-PO; Wed, 22 Aug 2012 14:35:06 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ketuzsezr@gmail.com>) id 1T4C1R-00085O-LX
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 14:35:05 +0000
Received: from [85.158.138.51:14998] by server-11.bemta-3.messagelabs.com id
	D6/39-23152-81EE4305; Wed, 22 Aug 2012 14:35:04 +0000
X-Env-Sender: ketuzsezr@gmail.com
X-Msg-Ref: server-15.tower-174.messagelabs.com!1345646103!25689209!1
X-Originating-IP: [209.85.220.173]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
  RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11457 invoked from network); 22 Aug 2012 14:35:04 -0000
Received: from mail-vc0-f173.google.com (HELO mail-vc0-f173.google.com)
	(209.85.220.173)
	by server-15.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 14:35:04 -0000
Received: by vcbgb23 with SMTP id gb23so1391054vcb.32
	for <xen-devel@lists.xen.org>; Wed, 22 Aug 2012 07:35:02 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:date:from:to:cc:subject:message-id:references:mime-version
	:content-type:content-disposition:in-reply-to:user-agent;
	bh=tCbPijhuhJHl4EWlXYbWoMgYy4CCQbUAMyJ0qYyLwzw=;
	b=JMOy2vBVerBnAdzO/5c12vmZ8V/Di/cwRi5KUd706qS/hNcp+ZotS9BaRF9jFeaIFP
	6Cnt8hEq6lHI0i2T+JUxDNOYv1b6FGiAtJZgDa83oOU5+ZfCWJRg7mrwBor6/kZVRLCK
	F5T/d1XmU1jxpCmBPGKhcAJ19rvL4m+SAn7WKrlZG2nY1DeSpZFw+ibADRPs3XVbgHbC
	OmrsJ36R77QUa8jOyd0E4OHZwMMxAvUdemSPO/sBqP8uuJUESfbCNNwZxKzKPoZAVU2x
	Nr6I9hjDtIJCTpQsVJk05oG3teBtM5PJ/b5ZIIqn+yfv4LAXAFdw4Hq6x9jt7IWMMTsf
	mshw==
Received: by 10.221.10.13 with SMTP id oy13mr17020647vcb.14.1345646102557;
	Wed, 22 Aug 2012 07:35:02 -0700 (PDT)
Received: from phenom.dumpdata.com
	(209-6-85-33.c3-0.smr-ubr2.sbo-smr.ma.cable.rcn.com. [209.6.85.33])
	by mx.google.com with ESMTPS id a10sm2005233vez.10.2012.08.22.07.35.00
	(version=TLSv1/SSLv3 cipher=OTHER);
	Wed, 22 Aug 2012 07:35:01 -0700 (PDT)
Date: Wed, 22 Aug 2012 10:24:59 -0400
From: Konrad Rzeszutek Wilk <konrad@kernel.org>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20120822142457.GA31341@phenom.dumpdata.com>
References: <patchbomb.1345579710@phenom.dumpdata.com>
	<8ed3eef706710c9c476a.1345579712@phenom.dumpdata.com>
	<503505710200007800096F0B@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <503505710200007800096F0B@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: xen-devel <xen-devel@lists.xen.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [PATCH 2 of 4] get_page_type: Print out extra
 information when failing to get page_type
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Aug 22, 2012 at 03:14:41PM +0100, Jan Beulich wrote:
> >>> On 21.08.12 at 22:08, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> > # HG changeset patch
> > # User Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > # Date 1345579709 14400
> > # Node ID 8ed3eef706710c9c476a8d984bfb2861d92bedfb
> > # Parent  635917c6dac4ab8748572fcbeb3e745428684e15
> > get_page_type: Print out extra information when failing to get page_type.
> > 
> > When any reference to __get_page_type is called and it fails, we get:
> > 
> > (XEN) mm.c:2429:d0 Bad type (saw 1400000000000002 != exp 7000000000000000) 
> > for mfn 10e392 (pfn 1bf6c)
> > 
> > with this patch we get some extra details such as:
> > (XEN) debug.c:127:d0 cr3: 10d80b000, searching for 10e392
> > (XEN) debug.c:111:d0 cr3(10d80b000) has mfn(10e392) in level PUD/L3: [258][272]
> > (XEN) debug.c:111:d0 cr3(10d80b000) has mfn(10e392) in level PMD/L2: [258][511][511]
> > (XEN) debug.c:111:d0 cr3(10d80b000) has mfn(10e392) in level PGD/L4: [272]
> > (XEN) debug.c:111:d0 cr3(10d80b000) has mfn(10e392) in level PUD/L3: [511][511]
> > 
> > where it actually is in the pagetable of the guest. This is useful
> > b/c we can figure out where it is, and use that to figure out where
> > the OS thinks it is.
> 
> In addition to my earlier reply, I also think that the printing
> should be done at info level, so that nothing would get
> additionally printed without special command line options.

I will be more than happy to make those changes. However, I get the
sense that you don't find this all that useful? Perhaps I could spice
it up by also dumping what those 'types' are and adding some
rudimentary troubleshooting logic (it's an L2 type and you are trying to
add it as a writable L1 entry, and so on..)

> 
> Jan
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 14:37:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 14:37:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4C3W-0008FP-Ay; Wed, 22 Aug 2012 14:37:14 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ketuzsezr@gmail.com>) id 1T4C3U-0008F1-Bl
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 14:37:12 +0000
Received: from [85.158.143.99:42756] by server-2.bemta-4.messagelabs.com id
	07/23-21239-79EE4305; Wed, 22 Aug 2012 14:37:11 +0000
X-Env-Sender: ketuzsezr@gmail.com
X-Msg-Ref: server-10.tower-216.messagelabs.com!1345646230!21013783!1
X-Originating-IP: [209.85.212.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10734 invoked from network); 22 Aug 2012 14:37:11 -0000
Received: from mail-vb0-f45.google.com (HELO mail-vb0-f45.google.com)
	(209.85.212.45)
	by server-10.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 14:37:11 -0000
Received: by vbip1 with SMTP id p1so1395124vbi.32
	for <xen-devel@lists.xen.org>; Wed, 22 Aug 2012 07:37:09 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:date:from:to:cc:subject:message-id:references:mime-version
	:content-type:content-disposition:in-reply-to:user-agent;
	bh=Nj57vptJDBjiRTUz9Bq+uX1RLfI3QgLTnhD+wGelImM=;
	b=MD3t1dJxvJIkZ/b41YXuVsvCQBwt9KiCC9mP8TTddtkb9jKgbWbxsTysiUWKDv8SWw
	u6jXy50U3JkJU6HoVRNl+SMQFjIJBxEZ77XMynrGxbVyAgYQx8/hmAAvIgndQTkPgGK+
	Zng7Umav2r9FYBDNcK9n6eGwSlgq12V7DYsvBvihCeR7d81G3BPdwsJ0xMgXwCR8x1zo
	u8cl+HeZHOM0K5RRusYgvonX9lc6nXbHrLF1/vb+fd4yedAvax8DVSK19PqtqrgauXbJ
	fuP/cXA6RT3AeYX7YhQeTuuqrpwyVLfdWken+F009rgIWDdg7FoioGAj+TKoNjrU14ua
	f/7A==
Received: by 10.220.37.194 with SMTP id y2mr16761198vcd.44.1345646229781;
	Wed, 22 Aug 2012 07:37:09 -0700 (PDT)
Received: from phenom.dumpdata.com
	(209-6-85-33.c3-0.smr-ubr2.sbo-smr.ma.cable.rcn.com. [209.6.85.33])
	by mx.google.com with ESMTPS id l12sm2281781vdh.8.2012.08.22.07.37.08
	(version=TLSv1/SSLv3 cipher=OTHER);
	Wed, 22 Aug 2012 07:37:08 -0700 (PDT)
Date: Wed, 22 Aug 2012 10:27:07 -0400
From: Konrad Rzeszutek Wilk <konrad@kernel.org>
To: Jan Beulich <JBeulich@suse.com>, stefano.stabellini@eu.citrix.com
Message-ID: <20120822142705.GB31341@phenom.dumpdata.com>
References: <patchbomb.1345579710@phenom.dumpdata.com>
	<74bedb086c5b72447262.1345579713@phenom.dumpdata.com>
	<503506000200007800096F30@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <503506000200007800096F30@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: xen-devel <xen-devel@lists.xen.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [PATCH 3 of 4] xen/pagetables: Document that all of
 the initial regions are mapped
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Aug 22, 2012 at 03:17:04PM +0100, Jan Beulich wrote:
> >>> On 21.08.12 at 22:08, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> > # HG changeset patch
> > # User Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > # Date 1345579709 14400
> > # Node ID 74bedb086c5b72447262e087c0218b89f8bc9140
> > # Parent  8ed3eef706710c9c476a8d984bfb2861d92bedfb
> > xen/pagetables: Document that all of the initial regions are mapped.
> > 
> > The documentation states that the layout of the initial region looks
> > as so:
> >    a. relocated kernel image
> >    b. initial ram disk              [mod_start, mod_len]
> >    c. list of allocated page frames [mfn_list, nr_pages]
> >       (unless relocated due to XEN_ELFNOTE_INIT_P2M)
> >    d. start_info_t structure        [register ESI (x86)]
> >    e. bootstrap page tables         [pt_base, CR3 (x86)]
> >    f. bootstrap stack               [register ESP (x86)]
> > 
> > But it does not clarify that the virtual address to all of
> > those areas is initially mapped by the pt_base (or CR3).
> > Lets fix that.
> 
> To me this is already being said by "This the order of bootstrap
> elements in the initial virtual region".

Stefano wanted to make sure we have it written as clearly as possible.
I am going to be a good little submitter and let you guys sort this
one out  :-)

<gets some popcorn out>
> 
> Jan
> 
> > Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > 
> > diff -r 8ed3eef70671 -r 74bedb086c5b xen/include/public/xen.h
> > --- a/xen/include/public/xen.h	Tue Aug 21 16:08:29 2012 -0400
> > +++ b/xen/include/public/xen.h	Tue Aug 21 16:08:29 2012 -0400
> > @@ -675,6 +675,9 @@ typedef struct shared_info shared_info_t
> >   *  8. There is guaranteed to be at least 512kB padding after the final
> >   *     bootstrap element. If necessary, the bootstrap virtual region is
> >   *     extended by an extra 4MB to ensure this.
> > + *
> > + *  NOTE: The initial virtual region (3a -> 3f) are all mapped by the initial
> > + *  pagetables [pt_base, CR3 (x86)].
> >   */
> >  
> >  #define MAX_GUEST_CMDLINE 1024
> > 
> > 
> > 
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org 
> > http://lists.xen.org/xen-devel 
> 
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	Wed, 22 Aug 2012 07:37:09 -0700 (PDT)
Received: from phenom.dumpdata.com
	(209-6-85-33.c3-0.smr-ubr2.sbo-smr.ma.cable.rcn.com. [209.6.85.33])
	by mx.google.com with ESMTPS id l12sm2281781vdh.8.2012.08.22.07.37.08
	(version=TLSv1/SSLv3 cipher=OTHER);
	Wed, 22 Aug 2012 07:37:08 -0700 (PDT)
Date: Wed, 22 Aug 2012 10:27:07 -0400
From: Konrad Rzeszutek Wilk <konrad@kernel.org>
To: Jan Beulich <JBeulich@suse.com>, stefano.stabellini@eu.citrix.com
Message-ID: <20120822142705.GB31341@phenom.dumpdata.com>
References: <patchbomb.1345579710@phenom.dumpdata.com>
	<74bedb086c5b72447262.1345579713@phenom.dumpdata.com>
	<503506000200007800096F30@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <503506000200007800096F30@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: xen-devel <xen-devel@lists.xen.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [PATCH 3 of 4] xen/pagetables: Document that all of
 the initial regions are mapped
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Aug 22, 2012 at 03:17:04PM +0100, Jan Beulich wrote:
> >>> On 21.08.12 at 22:08, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> > # HG changeset patch
> > # User Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > # Date 1345579709 14400
> > # Node ID 74bedb086c5b72447262e087c0218b89f8bc9140
> > # Parent  8ed3eef706710c9c476a8d984bfb2861d92bedfb
> > xen/pagetables: Document that all of the initial regions are mapped.
> > 
> > The documentation states that the layout of the initial region looks
> > as so:
> >    a. relocated kernel image
> >    b. initial ram disk              [mod_start, mod_len]
> >    c. list of allocated page frames [mfn_list, nr_pages]
> >       (unless relocated due to XEN_ELFNOTE_INIT_P2M)
> >    d. start_info_t structure        [register ESI (x86)]
> >    e. bootstrap page tables         [pt_base, CR3 (x86)]
> >    f. bootstrap stack               [register ESP (x86)]
> > 
> > But it does not clarify that the virtual addresses of all of
> > those areas are initially mapped by the page tables at pt_base
> > (or CR3). Let's fix that.
> 
> To me this is already being said by "This the order of bootstrap
> elements in the initial virtual region".

Stefano wanted to make sure we have it written as clearly as possible.
I am going to be a good little submitter and let you guys sort this
one out  :-)

<gets some popcorn out>
> 
> Jan
> 
> > Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > 
> > diff -r 8ed3eef70671 -r 74bedb086c5b xen/include/public/xen.h
> > --- a/xen/include/public/xen.h	Tue Aug 21 16:08:29 2012 -0400
> > +++ b/xen/include/public/xen.h	Tue Aug 21 16:08:29 2012 -0400
> > @@ -675,6 +675,9 @@ typedef struct shared_info shared_info_t
> >   *  8. There is guaranteed to be at least 512kB padding after the final
> >   *     bootstrap element. If necessary, the bootstrap virtual region is
> >   *     extended by an extra 4MB to ensure this.
> > + *
> > + *  NOTE: The initial virtual region (3a -> 3f) are all mapped by the initial
> > + *  pagetables [pt_base, CR3 (x86)].
> >   */
> >  
> >  #define MAX_GUEST_CMDLINE 1024
> > 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 14:39:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 14:39:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4C5d-0008QG-SW; Wed, 22 Aug 2012 14:39:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T4C5d-0008Q6-2Z
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 14:39:25 +0000
Received: from [85.158.143.35:37600] by server-3.bemta-4.messagelabs.com id
	76/B9-09529-C1FE4305; Wed, 22 Aug 2012 14:39:24 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1345646363!4600052!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTAzMzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18868 invoked from network); 22 Aug 2012 14:39:23 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 14:39:23 -0000
X-IronPort-AV: E=Sophos;i="4.77,808,1336348800"; d="scan'208";a="14128860"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 14:39:23 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 22 Aug 2012 15:39:23 +0100
Message-ID: <1345646362.12501.26.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "Palagummi, Siva" <Siva.Palagummi@ca.com>
Date: Wed, 22 Aug 2012 15:39:22 +0100
In-Reply-To: <7D7C26B1462EB14CB0E7246697A18C1311B547@INHYMS111B.ca.com>
References: <7D7C26B1462EB14CB0E7246697A18C1310F8E2@INHYMS111B.ca.com>
	<5028D6AC0200007800094651@nat28.tlf.novell.com>
	<7D7C26B1462EB14CB0E7246697A18C13119012@INHYMS111B.ca.com>
	<1345554888.6821.56.camel@zakaz.uk.xensource.com>
	<7D7C26B1462EB14CB0E7246697A18C1311B4BD@INHYMS111B.ca.com>
	<1345641723.12501.2.camel@zakaz.uk.xensource.com>
	<7D7C26B1462EB14CB0E7246697A18C1311B4D9@INHYMS111B.ca.com>
	<1345642800.12501.13.camel@zakaz.uk.xensource.com>
	<7D7C26B1462EB14CB0E7246697A18C1311B547@INHYMS111B.ca.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Jan Beulich <JBeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH RFC] xen/netback: Count ring slots properly
 when larger MTU sizes are used
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-08-22 at 15:32 +0100, Palagummi, Siva wrote:
> 
> > -----Original Message-----
> > From: Ian Campbell [mailto:Ian.Campbell@citrix.com]
> > Sent: Wednesday, August 22, 2012 7:10 PM
> > To: Palagummi, Siva
> > Cc: Jan Beulich; xen-devel@lists.xen.org
> > Subject: Re: [Xen-devel] [PATCH RFC] xen/netback: Count ring slots
> > properly when larger MTU sizes are used
> > 
> > On Wed, 2012-08-22 at 14:30 +0100, Palagummi, Siva wrote:
> > > > >
> > > > >
> > > > > > This:
> > > > > > 		/* Filled the batch queue? */
> > > > > > 		if (count + MAX_SKB_FRAGS >= XEN_NETIF_RX_RING_SIZE)
> > > > > > 			break;
> > > > > > seems a bit iffy to me too. I wonder if MAX_SKB_FRAGS should be
> > > > > > max_required_rx_slots(vif)? Or maybe the preflight checks from
> > > > > > xenvif_start_xmit save us from this fate?
> > > > > >
> > > > > > Ian.
> > > > >
> > > > > You are right Ian. The intention of this check seems to be to
> > ensure
> > > > > that enough slots are still left prior to picking up next skb.
> > But
> > > > > instead of invoking max_required_rx_slots with already received
> > skb,
> > > > > we may have to do skb_peek and invoke max_required_rx_slots on
> > skb
> > > > > that we are about to dequeue. Is there any better way?
> > > >
> > > > max_required_rx_slots doesn't take an skb as an argument, just a
> > vif.
> > > > It
> > > > returns the worst case number of slots for any skb on that vif.
> > > >
> > > > Ian.
> > >
> > > That’s true. What I meant is to peek to next skb and get vif from
> > that
> > > structure to invoke max_required_rx_slots. Don't you think we need to
> > > do like that?
> > 
> > Do you mean something other than max_required_rx_slots? Because the
> > prototype of that function is
> >         static int max_required_rx_slots(struct xenvif *vif)
> > i.e. it doesn't need an skb.
> > 
> > I think it is acceptable to check for the worst case number of slots.
> > That's what we do e.g. in xen_netbk_rx_ring_full
> > 
> > Using skb_peek might work too though, assuming all the locking etc is
> > ok o
> 
> I want to use max_required_rx_slots only. So the code will look somewhat like this.
> 
> skb=skb_peek(&netbk->rx_queue);
> if(skb == NULL)
> 	break;
> vif=netdev_priv(skb->dev);

Oh, I see why you need the skb now!

> /*Filled the batch queue?*/
> If(count + max_required_rx_slots(vif) >= XEN_NETIF_RX_RING_SIZE)
> 	break;

You need to to finally dequeue the skb here.

> 
> 
> > -- this is a private queue so I think it is probably ok. Rather than
> > calculating the number of slots in xen_netbk_rx_action you probably
> > want
> > to remember the value from the call to xen_netbk_count_skb_slots in
> > start_xmit. Perhaps by stashing it in skb->cb? (see NETFRONT_SKB_CB for
> > an example of how to do this)
> > 
> > Ian.
> 
> Ok. I will look into this as well. This will definitely save some
> cycles in xen_netbk_rx_action. Apart from that calculations we already
> discussed should work fine right?

I think so. Proof in the pudding and all that ;-)

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 14:42:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 14:42:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4C8U-0000Cl-M1; Wed, 22 Aug 2012 14:42:22 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T4C8S-0000CU-M4
	for xen-devel@lists.xensource.com; Wed, 22 Aug 2012 14:42:20 +0000
Received: from [85.158.143.35:53519] by server-3.bemta-4.messagelabs.com id
	94/8E-09529-CCFE4305; Wed, 22 Aug 2012 14:42:20 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1345646538!13193740!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTAzMzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26310 invoked from network); 22 Aug 2012 14:42:18 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 14:42:18 -0000
X-IronPort-AV: E=Sophos;i="4.77,808,1336348800"; d="scan'208";a="14128934"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 14:41:34 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 22 Aug 2012 15:41:34 +0100
Date: Wed, 22 Aug 2012 15:41:12 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Jan Beulich <JBeulich@suse.com>
In-Reply-To: <503504FE0200007800096F08@nat28.tlf.novell.com>
Message-ID: <alpine.DEB.2.02.1208221540570.15568@kaball.uk.xensource.com>
References: <1345133009-21941-1-git-send-email-konrad.wilk@oracle.com>
	<1345133009-21941-3-git-send-email-konrad.wilk@oracle.com>
	<alpine.DEB.2.02.1208171835010.15568@kaball.uk.xensource.com>
	<20120820141305.GA2713@phenom.dumpdata.com>
	<20120821172732.GA23715@phenom.dumpdata.com>
	<20120821190317.GA13035@phenom.dumpdata.com>
	<503504FE0200007800096F08@nat28.tlf.novell.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] Q:pt_base in COMPAT mode offset by two pages.
 Was:Re: [PATCH 02/11] xen/x86: Use memblock_reserve for sensitive areas.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 22 Aug 2012, Jan Beulich wrote:
> >> The toolstack works fine - so launching 32-bit guests either
> >> under a 32-bit hypervisor or 64-bit works fine:
> >> ] domainbuilder: detail: xc_dom_alloc_segment:   page tables  : 0xcf805000 -> 
> > 0xcf885000  (pfn 0xf805 + 0x80 pages)
> >> [    0.000000] PT: cf805000 (f805000)
> >> 
> > 
> > And this patch on top of the others fixes this..
> 
> I didn't look at this in too close detail, but I started to get
> afraid that you might be making the code dependent on
> many hypervisor implementation details. And should the
> above turn out to be a bug in the hypervisor, I hope your
> kernel side changes won't make it impossible to fix that bug.

I agree

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 14:46:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 14:46:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4CCm-0000PT-Bb; Wed, 22 Aug 2012 14:46:48 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T4CCk-0000PN-9s
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 14:46:46 +0000
Received: from [85.158.139.83:19358] by server-11.bemta-5.messagelabs.com id
	0D/CF-29296-5D0F4305; Wed, 22 Aug 2012 14:46:45 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-182.messagelabs.com!1345646805!18237729!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11522 invoked from network); 22 Aug 2012 14:46:45 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-182.messagelabs.com with SMTP;
	22 Aug 2012 14:46:45 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 22 Aug 2012 15:46:44 +0100
Message-Id: <50350D1F0200007800096F9C@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Wed, 22 Aug 2012 15:47:27 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Christoph Egger" <Christoph.Egger@amd.com>
References: <1345454240.28762.18.camel@zakaz.uk.xensource.com>
	<50323640020000780009662A@nat28.tlf.novell.com>
	<50338D02.8050009@amd.com>
	<5033B0070200007800096B4D@nat28.tlf.novell.com>
	<5034B907.7020506@amd.com>
In-Reply-To: <5034B907.7020506@amd.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] RTC patch
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 22.08.12 at 12:48, Christoph Egger <Christoph.Egger@amd.com> wrote:
> On 08/21/12 15:57, Jan Beulich wrote:
> 
>>>>> On 21.08.12 at 15:28, Christoph Egger <Christoph.Egger@amd.com> wrote:
>>> On 08/20/12 13:06, Jan Beulich wrote:
>>>
>>>>>>> On 20.08.12 at 11:17, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>>>>>     * fix high change rate to CMOS RTC periodic interrupt causing
>>>>>       guest wall clock time to lag (possible fix outlined, needs to be
>>>>>       put in patch form and thoroughly reviewed/tested for unwanted
>>>>>       side effects, Jan Beulich)
>>>>
>>>> Patch was posted, but no comments or approval to commit so far.
>>>> Also, reportedly the patch only improves the situation, it doesn't
>>>> completely eliminate the problem. For the moment I'm out of ideas,
>>>> though, and hence would hope some others could help here.
>>>
>>>
>>> Can you point me to the patch (or resend it to me), please?
>>> I have some trouble with getting XP Mode in Windows 7 (nested
>>> virtualization) booting and figured out it uses the RTC.
>>> I want to give this patch a try.
>> 
>> http://lists.xen.org/archives/html/xen-devel/2012-08/msg01303.html 
> 
> When booting Windows 7 I get a crash due to a NULL pointer dereference
> in xen/common/spinlock.c:45.
> It looks like the spin lock is not initialized.

I rather think NULL gets passed from pt_update_irq() to
rtc_periodic_interrupt(). Yet rtc.c's sole call to
create_periodic_time() clearly passes non-NULL. Oh,
hpet_set_timer() can pass a literal 8 (which I didn't spot
grepping for RTC_IRQ) - could you refine the check in
pt_update_irq()

    else if ( irq == RTC_IRQ )

to read

    else if ( irq == RTC_IRQ && pt_priv )

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 14:48:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 14:48:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4CE4-0000Tb-Qg; Wed, 22 Aug 2012 14:48:08 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T4CE3-0000TS-5t
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 14:48:07 +0000
Received: from [85.158.139.83:31091] by server-10.bemta-5.messagelabs.com id
	A0/AE-13125-621F4305; Wed, 22 Aug 2012 14:48:06 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-5.tower-182.messagelabs.com!1345646885!29550188!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4695 invoked from network); 22 Aug 2012 14:48:05 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-5.tower-182.messagelabs.com with SMTP;
	22 Aug 2012 14:48:05 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 22 Aug 2012 15:48:05 +0100
Message-Id: <50350D710200007800096F9F@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Wed, 22 Aug 2012 15:48:49 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andres Lagar-Cavilla" <andreslc@gridcentric.ca>
References: <84a23ae853a53e39ebd1.1345580037@xdev.gridcentric.ca>
	<5035070F0200007800096F57@nat28.tlf.novell.com>
	<F0E0A8D1-8BC1-4E1C-BEE3-516C9C592E29@gridcentric.ca>
In-Reply-To: <F0E0A8D1-8BC1-4E1C-BEE3-516C9C592E29@gridcentric.ca>
Mime-Version: 1.0
Content-Disposition: inline
Cc: tim@xen.org, keir@xen.org, Andres Lagar-Cavilla <andres@lagarcavilla.org>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] Fix shared entry status for grant copy
 operation on paged out gfn
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 22.08.12 at 16:32, Andres Lagar-Cavilla <andreslc@gridcentric.ca> wrote:

> On Aug 22, 2012, at 10:21 AM, Jan Beulich wrote:
> 
>>>>> On 21.08.12 at 22:13, Andres Lagar-Cavilla <andres@lagarcavilla.org> wrote:
>>> xen/common/grant_table.c |  33 ++++++++++++++++++++++-----------
>>> 1 files changed, 22 insertions(+), 11 deletions(-)
>>> 
>>> 
>>> The unwind path was not clearing the shared entry status bits. This was
>>> BSOD-ing guests on network activity under certain configurations.
>>> 
>>> Also:
>>> * sed the fixup method name to signal it's related to grant copy.
>>> * use atomic clear flag ops during fixup.
>> 
>> Is that last thing really needed? I remember having looked at
>> these non-atomic operations too a little while back, and came to
>> the conclusion that probably the authors intentionally coded it
>> that way.
> 
> Due to some obscure property of transitive grants? All other flag 
> clearing/setting in shared grant entries by the hypervisor is done atomically 
> (GTF_transfer_completed being an exception).
> 
> From my p.o.v. there is no downside. But I am not 100% certain and I can 
> back it off.

Only if they read the thread and respond with an explanation.
After all, the change is certainly not wrong, just slightly slowing
things down.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 14:50:39 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 14:50:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4CGL-0000dZ-BW; Wed, 22 Aug 2012 14:50:29 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T4CGJ-0000dG-VE
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 14:50:28 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1345647021!8790601!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26654 invoked from network); 22 Aug 2012 14:50:21 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-27.messagelabs.com with SMTP;
	22 Aug 2012 14:50:21 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 22 Aug 2012 15:50:20 +0100
Message-Id: <50350DF70200007800096FB6@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Wed, 22 Aug 2012 15:51:03 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Konrad Rzeszutek Wilk" <konrad@kernel.org>
References: <patchbomb.1345579710@phenom.dumpdata.com>
	<8ed3eef706710c9c476a.1345579712@phenom.dumpdata.com>
	<503505710200007800096F0B@nat28.tlf.novell.com>
	<20120822142457.GA31341@phenom.dumpdata.com>
In-Reply-To: <20120822142457.GA31341@phenom.dumpdata.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 2 of 4] get_page_type: Print out extra
 information when failing to get page_type
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 22.08.12 at 16:24, Konrad Rzeszutek Wilk <konrad@kernel.org> wrote:
> On Wed, Aug 22, 2012 at 03:14:41PM +0100, Jan Beulich wrote:
>> >>> On 21.08.12 at 22:08, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
>> > # HG changeset patch
>> > # User Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
>> > # Date 1345579709 14400
>> > # Node ID 8ed3eef706710c9c476a8d984bfb2861d92bedfb
>> > # Parent  635917c6dac4ab8748572fcbeb3e745428684e15
>> > get_page_type: Print out extra information when failing to get page_type.
>> > 
>> > When any reference to __get_page_type is called and it fails, we get:
>> > 
>> > (XEN) mm.c:2429:d0 Bad type (saw 1400000000000002 != exp 7000000000000000) 
>> > for mfn 10e392 (pfn 1bf6c)
>> > 
>> > with this patch we get some extra details such as:
>> > (XEN) debug.c:127:d0 cr3: 10d80b000, searching for 10e392
>> > (XEN) debug.c:111:d0 cr3(10d80b000) has mfn(10e392) in level PUD/L3: 
> [258][272]
>> > (XEN) debug.c:111:d0 cr3(10d80b000) has mfn(10e392) in level PMD/L2: 
> [258][511][511]
>> > (XEN) debug.c:111:d0 cr3(10d80b000) has mfn(10e392) in level PGD/L4: [272]
>> > (XEN) debug.c:111:d0 cr3(10d80b000) has mfn(10e392) in level PUD/L3: 
> [511][511]
>> > 
>> > where it actually is in the pagetable of the guest. This is useful
>> > b/c we can figure out where it is, and use that to figure out where
>> > the OS thinks it is.
>> 
>> In addition to my earlier reply, I also think that the printing
>> should be done at info level, so that nothing would get
>> additionally printed without special command line options.
> 
> I will be more than happy to make those changes. However I think
> you are feeling that this is not really that useful? Perhaps if I spiced
> it up by also dumping what those 'types' are and added some
> rudimentary logic troubleshooting (it's an L2 type, and you are trying to
> add it as an L1 entry as writeable, and so on...)

No, it doesn't need to go that far I think. My main concern is that
if something like that should get added, it should reduce the need
to consult the sources to understand the messages. Currently I'm
rather suspecting the inverse.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 15:01:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 15:01:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4CQU-0000uo-GO; Wed, 22 Aug 2012 15:00:58 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <attilio.rao@citrix.com>) id 1T4CQS-0000uj-6o
	for xen-devel@lists.xensource.com; Wed, 22 Aug 2012 15:00:56 +0000
Received: from [85.158.138.51:37212] by server-12.bemta-3.messagelabs.com id
	05/6D-04073-724F4305; Wed, 22 Aug 2012 15:00:55 +0000
X-Env-Sender: attilio.rao@citrix.com
X-Msg-Ref: server-5.tower-174.messagelabs.com!1345647654!27532430!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTAzMzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13320 invoked from network); 22 Aug 2012 15:00:54 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-5.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 15:00:54 -0000
X-IronPort-AV: E=Sophos;i="4.77,808,1336348800"; d="scan'208";a="14129447"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 15:00:53 +0000
Received: from [10.80.3.221] (10.80.3.221) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 22 Aug 2012 16:00:52 +0100
Message-ID: <5034F0FD.8040902@citrix.com>
Date: Wed, 22 Aug 2012 15:47:25 +0100
From: Attilio Rao <attilio.rao@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US;
	rv:1.9.1.16) Gecko/20111110 Icedove/3.0.11
MIME-Version: 1.0
To: Thomas Gleixner <tglx@linutronix.de>
References: <1345580561-8506-1-git-send-email-attilio.rao@citrix.com>
	<alpine.LFD.2.02.1208212315330.2856@ionos>
	<20120822135753.GA30964@phenom.dumpdata.com>
	<alpine.LFD.2.02.1208221618380.2856@ionos>
In-Reply-To: <alpine.LFD.2.02.1208221618380.2856@ionos>
Cc: "x86@kernel.org" <x86@kernel.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"mingo@redhat.com" <mingo@redhat.com>, "hpa@zytor.com" <hpa@zytor.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [PATCH v2 0/5] X86/XEN: Merge
 x86_init.paging.pagetable_setup_start and
 x86_init.paging.pagetable_setup_done setup functions and document its
 semantic
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 22/08/12 15:19, Thomas Gleixner wrote:
> On Wed, 22 Aug 2012, Konrad Rzeszutek Wilk wrote:
>    
>> On Tue, Aug 21, 2012 at 11:22:03PM +0200, Thomas Gleixner wrote:
>>      
>>> On Tue, 21 Aug 2012, Attilio Rao wrote:
>>>        
>>>> Differences with v1:
>>>> - The patch series is re-arranged in a way that helps reviews, following
>>>>    a plan by Thomas Gleixner
>>>> - The PVOPS nomenclature is not used as it is not correct
>>>> - The front-end message is adjusted with feedback by Thomas Gleixner,
>>>>    Stefano Stabellini and Konrad Rzeszutek Wilk
>>>>          
>>> This is much simpler to read and review. Just have a look at the
>>> diffstats of the two series:
>>>
>>>   6 files changed,  9 insertions(+),  8 deletions(-)
>>>   6 files changed, 11 insertions(+),  9 deletions(-)
>>>   5 files changed, 50 insertions(+),  2 deletions(-)
>>>   6 files changed,  2 insertions(+), 65 deletions(-)
>>>   1 files changed,  5 insertions(+),  0 deletions(-)
>>>
>>> versus
>>>
>>>   6 files changed, 10 insertions(+),  9 deletions(-)
>>>   6 files changed, 11 insertions(+), 11 deletions(-)
>>>   5 files changed,  3 insertions(+),  3 deletions(-)
>>>   6 files changed,  4 insertions(+), 20 deletions(-)
>>>   1 files changed,  5 insertions(+),  0 deletions(-)
>>>
>>> The overall result is basically the same, but it's way simpler to look
>>> at obvious and well done patches than checking whether a subtle copy
>>> and paste bug happened in 3/5 of the first version. Copy and paste is
>>> the #1 cause for subtle bugs. :)
>>>
>>> I'm waiting for the ack of Xen folks before taking it into tip.
>>>        
>> I've some extra patches that modify the new "paging_init" in the Xen
>> code that I am going to propose for v3.7 - so will have some merge
>> conflicts. Let me figure that out and also run this set of patches
>> (and also the previous one .. which I think you didn't have a
>> chance to look since you were on vacation?) on an overnight
>>      
> Which previous one ?
>    

This one:
https://lkml.org/lkml/2012/8/21/369

but I would like to repost the patch series, dropping the references to 
PVOPS in the commit logs. I will do so right now, so please wait for 
the new patch series.

Attilio

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	"mingo@redhat.com" <mingo@redhat.com>, "hpa@zytor.com" <hpa@zytor.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [PATCH v2 0/5] X86/XEN: Merge
 x86_init.paging.pagetable_setup_start and
 x86_init.paging.pagetable_setup_done setup functions and document its
 semantic
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 22/08/12 15:19, Thomas Gleixner wrote:
> On Wed, 22 Aug 2012, Konrad Rzeszutek Wilk wrote:
>    
>> On Tue, Aug 21, 2012 at 11:22:03PM +0200, Thomas Gleixner wrote:
>>      
>>> On Tue, 21 Aug 2012, Attilio Rao wrote:
>>>        
>>>> Differences with v1:
>>>> - The patch series is re-arranged in a way that helps review, following
>>>>    a plan by Thomas Gleixner
>>>> - The PVOPS nomenclature is no longer used, as it is not correct
>>>> - The front-end message is adjusted based on feedback from Thomas Gleixner,
>>>>    Stefano Stabellini and Konrad Rzeszutek Wilk
>>>>          
>>> This is much simpler to read and review. Just have a look at the
>>> diffstats of the two series:
>>>
>>>   6 files changed,  9 insertions(+),  8 deletions(-)
>>>   6 files changed, 11 insertions(+),  9 deletions(-)
>>>   5 files changed, 50 insertions(+),  2 deletions(-)
>>>   6 files changed,  2 insertions(+), 65 deletions(-)
>>>   1 files changed,  5 insertions(+),  0 deletions(-)
>>>
>>> versus
>>>
>>>   6 files changed, 10 insertions(+),  9 deletions(-)
>>>   6 files changed, 11 insertions(+), 11 deletions(-)
>>>   5 files changed,  3 insertions(+),  3 deletions(-)
>>>   6 files changed,  4 insertions(+), 20 deletions(-)
>>>   1 files changed,  5 insertions(+),  0 deletions(-)
>>>
>>> The overall result is basically the same, but it's way simpler to look
>>> at obvious and well-done patches than to check whether a subtle copy
>>> and paste bug crept into patch 3/5 of the first version. Copy and paste
>>> is the #1 cause of subtle bugs. :)
>>>
>>> I'm waiting for the ack of Xen folks before taking it into tip.
>>>        
>> I've some extra patches that modify the new "paging_init" in the Xen
>> code that I am going to propose for v3.7 - so there will be some merge
>> conflicts. Let me figure that out and also run this set of patches
>> (and also the previous one .. which I think you didn't have a
>> chance to look at since you were on vacation?) on an overnight
>>      
> Which previous one ?
>    

This one:
https://lkml.org/lkml/2012/8/21/369

but I would like to repost the patch series without the references to 
PVOPS in the commit logs. I will do so right now, so please wait for 
the new series.

Attilio

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 15:03:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 15:03:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4CT5-00011L-2N; Wed, 22 Aug 2012 15:03:39 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Christoph.Egger@amd.com>) id 1T4CT3-00011C-NZ
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 15:03:38 +0000
Received: from [85.158.139.83:49362] by server-8.bemta-5.messagelabs.com id
	AF/9C-02481-9C4F4305; Wed, 22 Aug 2012 15:03:37 +0000
X-Env-Sender: Christoph.Egger@amd.com
X-Msg-Ref: server-14.tower-182.messagelabs.com!1345647815!25250451!1
X-Originating-IP: [213.199.154.143]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6205 invoked from network); 22 Aug 2012 15:03:36 -0000
Received: from db3ehsobe005.messaging.microsoft.com (HELO
	db3outboundpool.messaging.microsoft.com) (213.199.154.143)
	by server-14.tower-182.messagelabs.com with AES128-SHA encrypted SMTP;
	22 Aug 2012 15:03:36 -0000
Received: from mail72-db3-R.bigfish.com (10.3.81.235) by
	DB3EHSOBE002.bigfish.com (10.3.84.22) with Microsoft SMTP Server id
	14.1.225.23; Wed, 22 Aug 2012 15:03:35 +0000
Received: from mail72-db3 (localhost [127.0.0.1])	by mail72-db3-R.bigfish.com
	(Postfix) with ESMTP id 86D98380129;
	Wed, 22 Aug 2012 15:03:35 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:163.181.249.108; KIP:(null); UIP:(null);
	IPV:NLI; H:ausb3twp01.amd.com; RD:none; EFVD:NLI
X-SpamScore: -3
X-BigFish: VPS-3(zzbb2dI98dI1432Izz1202hzz8275bh8275dhz2dh668h839hd25he5bhf0ah107ah1155h)
Received: from mail72-db3 (localhost.localdomain [127.0.0.1]) by mail72-db3
	(MessageSwitch) id 1345647813331437_16228;
	Wed, 22 Aug 2012 15:03:33 +0000 (UTC)
Received: from DB3EHSMHS008.bigfish.com (unknown [10.3.81.250])	by
	mail72-db3.bigfish.com (Postfix) with ESMTP id 43F1C1A0049;
	Wed, 22 Aug 2012 15:03:33 +0000 (UTC)
Received: from ausb3twp01.amd.com (163.181.249.108) by
	DB3EHSMHS008.bigfish.com (10.3.87.108) with Microsoft SMTP Server id
	14.1.225.23; Wed, 22 Aug 2012 15:03:30 +0000
X-WSS-ID: 0M95WHS-01-5IR-02
X-M-MSG: 
Received: from sausexedgep01.amd.com (sausexedgep01-ext.amd.com
	[163.181.249.72])	(using TLSv1 with cipher AES128-SHA (128/128
	bits))	(No
	client certificate requested)	by ausb3twp01.amd.com (Axway MailGate
	3.8.1)
	with ESMTP id 2FDF6102805E;	Wed, 22 Aug 2012 10:03:27 -0500 (CDT)
Received: from SAUSEXDAG02.amd.com (163.181.55.2) by sausexedgep01.amd.com
	(163.181.36.54) with Microsoft SMTP Server (TLS) id 8.3.192.1;
	Wed, 22 Aug 2012 10:03:40 -0500
Received: from storexhtp01.amd.com (172.24.4.3) by sausexdag02.amd.com
	(163.181.55.2) with Microsoft SMTP Server (TLS) id 14.1.323.3;
	Wed, 22 Aug 2012 10:03:27 -0500
Received: from donner.osrc.amd.com (165.204.15.15) by storexhtp01.amd.com
	(172.24.4.3) with Microsoft SMTP Server id 8.3.213.0; Wed, 22 Aug 2012
	11:03:26 -0400
Message-ID: <5034F458.2090609@amd.com>
Date: Wed, 22 Aug 2012 17:01:44 +0200
From: Christoph Egger <Christoph.Egger@amd.com>
User-Agent: Mozilla/5.0 (X11; NetBSD amd64;
	rv:11.0) Gecko/20120404 Thunderbird/11.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1345454240.28762.18.camel@zakaz.uk.xensource.com>
	<50323640020000780009662A@nat28.tlf.novell.com>
	<50338D02.8050009@amd.com>
	<5033B0070200007800096B4D@nat28.tlf.novell.com>
	<5034B907.7020506@amd.com>
	<50350D1F0200007800096F9C@nat28.tlf.novell.com>
In-Reply-To: <50350D1F0200007800096F9C@nat28.tlf.novell.com>
X-OriginatorOrg: amd.com
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] RTC patch
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/22/12 16:47, Jan Beulich wrote:

>>>> On 22.08.12 at 12:48, Christoph Egger <Christoph.Egger@amd.com> wrote:
>> On 08/21/12 15:57, Jan Beulich wrote:
>>
>>>>>> On 21.08.12 at 15:28, Christoph Egger <Christoph.Egger@amd.com> wrote:
>>>> On 08/20/12 13:06, Jan Beulich wrote:
>>>>
>>>>>>>> On 20.08.12 at 11:17, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>>>>>>     * fix high change rate to CMOS RTC periodic interrupt causing
>>>>>>       guest wall clock time to lag (possible fix outlined, needs to be
>>>>>>       put in patch form and thoroughly reviewed/tested for unwanted
>>>>>>       side effects, Jan Beulich)
>>>>>
>>>>> Patch was posted, but no comments or approval to commit so far.
>>>>> Also, reportedly the patch only improves the situation, it doesn't
>>>>> completely eliminate the problem. For the moment I'm out of ideas,
>>>>> though, and hence would hope some others could help here.
>>>>
>>>>
>>>> Can you point me to the patch (or resend it to me), please?
>>>> I have some trouble with getting XP Mode in Windows 7 (nested
>>>> virtualization) booting and figured out it uses the RTC.
>>>> I want to give this patch a try.
>>>
>>> http://lists.xen.org/archives/html/xen-devel/2012-08/msg01303.html 
>>
>> When booting Windows 7 I get a crash due to a NULL pointer dereference
>> in xen/common/spinlock.c:45.
>> It looks like the spin lock is not initialized.
> 
> I rather think NULL gets passed from pt_update_irq() to
> rtc_periodic_interrupt(). Yet rtc.c's sole call to
> create_periodic_time() clearly passes non-NULL. Oh,
> hpet_set_timer() can pass a literal 8 (which I didn't spot
> grepping for RTC_IRQ) - could you refine the check in
> pt_update_irq()
> 
>     else if ( irq == RTC_IRQ )
> 
> to read
> 
>     else if ( irq == RTC_IRQ && pt_priv )

Yes, Windows 7 boots with this change.

Christoph


-- 
---to satisfy European Law for business letters:
Advanced Micro Devices GmbH
Einsteinring 24, 85689 Dornach b. Muenchen
Geschaeftsfuehrer: Alberto Bozzo
Sitz: Dornach, Gemeinde Aschheim, Landkreis Muenchen
Registergericht Muenchen, HRB Nr. 43632


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 15:13:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 15:13:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4Cby-0001EE-58; Wed, 22 Aug 2012 15:12:50 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T4Cbw-0001E6-Vz
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 15:12:49 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1345648361!6629496!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13973 invoked from network); 22 Aug 2012 15:12:41 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-5.tower-27.messagelabs.com with SMTP;
	22 Aug 2012 15:12:41 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 22 Aug 2012 16:12:40 +0100
Message-Id: <503513330200007800096FE9@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Wed, 22 Aug 2012 16:13:23 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Christoph Egger" <Christoph.Egger@amd.com>
References: <1345454240.28762.18.camel@zakaz.uk.xensource.com>
	<50323640020000780009662A@nat28.tlf.novell.com>
	<50338D02.8050009@amd.com>
	<5033B0070200007800096B4D@nat28.tlf.novell.com>
	<5034B907.7020506@amd.com>
	<50350D1F0200007800096F9C@nat28.tlf.novell.com>
	<5034F458.2090609@amd.com>
In-Reply-To: <5034F458.2090609@amd.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] RTC patch
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 22.08.12 at 17:01, Christoph Egger <Christoph.Egger@amd.com> wrote:
> On 08/22/12 16:47, Jan Beulich wrote:
> 
>>>>> On 22.08.12 at 12:48, Christoph Egger <Christoph.Egger@amd.com> wrote:
>>> On 08/21/12 15:57, Jan Beulich wrote:
>>>
>>>>>>> On 21.08.12 at 15:28, Christoph Egger <Christoph.Egger@amd.com> wrote:
>>>>> On 08/20/12 13:06, Jan Beulich wrote:
>>>>>
>>>>>>>>> On 20.08.12 at 11:17, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>>>>>>>     * fix high change rate to CMOS RTC periodic interrupt causing
>>>>>>>       guest wall clock time to lag (possible fix outlined, needs to be
>>>>>>>       put in patch form and thoroughly reviewed/tested for unwanted
>>>>>>>       side effects, Jan Beulich)
>>>>>>
>>>>>> Patch was posted, but no comments or approval to commit so far.
>>>>>> Also, reportedly the patch only improves the situation, it doesn't
>>>>>> completely eliminate the problem. For the moment I'm out of ideas,
>>>>>> though, and hence would hope some others could help here.
>>>>>
>>>>>
>>>>> Can you point me to the patch (or resend it to me), please?
>>>>> I have some trouble with getting XP Mode in Windows 7 (nested
>>>>> virtualization) booting and figured out it uses the RTC.
>>>>> I want to give this patch a try.
>>>>
>>>> http://lists.xen.org/archives/html/xen-devel/2012-08/msg01303.html 
>>>
>>> When booting Windows 7 I get a crash due to a NULL pointer dereference
>>> in xen/common/spinlock.c:45.
>>> It looks like the spin lock is not initialized.
>> 
>> I rather think NULL gets passed from pt_update_irq() to
>> rtc_periodic_interrupt(). Yet rtc.c's sole call to
>> create_periodic_time() clearly passes non-NULL. Oh,
>> hpet_set_timer() can pass a literal 8 (which I didn't spot
>> grepping for RTC_IRQ) - could you refine the check in
>> pt_update_irq()
>> 
>>     else if ( irq == RTC_IRQ )
>> 
>> to read
>> 
>>     else if ( irq == RTC_IRQ && pt_priv )
> 
> Yes, Windows 7 boots with this change.

Good, thanks. But I suppose it has no effect on the problem you
wanted to try this for?

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 15:17:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 15:17:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4Cg9-0001SG-43; Wed, 22 Aug 2012 15:17:09 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T4Cg6-0001S9-R0
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 15:17:07 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1345648605!9752046!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11815 invoked from network); 22 Aug 2012 15:16:46 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-27.messagelabs.com with SMTP;
	22 Aug 2012 15:16:46 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 22 Aug 2012 16:16:45 +0100
Message-Id: <503514280200007800096FF4@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Wed, 22 Aug 2012 16:17:28 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__PartECDD8518.0__="
Subject: [Xen-devel] [PATCH, v2] x86/HVM: assorted RTC emulation adjustments
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__PartECDD8518.0__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

- don't call rtc_timer_update() on REG_A writes when the value didn't
  change (doing the call always was reported to cause wall clock time
  lagging with the JVM running on Windows)
- don't call rtc_timer_update() on REG_B writes at all
- only call alarm_timer_update() on REG_B writes when relevant bits
  change
- only call check_update_timer() on REG_B writes when SET changes
- instead properly handle AF and PF when the guest is not also setting
  AIE/PIE respectively (for UF this was already the case, only a
  comment was slightly inaccurate)
- raise the RTC IRQ not only when UIE gets set while UF was already
  set, but generalize this to cover AIE and PIE as well
- properly mask off bit 7 when retrieving the hour values in
  alarm_timer_update(), and properly use RTC_HOURS_ALARM's bit 7 when
  converting from 12- to 24-hour value
- also handle the two other possible clock bases
- use RTC_* names in a couple of places where literal numbers were used
  so far

Note that this only improves the situation described in the thread at
http://lists.xen.org/archives/html/xen-devel/2012-08/msg00664.html,
there are still problems with the emulation when invoked at a high rate
as described there.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Small adjustment to the pt_update_irq() change, avoiding calling
    the RTC code for an HPET event (which may also pass 8 [aka RTC_IRQ]
    as create_periodic_time()'s "irq" argument).

--- a/xen/arch/x86/hvm/rtc.c
+++ b/xen/arch/x86/hvm/rtc.c
@@ -50,11 +50,24 @@ static void rtc_set_time(RTCState *s);
 static inline int from_bcd(RTCState *s, int a);
 static inline int convert_hour(RTCState *s, int hour);
 
-static void rtc_periodic_cb(struct vcpu *v, void *opaque)
+static void rtc_toggle_irq(RTCState *s)
+{
+    struct domain *d = vrtc_domain(s);
+
+    ASSERT(spin_is_locked(&s->lock));
+    s->hw.cmos_data[RTC_REG_C] |= RTC_IRQF;
+    hvm_isa_irq_deassert(d, RTC_IRQ);
+    hvm_isa_irq_assert(d, RTC_IRQ);
+}
+
+void rtc_periodic_interrupt(void *opaque)
 {
     RTCState *s = opaque;
+
     spin_lock(&s->lock);
-    s->hw.cmos_data[RTC_REG_C] |= 0xc0;
+    s->hw.cmos_data[RTC_REG_C] |= RTC_PF;
+    if ( s->hw.cmos_data[RTC_REG_B] & RTC_PIE )
+        rtc_toggle_irq(s);
     spin_unlock(&s->lock);
 }
 
@@ -68,19 +81,25 @@ static void rtc_timer_update(RTCState *s
     ASSERT(spin_is_locked(&s->lock));
 
     period_code = s->hw.cmos_data[RTC_REG_A] & RTC_RATE_SELECT;
-    if ( (period_code != 0) && (s->hw.cmos_data[RTC_REG_B] & RTC_PIE) )
+    switch ( s->hw.cmos_data[RTC_REG_A] & RTC_DIV_CTL )
     {
-        if ( period_code <= 2 )
+    case RTC_REF_CLCK_32KHZ:
+        if ( (period_code != 0) && (period_code <= 2) )
             period_code += 7;
-
-        period = 1 << (period_code - 1); /* period in 32 Khz cycles */
-        period = DIV_ROUND((period * 1000000000ULL), 32768); /* period in ns */
-        create_periodic_time(v, &s->pt, period, period, RTC_IRQ,
-                             rtc_periodic_cb, s);
-    }
-    else
-    {
+        /* fall through */
+    case RTC_REF_CLCK_1MHZ:
+    case RTC_REF_CLCK_4MHZ:
+        if ( period_code != 0 )
+        {
+            period = 1 << (period_code - 1); /* period in 32 Khz cycles */
+            period = DIV_ROUND(period * 1000000000ULL, 32768); /* in ns */
+            create_periodic_time(v, &s->pt, period, period, RTC_IRQ, NULL, s);
+            break;
+        }
+        /* fall through */
+    default:
         destroy_periodic_time(&s->pt);
+        break;
     }
 }
 
@@ -102,7 +121,7 @@ static void check_update_timer(RTCState 
         guest_usec = get_localtime_us(d) % USEC_PER_SEC;
         if (guest_usec >= (USEC_PER_SEC - 244))
         {
-            /* RTC is in update cycle when enabling UIE */
+            /* RTC is in update cycle */
             s->hw.cmos_data[RTC_REG_A] |= RTC_UIP;
             next_update_time = (USEC_PER_SEC - guest_usec) * NS_PER_USEC;
             expire_time = NOW() + next_update_time;
@@ -144,7 +163,6 @@ static void rtc_update_timer(void *opaqu
 static void rtc_update_timer2(void *opaque)
 {
     RTCState *s = opaque;
-    struct domain *d = vrtc_domain(s);
 
     spin_lock(&s->lock);
     if (!(s->hw.cmos_data[RTC_REG_B] & RTC_SET))
@@ -152,11 +170,7 @@ static void rtc_update_timer2(void *opaq
         s->hw.cmos_data[RTC_REG_C] |= RTC_UF;
         s->hw.cmos_data[RTC_REG_A] &= ~RTC_UIP;
         if ((s->hw.cmos_data[RTC_REG_B] & RTC_UIE))
-        {
-            s->hw.cmos_data[RTC_REG_C] |= RTC_IRQF;
-            hvm_isa_irq_deassert(d, RTC_IRQ);
-            hvm_isa_irq_assert(d, RTC_IRQ);
-        }
+            rtc_toggle_irq(s);
         check_update_timer(s);
     }
     spin_unlock(&s->lock);
@@ -175,21 +189,18 @@ static void alarm_timer_update(RTCState 
 
     stop_timer(&s->alarm_timer);
 
-    if ((s->hw.cmos_data[RTC_REG_B] & RTC_AIE) &&
-            !(s->hw.cmos_data[RTC_REG_B] & RTC_SET))
+    if ( !(s->hw.cmos_data[RTC_REG_B] & RTC_SET) )
     {
         s->current_tm = gmtime(get_localtime(d));
         rtc_copy_date(s);
 
         alarm_sec = from_bcd(s, s->hw.cmos_data[RTC_SECONDS_ALARM]);
         alarm_min = from_bcd(s, s->hw.cmos_data[RTC_MINUTES_ALARM]);
-        alarm_hour = from_bcd(s, s->hw.cmos_data[RTC_HOURS_ALARM]);
-        alarm_hour = convert_hour(s, alarm_hour);
+        alarm_hour = convert_hour(s, s->hw.cmos_data[RTC_HOURS_ALARM]);
 
         cur_sec = from_bcd(s, s->hw.cmos_data[RTC_SECONDS]);
         cur_min = from_bcd(s, s->hw.cmos_data[RTC_MINUTES]);
-        cur_hour = from_bcd(s, s->hw.cmos_data[RTC_HOURS]);
-        cur_hour = convert_hour(s, cur_hour);
+        cur_hour = convert_hour(s, s->hw.cmos_data[RTC_HOURS]);
 
         next_update_time = USEC_PER_SEC - (get_localtime_us(d) % USEC_PER_SEC);
         next_update_time = next_update_time * NS_PER_USEC + NOW();
@@ -343,7 +354,6 @@ static void alarm_timer_update(RTCState 
 static void rtc_alarm_cb(void *opaque)
 {
     RTCState *s = opaque;
-    struct domain *d = vrtc_domain(s);
 
     spin_lock(&s->lock);
     if (!(s->hw.cmos_data[RTC_REG_B] & RTC_SET))
@@ -351,11 +361,7 @@ static void rtc_alarm_cb(void *opaque)
         s->hw.cmos_data[RTC_REG_C] |= RTC_AF;
         /* alarm interrupt */
         if (s->hw.cmos_data[RTC_REG_B] & RTC_AIE)
-        {
-            s->hw.cmos_data[RTC_REG_C] |= RTC_IRQF;
-            hvm_isa_irq_deassert(d, RTC_IRQ);
-            hvm_isa_irq_assert(d, RTC_IRQ);
-        }
+            rtc_toggle_irq(s);
         alarm_timer_update(s);
     }
     spin_unlock(&s->lock);
@@ -365,6 +371,7 @@ static int rtc_ioport_write(void *opaque
 {
     RTCState *s = opaque;
     struct domain *d = vrtc_domain(s);
+    uint32_t orig, mask;
 
     spin_lock(&s->lock);
 
@@ -382,6 +389,7 @@ static int rtc_ioport_write(void *opaque
         return 0;
     }
 
+    orig = s->hw.cmos_data[s->hw.cmos_index];
     switch ( s->hw.cmos_index )
    {
     case RTC_SECONDS_ALARM:
@@ -405,9 +413,9 @@ static int rtc_ioport_write(void *opaque
         break;
     case RTC_REG_A:
         /* UIP bit is read only */
-        s->hw.cmos_data[RTC_REG_A] = (data & ~RTC_UIP) |
-            (s->hw.cmos_data[RTC_REG_A] & RTC_UIP);
-        rtc_timer_update(s);
+        s->hw.cmos_data[RTC_REG_A] = (data & ~RTC_UIP) | (orig & RTC_UIP);
+        if ( (data ^ orig) & (RTC_RATE_SELECT | RTC_DIV_CTL) )
+            rtc_timer_update(s);
         break;
     case RTC_REG_B:
         if ( data & RTC_SET )
@@ -415,7 +423,7 @@ static int rtc_ioport_write(void *opaque
             /* set mode: reset UIP mode */
             s->hw.cmos_data[RTC_REG_A] &= ~RTC_UIP;
             /* adjust cmos before stopping */
-            if (!(s->hw.cmos_data[RTC_REG_B] & RTC_SET))
+            if (!(orig & RTC_SET))
             {
                 s->current_tm = gmtime(get_localtime(d));
                 rtc_copy_date(s);
@@ -424,21 +432,26 @@ static int rtc_ioport_write(void *opaque
         else
         {
             /* if disabling set mode, update the time */
-            if ( s->hw.cmos_data[RTC_REG_B] & RTC_SET )
+            if ( orig & RTC_SET )
                 rtc_set_time(s);
         }
-        /* if the interrupt is already set when the interrupt become
-         * enabled, raise an interrupt immediately*/
-        if ((data & RTC_UIE) && !(s->hw.cmos_data[RTC_REG_B] & RTC_UIE))
-            if (s->hw.cmos_data[RTC_REG_C] & RTC_UF)
+        /*
+         * If the interrupt is already set when the interrupt becomes
+         * enabled, raise an interrupt immediately.
+         * NB: RTC_{A,P,U}IE == RTC_{A,P,U}F respectively.
+         */
+        for ( mask = RTC_UIE; mask <= RTC_PIE; mask <<= 1 )
+            if ( (data & mask) && !(orig & mask) &&
+                 (s->hw.cmos_data[RTC_REG_C] & mask) )
             {
-                hvm_isa_irq_deassert(d, RTC_IRQ);
-                hvm_isa_irq_assert(d, RTC_IRQ);
+                rtc_toggle_irq(s);
+                break;
             }
         s->hw.cmos_data[RTC_REG_B] = data;
-        rtc_timer_update(s);
-        check_update_timer(s);
-        alarm_timer_update(s);
+        if ( (data ^ orig) & RTC_SET )
+            check_update_timer(s);
+        if ( (data ^ orig) & (RTC_24H | RTC_DM_BINARY | RTC_SET) )
+            alarm_timer_update(s);
         break;
     case RTC_REG_C:
     case RTC_REG_D:
@@ -453,7 +466,7 @@ static int rtc_ioport_write(void *opaque
 
 static inline int to_bcd(RTCState *s, int a)
 {
-    if ( s->hw.cmos_data[RTC_REG_B] & 0x04 )
+    if ( s->hw.cmos_data[RTC_REG_B] & RTC_DM_BINARY )
         return a;
     else
         return ((a / 10) << 4) | (a % 10);
@@ -461,7 +474,7 @@ static inline int to_bcd(RTCState *s, in
 
 static inline int from_bcd(RTCState *s, int a)
 {
-    if ( s->hw.cmos_data[RTC_REG_B] & 0x04 )
+    if ( s->hw.cmos_data[RTC_REG_B] & RTC_DM_BINARY )
         return a;
     else
         return ((a >> 4) * 10) + (a & 0x0f);
@@ -469,12 +482,14 @@ static inline int from_bcd(RTCState *s, 
 
 /* Hours in 12 hour mode are in 1-12 range, not 0-11.
  * So we need convert it before using it*/
-static inline int convert_hour(RTCState *s, int hour)
+static inline int convert_hour(RTCState *s, int raw)
 {
+    int hour = from_bcd(s, raw & 0x7f);
+
     if (!(s->hw.cmos_data[RTC_REG_B] & RTC_24H))
     {
         hour %= 12;
-        if (s->hw.cmos_data[RTC_HOURS] & 0x80)
+        if (raw & 0x80)
             hour += 12;
     }
     return hour;
@@ -493,8 +508,7 @@ static void rtc_set_time(RTCState *s)
     
     tm->tm_sec = from_bcd(s, s->hw.cmos_data[RTC_SECONDS]);
     tm->tm_min = from_bcd(s, s->hw.cmos_data[RTC_MINUTES]);
-    tm->tm_hour = from_bcd(s, s->hw.cmos_data[RTC_HOURS] & 0x7f);
-    tm->tm_hour = convert_hour(s, tm->tm_hour);
+    tm->tm_hour = convert_hour(s, s->hw.cmos_data[RTC_HOURS]);
     tm->tm_wday = from_bcd(s, s->hw.cmos_data[RTC_DAY_OF_WEEK]);
     tm->tm_mday = from_bcd(s, s->hw.cmos_data[RTC_DAY_OF_MONTH]);
     tm->tm_mon = from_bcd(s, s->hw.cmos_data[RTC_MONTH]) - 1;
--- a/xen/arch/x86/hvm/vpt.c
+++ b/xen/arch/x86/hvm/vpt.c
@@ -22,6 +22,7 @@
 #include <asm/hvm/vpt.h>
 #include <asm/event.h>
 #include <asm/apic.h>
+#include <asm/mc146818rtc.h>
 
 #define mode_is(d, name) \
     ((d)->arch.hvm_domain.params[HVM_PARAM_TIMER_MODE] == HVMPTM_##name)
@@ -218,6 +219,7 @@ void pt_update_irq(struct vcpu *v)
     struct periodic_time *pt, *temp, *earliest_pt = NULL;
     uint64_t max_lag = -1ULL;
     int irq, is_lapic;
+    void *pt_priv;
 
     spin_lock(&v->arch.hvm_vcpu.tm_lock);
 
@@ -251,13 +253,14 @@ void pt_update_irq(struct vcpu *v)
     earliest_pt->irq_issued = 1;
     irq = earliest_pt->irq;
     is_lapic = (earliest_pt->source == PTSRC_lapic);
+    pt_priv = earliest_pt->priv;
 
     spin_unlock(&v->arch.hvm_vcpu.tm_lock);
 
     if ( is_lapic )
-    {
         vlapic_set_irq(vcpu_vlapic(v), irq, 0);
-    }
+    else if ( irq == RTC_IRQ && pt_priv )
+        rtc_periodic_interrupt(pt_priv);
     else
     {
         hvm_isa_irq_deassert(v->domain, irq);
--- a/xen/include/asm-x86/hvm/vpt.h
+++ b/xen/include/asm-x86/hvm/vpt.h
@@ -181,6 +181,7 @@ void rtc_migrate_timers(struct vcpu *v);
 void rtc_deinit(struct domain *d);
 void rtc_reset(struct domain *d);
 void rtc_update_clock(struct domain *d);
+void rtc_periodic_interrupt(void *);
 
 void pmtimer_init(struct vcpu *v);
 void pmtimer_deinit(struct domain *d);



--=__PartECDD8518.0__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__PartECDD8518.0__=--


From xen-devel-bounces@lists.xen.org Wed Aug 22 15:17:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 15:17:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4Cg9-0001SG-43; Wed, 22 Aug 2012 15:17:09 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T4Cg6-0001S9-R0
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 15:17:07 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1345648605!9752046!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11815 invoked from network); 22 Aug 2012 15:16:46 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-27.messagelabs.com with SMTP;
	22 Aug 2012 15:16:46 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 22 Aug 2012 16:16:45 +0100
Message-Id: <503514280200007800096FF4@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Wed, 22 Aug 2012 16:17:28 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__PartECDD8518.0__="
Subject: [Xen-devel] [PATCH, v2] x86/HVM: assorted RTC emulation adjustments
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__PartECDD8518.0__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

- don't call rtc_timer_update() on REG_A writes when the value didn't
  change (always doing the call was reported to cause wall clock time
  to lag with the JVM running on Windows)
- don't call rtc_timer_update() on REG_B writes at all
- only call alarm_timer_update() on REG_B writes when relevant bits
  change
- only call check_update_timer() on REG_B writes when SET changes
- instead properly handle AF and PF when the guest is not also setting
  AIE/PIE respectively (for UF this was already the case, only a
  comment was slightly inaccurate)
- raise the RTC IRQ not only when UIE gets set while UF was already
  set, but generalize this to cover AIE and PIE as well
- properly mask off bit 7 when retrieving the hour values in
  alarm_timer_update(), and properly use RTC_HOURS_ALARM's bit 7 when
  converting from 12- to 24-hour value
- also handle the two other possible clock bases
- use RTC_* names in a couple of places where literal numbers were used
  so far

Note that this only improves the situation described in the thread at
http://lists.xen.org/archives/html/xen-devel/2012-08/msg00664.html,
there are still problems with the emulation when invoked at a high rate
as described there.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Small adjustment to the pt_update_irq() change, avoiding calling
    the RTC code for an HPET event (which may also pass 8 [aka RTC_IRQ]
    as create_periodic_time()'s "irq" argument).

--- a/xen/arch/x86/hvm/rtc.c
+++ b/xen/arch/x86/hvm/rtc.c
@@ -50,11 +50,24 @@ static void rtc_set_time(RTCState *s);
 static inline int from_bcd(RTCState *s, int a);
 static inline int convert_hour(RTCState *s, int hour);
 
-static void rtc_periodic_cb(struct vcpu *v, void *opaque)
+static void rtc_toggle_irq(RTCState *s)
+{
+    struct domain *d = vrtc_domain(s);
+
+    ASSERT(spin_is_locked(&s->lock));
+    s->hw.cmos_data[RTC_REG_C] |= RTC_IRQF;
+    hvm_isa_irq_deassert(d, RTC_IRQ);
+    hvm_isa_irq_assert(d, RTC_IRQ);
+}
+
+void rtc_periodic_interrupt(void *opaque)
 {
     RTCState *s = opaque;
+
     spin_lock(&s->lock);
-    s->hw.cmos_data[RTC_REG_C] |= 0xc0;
+    s->hw.cmos_data[RTC_REG_C] |= RTC_PF;
+    if ( s->hw.cmos_data[RTC_REG_B] & RTC_PIE )
+        rtc_toggle_irq(s);
     spin_unlock(&s->lock);
 }
 
@@ -68,19 +81,25 @@ static void rtc_timer_update(RTCState *s
     ASSERT(spin_is_locked(&s->lock));
 
     period_code = s->hw.cmos_data[RTC_REG_A] & RTC_RATE_SELECT;
-    if ( (period_code != 0) && (s->hw.cmos_data[RTC_REG_B] & RTC_PIE) )
+    switch ( s->hw.cmos_data[RTC_REG_A] & RTC_DIV_CTL )
     {
-        if ( period_code <= 2 )
+    case RTC_REF_CLCK_32KHZ:
+        if ( (period_code != 0) && (period_code <= 2) )
             period_code += 7;
-
-        period = 1 << (period_code - 1); /* period in 32 Khz cycles */
-        period = DIV_ROUND((period * 1000000000ULL), 32768); /* period in ns */
-        create_periodic_time(v, &s->pt, period, period, RTC_IRQ,
-                             rtc_periodic_cb, s);
-    }
-    else
-    {
+        /* fall through */
+    case RTC_REF_CLCK_1MHZ:
+    case RTC_REF_CLCK_4MHZ:
+        if ( period_code != 0 )
+        {
+            period = 1 << (period_code - 1); /* period in 32 Khz cycles */
+            period = DIV_ROUND(period * 1000000000ULL, 32768); /* in ns */
+            create_periodic_time(v, &s->pt, period, period, RTC_IRQ, NULL, s);
+            break;
+        }
+        /* fall through */
+    default:
         destroy_periodic_time(&s->pt);
+        break;
     }
 }
 
@@ -102,7 +121,7 @@ static void check_update_timer(RTCState 
         guest_usec = get_localtime_us(d) % USEC_PER_SEC;
         if (guest_usec >= (USEC_PER_SEC - 244))
         {
-            /* RTC is in update cycle when enabling UIE */
+            /* RTC is in update cycle */
             s->hw.cmos_data[RTC_REG_A] |= RTC_UIP;
             next_update_time = (USEC_PER_SEC - guest_usec) * NS_PER_USEC;
             expire_time = NOW() + next_update_time;
@@ -144,7 +163,6 @@ static void rtc_update_timer(void *opaqu
 static void rtc_update_timer2(void *opaque)
 {
     RTCState *s = opaque;
-    struct domain *d = vrtc_domain(s);
 
     spin_lock(&s->lock);
     if (!(s->hw.cmos_data[RTC_REG_B] & RTC_SET))
@@ -152,11 +170,7 @@ static void rtc_update_timer2(void *opaq
         s->hw.cmos_data[RTC_REG_C] |= RTC_UF;
         s->hw.cmos_data[RTC_REG_A] &= ~RTC_UIP;
         if ((s->hw.cmos_data[RTC_REG_B] & RTC_UIE))
-        {
-            s->hw.cmos_data[RTC_REG_C] |= RTC_IRQF;
-            hvm_isa_irq_deassert(d, RTC_IRQ);
-            hvm_isa_irq_assert(d, RTC_IRQ);
-        }
+            rtc_toggle_irq(s);
         check_update_timer(s);
     }
     spin_unlock(&s->lock);
@@ -175,21 +189,18 @@ static void alarm_timer_update(RTCState 
 
     stop_timer(&s->alarm_timer);
 
-    if ((s->hw.cmos_data[RTC_REG_B] & RTC_AIE) &&
-            !(s->hw.cmos_data[RTC_REG_B] & RTC_SET))
+    if ( !(s->hw.cmos_data[RTC_REG_B] & RTC_SET) )
     {
         s->current_tm = gmtime(get_localtime(d));
         rtc_copy_date(s);
 
         alarm_sec = from_bcd(s, s->hw.cmos_data[RTC_SECONDS_ALARM]);
         alarm_min = from_bcd(s, s->hw.cmos_data[RTC_MINUTES_ALARM]);
-        alarm_hour = from_bcd(s, s->hw.cmos_data[RTC_HOURS_ALARM]);
-        alarm_hour = convert_hour(s, alarm_hour);
+        alarm_hour = convert_hour(s, s->hw.cmos_data[RTC_HOURS_ALARM]);
 
         cur_sec = from_bcd(s, s->hw.cmos_data[RTC_SECONDS]);
         cur_min = from_bcd(s, s->hw.cmos_data[RTC_MINUTES]);
-        cur_hour = from_bcd(s, s->hw.cmos_data[RTC_HOURS]);
-        cur_hour =3D convert_hour(s, cur_hour);
+        cur_hour =3D convert_hour(s, s->hw.cmos_data[RTC_HOURS]);
=20
         next_update_time =3D USEC_PER_SEC - (get_localtime_us(d) % =
USEC_PER_SEC);
         next_update_time =3D next_update_time * NS_PER_USEC + NOW();
@@ -343,7 +354,6 @@ static void alarm_timer_update(RTCState=20
 static void rtc_alarm_cb(void *opaque)
 {
     RTCState *s =3D opaque;
-    struct domain *d =3D vrtc_domain(s);
=20
     spin_lock(&s->lock);
     if (!(s->hw.cmos_data[RTC_REG_B] & RTC_SET))
@@ -351,11 +361,7 @@ static void rtc_alarm_cb(void *opaque)
         s->hw.cmos_data[RTC_REG_C] |=3D RTC_AF;
         /* alarm interrupt */
         if (s->hw.cmos_data[RTC_REG_B] & RTC_AIE)
-        {
-            s->hw.cmos_data[RTC_REG_C] |=3D RTC_IRQF;
-            hvm_isa_irq_deassert(d, RTC_IRQ);
-            hvm_isa_irq_assert(d, RTC_IRQ);
-        }
+            rtc_toggle_irq(s);
         alarm_timer_update(s);
     }
     spin_unlock(&s->lock);
@@ -365,6 +371,7 @@ static int rtc_ioport_write(void *opaque
 {
     RTCState *s =3D opaque;
     struct domain *d =3D vrtc_domain(s);
+    uint32_t orig, mask;
=20
     spin_lock(&s->lock);
=20
@@ -382,6 +389,7 @@ static int rtc_ioport_write(void *opaque
         return 0;
     }
=20
+    orig =3D s->hw.cmos_data[s->hw.cmos_index];
     switch ( s->hw.cmos_index )
     {
     case RTC_SECONDS_ALARM:
@@ -405,9 +413,9 @@ static int rtc_ioport_write(void *opaque
         break;
     case RTC_REG_A:
         /* UIP bit is read only */
-        s->hw.cmos_data[RTC_REG_A] =3D (data & ~RTC_UIP) |
-            (s->hw.cmos_data[RTC_REG_A] & RTC_UIP);
-        rtc_timer_update(s);
+        s->hw.cmos_data[RTC_REG_A] =3D (data & ~RTC_UIP) | (orig & =
RTC_UIP);
+        if ( (data ^ orig) & (RTC_RATE_SELECT | RTC_DIV_CTL) )
+            rtc_timer_update(s);
         break;
     case RTC_REG_B:
         if ( data & RTC_SET )
@@ -415,7 +423,7 @@ static int rtc_ioport_write(void *opaque
             /* set mode: reset UIP mode */
             s->hw.cmos_data[RTC_REG_A] &=3D ~RTC_UIP;
             /* adjust cmos before stopping */
-            if (!(s->hw.cmos_data[RTC_REG_B] & RTC_SET))
+            if (!(orig & RTC_SET))
             {
                 s->current_tm =3D gmtime(get_localtime(d));
                 rtc_copy_date(s);
@@ -424,21 +432,26 @@ static int rtc_ioport_write(void *opaque
         else
         {
             /* if disabling set mode, update the time */
-            if ( s->hw.cmos_data[RTC_REG_B] & RTC_SET )
+            if ( orig & RTC_SET )
                 rtc_set_time(s);
         }
-        /* if the interrupt is already set when the interrupt become
-         * enabled, raise an interrupt immediately*/
-        if ((data & RTC_UIE) && !(s->hw.cmos_data[RTC_REG_B] & RTC_UIE))
-            if (s->hw.cmos_data[RTC_REG_C] & RTC_UF)
+        /*
+         * If the interrupt is already set when the interrupt becomes
+         * enabled, raise an interrupt immediately.
+         * NB: RTC_{A,P,U}IE =3D=3D RTC_{A,P,U}F respectively.
+         */
+        for ( mask =3D RTC_UIE; mask <=3D RTC_PIE; mask <<=3D 1 )
+            if ( (data & mask) && !(orig & mask) &&
+                 (s->hw.cmos_data[RTC_REG_C] & mask) )
             {
-                hvm_isa_irq_deassert(d, RTC_IRQ);
-                hvm_isa_irq_assert(d, RTC_IRQ);
+                rtc_toggle_irq(s);
+                break;
             }
         s->hw.cmos_data[RTC_REG_B] =3D data;
-        rtc_timer_update(s);
-        check_update_timer(s);
-        alarm_timer_update(s);
+        if ( (data ^ orig) & RTC_SET )
+            check_update_timer(s);
+        if ( (data ^ orig) & (RTC_24H | RTC_DM_BINARY | RTC_SET) )
+            alarm_timer_update(s);
         break;
     case RTC_REG_C:
     case RTC_REG_D:
@@ -453,7 +466,7 @@ static int rtc_ioport_write(void *opaque
=20
 static inline int to_bcd(RTCState *s, int a)
 {
-    if ( s->hw.cmos_data[RTC_REG_B] & 0x04 )
+    if ( s->hw.cmos_data[RTC_REG_B] & RTC_DM_BINARY )
         return a;
     else
         return ((a / 10) << 4) | (a % 10);
@@ -461,7 +474,7 @@ static inline int to_bcd(RTCState *s, in
=20
 static inline int from_bcd(RTCState *s, int a)
 {
-    if ( s->hw.cmos_data[RTC_REG_B] & 0x04 )
+    if ( s->hw.cmos_data[RTC_REG_B] & RTC_DM_BINARY )
         return a;
     else
         return ((a >> 4) * 10) + (a & 0x0f);
@@ -469,12 +482,14 @@ static inline int from_bcd(RTCState *s,=20
=20
 /* Hours in 12 hour mode are in 1-12 range, not 0-11.
  * So we need convert it before using it*/
-static inline int convert_hour(RTCState *s, int hour)
+static inline int convert_hour(RTCState *s, int raw)
 {
+    int hour =3D from_bcd(s, raw & 0x7f);
+
     if (!(s->hw.cmos_data[RTC_REG_B] & RTC_24H))
     {
         hour %=3D 12;
-        if (s->hw.cmos_data[RTC_HOURS] & 0x80)
+        if (raw & 0x80)
             hour +=3D 12;
     }
     return hour;
@@ -493,8 +508,7 @@ static void rtc_set_time(RTCState *s)
    =20
     tm->tm_sec =3D from_bcd(s, s->hw.cmos_data[RTC_SECONDS]);
     tm->tm_min =3D from_bcd(s, s->hw.cmos_data[RTC_MINUTES]);
-    tm->tm_hour =3D from_bcd(s, s->hw.cmos_data[RTC_HOURS] & 0x7f);
-    tm->tm_hour =3D convert_hour(s, tm->tm_hour);
+    tm->tm_hour =3D convert_hour(s, s->hw.cmos_data[RTC_HOURS]);
     tm->tm_wday =3D from_bcd(s, s->hw.cmos_data[RTC_DAY_OF_WEEK]);
     tm->tm_mday =3D from_bcd(s, s->hw.cmos_data[RTC_DAY_OF_MONTH]);
     tm->tm_mon =3D from_bcd(s, s->hw.cmos_data[RTC_MONTH]) - 1;
--- a/xen/arch/x86/hvm/vpt.c
+++ b/xen/arch/x86/hvm/vpt.c
@@ -22,6 +22,7 @@
 #include <asm/hvm/vpt.h>
 #include <asm/event.h>
 #include <asm/apic.h>
+#include <asm/mc146818rtc.h>
=20
 #define mode_is(d, name) \
     ((d)->arch.hvm_domain.params[HVM_PARAM_TIMER_MODE] =3D=3D HVMPTM_##nam=
e)
@@ -218,6 +219,7 @@ void pt_update_irq(struct vcpu *v)
     struct periodic_time *pt, *temp, *earliest_pt =3D NULL;
     uint64_t max_lag =3D -1ULL;
     int irq, is_lapic;
+    void *pt_priv;
=20
     spin_lock(&v->arch.hvm_vcpu.tm_lock);
=20
@@ -251,13 +253,14 @@ void pt_update_irq(struct vcpu *v)
     earliest_pt->irq_issued =3D 1;
     irq =3D earliest_pt->irq;
     is_lapic =3D (earliest_pt->source =3D=3D PTSRC_lapic);
+    pt_priv =3D earliest_pt->priv;
=20
     spin_unlock(&v->arch.hvm_vcpu.tm_lock);
=20
     if ( is_lapic )
-    {
         vlapic_set_irq(vcpu_vlapic(v), irq, 0);
-    }
+    else if ( irq =3D=3D RTC_IRQ && pt_priv )
+        rtc_periodic_interrupt(pt_priv);
     else
     {
         hvm_isa_irq_deassert(v->domain, irq);
--- a/xen/include/asm-x86/hvm/vpt.h
+++ b/xen/include/asm-x86/hvm/vpt.h
@@ -181,6 +181,7 @@ void rtc_migrate_timers(struct vcpu *v);
 void rtc_deinit(struct domain *d);
 void rtc_reset(struct domain *d);
 void rtc_update_clock(struct domain *d);
+void rtc_periodic_interrupt(void *);
=20
 void pmtimer_init(struct vcpu *v);
 void pmtimer_deinit(struct domain *d);



--=__PartECDD8518.0__=
Content-Type: text/plain; name="x86-hvm-rtc.patch"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment; filename="x86-hvm-rtc.patch"

x86/HVM: assorted RTC emulation adjustments

- don't call rtc_timer_update() on REG_A writes when the value didn't
  change (doing the call always was reported to cause wall clock time
  lagging with the JVM running on Windows)
- don't call rtc_timer_update() on REG_B writes at all
- only call alarm_timer_update() on REG_B writes when relevant bits
  change
- only call check_update_timer() on REG_B writes when SET changes
- instead properly handle AF and PF when the guest is not also setting
  AIE/PIE respectively (for UF this was already the case, only a
  comment was slightly inaccurate)
- raise the RTC IRQ not only when UIE gets set while UF was already
  set, but generalize this to cover AIE and PIE as well
- properly mask off bit 7 when retrieving the hour values in
  alarm_timer_update(), and properly use RTC_HOURS_ALARM's bit 7 when
  converting from 12- to 24-hour value
- also handle the two other possible clock bases
- use RTC_* names in a couple of places where literal numbers were used
  so far

Note that this only improves the situation described in the thread at
http://lists.xen.org/archives/html/xen-devel/2012-08/msg00664.html;
there are still problems with the emulation when invoked at a high rate
as described there.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Small adjustment to the pt_update_irq() change, avoiding calling
    the RTC code for a HPET event (which also may pass 8 [aka RTC_IRQ]
    as create_periodic_time()'s "irq" argument).
--=__PartECDD8518.0__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__PartECDD8518.0__=--


From xen-devel-bounces@lists.xen.org Wed Aug 22 15:24:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 15:24:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4CnO-0001d0-0g; Wed, 22 Aug 2012 15:24:38 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T4CnN-0001cv-21
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 15:24:37 +0000
Received: from [85.158.138.51:51858] by server-11.bemta-3.messagelabs.com id
	02/E8-23152-4B9F4305; Wed, 22 Aug 2012 15:24:36 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-4.tower-174.messagelabs.com!1345649071!27426126!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTAzMzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16205 invoked from network); 22 Aug 2012 15:24:32 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-4.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 15:24:32 -0000
X-IronPort-AV: E=Sophos;i="4.77,808,1336348800"; d="scan'208";a="14130284"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 15:24:31 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 22 Aug 2012 16:24:31 +0100
Date: Wed, 22 Aug 2012 16:24:09 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad@kernel.org>
In-Reply-To: <20120822142705.GB31341@phenom.dumpdata.com>
Message-ID: <alpine.DEB.2.02.1208221604550.15568@kaball.uk.xensource.com>
References: <patchbomb.1345579710@phenom.dumpdata.com>
	<74bedb086c5b72447262.1345579713@phenom.dumpdata.com>
	<503506000200007800096F30@nat28.tlf.novell.com>
	<20120822142705.GB31341@phenom.dumpdata.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	xen-devel <xen-devel@lists.xen.org>, Jan Beulich <JBeulich@suse.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 3 of 4] xen/pagetables: Document that all of
 the initial regions are mapped
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 22 Aug 2012, Konrad Rzeszutek Wilk wrote:
> On Wed, Aug 22, 2012 at 03:17:04PM +0100, Jan Beulich wrote:
> > >>> On 21.08.12 at 22:08, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> > > # HG changeset patch
> > > # User Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > > # Date 1345579709 14400
> > > # Node ID 74bedb086c5b72447262e087c0218b89f8bc9140
> > > # Parent  8ed3eef706710c9c476a8d984bfb2861d92bedfb
> > > xen/pagetables: Document that all of the initial regions are mapped.
> > > 
> > > The documentation states that the layout of the initial region looks
> > > as so:
> > >    a. relocated kernel image
> > >    b. initial ram disk              [mod_start, mod_len]
> > >    c. list of allocated page frames [mfn_list, nr_pages]
> > >       (unless relocated due to XEN_ELFNOTE_INIT_P2M)
> > >    d. start_info_t structure        [register ESI (x86)]
> > >    e. bootstrap page tables         [pt_base, CR3 (x86)]
> > >    f. bootstrap stack               [register ESP (x86)]
> > > 
> > > But it does not clarify that the virtual address to all of
> > > those areas is initially mapped by the pt_base (or CR3).
> > > Lets fix that.
> > 
> > To me this is already being said by "This the order of bootstrap
> > elements in the initial virtual region".
> 
> Stefano wanted to make sure we have it written as clear as possible.
> I am going to be a good little submitter and let you guys sort this
> one out  :-)

Let's step back for a second and see if I understand correctly: your
patch 6/11 removes the call to xen_map_identity_early on x86_64 because
"Xen provides us with all the memory mapped that we need to function".

The original xen_map_identity_early maps up to max_pfn, that is
xen_start_info->nr_pages, so I am assuming that what you meant is that
"Xen provides us with all the memory already mapped in the bootstrap
page tables".
And that is not written anywhere in the Xen headers.

Therefore, if I understand the issue correctly, I would add the
following to xen.h:

"On x86_64 the bootstrap page tables map all the pages assigned to the
domain."



> > 
> > Jan
> > 
> > > Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > > 
> > > diff -r 8ed3eef70671 -r 74bedb086c5b xen/include/public/xen.h
> > > --- a/xen/include/public/xen.h	Tue Aug 21 16:08:29 2012 -0400
> > > +++ b/xen/include/public/xen.h	Tue Aug 21 16:08:29 2012 -0400
> > > @@ -675,6 +675,9 @@ typedef struct shared_info shared_info_t
> > >   *  8. There is guaranteed to be at least 512kB padding after the final
> > >   *     bootstrap element. If necessary, the bootstrap virtual region is
> > >   *     extended by an extra 4MB to ensure this.
> > > + *
> > > + *  NOTE: The initial virtual region (3a -> 3f) are all mapped by the initial
> > > + *  pagetables [pt_base, CR3 (x86)].
> > >   */
> > >  
> > >  #define MAX_GUEST_CMDLINE 1024


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 15:24:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 15:24:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4CnO-0001d0-0g; Wed, 22 Aug 2012 15:24:38 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T4CnN-0001cv-21
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 15:24:37 +0000
Received: from [85.158.138.51:51858] by server-11.bemta-3.messagelabs.com id
	02/E8-23152-4B9F4305; Wed, 22 Aug 2012 15:24:36 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-4.tower-174.messagelabs.com!1345649071!27426126!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTAzMzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16205 invoked from network); 22 Aug 2012 15:24:32 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-4.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 15:24:32 -0000
X-IronPort-AV: E=Sophos;i="4.77,808,1336348800"; d="scan'208";a="14130284"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 15:24:31 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 22 Aug 2012 16:24:31 +0100
Date: Wed, 22 Aug 2012 16:24:09 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad@kernel.org>
In-Reply-To: <20120822142705.GB31341@phenom.dumpdata.com>
Message-ID: <alpine.DEB.2.02.1208221604550.15568@kaball.uk.xensource.com>
References: <patchbomb.1345579710@phenom.dumpdata.com>
	<74bedb086c5b72447262.1345579713@phenom.dumpdata.com>
	<503506000200007800096F30@nat28.tlf.novell.com>
	<20120822142705.GB31341@phenom.dumpdata.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	xen-devel <xen-devel@lists.xen.org>, Jan Beulich <JBeulich@suse.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 3 of 4] xen/pagetables: Document that all of
 the initial regions are mapped
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 22 Aug 2012, Konrad Rzeszutek Wilk wrote:
> On Wed, Aug 22, 2012 at 03:17:04PM +0100, Jan Beulich wrote:
> > >>> On 21.08.12 at 22:08, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> > > # HG changeset patch
> > > # User Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > > # Date 1345579709 14400
> > > # Node ID 74bedb086c5b72447262e087c0218b89f8bc9140
> > > # Parent  8ed3eef706710c9c476a8d984bfb2861d92bedfb
> > > xen/pagetables: Document that all of the initial regions are mapped.
> > > 
> > > The documentation states that the layout of the initial region looks
> > > as so:
> > >    a. relocated kernel image
> > >    b. initial ram disk              [mod_start, mod_len]
> > >    c. list of allocated page frames [mfn_list, nr_pages]
> > >       (unless relocated due to XEN_ELFNOTE_INIT_P2M)
> > >    d. start_info_t structure        [register ESI (x86)]
> > >    e. bootstrap page tables         [pt_base, CR3 (x86)]
> > >    f. bootstrap stack               [register ESP (x86)]
> > > 
> > > But it does not clarify that the virtual address to all of
> > > those areas is initially mapped by the pt_base (or CR3).
> > > Let's fix that.
> > 
> > To me this is already being said by "This is the order of bootstrap
> > elements in the initial virtual region".
> 
> Stefano wanted to make sure we have it written as clearly as possible.
> I am going to be a good little submitter and let you guys sort this
> one out  :-)

Let's step back for a second and see if I understand correctly: your
patch 6/11 removes the call to xen_map_identity_early on x86_64 because
"Xen provides us with all the memory mapped that we need to function".

The original xen_map_identity_early maps up to max_pfn, that is
xen_start_info->nr_pages, so I am assuming that what you meant is that
"Xen provides us with all the memory already mapped in the bootstrap
page tables".
And that is not written anywhere in the Xen headers.

Therefore, if I understand the issue correctly, I would add the
following to xen.h:

"On x86_64 the bootstrap page tables map all the pages assigned to the
domain."



> > 
> > Jan
> > 
> > > Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > > 
> > > diff -r 8ed3eef70671 -r 74bedb086c5b xen/include/public/xen.h
> > > --- a/xen/include/public/xen.h	Tue Aug 21 16:08:29 2012 -0400
> > > +++ b/xen/include/public/xen.h	Tue Aug 21 16:08:29 2012 -0400
> > > @@ -675,6 +675,9 @@ typedef struct shared_info shared_info_t
> > >   *  8. There is guaranteed to be at least 512kB padding after the final
> > >   *     bootstrap element. If necessary, the bootstrap virtual region is
> > >   *     extended by an extra 4MB to ensure this.
> > > + *
> > > + *  NOTE: The initial virtual regions (3a -> 3f) are all mapped by the initial
> > > + *  pagetables [pt_base, CR3 (x86)].
> > >   */
> > >  
> > >  #define MAX_GUEST_CMDLINE 1024


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 15:28:12 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 15:28:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4Cqd-0001nX-Qx; Wed, 22 Aug 2012 15:27:59 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T4Cqc-0001nO-E0
	for xen-devel@lists.xensource.com; Wed, 22 Aug 2012 15:27:58 +0000
Received: from [85.158.138.51:18095] by server-8.bemta-3.messagelabs.com id
	3D/29-29583-D7AF4305; Wed, 22 Aug 2012 15:27:57 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-10.tower-174.messagelabs.com!1345647766!23358709!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTAzMzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15502 invoked from network); 22 Aug 2012 15:02:47 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-10.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 15:02:47 -0000
X-IronPort-AV: E=Sophos;i="4.77,808,1336348800"; d="scan'208";a="14129486"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 15:02:18 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 22 Aug 2012 16:02:18 +0100
Date: Wed, 22 Aug 2012 16:01:56 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <20120822140315.GC30964@phenom.dumpdata.com>
Message-ID: <alpine.DEB.2.02.1208221601080.15568@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1206221644370.27860@kaball.uk.xensource.com>
	<1340381685-22529-1-git-send-email-stefano.stabellini@eu.citrix.com>
	<alpine.DEB.2.02.1206221722040.27860@kaball.uk.xensource.com>
	<20120709141915.GB9580@phenom.dumpdata.com>
	<alpine.DEB.2.02.1207131821250.23783@kaball.uk.xensource.com>
	<20120716151441.GD552@phenom.dumpdata.com>
	<alpine.DEB.2.02.1207181851010.23783@kaball.uk.xensource.com>
	<alpine.DEB.2.02.1208221217290.15568@kaball.uk.xensource.com>
	<20120822140315.GC30964@phenom.dumpdata.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH] xen/events: fix unmask_evtchn for PV on HVM
 guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 22 Aug 2012, Konrad Rzeszutek Wilk wrote:
> On Wed, Aug 22, 2012 at 12:20:09PM +0100, Stefano Stabellini wrote:
> > Konrad,
> > I cannot see this patch anywhere in your trees. Did I miss it?
> > Or maybe it just fell through the cracks?
> 
> Fell through the cracks.. Was there a new version of this? Can you
> send me a subset of the patches you want me to pick up?

OK, I'll send out a small patch series based on v3.6-rc2

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 15:35:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 15:35:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4CxE-0001yx-N8; Wed, 22 Aug 2012 15:34:48 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T4CxE-0001ys-2C
	for xen-devel@lists.xensource.com; Wed, 22 Aug 2012 15:34:48 +0000
Received: from [85.158.143.35:30565] by server-2.bemta-4.messagelabs.com id
	E4/5F-21239-71CF4305; Wed, 22 Aug 2012 15:34:47 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1345649684!13205292!1
X-Originating-IP: [141.146.126.236]
X-SpamReason: No, hits=0.0 required=7.0 tests=UNPARSEABLE_RELAY
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22274 invoked from network); 22 Aug 2012 15:34:45 -0000
Received: from acsinet14.oracle.com (HELO acsinet14.oracle.com)
	(141.146.126.236)
	by server-16.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 22 Aug 2012 15:34:45 -0000
Received: from acsinet15.oracle.com (acsinet15.oracle.com [141.146.126.227])
	by acsinet14.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7MF7ZxM007350
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK)
	for <xen-devel@lists.xensource.com>; Wed, 22 Aug 2012 15:07:35 GMT
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7MF7Wid012266
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 22 Aug 2012 15:07:32 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7MF7Vmr010382
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 22 Aug 2012 15:07:32 GMT
Received: from abhmt112.oracle.com (abhmt112.oracle.com [141.146.116.64])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7MF7Viv030942; Wed, 22 Aug 2012 10:07:31 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 22 Aug 2012 08:07:31 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 13DC14031E; Wed, 22 Aug 2012 10:57:31 -0400 (EDT)
Date: Wed, 22 Aug 2012 10:57:31 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20120822145730.GI30964@phenom.dumpdata.com>
References: <1345133009-21941-1-git-send-email-konrad.wilk@oracle.com>
	<1345133009-21941-3-git-send-email-konrad.wilk@oracle.com>
	<alpine.DEB.2.02.1208171835010.15568@kaball.uk.xensource.com>
	<20120820141305.GA2713@phenom.dumpdata.com>
	<20120821172732.GA23715@phenom.dumpdata.com>
	<20120821190317.GA13035@phenom.dumpdata.com>
	<503504FE0200007800096F08@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <503504FE0200007800096F08@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet15.oracle.com [141.146.126.227]
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] Q:pt_base in COMPAT mode offset by two pages.
 Was:Re: [PATCH 02/11] xen/x86: Use memblock_reserve for sensitive areas.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Aug 22, 2012 at 03:12:46PM +0100, Jan Beulich wrote:
> >>> On 21.08.12 at 21:03, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> > On Tue, Aug 21, 2012 at 01:27:32PM -0400, Konrad Rzeszutek Wilk wrote:
> >> Jan, I thought something odd. Part of this code replaces this:
> >> 
> >> 	memblock_reserve(__pa(xen_start_info->mfn_list),
> >> 		xen_start_info->pt_base - xen_start_info->mfn_list);
> >> 
> >> with a more region-by-region approach. What I found out is that if I boot
> >> this as a 32-bit guest under a 64-bit hypervisor, the xen_start_info->pt_base
> >> is actually wrong.
> >> 
> >> Specifically this is what bootup says:
> >> 
> >> (good working case - 32bit hypervisor with 32-bit dom0):
> >> (XEN)  Loaded kernel: c1000000->c1a23000
> >> (XEN)  Init. ramdisk: c1a23000->cf730e00
> >> (XEN)  Phys-Mach map: cf731000->cf831000
> >> (XEN)  Start info:    cf831000->cf83147c
> >> (XEN)  Page tables:   cf832000->cf8b5000
> >> (XEN)  Boot stack:    cf8b5000->cf8b6000
> >> (XEN)  TOTAL:         c0000000->cfc00000
> >> 
> >> [    0.000000] PT: cf832000 (f832000)
> >> [    0.000000] Reserving PT: f832000->f8b5000
> >> 
> >> And with a 64-bit hypervisor:
> >> 
> >> XEN) VIRTUAL MEMORY ARRANGEMENT:
> >> (XEN)  Loaded kernel: 00000000c1000000->00000000c1a23000
> >> (XEN)  Init. ramdisk: 00000000c1a23000->00000000cf730e00
> >> (XEN)  Phys-Mach map: 00000000cf731000->00000000cf831000
> >> (XEN)  Start info:    00000000cf831000->00000000cf8314b4
> >> (XEN)  Page tables:   00000000cf832000->00000000cf8b6000
> >> (XEN)  Boot stack:    00000000cf8b6000->00000000cf8b7000
> >> (XEN)  TOTAL:         00000000c0000000->00000000cfc00000
> >> (XEN)  ENTRY ADDRESS: 00000000c16bb22c
> >> 
> >> [    0.000000] PT: cf834000 (f834000)
> >> [    0.000000] Reserving PT: f834000->f8b8000
> >> 
> >> So the pt_base is offset by two pages. And looking at c/s 13257
> >> it's not clear to me why this two-page offset was added?
> 
> Honestly, without looking through this in greater detail I don't
> recall. That'll have to wait possibly until after the summit, though.

I figured it was baked into the API, so it was not really worth pursuing
a fix; better to just leave it as is.

> I can't exclude that this is just a forgotten leftover from an earlier
> version of the patch. I would have thought this was to account
> for the L4 tables that the guest doesn't see, but
> - this should only be a single page
> - this should then also (or rather instead) be subtracted from
>   nr_pt_frames
> so that's likely not it.
> 
> >> The toolstack works fine - so launching 32-bit guests either
> >> under a 32-bit hypervisor or 64-bit works fine:
> >> ] domainbuilder: detail: xc_dom_alloc_segment:   page tables  : 0xcf805000 -> 
> > 0xcf885000  (pfn 0xf805 + 0x80 pages)
> >> [    0.000000] PT: cf805000 (f805000)
> >> 
> > 
> > And this patch on top of the others fixes this..
> 
> I didn't look at this in too close detail, but I started to get
> afraid that you might be making the code dependent on
> many hypervisor implementation details. And should the
> above turn out to be a bug in the hypervisor, I hope your
> kernel side changes won't make it impossible to fix that bug.

Actually they will work OK. I've tested it with and without the
hypervisor bug-fix and it worked nicely.

But this "make the memblock_reserve easier to see" effort is getting
out of hand :-(

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 15:35:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 15:35:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4Cxt-00021G-4Q; Wed, 22 Aug 2012 15:35:29 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T4Cxr-00021B-So
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 15:35:28 +0000
Received: from [85.158.143.35:9887] by server-1.bemta-4.messagelabs.com id
	2F/92-07754-F3CF4305; Wed, 22 Aug 2012 15:35:27 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1345649713!4612792!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9666 invoked from network); 22 Aug 2012 15:35:15 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-9.tower-21.messagelabs.com with SMTP;
	22 Aug 2012 15:35:15 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 22 Aug 2012 16:35:12 +0100
Message-Id: <5035187C020000780009700E@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Wed, 22 Aug 2012 16:35:56 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Stefano Stabellini" <stefano.stabellini@eu.citrix.com>
References: <patchbomb.1345579710@phenom.dumpdata.com>
	<74bedb086c5b72447262.1345579713@phenom.dumpdata.com>
	<503506000200007800096F30@nat28.tlf.novell.com>
	<20120822142705.GB31341@phenom.dumpdata.com>
	<alpine.DEB.2.02.1208221604550.15568@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1208221604550.15568@kaball.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Konrad Rzeszutek Wilk <konrad@kernel.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 3 of 4] xen/pagetables: Document that all of
 the initial regions are mapped
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 22.08.12 at 17:24, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
wrote:
> On Wed, 22 Aug 2012, Konrad Rzeszutek Wilk wrote:
>> On Wed, Aug 22, 2012 at 03:17:04PM +0100, Jan Beulich wrote:
>> > >>> On 21.08.12 at 22:08, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
>> > > # HG changeset patch
>> > > # User Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
>> > > # Date 1345579709 14400
>> > > # Node ID 74bedb086c5b72447262e087c0218b89f8bc9140
>> > > # Parent  8ed3eef706710c9c476a8d984bfb2861d92bedfb
>> > > xen/pagetables: Document that all of the initial regions are mapped.
>> > > 
>> > > The documentation states that the layout of the initial region looks
>> > > as so:
>> > >    a. relocated kernel image
>> > >    b. initial ram disk              [mod_start, mod_len]
>> > >    c. list of allocated page frames [mfn_list, nr_pages]
>> > >       (unless relocated due to XEN_ELFNOTE_INIT_P2M)
>> > >    d. start_info_t structure        [register ESI (x86)]
>> > >    e. bootstrap page tables         [pt_base, CR3 (x86)]
>> > >    f. bootstrap stack               [register ESP (x86)]
>> > > 
>> > > But it does not clarify that the virtual address to all of
>> > > those areas is initially mapped by the pt_base (or CR3).
>> > > Let's fix that.
>> > 
>> > To me this is already being said by "This is the order of bootstrap
>> > elements in the initial virtual region".
>> 
>> Stefano wanted to make sure we have it written as clear as possible.
>> I am going to be a good little submitter and let you guys sort this
>> one out  :-)
> 
> Let's step back for a second and see if I understand correctly: your
> patch 6/11 removes the call to xen_map_identity_early on x86_64 because
> "Xen provides us with all the memory mapped that we need to function".
> 
> The original xen_map_identity_early maps up to max_pfn, that is
> xen_start_info->nr_pages, so I am assuming that what you meant is that
> "Xen provides us with all the memory already mapped in the bootstrap
> page tables".
> And that is not written anywhere in the Xen headers.
> 
> Therefore, if I understand the issue correctly, I would add the
> following to xen.h:
> 
> "On x86_64 the bootstrap page tables map all the pages assigned to the
> domain."

That certainly is not the case (and can't be - remember that the
virtual space on x86-64 Linux'es initial mapping starts 2Gb from the
end of address space, so how could all memory possibly be mapped
[i.e. to what virtual addresses would those mappings be done]).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 15:35:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 15:35:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4Cxt-00021G-4Q; Wed, 22 Aug 2012 15:35:29 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T4Cxr-00021B-So
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 15:35:28 +0000
Received: from [85.158.143.35:9887] by server-1.bemta-4.messagelabs.com id
	2F/92-07754-F3CF4305; Wed, 22 Aug 2012 15:35:27 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1345649713!4612792!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9666 invoked from network); 22 Aug 2012 15:35:15 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-9.tower-21.messagelabs.com with SMTP;
	22 Aug 2012 15:35:15 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 22 Aug 2012 16:35:12 +0100
Message-Id: <5035187C020000780009700E@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Wed, 22 Aug 2012 16:35:56 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Stefano Stabellini" <stefano.stabellini@eu.citrix.com>
References: <patchbomb.1345579710@phenom.dumpdata.com>
	<74bedb086c5b72447262.1345579713@phenom.dumpdata.com>
	<503506000200007800096F30@nat28.tlf.novell.com>
	<20120822142705.GB31341@phenom.dumpdata.com>
	<alpine.DEB.2.02.1208221604550.15568@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1208221604550.15568@kaball.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Konrad Rzeszutek Wilk <konrad@kernel.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 3 of 4] xen/pagetables: Document that all of
 the initial regions are mapped
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 22.08.12 at 17:24, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
wrote:
> On Wed, 22 Aug 2012, Konrad Rzeszutek Wilk wrote:
>> On Wed, Aug 22, 2012 at 03:17:04PM +0100, Jan Beulich wrote:
>> > >>> On 21.08.12 at 22:08, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
>> > > # HG changeset patch
>> > > # User Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
>> > > # Date 1345579709 14400
>> > > # Node ID 74bedb086c5b72447262e087c0218b89f8bc9140
>> > > # Parent  8ed3eef706710c9c476a8d984bfb2861d92bedfb
>> > > xen/pagetables: Document that all of the initial regions are mapped.
>> > > 
>> > > The documentation states that the layout of the initial region looks
>> > > as so:
>> > >    a. relocated kernel image
>> > >    b. initial ram disk              [mod_start, mod_len]
>> > >    c. list of allocated page frames [mfn_list, nr_pages]
>> > >       (unless relocated due to XEN_ELFNOTE_INIT_P2M)
>> > >    d. start_info_t structure        [register ESI (x86)]
>> > >    e. bootstrap page tables         [pt_base, CR3 (x86)]
>> > >    f. bootstrap stack               [register ESP (x86)]
>> > > 
>> > > But it does not clarify that the virtual addresses of all of
>> > > those areas are initially mapped by the page tables at pt_base
>> > > (or CR3). Let's fix that.
>> > 
>> > To me this is already being said by "This the order of bootstrap
>> > elements in the initial virtual region".
>> 
>> Stefano wanted to make sure we have it written as clear as possible.
>> I am going to be a good little submitter and let you guys sort this
>> one out  :-)
> 
> Let's step back for a second and see if I understand correctly: your
> patch 6/11 removes the call to xen_map_identity_early on x86_64 because
> "Xen provides us with all the memory mapped that we need to function".
> 
> The original xen_map_identity_early maps up to max_pfn, that is
> xen_start_info->nr_pages, so I am assuming that what you meant is that
> "Xen provides us with all the memory already mapped in the bootstrap
> page tables".
> And that is not written anywhere in the Xen headers.
> 
> Therefore, if I understand the issue correctly, I would add the
> following to xen.h:
> 
> "On x86_64 the bootstrap page tables map all the pages assigned to the
> domain."

That certainly is not the case (and can't be - remember that
x86-64 Linux's initial virtual mapping starts only 2GB from the
end of the address space, so how could all memory possibly be mapped
[i.e. to what virtual addresses would those mappings be done]?).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 15:58:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 15:58:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4DKD-0002bB-Vr; Wed, 22 Aug 2012 15:58:33 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T4DKC-0002b6-Kz
	for xen-devel@lists.xensource.com; Wed, 22 Aug 2012 15:58:32 +0000
Received: from [85.158.143.99:59462] by server-3.bemta-4.messagelabs.com id
	A4/E6-09529-7A105305; Wed, 22 Aug 2012 15:58:31 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-16.tower-216.messagelabs.com!1345651109!16213860!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7083 invoked from network); 22 Aug 2012 15:58:29 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-16.tower-216.messagelabs.com with SMTP;
	22 Aug 2012 15:58:29 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 22 Aug 2012 16:58:29 +0100
Message-Id: <50351DEF020000780009702A@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Wed, 22 Aug 2012 16:59:11 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Stefano Stabellini" <stefano.stabellini@eu.citrix.com>,
	"Konrad Rzeszutek Wilk" <konrad.wilk@oracle.com>
References: <1345133009-21941-1-git-send-email-konrad.wilk@oracle.com>
	<1345133009-21941-3-git-send-email-konrad.wilk@oracle.com>
	<alpine.DEB.2.02.1208171835010.15568@kaball.uk.xensource.com>
	<20120820141305.GA2713@phenom.dumpdata.com>
	<20120821172732.GA23715@phenom.dumpdata.com>
	<20120821190317.GA13035@phenom.dumpdata.com>
In-Reply-To: <20120821190317.GA13035@phenom.dumpdata.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: [Xen-devel] Q:pt_base in COMPAT mode offset by two pages.
 Was:Re: [PATCH 02/11] xen/x86: Use memblock_reserve for sensitive areas.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 21.08.12 at 21:03, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> On Tue, Aug 21, 2012 at 01:27:32PM -0400, Konrad Rzeszutek Wilk wrote:
>> On Mon, Aug 20, 2012 at 10:13:05AM -0400, Konrad Rzeszutek Wilk wrote:
>> > On Fri, Aug 17, 2012 at 06:35:12PM +0100, Stefano Stabellini wrote:
>> > > On Thu, 16 Aug 2012, Konrad Rzeszutek Wilk wrote:
>> > > > instead of a big memblock_reserve. This way we can be more
>> > > > selective in freeing regions (and it also makes it easier
>> > > > to understand where is what).
>> > > > 
>> > > > [v1: Move the auto_translate_physmap to proper line]
>> > > > [v2: Per Stefano suggestion add more comments]
>> > > > Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
>> > > 
>> > > much better now!
>> > 
>> > Though, interestingly enough, it breaks 32-bit dom0s (and only dom0s).
>> > Will have a revised patch posted shortly.
>> 
>> Jan, I noticed something odd. Part of this code replaces this:
>> 
>> 	memblock_reserve(__pa(xen_start_info->mfn_list),
>> 		xen_start_info->pt_base - xen_start_info->mfn_list);
>> 
>> with region-by-region reservations. What I found is that if I boot this
>> as a 32-bit guest under a 64-bit hypervisor, the xen_start_info->pt_base
>> is actually wrong.
>> 
>> Specifically this is what bootup says:
>> 
>> (good working case - 32-bit hypervisor with 32-bit dom0):
>> (XEN)  Loaded kernel: c1000000->c1a23000
>> (XEN)  Init. ramdisk: c1a23000->cf730e00
>> (XEN)  Phys-Mach map: cf731000->cf831000
>> (XEN)  Start info:    cf831000->cf83147c
>> (XEN)  Page tables:   cf832000->cf8b5000
>> (XEN)  Boot stack:    cf8b5000->cf8b6000
>> (XEN)  TOTAL:         c0000000->cfc00000
>> 
>> [    0.000000] PT: cf832000 (f832000)
>> [    0.000000] Reserving PT: f832000->f8b5000
>> 
>> And with a 64-bit hypervisor:
>> 
>> (XEN) VIRTUAL MEMORY ARRANGEMENT:
>> (XEN)  Loaded kernel: 00000000c1000000->00000000c1a23000
>> (XEN)  Init. ramdisk: 00000000c1a23000->00000000cf730e00
>> (XEN)  Phys-Mach map: 00000000cf731000->00000000cf831000
>> (XEN)  Start info:    00000000cf831000->00000000cf8314b4
>> (XEN)  Page tables:   00000000cf832000->00000000cf8b6000
>> (XEN)  Boot stack:    00000000cf8b6000->00000000cf8b7000
>> (XEN)  TOTAL:         00000000c0000000->00000000cfc00000
>> (XEN)  ENTRY ADDRESS: 00000000c16bb22c
>> 
>> [    0.000000] PT: cf834000 (f834000)
>> [    0.000000] Reserving PT: f834000->f8b8000
>> 
>> So the pt_base is offset by two pages. And looking at c/s 13257,
>> it's not clear to me why this two-page offset was added.

Actually, the adjustment turns out to be correct: the page
tables for a 32-on-64 dom0 get allocated in the order "first L1",
"first L2", "first L3", so the offset to the page table base is
indeed 2. Reading the comment in xen/include/public/xen.h very
strictly, this is not a violation, since it nowhere says that
pt_base points at the first frame of the page table space. I
admit that this seems to be implied, though: I think the implied
page table space is the range [pt_base, pt_base + nr_pt_frames),
whereas the range here is really [pt_base - 2 pages,
pt_base - 2 pages + nr_pt_frames), which - without a priori
knowledge - the kernel would have difficulty figuring out.

Below is a debugging patch I used to see the full picture, if you
want to double check.

One thing I also noticed is that nr_pt_frames apparently is
one too high in this case, as the L4 is not really part of the
page tables from the kernel's perspective (and not represented
anywhere in the corresponding VA range).

Jan

--- a/xen/arch/x86/domain_build.c
+++ b/xen/arch/x86/domain_build.c
@@ -940,6 +940,7 @@ int __init construct_dom0(
     si->flags       |= (xen_processor_pmbits << 8) & SIF_PM_MASK;
     si->pt_base      = vpt_start + 2 * PAGE_SIZE * !!is_pv_32on64_domain(d);
     si->nr_pt_frames = nr_pt_pages;
+printk("PT#%lx\n", si->nr_pt_frames);//temp
     si->mfn_list     = vphysmap_start;
     snprintf(si->magic, sizeof(si->magic), "xen-3.0-x86_%d%s",
              elf_64bit(&elf) ? 64 : 32, parms.pae ? "p" : "");
@@ -1115,6 +1116,10 @@ int __init construct_dom0(
                 process_pending_softirqs();
         }
     }
+show_page_walk(vpt_start);//temp
+show_page_walk(si->pt_base);//temp
+show_page_walk(v_start);//temp
+show_page_walk(v_end - 1);//temp
 
     if ( initrd_len != 0 )
     {


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 16:20:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 16:20:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4Dei-0003ML-AC; Wed, 22 Aug 2012 16:19:44 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T4Deg-0003ME-M6
	for xen-devel@lists.xensource.com; Wed, 22 Aug 2012 16:19:42 +0000
Received: from [85.158.143.99:34501] by server-3.bemta-4.messagelabs.com id
	28/92-09529-D9605305; Wed, 22 Aug 2012 16:19:41 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-11.tower-216.messagelabs.com!1345652381!19888783!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTAzMzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2713 invoked from network); 22 Aug 2012 16:19:41 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-11.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 16:19:41 -0000
X-IronPort-AV: E=Sophos;i="4.77,808,1336348800"; d="scan'208";a="14131461"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 16:19:41 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Wed, 22 Aug 2012 17:19:41 +0100
Date: Wed, 22 Aug 2012 17:19:18 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: <xen-devel@lists.xensource.com>
Message-ID: <alpine.DEB.2.02.1208221635020.15568@kaball.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	linux-kernel@vger.kernel.org,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH 0/6] Xen patches for Linux 3.7
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Konrad,
the following are the patches that I am proposing for Linux 3.7.
I am leaving out the bulk of the ARM patches for the moment.


Stefano Stabellini (6):
      xen/events: fix unmask_evtchn for PV on HVM guests
      xen: missing includes
      xen: update xen_add_to_physmap interface
      xen: Introduce xen_pfn_t for pfn and mfn types
      xen: clear IRQ_NOAUTOEN and IRQ_NOREQUEST
      xen: allow privcmd for HVM guests

 arch/ia64/include/asm/xen/interface.h      |    7 ++++++-
 arch/x86/include/asm/xen/interface.h       |    7 +++++++
 arch/x86/xen/mmu.c                         |    3 +++
 drivers/tty/hvc/hvc_xen.c                  |    2 ++
 drivers/xen/events.c                       |   18 +++++++++++++++---
 drivers/xen/grant-table.c                  |    1 +
 drivers/xen/privcmd.c                      |    4 ----
 drivers/xen/xenbus/xenbus_probe_frontend.c |    1 +
 include/xen/interface/grant_table.h        |    4 ++--
 include/xen/interface/memory.h             |    9 ++++++---
 include/xen/interface/platform.h           |    4 ++--
 include/xen/interface/xen.h                |    7 +++----
 include/xen/privcmd.h                      |    3 +--
 13 files changed, 49 insertions(+), 21 deletions(-)


A git branch based on v3.6-rc2 is available here:

git://xenbits.xen.org/people/sstabellini/linux-pvhvm.git for_upstream_3.7

Cheers,

Stefano

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 16:20:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 16:20:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4Dfb-0003Qx-Rd; Wed, 22 Aug 2012 16:20:39 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T4Dfa-0003Pq-P3
	for xen-devel@lists.xensource.com; Wed, 22 Aug 2012 16:20:38 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1345652430!2961701!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjU4OTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10390 invoked from network); 22 Aug 2012 16:20:32 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 16:20:32 -0000
X-IronPort-AV: E=Sophos;i="4.77,808,1336363200"; d="scan'208";a="35465043"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 12:20:30 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 22 Aug 2012 12:20:29 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1T4DfM-0000qN-Al;
	Wed, 22 Aug 2012 17:20:24 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: xen-devel@lists.xensource.com
Date: Wed, 22 Aug 2012 17:20:15 +0100
Message-ID: <1345652416-27181-5-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208221635020.15568@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208221635020.15568@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	linux-kernel@vger.kernel.org, konrad.wilk@oracle.com
Subject: [Xen-devel] [PATCH 5/6] xen: clear IRQ_NOAUTOEN and IRQ_NOREQUEST
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Reset the IRQ_NOAUTOEN and IRQ_NOREQUEST flags, which are enabled by
default on ARM. If IRQ_NOAUTOEN is set, __setup_irq doesn't call
irq_startup, which is responsible for calling irq_unmask at startup
time. As a result, event channels remain masked.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 drivers/xen/events.c |    1 +
 1 files changed, 1 insertions(+), 0 deletions(-)

diff --git a/drivers/xen/events.c b/drivers/xen/events.c
index 36bf17d..c60d162 100644
--- a/drivers/xen/events.c
+++ b/drivers/xen/events.c
@@ -842,6 +842,7 @@ int bind_evtchn_to_irq(unsigned int evtchn)
 		struct irq_info *info = info_for_irq(irq);
 		WARN_ON(info == NULL || info->type != IRQT_EVTCHN);
 	}
+	irq_clear_status_flags(irq, IRQ_NOREQUEST|IRQ_NOAUTOEN);
 
 out:
 	mutex_unlock(&irq_mapping_update_lock);
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 16:20:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 16:20:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4Dfd-0003Rt-M6; Wed, 22 Aug 2012 16:20:41 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T4Dfb-0003Pw-M1
	for xen-devel@lists.xensource.com; Wed, 22 Aug 2012 16:20:39 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1345652430!2961701!4
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjU4OTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10485 invoked from network); 22 Aug 2012 16:20:33 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 16:20:33 -0000
X-IronPort-AV: E=Sophos;i="4.77,808,1336363200"; d="scan'208";a="35465045"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 12:20:30 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 22 Aug 2012 12:20:29 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1T4DfM-0000qN-BD;
	Wed, 22 Aug 2012 17:20:24 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: xen-devel@lists.xensource.com
Date: Wed, 22 Aug 2012 17:20:16 +0100
Message-ID: <1345652416-27181-6-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208221635020.15568@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208221635020.15568@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	linux-kernel@vger.kernel.org, konrad.wilk@oracle.com
Subject: [Xen-devel] [PATCH 6/6] xen: allow privcmd for HVM guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch removes the "return -ENOSYS" for auto_translated_physmap
guests from privcmd_mmap, thus allowing ARM guests to issue privcmd
mmap calls. However, privcmd mmap calls will still fail for HVM and
hybrid guests on x86, because the xen_remap_domain_mfn_range
implementation is currently PV-only.

Changes in v2:

- better commit message;
- return -EINVAL from xen_remap_domain_mfn_range if
  auto_translated_physmap.


Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/xen/mmu.c    |    3 +++
 drivers/xen/privcmd.c |    4 ----
 2 files changed, 3 insertions(+), 4 deletions(-)

diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index b65a761..2a1ee7b 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -2337,6 +2337,9 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
 	unsigned long range;
 	int err = 0;
 
+	if (xen_feature(XENFEAT_auto_translated_physmap))
+		return -EINVAL;
+
 	prot = __pgprot(pgprot_val(prot) | _PAGE_IOMAP);
 
 	BUG_ON(!((vma->vm_flags & (VM_PFNMAP | VM_RESERVED | VM_IO)) ==
diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
index ccee0f1..85226cb 100644
--- a/drivers/xen/privcmd.c
+++ b/drivers/xen/privcmd.c
@@ -380,10 +380,6 @@ static struct vm_operations_struct privcmd_vm_ops = {
 
 static int privcmd_mmap(struct file *file, struct vm_area_struct *vma)
 {
-	/* Unsupported for auto-translate guests. */
-	if (xen_feature(XENFEAT_auto_translated_physmap))
-		return -ENOSYS;
-
 	/* DONTCOPY is essential for Xen because copy_page_range doesn't know
 	 * how to recreate these mappings */
 	vma->vm_flags |= VM_RESERVED | VM_IO | VM_DONTCOPY | VM_PFNMAP;
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 16:20:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 16:20:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4DfY-0003Q6-OF; Wed, 22 Aug 2012 16:20:36 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T4DfX-0003Px-Ep
	for xen-devel@lists.xensource.com; Wed, 22 Aug 2012 16:20:35 +0000
Received: from [85.158.143.35:59483] by server-3.bemta-4.messagelabs.com id
	18/A3-09529-2D605305; Wed, 22 Aug 2012 16:20:34 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1345652432!11646801!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjU4OTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19306 invoked from network); 22 Aug 2012 16:20:33 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 16:20:33 -0000
X-IronPort-AV: E=Sophos;i="4.77,808,1336363200"; d="scan'208";a="35465046"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 12:20:30 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 22 Aug 2012 12:20:29 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1T4DfM-0000qN-7u;
	Wed, 22 Aug 2012 17:20:24 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: xen-devel@lists.xensource.com
Date: Wed, 22 Aug 2012 17:20:12 +0100
Message-ID: <1345652416-27181-2-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208221635020.15568@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208221635020.15568@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	linux-kernel@vger.kernel.org, konrad.wilk@oracle.com
Subject: [Xen-devel] [PATCH 2/6] xen: missing includes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Changes in v3:
- add missing pvclock-abi.h include to ia64 header files.

Changes in v2:
- remove pvclock hack;
- remove include linux/types.h from xen/interface/xen.h.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 arch/ia64/include/asm/xen/interface.h      |    2 ++
 arch/x86/include/asm/xen/interface.h       |    2 ++
 drivers/tty/hvc/hvc_xen.c                  |    2 ++
 drivers/xen/grant-table.c                  |    1 +
 drivers/xen/xenbus/xenbus_probe_frontend.c |    1 +
 include/xen/interface/xen.h                |    1 -
 include/xen/privcmd.h                      |    1 +
 7 files changed, 9 insertions(+), 1 deletions(-)

diff --git a/arch/ia64/include/asm/xen/interface.h b/arch/ia64/include/asm/xen/interface.h
index 09d5f7f..ee9cad6 100644
--- a/arch/ia64/include/asm/xen/interface.h
+++ b/arch/ia64/include/asm/xen/interface.h
@@ -265,6 +265,8 @@ typedef struct xen_callback xen_callback_t;
 
 #endif /* !__ASSEMBLY__ */
 
+#include <asm/pvclock-abi.h>
+
 /* Size of the shared_info area (this is not related to page size).  */
 #define XSI_SHIFT			14
 #define XSI_SIZE			(1 << XSI_SHIFT)
diff --git a/arch/x86/include/asm/xen/interface.h b/arch/x86/include/asm/xen/interface.h
index cbf0c9d..a93db16 100644
--- a/arch/x86/include/asm/xen/interface.h
+++ b/arch/x86/include/asm/xen/interface.h
@@ -121,6 +121,8 @@ struct arch_shared_info {
 #include "interface_64.h"
 #endif
 
+#include <asm/pvclock-abi.h>
+
 #ifndef __ASSEMBLY__
 /*
  * The following is all CPU context. Note that the fpu_ctxt block is filled
diff --git a/drivers/tty/hvc/hvc_xen.c b/drivers/tty/hvc/hvc_xen.c
index 1e456dc..2944ff8 100644
--- a/drivers/tty/hvc/hvc_xen.c
+++ b/drivers/tty/hvc/hvc_xen.c
@@ -21,6 +21,7 @@
 #include <linux/console.h>
 #include <linux/delay.h>
 #include <linux/err.h>
+#include <linux/irq.h>
 #include <linux/init.h>
 #include <linux/types.h>
 #include <linux/list.h>
@@ -35,6 +36,7 @@
 #include <xen/page.h>
 #include <xen/events.h>
 #include <xen/interface/io/console.h>
+#include <xen/interface/sched.h>
 #include <xen/hvc-console.h>
 #include <xen/xenbus.h>
 
diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index 0bfc1ef..1d0d95e 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -47,6 +47,7 @@
 #include <xen/interface/memory.h>
 #include <xen/hvc-console.h>
 #include <asm/xen/hypercall.h>
+#include <asm/xen/interface.h>
 
 #include <asm/pgtable.h>
 #include <asm/sync_bitops.h>
diff --git a/drivers/xen/xenbus/xenbus_probe_frontend.c b/drivers/xen/xenbus/xenbus_probe_frontend.c
index a31b54d..3159a37 100644
--- a/drivers/xen/xenbus/xenbus_probe_frontend.c
+++ b/drivers/xen/xenbus/xenbus_probe_frontend.c
@@ -21,6 +21,7 @@
 #include <xen/xenbus.h>
 #include <xen/events.h>
 #include <xen/page.h>
+#include <xen/xen.h>
 
 #include <xen/platform_pci.h>
 
diff --git a/include/xen/interface/xen.h b/include/xen/interface/xen.h
index 0801468..6e75dea 100644
--- a/include/xen/interface/xen.h
+++ b/include/xen/interface/xen.h
@@ -10,7 +10,6 @@
 #define __XEN_PUBLIC_XEN_H__
 
 #include <asm/xen/interface.h>
-#include <asm/pvclock-abi.h>
 
 /*
  * XEN "SYSTEM CALLS" (a.k.a. HYPERCALLS).
diff --git a/include/xen/privcmd.h b/include/xen/privcmd.h
index 17857fb..4d58881 100644
--- a/include/xen/privcmd.h
+++ b/include/xen/privcmd.h
@@ -35,6 +35,7 @@
 
 #include <linux/types.h>
 #include <linux/compiler.h>
+#include <xen/interface/xen.h>
 
 typedef unsigned long xen_pfn_t;
 
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 16:20:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 16:20:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4Dfb-0003Qo-FX; Wed, 22 Aug 2012 16:20:39 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T4DfZ-0003Pj-Me
	for xen-devel@lists.xensource.com; Wed, 22 Aug 2012 16:20:37 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1345652430!2961701!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjU4OTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10325 invoked from network); 22 Aug 2012 16:20:31 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 16:20:31 -0000
X-IronPort-AV: E=Sophos;i="4.77,808,1336363200"; d="scan'208";a="35465042"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 12:20:30 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 22 Aug 2012 12:20:29 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1T4DfM-0000qN-60;
	Wed, 22 Aug 2012 17:20:24 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: xen-devel@lists.xensource.com
Date: Wed, 22 Aug 2012 17:20:11 +0100
Message-ID: <1345652416-27181-1-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208221635020.15568@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208221635020.15568@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	linux-kernel@vger.kernel.org, konrad.wilk@oracle.com
Subject: [Xen-devel] [PATCH 1/6] xen/events: fix unmask_evtchn for PV on HVM
	guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

When unmask_evtchn is called and an event is already pending, we
just set evtchn_pending_sel and wait for local_irq_enable to be
called. That works because PV guests set the irq_enable pvop to
xen_irq_enable_direct in xen_setup_vcpu_info_placement:
xen_irq_enable_direct is implemented in assembly in
arch/x86/xen/xen-asm.S and calls xen_force_evtchn_callback if
XEN_vcpu_info_pending is set.

However, HVM guests (and ARM guests) either do not change the
irq_enable pvop or do not have it at all, so unmask_evtchn cannot work
properly for them.

Considering that having the pending_irq bit set when unmask_evtchn is
called is not very common, and that it is simpler to keep the
native_irq_enable implementation for HVM guests (and ARM guests), the
best thing to do is simply to use the EVTCHNOP_unmask hypercall (Xen
re-injects pending events in response).
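The two-path decision described above can be modeled as a small
standalone sketch. The helper name and the plain-int parameters are
illustrative stand-ins for the real kernel predicates
(cpu_from_evtchn, sync_test_bit, xen_hvm_domain), not kernel API:

```c
/* Simplified model of the unmask_evtchn slow-path decision:
 * take the EVTCHNOP_unmask hypercall when the port belongs to
 * another vcpu, or when an event is already pending on an HVM
 * domain (which has no paravirtualized irq_enable to re-check
 * the pending selector later). */
static int needs_unmask_hypercall(int port_cpu, int this_cpu,
                                  int evtchn_pending, int is_hvm_domain)
{
    if (port_cpu != this_cpu)
        return 1;                 /* non-local port: always slow path */
    if (evtchn_pending && is_hvm_domain)
        return 1;                 /* pending event, no irq_enable pvop */
    return 0;                     /* fast path: clear mask bit locally */
}
```

The fast path then mirrors what the patched drivers/xen/events.c does:
clear the mask bit and, if an event was pending, set the per-vcpu
pending selector by hand.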

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 drivers/xen/events.c |   17 ++++++++++++++---
 1 files changed, 14 insertions(+), 3 deletions(-)

diff --git a/drivers/xen/events.c b/drivers/xen/events.c
index 7595581..36bf17d 100644
--- a/drivers/xen/events.c
+++ b/drivers/xen/events.c
@@ -373,11 +373,22 @@ static void unmask_evtchn(int port)
 {
 	struct shared_info *s = HYPERVISOR_shared_info;
 	unsigned int cpu = get_cpu();
+	int do_hypercall = 0, evtchn_pending = 0;
 
 	BUG_ON(!irqs_disabled());
 
-	/* Slow path (hypercall) if this is a non-local port. */
-	if (unlikely(cpu != cpu_from_evtchn(port))) {
+	if (unlikely((cpu != cpu_from_evtchn(port))))
+		do_hypercall = 1;
+	else
+		evtchn_pending = sync_test_bit(port, &s->evtchn_pending[0]);
+
+	if (unlikely(evtchn_pending && xen_hvm_domain()))
+		do_hypercall = 1;
+
+	/* Slow path (hypercall) if this is a non-local port or if this is
+	 * an hvm domain and an event is pending (hvm domains don't have
+	 * their own implementation of irq_enable). */
+	if (do_hypercall) {
 		struct evtchn_unmask unmask = { .port = port };
 		(void)HYPERVISOR_event_channel_op(EVTCHNOP_unmask, &unmask);
 	} else {
@@ -390,7 +401,7 @@ static void unmask_evtchn(int port)
 		 * 'hw_resend_irq'. Just like a real IO-APIC we 'lose
 		 * the interrupt edge' if the channel is masked.
 		 */
-		if (sync_test_bit(port, &s->evtchn_pending[0]) &&
+		if (evtchn_pending &&
 		    !sync_test_and_set_bit(port / BITS_PER_LONG,
 					   &vcpu_info->evtchn_pending_sel))
 			vcpu_info->evtchn_upcall_pending = 1;
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 16:20:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 16:20:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4Dff-0003SX-44; Wed, 22 Aug 2012 16:20:43 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T4Dfc-0003Py-Ql
	for xen-devel@lists.xensource.com; Wed, 22 Aug 2012 16:20:41 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1345652430!2961701!5
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjU4OTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10514 invoked from network); 22 Aug 2012 16:20:34 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 16:20:34 -0000
X-IronPort-AV: E=Sophos;i="4.77,808,1336363200"; d="scan'208";a="35465047"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 12:20:30 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 22 Aug 2012 12:20:29 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1T4DfM-0000qN-AE;
	Wed, 22 Aug 2012 17:20:24 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: xen-devel@lists.xensource.com
Date: Wed, 22 Aug 2012 17:20:14 +0100
Message-ID: <1345652416-27181-4-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208221635020.15568@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208221635020.15568@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	linux-kernel@vger.kernel.org, konrad.wilk@oracle.com
Subject: [Xen-devel] [PATCH 4/6] xen: Introduce xen_pfn_t for pfn and mfn
	types
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

All of the original Xen headers use xen_pfn_t as the mfn and pfn type;
however, when they were imported into Linux, xen_pfn_t was replaced
with unsigned long. That might work for x86 and ia64, but it does not
for ARM. Bring back xen_pfn_t and let each architecture define it as
it sees fit.
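For illustration, an architecture that wants one guest ABI across
word sizes can pick a fixed-width xen_pfn_t. The #if dispatch below
is only a hypothetical sketch of that choice, not the kernel's
mechanism (each arch simply provides its own typedef in its
asm/xen/interface.h):

```c
#include <stdint.h>

/* Hypothetical per-architecture choice: x86/ia64 keep the native
 * word size, while an ARM-style port could use a fixed 64-bit pfn
 * so that 32-bit and 64-bit guests share one hypercall ABI. */
#if defined(__i386__) || defined(__x86_64__) || defined(__ia64__)
typedef unsigned long xen_pfn_t;
#else
typedef uint64_t xen_pfn_t;   /* identical layout for 32/64-bit guests */
#endif
```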

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/ia64/include/asm/xen/interface.h |    5 ++++-
 arch/x86/include/asm/xen/interface.h  |    5 +++++
 include/xen/interface/grant_table.h   |    4 ++--
 include/xen/interface/memory.h        |    6 +++---
 include/xen/interface/platform.h      |    4 ++--
 include/xen/interface/xen.h           |    6 +++---
 include/xen/privcmd.h                 |    2 --
 7 files changed, 19 insertions(+), 13 deletions(-)

diff --git a/arch/ia64/include/asm/xen/interface.h b/arch/ia64/include/asm/xen/interface.h
index ee9cad6..3d52a5b 100644
--- a/arch/ia64/include/asm/xen/interface.h
+++ b/arch/ia64/include/asm/xen/interface.h
@@ -67,6 +67,10 @@
 #define set_xen_guest_handle(hnd, val)	do { (hnd).p = val; } while (0)
 
 #ifndef __ASSEMBLY__
+/* Explicitly size integers that represent pfns in the public interface
+ * with Xen so that we could have one ABI that works for 32 and 64 bit
+ * guests. */
+typedef unsigned long xen_pfn_t;
 /* Guest handles for primitive C types. */
 __DEFINE_GUEST_HANDLE(uchar, unsigned char);
 __DEFINE_GUEST_HANDLE(uint, unsigned int);
@@ -79,7 +83,6 @@ DEFINE_GUEST_HANDLE(void);
 DEFINE_GUEST_HANDLE(uint64_t);
 DEFINE_GUEST_HANDLE(uint32_t);
 
-typedef unsigned long xen_pfn_t;
 DEFINE_GUEST_HANDLE(xen_pfn_t);
 #define PRI_xen_pfn	"lx"
 #endif
diff --git a/arch/x86/include/asm/xen/interface.h b/arch/x86/include/asm/xen/interface.h
index a93db16..555f94d 100644
--- a/arch/x86/include/asm/xen/interface.h
+++ b/arch/x86/include/asm/xen/interface.h
@@ -47,6 +47,10 @@
 #endif
 
 #ifndef __ASSEMBLY__
+/* Explicitly size integers that represent pfns in the public interface
+ * with Xen so that on ARM we can have one ABI that works for 32 and 64
+ * bit guests. */
+typedef unsigned long xen_pfn_t;
 /* Guest handles for primitive C types. */
 __DEFINE_GUEST_HANDLE(uchar, unsigned char);
 __DEFINE_GUEST_HANDLE(uint,  unsigned int);
@@ -57,6 +61,7 @@ DEFINE_GUEST_HANDLE(long);
 DEFINE_GUEST_HANDLE(void);
 DEFINE_GUEST_HANDLE(uint64_t);
 DEFINE_GUEST_HANDLE(uint32_t);
+DEFINE_GUEST_HANDLE(xen_pfn_t);
 #endif
 
 #ifndef HYPERVISOR_VIRT_START
diff --git a/include/xen/interface/grant_table.h b/include/xen/interface/grant_table.h
index a17d844..7da811b 100644
--- a/include/xen/interface/grant_table.h
+++ b/include/xen/interface/grant_table.h
@@ -338,7 +338,7 @@ DEFINE_GUEST_HANDLE_STRUCT(gnttab_dump_table);
 #define GNTTABOP_transfer                4
 struct gnttab_transfer {
     /* IN parameters. */
-    unsigned long mfn;
+    xen_pfn_t mfn;
     domid_t       domid;
     grant_ref_t   ref;
     /* OUT parameters. */
@@ -375,7 +375,7 @@ struct gnttab_copy {
 	struct {
 		union {
 			grant_ref_t ref;
-			unsigned long   gmfn;
+			xen_pfn_t   gmfn;
 		} u;
 		domid_t  domid;
 		uint16_t offset;
diff --git a/include/xen/interface/memory.h b/include/xen/interface/memory.h
index 8d4efc1..d8e33a9 100644
--- a/include/xen/interface/memory.h
+++ b/include/xen/interface/memory.h
@@ -31,7 +31,7 @@ struct xen_memory_reservation {
      *   OUT: GMFN bases of extents that were allocated
      *   (NB. This command also updates the mach_to_phys translation table)
      */
-    GUEST_HANDLE(ulong) extent_start;
+    GUEST_HANDLE(xen_pfn_t) extent_start;
 
     /* Number of extents, and size/alignment of each (2^extent_order pages). */
     unsigned long  nr_extents;
@@ -130,7 +130,7 @@ struct xen_machphys_mfn_list {
      * any large discontiguities in the machine address space, 2MB gaps in
      * the machphys table will be represented by an MFN base of zero.
      */
-    GUEST_HANDLE(ulong) extent_start;
+    GUEST_HANDLE(xen_pfn_t) extent_start;
 
     /*
      * Number of extents written to the above array. This will be smaller
@@ -175,7 +175,7 @@ struct xen_add_to_physmap {
     unsigned long idx;
 
     /* GPFN where the source mapping page should appear. */
-    unsigned long gpfn;
+    xen_pfn_t gpfn;
 };
 DEFINE_GUEST_HANDLE_STRUCT(xen_add_to_physmap);
 
diff --git a/include/xen/interface/platform.h b/include/xen/interface/platform.h
index 61fa661..a3275a8 100644
--- a/include/xen/interface/platform.h
+++ b/include/xen/interface/platform.h
@@ -54,7 +54,7 @@ DEFINE_GUEST_HANDLE_STRUCT(xenpf_settime_t);
 #define XENPF_add_memtype         31
 struct xenpf_add_memtype {
 	/* IN variables. */
-	unsigned long mfn;
+	xen_pfn_t mfn;
 	uint64_t nr_mfns;
 	uint32_t type;
 	/* OUT variables. */
@@ -84,7 +84,7 @@ struct xenpf_read_memtype {
 	/* IN variables. */
 	uint32_t reg;
 	/* OUT variables. */
-	unsigned long mfn;
+	xen_pfn_t mfn;
 	uint64_t nr_mfns;
 	uint32_t type;
 };
diff --git a/include/xen/interface/xen.h b/include/xen/interface/xen.h
index 6e75dea..1e0df6b 100644
--- a/include/xen/interface/xen.h
+++ b/include/xen/interface/xen.h
@@ -189,7 +189,7 @@ struct mmuext_op {
 	unsigned int cmd;
 	union {
 		/* [UN]PIN_TABLE, NEW_BASEPTR, NEW_USER_BASEPTR */
-		unsigned long mfn;
+		xen_pfn_t mfn;
 		/* INVLPG_LOCAL, INVLPG_ALL, SET_LDT */
 		unsigned long linear_addr;
 	} arg1;
@@ -429,11 +429,11 @@ struct start_info {
 	unsigned long nr_pages;     /* Total pages allocated to this domain.  */
 	unsigned long shared_info;  /* MACHINE address of shared info struct. */
 	uint32_t flags;             /* SIF_xxx flags.                         */
-	unsigned long store_mfn;    /* MACHINE page number of shared page.    */
+	xen_pfn_t store_mfn;        /* MACHINE page number of shared page.    */
 	uint32_t store_evtchn;      /* Event channel for store communication. */
 	union {
 		struct {
-			unsigned long mfn;  /* MACHINE page number of console page.   */
+			xen_pfn_t mfn;      /* MACHINE page number of console page.   */
 			uint32_t  evtchn;   /* Event channel for console page.        */
 		} domU;
 		struct {
diff --git a/include/xen/privcmd.h b/include/xen/privcmd.h
index 4d58881..45c1aa1 100644
--- a/include/xen/privcmd.h
+++ b/include/xen/privcmd.h
@@ -37,8 +37,6 @@
 #include <linux/compiler.h>
 #include <xen/interface/xen.h>
 
-typedef unsigned long xen_pfn_t;
-
 struct privcmd_hypercall {
 	__u64 op;
 	__u64 arg[5];
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 16:20:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 16:20:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4Dfc-0003RD-8L; Wed, 22 Aug 2012 16:20:40 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T4Dfb-0003Pr-3Z
	for xen-devel@lists.xensource.com; Wed, 22 Aug 2012 16:20:39 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1345652430!2961701!3
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjU4OTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10426 invoked from network); 22 Aug 2012 16:20:33 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 16:20:33 -0000
X-IronPort-AV: E=Sophos;i="4.77,808,1336363200"; d="scan'208";a="35465044"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 12:20:30 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 22 Aug 2012 12:20:29 -0400
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1T4DfM-0000qN-9k;
	Wed, 22 Aug 2012 17:20:24 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: xen-devel@lists.xensource.com
Date: Wed, 22 Aug 2012 17:20:13 +0100
Message-ID: <1345652416-27181-3-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1208221635020.15568@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1208221635020.15568@kaball.uk.xensource.com>
MIME-Version: 1.0
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	linux-kernel@vger.kernel.org, konrad.wilk@oracle.com
Subject: [Xen-devel] [PATCH 3/6] xen: update xen_add_to_physmap interface
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Update struct xen_add_to_physmap to be in sync with Xen's version of the
structure.
The size field was introduced by:

changeset:   24164:707d27fe03e7
user:        Jean Guyader <jean.guyader@eu.citrix.com>
date:        Fri Nov 18 13:42:08 2011 +0000
summary:     mm: New XENMEM space, XENMAPSPACE_gmfn_range

According to the comment:

"This new field .size is located in the 16 bits padding between .domid
and .space in struct xen_add_to_physmap to stay compatible with older
versions."

Changes in v2:

- remove erroneous comment in the commit message.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 include/xen/interface/memory.h |    3 +++
 1 files changed, 3 insertions(+), 0 deletions(-)

diff --git a/include/xen/interface/memory.h b/include/xen/interface/memory.h
index eac3ce1..8d4efc1 100644
--- a/include/xen/interface/memory.h
+++ b/include/xen/interface/memory.h
@@ -163,6 +163,9 @@ struct xen_add_to_physmap {
     /* Which domain to change the mapping for. */
     domid_t domid;
 
+    /* Number of pages to go through for gmfn_range */
+    uint16_t    size;
+
     /* Source mapping space. */
 #define XENMAPSPACE_shared_info 0 /* shared info page */
 #define XENMAPSPACE_grant_table 1 /* grant table page */
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 16:24:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 16:24:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4DjD-0004Gh-Pm; Wed, 22 Aug 2012 16:24:23 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Aurelien.MILLIAT@clarte.asso.fr>) id 1T4DjC-0004GM-7L
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 16:24:22 +0000
Received: from [85.158.143.35:16823] by server-2.bemta-4.messagelabs.com id
	F2/EE-21239-5B705305; Wed, 22 Aug 2012 16:24:21 +0000
X-Env-Sender: Aurelien.MILLIAT@clarte.asso.fr
X-Msg-Ref: server-7.tower-21.messagelabs.com!1345652584!11647208!1
X-Originating-IP: [194.3.193.29]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28733 invoked from network); 22 Aug 2012 16:23:06 -0000
Received: from ns3.ingenierium.com (HELO Dulac.clarte.asso.fr) (194.3.193.29)
	by server-7.tower-21.messagelabs.com with AES128-SHA encrypted SMTP;
	22 Aug 2012 16:23:06 -0000
Received: from dulac.ingenierium.com ([192.168.59.1]) by dulac
	([192.168.59.1]) with mapi id 14.01.0289.001;
	Wed, 22 Aug 2012 18:23:03 +0200
From: =?iso-8859-1?Q?Aur=E9lien_MILLIAT?= <Aurelien.MILLIAT@clarte.asso.fr>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Thread-Topic: [Xen-devel] ATI VGA passthrough and S400 Synchronization module
Thread-Index: Ac1ps8zkCPzyeRZ3RWeWIDwxqaix/gFrW5YABEg0R6A=
Date: Wed, 22 Aug 2012 16:23:02 +0000
Message-ID: <36774CA35642C143BCDE93BA0C68DC5702C0C852@dulac>
References: <36774CA35642C143BCDE93BA0C68DC5702C0AE46@dulac>
	<20120731231246.GD32698@phenom.dumpdata.com>
In-Reply-To: <20120731231246.GD32698@phenom.dumpdata.com>
Accept-Language: fr-FR, en-US
Content-Language: fr-FR
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [192.168.54.111]
x-avast-antispam: clean, score=0
x-original-content-type: application/ms-tnef
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] ATI VGA passthrough and S400 Synchronization module
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi,

I've been able to use the ATI FirePro S400 with the unstable version. I've
updated my Dom0 and passed the graphics card through as a secondary adapter.
I've made a quick test of Xen 4.1.2 with these updates, but it still crashes
(no matter, I will use the unstable version).

Aurélien

_______________________________________________

On Tue, Jul 24, 2012 at 04:05:20PM +0000, Aurélien MILLIAT wrote:
> Hi,
>
> I'm currently trying to use Xen on a graphical cluster.
> VGA passthrough works fine and I'm able to use quad-buffered active stereo.
> The last thing to do is to synchronize all the GPUs in the cluster.
> For this purpose I use the ATI FirePro S400, and it doesn't work.
>
> I've seen two behaviors:
> - When I run the lspci command on Dom0, I get:
> 0f:00.0 VGA compatible controller: ATI Technologies Inc Device 6888
> 0f:00.1 Audio device: ATI Technologies Inc Cypress HDMI Audio [Radeon HD 5800 Series]
> and sometimes just:
> 0f:00.0 VGA compatible controller: ATI Technologies Inc Device 6888
> - When a DomU (Windows 7) is running, it's very slow (I can't do anything) and it crashes after several minutes.
> I've tried with the unstable version and I've seen the same 'lspci' behavior.
>
> I have a couple of questions:
> Is it possible to pass through a VGA device with this extension?

I never tried. I just pass in the VGA card and don't try to pass in the
HDMI driver.

> As this is a particular use of VGA passthrough, is it planned to support
> the synchronization module?

So what is the synchronization module for you? Is that the HDMI part?

> Is it easy to add this feature (time cost)?
>
> Computers: HP Z800 workstation
> GPU: ATI FirePro V8800
> CPU: Intel Xeon E5640
> MB: Intel 5520 chipset
>
> Xen:
> Version 4.1.2
> With the ATI patch from http://old-list-archives.xen.org/archives/html/xen-users/2011-05/msg00048.html
> Thanks,
> Aurélien Milliat
>

> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 16:25:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 16:25:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4DkV-0004SL-8w; Wed, 22 Aug 2012 16:25:43 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T4DkT-0004S9-F8
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 16:25:41 +0000
Received: from [85.158.139.83:54130] by server-3.bemta-5.messagelabs.com id
	56/C3-27237-40805305; Wed, 22 Aug 2012 16:25:40 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-8.tower-182.messagelabs.com!1345652738!18256794!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNzA1MDk5\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1870 invoked from network); 22 Aug 2012 16:25:39 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-8.tower-182.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 22 Aug 2012 16:25:39 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7MGPXqA008814
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 22 Aug 2012 16:25:33 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7MGPWSd006012
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 22 Aug 2012 16:25:32 GMT
Received: from abhmt118.oracle.com (abhmt118.oracle.com [141.146.116.70])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7MGPWDv009111; Wed, 22 Aug 2012 11:25:32 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 22 Aug 2012 09:25:32 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id A873E4031E; Wed, 22 Aug 2012 12:15:31 -0400 (EDT)
Date: Wed, 22 Aug 2012 12:15:31 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20120822161531.GA24203@phenom.dumpdata.com>
References: <patchbomb.1345579710@phenom.dumpdata.com>
	<74bedb086c5b72447262.1345579713@phenom.dumpdata.com>
	<503506000200007800096F30@nat28.tlf.novell.com>
	<20120822142705.GB31341@phenom.dumpdata.com>
	<alpine.DEB.2.02.1208221604550.15568@kaball.uk.xensource.com>
	<5035187C020000780009700E@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <5035187C020000780009700E@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Konrad Rzeszutek Wilk <konrad@kernel.org>,
	xen-devel <xen-devel@lists.xen.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 3 of 4] xen/pagetables: Document that all of
 the initial regions are mapped
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Aug 22, 2012 at 04:35:56PM +0100, Jan Beulich wrote:
> >>> On 22.08.12 at 17:24, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> wrote:
> > On Wed, 22 Aug 2012, Konrad Rzeszutek Wilk wrote:
> >> On Wed, Aug 22, 2012 at 03:17:04PM +0100, Jan Beulich wrote:
> >> > >>> On 21.08.12 at 22:08, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> >> > > # HG changeset patch
> >> > > # User Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> >> > > # Date 1345579709 14400
> >> > > # Node ID 74bedb086c5b72447262e087c0218b89f8bc9140
> >> > > # Parent  8ed3eef706710c9c476a8d984bfb2861d92bedfb
> >> > > xen/pagetables: Document that all of the initial regions are mapped.
> >> > > 
> >> > > The documentation states that the layout of the initial region looks
> >> > > as so:
> >> > >    a. relocated kernel image
> >> > >    b. initial ram disk              [mod_start, mod_len]
> >> > >    c. list of allocated page frames [mfn_list, nr_pages]
> >> > >       (unless relocated due to XEN_ELFNOTE_INIT_P2M)
> >> > >    d. start_info_t structure        [register ESI (x86)]
> >> > >    e. bootstrap page tables         [pt_base, CR3 (x86)]
> >> > >    f. bootstrap stack               [register ESP (x86)]
> >> > > 
> >> > > But it does not clarify that the virtual address to all of
> >> > > those areas is initially mapped by the pt_base (or CR3).
> >> > > Lets fix that.
> >> > 
> >> > To me this is already being said by "This the order of bootstrap
> >> > elements in the initial virtual region".
> >> 
> >> Stefano wanted to make sure we have it written as clear as possible.
> >> I am going to be a good little submitter and let you guys sort this
> >> one out  :-)
> > 
> > Let's step back for a second and see if I understand correctly: your
> > patch 6/11 removes the call to xen_map_identity_early on x86_64 because
> > "Xen provides us with all the memory mapped that we need to function".

Hm, it should have said: "all the bootstrap memory we need...",
and perhaps mention that it covers the kernel, ramdisk, p2m list,
and the pagetables provided by the kernel.

> > 
> > The original xen_map_identity_early maps up to max_pfn, that is
> > xen_start_info->nr_pages, so I am assuming that what you meant is that
> > "Xen provides us with all the memory already mapped in the bootstrap
> > page tables".
> > And that is not written anywhere in the Xen headers.
> > 
> > Therefore, if I understand the issue correctly, I would add the
> > following to xen.h:
> > 
> > "On x86_64 the bootstrap page tables map all the pages assigned to the
> > domain."
> 
> That certainly is not the case (and can't be - remember that the
> virtual space of x86-64 Linux's initial mapping starts 2GB from the
> end of the address space, so how could all memory possibly be mapped
> [i.e. to what virtual addresses would those mappings be done]?).
> 
> Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 16:31:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 16:31:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4Dq8-0004hD-7z; Wed, 22 Aug 2012 16:31:32 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T4Dq6-0004h8-Hy
	for xen-devel@lists.xensource.com; Wed, 22 Aug 2012 16:31:30 +0000
Received: from [85.158.139.83:62520] by server-9.bemta-5.messagelabs.com id
	CD/03-26123-16905305; Wed, 22 Aug 2012 16:31:29 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-4.tower-182.messagelabs.com!1345653085!26800789!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNzA1MDk5\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5935 invoked from network); 22 Aug 2012 16:31:26 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-4.tower-182.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 22 Aug 2012 16:31:26 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7MGVLg4016274
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 22 Aug 2012 16:31:22 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7MGVKNJ013239
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 22 Aug 2012 16:31:20 GMT
Received: from abhmt119.oracle.com (abhmt119.oracle.com [141.146.116.71])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7MGVJlE017805; Wed, 22 Aug 2012 11:31:19 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 22 Aug 2012 09:31:19 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 4472E4031E; Wed, 22 Aug 2012 12:21:19 -0400 (EDT)
Date: Wed, 22 Aug 2012 12:21:19 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20120822162119.GB24203@phenom.dumpdata.com>
References: <1345133009-21941-1-git-send-email-konrad.wilk@oracle.com>
	<1345133009-21941-3-git-send-email-konrad.wilk@oracle.com>
	<alpine.DEB.2.02.1208171835010.15568@kaball.uk.xensource.com>
	<20120820141305.GA2713@phenom.dumpdata.com>
	<20120821172732.GA23715@phenom.dumpdata.com>
	<20120821190317.GA13035@phenom.dumpdata.com>
	<50351DEF020000780009702A@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50351DEF020000780009702A@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] Q:pt_base in COMPAT mode offset by two pages.
 Was:Re: [PATCH 02/11] xen/x86: Use memblock_reserve for sensitive areas.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Aug 22, 2012 at 04:59:11PM +0100, Jan Beulich wrote:
> >>> On 21.08.12 at 21:03, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> > On Tue, Aug 21, 2012 at 01:27:32PM -0400, Konrad Rzeszutek Wilk wrote:
> >> On Mon, Aug 20, 2012 at 10:13:05AM -0400, Konrad Rzeszutek Wilk wrote:
> >> > On Fri, Aug 17, 2012 at 06:35:12PM +0100, Stefano Stabellini wrote:
> >> > > On Thu, 16 Aug 2012, Konrad Rzeszutek Wilk wrote:
> >> > > > instead of a big memblock_reserve. This way we can be more
> >> > > > selective in freeing regions (and it also makes it easier
> >> > > > to understand where is what).
> >> > > > 
> >> > > > [v1: Move the auto_translate_physmap to proper line]
> >> > > > [v2: Per Stefano suggestion add more comments]
> >> > > > Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> >> > > 
> >> > > much better now!
> >> > 
> >> > Though, interestingly enough, it breaks 32-bit dom0s (and only dom0s).
> >> > Will have a revised patch posted shortly.
> >> 
> >> Jan, I noticed something odd. Part of this code replaces this:
> >> 
> >> 	memblock_reserve(__pa(xen_start_info->mfn_list),
> >> 		xen_start_info->pt_base - xen_start_info->mfn_list);
> >> 
> >> with a more region-by-region approach. What I found is that if I boot this
> >> as a 32-bit guest on a 64-bit hypervisor, xen_start_info->pt_base is
> >> actually wrong.
> >> 
> >> Specifically this is what bootup says:
> >> 
> >> (good working case - 32bit hypervisor with 32-bit dom0):
> >> (XEN)  Loaded kernel: c1000000->c1a23000
> >> (XEN)  Init. ramdisk: c1a23000->cf730e00
> >> (XEN)  Phys-Mach map: cf731000->cf831000
> >> (XEN)  Start info:    cf831000->cf83147c
> >> (XEN)  Page tables:   cf832000->cf8b5000
> >> (XEN)  Boot stack:    cf8b5000->cf8b6000
> >> (XEN)  TOTAL:         c0000000->cfc00000
> >> 
> >> [    0.000000] PT: cf832000 (f832000)
> >> [    0.000000] Reserving PT: f832000->f8b5000
> >> 
> >> And with a 64-bit hypervisor:
> >> 
> >> (XEN) VIRTUAL MEMORY ARRANGEMENT:
> >> (XEN)  Loaded kernel: 00000000c1000000->00000000c1a23000
> >> (XEN)  Init. ramdisk: 00000000c1a23000->00000000cf730e00
> >> (XEN)  Phys-Mach map: 00000000cf731000->00000000cf831000
> >> (XEN)  Start info:    00000000cf831000->00000000cf8314b4
> >> (XEN)  Page tables:   00000000cf832000->00000000cf8b6000
> >> (XEN)  Boot stack:    00000000cf8b6000->00000000cf8b7000
> >> (XEN)  TOTAL:         00000000c0000000->00000000cfc00000
> >> (XEN)  ENTRY ADDRESS: 00000000c16bb22c
> >> 
> >> [    0.000000] PT: cf834000 (f834000)
> >> [    0.000000] Reserving PT: f834000->f8b8000
> >> 
> >> So the pt_base is offset by two pages. And looking at c/s 13257,
> >> it's not clear to me why this two-page offset was added.
> 
> Actually, the adjustment turns out to be correct: The page
> tables for a 32-on-64 dom0 get allocated in the order "first L1",
> "first L2", "first L3", so the offset to the page table base is
> indeed 2. When reading xen/include/public/xen.h's comment
> very strictly, this is not a violation (since nothing there says
> that pt_base points at the first thing in the page table space;
> I admit that this does seem to be implied, though - namely, that
> the page table space is the range [pt_base, pt_base + nr_pt_frames),
> whereas the range here is actually [pt_base - 2, pt_base - 2 +
> nr_pt_frames), which - without a priori knowledge - the kernel
> would have difficulty figuring out).

And only in compat mode. <sigh> Well, I am happy that we have found
this so we can document it more thoroughly, but I think I will
step away from those memblock patches for a while, as the earlier

 "let's just reserve everything from mfn_list up to pt_base"

and then in the mmu:
 "reserve everything from pt_base up to nr_pt_frames*PAGE_SIZE"

works.

And I will document in the Linux kernel a bit more why we want to
do that.
> 
> Below is a debugging patch I used to see the full picture, if you
> want to double check.

I trust you, and the production of said pages in L1, L2, L3 order
is closely related to how the 64-bit case does it, which is L4, L1,
L2, L3, and then the rest of the L1's follow.

The toolstack does it in L4, L3, L2, L1 order.
> 
> One thing I also noticed is that nr_pt_frames apparently is
> one too high in this case, as the L4 is not really part of the
> page tables from the kernel's perspective (and not represented
> anywhere in the corresponding VA range).
> 
> Jan
> 
> --- a/xen/arch/x86/domain_build.c
> +++ b/xen/arch/x86/domain_build.c
> @@ -940,6 +940,7 @@ int __init construct_dom0(
>      si->flags       |= (xen_processor_pmbits << 8) & SIF_PM_MASK;
>      si->pt_base      = vpt_start + 2 * PAGE_SIZE * !!is_pv_32on64_domain(d);
>      si->nr_pt_frames = nr_pt_pages;
> +printk("PT#%lx\n", si->nr_pt_frames);//temp
>      si->mfn_list     = vphysmap_start;
>      snprintf(si->magic, sizeof(si->magic), "xen-3.0-x86_%d%s",
>               elf_64bit(&elf) ? 64 : 32, parms.pae ? "p" : "");
> @@ -1115,6 +1116,10 @@ int __init construct_dom0(
>                  process_pending_softirqs();
>          }
>      }
> +show_page_walk(vpt_start);//temp
> +show_page_walk(si->pt_base);//temp
> +show_page_walk(v_start);//temp
> +show_page_walk(v_end - 1);//temp
>  
>      if ( initrd_len != 0 )
>      {

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 16:54:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 16:54:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4ECK-00059y-10; Wed, 22 Aug 2012 16:54:28 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <kumarsukhani@gmail.com>) id 1T4ECI-00059p-Gp
	for xen-devel@lists.xensource.com; Wed, 22 Aug 2012 16:54:26 +0000
Received: from [85.158.143.35:5311] by server-1.bemta-4.messagelabs.com id
	93/30-07754-0CE05305; Wed, 22 Aug 2012 16:54:24 +0000
X-Env-Sender: kumarsukhani@gmail.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1345654461!13854100!1
X-Originating-IP: [209.85.215.43]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4090 invoked from network); 22 Aug 2012 16:54:21 -0000
Received: from mail-lpp01m010-f43.google.com (HELO
	mail-lpp01m010-f43.google.com) (209.85.215.43)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 16:54:21 -0000
Received: by lagk11 with SMTP id k11so826198lag.30
	for <xen-devel@lists.xensource.com>;
	Wed, 22 Aug 2012 09:54:20 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:content-type; bh=4J7+JBz/Hw4nmAmQUvQ5gHHKK6KtLnR8nla1ola/B9c=;
	b=ybqWStHDKrObj5A6GSX5LX8QGV70gXvFTIItIvzaM8Hp2vVNZft7DWaLAccYUOffQI
	Pw82PCTIEMJdTnMD3VwGgkS0RzW0uD4J5bMnyB0D7HX6l0vxN1+7Xefq6W1xXpa7juLY
	2wO4GlHS5d9tteqnwCuvuRb2fxaalzBE4LLVIs49g5SYhKmpGDeiIDdWJPBCmV+iBx+P
	4x8DrauNeiQINwzKQi+/REeOt63SghTqENmjel8uqEw+QPj4l4rr//mq1vjA+MxgvMad
	Gq4VZfW/7UoPGcwEKP6hXohD+7tjex5zXF1CHMejxYlR0F2IVw3ITeB+ZISqnNM8csNX
	aIbA==
MIME-Version: 1.0
Received: by 10.112.30.41 with SMTP id p9mr9753828lbh.26.1345654460655; Wed,
	22 Aug 2012 09:54:20 -0700 (PDT)
Received: by 10.112.42.161 with HTTP; Wed, 22 Aug 2012 09:54:20 -0700 (PDT)
In-Reply-To: <CALNeL8WF91Qjz3tSXparF2TZhdG0n0ncHPfHCte0=wJUB4t8RA@mail.gmail.com>
References: <CALNeL8VenXBBGGd+MVQwh+iq8_QYk2jGSXVMvdE1tmWKCR9dvQ@mail.gmail.com>
	<CAG1y0se79GFRKt_E5hL-eN0ORQcH99tOtMtC5wv4DTViRMUPPw@mail.gmail.com>
	<CALNeL8WF91Qjz3tSXparF2TZhdG0n0ncHPfHCte0=wJUB4t8RA@mail.gmail.com>
Date: Wed, 22 Aug 2012 22:24:20 +0530
Message-ID: <CALNeL8Ufsi+AGZza48vyR9tszPbiraZ2A1dYjCE1XXu-em11iw@mail.gmail.com>
From: Kumar Sukhani <kumarsukhani@gmail.com>
To: xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] Introducing O.S. system level virtualization in Xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1766318305447446408=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============1766318305447446408==
Content-Type: multipart/alternative; boundary=bcaec554d94c1e28dc04c7dd98bb

--bcaec554d94c1e28dc04c7dd98bb
Content-Type: text/plain; charset=ISO-8859-1

After some research and discussion with our project mentor, we found
that the improvement we may get by developing our model, compared with
simply running OpenVZ on Xen, would be very small.
This greatly reduces the use case for the whole project.
So we are planning to drop this idea.

But we are still open to discussion, which might help to develop some
useful ideas for the community.

On Thu, Aug 16, 2012 at 7:49 PM, Kumar Sukhani <kumarsukhani@gmail.com>wrote:

> On Wed, Aug 15, 2012 at 1:51 AM, Fajar A. Nugraha <list@fajar.net> wrote:
> >
> > On Wed, Aug 15, 2012 at 12:31 AM, Kumar Sukhani <kumarsukhani@gmail.com>
> wrote:
> >
> > > So we are planning to introduce features of O.S. level virtualization
> > > in Xen, by proposing one integrated architecture[1] having Operating
> > > system virtualization over one of the VM of Xen.
> > > Please review our architecture and let us know whether it is worth to
> > > implement it or not.
> > >
> > > [1]
> http://www.flickr.com/photos/84959360@N02/7782516274/in/photostream
> >
> >
> > How is this "integrated"?
> >
> > For example, what is the difference of that proposal with running lxc
> > on an Ubuntu domU (which you can already do)?
> >
> > --
> > Fajar
>
> We are planning to do the same thing but with a different approach,
> which will be somewhat more efficient.
> [Note: I am assuming we are using lxc for O.S. level virtualization,
> even though we can use any other platform].
>
> Consider one scenario:
> http://www.flickr.com/photos/84959360@N02/7795262950/in/photostream/
>
> Simple case:
>        Here, for Xen, there are 3 VMs running:
>        1) Dom 0
>        2) Dom 1 (lxc)
>        3) Dom 2 (windows)
>
> Modified case:
>        Here we will modify the lxc implementation to make it aware
> that it is running over Xen in virtualized mode, and will also make
> some changes in the hypervisor and Dom0 to make them aware that one
> or more VMs are also running over Dom1.
>        So the hypervisor and Dom0 will be aware that a total of 5 VMs are running.
>
> Few Advantages:
>        1. This implementation will make the distribution of resources
> simple, because Xen is aware of how many VMs are actually running.
>        2. Process migration of VMs (running over lxc) will be possible
> in cases where we want to migrate a VM to other hardware where
> lxc might not be available.
>
> [Note: It will also have all the advantages of O.S. level virtualization].
> --
> Be Happy Always
> +919579650250
>



-- 
Be Happy Always
+919579650250

--bcaec554d94c1e28dc04c7dd98bb--


--===============1766318305447446408==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1766318305447446408==--


From xen-devel-bounces@lists.xen.org Wed Aug 22 16:54:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 16:54:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4ECK-00059y-10; Wed, 22 Aug 2012 16:54:28 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <kumarsukhani@gmail.com>) id 1T4ECI-00059p-Gp
	for xen-devel@lists.xensource.com; Wed, 22 Aug 2012 16:54:26 +0000
Received: from [85.158.143.35:5311] by server-1.bemta-4.messagelabs.com id
	93/30-07754-0CE05305; Wed, 22 Aug 2012 16:54:24 +0000
X-Env-Sender: kumarsukhani@gmail.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1345654461!13854100!1
X-Originating-IP: [209.85.215.43]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4090 invoked from network); 22 Aug 2012 16:54:21 -0000
Received: from mail-lpp01m010-f43.google.com (HELO
	mail-lpp01m010-f43.google.com) (209.85.215.43)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 16:54:21 -0000
Received: by lagk11 with SMTP id k11so826198lag.30
	for <xen-devel@lists.xensource.com>;
	Wed, 22 Aug 2012 09:54:20 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:content-type; bh=4J7+JBz/Hw4nmAmQUvQ5gHHKK6KtLnR8nla1ola/B9c=;
	b=ybqWStHDKrObj5A6GSX5LX8QGV70gXvFTIItIvzaM8Hp2vVNZft7DWaLAccYUOffQI
	Pw82PCTIEMJdTnMD3VwGgkS0RzW0uD4J5bMnyB0D7HX6l0vxN1+7Xefq6W1xXpa7juLY
	2wO4GlHS5d9tteqnwCuvuRb2fxaalzBE4LLVIs49g5SYhKmpGDeiIDdWJPBCmV+iBx+P
	4x8DrauNeiQINwzKQi+/REeOt63SghTqENmjel8uqEw+QPj4l4rr//mq1vjA+MxgvMad
	Gq4VZfW/7UoPGcwEKP6hXohD+7tjex5zXF1CHMejxYlR0F2IVw3ITeB+ZISqnNM8csNX
	aIbA==
MIME-Version: 1.0
Received: by 10.112.30.41 with SMTP id p9mr9753828lbh.26.1345654460655; Wed,
	22 Aug 2012 09:54:20 -0700 (PDT)
Received: by 10.112.42.161 with HTTP; Wed, 22 Aug 2012 09:54:20 -0700 (PDT)
In-Reply-To: <CALNeL8WF91Qjz3tSXparF2TZhdG0n0ncHPfHCte0=wJUB4t8RA@mail.gmail.com>
References: <CALNeL8VenXBBGGd+MVQwh+iq8_QYk2jGSXVMvdE1tmWKCR9dvQ@mail.gmail.com>
	<CAG1y0se79GFRKt_E5hL-eN0ORQcH99tOtMtC5wv4DTViRMUPPw@mail.gmail.com>
	<CALNeL8WF91Qjz3tSXparF2TZhdG0n0ncHPfHCte0=wJUB4t8RA@mail.gmail.com>
Date: Wed, 22 Aug 2012 22:24:20 +0530
Message-ID: <CALNeL8Ufsi+AGZza48vyR9tszPbiraZ2A1dYjCE1XXu-em11iw@mail.gmail.com>
From: Kumar Sukhani <kumarsukhani@gmail.com>
To: xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] Introducing O.S. system level virtualization in Xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1766318305447446408=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============1766318305447446408==
Content-Type: multipart/alternative; boundary=bcaec554d94c1e28dc04c7dd98bb

--bcaec554d94c1e28dc04c7dd98bb
Content-Type: text/plain; charset=ISO-8859-1

After some research and discussion with our project mentor, the possible
improvements that we may get by developing our model compared with simply
running OpenVZ on Xen will be very less.
This leads to reducing the use case of the whole project very much.
So we are planning to drop this idea.

But we are also open for discussion, which might help to develop some
useful idea for community.

On Thu, Aug 16, 2012 at 7:49 PM, Kumar Sukhani <kumarsukhani@gmail.com>wrote:

> On Wed, Aug 15, 2012 at 1:51 AM, Fajar A. Nugraha <list@fajar.net> wrote:
> >
> > On Wed, Aug 15, 2012 at 12:31 AM, Kumar Sukhani <kumarsukhani@gmail.com>
> wrote:
> >
> > > So we are planning to introduce features of O.S. level virtualization
> > > in Xen, by proposing one integrated architecture[1] having Operating
> > > system virtualization over one of the VM of Xen.
> > > Please review our architecture and let us know whether it is worth to
> > > implement it or not.
> > >
> > > [1]
> http://www.flickr.com/photos/84959360@N02/7782516274/in/photostream
> >
> >
> > How is this "integrated"?
> >
> > For example, what is the difference of that proposal with running lxc
> > on an Ubuntu domU (which you can already do)?
> >
> > --
> > Fajar
>
> We are planning to do the same thing but with a different approach,
> which should be somewhat more efficient.
> [Note: I am assuming we are using lxc for O.S. level virtualization,
> even though any other platform could be used].
>
> Consider one scenario:
> http://www.flickr.com/photos/84959360@N02/7795262950/in/photostream/
>
> Simple case:
>        Here, for Xen, there are 3 VMs running:
>        1) Dom 0
>        2) Dom 1 (lxc)
>        3) Dom 2 (Windows)
>
> Modified case:
>        Here we will modify the lxc implementation to make it aware
> that it is running over Xen in virtualized mode, and will also make
> some changes in the hypervisor and Dom0 to make them aware that one
> or more VMs are also running over Dom1.
>        So the hypervisor and Dom0 will be aware that a total of 5 VMs are running.
>
> A few advantages:
>        1. This implementation will make the distribution of resources
> simpler, because Xen is aware of how many VMs are actually running.
>        2. Process migration of VMs (running over lxc) will be possible,
> for example when we want to migrate a VM to other hardware where
> lxc might not be available.
>
> [Note: It will also have all the advantages of O.S. level virtualization].
> --
> Be Happy Always
> +919579650250
>



-- 
Be Happy Always
+919579650250

--bcaec554d94c1e28dc04c7dd98bb--


--===============1766318305447446408==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1766318305447446408==--


From xen-devel-bounces@lists.xen.org Wed Aug 22 17:08:18 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 17:08:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4EPT-0005UC-Bi; Wed, 22 Aug 2012 17:08:03 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T4EPS-0005U7-8g
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 17:08:02 +0000
Received: from [85.158.143.35:58979] by server-3.bemta-4.messagelabs.com id
	F8/33-09529-1F115305; Wed, 22 Aug 2012 17:08:01 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1345655279!13219614!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNzA1MDk5\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32410 invoked from network); 22 Aug 2012 17:08:00 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-16.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 22 Aug 2012 17:08:00 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7MH7tRe026840
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 22 Aug 2012 17:07:56 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7MH7s9W010716
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 22 Aug 2012 17:07:54 GMT
Received: from abhmt107.oracle.com (abhmt107.oracle.com [141.146.116.59])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7MH7rrc012461; Wed, 22 Aug 2012 12:07:53 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 22 Aug 2012 10:07:53 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 3DE024031E; Wed, 22 Aug 2012 12:57:53 -0400 (EDT)
Date: Wed, 22 Aug 2012 12:57:53 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Message-ID: <20120822165753.GA3328@phenom.dumpdata.com>
References: <20120816202246.GA6244@US-SEA-R8XVZTX>
	<1345149626-32602-1-git-send-email-dgdegra@tycho.nsa.gov>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1345149626-32602-1-git-send-email-dgdegra@tycho.nsa.gov>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: jbeulich@suse.com, ian.campbell@citrix.com, msw@amazon.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2] xen/sysfs: Use XENVER_guest_handle to
	query UUID
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Aug 16, 2012 at 04:40:26PM -0400, Daniel De Graaf wrote:
> On 08/16/2012 04:22 PM, Matt Wilson wrote:
> > 
> > Hi Daniel,
> > 
> > What do you think about retaining a fallback of looking in xenstore if
> > the hypercall fails?
> > 
> > Matt
> > 
> 
> That sounds good; there's little cost to leaving the fallback in.

applied
> 
> ----8<-----------------------------------------------------
> 
> This hypercall has been present since Xen 3.1, and is the preferred
> method for a domain to obtain its UUID. Fall back to the xenstore method
> if using an older version of Xen (which returns -ENOSYS).
> 
> Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
> ---
>  drivers/xen/sys-hypervisor.c    | 13 ++++++++++++-
>  include/xen/interface/version.h |  3 +++
>  2 files changed, 15 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/xen/sys-hypervisor.c b/drivers/xen/sys-hypervisor.c
> index 4c7db7d..284df8a 100644
> --- a/drivers/xen/sys-hypervisor.c
> +++ b/drivers/xen/sys-hypervisor.c
> @@ -114,7 +114,7 @@ static void xen_sysfs_version_destroy(void)
>  
>  /* UUID */
>  
> -static ssize_t uuid_show(struct hyp_sysfs_attr *attr, char *buffer)
> +static ssize_t uuid_show_fallback(struct hyp_sysfs_attr *attr, char *buffer)
>  {
>  	char *vm, *val;
>  	int ret;
> @@ -135,6 +135,17 @@ static ssize_t uuid_show(struct hyp_sysfs_attr *attr, char *buffer)
>  	return ret;
>  }
>  
> +static ssize_t uuid_show(struct hyp_sysfs_attr *attr, char *buffer)
> +{
> +	xen_domain_handle_t uuid;
> +	int ret;
> +	ret = HYPERVISOR_xen_version(XENVER_guest_handle, uuid);
> +	if (ret)
> +		return uuid_show_fallback(attr, buffer);
> +	ret = sprintf(buffer, "%pU\n", uuid);
> +	return ret;
> +}
> +
>  HYPERVISOR_ATTR_RO(uuid);
>  
>  static int __init xen_sysfs_uuid_init(void)
> diff --git a/include/xen/interface/version.h b/include/xen/interface/version.h
> index e8b6519..dd58cf5 100644
> --- a/include/xen/interface/version.h
> +++ b/include/xen/interface/version.h
> @@ -60,4 +60,7 @@ struct xen_feature_info {
>  /* arg == NULL; returns host memory page size. */
>  #define XENVER_pagesize 7
>  
> +/* arg == xen_domain_handle_t. */
> +#define XENVER_guest_handle 8
> +
>  #endif /* __XEN_PUBLIC_VERSION_H__ */
> -- 
> 1.7.11.2

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 17:09:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 17:09:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4EQg-0005Y7-Qj; Wed, 22 Aug 2012 17:09:18 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T4EQf-0005Xz-Ia
	for xen-devel@lists.xensource.com; Wed, 22 Aug 2012 17:09:17 +0000
Received: from [85.158.143.99:40205] by server-2.bemta-4.messagelabs.com id
	06/59-21239-C3215305; Wed, 22 Aug 2012 17:09:16 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-9.tower-216.messagelabs.com!1345655355!27445011!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDcxMjQ2Nw==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10520 invoked from network); 22 Aug 2012 17:09:16 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-9.tower-216.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 22 Aug 2012 17:09:16 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7MH99c8030997
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 22 Aug 2012 17:09:10 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7MH97mi027658
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 22 Aug 2012 17:09:07 GMT
Received: from abhmt115.oracle.com (abhmt115.oracle.com [141.146.116.67])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7MH96IO009101; Wed, 22 Aug 2012 12:09:06 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 22 Aug 2012 10:09:06 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 95E634031E; Wed, 22 Aug 2012 12:59:06 -0400 (EDT)
Date: Wed, 22 Aug 2012 12:59:06 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20120822165906.GA3354@phenom.dumpdata.com>
References: <alpine.DEB.2.02.1208221635020.15568@kaball.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.02.1208221635020.15568@kaball.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: xen-devel@lists.xensource.com, linux-kernel@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH 0/6] Xen patches for Linux 3.7
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Aug 22, 2012 at 05:19:18PM +0100, Stefano Stabellini wrote:
> Hi Konrad,
> the following are the patches that I am proposing for Linux 3.7.
> I am leaving out the bulk of the ARM patches for the moment.

applied. going to test it overnight.
> 
> 
> Stefano Stabellini (6):
>       xen/events: fix unmask_evtchn for PV on HVM guests
>       xen: missing includes
>       xen: update xen_add_to_physmap interface
>       xen: Introduce xen_pfn_t for pfn and mfn types
>       xen: clear IRQ_NOAUTOEN and IRQ_NOREQUEST
>       xen: allow privcmd for HVM guests
> 
>  arch/ia64/include/asm/xen/interface.h      |    7 ++++++-
>  arch/x86/include/asm/xen/interface.h       |    7 +++++++
>  arch/x86/xen/mmu.c                         |    3 +++
>  drivers/tty/hvc/hvc_xen.c                  |    2 ++
>  drivers/xen/events.c                       |   18 +++++++++++++++---
>  drivers/xen/grant-table.c                  |    1 +
>  drivers/xen/privcmd.c                      |    4 ----
>  drivers/xen/xenbus/xenbus_probe_frontend.c |    1 +
>  include/xen/interface/grant_table.h        |    4 ++--
>  include/xen/interface/memory.h             |    9 ++++++---
>  include/xen/interface/platform.h           |    4 ++--
>  include/xen/interface/xen.h                |    7 +++----
>  include/xen/privcmd.h                      |    3 +--
>  13 files changed, 49 insertions(+), 21 deletions(-)
> 
> 
> A git branch based on v3.6-rc2 is available here:
> 
> git://xenbits.xen.org/people/sstabellini/linux-pvhvm.git for_upstream_3.7
> 
> Cheers,
> 
> Stefano

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 18:54:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 18:54:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4G4I-0006AA-Gm; Wed, 22 Aug 2012 18:54:18 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@citrix.com>) id 1T4G4G-00069W-1Y
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 18:54:16 +0000
Received: from [85.158.138.51:35427] by server-3.bemta-3.messagelabs.com id
	DC/75-13809-7DA25305; Wed, 22 Aug 2012 18:54:15 +0000
X-Env-Sender: julien.grall@citrix.com
X-Msg-Ref: server-7.tower-174.messagelabs.com!1345661652!18579895!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzMzNjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24969 invoked from network); 22 Aug 2012 18:54:14 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 18:54:13 -0000
X-IronPort-AV: E=Sophos;i="4.80,809,1344225600"; d="scan'208";a="205942917"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 14:54:07 -0400
Received: from meteora.cam.xci-test.com (10.80.248.22) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 22 Aug 2012 14:54:06 -0400
From: Julien Grall <julien.grall@citrix.com>
To: qemu-devel@nongnu.org
Date: Wed, 22 Aug 2012 13:30:19 +0100
Message-ID: <10985f0bc427cc258adb11cb97818a4e7ab133c9.1345637459.git.julien.grall@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <cover.1345637459.git.julien.grall@citrix.com>
References: <cover.1345637459.git.julien.grall@citrix.com>
MIME-Version: 1.0
Cc: Julien Grall <julien.grall@citrix.com>, christian.limpach@gmail.com,
	Stefano.Stabellini@eu.citrix.com, xen-devel@lists.xen.org
Subject: [Xen-devel] [QEMU][RFC V2 06/10] xen-pci: register PCI device in
	Xen and handle IOREQ_TYPE_PCI_CONFIG
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

With QEMU disaggregation, each QEMU needs to specify which PCI devices it is
able to handle. It uses the device's place in the topology (domain, bus,
device, function).
When Xen traps an access to the config space, it forges a new
ioreq and forwards it to the right QEMU.

Signed-off-by: Julien Grall <julien.grall@citrix.com>
---
 hw/pci.c   |    6 ++++++
 hw/xen.h   |    1 +
 xen-all.c  |   38 ++++++++++++++++++++++++++++++++++++++
 xen-stub.c |    5 +++++
 4 files changed, 50 insertions(+), 0 deletions(-)

diff --git a/hw/pci.c b/hw/pci.c
index 4d95984..0112edf 100644
--- a/hw/pci.c
+++ b/hw/pci.c
@@ -33,6 +33,7 @@
 #include "qmp-commands.h"
 #include "msi.h"
 #include "msix.h"
+#include "xen.h"
 
 //#define DEBUG_PCI
 #ifdef DEBUG_PCI
@@ -781,6 +782,11 @@ static PCIDevice *do_pci_register_device(PCIDevice *pci_dev, PCIBus *bus,
     pci_dev->devfn = devfn;
     pstrcpy(pci_dev->name, sizeof(pci_dev->name), name);
     pci_dev->irq_state = 0;
+
+    if (xen_enabled() && xen_register_pcidev(pci_dev)) {
+        return NULL;
+    }
+
     pci_config_alloc(pci_dev);
 
     pci_config_set_vendor_id(pci_dev->config, pc->vendor_id);
diff --git a/hw/xen.h b/hw/xen.h
index e5926b7..663731a 100644
--- a/hw/xen.h
+++ b/hw/xen.h
@@ -35,6 +35,7 @@ int xen_pci_slot_get_pirq(PCIDevice *pci_dev, int irq_num);
 void xen_piix3_set_irq(void *opaque, int irq_num, int level);
 void xen_piix_pci_write_config_client(uint32_t address, uint32_t val, int len);
 void xen_hvm_inject_msi(uint64_t addr, uint32_t data);
+int xen_register_pcidev(PCIDevice *pci_dev);
 void xen_cmos_set_s3_resume(void *opaque, int irq, int level);
 
 qemu_irq *xen_interrupt_controller_init(void);
diff --git a/xen-all.c b/xen-all.c
index 14e5d3d..485c312 100644
--- a/xen-all.c
+++ b/xen-all.c
@@ -174,6 +174,16 @@ void xen_piix3_set_irq(void *opaque, int irq_num, int level)
                               irq_num & 3, level);
 }
 
+int xen_register_pcidev(PCIDevice *pci_dev)
+{
+    DPRINTF("register pci %x:%x.%x %s\n", 0, (pci_dev->devfn >> 3) & 0x1f,
+            pci_dev->devfn & 0x7, pci_dev->name);
+
+    return xen_xc_hvm_register_pcidev(xen_xc, xen_domid, serverid,
+                                      0, 0, pci_dev->devfn >> 3,
+                                      pci_dev->devfn & 0x7);
+}
+
 void xen_piix_pci_write_config_client(uint32_t address, uint32_t val, int len)
 {
     int i;
@@ -943,6 +953,29 @@ static void cpu_ioreq_move(ioreq_t *req)
     }
 }
 
+#if __XEN_LATEST_INTERFACE_VERSION__ >= 0x00040300
+static void cpu_ioreq_config_space(ioreq_t *req)
+{
+    uint64_t cf8 = req->addr;
+    uint32_t tmp = req->size;
+    uint16_t size = req->size & 0xff;
+    uint16_t off = req->size >> 16;
+
+    if ((size + off + 0xcfc) > 0xd00) {
+        hw_error("Invalid ioreq config space size = %u off = %u\n",
+                 size, off);
+    }
+
+    req->addr = 0xcfc + off;
+    req->size = size;
+
+    do_outp(0xcf8, 4, cf8);
+    cpu_ioreq_pio(req);
+    req->addr = cf8;
+    req->size = tmp;
+}
+#endif
+
 static void handle_ioreq(ioreq_t *req)
 {
     if (!req->data_is_ptr && (req->dir == IOREQ_WRITE) &&
@@ -962,6 +995,11 @@ static void handle_ioreq(ioreq_t *req)
         case IOREQ_TYPE_INVALIDATE:
             xen_invalidate_map_cache();
             break;
+#if __XEN_LATEST_INTERFACE_VERSION__ >= 0x00040300
+        case IOREQ_TYPE_PCI_CONFIG:
+            cpu_ioreq_config_space(req);
+            break;
+#endif
         default:
             hw_error("Invalid ioreq type 0x%x\n", req->type);
     }
diff --git a/xen-stub.c b/xen-stub.c
index 8ff2b79..0128965 100644
--- a/xen-stub.c
+++ b/xen-stub.c
@@ -25,6 +25,11 @@ void xen_piix3_set_irq(void *opaque, int irq_num, int level)
 {
 }
 
+int xen_register_pcidev(PCIDevice *pci_dev)
+{
+    return 1;
+}
+
 void xen_piix_pci_write_config_client(uint32_t address, uint32_t val, int len)
 {
 }
-- 
Julien Grall


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 18:54:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 18:54:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4G4M-0006Bf-9w; Wed, 22 Aug 2012 18:54:22 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@citrix.com>) id 1T4G4K-00069U-7V
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 18:54:20 +0000
X-Env-Sender: julien.grall@citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1345661648!8568704!4
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzMzNjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32764 invoked from network); 22 Aug 2012 18:54:14 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 18:54:14 -0000
X-IronPort-AV: E=Sophos;i="4.80,809,1344225600"; d="scan'208";a="205942928"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 14:54:10 -0400
Received: from meteora.cam.xci-test.com (10.80.248.22) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 22 Aug 2012 14:54:09 -0400
From: Julien Grall <julien.grall@citrix.com>
To: qemu-devel@nongnu.org
Date: Wed, 22 Aug 2012 13:30:21 +0100
Message-ID: <0358823c4700d4802235bc5790d78967053bc164.1345637459.git.julien.grall@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <cover.1345637459.git.julien.grall@citrix.com>
References: <cover.1345637459.git.julien.grall@citrix.com>
MIME-Version: 1.0
Cc: Julien Grall <julien.grall@citrix.com>, christian.limpach@gmail.com,
	Stefano.Stabellini@eu.citrix.com, xen-devel@lists.xen.org
Subject: [Xen-devel] [QEMU][RFC V2 08/10] xen: audio is not a part of
	default devices
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Signed-off-by: Julien Grall <julien.grall@citrix.com>
---
 arch_init.c |    6 ++++++
 1 files changed, 6 insertions(+), 0 deletions(-)

diff --git a/arch_init.c b/arch_init.c
index 9b46bfc..1077b16 100644
--- a/arch_init.c
+++ b/arch_init.c
@@ -44,6 +44,7 @@
 #include "exec-memory.h"
 #include "hw/pcspk.h"
 #include "qemu/page_cache.h"
+#include "hw/xen.h"
 
 #ifdef DEBUG_ARCH_INIT
 #define DPRINTF(fmt, ...) \
@@ -976,6 +977,9 @@ void select_soundhw(const char *optarg)
 void audio_init(ISABus *isa_bus, PCIBus *pci_bus)
 {
     struct soundhw *c;
+    int register_default_dev;
+
+    xen_set_register_default_dev(0, &register_default_dev);
 
     for (c = soundhw; c->name; ++c) {
         if (c->enabled) {
@@ -990,6 +994,8 @@ void audio_init(ISABus *isa_bus, PCIBus *pci_bus)
             }
         }
     }
+
+    xen_set_register_default_dev(register_default_dev, NULL);
 }
 #else
 void select_soundhw(const char *optarg)
-- 
Julien Grall


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 18:54:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 18:54:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4G4M-0006C6-U0; Wed, 22 Aug 2012 18:54:22 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@citrix.com>) id 1T4G4K-00069b-S2
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 18:54:21 +0000
X-Env-Sender: julien.grall@citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1345661648!8568704!5
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzMzNjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 309 invoked from network); 22 Aug 2012 18:54:14 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 18:54:14 -0000
X-IronPort-AV: E=Sophos;i="4.80,809,1344225600"; d="scan'208";a="205942934"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 14:54:12 -0400
Received: from meteora.cam.xci-test.com (10.80.248.22) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 22 Aug 2012 14:54:11 -0400
From: Julien Grall <julien.grall@citrix.com>
To: qemu-devel@nongnu.org
Date: Wed, 22 Aug 2012 13:30:23 +0100
Message-ID: <2fcc0dde3e9cf58a4cc65504d28c9c599296c4fd.1345637459.git.julien.grall@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <cover.1345637459.git.julien.grall@citrix.com>
References: <cover.1345637459.git.julien.grall@citrix.com>
MIME-Version: 1.0
Cc: Julien Grall <julien.grall@citrix.com>, christian.limpach@gmail.com,
	Stefano.Stabellini@eu.citrix.com, xen-devel@lists.xen.org
Subject: [Xen-devel] [QEMU][RFC V2 10/10] xen: emulate IDE outside default
	device set
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

IDE can be emulated by a different QEMU than the default one.

This patch also fixes ide_get_geometry: when QEMU does not emulate IDE,
it would try to dereference a NULL bus.

Signed-off-by: Julien Grall <julien.grall@citrix.com>
---
 hw/ide/qdev.c |    8 +++++++-
 hw/pc_piix.c  |   38 ++++++++++++++++++++++++--------------
 hw/xen.h      |   10 ++++++++++
 xen-all.c     |    8 +++++++-
 4 files changed, 48 insertions(+), 16 deletions(-)

diff --git a/hw/ide/qdev.c b/hw/ide/qdev.c
index 5ea9b8f..473acd7 100644
--- a/hw/ide/qdev.c
+++ b/hw/ide/qdev.c
@@ -115,7 +115,13 @@ IDEDevice *ide_create_drive(IDEBus *bus, int unit, DriveInfo *drive)
 int ide_get_geometry(BusState *bus, int unit,
                      int16_t *cyls, int8_t *heads, int8_t *secs)
 {
-    IDEState *s = &DO_UPCAST(IDEBus, qbus, bus)->ifs[unit];
+    IDEState *s = NULL;
+
+    if (!bus) {
+        return -1;
+    }
+
+    s = &DO_UPCAST(IDEBus, qbus, bus)->ifs[unit];
 
     if (s->drive_kind != IDE_HD || !s->bs) {
         return -1;
diff --git a/hw/pc_piix.c b/hw/pc_piix.c
index 6cb0a2a..b904100 100644
--- a/hw/pc_piix.c
+++ b/hw/pc_piix.c
@@ -148,6 +148,7 @@ static void pc_init1(MemoryRegion *system_memory,
     MemoryRegion *pci_memory;
     MemoryRegion *rom_memory;
     void *fw_cfg = NULL;
+    int register_default_dev = 0;
 
     pc_cpus_init(cpu_model);
 
@@ -242,23 +243,32 @@ static void pc_init1(MemoryRegion *system_memory,
             pci_nic_init_nofail(nd, "e1000", NULL);
     }
 
-    ide_drive_get(hd, MAX_IDE_BUS);
-    if (pci_enabled) {
-        PCIDevice *dev;
-        if (xen_enabled()) {
-            dev = pci_piix3_xen_ide_init(pci_bus, hd, piix3_devfn + 1);
+    if (!xen_enabled() || xen_is_emulated_ide()) {
+        xen_set_register_default_dev(0, &register_default_dev);
+        ide_drive_get(hd, MAX_IDE_BUS);
+        if (pci_enabled) {
+            PCIDevice *dev;
+            if (xen_enabled()) {
+                dev = pci_piix3_xen_ide_init(pci_bus, hd, piix3_devfn + 1);
+            } else {
+                dev = pci_piix3_ide_init(pci_bus, hd, piix3_devfn + 1);
+            }
+            idebus[0] = qdev_get_child_bus(&dev->qdev, "ide.0");
+            idebus[1] = qdev_get_child_bus(&dev->qdev, "ide.1");
         } else {
-            dev = pci_piix3_ide_init(pci_bus, hd, piix3_devfn + 1);
+            for (i = 0; i < MAX_IDE_BUS; i++) {
+                ISADevice *dev;
+                dev = isa_ide_init(isa_bus, ide_iobase[i], ide_iobase2[i],
+                                   ide_irq[i],
+                                   hd[MAX_IDE_DEVS * i],
+                                   hd[MAX_IDE_DEVS * i + 1]);
+                idebus[i] = qdev_get_child_bus(&dev->qdev, "ide.0");
+            }
         }
-        idebus[0] = qdev_get_child_bus(&dev->qdev, "ide.0");
-        idebus[1] = qdev_get_child_bus(&dev->qdev, "ide.1");
+        xen_set_register_default_dev(register_default_dev, NULL);
     } else {
-        for(i = 0; i < MAX_IDE_BUS; i++) {
-            ISADevice *dev;
-            dev = isa_ide_init(isa_bus, ide_iobase[i], ide_iobase2[i],
-                               ide_irq[i],
-                               hd[MAX_IDE_DEVS * i], hd[MAX_IDE_DEVS * i + 1]);
-            idebus[i] = qdev_get_child_bus(&dev->qdev, "ide.0");
+        for (i = 0; i < MAX_IDE_BUS; i++) {
+            idebus[i] = NULL;
         }
     }
 
diff --git a/hw/xen.h b/hw/xen.h
index 3c8724f..3c89fb9 100644
--- a/hw/xen.h
+++ b/hw/xen.h
@@ -22,6 +22,7 @@ extern enum xen_mode xen_mode;
 
 extern int xen_allowed;
 extern int xen_register_default_dev;
+extern int xen_emulate_ide;
 
 static inline int xen_enabled(void)
 {
@@ -32,6 +33,15 @@ static inline int xen_enabled(void)
 #endif
 }
 
+static inline int xen_is_emulated_ide(void)
+{
+#if defined(CONFIG_XEN)
+    return xen_emulate_ide;
+#else
+    return 1;
+#endif
+}
+
 static inline int xen_is_registered_default_dev(void)
 {
 #if defined(CONFIG_XEN)
diff --git a/xen-all.c b/xen-all.c
index f424cce..f091908 100644
--- a/xen-all.c
+++ b/xen-all.c
@@ -43,6 +43,8 @@ static uint32_t xen_dmid = ~0;
 int xen_register_default_dev = 0;
 static int xen_emulate_default_dev = 1;
 
+int xen_emulate_ide = 0;
+
 /* Compatibility with older version */
 #if __XEN_LATEST_INTERFACE_VERSION__ < 0x0003020a
 static inline uint32_t xen_vcpu_eport(shared_iopage_t *shared_page, int i)
@@ -1342,6 +1344,8 @@ int xen_hvm_init(void)
                                        "xen_dmid", ~0);
         xen_emulate_default_dev = qemu_opt_get_bool(QTAILQ_FIRST(&list->head),
                                                     "xen_default_dev", 1);
+        xen_emulate_ide = qemu_opt_get_bool(QTAILQ_FIRST(&list->head),
+                                            "xen_emulate_ide", 1);
     }
 
     state = g_malloc0(sizeof (XenIOState));
@@ -1437,12 +1441,14 @@ int xen_hvm_init(void)
         fprintf(stderr, "%s: xen backend core setup failed\n", __FUNCTION__);
         exit(1);
     }
-    xen_be_register("qdisk", &xen_blkdev_ops);
 
     if (xen_emulate_default_dev) {
         xen_be_register("console", &xen_console_ops);
         xen_be_register("vkbd", &xen_kbdmouse_ops);
     }
+    if (xen_emulate_ide) {
+        xen_be_register("qdisk", &xen_blkdev_ops);
+    }
     xen_read_physmap(state);
 
     return 0;
-- 
Julien Grall


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 18:54:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 18:54:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4G4T-0006FF-Ng; Wed, 22 Aug 2012 18:54:29 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@citrix.com>) id 1T4G4R-0006EH-Vu
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 18:54:28 +0000
Received: from [85.158.143.35:49808] by server-2.bemta-4.messagelabs.com id
	DD/1F-21239-3EA25305; Wed, 22 Aug 2012 18:54:27 +0000
X-Env-Sender: julien.grall@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1345661661!4639216!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjU4OTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5661 invoked from network); 22 Aug 2012 18:54:24 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 18:54:24 -0000
X-IronPort-AV: E=Sophos;i="4.80,809,1344225600"; d="scan'208";a="35484613"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 14:54:05 -0400
Received: from meteora.cam.xci-test.com (10.80.248.22) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 22 Aug 2012 14:54:05 -0400
From: Julien Grall <julien.grall@citrix.com>
To: qemu-devel@nongnu.org
Date: Wed, 22 Aug 2012 13:30:18 +0100
Message-ID: <b7c4e2af5c7a7091d99cd8e0d838fb09daa376e5.1345637459.git.julien.grall@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <cover.1345637459.git.julien.grall@citrix.com>
References: <cover.1345637459.git.julien.grall@citrix.com>
MIME-Version: 1.0
Cc: Julien Grall <julien.grall@citrix.com>, christian.limpach@gmail.com,
	Stefano.Stabellini@eu.citrix.com, xen-devel@lists.xen.org
Subject: [Xen-devel] [QEMU][RFC V2 05/10] xen-memory: register memory/IO
	range in Xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add a MemoryListener on IO and modify the one on memory.
Be careful: the new listener is not called if the range is still registered
with register_ioport*, so Xen will never know that this QEMU handles the
range.

IO requests work as before; the only difference is that QEMU will never
receive an IO request that it cannot handle.

Signed-off-by: Julien Grall <julien.grall@citrix.com>
---
 xen-all.c |  113 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 113 insertions(+), 0 deletions(-)

diff --git a/xen-all.c b/xen-all.c
index 5f05838..14e5d3d 100644
--- a/xen-all.c
+++ b/xen-all.c
@@ -152,6 +152,7 @@ typedef struct XenIOState {
 
     struct xs_handle *xenstore;
     MemoryListener memory_listener;
+    MemoryListener io_listener;
     QLIST_HEAD(, XenPhysmap) physmap;
     target_phys_addr_t free_phys_offset;
     const XenPhysmap *log_for_dirtybit;
@@ -195,6 +196,31 @@ void xen_hvm_inject_msi(uint64_t addr, uint32_t data)
     xen_xc_hvm_inject_msi(xen_xc, xen_domid, addr, data);
 }
 
+static void xen_map_iorange(target_phys_addr_t addr, uint64_t size,
+                            int is_mmio, const char *name)
+{
+    /* Don't register xen.ram */
+    if (is_mmio && !strncmp(name, "xen.ram", 7)) {
+        return;
+    }
+
+    DPRINTF("map %s %s 0x"TARGET_FMT_plx" - 0x"TARGET_FMT_plx"\n",
+            (is_mmio) ? "mmio" : "io", name, addr, addr + size - 1);
+
+    xen_xc_hvm_map_io_range_to_ioreq_server(xen_xc, xen_domid, serverid,
+                                            is_mmio, addr, addr + size - 1);
+}
+
+static void xen_unmap_iorange(target_phys_addr_t addr, uint64_t size,
+                              int is_mmio)
+{
+    DPRINTF("unmap %s 0x"TARGET_FMT_plx" - 0x"TARGET_FMT_plx"\n",
+            (is_mmio) ? "mmio" : "io", addr, addr + size - 1);
+
+    xen_xc_hvm_unmap_io_range_from_ioreq_server(xen_xc, xen_domid, serverid,
+                                                is_mmio, addr);
+}
+
 static void xen_suspend_notifier(Notifier *notifier, void *data)
 {
     xc_set_hvm_param(xen_xc, xen_domid, HVM_PARAM_ACPI_S_STATE, 3);
@@ -527,12 +553,16 @@ static void xen_region_add(MemoryListener *listener,
                            MemoryRegionSection *section)
 {
     xen_set_memory(listener, section, true);
+    xen_map_iorange(section->offset_within_address_space,
+                    section->size, 1, section->mr->name);
 }
 
 static void xen_region_del(MemoryListener *listener,
                            MemoryRegionSection *section)
 {
     xen_set_memory(listener, section, false);
+    xen_unmap_iorange(section->offset_within_address_space,
+                      section->size, 1);
 }
 
 static void xen_region_nop(MemoryListener *listener,
@@ -651,6 +681,86 @@ static MemoryListener xen_memory_listener = {
     .priority = 10,
 };
 
+static void xen_io_begin(MemoryListener *listener)
+{
+}
+
+static void xen_io_commit(MemoryListener *listener)
+{
+}
+
+static void xen_io_region_add(MemoryListener *listener,
+                              MemoryRegionSection *section)
+{
+    xen_map_iorange(section->offset_within_address_space,
+                    section->size, 0, section->mr->name);
+}
+
+static void xen_io_region_del(MemoryListener *listener,
+                              MemoryRegionSection *section)
+{
+    xen_unmap_iorange(section->offset_within_address_space,
+                      section->size, 0);
+}
+
+static void xen_io_region_nop(MemoryListener *listener,
+                              MemoryRegionSection *section)
+{
+}
+
+static void xen_io_log_start(MemoryListener *listener,
+                             MemoryRegionSection *section)
+{
+}
+
+static void xen_io_log_stop(MemoryListener *listener,
+                            MemoryRegionSection *section)
+{
+}
+
+static void xen_io_log_sync(MemoryListener *listener,
+                            MemoryRegionSection *section)
+{
+}
+
+static void xen_io_log_global_start(MemoryListener *listener)
+{
+}
+
+static void xen_io_log_global_stop(MemoryListener *listener)
+{
+}
+
+static void xen_io_eventfd_add(MemoryListener *listener,
+                               MemoryRegionSection *section,
+                               bool match_data, uint64_t data,
+                               EventNotifier *e)
+{
+}
+
+static void xen_io_eventfd_del(MemoryListener *listener,
+                               MemoryRegionSection *section,
+                               bool match_data, uint64_t data,
+                               EventNotifier *e)
+{
+}
+
+static MemoryListener xen_io_listener = {
+    .begin = xen_io_begin,
+    .commit = xen_io_commit,
+    .region_add = xen_io_region_add,
+    .region_del = xen_io_region_del,
+    .region_nop = xen_io_region_nop,
+    .log_start = xen_io_log_start,
+    .log_stop = xen_io_log_stop,
+    .log_sync = xen_io_log_sync,
+    .log_global_start = xen_io_log_global_start,
+    .log_global_stop = xen_io_log_global_stop,
+    .eventfd_add = xen_io_eventfd_add,
+    .eventfd_del = xen_io_eventfd_del,
+    .priority = 10,
+};
+
 /* VCPU Operations, MMIO, IO ring ... */
 
 static void xen_reset_vcpu(void *opaque)
@@ -1239,6 +1349,9 @@ int xen_hvm_init(void)
     memory_listener_register(&state->memory_listener, get_system_memory());
     state->log_for_dirtybit = NULL;
 
+    state->io_listener = xen_io_listener;
+    memory_listener_register(&state->io_listener, get_system_io());
+
     /* Initialize backend core & drivers */
     if (xen_be_init() != 0) {
         fprintf(stderr, "%s: xen backend core setup failed\n", __FUNCTION__);
-- 
Julien Grall


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 18:54:43 2012
From: Julien Grall <julien.grall@citrix.com>
To: qemu-devel@nongnu.org
Date: Wed, 22 Aug 2012 13:30:21 +0100
Message-ID: <0358823c4700d4802235bc5790d78967053bc164.1345637459.git.julien.grall@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <cover.1345637459.git.julien.grall@citrix.com>
References: <cover.1345637459.git.julien.grall@citrix.com>
MIME-Version: 1.0
Cc: Julien Grall <julien.grall@citrix.com>, christian.limpach@gmail.com,
	Stefano.Stabellini@eu.citrix.com, xen-devel@lists.xen.org
Subject: [Xen-devel] [QEMU][RFC V2 08/10] xen: audio is not a part of
	default devices


Signed-off-by: Julien Grall <julien.grall@citrix.com>
---
 arch_init.c |    6 ++++++
 1 files changed, 6 insertions(+), 0 deletions(-)

diff --git a/arch_init.c b/arch_init.c
index 9b46bfc..1077b16 100644
--- a/arch_init.c
+++ b/arch_init.c
@@ -44,6 +44,7 @@
 #include "exec-memory.h"
 #include "hw/pcspk.h"
 #include "qemu/page_cache.h"
+#include "hw/xen.h"
 
 #ifdef DEBUG_ARCH_INIT
 #define DPRINTF(fmt, ...) \
@@ -976,6 +977,9 @@ void select_soundhw(const char *optarg)
 void audio_init(ISABus *isa_bus, PCIBus *pci_bus)
 {
     struct soundhw *c;
+    int register_default_dev;
+
+    xen_set_register_default_dev(0, &register_default_dev);
 
     for (c = soundhw; c->name; ++c) {
         if (c->enabled) {
@@ -990,6 +994,8 @@ void audio_init(ISABus *isa_bus, PCIBus *pci_bus)
             }
         }
     }
+
+    xen_set_register_default_dev(register_default_dev, NULL);
 }
 #else
 void select_soundhw(const char *optarg)
-- 
Julien Grall



From xen-devel-bounces@lists.xen.org Wed Aug 22 18:54:43 2012
From: Julien Grall <julien.grall@citrix.com>
To: qemu-devel@nongnu.org
Date: Wed, 22 Aug 2012 13:30:19 +0100
Message-ID: <10985f0bc427cc258adb11cb97818a4e7ab133c9.1345637459.git.julien.grall@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <cover.1345637459.git.julien.grall@citrix.com>
References: <cover.1345637459.git.julien.grall@citrix.com>
MIME-Version: 1.0
Cc: Julien Grall <julien.grall@citrix.com>, christian.limpach@gmail.com,
	Stefano.Stabellini@eu.citrix.com, xen-devel@lists.xen.org
Subject: [Xen-devel] [QEMU][RFC V2 06/10] xen-pci: register PCI device in
	Xen and handle IOREQ_TYPE_PCI_CONFIG

With QEMU disaggregation, each QEMU instance needs to tell Xen which PCI
devices it is able to handle. It does so using the device's place in the PCI
topology (domain, bus, device, function).
When Xen traps an access to the config space, it forges a new ioreq and
forwards it to the right QEMU.

Signed-off-by: Julien Grall <julien.grall@citrix.com>
---
 hw/pci.c   |    6 ++++++
 hw/xen.h   |    1 +
 xen-all.c  |   38 ++++++++++++++++++++++++++++++++++++++
 xen-stub.c |    5 +++++
 4 files changed, 50 insertions(+), 0 deletions(-)

diff --git a/hw/pci.c b/hw/pci.c
index 4d95984..0112edf 100644
--- a/hw/pci.c
+++ b/hw/pci.c
@@ -33,6 +33,7 @@
 #include "qmp-commands.h"
 #include "msi.h"
 #include "msix.h"
+#include "xen.h"
 
 //#define DEBUG_PCI
 #ifdef DEBUG_PCI
@@ -781,6 +782,11 @@ static PCIDevice *do_pci_register_device(PCIDevice *pci_dev, PCIBus *bus,
     pci_dev->devfn = devfn;
     pstrcpy(pci_dev->name, sizeof(pci_dev->name), name);
     pci_dev->irq_state = 0;
+
+    if (xen_enabled() && xen_register_pcidev(pci_dev)) {
+        return NULL;
+    }
+
     pci_config_alloc(pci_dev);
 
     pci_config_set_vendor_id(pci_dev->config, pc->vendor_id);
diff --git a/hw/xen.h b/hw/xen.h
index e5926b7..663731a 100644
--- a/hw/xen.h
+++ b/hw/xen.h
@@ -35,6 +35,7 @@ int xen_pci_slot_get_pirq(PCIDevice *pci_dev, int irq_num);
 void xen_piix3_set_irq(void *opaque, int irq_num, int level);
 void xen_piix_pci_write_config_client(uint32_t address, uint32_t val, int len);
 void xen_hvm_inject_msi(uint64_t addr, uint32_t data);
+int xen_register_pcidev(PCIDevice *pci_dev);
 void xen_cmos_set_s3_resume(void *opaque, int irq, int level);
 
 qemu_irq *xen_interrupt_controller_init(void);
diff --git a/xen-all.c b/xen-all.c
index 14e5d3d..485c312 100644
--- a/xen-all.c
+++ b/xen-all.c
@@ -174,6 +174,16 @@ void xen_piix3_set_irq(void *opaque, int irq_num, int level)
                               irq_num & 3, level);
 }
 
+int xen_register_pcidev(PCIDevice *pci_dev)
+{
+    DPRINTF("register pci %x:%x.%x %s\n", 0, (pci_dev->devfn >> 3) & 0x1f,
+            pci_dev->devfn & 0x7, pci_dev->name);
+
+    return xen_xc_hvm_register_pcidev(xen_xc, xen_domid, serverid,
+                                      0, 0, pci_dev->devfn >> 3,
+                                      pci_dev->devfn & 0x7);
+}
+
 void xen_piix_pci_write_config_client(uint32_t address, uint32_t val, int len)
 {
     int i;
@@ -943,6 +953,29 @@ static void cpu_ioreq_move(ioreq_t *req)
     }
 }
 
+#if __XEN_LATEST_INTERFACE_VERSION__ >= 0x00040300
+static void cpu_ioreq_config_space(ioreq_t *req)
+{
+    uint64_t cf8 = req->addr;
+    uint32_t tmp = req->size;
+    uint16_t size = req->size & 0xff;
+    uint16_t off = req->size >> 16;
+
+    if ((size + off + 0xcfc) > 0xd00) {
+        hw_error("Invalid ioreq config space size = %u off = %u\n",
+                 size, off);
+    }
+
+    req->addr = 0xcfc + off;
+    req->size = size;
+
+    do_outp(0xcf8, 4, cf8);
+    cpu_ioreq_pio(req);
+    req->addr = cf8;
+    req->size = tmp;
+}
+#endif
+
 static void handle_ioreq(ioreq_t *req)
 {
     if (!req->data_is_ptr && (req->dir == IOREQ_WRITE) &&
@@ -962,6 +995,11 @@ static void handle_ioreq(ioreq_t *req)
         case IOREQ_TYPE_INVALIDATE:
             xen_invalidate_map_cache();
             break;
+#if __XEN_LATEST_INTERFACE_VERSION__ >= 0x00040300
+        case IOREQ_TYPE_PCI_CONFIG:
+            cpu_ioreq_config_space(req);
+            break;
+#endif
         default:
             hw_error("Invalid ioreq type 0x%x\n", req->type);
     }
diff --git a/xen-stub.c b/xen-stub.c
index 8ff2b79..0128965 100644
--- a/xen-stub.c
+++ b/xen-stub.c
@@ -25,6 +25,11 @@ void xen_piix3_set_irq(void *opaque, int irq_num, int level)
 {
 }
 
+int xen_register_pcidev(PCIDevice *pci_dev)
+{
+    return 1;
+}
+
 void xen_piix_pci_write_config_client(uint32_t address, uint32_t val, int len)
 {
 }
-- 
Julien Grall



From xen-devel-bounces@lists.xen.org Wed Aug 22 18:54:43 2012
From: Julien Grall <julien.grall@citrix.com>
To: qemu-devel@nongnu.org
Date: Wed, 22 Aug 2012 13:30:16 +0100
Message-ID: <60e94dfc4ce6ebcd07e3256e314b9d0c99a13d1d.1345637459.git.julien.grall@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <cover.1345637459.git.julien.grall@citrix.com>
References: <cover.1345637459.git.julien.grall@citrix.com>
MIME-Version: 1.0
Cc: Julien Grall <julien.grall@citrix.com>, christian.limpach@gmail.com,
	Stefano.Stabellini@eu.citrix.com, xen-devel@lists.xen.org
Subject: [Xen-devel] [QEMU][RFC V2 03/10] xen: add wrappers for new Xen
	disaggregation hypercalls

QEMU disaggregation is not supported on old Xen versions.

Signed-off-by: Julien Grall <julien.grall@citrix.com>
---
 hw/xen_common.h |   58 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 58 insertions(+), 0 deletions(-)

diff --git a/hw/xen_common.h b/hw/xen_common.h
index 727757a..b2525ad 100644
--- a/hw/xen_common.h
+++ b/hw/xen_common.h
@@ -152,6 +152,64 @@ static inline int xen_xc_hvm_inject_msi(XenXC xen_xc, domid_t dom,
 }
 #endif
 
+/* Xen before 4.3 */
+#if CONFIG_XEN_CTRL_INTERFACE_VERSION < 430
+static inline int xen_xc_hvm_register_pcidev(XenXC xen_xc, domid_t dom,
+        unsigned int serverid, uint8_t domain,
+        uint8_t bus, uint8_t device, uint8_t function)
+{
+    return 0;
+}
+
+static inline int xen_xc_hvm_map_io_range_to_ioreq_server(XenXC xen_xc,
+        domid_t dom, unsigned int serverid, int is_mmio,
+        uint64_t start, uint64_t end)
+{
+    return 1;
+}
+
+static inline int xen_xc_hvm_unmap_io_range_from_ioreq_server(XenXC xen_xc,
+        domid_t dom, unsigned int serverid, int is_mmio, uint64_t start)
+{
+    return 1;
+}
+
+static inline int xen_xc_hvm_register_ioreq_server(XenXC xen_xc, domid_t dom)
+{
+    return 0;
+}
+
+#else
+static inline int xen_xc_hvm_register_pcidev(XenXC xen_xc, domid_t dom,
+        unsigned int serverid, uint8_t domain,
+        uint8_t bus, uint8_t device, uint8_t function)
+{
+    return xc_hvm_register_pcidev(xen_xc, dom, serverid, domain,
+                                  bus, device, function);
+}
+
+static inline int xen_xc_hvm_map_io_range_to_ioreq_server(XenXC xen_xc,
+        domid_t dom, unsigned int serverid, int is_mmio,
+        uint64_t start, uint64_t end)
+{
+    return xc_hvm_map_io_range_to_ioreq_server(xen_xc, dom, serverid, is_mmio,
+                                               start, end);
+}
+
+static inline int xen_xc_hvm_unmap_io_range_from_ioreq_server(XenXC xen_xc,
+        domid_t dom, unsigned int serverid, int is_mmio, uint64_t start)
+{
+    return xc_hvm_unmap_io_range_from_ioreq_server(xen_xc, dom, serverid,
+                                                   is_mmio, start);
+}
+
+static inline int xen_xc_hvm_register_ioreq_server(XenXC xen_xc, domid_t dom)
+{
+    return xc_hvm_register_ioreq_server(xen_xc, dom);
+}
+
+#endif
+
 void destroy_hvm_domain(bool reboot);
 
 /* shutdown/destroy current domain because of an error */
-- 
Julien Grall



From xen-devel-bounces@lists.xen.org Wed Aug 22 18:54:43 2012
From: Julien Grall <julien.grall@citrix.com>
To: qemu-devel@nongnu.org
Date: Wed, 22 Aug 2012 13:30:14 +0100
Message-ID: <bd5dc1c039b5b6d30ac785630677ad3339039f6b.1345637459.git.julien.grall@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <cover.1345637459.git.julien.grall@citrix.com>
References: <cover.1345637459.git.julien.grall@citrix.com>
MIME-Version: 1.0
Cc: Julien Grall <julien.grall@citrix.com>, christian.limpach@gmail.com,
	Stefano.Stabellini@eu.citrix.com, xen-devel@lists.xen.org
Subject: [Xen-devel] [QEMU][RFC V2 01/10] xen: add new machine options to
	support QEMU disaggregation in Xen environment

    - xen_dmid: specify the id of this QEMU instance. It is used to
    retrieve/store information in XenStore.
    - xen_default_dev (on/off): as the default devices must be created in
    each QEMU (due to code dependencies), this option specifies whether this
    instance registers the IO ranges/PCI slots of the default devices
    (root bridge, south bridge, ...) via a Xen hypercall.
    - xen_emulate_ide (on/off): enable/disable IDE emulation in this QEMU.

Signed-off-by: Julien Grall <julien.grall@citrix.com>
---
 qemu-config.c |   12 ++++++++++++
 1 files changed, 12 insertions(+), 0 deletions(-)

diff --git a/qemu-config.c b/qemu-config.c
index c05ffbc..7740442 100644
--- a/qemu-config.c
+++ b/qemu-config.c
@@ -612,6 +612,18 @@ static QemuOptsList qemu_machine_opts = {
             .name = "dump-guest-core",
             .type = QEMU_OPT_BOOL,
             .help = "Include guest memory in  a core dump",
+        }, {
+            .name = "xen_dmid",
+            .type = QEMU_OPT_NUMBER,
+            .help = "Xen device model id",
+        }, {
+            .name = "xen_default_dev",
+            .type = QEMU_OPT_BOOL,
+            .help = "emulate Xen default device (South Bridge, IDE, ...)"
+        }, {
+            .name = "xen_emulate_ide",
+            .type = QEMU_OPT_BOOL,
+            .help = "emulate IDE with XEN"
         },
         { /* End of list */ }
     },
-- 
Julien Grall



From xen-devel-bounces@lists.xen.org Wed Aug 22 18:54:43 2012
From: Julien Grall <julien.grall@citrix.com>
To: qemu-devel@nongnu.org
Date: Wed, 22 Aug 2012 13:30:13 +0100
Message-ID: <cover.1345637459.git.julien.grall@citrix.com>
X-Mailer: git-send-email 1.7.2.5
MIME-Version: 1.0
Cc: Julien Grall <julien.grall@citrix.com>, christian.limpach@gmail.com,
	Stefano.Stabellini@eu.citrix.com, xen-devel@lists.xen.org
Subject: [Xen-devel] [QEMU][RFC V2 00/10] QEMU disaggregation in Xen
	environment.

Hello,

This patch series only concerns QEMU. Another series will follow for Xen.

I'm currently working on QEMU disaggregation in a Xen environment. The
goal is to be able to run multiple QEMUs for the same domain
(http://lists.xen.org/archives/html/xen-devel/2012-03/msg00299.html).

I already sent a version of this patch series a few months ago:
    - QEMU: https://lists.gnu.org/archive/html/qemu-devel/2012-03/msg04401.html
    - Xen: http://lists.xen.org/archives/html/xen-devel/2012-03/msg01947.html
Based on the feedback received, I have improved both the QEMU and Xen modifications.
As before, I will send two patch series, one for QEMU and the other for Xen.

Modifications between V1 and V2:
    - introduce new machine options
    - use a memory listener to avoid Xen-specific code in the QEMU core
    (depends on the "[PATCH V5 0/8] memory: unify ioport registration" patch series)
    - implement disaggregation
    - add wrapper for older Xen version

Julien Grall (10):
  xen: add new machine options to support QEMU disaggregation in Xen
    environment
  xen: modify QEMU status path in XenStore
  xen: add wrappers for new Xen disaggregation hypercalls
  xen-hvm: register qemu as ioreq server and retrieve shared pages
  xen-memory: register memory/IO range in Xen
  xen-pci: register PCI device in Xen and handle IOREQ_TYPE_PCI_CONFIG
  xen: specify which device is part of default devices
  xen: audio is not a part of default devices
  xen-memory: handle node "device_model" for physical mapping
  xen: emulate IDE outside default device set

 arch_init.c     |    6 +
 hw/ide/qdev.c   |    8 ++-
 hw/pc_piix.c    |   40 +++++---
 hw/pci.c        |    6 +
 hw/xen.h        |   31 ++++++
 hw/xen_common.h |   58 +++++++++++
 qemu-config.c   |   12 ++
 xen-all.c       |  304 +++++++++++++++++++++++++++++++++++++++++++++++++++++--
 xen-stub.c      |    5 +
 9 files changed, 447 insertions(+), 23 deletions(-)

-- 
Julien Grall



From xen-devel-bounces@lists.xen.org Wed Aug 22 18:54:43 2012
From: Julien Grall <julien.grall@citrix.com>
To: qemu-devel@nongnu.org
Date: Wed, 22 Aug 2012 13:30:18 +0100
Message-ID: <b7c4e2af5c7a7091d99cd8e0d838fb09daa376e5.1345637459.git.julien.grall@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <cover.1345637459.git.julien.grall@citrix.com>
References: <cover.1345637459.git.julien.grall@citrix.com>
MIME-Version: 1.0
Cc: Julien Grall <julien.grall@citrix.com>, christian.limpach@gmail.com,
	Stefano.Stabellini@eu.citrix.com, xen-devel@lists.xen.org
Subject: [Xen-devel] [QEMU][RFC V2 05/10] xen-memory: register memory/IO
	range in Xen

Add a memory listener on IO and modify the one on memory.
Be careful: the IO listener is not called if the range is still registered with
register_ioport*, so Xen will never know that this QEMU handles that range.

IO requests work as before; the only difference is that QEMU will never receive
an IO request that it cannot handle.

Signed-off-by: Julien Grall <julien.grall@citrix.com>
---
 xen-all.c |  113 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 113 insertions(+), 0 deletions(-)

diff --git a/xen-all.c b/xen-all.c
index 5f05838..14e5d3d 100644
--- a/xen-all.c
+++ b/xen-all.c
@@ -152,6 +152,7 @@ typedef struct XenIOState {
 
     struct xs_handle *xenstore;
     MemoryListener memory_listener;
+    MemoryListener io_listener;
     QLIST_HEAD(, XenPhysmap) physmap;
     target_phys_addr_t free_phys_offset;
     const XenPhysmap *log_for_dirtybit;
@@ -195,6 +196,31 @@ void xen_hvm_inject_msi(uint64_t addr, uint32_t data)
     xen_xc_hvm_inject_msi(xen_xc, xen_domid, addr, data);
 }
 
+static void xen_map_iorange(target_phys_addr_t addr, uint64_t size,
+                            int is_mmio, const char *name)
+{
+    /* Don't register xen.ram */
+    if (is_mmio && !strncmp(name, "xen.ram", 7)) {
+        return;
+    }
+
+    DPRINTF("map %s %s 0x"TARGET_FMT_plx" - 0x"TARGET_FMT_plx"\n",
+            (is_mmio) ? "mmio" : "io", name, addr, addr + size - 1);
+
+    xen_xc_hvm_map_io_range_to_ioreq_server(xen_xc, xen_domid, serverid,
+                                            is_mmio, addr, addr + size - 1);
+}
+
+static void xen_unmap_iorange(target_phys_addr_t addr, uint64_t size,
+                              int is_mmio)
+{
+    DPRINTF("unmap %s 0x"TARGET_FMT_plx" - 0x"TARGET_FMT_plx"\n",
+            (is_mmio) ? "mmio" : "io", addr, addr + size - 1);
+
+    xen_xc_hvm_unmap_io_range_from_ioreq_server(xen_xc, xen_domid, serverid,
+                                                is_mmio, addr);
+}
+
 static void xen_suspend_notifier(Notifier *notifier, void *data)
 {
     xc_set_hvm_param(xen_xc, xen_domid, HVM_PARAM_ACPI_S_STATE, 3);
@@ -527,12 +553,16 @@ static void xen_region_add(MemoryListener *listener,
                            MemoryRegionSection *section)
 {
     xen_set_memory(listener, section, true);
+    xen_map_iorange(section->offset_within_address_space,
+                    section->size, 1, section->mr->name);
 }
 
 static void xen_region_del(MemoryListener *listener,
                            MemoryRegionSection *section)
 {
     xen_set_memory(listener, section, false);
+    xen_unmap_iorange(section->offset_within_address_space,
+                      section->size, 1);
 }
 
 static void xen_region_nop(MemoryListener *listener,
@@ -651,6 +681,86 @@ static MemoryListener xen_memory_listener = {
     .priority = 10,
 };
 
+static void xen_io_begin(MemoryListener *listener)
+{
+}
+
+static void xen_io_commit(MemoryListener *listener)
+{
+}
+
+static void xen_io_region_add(MemoryListener *listener,
+                              MemoryRegionSection *section)
+{
+    xen_map_iorange(section->offset_within_address_space,
+                    section->size, 0, section->mr->name);
+}
+
+static void xen_io_region_del(MemoryListener *listener,
+                              MemoryRegionSection *section)
+{
+    xen_unmap_iorange(section->offset_within_address_space,
+                      section->size, 0);
+}
+
+static void xen_io_region_nop(MemoryListener *listener,
+                              MemoryRegionSection *section)
+{
+}
+
+static void xen_io_log_start(MemoryListener *listener,
+                             MemoryRegionSection *section)
+{
+}
+
+static void xen_io_log_stop(MemoryListener *listener,
+                            MemoryRegionSection *section)
+{
+}
+
+static void xen_io_log_sync(MemoryListener *listener,
+                            MemoryRegionSection *section)
+{
+}
+
+static void xen_io_log_global_start(MemoryListener *listener)
+{
+}
+
+static void xen_io_log_global_stop(MemoryListener *listener)
+{
+}
+
+static void xen_io_eventfd_add(MemoryListener *listener,
+                               MemoryRegionSection *section,
+                               bool match_data, uint64_t data,
+                               EventNotifier *e)
+{
+}
+
+static void xen_io_eventfd_del(MemoryListener *listener,
+                               MemoryRegionSection *section,
+                               bool match_data, uint64_t data,
+                               EventNotifier *e)
+{
+}
+
+static MemoryListener xen_io_listener = {
+    .begin = xen_io_begin,
+    .commit = xen_io_commit,
+    .region_add = xen_io_region_add,
+    .region_del = xen_io_region_del,
+    .region_nop = xen_io_region_nop,
+    .log_start = xen_io_log_start,
+    .log_stop = xen_io_log_stop,
+    .log_sync = xen_io_log_sync,
+    .log_global_start = xen_io_log_global_start,
+    .log_global_stop = xen_io_log_global_stop,
+    .eventfd_add = xen_io_eventfd_add,
+    .eventfd_del = xen_io_eventfd_del,
+    .priority = 10,
+};
+
 /* VCPU Operations, MMIO, IO ring ... */
 
 static void xen_reset_vcpu(void *opaque)
@@ -1239,6 +1349,9 @@ int xen_hvm_init(void)
     memory_listener_register(&state->memory_listener, get_system_memory());
     state->log_for_dirtybit = NULL;
 
+    state->io_listener = xen_io_listener;
+    memory_listener_register(&state->io_listener, get_system_io());
+
     /* Initialize backend core & drivers */
     if (xen_be_init() != 0) {
         fprintf(stderr, "%s: xen backend core setup failed\n", __FUNCTION__);
-- 
Julien Grall


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 18:54:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 18:54:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4G4H-00069w-QI; Wed, 22 Aug 2012 18:54:17 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@citrix.com>) id 1T4G4F-00069V-Og
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 18:54:15 +0000
Received: from [85.158.139.83:51184] by server-2.bemta-5.messagelabs.com id
	FB/54-10142-6DA25305; Wed, 22 Aug 2012 18:54:14 +0000
X-Env-Sender: julien.grall@citrix.com
X-Msg-Ref: server-3.tower-182.messagelabs.com!1345661651!29534497!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzMzNjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32496 invoked from network); 22 Aug 2012 18:54:14 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 18:54:14 -0000
X-IronPort-AV: E=Sophos;i="4.80,809,1344225600"; d="scan'208";a="205942923"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 14:54:09 -0400
Received: from meteora.cam.xci-test.com (10.80.248.22) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 22 Aug 2012 14:54:07 -0400
From: Julien Grall <julien.grall@citrix.com>
To: qemu-devel@nongnu.org
Date: Wed, 22 Aug 2012 13:30:20 +0100
Message-ID: <552327742c4491e4dadbccfa189bf9e2ab706ba4.1345637459.git.julien.grall@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <cover.1345637459.git.julien.grall@citrix.com>
References: <cover.1345637459.git.julien.grall@citrix.com>
MIME-Version: 1.0
Cc: Julien Grall <julien.grall@citrix.com>, christian.limpach@gmail.com,
	Stefano.Stabellini@eu.citrix.com, xen-devel@lists.xen.org
Subject: [Xen-devel] [QEMU][RFC V2 07/10] xen: specify which device is part
	of default devices
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

One major problem of QEMU disaggregation is that some devices need to be
emulated in each QEMU, but only one QEMU needs to register them in Xen.

This patch introduces helpers that can be used in QEMU code (for instance
hw/pc_piix.c) to specify whether a device is part of the default device set.

Signed-off-by: Julien Grall <julien.grall@citrix.com>
---
 hw/pc_piix.c |    2 ++
 hw/xen.h     |   20 ++++++++++++++++++++
 xen-all.c    |   29 +++++++++++++++++++++++++++--
 3 files changed, 49 insertions(+), 2 deletions(-)

diff --git a/hw/pc_piix.c b/hw/pc_piix.c
index 0c0096f..6cb0a2a 100644
--- a/hw/pc_piix.c
+++ b/hw/pc_piix.c
@@ -342,9 +342,11 @@ static void pc_xen_hvm_init(ram_addr_t ram_size,
     if (xen_hvm_init() != 0) {
         hw_error("xen hardware virtual machine initialisation failed");
     }
+    xen_set_register_default_dev(1,  NULL);
     pc_init_pci_no_kvmclock(ram_size, boot_device,
                             kernel_filename, kernel_cmdline,
                             initrd_filename, cpu_model);
+    xen_set_register_default_dev(0, NULL);
     xen_vcpu_init();
 }
 #endif
diff --git a/hw/xen.h b/hw/xen.h
index 663731a..3c8724f 100644
--- a/hw/xen.h
+++ b/hw/xen.h
@@ -21,6 +21,7 @@ extern uint32_t xen_domid;
 extern enum xen_mode xen_mode;
 
 extern int xen_allowed;
+extern int xen_register_default_dev;
 
 static inline int xen_enabled(void)
 {
@@ -31,6 +32,25 @@ static inline int xen_enabled(void)
 #endif
 }
 
+static inline int xen_is_registered_default_dev(void)
+{
+#if defined(CONFIG_XEN)
+    return xen_register_default_dev;
+#else
+    return 1;
+#endif
+}
+
+static inline void xen_set_register_default_dev(int val, int *old)
+{
+#if defined(CONFIG_XEN)
+    if (old) {
+        *old = xen_register_default_dev;
+    }
+    xen_register_default_dev = val;
+#endif
+}
+
 int xen_pci_slot_get_pirq(PCIDevice *pci_dev, int irq_num);
 void xen_piix3_set_irq(void *opaque, int irq_num, int level);
 void xen_piix_pci_write_config_client(uint32_t address, uint32_t val, int len);
diff --git a/xen-all.c b/xen-all.c
index 485c312..afa9bcc 100644
--- a/xen-all.c
+++ b/xen-all.c
@@ -39,6 +39,10 @@ static MemoryRegion *framebuffer;
 static unsigned int serverid;
 static uint32_t xen_dmid = ~0;
 
+/* Used to tell whether we register pci/mmio/pio of default devices */
+int xen_register_default_dev = 0;
+static int xen_emulate_default_dev = 1;
+
 /* Compatibility with older version */
 #if __XEN_LATEST_INTERFACE_VERSION__ < 0x0003020a
 static inline uint32_t xen_vcpu_eport(shared_iopage_t *shared_page, int i)
@@ -176,6 +180,10 @@ void xen_piix3_set_irq(void *opaque, int irq_num, int level)
 
 int xen_register_pcidev(PCIDevice *pci_dev)
 {
+    if (xen_register_default_dev && !xen_emulate_default_dev) {
+        return 0;
+    }
+
     DPRINTF("register pci %x:%x.%x %s\n", 0, (pci_dev->devfn >> 3) & 0x1f,
             pci_dev->devfn & 0x7, pci_dev->name);
 
@@ -214,6 +222,18 @@ static void xen_map_iorange(target_phys_addr_t addr, uint64_t size,
         return;
     }
 
+    /* Handle the registration of all default io ranges */
+    if (xen_register_default_dev) {
+        /* Register ps/2 only if we emulate VGA */
+        if (!strcmp(name, "i8042-data") || !strcmp(name, "i8042-cmd")) {
+            if (display_type == DT_NOGRAPHIC) {
+                return;
+            }
+        } else if (!xen_emulate_default_dev && strcmp(name, "serial")) {
+            return;
+        }
+    }
+
     DPRINTF("map %s %s 0x"TARGET_FMT_plx" - 0x"TARGET_FMT_plx"\n",
             (is_mmio) ? "mmio" : "io", name, addr, addr + size - 1);
 
@@ -1300,6 +1320,8 @@ int xen_hvm_init(void)
     if (!QTAILQ_EMPTY(&list->head)) {
         xen_dmid = qemu_opt_get_number(QTAILQ_FIRST(&list->head),
                                        "xen_dmid", ~0);
+        xen_emulate_default_dev = qemu_opt_get_bool(QTAILQ_FIRST(&list->head),
+                                                    "xen_default_dev", 1);
     }
 
     state = g_malloc0(sizeof (XenIOState));
@@ -1395,9 +1417,12 @@ int xen_hvm_init(void)
         fprintf(stderr, "%s: xen backend core setup failed\n", __FUNCTION__);
         exit(1);
     }
-    xen_be_register("console", &xen_console_ops);
-    xen_be_register("vkbd", &xen_kbdmouse_ops);
     xen_be_register("qdisk", &xen_blkdev_ops);
+
+    if (xen_emulate_default_dev) {
+        xen_be_register("console", &xen_console_ops);
+        xen_be_register("vkbd", &xen_kbdmouse_ops);
+    }
     xen_read_physmap(state);
 
     return 0;
-- 
Julien Grall



From xen-devel-bounces@lists.xen.org Wed Aug 22 18:54:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 18:54:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4G4K-0006An-F7; Wed, 22 Aug 2012 18:54:20 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@citrix.com>) id 1T4G4I-00069Q-RD
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 18:54:19 +0000
X-Env-Sender: julien.grall@citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1345661648!8568704!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzMzNjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32693 invoked from network); 22 Aug 2012 18:54:12 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 18:54:12 -0000
X-IronPort-AV: E=Sophos;i="4.80,809,1344225600"; d="scan'208";a="205942900"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 14:54:03 -0400
Received: from meteora.cam.xci-test.com (10.80.248.22) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 22 Aug 2012 14:54:01 -0400
From: Julien Grall <julien.grall@citrix.com>
To: qemu-devel@nongnu.org
Date: Wed, 22 Aug 2012 13:30:15 +0100
Message-ID: <10aff66be1ac0ebad8afa20cd0946b68323cbbb9.1345637459.git.julien.grall@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <cover.1345637459.git.julien.grall@citrix.com>
References: <cover.1345637459.git.julien.grall@citrix.com>
MIME-Version: 1.0
Cc: Julien Grall <julien.grall@citrix.com>, christian.limpach@gmail.com,
	Stefano.Stabellini@eu.citrix.com, xen-devel@lists.xen.org
Subject: [Xen-devel] [QEMU][RFC V2 02/10] xen: modify QEMU status path in
	XenStore
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

QEMU will now write its status to a different XenStore path, because multiple
QEMU instances can run for the same domain. If the xen_dmid machine option is
not specified, an old version of Xen is in use, so the status is written to
the old path.

Signed-off-by: Julien Grall <julien.grall@citrix.com>
---
 xen-all.c |   16 +++++++++++++++-
 1 files changed, 15 insertions(+), 1 deletions(-)

diff --git a/xen-all.c b/xen-all.c
index 61def2e..df6927d 100644
--- a/xen-all.c
+++ b/xen-all.c
@@ -36,6 +36,7 @@
 
 static MemoryRegion ram_memory, ram_640k, ram_lo, ram_hi;
 static MemoryRegion *framebuffer;
+static uint32_t xen_dmid = ~0;
 
 /* Compatibility with older version */
 #if __XEN_LATEST_INTERFACE_VERSION__ < 0x0003020a
@@ -958,7 +959,14 @@ static void xenstore_record_dm_state(struct xs_handle *xs, const char *state)
         exit(1);
     }
 
-    snprintf(path, sizeof (path), "/local/domain/0/device-model/%u/state", xen_domid);
+    if (xen_dmid == ~0) {
+        snprintf(path, sizeof(path), "/local/domain/0/device-model/%u/state",
+                 xen_domid);
+    } else {
+        snprintf(path, sizeof(path), "/local/domain/0/dms/%u/%u/state",
+                 xen_domid, xen_dmid);
+    }
+
     if (!xs_write(xs, XBT_NULL, path, state, strlen(state))) {
         fprintf(stderr, "error recording dm state\n");
         exit(1);
@@ -1077,6 +1085,12 @@ int xen_hvm_init(void)
     unsigned long ioreq_pfn;
     unsigned long bufioreq_evtchn;
     XenIOState *state;
+    QemuOptsList *list = qemu_find_opts("machine");
+
+    if (!QTAILQ_EMPTY(&list->head)) {
+        xen_dmid = qemu_opt_get_number(QTAILQ_FIRST(&list->head),
+                                       "xen_dmid", ~0);
+    }
 
     state = g_malloc0(sizeof (XenIOState));
 
-- 
Julien Grall



From xen-devel-bounces@lists.xen.org Wed Aug 22 18:54:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 18:54:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4G4T-0006Ez-Aa; Wed, 22 Aug 2012 18:54:29 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@citrix.com>) id 1T4G4R-0006Do-3U
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 18:54:27 +0000
Received: from [85.158.143.35:16830] by server-1.bemta-4.messagelabs.com id
	64/B4-07754-2EA25305; Wed, 22 Aug 2012 18:54:26 +0000
X-Env-Sender: julien.grall@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1345661661!4639216!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjU4OTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5575 invoked from network); 22 Aug 2012 18:54:22 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 18:54:22 -0000
X-IronPort-AV: E=Sophos;i="4.80,809,1344225600"; d="scan'208";a="35484605"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 14:53:59 -0400
Received: from meteora.cam.xci-test.com (10.80.248.22) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 22 Aug 2012 14:53:58 -0400
From: Julien Grall <julien.grall@citrix.com>
To: qemu-devel@nongnu.org
Date: Wed, 22 Aug 2012 13:30:13 +0100
Message-ID: <cover.1345637459.git.julien.grall@citrix.com>
X-Mailer: git-send-email 1.7.2.5
MIME-Version: 1.0
Cc: Julien Grall <julien.grall@citrix.com>, christian.limpach@gmail.com,
	Stefano.Stabellini@eu.citrix.com, xen-devel@lists.xen.org
Subject: [Xen-devel] [QEMU][RFC V2 00/10] QEMU disaggregation in Xen
	environment.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello,

This patch series only concerns QEMU. Another series will follow for Xen.

I'm currently working on QEMU disaggregation in a Xen environment. The
goal is to be able to run multiple QEMU instances for the same domain
(http://lists.xen.org/archives/html/xen-devel/2012-03/msg00299.html).

I already sent a version of this patch series a few months ago:
    - QEMU: https://lists.gnu.org/archive/html/qemu-devel/2012-03/msg04401.html
    - Xen: http://lists.xen.org/archives/html/xen-devel/2012-03/msg01947.html
Based on the feedback received, I have improved both the QEMU and Xen modifications.
As before, I will send two patch series, one for QEMU and the other for Xen.

Modifications between V1 and V2:
    - introduce new machine options
    - use memory listener to avoid Xen specific code in QEMU core
    (depends on the "[PATCH V5 0/8] memory: unify ioport registration" patch series)
    - implement disaggregation
    - add wrapper for older Xen version

Julien Grall (10):
  xen: add new machine options to support QEMU disaggregation in Xen
    environment
  xen: modify QEMU status path in XenStore
  xen: add wrappers for new Xen disaggregation hypercalls
  xen-hvm: register qemu as ioreq server and retrieve shared pages
  xen-memory: register memory/IO range in Xen
  xen-pci: register PCI device in Xen and handle IOREQ_TYPE_PCI_CONFIG
  xen: specify which device is part of default devices
  xen: audio is not a part of default devices
  xen-memory: handle node "device_model" for physical mapping
  xen: emulate IDE outside default device set

 arch_init.c     |    6 +
 hw/ide/qdev.c   |    8 ++-
 hw/pc_piix.c    |   40 +++++---
 hw/pci.c        |    6 +
 hw/xen.h        |   31 ++++++
 hw/xen_common.h |   58 +++++++++++
 qemu-config.c   |   12 ++
 xen-all.c       |  304 +++++++++++++++++++++++++++++++++++++++++++++++++++++--
 xen-stub.c      |    5 +
 9 files changed, 447 insertions(+), 23 deletions(-)

-- 
Julien Grall



From xen-devel-bounces@lists.xen.org Wed Aug 22 18:54:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 18:54:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4G4L-0006BQ-Sc; Wed, 22 Aug 2012 18:54:21 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@citrix.com>) id 1T4G4K-00069T-55
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 18:54:20 +0000
X-Env-Sender: julien.grall@citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1345661648!8568704!3
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzMzNjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32732 invoked from network); 22 Aug 2012 18:54:13 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 18:54:13 -0000
X-IronPort-AV: E=Sophos;i="4.80,809,1344225600"; d="scan'208";a="205942907"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 14:54:03 -0400
Received: from meteora.cam.xci-test.com (10.80.248.22) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 22 Aug 2012 14:54:02 -0400
From: Julien Grall <julien.grall@citrix.com>
To: qemu-devel@nongnu.org
Date: Wed, 22 Aug 2012 13:30:16 +0100
Message-ID: <60e94dfc4ce6ebcd07e3256e314b9d0c99a13d1d.1345637459.git.julien.grall@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <cover.1345637459.git.julien.grall@citrix.com>
References: <cover.1345637459.git.julien.grall@citrix.com>
MIME-Version: 1.0
Cc: Julien Grall <julien.grall@citrix.com>, christian.limpach@gmail.com,
	Stefano.Stabellini@eu.citrix.com, xen-devel@lists.xen.org
Subject: [Xen-devel] [QEMU][RFC V2 03/10] xen: add wrappers for new Xen
	disaggregation hypercalls
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

QEMU disaggregation is not supported on old Xen versions.

Signed-off-by: Julien Grall <julien.grall@citrix.com>
---
 hw/xen_common.h |   58 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 58 insertions(+), 0 deletions(-)

diff --git a/hw/xen_common.h b/hw/xen_common.h
index 727757a..b2525ad 100644
--- a/hw/xen_common.h
+++ b/hw/xen_common.h
@@ -152,6 +152,64 @@ static inline int xen_xc_hvm_inject_msi(XenXC xen_xc, domid_t dom,
 }
 #endif
 
+/* Xen before 4.3 */
+#if CONFIG_XEN_CTRL_INTERFACE_VERSION < 430
+static inline int xen_xc_hvm_register_pcidev(XenXC xen_xc, domid_t dom,
+        unsigned int serverid, uint8_t domain,
+        uint8_t bus, uint8_t device, uint8_t function)
+{
+    return 0;
+}
+
+static inline int xen_xc_hvm_map_io_range_to_ioreq_server(XenXC xen_xc,
+        domid_t dom, unsigned int serverid, int is_mmio,
+        uint64_t start, uint64_t end)
+{
+    return 1;
+}
+
+static inline int xen_xc_hvm_unmap_io_range_from_ioreq_server(XenXC xen_xc,
+        domid_t dom, unsigned int serverid, int is_mmio, uint64_t start)
+{
+    return 1;
+}
+
+static inline int xen_xc_hvm_register_ioreq_server(XenXC xen_xc, domid_t dom)
+{
+    return 0;
+}
+
+#else
+static inline int xen_xc_hvm_register_pcidev(XenXC xen_xc, domid_t dom,
+        unsigned int serverid, uint8_t domain,
+        uint8_t bus, uint8_t device, uint8_t function)
+{
+    return xc_hvm_register_pcidev(xen_xc, dom, serverid, domain,
+                                  bus, device, function);
+}
+
+static inline int xen_xc_hvm_map_io_range_to_ioreq_server(XenXC xen_xc,
+        domid_t dom, unsigned int serverid, int is_mmio,
+        uint64_t start, uint64_t end)
+{
+    return xc_hvm_map_io_range_to_ioreq_server(xen_xc, dom, serverid, is_mmio,
+                                               start, end);
+}
+
+static inline int xen_xc_hvm_unmap_io_range_from_ioreq_server(XenXC xen_xc,
+        domid_t dom, unsigned int serverid, int is_mmio, uint64_t start)
+{
+    return xc_hvm_unmap_io_range_from_ioreq_server(xen_xc, dom, serverid,
+                                                   is_mmio, start);
+}
+
+static inline int xen_xc_hvm_register_ioreq_server(XenXC xen_xc, domid_t dom)
+{
+    return xc_hvm_register_ioreq_server(xen_xc, dom);
+}
+
+#endif
+
 void destroy_hvm_domain(bool reboot);
 
 /* shutdown/destroy current domain because of an error */
-- 
Julien Grall



From xen-devel-bounces@lists.xen.org Wed Aug 22 18:54:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 18:54:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4G4H-00069p-E6; Wed, 22 Aug 2012 18:54:17 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@citrix.com>) id 1T4G4F-00069S-2h
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 18:54:15 +0000
Received: from [85.158.139.83:51167] by server-5.bemta-5.messagelabs.com id
	93/CE-31019-6DA25305; Wed, 22 Aug 2012 18:54:14 +0000
X-Env-Sender: julien.grall@citrix.com
X-Msg-Ref: server-3.tower-182.messagelabs.com!1345661651!29534497!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzMzNjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32470 invoked from network); 22 Aug 2012 18:54:13 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 18:54:13 -0000
X-IronPort-AV: E=Sophos;i="4.80,809,1344225600"; d="scan'208";a="205942910"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 14:54:04 -0400
Received: from meteora.cam.xci-test.com (10.80.248.22) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 22 Aug 2012 14:54:04 -0400
From: Julien Grall <julien.grall@citrix.com>
To: qemu-devel@nongnu.org
Date: Wed, 22 Aug 2012 13:30:17 +0100
Message-ID: <218c6f257e81ec8ae998002f6ec6669002453d6a.1345637459.git.julien.grall@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <cover.1345637459.git.julien.grall@citrix.com>
References: <cover.1345637459.git.julien.grall@citrix.com>
MIME-Version: 1.0
Cc: Julien Grall <julien.grall@citrix.com>, christian.limpach@gmail.com,
	Stefano.Stabellini@eu.citrix.com, xen-devel@lists.xen.org
Subject: [Xen-devel] [QEMU][RFC V2 04/10] xen-hvm: register qemu as ioreq
	server and retrieve shared pages
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

With QEMU disaggregation in a Xen environment, each QEMU needs to ask Xen
for an ioreq server id. This id will be used to retrieve its private shared
pages.

Signed-off-by: Julien Grall <julien.grall@citrix.com>
---
 xen-all.c |   80 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++---
 1 files changed, 76 insertions(+), 4 deletions(-)

diff --git a/xen-all.c b/xen-all.c
index df6927d..5f05838 100644
--- a/xen-all.c
+++ b/xen-all.c
@@ -36,6 +36,7 @@
 
 static MemoryRegion ram_memory, ram_640k, ram_lo, ram_hi;
 static MemoryRegion *framebuffer;
+static unsigned int serverid;
 static uint32_t xen_dmid = ~0;
 
 /* Compatibility with older version */
@@ -64,6 +65,67 @@ static inline ioreq_t *xen_vcpu_ioreq(shared_iopage_t *shared_page, int vcpu)
 #define HVM_PARAM_BUFIOREQ_EVTCHN 26
 #endif
 
+#if __XEN_LATEST_INTERFACE_VERSION__ < 0x00040300
+static inline unsigned long xen_buffered_iopage(void)
+{
+    unsigned long pfn;
+
+    xc_get_hvm_param(xen_xc, xen_domid, HVM_PARAM_BUFIOREQ_PFN, &pfn);
+
+    return pfn;
+}
+
+static inline unsigned long xen_iopage(void)
+{
+    unsigned long pfn;
+
+    xc_get_hvm_param(xen_xc, xen_domid, HVM_PARAM_IOREQ_PFN, &pfn);
+
+    return pfn;
+}
+
+static inline evtchn_port_or_error_t xen_buffered_channel(void)
+{
+    unsigned long evtchn;
+    int rc;
+
+    rc = xc_get_hvm_param(xen_xc, xen_domid, HVM_PARAM_BUFIOREQ_EVTCHN,
+                          &evtchn);
+
+    if (rc < 0) {
+        return rc;
+    } else {
+        return evtchn;
+    }
+}
+#else
+static inline unsigned long xen_buffered_iopage(void)
+{
+    unsigned long pfn;
+
+    xc_get_hvm_param(xen_xc, xen_domid, HVM_PARAM_IO_PFN_FIRST, &pfn);
+    pfn += (serverid - 1) * 2 + 2;
+
+    return pfn;
+}
+
+static inline unsigned long xen_iopage(void)
+{
+    unsigned long pfn;
+
+    xc_get_hvm_param(xen_xc, xen_domid, HVM_PARAM_IO_PFN_FIRST, &pfn);
+    pfn += (serverid - 1) * 2 + 1;
+
+    return pfn;
+}
+
+static inline evtchn_port_or_error_t xen_buffered_channel(void)
+{
+    return xc_hvm_get_ioreq_server_buf_channel(xen_xc, xen_domid, serverid);
+}
+
+#endif
+
 #define BUFFER_IO_MAX_DELAY  100
 
 typedef struct XenPhysmap {
@@ -1112,7 +1174,15 @@ int xen_hvm_init(void)
     state->suspend.notify = xen_suspend_notifier;
     qemu_register_suspend_notifier(&state->suspend);
 
-    xc_get_hvm_param(xen_xc, xen_domid, HVM_PARAM_IOREQ_PFN, &ioreq_pfn);
+    rc = xen_xc_hvm_register_ioreq_server(xen_xc, xen_domid);
+
+    if (rc < 0) {
+        hw_error("registering ioreq server returned error %d", rc);
+    }
+
+    serverid = rc;
+
+    ioreq_pfn = xen_iopage();
     DPRINTF("shared page at pfn %lx\n", ioreq_pfn);
     state->shared_page = xc_map_foreign_range(xen_xc, xen_domid, XC_PAGE_SIZE,
                                               PROT_READ|PROT_WRITE, ioreq_pfn);
@@ -1121,7 +1191,7 @@ int xen_hvm_init(void)
                  errno, xen_xc);
     }
 
-    xc_get_hvm_param(xen_xc, xen_domid, HVM_PARAM_BUFIOREQ_PFN, &ioreq_pfn);
+    ioreq_pfn = xen_buffered_iopage();
     DPRINTF("buffered io page at pfn %lx\n", ioreq_pfn);
     state->buffered_io_page = xc_map_foreign_range(xen_xc, xen_domid, XC_PAGE_SIZE,
                                                    PROT_READ|PROT_WRITE, ioreq_pfn);
@@ -1142,12 +1212,14 @@ int xen_hvm_init(void)
         state->ioreq_local_port[i] = rc;
     }
 
-    rc = xc_get_hvm_param(xen_xc, xen_domid, HVM_PARAM_BUFIOREQ_EVTCHN,
-            &bufioreq_evtchn);
+    rc = xen_buffered_channel();
     if (rc < 0) {
         fprintf(stderr, "failed to get HVM_PARAM_BUFIOREQ_EVTCHN\n");
         return -1;
     }
+
+    bufioreq_evtchn = rc;
+
     rc = xc_evtchn_bind_interdomain(state->xce_handle, xen_domid,
             (uint32_t)bufioreq_evtchn);
     if (rc == -1) {
-- 
Julien Grall


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 18:54:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 18:54:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4G4K-0006Ab-2F; Wed, 22 Aug 2012 18:54:20 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@citrix.com>) id 1T4G4I-00069R-P2
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 18:54:18 +0000
X-Env-Sender: julien.grall@citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1345661648!8568704!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzMzNjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32680 invoked from network); 22 Aug 2012 18:54:10 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 18:54:10 -0000
X-IronPort-AV: E=Sophos;i="4.80,809,1344225600"; d="scan'208";a="205942895"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 14:54:01 -0400
Received: from meteora.cam.xci-test.com (10.80.248.22) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 22 Aug 2012 14:53:59 -0400
From: Julien Grall <julien.grall@citrix.com>
To: qemu-devel@nongnu.org
Date: Wed, 22 Aug 2012 13:30:14 +0100
Message-ID: <bd5dc1c039b5b6d30ac785630677ad3339039f6b.1345637459.git.julien.grall@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <cover.1345637459.git.julien.grall@citrix.com>
References: <cover.1345637459.git.julien.grall@citrix.com>
MIME-Version: 1.0
Cc: Julien Grall <julien.grall@citrix.com>, christian.limpach@gmail.com,
	Stefano.Stabellini@eu.citrix.com, xen-devel@lists.xen.org
Subject: [Xen-devel] [QEMU][RFC V2 01/10] xen: add new machine options to
	support QEMU disaggregation in Xen environment
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

    - xen_dmid: specify the id of this QEMU instance. It will be used to
    retrieve/store information inside XenStore.
    - xen_default_dev (on/off): as default devices need to be created in
    each QEMU (due to code dependencies), this option specifies whether
    this instance registers the IO ranges/PCI of the default devices
    (root bridge, south bridge, ...) via Xen hypercall.
    - xen_emulate_ide (on/off): enable/disable IDE emulation in QEMU.
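
    How such key=value machine options are read back can be sketched with a
    small self-contained helper. This is not QEMU's actual parser:
    opt_get_number is a hypothetical stand-in for qemu_opt_get_number, shown
    only to illustrate the "numeric option with a default" behaviour the
    options above rely on.

    ```c
    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical stand-in for QEMU's qemu_opt_get_number(): scan a
     * comma-separated "key=value" option string and return the numeric
     * value for key, or defval when the key is absent. */
    static uint32_t opt_get_number(const char *opts, const char *key,
                                   uint32_t defval)
    {
        const char *p = strstr(opts, key);

        if (p == NULL || p[strlen(key)] != '=') {
            return defval;
        }
        return (uint32_t)strtoul(p + strlen(key) + 1, NULL, 10);
    }
    ```

    With this shape, an unspecified xen_dmid naturally falls back to the
    ~0 sentinel used in the later patches of this series.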

Signed-off-by: Julien Grall <julien.grall@citrix.com>
---
 qemu-config.c |   12 ++++++++++++
 1 files changed, 12 insertions(+), 0 deletions(-)

diff --git a/qemu-config.c b/qemu-config.c
index c05ffbc..7740442 100644
--- a/qemu-config.c
+++ b/qemu-config.c
@@ -612,6 +612,18 @@ static QemuOptsList qemu_machine_opts = {
             .name = "dump-guest-core",
             .type = QEMU_OPT_BOOL,
             .help = "Include guest memory in  a core dump",
+        }, {
+            .name = "xen_dmid",
+            .type = QEMU_OPT_NUMBER,
+            .help = "Xen device model id",
+        }, {
+            .name = "xen_default_dev",
+            .type = QEMU_OPT_BOOL,
+            .help = "emulate Xen default device (South Bridge, IDE, ...)"
+        }, {
+            .name = "xen_emulate_ide",
+            .type = QEMU_OPT_BOOL,
+            .help = "emulate IDE with XEN"
         },
         { /* End of list */ }
     },
-- 
Julien Grall


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 18:54:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 18:54:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4G4K-0006An-F7; Wed, 22 Aug 2012 18:54:20 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@citrix.com>) id 1T4G4I-00069Q-RD
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 18:54:19 +0000
X-Env-Sender: julien.grall@citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1345661648!8568704!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzMzNjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32693 invoked from network); 22 Aug 2012 18:54:12 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 18:54:12 -0000
X-IronPort-AV: E=Sophos;i="4.80,809,1344225600"; d="scan'208";a="205942900"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 14:54:03 -0400
Received: from meteora.cam.xci-test.com (10.80.248.22) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 22 Aug 2012 14:54:01 -0400
From: Julien Grall <julien.grall@citrix.com>
To: qemu-devel@nongnu.org
Date: Wed, 22 Aug 2012 13:30:15 +0100
Message-ID: <10aff66be1ac0ebad8afa20cd0946b68323cbbb9.1345637459.git.julien.grall@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <cover.1345637459.git.julien.grall@citrix.com>
References: <cover.1345637459.git.julien.grall@citrix.com>
MIME-Version: 1.0
Cc: Julien Grall <julien.grall@citrix.com>, christian.limpach@gmail.com,
	Stefano.Stabellini@eu.citrix.com, xen-devel@lists.xen.org
Subject: [Xen-devel] [QEMU][RFC V2 02/10] xen: modify QEMU status path in
	XenStore
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

QEMU now writes its status to a different XenStore path because multiple
QEMU instances can run for the same domain. If the xen_dmid machine option
is not specified, an old version of Xen is in use, so the status is written
to the old path.
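
The path selection can be sketched as a stand-alone helper (a simplified
re-expression of the snprintf logic in the patch below; dm_state_path is a
hypothetical name, and ~0 is the "xen_dmid not specified" sentinel):

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Pick the XenStore status path: the legacy single-QEMU path when no
 * xen_dmid was given (sentinel ~0), the per-device-model path otherwise. */
static void dm_state_path(char *path, size_t len,
                          uint32_t domid, uint32_t dmid)
{
    if (dmid == (uint32_t)~0) {
        snprintf(path, len, "/local/domain/0/device-model/%u/state", domid);
    } else {
        snprintf(path, len, "/local/domain/0/dms/%u/%u/state", domid, dmid);
    }
}
```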

Signed-off-by: Julien Grall <julien.grall@citrix.com>
---
 xen-all.c |   16 +++++++++++++++-
 1 files changed, 15 insertions(+), 1 deletions(-)

diff --git a/xen-all.c b/xen-all.c
index 61def2e..df6927d 100644
--- a/xen-all.c
+++ b/xen-all.c
@@ -36,6 +36,7 @@
 
 static MemoryRegion ram_memory, ram_640k, ram_lo, ram_hi;
 static MemoryRegion *framebuffer;
+static uint32_t xen_dmid = ~0;
 
 /* Compatibility with older version */
 #if __XEN_LATEST_INTERFACE_VERSION__ < 0x0003020a
@@ -958,7 +959,14 @@ static void xenstore_record_dm_state(struct xs_handle *xs, const char *state)
         exit(1);
     }
 
-    snprintf(path, sizeof (path), "/local/domain/0/device-model/%u/state", xen_domid);
+    if (xen_dmid == ~0) {
+        snprintf(path, sizeof(path), "/local/domain/0/device-model/%u/state",
+                 xen_domid);
+    } else {
+        snprintf(path, sizeof(path), "/local/domain/0/dms/%u/%u/state",
+                 xen_domid, xen_dmid);
+    }
+
     if (!xs_write(xs, XBT_NULL, path, state, strlen(state))) {
         fprintf(stderr, "error recording dm state\n");
         exit(1);
@@ -1077,6 +1085,12 @@ int xen_hvm_init(void)
     unsigned long ioreq_pfn;
     unsigned long bufioreq_evtchn;
     XenIOState *state;
+    QemuOptsList *list = qemu_find_opts("machine");
+
+    if (!QTAILQ_EMPTY(&list->head)) {
+        xen_dmid = qemu_opt_get_number(QTAILQ_FIRST(&list->head),
+                                       "xen_dmid", ~0);
+    }
 
     state = g_malloc0(sizeof (XenIOState));
 
-- 
Julien Grall


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 18:54:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 18:54:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4G4H-00069w-QI; Wed, 22 Aug 2012 18:54:17 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@citrix.com>) id 1T4G4F-00069V-Og
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 18:54:15 +0000
Received: from [85.158.139.83:51184] by server-2.bemta-5.messagelabs.com id
	FB/54-10142-6DA25305; Wed, 22 Aug 2012 18:54:14 +0000
X-Env-Sender: julien.grall@citrix.com
X-Msg-Ref: server-3.tower-182.messagelabs.com!1345661651!29534497!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzMzNjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32496 invoked from network); 22 Aug 2012 18:54:14 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 18:54:14 -0000
X-IronPort-AV: E=Sophos;i="4.80,809,1344225600"; d="scan'208";a="205942923"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 14:54:09 -0400
Received: from meteora.cam.xci-test.com (10.80.248.22) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 22 Aug 2012 14:54:07 -0400
From: Julien Grall <julien.grall@citrix.com>
To: qemu-devel@nongnu.org
Date: Wed, 22 Aug 2012 13:30:20 +0100
Message-ID: <552327742c4491e4dadbccfa189bf9e2ab706ba4.1345637459.git.julien.grall@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <cover.1345637459.git.julien.grall@citrix.com>
References: <cover.1345637459.git.julien.grall@citrix.com>
MIME-Version: 1.0
Cc: Julien Grall <julien.grall@citrix.com>, christian.limpach@gmail.com,
	Stefano.Stabellini@eu.citrix.com, xen-devel@lists.xen.org
Subject: [Xen-devel] [QEMU][RFC V2 07/10] xen: specify which device is part
	of default devices
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

One major problem of QEMU disaggregation is that some devices need to be
emulated in each QEMU instance, but only one instance needs to register
them in Xen.

This patch introduces helpers that can be used in QEMU code (for
instance hw/pc_piix.c) to specify whether a device is part of the default set.
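
A minimal sketch of the save/set/restore pattern these helpers implement
(simplified: no CONFIG_XEN guards, and the names drop the xen_ prefix to
mark this as an illustration rather than the patch's own code):

```c
/* Global flag: are we currently creating devices that belong to the
 * default set? Mirrors xen_register_default_dev in the patch. */
static int register_default_dev = 0;

/* Set the flag, optionally saving the previous value so a caller can
 * restore it afterwards. */
static void set_register_default_dev(int val, int *old)
{
    if (old) {
        *old = register_default_dev;
    }
    register_default_dev = val;
}
```

This is the bracket pattern used in pc_xen_hvm_init(): set the flag to 1
around the default-device creation call, then back to 0 afterwards.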

Signed-off-by: Julien Grall <julien.grall@citrix.com>
---
 hw/pc_piix.c |    2 ++
 hw/xen.h     |   20 ++++++++++++++++++++
 xen-all.c    |   29 +++++++++++++++++++++++++++--
 3 files changed, 49 insertions(+), 2 deletions(-)

diff --git a/hw/pc_piix.c b/hw/pc_piix.c
index 0c0096f..6cb0a2a 100644
--- a/hw/pc_piix.c
+++ b/hw/pc_piix.c
@@ -342,9 +342,11 @@ static void pc_xen_hvm_init(ram_addr_t ram_size,
     if (xen_hvm_init() != 0) {
         hw_error("xen hardware virtual machine initialisation failed");
     }
+    xen_set_register_default_dev(1,  NULL);
     pc_init_pci_no_kvmclock(ram_size, boot_device,
                             kernel_filename, kernel_cmdline,
                             initrd_filename, cpu_model);
+    xen_set_register_default_dev(0, NULL);
     xen_vcpu_init();
 }
 #endif
diff --git a/hw/xen.h b/hw/xen.h
index 663731a..3c8724f 100644
--- a/hw/xen.h
+++ b/hw/xen.h
@@ -21,6 +21,7 @@ extern uint32_t xen_domid;
 extern enum xen_mode xen_mode;
 
 extern int xen_allowed;
+extern int xen_register_default_dev;
 
 static inline int xen_enabled(void)
 {
@@ -31,6 +32,25 @@ static inline int xen_enabled(void)
 #endif
 }
 
+static inline int xen_is_registered_default_dev(void)
+{
+#if defined(CONFIG_XEN)
+    return xen_register_default_dev;
+#else
+    return 1;
+#endif
+}
+
+static inline void xen_set_register_default_dev(int val, int *old)
+{
+#if defined(CONFIG_XEN)
+    if (old) {
+        *old = xen_register_default_dev;
+    }
+    xen_register_default_dev = val;
+#endif
+}
+
 int xen_pci_slot_get_pirq(PCIDevice *pci_dev, int irq_num);
 void xen_piix3_set_irq(void *opaque, int irq_num, int level);
 void xen_piix_pci_write_config_client(uint32_t address, uint32_t val, int len);
diff --git a/xen-all.c b/xen-all.c
index 485c312..afa9bcc 100644
--- a/xen-all.c
+++ b/xen-all.c
@@ -39,6 +39,10 @@ static MemoryRegion *framebuffer;
 static unsigned int serverid;
 static uint32_t xen_dmid = ~0;
 
+/* Used to tell whether we register pci/mmio/pio of default devices */
+int xen_register_default_dev = 0;
+static int xen_emulate_default_dev = 1;
+
 /* Compatibility with older version */
 #if __XEN_LATEST_INTERFACE_VERSION__ < 0x0003020a
 static inline uint32_t xen_vcpu_eport(shared_iopage_t *shared_page, int i)
@@ -176,6 +180,10 @@ void xen_piix3_set_irq(void *opaque, int irq_num, int level)
 
 int xen_register_pcidev(PCIDevice *pci_dev)
 {
+    if (xen_register_default_dev && !xen_emulate_default_dev) {
+        return 0;
+    }
+
     DPRINTF("register pci %x:%x.%x %s\n", 0, (pci_dev->devfn >> 3) & 0x1f,
             pci_dev->devfn & 0x7, pci_dev->name);
 
@@ -214,6 +222,18 @@ static void xen_map_iorange(target_phys_addr_t addr, uint64_t size,
         return;
     }
 
+    /* Handle the registration of all default io ranges */
+    if (xen_register_default_dev) {
+        /* Register ps/2 only if we emulate VGA */
+        if (!strcmp(name, "i8042-data") || !strcmp(name, "i8042-cmd")) {
+            if (display_type == DT_NOGRAPHIC) {
+                return;
+            }
+        } else if (!xen_emulate_default_dev && strcmp(name, "serial")) {
+            return;
+        }
+    }
+
     DPRINTF("map %s %s 0x"TARGET_FMT_plx" - 0x"TARGET_FMT_plx"\n",
             (is_mmio) ? "mmio" : "io", name, addr, addr + size - 1);
 
@@ -1300,6 +1320,8 @@ int xen_hvm_init(void)
     if (!QTAILQ_EMPTY(&list->head)) {
         xen_dmid = qemu_opt_get_number(QTAILQ_FIRST(&list->head),
                                        "xen_dmid", ~0);
+        xen_emulate_default_dev = qemu_opt_get_bool(QTAILQ_FIRST(&list->head),
+                                                    "xen_default_dev", 1);
     }
 
     state = g_malloc0(sizeof (XenIOState));
@@ -1395,9 +1417,12 @@ int xen_hvm_init(void)
         fprintf(stderr, "%s: xen backend core setup failed\n", __FUNCTION__);
         exit(1);
     }
-    xen_be_register("console", &xen_console_ops);
-    xen_be_register("vkbd", &xen_kbdmouse_ops);
     xen_be_register("qdisk", &xen_blkdev_ops);
+
+    if (xen_emulate_default_dev) {
+        xen_be_register("console", &xen_console_ops);
+        xen_be_register("vkbd", &xen_kbdmouse_ops);
+    }
     xen_read_physmap(state);
 
     return 0;
-- 
Julien Grall


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 18:54:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 18:54:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4G4H-00069p-E6; Wed, 22 Aug 2012 18:54:17 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@citrix.com>) id 1T4G4F-00069S-2h
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 18:54:15 +0000
Received: from [85.158.139.83:51167] by server-5.bemta-5.messagelabs.com id
	93/CE-31019-6DA25305; Wed, 22 Aug 2012 18:54:14 +0000
X-Env-Sender: julien.grall@citrix.com
X-Msg-Ref: server-3.tower-182.messagelabs.com!1345661651!29534497!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzMzNjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32470 invoked from network); 22 Aug 2012 18:54:13 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 18:54:13 -0000
X-IronPort-AV: E=Sophos;i="4.80,809,1344225600"; d="scan'208";a="205942910"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 14:54:04 -0400
Received: from meteora.cam.xci-test.com (10.80.248.22) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 22 Aug 2012 14:54:04 -0400
From: Julien Grall <julien.grall@citrix.com>
To: qemu-devel@nongnu.org
Date: Wed, 22 Aug 2012 13:30:17 +0100
Message-ID: <218c6f257e81ec8ae998002f6ec6669002453d6a.1345637459.git.julien.grall@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <cover.1345637459.git.julien.grall@citrix.com>
References: <cover.1345637459.git.julien.grall@citrix.com>
MIME-Version: 1.0
Cc: Julien Grall <julien.grall@citrix.com>, christian.limpach@gmail.com,
	Stefano.Stabellini@eu.citrix.com, xen-devel@lists.xen.org
Subject: [Xen-devel] [QEMU][RFC V2 04/10] xen-hvm: register qemu as ioreq
	server and retrieve shared pages
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

With QEMU disaggregation in a Xen environment, each QEMU instance needs to
ask Xen for an ioreq server id. This id will be used to retrieve its private
shared pages.
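
Under the newer (>= 4.3) interface, the patch derives each server's pages
from HVM_PARAM_IO_PFN_FIRST. The arithmetic can be checked with a small
sketch (serverid is 1-based; the helper names here are illustrative, but
the offsets mirror xen_iopage()/xen_buffered_iopage() in the diff below):

```c
/* Per-server page layout: starting from the IO_PFN_FIRST base, server n
 * (1-based) owns two consecutive pages, at offsets 2*(n-1)+1 for the
 * synchronous ioreq page and 2*(n-1)+2 for the buffered ioreq page. */
static unsigned long ioreq_pfn(unsigned long first, unsigned int serverid)
{
    return first + (serverid - 1) * 2 + 1;
}

static unsigned long buf_ioreq_pfn(unsigned long first, unsigned int serverid)
{
    return first + (serverid - 1) * 2 + 2;
}
```

So server 1 uses pages first+1 and first+2, server 2 uses first+3 and
first+4, and so on; no two servers share a page.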

Signed-off-by: Julien Grall <julien.grall@citrix.com>
---
 xen-all.c |   80 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++---
 1 files changed, 76 insertions(+), 4 deletions(-)

diff --git a/xen-all.c b/xen-all.c
index df6927d..5f05838 100644
--- a/xen-all.c
+++ b/xen-all.c
@@ -36,6 +36,7 @@
 
 static MemoryRegion ram_memory, ram_640k, ram_lo, ram_hi;
 static MemoryRegion *framebuffer;
+static unsigned int serverid;
 static uint32_t xen_dmid = ~0;
 
 /* Compatibility with older version */
@@ -64,6 +65,67 @@ static inline ioreq_t *xen_vcpu_ioreq(shared_iopage_t *shared_page, int vcpu)
 #define HVM_PARAM_BUFIOREQ_EVTCHN 26
 #endif
 
+#if __XEN_LATEST_INTERFACE_VERSION__ < 0x00040300
+static inline unsigned long xen_buffered_iopage(void)
+{
+    unsigned long pfn;
+
+    xc_get_hvm_param(xen_xc, xen_domid, HVM_PARAM_BUFIOREQ_PFN, &pfn);
+
+    return pfn;
+}
+
+static inline unsigned long xen_iopage(void)
+{
+    unsigned long pfn;
+
+    xc_get_hvm_param(xen_xc, xen_domid, HVM_PARAM_IOREQ_PFN, &pfn);
+
+    return pfn;
+}
+
+static inline evtchn_port_or_error_t xen_buffered_channel(void)
+{
+    unsigned long evtchn;
+    int rc;
+
+    rc = xc_get_hvm_param(xen_xc, xen_domid, HVM_PARAM_BUFIOREQ_EVTCHN,
+                          &evtchn);
+
+    if (rc < 0) {
+        return rc;
+    } else {
+        return evtchn;
+    }
+}
+#else
+static inline unsigned long xen_buffered_iopage(void)
+{
+    unsigned long pfn;
+
+    xc_get_hvm_param(xen_xc, xen_domid, HVM_PARAM_IO_PFN_FIRST, &pfn);
+    pfn += (serverid - 1) * 2 + 2;
+
+    return pfn;
+}
+
+static inline unsigned long xen_iopage(void)
+{
+    unsigned long pfn;
+
+    xc_get_hvm_param(xen_xc, xen_domid, HVM_PARAM_IO_PFN_FIRST, &pfn);
+    pfn += (serverid - 1) * 2 + 1;
+
+    return pfn;
+}
+
+static inline evtchn_port_or_error_t xen_buffered_channel(void)
+{
+    return xc_hvm_get_ioreq_server_buf_channel(xen_xc, xen_domid, serverid);
+}
+
+#endif
+
 #define BUFFER_IO_MAX_DELAY  100
 
 typedef struct XenPhysmap {
@@ -1112,7 +1174,15 @@ int xen_hvm_init(void)
     state->suspend.notify = xen_suspend_notifier;
     qemu_register_suspend_notifier(&state->suspend);
 
-    xc_get_hvm_param(xen_xc, xen_domid, HVM_PARAM_IOREQ_PFN, &ioreq_pfn);
+    rc = xen_xc_hvm_register_ioreq_server(xen_xc, xen_domid);
+
+    if (rc < 0) {
+        hw_error("registering ioreq server returned error %d", rc);
+    }
+
+    serverid = rc;
+
+    ioreq_pfn = xen_iopage();
     DPRINTF("shared page at pfn %lx\n", ioreq_pfn);
     state->shared_page = xc_map_foreign_range(xen_xc, xen_domid, XC_PAGE_SIZE,
                                               PROT_READ|PROT_WRITE, ioreq_pfn);
@@ -1121,7 +1191,7 @@ int xen_hvm_init(void)
                  errno, xen_xc);
     }
 
-    xc_get_hvm_param(xen_xc, xen_domid, HVM_PARAM_BUFIOREQ_PFN, &ioreq_pfn);
+    ioreq_pfn = xen_buffered_iopage();
     DPRINTF("buffered io page at pfn %lx\n", ioreq_pfn);
     state->buffered_io_page = xc_map_foreign_range(xen_xc, xen_domid, XC_PAGE_SIZE,
                                                    PROT_READ|PROT_WRITE, ioreq_pfn);
@@ -1142,12 +1212,14 @@ int xen_hvm_init(void)
         state->ioreq_local_port[i] = rc;
     }
 
-    rc = xc_get_hvm_param(xen_xc, xen_domid, HVM_PARAM_BUFIOREQ_EVTCHN,
-            &bufioreq_evtchn);
+    rc = xen_buffered_channel();
     if (rc < 0) {
         fprintf(stderr, "failed to get HVM_PARAM_BUFIOREQ_EVTCHN\n");
         return -1;
     }
+
+    bufioreq_evtchn = rc;
+
     rc = xc_evtchn_bind_interdomain(state->xce_handle, xen_domid,
             (uint32_t)bufioreq_evtchn);
     if (rc == -1) {
-- 
Julien Grall


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 18:54:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 18:54:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4G4I-0006A3-5L; Wed, 22 Aug 2012 18:54:18 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@citrix.com>) id 1T4G4G-00069c-0T
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 18:54:16 +0000
Received: from [85.158.138.51:20085] by server-1.bemta-3.messagelabs.com id
	64/35-09327-7DA25305; Wed, 22 Aug 2012 18:54:15 +0000
X-Env-Sender: julien.grall@citrix.com
X-Msg-Ref: server-7.tower-174.messagelabs.com!1345661652!18579895!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzMzNjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25030 invoked from network); 22 Aug 2012 18:54:14 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 18:54:14 -0000
X-IronPort-AV: E=Sophos;i="4.80,809,1344225600"; d="scan'208";a="205942930"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 14:54:11 -0400
Received: from meteora.cam.xci-test.com (10.80.248.22) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 22 Aug 2012 14:54:10 -0400
From: Julien Grall <julien.grall@citrix.com>
To: qemu-devel@nongnu.org
Date: Wed, 22 Aug 2012 13:30:22 +0100
Message-ID: <18e8a4bdb86eccae842aa5295ccc8137f151410a.1345637459.git.julien.grall@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <cover.1345637459.git.julien.grall@citrix.com>
References: <cover.1345637459.git.julien.grall@citrix.com>
MIME-Version: 1.0
Cc: Julien Grall <julien.grall@citrix.com>, christian.limpach@gmail.com,
	Stefano.Stabellini@eu.citrix.com, xen-devel@lists.xen.org
Subject: [Xen-devel] [QEMU][RFC V2 09/10] xen-memory: handle node
	"device_model" for physical mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Retrieve only the physical mappings whose device model id corresponds to
this QEMU's dmid. When a new physical mapping is added, record the device
model id of the current QEMU.
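
The filtering rule inside the xen_read_physmap() loop can be restated as a
small predicate (physmap_entry_is_ours is a hypothetical name introduced
here; the logic matches the patch: entries without a device_model node are
legacy entries and are kept):

```c
#include <stdint.h>
#include <stdlib.h>

/* Keep a physmap entry when it carries no device_model node (legacy
 * entry written before disaggregation) or when its recorded id matches
 * this QEMU's dmid. */
static int physmap_entry_is_ours(const char *dm_value, uint32_t my_dmid)
{
    if (dm_value == NULL) {
        return 1;
    }
    return (uint32_t)strtoul(dm_value, NULL, 10) == my_dmid;
}
```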

Signed-off-by: Julien Grall <julien.grall@citrix.com>
---
 xen-all.c |   20 ++++++++++++++++++++
 1 files changed, 20 insertions(+), 0 deletions(-)

diff --git a/xen-all.c b/xen-all.c
index afa9bcc..f424cce 100644
--- a/xen-all.c
+++ b/xen-all.c
@@ -440,6 +440,14 @@ go_physmap:
                                    XEN_DOMCTL_MEM_CACHEATTR_WB);
 
     snprintf(path, sizeof(path),
+             "/local/domain/0/device-model/%d/physmap/%"PRIx64"/device_model",
+             xen_domid, (uint64_t)phys_offset);
+    snprintf(value, sizeof(value), "%u", xen_dmid);
+    if (!xs_write(state->xenstore, 0, path, value, strlen(value))) {
+        return -1;
+    }
+
+    snprintf(path, sizeof(path),
             "/local/domain/0/device-model/%d/physmap/%"PRIx64"/start_addr",
             xen_domid, (uint64_t)phys_offset);
     snprintf(value, sizeof(value), "%"PRIx64, (uint64_t)start_addr);
@@ -1266,6 +1274,7 @@ static void xen_read_physmap(XenIOState *state)
     unsigned int len, num, i;
     char path[80], *value = NULL;
     char **entries = NULL;
+    uint32_t dmid = ~0;
 
     snprintf(path, sizeof(path),
             "/local/domain/0/device-model/%d/physmap", xen_domid);
@@ -1274,6 +1283,17 @@ static void xen_read_physmap(XenIOState *state)
         return;
 
     for (i = 0; i < num; i++) {
+        snprintf(path, sizeof(path),
+                 "/local/domain/0/device-model/%d/physmap/%s/device_model",
+                 xen_domid, entries[i]);
+        value = xs_read(state->xenstore, 0, path, &len);
+        if (value) {
+            dmid = strtoul(value, NULL, 10);
+            free(value);
+            if (dmid != xen_dmid) {
+                continue;
+            }
+        }
         physmap = g_malloc(sizeof (XenPhysmap));
         physmap->phys_offset = strtoull(entries[i], NULL, 16);
         snprintf(path, sizeof(path),
-- 
Julien Grall


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 18:54:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 18:54:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4G4I-0006A3-5L; Wed, 22 Aug 2012 18:54:18 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@citrix.com>) id 1T4G4G-00069c-0T
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 18:54:16 +0000
Received: from [85.158.138.51:20085] by server-1.bemta-3.messagelabs.com id
	64/35-09327-7DA25305; Wed, 22 Aug 2012 18:54:15 +0000
X-Env-Sender: julien.grall@citrix.com
X-Msg-Ref: server-7.tower-174.messagelabs.com!1345661652!18579895!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzMzNjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25030 invoked from network); 22 Aug 2012 18:54:14 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 18:54:14 -0000
X-IronPort-AV: E=Sophos;i="4.80,809,1344225600"; d="scan'208";a="205942930"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 14:54:11 -0400
Received: from meteora.cam.xci-test.com (10.80.248.22) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 22 Aug 2012 14:54:10 -0400
From: Julien Grall <julien.grall@citrix.com>
To: qemu-devel@nongnu.org
Date: Wed, 22 Aug 2012 13:30:22 +0100
Message-ID: <18e8a4bdb86eccae842aa5295ccc8137f151410a.1345637459.git.julien.grall@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <cover.1345637459.git.julien.grall@citrix.com>
References: <cover.1345637459.git.julien.grall@citrix.com>
MIME-Version: 1.0
Cc: Julien Grall <julien.grall@citrix.com>, christian.limpach@gmail.com,
	Stefano.Stabellini@eu.citrix.com, xen-devel@lists.xen.org
Subject: [Xen-devel] [QEMU][RFC V2 09/10] xen-memory: handle node
	"device_model" for physical mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Retrieve only the physical mappings whose device model id matches
dmid. When a new physical mapping is added, record the device model
id of the current QEMU instance.
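
The patch open-codes the xenstore key for the new "device_model" node
with snprintf in xen-all.c. A minimal standalone sketch of that path
construction (the helper name is hypothetical, not part of the patch):

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Compose the xenstore key under which the owning device model id is
 * stored for one physmap entry; phys_offset is formatted in hex,
 * matching the %"PRIx64" used by the patch. */
static int physmap_dm_path(char *buf, size_t len,
                           int domid, uint64_t phys_offset)
{
    return snprintf(buf, len,
                    "/local/domain/0/device-model/%d/physmap/%"PRIx64
                    "/device_model", domid, phys_offset);
}
```

For domid 1 and phys_offset 0xf0000000 this yields
"/local/domain/0/device-model/1/physmap/f0000000/device_model",
the key xen_read_physmap later reads back to filter entries by dmid.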

Signed-off-by: Julien Grall <julien.grall@citrix.com>
---
 xen-all.c |   20 ++++++++++++++++++++
 1 files changed, 20 insertions(+), 0 deletions(-)

diff --git a/xen-all.c b/xen-all.c
index afa9bcc..f424cce 100644
--- a/xen-all.c
+++ b/xen-all.c
@@ -440,6 +440,14 @@ go_physmap:
                                    XEN_DOMCTL_MEM_CACHEATTR_WB);
 
     snprintf(path, sizeof(path),
+             "/local/domain/0/device-model/%d/physmap/%"PRIx64"/device_model",
+             xen_domid, (uint64_t)phys_offset);
+    snprintf(value, sizeof(value), "%u", xen_dmid);
+    if (!xs_write(state->xenstore, 0, path, value, strlen(value))) {
+        return -1;
+    }
+
+    snprintf(path, sizeof(path),
             "/local/domain/0/device-model/%d/physmap/%"PRIx64"/start_addr",
             xen_domid, (uint64_t)phys_offset);
     snprintf(value, sizeof(value), "%"PRIx64, (uint64_t)start_addr);
@@ -1266,6 +1274,7 @@ static void xen_read_physmap(XenIOState *state)
     unsigned int len, num, i;
     char path[80], *value = NULL;
     char **entries = NULL;
+    uint32_t dmid = ~0;
 
     snprintf(path, sizeof(path),
             "/local/domain/0/device-model/%d/physmap", xen_domid);
@@ -1274,6 +1283,17 @@ static void xen_read_physmap(XenIOState *state)
         return;
 
     for (i = 0; i < num; i++) {
+        snprintf(path, sizeof(path),
+                 "/local/domain/0/device-model/%d/physmap/%s/device_model",
+                 xen_domid, entries[i]);
+        value = xs_read(state->xenstore, 0, path, &len);
+        if (value) {
+            dmid = strtoul(value, NULL, 10);
+            free(value);
+            if (dmid != xen_dmid) {
+                continue;
+            }
+        }
         physmap = g_malloc(sizeof (XenPhysmap));
         physmap->phys_offset = strtoull(entries[i], NULL, 16);
         snprintf(path, sizeof(path),
-- 
Julien Grall


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 18:55:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 18:55:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4G5A-0006eT-6R; Wed, 22 Aug 2012 18:55:12 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@citrix.com>) id 1T4G58-0006d8-Mc
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 18:55:11 +0000
Received: from [85.158.138.51:25959] by server-10.bemta-3.messagelabs.com id
	3A/4C-20518-D0B25305; Wed, 22 Aug 2012 18:55:09 +0000
X-Env-Sender: julien.grall@citrix.com
X-Msg-Ref: server-5.tower-174.messagelabs.com!1345661707!27566656!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzMzNjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9650 invoked from network); 22 Aug 2012 18:55:08 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 18:55:08 -0000
X-IronPort-AV: E=Sophos;i="4.80,809,1344225600"; d="scan'208";a="205943108"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 14:55:06 -0400
Received: from meteora.cam.xci-test.com (10.80.248.22) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 22 Aug 2012 14:55:06 -0400
From: Julien Grall <julien.grall@citrix.com>
To: qemu-devel@nongnu.org
Date: Wed, 22 Aug 2012 13:31:47 +0100
Message-ID: <96d6442c911f6ed7f6cb24670901b151fa1570d6.1345552068.git.julien.grall@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <cover.1345552068.git.julien.grall@citrix.com>
References: <cover.1345552068.git.julien.grall@citrix.com>
MIME-Version: 1.0
Cc: Julien Grall <julien.grall@citrix.com>, christian.limpach@gmail.com,
	Stefano.Stabellini@eu.citrix.com, xen-devel@lists.xen.org
Subject: [Xen-devel] [XEN][RFC PATCH V2 01/17] hvm: Modify interface to
	support multiple ioreq server
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add a structure to handle ioreq servers. A server can handle a
range of I/O (MMIO and/or PIO) and emulate a PCI device.
Each server has its own shared pages to receive ioreqs, so
two HVM params are introduced to set/get the first and the
last shared page used for ioreqs. With its id, a server is
able to retrieve its pages.

Introduce a new ioreq type, IOREQ_TYPE_PCI_CONFIG, which
makes it easy to forward PCI config space accesses.
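
The new hvm_ioreq_server keeps its MMIO and port-IO claims as singly
linked lists of inclusive [s, e] spans (see the "inclusive start and
end of range" comment on HVMOP_map_io_range_to_ioreq_server). A small
sketch of how such a list can be scanned — an assumption for
illustration, not code from this patch:

```c
#include <stdint.h>
#include <stddef.h>

/* Mirror of the hvm_io_range node added to domain.h: one inclusive
 * span per node, chained through ->next. */
struct hvm_io_range {
    uint64_t s, e;
    struct hvm_io_range *next;
};

/* Return 1 if addr falls inside any range owned by this list. */
static int io_range_claims(const struct hvm_io_range *r, uint64_t addr)
{
    for ( ; r != NULL; r = r->next )
        if ( addr >= r->s && addr <= r->e )
            return 1;
    return 0;
}
```

Because both ends are inclusive, a serial port registered as
[0x3f8, 0x3ff] claims port 0x3ff itself but not 0x400.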

Signed-off-by: Julien Grall <julien.grall@citrix.com>
---
 xen/include/asm-x86/hvm/domain.h |   25 ++++++++++++++++++-
 xen/include/asm-x86/hvm/vcpu.h   |    4 ++-
 xen/include/public/hvm/hvm_op.h  |   51 ++++++++++++++++++++++++++++++++++++++
 xen/include/public/hvm/ioreq.h   |    1 +
 xen/include/public/hvm/params.h  |    6 +++-
 xen/include/public/xen.h         |    1 +
 xen/include/xen/hvm/pci_emul.h   |   29 +++++++++++++++++++++
 7 files changed, 114 insertions(+), 3 deletions(-)
 create mode 100644 xen/include/xen/hvm/pci_emul.h

diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index 27b3de5..49d1ca0 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -28,6 +28,7 @@
 #include <asm/hvm/vioapic.h>
 #include <asm/hvm/io.h>
 #include <xen/hvm/iommu.h>
+#include <xen/hvm/pci_emul.h>
 #include <asm/hvm/viridian.h>
 #include <asm/hvm/vmx/vmcs.h>
 #include <asm/hvm/svm/vmcb.h>
@@ -41,14 +42,36 @@ struct hvm_ioreq_page {
     void *va;
 };
 
+struct hvm_io_range {
+    uint64_t s, e;
+    struct hvm_io_range *next;
+};
+
+struct hvm_ioreq_server {
+    unsigned int id;
+    domid_t domid;
+    struct hvm_io_range *mmio_range_list;
+    struct hvm_io_range *portio_range_list;
+    struct hvm_ioreq_server *next;
+    struct hvm_ioreq_page ioreq;
+    struct hvm_ioreq_page buf_ioreq;
+    unsigned int buf_ioreq_evtchn;
+};
+
 struct hvm_domain {
+    /* Used for I/O handled directly by Xen */
     struct hvm_ioreq_page  ioreq;
-    struct hvm_ioreq_page  buf_ioreq;
+    struct hvm_ioreq_server *ioreq_server_list;
+    uint32_t		     nr_ioreq_server;
+    spinlock_t               ioreq_server_lock;
 
     struct pl_time         pl_time;
 
     struct hvm_io_handler *io_handler;
 
+    /* PCI Information */
+    struct pci_root_emul pci_root;
+
     /* Lock protects access to irq, vpic and vioapic. */
     spinlock_t             irq_lock;
     struct hvm_irq         irq;
diff --git a/xen/include/asm-x86/hvm/vcpu.h b/xen/include/asm-x86/hvm/vcpu.h
index 9d68ed2..812b16e 100644
--- a/xen/include/asm-x86/hvm/vcpu.h
+++ b/xen/include/asm-x86/hvm/vcpu.h
@@ -125,7 +125,9 @@ struct hvm_vcpu {
     spinlock_t          tm_lock;
     struct list_head    tm_list;
 
-    int                 xen_port;
+    struct hvm_ioreq_page	*ioreq;
+    /* PCI Information */
+    uint32_t		pci_cf8;
 
     bool_t              flag_dr_dirty;
     bool_t              debug_state_latch;
diff --git a/xen/include/public/hvm/hvm_op.h b/xen/include/public/hvm/hvm_op.h
index a9aab4b..6b17c5f 100644
--- a/xen/include/public/hvm/hvm_op.h
+++ b/xen/include/public/hvm/hvm_op.h
@@ -23,6 +23,9 @@
 
 #include "../xen.h"
 #include "../trace.h"
+#include "../event_channel.h"
+
+#include "hvm_info_table.h" /* HVM_MAX_VCPUS */
 
 /* Get/set subcommands: extra argument == pointer to xen_hvm_param struct. */
 #define HVMOP_set_param           0
@@ -238,6 +241,54 @@ struct xen_hvm_inject_trap {
 typedef struct xen_hvm_inject_trap xen_hvm_inject_trap_t;
 DEFINE_XEN_GUEST_HANDLE(xen_hvm_inject_trap_t);
 
+#define HVMOP_register_ioreq_server 20
+struct xen_hvm_register_ioreq_server {
+    domid_t domid;  /* IN - domain to be serviced */
+    ioservid_t id;  /* OUT - handle for identifying this server */
+};
+typedef struct xen_hvm_register_ioreq_server xen_hvm_register_ioreq_server_t;
+DEFINE_XEN_GUEST_HANDLE(xen_hvm_register_ioreq_server_t);
+
+#define HVMOP_get_ioreq_server_buf_channel 21
+struct xen_hvm_get_ioreq_server_buf_channel {
+    domid_t domid;	        /* IN - domain to be serviced */
+    ioservid_t id;	        /* IN - handle from HVMOP_register_ioreq_server */
+    evtchn_port_t channel;  /* OUT - buf ioreq channel */
+};
+typedef struct xen_hvm_get_ioreq_server_buf_channel xen_hvm_get_ioreq_server_buf_channel_t;
+DEFINE_XEN_GUEST_HANDLE(xen_hvm_get_ioreq_server_buf_channel_t);
+
+#define HVMOP_map_io_range_to_ioreq_server 22
+struct xen_hvm_map_io_range_to_ioreq_server {
+    domid_t domid;          /* IN - domain to be serviced */
+    int is_mmio;         /* IN - MMIO or port IO? */
+    ioservid_t id;          /* IN - handle from HVMOP_register_ioreq_server */
+    uint64_aligned_t s, e;  /* IN - inclusive start and end of range */
+};
+typedef struct xen_hvm_map_io_range_to_ioreq_server xen_hvm_map_io_range_to_ioreq_server_t;
+DEFINE_XEN_GUEST_HANDLE(xen_hvm_map_io_range_to_ioreq_server_t);
+
+#define HVMOP_unmap_io_range_from_ioreq_server 23
+struct xen_hvm_unmap_io_range_from_ioreq_server {
+    domid_t domid;          /* IN - domain to be serviced */
+    uint8_t is_mmio;        /* IN - MMIO or port IO? */
+    ioservid_t id;          /* IN - handle from HVMOP_register_ioreq_server */
+    uint64_aligned_t addr;  /* IN - address inside the range to remove */
+};
+typedef struct xen_hvm_unmap_io_range_from_ioreq_server xen_hvm_unmap_io_range_from_ioreq_server_t;
+DEFINE_XEN_GUEST_HANDLE(xen_hvm_unmap_io_range_from_ioreq_server_t);
+
+#define HVMOP_register_pcidev 24
+struct xen_hvm_register_pcidev {
+    domid_t domid;	   /* IN - domain to be serviced */
+    ioservid_t id;	   /* IN - handle from HVMOP_register_ioreq_server */
+    /* IN - PCI identification in PCI topology (domain:bus:device:function) */
+    uint8_t domain, bus, device, function;
+};
+typedef struct xen_hvm_register_pcidev xen_hvm_register_pcidev_t;
+DEFINE_XEN_GUEST_HANDLE(xen_hvm_register_pcidev_t);
+
+
 #endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
 
 #define HVMOP_get_mem_type    15
diff --git a/xen/include/public/hvm/ioreq.h b/xen/include/public/hvm/ioreq.h
index 4022a1d..87aacd3 100644
--- a/xen/include/public/hvm/ioreq.h
+++ b/xen/include/public/hvm/ioreq.h
@@ -34,6 +34,7 @@
 
 #define IOREQ_TYPE_PIO          0 /* pio */
 #define IOREQ_TYPE_COPY         1 /* mmio ops */
+#define IOREQ_TYPE_PCI_CONFIG   2 /* pci config space ops */
 #define IOREQ_TYPE_TIMEOFFSET   7
 #define IOREQ_TYPE_INVALIDATE   8 /* mapcache */
 
diff --git a/xen/include/public/hvm/params.h b/xen/include/public/hvm/params.h
index 55c1b57..309ac1b 100644
--- a/xen/include/public/hvm/params.h
+++ b/xen/include/public/hvm/params.h
@@ -147,6 +147,10 @@
 #define HVM_PARAM_ACCESS_RING_PFN   28
 #define HVM_PARAM_SHARING_RING_PFN  29
 
-#define HVM_NR_PARAMS          30
+/* Param for ioreq servers */
+#define HVM_PARAM_IO_PFN_FIRST	30
+#define HVM_PARAM_IO_PFN_LAST	31
+
+#define HVM_NR_PARAMS          32
 
 #endif /* __XEN_PUBLIC_HVM_PARAMS_H__ */
diff --git a/xen/include/public/xen.h b/xen/include/public/xen.h
index b2f6c50..0de17b2 100644
--- a/xen/include/public/xen.h
+++ b/xen/include/public/xen.h
@@ -466,6 +466,7 @@ DEFINE_XEN_GUEST_HANDLE(mmuext_op_t);
 #ifndef __ASSEMBLY__
 
 typedef uint16_t domid_t;
+typedef uint32_t ioservid_t;
 
 /* Domain ids >= DOMID_FIRST_RESERVED cannot be used for ordinary domains. */
 #define DOMID_FIRST_RESERVED (0x7FF0U)
diff --git a/xen/include/xen/hvm/pci_emul.h b/xen/include/xen/hvm/pci_emul.h
new file mode 100644
index 0000000..4dfb577
--- /dev/null
+++ b/xen/include/xen/hvm/pci_emul.h
@@ -0,0 +1,29 @@
+#ifndef PCI_EMUL_H_
+# define PCI_EMUL_H_
+
+# include <xen/radix-tree.h>
+# include <xen/spinlock.h>
+# include <xen/types.h>
+
+void hvm_init_pci_emul(struct domain *d);
+void hvm_destroy_pci_emul(struct domain *d);
+int hvm_register_pcidev(domid_t domid, ioservid_t id,
+                        uint8_t domain, uint8_t bus,
+                        uint8_t device, uint8_t function);
+
+struct pci_root_emul {
+    spinlock_t pci_lock;
+    struct radix_tree_root pci_list;
+};
+
+#endif /* !PCI_EMUL_H_ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-set-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
Julien Grall


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 18:55:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 18:55:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4G5E-0006hb-Oy; Wed, 22 Aug 2012 18:55:16 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@citrix.com>) id 1T4G5C-0006fp-G3
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 18:55:14 +0000
Received: from [85.158.138.51:37610] by server-7.bemta-3.messagelabs.com id
	5C/49-01906-11B25305; Wed, 22 Aug 2012 18:55:13 +0000
X-Env-Sender: julien.grall@citrix.com
X-Msg-Ref: server-5.tower-174.messagelabs.com!1345661707!27566656!3
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzMzNjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9724 invoked from network); 22 Aug 2012 18:55:12 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 18:55:12 -0000
X-IronPort-AV: E=Sophos;i="4.80,809,1344225600"; d="scan'208";a="205943118"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 14:55:12 -0400
Received: from meteora.cam.xci-test.com (10.80.248.22) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 22 Aug 2012 14:55:11 -0400
From: Julien Grall <julien.grall@citrix.com>
To: qemu-devel@nongnu.org
Date: Wed, 22 Aug 2012 13:31:51 +0100
Message-ID: <c378b04ee29071c1d6d68bd3ef48fedadb493b10.1345552068.git.julien.grall@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <cover.1345552068.git.julien.grall@citrix.com>
References: <cover.1345552068.git.julien.grall@citrix.com>
MIME-Version: 1.0
Cc: Julien Grall <julien.grall@citrix.com>, christian.limpach@gmail.com,
	Stefano.Stabellini@eu.citrix.com, xen-devel@lists.xen.org
Subject: [Xen-devel] [XEN][RFC PATCH V2 05/17] hvm: Modify hvm_op
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch removes HVM params made useless by the structure changes
and binds the new hypercalls that handle ioreq servers and PCI.
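
One behavioural detail of the new param handling: HVM_PARAM_IO_PFN_LAST
is rejected with -EINVAL once it already holds a value, so the ioreq
pfn window cannot be moved after setup. A standalone sketch of that
write-once check (helper name and the zero-means-unset convention are
assumptions for illustration):

```c
#include <stdint.h>
#include <errno.h>

/* Accept the first write to the param; refuse any later attempt so
 * ioreq servers that already mapped their pages keep a stable window. */
static int set_io_pfn_last(uint64_t *param, uint64_t value)
{
    if ( *param != 0 )      /* already set once */
        return -EINVAL;
    *param = value;
    return 0;
}
```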

Signed-off-by: Julien Grall <julien.grall@citrix.com>
---
 xen/arch/x86/hvm/hvm.c          |  150 +++++++++++++++++++++------------------
 xen/include/public/hvm/params.h |    5 --
 2 files changed, 81 insertions(+), 74 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 292d57b..a2cd9b3 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -571,8 +571,7 @@ int hvm_domain_initialise(struct domain *d)
 
     register_portio_handler(d, 0xe9, 1, hvm_print_line);
 
-    if ( hvm_init_pci_emul(d) )
-        goto fail2;
+    hvm_init_pci_emul(d);
 
     rc = hvm_funcs.domain_initialise(d);
     if ( rc != 0 )
@@ -650,6 +649,7 @@ void hvm_domain_relinquish_resources(struct domain *d)
 {
     hvm_destroy_ioreq_servers(d);
     hvm_destroy_pci_emul(d);
+    hvm_destroy_ioreq_page(d, &d->arch.hvm_domain.ioreq);
 
     msixtbl_pt_cleanup(d);
 
@@ -3742,21 +3742,6 @@ static int hvmop_flush_tlb_all(void)
     return 0;
 }
 
-static int hvm_replace_event_channel(struct vcpu *v, domid_t remote_domid,
-                                     int *p_port)
-{
-    int old_port, new_port;
-
-    new_port = alloc_unbound_xen_event_channel(v, remote_domid, NULL);
From xen-devel-bounces@lists.xen.org Wed Aug 22 18:55:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 18:55:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4G5E-0006hb-Oy; Wed, 22 Aug 2012 18:55:16 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@citrix.com>) id 1T4G5C-0006fp-G3
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 18:55:14 +0000
Received: from [85.158.138.51:37610] by server-7.bemta-3.messagelabs.com id
	5C/49-01906-11B25305; Wed, 22 Aug 2012 18:55:13 +0000
X-Env-Sender: julien.grall@citrix.com
X-Msg-Ref: server-5.tower-174.messagelabs.com!1345661707!27566656!3
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzMzNjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9724 invoked from network); 22 Aug 2012 18:55:12 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 18:55:12 -0000
X-IronPort-AV: E=Sophos;i="4.80,809,1344225600"; d="scan'208";a="205943118"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 14:55:12 -0400
Received: from meteora.cam.xci-test.com (10.80.248.22) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 22 Aug 2012 14:55:11 -0400
From: Julien Grall <julien.grall@citrix.com>
To: qemu-devel@nongnu.org
Date: Wed, 22 Aug 2012 13:31:51 +0100
Message-ID: <c378b04ee29071c1d6d68bd3ef48fedadb493b10.1345552068.git.julien.grall@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <cover.1345552068.git.julien.grall@citrix.com>
References: <cover.1345552068.git.julien.grall@citrix.com>
MIME-Version: 1.0
Cc: Julien Grall <julien.grall@citrix.com>, christian.limpach@gmail.com,
	Stefano.Stabellini@eu.citrix.com, xen-devel@lists.xen.org
Subject: [Xen-devel] [XEN][RFC PATCH V2 05/17] hvm: Modify hvm_op
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch removes HVM parameters made useless by the structure
modifications and binds new hypercalls to handle ioreq servers and PCI.

Signed-off-by: Julien Grall <julien.grall@citrix.com>
---
 xen/arch/x86/hvm/hvm.c          |  150 +++++++++++++++++++++------------------
 xen/include/public/hvm/params.h |    5 --
 2 files changed, 81 insertions(+), 74 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 292d57b..a2cd9b3 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -571,8 +571,7 @@ int hvm_domain_initialise(struct domain *d)
 
     register_portio_handler(d, 0xe9, 1, hvm_print_line);
 
-    if ( hvm_init_pci_emul(d) )
-        goto fail2;
+    hvm_init_pci_emul(d);
 
     rc = hvm_funcs.domain_initialise(d);
     if ( rc != 0 )
@@ -650,6 +649,7 @@ void hvm_domain_relinquish_resources(struct domain *d)
 {
     hvm_destroy_ioreq_servers(d);
     hvm_destroy_pci_emul(d);
+    hvm_destroy_ioreq_page(d, &d->arch.hvm_domain.ioreq);
 
     msixtbl_pt_cleanup(d);
 
@@ -3742,21 +3742,6 @@ static int hvmop_flush_tlb_all(void)
     return 0;
 }
 
-static int hvm_replace_event_channel(struct vcpu *v, domid_t remote_domid,
-                                     int *p_port)
-{
-    int old_port, new_port;
-
-    new_port = alloc_unbound_xen_event_channel(v, remote_domid, NULL);
-    if ( new_port < 0 )
-        return new_port;
-
-    /* xchg() ensures that only we call free_xen_event_channel(). */
-    old_port = xchg(p_port, new_port);
-    free_xen_event_channel(v, old_port);
-    return 0;
-}
-
 static int hvm_alloc_ioreq_server_page(struct domain *d,
                                        struct hvm_ioreq_server *s,
                                        struct hvm_ioreq_page *pfn,
@@ -4041,7 +4026,6 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE(void) arg)
     case HVMOP_get_param:
     {
         struct xen_hvm_param a;
-        struct hvm_ioreq_page *iorp;
         struct domain *d;
         struct vcpu *v;
 
@@ -4069,20 +4053,12 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE(void) arg)
 
             switch ( a.index )
             {
-            case HVM_PARAM_IOREQ_PFN:
-                iorp = &d->arch.hvm_domain.ioreq;
-                if ( (rc = hvm_set_ioreq_page(d, iorp, a.value)) != 0 )
-                    break;
-                spin_lock(&iorp->lock);
-                if ( iorp->va != NULL )
-                    /* Initialise evtchn port info if VCPUs already created. */
-                    for_each_vcpu ( d, v )
-                        get_ioreq(v)->vp_eport = v->arch.hvm_vcpu.xen_port;
-                spin_unlock(&iorp->lock);
+            case HVM_PARAM_IO_PFN_FIRST:
+                rc = hvm_set_ioreq_page(d, &d->arch.hvm_domain.ioreq, a.value);
                 break;
-            case HVM_PARAM_BUFIOREQ_PFN: 
-                iorp = &d->arch.hvm_domain.buf_ioreq;
-                rc = hvm_set_ioreq_page(d, iorp, a.value);
+            case HVM_PARAM_IO_PFN_LAST:
+                if ( (d->arch.hvm_domain.params[HVM_PARAM_IO_PFN_LAST]) )
+                    rc = -EINVAL;
                 break;
             case HVM_PARAM_CALLBACK_IRQ:
                 hvm_set_callback_via(d, a.value);
@@ -4128,41 +4104,6 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE(void) arg)
 
                 domctl_lock_release();
                 break;
-            case HVM_PARAM_DM_DOMAIN:
-                /* Not reflexive, as we must domain_pause(). */
-                rc = -EPERM;
-                if ( curr_d == d )
-                    break;
-
-                if ( a.value == DOMID_SELF )
-                    a.value = curr_d->domain_id;
-
-                rc = 0;
-                domain_pause(d); /* safe to change per-vcpu xen_port */
-                if ( d->vcpu[0] )
-                    rc = hvm_replace_event_channel(d->vcpu[0], a.value,
-                             (int *)&d->vcpu[0]->domain->arch.hvm_domain.params
-                                     [HVM_PARAM_BUFIOREQ_EVTCHN]);
-                if ( rc )
-                {
-                    domain_unpause(d);
-                    break;
-                }
-                iorp = &d->arch.hvm_domain.ioreq;
-                for_each_vcpu ( d, v )
-                {
-                    rc = hvm_replace_event_channel(v, a.value,
-                                                   &v->arch.hvm_vcpu.xen_port);
-                    if ( rc )
-                        break;
-
-                    spin_lock(&iorp->lock);
-                    if ( iorp->va != NULL )
-                        get_ioreq(v)->vp_eport = v->arch.hvm_vcpu.xen_port;
-                    spin_unlock(&iorp->lock);
-                }
-                domain_unpause(d);
-                break;
             case HVM_PARAM_ACPI_S_STATE:
                 /* Not reflexive, as we must domain_pause(). */
                 rc = -EPERM;
@@ -4213,9 +4154,6 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE(void) arg)
                         if ( rc == 0 )
                             rc = nestedhvm_vcpu_initialise(v);
                 break;
-            case HVM_PARAM_BUFIOREQ_EVTCHN:
-                rc = -EINVAL;
-                break;
             }
 
             if ( rc == 0 ) 
@@ -4669,6 +4607,80 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE(void) arg)
         break;
     }
 
+    case HVMOP_register_ioreq_server:
+    {
+        struct xen_hvm_register_ioreq_server a;
+
+        if ( copy_from_guest(&a, arg, 1) )
+            return -EFAULT;
+
+        rc = hvmop_register_ioreq_server(&a);
+        if ( rc != 0 )
+            return rc;
+
+        rc = copy_to_guest(arg, &a, 1) ? -EFAULT : 0;
+        break;
+    }
+
+    case HVMOP_get_ioreq_server_buf_channel:
+    {
+        struct xen_hvm_get_ioreq_server_buf_channel a;
+
+        if ( copy_from_guest(&a, arg, 1) )
+            return -EFAULT;
+
+        rc = hvmop_get_ioreq_server_buf_channel(&a);
+        if ( rc != 0 )
+            return rc;
+
+        rc = copy_to_guest(arg, &a, 1) ? -EFAULT : 0;
+
+        break;
+    }
+
+    case HVMOP_map_io_range_to_ioreq_server:
+    {
+        struct xen_hvm_map_io_range_to_ioreq_server a;
+
+        if ( copy_from_guest(&a, arg, 1) )
+            return -EFAULT;
+
+        rc = hvmop_map_io_range_to_ioreq_server(&a);
+        if ( rc != 0 )
+            return rc;
+
+        break;
+    }
+
+    case HVMOP_unmap_io_range_from_ioreq_server:
+    {
+        struct xen_hvm_unmap_io_range_from_ioreq_server a;
+
+        if ( copy_from_guest(&a, arg, 1) )
+            return -EFAULT;
+
+        rc = hvmop_unmap_io_range_from_ioreq_server(&a);
+        if ( rc != 0 )
+            return rc;
+
+        break;
+    }
+
+    case HVMOP_register_pcidev:
+    {
+        struct xen_hvm_register_pcidev a;
+
+        if ( copy_from_guest(&a, arg, 1) )
+            return -EFAULT;
+
+        rc = hvm_register_pcidev(a.domid, a.id, a.domain,
+                                 a.bus, a.device, a.function);
+        if ( rc != 0 )
+            return rc;
+
+        break;
+    }
+
     default:
     {
         gdprintk(XENLOG_DEBUG, "Bad HVM op %ld.\n", op);
diff --git a/xen/include/public/hvm/params.h b/xen/include/public/hvm/params.h
index 309ac1b..017493b 100644
--- a/xen/include/public/hvm/params.h
+++ b/xen/include/public/hvm/params.h
@@ -49,11 +49,6 @@
 
 #define HVM_PARAM_PAE_ENABLED  4
 
-#define HVM_PARAM_IOREQ_PFN    5
-
-#define HVM_PARAM_BUFIOREQ_PFN 6
-#define HVM_PARAM_BUFIOREQ_EVTCHN 26
-
 #ifdef __ia64__
 
 #define HVM_PARAM_NVRAM_FD     7
-- 
Julien Grall


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 18:55:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 18:55:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4G5F-0006i2-6k; Wed, 22 Aug 2012 18:55:17 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@citrix.com>) id 1T4G5D-0006gF-9G
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 18:55:15 +0000
Received: from [85.158.138.51:37636] by server-4.bemta-3.messagelabs.com id
	A6/0D-04276-21B25305; Wed, 22 Aug 2012 18:55:14 +0000
X-Env-Sender: julien.grall@citrix.com
X-Msg-Ref: server-5.tower-174.messagelabs.com!1345661707!27566656!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzMzNjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9667 invoked from network); 22 Aug 2012 18:55:10 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 18:55:10 -0000
X-IronPort-AV: E=Sophos;i="4.80,809,1344225600"; d="scan'208";a="205943112"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 14:55:09 -0400
Received: from meteora.cam.xci-test.com (10.80.248.22) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 22 Aug 2012 14:55:08 -0400
From: Julien Grall <julien.grall@citrix.com>
To: qemu-devel@nongnu.org
Date: Wed, 22 Aug 2012 13:31:49 +0100
Message-ID: <3921d4d38a5c20943af1ceb64f5f0691d7bfd702.1345552068.git.julien.grall@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <cover.1345552068.git.julien.grall@citrix.com>
References: <cover.1345552068.git.julien.grall@citrix.com>
MIME-Version: 1.0
Cc: Julien Grall <julien.grall@citrix.com>, christian.limpach@gmail.com,
	Stefano.Stabellini@eu.citrix.com, xen-devel@lists.xen.org
Subject: [Xen-devel] [XEN][RFC PATCH V2 03/17] hvm-pci: Handle PCI config
	space in Xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add a function to register a BDF with an ioreq server.
A handler is added to catch ioport accesses in the 0xcf8 -> 0xcff range.
When Xen receives a PIO for 0xcf8, it stores the value in the current
vcpu until it receives a PIO for 0xcfc -> 0xcff. At that point, it
checks whether the BDF is registered and forges the ioreq that will
later be forwarded to the server.

Signed-off-by: Julien Grall <julien.grall@citrix.com>
---
 xen/arch/x86/hvm/Makefile   |    1 +
 xen/arch/x86/hvm/pci_emul.c |  168 +++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 169 insertions(+), 0 deletions(-)
 create mode 100644 xen/arch/x86/hvm/pci_emul.c

diff --git a/xen/arch/x86/hvm/Makefile b/xen/arch/x86/hvm/Makefile
index eea5555..585e9c9 100644
--- a/xen/arch/x86/hvm/Makefile
+++ b/xen/arch/x86/hvm/Makefile
@@ -12,6 +12,7 @@ obj-y += irq.o
 obj-y += mtrr.o
 obj-y += nestedhvm.o
 obj-y += pmtimer.o
+obj-y += pci_emul.o
 obj-y += quirks.o
 obj-y += rtc.o
 obj-y += save.o
diff --git a/xen/arch/x86/hvm/pci_emul.c b/xen/arch/x86/hvm/pci_emul.c
new file mode 100644
index 0000000..48456dd
--- /dev/null
+++ b/xen/arch/x86/hvm/pci_emul.c
@@ -0,0 +1,168 @@
+#include <asm/hvm/support.h>
+#include <xen/hvm/pci_emul.h>
+#include <xen/pci.h>
+#include <xen/sched.h>
+#include <xen/xmalloc.h>
+
+#define PCI_DEBUGSTR "%x:%x.%x"
+#define PCI_DEBUG(bdf) ((bdf) >> 8) & 0xff, ((bdf) >> 3) & 0x1f, ((bdf)) & 0x7
+#define PCI_MASK_BDF(bdf) (((bdf) & 0x00ffff00) >> 8)
+#define PCI_CMP_BDF(Pci, Bdf) ((pci)->bdf == PCI_MASK_BDF(Bdf))
+
+static int handle_config_space(int dir, uint32_t port, uint32_t bytes,
+                               uint32_t *val)
+{
+    uint32_t pci_cf8;
+    struct hvm_ioreq_server *s;
+    ioreq_t *p = get_ioreq(current);
+    int rc = X86EMUL_UNHANDLEABLE;
+    struct vcpu *v = current;
+
+    if ( port == 0xcf8 && bytes == 4 )
+    {
+        if ( dir == IOREQ_READ )
+            *val = v->arch.hvm_vcpu.pci_cf8;
+        else
+            v->arch.hvm_vcpu.pci_cf8 = *val;
+        return X86EMUL_OKAY;
+    }
+    else if ( port < 0xcfc )
+        return X86EMUL_UNHANDLEABLE;
+
+    spin_lock(&v->domain->arch.hvm_domain.pci_root.pci_lock);
+    spin_lock(&v->domain->arch.hvm_domain.ioreq_server_lock);
+
+    pci_cf8 = v->arch.hvm_vcpu.pci_cf8;
+
+    /* Retrieve PCI */
+    s = radix_tree_lookup(&v->domain->arch.hvm_domain.pci_root.pci_list,
+                          PCI_MASK_BDF(pci_cf8));
+
+    if ( unlikely(s == NULL) )
+    {
+        *val = ~0;
+        rc = X86EMUL_OKAY;
+        goto end_handle;
+    }
+
+    /**
+     * We just fill the ioreq, hvm_send_assist_req will send the request
+     * The size is used to find the right access
+     **/
+    /* We use the 16 high-bits for the offset (0 => 0xcfc, 1 => 0xcfd...) */
+    p->size = (p->addr - 0xcfc) << 16 | (p->size & 0xffff);
+    p->type = IOREQ_TYPE_PCI_CONFIG;
+    p->addr = pci_cf8;
+
+    set_ioreq(v, &s->ioreq, p);
+
+end_handle:
+    spin_unlock(&v->domain->arch.hvm_domain.ioreq_server_lock);
+    spin_unlock(&v->domain->arch.hvm_domain.pci_root.pci_lock);
+
+    return rc;
+}
+
+int hvm_register_pcidev(domid_t domid, ioservid_t id,
+                        uint8_t domain, uint8_t bus,
+                        uint8_t device, uint8_t function)
+{
+    struct domain *d;
+    struct hvm_ioreq_server *s;
+    int rc = 0;
+    struct radix_tree_root *tree;
+    uint16_t bdf = 0;
+
+    /* For the moment we don't handle pci when domain != 0 */
+    if ( domain != 0 )
+        return -EINVAL;
+
+    rc = rcu_lock_target_domain_by_id(domid, &d);
+
+    if ( rc != 0 )
+        return rc;
+
+    if ( !is_hvm_domain(d) )
+    {
+        rcu_unlock_domain(d);
+        return -EINVAL;
+    }
+
+    /* Search server */
+    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
+    s = d->arch.hvm_domain.ioreq_server_list;
+    while ( (s != NULL) && (s->id != id) )
+        s = s->next;
+
+    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
+
+    if ( s == NULL )
+    {
+        gdprintk(XENLOG_ERR, "Cannot find server %u\n", id);
+        rc = -ENOENT;
+        goto fail;
+    }
+
+    spin_lock(&d->arch.hvm_domain.pci_root.pci_lock);
+
+    tree = &d->arch.hvm_domain.pci_root.pci_list;
+
+    bdf |= ((uint16_t)bus) << 8;
+    bdf |= ((uint16_t)device & 0x1f) << 3;
+    bdf |= ((uint16_t)function & 0x7);
+
+    if ( radix_tree_lookup(tree, bdf) )
+    {
+        rc = -EEXIST;
+        gdprintk(XENLOG_ERR, "Bdf " PCI_DEBUGSTR " is already allocated\n",
+                 PCI_DEBUG(bdf));
+        goto create_end;
+    }
+
+    rc = radix_tree_insert(tree, bdf, s);
+    if ( rc )
+    {
+        gdprintk(XENLOG_ERR, "Cannot insert the bdf\n");
+        goto create_end;
+    }
+
+create_end:
+    spin_unlock(&d->arch.hvm_domain.pci_root.pci_lock);
+fail:
+    rcu_unlock_domain(d);
+
+    return rc;
+}
+
+void hvm_init_pci_emul(struct domain *d)
+{
+    struct pci_root_emul *root = &d->arch.hvm_domain.pci_root;
+
+    spin_lock_init(&root->pci_lock);
+
+    radix_tree_init(&root->pci_list);
+
+    /* Register the config space handler */
+    register_portio_handler(d, 0xcf8, 8, handle_config_space);
+}
+
+void hvm_destroy_pci_emul(struct domain *d)
+{
+    struct pci_root_emul *root = &d->arch.hvm_domain.pci_root;
+
+    spin_lock(&root->pci_lock);
+
+    radix_tree_destroy(&root->pci_list, NULL);
+
+    spin_unlock(&root->pci_lock);
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-set-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
Julien Grall


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 18:55:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 18:55:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4G5F-0006iN-IM; Wed, 22 Aug 2012 18:55:17 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@citrix.com>) id 1T4G5D-0006dC-HU
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 18:55:15 +0000
X-Env-Sender: julien.grall@citrix.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1345661706!10440772!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjU4OTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10268 invoked from network); 22 Aug 2012 18:55:07 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 18:55:07 -0000
X-IronPort-AV: E=Sophos;i="4.80,809,1344225600"; d="scan'208";a="35484777"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 14:55:05 -0400
Received: from meteora.cam.xci-test.com (10.80.248.22) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 22 Aug 2012 14:55:05 -0400
From: Julien Grall <julien.grall@citrix.com>
To: qemu-devel@nongnu.org
Date: Wed, 22 Aug 2012 13:31:46 +0100
Message-ID: <cover.1345552068.git.julien.grall@citrix.com>
X-Mailer: git-send-email 1.7.2.5
MIME-Version: 1.0
Cc: Julien Grall <julien.grall@citrix.com>, christian.limpach@gmail.com,
	Stefano.Stabellini@eu.citrix.com, xen-devel@lists.xen.org
Subject: [Xen-devel] [XEN][RFC PATCH V2 00/17] QEMU disaggregation in Xen
	environment
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello,

This patch series only concerns Xen. A separate series will follow for QEMU.

I'm currently working on QEMU disaggregation in a Xen environment. The
goal is to be able to run multiple QEMU instances for the same domain
(http://lists.xen.org/archives/html/xen-devel/2012-03/msg00299.html).

I already sent a version of this patch series a few months ago:
    - QEMU: https://lists.gnu.org/archive/html/qemu-devel/2012-03/msg04401.html
    - Xen: http://lists.xen.org/archives/html/xen-devel/2012-03/msg01947.html
Based on the feedback received, I have improved both the QEMU and Xen
modifications. As before, I will send two patch series: one for QEMU, the
other for Xen.

Full disaggregation (one device = one QEMU) is not possible because many
devices depend on each other. With Stefano's help, I have defined the
following as a possible disaggregation:
    - ui: emulates the default devices (root bridge, south bridge), VGA,
    keyboard, mouse and USB
    - audio: emulates audio
    - ide: emulates disks
    - serial: emulates the serial port
    - net: it is possible to have multiple QEMUs, each emulating one or
    more network cards

Of course, the same QEMU can emulate both ui and audio. Old configuration
files with qemu-xen still work.
The patch series adds a new option, "device_models".

Example:

builder='hvm'
memory = 1024
name = "Debian"
vcpus=1
vif = [ 'type=ioemu, bridge=eth0, mac=00:16:3e:0e:f5:ef, id=nic1' ]
disk = [ 'tap:tapdisk:qcow2:/home/xentest/works/vms/debian.img,xvda,w' ]
device_model_override = '/home/xentest/works/qemu-devel/qemu-wrapper'
device_model_version = 'qemu-xen'
device_models = [ 'name=qnet,vifs=nic1',
                  'name=qall,ui,ide' ]

It is possible to override the device model path for each device model,
which can be useful for debugging. For instance:
'name=qnet,vifs=nic1,path=/my/path/wrapper'.
The option "name" is used for the log filename and for debugging; if it is
not specified, a number is used instead.

Modifications between V1 and V2:
    - rewrite the libxl patch according to the new API
    - improve the user experience with the configuration file (avoid
      having to specify a bdf)
    - improve the PCI hypercall: use domain, bus, device, function
      instead of a bdf
    - fix the PCI config space handler
    - remove unused HVM parameters
    - handle save/restore

Drawbacks:
    - PCI hotplug doesn't work
    - stubdomains don't work because the old QEMU has not been modified
      for disaggregation. That said, it works with the XenClient stubdomain
    - Which QEMU needs to emulate the Xen Platform device? It is mainly
      used to unplug network cards and disks

Possible improvements:
    - As with the HVM get-parameter interface, introduce a hypercall to
      retrieve the shared pages. For the moment the server id is used
    - Specify whether or not a buffered I/O shared page is wanted (an
      idea from Christian Limpach)

I have not tested all configurations. Comments, bug reports, etc. are welcome.

Julien Grall (17):
  hvm: Modify interface to support multiple ioreq server
  hvm: Add functions to handle ioreq servers
  hvm-pci: Handle PCI config space in Xen
  hvm: Change initialization/destruction of an hvm
  hvm: Modify hvm_op
  hvm-io: IO refactoring with ioreq server
  hvm-io: send invalidate map cache to each registered servers
  hvm-io: Handle server in buffered IO
  xc: Add the hypercall for multiple servers
  xc: Add argument to allocate more special pages
  xc: modify save/restore to support multiple device models
  xl: Add interface to handle qemu disaggregation
  xl: add device model id to qmp functions
  xl-parsing: Parse new device_models option
  xl: support spawn/destroy on multiple device model
  xl: Fix PCI library
  xl: implement save/restore for multiple device models

 tools/libxc/xc_domain.c           |  155 ++++++++++
 tools/libxc/xc_domain_restore.c   |  150 ++++++++---
 tools/libxc/xc_domain_save.c      |    6 +-
 tools/libxc/xc_hvm_build_x86.c    |   59 ++--
 tools/libxc/xenctrl.h             |   21 ++
 tools/libxc/xenguest.h            |    4 +-
 tools/libxl/Makefile              |    2 +-
 tools/libxl/libxl.c               |   21 +-
 tools/libxl/libxl.h               |    3 +
 tools/libxl/libxl_create.c        |  150 ++++++++---
 tools/libxl/libxl_device.c        |    7 +-
 tools/libxl/libxl_dm.c            |  369 +++++++++++++++++-------
 tools/libxl/libxl_dom.c           |  147 ++++++++--
 tools/libxl/libxl_internal.h      |   76 ++++--
 tools/libxl/libxl_pci.c           |   19 +-
 tools/libxl/libxl_qmp.c           |   49 ++--
 tools/libxl/libxl_types.idl       |   15 +
 tools/libxl/libxlu_dm.c           |   96 +++++++
 tools/libxl/libxlutil.h           |    5 +
 tools/libxl/xl_cmdimpl.c          |   29 ++-
 tools/python/xen/lowlevel/xc/xc.c |    3 +-
 xen/arch/x86/hvm/Makefile         |    1 +
 xen/arch/x86/hvm/emulate.c        |   56 ++++
 xen/arch/x86/hvm/hvm.c            |  567 +++++++++++++++++++++++++++++++------
 xen/arch/x86/hvm/io.c             |   90 +++++--
 xen/arch/x86/hvm/pci_emul.c       |  168 +++++++++++
 xen/include/asm-x86/hvm/domain.h  |   25 ++-
 xen/include/asm-x86/hvm/support.h |   26 ++-
 xen/include/asm-x86/hvm/vcpu.h    |    4 +-
 xen/include/public/hvm/hvm_op.h   |   51 ++++
 xen/include/public/hvm/ioreq.h    |    1 +
 xen/include/public/hvm/params.h   |   11 +-
 xen/include/public/xen.h          |    1 +
 xen/include/xen/hvm/pci_emul.h    |   29 ++
 34 files changed, 1986 insertions(+), 430 deletions(-)
 create mode 100644 tools/libxl/libxlu_dm.c
 create mode 100644 xen/arch/x86/hvm/pci_emul.c
 create mode 100644 xen/include/xen/hvm/pci_emul.h

-- 
Julien Grall


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 18:55:22 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 18:55:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4G5H-0006ko-JG; Wed, 22 Aug 2012 18:55:19 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@citrix.com>) id 1T4G5G-0006iH-09
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 18:55:18 +0000
Received: from [85.158.138.51:39729] by server-9.bemta-3.messagelabs.com id
	11/24-23952-51B25305; Wed, 22 Aug 2012 18:55:17 +0000
X-Env-Sender: julien.grall@citrix.com
X-Msg-Ref: server-5.tower-174.messagelabs.com!1345661707!27566656!4
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzMzNjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9836 invoked from network); 22 Aug 2012 18:55:16 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 18:55:16 -0000
X-IronPort-AV: E=Sophos;i="4.80,809,1344225600"; d="scan'208";a="205943123"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 14:55:15 -0400
Received: from meteora.cam.xci-test.com (10.80.248.22) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 22 Aug 2012 14:55:15 -0400
From: Julien Grall <julien.grall@citrix.com>
To: qemu-devel@nongnu.org
Date: Wed, 22 Aug 2012 13:31:54 +0100
Message-ID: <ceab22f2150107af78fd8134b4dee9020a1aaf41.1345552068.git.julien.grall@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <cover.1345552068.git.julien.grall@citrix.com>
References: <cover.1345552068.git.julien.grall@citrix.com>
MIME-Version: 1.0
Cc: Julien Grall <julien.grall@citrix.com>, christian.limpach@gmail.com,
	Stefano.Stabellini@eu.citrix.com, xen-devel@lists.xen.org
Subject: [Xen-devel] [XEN][RFC PATCH V2 08/17] hvm-io: Handle server in
	buffered IO
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

As for normal IO, Xen walks the registered ranges to find which server is
able to handle the IO.
There is a special case for IOREQ_TYPE_TIMEOFFSET: this IO must be sent to
all servers.
For this purpose, a new function, hvm_buffered_io_send_to_server, was
introduced; it sends an IO to a specific server.

Signed-off-by: Julien Grall <julien.grall@citrix.com>
---
 xen/arch/x86/hvm/io.c |   75 +++++++++++++++++++++++++++++++++++++-----------
 1 files changed, 58 insertions(+), 17 deletions(-)

diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c
index b73a462..6e0160c 100644
--- a/xen/arch/x86/hvm/io.c
+++ b/xen/arch/x86/hvm/io.c
@@ -46,28 +46,17 @@
 #include <xen/iocap.h>
 #include <public/hvm/ioreq.h>
 
-int hvm_buffered_io_send(ioreq_t *p)
+static int hvm_buffered_io_send_to_server(ioreq_t *p, struct hvm_ioreq_server *s)
 {
     struct vcpu *v = current;
-    struct hvm_ioreq_page *iorp = &v->domain->arch.hvm_domain.buf_ioreq;
-    buffered_iopage_t *pg = iorp->va;
+    struct hvm_ioreq_page *iorp;
+    buffered_iopage_t *pg;
     buf_ioreq_t bp;
     /* Timeoffset sends 64b data, but no address. Use two consecutive slots. */
     int qw = 0;
 
-    /* Ensure buffered_iopage fits in a page */
-    BUILD_BUG_ON(sizeof(buffered_iopage_t) > PAGE_SIZE);
-
-    /*
-     * Return 0 for the cases we can't deal with:
-     *  - 'addr' is only a 20-bit field, so we cannot address beyond 1MB
-     *  - we cannot buffer accesses to guest memory buffers, as the guest
-     *    may expect the memory buffer to be synchronously accessed
-     *  - the count field is usually used with data_is_ptr and since we don't
-     *    support data_is_ptr we do not waste space for the count field either
-     */
-    if ( (p->addr > 0xffffful) || p->data_is_ptr || (p->count != 1) )
-        return 0;
+    iorp = &s->buf_ioreq;
+    pg = iorp->va;
 
     bp.type = p->type;
     bp.dir  = p->dir;
@@ -119,12 +108,64 @@ int hvm_buffered_io_send(ioreq_t *p)
     pg->write_pointer += qw ? 2 : 1;
 
     notify_via_xen_event_channel(v->domain,
-            v->domain->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_EVTCHN]);
+                                 s->buf_ioreq_evtchn);
     spin_unlock(&iorp->lock);
     
     return 1;
 }
 
+int hvm_buffered_io_send(ioreq_t *p)
+{
+    struct vcpu *v = current;
+    struct hvm_ioreq_server *s;
+    int rc = 1;
+
+    /* Ensure buffered_iopage fits in a page */
+    BUILD_BUG_ON(sizeof(buffered_iopage_t) > PAGE_SIZE);
+
+    /*
+     * Return 0 for the cases we can't deal with:
+     *  - 'addr' is only a 20-bit field, so we cannot address beyond 1MB
+     *  - we cannot buffer accesses to guest memory buffers, as the guest
+     *    may expect the memory buffer to be synchronously accessed
+     *  - the count field is usually used with data_is_ptr and since we don't
+     *    support data_is_ptr we do not waste space for the count field either
+     */
+    if ( (p->addr > 0xffffful) || p->data_is_ptr || (p->count != 1) )
+        return 0;
+
+    spin_lock(&v->domain->arch.hvm_domain.ioreq_server_lock);
+    if ( p->type == IOREQ_TYPE_TIMEOFFSET )
+    {
+        /* Send TIME OFFSET to all servers */
+        for ( s = v->domain->arch.hvm_domain.ioreq_server_list; s; s = s->next )
+            rc = hvm_buffered_io_send_to_server(p, s) && rc;
+    }
+    else
+    {
+        for ( s = v->domain->arch.hvm_domain.ioreq_server_list; s; s = s->next )
+        {
+            struct hvm_io_range *x = (p->type == IOREQ_TYPE_COPY)
+                ? s->mmio_range_list : s->portio_range_list;
+            for ( ; x; x = x->next )
+            {
+                if ( (p->addr >= x->s) && (p->addr <= x->e) )
+                {
+                    rc = hvm_buffered_io_send_to_server(p, s);
+                    spin_unlock(&v->domain->arch.hvm_domain.ioreq_server_lock);
+
+                    return rc;
+                }
+            }
+        }
+        rc = 0;
+    }
+
+    spin_unlock(&v->domain->arch.hvm_domain.ioreq_server_lock);
+
+    return rc;
+}
+
 void send_timeoffset_req(unsigned long timeoff)
 {
     ioreq_t p[1];
-- 
Julien Grall


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 18:55:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 18:55:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4G5F-0006ig-U7; Wed, 22 Aug 2012 18:55:17 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@citrix.com>) id 1T4G5E-0006dZ-9T
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 18:55:16 +0000
X-Env-Sender: julien.grall@citrix.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1345661706!10440772!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjU4OTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10282 invoked from network); 22 Aug 2012 18:55:09 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 18:55:09 -0000
X-IronPort-AV: E=Sophos;i="4.80,809,1344225600"; d="scan'208";a="35484781"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 14:55:08 -0400
Received: from meteora.cam.xci-test.com (10.80.248.22) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 22 Aug 2012 14:55:07 -0400
From: Julien Grall <julien.grall@citrix.com>
To: qemu-devel@nongnu.org
Date: Wed, 22 Aug 2012 13:31:48 +0100
Message-ID: <dd493399678c6eee3b5b0a08eb790b73fba1a678.1345552068.git.julien.grall@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <cover.1345552068.git.julien.grall@citrix.com>
References: <cover.1345552068.git.julien.grall@citrix.com>
MIME-Version: 1.0
Cc: Julien Grall <julien.grall@citrix.com>, christian.limpach@gmail.com,
	Stefano.Stabellini@eu.citrix.com, xen-devel@lists.xen.org
Subject: [Xen-devel] [XEN][RFC PATCH V2 02/17] hvm: Add functions to handle
	ioreq servers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch adds functions to:
  - create/destroy an ioreq server
  - map/unmap an IO range to/from a server

Signed-off-by: Julien Grall <julien.grall@citrix.com>
---
 xen/arch/x86/hvm/hvm.c |  356 ++++++++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 356 insertions(+), 0 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 7f8a025c..687e480 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -354,6 +354,37 @@ void hvm_do_resume(struct vcpu *v)
     }
 }
 
+static void hvm_init_ioreq_servers(struct domain *d)
+{
+    spin_lock_init(&d->arch.hvm_domain.ioreq_server_lock);
+    d->arch.hvm_domain.nr_ioreq_server = 0;
+}
+
+static int hvm_ioreq_servers_new_vcpu(struct vcpu *v)
+{
+    struct hvm_ioreq_server *s;
+    struct domain *d = v->domain;
+    shared_iopage_t *p;
+    int rc = 0;
+
+    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
+
+    for ( s = d->arch.hvm_domain.ioreq_server_list; s != NULL; s = s->next )
+    {
+        p = s->ioreq.va;
+        ASSERT(p != NULL);
+
+        rc = alloc_unbound_xen_event_channel(v, s->domid, NULL);
+        if ( rc < 0 )
+            break;
+        p->vcpu_ioreq[v->vcpu_id].vp_eport = rc;
+    }
+
+    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
+
+    return (rc < 0) ? rc : 0;
+}
+
 static void hvm_init_ioreq_page(
     struct domain *d, struct hvm_ioreq_page *iorp)
 {
@@ -559,6 +590,59 @@ int hvm_domain_initialise(struct domain *d)
     return rc;
 }
 
+static void hvm_destroy_ioreq_server(struct domain *d,
+                                     struct hvm_ioreq_server *s)
+{
+    struct hvm_io_range *x;
+    shared_iopage_t *p;
+    int i;
+
+    while ( (x = s->mmio_range_list) != NULL )
+    {
+        s->mmio_range_list = x->next;
+        xfree(x);
+    }
+    while ( (x = s->portio_range_list) != NULL )
+    {
+        s->portio_range_list = x->next;
+        xfree(x);
+    }
+
+    p = s->ioreq.va;
+
+    for ( i = 0; i < MAX_HVM_VCPUS; i++ )
+    {
+        if ( p->vcpu_ioreq[i].vp_eport )
+        {
+            free_xen_event_channel(d->vcpu[i], p->vcpu_ioreq[i].vp_eport);
+        }
+    }
+
+    free_xen_event_channel(d->vcpu[0], s->buf_ioreq_evtchn);
+
+    hvm_destroy_ioreq_page(d, &s->ioreq);
+    hvm_destroy_ioreq_page(d, &s->buf_ioreq);
+
+    xfree(s);
+}
+
+static void hvm_destroy_ioreq_servers(struct domain *d)
+{
+    struct hvm_ioreq_server *s;
+
+    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
+
+    ASSERT(d->is_dying);
+
+    while ( (s = d->arch.hvm_domain.ioreq_server_list) != NULL )
+    {
+        d->arch.hvm_domain.ioreq_server_list = s->next;
+        hvm_destroy_ioreq_server(d, s);
+    }
+
+    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
+}
+
 void hvm_domain_relinquish_resources(struct domain *d)
 {
     hvm_destroy_ioreq_page(d, &d->arch.hvm_domain.ioreq);
@@ -3686,6 +3770,278 @@ static int hvm_replace_event_channel(struct vcpu *v, domid_t remote_domid,
     return 0;
 }
 
+static int hvm_alloc_ioreq_server_page(struct domain *d,
+                                       struct hvm_ioreq_server *s,
+                                       struct hvm_ioreq_page *pfn,
+                                       int i)
+{
+    int rc = 0;
+    unsigned long gmfn;
+
+    if ( i < 0 || i > 1 )
+        return -EINVAL;
+
+    hvm_init_ioreq_page(d, pfn);
+
+    gmfn = d->arch.hvm_domain.params[HVM_PARAM_IO_PFN_FIRST]
+        + (s->id - 1) * 2 + i + 1;
+
+    if ( gmfn > d->arch.hvm_domain.params[HVM_PARAM_IO_PFN_LAST] )
+        return -EINVAL;
+
+    rc = hvm_set_ioreq_page(d, pfn, gmfn);
+
+    if ( !rc && pfn->va == NULL )
+        rc = -ENOMEM;
+
+    return rc;
+}
+
+static int hvmop_register_ioreq_server(
+    struct xen_hvm_register_ioreq_server *a)
+{
+    struct hvm_ioreq_server *s, **pp;
+    struct domain *d;
+    shared_iopage_t *p;
+    struct vcpu *v;
+    int i;
+    int rc = 0;
+
+    if ( current->domain->domain_id == a->domid )
+        return -EINVAL;
+
+    rc = rcu_lock_target_domain_by_id(a->domid, &d);
+    if ( rc != 0 )
+        return rc;
+
+    if ( !is_hvm_domain(d) )
+    {
+        rcu_unlock_domain(d);
+        return -EINVAL;
+    }
+
+    s = xmalloc(struct hvm_ioreq_server);
+    if ( s == NULL )
+    {
+        rcu_unlock_domain(d);
+        return -ENOMEM;
+    }
+    memset(s, 0, sizeof(*s));
+
+    if ( d->is_dying )
+    {
+        rc = -EINVAL;
+        goto register_died;
+    }
+
+    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
+
+    s->id = d->arch.hvm_domain.nr_ioreq_server + 1;
+    s->domid = current->domain->domain_id;
+
+    /* Initialize shared pages */
+    if ( (rc = hvm_alloc_ioreq_server_page(d, s, &s->ioreq, 0)) )
+        goto register_ioreq;
+    if ( (rc = hvm_alloc_ioreq_server_page(d, s, &s->buf_ioreq, 1)) )
+        goto register_buf_ioreq;
+
+    p = s->ioreq.va;
+
+    for_each_vcpu ( d, v )
+    {
+        rc = alloc_unbound_xen_event_channel(v, s->domid, NULL);
+        if ( rc < 0 )
+            goto register_ports;
+        p->vcpu_ioreq[v->vcpu_id].vp_eport = rc;
+    }
+
+    /* Allocate buffer event channel */
+    rc = alloc_unbound_xen_event_channel(d->vcpu[0], s->domid, NULL);
+
+    if ( rc < 0 )
+        goto register_ports;
+    s->buf_ioreq_evtchn = rc;
+
+    pp = &d->arch.hvm_domain.ioreq_server_list;
+    while ( *pp != NULL )
+        pp = &(*pp)->next;
+    *pp = s;
+
+    d->arch.hvm_domain.nr_ioreq_server += 1;
+    a->id = s->id;
+
+    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
+    rcu_unlock_domain(d);
+
+    return 0;
+
+register_ports:
+    p = s->ioreq.va;
+    for ( i = 0; i < MAX_HVM_VCPUS; i++ )
+    {
+        if ( p->vcpu_ioreq[i].vp_eport )
+            free_xen_event_channel(d->vcpu[i], p->vcpu_ioreq[i].vp_eport);
+    }
+    hvm_destroy_ioreq_page(d, &s->buf_ioreq);
+register_buf_ioreq:
+    hvm_destroy_ioreq_page(d, &s->ioreq);
+register_ioreq:
+    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
+register_died:
+    xfree(s);
+    rcu_unlock_domain(d);
+
+    return rc;
+}
+
+static int hvmop_get_ioreq_server_buf_channel(
+    struct xen_hvm_get_ioreq_server_buf_channel *a)
+{
+    struct domain *d;
+    struct hvm_ioreq_server *s;
+    int rc;
+
+    rc = rcu_lock_target_domain_by_id(a->domid, &d);
+
+    if ( rc != 0 )
+        return rc;
+
+    if ( !is_hvm_domain(d) )
+    {
+        rcu_unlock_domain(d);
+        return -EINVAL;
+    }
+
+    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
+    s = d->arch.hvm_domain.ioreq_server_list;
+
+    while ( (s != NULL) && (s->id != a->id) )
+        s = s->next;
+
+    if ( s == NULL )
+    {
+        spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
+        rcu_unlock_domain(d);
+        return -ENOENT;
+    }
+
+    a->channel = s->buf_ioreq_evtchn;
+
+    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
+    rcu_unlock_domain(d);
+
+    return 0;
+}
+
+static int hvmop_map_io_range_to_ioreq_server(
+    struct xen_hvm_map_io_range_to_ioreq_server *a)
+{
+    struct hvm_ioreq_server *s;
+    struct hvm_io_range *x;
+    struct domain *d;
+    int rc;
+
+    rc = rcu_lock_target_domain_by_id(a->domid, &d);
+    if ( rc != 0 )
+        return rc;
+
+    if ( !is_hvm_domain(d) )
+    {
+        rcu_unlock_domain(d);
+        return -EINVAL;
+    }
+
+    x = xmalloc(struct hvm_io_range);
+
+    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
+    s = d->arch.hvm_domain.ioreq_server_list;
+    while ( (s != NULL) && (s->id != a->id) )
+        s = s->next;
+    if ( (s == NULL) || (x == NULL) )
+    {
+        xfree(x);
+        spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
+        rcu_unlock_domain(d);
+        return x ? -ENOENT : -ENOMEM;
+    }
+
+    x->s = a->s;
+    x->e = a->e;
+    if ( a->is_mmio )
+    {
+        x->next = s->mmio_range_list;
+        s->mmio_range_list = x;
+    }
+    else
+    {
+        x->next = s->portio_range_list;
+        s->portio_range_list = x;
+    }
+
+    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
+    rcu_unlock_domain(d);
+    return 0;
+}
+
+static int hvmop_unmap_io_range_from_ioreq_server(
+    struct xen_hvm_unmap_io_range_from_ioreq_server *a)
+{
+    struct hvm_ioreq_server *s;
+    struct hvm_io_range *x, **xp;
+    struct domain *d;
+    int rc;
+
+    rc = rcu_lock_target_domain_by_id(a->domid, &d);
+    if ( rc != 0 )
+        return rc;
+
+    if ( !is_hvm_domain(d) )
+    {
+        rcu_unlock_domain(d);
+        return -EINVAL;
+    }
+
+    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
+
+    s = d->arch.hvm_domain.ioreq_server_list;
+    while ( (s != NULL) && (s->id != a->id) )
+        s = s->next;
+    if ( s == NULL )
+    {
+        spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
+        rcu_unlock_domain(d);
+        return -ENOENT;
+    }
+
+    if ( a->is_mmio )
+    {
+        x = s->mmio_range_list;
+        xp = &s->mmio_range_list;
+    }
+    else
+    {
+        x = s->portio_range_list;
+        xp = &s->portio_range_list;
+    }
+    while ( (x != NULL) && (a->addr < x->s || a->addr > x->e) )
+    {
+        xp = &x->next;
+        x = x->next;
+    }
+    if ( x != NULL )
+    {
+        *xp = x->next;
+        xfree(x);
+        rc = 0;
+    }
+    else
+        rc = -ENOENT;
+
+    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
+    rcu_unlock_domain(d);
+    return rc;
+}
+
 long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE(void) arg)
 
 {
-- 
Julien Grall


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
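
The unmap path in hvmop_unmap_io_range_from_ioreq_server above unlinks a node with the pointer-to-pointer idiom, which avoids a special case for removing the list head. A self-contained sketch of that idiom (names are illustrative, with free() standing in for xfree()):

```c
#include <stdlib.h>

struct range {
    unsigned long s, e;     /* inclusive [s, e] */
    struct range *next;
};

/* Unlink and free the first node whose range contains addr.
 * Returns 0 on success, -1 if no node matches.  Keeping xp one
 * step behind x means *xp is always the link to rewrite, whether
 * x is the head or an interior node. */
static int range_list_remove(struct range **head, unsigned long addr)
{
    struct range **xp = head;
    struct range *x = *head;

    while ( x != NULL && (addr < x->s || addr > x->e) )
    {
        xp = &x->next;
        x = x->next;
    }

    if ( x == NULL )
        return -1;

    *xp = x->next;
    free(x);
    return 0;
}
```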

From xen-devel-bounces@lists.xen.org Wed Aug 22 18:55:24 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 18:55:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4G5I-0006mV-VU; Wed, 22 Aug 2012 18:55:20 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@citrix.com>) id 1T4G5H-0006fw-Jd
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 18:55:19 +0000
X-Env-Sender: julien.grall@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1345661712!9781420!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjU4OTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7308 invoked from network); 22 Aug 2012 18:55:13 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 18:55:13 -0000
X-IronPort-AV: E=Sophos;i="4.80,809,1344225600"; d="scan'208";a="35484787"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 14:55:10 -0400
Received: from meteora.cam.xci-test.com (10.80.248.22) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 22 Aug 2012 14:55:10 -0400
From: Julien Grall <julien.grall@citrix.com>
To: qemu-devel@nongnu.org
Date: Wed, 22 Aug 2012 13:31:50 +0100
Message-ID: <cc44fbd3e6bc6d252367bc7ee77151de5ac8d2d5.1345552068.git.julien.grall@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <cover.1345552068.git.julien.grall@citrix.com>
References: <cover.1345552068.git.julien.grall@citrix.com>
MIME-Version: 1.0
Cc: Julien Grall <julien.grall@citrix.com>, christian.limpach@gmail.com,
	Stefano.Stabellini@eu.citrix.com, xen-devel@lists.xen.org
Subject: [Xen-devel] [XEN][RFC PATCH V2 04/17] hvm: Change
	initialization/destruction of an hvm
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Prepare and release the per-domain structures needed to support multiple ioreq servers.

Signed-off-by: Julien Grall <julien.grall@citrix.com>
---
 xen/arch/x86/hvm/hvm.c |   33 ++++++++++-----------------------
 1 files changed, 10 insertions(+), 23 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 687e480..292d57b 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -567,10 +567,13 @@ int hvm_domain_initialise(struct domain *d)
     rtc_init(d);
 
     hvm_init_ioreq_page(d, &d->arch.hvm_domain.ioreq);
-    hvm_init_ioreq_page(d, &d->arch.hvm_domain.buf_ioreq);
+    hvm_init_ioreq_servers(d);
 
     register_portio_handler(d, 0xe9, 1, hvm_print_line);
 
+    if ( hvm_init_pci_emul(d) )
+        goto fail2;
+
     rc = hvm_funcs.domain_initialise(d);
     if ( rc != 0 )
         goto fail2;
@@ -645,8 +648,8 @@ static void hvm_destroy_ioreq_servers(struct domain *d)
 
 void hvm_domain_relinquish_resources(struct domain *d)
 {
-    hvm_destroy_ioreq_page(d, &d->arch.hvm_domain.ioreq);
-    hvm_destroy_ioreq_page(d, &d->arch.hvm_domain.buf_ioreq);
+    hvm_destroy_ioreq_servers(d);
+    hvm_destroy_pci_emul(d);
 
     msixtbl_pt_cleanup(d);
 
@@ -1104,27 +1107,11 @@ int hvm_vcpu_initialise(struct vcpu *v)
          && (rc = nestedhvm_vcpu_initialise(v)) < 0 ) 
         goto fail3;
 
-    /* Create ioreq event channel. */
-    rc = alloc_unbound_xen_event_channel(v, 0, NULL);
-    if ( rc < 0 )
-        goto fail4;
-
-    /* Register ioreq event channel. */
-    v->arch.hvm_vcpu.xen_port = rc;
-
-    if ( v->vcpu_id == 0 )
-    {
-        /* Create bufioreq event channel. */
-        rc = alloc_unbound_xen_event_channel(v, 0, NULL);
-        if ( rc < 0 )
-            goto fail2;
-        v->domain->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_EVTCHN] = rc;
-    }
+    rc = hvm_ioreq_servers_new_vcpu(v);
+    if ( rc != 0 )
+        goto fail3;
 
-    spin_lock(&v->domain->arch.hvm_domain.ioreq.lock);
-    if ( v->domain->arch.hvm_domain.ioreq.va != NULL )
-        get_ioreq(v)->vp_eport = v->arch.hvm_vcpu.xen_port;
-    spin_unlock(&v->domain->arch.hvm_domain.ioreq.lock);
+    v->arch.hvm_vcpu.ioreq = &v->domain->arch.hvm_domain.ioreq;
 
     spin_lock_init(&v->arch.hvm_vcpu.tm_lock);
     INIT_LIST_HEAD(&v->arch.hvm_vcpu.tm_list);
-- 
Julien Grall


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
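
hvmop_register_ioreq_server in patch 02 above unwinds partially-completed setup with a ladder of goto labels in reverse order of construction, the usual kernel-style error-handling pattern. A small self-contained sketch of that pattern, with purely illustrative resource names:

```c
#include <stdlib.h>

struct server {
    void *ioreq_page;
    void *buf_ioreq_page;
};

/* Each step that can fail jumps to a label that undoes only the
 * steps already completed, in reverse order -- mirroring the
 * register_ports/register_buf_ioreq/register_ioreq ladder. */
static int server_setup(struct server *s)
{
    s->ioreq_page = malloc(4096);
    if ( s->ioreq_page == NULL )
        goto fail_ioreq;

    s->buf_ioreq_page = malloc(4096);
    if ( s->buf_ioreq_page == NULL )
        goto fail_buf_ioreq;

    return 0;

fail_buf_ioreq:
    free(s->ioreq_page);
fail_ioreq:
    return -1;
}
```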

From xen-devel-bounces@lists.xen.org Wed Aug 22 18:55:24 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 18:55:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4G5I-0006mV-VU; Wed, 22 Aug 2012 18:55:20 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@citrix.com>) id 1T4G5H-0006fw-Jd
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 18:55:19 +0000
X-Env-Sender: julien.grall@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1345661712!9781420!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjU4OTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7308 invoked from network); 22 Aug 2012 18:55:13 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 18:55:13 -0000
X-IronPort-AV: E=Sophos;i="4.80,809,1344225600"; d="scan'208";a="35484787"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 14:55:10 -0400
Received: from meteora.cam.xci-test.com (10.80.248.22) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 22 Aug 2012 14:55:10 -0400
From: Julien Grall <julien.grall@citrix.com>
To: qemu-devel@nongnu.org
Date: Wed, 22 Aug 2012 13:31:50 +0100
Message-ID: <cc44fbd3e6bc6d252367bc7ee77151de5ac8d2d5.1345552068.git.julien.grall@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <cover.1345552068.git.julien.grall@citrix.com>
References: <cover.1345552068.git.julien.grall@citrix.com>
MIME-Version: 1.0
Cc: Julien Grall <julien.grall@citrix.com>, christian.limpach@gmail.com,
	Stefano.Stabellini@eu.citrix.com, xen-devel@lists.xen.org
Subject: [Xen-devel] [XEN][RFC PATCH V2 04/17] hvm: Change
	initialization/destruction of an hvm
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Prepare and release the structures needed to support multiple ioreq servers.
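
The commit message is terse, so here is a hedged sketch of what "prepare/release" amounts to across the series: the per-domain ioreq servers form a singly linked list that is set up at domain initialisation and torn down on relinquish. hvm_init_ioreq_servers()/hvm_destroy_ioreq_servers() mirror the names in the diff below, but hvm_register_ioreq_server() is a hypothetical helper and all locking is elided.

```c
#include <assert.h>
#include <stdlib.h>

/* Minimal model of the per-domain ioreq-server list this series
 * introduces (names follow the patches; the real code takes
 * ioreq_server_lock around list walks). */
struct hvm_ioreq_server {
    unsigned int id;
    struct hvm_ioreq_server *next;
};

struct hvm_domain {
    struct hvm_ioreq_server *ioreq_server_list;
};

static void hvm_init_ioreq_servers(struct hvm_domain *d)
{
    d->ioreq_server_list = NULL;
}

/* Hypothetical registration helper: push a new server on the list. */
static int hvm_register_ioreq_server(struct hvm_domain *d, unsigned int id)
{
    struct hvm_ioreq_server *s = malloc(sizeof(*s));
    if ( !s )
        return -1;
    s->id = id;
    s->next = d->ioreq_server_list;
    d->ioreq_server_list = s;
    return 0;
}

/* Teardown counterpart; returns the number of servers released. */
static int hvm_destroy_ioreq_servers(struct hvm_domain *d)
{
    int n = 0;
    while ( d->ioreq_server_list )
    {
        struct hvm_ioreq_server *s = d->ioreq_server_list;
        d->ioreq_server_list = s->next;
        free(s);
        n++;
    }
    return n;
}
```

Having the destructor return a count is purely for checking the sketch; the real hvm_destroy_ioreq_servers() returns void.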

Signed-off-by: Julien Grall <julien.grall@citrix.com>
---
 xen/arch/x86/hvm/hvm.c |   33 ++++++++++-----------------------
 1 files changed, 10 insertions(+), 23 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 687e480..292d57b 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -567,10 +567,13 @@ int hvm_domain_initialise(struct domain *d)
     rtc_init(d);
 
     hvm_init_ioreq_page(d, &d->arch.hvm_domain.ioreq);
-    hvm_init_ioreq_page(d, &d->arch.hvm_domain.buf_ioreq);
+    hvm_init_ioreq_servers(d);
 
     register_portio_handler(d, 0xe9, 1, hvm_print_line);
 
+    if ( hvm_init_pci_emul(d) )
+        goto fail2;
+
     rc = hvm_funcs.domain_initialise(d);
     if ( rc != 0 )
         goto fail2;
@@ -645,8 +648,8 @@ static void hvm_destroy_ioreq_servers(struct domain *d)
 
 void hvm_domain_relinquish_resources(struct domain *d)
 {
-    hvm_destroy_ioreq_page(d, &d->arch.hvm_domain.ioreq);
-    hvm_destroy_ioreq_page(d, &d->arch.hvm_domain.buf_ioreq);
+    hvm_destroy_ioreq_servers(d);
+    hvm_destroy_pci_emul(d);
 
     msixtbl_pt_cleanup(d);
 
@@ -1104,27 +1107,11 @@ int hvm_vcpu_initialise(struct vcpu *v)
          && (rc = nestedhvm_vcpu_initialise(v)) < 0 ) 
         goto fail3;
 
-    /* Create ioreq event channel. */
-    rc = alloc_unbound_xen_event_channel(v, 0, NULL);
-    if ( rc < 0 )
-        goto fail4;
-
-    /* Register ioreq event channel. */
-    v->arch.hvm_vcpu.xen_port = rc;
-
-    if ( v->vcpu_id == 0 )
-    {
-        /* Create bufioreq event channel. */
-        rc = alloc_unbound_xen_event_channel(v, 0, NULL);
-        if ( rc < 0 )
-            goto fail2;
-        v->domain->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_EVTCHN] = rc;
-    }
+    rc = hvm_ioreq_servers_new_vcpu(v);
+    if ( rc != 0 )
+        goto fail3;
 
-    spin_lock(&v->domain->arch.hvm_domain.ioreq.lock);
-    if ( v->domain->arch.hvm_domain.ioreq.va != NULL )
-        get_ioreq(v)->vp_eport = v->arch.hvm_vcpu.xen_port;
-    spin_unlock(&v->domain->arch.hvm_domain.ioreq.lock);
+    v->arch.hvm_vcpu.ioreq = &v->domain->arch.hvm_domain.ioreq;
 
     spin_lock_init(&v->arch.hvm_vcpu.tm_lock);
     INIT_LIST_HEAD(&v->arch.hvm_vcpu.tm_list);
-- 
Julien Grall


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 18:55:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 18:55:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4G5L-0006qL-CD; Wed, 22 Aug 2012 18:55:23 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@citrix.com>) id 1T4G5J-0006gz-4w
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 18:55:21 +0000
X-Env-Sender: julien.grall@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1345661712!9781420!3
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjU4OTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7343 invoked from network); 22 Aug 2012 18:55:15 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 18:55:15 -0000
X-IronPort-AV: E=Sophos;i="4.80,809,1344225600"; d="scan'208";a="35484792"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 14:55:14 -0400
Received: from meteora.cam.xci-test.com (10.80.248.22) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 22 Aug 2012 14:55:14 -0400
From: Julien Grall <julien.grall@citrix.com>
To: qemu-devel@nongnu.org
Date: Wed, 22 Aug 2012 13:31:53 +0100
Message-ID: <0616f810ead8fa9d2beb8dc3b7306724a0e39ca1.1345552068.git.julien.grall@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <cover.1345552068.git.julien.grall@citrix.com>
References: <cover.1345552068.git.julien.grall@citrix.com>
MIME-Version: 1.0
Cc: Julien Grall <julien.grall@citrix.com>, christian.limpach@gmail.com,
	Stefano.Stabellini@eu.citrix.com, xen-devel@lists.xen.org
Subject: [Xen-devel] [XEN][RFC PATCH V2 07/17] hvm-io: send invalidate map
	cache to each registered servers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

When a mapcache invalidation occurs, Xen needs to send an
IOREQ_TYPE_INVALIDATE request to each registered server and wait until all
I/O is completed. Introduce a new function, hvm_wait_on_io, to wait until
an I/O request is completed.
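
The wait loop factored out here can be sketched as a tiny self-contained model. The state names follow Xen's ioreq lifecycle, but emulator_step() is a hypothetical stand-in for the remote ioreq server: in Xen the READY/INPROCESS cases block on the p->vp_eport event channel rather than polling.

```c
#include <assert.h>

/* States mirror Xen's ioreq lifecycle (public/hvm/ioreq.h). */
enum { STATE_IOREQ_NONE, STATE_IOREQ_READY,
       STATE_IOREQ_INPROCESS, STATE_IORESP_READY };

typedef struct { int state; } ioreq_t;

/* Hypothetical stand-in for the emulator making progress on a request. */
static void emulator_step(ioreq_t *p)
{
    switch ( p->state )
    {
    case STATE_IOREQ_READY:     p->state = STATE_IOREQ_INPROCESS; break;
    case STATE_IOREQ_INPROCESS: p->state = STATE_IORESP_READY;    break;
    default: break;
    }
}

/* Sketch of hvm_wait_on_io(): loop until the slot returns to NONE.
 * Returns the number of iterations taken, or -1 on a bad state. */
static int hvm_wait_on_io(ioreq_t *p)
{
    int iterations = 0;
    while ( p->state != STATE_IOREQ_NONE )
    {
        switch ( p->state )
        {
        case STATE_IORESP_READY:          /* IORESP_READY -> NONE */
            p->state = STATE_IOREQ_NONE;  /* hvm_io_assist() in Xen */
            break;
        case STATE_IOREQ_READY:           /* IOREQ_{READY,INPROCESS} */
        case STATE_IOREQ_INPROCESS:
            emulator_step(p);             /* wait_on_xen_event_channel() */
            break;
        default:
            return -1;                    /* bail */
        }
        iterations++;
    }
    return iterations;
}
```

A request starting in STATE_IOREQ_READY drains in three steps (READY -> INPROCESS -> IORESP_READY -> NONE); an idle slot returns immediately, which is the common case the real code optimises for.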

Signed-off-by: Julien Grall <julien.grall@citrix.com>
---
 xen/arch/x86/hvm/hvm.c |   41 ++++++++++++++++++++++++++++++++---------
 xen/arch/x86/hvm/io.c  |   15 +++++++++++++--
 2 files changed, 45 insertions(+), 11 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 33ef0f2..fdb2515 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -316,16 +316,9 @@ void hvm_migrate_pirqs(struct vcpu *v)
     spin_unlock(&d->event_lock);
 }
 
-void hvm_do_resume(struct vcpu *v)
+static void hvm_wait_on_io(struct vcpu *v, ioreq_t *p)
 {
-    ioreq_t *p;
-
-    pt_restore_timer(v);
-
-    check_wakeup_from_wait();
-
     /* NB. Optimised for common case (p->state == STATE_IOREQ_NONE). */
-    p = get_ioreq(v);
     while ( p->state != STATE_IOREQ_NONE )
     {
         switch ( p->state )
@@ -335,7 +328,7 @@ void hvm_do_resume(struct vcpu *v)
             break;
         case STATE_IOREQ_READY:  /* IOREQ_{READY,INPROCESS} -> IORESP_READY */
         case STATE_IOREQ_INPROCESS:
-            wait_on_xen_event_channel(v->arch.hvm_vcpu.xen_port,
+            wait_on_xen_event_channel(p->vp_eport,
                                       (p->state != STATE_IOREQ_READY) &&
                                       (p->state != STATE_IOREQ_INPROCESS));
             break;
@@ -345,6 +338,36 @@ void hvm_do_resume(struct vcpu *v)
             return; /* bail */
         }
     }
+}
+
+void hvm_do_resume(struct vcpu *v)
+{
+    ioreq_t *p;
+    struct hvm_ioreq_server *s;
+    shared_iopage_t *page;
+
+    pt_restore_timer(v);
+
+    check_wakeup_from_wait();
+
+    p = get_ioreq(v);
+
+    if ( p->type == IOREQ_TYPE_INVALIDATE )
+    {
+        spin_lock(&v->domain->arch.hvm_domain.ioreq_server_lock);
+        /* Wait for all servers */
+        for ( s = v->domain->arch.hvm_domain.ioreq_server_list; s; s = s->next )
+        {
+            page = s->ioreq.va;
+            ASSERT((v == current) || spin_is_locked(&s->ioreq.lock));
+            ASSERT(s->ioreq.va != NULL);
+            v->arch.hvm_vcpu.ioreq = &s->ioreq;
+            hvm_wait_on_io(v, &page->vcpu_ioreq[v->vcpu_id]);
+        }
+        spin_unlock(&v->domain->arch.hvm_domain.ioreq_server_lock);
+    }
+    else
+        hvm_wait_on_io(v, p);
 
     /* Inject pending hw/sw trap */
     if ( v->arch.hvm_vcpu.inject_trap.vector != -1 ) 
diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c
index c20f4e8..b73a462 100644
--- a/xen/arch/x86/hvm/io.c
+++ b/xen/arch/x86/hvm/io.c
@@ -150,7 +150,8 @@ void send_timeoffset_req(unsigned long timeoff)
 void send_invalidate_req(void)
 {
     struct vcpu *v = current;
-    ioreq_t *p = get_ioreq(v);
+    ioreq_t p[1];
+    struct hvm_ioreq_server *s;
 
     if ( p->state != STATE_IOREQ_NONE )
     {
@@ -164,8 +165,18 @@ void send_invalidate_req(void)
     p->size = 4;
     p->dir = IOREQ_WRITE;
     p->data = ~0UL; /* flush all */
+    p->count = 0;
+    p->addr = 0;
+
+    spin_lock(&v->domain->arch.hvm_domain.ioreq_server_lock);
+    for ( s = v->domain->arch.hvm_domain.ioreq_server_list; s; s = s->next )
+    {
+        set_ioreq(v, &s->ioreq, p);
+        (void)hvm_send_assist_req(v);
+    }
+    spin_unlock(&v->domain->arch.hvm_domain.ioreq_server_lock);
 
-    (void)hvm_send_assist_req(v);
+    set_ioreq(v, &v->domain->arch.hvm_domain.ioreq, p);
 }
 
 int handle_mmio(void)
-- 
Julien Grall


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 18:55:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 18:55:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4G5L-0006r9-TT; Wed, 22 Aug 2012 18:55:23 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@citrix.com>) id 1T4G5J-0006iH-FR
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 18:55:21 +0000
Received: from [85.158.138.51:39910] by server-9.bemta-3.messagelabs.com id
	BC/24-23952-91B25305; Wed, 22 Aug 2012 18:55:21 +0000
X-Env-Sender: julien.grall@citrix.com
X-Msg-Ref: server-5.tower-174.messagelabs.com!1345661707!27566656!5
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzMzNjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10195 invoked from network); 22 Aug 2012 18:55:19 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 18:55:19 -0000
X-IronPort-AV: E=Sophos;i="4.80,809,1344225600"; d="scan'208";a="205943126"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 14:55:19 -0400
Received: from meteora.cam.xci-test.com (10.80.248.22) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 22 Aug 2012 14:55:19 -0400
From: Julien Grall <julien.grall@citrix.com>
To: qemu-devel@nongnu.org
Date: Wed, 22 Aug 2012 13:31:57 +0100
Message-ID: <fcf046ea782dda6cacb3bf11813bf1d16e531e6b.1345552068.git.julien.grall@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <cover.1345552068.git.julien.grall@citrix.com>
References: <cover.1345552068.git.julien.grall@citrix.com>
MIME-Version: 1.0
Cc: Julien Grall <julien.grall@citrix.com>, christian.limpach@gmail.com,
	Stefano.Stabellini@eu.citrix.com, xen-devel@lists.xen.org
Subject: [Xen-devel] [XEN][RFC PATCH V2 11/17] xc: modify save/restore to
	support multiple device models
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

    - add save/restore of the new special pages and remove the unused ones
    - modify the save file structure to allow multiple QEMU states
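
The new tail layout that this patch's buffer_device_models() consumes can be sketched as a standalone parser over an in-memory image. The two 21-byte signatures and the field order follow the patch; count_dm_records() itself and reading the host-endian u32 fields with memcpy() are illustrative assumptions standing in for the RDEXACT() reads.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define QEMUSIG_SIZE 21   /* matches the patch */

/* Parse the multi-device-model tail introduced by this patch:
 *   "DeviceModelRecords001" | u32 num_dms |
 *   num_dms x ( "DeviceModelRecord0002" | u32 len | len bytes )
 * Returns the number of device-model records, or -1 on a malformed
 * or truncated image. */
static int count_dm_records(const uint8_t *img, size_t size)
{
    size_t off;
    uint32_t num_dms, len, i;

    if ( size < QEMUSIG_SIZE + 4 ||
         memcmp(img, "DeviceModelRecords001", QEMUSIG_SIZE) )
        return -1;
    off = QEMUSIG_SIZE;
    memcpy(&num_dms, img + off, 4);
    off += 4;

    for ( i = 0; i < num_dms; i++ )
    {
        if ( size - off < QEMUSIG_SIZE + 4 ||
             memcmp(img + off, "DeviceModelRecord0002", QEMUSIG_SIZE) )
            return -1;
        off += QEMUSIG_SIZE;
        memcpy(&len, img + off, 4);
        off += 4;
        if ( size - off < len )
            return -1;
        off += len;       /* skip this device model's state blob */
    }
    return (int)num_dms;
}
```

A well-formed image with two records parses to 2, and truncating the final payload by one byte makes the parse fail, mirroring the per-record error paths in buffer_device_models()/buffer_qemu().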

Signed-off-by: Julien Grall <julien.grall@citrix.com>
---
 tools/libxc/xc_domain_restore.c |  150 +++++++++++++++++++++++++++++----------
 tools/libxc/xc_domain_save.c    |    6 +-
 2 files changed, 116 insertions(+), 40 deletions(-)

diff --git a/tools/libxc/xc_domain_restore.c b/tools/libxc/xc_domain_restore.c
index 3fe2b12..9a49ee2 100644
--- a/tools/libxc/xc_domain_restore.c
+++ b/tools/libxc/xc_domain_restore.c
@@ -103,6 +103,9 @@ static ssize_t rdexact(xc_interface *xch, struct restore_ctx *ctx,
 #else
 #define RDEXACT read_exact
 #endif
+
+#define QEMUSIG_SIZE 21
+
 /*
 ** In the state file (or during transfer), all page-table pages are
 ** converted into a 'canonical' form where references to actual mfns
@@ -342,8 +345,11 @@ typedef struct {
                 uint32_t version;
                 uint64_t len;
             } qemuhdr;
-            uint32_t qemubufsize;
-            uint8_t* qemubuf;
+            uint32_t num_dms;
+            struct devmodel_buffer {
+                uint32_t size;
+                uint8_t* buf;
+            } *dmsbuf;
         } hvm;
     } u;
 } tailbuf_t;
@@ -392,63 +398,112 @@ static int compat_buffer_qemu(xc_interface *xch, struct restore_ctx *ctx,
         return -1;
     }
 
-    buf->qemubuf = qbuf;
-    buf->qemubufsize = dlen;
+    if ( !(buf->dmsbuf = calloc(1, sizeof(*buf->dmsbuf))) ) {
+        ERROR("Error allocating Device Model buffer");
+        free(qbuf);
+        return -1;
+    }
+
+    buf->dmsbuf[0].buf = qbuf;
+    buf->dmsbuf[0].size = dlen;
+    buf->num_dms = 1;
 
     return 0;
 }
 
 static int buffer_qemu(xc_interface *xch, struct restore_ctx *ctx,
-                       int fd, struct tailbuf_hvm *buf)
+                       uint32_t dmid, int fd, struct tailbuf_hvm *buf)
 {
     uint32_t qlen;
     uint8_t *tmp;
+    struct devmodel_buffer *dmb = &buf->dmsbuf[dmid];
 
     if ( RDEXACT(fd, &qlen, sizeof(qlen)) ) {
-        PERROR("Error reading QEMU header length");
+        PERROR("Error reading Device Model %u header length", dmid);
         return -1;
     }
 
-    if ( qlen > buf->qemubufsize ) {
-        if ( buf->qemubuf) {
-            tmp = realloc(buf->qemubuf, qlen);
+    if ( qlen > dmb->size ) {
+        if ( dmb->buf ) {
+            tmp = realloc(dmb->buf, qlen);
             if ( tmp )
-                buf->qemubuf = tmp;
+                dmb->buf = tmp;
             else {
-                ERROR("Error reallocating QEMU state buffer");
+                ERROR("Error reallocating Device Model %u state buffer", dmid);
                 return -1;
             }
         } else {
-            buf->qemubuf = malloc(qlen);
-            if ( !buf->qemubuf ) {
-                ERROR("Error allocating QEMU state buffer");
+            dmb->buf = malloc(qlen);
+            if ( !dmb->buf ) {
+                ERROR("Error allocating Device Model %u state buffer", dmid);
                 return -1;
             }
         }
     }
-    buf->qemubufsize = qlen;
+    dmb->size = qlen;
 
-    if ( RDEXACT(fd, buf->qemubuf, buf->qemubufsize) ) {
-        PERROR("Error reading QEMU state");
+    if ( RDEXACT(fd, dmb->buf, dmb->size) ) {
+        PERROR("Error reading Device Model %u state", dmid);
         return -1;
     }
 
     return 0;
 }
 
-static int dump_qemu(xc_interface *xch, uint32_t dom, struct tailbuf_hvm *buf)
+static int buffer_device_models(xc_interface *xch, struct restore_ctx *ctx,
+                                int fd, struct tailbuf_hvm *buf)
+{
+    uint32_t i, num_dms;
+    unsigned char qemusig[QEMUSIG_SIZE + 1];
+    int ret = 0;
+
+    if ( RDEXACT(fd, &num_dms, sizeof(num_dms)) ) {
+        PERROR("Error reading num dms");
+        return -1;
+    }
+
+    if ( !(buf->dmsbuf = calloc(num_dms, sizeof (*buf->dmsbuf))) ) {
+        PERROR("Error allocating Device Model buffers");
+        return -1;
+    }
+
+    buf->num_dms = num_dms;
+
+    for ( i = 0; i < num_dms; i++ ) {
+        if ( RDEXACT(fd, qemusig, QEMUSIG_SIZE) ) {
+            PERROR("Error reading Device Model %u signature", i);
+            return -1;
+        }
+
+        if ( memcmp(qemusig, "DeviceModelRecord0002", QEMUSIG_SIZE) ) {
+            qemusig[QEMUSIG_SIZE] = '\0';
+            ERROR("Invalid Device Model %u signature: %s", i, qemusig);
+            return -1;
+        }
+
+        ret = buffer_qemu(xch, ctx, i, fd, buf);
+        if ( ret )
+            return ret;
+    }
+
+    return 0;
+}
+
+static int dump_qemu(xc_interface *xch, uint32_t dom,
+                     uint32_t dmid, struct tailbuf_hvm *buf)
 {
     int saved_errno;
     char path[256];
     FILE *fp;
+    struct devmodel_buffer *dmb = &buf->dmsbuf[dmid];
 
-    sprintf(path, XC_DEVICE_MODEL_RESTORE_FILE".%u", dom);
+    sprintf(path, XC_DEVICE_MODEL_RESTORE_FILE".%u.%u", dom, dmid);
     fp = fopen(path, "wb");
     if ( !fp )
         return -1;
 
-    DPRINTF("Writing %d bytes of QEMU data\n", buf->qemubufsize);
-    if ( fwrite(buf->qemubuf, 1, buf->qemubufsize, fp) != buf->qemubufsize) {
+    DPRINTF("Writing %d bytes of Device Model %u data\n", dmb->size, dmid);
+    if ( fwrite(dmb->buf, 1, dmb->size, fp) != dmb->size ) {
         saved_errno = errno;
         fclose(fp);
         errno = saved_errno;
@@ -467,7 +522,7 @@ static int buffer_tail_hvm(xc_interface *xch, struct restore_ctx *ctx,
                            int vcpuextstate, uint32_t vcpuextstate_size)
 {
     uint8_t *tmp;
-    unsigned char qemusig[21];
+    unsigned char qemusig[QEMUSIG_SIZE + 1];
 
     if ( RDEXACT(fd, buf->magicpfns, sizeof(buf->magicpfns)) ) {
         PERROR("Error reading magic PFNs");
@@ -504,7 +559,7 @@ static int buffer_tail_hvm(xc_interface *xch, struct restore_ctx *ctx,
         return -1;
     }
 
-    if ( RDEXACT(fd, qemusig, sizeof(qemusig)) ) {
+    if ( RDEXACT(fd, qemusig, QEMUSIG_SIZE) ) {
         PERROR("Error reading QEMU signature");
         return -1;
     }
@@ -517,13 +572,22 @@ static int buffer_tail_hvm(xc_interface *xch, struct restore_ctx *ctx,
      * live-migration QEMU record and Remus which includes a length
      * prefix
      */
-    if ( !memcmp(qemusig, "QemuDeviceModelRecord", sizeof(qemusig)) )
+    if ( !memcmp(qemusig, "QemuDeviceModelRecord", QEMUSIG_SIZE) )
         return compat_buffer_qemu(xch, ctx, fd, buf);
-    else if ( !memcmp(qemusig, "DeviceModelRecord0002", sizeof(qemusig)) ||
-              !memcmp(qemusig, "RemusDeviceModelState", sizeof(qemusig)) )
-        return buffer_qemu(xch, ctx, fd, buf);
+    else if ( !memcmp(qemusig, "DeviceModelRecord0002", QEMUSIG_SIZE) ||
+              !memcmp(qemusig, "RemusDeviceModelState", QEMUSIG_SIZE) )
+    {
+        if ( !(buf->dmsbuf = calloc(1, sizeof (*buf->dmsbuf))) ) {
+            PERROR("Error allocating Device Model buffer");
+            return -1;
+        }
+        return buffer_qemu(xch, ctx, 0, fd, buf);
+    }
+    else if ( !memcmp(qemusig, "DeviceModelRecords001", QEMUSIG_SIZE) ) {
+        return buffer_device_models(xch, ctx, fd, buf);
+    }
 
-    qemusig[20] = '\0';
+    qemusig[QEMUSIG_SIZE] = '\0';
     ERROR("Invalid QEMU signature: %s", qemusig);
     return -1;
 }
@@ -629,13 +693,18 @@ static int buffer_tail(xc_interface *xch, struct restore_ctx *ctx,
 
 static void tailbuf_free_hvm(struct tailbuf_hvm *buf)
 {
+    uint32_t i;
+
     if ( buf->hvmbuf ) {
         free(buf->hvmbuf);
         buf->hvmbuf = NULL;
     }
-    if ( buf->qemubuf ) {
-        free(buf->qemubuf);
-        buf->qemubuf = NULL;
+
+    for (i = 0; i < buf->num_dms; i++)
+    {
+        if (buf->dmsbuf[i].buf)
+            free(buf->dmsbuf[i].buf);
+        buf->dmsbuf[i].buf = NULL;
     }
 }
 
@@ -2137,10 +2206,17 @@ int xc_domain_restore(xc_interface *xch, int io_fd, uint32_t dom,
         }
     }
 
-    /* Dump the QEMU state to a state file for QEMU to load */
-    if ( dump_qemu(xch, dom, &tailbuf.u.hvm) ) {
-        PERROR("Error dumping QEMU state to file");
-        goto out;
+    /**
+     * Dump each Device Model state to a state file for the Device
+     * Model to load.
+     */
+    for ( i = 0; i < tailbuf.u.hvm.num_dms; i++)
+    {
+        if ( dump_qemu(xch, dom, i, &tailbuf.u.hvm) )
+        {
+            PERROR("Error dumping Device Model %u state to file", i);
+            goto out;
+        }
     }
 
     /* These comms pages need to be zeroed at the start of day */
@@ -2153,9 +2229,9 @@ int xc_domain_restore(xc_interface *xch, int io_fd, uint32_t dom,
     }
 
     if ( (frc = xc_set_hvm_param(xch, dom,
-                                 HVM_PARAM_IOREQ_PFN, tailbuf.u.hvm.magicpfns[0]))
+                                 HVM_PARAM_IO_PFN_FIRST, tailbuf.u.hvm.magicpfns[0]))
          || (frc = xc_set_hvm_param(xch, dom,
-                                    HVM_PARAM_BUFIOREQ_PFN, tailbuf.u.hvm.magicpfns[1]))
+                                    HVM_PARAM_IO_PFN_LAST, tailbuf.u.hvm.magicpfns[1]))
          || (frc = xc_set_hvm_param(xch, dom,
                                     HVM_PARAM_STORE_PFN, tailbuf.u.hvm.magicpfns[2]))
          || (frc = xc_set_hvm_param(xch, dom,
diff --git a/tools/libxc/xc_domain_save.c b/tools/libxc/xc_domain_save.c
index c359649..2aa7a28 100644
--- a/tools/libxc/xc_domain_save.c
+++ b/tools/libxc/xc_domain_save.c
@@ -862,7 +862,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
     uint8_t *hvm_buf = NULL;
 
     /* HVM: magic frames for ioreqs and xenstore comms. */
-    uint64_t magic_pfns[3]; /* ioreq_pfn, bufioreq_pfn, store_pfn */
+    uint64_t magic_pfns[3]; /* io_pfn_first, io_pfn_last, store_pfn */
 
     unsigned long mfn;
 
@@ -1787,9 +1787,9 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
 
         /* Save magic-page locations. */
         memset(magic_pfns, 0, sizeof(magic_pfns));
-        xc_get_hvm_param(xch, dom, HVM_PARAM_IOREQ_PFN,
+        xc_get_hvm_param(xch, dom, HVM_PARAM_IO_PFN_FIRST,
                          (unsigned long *)&magic_pfns[0]);
-        xc_get_hvm_param(xch, dom, HVM_PARAM_BUFIOREQ_PFN,
+        xc_get_hvm_param(xch, dom, HVM_PARAM_IO_PFN_LAST,
                          (unsigned long *)&magic_pfns[1]);
         xc_get_hvm_param(xch, dom, HVM_PARAM_STORE_PFN,
                          (unsigned long *)&magic_pfns[2]);
-- 
Julien Grall


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 18:55:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 18:55:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4G5L-0006r9-TT; Wed, 22 Aug 2012 18:55:23 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@citrix.com>) id 1T4G5J-0006iH-FR
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 18:55:21 +0000
Received: from [85.158.138.51:39910] by server-9.bemta-3.messagelabs.com id
	BC/24-23952-91B25305; Wed, 22 Aug 2012 18:55:21 +0000
X-Env-Sender: julien.grall@citrix.com
X-Msg-Ref: server-5.tower-174.messagelabs.com!1345661707!27566656!5
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzMzNjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10195 invoked from network); 22 Aug 2012 18:55:19 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 18:55:19 -0000
X-IronPort-AV: E=Sophos;i="4.80,809,1344225600"; d="scan'208";a="205943126"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 14:55:19 -0400
Received: from meteora.cam.xci-test.com (10.80.248.22) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 22 Aug 2012 14:55:19 -0400
From: Julien Grall <julien.grall@citrix.com>
To: qemu-devel@nongnu.org
Date: Wed, 22 Aug 2012 13:31:57 +0100
Message-ID: <fcf046ea782dda6cacb3bf11813bf1d16e531e6b.1345552068.git.julien.grall@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <cover.1345552068.git.julien.grall@citrix.com>
References: <cover.1345552068.git.julien.grall@citrix.com>
MIME-Version: 1.0
Cc: Julien Grall <julien.grall@citrix.com>, christian.limpach@gmail.com,
	Stefano.Stabellini@eu.citrix.com, xen-devel@lists.xen.org
Subject: [Xen-devel] [XEN][RFC PATCH V2 11/17] xc: modify save/restore to
	support multiple device models
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

    - add save/restore support for the new special pages and remove the unused ones
    - modify the save file structure to allow multiple QEMU states

Signed-off-by: Julien Grall <julien.grall@citrix.com>
---
 tools/libxc/xc_domain_restore.c |  150 +++++++++++++++++++++++++++++----------
 tools/libxc/xc_domain_save.c    |    6 +-
 2 files changed, 116 insertions(+), 40 deletions(-)

diff --git a/tools/libxc/xc_domain_restore.c b/tools/libxc/xc_domain_restore.c
index 3fe2b12..9a49ee2 100644
--- a/tools/libxc/xc_domain_restore.c
+++ b/tools/libxc/xc_domain_restore.c
@@ -103,6 +103,9 @@ static ssize_t rdexact(xc_interface *xch, struct restore_ctx *ctx,
 #else
 #define RDEXACT read_exact
 #endif
+
+#define QEMUSIG_SIZE 21
+
 /*
 ** In the state file (or during transfer), all page-table pages are
 ** converted into a 'canonical' form where references to actual mfns
@@ -342,8 +345,11 @@ typedef struct {
                 uint32_t version;
                 uint64_t len;
             } qemuhdr;
-            uint32_t qemubufsize;
-            uint8_t* qemubuf;
+            uint32_t num_dms;
+            struct devmodel_buffer {
+                uint32_t size;
+                uint8_t* buf;
+            } *dmsbuf;
         } hvm;
     } u;
 } tailbuf_t;
@@ -392,63 +398,112 @@ static int compat_buffer_qemu(xc_interface *xch, struct restore_ctx *ctx,
         return -1;
     }
 
-    buf->qemubuf = qbuf;
-    buf->qemubufsize = dlen;
+    if ( !(buf->dmsbuf = calloc(1, sizeof(*buf->dmsbuf))) ) {
+        ERROR("Error allocating Device Model buffer");
+        free(qbuf);
+        return -1;
+    }
+
+    buf->dmsbuf[0].buf = qbuf;
+    buf->dmsbuf[0].size = dlen;
+    buf->num_dms = 1;
 
     return 0;
 }
 
 static int buffer_qemu(xc_interface *xch, struct restore_ctx *ctx,
-                       int fd, struct tailbuf_hvm *buf)
+                       uint32_t dmid, int fd, struct tailbuf_hvm *buf)
 {
     uint32_t qlen;
     uint8_t *tmp;
+    struct devmodel_buffer *dmb = &buf->dmsbuf[dmid];
 
     if ( RDEXACT(fd, &qlen, sizeof(qlen)) ) {
-        PERROR("Error reading QEMU header length");
+        PERROR("Error reading Device Model %u header length", dmid);
         return -1;
     }
 
-    if ( qlen > buf->qemubufsize ) {
-        if ( buf->qemubuf) {
-            tmp = realloc(buf->qemubuf, qlen);
+    if ( qlen > dmb->size ) {
+        if ( dmb->buf ) {
+            tmp = realloc(dmb->buf, qlen);
             if ( tmp )
-                buf->qemubuf = tmp;
+                dmb->buf = tmp;
             else {
-                ERROR("Error reallocating QEMU state buffer");
+                ERROR("Error reallocating Device Model %u state buffer", dmid);
                 return -1;
             }
         } else {
-            buf->qemubuf = malloc(qlen);
-            if ( !buf->qemubuf ) {
-                ERROR("Error allocating QEMU state buffer");
+            dmb->buf = malloc(qlen);
+            if ( !dmb->buf ) {
+                ERROR("Error allocating Device Model %u state buffer", dmid);
                 return -1;
             }
         }
     }
-    buf->qemubufsize = qlen;
+    dmb->size = qlen;
 
-    if ( RDEXACT(fd, buf->qemubuf, buf->qemubufsize) ) {
-        PERROR("Error reading QEMU state");
+    if ( RDEXACT(fd, dmb->buf, dmb->size) ) {
+        PERROR("Error reading Device Model %u state", dmid);
         return -1;
     }
 
     return 0;
 }
 
-static int dump_qemu(xc_interface *xch, uint32_t dom, struct tailbuf_hvm *buf)
+static int buffer_device_models(xc_interface *xch, struct restore_ctx *ctx,
+                                int fd, struct tailbuf_hvm *buf)
+{
+    uint32_t i, num_dms;
+    unsigned char qemusig[QEMUSIG_SIZE + 1];
+    int ret = 0;
+
+    if ( RDEXACT(fd, &num_dms, sizeof(num_dms)) ) {
+        PERROR("Error reading num dms");
+        return -1;
+    }
+
+    if ( !(buf->dmsbuf = calloc(num_dms, sizeof (*buf->dmsbuf))) ) {
+        PERROR("Error allocating Device Model buffers");
+        return -1;
+    }
+
+    buf->num_dms = num_dms;
+
+    for ( i = 0; i < num_dms; i++ ) {
+        if ( RDEXACT(fd, qemusig, QEMUSIG_SIZE) ) {
+            PERROR("Error reading Device Model %u signature", i);
+            return -1;
+        }
+
+        if ( memcmp(qemusig, "DeviceModelRecord0002", QEMUSIG_SIZE) ) {
+            qemusig[QEMUSIG_SIZE] = '\0';
+            ERROR("Invalid Device Model %u signature: %s", i, qemusig);
+            return -1;
+        }
+
+        ret = buffer_qemu(xch, ctx, i, fd, buf);
+        if ( ret )
+            return ret;
+    }
+
+    return 0;
+}
+
+static int dump_qemu(xc_interface *xch, uint32_t dom,
+                     uint32_t dmid, struct tailbuf_hvm *buf)
 {
     int saved_errno;
     char path[256];
     FILE *fp;
+    struct devmodel_buffer *dmb = &buf->dmsbuf[dmid];
 
-    sprintf(path, XC_DEVICE_MODEL_RESTORE_FILE".%u", dom);
+    sprintf(path, XC_DEVICE_MODEL_RESTORE_FILE".%u.%u", dom, dmid);
     fp = fopen(path, "wb");
     if ( !fp )
         return -1;
 
-    DPRINTF("Writing %d bytes of QEMU data\n", buf->qemubufsize);
-    if ( fwrite(buf->qemubuf, 1, buf->qemubufsize, fp) != buf->qemubufsize) {
+    DPRINTF("Writing %d bytes of Device Model %u data\n", dmb->size, dmid);
+    if ( fwrite(dmb->buf, 1, dmb->size, fp) != dmb->size ) {
         saved_errno = errno;
         fclose(fp);
         errno = saved_errno;
@@ -467,7 +522,7 @@ static int buffer_tail_hvm(xc_interface *xch, struct restore_ctx *ctx,
                            int vcpuextstate, uint32_t vcpuextstate_size)
 {
     uint8_t *tmp;
-    unsigned char qemusig[21];
+    unsigned char qemusig[QEMUSIG_SIZE + 1];
 
     if ( RDEXACT(fd, buf->magicpfns, sizeof(buf->magicpfns)) ) {
         PERROR("Error reading magic PFNs");
@@ -504,7 +559,7 @@ static int buffer_tail_hvm(xc_interface *xch, struct restore_ctx *ctx,
         return -1;
     }
 
-    if ( RDEXACT(fd, qemusig, sizeof(qemusig)) ) {
+    if ( RDEXACT(fd, qemusig, QEMUSIG_SIZE) ) {
         PERROR("Error reading QEMU signature");
         return -1;
     }
@@ -517,13 +572,22 @@ static int buffer_tail_hvm(xc_interface *xch, struct restore_ctx *ctx,
      * live-migration QEMU record and Remus which includes a length
      * prefix
      */
-    if ( !memcmp(qemusig, "QemuDeviceModelRecord", sizeof(qemusig)) )
+    if ( !memcmp(qemusig, "QemuDeviceModelRecord", QEMUSIG_SIZE) )
         return compat_buffer_qemu(xch, ctx, fd, buf);
-    else if ( !memcmp(qemusig, "DeviceModelRecord0002", sizeof(qemusig)) ||
-              !memcmp(qemusig, "RemusDeviceModelState", sizeof(qemusig)) )
-        return buffer_qemu(xch, ctx, fd, buf);
+    else if ( !memcmp(qemusig, "DeviceModelRecord0002", QEMUSIG_SIZE) ||
+              !memcmp(qemusig, "RemusDeviceModelState", QEMUSIG_SIZE) )
+    {
+        if ( !(buf->dmsbuf = calloc(1, sizeof (*buf->dmsbuf))) ) {
+            PERROR("Error allocating Device Model buffer");
+            return -1;
+        }
+        return buffer_qemu(xch, ctx, 0, fd, buf);
+    }
+    else if ( !memcmp(qemusig, "DeviceModelRecords001", QEMUSIG_SIZE) ) {
+        return buffer_device_models(xch, ctx, fd, buf);
+    }
 
-    qemusig[20] = '\0';
+    qemusig[QEMUSIG_SIZE] = '\0';
     ERROR("Invalid QEMU signature: %s", qemusig);
     return -1;
 }
@@ -629,13 +693,18 @@ static int buffer_tail(xc_interface *xch, struct restore_ctx *ctx,
 
 static void tailbuf_free_hvm(struct tailbuf_hvm *buf)
 {
+    uint32_t i;
+
     if ( buf->hvmbuf ) {
         free(buf->hvmbuf);
         buf->hvmbuf = NULL;
     }
-    if ( buf->qemubuf ) {
-        free(buf->qemubuf);
-        buf->qemubuf = NULL;
+
+    for (i = 0; i < buf->num_dms; i++)
+    {
+        if (buf->dmsbuf[i].buf)
+            free(buf->dmsbuf[i].buf);
+        buf->dmsbuf[i].buf = NULL;
     }
 }
 
@@ -2137,10 +2206,17 @@ int xc_domain_restore(xc_interface *xch, int io_fd, uint32_t dom,
         }
     }
 
-    /* Dump the QEMU state to a state file for QEMU to load */
-    if ( dump_qemu(xch, dom, &tailbuf.u.hvm) ) {
-        PERROR("Error dumping QEMU state to file");
-        goto out;
+    /**
+     * Dump each Device Model state to a state file for the Device
+     * Model to load
+     */
+    for ( i = 0; i < tailbuf.u.hvm.num_dms; i++)
+    {
+        if ( dump_qemu(xch, dom, i, &tailbuf.u.hvm) )
+        {
+            PERROR("Error dumping Device Model %u state to file", i);
+            goto out;
+        }
     }
 
     /* These comms pages need to be zeroed at the start of day */
@@ -2153,9 +2229,9 @@ int xc_domain_restore(xc_interface *xch, int io_fd, uint32_t dom,
     }
 
     if ( (frc = xc_set_hvm_param(xch, dom,
-                                 HVM_PARAM_IOREQ_PFN, tailbuf.u.hvm.magicpfns[0]))
+                                 HVM_PARAM_IO_PFN_FIRST, tailbuf.u.hvm.magicpfns[0]))
          || (frc = xc_set_hvm_param(xch, dom,
-                                    HVM_PARAM_BUFIOREQ_PFN, tailbuf.u.hvm.magicpfns[1]))
+                                    HVM_PARAM_IO_PFN_LAST, tailbuf.u.hvm.magicpfns[1]))
          || (frc = xc_set_hvm_param(xch, dom,
                                     HVM_PARAM_STORE_PFN, tailbuf.u.hvm.magicpfns[2]))
          || (frc = xc_set_hvm_param(xch, dom,
diff --git a/tools/libxc/xc_domain_save.c b/tools/libxc/xc_domain_save.c
index c359649..2aa7a28 100644
--- a/tools/libxc/xc_domain_save.c
+++ b/tools/libxc/xc_domain_save.c
@@ -862,7 +862,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
     uint8_t *hvm_buf = NULL;
 
     /* HVM: magic frames for ioreqs and xenstore comms. */
-    uint64_t magic_pfns[3]; /* ioreq_pfn, bufioreq_pfn, store_pfn */
+    uint64_t magic_pfns[3]; /* io_pfn_first, io_pfn_last, store_pfn */
 
     unsigned long mfn;
 
@@ -1787,9 +1787,9 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
 
         /* Save magic-page locations. */
         memset(magic_pfns, 0, sizeof(magic_pfns));
-        xc_get_hvm_param(xch, dom, HVM_PARAM_IOREQ_PFN,
+        xc_get_hvm_param(xch, dom, HVM_PARAM_IO_PFN_FIRST,
                          (unsigned long *)&magic_pfns[0]);
-        xc_get_hvm_param(xch, dom, HVM_PARAM_BUFIOREQ_PFN,
+        xc_get_hvm_param(xch, dom, HVM_PARAM_IO_PFN_LAST,
                          (unsigned long *)&magic_pfns[1]);
         xc_get_hvm_param(xch, dom, HVM_PARAM_STORE_PFN,
                          (unsigned long *)&magic_pfns[2]);
-- 
Julien Grall


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 18:55:26 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 18:55:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4G5M-0006rq-CQ; Wed, 22 Aug 2012 18:55:24 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@citrix.com>) id 1T4G5K-0006hc-3u
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 18:55:22 +0000
X-Env-Sender: julien.grall@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1345661712!9781420!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjU4OTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7320 invoked from network); 22 Aug 2012 18:55:14 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 18:55:14 -0000
X-IronPort-AV: E=Sophos;i="4.80,809,1344225600"; d="scan'208";a="35484789"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 14:55:13 -0400
Received: from meteora.cam.xci-test.com (10.80.248.22) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 22 Aug 2012 14:55:12 -0400
From: Julien Grall <julien.grall@citrix.com>
To: qemu-devel@nongnu.org
Date: Wed, 22 Aug 2012 13:31:52 +0100
Message-ID: <89b7274240cda4511d4e7e11fc9c21a53ba23887.1345552068.git.julien.grall@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <cover.1345552068.git.julien.grall@citrix.com>
References: <cover.1345552068.git.julien.grall@citrix.com>
MIME-Version: 1.0
Cc: Julien Grall <julien.grall@citrix.com>, christian.limpach@gmail.com,
	Stefano.Stabellini@eu.citrix.com, xen-devel@lists.xen.org
Subject: [Xen-devel] [XEN][RFC PATCH V2 06/17] hvm-io: IO refactoring with
	ioreq server
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch modifies several parts of the IO handling.
Each VCPU now contains a pointer to the current IO shared page.
Xen creates a default shared page for the IO it handles itself.
Each time Xen receives an ioreq, it uses the default shared page
and switches to the right shared page once it can identify the
server.

Moreover, any IO that can be handled neither by Xen nor by a
server is discarded directly inside Xen.

Signed-off-by: Julien Grall <julien.grall@citrix.com>
---
 xen/arch/x86/hvm/emulate.c        |   56 +++++++++++++++++++++++++++++++++++++
 xen/arch/x86/hvm/hvm.c            |    5 ++-
 xen/include/asm-x86/hvm/support.h |   26 ++++++++++++++--
 3 files changed, 81 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index 9bfba48..9e636b6 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -49,6 +49,55 @@ static void hvmtrace_io_assist(int is_mmio, ioreq_t *p)
     trace_var(event, 0/*!cycles*/, size, buffer);
 }
 
+static int hvmemul_prepare_assist(ioreq_t *p)
+{
+    struct vcpu *v = current;
+    struct hvm_ioreq_server *s;
+    int i;
+    int sign;
+    uint32_t data = ~0;
+
+    if ( p->type == IOREQ_TYPE_PCI_CONFIG )
+        return X86EMUL_UNHANDLEABLE;
+
+    spin_lock(&v->domain->arch.hvm_domain.ioreq_server_lock);
+    for ( s = v->domain->arch.hvm_domain.ioreq_server_list; s; s = s->next )
+    {
+        struct hvm_io_range *x = (p->type == IOREQ_TYPE_COPY)
+            ? s->mmio_range_list : s->portio_range_list;
+
+        for ( ; x; x = x->next )
+        {
+            if ( (p->addr >= x->s) && (p->addr <= x->e) )
+                goto done_server_scan;
+        }
+    }
+
+    spin_unlock(&v->domain->arch.hvm_domain.ioreq_server_lock);
+
+    sign = p->df ? -1 : 1;
+
+    if ( p->dir != IOREQ_WRITE )
+    {
+        if ( !p->data_is_ptr )
+            p->data = ~0;
+        else
+        {
+            for ( i = 0; i < p->count; i++ )
+                hvm_copy_to_guest_phys(p->data + sign * i * p->size, &data,
+                                       p->size);
+        }
+    }
+
+    return X86EMUL_OKAY;
+
+  done_server_scan:
+    set_ioreq(v, &s->ioreq, p);
+    spin_unlock(&v->domain->arch.hvm_domain.ioreq_server_lock);
+
+    return X86EMUL_UNHANDLEABLE;
+}
+
 static int hvmemul_do_io(
     int is_mmio, paddr_t addr, unsigned long *reps, int size,
     paddr_t ram_gpa, int dir, int df, void *p_data)
@@ -173,6 +222,10 @@ static int hvmemul_do_io(
         (p_data == NULL) ? HVMIO_dispatched : HVMIO_awaiting_completion;
     vio->io_size = size;
 
+    /* Use the default shared page */
+    current->arch.hvm_vcpu.ioreq = &curr->domain->arch.hvm_domain.ioreq;
+    p = get_ioreq(current);
+
     p->dir = dir;
     p->data_is_ptr = value_is_ptr;
     p->type = is_mmio ? IOREQ_TYPE_COPY : IOREQ_TYPE_PIO;
@@ -196,6 +249,9 @@ static int hvmemul_do_io(
         rc = hvm_portio_intercept(p);
     }
 
+    if ( rc == X86EMUL_UNHANDLEABLE )
+        rc = hvmemul_prepare_assist(p);
+
     switch ( rc )
     {
     case X86EMUL_OKAY:
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index a2cd9b3..33ef0f2 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1223,14 +1223,15 @@ bool_t hvm_send_assist_req(struct vcpu *v)
         return 0;
     }
 
-    prepare_wait_on_xen_event_channel(v->arch.hvm_vcpu.xen_port);
+    prepare_wait_on_xen_event_channel(p->vp_eport);
 
     /*
      * Following happens /after/ blocking and setting up ioreq contents.
      * prepare_wait_on_xen_event_channel() is an implicit barrier.
      */
     p->state = STATE_IOREQ_READY;
-    notify_via_xen_event_channel(v->domain, v->arch.hvm_vcpu.xen_port);
+
+    notify_via_xen_event_channel(v->domain, p->vp_eport);
 
     return 1;
 }
diff --git a/xen/include/asm-x86/hvm/support.h b/xen/include/asm-x86/hvm/support.h
index f9b102f..44acd37 100644
--- a/xen/include/asm-x86/hvm/support.h
+++ b/xen/include/asm-x86/hvm/support.h
@@ -29,13 +29,31 @@
 
 static inline ioreq_t *get_ioreq(struct vcpu *v)
 {
-    struct domain *d = v->domain;
-    shared_iopage_t *p = d->arch.hvm_domain.ioreq.va;
-    ASSERT((v == current) || spin_is_locked(&d->arch.hvm_domain.ioreq.lock));
-    ASSERT(d->arch.hvm_domain.ioreq.va != NULL);
+    shared_iopage_t *p = v->arch.hvm_vcpu.ioreq->va;
+    ASSERT((v == current) || spin_is_locked(&v->arch.hvm_vcpu.ioreq->lock));
+    ASSERT(v->arch.hvm_vcpu.ioreq->va != NULL);
     return &p->vcpu_ioreq[v->vcpu_id];
 }
 
+static inline void set_ioreq(struct vcpu *v, struct hvm_ioreq_page *page,
+			     ioreq_t *p)
+{
+    ioreq_t *np;
+
+    v->arch.hvm_vcpu.ioreq = page;
+    spin_lock(&v->arch.hvm_vcpu.ioreq->lock);
+    np = get_ioreq(v);
+    np->dir = p->dir;
+    np->data_is_ptr = p->data_is_ptr;
+    np->type = p->type;
+    np->size = p->size;
+    np->addr = p->addr;
+    np->count = p->count;
+    np->df = p->df;
+    np->data = p->data;
+    spin_unlock(&v->arch.hvm_vcpu.ioreq->lock);
+}
+
 #define HVM_DELIVER_NO_ERROR_CODE  -1
 
 #ifndef NDEBUG
-- 
Julien Grall


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 18:55:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 18:55:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4G5N-0006uL-KR; Wed, 22 Aug 2012 18:55:25 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@citrix.com>) id 1T4G5L-0006jm-RX
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 18:55:24 +0000
X-Env-Sender: julien.grall@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1345661712!9781420!4
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjU4OTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7403 invoked from network); 22 Aug 2012 18:55:17 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 18:55:17 -0000
X-IronPort-AV: E=Sophos;i="4.80,809,1344225600"; d="scan'208";a="35484799"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 14:55:16 -0400
Received: from meteora.cam.xci-test.com (10.80.248.22) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 22 Aug 2012 14:55:16 -0400
From: Julien Grall <julien.grall@citrix.com>
To: qemu-devel@nongnu.org
Date: Wed, 22 Aug 2012 13:31:55 +0100
Message-ID: <8747cb48d50a10784df56904db29ca8b6e8c5d80.1345552068.git.julien.grall@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <cover.1345552068.git.julien.grall@citrix.com>
References: <cover.1345552068.git.julien.grall@citrix.com>
MIME-Version: 1.0
Cc: Julien Grall <julien.grall@citrix.com>, christian.limpach@gmail.com,
	Stefano.Stabellini@eu.citrix.com, xen-devel@lists.xen.org
Subject: [Xen-devel] [XEN][RFC PATCH V2 09/17] xc: Add the hypercall for
	multiple servers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch adds 5 hypercalls to register servers, IO ranges and PCI devices.

Signed-off-by: Julien Grall <julien.grall@citrix.com>
---
 tools/libxc/xc_domain.c |  155 +++++++++++++++++++++++++++++++++++++++++++++++
 tools/libxc/xenctrl.h   |   21 ++++++
 2 files changed, 176 insertions(+), 0 deletions(-)

diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
index d98e68b..cb186c1 100644
--- a/tools/libxc/xc_domain.c
+++ b/tools/libxc/xc_domain.c
@@ -1514,6 +1514,161 @@ int xc_domain_set_virq_handler(xc_interface *xch, uint32_t domid, int virq)
     return do_domctl(xch, &domctl);
 }
 
+ioservid_or_error_t xc_hvm_register_ioreq_server(xc_interface *xch,
+                                                 domid_t dom)
+{
+    DECLARE_HYPERCALL;
+    DECLARE_HYPERCALL_BUFFER(xen_hvm_register_ioreq_server_t, arg);
+    ioservid_or_error_t rc = -1;
+
+    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof (*arg));
+    if ( !arg )
+    {
+        PERROR("Could not allocate memory for xc_hvm_register_ioreq_server hypercall");
+        goto out;
+    }
+
+    hypercall.op        = __HYPERVISOR_hvm_op;
+    hypercall.arg[0]    = HVMOP_register_ioreq_server;
+    hypercall.arg[1]    = HYPERCALL_BUFFER_AS_ARG(arg);
+
+    arg->domid = dom;
+    rc = do_xen_hypercall(xch, &hypercall);
+    if ( !rc )
+        rc = arg->id;
+
+    xc_hypercall_buffer_free(xch, arg);
+out:
+    return rc;
+}
+
+evtchn_port_or_error_t xc_hvm_get_ioreq_server_buf_channel(xc_interface *xch,
+                                                           domid_t dom,
+                                                           ioservid_t id)
+{
+    DECLARE_HYPERCALL;
+    DECLARE_HYPERCALL_BUFFER(xen_hvm_get_ioreq_server_buf_channel_t, arg);
+    evtchn_port_or_error_t rc = -1;
+
+    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof (*arg));
+    if ( !arg )
+    {
+        PERROR("Could not allocate memory for xc_hvm_get_ioreq_server_buf_channel hypercall");
+        goto out;
+    }
+
+    hypercall.op        = __HYPERVISOR_hvm_op;
+    hypercall.arg[0]    = HVMOP_get_ioreq_server_buf_channel;
+    hypercall.arg[1]    = HYPERCALL_BUFFER_AS_ARG(arg);
+
+    arg->domid = dom;
+    arg->id = id;
+    rc = do_xen_hypercall(xch, &hypercall);
+
+    if ( !rc )
+        rc = arg->channel;
+
+    xc_hypercall_buffer_free(xch, arg);
+
+out:
+    return rc;
+}
+
+int xc_hvm_map_io_range_to_ioreq_server(xc_interface *xch, domid_t dom,
+                                        ioservid_t id, int is_mmio,
+                                        uint64_t start, uint64_t end)
+{
+    DECLARE_HYPERCALL;
+    DECLARE_HYPERCALL_BUFFER(xen_hvm_map_io_range_to_ioreq_server_t, arg);
+    int rc = -1;
+
+    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof (*arg));
+    if ( !arg )
+    {
+        PERROR("Could not allocate memory for xc_hvm_map_io_range_to_ioreq_server hypercall");
+        goto out;
+    }
+
+    hypercall.op        = __HYPERVISOR_hvm_op;
+    hypercall.arg[0]    = HVMOP_map_io_range_to_ioreq_server;
+    hypercall.arg[1]    = HYPERCALL_BUFFER_AS_ARG(arg);
+
+    arg->domid = dom;
+    arg->id = id;
+    arg->is_mmio = is_mmio;
+    arg->s = start;
+    arg->e = end;
+
+    rc = do_xen_hypercall(xch, &hypercall);
+
+    xc_hypercall_buffer_free(xch, arg);
+out:
+    return rc;
+}
+
+int xc_hvm_unmap_io_range_from_ioreq_server(xc_interface *xch, domid_t dom,
+                                            ioservid_t id, int is_mmio,
+                                            uint64_t addr)
+{
+    DECLARE_HYPERCALL;
+    DECLARE_HYPERCALL_BUFFER(xen_hvm_unmap_io_range_from_ioreq_server_t, arg);
+    int rc = -1;
+
+    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof (*arg));
+    if ( !arg )
+    {
+        PERROR("Could not allocate memory for xc_hvm_unmap_io_range_from_ioreq_server hypercall");
+        goto out;
+    }
+
+    hypercall.op        = __HYPERVISOR_hvm_op;
+    hypercall.arg[0]    = HVMOP_unmap_io_range_from_ioreq_server;
+    hypercall.arg[1]    = HYPERCALL_BUFFER_AS_ARG(arg);
+
+    arg->domid = dom;
+    arg->id = id;
+    arg->is_mmio = is_mmio;
+    arg->addr = addr;
+    rc = do_xen_hypercall(xch, &hypercall);
+
+    xc_hypercall_buffer_free(xch, arg);
+out:
+    return rc;
+}
+
+int xc_hvm_register_pcidev(xc_interface *xch, domid_t dom, ioservid_t id,
+                           uint8_t domain, uint8_t bus, uint8_t device,
+                           uint8_t function)
+{
+    DECLARE_HYPERCALL;
+    DECLARE_HYPERCALL_BUFFER(xen_hvm_register_pcidev_t, arg);
+    int rc = -1;
+
+    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof (*arg));
+    if ( !arg )
+    {
+        PERROR("Could not allocate memory for xc_hvm_register_pcidev hypercall");
+        goto out;
+    }
+
+    hypercall.op        = __HYPERVISOR_hvm_op;
+    hypercall.arg[0]    = HVMOP_register_pcidev;
+    hypercall.arg[1]    = HYPERCALL_BUFFER_AS_ARG(arg);
+
+    arg->domid = dom;
+    arg->id = id;
+    arg->domain = domain;
+    arg->bus = bus;
+    arg->device = device;
+    arg->function = function;
+    rc = do_xen_hypercall(xch, &hypercall);
+
+    xc_hypercall_buffer_free(xch, arg);
+out:
+    return rc;
+}
+
+
 /*
  * Local variables:
  * mode: C
diff --git a/tools/libxc/xenctrl.h b/tools/libxc/xenctrl.h
index b7741ca..65a950e 100644
--- a/tools/libxc/xenctrl.h
+++ b/tools/libxc/xenctrl.h
@@ -1659,6 +1659,27 @@ void xc_clear_last_error(xc_interface *xch);
 int xc_set_hvm_param(xc_interface *handle, domid_t dom, int param, unsigned long value);
 int xc_get_hvm_param(xc_interface *handle, domid_t dom, int param, unsigned long *value);
 
+/* An IO server identifier is guaranteed to fit in 31 bits. */
+typedef int ioservid_or_error_t;
+
+ioservid_or_error_t xc_hvm_register_ioreq_server(xc_interface *xch,
+                                                 domid_t dom);
+evtchn_port_or_error_t xc_hvm_get_ioreq_server_buf_channel(xc_interface *xch,
+                                                           domid_t dom,
+                                                           ioservid_t id);
+int xc_hvm_map_io_range_to_ioreq_server(xc_interface *xch, domid_t dom,
+                                        ioservid_t id, int is_mmio,
+                                        uint64_t start, uint64_t end);
+int xc_hvm_unmap_io_range_from_ioreq_server(xc_interface *xch, domid_t dom,
+                                            ioservid_t id, int is_mmio,
+                                            uint64_t addr);
+/*
+ * Register a PCI device
+ */
+int xc_hvm_register_pcidev(xc_interface *xch, domid_t dom, ioservid_t id,
+                           uint8_t domain, uint8_t bus, uint8_t device,
+                           uint8_t function);
+
 /* IA64 specific, nvram save */
 int xc_ia64_save_to_nvram(xc_interface *xch, uint32_t dom);
 
-- 
Julien Grall


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 18:55:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 18:55:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4G5N-0006tE-3z; Wed, 22 Aug 2012 18:55:25 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@citrix.com>) id 1T4G5L-0006iH-Lp
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 18:55:23 +0000
Received: from [85.158.138.51:40019] by server-9.bemta-3.messagelabs.com id
	14/34-23952-B1B25305; Wed, 22 Aug 2012 18:55:23 +0000
X-Env-Sender: julien.grall@citrix.com
X-Msg-Ref: server-5.tower-174.messagelabs.com!1345661707!27566656!6
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzMzNjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10412 invoked from network); 22 Aug 2012 18:55:22 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 18:55:22 -0000
X-IronPort-AV: E=Sophos;i="4.80,809,1344225600"; d="scan'208";a="205943131"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 14:55:21 -0400
Received: from meteora.cam.xci-test.com (10.80.248.22) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 22 Aug 2012 14:55:21 -0400
From: Julien Grall <julien.grall@citrix.com>
To: qemu-devel@nongnu.org
Date: Wed, 22 Aug 2012 13:31:59 +0100
Message-ID: <d481c7657e727bbd67f762244fc3b1383a0ee037.1345552068.git.julien.grall@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <cover.1345552068.git.julien.grall@citrix.com>
References: <cover.1345552068.git.julien.grall@citrix.com>
MIME-Version: 1.0
Cc: Julien Grall <julien.grall@citrix.com>, christian.limpach@gmail.com,
	Stefano.Stabellini@eu.citrix.com, xen-devel@lists.xen.org
Subject: [Xen-devel] [XEN][RFC PATCH V2 13/17] xl: add device model id to
	qmp functions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

With support for multiple device models, the QMP library needs to know
which device model is currently used.

Signed-off-by: Julien Grall <julien.grall@citrix.com>
---
 tools/libxl/libxl_internal.h |   24 ++++++++++++-------
 tools/libxl/libxl_qmp.c      |   49 ++++++++++++++++++++++++------------------
 2 files changed, 43 insertions(+), 30 deletions(-)

diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index 7c3b179..71e4970 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -1384,26 +1384,32 @@ typedef struct libxl__qmp_handler libxl__qmp_handler;
  *   Return an handler or NULL if there is an error
  */
 _hidden libxl__qmp_handler *libxl__qmp_initialize(libxl__gc *gc,
-                                                  uint32_t domid);
+                                                  libxl_domid domid,
+                                                  libxl_dmid dmid);
 /* ask to QEMU the serial port information and store it in xenstore. */
 _hidden int libxl__qmp_query_serial(libxl__qmp_handler *qmp);
-_hidden int libxl__qmp_pci_add(libxl__gc *gc, int d, libxl_device_pci *pcidev);
-_hidden int libxl__qmp_pci_del(libxl__gc *gc, int domid,
-                               libxl_device_pci *pcidev);
+_hidden int libxl__qmp_pci_add(libxl__gc *gc, libxl_domid d,
+                               libxl_dmid dmid, libxl_device_pci *pcidev);
+_hidden int libxl__qmp_pci_del(libxl__gc *gc, libxl_domid domid,
+                               libxl_dmid dmid, libxl_device_pci *pcidev);
 /* Suspend QEMU. */
-_hidden int libxl__qmp_stop(libxl__gc *gc, int domid);
+_hidden int libxl__qmp_stop(libxl__gc *gc, libxl_domid domid, libxl_dmid dmid);
 /* Resume QEMU. */
-_hidden int libxl__qmp_resume(libxl__gc *gc, int domid);
+_hidden int libxl__qmp_resume(libxl__gc *gc, libxl_domid domid,
+                              libxl_dmid dmid);
 /* Save current QEMU state into fd. */
-_hidden int libxl__qmp_save(libxl__gc *gc, int domid, const char *filename);
+_hidden int libxl__qmp_save(libxl__gc *gc, libxl_domid domid,
+                            libxl_dmid dmid, const char *filename);
 /* close and free the QMP handler */
 _hidden void libxl__qmp_close(libxl__qmp_handler *qmp);
 /* remove the socket file, if the file has already been removed,
  * nothing happen */
-_hidden void libxl__qmp_cleanup(libxl__gc *gc, uint32_t domid);
+_hidden void libxl__qmp_cleanup(libxl__gc *gc, libxl_domid domid,
+                                libxl_dmid dmid);
 
 /* this helper calls qmp_initialize, query_serial and qmp_close */
-_hidden int libxl__qmp_initializations(libxl__gc *gc, uint32_t domid,
+_hidden int libxl__qmp_initializations(libxl__gc *gc, libxl_domid domid,
+                                       libxl_dmid dmid,
                                        const libxl_domain_config *guest_config);
 
 /* on failure, logs */
diff --git a/tools/libxl/libxl_qmp.c b/tools/libxl/libxl_qmp.c
index e33b130..3c3cccf 100644
--- a/tools/libxl/libxl_qmp.c
+++ b/tools/libxl/libxl_qmp.c
@@ -627,7 +627,8 @@ static void qmp_free_handler(libxl__qmp_handler *qmp)
  * API
  */
 
-libxl__qmp_handler *libxl__qmp_initialize(libxl__gc *gc, uint32_t domid)
+libxl__qmp_handler *libxl__qmp_initialize(libxl__gc *gc, libxl_domid domid,
+                                          libxl_dmid dmid)
 {
     int ret = 0;
     libxl__qmp_handler *qmp = NULL;
@@ -635,8 +636,8 @@ libxl__qmp_handler *libxl__qmp_initialize(libxl__gc *gc, uint32_t domid)
 
     qmp = qmp_init_handler(gc, domid);
 
-    qmp_socket = libxl__sprintf(gc, "%s/qmp-libxl-%d",
-                                libxl__run_dir_path(), domid);
+    qmp_socket = libxl__sprintf(gc, "%s/qmp-libxl-%u-%u",
+                                libxl__run_dir_path(), domid, dmid);
     if ((ret = qmp_open(qmp, qmp_socket, QMP_SOCKET_CONNECT_TIMEOUT)) < 0) {
         LIBXL__LOG_ERRNO(qmp->ctx, LIBXL__LOG_ERROR, "Connection error");
         qmp_free_handler(qmp);
@@ -668,13 +669,13 @@ void libxl__qmp_close(libxl__qmp_handler *qmp)
     qmp_free_handler(qmp);
 }
 
-void libxl__qmp_cleanup(libxl__gc *gc, uint32_t domid)
+void libxl__qmp_cleanup(libxl__gc *gc, libxl_domid domid, libxl_dmid dmid)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
     char *qmp_socket;
 
-    qmp_socket = libxl__sprintf(gc, "%s/qmp-libxl-%d",
-                                libxl__run_dir_path(), domid);
+    qmp_socket = libxl__sprintf(gc, "%s/qmp-libxl-%u-%u",
+                                libxl__run_dir_path(), domid, dmid);
     if (unlink(qmp_socket) == -1) {
         if (errno != ENOENT) {
             LIBXL__LOG_ERRNO(ctx, LIBXL__LOG_ERROR,
@@ -746,7 +747,9 @@ out:
     return rc;
 }
 
-int libxl__qmp_pci_add(libxl__gc *gc, int domid, libxl_device_pci *pcidev)
+int libxl__qmp_pci_add(libxl__gc *gc, libxl_domid domid,
+                       libxl_dmid dmid,
+                       libxl_device_pci *pcidev)
 {
     libxl__qmp_handler *qmp = NULL;
     flexarray_t *parameters = NULL;
@@ -754,7 +757,7 @@ int libxl__qmp_pci_add(libxl__gc *gc, int domid, libxl_device_pci *pcidev)
     char *hostaddr = NULL;
     int rc = 0;
 
-    qmp = libxl__qmp_initialize(gc, domid);
+    qmp = libxl__qmp_initialize(gc, domid, dmid);
     if (!qmp)
         return -1;
 
@@ -792,14 +795,15 @@ int libxl__qmp_pci_add(libxl__gc *gc, int domid, libxl_device_pci *pcidev)
     return rc;
 }
 
-static int qmp_device_del(libxl__gc *gc, int domid, char *id)
+static int qmp_device_del(libxl__gc *gc, libxl_domid domid,
+                          libxl_dmid dmid, char *id)
 {
     libxl__qmp_handler *qmp = NULL;
     flexarray_t *parameters = NULL;
     libxl_key_value_list args = NULL;
     int rc = 0;
 
-    qmp = libxl__qmp_initialize(gc, domid);
+    qmp = libxl__qmp_initialize(gc, domid, dmid);
     if (!qmp)
         return ERROR_FAIL;
 
@@ -817,24 +821,26 @@ static int qmp_device_del(libxl__gc *gc, int domid, char *id)
     return rc;
 }
 
-int libxl__qmp_pci_del(libxl__gc *gc, int domid, libxl_device_pci *pcidev)
+int libxl__qmp_pci_del(libxl__gc *gc, libxl_domid domid,
+                       libxl_dmid dmid, libxl_device_pci *pcidev)
 {
     char *id = NULL;
 
     id = libxl__sprintf(gc, PCI_PT_QDEV_ID,
                         pcidev->bus, pcidev->dev, pcidev->func);
 
-    return qmp_device_del(gc, domid, id);
+    return qmp_device_del(gc, domid, dmid, id);
 }
 
-int libxl__qmp_save(libxl__gc *gc, int domid, const char *filename)
+int libxl__qmp_save(libxl__gc *gc, libxl_domid domid,
+                    libxl_dmid dmid, const char *filename)
 {
     libxl__qmp_handler *qmp = NULL;
     flexarray_t *parameters = NULL;
     libxl_key_value_list args = NULL;
     int rc = 0;
 
-    qmp = libxl__qmp_initialize(gc, domid);
+    qmp = libxl__qmp_initialize(gc, domid, dmid);
     if (!qmp)
         return ERROR_FAIL;
 
@@ -883,12 +889,12 @@ static int qmp_change(libxl__gc *gc, libxl__qmp_handler *qmp,
     return rc;
 }
 
-int libxl__qmp_stop(libxl__gc *gc, int domid)
+int libxl__qmp_stop(libxl__gc *gc, libxl_domid domid, libxl_dmid dmid)
 {
     libxl__qmp_handler *qmp = NULL;
     int rc = 0;
 
-    qmp = libxl__qmp_initialize(gc, domid);
+    qmp = libxl__qmp_initialize(gc, domid, dmid);
     if (!qmp)
         return ERROR_FAIL;
 
@@ -899,12 +905,12 @@ int libxl__qmp_stop(libxl__gc *gc, int domid)
     return rc;
 }
 
-int libxl__qmp_resume(libxl__gc *gc, int domid)
+int libxl__qmp_resume(libxl__gc *gc, libxl_domid domid, libxl_dmid dmid)
 {
     libxl__qmp_handler *qmp = NULL;
     int rc = 0;
 
-    qmp = libxl__qmp_initialize(gc, domid);
+    qmp = libxl__qmp_initialize(gc, domid, dmid);
     if (!qmp)
         return ERROR_FAIL;
 
@@ -915,14 +921,15 @@ int libxl__qmp_resume(libxl__gc *gc, int domid)
     return rc;
 }
 
-int libxl__qmp_initializations(libxl__gc *gc, uint32_t domid,
+int libxl__qmp_initializations(libxl__gc *gc, libxl_domid domid,
+                               libxl_dmid dmid,
                                const libxl_domain_config *guest_config)
 {
-    const libxl_vnc_info *vnc = libxl__dm_vnc(guest_config);
+    const libxl_vnc_info *vnc = libxl__dm_vnc(dmid, guest_config);
     libxl__qmp_handler *qmp = NULL;
     int ret = 0;
 
-    qmp = libxl__qmp_initialize(gc, domid);
+    qmp = libxl__qmp_initialize(gc, domid, dmid);
     if (!qmp)
         return -1;
     ret = libxl__qmp_query_serial(qmp);
-- 
Julien Grall


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

             LIBXL__LOG_ERRNO(ctx, LIBXL__LOG_ERROR,
@@ -746,7 +747,9 @@ out:
     return rc;
 }
 
-int libxl__qmp_pci_add(libxl__gc *gc, int domid, libxl_device_pci *pcidev)
+int libxl__qmp_pci_add(libxl__gc *gc, libxl_domid domid,
+                       libxl_dmid dmid,
+                       libxl_device_pci *pcidev)
 {
     libxl__qmp_handler *qmp = NULL;
     flexarray_t *parameters = NULL;
@@ -754,7 +757,7 @@ int libxl__qmp_pci_add(libxl__gc *gc, int domid, libxl_device_pci *pcidev)
     char *hostaddr = NULL;
     int rc = 0;
 
-    qmp = libxl__qmp_initialize(gc, domid);
+    qmp = libxl__qmp_initialize(gc, domid, 0);
     if (!qmp)
         return -1;
 
@@ -792,14 +795,15 @@ int libxl__qmp_pci_add(libxl__gc *gc, int domid, libxl_device_pci *pcidev)
     return rc;
 }
 
-static int qmp_device_del(libxl__gc *gc, int domid, char *id)
+static int qmp_device_del(libxl__gc *gc, libxl_domid domid,
+                          libxl_dmid dmid, char *id)
 {
     libxl__qmp_handler *qmp = NULL;
     flexarray_t *parameters = NULL;
     libxl_key_value_list args = NULL;
     int rc = 0;
 
-    qmp = libxl__qmp_initialize(gc, domid);
+    qmp = libxl__qmp_initialize(gc, domid, 0);
     if (!qmp)
         return ERROR_FAIL;
 
@@ -817,24 +821,26 @@ static int qmp_device_del(libxl__gc *gc, int domid, char *id)
     return rc;
 }
 
-int libxl__qmp_pci_del(libxl__gc *gc, int domid, libxl_device_pci *pcidev)
+int libxl__qmp_pci_del(libxl__gc *gc, libxl_domid domid,
+                       libxl_dmid dmid, libxl_device_pci *pcidev)
 {
     char *id = NULL;
 
     id = libxl__sprintf(gc, PCI_PT_QDEV_ID,
                         pcidev->bus, pcidev->dev, pcidev->func);
 
-    return qmp_device_del(gc, domid, id);
+    return qmp_device_del(gc, domid, dmid, id);
 }
 
-int libxl__qmp_save(libxl__gc *gc, int domid, const char *filename)
+int libxl__qmp_save(libxl__gc *gc, libxl_domid domid,
+                    libxl_dmid dmid, const char *filename)
 {
     libxl__qmp_handler *qmp = NULL;
     flexarray_t *parameters = NULL;
     libxl_key_value_list args = NULL;
     int rc = 0;
 
-    qmp = libxl__qmp_initialize(gc, domid);
+    qmp = libxl__qmp_initialize(gc, domid, dmid);
     if (!qmp)
         return ERROR_FAIL;
 
@@ -883,12 +889,12 @@ static int qmp_change(libxl__gc *gc, libxl__qmp_handler *qmp,
     return rc;
 }
 
-int libxl__qmp_stop(libxl__gc *gc, int domid)
+int libxl__qmp_stop(libxl__gc *gc, libxl_domid domid, libxl_dmid dmid)
 {
     libxl__qmp_handler *qmp = NULL;
     int rc = 0;
 
-    qmp = libxl__qmp_initialize(gc, domid);
+    qmp = libxl__qmp_initialize(gc, domid, dmid);
     if (!qmp)
         return ERROR_FAIL;
 
@@ -899,12 +905,12 @@ int libxl__qmp_stop(libxl__gc *gc, int domid)
     return rc;
 }
 
-int libxl__qmp_resume(libxl__gc *gc, int domid)
+int libxl__qmp_resume(libxl__gc *gc, libxl_domid domid, libxl_dmid dmid)
 {
     libxl__qmp_handler *qmp = NULL;
     int rc = 0;
 
-    qmp = libxl__qmp_initialize(gc, domid);
+    qmp = libxl__qmp_initialize(gc, domid, dmid);
     if (!qmp)
         return ERROR_FAIL;
 
@@ -915,14 +921,15 @@ int libxl__qmp_resume(libxl__gc *gc, int domid)
     return rc;
 }
 
-int libxl__qmp_initializations(libxl__gc *gc, uint32_t domid,
+int libxl__qmp_initializations(libxl__gc *gc, libxl_domid domid,
+                               libxl_dmid dmid,
                                const libxl_domain_config *guest_config)
 {
-    const libxl_vnc_info *vnc = libxl__dm_vnc(guest_config);
+    const libxl_vnc_info *vnc = libxl__dm_vnc(dmid, guest_config);
     libxl__qmp_handler *qmp = NULL;
     int ret = 0;
 
-    qmp = libxl__qmp_initialize(gc, domid);
+    qmp = libxl__qmp_initialize(gc, domid, dmid);
     if (!qmp)
         return -1;
     ret = libxl__qmp_query_serial(qmp);
-- 
Julien Grall


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 18:55:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 18:55:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4G5Q-0006zb-Ef; Wed, 22 Aug 2012 18:55:28 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@citrix.com>) id 1T4G5N-0006lR-Gx
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 18:55:25 +0000
X-Env-Sender: julien.grall@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1345661712!9781420!5
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjU4OTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7437 invoked from network); 22 Aug 2012 18:55:18 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 18:55:18 -0000
X-IronPort-AV: E=Sophos;i="4.80,809,1344225600"; d="scan'208";a="35484803"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 14:55:18 -0400
Received: from meteora.cam.xci-test.com (10.80.248.22) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 22 Aug 2012 14:55:17 -0400
From: Julien Grall <julien.grall@citrix.com>
To: qemu-devel@nongnu.org
Date: Wed, 22 Aug 2012 13:31:56 +0100
Message-ID: <05f8b1f47b42aaf2430490453a2b783eabbe9bfb.1345552068.git.julien.grall@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <cover.1345552068.git.julien.grall@citrix.com>
References: <cover.1345552068.git.julien.grall@citrix.com>
MIME-Version: 1.0
Cc: Julien Grall <julien.grall@citrix.com>, christian.limpach@gmail.com,
	Stefano.Stabellini@eu.citrix.com, xen-devel@lists.xen.org
Subject: [Xen-devel] [XEN][RFC PATCH V2 10/17] xc: Add argument to allocate
	more special pages
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch makes it possible to allocate more special pages: with
multiple ioreq servers, two shared pages are needed per server.
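For reference, the page layout the patch creates can be sketched in isolation. `NR_SPECIAL_PAGES` and `special_pfn()` below are copied from the patch; `io_pages_for_servers()` is a hypothetical helper (not part of the patch) encoding the two-shared-pages-per-server rule:

```c
#include <assert.h>
#include <stdint.h>

/* Mirrored from the patch: six fixed special pages, with "add" extra
 * ioreq pages carved out of the region just below PFN 0xff000. */
#define NR_SPECIAL_PAGES 6
#define special_pfn(x, add) (0xff000u - (NR_SPECIAL_PAGES + (add)) + (x))

/* Hypothetical helper: two shared pages per ioreq server. */
static inline uint32_t io_pages_for_servers(uint32_t nr_servers)
{
    return 2 * nr_servers;
}
```

With three servers (six extra pages), the extra ioreq range runs from `special_pfn(NR_SPECIAL_PAGES, 6)` to `special_pfn(NR_SPECIAL_PAGES + 5, 6)`, i.e. the values the patch stores in `HVM_PARAM_IO_PFN_FIRST` and `HVM_PARAM_IO_PFN_LAST`.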

Signed-off-by: Julien Grall <julien.grall@citrix.com>
---
 tools/libxc/xc_hvm_build_x86.c    |   59 +++++++++++++++++++-----------------
 tools/libxc/xenguest.h            |    4 ++-
 tools/python/xen/lowlevel/xc/xc.c |    3 +-
 3 files changed, 36 insertions(+), 30 deletions(-)

diff --git a/tools/libxc/xc_hvm_build_x86.c b/tools/libxc/xc_hvm_build_x86.c
index cf5d7fb..b98536b 100644
--- a/tools/libxc/xc_hvm_build_x86.c
+++ b/tools/libxc/xc_hvm_build_x86.c
@@ -41,16 +41,15 @@
 #define SPECIALPAGE_PAGING   0
 #define SPECIALPAGE_ACCESS   1
 #define SPECIALPAGE_SHARING  2
-#define SPECIALPAGE_BUFIOREQ 3
-#define SPECIALPAGE_XENSTORE 4
-#define SPECIALPAGE_IOREQ    5
-#define SPECIALPAGE_IDENT_PT 6
-#define SPECIALPAGE_CONSOLE  7
-#define NR_SPECIAL_PAGES     8
-#define special_pfn(x) (0xff000u - NR_SPECIAL_PAGES + (x))
+#define SPECIALPAGE_XENSTORE 3
+#define SPECIALPAGE_IDENT_PT 4
+#define SPECIALPAGE_CONSOLE  5
+#define NR_SPECIAL_PAGES     6
+#define special_pfn(x, add) (0xff000u - (NR_SPECIAL_PAGES + (add)) + (x))
 
 static void build_hvm_info(void *hvm_info_page, uint64_t mem_size,
-                           uint64_t mmio_start, uint64_t mmio_size)
+                           uint64_t mmio_start, uint64_t mmio_size,
+                           uint32_t nr_special_pages)
 {
     struct hvm_info_table *hvm_info = (struct hvm_info_table *)
         (((unsigned char *)hvm_info_page) + HVM_INFO_OFFSET);
@@ -78,7 +77,7 @@ static void build_hvm_info(void *hvm_info_page, uint64_t mem_size,
     /* Memory parameters. */
     hvm_info->low_mem_pgend = lowmem_end >> PAGE_SHIFT;
     hvm_info->high_mem_pgend = highmem_end >> PAGE_SHIFT;
-    hvm_info->reserved_mem_pgstart = special_pfn(0);
+    hvm_info->reserved_mem_pgstart = special_pfn(0, nr_special_pages);
 
     /* Finish with the checksum. */
     for ( i = 0, sum = 0; i < hvm_info->length; i++ )
@@ -148,6 +147,7 @@ static int setup_guest(xc_interface *xch,
     unsigned long target_pages = args->mem_target >> PAGE_SHIFT;
     uint64_t mmio_start = (1ull << 32) - args->mmio_size;
     uint64_t mmio_size = args->mmio_size;
+    uint32_t nr_special_pages = args->nr_special_pages;
     unsigned long entry_eip, cur_pages, cur_pfn;
     void *hvm_info_page;
     uint32_t *ident_pt;
@@ -341,37 +341,38 @@ static int setup_guest(xc_interface *xch,
               xch, dom, PAGE_SIZE, PROT_READ | PROT_WRITE,
               HVM_INFO_PFN)) == NULL )
         goto error_out;
-    build_hvm_info(hvm_info_page, v_end, mmio_start, mmio_size);
+    build_hvm_info(hvm_info_page, v_end, mmio_start, mmio_size, nr_special_pages);
     munmap(hvm_info_page, PAGE_SIZE);
 
     /* Allocate and clear special pages. */
-    for ( i = 0; i < NR_SPECIAL_PAGES; i++ )
+    for ( i = 0; i < (NR_SPECIAL_PAGES + nr_special_pages); i++ )
     {
-        xen_pfn_t pfn = special_pfn(i);
+        xen_pfn_t pfn = special_pfn(i, nr_special_pages);
         rc = xc_domain_populate_physmap_exact(xch, dom, 1, 0, 0, &pfn);
         if ( rc != 0 )
         {
             PERROR("Could not allocate %d'th special page.", i);
             goto error_out;
         }
-        if ( xc_clear_domain_page(xch, dom, special_pfn(i)) )
+        if ( xc_clear_domain_page(xch, dom, special_pfn(i, nr_special_pages)) )
             goto error_out;
     }
 
     xc_set_hvm_param(xch, dom, HVM_PARAM_STORE_PFN,
-                     special_pfn(SPECIALPAGE_XENSTORE));
-    xc_set_hvm_param(xch, dom, HVM_PARAM_BUFIOREQ_PFN,
-                     special_pfn(SPECIALPAGE_BUFIOREQ));
-    xc_set_hvm_param(xch, dom, HVM_PARAM_IOREQ_PFN,
-                     special_pfn(SPECIALPAGE_IOREQ));
+                     special_pfn(SPECIALPAGE_XENSTORE, nr_special_pages));
     xc_set_hvm_param(xch, dom, HVM_PARAM_CONSOLE_PFN,
-                     special_pfn(SPECIALPAGE_CONSOLE));
+                     special_pfn(SPECIALPAGE_CONSOLE, nr_special_pages));
     xc_set_hvm_param(xch, dom, HVM_PARAM_PAGING_RING_PFN,
-                     special_pfn(SPECIALPAGE_PAGING));
+                     special_pfn(SPECIALPAGE_PAGING, nr_special_pages));
     xc_set_hvm_param(xch, dom, HVM_PARAM_ACCESS_RING_PFN,
-                     special_pfn(SPECIALPAGE_ACCESS));
+                     special_pfn(SPECIALPAGE_ACCESS, nr_special_pages));
     xc_set_hvm_param(xch, dom, HVM_PARAM_SHARING_RING_PFN,
-                     special_pfn(SPECIALPAGE_SHARING));
+                     special_pfn(SPECIALPAGE_SHARING, nr_special_pages));
+    xc_set_hvm_param(xch, dom, HVM_PARAM_IO_PFN_FIRST,
+                     special_pfn(NR_SPECIAL_PAGES, nr_special_pages));
+    xc_set_hvm_param(xch, dom, HVM_PARAM_IO_PFN_LAST,
+                     special_pfn(NR_SPECIAL_PAGES + nr_special_pages - 1,
+                                 nr_special_pages));
 
     /*
      * Identity-map page table is required for running with CR0.PG=0 when
@@ -379,14 +380,14 @@ static int setup_guest(xc_interface *xch,
      */
     if ( (ident_pt = xc_map_foreign_range(
               xch, dom, PAGE_SIZE, PROT_READ | PROT_WRITE,
-              special_pfn(SPECIALPAGE_IDENT_PT))) == NULL )
+              special_pfn(SPECIALPAGE_IDENT_PT, nr_special_pages))) == NULL )
         goto error_out;
     for ( i = 0; i < PAGE_SIZE / sizeof(*ident_pt); i++ )
         ident_pt[i] = ((i << 22) | _PAGE_PRESENT | _PAGE_RW | _PAGE_USER |
                        _PAGE_ACCESSED | _PAGE_DIRTY | _PAGE_PSE);
     munmap(ident_pt, PAGE_SIZE);
     xc_set_hvm_param(xch, dom, HVM_PARAM_IDENT_PT,
-                     special_pfn(SPECIALPAGE_IDENT_PT) << PAGE_SHIFT);
+                     special_pfn(SPECIALPAGE_IDENT_PT, nr_special_pages) << PAGE_SHIFT);
 
     /* Insert JMP <rel32> instruction at address 0x0 to reach entry point. */
     entry_eip = elf_uval(&elf, elf.ehdr, e_entry);
@@ -454,16 +455,18 @@ int xc_hvm_build(xc_interface *xch, uint32_t domid,
  * If target == memsize, pages are populated normally.
  */
 int xc_hvm_build_target_mem(xc_interface *xch,
-                           uint32_t domid,
-                           int memsize,
-                           int target,
-                           const char *image_name)
+                            uint32_t domid,
+                            int memsize,
+                            int target,
+                            const char *image_name,
+                            uint32_t nr_special_pages)
 {
     struct xc_hvm_build_args args = {};
 
     args.mem_size = (uint64_t)memsize << 20;
     args.mem_target = (uint64_t)target << 20;
     args.image_file_name = image_name;
+    args.nr_special_pages = nr_special_pages;
 
     return xc_hvm_build(xch, domid, &args);
 }
diff --git a/tools/libxc/xenguest.h b/tools/libxc/xenguest.h
index 707e31c..9a0d38f 100644
--- a/tools/libxc/xenguest.h
+++ b/tools/libxc/xenguest.h
@@ -216,6 +216,7 @@ struct xc_hvm_build_args {
     uint64_t mem_target;         /* Memory target in bytes. */
     uint64_t mmio_size;          /* Size of the MMIO hole in bytes. */
     const char *image_file_name; /* File name of the image to load. */
+    uint32_t nr_special_pages;   /* Additional special pages for io daemon */
 };
 
 /**
@@ -234,7 +235,8 @@ int xc_hvm_build_target_mem(xc_interface *xch,
                             uint32_t domid,
                             int memsize,
                             int target,
-                            const char *image_name);
+                            const char *image_name,
+                            uint32_t nr_special_pages);
 
 int xc_suspend_evtchn_release(xc_interface *xch, xc_evtchn *xce, int domid, int suspend_evtchn);
 
diff --git a/tools/python/xen/lowlevel/xc/xc.c b/tools/python/xen/lowlevel/xc/xc.c
index 7c89756..eb004b6 100644
--- a/tools/python/xen/lowlevel/xc/xc.c
+++ b/tools/python/xen/lowlevel/xc/xc.c
@@ -984,8 +984,9 @@ static PyObject *pyxc_hvm_build(XcObject *self,
     if ( target == -1 )
         target = memsize;
 
+    /* FIXME: retrieve the number of ioreq servers instead of hard-coding 0 */
     if ( xc_hvm_build_target_mem(self->xc_handle, dom, memsize,
-                                 target, image) != 0 )
+                                 target, image, 0) != 0 )
         return pyxc_error_to_exception(self->xc_handle);
 
 #if !defined(__ia64__)
-- 
Julien Grall


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 18:55:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 18:55:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4G5S-00073Q-B5; Wed, 22 Aug 2012 18:55:30 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@citrix.com>) id 1T4G5Q-0006zL-Mt
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 18:55:28 +0000
Received: from [85.158.138.51:40259] by server-5.bemta-3.messagelabs.com id
	82/A8-08865-F1B25305; Wed, 22 Aug 2012 18:55:27 +0000
X-Env-Sender: julien.grall@citrix.com
X-Msg-Ref: server-5.tower-174.messagelabs.com!1345661707!27566656!8
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzMzNjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10561 invoked from network); 22 Aug 2012 18:55:27 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 18:55:27 -0000
X-IronPort-AV: E=Sophos;i="4.80,809,1344225600"; d="scan'208";a="205943141"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 14:55:25 -0400
Received: from meteora.cam.xci-test.com (10.80.248.22) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 22 Aug 2012 14:55:25 -0400
From: Julien Grall <julien.grall@citrix.com>
To: qemu-devel@nongnu.org
Date: Wed, 22 Aug 2012 13:32:02 +0100
Message-ID: <0c0e65af87b2e86b62d82565c2e363ddfb8acfc6.1345552068.git.julien.grall@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <cover.1345552068.git.julien.grall@citrix.com>
References: <cover.1345552068.git.julien.grall@citrix.com>
MIME-Version: 1.0
Cc: Julien Grall <julien.grall@citrix.com>, christian.limpach@gmail.com,
	Stefano.Stabellini@eu.citrix.com, xen-devel@lists.xen.org
Subject: [Xen-devel] [XEN][RFC PATCH V2 16/17] xl: Fix PCI library
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Quick fix for the PCI library. For the moment, every hotplugged PCI device
is added to QEMU 0; we still need a better way to specify which QEMU
handles a given PCI device.
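As a minimal sketch of the xenstore path this patch switches `qemu_pci_add_xenstore()` to (the helper name and fixed-size static buffer are illustrative only, not libxl API; the real code formats the path with `libxl__sprintf` into a gc-managed buffer):

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical helper: format the per-domain device-model state path
 * introduced by the patch ("/local/domain/0/dms/<domid>/state"). */
static const char *dm_state_path(int domid)
{
    static char buf[64];  /* illustrative only; libxl uses gc allocation */
    snprintf(buf, sizeof(buf), "/local/domain/0/dms/%d/state", domid);
    return buf;
}
```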

Signed-off-by: Julien Grall <julien.grall@citrix.com>
---
 tools/libxl/libxl_pci.c |   19 +++++++++++--------
 1 files changed, 11 insertions(+), 8 deletions(-)

diff --git a/tools/libxl/libxl_pci.c b/tools/libxl/libxl_pci.c
index 48986f3..fe02ccd 100644
--- a/tools/libxl/libxl_pci.c
+++ b/tools/libxl/libxl_pci.c
@@ -834,13 +834,12 @@ static int qemu_pci_add_xenstore(libxl__gc *gc, uint32_t domid,
     }
 
     libxl__qemu_traditional_cmd(gc, domid, "pci-ins");
-    rc = libxl__wait_for_device_model(gc, domid, NULL, NULL,
+    rc = libxl__wait_for_device_model(gc, domid, 0, NULL, NULL,
                                       pci_ins_check, state);
     path = libxl__sprintf(gc, "/local/domain/0/device-model/%d/parameter",
                           domid);
     vdevfn = libxl__xs_read(gc, XBT_NULL, path);
-    path = libxl__sprintf(gc, "/local/domain/0/device-model/%d/state",
-                          domid);
+    path = libxl__sprintf(gc, "/local/domain/0/dms/%d/state", domid);
     if ( rc < 0 )
         LIBXL__LOG(ctx, LIBXL__LOG_ERROR,
                    "qemu refused to add device: %s", vdevfn);
@@ -858,11 +857,13 @@ static int do_pci_add(libxl__gc *gc, uint32_t domid, libxl_device_pci *pcidev, i
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
     int rc, hvm = 0;
+    /* FIXME: handle multiple device model */
+    libxl_dmid dmid = 0;
 
     switch (libxl__domain_type(gc, domid)) {
     case LIBXL_DOMAIN_TYPE_HVM:
         hvm = 1;
-        if (libxl__wait_for_device_model(gc, domid, "running",
+        if (libxl__wait_for_device_model(gc, domid, dmid, "running",
                                          NULL, NULL, NULL) < 0) {
             return ERROR_FAIL;
         }
@@ -871,7 +872,7 @@ static int do_pci_add(libxl__gc *gc, uint32_t domid, libxl_device_pci *pcidev, i
                 rc = qemu_pci_add_xenstore(gc, domid, pcidev);
                 break;
             case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN:
-                rc = libxl__qmp_pci_add(gc, domid, pcidev);
+                rc = libxl__qmp_pci_add(gc, domid, dmid, pcidev);
                 break;
             default:
                 return ERROR_INVAL;
@@ -1136,7 +1137,7 @@ static int qemu_pci_remove_xenstore(libxl__gc *gc, uint32_t domid,
      * device-model for function 0 */
     if ( !force && (pcidev->vdevfn & 0x7) == 0 ) {
         libxl__qemu_traditional_cmd(gc, domid, "pci-rem");
-        if (libxl__wait_for_device_model(gc, domid, "pci-removed",
+        if (libxl__wait_for_device_model(gc, domid, 0, "pci-removed",
                                          NULL, NULL, NULL) < 0) {
             LIBXL__LOG(ctx, LIBXL__LOG_ERROR, "Device Model didn't respond in time");
             /* This depends on guest operating system acknowledging the
@@ -1162,6 +1163,8 @@ static int do_pci_remove(libxl__gc *gc, uint32_t domid,
     libxl_device_pci *assigned;
     int hvm = 0, rc, num;
     int stubdomid = 0;
+    /* FIXME: Handle multiple device model */
+    libxl_dmid dmid = 0;
 
     assigned = libxl_device_pci_list(ctx, domid, &num);
     if ( assigned == NULL )
@@ -1178,7 +1181,7 @@ static int do_pci_remove(libxl__gc *gc, uint32_t domid,
     switch (libxl__domain_type(gc, domid)) {
     case LIBXL_DOMAIN_TYPE_HVM:
         hvm = 1;
-        if (libxl__wait_for_device_model(gc, domid, "running",
+        if (libxl__wait_for_device_model(gc, domid, dmid, "running",
                                          NULL, NULL, NULL) < 0)
             goto out_fail;
 
@@ -1187,7 +1190,7 @@ static int do_pci_remove(libxl__gc *gc, uint32_t domid,
             rc = qemu_pci_remove_xenstore(gc, domid, pcidev, force);
             break;
         case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN:
-            rc = libxl__qmp_pci_del(gc, domid, pcidev);
+            rc = libxl__qmp_pci_del(gc, domid, dmid, pcidev);
             break;
         default:
             rc = ERROR_INVAL;
-- 
Julien Grall


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 18:55:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 18:55:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4G5R-00072Z-RH; Wed, 22 Aug 2012 18:55:29 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@citrix.com>) id 1T4G5Q-0006pK-2E
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 18:55:28 +0000
X-Env-Sender: julien.grall@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1345661712!9781420!6
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjU4OTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7504 invoked from network); 22 Aug 2012 18:55:21 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 18:55:21 -0000
X-IronPort-AV: E=Sophos;i="4.80,809,1344225600"; d="scan'208";a="35484809"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 14:55:20 -0400
Received: from meteora.cam.xci-test.com (10.80.248.22) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 22 Aug 2012 14:55:20 -0400
From: Julien Grall <julien.grall@citrix.com>
To: qemu-devel@nongnu.org
Date: Wed, 22 Aug 2012 13:31:58 +0100
Message-ID: <51efcbff92f713286b5839884769ef34ab0c39f7.1345552068.git.julien.grall@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <cover.1345552068.git.julien.grall@citrix.com>
References: <cover.1345552068.git.julien.grall@citrix.com>
MIME-Version: 1.0
Cc: Julien Grall <julien.grall@citrix.com>, christian.limpach@gmail.com,
	Stefano.Stabellini@eu.citrix.com, xen-devel@lists.xen.org
Subject: [Xen-devel] [XEN][RFC PATCH V2 12/17] xl: Add interface to handle
	qemu disaggregation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch modifies the libxl interface for QEMU disaggregation.
For the moment, due to some dependencies between devices, we
can't let the user choose which QEMU emulates a device.

Moreover, this patch adds an "id" field to the NIC interface.
It will be used in the config file to specify which QEMU handles
the network card.

A possible disaggregation is:
    - UI: Emulate graphic card, USB, keyboard, mouse, default devices
    (PIIX4, root bridge, ...)
    - IDE: Emulate disk
    - Serial: Emulate serial port
    - Audio: Emulate audio card
    - Net: Emulate one or more network cards; multiple QEMUs can emulate
    different cards. The emulated card is specified by its NIC ID.

Signed-off-by: Julien Grall <julien.grall@citrix.com>
---
 tools/libxl/libxl.h         |    3 +++
 tools/libxl/libxl_types.idl |   15 +++++++++++++++
 2 files changed, 18 insertions(+), 0 deletions(-)

diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index c614d6f..71d4808 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -307,6 +307,7 @@ void libxl_cpuid_dispose(libxl_cpuid_policy_list *cpuid_list);
 #define LIBXL_PCI_FUNC_ALL (~0U)
 
 typedef uint32_t libxl_domid;
+typedef uint32_t libxl_dmid;
 
 /*
  * Formatting Enumerations.
@@ -478,12 +479,14 @@ typedef struct {
     libxl_domain_build_info b_info;
 
     int num_disks, num_nics, num_pcidevs, num_vfbs, num_vkbs;
+    int num_dms;
 
     libxl_device_disk *disks;
     libxl_device_nic *nics;
     libxl_device_pci *pcidevs;
     libxl_device_vfb *vfbs;
     libxl_device_vkb *vkbs;
+    libxl_dm *dms;
 
     libxl_action_on_shutdown on_poweroff;
     libxl_action_on_shutdown on_reboot;
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index daa8c79..36c802a 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -246,6 +246,20 @@ libxl_domain_sched_params = Struct("domain_sched_params",[
     ("extratime",    integer, {'init_val': 'LIBXL_DOMAIN_SCHED_PARAM_EXTRATIME_DEFAULT'}),
     ])
 
+libxl_dm_cap = Enumeration("dm_cap", [
+    (1, "UI"), # Emulate all UI + default device
+    (2, "IDE"), # Emulate IDE
+    (4, "SERIAL"), # Emulate Serial
+    (8, "AUDIO"), # Emulate audio
+    ])
+
+libxl_dm = Struct("dm", [
+    ("name",         string),
+    ("path",         string),
+    ("capabilities",   uint64),
+    ("vifs",         libxl_string_list),
+    ])
+
 libxl_domain_build_info = Struct("domain_build_info",[
     ("max_vcpus",       integer),
     ("avail_vcpus",     libxl_bitmap),
@@ -367,6 +381,7 @@ libxl_device_nic = Struct("device_nic", [
     ("nictype", libxl_nic_type),
     ("rate_bytes_per_interval", uint64),
     ("rate_interval_usecs", uint32),
+    ("id", string),
     ])
 
 libxl_device_pci = Struct("device_pci", [
-- 
Julien Grall


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 18:55:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 18:55:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4G5U-00077j-6S; Wed, 22 Aug 2012 18:55:32 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@citrix.com>) id 1T4G5R-00070A-6s
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 18:55:30 +0000
Received: from [85.158.138.51:26672] by server-7.bemta-3.messagelabs.com id
	77/79-01906-02B25305; Wed, 22 Aug 2012 18:55:28 +0000
X-Env-Sender: julien.grall@citrix.com
X-Msg-Ref: server-5.tower-174.messagelabs.com!1345661707!27566656!7
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzMzNjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10498 invoked from network); 22 Aug 2012 18:55:25 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 18:55:25 -0000
X-IronPort-AV: E=Sophos;i="4.80,809,1344225600"; d="scan'208";a="205943138"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 14:55:24 -0400
Received: from meteora.cam.xci-test.com (10.80.248.22) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 22 Aug 2012 14:55:23 -0400
From: Julien Grall <julien.grall@citrix.com>
To: qemu-devel@nongnu.org
Date: Wed, 22 Aug 2012 13:32:01 +0100
Message-ID: <9522ee398a1fd3cdce48cfe883b307336ae6674f.1345552068.git.julien.grall@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <cover.1345552068.git.julien.grall@citrix.com>
References: <cover.1345552068.git.julien.grall@citrix.com>
MIME-Version: 1.0
Cc: Julien Grall <julien.grall@citrix.com>, christian.limpach@gmail.com,
	Stefano.Stabellini@eu.citrix.com, xen-devel@lists.xen.org
Subject: [Xen-devel] [XEN][RFC PATCH V2 15/17] xl: support spawn/destroy on
	multiple device model
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Old configuration files still work with QEMU disaggregation. Before
spawning any QEMU, the toolstack will correctly fill in the
configuration structure if needed.

For the moment, the toolstack spawns device models one by one.

Signed-off-by: Julien Grall <julien.grall@citrix.com>
---
 tools/libxl/libxl.c          |   16 ++-
 tools/libxl/libxl_create.c   |  150 +++++++++++++-----
 tools/libxl/libxl_device.c   |    7 +-
 tools/libxl/libxl_dm.c       |  369 ++++++++++++++++++++++++++++++------------
 tools/libxl/libxl_dom.c      |    4 +-
 tools/libxl/libxl_internal.h |   36 +++--
 6 files changed, 421 insertions(+), 161 deletions(-)

diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index 8ea3478..60718b6 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -1330,7 +1330,8 @@ static void stubdom_destroy_callback(libxl__egc *egc,
     }
 
     dds->stubdom_finished = 1;
-    savefile = libxl__device_model_savefile(gc, dis->domid);
+    /* FIXME: get dmid */
+    savefile = libxl__device_model_savefile(gc, dis->domid, 0);
     rc = libxl__remove_file(gc, savefile);
     /*
      * On suspend libxl__domain_save_device_model will have already
@@ -1423,10 +1424,8 @@ void libxl__destroy_domid(libxl__egc *egc, libxl__destroy_domid_state *dis)
         LIBXL__LOG_ERRNOVAL(ctx, LIBXL__LOG_ERROR, rc, "xc_domain_pause failed for %d", domid);
     }
     if (dm_present) {
-        if (libxl__destroy_device_model(gc, domid) < 0)
-            LIBXL__LOG(ctx, LIBXL__LOG_ERROR, "libxl__destroy_device_model failed for %d", domid);
-
-        libxl__qmp_cleanup(gc, domid);
+        if (libxl__destroy_device_models(gc, domid) < 0)
+            LIBXL__LOG(ctx, LIBXL__LOG_ERROR, "libxl__destroy_device_models failed for %d", domid);
     }
     dis->drs.ao = ao;
     dis->drs.domid = domid;
@@ -1725,6 +1724,13 @@ out:
 
 /******************************************************************************/
 
+int libxl__dm_setdefault(libxl__gc *gc, libxl_dm *dm)
+{
+    return 0;
+}
+
+/******************************************************************************/
+
 int libxl__device_disk_setdefault(libxl__gc *gc, libxl_device_disk *disk)
 {
     int rc;
diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index 5f0d26f..7160c78 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -35,6 +35,10 @@ void libxl_domain_config_dispose(libxl_domain_config *d_config)
 {
     int i;
 
+    for (i=0; i<d_config->num_dms; i++)
+        libxl_dm_dispose(&d_config->dms[i]);
+    free(d_config->dms);
+
     for (i=0; i<d_config->num_disks; i++)
         libxl_device_disk_dispose(&d_config->disks[i]);
     free(d_config->disks);
@@ -59,6 +63,50 @@ void libxl_domain_config_dispose(libxl_domain_config *d_config)
     libxl_domain_build_info_dispose(&d_config->b_info);
 }
 
+static int libxl__domain_config_setdefault(libxl__gc *gc,
+                                           libxl_domain_config *d_config)
+{
+    libxl_domain_build_info *b_info = &d_config->b_info;
+    uint64_t cap = 0;
+    int i = 0;
+    int ret = 0;
+    libxl_dm *default_dm = NULL;
+
+    if (b_info->device_model_version == LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN_TRADITIONAL
+        && (d_config->num_dms > 1))
+        return ERROR_INVAL;
+
+    if (!d_config->num_dms) {
+        d_config->dms = malloc(sizeof (*d_config->dms));
+        if (!d_config->dms)
+            return ERROR_NOMEM;
+        libxl_dm_init(d_config->dms);
+        d_config->num_dms = 1;
+    }
+
+    for (i = 0; i < d_config->num_dms; i++)
+    {
+        ret = libxl__dm_setdefault(gc, &d_config->dms[i]);
+        if (ret) return ret;
+
+        if (cap & d_config->dms[i].capabilities)
+            /* Some capabilities are already emulated */
+            return ERROR_INVAL;
+
+        cap |= d_config->dms[i].capabilities;
+        if (d_config->dms[i].capabilities & LIBXL_DM_CAP_UI)
+            default_dm = &d_config->dms[i];
+    }
+
+    if (!default_dm)
+        default_dm = &d_config->dms[0];
+
+    /* The default device model emulates all that the others don't emulate */
+    default_dm->capabilities |= ~cap;
+
+    return ret;
+}
+
 int libxl__domain_create_info_setdefault(libxl__gc *gc,
                                          libxl_domain_create_info *c_info)
 {
@@ -145,11 +193,11 @@ int libxl__domain_build_info_setdefault(libxl__gc *gc,
                 LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN_TRADITIONAL;
         else {
             const char *dm;
-            int rc;
+            int rc = 0;
 
             b_info->device_model_version =
                 LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN;
-            dm = libxl__domain_device_model(gc, b_info);
+            dm = libxl__domain_device_model(gc, ~0, b_info);
             rc = access(dm, X_OK);
             if (rc < 0) {
                 /* qemu-xen unavailable, use qemu-xen-traditional */
@@ -651,11 +699,13 @@ static void initiate_domain_create(libxl__egc *egc,
     }
 
     dcs->guest_domid = domid;
-    dcs->dmss.dm.guest_domid = 0; /* means we haven't spawned */
 
     ret = libxl__domain_build_info_setdefault(gc, &d_config->b_info);
     if (ret) goto error_out;
 
+    ret = libxl__domain_config_setdefault(gc, d_config);
+    if (ret) goto error_out;
+
     if (!sched_params_valid(gc, domid, &d_config->b_info.sched_params)) {
         LOG(ERROR, "Invalid scheduling parameters\n");
         ret = ERROR_INVAL;
@@ -667,6 +717,15 @@ static void initiate_domain_create(libxl__egc *egc,
         if (ret) goto error_out;
     }
 
+    dcs->current_dmid = 0;
+    dcs->build_state.num_dms = d_config->num_dms;
+    GCNEW_ARRAY(dcs->dmss, d_config->num_dms);
+
+    for (i = 0; i < d_config->num_dms; i++) {
+        dcs->dmss[i].dm.guest_domid = 0; /* Means we haven't spawned */
+        dcs->dmss[i].dm.dcs = dcs;
+    }
+
     dcs->bl.ao = ao;
     libxl_device_disk *bootdisk =
         d_config->num_disks > 0 ? &d_config->disks[0] : NULL;
@@ -709,6 +768,26 @@ static void domcreate_console_available(libxl__egc *egc,
                                         dcs->guest_domid));
 }
 
+static void domcreate_spawn_devmodel(libxl__egc *egc,
+                                    libxl__domain_create_state *dcs,
+                                    libxl_dmid dmid)
+{
+    libxl__stub_dm_spawn_state *dmss = &dcs->dmss[dmid];
+    STATE_AO_GC(dcs->ao);
+
+    /* We might be going to call libxl__spawn_local_dm, or _spawn_stub_dm.
+     * Fill in any field required by either, including both relevant
+     * callbacks (_spawn_stub_dm will overwrite our trespass if needed). */
+    dmss->dm.spawn.ao = ao;
+    dmss->dm.guest_config = dcs->guest_config;
+    dmss->dm.build_state = &dcs->build_state;
+    dmss->dm.callback = domcreate_devmodel_started;
+    dmss->callback = domcreate_devmodel_started;
+    dmss->dm.dmid = dmid;
+
+    libxl__spawn_dm(egc, dmss);
+}
+
 static void domcreate_bootloader_done(libxl__egc *egc,
                                       libxl__bootloader_state *bl,
                                       int rc)
@@ -735,15 +814,6 @@ static void domcreate_bootloader_done(libxl__egc *egc,
      */
     state->pv_cmdline = bl->cmdline;
 
-    /* We might be going to call libxl__spawn_local_dm, or _spawn_stub_dm.
-     * Fill in any field required by either, including both relevant
-     * callbacks (_spawn_stub_dm will overwrite our trespass if needed). */
-    dcs->dmss.dm.spawn.ao = ao;
-    dcs->dmss.dm.guest_config = dcs->guest_config;
-    dcs->dmss.dm.build_state = &dcs->build_state;
-    dcs->dmss.dm.callback = domcreate_devmodel_started;
-    dcs->dmss.callback = domcreate_devmodel_started;
-
     if ( restore_fd < 0 ) {
         rc = libxl__domain_build(gc, &d_config->b_info, domid, state);
         domcreate_rebuild_done(egc, dcs, rc);
@@ -962,11 +1032,7 @@ static void domcreate_launch_dm(libxl__egc *egc, libxl__multidev *multidev,
         libxl__device_vkb_add(gc, domid, &vkb);
         libxl_device_vkb_dispose(&vkb);
 
-        dcs->dmss.dm.guest_domid = domid;
-        if (libxl_defbool_val(d_config->b_info.device_model_stubdomain))
-            libxl__spawn_stub_dm(egc, &dcs->dmss);
-        else
-            libxl__spawn_local_dm(egc, &dcs->dmss.dm);
+        domcreate_spawn_devmodel(egc, dcs, dcs->current_dmid);
         return;
     }
     case LIBXL_DOMAIN_TYPE_PV:
@@ -991,12 +1057,11 @@ static void domcreate_launch_dm(libxl__egc *egc, libxl__multidev *multidev,
         libxl__device_console_dispose(&console);
 
         if (need_qemu) {
-            dcs->dmss.dm.guest_domid = domid;
-            libxl__spawn_local_dm(egc, &dcs->dmss.dm);
+            assert(dcs->dmss);
+            domcreate_spawn_devmodel(egc, dcs, dcs->current_dmid);
             return;
         } else {
-            assert(!dcs->dmss.dm.guest_domid);
-            domcreate_devmodel_started(egc, &dcs->dmss.dm, 0);
+            assert(!dcs->dmss);
             return;
         }
     }
@@ -1015,7 +1080,7 @@ static void domcreate_devmodel_started(libxl__egc *egc,
                                        libxl__dm_spawn_state *dmss,
                                        int ret)
 {
-    libxl__domain_create_state *dcs = CONTAINER_OF(dmss, *dcs, dmss.dm);
+    libxl__domain_create_state *dcs = dmss->dcs;
     STATE_AO_GC(dmss->spawn.ao);
     libxl_ctx *ctx = CTX;
     int domid = dcs->guest_domid;
@@ -1029,15 +1094,15 @@ static void domcreate_devmodel_started(libxl__egc *egc,
         goto error_out;
     }
 
-    if (dcs->dmss.dm.guest_domid) {
+    if (dmss->guest_domid) {
         if (d_config->b_info.device_model_version
             == LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN) {
-            libxl__qmp_initializations(gc, domid, d_config);
+            libxl__qmp_initializations(gc, domid, dmss->dmid, d_config);
         }
     }
 
     /* Plug nic interfaces */
-    if (d_config->num_nics > 0) {
+    if (d_config->num_nics > 0 && dmss->dmid == 0) {
         /* Attach nics */
         libxl__multidev_begin(ao, &dcs->multidev);
         dcs->multidev.callback = domcreate_attach_pci;
@@ -1071,23 +1136,34 @@ static void domcreate_attach_pci(libxl__egc *egc, libxl__multidev *multidev,
         goto error_out;
     }
 
-    for (i = 0; i < d_config->num_pcidevs; i++)
-        libxl__device_pci_add(gc, domid, &d_config->pcidevs[i], 1);
+    /* FIXME: for the moment, only attach PCI devices to device model 0 */
 
-    if (d_config->num_pcidevs > 0) {
-        ret = libxl__create_pci_backend(gc, domid, d_config->pcidevs,
-            d_config->num_pcidevs);
-        if (ret < 0) {
-            LIBXL__LOG(ctx, LIBXL__LOG_ERROR,
-                "libxl_create_pci_backend failed: %d", ret);
-            goto error_out;
+    if (dcs->current_dmid == 0) {
+        for (i = 0; i < d_config->num_pcidevs; i++)
+            libxl__device_pci_add(gc, domid,
+                                  &d_config->pcidevs[i], 1);
+
+        if (d_config->num_pcidevs > 0) {
+            ret = libxl__create_pci_backend(gc, domid, d_config->pcidevs,
+                d_config->num_pcidevs);
+            if (ret < 0) {
+                LIBXL__LOG(ctx, LIBXL__LOG_ERROR,
+                    "libxl_create_pci_backend failed: %d", ret);
+                goto error_out;
+            }
         }
     }
 
-    libxl__arch_domain_create(gc, d_config, domid);
-    domcreate_console_available(egc, dcs);
+    dcs->current_dmid++;
+
+    if (dcs->current_dmid >= dcs->guest_config->num_dms) {
+        libxl__arch_domain_create(gc, d_config, domid);
+        domcreate_console_available(egc, dcs);
+        domcreate_complete(egc, dcs, 0);
+    } else {
+        domcreate_spawn_devmodel(egc, dcs, dcs->current_dmid);
+    }
 
-    domcreate_complete(egc, dcs, 0);
     return;
 
 error_out:
diff --git a/tools/libxl/libxl_device.c b/tools/libxl/libxl_device.c
index 8e8410e..798665e 100644
--- a/tools/libxl/libxl_device.c
+++ b/tools/libxl/libxl_device.c
@@ -1034,8 +1034,8 @@ static void devices_remove_callback(libxl__egc *egc,
     return;
 }
 
-int libxl__wait_for_device_model(libxl__gc *gc,
-                                 uint32_t domid, char *state,
+int libxl__wait_for_device_model(libxl__gc *gc, libxl_domid domid,
+                                 libxl_dmid dmid, char *state,
                                  libxl__spawn_starting *spawning,
                                  int (*check_callback)(libxl__gc *gc,
                                                        uint32_t domid,
@@ -1044,7 +1044,8 @@ int libxl__wait_for_device_model(libxl__gc *gc,
                                  void *check_callback_userdata)
 {
     char *path;
-    path = libxl__sprintf(gc, "/local/domain/0/device-model/%d/state", domid);
+    path = libxl__sprintf(gc, "/local/domain/0/dms/%u/%u/state",
+                          domid, dmid);
     return libxl__wait_for_offspring(gc, domid,
                                      LIBXL_DEVICE_MODEL_START_TIMEOUT,
                                      "Device Model", path, state, spawning,
diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
index 0c0084f..de7138f 100644
--- a/tools/libxl/libxl_dm.c
+++ b/tools/libxl/libxl_dm.c
@@ -28,24 +28,30 @@ static const char *libxl_tapif_script(libxl__gc *gc)
 #endif
 }
 
-const char *libxl__device_model_savefile(libxl__gc *gc, uint32_t domid)
+const char *libxl__device_model_savefile(libxl__gc *gc, libxl_domid domid,
+                                         libxl_dmid dmid)
 {
-    return libxl__sprintf(gc, "/var/lib/xen/qemu-save.%d", domid);
+    return libxl__sprintf(gc, "/var/lib/xen/qemu-save.%u.%u", domid, dmid);
 }
 
 const char *libxl__domain_device_model(libxl__gc *gc,
-                                       const libxl_domain_build_info *info)
+                                       uint32_t dmid,
+                                       const libxl_domain_build_info *b_info)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
     const char *dm;
+    libxl_domain_config *guest_config = CONTAINER_OF(b_info, *guest_config,
+                                                     b_info);
 
-    if (libxl_defbool_val(info->device_model_stubdomain))
+    if (libxl_defbool_val(guest_config->b_info.device_model_stubdomain))
         return NULL;
 
-    if (info->device_model) {
-        dm = libxl__strdup(gc, info->device_model);
+    if (dmid < guest_config->num_dms && guest_config->dms[dmid].path) {
+        dm = libxl__strdup(gc, guest_config->dms[dmid].path);
+    } else if (guest_config->b_info.device_model) {
+        dm = libxl__strdup(gc, guest_config->b_info.device_model);
     } else {
-        switch (info->device_model_version) {
+        switch (guest_config->b_info.device_model_version) {
         case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN_TRADITIONAL:
             dm = libxl__abs_path(gc, "qemu-dm", libxl__libexec_path());
             break;
@@ -55,7 +61,7 @@ const char *libxl__domain_device_model(libxl__gc *gc,
         default:
             LIBXL__LOG(ctx, LIBXL__LOG_ERROR,
                        "invalid device model version %d\n",
-                       info->device_model_version);
+                       guest_config->b_info.device_model_version);
             dm = NULL;
             break;
         }
@@ -63,7 +69,8 @@ const char *libxl__domain_device_model(libxl__gc *gc,
     return dm;
 }
 
-const libxl_vnc_info *libxl__dm_vnc(const libxl_domain_config *guest_config)
+const libxl_vnc_info *libxl__dm_vnc(libxl_dmid dmid,
+                                    const libxl_domain_config *guest_config)
 {
     const libxl_vnc_info *vnc = NULL;
     if (guest_config->b_info.type == LIBXL_DOMAIN_TYPE_HVM) {
@@ -103,7 +110,7 @@ static char ** libxl__build_device_model_args_old(libxl__gc *gc,
     const libxl_domain_create_info *c_info = &guest_config->c_info;
     const libxl_domain_build_info *b_info = &guest_config->b_info;
     const libxl_device_nic *nics = guest_config->nics;
-    const libxl_vnc_info *vnc = libxl__dm_vnc(guest_config);
+    const libxl_vnc_info *vnc = libxl__dm_vnc(0, guest_config);
     const libxl_sdl_info *sdl = dm_sdl(guest_config);
     const int num_nics = guest_config->num_nics;
     const char *keymap = dm_keymap(guest_config);
@@ -321,24 +328,58 @@ static char *dm_spice_options(libxl__gc *gc,
     return opt;
 }
 
+static int libxl__dm_has_vif(const char *vifname, libxl_dmid dmid,
+                             const libxl_domain_config *guest_config)
+{
+    const libxl_dm *dm_config = &guest_config->dms[dmid];
+    int i = 0;
+
+    if (!vifname && (dm_config->capabilities & LIBXL_DM_CAP_UI))
+        return 1;
+
+    if (!dm_config->vifs)
+        return 0;
+
+    for (i = 0; dm_config->vifs[i]; i++) {
+        if (!strcmp(dm_config->vifs[i], vifname))
+            return 1;
+    }
+
+    return 0;
+}
+
 static char ** libxl__build_device_model_args_new(libxl__gc *gc,
-                                        const char *dm, int guest_domid,
+                                        const char *dm, libxl_domid guest_domid,
+                                        libxl_dmid dmid,
                                         const libxl_domain_config *guest_config,
                                         const libxl__domain_build_state *state)
 {
+    /*
+     * PCI device number. Slots below 3 hold the IDE, ISA, SouthBridge
+     * and Xen PCI devices. These devices are emulated in each QEMU,
+     * but only one QEMU (the one which emulates the default devices)
+     * registers them through the Xen PCI hypercall.
+     */
+    static unsigned int bdf = 3;
+
     libxl_ctx *ctx = libxl__gc_owner(gc);
     const libxl_domain_create_info *c_info = &guest_config->c_info;
     const libxl_domain_build_info *b_info = &guest_config->b_info;
+    const libxl_dm *dm_config = &guest_config->dms[dmid];
     const libxl_device_disk *disks = guest_config->disks;
     const libxl_device_nic *nics = guest_config->nics;
     const int num_disks = guest_config->num_disks;
     const int num_nics = guest_config->num_nics;
-    const libxl_vnc_info *vnc = libxl__dm_vnc(guest_config);
+    const libxl_vnc_info *vnc = libxl__dm_vnc(dmid, guest_config);
     const libxl_sdl_info *sdl = dm_sdl(guest_config);
     const char *keymap = dm_keymap(guest_config);
     flexarray_t *dm_args;
     int i;
     uint64_t ram_size;
+    uint32_t cap_ui = dm_config->capabilities & LIBXL_DM_CAP_UI;
+    uint32_t cap_ide = dm_config->capabilities & LIBXL_DM_CAP_IDE;
+    uint32_t cap_serial = dm_config->capabilities & LIBXL_DM_CAP_SERIAL;
+    uint32_t cap_audio = dm_config->capabilities & LIBXL_DM_CAP_AUDIO;
 
     dm_args = flexarray_make(16, 1);
     if (!dm_args)
@@ -348,11 +389,12 @@ static char ** libxl__build_device_model_args_new(libxl__gc *gc,
                       "-xen-domid",
                       libxl__sprintf(gc, "%d", guest_domid), NULL);
 
+    flexarray_append(dm_args, "-nodefaults");
     flexarray_append(dm_args, "-chardev");
     flexarray_append(dm_args,
                      libxl__sprintf(gc, "socket,id=libxl-cmd,"
-                                    "path=%s/qmp-libxl-%d,server,nowait",
-                                    libxl__run_dir_path(), guest_domid));
+                                    "path=%s/qmp-libxl-%u-%u,server,nowait",
+                                    libxl__run_dir_path(), guest_domid, dmid));
 
     flexarray_append(dm_args, "-mon");
     flexarray_append(dm_args, "chardev=libxl-cmd,mode=control");
@@ -364,7 +406,8 @@ static char ** libxl__build_device_model_args_new(libxl__gc *gc,
     if (c_info->name) {
         flexarray_vappend(dm_args, "-name", c_info->name, NULL);
     }
-    if (vnc) {
+
+    if (vnc && cap_ui) {
         int display = 0;
         const char *listen = "127.0.0.1";
         char *vncarg = NULL;
@@ -395,7 +438,7 @@ static char ** libxl__build_device_model_args_new(libxl__gc *gc,
         }
         flexarray_append(dm_args, vncarg);
     }
-    if (sdl) {
+    if (sdl && cap_ui) {
         flexarray_append(dm_args, "-sdl");
         /* XXX sdl->{display,xauthority} into $DISPLAY/$XAUTHORITY */
     }
@@ -411,13 +454,27 @@ static char ** libxl__build_device_model_args_new(libxl__gc *gc,
     if (b_info->type == LIBXL_DOMAIN_TYPE_HVM) {
         int ioemu_nics = 0;
 
-        if (b_info->u.hvm.serial) {
+        if (b_info->u.hvm.serial && cap_serial) {
             flexarray_vappend(dm_args, "-serial", b_info->u.hvm.serial, NULL);
         }
 
-        if (libxl_defbool_val(b_info->u.hvm.nographic) && (!sdl && !vnc)) {
+        if ((libxl_defbool_val(b_info->u.hvm.nographic) && (!sdl && !vnc))
+            || !cap_ui) {
             flexarray_append(dm_args, "-nographic");
         }
+        else {
+            switch (b_info->u.hvm.vga.kind) {
+            case LIBXL_VGA_INTERFACE_TYPE_STD:
+                flexarray_vappend(dm_args, "-device",
+                                  GCSPRINTF("VGA,addr=%u", bdf++), NULL);
+                break;
+            case LIBXL_VGA_INTERFACE_TYPE_CIRRUS:
+                flexarray_vappend(dm_args, "-device",
+                                  GCSPRINTF("cirrus-vga,addr=%u", bdf++),
+                                  NULL);
+                break;
+            }
+        }
 
         if (libxl_defbool_val(b_info->u.hvm.spice.enable)) {
             const libxl_spice_info *spice = &b_info->u.hvm.spice;
@@ -429,27 +486,19 @@ static char ** libxl__build_device_model_args_new(libxl__gc *gc,
             flexarray_append(dm_args, spiceoptions);
         }
 
-        switch (b_info->u.hvm.vga.kind) {
-        case LIBXL_VGA_INTERFACE_TYPE_STD:
-            flexarray_vappend(dm_args, "-vga", "std", NULL);
-            break;
-        case LIBXL_VGA_INTERFACE_TYPE_CIRRUS:
-            flexarray_vappend(dm_args, "-vga", "cirrus", NULL);
-            break;
-        }
-
         if (b_info->u.hvm.boot) {
             flexarray_vappend(dm_args, "-boot",
                     libxl__sprintf(gc, "order=%s", b_info->u.hvm.boot), NULL);
         }
-        if (libxl_defbool_val(b_info->u.hvm.usb) || b_info->u.hvm.usbdevice) {
+        if ((libxl_defbool_val(b_info->u.hvm.usb) || b_info->u.hvm.usbdevice)
+            && cap_ui) {
             flexarray_append(dm_args, "-usb");
             if (b_info->u.hvm.usbdevice) {
                 flexarray_vappend(dm_args,
                                   "-usbdevice", b_info->u.hvm.usbdevice, NULL);
             }
         }
-        if (b_info->u.hvm.soundhw) {
+        if (b_info->u.hvm.soundhw && cap_audio) {
             flexarray_vappend(dm_args, "-soundhw", b_info->u.hvm.soundhw, NULL);
         }
         if (!libxl_defbool_val(b_info->u.hvm.acpi)) {
@@ -469,7 +518,8 @@ static char ** libxl__build_device_model_args_new(libxl__gc *gc,
                                                          b_info->max_vcpus));
         }
         for (i = 0; i < num_nics; i++) {
-            if (nics[i].nictype == LIBXL_NIC_TYPE_VIF_IOEMU) {
+            if (nics[i].nictype == LIBXL_NIC_TYPE_VIF_IOEMU
+                && libxl__dm_has_vif(nics[i].id, dmid, guest_config)) {
                 char *smac = libxl__sprintf(gc,
                                 LIBXL_MAC_FMT, LIBXL_MAC_BYTES(nics[i].mac));
                 const char *ifname = libxl__device_nic_devname(gc,
@@ -477,9 +527,9 @@ static char ** libxl__build_device_model_args_new(libxl__gc *gc,
                                                 LIBXL_NIC_TYPE_VIF_IOEMU);
                 flexarray_append(dm_args, "-device");
                 flexarray_append(dm_args,
-                   libxl__sprintf(gc, "%s,id=nic%d,netdev=net%d,mac=%s",
-                                                nics[i].model, nics[i].devid,
-                                                nics[i].devid, smac));
+                            GCSPRINTF("%s,id=nic%d,netdev=net%d,mac=%s,addr=%u",
+                                      nics[i].model, nics[i].devid,
+                                      nics[i].devid, smac, bdf++));
                 flexarray_append(dm_args, "-netdev");
                 flexarray_append(dm_args, GCSPRINTF(
                                           "type=tap,id=net%d,ifname=%s,"
@@ -495,7 +545,7 @@ static char ** libxl__build_device_model_args_new(libxl__gc *gc,
             flexarray_append(dm_args, "-net");
             flexarray_append(dm_args, "none");
         }
-        if (libxl_defbool_val(b_info->u.hvm.gfx_passthru)) {
+        if (libxl_defbool_val(b_info->u.hvm.gfx_passthru) && cap_ui) {
             flexarray_append(dm_args, "-gfx_passthru");
         }
     } else {
@@ -506,13 +556,14 @@ static char ** libxl__build_device_model_args_new(libxl__gc *gc,
 
     if (state->saved_state) {
         /* This file descriptor is meant to be used by QEMU */
-        int migration_fd = open(state->saved_state, O_RDONLY);
+        int migration_fd = open(libxl__sprintf(gc, "%s.%u", state->saved_state,
+                                               dmid), O_RDONLY);
         flexarray_append(dm_args, "-incoming");
         flexarray_append(dm_args, libxl__sprintf(gc, "fd:%d", migration_fd));
     }
     for (i = 0; b_info->extra && b_info->extra[i] != NULL; i++)
         flexarray_append(dm_args, b_info->extra[i]);
-    flexarray_append(dm_args, "-M");
+    flexarray_append(dm_args, "-machine");
     switch (b_info->type) {
     case LIBXL_DOMAIN_TYPE_PV:
         flexarray_append(dm_args, "xenpv");
@@ -520,7 +571,11 @@ static char ** libxl__build_device_model_args_new(libxl__gc *gc,
             flexarray_append(dm_args, b_info->extra_pv[i]);
         break;
     case LIBXL_DOMAIN_TYPE_HVM:
-        flexarray_append(dm_args, "xenfv");
+        flexarray_append(dm_args,
+                         libxl__sprintf(gc,
+                                        "xenfv,xen_dmid=%u,xen_default_dev=%s,xen_emulate_ide=%s",
+                                        dmid, (cap_ui) ? "on" : "off",
+                                        (cap_ide) ? "on" : "off"));
         for (i = 0; b_info->extra_hvm && b_info->extra_hvm[i] != NULL; i++)
             flexarray_append(dm_args, b_info->extra_hvm[i]);
         break;
@@ -528,65 +583,69 @@ static char ** libxl__build_device_model_args_new(libxl__gc *gc,
         abort();
     }
 
-    ram_size = libxl__sizekb_to_mb(b_info->max_memkb - b_info->video_memkb);
+    /* Allocate 32MB of extra RAM per preceding device model for its ROMs */
+    ram_size = libxl__sizekb_to_mb(b_info->max_memkb - b_info->video_memkb)
+        + 32 * dmid;
     flexarray_append(dm_args, "-m");
     flexarray_append(dm_args, libxl__sprintf(gc, "%"PRId64, ram_size));
 
     if (b_info->type == LIBXL_DOMAIN_TYPE_HVM) {
-        for (i = 0; i < num_disks; i++) {
-            int disk, part;
-            int dev_number =
-                libxl__device_disk_dev_number(disks[i].vdev, &disk, &part);
-            const char *format = qemu_disk_format_string(disks[i].format);
-            char *drive;
-
-            if (dev_number == -1) {
-                LIBXL__LOG(ctx, LIBXL__LOG_WARNING, "unable to determine"
-                           " disk number for %s", disks[i].vdev);
-                continue;
-            }
-
-            if (disks[i].is_cdrom) {
-                if (disks[i].format == LIBXL_DISK_FORMAT_EMPTY)
-                    drive = libxl__sprintf
-                        (gc, "if=ide,index=%d,media=cdrom", disk);
-                else
-                    drive = libxl__sprintf
-                        (gc, "file=%s,if=ide,index=%d,media=cdrom,format=%s",
-                         disks[i].pdev_path, disk, format);
-            } else {
-                if (disks[i].format == LIBXL_DISK_FORMAT_EMPTY) {
-                    LIBXL__LOG(ctx, LIBXL__LOG_WARNING, "cannot support"
-                               " empty disk format for %s", disks[i].vdev);
+        if (cap_ide) {
+            for (i = 0; i < num_disks; i++) {
+                int disk, part;
+                int dev_number =
+                    libxl__device_disk_dev_number(disks[i].vdev, &disk, &part);
+                const char *format = qemu_disk_format_string(disks[i].format);
+                char *drive;
+
+                if (dev_number == -1) {
+                    LIBXL__LOG(ctx, LIBXL__LOG_WARNING, "unable to determine"
+                               " disk number for %s", disks[i].vdev);
                     continue;
                 }
 
-                if (format == NULL) {
-                    LIBXL__LOG(ctx, LIBXL__LOG_WARNING, "unable to determine"
-                               " disk image format %s", disks[i].vdev);
-                    continue;
+                if (disks[i].is_cdrom) {
+                    if (disks[i].format == LIBXL_DISK_FORMAT_EMPTY)
+                        drive = libxl__sprintf
+                            (gc, "if=ide,index=%d,media=cdrom", disk);
+                    else
+                        drive = libxl__sprintf
+                            (gc, "file=%s,if=ide,index=%d,media=cdrom,format=%s",
+                             disks[i].pdev_path, disk, format);
+                } else {
+                    if (disks[i].format == LIBXL_DISK_FORMAT_EMPTY) {
+                        LIBXL__LOG(ctx, LIBXL__LOG_WARNING, "cannot support"
+                                   " empty disk format for %s", disks[i].vdev);
+                        continue;
+                    }
+
+                    if (format == NULL) {
+                        LIBXL__LOG(ctx, LIBXL__LOG_WARNING, "unable to determine"
+                                   " disk image format %s", disks[i].vdev);
+                        continue;
+                    }
+
+                    /*
+                     * Explicit sd disks are passed through as is.
+                     *
+                     * For other disks we translate devices 0..3 into
+                     * hd[a-d] and ignore the rest.
+                     */
+                    if (strncmp(disks[i].vdev, "sd", 2) == 0)
+                        drive = libxl__sprintf
+                            (gc, "file=%s,if=scsi,bus=0,unit=%d,format=%s",
+                             disks[i].pdev_path, disk, format);
+                    else if (disk < 4)
+                        drive = libxl__sprintf
+                            (gc, "file=%s,if=ide,index=%d,media=disk,format=%s",
+                             disks[i].pdev_path, disk, format);
+                    else
+                        continue; /* Do not emulate this disk */
                 }
 
-                /*
-                 * Explicit sd disks are passed through as is.
-                 *
-                 * For other disks we translate devices 0..3 into
-                 * hd[a-d] and ignore the rest.
-                 */
-                if (strncmp(disks[i].vdev, "sd", 2) == 0)
-                    drive = libxl__sprintf
-                        (gc, "file=%s,if=scsi,bus=0,unit=%d,format=%s",
-                         disks[i].pdev_path, disk, format);
-                else if (disk < 4)
-                    drive = libxl__sprintf
-                        (gc, "file=%s,if=ide,index=%d,media=disk,format=%s",
-                         disks[i].pdev_path, disk, format);
-                else
-                    continue; /* Do not emulate this disk */
+                flexarray_append(dm_args, "-drive");
+                flexarray_append(dm_args, drive);
             }
-
-            flexarray_append(dm_args, "-drive");
-            flexarray_append(dm_args, drive);
         }
     }
     flexarray_append(dm_args, NULL);
@@ -594,7 +653,9 @@ static char ** libxl__build_device_model_args_new(libxl__gc *gc,
 }
 
 static char ** libxl__build_device_model_args(libxl__gc *gc,
-                                        const char *dm, int guest_domid,
+                                        const char *dm,
+                                        libxl_domid guest_domid,
+                                        libxl_dmid dmid,
                                         const libxl_domain_config *guest_config,
                                         const libxl__domain_build_state *state)
 {
@@ -607,8 +668,8 @@ static char ** libxl__build_device_model_args(libxl__gc *gc,
                                                   state);
     case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN:
         return libxl__build_device_model_args_new(gc, dm,
-                                                  guest_domid, guest_config,
-                                                  state);
+                                                  guest_domid, dmid,
+                                                  guest_config, state);
     default:
         LIBXL__LOG_ERRNO(ctx, LIBXL__LOG_ERROR, "unknown device model version %d",
                          guest_config->b_info.device_model_version);
@@ -729,7 +790,8 @@ char *libxl__stub_dm_name(libxl__gc *gc, const char *guest_name)
     return libxl__sprintf(gc, "%s-dm", guest_name);
 }
 
-void libxl__spawn_stub_dm(libxl__egc *egc, libxl__stub_dm_spawn_state *sdss)
+static void libxl__spawn_stub_dm(libxl__egc *egc,
+                                 libxl__stub_dm_spawn_state *sdss)
 {
     STATE_AO_GC(sdss->dm.spawn.ao);
     libxl_ctx *ctx = libxl__gc_owner(gc);
@@ -815,7 +877,7 @@ void libxl__spawn_stub_dm(libxl__egc *egc, libxl__stub_dm_spawn_state *sdss)
     if (ret)
         goto out;
 
-    args = libxl__build_device_model_args(gc, "stubdom-dm", guest_domid,
+    args = libxl__build_device_model_args(gc, "stubdom-dm", guest_domid, 0,
                                           guest_config, d_state);
     if (!args) {
         ret = ERROR_FAIL;
@@ -871,12 +933,16 @@ out:
     spawn_stubdom_pvqemu_cb(egc, &sdss->pvqemu, ret);
 }
 
+static void libxl__spawn_local_dm(libxl__egc *egc,
+                                  libxl__dm_spawn_state *sdss);
+
 static void spawn_stub_launch_dm(libxl__egc *egc,
                                  libxl__multidev *multidev, int ret)
 {
     libxl__stub_dm_spawn_state *sdss = CONTAINER_OF(multidev, *sdss, multidev);
     STATE_AO_GC(sdss->dm.spawn.ao);
     libxl_ctx *ctx = libxl__gc_owner(gc);
+    libxl_dmid dmid = sdss->dm.dmid;
     int i, num_console = STUBDOM_SPECIAL_CONSOLES;
     libxl__device_console *console;
 
@@ -937,7 +1003,8 @@ static void spawn_stub_launch_dm(libxl__egc *egc,
                 break;
             case STUBDOM_CONSOLE_SAVE:
                 console[i].output = libxl__sprintf(gc, "file:%s",
-                                libxl__device_model_savefile(gc, guest_domid));
+                                libxl__device_model_savefile(gc, guest_domid,
+                                                             dmid));
                 break;
             case STUBDOM_CONSOLE_RESTORE:
                 if (d_state->saved_state)
@@ -1049,10 +1116,11 @@ static void device_model_spawn_outcome(libxl__egc *egc,
                                        libxl__dm_spawn_state *dmss,
                                        int rc);
 
-void libxl__spawn_local_dm(libxl__egc *egc, libxl__dm_spawn_state *dmss)
+static void libxl__spawn_local_dm(libxl__egc *egc, libxl__dm_spawn_state *dmss)
 {
     /* convenience aliases */
     const int domid = dmss->guest_domid;
+    const libxl_dmid dmid = dmss->dmid;
     libxl__domain_build_state *const state = dmss->build_state;
     libxl__spawn_state *const spawn = &dmss->spawn;
 
@@ -1062,7 +1130,8 @@ void libxl__spawn_local_dm(libxl__egc *egc, libxl__dm_spawn_state *dmss)
     libxl_domain_config *guest_config = dmss->guest_config;
     const libxl_domain_create_info *c_info = &guest_config->c_info;
     const libxl_domain_build_info *b_info = &guest_config->b_info;
-    const libxl_vnc_info *vnc = libxl__dm_vnc(guest_config);
+    const libxl_vnc_info *vnc = libxl__dm_vnc(dmid, guest_config);
+    const libxl_dm *dm_config = &guest_config->dms[dmid];
     char *path, *logfile;
     int logfile_w, null;
     int rc;
@@ -1071,12 +1140,13 @@ void libxl__spawn_local_dm(libxl__egc *egc, libxl__dm_spawn_state *dmss)
     char *vm_path;
     char **pass_stuff;
     const char *dm;
+    const char *name;
 
     if (libxl_defbool_val(b_info->device_model_stubdomain)) {
         abort();
     }
 
-    dm = libxl__domain_device_model(gc, b_info);
+    dm = libxl__domain_device_model(gc, dmid, b_info);
     if (!dm) {
         rc = ERROR_FAIL;
         goto out;
@@ -1087,7 +1157,7 @@ void libxl__spawn_local_dm(libxl__egc *egc, libxl__dm_spawn_state *dmss)
         rc = ERROR_FAIL;
         goto out;
     }
-    args = libxl__build_device_model_args(gc, dm, domid, guest_config, state);
+    args = libxl__build_device_model_args(gc, dm, domid, dmid, guest_config, state);
     if (!args) {
         rc = ERROR_FAIL;
         goto out;
@@ -1101,7 +1171,7 @@ void libxl__spawn_local_dm(libxl__egc *egc, libxl__dm_spawn_state *dmss)
         free(path);
     }
 
-    path = libxl__sprintf(gc, "/local/domain/0/device-model/%d", domid);
+    path = libxl__sprintf(gc, "/local/domain/0/dms/%u/%u", domid, dmid);
     xs_mkdir(ctx->xsh, XBT_NULL, path);
 
     if (b_info->type == LIBXL_DOMAIN_TYPE_HVM &&
@@ -1110,8 +1180,13 @@ void libxl__spawn_local_dm(libxl__egc *egc, libxl__dm_spawn_state *dmss)
         libxl__xs_write(gc, XBT_NULL, libxl__sprintf(gc, "%s/disable_pf", path),
                     "%d", !libxl_defbool_val(b_info->u.hvm.xen_platform_pci));
 
+
+    name = dm_config->name;
+    if (!name)
+        name = libxl__sprintf(gc, "%u", dmid);
+
     libxl_create_logfile(ctx,
-                         libxl__sprintf(gc, "qemu-dm-%s", c_info->name),
+                         libxl__sprintf(gc, "qemu-%s-%s", name, c_info->name),
                          &logfile);
     logfile_w = open(logfile, O_WRONLY|O_CREAT|O_APPEND, 0644);
     free(logfile);
@@ -1143,10 +1218,10 @@ retry_transaction:
     for (arg = args; *arg; arg++)
         LIBXL__LOG(CTX, XTL_DEBUG, "  %s", *arg);
 
-    spawn->what = GCSPRINTF("domain %d device model", domid);
-    spawn->xspath = GCSPRINTF("/local/domain/0/device-model/%d/state", domid);
+    spawn->what = GCSPRINTF("domain %d device model %s", domid, name);
+    spawn->xspath = GCSPRINTF("/local/domain/0/dms/%u/%u/state", domid, dmid);
     spawn->timeout_ms = LIBXL_DEVICE_MODEL_START_TIMEOUT * 1000;
-    spawn->pidpath = GCSPRINTF("%s/image/device-model-pid", dom_path);
+    spawn->pidpath = GCSPRINTF("%s/image/dms/%u-pid", dom_path, dmid);
     spawn->midproc_cb = libxl__spawn_record_pid;
     spawn->confirm_cb = device_model_confirm;
     spawn->failure_cb = device_model_startup_failed;
@@ -1171,6 +1246,32 @@ out:
         device_model_spawn_outcome(egc, dmss, rc);
 }
 
+void libxl__spawn_dm(libxl__egc *egc, libxl__stub_dm_spawn_state *dmss)
+{
+    libxl__domain_create_state *dcs = dmss->dm.dcs;
+    libxl_domain_config *const d_config = dcs->guest_config;
+    STATE_AO_GC(dmss->dm.spawn.ao);
+
+    switch (d_config->c_info.type) {
+    case LIBXL_DOMAIN_TYPE_HVM:
+    {
+        dmss->dm.guest_domid = dcs->guest_domid;
+        if (libxl_defbool_val(d_config->b_info.device_model_stubdomain))
+            libxl__spawn_stub_dm(egc, dmss);
+        else
+            libxl__spawn_local_dm(egc, &dmss->dm);
+        break;
+    }
+    case LIBXL_DOMAIN_TYPE_PV:
+    {
+        dmss->dm.guest_domid = dcs->guest_domid;
+        libxl__spawn_local_dm(egc, &dmss->dm);
+        break;
+    }
+    default:
+        LIBXL__LOG(CTX, XTL_ERROR, "Unknown type %u", d_config->c_info.type);
+    }
+}
 
 static void device_model_confirm(libxl__egc *egc, libxl__spawn_state *spawn,
                                  const char *xsdata)
@@ -1207,6 +1308,7 @@ static void device_model_spawn_outcome(libxl__egc *egc,
 {
     STATE_AO_GC(dmss->spawn.ao);
     int ret2;
+    char *filename;
 
     if (rc)
         LOG(ERROR, "%s: spawn failed (rc=%d)", dmss->spawn.what, rc);
@@ -1214,10 +1316,11 @@ static void device_model_spawn_outcome(libxl__egc *egc,
     libxl__domain_build_state *state = dmss->build_state;
 
     if (state->saved_state) {
-        ret2 = unlink(state->saved_state);
+        filename = GCSPRINTF("%s.%u", state->saved_state, dmss->dmid);
+        ret2 = unlink(filename);
         if (ret2) {
             LOGE(ERROR, "%s: failed to remove device-model state %s",
-                 dmss->spawn.what, state->saved_state);
+                 dmss->spawn.what, filename);
             rc = ERROR_FAIL;
             goto out;
         }
@@ -1229,12 +1332,14 @@ static void device_model_spawn_outcome(libxl__egc *egc,
     dmss->callback(egc, dmss, rc);
 }
 
-int libxl__destroy_device_model(libxl__gc *gc, uint32_t domid)
+static int libxl__destroy_device_model(libxl__gc *gc, libxl_domid domid,
+                                       libxl_dmid dmid)
 {
     char *pid;
     int ret;
 
-    pid = libxl__xs_read(gc, XBT_NULL, libxl__sprintf(gc, "/local/domain/%d/image/device-model-pid", domid));
+    pid = libxl__xs_read(gc, XBT_NULL, GCSPRINTF("/local/domain/%u/image/dms/%u-pid",
+                                                 domid, dmid));
     if (!pid || !atoi(pid)) {
         LOG(ERROR, "could not find device-model's pid for dom %u", domid);
         ret = ERROR_FAIL;
@@ -1300,6 +1405,60 @@ out:
     return ret;
 }
 
+libxl_dmid *libxl__list_device_models(libxl__gc *gc, libxl_domid domid,
+                                      unsigned *num_dms)
+{
+    unsigned int i = 0;
+    char **dir = NULL;
+    libxl_dmid *dms = NULL;
+    unsigned int num = 0;
+
+    dir = libxl__xs_directory(gc, XBT_NULL,
+                              GCSPRINTF("/local/domain/0/dms/%u", domid),
+                              &num);
+    if (dir) {
+        GCNEW_ARRAY(dms, num);
+
+        if (num_dms)
+            *num_dms = num;
+
+        for (i = 0; i < num; i++) {
+            dms[i] = atoi(dir[i]);
+        }
+
+        return dms;
+    } else {
+        return NULL;
+    }
+}
+
+int libxl__destroy_device_models(libxl__gc *gc,
+                                 libxl_domid domid)
+{
+    libxl_ctx *ctx = libxl__gc_owner(gc);
+    int ret = 0;
+    libxl_dmid *dms = NULL;
+    unsigned int num_dms;
+    unsigned int i;
+
+    dms = libxl__list_device_models(gc, domid, &num_dms);
+
+    if (!dms)
+        return ERROR_FAIL;
+
+    for (i = 0; i < num_dms; i++)
+        ret |= libxl__destroy_device_model(gc, domid, dms[i]);
+
+    if (!ret) {
+        xs_rm(ctx->xsh, XBT_NULL, libxl__sprintf(gc, "/local/domain/0/dms/%u",
+                                                 domid));
+        xs_rm(ctx->xsh, XBT_NULL, libxl__sprintf(gc, "/local/domain/0/device-model/%u",
+                                                 domid));
+    }
+
+    return ret;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index 06d5e4f..475fea8 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -544,6 +544,7 @@ int libxl__build_hvm(libxl__gc *gc, uint32_t domid,
     libxl_ctx *ctx = libxl__gc_owner(gc);
     int ret, rc = ERROR_FAIL;
     const char *firmware = libxl__domain_firmware(gc, info);
+    libxl_domain_config *d_config = CONTAINER_OF(info, *d_config, b_info);
 
     if (!firmware)
         goto out;
@@ -552,7 +553,8 @@ int libxl__build_hvm(libxl__gc *gc, uint32_t domid,
         domid,
         (info->max_memkb - info->video_memkb) / 1024,
         (info->target_memkb - info->video_memkb) / 1024,
-        firmware);
+        firmware,
+        state->num_dms * 2 + 1);
     if (ret) {
         LIBXL__LOG_ERRNOVAL(ctx, LIBXL__LOG_ERROR, ret, "hvm building failed");
         goto out;
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index 71e4970..2e6eedc 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -867,6 +867,7 @@ typedef struct {
     unsigned long console_mfn;
 
     unsigned long vm_generationid_addr;
+    unsigned long num_dms;
 
     char *saved_state;
 
@@ -887,7 +888,7 @@ _hidden int libxl__build_hvm(libxl__gc *gc, uint32_t domid,
               libxl_domain_build_info *info,
               libxl__domain_build_state *state);
 
-_hidden int libxl__qemu_traditional_cmd(libxl__gc *gc, uint32_t domid,
+_hidden int libxl__qemu_traditional_cmd(libxl__gc *gc, libxl_domid domid,
                                         const char *cmd);
 _hidden int libxl__domain_rename(libxl__gc *gc, uint32_t domid,
                                  const char *old_name, const char *new_name,
@@ -947,6 +948,8 @@ _hidden int libxl__domain_create_info_setdefault(libxl__gc *gc,
                                         libxl_domain_create_info *c_info);
 _hidden int libxl__domain_build_info_setdefault(libxl__gc *gc,
                                         libxl_domain_build_info *b_info);
+_hidden int libxl__dm_setdefault(libxl__gc *gc,
+                                 libxl_dm *dm);
 _hidden int libxl__device_disk_setdefault(libxl__gc *gc,
                                           libxl_device_disk *disk);
 _hidden int libxl__device_nic_setdefault(libxl__gc *gc, libxl_device_nic *nic,
@@ -1042,7 +1045,9 @@ _hidden char *libxl__devid_to_localdev(libxl__gc *gc, int devid);
 
 /* from libxl_pci */
 
-_hidden int libxl__device_pci_add(libxl__gc *gc, uint32_t domid, libxl_device_pci *pcidev, int starting);
+_hidden int libxl__device_pci_add(libxl__gc *gc, libxl_domid domid,
+                                  libxl_device_pci *pcidev,
+                                  int starting);
 _hidden int libxl__create_pci_backend(libxl__gc *gc, uint32_t domid,
                                       libxl_device_pci *pcidev, int num);
 _hidden int libxl__device_pci_destroy_all(libxl__gc *gc, uint32_t domid);
@@ -1272,6 +1277,7 @@ _hidden int libxl__domain_build(libxl__gc *gc,
 
 /* for device model creation */
 _hidden const char *libxl__domain_device_model(libxl__gc *gc,
+                                        libxl_dmid dmid,
                                         const libxl_domain_build_info *info);
 _hidden int libxl__need_xenpv_qemu(libxl__gc *gc,
         int nr_consoles, libxl__device_console *consoles,
@@ -1281,7 +1287,9 @@ _hidden int libxl__need_xenpv_qemu(libxl__gc *gc,
    * return pass *starting_r (which will be non-0) to
    * libxl__confirm_device_model_startup or libxl__detach_device_model. */
 _hidden int libxl__wait_for_device_model(libxl__gc *gc,
-                                uint32_t domid, char *state,
+                                libxl_domid domid,
+                                libxl_dmid dmid,
+                                char *state,
                                 libxl__spawn_starting *spawning,
                                 int (*check_callback)(libxl__gc *gc,
                                                       uint32_t domid,
@@ -1289,9 +1297,14 @@ _hidden int libxl__wait_for_device_model(libxl__gc *gc,
                                                       void *userdata),
                                 void *check_callback_userdata);
 
-_hidden int libxl__destroy_device_model(libxl__gc *gc, uint32_t domid);
+_hidden libxl_dmid *libxl__list_device_models(libxl__gc *gc,
+                                              libxl_domid domid,
+                                              unsigned int *num_dms);
 
-_hidden const libxl_vnc_info *libxl__dm_vnc(const libxl_domain_config *g_cfg);
+_hidden int libxl__destroy_device_models(libxl__gc *gc, libxl_domid domid);
+
+_hidden const libxl_vnc_info *libxl__dm_vnc(libxl_dmid dmid,
+                                            const libxl_domain_config *g_cfg);
 
 _hidden char *libxl__abs_path(libxl__gc *gc, const char *s, const char *path);
 
@@ -2427,10 +2440,10 @@ struct libxl__dm_spawn_state {
     libxl_domain_config *guest_config;
     libxl__domain_build_state *build_state; /* relates to guest_domid */
     libxl__dm_spawn_cb *callback;
+    libxl_dmid dmid;
+    struct libxl__domain_create_state *dcs;
 };
 
-_hidden void libxl__spawn_local_dm(libxl__egc *egc, libxl__dm_spawn_state*);
-
 /* Stubdom device models. */
 
 typedef struct {
@@ -2447,7 +2460,7 @@ typedef struct {
     libxl__multidev multidev;
 } libxl__stub_dm_spawn_state;
 
-_hidden void libxl__spawn_stub_dm(libxl__egc *egc, libxl__stub_dm_spawn_state*);
+_hidden void libxl__spawn_dm(libxl__egc *egc, libxl__stub_dm_spawn_state*);
 
 _hidden char *libxl__stub_dm_name(libxl__gc *gc, const char * guest_name);
 
@@ -2470,7 +2483,8 @@ struct libxl__domain_create_state {
     int guest_domid;
     libxl__domain_build_state build_state;
     libxl__bootloader_state bl;
-    libxl__stub_dm_spawn_state dmss;
+    libxl_dmid current_dmid;
+    libxl__stub_dm_spawn_state* dmss;
         /* If we're not doing stubdom, we use only dmss.dm,
          * for the non-stubdom device model. */
     libxl__save_helper_state shs;
@@ -2527,7 +2541,9 @@ _hidden void libxl__domain_save_device_model(libxl__egc *egc,
                                      libxl__domain_suspend_state *dss,
                                      libxl__save_device_model_cb *callback);
 
-_hidden const char *libxl__device_model_savefile(libxl__gc *gc, uint32_t domid);
+_hidden const char *libxl__device_model_savefile(libxl__gc *gc,
+                                                 libxl_domid domid,
+                                                 libxl_dmid dmid);
 
 
 /*
-- 
Julien Grall


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 18:55:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 18:55:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4G5U-00077j-6S; Wed, 22 Aug 2012 18:55:32 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@citrix.com>) id 1T4G5R-00070A-6s
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 18:55:30 +0000
Received: from [85.158.138.51:26672] by server-7.bemta-3.messagelabs.com id
	77/79-01906-02B25305; Wed, 22 Aug 2012 18:55:28 +0000
X-Env-Sender: julien.grall@citrix.com
X-Msg-Ref: server-5.tower-174.messagelabs.com!1345661707!27566656!7
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzMzNjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10498 invoked from network); 22 Aug 2012 18:55:25 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 18:55:25 -0000
X-IronPort-AV: E=Sophos;i="4.80,809,1344225600"; d="scan'208";a="205943138"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 14:55:24 -0400
Received: from meteora.cam.xci-test.com (10.80.248.22) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 22 Aug 2012 14:55:23 -0400
From: Julien Grall <julien.grall@citrix.com>
To: qemu-devel@nongnu.org
Date: Wed, 22 Aug 2012 13:32:01 +0100
Message-ID: <9522ee398a1fd3cdce48cfe883b307336ae6674f.1345552068.git.julien.grall@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <cover.1345552068.git.julien.grall@citrix.com>
References: <cover.1345552068.git.julien.grall@citrix.com>
MIME-Version: 1.0
Cc: Julien Grall <julien.grall@citrix.com>, christian.limpach@gmail.com,
	Stefano.Stabellini@eu.citrix.com, xen-devel@lists.xen.org
Subject: [Xen-devel] [XEN][RFC PATCH V2 15/17] xl: support spawn/destroy on
	multiple device model
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Old configuration files still work with QEMU disaggregation. Before
spawning any QEMU, the toolstack fills in the configuration structure
with defaults where needed.

For the moment, the toolstack spawns device models one by one.

Signed-off-by: Julien Grall <julien.grall@citrix.com>
---
 tools/libxl/libxl.c          |   16 ++-
 tools/libxl/libxl_create.c   |  150 +++++++++++++-----
 tools/libxl/libxl_device.c   |    7 +-
 tools/libxl/libxl_dm.c       |  369 ++++++++++++++++++++++++++++++------------
 tools/libxl/libxl_dom.c      |    4 +-
 tools/libxl/libxl_internal.h |   36 +++--
 6 files changed, 421 insertions(+), 161 deletions(-)

diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index 8ea3478..60718b6 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -1330,7 +1330,8 @@ static void stubdom_destroy_callback(libxl__egc *egc,
     }
 
     dds->stubdom_finished = 1;
-    savefile = libxl__device_model_savefile(gc, dis->domid);
+    /* FIXME: get dmid */
+    savefile = libxl__device_model_savefile(gc, dis->domid, 0);
     rc = libxl__remove_file(gc, savefile);
     /*
      * On suspend libxl__domain_save_device_model will have already
@@ -1423,10 +1424,8 @@ void libxl__destroy_domid(libxl__egc *egc, libxl__destroy_domid_state *dis)
         LIBXL__LOG_ERRNOVAL(ctx, LIBXL__LOG_ERROR, rc, "xc_domain_pause failed for %d", domid);
     }
     if (dm_present) {
-        if (libxl__destroy_device_model(gc, domid) < 0)
-            LIBXL__LOG(ctx, LIBXL__LOG_ERROR, "libxl__destroy_device_model failed for %d", domid);
-
-        libxl__qmp_cleanup(gc, domid);
+        if (libxl__destroy_device_models(gc, domid) < 0)
+            LIBXL__LOG(ctx, LIBXL__LOG_ERROR, "libxl__destroy_device_models failed for %d", domid);
     }
     dis->drs.ao = ao;
     dis->drs.domid = domid;
@@ -1725,6 +1724,13 @@ out:
 
 /******************************************************************************/
 
+int libxl__dm_setdefault(libxl__gc *gc, libxl_dm *dm)
+{
+    return 0;
+}
+
+/******************************************************************************/
+
 int libxl__device_disk_setdefault(libxl__gc *gc, libxl_device_disk *disk)
 {
     int rc;
diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index 5f0d26f..7160c78 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -35,6 +35,10 @@ void libxl_domain_config_dispose(libxl_domain_config *d_config)
 {
     int i;
 
+    for (i=0; i<d_config->num_dms; i++)
+        libxl_dm_dispose(&d_config->dms[i]);
+    free(d_config->dms);
+
     for (i=0; i<d_config->num_disks; i++)
         libxl_device_disk_dispose(&d_config->disks[i]);
     free(d_config->disks);
@@ -59,6 +63,50 @@ void libxl_domain_config_dispose(libxl_domain_config *d_config)
     libxl_domain_build_info_dispose(&d_config->b_info);
 }
 
+static int libxl__domain_config_setdefault(libxl__gc *gc,
+                                           libxl_domain_config *d_config)
+{
+    libxl_domain_build_info *b_info = &d_config->b_info;
+    uint64_t cap = 0;
+    int i = 0;
+    int ret = 0;
+    libxl_dm *default_dm = NULL;
+
+    if (b_info->device_model_version == LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN_TRADITIONAL
+        && (d_config->num_dms > 1))
+        return ERROR_INVAL;
+
+    if (!d_config->num_dms) {
+        d_config->dms = malloc(sizeof (*d_config->dms));
+        if (!d_config->dms)
+            return ERROR_NOMEM;
+        libxl_dm_init(d_config->dms);
+        d_config->num_dms = 1;
+    }
+
+    for (i = 0; i < d_config->num_dms; i++)
+    {
+        ret = libxl__dm_setdefault(gc, &d_config->dms[i]);
+        if (ret) return ret;
+
+        if (cap & d_config->dms[i].capabilities)
+            /* Some capabilities are already emulated */
+            return ERROR_INVAL;
+
+        cap |= d_config->dms[i].capabilities;
+        if (d_config->dms[i].capabilities & LIBXL_DM_CAP_UI)
+            default_dm = &d_config->dms[i];
+    }
+
+    if (!default_dm)
+        default_dm = &d_config->dms[0];
+
+    /* The default device model emulates all that the others don't emulate */
+    default_dm->capabilities |= ~cap;
+
+    return ret;
+}
+
 int libxl__domain_create_info_setdefault(libxl__gc *gc,
                                          libxl_domain_create_info *c_info)
 {
@@ -145,11 +193,11 @@ int libxl__domain_build_info_setdefault(libxl__gc *gc,
                 LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN_TRADITIONAL;
         else {
             const char *dm;
-            int rc;
+            int rc = 0;
 
             b_info->device_model_version =
                 LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN;
-            dm = libxl__domain_device_model(gc, b_info);
+            dm = libxl__domain_device_model(gc, ~0, b_info);
             rc = access(dm, X_OK);
             if (rc < 0) {
                 /* qemu-xen unavailable, use qemu-xen-traditional */
@@ -651,11 +699,13 @@ static void initiate_domain_create(libxl__egc *egc,
     }
 
     dcs->guest_domid = domid;
-    dcs->dmss.dm.guest_domid = 0; /* means we haven't spawned */
 
     ret = libxl__domain_build_info_setdefault(gc, &d_config->b_info);
     if (ret) goto error_out;
 
+    ret = libxl__domain_config_setdefault(gc, d_config);
+    if (ret) goto error_out;
+
     if (!sched_params_valid(gc, domid, &d_config->b_info.sched_params)) {
         LOG(ERROR, "Invalid scheduling parameters\n");
         ret = ERROR_INVAL;
@@ -667,6 +717,15 @@ static void initiate_domain_create(libxl__egc *egc,
         if (ret) goto error_out;
     }
 
+    dcs->current_dmid = 0;
+    dcs->build_state.num_dms = d_config->num_dms;
+    GCNEW_ARRAY(dcs->dmss, d_config->num_dms);
+
+    for (i = 0; i < d_config->num_dms; i++) {
+        dcs->dmss[i].dm.guest_domid = 0; /* Means we haven't spawned */
+        dcs->dmss[i].dm.dcs = dcs;
+    }
+
     dcs->bl.ao = ao;
     libxl_device_disk *bootdisk =
         d_config->num_disks > 0 ? &d_config->disks[0] : NULL;
@@ -709,6 +768,26 @@ static void domcreate_console_available(libxl__egc *egc,
                                         dcs->guest_domid));
 }
 
+static void domcreate_spawn_devmodel(libxl__egc *egc,
+                                    libxl__domain_create_state *dcs,
+                                    libxl_dmid dmid)
+{
+    libxl__stub_dm_spawn_state *dmss = &dcs->dmss[dmid];
+    STATE_AO_GC(dcs->ao);
+
+    /* We might be going to call libxl__spawn_local_dm, or _spawn_stub_dm.
+     * Fill in any field required by either, including both relevant
+     * callbacks (_spawn_stub_dm will overwrite our trespass if needed). */
+    dmss->dm.spawn.ao = ao;
+    dmss->dm.guest_config = dcs->guest_config;
+    dmss->dm.build_state = &dcs->build_state;
+    dmss->dm.callback = domcreate_devmodel_started;
+    dmss->callback = domcreate_devmodel_started;
+    dmss->dm.dmid = dmid;
+
+    libxl__spawn_dm(egc, dmss);
+}
+
 static void domcreate_bootloader_done(libxl__egc *egc,
                                       libxl__bootloader_state *bl,
                                       int rc)
@@ -735,15 +814,6 @@ static void domcreate_bootloader_done(libxl__egc *egc,
      */
     state->pv_cmdline = bl->cmdline;
 
-    /* We might be going to call libxl__spawn_local_dm, or _spawn_stub_dm.
-     * Fill in any field required by either, including both relevant
-     * callbacks (_spawn_stub_dm will overwrite our trespass if needed). */
-    dcs->dmss.dm.spawn.ao = ao;
-    dcs->dmss.dm.guest_config = dcs->guest_config;
-    dcs->dmss.dm.build_state = &dcs->build_state;
-    dcs->dmss.dm.callback = domcreate_devmodel_started;
-    dcs->dmss.callback = domcreate_devmodel_started;
-
     if ( restore_fd < 0 ) {
         rc = libxl__domain_build(gc, &d_config->b_info, domid, state);
         domcreate_rebuild_done(egc, dcs, rc);
@@ -962,11 +1032,7 @@ static void domcreate_launch_dm(libxl__egc *egc, libxl__multidev *multidev,
         libxl__device_vkb_add(gc, domid, &vkb);
         libxl_device_vkb_dispose(&vkb);
 
-        dcs->dmss.dm.guest_domid = domid;
-        if (libxl_defbool_val(d_config->b_info.device_model_stubdomain))
-            libxl__spawn_stub_dm(egc, &dcs->dmss);
-        else
-            libxl__spawn_local_dm(egc, &dcs->dmss.dm);
+        domcreate_spawn_devmodel(egc, dcs, dcs->current_dmid);
         return;
     }
     case LIBXL_DOMAIN_TYPE_PV:
@@ -991,12 +1057,11 @@ static void domcreate_launch_dm(libxl__egc *egc, libxl__multidev *multidev,
         libxl__device_console_dispose(&console);
 
         if (need_qemu) {
-            dcs->dmss.dm.guest_domid = domid;
-            libxl__spawn_local_dm(egc, &dcs->dmss.dm);
+            assert(dcs->dmss);
+            domcreate_spawn_devmodel(egc, dcs, dcs->current_dmid);
             return;
         } else {
-            assert(!dcs->dmss.dm.guest_domid);
-            domcreate_devmodel_started(egc, &dcs->dmss.dm, 0);
+            assert(!dcs->dmss);
             return;
         }
     }
@@ -1015,7 +1080,7 @@ static void domcreate_devmodel_started(libxl__egc *egc,
                                        libxl__dm_spawn_state *dmss,
                                        int ret)
 {
-    libxl__domain_create_state *dcs = CONTAINER_OF(dmss, *dcs, dmss.dm);
+    libxl__domain_create_state *dcs = dmss->dcs;
     STATE_AO_GC(dmss->spawn.ao);
     libxl_ctx *ctx = CTX;
     int domid = dcs->guest_domid;
@@ -1029,15 +1094,15 @@ static void domcreate_devmodel_started(libxl__egc *egc,
         goto error_out;
     }
 
-    if (dcs->dmss.dm.guest_domid) {
+    if (dmss->guest_domid) {
         if (d_config->b_info.device_model_version
             == LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN) {
-            libxl__qmp_initializations(gc, domid, d_config);
+            libxl__qmp_initializations(gc, domid, dmss->dmid, d_config);
         }
     }
 
     /* Plug nic interfaces */
-    if (d_config->num_nics > 0) {
+    if (d_config->num_nics > 0 && dmss->dmid == 0) {
         /* Attach nics */
         libxl__multidev_begin(ao, &dcs->multidev);
         dcs->multidev.callback = domcreate_attach_pci;
@@ -1071,23 +1136,34 @@ static void domcreate_attach_pci(libxl__egc *egc, libxl__multidev *multidev,
         goto error_out;
     }
 
-    for (i = 0; i < d_config->num_pcidevs; i++)
-        libxl__device_pci_add(gc, domid, &d_config->pcidevs[i], 1);
+    /* TO FIX: for the moment only add to device model 0 */
 
-    if (d_config->num_pcidevs > 0) {
-        ret = libxl__create_pci_backend(gc, domid, d_config->pcidevs,
-            d_config->num_pcidevs);
-        if (ret < 0) {
-            LIBXL__LOG(ctx, LIBXL__LOG_ERROR,
-                "libxl_create_pci_backend failed: %d", ret);
-            goto error_out;
+    if (dcs->current_dmid == 0) {
+        for (i = 0; i < d_config->num_pcidevs; i++)
+            libxl__device_pci_add(gc, domid,
+                                  &d_config->pcidevs[i], 1);
+
+        if (d_config->num_pcidevs > 0) {
+            ret = libxl__create_pci_backend(gc, domid, d_config->pcidevs,
+                d_config->num_pcidevs);
+            if (ret < 0) {
+                LIBXL__LOG(ctx, LIBXL__LOG_ERROR,
+                    "libxl_create_pci_backend failed: %d", ret);
+                goto error_out;
+            }
         }
     }
 
-    libxl__arch_domain_create(gc, d_config, domid);
-    domcreate_console_available(egc, dcs);
+    dcs->current_dmid++;
+
+    if (dcs->current_dmid >= dcs->guest_config->num_dms) {
+        libxl__arch_domain_create(gc, d_config, domid);
+        domcreate_console_available(egc, dcs);
+        domcreate_complete(egc, dcs, 0);
+    } else {
+        domcreate_spawn_devmodel(egc, dcs, dcs->current_dmid);
+    }
 
-    domcreate_complete(egc, dcs, 0);
     return;
 
 error_out:
diff --git a/tools/libxl/libxl_device.c b/tools/libxl/libxl_device.c
index 8e8410e..798665e 100644
--- a/tools/libxl/libxl_device.c
+++ b/tools/libxl/libxl_device.c
@@ -1034,8 +1034,8 @@ static void devices_remove_callback(libxl__egc *egc,
     return;
 }
 
-int libxl__wait_for_device_model(libxl__gc *gc,
-                                 uint32_t domid, char *state,
+int libxl__wait_for_device_model(libxl__gc *gc, libxl_domid domid,
+                                 libxl_dmid dmid, char *state,
                                  libxl__spawn_starting *spawning,
                                  int (*check_callback)(libxl__gc *gc,
                                                        uint32_t domid,
@@ -1044,7 +1044,8 @@ int libxl__wait_for_device_model(libxl__gc *gc,
                                  void *check_callback_userdata)
 {
     char *path;
-    path = libxl__sprintf(gc, "/local/domain/0/device-model/%d/state", domid);
+    path = libxl__sprintf(gc, "/local/domain/0/dms/%u/%u/state",
+                          domid, dmid);
     return libxl__wait_for_offspring(gc, domid,
                                      LIBXL_DEVICE_MODEL_START_TIMEOUT,
                                      "Device Model", path, state, spawning,
diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
index 0c0084f..de7138f 100644
--- a/tools/libxl/libxl_dm.c
+++ b/tools/libxl/libxl_dm.c
@@ -28,24 +28,30 @@ static const char *libxl_tapif_script(libxl__gc *gc)
 #endif
 }
 
-const char *libxl__device_model_savefile(libxl__gc *gc, uint32_t domid)
+const char *libxl__device_model_savefile(libxl__gc *gc, libxl_domid domid,
+                                         libxl_dmid dmid)
 {
-    return libxl__sprintf(gc, "/var/lib/xen/qemu-save.%d", domid);
+    return libxl__sprintf(gc, "/var/lib/xen/qemu-save.%u.%u", domid, dmid);
 }
 
 const char *libxl__domain_device_model(libxl__gc *gc,
-                                       const libxl_domain_build_info *info)
+                                       libxl_dmid dmid,
+                                       const libxl_domain_build_info *b_info)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
     const char *dm;
+    libxl_domain_config *guest_config = CONTAINER_OF(b_info, *guest_config,
+                                                     b_info);
 
-    if (libxl_defbool_val(info->device_model_stubdomain))
+    if (libxl_defbool_val(guest_config->b_info.device_model_stubdomain))
         return NULL;
 
-    if (info->device_model) {
-        dm = libxl__strdup(gc, info->device_model);
+    if (dmid < guest_config->num_dms && guest_config->dms[dmid].path) {
+        dm = libxl__strdup(gc, guest_config->dms[dmid].path);
+    } else if (guest_config->b_info.device_model) {
+        dm = libxl__strdup(gc, guest_config->b_info.device_model);
     } else {
-        switch (info->device_model_version) {
+        switch (guest_config->b_info.device_model_version) {
         case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN_TRADITIONAL:
             dm = libxl__abs_path(gc, "qemu-dm", libxl__libexec_path());
             break;
@@ -55,7 +61,7 @@ const char *libxl__domain_device_model(libxl__gc *gc,
         default:
             LIBXL__LOG(ctx, LIBXL__LOG_ERROR,
                        "invalid device model version %d\n",
-                       info->device_model_version);
+                       guest_config->b_info.device_model_version);
             dm = NULL;
             break;
         }
@@ -63,7 +69,8 @@ const char *libxl__domain_device_model(libxl__gc *gc,
     return dm;
 }
 
-const libxl_vnc_info *libxl__dm_vnc(const libxl_domain_config *guest_config)
+const libxl_vnc_info *libxl__dm_vnc(libxl_dmid dmid,
+                                    const libxl_domain_config *guest_config)
 {
     const libxl_vnc_info *vnc = NULL;
     if (guest_config->b_info.type == LIBXL_DOMAIN_TYPE_HVM) {
@@ -103,7 +110,7 @@ static char ** libxl__build_device_model_args_old(libxl__gc *gc,
     const libxl_domain_create_info *c_info = &guest_config->c_info;
     const libxl_domain_build_info *b_info = &guest_config->b_info;
     const libxl_device_nic *nics = guest_config->nics;
-    const libxl_vnc_info *vnc = libxl__dm_vnc(guest_config);
+    const libxl_vnc_info *vnc = libxl__dm_vnc(0, guest_config);
     const libxl_sdl_info *sdl = dm_sdl(guest_config);
     const int num_nics = guest_config->num_nics;
     const char *keymap = dm_keymap(guest_config);
@@ -321,24 +328,58 @@ static char *dm_spice_options(libxl__gc *gc,
     return opt;
 }
 
+static int libxl__dm_has_vif(const char *vifname, libxl_dmid dmid,
+                             const libxl_domain_config *guest_config)
+{
+    const libxl_dm *dm_config = &guest_config->dms[dmid];
+    int i = 0;
+
+    if (!vifname && (dm_config->capabilities & LIBXL_DM_CAP_UI))
+        return 1;
+
+    if (!dm_config->vifs)
+        return 0;
+
+    for (i = 0; dm_config->vifs[i]; i++) {
+        if (!strcmp(dm_config->vifs[i], vifname))
+            return 1;
+    }
+
+    return 0;
+}
+
 static char ** libxl__build_device_model_args_new(libxl__gc *gc,
-                                        const char *dm, int guest_domid,
+                                        const char *dm, libxl_domid guest_domid,
+                                        libxl_dmid dmid,
                                         const libxl_domain_config *guest_config,
                                         const libxl__domain_build_state *state)
 {
+    /*
+     * PCI device number. Slots below 3 hold the IDE, ISA, SouthBridge
+     * and Xen PCI devices. These devices are emulated in each QEMU,
+     * but only one QEMU (the one which emulates the default devices)
+     * registers them through the Xen PCI hypercall.
+     */
+    static unsigned int bdf = 3;
+
     libxl_ctx *ctx = libxl__gc_owner(gc);
     const libxl_domain_create_info *c_info = &guest_config->c_info;
     const libxl_domain_build_info *b_info = &guest_config->b_info;
+    const libxl_dm *dm_config = &guest_config->dms[dmid];
     const libxl_device_disk *disks = guest_config->disks;
     const libxl_device_nic *nics = guest_config->nics;
     const int num_disks = guest_config->num_disks;
     const int num_nics = guest_config->num_nics;
-    const libxl_vnc_info *vnc = libxl__dm_vnc(guest_config);
+    const libxl_vnc_info *vnc = libxl__dm_vnc(dmid, guest_config);
     const libxl_sdl_info *sdl = dm_sdl(guest_config);
     const char *keymap = dm_keymap(guest_config);
     flexarray_t *dm_args;
     int i;
     uint64_t ram_size;
+    uint32_t cap_ui = dm_config->capabilities & LIBXL_DM_CAP_UI;
+    uint32_t cap_ide = dm_config->capabilities & LIBXL_DM_CAP_IDE;
+    uint32_t cap_serial = dm_config->capabilities & LIBXL_DM_CAP_SERIAL;
+    uint32_t cap_audio = dm_config->capabilities & LIBXL_DM_CAP_AUDIO;
 
     dm_args = flexarray_make(16, 1);
     if (!dm_args)
@@ -348,11 +389,12 @@ static char ** libxl__build_device_model_args_new(libxl__gc *gc,
                       "-xen-domid",
                       libxl__sprintf(gc, "%d", guest_domid), NULL);
 
+    flexarray_append(dm_args, "-nodefaults");
     flexarray_append(dm_args, "-chardev");
     flexarray_append(dm_args,
                      libxl__sprintf(gc, "socket,id=libxl-cmd,"
-                                    "path=%s/qmp-libxl-%d,server,nowait",
-                                    libxl__run_dir_path(), guest_domid));
+                                    "path=%s/qmp-libxl-%u-%u,server,nowait",
+                                    libxl__run_dir_path(), guest_domid, dmid));
 
     flexarray_append(dm_args, "-mon");
     flexarray_append(dm_args, "chardev=libxl-cmd,mode=control");
@@ -364,7 +406,8 @@ static char ** libxl__build_device_model_args_new(libxl__gc *gc,
     if (c_info->name) {
         flexarray_vappend(dm_args, "-name", c_info->name, NULL);
     }
-    if (vnc) {
+
+    if (vnc && cap_ui) {
         int display = 0;
         const char *listen = "127.0.0.1";
         char *vncarg = NULL;
@@ -395,7 +438,7 @@ static char ** libxl__build_device_model_args_new(libxl__gc *gc,
         }
         flexarray_append(dm_args, vncarg);
     }
-    if (sdl) {
+    if (sdl && cap_ui) {
         flexarray_append(dm_args, "-sdl");
         /* XXX sdl->{display,xauthority} into $DISPLAY/$XAUTHORITY */
     }
@@ -411,13 +454,27 @@ static char ** libxl__build_device_model_args_new(libxl__gc *gc,
     if (b_info->type == LIBXL_DOMAIN_TYPE_HVM) {
         int ioemu_nics = 0;
 
-        if (b_info->u.hvm.serial) {
+        if (b_info->u.hvm.serial && cap_serial) {
             flexarray_vappend(dm_args, "-serial", b_info->u.hvm.serial, NULL);
         }
 
-        if (libxl_defbool_val(b_info->u.hvm.nographic) && (!sdl && !vnc)) {
+        if ((libxl_defbool_val(b_info->u.hvm.nographic) && (!sdl && !vnc))
+            || !cap_ui) {
             flexarray_append(dm_args, "-nographic");
         }
+        else {
+            switch (b_info->u.hvm.vga.kind) {
+            case LIBXL_VGA_INTERFACE_TYPE_STD:
+                flexarray_vappend(dm_args, "-device",
+                                  GCSPRINTF("VGA,addr=%u", bdf++), NULL);
+                break;
+            case LIBXL_VGA_INTERFACE_TYPE_CIRRUS:
+                flexarray_vappend(dm_args, "-device",
+                                  GCSPRINTF("cirrus-vga,addr=%u", bdf++),
+                                  NULL);
+                break;
+            }
+        }
 
         if (libxl_defbool_val(b_info->u.hvm.spice.enable)) {
             const libxl_spice_info *spice = &b_info->u.hvm.spice;
@@ -429,27 +486,19 @@ static char ** libxl__build_device_model_args_new(libxl__gc *gc,
             flexarray_append(dm_args, spiceoptions);
         }
 
-        switch (b_info->u.hvm.vga.kind) {
-        case LIBXL_VGA_INTERFACE_TYPE_STD:
-            flexarray_vappend(dm_args, "-vga", "std", NULL);
-            break;
-        case LIBXL_VGA_INTERFACE_TYPE_CIRRUS:
-            flexarray_vappend(dm_args, "-vga", "cirrus", NULL);
-            break;
-        }
-
         if (b_info->u.hvm.boot) {
             flexarray_vappend(dm_args, "-boot",
                     libxl__sprintf(gc, "order=%s", b_info->u.hvm.boot), NULL);
         }
-        if (libxl_defbool_val(b_info->u.hvm.usb) || b_info->u.hvm.usbdevice) {
+        if ((libxl_defbool_val(b_info->u.hvm.usb) || b_info->u.hvm.usbdevice)
+            && cap_ui) {
             flexarray_append(dm_args, "-usb");
             if (b_info->u.hvm.usbdevice) {
                 flexarray_vappend(dm_args,
                                   "-usbdevice", b_info->u.hvm.usbdevice, NULL);
             }
         }
-        if (b_info->u.hvm.soundhw) {
+        if (b_info->u.hvm.soundhw && cap_audio) {
             flexarray_vappend(dm_args, "-soundhw", b_info->u.hvm.soundhw, NULL);
         }
         if (!libxl_defbool_val(b_info->u.hvm.acpi)) {
@@ -469,7 +518,8 @@ static char ** libxl__build_device_model_args_new(libxl__gc *gc,
                                                          b_info->max_vcpus));
         }
         for (i = 0; i < num_nics; i++) {
-            if (nics[i].nictype == LIBXL_NIC_TYPE_VIF_IOEMU) {
+            if (nics[i].nictype == LIBXL_NIC_TYPE_VIF_IOEMU
+                && libxl__dm_has_vif(nics[i].id, dmid, guest_config)) {
                 char *smac = libxl__sprintf(gc,
                                 LIBXL_MAC_FMT, LIBXL_MAC_BYTES(nics[i].mac));
                 const char *ifname = libxl__device_nic_devname(gc,
@@ -477,9 +527,9 @@ static char ** libxl__build_device_model_args_new(libxl__gc *gc,
                                                 LIBXL_NIC_TYPE_VIF_IOEMU);
                 flexarray_append(dm_args, "-device");
                 flexarray_append(dm_args,
-                   libxl__sprintf(gc, "%s,id=nic%d,netdev=net%d,mac=%s",
-                                                nics[i].model, nics[i].devid,
-                                                nics[i].devid, smac));
+                            GCSPRINTF("%s,id=nic%d,netdev=net%d,mac=%s,addr=%u",
+                                      nics[i].model, nics[i].devid,
+                                      nics[i].devid, smac, bdf++));
                 flexarray_append(dm_args, "-netdev");
                 flexarray_append(dm_args, GCSPRINTF(
                                           "type=tap,id=net%d,ifname=%s,"
@@ -495,7 +545,7 @@ static char ** libxl__build_device_model_args_new(libxl__gc *gc,
             flexarray_append(dm_args, "-net");
             flexarray_append(dm_args, "none");
         }
-        if (libxl_defbool_val(b_info->u.hvm.gfx_passthru)) {
+        if (libxl_defbool_val(b_info->u.hvm.gfx_passthru) && cap_ui) {
             flexarray_append(dm_args, "-gfx_passthru");
         }
     } else {
@@ -506,13 +556,14 @@ static char ** libxl__build_device_model_args_new(libxl__gc *gc,
 
     if (state->saved_state) {
         /* This file descriptor is meant to be used by QEMU */
-        int migration_fd = open(state->saved_state, O_RDONLY);
+        int migration_fd = open(libxl__sprintf(gc, "%s.%u", state->saved_state,
+                                               dmid), O_RDONLY);
         flexarray_append(dm_args, "-incoming");
         flexarray_append(dm_args, libxl__sprintf(gc, "fd:%d", migration_fd));
     }
     for (i = 0; b_info->extra && b_info->extra[i] != NULL; i++)
         flexarray_append(dm_args, b_info->extra[i]);
-    flexarray_append(dm_args, "-M");
+    flexarray_append(dm_args, "-machine");
     switch (b_info->type) {
     case LIBXL_DOMAIN_TYPE_PV:
         flexarray_append(dm_args, "xenpv");
@@ -520,7 +571,11 @@ static char ** libxl__build_device_model_args_new(libxl__gc *gc,
             flexarray_append(dm_args, b_info->extra_pv[i]);
         break;
     case LIBXL_DOMAIN_TYPE_HVM:
-        flexarray_append(dm_args, "xenfv");
+        flexarray_append(dm_args,
+                         libxl__sprintf(gc,
+                                        "xenfv,xen_dmid=%u,xen_default_dev=%s,xen_emulate_ide=%s",
+                                        dmid, (cap_ui) ? "on" : "off",
+                                        (cap_ide) ? "on" : "off"));
         for (i = 0; b_info->extra_hvm && b_info->extra_hvm[i] != NULL; i++)
             flexarray_append(dm_args, b_info->extra_hvm[i]);
         break;
@@ -528,65 +583,69 @@ static char ** libxl__build_device_model_args_new(libxl__gc *gc,
         abort();
     }
 
-    ram_size = libxl__sizekb_to_mb(b_info->max_memkb - b_info->video_memkb);
+    /* Allocate an extra 32MB of RAM per preceding device model to store ROMs */
+    ram_size = libxl__sizekb_to_mb(b_info->max_memkb - b_info->video_memkb)
+        + 32 * dmid;
     flexarray_append(dm_args, "-m");
     flexarray_append(dm_args, libxl__sprintf(gc, "%"PRId64, ram_size));
 
     if (b_info->type == LIBXL_DOMAIN_TYPE_HVM) {
-        for (i = 0; i < num_disks; i++) {
-            int disk, part;
-            int dev_number =
-                libxl__device_disk_dev_number(disks[i].vdev, &disk, &part);
-            const char *format = qemu_disk_format_string(disks[i].format);
-            char *drive;
-
-            if (dev_number == -1) {
-                LIBXL__LOG(ctx, LIBXL__LOG_WARNING, "unable to determine"
-                           " disk number for %s", disks[i].vdev);
-                continue;
-            }
-
-            if (disks[i].is_cdrom) {
-                if (disks[i].format == LIBXL_DISK_FORMAT_EMPTY)
-                    drive = libxl__sprintf
-                        (gc, "if=ide,index=%d,media=cdrom", disk);
-                else
-                    drive = libxl__sprintf
-                        (gc, "file=%s,if=ide,index=%d,media=cdrom,format=%s",
-                         disks[i].pdev_path, disk, format);
-            } else {
-                if (disks[i].format == LIBXL_DISK_FORMAT_EMPTY) {
-                    LIBXL__LOG(ctx, LIBXL__LOG_WARNING, "cannot support"
-                               " empty disk format for %s", disks[i].vdev);
+        if (cap_ide) {
+            for (i = 0; i < num_disks; i++) {
+                int disk, part;
+                int dev_number =
+                    libxl__device_disk_dev_number(disks[i].vdev, &disk, &part);
+                const char *format = qemu_disk_format_string(disks[i].format);
+                char *drive;
+
+                if (dev_number == -1) {
+                    LIBXL__LOG(ctx, LIBXL__LOG_WARNING, "unable to determine"
+                               " disk number for %s", disks[i].vdev);
                     continue;
                 }
 
-                if (format == NULL) {
-                    LIBXL__LOG(ctx, LIBXL__LOG_WARNING, "unable to determine"
-                               " disk image format %s", disks[i].vdev);
-                    continue;
+                if (disks[i].is_cdrom) {
+                    if (disks[i].format == LIBXL_DISK_FORMAT_EMPTY)
+                        drive = libxl__sprintf
+                            (gc, "if=ide,index=%d,media=cdrom", disk);
+                    else
+                        drive = libxl__sprintf
+                            (gc, "file=%s,if=ide,index=%d,media=cdrom,format=%s",
+                             disks[i].pdev_path, disk, format);
+                } else {
+                    if (disks[i].format == LIBXL_DISK_FORMAT_EMPTY) {
+                        LIBXL__LOG(ctx, LIBXL__LOG_WARNING, "cannot support"
+                                   " empty disk format for %s", disks[i].vdev);
+                        continue;
+                    }
+
+                    if (format == NULL) {
+                        LIBXL__LOG(ctx, LIBXL__LOG_WARNING, "unable to determine"
+                                   " disk image format %s", disks[i].vdev);
+                        continue;
+                    }
+
+                    /*
+                     * Explicit sd disks are passed through as is.
+                     *
+                     * For other disks we translate devices 0..3 into
+                     * hd[a-d] and ignore the rest.
+                     */
+                    if (strncmp(disks[i].vdev, "sd", 2) == 0)
+                        drive = libxl__sprintf
+                            (gc, "file=%s,if=scsi,bus=0,unit=%d,format=%s",
+                             disks[i].pdev_path, disk, format);
+                    else if (disk < 4)
+                        drive = libxl__sprintf
+                            (gc, "file=%s,if=ide,index=%d,media=disk,format=%s",
+                             disks[i].pdev_path, disk, format);
+                    else
+                        continue; /* Do not emulate this disk */
                 }
 
-                /*
-                 * Explicit sd disks are passed through as is.
-                 *
-                 * For other disks we translate devices 0..3 into
-                 * hd[a-d] and ignore the rest.
-                 */
-                if (strncmp(disks[i].vdev, "sd", 2) == 0)
-                    drive = libxl__sprintf
-                        (gc, "file=%s,if=scsi,bus=0,unit=%d,format=%s",
-                         disks[i].pdev_path, disk, format);
-                else if (disk < 4)
-                    drive = libxl__sprintf
-                        (gc, "file=%s,if=ide,index=%d,media=disk,format=%s",
-                         disks[i].pdev_path, disk, format);
-                else
-                    continue; /* Do not emulate this disk */
+                flexarray_append(dm_args, "-drive");
+                flexarray_append(dm_args, drive);
             }
-
-            flexarray_append(dm_args, "-drive");
-            flexarray_append(dm_args, drive);
         }
     }
     flexarray_append(dm_args, NULL);
@@ -594,7 +653,9 @@ static char ** libxl__build_device_model_args_new(libxl__gc *gc,
 }
 
 static char ** libxl__build_device_model_args(libxl__gc *gc,
-                                        const char *dm, int guest_domid,
+                                        const char *dm,
+                                        libxl_domid guest_domid,
+                                        libxl_dmid dmid,
                                         const libxl_domain_config *guest_config,
                                         const libxl__domain_build_state *state)
 {
@@ -607,8 +668,8 @@ static char ** libxl__build_device_model_args(libxl__gc *gc,
                                                   state);
     case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN:
         return libxl__build_device_model_args_new(gc, dm,
-                                                  guest_domid, guest_config,
-                                                  state);
+                                                  guest_domid, dmid,
+                                                  guest_config, state);
     default:
         LIBXL__LOG_ERRNO(ctx, LIBXL__LOG_ERROR, "unknown device model version %d",
                          guest_config->b_info.device_model_version);
@@ -729,7 +790,8 @@ char *libxl__stub_dm_name(libxl__gc *gc, const char *guest_name)
     return libxl__sprintf(gc, "%s-dm", guest_name);
 }
 
-void libxl__spawn_stub_dm(libxl__egc *egc, libxl__stub_dm_spawn_state *sdss)
+static void libxl__spawn_stub_dm(libxl__egc *egc,
+                                 libxl__stub_dm_spawn_state *sdss)
 {
     STATE_AO_GC(sdss->dm.spawn.ao);
     libxl_ctx *ctx = libxl__gc_owner(gc);
@@ -815,7 +877,7 @@ void libxl__spawn_stub_dm(libxl__egc *egc, libxl__stub_dm_spawn_state *sdss)
     if (ret)
         goto out;
 
-    args = libxl__build_device_model_args(gc, "stubdom-dm", guest_domid,
+    args = libxl__build_device_model_args(gc, "stubdom-dm", guest_domid, 0,
                                           guest_config, d_state);
     if (!args) {
         ret = ERROR_FAIL;
@@ -871,12 +933,16 @@ out:
     spawn_stubdom_pvqemu_cb(egc, &sdss->pvqemu, ret);
 }
 
+static void libxl__spawn_local_dm(libxl__egc *egc,
+                                  libxl__dm_spawn_state *sdss);
+
 static void spawn_stub_launch_dm(libxl__egc *egc,
                                  libxl__multidev *multidev, int ret)
 {
     libxl__stub_dm_spawn_state *sdss = CONTAINER_OF(multidev, *sdss, multidev);
     STATE_AO_GC(sdss->dm.spawn.ao);
     libxl_ctx *ctx = libxl__gc_owner(gc);
+    libxl_dmid dmid = sdss->dm.dmid;
     int i, num_console = STUBDOM_SPECIAL_CONSOLES;
     libxl__device_console *console;
 
@@ -937,7 +1003,8 @@ static void spawn_stub_launch_dm(libxl__egc *egc,
                 break;
             case STUBDOM_CONSOLE_SAVE:
                 console[i].output = libxl__sprintf(gc, "file:%s",
-                                libxl__device_model_savefile(gc, guest_domid));
+                                libxl__device_model_savefile(gc, guest_domid,
+                                                             dmid));
                 break;
             case STUBDOM_CONSOLE_RESTORE:
                 if (d_state->saved_state)
@@ -1049,10 +1116,11 @@ static void device_model_spawn_outcome(libxl__egc *egc,
                                        libxl__dm_spawn_state *dmss,
                                        int rc);
 
-void libxl__spawn_local_dm(libxl__egc *egc, libxl__dm_spawn_state *dmss)
+static void libxl__spawn_local_dm(libxl__egc *egc, libxl__dm_spawn_state *dmss)
 {
     /* convenience aliases */
     const int domid = dmss->guest_domid;
+    const libxl_dmid dmid = dmss->dmid;
     libxl__domain_build_state *const state = dmss->build_state;
     libxl__spawn_state *const spawn = &dmss->spawn;
 
@@ -1062,7 +1130,8 @@ void libxl__spawn_local_dm(libxl__egc *egc, libxl__dm_spawn_state *dmss)
     libxl_domain_config *guest_config = dmss->guest_config;
     const libxl_domain_create_info *c_info = &guest_config->c_info;
     const libxl_domain_build_info *b_info = &guest_config->b_info;
-    const libxl_vnc_info *vnc = libxl__dm_vnc(guest_config);
+    const libxl_vnc_info *vnc = libxl__dm_vnc(dmid, guest_config);
+    const libxl_dm *dm_config = &guest_config->dms[dmid];
     char *path, *logfile;
     int logfile_w, null;
     int rc;
@@ -1071,12 +1140,13 @@ void libxl__spawn_local_dm(libxl__egc *egc, libxl__dm_spawn_state *dmss)
     char *vm_path;
     char **pass_stuff;
     const char *dm;
+    const char *name;
 
     if (libxl_defbool_val(b_info->device_model_stubdomain)) {
         abort();
     }
 
-    dm = libxl__domain_device_model(gc, b_info);
+    dm = libxl__domain_device_model(gc, dmid, b_info);
     if (!dm) {
         rc = ERROR_FAIL;
         goto out;
@@ -1087,7 +1157,7 @@ void libxl__spawn_local_dm(libxl__egc *egc, libxl__dm_spawn_state *dmss)
         rc = ERROR_FAIL;
         goto out;
     }
-    args = libxl__build_device_model_args(gc, dm, domid, guest_config, state);
+    args = libxl__build_device_model_args(gc, dm, domid, dmid, guest_config, state);
     if (!args) {
         rc = ERROR_FAIL;
         goto out;
@@ -1101,7 +1171,7 @@ void libxl__spawn_local_dm(libxl__egc *egc, libxl__dm_spawn_state *dmss)
         free(path);
     }
 
-    path = libxl__sprintf(gc, "/local/domain/0/device-model/%d", domid);
+    path = libxl__sprintf(gc, "/local/domain/0/dms/%u/%u", domid, dmid);
     xs_mkdir(ctx->xsh, XBT_NULL, path);
 
     if (b_info->type == LIBXL_DOMAIN_TYPE_HVM &&
@@ -1110,8 +1180,13 @@ void libxl__spawn_local_dm(libxl__egc *egc, libxl__dm_spawn_state *dmss)
         libxl__xs_write(gc, XBT_NULL, libxl__sprintf(gc, "%s/disable_pf", path),
                     "%d", !libxl_defbool_val(b_info->u.hvm.xen_platform_pci));
 
+
+    name = dm_config->name;
+    if (!name)
+        name = libxl__sprintf(gc, "%u", dmid);
+
     libxl_create_logfile(ctx,
-                         libxl__sprintf(gc, "qemu-dm-%s", c_info->name),
+                         libxl__sprintf(gc, "qemu-%s-%s", name, c_info->name),
                          &logfile);
     logfile_w = open(logfile, O_WRONLY|O_CREAT|O_APPEND, 0644);
     free(logfile);
@@ -1143,10 +1218,10 @@ retry_transaction:
     for (arg = args; *arg; arg++)
         LIBXL__LOG(CTX, XTL_DEBUG, "  %s", *arg);
 
-    spawn->what = GCSPRINTF("domain %d device model", domid);
-    spawn->xspath = GCSPRINTF("/local/domain/0/device-model/%d/state", domid);
+    spawn->what = GCSPRINTF("domain %d device model %s", domid, name);
+    spawn->xspath = GCSPRINTF("/local/domain/0/dms/%u/%u/state", domid, dmid);
     spawn->timeout_ms = LIBXL_DEVICE_MODEL_START_TIMEOUT * 1000;
-    spawn->pidpath = GCSPRINTF("%s/image/device-model-pid", dom_path);
+    spawn->pidpath = GCSPRINTF("%s/image/dms/%u-pid", dom_path, dmid);
     spawn->midproc_cb = libxl__spawn_record_pid;
     spawn->confirm_cb = device_model_confirm;
     spawn->failure_cb = device_model_startup_failed;
@@ -1171,6 +1246,32 @@ out:
         device_model_spawn_outcome(egc, dmss, rc);
 }
 
+void libxl__spawn_dm(libxl__egc *egc, libxl__stub_dm_spawn_state *dmss)
+{
+    libxl__domain_create_state *dcs = dmss->dm.dcs;
+    libxl_domain_config *const d_config = dcs->guest_config;
+    STATE_AO_GC(dmss->dm.spawn.ao);
+
+    switch (d_config->c_info.type) {
+    case LIBXL_DOMAIN_TYPE_HVM:
+    {
+        dmss->dm.guest_domid = dcs->guest_domid;
+        if (libxl_defbool_val(d_config->b_info.device_model_stubdomain))
+            libxl__spawn_stub_dm(egc, dmss);
+        else
+            libxl__spawn_local_dm(egc, &dmss->dm);
+        break;
+    }
+    case LIBXL_DOMAIN_TYPE_PV:
+    {
+        dmss->dm.guest_domid = dcs->guest_domid;
+        libxl__spawn_local_dm(egc, &dmss->dm);
+        break;
+    }
+    default:
+        LIBXL__LOG(CTX, XTL_ERROR, "Unknown domain type %u", d_config->c_info.type);
+    }
+}
 
 static void device_model_confirm(libxl__egc *egc, libxl__spawn_state *spawn,
                                  const char *xsdata)
@@ -1207,6 +1308,7 @@ static void device_model_spawn_outcome(libxl__egc *egc,
 {
     STATE_AO_GC(dmss->spawn.ao);
     int ret2;
+    char *filename;
 
     if (rc)
         LOG(ERROR, "%s: spawn failed (rc=%d)", dmss->spawn.what, rc);
@@ -1214,10 +1316,11 @@ static void device_model_spawn_outcome(libxl__egc *egc,
     libxl__domain_build_state *state = dmss->build_state;
 
     if (state->saved_state) {
-        ret2 = unlink(state->saved_state);
+        filename = GCSPRINTF("%s.%u", state->saved_state, dmss->dmid);
+        ret2 = unlink(filename);
         if (ret2) {
             LOGE(ERROR, "%s: failed to remove device-model state %s",
-                 dmss->spawn.what, state->saved_state);
+                 dmss->spawn.what, filename);
             rc = ERROR_FAIL;
             goto out;
         }
@@ -1229,12 +1332,14 @@ static void device_model_spawn_outcome(libxl__egc *egc,
     dmss->callback(egc, dmss, rc);
 }
 
-int libxl__destroy_device_model(libxl__gc *gc, uint32_t domid)
+static int libxl__destroy_device_model(libxl__gc *gc, libxl_domid domid,
+                                       libxl_dmid dmid)
 {
     char *pid;
     int ret;
 
-    pid = libxl__xs_read(gc, XBT_NULL, libxl__sprintf(gc, "/local/domain/%d/image/device-model-pid", domid));
+    pid = libxl__xs_read(gc, XBT_NULL, GCSPRINTF("/local/domain/%u/image/dms/%u-pid",
+                                                 domid, dmid));
     if (!pid || !atoi(pid)) {
         LOG(ERROR, "could not find device-model's pid for dom %u", domid);
         ret = ERROR_FAIL;
@@ -1300,6 +1405,60 @@ out:
     return ret;
 }
 
+libxl_dmid *libxl__list_device_models(libxl__gc *gc, libxl_domid domid,
+                                      unsigned *num_dms)
+{
+    unsigned int i = 0;
+    char **dir = NULL;
+    libxl_dmid *dms = NULL;
+    unsigned int num = 0;
+
+    dir = libxl__xs_directory(gc, XBT_NULL,
+                              GCSPRINTF("/local/domain/0/dms/%u", domid),
+                              &num);
+    if (dir) {
+        GCNEW_ARRAY(dms, num);
+
+        if (num_dms)
+            *num_dms = num;
+
+        for (i = 0; i < num; i++) {
+            dms[i] = atoi(dir[i]);
+        }
+
+        return dms;
+    }
+    else
+        return NULL;
+}
+
+int libxl__destroy_device_models(libxl__gc *gc,
+                                 libxl_domid domid)
+{
+    libxl_ctx *ctx = libxl__gc_owner(gc);
+    int ret = 0;
+    libxl_dmid *dms = NULL;
+    unsigned int num_dms;
+    unsigned int i;
+
+    dms = libxl__list_device_models(gc, domid, &num_dms);
+
+    if (!dms)
+        return ERROR_FAIL;
+
+    for (i = 0; i < num_dms; i++)
+        ret |= libxl__destroy_device_model(gc, domid, dms[i]);
+
+    if (!ret) {
+        xs_rm(ctx->xsh, XBT_NULL, libxl__sprintf(gc, "/local/domain/0/dms/%u",
+                                                 domid));
+        xs_rm(ctx->xsh, XBT_NULL, libxl__sprintf(gc, "/local/domain/0/device-model/%u",
+                                                 domid));
+    }
+
+    return ret;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index 06d5e4f..475fea8 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -544,6 +544,7 @@ int libxl__build_hvm(libxl__gc *gc, uint32_t domid,
     libxl_ctx *ctx = libxl__gc_owner(gc);
     int ret, rc = ERROR_FAIL;
     const char *firmware = libxl__domain_firmware(gc, info);
+    libxl_domain_config *d_config = CONTAINER_OF(info, *d_config, b_info);
 
     if (!firmware)
         goto out;
@@ -552,7 +553,8 @@ int libxl__build_hvm(libxl__gc *gc, uint32_t domid,
         domid,
         (info->max_memkb - info->video_memkb) / 1024,
         (info->target_memkb - info->video_memkb) / 1024,
-        firmware);
+        firmware,
+        state->num_dms * 2 + 1);
     if (ret) {
         LIBXL__LOG_ERRNOVAL(ctx, LIBXL__LOG_ERROR, ret, "hvm building failed");
         goto out;
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index 71e4970..2e6eedc 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -867,6 +867,7 @@ typedef struct {
     unsigned long console_mfn;
 
     unsigned long vm_generationid_addr;
+    unsigned long num_dms;
 
     char *saved_state;
 
@@ -887,7 +888,7 @@ _hidden int libxl__build_hvm(libxl__gc *gc, uint32_t domid,
               libxl_domain_build_info *info,
               libxl__domain_build_state *state);
 
-_hidden int libxl__qemu_traditional_cmd(libxl__gc *gc, uint32_t domid,
+_hidden int libxl__qemu_traditional_cmd(libxl__gc *gc, libxl_domid domid,
                                         const char *cmd);
 _hidden int libxl__domain_rename(libxl__gc *gc, uint32_t domid,
                                  const char *old_name, const char *new_name,
@@ -947,6 +948,8 @@ _hidden int libxl__domain_create_info_setdefault(libxl__gc *gc,
                                         libxl_domain_create_info *c_info);
 _hidden int libxl__domain_build_info_setdefault(libxl__gc *gc,
                                         libxl_domain_build_info *b_info);
+_hidden int libxl__dm_setdefault(libxl__gc *gc,
+                                 libxl_dm *dm);
 _hidden int libxl__device_disk_setdefault(libxl__gc *gc,
                                           libxl_device_disk *disk);
 _hidden int libxl__device_nic_setdefault(libxl__gc *gc, libxl_device_nic *nic,
@@ -1042,7 +1045,9 @@ _hidden char *libxl__devid_to_localdev(libxl__gc *gc, int devid);
 
 /* from libxl_pci */
 
-_hidden int libxl__device_pci_add(libxl__gc *gc, uint32_t domid, libxl_device_pci *pcidev, int starting);
+_hidden int libxl__device_pci_add(libxl__gc *gc, libxl_domid domid,
+                                  libxl_device_pci *pcidev,
+                                  int starting);
 _hidden int libxl__create_pci_backend(libxl__gc *gc, uint32_t domid,
                                       libxl_device_pci *pcidev, int num);
 _hidden int libxl__device_pci_destroy_all(libxl__gc *gc, uint32_t domid);
@@ -1272,6 +1277,7 @@ _hidden int libxl__domain_build(libxl__gc *gc,
 
 /* for device model creation */
 _hidden const char *libxl__domain_device_model(libxl__gc *gc,
+                                        libxl_dmid dmid,
                                         const libxl_domain_build_info *info);
 _hidden int libxl__need_xenpv_qemu(libxl__gc *gc,
         int nr_consoles, libxl__device_console *consoles,
@@ -1281,7 +1287,9 @@ _hidden int libxl__need_xenpv_qemu(libxl__gc *gc,
    * return pass *starting_r (which will be non-0) to
    * libxl__confirm_device_model_startup or libxl__detach_device_model. */
 _hidden int libxl__wait_for_device_model(libxl__gc *gc,
-                                uint32_t domid, char *state,
+                                libxl_domid domid,
+                                libxl_dmid dmid,
+                                char *state,
                                 libxl__spawn_starting *spawning,
                                 int (*check_callback)(libxl__gc *gc,
                                                       uint32_t domid,
@@ -1289,9 +1297,14 @@ _hidden int libxl__wait_for_device_model(libxl__gc *gc,
                                                       void *userdata),
                                 void *check_callback_userdata);
 
-_hidden int libxl__destroy_device_model(libxl__gc *gc, uint32_t domid);
+_hidden libxl_dmid *libxl__list_device_models(libxl__gc *gc,
+                                              libxl_domid domid,
+                                              unsigned int *num_dms);
 
-_hidden const libxl_vnc_info *libxl__dm_vnc(const libxl_domain_config *g_cfg);
+_hidden int libxl__destroy_device_models(libxl__gc *gc, libxl_domid domid);
+
+_hidden const libxl_vnc_info *libxl__dm_vnc(libxl_dmid dmid,
+                                            const libxl_domain_config *g_cfg);
 
 _hidden char *libxl__abs_path(libxl__gc *gc, const char *s, const char *path);
 
@@ -2427,10 +2440,10 @@ struct libxl__dm_spawn_state {
     libxl_domain_config *guest_config;
     libxl__domain_build_state *build_state; /* relates to guest_domid */
     libxl__dm_spawn_cb *callback;
+    libxl_dmid dmid;
+    struct libxl__domain_create_state *dcs;
 };
 
-_hidden void libxl__spawn_local_dm(libxl__egc *egc, libxl__dm_spawn_state*);
-
 /* Stubdom device models. */
 
 typedef struct {
@@ -2447,7 +2460,7 @@ typedef struct {
     libxl__multidev multidev;
 } libxl__stub_dm_spawn_state;
 
-_hidden void libxl__spawn_stub_dm(libxl__egc *egc, libxl__stub_dm_spawn_state*);
+_hidden void libxl__spawn_dm(libxl__egc *egc, libxl__stub_dm_spawn_state*);
 
 _hidden char *libxl__stub_dm_name(libxl__gc *gc, const char * guest_name);
 
@@ -2470,7 +2483,8 @@ struct libxl__domain_create_state {
     int guest_domid;
     libxl__domain_build_state build_state;
     libxl__bootloader_state bl;
-    libxl__stub_dm_spawn_state dmss;
+    libxl_dmid current_dmid;
+    libxl__stub_dm_spawn_state *dmss;
         /* If we're not doing stubdom, we use only dmss.dm,
          * for the non-stubdom device model. */
     libxl__save_helper_state shs;
@@ -2527,7 +2541,9 @@ _hidden void libxl__domain_save_device_model(libxl__egc *egc,
                                      libxl__domain_suspend_state *dss,
                                      libxl__save_device_model_cb *callback);
 
-_hidden const char *libxl__device_model_savefile(libxl__gc *gc, uint32_t domid);
+_hidden const char *libxl__device_model_savefile(libxl__gc *gc,
+                                                 libxl_domid domid,
+                                                 libxl_dmid dmid);
 
 
 /*
-- 
Julien Grall


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 18:55:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 18:55:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4G5W-0007D1-KW; Wed, 22 Aug 2012 18:55:34 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@citrix.com>) id 1T4G5U-0006xb-MA
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 18:55:32 +0000
X-Env-Sender: julien.grall@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1345661712!9781420!7
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjU4OTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7611 invoked from network); 22 Aug 2012 18:55:23 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 18:55:23 -0000
X-IronPort-AV: E=Sophos;i="4.80,809,1344225600"; d="scan'208";a="35484811"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 14:55:23 -0400
Received: from meteora.cam.xci-test.com (10.80.248.22) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 22 Aug 2012 14:55:22 -0400
From: Julien Grall <julien.grall@citrix.com>
To: qemu-devel@nongnu.org
Date: Wed, 22 Aug 2012 13:32:00 +0100
Message-ID: <557fe87e4a6c0defdc6549e23e8e5e7b2ebb7a9f.1345552068.git.julien.grall@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <cover.1345552068.git.julien.grall@citrix.com>
References: <cover.1345552068.git.julien.grall@citrix.com>
MIME-Version: 1.0
Cc: Julien Grall <julien.grall@citrix.com>, christian.limpach@gmail.com,
	Stefano.Stabellini@eu.citrix.com, xen-devel@lists.xen.org
Subject: [Xen-devel] [XEN][RFC PATCH V2 14/17] xl-parsing: Parse new
	device_models option
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add a new option, "device_models". It lets the user specify the capabilities
of each QEMU instance (ui, vifs, ...). This option only works with upstream
QEMU (qemu-xen).

For instance:
device_models= [ 'name=all,vifs=nic1', 'name=qvga,ui', 'name=qide,ide' ]

Each device model can also take a path argument, which overrides the default
binary path. This is useful for debugging.

Signed-off-by: Julien Grall <julien.grall@citrix.com>
---
 tools/libxl/Makefile     |    2 +-
 tools/libxl/libxlu_dm.c  |   96 ++++++++++++++++++++++++++++++++++++++++++++++
 tools/libxl/libxlutil.h  |    5 ++
 tools/libxl/xl_cmdimpl.c |   29 +++++++++++++-
 4 files changed, 130 insertions(+), 2 deletions(-)
 create mode 100644 tools/libxl/libxlu_dm.c

diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
index 47fb110..2b58721 100644
--- a/tools/libxl/Makefile
+++ b/tools/libxl/Makefile
@@ -79,7 +79,7 @@ AUTOINCS= libxlu_cfg_y.h libxlu_cfg_l.h _libxl_list.h _paths.h \
 AUTOSRCS= libxlu_cfg_y.c libxlu_cfg_l.c
 AUTOSRCS += _libxl_save_msgs_callout.c _libxl_save_msgs_helper.c
 LIBXLU_OBJS = libxlu_cfg_y.o libxlu_cfg_l.o libxlu_cfg.o \
-	libxlu_disk_l.o libxlu_disk.o libxlu_vif.o libxlu_pci.o
+	libxlu_disk_l.o libxlu_disk.o libxlu_vif.o libxlu_pci.o libxlu_dm.o
 $(LIBXLU_OBJS): CFLAGS += $(CFLAGS_libxenctrl) # For xentoollog.h
 
 CLIENTS = xl testidl libxl-save-helper
diff --git a/tools/libxl/libxlu_dm.c b/tools/libxl/libxlu_dm.c
new file mode 100644
index 0000000..9f0a347
--- /dev/null
+++ b/tools/libxl/libxlu_dm.c
@@ -0,0 +1,96 @@
+#include "libxl_osdeps.h" /* must come before any other headers */
+#include <stdlib.h>
+#include "libxlu_internal.h"
+#include "libxlu_cfg_i.h"
+
+static void split_string_into_string_list(const char *str,
+                                          const char *delim,
+                                          libxl_string_list *psl)
+{
+    char *s, *saveptr;
+    const char *p;
+    libxl_string_list sl;
+
+    int i = 0, nr = 0;
+
+    s = strdup(str);
+    if (s == NULL) {
+        fprintf(stderr, "xlu_dm: unable to allocate memory\n");
+        exit(-1);
+    }
+
+    /* Count number of entries */
+    p = strtok_r(s, delim, &saveptr);
+    do {
+        nr++;
+    } while ((p = strtok_r(NULL, delim, &saveptr)));
+
+    free(s);
+
+    s = strdup(str);
+
+    sl = malloc((nr+1) * sizeof (char *));
+    if (sl == NULL) {
+        fprintf(stderr, "xlu_dm: unable to allocate memory\n");
+        exit(-1);
+    }
+
+    p = strtok_r(s, delim, &saveptr);
+    do {
+        assert(i < nr);
+        // Skip blank
+        while (*p == ' ')
+            p++;
+        sl[i] = strdup(p);
+        i++;
+    } while ((p = strtok_r(NULL, delim, &saveptr)));
+    sl[i] = NULL;
+
+    *psl = sl;
+
+    free(s);
+}
+
+int xlu_dm_parse(XLU_Config *cfg, const char *spec,
+                 libxl_dm *dm)
+{
+    char *buf = strdup(spec);
+    char *p, *p2;
+    int rc = 0;
+
+    p = strtok(buf, ",");
+    if (!p)
+        goto skip_dm;
+    do {
+        while (*p == ' ')
+            p++;
+        if ((p2 = strchr(p, '=')) == NULL) {
+            if (!strcmp(p, "ui"))
+                dm->capabilities |= LIBXL_DM_CAP_UI;
+            else if (!strcmp(p, "ide"))
+                dm->capabilities |= LIBXL_DM_CAP_IDE;
+            else if (!strcmp(p, "serial"))
+                dm->capabilities |= LIBXL_DM_CAP_SERIAL;
+            else if (!strcmp(p, "audio"))
+                dm->capabilities |= LIBXL_DM_CAP_AUDIO;
+        } else {
+            *p2 = '\0';
+            if (!strcmp(p, "name"))
+                dm->name = strdup(p2 + 1);
+            else if (!strcmp(p, "path"))
+                dm->path = strdup(p2 + 1);
+            else if (!strcmp(p, "vifs"))
+                split_string_into_string_list(p2 + 1, ";", &dm->vifs);
+       }
+    } while ((p = strtok(NULL, ",")) != NULL);
+
+    if (!dm->name && dm->path)
+    {
+        fprintf(stderr, "xl: Unable to parse device_model spec\n");
+        exit(-ERROR_FAIL);
+    }
+skip_dm:
+    free(buf);
+
+    return rc;
+}
diff --git a/tools/libxl/libxlutil.h b/tools/libxl/libxlutil.h
index 0333e55..db22715 100644
--- a/tools/libxl/libxlutil.h
+++ b/tools/libxl/libxlutil.h
@@ -93,6 +93,11 @@ int xlu_disk_parse(XLU_Config *cfg, int nspecs, const char *const *specs,
  */
 int xlu_pci_parse_bdf(XLU_Config *cfg, libxl_device_pci *pcidev, const char *str);
 
+/*
+ * Device model specification parsing.
+ */
+int xlu_dm_parse(XLU_Config *cfg, const char *spec,
+                 libxl_dm *dm);
 
 /*
  * Vif rate parsing.
diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index 138cd72..2a26fa4 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -561,7 +561,7 @@ static void parse_config_data(const char *config_source,
     const char *buf;
     long l;
     XLU_Config *config;
-    XLU_ConfigList *cpus, *vbds, *nics, *pcis, *cvfbs, *cpuids;
+    XLU_ConfigList *cpus, *vbds, *nics, *pcis, *cvfbs, *cpuids, *dms;
     int pci_power_mgmt = 0;
     int pci_msitranslate = 0;
     int pci_permissive = 0;
@@ -995,6 +995,9 @@ static void parse_config_data(const char *config_source,
                 } else if (!strcmp(p, "vifname")) {
                     free(nic->ifname);
                     nic->ifname = strdup(p2 + 1);
+                } else if (!strcmp(p, "id")) {
+                    free(nic->id);
+                    nic->id = strdup(p2 + 1);
                 } else if (!strcmp(p, "backend")) {
                     if(libxl_name_to_domid(ctx, (p2 + 1), &(nic->backend_domid))) {
                         fprintf(stderr, "Specified backend domain does not exist, defaulting to Dom0\n");
@@ -1249,6 +1252,30 @@ skip_vfb:
             }
         }
     }
+
+    d_config->num_dms = 0;
+    d_config->dms = NULL;
+
+    if (b_info->device_model_version == LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN
+        && !xlu_cfg_get_list (config, "device_models", &dms, 0, 0)) {
+        while ((buf = xlu_cfg_get_listitem (dms, d_config->num_dms)) != NULL) {
+            libxl_dm *dm;
+            size_t size = sizeof (libxl_dm) * (d_config->num_dms + 1);
+
+            d_config->dms = (libxl_dm *)realloc (d_config->dms, size);
+            if (!d_config->dms) {
+                fprintf(stderr, "Can't realloc d_config->dms\n");
+                exit (1);
+            }
+            dm = d_config->dms + d_config->num_dms;
+            libxl_dm_init (dm);
+            if (xlu_dm_parse(config, buf, dm)) {
+                exit (-ERROR_FAIL);
+            }
+            d_config->num_dms++;
+        }
+    }
+
 #define parse_extra_args(type)                                            \
     e = xlu_cfg_get_list_as_string_list(config, "device_model_args"#type, \
                                     &b_info->extra##type, 0);            \
-- 
Julien Grall


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 18:55:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 18:55:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4G5Y-0007Fy-5m; Wed, 22 Aug 2012 18:55:36 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@citrix.com>) id 1T4G5W-00070k-C3
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 18:55:34 +0000
X-Env-Sender: julien.grall@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1345661712!9781420!8
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjU4OTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8053 invoked from network); 22 Aug 2012 18:55:27 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 18:55:27 -0000
X-IronPort-AV: E=Sophos;i="4.80,809,1344225600"; d="scan'208";a="35484818"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 14:55:27 -0400
Received: from meteora.cam.xci-test.com (10.80.248.22) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 22 Aug 2012 14:55:26 -0400
From: Julien Grall <julien.grall@citrix.com>
To: qemu-devel@nongnu.org
Date: Wed, 22 Aug 2012 13:32:03 +0100
Message-ID: <0daed9ef86102101bbc23a26ac887d751aa8f8ae.1345552068.git.julien.grall@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <cover.1345552068.git.julien.grall@citrix.com>
References: <cover.1345552068.git.julien.grall@citrix.com>
MIME-Version: 1.0
Cc: Julien Grall <julien.grall@citrix.com>, christian.limpach@gmail.com,
	Stefano.Stabellini@eu.citrix.com, xen-devel@lists.xen.org
Subject: [Xen-devel] [XEN][RFC PATCH V2 17/17] xl: implement save/restore
	for multiple device models
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Each device model will be saved/restored one by one.
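For reference, the device-model section of the save stream produced by this patch appears to be laid out as follows. This is a sketch inferred from the libxl_dom.c hunks below (DMS_SIGNATURE/num_dms written once before the first device model, then one DM_SIGNATURE record per device model); the field widths follow the uint32_t declarations in the patch.

```
DMS_SIGNATURE                       written once, before the first record
num_dms         (uint32_t)          number of device-model records that follow
for each device model:
    DM_SIGNATURE
    qemu_state_len  (uint32_t)      length of the QEMU state blob
    qemu_state      (qemu_state_len bytes)
```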

Signed-off-by: Julien Grall <julien.grall@citrix.com>
---
 tools/libxl/libxl.c          |    5 +-
 tools/libxl/libxl_dom.c      |  143 ++++++++++++++++++++++++++++++++++--------
 tools/libxl/libxl_internal.h |   16 +++--
 3 files changed, 130 insertions(+), 34 deletions(-)

diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index 60718b6..e9d14e8 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -416,7 +416,7 @@ int libxl_domain_resume(libxl_ctx *ctx, uint32_t domid, int suspend_cancel)
     }
 
     if (type == LIBXL_DOMAIN_TYPE_HVM) {
-        rc = libxl__domain_resume_device_model(gc, domid);
+        rc = libxl__domain_resume_device_models(gc, domid);
         if (rc) {
             LIBXL__LOG(ctx, LIBXL__LOG_ERROR,
                        "failed to resume device model for domain %u:%d",
@@ -852,8 +852,9 @@ int libxl_domain_unpause(libxl_ctx *ctx, uint32_t domid)
         path = libxl__sprintf(gc, "/local/domain/0/device-model/%d/state", domid);
         state = libxl__xs_read(gc, XBT_NULL, path);
         if (state != NULL && !strcmp(state, "paused")) {
+            /* FIXME: handle multiple qemu */
             libxl__qemu_traditional_cmd(gc, domid, "continue");
-            libxl__wait_for_device_model(gc, domid, "running",
+            libxl__wait_for_device_model(gc, domid, 0, "running",
                                          NULL, NULL, NULL);
         }
     }
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index 475fea8..cd07140 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -582,6 +582,7 @@ int libxl__qemu_traditional_cmd(libxl__gc *gc, uint32_t domid,
 }
 
 struct libxl__physmap_info {
+    libxl_dmid device_model;
     uint64_t phys_offset;
     uint64_t start_addr;
     uint64_t size;
@@ -640,6 +641,10 @@ int libxl__toolstack_restore(uint32_t domid, const uint8_t *buf,
         pi = (struct libxl__physmap_info*) ptr;
         ptr += sizeof(struct libxl__physmap_info) + pi->namelen;
 
+        xs_path = restore_helper(gc, domid, pi->phys_offset, "device_model");
+        ret = libxl__xs_write(gc, 0, xs_path, "%u", pi->device_model);
+        if (ret)
+            return -1;
         xs_path = restore_helper(gc, domid, pi->phys_offset, "start_addr");
         ret = libxl__xs_write(gc, 0, xs_path, "%"PRIx64, pi->start_addr);
         if (ret)
@@ -839,27 +844,28 @@ static void switch_logdirty_done(libxl__egc *egc,
 
 /*----- callbacks, called by xc_domain_save -----*/
 
-int libxl__domain_suspend_device_model(libxl__gc *gc,
-                                       libxl__domain_suspend_state *dss)
+static int libxl__domain_suspend_device_model(libxl__gc *gc,
+                                              libxl_dmid dmid,
+                                              libxl__domain_suspend_state *dss)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
     int ret = 0;
     uint32_t const domid = dss->domid;
-    const char *const filename = dss->dm_savefile;
+    const char *const filename = libxl__device_model_savefile(gc, domid, dmid);
 
     switch (libxl__device_model_version_running(gc, domid)) {
     case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN_TRADITIONAL: {
         LIBXL__LOG(ctx, LIBXL__LOG_DEBUG,
                    "Saving device model state to %s", filename);
         libxl__qemu_traditional_cmd(gc, domid, "save");
-        libxl__wait_for_device_model(gc, domid, "paused", NULL, NULL, NULL);
+        libxl__wait_for_device_model(gc, domid, 0, "paused", NULL, NULL, NULL);
         break;
     }
     case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN:
-        if (libxl__qmp_stop(gc, domid))
+        if (libxl__qmp_stop(gc, domid, dmid))
             return ERROR_FAIL;
         /* Save DM state into filename */
-        ret = libxl__qmp_save(gc, domid, filename);
+        ret = libxl__qmp_save(gc, domid, dmid, filename);
         if (ret)
             unlink(filename);
         break;
@@ -870,21 +876,67 @@ int libxl__domain_suspend_device_model(libxl__gc *gc,
     return ret;
 }
 
-int libxl__domain_resume_device_model(libxl__gc *gc, uint32_t domid)
+int libxl__domain_suspend_device_models(libxl__gc *gc,
+                                        libxl__domain_suspend_state *dss)
 {
+    libxl_dmid *dms = NULL;
+    unsigned int num_dms = 0;
+    unsigned int i;
+    int ret;
+
+    dms = libxl__list_device_models(gc, dss->domid, &num_dms);
+
+    if (!dms)
+        return ERROR_FAIL;
+
+    for (i = 0; i < num_dms; i++)
+    {
+        ret = libxl__domain_suspend_device_model(gc, dms[i], dss);
+        if (ret)
+            return ret;
+    }
+
+    return 0;
+}
 
+static int libxl__domain_resume_device_model(libxl__gc *gc, libxl_domid domid,
+                                             libxl_dmid dmid)
+{
     switch (libxl__device_model_version_running(gc, domid)) {
     case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN_TRADITIONAL: {
         libxl__qemu_traditional_cmd(gc, domid, "continue");
-        libxl__wait_for_device_model(gc, domid, "running", NULL, NULL, NULL);
+        libxl__wait_for_device_model(gc, domid, dmid, "running", NULL, NULL,
+                                     NULL);
         break;
     }
     case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN:
-        if (libxl__qmp_resume(gc, domid))
+        if (libxl__qmp_resume(gc, domid, dmid))
             return ERROR_FAIL;
     default:
         return ERROR_INVAL;
     }
+    return 0;
+}
+
+int libxl__domain_resume_device_models(libxl__gc *gc,
+                                       libxl_domid domid)
+{
+    libxl_dmid *dms = NULL;
+    unsigned int num_dms = 0;
+    unsigned int i = 0;
+    int ret = 0;
+
+    dms = libxl__list_device_models(gc, domid, &num_dms);
+
+    if (!dms)
+        return ERROR_FAIL;
+
+    for (i = 0; i < num_dms; i++)
+    {
+        ret = libxl__domain_resume_device_model(gc, domid, dms[i]);
+        if (ret)
+            return ret;
+    }
 
     return 0;
 }
@@ -1014,9 +1066,10 @@ int libxl__domain_suspend_common_callback(void *user)
 
  guest_suspended:
     if (dss->hvm) {
-        ret = libxl__domain_suspend_device_model(gc, dss);
+        ret = libxl__domain_suspend_device_models(gc, dss);
         if (ret) {
-            LOG(ERROR, "libxl__domain_suspend_device_model failed ret=%d", ret);
+            LOG(ERROR, "libxl__domain_suspend_device_models failed ret=%d",
+                ret);
             return 0;
         }
     }
@@ -1038,6 +1091,7 @@ int libxl__toolstack_save(uint32_t domid, uint8_t **buf,
     STATE_AO_GC(dss->ao);
     int i = 0;
     char *start_addr = NULL, *size = NULL, *phys_offset = NULL, *name = NULL;
+    char *device_model = NULL;
     unsigned int num = 0;
     uint32_t count = 0, version = TOOLSTACK_SAVE_VERSION, namelen = 0;
     uint8_t *ptr = NULL;
@@ -1068,6 +1122,13 @@ int libxl__toolstack_save(uint32_t domid, uint8_t **buf,
             return -1;
         }
 
+        xs_path = physmap_path(gc, domid, phys_offset, "device_model");
+        device_model = libxl__xs_read(gc, 0, xs_path);
+        if (device_model == NULL) {
+            LOG(ERROR, "%s is NULL", xs_path);
+            return -1;
+        }
+
         xs_path = physmap_path(gc, domid, phys_offset, "start_addr");
         start_addr = libxl__xs_read(gc, 0, xs_path);
         if (start_addr == NULL) {
@@ -1095,6 +1156,7 @@ int libxl__toolstack_save(uint32_t domid, uint8_t **buf,
             return -1;
         ptr = (*buf) + offset;
         pi = (struct libxl__physmap_info *) ptr;
+        pi->device_model = strtol(device_model, NULL, 10);
         pi->phys_offset = strtoll(phys_offset, NULL, 16);
         pi->start_addr = strtoll(start_addr, NULL, 16);
         pi->size = strtoll(size, NULL, 16);
@@ -1144,7 +1206,7 @@ static void libxl__remus_domain_checkpoint_callback(void *data)
 
     /* This would go into tailbuf. */
     if (dss->hvm) {
-        libxl__domain_save_device_model(egc, dss, remus_checkpoint_dm_saved);
+        libxl__domain_save_device_models(egc, dss, remus_checkpoint_dm_saved);
     } else {
         remus_checkpoint_dm_saved(egc, dss, 0);
     }
@@ -1207,7 +1269,6 @@ void libxl__domain_suspend(libxl__egc *egc, libxl__domain_suspend_state *dss)
 
     dss->suspend_eventchn = -1;
     dss->guest_responded = 0;
-    dss->dm_savefile = libxl__device_model_savefile(gc, domid);
 
     if (r_info != NULL) {
         dss->interval = r_info->interval;
@@ -1274,10 +1335,10 @@ void libxl__xc_domain_save_done(libxl__egc *egc, void *dss_void,
     }
 
     if (type == LIBXL_DOMAIN_TYPE_HVM) {
-        rc = libxl__domain_suspend_device_model(gc, dss);
+        rc = libxl__domain_suspend_device_models(gc, dss);
         if (rc) goto out;
         
-        libxl__domain_save_device_model(egc, dss, domain_suspend_done);
+        libxl__domain_save_device_models(egc, dss, domain_suspend_done);
         return;
     }
 
@@ -1290,29 +1351,30 @@ out:
 static void save_device_model_datacopier_done(libxl__egc *egc,
      libxl__datacopier_state *dc, int onwrite, int errnoval);
 
-void libxl__domain_save_device_model(libxl__egc *egc,
-                                     libxl__domain_suspend_state *dss,
-                                     libxl__save_device_model_cb *callback)
+static void libxl__domain_save_device_model(libxl__egc *egc,
+                                            libxl__domain_suspend_state *dss)
 {
     STATE_AO_GC(dss->ao);
     struct stat st;
     uint32_t qemu_state_len;
+    uint32_t num_dms = dss->num_dms;
     int rc;
-
-    dss->save_dm_callback = callback;
+    libxl_dmid dmid = dss->dms[dss->current_dm];
 
     /* Convenience aliases */
-    const char *const filename = dss->dm_savefile;
+    const char *const filename = libxl__device_model_savefile(gc, dss->domid,
+                                                              dmid);
     const int fd = dss->fd;
 
     libxl__datacopier_state *dc = &dss->save_dm_datacopier;
     memset(dc, 0, sizeof(*dc));
-    dc->readwhat = GCSPRINTF("qemu save file %s", filename);
+    dc->readwhat = GCSPRINTF("qemu %u save file %s", dmid, filename);
     dc->ao = ao;
     dc->readfd = -1;
     dc->writefd = fd;
     dc->maxsz = INT_MAX;
-    dc->copywhat = GCSPRINTF("qemu save file for domain %"PRIu32, dss->domid);
+    dc->copywhat = GCSPRINTF("qemu %u save file for domain %"PRIu32,
+                             dmid, dss->domid);
     dc->writewhat = "save/migration stream";
     dc->callback = save_device_model_datacopier_done;
 
@@ -1339,8 +1401,16 @@ void libxl__domain_save_device_model(libxl__egc *egc,
     rc = libxl__datacopier_start(dc);
     if (rc) goto out;
 
+    /* FIXME: Ugly fix to add DMS_SIGNATURE */
+    if (dss->current_dm == 0) {
+        libxl__datacopier_prefixdata(egc, dc,
+                                     DMS_SIGNATURE, strlen(DMS_SIGNATURE));
+        libxl__datacopier_prefixdata(egc, dc,
+                                     &num_dms, sizeof (num_dms));
+    }
+
     libxl__datacopier_prefixdata(egc, dc,
-                                 QEMU_SIGNATURE, strlen(QEMU_SIGNATURE));
+                                 DM_SIGNATURE, strlen(DM_SIGNATURE));
 
     libxl__datacopier_prefixdata(egc, dc,
                                  &qemu_state_len, sizeof(qemu_state_len));
@@ -1350,6 +1420,20 @@ void libxl__domain_save_device_model(libxl__egc *egc,
     save_device_model_datacopier_done(egc, dc, -1, 0);
 }
 
+void libxl__domain_save_device_models(libxl__egc *egc,
+                                libxl__domain_suspend_state *dss,
+                                libxl__save_device_model_cb *callback)
+{
+    STATE_AO_GC(dss->ao);
+
+    dss->save_dm_callback = callback;
+    dss->num_dms = 0;
+    dss->current_dm = 0;
+    dss->dms = libxl__list_device_models(gc, dss->domid, &dss->num_dms);
+
+    libxl__domain_save_device_model(egc, dss);
+}
+
 static void save_device_model_datacopier_done(libxl__egc *egc,
      libxl__datacopier_state *dc, int onwrite, int errnoval)
 {
@@ -1358,7 +1442,9 @@ static void save_device_model_datacopier_done(libxl__egc *egc,
     STATE_AO_GC(dss->ao);
 
     /* Convenience aliases */
-    const char *const filename = dss->dm_savefile;
+    libxl_dmid dmid = dss->dms[dss->current_dm];
+    const char *const filename = libxl__device_model_savefile(gc, dss->domid,
+                                                              dmid);
     int our_rc = 0;
     int rc;
 
@@ -1375,7 +1461,12 @@ static void save_device_model_datacopier_done(libxl__egc *egc,
     rc = libxl__remove_file(gc, filename);
     if (!our_rc) our_rc = rc;
 
-    dss->save_dm_callback(egc, dss, our_rc);
+    dss->current_dm++;
+
+    if (!our_rc && dss->num_dms != dss->current_dm)
+        libxl__domain_save_device_model(egc, dss);
+    else
+        dss->save_dm_callback(egc, dss, our_rc);
 }
 
 static void domain_suspend_done(libxl__egc *egc,
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index 2e6eedc..9de1465 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -93,7 +93,8 @@
 #define LIBXL_MIN_DOM0_MEM (128*1024)
 /* use 0 as the domid of the toolstack domain for now */
 #define LIBXL_TOOLSTACK_DOMID 0
-#define QEMU_SIGNATURE "DeviceModelRecord0002"
+#define DMS_SIGNATURE "DeviceModelRecords001"
+#define DM_SIGNATURE "DeviceModelRecord0002"
 #define STUBDOM_CONSOLE_LOGGING 0
 #define STUBDOM_CONSOLE_SAVE 1
 #define STUBDOM_CONSOLE_RESTORE 2
@@ -896,7 +897,8 @@ _hidden int libxl__domain_rename(libxl__gc *gc, uint32_t domid,
 
 _hidden int libxl__toolstack_restore(uint32_t domid, const uint8_t *buf,
                                      uint32_t size, void *data);
-_hidden int libxl__domain_resume_device_model(libxl__gc *gc, uint32_t domid);
+_hidden int libxl__domain_resume_device_models(libxl__gc *gc,
+                                               libxl_domid domid);
 
 _hidden void libxl__userdata_destroyall(libxl__gc *gc, uint32_t domid);
 
@@ -2230,10 +2232,12 @@ struct libxl__domain_suspend_state {
     int hvm;
     int xcflags;
     int guest_responded;
-    const char *dm_savefile;
     int interval; /* checkpoint interval (for Remus) */
     libxl__save_helper_state shs;
     libxl__logdirty_switch logdirty;
+    unsigned int num_dms;
+    unsigned int current_dm;
+    libxl_dmid *dms;
     /* private for libxl__domain_save_device_model */
     libxl__save_device_model_cb *save_dm_callback;
     libxl__datacopier_state save_dm_datacopier;
@@ -2535,9 +2539,9 @@ _hidden void libxl__xc_domain_restore_done(libxl__egc *egc, void *dcs_void,
                                            int rc, int retval, int errnoval);
 
 /* Each time the dm needs to be saved, we must call suspend and then save */
-_hidden int libxl__domain_suspend_device_model(libxl__gc *gc,
-                                           libxl__domain_suspend_state *dss);
-_hidden void libxl__domain_save_device_model(libxl__egc *egc,
+_hidden int libxl__domain_suspend_device_models(libxl__gc *gc,
+                                            libxl__domain_suspend_state *dss);
+_hidden void libxl__domain_save_device_models(libxl__egc *egc,
                                      libxl__domain_suspend_state *dss,
                                      libxl__save_device_model_cb *callback);
 
-- 
Julien Grall


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 18:55:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 18:55:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4G5Y-0007Fy-5m; Wed, 22 Aug 2012 18:55:36 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@citrix.com>) id 1T4G5W-00070k-C3
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 18:55:34 +0000
X-Env-Sender: julien.grall@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1345661712!9781420!8
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjU4OTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8053 invoked from network); 22 Aug 2012 18:55:27 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 18:55:27 -0000
X-IronPort-AV: E=Sophos;i="4.80,809,1344225600"; d="scan'208";a="35484818"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	22 Aug 2012 14:55:27 -0400
Received: from meteora.cam.xci-test.com (10.80.248.22) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Wed, 22 Aug 2012 14:55:26 -0400
From: Julien Grall <julien.grall@citrix.com>
To: qemu-devel@nongnu.org
Date: Wed, 22 Aug 2012 13:32:03 +0100
Message-ID: <0daed9ef86102101bbc23a26ac887d751aa8f8ae.1345552068.git.julien.grall@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <cover.1345552068.git.julien.grall@citrix.com>
References: <cover.1345552068.git.julien.grall@citrix.com>
MIME-Version: 1.0
Cc: Julien Grall <julien.grall@citrix.com>, christian.limpach@gmail.com,
	Stefano.Stabellini@eu.citrix.com, xen-devel@lists.xen.org
Subject: [Xen-devel] [XEN][RFC PATCH V2 17/17] xl: implement save/restore
	for multiple device models
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Each device model is saved and restored one by one.

Signed-off-by: Julien Grall <julien.grall@citrix.com>
---
 tools/libxl/libxl.c          |    5 +-
 tools/libxl/libxl_dom.c      |  143 ++++++++++++++++++++++++++++++++++--------
 tools/libxl/libxl_internal.h |   16 +++--
 3 files changed, 130 insertions(+), 34 deletions(-)

diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index 60718b6..e9d14e8 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -416,7 +416,7 @@ int libxl_domain_resume(libxl_ctx *ctx, uint32_t domid, int suspend_cancel)
     }
 
     if (type == LIBXL_DOMAIN_TYPE_HVM) {
-        rc = libxl__domain_resume_device_model(gc, domid);
+        rc = libxl__domain_resume_device_models(gc, domid);
         if (rc) {
             LIBXL__LOG(ctx, LIBXL__LOG_ERROR,
                        "failed to resume device model for domain %u:%d",
@@ -852,8 +852,9 @@ int libxl_domain_unpause(libxl_ctx *ctx, uint32_t domid)
         path = libxl__sprintf(gc, "/local/domain/0/device-model/%d/state", domid);
         state = libxl__xs_read(gc, XBT_NULL, path);
         if (state != NULL && !strcmp(state, "paused")) {
+            /* FIXME: handle multiple qemu */
             libxl__qemu_traditional_cmd(gc, domid, "continue");
-            libxl__wait_for_device_model(gc, domid, "running",
+            libxl__wait_for_device_model(gc, domid, 0, "running",
                                          NULL, NULL, NULL);
         }
     }
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index 475fea8..cd07140 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -582,6 +582,7 @@ int libxl__qemu_traditional_cmd(libxl__gc *gc, uint32_t domid,
 }
 
 struct libxl__physmap_info {
+    libxl_dmid device_model;
     uint64_t phys_offset;
     uint64_t start_addr;
     uint64_t size;
@@ -640,6 +641,10 @@ int libxl__toolstack_restore(uint32_t domid, const uint8_t *buf,
         pi = (struct libxl__physmap_info*) ptr;
         ptr += sizeof(struct libxl__physmap_info) + pi->namelen;
 
+        xs_path = restore_helper(gc, domid, pi->phys_offset, "device_model");
+        ret = libxl__xs_write(gc, 0, xs_path, "%u", pi->device_model);
+        if (ret)
+            return -1;
         xs_path = restore_helper(gc, domid, pi->phys_offset, "start_addr");
         ret = libxl__xs_write(gc, 0, xs_path, "%"PRIx64, pi->start_addr);
         if (ret)
@@ -839,27 +844,28 @@ static void switch_logdirty_done(libxl__egc *egc,
 
 /*----- callbacks, called by xc_domain_save -----*/
 
-int libxl__domain_suspend_device_model(libxl__gc *gc,
-                                       libxl__domain_suspend_state *dss)
+static int libxl__domain_suspend_device_model(libxl__gc *gc,
+                                              libxl_dmid dmid,
+                                              libxl__domain_suspend_state *dss)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
     int ret = 0;
     uint32_t const domid = dss->domid;
-    const char *const filename = dss->dm_savefile;
+    const char *const filename = libxl__device_model_savefile(gc, domid, dmid);
 
     switch (libxl__device_model_version_running(gc, domid)) {
     case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN_TRADITIONAL: {
         LIBXL__LOG(ctx, LIBXL__LOG_DEBUG,
                    "Saving device model state to %s", filename);
         libxl__qemu_traditional_cmd(gc, domid, "save");
-        libxl__wait_for_device_model(gc, domid, "paused", NULL, NULL, NULL);
+        libxl__wait_for_device_model(gc, domid, 0, "paused", NULL, NULL, NULL);
         break;
     }
     case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN:
-        if (libxl__qmp_stop(gc, domid))
+        if (libxl__qmp_stop(gc, domid, dmid))
             return ERROR_FAIL;
         /* Save DM state into filename */
-        ret = libxl__qmp_save(gc, domid, filename);
+        ret = libxl__qmp_save(gc, domid, dmid, filename);
         if (ret)
             unlink(filename);
         break;
@@ -870,21 +876,67 @@ int libxl__domain_suspend_device_model(libxl__gc *gc,
     return ret;
 }
 
-int libxl__domain_resume_device_model(libxl__gc *gc, uint32_t domid)
+int libxl__domain_suspend_device_models(libxl__gc *gc,
+                                        libxl__domain_suspend_state *dss)
 {
+    libxl_dmid *dms = NULL;
+    unsigned int num_dms = 0;
+    unsigned int i;
+    int ret;
+
+    dms = libxl__list_device_models(gc, dss->domid, &num_dms);
+
+    if (!dms)
+        return ERROR_FAIL;
+
+    for (i = 0; i < num_dms; i++)
+    {
+        ret = libxl__domain_suspend_device_model(gc, dms[i], dss);
+        if (ret)
+            return ret;
+    }
+
+    return 0;
+}
 
+static int libxl__domain_resume_device_model(libxl__gc *gc, libxl_domid domid,
+                                             libxl_dmid dmid)
+{
     switch (libxl__device_model_version_running(gc, domid)) {
     case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN_TRADITIONAL: {
         libxl__qemu_traditional_cmd(gc, domid, "continue");
-        libxl__wait_for_device_model(gc, domid, "running", NULL, NULL, NULL);
+        libxl__wait_for_device_model(gc, domid, dmid, "running", NULL, NULL,
+                                     NULL);
         break;
     }
     case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN:
-        if (libxl__qmp_resume(gc, domid))
+        if (libxl__qmp_resume(gc, domid, dmid))
             return ERROR_FAIL;
+        break;
     default:
         return ERROR_INVAL;
     }
+    return 0;
+}
+
+int libxl__domain_resume_device_models(libxl__gc *gc,
+                                       libxl_domid domid)
+{
+    libxl_dmid *dms = NULL;
+    unsigned int num_dms = 0;
+    unsigned int i = 0;
+    int ret = 0;
+
+    dms = libxl__list_device_models(gc, domid, &num_dms);
+
+    if (!dms)
+        return ERROR_FAIL;
+
+    for (i = 0; i < num_dms; i++)
+    {
+        ret = libxl__domain_resume_device_model(gc, domid, dms[i]);
+        if (ret)
+            return ret;
+    }
 
     return 0;
 }
@@ -1014,9 +1066,10 @@ int libxl__domain_suspend_common_callback(void *user)
 
  guest_suspended:
     if (dss->hvm) {
-        ret = libxl__domain_suspend_device_model(gc, dss);
+        ret = libxl__domain_suspend_device_models(gc, dss);
         if (ret) {
-            LOG(ERROR, "libxl__domain_suspend_device_model failed ret=%d", ret);
+            LOG(ERROR, "libxl__domain_suspend_device_models failed ret=%d",
+                ret);
             return 0;
         }
     }
@@ -1038,6 +1091,7 @@ int libxl__toolstack_save(uint32_t domid, uint8_t **buf,
     STATE_AO_GC(dss->ao);
     int i = 0;
     char *start_addr = NULL, *size = NULL, *phys_offset = NULL, *name = NULL;
+    char *device_model = NULL;
     unsigned int num = 0;
     uint32_t count = 0, version = TOOLSTACK_SAVE_VERSION, namelen = 0;
     uint8_t *ptr = NULL;
@@ -1068,6 +1122,13 @@ int libxl__toolstack_save(uint32_t domid, uint8_t **buf,
             return -1;
         }
 
+        xs_path = physmap_path(gc, domid, phys_offset, "device_model");
+        device_model = libxl__xs_read(gc, 0, xs_path);
+        if (device_model == NULL) {
+            LOG(ERROR, "%s is NULL", xs_path);
+            return -1;
+        }
+
         xs_path = physmap_path(gc, domid, phys_offset, "start_addr");
         start_addr = libxl__xs_read(gc, 0, xs_path);
         if (start_addr == NULL) {
@@ -1095,6 +1156,7 @@ int libxl__toolstack_save(uint32_t domid, uint8_t **buf,
             return -1;
         ptr = (*buf) + offset;
         pi = (struct libxl__physmap_info *) ptr;
+        pi->device_model = strtol(device_model, NULL, 10);
         pi->phys_offset = strtoll(phys_offset, NULL, 16);
         pi->start_addr = strtoll(start_addr, NULL, 16);
         pi->size = strtoll(size, NULL, 16);
@@ -1144,7 +1206,7 @@ static void libxl__remus_domain_checkpoint_callback(void *data)
 
     /* This would go into tailbuf. */
     if (dss->hvm) {
-        libxl__domain_save_device_model(egc, dss, remus_checkpoint_dm_saved);
+        libxl__domain_save_device_models(egc, dss, remus_checkpoint_dm_saved);
     } else {
         remus_checkpoint_dm_saved(egc, dss, 0);
     }
@@ -1207,7 +1269,6 @@ void libxl__domain_suspend(libxl__egc *egc, libxl__domain_suspend_state *dss)
 
     dss->suspend_eventchn = -1;
     dss->guest_responded = 0;
-    dss->dm_savefile = libxl__device_model_savefile(gc, domid);
 
     if (r_info != NULL) {
         dss->interval = r_info->interval;
@@ -1274,10 +1335,10 @@ void libxl__xc_domain_save_done(libxl__egc *egc, void *dss_void,
     }
 
     if (type == LIBXL_DOMAIN_TYPE_HVM) {
-        rc = libxl__domain_suspend_device_model(gc, dss);
+        rc = libxl__domain_suspend_device_models(gc, dss);
         if (rc) goto out;
         
-        libxl__domain_save_device_model(egc, dss, domain_suspend_done);
+        libxl__domain_save_device_models(egc, dss, domain_suspend_done);
         return;
     }
 
@@ -1290,29 +1351,30 @@ out:
 static void save_device_model_datacopier_done(libxl__egc *egc,
      libxl__datacopier_state *dc, int onwrite, int errnoval);
 
-void libxl__domain_save_device_model(libxl__egc *egc,
-                                     libxl__domain_suspend_state *dss,
-                                     libxl__save_device_model_cb *callback)
+static void libxl__domain_save_device_model(libxl__egc *egc,
+                                            libxl__domain_suspend_state *dss)
 {
     STATE_AO_GC(dss->ao);
     struct stat st;
     uint32_t qemu_state_len;
+    uint32_t num_dms = dss->num_dms;
     int rc;
-
-    dss->save_dm_callback = callback;
+    libxl_dmid dmid = dss->dms[dss->current_dm];
 
     /* Convenience aliases */
-    const char *const filename = dss->dm_savefile;
+    const char *const filename = libxl__device_model_savefile(gc, dss->domid,
+                                                              dmid);
     const int fd = dss->fd;
 
     libxl__datacopier_state *dc = &dss->save_dm_datacopier;
     memset(dc, 0, sizeof(*dc));
-    dc->readwhat = GCSPRINTF("qemu save file %s", filename);
+    dc->readwhat = GCSPRINTF("qemu %u save file %s", dmid, filename);
     dc->ao = ao;
     dc->readfd = -1;
     dc->writefd = fd;
     dc->maxsz = INT_MAX;
-    dc->copywhat = GCSPRINTF("qemu save file for domain %"PRIu32, dss->domid);
+    dc->copywhat = GCSPRINTF("qemu %u save file for domain %"PRIu32,
+                             dmid, dss->domid);
     dc->writewhat = "save/migration stream";
     dc->callback = save_device_model_datacopier_done;
 
@@ -1339,8 +1401,16 @@ void libxl__domain_save_device_model(libxl__egc *egc,
     rc = libxl__datacopier_start(dc);
     if (rc) goto out;
 
+    /* FIXME: Ugly fix to add DMS_SIGNATURE */
+    if (dss->current_dm == 0) {
+        libxl__datacopier_prefixdata(egc, dc,
+                                     DMS_SIGNATURE, strlen(DMS_SIGNATURE));
+        libxl__datacopier_prefixdata(egc, dc,
+                                     &num_dms, sizeof (num_dms));
+    }
+
     libxl__datacopier_prefixdata(egc, dc,
-                                 QEMU_SIGNATURE, strlen(QEMU_SIGNATURE));
+                                 DM_SIGNATURE, strlen(DM_SIGNATURE));
 
     libxl__datacopier_prefixdata(egc, dc,
                                  &qemu_state_len, sizeof(qemu_state_len));
@@ -1350,6 +1420,20 @@ void libxl__domain_save_device_model(libxl__egc *egc,
     save_device_model_datacopier_done(egc, dc, -1, 0);
 }
 
+void libxl__domain_save_device_models(libxl__egc *egc,
+                                libxl__domain_suspend_state *dss,
+                                libxl__save_device_model_cb *callback)
+{
+    STATE_AO_GC(dss->ao);
+
+    dss->save_dm_callback = callback;
+    dss->num_dms = 0;
+    dss->current_dm = 0;
+    dss->dms = libxl__list_device_models(gc, dss->domid, &dss->num_dms);
+
+    libxl__domain_save_device_model(egc, dss);
+}
+
 static void save_device_model_datacopier_done(libxl__egc *egc,
      libxl__datacopier_state *dc, int onwrite, int errnoval)
 {
@@ -1358,7 +1442,9 @@ static void save_device_model_datacopier_done(libxl__egc *egc,
     STATE_AO_GC(dss->ao);
 
     /* Convenience aliases */
-    const char *const filename = dss->dm_savefile;
+    libxl_dmid dmid = dss->dms[dss->current_dm];
+    const char *const filename = libxl__device_model_savefile(gc, dss->domid,
+                                                              dmid);
     int our_rc = 0;
     int rc;
 
@@ -1375,7 +1461,12 @@ static void save_device_model_datacopier_done(libxl__egc *egc,
     rc = libxl__remove_file(gc, filename);
     if (!our_rc) our_rc = rc;
 
-    dss->save_dm_callback(egc, dss, our_rc);
+    dss->current_dm++;
+
+    if (!our_rc && dss->num_dms != dss->current_dm)
+        libxl__domain_save_device_model(egc, dss);
+    else
+        dss->save_dm_callback(egc, dss, our_rc);
 }
 
 static void domain_suspend_done(libxl__egc *egc,
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index 2e6eedc..9de1465 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -93,7 +93,8 @@
 #define LIBXL_MIN_DOM0_MEM (128*1024)
 /* use 0 as the domid of the toolstack domain for now */
 #define LIBXL_TOOLSTACK_DOMID 0
-#define QEMU_SIGNATURE "DeviceModelRecord0002"
+#define DMS_SIGNATURE "DeviceModelRecords001"
+#define DM_SIGNATURE "DeviceModelRecord0002"
 #define STUBDOM_CONSOLE_LOGGING 0
 #define STUBDOM_CONSOLE_SAVE 1
 #define STUBDOM_CONSOLE_RESTORE 2
@@ -896,7 +897,8 @@ _hidden int libxl__domain_rename(libxl__gc *gc, uint32_t domid,
 
 _hidden int libxl__toolstack_restore(uint32_t domid, const uint8_t *buf,
                                      uint32_t size, void *data);
-_hidden int libxl__domain_resume_device_model(libxl__gc *gc, uint32_t domid);
+_hidden int libxl__domain_resume_device_models(libxl__gc *gc,
+                                               libxl_domid domid);
 
 _hidden void libxl__userdata_destroyall(libxl__gc *gc, uint32_t domid);
 
@@ -2230,10 +2232,12 @@ struct libxl__domain_suspend_state {
     int hvm;
     int xcflags;
     int guest_responded;
-    const char *dm_savefile;
     int interval; /* checkpoint interval (for Remus) */
     libxl__save_helper_state shs;
     libxl__logdirty_switch logdirty;
+    unsigned int num_dms;
+    unsigned int current_dm;
+    libxl_dmid *dms;
     /* private for libxl__domain_save_device_model */
     libxl__save_device_model_cb *save_dm_callback;
     libxl__datacopier_state save_dm_datacopier;
@@ -2535,9 +2539,9 @@ _hidden void libxl__xc_domain_restore_done(libxl__egc *egc, void *dcs_void,
                                            int rc, int retval, int errnoval);
 
 /* Each time the dm needs to be saved, we must call suspend and then save */
-_hidden int libxl__domain_suspend_device_model(libxl__gc *gc,
-                                           libxl__domain_suspend_state *dss);
-_hidden void libxl__domain_save_device_model(libxl__egc *egc,
+_hidden int libxl__domain_suspend_device_models(libxl__gc *gc,
+                                            libxl__domain_suspend_state *dss);
+_hidden void libxl__domain_save_device_models(libxl__egc *egc,
                                      libxl__domain_suspend_state *dss,
                                      libxl__save_device_model_cb *callback);
 
-- 
Julien Grall


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 19:05:39 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 19:05:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4GF7-000266-QB; Wed, 22 Aug 2012 19:05:29 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ketuzsezr@gmail.com>) id 1T4GF6-00025z-7F
	for xen-devel@lists.xensource.com; Wed, 22 Aug 2012 19:05:28 +0000
Received: from [85.158.143.35:46865] by server-2.bemta-4.messagelabs.com id
	70/56-21239-77D25305; Wed, 22 Aug 2012 19:05:27 +0000
X-Env-Sender: ketuzsezr@gmail.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1345662324!11667178!1
X-Originating-IP: [209.85.212.43]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 371 invoked from network); 22 Aug 2012 19:05:25 -0000
Received: from mail-vb0-f43.google.com (HELO mail-vb0-f43.google.com)
	(209.85.212.43)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 19:05:25 -0000
Received: by vbbfq11 with SMTP id fq11so1804502vbb.30
	for <xen-devel@lists.xensource.com>;
	Wed, 22 Aug 2012 12:05:24 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:date:from:to:cc:subject:message-id:references:mime-version
	:content-type:content-disposition:in-reply-to:user-agent;
	bh=3OAIx6L7/BbeznqjWfDUB8FeM4uPin15xCxbaXEUWjY=;
	b=MXQKNv+heWSIrPGkTTobuqZ0TlngbnrjUkstMf7OAxO1zrqsag5LbVOK2GDgv3ZS42
	JbgA+2GveoIj9GE/38pEEYIJZxz4PQjtN2y6bjPrLY3kwuv9cNXnTK5TkUXfvxrl4pWU
	ShvY64GE8tRZUpUHtgqwWxRWFZ4Ky7bgjQUM6W7sixd8DolnqWqU3ZhLAXyFJythrCxw
	/OumzZBCjy4+1XUlFyxsE6uxa1FMPta3LsOAi0unqKEWEuX5DBZFKnxNJ4hXwISkvowt
	UG6kq/SfHYZFuh3h+/ybxpu7JqpzmXWQwn4ZIwv+4LT/0LYlhB9gfFTfXv014tJ7AmvN
	eiIg==
Received: by 10.220.116.6 with SMTP id k6mr17692947vcq.59.1345662324154;
	Wed, 22 Aug 2012 12:05:24 -0700 (PDT)
Received: from phenom.dumpdata.com
	(209-6-85-33.c3-0.smr-ubr2.sbo-smr.ma.cable.rcn.com. [209.6.85.33])
	by mx.google.com with ESMTPS id n19sm2624141vde.5.2012.08.22.12.05.22
	(version=TLSv1/SSLv3 cipher=OTHER);
	Wed, 22 Aug 2012 12:05:23 -0700 (PDT)
Date: Wed, 22 Aug 2012 14:55:19 -0400
From: Konrad Rzeszutek Wilk <konrad@kernel.org>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20120822185519.GA29609@phenom.dumpdata.com>
References: <1345133009-21941-1-git-send-email-konrad.wilk@oracle.com>
	<1345133009-21941-3-git-send-email-konrad.wilk@oracle.com>
	<alpine.DEB.2.02.1208171835010.15568@kaball.uk.xensource.com>
	<20120820141305.GA2713@phenom.dumpdata.com>
	<20120821172732.GA23715@phenom.dumpdata.com>
	<20120821190317.GA13035@phenom.dumpdata.com>
	<50351DEF020000780009702A@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50351DEF020000780009702A@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] Q:pt_base in COMPAT mode offset by two pages.
 Was:Re: [PATCH 02/11] xen/x86: Use memblock_reserve for sensitive areas.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Aug 22, 2012 at 04:59:11PM +0100, Jan Beulich wrote:
> >>> On 21.08.12 at 21:03, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> > On Tue, Aug 21, 2012 at 01:27:32PM -0400, Konrad Rzeszutek Wilk wrote:
> >> On Mon, Aug 20, 2012 at 10:13:05AM -0400, Konrad Rzeszutek Wilk wrote:
> >> > On Fri, Aug 17, 2012 at 06:35:12PM +0100, Stefano Stabellini wrote:
> >> > > On Thu, 16 Aug 2012, Konrad Rzeszutek Wilk wrote:
> >> > > > instead of a big memblock_reserve. This way we can be more
> >> > > > selective in freeing regions (and it also makes it easier
> >> > > > to understand where is what).
> >> > > > 
> >> > > > [v1: Move the auto_translate_physmap to proper line]
> >> > > > [v2: Per Stefano suggestion add more comments]
> >> > > > Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> >> > > 
> >> > > much better now!
> >> > 
> >> > Though, interestingly enough, it breaks 32-bit dom0s (and only dom0s).
> >> > I'll have a revised patch posted shortly.
> >> 
> >> Jan, I noticed something odd. Part of this code replaces this:
> >> 
> >> 	memblock_reserve(__pa(xen_start_info->mfn_list),
> >> 		xen_start_info->pt_base - xen_start_info->mfn_list);
> >> 
> >> with a more region-by-region reservation. What I found is that if I boot
> >> this as a 32-bit guest on a 64-bit hypervisor, xen_start_info->pt_base is
> >> actually wrong.
> >> 
> >> Specifically this is what bootup says:
> >> 
> >> (good working case - 32bit hypervisor with 32-bit dom0):
> >> (XEN)  Loaded kernel: c1000000->c1a23000
> >> (XEN)  Init. ramdisk: c1a23000->cf730e00
> >> (XEN)  Phys-Mach map: cf731000->cf831000
> >> (XEN)  Start info:    cf831000->cf83147c
> >> (XEN)  Page tables:   cf832000->cf8b5000
> >> (XEN)  Boot stack:    cf8b5000->cf8b6000
> >> (XEN)  TOTAL:         c0000000->cfc00000
> >> 
> >> [    0.000000] PT: cf832000 (f832000)
> >> [    0.000000] Reserving PT: f832000->f8b5000
> >> 
> >> And with a 64-bit hypervisor:
> >> 
> >> XEN) VIRTUAL MEMORY ARRANGEMENT:
> >> (XEN)  Loaded kernel: 00000000c1000000->00000000c1a23000
> >> (XEN)  Init. ramdisk: 00000000c1a23000->00000000cf730e00
> >> (XEN)  Phys-Mach map: 00000000cf731000->00000000cf831000
> >> (XEN)  Start info:    00000000cf831000->00000000cf8314b4
> >> (XEN)  Page tables:   00000000cf832000->00000000cf8b6000
> >> (XEN)  Boot stack:    00000000cf8b6000->00000000cf8b7000
> >> (XEN)  TOTAL:         00000000c0000000->00000000cfc00000
> >> (XEN)  ENTRY ADDRESS: 00000000c16bb22c
> >> 
> >> [    0.000000] PT: cf834000 (f834000)
> >> [    0.000000] Reserving PT: f834000->f8b8000
> >> 
> >> So the pt_base is offset by two pages. And looking at c/s 13257,
> >> it's not clear to me why this two-page offset was added.
> 
> Actually, the adjustment turns out to be correct: The page
> tables for a 32-on-64 dom0 get allocated in the order "first L1",
> "first L2", "first L3", so the offset to the page table base is
> indeed 2. When reading xen/include/public/xen.h's comment
> very strictly, this is not a violation (since nothing is said there
> about the first thing in the page table space being what pt_base
> points to; I admit that this seems to be implied though, namely
> that the page table space is the range
> [pt_base, pt_base + nr_pt_frames), whereas the range here is
> really [pt_base - 2, pt_base - 2 + nr_pt_frames), which - without
> a priori knowledge - the kernel would have difficulty figuring
> out).
> 
> Below is a debugging patch I used to see the full picture, if you
> want to double check.

I'm thinking of just applying this patch, then:

>From 9aa58784b163ee435ff5596cf3ec059b57ab85e1 Mon Sep 17 00:00:00 2001
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Wed, 22 Aug 2012 13:00:10 -0400
Subject: [PATCH] Revert "xen/x86: Workaround 64-bit hypervisor and 32-bit
 initial domain." and "xen/x86: Use memblock_reserve for
 sensitive areas."

This reverts commit 806c312e50f122c47913145cf884f53dd09d9199 and
commit 59b294403e9814e7c1154043567f0d71bac7a511.

It also documents in setup.c why we want to do it that way: we tried
to make the memblock_reserve more selective so that it would be clear
which region is reserved. Sadly we ran into the problem wherein, on a
64-bit hypervisor with a 32-bit initial domain, pt_base holds the cr3
value, which is not necessarily where the page tables start! As Jan
put it: "Actually, the adjustment turns out to be correct: The page
tables for a 32-on-64 dom0 get allocated in the order "first L1",
"first L2", "first L3", so the offset to the page table base is
indeed 2. When reading xen/include/public/xen.h's comment
very strictly, this is not a violation (since nothing is said there
about the first thing in the page table space being what pt_base
points to; I admit that this seems to be implied though, namely
that the page table space is the range
[pt_base, pt_base + nr_pt_frames), whereas the range here is
really [pt_base - 2, pt_base - 2 + nr_pt_frames), which - without
a priori knowledge - the kernel would have difficulty figuring
out)." So let's just fall back to the easy way and reserve the
whole region.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/xen/enlighten.c |   60 ----------------------------------------------
 arch/x86/xen/p2m.c       |    5 ----
 arch/x86/xen/setup.c     |   27 ++++++++++++++++++++
 3 files changed, 27 insertions(+), 65 deletions(-)

diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index c1e940d..d87a038 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -998,66 +998,7 @@ static int xen_write_msr_safe(unsigned int msr, unsigned low, unsigned high)
 
 	return ret;
 }
-/*
- * If the MFN is not in the m2p (provided to us by the hypervisor) this
- * function won't do anything. In practice this means that the XenBus
- * MFN won't be available for the initial domain. */
-static unsigned long __init xen_reserve_mfn(unsigned long mfn)
-{
-	unsigned long pfn, end_pfn = 0;
-
-	if (!mfn)
-		return end_pfn;
-
-	pfn = mfn_to_pfn(mfn);
-	if (phys_to_machine_mapping_valid(pfn)) {
-		end_pfn = PFN_PHYS(pfn) + PAGE_SIZE;
-		memblock_reserve(PFN_PHYS(pfn), end_pfn);
-	}
-	return end_pfn;
-}
-static void __init xen_reserve_internals(void)
-{
-	unsigned long size;
-	unsigned long last_phys = 0;
-
-	if (!xen_pv_domain())
-		return;
-
-	/* xen_start_info does not exist in the M2P, hence can't use
-	 * xen_reserve_mfn. */
-	memblock_reserve(__pa(xen_start_info), PAGE_SIZE);
-	last_phys = __pa(xen_start_info) + PAGE_SIZE;
-
-	last_phys = max(xen_reserve_mfn(PFN_DOWN(xen_start_info->shared_info)), last_phys);
-	last_phys = max(xen_reserve_mfn(xen_start_info->store_mfn), last_phys);
 
-	if (!xen_initial_domain())
-		last_phys = max(xen_reserve_mfn(xen_start_info->console.domU.mfn), last_phys);
-
-	if (xen_feature(XENFEAT_auto_translated_physmap))
-		return;
-
-	/*
-	 * ALIGN up to compensate for the p2m_page pointing to an array that
-	 * can partially filled (look in xen_build_dynamic_phys_to_machine).
-	 */
-
-	size = PAGE_ALIGN(xen_start_info->nr_pages * sizeof(unsigned long));
-
-	/* We could use xen_reserve_mfn here, but would end up looping quite
-	 * a lot (and call memblock_reserve for each PAGE), so lets just use
-	 * the easy way and reserve it wholesale. */
-	memblock_reserve(__pa(xen_start_info->mfn_list), size);
-	last_phys = max(__pa(xen_start_info->mfn_list) + size, last_phys);
-	/* The pagetables are reserved in mmu.c */
-
-	/* Under 64-bit hypervisor with a 32-bit domain, the hypervisor
-	 * offsets the pt_base by two pages. Hence the reservation that is done
-	 * in mmu.c misses two pages. We correct it here if we detect this. */
-	if (last_phys < __pa(xen_start_info->pt_base))
-		memblock_reserve(last_phys, __pa(xen_start_info->pt_base) - last_phys);
-}
 void xen_setup_shared_info(void)
 {
 	if (!xen_feature(XENFEAT_auto_translated_physmap)) {
@@ -1418,7 +1359,6 @@ asmlinkage void __init xen_start_kernel(void)
 	xen_raw_console_write("mapping kernel into physical memory\n");
 	xen_setup_kernel_pagetable((pgd_t *)xen_start_info->pt_base, xen_start_info->nr_pages);
 
-	xen_reserve_internals();
 	/* Allocate and initialize top and mid mfn levels for p2m structure */
 	xen_build_mfn_list_list();
 
diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
index 95c3ea2..c3e9291 100644
--- a/arch/x86/xen/p2m.c
+++ b/arch/x86/xen/p2m.c
@@ -388,11 +388,6 @@ void __init xen_build_dynamic_phys_to_machine(void)
 	}
 
 	m2p_override_init();
-
-	/* NOTE: We cannot call memblock_reserve here for the mfn_list as there
-	 * isn't enough pieces to make it work (for one - we are still using the
-	 * Xen provided pagetable). Do it later in xen_reserve_internals.
-	 */
 }
 #ifdef CONFIG_X86_64
 #include <linux/bootmem.h>
diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
index 9efca75..740517b 100644
--- a/arch/x86/xen/setup.c
+++ b/arch/x86/xen/setup.c
@@ -424,6 +424,33 @@ char * __init xen_memory_setup(void)
 	e820_add_region(ISA_START_ADDRESS, ISA_END_ADDRESS - ISA_START_ADDRESS,
 			E820_RESERVED);
 
+	/*
+	 * Reserve Xen bits:
+	 *  - mfn_list
+	 *  - xen_start_info
+	 * See comment above "struct start_info" in <xen/interface/xen.h>
+	 * We tried to make the the memblock_reserve more selective so
+	 * that it would be clear what region is reserved. Sadly we ran
+	 * in the problem wherein on a 64-bit hypervisor with a 32-bit
+	 * initial domain, the pt_base has the cr3 value which is not
+	 * neccessarily where the pagetable starts! As Jan put it: "
+	 * Actually, the adjustment turns out to be correct: The page
+	 * tables for a 32-on-64 dom0 get allocated in the order "first L1",
+	 * "first L2", "first L3", so the offset to the page table base is
+	 * indeed 2. When reading xen/include/public/xen.h's comment
+	 * very strictly, this is not a violation (since there nothing is said
+	 * that the first thing in the page table space is pointed to by
+	 * pt_base; I admit that this seems to be implied though, namely
+	 * do I think that it is implied that the page table space is the
+	 * range [pt_base, pt_base + nt_pt_frames), whereas that
+	 * range here indeed is [pt_base - 2, pt_base - 2 + nt_pt_frames),
+	 * which - without a priori knowledge - the kernel would have
+	 * difficulty to figure out)." - so lets just fall back to the
+	 * easy way and reserve the whole region.
+	 */
+	memblock_reserve(__pa(xen_start_info->mfn_list),
+			 xen_start_info->pt_base - xen_start_info->mfn_list);
+
 	sanitize_e820_map(e820.map, ARRAY_SIZE(e820.map), &e820.nr_map);
 
 	return "Xen";
-- 
1.7.7.6


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 19:05:39 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 19:05:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4GF7-000266-QB; Wed, 22 Aug 2012 19:05:29 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ketuzsezr@gmail.com>) id 1T4GF6-00025z-7F
	for xen-devel@lists.xensource.com; Wed, 22 Aug 2012 19:05:28 +0000
Received: from [85.158.143.35:46865] by server-2.bemta-4.messagelabs.com id
	70/56-21239-77D25305; Wed, 22 Aug 2012 19:05:27 +0000
X-Env-Sender: ketuzsezr@gmail.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1345662324!11667178!1
X-Originating-IP: [209.85.212.43]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 371 invoked from network); 22 Aug 2012 19:05:25 -0000
Received: from mail-vb0-f43.google.com (HELO mail-vb0-f43.google.com)
	(209.85.212.43)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 19:05:25 -0000
Received: by vbbfq11 with SMTP id fq11so1804502vbb.30
	for <xen-devel@lists.xensource.com>;
	Wed, 22 Aug 2012 12:05:24 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:date:from:to:cc:subject:message-id:references:mime-version
	:content-type:content-disposition:in-reply-to:user-agent;
	bh=3OAIx6L7/BbeznqjWfDUB8FeM4uPin15xCxbaXEUWjY=;
	b=MXQKNv+heWSIrPGkTTobuqZ0TlngbnrjUkstMf7OAxO1zrqsag5LbVOK2GDgv3ZS42
	JbgA+2GveoIj9GE/38pEEYIJZxz4PQjtN2y6bjPrLY3kwuv9cNXnTK5TkUXfvxrl4pWU
	ShvY64GE8tRZUpUHtgqwWxRWFZ4Ky7bgjQUM6W7sixd8DolnqWqU3ZhLAXyFJythrCxw
	/OumzZBCjy4+1XUlFyxsE6uxa1FMPta3LsOAi0unqKEWEuX5DBZFKnxNJ4hXwISkvowt
	UG6kq/SfHYZFuh3h+/ybxpu7JqpzmXWQwn4ZIwv+4LT/0LYlhB9gfFTfXv014tJ7AmvN
	eiIg==
Received: by 10.220.116.6 with SMTP id k6mr17692947vcq.59.1345662324154;
	Wed, 22 Aug 2012 12:05:24 -0700 (PDT)
Received: from phenom.dumpdata.com
	(209-6-85-33.c3-0.smr-ubr2.sbo-smr.ma.cable.rcn.com. [209.6.85.33])
	by mx.google.com with ESMTPS id n19sm2624141vde.5.2012.08.22.12.05.22
	(version=TLSv1/SSLv3 cipher=OTHER);
	Wed, 22 Aug 2012 12:05:23 -0700 (PDT)
Date: Wed, 22 Aug 2012 14:55:19 -0400
From: Konrad Rzeszutek Wilk <konrad@kernel.org>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20120822185519.GA29609@phenom.dumpdata.com>
References: <1345133009-21941-1-git-send-email-konrad.wilk@oracle.com>
	<1345133009-21941-3-git-send-email-konrad.wilk@oracle.com>
	<alpine.DEB.2.02.1208171835010.15568@kaball.uk.xensource.com>
	<20120820141305.GA2713@phenom.dumpdata.com>
	<20120821172732.GA23715@phenom.dumpdata.com>
	<20120821190317.GA13035@phenom.dumpdata.com>
	<50351DEF020000780009702A@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50351DEF020000780009702A@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] Q:pt_base in COMPAT mode offset by two pages.
 Was:Re: [PATCH 02/11] xen/x86: Use memblock_reserve for sensitive areas.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Aug 22, 2012 at 04:59:11PM +0100, Jan Beulich wrote:
> >>> On 21.08.12 at 21:03, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> > On Tue, Aug 21, 2012 at 01:27:32PM -0400, Konrad Rzeszutek Wilk wrote:
> >> On Mon, Aug 20, 2012 at 10:13:05AM -0400, Konrad Rzeszutek Wilk wrote:
> >> > On Fri, Aug 17, 2012 at 06:35:12PM +0100, Stefano Stabellini wrote:
> >> > > On Thu, 16 Aug 2012, Konrad Rzeszutek Wilk wrote:
> >> > > > instead of a big memblock_reserve. This way we can be more
> >> > > > selective in freeing regions (and it also makes it easier
> >> > > > to understand where is what).
> >> > > > 
> >> > > > [v1: Move the auto_translate_physmap to proper line]
> >> > > > [v2: Per Stefano suggestion add more comments]
> >> > > > Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> >> > > 
> >> > > much better now!
> >> > 
> >> > Though interestingly enough it breaks 32-bit dom0s (and only dom0s).
> >> > Will have a revised patch posted shortly.
> >> 
> >> Jan, I thought something odd. Part of this code replaces this:
> >> 
> >> 	memblock_reserve(__pa(xen_start_info->mfn_list),
> >> 		xen_start_info->pt_base - xen_start_info->mfn_list);
> >> 
> >> with a more region-by-region area. What I found out that if I boot this
> >> as 32-bit guest with a 64-bit hypervisor the xen_start_info->pt_base is
> >> actually wrong.
> >> 
> >> Specifically this is what bootup says:
> >> 
> >> (good working case - 32bit hypervisor with 32-bit dom0):
> >> (XEN)  Loaded kernel: c1000000->c1a23000
> >> (XEN)  Init. ramdisk: c1a23000->cf730e00
> >> (XEN)  Phys-Mach map: cf731000->cf831000
> >> (XEN)  Start info:    cf831000->cf83147c
> >> (XEN)  Page tables:   cf832000->cf8b5000
> >> (XEN)  Boot stack:    cf8b5000->cf8b6000
> >> (XEN)  TOTAL:         c0000000->cfc00000
> >> 
> >> [    0.000000] PT: cf832000 (f832000)
> >> [    0.000000] Reserving PT: f832000->f8b5000
> >> 
> >> And with a 64-bit hypervisor:
> >> 
> >> (XEN) VIRTUAL MEMORY ARRANGEMENT:
> >> (XEN)  Loaded kernel: 00000000c1000000->00000000c1a23000
> >> (XEN)  Init. ramdisk: 00000000c1a23000->00000000cf730e00
> >> (XEN)  Phys-Mach map: 00000000cf731000->00000000cf831000
> >> (XEN)  Start info:    00000000cf831000->00000000cf8314b4
> >> (XEN)  Page tables:   00000000cf832000->00000000cf8b6000
> >> (XEN)  Boot stack:    00000000cf8b6000->00000000cf8b7000
> >> (XEN)  TOTAL:         00000000c0000000->00000000cfc00000
> >> (XEN)  ENTRY ADDRESS: 00000000c16bb22c
> >> 
> >> [    0.000000] PT: cf834000 (f834000)
> >> [    0.000000] Reserving PT: f834000->f8b8000
> >> 
> >> So the pt_base is offset by two pages. And looking at c/s 13257
> >> it's not clear to me why this two-page offset was added?
> 
> Actually, the adjustment turns out to be correct: The page
> tables for a 32-on-64 dom0 get allocated in the order "first L1",
> "first L2", "first L3", so the offset to the page table base is
> indeed 2. When reading xen/include/public/xen.h's comment
> very strictly, this is not a violation (since there nothing is said
> that the first thing in the page table space is pointed to by
> pt_base; I admit that this seems to be implied though, namely
> do I think that it is implied that the page table space is the
> range [pt_base, pt_base + nt_pt_frames), whereas that
> range here indeed is [pt_base - 2, pt_base - 2 + nt_pt_frames),
> which - without a priori knowledge - the kernel would have
> difficulty to figure out).
> 
> Below is a debugging patch I used to see the full picture, if you
> want to double check.

Thinking of just sticking this patch in, then:

>From 9aa58784b163ee435ff5596cf3ec059b57ab85e1 Mon Sep 17 00:00:00 2001
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Wed, 22 Aug 2012 13:00:10 -0400
Subject: [PATCH] Revert "xen/x86: Workaround 64-bit hypervisor and 32-bit
 initial domain." and "xen/x86: Use memblock_reserve for
 sensitive areas."

This reverts commit 806c312e50f122c47913145cf884f53dd09d9199 and
commit 59b294403e9814e7c1154043567f0d71bac7a511.

This also documents in setup.c why we want to do it that way, which
is that we tried to make the memblock_reserve more selective so
that it would be clear what region is reserved. Sadly we ran
into the problem wherein, on a 64-bit hypervisor with a 32-bit
initial domain, pt_base has the cr3 value, which is not
necessarily where the pagetable starts! As Jan put it: "
Actually, the adjustment turns out to be correct: The page
tables for a 32-on-64 dom0 get allocated in the order "first L1",
"first L2", "first L3", so the offset to the page table base is
indeed 2. When reading xen/include/public/xen.h's comment
very strictly, this is not a violation (since there nothing is said
that the first thing in the page table space is pointed to by
pt_base; I admit that this seems to be implied though, namely
do I think that it is implied that the page table space is the
range [pt_base, pt_base + nt_pt_frames), whereas that
range here indeed is [pt_base - 2, pt_base - 2 + nt_pt_frames),
which - without a priori knowledge - the kernel would have
difficulty to figure out)." - so let's just fall back to the
easy way and reserve the whole region.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/xen/enlighten.c |   60 ----------------------------------------------
 arch/x86/xen/p2m.c       |    5 ----
 arch/x86/xen/setup.c     |   27 ++++++++++++++++++++
 3 files changed, 27 insertions(+), 65 deletions(-)

diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index c1e940d..d87a038 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -998,66 +998,7 @@ static int xen_write_msr_safe(unsigned int msr, unsigned low, unsigned high)
 
 	return ret;
 }
-/*
- * If the MFN is not in the m2p (provided to us by the hypervisor) this
- * function won't do anything. In practice this means that the XenBus
- * MFN won't be available for the initial domain. */
-static unsigned long __init xen_reserve_mfn(unsigned long mfn)
-{
-	unsigned long pfn, end_pfn = 0;
-
-	if (!mfn)
-		return end_pfn;
-
-	pfn = mfn_to_pfn(mfn);
-	if (phys_to_machine_mapping_valid(pfn)) {
-		end_pfn = PFN_PHYS(pfn) + PAGE_SIZE;
-		memblock_reserve(PFN_PHYS(pfn), end_pfn);
-	}
-	return end_pfn;
-}
-static void __init xen_reserve_internals(void)
-{
-	unsigned long size;
-	unsigned long last_phys = 0;
-
-	if (!xen_pv_domain())
-		return;
-
-	/* xen_start_info does not exist in the M2P, hence can't use
-	 * xen_reserve_mfn. */
-	memblock_reserve(__pa(xen_start_info), PAGE_SIZE);
-	last_phys = __pa(xen_start_info) + PAGE_SIZE;
-
-	last_phys = max(xen_reserve_mfn(PFN_DOWN(xen_start_info->shared_info)), last_phys);
-	last_phys = max(xen_reserve_mfn(xen_start_info->store_mfn), last_phys);
 
-	if (!xen_initial_domain())
-		last_phys = max(xen_reserve_mfn(xen_start_info->console.domU.mfn), last_phys);
-
-	if (xen_feature(XENFEAT_auto_translated_physmap))
-		return;
-
-	/*
-	 * ALIGN up to compensate for the p2m_page pointing to an array that
-	 * can partially filled (look in xen_build_dynamic_phys_to_machine).
-	 */
-
-	size = PAGE_ALIGN(xen_start_info->nr_pages * sizeof(unsigned long));
-
-	/* We could use xen_reserve_mfn here, but would end up looping quite
-	 * a lot (and call memblock_reserve for each PAGE), so lets just use
-	 * the easy way and reserve it wholesale. */
-	memblock_reserve(__pa(xen_start_info->mfn_list), size);
-	last_phys = max(__pa(xen_start_info->mfn_list) + size, last_phys);
-	/* The pagetables are reserved in mmu.c */
-
-	/* Under 64-bit hypervisor with a 32-bit domain, the hypervisor
-	 * offsets the pt_base by two pages. Hence the reservation that is done
-	 * in mmu.c misses two pages. We correct it here if we detect this. */
-	if (last_phys < __pa(xen_start_info->pt_base))
-		memblock_reserve(last_phys, __pa(xen_start_info->pt_base) - last_phys);
-}
 void xen_setup_shared_info(void)
 {
 	if (!xen_feature(XENFEAT_auto_translated_physmap)) {
@@ -1418,7 +1359,6 @@ asmlinkage void __init xen_start_kernel(void)
 	xen_raw_console_write("mapping kernel into physical memory\n");
 	xen_setup_kernel_pagetable((pgd_t *)xen_start_info->pt_base, xen_start_info->nr_pages);
 
-	xen_reserve_internals();
 	/* Allocate and initialize top and mid mfn levels for p2m structure */
 	xen_build_mfn_list_list();
 
diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
index 95c3ea2..c3e9291 100644
--- a/arch/x86/xen/p2m.c
+++ b/arch/x86/xen/p2m.c
@@ -388,11 +388,6 @@ void __init xen_build_dynamic_phys_to_machine(void)
 	}
 
 	m2p_override_init();
-
-	/* NOTE: We cannot call memblock_reserve here for the mfn_list as there
-	 * isn't enough pieces to make it work (for one - we are still using the
-	 * Xen provided pagetable). Do it later in xen_reserve_internals.
-	 */
 }
 #ifdef CONFIG_X86_64
 #include <linux/bootmem.h>
diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
index 9efca75..740517b 100644
--- a/arch/x86/xen/setup.c
+++ b/arch/x86/xen/setup.c
@@ -424,6 +424,33 @@ char * __init xen_memory_setup(void)
 	e820_add_region(ISA_START_ADDRESS, ISA_END_ADDRESS - ISA_START_ADDRESS,
 			E820_RESERVED);
 
+	/*
+	 * Reserve Xen bits:
+	 *  - mfn_list
+	 *  - xen_start_info
+	 * See comment above "struct start_info" in <xen/interface/xen.h>
+	 * We tried to make the memblock_reserve more selective so
+	 * that it would be clear what region is reserved. Sadly we ran
+	 * into the problem wherein, on a 64-bit hypervisor with a 32-bit
+	 * initial domain, pt_base has the cr3 value, which is not
+	 * necessarily where the pagetable starts! As Jan put it: "
+	 * Actually, the adjustment turns out to be correct: The page
+	 * tables for a 32-on-64 dom0 get allocated in the order "first L1",
+	 * "first L2", "first L3", so the offset to the page table base is
+	 * indeed 2. When reading xen/include/public/xen.h's comment
+	 * very strictly, this is not a violation (since there nothing is said
+	 * that the first thing in the page table space is pointed to by
+	 * pt_base; I admit that this seems to be implied though, namely
+	 * do I think that it is implied that the page table space is the
+	 * range [pt_base, pt_base + nt_pt_frames), whereas that
+	 * range here indeed is [pt_base - 2, pt_base - 2 + nt_pt_frames),
+	 * which - without a priori knowledge - the kernel would have
+	 * difficulty to figure out)." - so let's just fall back to the
+	 * easy way and reserve the whole region.
+	 */
+	memblock_reserve(__pa(xen_start_info->mfn_list),
+			 xen_start_info->pt_base - xen_start_info->mfn_list);
+
 	sanitize_e820_map(e820.map, ARRAY_SIZE(e820.map), &e820.nr_map);
 
 	return "Xen";
-- 
1.7.7.6


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 21:18:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 21:18:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4IJ7-0003ob-8W; Wed, 22 Aug 2012 21:17:45 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1T4IJ6-0003oW-Ag
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 21:17:44 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1345670258!10271612!1
X-Originating-IP: [209.85.212.179]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26412 invoked from network); 22 Aug 2012 21:17:38 -0000
Received: from mail-wi0-f179.google.com (HELO mail-wi0-f179.google.com)
	(209.85.212.179)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Aug 2012 21:17:38 -0000
Received: by wibhq4 with SMTP id hq4so75554wib.14
	for <xen-devel@lists.xen.org>; Wed, 22 Aug 2012 14:17:38 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:user-agent:date:subject:from:to:cc:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=5lg2qmRG19j39KOgFQS09Lait6CwYd5GVT1DhcRVFM0=;
	b=G483znJzrDZPwQiB6RQfO8adJ9gizRihhBVj8PM6QJRpfukBEYI9LYi78OtXTsa2dx
	KTYLER9hizsg/ClDO/Rms+5eyCoCYekp2FQU6HTY7PmKsqbt5lK7znWGdzfrIPr8SOs9
	xLnZx/lWHwaH93bFwoErDsUuCTCshG/0EXG6QN/4sA5sRGaWikq6Y2AXQowC+WbJizJS
	q7CtTi3IezkbE14bPQGU/kisxVQpLuvfut2QtBlGH4aeGk0eSc/pu84vBpBg+LdF2rvC
	nrjH2JWgh59ftjuxmxvMCO1IyxDxBFC+c9y5Q57mBOufHTIhXyYSn4OTlmJb2PYD8c8F
	AwuQ==
Received: by 10.180.104.197 with SMTP id gg5mr8944925wib.9.1345670257974;
	Wed, 22 Aug 2012 14:17:37 -0700 (PDT)
Received: from [192.168.1.3] (host86-184-74-117.range86-184.btcentralplus.com.
	[86.184.74.117])
	by mx.google.com with ESMTPS id t7sm15432607wix.6.2012.08.22.14.17.36
	(version=SSLv3 cipher=OTHER); Wed, 22 Aug 2012 14:17:37 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.33.0.120411
Date: Wed, 22 Aug 2012 22:17:32 +0100
From: Keir Fraser <keir@xen.org>
To: Junjie Wei <junjie.wei@oracle.com>
Message-ID: <CC5B0AFC.49ED4%keir@xen.org>
Thread-Topic: [Xen-devel] VM save/restore
Thread-Index: Ac2Aq4rLnKHghlWqwUGpBhHI28Sbrw==
In-Reply-To: <5032A4DE.5040307@oracle.com>
Mime-version: 1.0
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] VM save/restore
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 20/08/2012 21:58, "Junjie Wei" <junjie.wei@oracle.com> wrote:

>> The check for 64 VCPUs is to cover the fact we only save/restore a 64-bit
>> vcpumap. That would need fixing too, surely, or CPUs > 64 would be offline
>> after restore I would imagine.
>> 
>> And what is a PVM guest?
>> 
>>   -- Keir
>> 
> 
> Paravirtualization / modified kernel. Am I using the wrong term "PVM"?

Ah, I realised that in the end. They're usually just called "PV".

 -- Keir



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 22:12:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 22:12:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4J9x-0004Ek-63; Wed, 22 Aug 2012 22:12:21 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1T4J9v-0004Ef-RV
	for Xen-devel@lists.xensource.com; Wed, 22 Aug 2012 22:12:19 +0000
Received: from [85.158.143.35:39853] by server-1.bemta-4.messagelabs.com id
	74/A9-07754-34955305; Wed, 22 Aug 2012 22:12:19 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1345673534!12401498!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNzA2NDE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1752 invoked from network); 22 Aug 2012 22:12:15 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-12.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 22 Aug 2012 22:12:15 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7MMCCWU009368
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK)
	for <Xen-devel@lists.xensource.com>; Wed, 22 Aug 2012 22:12:13 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7MMCB6r019813
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO)
	for <Xen-devel@lists.xensource.com>; Wed, 22 Aug 2012 22:12:12 GMT
Received: from abhmt117.oracle.com (abhmt117.oracle.com [141.146.116.69])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7MMCBMk009232
	for <Xen-devel@lists.xensource.com>; Wed, 22 Aug 2012 17:12:11 -0500
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 22 Aug 2012 15:12:11 -0700
Date: Wed, 22 Aug 2012 15:12:09 -0700
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>
Message-ID: <20120822151209.40a9e54c@mantra.us.oracle.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.7.6 (GTK+ 2.18.9; x86_64-redhat-linux-gnu)
Mime-Version: 1.0
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Subject: [Xen-devel] mercurial sdiff
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi,

Has anyone figured out a way to do sdiff (side-by-side diff) in hg, or
has anyone written an extension to do this?
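One possibility (not from this thread, and assuming GNU sdiff is installed): Mercurial's bundled extdiff extension can wrap any external diff tool, including sdiff. A minimal sketch:

```shell
# Sketch only: wire GNU sdiff into hg via the bundled extdiff extension.
# This appends to ~/.hgrc; adjust if your config lives elsewhere.
cat >> ~/.hgrc <<'EOF'
[extensions]
extdiff =
[extdiff]
# defines a new "hg sdiff" command that runs: sdiff <old-dir> <new-dir>
cmd.sdiff = sdiff
EOF
# Usage (assumes sdiff is on PATH):
#   hg sdiff -r tip
```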

thanks,
Mukesh

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 22:20:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 22:20:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4JHB-0004O1-2t; Wed, 22 Aug 2012 22:19:49 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <brendan@cs.ubc.ca>) id 1T4JHA-0004Nt-DK
	for Xen-devel@lists.xensource.com; Wed, 22 Aug 2012 22:19:48 +0000
Received: from [85.158.143.35:27147] by server-3.bemta-4.messagelabs.com id
	F7/66-09529-30B55305; Wed, 22 Aug 2012 22:19:47 +0000
X-Env-Sender: brendan@cs.ubc.ca
X-Msg-Ref: server-4.tower-21.messagelabs.com!1345673986!5335340!1
X-Originating-IP: [198.162.52.68]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30648 invoked from network); 22 Aug 2012 22:19:46 -0000
Received: from zakopane.cs.ubc.ca (HELO mail.quuxuum.com) (198.162.52.68)
	by server-4.tower-21.messagelabs.com with SMTP;
	22 Aug 2012 22:19:46 -0000
Received: from kremvax.kublai.com (kremvax.kublai.com [IPv6:2001:470:e865::3])
	by mail.quuxuum.com (Postfix) with ESMTPSA id ADD03601D5;
	Wed, 22 Aug 2012 15:19:52 -0700 (PDT)
Mime-Version: 1.0 (Mac OS X Mail 6.0 \(1485\))
From: Brendan Cully <brendan@cs.ubc.ca>
In-Reply-To: <20120822151209.40a9e54c@mantra.us.oracle.com>
Date: Wed, 22 Aug 2012 15:19:45 -0700
Message-Id: <58AE0BEC-FA6A-418E-A8ED-32DEC316743D@cs.ubc.ca>
References: <20120822151209.40a9e54c@mantra.us.oracle.com>
To: Mukesh Rathor <mukesh.rathor@oracle.com>
X-Mailer: Apple Mail (2.1485)
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] mercurial sdiff
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Good timing:

From http://mercurial.selenic.com/wiki/WhatsNew#Mercurial_2.3_.282012-08-01.29

 * hgweb: side-by-side comparison functionality

On 2012-08-22, at 3:12 PM, Mukesh Rathor <mukesh.rathor@oracle.com> wrote:

> Hi,
> 
> Anyone figured out a way to do sdiff (side by side diff) in hg, or
> anyone written an extension to do this?
> 
> thanks,
> Mukesh
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 22 23:48:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Aug 2012 23:48:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4Kei-00050q-TI; Wed, 22 Aug 2012 23:48:12 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <jonnyt@abpni.co.uk>) id 1T4Keh-00050b-7m
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 23:48:11 +0000
X-Env-Sender: jonnyt@abpni.co.uk
X-Msg-Ref: server-6.tower-27.messagelabs.com!1345679284!4304733!1
X-Originating-IP: [109.200.19.114]
X-SpamReason: No, hits=0.0 required=7.0 tests=UPPERCASE_25_50
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15237 invoked from network); 22 Aug 2012 23:48:04 -0000
Received: from edge1.gosport.uk.abpni.net (HELO
	mail1.gosport.uk.corp.abpni.net) (109.200.19.114)
	by server-6.tower-27.messagelabs.com with SMTP;
	22 Aug 2012 23:48:04 -0000
Received: from localhost (mail1.gosport.corp.uk.abpni.net [127.0.0.1])
	by mail1.gosport.uk.corp.abpni.net (Postfix) with ESMTP id 3B20E5A0006
	for <xen-devel@lists.xen.org>; Thu, 23 Aug 2012 00:47:42 +0100 (BST)
X-Virus-Scanned: Debian amavisd-new at mail1.mail.gosport.corp.uk.abpni.net
Received: from mail1.gosport.uk.corp.abpni.net ([127.0.0.1])
	by localhost (mail1.mail.gosport.corp.uk.abpni.net [127.0.0.1])
	(amavisd-new, port 10024)
	with ESMTP id IlsV4+JkzE+N for <xen-devel@lists.xen.org>;
	Thu, 23 Aug 2012 00:47:39 +0100 (BST)
Received: from Jonathans-MacBook-Air.local (unknown [10.87.0.109])
	by mail1.gosport.uk.corp.abpni.net (Postfix) with ESMTPSA id
	9F4465A0005
	for <xen-devel@lists.xen.org>; Thu, 23 Aug 2012 00:47:39 +0100 (BST)
Message-ID: <50356FB1.2070904@abpni.co.uk>
Date: Thu, 23 Aug 2012 00:48:01 +0100
From: Jonathan Tripathy <jonnyt@abpni.co.uk>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8;
	rv:14.0) Gecko/20120713 Thunderbird/14.0
MIME-Version: 1.0
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
References: <50356E43.3030208@abpni.co.uk>
In-Reply-To: <50356E43.3030208@abpni.co.uk>
X-Forwarded-Message-Id: <50356E43.3030208@abpni.co.uk>
Subject: [Xen-devel] Can't see more than 3.5GB of RAM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Hi Everyone,

CC: Xen-users

I am running Ubuntu 12.04 x86_64. My machine has a Supermicro
X9SCI-LN4F motherboard with 32GB of RAM installed. To get Xen, I simply
did "apt-get install xen-hypervisor", which gives me Ubuntu's Xen 4.1 version.

For some reason, Xen can't see any more than about 3.5GB of RAM. I can
confirm this by xentop as well as xm info. I am definitely running a
64-bit Dom0 kernel, as when I boot into it without Xen, I can see all
32GB of RAM by running "free -m".

Has anybody come across this issue before? For what it's worth, I'm
booting my system using UEFI - could that have something to do with it?

Any help is very much appreciated

Thanks

Here is the output of xm dmesg:

(XEN) Xen version 4.1.2 (Ubuntu 4.1.2-2ubuntu2.2) 
(stefan.bader@canonical.com) (gcc version 4.6.3 (Ubuntu/Linaro 
4.6.3-1ubuntu5) ) Sat Jul 21 09:01:19 UTC 2012
(XEN) Bootloader: GRUB 1.99-21ubuntu3.1
(XEN) Command line: placeholder
(XEN) Video information:
(XEN)  VGA is text mode 80x25, font 8x16
(XEN)  VBE/DDC methods: V2; EDID transfer time: 1 seconds
(XEN) Disc information:
(XEN)  Found 0 MBR signatures
(XEN)  Found 0 EDD information structures
(XEN) Xen-e801 RAM map:
(XEN)  0000000000000000 - 0000000000099c00 (usable)
(XEN)  0000000000100000 - 00000000ddd00000 (usable)
(XEN) System RAM: 3548MB (3633764kB)
(XEN) ACPI: RSDP 000FDF00, 0024 (r2 SUPERM)
(XEN) ACPI: XSDT DDF9E098, 00AC (r1 SUPERM SMCI--MB        1 AMI     10013)
(XEN) ACPI: FACP DDFA90D8, 00F4 (r4 SUPERM SMCI--MB        1 AMI     10013)
(XEN) ACPI: DSDT DDF9E1D8, AEFA (r2 SUPERM SMCI--MB        0 INTL 20051117)
(XEN) ACPI: FACS DDFBDF80, 0040
(XEN) ACPI: APIC DDFA91D0, 0092 (r3 SUPERM SMCI--MB        1 AMI     10013)
(XEN) ACPI: FPDT DDFA9268, 0044 (r1 SUPERM SMCI--MB        1 AMI     10013)
(XEN) ACPI: MCFG DDFA92B0, 003C (r1 SUPERM SMCI--MB        1 MSFT       97)
(XEN) ACPI: PRAD DDFA92F0, 00BE (r2 PRADID  PRADTID        1 MSFT 3000001)
(XEN) ACPI: HPET DDFA93B0, 0038 (r1 SUPERM SMCI--MB        1 AMI.        5)
(XEN) ACPI: SSDT DDFA93E8, 036D (r1 SataRe SataTabl     1000 INTL 20091112)
(XEN) ACPI: SPMI DDFA9758, 0040 (r5 A M I   OEMSPMI        0 AMI.        0)
(XEN) ACPI: SSDT DDFA9798, 09A4 (r1  PmRef  Cpu0Ist     3000 INTL 20051117)
(XEN) ACPI: SSDT DDFAA140, 0A88 (r1  PmRef    CpuPm     3000 INTL 20051117)
(XEN) ACPI: DMAR DDFAABC8, 0078 (r1 INTEL      SNB         1 INTL        1)
(XEN) ACPI: BGRT DDFAAC40, 0038 (r0 SUPERM SMCI--MB        1 AMI     10013)
(XEN) ACPI: SPCR DDFAAC78, 0050 (r1  A M I   APTIO4        1 AMI.        5)
(XEN) ACPI: EINJ DDFAACC8, 0130 (r1    AMI AMI EINJ 0             0)
(XEN) ACPI: ERST DDFAADF8, 0210 (r1  AMIER AMI ERST 0             0)
(XEN) ACPI: HEST DDFAB008, 00A8 (r1    AMI AMI HEST 0             0)
(XEN) ACPI: BERT DDFAB0B0, 0030 (r1    AMI AMI BERT 0             0)
(XEN) Domain heap initialised
(XEN) ACPI: 32/64X FACS address mismatch in FADT - 
ddfbdf80/0000000000000000, using 32
(XEN) Processor #0 7:10 APIC version 21
(XEN) Processor #2 7:10 APIC version 21
(XEN) Processor #4 7:10 APIC version 21
(XEN) Processor #6 7:10 APIC version 21
(XEN) Processor #1 7:10 APIC version 21
(XEN) Processor #3 7:10 APIC version 21
(XEN) Processor #5 7:10 APIC version 21
(XEN) Processor #7 7:10 APIC version 21
(XEN) IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-23
(XEN) Enabling APIC mode:  Flat.  Using 1 I/O APICs
(XEN) [VT-D]dmar.c:528:   RMRR address range not in reserved memory base 
= dde16000 end = dde32fff; iommu_inclusive_mapping=1 parameter may be 
needed.
(XEN) ERST table is invalid
(XEN) Switched to APIC driver x2apic_cluster.
(XEN) Using scheduler: SMP Credit Scheduler (credit)
(XEN) Detected 3292.644 MHz processor.
(XEN) Initing memory sharing.
(XEN) Intel VT-d Snoop Control enabled.
(XEN) Intel VT-d Dom0 DMA Passthrough not enabled.
(XEN) Intel VT-d Queued Invalidation enabled.
(XEN) Intel VT-d Interrupt Remapping enabled.
(XEN) Intel VT-d Shared EPT tables not enabled.
(XEN) I/O virtualisation enabled
(XEN)  - Dom0 mode: Relaxed
(XEN) Enabled directed EOI with ioapic_ack_old on!
(XEN) ENABLING IO-APIC IRQs
(XEN)  -> Using old ACK method
(XEN) Platform timer is 14.318MHz HPET
(XEN) Allocated console ring of 16 KiB.
(XEN) VMX: Supported advanced features:
(XEN)  - APIC MMIO access virtualisation
(XEN)  - APIC TPR shadow
(XEN)  - Extended Page Tables (EPT)
(XEN)  - Virtual-Processor Identifiers (VPID)
(XEN)  - Virtual NMI
(XEN)  - MSR direct-access bitmap
(XEN)  - Unrestricted Guest
(XEN) EPT supports 2MB super page.
(XEN) HVM: ASIDs enabled.
(XEN) HVM: VMX enabled
(XEN) HVM: Hardware Assisted Paging detected.
(XEN) Brought up 8 CPUs
(XEN) *** LOADING DOMAIN 0 ***
(XEN)  Xen  kernel: 64-bit, lsb, compat32
(XEN)  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x205d000
(XEN) PHYSICAL MEMORY ARRANGEMENT:
(XEN)  Dom0 alloc.:   00000000bc000000->00000000c0000000 (744642 pages 
to be allocated)
(XEN)  Init. ramdisk: 00000000c4b6a000->00000000dd7ffe00
(XEN) VIRTUAL MEMORY ARRANGEMENT:
(XEN)  Loaded kernel: ffffffff81000000->ffffffff8205d000
(XEN)  Init. ramdisk: ffffffff8205d000->ffffffff9acf2e00
(XEN)  Phys-Mach map: ffffffff9acf3000->ffffffff9b387ac0
(XEN)  Start info:    ffffffff9b388000->ffffffff9b3884b4
(XEN)  Page tables:   ffffffff9b389000->ffffffff9b468000
(XEN)  Boot stack:    ffffffff9b468000->ffffffff9b469000
(XEN)  TOTAL:         ffffffff80000000->ffffffff9b800000
(XEN)  ENTRY ADDRESS: ffffffff81cfd200
(XEN) Dom0 has maximum 8 VCPUs
(XEN) Scrubbing Free RAM: .done.
(XEN) Xen trace buffers: disabled
(XEN) Std. Loglevel: Errors and warnings
(XEN) Guest Loglevel: Nothing (Rate-limited: Errors and warnings)
(XEN) Xen is relinquishing VGA console.
(XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch 
input to Xen)
(XEN) Freed 220kB init memory.
(XEN) physdev.c:155: dom0: wrong map_pirq type 3
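As a cross-check on the log above (arithmetic only, not a fix): the two usable ranges in the "Xen-e801 RAM map" account exactly for the reported "System RAM: 3548MB (3633764kB)" once the first range is rounded down to a page boundary, so Xen really is only being told about ~3.5GB:

```shell
# Sum the usable e801 ranges from the log, in kB:
#   0x0000000000000000 - 0x0000000000099c00
#   0x0000000000100000 - 0x00000000ddd00000
# The first range is rounded down to a 4 KiB page boundary, as Xen does.
echo $(( ( (0x99c00 / 4096) * 4096 + (0xddd00000 - 0x100000) ) / 1024 ))   # prints 3633764
```

Since the legacy e801 interface cannot describe memory above 4GB, this is consistent with the firmware/boot path never handing Xen a full e820 map here.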




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 00:12:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 00:12:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4L1d-0006Db-67; Thu, 23 Aug 2012 00:11:53 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jonnyt@abpni.co.uk>) id 1T4L1b-0006DS-FB
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 00:11:51 +0000
Received: from [85.158.138.51:59492] by server-4.bemta-3.messagelabs.com id
	4F/32-04276-64575305; Thu, 23 Aug 2012 00:11:50 +0000
X-Env-Sender: jonnyt@abpni.co.uk
X-Msg-Ref: server-6.tower-174.messagelabs.com!1345680708!19483688!1
X-Originating-IP: [109.200.19.114]
X-SpamReason: No, hits=0.0 required=7.0 tests=UPPERCASE_25_50
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28645 invoked from network); 23 Aug 2012 00:11:48 -0000
Received: from edge1.gosport.uk.abpni.net (HELO
	mail1.gosport.uk.corp.abpni.net) (109.200.19.114)
	by server-6.tower-174.messagelabs.com with SMTP;
	23 Aug 2012 00:11:48 -0000
Received: from localhost (mail1.gosport.corp.uk.abpni.net [127.0.0.1])
	by mail1.gosport.uk.corp.abpni.net (Postfix) with ESMTP id 88FD35A0006
	for <xen-devel@lists.xen.org>; Thu, 23 Aug 2012 01:11:25 +0100 (BST)
X-Virus-Scanned: Debian amavisd-new at mail1.mail.gosport.corp.uk.abpni.net
Received: from mail1.gosport.uk.corp.abpni.net ([127.0.0.1])
	by localhost (mail1.mail.gosport.corp.uk.abpni.net [127.0.0.1])
	(amavisd-new, port 10024)
	with ESMTP id Ksoxv6xgFFux for <xen-devel@lists.xen.org>;
	Thu, 23 Aug 2012 01:11:21 +0100 (BST)
Received: from Jonathans-MacBook-Air.local (unknown [10.87.0.109])
	by mail1.gosport.uk.corp.abpni.net (Postfix) with ESMTPSA id
	0AE6A5A0005
	for <xen-devel@lists.xen.org>; Thu, 23 Aug 2012 01:11:20 +0100 (BST)
Message-ID: <5035753F.5080109@abpni.co.uk>
Date: Thu, 23 Aug 2012 01:11:43 +0100
From: Jonathan Tripathy <jonnyt@abpni.co.uk>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8;
	rv:14.0) Gecko/20120713 Thunderbird/14.0
MIME-Version: 1.0
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
References: <50356E43.3030208@abpni.co.uk> <50356FB1.2070904@abpni.co.uk>
In-Reply-To: <50356FB1.2070904@abpni.co.uk>
Subject: Re: [Xen-devel] Can't see more than 3.5GB of RAM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 23/08/2012 00:48, Jonathan Tripathy wrote:
>
> Hi Everyone,
>
> CC: Xen-users
>
> I am running Ubuntu 12.04 x86_64. My machine has a Supermicro
> X9SCI-LN4F motherboard with 32GB of RAM installed. To get Xen, I simply
> did apt-get install xen-hypervisor, which gives me Ubuntu's Xen 4.1 
> version.
>
> For some reason, Xen can't see any more than about 3.5GB of RAM. I can
> confirm this with xentop as well as xm info. I am definitely running a
> 64-bit Dom0 kernel as when I boot into it without Xen, I can see all
> 32GB of RAM by running "free -m".
>
> Has anybody come across this issue before? For what it's worth, I'm
> booting my system using UEFI - could that have something to do with it?
>
> Any help is very much appreciated
>
> Thanks
>
> Here is the output of xm dmesg:
>
> (XEN) Xen version 4.1.2 (Ubuntu 4.1.2-2ubuntu2.2) 
> (stefan.bader@canonical.com) (gcc version 4.6.3 (Ubuntu/Linaro 
> 4.6.3-1ubuntu5) ) Sat Jul 21 09:01:19 UTC 2012
> (XEN) Bootloader: GRUB 1.99-21ubuntu3.1
> (XEN) Command line: placeholder
> (XEN) Video information:
> (XEN)  VGA is text mode 80x25, font 8x16
> (XEN)  VBE/DDC methods: V2; EDID transfer time: 1 seconds
> (XEN) Disc information:
> (XEN)  Found 0 MBR signatures
> (XEN)  Found 0 EDD information structures
> (XEN) Xen-e801 RAM map:
> (XEN)  0000000000000000 - 0000000000099c00 (usable)
> (XEN)  0000000000100000 - 00000000ddd00000 (usable)
> (XEN) System RAM: 3548MB (3633764kB)
> (XEN) ACPI: RSDP 000FDF00, 0024 (r2 SUPERM)
> (XEN) ACPI: XSDT DDF9E098, 00AC (r1 SUPERM SMCI--MB        1 AMI     
> 10013)
> (XEN) ACPI: FACP DDFA90D8, 00F4 (r4 SUPERM SMCI--MB        1 AMI     
> 10013)
> (XEN) ACPI: DSDT DDF9E1D8, AEFA (r2 SUPERM SMCI--MB        0 INTL 
> 20051117)
> (XEN) ACPI: FACS DDFBDF80, 0040
> (XEN) ACPI: APIC DDFA91D0, 0092 (r3 SUPERM SMCI--MB        1 AMI     
> 10013)
> (XEN) ACPI: FPDT DDFA9268, 0044 (r1 SUPERM SMCI--MB        1 AMI     
> 10013)
> (XEN) ACPI: MCFG DDFA92B0, 003C (r1 SUPERM SMCI--MB        1 
> MSFT       97)
> (XEN) ACPI: PRAD DDFA92F0, 00BE (r2 PRADID  PRADTID        1 MSFT 
> 3000001)
> (XEN) ACPI: HPET DDFA93B0, 0038 (r1 SUPERM SMCI--MB        1 
> AMI.        5)
> (XEN) ACPI: SSDT DDFA93E8, 036D (r1 SataRe SataTabl     1000 INTL 
> 20091112)
> (XEN) ACPI: SPMI DDFA9758, 0040 (r5 A M I   OEMSPMI        0 
> AMI.        0)
> (XEN) ACPI: SSDT DDFA9798, 09A4 (r1  PmRef  Cpu0Ist     3000 INTL 
> 20051117)
> (XEN) ACPI: SSDT DDFAA140, 0A88 (r1  PmRef    CpuPm     3000 INTL 
> 20051117)
> (XEN) ACPI: DMAR DDFAABC8, 0078 (r1 INTEL      SNB         1 
> INTL        1)
> (XEN) ACPI: BGRT DDFAAC40, 0038 (r0 SUPERM SMCI--MB        1 AMI     
> 10013)
> (XEN) ACPI: SPCR DDFAAC78, 0050 (r1  A M I   APTIO4        1 
> AMI.        5)
> (XEN) ACPI: EINJ DDFAACC8, 0130 (r1    AMI AMI EINJ 0 0)
> (XEN) ACPI: ERST DDFAADF8, 0210 (r1  AMIER AMI ERST 0 0)
> (XEN) ACPI: HEST DDFAB008, 00A8 (r1    AMI AMI HEST 0 0)
> (XEN) ACPI: BERT DDFAB0B0, 0030 (r1    AMI AMI BERT 0 0)
> (XEN) Domain heap initialised
> (XEN) ACPI: 32/64X FACS address mismatch in FADT - 
> ddfbdf80/0000000000000000, using 32
> (XEN) Processor #0 7:10 APIC version 21
> (XEN) Processor #2 7:10 APIC version 21
> (XEN) Processor #4 7:10 APIC version 21
> (XEN) Processor #6 7:10 APIC version 21
> (XEN) Processor #1 7:10 APIC version 21
> (XEN) Processor #3 7:10 APIC version 21
> (XEN) Processor #5 7:10 APIC version 21
> (XEN) Processor #7 7:10 APIC version 21
> (XEN) IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-23
> (XEN) Enabling APIC mode:  Flat.  Using 1 I/O APICs
> (XEN) [VT-D]dmar.c:528:   RMRR address range not in reserved memory 
> base = dde16000 end = dde32fff; iommu_inclusive_mapping=1 parameter 
> may be needed.
> (XEN) ERST table is invalid
> (XEN) Switched to APIC driver x2apic_cluster.
> (XEN) Using scheduler: SMP Credit Scheduler (credit)
> (XEN) Detected 3292.644 MHz processor.
> (XEN) Initing memory sharing.
> (XEN) Intel VT-d Snoop Control enabled.
> (XEN) Intel VT-d Dom0 DMA Passthrough not enabled.
> (XEN) Intel VT-d Queued Invalidation enabled.
> (XEN) Intel VT-d Interrupt Remapping enabled.
> (XEN) Intel VT-d Shared EPT tables not enabled.
> (XEN) I/O virtualisation enabled
> (XEN)  - Dom0 mode: Relaxed
> (XEN) Enabled directed EOI with ioapic_ack_old on!
> (XEN) ENABLING IO-APIC IRQs
> (XEN)  -> Using old ACK method
> (XEN) Platform timer is 14.318MHz HPET
> (XEN) Allocated console ring of 16 KiB.
> (XEN) VMX: Supported advanced features:
> (XEN)  - APIC MMIO access virtualisation
> (XEN)  - APIC TPR shadow
> (XEN)  - Extended Page Tables (EPT)
> (XEN)  - Virtual-Processor Identifiers (VPID)
> (XEN)  - Virtual NMI
> (XEN)  - MSR direct-access bitmap
> (XEN)  - Unrestricted Guest
> (XEN) EPT supports 2MB super page.
> (XEN) HVM: ASIDs enabled.
> (XEN) HVM: VMX enabled
> (XEN) HVM: Hardware Assisted Paging detected.
> (XEN) Brought up 8 CPUs
> (XEN) *** LOADING DOMAIN 0 ***
> (XEN)  Xen  kernel: 64-bit, lsb, compat32
> (XEN)  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x205d000
> (XEN) PHYSICAL MEMORY ARRANGEMENT:
> (XEN)  Dom0 alloc.:   00000000bc000000->00000000c0000000 (744642 pages 
> to be allocated)
> (XEN)  Init. ramdisk: 00000000c4b6a000->00000000dd7ffe00
> (XEN) VIRTUAL MEMORY ARRANGEMENT:
> (XEN)  Loaded kernel: ffffffff81000000->ffffffff8205d000
> (XEN)  Init. ramdisk: ffffffff8205d000->ffffffff9acf2e00
> (XEN)  Phys-Mach map: ffffffff9acf3000->ffffffff9b387ac0
> (XEN)  Start info:    ffffffff9b388000->ffffffff9b3884b4
> (XEN)  Page tables:   ffffffff9b389000->ffffffff9b468000
> (XEN)  Boot stack:    ffffffff9b468000->ffffffff9b469000
> (XEN)  TOTAL:         ffffffff80000000->ffffffff9b800000
> (XEN)  ENTRY ADDRESS: ffffffff81cfd200
> (XEN) Dom0 has maximum 8 VCPUs
> (XEN) Scrubbing Free RAM: .done.
> (XEN) Xen trace buffers: disabled
> (XEN) Std. Loglevel: Errors and warnings
> (XEN) Guest Loglevel: Nothing (Rate-limited: Errors and warnings)
> (XEN) Xen is relinquishing VGA console.
> (XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch 
> input to Xen)
> (XEN) Freed 220kB init memory.
> (XEN) physdev.c:155: dom0: wrong map_pirq type 3
>
>
>

Hi Everyone,

You will note that someone has posted a bug report for the issue 
I'm facing:

https://bugzilla.redhat.com/show_bug.cgi?id=819235

Also, someone has created a patch (for 4.0.1): 
http://serverfault.com/questions/342109/xen-only-sees-512mb-of-system-ram-should-be-8gb-uefi-boot

Could someone here please shed light on this issue? Is there an official 
patch in the works?

Thanks

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 03:18:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 03:18:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4NvT-0004RB-HA; Thu, 23 Aug 2012 03:17:43 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1T4NvR-0004R6-If
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 03:17:41 +0000
Received: from [85.158.143.35:32769] by server-3.bemta-4.messagelabs.com id
	4E/1B-09529-4D0A5305; Thu, 23 Aug 2012 03:17:40 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1345691859!5359184!1
X-Originating-IP: [192.55.52.88]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjg4ID0+IDMxMzQ3OQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26545 invoked from network); 23 Aug 2012 03:17:40 -0000
Received: from mga01.intel.com (HELO mga01.intel.com) (192.55.52.88)
	by server-4.tower-21.messagelabs.com with SMTP;
	23 Aug 2012 03:17:40 -0000
Received: from azsmga002.ch.intel.com ([10.2.17.35])
	by fmsmga101.fm.intel.com with ESMTP; 22 Aug 2012 20:17:39 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.80,297,1344236400"; d="scan'208";a="136983529"
Received: from unknown (HELO localhost) ([10.239.36.11])
	by AZSMGA002.ch.intel.com with ESMTP; 22 Aug 2012 20:17:38 -0700
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Thu, 23 Aug 2012 11:11:21 +0800
Message-Id: <1345691481-6862-1-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
Subject: [Xen-devel] [PATCH] nvmx: fix resource relinquish for nested VMX
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The previous order of resource relinquishing is:
relinquish_domain_resources() -> vcpu_destroy() -> nvmx_vcpu_destroy().
However, some L1 resources such as nv_vvmcx and the io_bitmaps are freed
in nvmx_vcpu_destroy(), so relinquish_domain_resources() cannot reduce
the refcnt of the domain to 0, and therefore the later vcpu release
functions are never called.

To fix this issue, we need to release the nv_vvmcx and io_bitmaps in
relinquish_domain_resources().

Besides, after destroying the nested vcpu, we need to switch vmx->vmcs
back to the L1 VMCS and let the vcpu_destroy() logic free the L1 VMCS page.

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/hvm.c             |    3 +++
 xen/arch/x86/hvm/vmx/vmx.c         |    3 ++-
 xen/arch/x86/hvm/vmx/vvmx.c        |   11 +++++++++++
 xen/include/asm-x86/hvm/hvm.h      |    1 +
 xen/include/asm-x86/hvm/vmx/vvmx.h |    1 +
 5 files changed, 18 insertions(+), 1 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 7f8a025..0576a24 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -561,6 +561,9 @@ int hvm_domain_initialise(struct domain *d)
 
 void hvm_domain_relinquish_resources(struct domain *d)
 {
+    if ( hvm_funcs.nhvm_domain_relinquish_resources )
+        hvm_funcs.nhvm_domain_relinquish_resources(d);
+
     hvm_destroy_ioreq_page(d, &d->arch.hvm_domain.ioreq);
     hvm_destroy_ioreq_page(d, &d->arch.hvm_domain.buf_ioreq);
 
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index ffb86c1..3ea7012 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -1547,7 +1547,8 @@ static struct hvm_function_table __read_mostly vmx_function_table = {
     .nhvm_vcpu_asid       = nvmx_vcpu_asid,
     .nhvm_vmcx_guest_intercepts_trap = nvmx_intercepts_exception,
     .nhvm_vcpu_vmexit_trap = nvmx_vmexit_trap,
-    .nhvm_intr_blocked    = nvmx_intr_blocked
+    .nhvm_intr_blocked    = nvmx_intr_blocked,
+    .nhvm_domain_relinquish_resources = nvmx_domain_relinquish_resources
 };
 
 struct hvm_function_table * __init start_vmx(void)
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 2e0b79d..1f610eb 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -57,6 +57,9 @@ void nvmx_vcpu_destroy(struct vcpu *v)
 {
     struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
 
+    if ( nvcpu->nv_n1vmcx )
+        v->arch.hvm_vmx.vmcs = nvcpu->nv_n1vmcx;
+
     nvmx_purge_vvmcs(v);
     if ( nvcpu->nv_n2vmcx ) {
         __vmpclear(virt_to_maddr(nvcpu->nv_n2vmcx));
@@ -65,6 +68,14 @@ void nvmx_vcpu_destroy(struct vcpu *v)
     }
 }
  
+void nvmx_domain_relinquish_resources(struct domain *d)
+{
+    struct vcpu *v;
+
+    for_each_vcpu ( d, v )
+        nvmx_purge_vvmcs(v);
+}
+
 int nvmx_vcpu_reset(struct vcpu *v)
 {
     return 0;
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index 7243c4e..3592a8c 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -179,6 +179,7 @@ struct hvm_function_table {
     bool_t (*nhvm_vmcx_hap_enabled)(struct vcpu *v);
 
     enum hvm_intblk (*nhvm_intr_blocked)(struct vcpu *v);
+    void (*nhvm_domain_relinquish_resources)(struct domain *d);
 };
 
 extern struct hvm_function_table hvm_funcs;
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index 995f9f4..bbc34e7 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -96,6 +96,7 @@ uint32_t nvmx_vcpu_asid(struct vcpu *v);
 enum hvm_intblk nvmx_intr_blocked(struct vcpu *v);
 int nvmx_intercepts_exception(struct vcpu *v, 
                               unsigned int trap, int error_code);
+void nvmx_domain_relinquish_resources(struct domain *d);
 
 int nvmx_handle_vmxon(struct cpu_user_regs *regs);
 int nvmx_handle_vmxoff(struct cpu_user_regs *regs);
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 05:21:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 05:21:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4PqO-0005DZ-Uu; Thu, 23 Aug 2012 05:20:36 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T4PqN-0005DU-2p
	for xen-devel@lists.xensource.com; Thu, 23 Aug 2012 05:20:35 +0000
Received: from [85.158.139.83:7889] by server-10.bemta-5.messagelabs.com id
	F5/97-13125-2ADB5305; Thu, 23 Aug 2012 05:20:34 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-2.tower-182.messagelabs.com!1345699232!29585965!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTAzMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8279 invoked from network); 23 Aug 2012 05:20:33 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-2.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 05:20:33 -0000
X-IronPort-AV: E=Sophos;i="4.80,298,1344211200"; d="scan'208";a="14137706"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	23 Aug 2012 05:19:52 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 23 Aug 2012 06:19:52 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1T4Ppg-0007T8-8X;
	Thu, 23 Aug 2012 05:19:52 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1T4Ppf-0007yC-Sb;
	Thu, 23 Aug 2012 06:19:52 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13622-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 23 Aug 2012 06:19:51 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13622: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13622 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13622/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13621
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13621
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13621
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13621

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  b02ac80ff689
baseline version:
 xen                  e6ca45ca03c2

------------------------------------------------------------
People who touched revisions under test:
  Andres Lagar-Cavilla <andres@lagarcavilla.org>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Paul Durrant <paul.durrant@citrix.com>
  Santosh Jodh <santosh.jodh@citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=b02ac80ff689
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable b02ac80ff689
+ branch=xen-unstable
+ revision=b02ac80ff689
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/staging/xen-unstable.hg
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-unstable.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-unstable.hg
+ hg push -r b02ac80ff689 ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 7 changesets with 24 changes to 22 files


From xen-devel-bounces@lists.xen.org Thu Aug 23 05:39:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 05:39:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4Q8b-0005er-Ux; Thu, 23 Aug 2012 05:39:25 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.xu.prc@gmail.com>) id 1T4M7x-0003Pq-I7
	for xen-devel@lists.xensource.com; Thu, 23 Aug 2012 01:22:29 +0000
Received: from [85.158.139.83:37761] by server-3.bemta-5.messagelabs.com id
	8A/88-27237-4D585305; Thu, 23 Aug 2012 01:22:28 +0000
X-Env-Sender: wei.xu.prc@gmail.com
X-Msg-Ref: server-3.tower-182.messagelabs.com!1345684946!29568081!1
X-Originating-IP: [209.85.210.171]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15564 invoked from network); 23 Aug 2012 01:22:27 -0000
Received: from mail-iy0-f171.google.com (HELO mail-iy0-f171.google.com)
	(209.85.210.171)
	by server-3.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 01:22:27 -0000
Received: by iabz25 with SMTP id z25so443943iab.30
	for <xen-devel@lists.xensource.com>;
	Wed, 22 Aug 2012 18:22:26 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=/l+0T+LteV5wEle38pha9vCfEKCUMzlmnlG3PeUAGq8=;
	b=KpqBh1L3diqB+JC/aD2hYyh9bg8RgaKqI7jy/cGuKG3WWUA0l6IVIcPGUyPwJZcqG0
	8CZm+ec4YaBX7+vQeN/Uh0qWFrhBVeDv5DCCK+JeF816g8qW5e0JT+b1sd/7rcocHfvn
	hmHp99awFTtBN5ku9tYtCrOWfrm2+5ZxDaccMqIM3EmAYqwJ4CUnUVLa/Z23ZUGSqWbj
	0QLwSKyQBP3bI3oQEbQ59t8YuaQVrZaq9EACH8yKvJtDlb1d81k0rj3m652ReIcrJGxl
	ZSLWQBf+MZ9w3t8lpIYlmMDcUjuh6LsI4YfYTWNK6+mY+QKNZ4MWT3WJ+7YQjL7JjI8j
	og2g==
MIME-Version: 1.0
Received: by 10.50.213.6 with SMTP id no6mr4127164igc.30.1345684946128; Wed,
	22 Aug 2012 18:22:26 -0700 (PDT)
Received: by 10.64.100.139 with HTTP; Wed, 22 Aug 2012 18:22:26 -0700 (PDT)
In-Reply-To: <alpine.DEB.2.02.1208201639390.15568@kaball.uk.xensource.com>
References: <CAH=9XOZC46yEKjoQKRgD3aEUaqvNcNF4Jcs=0HDdxJbAOpWEXw@mail.gmail.com>
	<CAH=9XObhC86Cg7kWdrUvPtPQEX94RcGiLH621pLbye3Q02=9ww@mail.gmail.com>
	<20120820140241.GD7847@phenom.dumpdata.com>
	<alpine.DEB.2.02.1208201639390.15568@kaball.uk.xensource.com>
Date: Thu, 23 Aug 2012 09:22:26 +0800
Message-ID: <CAH=9XOYuPpfwoWd4swSzMTJFcFVhhXAQJyjMpp2FiVqm7xDf-w@mail.gmail.com>
From: Wei Xu <wei.xu.prc@gmail.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-Mailman-Approved-At: Thu, 23 Aug 2012 05:39:24 +0000
Cc: xen <xen@lists.fedoraproject.org>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [Fedora-xen] DomU console driver not works for
 Fedora17 in HVM mode with Xen 4.1.2
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1488776988284246471=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============1488776988284246471==
Content-Type: multipart/alternative; boundary=f46d042f9388318d5a04c7e4b13e

--f46d042f9388318d5a04c7e4b13e
Content-Type: text/plain; charset=ISO-8859-1

Thanks for your reply.

I have tried the method, but it still doesn't work: there is no "hvc0"
device file in my "/dev" directory.
Is that the root cause?


On Mon, Aug 20, 2012 at 11:47 PM, Stefano Stabellini <
stefano.stabellini@eu.citrix.com> wrote:

> On Mon, 20 Aug 2012, Konrad Rzeszutek Wilk wrote:
> > On Mon, Aug 20, 2012 at 09:39:56PM +0800, Wei Xu wrote:
> > > > Hi All,
> > > > I'm try to set up and verify Xen console driver base on Fedora 17
> and Xen
> > > > 4.1.2 with hvm guest mode,
> > > > i searched around and got a link, it give steps both for PV and HVM
> mode,
> > > > I followed the HVM guide
> > > > and upgraded my kernel to 3.5.0.
> > > >
> > > >            http://www.dedoimedo.com/computers/xen-console.html
> > > >
> > > > After that, I can got console output with "xm console <dom_id>", but
> the
> > > > console driver is not used when I tracing the driver
> >
> > .. I am not sure I understand: "when I tracing the driver" ? Are
> > you referring to the PV driver?
> >
> > > > with "crash" utility, by examing the "console_drivers", the console
> driver
> > > > is still "serial8250 console", so i wonder if I didn't
> > > > set up it properly or something else, is there someone ever
> experienced
> > > > it, thanks.
> >
> > Hm, it should be the hvc one. Perhaps Stefano knows..
> >
> > CC-ing him and xen-devel here.
>
> An HVM guest has access to both an emulated serial (if a serial="pty"
> parameter is present in the VM config file) and a PV console.
> However the default first console is the emulated serial with libxl (see
> libxl__primary_console_find), that is what you get when you execute "xl
> console" without -t.
>
> But if you edit inittab to spawn a getty on hvc0 and then execute "xl
> console -t pv" you should get access to the PV console.
>

--f46d042f9388318d5a04c7e4b13e--


--===============1488776988284246471==--


From xen-devel-bounces@lists.xen.org Thu Aug 23 05:39:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 05:39:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4Q8b-0005er-Ux; Thu, 23 Aug 2012 05:39:25 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.xu.prc@gmail.com>) id 1T4M7x-0003Pq-I7
	for xen-devel@lists.xensource.com; Thu, 23 Aug 2012 01:22:29 +0000
Received: from [85.158.139.83:37761] by server-3.bemta-5.messagelabs.com id
	8A/88-27237-4D585305; Thu, 23 Aug 2012 01:22:28 +0000
X-Env-Sender: wei.xu.prc@gmail.com
X-Msg-Ref: server-3.tower-182.messagelabs.com!1345684946!29568081!1
X-Originating-IP: [209.85.210.171]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15564 invoked from network); 23 Aug 2012 01:22:27 -0000
Received: from mail-iy0-f171.google.com (HELO mail-iy0-f171.google.com)
	(209.85.210.171)
	by server-3.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 01:22:27 -0000
Received: by iabz25 with SMTP id z25so443943iab.30
	for <xen-devel@lists.xensource.com>;
	Wed, 22 Aug 2012 18:22:26 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=/l+0T+LteV5wEle38pha9vCfEKCUMzlmnlG3PeUAGq8=;
	b=KpqBh1L3diqB+JC/aD2hYyh9bg8RgaKqI7jy/cGuKG3WWUA0l6IVIcPGUyPwJZcqG0
	8CZm+ec4YaBX7+vQeN/Uh0qWFrhBVeDv5DCCK+JeF816g8qW5e0JT+b1sd/7rcocHfvn
	hmHp99awFTtBN5ku9tYtCrOWfrm2+5ZxDaccMqIM3EmAYqwJ4CUnUVLa/Z23ZUGSqWbj
	0QLwSKyQBP3bI3oQEbQ59t8YuaQVrZaq9EACH8yKvJtDlb1d81k0rj3m652ReIcrJGxl
	ZSLWQBf+MZ9w3t8lpIYlmMDcUjuh6LsI4YfYTWNK6+mY+QKNZ4MWT3WJ+7YQjL7JjI8j
	og2g==
MIME-Version: 1.0
Received: by 10.50.213.6 with SMTP id no6mr4127164igc.30.1345684946128; Wed,
	22 Aug 2012 18:22:26 -0700 (PDT)
Received: by 10.64.100.139 with HTTP; Wed, 22 Aug 2012 18:22:26 -0700 (PDT)
In-Reply-To: <alpine.DEB.2.02.1208201639390.15568@kaball.uk.xensource.com>
References: <CAH=9XOZC46yEKjoQKRgD3aEUaqvNcNF4Jcs=0HDdxJbAOpWEXw@mail.gmail.com>
	<CAH=9XObhC86Cg7kWdrUvPtPQEX94RcGiLH621pLbye3Q02=9ww@mail.gmail.com>
	<20120820140241.GD7847@phenom.dumpdata.com>
	<alpine.DEB.2.02.1208201639390.15568@kaball.uk.xensource.com>
Date: Thu, 23 Aug 2012 09:22:26 +0800
Message-ID: <CAH=9XOYuPpfwoWd4swSzMTJFcFVhhXAQJyjMpp2FiVqm7xDf-w@mail.gmail.com>
From: Wei Xu <wei.xu.prc@gmail.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-Mailman-Approved-At: Thu, 23 Aug 2012 05:39:24 +0000
Cc: xen <xen@lists.fedoraproject.org>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [Fedora-xen] DomU console driver not works for
 Fedora17 in HVM mode with Xen 4.1.2
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1488776988284246471=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============1488776988284246471==
Content-Type: multipart/alternative; boundary=f46d042f9388318d5a04c7e4b13e

--f46d042f9388318d5a04c7e4b13e
Content-Type: text/plain; charset=ISO-8859-1

Thanks for your reply.

I have tried the method, but it still doesn't work: there is no "hvc0"
device file in my "/dev" directory.
Is that the root cause?


On Mon, Aug 20, 2012 at 11:47 PM, Stefano Stabellini <
stefano.stabellini@eu.citrix.com> wrote:

> On Mon, 20 Aug 2012, Konrad Rzeszutek Wilk wrote:
> > On Mon, Aug 20, 2012 at 09:39:56PM +0800, Wei Xu wrote:
> > > > Hi All,
> > > > I'm try to set up and verify Xen console driver base on Fedora 17
> and Xen
> > > > 4.1.2 with hvm guest mode,
> > > > i searched around and got a link, it give steps both for PV and HVM
> mode,
> > > > I followed the HVM guide
> > > > and upgraded my kernel to 3.5.0.
> > > >
> > > >            http://www.dedoimedo.com/computers/xen-console.html
> > > >
> > > > After that, I can got console output with "xm console <dom_id>", but
> the
> > > > console driver is not used when I tracing the driver
> >
> > .. I am not sure I understand: "when I tracing the driver" ? Are
> > you referring to the PV driver?
> >
> > > > with "crash" utility, by examing the "console_drivers", the console
> driver
> > > > is still "serial8250 console", so i wonder if I didn't
> > > > set up it properly or something else, is there someone ever
> experienced
> > > > it, thanks.
> >
> > Hm, it should be the hvc one. Perhaps Stefano knows..
> >
> > CC-ing him and xen-devel here.
>
> An HVM guest has access to both an emulated serial (if a serial="pty"
> parameter is present in the VM config file) and a PV console.
> However the default first console is the emulated serial with libxl (see
> libxl__primary_console_find), that is what you get when you execute "xl
> console" without -t.
>
> But if you edit inittab to spawn a getty on hvc0 and then execute "xl
> console -t pv" you should get access to the PV console.
>
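For readers following along, Stefano's suggestion amounts to two steps; the sketch below is illustrative only (the inittab entry id and the agetty invocation are assumptions, not taken from this thread, and may need adjusting for your distribution):

```shell
# 1. In the guest, spawn a getty on the PV console device "hvc0"
#    (sysvinit /etc/inittab syntax; the "h0" id and agetty arguments
#    here are illustrative assumptions):
#
#      h0:2345:respawn:/sbin/agetty 38400 hvc0
#
# 2. In dom0, attach to the PV console explicitly. libxl defaults to
#    the emulated serial when no -t option is given, so request the
#    PV console by type:
xl console -t pv <domain>
```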

--f46d042f9388318d5a04c7e4b13e--


--===============1488776988284246471==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1488776988284246471==--


From xen-devel-bounces@lists.xen.org Thu Aug 23 05:39:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 05:39:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4Q8b-0005ek-JI; Thu, 23 Aug 2012 05:39:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <p.d@gmx.de>)
	id 1T4Ilb-000481-4M
	for xen-devel@lists.xen.org; Wed, 22 Aug 2012 21:47:11 +0000
Received: from [85.158.143.99:5602] by server-1.bemta-4.messagelabs.com id
	49/8F-07754-E5355305; Wed, 22 Aug 2012 21:47:10 +0000
X-Env-Sender: p.d@gmx.de
X-Msg-Ref: server-5.tower-216.messagelabs.com!1345672030!27120840!1
X-Originating-IP: [213.165.64.23]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjEzLjE2NS42NC4yMyA9PiAxOTQ1MTk=\n,
	ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26374 invoked from network); 22 Aug 2012 21:47:10 -0000
Received: from mailout-de.gmx.net (HELO mailout-de.gmx.net) (213.165.64.23)
	by server-5.tower-216.messagelabs.com with SMTP;
	22 Aug 2012 21:47:10 -0000
Received: (qmail 26771 invoked by uid 0); 22 Aug 2012 21:47:09 -0000
Received: from 95.116.180.214 by www029.gmx.net with HTTP;
	Wed, 22 Aug 2012 23:47:09 +0200 (CEST)
Date: Wed, 22 Aug 2012 23:47:09 +0200
From: p.d@gmx.de
Message-ID: <20120822214709.296550@gmx.net>
MIME-Version: 1.0
To: xen-devel@lists.xen.org
X-Authenticated: #16432423
X-Flags: 0001
X-Mailer: WWW-Mail 6100 (Global Message Exchange)
X-Priority: 3
X-Provags-ID: V01U2FsdGVkX19Z7pJQoGRZujJv91r30ouPYIcD+NeKH6HPzcht6Q
	TkGwfWE4tgNZCNPnTW7tavWVc44nnByrLP/A== 
X-GMX-UID: CjZHcLMSeSEqN0bekXQhzrx+IGRvb0Dw
X-Mailman-Approved-At: Thu, 23 Aug 2012 05:39:24 +0000
Subject: [Xen-devel] make uninstall can delete xen-kernels
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Nice time.

# make uninstall
...
rm -rf //boot/*xen*
...

If somebody uses "xen" in a kernel name (perhaps as a suffix), that kernel will be deleted from /boot/ as well.
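The hazard is easy to reproduce in a scratch directory; this sketch (with hypothetical file names) shows that a kernel image merely containing "xen" matches the same glob the uninstall target uses:

```shell
# Recreate the situation in a temporary "boot" directory
# (file names are hypothetical examples):
tmp=$(mktemp -d)
mkdir "$tmp/boot"
touch "$tmp/boot/xen-4.1.2.gz" "$tmp/boot/vmlinuz-3.5.0-xen"

# The uninstall target's glob removes both files, the kernel included:
rm -rf "$tmp"/boot/*xen*
ls -A "$tmp/boot"    # empty: vmlinuz-3.5.0-xen is gone too

rm -rf "$tmp"        # clean up
```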

Thanks.
Denis.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 06:07:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 06:07:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4QZ6-00064j-AE; Thu, 23 Aug 2012 06:06:48 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1T4QZ3-00064b-TI
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 06:06:46 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-15.tower-27.messagelabs.com!1345701982!3353740!1
X-Originating-IP: [192.89.123.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjg5LjEyMy4yNSA9PiA0ODk1MTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26981 invoked from network); 23 Aug 2012 06:06:22 -0000
Received: from smtp.tele.fi (HELO smtp.tele.fi) (192.89.123.25)
	by server-15.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 23 Aug 2012 06:06:22 -0000
X-Originating-Ip: [194.89.68.22]
Received: from ydin.reaktio.net (reaktio.net [194.89.68.22])
	by smtp.tele.fi (Postfix) with ESMTP id AA4842335;
	Thu, 23 Aug 2012 09:06:20 +0300 (EEST)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id 976F32005D; Thu, 23 Aug 2012 09:06:20 +0300 (EEST)
Date: Thu, 23 Aug 2012 09:06:20 +0300
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: Jonathan Tripathy <jonnyt@abpni.co.uk>
Message-ID: <20120823060620.GE19851@reaktio.net>
References: <50356E43.3030208@abpni.co.uk>
 <50356FB1.2070904@abpni.co.uk>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50356FB1.2070904@abpni.co.uk>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Can't see more than 3.5GB of RAM / UEFI / no e820
 memory map detected
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Aug 23, 2012 at 12:48:01AM +0100, Jonathan Tripathy wrote:
> 
> Hi Everyone,
> 
> CC: Xen-users
> 
> I am running Ubuntu 12.04 x86_64. My machine has a supermicro
> motherboard X9SCI-LN4F with 32GB of RAM installed. To get Xen, I simply
> did apt-get install xen-hypervisor which gives me Ubuntu's 4.1 xen version.
> 
> For some reason, Xen can't see any more than about 3.5GB of RAM. I can
> confirm this by xentop as well as xm info. I am definitely running a
> 64-bit Dom0 kernel as when I boot into it without Xen, I can see all
> 32GB of RAM by running "free -m".
> 
> Has anybody come across this issue before? For what it's worth, I'm
> booting my system using UEFI - could that have something to do with it?
> 
> Any help is very much appreciated
> 

Yes, this is a UEFI-related issue. Can you turn UEFI off? 

It looks like you're not running a UEFI-capable Xen hypervisor.
(Xen 4.2 has UEFI support, and some vendors have backported UEFI support to older versions;
for example, SUSE SLES11 SP2 ships UEFI support in Xen 4.1.)

> Thanks
> 
> Here is the output of xm dmesg:
> 
> (XEN) Xen version 4.1.2 (Ubuntu 4.1.2-2ubuntu2.2)
> (stefan.bader@canonical.com) (gcc version 4.6.3 (Ubuntu/Linaro
> 4.6.3-1ubuntu5) ) Sat Jul 21 09:01:19 UTC 2012
> (XEN) Bootloader: GRUB 1.99-21ubuntu3.1
> (XEN) Command line: placeholder
> (XEN) Video information:
> (XEN)  VGA is text mode 80x25, font 8x16
> (XEN)  VBE/DDC methods: V2; EDID transfer time: 1 seconds
> (XEN) Disc information:
> (XEN)  Found 0 MBR signatures
> (XEN)  Found 0 EDD information structures

here:

> (XEN) Xen-e801 RAM map:
> (XEN)  0000000000000000 - 0000000000099c00 (usable)
> (XEN)  0000000000100000 - 00000000ddd00000 (usable)

This is the problem: only the legacy e801 memory map is detected here.
You need the usual e820 memory map to see all of the memory (and the correct memory layout).

You need a Xen hypervisor with UEFI support, or you need to turn UEFI off in the firmware setup.
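A quick way to tell which firmware memory map the hypervisor picked up is to look for the "RAM map" line in the boot log. The snippet below classifies a captured log fragment; the sample text is abbreviated from the log quoted in this thread, and the labels are illustrative:

```shell
# Classify a captured "xm dmesg" / "xl dmesg" fragment by memory-map type.
# The sample is abbreviated from the report quoted in this thread.
dmesg_sample='(XEN) Xen-e801 RAM map:
(XEN)  0000000000000000 - 0000000000099c00 (usable)'

case "$dmesg_sample" in
  *'Xen-e820 RAM map'*) echo "e820 map: full memory layout available" ;;
  *'Xen-e801 RAM map'*) echo "e801 map: legacy interface, RAM capped near 4GB" ;;
  *)                    echo "no RAM map found in sample" ;;
esac
```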


-- Pasi


> (XEN) System RAM: 3548MB (3633764kB)
> (XEN) ACPI: RSDP 000FDF00, 0024 (r2 SUPERM)
> (XEN) ACPI: XSDT DDF9E098, 00AC (r1 SUPERM SMCI--MB        1 AMI     10013)
> (XEN) ACPI: FACP DDFA90D8, 00F4 (r4 SUPERM SMCI--MB        1 AMI     10013)
> (XEN) ACPI: DSDT DDF9E1D8, AEFA (r2 SUPERM SMCI--MB        0 INTL 20051117)
> (XEN) ACPI: FACS DDFBDF80, 0040
> (XEN) ACPI: APIC DDFA91D0, 0092 (r3 SUPERM SMCI--MB        1 AMI     10013)
> (XEN) ACPI: FPDT DDFA9268, 0044 (r1 SUPERM SMCI--MB        1 AMI     10013)
> (XEN) ACPI: MCFG DDFA92B0, 003C (r1 SUPERM SMCI--MB        1 MSFT       97)
> (XEN) ACPI: PRAD DDFA92F0, 00BE (r2 PRADID  PRADTID        1 MSFT 3000001)
> (XEN) ACPI: HPET DDFA93B0, 0038 (r1 SUPERM SMCI--MB        1 AMI.        5)
> (XEN) ACPI: SSDT DDFA93E8, 036D (r1 SataRe SataTabl     1000 INTL 20091112)
> (XEN) ACPI: SPMI DDFA9758, 0040 (r5 A M I   OEMSPMI        0 AMI.        0)
> (XEN) ACPI: SSDT DDFA9798, 09A4 (r1  PmRef  Cpu0Ist     3000 INTL 20051117)
> (XEN) ACPI: SSDT DDFAA140, 0A88 (r1  PmRef    CpuPm     3000 INTL 20051117)
> (XEN) ACPI: DMAR DDFAABC8, 0078 (r1 INTEL      SNB         1 INTL        1)
> (XEN) ACPI: BGRT DDFAAC40, 0038 (r0 SUPERM SMCI--MB        1 AMI     10013)
> (XEN) ACPI: SPCR DDFAAC78, 0050 (r1  A M I   APTIO4        1 AMI.        5)
> (XEN) ACPI: EINJ DDFAACC8, 0130 (r1    AMI AMI EINJ 0             0)
> (XEN) ACPI: ERST DDFAADF8, 0210 (r1  AMIER AMI ERST 0             0)
> (XEN) ACPI: HEST DDFAB008, 00A8 (r1    AMI AMI HEST 0             0)
> (XEN) ACPI: BERT DDFAB0B0, 0030 (r1    AMI AMI BERT 0             0)
> (XEN) Domain heap initialised
> (XEN) ACPI: 32/64X FACS address mismatch in FADT -
> ddfbdf80/0000000000000000, using 32
> (XEN) Processor #0 7:10 APIC version 21
> (XEN) Processor #2 7:10 APIC version 21
> (XEN) Processor #4 7:10 APIC version 21
> (XEN) Processor #6 7:10 APIC version 21
> (XEN) Processor #1 7:10 APIC version 21
> (XEN) Processor #3 7:10 APIC version 21
> (XEN) Processor #5 7:10 APIC version 21
> (XEN) Processor #7 7:10 APIC version 21
> (XEN) IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-23
> (XEN) Enabling APIC mode:  Flat.  Using 1 I/O APICs
> (XEN) [VT-D]dmar.c:528:   RMRR address range not in reserved memory
> base = dde16000 end = dde32fff; iommu_inclusive_mapping=1 parameter
> may be needed.
> (XEN) ERST table is invalid
> (XEN) Switched to APIC driver x2apic_cluster.
> (XEN) Using scheduler: SMP Credit Scheduler (credit)
> (XEN) Detected 3292.644 MHz processor.
> (XEN) Initing memory sharing.
> (XEN) Intel VT-d Snoop Control enabled.
> (XEN) Intel VT-d Dom0 DMA Passthrough not enabled.
> (XEN) Intel VT-d Queued Invalidation enabled.
> (XEN) Intel VT-d Interrupt Remapping enabled.
> (XEN) Intel VT-d Shared EPT tables not enabled.
> (XEN) I/O virtualisation enabled
> (XEN)  - Dom0 mode: Relaxed
> (XEN) Enabled directed EOI with ioapic_ack_old on!
> (XEN) ENABLING IO-APIC IRQs
> (XEN)  -> Using old ACK method
> (XEN) Platform timer is 14.318MHz HPET
> (XEN) Allocated console ring of 16 KiB.
> (XEN) VMX: Supported advanced features:
> (XEN)  - APIC MMIO access virtualisation
> (XEN)  - APIC TPR shadow
> (XEN)  - Extended Page Tables (EPT)
> (XEN)  - Virtual-Processor Identifiers (VPID)
> (XEN)  - Virtual NMI
> (XEN)  - MSR direct-access bitmap
> (XEN)  - Unrestricted Guest
> (XEN) EPT supports 2MB super page.
> (XEN) HVM: ASIDs enabled.
> (XEN) HVM: VMX enabled
> (XEN) HVM: Hardware Assisted Paging detected.
> (XEN) Brought up 8 CPUs
> (XEN) *** LOADING DOMAIN 0 ***
> (XEN)  Xen  kernel: 64-bit, lsb, compat32
> (XEN)  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x205d000
> (XEN) PHYSICAL MEMORY ARRANGEMENT:
> (XEN)  Dom0 alloc.:   00000000bc000000->00000000c0000000 (744642
> pages to be allocated)
> (XEN)  Init. ramdisk: 00000000c4b6a000->00000000dd7ffe00
> (XEN) VIRTUAL MEMORY ARRANGEMENT:
> (XEN)  Loaded kernel: ffffffff81000000->ffffffff8205d000
> (XEN)  Init. ramdisk: ffffffff8205d000->ffffffff9acf2e00
> (XEN)  Phys-Mach map: ffffffff9acf3000->ffffffff9b387ac0
> (XEN)  Start info:    ffffffff9b388000->ffffffff9b3884b4
> (XEN)  Page tables:   ffffffff9b389000->ffffffff9b468000
> (XEN)  Boot stack:    ffffffff9b468000->ffffffff9b469000
> (XEN)  TOTAL:         ffffffff80000000->ffffffff9b800000
> (XEN)  ENTRY ADDRESS: ffffffff81cfd200
> (XEN) Dom0 has maximum 8 VCPUs
> (XEN) Scrubbing Free RAM: .done.
> (XEN) Xen trace buffers: disabled
> (XEN) Std. Loglevel: Errors and warnings
> (XEN) Guest Loglevel: Nothing (Rate-limited: Errors and warnings)
> (XEN) Xen is relinquishing VGA console.
> (XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch
> input to Xen)
> (XEN) Freed 220kB init memory.
> (XEN) physdev.c:155: dom0: wrong map_pirq type 3
> 
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 06:39:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 06:39:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4R4k-0006II-W5; Thu, 23 Aug 2012 06:39:30 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T4R4j-0006ID-Uk
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 06:39:30 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1345703964!3349558!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTAzMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5626 invoked from network); 23 Aug 2012 06:39:24 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 06:39:24 -0000
X-IronPort-AV: E=Sophos;i="4.80,298,1344211200"; d="scan'208";a="14138311"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	23 Aug 2012 06:39:23 +0000
Received: from [127.0.0.1] (10.80.16.67) by smtprelay.citrix.com
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 23 Aug 2012 07:39:22 +0100
Message-ID: <1345703962.23624.57.camel@dagon.hellion.org.uk>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "p.d@gmx.de" <p.d@gmx.de>
Date: Thu, 23 Aug 2012 07:39:22 +0100
In-Reply-To: <20120822214709.296550@gmx.net>
References: <20120822214709.296550@gmx.net>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: [Xen-devel] [PATCH] do not remove kernels or modules on uninstall.
 (Was: Re: make uninstall can delete xen-kernels)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-08-22 at 22:47 +0100, p.d@gmx.de wrote:
> Nice time.
> 
> # make uninstall
> ...
> rm -rf //boot/*xen*
> ...
> 
> If somebody uses "xen" in the kernel name (maybe as a suffix), it will be deleted from /boot/ too.

Ouch!

8<-------------------------

# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1345703890 -3600
# Node ID b75970f4114ec72f744f5e7f979ab50401c4629f
# Parent  89c8c855f1df234f9649e98a51724f018f4f92df
do not remove kernels or modules on uninstall.

The pattern used is very broad and will delete any kernel with xen in
its filename, and likewise any modules, including those which come
packaged from the distribution etc.

I don't think this was ever the right thing to do but it is doubly
wrong now that Xen does not even build or install a kernel by default.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

diff -r 89c8c855f1df -r b75970f4114e Makefile
--- a/Makefile	Wed Aug 22 17:32:37 2012 +0100
+++ b/Makefile	Thu Aug 23 07:38:10 2012 +0100
@@ -228,8 +228,6 @@ uninstall:
 	rm -f  $(D)$(SYSCONFIG_DIR)/xendomains
 	rm -f  $(D)$(SYSCONFIG_DIR)/xencommons
 	rm -rf $(D)/var/run/xen* $(D)/var/lib/xen*
-	rm -rf $(D)/boot/*xen*
-	rm -rf $(D)/lib/modules/*xen*
 	rm -rf $(D)$(LIBDIR)/xen* $(D)$(BINDIR)/lomount
 	rm -rf $(D)$(BINDIR)/cpuperf-perfcntr $(D)$(BINDIR)/cpuperf-xen
 	rm -rf $(D)$(BINDIR)/xc_shadow



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 06:48:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 06:48:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4RD1-0006Te-8W; Thu, 23 Aug 2012 06:48:03 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <jbeulich@suse.com>) id 1T4RD0-0006TZ-80
	for xen-devel@lists.xensource.com; Thu, 23 Aug 2012 06:48:02 +0000
X-Env-Sender: jbeulich@suse.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1345703024!1685931!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1343 invoked from network); 23 Aug 2012 06:23:44 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-11.tower-27.messagelabs.com with SMTP;
	23 Aug 2012 06:23:44 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 23 Aug 2012 07:23:43 +0100
Message-Id: <5035DA7E020000780008A5E6@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Thu, 23 Aug 2012 07:23:42 +0100
From: "Jan Beulich" <jbeulich@suse.com>
To: <konrad@kernel.org>
References: <1345133009-21941-1-git-send-email-konrad.wilk@oracle.com>
	<1345133009-21941-3-git-send-email-konrad.wilk@oracle.com>
	<alpine.DEB.2.02.1208171835010.15568@kaball.uk.xensource.com>
	<20120820141305.GA2713@phenom.dumpdata.com>
	<20120821172732.GA23715@phenom.dumpdata.com>
	<20120821190317.GA13035@phenom.dumpdata.com>
	<50351DEF020000780009702A@nat28.tlf.novell.com>
	<20120822185519.GA29609@phenom.dumpdata.com>
In-Reply-To: <20120822185519.GA29609@phenom.dumpdata.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xensource.com, linux-kernel@vger.kernel.org,
	konrad.wilk@oracle.com, stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] Q:pt_base in COMPAT mode offset by two pages.
 Was:Re: [PATCH 02/11] xen/x86: Use memblock_reserve for sensitive areas.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> Konrad Rzeszutek Wilk <konrad@kernel.org> 08/22/12 9:05 PM >>>
>Thinking of just sticking this patch then:

Yeah, that may be best for the moment, albeit I see no reason why you
shouldn't be able to use your more selective logic, making it either
deal only with the pt_base == start-of-page-tables case (and fall back
to the current logic otherwise), or figure out the true range.

I'm nevertheless considering re-arranging the allocation order in the
hypervisor (and removing the superfluously reserved page covering
what would be the L4), so going with the former, simpler case in the
kernel would be a reasonable option.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 07:05:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 07:05:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4RTO-0006rV-C5; Thu, 23 Aug 2012 07:04:58 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jbeulich@suse.com>) id 1T4RTM-0006rQ-VI
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 07:04:57 +0000
Received: from [85.158.143.35:5048] by server-3.bemta-4.messagelabs.com id
	B8/CA-08232-816D5305; Thu, 23 Aug 2012 07:04:56 +0000
X-Env-Sender: jbeulich@suse.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1345705492!12449465!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 799 invoked from network); 23 Aug 2012 07:04:53 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-21.messagelabs.com with SMTP;
	23 Aug 2012 07:04:53 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 23 Aug 2012 08:04:51 +0100
Message-Id: <5035E421020000780008A601@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Thu, 23 Aug 2012 08:04:49 +0100
From: "Jan Beulich" <jbeulich@suse.com>
To: <Ian.Campbell@citrix.com>,<p.d@gmx.de>
References: <20120822214709.296550@gmx.net>
	<1345703962.23624.57.camel@dagon.hellion.org.uk>
In-Reply-To: <1345703962.23624.57.camel@dagon.hellion.org.uk>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] do not remove kernels or modules on
 uninstall. (Was: Re: make uninstall can delete xen-kernels)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> Ian Campbell <Ian.Campbell@citrix.com> 08/23/12 8:40 AM >>>
>--- a/Makefile    Wed Aug 22 17:32:37 2012 +0100
>+++ b/Makefile    Thu Aug 23 07:38:10 2012 +0100
>@@ -228,8 +228,6 @@ uninstall:
>    rm -f  $(D)$(SYSCONFIG_DIR)/xendomains
>    rm -f  $(D)$(SYSCONFIG_DIR)/xencommons
>    rm -rf $(D)/var/run/xen* $(D)/var/lib/xen*
>-    rm -rf $(D)/boot/*xen*

But removing this line without replacement isn't right either - we at least
need to undo what "make install" did. That may imply adding an
uninstall-xen sub-target, if we don't want to come up with a suitable
pattern to do this here.

>-    rm -rf $(D)/lib/modules/*xen*
>    rm -rf $(D)$(LIBDIR)/xen* $(D)$(BINDIR)/lomount
>    rm -rf $(D)$(BINDIR)/cpuperf-perfcntr $(D)$(BINDIR)/cpuperf-xen
>    rm -rf $(D)$(BINDIR)/xc_shadow

This may also be needed for the tools - aren't BINDIR, LIBDIR, and the
like now dependent on configure options?

Similarly, I don't think the EFI binaries get properly cleaned up here
(and that would also be better done with a per-subdir uninstall).

Jan
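The suggested sub-target might look something like this (purely a sketch; the exact file names installed to /boot depend on the xen/ makefiles, so treat XEN_FULLVERSION and the file list below as assumptions, not what the tree actually installs):

```make
# Hypothetical per-component target: remove exactly what "make install"
# put in /boot, instead of globbing on *xen*.
.PHONY: uninstall-xen
uninstall-xen:
	rm -f $(D)/boot/xen-$(XEN_FULLVERSION).gz
	rm -f $(D)/boot/xen.gz
	rm -f $(D)/boot/xen-syms-$(XEN_FULLVERSION)

uninstall: uninstall-xen
```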


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 07:21:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 07:21:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4Rif-00071t-19; Thu, 23 Aug 2012 07:20:45 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <jbeulich@suse.com>) id 1T4Rid-00071o-F3
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 07:20:43 +0000
X-Env-Sender: jbeulich@suse.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1345706436!8367711!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2312 invoked from network); 23 Aug 2012 07:20:36 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-16.tower-27.messagelabs.com with SMTP;
	23 Aug 2012 07:20:36 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 23 Aug 2012 08:20:35 +0100
Message-Id: <5035E7D3020000780008A60D@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Thu, 23 Aug 2012 08:20:35 +0100
From: "Jan Beulich" <jbeulich@suse.com>
To: <julien.grall@citrix.com>,<qemu-devel@nongnu.org>
References: <cover.1345552068.git.julien.grall@citrix.com>
	<3921d4d38a5c20943af1ceb64f5f0691d7bfd702.1345552068.git.julien.grall@citrix.com>
In-Reply-To: <3921d4d38a5c20943af1ceb64f5f0691d7bfd702.1345552068.git.julien.grall@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: christian.limpach@gmail.com, xen-devel@lists.xen.org,
	Stefano.Stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [XEN][RFC PATCH V2 03/17] hvm-pci: Handle PCI
 config space in Xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> Julien Grall <julien.grall@citrix.com> 08/22/12 8:56 PM >>>
>+int hvm_register_pcidev(domid_t domid, ioservid_t id,
>+                        uint8_t domain, uint8_t bus,
>+                        uint8_t device, uint8_t function)
>+{

"domain" needs to be "uint16_t".

Also, just to double check: we don't currently expose the option of MMCONFIG
to the guest (as otherwise the change would be incomplete)?

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 07:22:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 07:22:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4RkG-00076M-HF; Thu, 23 Aug 2012 07:22:24 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1T4RkF-00076D-3a
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 07:22:23 +0000
Received: from [85.158.139.83:52771] by server-3.bemta-5.messagelabs.com id
	DD/3F-27237-C2AD5305; Thu, 23 Aug 2012 07:22:20 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-15.tower-182.messagelabs.com!1345706539!27323409!1
X-Originating-IP: [192.89.123.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjg5LjEyMy4yNSA9PiA0ODk1MTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5949 invoked from network); 23 Aug 2012 07:22:20 -0000
Received: from smtp.tele.fi (HELO smtp.tele.fi) (192.89.123.25)
	by server-15.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 23 Aug 2012 07:22:20 -0000
X-Originating-Ip: [194.89.68.22]
Received: from ydin.reaktio.net (reaktio.net [194.89.68.22])
	by smtp.tele.fi (Postfix) with ESMTP id 1B0A1186F;
	Thu, 23 Aug 2012 10:22:18 +0300 (EEST)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id 8EF6C2005D; Thu, 23 Aug 2012 10:22:18 +0300 (EEST)
Date: Thu, 23 Aug 2012 10:22:18 +0300
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: Jonathan Tripathy <jonnyt@abpni.co.uk>
Message-ID: <20120823072218.GF19851@reaktio.net>
References: <50356E43.3030208@abpni.co.uk> <50356FB1.2070904@abpni.co.uk>
	<20120823060620.GE19851@reaktio.net>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20120823060620.GE19851@reaktio.net>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Can't see more than 3.5GB of RAM / UEFI / no e820
 memory map detected
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Aug 23, 2012 at 09:06:20AM +0300, Pasi Kärkkäinen wrote:
> On Thu, Aug 23, 2012 at 12:48:01AM +0100, Jonathan Tripathy wrote:
> >
> > Hi Everyone,
> >
> > CC: Xen-users
> >
> > I am running Ubuntu 12.04 x86_64. My machine has a Supermicro
> > X9SCI-LN4F motherboard with 32GB of RAM installed. To get Xen, I simply
> > did apt-get install xen-hypervisor, which gives me Ubuntu's Xen 4.1 version.
> >
> > For some reason, Xen can't see any more than about 3.5GB of RAM. I can
> > confirm this with xentop as well as xm info. I am definitely running a
> > 64-bit Dom0 kernel, as when I boot into it without Xen I can see all
> > 32GB of RAM by running "free -m".
> >
> > Has anybody come across this issue before? For what it's worth, I'm
> > booting my system using UEFI - could that have something to do with it?
> >
> > Any help is very much appreciated.
>
> Yes, this is a UEFI-related issue. Can you turn UEFI off?
>
> It looks like you're not running a UEFI-capable Xen hypervisor.
> (Xen 4.2 has UEFI support, and some vendors have backported UEFI support
> to older versions; for example, SUSE SLES11 SP2 contains UEFI support in
> Xen 4.1.)
>

Fixed the line above :)

-- Pasi


> > Thanks
> >
> > Here is the output of xm dmesg:
> >
> > (XEN) Xen version 4.1.2 (Ubuntu 4.1.2-2ubuntu2.2)
> > (stefan.bader@canonical.com) (gcc version 4.6.3 (Ubuntu/Linaro
> > 4.6.3-1ubuntu5) ) Sat Jul 21 09:01:19 UTC 2012
> > (XEN) Bootloader: GRUB 1.99-21ubuntu3.1
> > (XEN) Command line: placeholder
> > (XEN) Video information:
> > (XEN)  VGA is text mode 80x25, font 8x16
> > (XEN)  VBE/DDC methods: V2; EDID transfer time: 1 seconds
> > (XEN) Disc information:
> > (XEN)  Found 0 MBR signatures
> > (XEN)  Found 0 EDD information structures
>
> here:
>
> > (XEN) Xen-e801 RAM map:
> > (XEN)  0000000000000000 - 0000000000099c00 (usable)
> > (XEN)  0000000000100000 - 00000000ddd00000 (usable)
>
> This is the problem - only the legacy e801 memory map is detected here.
> You need the usual e820 memory map to see all the memory (i.e. the
> correct memory layout).
>
> You need Xen with UEFI support, or turn off UEFI in the BIOS.
>
> -- Pasi
>
> > (XEN) System RAM: 3548MB (3633764kB)
> > (XEN) ACPI: RSDP 000FDF00, 0024 (r2 SUPERM)
> > (XEN) ACPI: XSDT DDF9E098, 00AC (r1 SUPERM SMCI--MB        1 AMI     10013)
> > (XEN) ACPI: FACP DDFA90D8, 00F4 (r4 SUPERM SMCI--MB        1 AMI     10013)
> > (XEN) ACPI: DSDT DDF9E1D8, AEFA (r2 SUPERM SMCI--MB        0 INTL 20051117)
> > (XEN) ACPI: FACS DDFBDF80, 0040
> > (XEN) ACPI: APIC DDFA91D0, 0092 (r3 SUPERM SMCI--MB        1 AMI     10013)
> > (XEN) ACPI: FPDT DDFA9268, 0044 (r1 SUPERM SMCI--MB        1 AMI     10013)
> > (XEN) ACPI: MCFG DDFA92B0, 003C (r1 SUPERM SMCI--MB        1 MSFT       97)
> > (XEN) ACPI: PRAD DDFA92F0, 00BE (r2 PRADID  PRADTID        1 MSFT 3000001)
> > (XEN) ACPI: HPET DDFA93B0, 0038 (r1 SUPERM SMCI--MB        1 AMI.        5)
> > (XEN) ACPI: SSDT DDFA93E8, 036D (r1 SataRe SataTabl     1000 INTL 20091112)
> > (XEN) ACPI: SPMI DDFA9758, 0040 (r5 A M I   OEMSPMI        0 AMI.        0)
> > (XEN) ACPI: SSDT DDFA9798, 09A4 (r1  PmRef  Cpu0Ist     3000 INTL 20051117)
> > (XEN) ACPI: SSDT DDFAA140, 0A88 (r1  PmRef    CpuPm     3000 INTL 20051117)
> > (XEN) ACPI: DMAR DDFAABC8, 0078 (r1 INTEL      SNB         1 INTL        1)
> > (XEN) ACPI: BGRT DDFAAC40, 0038 (r0 SUPERM SMCI--MB        1 AMI     10013)
> > (XEN) ACPI: SPCR DDFAAC78, 0050 (r1  A M I   APTIO4        1 AMI.        5)
> > (XEN) ACPI: EINJ DDFAACC8, 0130 (r1    AMI AMI EINJ 0             0)
> > (XEN) ACPI: ERST DDFAADF8, 0210 (r1  AMIER AMI ERST 0             0)
> > (XEN) ACPI: HEST DDFAB008, 00A8 (r1    AMI AMI HEST 0             0)
> > (XEN) ACPI: BERT DDFAB0B0, 0030 (r1    AMI AMI BERT 0             0)
> > (XEN) Domain heap initialised
> > (XEN) ACPI: 32/64X FACS address mismatch in FADT -
> > ddfbdf80/0000000000000000, using 32
> > (XEN) Processor #0 7:10 APIC version 21
> > (XEN) Processor #2 7:10 APIC version 21
> > (XEN) Processor #4 7:10 APIC version 21
> > (XEN) Processor #6 7:10 APIC version 21
> > (XEN) Processor #1 7:10 APIC version 21
> > (XEN) Processor #3 7:10 APIC version 21
> > (XEN) Processor #5 7:10 APIC version 21
> > (XEN) Processor #7 7:10 APIC version 21
> > (XEN) IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-23
> > (XEN) Enabling APIC mode:  Flat.  Using 1 I/O APICs
> > (XEN) [VT-D]dmar.c:528:   RMRR address range not in reserved memory
> > base = dde16000 end = dde32fff; iommu_inclusive_mapping=1 parameter
> > may be needed.
> > (XEN) ERST table is invalid
> > (XEN) Switched to APIC driver x2apic_cluster.
> > (XEN) Using scheduler: SMP Credit Scheduler (credit)
> > (XEN) Detected 3292.644 MHz processor.
> > (XEN) Initing memory sharing.
> > (XEN) Intel VT-d Snoop Control enabled.
> > (XEN) Intel VT-d Dom0 DMA Passthrough not enabled.
> > (XEN) Intel VT-d Queued Invalidation enabled.
> > (XEN) Intel VT-d Interrupt Remapping enabled.
> > (XEN) Intel VT-d Shared EPT tables not enabled.
> > (XEN) I/O virtualisation enabled
> > (XEN)  - Dom0 mode: Relaxed
> > (XEN) Enabled directed EOI with ioapic_ack_old on!
> > (XEN) ENABLING IO-APIC IRQs
> > (XEN)  -> Using old ACK method
> > (XEN) Platform timer is 14.318MHz HPET
> > (XEN) Allocated console ring of 16 KiB.
> > (XEN) VMX: Supported advanced features:
> > (XEN)  - APIC MMIO access virtualisation
> > (XEN)  - APIC TPR shadow
> > (XEN)  - Extended Page Tables (EPT)
> > (XEN)  - Virtual-Processor Identifiers (VPID)
> > (XEN)  - Virtual NMI
> > (XEN)  - MSR direct-access bitmap
> > (XEN)  - Unrestricted Guest
> > (XEN) EPT supports 2MB super page.
> > (XEN) HVM: ASIDs enabled.
> > (XEN) HVM: VMX enabled
> > (XEN) HVM: Hardware Assisted Paging detected.
> > (XEN) Brought up 8 CPUs
> > (XEN) *** LOADING DOMAIN 0 ***
> > (XEN)  Xen  kernel: 64-bit, lsb, compat32
> > (XEN)  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x205d000
> > (XEN) PHYSICAL MEMORY ARRANGEMENT:
> > (XEN)  Dom0 alloc.:   00000000bc000000->00000000c0000000 (744642
> > pages to be allocated)
> > (XEN)  Init. ramdisk: 00000000c4b6a000->00000000dd7ffe00
> > (XEN) VIRTUAL MEMORY ARRANGEMENT:
> > (XEN)  Loaded kernel: ffffffff81000000->ffffffff8205d000
> > (XEN)  Init. ramdisk: ffffffff8205d000->ffffffff9acf2e00
> > (XEN)  Phys-Mach map: ffffffff9acf3000->ffffffff9b387ac0
> > (XEN)  Start info:    ffffffff9b388000->ffffffff9b3884b4
> > (XEN)  Page tables:   ffffffff9b389000->ffffffff9b468000
> > (XEN)  Boot stack:    ffffffff9b468000->ffffffff9b469000
> > (XEN)  TOTAL:         ffffffff80000000->ffffffff9b800000
> > (XEN)  ENTRY ADDRESS: ffffffff81cfd200
> > (XEN) Dom0 has maximum 8 VCPUs
> > (XEN) Scrubbing Free RAM: .done.
> > (XEN) Xen trace buffers: disabled
> > (XEN) Std. Loglevel: Errors and warnings
> > (XEN) Guest Loglevel: Nothing (Rate-limited: Errors and warnings)
> > (XEN) Xen is relinquishing VGA console.
> > (XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch
> > input to Xen)
> > (XEN) Freed 220kB init memory.
> > (XEN) physdev.c:155: dom0: wrong map_pirq type 3
> >

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 07:28:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 07:28:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4Rpg-0007Jn-Aj; Thu, 23 Aug 2012 07:28:00 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <jonnyt@abpni.co.uk>) id 1T4Rpe-0007JZ-J1
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 07:27:58 +0000
X-Env-Sender: jonnyt@abpni.co.uk
X-Msg-Ref: server-4.tower-27.messagelabs.com!1345706868!8510377!1
X-Originating-IP: [109.200.19.114]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12266 invoked from network); 23 Aug 2012 07:27:48 -0000
Received: from edge1.gosport.uk.abpni.net (HELO
	mail1.gosport.uk.corp.abpni.net) (109.200.19.114)
	by server-4.tower-27.messagelabs.com with SMTP;
	23 Aug 2012 07:27:48 -0000
Received: from localhost (mail1.gosport.corp.uk.abpni.net [127.0.0.1])
	by mail1.gosport.uk.corp.abpni.net (Postfix) with ESMTP id BD0285A0007
	for <xen-devel@lists.xen.org>; Thu, 23 Aug 2012 08:27:25 +0100 (BST)
X-Virus-Scanned: Debian amavisd-new at mail1.mail.gosport.corp.uk.abpni.net
Received: from mail1.gosport.uk.corp.abpni.net ([127.0.0.1])
	by localhost (mail1.mail.gosport.corp.uk.abpni.net [127.0.0.1])
	(amavisd-new, port 10024)
	with ESMTP id K75KKWjEacJB for <xen-devel@lists.xen.org>;
	Thu, 23 Aug 2012 08:27:23 +0100 (BST)
Received: from Jonathans-MacBook-Air.local (unknown [10.87.0.109])
	by mail1.gosport.uk.corp.abpni.net (Postfix) with ESMTPSA id
	0A6A15A0005; Thu, 23 Aug 2012 08:27:21 +0100 (BST)
Message-ID: <5035DB70.8090800@abpni.co.uk>
Date: Thu, 23 Aug 2012 08:27:44 +0100
From: Jonathan Tripathy <jonnyt@abpni.co.uk>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8;
	rv:14.0) Gecko/20120713 Thunderbird/14.0
MIME-Version: 1.0
To: =?ISO-8859-1?Q?Pasi_K=E4rkk=E4inen?= <pasik@iki.fi>
References: <50356E43.3030208@abpni.co.uk> <50356FB1.2070904@abpni.co.uk>
	<20120823060620.GE19851@reaktio.net>
	<20120823072218.GF19851@reaktio.net>
In-Reply-To: <20120823072218.GF19851@reaktio.net>
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Can't see more than 3.5GB of RAM / UEFI / no e820
 memory map detected
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="iso-8859-1"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 23/08/2012 08:22, Pasi Kärkkäinen wrote:
> On Thu, Aug 23, 2012 at 09:06:20AM +0300, Pasi Kärkkäinen wrote:
>> On Thu, Aug 23, 2012 at 12:48:01AM +0100, Jonathan Tripathy wrote:
>>> Hi Everyone,
>>>
>>> CC: Xen-users
>>>
>>> I am running Ubuntu 12.04 x86_64. My machine has a Supermicro
>>> X9SCI-LN4F motherboard with 32GB of RAM installed. To get Xen, I simply
>>> did apt-get install xen-hypervisor, which gives me Ubuntu's Xen 4.1 version.
>>>
>>> For some reason, Xen can't see any more than about 3.5GB of RAM. I can
>>> confirm this with xentop as well as xm info. I am definitely running a
>>> 64-bit Dom0 kernel, as when I boot into it without Xen I can see all
>>> 32GB of RAM by running "free -m".
>>>
>>> Has anybody come across this issue before? For what it's worth, I'm
>>> booting my system using UEFI - could that have something to do with it?
>>>
>>> Any help is very much appreciated.
>>>
>> Yes, this is a UEFI-related issue. Can you turn UEFI off?
>>
>> It looks like you're not running a UEFI-capable Xen hypervisor.
>> (Xen 4.2 has UEFI support, and some vendors have backported UEFI support
>> to older versions; for example, SUSE SLES11 SP2 contains UEFI support in
>> Xen 4.1.)
>>
> Fixed the line above :)
>
> -- Pasi
>
Thanks, Pasi.

A couple of questions:

I'm guessing xen.efi (from 4.2) just replaces GRUB?

Also, if I were to apply that patch from Server Fault
(http://serverfault.com/questions/342109/xen-only-sees-512mb-of-system-ram-should-be-8gb-uefi-boot),
would it have any bad consequences? I'm very security conscious, as
the DomUs are untrusted...

Thanks

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
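[Editor's note] On the xen.efi question above: with native UEFI support the firmware loads xen.efi directly instead of going through GRUB, and xen.efi reads a plain-text configuration file from the same EFI directory. A hedged sketch follows; the disk, partition, boot-entry label, and kernel/initrd filenames are illustrative assumptions, not values from this thread:

```shell
# Register xen.efi as a UEFI boot entry (disk and partition are assumptions).
efibootmgr -c -d /dev/sda -p 1 -L "Xen 4.2" -l '\EFI\xen\xen.efi'

# Example xen.cfg placed next to xen.efi (e.g. /boot/efi/EFI/xen/xen.cfg);
# the [global]/[section] layout is xen.cfg syntax, filenames are placeholders.
#   [global]
#   default=xen
#
#   [xen]
#   options=console=vga
#   kernel=vmlinuz-3.2.0-generic root=/dev/sda2 ro
#   ramdisk=initrd.img-3.2.0-generic
```

Because the firmware runs xen.efi itself, Xen receives the EFI memory map at boot, which is what replaces the missing e820 data discussed earlier in the thread.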

From xen-devel-bounces@lists.xen.org Thu Aug 23 07:28:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 07:28:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4Rpg-0007Ju-NT; Thu, 23 Aug 2012 07:28:00 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <jbeulich@suse.com>) id 1T4Rpf-0007Jc-5Q
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 07:27:59 +0000
X-Env-Sender: jbeulich@suse.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1345706873!2853462!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19060 invoked from network); 23 Aug 2012 07:27:53 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-6.tower-27.messagelabs.com with SMTP;
	23 Aug 2012 07:27:53 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 23 Aug 2012 08:27:52 +0100
Message-Id: <5035E986020000780008A617@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Thu, 23 Aug 2012 08:27:50 +0100
From: "Jan Beulich" <jbeulich@suse.com>
To: <julien.grall@citrix.com>,<qemu-devel@nongnu.org>
References: <cover.1345552068.git.julien.grall@citrix.com>
	<c378b04ee29071c1d6d68bd3ef48fedadb493b10.1345552068.git.julien.grall@citrix.com>
In-Reply-To: <c378b04ee29071c1d6d68bd3ef48fedadb493b10.1345552068.git.julien.grall@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: christian.limpach@gmail.com, xen-devel@lists.xen.org,
	Stefano.Stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [XEN][RFC PATCH V2 05/17] hvm: Modify hvm_op
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> Julien Grall <julien.grall@citrix.com> 08/22/12 8:56 PM >>>
>@@ -4069,20 +4053,12 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE(void) arg)
>
>switch ( a.index )
>{
>-            case HVM_PARAM_IOREQ_PFN:

Removing sub-ops which a domain can issue for itself (which for this and
another one below appears to be the case) is not allowed.

>+            case HVM_PARAM_IO_PFN_FIRST:

I don't see where in this patch this and the other new sub-op constants
get defined.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


Removing sub-ops which a domain can issue for itself (which for this and
another one below appears to be the case) is not allowed.

>+            case HVM_PARAM_IO_PFN_FIRST:

I don't see where in this patch this and the other new sub-op constants
get defined.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 07:32:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 07:32:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4RuF-0007a2-ER; Thu, 23 Aug 2012 07:32:43 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T4RuD-0007Zr-HY
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 07:32:41 +0000
Received: from [85.158.143.99:11978] by server-1.bemta-4.messagelabs.com id
	FC/9A-12504-89CD5305; Thu, 23 Aug 2012 07:32:40 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-216.messagelabs.com!1345707160!27484701!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTAzMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1374 invoked from network); 23 Aug 2012 07:32:40 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 07:32:40 -0000
X-IronPort-AV: E=Sophos;i="4.80,299,1344211200"; d="scan'208";a="14139198"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	23 Aug 2012 07:31:48 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 23 Aug 2012 08:31:47 +0100
Message-ID: <1345707105.12501.38.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Date: Thu, 23 Aug 2012 08:31:45 +0100
In-Reply-To: <5035E421020000780008A601@nat28.tlf.novell.com>
References: <20120822214709.296550@gmx.net>
	<1345703962.23624.57.camel@dagon.hellion.org.uk>
	<5035E421020000780008A601@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "p.d@gmx.de" <p.d@gmx.de>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] do not remove kernels or modules on
 uninstall. (Was: Re: make uninstall can delete xen-kernels)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-23 at 08:04 +0100, Jan Beulich wrote:
> >>> Ian Campbell <Ian.Campbell@citrix.com> 08/23/12 8:40 AM >>>
> >--- a/Makefile    Wed Aug 22 17:32:37 2012 +0100
> >+++ b/Makefile    Thu Aug 23 07:38:10 2012 +0100
> >@@ -228,8 +228,6 @@ uninstall:
> >    rm -f  $(D)$(SYSCONFIG_DIR)/xendomains
> >    rm -f  $(D)$(SYSCONFIG_DIR)/xencommons
> >    rm -rf $(D)/var/run/xen* $(D)/var/lib/xen*
> >-    rm -rf $(D)/boot/*xen*
> 
> But removing this line without replacement isn't right either - we at least
> need to undo what "make install" did. That may imply adding an
> uninstall-xen sub-target,

Right, I totally forgot about the hypervisor itself!

Perhaps this target should include a
	$(MAKE) -C xen uninstall
since that is the Makefile which knows how to undo its own install
target.
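
As a sketch, the suggestion above amounts to a top-level rule that delegates to the subdirectory's Makefile (hypothetical fragment, not the actual patch; variable and target names follow the quoted Makefile):

```make
# Hypothetical top-level Makefile fragment: delegate hypervisor cleanup
# to xen/Makefile, which knows exactly what its own install rule did.
.PHONY: uninstall
uninstall:
	$(MAKE) -C xen uninstall
	# ... removal of files this Makefile installed itself ...
```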

>  if we don't want to come up with a suitable
> pattern to do this here.
> 
> >-    rm -rf $(D)/lib/modules/*xen*
> >    rm -rf $(D)$(LIBDIR)/xen* $(D)$(BINDIR)/lomount
> >    rm -rf $(D)$(BINDIR)/cpuperf-perfcntr $(D)$(BINDIR)/cpuperf-xen
> >    rm -rf $(D)$(BINDIR)/xc_shadow
> 
> This may also be needed for the tools - wasn't it that BINDIR, LIBDIR and
> the like are now dependent on configure options?

Yes, I think you are right here too. I think this needs to be pushed
down too.

> Similarly I don't think the EFI binaries get properly cleaned up here (and
> that also would better be done with a per-subdir uninstall).

Right.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 07:40:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 07:40:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4S1J-0007mp-At; Thu, 23 Aug 2012 07:40:01 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jbeulich@suse.com>) id 1T4S1H-0007mk-HL
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 07:39:59 +0000
Received: from [85.158.143.99:37761] by server-2.bemta-4.messagelabs.com id
	0C/00-21239-E4ED5305; Thu, 23 Aug 2012 07:39:58 +0000
X-Env-Sender: jbeulich@suse.com
X-Msg-Ref: server-15.tower-216.messagelabs.com!1345707597!27486082!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3983 invoked from network); 23 Aug 2012 07:39:57 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-15.tower-216.messagelabs.com with SMTP;
	23 Aug 2012 07:39:57 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 23 Aug 2012 08:39:56 +0100
Message-Id: <5035EC5A020000780008A62A@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Thu, 23 Aug 2012 08:39:54 +0100
From: "Jan Beulich" <jbeulich@suse.com>
To: <jonnyt@abpni.co.uk>,<pasik@iki.fi>
References: <50356E43.3030208@abpni.co.uk> <50356FB1.2070904@abpni.co.uk>
	<20120823060620.GE19851@reaktio.net>
	<20120823072218.GF19851@reaktio.net> <5035DB70.8090800@abpni.co.uk>
In-Reply-To: <5035DB70.8090800@abpni.co.uk>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Can't see more than 3.5GB of RAM / UEFI / no e820
 memory map detected
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> Jonathan Tripathy <jonnyt@abpni.co.uk> 08/23/12 9:29 AM >>>
>I'm guessing xen.efi (from 4.2) just replaces grub??

"Replaces" is the wrong term. It simply makes the use of grub.efi (or however
it's named) unnecessary.

>Also, if I were to apply that patch from superuser 
>(http://serverfault.com/questions/342109/xen-only-sees-512mb-of-system-ram-should-be-8gb-uefi-boot), 
>would it have any bad consequences? I'm very security conscious as 
>the DomUs are untrusted...

If you wanted to do that, I'd strongly recommend only removing the E801 code
(obviously, from your log, you don't get E820 entries reported anyway, so this
would be to not harm using hypervisors built from the same source on other
systems) or simply swapping the E801 and multiboot handling order (which may
actually be something to consider even upstream, so you'd be welcome to post
such a patch).

But in the end, in order to indeed use UEFI as intended, you'll need to switch to
using xen.efi and an EFI-enabled Dom0 kernel (which upstream pv-ops for now
isn't).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 07:59:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 07:59:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4SJp-0007zJ-BT; Thu, 23 Aug 2012 07:59:09 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T4SJn-0007zE-Gu
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 07:59:07 +0000
Received: from [85.158.143.99:39912] by server-3.bemta-4.messagelabs.com id
	A9/9D-08232-AC2E5305; Thu, 23 Aug 2012 07:59:06 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-216.messagelabs.com!1345708743!27530220!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTAzMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8677 invoked from network); 23 Aug 2012 07:59:04 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-9.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 07:59:04 -0000
X-IronPort-AV: E=Sophos;i="4.80,299,1344211200"; d="scan'208";a="14139758"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	23 Aug 2012 07:59:03 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 23 Aug 2012 08:59:03 +0100
Message-ID: <1345708742.12501.48.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Date: Thu, 23 Aug 2012 08:59:02 +0100
In-Reply-To: <1345707105.12501.38.camel@zakaz.uk.xensource.com>
References: <20120822214709.296550@gmx.net>
	<1345703962.23624.57.camel@dagon.hellion.org.uk>
	<5035E421020000780008A601@nat28.tlf.novell.com>
	<1345707105.12501.38.camel@zakaz.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "p.d@gmx.de" <p.d@gmx.de>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] do not remove kernels or modules on
 uninstall. (Was: Re: make uninstall can delete xen-kernels)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-23 at 08:31 +0100, Ian Campbell wrote:
> On Thu, 2012-08-23 at 08:04 +0100, Jan Beulich wrote:
> > >>> Ian Campbell <Ian.Campbell@citrix.com> 08/23/12 8:40 AM >>>
> > >--- a/Makefile    Wed Aug 22 17:32:37 2012 +0100
> > >+++ b/Makefile    Thu Aug 23 07:38:10 2012 +0100
> > >@@ -228,8 +228,6 @@ uninstall:
> > >    rm -f  $(D)$(SYSCONFIG_DIR)/xendomains
> > >    rm -f  $(D)$(SYSCONFIG_DIR)/xencommons
> > >    rm -rf $(D)/var/run/xen* $(D)/var/lib/xen*
> > >-    rm -rf $(D)/boot/*xen*
> > 
> > But removing this line without replacement isn't right either - we at least
> > need to undo what "make install" did. That may imply adding an
> > uninstall-xen sub-target,
> 
> Right, I totally forgot about the hypervisor itself!
> 
> Perhaps this target should include a
> 	$(MAKE) -C xen uninstall
> since that is the Makefile which knows how to undo its own install
> target.

Like this, which handles EFI too but not (yet) tools.

make dist-xen
make DESTDIR=$(pwd)/dist/install uninstall

Leaves just the dist/install/boot dir which I don't think we need to
bother cleaning up (I don't think rmdir --ignore-fail-on-non-empty is
portable).
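
As an aside, a portable way to get the effect of `--ignore-fail-on-non-empty` (a GNU coreutils extension to rmdir) in POSIX sh is simply to let rmdir fail quietly; a minimal sketch:

```shell
# Remove a directory only if it is empty, without failing when it is not.
# Portable POSIX sh replacement for GNU rmdir's --ignore-fail-on-non-empty.
cleanup_dir() {
    rmdir "$1" 2>/dev/null || true
}

mkdir -p demo/empty demo/full
touch demo/full/file

cleanup_dir demo/empty    # empty: directory is removed
cleanup_dir demo/full     # non-empty: left in place, exit status still 0
```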

8<------------------------------------
# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1345708184 -3600
# Node ID 101956baa3469f5f338c661f1ceab23077bd432b
# Parent  9cb256660bfcfdf20f869ea28881115d622ef1a4
do not remove kernels or modules on uninstall.

The pattern used is very broad and will delete any kernel with xen in
its filename, and likewise any modules, including those which come
packaged with the distribution etc.

I don't think this was ever the right thing to do but it is doubly
wrong now that Xen does not even build or install a kernel by default.

Push cleanup of the installed hypervisor down into xen/Makefile so that
it can clean up exactly what it actually installs.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

diff -r 9cb256660bfc -r 101956baa346 Makefile
--- a/Makefile	Thu Aug 23 08:28:42 2012 +0100
+++ b/Makefile	Thu Aug 23 08:49:44 2012 +0100
@@ -220,6 +220,7 @@ help:
 uninstall: D=$(DESTDIR)
 uninstall:
 	[ -d $(D)$(XEN_CONFIG_DIR) ] && mv -f $(D)$(XEN_CONFIG_DIR) $(D)$(XEN_CONFIG_DIR).old-`date +%s` || true
+	$(MAKE) -C xen uninstall
 	rm -rf $(D)$(CONFIG_DIR)/init.d/xendomains $(D)$(CONFIG_DIR)/init.d/xend
 	rm -rf $(D)$(CONFIG_DIR)/init.d/xencommons $(D)$(CONFIG_DIR)/init.d/xen-watchdog
 	rm -rf $(D)$(CONFIG_DIR)/hotplug/xen-backend.agent
@@ -228,8 +229,6 @@ uninstall:
 	rm -f  $(D)$(SYSCONFIG_DIR)/xendomains
 	rm -f  $(D)$(SYSCONFIG_DIR)/xencommons
 	rm -rf $(D)/var/run/xen* $(D)/var/lib/xen*
-	rm -rf $(D)/boot/*xen*
-	rm -rf $(D)/lib/modules/*xen*
 	rm -rf $(D)$(LIBDIR)/xen* $(D)$(BINDIR)/lomount
 	rm -rf $(D)$(BINDIR)/cpuperf-perfcntr $(D)$(BINDIR)/cpuperf-xen
 	rm -rf $(D)$(BINDIR)/xc_shadow
diff -r 9cb256660bfc -r 101956baa346 xen/Makefile
--- a/xen/Makefile	Thu Aug 23 08:28:42 2012 +0100
+++ b/xen/Makefile	Thu Aug 23 08:49:44 2012 +0100
@@ -20,8 +20,8 @@ default: build
 .PHONY: dist
 dist: install
 
-.PHONY: build install clean distclean cscope TAGS tags MAP gtags
-build install debug clean distclean cscope TAGS tags MAP gtags::
+.PHONY: build install uninstall clean distclean cscope TAGS tags MAP gtags
+build install uninstall debug clean distclean cscope TAGS tags MAP gtags::
 	$(MAKE) -f Rules.mk _$@
 
 .PHONY: _build
@@ -48,6 +48,21 @@ _install: $(TARGET).gz
 		fi; \
 	fi
 
+.PHONY: _uninstall
+_uninstall: D=$(DESTDIR)
+_uninstall: T=$(notdir $(TARGET))
+_uninstall:
+	rm -f $(D)/boot/$(T)-$(XEN_FULLVERSION).gz
+	rm -f $(D)/boot/$(T)-$(XEN_VERSION).$(XEN_SUBVERSION).gz
+	rm -f $(D)/boot/$(T)-$(XEN_VERSION).gz
+	rm -f $(D)/boot/$(T).gz
+	rm -f $(D)/boot/$(T)-syms-$(XEN_FULLVERSION)
+	rm -f $(D)$(EFI_DIR)/$(T)-$(XEN_FULLVERSION).efi
+	rm -f $(D)$(EFI_DIR)/$(T)-$(XEN_VERSION).$(XEN_SUBVERSION).efi
+	rm -f $(D)$(EFI_DIR)/$(T)-$(XEN_VERSION).efi
+	rm -f $(D)$(EFI_DIR)/$(T).efi
+	rm -f $(D)$(EFI_MOUNTPOINT)/efi/$(EFI_VENDOR)/$(T)-$(XEN_FULLVERSION).efi
+
 .PHONY: _debug
 _debug:
 	objdump -D -S $(TARGET)-syms > $(TARGET).s




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 08:08:24 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 08:08:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4SSX-0000Bl-BQ; Thu, 23 Aug 2012 08:08:09 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jonnyt@abpni.co.uk>) id 1T4SSV-0000Bg-NL
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 08:08:07 +0000
Received: from [85.158.139.83:41147] by server-8.bemta-5.messagelabs.com id
	E8/CB-02481-5E4E5305; Thu, 23 Aug 2012 08:08:05 +0000
X-Env-Sender: jonnyt@abpni.co.uk
X-Msg-Ref: server-12.tower-182.messagelabs.com!1345709282!27415706!1
X-Originating-IP: [109.200.19.114]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1609 invoked from network); 23 Aug 2012 08:08:02 -0000
Received: from edge1.gosport.uk.abpni.net (HELO
	mail1.gosport.uk.corp.abpni.net) (109.200.19.114)
	by server-12.tower-182.messagelabs.com with SMTP;
	23 Aug 2012 08:08:02 -0000
Received: from localhost (mail1.gosport.corp.uk.abpni.net [127.0.0.1])
	by mail1.gosport.uk.corp.abpni.net (Postfix) with ESMTP id D5B6F5A0006
	for <xen-devel@lists.xen.org>; Thu, 23 Aug 2012 09:07:39 +0100 (BST)
X-Virus-Scanned: Debian amavisd-new at mail1.mail.gosport.corp.uk.abpni.net
Received: from mail1.gosport.uk.corp.abpni.net ([127.0.0.1])
	by localhost (mail1.mail.gosport.corp.uk.abpni.net [127.0.0.1])
	(amavisd-new, port 10024)
	with ESMTP id Xd5tEikpWVEv for <xen-devel@lists.xen.org>;
	Thu, 23 Aug 2012 09:07:37 +0100 (BST)
Received: from mail.abpni.co.uk (unknown [10.87.17.3])
	by mail1.gosport.uk.corp.abpni.net (Postfix) with ESMTPA id 7B2F65A0005;
	Thu, 23 Aug 2012 09:07:36 +0100 (BST)
MIME-Version: 1.0
Date: Thu, 23 Aug 2012 09:07:59 +0100
From: Jonathan Tripathy <jonnyt@abpni.co.uk>
To: Jan Beulich <jbeulich@suse.com>, <xen-devel@lists.xen.org>
In-Reply-To: <5035EC5A020000780008A62A@nat28.tlf.novell.com>
References: <50356E43.3030208@abpni.co.uk> <50356FB1.2070904@abpni.co.uk>
	<20120823060620.GE19851@reaktio.net>
	<20120823072218.GF19851@reaktio.net> <5035DB70.8090800@abpni.co.uk>
	<5035EC5A020000780008A62A@nat28.tlf.novell.com>
Message-ID: <9eb3fd179342b6962f057e499b375c4e@abpni.co.uk>
X-Sender: jonnyt@abpni.co.uk
User-Agent: Roundcube Webmail/0.6
Subject: Re: [Xen-devel] Can't see more than 3.5GB of RAM / UEFI / no e820
 memory map detected
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On 23.08.2012 08:39, Jan Beulich wrote:
>>>> Jonathan Tripathy <jonnyt@abpni.co.uk> 08/23/12 9:29 AM >>>
>>I'm guessing xen.efi (from 4.2) just replaces grub??
>
> "Replaces" is the wrong term. It simply makes the use of grub.efi (or
> however it's named) unnecessary.
>
>>Also, if I were to apply that patch from superuser
>>(http://serverfault.com/questions/342109/xen-only-sees-512mb-of-system-ram-should-be-8gb-uefi-boot),
>>would it have any bad consequences? I'm very security conscious as
>>the DomUs are untrusted...
>
> If you wanted to do that, I'd strongly recommend only removing the E801
> code (obviously, from your log, you don't get E820 entries reported
> anyway, so this would be to not harm using hypervisors built from the
> same source on other systems) or simply swapping the E801 and multiboot
> handling order (which may actually be something to consider even
> upstream, so you'd be welcome to post such a patch).
>
> But in the end, in order to indeed use UEFI as intended, you'll need to
> switch to using xen.efi and an EFI-enabled Dom0 kernel (which upstream
> pv-ops for now isn't).
>
> Jan

Hi Jan,

I'll submit a patch with the map entries in the if block swapped. I'll 
make the patch, then test it for you guys, then post it. Do I just send 
it to this list (for the benefit of others and for upstream 
consideration)?

When you say "use UEFI as intended", is there something wrong with just 
flipping the if block on its head?

Thanks


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 08:16:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 08:16:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4SZy-0000LW-8b; Thu, 23 Aug 2012 08:15:50 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1T4SZw-0000LO-6I
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 08:15:48 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1345709541!8518439!1
X-Originating-IP: [209.85.215.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19338 invoked from network); 23 Aug 2012 08:12:22 -0000
Received: from mail-ey0-f173.google.com (HELO mail-ey0-f173.google.com)
	(209.85.215.173)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 08:12:22 -0000
Received: by eaac13 with SMTP id c13so127623eaa.32
	for <xen-devel@lists.xen.org>; Thu, 23 Aug 2012 01:12:21 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:user-agent:date:subject:from:to:cc:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=1TtZ+v7Mc3kjCqQrBpfBdfm0gDjdjEefyboF8c1JnWk=;
	b=rVnqSYAs/iMEQzMLVXQ01AfPTww4vIXkwEht50NpDT/UzMN1CiIYAqTix1tPrX+F43
	g7b+I+q3e73aePOzH2wiyrCNuUODesNmQTnktotCTl7Xgk1deX31hyPFigIGvmkp9uPL
	+9JAZJW6zwIKMp7xoXeNupEEyQSfpnBBGJ+S96+rfuCJW4TJroIEIY9DOCO1j1Vsha1L
	ptP4I90ZHazODurHWlsdNvhgQkbRVUreS3ZGdINmiUHGTqpD8xOG6mwOVDqyOzWMEJiB
	PnUHizDcNkjum2SO4/yCGCo8aUIGatnwDhwQNXTLiay/EJtQYxjvVoU3gIIJddWwgG4L
	my/Q==
Received: by 10.14.4.198 with SMTP id 46mr881371eej.11.1345709541473;
	Thu, 23 Aug 2012 01:12:21 -0700 (PDT)
Received: from [192.168.1.3] (host86-184-74-117.range86-184.btcentralplus.com.
	[86.184.74.117])
	by mx.google.com with ESMTPS id h2sm19269314eeo.3.2012.08.23.01.12.19
	(version=SSLv3 cipher=OTHER); Thu, 23 Aug 2012 01:12:20 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.33.0.120411
Date: Thu, 23 Aug 2012 09:12:15 +0100
From: Keir Fraser <keir@xen.org>
To: Jonathan Tripathy <jonnyt@abpni.co.uk>,
	Pasi =?ISO-8859-1?B?S+Rya2vkaW5lbg==?= <pasik@iki.fi>
Message-ID: <CC5BA46F.49F32%keir@xen.org>
Thread-Topic: [Xen-devel] Can't see more than 3.5GB of RAM / UEFI / no e820
	memory map detected
Thread-Index: Ac2BBwFKkW+jImO3T0Go7Ex3XCUTkg==
In-Reply-To: <5035DB70.8090800@abpni.co.uk>
Mime-version: 1.0
Cc: Jan Beulich <jbeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Can't see more than 3.5GB of RAM / UEFI / no e820
 memory map detected
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 23/08/2012 08:27, "Jonathan Tripathy" <jonnyt@abpni.co.uk> wrote:

> Thanks, Pasi.
> 
> A couple of questions:
> 
> I'm guessing xen.efi (from 4.2) just replaces grub??
> 
> Also, if I were to apply that patch from superuser
> (http://serverfault.com/questions/342109/xen-only-sees-512mb-of-system-ram-should-be-8gb-uefi-boot),
> would it have any bad consequences? I'm very security conscious as
> the DomUs are untrusted...

Firstly, the same effect *should* be had by adding no-real-mode to your Xen
command line. So try that first.
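
(For reference, a grub.cfg entry passing that option might look roughly like the following; the menu entry name, paths and kernel versions are placeholders, not taken from this thread:)

```
menuentry 'Xen (no-real-mode)' {
    multiboot /boot/xen.gz no-real-mode
    module /boot/vmlinuz-3.x-xen root=/dev/sda1 ro
    module /boot/initrd-3.x-xen.img
}
```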

Secondly, it is arguable that we should patch Xen to prefer "Multiboot-e820"
over "Xen-e801".

And yes, overall if you have a UEFI BIOS then using UEFI Xen is probably
best of all :)

 -- Keir

> Thanks
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 08:24:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 08:24:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4Shb-0000VN-6A; Thu, 23 Aug 2012 08:23:43 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T4ShZ-0000VI-Vt
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 08:23:42 +0000
Received: from [85.158.138.51:11293] by server-8.bemta-3.messagelabs.com id
	00/B9-29583-D88E5305; Thu, 23 Aug 2012 08:23:41 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-174.messagelabs.com!1345710218!25790647!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTAzMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16531 invoked from network); 23 Aug 2012 08:23:39 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 08:23:39 -0000
X-IronPort-AV: E=Sophos;i="4.80,299,1344211200"; d="scan'208";a="14140356"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	23 Aug 2012 08:23:34 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 23 Aug 2012 09:23:34 +0100
Message-ID: <1345710212.12501.55.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Date: Thu, 23 Aug 2012 09:23:32 +0100
In-Reply-To: <1345708742.12501.48.camel@zakaz.uk.xensource.com>
References: <20120822214709.296550@gmx.net>
	<1345703962.23624.57.camel@dagon.hellion.org.uk>
	<5035E421020000780008A601@nat28.tlf.novell.com>
	<1345707105.12501.38.camel@zakaz.uk.xensource.com>
	<1345708742.12501.48.camel@zakaz.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "p.d@gmx.de" <p.d@gmx.de>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] do not remove kernels or modules on
 uninstall. (Was: Re: make uninstall can delete xen-kernels)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-23 at 08:59 +0100, Ian Campbell wrote:
> On Thu, 2012-08-23 at 08:31 +0100, Ian Campbell wrote:
> > On Thu, 2012-08-23 at 08:04 +0100, Jan Beulich wrote:
> > > >>> Ian Campbell <Ian.Campbell@citrix.com> 08/23/12 8:40 AM >>>
> > > >--- a/Makefile    Wed Aug 22 17:32:37 2012 +0100
> > > >+++ b/Makefile    Thu Aug 23 07:38:10 2012 +0100
> > > >@@ -228,8 +228,6 @@ uninstall:
> > > >    rm -f  $(D)$(SYSCONFIG_DIR)/xendomains
> > > >    rm -f  $(D)$(SYSCONFIG_DIR)/xencommons
> > > >    rm -rf $(D)/var/run/xen* $(D)/var/lib/xen*
> > > >-    rm -rf $(D)/boot/*xen*
> > > 
> > > But removing this line without replacement isn't right either - we at least
> > > need to undo what "make install" did. That may imply adding an
> > > uninstall-xen sub-target,
> > 
> > Right, I totally forgot about the hypervisor itself!
> > 
> > Perhaps this target should include a
> > 	$(MAKE) -C xen uninstall
> > since that is the Makefile which knows how to undo its own install
> > target.
> 
> Like this, which handles EFI too but not (yet) tools.

Here is tools. This cleans up a superset of what was cleaned up before
this change but still leaves a lot of detritus. See attached
install.before, uninstall.before & uninstall.after (install.after is
identical to install.before). This is a much bigger problem and probably
requires proper recursive subdir-uninstall rules for everything under
tools. 
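
(The recursive subdir-uninstall idea mentioned above can be sketched as follows; the subdirectory names and echo recipes are invented for illustration — in reality each subdir's Makefile would remove exactly what its own install rule created:)

```shell
# Top-level Makefile fans "uninstall" out to per-subdir uninstall targets,
# mirroring the existing subdirs-install convention.
mkdir -p subdirs-demo/libxc subdirs-demo/xenstore
printf 'SUBDIRS := libxc xenstore\n.PHONY: uninstall $(SUBDIRS:%%=uninstall-%%)\nuninstall: $(SUBDIRS:%%=uninstall-%%)\n$(SUBDIRS:%%=uninstall-%%):\n\t$(MAKE) -C $(@:uninstall-%%=%%) uninstall\n' > subdirs-demo/Makefile
# Each subdir knows how to undo its own install (stubbed with echo here).
for d in libxc xenstore; do
  printf 'uninstall:\n\t@echo removing %s files\n' "$d" > "subdirs-demo/$d/Makefile"
done
make -C subdirs-demo uninstall
```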

That broader rework is certainly a post 4.2 thing IMHO. I'm in two minds
about this patch as a 4.2 thing, but given that the regression happened
due to the switch to autoconf in 4.2 I think it might be good to take,
even though, as a percentage of what we install, the delta is pretty
insignificant.

8<---------------------------------------------------------

# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1345710001 -3600
# Node ID eaf499f0f7071fc0bfd84901babfd1ae18227ebb
# Parent  101956baa3469f5f338c661f1ceab23077bd432b
uninstall: push tools uninstall down into tools/Makefile

Many of the rules here depend on having run configure and the
variables which it defines in config/Tools.mk

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

diff -r 101956baa346 -r eaf499f0f707 Makefile
--- a/Makefile	Thu Aug 23 08:49:44 2012 +0100
+++ b/Makefile	Thu Aug 23 09:20:01 2012 +0100
@@ -229,34 +229,7 @@ uninstall:
 	rm -f  $(D)$(SYSCONFIG_DIR)/xendomains
 	rm -f  $(D)$(SYSCONFIG_DIR)/xencommons
 	rm -rf $(D)/var/run/xen* $(D)/var/lib/xen*
-	rm -rf $(D)$(LIBDIR)/xen* $(D)$(BINDIR)/lomount
-	rm -rf $(D)$(BINDIR)/cpuperf-perfcntr $(D)$(BINDIR)/cpuperf-xen
-	rm -rf $(D)$(BINDIR)/xc_shadow
-	rm -rf $(D)$(BINDIR)/pygrub
-	rm -rf $(D)$(BINDIR)/setsize $(D)$(BINDIR)/tbctl
-	rm -rf $(D)$(BINDIR)/xsls
-	rm -rf $(D)$(BINDIR)/xenstore* $(D)$(BINDIR)/xentrace*
-	rm -rf $(D)$(BINDIR)/xen-detect $(D)$(BINDIR)/xencons
-	rm -rf $(D)$(BINDIR)/xenpvnetboot $(D)$(BINDIR)/qemu-*-xen
-	rm -rf $(D)$(INCLUDEDIR)/xenctrl* $(D)$(INCLUDEDIR)/xenguest.h
-	rm -rf $(D)$(INCLUDEDIR)/xs_lib.h $(D)$(INCLUDEDIR)/xs.h
-	rm -rf $(D)$(INCLUDEDIR)/xenstore-compat/xs_lib.h $(D)$(INCLUDEDIR)/xenstore-compat/xs.h
-	rm -rf $(D)$(INCLUDEDIR)/xenstore_lib.h $(D)$(INCLUDEDIR)/xenstore.h
-	rm -rf $(D)$(INCLUDEDIR)/xen
-	rm -rf $(D)$(INCLUDEDIR)/_libxl* $(D)$(INCLUDEDIR)/libxl*
-	rm -rf $(D)$(INCLUDEDIR)/xenstat.h $(D)$(INCLUDEDIR)/xentoollog.h
-	rm -rf $(D)$(LIBDIR)/libxenctrl* $(D)$(LIBDIR)/libxenguest*
-	rm -rf $(D)$(LIBDIR)/libxenstore* $(D)$(LIBDIR)/libxlutil*
-	rm -rf $(D)$(LIBDIR)/python/xen $(D)$(LIBDIR)/python/grub
-	rm -rf $(D)$(LIBDIR)/xen/
-	rm -rf $(D)$(LIBEXEC)/xen*
-	rm -rf $(D)$(SBINDIR)/setmask
-	rm -rf $(D)$(SBINDIR)/xen* $(D)$(SBINDIR)/netfix $(D)$(SBINDIR)/xm
-	rm -rf $(D)$(SHAREDIR)/doc/xen
-	rm -rf $(D)$(SHAREDIR)/xen
-	rm -rf $(D)$(SHAREDIR)/qemu-xen
-	rm -rf $(D)$(MAN1DIR)/xen*
-	rm -rf $(D)$(MAN8DIR)/xen*
+	$(MAKE) -C tools uninstall
 	rm -rf $(D)/boot/tboot*
 
 # Legacy targets for compatibility
diff -r 101956baa346 -r eaf499f0f707 tools/Makefile
--- a/tools/Makefile	Thu Aug 23 08:49:44 2012 +0100
+++ b/tools/Makefile	Thu Aug 23 09:20:01 2012 +0100
@@ -71,6 +71,38 @@ install: subdirs-install
 	$(INSTALL_DIR) $(DESTDIR)/var/lib/xen
 	$(INSTALL_DIR) $(DESTDIR)/var/lock/subsys
 
+.PHONY: uninstall
+uninstall: D=$(DESTDIR)
+uninstall:
+	rm -rf $(D)$(LIBDIR)/xen* $(D)$(BINDIR)/lomount
+	rm -rf $(D)$(BINDIR)/cpuperf-perfcntr $(D)$(BINDIR)/cpuperf-xen
+	rm -rf $(D)$(BINDIR)/xc_shadow
+	rm -rf $(D)$(BINDIR)/pygrub
+	rm -rf $(D)$(BINDIR)/setsize $(D)$(BINDIR)/tbctl
+	rm -rf $(D)$(BINDIR)/xsls
+	rm -rf $(D)$(BINDIR)/xenstore* $(D)$(BINDIR)/xentrace*
+	rm -rf $(D)$(BINDIR)/xen-detect $(D)$(BINDIR)/xencons
+	rm -rf $(D)$(BINDIR)/xenpvnetboot $(D)$(BINDIR)/qemu-*-xen
+	rm -rf $(D)$(INCLUDEDIR)/xenctrl* $(D)$(INCLUDEDIR)/xenguest.h
+	rm -rf $(D)$(INCLUDEDIR)/xs_lib.h $(D)$(INCLUDEDIR)/xs.h
+	rm -rf $(D)$(INCLUDEDIR)/xenstore-compat/xs_lib.h $(D)$(INCLUDEDIR)/xenstore-compat/xs.h
+	rm -rf $(D)$(INCLUDEDIR)/xenstore_lib.h $(D)$(INCLUDEDIR)/xenstore.h
+	rm -rf $(D)$(INCLUDEDIR)/xen
+	rm -rf $(D)$(INCLUDEDIR)/_libxl* $(D)$(INCLUDEDIR)/libxl*
+	rm -rf $(D)$(INCLUDEDIR)/xenstat.h $(D)$(INCLUDEDIR)/xentoollog.h
+	rm -rf $(D)$(LIBDIR)/libxenctrl* $(D)$(LIBDIR)/libxenguest*
+	rm -rf $(D)$(LIBDIR)/libxenstore* $(D)$(LIBDIR)/libxlutil*
+	rm -rf $(D)$(LIBDIR)/python/xen $(D)$(LIBDIR)/python/grub
+	rm -rf $(D)$(LIBDIR)/xen/
+	rm -rf $(D)$(LIBEXEC)/xen*
+	rm -rf $(D)$(SBINDIR)/setmask
+	rm -rf $(D)$(SBINDIR)/xen* $(D)$(SBINDIR)/netfix $(D)$(SBINDIR)/xm
+	rm -rf $(D)$(SHAREDIR)/doc/xen
+	rm -rf $(D)$(SHAREDIR)/xen
+	rm -rf $(D)$(SHAREDIR)/qemu-xen
+	rm -rf $(D)$(MAN1DIR)/xen*
+	rm -rf $(D)$(MAN8DIR)/xen*
+
 .PHONY: clean
 clean: subdirs-clean
 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> > > >-    rm -rf $(D)/boot/*xen*
> > > 
> > > But removing this line without replacement isn't right either - we at least
> > > need to undo what "make install" did. That may imply adding an
> > > uninstall-xen sub-target,
> > 
> > Right, I totally forgot about the hypervisor itself!
> > 
> > Perhaps this target should include a
> > 	$(MAKE) -C xen uninstall
> > since that is the Makefile which knows how to undo its own install
> > target.
> 
> Like this, which handles EFI too but not (yet) tools.

Here is tools. This cleans up a superset of what was cleaned up before
this change but still leaves a lot of detritus. See attached
install.before, uninstall.before & uninstall.after (install.after is
identical to install.before). This is a much bigger problem and probably
requires proper recursive subdir-uninstall rules for everything under
tools. 

That broader rework is certainly a post-4.2 thing IMHO. I'm in two
minds about taking this patch in 4.2, but given that the regression
was introduced by the switch to autoconf in 4.2 I think it might be
good to take, even though as a percentage of what we install the delta
is pretty insignificant.
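The before/after comparison behind those attachments can be sketched as
follows. Everything below is a hypothetical stand-in (a mock tree, a
couple of made-up file names, and a deliberately incomplete rm rule to
produce leftovers), not the real install tree or the real rules:

```shell
# Simulate a small DESTDIR-style install tree, run a subset of
# uninstall-style rm rules over it, then list what survives, the same
# comparison as the attached install.before / uninstall.after listings.
set -e
D=$(mktemp -d)
mkdir -p "$D/usr/bin" "$D/usr/share/doc/xen"
touch "$D/usr/bin/xenstore-ls" "$D/usr/bin/lomount" \
      "$D/usr/share/doc/xen/README"
find "$D" -type f | sort > install.before
# Mock uninstall rules; lomount is intentionally left out here so that
# the comparison below has some detritus to show.
rm -rf "$D"/usr/bin/xenstore* "$D"/usr/share/doc/xen
find "$D" -type f | sort > uninstall.after
# Anything printed here is a file the uninstall failed to remove.
comm -12 install.before uninstall.after
```

In the real tree the same `find | sort` before and after `make
uninstall` is what shows how much detritus remains.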

8<---------------------------------------------------------

# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1345710001 -3600
# Node ID eaf499f0f7071fc0bfd84901babfd1ae18227ebb
# Parent  101956baa3469f5f338c661f1ceab23077bd432b
uninstall: push tools uninstall down into tools/Makefile

Many of the rules here depend on configure having been run and on the
variables which it defines in config/Tools.mk.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

diff -r 101956baa346 -r eaf499f0f707 Makefile
--- a/Makefile	Thu Aug 23 08:49:44 2012 +0100
+++ b/Makefile	Thu Aug 23 09:20:01 2012 +0100
@@ -229,34 +229,7 @@ uninstall:
 	rm -f  $(D)$(SYSCONFIG_DIR)/xendomains
 	rm -f  $(D)$(SYSCONFIG_DIR)/xencommons
 	rm -rf $(D)/var/run/xen* $(D)/var/lib/xen*
-	rm -rf $(D)$(LIBDIR)/xen* $(D)$(BINDIR)/lomount
-	rm -rf $(D)$(BINDIR)/cpuperf-perfcntr $(D)$(BINDIR)/cpuperf-xen
-	rm -rf $(D)$(BINDIR)/xc_shadow
-	rm -rf $(D)$(BINDIR)/pygrub
-	rm -rf $(D)$(BINDIR)/setsize $(D)$(BINDIR)/tbctl
-	rm -rf $(D)$(BINDIR)/xsls
-	rm -rf $(D)$(BINDIR)/xenstore* $(D)$(BINDIR)/xentrace*
-	rm -rf $(D)$(BINDIR)/xen-detect $(D)$(BINDIR)/xencons
-	rm -rf $(D)$(BINDIR)/xenpvnetboot $(D)$(BINDIR)/qemu-*-xen
-	rm -rf $(D)$(INCLUDEDIR)/xenctrl* $(D)$(INCLUDEDIR)/xenguest.h
-	rm -rf $(D)$(INCLUDEDIR)/xs_lib.h $(D)$(INCLUDEDIR)/xs.h
-	rm -rf $(D)$(INCLUDEDIR)/xenstore-compat/xs_lib.h $(D)$(INCLUDEDIR)/xenstore-compat/xs.h
-	rm -rf $(D)$(INCLUDEDIR)/xenstore_lib.h $(D)$(INCLUDEDIR)/xenstore.h
-	rm -rf $(D)$(INCLUDEDIR)/xen
-	rm -rf $(D)$(INCLUDEDIR)/_libxl* $(D)$(INCLUDEDIR)/libxl*
-	rm -rf $(D)$(INCLUDEDIR)/xenstat.h $(D)$(INCLUDEDIR)/xentoollog.h
-	rm -rf $(D)$(LIBDIR)/libxenctrl* $(D)$(LIBDIR)/libxenguest*
-	rm -rf $(D)$(LIBDIR)/libxenstore* $(D)$(LIBDIR)/libxlutil*
-	rm -rf $(D)$(LIBDIR)/python/xen $(D)$(LIBDIR)/python/grub
-	rm -rf $(D)$(LIBDIR)/xen/
-	rm -rf $(D)$(LIBEXEC)/xen*
-	rm -rf $(D)$(SBINDIR)/setmask
-	rm -rf $(D)$(SBINDIR)/xen* $(D)$(SBINDIR)/netfix $(D)$(SBINDIR)/xm
-	rm -rf $(D)$(SHAREDIR)/doc/xen
-	rm -rf $(D)$(SHAREDIR)/xen
-	rm -rf $(D)$(SHAREDIR)/qemu-xen
-	rm -rf $(D)$(MAN1DIR)/xen*
-	rm -rf $(D)$(MAN8DIR)/xen*
+	$(MAKE) -C tools uninstall
 	rm -rf $(D)/boot/tboot*
 
 # Legacy targets for compatibility
diff -r 101956baa346 -r eaf499f0f707 tools/Makefile
--- a/tools/Makefile	Thu Aug 23 08:49:44 2012 +0100
+++ b/tools/Makefile	Thu Aug 23 09:20:01 2012 +0100
@@ -71,6 +71,38 @@ install: subdirs-install
 	$(INSTALL_DIR) $(DESTDIR)/var/lib/xen
 	$(INSTALL_DIR) $(DESTDIR)/var/lock/subsys
 
+.PHONY: uninstall
+uninstall: D=$(DESTDIR)
+uninstall:
+	rm -rf $(D)$(LIBDIR)/xen* $(D)$(BINDIR)/lomount
+	rm -rf $(D)$(BINDIR)/cpuperf-perfcntr $(D)$(BINDIR)/cpuperf-xen
+	rm -rf $(D)$(BINDIR)/xc_shadow
+	rm -rf $(D)$(BINDIR)/pygrub
+	rm -rf $(D)$(BINDIR)/setsize $(D)$(BINDIR)/tbctl
+	rm -rf $(D)$(BINDIR)/xsls
+	rm -rf $(D)$(BINDIR)/xenstore* $(D)$(BINDIR)/xentrace*
+	rm -rf $(D)$(BINDIR)/xen-detect $(D)$(BINDIR)/xencons
+	rm -rf $(D)$(BINDIR)/xenpvnetboot $(D)$(BINDIR)/qemu-*-xen
+	rm -rf $(D)$(INCLUDEDIR)/xenctrl* $(D)$(INCLUDEDIR)/xenguest.h
+	rm -rf $(D)$(INCLUDEDIR)/xs_lib.h $(D)$(INCLUDEDIR)/xs.h
+	rm -rf $(D)$(INCLUDEDIR)/xenstore-compat/xs_lib.h $(D)$(INCLUDEDIR)/xenstore-compat/xs.h
+	rm -rf $(D)$(INCLUDEDIR)/xenstore_lib.h $(D)$(INCLUDEDIR)/xenstore.h
+	rm -rf $(D)$(INCLUDEDIR)/xen
+	rm -rf $(D)$(INCLUDEDIR)/_libxl* $(D)$(INCLUDEDIR)/libxl*
+	rm -rf $(D)$(INCLUDEDIR)/xenstat.h $(D)$(INCLUDEDIR)/xentoollog.h
+	rm -rf $(D)$(LIBDIR)/libxenctrl* $(D)$(LIBDIR)/libxenguest*
+	rm -rf $(D)$(LIBDIR)/libxenstore* $(D)$(LIBDIR)/libxlutil*
+	rm -rf $(D)$(LIBDIR)/python/xen $(D)$(LIBDIR)/python/grub
+	rm -rf $(D)$(LIBDIR)/xen/
+	rm -rf $(D)$(LIBEXEC)/xen*
+	rm -rf $(D)$(SBINDIR)/setmask
+	rm -rf $(D)$(SBINDIR)/xen* $(D)$(SBINDIR)/netfix $(D)$(SBINDIR)/xm
+	rm -rf $(D)$(SHAREDIR)/doc/xen
+	rm -rf $(D)$(SHAREDIR)/xen
+	rm -rf $(D)$(SHAREDIR)/qemu-xen
+	rm -rf $(D)$(MAN1DIR)/xen*
+	rm -rf $(D)$(MAN8DIR)/xen*
+
 .PHONY: clean
 clean: subdirs-clean
 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 08:25:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 08:25:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4Sil-0000Ye-Lw; Thu, 23 Aug 2012 08:24:55 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T4Sij-0000YL-N5
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 08:24:54 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1345710245!8490575!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTAzMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19937 invoked from network); 23 Aug 2012 08:24:05 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 08:24:05 -0000
X-IronPort-AV: E=Sophos;i="4.80,299,1344211200"; d="scan'208";a="14140368"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	23 Aug 2012 08:24:04 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 23 Aug 2012 09:24:04 +0100
Message-ID: <1345710243.12501.56.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Date: Thu, 23 Aug 2012 09:24:03 +0100
In-Reply-To: <1345708742.12501.48.camel@zakaz.uk.xensource.com>
References: <20120822214709.296550@gmx.net>
	<1345703962.23624.57.camel@dagon.hellion.org.uk>
	<5035E421020000780008A601@nat28.tlf.novell.com>
	<1345707105.12501.38.camel@zakaz.uk.xensource.com>
	<1345708742.12501.48.camel@zakaz.uk.xensource.com>
Organization: Citrix Systems, Inc.
Content-Type: multipart/mixed; boundary="=-Op15fjVHnxmurhd6MvPI"
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "p.d@gmx.de" <p.d@gmx.de>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] do not remove kernels or modules on
 uninstall. (Was: Re: make uninstall can delete xen-kernels)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--=-Op15fjVHnxmurhd6MvPI
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit

On Thu, 2012-08-23 at 08:59 +0100, Ian Campbell wrote:
> On Thu, 2012-08-23 at 08:31 +0100, Ian Campbell wrote:
> > On Thu, 2012-08-23 at 08:04 +0100, Jan Beulich wrote:
> > > >>> Ian Campbell <Ian.Campbell@citrix.com> 08/23/12 8:40 AM >>>
> > > >--- a/Makefile    Wed Aug 22 17:32:37 2012 +0100
> > > >+++ b/Makefile    Thu Aug 23 07:38:10 2012 +0100
> > > >@@ -228,8 +228,6 @@ uninstall:
> > > >    rm -f  $(D)$(SYSCONFIG_DIR)/xendomains
> > > >    rm -f  $(D)$(SYSCONFIG_DIR)/xencommons
> > > >    rm -rf $(D)/var/run/xen* $(D)/var/lib/xen*
> > > >-    rm -rf $(D)/boot/*xen*
> > > 
> > > But removing this line without replacement isn't right either - we at least
> > > need to undo what "make install" did. That may imply adding an
> > > uninstall-xen sub-target,
> > 
> > Right, I totally forgot about the hypervisor itself!
> > 
> > Perhaps this target should include a
> > 	$(MAKE) -C xen uninstall
> > since that is the Makefile which knows how to undo its own install
> > target.
> 
> Like this, which handles EFI too but not (yet) tools.

Here is tools. This cleans up a superset of what was cleaned up before
this change but still leaves a lot of detritus. See attached
install.before, uninstall.before & uninstall.after (install.after is
identical to install.before). This is a much bigger problem and probably
requires proper recursive subdir-uninstall rules for everything under
tools. 

That broader rework is certainly a post-4.2 thing IMHO. I'm in two
minds about taking this patch in 4.2, but given that the regression
was introduced by the switch to autoconf in 4.2 I think it might be
good to take, even though as a percentage of what we install the delta
is pretty insignificant.
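A minimal illustration of the recursive delegation quoted above (the
"$(MAKE) -C xen uninstall" suggestion, and likewise for tools): the
recursive call is spelled $(MAKE) rather than a literal make so that
flags such as -n propagate and a dry run still descends into the
subdirectory. The two mock Makefiles below are hypothetical, not the
real xen tree:

```shell
# Build a tiny two-level tree where the top-level Makefile delegates
# "uninstall" to tools/Makefile via $(MAKE). Everything is a mock; no
# real xen tree is touched.
set -e
dir=$(mktemp -d)
mkdir "$dir/tools"
printf 'uninstall:\n\t$(MAKE) -C tools uninstall\n' > "$dir/Makefile"
printf 'uninstall:\n\t@echo would remove tools files\n' \
    > "$dir/tools/Makefile"
# A dry run (-n) still recurses, because GNU make executes recipe lines
# containing $(MAKE) even under -n; the sub-make inherits -n through
# MAKEFLAGS and prints its commands instead of running them.
make -C "$dir" -n uninstall > out.txt 2>&1
grep "would remove tools files" out.txt
```

With a literal "make" in the recipe, -n would stop at the top level and
the tools rules would never even be shown.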

8<---------------------------------------------------------

# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1345710001 -3600
# Node ID eaf499f0f7071fc0bfd84901babfd1ae18227ebb
# Parent  101956baa3469f5f338c661f1ceab23077bd432b
uninstall: push tools uninstall down into tools/Makefile

Many of the rules here depend on configure having been run and on the
variables which it defines in config/Tools.mk.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

diff -r 101956baa346 -r eaf499f0f707 Makefile
--- a/Makefile	Thu Aug 23 08:49:44 2012 +0100
+++ b/Makefile	Thu Aug 23 09:20:01 2012 +0100
@@ -229,34 +229,7 @@ uninstall:
 	rm -f  $(D)$(SYSCONFIG_DIR)/xendomains
 	rm -f  $(D)$(SYSCONFIG_DIR)/xencommons
 	rm -rf $(D)/var/run/xen* $(D)/var/lib/xen*
-	rm -rf $(D)$(LIBDIR)/xen* $(D)$(BINDIR)/lomount
-	rm -rf $(D)$(BINDIR)/cpuperf-perfcntr $(D)$(BINDIR)/cpuperf-xen
-	rm -rf $(D)$(BINDIR)/xc_shadow
-	rm -rf $(D)$(BINDIR)/pygrub
-	rm -rf $(D)$(BINDIR)/setsize $(D)$(BINDIR)/tbctl
-	rm -rf $(D)$(BINDIR)/xsls
-	rm -rf $(D)$(BINDIR)/xenstore* $(D)$(BINDIR)/xentrace*
-	rm -rf $(D)$(BINDIR)/xen-detect $(D)$(BINDIR)/xencons
-	rm -rf $(D)$(BINDIR)/xenpvnetboot $(D)$(BINDIR)/qemu-*-xen
-	rm -rf $(D)$(INCLUDEDIR)/xenctrl* $(D)$(INCLUDEDIR)/xenguest.h
-	rm -rf $(D)$(INCLUDEDIR)/xs_lib.h $(D)$(INCLUDEDIR)/xs.h
-	rm -rf $(D)$(INCLUDEDIR)/xenstore-compat/xs_lib.h $(D)$(INCLUDEDIR)/xenstore-compat/xs.h
-	rm -rf $(D)$(INCLUDEDIR)/xenstore_lib.h $(D)$(INCLUDEDIR)/xenstore.h
-	rm -rf $(D)$(INCLUDEDIR)/xen
-	rm -rf $(D)$(INCLUDEDIR)/_libxl* $(D)$(INCLUDEDIR)/libxl*
-	rm -rf $(D)$(INCLUDEDIR)/xenstat.h $(D)$(INCLUDEDIR)/xentoollog.h
-	rm -rf $(D)$(LIBDIR)/libxenctrl* $(D)$(LIBDIR)/libxenguest*
-	rm -rf $(D)$(LIBDIR)/libxenstore* $(D)$(LIBDIR)/libxlutil*
-	rm -rf $(D)$(LIBDIR)/python/xen $(D)$(LIBDIR)/python/grub
-	rm -rf $(D)$(LIBDIR)/xen/
-	rm -rf $(D)$(LIBEXEC)/xen*
-	rm -rf $(D)$(SBINDIR)/setmask
-	rm -rf $(D)$(SBINDIR)/xen* $(D)$(SBINDIR)/netfix $(D)$(SBINDIR)/xm
-	rm -rf $(D)$(SHAREDIR)/doc/xen
-	rm -rf $(D)$(SHAREDIR)/xen
-	rm -rf $(D)$(SHAREDIR)/qemu-xen
-	rm -rf $(D)$(MAN1DIR)/xen*
-	rm -rf $(D)$(MAN8DIR)/xen*
+	$(MAKE) -C tools uninstall
 	rm -rf $(D)/boot/tboot*
 
 # Legacy targets for compatibility
diff -r 101956baa346 -r eaf499f0f707 tools/Makefile
--- a/tools/Makefile	Thu Aug 23 08:49:44 2012 +0100
+++ b/tools/Makefile	Thu Aug 23 09:20:01 2012 +0100
@@ -71,6 +71,38 @@ install: subdirs-install
 	$(INSTALL_DIR) $(DESTDIR)/var/lib/xen
 	$(INSTALL_DIR) $(DESTDIR)/var/lock/subsys
 
+.PHONY: uninstall
+uninstall: D=$(DESTDIR)
+uninstall:
+	rm -rf $(D)$(LIBDIR)/xen* $(D)$(BINDIR)/lomount
+	rm -rf $(D)$(BINDIR)/cpuperf-perfcntr $(D)$(BINDIR)/cpuperf-xen
+	rm -rf $(D)$(BINDIR)/xc_shadow
+	rm -rf $(D)$(BINDIR)/pygrub
+	rm -rf $(D)$(BINDIR)/setsize $(D)$(BINDIR)/tbctl
+	rm -rf $(D)$(BINDIR)/xsls
+	rm -rf $(D)$(BINDIR)/xenstore* $(D)$(BINDIR)/xentrace*
+	rm -rf $(D)$(BINDIR)/xen-detect $(D)$(BINDIR)/xencons
+	rm -rf $(D)$(BINDIR)/xenpvnetboot $(D)$(BINDIR)/qemu-*-xen
+	rm -rf $(D)$(INCLUDEDIR)/xenctrl* $(D)$(INCLUDEDIR)/xenguest.h
+	rm -rf $(D)$(INCLUDEDIR)/xs_lib.h $(D)$(INCLUDEDIR)/xs.h
+	rm -rf $(D)$(INCLUDEDIR)/xenstore-compat/xs_lib.h $(D)$(INCLUDEDIR)/xenstore-compat/xs.h
+	rm -rf $(D)$(INCLUDEDIR)/xenstore_lib.h $(D)$(INCLUDEDIR)/xenstore.h
+	rm -rf $(D)$(INCLUDEDIR)/xen
+	rm -rf $(D)$(INCLUDEDIR)/_libxl* $(D)$(INCLUDEDIR)/libxl*
+	rm -rf $(D)$(INCLUDEDIR)/xenstat.h $(D)$(INCLUDEDIR)/xentoollog.h
+	rm -rf $(D)$(LIBDIR)/libxenctrl* $(D)$(LIBDIR)/libxenguest*
+	rm -rf $(D)$(LIBDIR)/libxenstore* $(D)$(LIBDIR)/libxlutil*
+	rm -rf $(D)$(LIBDIR)/python/xen $(D)$(LIBDIR)/python/grub
+	rm -rf $(D)$(LIBDIR)/xen/
+	rm -rf $(D)$(LIBEXEC)/xen*
+	rm -rf $(D)$(SBINDIR)/setmask
+	rm -rf $(D)$(SBINDIR)/xen* $(D)$(SBINDIR)/netfix $(D)$(SBINDIR)/xm
+	rm -rf $(D)$(SHAREDIR)/doc/xen
+	rm -rf $(D)$(SHAREDIR)/xen
+	rm -rf $(D)$(SHAREDIR)/qemu-xen
+	rm -rf $(D)$(MAN1DIR)/xen*
+	rm -rf $(D)$(MAN8DIR)/xen*
+
 .PHONY: clean
 clean: subdirs-clean
 


--=-Op15fjVHnxmurhd6MvPI
Content-Disposition: attachment; filename="install.before"
Content-Type: text/plain; name="install.before"; charset="UTF-8"
Content-Transfer-Encoding: 7bit

dist/
dist/install
dist/install/etc
dist/install/etc/bash_completion.d
dist/install/etc/bash_completion.d/xl.sh
dist/install/etc/default
dist/install/etc/default/xencommons
dist/install/etc/default/xendomains
dist/install/etc/hotplug
dist/install/etc/hotplug/xen-backend.agent
dist/install/etc/init.d
dist/install/etc/init.d/xencommons
dist/install/etc/init.d/xend
dist/install/etc/init.d/xendomains
dist/install/etc/init.d/xen-watchdog
dist/install/etc/udev
dist/install/etc/udev/rules.d
dist/install/etc/udev/rules.d/xen-backend.rules
dist/install/etc/udev/rules.d/xend.rules
dist/install/etc/xen
dist/install/etc/xen/auto
dist/install/etc/xen/cpupool
dist/install/etc/xen/oxenstored.conf
dist/install/etc/xen/README
dist/install/etc/xen/README.incompatibilities
dist/install/etc/xen/scripts
dist/install/etc/xen/scripts/blktap
dist/install/etc/xen/scripts/block
dist/install/etc/xen/scripts/block-common.sh
dist/install/etc/xen/scripts/block-enbd
dist/install/etc/xen/scripts/block-nbd
dist/install/etc/xen/scripts/external-device-migrate
dist/install/etc/xen/scripts/hotplugpath.sh
dist/install/etc/xen/scripts/locking.sh
dist/install/etc/xen/scripts/logging.sh
dist/install/etc/xen/scripts/network-bridge
dist/install/etc/xen/scripts/network-nat
dist/install/etc/xen/scripts/network-route
dist/install/etc/xen/scripts/qemu-ifup
dist/install/etc/xen/scripts/vif2
dist/install/etc/xen/scripts/vif-bridge
dist/install/etc/xen/scripts/vif-common.sh
dist/install/etc/xen/scripts/vif-nat
dist/install/etc/xen/scripts/vif-route
dist/install/etc/xen/scripts/vif-setup
dist/install/etc/xen/scripts/vscsi
dist/install/etc/xen/scripts/vtpm
dist/install/etc/xen/scripts/vtpm-common.sh
dist/install/etc/xen/scripts/vtpm-delete
dist/install/etc/xen/scripts/vtpm-hotplug-common.sh
dist/install/etc/xen/scripts/vtpm-impl
dist/install/etc/xen/scripts/vtpm-migration.sh
dist/install/etc/xen/scripts/xen-hotplug-cleanup
dist/install/etc/xen/scripts/xen-hotplug-common.sh
dist/install/etc/xen/scripts/xen-network-common.sh
dist/install/etc/xen/scripts/xen-script-common.sh
dist/install/etc/xen/xend-config.sxp
dist/install/etc/xen/xend-pci-permissive.sxp
dist/install/etc/xen/xend-pci-quirks.sxp
dist/install/etc/xen/xl.conf
dist/install/etc/xen/xlexample.hvm
dist/install/etc/xen/xlexample.pvlinux
dist/install/etc/xen/xm-config.xml
dist/install/etc/xen/xmexample1
dist/install/etc/xen/xmexample2
dist/install/etc/xen/xmexample3
dist/install/etc/xen/xmexample.hvm
dist/install/etc/xen/xmexample.hvm-stubdom
dist/install/etc/xen/xmexample.nbd
dist/install/etc/xen/xmexample.pv-grub
dist/install/etc/xen/xmexample.vti
dist/install/usr
dist/install/usr/bin
dist/install/usr/bin/pygrub
dist/install/usr/bin/qemu-img-xen
dist/install/usr/bin/qemu-nbd-xen
dist/install/usr/bin/remus
dist/install/usr/bin/xencons
dist/install/usr/bin/xen-detect
dist/install/usr/bin/xenstore
dist/install/usr/bin/xenstore-chmod
dist/install/usr/bin/xenstore-control
dist/install/usr/bin/xenstore-exists
dist/install/usr/bin/xenstore-list
dist/install/usr/bin/xenstore-ls
dist/install/usr/bin/xenstore-read
dist/install/usr/bin/xenstore-rm
dist/install/usr/bin/xenstore-watch
dist/install/usr/bin/xenstore-write
dist/install/usr/bin/xentrace
dist/install/usr/bin/xentrace_format
dist/install/usr/bin/xentrace_setsize
dist/install/usr/include
dist/install/usr/include/blktaplib.h
dist/install/usr/include/fsimage_grub.h
dist/install/usr/include/fsimage.h
dist/install/usr/include/fsimage_plugin.h
dist/install/usr/include/libxenvchan.h
dist/install/usr/include/libxl_event.h
dist/install/usr/include/libxl.h
dist/install/usr/include/libxl_json.h
dist/install/usr/include/_libxl_list.h
dist/install/usr/include/_libxl_types.h
dist/install/usr/include/_libxl_types_json.h
dist/install/usr/include/libxl_utils.h
dist/install/usr/include/libxl_uuid.h
dist/install/usr/include/xen
dist/install/usr/include/xen/arch-arm.h
dist/install/usr/include/xen/arch-ia64
dist/install/usr/include/xen/arch-ia64/debug_op.h
dist/install/usr/include/xen/arch-ia64.h
dist/install/usr/include/xen/arch-ia64/hvm
dist/install/usr/include/xen/arch-ia64/hvm/memmap.h
dist/install/usr/include/xen/arch-ia64/hvm/save.h
dist/install/usr/include/xen/arch-ia64/sioemu.h
dist/install/usr/include/xen/arch-x86
dist/install/usr/include/xen/arch-x86_32.h
dist/install/usr/include/xen/arch-x86_64.h
dist/install/usr/include/xen/arch-x86/cpuid.h
dist/install/usr/include/xen/arch-x86/hvm
dist/install/usr/include/xen/arch-x86/hvm/save.h
dist/install/usr/include/xen/arch-x86/xen.h
dist/install/usr/include/xen/arch-x86/xen-mca.h
dist/install/usr/include/xen/arch-x86/xen-x86_32.h
dist/install/usr/include/xen/arch-x86/xen-x86_64.h
dist/install/usr/include/xen/callback.h
dist/install/usr/include/xen/COPYING
dist/install/usr/include/xenctrl.h
dist/install/usr/include/xenctrlosdep.h
dist/install/usr/include/xen/dom0_ops.h
dist/install/usr/include/xen/domctl.h
dist/install/usr/include/xen/elfnote.h
dist/install/usr/include/xen/event_channel.h
dist/install/usr/include/xen/features.h
dist/install/usr/include/xen/foreign
dist/install/usr/include/xen/foreign/ia64.h
dist/install/usr/include/xen/foreign/x86_32.h
dist/install/usr/include/xen/foreign/x86_64.h
dist/install/usr/include/xen/grant_table.h
dist/install/usr/include/xenguest.h
dist/install/usr/include/xen/hvm
dist/install/usr/include/xen/hvm/e820.h
dist/install/usr/include/xen/hvm/hvm_info_table.h
dist/install/usr/include/xen/hvm/hvm_op.h
dist/install/usr/include/xen/hvm/ioreq.h
dist/install/usr/include/xen/hvm/params.h
dist/install/usr/include/xen/hvm/save.h
dist/install/usr/include/xen/io
dist/install/usr/include/xen/io/blkif.h
dist/install/usr/include/xen/io/console.h
dist/install/usr/include/xen/io/fbif.h
dist/install/usr/include/xen/io/fsif.h
dist/install/usr/include/xen/io/kbdif.h
dist/install/usr/include/xen/io/libxenvchan.h
dist/install/usr/include/xen/io/netif.h
dist/install/usr/include/xen/io/pciif.h
dist/install/usr/include/xen/io/protocols.h
dist/install/usr/include/xen/io/ring.h
dist/install/usr/include/xen/io/tpmif.h
dist/install/usr/include/xen/io/usbif.h
dist/install/usr/include/xen/io/vscsiif.h
dist/install/usr/include/xen/io/xenbus.h
dist/install/usr/include/xen/io/xs_wire.h
dist/install/usr/include/xen/kexec.h
dist/install/usr/include/xen/mem_event.h
dist/install/usr/include/xen/memory.h
dist/install/usr/include/xen/nmi.h
dist/install/usr/include/xen/physdev.h
dist/install/usr/include/xen/platform.h
dist/install/usr/include/xen/sched.h
dist/install/usr/include/xenstat.h
dist/install/usr/include/xenstore-compat
dist/install/usr/include/xenstore-compat/xs.h
dist/install/usr/include/xenstore-compat/xs_lib.h
dist/install/usr/include/xenstore.h
dist/install/usr/include/xenstore_lib.h
dist/install/usr/include/xen/sys
dist/install/usr/include/xen/sysctl.h
dist/install/usr/include/xen/sys/evtchn.h
dist/install/usr/include/xen/sys/gntalloc.h
dist/install/usr/include/xen/sys/gntdev.h
dist/install/usr/include/xen/sys/privcmd.h
dist/install/usr/include/xen/sys/xenbus_dev.h
dist/install/usr/include/xen/tmem.h
dist/install/usr/include/xentoollog.h
dist/install/usr/include/xen/trace.h
dist/install/usr/include/xen/vcpu.h
dist/install/usr/include/xen/version.h
dist/install/usr/include/xen/xencomm.h
dist/install/usr/include/xen/xen-compat.h
dist/install/usr/include/xen/xen.h
dist/install/usr/include/xen/xenoprof.h
dist/install/usr/include/xen/xsm
dist/install/usr/include/xen/xsm/flask_op.h
dist/install/usr/include/xs.h
dist/install/usr/include/xs_lib.h
dist/install/usr/lib
dist/install/usr/lib/fs
dist/install/usr/lib/fs/ext2fs
dist/install/usr/lib/fs/ext2fs/fsimage.so
dist/install/usr/lib/fs/fat
dist/install/usr/lib/fs/fat/fsimage.so
dist/install/usr/lib/fs/iso9660
dist/install/usr/lib/fs/iso9660/fsimage.so
dist/install/usr/lib/fs/reiserfs
dist/install/usr/lib/fs/reiserfs/fsimage.so
dist/install/usr/lib/fs/ufs
dist/install/usr/lib/fs/ufs/fsimage.so
dist/install/usr/lib/fs/xfs
dist/install/usr/lib/fs/xfs/fsimage.so
dist/install/usr/lib/fs/zfs
dist/install/usr/lib/fs/zfs/fsimage.so
dist/install/usr/lib/libblktap.a
dist/install/usr/lib/libblktapctl.a
dist/install/usr/lib/libblktapctl.so
dist/install/usr/lib/libblktapctl.so.1.0
dist/install/usr/lib/libblktapctl.so.1.0.0
dist/install/usr/lib/libblktap.so
dist/install/usr/lib/libblktap.so.3.0
dist/install/usr/lib/libblktap.so.3.0.0
dist/install/usr/lib/libfsimage.so
dist/install/usr/lib/libfsimage.so.1.0
dist/install/usr/lib/libfsimage.so.1.0.0
dist/install/usr/lib/libvhd.a
dist/install/usr/lib/libvhd.so
dist/install/usr/lib/libvhd.so.1.0
dist/install/usr/lib/libvhd.so.1.0.0
dist/install/usr/lib/libxenctrl.a
dist/install/usr/lib/libxenctrl.so
dist/install/usr/lib/libxenctrl.so.4.2
dist/install/usr/lib/libxenctrl.so.4.2.0
dist/install/usr/lib/libxenguest.a
dist/install/usr/lib/libxenguest.so
dist/install/usr/lib/libxenguest.so.4.2
dist/install/usr/lib/libxenguest.so.4.2.0
dist/install/usr/lib/libxenlight.a
dist/install/usr/lib/libxenlight.so
dist/install/usr/lib/libxenlight.so.2.0
dist/install/usr/lib/libxenlight.so.2.0.0
dist/install/usr/lib/libxenstat.a
dist/install/usr/lib/libxenstat.so
dist/install/usr/lib/libxenstat.so.0
dist/install/usr/lib/libxenstat.so.0.0
dist/install/usr/lib/libxenstore.a
dist/install/usr/lib/libxenstore.so
dist/install/usr/lib/libxenstore.so.3.0
dist/install/usr/lib/libxenstore.so.3.0.1
dist/install/usr/lib/libxenvchan.a
dist/install/usr/lib/libxenvchan.so
dist/install/usr/lib/libxenvchan.so.1.0
dist/install/usr/lib/libxenvchan.so.1.0.0
dist/install/usr/lib/libxlutil.a
dist/install/usr/lib/libxlutil.so
dist/install/usr/lib/libxlutil.so.1.0
dist/install/usr/lib/libxlutil.so.1.0.0
dist/install/usr/lib/python2.6
dist/install/usr/lib/python2.6/dist-packages
dist/install/usr/lib/python2.6/dist-packages/fsimage.so
dist/install/usr/lib/python2.6/dist-packages/grub
dist/install/usr/lib/python2.6/dist-packages/grub/ExtLinuxConf.py
dist/install/usr/lib/python2.6/dist-packages/grub/ExtLinuxConf.pyc
dist/install/usr/lib/python2.6/dist-packages/grub/GrubConf.py
dist/install/usr/lib/python2.6/dist-packages/grub/GrubConf.pyc
dist/install/usr/lib/python2.6/dist-packages/grub/__init__.py
dist/install/usr/lib/python2.6/dist-packages/grub/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/grub/LiloConf.py
dist/install/usr/lib/python2.6/dist-packages/grub/LiloConf.pyc
dist/install/usr/lib/python2.6/dist-packages/pygrub-0.3.egg-info
dist/install/usr/lib/python2.6/dist-packages/xen
dist/install/usr/lib/python2.6/dist-packages/xen-3.0.egg-info
dist/install/usr/lib/python2.6/dist-packages/xen/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/lowlevel
dist/install/usr/lib/python2.6/dist-packages/xen/lowlevel/checkpoint.so
dist/install/usr/lib/python2.6/dist-packages/xen/lowlevel/flask.so
dist/install/usr/lib/python2.6/dist-packages/xen/lowlevel/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/lowlevel/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/lowlevel/netlink.so
dist/install/usr/lib/python2.6/dist-packages/xen/lowlevel/ptsname.so
dist/install/usr/lib/python2.6/dist-packages/xen/lowlevel/xc.so
dist/install/usr/lib/python2.6/dist-packages/xen/lowlevel/xs.so
dist/install/usr/lib/python2.6/dist-packages/xen/remus
dist/install/usr/lib/python2.6/dist-packages/xen/remus/blkdev.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/blkdev.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/device.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/device.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/image.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/image.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/netlink.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/netlink.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/profile.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/profile.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/qdisc.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/qdisc.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/save.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/save.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/tapdisk.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/tapdisk.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/util.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/util.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/vbd.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/vbd.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/vdi.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/vdi.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/vif.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/vif.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/vm.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/vm.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/sv
dist/install/usr/lib/python2.6/dist-packages/xen/sv/CreateDomain.py
dist/install/usr/lib/python2.6/dist-packages/xen/sv/CreateDomain.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/sv/DomInfo.py
dist/install/usr/lib/python2.6/dist-packages/xen/sv/DomInfo.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/sv/GenTabbed.py
dist/install/usr/lib/python2.6/dist-packages/xen/sv/GenTabbed.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/sv/HTMLBase.py
dist/install/usr/lib/python2.6/dist-packages/xen/sv/HTMLBase.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/sv/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/sv/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/sv/Main.py
dist/install/usr/lib/python2.6/dist-packages/xen/sv/Main.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/sv/NodeInfo.py
dist/install/usr/lib/python2.6/dist-packages/xen/sv/NodeInfo.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/sv/RestoreDomain.py
dist/install/usr/lib/python2.6/dist-packages/xen/sv/RestoreDomain.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/sv/util.py
dist/install/usr/lib/python2.6/dist-packages/xen/sv/util.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/sv/Wizard.py
dist/install/usr/lib/python2.6/dist-packages/xen/sv/Wizard.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util
dist/install/usr/lib/python2.6/dist-packages/xen/util/acmpolicy.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/acmpolicy.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/asserts.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/asserts.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/auxbin.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/auxbin.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/blkif.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/blkif.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/bootloader.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/bootloader.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/Brctl.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/Brctl.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/bugtool.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/bugtool.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/diagnose.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/diagnose.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/dictio.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/dictio.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/fileuri.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/fileuri.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/ip.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/ip.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/mac.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/mac.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/mkdir.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/mkdir.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/oshelp.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/oshelp.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/path.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/path.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/pci.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/pci.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/rwlock.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/rwlock.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/SSHTransport.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/SSHTransport.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/sxputils.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/sxputils.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/utils.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/utils.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/vscsi_util.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/vscsi_util.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/vusb_util.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/vusb_util.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xmlrpcclient.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xmlrpcclient.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xmlrpclib2.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xmlrpclib2.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xpopen.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xpopen.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsconstants.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsconstants.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/acm
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/acm/acm.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/acm/acm.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/acm/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/acm/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/dummy
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/dummy/dummy.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/dummy/dummy.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/dummy/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/dummy/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/flask
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/flask/flask.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/flask/flask.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/flask/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/flask/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/xsm_core.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/xsm_core.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/xsm.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/xsm.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xspolicy.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xspolicy.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/web
dist/install/usr/lib/python2.6/dist-packages/xen/web/connection.py
dist/install/usr/lib/python2.6/dist-packages/xen/web/connection.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/web/http.py
dist/install/usr/lib/python2.6/dist-packages/xen/web/http.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/web/httpserver.py
dist/install/usr/lib/python2.6/dist-packages/xen/web/httpserver.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/web/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/web/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/web/protocol.py
dist/install/usr/lib/python2.6/dist-packages/xen/web/protocol.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/web/resource.py
dist/install/usr/lib/python2.6/dist-packages/xen/web/resource.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/web/SrvBase.py
dist/install/usr/lib/python2.6/dist-packages/xen/web/SrvBase.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/web/SrvDir.py
dist/install/usr/lib/python2.6/dist-packages/xen/web/SrvDir.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/web/static.py
dist/install/usr/lib/python2.6/dist-packages/xen/web/static.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/web/tcp.py
dist/install/usr/lib/python2.6/dist-packages/xen/web/tcp.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/web/unix.py
dist/install/usr/lib/python2.6/dist-packages/xen/web/unix.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend
dist/install/usr/lib/python2.6/dist-packages/xen/xend/arch.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/arch.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/Args.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/Args.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/balloon.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/balloon.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/encode.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/encode.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/image.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/image.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/MemoryPool.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/MemoryPool.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/osdep.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/osdep.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/PrettyPrint.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/PrettyPrint.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/blkif.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/blkif.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/BlktapController.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/BlktapController.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/ConsoleController.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/ConsoleController.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/DevConstants.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/DevConstants.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/DevController.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/DevController.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/iopif.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/iopif.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/irqif.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/irqif.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/netif2.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/netif2.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/netif.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/netif.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/params.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/params.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/pciif.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/pciif.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/pciquirk.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/pciquirk.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/relocate.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/relocate.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvDaemon.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvDaemon.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvDmesg.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvDmesg.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvDomainDir.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvDomainDir.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvDomain.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvDomain.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvNode.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvNode.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvRoot.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvRoot.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvServer.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvServer.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvVnetDir.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvVnetDir.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvXendLog.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvXendLog.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SSLXMLRPCServer.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SSLXMLRPCServer.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/tests
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/tests/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/tests/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/tests/test_controllers.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/tests/test_controllers.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/tpmif.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/tpmif.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/udevevent.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/udevevent.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/vfbif.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/vfbif.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/vscsiif.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/vscsiif.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/vusbif.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/vusbif.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/XMLRPCServer.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/XMLRPCServer.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/sxp.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/sxp.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/tests
dist/install/usr/lib/python2.6/dist-packages/xen/xend/tests/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/tests/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/tests/test_sxp.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/tests/test_sxp.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/tests/test_uuid.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/tests/test_uuid.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/tests/test_XendConfig.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/tests/test_XendConfig.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/uuid.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/uuid.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/Vifctl.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/Vifctl.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendAPIConstants.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendAPIConstants.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendAPI.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendAPI.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendAPIStore.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendAPIStore.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendAPIVersion.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendAPIVersion.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendAuthSessions.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendAuthSessions.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendBase.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendBase.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendBootloader.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendBootloader.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendCheckpoint.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendCheckpoint.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendClient.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendClient.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendConfig.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendConfig.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendConstants.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendConstants.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendCPUPool.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendCPUPool.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDevices.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDevices.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDmesg.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDmesg.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDomainInfo.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDomainInfo.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDomain.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDomain.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDPCI.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDPCI.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDSCSI.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDSCSI.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendError.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendError.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendLocalStorageRepo.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendLocalStorageRepo.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendLogging.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendLogging.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendMonitor.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendMonitor.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendNetwork.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendNetwork.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendNode.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendNode.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendOptions.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendOptions.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendPBD.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendPBD.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendPIFMetrics.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendPIFMetrics.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendPIF.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendPIF.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendPPCI.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendPPCI.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendProtocol.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendProtocol.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendPSCSI.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendPSCSI.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendQCoWStorageRepo.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendQCoWStorageRepo.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendStateStore.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendStateStore.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendStorageRepository.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendStorageRepository.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendSXPDev.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendSXPDev.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendTaskManager.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendTaskManager.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendTask.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendTask.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendVDI.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendVDI.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendVMMetrics.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendVMMetrics.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendVnet.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendVnet.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendXSPolicyAdmin.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendXSPolicyAdmin.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendXSPolicy.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendXSPolicy.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/tests
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/tests/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/tests/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/tests/stress_xs.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/tests/stress_xs.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/xstransact.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/xstransact.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/xsutil.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/xsutil.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/xswatch.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/xswatch.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm
dist/install/usr/lib/python2.6/dist-packages/xen/xm/addlabel.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/addlabel.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/console.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/console.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/cpupool-create.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/cpupool-create.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/cpupool-new.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/cpupool-new.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/cpupool.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/cpupool.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/create.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/create.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/dry-run.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/dry-run.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/dumppolicy.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/dumppolicy.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/getenforce.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/getenforce.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/getlabel.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/getlabel.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/getpolicy.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/getpolicy.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/help.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/help.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/labels.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/labels.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/main.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/main.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/migrate.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/migrate.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/new.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/new.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/opts.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/opts.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/resetpolicy.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/resetpolicy.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/resources.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/resources.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/rmlabel.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/rmlabel.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/setenforce.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/setenforce.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/setpolicy.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/setpolicy.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/shutdown.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/shutdown.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/tests
dist/install/usr/lib/python2.6/dist-packages/xen/xm/tests/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/tests/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/tests/test_create.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/tests/test_create.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/xenapi_create.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/xenapi_create.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/XenAPI.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/XenAPI.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xsview
dist/install/usr/lib/python2.6/dist-packages/xen/xsview/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/xsview/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xsview/main.py
dist/install/usr/lib/python2.6/dist-packages/xen/xsview/main.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xsview/xsviewer.py
dist/install/usr/lib/python2.6/dist-packages/xen/xsview/xsviewer.pyc
dist/install/usr/lib/xen
dist/install/usr/lib/xen/bin
dist/install/usr/lib/xen/bin/libxl-save-helper
dist/install/usr/lib/xen/bin/lsevtchn
dist/install/usr/lib/xen/bin/pygrub
dist/install/usr/lib/xen/bin/qemu-dm
dist/install/usr/lib/xen/bin/qemu-ga
dist/install/usr/lib/xen/bin/qemu-img
dist/install/usr/lib/xen/bin/qemu-io
dist/install/usr/lib/xen/bin/qemu-nbd
dist/install/usr/lib/xen/bin/qemu-system-i386
dist/install/usr/lib/xen/bin/readnotes
dist/install/usr/lib/xen/bin/xc_restore
dist/install/usr/lib/xen/bin/xc_save
dist/install/usr/lib/xen/bin/xenconsole
dist/install/usr/lib/xen/bin/xenctx
dist/install/usr/lib/xen/bin/xenpaging
dist/install/usr/lib/xen/bin/xenpvnetboot
dist/install/usr/lib/xen/boot
dist/install/usr/lib/xen/boot/hvmloader
dist/install/usr/local
dist/install/usr/local/etc
dist/install/usr/local/etc/qemu
dist/install/usr/local/etc/qemu/target-x86_64.conf
dist/install/usr/local/lib
dist/install/usr/local/lib/ocaml
dist/install/usr/local/lib/ocaml/3.11.2
dist/install/usr/local/lib/ocaml/3.11.2/xenbus
dist/install/usr/local/lib/ocaml/3.11.2/xenbus/dllxenbus_stubs.so
dist/install/usr/local/lib/ocaml/3.11.2/xenbus/libxenbus_stubs.a
dist/install/usr/local/lib/ocaml/3.11.2/xenbus/META
dist/install/usr/local/lib/ocaml/3.11.2/xenbus/xenbus.a
dist/install/usr/local/lib/ocaml/3.11.2/xenbus/xenbus.cma
dist/install/usr/local/lib/ocaml/3.11.2/xenbus/xenbus.cmi
dist/install/usr/local/lib/ocaml/3.11.2/xenbus/xenbus.cmo
dist/install/usr/local/lib/ocaml/3.11.2/xenbus/xenbus.cmx
dist/install/usr/local/lib/ocaml/3.11.2/xenbus/xenbus.cmxa
dist/install/usr/local/lib/ocaml/3.11.2/xenctrl
dist/install/usr/local/lib/ocaml/3.11.2/xenctrl/dllxenctrl_stubs.so
dist/install/usr/local/lib/ocaml/3.11.2/xenctrl/libxenctrl_stubs.a
dist/install/usr/local/lib/ocaml/3.11.2/xenctrl/META
dist/install/usr/local/lib/ocaml/3.11.2/xenctrl/xenctrl.a
dist/install/usr/local/lib/ocaml/3.11.2/xenctrl/xenctrl.cma
dist/install/usr/local/lib/ocaml/3.11.2/xenctrl/xenctrl.cmi
dist/install/usr/local/lib/ocaml/3.11.2/xenctrl/xenctrl.cmx
dist/install/usr/local/lib/ocaml/3.11.2/xenctrl/xenctrl.cmxa
dist/install/usr/local/lib/ocaml/3.11.2/xeneventchn
dist/install/usr/local/lib/ocaml/3.11.2/xeneventchn/dllxeneventchn_stubs.so
dist/install/usr/local/lib/ocaml/3.11.2/xeneventchn/libxeneventchn_stubs.a
dist/install/usr/local/lib/ocaml/3.11.2/xeneventchn/META
dist/install/usr/local/lib/ocaml/3.11.2/xeneventchn/xeneventchn.a
dist/install/usr/local/lib/ocaml/3.11.2/xeneventchn/xeneventchn.cma
dist/install/usr/local/lib/ocaml/3.11.2/xeneventchn/xeneventchn.cmi
dist/install/usr/local/lib/ocaml/3.11.2/xeneventchn/xeneventchn.cmx
dist/install/usr/local/lib/ocaml/3.11.2/xeneventchn/xeneventchn.cmxa
dist/install/usr/local/lib/ocaml/3.11.2/xenlight
dist/install/usr/local/lib/ocaml/3.11.2/xenlight/dllxenlight_stubs.so
dist/install/usr/local/lib/ocaml/3.11.2/xenlight/libxenlight_stubs.a
dist/install/usr/local/lib/ocaml/3.11.2/xenlight/META
dist/install/usr/local/lib/ocaml/3.11.2/xenlight/xenlight.a
dist/install/usr/local/lib/ocaml/3.11.2/xenlight/xenlight.cma
dist/install/usr/local/lib/ocaml/3.11.2/xenlight/xenlight.cmi
dist/install/usr/local/lib/ocaml/3.11.2/xenlight/xenlight.cmx
dist/install/usr/local/lib/ocaml/3.11.2/xenlight/xenlight.cmxa
dist/install/usr/local/lib/ocaml/3.11.2/xenmmap
dist/install/usr/local/lib/ocaml/3.11.2/xenmmap/dllxenmmap_stubs.so
dist/install/usr/local/lib/ocaml/3.11.2/xenmmap/libxenmmap_stubs.a
dist/install/usr/local/lib/ocaml/3.11.2/xenmmap/META
dist/install/usr/local/lib/ocaml/3.11.2/xenmmap/xenmmap.a
dist/install/usr/local/lib/ocaml/3.11.2/xenmmap/xenmmap.cma
dist/install/usr/local/lib/ocaml/3.11.2/xenmmap/xenmmap.cmi
dist/install/usr/local/lib/ocaml/3.11.2/xenmmap/xenmmap.cmx
dist/install/usr/local/lib/ocaml/3.11.2/xenmmap/xenmmap.cmxa
dist/install/usr/local/lib/ocaml/3.11.2/xenstore
dist/install/usr/local/lib/ocaml/3.11.2/xenstore/META
dist/install/usr/local/lib/ocaml/3.11.2/xenstore/xenstore.a
dist/install/usr/local/lib/ocaml/3.11.2/xenstore/xenstore.cma
dist/install/usr/local/lib/ocaml/3.11.2/xenstore/xenstore.cmi
dist/install/usr/local/lib/ocaml/3.11.2/xenstore/xenstore.cmo
dist/install/usr/local/lib/ocaml/3.11.2/xenstore/xenstore.cmx
dist/install/usr/local/lib/ocaml/3.11.2/xenstore/xenstore.cmxa
dist/install/usr/local/share
dist/install/usr/local/share/doc
dist/install/usr/local/share/doc/qemu
dist/install/usr/local/share/doc/qemu/qemu-doc.html
dist/install/usr/local/share/doc/qemu/qemu-tech.html
dist/install/usr/local/share/man
dist/install/usr/local/share/man/man1
dist/install/usr/local/share/man/man1/qemu.1
dist/install/usr/local/share/man/man1/qemu-img.1
dist/install/usr/local/share/man/man8
dist/install/usr/local/share/man/man8/qemu-nbd.8
dist/install/usr/sbin
dist/install/usr/sbin/blktapctrl
dist/install/usr/sbin/flask-get-bool
dist/install/usr/sbin/flask-getenforce
dist/install/usr/sbin/flask-label-pci
dist/install/usr/sbin/flask-loadpolicy
dist/install/usr/sbin/flask-set-bool
dist/install/usr/sbin/flask-setenforce
dist/install/usr/sbin/gdbsx
dist/install/usr/sbin/gtracestat
dist/install/usr/sbin/gtraceview
dist/install/usr/sbin/img2qcow
dist/install/usr/sbin/kdd
dist/install/usr/sbin/lock-util
dist/install/usr/sbin/oxenstored
dist/install/usr/sbin/qcow2raw
dist/install/usr/sbin/qcow-create
dist/install/usr/sbin/tap-ctl
dist/install/usr/sbin/tapdisk
dist/install/usr/sbin/tapdisk2
dist/install/usr/sbin/tapdisk-client
dist/install/usr/sbin/tapdisk-diff
dist/install/usr/sbin/tapdisk-stream
dist/install/usr/sbin/td-util
dist/install/usr/sbin/vhd-update
dist/install/usr/sbin/vhd-util
dist/install/usr/sbin/xenbaked
dist/install/usr/sbin/xen-bugtool
dist/install/usr/sbin/xenconsoled
dist/install/usr/sbin/xend
dist/install/usr/sbin/xen-hptool
dist/install/usr/sbin/xen-hvmcrash
dist/install/usr/sbin/xen-hvmctx
dist/install/usr/sbin/xenlockprof
dist/install/usr/sbin/xen-lowmemd
dist/install/usr/sbin/xenmon.py
dist/install/usr/sbin/xenperf
dist/install/usr/sbin/xenpm
dist/install/usr/sbin/xenpmd
dist/install/usr/sbin/xen-python-path
dist/install/usr/sbin/xen-ringwatch
dist/install/usr/sbin/xenstored
dist/install/usr/sbin/xen-tmem-list-parse
dist/install/usr/sbin/xentop
dist/install/usr/sbin/xentrace_setmask
dist/install/usr/sbin/xenwatchdogd
dist/install/usr/sbin/xl
dist/install/usr/sbin/xm
dist/install/usr/sbin/xsview
dist/install/usr/share
dist/install/usr/share/doc
dist/install/usr/share/doc/xen
dist/install/usr/share/doc/xen/qemu
dist/install/usr/share/doc/xen/qemu/qemu-doc.html
dist/install/usr/share/doc/xen/qemu/qemu-tech.html
dist/install/usr/share/doc/xen/README.blktap
dist/install/usr/share/doc/xen/README.xenmon
dist/install/usr/share/man
dist/install/usr/share/man/man1
dist/install/usr/share/man/man1/xentop.1
dist/install/usr/share/man/man1/xentrace_format.1
dist/install/usr/share/man/man8
dist/install/usr/share/man/man8/xentrace.8
dist/install/usr/share/qemu-xen
dist/install/usr/share/qemu-xen/bamboo.dtb
dist/install/usr/share/qemu-xen/bios.bin
dist/install/usr/share/qemu-xen/keymaps
dist/install/usr/share/qemu-xen/keymaps/ar
dist/install/usr/share/qemu-xen/keymaps/common
dist/install/usr/share/qemu-xen/keymaps/da
dist/install/usr/share/qemu-xen/keymaps/de
dist/install/usr/share/qemu-xen/keymaps/de-ch
dist/install/usr/share/qemu-xen/keymaps/en-gb
dist/install/usr/share/qemu-xen/keymaps/en-us
dist/install/usr/share/qemu-xen/keymaps/es
dist/install/usr/share/qemu-xen/keymaps/et
dist/install/usr/share/qemu-xen/keymaps/fi
dist/install/usr/share/qemu-xen/keymaps/fo
dist/install/usr/share/qemu-xen/keymaps/fr
dist/install/usr/share/qemu-xen/keymaps/fr-be
dist/install/usr/share/qemu-xen/keymaps/fr-ca
dist/install/usr/share/qemu-xen/keymaps/fr-ch
dist/install/usr/share/qemu-xen/keymaps/hr
dist/install/usr/share/qemu-xen/keymaps/hu
dist/install/usr/share/qemu-xen/keymaps/is
dist/install/usr/share/qemu-xen/keymaps/it
dist/install/usr/share/qemu-xen/keymaps/ja
dist/install/usr/share/qemu-xen/keymaps/lt
dist/install/usr/share/qemu-xen/keymaps/lv
dist/install/usr/share/qemu-xen/keymaps/mk
dist/install/usr/share/qemu-xen/keymaps/modifiers
dist/install/usr/share/qemu-xen/keymaps/nl
dist/install/usr/share/qemu-xen/keymaps/nl-be
dist/install/usr/share/qemu-xen/keymaps/no
dist/install/usr/share/qemu-xen/keymaps/pl
dist/install/usr/share/qemu-xen/keymaps/pt
dist/install/usr/share/qemu-xen/keymaps/pt-br
dist/install/usr/share/qemu-xen/keymaps/ru
dist/install/usr/share/qemu-xen/keymaps/sl
dist/install/usr/share/qemu-xen/keymaps/sv
dist/install/usr/share/qemu-xen/keymaps/th
dist/install/usr/share/qemu-xen/keymaps/tr
dist/install/usr/share/qemu-xen/linuxboot.bin
dist/install/usr/share/qemu-xen/mpc8544ds.dtb
dist/install/usr/share/qemu-xen/multiboot.bin
dist/install/usr/share/qemu-xen/openbios-ppc
dist/install/usr/share/qemu-xen/openbios-sparc32
dist/install/usr/share/qemu-xen/openbios-sparc64
dist/install/usr/share/qemu-xen/palcode-clipper
dist/install/usr/share/qemu-xen/petalogix-ml605.dtb
dist/install/usr/share/qemu-xen/petalogix-s3adsp1800.dtb
dist/install/usr/share/qemu-xen/ppc_rom.bin
dist/install/usr/share/qemu-xen/pxe-e1000.rom
dist/install/usr/share/qemu-xen/pxe-eepro100.rom
dist/install/usr/share/qemu-xen/pxe-ne2k_pci.rom
dist/install/usr/share/qemu-xen/pxe-pcnet.rom
dist/install/usr/share/qemu-xen/pxe-rtl8139.rom
dist/install/usr/share/qemu-xen/pxe-virtio.rom
dist/install/usr/share/qemu-xen/s390-zipl.rom
dist/install/usr/share/qemu-xen/sgabios.bin
dist/install/usr/share/qemu-xen/slof.bin
dist/install/usr/share/qemu-xen/spapr-rtas.bin
dist/install/usr/share/qemu-xen/vgabios.bin
dist/install/usr/share/qemu-xen/vgabios-cirrus.bin
dist/install/usr/share/qemu-xen/vgabios-qxl.bin
dist/install/usr/share/qemu-xen/vgabios-stdvga.bin
dist/install/usr/share/qemu-xen/vgabios-vmware.bin
dist/install/usr/share/xen
dist/install/usr/share/xen/create.dtd
dist/install/usr/share/xen/man
dist/install/usr/share/xen/man/man1
dist/install/usr/share/xen/man/man1/qemu.1
dist/install/usr/share/xen/man/man1/qemu-img.1
dist/install/usr/share/xen/man/man8
dist/install/usr/share/xen/man/man8/qemu-nbd.8
dist/install/usr/share/xen/qemu
dist/install/usr/share/xen/qemu/bamboo.dtb
dist/install/usr/share/xen/qemu/bios.bin
dist/install/usr/share/xen/qemu/keymaps
dist/install/usr/share/xen/qemu/keymaps/ar
dist/install/usr/share/xen/qemu/keymaps/common
dist/install/usr/share/xen/qemu/keymaps/da
dist/install/usr/share/xen/qemu/keymaps/de
dist/install/usr/share/xen/qemu/keymaps/de-ch
dist/install/usr/share/xen/qemu/keymaps/en-gb
dist/install/usr/share/xen/qemu/keymaps/en-us
dist/install/usr/share/xen/qemu/keymaps/es
dist/install/usr/share/xen/qemu/keymaps/et
dist/install/usr/share/xen/qemu/keymaps/fi
dist/install/usr/share/xen/qemu/keymaps/fo
dist/install/usr/share/xen/qemu/keymaps/fr
dist/install/usr/share/xen/qemu/keymaps/fr-be
dist/install/usr/share/xen/qemu/keymaps/fr-ca
dist/install/usr/share/xen/qemu/keymaps/fr-ch
dist/install/usr/share/xen/qemu/keymaps/hr
dist/install/usr/share/xen/qemu/keymaps/hu
dist/install/usr/share/xen/qemu/keymaps/is
dist/install/usr/share/xen/qemu/keymaps/it
dist/install/usr/share/xen/qemu/keymaps/ja
dist/install/usr/share/xen/qemu/keymaps/lt
dist/install/usr/share/xen/qemu/keymaps/lv
dist/install/usr/share/xen/qemu/keymaps/mk
dist/install/usr/share/xen/qemu/keymaps/modifiers
dist/install/usr/share/xen/qemu/keymaps/nl
dist/install/usr/share/xen/qemu/keymaps/nl-be
dist/install/usr/share/xen/qemu/keymaps/no
dist/install/usr/share/xen/qemu/keymaps/pl
dist/install/usr/share/xen/qemu/keymaps/pt
dist/install/usr/share/xen/qemu/keymaps/pt-br
dist/install/usr/share/xen/qemu/keymaps/ru
dist/install/usr/share/xen/qemu/keymaps/sl
dist/install/usr/share/xen/qemu/keymaps/sv
dist/install/usr/share/xen/qemu/keymaps/th
dist/install/usr/share/xen/qemu/keymaps/tr
dist/install/usr/share/xen/qemu/openbios-ppc
dist/install/usr/share/xen/qemu/openbios-sparc32
dist/install/usr/share/xen/qemu/openbios-sparc64
dist/install/usr/share/xen/qemu/ppc_rom.bin
dist/install/usr/share/xen/qemu/pxe-e1000.bin
dist/install/usr/share/xen/qemu/pxe-ne2k_pci.bin
dist/install/usr/share/xen/qemu/pxe-pcnet.bin
dist/install/usr/share/xen/qemu/pxe-rtl8139.bin
dist/install/usr/share/xen/qemu/vgabios.bin
dist/install/usr/share/xen/qemu/vgabios-cirrus.bin
dist/install/usr/share/xen/qemu/video.x
dist/install/var
dist/install/var/lib
dist/install/var/lib/xen
dist/install/var/lib/xenstored
dist/install/var/lib/xen/xenpaging
dist/install/var/lock
dist/install/var/lock/subsys
dist/install/var/log
dist/install/var/log/xen
dist/install/var/run
dist/install/var/run/xen
dist/install/var/run/xend
dist/install/var/run/xend/boot
dist/install/var/run/xenstored
dist/install/var/xen
dist/install/var/xen/dump

--=-Op15fjVHnxmurhd6MvPI
Content-Disposition: attachment; filename="uninstall.after"
Content-Type: text/plain; name="uninstall.after"; charset="UTF-8"
Content-Transfer-Encoding: 7bit

dist/
dist/install
dist/install/etc
dist/install/etc/bash_completion.d
dist/install/etc/bash_completion.d/xl.sh
dist/install/etc/default
dist/install/etc/hotplug
dist/install/etc/init.d
dist/install/etc/udev
dist/install/etc/udev/rules.d
dist/install/etc/xen.old-1345709784
dist/install/etc/xen.old-1345709784/auto
dist/install/etc/xen.old-1345709784/cpupool
dist/install/etc/xen.old-1345709784/oxenstored.conf
dist/install/etc/xen.old-1345709784/README
dist/install/etc/xen.old-1345709784/README.incompatibilities
dist/install/etc/xen.old-1345709784/scripts
dist/install/etc/xen.old-1345709784/scripts/blktap
dist/install/etc/xen.old-1345709784/scripts/block
dist/install/etc/xen.old-1345709784/scripts/block-common.sh
dist/install/etc/xen.old-1345709784/scripts/block-enbd
dist/install/etc/xen.old-1345709784/scripts/block-nbd
dist/install/etc/xen.old-1345709784/scripts/external-device-migrate
dist/install/etc/xen.old-1345709784/scripts/hotplugpath.sh
dist/install/etc/xen.old-1345709784/scripts/locking.sh
dist/install/etc/xen.old-1345709784/scripts/logging.sh
dist/install/etc/xen.old-1345709784/scripts/network-bridge
dist/install/etc/xen.old-1345709784/scripts/network-nat
dist/install/etc/xen.old-1345709784/scripts/network-route
dist/install/etc/xen.old-1345709784/scripts/qemu-ifup
dist/install/etc/xen.old-1345709784/scripts/vif2
dist/install/etc/xen.old-1345709784/scripts/vif-bridge
dist/install/etc/xen.old-1345709784/scripts/vif-common.sh
dist/install/etc/xen.old-1345709784/scripts/vif-nat
dist/install/etc/xen.old-1345709784/scripts/vif-route
dist/install/etc/xen.old-1345709784/scripts/vif-setup
dist/install/etc/xen.old-1345709784/scripts/vscsi
dist/install/etc/xen.old-1345709784/scripts/vtpm
dist/install/etc/xen.old-1345709784/scripts/vtpm-common.sh
dist/install/etc/xen.old-1345709784/scripts/vtpm-delete
dist/install/etc/xen.old-1345709784/scripts/vtpm-hotplug-common.sh
dist/install/etc/xen.old-1345709784/scripts/vtpm-impl
dist/install/etc/xen.old-1345709784/scripts/vtpm-migration.sh
dist/install/etc/xen.old-1345709784/scripts/xen-hotplug-cleanup
dist/install/etc/xen.old-1345709784/scripts/xen-hotplug-common.sh
dist/install/etc/xen.old-1345709784/scripts/xen-network-common.sh
dist/install/etc/xen.old-1345709784/scripts/xen-script-common.sh
dist/install/etc/xen.old-1345709784/xend-config.sxp
dist/install/etc/xen.old-1345709784/xend-pci-permissive.sxp
dist/install/etc/xen.old-1345709784/xend-pci-quirks.sxp
dist/install/etc/xen.old-1345709784/xl.conf
dist/install/etc/xen.old-1345709784/xlexample.hvm
dist/install/etc/xen.old-1345709784/xlexample.pvlinux
dist/install/etc/xen.old-1345709784/xm-config.xml
dist/install/etc/xen.old-1345709784/xmexample1
dist/install/etc/xen.old-1345709784/xmexample2
dist/install/etc/xen.old-1345709784/xmexample3
dist/install/etc/xen.old-1345709784/xmexample.hvm
dist/install/etc/xen.old-1345709784/xmexample.hvm-stubdom
dist/install/etc/xen.old-1345709784/xmexample.nbd
dist/install/etc/xen.old-1345709784/xmexample.pv-grub
dist/install/etc/xen.old-1345709784/xmexample.vti
dist/install/usr
dist/install/usr/bin
dist/install/usr/bin/remus
dist/install/usr/include
dist/install/usr/include/blktaplib.h
dist/install/usr/include/fsimage_grub.h
dist/install/usr/include/fsimage.h
dist/install/usr/include/fsimage_plugin.h
dist/install/usr/include/libxenvchan.h
dist/install/usr/include/xenstore-compat
dist/install/usr/lib
dist/install/usr/lib/fs
dist/install/usr/lib/fs/ext2fs
dist/install/usr/lib/fs/ext2fs/fsimage.so
dist/install/usr/lib/fs/fat
dist/install/usr/lib/fs/fat/fsimage.so
dist/install/usr/lib/fs/iso9660
dist/install/usr/lib/fs/iso9660/fsimage.so
dist/install/usr/lib/fs/reiserfs
dist/install/usr/lib/fs/reiserfs/fsimage.so
dist/install/usr/lib/fs/ufs
dist/install/usr/lib/fs/ufs/fsimage.so
dist/install/usr/lib/fs/xfs
dist/install/usr/lib/fs/xfs/fsimage.so
dist/install/usr/lib/fs/zfs
dist/install/usr/lib/fs/zfs/fsimage.so
dist/install/usr/lib/libblktap.a
dist/install/usr/lib/libblktapctl.a
dist/install/usr/lib/libblktapctl.so
dist/install/usr/lib/libblktapctl.so.1.0
dist/install/usr/lib/libblktapctl.so.1.0.0
dist/install/usr/lib/libblktap.so
dist/install/usr/lib/libblktap.so.3.0
dist/install/usr/lib/libblktap.so.3.0.0
dist/install/usr/lib/libfsimage.so
dist/install/usr/lib/libfsimage.so.1.0
dist/install/usr/lib/libfsimage.so.1.0.0
dist/install/usr/lib/libvhd.a
dist/install/usr/lib/libvhd.so
dist/install/usr/lib/libvhd.so.1.0
dist/install/usr/lib/libvhd.so.1.0.0
dist/install/usr/lib/libxenlight.a
dist/install/usr/lib/libxenlight.so
dist/install/usr/lib/libxenlight.so.2.0
dist/install/usr/lib/libxenlight.so.2.0.0
dist/install/usr/lib/libxenstat.a
dist/install/usr/lib/libxenstat.so
dist/install/usr/lib/libxenstat.so.0
dist/install/usr/lib/libxenstat.so.0.0
dist/install/usr/lib/libxenvchan.a
dist/install/usr/lib/libxenvchan.so
dist/install/usr/lib/libxenvchan.so.1.0
dist/install/usr/lib/libxenvchan.so.1.0.0
dist/install/usr/lib/python2.6
dist/install/usr/lib/python2.6/dist-packages
dist/install/usr/lib/python2.6/dist-packages/fsimage.so
dist/install/usr/lib/python2.6/dist-packages/grub
dist/install/usr/lib/python2.6/dist-packages/grub/ExtLinuxConf.py
dist/install/usr/lib/python2.6/dist-packages/grub/ExtLinuxConf.pyc
dist/install/usr/lib/python2.6/dist-packages/grub/GrubConf.py
dist/install/usr/lib/python2.6/dist-packages/grub/GrubConf.pyc
dist/install/usr/lib/python2.6/dist-packages/grub/__init__.py
dist/install/usr/lib/python2.6/dist-packages/grub/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/grub/LiloConf.py
dist/install/usr/lib/python2.6/dist-packages/grub/LiloConf.pyc
dist/install/usr/lib/python2.6/dist-packages/pygrub-0.3.egg-info
dist/install/usr/lib/python2.6/dist-packages/xen
dist/install/usr/lib/python2.6/dist-packages/xen-3.0.egg-info
dist/install/usr/lib/python2.6/dist-packages/xen/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/lowlevel
dist/install/usr/lib/python2.6/dist-packages/xen/lowlevel/checkpoint.so
dist/install/usr/lib/python2.6/dist-packages/xen/lowlevel/flask.so
dist/install/usr/lib/python2.6/dist-packages/xen/lowlevel/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/lowlevel/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/lowlevel/netlink.so
dist/install/usr/lib/python2.6/dist-packages/xen/lowlevel/ptsname.so
dist/install/usr/lib/python2.6/dist-packages/xen/lowlevel/xc.so
dist/install/usr/lib/python2.6/dist-packages/xen/lowlevel/xs.so
dist/install/usr/lib/python2.6/dist-packages/xen/remus
dist/install/usr/lib/python2.6/dist-packages/xen/remus/blkdev.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/blkdev.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/device.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/device.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/image.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/image.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/netlink.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/netlink.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/profile.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/profile.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/qdisc.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/qdisc.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/save.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/save.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/tapdisk.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/tapdisk.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/util.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/util.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/vbd.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/vbd.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/vdi.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/vdi.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/vif.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/vif.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/vm.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/vm.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/sv
dist/install/usr/lib/python2.6/dist-packages/xen/sv/CreateDomain.py
dist/install/usr/lib/python2.6/dist-packages/xen/sv/CreateDomain.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/sv/DomInfo.py
dist/install/usr/lib/python2.6/dist-packages/xen/sv/DomInfo.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/sv/GenTabbed.py
dist/install/usr/lib/python2.6/dist-packages/xen/sv/GenTabbed.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/sv/HTMLBase.py
dist/install/usr/lib/python2.6/dist-packages/xen/sv/HTMLBase.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/sv/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/sv/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/sv/Main.py
dist/install/usr/lib/python2.6/dist-packages/xen/sv/Main.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/sv/NodeInfo.py
dist/install/usr/lib/python2.6/dist-packages/xen/sv/NodeInfo.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/sv/RestoreDomain.py
dist/install/usr/lib/python2.6/dist-packages/xen/sv/RestoreDomain.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/sv/util.py
dist/install/usr/lib/python2.6/dist-packages/xen/sv/util.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/sv/Wizard.py
dist/install/usr/lib/python2.6/dist-packages/xen/sv/Wizard.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util
dist/install/usr/lib/python2.6/dist-packages/xen/util/acmpolicy.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/acmpolicy.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/asserts.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/asserts.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/auxbin.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/auxbin.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/blkif.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/blkif.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/bootloader.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/bootloader.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/Brctl.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/Brctl.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/bugtool.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/bugtool.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/diagnose.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/diagnose.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/dictio.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/dictio.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/fileuri.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/fileuri.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/ip.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/ip.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/mac.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/mac.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/mkdir.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/mkdir.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/oshelp.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/oshelp.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/path.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/path.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/pci.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/pci.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/rwlock.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/rwlock.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/SSHTransport.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/SSHTransport.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/sxputils.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/sxputils.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/utils.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/utils.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/vscsi_util.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/vscsi_util.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/vusb_util.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/vusb_util.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xmlrpcclient.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xmlrpcclient.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xmlrpclib2.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xmlrpclib2.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xpopen.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xpopen.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsconstants.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsconstants.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/acm
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/acm/acm.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/acm/acm.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/acm/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/acm/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/dummy
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/dummy/dummy.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/dummy/dummy.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/dummy/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/dummy/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/flask
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/flask/flask.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/flask/flask.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/flask/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/flask/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/xsm_core.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/xsm_core.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/xsm.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/xsm.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xspolicy.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xspolicy.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/web
dist/install/usr/lib/python2.6/dist-packages/xen/web/connection.py
dist/install/usr/lib/python2.6/dist-packages/xen/web/connection.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/web/http.py
dist/install/usr/lib/python2.6/dist-packages/xen/web/http.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/web/httpserver.py
dist/install/usr/lib/python2.6/dist-packages/xen/web/httpserver.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/web/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/web/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/web/protocol.py
dist/install/usr/lib/python2.6/dist-packages/xen/web/protocol.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/web/resource.py
dist/install/usr/lib/python2.6/dist-packages/xen/web/resource.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/web/SrvBase.py
dist/install/usr/lib/python2.6/dist-packages/xen/web/SrvBase.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/web/SrvDir.py
dist/install/usr/lib/python2.6/dist-packages/xen/web/SrvDir.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/web/static.py
dist/install/usr/lib/python2.6/dist-packages/xen/web/static.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/web/tcp.py
dist/install/usr/lib/python2.6/dist-packages/xen/web/tcp.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/web/unix.py
dist/install/usr/lib/python2.6/dist-packages/xen/web/unix.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend
dist/install/usr/lib/python2.6/dist-packages/xen/xend/arch.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/arch.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/Args.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/Args.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/balloon.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/balloon.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/encode.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/encode.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/image.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/image.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/MemoryPool.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/MemoryPool.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/osdep.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/osdep.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/PrettyPrint.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/PrettyPrint.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/blkif.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/blkif.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/BlktapController.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/BlktapController.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/ConsoleController.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/ConsoleController.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/DevConstants.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/DevConstants.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/DevController.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/DevController.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/iopif.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/iopif.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/irqif.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/irqif.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/netif2.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/netif2.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/netif.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/netif.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/params.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/params.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/pciif.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/pciif.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/pciquirk.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/pciquirk.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/relocate.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/relocate.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvDaemon.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvDaemon.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvDmesg.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvDmesg.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvDomainDir.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvDomainDir.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvDomain.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvDomain.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvNode.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvNode.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvRoot.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvRoot.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvServer.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvServer.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvVnetDir.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvVnetDir.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvXendLog.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvXendLog.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SSLXMLRPCServer.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SSLXMLRPCServer.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/tests
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/tests/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/tests/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/tests/test_controllers.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/tests/test_controllers.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/tpmif.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/tpmif.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/udevevent.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/udevevent.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/vfbif.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/vfbif.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/vscsiif.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/vscsiif.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/vusbif.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/vusbif.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/XMLRPCServer.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/XMLRPCServer.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/sxp.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/sxp.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/tests
dist/install/usr/lib/python2.6/dist-packages/xen/xend/tests/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/tests/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/tests/test_sxp.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/tests/test_sxp.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/tests/test_uuid.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/tests/test_uuid.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/tests/test_XendConfig.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/tests/test_XendConfig.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/uuid.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/uuid.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/Vifctl.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/Vifctl.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendAPIConstants.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendAPIConstants.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendAPI.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendAPI.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendAPIStore.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendAPIStore.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendAPIVersion.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendAPIVersion.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendAuthSessions.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendAuthSessions.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendBase.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendBase.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendBootloader.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendBootloader.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendCheckpoint.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendCheckpoint.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendClient.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendClient.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendConfig.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendConfig.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendConstants.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendConstants.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendCPUPool.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendCPUPool.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDevices.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDevices.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDmesg.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDmesg.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDomainInfo.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDomainInfo.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDomain.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDomain.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDPCI.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDPCI.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDSCSI.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDSCSI.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendError.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendError.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendLocalStorageRepo.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendLocalStorageRepo.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendLogging.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendLogging.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendMonitor.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendMonitor.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendNetwork.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendNetwork.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendNode.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendNode.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendOptions.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendOptions.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendPBD.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendPBD.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendPIFMetrics.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendPIFMetrics.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendPIF.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendPIF.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendPPCI.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendPPCI.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendProtocol.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendProtocol.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendPSCSI.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendPSCSI.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendQCoWStorageRepo.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendQCoWStorageRepo.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendStateStore.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendStateStore.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendStorageRepository.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendStorageRepository.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendSXPDev.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendSXPDev.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendTaskManager.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendTaskManager.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendTask.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendTask.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendVDI.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendVDI.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendVMMetrics.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendVMMetrics.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendVnet.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendVnet.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendXSPolicyAdmin.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendXSPolicyAdmin.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendXSPolicy.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendXSPolicy.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/tests
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/tests/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/tests/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/tests/stress_xs.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/tests/stress_xs.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/xstransact.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/xstransact.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/xsutil.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/xsutil.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/xswatch.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/xswatch.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm
dist/install/usr/lib/python2.6/dist-packages/xen/xm/addlabel.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/addlabel.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/console.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/console.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/cpupool-create.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/cpupool-create.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/cpupool-new.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/cpupool-new.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/cpupool.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/cpupool.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/create.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/create.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/dry-run.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/dry-run.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/dumppolicy.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/dumppolicy.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/getenforce.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/getenforce.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/getlabel.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/getlabel.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/getpolicy.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/getpolicy.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/help.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/help.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/labels.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/labels.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/main.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/main.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/migrate.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/migrate.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/new.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/new.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/opts.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/opts.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/resetpolicy.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/resetpolicy.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/resources.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/resources.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/rmlabel.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/rmlabel.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/setenforce.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/setenforce.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/setpolicy.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/setpolicy.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/shutdown.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/shutdown.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/tests
dist/install/usr/lib/python2.6/dist-packages/xen/xm/tests/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/tests/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/tests/test_create.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/tests/test_create.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/xenapi_create.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/xenapi_create.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/XenAPI.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/XenAPI.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xsview
dist/install/usr/lib/python2.6/dist-packages/xen/xsview/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/xsview/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xsview/main.py
dist/install/usr/lib/python2.6/dist-packages/xen/xsview/main.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xsview/xsviewer.py
dist/install/usr/lib/python2.6/dist-packages/xen/xsview/xsviewer.pyc
dist/install/usr/local
dist/install/usr/local/etc
dist/install/usr/local/etc/qemu
dist/install/usr/local/etc/qemu/target-x86_64.conf
dist/install/usr/local/lib
dist/install/usr/local/lib/ocaml
dist/install/usr/local/lib/ocaml/3.11.2
dist/install/usr/local/lib/ocaml/3.11.2/xenbus
dist/install/usr/local/lib/ocaml/3.11.2/xenbus/dllxenbus_stubs.so
dist/install/usr/local/lib/ocaml/3.11.2/xenbus/libxenbus_stubs.a
dist/install/usr/local/lib/ocaml/3.11.2/xenbus/META
dist/install/usr/local/lib/ocaml/3.11.2/xenbus/xenbus.a
dist/install/usr/local/lib/ocaml/3.11.2/xenbus/xenbus.cma
dist/install/usr/local/lib/ocaml/3.11.2/xenbus/xenbus.cmi
dist/install/usr/local/lib/ocaml/3.11.2/xenbus/xenbus.cmo
dist/install/usr/local/lib/ocaml/3.11.2/xenbus/xenbus.cmx
dist/install/usr/local/lib/ocaml/3.11.2/xenbus/xenbus.cmxa
dist/install/usr/local/lib/ocaml/3.11.2/xenctrl
dist/install/usr/local/lib/ocaml/3.11.2/xenctrl/dllxenctrl_stubs.so
dist/install/usr/local/lib/ocaml/3.11.2/xenctrl/libxenctrl_stubs.a
dist/install/usr/local/lib/ocaml/3.11.2/xenctrl/META
dist/install/usr/local/lib/ocaml/3.11.2/xenctrl/xenctrl.a
dist/install/usr/local/lib/ocaml/3.11.2/xenctrl/xenctrl.cma
dist/install/usr/local/lib/ocaml/3.11.2/xenctrl/xenctrl.cmi
dist/install/usr/local/lib/ocaml/3.11.2/xenctrl/xenctrl.cmx
dist/install/usr/local/lib/ocaml/3.11.2/xenctrl/xenctrl.cmxa
dist/install/usr/local/lib/ocaml/3.11.2/xeneventchn
dist/install/usr/local/lib/ocaml/3.11.2/xeneventchn/dllxeneventchn_stubs.so
dist/install/usr/local/lib/ocaml/3.11.2/xeneventchn/libxeneventchn_stubs.a
dist/install/usr/local/lib/ocaml/3.11.2/xeneventchn/META
dist/install/usr/local/lib/ocaml/3.11.2/xeneventchn/xeneventchn.a
dist/install/usr/local/lib/ocaml/3.11.2/xeneventchn/xeneventchn.cma
dist/install/usr/local/lib/ocaml/3.11.2/xeneventchn/xeneventchn.cmi
dist/install/usr/local/lib/ocaml/3.11.2/xeneventchn/xeneventchn.cmx
dist/install/usr/local/lib/ocaml/3.11.2/xeneventchn/xeneventchn.cmxa
dist/install/usr/local/lib/ocaml/3.11.2/xenlight
dist/install/usr/local/lib/ocaml/3.11.2/xenlight/dllxenlight_stubs.so
dist/install/usr/local/lib/ocaml/3.11.2/xenlight/libxenlight_stubs.a
dist/install/usr/local/lib/ocaml/3.11.2/xenlight/META
dist/install/usr/local/lib/ocaml/3.11.2/xenlight/xenlight.a
dist/install/usr/local/lib/ocaml/3.11.2/xenlight/xenlight.cma
dist/install/usr/local/lib/ocaml/3.11.2/xenlight/xenlight.cmi
dist/install/usr/local/lib/ocaml/3.11.2/xenlight/xenlight.cmx
dist/install/usr/local/lib/ocaml/3.11.2/xenlight/xenlight.cmxa
dist/install/usr/local/lib/ocaml/3.11.2/xenmmap
dist/install/usr/local/lib/ocaml/3.11.2/xenmmap/dllxenmmap_stubs.so
dist/install/usr/local/lib/ocaml/3.11.2/xenmmap/libxenmmap_stubs.a
dist/install/usr/local/lib/ocaml/3.11.2/xenmmap/META
dist/install/usr/local/lib/ocaml/3.11.2/xenmmap/xenmmap.a
dist/install/usr/local/lib/ocaml/3.11.2/xenmmap/xenmmap.cma
dist/install/usr/local/lib/ocaml/3.11.2/xenmmap/xenmmap.cmi
dist/install/usr/local/lib/ocaml/3.11.2/xenmmap/xenmmap.cmx
dist/install/usr/local/lib/ocaml/3.11.2/xenmmap/xenmmap.cmxa
dist/install/usr/local/lib/ocaml/3.11.2/xenstore
dist/install/usr/local/lib/ocaml/3.11.2/xenstore/META
dist/install/usr/local/lib/ocaml/3.11.2/xenstore/xenstore.a
dist/install/usr/local/lib/ocaml/3.11.2/xenstore/xenstore.cma
dist/install/usr/local/lib/ocaml/3.11.2/xenstore/xenstore.cmi
dist/install/usr/local/lib/ocaml/3.11.2/xenstore/xenstore.cmo
dist/install/usr/local/lib/ocaml/3.11.2/xenstore/xenstore.cmx
dist/install/usr/local/lib/ocaml/3.11.2/xenstore/xenstore.cmxa
dist/install/usr/local/share
dist/install/usr/local/share/doc
dist/install/usr/local/share/doc/qemu
dist/install/usr/local/share/doc/qemu/qemu-doc.html
dist/install/usr/local/share/doc/qemu/qemu-tech.html
dist/install/usr/local/share/man
dist/install/usr/local/share/man/man1
dist/install/usr/local/share/man/man1/qemu.1
dist/install/usr/local/share/man/man1/qemu-img.1
dist/install/usr/local/share/man/man8
dist/install/usr/local/share/man/man8/qemu-nbd.8
dist/install/usr/sbin
dist/install/usr/sbin/blktapctrl
dist/install/usr/sbin/flask-get-bool
dist/install/usr/sbin/flask-getenforce
dist/install/usr/sbin/flask-label-pci
dist/install/usr/sbin/flask-loadpolicy
dist/install/usr/sbin/flask-set-bool
dist/install/usr/sbin/flask-setenforce
dist/install/usr/sbin/gdbsx
dist/install/usr/sbin/gtracestat
dist/install/usr/sbin/gtraceview
dist/install/usr/sbin/img2qcow
dist/install/usr/sbin/kdd
dist/install/usr/sbin/lock-util
dist/install/usr/sbin/oxenstored
dist/install/usr/sbin/qcow2raw
dist/install/usr/sbin/qcow-create
dist/install/usr/sbin/tap-ctl
dist/install/usr/sbin/tapdisk
dist/install/usr/sbin/tapdisk2
dist/install/usr/sbin/tapdisk-client
dist/install/usr/sbin/tapdisk-diff
dist/install/usr/sbin/tapdisk-stream
dist/install/usr/sbin/td-util
dist/install/usr/sbin/vhd-update
dist/install/usr/sbin/vhd-util
dist/install/usr/sbin/xl
dist/install/usr/sbin/xsview
dist/install/usr/share
dist/install/usr/share/doc
dist/install/usr/share/man
dist/install/usr/share/man/man1
dist/install/usr/share/man/man8
dist/install/var
dist/install/var/lib
dist/install/var/lock
dist/install/var/lock/subsys
dist/install/var/log
dist/install/var/log/xen
dist/install/var/run
dist/install/var/xen
dist/install/var/xen/dump

--=-Op15fjVHnxmurhd6MvPI
Content-Disposition: attachment; filename="uninstall.before"
Content-Type: text/plain; name="uninstall.before"; charset="UTF-8"
Content-Transfer-Encoding: 7bit

dist/
dist/install
dist/install/etc
dist/install/etc/bash_completion.d
dist/install/etc/bash_completion.d/xl.sh
dist/install/etc/default
dist/install/etc/hotplug
dist/install/etc/init.d
dist/install/etc/udev
dist/install/etc/udev/rules.d
dist/install/etc/xen.old-1345709489
dist/install/etc/xen.old-1345709489/auto
dist/install/etc/xen.old-1345709489/cpupool
dist/install/etc/xen.old-1345709489/oxenstored.conf
dist/install/etc/xen.old-1345709489/README
dist/install/etc/xen.old-1345709489/README.incompatibilities
dist/install/etc/xen.old-1345709489/scripts
dist/install/etc/xen.old-1345709489/scripts/blktap
dist/install/etc/xen.old-1345709489/scripts/block
dist/install/etc/xen.old-1345709489/scripts/block-common.sh
dist/install/etc/xen.old-1345709489/scripts/block-enbd
dist/install/etc/xen.old-1345709489/scripts/block-nbd
dist/install/etc/xen.old-1345709489/scripts/external-device-migrate
dist/install/etc/xen.old-1345709489/scripts/hotplugpath.sh
dist/install/etc/xen.old-1345709489/scripts/locking.sh
dist/install/etc/xen.old-1345709489/scripts/logging.sh
dist/install/etc/xen.old-1345709489/scripts/network-bridge
dist/install/etc/xen.old-1345709489/scripts/network-nat
dist/install/etc/xen.old-1345709489/scripts/network-route
dist/install/etc/xen.old-1345709489/scripts/qemu-ifup
dist/install/etc/xen.old-1345709489/scripts/vif2
dist/install/etc/xen.old-1345709489/scripts/vif-bridge
dist/install/etc/xen.old-1345709489/scripts/vif-common.sh
dist/install/etc/xen.old-1345709489/scripts/vif-nat
dist/install/etc/xen.old-1345709489/scripts/vif-route
dist/install/etc/xen.old-1345709489/scripts/vif-setup
dist/install/etc/xen.old-1345709489/scripts/vscsi
dist/install/etc/xen.old-1345709489/scripts/vtpm
dist/install/etc/xen.old-1345709489/scripts/vtpm-common.sh
dist/install/etc/xen.old-1345709489/scripts/vtpm-delete
dist/install/etc/xen.old-1345709489/scripts/vtpm-hotplug-common.sh
dist/install/etc/xen.old-1345709489/scripts/vtpm-impl
dist/install/etc/xen.old-1345709489/scripts/vtpm-migration.sh
dist/install/etc/xen.old-1345709489/scripts/xen-hotplug-cleanup
dist/install/etc/xen.old-1345709489/scripts/xen-hotplug-common.sh
dist/install/etc/xen.old-1345709489/scripts/xen-network-common.sh
dist/install/etc/xen.old-1345709489/scripts/xen-script-common.sh
dist/install/etc/xen.old-1345709489/xend-config.sxp
dist/install/etc/xen.old-1345709489/xend-pci-permissive.sxp
dist/install/etc/xen.old-1345709489/xend-pci-quirks.sxp
dist/install/etc/xen.old-1345709489/xl.conf
dist/install/etc/xen.old-1345709489/xlexample.hvm
dist/install/etc/xen.old-1345709489/xlexample.pvlinux
dist/install/etc/xen.old-1345709489/xm-config.xml
dist/install/etc/xen.old-1345709489/xmexample1
dist/install/etc/xen.old-1345709489/xmexample2
dist/install/etc/xen.old-1345709489/xmexample3
dist/install/etc/xen.old-1345709489/xmexample.hvm
dist/install/etc/xen.old-1345709489/xmexample.hvm-stubdom
dist/install/etc/xen.old-1345709489/xmexample.nbd
dist/install/etc/xen.old-1345709489/xmexample.pv-grub
dist/install/etc/xen.old-1345709489/xmexample.vti
dist/install/usr
dist/install/usr/bin
dist/install/usr/bin/remus
dist/install/usr/include
dist/install/usr/include/blktaplib.h
dist/install/usr/include/fsimage_grub.h
dist/install/usr/include/fsimage.h
dist/install/usr/include/fsimage_plugin.h
dist/install/usr/include/libxenvchan.h
dist/install/usr/include/xenstore-compat
dist/install/usr/lib
dist/install/usr/lib/fs
dist/install/usr/lib/fs/ext2fs
dist/install/usr/lib/fs/ext2fs/fsimage.so
dist/install/usr/lib/fs/fat
dist/install/usr/lib/fs/fat/fsimage.so
dist/install/usr/lib/fs/iso9660
dist/install/usr/lib/fs/iso9660/fsimage.so
dist/install/usr/lib/fs/reiserfs
dist/install/usr/lib/fs/reiserfs/fsimage.so
dist/install/usr/lib/fs/ufs
dist/install/usr/lib/fs/ufs/fsimage.so
dist/install/usr/lib/fs/xfs
dist/install/usr/lib/fs/xfs/fsimage.so
dist/install/usr/lib/fs/zfs
dist/install/usr/lib/fs/zfs/fsimage.so
dist/install/usr/lib/libblktap.a
dist/install/usr/lib/libblktapctl.a
dist/install/usr/lib/libblktapctl.so
dist/install/usr/lib/libblktapctl.so.1.0
dist/install/usr/lib/libblktapctl.so.1.0.0
dist/install/usr/lib/libblktap.so
dist/install/usr/lib/libblktap.so.3.0
dist/install/usr/lib/libblktap.so.3.0.0
dist/install/usr/lib/libfsimage.so
dist/install/usr/lib/libfsimage.so.1.0
dist/install/usr/lib/libfsimage.so.1.0.0
dist/install/usr/lib/libvhd.a
dist/install/usr/lib/libvhd.so
dist/install/usr/lib/libvhd.so.1.0
dist/install/usr/lib/libvhd.so.1.0.0
dist/install/usr/lib/libxenctrl.a
dist/install/usr/lib/libxenctrl.so
dist/install/usr/lib/libxenctrl.so.4.2
dist/install/usr/lib/libxenctrl.so.4.2.0
dist/install/usr/lib/libxenguest.a
dist/install/usr/lib/libxenguest.so
dist/install/usr/lib/libxenguest.so.4.2
dist/install/usr/lib/libxenguest.so.4.2.0
dist/install/usr/lib/libxenlight.a
dist/install/usr/lib/libxenlight.so
dist/install/usr/lib/libxenlight.so.2.0
dist/install/usr/lib/libxenlight.so.2.0.0
dist/install/usr/lib/libxenstat.a
dist/install/usr/lib/libxenstat.so
dist/install/usr/lib/libxenstat.so.0
dist/install/usr/lib/libxenstat.so.0.0
dist/install/usr/lib/libxenstore.a
dist/install/usr/lib/libxenstore.so
dist/install/usr/lib/libxenstore.so.3.0
dist/install/usr/lib/libxenstore.so.3.0.1
dist/install/usr/lib/libxenvchan.a
dist/install/usr/lib/libxenvchan.so
dist/install/usr/lib/libxenvchan.so.1.0
dist/install/usr/lib/libxenvchan.so.1.0.0
dist/install/usr/lib/libxlutil.a
dist/install/usr/lib/libxlutil.so
dist/install/usr/lib/libxlutil.so.1.0
dist/install/usr/lib/libxlutil.so.1.0.0
dist/install/usr/lib/python2.6
dist/install/usr/lib/python2.6/dist-packages
dist/install/usr/lib/python2.6/dist-packages/fsimage.so
dist/install/usr/lib/python2.6/dist-packages/grub
dist/install/usr/lib/python2.6/dist-packages/grub/ExtLinuxConf.py
dist/install/usr/lib/python2.6/dist-packages/grub/ExtLinuxConf.pyc
dist/install/usr/lib/python2.6/dist-packages/grub/GrubConf.py
dist/install/usr/lib/python2.6/dist-packages/grub/GrubConf.pyc
dist/install/usr/lib/python2.6/dist-packages/grub/__init__.py
dist/install/usr/lib/python2.6/dist-packages/grub/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/grub/LiloConf.py
dist/install/usr/lib/python2.6/dist-packages/grub/LiloConf.pyc
dist/install/usr/lib/python2.6/dist-packages/pygrub-0.3.egg-info
dist/install/usr/lib/python2.6/dist-packages/xen
dist/install/usr/lib/python2.6/dist-packages/xen-3.0.egg-info
dist/install/usr/lib/python2.6/dist-packages/xen/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/lowlevel
dist/install/usr/lib/python2.6/dist-packages/xen/lowlevel/checkpoint.so
dist/install/usr/lib/python2.6/dist-packages/xen/lowlevel/flask.so
dist/install/usr/lib/python2.6/dist-packages/xen/lowlevel/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/lowlevel/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/lowlevel/netlink.so
dist/install/usr/lib/python2.6/dist-packages/xen/lowlevel/ptsname.so
dist/install/usr/lib/python2.6/dist-packages/xen/lowlevel/xc.so
dist/install/usr/lib/python2.6/dist-packages/xen/lowlevel/xs.so
dist/install/usr/lib/python2.6/dist-packages/xen/remus
dist/install/usr/lib/python2.6/dist-packages/xen/remus/blkdev.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/blkdev.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/device.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/device.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/image.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/image.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/netlink.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/netlink.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/profile.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/profile.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/qdisc.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/qdisc.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/save.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/save.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/tapdisk.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/tapdisk.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/util.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/util.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/vbd.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/vbd.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/vdi.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/vdi.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/vif.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/vif.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/vm.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/vm.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/sv
dist/install/usr/lib/python2.6/dist-packages/xen/sv/CreateDomain.py
dist/install/usr/lib/python2.6/dist-packages/xen/sv/CreateDomain.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/sv/DomInfo.py
dist/install/usr/lib/python2.6/dist-packages/xen/sv/DomInfo.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/sv/GenTabbed.py
dist/install/usr/lib/python2.6/dist-packages/xen/sv/GenTabbed.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/sv/HTMLBase.py
dist/install/usr/lib/python2.6/dist-packages/xen/sv/HTMLBase.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/sv/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/sv/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/sv/Main.py
dist/install/usr/lib/python2.6/dist-packages/xen/sv/Main.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/sv/NodeInfo.py
dist/install/usr/lib/python2.6/dist-packages/xen/sv/NodeInfo.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/sv/RestoreDomain.py
dist/install/usr/lib/python2.6/dist-packages/xen/sv/RestoreDomain.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/sv/util.py
dist/install/usr/lib/python2.6/dist-packages/xen/sv/util.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/sv/Wizard.py
dist/install/usr/lib/python2.6/dist-packages/xen/sv/Wizard.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util
dist/install/usr/lib/python2.6/dist-packages/xen/util/acmpolicy.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/acmpolicy.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/asserts.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/asserts.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/auxbin.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/auxbin.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/blkif.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/blkif.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/bootloader.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/bootloader.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/Brctl.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/Brctl.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/bugtool.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/bugtool.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/diagnose.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/diagnose.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/dictio.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/dictio.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/fileuri.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/fileuri.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/ip.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/ip.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/mac.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/mac.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/mkdir.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/mkdir.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/oshelp.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/oshelp.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/path.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/path.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/pci.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/pci.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/rwlock.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/rwlock.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/SSHTransport.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/SSHTransport.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/sxputils.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/sxputils.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/utils.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/utils.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/vscsi_util.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/vscsi_util.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/vusb_util.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/vusb_util.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xmlrpcclient.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xmlrpcclient.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xmlrpclib2.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xmlrpclib2.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xpopen.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xpopen.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsconstants.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsconstants.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/acm
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/acm/acm.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/acm/acm.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/acm/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/acm/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/dummy
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/dummy/dummy.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/dummy/dummy.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/dummy/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/dummy/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/flask
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/flask/flask.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/flask/flask.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/flask/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/flask/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/xsm_core.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/xsm_core.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/xsm.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/xsm.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xspolicy.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xspolicy.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/web
dist/install/usr/lib/python2.6/dist-packages/xen/web/connection.py
dist/install/usr/lib/python2.6/dist-packages/xen/web/connection.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/web/http.py
dist/install/usr/lib/python2.6/dist-packages/xen/web/http.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/web/httpserver.py
dist/install/usr/lib/python2.6/dist-packages/xen/web/httpserver.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/web/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/web/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/web/protocol.py
dist/install/usr/lib/python2.6/dist-packages/xen/web/protocol.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/web/resource.py
dist/install/usr/lib/python2.6/dist-packages/xen/web/resource.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/web/SrvBase.py
dist/install/usr/lib/python2.6/dist-packages/xen/web/SrvBase.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/web/SrvDir.py
dist/install/usr/lib/python2.6/dist-packages/xen/web/SrvDir.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/web/static.py
dist/install/usr/lib/python2.6/dist-packages/xen/web/static.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/web/tcp.py
dist/install/usr/lib/python2.6/dist-packages/xen/web/tcp.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/web/unix.py
dist/install/usr/lib/python2.6/dist-packages/xen/web/unix.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend
dist/install/usr/lib/python2.6/dist-packages/xen/xend/arch.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/arch.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/Args.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/Args.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/balloon.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/balloon.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/encode.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/encode.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/image.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/image.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/MemoryPool.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/MemoryPool.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/osdep.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/osdep.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/PrettyPrint.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/PrettyPrint.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/blkif.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/blkif.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/BlktapController.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/BlktapController.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/ConsoleController.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/ConsoleController.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/DevConstants.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/DevConstants.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/DevController.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/DevController.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/iopif.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/iopif.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/irqif.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/irqif.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/netif2.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/netif2.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/netif.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/netif.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/params.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/params.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/pciif.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/pciif.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/pciquirk.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/pciquirk.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/relocate.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/relocate.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvDaemon.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvDaemon.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvDmesg.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvDmesg.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvDomainDir.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvDomainDir.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvDomain.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvDomain.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvNode.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvNode.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvRoot.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvRoot.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvServer.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvServer.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvVnetDir.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvVnetDir.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvXendLog.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvXendLog.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SSLXMLRPCServer.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SSLXMLRPCServer.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/tests
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/tests/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/tests/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/tests/test_controllers.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/tests/test_controllers.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/tpmif.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/tpmif.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/udevevent.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/udevevent.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/vfbif.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/vfbif.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/vscsiif.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/vscsiif.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/vusbif.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/vusbif.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/XMLRPCServer.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/XMLRPCServer.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/sxp.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/sxp.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/tests
dist/install/usr/lib/python2.6/dist-packages/xen/xend/tests/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/tests/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/tests/test_sxp.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/tests/test_sxp.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/tests/test_uuid.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/tests/test_uuid.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/tests/test_XendConfig.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/tests/test_XendConfig.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/uuid.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/uuid.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/Vifctl.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/Vifctl.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendAPIConstants.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendAPIConstants.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendAPI.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendAPI.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendAPIStore.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendAPIStore.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendAPIVersion.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendAPIVersion.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendAuthSessions.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendAuthSessions.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendBase.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendBase.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendBootloader.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendBootloader.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendCheckpoint.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendCheckpoint.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendClient.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendClient.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendConfig.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendConfig.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendConstants.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendConstants.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendCPUPool.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendCPUPool.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDevices.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDevices.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDmesg.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDmesg.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDomainInfo.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDomainInfo.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDomain.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDomain.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDPCI.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDPCI.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDSCSI.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDSCSI.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendError.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendError.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendLocalStorageRepo.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendLocalStorageRepo.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendLogging.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendLogging.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendMonitor.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendMonitor.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendNetwork.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendNetwork.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendNode.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendNode.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendOptions.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendOptions.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendPBD.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendPBD.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendPIFMetrics.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendPIFMetrics.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendPIF.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendPIF.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendPPCI.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendPPCI.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendProtocol.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendProtocol.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendPSCSI.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendPSCSI.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendQCoWStorageRepo.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendQCoWStorageRepo.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendStateStore.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendStateStore.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendStorageRepository.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendStorageRepository.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendSXPDev.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendSXPDev.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendTaskManager.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendTaskManager.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendTask.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendTask.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendVDI.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendVDI.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendVMMetrics.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendVMMetrics.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendVnet.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendVnet.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendXSPolicyAdmin.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendXSPolicyAdmin.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendXSPolicy.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendXSPolicy.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/tests
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/tests/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/tests/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/tests/stress_xs.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/tests/stress_xs.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/xstransact.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/xstransact.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/xsutil.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/xsutil.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/xswatch.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/xswatch.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm
dist/install/usr/lib/python2.6/dist-packages/xen/xm/addlabel.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/addlabel.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/console.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/console.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/cpupool-create.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/cpupool-create.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/cpupool-new.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/cpupool-new.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/cpupool.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/cpupool.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/create.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/create.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/dry-run.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/dry-run.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/dumppolicy.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/dumppolicy.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/getenforce.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/getenforce.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/getlabel.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/getlabel.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/getpolicy.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/getpolicy.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/help.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/help.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/labels.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/labels.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/main.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/main.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/migrate.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/migrate.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/new.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/new.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/opts.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/opts.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/resetpolicy.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/resetpolicy.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/resources.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/resources.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/rmlabel.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/rmlabel.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/setenforce.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/setenforce.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/setpolicy.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/setpolicy.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/shutdown.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/shutdown.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/tests
dist/install/usr/lib/python2.6/dist-packages/xen/xm/tests/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/tests/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/tests/test_create.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/tests/test_create.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/xenapi_create.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/xenapi_create.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/XenAPI.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/XenAPI.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xsview
dist/install/usr/lib/python2.6/dist-packages/xen/xsview/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/xsview/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xsview/main.py
dist/install/usr/lib/python2.6/dist-packages/xen/xsview/main.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xsview/xsviewer.py
dist/install/usr/lib/python2.6/dist-packages/xen/xsview/xsviewer.pyc
dist/install/usr/lib/xen
dist/install/usr/lib/xen/bin
dist/install/usr/lib/xen/bin/libxl-save-helper
dist/install/usr/lib/xen/bin/lsevtchn
dist/install/usr/lib/xen/bin/pygrub
dist/install/usr/lib/xen/bin/qemu-dm
dist/install/usr/lib/xen/bin/qemu-ga
dist/install/usr/lib/xen/bin/qemu-img
dist/install/usr/lib/xen/bin/qemu-io
dist/install/usr/lib/xen/bin/qemu-nbd
dist/install/usr/lib/xen/bin/qemu-system-i386
dist/install/usr/lib/xen/bin/readnotes
dist/install/usr/lib/xen/bin/xc_restore
dist/install/usr/lib/xen/bin/xc_save
dist/install/usr/lib/xen/boot
dist/install/usr/lib/xen/boot/hvmloader
dist/install/usr/local
dist/install/usr/local/etc
dist/install/usr/local/etc/qemu
dist/install/usr/local/etc/qemu/target-x86_64.conf
dist/install/usr/local/lib
dist/install/usr/local/lib/ocaml
dist/install/usr/local/lib/ocaml/3.11.2
dist/install/usr/local/lib/ocaml/3.11.2/xenbus
dist/install/usr/local/lib/ocaml/3.11.2/xenbus/dllxenbus_stubs.so
dist/install/usr/local/lib/ocaml/3.11.2/xenbus/libxenbus_stubs.a
dist/install/usr/local/lib/ocaml/3.11.2/xenbus/META
dist/install/usr/local/lib/ocaml/3.11.2/xenbus/xenbus.a
dist/install/usr/local/lib/ocaml/3.11.2/xenbus/xenbus.cma
dist/install/usr/local/lib/ocaml/3.11.2/xenbus/xenbus.cmi
dist/install/usr/local/lib/ocaml/3.11.2/xenbus/xenbus.cmo
dist/install/usr/local/lib/ocaml/3.11.2/xenbus/xenbus.cmx
dist/install/usr/local/lib/ocaml/3.11.2/xenbus/xenbus.cmxa
dist/install/usr/local/lib/ocaml/3.11.2/xenctrl
dist/install/usr/local/lib/ocaml/3.11.2/xenctrl/dllxenctrl_stubs.so
dist/install/usr/local/lib/ocaml/3.11.2/xenctrl/libxenctrl_stubs.a
dist/install/usr/local/lib/ocaml/3.11.2/xenctrl/META
dist/install/usr/local/lib/ocaml/3.11.2/xenctrl/xenctrl.a
dist/install/usr/local/lib/ocaml/3.11.2/xenctrl/xenctrl.cma
dist/install/usr/local/lib/ocaml/3.11.2/xenctrl/xenctrl.cmi
dist/install/usr/local/lib/ocaml/3.11.2/xenctrl/xenctrl.cmx
dist/install/usr/local/lib/ocaml/3.11.2/xenctrl/xenctrl.cmxa
dist/install/usr/local/lib/ocaml/3.11.2/xeneventchn
dist/install/usr/local/lib/ocaml/3.11.2/xeneventchn/dllxeneventchn_stubs.so
dist/install/usr/local/lib/ocaml/3.11.2/xeneventchn/libxeneventchn_stubs.a
dist/install/usr/local/lib/ocaml/3.11.2/xeneventchn/META
dist/install/usr/local/lib/ocaml/3.11.2/xeneventchn/xeneventchn.a
dist/install/usr/local/lib/ocaml/3.11.2/xeneventchn/xeneventchn.cma
dist/install/usr/local/lib/ocaml/3.11.2/xeneventchn/xeneventchn.cmi
dist/install/usr/local/lib/ocaml/3.11.2/xeneventchn/xeneventchn.cmx
dist/install/usr/local/lib/ocaml/3.11.2/xeneventchn/xeneventchn.cmxa
dist/install/usr/local/lib/ocaml/3.11.2/xenlight
dist/install/usr/local/lib/ocaml/3.11.2/xenlight/dllxenlight_stubs.so
dist/install/usr/local/lib/ocaml/3.11.2/xenlight/libxenlight_stubs.a
dist/install/usr/local/lib/ocaml/3.11.2/xenlight/META
dist/install/usr/local/lib/ocaml/3.11.2/xenlight/xenlight.a
dist/install/usr/local/lib/ocaml/3.11.2/xenlight/xenlight.cma
dist/install/usr/local/lib/ocaml/3.11.2/xenlight/xenlight.cmi
dist/install/usr/local/lib/ocaml/3.11.2/xenlight/xenlight.cmx
dist/install/usr/local/lib/ocaml/3.11.2/xenlight/xenlight.cmxa
dist/install/usr/local/lib/ocaml/3.11.2/xenmmap
dist/install/usr/local/lib/ocaml/3.11.2/xenmmap/dllxenmmap_stubs.so
dist/install/usr/local/lib/ocaml/3.11.2/xenmmap/libxenmmap_stubs.a
dist/install/usr/local/lib/ocaml/3.11.2/xenmmap/META
dist/install/usr/local/lib/ocaml/3.11.2/xenmmap/xenmmap.a
dist/install/usr/local/lib/ocaml/3.11.2/xenmmap/xenmmap.cma
dist/install/usr/local/lib/ocaml/3.11.2/xenmmap/xenmmap.cmi
dist/install/usr/local/lib/ocaml/3.11.2/xenmmap/xenmmap.cmx
dist/install/usr/local/lib/ocaml/3.11.2/xenmmap/xenmmap.cmxa
dist/install/usr/local/lib/ocaml/3.11.2/xenstore
dist/install/usr/local/lib/ocaml/3.11.2/xenstore/META
dist/install/usr/local/lib/ocaml/3.11.2/xenstore/xenstore.a
dist/install/usr/local/lib/ocaml/3.11.2/xenstore/xenstore.cma
dist/install/usr/local/lib/ocaml/3.11.2/xenstore/xenstore.cmi
dist/install/usr/local/lib/ocaml/3.11.2/xenstore/xenstore.cmo
dist/install/usr/local/lib/ocaml/3.11.2/xenstore/xenstore.cmx
dist/install/usr/local/lib/ocaml/3.11.2/xenstore/xenstore.cmxa
dist/install/usr/local/share
dist/install/usr/local/share/doc
dist/install/usr/local/share/doc/qemu
dist/install/usr/local/share/doc/qemu/qemu-doc.html
dist/install/usr/local/share/doc/qemu/qemu-tech.html
dist/install/usr/local/share/man
dist/install/usr/local/share/man/man1
dist/install/usr/local/share/man/man1/qemu.1
dist/install/usr/local/share/man/man1/qemu-img.1
dist/install/usr/local/share/man/man8
dist/install/usr/local/share/man/man8/qemu-nbd.8
dist/install/usr/sbin
dist/install/usr/sbin/blktapctrl
dist/install/usr/sbin/flask-get-bool
dist/install/usr/sbin/flask-getenforce
dist/install/usr/sbin/flask-label-pci
dist/install/usr/sbin/flask-loadpolicy
dist/install/usr/sbin/flask-set-bool
dist/install/usr/sbin/flask-setenforce
dist/install/usr/sbin/gdbsx
dist/install/usr/sbin/gtracestat
dist/install/usr/sbin/gtraceview
dist/install/usr/sbin/img2qcow
dist/install/usr/sbin/kdd
dist/install/usr/sbin/lock-util
dist/install/usr/sbin/oxenstored
dist/install/usr/sbin/qcow2raw
dist/install/usr/sbin/qcow-create
dist/install/usr/sbin/tap-ctl
dist/install/usr/sbin/tapdisk
dist/install/usr/sbin/tapdisk2
dist/install/usr/sbin/tapdisk-client
dist/install/usr/sbin/tapdisk-diff
dist/install/usr/sbin/tapdisk-stream
dist/install/usr/sbin/td-util
dist/install/usr/sbin/vhd-update
dist/install/usr/sbin/vhd-util
dist/install/usr/sbin/xl
dist/install/usr/sbin/xsview
dist/install/usr/share
dist/install/usr/share/doc
dist/install/usr/share/man
dist/install/usr/share/man/man1
dist/install/usr/share/man/man8
dist/install/var
dist/install/var/lib
dist/install/var/lock
dist/install/var/lock/subsys
dist/install/var/log
dist/install/var/log/xen
dist/install/var/run
dist/install/var/xen
dist/install/var/xen/dump

--=-Op15fjVHnxmurhd6MvPI
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=-Op15fjVHnxmurhd6MvPI--


From xen-devel-bounces@lists.xen.org Thu Aug 23 08:25:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 08:25:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4Sil-0000Ye-Lw; Thu, 23 Aug 2012 08:24:55 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T4Sij-0000YL-N5
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 08:24:54 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1345710245!8490575!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTAzMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19937 invoked from network); 23 Aug 2012 08:24:05 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 08:24:05 -0000
X-IronPort-AV: E=Sophos;i="4.80,299,1344211200"; d="scan'208";a="14140368"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	23 Aug 2012 08:24:04 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 23 Aug 2012 09:24:04 +0100
Message-ID: <1345710243.12501.56.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Date: Thu, 23 Aug 2012 09:24:03 +0100
In-Reply-To: <1345708742.12501.48.camel@zakaz.uk.xensource.com>
References: <20120822214709.296550@gmx.net>
	<1345703962.23624.57.camel@dagon.hellion.org.uk>
	<5035E421020000780008A601@nat28.tlf.novell.com>
	<1345707105.12501.38.camel@zakaz.uk.xensource.com>
	<1345708742.12501.48.camel@zakaz.uk.xensource.com>
Organization: Citrix Systems, Inc.
Content-Type: multipart/mixed; boundary="=-Op15fjVHnxmurhd6MvPI"
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "p.d@gmx.de" <p.d@gmx.de>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] do not remove kernels or modules on
 uninstall. (Was: Re: make uninstall can delete xen-kernels)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--=-Op15fjVHnxmurhd6MvPI
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit

On Thu, 2012-08-23 at 08:59 +0100, Ian Campbell wrote:
> On Thu, 2012-08-23 at 08:31 +0100, Ian Campbell wrote:
> > On Thu, 2012-08-23 at 08:04 +0100, Jan Beulich wrote:
> > > >>> Ian Campbell <Ian.Campbell@citrix.com> 08/23/12 8:40 AM >>>
> > > >--- a/Makefile    Wed Aug 22 17:32:37 2012 +0100
> > > >+++ b/Makefile    Thu Aug 23 07:38:10 2012 +0100
> > > >@@ -228,8 +228,6 @@ uninstall:
> > > >    rm -f  $(D)$(SYSCONFIG_DIR)/xendomains
> > > >    rm -f  $(D)$(SYSCONFIG_DIR)/xencommons
> > > >    rm -rf $(D)/var/run/xen* $(D)/var/lib/xen*
> > > >-    rm -rf $(D)/boot/*xen*
> > > 
> > > But removing this line without replacement isn't right either - we at least
> > > need to undo what "make install" did. That may imply adding an
> > > uninstall-xen sub-target,
> > 
> > Right, I totally forgot about the hypervisor itself!
> > 
> > Perhaps this target should include a
> > 	$(MAKE) -C xen uninstall
> > since that is the Makefile which knows how to undo its own install
> > target.
> 
> Like this, which handles EFI too but not (yet) tools.

Here is the tools part. This cleans up a superset of what was cleaned up
before this change, but it still leaves a lot of detritus. See the attached
install.before, uninstall.before & uninstall.after (install.after is
identical to install.before). That is a much bigger problem and probably
requires proper recursive subdir-uninstall rules for everything under
tools.
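The attached manifests are just sorted listings of the dist/install tree, so the leftover detritus is simply the intersection of the before and after lists. A minimal sketch of that comparison, using a tiny hypothetical stand-in tree (the paths below are illustrative, not the actual build output):

```shell
set -e

# Build a tiny stand-in for the dist/install tree (paths are illustrative)
mkdir -p demo/dist/install/usr/sbin demo/dist/install/var/log/xen
touch demo/dist/install/usr/sbin/xl demo/dist/install/var/log/xen/access.log

# Manifest of everything "make install" would have placed under dist/install
(cd demo && find dist/install | sort > install.before)

# Simulate an uninstall that removes the binary but misses the log directory
rm demo/dist/install/usr/sbin/xl
(cd demo && find dist/install | sort > uninstall.after)

# comm -12 keeps lines common to both sorted manifests: what uninstall left behind
(cd demo && comm -12 install.before uninstall.after)
```

Anything printed by the final command is a path that survived the uninstall; an empty result would mean the uninstall rules cover everything the install created.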

That broader rework is certainly a post-4.2 thing IMHO. I'm in two minds
about taking this patch for 4.2, but given that the regression happened
due to the switch to autoconf in 4.2 I think it might be good to take,
even though, as a percentage of what we install, the delta is pretty
insignificant.

8<---------------------------------------------------------

# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1345710001 -3600
# Node ID eaf499f0f7071fc0bfd84901babfd1ae18227ebb
# Parent  101956baa3469f5f338c661f1ceab23077bd432b
uninstall: push tools uninstall down into tools/Makefile

Many of the rules here depend on configure having been run and on the
variables it defines in config/Tools.mk.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

diff -r 101956baa346 -r eaf499f0f707 Makefile
--- a/Makefile	Thu Aug 23 08:49:44 2012 +0100
+++ b/Makefile	Thu Aug 23 09:20:01 2012 +0100
@@ -229,34 +229,7 @@ uninstall:
 	rm -f  $(D)$(SYSCONFIG_DIR)/xendomains
 	rm -f  $(D)$(SYSCONFIG_DIR)/xencommons
 	rm -rf $(D)/var/run/xen* $(D)/var/lib/xen*
-	rm -rf $(D)$(LIBDIR)/xen* $(D)$(BINDIR)/lomount
-	rm -rf $(D)$(BINDIR)/cpuperf-perfcntr $(D)$(BINDIR)/cpuperf-xen
-	rm -rf $(D)$(BINDIR)/xc_shadow
-	rm -rf $(D)$(BINDIR)/pygrub
-	rm -rf $(D)$(BINDIR)/setsize $(D)$(BINDIR)/tbctl
-	rm -rf $(D)$(BINDIR)/xsls
-	rm -rf $(D)$(BINDIR)/xenstore* $(D)$(BINDIR)/xentrace*
-	rm -rf $(D)$(BINDIR)/xen-detect $(D)$(BINDIR)/xencons
-	rm -rf $(D)$(BINDIR)/xenpvnetboot $(D)$(BINDIR)/qemu-*-xen
-	rm -rf $(D)$(INCLUDEDIR)/xenctrl* $(D)$(INCLUDEDIR)/xenguest.h
-	rm -rf $(D)$(INCLUDEDIR)/xs_lib.h $(D)$(INCLUDEDIR)/xs.h
-	rm -rf $(D)$(INCLUDEDIR)/xenstore-compat/xs_lib.h $(D)$(INCLUDEDIR)/xenstore-compat/xs.h
-	rm -rf $(D)$(INCLUDEDIR)/xenstore_lib.h $(D)$(INCLUDEDIR)/xenstore.h
-	rm -rf $(D)$(INCLUDEDIR)/xen
-	rm -rf $(D)$(INCLUDEDIR)/_libxl* $(D)$(INCLUDEDIR)/libxl*
-	rm -rf $(D)$(INCLUDEDIR)/xenstat.h $(D)$(INCLUDEDIR)/xentoollog.h
-	rm -rf $(D)$(LIBDIR)/libxenctrl* $(D)$(LIBDIR)/libxenguest*
-	rm -rf $(D)$(LIBDIR)/libxenstore* $(D)$(LIBDIR)/libxlutil*
-	rm -rf $(D)$(LIBDIR)/python/xen $(D)$(LIBDIR)/python/grub
-	rm -rf $(D)$(LIBDIR)/xen/
-	rm -rf $(D)$(LIBEXEC)/xen*
-	rm -rf $(D)$(SBINDIR)/setmask
-	rm -rf $(D)$(SBINDIR)/xen* $(D)$(SBINDIR)/netfix $(D)$(SBINDIR)/xm
-	rm -rf $(D)$(SHAREDIR)/doc/xen
-	rm -rf $(D)$(SHAREDIR)/xen
-	rm -rf $(D)$(SHAREDIR)/qemu-xen
-	rm -rf $(D)$(MAN1DIR)/xen*
-	rm -rf $(D)$(MAN8DIR)/xen*
+	$(MAKE) -C tools uninstall
 	rm -rf $(D)/boot/tboot*
 
 # Legacy targets for compatibility
diff -r 101956baa346 -r eaf499f0f707 tools/Makefile
--- a/tools/Makefile	Thu Aug 23 08:49:44 2012 +0100
+++ b/tools/Makefile	Thu Aug 23 09:20:01 2012 +0100
@@ -71,6 +71,38 @@ install: subdirs-install
 	$(INSTALL_DIR) $(DESTDIR)/var/lib/xen
 	$(INSTALL_DIR) $(DESTDIR)/var/lock/subsys
 
+.PHONY: uninstall
+uninstall: D=$(DESTDIR)
+uninstall:
+	rm -rf $(D)$(LIBDIR)/xen* $(D)$(BINDIR)/lomount
+	rm -rf $(D)$(BINDIR)/cpuperf-perfcntr $(D)$(BINDIR)/cpuperf-xen
+	rm -rf $(D)$(BINDIR)/xc_shadow
+	rm -rf $(D)$(BINDIR)/pygrub
+	rm -rf $(D)$(BINDIR)/setsize $(D)$(BINDIR)/tbctl
+	rm -rf $(D)$(BINDIR)/xsls
+	rm -rf $(D)$(BINDIR)/xenstore* $(D)$(BINDIR)/xentrace*
+	rm -rf $(D)$(BINDIR)/xen-detect $(D)$(BINDIR)/xencons
+	rm -rf $(D)$(BINDIR)/xenpvnetboot $(D)$(BINDIR)/qemu-*-xen
+	rm -rf $(D)$(INCLUDEDIR)/xenctrl* $(D)$(INCLUDEDIR)/xenguest.h
+	rm -rf $(D)$(INCLUDEDIR)/xs_lib.h $(D)$(INCLUDEDIR)/xs.h
+	rm -rf $(D)$(INCLUDEDIR)/xenstore-compat/xs_lib.h $(D)$(INCLUDEDIR)/xenstore-compat/xs.h
+	rm -rf $(D)$(INCLUDEDIR)/xenstore_lib.h $(D)$(INCLUDEDIR)/xenstore.h
+	rm -rf $(D)$(INCLUDEDIR)/xen
+	rm -rf $(D)$(INCLUDEDIR)/_libxl* $(D)$(INCLUDEDIR)/libxl*
+	rm -rf $(D)$(INCLUDEDIR)/xenstat.h $(D)$(INCLUDEDIR)/xentoollog.h
+	rm -rf $(D)$(LIBDIR)/libxenctrl* $(D)$(LIBDIR)/libxenguest*
+	rm -rf $(D)$(LIBDIR)/libxenstore* $(D)$(LIBDIR)/libxlutil*
+	rm -rf $(D)$(LIBDIR)/python/xen $(D)$(LIBDIR)/python/grub
+	rm -rf $(D)$(LIBDIR)/xen/
+	rm -rf $(D)$(LIBEXEC)/xen*
+	rm -rf $(D)$(SBINDIR)/setmask
+	rm -rf $(D)$(SBINDIR)/xen* $(D)$(SBINDIR)/netfix $(D)$(SBINDIR)/xm
+	rm -rf $(D)$(SHAREDIR)/doc/xen
+	rm -rf $(D)$(SHAREDIR)/xen
+	rm -rf $(D)$(SHAREDIR)/qemu-xen
+	rm -rf $(D)$(MAN1DIR)/xen*
+	rm -rf $(D)$(MAN8DIR)/xen*
+
 .PHONY: clean
 clean: subdirs-clean
 


--=-Op15fjVHnxmurhd6MvPI
Content-Disposition: attachment; filename="install.before"
Content-Type: text/plain; name="install.before"; charset="UTF-8"
Content-Transfer-Encoding: 7bit

dist/
dist/install
dist/install/etc
dist/install/etc/bash_completion.d
dist/install/etc/bash_completion.d/xl.sh
dist/install/etc/default
dist/install/etc/default/xencommons
dist/install/etc/default/xendomains
dist/install/etc/hotplug
dist/install/etc/hotplug/xen-backend.agent
dist/install/etc/init.d
dist/install/etc/init.d/xencommons
dist/install/etc/init.d/xend
dist/install/etc/init.d/xendomains
dist/install/etc/init.d/xen-watchdog
dist/install/etc/udev
dist/install/etc/udev/rules.d
dist/install/etc/udev/rules.d/xen-backend.rules
dist/install/etc/udev/rules.d/xend.rules
dist/install/etc/xen
dist/install/etc/xen/auto
dist/install/etc/xen/cpupool
dist/install/etc/xen/oxenstored.conf
dist/install/etc/xen/README
dist/install/etc/xen/README.incompatibilities
dist/install/etc/xen/scripts
dist/install/etc/xen/scripts/blktap
dist/install/etc/xen/scripts/block
dist/install/etc/xen/scripts/block-common.sh
dist/install/etc/xen/scripts/block-enbd
dist/install/etc/xen/scripts/block-nbd
dist/install/etc/xen/scripts/external-device-migrate
dist/install/etc/xen/scripts/hotplugpath.sh
dist/install/etc/xen/scripts/locking.sh
dist/install/etc/xen/scripts/logging.sh
dist/install/etc/xen/scripts/network-bridge
dist/install/etc/xen/scripts/network-nat
dist/install/etc/xen/scripts/network-route
dist/install/etc/xen/scripts/qemu-ifup
dist/install/etc/xen/scripts/vif2
dist/install/etc/xen/scripts/vif-bridge
dist/install/etc/xen/scripts/vif-common.sh
dist/install/etc/xen/scripts/vif-nat
dist/install/etc/xen/scripts/vif-route
dist/install/etc/xen/scripts/vif-setup
dist/install/etc/xen/scripts/vscsi
dist/install/etc/xen/scripts/vtpm
dist/install/etc/xen/scripts/vtpm-common.sh
dist/install/etc/xen/scripts/vtpm-delete
dist/install/etc/xen/scripts/vtpm-hotplug-common.sh
dist/install/etc/xen/scripts/vtpm-impl
dist/install/etc/xen/scripts/vtpm-migration.sh
dist/install/etc/xen/scripts/xen-hotplug-cleanup
dist/install/etc/xen/scripts/xen-hotplug-common.sh
dist/install/etc/xen/scripts/xen-network-common.sh
dist/install/etc/xen/scripts/xen-script-common.sh
dist/install/etc/xen/xend-config.sxp
dist/install/etc/xen/xend-pci-permissive.sxp
dist/install/etc/xen/xend-pci-quirks.sxp
dist/install/etc/xen/xl.conf
dist/install/etc/xen/xlexample.hvm
dist/install/etc/xen/xlexample.pvlinux
dist/install/etc/xen/xm-config.xml
dist/install/etc/xen/xmexample1
dist/install/etc/xen/xmexample2
dist/install/etc/xen/xmexample3
dist/install/etc/xen/xmexample.hvm
dist/install/etc/xen/xmexample.hvm-stubdom
dist/install/etc/xen/xmexample.nbd
dist/install/etc/xen/xmexample.pv-grub
dist/install/etc/xen/xmexample.vti
dist/install/usr
dist/install/usr/bin
dist/install/usr/bin/pygrub
dist/install/usr/bin/qemu-img-xen
dist/install/usr/bin/qemu-nbd-xen
dist/install/usr/bin/remus
dist/install/usr/bin/xencons
dist/install/usr/bin/xen-detect
dist/install/usr/bin/xenstore
dist/install/usr/bin/xenstore-chmod
dist/install/usr/bin/xenstore-control
dist/install/usr/bin/xenstore-exists
dist/install/usr/bin/xenstore-list
dist/install/usr/bin/xenstore-ls
dist/install/usr/bin/xenstore-read
dist/install/usr/bin/xenstore-rm
dist/install/usr/bin/xenstore-watch
dist/install/usr/bin/xenstore-write
dist/install/usr/bin/xentrace
dist/install/usr/bin/xentrace_format
dist/install/usr/bin/xentrace_setsize
dist/install/usr/include
dist/install/usr/include/blktaplib.h
dist/install/usr/include/fsimage_grub.h
dist/install/usr/include/fsimage.h
dist/install/usr/include/fsimage_plugin.h
dist/install/usr/include/libxenvchan.h
dist/install/usr/include/libxl_event.h
dist/install/usr/include/libxl.h
dist/install/usr/include/libxl_json.h
dist/install/usr/include/_libxl_list.h
dist/install/usr/include/_libxl_types.h
dist/install/usr/include/_libxl_types_json.h
dist/install/usr/include/libxl_utils.h
dist/install/usr/include/libxl_uuid.h
dist/install/usr/include/xen
dist/install/usr/include/xen/arch-arm.h
dist/install/usr/include/xen/arch-ia64
dist/install/usr/include/xen/arch-ia64/debug_op.h
dist/install/usr/include/xen/arch-ia64.h
dist/install/usr/include/xen/arch-ia64/hvm
dist/install/usr/include/xen/arch-ia64/hvm/memmap.h
dist/install/usr/include/xen/arch-ia64/hvm/save.h
dist/install/usr/include/xen/arch-ia64/sioemu.h
dist/install/usr/include/xen/arch-x86
dist/install/usr/include/xen/arch-x86_32.h
dist/install/usr/include/xen/arch-x86_64.h
dist/install/usr/include/xen/arch-x86/cpuid.h
dist/install/usr/include/xen/arch-x86/hvm
dist/install/usr/include/xen/arch-x86/hvm/save.h
dist/install/usr/include/xen/arch-x86/xen.h
dist/install/usr/include/xen/arch-x86/xen-mca.h
dist/install/usr/include/xen/arch-x86/xen-x86_32.h
dist/install/usr/include/xen/arch-x86/xen-x86_64.h
dist/install/usr/include/xen/callback.h
dist/install/usr/include/xen/COPYING
dist/install/usr/include/xenctrl.h
dist/install/usr/include/xenctrlosdep.h
dist/install/usr/include/xen/dom0_ops.h
dist/install/usr/include/xen/domctl.h
dist/install/usr/include/xen/elfnote.h
dist/install/usr/include/xen/event_channel.h
dist/install/usr/include/xen/features.h
dist/install/usr/include/xen/foreign
dist/install/usr/include/xen/foreign/ia64.h
dist/install/usr/include/xen/foreign/x86_32.h
dist/install/usr/include/xen/foreign/x86_64.h
dist/install/usr/include/xen/grant_table.h
dist/install/usr/include/xenguest.h
dist/install/usr/include/xen/hvm
dist/install/usr/include/xen/hvm/e820.h
dist/install/usr/include/xen/hvm/hvm_info_table.h
dist/install/usr/include/xen/hvm/hvm_op.h
dist/install/usr/include/xen/hvm/ioreq.h
dist/install/usr/include/xen/hvm/params.h
dist/install/usr/include/xen/hvm/save.h
dist/install/usr/include/xen/io
dist/install/usr/include/xen/io/blkif.h
dist/install/usr/include/xen/io/console.h
dist/install/usr/include/xen/io/fbif.h
dist/install/usr/include/xen/io/fsif.h
dist/install/usr/include/xen/io/kbdif.h
dist/install/usr/include/xen/io/libxenvchan.h
dist/install/usr/include/xen/io/netif.h
dist/install/usr/include/xen/io/pciif.h
dist/install/usr/include/xen/io/protocols.h
dist/install/usr/include/xen/io/ring.h
dist/install/usr/include/xen/io/tpmif.h
dist/install/usr/include/xen/io/usbif.h
dist/install/usr/include/xen/io/vscsiif.h
dist/install/usr/include/xen/io/xenbus.h
dist/install/usr/include/xen/io/xs_wire.h
dist/install/usr/include/xen/kexec.h
dist/install/usr/include/xen/mem_event.h
dist/install/usr/include/xen/memory.h
dist/install/usr/include/xen/nmi.h
dist/install/usr/include/xen/physdev.h
dist/install/usr/include/xen/platform.h
dist/install/usr/include/xen/sched.h
dist/install/usr/include/xenstat.h
dist/install/usr/include/xenstore-compat
dist/install/usr/include/xenstore-compat/xs.h
dist/install/usr/include/xenstore-compat/xs_lib.h
dist/install/usr/include/xenstore.h
dist/install/usr/include/xenstore_lib.h
dist/install/usr/include/xen/sys
dist/install/usr/include/xen/sysctl.h
dist/install/usr/include/xen/sys/evtchn.h
dist/install/usr/include/xen/sys/gntalloc.h
dist/install/usr/include/xen/sys/gntdev.h
dist/install/usr/include/xen/sys/privcmd.h
dist/install/usr/include/xen/sys/xenbus_dev.h
dist/install/usr/include/xen/tmem.h
dist/install/usr/include/xentoollog.h
dist/install/usr/include/xen/trace.h
dist/install/usr/include/xen/vcpu.h
dist/install/usr/include/xen/version.h
dist/install/usr/include/xen/xencomm.h
dist/install/usr/include/xen/xen-compat.h
dist/install/usr/include/xen/xen.h
dist/install/usr/include/xen/xenoprof.h
dist/install/usr/include/xen/xsm
dist/install/usr/include/xen/xsm/flask_op.h
dist/install/usr/include/xs.h
dist/install/usr/include/xs_lib.h
dist/install/usr/lib
dist/install/usr/lib/fs
dist/install/usr/lib/fs/ext2fs
dist/install/usr/lib/fs/ext2fs/fsimage.so
dist/install/usr/lib/fs/fat
dist/install/usr/lib/fs/fat/fsimage.so
dist/install/usr/lib/fs/iso9660
dist/install/usr/lib/fs/iso9660/fsimage.so
dist/install/usr/lib/fs/reiserfs
dist/install/usr/lib/fs/reiserfs/fsimage.so
dist/install/usr/lib/fs/ufs
dist/install/usr/lib/fs/ufs/fsimage.so
dist/install/usr/lib/fs/xfs
dist/install/usr/lib/fs/xfs/fsimage.so
dist/install/usr/lib/fs/zfs
dist/install/usr/lib/fs/zfs/fsimage.so
dist/install/usr/lib/libblktap.a
dist/install/usr/lib/libblktapctl.a
dist/install/usr/lib/libblktapctl.so
dist/install/usr/lib/libblktapctl.so.1.0
dist/install/usr/lib/libblktapctl.so.1.0.0
dist/install/usr/lib/libblktap.so
dist/install/usr/lib/libblktap.so.3.0
dist/install/usr/lib/libblktap.so.3.0.0
dist/install/usr/lib/libfsimage.so
dist/install/usr/lib/libfsimage.so.1.0
dist/install/usr/lib/libfsimage.so.1.0.0
dist/install/usr/lib/libvhd.a
dist/install/usr/lib/libvhd.so
dist/install/usr/lib/libvhd.so.1.0
dist/install/usr/lib/libvhd.so.1.0.0
dist/install/usr/lib/libxenctrl.a
dist/install/usr/lib/libxenctrl.so
dist/install/usr/lib/libxenctrl.so.4.2
dist/install/usr/lib/libxenctrl.so.4.2.0
dist/install/usr/lib/libxenguest.a
dist/install/usr/lib/libxenguest.so
dist/install/usr/lib/libxenguest.so.4.2
dist/install/usr/lib/libxenguest.so.4.2.0
dist/install/usr/lib/libxenlight.a
dist/install/usr/lib/libxenlight.so
dist/install/usr/lib/libxenlight.so.2.0
dist/install/usr/lib/libxenlight.so.2.0.0
dist/install/usr/lib/libxenstat.a
dist/install/usr/lib/libxenstat.so
dist/install/usr/lib/libxenstat.so.0
dist/install/usr/lib/libxenstat.so.0.0
dist/install/usr/lib/libxenstore.a
dist/install/usr/lib/libxenstore.so
dist/install/usr/lib/libxenstore.so.3.0
dist/install/usr/lib/libxenstore.so.3.0.1
dist/install/usr/lib/libxenvchan.a
dist/install/usr/lib/libxenvchan.so
dist/install/usr/lib/libxenvchan.so.1.0
dist/install/usr/lib/libxenvchan.so.1.0.0
dist/install/usr/lib/libxlutil.a
dist/install/usr/lib/libxlutil.so
dist/install/usr/lib/libxlutil.so.1.0
dist/install/usr/lib/libxlutil.so.1.0.0
dist/install/usr/lib/python2.6
dist/install/usr/lib/python2.6/dist-packages
dist/install/usr/lib/python2.6/dist-packages/fsimage.so
dist/install/usr/lib/python2.6/dist-packages/grub
dist/install/usr/lib/python2.6/dist-packages/grub/ExtLinuxConf.py
dist/install/usr/lib/python2.6/dist-packages/grub/ExtLinuxConf.pyc
dist/install/usr/lib/python2.6/dist-packages/grub/GrubConf.py
dist/install/usr/lib/python2.6/dist-packages/grub/GrubConf.pyc
dist/install/usr/lib/python2.6/dist-packages/grub/__init__.py
dist/install/usr/lib/python2.6/dist-packages/grub/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/grub/LiloConf.py
dist/install/usr/lib/python2.6/dist-packages/grub/LiloConf.pyc
dist/install/usr/lib/python2.6/dist-packages/pygrub-0.3.egg-info
dist/install/usr/lib/python2.6/dist-packages/xen
dist/install/usr/lib/python2.6/dist-packages/xen-3.0.egg-info
dist/install/usr/lib/python2.6/dist-packages/xen/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/lowlevel
dist/install/usr/lib/python2.6/dist-packages/xen/lowlevel/checkpoint.so
dist/install/usr/lib/python2.6/dist-packages/xen/lowlevel/flask.so
dist/install/usr/lib/python2.6/dist-packages/xen/lowlevel/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/lowlevel/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/lowlevel/netlink.so
dist/install/usr/lib/python2.6/dist-packages/xen/lowlevel/ptsname.so
dist/install/usr/lib/python2.6/dist-packages/xen/lowlevel/xc.so
dist/install/usr/lib/python2.6/dist-packages/xen/lowlevel/xs.so
dist/install/usr/lib/python2.6/dist-packages/xen/remus
dist/install/usr/lib/python2.6/dist-packages/xen/remus/blkdev.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/blkdev.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/device.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/device.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/image.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/image.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/netlink.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/netlink.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/profile.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/profile.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/qdisc.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/qdisc.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/save.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/save.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/tapdisk.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/tapdisk.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/util.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/util.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/vbd.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/vbd.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/vdi.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/vdi.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/vif.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/vif.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/vm.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/vm.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/sv
dist/install/usr/lib/python2.6/dist-packages/xen/sv/CreateDomain.py
dist/install/usr/lib/python2.6/dist-packages/xen/sv/CreateDomain.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/sv/DomInfo.py
dist/install/usr/lib/python2.6/dist-packages/xen/sv/DomInfo.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/sv/GenTabbed.py
dist/install/usr/lib/python2.6/dist-packages/xen/sv/GenTabbed.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/sv/HTMLBase.py
dist/install/usr/lib/python2.6/dist-packages/xen/sv/HTMLBase.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/sv/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/sv/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/sv/Main.py
dist/install/usr/lib/python2.6/dist-packages/xen/sv/Main.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/sv/NodeInfo.py
dist/install/usr/lib/python2.6/dist-packages/xen/sv/NodeInfo.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/sv/RestoreDomain.py
dist/install/usr/lib/python2.6/dist-packages/xen/sv/RestoreDomain.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/sv/util.py
dist/install/usr/lib/python2.6/dist-packages/xen/sv/util.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/sv/Wizard.py
dist/install/usr/lib/python2.6/dist-packages/xen/sv/Wizard.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util
dist/install/usr/lib/python2.6/dist-packages/xen/util/acmpolicy.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/acmpolicy.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/asserts.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/asserts.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/auxbin.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/auxbin.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/blkif.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/blkif.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/bootloader.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/bootloader.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/Brctl.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/Brctl.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/bugtool.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/bugtool.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/diagnose.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/diagnose.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/dictio.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/dictio.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/fileuri.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/fileuri.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/ip.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/ip.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/mac.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/mac.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/mkdir.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/mkdir.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/oshelp.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/oshelp.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/path.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/path.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/pci.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/pci.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/rwlock.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/rwlock.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/SSHTransport.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/SSHTransport.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/sxputils.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/sxputils.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/utils.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/utils.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/vscsi_util.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/vscsi_util.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/vusb_util.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/vusb_util.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xmlrpcclient.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xmlrpcclient.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xmlrpclib2.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xmlrpclib2.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xpopen.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xpopen.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsconstants.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsconstants.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/acm
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/acm/acm.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/acm/acm.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/acm/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/acm/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/dummy
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/dummy/dummy.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/dummy/dummy.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/dummy/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/dummy/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/flask
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/flask/flask.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/flask/flask.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/flask/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/flask/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/xsm_core.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/xsm_core.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/xsm.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/xsm.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xspolicy.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xspolicy.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/web
dist/install/usr/lib/python2.6/dist-packages/xen/web/connection.py
dist/install/usr/lib/python2.6/dist-packages/xen/web/connection.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/web/http.py
dist/install/usr/lib/python2.6/dist-packages/xen/web/http.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/web/httpserver.py
dist/install/usr/lib/python2.6/dist-packages/xen/web/httpserver.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/web/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/web/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/web/protocol.py
dist/install/usr/lib/python2.6/dist-packages/xen/web/protocol.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/web/resource.py
dist/install/usr/lib/python2.6/dist-packages/xen/web/resource.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/web/SrvBase.py
dist/install/usr/lib/python2.6/dist-packages/xen/web/SrvBase.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/web/SrvDir.py
dist/install/usr/lib/python2.6/dist-packages/xen/web/SrvDir.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/web/static.py
dist/install/usr/lib/python2.6/dist-packages/xen/web/static.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/web/tcp.py
dist/install/usr/lib/python2.6/dist-packages/xen/web/tcp.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/web/unix.py
dist/install/usr/lib/python2.6/dist-packages/xen/web/unix.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend
dist/install/usr/lib/python2.6/dist-packages/xen/xend/arch.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/arch.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/Args.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/Args.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/balloon.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/balloon.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/encode.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/encode.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/image.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/image.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/MemoryPool.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/MemoryPool.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/osdep.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/osdep.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/PrettyPrint.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/PrettyPrint.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/blkif.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/blkif.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/BlktapController.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/BlktapController.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/ConsoleController.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/ConsoleController.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/DevConstants.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/DevConstants.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/DevController.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/DevController.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/iopif.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/iopif.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/irqif.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/irqif.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/netif2.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/netif2.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/netif.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/netif.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/params.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/params.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/pciif.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/pciif.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/pciquirk.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/pciquirk.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/relocate.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/relocate.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvDaemon.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvDaemon.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvDmesg.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvDmesg.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvDomainDir.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvDomainDir.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvDomain.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvDomain.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvNode.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvNode.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvRoot.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvRoot.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvServer.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvServer.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvVnetDir.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvVnetDir.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvXendLog.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvXendLog.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SSLXMLRPCServer.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SSLXMLRPCServer.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/tests
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/tests/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/tests/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/tests/test_controllers.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/tests/test_controllers.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/tpmif.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/tpmif.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/udevevent.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/udevevent.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/vfbif.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/vfbif.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/vscsiif.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/vscsiif.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/vusbif.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/vusbif.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/XMLRPCServer.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/XMLRPCServer.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/sxp.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/sxp.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/tests
dist/install/usr/lib/python2.6/dist-packages/xen/xend/tests/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/tests/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/tests/test_sxp.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/tests/test_sxp.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/tests/test_uuid.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/tests/test_uuid.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/tests/test_XendConfig.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/tests/test_XendConfig.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/uuid.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/uuid.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/Vifctl.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/Vifctl.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendAPIConstants.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendAPIConstants.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendAPI.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendAPI.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendAPIStore.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendAPIStore.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendAPIVersion.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendAPIVersion.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendAuthSessions.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendAuthSessions.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendBase.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendBase.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendBootloader.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendBootloader.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendCheckpoint.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendCheckpoint.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendClient.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendClient.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendConfig.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendConfig.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendConstants.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendConstants.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendCPUPool.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendCPUPool.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDevices.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDevices.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDmesg.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDmesg.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDomainInfo.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDomainInfo.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDomain.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDomain.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDPCI.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDPCI.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDSCSI.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDSCSI.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendError.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendError.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendLocalStorageRepo.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendLocalStorageRepo.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendLogging.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendLogging.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendMonitor.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendMonitor.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendNetwork.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendNetwork.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendNode.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendNode.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendOptions.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendOptions.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendPBD.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendPBD.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendPIFMetrics.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendPIFMetrics.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendPIF.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendPIF.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendPPCI.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendPPCI.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendProtocol.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendProtocol.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendPSCSI.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendPSCSI.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendQCoWStorageRepo.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendQCoWStorageRepo.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendStateStore.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendStateStore.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendStorageRepository.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendStorageRepository.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendSXPDev.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendSXPDev.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendTaskManager.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendTaskManager.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendTask.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendTask.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendVDI.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendVDI.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendVMMetrics.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendVMMetrics.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendVnet.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendVnet.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendXSPolicyAdmin.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendXSPolicyAdmin.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendXSPolicy.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendXSPolicy.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/tests
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/tests/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/tests/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/tests/stress_xs.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/tests/stress_xs.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/xstransact.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/xstransact.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/xsutil.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/xsutil.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/xswatch.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/xswatch.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm
dist/install/usr/lib/python2.6/dist-packages/xen/xm/addlabel.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/addlabel.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/console.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/console.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/cpupool-create.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/cpupool-create.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/cpupool-new.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/cpupool-new.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/cpupool.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/cpupool.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/create.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/create.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/dry-run.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/dry-run.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/dumppolicy.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/dumppolicy.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/getenforce.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/getenforce.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/getlabel.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/getlabel.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/getpolicy.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/getpolicy.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/help.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/help.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/labels.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/labels.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/main.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/main.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/migrate.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/migrate.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/new.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/new.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/opts.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/opts.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/resetpolicy.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/resetpolicy.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/resources.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/resources.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/rmlabel.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/rmlabel.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/setenforce.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/setenforce.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/setpolicy.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/setpolicy.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/shutdown.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/shutdown.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/tests
dist/install/usr/lib/python2.6/dist-packages/xen/xm/tests/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/tests/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/tests/test_create.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/tests/test_create.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/xenapi_create.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/xenapi_create.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/XenAPI.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/XenAPI.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xsview
dist/install/usr/lib/python2.6/dist-packages/xen/xsview/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/xsview/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xsview/main.py
dist/install/usr/lib/python2.6/dist-packages/xen/xsview/main.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xsview/xsviewer.py
dist/install/usr/lib/python2.6/dist-packages/xen/xsview/xsviewer.pyc
dist/install/usr/lib/xen
dist/install/usr/lib/xen/bin
dist/install/usr/lib/xen/bin/libxl-save-helper
dist/install/usr/lib/xen/bin/lsevtchn
dist/install/usr/lib/xen/bin/pygrub
dist/install/usr/lib/xen/bin/qemu-dm
dist/install/usr/lib/xen/bin/qemu-ga
dist/install/usr/lib/xen/bin/qemu-img
dist/install/usr/lib/xen/bin/qemu-io
dist/install/usr/lib/xen/bin/qemu-nbd
dist/install/usr/lib/xen/bin/qemu-system-i386
dist/install/usr/lib/xen/bin/readnotes
dist/install/usr/lib/xen/bin/xc_restore
dist/install/usr/lib/xen/bin/xc_save
dist/install/usr/lib/xen/bin/xenconsole
dist/install/usr/lib/xen/bin/xenctx
dist/install/usr/lib/xen/bin/xenpaging
dist/install/usr/lib/xen/bin/xenpvnetboot
dist/install/usr/lib/xen/boot
dist/install/usr/lib/xen/boot/hvmloader
dist/install/usr/local
dist/install/usr/local/etc
dist/install/usr/local/etc/qemu
dist/install/usr/local/etc/qemu/target-x86_64.conf
dist/install/usr/local/lib
dist/install/usr/local/lib/ocaml
dist/install/usr/local/lib/ocaml/3.11.2
dist/install/usr/local/lib/ocaml/3.11.2/xenbus
dist/install/usr/local/lib/ocaml/3.11.2/xenbus/dllxenbus_stubs.so
dist/install/usr/local/lib/ocaml/3.11.2/xenbus/libxenbus_stubs.a
dist/install/usr/local/lib/ocaml/3.11.2/xenbus/META
dist/install/usr/local/lib/ocaml/3.11.2/xenbus/xenbus.a
dist/install/usr/local/lib/ocaml/3.11.2/xenbus/xenbus.cma
dist/install/usr/local/lib/ocaml/3.11.2/xenbus/xenbus.cmi
dist/install/usr/local/lib/ocaml/3.11.2/xenbus/xenbus.cmo
dist/install/usr/local/lib/ocaml/3.11.2/xenbus/xenbus.cmx
dist/install/usr/local/lib/ocaml/3.11.2/xenbus/xenbus.cmxa
dist/install/usr/local/lib/ocaml/3.11.2/xenctrl
dist/install/usr/local/lib/ocaml/3.11.2/xenctrl/dllxenctrl_stubs.so
dist/install/usr/local/lib/ocaml/3.11.2/xenctrl/libxenctrl_stubs.a
dist/install/usr/local/lib/ocaml/3.11.2/xenctrl/META
dist/install/usr/local/lib/ocaml/3.11.2/xenctrl/xenctrl.a
dist/install/usr/local/lib/ocaml/3.11.2/xenctrl/xenctrl.cma
dist/install/usr/local/lib/ocaml/3.11.2/xenctrl/xenctrl.cmi
dist/install/usr/local/lib/ocaml/3.11.2/xenctrl/xenctrl.cmx
dist/install/usr/local/lib/ocaml/3.11.2/xenctrl/xenctrl.cmxa
dist/install/usr/local/lib/ocaml/3.11.2/xeneventchn
dist/install/usr/local/lib/ocaml/3.11.2/xeneventchn/dllxeneventchn_stubs.so
dist/install/usr/local/lib/ocaml/3.11.2/xeneventchn/libxeneventchn_stubs.a
dist/install/usr/local/lib/ocaml/3.11.2/xeneventchn/META
dist/install/usr/local/lib/ocaml/3.11.2/xeneventchn/xeneventchn.a
dist/install/usr/local/lib/ocaml/3.11.2/xeneventchn/xeneventchn.cma
dist/install/usr/local/lib/ocaml/3.11.2/xeneventchn/xeneventchn.cmi
dist/install/usr/local/lib/ocaml/3.11.2/xeneventchn/xeneventchn.cmx
dist/install/usr/local/lib/ocaml/3.11.2/xeneventchn/xeneventchn.cmxa
dist/install/usr/local/lib/ocaml/3.11.2/xenlight
dist/install/usr/local/lib/ocaml/3.11.2/xenlight/dllxenlight_stubs.so
dist/install/usr/local/lib/ocaml/3.11.2/xenlight/libxenlight_stubs.a
dist/install/usr/local/lib/ocaml/3.11.2/xenlight/META
dist/install/usr/local/lib/ocaml/3.11.2/xenlight/xenlight.a
dist/install/usr/local/lib/ocaml/3.11.2/xenlight/xenlight.cma
dist/install/usr/local/lib/ocaml/3.11.2/xenlight/xenlight.cmi
dist/install/usr/local/lib/ocaml/3.11.2/xenlight/xenlight.cmx
dist/install/usr/local/lib/ocaml/3.11.2/xenlight/xenlight.cmxa
dist/install/usr/local/lib/ocaml/3.11.2/xenmmap
dist/install/usr/local/lib/ocaml/3.11.2/xenmmap/dllxenmmap_stubs.so
dist/install/usr/local/lib/ocaml/3.11.2/xenmmap/libxenmmap_stubs.a
dist/install/usr/local/lib/ocaml/3.11.2/xenmmap/META
dist/install/usr/local/lib/ocaml/3.11.2/xenmmap/xenmmap.a
dist/install/usr/local/lib/ocaml/3.11.2/xenmmap/xenmmap.cma
dist/install/usr/local/lib/ocaml/3.11.2/xenmmap/xenmmap.cmi
dist/install/usr/local/lib/ocaml/3.11.2/xenmmap/xenmmap.cmx
dist/install/usr/local/lib/ocaml/3.11.2/xenmmap/xenmmap.cmxa
dist/install/usr/local/lib/ocaml/3.11.2/xenstore
dist/install/usr/local/lib/ocaml/3.11.2/xenstore/META
dist/install/usr/local/lib/ocaml/3.11.2/xenstore/xenstore.a
dist/install/usr/local/lib/ocaml/3.11.2/xenstore/xenstore.cma
dist/install/usr/local/lib/ocaml/3.11.2/xenstore/xenstore.cmi
dist/install/usr/local/lib/ocaml/3.11.2/xenstore/xenstore.cmo
dist/install/usr/local/lib/ocaml/3.11.2/xenstore/xenstore.cmx
dist/install/usr/local/lib/ocaml/3.11.2/xenstore/xenstore.cmxa
dist/install/usr/local/share
dist/install/usr/local/share/doc
dist/install/usr/local/share/doc/qemu
dist/install/usr/local/share/doc/qemu/qemu-doc.html
dist/install/usr/local/share/doc/qemu/qemu-tech.html
dist/install/usr/local/share/man
dist/install/usr/local/share/man/man1
dist/install/usr/local/share/man/man1/qemu.1
dist/install/usr/local/share/man/man1/qemu-img.1
dist/install/usr/local/share/man/man8
dist/install/usr/local/share/man/man8/qemu-nbd.8
dist/install/usr/sbin
dist/install/usr/sbin/blktapctrl
dist/install/usr/sbin/flask-get-bool
dist/install/usr/sbin/flask-getenforce
dist/install/usr/sbin/flask-label-pci
dist/install/usr/sbin/flask-loadpolicy
dist/install/usr/sbin/flask-set-bool
dist/install/usr/sbin/flask-setenforce
dist/install/usr/sbin/gdbsx
dist/install/usr/sbin/gtracestat
dist/install/usr/sbin/gtraceview
dist/install/usr/sbin/img2qcow
dist/install/usr/sbin/kdd
dist/install/usr/sbin/lock-util
dist/install/usr/sbin/oxenstored
dist/install/usr/sbin/qcow2raw
dist/install/usr/sbin/qcow-create
dist/install/usr/sbin/tap-ctl
dist/install/usr/sbin/tapdisk
dist/install/usr/sbin/tapdisk2
dist/install/usr/sbin/tapdisk-client
dist/install/usr/sbin/tapdisk-diff
dist/install/usr/sbin/tapdisk-stream
dist/install/usr/sbin/td-util
dist/install/usr/sbin/vhd-update
dist/install/usr/sbin/vhd-util
dist/install/usr/sbin/xenbaked
dist/install/usr/sbin/xen-bugtool
dist/install/usr/sbin/xenconsoled
dist/install/usr/sbin/xend
dist/install/usr/sbin/xen-hptool
dist/install/usr/sbin/xen-hvmcrash
dist/install/usr/sbin/xen-hvmctx
dist/install/usr/sbin/xenlockprof
dist/install/usr/sbin/xen-lowmemd
dist/install/usr/sbin/xenmon.py
dist/install/usr/sbin/xenperf
dist/install/usr/sbin/xenpm
dist/install/usr/sbin/xenpmd
dist/install/usr/sbin/xen-python-path
dist/install/usr/sbin/xen-ringwatch
dist/install/usr/sbin/xenstored
dist/install/usr/sbin/xen-tmem-list-parse
dist/install/usr/sbin/xentop
dist/install/usr/sbin/xentrace_setmask
dist/install/usr/sbin/xenwatchdogd
dist/install/usr/sbin/xl
dist/install/usr/sbin/xm
dist/install/usr/sbin/xsview
dist/install/usr/share
dist/install/usr/share/doc
dist/install/usr/share/doc/xen
dist/install/usr/share/doc/xen/qemu
dist/install/usr/share/doc/xen/qemu/qemu-doc.html
dist/install/usr/share/doc/xen/qemu/qemu-tech.html
dist/install/usr/share/doc/xen/README.blktap
dist/install/usr/share/doc/xen/README.xenmon
dist/install/usr/share/man
dist/install/usr/share/man/man1
dist/install/usr/share/man/man1/xentop.1
dist/install/usr/share/man/man1/xentrace_format.1
dist/install/usr/share/man/man8
dist/install/usr/share/man/man8/xentrace.8
dist/install/usr/share/qemu-xen
dist/install/usr/share/qemu-xen/bamboo.dtb
dist/install/usr/share/qemu-xen/bios.bin
dist/install/usr/share/qemu-xen/keymaps
dist/install/usr/share/qemu-xen/keymaps/ar
dist/install/usr/share/qemu-xen/keymaps/common
dist/install/usr/share/qemu-xen/keymaps/da
dist/install/usr/share/qemu-xen/keymaps/de
dist/install/usr/share/qemu-xen/keymaps/de-ch
dist/install/usr/share/qemu-xen/keymaps/en-gb
dist/install/usr/share/qemu-xen/keymaps/en-us
dist/install/usr/share/qemu-xen/keymaps/es
dist/install/usr/share/qemu-xen/keymaps/et
dist/install/usr/share/qemu-xen/keymaps/fi
dist/install/usr/share/qemu-xen/keymaps/fo
dist/install/usr/share/qemu-xen/keymaps/fr
dist/install/usr/share/qemu-xen/keymaps/fr-be
dist/install/usr/share/qemu-xen/keymaps/fr-ca
dist/install/usr/share/qemu-xen/keymaps/fr-ch
dist/install/usr/share/qemu-xen/keymaps/hr
dist/install/usr/share/qemu-xen/keymaps/hu
dist/install/usr/share/qemu-xen/keymaps/is
dist/install/usr/share/qemu-xen/keymaps/it
dist/install/usr/share/qemu-xen/keymaps/ja
dist/install/usr/share/qemu-xen/keymaps/lt
dist/install/usr/share/qemu-xen/keymaps/lv
dist/install/usr/share/qemu-xen/keymaps/mk
dist/install/usr/share/qemu-xen/keymaps/modifiers
dist/install/usr/share/qemu-xen/keymaps/nl
dist/install/usr/share/qemu-xen/keymaps/nl-be
dist/install/usr/share/qemu-xen/keymaps/no
dist/install/usr/share/qemu-xen/keymaps/pl
dist/install/usr/share/qemu-xen/keymaps/pt
dist/install/usr/share/qemu-xen/keymaps/pt-br
dist/install/usr/share/qemu-xen/keymaps/ru
dist/install/usr/share/qemu-xen/keymaps/sl
dist/install/usr/share/qemu-xen/keymaps/sv
dist/install/usr/share/qemu-xen/keymaps/th
dist/install/usr/share/qemu-xen/keymaps/tr
dist/install/usr/share/qemu-xen/linuxboot.bin
dist/install/usr/share/qemu-xen/mpc8544ds.dtb
dist/install/usr/share/qemu-xen/multiboot.bin
dist/install/usr/share/qemu-xen/openbios-ppc
dist/install/usr/share/qemu-xen/openbios-sparc32
dist/install/usr/share/qemu-xen/openbios-sparc64
dist/install/usr/share/qemu-xen/palcode-clipper
dist/install/usr/share/qemu-xen/petalogix-ml605.dtb
dist/install/usr/share/qemu-xen/petalogix-s3adsp1800.dtb
dist/install/usr/share/qemu-xen/ppc_rom.bin
dist/install/usr/share/qemu-xen/pxe-e1000.rom
dist/install/usr/share/qemu-xen/pxe-eepro100.rom
dist/install/usr/share/qemu-xen/pxe-ne2k_pci.rom
dist/install/usr/share/qemu-xen/pxe-pcnet.rom
dist/install/usr/share/qemu-xen/pxe-rtl8139.rom
dist/install/usr/share/qemu-xen/pxe-virtio.rom
dist/install/usr/share/qemu-xen/s390-zipl.rom
dist/install/usr/share/qemu-xen/sgabios.bin
dist/install/usr/share/qemu-xen/slof.bin
dist/install/usr/share/qemu-xen/spapr-rtas.bin
dist/install/usr/share/qemu-xen/vgabios.bin
dist/install/usr/share/qemu-xen/vgabios-cirrus.bin
dist/install/usr/share/qemu-xen/vgabios-qxl.bin
dist/install/usr/share/qemu-xen/vgabios-stdvga.bin
dist/install/usr/share/qemu-xen/vgabios-vmware.bin
dist/install/usr/share/xen
dist/install/usr/share/xen/create.dtd
dist/install/usr/share/xen/man
dist/install/usr/share/xen/man/man1
dist/install/usr/share/xen/man/man1/qemu.1
dist/install/usr/share/xen/man/man1/qemu-img.1
dist/install/usr/share/xen/man/man8
dist/install/usr/share/xen/man/man8/qemu-nbd.8
dist/install/usr/share/xen/qemu
dist/install/usr/share/xen/qemu/bamboo.dtb
dist/install/usr/share/xen/qemu/bios.bin
dist/install/usr/share/xen/qemu/keymaps
dist/install/usr/share/xen/qemu/keymaps/ar
dist/install/usr/share/xen/qemu/keymaps/common
dist/install/usr/share/xen/qemu/keymaps/da
dist/install/usr/share/xen/qemu/keymaps/de
dist/install/usr/share/xen/qemu/keymaps/de-ch
dist/install/usr/share/xen/qemu/keymaps/en-gb
dist/install/usr/share/xen/qemu/keymaps/en-us
dist/install/usr/share/xen/qemu/keymaps/es
dist/install/usr/share/xen/qemu/keymaps/et
dist/install/usr/share/xen/qemu/keymaps/fi
dist/install/usr/share/xen/qemu/keymaps/fo
dist/install/usr/share/xen/qemu/keymaps/fr
dist/install/usr/share/xen/qemu/keymaps/fr-be
dist/install/usr/share/xen/qemu/keymaps/fr-ca
dist/install/usr/share/xen/qemu/keymaps/fr-ch
dist/install/usr/share/xen/qemu/keymaps/hr
dist/install/usr/share/xen/qemu/keymaps/hu
dist/install/usr/share/xen/qemu/keymaps/is
dist/install/usr/share/xen/qemu/keymaps/it
dist/install/usr/share/xen/qemu/keymaps/ja
dist/install/usr/share/xen/qemu/keymaps/lt
dist/install/usr/share/xen/qemu/keymaps/lv
dist/install/usr/share/xen/qemu/keymaps/mk
dist/install/usr/share/xen/qemu/keymaps/modifiers
dist/install/usr/share/xen/qemu/keymaps/nl
dist/install/usr/share/xen/qemu/keymaps/nl-be
dist/install/usr/share/xen/qemu/keymaps/no
dist/install/usr/share/xen/qemu/keymaps/pl
dist/install/usr/share/xen/qemu/keymaps/pt
dist/install/usr/share/xen/qemu/keymaps/pt-br
dist/install/usr/share/xen/qemu/keymaps/ru
dist/install/usr/share/xen/qemu/keymaps/sl
dist/install/usr/share/xen/qemu/keymaps/sv
dist/install/usr/share/xen/qemu/keymaps/th
dist/install/usr/share/xen/qemu/keymaps/tr
dist/install/usr/share/xen/qemu/openbios-ppc
dist/install/usr/share/xen/qemu/openbios-sparc32
dist/install/usr/share/xen/qemu/openbios-sparc64
dist/install/usr/share/xen/qemu/ppc_rom.bin
dist/install/usr/share/xen/qemu/pxe-e1000.bin
dist/install/usr/share/xen/qemu/pxe-ne2k_pci.bin
dist/install/usr/share/xen/qemu/pxe-pcnet.bin
dist/install/usr/share/xen/qemu/pxe-rtl8139.bin
dist/install/usr/share/xen/qemu/vgabios.bin
dist/install/usr/share/xen/qemu/vgabios-cirrus.bin
dist/install/usr/share/xen/qemu/video.x
dist/install/var
dist/install/var/lib
dist/install/var/lib/xen
dist/install/var/lib/xenstored
dist/install/var/lib/xen/xenpaging
dist/install/var/lock
dist/install/var/lock/subsys
dist/install/var/log
dist/install/var/log/xen
dist/install/var/run
dist/install/var/run/xen
dist/install/var/run/xend
dist/install/var/run/xend/boot
dist/install/var/run/xenstored
dist/install/var/xen
dist/install/var/xen/dump

--=-Op15fjVHnxmurhd6MvPI
Content-Disposition: attachment; filename="uninstall.after"
Content-Type: text/plain; name="uninstall.after"; charset="UTF-8"
Content-Transfer-Encoding: 7bit

dist/
dist/install
dist/install/etc
dist/install/etc/bash_completion.d
dist/install/etc/bash_completion.d/xl.sh
dist/install/etc/default
dist/install/etc/hotplug
dist/install/etc/init.d
dist/install/etc/udev
dist/install/etc/udev/rules.d
dist/install/etc/xen.old-1345709784
dist/install/etc/xen.old-1345709784/auto
dist/install/etc/xen.old-1345709784/cpupool
dist/install/etc/xen.old-1345709784/oxenstored.conf
dist/install/etc/xen.old-1345709784/README
dist/install/etc/xen.old-1345709784/README.incompatibilities
dist/install/etc/xen.old-1345709784/scripts
dist/install/etc/xen.old-1345709784/scripts/blktap
dist/install/etc/xen.old-1345709784/scripts/block
dist/install/etc/xen.old-1345709784/scripts/block-common.sh
dist/install/etc/xen.old-1345709784/scripts/block-enbd
dist/install/etc/xen.old-1345709784/scripts/block-nbd
dist/install/etc/xen.old-1345709784/scripts/external-device-migrate
dist/install/etc/xen.old-1345709784/scripts/hotplugpath.sh
dist/install/etc/xen.old-1345709784/scripts/locking.sh
dist/install/etc/xen.old-1345709784/scripts/logging.sh
dist/install/etc/xen.old-1345709784/scripts/network-bridge
dist/install/etc/xen.old-1345709784/scripts/network-nat
dist/install/etc/xen.old-1345709784/scripts/network-route
dist/install/etc/xen.old-1345709784/scripts/qemu-ifup
dist/install/etc/xen.old-1345709784/scripts/vif2
dist/install/etc/xen.old-1345709784/scripts/vif-bridge
dist/install/etc/xen.old-1345709784/scripts/vif-common.sh
dist/install/etc/xen.old-1345709784/scripts/vif-nat
dist/install/etc/xen.old-1345709784/scripts/vif-route
dist/install/etc/xen.old-1345709784/scripts/vif-setup
dist/install/etc/xen.old-1345709784/scripts/vscsi
dist/install/etc/xen.old-1345709784/scripts/vtpm
dist/install/etc/xen.old-1345709784/scripts/vtpm-common.sh
dist/install/etc/xen.old-1345709784/scripts/vtpm-delete
dist/install/etc/xen.old-1345709784/scripts/vtpm-hotplug-common.sh
dist/install/etc/xen.old-1345709784/scripts/vtpm-impl
dist/install/etc/xen.old-1345709784/scripts/vtpm-migration.sh
dist/install/etc/xen.old-1345709784/scripts/xen-hotplug-cleanup
dist/install/etc/xen.old-1345709784/scripts/xen-hotplug-common.sh
dist/install/etc/xen.old-1345709784/scripts/xen-network-common.sh
dist/install/etc/xen.old-1345709784/scripts/xen-script-common.sh
dist/install/etc/xen.old-1345709784/xend-config.sxp
dist/install/etc/xen.old-1345709784/xend-pci-permissive.sxp
dist/install/etc/xen.old-1345709784/xend-pci-quirks.sxp
dist/install/etc/xen.old-1345709784/xl.conf
dist/install/etc/xen.old-1345709784/xlexample.hvm
dist/install/etc/xen.old-1345709784/xlexample.pvlinux
dist/install/etc/xen.old-1345709784/xm-config.xml
dist/install/etc/xen.old-1345709784/xmexample1
dist/install/etc/xen.old-1345709784/xmexample2
dist/install/etc/xen.old-1345709784/xmexample3
dist/install/etc/xen.old-1345709784/xmexample.hvm
dist/install/etc/xen.old-1345709784/xmexample.hvm-stubdom
dist/install/etc/xen.old-1345709784/xmexample.nbd
dist/install/etc/xen.old-1345709784/xmexample.pv-grub
dist/install/etc/xen.old-1345709784/xmexample.vti
dist/install/usr
dist/install/usr/bin
dist/install/usr/bin/remus
dist/install/usr/include
dist/install/usr/include/blktaplib.h
dist/install/usr/include/fsimage_grub.h
dist/install/usr/include/fsimage.h
dist/install/usr/include/fsimage_plugin.h
dist/install/usr/include/libxenvchan.h
dist/install/usr/include/xenstore-compat
dist/install/usr/lib
dist/install/usr/lib/fs
dist/install/usr/lib/fs/ext2fs
dist/install/usr/lib/fs/ext2fs/fsimage.so
dist/install/usr/lib/fs/fat
dist/install/usr/lib/fs/fat/fsimage.so
dist/install/usr/lib/fs/iso9660
dist/install/usr/lib/fs/iso9660/fsimage.so
dist/install/usr/lib/fs/reiserfs
dist/install/usr/lib/fs/reiserfs/fsimage.so
dist/install/usr/lib/fs/ufs
dist/install/usr/lib/fs/ufs/fsimage.so
dist/install/usr/lib/fs/xfs
dist/install/usr/lib/fs/xfs/fsimage.so
dist/install/usr/lib/fs/zfs
dist/install/usr/lib/fs/zfs/fsimage.so
dist/install/usr/lib/libblktap.a
dist/install/usr/lib/libblktapctl.a
dist/install/usr/lib/libblktapctl.so
dist/install/usr/lib/libblktapctl.so.1.0
dist/install/usr/lib/libblktapctl.so.1.0.0
dist/install/usr/lib/libblktap.so
dist/install/usr/lib/libblktap.so.3.0
dist/install/usr/lib/libblktap.so.3.0.0
dist/install/usr/lib/libfsimage.so
dist/install/usr/lib/libfsimage.so.1.0
dist/install/usr/lib/libfsimage.so.1.0.0
dist/install/usr/lib/libvhd.a
dist/install/usr/lib/libvhd.so
dist/install/usr/lib/libvhd.so.1.0
dist/install/usr/lib/libvhd.so.1.0.0
dist/install/usr/lib/libxenlight.a
dist/install/usr/lib/libxenlight.so
dist/install/usr/lib/libxenlight.so.2.0
dist/install/usr/lib/libxenlight.so.2.0.0
dist/install/usr/lib/libxenstat.a
dist/install/usr/lib/libxenstat.so
dist/install/usr/lib/libxenstat.so.0
dist/install/usr/lib/libxenstat.so.0.0
dist/install/usr/lib/libxenvchan.a
dist/install/usr/lib/libxenvchan.so
dist/install/usr/lib/libxenvchan.so.1.0
dist/install/usr/lib/libxenvchan.so.1.0.0
dist/install/usr/lib/python2.6
dist/install/usr/lib/python2.6/dist-packages
dist/install/usr/lib/python2.6/dist-packages/fsimage.so
dist/install/usr/lib/python2.6/dist-packages/grub
dist/install/usr/lib/python2.6/dist-packages/grub/ExtLinuxConf.py
dist/install/usr/lib/python2.6/dist-packages/grub/ExtLinuxConf.pyc
dist/install/usr/lib/python2.6/dist-packages/grub/GrubConf.py
dist/install/usr/lib/python2.6/dist-packages/grub/GrubConf.pyc
dist/install/usr/lib/python2.6/dist-packages/grub/__init__.py
dist/install/usr/lib/python2.6/dist-packages/grub/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/grub/LiloConf.py
dist/install/usr/lib/python2.6/dist-packages/grub/LiloConf.pyc
dist/install/usr/lib/python2.6/dist-packages/pygrub-0.3.egg-info
dist/install/usr/lib/python2.6/dist-packages/xen
dist/install/usr/lib/python2.6/dist-packages/xen-3.0.egg-info
dist/install/usr/lib/python2.6/dist-packages/xen/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/lowlevel
dist/install/usr/lib/python2.6/dist-packages/xen/lowlevel/checkpoint.so
dist/install/usr/lib/python2.6/dist-packages/xen/lowlevel/flask.so
dist/install/usr/lib/python2.6/dist-packages/xen/lowlevel/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/lowlevel/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/lowlevel/netlink.so
dist/install/usr/lib/python2.6/dist-packages/xen/lowlevel/ptsname.so
dist/install/usr/lib/python2.6/dist-packages/xen/lowlevel/xc.so
dist/install/usr/lib/python2.6/dist-packages/xen/lowlevel/xs.so
dist/install/usr/lib/python2.6/dist-packages/xen/remus
dist/install/usr/lib/python2.6/dist-packages/xen/remus/blkdev.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/blkdev.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/device.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/device.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/image.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/image.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/netlink.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/netlink.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/profile.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/profile.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/qdisc.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/qdisc.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/save.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/save.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/tapdisk.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/tapdisk.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/util.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/util.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/vbd.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/vbd.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/vdi.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/vdi.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/vif.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/vif.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/vm.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/vm.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/sv
dist/install/usr/lib/python2.6/dist-packages/xen/sv/CreateDomain.py
dist/install/usr/lib/python2.6/dist-packages/xen/sv/CreateDomain.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/sv/DomInfo.py
dist/install/usr/lib/python2.6/dist-packages/xen/sv/DomInfo.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/sv/GenTabbed.py
dist/install/usr/lib/python2.6/dist-packages/xen/sv/GenTabbed.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/sv/HTMLBase.py
dist/install/usr/lib/python2.6/dist-packages/xen/sv/HTMLBase.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/sv/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/sv/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/sv/Main.py
dist/install/usr/lib/python2.6/dist-packages/xen/sv/Main.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/sv/NodeInfo.py
dist/install/usr/lib/python2.6/dist-packages/xen/sv/NodeInfo.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/sv/RestoreDomain.py
dist/install/usr/lib/python2.6/dist-packages/xen/sv/RestoreDomain.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/sv/util.py
dist/install/usr/lib/python2.6/dist-packages/xen/sv/util.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/sv/Wizard.py
dist/install/usr/lib/python2.6/dist-packages/xen/sv/Wizard.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util
dist/install/usr/lib/python2.6/dist-packages/xen/util/acmpolicy.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/acmpolicy.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/asserts.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/asserts.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/auxbin.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/auxbin.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/blkif.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/blkif.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/bootloader.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/bootloader.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/Brctl.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/Brctl.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/bugtool.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/bugtool.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/diagnose.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/diagnose.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/dictio.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/dictio.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/fileuri.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/fileuri.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/ip.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/ip.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/mac.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/mac.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/mkdir.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/mkdir.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/oshelp.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/oshelp.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/path.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/path.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/pci.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/pci.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/rwlock.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/rwlock.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/SSHTransport.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/SSHTransport.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/sxputils.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/sxputils.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/utils.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/utils.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/vscsi_util.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/vscsi_util.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/vusb_util.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/vusb_util.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xmlrpcclient.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xmlrpcclient.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xmlrpclib2.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xmlrpclib2.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xpopen.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xpopen.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsconstants.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsconstants.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/acm
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/acm/acm.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/acm/acm.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/acm/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/acm/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/dummy
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/dummy/dummy.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/dummy/dummy.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/dummy/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/dummy/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/flask
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/flask/flask.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/flask/flask.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/flask/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/flask/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/xsm_core.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/xsm_core.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/xsm.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/xsm.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xspolicy.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xspolicy.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/web
dist/install/usr/lib/python2.6/dist-packages/xen/web/connection.py
dist/install/usr/lib/python2.6/dist-packages/xen/web/connection.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/web/http.py
dist/install/usr/lib/python2.6/dist-packages/xen/web/http.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/web/httpserver.py
dist/install/usr/lib/python2.6/dist-packages/xen/web/httpserver.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/web/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/web/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/web/protocol.py
dist/install/usr/lib/python2.6/dist-packages/xen/web/protocol.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/web/resource.py
dist/install/usr/lib/python2.6/dist-packages/xen/web/resource.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/web/SrvBase.py
dist/install/usr/lib/python2.6/dist-packages/xen/web/SrvBase.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/web/SrvDir.py
dist/install/usr/lib/python2.6/dist-packages/xen/web/SrvDir.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/web/static.py
dist/install/usr/lib/python2.6/dist-packages/xen/web/static.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/web/tcp.py
dist/install/usr/lib/python2.6/dist-packages/xen/web/tcp.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/web/unix.py
dist/install/usr/lib/python2.6/dist-packages/xen/web/unix.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend
dist/install/usr/lib/python2.6/dist-packages/xen/xend/arch.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/arch.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/Args.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/Args.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/balloon.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/balloon.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/encode.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/encode.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/image.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/image.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/MemoryPool.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/MemoryPool.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/osdep.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/osdep.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/PrettyPrint.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/PrettyPrint.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/blkif.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/blkif.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/BlktapController.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/BlktapController.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/ConsoleController.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/ConsoleController.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/DevConstants.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/DevConstants.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/DevController.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/DevController.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/iopif.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/iopif.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/irqif.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/irqif.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/netif2.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/netif2.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/netif.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/netif.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/params.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/params.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/pciif.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/pciif.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/pciquirk.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/pciquirk.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/relocate.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/relocate.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvDaemon.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvDaemon.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvDmesg.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvDmesg.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvDomainDir.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvDomainDir.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvDomain.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvDomain.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvNode.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvNode.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvRoot.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvRoot.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvServer.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvServer.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvVnetDir.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvVnetDir.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvXendLog.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvXendLog.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SSLXMLRPCServer.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SSLXMLRPCServer.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/tests
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/tests/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/tests/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/tests/test_controllers.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/tests/test_controllers.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/tpmif.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/tpmif.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/udevevent.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/udevevent.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/vfbif.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/vfbif.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/vscsiif.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/vscsiif.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/vusbif.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/vusbif.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/XMLRPCServer.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/XMLRPCServer.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/sxp.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/sxp.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/tests
dist/install/usr/lib/python2.6/dist-packages/xen/xend/tests/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/tests/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/tests/test_sxp.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/tests/test_sxp.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/tests/test_uuid.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/tests/test_uuid.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/tests/test_XendConfig.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/tests/test_XendConfig.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/uuid.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/uuid.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/Vifctl.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/Vifctl.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendAPIConstants.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendAPIConstants.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendAPI.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendAPI.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendAPIStore.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendAPIStore.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendAPIVersion.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendAPIVersion.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendAuthSessions.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendAuthSessions.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendBase.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendBase.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendBootloader.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendBootloader.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendCheckpoint.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendCheckpoint.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendClient.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendClient.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendConfig.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendConfig.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendConstants.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendConstants.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendCPUPool.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendCPUPool.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDevices.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDevices.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDmesg.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDmesg.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDomainInfo.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDomainInfo.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDomain.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDomain.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDPCI.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDPCI.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDSCSI.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDSCSI.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendError.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendError.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendLocalStorageRepo.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendLocalStorageRepo.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendLogging.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendLogging.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendMonitor.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendMonitor.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendNetwork.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendNetwork.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendNode.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendNode.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendOptions.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendOptions.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendPBD.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendPBD.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendPIFMetrics.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendPIFMetrics.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendPIF.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendPIF.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendPPCI.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendPPCI.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendProtocol.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendProtocol.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendPSCSI.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendPSCSI.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendQCoWStorageRepo.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendQCoWStorageRepo.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendStateStore.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendStateStore.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendStorageRepository.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendStorageRepository.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendSXPDev.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendSXPDev.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendTaskManager.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendTaskManager.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendTask.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendTask.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendVDI.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendVDI.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendVMMetrics.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendVMMetrics.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendVnet.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendVnet.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendXSPolicyAdmin.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendXSPolicyAdmin.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendXSPolicy.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendXSPolicy.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/tests
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/tests/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/tests/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/tests/stress_xs.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/tests/stress_xs.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/xstransact.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/xstransact.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/xsutil.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/xsutil.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/xswatch.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/xswatch.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm
dist/install/usr/lib/python2.6/dist-packages/xen/xm/addlabel.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/addlabel.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/console.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/console.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/cpupool-create.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/cpupool-create.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/cpupool-new.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/cpupool-new.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/cpupool.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/cpupool.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/create.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/create.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/dry-run.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/dry-run.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/dumppolicy.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/dumppolicy.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/getenforce.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/getenforce.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/getlabel.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/getlabel.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/getpolicy.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/getpolicy.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/help.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/help.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/labels.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/labels.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/main.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/main.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/migrate.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/migrate.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/new.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/new.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/opts.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/opts.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/resetpolicy.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/resetpolicy.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/resources.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/resources.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/rmlabel.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/rmlabel.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/setenforce.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/setenforce.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/setpolicy.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/setpolicy.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/shutdown.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/shutdown.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/tests
dist/install/usr/lib/python2.6/dist-packages/xen/xm/tests/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/tests/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/tests/test_create.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/tests/test_create.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/xenapi_create.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/xenapi_create.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/XenAPI.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/XenAPI.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xsview
dist/install/usr/lib/python2.6/dist-packages/xen/xsview/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/xsview/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xsview/main.py
dist/install/usr/lib/python2.6/dist-packages/xen/xsview/main.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xsview/xsviewer.py
dist/install/usr/lib/python2.6/dist-packages/xen/xsview/xsviewer.pyc
dist/install/usr/local
dist/install/usr/local/etc
dist/install/usr/local/etc/qemu
dist/install/usr/local/etc/qemu/target-x86_64.conf
dist/install/usr/local/lib
dist/install/usr/local/lib/ocaml
dist/install/usr/local/lib/ocaml/3.11.2
dist/install/usr/local/lib/ocaml/3.11.2/xenbus
dist/install/usr/local/lib/ocaml/3.11.2/xenbus/dllxenbus_stubs.so
dist/install/usr/local/lib/ocaml/3.11.2/xenbus/libxenbus_stubs.a
dist/install/usr/local/lib/ocaml/3.11.2/xenbus/META
dist/install/usr/local/lib/ocaml/3.11.2/xenbus/xenbus.a
dist/install/usr/local/lib/ocaml/3.11.2/xenbus/xenbus.cma
dist/install/usr/local/lib/ocaml/3.11.2/xenbus/xenbus.cmi
dist/install/usr/local/lib/ocaml/3.11.2/xenbus/xenbus.cmo
dist/install/usr/local/lib/ocaml/3.11.2/xenbus/xenbus.cmx
dist/install/usr/local/lib/ocaml/3.11.2/xenbus/xenbus.cmxa
dist/install/usr/local/lib/ocaml/3.11.2/xenctrl
dist/install/usr/local/lib/ocaml/3.11.2/xenctrl/dllxenctrl_stubs.so
dist/install/usr/local/lib/ocaml/3.11.2/xenctrl/libxenctrl_stubs.a
dist/install/usr/local/lib/ocaml/3.11.2/xenctrl/META
dist/install/usr/local/lib/ocaml/3.11.2/xenctrl/xenctrl.a
dist/install/usr/local/lib/ocaml/3.11.2/xenctrl/xenctrl.cma
dist/install/usr/local/lib/ocaml/3.11.2/xenctrl/xenctrl.cmi
dist/install/usr/local/lib/ocaml/3.11.2/xenctrl/xenctrl.cmx
dist/install/usr/local/lib/ocaml/3.11.2/xenctrl/xenctrl.cmxa
dist/install/usr/local/lib/ocaml/3.11.2/xeneventchn
dist/install/usr/local/lib/ocaml/3.11.2/xeneventchn/dllxeneventchn_stubs.so
dist/install/usr/local/lib/ocaml/3.11.2/xeneventchn/libxeneventchn_stubs.a
dist/install/usr/local/lib/ocaml/3.11.2/xeneventchn/META
dist/install/usr/local/lib/ocaml/3.11.2/xeneventchn/xeneventchn.a
dist/install/usr/local/lib/ocaml/3.11.2/xeneventchn/xeneventchn.cma
dist/install/usr/local/lib/ocaml/3.11.2/xeneventchn/xeneventchn.cmi
dist/install/usr/local/lib/ocaml/3.11.2/xeneventchn/xeneventchn.cmx
dist/install/usr/local/lib/ocaml/3.11.2/xeneventchn/xeneventchn.cmxa
dist/install/usr/local/lib/ocaml/3.11.2/xenlight
dist/install/usr/local/lib/ocaml/3.11.2/xenlight/dllxenlight_stubs.so
dist/install/usr/local/lib/ocaml/3.11.2/xenlight/libxenlight_stubs.a
dist/install/usr/local/lib/ocaml/3.11.2/xenlight/META
dist/install/usr/local/lib/ocaml/3.11.2/xenlight/xenlight.a
dist/install/usr/local/lib/ocaml/3.11.2/xenlight/xenlight.cma
dist/install/usr/local/lib/ocaml/3.11.2/xenlight/xenlight.cmi
dist/install/usr/local/lib/ocaml/3.11.2/xenlight/xenlight.cmx
dist/install/usr/local/lib/ocaml/3.11.2/xenlight/xenlight.cmxa
dist/install/usr/local/lib/ocaml/3.11.2/xenmmap
dist/install/usr/local/lib/ocaml/3.11.2/xenmmap/dllxenmmap_stubs.so
dist/install/usr/local/lib/ocaml/3.11.2/xenmmap/libxenmmap_stubs.a
dist/install/usr/local/lib/ocaml/3.11.2/xenmmap/META
dist/install/usr/local/lib/ocaml/3.11.2/xenmmap/xenmmap.a
dist/install/usr/local/lib/ocaml/3.11.2/xenmmap/xenmmap.cma
dist/install/usr/local/lib/ocaml/3.11.2/xenmmap/xenmmap.cmi
dist/install/usr/local/lib/ocaml/3.11.2/xenmmap/xenmmap.cmx
dist/install/usr/local/lib/ocaml/3.11.2/xenmmap/xenmmap.cmxa
dist/install/usr/local/lib/ocaml/3.11.2/xenstore
dist/install/usr/local/lib/ocaml/3.11.2/xenstore/META
dist/install/usr/local/lib/ocaml/3.11.2/xenstore/xenstore.a
dist/install/usr/local/lib/ocaml/3.11.2/xenstore/xenstore.cma
dist/install/usr/local/lib/ocaml/3.11.2/xenstore/xenstore.cmi
dist/install/usr/local/lib/ocaml/3.11.2/xenstore/xenstore.cmo
dist/install/usr/local/lib/ocaml/3.11.2/xenstore/xenstore.cmx
dist/install/usr/local/lib/ocaml/3.11.2/xenstore/xenstore.cmxa
dist/install/usr/local/share
dist/install/usr/local/share/doc
dist/install/usr/local/share/doc/qemu
dist/install/usr/local/share/doc/qemu/qemu-doc.html
dist/install/usr/local/share/doc/qemu/qemu-tech.html
dist/install/usr/local/share/man
dist/install/usr/local/share/man/man1
dist/install/usr/local/share/man/man1/qemu.1
dist/install/usr/local/share/man/man1/qemu-img.1
dist/install/usr/local/share/man/man8
dist/install/usr/local/share/man/man8/qemu-nbd.8
dist/install/usr/sbin
dist/install/usr/sbin/blktapctrl
dist/install/usr/sbin/flask-get-bool
dist/install/usr/sbin/flask-getenforce
dist/install/usr/sbin/flask-label-pci
dist/install/usr/sbin/flask-loadpolicy
dist/install/usr/sbin/flask-set-bool
dist/install/usr/sbin/flask-setenforce
dist/install/usr/sbin/gdbsx
dist/install/usr/sbin/gtracestat
dist/install/usr/sbin/gtraceview
dist/install/usr/sbin/img2qcow
dist/install/usr/sbin/kdd
dist/install/usr/sbin/lock-util
dist/install/usr/sbin/oxenstored
dist/install/usr/sbin/qcow2raw
dist/install/usr/sbin/qcow-create
dist/install/usr/sbin/tap-ctl
dist/install/usr/sbin/tapdisk
dist/install/usr/sbin/tapdisk2
dist/install/usr/sbin/tapdisk-client
dist/install/usr/sbin/tapdisk-diff
dist/install/usr/sbin/tapdisk-stream
dist/install/usr/sbin/td-util
dist/install/usr/sbin/vhd-update
dist/install/usr/sbin/vhd-util
dist/install/usr/sbin/xl
dist/install/usr/sbin/xsview
dist/install/usr/share
dist/install/usr/share/doc
dist/install/usr/share/man
dist/install/usr/share/man/man1
dist/install/usr/share/man/man8
dist/install/var
dist/install/var/lib
dist/install/var/lock
dist/install/var/lock/subsys
dist/install/var/log
dist/install/var/log/xen
dist/install/var/run
dist/install/var/xen
dist/install/var/xen/dump

--=-Op15fjVHnxmurhd6MvPI
Content-Disposition: attachment; filename="uninstall.before"
Content-Type: text/plain; name="uninstall.before"; charset="UTF-8"
Content-Transfer-Encoding: 7bit

dist/
dist/install
dist/install/etc
dist/install/etc/bash_completion.d
dist/install/etc/bash_completion.d/xl.sh
dist/install/etc/default
dist/install/etc/hotplug
dist/install/etc/init.d
dist/install/etc/udev
dist/install/etc/udev/rules.d
dist/install/etc/xen.old-1345709489
dist/install/etc/xen.old-1345709489/auto
dist/install/etc/xen.old-1345709489/cpupool
dist/install/etc/xen.old-1345709489/oxenstored.conf
dist/install/etc/xen.old-1345709489/README
dist/install/etc/xen.old-1345709489/README.incompatibilities
dist/install/etc/xen.old-1345709489/scripts
dist/install/etc/xen.old-1345709489/scripts/blktap
dist/install/etc/xen.old-1345709489/scripts/block
dist/install/etc/xen.old-1345709489/scripts/block-common.sh
dist/install/etc/xen.old-1345709489/scripts/block-enbd
dist/install/etc/xen.old-1345709489/scripts/block-nbd
dist/install/etc/xen.old-1345709489/scripts/external-device-migrate
dist/install/etc/xen.old-1345709489/scripts/hotplugpath.sh
dist/install/etc/xen.old-1345709489/scripts/locking.sh
dist/install/etc/xen.old-1345709489/scripts/logging.sh
dist/install/etc/xen.old-1345709489/scripts/network-bridge
dist/install/etc/xen.old-1345709489/scripts/network-nat
dist/install/etc/xen.old-1345709489/scripts/network-route
dist/install/etc/xen.old-1345709489/scripts/qemu-ifup
dist/install/etc/xen.old-1345709489/scripts/vif2
dist/install/etc/xen.old-1345709489/scripts/vif-bridge
dist/install/etc/xen.old-1345709489/scripts/vif-common.sh
dist/install/etc/xen.old-1345709489/scripts/vif-nat
dist/install/etc/xen.old-1345709489/scripts/vif-route
dist/install/etc/xen.old-1345709489/scripts/vif-setup
dist/install/etc/xen.old-1345709489/scripts/vscsi
dist/install/etc/xen.old-1345709489/scripts/vtpm
dist/install/etc/xen.old-1345709489/scripts/vtpm-common.sh
dist/install/etc/xen.old-1345709489/scripts/vtpm-delete
dist/install/etc/xen.old-1345709489/scripts/vtpm-hotplug-common.sh
dist/install/etc/xen.old-1345709489/scripts/vtpm-impl
dist/install/etc/xen.old-1345709489/scripts/vtpm-migration.sh
dist/install/etc/xen.old-1345709489/scripts/xen-hotplug-cleanup
dist/install/etc/xen.old-1345709489/scripts/xen-hotplug-common.sh
dist/install/etc/xen.old-1345709489/scripts/xen-network-common.sh
dist/install/etc/xen.old-1345709489/scripts/xen-script-common.sh
dist/install/etc/xen.old-1345709489/xend-config.sxp
dist/install/etc/xen.old-1345709489/xend-pci-permissive.sxp
dist/install/etc/xen.old-1345709489/xend-pci-quirks.sxp
dist/install/etc/xen.old-1345709489/xl.conf
dist/install/etc/xen.old-1345709489/xlexample.hvm
dist/install/etc/xen.old-1345709489/xlexample.pvlinux
dist/install/etc/xen.old-1345709489/xm-config.xml
dist/install/etc/xen.old-1345709489/xmexample1
dist/install/etc/xen.old-1345709489/xmexample2
dist/install/etc/xen.old-1345709489/xmexample3
dist/install/etc/xen.old-1345709489/xmexample.hvm
dist/install/etc/xen.old-1345709489/xmexample.hvm-stubdom
dist/install/etc/xen.old-1345709489/xmexample.nbd
dist/install/etc/xen.old-1345709489/xmexample.pv-grub
dist/install/etc/xen.old-1345709489/xmexample.vti
dist/install/usr
dist/install/usr/bin
dist/install/usr/bin/remus
dist/install/usr/include
dist/install/usr/include/blktaplib.h
dist/install/usr/include/fsimage_grub.h
dist/install/usr/include/fsimage.h
dist/install/usr/include/fsimage_plugin.h
dist/install/usr/include/libxenvchan.h
dist/install/usr/include/xenstore-compat
dist/install/usr/lib
dist/install/usr/lib/fs
dist/install/usr/lib/fs/ext2fs
dist/install/usr/lib/fs/ext2fs/fsimage.so
dist/install/usr/lib/fs/fat
dist/install/usr/lib/fs/fat/fsimage.so
dist/install/usr/lib/fs/iso9660
dist/install/usr/lib/fs/iso9660/fsimage.so
dist/install/usr/lib/fs/reiserfs
dist/install/usr/lib/fs/reiserfs/fsimage.so
dist/install/usr/lib/fs/ufs
dist/install/usr/lib/fs/ufs/fsimage.so
dist/install/usr/lib/fs/xfs
dist/install/usr/lib/fs/xfs/fsimage.so
dist/install/usr/lib/fs/zfs
dist/install/usr/lib/fs/zfs/fsimage.so
dist/install/usr/lib/libblktap.a
dist/install/usr/lib/libblktapctl.a
dist/install/usr/lib/libblktapctl.so
dist/install/usr/lib/libblktapctl.so.1.0
dist/install/usr/lib/libblktapctl.so.1.0.0
dist/install/usr/lib/libblktap.so
dist/install/usr/lib/libblktap.so.3.0
dist/install/usr/lib/libblktap.so.3.0.0
dist/install/usr/lib/libfsimage.so
dist/install/usr/lib/libfsimage.so.1.0
dist/install/usr/lib/libfsimage.so.1.0.0
dist/install/usr/lib/libvhd.a
dist/install/usr/lib/libvhd.so
dist/install/usr/lib/libvhd.so.1.0
dist/install/usr/lib/libvhd.so.1.0.0
dist/install/usr/lib/libxenctrl.a
dist/install/usr/lib/libxenctrl.so
dist/install/usr/lib/libxenctrl.so.4.2
dist/install/usr/lib/libxenctrl.so.4.2.0
dist/install/usr/lib/libxenguest.a
dist/install/usr/lib/libxenguest.so
dist/install/usr/lib/libxenguest.so.4.2
dist/install/usr/lib/libxenguest.so.4.2.0
dist/install/usr/lib/libxenlight.a
dist/install/usr/lib/libxenlight.so
dist/install/usr/lib/libxenlight.so.2.0
dist/install/usr/lib/libxenlight.so.2.0.0
dist/install/usr/lib/libxenstat.a
dist/install/usr/lib/libxenstat.so
dist/install/usr/lib/libxenstat.so.0
dist/install/usr/lib/libxenstat.so.0.0
dist/install/usr/lib/libxenstore.a
dist/install/usr/lib/libxenstore.so
dist/install/usr/lib/libxenstore.so.3.0
dist/install/usr/lib/libxenstore.so.3.0.1
dist/install/usr/lib/libxenvchan.a
dist/install/usr/lib/libxenvchan.so
dist/install/usr/lib/libxenvchan.so.1.0
dist/install/usr/lib/libxenvchan.so.1.0.0
dist/install/usr/lib/libxlutil.a
dist/install/usr/lib/libxlutil.so
dist/install/usr/lib/libxlutil.so.1.0
dist/install/usr/lib/libxlutil.so.1.0.0
dist/install/usr/lib/python2.6
dist/install/usr/lib/python2.6/dist-packages
dist/install/usr/lib/python2.6/dist-packages/fsimage.so
dist/install/usr/lib/python2.6/dist-packages/grub
dist/install/usr/lib/python2.6/dist-packages/grub/ExtLinuxConf.py
dist/install/usr/lib/python2.6/dist-packages/grub/ExtLinuxConf.pyc
dist/install/usr/lib/python2.6/dist-packages/grub/GrubConf.py
dist/install/usr/lib/python2.6/dist-packages/grub/GrubConf.pyc
dist/install/usr/lib/python2.6/dist-packages/grub/__init__.py
dist/install/usr/lib/python2.6/dist-packages/grub/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/grub/LiloConf.py
dist/install/usr/lib/python2.6/dist-packages/grub/LiloConf.pyc
dist/install/usr/lib/python2.6/dist-packages/pygrub-0.3.egg-info
dist/install/usr/lib/python2.6/dist-packages/xen
dist/install/usr/lib/python2.6/dist-packages/xen-3.0.egg-info
dist/install/usr/lib/python2.6/dist-packages/xen/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/lowlevel
dist/install/usr/lib/python2.6/dist-packages/xen/lowlevel/checkpoint.so
dist/install/usr/lib/python2.6/dist-packages/xen/lowlevel/flask.so
dist/install/usr/lib/python2.6/dist-packages/xen/lowlevel/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/lowlevel/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/lowlevel/netlink.so
dist/install/usr/lib/python2.6/dist-packages/xen/lowlevel/ptsname.so
dist/install/usr/lib/python2.6/dist-packages/xen/lowlevel/xc.so
dist/install/usr/lib/python2.6/dist-packages/xen/lowlevel/xs.so
dist/install/usr/lib/python2.6/dist-packages/xen/remus
dist/install/usr/lib/python2.6/dist-packages/xen/remus/blkdev.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/blkdev.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/device.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/device.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/image.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/image.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/netlink.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/netlink.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/profile.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/profile.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/qdisc.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/qdisc.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/save.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/save.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/tapdisk.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/tapdisk.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/util.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/util.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/vbd.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/vbd.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/vdi.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/vdi.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/vif.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/vif.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/remus/vm.py
dist/install/usr/lib/python2.6/dist-packages/xen/remus/vm.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/sv
dist/install/usr/lib/python2.6/dist-packages/xen/sv/CreateDomain.py
dist/install/usr/lib/python2.6/dist-packages/xen/sv/CreateDomain.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/sv/DomInfo.py
dist/install/usr/lib/python2.6/dist-packages/xen/sv/DomInfo.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/sv/GenTabbed.py
dist/install/usr/lib/python2.6/dist-packages/xen/sv/GenTabbed.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/sv/HTMLBase.py
dist/install/usr/lib/python2.6/dist-packages/xen/sv/HTMLBase.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/sv/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/sv/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/sv/Main.py
dist/install/usr/lib/python2.6/dist-packages/xen/sv/Main.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/sv/NodeInfo.py
dist/install/usr/lib/python2.6/dist-packages/xen/sv/NodeInfo.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/sv/RestoreDomain.py
dist/install/usr/lib/python2.6/dist-packages/xen/sv/RestoreDomain.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/sv/util.py
dist/install/usr/lib/python2.6/dist-packages/xen/sv/util.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/sv/Wizard.py
dist/install/usr/lib/python2.6/dist-packages/xen/sv/Wizard.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util
dist/install/usr/lib/python2.6/dist-packages/xen/util/acmpolicy.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/acmpolicy.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/asserts.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/asserts.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/auxbin.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/auxbin.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/blkif.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/blkif.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/bootloader.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/bootloader.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/Brctl.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/Brctl.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/bugtool.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/bugtool.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/diagnose.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/diagnose.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/dictio.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/dictio.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/fileuri.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/fileuri.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/ip.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/ip.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/mac.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/mac.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/mkdir.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/mkdir.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/oshelp.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/oshelp.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/path.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/path.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/pci.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/pci.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/rwlock.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/rwlock.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/SSHTransport.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/SSHTransport.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/sxputils.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/sxputils.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/utils.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/utils.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/vscsi_util.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/vscsi_util.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/vusb_util.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/vusb_util.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xmlrpcclient.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xmlrpcclient.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xmlrpclib2.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xmlrpclib2.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xpopen.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xpopen.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsconstants.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsconstants.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/acm
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/acm/acm.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/acm/acm.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/acm/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/acm/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/dummy
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/dummy/dummy.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/dummy/dummy.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/dummy/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/dummy/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/flask
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/flask/flask.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/flask/flask.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/flask/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/flask/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/xsm_core.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/xsm_core.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/xsm.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xsm/xsm.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/util/xspolicy.py
dist/install/usr/lib/python2.6/dist-packages/xen/util/xspolicy.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/web
dist/install/usr/lib/python2.6/dist-packages/xen/web/connection.py
dist/install/usr/lib/python2.6/dist-packages/xen/web/connection.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/web/http.py
dist/install/usr/lib/python2.6/dist-packages/xen/web/http.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/web/httpserver.py
dist/install/usr/lib/python2.6/dist-packages/xen/web/httpserver.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/web/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/web/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/web/protocol.py
dist/install/usr/lib/python2.6/dist-packages/xen/web/protocol.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/web/resource.py
dist/install/usr/lib/python2.6/dist-packages/xen/web/resource.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/web/SrvBase.py
dist/install/usr/lib/python2.6/dist-packages/xen/web/SrvBase.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/web/SrvDir.py
dist/install/usr/lib/python2.6/dist-packages/xen/web/SrvDir.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/web/static.py
dist/install/usr/lib/python2.6/dist-packages/xen/web/static.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/web/tcp.py
dist/install/usr/lib/python2.6/dist-packages/xen/web/tcp.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/web/unix.py
dist/install/usr/lib/python2.6/dist-packages/xen/web/unix.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend
dist/install/usr/lib/python2.6/dist-packages/xen/xend/arch.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/arch.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/Args.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/Args.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/balloon.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/balloon.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/encode.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/encode.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/image.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/image.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/MemoryPool.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/MemoryPool.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/osdep.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/osdep.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/PrettyPrint.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/PrettyPrint.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/blkif.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/blkif.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/BlktapController.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/BlktapController.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/ConsoleController.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/ConsoleController.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/DevConstants.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/DevConstants.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/DevController.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/DevController.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/iopif.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/iopif.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/irqif.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/irqif.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/netif2.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/netif2.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/netif.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/netif.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/params.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/params.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/pciif.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/pciif.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/pciquirk.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/pciquirk.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/relocate.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/relocate.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvDaemon.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvDaemon.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvDmesg.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvDmesg.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvDomainDir.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvDomainDir.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvDomain.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvDomain.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvNode.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvNode.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvRoot.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvRoot.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvServer.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvServer.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvVnetDir.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvVnetDir.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvXendLog.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SrvXendLog.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SSLXMLRPCServer.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/SSLXMLRPCServer.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/tests
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/tests/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/tests/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/tests/test_controllers.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/tests/test_controllers.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/tpmif.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/tpmif.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/udevevent.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/udevevent.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/vfbif.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/vfbif.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/vscsiif.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/vscsiif.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/vusbif.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/vusbif.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/XMLRPCServer.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/server/XMLRPCServer.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/sxp.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/sxp.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/tests
dist/install/usr/lib/python2.6/dist-packages/xen/xend/tests/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/tests/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/tests/test_sxp.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/tests/test_sxp.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/tests/test_uuid.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/tests/test_uuid.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/tests/test_XendConfig.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/tests/test_XendConfig.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/uuid.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/uuid.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/Vifctl.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/Vifctl.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendAPIConstants.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendAPIConstants.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendAPI.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendAPI.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendAPIStore.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendAPIStore.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendAPIVersion.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendAPIVersion.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendAuthSessions.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendAuthSessions.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendBase.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendBase.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendBootloader.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendBootloader.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendCheckpoint.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendCheckpoint.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendClient.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendClient.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendConfig.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendConfig.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendConstants.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendConstants.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendCPUPool.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendCPUPool.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDevices.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDevices.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDmesg.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDmesg.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDomainInfo.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDomainInfo.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDomain.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDomain.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDPCI.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDPCI.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDSCSI.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendDSCSI.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendError.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendError.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendLocalStorageRepo.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendLocalStorageRepo.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendLogging.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendLogging.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendMonitor.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendMonitor.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendNetwork.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendNetwork.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendNode.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendNode.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendOptions.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendOptions.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendPBD.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendPBD.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendPIFMetrics.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendPIFMetrics.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendPIF.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendPIF.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendPPCI.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendPPCI.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendProtocol.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendProtocol.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendPSCSI.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendPSCSI.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendQCoWStorageRepo.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendQCoWStorageRepo.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendStateStore.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendStateStore.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendStorageRepository.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendStorageRepository.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendSXPDev.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendSXPDev.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendTaskManager.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendTaskManager.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendTask.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendTask.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendVDI.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendVDI.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendVMMetrics.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendVMMetrics.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendVnet.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendVnet.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendXSPolicyAdmin.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendXSPolicyAdmin.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendXSPolicy.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/XendXSPolicy.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/tests
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/tests/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/tests/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/tests/stress_xs.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/tests/stress_xs.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/xstransact.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/xstransact.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/xsutil.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/xsutil.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/xswatch.py
dist/install/usr/lib/python2.6/dist-packages/xen/xend/xenstore/xswatch.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm
dist/install/usr/lib/python2.6/dist-packages/xen/xm/addlabel.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/addlabel.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/console.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/console.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/cpupool-create.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/cpupool-create.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/cpupool-new.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/cpupool-new.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/cpupool.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/cpupool.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/create.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/create.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/dry-run.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/dry-run.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/dumppolicy.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/dumppolicy.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/getenforce.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/getenforce.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/getlabel.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/getlabel.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/getpolicy.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/getpolicy.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/help.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/help.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/labels.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/labels.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/main.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/main.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/migrate.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/migrate.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/new.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/new.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/opts.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/opts.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/resetpolicy.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/resetpolicy.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/resources.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/resources.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/rmlabel.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/rmlabel.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/setenforce.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/setenforce.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/setpolicy.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/setpolicy.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/shutdown.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/shutdown.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/tests
dist/install/usr/lib/python2.6/dist-packages/xen/xm/tests/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/tests/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/tests/test_create.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/tests/test_create.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/xenapi_create.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/xenapi_create.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xm/XenAPI.py
dist/install/usr/lib/python2.6/dist-packages/xen/xm/XenAPI.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xsview
dist/install/usr/lib/python2.6/dist-packages/xen/xsview/__init__.py
dist/install/usr/lib/python2.6/dist-packages/xen/xsview/__init__.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xsview/main.py
dist/install/usr/lib/python2.6/dist-packages/xen/xsview/main.pyc
dist/install/usr/lib/python2.6/dist-packages/xen/xsview/xsviewer.py
dist/install/usr/lib/python2.6/dist-packages/xen/xsview/xsviewer.pyc
dist/install/usr/lib/xen
dist/install/usr/lib/xen/bin
dist/install/usr/lib/xen/bin/libxl-save-helper
dist/install/usr/lib/xen/bin/lsevtchn
dist/install/usr/lib/xen/bin/pygrub
dist/install/usr/lib/xen/bin/qemu-dm
dist/install/usr/lib/xen/bin/qemu-ga
dist/install/usr/lib/xen/bin/qemu-img
dist/install/usr/lib/xen/bin/qemu-io
dist/install/usr/lib/xen/bin/qemu-nbd
dist/install/usr/lib/xen/bin/qemu-system-i386
dist/install/usr/lib/xen/bin/readnotes
dist/install/usr/lib/xen/bin/xc_restore
dist/install/usr/lib/xen/bin/xc_save
dist/install/usr/lib/xen/boot
dist/install/usr/lib/xen/boot/hvmloader
dist/install/usr/local
dist/install/usr/local/etc
dist/install/usr/local/etc/qemu
dist/install/usr/local/etc/qemu/target-x86_64.conf
dist/install/usr/local/lib
dist/install/usr/local/lib/ocaml
dist/install/usr/local/lib/ocaml/3.11.2
dist/install/usr/local/lib/ocaml/3.11.2/xenbus
dist/install/usr/local/lib/ocaml/3.11.2/xenbus/dllxenbus_stubs.so
dist/install/usr/local/lib/ocaml/3.11.2/xenbus/libxenbus_stubs.a
dist/install/usr/local/lib/ocaml/3.11.2/xenbus/META
dist/install/usr/local/lib/ocaml/3.11.2/xenbus/xenbus.a
dist/install/usr/local/lib/ocaml/3.11.2/xenbus/xenbus.cma
dist/install/usr/local/lib/ocaml/3.11.2/xenbus/xenbus.cmi
dist/install/usr/local/lib/ocaml/3.11.2/xenbus/xenbus.cmo
dist/install/usr/local/lib/ocaml/3.11.2/xenbus/xenbus.cmx
dist/install/usr/local/lib/ocaml/3.11.2/xenbus/xenbus.cmxa
dist/install/usr/local/lib/ocaml/3.11.2/xenctrl
dist/install/usr/local/lib/ocaml/3.11.2/xenctrl/dllxenctrl_stubs.so
dist/install/usr/local/lib/ocaml/3.11.2/xenctrl/libxenctrl_stubs.a
dist/install/usr/local/lib/ocaml/3.11.2/xenctrl/META
dist/install/usr/local/lib/ocaml/3.11.2/xenctrl/xenctrl.a
dist/install/usr/local/lib/ocaml/3.11.2/xenctrl/xenctrl.cma
dist/install/usr/local/lib/ocaml/3.11.2/xenctrl/xenctrl.cmi
dist/install/usr/local/lib/ocaml/3.11.2/xenctrl/xenctrl.cmx
dist/install/usr/local/lib/ocaml/3.11.2/xenctrl/xenctrl.cmxa
dist/install/usr/local/lib/ocaml/3.11.2/xeneventchn
dist/install/usr/local/lib/ocaml/3.11.2/xeneventchn/dllxeneventchn_stubs.so
dist/install/usr/local/lib/ocaml/3.11.2/xeneventchn/libxeneventchn_stubs.a
dist/install/usr/local/lib/ocaml/3.11.2/xeneventchn/META
dist/install/usr/local/lib/ocaml/3.11.2/xeneventchn/xeneventchn.a
dist/install/usr/local/lib/ocaml/3.11.2/xeneventchn/xeneventchn.cma
dist/install/usr/local/lib/ocaml/3.11.2/xeneventchn/xeneventchn.cmi
dist/install/usr/local/lib/ocaml/3.11.2/xeneventchn/xeneventchn.cmx
dist/install/usr/local/lib/ocaml/3.11.2/xeneventchn/xeneventchn.cmxa
dist/install/usr/local/lib/ocaml/3.11.2/xenlight
dist/install/usr/local/lib/ocaml/3.11.2/xenlight/dllxenlight_stubs.so
dist/install/usr/local/lib/ocaml/3.11.2/xenlight/libxenlight_stubs.a
dist/install/usr/local/lib/ocaml/3.11.2/xenlight/META
dist/install/usr/local/lib/ocaml/3.11.2/xenlight/xenlight.a
dist/install/usr/local/lib/ocaml/3.11.2/xenlight/xenlight.cma
dist/install/usr/local/lib/ocaml/3.11.2/xenlight/xenlight.cmi
dist/install/usr/local/lib/ocaml/3.11.2/xenlight/xenlight.cmx
dist/install/usr/local/lib/ocaml/3.11.2/xenlight/xenlight.cmxa
dist/install/usr/local/lib/ocaml/3.11.2/xenmmap
dist/install/usr/local/lib/ocaml/3.11.2/xenmmap/dllxenmmap_stubs.so
dist/install/usr/local/lib/ocaml/3.11.2/xenmmap/libxenmmap_stubs.a
dist/install/usr/local/lib/ocaml/3.11.2/xenmmap/META
dist/install/usr/local/lib/ocaml/3.11.2/xenmmap/xenmmap.a
dist/install/usr/local/lib/ocaml/3.11.2/xenmmap/xenmmap.cma
dist/install/usr/local/lib/ocaml/3.11.2/xenmmap/xenmmap.cmi
dist/install/usr/local/lib/ocaml/3.11.2/xenmmap/xenmmap.cmx
dist/install/usr/local/lib/ocaml/3.11.2/xenmmap/xenmmap.cmxa
dist/install/usr/local/lib/ocaml/3.11.2/xenstore
dist/install/usr/local/lib/ocaml/3.11.2/xenstore/META
dist/install/usr/local/lib/ocaml/3.11.2/xenstore/xenstore.a
dist/install/usr/local/lib/ocaml/3.11.2/xenstore/xenstore.cma
dist/install/usr/local/lib/ocaml/3.11.2/xenstore/xenstore.cmi
dist/install/usr/local/lib/ocaml/3.11.2/xenstore/xenstore.cmo
dist/install/usr/local/lib/ocaml/3.11.2/xenstore/xenstore.cmx
dist/install/usr/local/lib/ocaml/3.11.2/xenstore/xenstore.cmxa
dist/install/usr/local/share
dist/install/usr/local/share/doc
dist/install/usr/local/share/doc/qemu
dist/install/usr/local/share/doc/qemu/qemu-doc.html
dist/install/usr/local/share/doc/qemu/qemu-tech.html
dist/install/usr/local/share/man
dist/install/usr/local/share/man/man1
dist/install/usr/local/share/man/man1/qemu.1
dist/install/usr/local/share/man/man1/qemu-img.1
dist/install/usr/local/share/man/man8
dist/install/usr/local/share/man/man8/qemu-nbd.8
dist/install/usr/sbin
dist/install/usr/sbin/blktapctrl
dist/install/usr/sbin/flask-get-bool
dist/install/usr/sbin/flask-getenforce
dist/install/usr/sbin/flask-label-pci
dist/install/usr/sbin/flask-loadpolicy
dist/install/usr/sbin/flask-set-bool
dist/install/usr/sbin/flask-setenforce
dist/install/usr/sbin/gdbsx
dist/install/usr/sbin/gtracestat
dist/install/usr/sbin/gtraceview
dist/install/usr/sbin/img2qcow
dist/install/usr/sbin/kdd
dist/install/usr/sbin/lock-util
dist/install/usr/sbin/oxenstored
dist/install/usr/sbin/qcow2raw
dist/install/usr/sbin/qcow-create
dist/install/usr/sbin/tap-ctl
dist/install/usr/sbin/tapdisk
dist/install/usr/sbin/tapdisk2
dist/install/usr/sbin/tapdisk-client
dist/install/usr/sbin/tapdisk-diff
dist/install/usr/sbin/tapdisk-stream
dist/install/usr/sbin/td-util
dist/install/usr/sbin/vhd-update
dist/install/usr/sbin/vhd-util
dist/install/usr/sbin/xl
dist/install/usr/sbin/xsview
dist/install/usr/share
dist/install/usr/share/doc
dist/install/usr/share/man
dist/install/usr/share/man/man1
dist/install/usr/share/man/man8
dist/install/var
dist/install/var/lib
dist/install/var/lock
dist/install/var/lock/subsys
dist/install/var/log
dist/install/var/log/xen
dist/install/var/run
dist/install/var/xen
dist/install/var/xen/dump

--=-Op15fjVHnxmurhd6MvPI
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=-Op15fjVHnxmurhd6MvPI--


From xen-devel-bounces@lists.xen.org Thu Aug 23 08:29:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 08:29:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4Sms-0000ld-O3; Thu, 23 Aug 2012 08:29:10 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <jonnyt@abpni.co.uk>) id 1T4Smq-0000lL-H4
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 08:29:08 +0000
X-Env-Sender: jonnyt@abpni.co.uk
X-Msg-Ref: server-12.tower-27.messagelabs.com!1345710517!8543838!1
X-Originating-IP: [109.200.19.114]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28397 invoked from network); 23 Aug 2012 08:28:37 -0000
Received: from edge1.gosport.uk.abpni.net (HELO
	mail1.gosport.uk.corp.abpni.net) (109.200.19.114)
	by server-12.tower-27.messagelabs.com with SMTP;
	23 Aug 2012 08:28:37 -0000
Received: from localhost (mail1.gosport.corp.uk.abpni.net [127.0.0.1])
	by mail1.gosport.uk.corp.abpni.net (Postfix) with ESMTP id 1EAA85A0007
	for <xen-devel@lists.xen.org>; Thu, 23 Aug 2012 09:28:14 +0100 (BST)
X-Virus-Scanned: Debian amavisd-new at mail1.mail.gosport.corp.uk.abpni.net
Received: from mail1.gosport.uk.corp.abpni.net ([127.0.0.1])
	by localhost (mail1.mail.gosport.corp.uk.abpni.net [127.0.0.1])
	(amavisd-new, port 10024)
	with ESMTP id KE0WYYpYqKk4 for <xen-devel@lists.xen.org>;
	Thu, 23 Aug 2012 09:28:08 +0100 (BST)
Received: from mail.abpni.co.uk (unknown [10.87.17.3])
	by mail1.gosport.uk.corp.abpni.net (Postfix) with ESMTPA id D6B845A0005
	for <xen-devel@lists.xen.org>; Thu, 23 Aug 2012 09:28:08 +0100 (BST)
MIME-Version: 1.0
Date: Thu, 23 Aug 2012 09:28:31 +0100
From: Jonathan Tripathy <jonnyt@abpni.co.uk>
To: <xen-devel@lists.xen.org>
In-Reply-To: <CC5BA46F.49F32%keir@xen.org>
References: <CC5BA46F.49F32%keir@xen.org>
Message-ID: <8f2903e010e47a49a5feb68d916d324b@abpni.co.uk>
X-Sender: jonnyt@abpni.co.uk
User-Agent: Roundcube Webmail/0.6
Subject: Re: [Xen-devel] Can't see more than 3.5GB of RAM / UEFI / no e820
 memory map detected
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On 23.08.2012 09:12, Keir Fraser wrote:
> On 23/08/2012 08:27, "Jonathan Tripathy" <jonnyt@abpni.co.uk> wrote:
>
>> Thanks, Pasi.
>>
>> A couple of questions:
>>
>> I'm guessing xen.efi (from 4.2) just replaces grub??
>>
>> Also, if I were to apply that patch from superuser
>> 
>> (http://serverfault.com/questions/342109/xen-only-sees-512mb-of-system-ram-should-be-8gb-uefi-boot),
>> would it have any bad consequences? I'm very security conscious as
>> the DomUs are untrusted...
>
> Firstly, the same effect *should* be had by adding no-real-mode to 
> your Xen
> command line. So try that first.
>
> Secondly, it is arguable that we should patch Xen to prefer 
> "Multiboot-e820"
> over "Xen-e801".
>
> And yes, overall if you have a UEFI BIOS then using UEFI Xen is 
> probably
> best of all :)
>

Unfortunately, no-real-mode didn't work :(

In xm info, I see

xen_commandline : placeholder no-real-mode

Not sure where placeholder is coming from...

Also, I'm conscious that no-real-mode may cause other problems, as it's 
designed for debugging?

Thanks
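
[For readers wanting to try the no-real-mode suggestion above: appending the option to the Xen line of a GRUB (legacy) menu.lst entry might look like the sketch below. The kernel/initrd paths and versions are illustrative only, not taken from this thread.]

```
title Xen
  root (hd0,0)
  kernel /boot/xen.gz no-real-mode
  module /boot/vmlinuz-3.2.0-xen console=hvc0
  module /boot/initrd.img-3.2.0-xen
```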


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 08:41:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 08:41:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4SyF-0000zo-Vp; Thu, 23 Aug 2012 08:40:55 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T4SyE-0000zj-Vi
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 08:40:55 +0000
Received: from [85.158.143.35:63381] by server-1.bemta-4.messagelabs.com id
	0F/9A-12504-69CE5305; Thu, 23 Aug 2012 08:40:54 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1345711251!13319507!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTAzMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10801 invoked from network); 23 Aug 2012 08:40:52 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 08:40:52 -0000
X-IronPort-AV: E=Sophos;i="4.80,299,1344211200"; d="scan'208";a="14140789"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	23 Aug 2012 08:40:51 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 23 Aug 2012 09:40:52 +0100
Message-ID: <1345711250.12501.58.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Paul Durrant <paul.durrant@citrix.com>
Date: Thu, 23 Aug 2012 09:40:50 +0100
In-Reply-To: <4b1f399193f5e363c2b4.1345564497@cosworth.uk.xensource.com>
References: <4b1f399193f5e363c2b4.1345564497@cosworth.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Keir Fraser <keir@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Remove VM generation ID device and
 incr_generationid from build_info
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-08-21 at 16:54 +0100, Paul Durrant wrote:
> # HG changeset patch
> # User Paul Durrant <paul.durrant@citrix.com>
> # Date 1345564202 -3600
> # Node ID 4b1f399193f5e363c2b47a3079ac4d3f61ee9a8f
> # Parent  6d56e31fe1e1dc793379d662a36ff1731760eb0c
> Remove VM generation ID device and incr_generationid from build_info.
> 
> Microsoft have now published their VM generation ID specification at
> https://www.microsoft.com/en-us/download/details.aspx?id=30707.
> It differs from the original specification upon which I based my
> implementation in several key areas. In particular, it is no longer
> an incrementing 64-bit counter, so this patch removes the
> incr_generationid field from build_info and also disables the
> ACPI device before 4.2 is released.
> 
> I will follow up with further patches to implement the VM generation
> ID to the new specification.
> 
> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>

Tools part looks ok to me:
Acked-by: Ian Campbell <ian.campbell@citrix.com>

The only thing I would change is to add a comment to the "1" parameter to
libxl__xc_domain_restore, but I can do that as I commit.

If Keir is happy with the hvmloader change then I'll commit.

> 
> diff -r 6d56e31fe1e1 -r 4b1f399193f5 tools/firmware/hvmloader/acpi/dsdt.asl
> --- a/tools/firmware/hvmloader/acpi/dsdt.asl	Wed Aug 15 09:41:21 2012 +0100
> +++ b/tools/firmware/hvmloader/acpi/dsdt.asl	Tue Aug 21 16:50:02 2012 +0100
> @@ -398,6 +398,7 @@ DefinitionBlock ("DSDT.aml", "DSDT", 2, 
>                      })
>                  } 
>  
> +                /*
>                  Device(VGID) {
>                      Name(_HID, EisaID ("XEN0000"))
>                      Name(_UID, 0x00)
> @@ -422,6 +423,7 @@ DefinitionBlock ("DSDT.aml", "DSDT", 2, 
>                          Return(PKG)
>                      }
>                  }
> +                */
>              }
>          }
>      }
> diff -r 6d56e31fe1e1 -r 4b1f399193f5 tools/libxl/libxl_create.c
> --- a/tools/libxl/libxl_create.c	Wed Aug 15 09:41:21 2012 +0100
> +++ b/tools/libxl/libxl_create.c	Tue Aug 21 16:50:02 2012 +0100
> @@ -248,7 +248,6 @@ int libxl__domain_build_info_setdefault(
>          libxl_defbool_setdefault(&b_info->u.hvm.hpet,               true);
>          libxl_defbool_setdefault(&b_info->u.hvm.vpt_align,          true);
>          libxl_defbool_setdefault(&b_info->u.hvm.nested_hvm,         false);
> -        libxl_defbool_setdefault(&b_info->u.hvm.incr_generationid,  false);
>          libxl_defbool_setdefault(&b_info->u.hvm.usb,                false);
>          libxl_defbool_setdefault(&b_info->u.hvm.xen_platform_pci,   true);
>  
> @@ -758,27 +757,24 @@ static void domcreate_bootloader_done(li
>  
>      /* read signature */
>      int hvm, pae, superpages;
> -    int no_incr_generationid;
>      switch (info->type) {
>      case LIBXL_DOMAIN_TYPE_HVM:
>          hvm = 1;
>          superpages = 1;
>          pae = libxl_defbool_val(info->u.hvm.pae);
> -        no_incr_generationid = !libxl_defbool_val(info->u.hvm.incr_generationid);
>          callbacks->toolstack_restore = libxl__toolstack_restore;
>          break;
>      case LIBXL_DOMAIN_TYPE_PV:
>          hvm = 0;
>          superpages = 0;
>          pae = 1;
> -        no_incr_generationid = 0;
>          break;
>      default:
>          rc = ERROR_INVAL;
>          goto out;
>      }
>      libxl__xc_domain_restore(egc, dcs,
> -                             hvm, pae, superpages, no_incr_generationid);
> +                             hvm, pae, superpages, 1);
>      return;
>  
>   out:
> diff -r 6d56e31fe1e1 -r 4b1f399193f5 tools/libxl/libxl_types.idl
> --- a/tools/libxl/libxl_types.idl	Wed Aug 15 09:41:21 2012 +0100
> +++ b/tools/libxl/libxl_types.idl	Tue Aug 21 16:50:02 2012 +0100
> @@ -292,7 +292,6 @@ libxl_domain_build_info = Struct("domain
>                                         ("vpt_align",        libxl_defbool),
>                                         ("timer_mode",       libxl_timer_mode),
>                                         ("nested_hvm",       libxl_defbool),
> -                                       ("incr_generationid",libxl_defbool),
>                                         ("nographic",        libxl_defbool),
>                                         ("vga",              libxl_vga_interface_info),
>                                         ("vnc",              libxl_vnc_info),
> diff -r 6d56e31fe1e1 -r 4b1f399193f5 tools/libxl/xl_cmdimpl.c
> --- a/tools/libxl/xl_cmdimpl.c	Wed Aug 15 09:41:21 2012 +0100
> +++ b/tools/libxl/xl_cmdimpl.c	Tue Aug 21 16:50:02 2012 +0100
> @@ -139,7 +139,6 @@ struct domain_create {
>      const char *restore_file;
>      int migrate_fd; /* -1 means none */
>      char **migration_domname_r; /* from malloc */
> -    int incr_generationid;
>  };
>  
> 
> @@ -1759,10 +1758,6 @@ static int create_domain(struct domain_c
>          }
>      }
>  
> -    if (d_config.c_info.type == LIBXL_DOMAIN_TYPE_HVM)
> -        libxl_defbool_set(&d_config.b_info.u.hvm.incr_generationid,
> -                          dom_info->incr_generationid);
> -
>      if (debug || dom_info->dryrun)
>          printf_info(default_output_format, -1, &d_config);
>  
> @@ -3183,7 +3178,6 @@ static void migrate_receive(int debug, i
>      dom_info.paused = 1;
>      dom_info.migrate_fd = recv_fd;
>      dom_info.migration_domname_r = &migration_domname;
> -    dom_info.incr_generationid = 0;
>  
>      rc = create_domain(&dom_info);
>      if (rc < 0) {
> @@ -3364,7 +3358,6 @@ int main_restore(int argc, char **argv)
>      dom_info.vnc = vnc;
>      dom_info.vncautopass = vncautopass;
>      dom_info.console_autoconnect = console_autoconnect;
> -    dom_info.incr_generationid = 1;
>  
>      rc = create_domain(&dom_info);
>      if (rc < 0)
> @@ -3766,7 +3759,6 @@ int main_create(int argc, char **argv)
>      dom_info.vnc = vnc;
>      dom_info.vncautopass = vncautopass;
>      dom_info.console_autoconnect = console_autoconnect;
> -    dom_info.incr_generationid = 0;
>  
>      rc = create_domain(&dom_info);
>      if (rc < 0)
> diff -r 6d56e31fe1e1 -r 4b1f399193f5 tools/libxl/xl_sxp.c
> --- a/tools/libxl/xl_sxp.c	Wed Aug 15 09:41:21 2012 +0100
> +++ b/tools/libxl/xl_sxp.c	Tue Aug 21 16:50:02 2012 +0100
> @@ -108,8 +108,6 @@ void printf_info_sexp(int domid, libxl_d
>                 libxl_timer_mode_to_string(b_info->u.hvm.timer_mode));
>          printf("\t\t\t(nestedhvm %s)\n",
>                 libxl_defbool_to_string(b_info->u.hvm.nested_hvm));
> -        printf("\t\t\t(no_incr_generationid %s)\n",
> -               libxl_defbool_to_string(b_info->u.hvm.incr_generationid));
>          printf("\t\t\t(stdvga %s)\n", b_info->u.hvm.vga.kind ==
>                                        LIBXL_VGA_INTERFACE_TYPE_STD ?
>                                        "True" : "False");
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 08:44:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 08:44:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4T0w-00015Z-I6; Thu, 23 Aug 2012 08:43:42 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1T4T0v-00015T-B5
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 08:43:41 +0000
Received: from [85.158.143.99:34152] by server-3.bemta-4.messagelabs.com id
	9B/BA-08232-C3DE5305; Thu, 23 Aug 2012 08:43:40 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-2.tower-216.messagelabs.com!1345711419!22187428!1
X-Originating-IP: [74.125.83.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28490 invoked from network); 23 Aug 2012 08:43:40 -0000
Received: from mail-ee0-f45.google.com (HELO mail-ee0-f45.google.com)
	(74.125.83.45)
	by server-2.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 08:43:40 -0000
Received: by eeke53 with SMTP id e53so156030eek.32
	for <xen-devel@lists.xen.org>; Thu, 23 Aug 2012 01:43:39 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:user-agent:date:subject:from:to:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=ToHynQowe0jLrPrxnwpfdAXD0FqtfjUL3UDfVoOfesg=;
	b=c0uu+I3/gJjiEzDymz6GTpHCjf3zFHidvoQ+RwNf6h8E2/5dRb/OgOUK7hnIQioOqI
	gdZyOX9QO0bBJ0cXwsmYd/kp+2IDd4699xLHYJB3l/x/BN75lNvQ4ptfCSrUQIAV5VBS
	K34/0c58uAj2aVxVGzv8i3uu8F1IX9BIIJWjPS1FJ2VGzRS+ABel4DDNeW/hy6UC0ycc
	p5WeFNBnjgKEJ4bqglVXV/uXR7CZGf3D0wqqJ6K5fbaWx5oD+VpKmnWGtjnVa05MeERv
	UHCZqmgQz8p8o519TWZysy1P6JIYt9IY2+BImLbDyor8D4roevKGrk20XQlN7e8UnDfK
	uBgw==
Received: by 10.14.182.9 with SMTP id n9mr989665eem.6.1345711419740;
	Thu, 23 Aug 2012 01:43:39 -0700 (PDT)
Received: from [192.168.1.3] (host86-184-74-117.range86-184.btcentralplus.com.
	[86.184.74.117])
	by mx.google.com with ESMTPS id k49sm19490760een.4.2012.08.23.01.43.38
	(version=SSLv3 cipher=OTHER); Thu, 23 Aug 2012 01:43:38 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.33.0.120411
Date: Thu, 23 Aug 2012 09:43:35 +0100
From: Keir Fraser <keir@xen.org>
To: Jonathan Tripathy <jonnyt@abpni.co.uk>,
	<xen-devel@lists.xen.org>
Message-ID: <CC5BABC7.49F42%keir@xen.org>
Thread-Topic: [Xen-devel] Can't see more than 3.5GB of RAM / UEFI / no e820
	memory map detected
Thread-Index: Ac2BC2Hb83+ygNg12U2gHYQ/WdYZzg==
In-Reply-To: <8f2903e010e47a49a5feb68d916d324b@abpni.co.uk>
Mime-version: 1.0
Subject: Re: [Xen-devel] Can't see more than 3.5GB of RAM / UEFI / no e820
 memory map detected
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 23/08/2012 09:28, "Jonathan Tripathy" <jonnyt@abpni.co.uk> wrote:

> Unfortunately, no-real-mode didn't work :(
> 
> In xm info, I see
> 
> xen_commandline : placeholder no-real-mode
> 
> Not sure where placeholder is coming from...
> 
> Also, I'm conscious that no-real-mode may cause other problems as it's
> designed for debugging?

What does your RAM map look like now from early Xen boot, using
no-real-mode? It shouldn't be "Xen-e801" any more at least, else the
no-real-mode parameter isn't working.
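
Keir's check can be done from dom0 by grepping the hypervisor's early boot
log for the RAM map marker. A minimal sketch (the sample lines and the file
path are illustrative; on a live system of this era the input would come
from `xm dmesg` or `xl dmesg`):

```shell
# Illustrative early-boot lines in the format Xen prints them:
cat > /tmp/xen-dmesg.txt <<'EOF'
(XEN) Xen-e820 RAM map:
(XEN)  0000000000000000 - 000000000009fc00 (usable)
EOF

# "Xen-e820" means the full e820 map was obtained; "Xen-e801" would
# indicate the legacy fallback discussed in this thread.
grep -o 'Xen-e8[0-9][0-9]' /tmp/xen-dmesg.txt
```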

The installer probably added 'placeholder' because older versions of Xen
skipped the first argument (on old GRUB that was the path/to/grub rather
than a real argument).

 -- Keir



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 08:49:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 08:49:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4T6b-0001JY-El; Thu, 23 Aug 2012 08:49:33 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T4T6Z-0001JT-UW
	for xen-devel@lists.xensource.com; Thu, 23 Aug 2012 08:49:32 +0000
Received: from [85.158.143.99:12529] by server-1.bemta-4.messagelabs.com id
	76/9B-12504-B9EE5305; Thu, 23 Aug 2012 08:49:31 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-216.messagelabs.com!1345711770!27501133!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTAzMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8313 invoked from network); 23 Aug 2012 08:49:30 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 08:49:30 -0000
X-IronPort-AV: E=Sophos;i="4.80,299,1344211200"; d="scan'208";a="14141002"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	23 Aug 2012 08:48:53 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 23 Aug 2012 09:48:53 +0100
Message-ID: <1345711732.12501.62.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: David Vrabel <david.vrabel@citrix.com>
Date: Thu, 23 Aug 2012 09:48:52 +0100
In-Reply-To: <1345566265-11618-1-git-send-email-david.vrabel@citrix.com>
References: <1345566265-11618-1-git-send-email-david.vrabel@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [PATCH] xenconsoled: clean-up after all dead domains
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-08-21 at 17:24 +0100, David Vrabel wrote:
> From: David Vrabel <david.vrabel@citrix.com>
> 
> xenconsoled expected domains that are being shut down to end up in
> the DYING state and would only clean up such domains.  HVM domains
> either didn't enter the DYING state or weren't in it long enough for
> xenconsoled to notice.
> 
> For every shutdown HVM domain, xenconsoled would leak memory, grow its
> list of domains and (if guest console logging was enabled) leak the
> log file descriptor.  If the file descriptors were leaked and enough
> HVM domains were shutdown, no more console connections would work as
> the evtchn device could not be opened.  Guests would then block
> waiting to send console output.
> 
> Fix this by tagging domains that exist in enum_domains().  Afterwards,
> all untagged domains are assumed to be dead and are shut down and
> cleaned up.
> 
> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
> ---
>  tools/console/daemon/io.c |   12 +++++++++++-
>  1 files changed, 11 insertions(+), 1 deletions(-)
> 
> diff --git a/tools/console/daemon/io.c b/tools/console/daemon/io.c
> index f09d63a..592085f 100644
> --- a/tools/console/daemon/io.c
> +++ b/tools/console/daemon/io.c
> @@ -84,6 +84,7 @@ struct domain {
>  	int slave_fd;
>  	int log_fd;
>  	bool is_dead;
> +	unsigned last_seen;
>  	struct buffer buffer;
>  	struct domain *next;
>  	char *conspath;
> @@ -727,12 +728,16 @@ static void shutdown_domain(struct domain *d)
>  	d->xce_handle = NULL;
>  }
>  
> +static unsigned enum_pass = 0;
> +
>  void enum_domains(void)
>  {
>  	int domid = 1;
>  	xc_dominfo_t dominfo;
>  	struct domain *dom;
>  
> +	enum_pass++;
> +
>  	while (xc_domain_getinfo(xc, domid, 1, &dominfo) == 1) {
>  		dom = lookup_domain(dominfo.domid);
>  		if (dominfo.dying) {
> @@ -740,8 +745,10 @@ void enum_domains(void)
>  				shutdown_domain(dom);
>  		} else {
>  			if (dom == NULL)
> -				create_domain(dominfo.domid);
> +				dom = create_domain(dominfo.domid);
>  		}
> +		if (dom)
> +			dom->last_seen = enum_pass;

Dereferencing dom here is safe even if you took the
dominfo.dying => shutdown_domain path above, because that doesn't free
dom; it sets is_dead, which is picked up by the cleanup_domain below.

Setting last_seen in this case avoids calling shutdown_domain twice,
which is good, if a little subtle.

I also had a look for threading, since that would cause this to break
down, but this function is only ever called from the same thread that
runs handle_io. I was concerned because xs watches are involved, but
it's OK.

So:
Acked-by: Ian Campbell <ian.campbell@citrix.com>

>  		domid = dominfo.domid + 1;
>  	}
>  }
> @@ -1068,6 +1075,9 @@ void handle_io(void)
>  			if (d->master_fd != -1 && FD_ISSET(d->master_fd,
>  							   &writefds))
>  				handle_tty_write(d);
> +			
> +			if (d->last_seen != enum_pass)
> +				shutdown_domain(d);
>  
>  			if (d->is_dead)
>  				cleanup_domain(d);
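
The patch above is an instance of the generation-counter ("tag and sweep")
pattern: bump a pass counter, tag everything still present, then reap
whatever was not tagged this pass. A minimal standalone sketch of that
pattern (the node type and helpers are illustrative, not the actual
xenconsoled structures):

```c
/* Generation-counter cleanup sketch: nodes not tagged on the current
 * enumeration pass are considered dead and are freed on the sweep. */
#include <stdlib.h>

struct node {
    int id;
    unsigned last_seen;   /* pass on which this node was last enumerated */
    struct node *next;
};

unsigned enum_pass;       /* bumped once per enumeration, as in the patch */
struct node *head;

void add_node(int id)
{
    struct node *n = malloc(sizeof(*n));
    n->id = id;
    n->last_seen = 0;
    n->next = head;
    head = n;
}

/* Mark phase: tag each node whose id is still in the live set. */
void enumerate(const int *alive, int nalive)
{
    enum_pass++;
    for (int i = 0; i < nalive; i++)
        for (struct node *n = head; n; n = n->next)
            if (n->id == alive[i])
                n->last_seen = enum_pass;
}

/* Sweep phase: unlink and free every node not tagged this pass. */
void sweep(void)
{
    struct node **pp = &head;
    while (*pp) {
        struct node *n = *pp;
        if (n->last_seen != enum_pass) {
            *pp = n->next;
            free(n);
        } else {
            pp = &n->next;
        }
    }
}
```

With nodes 1, 2 and 3 added and only {1, 3} reported alive, one
enumerate() plus sweep() leaves exactly nodes 3 and 1 on the list,
mirroring how handle_io() reaps domains whose last_seen lags enum_pass.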



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 08:50:22 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 08:50:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4T7C-0001Lx-SY; Thu, 23 Aug 2012 08:50:10 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Paul.Durrant@citrix.com>) id 1T4T7A-0001Lj-Kz
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 08:50:08 +0000
Received: from [85.158.143.35:22737] by server-1.bemta-4.messagelabs.com id
	7D/CC-12504-FBEE5305; Thu, 23 Aug 2012 08:50:07 +0000
X-Env-Sender: Paul.Durrant@citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1345711777!11753770!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA0MjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31656 invoked from network); 23 Aug 2012 08:49:38 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 08:49:38 -0000
X-IronPort-AV: E=Sophos;i="4.80,299,1344211200"; d="scan'208";a="14141018"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	23 Aug 2012 08:49:15 +0000
Received: from LONPMAILBOX01.citrite.net ([10.30.224.160]) by
	LONPMAILMX01.citrite.net ([10.30.203.162]) with mapi; Thu, 23 Aug 2012
	09:49:15 +0100
From: Paul Durrant <Paul.Durrant@citrix.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Date: Thu, 23 Aug 2012 09:49:48 +0100
Thread-Topic: [Xen-devel] [PATCH] Remove VM genearation ID device and
	incr_generationid from build_info
Thread-Index: Ac2BCwCPGZ1LVVzRQ/2JIjqIH7NxxAAARxcw
Message-ID: <291EDFCB1E9E224A99088639C4762022DD824DE688@LONPMAILBOX01.citrite.net>
References: <4b1f399193f5e363c2b4.1345564497@cosworth.uk.xensource.com>
	<1345711250.12501.58.camel@zakaz.uk.xensource.com>
In-Reply-To: <1345711250.12501.58.camel@zakaz.uk.xensource.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
MIME-Version: 1.0
Cc: "Keir \(Xen.org\)" <keir@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Remove VM genearation ID device and
 incr_generationid from build_info
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Ian Campbell
> Sent: 23 August 2012 09:41
> To: Paul Durrant
> Cc: xen-devel@lists.xen.org; Keir (Xen.org)
> Subject: Re: [Xen-devel] [PATCH] Remove VM genearation ID device and
> incr_generationid from build_info
> 
> On Tue, 2012-08-21 at 16:54 +0100, Paul Durrant wrote:
> > # HG changeset patch
> > # User Paul Durrant <paul.durrant@citrix.com> # Date 1345564202 -3600
> > # Node ID 4b1f399193f5e363c2b47a3079ac4d3f61ee9a8f
> > # Parent  6d56e31fe1e1dc793379d662a36ff1731760eb0c
> > Remove VM genearation ID device and incr_generationid from build_info.
> >
> > Microsoft have now published their VM generation ID specification at
> > https://www.microsoft.com/en-us/download/details.aspx?id=30707.
> > It differs from the original specification upon which I based my
> > implementation in several key areas. Particularly, it is no longer an
> > incrementing 64-bit counter and so this patch is to remove the
> > incr_generationid field from the build_info and also disable the ACPI
> > device before 4.2 is released.
> >
> > I will follow up with further patches to implement the VM generation
> > ID to the new specification.
> >
> > Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> 
> Tools part looks ok to me:
> Acked-by: Ian Campbell <ian.campbell@citrix.com>
> 
> Only thing I would change is to add a comment to the "1" parameter to
> libxl__xc_domain_restore, but I can do that as I commit.
>

Sure. That sounds like a good idea.
 
> If Keir is happy with the hvmloader change then I'll commit.
> 

Thanks,

  Paul


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 08:51:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 08:51:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4T8E-0001SI-Bu; Thu, 23 Aug 2012 08:51:14 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T4T8C-0001S6-SW
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 08:51:12 +0000
Received: from [85.158.143.35:28963] by server-2.bemta-4.messagelabs.com id
	63/85-21239-00FE5305; Thu, 23 Aug 2012 08:51:12 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1345711777!11753770!2
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTAzMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31772 invoked from network); 23 Aug 2012 08:49:40 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 08:49:40 -0000
X-IronPort-AV: E=Sophos;i="4.80,299,1344211200"; d="scan'208";a="14141031"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	23 Aug 2012 08:49:37 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 23 Aug 2012 09:49:37 +0100
Message-ID: <1345711776.12501.63.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Keir Fraser <keir@xen.org>
Date: Thu, 23 Aug 2012 09:49:36 +0100
In-Reply-To: <CC5BABC7.49F42%keir@xen.org>
References: <CC5BABC7.49F42%keir@xen.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Jonathan Tripathy <jonnyt@abpni.co.uk>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Can't see more than 3.5GB of RAM / UEFI / no e820
 memory map detected
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-23 at 09:43 +0100, Keir Fraser wrote:
> 
> The installer probably added 'placeholder' because older versions of
> Xen skipped the first argument (because on old GRUB that was the
> path/to/grub rather than a real argument). 

grub2's update-grub adds this.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 08:59:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 08:59:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4TFZ-0001kA-9b; Thu, 23 Aug 2012 08:58:49 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jonnyt@abpni.co.uk>) id 1T4TFY-0001k3-D2
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 08:58:48 +0000
Received: from [85.158.143.35:18103] by server-2.bemta-4.messagelabs.com id
	9C/93-21239-7C0F5305; Thu, 23 Aug 2012 08:58:47 +0000
X-Env-Sender: jonnyt@abpni.co.uk
X-Msg-Ref: server-6.tower-21.messagelabs.com!1345712322!15735576!1
X-Originating-IP: [109.200.19.114]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6396 invoked from network); 23 Aug 2012 08:58:42 -0000
Received: from edge1.gosport.uk.abpni.net (HELO
	mail1.gosport.uk.corp.abpni.net) (109.200.19.114)
	by server-6.tower-21.messagelabs.com with SMTP;
	23 Aug 2012 08:58:42 -0000
Received: from localhost (mail1.gosport.corp.uk.abpni.net [127.0.0.1])
	by mail1.gosport.uk.corp.abpni.net (Postfix) with ESMTP id B76855A0007
	for <xen-devel@lists.xen.org>; Thu, 23 Aug 2012 09:58:19 +0100 (BST)
X-Virus-Scanned: Debian amavisd-new at mail1.mail.gosport.corp.uk.abpni.net
Received: from mail1.gosport.uk.corp.abpni.net ([127.0.0.1])
	by localhost (mail1.mail.gosport.corp.uk.abpni.net [127.0.0.1])
	(amavisd-new, port 10024)
	with ESMTP id ffWHR0AssQOd for <xen-devel@lists.xen.org>;
	Thu, 23 Aug 2012 09:58:19 +0100 (BST)
Received: from mail.abpni.co.uk (unknown [10.87.17.3])
	by mail1.gosport.uk.corp.abpni.net (Postfix) with ESMTPA id 21ABE5A0005;
	Thu, 23 Aug 2012 09:58:18 +0100 (BST)
MIME-Version: 1.0
Date: Thu, 23 Aug 2012 09:58:40 +0100
From: Jonathan Tripathy <jonnyt@abpni.co.uk>
To: Keir Fraser <keir@xen.org>
In-Reply-To: <CC5BABC7.49F42%keir@xen.org>
References: <CC5BABC7.49F42%keir@xen.org>
Message-ID: <88d859669c14bde83cb1ca3ed7b4d455@abpni.co.uk>
X-Sender: jonnyt@abpni.co.uk
User-Agent: Roundcube Webmail/0.6
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Can't see more than 3.5GB of RAM / UEFI / no e820
 memory map detected
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


>
> What does your RAM map look like now from early Xen boot, using
> no-real-mode? It shouldn't be "Xen-e801" any more at least, else the
> no-real-mode parameter isn't working.
>

Still Xen-e801, so it looks like no-real-mode isn't working :(

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 09:13:47 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 09:13:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4TTp-00021B-2o; Thu, 23 Aug 2012 09:13:33 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1T4TTo-000216-2m
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 09:13:32 +0000
Received: from [85.158.143.99:54083] by server-2.bemta-4.messagelabs.com id
	4A/43-21239-B34F5305; Thu, 23 Aug 2012 09:13:31 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-13.tower-216.messagelabs.com!1345713208!26532252!1
X-Originating-IP: [192.89.123.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjg5LjEyMy4yNSA9PiA0ODk1MTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29894 invoked from network); 23 Aug 2012 09:13:29 -0000
Received: from smtp.tele.fi (HELO smtp.tele.fi) (192.89.123.25)
	by server-13.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 23 Aug 2012 09:13:29 -0000
X-Originating-Ip: [194.89.68.22]
Received: from ydin.reaktio.net (reaktio.net [194.89.68.22])
	by smtp.tele.fi (Postfix) with ESMTP id 276DD225C;
	Thu, 23 Aug 2012 12:13:27 +0300 (EEST)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id 861892005D; Thu, 23 Aug 2012 12:13:26 +0300 (EEST)
Date: Thu, 23 Aug 2012 12:13:26 +0300
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: Jonathan Tripathy <jonnyt@abpni.co.uk>
Message-ID: <20120823091326.GG19851@reaktio.net>
References: <CC5BA46F.49F32%keir@xen.org>
	<8f2903e010e47a49a5feb68d916d324b@abpni.co.uk>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <8f2903e010e47a49a5feb68d916d324b@abpni.co.uk>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Can't see more than 3.5GB of RAM / UEFI / no e820
 memory map detected
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Aug 23, 2012 at 09:28:31AM +0100, Jonathan Tripathy wrote:
> 
> 
> On 23.08.2012 09:12, Keir Fraser wrote:
> >On 23/08/2012 08:27, "Jonathan Tripathy" <jonnyt@abpni.co.uk> wrote:
> >
> >>Thanks, Pasi.
> >>
> >>A couple of questions:
> >>
> >>I'm guessing xen.efi (from 4.2) just replaces grub??
> >>
> >>Also, if I were to apply that patch from superuser
> >>
> >>(http://serverfault.com/questions/342109/xen-only-sees-512mb-of-system-ram-sho
> >>uld-be-8gb-uefi-boot),
> >>would it have any bad consequences? I'm very security
> >>conscious as
> >>the DomUs are untrusted...
> >
> >Firstly, the same effect *should* be had by adding no-real-mode to
> >your Xen
> >command line. So try that first.
> >
> >Secondly, it is arguable that we should patch Xen to prefer
> >"Multiboot-e820"
> >over "Xen-e801".
> >
> >And yes, overall if you have a UEFI BIOS then using UEFI Xen is
> >probably
> >best of all :)
> >
> 
> Unfortunately, no-real-mode didn't work :(
> 
> In xm info, I see
> 
> xen_commandline : placeholder no-real-mode
> 
> Not sure where placeholder is coming from...
> 

"placeholder" is usually added by grub in Debian/Ubuntu,
to overcome a bug/issue with multiboot and (old versions of) Xen.. 

(so in some old Xen versions with grub2 multiboot the first cmdline parameter 
got lost, but that hasn't really been the case for a long time anymore,
so placeholder is not needed these days.)


-- Pasi


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 09:42:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 09:42:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4TvU-0002Ea-Jz; Thu, 23 Aug 2012 09:42:08 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <attilio.rao@citrix.com>) id 1T4TvS-0002EV-UU
	for xen-devel@lists.xensource.com; Thu, 23 Aug 2012 09:42:07 +0000
Received: from [85.158.139.83:16082] by server-9.bemta-5.messagelabs.com id
	31/B4-26123-EEAF5305; Thu, 23 Aug 2012 09:42:06 +0000
X-Env-Sender: attilio.rao@citrix.com
X-Msg-Ref: server-10.tower-182.messagelabs.com!1345714923!27716192!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA0MjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31090 invoked from network); 23 Aug 2012 09:42:04 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-10.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 09:42:04 -0000
X-IronPort-AV: E=Sophos;i="4.80,299,1344211200"; d="scan'208";a="14142447"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	23 Aug 2012 09:42:03 +0000
Received: from [10.80.3.221] (10.80.3.221) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 23 Aug 2012 10:42:03 +0100
Message-ID: <5035F7BD.6090205@citrix.com>
Date: Thu, 23 Aug 2012 10:28:29 +0100
From: Attilio Rao <attilio.rao@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US;
	rv:1.9.1.16) Gecko/20111110 Icedove/3.0.11
MIME-Version: 1.0
To: Thomas Gleixner <tglx@linutronix.de>
References: <1345580561-8506-1-git-send-email-attilio.rao@citrix.com>	<alpine.LFD.2.02.1208212315330.2856@ionos>	<20120822135753.GA30964@phenom.dumpdata.com>	<alpine.LFD.2.02.1208221618380.2856@ionos>
	<5034F0FD.8040902@citrix.com>
In-Reply-To: <5034F0FD.8040902@citrix.com>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"x86@kernel.org" <x86@kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"mingo@redhat.com" <mingo@redhat.com>, "hpa@zytor.com" <hpa@zytor.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH v2 0/5] X86/XEN: Merge
 x86_init.paging.pagetable_setup_start and
 x86_init.paging.pagetable_setup_done setup functions and document its
 semantic
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 22/08/12 15:47, Attilio Rao wrote:
> On 22/08/12 15:19, Thomas Gleixner wrote:
>    
>> On Wed, 22 Aug 2012, Konrad Rzeszutek Wilk wrote:
>>
>>      
>>> On Tue, Aug 21, 2012 at 11:22:03PM +0200, Thomas Gleixner wrote:
>>>
>>>        
>>>> On Tue, 21 Aug 2012, Attilio Rao wrote:
>>>>
>>>>          
>>>>> Differences with v1:
>>>>> - The patch series is re-arranged in a way that helps reviews, following
>>>>>     a plan by Thomas Gleixner
>>>>> - The PVOPS nomenclature is not used as it is not correct
>>>>> - The front-end message is adjusted with feedback by Thomas Gleixner,
>>>>>     Stefano Stabellini and Konrad Rzeszutek Wilk
>>>>>
>>>>>            
>>>> This is much simpler to read and review. Just have a look at the
>>>> diffstats of the two series:
>>>>
>>>>    6 files changed,  9 insertions(+),  8 deletions(-)
>>>>    6 files changed, 11 insertions(+),  9 deletions(-)
>>>>    5 files changed, 50 insertions(+),  2 deletions(-)
>>>>    6 files changed,  2 insertions(+), 65 deletions(-)
>>>>    1 files changed,  5 insertions(+),  0 deletions(-)
>>>>
>>>> versus
>>>>
>>>>    6 files changed, 10 insertions(+),  9 deletions(-)
>>>>    6 files changed, 11 insertions(+), 11 deletions(-)
>>>>    5 files changed,  3 insertions(+),  3 deletions(-)
>>>>    6 files changed,  4 insertions(+), 20 deletions(-)
>>>>    1 files changed,  5 insertions(+),  0 deletions(-)
>>>>
>>>> The overall result is basically the same, but it's way simpler to look
>>>> at obvious and well done patches than checking whether a subtle copy
>>>> and paste bug happened in 3/5 of the first version. Copy and paste is
>>>> the #1 cause for subtle bugs. :)
>>>>
>>>> I'm waiting for the ack of Xen folks before taking it into tip.
>>>>
>>>>          
>>> I've some extra patches that modify the new "paginig_init" in the Xen
>>> code that I am going to propose for v3.7 - so will have some merge
>>> conflicts. Let me figure that out and also run this set of patches
>>> (and also the previous one .. which I think you didn't have a
>>> chance to look since you were on vacation?) on an overnight
>>>
>>>        
>> Which previous one ?
>>
>>      
> This one:
> https://lkml.org/lkml/2012/8/21/369
>
> but I would like to repost the patch series skipping the references to
> PVOPS in the commit logs; I will do so right now, so please wait for
> the new patch series.
>    

For your convenience:
http://lkml.org/lkml/2012/8/22/450

Attilio

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 09:42:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 09:42:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4TvU-0002Ea-Jz; Thu, 23 Aug 2012 09:42:08 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <attilio.rao@citrix.com>) id 1T4TvS-0002EV-UU
	for xen-devel@lists.xensource.com; Thu, 23 Aug 2012 09:42:07 +0000
Received: from [85.158.139.83:16082] by server-9.bemta-5.messagelabs.com id
	31/B4-26123-EEAF5305; Thu, 23 Aug 2012 09:42:06 +0000
X-Env-Sender: attilio.rao@citrix.com
X-Msg-Ref: server-10.tower-182.messagelabs.com!1345714923!27716192!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA0MjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31090 invoked from network); 23 Aug 2012 09:42:04 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-10.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 09:42:04 -0000
X-IronPort-AV: E=Sophos;i="4.80,299,1344211200"; d="scan'208";a="14142447"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	23 Aug 2012 09:42:03 +0000
Received: from [10.80.3.221] (10.80.3.221) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 23 Aug 2012 10:42:03 +0100
Message-ID: <5035F7BD.6090205@citrix.com>
Date: Thu, 23 Aug 2012 10:28:29 +0100
From: Attilio Rao <attilio.rao@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US;
	rv:1.9.1.16) Gecko/20111110 Icedove/3.0.11
MIME-Version: 1.0
To: Thomas Gleixner <tglx@linutronix.de>
References: <1345580561-8506-1-git-send-email-attilio.rao@citrix.com>	<alpine.LFD.2.02.1208212315330.2856@ionos>	<20120822135753.GA30964@phenom.dumpdata.com>	<alpine.LFD.2.02.1208221618380.2856@ionos>
	<5034F0FD.8040902@citrix.com>
In-Reply-To: <5034F0FD.8040902@citrix.com>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"x86@kernel.org" <x86@kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"mingo@redhat.com" <mingo@redhat.com>, "hpa@zytor.com" <hpa@zytor.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH v2 0/5] X86/XEN: Merge
 x86_init.paging.pagetable_setup_start and
 x86_init.paging.pagetable_setup_done setup functions and document its
 semantic
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 22/08/12 15:47, Attilio Rao wrote:
> On 22/08/12 15:19, Thomas Gleixner wrote:
>    
>> On Wed, 22 Aug 2012, Konrad Rzeszutek Wilk wrote:
>>
>>      
>>> On Tue, Aug 21, 2012 at 11:22:03PM +0200, Thomas Gleixner wrote:
>>>
>>>        
>>>> On Tue, 21 Aug 2012, Attilio Rao wrote:
>>>>
>>>>          
>>>>> Differences with v1:
>>>>> - The patch series is re-arranged in a way that helps review, following
>>>>>     a plan by Thomas Gleixner
>>>>> - The PVOPS nomenclature is not used as it is not correct
>>>>> - The front-end message is adjusted with feedback by Thomas Gleixner,
>>>>>     Stefano Stabellini and Konrad Rzeszutek Wilk
>>>>>
>>>>>            
>>>> This is much simpler to read and review. Just have a look at the
>>>> diffstats of the two series:
>>>>
>>>>    6 files changed,  9 insertions(+),  8 deletions(-)
>>>>    6 files changed, 11 insertions(+),  9 deletions(-)
>>>>    5 files changed, 50 insertions(+),  2 deletions(-)
>>>>    6 files changed,  2 insertions(+), 65 deletions(-)
>>>>    1 files changed,  5 insertions(+),  0 deletions(-)
>>>>
>>>> versus
>>>>
>>>>    6 files changed, 10 insertions(+),  9 deletions(-)
>>>>    6 files changed, 11 insertions(+), 11 deletions(-)
>>>>    5 files changed,  3 insertions(+),  3 deletions(-)
>>>>    6 files changed,  4 insertions(+), 20 deletions(-)
>>>>    1 files changed,  5 insertions(+),  0 deletions(-)
>>>>
>>>> The overall result is basically the same, but it's way simpler to look
>>>> at obvious and well done patches than checking whether a subtle copy
>>>> and paste bug happened in 3/5 of the first version. Copy and paste is
>>>> the #1 cause for subtle bugs. :)
>>>>
>>>> I'm waiting for the ack of Xen folks before taking it into tip.
>>>>
>>>>          
>>> I've some extra patches that modify the new "paging_init" in the Xen
>>> code that I am going to propose for v3.7 - so I will have some merge
>>> conflicts. Let me figure that out and also run this set of patches
>>> (and also the previous one .. which I think you didn't have a
>>> chance to look at since you were on vacation?) on an overnight
>>>
>>>        
>> Which previous one ?
>>
>>      
> This one:
> https://lkml.org/lkml/2012/8/21/369
>
> but I would like to repost the patch series, dropping the reference to
> PVOPS in the commit logs. I will do so right now, so please wait for
> another patch series.
>    

For your convenience:
http://lkml.org/lkml/2012/8/22/450

Attilio

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 09:43:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 09:43:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4Twj-0002IT-2O; Thu, 23 Aug 2012 09:43:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1T4Twh-0002IG-E2
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 09:43:23 +0000
Received: from [85.158.143.35:8450] by server-2.bemta-4.messagelabs.com id
	2D/20-21239-A3BF5305; Thu, 23 Aug 2012 09:43:22 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-10.tower-21.messagelabs.com!1345715002!10235245!1
X-Originating-IP: [74.125.83.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21053 invoked from network); 23 Aug 2012 09:43:22 -0000
Received: from mail-ee0-f45.google.com (HELO mail-ee0-f45.google.com)
	(74.125.83.45)
	by server-10.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 09:43:22 -0000
Received: by eeke53 with SMTP id e53so181990eek.32
	for <xen-devel@lists.xen.org>; Thu, 23 Aug 2012 02:43:22 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:user-agent:date:subject:from:to:cc:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=wIJwAIiIFWUkELk+2Ej+Wa5/ZUb2meNC0yFSnBbzfwo=;
	b=roi1vKOlkcgrAMLjIxINVHHteidm6vsJUrTY0PDJK3IGIik0tin0x1IkLggh+MwyBu
	5gwbp17OJQP5nr8fO2sjNY/BcyAuPBMkeIHyJbn4N2I098k77dbCwXy3WBBbiekdFpqO
	G5d0BWHGpTk5k2oiZ5ISxDB8vAhN8A1Opvm5ZRTgGlcRyDbARjj648zGU3MIktTD5Uq1
	iPYnec+OqrrpxDQwgvSU3XeWviZIp0ygcnivFhDkzmHDn6us0zKOD/0LDA5RF/23StHV
	3h6zaz4AD/IDtG44a5S/Wrr0BPCRhc7GFnZq9/b7Zdi+ZZNjL+gBBfAqKb0Jlo1vdIxC
	1KHw==
Received: by 10.14.172.129 with SMTP id t1mr1138593eel.34.1345715001978;
	Thu, 23 Aug 2012 02:43:21 -0700 (PDT)
Received: from [192.168.1.3] (host86-184-74-117.range86-184.btcentralplus.com.
	[86.184.74.117])
	by mx.google.com with ESMTPS id 45sm19907205eeb.8.2012.08.23.02.43.20
	(version=SSLv3 cipher=OTHER); Thu, 23 Aug 2012 02:43:21 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.33.0.120411
Date: Thu, 23 Aug 2012 10:43:19 +0100
From: Keir Fraser <keir@xen.org>
To: Jonathan Tripathy <jonnyt@abpni.co.uk>
Message-ID: <CC5BB9C7.49F51%keir@xen.org>
Thread-Topic: [Xen-devel] Can't see more than 3.5GB of RAM / UEFI / no e820
	memory map detected
Thread-Index: Ac2BE7oWGPSNynBsA0KX8JC/UQYInQ==
In-Reply-To: <88d859669c14bde83cb1ca3ed7b4d455@abpni.co.uk>
Mime-version: 1.0
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Can't see more than 3.5GB of RAM / UEFI / no e820
 memory map detected
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 23/08/2012 09:58, "Jonathan Tripathy" <jonnyt@abpni.co.uk> wrote:

>> What does your RAM map look like now from early Xen boot, using
>> no-real-mode? It shouldn't be "Xen-e801" any more at least, else the
>> no-real-mode parameter isn't working.
>> 
> 
> Still Xen-e801, so it looks like no-real-mode isn't working :(

Grrr it's been broken since tboot support went in, long ago. Going to have
to fix that and backport to 4.1 and 4.0 branches...

 -- Keir



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 09:50:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 09:50:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4U3S-0002Wb-Ut; Thu, 23 Aug 2012 09:50:22 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jonnyt@abpni.co.uk>) id 1T4U3R-0002WW-UE
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 09:50:22 +0000
Received: from [85.158.139.83:50827] by server-3.bemta-5.messagelabs.com id
	43/A5-27237-DDCF5305; Thu, 23 Aug 2012 09:50:21 +0000
X-Env-Sender: jonnyt@abpni.co.uk
X-Msg-Ref: server-12.tower-182.messagelabs.com!1345715417!27403919!1
X-Originating-IP: [109.200.19.114]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16350 invoked from network); 23 Aug 2012 09:50:18 -0000
Received: from edge1.gosport.uk.abpni.net (HELO
	mail1.gosport.uk.corp.abpni.net) (109.200.19.114)
	by server-12.tower-182.messagelabs.com with SMTP;
	23 Aug 2012 09:50:18 -0000
Received: from localhost (mail1.gosport.corp.uk.abpni.net [127.0.0.1])
	by mail1.gosport.uk.corp.abpni.net (Postfix) with ESMTP id BD5175A0006
	for <xen-devel@lists.xen.org>; Thu, 23 Aug 2012 10:49:54 +0100 (BST)
X-Virus-Scanned: Debian amavisd-new at mail1.mail.gosport.corp.uk.abpni.net
Received: from mail1.gosport.uk.corp.abpni.net ([127.0.0.1])
	by localhost (mail1.mail.gosport.corp.uk.abpni.net [127.0.0.1])
	(amavisd-new, port 10024)
	with ESMTP id Rn4vfEKnRjVy for <xen-devel@lists.xen.org>;
	Thu, 23 Aug 2012 10:49:52 +0100 (BST)
Received: from mail.abpni.co.uk (unknown [10.87.17.3])
	by mail1.gosport.uk.corp.abpni.net (Postfix) with ESMTPA id 8725E5A0005
	for <xen-devel@lists.xen.org>; Thu, 23 Aug 2012 10:49:52 +0100 (BST)
MIME-Version: 1.0
Date: Thu, 23 Aug 2012 10:50:15 +0100
From: Jonathan Tripathy <jonnyt@abpni.co.uk>
To: <xen-devel@lists.xen.org>
Message-ID: <bcb6194956c235510cd981003ba7e999@abpni.co.uk>
X-Sender: jonnyt@abpni.co.uk
User-Agent: Roundcube Webmail/0.6
Subject: Re: [Xen-devel] Can't see more than 3.5GB of RAM / UEFI / no e820
 memory map detected
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 23.08.2012 10:43, Keir Fraser wrote:
> On 23/08/2012 09:58, "Jonathan Tripathy" <jonnyt@abpni.co.uk> wrote:
>
>>> What does your RAM map look like now from early Xen boot, using
>>> no-real-mode? It shouldn't be "Xen-e801" any more at least, else 
>>> the
>>> no-real-mode parameter isn't working.
>>>
>>
>> Still Xen-e801, so it looks like no-real-mode isn't working :(
>
> Grrr it's been broken since tboot support went in, long ago. Going to 
> have
> to fix that and backport to 4.1 and 4.0 branches...
>
>  -- Keir

Just for info, is it safe to use no-real-mode on a production system? 
Please keep in mind that our DomUs are untrusted. Or would it be better 
if I created a patch to change the order of the if block to prefer the 
multi-boot memory map?

Thanks


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 10:06:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 10:06:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4UIb-0002yK-2Z; Thu, 23 Aug 2012 10:06:01 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1T4UIZ-0002yE-M0
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 10:06:00 +0000
Received: from [85.158.139.83:43851] by server-3.bemta-5.messagelabs.com id
	D7/79-27237-68006305; Thu, 23 Aug 2012 10:05:58 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-12.tower-182.messagelabs.com!1345716345!27408669!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzYwOTY0\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28696 invoked from network); 23 Aug 2012 10:05:46 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-12.tower-182.messagelabs.com with SMTP;
	23 Aug 2012 10:05:46 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga102.jf.intel.com with ESMTP; 23 Aug 2012 03:05:44 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.80,299,1344236400"; d="scan'208";a="190349890"
Received: from fmsmsx108.amr.corp.intel.com ([10.19.9.228])
	by orsmga002.jf.intel.com with ESMTP; 23 Aug 2012 03:05:37 -0700
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	FMSMSX108.amr.corp.intel.com (10.19.9.228) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Thu, 23 Aug 2012 03:05:36 -0700
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.239]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.175]) with mapi id
	14.01.0355.002; Thu, 23 Aug 2012 18:05:35 +0800
From: "Zhang, Xiantao" <xiantao.zhang@intel.com>
To: Jan Beulich <JBeulich@suse.com>, "Hao, Xudong" <xudong.hao@intel.com>
Thread-Topic: [Xen-devel] [PATCH 1/3] xen/tools: Add 64 bits big bar support
Thread-Index: AQHNfsDxEvIg0TSQnUWh7XNPkUBsgpdnLzzg
Date: Thu, 23 Aug 2012 10:05:35 +0000
Message-ID: <B6C2EB9186482D47BD0C5A9A48345644032DC532@SHSMSX101.ccr.corp.intel.com>
References: <1345013682-20618-1-git-send-email-xudong.hao@intel.com>
	<502B85020200007800095036@nat28.tlf.novell.com>
	<403610A45A2B5242BD291EDAE8B37D300FE8AB13@SHSMSX102.ccr.corp.intel.com>
	<502CEFC10200007800095727@nat28.tlf.novell.com>
	<403610A45A2B5242BD291EDAE8B37D300FEA18DC@SHSMSX102.ccr.corp.intel.com>
	<502E2C9C0200007800095D33@nat28.tlf.novell.com>
	<403610A45A2B5242BD291EDAE8B37D300FEA36B0@SHSMSX102.ccr.corp.intel.com>
	<5032316202000078000965DF@nat28.tlf.novell.com>
In-Reply-To: <5032316202000078000965DF@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "ian.jackson@eu.citrix.com" <ian.jackson@eu.citrix.com>, "Zhang,
	Xiantao" <xiantao.zhang@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 1/3] xen/tools: Add 64 bits big bar support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: Monday, August 20, 2012 6:45 PM
> To: Hao, Xudong
> Cc: ian.jackson@eu.citrix.com; Zhang, Xiantao; xen-devel@lists.xen.org
> Subject: RE: [Xen-devel] [PATCH 1/3] xen/tools: Add 64 bits big bar support
> 
> >>> On 20.08.12 at 05:22, "Hao, Xudong" <xudong.hao@intel.com> wrote:
> 
> >> -----Original Message-----
> >> From: Jan Beulich [mailto:JBeulich@suse.com]
> >> Sent: Friday, August 17, 2012 5:36 PM
> >> To: Hao, Xudong
> >> Cc: ian.jackson@eu.citrix.com; Zhang, Xiantao;
> >> xen-devel@lists.xen.org
> >> Subject: RE: [Xen-devel] [PATCH 1/3] xen/tools: Add 64 bits big bar
> >> support
> >>
> >> >>> On 17.08.12 at 11:24, "Hao, Xudong" <xudong.hao@intel.com> wrote:
> >> >>  -----Original Message-----
> >> >> From: Jan Beulich [mailto:JBeulich@suse.com]
> >> >> Sent: Thursday, August 16, 2012 7:04 PM
> >> >> To: Hao, Xudong
> >> >> Cc: ian.jackson@eu.citrix.com; Zhang, Xiantao;
> >> >> xen-devel@lists.xen.org
> >> >> Subject: RE: [Xen-devel] [PATCH 1/3] xen/tools: Add 64 bits big
> >> >> bar support
> >> >>
> >> >> >>> On 16.08.12 at 12:48, "Hao, Xudong" <xudong.hao@intel.com>
> wrote:
> >> >> >> From: Jan Beulich [mailto:JBeulich@suse.com]
> >> >> >> >>> On 15.08.12 at 08:54, Xudong Hao <xudong.hao@intel.com>
> wrote:
> >> >> >> > --- a/tools/firmware/hvmloader/config.h	Tue Jul 24 17:02:04
> 2012
> >> >> +0200
> >> >> >> > +++ b/tools/firmware/hvmloader/config.h	Thu Jul 26 15:40:01
> 2012
> >> >> +0800
> >> >> >> > @@ -53,6 +53,10 @@ extern struct bios_config ovmf_config;
> >> >> >> >  /* MMIO hole: Hardcoded defaults, which can be dynamically
> >> expanded.
> >> >> */
> >> >> >> >  #define PCI_MEM_START       0xf0000000
> >> >> >> >  #define PCI_MEM_END         0xfc000000
> >> >> >> > +#define PCI_HIGH_MEM_START  0xa000000000ULL
> >> >> >> > +#define PCI_HIGH_MEM_END    0xf000000000ULL
> >> >> >>
> >> >> >> With such hard coded values, this is hardly meant to be
> >> >> >> anything more than an RFC, is it? These values should not exist
> >> >> >> in the first place, and the variables below should be
> >> >> >> determined from VM characteristics (best would presumably be to
> >> >> >> allocate them top down from the end of physical address space,
> >> >> >> making sure you don't run into RAM).
> >> >>
> >> >> No comment on this part?
> >> >>
> >> >
> >> > The MMIO high memory starts from 640G, which is already very high;
> >> > I think we don't need to allocate MMIO top down from the top of the
> >> > physical address space.
> >> > Another thing you remind me of: maybe we can skip this high MMIO
> >> > hole when setting up the p2m table in libxc's HVM build
> >> > (setup_guest()), like the handling of MMIO below 4G.
> >>
> >> That would be an option, but any fixed address you pick here will
> >> look arbitrary (and will sooner or later raise questions). Plus by
> >> allowing the RAM above 4G to remain contiguous even for huge guests,
> >> we'd retain maximum compatibility with all sorts of guest OSes.
> >> Furthermore, did you check that we can in all cases use 40-bit
> >> (guest) physical addresses? (I would think that 36 is the biggest
> >> common value.) Bottom line - please don't use a fixed number here.
> >>
> >
> > Where is the 36-bit physical address limit present? Could you help
> > point it out in the current Xen code?
> 
> Look at xen/arch/x86/hvm/mtrr.c, e.g. hvm_mtrr_pat_init() or
> mtrr_var_range_msr_set().

I think the BIOS may fix it; you can refer to tools/firmware/hvmloader/cacheattr.c, and the BIOS can set the right physical address bit number after checking.
Xiantao
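The top-down placement Jan suggests above (derive the high MMIO window from the guest's physical address width and its RAM layout, rather than hard-coding PCI_HIGH_MEM_START/END) can be sketched as plain arithmetic. This is a hypothetical helper, not hvmloader code; the function name, the caller-supplied window size, and the 1GiB alignment are assumptions:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical sketch: place a high MMIO window of window_size bytes
 * just below the top of the guest physical address space (given by
 * phys_bits), 1GiB-aligned, without running into guest RAM.
 * Returns the window start, or 0 if the window would overlap RAM
 * (the caller would then shrink the window or raise phys_bits). */
static uint64_t high_mmio_start(unsigned int phys_bits,
                                uint64_t ram_end,
                                uint64_t window_size)
{
    const uint64_t align = 1ULL << 30;        /* 1GiB alignment */
    uint64_t top = 1ULL << phys_bits;         /* end of guest phys space */
    uint64_t start = (top - window_size) & ~(align - 1);

    if (start < ram_end)
        return 0;                             /* would run into RAM */
    return start;
}
```

For example, with a 40-bit guest address width and a 64GiB window this yields a window spanning 0xF000000000 up to 0x10000000000, close to the patch's hard-coded constants but derived from the address width instead of fixed.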

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 10:06:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 10:06:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4UIb-0002yK-2Z; Thu, 23 Aug 2012 10:06:01 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1T4UIZ-0002yE-M0
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 10:06:00 +0000
Received: from [85.158.139.83:43851] by server-3.bemta-5.messagelabs.com id
	D7/79-27237-68006305; Thu, 23 Aug 2012 10:05:58 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-12.tower-182.messagelabs.com!1345716345!27408669!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzYwOTY0\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28696 invoked from network); 23 Aug 2012 10:05:46 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-12.tower-182.messagelabs.com with SMTP;
	23 Aug 2012 10:05:46 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga102.jf.intel.com with ESMTP; 23 Aug 2012 03:05:44 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.80,299,1344236400"; d="scan'208";a="190349890"
Received: from fmsmsx108.amr.corp.intel.com ([10.19.9.228])
	by orsmga002.jf.intel.com with ESMTP; 23 Aug 2012 03:05:37 -0700
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	FMSMSX108.amr.corp.intel.com (10.19.9.228) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Thu, 23 Aug 2012 03:05:36 -0700
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.239]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.175]) with mapi id
	14.01.0355.002; Thu, 23 Aug 2012 18:05:35 +0800
From: "Zhang, Xiantao" <xiantao.zhang@intel.com>
To: Jan Beulich <JBeulich@suse.com>, "Hao, Xudong" <xudong.hao@intel.com>
Thread-Topic: [Xen-devel] [PATCH 1/3] xen/tools: Add 64 bits big bar support
Thread-Index: AQHNfsDxEvIg0TSQnUWh7XNPkUBsgpdnLzzg
Date: Thu, 23 Aug 2012 10:05:35 +0000
Message-ID: <B6C2EB9186482D47BD0C5A9A48345644032DC532@SHSMSX101.ccr.corp.intel.com>
References: <1345013682-20618-1-git-send-email-xudong.hao@intel.com>
	<502B85020200007800095036@nat28.tlf.novell.com>
	<403610A45A2B5242BD291EDAE8B37D300FE8AB13@SHSMSX102.ccr.corp.intel.com>
	<502CEFC10200007800095727@nat28.tlf.novell.com>
	<403610A45A2B5242BD291EDAE8B37D300FEA18DC@SHSMSX102.ccr.corp.intel.com>
	<502E2C9C0200007800095D33@nat28.tlf.novell.com>
	<403610A45A2B5242BD291EDAE8B37D300FEA36B0@SHSMSX102.ccr.corp.intel.com>
	<5032316202000078000965DF@nat28.tlf.novell.com>
In-Reply-To: <5032316202000078000965DF@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "ian.jackson@eu.citrix.com" <ian.jackson@eu.citrix.com>, "Zhang,
	Xiantao" <xiantao.zhang@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 1/3] xen/tools: Add 64 bits big bar support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: Monday, August 20, 2012 6:45 PM
> To: Hao, Xudong
> Cc: ian.jackson@eu.citrix.com; Zhang, Xiantao; xen-devel@lists.xen.org
> Subject: RE: [Xen-devel] [PATCH 1/3] xen/tools: Add 64 bits big bar support
> 
> >>> On 20.08.12 at 05:22, "Hao, Xudong" <xudong.hao@intel.com> wrote:
> 
> >> -----Original Message-----
> >> From: Jan Beulich [mailto:JBeulich@suse.com]
> >> Sent: Friday, August 17, 2012 5:36 PM
> >> To: Hao, Xudong
> >> Cc: ian.jackson@eu.citrix.com; Zhang, Xiantao;
> >> xen-devel@lists.xen.org
> >> Subject: RE: [Xen-devel] [PATCH 1/3] xen/tools: Add 64 bits big bar
> >> support
> >>
> >> >>> On 17.08.12 at 11:24, "Hao, Xudong" <xudong.hao@intel.com> wrote:
> >> >>  -----Original Message-----
> >> >> From: Jan Beulich [mailto:JBeulich@suse.com]
> >> >> Sent: Thursday, August 16, 2012 7:04 PM
> >> >> To: Hao, Xudong
> >> >> Cc: ian.jackson@eu.citrix.com; Zhang, Xiantao;
> >> >> xen-devel@lists.xen.org
> >> >> Subject: RE: [Xen-devel] [PATCH 1/3] xen/tools: Add 64 bits big
> >> >> bar support
> >> >>
> >> >> >>> On 16.08.12 at 12:48, "Hao, Xudong" <xudong.hao@intel.com>
> wrote:
> >> >> >> From: Jan Beulich [mailto:JBeulich@suse.com]
> >> >> >> >>> On 15.08.12 at 08:54, Xudong Hao <xudong.hao@intel.com>
> wrote:
> >> >> >> > --- a/tools/firmware/hvmloader/config.h	Tue Jul 24 17:02:04
> 2012
> >> >> +0200
> >> >> >> > +++ b/tools/firmware/hvmloader/config.h	Thu Jul 26 15:40:01
> 2012
> >> >> +0800
> >> >> >> > @@ -53,6 +53,10 @@ extern struct bios_config ovmf_config;
> >> >> >> >  /* MMIO hole: Hardcoded defaults, which can be dynamically
> >> expanded.
> >> >> */
> >> >> >> >  #define PCI_MEM_START       0xf0000000
> >> >> >> >  #define PCI_MEM_END         0xfc000000
> >> >> >> > +#define PCI_HIGH_MEM_START  0xa000000000ULL
> >> >> >> > +#define PCI_HIGH_MEM_END    0xf000000000ULL
> >> >> >>
> >> >> >> With such hard coded values, this is hardly meant to be
> >> >> >> anything more than an RFC, is it? These values should not exist
> >> >> >> in the first place, and the variables below should be
> >> >> >> determined from VM characteristics (best would presumably be to
> >> >> >> allocate them top down from the end of physical address space,
> >> >> >> making sure you don't run into RAM).
> >> >>
> >> >> No comment on this part?
> >> >>
> >> >
> >> > The MMIO high memory starts from 640G, which is already very high;
> >> > I don't think we need to allocate MMIO top down from the top of the
> >> > physical address space.
> >> > Another thing you remind me of: maybe we can skip this high MMIO
> >> > hole when setting up the p2m table in libxc's HVM builder
> >> > (setup_guest()), like the handling of MMIO below 4G.
> >>
> >> That would be an option, but any fixed address you pick here will
> >> look arbitrary (and will sooner or later raise questions). Plus by
> >> allowing the RAM above 4G to remain contiguous even for huge guests,
> >> we'd retain maximum compatibility with all sorts of guest OSes.
> >> Furthermore, did you check that we in all cases can use 40-bit
> >> (guest) physical addresses (I would think that 36 is the biggest
> >> common value). Bottom line - please don't use a fixed number here.
> >>
> >
> > Where is the 36-bit physical address limit present? Could you help
> > point it out in the current Xen code?
> 
> Look at xen/arch/x86/hvm/mtrr.c, e.g. hvm_mtrr_pat_init() or
> mtrr_var_range_msr_set().

I think the BIOS may fix this; see tools/firmware/hvmloader/cacheattr.c. The BIOS can set the right physical address bit count after checking.
Xiantao

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 10:19:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 10:19:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4UVi-0003G5-EG; Thu, 23 Aug 2012 10:19:34 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andre.Przywara@amd.com>) id 1T4UVh-0003Fy-Kp
	for xen-devel@lists.xensource.com; Thu, 23 Aug 2012 10:19:33 +0000
X-Env-Sender: Andre.Przywara@amd.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1345717133!8577553!1
X-Originating-IP: [216.32.180.30]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3134 invoked from network); 23 Aug 2012 10:18:54 -0000
Received: from va3ehsobe010.messaging.microsoft.com (HELO
	va3outboundpool.messaging.microsoft.com) (216.32.180.30)
	by server-8.tower-27.messagelabs.com with AES128-SHA encrypted SMTP;
	23 Aug 2012 10:18:54 -0000
Received: from mail81-va3-R.bigfish.com (10.7.14.241) by
	VA3EHSOBE003.bigfish.com (10.7.40.23) with Microsoft SMTP Server id
	14.1.225.23; Thu, 23 Aug 2012 10:18:52 +0000
Received: from mail81-va3 (localhost [127.0.0.1])	by mail81-va3-R.bigfish.com
	(Postfix) with ESMTP id D97A14C01AA;
	Thu, 23 Aug 2012 10:18:52 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:163.181.249.109; KIP:(null); UIP:(null);
	IPV:NLI; H:ausb3twp02.amd.com; RD:none; EFVD:NLI
X-SpamScore: -5
X-BigFish: VPS-5(zzbb2dI98dI9371I1432I1447Izz1202hzz8275bh8275dhz2dh668h839hd25he5bhf0ah107ah)
Received: from mail81-va3 (localhost.localdomain [127.0.0.1]) by mail81-va3
	(MessageSwitch) id 13457171315645_3528;
	Thu, 23 Aug 2012 10:18:51 +0000 (UTC)
Received: from VA3EHSMHS016.bigfish.com (unknown [10.7.14.236])	by
	mail81-va3.bigfish.com (Postfix) with ESMTP id F0AFC240054;
	Thu, 23 Aug 2012 10:18:50 +0000 (UTC)
Received: from ausb3twp02.amd.com (163.181.249.109) by
	VA3EHSMHS016.bigfish.com (10.7.99.26) with Microsoft SMTP Server id
	14.1.225.23; Thu, 23 Aug 2012 10:18:50 +0000
X-WSS-ID: 0M97DZA-02-6KM-02
X-M-MSG: 
Received: from sausexedgep02.amd.com (sausexedgep02-ext.amd.com
	[163.181.249.73])	(using TLSv1 with cipher AES128-SHA (128/128
	bits))	(No
	client certificate requested)	by ausb3twp02.amd.com (Axway MailGate
	3.8.1)
	with ESMTP id 259E7C8084;	Thu, 23 Aug 2012 05:18:45 -0500 (CDT)
Received: from sausexhtp02.amd.com (163.181.3.152) by sausexedgep02.amd.com
	(163.181.36.59) with Microsoft SMTP Server (TLS) id 8.3.192.1;
	Thu, 23 Aug 2012 05:19:04 -0500
Received: from storexhtp01.amd.com (172.24.4.3) by sausexhtp02.amd.com
	(163.181.3.152) with Microsoft SMTP Server (TLS) id 8.3.213.0;
	Thu, 23 Aug 2012 05:18:48 -0500
Received: from gwo.osrc.amd.com (165.204.16.204) by storexhtp01.amd.com
	(172.24.4.3) with Microsoft SMTP Server id 8.3.213.0; Thu, 23 Aug 2012
	06:18:47 -0400
Received: from mail.osrc.amd.com (aluminium.osrc.amd.com [165.204.15.141])	by
	gwo.osrc.amd.com (Postfix) with ESMTP id 98FFF49C121;
	Thu, 23 Aug 2012 11:18:46 +0100 (BST)
Received: from [165.204.15.38] (wanderer.osrc.amd.com [165.204.15.38])	by
	mail.osrc.amd.com (Postfix) with ESMTPS id 726AB594037; Thu, 23 Aug 2012
	12:18:46 +0200 (CEST)
Message-ID: <503602A0.9040908@amd.com>
Date: Thu, 23 Aug 2012 12:14:56 +0200
From: Andre Przywara <andre.przywara@amd.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:13.0) Gecko/20120615 Thunderbird/13.0.1
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <4FE1C423.6070001@amd.com>
	<20120620145127.GD12787@phenom.dumpdata.com>
	<4FE32DDA.10204@amd.com>
	<20120630014825.GA7003@phenom.dumpdata.com>
	<20120630021936.GA27100@phenom.dumpdata.com>
	<50114002.2030700@amd.com>
	<20120817205207.GA3002@phenom.dumpdata.com>
In-Reply-To: <20120817205207.GA3002@phenom.dumpdata.com>
X-OriginatorOrg: amd.com
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	xen-devel <xen-devel@lists.xensource.com>,
	david.vrabel@citrix.com, Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] acpidump crashes on some machines
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/17/2012 10:52 PM, Konrad Rzeszutek Wilk wrote:
> On Thu, Jul 26, 2012 at 03:02:58PM +0200, Andre Przywara wrote:
>> On 06/30/2012 04:19 AM, Konrad Rzeszutek Wilk wrote:
>>
>> Konrad, David,
>>
>> back on track for this issue. Thanks for your input, I could do some
>> more debugging (see below for a refresh):
>>
>> It seems like it affects only the first page of the 1:1 mapping. I
>> didn't have any issues with the last PFN or the page behind it (which
>> failed properly).
>>
>> David, thanks for the hint with varying dom0_mem parameter. I
>> thought I already checked this, but I did it once again and it
>> turned out that it is only an issue if dom0_mem is smaller than the
>> ACPI area, which generates a hole in the memory map. So we have
>> (simplified)
>> * 1:1 mapping to 1 MB
>> * normal mapping till dom0_mem
>> * unmapped area till ACPI E820 area
>> * ACPI E820 1:1 mapping
>>
>> As far as I could chase it down the 1:1 mapping itself looks OK, I
>> couldn't find any off-by-one bugs here. So maybe it is code that
>> later on invalidates areas between the normal guest mapping and the
>> ACPI mem?
>
> I think I found it. Can you try this pls [and if you can't find
> early_to_phys.. just use the __set_phys_to call]

Yes, that works. At least after a quick test on my test box. Both the 
test module and acpidump work as expected. If I replace the "<" in your 
patch with the original "<=", I get the warning (and due to the 
"continue" it also works).
I also successfully tested the minimal fix (just replacing <= with <).
I will feed it to the testers here to cover more machines.

Do you want to keep the warnings in (which exceed 80 characters, btw)?

Thanks a lot and:

Tested-by: Andre Przywara <andre.przywara@amd.com>

Regards,
Andre.

>
>  From ab915d98f321b0fcca1932747c632b5f0f299f55 Mon Sep 17 00:00:00 2001
> From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Date: Fri, 17 Aug 2012 16:43:28 -0400
> Subject: [PATCH] xen/setup: Fix off-by-one error when adding for-balloon PFNs
>   to the P2M.
>
> When we have finished returning PFNs to the hypervisor, populating
> them back, and marking the E820 MMIO regions and E820 gaps as
> IDENTITY_FRAMEs, we then call into the P2M code to set the areas
> that can be used for ballooning. We were off by one, and ended up
> over-writing a P2M entry that most likely was an IDENTITY_FRAME.
> For example:
>
> 1-1 mapping on 40000->40200
> 1-1 mapping on bc558->bc5ac
> 1-1 mapping on bc5b4->bc8c5
> 1-1 mapping on bc8c6->bcb7c
> 1-1 mapping on bcd00->100000
> Released 614 pages of unused memory
> Set 277889 page(s) to 1-1 mapping
> Populating 40200-40466 pfn range: 614 pages added
>
> => here we set from 40466 up to bc559 P2M tree to be
> INVALID_P2M_ENTRY. We should have done it up to bc558.
>
> The end result is that if anybody is trying to construct
> a PTE for PFN bc558 they end up with ~PAGE_PRESENT.
>
> CC: stable@vger.kernel.org
> Reported-by: Andre Przywara <andre.przywara@amd.com>
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> ---
>   arch/x86/xen/setup.c |   11 +++++++++--
>   1 files changed, 9 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
> index ead8557..030a55a 100644
> --- a/arch/x86/xen/setup.c
> +++ b/arch/x86/xen/setup.c
> @@ -78,9 +78,16 @@ static void __init xen_add_extra_mem(u64 start, u64 size)
>   	memblock_reserve(start, size);
>
>   	xen_max_p2m_pfn = PFN_DOWN(start + size);
> +	for (pfn = PFN_DOWN(start); pfn < xen_max_p2m_pfn; pfn++) {
> +		unsigned long mfn = pfn_to_mfn(pfn);
> +
> +		if (WARN(mfn == pfn, "Trying to over-write 1-1 mapping (pfn: %lx)\n", pfn))
> +			continue;
> +		WARN(mfn != INVALID_P2M_ENTRY, "Trying to remove %lx which has %lx mfn!\n",
> +			pfn, mfn);
>
> -	for (pfn = PFN_DOWN(start); pfn <= xen_max_p2m_pfn; pfn++)
> -		__set_phys_to_machine(pfn, INVALID_P2M_ENTRY);
> +		early_set_phys_to_machine(pfn, INVALID_P2M_ENTRY);
> +	}
>   }
>
>   static unsigned long __init xen_do_chunk(unsigned long start,
>

-- 
Andre Przywara
AMD-Operating System Research Center (OSRC), Dresden, Germany



From xen-devel-bounces@lists.xen.org Thu Aug 23 10:22:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 10:22:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4UYe-0003P5-9L; Thu, 23 Aug 2012 10:22:36 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1T4UYc-0003Ox-AR
	for xen-devel@lists.xensource.com; Thu, 23 Aug 2012 10:22:34 +0000
Received: from [85.158.139.83:40836] by server-11.bemta-5.messagelabs.com id
	D8/72-29296-96406305; Thu, 23 Aug 2012 10:22:33 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-15.tower-182.messagelabs.com!1345717351!27365340!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzM2MzE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8136 invoked from network); 23 Aug 2012 10:22:32 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 10:22:32 -0000
X-IronPort-AV: E=Sophos;i="4.80,299,1344225600"; d="scan'208";a="206000462"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	23 Aug 2012 06:22:31 -0400
Received: from [10.80.2.76] (10.80.2.76) by FTLPMAILMX02.citrite.net
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0; Thu, 23 Aug 2012
	06:22:31 -0400
Message-ID: <50360465.3090602@citrix.com>
Date: Thu, 23 Aug 2012 11:22:29 +0100
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20120428 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Andre Przywara <andre.przywara@amd.com>
References: <4FE1C423.6070001@amd.com>
	<20120620145127.GD12787@phenom.dumpdata.com>
	<4FE32DDA.10204@amd.com>
	<20120630014825.GA7003@phenom.dumpdata.com>
	<20120630021936.GA27100@phenom.dumpdata.com>
	<50114002.2030700@amd.com>
	<20120817205207.GA3002@phenom.dumpdata.com>
	<503602A0.9040908@amd.com>
In-Reply-To: <503602A0.9040908@amd.com>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	xen-devel <xen-devel@lists.xensource.com>, Jan Beulich <JBeulich@suse.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] acpidump crashes on some machines
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 23/08/12 11:14, Andre Przywara wrote:
> On 08/17/2012 10:52 PM, Konrad Rzeszutek Wilk wrote:
>> On Thu, Jul 26, 2012 at 03:02:58PM +0200, Andre Przywara wrote:
>>> On 06/30/2012 04:19 AM, Konrad Rzeszutek Wilk wrote:
>>>
>>> Konrad, David,
>>>
>>> back on track for this issue. Thanks for your input, I could do some
>>> more debugging (see below for a refresh):
>>>
>>> It seems like it affects only the first page of the 1:1 mapping. I
>>> didn't have an issues with the last PFN or the page behind it (which
>>> failed properly).
>>>
>>> David, thanks for the hint with varying dom0_mem parameter. I
>>> thought I already checked this, but I did it once again and it
>>> turned out that it is only an issue if dom0_mem is smaller than the
>>> ACPI area, which generates a hole in the memory map. So we have
>>> (simplified)
>>> * 1:1 mapping to 1 MB
>>> * normal mapping till dom0_mem
>>> * unmapped area till ACPI E820 area
>>> * ACPI E820 1:1 mapping
>>>
>>> As far as I could chase it down the 1:1 mapping itself looks OK, I
>>> couldn't find any off-by-one bugs here. So maybe it is code that
>>> later on invalidates areas between the normal guest mapping and the
>>> ACPI mem?
>>
>> I think I found it. Can you try this pls [and if you can't find
>> early_to_phys.. just use the __set_phys_to call]
> 
> Yes, that works. At least after a quick test on my test box. Both the 
> test module and acpidump work as expected. If I replace the "<" in your 
> patch with the original "<=", I get the warning (and due to the 
> "continue" it also works).

Note that the balloon driver could subsequently overwrite the p2m entry.
 I don't think it is worth redoing the patch to adjust the region passed
to the balloon driver to avoid this though.

> I also successfully tested the minimal fix (just replacing <= with <).
> I will feed it to the testers here to cover more machines.
> 
> Do you want to keep the warnings in (which exceed 80 characters, btw)?

I think we do.

David


From xen-devel-bounces@lists.xen.org Thu Aug 23 10:37:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 10:37:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4Un6-0003ee-0F; Thu, 23 Aug 2012 10:37:32 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T4Un4-0003eZ-Qk
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 10:37:30 +0000
Received: from [85.158.138.51:64136] by server-8.bemta-3.messagelabs.com id
	82/62-29583-9E706305; Thu, 23 Aug 2012 10:37:29 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-174.messagelabs.com!1345718247!27379610!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA0MjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31036 invoked from network); 23 Aug 2012 10:37:27 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-4.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 10:37:27 -0000
X-IronPort-AV: E=Sophos;i="4.80,299,1344211200"; d="scan'208";a="14143701"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	23 Aug 2012 10:37:12 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 23 Aug 2012 11:37:12 +0100
Message-ID: <1345718230.12501.79.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Thu, 23 Aug 2012 11:37:10 +0100
In-Reply-To: <20448.49637.38489.246434@mariner.uk.xensource.com>
References: <20448.49637.38489.246434@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Security vulnerability process, and CVE-2012-0217
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-06-19 at 19:16 +0100, Ian Jackson wrote:
[...]
> More minor policy questions
> ---------------------------
[...]
> Lacunae in the process
> ----------------------

I've prepared a series of patches to the policy[0] which implement most
(but not all) of what was suggested in these two sections. I think they
are largely uncontroversial, but please do holler if you disagree or
want to suggest further improvements.

I'm not at all sure that patches are the best way to present this, but
there we are.

Ian.

[0] http://www.xen.org/projects/security_vulnerability_process.html

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 10:38:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 10:38:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4Unt-0003hb-Q3; Thu, 23 Aug 2012 10:38:21 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T4Unt-0003hR-6F
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 10:38:21 +0000
Received: from [85.158.138.51:11579] by server-12.bemta-3.messagelabs.com id
	6F/76-04073-C1806305; Thu, 23 Aug 2012 10:38:20 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-174.messagelabs.com!1345718290!27379786!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzM2MzE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2832 invoked from network); 23 Aug 2012 10:38:19 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 10:38:19 -0000
X-IronPort-AV: E=Sophos;i="4.80,299,1344225600"; d="scan'208";a="206001587"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	23 Aug 2012 06:37:55 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 23 Aug 2012 06:37:54 -0400
Received: from zakaz.uk.xensource.com ([10.80.2.42])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1T4UnS-0001Mz-EV;
	Thu, 23 Aug 2012 11:37:54 +0100
From: Ian Campbell <ian.campbell@citrix.com>
To: xen-devel@lists.xen.org
Date: Thu, 23 Aug 2012 11:37:49 +0100
Message-ID: <1345718274-7900-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1345718230.12501.79.camel@zakaz.uk.xensource.com>
References: <1345718230.12501.79.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH 1/6] Clarify what info predisclosure list
	members may share during an embargo
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

See <20448.49637.38489.246434@mariner.uk.xensource.com>, section
  "7. Public communications during the embargo period"
---
 security_vulnerability_process.html |    8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/security_vulnerability_process.html b/security_vulnerability_process.html
index d1a6629..eff108a 100644
--- a/security_vulnerability_process.html
+++ b/security_vulnerability_process.html
@@ -195,9 +195,17 @@ if(ns4)_d.write("<scr"+"ipt type=text/javascript src=/globals/mmenuns4.js><\/scr
     should not make available, even to their own customers and partners:<ul>
        <li>the Xen.org advisory</li>
        <li>their own advisory</li>
+       <li>the impact, scope, set of vulnerable systems or the nature
+       of the vulnerability</li>
        <li>revision control commits which are a fix for the problem</li>
        <li>patched software (even in binary form) without prior consultation with security@xen and/or the discoverer.</li>
     </ul></p>    
+    <p>List members are allowed to make available to their users only the following:<ul>
+       <li>The existence of an issue</li>
+       <li>The assigned XSA and CVE numbers</li>
+       <li>The planned disclosure date</li>
+    </ul></p>
+
     <p>Organisations who meet the criteria should contact security@xen if they wish to receive pre-disclosure of advisories.</p>    
     <p>The pre-disclosure list will also receive copies of public advisories when they are first issued or updated.</p>
     
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 10:38:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 10:38:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4Uns-0003hM-E6; Thu, 23 Aug 2012 10:38:20 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T4Unq-0003hB-9p
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 10:38:18 +0000
Received: from [85.158.138.51:11260] by server-2.bemta-3.messagelabs.com id
	E3/66-09157-91806305; Thu, 23 Aug 2012 10:38:17 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-174.messagelabs.com!1345718290!27379786!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzM2MzE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2623 invoked from network); 23 Aug 2012 10:38:16 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 10:38:16 -0000
X-IronPort-AV: E=Sophos;i="4.80,299,1344225600"; d="scan'208";a="206001592"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	23 Aug 2012 06:37:55 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 23 Aug 2012 06:37:54 -0400
Received: from zakaz.uk.xensource.com ([10.80.2.42])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1T4UnS-0001Mz-GR;
	Thu, 23 Aug 2012 11:37:54 +0100
From: Ian Campbell <ian.campbell@citrix.com>
To: xen-devel@lists.xen.org
Date: Thu, 23 Aug 2012 11:37:54 +0100
Message-ID: <1345718274-7900-6-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1345718230.12501.79.camel@zakaz.uk.xensource.com>
References: <1345718230.12501.79.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH 6/6] Declare version 1.3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

---
 security_vulnerability_process.html |    1 +
 1 file changed, 1 insertion(+)

diff --git a/security_vulnerability_process.html b/security_vulnerability_process.html
index c830a04..c6c0d1d 100644
--- a/security_vulnerability_process.html
+++ b/security_vulnerability_process.html
@@ -251,6 +251,7 @@ if(ns4)_d.write("<scr"+"ipt type=text/javascript src=/globals/mmenuns4.js><\/scr
     
     <h2>Change History</h2>
     <ul>
+      <li><b>v1.3 Aug 2012:</b> Various minor updates</li>
       <li><b>v1.2 Apr 2012:</b> Added pre-disclosure list</li>
       <li><b>v1.1 Feb 2012:</b> Added link to Security Announcements wiki page</li>
       <li><b>v1.0 Dec 2011:</b> Intial document published after review</li>
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 10:38:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 10:38:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4Unv-0003iC-JZ; Thu, 23 Aug 2012 10:38:23 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T4Unu-0003hd-Di
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 10:38:22 +0000
Received: from [85.158.138.51:11689] by server-6.bemta-3.messagelabs.com id
	86/EE-32013-D1806305; Thu, 23 Aug 2012 10:38:21 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-174.messagelabs.com!1345718290!27379786!3
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzM2MzE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2899 invoked from network); 23 Aug 2012 10:38:20 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 10:38:20 -0000
X-IronPort-AV: E=Sophos;i="4.80,299,1344225600"; d="scan'208";a="206001591"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	23 Aug 2012 06:37:55 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 23 Aug 2012 06:37:54 -0400
Received: from zakaz.uk.xensource.com ([10.80.2.42])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1T4UnS-0001Mz-G4;
	Thu, 23 Aug 2012 11:37:54 +0100
From: Ian Campbell <ian.campbell@citrix.com>
To: xen-devel@lists.xen.org
Date: Thu, 23 Aug 2012 11:37:53 +0100
Message-ID: <1345718274-7900-5-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1345718230.12501.79.camel@zakaz.uk.xensource.com>
References: <1345718230.12501.79.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH 5/6] Patch review,
	expert advice and targetted fixes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

See <20448.49637.38489.246434@mariner.uk.xensource.com>, section
    "Patch development and review"
---
 security_vulnerability_process.html |    9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/security_vulnerability_process.html b/security_vulnerability_process.html
index 687e452..c830a04 100644
--- a/security_vulnerability_process.html
+++ b/security_vulnerability_process.html
@@ -109,8 +109,13 @@ if(ns4)_d.write("<scr"+"ipt type=text/javascript src=/globals/mmenuns4.js><\/scr
        process.</p></li>
        <p>(This may rely on the other project(s) having
        documented and responsive security contact points)</p>
-    <li><p>We will prepare or check patch(es) which fix the vulnerability.
-       This would ideally include all relevant backports.</p></li>
+    <li><p>We will prepare or check patch(es) which fix the
+       vulnerability.  This would ideally include all relevant
+       backports.  Patches will be tightly targeted on fixing the
+       specific security vulnerability in the smallest, simplest and
+       most reliable way.  Where necessary domain specific experts
+       within the community will be brought in to help with patch
+       preparation.</p></li>
     <li><p>We will determine which systems/configurations/versions are
        vulnerable, and what the impact of the vulnerability is.
        Depending on the nature of the vulnerability this may involve
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 10:38:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 10:38:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4Unv-0003iC-JZ; Thu, 23 Aug 2012 10:38:23 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T4Unu-0003hd-Di
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 10:38:22 +0000
Received: from [85.158.138.51:11689] by server-6.bemta-3.messagelabs.com id
	86/EE-32013-D1806305; Thu, 23 Aug 2012 10:38:21 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-174.messagelabs.com!1345718290!27379786!3
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzM2MzE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2899 invoked from network); 23 Aug 2012 10:38:20 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 10:38:20 -0000
X-IronPort-AV: E=Sophos;i="4.80,299,1344225600"; d="scan'208";a="206001591"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	23 Aug 2012 06:37:55 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 23 Aug 2012 06:37:54 -0400
Received: from zakaz.uk.xensource.com ([10.80.2.42])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1T4UnS-0001Mz-G4;
	Thu, 23 Aug 2012 11:37:54 +0100
From: Ian Campbell <ian.campbell@citrix.com>
To: xen-devel@lists.xen.org
Date: Thu, 23 Aug 2012 11:37:53 +0100
Message-ID: <1345718274-7900-5-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1345718230.12501.79.camel@zakaz.uk.xensource.com>
References: <1345718230.12501.79.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH 5/6] Patch review,
	expert advice and targeted fixes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

See <20448.49637.38489.246434@mariner.uk.xensource.com>, section
    "Patch development and review"
---
 security_vulnerability_process.html |    9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/security_vulnerability_process.html b/security_vulnerability_process.html
index 687e452..c830a04 100644
--- a/security_vulnerability_process.html
+++ b/security_vulnerability_process.html
@@ -109,8 +109,13 @@ if(ns4)_d.write("<scr"+"ipt type=text/javascript src=/globals/mmenuns4.js><\/scr
        process.</p></li>
        <p>(This may rely on the other project(s) having
        documented and responsive security contact points)</p>
-    <li><p>We will prepare or check patch(es) which fix the vulnerability.
-       This would ideally include all relevant backports.</p></li>
+    <li><p>We will prepare or check patch(es) which fix the
+       vulnerability.  This would ideally include all relevant
+       backports.  Patches will be tightly targeted on fixing the
+       specific security vulnerability in the smallest, simplest and
+       most reliable way.  Where necessary, domain-specific experts
+       within the community will be brought in to help with patch
+       preparation.</p></li>
     <li><p>We will determine which systems/configurations/versions are
        vulnerable, and what the impact of the vulnerability is.
        Depending on the nature of the vulnerability this may involve
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 10:38:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 10:38:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4Unx-0003j3-WF; Thu, 23 Aug 2012 10:38:26 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T4Unw-0003iL-Cq
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 10:38:24 +0000
Received: from [85.158.138.51:11853] by server-8.bemta-3.messagelabs.com id
	7A/F3-29583-F1806305; Thu, 23 Aug 2012 10:38:23 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-174.messagelabs.com!1345718297!27656889!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzM2MzE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13640 invoked from network); 23 Aug 2012 10:38:22 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 10:38:22 -0000
X-IronPort-AV: E=Sophos;i="4.80,299,1344225600"; d="scan'208";a="206001589"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	23 Aug 2012 06:37:55 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 23 Aug 2012 06:37:54 -0400
Received: from zakaz.uk.xensource.com ([10.80.2.42])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1T4UnS-0001Mz-Fg;
	Thu, 23 Aug 2012 11:37:54 +0100
From: Ian Campbell <ian.campbell@citrix.com>
To: xen-devel@lists.xen.org
Date: Thu, 23 Aug 2012 11:37:52 +0100
Message-ID: <1345718274-7900-4-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1345718230.12501.79.camel@zakaz.uk.xensource.com>
References: <1345718230.12501.79.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH 4/6] Discuss post-embargo disclosure of
	potentially controversial private decisions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

See <20448.49637.38489.246434@mariner.uk.xensource.com>, section
    "11. Transparency"
---
 security_vulnerability_process.html |   12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/security_vulnerability_process.html b/security_vulnerability_process.html
index ddd88a1..687e452 100644
--- a/security_vulnerability_process.html
+++ b/security_vulnerability_process.html
@@ -147,6 +147,18 @@ if(ns4)_d.write("<scr"+"ipt type=text/javascript src=/globals/mmenuns4.js><\/scr
        public advisory.  This will also be sent to the pre-disclosure
        list.</p>
     </li>
+
+    <li><p><b>Post embargo transparency:</b></p>
+    <p>During an embargo period the Xen.org security response team may
+    be required to make potentially controversial decisions in private,
+    since they cannot confer with the community without breaking the
+    embargo. The security team will attempt to make such decisions
+    following the guidance of this document and, where necessary, their
+    own best judgement. Following the embargo period any such
+    decisions will be disclosed to the community in the interests of
+    transparency and to help provide guidance should a similar
+    decision be required in the future.</p>
+    </li>
     </ol>
     
     <h2>Embargo and disclosure schedule</h2>    
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 10:38:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 10:38:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4Uny-0003jJ-Dp; Thu, 23 Aug 2012 10:38:26 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T4Unw-0003iQ-Lg
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 10:38:24 +0000
Received: from [85.158.138.51:51554] by server-5.bemta-3.messagelabs.com id
	9F/47-08865-F1806305; Thu, 23 Aug 2012 10:38:23 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-174.messagelabs.com!1345718299!18697131!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzM2MzE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1220 invoked from network); 23 Aug 2012 10:38:21 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 10:38:21 -0000
X-IronPort-AV: E=Sophos;i="4.80,299,1344225600"; d="scan'208";a="206001590"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	23 Aug 2012 06:37:55 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 23 Aug 2012 06:37:54 -0400
Received: from zakaz.uk.xensource.com ([10.80.2.42])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1T4UnS-0001Mz-Et;
	Thu, 23 Aug 2012 11:37:54 +0100
From: Ian Campbell <ian.campbell@citrix.com>
To: xen-devel@lists.xen.org
Date: Thu, 23 Aug 2012 11:37:50 +0100
Message-ID: <1345718274-7900-2-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1345718230.12501.79.camel@zakaz.uk.xensource.com>
References: <1345718230.12501.79.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH 2/6] Clarifications to predisclosure list
	subscription instructions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Specifically:
  * Mention that subscriptions via the web interface do not work and
    are not honoured.
  * Mention the preference for role addresses only.

See <20448.49637.38489.246434@mariner.uk.xensource.com>, section
    "8. Predisclosure subscription process, and email address
        criteria"
---
 security_vulnerability_process.html |    4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/security_vulnerability_process.html b/security_vulnerability_process.html
index eff108a..ee42402 100644
--- a/security_vulnerability_process.html
+++ b/security_vulnerability_process.html
@@ -206,7 +206,9 @@ if(ns4)_d.write("<scr"+"ipt type=text/javascript src=/globals/mmenuns4.js><\/scr
        <li>The planned disclosure date</li>
     </ul></p>
 
-    <p>Organisations who meet the criteria should contact security@xen if they wish to receive pre-disclosure of advisories.</p>    
+    <p>Organisations who meet the criteria should contact security@xen if they wish to receive pre-disclosure of advisories. Organisations should not request subscription via the mailing list web interface; any such subscription requests will be rejected and ignored.</p>
+    <p>Normally we would prefer that a role address be used for each organisation, rather than one or more individuals' direct email addresses. This helps to ensure that changes of personnel do not effectively drop an organisation from the list.</p>
+
     <p>The pre-disclosure list will also receive copies of public advisories when they are first issued or updated.</p>
     
     <h3>Organizations on the pre-disclosure list:</h3>
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 10:38:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 10:38:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4Unv-0003i1-7S; Thu, 23 Aug 2012 10:38:23 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T4Unt-0003hR-Ra
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 10:38:21 +0000
Received: from [85.158.138.51:51352] by server-12.bemta-3.messagelabs.com id
	6A/86-04073-D1806305; Thu, 23 Aug 2012 10:38:21 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-174.messagelabs.com!1345718297!27656889!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzM2MzE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13578 invoked from network); 23 Aug 2012 10:38:20 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 10:38:20 -0000
X-IronPort-AV: E=Sophos;i="4.80,299,1344225600"; d="scan'208";a="206001588"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	23 Aug 2012 06:37:55 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 23 Aug 2012 06:37:54 -0400
Received: from zakaz.uk.xensource.com ([10.80.2.42])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1T4UnS-0001Mz-FJ;
	Thu, 23 Aug 2012 11:37:54 +0100
From: Ian Campbell <ian.campbell@citrix.com>
To: xen-devel@lists.xen.org
Date: Thu, 23 Aug 2012 11:37:51 +0100
Message-ID: <1345718274-7900-3-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1345718230.12501.79.camel@zakaz.uk.xensource.com>
References: <1345718230.12501.79.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH 3/6] Clarify the scope of the process to just
	the hypervisor project
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Other projects are handled on a best-effort basis by the project lead
with the assistance of the security team.

See <20448.49637.38489.246434@mariner.uk.xensource.com>, section
    "9. Vulnerability process scope"
---
 security_vulnerability_process.html |    3 +++
 1 file changed, 3 insertions(+)

diff --git a/security_vulnerability_process.html b/security_vulnerability_process.html
index ee42402..ddd88a1 100644
--- a/security_vulnerability_process.html
+++ b/security_vulnerability_process.html
@@ -77,6 +77,9 @@ if(ns4)_d.write("<scr"+"ipt type=text/javascript src=/globals/mmenuns4.js><\/scr
     will treat with respect the requests of discoverers, or other vendors, who
     report problems to us.</p>
 
+    <h2>Scope of this process</h2>
+    <p>This process primarily covers the <a href="http://www.xen.org/products/xenhyp.html">Xen Hypervisor Project</a>. Vulnerabilities reported against other Xen.org projects will be handled on a best-effort basis by the relevant Project Lead together with the security team.</p>
+
     <h2>Specific process</h2>
     <ol type="1">
     <li><p>We request that anyone who discovers a vulnerability in xen.org
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 10:52:24 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 10:52:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4V1F-0004fi-W9; Thu, 23 Aug 2012 10:52:09 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@citrix.com>) id 1T4V1E-0004fd-36
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 10:52:08 +0000
Received: from [85.158.139.83:27478] by server-3.bemta-5.messagelabs.com id
	EB/AE-27237-75B06305; Thu, 23 Aug 2012 10:52:07 +0000
X-Env-Sender: julien.grall@citrix.com
X-Msg-Ref: server-16.tower-182.messagelabs.com!1345719125!20111690!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzM2MzE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24870 invoked from network); 23 Aug 2012 10:52:06 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 10:52:06 -0000
X-IronPort-AV: E=Sophos;i="4.80,300,1344225600"; d="scan'208";a="206002670"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	23 Aug 2012 06:52:04 -0400
Received: from [10.80.248.240] (10.80.248.240) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0; Thu, 23 Aug 2012
	06:52:04 -0400
Message-ID: <50360B81.2070402@citrix.com>
Date: Thu, 23 Aug 2012 11:52:49 +0100
From: Julien Grall <julien.grall@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US;
	rv:1.9.1.16) Gecko/20120726 Icedove/3.0.11
MIME-Version: 1.0
To: Jan Beulich <jbeulich@suse.com>
References: <cover.1345552068.git.julien.grall@citrix.com>
	<c378b04ee29071c1d6d68bd3ef48fedadb493b10.1345552068.git.julien.grall@citrix.com>
	<5035E986020000780008A617@nat28.tlf.novell.com>
In-Reply-To: <5035E986020000780008A617@nat28.tlf.novell.com>
Cc: "christian.limpach@gmail.com" <christian.limpach@gmail.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [XEN][RFC PATCH V2 05/17] hvm: Modify hvm_op
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/23/2012 08:27 AM, Jan Beulich wrote:
>> switch ( a.index )
>> {
>> -            case HVM_PARAM_IOREQ_PFN:
>>      
> Removing sub-ops which a domain can issue for itself (which for this and
> another one below appears to be the case) is not allowed.
>    

I removed these 3 sub-ops because they will not work with
QEMU disaggregation. The shared pages and event channels
for IO requests are private to each device model.

>> +            case HVM_PARAM_IO_PFN_FIRST:
>>      
> I don't see where in this patch this and the other new sub-op constants
> get defined.
>    
Both sub-op constants are added in patch 1:
http://lists.xen.org/archives/html/xen-devel/2012-08/msg01767.html


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 11:17:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 11:17:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4VPp-0004wh-6M; Thu, 23 Aug 2012 11:17:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1T4VPn-0004wc-Ou
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 11:17:32 +0000
Received: from [85.158.139.83:25638] by server-12.bemta-5.messagelabs.com id
	23/C1-22359-B4116305; Thu, 23 Aug 2012 11:17:31 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-12.tower-182.messagelabs.com!1345720648!27426352!1
X-Originating-IP: [209.85.215.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1742 invoked from network); 23 Aug 2012 11:17:28 -0000
Received: from mail-ey0-f173.google.com (HELO mail-ey0-f173.google.com)
	(209.85.215.173)
	by server-12.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 11:17:28 -0000
Received: by eaac13 with SMTP id c13so187797eaa.32
	for <xen-devel@lists.xen.org>; Thu, 23 Aug 2012 04:17:28 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:user-agent:date:subject:from:to:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=SneJGoD6qZfqR8U9CMI92IKwAOU8R7zKkAMvpHckvZ0=;
	b=u9C6CPcaPTKtuGXUPs3tKFTMmwUjt37dva0+H8MpnX+s7x4J3e4bfo1SjE3rgxiVXh
	5fmuQdySPxMrgnaHg6U5wN19VVr9lb14C5O8lDo/7PnqZlPozQHPVi9mNP01j4AuSc0G
	Oe7xozEvV0+PV3ZWZkxhjcYa1QRwn2VdY9318r5sFQ5Gu6KFH3bOe6Kz4npyRsE+rs9M
	KWL6FNb/6qFnuAAr3sgJynR13HLJrPX53X3jgcUcmMutqqZuHWlSRSlSq9PtCDF6OhWX
	JNx9WPx0W/D4Yt0l22VOhWVKfy7GJJI3t3UM+ddzL457gZXsKVndNEcM+8nozyOMtMP8
	0WbQ==
Received: by 10.14.211.3 with SMTP id v3mr1402490eeo.43.1345720648004;
	Thu, 23 Aug 2012 04:17:28 -0700 (PDT)
Received: from [192.168.1.3] (host86-184-74-117.range86-184.btcentralplus.com.
	[86.184.74.117])
	by mx.google.com with ESMTPS id h42sm20602248eem.5.2012.08.23.04.17.26
	(version=SSLv3 cipher=OTHER); Thu, 23 Aug 2012 04:17:27 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.33.0.120411
Date: Thu, 23 Aug 2012 12:17:19 +0100
From: Keir Fraser <keir@xen.org>
To: Jonathan Tripathy <jonnyt@abpni.co.uk>,
	<xen-devel@lists.xen.org>
Message-ID: <CC5BCFCF.49F7F%keir@xen.org>
Thread-Topic: [Xen-devel] Can't see more than 3.5GB of RAM / UEFI / no e820
	memory map detected
Thread-Index: Ac2BINvK2Bn36u9Xu0upRHVZY6EJzg==
In-Reply-To: <bcb6194956c235510cd981003ba7e999@abpni.co.uk>
Mime-version: 1.0
Subject: Re: [Xen-devel] Can't see more than 3.5GB of RAM / UEFI / no e820
 memory map detected
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 23/08/2012 10:50, "Jonathan Tripathy" <jonnyt@abpni.co.uk> wrote:

> On 23.08.2012 10:43, Keir Fraser wrote:
>> On 23/08/2012 09:58, "Jonathan Tripathy" <jonnyt@abpni.co.uk> wrote:
>> 
>>>> What does your RAM map look like now from early Xen boot, using
>>>> no-real-mode? It shouldn't be "Xen-e801" any more at least, else
>>>> the
>>>> no-real-mode parameter isn't working.
>>>> 
>>> 
>>> Still Xen-e801, so it looks like no-real-mode isn't working :(
>> 
>> Grrr it's been broken since tboot support went in, long ago. Going to
>> have
>> to fix that and backport to 4.1 and 4.0 branches...
>> 
>>  -- Keir
> 
> Just for info, is it safe to use no-real-mode on a production system?
> Please keep in mind that our DomUs are untrusted. Or would it be better
> if I created a patch to change the order of the if block to prefer the
> multi-boot memory map?

No-real-mode is perfectly safe to use from that point of view -- it will
have no impact on the safe containment of untrusted DomUs.

However, it is of course nice not to have to rely on no-real-mode, so please
try switching the ordering of that if block.

 -- Keir

> Thanks
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 11:36:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 11:36:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4Vhe-00057g-UF; Thu, 23 Aug 2012 11:35:58 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1T4Vhd-00057b-So
	for Xen-devel@lists.xensource.com; Thu, 23 Aug 2012 11:35:58 +0000
Received: from [85.158.143.35:54339] by server-3.bemta-4.messagelabs.com id
	94/B0-08232-D9516305; Thu, 23 Aug 2012 11:35:57 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1345721751!13362459!1
X-Originating-IP: [74.125.83.43]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 379 invoked from network); 23 Aug 2012 11:35:51 -0000
Received: from mail-ee0-f43.google.com (HELO mail-ee0-f43.google.com)
	(74.125.83.43)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 11:35:51 -0000
Received: by eekd4 with SMTP id d4so230135eek.30
	for <Xen-devel@lists.xensource.com>;
	Thu, 23 Aug 2012 04:35:50 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=7aLTxp24KhUTJ9DPYRa9rswZvVpRHL35h/GJ6JkUAfI=;
	b=meugLl5W90l9amsoiU0weU8pifQ+8Spj9X/uX/M/Le9dW4HaEnGLOcE3Gm+Q6B/2PJ
	sQThwDYfJ6yOXnOO8QxpUKfuiI0x26Pr3/ELzyFpjPmfARQww70nb+uz/LvIGKF08LOS
	yFlldlSVYnHeAKHtxOX/kVP5PhRXnIQVu76t9+kdzDU1Z02l+hAmSZ6C51ceHDlVukwM
	j+8TbcGlCXuKqYtCc+TjPRZWz+NiJn8Ihm5QMG8C0LfByLXHlahOE/TpmapzXuMMPuPo
	blxzxAK27+rn9xwgZBwxsP/9IPkUJS5uMv1Lop9VkcDvE3xK/JkOMMXzjBT0aXw4gC9B
	zCYQ==
MIME-Version: 1.0
Received: by 10.14.199.67 with SMTP id w43mr1503523een.33.1345721750831; Thu,
	23 Aug 2012 04:35:50 -0700 (PDT)
Received: by 10.14.215.195 with HTTP; Thu, 23 Aug 2012 04:35:50 -0700 (PDT)
In-Reply-To: <20120822151209.40a9e54c@mantra.us.oracle.com>
References: <20120822151209.40a9e54c@mantra.us.oracle.com>
Date: Thu, 23 Aug 2012 12:35:50 +0100
X-Google-Sender-Auth: LoCb9y-b6plcV7lj-lqBDIZRHsU
Message-ID: <CAFLBxZZ1q2G=9U7PG0topVxJg_LZvfKhUkystW1-pDFYamQxiw@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: Mukesh Rathor <mukesh.rathor@oracle.com>
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] mercurial sdiff
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

extdiff extension + meld allows this:

hg meld -c [changeset]

meld will then show you the before and after, with the changes highlighted.

meld is a graphical tool.

 -George
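
For anyone trying this: `hg meld` is not built in -- it assumes the extdiff extension is enabled and a meld command is defined. A minimal ~/.hgrc sketch along those lines (adjust to taste; meld must be on $PATH):

```ini
[extensions]
; extdiff ships with Mercurial, it just needs enabling
extdiff =

[extdiff]
; an empty cmd.<name> entry defines "hg <name>" running that program
cmd.meld =
```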

On Wed, Aug 22, 2012 at 11:12 PM, Mukesh Rathor
<mukesh.rathor@oracle.com> wrote:
> Hi,
>
> Anyone figured out a way to do sdiff (side by side diff) in hg, or
> anyone written an extension to do this?
>
> thanks,
> Mukesh
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 11:58:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 11:58:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4W2x-0005e0-Gd; Thu, 23 Aug 2012 11:57:59 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jean.guyader@gmail.com>) id 1T4W2w-0005dv-DD
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 11:57:58 +0000
Received: from [85.158.139.83:31700] by server-1.bemta-5.messagelabs.com id
	B1/98-09980-5CA16305; Thu, 23 Aug 2012 11:57:57 +0000
X-Env-Sender: jean.guyader@gmail.com
X-Msg-Ref: server-13.tower-182.messagelabs.com!1345723074!26929436!1
X-Originating-IP: [209.85.220.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7443 invoked from network); 23 Aug 2012 11:57:56 -0000
Received: from mail-vc0-f173.google.com (HELO mail-vc0-f173.google.com)
	(209.85.220.173)
	by server-13.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 11:57:56 -0000
Received: by vcbgb23 with SMTP id gb23so854750vcb.32
	for <xen-devel@lists.xen.org>; Thu, 23 Aug 2012 04:57:54 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type;
	bh=YLI6obi4h9b8L+rdQlpKem8Uk1JNWVKywEiDF6+KgNo=;
	b=aDGtGvnoVYpNbOPqyd9MnTdDPAMlwjDHlh5Rsqsy+tkT4W18ViBpb2NlvDV6v35ttO
	PhKQu5Bb3O10tXOnRJESF7vRM5apETsXFOgfTuD0w8mW6yPNyiG7SYtPLykdq/EQPf8C
	xB/2b8Fz3MhCk0+poCFxOOOGOA3PCg8OuxOXxPIemWdfqi1NTANW+QG5tw8EbBl1S3z4
	gE43QKk3Ga1i79T4tdWyOT3iYuBq0qgTsWP4+fUMhf+Nm2HIfXzsZs4urDDlT48M9qek
	YGkR/dLhCHyf1QgosZmRmy1WaEkzyoV7beB2xr9P7ePR6btEwo2XCaUuJT8Ru12lmyZr
	UAww==
Received: by 10.220.37.194 with SMTP id y2mr150945vcd.44.1345723074636; Thu,
	23 Aug 2012 04:57:54 -0700 (PDT)
MIME-Version: 1.0
Received: by 10.58.91.72 with HTTP; Thu, 23 Aug 2012 04:57:34 -0700 (PDT)
In-Reply-To: <501FA05C0200007800092CD7@nat28.tlf.novell.com>
References: <1344023454-31425-1-git-send-email-jean.guyader@citrix.com>
	<1344023454-31425-6-git-send-email-jean.guyader@citrix.com>
	<501FA05C0200007800092CD7@nat28.tlf.novell.com>
From: Jean Guyader <jean.guyader@gmail.com>
Date: Thu, 23 Aug 2012 12:57:34 +0100
Message-ID: <CAEBdQ90ObLybAxzYceEQyEVtpP7mMmPrn7NryH3h-+dTXBNDUw@mail.gmail.com>
To: Jan Beulich <JBeulich@suse.com>
Cc: Jean Guyader <jean.guyader@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 5/5] xen: Add V4V implementation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 6 August 2012 09:45, Jan Beulich <JBeulich@suse.com> wrote:
>>>> On 03.08.12 at 21:50, Jean Guyader <jean.guyader@citrix.com> wrote:
>>--- /dev/null
>>+++ b/xen/include/public/v4v.h
>>...
>>+#define V4V_DOMID_ANY           0x7fffU
>
> I think I asked this before - why not use one of the pre-existing
> DOMID values? And if there is a good reason, it should be said
> here in a comment, to avoid the same question being asked
> later again.
>
>>...
>>+typedef uint64_t v4v_pfn_t;
>
> We already have xen_pfn_t, so why do you need yet another
> flavor?
>
>>...
>>+struct v4v_info
>>+{
>>+    uint64_t ring_magic;
>>+    uint64_t data_magic;
>>+    evtchn_port_t evtchn;
>
> Missing padding at the end?
>
>>+};
>>+typedef struct v4v_info v4v_info_t;
>>+
>>+#define V4V_ROUNDUP(a) (((a) +0xf ) & ~0xf)
>
> Doesn't seem to belong here. Or is the subsequent comment
> actually related to this (in which case it should be moved ahead
> of the definition and made to match it).
>
>>+/*
>>+ * Messages on the ring are padded to 128 bits
>>+ * Len here refers to the exact length of the data not including the
>>+ * 128 bit header. The message uses
>>+ * ((len +0xf) & ~0xf) + sizeof(v4v_ring_message_header) bytes
>>+ */
>>...
>>+/*
>>+ * HYPERCALLS
>>+ */
>>...
>
> In the block below here, please get the naming (do_v4v_op()
> vs v4v_hypercall()) and the use of newlines (either always one
> or always two between individual hypercall descriptions)
> consistent. Hmm, even the descriptions don't seem to always
> match the definitions (not really obvious because apparently
> again the descriptions follow the definitions, whereas the
> opposite is the usual way to arrange things).
>
>>--- /dev/null
>>+++ b/xen/include/xen/v4v_utils.h
>>...
>>+/* Compiler specific hacks */
>>+#if defined(__GNUC__)
>>+# define V4V_UNUSED __attribute__ ((unused))
>>+# ifndef __STRICT_ANSI__
>>+#  define V4V_INLINE inline
>>+# else
>>+#  define V4V_INLINE
>>+# endif
>>+#else /* !__GNUC__ */
>>+# define V4V_UNUSED
>>+# define V4V_INLINE
>>+#endif
>
> This suggests the header is really intended to be public?
>
>>...
>>+static V4V_INLINE uint32_t
>>+v4v_ring_bytes_to_read (volatile struct v4v_ring *r)
>
> No space between function name and opening parenthesis
> (throughout this file).
>
>>...
>>+V4V_UNUSED static V4V_INLINE ssize_t
>
> V4V_UNUSED? Doesn't make sense in conjunction with
> V4V_INLINE, at least as long as you're using GNU extensions
> anyway (see above as to the disposition of the header).
>
>>+v4v_copy_out (struct v4v_ring *r, struct v4v_addr *from, uint32_t * protocol,
>>+              void *_buf, size_t t, int consume)
>
> Dead functions shouldn't be placed here.
>
>>...
>>+static ssize_t
>>+v4v_copy_out_offset (struct v4v_ring *r, struct v4v_addr *from,
>>+                     uint32_t * protocol, void *_buf, size_t t, int consume,
>>+                     size_t skip) V4V_UNUSED;
>>+
>>+V4V_INLINE static ssize_t
>>+v4v_copy_out_offset (struct v4v_ring *r, struct v4v_addr *from,
>>+                     uint32_t * protocol, void *_buf, size_t t, int consume,
>>+                     size_t skip)
>>+{
>
> What's the point of having a declaration followed immediately by
> a definition? Also, the function is dead too.
>

This file (v4v_utils.h) has utility functions that could be used by drivers.
We don't use them in Xen itself, but we thought it would be convenient to
have such functions accessible to anyone writing a v4v driver.

What would be the right place for those?

Thanks,
Jean

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	<501FA05C0200007800092CD7@nat28.tlf.novell.com>
From: Jean Guyader <jean.guyader@gmail.com>
Date: Thu, 23 Aug 2012 12:57:34 +0100
Message-ID: <CAEBdQ90ObLybAxzYceEQyEVtpP7mMmPrn7NryH3h-+dTXBNDUw@mail.gmail.com>
To: Jan Beulich <JBeulich@suse.com>
Cc: Jean Guyader <jean.guyader@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 5/5] xen: Add V4V implementation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 6 August 2012 09:45, Jan Beulich <JBeulich@suse.com> wrote:
>>>> On 03.08.12 at 21:50, Jean Guyader <jean.guyader@citrix.com> wrote:
>>--- /dev/null
>>+++ b/xen/include/public/v4v.h
>>...
>>+#define V4V_DOMID_ANY           0x7fffU
>
> I think I asked this before - why not use one of the pre-existing
> DOMID values? And if there is a good reason, it should be said
> here in a comment, to avoid the same question being asked
> later again.
>
>>...
>>+typedef uint64_t v4v_pfn_t;
>
> We already have xen_pfn_t, so why do you need yet another
> flavor?
>
>>...
>>+struct v4v_info
>>+{
>>+    uint64_t ring_magic;
>>+    uint64_t data_magic;
>>+    evtchn_port_t evtchn;
>
> Missing padding at the end?
>
>>+};
>>+typedef struct v4v_info v4v_info_t;
>>+
>>+#define V4V_ROUNDUP(a) (((a) +0xf ) & ~0xf)
>
> Doesn't seem to belong here. Or is the subsequent comment
> actually related to this (in which case it should be moved ahead
> of the definition and made match it).
>
>>+/*
>>+ * Messages on the ring are padded to 128 bits
>>+ * Len here refers to the exact length of the data not including the
>>+ * 128 bit header. The message uses
>>+ * ((len +0xf) & ~0xf) + sizeof(v4v_ring_message_header) bytes
>>+ */
>>...
>>+/*
>>+ * HYPERCALLS
>>+ */
>>...
>
> In the block below here, please get the naming (do_v4v_op()
> vs v4v_hypercall()) and the use of newlines (either always one
> or always two between individual hypercall descriptions)
> consistent. Hmm, even the descriptions don't seem to always
> match the definitions (not really obvious because apparently
> again the descriptions follow the definitions, whereas the
> opposite is the usual way to arrange things).
>
>>--- /dev/null
>>+++ b/xen/include/xen/v4v_utils.h
>>...
>>+/* Compiler specific hacks */
>>+#if defined(__GNUC__)
>>+# define V4V_UNUSED __attribute__ ((unused))
>>+# ifndef __STRICT_ANSI__
>>+#  define V4V_INLINE inline
>>+# else
>>+#  define V4V_INLINE
>>+# endif
>>+#else /* !__GNUC__ */
>>+# define V4V_UNUSED
>>+# define V4V_INLINE
>>+#endif
>
> This suggests the header is really intended to be public?
>
>>...
>>+static V4V_INLINE uint32_t
>>+v4v_ring_bytes_to_read (volatile struct v4v_ring *r)
>
> No space between function name and opening parenthesis
> (throughout this file).
>
>>...
>>+V4V_UNUSED static V4V_INLINE ssize_t
>
> V4V_UNUSED? Doesn't make sense in conjunction with
> V4V_INLINE, at least as long as you're using GNU extensions
> anyway (see above as to the disposition of the header).
>
>>+v4v_copy_out (struct v4v_ring *r, struct v4v_addr *from, uint32_t * protocol,
>>+              void *_buf, size_t t, int consume)
>
> Dead functions shouldn't be placed here.
>
>>...
>>+static ssize_t
>>+v4v_copy_out_offset (struct v4v_ring *r, struct v4v_addr *from,
>>+                     uint32_t * protocol, void *_buf, size_t t, int consume,
>>+                     size_t skip) V4V_UNUSED;
>>+
>>+V4V_INLINE static ssize_t
>>+v4v_copy_out_offset (struct v4v_ring *r, struct v4v_addr *from,
>>+                     uint32_t * protocol, void *_buf, size_t t, int consume,
>>+                     size_t skip)
>>+{
>
> What's the point of having a declaration followed immediately by
> a definition? Also, the function is dead too.
>

This file (v4v_utils.h) contains utility functions that could be used by
drivers. We don't use them in Xen itself, but we thought it would be
convenient to make such functions available to anyone writing a v4v driver.

What would be the right place for those?

Thanks,
Jean

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 12:04:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 12:04:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4W8E-0005xj-00; Thu, 23 Aug 2012 12:03:26 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jean.guyader@gmail.com>) id 1T4W8C-0005xZ-0L
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 12:03:24 +0000
Received: from [85.158.138.51:25836] by server-12.bemta-3.messagelabs.com id
	7B/CC-04073-B0C16305; Thu, 23 Aug 2012 12:03:23 +0000
X-Env-Sender: jean.guyader@gmail.com
X-Msg-Ref: server-6.tower-174.messagelabs.com!1345723401!19591468!1
X-Originating-IP: [209.85.212.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 396 invoked from network); 23 Aug 2012 12:03:22 -0000
Received: from mail-vb0-f45.google.com (HELO mail-vb0-f45.google.com)
	(209.85.212.45)
	by server-6.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 12:03:22 -0000
Received: by vbip1 with SMTP id p1so852217vbi.32
	for <xen-devel@lists.xen.org>; Thu, 23 Aug 2012 05:03:21 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type;
	bh=wr+dk/T+syeV9BdXDeSSDWtyAd+37A2Q07G/b2jqhKE=;
	b=AYh5nwfZIZIi6Xf/NWE3VvL/PNoXL25mR91qYx6TtGnbEhnu453QvLez/c6lggdEbA
	Eyld+pLI0NR7qGaW4WXerN8DnV1EihLXbi9GDHYUpf8F+EZh1LrJ0IkikpKibtguZeAf
	ylxNAGbJ0dxKvxruViZH2cZuJyRR674gXW3ZlQkIqd+/DlRenlCH8tjYd0XhbBZdWUNg
	/hggE2uMq/LaCja2hOntKfFi7flzDsHH3ZBHf53f+5FFdPCGELiDrARphNmpYFjk0bkc
	3ynh/BGaJkiodMUAczBg/KdvHRhERXJWlQ4Yv8ohF41dsKFvVfCLX5CTF4VyPz1jRdu6
	BivA==
Received: by 10.52.70.163 with SMTP id n3mr830804vdu.64.1345723400960; Thu, 23
	Aug 2012 05:03:20 -0700 (PDT)
MIME-Version: 1.0
Received: by 10.58.91.72 with HTTP; Thu, 23 Aug 2012 05:03:00 -0700 (PDT)
In-Reply-To: <5024DAF60200007800094167@nat28.tlf.novell.com>
References: <1344023454-31425-1-git-send-email-jean.guyader@citrix.com>
	<1344023454-31425-5-git-send-email-jean.guyader@citrix.com>
	<20120809100605.GC16986@ocelot.phlegethon.org>
	<1344507826.32142.116.camel@zakaz.uk.xensource.com>
	<20120809103557.GA17503@ocelot.phlegethon.org>
	<CAEBdQ91rVN-wFwWBkfX1Ne133c4TDeXk+iktTDLTuM3StXdRFw@mail.gmail.com>
	<20120809232547.GA21925@spongy>
	<5024D5D00200007800094134@nat28.tlf.novell.com>
	<20120810075100.GA30606@spongy>
	<5024DAF60200007800094167@nat28.tlf.novell.com>
From: Jean Guyader <jean.guyader@gmail.com>
Date: Thu, 23 Aug 2012 13:03:00 +0100
Message-ID: <CAEBdQ92_sD6Wo3i=ouJDyzqHrc_VE47u4Z-F2MmKiJ73PyMAiQ@mail.gmail.com>
To: Jan Beulich <JBeulich@suse.com>
Content-Type: multipart/mixed; boundary=bcaec501621547b9b904c7eda5f2
Cc: "Tim \(Xen.org\)" <tim@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	Jean Guyader <jean.guyader@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 4/5] xen: events,
	exposes evtchn_alloc_unbound_domain
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--bcaec501621547b9b904c7eda5f2
Content-Type: text/plain; charset=ISO-8859-1

On 10 August 2012 08:57, Jan Beulich <JBeulich@suse.com> wrote:
>>>> On 10.08.12 at 09:51, Jean Guyader <jean.guyader@citrix.com> wrote:
>> No specific reason for both of those thing. Here is a new version based
>> on your comments.
>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>

I have found a copy/paste bug in this patch.

This statement was being executed unconditionally, not just inside the if
statement:

    chn->u.unbound.remote_domid = remote_domid;

Attached the fixed version.

Jean

--bcaec501621547b9b904c7eda5f2
Content-Type: application/octet-stream; 
	name="evtchn_alloc_unbound_domain.patch"
Content-Disposition: attachment; 
	filename="evtchn_alloc_unbound_domain.patch"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_h67t0ft80

Y29tbWl0IDhhMzNmYzg5ODcyNjcxNzg3OWY1OWY1YjkxMTEyNWI4NzU5OGU3OGEKQXV0aG9yOiBK
ZWFuIEd1eWFkZXIgPGplYW4uZ3V5YWRlckBjaXRyaXguY29tPgpEYXRlOiAgIFRodSBBdWcgMiAx
NjoxOToyMyAyMDEyICswMTAwCgogICAgeGVuOiBldmVudHMsIGV4cG9zZXMgZXZ0Y2huX2FsbG9j
X3VuYm91bmRfZG9tYWluCiAgICAKICAgIEV4cG9zZXMgZXZ0Y2huX2FsbG9jX3VuYm91bmRfZG9t
YWluIHRvIHRoZSByZXN0IG9mCiAgICBYZW4gc28gd2UgY2FuIGNyZWF0ZSBhbGxvY2F0ZWQgdW5i
b3VuZCBldnRjaG4gd2l0aGluIFhlbi4KCmRpZmYgLS1naXQgYS94ZW4vY29tbW9uL2V2ZW50X2No
YW5uZWwuYyBiL3hlbi9jb21tb24vZXZlbnRfY2hhbm5lbC5jCmluZGV4IDUzNzc3ZjguLjZhMGUz
NDIgMTAwNjQ0Ci0tLSBhL3hlbi9jb21tb24vZXZlbnRfY2hhbm5lbC5jCisrKyBiL3hlbi9jb21t
b24vZXZlbnRfY2hhbm5lbC5jCkBAIC0xNTksMzYgKzE1OSw1MSBAQCBzdGF0aWMgaW50IGdldF9m
cmVlX3BvcnQoc3RydWN0IGRvbWFpbiAqZCkKIAogc3RhdGljIGxvbmcgZXZ0Y2huX2FsbG9jX3Vu
Ym91bmQoZXZ0Y2huX2FsbG9jX3VuYm91bmRfdCAqYWxsb2MpCiB7Ci0gICAgc3RydWN0IGV2dGNo
biAqY2huOwogICAgIHN0cnVjdCBkb21haW4gKmQ7Ci0gICAgaW50ICAgICAgICAgICAgcG9ydDsK
LSAgICBkb21pZF90ICAgICAgICBkb20gPSBhbGxvYy0+ZG9tOwogICAgIGxvbmcgICAgICAgICAg
IHJjOwogCi0gICAgcmMgPSByY3VfbG9ja190YXJnZXRfZG9tYWluX2J5X2lkKGRvbSwgJmQpOwor
ICAgIHJjID0gcmN1X2xvY2tfdGFyZ2V0X2RvbWFpbl9ieV9pZChhbGxvYy0+ZG9tLCAmZCk7CiAg
ICAgaWYgKCByYyApCiAgICAgICAgIHJldHVybiByYzsKIAorICAgIHJjID0gZXZ0Y2huX2FsbG9j
X3VuYm91bmRfZG9tYWluKGQsICZhbGxvYy0+cG9ydCwgYWxsb2MtPnJlbW90ZV9kb20pOworICAg
IGlmICggcmMgKQorICAgICAgICBFUlJPUl9FWElUX0RPTSgoaW50KXJjLCBkKTsKKworIG91dDoK
KyAgICByY3VfdW5sb2NrX2RvbWFpbihkKTsKKworICAgIHJldHVybiByYzsKK30KKworaW50IGV2
dGNobl9hbGxvY191bmJvdW5kX2RvbWFpbihzdHJ1Y3QgZG9tYWluICpkLCBldnRjaG5fcG9ydF90
ICpwb3J0LAorICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBkb21pZF90IHJlbW90ZV9k
b21pZCkKK3sKKyAgICBzdHJ1Y3QgZXZ0Y2huICpjaG47CisgICAgaW50ICAgICAgICAgICByYzsK
KyAgICBpbnQgICAgICAgICAgIGZyZWVfcG9ydDsKKwogICAgIHNwaW5fbG9jaygmZC0+ZXZlbnRf
bG9jayk7CiAKLSAgICBpZiAoIChwb3J0ID0gZ2V0X2ZyZWVfcG9ydChkKSkgPCAwICkKLSAgICAg
ICAgRVJST1JfRVhJVF9ET00ocG9ydCwgZCk7Ci0gICAgY2huID0gZXZ0Y2huX2Zyb21fcG9ydChk
LCBwb3J0KTsKKyAgICByYyA9IGZyZWVfcG9ydCA9IGdldF9mcmVlX3BvcnQoZCk7CisgICAgaWYg
KCBmcmVlX3BvcnQgPCAwICkKKyAgICAgICAgZ290byBvdXQ7CiAKLSAgICByYyA9IHhzbV9ldnRj
aG5fdW5ib3VuZChkLCBjaG4sIGFsbG9jLT5yZW1vdGVfZG9tKTsKKyAgICBjaG4gPSBldnRjaG5f
ZnJvbV9wb3J0KGQsIGZyZWVfcG9ydCk7CisgICAgcmMgPSB4c21fZXZ0Y2huX3VuYm91bmQoZCwg
Y2huLCByZW1vdGVfZG9taWQpOwogICAgIGlmICggcmMgKQogICAgICAgICBnb3RvIG91dDsKIAog
ICAgIGNobi0+c3RhdGUgPSBFQ1NfVU5CT1VORDsKLSAgICBpZiAoIChjaG4tPnUudW5ib3VuZC5y
ZW1vdGVfZG9taWQgPSBhbGxvYy0+cmVtb3RlX2RvbSkgPT0gRE9NSURfU0VMRiApCisgICAgaWYg
KCAoY2huLT51LnVuYm91bmQucmVtb3RlX2RvbWlkID0gcmVtb3RlX2RvbWlkKSA9PSBET01JRF9T
RUxGICkKICAgICAgICAgY2huLT51LnVuYm91bmQucmVtb3RlX2RvbWlkID0gY3VycmVudC0+ZG9t
YWluLT5kb21haW5faWQ7CiAKLSAgICBhbGxvYy0+cG9ydCA9IHBvcnQ7CisgICAgKnBvcnQgPSBm
cmVlX3BvcnQ7CisgICAgLyogRXZlcnl0aGluZyBpcyBmaW5lLCByZXR1cm5zIDAgKi8KKyAgICBy
YyA9IDA7CiAKICBvdXQ6CiAgICAgc3Bpbl91bmxvY2soJmQtPmV2ZW50X2xvY2spOwotICAgIHJj
dV91bmxvY2tfZG9tYWluKGQpOwotCiAgICAgcmV0dXJuIHJjOwogfQogCmRpZmYgLS1naXQgYS94
ZW4vaW5jbHVkZS94ZW4vZXZlbnQuaCBiL3hlbi9pbmNsdWRlL3hlbi9ldmVudC5oCmluZGV4IDcx
YzNlOTIuLjFhMGM4MzIgMTAwNjQ0Ci0tLSBhL3hlbi9pbmNsdWRlL3hlbi9ldmVudC5oCisrKyBi
L3hlbi9pbmNsdWRlL3hlbi9ldmVudC5oCkBAIC02OSw2ICs2OSw5IEBAIGludCBndWVzdF9lbmFi
bGVkX2V2ZW50KHN0cnVjdCB2Y3B1ICp2LCB1aW50MzJfdCB2aXJxKTsKIC8qIE5vdGlmeSByZW1v
dGUgZW5kIG9mIGEgWGVuLWF0dGFjaGVkIGV2ZW50IGNoYW5uZWwuKi8KIHZvaWQgbm90aWZ5X3Zp
YV94ZW5fZXZlbnRfY2hhbm5lbChzdHJ1Y3QgZG9tYWluICpsZCwgaW50IGxwb3J0KTsKIAoraW50
IGV2dGNobl9hbGxvY191bmJvdW5kX2RvbWFpbihzdHJ1Y3QgZG9tYWluICpkLCBldnRjaG5fcG9y
dF90ICpwb3J0LAorICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBkb21pZF90IHJlbW90
ZV9kb21pZCk7CisKIC8qIEludGVybmFsIGV2ZW50IGNoYW5uZWwgb2JqZWN0IGFjY2Vzc29ycyAq
LwogI2RlZmluZSBidWNrZXRfZnJvbV9wb3J0KGQscCkgXAogICAgICgoZCktPmV2dGNoblsocCkv
RVZUQ0hOU19QRVJfQlVDS0VUXSkK
--bcaec501621547b9b904c7eda5f2
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--bcaec501621547b9b904c7eda5f2--


From xen-devel-bounces@lists.xen.org Thu Aug 23 13:20:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 13:20:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4XJw-0006k6-So; Thu, 23 Aug 2012 13:19:36 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T4XJv-0006k1-A7
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 13:19:35 +0000
Received: from [85.158.138.51:27111] by server-9.bemta-3.messagelabs.com id
	95/36-23952-6ED26305; Thu, 23 Aug 2012 13:19:34 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-174.messagelabs.com!1345727974!19608791!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA0MjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12782 invoked from network); 23 Aug 2012 13:19:34 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-6.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 13:19:34 -0000
X-IronPort-AV: E=Sophos;i="4.80,300,1344211200"; d="scan'208";a="14147478"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	23 Aug 2012 13:18:33 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 23 Aug 2012 14:18:33 +0100
Message-ID: <1345727912.12501.83.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@citrix.com>
Date: Thu, 23 Aug 2012 14:18:32 +0100
In-Reply-To: <96d6442c911f6ed7f6cb24670901b151fa1570d6.1345552068.git.julien.grall@citrix.com>
References: <cover.1345552068.git.julien.grall@citrix.com>
	<96d6442c911f6ed7f6cb24670901b151fa1570d6.1345552068.git.julien.grall@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "christian.limpach@gmail.com" <christian.limpach@gmail.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [XEN][RFC PATCH V2 01/17] hvm: Modify interface to
 support multiple ioreq server
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
> index 27b3de5..49d1ca0 100644
> --- a/xen/include/asm-x86/hvm/domain.h
> +++ b/xen/include/asm-x86/hvm/domain.h
[...]
>  struct hvm_domain {
> +    /* Use for the IO handles by Xen */
>      struct hvm_ioreq_page  ioreq;
> -    struct hvm_ioreq_page  buf_ioreq;
> +    struct hvm_ioreq_server *ioreq_server_list;
> +    uint32_t		     nr_ioreq_server;
> +    spinlock_t               ioreq_server_lock;

There's some whitespace weirdness here plus some in
xen/include/asm-x86/hvm/vcpu.h and xen/include/public/hvm/hvm_op.h.

> diff --git a/xen/include/public/hvm/ioreq.h b/xen/include/public/hvm/ioreq.h
> index 4022a1d..87aacd3 100644
> --- a/xen/include/public/hvm/ioreq.h
> +++ b/xen/include/public/hvm/ioreq.h
> @@ -34,6 +34,7 @@
>  
>  #define IOREQ_TYPE_PIO          0 /* pio */
>  #define IOREQ_TYPE_COPY         1 /* mmio ops */
> +#define IOREQ_TYPE_PCI_CONFIG   2 /* pci config space ops */
>  #define IOREQ_TYPE_TIMEOFFSET   7
>  #define IOREQ_TYPE_INVALIDATE   8 /* mapcache */

I wonder why we skip 2-6 now -- perhaps they used to be something else
and we are skipping them to avoid strange errors? In which case adding
the new one as 9 might be a good idea.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 13:20:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 13:20:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4XJw-0006k6-So; Thu, 23 Aug 2012 13:19:36 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T4XJv-0006k1-A7
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 13:19:35 +0000
Received: from [85.158.138.51:27111] by server-9.bemta-3.messagelabs.com id
	95/36-23952-6ED26305; Thu, 23 Aug 2012 13:19:34 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-174.messagelabs.com!1345727974!19608791!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA0MjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12782 invoked from network); 23 Aug 2012 13:19:34 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-6.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 13:19:34 -0000
X-IronPort-AV: E=Sophos;i="4.80,300,1344211200"; d="scan'208";a="14147478"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	23 Aug 2012 13:18:33 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 23 Aug 2012 14:18:33 +0100
Message-ID: <1345727912.12501.83.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@citrix.com>
Date: Thu, 23 Aug 2012 14:18:32 +0100
In-Reply-To: <96d6442c911f6ed7f6cb24670901b151fa1570d6.1345552068.git.julien.grall@citrix.com>
References: <cover.1345552068.git.julien.grall@citrix.com>
	<96d6442c911f6ed7f6cb24670901b151fa1570d6.1345552068.git.julien.grall@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "christian.limpach@gmail.com" <christian.limpach@gmail.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [XEN][RFC PATCH V2 01/17] hvm: Modify interface to
 support multiple ioreq server
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
> index 27b3de5..49d1ca0 100644
> --- a/xen/include/asm-x86/hvm/domain.h
> +++ b/xen/include/asm-x86/hvm/domain.h
[...]
>  struct hvm_domain {
> +    /* Used for the IO requests handled by Xen */
>      struct hvm_ioreq_page  ioreq;
> -    struct hvm_ioreq_page  buf_ioreq;
> +    struct hvm_ioreq_server *ioreq_server_list;
> +    uint32_t		     nr_ioreq_server;
> +    spinlock_t               ioreq_server_lock;

There's some whitespace weirdness here plus some in
xen/include/asm-x86/hvm/vcpu.h and xen/include/public/hvm/hvm_op.h.

> diff --git a/xen/include/public/hvm/ioreq.h b/xen/include/public/hvm/ioreq.h
> index 4022a1d..87aacd3 100644
> --- a/xen/include/public/hvm/ioreq.h
> +++ b/xen/include/public/hvm/ioreq.h
> @@ -34,6 +34,7 @@
>  
>  #define IOREQ_TYPE_PIO          0 /* pio */
>  #define IOREQ_TYPE_COPY         1 /* mmio ops */
> +#define IOREQ_TYPE_PCI_CONFIG   2 /* pci config space ops */
>  #define IOREQ_TYPE_TIMEOFFSET   7
>  #define IOREQ_TYPE_INVALIDATE   8 /* mapcache */

I wonder why we skip 2-6 now -- perhaps they used to be something else
and we are skipping them to avoid strange errors? In which case adding
the new one as 9 might be a good idea.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 13:21:07 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 13:21:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4XL2-0006n2-BV; Thu, 23 Aug 2012 13:20:44 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T4XL0-0006mu-UN
	for xen-devel@lists.xensource.com; Thu, 23 Aug 2012 13:20:43 +0000
Received: from [85.158.143.99:42045] by server-2.bemta-4.messagelabs.com id
	95/FC-21239-A2E26305; Thu, 23 Aug 2012 13:20:42 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-12.tower-216.messagelabs.com!1345728040!22118132!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA0MjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25679 invoked from network); 23 Aug 2012 13:20:41 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-12.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 13:20:41 -0000
X-IronPort-AV: E=Sophos;i="4.80,300,1344211200"; d="scan'208";a="14147522"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	23 Aug 2012 13:20:40 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 23 Aug 2012 14:20:40 +0100
Date: Thu, 23 Aug 2012 14:20:17 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Wei Xu <wei.xu.prc@gmail.com>
In-Reply-To: <CAH=9XOYuPpfwoWd4swSzMTJFcFVhhXAQJyjMpp2FiVqm7xDf-w@mail.gmail.com>
Message-ID: <alpine.DEB.2.02.1208231418010.15568@kaball.uk.xensource.com>
References: <CAH=9XOZC46yEKjoQKRgD3aEUaqvNcNF4Jcs=0HDdxJbAOpWEXw@mail.gmail.com>
	<CAH=9XObhC86Cg7kWdrUvPtPQEX94RcGiLH621pLbye3Q02=9ww@mail.gmail.com>
	<20120820140241.GD7847@phenom.dumpdata.com>
	<alpine.DEB.2.02.1208201639390.15568@kaball.uk.xensource.com>
	<CAH=9XOYuPpfwoWd4swSzMTJFcFVhhXAQJyjMpp2FiVqm7xDf-w@mail.gmail.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: xen <xen@lists.fedoraproject.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [Fedora-xen] DomU console driver not works for
 Fedora17 in HVM mode with Xen 4.1.2
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 23 Aug 2012, Wei Xu wrote:
> Thanks for your reply.
> 
> I have tried the method, but it still doesn't work: there is no "hvc0" device file in my "/dev" directory.
> Is that the root cause?

Reading the previous message again, it is clear that you are still
using xm/xend as the toolstack to create VMs. Unfortunately xend doesn't
support PV consoles for HVM guests; you need to use the new xl toolstack
if you'd like that functionality.

See: http://wiki.xen.org/wiki/XL

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 13:22:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 13:22:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4XMI-0006t7-Q7; Thu, 23 Aug 2012 13:22:02 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T4XMH-0006ss-O3
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 13:22:01 +0000
Received: from [85.158.138.51:45956] by server-4.bemta-3.messagelabs.com id
	A0/FB-04276-87E26305; Thu, 23 Aug 2012 13:22:00 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-174.messagelabs.com!1345728119!27722148!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA0MjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30977 invoked from network); 23 Aug 2012 13:22:00 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-5.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 13:22:00 -0000
X-IronPort-AV: E=Sophos;i="4.80,300,1344211200"; d="scan'208";a="14147553"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	23 Aug 2012 13:21:59 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 23 Aug 2012 14:21:59 +0100
Message-ID: <1345728118.12501.86.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@citrix.com>
Date: Thu, 23 Aug 2012 14:21:58 +0100
In-Reply-To: <8747cb48d50a10784df56904db29ca8b6e8c5d80.1345552068.git.julien.grall@citrix.com>
References: <cover.1345552068.git.julien.grall@citrix.com>
	<8747cb48d50a10784df56904db29ca8b6e8c5d80.1345552068.git.julien.grall@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "christian.limpach@gmail.com" <christian.limpach@gmail.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [XEN][RFC PATCH V2 09/17] xc: Add the hypercall for
 multiple servers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-08-22 at 13:31 +0100, Julien Grall wrote:
> This patch adds 5 hypercalls to register servers, IO ranges and PCI.
> 
> Signed-off-by: Julien Grall <julien.grall@citrix.com>

Looks correct to me at least so far as the use of the hypercall buffers
goes, thanks.

Acked-by: Ian Campbell <ian.campbell@citrix.com>



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 13:26:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 13:26:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4XQZ-00079s-FW; Thu, 23 Aug 2012 13:26:27 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1T4XQX-00079h-JR
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 13:26:25 +0000
Received: from [85.158.143.99:50648] by server-3.bemta-4.messagelabs.com id
	46/FD-08232-08F26305; Thu, 23 Aug 2012 13:26:24 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-10.tower-216.messagelabs.com!1345728380!21203716!1
X-Originating-IP: [209.85.213.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4278 invoked from network); 23 Aug 2012 13:26:21 -0000
Received: from mail-yx0-f173.google.com (HELO mail-yx0-f173.google.com)
	(209.85.213.173)
	by server-10.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 13:26:21 -0000
Received: by yenm4 with SMTP id m4so176561yen.32
	for <xen-devel@lists.xen.org>; Thu, 23 Aug 2012 06:26:19 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:cc:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=+DJMWrtWOFFUiunw5mecPWy5Xiszeu8+raQt2YSGPdQ=;
	b=0pNcrspoYV/3KT/Bncj50iW2yB3S15V5QxwD+rEihLC96gR1Fvaf34qPKjGwBO9Cgg
	1AFjOaSsMAY8XRqYc2vWuSKTApAXXLVEQX5XwvYDntRUben7iid8Gts2QoXOBcLfZX7J
	ZNJIez8DkG4rP6NzN8kPQ3zGU7Q3HOOgGMrdzLPGexO5Nl/Naqs4w/zNlmemZSoQbnPK
	MdKHSSou8/Tl9tqDj+TexdvuN0A+CHcXJYBIQ6PEcZJlJq9NZ9o2PbBKRJBhEgiDxweS
	7jdmp0WqOI0GGQ9Y2Mq0currjiMRXbOUOmQZHYcicGTsLqFtqC7G3TfOrbVZi5nY4ez0
	I9dg==
Received: by 10.100.84.18 with SMTP id h18mr365971anb.57.1345728379832;
	Thu, 23 Aug 2012 06:26:19 -0700 (PDT)
Received: from [10.80.114.52] (firewall.ctxuk.citrix.com. [62.200.22.2])
	by mx.google.com with ESMTPS id h8sm6751651ank.9.2012.08.23.06.26.16
	(version=SSLv3 cipher=OTHER); Thu, 23 Aug 2012 06:26:18 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.32.0.111121
Date: Thu, 23 Aug 2012 14:26:09 +0100
From: Keir Fraser <keir.xen@gmail.com>
To: Ian Campbell <Ian.Campbell@citrix.com>,
	Julien Grall <julien.grall@citrix.com>
Message-ID: <CC5BEE01.3CBD1%keir.xen@gmail.com>
Thread-Topic: [Xen-devel] [XEN][RFC PATCH V2 01/17] hvm: Modify interface to
	support multiple ioreq server
Thread-Index: Ac2BMts6OtxgIZ01IU21qAuqa9Ijjw==
In-Reply-To: <1345727912.12501.83.camel@zakaz.uk.xensource.com>
Mime-version: 1.0
Cc: "christian.limpach@gmail.com" <christian.limpach@gmail.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [XEN][RFC PATCH V2 01/17] hvm: Modify interface to
 support multiple ioreq server
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 23/08/2012 14:18, "Ian Campbell" <Ian.Campbell@citrix.com> wrote:

>> diff --git a/xen/include/public/hvm/ioreq.h b/xen/include/public/hvm/ioreq.h
>> index 4022a1d..87aacd3 100644
>> --- a/xen/include/public/hvm/ioreq.h
>> +++ b/xen/include/public/hvm/ioreq.h
>> @@ -34,6 +34,7 @@
>>  
>>  #define IOREQ_TYPE_PIO          0 /* pio */
>>  #define IOREQ_TYPE_COPY         1 /* mmio ops */
>> +#define IOREQ_TYPE_PCI_CONFIG   2 /* pci config space ops */
>>  #define IOREQ_TYPE_TIMEOFFSET   7
>>  #define IOREQ_TYPE_INVALIDATE   8 /* mapcache */
> 
> I wonder why we skip 2-6 now -- perhaps they used to be something else
> and we are skipping them to avoid strange errors? In which case adding
> the new one as 9 might be a good idea.

They were almost certainly used for representing R-M-W ALU operations back
in the days of the old IO emulator, very long ago. Still, there's no harm in
leaving them unused.

 -- Keir




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 13:28:12 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 13:28:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4XS0-0007El-Tt; Thu, 23 Aug 2012 13:27:56 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T4XS0-0007Ef-1O
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 13:27:56 +0000
Received: from [85.158.138.51:49060] by server-11.bemta-3.messagelabs.com id
	72/93-23152-BDF26305; Thu, 23 Aug 2012 13:27:55 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-174.messagelabs.com!1345728472!8726516!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA0MjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28635 invoked from network); 23 Aug 2012 13:27:53 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-13.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 13:27:53 -0000
X-IronPort-AV: E=Sophos;i="4.80,300,1344211200"; d="scan'208";a="14147697"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	23 Aug 2012 13:27:52 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 23 Aug 2012 14:27:52 +0100
Message-ID: <1345728471.12501.90.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@citrix.com>
Date: Thu, 23 Aug 2012 14:27:51 +0100
In-Reply-To: <fcf046ea782dda6cacb3bf11813bf1d16e531e6b.1345552068.git.julien.grall@citrix.com>
References: <cover.1345552068.git.julien.grall@citrix.com>
	<fcf046ea782dda6cacb3bf11813bf1d16e531e6b.1345552068.git.julien.grall@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "christian.limpach@gmail.com" <christian.limpach@gmail.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [XEN][RFC PATCH V2 11/17] xc: modify save/restore
 to support multiple device models
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-08-22 at 13:31 +0100, Julien Grall wrote:
> - add save/restore of the new special pages and remove unused ones
> - modify the save file structure to allow multiple qemu states
> 
> Signed-off-by: Julien Grall <julien.grall@citrix.com>
> ---
>  tools/libxc/xc_domain_restore.c |  150 +++++++++++++++++++++++++++++----------
>  tools/libxc/xc_domain_save.c    |    6 +-

As you've changed the protocol, please can you update the docs in
xg_save_restore.h.

> @@ -103,6 +103,9 @@ static ssize_t rdexact(xc_interface *xch, struct restore_ctx *ctx,
>  #else
>  #define RDEXACT read_exact
>  #endif
> +
> +#define QEMUSIG_SIZE 21
> +
>  /*
>  ** In the state file (or during transfer), all page-table pages are
>  ** converted into a 'canonical' form where references to actual mfns
> @@ -467,7 +522,7 @@ static int buffer_tail_hvm(xc_interface *xch, struct restore_ctx *ctx,
>                             int vcpuextstate, uint32_t vcpuextstate_size)
>  {
>      uint8_t *tmp;
> -    unsigned char qemusig[21];
> +    unsigned char qemusig[QEMUSIG_SIZE + 1];

An extra + 1 here?

[...]
> -    qemusig[20] = '\0';
> +    qemusig[QEMUSIG_SIZE] = '\0';

This is now one byte bigger than it used to be.

Perhaps this is an unrelated bug fix (I haven't checked the real length
of the sig), in which case please can you split it out and submit it
separately?

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

MIME-Version: 1.0
Cc: "christian.limpach@gmail.com" <christian.limpach@gmail.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [XEN][RFC PATCH V2 11/17] xc: modify save/restore
 to support multiple device models
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-08-22 at 13:31 +0100, Julien Grall wrote:
> - add save/restore new special pages and remove unused
>     - modify save file structure to allow multiple qemu states
> 
> Signed-off-by: Julien Grall <julien.grall@citrix.com>
> ---
>  tools/libxc/xc_domain_restore.c |  150 +++++++++++++++++++++++++++++----------
>  tools/libxc/xc_domain_save.c    |    6 +-

As you've changed the protocol, please can you update the docs in
xg_save_restore.h.

> @@ -103,6 +103,9 @@ static ssize_t rdexact(xc_interface *xch, struct restore_ctx *ctx,
>  #else
>  #define RDEXACT read_exact
>  #endif
> +
> +#define QEMUSIG_SIZE 21
> +
>  /*
>  ** In the state file (or during transfer), all page-table pages are
>  ** converted into a 'canonical' form where references to actual mfns
> @@ -467,7 +522,7 @@ static int buffer_tail_hvm(xc_interface *xch, struct restore_ctx *ctx,
>                             int vcpuextstate, uint32_t vcpuextstate_size)
>  {
>      uint8_t *tmp;
> -    unsigned char qemusig[21];
> +    unsigned char qemusig[QEMUSIG_SIZE + 1];

An extra + 1 here?

[...]
> -    qemusig[20] = '\0';
> +    qemusig[QEMUSIG_SIZE] = '\0';

This is one bigger than it used to be now.

Perhaps this is an unrelated bug fix (I haven't checked the real length of
the sig), in which case please can you split it out and submit it
separately?
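To make the size change concrete, here is a standalone sketch of the two buffer layouts in the quoted diff (illustrative only, not the actual libxc code): moving the NUL terminator from index 20 to index QEMUSIG_SIZE (21) makes the signature region one byte longer than it used to be.

```c
#include <assert.h>
#include <string.h>

#define QEMUSIG_SIZE 21

static int old_sig_len(void)
{
    unsigned char qemusig[21];              /* original declaration */
    memset(qemusig, 'x', sizeof(qemusig));
    qemusig[20] = '\0';                     /* NUL at index 20 -> 20 sig bytes */
    return (int)strlen((char *)qemusig);
}

static int new_sig_len(void)
{
    unsigned char qemusig[QEMUSIG_SIZE + 1];  /* the questioned "+ 1" */
    memset(qemusig, 'x', sizeof(qemusig));
    qemusig[QEMUSIG_SIZE] = '\0';           /* NUL at index 21 -> 21 sig bytes */
    return (int)strlen((char *)qemusig);
}
```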

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 13:30:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 13:30:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4XUQ-0007NQ-FJ; Thu, 23 Aug 2012 13:30:26 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T4XUP-0007NH-94
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 13:30:25 +0000
Received: from [85.158.139.83:6487] by server-8.bemta-5.messagelabs.com id
	9B/F4-02481-07036305; Thu, 23 Aug 2012 13:30:24 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-182.messagelabs.com!1345728623!20268439!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA0MjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14275 invoked from network); 23 Aug 2012 13:30:24 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-11.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 13:30:24 -0000
X-IronPort-AV: E=Sophos;i="4.80,300,1344211200"; d="scan'208";a="14147767"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	23 Aug 2012 13:30:18 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 23 Aug 2012 14:30:18 +0100
Message-ID: <1345728617.12501.92.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@citrix.com>
Date: Thu, 23 Aug 2012 14:30:17 +0100
In-Reply-To: <51efcbff92f713286b5839884769ef34ab0c39f7.1345552068.git.julien.grall@citrix.com>
References: <cover.1345552068.git.julien.grall@citrix.com>
	<51efcbff92f713286b5839884769ef34ab0c39f7.1345552068.git.julien.grall@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "christian.limpach@gmail.com" <christian.limpach@gmail.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [XEN][RFC PATCH V2 12/17] xl: Add interface to
 handle qemu disaggregation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-08-22 at 13:31 +0100, Julien Grall wrote:
> This patch modifies libxl interface for qemu disaggregation.

I'd rather see the interface changes in the same patch as the
implementation of the new interfaces.

> For the moment, due to some dependencies between devices, we
> can't let the user choose which QEMU emulate a device.
> 
> Moreoever this patch adds an "id" field to nic interface.
> It will be used in config file to specify which QEMU handle
> the network card.

Is domid+devid not sufficient to identify which nic?

> A possible disaggregation is:
>     - UI: Emulate graphic card, USB, keyboard, mouse, default devices
>     (PIIX4, root bridge, ...)
>     - IDE: Emulate disk
>     - Serial: Emulate serial port
>     - Audio: Emulate audio card
>     - Net: Emulate one or more network cards, multiple QEMU can emulate
>     different card. The emulated card is specified with its nic ID.
> 
> Signed-off-by: Julien Grall <julien.grall@citrix.com>
> ---
>  tools/libxl/libxl.h         |    3 +++
>  tools/libxl/libxl_types.idl |   15 +++++++++++++++
>  2 files changed, 18 insertions(+), 0 deletions(-)
> 
> diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
> index c614d6f..71d4808 100644
> --- a/tools/libxl/libxl.h
> +++ b/tools/libxl/libxl.h
> @@ -307,6 +307,7 @@ void libxl_cpuid_dispose(libxl_cpuid_policy_list *cpuid_list);
>  #define LIBXL_PCI_FUNC_ALL (~0U)
>  
>  typedef uint32_t libxl_domid;
> +typedef uint32_t libxl_dmid;
>  
>  /*
>   * Formatting Enumerations.
> @@ -478,12 +479,14 @@ typedef struct {
>      libxl_domain_build_info b_info;
>  
>      int num_disks, num_nics, num_pcidevs, num_vfbs, num_vkbs;
> +    int num_dms;
>  
>      libxl_device_disk *disks;
>      libxl_device_nic *nics;
>      libxl_device_pci *pcidevs;
>      libxl_device_vfb *vfbs;
>      libxl_device_vkb *vkbs;
> +    libxl_dm *dms;
>  
>      libxl_action_on_shutdown on_poweroff;
>      libxl_action_on_shutdown on_reboot;
> diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
> index daa8c79..36c802a 100644
> --- a/tools/libxl/libxl_types.idl
> +++ b/tools/libxl/libxl_types.idl
> @@ -246,6 +246,20 @@ libxl_domain_sched_params = Struct("domain_sched_params",[
>      ("extratime",    integer, {'init_val': 'LIBXL_DOMAIN_SCHED_PARAM_EXTRATIME_DEFAULT'}),
>      ])
>  
> +libxl_dm_cap = Enumeration("dm_cap", [
> +    (1, "UI"), # Emulate all UI + default device

What does "default device" equate to?

> +    (2, "IDE"), # Emulate IDE
> +    (4, "SERIAL"), # Emulate Serial
> +    (8, "AUDIO"), # Emulate audio
> +    ])
> +
> +libxl_dm = Struct("dm", [
> +    ("name",         string),
> +    ("path",         string),
> +    ("capabilities",   uint64),

uint64 and not libxl_dm_cap?
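For illustration, a standalone sketch of why a bitmask field wants a plain integer rather than a single enum value (names mirror the idl fragment above but this is illustrative, not libxl's generated code): the stored value is an OR of several dm_cap flags, which is not itself any one enumeration member.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative flag values mirroring the quoted dm_cap enumeration. */
enum {
    DM_CAP_UI     = 1,
    DM_CAP_IDE    = 2,
    DM_CAP_SERIAL = 4,
    DM_CAP_AUDIO  = 8,
};

/* A device model emulating both IDE and audio carries the OR of the
 * two flags -- a value (10) that is not a member of the enumeration. */
static uint64_t caps_for_ide_and_audio(void)
{
    uint64_t capabilities = 0;
    capabilities |= DM_CAP_IDE;
    capabilities |= DM_CAP_AUDIO;
    return capabilities;
}
```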

> +    ("vifs",         libxl_string_list),
> +    ])
> +
>  libxl_domain_build_info = Struct("domain_build_info",[
>      ("max_vcpus",       integer),
>      ("avail_vcpus",     libxl_bitmap),
> @@ -367,6 +381,7 @@ libxl_device_nic = Struct("device_nic", [
>      ("nictype", libxl_nic_type),
>      ("rate_bytes_per_interval", uint64),
>      ("rate_interval_usecs", uint32),
> +    ("id", string),
>      ])
>  
>  libxl_device_pci = Struct("device_pci", [



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 13:36:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 13:36:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4XaK-0007eR-Dk; Thu, 23 Aug 2012 13:36:32 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T4XaI-0007eM-Rv
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 13:36:31 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1345728950!1678341!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA0MjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19572 invoked from network); 23 Aug 2012 13:35:50 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 13:35:50 -0000
X-IronPort-AV: E=Sophos;i="4.80,300,1344211200"; d="scan'208";a="14147920"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	23 Aug 2012 13:35:50 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 23 Aug 2012 14:35:50 +0100
Message-ID: <1345728948.12501.98.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@citrix.com>
Date: Thu, 23 Aug 2012 14:35:48 +0100
In-Reply-To: <557fe87e4a6c0defdc6549e23e8e5e7b2ebb7a9f.1345552068.git.julien.grall@citrix.com>
References: <cover.1345552068.git.julien.grall@citrix.com>
	<557fe87e4a6c0defdc6549e23e8e5e7b2ebb7a9f.1345552068.git.julien.grall@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "christian.limpach@gmail.com" <christian.limpach@gmail.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [XEN][RFC PATCH V2 14/17] xl-parsing: Parse new
 device_models option
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-08-22 at 13:32 +0100, Julien Grall wrote:
> Add new option "device_models". The user can specify the capability of the
> QEMU (ui, vifs, ...). This option only works with QEMU upstream (qemu-xen).
> 
> For instance:
> device_models= [ 'name=all,vifs=nic1', 'name=qvga,ui', 'name=qide,ide' ]

Please can you patch docs/man/xl.cfg.pod.5 with a description of this
syntax. Possibly just a stub referencing
docs/man/xl-device-models.markdown in the same manner as
xl-disk-configuration.txt, xl-numa-placement.markdown,
xl-network-configuration.markdown etc.

iirc you can give multiple vifs -- what does that syntax look like?

I didn't ask before -- what does naming the dm give you? Is it just used
for ui things like logging or can you cross reference this in some way?

> Each device model can also take a path argument which override the default one.
> It's usefull for debugging.

      useful
> 
> Signed-off-by: Julien Grall <julien.grall@citrix.com>
> ---
>  tools/libxl/Makefile     |    2 +-
>  tools/libxl/libxlu_dm.c  |   96 ++++++++++++++++++++++++++++++++++++++++++++++
>  tools/libxl/libxlutil.h  |    5 ++
>  tools/libxl/xl_cmdimpl.c |   29 +++++++++++++-
>  4 files changed, 130 insertions(+), 2 deletions(-)
>  create mode 100644 tools/libxl/libxlu_dm.c
> 
> diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
> index 47fb110..2b58721 100644
> --- a/tools/libxl/Makefile
> +++ b/tools/libxl/Makefile
> @@ -79,7 +79,7 @@ AUTOINCS= libxlu_cfg_y.h libxlu_cfg_l.h _libxl_list.h _paths.h \
>  AUTOSRCS= libxlu_cfg_y.c libxlu_cfg_l.c
>  AUTOSRCS += _libxl_save_msgs_callout.c _libxl_save_msgs_helper.c
>  LIBXLU_OBJS = libxlu_cfg_y.o libxlu_cfg_l.o libxlu_cfg.o \
> -	libxlu_disk_l.o libxlu_disk.o libxlu_vif.o libxlu_pci.o
> +	libxlu_disk_l.o libxlu_disk.o libxlu_vif.o libxlu_pci.o libxlu_dm.o
>  $(LIBXLU_OBJS): CFLAGS += $(CFLAGS_libxenctrl) # For xentoollog.h
>  
>  CLIENTS = xl testidl libxl-save-helper
> diff --git a/tools/libxl/libxlu_dm.c b/tools/libxl/libxlu_dm.c
> new file mode 100644
> index 0000000..9f0a347
> --- /dev/null
> +++ b/tools/libxl/libxlu_dm.c
> @@ -0,0 +1,96 @@
> +#include "libxl_osdeps.h" /* must come before any other headers */
> +#include <stdlib.h>
> +#include "libxlu_internal.h"
> +#include "libxlu_cfg_i.h"
> +
> +static void split_string_into_string_list(const char *str,
> +                                          const char *delim,
> +                                          libxl_string_list *psl)

Is this a cut-n-paste of the one in xl_cmdimpl.c or did it change?

Probably better to add this as a common utility function somewhere.

> +{
> [...]
> +}
> +
> +int xlu_dm_parse(XLU_Config *cfg, const char *spec,
> +                 libxl_dm *dm)
> +{
> +    char *buf = strdup(spec);
> +    char *p, *p2;
> +    int rc = 0;
> +
> +    p = strtok(buf, ",");
> +    if (!p)
> +        goto skip_dm;
> +    do {
> +        while (*p == ' ')
> +            p++;
> +        if ((p2 = strchr(p, '=')) == NULL) {
> +            if (!strcmp(p, "ui"))

libxl provides a libxl_BLAH_from_string for enums in the idl, which
might be helpful here?

> +                dm->capabilities |= LIBXL_DM_CAP_UI;
> +            else if (!strcmp(p, "ide"))
> +                dm->capabilities |= LIBXL_DM_CAP_IDE;
> +            else if (!strcmp(p, "serial"))
> +                dm->capabilities |= LIBXL_DM_CAP_SERIAL;
> +            else if (!strcmp(p, "audio"))
> +                dm->capabilities |= LIBXL_DM_CAP_AUDIO;
> +        } else {
> +            *p2 = '\0';
> +            if (!strcmp(p, "name"))
> +                dm->name = strdup(p2 + 1);
> +            else if (!strcmp(p, "path"))
> +                dm->path = strdup(p2 + 1);
> +            else if (!strcmp(p, "vifs"))
> +                split_string_into_string_list(p2 + 1, ";", &dm->vifs);
> +       }
> +    } while ((p = strtok(NULL, ",")) != NULL);
> +
> +    if (!dm->name && dm->path)
> +    {
> +        fprintf(stderr, "xl: Unable to parse device_deamon\n");
> +        exit(-ERROR_FAIL);
> +    }
> +skip_dm:
> +    free(buf);
> +
> +    return rc;
> +}
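The table-driven style Ian alludes to might look roughly like this (hypothetical names and flag values, not libxl's actual generated libxl_*_from_string API), replacing the strcmp chain in the quoted parser:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Keyword-to-flag table in the spirit of the generated from_string helpers. */
struct cap_name { const char *name; uint64_t flag; };

static const struct cap_name cap_table[] = {
    { "ui",     1 },
    { "ide",    2 },
    { "serial", 4 },
    { "audio",  8 },
    { NULL,     0 },
};

/* Returns the capability flag for a keyword, or 0 if the keyword is
 * unknown (the quoted parser silently ignores unknown keywords too). */
static uint64_t dm_cap_from_keyword(const char *s)
{
    const struct cap_name *c;
    for (c = cap_table; c->name; c++)
        if (strcmp(s, c->name) == 0)
            return c->flag;
    return 0;
}
```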
> diff --git a/tools/libxl/libxlutil.h b/tools/libxl/libxlutil.h
> index 0333e55..db22715 100644
> --- a/tools/libxl/libxlutil.h
> +++ b/tools/libxl/libxlutil.h
> @@ -93,6 +93,11 @@ int xlu_disk_parse(XLU_Config *cfg, int nspecs, const char *const *specs,
>   */
>  int xlu_pci_parse_bdf(XLU_Config *cfg, libxl_device_pci *pcidev, const char *str);
>  
> +/*
> + * Daemon specification parsing.
> + */
> +int xlu_dm_parse(XLU_Config *cfg, const char *spec,
> +                 libxl_dm *dm);
>  
>  /*
>   * Vif rate parsing.
> diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
> index 138cd72..2a26fa4 100644
> --- a/tools/libxl/xl_cmdimpl.c
> +++ b/tools/libxl/xl_cmdimpl.c
> @@ -561,7 +561,7 @@ static void parse_config_data(const char *config_source,
>      const char *buf;
>      long l;
>      XLU_Config *config;
> -    XLU_ConfigList *cpus, *vbds, *nics, *pcis, *cvfbs, *cpuids;
> +    XLU_ConfigList *cpus, *vbds, *nics, *pcis, *cvfbs, *cpuids, *dms;
>      int pci_power_mgmt = 0;
>      int pci_msitranslate = 0;
>      int pci_permissive = 0;
> @@ -995,6 +995,9 @@ static void parse_config_data(const char *config_source,
>                  } else if (!strcmp(p, "vifname")) {
>                      free(nic->ifname);
>                      nic->ifname = strdup(p2 + 1);
> +                } else if (!strcmp(p, "id")) {
> +                    free(nic->id);
> +                    nic->id = strdup(p2 + 1);
>                  } else if (!strcmp(p, "backend")) {
>                      if(libxl_name_to_domid(ctx, (p2 + 1), &(nic->backend_domid))) {
>                          fprintf(stderr, "Specified backend domain does not exist, defaulting to Dom0\n");
> @@ -1249,6 +1252,30 @@ skip_vfb:
>              }
>          }
>      }
> +
> +    d_config->num_dms = 0;
> +    d_config->dms = NULL;
> +
> +    if (b_info->device_model_version == LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN
> +        && !xlu_cfg_get_list (config, "device_models", &dms, 0, 0)) {
> +        while ((buf = xlu_cfg_get_listitem (dms, d_config->num_dms)) != NULL) {
> +            libxl_dm *dm;
> +            size_t size = sizeof (libxl_dm) * (d_config->num_dms + 1);
> +
> +            d_config->dms = (libxl_dm *)realloc (d_config->dms, size);
> +            if (!d_config->dms) {
> +                fprintf(stderr, "Can't realloc d_config->dms\n");
> +                exit (1);
> +            }
> +            dm = d_config->dms + d_config->num_dms;
> +            libxl_dm_init (dm);
> +            if (xlu_dm_parse(config, buf, dm)) {
> +                exit (-ERROR_FAIL);
> +            }
> +            d_config->num_dms++;
> +        }
> +    }
> +
>  #define parse_extra_args(type)                                            \
>      e = xlu_cfg_get_list_as_string_list(config, "device_model_args"#type, \
>                                      &b_info->extra##type, 0);            \



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 13:36:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 13:36:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4XaK-0007eR-Dk; Thu, 23 Aug 2012 13:36:32 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T4XaI-0007eM-Rv
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 13:36:31 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1345728950!1678341!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA0MjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19572 invoked from network); 23 Aug 2012 13:35:50 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 13:35:50 -0000
X-IronPort-AV: E=Sophos;i="4.80,300,1344211200"; d="scan'208";a="14147920"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	23 Aug 2012 13:35:50 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 23 Aug 2012 14:35:50 +0100
Message-ID: <1345728948.12501.98.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@citrix.com>
Date: Thu, 23 Aug 2012 14:35:48 +0100
In-Reply-To: <557fe87e4a6c0defdc6549e23e8e5e7b2ebb7a9f.1345552068.git.julien.grall@citrix.com>
References: <cover.1345552068.git.julien.grall@citrix.com>
	<557fe87e4a6c0defdc6549e23e8e5e7b2ebb7a9f.1345552068.git.julien.grall@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "christian.limpach@gmail.com" <christian.limpach@gmail.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [XEN][RFC PATCH V2 14/17] xl-parsing: Parse new
 device_models option
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-08-22 at 13:32 +0100, Julien Grall wrote:
> Add new option "device_models". The user can specify the capability of the
> QEMU (ui, vifs, ...). This option only works with QEMU upstream (qemu-xen).
> 
> For instance:
> device_models= [ 'name=all,vifs=nic1', 'name=qvga,ui', 'name=qide,ide' ]

Please can you patch docs/man/xl.cfg.pod.5 with a description of this
syntax. Possibly just a stub referencing
docs/man/xl-device-models.markdown in the same manner as
xl-disk-configuration.txt, xl-numa-placement.markdown,
xl-network-configuration.markdown etc.

iirc you can give multiple vifs -- what does that syntax look like?

I didn't ask before -- what does naming the dm give you? Is it just used
for ui things like logging or can you cross reference this in some way?

> Each device model can also take a path argument which override the default one.
> It's usefull for debugging.

      useful
> 
> Signed-off-by: Julien Grall <julien.grall@citrix.com>
> ---
>  tools/libxl/Makefile     |    2 +-
>  tools/libxl/libxlu_dm.c  |   96 ++++++++++++++++++++++++++++++++++++++++++++++
>  tools/libxl/libxlutil.h  |    5 ++
>  tools/libxl/xl_cmdimpl.c |   29 +++++++++++++-
>  4 files changed, 130 insertions(+), 2 deletions(-)
>  create mode 100644 tools/libxl/libxlu_dm.c
> 
> diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
> index 47fb110..2b58721 100644
> --- a/tools/libxl/Makefile
> +++ b/tools/libxl/Makefile
> @@ -79,7 +79,7 @@ AUTOINCS= libxlu_cfg_y.h libxlu_cfg_l.h _libxl_list.h _paths.h \
>  AUTOSRCS= libxlu_cfg_y.c libxlu_cfg_l.c
>  AUTOSRCS += _libxl_save_msgs_callout.c _libxl_save_msgs_helper.c
>  LIBXLU_OBJS = libxlu_cfg_y.o libxlu_cfg_l.o libxlu_cfg.o \
> -	libxlu_disk_l.o libxlu_disk.o libxlu_vif.o libxlu_pci.o
> +	libxlu_disk_l.o libxlu_disk.o libxlu_vif.o libxlu_pci.o libxlu_dm.o
>  $(LIBXLU_OBJS): CFLAGS += $(CFLAGS_libxenctrl) # For xentoollog.h
>  
>  CLIENTS = xl testidl libxl-save-helper
> diff --git a/tools/libxl/libxlu_dm.c b/tools/libxl/libxlu_dm.c
> new file mode 100644
> index 0000000..9f0a347
> --- /dev/null
> +++ b/tools/libxl/libxlu_dm.c
> @@ -0,0 +1,96 @@
> +#include "libxl_osdeps.h" /* must come before any other headers */
> +#include <stdlib.h>
> +#include "libxlu_internal.h"
> +#include "libxlu_cfg_i.h"
> +
> +static void split_string_into_string_list(const char *str,
> +                                          const char *delim,
> +                                          libxl_string_list *psl)

Is this a cut-n-paste of the one in xl_cmdimpl.c or did it change?

Probably better to add this as a common utility function somewhere.

> +{
> [...]
> +}
> +
> +int xlu_dm_parse(XLU_Config *cfg, const char *spec,
> +                 libxl_dm *dm)
> +{
> +    char *buf = strdup(spec);
> +    char *p, *p2;
> +    int rc = 0;
> +
> +    p = strtok(buf, ",");
> +    if (!p)
> +        goto skip_dm;
> +    do {
> +        while (*p == ' ')
> +            p++;
> +        if ((p2 = strchr(p, '=')) == NULL) {
> +            if (!strcmp(p, "ui"))

libxl provides a libxl_BLAH_from_string for enums in the idl, which
might be helpful here?
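For reference, a table-driven lookup in the spirit of the generated libxl_<type>_from_string() helpers could replace the strcmp chain; the capability names and flag values below are illustrative stand-ins, not the real IDL output:

```c
/* Sketch of a table-driven capability lookup, similar in spirit to the
 * libxl_<type>_from_string() helpers generated from the IDL.  The names
 * and flag values are hypothetical stand-ins for LIBXL_DM_CAP_*. */
#include <assert.h>
#include <string.h>

#define DM_CAP_UI     (1u << 0)
#define DM_CAP_IDE    (1u << 1)
#define DM_CAP_SERIAL (1u << 2)
#define DM_CAP_AUDIO  (1u << 3)

static int dm_cap_from_string(const char *s, unsigned *cap)
{
    static const struct { const char *name; unsigned flag; } caps[] = {
        { "ui",     DM_CAP_UI     },
        { "ide",    DM_CAP_IDE    },
        { "serial", DM_CAP_SERIAL },
        { "audio",  DM_CAP_AUDIO  },
    };
    size_t i;

    for (i = 0; i < sizeof(caps) / sizeof(caps[0]); i++) {
        if (!strcmp(s, caps[i].name)) {
            *cap = caps[i].flag;
            return 0;
        }
    }
    return -1;  /* unknown capability name */
}
```

This also gives the parser a natural place to reject unknown capability names instead of silently ignoring them, which the strcmp chain in the patch currently does.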

> +                dm->capabilities |= LIBXL_DM_CAP_UI;
> +            else if (!strcmp(p, "ide"))
> +                dm->capabilities |= LIBXL_DM_CAP_IDE;
> +            else if (!strcmp(p, "serial"))
> +                dm->capabilities |= LIBXL_DM_CAP_SERIAL;
> +            else if (!strcmp(p, "audio"))
> +                dm->capabilities |= LIBXL_DM_CAP_AUDIO;
> +        } else {
> +            *p2 = '\0';
> +            if (!strcmp(p, "name"))
> +                dm->name = strdup(p2 + 1);
> +            else if (!strcmp(p, "path"))
> +                dm->path = strdup(p2 + 1);
> +            else if (!strcmp(p, "vifs"))
> +                split_string_into_string_list(p2 + 1, ";", &dm->vifs);
> +       }
> +    } while ((p = strtok(NULL, ",")) != NULL);
> +
> +    if (!dm->name && dm->path)
> +    {
> +        fprintf(stderr, "xl: Unable to parse device_deamon\n");
> +        exit(-ERROR_FAIL);
> +    }
> +skip_dm:
> +    free(buf);
> +
> +    return rc;
> +}
> diff --git a/tools/libxl/libxlutil.h b/tools/libxl/libxlutil.h
> index 0333e55..db22715 100644
> --- a/tools/libxl/libxlutil.h
> +++ b/tools/libxl/libxlutil.h
> @@ -93,6 +93,11 @@ int xlu_disk_parse(XLU_Config *cfg, int nspecs, const char *const *specs,
>   */
>  int xlu_pci_parse_bdf(XLU_Config *cfg, libxl_device_pci *pcidev, const char *str);
>  
> +/*
> + * Daemon specification parsing.
> + */
> +int xlu_dm_parse(XLU_Config *cfg, const char *spec,
> +                 libxl_dm *dm);
>  
>  /*
>   * Vif rate parsing.
> diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
> index 138cd72..2a26fa4 100644
> --- a/tools/libxl/xl_cmdimpl.c
> +++ b/tools/libxl/xl_cmdimpl.c
> @@ -561,7 +561,7 @@ static void parse_config_data(const char *config_source,
>      const char *buf;
>      long l;
>      XLU_Config *config;
> -    XLU_ConfigList *cpus, *vbds, *nics, *pcis, *cvfbs, *cpuids;
> +    XLU_ConfigList *cpus, *vbds, *nics, *pcis, *cvfbs, *cpuids, *dms;
>      int pci_power_mgmt = 0;
>      int pci_msitranslate = 0;
>      int pci_permissive = 0;
> @@ -995,6 +995,9 @@ static void parse_config_data(const char *config_source,
>                  } else if (!strcmp(p, "vifname")) {
>                      free(nic->ifname);
>                      nic->ifname = strdup(p2 + 1);
> +                } else if (!strcmp(p, "id")) {
> +                    free(nic->id);
> +                    nic->id = strdup(p2 + 1);
>                  } else if (!strcmp(p, "backend")) {
>                      if(libxl_name_to_domid(ctx, (p2 + 1), &(nic->backend_domid))) {
>                          fprintf(stderr, "Specified backend domain does not exist, defaulting to Dom0\n");
> @@ -1249,6 +1252,30 @@ skip_vfb:
>              }
>          }
>      }
> +
> +    d_config->num_dms = 0;
> +    d_config->dms = NULL;
> +
> +    if (b_info->device_model_version == LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN
> +        && !xlu_cfg_get_list (config, "device_models", &dms, 0, 0)) {
> +        while ((buf = xlu_cfg_get_listitem (dms, d_config->num_dms)) != NULL) {
> +            libxl_dm *dm;
> +            size_t size = sizeof (libxl_dm) * (d_config->num_dms + 1);
> +
> +            d_config->dms = (libxl_dm *)realloc (d_config->dms, size);
> +            if (!d_config->dms) {
> +                fprintf(stderr, "Can't realloc d_config->dms\n");
> +                exit (1);
> +            }
> +            dm = d_config->dms + d_config->num_dms;
> +            libxl_dm_init (dm);
> +            if (xlu_dm_parse(config, buf, dm)) {
> +                exit (-ERROR_FAIL);
> +            }
> +            d_config->num_dms++;
> +        }
> +    }
> +
>  #define parse_extra_args(type)                                            \
>      e = xlu_cfg_get_list_as_string_list(config, "device_model_args"#type, \
>                                      &b_info->extra##type, 0);            \



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 13:39:34 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 13:39:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4Xcs-0007k9-VS; Thu, 23 Aug 2012 13:39:10 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.xu.prc@gmail.com>) id 1T4Xcq-0007ju-Qx
	for xen-devel@lists.xensource.com; Thu, 23 Aug 2012 13:39:09 +0000
X-Env-Sender: wei.xu.prc@gmail.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1345728520!8613750!1
X-Originating-IP: [209.85.160.43]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31275 invoked from network); 23 Aug 2012 13:28:42 -0000
Received: from mail-pb0-f43.google.com (HELO mail-pb0-f43.google.com)
	(209.85.160.43)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 13:28:42 -0000
Received: by pbbrq2 with SMTP id rq2so1615682pbb.30
	for <xen-devel@lists.xensource.com>;
	Thu, 23 Aug 2012 06:28:40 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=wHJSntdch3q9wB07htCKeivowqSp9F01pqfUfIb2UAM=;
	b=JoMGEtl5m7AXU1blLWoo/KzDPdvqgRoBxEV7Zx5SykS5TvyoPHT4PIlkXxpqTdCLPl
	y/8JeLA6s/KxnJSCwzXl2iqnUwzURQvuBzpHuGVXpppx39XkMSYsvs9iQT3SKgPmgO7l
	hy1wVnXn9HluFxHmHxnklnQ6QTzf/6EiYFhlu6D3XyOdytU2/Ez56i/QqLxbAfKJUuND
	rqhDWxXJHMlqnBGcsToJgOGK1zYq+G8riV+y7XrVxzkA/kP4r0JbsmlD/bxVUL2bb5Nu
	xDHTFiBzGkWMZdNHZSnJ2BgAHo+ZAOJGzYjOxVvuzgxVgH+Wm1ITobJ6nrYCA27ZLwMd
	kUng==
MIME-Version: 1.0
Received: by 10.68.234.169 with SMTP id uf9mr4655510pbc.105.1345728520205;
	Thu, 23 Aug 2012 06:28:40 -0700 (PDT)
Received: by 10.68.132.69 with HTTP; Thu, 23 Aug 2012 06:28:40 -0700 (PDT)
In-Reply-To: <alpine.DEB.2.02.1208231418010.15568@kaball.uk.xensource.com>
References: <CAH=9XOZC46yEKjoQKRgD3aEUaqvNcNF4Jcs=0HDdxJbAOpWEXw@mail.gmail.com>
	<CAH=9XObhC86Cg7kWdrUvPtPQEX94RcGiLH621pLbye3Q02=9ww@mail.gmail.com>
	<20120820140241.GD7847@phenom.dumpdata.com>
	<alpine.DEB.2.02.1208201639390.15568@kaball.uk.xensource.com>
	<CAH=9XOYuPpfwoWd4swSzMTJFcFVhhXAQJyjMpp2FiVqm7xDf-w@mail.gmail.com>
	<alpine.DEB.2.02.1208231418010.15568@kaball.uk.xensource.com>
Date: Thu, 23 Aug 2012 21:28:40 +0800
Message-ID: <CAH=9XObqOFabjN9QS52NyiOAGFyNEWnG8mHGyA1kLVr_D6Mdeg@mail.gmail.com>
From: Wei Xu <wei.xu.prc@gmail.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: xen <xen@lists.fedoraproject.org>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] DomU console driver not works for Fedora17 in HVM
	mode with Xen 4.1.2
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7600443423696806124=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7600443423696806124==
Content-Type: multipart/alternative; boundary=047d7b33cb4c69366304c7eed6a3

--047d7b33cb4c69366304c7eed6a3
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

Thanks, I'll try it

On Thursday, August 23, 2012, Stefano Stabellini wrote:

> On Thu, 23 Aug 2012, Wei Xu wrote:
> > Thanks for your reply.
> >
> > I have tried the method but it looks still can't work, there is no
> "hvc0" device file in my "/dev" directory,
> > is that the root cause?
>
> Reading again the previous message, it is clear that you are still using
> xm/xend as toolstack to create VMs. Unfortunately xend doesn't support
> PV consoles for HVM guests, you need to use the new xl toolstack if
> you'd like to use that functionality.
>
> See: http://wiki.xen.org/wiki/XL
>

--047d7b33cb4c69366304c7eed6a3--


--===============7600443423696806124==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7600443423696806124==--


From xen-devel-bounces@lists.xen.org Thu Aug 23 13:45:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 13:45:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4Xip-0000Ap-EG; Thu, 23 Aug 2012 13:45:19 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T4Xin-0000AL-Io
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 13:45:18 +0000
Received: from [85.158.138.51:9708] by server-3.bemta-3.messagelabs.com id
	1F/FC-13809-CE336305; Thu, 23 Aug 2012 13:45:16 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-8.tower-174.messagelabs.com!1345729515!27368218!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA0MjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15341 invoked from network); 23 Aug 2012 13:45:15 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 13:45:15 -0000
X-IronPort-AV: E=Sophos;i="4.80,300,1344211200"; d="scan'208";a="14148145"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	23 Aug 2012 13:45:15 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 23 Aug 2012 14:45:15 +0100
Date: Thu, 23 Aug 2012 14:44:52 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Jan Beulich <JBeulich@suse.com>
In-Reply-To: <503514280200007800096FF4@nat28.tlf.novell.com>
Message-ID: <alpine.DEB.2.02.1208231441090.15568@kaball.uk.xensource.com>
References: <503514280200007800096FF4@nat28.tlf.novell.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH,
 v2] x86/HVM: assorted RTC emulation adjustments
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 22 Aug 2012, Jan Beulich wrote:
> - don't call rtc_timer_update() on REG_A writes when the value didn't
>   change (doing the call always was reported to cause wall clock time
>   lagging with the JVM running on Windows)
> - don't call rtc_timer_update() on REG_B writes at all
> - only call alarm_timer_update() on REG_B writes when relevant bits
>   change
> - only call check_update_timer() on REG_B writes when SET changes
> - instead properly handle AF and PF when the guest is not also setting
>   AIE/PIE respectively (for UF this was already the case, only a
>   comment was slightly inaccurate)
> - raise the RTC IRQ not only when UIE gets set while UF was already
>   set, but generalize this to cover AIE and PIE as well
> - properly mask off bit 7 when retrieving the hour values in
>   alarm_timer_update(), and properly use RTC_HOURS_ALARM's bit 7 when
>   converting from 12- to 24-hour value
> - also handle the two other possible clock bases
> - use RTC_* names in a couple of places where literal numbers were used
>   so far
> 
> Note that this only improves the situation described in the thread at
> http://lists.xen.org/archives/html/xen-devel/2012-08/msg00664.html,
> there are still problems with the emulation when invoked at a high rate
> as described there.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Although this patch solves a real problem and should probably go in at
some point, I am a bit worried about drifting too much from the original
RTC emulator (that was taken from QEMU), because it would be nice to be
able to backport features like this one:

http://marc.info/?l=qemu-devel&m=134392375010304
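As an aside, the periodic-timer period computation that the rtc_timer_update() hunk rearranges reduces to the following (a sketch, assuming Xen's DIV_ROUND is round-to-nearest integer division; period_code is RTC_REG_A & RTC_RATE_SELECT):

```c
/* Sketch of the period computation from rtc_timer_update(): rate-select
 * codes 1 and 2 alias 8 and 9 on the 32 kHz time base, the resulting
 * period is 2^(code-1) cycles of the 32768 Hz clock, converted to ns
 * with round-to-nearest division (DIV_ROUND assumed round-to-nearest). */
#include <assert.h>
#include <stdint.h>

static uint64_t rtc_period_ns(unsigned period_code)
{
    uint64_t period;

    if (period_code == 0)
        return 0;                     /* periodic interrupt disabled */
    if (period_code <= 2)             /* 32 kHz base: 1,2 alias 8,9 */
        period_code += 7;
    period = 1ULL << (period_code - 1);          /* in 32768 Hz cycles */
    return (period * 1000000000ULL + 16384) / 32768;  /* cycles -> ns */
}
```

So code 15 (2 Hz) gives exactly 500 ms, and code 6 (1024 Hz) gives 976563 ns after rounding, which is where the high-rate invocation problems mentioned in the referenced thread come from.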


> v2: Small adjustment to the pt_update_irq() change, avoiding to call
>     the RTC code for a HPET event (which also may pass 8 [aka RTC_IRQ]
>     as create_periodic_time()'s "irq" argument.
> 
> --- a/xen/arch/x86/hvm/rtc.c
> +++ b/xen/arch/x86/hvm/rtc.c
> @@ -50,11 +50,24 @@ static void rtc_set_time(RTCState *s);
>  static inline int from_bcd(RTCState *s, int a);
>  static inline int convert_hour(RTCState *s, int hour);
> 
> -static void rtc_periodic_cb(struct vcpu *v, void *opaque)
> +static void rtc_toggle_irq(RTCState *s)
> +{
> +    struct domain *d = vrtc_domain(s);
> +
> +    ASSERT(spin_is_locked(&s->lock));
> +    s->hw.cmos_data[RTC_REG_C] |= RTC_IRQF;
> +    hvm_isa_irq_deassert(d, RTC_IRQ);
> +    hvm_isa_irq_assert(d, RTC_IRQ);
> +}
> +
> +void rtc_periodic_interrupt(void *opaque)
>  {
>      RTCState *s = opaque;
> +
>      spin_lock(&s->lock);
> -    s->hw.cmos_data[RTC_REG_C] |= 0xc0;
> +    s->hw.cmos_data[RTC_REG_C] |= RTC_PF;
> +    if ( s->hw.cmos_data[RTC_REG_B] & RTC_PIE )
> +        rtc_toggle_irq(s);
>      spin_unlock(&s->lock);
>  }
> 
> @@ -68,19 +81,25 @@ static void rtc_timer_update(RTCState *s
>      ASSERT(spin_is_locked(&s->lock));
> 
>      period_code = s->hw.cmos_data[RTC_REG_A] & RTC_RATE_SELECT;
> -    if ( (period_code != 0) && (s->hw.cmos_data[RTC_REG_B] & RTC_PIE) )
> +    switch ( s->hw.cmos_data[RTC_REG_A] & RTC_DIV_CTL )
>      {
> -        if ( period_code <= 2 )
> +    case RTC_REF_CLCK_32KHZ:
> +        if ( (period_code != 0) && (period_code <= 2) )
>              period_code += 7;
> -
> -        period = 1 << (period_code - 1); /* period in 32 Khz cycles */
> -        period = DIV_ROUND((period * 1000000000ULL), 32768); /* period in ns */
> -        create_periodic_time(v, &s->pt, period, period, RTC_IRQ,
> -                             rtc_periodic_cb, s);
> -    }
> -    else
> -    {
> +        /* fall through */
> +    case RTC_REF_CLCK_1MHZ:
> +    case RTC_REF_CLCK_4MHZ:
> +        if ( period_code != 0 )
> +        {
> +            period = 1 << (period_code - 1); /* period in 32 Khz cycles */
> +            period = DIV_ROUND(period * 1000000000ULL, 32768); /* in ns */
> +            create_periodic_time(v, &s->pt, period, period, RTC_IRQ, NULL, s);
> +            break;
> +        }
> +        /* fall through */
> +    default:
>          destroy_periodic_time(&s->pt);
> +        break;
>      }
>  }
> 
> @@ -102,7 +121,7 @@ static void check_update_timer(RTCState
>          guest_usec = get_localtime_us(d) % USEC_PER_SEC;
>          if (guest_usec >= (USEC_PER_SEC - 244))
>          {
> -            /* RTC is in update cycle when enabling UIE */
> +            /* RTC is in update cycle */
>              s->hw.cmos_data[RTC_REG_A] |= RTC_UIP;
>              next_update_time = (USEC_PER_SEC - guest_usec) * NS_PER_USEC;
>              expire_time = NOW() + next_update_time;
> @@ -144,7 +163,6 @@ static void rtc_update_timer(void *opaqu
>  static void rtc_update_timer2(void *opaque)
>  {
>      RTCState *s = opaque;
> -    struct domain *d = vrtc_domain(s);
> 
>      spin_lock(&s->lock);
>      if (!(s->hw.cmos_data[RTC_REG_B] & RTC_SET))
> @@ -152,11 +170,7 @@ static void rtc_update_timer2(void *opaq
>          s->hw.cmos_data[RTC_REG_C] |= RTC_UF;
>          s->hw.cmos_data[RTC_REG_A] &= ~RTC_UIP;
>          if ((s->hw.cmos_data[RTC_REG_B] & RTC_UIE))
> -        {
> -            s->hw.cmos_data[RTC_REG_C] |= RTC_IRQF;
> -            hvm_isa_irq_deassert(d, RTC_IRQ);
> -            hvm_isa_irq_assert(d, RTC_IRQ);
> -        }
> +            rtc_toggle_irq(s);
>          check_update_timer(s);
>      }
>      spin_unlock(&s->lock);
> @@ -175,21 +189,18 @@ static void alarm_timer_update(RTCState
> 
>      stop_timer(&s->alarm_timer);
> 
> -    if ((s->hw.cmos_data[RTC_REG_B] & RTC_AIE) &&
> -            !(s->hw.cmos_data[RTC_REG_B] & RTC_SET))
> +    if ( !(s->hw.cmos_data[RTC_REG_B] & RTC_SET) )
>      {
>          s->current_tm = gmtime(get_localtime(d));
>          rtc_copy_date(s);
> 
>          alarm_sec = from_bcd(s, s->hw.cmos_data[RTC_SECONDS_ALARM]);
>          alarm_min = from_bcd(s, s->hw.cmos_data[RTC_MINUTES_ALARM]);
> -        alarm_hour = from_bcd(s, s->hw.cmos_data[RTC_HOURS_ALARM]);
> -        alarm_hour = convert_hour(s, alarm_hour);
> +        alarm_hour = convert_hour(s, s->hw.cmos_data[RTC_HOURS_ALARM]);
> 
>          cur_sec = from_bcd(s, s->hw.cmos_data[RTC_SECONDS]);
>          cur_min = from_bcd(s, s->hw.cmos_data[RTC_MINUTES]);
> -        cur_hour = from_bcd(s, s->hw.cmos_data[RTC_HOURS]);
> -        cur_hour = convert_hour(s, cur_hour);
> +        cur_hour = convert_hour(s, s->hw.cmos_data[RTC_HOURS]);
> 
>          next_update_time = USEC_PER_SEC - (get_localtime_us(d) % USEC_PER_SEC);
>          next_update_time = next_update_time * NS_PER_USEC + NOW();
> @@ -343,7 +354,6 @@ static void alarm_timer_update(RTCState
>  static void rtc_alarm_cb(void *opaque)
>  {
>      RTCState *s = opaque;
> -    struct domain *d = vrtc_domain(s);
> 
>      spin_lock(&s->lock);
>      if (!(s->hw.cmos_data[RTC_REG_B] & RTC_SET))
> @@ -351,11 +361,7 @@ static void rtc_alarm_cb(void *opaque)
>          s->hw.cmos_data[RTC_REG_C] |= RTC_AF;
>          /* alarm interrupt */
>          if (s->hw.cmos_data[RTC_REG_B] & RTC_AIE)
> -        {
> -            s->hw.cmos_data[RTC_REG_C] |= RTC_IRQF;
> -            hvm_isa_irq_deassert(d, RTC_IRQ);
> -            hvm_isa_irq_assert(d, RTC_IRQ);
> -        }
> +            rtc_toggle_irq(s);
>          alarm_timer_update(s);
>      }
>      spin_unlock(&s->lock);
> @@ -365,6 +371,7 @@ static int rtc_ioport_write(void *opaque
>  {
>      RTCState *s = opaque;
>      struct domain *d = vrtc_domain(s);
> +    uint32_t orig, mask;
> 
>      spin_lock(&s->lock);
> 
> @@ -382,6 +389,7 @@ static int rtc_ioport_write(void *opaque
>          return 0;
>      }
> 
> +    orig = s->hw.cmos_data[s->hw.cmos_index];
>      switch ( s->hw.cmos_index )
>      {
>      case RTC_SECONDS_ALARM:
> @@ -405,9 +413,9 @@ static int rtc_ioport_write(void *opaque
>          break;
>      case RTC_REG_A:
>          /* UIP bit is read only */
> -        s->hw.cmos_data[RTC_REG_A] = (data & ~RTC_UIP) |
> -            (s->hw.cmos_data[RTC_REG_A] & RTC_UIP);
> -        rtc_timer_update(s);
> +        s->hw.cmos_data[RTC_REG_A] = (data & ~RTC_UIP) | (orig & RTC_UIP);
> +        if ( (data ^ orig) & (RTC_RATE_SELECT | RTC_DIV_CTL) )
> +            rtc_timer_update(s);
>          break;
>      case RTC_REG_B:
>          if ( data & RTC_SET )
> @@ -415,7 +423,7 @@ static int rtc_ioport_write(void *opaque
>              /* set mode: reset UIP mode */
>              s->hw.cmos_data[RTC_REG_A] &= ~RTC_UIP;
>              /* adjust cmos before stopping */
> -            if (!(s->hw.cmos_data[RTC_REG_B] & RTC_SET))
> +            if (!(orig & RTC_SET))
>              {
>                  s->current_tm = gmtime(get_localtime(d));
>                  rtc_copy_date(s);
> @@ -424,21 +432,26 @@ static int rtc_ioport_write(void *opaque
>          else
>          {
>              /* if disabling set mode, update the time */
> -            if ( s->hw.cmos_data[RTC_REG_B] & RTC_SET )
> +            if ( orig & RTC_SET )
>                  rtc_set_time(s);
>          }
> -        /* if the interrupt is already set when the interrupt become
> -         * enabled, raise an interrupt immediately*/
> -        if ((data & RTC_UIE) && !(s->hw.cmos_data[RTC_REG_B] & RTC_UIE))
> -            if (s->hw.cmos_data[RTC_REG_C] & RTC_UF)
> +        /*
> +         * If the interrupt is already set when the interrupt becomes
> +         * enabled, raise an interrupt immediately.
> +         * NB: RTC_{A,P,U}IE == RTC_{A,P,U}F respectively.
> +         */
> +        for ( mask = RTC_UIE; mask <= RTC_PIE; mask <<= 1 )
> +            if ( (data & mask) && !(orig & mask) &&
> +                 (s->hw.cmos_data[RTC_REG_C] & mask) )
>              {
> -                hvm_isa_irq_deassert(d, RTC_IRQ);
> -                hvm_isa_irq_assert(d, RTC_IRQ);
> +                rtc_toggle_irq(s);
> +                break;
>              }
>          s->hw.cmos_data[RTC_REG_B] = data;
> -        rtc_timer_update(s);
> -        check_update_timer(s);
> -        alarm_timer_update(s);
> +        if ( (data ^ orig) & RTC_SET )
> +            check_update_timer(s);
> +        if ( (data ^ orig) & (RTC_24H | RTC_DM_BINARY | RTC_SET) )
> +            alarm_timer_update(s);
>          break;
>      case RTC_REG_C:
>      case RTC_REG_D:
> @@ -453,7 +466,7 @@ static int rtc_ioport_write(void *opaque
> 
>  static inline int to_bcd(RTCState *s, int a)
>  {
> -    if ( s->hw.cmos_data[RTC_REG_B] & 0x04 )
> +    if ( s->hw.cmos_data[RTC_REG_B] & RTC_DM_BINARY )
>          return a;
>      else
>          return ((a / 10) << 4) | (a % 10);
> @@ -461,7 +474,7 @@ static inline int to_bcd(RTCState *s, in
> 
>  static inline int from_bcd(RTCState *s, int a)
>  {
> -    if ( s->hw.cmos_data[RTC_REG_B] & 0x04 )
> +    if ( s->hw.cmos_data[RTC_REG_B] & RTC_DM_BINARY )
>          return a;
>      else
>          return ((a >> 4) * 10) + (a & 0x0f);
> @@ -469,12 +482,14 @@ static inline int from_bcd(RTCState *s,
> 
>  /* Hours in 12 hour mode are in 1-12 range, not 0-11.
>   * So we need convert it before using it*/
> -static inline int convert_hour(RTCState *s, int hour)
> +static inline int convert_hour(RTCState *s, int raw)
>  {
> +    int hour = from_bcd(s, raw & 0x7f);
> +
>      if (!(s->hw.cmos_data[RTC_REG_B] & RTC_24H))
>      {
>          hour %= 12;
> -        if (s->hw.cmos_data[RTC_HOURS] & 0x80)
> +        if (raw & 0x80)
>              hour += 12;
>      }
>      return hour;
> @@ -493,8 +508,7 @@ static void rtc_set_time(RTCState *s)
> 
>      tm->tm_sec = from_bcd(s, s->hw.cmos_data[RTC_SECONDS]);
>      tm->tm_min = from_bcd(s, s->hw.cmos_data[RTC_MINUTES]);
> -    tm->tm_hour = from_bcd(s, s->hw.cmos_data[RTC_HOURS] & 0x7f);
> -    tm->tm_hour = convert_hour(s, tm->tm_hour);
> +    tm->tm_hour = convert_hour(s, s->hw.cmos_data[RTC_HOURS]);
>      tm->tm_wday = from_bcd(s, s->hw.cmos_data[RTC_DAY_OF_WEEK]);
>      tm->tm_mday = from_bcd(s, s->hw.cmos_data[RTC_DAY_OF_MONTH]);
>      tm->tm_mon = from_bcd(s, s->hw.cmos_data[RTC_MONTH]) - 1;
> --- a/xen/arch/x86/hvm/vpt.c
> +++ b/xen/arch/x86/hvm/vpt.c
> @@ -22,6 +22,7 @@
>  #include <asm/hvm/vpt.h>
>  #include <asm/event.h>
>  #include <asm/apic.h>
> +#include <asm/mc146818rtc.h>
> 
>  #define mode_is(d, name) \
>      ((d)->arch.hvm_domain.params[HVM_PARAM_TIMER_MODE] == HVMPTM_##name)
> @@ -218,6 +219,7 @@ void pt_update_irq(struct vcpu *v)
>      struct periodic_time *pt, *temp, *earliest_pt = NULL;
>      uint64_t max_lag = -1ULL;
>      int irq, is_lapic;
> +    void *pt_priv;
> 
>      spin_lock(&v->arch.hvm_vcpu.tm_lock);
> 
> @@ -251,13 +253,14 @@ void pt_update_irq(struct vcpu *v)
>      earliest_pt->irq_issued = 1;
>      irq = earliest_pt->irq;
>      is_lapic = (earliest_pt->source == PTSRC_lapic);
> +    pt_priv = earliest_pt->priv;
> 
>      spin_unlock(&v->arch.hvm_vcpu.tm_lock);
> 
>      if ( is_lapic )
> -    {
>          vlapic_set_irq(vcpu_vlapic(v), irq, 0);
> -    }
> +    else if ( irq == RTC_IRQ && pt_priv )
> +        rtc_periodic_interrupt(pt_priv);
>      else
>      {
>          hvm_isa_irq_deassert(v->domain, irq);
> --- a/xen/include/asm-x86/hvm/vpt.h
> +++ b/xen/include/asm-x86/hvm/vpt.h
> @@ -181,6 +181,7 @@ void rtc_migrate_timers(struct vcpu *v);
>  void rtc_deinit(struct domain *d);
>  void rtc_reset(struct domain *d);
>  void rtc_update_clock(struct domain *d);
> +void rtc_periodic_interrupt(void *);
> 
>  void pmtimer_init(struct vcpu *v);
>  void pmtimer_deinit(struct domain *d);
> 
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 13:45:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 13:45:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4Xip-0000Ap-EG; Thu, 23 Aug 2012 13:45:19 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T4Xin-0000AL-Io
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 13:45:18 +0000
Received: from [85.158.138.51:9708] by server-3.bemta-3.messagelabs.com id
	1F/FC-13809-CE336305; Thu, 23 Aug 2012 13:45:16 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-8.tower-174.messagelabs.com!1345729515!27368218!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA0MjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15341 invoked from network); 23 Aug 2012 13:45:15 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 13:45:15 -0000
X-IronPort-AV: E=Sophos;i="4.80,300,1344211200"; d="scan'208";a="14148145"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	23 Aug 2012 13:45:15 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 23 Aug 2012 14:45:15 +0100
Date: Thu, 23 Aug 2012 14:44:52 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Jan Beulich <JBeulich@suse.com>
In-Reply-To: <503514280200007800096FF4@nat28.tlf.novell.com>
Message-ID: <alpine.DEB.2.02.1208231441090.15568@kaball.uk.xensource.com>
References: <503514280200007800096FF4@nat28.tlf.novell.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH,
 v2] x86/HVM: assorted RTC emulation adjustments
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 22 Aug 2012, Jan Beulich wrote:
> - don't call rtc_timer_update() on REG_A writes when the value didn't
>   change (doing the call always was reported to cause wall clock time
>   lagging with the JVM running on Windows)
> - don't call rtc_timer_update() on REG_B writes at all
> - only call alarm_timer_update() on REG_B writes when relevant bits
>   change
> - only call check_update_timer() on REG_B writes when SET changes
> - instead properly handle AF and PF when the guest is not also setting
>   AIE/PIE respectively (for UF this was already the case, only a
>   comment was slightly inaccurate)
> - raise the RTC IRQ not only when UIE gets set while UF was already
>   set, but generalize this to cover AIE and PIE as well
> - properly mask off bit 7 when retrieving the hour values in
>   alarm_timer_update(), and properly use RTC_HOURS_ALARM's bit 7 when
>   converting from 12- to 24-hour value
> - also handle the two other possible clock bases
> - use RTC_* names in a couple of places where literal numbers were used
>   so far
> 
> Note that this only improves the situation described in the thread at
> http://lists.xen.org/archives/html/xen-devel/2012-08/msg00664.html,
> there are still problems with the emulation when invoked at a high rate
> as described there.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Although this patch solves a real problem and should probably go in at
some point, I am a bit worried about drifting too much from the original
RTC emulator (that was taken from QEMU), because it would be nice to be
able to backport features like this one:

http://marc.info/?l=qemu-devel&m=134392375010304


> v2: Small adjustment to the pt_update_irq() change, avoiding calling
>     the RTC code for a HPET event (which may also pass 8 [aka RTC_IRQ]
>     as create_periodic_time()'s "irq" argument).
> 
> --- a/xen/arch/x86/hvm/rtc.c
> +++ b/xen/arch/x86/hvm/rtc.c
> @@ -50,11 +50,24 @@ static void rtc_set_time(RTCState *s);
>  static inline int from_bcd(RTCState *s, int a);
>  static inline int convert_hour(RTCState *s, int hour);
> 
> -static void rtc_periodic_cb(struct vcpu *v, void *opaque)
> +static void rtc_toggle_irq(RTCState *s)
> +{
> +    struct domain *d = vrtc_domain(s);
> +
> +    ASSERT(spin_is_locked(&s->lock));
> +    s->hw.cmos_data[RTC_REG_C] |= RTC_IRQF;
> +    hvm_isa_irq_deassert(d, RTC_IRQ);
> +    hvm_isa_irq_assert(d, RTC_IRQ);
> +}
> +
> +void rtc_periodic_interrupt(void *opaque)
>  {
>      RTCState *s = opaque;
> +
>      spin_lock(&s->lock);
> -    s->hw.cmos_data[RTC_REG_C] |= 0xc0;
> +    s->hw.cmos_data[RTC_REG_C] |= RTC_PF;
> +    if ( s->hw.cmos_data[RTC_REG_B] & RTC_PIE )
> +        rtc_toggle_irq(s);
>      spin_unlock(&s->lock);
>  }
> 
> @@ -68,19 +81,25 @@ static void rtc_timer_update(RTCState *s
>      ASSERT(spin_is_locked(&s->lock));
> 
>      period_code = s->hw.cmos_data[RTC_REG_A] & RTC_RATE_SELECT;
> -    if ( (period_code != 0) && (s->hw.cmos_data[RTC_REG_B] & RTC_PIE) )
> +    switch ( s->hw.cmos_data[RTC_REG_A] & RTC_DIV_CTL )
>      {
> -        if ( period_code <= 2 )
> +    case RTC_REF_CLCK_32KHZ:
> +        if ( (period_code != 0) && (period_code <= 2) )
>              period_code += 7;
> -
> -        period = 1 << (period_code - 1); /* period in 32 Khz cycles */
> -        period = DIV_ROUND((period * 1000000000ULL), 32768); /* period in ns */
> -        create_periodic_time(v, &s->pt, period, period, RTC_IRQ,
> -                             rtc_periodic_cb, s);
> -    }
> -    else
> -    {
> +        /* fall through */
> +    case RTC_REF_CLCK_1MHZ:
> +    case RTC_REF_CLCK_4MHZ:
> +        if ( period_code != 0 )
> +        {
> +            period = 1 << (period_code - 1); /* period in 32 Khz cycles */
> +            period = DIV_ROUND(period * 1000000000ULL, 32768); /* in ns */
> +            create_periodic_time(v, &s->pt, period, period, RTC_IRQ, NULL, s);
> +            break;
> +        }
> +        /* fall through */
> +    default:
>          destroy_periodic_time(&s->pt);
> +        break;
>      }
>  }
> 
> @@ -102,7 +121,7 @@ static void check_update_timer(RTCState
>          guest_usec = get_localtime_us(d) % USEC_PER_SEC;
>          if (guest_usec >= (USEC_PER_SEC - 244))
>          {
> -            /* RTC is in update cycle when enabling UIE */
> +            /* RTC is in update cycle */
>              s->hw.cmos_data[RTC_REG_A] |= RTC_UIP;
>              next_update_time = (USEC_PER_SEC - guest_usec) * NS_PER_USEC;
>              expire_time = NOW() + next_update_time;
> @@ -144,7 +163,6 @@ static void rtc_update_timer(void *opaqu
>  static void rtc_update_timer2(void *opaque)
>  {
>      RTCState *s = opaque;
> -    struct domain *d = vrtc_domain(s);
> 
>      spin_lock(&s->lock);
>      if (!(s->hw.cmos_data[RTC_REG_B] & RTC_SET))
> @@ -152,11 +170,7 @@ static void rtc_update_timer2(void *opaq
>          s->hw.cmos_data[RTC_REG_C] |= RTC_UF;
>          s->hw.cmos_data[RTC_REG_A] &= ~RTC_UIP;
>          if ((s->hw.cmos_data[RTC_REG_B] & RTC_UIE))
> -        {
> -            s->hw.cmos_data[RTC_REG_C] |= RTC_IRQF;
> -            hvm_isa_irq_deassert(d, RTC_IRQ);
> -            hvm_isa_irq_assert(d, RTC_IRQ);
> -        }
> +            rtc_toggle_irq(s);
>          check_update_timer(s);
>      }
>      spin_unlock(&s->lock);
> @@ -175,21 +189,18 @@ static void alarm_timer_update(RTCState
> 
>      stop_timer(&s->alarm_timer);
> 
> -    if ((s->hw.cmos_data[RTC_REG_B] & RTC_AIE) &&
> -            !(s->hw.cmos_data[RTC_REG_B] & RTC_SET))
> +    if ( !(s->hw.cmos_data[RTC_REG_B] & RTC_SET) )
>      {
>          s->current_tm = gmtime(get_localtime(d));
>          rtc_copy_date(s);
> 
>          alarm_sec = from_bcd(s, s->hw.cmos_data[RTC_SECONDS_ALARM]);
>          alarm_min = from_bcd(s, s->hw.cmos_data[RTC_MINUTES_ALARM]);
> -        alarm_hour = from_bcd(s, s->hw.cmos_data[RTC_HOURS_ALARM]);
> -        alarm_hour = convert_hour(s, alarm_hour);
> +        alarm_hour = convert_hour(s, s->hw.cmos_data[RTC_HOURS_ALARM]);
> 
>          cur_sec = from_bcd(s, s->hw.cmos_data[RTC_SECONDS]);
>          cur_min = from_bcd(s, s->hw.cmos_data[RTC_MINUTES]);
> -        cur_hour = from_bcd(s, s->hw.cmos_data[RTC_HOURS]);
> -        cur_hour = convert_hour(s, cur_hour);
> +        cur_hour = convert_hour(s, s->hw.cmos_data[RTC_HOURS]);
> 
>          next_update_time = USEC_PER_SEC - (get_localtime_us(d) % USEC_PER_SEC);
>          next_update_time = next_update_time * NS_PER_USEC + NOW();
> @@ -343,7 +354,6 @@ static void alarm_timer_update(RTCState
>  static void rtc_alarm_cb(void *opaque)
>  {
>      RTCState *s = opaque;
> -    struct domain *d = vrtc_domain(s);
> 
>      spin_lock(&s->lock);
>      if (!(s->hw.cmos_data[RTC_REG_B] & RTC_SET))
> @@ -351,11 +361,7 @@ static void rtc_alarm_cb(void *opaque)
>          s->hw.cmos_data[RTC_REG_C] |= RTC_AF;
>          /* alarm interrupt */
>          if (s->hw.cmos_data[RTC_REG_B] & RTC_AIE)
> -        {
> -            s->hw.cmos_data[RTC_REG_C] |= RTC_IRQF;
> -            hvm_isa_irq_deassert(d, RTC_IRQ);
> -            hvm_isa_irq_assert(d, RTC_IRQ);
> -        }
> +            rtc_toggle_irq(s);
>          alarm_timer_update(s);
>      }
>      spin_unlock(&s->lock);
> @@ -365,6 +371,7 @@ static int rtc_ioport_write(void *opaque
>  {
>      RTCState *s = opaque;
>      struct domain *d = vrtc_domain(s);
> +    uint32_t orig, mask;
> 
>      spin_lock(&s->lock);
> 
> @@ -382,6 +389,7 @@ static int rtc_ioport_write(void *opaque
>          return 0;
>      }
> 
> +    orig = s->hw.cmos_data[s->hw.cmos_index];
>      switch ( s->hw.cmos_index )
>      {
>      case RTC_SECONDS_ALARM:
> @@ -405,9 +413,9 @@ static int rtc_ioport_write(void *opaque
>          break;
>      case RTC_REG_A:
>          /* UIP bit is read only */
> -        s->hw.cmos_data[RTC_REG_A] = (data & ~RTC_UIP) |
> -            (s->hw.cmos_data[RTC_REG_A] & RTC_UIP);
> -        rtc_timer_update(s);
> +        s->hw.cmos_data[RTC_REG_A] = (data & ~RTC_UIP) | (orig & RTC_UIP);
> +        if ( (data ^ orig) & (RTC_RATE_SELECT | RTC_DIV_CTL) )
> +            rtc_timer_update(s);
>          break;
>      case RTC_REG_B:
>          if ( data & RTC_SET )
> @@ -415,7 +423,7 @@ static int rtc_ioport_write(void *opaque
>              /* set mode: reset UIP mode */
>              s->hw.cmos_data[RTC_REG_A] &= ~RTC_UIP;
>              /* adjust cmos before stopping */
> -            if (!(s->hw.cmos_data[RTC_REG_B] & RTC_SET))
> +            if (!(orig & RTC_SET))
>              {
>                  s->current_tm = gmtime(get_localtime(d));
>                  rtc_copy_date(s);
> @@ -424,21 +432,26 @@ static int rtc_ioport_write(void *opaque
>          else
>          {
>              /* if disabling set mode, update the time */
> -            if ( s->hw.cmos_data[RTC_REG_B] & RTC_SET )
> +            if ( orig & RTC_SET )
>                  rtc_set_time(s);
>          }
> -        /* if the interrupt is already set when the interrupt become
> -         * enabled, raise an interrupt immediately*/
> -        if ((data & RTC_UIE) && !(s->hw.cmos_data[RTC_REG_B] & RTC_UIE))
> -            if (s->hw.cmos_data[RTC_REG_C] & RTC_UF)
> +        /*
> +         * If the interrupt is already set when the interrupt becomes
> +         * enabled, raise an interrupt immediately.
> +         * NB: RTC_{A,P,U}IE == RTC_{A,P,U}F respectively.
> +         */
> +        for ( mask = RTC_UIE; mask <= RTC_PIE; mask <<= 1 )
> +            if ( (data & mask) && !(orig & mask) &&
> +                 (s->hw.cmos_data[RTC_REG_C] & mask) )
>              {
> -                hvm_isa_irq_deassert(d, RTC_IRQ);
> -                hvm_isa_irq_assert(d, RTC_IRQ);
> +                rtc_toggle_irq(s);
> +                break;
>              }
>          s->hw.cmos_data[RTC_REG_B] = data;
> -        rtc_timer_update(s);
> -        check_update_timer(s);
> -        alarm_timer_update(s);
> +        if ( (data ^ orig) & RTC_SET )
> +            check_update_timer(s);
> +        if ( (data ^ orig) & (RTC_24H | RTC_DM_BINARY | RTC_SET) )
> +            alarm_timer_update(s);
>          break;
>      case RTC_REG_C:
>      case RTC_REG_D:
> @@ -453,7 +466,7 @@ static int rtc_ioport_write(void *opaque
> 
>  static inline int to_bcd(RTCState *s, int a)
>  {
> -    if ( s->hw.cmos_data[RTC_REG_B] & 0x04 )
> +    if ( s->hw.cmos_data[RTC_REG_B] & RTC_DM_BINARY )
>          return a;
>      else
>          return ((a / 10) << 4) | (a % 10);
> @@ -461,7 +474,7 @@ static inline int to_bcd(RTCState *s, in
> 
>  static inline int from_bcd(RTCState *s, int a)
>  {
> -    if ( s->hw.cmos_data[RTC_REG_B] & 0x04 )
> +    if ( s->hw.cmos_data[RTC_REG_B] & RTC_DM_BINARY )
>          return a;
>      else
>          return ((a >> 4) * 10) + (a & 0x0f);
> @@ -469,12 +482,14 @@ static inline int from_bcd(RTCState *s,
> 
>  /* Hours in 12 hour mode are in 1-12 range, not 0-11.
>   * So we need convert it before using it*/
> -static inline int convert_hour(RTCState *s, int hour)
> +static inline int convert_hour(RTCState *s, int raw)
>  {
> +    int hour = from_bcd(s, raw & 0x7f);
> +
>      if (!(s->hw.cmos_data[RTC_REG_B] & RTC_24H))
>      {
>          hour %= 12;
> -        if (s->hw.cmos_data[RTC_HOURS] & 0x80)
> +        if (raw & 0x80)
>              hour += 12;
>      }
>      return hour;
> @@ -493,8 +508,7 @@ static void rtc_set_time(RTCState *s)
> 
>      tm->tm_sec = from_bcd(s, s->hw.cmos_data[RTC_SECONDS]);
>      tm->tm_min = from_bcd(s, s->hw.cmos_data[RTC_MINUTES]);
> -    tm->tm_hour = from_bcd(s, s->hw.cmos_data[RTC_HOURS] & 0x7f);
> -    tm->tm_hour = convert_hour(s, tm->tm_hour);
> +    tm->tm_hour = convert_hour(s, s->hw.cmos_data[RTC_HOURS]);
>      tm->tm_wday = from_bcd(s, s->hw.cmos_data[RTC_DAY_OF_WEEK]);
>      tm->tm_mday = from_bcd(s, s->hw.cmos_data[RTC_DAY_OF_MONTH]);
>      tm->tm_mon = from_bcd(s, s->hw.cmos_data[RTC_MONTH]) - 1;
> --- a/xen/arch/x86/hvm/vpt.c
> +++ b/xen/arch/x86/hvm/vpt.c
> @@ -22,6 +22,7 @@
>  #include <asm/hvm/vpt.h>
>  #include <asm/event.h>
>  #include <asm/apic.h>
> +#include <asm/mc146818rtc.h>
> 
>  #define mode_is(d, name) \
>      ((d)->arch.hvm_domain.params[HVM_PARAM_TIMER_MODE] == HVMPTM_##name)
> @@ -218,6 +219,7 @@ void pt_update_irq(struct vcpu *v)
>      struct periodic_time *pt, *temp, *earliest_pt = NULL;
>      uint64_t max_lag = -1ULL;
>      int irq, is_lapic;
> +    void *pt_priv;
> 
>      spin_lock(&v->arch.hvm_vcpu.tm_lock);
> 
> @@ -251,13 +253,14 @@ void pt_update_irq(struct vcpu *v)
>      earliest_pt->irq_issued = 1;
>      irq = earliest_pt->irq;
>      is_lapic = (earliest_pt->source == PTSRC_lapic);
> +    pt_priv = earliest_pt->priv;
> 
>      spin_unlock(&v->arch.hvm_vcpu.tm_lock);
> 
>      if ( is_lapic )
> -    {
>          vlapic_set_irq(vcpu_vlapic(v), irq, 0);
> -    }
> +    else if ( irq == RTC_IRQ && pt_priv )
> +        rtc_periodic_interrupt(pt_priv);
>      else
>      {
>          hvm_isa_irq_deassert(v->domain, irq);
> --- a/xen/include/asm-x86/hvm/vpt.h
> +++ b/xen/include/asm-x86/hvm/vpt.h
> @@ -181,6 +181,7 @@ void rtc_migrate_timers(struct vcpu *v);
>  void rtc_deinit(struct domain *d);
>  void rtc_reset(struct domain *d);
>  void rtc_update_clock(struct domain *d);
> +void rtc_periodic_interrupt(void *);
> 
>  void pmtimer_init(struct vcpu *v);
>  void pmtimer_deinit(struct domain *d);
> 
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 13:46:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 13:46:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4XkE-0000Pc-Vb; Thu, 23 Aug 2012 13:46:46 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ketuzsezr@gmail.com>) id 1T4XkC-0000PL-Sh
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 13:46:45 +0000
Received: from [85.158.143.99:58238] by server-1.bemta-4.messagelabs.com id
	79/46-12504-44436305; Thu, 23 Aug 2012 13:46:44 +0000
X-Env-Sender: ketuzsezr@gmail.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1345729602!20537675!1
X-Originating-IP: [209.85.212.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15804 invoked from network); 23 Aug 2012 13:46:43 -0000
Received: from mail-vb0-f45.google.com (HELO mail-vb0-f45.google.com)
	(209.85.212.45)
	by server-6.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 13:46:43 -0000
Received: by vbip1 with SMTP id p1so1006026vbi.32
	for <xen-devel@lists.xen.org>; Thu, 23 Aug 2012 06:46:41 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:date:from:to:cc:subject:message-id:references:mime-version
	:content-type:content-disposition:content-transfer-encoding
	:in-reply-to:user-agent;
	bh=K7l7jJJeemzzLWntSWRdiqED7BE2i2ZLVUBw0807IvQ=;
	b=geN1k3LlYS6s3yUPLdJSvtmp0I+590D7gkn2MaKGmOd6KQY6xrthya+IxRJoHVmKjv
	0FQEvgk6WSYxG+/emap/97NXEcD1wr8UEqLtIAJUbPys82+SHxb6NMFIRLkF++0NaACW
	f01UuOA2S2xR0lbX8vv3NAVYlTfVNgP2YajCOjZNC+q0890WR+RXHl9o/ytQlRvHXjrR
	5+YRicZE4djB4/Wi3O4IU2SLSPBqNRrhHxc/qyf2BYm+6S5RhLnfDmD2ZPyNO0bcnTQa
	teMmp1Gu+6j0cCBT8BTM8D9kD8CXKx7AW9y8gd3VYeUGmvvFD3N6LhFEInsld9nSfrWH
	wjaA==
Received: by 10.52.99.132 with SMTP id eq4mr782087vdb.91.1345729601621;
	Thu, 23 Aug 2012 06:46:41 -0700 (PDT)
Received: from phenom.dumpdata.com
	(209-6-85-33.c3-0.smr-ubr2.sbo-smr.ma.cable.rcn.com. [209.6.85.33])
	by mx.google.com with ESMTPS id b3sm3301967vec.11.2012.08.23.06.46.40
	(version=TLSv1/SSLv3 cipher=OTHER);
	Thu, 23 Aug 2012 06:46:41 -0700 (PDT)
Date: Thu, 23 Aug 2012 09:36:36 -0400
From: Konrad Rzeszutek Wilk <konrad@kernel.org>
To: =?iso-8859-1?Q?Aur=E9lien?= MILLIAT <Aurelien.MILLIAT@clarte.asso.fr>
Message-ID: <20120823133635.GA10700@phenom.dumpdata.com>
References: <36774CA35642C143BCDE93BA0C68DC5702C0AE46@dulac>
	<20120731231246.GD32698@phenom.dumpdata.com>
	<36774CA35642C143BCDE93BA0C68DC5702C0C852@dulac>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <36774CA35642C143BCDE93BA0C68DC5702C0C852@dulac>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] ATI VGA passthrough and S400 Synchronization module
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Aug 22, 2012 at 04:23:02PM +0000, Aurélien MILLIAT wrote:
> Hi,
>
> I've been able to use the ATI FirePro S400 with the unstable version. I've updated my Dom0 and pass the graphics card through as a secondary adapter.

Please do not top post. What did you upgrade dom0 to?

> I've made a quick test for Xen 4.1.2 with these updates but it still crashes (no matter, I will use the unstable version).
>
> Aurélien
>
> _______________________________________________
>
> On Tue, Jul 24, 2012 at 04:05:20PM +0000, Aurélien MILLIAT wrote:
> > Hi,
> >
> > I'm currently trying to use Xen on a graphics cluster.
> > VGA passthrough works fine and I'm able to use quad-buffering for active stereo.
> > The last thing to do is to synchronize all the GPUs in the cluster.
> > For this purpose I use the ATI FirePro S400, but it doesn't work.
> >
> > I've seen two behaviors:
> > - When I run the lspci command on Dom0 I get:
> > 0f:00.0 VGA compatible controller: ATI Technologies Inc Device 6888
> > 0f:00.1 Audio device: ATI Technologies Inc Cypress HDMI Audio [Radeon HD 5800 Series]
> > and sometimes just:
> > 0f:00.0 VGA compatible controller: ATI Technologies Inc Device 6888
> > - When a DomU (Windows 7) is running, it's very slow (I can't do anything) and crashes after several minutes.
> > I've tried with the unstable version and I've seen the same 'lspci' behavior.
> >
> > I have a couple of questions:
> > Is it possible to pass a VGA card through with this extension?
>
> I never tried. I just pass in the VGA card and don't try to pass in the HDMI driver.
>
> > As it's a particular use of VGA passthrough, is it planned to be able to use the synchronization module?
>
> So what is the synchronization module for you? Is that the HDMI part?
>
> > Is it easy to add this feature (time cost)?
> >
> > Computers: HP Z800 workstation
> > GPU: ATI FirePro V8800
> > CPU: Intel Xeon E5640
> > MB: Intel 5520 chipset
> >
> > Xen:
> > Version 4.1.2
> > With ATI patch from http://old-list-archives.xen.org/archives/html/xen-users/2011-05/msg00048.html
> > Thanks,
> > Aurélien Milliat
>
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 14:01:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 14:01:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4XyN-0001A7-LO; Thu, 23 Aug 2012 14:01:23 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T4XyM-0001A1-KT
	for xen-devel@lists.xensource.com; Thu, 23 Aug 2012 14:01:22 +0000
Received: from [85.158.143.35:27170] by server-1.bemta-4.messagelabs.com id
	41/60-12504-1B736305; Thu, 23 Aug 2012 14:01:21 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1345730475!13396614!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNzA3ODk4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4704 invoked from network); 23 Aug 2012 14:01:16 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-16.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 23 Aug 2012 14:01:16 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7NE0uUo032166
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 23 Aug 2012 14:00:57 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7NE0uUh014525
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 23 Aug 2012 14:00:56 GMT
Received: from abhmt113.oracle.com (abhmt113.oracle.com [141.146.116.65])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7NE0t43024143; Thu, 23 Aug 2012 09:00:55 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 23 Aug 2012 07:00:55 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id E74044031E; Thu, 23 Aug 2012 09:50:51 -0400 (EDT)
Date: Thu, 23 Aug 2012 09:50:51 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Attilio Rao <attilio.rao@citrix.com>
Message-ID: <20120823135051.GA10973@phenom.dumpdata.com>
References: <1345580561-8506-1-git-send-email-attilio.rao@citrix.com>
	<alpine.LFD.2.02.1208212315330.2856@ionos>
	<20120822135753.GA30964@phenom.dumpdata.com>
	<alpine.LFD.2.02.1208221618380.2856@ionos>
	<5034F0FD.8040902@citrix.com> <5035F7BD.6090205@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <5035F7BD.6090205@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"x86@kernel.org" <x86@kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"mingo@redhat.com" <mingo@redhat.com>, "hpa@zytor.com" <hpa@zytor.com>,
	Thomas Gleixner <tglx@linutronix.de>
Subject: Re: [Xen-devel] [PATCH v2 0/5] X86/XEN: Merge
 x86_init.paging.pagetable_setup_start and
 x86_init.paging.pagetable_setup_done setup functions and document its
 semantic
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Aug 23, 2012 at 10:28:29AM +0100, Attilio Rao wrote:
> On 22/08/12 15:47, Attilio Rao wrote:
> >On 22/08/12 15:19, Thomas Gleixner wrote:
> >>On Wed, 22 Aug 2012, Konrad Rzeszutek Wilk wrote:
> >>
> >>>On Tue, Aug 21, 2012 at 11:22:03PM +0200, Thomas Gleixner wrote:
> >>>
> >>>>On Tue, 21 Aug 2012, Attilio Rao wrote:
> >>>>
> >>>>>Differences from v1:
> >>>>>- The patch series is re-arranged in a way that helps review, following
> >>>>>    a plan by Thomas Gleixner
> >>>>>- The PVOPS nomenclature is not used, as it is not correct
> >>>>>- The front-end message is adjusted with feedback from Thomas Gleixner,
> >>>>>    Stefano Stabellini and Konrad Rzeszutek Wilk
> >>>>>
> >>>>This is much simpler to read and review. Just have a look at the
> >>>>diffstats of the two series:
> >>>>
> >>>>   6 files changed,  9 insertions(+),  8 deletions(-)
> >>>>   6 files changed, 11 insertions(+),  9 deletions(-)
> >>>>   5 files changed, 50 insertions(+),  2 deletions(-)
> >>>>   6 files changed,  2 insertions(+), 65 deletions(-)
> >>>>   1 files changed,  5 insertions(+),  0 deletions(-)
> >>>>
> >>>>versus
> >>>>
> >>>>   6 files changed, 10 insertions(+),  9 deletions(-)
> >>>>   6 files changed, 11 insertions(+), 11 deletions(-)
> >>>>   5 files changed,  3 insertions(+),  3 deletions(-)
> >>>>   6 files changed,  4 insertions(+), 20 deletions(-)
> >>>>   1 files changed,  5 insertions(+),  0 deletions(-)
> >>>>
> >>>>The overall result is basically the same, but it's way simpler to look
> >>>>at obvious and well done patches than checking whether a subtle copy
> >>>>and paste bug happened in 3/5 of the first version. Copy and paste is
> >>>>the #1 cause for subtle bugs. :)
> >>>>
> >>>>I'm waiting for the ack of Xen folks before taking it into tip.

Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
on all patches.

Please tell me which branch it's going into so I can merge that one into my tree.
Thanks!
> >>>>
> >>>I have some extra patches that modify the new "paging_init" in the Xen
> >>>code that I am going to propose for v3.7 - so there will be some merge
> >>>conflicts. Let me figure that out and also run this set of patches
> >>>(and also the previous one, which I think you didn't have a
> >>>chance to look at since you were on vacation?) on an overnight run.
> >>>
> >>Which previous one ?
> >>
> >This one:
> >https://lkml.org/lkml/2012/8/21/369
> >
> >but I would like to repost the patch series, dropping the reference to
> >PVOPS in the commit logs. I will do so right now, so please wait for
> >another patch series.
> 
> For your convenience:
> http://lkml.org/lkml/2012/8/22/450
> 
> Attilio

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 14:17:07 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 14:17:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4YDM-0001jp-Sa; Thu, 23 Aug 2012 14:16:52 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T4YDK-0001jg-Dk
	for xen-devel@lists.xensource.com; Thu, 23 Aug 2012 14:16:50 +0000
Received: from [85.158.138.51:6755] by server-4.bemta-3.messagelabs.com id
	BE/D1-04276-15B36305; Thu, 23 Aug 2012 14:16:49 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-7.tower-174.messagelabs.com!1345731405!18753028!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDcxNTE5Mg==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6213 invoked from network); 23 Aug 2012 14:16:47 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-7.tower-174.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 23 Aug 2012 14:16:47 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7NEGc9Z029331
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 23 Aug 2012 14:16:39 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7NEGbIC019338
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 23 Aug 2012 14:16:37 GMT
Received: from abhmt120.oracle.com (abhmt120.oracle.com [141.146.116.72])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7NEGZnJ011332; Thu, 23 Aug 2012 09:16:35 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 23 Aug 2012 07:16:35 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 174D54031E; Thu, 23 Aug 2012 10:06:32 -0400 (EDT)
Date: Thu, 23 Aug 2012 10:06:32 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Andre Przywara <andre.przywara@amd.com>
Message-ID: <20120823140631.GB10973@phenom.dumpdata.com>
References: <4FE1C423.6070001@amd.com>
	<20120620145127.GD12787@phenom.dumpdata.com>
	<4FE32DDA.10204@amd.com>
	<20120630014825.GA7003@phenom.dumpdata.com>
	<20120630021936.GA27100@phenom.dumpdata.com>
	<50114002.2030700@amd.com>
	<20120817205207.GA3002@phenom.dumpdata.com>
	<503602A0.9040908@amd.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <503602A0.9040908@amd.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	xen-devel <xen-devel@lists.xensource.com>,
	david.vrabel@citrix.com, Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] acpidump crashes on some machines
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Aug 23, 2012 at 12:14:56PM +0200, Andre Przywara wrote:
> On 08/17/2012 10:52 PM, Konrad Rzeszutek Wilk wrote:
> >On Thu, Jul 26, 2012 at 03:02:58PM +0200, Andre Przywara wrote:
> >>On 06/30/2012 04:19 AM, Konrad Rzeszutek Wilk wrote:
> >>
> >>Konrad, David,
> >>
> >>back on track for this issue. Thanks for your input, I could do some
> >>more debugging (see below for a refresh):
> >>
> >>It seems like it affects only the first page of the 1:1 mapping. I
> >>didn't have any issues with the last PFN or the page behind it (which
> >>failed properly).
> >>
> >>David, thanks for the hint with varying dom0_mem parameter. I
> >>thought I already checked this, but I did it once again and it
> >>turned out that it is only an issue if dom0_mem is smaller than the
> >>ACPI area, which generates a hole in the memory map. So we have
> >>(simplified)
> >>* 1:1 mapping to 1 MB
> >>* normal mapping till dom0_mem
> >>* unmapped area till ACPI E820 area
> >>* ACPI E820 1:1 mapping
> >>
> >>As far as I could chase it down the 1:1 mapping itself looks OK, I
> >>couldn't find any off-by-one bugs here. So maybe it is code that
> >>later on invalidates areas between the normal guest mapping and the
> >>ACPI mem?
> >
> >I think I found it. Can you try this pls [and if you can't find
> >early_to_phys.. just use the __set_phys_to call]
> 
> Yes, that works. At least after a quick test on my test box. Both
> the test module and acpidump work as expected. If I replace the "<"
> in your patch with the original "<=", I get the warning (and due to
> the "continue" it also works).
> I also successfully tested the minimal fix (just replacing <= with <).
> I will feed it to the testers here to cover more machines.
> 
> Do you want to keep the warnings in (which exceed 80 characters, btw)?

Yes. The new style is to allow any type of printk/WARN etc. to stay
unbroken, even if that exceeds the 80-character limit.

> 
> Thanks a lot and:
> 
> Tested-by: Andre Przywara <andre.przywara@amd.com>

Great. Thx.
> 
> Regards,
> Andre.
> 
> >
> > From ab915d98f321b0fcca1932747c632b5f0f299f55 Mon Sep 17 00:00:00 2001
> >From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> >Date: Fri, 17 Aug 2012 16:43:28 -0400
> >Subject: [PATCH] xen/setup: Fix off-by-one error when adding for-balloon PFNs
> >  to the P2M.
> >
> >When we are finished returning PFNs to the hypervisor, populating
> >them back, and also marking the E820 MMIO regions and E820 gaps as
> >IDENTITY_FRAMEs, we then call into the P2M code to set the areas that
> >can be used for ballooning. We were off by one, and ended up
> >over-writing a P2M entry that most likely was an IDENTITY_FRAME.
> >For example:
> >
> >1-1 mapping on 40000->40200
> >1-1 mapping on bc558->bc5ac
> >1-1 mapping on bc5b4->bc8c5
> >1-1 mapping on bc8c6->bcb7c
> >1-1 mapping on bcd00->100000
> >Released 614 pages of unused memory
> >Set 277889 page(s) to 1-1 mapping
> >Populating 40200-40466 pfn range: 614 pages added
> >
> >=> here we set the P2M tree from 40466 up to bc559 to be
> >INVALID_P2M_ENTRY. We should have done it only up to bc558.
> >
> >The end result is that if anybody is trying to construct
> >a PTE for PFN bc558 they end up with ~PAGE_PRESENT.
> >
> >CC: stable@vger.kernel.org
> >Reported-by: Andre Przywara <andre.przywara@amd.com>
> >Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> >---
> >  arch/x86/xen/setup.c |   11 +++++++++--
> >  1 files changed, 9 insertions(+), 2 deletions(-)
> >
> >diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
> >index ead8557..030a55a 100644
> >--- a/arch/x86/xen/setup.c
> >+++ b/arch/x86/xen/setup.c
> >@@ -78,9 +78,16 @@ static void __init xen_add_extra_mem(u64 start, u64 size)
> >  	memblock_reserve(start, size);
> >
> >  	xen_max_p2m_pfn = PFN_DOWN(start + size);
> >+	for (pfn = PFN_DOWN(start); pfn < xen_max_p2m_pfn; pfn++) {
> >+		unsigned long mfn = pfn_to_mfn(pfn);
> >+
> >+		if (WARN(mfn == pfn, "Trying to over-write 1-1 mapping (pfn: %lx)\n", pfn))
> >+			continue;
> >+		WARN(mfn != INVALID_P2M_ENTRY, "Trying to remove %lx which has %lx mfn!\n",
> >+			pfn, mfn);
> >
> >-	for (pfn = PFN_DOWN(start); pfn <= xen_max_p2m_pfn; pfn++)
> >-		__set_phys_to_machine(pfn, INVALID_P2M_ENTRY);
> >+		early_set_phys_to_machine(pfn, INVALID_P2M_ENTRY);
> >+	}
> >  }
> >
> >  static unsigned long __init xen_do_chunk(unsigned long start,
> >
> 
> -- 
> Andre Przywara
> AMD-Operating System Research Center (OSRC), Dresden, Germany

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 14:18:26 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 14:18:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4YEb-0001oH-BE; Thu, 23 Aug 2012 14:18:09 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1T4YEa-0001o3-FN
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 14:18:08 +0000
Received: from [85.158.138.51:45593] by server-4.bemta-3.messagelabs.com id
	68/64-04276-F9B36305; Thu, 23 Aug 2012 14:18:07 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-16.tower-174.messagelabs.com!1345731486!27563093!1
X-Originating-IP: [74.125.83.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8884 invoked from network); 23 Aug 2012 14:18:07 -0000
Received: from mail-ee0-f45.google.com (HELO mail-ee0-f45.google.com)
	(74.125.83.45)
	by server-16.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 14:18:07 -0000
Received: by eeke53 with SMTP id e53so314315eek.32
	for <xen-devel@lists.xen.org>; Thu, 23 Aug 2012 07:18:06 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:user-agent:date:subject:from:to:message-id:thread-topic
	:thread-index:mime-version:content-type:content-transfer-encoding;
	bh=rK8Om7jBY5mS/yuTRZYyDY57yp52zg902/i6y76Rkmk=;
	b=dWzmTJCrJ/ifcNNikXBtmIbOd1Iuus7NcJ1wBld58VCPA3oCDbo1PbPUVe6Dtfm9wt
	mpoHN3jI1MEjL/gcmkMQ0RZMLZNZvnqKanI3Hl1VpyyM3qLAOrxwGnei8dATOYkL3eXo
	6fNQiKnme64y4sdZjxoKHD+wTxIchL7O8KI68u47duVEAGNCPF2guLi1OQL16+hkoccv
	wUDf32luGlXNwafsbewzsHOsqKT2P4ymDhdVBvlDq5N4uXWx9NxwHI48NBvDq5V+3dIl
	GRMpZZnVLEOND5LcwyGjuHpAYqeMV+lU7H1QPnQ96KA6Ejilih2nhKPFTAEIaFDHCLNb
	RMsg==
Received: by 10.14.172.193 with SMTP id t41mr2210073eel.25.1345731486747;
	Thu, 23 Aug 2012 07:18:06 -0700 (PDT)
Received: from [192.168.1.3] (host86-184-74-117.range86-184.btcentralplus.com.
	[86.184.74.117])
	by mx.google.com with ESMTPS id c6sm21899718eep.7.2012.08.23.07.18.04
	(version=SSLv3 cipher=OTHER); Thu, 23 Aug 2012 07:18:05 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.33.0.120411
Date: Thu, 23 Aug 2012 15:18:01 +0100
From: Keir Fraser <keir@xen.org>
To: <xen-devel@lists.xen.org>
Message-ID: <CC5BFA29.49FA8%keir@xen.org>
Thread-Topic: Xen 4.2.0 RC3 released
Thread-Index: Ac2BOhoffOfsUxSlzEqGSfGzTsJp6w==
Mime-version: 1.0
Subject: [Xen-devel] Xen 4.2.0 RC3 released
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The third release candidate for 4.2.0 is now available:

http://xenbits.xen.org/staging/xen-unstable.hg (tag '4.2.0-rc3')

Please test!

 -- Keir



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 14:21:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 14:21:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4YHV-0001zR-Tl; Thu, 23 Aug 2012 14:21:09 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T4YHU-0001zJ-6M
	for xen-devel@lists.xensource.com; Thu, 23 Aug 2012 14:21:08 +0000
Received: from [85.158.139.83:44602] by server-12.bemta-5.messagelabs.com id
	F7/AE-22359-35C36305; Thu, 23 Aug 2012 14:21:07 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-12.tower-182.messagelabs.com!1345731665!27471122!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDcxNTE5Mg==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23516 invoked from network); 23 Aug 2012 14:21:06 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-12.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 23 Aug 2012 14:21:06 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7NEL0wj002053
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 23 Aug 2012 14:21:01 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7NEKw3v029106
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 23 Aug 2012 14:20:59 GMT
Received: from abhmt111.oracle.com (abhmt111.oracle.com [141.146.116.63])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7NEKwip008110; Thu, 23 Aug 2012 09:20:58 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 23 Aug 2012 07:20:58 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 356E34031E; Thu, 23 Aug 2012 10:10:55 -0400 (EDT)
Date: Thu, 23 Aug 2012 10:10:55 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: David Vrabel <david.vrabel@citrix.com>
Message-ID: <20120823141055.GH26098@phenom.dumpdata.com>
References: <4FE1C423.6070001@amd.com>
	<20120620145127.GD12787@phenom.dumpdata.com>
	<4FE32DDA.10204@amd.com>
	<20120630014825.GA7003@phenom.dumpdata.com>
	<20120630021936.GA27100@phenom.dumpdata.com>
	<50114002.2030700@amd.com>
	<20120817205207.GA3002@phenom.dumpdata.com>
	<503602A0.9040908@amd.com> <50360465.3090602@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50360465.3090602@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: Andre Przywara <andre.przywara@amd.com>,
	Jeremy Fitzhardinge <jeremy@goop.org>,
	xen-devel <xen-devel@lists.xensource.com>, Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] acpidump crashes on some machines
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Aug 23, 2012 at 11:22:29AM +0100, David Vrabel wrote:
> On 23/08/12 11:14, Andre Przywara wrote:
> > On 08/17/2012 10:52 PM, Konrad Rzeszutek Wilk wrote:
> >> On Thu, Jul 26, 2012 at 03:02:58PM +0200, Andre Przywara wrote:
> >>> On 06/30/2012 04:19 AM, Konrad Rzeszutek Wilk wrote:
> >>>
> >>> Konrad, David,
> >>>
> >>> back on track for this issue. Thanks for your input, I could do some
> >>> more debugging (see below for a refresh):
> >>>
> >>> It seems like it affects only the first page of the 1:1 mapping. I
> >>> didn't have any issues with the last PFN or the page behind it (which
> >>> failed properly).
> >>>
> >>> David, thanks for the hint with varying dom0_mem parameter. I
> >>> thought I already checked this, but I did it once again and it
> >>> turned out that it is only an issue if dom0_mem is smaller than the
> >>> ACPI area, which generates a hole in the memory map. So we have
> >>> (simplified)
> >>> * 1:1 mapping to 1 MB
> >>> * normal mapping till dom0_mem
> >>> * unmapped area till ACPI E820 area
> >>> * ACPI E820 1:1 mapping
> >>>
> >>> As far as I could chase it down the 1:1 mapping itself looks OK, I
> >>> couldn't find any off-by-one bugs here. So maybe it is code that
> >>> later on invalidates areas between the normal guest mapping and the
> >>> ACPI mem?
> >>
> >> I think I found it. Can you try this pls [and if you can't find
> >> early_to_phys.. just use the __set_phys_to call]
> > 
> > Yes, that works. At least after a quick test on my test box. Both the 
> > test module and acpidump work as expected. If I replace the "<" in your 
> > patch with the original "<=", I get the warning (and due to the 
> > "continue" it also works).
> 
> Note that the balloon driver could subsequently overwrite the p2m entry.

Hmm, I am not seeing how.. the region that is passed in is right up to
the PFN (I believe). And I did run with this patch over a couple of days
with ballooning up and down. But maybe I missed something?

Let me prep a patch that adds some more checks in the balloon driver
just in case we do hit this.

>  I don't think it is worth redoing the patch to adjust the region passed
> to the balloon driver to avoid this though.
> 
> > I also successfully tested the minimal fix (just replacing <= with <).
> > I will feed it to the testers here to cover more machines.
> > 
> > Do you want to keep the warnings in (which exceed 80 characters, btw)?
> 
> I think we do.
> 
> David
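The `<=` versus `<` fix discussed above can be sketched with a toy p2m invalidation loop; everything here (the array, the helper name, the range values) is illustrative, not the actual Xen code, but it shows how an inclusive upper bound clobbers the first entry of the region that follows, e.g. the start of the ACPI 1:1 mapping:

```c
#include <assert.h>

#define INVALID_P2M_ENTRY (~0UL)
#define N_ENTRIES 16UL

/* Toy p2m table, identity-mapped for the sketch. */
static unsigned long p2m[N_ENTRIES];

/* Hypothetical helper mirroring the bug under discussion: with
 * 'inclusive' set (the original "<="), the entry at 'end' is also
 * invalidated, even though it belongs to the next region. */
static void invalidate_range(unsigned long start, unsigned long end,
                             int inclusive)
{
    for (unsigned long pfn = start;
         inclusive ? pfn <= end : pfn < end;
         pfn++)
        p2m[pfn] = INVALID_P2M_ENTRY;
}
```

With `inclusive` the first PFN of the following 1:1 region is lost; the minimal fix (replacing `<=` with `<`) leaves it intact.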

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Thu Aug 23 14:22:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 14:22:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4YIB-000239-Am; Thu, 23 Aug 2012 14:21:51 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T4YI9-00022u-0L
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 14:21:49 +0000
Received: from [85.158.143.35:40308] by server-2.bemta-4.messagelabs.com id
	4A/29-21239-C7C36305; Thu, 23 Aug 2012 14:21:48 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1345730174!12528634!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA0MjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4908 invoked from network); 23 Aug 2012 13:56:14 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 13:56:14 -0000
X-IronPort-AV: E=Sophos;i="4.80,300,1344211200"; d="scan'208";a="14148404"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	23 Aug 2012 13:56:13 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 23 Aug 2012 14:56:13 +0100
Message-ID: <1345730172.12501.113.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@citrix.com>
Date: Thu, 23 Aug 2012 14:56:12 +0100
In-Reply-To: <9522ee398a1fd3cdce48cfe883b307336ae6674f.1345552068.git.julien.grall@citrix.com>
References: <cover.1345552068.git.julien.grall@citrix.com>
	<9522ee398a1fd3cdce48cfe883b307336ae6674f.1345552068.git.julien.grall@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "christian.limpach@gmail.com" <christian.limpach@gmail.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [XEN][RFC PATCH V2 15/17] xl: support spawn/destroy
 on multiple device model
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-08-22 at 13:32 +0100, Julien Grall wrote:
> Old configuration files still work with qemu disaggregation.
> Before spawning any QEMU, the toolstack will correctly fill in the
> configuration structure if needed.
> 
> For the moment, the toolstack spawns device models one by one.
> 
> Signed-off-by: Julien Grall <julien.grall@citrix.com>
> ---
>  tools/libxl/libxl.c          |   16 ++-
>  tools/libxl/libxl_create.c   |  150 +++++++++++++-----
>  tools/libxl/libxl_device.c   |    7 +-
>  tools/libxl/libxl_dm.c       |  369 ++++++++++++++++++++++++++++++------------
>  tools/libxl/libxl_dom.c      |    4 +-
>  tools/libxl/libxl_internal.h |   36 +++--
>  6 files changed, 421 insertions(+), 161 deletions(-)
> 
> diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
> index 8ea3478..60718b6 100644
> --- a/tools/libxl/libxl.c
> +++ b/tools/libxl/libxl.c
> @@ -1330,7 +1330,8 @@ static void stubdom_destroy_callback(libxl__egc *egc,
>      }
> 
>      dds->stubdom_finished = 1;
> -    savefile = libxl__device_model_savefile(gc, dis->domid);
> +    /* FIXME: get dmid */
> +    savefile = libxl__device_model_savefile(gc, dis->domid, 0);
>      rc = libxl__remove_file(gc, savefile);
>      /*
>       * On suspend libxl__domain_save_device_model will have already
> @@ -1423,10 +1424,8 @@ void libxl__destroy_domid(libxl__egc *egc, libxl__destroy_domid_state *dis)
>          LIBXL__LOG_ERRNOVAL(ctx, LIBXL__LOG_ERROR, rc, "xc_domain_pause failed for %d", domid);
>      }
>      if (dm_present) {
> -        if (libxl__destroy_device_model(gc, domid) < 0)
> -            LIBXL__LOG(ctx, LIBXL__LOG_ERROR, "libxl__destroy_device_model failed for %d", domid);
> -
> -        libxl__qmp_cleanup(gc, domid);
> +        if (libxl__destroy_device_models(gc, domid) < 0)
> +            LIBXL__LOG(ctx, LIBXL__LOG_ERROR, "libxl__destroy_device_models failed for %d", domid);
>      }
>      dis->drs.ao = ao;
>      dis->drs.domid = domid;
> @@ -1725,6 +1724,13 @@ out:
> 
>  /******************************************************************************/
> 
> +int libxl__dm_setdefault(libxl__gc *gc, libxl_dm *dm)
> +{
> +    return 0;
> +}
> +
> +/******************************************************************************/
> +
>  int libxl__device_disk_setdefault(libxl__gc *gc, libxl_device_disk *disk)
>  {
>      int rc;
> diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
> index 5f0d26f..7160c78 100644
> --- a/tools/libxl/libxl_create.c
> +++ b/tools/libxl/libxl_create.c
> @@ -35,6 +35,10 @@ void libxl_domain_config_dispose(libxl_domain_config *d_config)
>  {
>      int i;
> 
> +    for (i=0; i<d_config->num_dms; i++)
> +        libxl_dm_dispose(&d_config->dms[i]);
> +    free(d_config->dms);

We are adding libxl_FOO_list_free functions for these types as we
introduce new ones; can you do that for the dm type please.

> +
>      for (i=0; i<d_config->num_disks; i++)
>          libxl_device_disk_dispose(&d_config->disks[i]);
>      free(d_config->disks);
> @@ -59,6 +63,50 @@ void libxl_domain_config_dispose(libxl_domain_config *d_config)
>      libxl_domain_build_info_dispose(&d_config->b_info);
>  }
> 
> +static int libxl__domain_config_setdefault(libxl__gc *gc,
> +                                           libxl_domain_config *d_config)
> +{
> +    libxl_domain_build_info *b_info = &d_config->b_info;
> +    uint64_t cap = 0;
> +    int i = 0;
> +    int ret = 0;
> +    libxl_dm *default_dm = NULL;
> +
> +    if (b_info->device_model_version == LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN_TRADITIONAL
> +        && (d_config->num_dms > 1))
> +        return ERROR_INVAL;
> +
> +    if (!d_config->num_dms) {
> +        d_config->dms = malloc(sizeof (*d_config->dms));

You should use libxl__zalloc or libxl__calloc or something here with the
NO_GC argument to get the expected error handling.
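The allocation pattern being asked for here can be sketched as follows; `xzalloc` is an illustrative stand-in for libxl's internal zeroing allocator (`libxl__zalloc` with the `NO_GC` argument), not libxl's actual code, but it shows the expected behaviour: zeroed memory, and a loud centralized failure instead of an unchecked NULL from bare `malloc`:

```c
#include <stdio.h>
#include <stdlib.h>

/* Minimal stand-in for a zeroing allocator with centralized error
 * handling, in the spirit of libxl__zalloc(NO_GC, size); the name
 * and exit-on-failure policy here are illustrative. */
static void *xzalloc(size_t size)
{
    void *p = calloc(1, size);   /* zero-initialized, unlike malloc */
    if (!p) {
        fprintf(stderr, "out of memory allocating %zu bytes\n", size);
        exit(EXIT_FAILURE);      /* fail loudly rather than return NULL */
    }
    return p;
}
```

The zeroing matters for the d_config case: counts like `num_dms` and pointers like `dms` start out in a well-defined empty state.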
> @@ -991,12 +1057,11 @@ static void domcreate_launch_dm(libxl__egc *egc, libxl__multidev *multidev,
>          libxl__device_console_dispose(&console);
> 
>          if (need_qemu) {
> -            dcs->dmss.dm.guest_domid = domid;
> -            libxl__spawn_local_dm(egc, &dcs->dmss.dm);
> +            assert(dcs->dmss);
> +            domcreate_spawn_devmodel(egc, dcs, dcs->current_dmid);
>              return;
>          } else {
> -            assert(!dcs->dmss.dm.guest_domid);
> -            domcreate_devmodel_started(egc, &dcs->dmss.dm, 0);
> +            assert(!dcs->dmss);

Doesn't this stop progress in this case meaning we'll never get to the
end of the async op?

>              return;
>          }
>      }
[..]
> @@ -1044,7 +1044,8 @@ int libxl__wait_for_device_model(libxl__gc *gc,
>                                   void *check_callback_userdata)
>  {
>      char *path;
> -    path = libxl__sprintf(gc, "/local/domain/0/device-model/%d/state", domid);
> +    path = libxl__sprintf(gc, "/local/domain/0/dms/%u/%u/state",
> +                          domid, dmid);

Isn't this control path shared with qemu? I'm not sure we can just
change it like that? We need to at least retain compatibility with
pre-disag qemus.

>      return libxl__wait_for_offspring(gc, domid,
>                                       LIBXL_DEVICE_MODEL_START_TIMEOUT,
>                                       "Device Model", path, state, spawning,

>  const char *libxl__domain_device_model(libxl__gc *gc,
> -                                       const libxl_domain_build_info *info)
> +                                       uint32_t dmid,
> +                                       const libxl_domain_build_info *b_info)
>  {
>      libxl_ctx *ctx = libxl__gc_owner(gc);
>      const char *dm;
> +    libxl_domain_config *guest_config = CONTAINER_OF(b_info, *guest_config,
> +                                                     b_info);
> 
> -    if (libxl_defbool_val(info->device_model_stubdomain))
> +    if (libxl_defbool_val(guest_config->b_info.device_model_stubdomain))

You just extracted guest_config from b_info but you still have the
b_info pointer to hand. Why not use it? Likewise a few more times below.
>  {
> +    /**

The ** implies some sort of automagic comments->doc parsing process
which we don't have here.

> +     * PCI device number. Before 3, we have IDE, ISA, SouthBridge and
> +     * XEN PCI. These devices will be emulated in each QEMU, but only
> +     * one QEMU (the one which emulates the default device) will register
> +     * these devices through the Xen PCI hypercall.
> +     */
> +    static unsigned int bdf = 3;

Do you mean const rather than static?

Isn't this baking in some implementation detail from the current qemu
version? What happens if it changes?

> +
>      libxl_ctx *ctx = libxl__gc_owner(gc);
>      const libxl_domain_create_info *c_info = &guest_config->c_info;
>      const libxl_domain_build_info *b_info = &guest_config->b_info;
> +    const libxl_dm *dm_config = &guest_config->dms[dmid];
>      const libxl_device_disk *disks = guest_config->disks;
>      const libxl_device_nic *nics = guest_config->nics;
>      const int num_disks = guest_config->num_disks;
>      const int num_nics = guest_config->num_nics;
> -    const libxl_vnc_info *vnc = libxl__dm_vnc(guest_config);
> +    const libxl_vnc_info *vnc = libxl__dm_vnc(dmid, guest_config);
>      const libxl_sdl_info *sdl = dm_sdl(guest_config);
>      const char *keymap = dm_keymap(guest_config);
>      flexarray_t *dm_args;
>      int i;
>      uint64_t ram_size;
> +    uint32_t cap_ui = dm_config->capabilities & LIBXL_DM_CAP_UI;
> +    uint32_t cap_ide = dm_config->capabilities & LIBXL_DM_CAP_IDE;
> +    uint32_t cap_serial = dm_config->capabilities & LIBXL_DM_CAP_SERIAL;
> +    uint32_t cap_audio = dm_config->capabilities & LIBXL_DM_CAP_AUDIO;

->capabilities is defined as 64 bits, but you use 32 here, which happens
to work if you know what the actual values of the enum are but whoever
adds the 33rd capability will probably get it wrong.

     bool cap_foo = !! (dm_....capabiltieis & LIBXL_DM_CAP_FOO)

would probably work?
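The truncation hazard described above can be shown concretely; the flag values below are illustrative single-bit capabilities (the patch's LIBXL_DM_CAP_* values are assumed to live in a 64-bit field), not the real libxl constants:

```c
#include <stdint.h>

/* Illustrative capability flags in a 64-bit field. */
#define CAP_UI     (1ULL << 0)
#define CAP_BIT33  (1ULL << 32)  /* a hypothetical 33rd capability */

/* Truncating variant, mirroring 'uint32_t cap_x = caps & FLAG;'
 * from the patch: the high 32 bits of the mask result are silently
 * dropped on assignment, so bit 32 and above always test as unset. */
static uint32_t cap_test_u32(uint64_t caps, uint64_t flag)
{
    return (uint32_t)(caps & flag);
}

/* Suggested variant: collapse to 0/1 first, safe for any bit position. */
static int cap_test_bool(uint64_t caps, uint64_t flag)
{
    return !!(caps & flag);
}
```

For the first 32 capabilities both variants agree, which is why the bug would only bite whoever adds the 33rd.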

> 
>      dm_args = flexarray_make(16, 1);
>      if (!dm_args)
> @@ -348,11 +389,12 @@ static char ** libxl__build_device_model_args_new(libxl__gc *gc,
>                        "-xen-domid",
>                        libxl__sprintf(gc, "%d", guest_domid), NULL);
> 
> +    flexarray_append(dm_args, "-nodefaults");

Does this not cause a change in behaviour other than what you've
accounted for here?

> @@ -528,65 +583,69 @@ static char ** libxl__build_device_model_args_new(libxl__gc *gc,
>          abort();
>      }
> 
> -    ram_size = libxl__sizekb_to_mb(b_info->max_memkb - b_info->video_memkb);
> +    // Allocate ram space of 32Mo per previous device model to store rom

What is this about?

(also that Mo looks a bit odd in among all these mb's)

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> Old configuration file is still working with qemu disaggregation.
> Before to spawn any QEMU, the toolstack will fill correctly, if needed,
> configuration structure.
> 
> For the moment, the toolstack spawns device models one by one.
> 
> Signed-off-by: Julien Grall <julien.grall@citrix.com>
> ---
>  tools/libxl/libxl.c          |   16 ++-
>  tools/libxl/libxl_create.c   |  150 +++++++++++++-----
>  tools/libxl/libxl_device.c   |    7 +-
>  tools/libxl/libxl_dm.c       |  369 ++++++++++++++++++++++++++++++------------
>  tools/libxl/libxl_dom.c      |    4 +-
>  tools/libxl/libxl_internal.h |   36 +++--
>  6 files changed, 421 insertions(+), 161 deletions(-)
> 
> diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
> index 8ea3478..60718b6 100644
> --- a/tools/libxl/libxl.c
> +++ b/tools/libxl/libxl.c
> @@ -1330,7 +1330,8 @@ static void stubdom_destroy_callback(libxl__egc *egc,
>      }
> 
>      dds->stubdom_finished = 1;
> -    savefile = libxl__device_model_savefile(gc, dis->domid);
> +    /* FIXME: get dmid */
> +    savefile = libxl__device_model_savefile(gc, dis->domid, 0);
>      rc = libxl__remove_file(gc, savefile);
>      /*
>       * On suspend libxl__domain_save_device_model will have already
> @@ -1423,10 +1424,8 @@ void libxl__destroy_domid(libxl__egc *egc, libxl__destroy_domid_state *dis)
>          LIBXL__LOG_ERRNOVAL(ctx, LIBXL__LOG_ERROR, rc, "xc_domain_pause failed for %d", domid);
>      }
>      if (dm_present) {
> -        if (libxl__destroy_device_model(gc, domid) < 0)
> -            LIBXL__LOG(ctx, LIBXL__LOG_ERROR, "libxl__destroy_device_model failed for %d", domid);
> -
> -        libxl__qmp_cleanup(gc, domid);
> +        if (libxl__destroy_device_models(gc, domid) < 0)
> +            LIBXL__LOG(ctx, LIBXL__LOG_ERROR, "libxl__destroy_device_models failed for %d", domid);
>      }
>      dis->drs.ao = ao;
>      dis->drs.domid = domid;
> @@ -1725,6 +1724,13 @@ out:
> 
>  /******************************************************************************/
> 
> +int libxl__dm_setdefault(libxl__gc *gc, libxl_dm *dm)
> +{
> +    return 0;
> +}
> +
> +/******************************************************************************/
> +
>  int libxl__device_disk_setdefault(libxl__gc *gc, libxl_device_disk *disk)
>  {
>      int rc;
> diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
> index 5f0d26f..7160c78 100644
> --- a/tools/libxl/libxl_create.c
> +++ b/tools/libxl/libxl_create.c
> @@ -35,6 +35,10 @@ void libxl_domain_config_dispose(libxl_domain_config *d_config)
>  {
>      int i;
> 
> +    for (i=0; i<d_config->num_dms; i++)
> +        libxl_dm_dispose(&d_config->dms[i]);
> +    free(d_config->dms);

We are adding libxl_FOO_list_free functions for these types as we
introduce new ones; can you do that for the dm type please?

> +
>      for (i=0; i<d_config->num_disks; i++)
>          libxl_device_disk_dispose(&d_config->disks[i]);
>      free(d_config->disks);
> @@ -59,6 +63,50 @@ void libxl_domain_config_dispose(libxl_domain_config *d_config)
>      libxl_domain_build_info_dispose(&d_config->b_info);
>  }
> 
> +static int libxl__domain_config_setdefault(libxl__gc *gc,
> +                                           libxl_domain_config *d_config)
> +{
> +    libxl_domain_build_info *b_info = &d_config->b_info;
> +    uint64_t cap = 0;
> +    int i = 0;
> +    int ret = 0;
> +    libxl_dm *default_dm = NULL;
> +
> +    if (b_info->device_model_version == LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN_TRADITIONAL
> +        && (d_config->num_dms > 1))
> +        return ERROR_INVAL;
> +
> +    if (!d_config->num_dms) {
> +        d_config->dms = malloc(sizeof (*d_config->dms));

You should use libxl__zalloc or libxl__calloc or something here with the
NO_GC argument to get the expected error handling.
> @@ -991,12 +1057,11 @@ static void domcreate_launch_dm(libxl__egc *egc, libxl__multidev *multidev,
>          libxl__device_console_dispose(&console);
> 
>          if (need_qemu) {
> -            dcs->dmss.dm.guest_domid = domid;
> -            libxl__spawn_local_dm(egc, &dcs->dmss.dm);
> +            assert(dcs->dmss);
> +            domcreate_spawn_devmodel(egc, dcs, dcs->current_dmid);
>              return;
>          } else {
> -            assert(!dcs->dmss.dm.guest_domid);
> -            domcreate_devmodel_started(egc, &dcs->dmss.dm, 0);
> +            assert(!dcs->dmss);

Doesn't this stop progress in this case meaning we'll never get to the
end of the async op?

>              return;
>          }
>      }
[..]
> @@ -1044,7 +1044,8 @@ int libxl__wait_for_device_model(libxl__gc *gc,
>                                   void *check_callback_userdata)
>  {
>      char *path;
> -    path = libxl__sprintf(gc, "/local/domain/0/device-model/%d/state", domid);
> +    path = libxl__sprintf(gc, "/local/domain/0/dms/%u/%u/state",
> +                          domid, dmid);

Isn't this control path shared with qemu? I'm not sure we can just
change it like that? We need to at least retain compatibility with
pre-disag qemus.

>      return libxl__wait_for_offspring(gc, domid,
>                                       LIBXL_DEVICE_MODEL_START_TIMEOUT,
>                                       "Device Model", path, state, spawning,

>  const char *libxl__domain_device_model(libxl__gc *gc,
> -                                       const libxl_domain_build_info *info)
> +                                       uint32_t dmid,
> +                                       const libxl_domain_build_info *b_info)
>  {
>      libxl_ctx *ctx = libxl__gc_owner(gc);
>      const char *dm;
> +    libxl_domain_config *guest_config = CONTAINER_OF(b_info, *guest_config,
> +                                                     b_info);
> 
> -    if (libxl_defbool_val(info->device_model_stubdomain))
> +    if (libxl_defbool_val(guest_config->b_info.device_model_stubdomain))

You just extracted guest_config from b_info, but you still have the
b_info pointer to hand. Why not use it? Likewise a few more times below.
>  {
> +    /**

The ** implies some sort of automagic comments->doc parsing process
which we don't have here.

> +     * PCI device number. Before 3, we have IDE, ISA, SouthBridge and
> +     * XEN PCI. Theses devices will be emulate in each QEMU, but only
> +     * one QEMU (the one which emulates default device) will register
> +     * these devices through Xen PCI hypercall.
> +     */
> +    static unsigned int bdf = 3;

Do you mean const rather than static?

Isn't this baking in some implementation detail from the current qemu
version? What happens if it changes?

> +
>      libxl_ctx *ctx = libxl__gc_owner(gc);
>      const libxl_domain_create_info *c_info = &guest_config->c_info;
>      const libxl_domain_build_info *b_info = &guest_config->b_info;
> +    const libxl_dm *dm_config = &guest_config->dms[dmid];
>      const libxl_device_disk *disks = guest_config->disks;
>      const libxl_device_nic *nics = guest_config->nics;
>      const int num_disks = guest_config->num_disks;
>      const int num_nics = guest_config->num_nics;
> -    const libxl_vnc_info *vnc = libxl__dm_vnc(guest_config);
> +    const libxl_vnc_info *vnc = libxl__dm_vnc(dmid, guest_config);
>      const libxl_sdl_info *sdl = dm_sdl(guest_config);
>      const char *keymap = dm_keymap(guest_config);
>      flexarray_t *dm_args;
>      int i;
>      uint64_t ram_size;
> +    uint32_t cap_ui = dm_config->capabilities & LIBXL_DM_CAP_UI;
> +    uint32_t cap_ide = dm_config->capabilities & LIBXL_DM_CAP_IDE;
> +    uint32_t cap_serial = dm_config->capabilities & LIBXL_DM_CAP_SERIAL;
> +    uint32_t cap_audio = dm_config->capabilities & LIBXL_DM_CAP_AUDIO;

->capabilities is defined as 64 bits, but you use 32 here. That happens
to work if you know what the actual values of the enum are, but whoever
adds the 33rd capability will probably get it wrong.

     bool cap_foo = !!(dm_....capabilities & LIBXL_DM_CAP_FOO)

would probably work?

> 
>      dm_args = flexarray_make(16, 1);
>      if (!dm_args)
> @@ -348,11 +389,12 @@ static char ** libxl__build_device_model_args_new(libxl__gc *gc,
>                        "-xen-domid",
>                        libxl__sprintf(gc, "%d", guest_domid), NULL);
> 
> +    flexarray_append(dm_args, "-nodefaults");

Does this not cause a change in behaviour other than what you've
accounted for here?

> @@ -528,65 +583,69 @@ static char ** libxl__build_device_model_args_new(libxl__gc *gc,
>          abort();
>      }
> 
> -    ram_size = libxl__sizekb_to_mb(b_info->max_memkb - b_info->video_memkb);
> +    // Allocate ram space of 32Mo per previous device model to store rom

What is this about?

(also that Mo looks a bit odd in among all these mb's)

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 14:28:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 14:28:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4YOM-0002UR-UD; Thu, 23 Aug 2012 14:28:14 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T4YOL-0002UH-2X
	for xen-devel@lists.xensource.com; Thu, 23 Aug 2012 14:28:13 +0000
Received: from [85.158.138.51:25941] by server-8.bemta-3.messagelabs.com id
	2B/A4-29583-CFD36305; Thu, 23 Aug 2012 14:28:12 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-3.tower-174.messagelabs.com!1345732089!19571778!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNzA3ODk4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7215 invoked from network); 23 Aug 2012 14:28:10 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-3.tower-174.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 23 Aug 2012 14:28:10 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7NERm7H007516
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 23 Aug 2012 14:27:49 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7NERjI7011792
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 23 Aug 2012 14:27:46 GMT
Received: from abhmt120.oracle.com (abhmt120.oracle.com [141.146.116.72])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7NERiSC019976; Thu, 23 Aug 2012 09:27:45 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 23 Aug 2012 07:27:44 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id D2BA74031E; Thu, 23 Aug 2012 10:17:40 -0400 (EDT)
Date: Thu, 23 Aug 2012 10:17:40 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20120823141740.GA30305@phenom.dumpdata.com>
References: <20120807085554.GF29814@suse.de>
	<20120808.155046.820543563969484712.davem@davemloft.net>
	<1345631207.6821.140.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1345631207.6821.140.camel@zakaz.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-mm@kvack.org" <linux-mm@kvack.org>,
	"mgorman@suse.de" <mgorman@suse.de>,
	"konrad@darnok.org" <konrad@darnok.org>,
	"akpm@linux-foundation.org" <akpm@linux-foundation.org>,
	David Miller <davem@davemloft.net>
Subject: Re: [Xen-devel] [PATCH] netvm: check for page == NULL when
 propogating the skb->pfmemalloc flag
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Aug 22, 2012 at 11:26:47AM +0100, Ian Campbell wrote:
> On Wed, 2012-08-08 at 23:50 +0100, David Miller wrote:
> > Just use something like a call to __pskb_pull_tail(skb, len) and all
> > that other crap around that area can simply be deleted.
> 
> I think you mean something like this, which works for me, although I've
> only lightly tested it.
> 

I've tested it heavily and it works great.

Tested-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
and I took a look at it too and:

Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Ian.
> 
> 8<----------------------------------------
> 
> >From 9e47e3e87a33b45974448649a97859a479183041 Mon Sep 17 00:00:00 2001
> From: Ian Campbell <ian.campbell@citrix.com>
> Date: Wed, 22 Aug 2012 10:15:29 +0100
> Subject: [PATCH] xen-netfront: use __pskb_pull_tail to ensure linear area is big enough on RX
> 
> I'm slightly concerned by the "only in exceptional circumstances"
> comment on __pskb_pull_tail but the structure of an skb just created
> by netfront shouldn't hit any of the especially slow cases.
> 
> This approach still does slightly more work than the old way, since if
> we pull up the entire first frag we now have to shuffle everything
> down where before we just received into the right place in the first
> place.
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Cc: Jeremy Fitzhardinge <jeremy@goop.org>
> Cc: Mel Gorman <mgorman@suse.de>
> Cc: xen-devel@lists.xensource.com
> Cc: netdev@vger.kernel.org
> Cc: linux-kernel@vger.kernel.org
> ---
>  drivers/net/xen-netfront.c |   39 ++++++++++-----------------------------
>  1 files changed, 10 insertions(+), 29 deletions(-)
> 
> diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
> index 3089990..650f79a 100644
> --- a/drivers/net/xen-netfront.c
> +++ b/drivers/net/xen-netfront.c
> @@ -57,8 +57,7 @@
>  static const struct ethtool_ops xennet_ethtool_ops;
>  
>  struct netfront_cb {
> -	struct page *page;
> -	unsigned offset;
> +	int pull_to;
>  };
>  
>  #define NETFRONT_SKB_CB(skb)	((struct netfront_cb *)((skb)->cb))
> @@ -867,15 +866,9 @@ static int handle_incoming_queue(struct net_device *dev,
>  	struct sk_buff *skb;
>  
>  	while ((skb = __skb_dequeue(rxq)) != NULL) {
> -		struct page *page = NETFRONT_SKB_CB(skb)->page;
> -		void *vaddr = page_address(page);
> -		unsigned offset = NETFRONT_SKB_CB(skb)->offset;
> -
> -		memcpy(skb->data, vaddr + offset,
> -		       skb_headlen(skb));
> +		int pull_to = NETFRONT_SKB_CB(skb)->pull_to;
>  
> -		if (page != skb_frag_page(&skb_shinfo(skb)->frags[0]))
> -			__free_page(page);
> +		__pskb_pull_tail(skb, pull_to - skb_headlen(skb));
>  
>  		/* Ethernet work: Delayed to here as it peeks the header. */
>  		skb->protocol = eth_type_trans(skb, dev);
> @@ -913,7 +906,6 @@ static int xennet_poll(struct napi_struct *napi, int budget)
>  	struct sk_buff_head errq;
>  	struct sk_buff_head tmpq;
>  	unsigned long flags;
> -	unsigned int len;
>  	int err;
>  
>  	spin_lock(&np->rx_lock);
> @@ -955,24 +947,13 @@ err:
>  			}
>  		}
>  
> -		NETFRONT_SKB_CB(skb)->page =
> -			skb_frag_page(&skb_shinfo(skb)->frags[0]);
> -		NETFRONT_SKB_CB(skb)->offset = rx->offset;
> -
> -		len = rx->status;
> -		if (len > RX_COPY_THRESHOLD)
> -			len = RX_COPY_THRESHOLD;
> -		skb_put(skb, len);
> +		NETFRONT_SKB_CB(skb)->pull_to = rx->status;
> +		if (NETFRONT_SKB_CB(skb)->pull_to > RX_COPY_THRESHOLD)
> +			NETFRONT_SKB_CB(skb)->pull_to = RX_COPY_THRESHOLD;
>  
> -		if (rx->status > len) {
> -			skb_shinfo(skb)->frags[0].page_offset =
> -				rx->offset + len;
> -			skb_frag_size_set(&skb_shinfo(skb)->frags[0], rx->status - len);
> -			skb->data_len = rx->status - len;
> -		} else {
> -			__skb_fill_page_desc(skb, 0, NULL, 0, 0);
> -			skb_shinfo(skb)->nr_frags = 0;
> -		}
> +		skb_shinfo(skb)->frags[0].page_offset = rx->offset;
> +		skb_frag_size_set(&skb_shinfo(skb)->frags[0], rx->status);
> +		skb->data_len = rx->status;
>  
>  		i = xennet_fill_frags(np, skb, &tmpq);
>  
> @@ -999,7 +980,7 @@ err:
>  		 * receive throughout using the standard receive
>  		 * buffer size was cut by 25%(!!!).
>  		 */
> -		skb->truesize += skb->data_len - (RX_COPY_THRESHOLD - len);
> +		skb->truesize += skb->data_len - RX_COPY_THRESHOLD;
>  		skb->len += skb->data_len;
>  
>  		if (rx->flags & XEN_NETRXF_csum_blank)
> -- 
> 1.7.2.5
> 
> 
> 
> --
> To unsubscribe, send a message with 'unsubscribe linux-mm' in
> the body to majordomo@kvack.org.  For more info on Linux MM,
> see: http://www.linux-mm.org/ .
> Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 14:36:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 14:36:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4YWX-0002qd-UY; Thu, 23 Aug 2012 14:36:41 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1T4YWW-0002qX-Oc
	for xen-devel@lists.xensource.com; Thu, 23 Aug 2012 14:36:40 +0000
Received: from [85.158.143.99:12707] by server-1.bemta-4.messagelabs.com id
	5D/1E-12504-8FF36305; Thu, 23 Aug 2012 14:36:40 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1345732593!27010971!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjYxOTM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24803 invoked from network); 23 Aug 2012 14:36:34 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 14:36:34 -0000
X-IronPort-AV: E=Sophos;i="4.80,300,1344225600"; d="scan'208";a="35579432"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	23 Aug 2012 10:36:32 -0400
Received: from [10.80.2.76] (10.80.2.76) by FTLPMAILMX01.citrite.net
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0; Thu, 23 Aug 2012
	10:36:32 -0400
Message-ID: <50363FEF.10001@citrix.com>
Date: Thu, 23 Aug 2012 15:36:31 +0100
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20120428 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <4FE1C423.6070001@amd.com>
	<20120620145127.GD12787@phenom.dumpdata.com>
	<4FE32DDA.10204@amd.com>
	<20120630014825.GA7003@phenom.dumpdata.com>
	<20120630021936.GA27100@phenom.dumpdata.com>
	<50114002.2030700@amd.com>
	<20120817205207.GA3002@phenom.dumpdata.com>
	<503602A0.9040908@amd.com> <50360465.3090602@citrix.com>
	<20120823141055.GH26098@phenom.dumpdata.com>
In-Reply-To: <20120823141055.GH26098@phenom.dumpdata.com>
Cc: Andre Przywara <andre.przywara@amd.com>,
	Jeremy Fitzhardinge <jeremy@goop.org>,
	xen-devel <xen-devel@lists.xensource.com>, Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] acpidump crashes on some machines
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 23/08/12 15:10, Konrad Rzeszutek Wilk wrote:
> On Thu, Aug 23, 2012 at 11:22:29AM +0100, David Vrabel wrote:
>> On 23/08/12 11:14, Andre Przywara wrote:
>>> On 08/17/2012 10:52 PM, Konrad Rzeszutek Wilk wrote:
>>>> On Thu, Jul 26, 2012 at 03:02:58PM +0200, Andre Przywara wrote:
>>>>> On 06/30/2012 04:19 AM, Konrad Rzeszutek Wilk wrote:
>>>>>
>>>>> Konrad, David,
>>>>>
>>>>> back on track for this issue. Thanks for your input, I could do some
>>>>> more debugging (see below for a refresh):
>>>>>
>>>>> It seems like it affects only the first page of the 1:1 mapping. I
>>>>> didn't have any issues with the last PFN or the page behind it (which
>>>>> failed properly).
>>>>>
>>>>> David, thanks for the hint with varying dom0_mem parameter. I
>>>>> thought I already checked this, but I did it once again and it
>>>>> turned out that it is only an issue if dom0_mem is smaller than the
>>>>> ACPI area, which generates a hole in the memory map. So we have
>>>>> (simplified)
>>>>> * 1:1 mapping to 1 MB
>>>>> * normal mapping till dom0_mem
>>>>> * unmapped area till ACPI E820 area
>>>>> * ACPI E820 1:1 mapping
>>>>>
>>>>> As far as I could chase it down, the 1:1 mapping itself looks OK; I
>>>>> couldn't find any off-by-one bugs here. So maybe it is code that
>>>>> later on invalidates areas between the normal guest mapping and the
>>>>> ACPI mem?
>>>>
>>>> I think I found it. Can you try this pls [and if you can't find
>>>> early_to_phys.. just use the __set_phys_to call]
>>>
>>> Yes, that works. At least after a quick test on my test box. Both the 
>>> test module and acpidump work as expected. If I replace the "<" in your 
>>> patch with the original "<=", I get the warning (and due to the 
>>> "continue" it also works).
>>
>> Note that the balloon driver could subsequently overwrite the p2m entry.
> 
> Hmm, I am not seeing how.. the region that is passed in is right up to
> the PFN (I believe). And I did run with this patch over a couple of days
> with ballooning up and down. But maybe I missed something?

Hrrm.  I was sure I wrote "Note that the balloon driver could
subsequently overwrite the p2m entry /if/ this warning is triggered."
but it seems I did not. :/

i.e., if the warning is triggered, the xen_extra_mem region will be
incorrectly sized and the balloon driver will make use of the incorrect
region.
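[Editor's aside for readers following the "<" vs "<=" exchange above: a
minimal standalone sketch of the off-by-one. The table and function names
are illustrative, not the actual arch/x86/xen/p2m.c code. With the
inclusive bound, the loop writes one identity entry past the end of the
region, i.e. the entry the balloon driver later assumes is untouched.]

```c
#include <assert.h>

/* Toy p2m table; names here are illustrative, not the kernel's. */
#define REGION_START 100UL
#define REGION_END   110UL   /* first PFN *past* the 1:1 region */
#define TABLE_SIZE   200UL

static unsigned long p2m[TABLE_SIZE];

/* Mark a PFN range as identity-mapped. 'inclusive' mimics the
 * original "<=" loop bound; 0 mimics the "<" fix. */
static void set_identity(unsigned long start, unsigned long end,
                         int inclusive)
{
    for (unsigned long pfn = start;
         inclusive ? pfn <= end : pfn < end; pfn++)
        p2m[pfn] = pfn;   /* identity entry */
}
```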

David

> Let me prep a patch that adds some more checks in the balloon driver
> just in case we do hit this.
> 
>>  I don't think it is worth redoing the patch to adjust the region passed
>> to the balloon driver to avoid this though.
>>
>>> I also successfully tested the minimal fix (just replacing <= with <).
>>> I will feed it to the testers here to cover more machines.
>>>
>>> Do you want to keep the warnings in (which exceed 80 characters, btw)?
>>
>> I think we do.
>>
>> David


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 14:41:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 14:41:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4YaU-000300-Jf; Thu, 23 Aug 2012 14:40:46 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T4YaT-0002zo-QQ
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 14:40:46 +0000
Received: from [85.158.143.99:41851] by server-1.bemta-4.messagelabs.com id
	73/A4-12504-DE046305; Thu, 23 Aug 2012 14:40:45 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-7.tower-216.messagelabs.com!1345732840!24208110!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA0MjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23465 invoked from network); 23 Aug 2012 14:40:43 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-7.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 14:40:43 -0000
X-IronPort-AV: E=Sophos;i="4.80,300,1344211200"; d="scan'208";a="14149486"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	23 Aug 2012 14:40:40 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 23 Aug 2012 15:40:40 +0100
Date: Thu, 23 Aug 2012 15:40:17 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Julien Grall <julien.grall@citrix.com>
In-Reply-To: <bd5dc1c039b5b6d30ac785630677ad3339039f6b.1345637459.git.julien.grall@citrix.com>
Message-ID: <alpine.DEB.2.02.1208231449020.15568@kaball.uk.xensource.com>
References: <cover.1345637459.git.julien.grall@citrix.com>
	<bd5dc1c039b5b6d30ac785630677ad3339039f6b.1345637459.git.julien.grall@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "christian.limpach@gmail.com" <christian.limpach@gmail.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [QEMU][RFC V2 01/10] xen: add new machine options
 to support QEMU disaggregation in Xen environment
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 22 Aug 2012, Julien Grall wrote:
>     - xen_dmid: specify the id of QEMU. It will be used to
>     retrieve/store information inside XenStore.
>     - xen_default_dev (on/off): as default devices need to be create in
>     each QEMU (due to code dependency), this option specifies if it will
>     register range/PCI of default device via xen hypercall.
>     (Root bridge, south bridge, ...).
>     - xen_emulate_ide (on/off): enable/disable emulation in QEMU.
> 
> Signed-off-by: Julien Grall <julien.grall@citrix.com>
> ---
>  qemu-config.c |   12 ++++++++++++
>  1 files changed, 12 insertions(+), 0 deletions(-)
> 
> diff --git a/qemu-config.c b/qemu-config.c
> index c05ffbc..7740442 100644
> --- a/qemu-config.c
> +++ b/qemu-config.c
> @@ -612,6 +612,18 @@ static QemuOptsList qemu_machine_opts = {
>              .name = "dump-guest-core",
>              .type = QEMU_OPT_BOOL,
>              .help = "Include guest memory in  a core dump",
> +        }, {
> +            .name = "xen_dmid",
> +            .type = QEMU_OPT_NUMBER,
> +            .help = "Xen device model id",
> +        }, {
> +            .name = "xen_default_dev",
> +            .type = QEMU_OPT_BOOL,
> +            .help = "emulate Xen default device (South Bridge, IDE, ...)"
                                                ^devices
It would be good to document exactly which these are (the default
devices).

> +        }, {
> +            .name = "xen_emulate_ide",
> +            .type = QEMU_OPT_BOOL,
> +            .help = "emulate IDE with XEN"
>          },

Is this really a Xen-specific option? Couldn't it be used, for example,
by people who want to use virtio-scsi on KVM without IDE emulation?


>          { /* End of list */ }
>      },
> -- 
> Julien Grall
> 
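[Editor's aside: the option names below come from the patch above; the
rest of the command line is hypothetical and shown only to illustrate how
such options would be passed through the standard -machine option list,
assuming the patch were applied as posted.]

```shell
# Hypothetical invocation: run a secondary device model with id 1,
# leaving default-device registration and IDE emulation to the
# primary QEMU.
qemu-system-x86_64 -machine pc,accel=xen,xen_dmid=1,xen_default_dev=off,xen_emulate_ide=off
```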

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 14:41:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 14:41:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4YbD-00033p-1o; Thu, 23 Aug 2012 14:41:31 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T4YbA-00033Z-Sv
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 14:41:29 +0000
Received: from [85.158.143.99:47249] by server-2.bemta-4.messagelabs.com id
	42/3B-21239-71146305; Thu, 23 Aug 2012 14:41:27 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-4.tower-216.messagelabs.com!1345732883!22047037!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA0MjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15269 invoked from network); 23 Aug 2012 14:41:24 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-4.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 14:41:24 -0000
X-IronPort-AV: E=Sophos;i="4.80,300,1344211200"; d="scan'208";a="14149498"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	23 Aug 2012 14:41:23 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 23 Aug 2012 15:41:23 +0100
Date: Thu, 23 Aug 2012 15:41:00 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Julien Grall <julien.grall@citrix.com>
In-Reply-To: <b7c4e2af5c7a7091d99cd8e0d838fb09daa376e5.1345637459.git.julien.grall@citrix.com>
Message-ID: <alpine.DEB.2.02.1208231457360.15568@kaball.uk.xensource.com>
References: <cover.1345637459.git.julien.grall@citrix.com>
	<b7c4e2af5c7a7091d99cd8e0d838fb09daa376e5.1345637459.git.julien.grall@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "christian.limpach@gmail.com" <christian.limpach@gmail.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [QEMU][RFC V2 05/10] xen-memory: register memory/IO
 range in Xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 22 Aug 2012, Julien Grall wrote:
> Add Memory listener on IO and modify the one on memory.
> Becareful, the first listener is not called is the range is still register with
> register_ioport*. So Xen will never know that this QEMU is handle the range.

I don't understand what you mean here. Could you please elaborate?


> IO request works as before, the only thing is QEMU will never receive IO request
> that it can't handle.
> 
> y Changes to be committed:

Just remove this line from the commit message.


> Signed-off-by: Julien Grall <julien.grall@citrix.com>
> ---
>  xen-all.c |  113 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>  1 files changed, 113 insertions(+), 0 deletions(-)
> 
> diff --git a/xen-all.c b/xen-all.c
> index 5f05838..14e5d3d 100644
> --- a/xen-all.c
> +++ b/xen-all.c
> @@ -152,6 +152,7 @@ typedef struct XenIOState {
>  
>      struct xs_handle *xenstore;
>      MemoryListener memory_listener;
> +    MemoryListener io_listener;
>      QLIST_HEAD(, XenPhysmap) physmap;
>      target_phys_addr_t free_phys_offset;
>      const XenPhysmap *log_for_dirtybit;
> @@ -195,6 +196,31 @@ void xen_hvm_inject_msi(uint64_t addr, uint32_t data)
>      xen_xc_hvm_inject_msi(xen_xc, xen_domid, addr, data);
>  }
>  
> +static void xen_map_iorange(target_phys_addr_t addr, uint64_t size,
> +                            int is_mmio, const char *name)
> +{
> +    /* Don't register xen.ram */
> +    if (is_mmio && !strncmp(name, "xen.ram", 7)) {
> +        return;
> +    }
> +
> +    DPRINTF("map %s %s 0x"TARGET_FMT_plx" - 0x"TARGET_FMT_plx"\n",
> +            (is_mmio) ? "mmio" : "io", name, addr, addr + size - 1);
> +
> +    xen_xc_hvm_map_io_range_to_ioreq_server(xen_xc, xen_domid, serverid,
> +                                            is_mmio, addr, addr + size - 1);
> +}
> +
> +static void xen_unmap_iorange(target_phys_addr_t addr, uint64_t size,
> +                              int is_mmio)
> +{
> +    DPRINTF("unmap %s %s 0x"TARGET_FMT_plx" - 0x"TARGET_FMT_plx"\n",
> +            (is_mmio) ? "mmio" : "io", name, addr, addr + size - 1);
> +
> +    xen_xc_hvm_unmap_io_range_from_ioreq_server(xen_xc, xen_domid, serverid,
> +                                                is_mmio, addr);
> +}
> +
>  static void xen_suspend_notifier(Notifier *notifier, void *data)
>  {
>      xc_set_hvm_param(xen_xc, xen_domid, HVM_PARAM_ACPI_S_STATE, 3);
> @@ -527,12 +553,16 @@ static void xen_region_add(MemoryListener *listener,
>                             MemoryRegionSection *section)
>  {
>      xen_set_memory(listener, section, true);
> +    xen_map_iorange(section->offset_within_address_space,
> +                    section->size, 1, section->mr->name);
>  }
>  
>  static void xen_region_del(MemoryListener *listener,
>                             MemoryRegionSection *section)
>  {
>      xen_set_memory(listener, section, false);
> +    xen_unmap_iorange(section->offset_within_address_space,
> +                      section->size, 1);
>  }
>  
>  static void xen_region_nop(MemoryListener *listener,
> @@ -651,6 +681,86 @@ static MemoryListener xen_memory_listener = {
>      .priority = 10,
>  };
>  
> +static void xen_io_begin(MemoryListener *listener)
> +{
> +}
> +
> +static void xen_io_commit(MemoryListener *listener)
> +{
> +}
> +
> +static void xen_io_region_add(MemoryListener *listener,
> +                              MemoryRegionSection *section)
> +{
> +    xen_map_iorange(section->offset_within_address_space,
> +                    section->size, 0, section->mr->name);
> +}
> +
> +static void xen_io_region_del(MemoryListener *listener,
> +                              MemoryRegionSection *section)
> +{
> +    xen_unmap_iorange(section->offset_within_address_space,
> +                      section->size, 0);
> +}
> +
> +static void xen_io_region_nop(MemoryListener *listener,
> +                              MemoryRegionSection *section)
> +{
> +}
> +
> +static void xen_io_log_start(MemoryListener *listener,
> +                             MemoryRegionSection *section)
> +{
> +}
> +
> +static void xen_io_log_stop(MemoryListener *listener,
> +                            MemoryRegionSection *section)
> +{
> +}
> +
> +static void xen_io_log_sync(MemoryListener *listener,
> +                            MemoryRegionSection *section)
> +{
> +}
> +
> +static void xen_io_log_global_start(MemoryListener *listener)
> +{
> +}
> +
> +static void xen_io_log_global_stop(MemoryListener *listener)
> +{
> +}
> +
> +static void xen_io_eventfd_add(MemoryListener *listener,
> +                               MemoryRegionSection *section,
> +                               bool match_data, uint64_t data,
> +                               EventNotifier *e)
> +{
> +}
> +
> +static void xen_io_eventfd_del(MemoryListener *listener,
> +                               MemoryRegionSection *section,
> +                               bool match_data, uint64_t data,
> +                               EventNotifier *e)
> +{
> +}
> +
> +static MemoryListener xen_io_listener = {
> +    .begin = xen_io_begin,
> +    .commit = xen_io_commit,
> +    .region_add = xen_io_region_add,
> +    .region_del = xen_io_region_del,
> +    .region_nop = xen_io_region_nop,
> +    .log_start = xen_io_log_start,
> +    .log_stop = xen_io_log_stop,
> +    .log_sync = xen_io_log_sync,
> +    .log_global_start = xen_io_log_global_start,
> +    .log_global_stop = xen_io_log_global_stop,
> +    .eventfd_add = xen_io_eventfd_add,
> +    .eventfd_del = xen_io_eventfd_del,
> +    .priority = 10,
> +};
> +
>  /* VCPU Operations, MMIO, IO ring ... */
>  
>  static void xen_reset_vcpu(void *opaque)
> @@ -1239,6 +1349,9 @@ int xen_hvm_init(void)
>      memory_listener_register(&state->memory_listener, get_system_memory());
>      state->log_for_dirtybit = NULL;
>  
> +    state->io_listener = xen_io_listener;
> +    memory_listener_register(&state->io_listener, get_system_io());
> +
>      /* Initialize backend core & drivers */
>      if (xen_be_init() != 0) {
>          fprintf(stderr, "%s: xen backend core setup failed\n", __FUNCTION__);
> -- 
> Julien Grall
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 14:41:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 14:41:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4YbD-00033p-1o; Thu, 23 Aug 2012 14:41:31 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T4YbA-00033Z-Sv
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 14:41:29 +0000
Received: from [85.158.143.99:47249] by server-2.bemta-4.messagelabs.com id
	42/3B-21239-71146305; Thu, 23 Aug 2012 14:41:27 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-4.tower-216.messagelabs.com!1345732883!22047037!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA0MjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15269 invoked from network); 23 Aug 2012 14:41:24 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-4.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 14:41:24 -0000
X-IronPort-AV: E=Sophos;i="4.80,300,1344211200"; d="scan'208";a="14149498"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	23 Aug 2012 14:41:23 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 23 Aug 2012 15:41:23 +0100
Date: Thu, 23 Aug 2012 15:41:00 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Julien Grall <julien.grall@citrix.com>
In-Reply-To: <b7c4e2af5c7a7091d99cd8e0d838fb09daa376e5.1345637459.git.julien.grall@citrix.com>
Message-ID: <alpine.DEB.2.02.1208231457360.15568@kaball.uk.xensource.com>
References: <cover.1345637459.git.julien.grall@citrix.com>
	<b7c4e2af5c7a7091d99cd8e0d838fb09daa376e5.1345637459.git.julien.grall@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "christian.limpach@gmail.com" <christian.limpach@gmail.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [QEMU][RFC V2 05/10] xen-memory: register memory/IO
 range in Xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 22 Aug 2012, Julien Grall wrote:
> Add a memory listener on IO and modify the one on memory.
> Be careful: the first listener is not called if the range is still registered with
> register_ioport*. So Xen will never know that this QEMU handles the range.

I don't understand what you mean here. Could you please elaborate?


> IO requests work as before; the only difference is that QEMU will never receive an IO
> request that it cannot handle.
> 
> y Changes to be committed:

Just remove this line from the commit message.


> Signed-off-by: Julien Grall <julien.grall@citrix.com>
> ---
>  xen-all.c |  113 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>  1 files changed, 113 insertions(+), 0 deletions(-)
> 
> diff --git a/xen-all.c b/xen-all.c
> index 5f05838..14e5d3d 100644
> --- a/xen-all.c
> +++ b/xen-all.c
> @@ -152,6 +152,7 @@ typedef struct XenIOState {
>  
>      struct xs_handle *xenstore;
>      MemoryListener memory_listener;
> +    MemoryListener io_listener;
>      QLIST_HEAD(, XenPhysmap) physmap;
>      target_phys_addr_t free_phys_offset;
>      const XenPhysmap *log_for_dirtybit;
> @@ -195,6 +196,31 @@ void xen_hvm_inject_msi(uint64_t addr, uint32_t data)
>      xen_xc_hvm_inject_msi(xen_xc, xen_domid, addr, data);
>  }
>  
> +static void xen_map_iorange(target_phys_addr_t addr, uint64_t size,
> +                            int is_mmio, const char *name)
> +{
> +    /* Don't register xen.ram */
> +    if (is_mmio && !strncmp(name, "xen.ram", 7)) {
> +        return;
> +    }
> +
> +    DPRINTF("map %s %s 0x"TARGET_FMT_plx" - 0x"TARGET_FMT_plx"\n",
> +            (is_mmio) ? "mmio" : "io", name, addr, addr + size - 1);
> +
> +    xen_xc_hvm_map_io_range_to_ioreq_server(xen_xc, xen_domid, serverid,
> +                                            is_mmio, addr, addr + size - 1);
> +}
> +
> +static void xen_unmap_iorange(target_phys_addr_t addr, uint64_t size,
> +                              int is_mmio)
> +{
> +    DPRINTF("unmap %s 0x"TARGET_FMT_plx" - 0x"TARGET_FMT_plx"\n",
> +            (is_mmio) ? "mmio" : "io", addr, addr + size - 1);
> +
> +    xen_xc_hvm_unmap_io_range_from_ioreq_server(xen_xc, xen_domid, serverid,
> +                                                is_mmio, addr);
> +}
> +
>  static void xen_suspend_notifier(Notifier *notifier, void *data)
>  {
>      xc_set_hvm_param(xen_xc, xen_domid, HVM_PARAM_ACPI_S_STATE, 3);
> @@ -527,12 +553,16 @@ static void xen_region_add(MemoryListener *listener,
>                             MemoryRegionSection *section)
>  {
>      xen_set_memory(listener, section, true);
> +    xen_map_iorange(section->offset_within_address_space,
> +                    section->size, 1, section->mr->name);
>  }
>  
>  static void xen_region_del(MemoryListener *listener,
>                             MemoryRegionSection *section)
>  {
>      xen_set_memory(listener, section, false);
> +    xen_unmap_iorange(section->offset_within_address_space,
> +                      section->size, 1);
>  }
>  
>  static void xen_region_nop(MemoryListener *listener,
> @@ -651,6 +681,86 @@ static MemoryListener xen_memory_listener = {
>      .priority = 10,
>  };
>  
> +static void xen_io_begin(MemoryListener *listener)
> +{
> +}
> +
> +static void xen_io_commit(MemoryListener *listener)
> +{
> +}
> +
> +static void xen_io_region_add(MemoryListener *listener,
> +                              MemoryRegionSection *section)
> +{
> +    xen_map_iorange(section->offset_within_address_space,
> +                    section->size, 0, section->mr->name);
> +}
> +
> +static void xen_io_region_del(MemoryListener *listener,
> +                              MemoryRegionSection *section)
> +{
> +    xen_unmap_iorange(section->offset_within_address_space,
> +                      section->size, 0);
> +}
> +
> +static void xen_io_region_nop(MemoryListener *listener,
> +                              MemoryRegionSection *section)
> +{
> +}
> +
> +static void xen_io_log_start(MemoryListener *listener,
> +                             MemoryRegionSection *section)
> +{
> +}
> +
> +static void xen_io_log_stop(MemoryListener *listener,
> +                            MemoryRegionSection *section)
> +{
> +}
> +
> +static void xen_io_log_sync(MemoryListener *listener,
> +                            MemoryRegionSection *section)
> +{
> +}
> +
> +static void xen_io_log_global_start(MemoryListener *listener)
> +{
> +}
> +
> +static void xen_io_log_global_stop(MemoryListener *listener)
> +{
> +}
> +
> +static void xen_io_eventfd_add(MemoryListener *listener,
> +                               MemoryRegionSection *section,
> +                               bool match_data, uint64_t data,
> +                               EventNotifier *e)
> +{
> +}
> +
> +static void xen_io_eventfd_del(MemoryListener *listener,
> +                               MemoryRegionSection *section,
> +                               bool match_data, uint64_t data,
> +                               EventNotifier *e)
> +{
> +}
> +
> +static MemoryListener xen_io_listener = {
> +    .begin = xen_io_begin,
> +    .commit = xen_io_commit,
> +    .region_add = xen_io_region_add,
> +    .region_del = xen_io_region_del,
> +    .region_nop = xen_io_region_nop,
> +    .log_start = xen_io_log_start,
> +    .log_stop = xen_io_log_stop,
> +    .log_sync = xen_io_log_sync,
> +    .log_global_start = xen_io_log_global_start,
> +    .log_global_stop = xen_io_log_global_stop,
> +    .eventfd_add = xen_io_eventfd_add,
> +    .eventfd_del = xen_io_eventfd_del,
> +    .priority = 10,
> +};
> +
>  /* VCPU Operations, MMIO, IO ring ... */
>  
>  static void xen_reset_vcpu(void *opaque)
> @@ -1239,6 +1349,9 @@ int xen_hvm_init(void)
>      memory_listener_register(&state->memory_listener, get_system_memory());
>      state->log_for_dirtybit = NULL;
>  
> +    state->io_listener = xen_io_listener;
> +    memory_listener_register(&state->io_listener, get_system_io());
> +
>      /* Initialize backend core & drivers */
>      if (xen_be_init() != 0) {
>          fprintf(stderr, "%s: xen backend core setup failed\n", __FUNCTION__);
> -- 
> Julien Grall
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 14:42:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 14:42:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4YbZ-00036w-FA; Thu, 23 Aug 2012 14:41:53 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T4YbY-00036W-1g
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 14:41:52 +0000
Received: from [85.158.143.99:49413] by server-3.bemta-4.messagelabs.com id
	E5/E8-08232-F2146305; Thu, 23 Aug 2012 14:41:51 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1345732910!20551422!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA0MjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30297 invoked from network); 23 Aug 2012 14:41:50 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-6.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 14:41:50 -0000
X-IronPort-AV: E=Sophos;i="4.80,300,1344211200"; d="scan'208";a="14149507"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	23 Aug 2012 14:41:50 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 23 Aug 2012 15:41:50 +0100
Date: Thu, 23 Aug 2012 15:41:27 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Julien Grall <julien.grall@citrix.com>
In-Reply-To: <10985f0bc427cc258adb11cb97818a4e7ab133c9.1345637459.git.julien.grall@citrix.com>
Message-ID: <alpine.DEB.2.02.1208231455010.15568@kaball.uk.xensource.com>
References: <cover.1345637459.git.julien.grall@citrix.com>
	<10985f0bc427cc258adb11cb97818a4e7ab133c9.1345637459.git.julien.grall@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "christian.limpach@gmail.com" <christian.limpach@gmail.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [QEMU][RFC V2 06/10] xen-pci: register PCI device
 in Xen and handle IOREQ_TYPE_PCI_CONFIG
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 22 Aug 2012, Julien Grall wrote:
> With QEMU disaggregation, each QEMU needs to specify which PCI devices it is
> able to handle. It uses the device's place in the topology (domain, bus,
> device, function).
> When Xen traps an access to the config space, it forges a new
> ioreq and forwards it to the right QEMU.
> 
> Signed-off-by: Julien Grall <julien.grall@citrix.com>
> ---
>  hw/pci.c   |    6 ++++++
>  hw/xen.h   |    1 +
>  xen-all.c  |   38 ++++++++++++++++++++++++++++++++++++++
>  xen-stub.c |    5 +++++
>  4 files changed, 50 insertions(+), 0 deletions(-)
> 
> diff --git a/hw/pci.c b/hw/pci.c
> index 4d95984..0112edf 100644
> --- a/hw/pci.c
> +++ b/hw/pci.c
> @@ -33,6 +33,7 @@
>  #include "qmp-commands.h"
>  #include "msi.h"
>  #include "msix.h"
> +#include "xen.h"
>  
>  //#define DEBUG_PCI
>  #ifdef DEBUG_PCI
> @@ -781,6 +782,11 @@ static PCIDevice *do_pci_register_device(PCIDevice *pci_dev, PCIBus *bus,
>      pci_dev->devfn = devfn;
>      pstrcpy(pci_dev->name, sizeof(pci_dev->name), name);
>      pci_dev->irq_state = 0;
> +
> +    if (xen_enabled() && xen_register_pcidev(pci_dev)) {
> +        return NULL;

Is this an error condition? If so we should print an error message,
right?


> +    }
> +
>      pci_config_alloc(pci_dev);
>  
>      pci_config_set_vendor_id(pci_dev->config, pc->vendor_id);
> diff --git a/hw/xen.h b/hw/xen.h
> index e5926b7..663731a 100644
> --- a/hw/xen.h
> +++ b/hw/xen.h
> @@ -35,6 +35,7 @@ int xen_pci_slot_get_pirq(PCIDevice *pci_dev, int irq_num);
>  void xen_piix3_set_irq(void *opaque, int irq_num, int level);
>  void xen_piix_pci_write_config_client(uint32_t address, uint32_t val, int len);
>  void xen_hvm_inject_msi(uint64_t addr, uint32_t data);
> +int xen_register_pcidev(PCIDevice *pci_dev);
>  void xen_cmos_set_s3_resume(void *opaque, int irq, int level);
>  
>  qemu_irq *xen_interrupt_controller_init(void);
> diff --git a/xen-all.c b/xen-all.c
> index 14e5d3d..485c312 100644
> --- a/xen-all.c
> +++ b/xen-all.c
> @@ -174,6 +174,16 @@ void xen_piix3_set_irq(void *opaque, int irq_num, int level)
>                                irq_num & 3, level);
>  }
>  
> +int xen_register_pcidev(PCIDevice *pci_dev)
> +{
> +    DPRINTF("register pci %x:%x.%x %s\n", 0, (pci_dev->devfn >> 3) & 0x1f,
> +            pci_dev->devfn & 0x7, pci_dev->name);
> +
> +    return xen_xc_hvm_register_pcidev(xen_xc, xen_domid, serverid,
> +                                      0, 0, pci_dev->devfn >> 3,
> +                                      pci_dev->devfn & 0x7);
> +}
> +
>  void xen_piix_pci_write_config_client(uint32_t address, uint32_t val, int len)
>  {
>      int i;
> @@ -943,6 +953,29 @@ static void cpu_ioreq_move(ioreq_t *req)
>      }
>  }
>  
> +#if __XEN_LATEST_INTERFACE_VERSION__ >= 0x00040300
> +static void cpu_ioreq_config_space(ioreq_t *req)
> +{
> +    uint64_t cf8 = req->addr;
> +    uint32_t tmp = req->size;
> +    uint16_t size = req->size & 0xff;
> +    uint16_t off = req->size >> 16;
> +
> +    if ((size + off + 0xcfc) > 0xd00) {
> +        hw_error("Invalid ioreq config space size = %u off = %u\n",
> +                 size, off);
> +    }
> +
> +    req->addr = 0xcfc + off;
> +    req->size = size;
> +
> +    do_outp(0xcf8, 4, cf8);
> +    cpu_ioreq_pio(req);
> +    req->addr = cf8;
> +    req->size = tmp;
> +}
> +#endif
> +
>  static void handle_ioreq(ioreq_t *req)
>  {
>      if (!req->data_is_ptr && (req->dir == IOREQ_WRITE) &&
> @@ -962,6 +995,11 @@ static void handle_ioreq(ioreq_t *req)
>          case IOREQ_TYPE_INVALIDATE:
>              xen_invalidate_map_cache();
>              break;
> +#if __XEN_LATEST_INTERFACE_VERSION__ >= 0x00040300
> +        case IOREQ_TYPE_PCI_CONFIG:
> +            cpu_ioreq_config_space(req);
> +            break;
> +#endif
>          default:
>              hw_error("Invalid ioreq type 0x%x\n", req->type);
>      }
> diff --git a/xen-stub.c b/xen-stub.c
> index 8ff2b79..0128965 100644
> --- a/xen-stub.c
> +++ b/xen-stub.c
> @@ -25,6 +25,11 @@ void xen_piix3_set_irq(void *opaque, int irq_num, int level)
>  {
>  }
>  
> +int xen_register_pcidev(PCIDevice *pci_dev)
> +{
> +    return 1;
> +}
> +
>  void xen_piix_pci_write_config_client(uint32_t address, uint32_t val, int len)
>  {
>  }
> -- 
> Julien Grall
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 14:42:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 14:42:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4YcC-0003Dm-2Y; Thu, 23 Aug 2012 14:42:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T4YcA-0003DV-8c
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 14:42:30 +0000
Received: from [85.158.143.99:61991] by server-1.bemta-4.messagelabs.com id
	7C/C7-12504-55146305; Thu, 23 Aug 2012 14:42:29 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-10.tower-216.messagelabs.com!1345732947!21221907!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA0MjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31328 invoked from network); 23 Aug 2012 14:42:27 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-10.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 14:42:27 -0000
X-IronPort-AV: E=Sophos;i="4.80,300,1344211200"; d="scan'208";a="14149524"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	23 Aug 2012 14:42:27 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 23 Aug 2012 15:42:26 +0100
Date: Thu, 23 Aug 2012 15:42:04 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Julien Grall <julien.grall@citrix.com>
In-Reply-To: <552327742c4491e4dadbccfa189bf9e2ab706ba4.1345637459.git.julien.grall@citrix.com>
Message-ID: <alpine.DEB.2.02.1208231521020.15568@kaball.uk.xensource.com>
References: <cover.1345637459.git.julien.grall@citrix.com>
	<552327742c4491e4dadbccfa189bf9e2ab706ba4.1345637459.git.julien.grall@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "christian.limpach@gmail.com" <christian.limpach@gmail.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [QEMU][RFC V2 07/10] xen: specify which device is
 part of default devices
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 22 Aug 2012, Julien Grall wrote:
> One major problem of QEMU disaggregation is that some devices need to
> be emulated in each QEMU, but only one needs to register them in Xen.
> 
> This patch introduces helpers that can be used in QEMU code (for
> instance hw/pc_piix.c) to specify if the device is part of default sets.
> 
> Signed-off-by: Julien Grall <julien.grall@citrix.com>
> ---
>  hw/pc_piix.c |    2 ++
>  hw/xen.h     |   20 ++++++++++++++++++++
>  xen-all.c    |   29 +++++++++++++++++++++++++++--
>  3 files changed, 49 insertions(+), 2 deletions(-)
> 
> diff --git a/hw/pc_piix.c b/hw/pc_piix.c
> index 0c0096f..6cb0a2a 100644
> --- a/hw/pc_piix.c
> +++ b/hw/pc_piix.c
> @@ -342,9 +342,11 @@ static void pc_xen_hvm_init(ram_addr_t ram_size,
>      if (xen_hvm_init() != 0) {
>          hw_error("xen hardware virtual machine initialisation failed");
>      }
> +    xen_set_register_default_dev(1,  NULL);
>      pc_init_pci_no_kvmclock(ram_size, boot_device,
>                              kernel_filename, kernel_cmdline,
>                              initrd_filename, cpu_model);
> +    xen_set_register_default_dev(0, NULL);
>      xen_vcpu_init();
>  }

Honestly I don't like this interface, I would rather have an explicit
list of "default devices" and then go through them in
xen_register_pcidev and xen_map_iorange.


>  #endif
> diff --git a/hw/xen.h b/hw/xen.h
> index 663731a..3c8724f 100644
> --- a/hw/xen.h
> +++ b/hw/xen.h
> @@ -21,6 +21,7 @@ extern uint32_t xen_domid;
>  extern enum xen_mode xen_mode;
>  
>  extern int xen_allowed;
> +extern int xen_register_default_dev;
>  
>  static inline int xen_enabled(void)
>  {
> @@ -31,6 +32,25 @@ static inline int xen_enabled(void)
>  #endif
>  }
>  
> +static inline int xen_is_registered_default_dev(void)
> +{
> +#if defined(CONFIG_XEN)
> +    return xen_register_default_dev;
> +#else
> +    return 1;
> +#endif
> +}
> +
> +static inline void xen_set_register_default_dev(int val, int *old)
> +{
> +#if defined(CONFIG_XEN)
> +    if (old) {
> +        *old = xen_register_default_dev;
> +    }
> +    xen_register_default_dev = val;
> +#endif
> +}
> +
>  int xen_pci_slot_get_pirq(PCIDevice *pci_dev, int irq_num);
>  void xen_piix3_set_irq(void *opaque, int irq_num, int level);
>  void xen_piix_pci_write_config_client(uint32_t address, uint32_t val, int len);
> diff --git a/xen-all.c b/xen-all.c
> index 485c312..afa9bcc 100644
> --- a/xen-all.c
> +++ b/xen-all.c
> @@ -39,6 +39,10 @@ static MemoryRegion *framebuffer;
>  static unsigned int serverid;
>  static uint32_t xen_dmid = ~0;
>  
> +/* Use to tell if we register pci/mmio/pio of default devices */
> +int xen_register_default_dev = 0;
> +static int xen_emulate_default_dev = 1;
>
>  /* Compatibility with older version */
>  #if __XEN_LATEST_INTERFACE_VERSION__ < 0x0003020a
>  static inline uint32_t xen_vcpu_eport(shared_iopage_t *shared_page, int i)
> @@ -176,6 +180,10 @@ void xen_piix3_set_irq(void *opaque, int irq_num, int level)
>  
>  int xen_register_pcidev(PCIDevice *pci_dev)
>  {
> +    if (xen_register_default_dev && !xen_emulate_default_dev) {
> +        return 0;
> +    }
>
>      DPRINTF("register pci %x:%x.%x %s\n", 0, (pci_dev->devfn >> 3) & 0x1f,
>              pci_dev->devfn & 0x7, pci_dev->name);
>  
> @@ -214,6 +222,18 @@ static void xen_map_iorange(target_phys_addr_t addr, uint64_t size,
>          return;
>      }
>  
> +    /* Handle the registration of all default io range */
> +    if (xen_register_default_dev) {
> +        /* Register ps/2 only if we emulate VGA */
> +        if (!strcmp(name, "i8042-data") || !strcmp(name, "i8042-cmd")) {
> +            if (display_type == DT_NOGRAPHIC) {
> +                return;
> +            }
> +        } else if (!xen_emulate_default_dev && strcmp(name, "serial")) {
> +            return;
> +        }
> +    }

It seems that you are acting upon xen_register_default_dev only in
xen_map_iorange and xen_register_pcidev, so it shouldn't be difficult to
have a real list of "default devices" instead.


>      DPRINTF("map %s %s 0x"TARGET_FMT_plx" - 0x"TARGET_FMT_plx"\n",
>              (is_mmio) ? "mmio" : "io", name, addr, addr + size - 1);
>  
> @@ -1300,6 +1320,8 @@ int xen_hvm_init(void)
>      if (!QTAILQ_EMPTY(&list->head)) {
>          xen_dmid = qemu_opt_get_number(QTAILQ_FIRST(&list->head),
>                                         "xen_dmid", ~0);
> +        xen_emulate_default_dev = qemu_opt_get_bool(QTAILQ_FIRST(&list->head),
> +                                                    "xen_default_dev", 1);
>      }
>  
>      state = g_malloc0(sizeof (XenIOState));
> @@ -1395,9 +1417,12 @@ int xen_hvm_init(void)
>          fprintf(stderr, "%s: xen backend core setup failed\n", __FUNCTION__);
>          exit(1);
>      }
> -    xen_be_register("console", &xen_console_ops);
> -    xen_be_register("vkbd", &xen_kbdmouse_ops);
>      xen_be_register("qdisk", &xen_blkdev_ops);
> +
> +    if (xen_emulate_default_dev) {
> +        xen_be_register("console", &xen_console_ops);
> +        xen_be_register("vkbd", &xen_kbdmouse_ops);
> +    }
>      xen_read_physmap(state);

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 22 Aug 2012, Julien Grall wrote:
> One major problem of QEMU disaggregation is that some devices need to
> be emulated in each QEMU, but only one needs to register them with Xen.
> 
> This patch introduces helpers that can be used in QEMU code (for
> instance hw/pc_piix.c) to specify whether a device is part of the
> default device set.
> 
> Signed-off-by: Julien Grall <julien.grall@citrix.com>
> ---
>  hw/pc_piix.c |    2 ++
>  hw/xen.h     |   20 ++++++++++++++++++++
>  xen-all.c    |   29 +++++++++++++++++++++++++++--
>  3 files changed, 49 insertions(+), 2 deletions(-)
> 
> diff --git a/hw/pc_piix.c b/hw/pc_piix.c
> index 0c0096f..6cb0a2a 100644
> --- a/hw/pc_piix.c
> +++ b/hw/pc_piix.c
> @@ -342,9 +342,11 @@ static void pc_xen_hvm_init(ram_addr_t ram_size,
>      if (xen_hvm_init() != 0) {
>          hw_error("xen hardware virtual machine initialisation failed");
>      }
> +    xen_set_register_default_dev(1,  NULL);
>      pc_init_pci_no_kvmclock(ram_size, boot_device,
>                              kernel_filename, kernel_cmdline,
>                              initrd_filename, cpu_model);
> +    xen_set_register_default_dev(0, NULL);
>      xen_vcpu_init();
>  }

Honestly I don't like this interface; I would rather have an explicit
list of "default devices" and then go through them in
xen_register_pcidev and xen_map_iorange.


>  #endif
> diff --git a/hw/xen.h b/hw/xen.h
> index 663731a..3c8724f 100644
> --- a/hw/xen.h
> +++ b/hw/xen.h
> @@ -21,6 +21,7 @@ extern uint32_t xen_domid;
>  extern enum xen_mode xen_mode;
>  
>  extern int xen_allowed;
> +extern int xen_register_default_dev;
>  
>  static inline int xen_enabled(void)
>  {
> @@ -31,6 +32,25 @@ static inline int xen_enabled(void)
>  #endif
>  }
>  
> +static inline int xen_is_registered_default_dev(void)
> +{
> +#if defined(CONFIG_XEN)
> +    return xen_register_default_dev;
> +#else
> +    return 1;
> +#endif
> +}
> +
> +static inline void xen_set_register_default_dev(int val, int *old)
> +{
> +#if defined(CONFIG_XEN)
> +    if (old) {
> +        *old = xen_register_default_dev;
> +    }
> +    xen_register_default_dev = val;
> +#endif
> +}
> +
>  int xen_pci_slot_get_pirq(PCIDevice *pci_dev, int irq_num);
>  void xen_piix3_set_irq(void *opaque, int irq_num, int level);
>  void xen_piix_pci_write_config_client(uint32_t address, uint32_t val, int len);
> diff --git a/xen-all.c b/xen-all.c
> index 485c312..afa9bcc 100644
> --- a/xen-all.c
> +++ b/xen-all.c
> @@ -39,6 +39,10 @@ static MemoryRegion *framebuffer;
>  static unsigned int serverid;
>  static uint32_t xen_dmid = ~0;
>  
> +/* Used to tell whether we register pci/mmio/pio of default devices */
> +int xen_register_default_dev = 0;
> +static int xen_emulate_default_dev = 1;
>
>  /* Compatibility with older version */
>  #if __XEN_LATEST_INTERFACE_VERSION__ < 0x0003020a
>  static inline uint32_t xen_vcpu_eport(shared_iopage_t *shared_page, int i)
> @@ -176,6 +180,10 @@ void xen_piix3_set_irq(void *opaque, int irq_num, int level)
>  
>  int xen_register_pcidev(PCIDevice *pci_dev)
>  {
> +    if (xen_register_default_dev && !xen_emulate_default_dev) {
> +        return 0;
> +    }
>
>      DPRINTF("register pci %x:%x.%x %s\n", 0, (pci_dev->devfn >> 3) & 0x1f,
>              pci_dev->devfn & 0x7, pci_dev->name);
>  
> @@ -214,6 +222,18 @@ static void xen_map_iorange(target_phys_addr_t addr, uint64_t size,
>          return;
>      }
>  
> +    /* Handle the registration of all default io ranges */
> +    if (xen_register_default_dev) {
> +        /* Register ps/2 only if we emulate VGA */
> +        if (!strcmp(name, "i8042-data") || !strcmp(name, "i8042-cmd")) {
> +            if (display_type == DT_NOGRAPHIC) {
> +                return;
> +            }
> +        } else if (!xen_emulate_default_dev && strcmp(name, "serial")) {
> +            return;
> +        }
> +    }

It seems to me that you are acting upon xen_register_default_dev only in
xen_map_iorange and xen_register_pcidev, so it shouldn't be difficult to
have a real list of "default devices" instead.


>      DPRINTF("map %s %s 0x"TARGET_FMT_plx" - 0x"TARGET_FMT_plx"\n",
>              (is_mmio) ? "mmio" : "io", name, addr, addr + size - 1);
>  
> @@ -1300,6 +1320,8 @@ int xen_hvm_init(void)
>      if (!QTAILQ_EMPTY(&list->head)) {
>          xen_dmid = qemu_opt_get_number(QTAILQ_FIRST(&list->head),
>                                         "xen_dmid", ~0);
> +        xen_emulate_default_dev = qemu_opt_get_bool(QTAILQ_FIRST(&list->head),
> +                                                    "xen_default_dev", 1);
>      }
>  
>      state = g_malloc0(sizeof (XenIOState));
> @@ -1395,9 +1417,12 @@ int xen_hvm_init(void)
>          fprintf(stderr, "%s: xen backend core setup failed\n", __FUNCTION__);
>          exit(1);
>      }
> -    xen_be_register("console", &xen_console_ops);
> -    xen_be_register("vkbd", &xen_kbdmouse_ops);
>      xen_be_register("qdisk", &xen_blkdev_ops);
> +
> +    if (xen_emulate_default_dev) {
> +        xen_be_register("console", &xen_console_ops);
> +        xen_be_register("vkbd", &xen_kbdmouse_ops);
> +    }
>      xen_read_physmap(state);


From xen-devel-bounces@lists.xen.org Thu Aug 23 14:43:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 14:43:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4YdK-0003O0-Hd; Thu, 23 Aug 2012 14:43:42 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T4YdI-0003Ng-M9
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 14:43:40 +0000
Received: from [85.158.138.51:46035] by server-3.bemta-3.messagelabs.com id
	46/C7-13809-B9146305; Thu, 23 Aug 2012 14:43:39 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-14.tower-174.messagelabs.com!1345733017!21060496!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA0MjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24918 invoked from network); 23 Aug 2012 14:43:39 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-14.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 14:43:39 -0000
X-IronPort-AV: E=Sophos;i="4.80,300,1344211200"; d="scan'208";a="14149542"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	23 Aug 2012 14:43:01 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 23 Aug 2012 15:43:01 +0100
Date: Thu, 23 Aug 2012 15:42:38 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Julien Grall <julien.grall@citrix.com>
In-Reply-To: <0358823c4700d4802235bc5790d78967053bc164.1345637459.git.julien.grall@citrix.com>
Message-ID: <alpine.DEB.2.02.1208231534390.15568@kaball.uk.xensource.com>
References: <cover.1345637459.git.julien.grall@citrix.com>
	<0358823c4700d4802235bc5790d78967053bc164.1345637459.git.julien.grall@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "christian.limpach@gmail.com" <christian.limpach@gmail.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [QEMU][RFC V2 08/10] xen: audio is not a part of
 default devices
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 22 Aug 2012, Julien Grall wrote:
> Signed-off-by: Julien Grall <julien.grall@citrix.com>
> ---
>  arch_init.c |    6 ++++++
>  1 files changed, 6 insertions(+), 0 deletions(-)
> 
> diff --git a/arch_init.c b/arch_init.c
> index 9b46bfc..1077b16 100644
> --- a/arch_init.c
> +++ b/arch_init.c
> @@ -44,6 +44,7 @@
>  #include "exec-memory.h"
>  #include "hw/pcspk.h"
>  #include "qemu/page_cache.h"
> +#include "hw/xen.h"
>  
>  #ifdef DEBUG_ARCH_INIT
>  #define DPRINTF(fmt, ...) \
> @@ -976,6 +977,9 @@ void select_soundhw(const char *optarg)
>  void audio_init(ISABus *isa_bus, PCIBus *pci_bus)
>  {
>      struct soundhw *c;
> +    int register_default_dev;
> +
> +    xen_set_register_default_dev(0, &register_default_dev);
>  
>      for (c = soundhw; c->name; ++c) {
>          if (c->enabled) {
> @@ -990,6 +994,8 @@ void audio_init(ISABus *isa_bus, PCIBus *pci_bus)
>              }
>          }
>      }
> +
> +    xen_set_register_default_dev(register_default_dev, NULL);
>  }
>  #else
>  void select_soundhw(const char *optarg)

And this is why it is better to have a list rather than a stateful
register_default_dev integer: this stuff is really easy to break.


From xen-devel-bounces@lists.xen.org Thu Aug 23 14:43:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 14:43:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4YdR-0003Pc-Ut; Thu, 23 Aug 2012 14:43:49 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T4YdQ-0003PG-Gx
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 14:43:48 +0000
Received: from [85.158.138.51:46972] by server-2.bemta-3.messagelabs.com id
	55/C1-09157-3A146305; Thu, 23 Aug 2012 14:43:47 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-8.tower-174.messagelabs.com!1345733025!27384522!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA0MjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6576 invoked from network); 23 Aug 2012 14:43:45 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 14:43:45 -0000
X-IronPort-AV: E=Sophos;i="4.80,300,1344211200"; d="scan'208";a="14149558"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	23 Aug 2012 14:43:45 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 23 Aug 2012 15:43:45 +0100
Date: Thu, 23 Aug 2012 15:43:22 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Julien Grall <julien.grall@citrix.com>
In-Reply-To: <2fcc0dde3e9cf58a4cc65504d28c9c599296c4fd.1345637459.git.julien.grall@citrix.com>
Message-ID: <alpine.DEB.2.02.1208231535490.15568@kaball.uk.xensource.com>
References: <cover.1345637459.git.julien.grall@citrix.com>
	<2fcc0dde3e9cf58a4cc65504d28c9c599296c4fd.1345637459.git.julien.grall@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "christian.limpach@gmail.com" <christian.limpach@gmail.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [QEMU][RFC V2 10/10] xen: emulate IDE outside
 default device set
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 22 Aug 2012, Julien Grall wrote:
> IDE can be emulated in a different QEMU than the default.
> 
> This patch also fixes ide_get_geometry. When QEMU didn't emulate IDE,
                                                     ^doesn't
> it try to derefence a NULL bus.
      ^tries to dereference


> Signed-off-by: Julien Grall <julien.grall@citrix.com>
> ---
>  hw/ide/qdev.c |    8 +++++++-
>  hw/pc_piix.c  |   38 ++++++++++++++++++++++++--------------
>  hw/xen.h      |   10 ++++++++++
>  xen-all.c     |    8 +++++++-
>  4 files changed, 48 insertions(+), 16 deletions(-)
> 
> diff --git a/hw/ide/qdev.c b/hw/ide/qdev.c
> index 5ea9b8f..473acd7 100644
> --- a/hw/ide/qdev.c
> +++ b/hw/ide/qdev.c
> @@ -115,7 +115,13 @@ IDEDevice *ide_create_drive(IDEBus *bus, int unit, DriveInfo *drive)
>  int ide_get_geometry(BusState *bus, int unit,
>                       int16_t *cyls, int8_t *heads, int8_t *secs)
>  {
> -    IDEState *s = &DO_UPCAST(IDEBus, qbus, bus)->ifs[unit];
> +    IDEState *s = NULL;
> +
> +    if (!bus) {
> +        return -1;
> +    }
> +
> +    s = &DO_UPCAST(IDEBus, qbus, bus)->ifs[unit];
>  
>      if (s->drive_kind != IDE_HD || !s->bs) {
>          return -1;
> diff --git a/hw/pc_piix.c b/hw/pc_piix.c
> index 6cb0a2a..b904100 100644
> --- a/hw/pc_piix.c
> +++ b/hw/pc_piix.c
> @@ -148,6 +148,7 @@ static void pc_init1(MemoryRegion *system_memory,
>      MemoryRegion *pci_memory;
>      MemoryRegion *rom_memory;
>      void *fw_cfg = NULL;
> +    int register_default_dev = 0;
>  
>      pc_cpus_init(cpu_model);
>  
> @@ -242,23 +243,32 @@ static void pc_init1(MemoryRegion *system_memory,
>              pci_nic_init_nofail(nd, "e1000", NULL);
>      }
>  
> -    ide_drive_get(hd, MAX_IDE_BUS);
> -    if (pci_enabled) {
> -        PCIDevice *dev;
> -        if (xen_enabled()) {
> -            dev = pci_piix3_xen_ide_init(pci_bus, hd, piix3_devfn + 1);
> +    if (!xen_enabled() || xen_is_emulated_ide()) {
> +        xen_set_register_default_dev(0, &register_default_dev);
> +        ide_drive_get(hd, MAX_IDE_BUS);
> +        if (pci_enabled) {
> +            PCIDevice *dev;
> +            if (xen_enabled()) {
> +                dev = pci_piix3_xen_ide_init(pci_bus, hd, piix3_devfn + 1);
> +            } else {
> +                dev = pci_piix3_ide_init(pci_bus, hd, piix3_devfn + 1);
> +            }
> +            idebus[0] = qdev_get_child_bus(&dev->qdev, "ide.0");
> +            idebus[1] = qdev_get_child_bus(&dev->qdev, "ide.1");
>          } else {
> -            dev = pci_piix3_ide_init(pci_bus, hd, piix3_devfn + 1);
> +            for (i = 0; i < MAX_IDE_BUS; i++) {
> +                ISADevice *dev;
> +                dev = isa_ide_init(isa_bus, ide_iobase[i], ide_iobase2[i],
> +                                   ide_irq[i],
> +                                   hd[MAX_IDE_DEVS * i],
> +                                   hd[MAX_IDE_DEVS * i + 1]);
> +                idebus[i] = qdev_get_child_bus(&dev->qdev, "ide.0");
> +            }
>          }
> -        idebus[0] = qdev_get_child_bus(&dev->qdev, "ide.0");
> -        idebus[1] = qdev_get_child_bus(&dev->qdev, "ide.1");
> +        xen_set_register_default_dev(register_default_dev, NULL);
>      } else {
> -        for(i = 0; i < MAX_IDE_BUS; i++) {
> -            ISADevice *dev;
> -            dev = isa_ide_init(isa_bus, ide_iobase[i], ide_iobase2[i],
> -                               ide_irq[i],
> -                               hd[MAX_IDE_DEVS * i], hd[MAX_IDE_DEVS * i + 1]);
> -            idebus[i] = qdev_get_child_bus(&dev->qdev, "ide.0");
> +        for (i = 0; i < MAX_IDE_BUS; i++) {
> +            idebus[i] = NULL;
>          }
>      }

I think it would be better to have a non-Xen-specific option to
enable/disable IDE emulation, something like hpet/no_hpet.


> diff --git a/hw/xen.h b/hw/xen.h
> index 3c8724f..3c89fb9 100644
> --- a/hw/xen.h
> +++ b/hw/xen.h
> @@ -22,6 +22,7 @@ extern enum xen_mode xen_mode;
>  
>  extern int xen_allowed;
>  extern int xen_register_default_dev;
> +extern int xen_emulate_ide;
>  
>  static inline int xen_enabled(void)
>  {
> @@ -32,6 +33,15 @@ static inline int xen_enabled(void)
>  #endif
>  }
>  
> +static inline int xen_is_emulated_ide(void)
> +{
> +#if defined(CONFIG_XEN)
> +    return xen_emulate_ide;
> +#else
> +    return 1;
> +#endif
> +}
> +
>  static inline int xen_is_registered_default_dev(void)
>  {
>  #if defined(CONFIG_XEN)
> diff --git a/xen-all.c b/xen-all.c
> index f424cce..f091908 100644
> --- a/xen-all.c
> +++ b/xen-all.c
> @@ -43,6 +43,8 @@ static uint32_t xen_dmid = ~0;
>  int xen_register_default_dev = 0;
>  static int xen_emulate_default_dev = 1;
>  
> +int xen_emulate_ide = 0;
> +
>  /* Compatibility with older version */
>  #if __XEN_LATEST_INTERFACE_VERSION__ < 0x0003020a
>  static inline uint32_t xen_vcpu_eport(shared_iopage_t *shared_page, int i)
> @@ -1342,6 +1344,8 @@ int xen_hvm_init(void)
>                                         "xen_dmid", ~0);
>          xen_emulate_default_dev = qemu_opt_get_bool(QTAILQ_FIRST(&list->head),
>                                                      "xen_default_dev", 1);
> +        xen_emulate_ide = qemu_opt_get_bool(QTAILQ_FIRST(&list->head),
> +                                            "xen_emulate_ide", 1);
>      }
>  
>      state = g_malloc0(sizeof (XenIOState));
> @@ -1437,12 +1441,14 @@ int xen_hvm_init(void)
>          fprintf(stderr, "%s: xen backend core setup failed\n", __FUNCTION__);
>          exit(1);
>      }
> -    xen_be_register("qdisk", &xen_blkdev_ops);
>  
>      if (xen_emulate_default_dev) {
>          xen_be_register("console", &xen_console_ops);
>          xen_be_register("vkbd", &xen_kbdmouse_ops);
>      }
> +    if (xen_emulate_ide) {
> +        xen_be_register("qdisk", &xen_blkdev_ops);
> +    }
>      xen_read_physmap(state);
>  
>      return 0;
> -- 
> Julien Grall
> 


From xen-devel-bounces@lists.xen.org Thu Aug 23 14:43:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 14:43:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4YdR-0003Pc-Ut; Thu, 23 Aug 2012 14:43:49 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T4YdQ-0003PG-Gx
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 14:43:48 +0000
Received: from [85.158.138.51:46972] by server-2.bemta-3.messagelabs.com id
	55/C1-09157-3A146305; Thu, 23 Aug 2012 14:43:47 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-8.tower-174.messagelabs.com!1345733025!27384522!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA0MjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6576 invoked from network); 23 Aug 2012 14:43:45 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 14:43:45 -0000
X-IronPort-AV: E=Sophos;i="4.80,300,1344211200"; d="scan'208";a="14149558"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	23 Aug 2012 14:43:45 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 23 Aug 2012 15:43:45 +0100
Date: Thu, 23 Aug 2012 15:43:22 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Julien Grall <julien.grall@citrix.com>
In-Reply-To: <2fcc0dde3e9cf58a4cc65504d28c9c599296c4fd.1345637459.git.julien.grall@citrix.com>
Message-ID: <alpine.DEB.2.02.1208231535490.15568@kaball.uk.xensource.com>
References: <cover.1345637459.git.julien.grall@citrix.com>
	<2fcc0dde3e9cf58a4cc65504d28c9c599296c4fd.1345637459.git.julien.grall@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "christian.limpach@gmail.com" <christian.limpach@gmail.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [QEMU][RFC V2 10/10] xen: emulate IDE outside
 default device set
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 22 Aug 2012, Julien Grall wrote:
> IDE can be emulate in a different QEMU that the default.
> 
> This patch also fixes ide_get_geometry. When QEMU didn't emulate IDE,
                                                     ^doesn't
> it try to derefence a NULL bus.
      ^tries to dereference


> Signed-off-by: Julien Grall <julien.grall@citrix.com>
> ---
>  hw/ide/qdev.c |    8 +++++++-
>  hw/pc_piix.c  |   38 ++++++++++++++++++++++++--------------
>  hw/xen.h      |   10 ++++++++++
>  xen-all.c     |    8 +++++++-
>  4 files changed, 48 insertions(+), 16 deletions(-)
> 
> diff --git a/hw/ide/qdev.c b/hw/ide/qdev.c
> index 5ea9b8f..473acd7 100644
> --- a/hw/ide/qdev.c
> +++ b/hw/ide/qdev.c
> @@ -115,7 +115,13 @@ IDEDevice *ide_create_drive(IDEBus *bus, int unit, DriveInfo *drive)
>  int ide_get_geometry(BusState *bus, int unit,
>                       int16_t *cyls, int8_t *heads, int8_t *secs)
>  {
> -    IDEState *s = &DO_UPCAST(IDEBus, qbus, bus)->ifs[unit];
> +    IDEState *s = NULL;
> +
> +    if (!bus) {
> +        return -1;
> +    }
> +
> +    s = &DO_UPCAST(IDEBus, qbus, bus)->ifs[unit];
>  
>      if (s->drive_kind != IDE_HD || !s->bs) {
>          return -1;
> diff --git a/hw/pc_piix.c b/hw/pc_piix.c
> index 6cb0a2a..b904100 100644
> --- a/hw/pc_piix.c
> +++ b/hw/pc_piix.c
> @@ -148,6 +148,7 @@ static void pc_init1(MemoryRegion *system_memory,
>      MemoryRegion *pci_memory;
>      MemoryRegion *rom_memory;
>      void *fw_cfg = NULL;
> +    int register_default_dev = 0;
>  
>      pc_cpus_init(cpu_model);
>  
> @@ -242,23 +243,32 @@ static void pc_init1(MemoryRegion *system_memory,
>              pci_nic_init_nofail(nd, "e1000", NULL);
>      }
>  
> -    ide_drive_get(hd, MAX_IDE_BUS);
> -    if (pci_enabled) {
> -        PCIDevice *dev;
> -        if (xen_enabled()) {
> -            dev = pci_piix3_xen_ide_init(pci_bus, hd, piix3_devfn + 1);
> +    if (!xen_enabled() || xen_is_emulated_ide()) {
> +        xen_set_register_default_dev(0, &register_default_dev);
> +        ide_drive_get(hd, MAX_IDE_BUS);
> +        if (pci_enabled) {
> +            PCIDevice *dev;
> +            if (xen_enabled()) {
> +                dev = pci_piix3_xen_ide_init(pci_bus, hd, piix3_devfn + 1);
> +            } else {
> +                dev = pci_piix3_ide_init(pci_bus, hd, piix3_devfn + 1);
> +            }
> +            idebus[0] = qdev_get_child_bus(&dev->qdev, "ide.0");
> +            idebus[1] = qdev_get_child_bus(&dev->qdev, "ide.1");
>          } else {
> -            dev = pci_piix3_ide_init(pci_bus, hd, piix3_devfn + 1);
> +            for (i = 0; i < MAX_IDE_BUS; i++) {
> +                ISADevice *dev;
> +                dev = isa_ide_init(isa_bus, ide_iobase[i], ide_iobase2[i],
> +                                   ide_irq[i],
> +                                   hd[MAX_IDE_DEVS * i],
> +                                   hd[MAX_IDE_DEVS * i + 1]);
> +                idebus[i] = qdev_get_child_bus(&dev->qdev, "ide.0");
> +            }
>          }
> -        idebus[0] = qdev_get_child_bus(&dev->qdev, "ide.0");
> -        idebus[1] = qdev_get_child_bus(&dev->qdev, "ide.1");
> +        xen_set_register_default_dev(register_default_dev, NULL);
>      } else {
> -        for(i = 0; i < MAX_IDE_BUS; i++) {
> -            ISADevice *dev;
> -            dev = isa_ide_init(isa_bus, ide_iobase[i], ide_iobase2[i],
> -                               ide_irq[i],
> -                               hd[MAX_IDE_DEVS * i], hd[MAX_IDE_DEVS * i + 1]);
> -            idebus[i] = qdev_get_child_bus(&dev->qdev, "ide.0");
> +        for (i = 0; i < MAX_IDE_BUS; i++) {
> +            idebus[i] = NULL;
>          }
>      }

I think it would be better to have a non-Xen-specific option to
enable/disable IDE emulation, something like hpet/no_hpet.
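
A minimal sketch of what such a machine-generic toggle could look like, modeled on the hpet/no_hpet pattern. The names (machine_emulate_ide, ide_emulation_enabled, handle_no_ide_option) are illustrative assumptions, not actual QEMU symbols:

```c
#include <assert.h>

/* Hypothetical global flag, analogous to QEMU's no_hpet handling:
 * defaults to emulating IDE, cleared by a "no-ide"-style option. */
static int machine_emulate_ide = 1;

/* Accessor mirroring the xen_is_emulated_ide() style from the patch,
 * but without any Xen-specific dependency. */
static inline int ide_emulation_enabled(void)
{
    return machine_emulate_ide;
}

/* Option parsing would simply clear the flag, after which pc_init1()
 * could skip IDE controller creation for any accelerator, not just Xen. */
static void handle_no_ide_option(void)
{
    machine_emulate_ide = 0;
}
```

This keeps the accelerator-neutral policy in one place, so pc_init1() would only need a single ide_emulation_enabled() check instead of Xen-specific branching.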


> diff --git a/hw/xen.h b/hw/xen.h
> index 3c8724f..3c89fb9 100644
> --- a/hw/xen.h
> +++ b/hw/xen.h
> @@ -22,6 +22,7 @@ extern enum xen_mode xen_mode;
>  
>  extern int xen_allowed;
>  extern int xen_register_default_dev;
> +extern int xen_emulate_ide;
>  
>  static inline int xen_enabled(void)
>  {
> @@ -32,6 +33,15 @@ static inline int xen_enabled(void)
>  #endif
>  }
>  
> +static inline int xen_is_emulated_ide(void)
> +{
> +#if defined(CONFIG_XEN)
> +    return xen_emulate_ide;
> +#else
> +    return 1;
> +#endif
> +}
> +
>  static inline int xen_is_registered_default_dev(void)
>  {
>  #if defined(CONFIG_XEN)
> diff --git a/xen-all.c b/xen-all.c
> index f424cce..f091908 100644
> --- a/xen-all.c
> +++ b/xen-all.c
> @@ -43,6 +43,8 @@ static uint32_t xen_dmid = ~0;
>  int xen_register_default_dev = 0;
>  static int xen_emulate_default_dev = 1;
>  
> +int xen_emulate_ide = 0;
> +
>  /* Compatibility with older version */
>  #if __XEN_LATEST_INTERFACE_VERSION__ < 0x0003020a
>  static inline uint32_t xen_vcpu_eport(shared_iopage_t *shared_page, int i)
> @@ -1342,6 +1344,8 @@ int xen_hvm_init(void)
>                                         "xen_dmid", ~0);
>          xen_emulate_default_dev = qemu_opt_get_bool(QTAILQ_FIRST(&list->head),
>                                                      "xen_default_dev", 1);
> +        xen_emulate_ide = qemu_opt_get_bool(QTAILQ_FIRST(&list->head),
> +                                            "xen_emulate_ide", 1);
>      }
>  
>      state = g_malloc0(sizeof (XenIOState));
> @@ -1437,12 +1441,14 @@ int xen_hvm_init(void)
>          fprintf(stderr, "%s: xen backend core setup failed\n", __FUNCTION__);
>          exit(1);
>      }
> -    xen_be_register("qdisk", &xen_blkdev_ops);
>  
>      if (xen_emulate_default_dev) {
>          xen_be_register("console", &xen_console_ops);
>          xen_be_register("vkbd", &xen_kbdmouse_ops);
>      }
> +    if (xen_emulate_ide) {
> +        xen_be_register("qdisk", &xen_blkdev_ops);
> +    }
>      xen_read_physmap(state);
>  
>      return 0;
> -- 
> Julien Grall
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 14:45:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 14:45:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4YfK-0003k1-GK; Thu, 23 Aug 2012 14:45:46 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T4YfJ-0003jm-MU
	for xen-devel@lists.xensource.com; Thu, 23 Aug 2012 14:45:45 +0000
Received: from [85.158.143.99:18094] by server-1.bemta-4.messagelabs.com id
	C2/2D-12504-91246305; Thu, 23 Aug 2012 14:45:45 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1345733123!27012928!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNzA3ODk4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12767 invoked from network); 23 Aug 2012 14:45:28 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-3.tower-216.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 23 Aug 2012 14:45:28 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7NEjG64032250
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 23 Aug 2012 14:45:17 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7NEjGTZ012947
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 23 Aug 2012 14:45:16 GMT
Received: from abhmt110.oracle.com (abhmt110.oracle.com [141.146.116.62])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7NEjFwM015537; Thu, 23 Aug 2012 09:45:15 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 23 Aug 2012 07:45:15 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id AEE574031E; Thu, 23 Aug 2012 10:35:12 -0400 (EDT)
Date: Thu, 23 Aug 2012 10:35:12 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: David Vrabel <david.vrabel@citrix.com>
Message-ID: <20120823143512.GI26098@phenom.dumpdata.com>
References: <20120620145127.GD12787@phenom.dumpdata.com>
	<4FE32DDA.10204@amd.com>
	<20120630014825.GA7003@phenom.dumpdata.com>
	<20120630021936.GA27100@phenom.dumpdata.com>
	<50114002.2030700@amd.com>
	<20120817205207.GA3002@phenom.dumpdata.com>
	<503602A0.9040908@amd.com> <50360465.3090602@citrix.com>
	<20120823141055.GH26098@phenom.dumpdata.com>
	<50363FEF.10001@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50363FEF.10001@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Andre Przywara <andre.przywara@amd.com>,
	Jeremy Fitzhardinge <jeremy@goop.org>,
	xen-devel <xen-devel@lists.xensource.com>, Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] acpidump crashes on some machines
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Aug 23, 2012 at 03:36:31PM +0100, David Vrabel wrote:
> On 23/08/12 15:10, Konrad Rzeszutek Wilk wrote:
> > On Thu, Aug 23, 2012 at 11:22:29AM +0100, David Vrabel wrote:
> >> On 23/08/12 11:14, Andre Przywara wrote:
> >>> On 08/17/2012 10:52 PM, Konrad Rzeszutek Wilk wrote:
> >>>> On Thu, Jul 26, 2012 at 03:02:58PM +0200, Andre Przywara wrote:
> >>>>> On 06/30/2012 04:19 AM, Konrad Rzeszutek Wilk wrote:
> >>>>>
> >>>>> Konrad, David,
> >>>>>
> >>>>> back on track for this issue. Thanks for your input, I could do some
> >>>>> more debugging (see below for a refresh):
> >>>>>
> >>>>> It seems like it affects only the first page of the 1:1 mapping. I
> >>>>> didn't have any issues with the last PFN or the page behind it (which
> >>>>> failed properly).
> >>>>>
> >>>>> David, thanks for the hint about varying the dom0_mem parameter.  I
> >>>>> thought I had already checked this, but I did it once again, and it
> >>>>> turned out that it is only an issue if dom0_mem is smaller than the
> >>>>> ACPI area, which generates a hole in the memory map. So we have
> >>>>> (simplified)
> >>>>> * 1:1 mapping to 1 MB
> >>>>> * normal mapping till dom0_mem
> >>>>> * unmapped area till ACPI E820 area
> >>>>> * ACPI E820 1:1 mapping
> >>>>>
> >>>>> As far as I could chase it down the 1:1 mapping itself looks OK, I
> >>>>> couldn't find any off-by-one bugs here. So maybe it is code that
> >>>>> later on invalidates areas between the normal guest mapping and the
> >>>>> ACPI mem?
> >>>>
> >>>> I think I found it. Can you try this pls [and if you can't find
> >>>> early_to_phys.. just use the __set_phys_to call]
> >>>
> >>> Yes, that works. At least after a quick test on my test box. Both the 
> >>> test module and acpidump work as expected. If I replace the "<" in your 
> >>> patch with the original "<=", I get the warning (and due to the 
> >>> "continue" it also works).
> >>
> >> Note that the balloon driver could subsequently overwrite the p2m entry.
> > 
> > Hmm, I am not seeing how.. the region that is passed in is right up to
> > the PFN (I believe). And I did run with this patch over a couple of days
> > with ballooning up and down. But maybe I missed something?
> 
> Hrrm.  I was sure I wrote "Note that the balloon driver could
> subsequently overwrite the p2m entry /if/ this warning is triggered."
> but it seems I did not. :/
> 
> i.e., if the warning is triggered, the xen_extra_mem region will be
> incorrectly sized and the balloon driver will make use of the incorrect
> region.

Ah, that makes more sense. Yes, we would do the overwriting part later on
in that scenario, which makes me wonder: if we did that in the past,
how come MMIO devices still worked? Some boxes have the gap/MMIO right
at the edge of the E820_RAM; perhaps they were silently coping with it
and we just never noticed.
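
To make the boundary bug being discussed concrete, here is an illustrative sketch (not the actual kernel code; mark_identity and its parameters are hypothetical) of why "<=" versus "<" matters when walking a PFN range [start, end):

```c
#include <assert.h>

/* Counts how many PFNs in [start, end) would be marked identity-mapped.
 * With off_by_one set, the loop uses <= and touches one PFN past the
 * region, which is the kind of overwrite of the first page after the
 * hole that the thread describes. */
static unsigned long mark_identity(unsigned long start, unsigned long end,
                                   int off_by_one)
{
    unsigned long pfn, count = 0;

    if (off_by_one) {
        for (pfn = start; pfn <= end; pfn++)  /* buggy: also touches 'end' */
            count++;
    } else {
        for (pfn = start; pfn < end; pfn++)   /* fixed: stops before 'end' */
            count++;
    }
    return count;
}
```

With a half-open range of 0x100 PFNs, the buggy variant processes 0x101 entries, clobbering the p2m entry immediately after the region.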
> 
> David
> 
> > Let me prep a patch that adds some more checks in the balloon driver
> > just in case we do hit this.
> > 
> >>  I don't think it is worth redoing the patch to adjust the region passed
> >> to the balloon driver to avoid this though.
> >>
> >>> I also successfully tested the minimal fix (just replacing <= with <).
> >>> I will feed it to the testers here to cover more machines.
> >>>
> >>> Do you want to keep the warnings in (which exceed 80 characters, btw)?
> >>
> >> I think we do.
> >>
> >> David


From xen-devel-bounces@lists.xen.org Thu Aug 23 15:11:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 15:11:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4Z49-0004Or-M3; Thu, 23 Aug 2012 15:11:25 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1T4Z48-0004OW-Ig
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 15:11:24 +0000
Received: from [85.158.139.83:28932] by server-7.bemta-5.messagelabs.com id
	07/1F-32634-B1846305; Thu, 23 Aug 2012 15:11:23 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-9.tower-182.messagelabs.com!1345734682!26840903!1
X-Originating-IP: [74.125.83.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_16,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3701 invoked from network); 23 Aug 2012 15:11:22 -0000
Received: from mail-ee0-f45.google.com (HELO mail-ee0-f45.google.com)
	(74.125.83.45)
	by server-9.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 15:11:22 -0000
Received: by eeke53 with SMTP id e53so342756eek.32
	for <multiple recipients>; Thu, 23 Aug 2012 08:11:21 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:date:x-google-sender-auth:message-id:subject
	:from:to:content-type;
	bh=OI/WpNeTyPLYFeMpUHy05/hCsCEfPI2mbNnvJ7LTdZU=;
	b=YFnEJo/jESXlNEulzyp5p0oEaWIOQ8oB6ykVQS9lb318+eKCEkVodAyWuJEByB+IfK
	owExaD6L0iMVFu3L0my/8yMUTZn2H3Ta6yYjyU6ogUhI6QBuXEFiSbWk3xK82hx+Iq/3
	YMlzy3QEQeNPTjfKw+Mhrtc+Ai/CBIf2pN/lGWq4IRPdqbEpvV/rfKMdHvFkE9oKKC9f
	X93SQaWO3MO57s67K5PmPY4lSjukB1MZEJjT0LfbhItW4nq66K8PebzOwXVXTLfb//ZF
	tEo2uFbm7XCqe3rLm653Jc6PlO0JI3Egd1rAs4dm1ZGOwk9awTXKY6wro223gvp0WJxi
	fUDQ==
MIME-Version: 1.0
Received: by 10.14.212.72 with SMTP id x48mr2390865eeo.40.1345734681781; Thu,
	23 Aug 2012 08:11:21 -0700 (PDT)
Received: by 10.14.215.195 with HTTP; Thu, 23 Aug 2012 08:11:21 -0700 (PDT)
Date: Thu, 23 Aug 2012 16:11:21 +0100
X-Google-Sender-Auth: 2KAmhZTWwAJqtmqKrvlFvJHt3Xw
Message-ID: <CAFLBxZYgaMgh6s3on6z7jnFm+u0xzx54jsQ-cxnVT4KuZkzZTw@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: xen-devel@lists.xen.org, xen-users@lists.xen.org, 
	xen-announce@lists.xen.org
Subject: [Xen-devel] Disclosure process poll results
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Monday we closed the poll [1] for the security discussion.  Thank you
everyone who participated!  The process has not turned up a hidden
option that everyone agreed on; however, it has helped find what I
hope will be a "median" option which best addresses the concerns and
desires of the community as a whole.  Below I give a description of
the results of the poll, and a recommendation as to what I think is
the best way forward.

You may wish to read this analysis on the Xen.org blog instead; the
text is the same, but it includes inline graphs of the results as
well.  You can find the blog here:

http://blog.xen.org/index.php/2012/08/23/disclosure-process-poll-results/

= Analysis =

We had 33 votes, from what seems like a good cross-section of the
community.  This includes four committers and eight regular contributors.
It also included representatives from three large service providers
and a number of smaller service providers, as well as a number of
individual users.  Only four of the votes were anonymous.

Because the goal of the poll was not a formal decision, but to find
out what people in the community thought, I not only looked at the
numbers for each response, but who voted for each one, and how they
voted on the other options, to understand the data.  I don't include
the names in the summary here, but I will make the raw data (minus a
couple of e-mails that people asked not to be published) available to
anyone who e-mails me.

The votes are listed numerically as:
Excellent / Happy / Unhappy / Terrible / No opinion

== No pre-disclosure ==

Description: People are brought in only to help produce a fix.  The
fix is released to everyone publicly when it's ready (or, if the
discloser has asked for a longer embargo period, when that embargo
period is up).

Votes: 3 / 5 / 6 / 17 / 2
Graph: http://blog.xen.org/wp-content/uploads/2012/08/no-predisclosure.png

This option has little support, and lots of opposition, including
several core developers.  It will probably be removed from
consideration.

== Pre-disclosure only to software vendors ==

Description: Pre-disclosure list consists only of software vendors --
people who compile and ship binaries to others.  No updates may be
given to any user until the embargo period is up.

Votes: 8 / 6 / 10 / 8 / 1
Graph: http://blog.xen.org/wp-content/uploads/2012/08/software-only.png

This one is a fairly mixed bag; it has support from a few core
developers and regular contributors (along with some software
providers), but also opposition from core developers and regular
contributors (along with some service providers).  Of the people who
did not say they would argue either way, more people are unhappy than
not.  Overall a pretty unattractive option.

== Pre-disclosure to software vendors and a strict subset of users ==

Description: Privileged users will be provided with patches at the same
time as software vendors.  They will be permitted to update their
systems at any time.  Software vendors will be permitted to send code
updates to service providers who are on the pre-disclosure list.  The
subset could be the current set (i.e., based on size), or could
include some other criteria to be discussed.

Votes: 10 / 5 / 7 / 10 / 1
Graph: http://blog.xen.org/wp-content/uploads/2012/08/strict.png

Looking at just the numbers, there is an even split of advocates and
opponents.  However, when you look at the results in detail, it turns
out that it is opposed by several core developers, and supported
mainly by large service providers.  This kind of division makes it an
unattractive option.

== Pre-disclosure to software vendors and a wide set of users ==

Description: User list would have some minimal standards; but it is
likely that any genuine cloud provider would be able to get onto the
list. Users on the list will be provided with patches at the same time
as software vendors.  They will be permitted to update their systems
at any time.  Software vendors will be permitted to send code updates
to service providers who are on the pre-disclosure list.

Votes: 11 / 9 / 4 / 9 / 0
Graph: http://blog.xen.org/wp-content/uploads/2012/08/easy.png

Looking at the numbers, this looks the best: it has more "argue for"
votes than any other option, and of those who didn't say they would
argue either way, people seem the happiest with this one.

Looking at the results in detail, it looks even better.  This one has
the support of many of the core developers, and also either the
support or the acquiescence of the majority of service providers,
large and small.

Furthermore, when I looked at those who said they would "argue
against" it, it seemed clear that there are two groups who oppose it
for opposite reasons: some because they think allowing any users to
know before other users is unfair, and prefer either no pre-disclosure
or disclosure only to software providers; others (presumably) because
they think it's not restrictive enough, and favor pre-disclosure to a
smaller group.  Given the difference of opinions in the community,
having people oppose it for opposite reasons probably indicates that
this is in the "center" of community opinion.

= Moving the discussion forward =

The Xen.org governance process[2] specifies that we should try to form
a community consensus, via voting, and if that fails, that the
committers will take a vote on the issue to decide.

It seems to me that given the results above, there's really only one
option that might be able to achieve consensus, and that's
"Pre-disclosure to a wide set of users".  How I recommend we proceed,
then, is that we have a formal community vote to make that change to
the security disclosure process.  That voting process, you may recall,
involves members giving a "+1" or "-1".  Those who vote "-1" should
give an alternate solution, and are encouraged to do so.

If that vote does not turn up consensus, and there seems to be another
option that might, we will vote on that; otherwise, the committers
will vote to choose an option that they think will serve the community
the best.

[1] http://blog.xen.org/index.php/2012/08/06/security-discussion-poll/
[2] http://www.xen.org/projects/governance.html


From xen-devel-bounces@lists.xen.org Thu Aug 23 15:11:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 15:11:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4Z49-0004Or-M3; Thu, 23 Aug 2012 15:11:25 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1T4Z48-0004OW-Ig
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 15:11:24 +0000
Received: from [85.158.139.83:28932] by server-7.bemta-5.messagelabs.com id
	07/1F-32634-B1846305; Thu, 23 Aug 2012 15:11:23 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-9.tower-182.messagelabs.com!1345734682!26840903!1
X-Originating-IP: [74.125.83.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_16,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3701 invoked from network); 23 Aug 2012 15:11:22 -0000
Received: from mail-ee0-f45.google.com (HELO mail-ee0-f45.google.com)
	(74.125.83.45)
	by server-9.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 15:11:22 -0000
Received: by eeke53 with SMTP id e53so342756eek.32
	for <multiple recipients>; Thu, 23 Aug 2012 08:11:21 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:date:x-google-sender-auth:message-id:subject
	:from:to:content-type;
	bh=OI/WpNeTyPLYFeMpUHy05/hCsCEfPI2mbNnvJ7LTdZU=;
	b=YFnEJo/jESXlNEulzyp5p0oEaWIOQ8oB6ykVQS9lb318+eKCEkVodAyWuJEByB+IfK
	owExaD6L0iMVFu3L0my/8yMUTZn2H3Ta6yYjyU6ogUhI6QBuXEFiSbWk3xK82hx+Iq/3
	YMlzy3QEQeNPTjfKw+Mhrtc+Ai/CBIf2pN/lGWq4IRPdqbEpvV/rfKMdHvFkE9oKKC9f
	X93SQaWO3MO57s67K5PmPY4lSjukB1MZEJjT0LfbhItW4nq66K8PebzOwXVXTLfb//ZF
	tEo2uFbm7XCqe3rLm653Jc6PlO0JI3Egd1rAs4dm1ZGOwk9awTXKY6wro223gvp0WJxi
	fUDQ==
MIME-Version: 1.0
Received: by 10.14.212.72 with SMTP id x48mr2390865eeo.40.1345734681781; Thu,
	23 Aug 2012 08:11:21 -0700 (PDT)
Received: by 10.14.215.195 with HTTP; Thu, 23 Aug 2012 08:11:21 -0700 (PDT)
Date: Thu, 23 Aug 2012 16:11:21 +0100
X-Google-Sender-Auth: 2KAmhZTWwAJqtmqKrvlFvJHt3Xw
Message-ID: <CAFLBxZYgaMgh6s3on6z7jnFm+u0xzx54jsQ-cxnVT4KuZkzZTw@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: xen-devel@lists.xen.org, xen-users@lists.xen.org, 
	xen-announce@lists.xen.org
Subject: [Xen-devel] Disclosure process poll results
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Monday we closed the poll [1] for the security discussion.  Thank you
everyone who participated!  The process has not turned up a hidden
option that everyone agreed on; however, it has helped find what I
hope will be a "median" option which best addresses the concerns and
desires as the community as a whole.  Below I give a description of
the results of the poll, and a recommendation as to what I think is
the best way forward.

You may wish to read this analysis on the Xen.org blog instead; the
text is the same, but it includes inline graphs of the results as
well.  You can find the blog here:

http://blog.xen.org/index.php/2012/08/23/disclosure-process-poll-results/

= Analysis =

We had 33 votes, from what seems like a good cross-section of the
community.  This includes four committers, eight regular contributors.
 It also included representatives from three large service providers
and a number of smaller service providers, as well as a number of
individual users.  Only four of the votes were anonymous.

Because the goal of the poll was not a formal decision, but to find
out what people in the community thought, I looked not only at the
numbers for each response, but also at who voted for each one and how
they voted on the other options.  I don't include the names in the
summary here, but I will make the raw data (minus a couple of e-mails
that people asked not to be published) available to anyone who e-mails
me.

The votes are listed numerically as:
Excellent / Happy / Unhappy / Terrible / No opinion

== No pre-disclosure ==

Description: People are brought in only to help produce a fix.  The
fix is released to everyone publicly when it's ready (or, if the
discloser has asked for a longer embargo period, when that embargo
period is up).

Votes: 3 / 5 / 6 / 17 / 2
Graph: http://blog.xen.org/wp-content/uploads/2012/08/no-predisclosure.png

This option has little support, and lots of opposition, including
several core developers.  It will probably be removed from
consideration.

== Pre-disclosure only to software vendors ==

Description: Pre-disclosure list consists only of software vendors --
people who compile and ship binaries to others.  No updates may be
given to any user until the embargo period is up.

Votes: 8 / 6 / 10 / 8 / 1
Graph: http://blog.xen.org/wp-content/uploads/2012/08/software-only.png

This one is a fairly mixed bag; it has support from a few core
developers and regular contributors (along with some software
providers), but also opposition from core developers and regular
contributors (along with some service providers).  Of the people who
did not say they would argue either way, more are unhappy than not.
Overall a pretty unattractive option.

== Pre-disclosure to software vendors and a strict subset of users ==

Description: Privileged users will be provided with patches at the same
time as software vendors.  They will be permitted to update their
systems at any time.  Software vendors will be permitted to send code
updates to service providers who are on the pre-disclosure list.  The
subset could be the current set (i.e., based on size), or could
include some other criteria to be discussed.

Votes: 10 / 5 / 7 / 10 / 1
Graph: http://blog.xen.org/wp-content/uploads/2012/08/strict.png

Looking at just the numbers, there is an even split of advocates and
opponents.  However, when you look at the results in detail, it turns
out that it is opposed by several core developers, and supported
mainly by large service providers.  This kind of division makes it an
unattractive option.

== Pre-disclosure to software vendors and a wide set of users ==

Description: User list would have some minimal standards; but it is
likely that any genuine cloud provider would be able to get onto the
list. Users on the list will be provided with patches at the same time
as software vendors.  They will be permitted to update their systems
at any time.  Software vendors will be permitted to send code updates
to service providers who are on the pre-disclosure list.

Votes: 11 / 9 / 4 / 9 / 0
Graph: http://blog.xen.org/wp-content/uploads/2012/08/easy.png

Looking at the numbers, this looks the best: it has more "argue for"
votes than any other option, and of those who didn't say they would
argue either way, people seem the happiest with this one.

Looking at the results in detail, it looks even better.  This one has
the support of many of the core developers, and also either the
support or the acquiescence of the majority of service providers,
large and small.

Furthermore, when I looked at those who said they would "argue
against" it, it seemed clear that there are two groups who oppose it
for opposite reasons: some because they think allowing any users to
know before other users is unfair, and prefer either no pre-disclosure
or disclosure only to software providers; others (presumably) because
they think it's not restrictive enough, and favor pre-disclosure to a
smaller group.  Given the difference of opinions in the community,
having people oppose it for opposite reasons probably indicates that
this is in the "center" of community opinion.

= Moving the discussion forward =

The Xen.org governance process[2] specifies that we should try to form
a community consensus, via voting, and if that fails, that the
committers will take a vote on the issue to decide.

It seems to me that, given the results above, there's really only one
option that might be able to achieve consensus, and that's
"Pre-disclosure to a wide set of users".  My recommendation, then, is
that we hold a formal community vote to make that change to the
security disclosure process.  That voting process, you may recall,
involves members giving a "+1" or "-1"; those who vote "-1" are
encouraged to give an alternate solution.

If that vote does not produce a consensus, and there seems to be
another option that might, we will vote on that; otherwise, the
committers will vote to choose the option that they think will serve
the community best.

[1] http://blog.xen.org/index.php/2012/08/06/security-discussion-poll/
[2] http://www.xen.org/projects/governance.html

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 15:16:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 15:16:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4Z8a-00050m-UV; Thu, 23 Aug 2012 15:16:00 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <tglx@linutronix.de>) id 1T4Z8Z-00050H-4d
	for xen-devel@lists.xensource.com; Thu, 23 Aug 2012 15:15:59 +0000
X-Env-Sender: tglx@linutronix.de
X-Msg-Ref: server-14.tower-27.messagelabs.com!1345734947!1694214!1
X-Originating-IP: [62.245.132.108]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9792 invoked from network); 23 Aug 2012 15:15:47 -0000
Received: from www.linutronix.de (HELO Galois.linutronix.de) (62.245.132.108)
	by server-14.tower-27.messagelabs.com with AES256-SHA encrypted SMTP;
	23 Aug 2012 15:15:47 -0000
Received: from localhost ([127.0.0.1]) by Galois.linutronix.de with esmtps
	(TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32) (Exim 4.72)
	(envelope-from <tglx@linutronix.de>)
	id 1T4Z8A-0007Vr-UG; Thu, 23 Aug 2012 17:15:35 +0200
Date: Thu, 23 Aug 2012 17:15:33 +0200 (CEST)
From: Thomas Gleixner <tglx@linutronix.de>
To: Attilio Rao <attilio.rao@citrix.com>
In-Reply-To: <5035F7BD.6090205@citrix.com>
Message-ID: <alpine.LFD.2.02.1208231714450.2856@ionos>
References: <1345580561-8506-1-git-send-email-attilio.rao@citrix.com>
	<alpine.LFD.2.02.1208212315330.2856@ionos>
	<20120822135753.GA30964@phenom.dumpdata.com>
	<alpine.LFD.2.02.1208221618380.2856@ionos>
	<5034F0FD.8040902@citrix.com> <5035F7BD.6090205@citrix.com>
User-Agent: Alpine 2.02 (LFD 1266 2009-07-14)
MIME-Version: 1.0
X-Linutronix-Spam-Score: -1.0
X-Linutronix-Spam-Level: -
X-Linutronix-Spam-Status: No , -1.0 points, 5.0 required, ALL_TRUSTED=-1,
	SHORTCIRCUIT=-0.0001
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"x86@kernel.org" <x86@kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"mingo@redhat.com" <mingo@redhat.com>, "hpa@zytor.com" <hpa@zytor.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH v2 0/5] X86/XEN: Merge
 x86_init.paging.pagetable_setup_start and
 x86_init.paging.pagetable_setup_done setup functions and document its
 semantic
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 23 Aug 2012, Attilio Rao wrote:
> On 22/08/12 15:47, Attilio Rao wrote:
> > but I would like to repost the patch series, skipping the references to
> > PVOPS in the commit logs. I will do so right now, so please wait for
> > another patch series.
> >    
> 
> For your convenience:
> http://lkml.org/lkml/2012/8/22/450

Not very convenient. I really prefer e-mail :)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 15:23:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 15:23:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4ZG2-0005PN-Ee; Thu, 23 Aug 2012 15:23:42 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Aurelien.MILLIAT@clarte.asso.fr>) id 1T4ZG0-0005PC-NY
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 15:23:40 +0000
Received: from [85.158.143.35:17102] by server-2.bemta-4.messagelabs.com id
	9E/BC-21239-BFA46305; Thu, 23 Aug 2012 15:23:39 +0000
X-Env-Sender: Aurelien.MILLIAT@clarte.asso.fr
X-Msg-Ref: server-12.tower-21.messagelabs.com!1345735407!12545026!1
X-Originating-IP: [194.3.193.29]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5288 invoked from network); 23 Aug 2012 15:23:34 -0000
Received: from ns3.ingenierium.com (HELO Dulac.clarte.asso.fr) (194.3.193.29)
	by server-12.tower-21.messagelabs.com with AES128-SHA encrypted SMTP;
	23 Aug 2012 15:23:34 -0000
Received: from dulac.ingenierium.com ([192.168.59.1]) by dulac
	([192.168.59.1]) with mapi id 14.01.0289.001;
	Thu, 23 Aug 2012 17:23:26 +0200
From: =?iso-8859-1?Q?Aur=E9lien_MILLIAT?= <Aurelien.MILLIAT@clarte.asso.fr>
To: Konrad Rzeszutek Wilk <konrad@kernel.org>
Thread-Topic: [Xen-devel] ATI VGA passthrough and S400 Synchronization module
Thread-Index: Ac1ps8zkCPzyeRZ3RWeWIDwxqaix/gFrW5YABEg0R6AAKGBEAAAGx1yA
Date: Thu, 23 Aug 2012 15:23:25 +0000
Message-ID: <36774CA35642C143BCDE93BA0C68DC5702C0C96B@dulac>
References: <36774CA35642C143BCDE93BA0C68DC5702C0AE46@dulac>
	<20120731231246.GD32698@phenom.dumpdata.com>
	<36774CA35642C143BCDE93BA0C68DC5702C0C852@dulac>
	<20120823133635.GA10700@phenom.dumpdata.com>
In-Reply-To: <20120823133635.GA10700@phenom.dumpdata.com>
Accept-Language: fr-FR, en-US
Content-Language: fr-FR
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [192.168.54.111]
x-avast-antispam: clean, score=0
x-original-content-type: application/ms-tnef
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] ATI VGA passthrough and S400 Synchronization module
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Aug 22, 2012 at 04:23:02PM +0000, Aurélien MILLIAT wrote:
> Hi,
>
> I've been able to use the ATI FirePro S400 with the unstable version.
> I've updated my Dom0 and passed the graphics card as a secondary
> adapter.
>
> Please do not top post.

Sorry!

> What did you upgrade dom0 to?

I updated Debian 6.0.4 to 6.0.5, updated all packages, and removed
everything I had set up during my previous test.

I have continued my tests and have hit an issue with active stereo.
Previously, I had tested active stereo with the stable release.

On the unstable version, only the right eye is rendered (OGLplane and
Virtools demos).  Quad buffering is on, and everything seems to be set
up as usual for active stereo.  I've tried with and without the S400
card.

I will try to gather more information about this.
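For readers trying to reproduce this setup: passing a graphics card
through as a secondary adapter is normally done by hiding the device
from dom0 (e.g. via pciback) and listing its bus:device.function in
the guest config.  A minimal, illustrative fragment follows; it uses
Xen 4.1-era xm config syntax, the BDFs come from the lspci output
quoted later in this thread, and the option names and values are
assumptions to be checked against your toolstack's documentation.

```
# Illustrative HVM guest config fragment (Xen 4.1-era xm syntax).
# The BDFs below match the lspci output quoted in this thread;
# adjust them, and verify the option names, for your own system.
pci = [ '0f:00.0', '0f:00.1' ]  # pass both functions: VGA + HDMI audio
gfx_passthru = 0                # 0 = card used as a secondary adapter
```

Dom0 would also need to release the device to pciback first (for
example via a pciback.hide= boot parameter) so that it is assignable
before the guest starts.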

>
> I've made a quick test of Xen 4.1.2 with these updates but it still
> crashes (no matter; I will use the unstable version).
>
> Aurélien
> _______________________________________________
>
> On Tue, Jul 24, 2012 at 04:05:20PM +0000, Aurélien MILLIAT wrote:
> > Hi,
> >
> > I'm currently trying to use Xen on a graphical cluster.
> > VGA passthrough works fine and I'm able to use quad-buffering for
> > active stereo.  The last thing to do is to synchronize all the GPUs
> > in the cluster.  For this purpose I use the ATI FirePro S400, but
> > it doesn't work.
> >
> > I've seen two behaviors:
> > - When I run the lspci command on Dom0 I get:
> > 0f:00.0 VGA compatible controller: ATI Technologies Inc Device 6888
> > 0f:00.1 Audio device: ATI Technologies Inc Cypress HDMI Audio [Radeon HD 5800 Series]
> > and sometimes just:
> > 0f:00.0 VGA compatible controller: ATI Technologies Inc Device 6888
> > - When a DomU (Windows 7) is running, it's very slow (I can't do
> > anything) and it crashes after several minutes.
> > I've tried with the unstable version and I've seen the same 'lspci'
> > behavior.
> >
> > I have a couple of questions:
> > Is it possible to pass through a VGA device with this extension?
>
> I never tried. I just pass in the VGA card and don't try to pass in
> the HDMI driver.
>
> > As this is a particular use of VGA passthrough, is it planned to be
> > able to use the synchronization module?
>
> So what is the synchronization module for you? Is that the HDMI part?
>
> > Is it easy to add this feature (time cost)?
> >
> > Computers: HP Z800 workstation
> > GPU: ATI FirePro V8800
> > CPU: Intel Xeon E5640
> > MB: Intel 5520 chipset
> >
> > Xen:
> > Version 4.1.2
> > With ATI patch from
> > http://old-list-archives.xen.org/archives/html/xen-users/2011-05/msg00048.html
> >
> > Thanks,
> > Aurélien Milliat
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 15:34:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 15:34:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4ZPx-0005vo-D3; Thu, 23 Aug 2012 15:33:57 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <attilio.rao@citrix.com>) id 1T4ZPv-0005vi-Ml
	for xen-devel@lists.xensource.com; Thu, 23 Aug 2012 15:33:55 +0000
Received: from [85.158.143.35:21784] by server-3.bemta-4.messagelabs.com id
	14/EB-08232-36D46305; Thu, 23 Aug 2012 15:33:55 +0000
X-Env-Sender: attilio.rao@citrix.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1345736034!15818237!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA0MjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24208 invoked from network); 23 Aug 2012 15:33:54 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 15:33:54 -0000
X-IronPort-AV: E=Sophos;i="4.80,300,1344211200"; d="scan'208";a="14150754"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	23 Aug 2012 15:33:53 +0000
Received: from [10.80.3.221] (10.80.3.221) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 23 Aug 2012 16:33:53 +0100
Message-ID: <50364A30.2040008@citrix.com>
Date: Thu, 23 Aug 2012 16:20:16 +0100
From: Attilio Rao <attilio.rao@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US;
	rv:1.9.1.16) Gecko/20111110 Icedove/3.0.11
MIME-Version: 1.0
To: Thomas Gleixner <tglx@linutronix.de>
References: <1345580561-8506-1-git-send-email-attilio.rao@citrix.com>
	<alpine.LFD.2.02.1208212315330.2856@ionos>
	<20120822135753.GA30964@phenom.dumpdata.com>
	<alpine.LFD.2.02.1208221618380.2856@ionos>
	<5034F0FD.8040902@citrix.com> <5035F7BD.6090205@citrix.com>
	<alpine.LFD.2.02.1208231714450.2856@ionos>
In-Reply-To: <alpine.LFD.2.02.1208231714450.2856@ionos>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"x86@kernel.org" <x86@kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"mingo@redhat.com" <mingo@redhat.com>, "hpa@zytor.com" <hpa@zytor.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH v2 0/5] X86/XEN: Merge
 x86_init.paging.pagetable_setup_start and
 x86_init.paging.pagetable_setup_done setup functions and document its
 semantic
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 23/08/12 16:15, Thomas Gleixner wrote:
> On Thu, 23 Aug 2012, Attilio Rao wrote:
>    
>> On 22/08/12 15:47, Attilio Rao wrote:
>>      
>>> but I would like to repost the patch series, skipping the references to
>>> PVOPS in the commit logs. I will do so right now, so please wait for
>>> another patch series.
>>>
>>>        
>> For your convenience:
>> http://lkml.org/lkml/2012/8/22/450
>>      
> Not very convenient. I really prefer e-mail :)
>    

Actually, you are on the "To" line for that patch series.
Can you see the e-mail?

Attilio

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 15:34:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 15:34:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4ZPx-0005vo-D3; Thu, 23 Aug 2012 15:33:57 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <attilio.rao@citrix.com>) id 1T4ZPv-0005vi-Ml
	for xen-devel@lists.xensource.com; Thu, 23 Aug 2012 15:33:55 +0000
Received: from [85.158.143.35:21784] by server-3.bemta-4.messagelabs.com id
	14/EB-08232-36D46305; Thu, 23 Aug 2012 15:33:55 +0000
X-Env-Sender: attilio.rao@citrix.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1345736034!15818237!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA0MjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24208 invoked from network); 23 Aug 2012 15:33:54 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 15:33:54 -0000
X-IronPort-AV: E=Sophos;i="4.80,300,1344211200"; d="scan'208";a="14150754"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	23 Aug 2012 15:33:53 +0000
Received: from [10.80.3.221] (10.80.3.221) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 23 Aug 2012 16:33:53 +0100
Message-ID: <50364A30.2040008@citrix.com>
Date: Thu, 23 Aug 2012 16:20:16 +0100
From: Attilio Rao <attilio.rao@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US;
	rv:1.9.1.16) Gecko/20111110 Icedove/3.0.11
MIME-Version: 1.0
To: Thomas Gleixner <tglx@linutronix.de>
References: <1345580561-8506-1-git-send-email-attilio.rao@citrix.com>
	<alpine.LFD.2.02.1208212315330.2856@ionos>
	<20120822135753.GA30964@phenom.dumpdata.com>
	<alpine.LFD.2.02.1208221618380.2856@ionos>
	<5034F0FD.8040902@citrix.com> <5035F7BD.6090205@citrix.com>
	<alpine.LFD.2.02.1208231714450.2856@ionos>
In-Reply-To: <alpine.LFD.2.02.1208231714450.2856@ionos>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"x86@kernel.org" <x86@kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"mingo@redhat.com" <mingo@redhat.com>, "hpa@zytor.com" <hpa@zytor.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH v2 0/5] X86/XEN: Merge
 x86_init.paging.pagetable_setup_start and
 x86_init.paging.pagetable_setup_done setup functions and document its
 semantic
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 23/08/12 16:15, Thomas Gleixner wrote:
> On Thu, 23 Aug 2012, Attilio Rao wrote:
>    
>> On 22/08/12 15:47, Attilio Rao wrote:
>>      
>>> but I would like to repost the patch series skipping the referral to
>>> PVOPS in the commit logs, I will do so right now, so please wait for
>>> another patch series.
>>>
>>>        
>> For your convenience:
>> http://lkml.org/lkml/2012/8/22/450
>>      
> Not very convenient. I really prefer e-mail :)
>    

Actually you are in the "to" line for that patch series.
Can you see the e-mail?

Attilio

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 15:48:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 15:48:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4Ze1-0006Bp-Qu; Thu, 23 Aug 2012 15:48:29 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <tglx@linutronix.de>) id 1T4Ze0-0006Bk-5q
	for xen-devel@lists.xensource.com; Thu, 23 Aug 2012 15:48:28 +0000
Received: from [85.158.138.51:51111] by server-4.bemta-3.messagelabs.com id
	61/A8-04276-BC056305; Thu, 23 Aug 2012 15:48:27 +0000
X-Env-Sender: tglx@linutronix.de
X-Msg-Ref: server-6.tower-174.messagelabs.com!1345736906!19651674!1
X-Originating-IP: [62.245.132.108]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28705 invoked from network); 23 Aug 2012 15:48:26 -0000
Received: from www.linutronix.de (HELO Galois.linutronix.de) (62.245.132.108)
	by server-6.tower-174.messagelabs.com with AES256-SHA encrypted SMTP;
	23 Aug 2012 15:48:26 -0000
Received: from localhost ([127.0.0.1]) by Galois.linutronix.de with esmtps
	(TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32) (Exim 4.72)
	(envelope-from <tglx@linutronix.de>)
	id 1T4Zdo-0007k7-TC; Thu, 23 Aug 2012 17:48:17 +0200
Date: Thu, 23 Aug 2012 17:48:15 +0200 (CEST)
From: Thomas Gleixner <tglx@linutronix.de>
To: Attilio Rao <attilio.rao@citrix.com>
In-Reply-To: <50364A30.2040008@citrix.com>
Message-ID: <alpine.LFD.2.02.1208231747430.2856@ionos>
References: <1345580561-8506-1-git-send-email-attilio.rao@citrix.com>
	<alpine.LFD.2.02.1208212315330.2856@ionos>
	<20120822135753.GA30964@phenom.dumpdata.com>
	<alpine.LFD.2.02.1208221618380.2856@ionos>
	<5034F0FD.8040902@citrix.com> <5035F7BD.6090205@citrix.com>
	<alpine.LFD.2.02.1208231714450.2856@ionos>
	<50364A30.2040008@citrix.com>
User-Agent: Alpine 2.02 (LFD 1266 2009-07-14)
MIME-Version: 1.0
X-Linutronix-Spam-Score: -1.0
X-Linutronix-Spam-Level: -
X-Linutronix-Spam-Status: No , -1.0 points, 5.0 required, ALL_TRUSTED=-1,
	SHORTCIRCUIT=-0.0001
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"x86@kernel.org" <x86@kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"mingo@redhat.com" <mingo@redhat.com>, "hpa@zytor.com" <hpa@zytor.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH v2 0/5] X86/XEN: Merge
 x86_init.paging.pagetable_setup_start and
 x86_init.paging.pagetable_setup_done setup functions and document its
 semantic
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 23 Aug 2012, Attilio Rao wrote:

> On 23/08/12 16:15, Thomas Gleixner wrote:
> > On Thu, 23 Aug 2012, Attilio Rao wrote:
> >    
> > > On 22/08/12 15:47, Attilio Rao wrote:
> > >      
> > > > but I would like to repost the patch series skipping the referral to
> > > > PVOPS in the commit logs, I will do so right now, so please wait for
> > > > another patch series.
> > > > 
> > > >        
> > > For your convenience:
> > > http://lkml.org/lkml/2012/8/22/450
> > >      
> > Not very convenient. I really prefer e-mail :)
> >    
> 
> Actually you are in the "to" line for that patch series.

Right. So why should I look at a website?

> Can you see the e-mail?

Replied to it already :)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 15:50:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 15:50:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4Zg2-0006I3-D4; Thu, 23 Aug 2012 15:50:34 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T4Zg1-0006Hu-EW
	for xen-devel@lists.xensource.com; Thu, 23 Aug 2012 15:50:33 +0000
Received: from [85.158.143.35:63047] by server-1.bemta-4.messagelabs.com id
	C1/AC-12504-84156305; Thu, 23 Aug 2012 15:50:32 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1345737025!11845558!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNzA3ODk4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25652 invoked from network); 23 Aug 2012 15:50:31 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-7.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 23 Aug 2012 15:50:31 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7NFoGux022857
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 23 Aug 2012 15:50:17 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7NFoGLu017847
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 23 Aug 2012 15:50:16 GMT
Received: from abhmt103.oracle.com (abhmt103.oracle.com [141.146.116.55])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7NFoGbZ014729; Thu, 23 Aug 2012 10:50:16 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 23 Aug 2012 08:50:15 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 11BBF4031E; Thu, 23 Aug 2012 11:40:13 -0400 (EDT)
Date: Thu, 23 Aug 2012 11:40:13 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20120823154012.GA18845@phenom.dumpdata.com>
References: <1345133009-21941-1-git-send-email-konrad.wilk@oracle.com>
	<1345133009-21941-7-git-send-email-konrad.wilk@oracle.com>
	<alpine.DEB.2.02.1208171839590.15568@kaball.uk.xensource.com>
	<20120817174549.GA14257@phenom.dumpdata.com>
	<alpine.DEB.2.02.1208201243180.15568@kaball.uk.xensource.com>
	<1345463624.28762.67.camel@zakaz.uk.xensource.com>
	<alpine.DEB.2.02.1208201257420.15568@kaball.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.02.1208201257420.15568@kaball.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: [Xen-devel] [PATCH 06/11] xen/mmu: For 64-bit do not call
 xen_map_identity_early
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Aug 20, 2012 at 12:58:37PM +0100, Stefano Stabellini wrote:
> On Mon, 20 Aug 2012, Ian Campbell wrote:
> > On Mon, 2012-08-20 at 12:45 +0100, Stefano Stabellini wrote:
> > > On Fri, 17 Aug 2012, Konrad Rzeszutek Wilk wrote:
> > > > On Fri, Aug 17, 2012 at 06:41:23PM +0100, Stefano Stabellini wrote:
> > > > > On Thu, 16 Aug 2012, Konrad Rzeszutek Wilk wrote:
> > > > > > B/c we do not need it. During the startup the Xen provides
> > > > > > us with all the memory mapped that we need to function.
> > > > > 
> > > > > Shouldn't we check to make sure that is actually true (I am thinking at
> > > > > nr_pt_frames)?
> > > > 
> > > > I was looking at the source code (hypervisor) to figure it out and
> > > > that is certainly true.
> > > > 
> > > > 
> > > > > Or is it actually stated somewhere in the Xen headers?
> > > > 
> > > > Couldn't find it, but after looking so long at the source code
> > > > I didn't even bother looking for it.
> > > > 
> > > > Thought to be honest - I only looked at how the 64-bit pagetables
> > > > were set up, so I didn't dare to touch the 32-bit. Hence the #ifdef
> > > 
> > > I think that we need to involve some Xen maintainers and get this
> > > written down somewhere in the public headers, otherwise we have no
> > > guarantees that it is going to stay as it is in the next Xen versions.
> > > 
> > > Maybe we just need to add a couple of lines of comment to
> > > xen/include/public/xen.h.
> > 
> > The start of day memory layout for PV guests is written down in the
> > comment just before struct start_info at
> > http://xenbits.xen.org/docs/unstable/hypercall/include,public,xen.h.html#Struct_start_info
> > 
> > (I haven't read this thread to determine if what is documented matches
> > what you guys are talking about relying on)
> 
> but it is not written down how much physical memory is going to be
> mapped in the bootstrap page tables.

How about this:


>From 43fd7a5d9ecd31c2fc26851523cd4b5f7650fb39 Mon Sep 17 00:00:00 2001
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Thu, 12 Jul 2012 13:59:36 -0400
Subject: [PATCH] xen/mmu: For 64-bit do not call xen_map_identity_early

B/c we do not need it. During the startup the Xen provides
us with all the initial memory mapped that we need to function.

The initial memory mapped is up to the bootstack, which means
we can reference using __ka up to 4.f):

(from xen/interface/xen.h):

 4. This the order of bootstrap elements in the initial virtual region:
   a. relocated kernel image
   b. initial ram disk              [mod_start, mod_len]
   c. list of allocated page frames [mfn_list, nr_pages]
   d. start_info_t structure        [register ESI (x86)]
   e. bootstrap page tables         [pt_base, CR3 (x86)]
   f. bootstrap stack               [register ESP (x86)]

(initial ram disk may be omitted).

[v1: More comments in git commit]
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/xen/mmu.c |   11 +++++------
 1 files changed, 5 insertions(+), 6 deletions(-)

diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index 7247e5a..a59070b 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -84,6 +84,7 @@
  */
 DEFINE_SPINLOCK(xen_reservation_lock);
 
+#ifdef CONFIG_X86_32
 /*
  * Identity map, in addition to plain kernel map.  This needs to be
  * large enough to allocate page table pages to allocate the rest.
@@ -91,7 +92,7 @@ DEFINE_SPINLOCK(xen_reservation_lock);
  */
 #define LEVEL1_IDENT_ENTRIES	(PTRS_PER_PTE * 4)
 static RESERVE_BRK_ARRAY(pte_t, level1_ident_pgt, LEVEL1_IDENT_ENTRIES);
-
+#endif
 #ifdef CONFIG_X86_64
 /* l3 pud for userspace vsyscall mapping */
 static pud_t level3_user_vsyscall[PTRS_PER_PUD] __page_aligned_bss;
@@ -1628,7 +1629,7 @@ static void set_page_prot(void *addr, pgprot_t prot)
 	if (HYPERVISOR_update_va_mapping((unsigned long)addr, pte, 0))
 		BUG();
 }
-
+#ifdef CONFIG_X86_32
 static void __init xen_map_identity_early(pmd_t *pmd, unsigned long max_pfn)
 {
 	unsigned pmdidx, pteidx;
@@ -1679,7 +1680,7 @@ static void __init xen_map_identity_early(pmd_t *pmd, unsigned long max_pfn)
 
 	set_page_prot(pmd, PAGE_KERNEL_RO);
 }
-
+#endif
 void __init xen_setup_machphys_mapping(void)
 {
 	struct xen_machphys_mapping mapping;
@@ -1765,14 +1766,12 @@ void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
 	/* Note that we don't do anything with level1_fixmap_pgt which
 	 * we don't need. */
 
-	/* Set up identity map */
-	xen_map_identity_early(level2_ident_pgt, max_pfn);
-
 	/* Make pagetable pieces RO */
 	set_page_prot(init_level4_pgt, PAGE_KERNEL_RO);
 	set_page_prot(level3_ident_pgt, PAGE_KERNEL_RO);
 	set_page_prot(level3_kernel_pgt, PAGE_KERNEL_RO);
 	set_page_prot(level3_user_vsyscall, PAGE_KERNEL_RO);
+	set_page_prot(level2_ident_pgt, PAGE_KERNEL_RO);
 	set_page_prot(level2_kernel_pgt, PAGE_KERNEL_RO);
 	set_page_prot(level2_fixmap_pgt, PAGE_KERNEL_RO);
 
-- 
1.7.7.6


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 15:58:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 15:58:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4ZnA-0006W5-Aj; Thu, 23 Aug 2012 15:57:56 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T4Zn8-0006W0-Kw
	for xen-devel@lists.xensource.com; Thu, 23 Aug 2012 15:57:54 +0000
Received: from [85.158.138.51:47612] by server-8.bemta-3.messagelabs.com id
	3A/31-29583-10356305; Thu, 23 Aug 2012 15:57:53 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-12.tower-174.messagelabs.com!1345737473!19738322!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA0MjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18803 invoked from network); 23 Aug 2012 15:57:53 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-12.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 15:57:53 -0000
X-IronPort-AV: E=Sophos;i="4.80,300,1344211200"; d="scan'208";a="14151314"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	23 Aug 2012 15:57:52 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 23 Aug 2012 16:57:52 +0100
Date: Thu, 23 Aug 2012 16:57:29 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <20120823154012.GA18845@phenom.dumpdata.com>
Message-ID: <alpine.DEB.2.02.1208231656160.15568@kaball.uk.xensource.com>
References: <1345133009-21941-1-git-send-email-konrad.wilk@oracle.com>
	<1345133009-21941-7-git-send-email-konrad.wilk@oracle.com>
	<alpine.DEB.2.02.1208171839590.15568@kaball.uk.xensource.com>
	<20120817174549.GA14257@phenom.dumpdata.com>
	<alpine.DEB.2.02.1208201243180.15568@kaball.uk.xensource.com>
	<1345463624.28762.67.camel@zakaz.uk.xensource.com>
	<alpine.DEB.2.02.1208201257420.15568@kaball.uk.xensource.com>
	<20120823154012.GA18845@phenom.dumpdata.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 06/11] xen/mmu: For 64-bit do not call
 xen_map_identity_early
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 23 Aug 2012, Konrad Rzeszutek Wilk wrote:
> >From 43fd7a5d9ecd31c2fc26851523cd4b5f7650fb39 Mon Sep 17 00:00:00 2001
> From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Date: Thu, 12 Jul 2012 13:59:36 -0400
> Subject: [PATCH] xen/mmu: For 64-bit do not call xen_map_identity_early
> 
> B/c we do not need it. During the startup the Xen provides
                                            ^ remove the
> us with all the initial memory mapped that we need to function.
> 
> The initial memory mapped is up to the bootstack, which means
> we can reference using __ka up to 4.f):

Thanks, that is what I was looking for.

> (from xen/interface/xen.h):
> 
>  4. This is the order of bootstrap elements in the initial virtual region:
>    a. relocated kernel image
>    b. initial ram disk              [mod_start, mod_len]
>    c. list of allocated page frames [mfn_list, nr_pages]
>    d. start_info_t structure        [register ESI (x86)]
>    e. bootstrap page tables         [pt_base, CR3 (x86)]
>    f. bootstrap stack               [register ESP (x86)]
> 
> (initial ram disk may be omitted).
> 
> [v1: More comments in git commit]
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>


> ---
>  arch/x86/xen/mmu.c |   11 +++++------
>  1 files changed, 5 insertions(+), 6 deletions(-)
> 
> diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
> index 7247e5a..a59070b 100644
> --- a/arch/x86/xen/mmu.c
> +++ b/arch/x86/xen/mmu.c
> @@ -84,6 +84,7 @@
>   */
>  DEFINE_SPINLOCK(xen_reservation_lock);
>  
> +#ifdef CONFIG_X86_32
>  /*
>   * Identity map, in addition to plain kernel map.  This needs to be
>   * large enough to allocate page table pages to allocate the rest.
> @@ -91,7 +92,7 @@ DEFINE_SPINLOCK(xen_reservation_lock);
>   */
>  #define LEVEL1_IDENT_ENTRIES	(PTRS_PER_PTE * 4)
>  static RESERVE_BRK_ARRAY(pte_t, level1_ident_pgt, LEVEL1_IDENT_ENTRIES);
> -
> +#endif
>  #ifdef CONFIG_X86_64
>  /* l3 pud for userspace vsyscall mapping */
>  static pud_t level3_user_vsyscall[PTRS_PER_PUD] __page_aligned_bss;
> @@ -1628,7 +1629,7 @@ static void set_page_prot(void *addr, pgprot_t prot)
>  	if (HYPERVISOR_update_va_mapping((unsigned long)addr, pte, 0))
>  		BUG();
>  }
> -
> +#ifdef CONFIG_X86_32
>  static void __init xen_map_identity_early(pmd_t *pmd, unsigned long max_pfn)
>  {
>  	unsigned pmdidx, pteidx;
> @@ -1679,7 +1680,7 @@ static void __init xen_map_identity_early(pmd_t *pmd, unsigned long max_pfn)
>  
>  	set_page_prot(pmd, PAGE_KERNEL_RO);
>  }
> -
> +#endif
>  void __init xen_setup_machphys_mapping(void)
>  {
>  	struct xen_machphys_mapping mapping;
> @@ -1765,14 +1766,12 @@ void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
>  	/* Note that we don't do anything with level1_fixmap_pgt which
>  	 * we don't need. */
>  
> -	/* Set up identity map */
> -	xen_map_identity_early(level2_ident_pgt, max_pfn);
> -
>  	/* Make pagetable pieces RO */
>  	set_page_prot(init_level4_pgt, PAGE_KERNEL_RO);
>  	set_page_prot(level3_ident_pgt, PAGE_KERNEL_RO);
>  	set_page_prot(level3_kernel_pgt, PAGE_KERNEL_RO);
>  	set_page_prot(level3_user_vsyscall, PAGE_KERNEL_RO);
> +	set_page_prot(level2_ident_pgt, PAGE_KERNEL_RO);
>  	set_page_prot(level2_kernel_pgt, PAGE_KERNEL_RO);
>  	set_page_prot(level2_fixmap_pgt, PAGE_KERNEL_RO);
>  
> -- 
> 1.7.7.6
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 16:52:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 16:52:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4ads-0007cu-GF; Thu, 23 Aug 2012 16:52:24 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1T4adr-0007cV-Pp
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 16:52:23 +0000
Received: from [85.158.143.99:63621] by server-1.bemta-4.messagelabs.com id
	78/7D-12504-7CF56305; Thu, 23 Aug 2012 16:52:23 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1345740740!27037370!3
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA0MjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24650 invoked from network); 23 Aug 2012 16:52:21 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-3.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 16:52:21 -0000
X-IronPort-AV: E=Sophos;i="4.80,301,1344211200"; d="scan'208";a="14152772"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	23 Aug 2012 16:52:21 +0000
Received: from dhcp-3-120.uk.xensource.com.com (10.80.3.120) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 23 Aug 2012 17:52:21 +0100
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 23 Aug 2012 17:53:45 +0100
Message-ID: <1345740826-49830-3-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1345740826-49830-1-git-send-email-roger.pau@citrix.com>
References: <1345740826-49830-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 2/3] hotplug/NetBSD: write error message to
	hotplug-error
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

As recommended by Ian Campbell, write the hotplug error to
hotplug-error, just as the Linux hotplug script does.
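
For illustration, the error handler this change produces can be sketched as below. The xenstore-write stub, the /tmp log path, and the example $xpath value are all invented for the sketch so it runs without a live XenStore; the real script calls the actual xenstore-write binary and exits 1 after writing.

```shell
#!/bin/sh
# Sketch of the NetBSD block script's error handler after this change.
# A stub xenstore-write records key/value pairs to a log file so the
# example runs without a live XenStore daemon.
log=/tmp/xenstore-sketch.out
rm -f "$log"

stubdir=$(mktemp -d)
cat > "$stubdir/xenstore-write" <<EOF
#!/bin/sh
# Stub: consume key/value pairs two arguments at a time, like the real tool.
while [ "\$#" -ge 2 ]; do
    echo "\$1=\$2" >> $log
    shift 2
done
EOF
chmod +x "$stubdir/xenstore-write"
PATH="$stubdir:$PATH"

xpath="backend/vbd/1/768"   # example backend path; passed in by the caller in reality

error() {
    echo "$@" >&2
    # One xenstore-write invocation records both the status flag and the
    # error message, mirroring the patch below.
    xenstore-write "$xpath/hotplug-status" error \
                   "$xpath/hotplug-error" "$*"
    # The real script does "exit 1" here; omitted so the sketch continues.
}

error "device could not be attached"
cat "$log"
```

Running the sketch leaves both keys in the log, which a toolstack (or a curious admin) could then read back instead of only seeing a bare "error" status.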

Signed-off-by: Roger Pau Monne <roger.pau@citrix.com>
---
 tools/hotplug/NetBSD/block |    3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/tools/hotplug/NetBSD/block b/tools/hotplug/NetBSD/block
index 28135f5..2c10ed7 100644
--- a/tools/hotplug/NetBSD/block
+++ b/tools/hotplug/NetBSD/block
@@ -12,7 +12,8 @@ export PATH
 
 error() {
 	echo "$@" >&2
-	xenstore-write $xpath/hotplug-status error
+	xenstore-write $xpath/hotplug-status error \
+	               $xpath/hotplug-error "$@"
 	exit 1
 }
 	
-- 
1.7.7.5 (Apple Git-26)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 16:52:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 16:52:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4adr-0007cb-Ns; Thu, 23 Aug 2012 16:52:23 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1T4adq-0007cE-H8
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 16:52:22 +0000
Received: from [85.158.143.99:14101] by server-2.bemta-4.messagelabs.com id
	77/A2-21239-5CF56305; Thu, 23 Aug 2012 16:52:21 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1345740740!27037370!2
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA0MjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24616 invoked from network); 23 Aug 2012 16:52:21 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-3.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 16:52:21 -0000
X-IronPort-AV: E=Sophos;i="4.80,301,1344211200"; d="scan'208";a="14152770"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	23 Aug 2012 16:52:21 +0000
Received: from dhcp-3-120.uk.xensource.com.com (10.80.3.120) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 23 Aug 2012 17:52:21 +0100
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 23 Aug 2012 17:53:44 +0100
Message-ID: <1345740826-49830-2-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1345740826-49830-1-git-send-email-roger.pau@citrix.com>
References: <1345740826-49830-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
Cc: Christoph Egger <Christoph.Egger@amd.com>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 1/3] hotplug/NetBSD: fix xenstore_write usage in
	error
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

xenstore_write doesn't exist; use xenstore-write instead. The error
function is currently broken without this change.

Signed-off-by: Roger Pau Monne <roger.pau@citrix.com>
Signed-off-by: Christoph Egger <Christoph.Egger@amd.com>
---
 tools/hotplug/NetBSD/block |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/tools/hotplug/NetBSD/block b/tools/hotplug/NetBSD/block
index cf5ff3a..28135f5 100644
--- a/tools/hotplug/NetBSD/block
+++ b/tools/hotplug/NetBSD/block
@@ -12,7 +12,7 @@ export PATH
 
 error() {
 	echo "$@" >&2
-	xenstore_write $xpath/hotplug-status error
+	xenstore-write $xpath/hotplug-status error
 	exit 1
 }
 	
-- 
1.7.7.5 (Apple Git-26)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 16:52:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 16:52:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4adr-0007cT-BC; Thu, 23 Aug 2012 16:52:23 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1T4adq-0007cE-3s
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 16:52:22 +0000
Received: from [85.158.143.99:63560] by server-2.bemta-4.messagelabs.com id
	96/A2-21239-5CF56305; Thu, 23 Aug 2012 16:52:21 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1345740740!27037370!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA0MjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24597 invoked from network); 23 Aug 2012 16:52:21 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-3.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 16:52:20 -0000
X-IronPort-AV: E=Sophos;i="4.80,301,1344211200"; d="scan'208";a="14152769"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	23 Aug 2012 16:52:20 +0000
Received: from dhcp-3-120.uk.xensource.com.com (10.80.3.120) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 23 Aug 2012 17:52:20 +0100
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 23 Aug 2012 17:53:43 +0100
Message-ID: <1345740826-49830-1-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
MIME-Version: 1.0
Subject: [Xen-devel] [PATCH 0/3] hotplug/NetBSD: remaining block script fixes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Remaining patches from the hotplug script series for NetBSD, expanded
with Ian Campbell's recommendations.

The xenstore_write fix has been moved into a separate pre-patch, and
the error function has been expanded to write the script error to
hotplug-error (also as a pre-patch).

Thanks for the review.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 16:52:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 16:52:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4ads-0007cm-3R; Thu, 23 Aug 2012 16:52:24 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1T4adq-0007cG-NL
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 16:52:22 +0000
Received: from [85.158.143.99:14119] by server-3.bemta-4.messagelabs.com id
	6D/74-08232-6CF56305; Thu, 23 Aug 2012 16:52:22 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1345740740!27037370!4
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA0MjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24675 invoked from network); 23 Aug 2012 16:52:21 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-3.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 16:52:21 -0000
X-IronPort-AV: E=Sophos;i="4.80,301,1344211200"; d="scan'208";a="14152773"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	23 Aug 2012 16:52:21 +0000
Received: from dhcp-3-120.uk.xensource.com.com (10.80.3.120) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 23 Aug 2012 17:52:21 +0100
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 23 Aug 2012 17:53:46 +0100
Message-ID: <1345740826-49830-4-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1345740826-49830-1-git-send-email-roger.pau@citrix.com>
References: <1345740826-49830-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
Cc: Christoph Egger <Christoph.Egger@amd.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 3/3] hotplug/NetBSD: check type of file to
	attach from params
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

xend used to set the xenbus backend entry "type" to either "phy" or
"file", but libxl now sets it to "phy" for both files and block
devices. We have to manually check the type of the file named by the
"params" field in order to detect whether we are attaching a file or a
block device.
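
The check this patch adds can be exercised on its own. The sketch below reimplements the classification logic outside the script; the classify helper is a name chosen for illustration, and a temporary file stands in for a disk image.

```shell
#!/bin/sh
# Sketch of the params-based type detection this patch adds: classify a
# path the way the block script will, without any XenStore involvement.
classify() {
    xparams="$1"
    if [ -b "$xparams" ]; then
        echo phy                       # block special file: attach directly
    elif [ -f "$xparams" ]; then
        echo file                      # regular image file
    elif [ -z "$xparams" ]; then
        echo "params is empty, unable to attach block device" >&2
        return 1
    else
        echo "$xparams is not a valid file type to use as block device" >&2
        return 1
    fi
}

img=$(mktemp)          # a regular file standing in for a disk image
classify "$img"        # prints "file"
rm -f "$img"
classify "" || echo "empty params rejected"
```

Note the ordering: the block-device test runs first, so a path that is both readable and a block special file is always treated as "phy", matching what xend used to record.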

Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Signed-off-by: Christoph Egger <Christoph.Egger@amd.com>
Signed-off-by: Roger Pau Monne <roger.pau@citrix.com>
---
Changes since v4:

 * Push the xenstore_write fix to a separate pre-patch

Changes since v3:

 * Add $xparams (which contains the path to the disk file) to the
   error message.

Changes since v2:

 * Better error messages.

 * Check if params is empty.

 * Replace xenstore_write with xenstore-write in error function.

 * Add quotation marks to xparams when testing.

Changes since v1:

 * Check that file is either a block special file or a regular file
   and report error otherwise.
---
 tools/hotplug/NetBSD/block |   11 ++++++++++-
 1 files changed, 10 insertions(+), 1 deletions(-)

diff --git a/tools/hotplug/NetBSD/block b/tools/hotplug/NetBSD/block
index 2c10ed7..f1146b5 100644
--- a/tools/hotplug/NetBSD/block
+++ b/tools/hotplug/NetBSD/block
@@ -20,8 +20,17 @@ error() {
 
 xpath=$1
 xstatus=$2
-xtype=$(xenstore-read "$xpath/type")
 xparams=$(xenstore-read "$xpath/params")
+if [ -b "$xparams" ]; then
+	xtype="phy"
+elif [ -f "$xparams" ]; then
+	xtype="file"
+elif [ -z "$xparams" ]; then
+	error "$xpath/params is empty, unable to attach block device."
+else
+	error "$xparams is not a valid file type to use as block device." \
+	      "Only block and regular image files accepted."
+fi
 
 case $xstatus in
 6)
-- 
1.7.7.5 (Apple Git-26)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 16:52:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 16:52:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4ads-0007cu-GF; Thu, 23 Aug 2012 16:52:24 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1T4adr-0007cV-Pp
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 16:52:23 +0000
Received: from [85.158.143.99:63621] by server-1.bemta-4.messagelabs.com id
	78/7D-12504-7CF56305; Thu, 23 Aug 2012 16:52:23 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1345740740!27037370!3
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA0MjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24650 invoked from network); 23 Aug 2012 16:52:21 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-3.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 16:52:21 -0000
X-IronPort-AV: E=Sophos;i="4.80,301,1344211200"; d="scan'208";a="14152772"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	23 Aug 2012 16:52:21 +0000
Received: from dhcp-3-120.uk.xensource.com.com (10.80.3.120) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 23 Aug 2012 17:52:21 +0100
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 23 Aug 2012 17:53:45 +0100
Message-ID: <1345740826-49830-3-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1345740826-49830-1-git-send-email-roger.pau@citrix.com>
References: <1345740826-49830-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 2/3] hotplug/NetBSD: write error message to
	hotplug-error
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

As recommended by Ian Campbell, write the hotplug error to the
hotplug-error xenstore node, just as the Linux hotplug scripts do.

Signed-off-by: Roger Pau Monne <roger.pau@citrix.com>
---
 tools/hotplug/NetBSD/block |    3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/tools/hotplug/NetBSD/block b/tools/hotplug/NetBSD/block
index 28135f5..2c10ed7 100644
--- a/tools/hotplug/NetBSD/block
+++ b/tools/hotplug/NetBSD/block
@@ -12,7 +12,8 @@ export PATH
 
 error() {
 	echo "$@" >&2
-	xenstore-write $xpath/hotplug-status error
+	xenstore-write $xpath/hotplug-status error \
+	               $xpath/hotplug-error "$@"
 	exit 1
 }
 	
-- 
1.7.7.5 (Apple Git-26)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 17:14:39 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 17:14:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4az9-0008Md-9s; Thu, 23 Aug 2012 17:14:23 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1T4az8-0008MN-6L
	for xen-devel@lists.xensource.com; Thu, 23 Aug 2012 17:14:22 +0000
Received: from [85.158.143.99:34789] by server-2.bemta-4.messagelabs.com id
	C8/44-21239-DE466305; Thu, 23 Aug 2012 17:14:21 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1345742057!20578829!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjYxOTM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13893 invoked from network); 23 Aug 2012 17:14:19 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 17:14:19 -0000
X-IronPort-AV: E=Sophos;i="4.80,301,1344225600"; d="scan'208";a="35607816"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	23 Aug 2012 13:13:54 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 23 Aug 2012 13:13:54 -0400
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1T4ayg-000853-4y;
	Thu, 23 Aug 2012 18:13:54 +0100
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Thu, 23 Aug 2012 18:13:45 +0100
Message-ID: <1345742026-10569-3-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1345742026-10569-1-git-send-email-david.vrabel@citrix.com>
References: <1345742026-10569-1-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
Cc: Andres Lagar-Cavilla <andres@gridcentric.ca>, Keir Fraser <keir@xen.org>,
	David Vrabel <david.vrabel@citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH 2/3] xen/privcmd: report paged-out frames in
	PRIVCMD_MMAPBATCH ioctl
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: David Vrabel <david.vrabel@citrix.com>

libxc handles paged-out frames in xc_map_foreign_bulk() and friends by
retrying the map operation.  libxc expects the PRIVCMD_MMAPBATCH ioctl
to report paged-out frames by setting bit 31 in the mfn.

Do this for the PRIVCMD_MMAPBATCH ioctl if
xen_remap_domain_mfn_range() returned -ENOENT.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 drivers/xen/privcmd.c |   11 ++++++++---
 1 files changed, 8 insertions(+), 3 deletions(-)

diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
index ccee0f1..f8c1b6d 100644
--- a/drivers/xen/privcmd.c
+++ b/drivers/xen/privcmd.c
@@ -255,10 +255,15 @@ static int mmap_batch_fn(void *data, void *state)
 {
 	xen_pfn_t *mfnp = data;
 	struct mmap_batch_state *st = state;
+	int ret;
 
-	if (xen_remap_domain_mfn_range(st->vma, st->va & PAGE_MASK, *mfnp, 1,
-				       st->vma->vm_page_prot, st->domain) < 0) {
-		*mfnp |= 0xf0000000U;
+	ret = xen_remap_domain_mfn_range(st->vma, st->va & PAGE_MASK, *mfnp, 1,
+					 st->vma->vm_page_prot, st->domain);
+	if (ret < 0) {
+		if (ret == -ENOENT)
+			*mfnp |= 0x80000000U;
+		else
+			*mfnp |= 0xf0000000U;
 		st->err++;
 	}
 	st->va += PAGE_SIZE;
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 17:14:39 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 17:14:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4az7-0008MI-Gh; Thu, 23 Aug 2012 17:14:21 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1T4az6-0008M9-2w
	for xen-devel@lists.xensource.com; Thu, 23 Aug 2012 17:14:20 +0000
Received: from [85.158.143.99:38936] by server-1.bemta-4.messagelabs.com id
	F1/EE-12504-BE466305; Thu, 23 Aug 2012 17:14:19 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1345742057!20578829!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjYxOTM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13853 invoked from network); 23 Aug 2012 17:14:18 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 17:14:18 -0000
X-IronPort-AV: E=Sophos;i="4.80,301,1344225600"; d="scan'208";a="35607815"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	23 Aug 2012 13:13:54 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 23 Aug 2012 13:13:54 -0400
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1T4ayg-000853-4R;
	Thu, 23 Aug 2012 18:13:54 +0100
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Thu, 23 Aug 2012 18:13:44 +0100
Message-ID: <1345742026-10569-2-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1345742026-10569-1-git-send-email-david.vrabel@citrix.com>
References: <1345742026-10569-1-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
Cc: Andres Lagar-Cavilla <andres@gridcentric.ca>, Keir Fraser <keir@xen.org>,
	David Vrabel <david.vrabel@citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH 1/3] xen/mm: return more precise error from
	xen_remap_domain_range()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: David Vrabel <david.vrabel@citrix.com>

Callers of xen_remap_domain_mfn_range() need to know if the remap
failed because a frame is currently paged out, so that they can retry
the remap later.  Return -ENOENT in this case.

This assumes that the error codes returned by Xen are a subset of
those used by the kernel.  It is unclear if this is defined as part of
the hypercall ABI.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 arch/x86/xen/mmu.c |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index b65a761..fb187ea 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -2355,8 +2355,8 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
 		if (err)
 			goto out;
 
-		err = -EFAULT;
-		if (HYPERVISOR_mmu_update(mmu_update, batch, NULL, domid) < 0)
+		err = HYPERVISOR_mmu_update(mmu_update, batch, NULL, domid);
+		if (err < 0)
 			goto out;
 
 		nr -= batch;
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 17:14:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 17:14:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4az8-0008MW-TH; Thu, 23 Aug 2012 17:14:22 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1T4az7-0008M9-44
	for xen-devel@lists.xensource.com; Thu, 23 Aug 2012 17:14:21 +0000
Received: from [85.158.143.99:34748] by server-1.bemta-4.messagelabs.com id
	66/EE-12504-CE466305; Thu, 23 Aug 2012 17:14:20 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1345742057!20578829!3
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjYxOTM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13924 invoked from network); 23 Aug 2012 17:14:20 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 17:14:20 -0000
X-IronPort-AV: E=Sophos;i="4.80,301,1344225600"; d="scan'208";a="35607817"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	23 Aug 2012 13:13:54 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 23 Aug 2012 13:13:54 -0400
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1T4ayg-000853-5U;
	Thu, 23 Aug 2012 18:13:54 +0100
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Thu, 23 Aug 2012 18:13:46 +0100
Message-ID: <1345742026-10569-4-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1345742026-10569-1-git-send-email-david.vrabel@citrix.com>
References: <1345742026-10569-1-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
Cc: Andres Lagar-Cavilla <andres@gridcentric.ca>, Keir Fraser <keir@xen.org>,
	David Vrabel <david.vrabel@citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH 3/3] xen/privcmd: add PRIVCMD_MMAPBATCH_V2 ioctl
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: David Vrabel <david.vrabel@citrix.com>

PRIVCMD_MMAPBATCH_V2 extends PRIVCMD_MMAPBATCH with an additional
field for reporting the error code for every frame that could not be
mapped.  libxc prefers PRIVCMD_MMAPBATCH_V2 over PRIVCMD_MMAPBATCH.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 drivers/xen/privcmd.c |   54 +++++++++++++++++++++++++++++++++++++++----------
 include/xen/privcmd.h |   10 +++++++++
 2 files changed, 53 insertions(+), 11 deletions(-)

diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
index f8c1b6d..4f97160 100644
--- a/drivers/xen/privcmd.c
+++ b/drivers/xen/privcmd.c
@@ -248,7 +248,8 @@ struct mmap_batch_state {
 	struct vm_area_struct *vma;
 	int err;
 
-	xen_pfn_t __user *user;
+	xen_pfn_t __user *user_mfn;
+	int __user *user_err;
 };
 
 static int mmap_batch_fn(void *data, void *state)
@@ -275,34 +276,58 @@ static int mmap_return_errors(void *data, void *state)
 {
 	xen_pfn_t *mfnp = data;
 	struct mmap_batch_state *st = state;
+	int ret = 0;
 
-	return put_user(*mfnp, st->user++);
+	if (st->user_err) {
+		if ((*mfnp & 0xf0000000U) == 0xf0000000U)
+			ret = -ENOENT;
+		else if ((*mfnp & 0xf0000000U) == 0x80000000U)
+			ret = -EINVAL;
+		else
+			ret = 0;
+		return __put_user(ret, st->user_err);
+	} else
+		return __put_user(*mfnp, st->user_mfn++);
 }
 
 static struct vm_operations_struct privcmd_vm_ops;
 
-static long privcmd_ioctl_mmap_batch(void __user *udata)
+static long privcmd_ioctl_mmap_batch(void __user *udata, int version)
 {
 	int ret;
-	struct privcmd_mmapbatch m;
+	struct privcmd_mmapbatch_v2 m;
 	struct mm_struct *mm = current->mm;
 	struct vm_area_struct *vma;
 	unsigned long nr_pages;
 	LIST_HEAD(pagelist);
 	struct mmap_batch_state state;
 
+	printk("%s(%d)\n", __func__, version);
+
 	if (!xen_initial_domain())
 		return -EPERM;
 
-	if (copy_from_user(&m, udata, sizeof(m)))
-		return -EFAULT;
+	if (version == 1) {
+		if (copy_from_user(&m, udata, sizeof(struct privcmd_mmapbatch)))
+			return -EFAULT;
+		if (!access_ok(VERIFY_WRITE, m.arr, m.num * sizeof(*m.arr)))
+			return -EFAULT;
+		m.err = NULL;
+	} else {
+		if (copy_from_user(&m, udata, sizeof(struct privcmd_mmapbatch_v2)))
+			return -EFAULT;
+		if (!access_ok(VERIFY_READ, m.arr, m.num * sizeof(*m.arr)))
+			return -EFAULT;
+		if (!access_ok(VERIFY_WRITE, m.err, m.num * (sizeof(*m.err))))
+			return -EFAULT;
+	}
 
 	nr_pages = m.num;
 	if ((m.num <= 0) || (nr_pages > (LONG_MAX >> PAGE_SHIFT)))
 		return -EINVAL;
 
 	ret = gather_array(&pagelist, m.num, sizeof(xen_pfn_t),
-			   m.arr);
+			   (xen_pfn_t *)m.arr);
 
 	if (ret || list_empty(&pagelist))
 		goto out;
@@ -331,10 +356,11 @@ static long privcmd_ioctl_mmap_batch(void __user *udata)
 	up_write(&mm->mmap_sem);
 
 	if (state.err > 0) {
-		state.user = m.arr;
+		state.user_mfn = (xen_pfn_t *)m.arr;
+		state.user_err = m.err;
 		ret = traverse_pages(m.num, sizeof(xen_pfn_t),
-			       &pagelist,
-			       mmap_return_errors, &state);
+				     &pagelist,
+				     mmap_return_errors, &state);
 	}
 
 out:
@@ -359,7 +385,13 @@ static long privcmd_ioctl(struct file *file,
 		break;
 
 	case IOCTL_PRIVCMD_MMAPBATCH:
-		ret = privcmd_ioctl_mmap_batch(udata);
+		ret = privcmd_ioctl_mmap_batch(udata, 1);
+		printk("%s() batch ret = %d\n", __func__, ret);
+		break;
+
+	case IOCTL_PRIVCMD_MMAPBATCH_V2:
+		ret = privcmd_ioctl_mmap_batch(udata, 2);
+		printk("%s() batch ret = %d\n", __func__, ret);
 		break;
 
 	default:
diff --git a/include/xen/privcmd.h b/include/xen/privcmd.h
index 17857fb..9fa27c4 100644
--- a/include/xen/privcmd.h
+++ b/include/xen/privcmd.h
@@ -62,6 +62,14 @@ struct privcmd_mmapbatch {
 	xen_pfn_t __user *arr; /* array of mfns - top nibble set on err */
 };
 
+struct privcmd_mmapbatch_v2 {
+	unsigned int num; /* number of pages to populate */
+	domid_t dom;      /* target domain */
+	__u64 addr;       /* virtual address */
+	const xen_pfn_t __user *arr; /* array of mfns */
+	int __user *err;  /* array of error codes */
+};
+
 /*
  * @cmd: IOCTL_PRIVCMD_HYPERCALL
  * @arg: &privcmd_hypercall_t
@@ -73,5 +81,7 @@ struct privcmd_mmapbatch {
 	_IOC(_IOC_NONE, 'P', 2, sizeof(struct privcmd_mmap))
 #define IOCTL_PRIVCMD_MMAPBATCH					\
 	_IOC(_IOC_NONE, 'P', 3, sizeof(struct privcmd_mmapbatch))
+#define IOCTL_PRIVCMD_MMAPBATCH_V2				\
+	_IOC(_IOC_NONE, 'P', 4, sizeof(struct privcmd_mmapbatch_v2))
 
 #endif /* __LINUX_PUBLIC_PRIVCMD_H__ */
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
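For context on the patch above: the V1 ABI reports a per-frame failure by setting the top nibble of the MFN written back to userspace, and `mmap_return_errors()` translates that encoding into an errno for the V2 `err` array. A minimal sketch of that decoding, using the nibble values from the patch (the function name and the `unsigned int` frame type are ours, for illustration only):

```c
#include <assert.h>
#include <errno.h>

/*
 * Decode the PRIVCMD_MMAPBATCH top-nibble error encoding into an errno,
 * mirroring the checks in mmap_return_errors() above.
 *   0xf... => frame could not be mapped (not present) => -ENOENT
 *   0x8... => invalid frame                           => -EINVAL
 *   otherwise the mapping succeeded                   => 0
 */
int decode_mfn_error(unsigned int mfn)
{
	if ((mfn & 0xf0000000U) == 0xf0000000U)
		return -ENOENT;
	if ((mfn & 0xf0000000U) == 0x80000000U)
		return -EINVAL;
	return 0;
}
```

With V2, userspace reads these errno values directly from the `err` array instead of re-deriving them from the clobbered MFN array.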

From xen-devel-bounces@lists.xen.org Thu Aug 23 17:15:19 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 17:15:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4azq-0008Ub-On; Thu, 23 Aug 2012 17:15:06 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1T4azp-0008UO-Qi
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 17:15:06 +0000
Received: from [85.158.143.35:2145] by server-1.bemta-4.messagelabs.com id
	EF/5F-12504-91566305; Thu, 23 Aug 2012 17:15:05 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1345742104!12683148!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA0MjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28961 invoked from network); 23 Aug 2012 17:15:04 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 17:15:04 -0000
X-IronPort-AV: E=Sophos;i="4.80,301,1344211200"; d="scan'208";a="14153503"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	23 Aug 2012 17:15:03 +0000
Received: from dhcp-3-120.uk.xensource.com (10.80.3.120) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 23 Aug 2012 18:15:03 +0100
Message-ID: <50366574.1040307@citrix.com>
Date: Thu, 23 Aug 2012 18:16:36 +0100
From: Roger Pau Monne <roger.pau@citrix.com>
User-Agent: Postbox 3.0.4 (Macintosh/20120616)
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1344968106-6765-1-git-send-email-roger.pau@citrix.com>
	<1345020888.5926.115.camel@zakaz.uk.xensource.com>
In-Reply-To: <1345020888.5926.115.camel@zakaz.uk.xensource.com>
X-Enigmail-Version: 1.2.2
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] libxl: fix usage of backend parameter and
 run_hotplug_scripts
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell wrote:
> On Tue, 2012-08-14 at 19:15 +0100, Roger Pau Monne wrote:
>> vif interfaces allows the user to specify the domain that should run
>> the backend (also known as driver domain) using the 'backend'
>> parameter. This is not compatible with run_hotplug_scripts=1, since
>> libxl can only run the hotplug scripts from the Domain 0.
>>
>> Signed-off-by: Roger Pau Monne <roger.pau@citrix.com>
>> ---
>>  docs/misc/xl-network-configuration.markdown |    6 ++++--
>>  tools/libxl/libxl.c                         |   14 ++++++++++++++
>>  2 files changed, 18 insertions(+), 2 deletions(-)
>>
>> diff --git a/docs/misc/xl-network-configuration.markdown b/docs/misc/xl-network-configuration.markdown
>> index 650926c..5e2f049 100644
>> --- a/docs/misc/xl-network-configuration.markdown
>> +++ b/docs/misc/xl-network-configuration.markdown
>> @@ -122,8 +122,10 @@ specified IP address to be used by the guest (blocking all others).
>>  ### backend
>>  
>>  Specifies the backend domain which this device should attach to. This
>> -defaults to domain 0. Specifying another domain requires setting up a
>> -driver domain which is outside the scope of this document.
>> +defaults to domain 0. This option does not work if `run_hotplug_scripts`
>> +is not disabled in xl.conf (see xl.conf(5) man page for more information
>> +on this option). Specifying another domain requires setting up a driver
>> +domain which is outside the scope of this document.
>>  
>>  ### rate
>>  
>> diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
>> index 8ea3478..6b85cdc 100644
>> --- a/tools/libxl/libxl.c
>> +++ b/tools/libxl/libxl.c
>> @@ -2474,6 +2474,8 @@ out:
>>  int libxl__device_nic_setdefault(libxl__gc *gc, libxl_device_nic *nic,
>>                                   uint32_t domid)
>>  {
>> +    int run_hotplug_scripts;
>> +
>>      if (!nic->mtu)
>>          nic->mtu = 1492;
>>      if (!nic->model) {
>> @@ -2503,6 +2505,18 @@ int libxl__device_nic_setdefault(libxl__gc *gc, libxl_device_nic *nic,
>>                                    libxl__xen_script_dir_path()) < 0 )
>>          return ERROR_FAIL;
>>  
>> +    run_hotplug_scripts = libxl__hotplug_settings(gc, XBT_NULL);
>> +    if (run_hotplug_scripts < 0) {
>> +        LOG(ERROR, "unable to get current hotplug scripts execution setting");
> 
> Include the error value?

libxl__hotplug_settings already prints an error message that contains
the errno value. I think that's enough.

>> +        return run_hotplug_scripts;
>> +    }
>> +    if (nic->backend_domid != LIBXL_TOOLSTACK_DOMID && run_hotplug_scripts) {
>> +        LOG(ERROR, "the vif 'backend=' option cannot be used in conjunction "
>> +                   "with run_hotplug_scripts, please set run_hotplug_scripts "
>> +                   "to 0 in xl.conf");
> 
> This mention of xl.conf in libxl is a layering violation.
> 
> I think it would be fine for libxl to log something generic about
> hotplug scripts and return an error and to have a more specific check t
> parse time in xl which errors out with a reference to the config option.

Ok, I will probably split this into two patches, one for libxl and one
for the parser.

Thanks for the review.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 17:18:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 17:18:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4b2v-0000Re-Hw; Thu, 23 Aug 2012 17:18:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@citrix.com>) id 1T4b2t-0000RT-Mz
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 17:18:15 +0000
Received: from [85.158.143.99:48094] by server-1.bemta-4.messagelabs.com id
	76/A1-12504-7D566305; Thu, 23 Aug 2012 17:18:15 +0000
X-Env-Sender: julien.grall@citrix.com
X-Msg-Ref: server-14.tower-216.messagelabs.com!1345742292!17975826!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzM2MzE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6646 invoked from network); 23 Aug 2012 17:18:14 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 17:18:14 -0000
X-IronPort-AV: E=Sophos;i="4.80,301,1344225600"; d="scan'208";a="206053880"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	23 Aug 2012 13:18:12 -0400
Received: from [10.80.248.240] (10.80.248.240) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0; Thu, 23 Aug 2012
	13:18:12 -0400
Message-ID: <50366602.4030106@citrix.com>
Date: Thu, 23 Aug 2012 18:18:58 +0100
From: Julien Grall <julien.grall@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US;
	rv:1.9.1.16) Gecko/20120726 Icedove/3.0.11
MIME-Version: 1.0
To: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
References: <cover.1345637459.git.julien.grall@citrix.com>
	<b7c4e2af5c7a7091d99cd8e0d838fb09daa376e5.1345637459.git.julien.grall@citrix.com>
	<alpine.DEB.2.02.1208231457360.15568@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1208231457360.15568@kaball.uk.xensource.com>
Cc: "christian.limpach@gmail.com" <christian.limpach@gmail.com>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [QEMU][RFC V2 05/10] xen-memory: register memory/IO
 range in Xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/23/2012 03:41 PM, Stefano Stabellini wrote:
> On Wed, 22 Aug 2012, Julien Grall wrote:
>    
>> Add Memory listener on IO and modify the one on memory.
>> Becareful, the first listener is not called is the range is still register with
>> register_ioport*. So Xen will never know that this QEMU is handle the range.
>>      
> I don't understand what you mean here. Could you please elaborate?
>    

I made a patch series to remove all ioport_register_* functions from the
different devices (dma, cirrus, ...):
https://lists.gnu.org/archive/html/qemu-devel/2012-08/msg04007.html

ioport_register_* doesn't use the new memory API, so the listener is not
called when a new range is registered.

I will rework the commit message and send it back.

-- 
Julien

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 17:18:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 17:18:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4b2v-0000Re-Hw; Thu, 23 Aug 2012 17:18:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@citrix.com>) id 1T4b2t-0000RT-Mz
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 17:18:15 +0000
Received: from [85.158.143.99:48094] by server-1.bemta-4.messagelabs.com id
	76/A1-12504-7D566305; Thu, 23 Aug 2012 17:18:15 +0000
X-Env-Sender: julien.grall@citrix.com
X-Msg-Ref: server-14.tower-216.messagelabs.com!1345742292!17975826!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzM2MzE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6646 invoked from network); 23 Aug 2012 17:18:14 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 17:18:14 -0000
X-IronPort-AV: E=Sophos;i="4.80,301,1344225600"; d="scan'208";a="206053880"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	23 Aug 2012 13:18:12 -0400
Received: from [10.80.248.240] (10.80.248.240) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0; Thu, 23 Aug 2012
	13:18:12 -0400
Message-ID: <50366602.4030106@citrix.com>
Date: Thu, 23 Aug 2012 18:18:58 +0100
From: Julien Grall <julien.grall@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US;
	rv:1.9.1.16) Gecko/20120726 Icedove/3.0.11
MIME-Version: 1.0
To: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
References: <cover.1345637459.git.julien.grall@citrix.com>
	<b7c4e2af5c7a7091d99cd8e0d838fb09daa376e5.1345637459.git.julien.grall@citrix.com>
	<alpine.DEB.2.02.1208231457360.15568@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1208231457360.15568@kaball.uk.xensource.com>
Cc: "christian.limpach@gmail.com" <christian.limpach@gmail.com>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [QEMU][RFC V2 05/10] xen-memory: register memory/IO
 range in Xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/23/2012 03:41 PM, Stefano Stabellini wrote:
> On Wed, 22 Aug 2012, Julien Grall wrote:
>    
>> Add Memory listener on IO and modify the one on memory.
>> Becareful, the first listener is not called is the range is still register with
>> register_ioport*. So Xen will never know that this QEMU is handle the range.
>>      
> I don't understand what you mean here. Could you please elaborate?
>    

I made a patch series to remove all ioport_register_* functions in
different devices (dma, cirrus, ...):
https://lists.gnu.org/archive/html/qemu-devel/2012-08/msg04007.html

ioport_register_* doesn't use the new memory API, so the listener is not
called when a new range is registered.
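
As a toy illustration of the point above (the names here are invented for the sketch, not QEMU's actual API), the difference is that only the memory-API registration path notifies listeners, so a range installed through the legacy path is invisible to them:

```c
#include <stdbool.h>
#include <stdio.h>

/* Toy model, not QEMU code: two registration paths for an I/O range.
 * Only the memory-API path notifies the listener, which is why Xen
 * never learns about ranges that devices still register with the
 * legacy register_ioport_* calls. */

static bool listener_saw_range;

static void listener_region_add(unsigned base, unsigned len)
{
    listener_saw_range = true;
    printf("listener: region add %#x len %#x\n", base, len);
}

/* New-style path: goes through the memory API, so the listener fires. */
static void memory_api_register(unsigned base, unsigned len)
{
    listener_region_add(base, len);
}

/* Legacy path: installs handlers directly; no listener is notified. */
static void legacy_register_ioport(unsigned base, unsigned len)
{
    (void)base;
    (void)len;
}
```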

I will rework the commit message and send it back.

-- 
Julien

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 17:21:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 17:21:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4b5P-0000bf-45; Thu, 23 Aug 2012 17:20:51 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1T4b5O-0000bW-7d
	for xen-devel@lists.xensource.com; Thu, 23 Aug 2012 17:20:50 +0000
Received: from [85.158.143.35:21590] by server-3.bemta-4.messagelabs.com id
	CB/0B-08232-17666305; Thu, 23 Aug 2012 17:20:49 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1345742059!12559226!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjYxOTM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1961 invoked from network); 23 Aug 2012 17:14:20 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 17:14:20 -0000
X-IronPort-AV: E=Sophos;i="4.80,301,1344225600"; d="scan'208";a="35607814"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	23 Aug 2012 13:13:54 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 23 Aug 2012 13:13:54 -0400
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1T4ayg-000853-3t;
	Thu, 23 Aug 2012 18:13:54 +0100
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Thu, 23 Aug 2012 18:13:43 +0100
Message-ID: <1345742026-10569-1-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
MIME-Version: 1.0
Cc: Andres Lagar-Cavilla <andres@gridcentric.ca>, Keir Fraser <keir@xen.org>,
	David Vrabel <david.vrabel@citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [RFC PATCH 0/3] xen/privcmd: support for paged-out
	frames
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This series is a straight forward-port of some functionality from
classic kernels to support Xen hosts that do paging of guests.

This isn't functionality that XenServer makes use of, so I've not tested
these with paging in use (GridCentric requested that our older kernels
support this, and I'm just doing the forward port).

I'm not entirely happy about the approach used here because:

1. It relies on the meaning of the return code of the update_mmu
hypercall, and it assumes the value Xen uses for -ENOENT is the same
as the one the kernel uses. This does not appear to be a formal part
of the hypercall ABI.

Keir, can you comment on this?

2. It seems more sensible to have the kernel do the retries instead of
libxc doing them.  The kernel has to have a mechanism for this anyway
(for mapping back/front rings).

3. The current way of handling paged-out frames by repeatedly retrying
is a bit lame.  Shouldn't there be some event that the guest waiting
for the frame can wait on instead?  By moving the retry mechanism into
the kernel we can change this without impacting the ABI to userspace.
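
For reference, the retry-on-ENOENT pattern the points above discuss (done today in libxc, proposed to move into the kernel) can be sketched like this; the hypercall is stubbed out, so the names are illustrative only:

```c
#include <errno.h>

/* Stub standing in for the mapping hypercall: it reports -ENOENT while
 * the frame is paged out and succeeds once the frame has been paged
 * back in (here, after two failed attempts, so the loop terminates). */
static int stub_map_frame(int *attempts)
{
    return (++*attempts < 3) ? -ENOENT : 0;
}

/* The retry loop: keep retrying while the hypervisor reports the frame
 * as paged out.  A real implementation would sleep, or better, wait on
 * an event instead of spinning, which is exactly point 3 above. */
static int map_with_retry(int *attempts)
{
    int rc;
    do {
        rc = stub_map_frame(attempts);
    } while (rc == -ENOENT);
    return rc;
}
```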

David


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 17:25:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 17:25:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4b9s-0000oY-R7; Thu, 23 Aug 2012 17:25:28 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <cutter409@gmail.com>) id 1T4b9r-0000oS-NH
	for xen-devel@lists.xensource.com; Thu, 23 Aug 2012 17:25:27 +0000
Received: from [85.158.139.83:61644] by server-11.bemta-5.messagelabs.com id
	46/39-29296-78766305; Thu, 23 Aug 2012 17:25:27 +0000
X-Env-Sender: cutter409@gmail.com
X-Msg-Ref: server-5.tower-182.messagelabs.com!1345742724!27583518!1
X-Originating-IP: [209.85.220.171]
X-SpamReason: No, hits=1.5 required=7.0 tests=HTML_00_10,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8892 invoked from network); 23 Aug 2012 17:25:26 -0000
Received: from mail-vc0-f171.google.com (HELO mail-vc0-f171.google.com)
	(209.85.220.171)
	by server-5.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 17:25:26 -0000
Received: by vcdd16 with SMTP id d16so1436209vcd.30
	for <xen-devel@lists.xensource.com>;
	Thu, 23 Aug 2012 10:25:24 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=c2IleU6lwKuJABtdperh8kV6F2eypmYajEseFnbPbYY=;
	b=Dor+fLLKRIuV9NJyQ/+WsXXE7wOBcRV7THTdFNKhs3bqwxWuKF47jp3TfNN5UPMJs2
	5A/TVM7E0cgra6cyW7exneape3SNP19AqkQ7imK84el2JsjAEL0p/DW5bcNp1Z5GYlzI
	bjiPLWsH5bhryiKdf0ykkS7EqRTNTto7R3Q8FAh1MqnqcVFF7jqkOiGY9N9OE05phnD8
	5IOtBjjlQkI3BCt+Af0YiDxuyKhbeWsiFAvHCMCu6TCPKMTj9PtLJWn+7exqC8a927pf
	fFzB1b47IOCRIMOWSdl38Y5IWsTSJy/u1WVE3rzosD4e7kqnCK9lhvN3yRla2peGO529
	h5RQ==
MIME-Version: 1.0
Received: by 10.52.100.67 with SMTP id ew3mr1703641vdb.36.1345742724687; Thu,
	23 Aug 2012 10:25:24 -0700 (PDT)
Received: by 10.220.210.196 with HTTP; Thu, 23 Aug 2012 10:25:24 -0700 (PDT)
Date: Thu, 23 Aug 2012 13:25:24 -0400
Message-ID: <CAG4Ohu8r-RHaWVv1fzksAQZH5NKnufYFYukRZkrD-=0EULTSHQ@mail.gmail.com>
From: Cutter 409 <cutter409@gmail.com>
To: xen-devel@lists.xensource.com
Subject: [Xen-devel] Foreign VCPU register change?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7895560042222870224=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7895560042222870224==
Content-Type: multipart/alternative; boundary=20cf307f349a1065b404c7f2256a

--20cf307f349a1065b404c7f2256a
Content-Type: text/plain; charset=ISO-8859-1

With Xen-4.1.2:

I'm trying to change a register value in a paused vmx vcpu. The general
process looks like this:

1. Some vmexit calls vcpu_sleep_nosync(v) on the vcpu
2. From dom0, I issue a domctl to change a register via
v->arch.guest_context.user_reg, then vcpu_wake(v)

However, the guest register does not seem to be changed when I do it this
way. Is there something I need to do to mark the registers as "dirty"? Is
there a way to force the foreign vcpu to update the changed registers? Or
maybe I just have to change the registers somewhere else?

I've tried directly using vmcs_enter(v), __vmwrite(), vmcs_exit(v) also,
but that doesn't seem to make a change either.

Thanks!

--20cf307f349a1065b404c7f2256a--


--===============7895560042222870224==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7895560042222870224==--


From xen-devel-bounces@lists.xen.org Thu Aug 23 17:35:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 17:35:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4bJ6-00011k-Tu; Thu, 23 Aug 2012 17:35:00 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1T4bJ5-00011f-9h
	for xen-devel@lists.xensource.com; Thu, 23 Aug 2012 17:34:59 +0000
Received: from [85.158.143.35:16547] by server-1.bemta-4.messagelabs.com id
	4A/6D-12504-2C966305; Thu, 23 Aug 2012 17:34:58 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1345743296!15833636!1
X-Originating-IP: [74.125.83.43]
X-SpamReason: No, hits=0.6 required=7.0 tests=MAILTO_TO_SPAM_ADDR,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22240 invoked from network); 23 Aug 2012 17:34:56 -0000
Received: from mail-ee0-f43.google.com (HELO mail-ee0-f43.google.com)
	(74.125.83.43)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 17:34:56 -0000
Received: by eekd4 with SMTP id d4so405371eek.30
	for <xen-devel@lists.xensource.com>;
	Thu, 23 Aug 2012 10:34:55 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=4G7viu3GXFOrr/CDjGkufupUMwzUj6LWsyKcdR6jmwo=;
	b=j3PEeLrDWrGJ/KQSszsEdy0bkzbw336h5PKGuKrBHHVHQ5i+IuS3lG7eGiBMefNVY6
	qOrF2BOLfKH5NtAGUswiBUQxHNSmcbuIfhTCCTPOuq79G2PLq8q8+TZ/FTh3bUbbLe0K
	UOdkM9J8U9oXz8dIDtME/kFVWRAgi4Fcaxoau7QfVCsC/5sOVSzEzLeTaqzsPUm+OoYu
	Bpx9ZfuB+QzFWhjeoefvCj+1rbfPHnonMTlYlwoYULwYP6Kj4i2B44nxtNN+fRUPpLwT
	/S2f8eNK3agh/zzEXYK5CcN85v8mJVrK+WTVQxqZYtn7aOWfJ9lHOKyW60RZ5fuCoRxU
	IbTA==
Received: by 10.14.179.200 with SMTP id h48mr3126564eem.12.1345743295758;
	Thu, 23 Aug 2012 10:34:55 -0700 (PDT)
Received: from [192.168.1.68] (host86-184-74-117.range86-184.btcentralplus.com.
	[86.184.74.117])
	by mx.google.com with ESMTPS id 45sm23248910eeb.8.2012.08.23.10.34.53
	(version=SSLv3 cipher=OTHER); Thu, 23 Aug 2012 10:34:54 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.32.0.111121
Date: Thu, 23 Aug 2012 18:34:46 +0100
From: Keir Fraser <keir.xen@gmail.com>
To: Cutter 409 <cutter409@gmail.com>,
	<xen-devel@lists.xensource.com>
Message-ID: <CC5C2846.3CC01%keir.xen@gmail.com>
Thread-Topic: [Xen-devel] Foreign VCPU register change?
Thread-Index: Ac2BVZZzWtfqYph+RUm6PR7KXK8Zyw==
In-Reply-To: <CAG4Ohu8r-RHaWVv1fzksAQZH5NKnufYFYukRZkrD-=0EULTSHQ@mail.gmail.com>
Mime-version: 1.0
Subject: Re: [Xen-devel] Foreign VCPU register change?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 23/08/2012 18:25, "Cutter 409" <cutter409@gmail.com> wrote:

> With Xen-4.1.2:
> 
> I'm trying to change a register value in a paused vmx vcpu. The general
> process looks like this:
> 
> 1. Some vmexit calls vcpu_sleep_nosync(v) on the vcpu
> 2. From dom0, I issue a domctl to change a register via
> v->arch.guest_context.user_reg, then vcpu_wake(v)

Which domctl? From dom0 userspace you can use the libxc functions
xc_vcpu_getcontext() and xc_vcpu_setcontext() to read/modify register state.

You can read the libxc sources to see what hypercall these map to, if you
don't want to use libxc for any reason.

 -- Keir

> However, the guest register does not seem to be changed when I do it this way.
> Is there something I need to do to mark the registers as "dirty" ? Is there a
> way to force the foreign vcpu to update the changed registers? Or maybe I just
> have to change the registers somewhere else?
> 
> I've tried directly using vmcs_enter(v), __vmwrite(), vmcs_exit(v) also, but
> that doesn't seem to make a change either.
> 
> Thanks!
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 17:37:39 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 17:37:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4bLT-00017M-KX; Thu, 23 Aug 2012 17:37:27 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <cutter409@gmail.com>) id 1T4bLS-00017D-7T
	for xen-devel@lists.xensource.com; Thu, 23 Aug 2012 17:37:26 +0000
Received: from [85.158.143.99:57682] by server-2.bemta-4.messagelabs.com id
	70/B4-21239-55A66305; Thu, 23 Aug 2012 17:37:25 +0000
X-Env-Sender: cutter409@gmail.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1345743443!27043810!1
X-Originating-IP: [209.85.212.43]
X-SpamReason: No, hits=0.7 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	MAILTO_TO_SPAM_ADDR,ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31743 invoked from network); 23 Aug 2012 17:37:24 -0000
Received: from mail-vb0-f43.google.com (HELO mail-vb0-f43.google.com)
	(209.85.212.43)
	by server-3.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 17:37:24 -0000
Received: by vbbfq11 with SMTP id fq11so1411586vbb.30
	for <xen-devel@lists.xensource.com>;
	Thu, 23 Aug 2012 10:37:23 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:content-type; bh=IrivKtymUTXtx4DZ5CSx58/o5WbcJUv6+9087n47rzw=;
	b=BkonRmPJYb6RIC9P02GqASqTb0hClBk206qT8c24naUh2fIyllTnrhqr8SubieixUo
	maA+Q/QsFVyPpWxxD+aCYBAYQBnMGxNOGGCDyHYLCmXyXi2sLK8SUyx///m+QNC0F8Nx
	8tKOeZuYf6gMlwzoUwUaRilYW5/rZ1aaWdW/kmVCdif9zFMlamB8IE9Abty2DEKu+0k1
	ixXRfQIvwmoVFilOzaFEki9woSVVn3TJ0Z4UAfr+XHHMpHzb/qMvvAYcIZolZullEoeX
	ILUSWLRYXNyLJO8hxCL6xlXx72/kO08V5GfP6030bw1lG7Wl3DQoB+3zcJXw5jcUocXg
	04jQ==
MIME-Version: 1.0
Received: by 10.52.100.67 with SMTP id ew3mr1731432vdb.36.1345743442989; Thu,
	23 Aug 2012 10:37:22 -0700 (PDT)
Received: by 10.220.210.196 with HTTP; Thu, 23 Aug 2012 10:37:22 -0700 (PDT)
In-Reply-To: <CC5C2846.3CC01%keir.xen@gmail.com>
References: <CAG4Ohu8r-RHaWVv1fzksAQZH5NKnufYFYukRZkrD-=0EULTSHQ@mail.gmail.com>
	<CC5C2846.3CC01%keir.xen@gmail.com>
Date: Thu, 23 Aug 2012 13:37:22 -0400
Message-ID: <CAG4Ohu-rRT4hMn6TsMv_vnVUC1aw5gL-hjHZZpSgaH3Y04WXVg@mail.gmail.com>
From: Cutter 409 <cutter409@gmail.com>
To: xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] Foreign VCPU register change?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7926050514961483293=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7926050514961483293==
Content-Type: multipart/alternative; boundary=20cf307f349ae0d36804c7f24fc6

--20cf307f349ae0d36804c7f24fc6
Content-Type: text/plain; charset=ISO-8859-1

I'm making the register change directly from the hypervisor, inside of the
domctl code.

It's a custom domctl that I've added. I'll look into what setcontext does
after it modifies the register values, though.

Thank you!

On Thu, Aug 23, 2012 at 1:34 PM, Keir Fraser <keir.xen@gmail.com> wrote:

> On 23/08/2012 18:25, "Cutter 409" <cutter409@gmail.com> wrote:
>
> > With Xen-4.1.2:
> >
> > I'm trying to change a register value in a paused vmx vcpu. The general
> > process looks like this:
> >
> > 1. Some vmexit calls vcpu_sleep_nosync(v) on the vcpu
> > 2. From dom0, I issue a domctl to change a register via
> > v->arch.guest_context.user_reg, then vcpu_wake(v)
>
> Which domctl? From dom0 userspace you can use the libxc functions
> xc_vcpu_getcontext() and xc_vcpu_setcontext() to read/modify register
> state.
>
> You can read the libxc sources to see what hypercall these map to, if you
> don't want to use libxc for any reason.
>
>  -- Keir
>
> > However, the guest register does not seem to be changed when I do it
> this way.
> > Is there something I need to do to mark the registers as "dirty" ? Is
> there a
> > way to force the foreign vcpu to update the changed registers? Or maybe
> I just
> > have to change the registers somewhere else?
> >
> > I've tried directly using vmcs_enter(v), __vmwrite(), vmcs_exit(v) also,
> but
> > that doesn't seem to make a change either.
> >
> > Thanks!
> >
> >
> >
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
>
>
>
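
[Archive editor's note: the libxc route Keir suggests can be sketched as below. This is an untested illustration against the Xen 4.1-era libxenctrl API, not a drop-in implementation; the function name set_vcpu_rip and the x64 register layout are assumptions, and it only compiles inside a Xen build tree.]

```c
/* Sketch only: requires xenctrl.h from a Xen 4.1 build tree.
 * Pauses the domain, rewrites one vcpu register, writes it back. */
#include <xenctrl.h>

int set_vcpu_rip(uint32_t domid, uint32_t vcpu, uint64_t new_rip)
{
    vcpu_guest_context_any_t ctxt;
    xc_interface *xch = xc_interface_open(NULL, NULL, 0);
    if (!xch)
        return -1;

    /* Pausing the domain ensures the vcpu is de-scheduled and its
     * register state is synced back before we read it. */
    xc_domain_pause(xch, domid);

    if (xc_vcpu_getcontext(xch, domid, vcpu, &ctxt) == 0) {
        ctxt.x64.user_regs.rip = new_rip;   /* x64 layout assumed */
        xc_vcpu_setcontext(xch, domid, vcpu, &ctxt);
    }

    xc_domain_unpause(xch, domid);
    xc_interface_close(xch);
    return 0;
}
```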

--20cf307f349ae0d36804c7f24fc6--


--===============7926050514961483293==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7926050514961483293==--


From xen-devel-bounces@lists.xen.org Thu Aug 23 17:50:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 17:50:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4bXX-0001Kd-TE; Thu, 23 Aug 2012 17:49:55 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1T4bXW-0001KY-8G
	for xen-devel@lists.xensource.com; Thu, 23 Aug 2012 17:49:54 +0000
Received: from [85.158.143.99:39772] by server-1.bemta-4.messagelabs.com id
	48/07-12504-14D66305; Thu, 23 Aug 2012 17:49:53 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-11.tower-216.messagelabs.com!1345744192!20116763!1
X-Originating-IP: [209.85.215.171]
X-SpamReason: No, hits=0.9 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	MAILTO_TO_SPAM_ADDR, MIME_QP_LONG_LINE, ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP, spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22657 invoked from network); 23 Aug 2012 17:49:52 -0000
Received: from mail-ey0-f171.google.com (HELO mail-ey0-f171.google.com)
	(209.85.215.171)
	by server-11.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 17:49:52 -0000
Received: by eaah11 with SMTP id h11so319320eaa.30
	for <xen-devel@lists.xensource.com>;
	Thu, 23 Aug 2012 10:49:52 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type;
	bh=Sgf9yn1A6bIsTmCpHIfHyt9LKh/6KRsDgw5bTBL1UMM=;
	b=mz9OySUJMKvf4RMdli8TPJvOK3MYJWKbWpAQpzlP1S5UugwO+ASeu2ROIPCUlB+lyv
	KAPqDy7NtmVWhgTy3vlRBVcXrQFkvvb7U5lIodputGJA3jb+AkBg7z1qnKU9FzdmatLs
	DD+Bwbf+BzHzCZTbV6s+4+xfYAyn/f2dZjzL2ESXwzhshdSBENRBezafQMXedrpPe0Do
	UJfTH8RAcS5BQL7GmoyEPFsnrbPJrUJDhphKSIvcuQVPzJ51JSUQbtF2k0vdlQ+JMuBb
	amRJk7KzmLNAAtiUgb79g3HOJqthq31R1O1PAWmFhPJhRlV/UfRJ4T3MAaBlLUzGBgkq
	WESw==
Received: by 10.14.204.200 with SMTP id h48mr3205816eeo.7.1345744191815;
	Thu, 23 Aug 2012 10:49:51 -0700 (PDT)
Received: from [192.168.1.68] (host86-184-74-117.range86-184.btcentralplus.com.
	[86.184.74.117])
	by mx.google.com with ESMTPS id a7sm23302361eep.14.2012.08.23.10.49.49
	(version=SSLv3 cipher=OTHER); Thu, 23 Aug 2012 10:49:50 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.32.0.111121
Date: Thu, 23 Aug 2012 18:49:46 +0100
From: Keir Fraser <keir.xen@gmail.com>
To: Cutter 409 <cutter409@gmail.com>,
	<xen-devel@lists.xensource.com>
Message-ID: <CC5C2BCA.3CC08%keir.xen@gmail.com>
Thread-Topic: [Xen-devel] Foreign VCPU register change?
Thread-Index: Ac2BV67lXNDtl2DXJEGdA4wycU70iA==
In-Reply-To: <CAG4Ohu-rRT4hMn6TsMv_vnVUC1aw5gL-hjHZZpSgaH3Y04WXVg@mail.gmail.com>
Mime-version: 1.0
Subject: Re: [Xen-devel] Foreign VCPU register change?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0629132727862884477=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> This message is in MIME format. Since your mail reader does not understand
this format, some or all of this message may not be legible.

--===============0629132727862884477==
Content-type: multipart/alternative;
	boundary="B_3428592590_124082462"

> This message is in MIME format. Since your mail reader does not understand
this format, some or all of this message may not be legible.

--B_3428592590_124082462
Content-type: text/plain;
	charset="ISO-8859-1"
Content-transfer-encoding: quoted-printable

FWIW I would expect your approach to basically work.

Except... Does your domctl do a vcpu_pause()/vcpu_unpause() on the vcpu?
This will ensure that the vcpu is both fully de-scheduled, and all of its
register state is synced back into its vcpu structure.

Otherwise you race the vcpu_sleep_nosync() -- and that's assuming you also
have a reason for that vcpu to sleep (e.g., non-zero pause counter), else
vcpu_sleep_*() operations do nothing!

In short, your problems are almost certainly something to do with the
subtleties of actually putting a vcpu properly to sleep.

 -- Keir
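

[Archive editor's note: inside a custom domctl handler, the pause/modify/unpause pattern Keir describes would look roughly like this. A sketch against Xen 4.1 internals, not compilable standalone; the domctl name XEN_DOMCTL_my_setreg and its op->u.my_setreg payload are hypothetical.]

```c
/* Sketch only: Xen 4.1 internal APIs, xen/common/domctl.c style. */
case XEN_DOMCTL_my_setreg:   /* hypothetical custom domctl */
{
    struct vcpu *v = d->vcpu[op->u.my_setreg.vcpu];

    vcpu_pause(v);   /* fully de-schedules v and syncs its register
                      * state back into the vcpu structure */

    v->arch.guest_context.user_regs.eax = op->u.my_setreg.value;

    vcpu_unpause(v); /* lets the scheduler run v with the new state */
    break;
}
```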


On 23/08/2012 18:37, "Cutter 409" <cutter409@gmail.com> wrote:

> I'm making the register change directly from the hypervisor, inside of the
> domctl code.
> 
> It's a custom domctl that I've added. I'll look into what setcontext does
> after it modifies the register values, though.
> 
> Thank you!
> 
> On Thu, Aug 23, 2012 at 1:34 PM, Keir Fraser <keir.xen@gmail.com> wrote:
>> On 23/08/2012 18:25, "Cutter 409" <cutter409@gmail.com> wrote:
>> 
>>> > With Xen-4.1.2:
>>> >
>>> > I'm trying to change a register value in a paused vmx vcpu. The general
>>> > process looks like this:
>>> >
>>> > 1. Some vmexit calls vcpu_sleep_nosync(v) on the vcpu
>>> > 2. From dom0, I issue a domctl to change a register via
>>> > v->arch.guest_context.user_reg, then vcpu_wake(v)
>> 
>> Which domctl? From dom0 userspace you can use the libxc functions
>> xc_vcpu_getcontext() and xc_vcpu_setcontext() to read/modify register state.
>> 
>> You can read the libxc sources to see what hypercall these map to, if you
>> don't want to use libxc for any reason.
>> 
>>  -- Keir
>> 
>>> > However, the guest register does not seem to be changed when I do it this
>>> way.
>>> > Is there something I need to do to mark the registers as "dirty" ? Is
>>> there a
>>> > way to force the foreign vcpu to update the changed registers? Or maybe I
>>> just
>>> > have to change the registers somewhere else?
>>> >
>>> > I've tried directly using vmcs_enter(v), __vmwrite(), vmcs_exit(v) also,
>>> but
>>> > that doesn't seem to make a change either.
>>> >
>>> > Thanks!
>>> >
>>> >
>>> >
>>> > _______________________________________________
>>> > Xen-devel mailing list
>>> > Xen-devel@lists.xen.org
>>> > http://lists.xen.org/xen-devel
>> 
>> 
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


--B_3428592590_124082462--




--===============0629132727862884477==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0629132727862884477==--




From xen-devel-bounces@lists.xen.org Thu Aug 23 17:50:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 17:50:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4bXX-0001Kd-TE; Thu, 23 Aug 2012 17:49:55 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1T4bXW-0001KY-8G
	for xen-devel@lists.xensource.com; Thu, 23 Aug 2012 17:49:54 +0000
Received: from [85.158.143.99:39772] by server-1.bemta-4.messagelabs.com id
	48/07-12504-14D66305; Thu, 23 Aug 2012 17:49:53 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-11.tower-216.messagelabs.com!1345744192!20116763!1
X-Originating-IP: [209.85.215.171]
X-SpamReason: No, hits=0.9 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	MAILTO_TO_SPAM_ADDR, MIME_QP_LONG_LINE, ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP, spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22657 invoked from network); 23 Aug 2012 17:49:52 -0000
Received: from mail-ey0-f171.google.com (HELO mail-ey0-f171.google.com)
	(209.85.215.171)
	by server-11.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 17:49:52 -0000
Received: by eaah11 with SMTP id h11so319320eaa.30
	for <xen-devel@lists.xensource.com>;
	Thu, 23 Aug 2012 10:49:52 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type;
	bh=Sgf9yn1A6bIsTmCpHIfHyt9LKh/6KRsDgw5bTBL1UMM=;
	b=mz9OySUJMKvf4RMdli8TPJvOK3MYJWKbWpAQpzlP1S5UugwO+ASeu2ROIPCUlB+lyv
	KAPqDy7NtmVWhgTy3vlRBVcXrQFkvvb7U5lIodputGJA3jb+AkBg7z1qnKU9FzdmatLs
	DD+Bwbf+BzHzCZTbV6s+4+xfYAyn/f2dZjzL2ESXwzhshdSBENRBezafQMXedrpPe0Do
	UJfTH8RAcS5BQL7GmoyEPFsnrbPJrUJDhphKSIvcuQVPzJ51JSUQbtF2k0vdlQ+JMuBb
	amRJk7KzmLNAAtiUgb79g3HOJqthq31R1O1PAWmFhPJhRlV/UfRJ4T3MAaBlLUzGBgkq
	WESw==
Received: by 10.14.204.200 with SMTP id h48mr3205816eeo.7.1345744191815;
	Thu, 23 Aug 2012 10:49:51 -0700 (PDT)
Received: from [192.168.1.68] (host86-184-74-117.range86-184.btcentralplus.com.
	[86.184.74.117])
	by mx.google.com with ESMTPS id a7sm23302361eep.14.2012.08.23.10.49.49
	(version=SSLv3 cipher=OTHER); Thu, 23 Aug 2012 10:49:50 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.32.0.111121
Date: Thu, 23 Aug 2012 18:49:46 +0100
From: Keir Fraser <keir.xen@gmail.com>
To: Cutter 409 <cutter409@gmail.com>,
	<xen-devel@lists.xensource.com>
Message-ID: <CC5C2BCA.3CC08%keir.xen@gmail.com>
Thread-Topic: [Xen-devel] Foreign VCPU register change?
Thread-Index: Ac2BV67lXNDtl2DXJEGdA4wycU70iA==
In-Reply-To: <CAG4Ohu-rRT4hMn6TsMv_vnVUC1aw5gL-hjHZZpSgaH3Y04WXVg@mail.gmail.com>
Mime-version: 1.0
Subject: Re: [Xen-devel] Foreign VCPU register change?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0629132727862884477=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> This message is in MIME format. Since your mail reader does not understand
this format, some or all of this message may not be legible.

--===============0629132727862884477==
Content-type: multipart/alternative;
	boundary="B_3428592590_124082462"

> This message is in MIME format. Since your mail reader does not understand
this format, some or all of this message may not be legible.

--B_3428592590_124082462
Content-type: text/plain;
	charset="ISO-8859-1"
Content-transfer-encoding: quoted-printable

FWIW I would expect your approach to basically work.

Except... Does your domctl do a vcpu_pause()/vcpu_unpause() on the vcpu?
This will ensure that the vcpu is both fully de-scheduled, and all of its
register state is synced back into its vcpu structure.

Otherwise you race the vcpu_sleep_nosync() -- and that=B9s assuming you also
have a reason for that vcpu to sleep (e.g., non-zero pause counter), else
vcpu_sleep_*() operations do nothing!

In short, your problems are almost certainly something to do with the
subtleties of actually putting a vcpu properly to sleep.

 -- Keir


On 23/08/2012 18:37, "Cutter 409" <cutter409@gmail.com> wrote:

> I'm making the register change directly from the hypervisor, inside of th=
e
> domctl code.=20
>=20
> It's a custom domctl that I've added. I'll look into what setcontext does
> after it modifies the register values, though.
>=20
> Thank you!
>=20
> On Thu, Aug 23, 2012 at 1:34 PM, Keir Fraser <keir.xen@gmail.com> wrote:
>> On 23/08/2012 18:25, "Cutter 409" <cutter409@gmail.com> wrote:
>>=20
>>> > With Xen-4.1.2:
>>> >
>>> > I'm trying to change a register value in a paused vmx vcpu. The gener=
al
>>> > process looks like this:
>>> >
>>> > 1. Some vmexit calls vcpu_sleep_nosync(v) on the vcpu
>>> > 2. From dom0, I issue a domctl to change a register via
>>> > v->arch.guest_context.user_reg, then vcpu_wake(v)
>>=20
>> Which domctl? From dom0 userspace you can use the libxc functions
>> xc_vcpu_getcontext() and xc_vcpu_setcontext() to read/modify register st=
ate.
>>=20
>> You can read the libxc sources to see what hypercall these map to, if yo=
u
>> don't want to use libxc for any reason.
>>=20
>> =A0-- Keir
>>=20
>>> > However, the guest register does not seem to be changed when I do it =
this
>>> way.
>>> > Is there something I need to do to mark the registers as "dirty" ? Is
>>> there a
>>> > way to force the foreign vcpu to update the changed registers? Or may=
be I
>>> just
>>> > have to change the registers somewhere else?
>>> >
>>> > I've tried directly using vmcs_enter(v), __vmwrite(), vmcs_exit(v) al=
so,
>>> but
>>> > that doesn't seem to make a change either.
>>> >
>>> > Thanks!
>>> >
>>> >
>>> >
>>> > _______________________________________________
>>> > Xen-devel mailing list
>>> > Xen-devel@lists.xen.org
>>> > http://lists.xen.org/xen-devel
>>=20
>>=20
>=20
>=20
>=20
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


--B_3428592590_124082462--




--===============0629132727862884477==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0629132727862884477==--




From xen-devel-bounces@lists.xen.org Thu Aug 23 17:55:26 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 17:55:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4bcX-0001Vk-SD; Thu, 23 Aug 2012 17:55:05 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1T4bcW-0001VZ-Mu
	for xen-devel@lists.xensource.com; Thu, 23 Aug 2012 17:55:05 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1345744498!8405412!1
X-Originating-IP: [74.125.83.43]
X-SpamReason: No, hits=0.9 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	MAILTO_TO_SPAM_ADDR, MIME_QP_LONG_LINE, ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP, spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24357 invoked from network); 23 Aug 2012 17:54:58 -0000
Received: from mail-ee0-f43.google.com (HELO mail-ee0-f43.google.com)
	(74.125.83.43)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 17:54:58 -0000
Received: by eekd4 with SMTP id d4so413150eek.30
	for <xen-devel@lists.xensource.com>;
	Thu, 23 Aug 2012 10:54:57 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type;
	bh=0DP1ToI2ayF6iFua3/iFr7VeOvD6TlfYQEWnJIfpfuc=;
	b=udz6y/Scqxu7W/VMl62dJi/VNpIkJ0oBknb5iImPJceFxDSR9YmtximebrAMfd/UA6
	I7lPSI5+WfY/InQALPq2LkhIylLoeZ8bR8SUMf/Y2fjI6LtFtzM+WyFAt/MzJ8I0XMMh
	9o5ilJ8/EhrKa8j8WPoch31hqJ3IlWxgLs0wHHnnAEpP++1nYP773puNM0P5Sd8rqQ6c
	oNW5nMoIKL5ZKGg+8eEOjDvlj0rU+3g3cBXaqEX1sXxKh8D9ibeV/4KI5UQGY7ewbhZ5
	Eus4TQhEXGAjTyv6wOmnq4tyyzk3Ld92ltj7SOyzvDdJFJ/TijlKQPDcnIe/EfkOzEbn
	eCdg==
Received: by 10.14.0.198 with SMTP id 46mr3130902eeb.30.1345744497892;
	Thu, 23 Aug 2012 10:54:57 -0700 (PDT)
Received: from [192.168.1.68] (host86-184-74-117.range86-184.btcentralplus.com.
	[86.184.74.117])
	by mx.google.com with ESMTPS id z3sm23329126eel.15.2012.08.23.10.54.56
	(version=SSLv3 cipher=OTHER); Thu, 23 Aug 2012 10:54:57 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.32.0.111121
Date: Thu, 23 Aug 2012 18:54:54 +0100
From: Keir Fraser <keir.xen@gmail.com>
To: Cutter 409 <cutter409@gmail.com>,
	<xen-devel@lists.xensource.com>
Message-ID: <CC5C2CFE.3CC0D%keir.xen@gmail.com>
Thread-Topic: [Xen-devel] Foreign VCPU register change?
Thread-Index: Ac2BV67lXNDtl2DXJEGdA4wycU70iAAALeWi
In-Reply-To: <CC5C2BCA.3CC08%keir.xen@gmail.com>
Mime-version: 1.0
Subject: Re: [Xen-devel] Foreign VCPU register change?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8018410164680552582=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> This message is in MIME format. Since your mail reader does not understand
this format, some or all of this message may not be legible.

--===============8018410164680552582==
Content-type: multipart/alternative;
	boundary="B_3428592896_124067349"

> This message is in MIME format. Since your mail reader does not understand
this format, some or all of this message may not be legible.

--B_3428592896_124067349
Content-type: text/plain;
	charset="ISO-8859-1"
Content-transfer-encoding: quoted-printable

So, for example, one possibly-valid scheme would be:

 - vcpu_pause_nosync() from the vmexit handler

 - vcpu_sleep_sync() at the start of the domctl
 - vcpu_unpause() at the end of the domctl

HTH,
 Keir
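
[The pause/sleep ordering outlined above can be modelled in a few lines of
standalone C. The struct fields and function bodies below are illustrative
stand-ins, not Xen's real implementation; only the function names mirror the
hypervisor's, and the point is the ordering constraint, not the code itself.]

```c
/* Sketch of the pause/sleep ordering described above, modelled in plain C.
 * The function names mirror Xen's (vcpu_pause_nosync() etc.), but the bodies
 * are simplified stand-ins so this compiles and runs without a hypervisor. */
#include <assert.h>
#include <stdbool.h>

struct vcpu {
    int  pause_count;   /* non-zero => vcpu has a reason to stay asleep */
    bool running;       /* true while the pcpu still holds its state    */
    bool state_synced;  /* register state written back into the struct  */
};

/* vmexit handler: raise the pause count but don't wait (nosync). */
static void vcpu_pause_nosync(struct vcpu *v)
{
    v->pause_count++;
}

/* Without a non-zero pause count, vcpu_sleep_*() does nothing at all. */
static void vcpu_sleep_sync(struct vcpu *v)
{
    if (v->pause_count == 0)
        return;                 /* no reason to sleep: silent no-op! */
    v->running = false;         /* fully de-schedule the vcpu ...    */
    v->state_synced = true;     /* ... and sync its state back       */
}

static void vcpu_unpause(struct vcpu *v)
{
    if (--v->pause_count == 0) {
        v->state_synced = false;
        v->running = true;      /* eligible to run again */
    }
}

/* A domctl may only touch register state between sleep_sync and unpause. */
static bool safe_to_modify_registers(const struct vcpu *v)
{
    return !v->running && v->state_synced && v->pause_count > 0;
}
```

[Note how the second half of the model captures the failure mode from the
original thread: calling vcpu_sleep_sync() with a zero pause count is a no-op,
so register writes made afterwards race the still-running vcpu.]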


On 23/08/2012 18:49, "Keir Fraser" <keir.xen@gmail.com> wrote:

> FWIW I would expect your approach to basically work.
>
> Except... Does your domctl do a vcpu_pause()/vcpu_unpause() on the vcpu? This
> will ensure that the vcpu is both fully de-scheduled, and all of its register
> state is synced back into its vcpu structure.
>
> Otherwise you race the vcpu_sleep_nosync() -- and that's assuming you also
> have a reason for that vcpu to sleep (e.g., non-zero pause counter), else
> vcpu_sleep_*() operations do nothing!
>
> In short, your problems are almost certainly something to do with the
> subtleties of actually putting a vcpu properly to sleep.
>
>  -- Keir
>
>
> On 23/08/2012 18:37, "Cutter 409" <cutter409@gmail.com> wrote:
>
>> I'm making the register change directly from the hypervisor, inside of the
>> domctl code.
>>
>> It's a custom domctl that I've added. I'll look into what setcontext does
>> after it modifies the register values, though.
>>
>> Thank you!
>>
>> On Thu, Aug 23, 2012 at 1:34 PM, Keir Fraser <keir.xen@gmail.com> wrote:
>>> On 23/08/2012 18:25, "Cutter 409" <cutter409@gmail.com> wrote:
>>>
>>>> > With Xen-4.1.2:
>>>> >
>>>> > I'm trying to change a register value in a paused vmx vcpu. The general
>>>> > process looks like this:
>>>> >
>>>> > 1. Some vmexit calls vcpu_sleep_nosync(v) on the vcpu
>>>> > 2. From dom0, I issue a domctl to change a register via
>>>> > v->arch.guest_context.user_reg, then vcpu_wake(v)
>>>
>>> Which domctl? From dom0 userspace you can use the libxc functions
>>> xc_vcpu_getcontext() and xc_vcpu_setcontext() to read/modify register state.
>>>
>>> You can read the libxc sources to see what hypercall these map to, if you
>>> don't want to use libxc for any reason.
>>>
>>>  -- Keir
>>>
>>>> > However, the guest register does not seem to be changed when I do it this
>>>> way.
>>>> > Is there something I need to do to mark the registers as "dirty" ? Is
>>>> there a
>>>> > way to force the foreign vcpu to update the changed registers? Or maybe I
>>>> just
>>>> > have to change the registers somewhere else?
>>>> >
>>>> > I've tried directly using vmcs_enter(v), __vmwrite(), vmcs_exit(v) also,
>>>> but
>>>> > that doesn't seem to make a change either.
>>>> >
>>>> > Thanks!
>>>> >
>>>> >
>>>> >
>>>> > _______________________________________________
>>>> > Xen-devel mailing list
>>>> > Xen-devel@lists.xen.org
>>>> > http://lists.xen.org/xen-devel
>>>
>>>
>>
>>
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel
>

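[The xc_vcpu_getcontext()/xc_vcpu_setcontext() read-modify-write flow suggested
in the quoted reply can be sketched as below. The xc_* functions here are local
stubs standing in for the real libxenctrl calls so the sketch compiles and runs
without Xen; the real functions additionally take an xc_interface handle and
operate on vcpu_guest_context_any_t, and the register layout shown is a
simplified stand-in.]

```c
/* Sketch of the libxc read-modify-write pattern for vcpu register state.
 * The xc_vcpu_{get,set}context stubs below mimic only the *shape* of the
 * real libxenctrl functions (0 on success, context copied in/out). */
#include <stdint.h>
#include <string.h>

/* Minimal stand-in for the register block in vcpu_guest_context_any_t. */
typedef struct { uint64_t rip, rax; } fake_user_regs_t;
typedef struct { fake_user_regs_t user_regs; } fake_vcpu_context_t;

/* Pretend hypervisor-side state, so the stubs have something to copy. */
static fake_vcpu_context_t fake_hypervisor_state = { { 0x1000, 7 } };

static int xc_vcpu_getcontext(uint32_t domid, uint32_t vcpu,
                              fake_vcpu_context_t *ctxt)
{
    (void)domid; (void)vcpu;
    memcpy(ctxt, &fake_hypervisor_state, sizeof(*ctxt));
    return 0;
}

static int xc_vcpu_setcontext(uint32_t domid, uint32_t vcpu,
                              const fake_vcpu_context_t *ctxt)
{
    (void)domid; (void)vcpu;
    memcpy(&fake_hypervisor_state, ctxt, sizeof(*ctxt));
    return 0;
}

/* The pattern itself: fetch the full context, patch one register, write
 * the whole context back. Never write a partially-initialised context. */
static int set_vcpu_rax(uint32_t domid, uint32_t vcpu, uint64_t rax)
{
    fake_vcpu_context_t ctxt;
    if (xc_vcpu_getcontext(domid, vcpu, &ctxt))
        return -1;
    ctxt.user_regs.rax = rax;
    return xc_vcpu_setcontext(domid, vcpu, &ctxt);
}
```

[With the real libxc, the setcontext hypercall path also handles marking the
context dirty on the hypervisor side, which is exactly the step the original
poster's hand-rolled domctl was missing.]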

--B_3428592896_124067349--




--===============8018410164680552582==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8018410164680552582==--




From xen-devel-bounces@lists.xen.org Thu Aug 23 18:00:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 18:00:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4bhh-0001iy-O7; Thu, 23 Aug 2012 18:00:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T4bhh-0001it-50
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 18:00:25 +0000
Received: from [85.158.143.35:8359] by server-3.bemta-4.messagelabs.com id
	6B/45-08232-8BF66305; Thu, 23 Aug 2012 18:00:24 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1345744821!5503511!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA0MjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29219 invoked from network); 23 Aug 2012 18:00:21 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 18:00:21 -0000
X-IronPort-AV: E=Sophos;i="4.80,301,1344211200"; d="scan'208";a="14154415"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	23 Aug 2012 18:00:21 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 23 Aug 2012 19:00:21 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1T4bhc-0007QP-OE; Thu, 23 Aug 2012 18:00:20 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1T4bhc-00005d-KE;
	Thu, 23 Aug 2012 19:00:20 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20534.28596.520801.265300@mariner.uk.xensource.com>
Date: Thu, 23 Aug 2012 19:00:20 +0100
To: Ian Campbell <ian.campbell@citrix.com>
In-Reply-To: <7cec0543f67cefe3755b.1345046336@cosworth.uk.xensource.com>
References: <7cec0543f67cefe3755b.1345046336@cosworth.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] libxl: make domain resume API asynchronous
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("[PATCH] libxl: make domain resume API asynchronous"):
> libxl: make domain resume API asynchronous

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 18:04:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 18:04:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4bl2-0001tO-Ey; Thu, 23 Aug 2012 18:03:52 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ben.guthro@gmail.com>) id 1T4bl1-0001tI-2X
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 18:03:51 +0000
Received: from [85.158.143.99:24286] by server-1.bemta-4.messagelabs.com id
	76/BF-12504-68076305; Thu, 23 Aug 2012 18:03:50 +0000
X-Env-Sender: ben.guthro@gmail.com
X-Msg-Ref: server-8.tower-216.messagelabs.com!1345745028!17876586!1
X-Originating-IP: [209.85.214.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6010 invoked from network); 23 Aug 2012 18:03:49 -0000
Received: from mail-ob0-f173.google.com (HELO mail-ob0-f173.google.com)
	(209.85.214.173)
	by server-8.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 18:03:49 -0000
Received: by obbta14 with SMTP id ta14so29511obb.32
	for <xen-devel@lists.xen.org>; Thu, 23 Aug 2012 11:03:48 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=Apg91akYMvWraGc1f53IzsUNRUGPnMc1MvVF6B690d0=;
	b=wlli6CLVyo4cvb3FGSGjUqj6WdQHnDdm7F4mO/h3yzI05213R75ioMV0r1YEJ/Rb7r
	n8nG2Q09wwmhDVvG7vRPYtX4Mz8uI2HfTaKIZtUHV+ezJ+7VKMEzhIzOUGDu94VE/SE8
	22yTRiyMp0yxeJoByxLDPX58wXFlsUJgnd3UxpGfy6Z29ghoXvb+RscaQzl3pi2SObZZ
	k1BAchHJ/Jqup+epKXaGK6THcE8KoUDV3ThIhJyNirmJ/DY2CbwBcHhsPrfvEgUfKnz/
	UOl47cbpOrNHJLt2iWzcNLepRw7BkWewZuwtRfZL2aeEPnoUHDH++TLkWBo35y6FMw6X
	ciLg==
MIME-Version: 1.0
Received: by 10.50.187.228 with SMTP id fv4mr2572261igc.37.1345745027755; Thu,
	23 Aug 2012 11:03:47 -0700 (PDT)
Received: by 10.64.28.203 with HTTP; Thu, 23 Aug 2012 11:03:47 -0700 (PDT)
In-Reply-To: <502E3BC00200007800095DAB@nat28.tlf.novell.com>
References: <CAOvdn6WwgBD-2yf1uu3m9-16=Tb5KUrybXf2KibtnxKbP7YtwA@mail.gmail.com>
	<CAOvdn6UBW-yTOMd3YSf5M9JiHLRivyaKL-qzcKG8epszqaKmGg@mail.gmail.com>
	<20120807163320.GC15053@phenom.dumpdata.com>
	<CAOvdn6XcRGKtJxGwJO8J+ZBJr89wfTLc1woW5rhtMM540H9h1w@mail.gmail.com>
	<CAOvdn6XUoSeGoo7X=AJ1kpDxZB9HZEv+TJ4v-=FHcyR4=CHK-w@mail.gmail.com>
	<502240F302000078000937A6@nat28.tlf.novell.com>
	<CAOvdn6WN3kxC8Bb9g6MpfLULu47EGSGCtajPc-SFeJ-8VSUQDQ@mail.gmail.com>
	<CAOvdn6Xk1inm1hU55i0phU2qvSWyJLNoyDdn43biBAY2OewPtw@mail.gmail.com>
	<5023F5400200007800093F4E@nat28.tlf.novell.com>
	<CAOvdn6U-f3C4U8xVPvDyRFkFFLkbfiCXg_n+9zgA9V5u+q5jfw@mail.gmail.com>
	<5023F8B80200007800093F8C@nat28.tlf.novell.com>
	<CAOvdn6XKHfete_+=Hkvt20G7_cJ-4i8+9j9YD30OxzQ16Ru-+g@mail.gmail.com>
	<5024CB720200007800094124@nat28.tlf.novell.com>
	<CAOvdn6VVJdhWVYa-gNVt3euE4AbGi1TijUCUd1wGzv0SCWEUYA@mail.gmail.com>
	<CAOvdn6USnG9Y4MNBhrDZmob0oJ8BgFfKTvFEhcKbXDxNmzfZ-Q@mail.gmail.com>
	<502B75BA0200007800094FD4@nat28.tlf.novell.com>
	<CAOvdn6Ww5oEj6Xg=XT4dWCYWUBuOdPqLz-CqNjz4HmCScE+==g@mail.gmail.com>
	<CAOvdn6Wsjds_dmZ0dwDTYmD-o-OqjBH5-b9mitAuUADE+A4_ag@mail.gmail.com>
	<502BB8FD02000078000951E0@nat28.tlf.novell.com>
	<CAOvdn6VD6bp13F-tvYdkBsj4hkhE7PTGZrapndtdHrDKeZqccg@mail.gmail.com>
	<502CCC0002000078000955C4@nat28.tlf.novell.com>
	<CAOvdn6VEuX7R2_wz74ZvCxMjC-=YUir9kP5sdxEVaKVspRUd_w@mail.gmail.com>
	<502CF0A8020000780009574E@nat28.tlf.novell.com>
	<CAOvdn6VsZvn_1TCWYeuyY5YuTcP8=miK2KwE4583nsj6Qjb_vg@mail.gmail.com>
	<CAOvdn6UZ61363sKb6N_2pCeOBt-RzHK96Vxudmey4NxmUCHoDQ@mail.gmail.com>
	<502E3BC00200007800095DAB@nat28.tlf.novell.com>
Date: Thu, 23 Aug 2012 14:03:47 -0400
X-Google-Sender-Auth: zPwyqCMSL1MYPumjqQfMLLmPqCs
Message-ID: <CAOvdn6VJDqKA=a6Z71jtew5jtTZzOF6Fb4yCSyQQKA62iw_4Dw@mail.gmail.com>
From: Ben Guthro <ben@guthro.net>
To: Jan Beulich <JBeulich@suse.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, john.baboval@citrix.com,
	Thomas Goetz <thomas.goetz@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen4.2 S3 regression?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I did some more bisecting here, and I came up with another changeset
that seems to be problematic, this time regarding IRQs.

After bisecting the problem discussed earlier in this thread to the
changeset below,
http://xenbits.xen.org/hg/xen-unstable.hg/rev/0695a5cdcb42


I worked past that issue by the following hack:

--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -1103,7 +1103,7 @@ void evtchn_destroy_final(struct domain *d)
 void evtchn_move_pirqs(struct vcpu *v)
 {
     struct domain *d = v->domain;
-    const cpumask_t *mask = cpumask_of(v->processor);
+    //const cpumask_t *mask = cpumask_of(v->processor);
     unsigned int port;
     struct evtchn *chn;

@@ -1111,7 +1111,9 @@ void evtchn_move_pirqs(struct vcpu *v)
     for ( port = v->pirq_evtchn_head; port; port = chn->u.pirq.next_port )
     {
         chn = evtchn_from_port(d, port);
+#if 0
         pirq_set_affinity(d, chn->u.pirq.irq, mask);
+#endif
     }
     spin_unlock(&d->event_lock);
 }


This seemed to work for this rather old changeset, but it was not
sufficient to fix it against the 4.1 or unstable trees.

I further bisected, in combination with this hack, and found the
following changeset to also be problematic:

http://xenbits.xen.org/hg/xen-unstable.hg/rev/c2cb776a5365


That is, before this change I could resume reliably (with the hack
above) - and after I could not.
This was surprising to me, as this change also looks rather innocuous.


Naturally, backing out this change seems to be non-trivial against the
tip, since so much around it has changed.




On Fri, Aug 17, 2012 at 6:40 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>>> On 17.08.12 at 12:22, Ben Guthro <ben@guthro.net> wrote:
>> On Thu, Aug 16, 2012 at 7:56 AM, Ben Guthro <ben@guthro.net> wrote:
>>> In order to not flood the list with large logs, I put the logs here:
>>> https://citrix.sharefile.com/d/sfab699024a54df39
>>> I wasn't sure if you wanted a log with, or without the calls to
>>> evtchn_move_pirqs() commented out - so I included both.
>>
>> I received notifications that this got downloaded yesterday by a couple
>> people.
>> Did you have an opportunity to review it?
>
> Yes, I did.
>
>> If so - did you glean any new knowledge?
>
> Unfortunately not. Instead I was surprised that there were no
> IRQ fixup related messages at all, of which I still will need to
> make sense.
>
> In any case, I'm planning on putting together a debugging patch,
> but can't immediately tell when this will be.
>
> Jan
>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 18:14:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 18:14:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4bua-00025Z-Jp; Thu, 23 Aug 2012 18:13:44 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T4buY-00025U-FS
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 18:13:42 +0000
Received: from [85.158.139.83:8822] by server-3.bemta-5.messagelabs.com id
	11/91-27237-5D276305; Thu, 23 Aug 2012 18:13:41 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-4.tower-182.messagelabs.com!1345745620!24840944!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA0MjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23601 invoked from network); 23 Aug 2012 18:13:41 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-4.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 18:13:41 -0000
X-IronPort-AV: E=Sophos;i="4.80,301,1344211200"; d="scan'208";a="14154553"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	23 Aug 2012 18:13:40 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 23 Aug 2012 19:13:40 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1T4buW-0007Vr-3n; Thu, 23 Aug 2012 18:13:40 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1T4buW-0000EM-0X;
	Thu, 23 Aug 2012 19:13:40 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20534.29395.737226.630673@mariner.uk.xensource.com>
Date: Thu, 23 Aug 2012 19:13:39 +0100
To: Ian Campbell <ian.campbell@citrix.com>
In-Reply-To: <add1552b607e014717f9.1345046614@cosworth.uk.xensource.com>
References: <add1552b607e014717f9.1345046614@cosworth.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: "waldi@debian.org" <waldi@debian.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] xl: make "xl list -l" proper JSON
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("[PATCH] xl: make "xl list -l" proper JSON"):
> xl: make "xl list -l" proper JSON

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>

Although ...

> +    yajl_gen_status s;
...
> +    if (default_output_format == OUTPUT_FORMAT_JSON) {
...
> +        s = yajl_gen_array_open(hand);
> +        if (s != yajl_gen_status_ok)
> +            goto out;
> +    } else
> +        s = yajl_gen_status_ok;

... I found that bit rather confusing (the handling of s) and had to
go and double-check the context.  It looks right though.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 18:15:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 18:15:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4bwH-0002Ar-4T; Thu, 23 Aug 2012 18:15:29 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T4bwG-0002Al-50
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 18:15:28 +0000
Received: from [85.158.143.99:60099] by server-3.bemta-4.messagelabs.com id
	4D/DF-08232-F3376305; Thu, 23 Aug 2012 18:15:27 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1345745726!20586165!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA0MjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5856 invoked from network); 23 Aug 2012 18:15:26 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-6.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 18:15:26 -0000
X-IronPort-AV: E=Sophos;i="4.80,301,1344211200"; d="scan'208";a="14154561"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	23 Aug 2012 18:15:26 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 23 Aug 2012 19:15:26 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1T4bwE-0007WX-5m; Thu, 23 Aug 2012 18:15:26 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1T4bwE-0000Ea-2T;
	Thu, 23 Aug 2012 19:15:26 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20534.29501.983170.810193@mariner.uk.xensource.com>
Date: Thu, 23 Aug 2012 19:15:25 +0100
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1345048547.5926.245.camel@zakaz.uk.xensource.com>
References: <CAFLBxZaEci0mOcDCgFX9zk=wh3z4Nf1LD5E5Fcy7Y3=ioDAM=g@mail.gmail.com>
	<1345048547.5926.245.camel@zakaz.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: George Dunlap <dunlapg@umich.edu>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [TESTDAY] xl cpupool-create segfaults if given
 invalid configuration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [Xen-devel] [TESTDAY] xl cpupool-create segfaults if given invalid configuration"):
> This stuff is more of an Ian J thing but I wonder if when we hit the
> syntax error that $$ still refers to the last value parsed, which we
> think we are finished with but actually aren't? i.e. we've already
> stashed it in the cfg and the reference in $$ is now "stale".

I will look at this tomorrow, but:

> (aside; I had to find and install the Lenny version of bison to make the
> autogen diff readable. We should bump this to at least Squeeze in 4.3. I
> also trimmed the diff to the generated files to just the relevant
> looking bit -- in particular I trimmed a lot of stuff which appeared to
> be semi-automated modifications touching the generated files, e.g. the
> addition of emacs blocks and removal of page breaks ^L)

Perhaps we should do this now.  I don't think there's any reason to
fear the squeeze version of bison.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


I will look at this tomorrow, but:

> (aside; I had to find and install the Lenny version of bison to make the
> autogen diff readable. We should bump this to at least Squeeze in 4.3. I
> also trimmed the diff to the generated files to just the relevant
> looking bit -- in particular I trimmed a lot of stuff which appeared to
> be semi-automated modifications touching the generated files, e.g. the
> addition of emacs blocks and removal of page breaks ^L)

Perhaps we should do this now.  I don't think there's any reason to
fear the Squeeze version of bison.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 18:17:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 18:17:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4bxs-0002I7-L7; Thu, 23 Aug 2012 18:17:08 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <cutter409@gmail.com>) id 1T4bxr-0002I1-0Z
	for xen-devel@lists.xensource.com; Thu, 23 Aug 2012 18:17:07 +0000
Received: from [85.158.139.83:32647] by server-1.bemta-5.messagelabs.com id
	4B/7D-09980-2A376305; Thu, 23 Aug 2012 18:17:06 +0000
X-Env-Sender: cutter409@gmail.com
X-Msg-Ref: server-3.tower-182.messagelabs.com!1345745823!27552363!1
X-Originating-IP: [209.85.220.171]
X-SpamReason: No, hits=1.6 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	MAILTO_TO_SPAM_ADDR,ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,USERPASS,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13680 invoked from network); 23 Aug 2012 18:17:04 -0000
Received: from mail-vc0-f171.google.com (HELO mail-vc0-f171.google.com)
	(209.85.220.171)
	by server-3.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 18:17:04 -0000
Received: by vcdd16 with SMTP id d16so1514514vcd.30
	for <xen-devel@lists.xensource.com>;
	Thu, 23 Aug 2012 11:17:03 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:content-type; bh=KwdMnbl2exUa8nuuOTjVBODBH8gEwYQ80ghGsWgmpt4=;
	b=wKXVSvaimKpR1KLUOp2YszbH8imAYE6BWuUyPOJsY+6mLlhQKRrJu5gNmJzM3/Bt1Z
	lwmyqlnPimcrE3cbNowuhSAmeTAQb1gYc6ESe9Pmy3AxvcJu1arn7pkjBBJ8bCzVYAEj
	QDc8Gen5M9l+3B7sCkVto5gHKjWzVEzzotMA8Nx8MMi1SONN4NCx8/I4Ox7NAjRTFdA3
	vizVi9ktro0K0qF5E9aL1DqvjjHB6zc9OwBdAWxLZqQyiTyFphENJ2HK5kea0F3YslCv
	C+Z9WfDUYjyosRLBEt4WaZaBoZ93dFt8Abu3dM1d0mb6KraF69iiAE+7Ylh3fUCIxTGh
	PwnA==
MIME-Version: 1.0
Received: by 10.52.69.44 with SMTP id b12mr1767892vdu.77.1345745493303; Thu,
	23 Aug 2012 11:11:33 -0700 (PDT)
Received: by 10.220.210.196 with HTTP; Thu, 23 Aug 2012 11:11:33 -0700 (PDT)
In-Reply-To: <CC5C2CFE.3CC0D%keir.xen@gmail.com>
References: <CC5C2BCA.3CC08%keir.xen@gmail.com>
	<CC5C2CFE.3CC0D%keir.xen@gmail.com>
Date: Thu, 23 Aug 2012 14:11:33 -0400
Message-ID: <CAG4Ohu-4rC7xOxZ3SwnER+GF--kr6xazyfKEkgEYyDKCbk6E2w@mail.gmail.com>
From: Cutter 409 <cutter409@gmail.com>
To: xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] Foreign VCPU register change?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1249411991106978938=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============1249411991106978938==
Content-Type: multipart/alternative; boundary=20cf3071cee61620d704c7f2cac0

--20cf3071cee61620d704c7f2cac0
Content-Type: text/plain; charset=windows-1252
Content-Transfer-Encoding: quoted-printable

Thanks, Keir!

I've spent so much time trying to track down this problem, even before I
realized the registers weren't actually changing. You have no idea how
helpful that was.

Before I tried your example, I just wrapped the code to change the register
in vcpu_pause() and vcpu_unpause(), which worked.

Everything seems fine at the moment; is there any reason I should still
change the vcpu_sleep_nosync() to vcpu_pause_nosync()? It seems to actually
work as is: I'm setting a bit in v->pause_flags before I call it, then
clearing the bit before I wake it. I also tried pause_nosync on vmexit and
unpause after the domctl, but that didn't work.

Thanks again!


On Thu, Aug 23, 2012 at 1:54 PM, Keir Fraser <keir.xen@gmail.com> wrote:

>  So, for example, one possibly-valid scheme would be:
>
>  - vcpu_pause_nosync() from the vmexit handler
>
>  - vcpu_sleep_sync() at the start of the domctl
>  - vcpu_unpause() at the end of the domctl
>
> HTH,
>  Keir
>
>
>
> On 23/08/2012 18:49, "Keir Fraser" <keir.xen@gmail.com> wrote:
>
> FWIW I would expect your approach to basically work.
>
> Except... Does your domctl do a vcpu_pause()/vcpu_unpause() on the vcpu?
> This will ensure that the vcpu is both fully de-scheduled, and all of its
> register state is synced back into its vcpu structure.
>
> Otherwise you race the vcpu_sleep_nosync() -- and that's assuming you also
> have a reason for that vcpu to sleep (e.g., non-zero pause counter), else
> vcpu_sleep_*() operations do nothing!
>
> In short, your problems are almost certainly something to do with the
> subtleties of actually putting a vcpu properly to sleep.
>
>  -- Keir
>
>
> On 23/08/2012 18:37, "Cutter 409" <cutter409@gmail.com> wrote:
>
> I'm making the register change directly from the hypervisor, inside of the
> domctl code.
>
> It's a custom domctl that I've added. I'll look into what setcontext does
> after it modifies the register values, though.
>
> Thank you!
>
> On Thu, Aug 23, 2012 at 1:34 PM, Keir Fraser <keir.xen@gmail.com> wrote:
>
> On 23/08/2012 18:25, "Cutter 409" <cutter409@gmail.com> wrote:
>
> > With Xen-4.1.2:
> >
> > I'm trying to change a register value in a paused vmx vcpu. The general
> > process looks like this:
> >
> > 1. Some vmexit calls vcpu_sleep_nosync(v) on the vcpu
> > 2. From dom0, I issue a domctl to change a register via
> > v->arch.guest_context.user_reg, then vcpu_wake(v)
>
> Which domctl? From dom0 userspace you can use the libxc functions
> xc_vcpu_getcontext() and xc_vcpu_setcontext() to read/modify register
> state.
>
> You can read the libxc sources to see what hypercall these map to, if you
> don't want to use libxc for any reason.
>
>  -- Keir
>
> > However, the guest register does not seem to be changed when I do it
> this way.
> > Is there something I need to do to mark the registers as "dirty" ? Is
> there a
> > way to force the foreign vcpu to update the changed registers? Or maybe
> I just
> > have to change the registers somewhere else?
> >
> > I've tried directly using vmcs_enter(v), __vmwrite(), vmcs_exit(v) also,
> > but
> > that doesn't seem to make a change either.
> >
> > Thanks!
> >
> >
> >
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
>
>
>
>
> ------------------------------
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
>
>
>
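
For reference, the getcontext/setcontext route Keir mentions looks roughly like this from dom0 userspace (a sketch against the Xen 4.x libxc API; `set_vcpu_rax()` and the RAX edit are illustrative only, and this builds only on a host with the Xen development headers):

```c
#include <stdint.h>
#include <xenctrl.h>

/* Modify one guest register via libxc rather than poking
 * v->arch.guest_context directly.  The underlying domctls pause the
 * vcpu, so the register state read and written here is fully synced. */
int set_vcpu_rax(uint32_t domid, uint32_t vcpu, uint64_t rax)
{
    xc_interface *xch = xc_interface_open(NULL, NULL, 0);
    vcpu_guest_context_any_t ctxt;
    int rc;

    if (!xch)
        return -1;

    rc = xc_vcpu_getcontext(xch, domid, vcpu, &ctxt);
    if (rc == 0) {
        ctxt.x64.user_regs.rax = rax;                      /* edit ... */
        rc = xc_vcpu_setcontext(xch, domid, vcpu, &ctxt);  /* ... write back */
    }

    xc_interface_close(xch);
    return rc;
}
```

This avoids hand-rolling the pause/sleep sequencing the thread is wrestling with, since the hypercalls take care of de-scheduling the vcpu before touching its state.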

--20cf3071cee61620d704c7f2cac0--


--===============1249411991106978938==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1249411991106978938==--


From xen-devel-bounces@lists.xen.org Thu Aug 23 18:24:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 18:24:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4c4l-0002Za-S7; Thu, 23 Aug 2012 18:24:15 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1T4c4j-0002Yu-Jm
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 18:24:13 +0000
Received: from [85.158.138.51:34926] by server-4.bemta-3.messagelabs.com id
	BF/D5-04276-C4576305; Thu, 23 Aug 2012 18:24:12 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-9.tower-174.messagelabs.com!1345746251!27768418!1
X-Originating-IP: [81.169.146.161]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MSA9PiA0MTAxMzc=\n,sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MSA9PiA0MTAxMzc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22259 invoked from network); 23 Aug 2012 18:24:11 -0000
Received: from mo-p00-ob.rzone.de (HELO mo-p00-ob.rzone.de) (81.169.146.161)
	by server-9.tower-174.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 23 Aug 2012 18:24:11 -0000
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+zrwiavkK6tmQaLfmztM8TOFIhi0PF4SP
X-RZG-CLASS-ID: mo00
Received: from probook.site
	(dslb-088-065-068-141.pools.arcor-ip.net [88.65.68.141])
	by smtp.strato.de (jored mo13) (RZmta 30.9 DYNA|AUTH)
	with (DHE-RSA-AES256-SHA encrypted) ESMTPA id y0147co7NET9WI
	for <xen-devel@lists.xen.org>; Thu, 23 Aug 2012 20:24:11 +0200 (CEST)
Received: from probook.site (localhost [IPv6:::1])
	by probook.site (Postfix) with ESMTP id BB4771836D
	for <xen-devel@lists.xen.org>; Thu, 23 Aug 2012 20:24:10 +0200 (CEST)
MIME-Version: 1.0
Message-Id: <patchbomb.1345746246@probook.site>
User-Agent: Mercurial-patchbomb/1.7.5
Date: Thu, 23 Aug 2012 20:24:06 +0200
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH 0 of 3] xend pvscsi fixes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


This series fixes the xend vscsi=[] handling.  Up to now, only block devices in
'H:C:T:L, ...' or '/dev/sdX, ...' notation were recognized properly.  If a SCSI
device is not a block device, xend fails to parse the lsscsi -g output.  The
/dev/disk/by-*/* notation also did not work properly, and the fallback sysfs
parser had not been updated for Linux 3.0.

Changes:
xend/pvscsi: fix passing of SCSI control LUNs
xend/pvscsi: fix usage of persistent device names for SCSI devices
xend/pvscsi: update sysfs parser for Linux 3.0

 tools/python/xen/util/vscsi_util.py |   32 +++++++++++++++++++++++++-------
 1 file changed, 25 insertions(+), 7 deletions(-)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 18:24:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 18:24:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4c4l-0002ZI-2m; Thu, 23 Aug 2012 18:24:15 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1T4c4j-0002Ys-AJ
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 18:24:13 +0000
Received: from [85.158.143.99:30446] by server-2.bemta-4.messagelabs.com id
	95/E2-21239-C4576305; Thu, 23 Aug 2012 18:24:12 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-5.tower-216.messagelabs.com!1345746251!27288013!1
X-Originating-IP: [81.169.146.160]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MCA9PiA0MDU2NTE=\n,sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MCA9PiA0MDU2NTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25141 invoked from network); 23 Aug 2012 18:24:12 -0000
Received: from mo-p00-ob.rzone.de (HELO mo-p00-ob.rzone.de) (81.169.146.160)
	by server-5.tower-216.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 23 Aug 2012 18:24:12 -0000
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+zrwiavkK6tmQaLfmztM8TOFIhi0PF4SP
X-RZG-CLASS-ID: mo00
Received: from probook.site
	(dslb-088-065-068-141.pools.arcor-ip.net [88.65.68.141])
	by smtp.strato.de (joses mo91) (RZmta 30.9 DYNA|AUTH)
	with (DHE-RSA-AES256-SHA encrypted) ESMTPA id e02de3o7NGLMFx
	for <xen-devel@lists.xen.org>; Thu, 23 Aug 2012 20:24:11 +0200 (CEST)
Received: from probook.site (localhost [IPv6:::1])
	by probook.site (Postfix) with ESMTP id 0DD6918370
	for <xen-devel@lists.xen.org>; Thu, 23 Aug 2012 20:24:11 +0200 (CEST)
MIME-Version: 1.0
X-Mercurial-Node: 482b9db173f2ceefed06346bec9e6d8cef9704fe
Message-Id: <482b9db173f2ceefed06.1345746249@probook.site>
In-Reply-To: <patchbomb.1345746246@probook.site>
References: <patchbomb.1345746246@probook.site>
User-Agent: Mercurial-patchbomb/1.7.5
Date: Thu, 23 Aug 2012 20:24:09 +0200
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH 3 of 3] xend/pvscsi: update sysfs parser for
	Linux 3.0
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

# HG changeset patch
# User Olaf Hering <olaf@aepfle.de>
# Date 1345743331 -7200
# Node ID 482b9db173f2ceefed06346bec9e6d8cef9704fe
# Parent  2b9992aea26cfebc2dda56d1a97a35dc3a5c8ce8
xend/pvscsi: update sysfs parser for Linux 3.0

The sysfs parser for /sys/bus/scsi/devices understands only the layout
of kernel version 2.6.16. This looks as follows:

/sys/bus/scsi/devices/1:0:0:0/block:sda is a symlink to /sys/block/sda/
/sys/bus/scsi/devices/1:0:0:0/scsi_generic:sg1 is a symlink to /sys/class/scsi_generic/sg1

Both directories contain a 'dev' file with the major:minor information.
This patch updates the regex strings to also match the colon, which makes
the parsing more robust against possible future changes.


In kernel version 3.0 the layout changed:
/sys/bus/scsi/devices/ now contains additional symlinks to directories
such as host1 and target1:0:0. This patch ignores them because they do not
point to the desired SCSI devices; they just clutter the devices array.

The directory layout below '1:0:0:0' changed as well: the 'type:name'
notation was replaced with 'type/name' directories:

/sys/bus/scsi/devices/1:0:0:0/block/sda/
/sys/bus/scsi/devices/1:0:0:0/scsi_generic/sg1/

Both directories contain a 'dev' file with the major:minor information.
This patch adds code that walks the subdirectory to find the 'dev' file,
making sure the entry found really is the kernel device name.


In addition this patch makes sure devname is not None.

Signed-off-by: Olaf Hering <olaf@aepfle.de>

diff -r 2b9992aea26c -r 482b9db173f2 tools/python/xen/util/vscsi_util.py
--- a/tools/python/xen/util/vscsi_util.py
+++ b/tools/python/xen/util/vscsi_util.py
@@ -130,20 +130,36 @@ def _vscsi_get_scsidevices_by_sysfs():
 
     for dirpath, dirnames, files in os.walk(sysfs_mnt + SYSFS_SCSI_PATH):
         for hctl in dirnames:
+            if len(hctl.split(':')) != 4:
+                continue
             paths = os.path.join(dirpath, hctl)
             devname = None
             sg = None
             scsi_id = None
             for f in os.listdir(paths):
                 realpath = os.path.realpath(os.path.join(paths, f))
-                if  re.match('^block', f) or \
-                    re.match('^tape', f) or \
-                    re.match('^scsi_changer', f) or \
-                    re.match('^onstream_tape', f):
+                if  re.match('^block:', f) or \
+                    re.match('^tape:', f) or \
+                    re.match('^scsi_changer:', f) or \
+                    re.match('^onstream_tape:', f):
                     devname = os.path.basename(realpath)
+                elif f == "block" or \
+                     f == "tape" or \
+                     f == "scsi_changer" or \
+                     f == "onstream_tape":
+                    for dir in os.listdir(os.path.join(paths, f)):
+                        if os.path.exists(os.path.join(paths, f, dir, "dev")):
+                            devname = os.path.basename(dir)
 
-                if re.match('^scsi_generic', f):
+                if re.match('^scsi_generic:', f):
                     sg = os.path.basename(realpath)
+                elif f == "scsi_generic":
+                    for dir in os.listdir(os.path.join(paths, f)):
+                        if os.path.exists(os.path.join(paths, f, dir, "dev")):
+                            sg = os.path.basename(dir)
+                if sg:
+                    if devname is None:
+                        devname = sg
                     scsi_id = _vscsi_get_scsiid(sg)
             devices.append([hctl, devname, sg, scsi_id])
 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 18:24:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 18:24:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4c4l-0002ZR-FQ; Thu, 23 Aug 2012 18:24:15 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1T4c4j-0002Yt-CJ
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 18:24:13 +0000
Received: from [85.158.139.83:15807] by server-11.bemta-5.messagelabs.com id
	FD/D8-29296-C4576305; Thu, 23 Aug 2012 18:24:12 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-9.tower-182.messagelabs.com!1345746251!26873546!1
X-Originating-IP: [81.169.146.162]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MiA9PiAzODE0MjU=\n,sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MiA9PiAzODE0MjU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14533 invoked from network); 23 Aug 2012 18:24:12 -0000
Received: from mo-p00-ob.rzone.de (HELO mo-p00-ob.rzone.de) (81.169.146.162)
	by server-9.tower-182.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 23 Aug 2012 18:24:12 -0000
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+zrwiavkK6tmQaLfmztM8TOFIhi0PF4SP
X-RZG-CLASS-ID: mo00
Received: from probook.site
	(dslb-088-065-068-141.pools.arcor-ip.net [88.65.68.141])
	by smtp.strato.de (jorabe mo21) (RZmta 30.10 DYNA|AUTH)
	with (DHE-RSA-AES256-SHA encrypted) ESMTPA id 602390o7NFsaPS
	for <xen-devel@lists.xen.org>; Thu, 23 Aug 2012 20:24:11 +0200 (CEST)
Received: from probook.site (localhost [IPv6:::1])
	by probook.site (Postfix) with ESMTP id EEFB91836F
	for <xen-devel@lists.xen.org>; Thu, 23 Aug 2012 20:24:10 +0200 (CEST)
MIME-Version: 1.0
X-Mercurial-Node: 2b9992aea26cfebc2dda56d1a97a35dc3a5c8ce8
Message-Id: <2b9992aea26cfebc2dda.1345746248@probook.site>
In-Reply-To: <patchbomb.1345746246@probook.site>
References: <patchbomb.1345746246@probook.site>
User-Agent: Mercurial-patchbomb/1.7.5
Date: Thu, 23 Aug 2012 20:24:08 +0200
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH 2 of 3] xend/pvscsi: fix usage of persistent
 device names for SCSI devices
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

# HG changeset patch
# User Olaf Hering <olaf@aepfle.de>
# Date 1345743313 -7200
# Node ID 2b9992aea26cfebc2dda56d1a97a35dc3a5c8ce8
# Parent  52f3d52bacdecb2c8d7f8aa26e2600febc03b6dd
xend/pvscsi: fix usage of persistent device names for SCSI devices

Currently the callers of vscsi_get_scsidevices() do not pass a mask
string.  This results in the call "lsscsi -g '[]'", which causes an lsscsi
syntax error.  As a consequence the fallback sysfs parser
_vscsi_get_scsidevices() is used.  But this parser is broken, so the device
names specified in the config file are not found.

Using the mask '*' when none was given invokes lsscsi correctly, and the
following config is then parsed as expected:

vscsi=[
	'/dev/sg3, 0:0:0:0',
	'/dev/disk/by-id/wwn-0x600508b4000cf1c30000800000410000, 0:0:0:1'
]
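
The effect of the new default can be sketched as follows; lsscsi_command is
a hypothetical helper shown only to illustrate how the mask ends up in the
lsscsi invocation (the real code passes "[%s]" % mask to
_vscsi_get_scsidevices_by_lsscsi()):

```python
def lsscsi_command(mask="*"):
    # With the old default mask="" the resulting command was
    # "lsscsi -g '[]'", which lsscsi rejects as a syntax error;
    # the new default "*" yields "[*]", matching every H:C:T:L tuple.
    return "lsscsi -g '[%s]'" % mask
```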

Signed-off-by: Olaf Hering <olaf@aepfle.de>

diff -r 52f3d52bacde -r 2b9992aea26c tools/python/xen/util/vscsi_util.py
--- a/tools/python/xen/util/vscsi_util.py
+++ b/tools/python/xen/util/vscsi_util.py
@@ -150,7 +150,7 @@ def _vscsi_get_scsidevices_by_sysfs():
     return devices
 
 
-def vscsi_get_scsidevices(mask=""):
+def vscsi_get_scsidevices(mask="*"):
     """ get all scsi devices information """
 
     devices = _vscsi_get_scsidevices_by_lsscsi("[%s]" % mask)
@@ -279,7 +279,7 @@ def get_scsi_device(pHCTL):
             return _make_scsi_record(scsi_info)
     return None
 
-def get_all_scsi_devices(mask=""):
+def get_all_scsi_devices(mask="*"):
     scsi_records = []
     for scsi_info in vscsi_get_scsidevices(mask):
         scsi_record = _make_scsi_record(scsi_info)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 18:24:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 18:24:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4c4k-0002ZB-Nm; Thu, 23 Aug 2012 18:24:14 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1T4c4i-0002Yr-Vs
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 18:24:13 +0000
Received: from [85.158.143.35:16659] by server-1.bemta-4.messagelabs.com id
	40/A3-12504-C4576305; Thu, 23 Aug 2012 18:24:12 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-9.tower-21.messagelabs.com!1345746251!4840197!1
X-Originating-IP: [81.169.146.160]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MCA9PiA0MDU2NTE=\n,sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MCA9PiA0MDU2NTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20416 invoked from network); 23 Aug 2012 18:24:11 -0000
Received: from mo-p00-ob.rzone.de (HELO mo-p00-ob.rzone.de) (81.169.146.160)
	by server-9.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 23 Aug 2012 18:24:11 -0000
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+zrwiavkK6tmQaLfmztM8TOFIhi0PF4SP
X-RZG-CLASS-ID: mo00
Received: from probook.site
	(dslb-088-065-068-141.pools.arcor-ip.net [88.65.68.141])
	by smtp.strato.de (joses mo85) (RZmta 30.9 DYNA|AUTH)
	with (DHE-RSA-AES256-SHA encrypted) ESMTPA id k02a54o7NFG7j6
	for <xen-devel@lists.xen.org>; Thu, 23 Aug 2012 20:24:11 +0200 (CEST)
Received: from probook.site (localhost [IPv6:::1])
	by probook.site (Postfix) with ESMTP id DBE0B1836E
	for <xen-devel@lists.xen.org>; Thu, 23 Aug 2012 20:24:10 +0200 (CEST)
MIME-Version: 1.0
X-Mercurial-Node: 52f3d52bacdecb2c8d7f8aa26e2600febc03b6dd
Message-Id: <52f3d52bacdecb2c8d7f.1345746247@probook.site>
In-Reply-To: <patchbomb.1345746246@probook.site>
References: <patchbomb.1345746246@probook.site>
User-Agent: Mercurial-patchbomb/1.7.5
Date: Thu, 23 Aug 2012 20:24:07 +0200
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH 1 of 3] xend/pvscsi: fix passing of SCSI control
	LUNs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

# HG changeset patch
# User Olaf Hering <olaf@aepfle.de>
# Date 1345743306 -7200
# Node ID 52f3d52bacdecb2c8d7f8aa26e2600febc03b6dd
# Parent  e6ca45ca03c2e08af3a74b404166527b68fd1218
xend/pvscsi: fix passing of SCSI control LUNs

Currently pvscsi cannot pass through SCSI devices that have only a
scsi_generic node.  In the following example sg3 is a control LUN for the
disk sdd, but vscsi=['4:0:2:0,0:0:0:0'] does not work because the internal
'devname' variable remains None.  Writing p-devname to xenstore later fails
because None is not a valid string value.

Since devname serves a purely informational purpose, use sg as devname as
well.

carron:~ $ lsscsi -g
[0:0:0:0]    disk    ATA      FK0032CAAZP      HPF2  /dev/sda   /dev/sg0
[4:0:0:0]    disk    HP       P2000G3 FC/iSCSI T100  /dev/sdb   /dev/sg1
[4:0:1:0]    disk    HP       P2000G3 FC/iSCSI T100  /dev/sdc   /dev/sg2
[4:0:2:0]    storage HP       HSV400           0950  -         /dev/sg3
[4:0:2:1]    disk    HP       HSV400           0950  /dev/sdd   /dev/sg4
[4:0:3:0]    storage HP       HSV400           0950  -         /dev/sg5
[4:0:3:1]    disk    HP       HSV400           0950  /dev/sde   /dev/sg6
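
The fallback can be sketched like this; pick_devname is a hypothetical
standalone helper, while the actual patch adds the check inline in
_vscsi_get_scsidevices_by_lsscsi():

```python
def pick_devname(devname, sg):
    # A control LUN such as [4:0:2:0] has no block node, so lsscsi
    # prints '-' and devname stays None; fall back to the sg node
    # (devname is only informational), so xenstore never receives None.
    if devname is None and sg is not None:
        return sg
    return devname
```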

Signed-off-by: Olaf Hering <olaf@aepfle.de>

diff -r e6ca45ca03c2 -r 52f3d52bacde tools/python/xen/util/vscsi_util.py
--- a/tools/python/xen/util/vscsi_util.py
+++ b/tools/python/xen/util/vscsi_util.py
@@ -105,6 +105,8 @@ def _vscsi_get_scsidevices_by_lsscsi(opt
             devname = None
         try:
             sg = s[-1].split('/dev/')[1]
+            if devname is None:
+                devname = sg
             scsi_id = _vscsi_get_scsiid(sg)
         except IndexError:
             sg = None

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 18:24:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 18:24:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4c4k-0002ZB-Nm; Thu, 23 Aug 2012 18:24:14 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1T4c4i-0002Yr-Vs
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 18:24:13 +0000
Received: from [85.158.143.35:16659] by server-1.bemta-4.messagelabs.com id
	40/A3-12504-C4576305; Thu, 23 Aug 2012 18:24:12 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-9.tower-21.messagelabs.com!1345746251!4840197!1
X-Originating-IP: [81.169.146.160]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MCA9PiA0MDU2NTE=\n,sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MCA9PiA0MDU2NTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20416 invoked from network); 23 Aug 2012 18:24:11 -0000
Received: from mo-p00-ob.rzone.de (HELO mo-p00-ob.rzone.de) (81.169.146.160)
	by server-9.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 23 Aug 2012 18:24:11 -0000
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+zrwiavkK6tmQaLfmztM8TOFIhi0PF4SP
X-RZG-CLASS-ID: mo00
Received: from probook.site
	(dslb-088-065-068-141.pools.arcor-ip.net [88.65.68.141])
	by smtp.strato.de (joses mo85) (RZmta 30.9 DYNA|AUTH)
	with (DHE-RSA-AES256-SHA encrypted) ESMTPA id k02a54o7NFG7j6
	for <xen-devel@lists.xen.org>; Thu, 23 Aug 2012 20:24:11 +0200 (CEST)
Received: from probook.site (localhost [IPv6:::1])
	by probook.site (Postfix) with ESMTP id DBE0B1836E
	for <xen-devel@lists.xen.org>; Thu, 23 Aug 2012 20:24:10 +0200 (CEST)
MIME-Version: 1.0
X-Mercurial-Node: 52f3d52bacdecb2c8d7f8aa26e2600febc03b6dd
Message-Id: <52f3d52bacdecb2c8d7f.1345746247@probook.site>
In-Reply-To: <patchbomb.1345746246@probook.site>
References: <patchbomb.1345746246@probook.site>
User-Agent: Mercurial-patchbomb/1.7.5
Date: Thu, 23 Aug 2012 20:24:07 +0200
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH 1 of 3] xend/pvscsi: fix passing of SCSI control
	LUNs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

# HG changeset patch
# User Olaf Hering <olaf@aepfle.de>
# Date 1345743306 -7200
# Node ID 52f3d52bacdecb2c8d7f8aa26e2600febc03b6dd
# Parent  e6ca45ca03c2e08af3a74b404166527b68fd1218
xend/pvscsi: fix passing of SCSI control LUNs

Currently pvscsi cannot pass through SCSI devices that have only a
scsi_generic node. In the following example, sg3 is a control LUN for the
disk sdd. But vscsi=['4:0:2:0,0:0:0:0'] does not work because the internal
'devname' variable remains None. Writing p-devname to xenstore then fails
because None is not a valid string value.

Since devname is used only for informational purposes, use sg as devname as
well.

carron:~ $ lsscsi -g
[0:0:0:0]    disk    ATA      FK0032CAAZP      HPF2  /dev/sda   /dev/sg0
[4:0:0:0]    disk    HP       P2000G3 FC/iSCSI T100  /dev/sdb   /dev/sg1
[4:0:1:0]    disk    HP       P2000G3 FC/iSCSI T100  /dev/sdc   /dev/sg2
[4:0:2:0]    storage HP       HSV400           0950  -         /dev/sg3
[4:0:2:1]    disk    HP       HSV400           0950  /dev/sdd   /dev/sg4
[4:0:3:0]    storage HP       HSV400           0950  -         /dev/sg5
[4:0:3:1]    disk    HP       HSV400           0950  /dev/sde   /dev/sg6

Signed-off-by: Olaf Hering <olaf@aepfle.de>

diff -r e6ca45ca03c2 -r 52f3d52bacde tools/python/xen/util/vscsi_util.py
--- a/tools/python/xen/util/vscsi_util.py
+++ b/tools/python/xen/util/vscsi_util.py
@@ -105,6 +105,8 @@ def _vscsi_get_scsidevices_by_lsscsi(opt
             devname = None
         try:
             sg = s[-1].split('/dev/')[1]
+            if devname is None:
+                devname = sg
             scsi_id = _vscsi_get_scsiid(sg)
         except IndexError:
             sg = None

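The fallback can be illustrated with a standalone sketch of the lsscsi
parsing (a simplified stand-in for _vscsi_get_scsidevices_by_lsscsi(), not
the actual xend code; the function name and structure here are illustrative
only):

```python
# Simplified stand-in for xend's lsscsi-output parsing.  For a control LUN
# the block-device column is "-", so devname stays None and the sg node is
# used instead -- the behaviour the patch introduces.

def parse_lsscsi_line(line):
    s = line.split()
    devname = None
    try:
        # second-to-last column: block device node, e.g. /dev/sdd, or "-"
        devname = s[-2].split('/dev/')[1]
    except IndexError:
        pass
    try:
        # last column: scsi_generic node, e.g. /dev/sg4
        sg = s[-1].split('/dev/')[1]
        if devname is None:   # control LUN: no block node, fall back to sg
            devname = sg
    except IndexError:
        sg = None
    return devname, sg

disk = "[4:0:2:1]  disk     HP  HSV400  0950  /dev/sdd  /dev/sg4"
ctrl = "[4:0:2:0]  storage  HP  HSV400  0950  -         /dev/sg3"
print(parse_lsscsi_line(disk))  # ('sdd', 'sg4')
print(parse_lsscsi_line(ctrl))  # ('sg3', 'sg3')
```

With the fallback in place, p-devname always gets a valid string, so the
xenstore write no longer fails for control LUNs.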
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 18:37:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 18:37:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4cHo-0003Lc-Sb; Thu, 23 Aug 2012 18:37:44 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1T4cHn-0003LU-0X
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 18:37:43 +0000
Received: from [85.158.143.99:2421] by server-1.bemta-4.messagelabs.com id
	86/9B-12504-67876305; Thu, 23 Aug 2012 18:37:42 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1345747060!20589207!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzM2MzE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13871 invoked from network); 23 Aug 2012 18:37:41 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 18:37:41 -0000
X-IronPort-AV: E=Sophos;i="4.80,301,1344225600"; d="scan'208";a="206064704"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	23 Aug 2012 14:37:13 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 23 Aug 2012 14:37:12 -0400
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1T4cHI-00013u-Ci;
	Thu, 23 Aug 2012 19:37:12 +0100
Message-ID: <50367858.4060701@citrix.com>
Date: Thu, 23 Aug 2012 19:37:12 +0100
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120714 Thunderbird/14.0
MIME-Version: 1.0
To: Ben Guthro <ben@guthro.net>
References: <CAOvdn6WwgBD-2yf1uu3m9-16=Tb5KUrybXf2KibtnxKbP7YtwA@mail.gmail.com>
	<5024CB720200007800094124@nat28.tlf.novell.com>
	<CAOvdn6VVJdhWVYa-gNVt3euE4AbGi1TijUCUd1wGzv0SCWEUYA@mail.gmail.com>
	<CAOvdn6USnG9Y4MNBhrDZmob0oJ8BgFfKTvFEhcKbXDxNmzfZ-Q@mail.gmail.com>
	<502B75BA0200007800094FD4@nat28.tlf.novell.com>
	<CAOvdn6Ww5oEj6Xg=XT4dWCYWUBuOdPqLz-CqNjz4HmCScE+==g@mail.gmail.com>
	<CAOvdn6Wsjds_dmZ0dwDTYmD-o-OqjBH5-b9mitAuUADE+A4_ag@mail.gmail.com>
	<502BB8FD02000078000951E0@nat28.tlf.novell.com>
	<CAOvdn6VD6bp13F-tvYdkBsj4hkhE7PTGZrapndtdHrDKeZqccg@mail.gmail.com>
	<502CCC0002000078000955C4@nat28.tlf.novell.com>
	<CAOvdn6VEuX7R2_wz74ZvCxMjC-=YUir9kP5sdxEVaKVspRUd_w@mail.gmail.com>
	<502CF0A8020000780009574E@nat28.tlf.novell.com>
	<CAOvdn6VsZvn_1TCWYeuyY5YuTcP8=miK2KwE4583nsj6Qjb_vg@mail.gmail.com>
	<CAOvdn6UZ61363sKb6N_2pCeOBt-RzHK96Vxudmey4NxmUCHoDQ@mail.gmail.com>
	<502E3BC00200007800095DAB@nat28.tlf.novell.com>
	<CAOvdn6VJDqKA=a6Z71jtew5jtTZzOF6Fb4yCSyQQKA62iw_4Dw@mail.gmail.com>
In-Reply-To: <CAOvdn6VJDqKA=a6Z71jtew5jtTZzOF6Fb4yCSyQQKA62iw_4Dw@mail.gmail.com>
X-Enigmail-Version: 1.4.3
Cc: Thomas Goetz <thomas.goetz@citrix.com>, xen-devel <xen-devel@lists.xen.org>,
	John Baboval <john.baboval@citrix.com>, Jan Beulich <JBeulich@suse.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] Xen4.2 S3 regression?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 23/08/12 19:03, Ben Guthro wrote:
> I did some more bisecting here, and I came up with another changeset
> that seems to be problematic, Re: IRQs
>
> After bisecting the problem discussed earlier in this thread to the
> changeset below,
> http://xenbits.xen.org/hg/xen-unstable.hg/rev/0695a5cdcb42
>
>
> I worked past that issue by the following hack:
>
> --- a/xen/common/event_channel.c
> +++ b/xen/common/event_channel.c
> @@ -1103,7 +1103,7 @@ void evtchn_destroy_final(struct domain *d)
>  void evtchn_move_pirqs(struct vcpu *v)
>  {
>      struct domain *d = v->domain;
> -    const cpumask_t *mask = cpumask_of(v->processor);
> +    //const cpumask_t *mask = cpumask_of(v->processor);
>      unsigned int port;
>      struct evtchn *chn;
>
> @@ -1111,7 +1111,9 @@ void evtchn_move_pirqs(struct vcpu *v)
>      for ( port = v->pirq_evtchn_head; port; port = chn->u.pirq.next_port )
>      {
>          chn = evtchn_from_port(d, port);
> +#if 0
>          pirq_set_affinity(d, chn->u.pirq.irq, mask);
> +#endif
>      }
>      spin_unlock(&d->event_lock);
>  }
>
>
> This seemed to work for this rather old changeset, but it was not
> sufficient to fix it against the 4.1, or unstable trees.
>
> I further bisected, in combination with this hack, and found the
> following changeset to also be problematic:
>
> http://xenbits.xen.org/hg/xen-unstable.hg/rev/c2cb776a5365
>
>
> That is, before this change I could resume reliably (with the hack
> above) - and after I could not.
> This was surprising to me, as this change also looks rather innocuous.

And by the looks of that changeset, the logic in fixup_irqs() in irq.c
was changed.

Jan: The commit message says "simplify operations [in] a few cases". 
Was the change in fixup_irqs() deliberate?

~Andrew

>
>
> Naturally, backing out this change seems to be non-trivial against the
> tip, since so much around it has changed.
>

-- 
Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
T: +44 (0)1223 225 900, http://www.citrix.com


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 18:55:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 18:55:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4cYR-0003hY-Nu; Thu, 23 Aug 2012 18:54:55 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1T4cYQ-0003hT-AX
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 18:54:54 +0000
Received: from [85.158.143.35:20481] by server-3.bemta-4.messagelabs.com id
	9E/AE-08232-D7C76305; Thu, 23 Aug 2012 18:54:53 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1345748090!13436028!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzM2MzE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19889 invoked from network); 23 Aug 2012 18:54:51 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 18:54:51 -0000
X-IronPort-AV: E=Sophos;i="4.80,301,1344225600"; 
	d="scan'208,217";a="206066429"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	23 Aug 2012 14:54:49 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 23 Aug 2012 14:54:49 -0400
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1T4cYK-0001Ky-Vq;
	Thu, 23 Aug 2012 19:54:48 +0100
Message-ID: <50367C78.80608@citrix.com>
Date: Thu, 23 Aug 2012 19:54:48 +0100
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120714 Thunderbird/14.0
MIME-Version: 1.0
To: Ben Guthro <ben@guthro.net>
X-Enigmail-Version: 1.4.3
Content-Type: multipart/mixed; boundary="------------000503060106030301020801"
Cc: Jan Beulich <jbeulich@suse.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	John Baboval <john.baboval@citrix.com>,
	Thomas Goetz <thomas.goetz@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen4.2 S3 regression?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--------------000503060106030301020801
Content-Type: multipart/alternative;
	boundary="------------020109070509040601030307"

--------------020109070509040601030307
Content-Type: text/plain; charset="ISO-8859-1"
Content-Transfer-Encoding: 7bit

> On 23/08/12 19:03, Ben Guthro wrote:
>> I did some more bisecting here, and I came up with another changeset
>> that seems to be problematic, Re: IRQs
>>
>> After bisecting the problem discussed earlier in this thread to the
>> changeset below,
>> http://xenbits.xen.org/hg/xen-unstable.hg/rev/0695a5cdcb42
>>
>>
>> I worked past that issue by the following hack:
>>
>> --- a/xen/common/event_channel.c
>> +++ b/xen/common/event_channel.c
>> @@ -1103,7 +1103,7 @@ void evtchn_destroy_final(struct domain *d)
>>  void evtchn_move_pirqs(struct vcpu *v)
>>  {
>>      struct domain *d = v->domain;
>> -    const cpumask_t *mask = cpumask_of(v->processor);
>> +    //const cpumask_t *mask = cpumask_of(v->processor);
>>      unsigned int port;
>>      struct evtchn *chn;
>>
>> @@ -1111,7 +1111,9 @@ void evtchn_move_pirqs(struct vcpu *v)
>>      for ( port = v->pirq_evtchn_head; port; port = chn->u.pirq.next_port )
>>      {
>>          chn = evtchn_from_port(d, port);
>> +#if 0
>>          pirq_set_affinity(d, chn->u.pirq.irq, mask);
>> +#endif
>>      }
>>      spin_unlock(&d->event_lock);
>>  }
>>
>>
>> This seemed to work for this rather old changeset, but it was not
>> sufficient to fix it against the 4.1, or unstable trees.
>>
>> I further bisected, in combination with this hack, and found the
>> following changeset to also be problematic:
>>
>> http://xenbits.xen.org/hg/xen-unstable.hg/rev/c2cb776a5365
>>
>>
>> That is, before this change I could resume reliably (with the hack
>> above) - and after I could not.
>> This was surprising to me, as this change also looks rather innocuous.
> And by the looks of that changeset, the logic in fixup_irqs() in irq.c
> was changed.
>
> Jan: The commit message says "simplify operations [in] a few cases". 
> Was the change in fixup_irqs() deliberate?
>
> ~Andrew

Ben: Could you test the attached patch? It is against xen-unstable and
undoes the logic change to fixup_irqs().

~Andrew

>
>> Naturally, backing out this change seems to be non-trivial against the
>> tip, since so much around it has changed.
>>
> -- Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer T: +44
> (0)1223 225 900, http://www.citrix.com
> _______________________________________________ Xen-devel mailing list
> Xen-devel@lists.xen.org http://lists.xen.org/xen-devel

-- 
Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
T: +44 (0)1223 225 900, http://www.citrix.com
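
The behavioural difference between the two predicates can be sketched in a
few lines (Python sets standing in for Xen's cpumask_t; this illustrates
the predicate the revert patch changes, it is not hypervisor code):

```python
# fixup_irqs() skips an IRQ when its condition is true.  The offending
# changeset made the skip test cpumask_subset(); the revert restores
# cpumask_equal().  Sets model the CPU masks.

def skips_irq_subset(affinity, online):
    # post-change behaviour: skip whenever affinity is any subset of online
    return affinity <= online

def skips_irq_equal(affinity, online):
    # reverted behaviour: skip only when affinity exactly matches online
    return affinity == online

online = {0, 1, 2, 3}
pinned = {2}              # an IRQ pinned to CPU 2

print(skips_irq_subset(pinned, online))  # True  - left alone by fixup_irqs()
print(skips_irq_equal(pinned, online))   # False - gets its affinity fixed up
```

So an IRQ pinned to a strict subset of the online CPUs is no longer touched
by fixup_irqs() after the change, which is the behaviour the revert undoes.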


--------------020109070509040601030307--

--------------000503060106030301020801
Content-Type: text/x-patch; name="s3-revert-fixup_irqs.patch"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="s3-revert-fixup_irqs.patch"

# HG changeset patch
# Parent b02ac80ff6899e98b4089842843104fd8572a7cd

diff -r b02ac80ff689 xen/arch/x86/irq.c
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -2151,7 +2151,7 @@ void fixup_irqs(void)
         spin_lock(&desc->lock);
 
         cpumask_copy(&affinity, desc->affinity);
-        if ( !desc->action || cpumask_subset(&affinity, &cpu_online_map) )
+        if ( !desc->action || cpumask_equal(&affinity, &cpu_online_map) )
         {
             spin_unlock(&desc->lock);
             continue;

--------------000503060106030301020801
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--------------000503060106030301020801--


From xen-devel-bounces@lists.xen.org Thu Aug 23 18:55:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 18:55:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4cYR-0003hY-Nu; Thu, 23 Aug 2012 18:54:55 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1T4cYQ-0003hT-AX
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 18:54:54 +0000
Received: from [85.158.143.35:20481] by server-3.bemta-4.messagelabs.com id
	9E/AE-08232-D7C76305; Thu, 23 Aug 2012 18:54:53 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1345748090!13436028!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzM2MzE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19889 invoked from network); 23 Aug 2012 18:54:51 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 18:54:51 -0000
X-IronPort-AV: E=Sophos;i="4.80,301,1344225600"; 
	d="scan'208,217";a="206066429"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	23 Aug 2012 14:54:49 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 23 Aug 2012 14:54:49 -0400
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1T4cYK-0001Ky-Vq;
	Thu, 23 Aug 2012 19:54:48 +0100
Message-ID: <50367C78.80608@citrix.com>
Date: Thu, 23 Aug 2012 19:54:48 +0100
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120714 Thunderbird/14.0
MIME-Version: 1.0
To: Ben Guthro <ben@guthro.net>
X-Enigmail-Version: 1.4.3
Content-Type: multipart/mixed; boundary="------------000503060106030301020801"
Cc: Jan Beulich <jbeulich@suse.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	John Baboval <john.baboval@citrix.com>,
	Thomas Goetz <thomas.goetz@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen4.2 S3 regression?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--------------000503060106030301020801
Content-Type: multipart/alternative;
	boundary="------------020109070509040601030307"

--------------020109070509040601030307
Content-Type: text/plain; charset="ISO-8859-1"
Content-Transfer-Encoding: 7bit

> On 23/08/12 19:03, Ben Guthro wrote:
>> I did some more bisecting here, and I came up with another changeset
>> that seems to be problematic, Re: IRQs
>>
>> After bisecting the problem discussed earlier in this thread to the
>> changeset below,
>> http://xenbits.xen.org/hg/xen-unstable.hg/rev/0695a5cdcb42
>>
>>
>> I worked past that issue by the following hack:
>>
>> --- a/xen/common/event_channel.c
>> +++ b/xen/common/event_channel.c
>> @@ -1103,7 +1103,7 @@ void evtchn_destroy_final(struct domain *d)
>>  void evtchn_move_pirqs(struct vcpu *v)
>>  {
>>      struct domain *d = v->domain;
>> -    const cpumask_t *mask = cpumask_of(v->processor);
>> +    //const cpumask_t *mask = cpumask_of(v->processor);
>>      unsigned int port;
>>      struct evtchn *chn;
>>
>> @@ -1111,7 +1111,9 @@ void evtchn_move_pirqs(struct vcpu *v)
>>      for ( port = v->pirq_evtchn_head; port; port = chn->u.pirq.next_port )
>>      {
>>          chn = evtchn_from_port(d, port);
>> +#if 0
>>          pirq_set_affinity(d, chn->u.pirq.irq, mask);
>> +#endif
>>      }
>>      spin_unlock(&d->event_lock);
>>  }
>>
>>
>> This seemed to work for this rather old changeset, but it was not
>> sufficient to fix it against the 4.1, or unstable trees.
>>
>> I further bisected, in combination with this hack, and found the
>> following changeset to also be problematic:
>>
>> http://xenbits.xen.org/hg/xen-unstable.hg/rev/c2cb776a5365
>>
>>
>> That is, before this change I could resume reliably (with the hack
>> above) - and after I could not.
>> This was surprising to me, as this change also looks rather innocuous.
> And by the looks of that changeset, the logic in fixup_irqs() in irq.c
> was changed.
>
> Jan: The commit message says "simplify operations [in] a few cases". 
> Was the change in fixup_irqs() deliberate?
>
> ~Andrew

Ben: Could you test the attached patch?  It is for unstable and undoes
the logical change to fixup_irqs().

~Andrew

>
>> Naturally, backing out this change seems to be non-trivial against the
>> tip, since so much around it has changed.
>>
> -- Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer T: +44
> (0)1223 225 900, http://www.citrix.com
> _______________________________________________ Xen-devel mailing list
> Xen-devel@lists.xen.org http://lists.xen.org/xen-devel

-- 
Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
T: +44 (0)1223 225 900, http://www.citrix.com



--------------020109070509040601030307--

--------------000503060106030301020801
Content-Type: text/x-patch; name="s3-revert-fixup_irqs.patch"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="s3-revert-fixup_irqs.patch"

# HG changeset patch
# Parent b02ac80ff6899e98b4089842843104fd8572a7cd

diff -r b02ac80ff689 xen/arch/x86/irq.c
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -2151,7 +2151,7 @@ void fixup_irqs(void)
         spin_lock(&desc->lock);
 
         cpumask_copy(&affinity, desc->affinity);
-        if ( !desc->action || cpumask_subset(&affinity, &cpu_online_map) )
+        if ( !desc->action || cpumask_equal(&affinity, &cpu_online_map) )
         {
             spin_unlock(&desc->lock);
             continue;

--------------000503060106030301020801
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--------------000503060106030301020801--


From xen-devel-bounces@lists.xen.org Thu Aug 23 18:55:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 18:55:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4cZD-0003l9-AW; Thu, 23 Aug 2012 18:55:43 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1T4cZB-0003ks-KL
	for xen-devel@lists.xensource.com; Thu, 23 Aug 2012 18:55:42 +0000
Received: from [85.158.143.35:21441] by server-3.bemta-4.messagelabs.com id
	8C/1F-08232-CAC76305; Thu, 23 Aug 2012 18:55:40 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1345748138!13319266!1
X-Originating-IP: [209.85.215.171]
X-SpamReason: No, hits=1.7 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	MAILTO_TO_SPAM_ADDR, MIME_QP_LONG_LINE, ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP, USERPASS,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6662 invoked from network); 23 Aug 2012 18:55:38 -0000
Received: from mail-ey0-f171.google.com (HELO mail-ey0-f171.google.com)
	(209.85.215.171)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 18:55:38 -0000
Received: by eaah11 with SMTP id h11so337222eaa.30
	for <xen-devel@lists.xensource.com>;
	Thu, 23 Aug 2012 11:55:38 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type;
	bh=y79P4SUJr4PrQmoCb+sngng0YE1ssrX+LNW6AQ01shE=;
	b=KqPfO/XKuBXZ2coOmf8W4koBR4xVP3zMVBB4iKEKx7B4BeczvJ7ciEPPJN9l2L9em7
	1gUyQRSjVx9wuvnY7RBJfO2m0XOf/4YZ3dE9TXvMjktllsOFpzheE0+BI31rrXkBdSBB
	4UkfmIXJ0tA/pIKdipvUHZCOUKWLwxGMyIf4GAehe+dmOz6nbr8E8wLVeNwCnS1Ce3Bw
	NYxMkURCxv8YyqvDbcFWEX7tCvA2J37IUeuG80KtuMHGMKCCc4yYfJdKzLGKdPAEH4vg
	bDkbfDhY6RyqK1CwVouWNysbU4iZrcHzcHASnFFX+njlIJuoSRnVO3nN4fwiUP3XJAOv
	5cXA==
Received: by 10.14.203.73 with SMTP id e49mr3437827eeo.27.1345748138127;
	Thu, 23 Aug 2012 11:55:38 -0700 (PDT)
Received: from [192.168.1.68] (host86-184-74-117.range86-184.btcentralplus.com.
	[86.184.74.117])
	by mx.google.com with ESMTPS id 45sm23733551eeb.8.2012.08.23.11.55.36
	(version=SSLv3 cipher=OTHER); Thu, 23 Aug 2012 11:55:37 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.32.0.111121
Date: Thu, 23 Aug 2012 19:55:34 +0100
From: Keir Fraser <keir.xen@gmail.com>
To: Cutter 409 <cutter409@gmail.com>,
	<xen-devel@lists.xensource.com>
Message-ID: <CC5C3B36.3CC23%keir.xen@gmail.com>
Thread-Topic: [Xen-devel] Foreign VCPU register change?
Thread-Index: Ac2BYOAWSj0+cGf/5kKN8HmYg8nMBQ==
In-Reply-To: <CAG4Ohu-4rC7xOxZ3SwnER+GF--kr6xazyfKEkgEYyDKCbk6E2w@mail.gmail.com>
Mime-version: 1.0
Subject: Re: [Xen-devel] Foreign VCPU register change?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8265482884604195495=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> This message is in MIME format. Since your mail reader does not understand
this format, some or all of this message may not be legible.

--===============8265482884604195495==
Content-type: multipart/alternative;
	boundary="B_3428596536_124287228"

> This message is in MIME format. Since your mail reader does not understand
this format, some or all of this message may not be legible.

--B_3428596536_124287228
Content-type: text/plain;
	charset="ISO-8859-1"
Content-transfer-encoding: quoted-printable

If you have your own flag in v->pause_flags then indeed you do not need to
vcpu_pause_nosync()  in the vmexit handler.

The best sequence would be:
 - vmexit handler: set flag in v->pause_flags, then vcpu_sleep_nosync()
 - domctl entry: vcpu_sleep_sync()
 - domctl exit: clear flag in v->pause_flags, then vcpu_wake()

So that's pretty much what you had in the first place, except for the extra
vcpu_sleep_sync() on domctl entry. That's absolutely critical, and why your
pause_nosync on vmexit, unpause after domctl doesn't work -- something is
needed on domctl entry to be sure that the vcpu is descheduled and its state
is synchronised. Of course the extra machinery of vcpu_pause/unpause is
harmless enough, but it's not actually necessary here.

 -- Keir


On 23/08/2012 19:11, "Cutter 409" <cutter409@gmail.com> wrote:

> Thanks, Keir!
>
> I've spent so much time trying to track down this problem, even before I
> realized the registers weren't actually changing. You have no idea how helpful
> that was.
>
> Before I tried your example, I just wrapped the code to change the register in
> vcpu_pause() and vcpu_unpause(), which worked.
>
> Everything seems fine at the moment, is there any reason I should still change
> the vcpu_sleep_nosync() to vcpu_pause_nosync()? It seems to actually work as
> is; I'm setting a bit in v->pause_flags before I call it, then clear the bit
> before I wake it. I also tried pause_nosync on vmexit, unpause after domctl,
> but that didn't work.
>
> Thanks again!
>
>
> On Thu, Aug 23, 2012 at 1:54 PM, Keir Fraser <keir.xen@gmail.com> wrote:
>> So, for example, one possibly-valid scheme would be:
>>
>>  - vcpu_pause_nosync() from the vmexit handler
>>
>>  - vcpu_sleep_sync() at the start of the domctl
>>  - vcpu_unpause() at the end of the domctl
>>
>> HTH,
>>  Keir
>>
>>
>>
>> On 23/08/2012 18:49, "Keir Fraser" <keir.xen@gmail.com> wrote:
>>
>>> FWIW I would expect your approach to basically work.
>>>
>>> Except... Does your domctl do a vcpu_pause()/vcpu_unpause() on the vcpu?
>>> This will ensure that the vcpu is both fully de-scheduled, and all of its
>>> register state is synced back into its vcpu structure.
>>>
>>> Otherwise you race the vcpu_sleep_nosync() -- and that's assuming you also
>>> have a reason for that vcpu to sleep (e.g., non-zero pause counter), else
>>> vcpu_sleep_*() operations do nothing!
>>>
>>> In short, your problems are almost certainly something to do with the
>>> subtleties of actually putting a vcpu properly to sleep.
>>>
>>>  -- Keir
>>>
>>>
>>> On 23/08/2012 18:37, "Cutter 409" <cutter409@gmail.com> wrote:
>>>
>>>> I'm making the register change directly from the hypervisor, inside of the
>>>> domctl code.
>>>>
>>>> It's a custom domctl that I've added. I'll look into what setcontext does
>>>> after it modifies the register values, though.
>>>>
>>>> Thank you!
>>>>
>>>> On Thu, Aug 23, 2012 at 1:34 PM, Keir Fraser <keir.xen@gmail.com> wrote:
>>>>> On 23/08/2012 18:25, "Cutter 409" <cutter409@gmail.com> wrote:
>>>>>
>>>>>> > With Xen-4.1.2:
>>>>>> >
>>>>>> > I'm trying to change a register value in a paused vmx vcpu. The general
>>>>>> > process looks like this:
>>>>>> >
>>>>>> > 1. Some vmexit calls vcpu_sleep_nosync(v) on the vcpu
>>>>>> > 2. From dom0, I issue a domctl to change a register via
>>>>>> > v->arch.guest_context.user_reg, then vcpu_wake(v)
>>>>>
>>>>> Which domctl? From dom0 userspace you can use the libxc functions
>>>>> xc_vcpu_getcontext() and xc_vcpu_setcontext() to read/modify register
>>>>> state.
>>>>>
>>>>> You can read the libxc sources to see what hypercall these map to, if you
>>>>> don't want to use libxc for any reason.
>>>>>
>>>>>  -- Keir
>>>>>
>>>>>> > However, the guest register does not seem to be changed when I do it
>>>>>> this way.
>>>>>> > Is there something I need to do to mark the registers as "dirty" ? Is
>>>>>> there a
>>>>>> > way to force the foreign vcpu to update the changed registers? Or maybe
>>>>>> I just
>>>>>> > have to change the registers somewhere else?
>>>>>> >
>>>>>> > I've tried directly using vmcs_enter(v), __vmwrite(), vmcs_exit(v)
>>>>>> also, but
>>>>>> > that doesn't seem to make a change either.
>>>>>> >
>>>>>> > Thanks!
>>>>>> >
>>>>>> >
>>>>>> >
>>>>>> > _______________________________________________
>>>>>> > Xen-devel mailing list
>>>>>> > Xen-devel@lists.xen.org
>>>>>> > http://lists.xen.org/xen-devel
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>> _______________________________________________
>>>> Xen-devel mailing list
>>>> Xen-devel@lists.xen.org
>>>> http://lists.xen.org/xen-devel
>>>
>
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel




--B_3428596536_124287228--




--===============8265482884604195495==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8265482884604195495==--




From xen-devel-bounces@lists.xen.org Thu Aug 23 18:55:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 18:55:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4cZD-0003l9-AW; Thu, 23 Aug 2012 18:55:43 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1T4cZB-0003ks-KL
	for xen-devel@lists.xensource.com; Thu, 23 Aug 2012 18:55:42 +0000
Received: from [85.158.143.35:21441] by server-3.bemta-4.messagelabs.com id
	8C/1F-08232-CAC76305; Thu, 23 Aug 2012 18:55:40 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1345748138!13319266!1
X-Originating-IP: [209.85.215.171]
X-SpamReason: No, hits=1.7 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	MAILTO_TO_SPAM_ADDR, MIME_QP_LONG_LINE, ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP, USERPASS,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6662 invoked from network); 23 Aug 2012 18:55:38 -0000
Received: from mail-ey0-f171.google.com (HELO mail-ey0-f171.google.com)
	(209.85.215.171)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 18:55:38 -0000
Received: by eaah11 with SMTP id h11so337222eaa.30
	for <xen-devel@lists.xensource.com>;
	Thu, 23 Aug 2012 11:55:38 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type;
	bh=y79P4SUJr4PrQmoCb+sngng0YE1ssrX+LNW6AQ01shE=;
	b=KqPfO/XKuBXZ2coOmf8W4koBR4xVP3zMVBB4iKEKx7B4BeczvJ7ciEPPJN9l2L9em7
	1gUyQRSjVx9wuvnY7RBJfO2m0XOf/4YZ3dE9TXvMjktllsOFpzheE0+BI31rrXkBdSBB
	4UkfmIXJ0tA/pIKdipvUHZCOUKWLwxGMyIf4GAehe+dmOz6nbr8E8wLVeNwCnS1Ce3Bw
	NYxMkURCxv8YyqvDbcFWEX7tCvA2J37IUeuG80KtuMHGMKCCc4yYfJdKzLGKdPAEH4vg
	bDkbfDhY6RyqK1CwVouWNysbU4iZrcHzcHASnFFX+njlIJuoSRnVO3nN4fwiUP3XJAOv
	5cXA==
Received: by 10.14.203.73 with SMTP id e49mr3437827eeo.27.1345748138127;
	Thu, 23 Aug 2012 11:55:38 -0700 (PDT)
Received: from [192.168.1.68] (host86-184-74-117.range86-184.btcentralplus.com.
	[86.184.74.117])
	by mx.google.com with ESMTPS id 45sm23733551eeb.8.2012.08.23.11.55.36
	(version=SSLv3 cipher=OTHER); Thu, 23 Aug 2012 11:55:37 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.32.0.111121
Date: Thu, 23 Aug 2012 19:55:34 +0100
From: Keir Fraser <keir.xen@gmail.com>
To: Cutter 409 <cutter409@gmail.com>,
	<xen-devel@lists.xensource.com>
Message-ID: <CC5C3B36.3CC23%keir.xen@gmail.com>
Thread-Topic: [Xen-devel] Foreign VCPU register change?
Thread-Index: Ac2BYOAWSj0+cGf/5kKN8HmYg8nMBQ==
In-Reply-To: <CAG4Ohu-4rC7xOxZ3SwnER+GF--kr6xazyfKEkgEYyDKCbk6E2w@mail.gmail.com>
Mime-version: 1.0
Subject: Re: [Xen-devel] Foreign VCPU register change?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8265482884604195495=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> This message is in MIME format. Since your mail reader does not understand
this format, some or all of this message may not be legible.

--===============8265482884604195495==
Content-type: multipart/alternative;
	boundary="B_3428596536_124287228"

> This message is in MIME format. Since your mail reader does not understand
this format, some or all of this message may not be legible.

--B_3428596536_124287228
Content-type: text/plain;
	charset="ISO-8859-1"
Content-transfer-encoding: quoted-printable

If you have your own flag in v->pause_flags then indeed you do not need to
vcpu_pause_nosync()  in the vmexit handler.

The best sequence would be:
 - vmexit handler: set flag in v->pause_flags, then vcpu_sleep_nosync()
 - domctl entry: vcpu_sleep_sync()
 - domctl exit: clear flag in v->pause_flags, then vcpu_wake()

So that=B9s pretty much what you had in the first place, except for the extra
vcpu_sleep_sync() on domctl entry. That=B9s absolutely critical, and why your
pause_nosync on vmexit, unpause after domctl doesn=B9t work =8B something is
needed on domctl entry to be sure that the vcpu is descheduled and its stat=
e
is synchronised. Of course the extra machinery of vcpu_pause/unpause is
harmless enough, but it=B9s not actually necessary here.

 -- Keir


On 23/08/2012 19:11, "Cutter 409" <cutter409@gmail.com> wrote:

> Thanks, Keir!
>=20
> I've spent so much time trying to track down this problem, even before I
> realized the registers weren't actually changing. You have no idea how he=
lpful
> that was.
>=20
> Before I tried your example, I just wrapped the code to change the regist=
er in
> vcpu_pause() and vcpu_unpause(), which worked.
>=20
> Everything seems fine at the moment, is there any reason I should still c=
hange
> the vcpu_sleep_nosync() to vcpu_pause_nosync()? It seems to actually work=
 as
> is; I'm setting a bit in v->pause_flags before I call it, then clear the =
bit
> before I wake it. I also tried pause_nosync on vmexit, unpause after domc=
tl,
> but that didn't work.
>=20
> Thanks again!
>=20
>=20
> On Thu, Aug 23, 2012 at 1:54 PM, Keir Fraser <keir.xen@gmail.com> wrote:
>> So, for example, one possibly-valid scheme would be:
>>=20
>> =A0- vcpu_pause_nosync() from the vmexit handler
>>=20
>> =A0- vcpu_sleep_sync() at the start of the domctl
>> =A0- vcpu_unpause() at the end of the domctl
>>=20
>> HTH,
>> =A0Keir
>>=20
>>=20
>>=20
>> On 23/08/2012 18:49, "Keir Fraser" <keir.xen@gmail.com
>> <http://keir.xen@gmail.com> > wrote:
>>=20
>>> FWIW I would expect your approach to basically work.
>>>=20
>>> Except... Does your domctl do a vcpu_pause()/vcpu_unpause() on the vcpu=
?
>>> This will ensure that the vcpu is both fully de-scheduled, and all of i=
ts
>>> register state is synced back into its vcpu structure.
>>>=20
>>> Otherwise you race the vcpu_sleep_nosync() -- and that=B9s assuming you a=
lso
>>> have a reason for that vcpu to sleep (e.g., non-zero pause counter), el=
se
>>> vcpu_sleep_*() operations do nothing!
>>>=20
>>> In short, your problems are almost certainly something to do with the
>>> subtleties of actually putting a vcpu properly to sleep.
>>>=20
>>> =A0-- Keir
>>>=20
>>>=20
>>> On 23/08/2012 18:37, "Cutter 409" <cutter409@gmail.com
>>> <http://cutter409@gmail.com> > wrote:
>>>=20
>>>> I'm making the register change directly from the hypervisor, inside of the
>>>> domctl code.
>>>>
>>>> It's a custom domctl that I've added. I'll look into what setcontext does
>>>> after it modifies the register values, though.
>>>>
>>>> Thank you!
>>>>
>>>> On Thu, Aug 23, 2012 at 1:34 PM, Keir Fraser <keir.xen@gmail.com> wrote:
>>>>> On 23/08/2012 18:25, "Cutter 409" <cutter409@gmail.com> wrote:
>>>>>
>>>>>> > With Xen-4.1.2:
>>>>>> >
>>>>>> > I'm trying to change a register value in a paused vmx vcpu. The general
>>>>>> > process looks like this:
>>>>>> >
>>>>>> > 1. Some vmexit calls vcpu_sleep_nosync(v) on the vcpu
>>>>>> > 2. From dom0, I issue a domctl to change a register via
>>>>>> > v->arch.guest_context.user_reg, then vcpu_wake(v)
>>>>>
>>>>> Which domctl? From dom0 userspace you can use the libxc functions
>>>>> xc_vcpu_getcontext() and xc_vcpu_setcontext() to read/modify register
>>>>> state.
>>>>>
>>>>> You can read the libxc sources to see what hypercall these map to, if you
>>>>> don't want to use libxc for any reason.
>>>>>
>>>>>  -- Keir
>>>>>
>>>>>> > However, the guest register does not seem to be changed when I do it this way.
>>>>>> > Is there something I need to do to mark the registers as "dirty"? Is there a
>>>>>> > way to force the foreign vcpu to update the changed registers? Or maybe I just
>>>>>> > have to change the registers somewhere else?
>>>>>> >
>>>>>> > I've tried directly using vmcs_enter(v), __vmwrite(), vmcs_exit(v) also, but
>>>>>> > that doesn't seem to make a change either.
>>>>>> >
>>>>>> > Thanks!
>>>>>> >
>>>>>> >
>>>>>> >
>>>>>> > _______________________________________________
>>>>>> > Xen-devel mailing list
>>>>>> > Xen-devel@lists.xen.org
>>>>>> > http://lists.xen.org/xen-devel
>>>>>
>>>>
>>>
>
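For completeness, the libxc route Keir suggests could look roughly like the sketch below. This is against the Xen 4.1-era libxc API; the domid, vcpu number and register value are placeholder assumptions, and error handling is minimal:

```c
#include <stdio.h>
#include <xenctrl.h>

/* Sketch: change a guest vcpu register from dom0 userspace.
 * xc_vcpu_getcontext()/xc_vcpu_setcontext() map onto the
 * getvcpucontext/setvcpucontext domctls, which pause the target
 * inside Xen so its register state is synced out of (and back
 * into) the VMCS for us. */
int main(void)
{
    uint32_t domid = 1, vcpu = 0;              /* placeholder target */
    vcpu_guest_context_any_t ctxt;
    xc_interface *xch = xc_interface_open(NULL, NULL, 0);

    if ( !xch )
        return 1;

    if ( xc_vcpu_getcontext(xch, domid, vcpu, &ctxt) == 0 )
    {
        ctxt.x64.user_regs.rax = 0x1234;       /* example register change */
        if ( xc_vcpu_setcontext(xch, domid, vcpu, &ctxt) )
            fprintf(stderr, "setcontext failed\n");
    }

    xc_interface_close(xch);
    return 0;
}
```

Because the context domctls do their own pause/unpause, none of the vcpu_sleep_*() subtleties discussed above arise on this path.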


If you have your own flag in v->pause_flags then indeed you do not need to
vcpu_pause_nosync() in the vmexit handler.

The best sequence would be:
 - vmexit handler: set flag in v->pause_flags, then vcpu_sleep_nosync()
 - domctl entry: vcpu_sleep_sync()
 - domctl exit: clear flag in v->pause_flags, then vcpu_wake()

So that's pretty much what you had in the first place, except for the extra
vcpu_sleep_sync() on domctl entry. That's absolutely critical, and why your
pause_nosync on vmexit, unpause after domctl doesn't work -- *something* is
needed on domctl entry to be sure that the vcpu is descheduled and its state
is synchronised. Of course the extra machinery of vcpu_pause/unpause is
harmless enough, but it's not actually necessary here.

 -- Keir
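The sleep/wake ordering discussed in this thread could be sketched as hypervisor-side code along the following lines. The names `my_vmexit_hook`, `my_domctl` and the `_VPF_my_flag` pause flag are hypothetical illustrations, not real Xen symbols:

```c
/* vmexit handler: give vcpu_sleep_*() a reason to act, then request sleep.
 * Without a set pause flag (or non-zero pause counter), vcpu_sleep_*()
 * operations do nothing. */
static void my_vmexit_hook(struct vcpu *v)
{
    set_bit(_VPF_my_flag, &v->pause_flags);
    vcpu_sleep_nosync(v);          /* don't block in the vmexit path */
}

/* custom domctl: register state is only safe to touch once the vcpu is
 * fully descheduled and its state synced back into the vcpu structure. */
static long my_domctl(struct vcpu *v)
{
    vcpu_sleep_sync(v);            /* critical: wait for the deschedule */

    v->arch.guest_context.user_regs.eax = 0x1234;  /* example change */

    clear_bit(_VPF_my_flag, &v->pause_flags);
    vcpu_wake(v);
    return 0;
}
```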




--===============8265482884604195495==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8265482884604195495==--




From xen-devel-bounces@lists.xen.org Thu Aug 23 19:06:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 19:06:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4cja-00044w-GH; Thu, 23 Aug 2012 19:06:26 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ben.guthro@gmail.com>) id 1T4cjZ-00044r-Jw
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 19:06:25 +0000
X-Env-Sender: ben.guthro@gmail.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1345748774!8592137!1
X-Originating-IP: [209.85.214.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9421 invoked from network); 23 Aug 2012 19:06:16 -0000
Received: from mail-ob0-f173.google.com (HELO mail-ob0-f173.google.com)
	(209.85.214.173)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 19:06:16 -0000
Received: by obbta14 with SMTP id ta14so260415obb.32
	for <xen-devel@lists.xen.org>; Thu, 23 Aug 2012 12:06:14 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=Qinn0Y69KWDsaOtYR7u5Rj1RKI31tB8E4TuCq+kZoVQ=;
	b=RphNzsrq9qg2An7O/jKNGqs1NmJshKpbOkRfyLGuI2CLBWfPuoJnJLtefAOeHcNye+
	6DZ3OZPhak2DoE/ud4Ir4UT9W6nAQkEjZRh2pwAy+ptqe5bPqRaPCWiZqeOdxQrXgCQd
	/pzIVNqTeUJ31uAsJnapw7+JBE4od/F99aVmQQxx0bIF03byTC1hmbyb1/Moi38v+0Ix
	gtb8i/sNwp/6u93FHCIVw030nUi9zzFE0y/pkDgIf+/1Nk4AyFctrjcvOnT+K4tR1K3L
	Dzl61raNRDnhLR91ZiawO+07pSefKiRtUP/+Lv3d8q+08kYXNubhzeEWAOrWOxAlfzzT
	9tOg==
MIME-Version: 1.0
Received: by 10.50.187.228 with SMTP id fv4mr2772438igc.37.1345748774417; Thu,
	23 Aug 2012 12:06:14 -0700 (PDT)
Received: by 10.64.28.203 with HTTP; Thu, 23 Aug 2012 12:06:14 -0700 (PDT)
In-Reply-To: <50367C78.80608@citrix.com>
References: <50367C78.80608@citrix.com>
Date: Thu, 23 Aug 2012 15:06:14 -0400
X-Google-Sender-Auth: a-y7-sAvxUdUfL7_Wafm9IN6dQM
Message-ID: <CAOvdn6U_mSNQBbeboipKHuRJzcbrCe1Kj7ZY9=7N6s--AMESmQ@mail.gmail.com>
From: Ben Guthro <ben@guthro.net>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	John Baboval <john.baboval@citrix.com>,
	Thomas Goetz <thomas.goetz@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen4.2 S3 regression?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

No such luck.

Panic below:

(XEN) Preparing system for ACPI S3 state.
(XEN) Disabling non-boot CPUs ...
(XEN) ----[ Xen-4.2.0-rc3-pre  x86_64  debug=y  Tainted:    C ]----
(XEN) CPU:    1
(XEN) RIP:    e008:[<ffff82c48016773e>] irq_complete_move+0x42/0xb4
(XEN) RFLAGS: 0000000000010086   CONTEXT: hypervisor
(XEN) rax: 0000000000000000   rbx: ffff830148a81880   rcx: 0000000000000001
(XEN) rdx: ffff82c480301e48   rsi: 0000000000000028   rdi: ffff830148a81880
(XEN) rbp: ffff830148a77d80   rsp: ffff830148a77d50   r8:  0000000000000004
(XEN) r9:  ffff82c3fffff000   r10: ffff82c4803027c0   r11: 0000000000000246
(XEN) r12: 0000000000000018   r13: ffff830148a818a4   r14: 0000000000000000
(XEN) r15: 0000000000000001   cr0: 000000008005003b   cr4: 00000000001026b0
(XEN) cr3: 00000000aa2c5000   cr2: 000000000000007c
(XEN) ds: 002b   es: 002b   fs: 0000   gs: 0000   ss: e010   cs: e008
(XEN) Xen stack trace from rsp=ffff830148a77d50:
(XEN)    0000000000000086 ffff830148a809a4 0000000000000000 0000000000000001
(XEN)    ffff830148a77d80 ffff830148ae2620 ffff830148a77da0 ffff82c480144013
(XEN)    ffff830148a81880 0000000000000018 ffff830148a77e10 ffff82c4801696b6
(XEN)    0000000000000086 00000001030d8ac4 0000000000000001 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000202 0000000000000010
(XEN)    ffff82c480302938 0000000000000001 0000000000000001 ffff82c480302938
(XEN)    ffff830148a77e70 ffff82c48017e132 0000001048a77e70 ffff82c480302930
(XEN)    ffff82c480302938 0000000000000001 000000010115fa86 0000000000000003
(XEN)    ffff830148a77f18 0000000000000001 ffffffffffffffff ffff830148ac3080
(XEN)    ffff830148a77e80 ffff82c4801013e3 ffff830148a77ea0 ffff82c480125fb6
(XEN)    ffff830148ac30c0 ffff830148ac30f0 ffff830148a77ec0 ffff82c48012753e
(XEN)    ffff82c4801256aa ffff830148ac3110 ffff830148a77ef0 ffff82c4801278a9
(XEN)    00000000ffffffff ffff830148a77f18 ffff830148a77f18 00000008030d8ac4
(XEN)    ffff830148a77f10 ffff82c48015888d ffff8300aa303000 ffff8300aa303000
(XEN)    ffff830148a77e10 0000000000000000 0000000000000000 0000000000000000
(XEN)    ffffffff81aafda0 ffff88003976df10 ffff88003976dfd8 0000000000000246
(XEN)    00000000deadbeef 0000000000000000 aaaaaaaaaaaaaaaa 0000000000000000
(XEN)    ffffffff8100130a 00000000deadbeef 00000000deadbeef 00000000deadbeef
(XEN)    0000010000000000 ffffffff8100130a 000000000000e033 0000000000000246
(XEN)    ffff88003976def8 000000000000e02b d0354dcf5bcd9824 9ea5d0ef618deca5
(XEN) Xen call trace:
(XEN)    [<ffff82c48016773e>] irq_complete_move+0x42/0xb4
(XEN)    [<ffff82c480144013>] dma_msi_mask+0x1d/0x49
(XEN)    [<ffff82c4801696b6>] fixup_irqs+0x19b/0x2ff
(XEN)    [<ffff82c48017e132>] __cpu_disable+0x337/0x37e
(XEN)    [<ffff82c4801013e3>] take_cpu_down+0x43/0x4a
(XEN)    [<ffff82c480125fb6>] stopmachine_action+0x8a/0xb3
(XEN)    [<ffff82c48012753e>] do_tasklet_work+0x8d/0xc7
(XEN)    [<ffff82c4801278a9>] do_tasklet+0x6b/0x9b
(XEN)    [<ffff82c48015888d>] idle_loop+0x67/0x71
(XEN)
(XEN) Pagetable walk from 000000000000007c:
(XEN)  L4[0x000] = 0000000148adf063 ffffffffffffffff
(XEN)  L3[0x000] = 0000000148ade063 ffffffffffffffff
(XEN)  L2[0x000] = 0000000148add063 ffffffffffffffff
(XEN)  L1[0x000] = 0000000000000000 ffffffffffffffff
(XEN)
(XEN) ****************************************
(XEN) Panic on CPU 1:
(XEN) FATAL PAGE FAULT
(XEN) [error_code=0000]
(XEN) Faulting linear address: 000000000000007c
(XEN) ****************************************
(XEN)
(XEN) Reboot in five seconds...


On Thu, Aug 23, 2012 at 2:54 PM, Andrew Cooper
<andrew.cooper3@citrix.com> wrote:
> On 23/08/12 19:03, Ben Guthro wrote:
>
> I did some more bisecting here, and I came up with another changeset
> that seems to be problematic, Re: IRQs
>
> After bisecting the problem discussed earlier in this thread to the
> changeset below,
> http://xenbits.xen.org/hg/xen-unstable.hg/rev/0695a5cdcb42
>
>
> I worked past that issue by the following hack:
>
> --- a/xen/common/event_channel.c
> +++ b/xen/common/event_channel.c
> @@ -1103,7 +1103,7 @@ void evtchn_destroy_final(struct domain *d)
>  void evtchn_move_pirqs(struct vcpu *v)
>  {
>      struct domain *d = v->domain;
> -    const cpumask_t *mask = cpumask_of(v->processor);
> +    //const cpumask_t *mask = cpumask_of(v->processor);
>      unsigned int port;
>      struct evtchn *chn;
>
> @@ -1111,7 +1111,9 @@ void evtchn_move_pirqs(struct vcpu *v)
>      for ( port = v->pirq_evtchn_head; port; port = chn->u.pirq.next_port )
>      {
>          chn = evtchn_from_port(d, port);
> +#if 0
>          pirq_set_affinity(d, chn->u.pirq.irq, mask);
> +#endif
>      }
>      spin_unlock(&d->event_lock);
>  }
>
>
> This seemed to work for this rather old changeset, but it was not
> sufficient to fix it against the 4.1, or unstable trees.
>
> I further bisected, in combination with this hack, and found the
> following changeset to also be problematic:
>
> http://xenbits.xen.org/hg/xen-unstable.hg/rev/c2cb776a5365
>
>
> That is, before this change I could resume reliably (with the hack
> above) - and after I could not.
> This was surprising to me, as this change also looks rather innocuous.
>
> And by the looks of that changeset, the logic in fixup_irqs() in irq.c
> was changed.
>
> Jan: The commit message says "simplify operations [in] a few cases".
> Was the change in fixup_irqs() deliberate?
>
> ~Andrew
>
>
> Ben: Could you test the attached patch? It is for unstable and undoes the
> logical change to fixup_irqs().
>
> ~Andrew
>
>
> Naturally, backing out this change seems to be non-trivial against the
> tip, since so much around it has changed.
>
> --
> Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
> T: +44 (0)1223 225 900, http://www.citrix.com
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
>
>
> --
> Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
> T: +44 (0)1223 225 900, http://www.citrix.com

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 19:12:42 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 19:12:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4cpP-0004FD-AB; Thu, 23 Aug 2012 19:12:27 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@citrix.com>) id 1T4cpN-0004F5-VN
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 19:12:26 +0000
Received: from [85.158.139.83:5816] by server-12.bemta-5.messagelabs.com id
	03/5B-22359-99086305; Thu, 23 Aug 2012 19:12:25 +0000
X-Env-Sender: julien.grall@citrix.com
X-Msg-Ref: server-5.tower-182.messagelabs.com!1345749142!27597602!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjYxOTM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18501 invoked from network); 23 Aug 2012 19:12:23 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 19:12:23 -0000
X-IronPort-AV: E=Sophos;i="4.80,301,1344225600"; d="scan'208";a="35624014"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	23 Aug 2012 15:12:22 -0400
Received: from [10.80.248.240] (10.80.248.240) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0; Thu, 23 Aug 2012
	15:12:22 -0400
Message-ID: <503680C5.6070509@citrix.com>
Date: Thu, 23 Aug 2012 20:13:09 +0100
From: Julien Grall <julien.grall@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US;
	rv:1.9.1.16) Gecko/20120726 Icedove/3.0.11
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <cover.1345552068.git.julien.grall@citrix.com>	
	<fcf046ea782dda6cacb3bf11813bf1d16e531e6b.1345552068.git.julien.grall@citrix.com>
	<1345728471.12501.90.camel@zakaz.uk.xensource.com>
In-Reply-To: <1345728471.12501.90.camel@zakaz.uk.xensource.com>
Cc: "christian.limpach@gmail.com" <christian.limpach@gmail.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [XEN][RFC PATCH V2 11/17] xc: modify save/restore
 to support multiple device models
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/23/2012 02:27 PM, Ian Campbell wrote:
>
>> @@ -103,6 +103,9 @@ static ssize_t rdexact(xc_interface *xch, struct restore_ctx *ctx,
>>   #else
>>   #define RDEXACT read_exact
>>   #endif
>> +
>> +#define QEMUSIG_SIZE 21
>> +
>>   /*
>>   ** In the state file (or during transfer), all page-table pages are
>>   ** converted into a 'canonical' form where references to actual mfns
>> @@ -467,7 +522,7 @@ static int buffer_tail_hvm(xc_interface *xch, struct restore_ctx *ctx,
>>                              int vcpuextstate, uint32_t vcpuextstate_size)
>>   {
>>       uint8_t *tmp;
>> -    unsigned char qemusig[21];
>> +    unsigned char qemusig[QEMUSIG_SIZE + 1];
>>      
> An extra + 1 here?
>    
QEMUSIG_SIZE doesn't account for the trailing '\0', so we need to add 1.
Without the +1, if an error occurred, the last character of the signature
would be lost from the output log.

> [...]
>    
>> -    qemusig[20] = '\0';
>> +    qemusig[QEMUSIG_SIZE] = '\0';
>>      
> This is one bigger than it used to be now.
>
> Perhaps this is an unrelated bug fix (I haven't checked the real length of
> the sig), in which case please can you split it out and submit it
> separately?
>    

#define QEMU_SIGNATURE "DeviceModelRecord0002"
Just checked: the length is indeed 21. I will send a separate patch with
this change.

-- 
Julien

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 19:26:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 19:26:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4d31-0004Qn-5c; Thu, 23 Aug 2012 19:26:31 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ben.guthro@gmail.com>) id 1T4d2z-0004Qi-Vh
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 19:26:30 +0000
Received: from [85.158.143.35:8117] by server-1.bemta-4.messagelabs.com id
	FA/F4-12504-5E386305; Thu, 23 Aug 2012 19:26:29 +0000
X-Env-Sender: ben.guthro@gmail.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1345749986!5511245!1
X-Originating-IP: [209.85.210.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30861 invoked from network); 23 Aug 2012 19:26:27 -0000
Received: from mail-iy0-f173.google.com (HELO mail-iy0-f173.google.com)
	(209.85.210.173)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 19:26:27 -0000
Received: by iakx26 with SMTP id x26so2283799iak.32
	for <xen-devel@lists.xen.org>; Thu, 23 Aug 2012 12:26:26 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=7LQEnfyEqAHitzXc0HslcaCdlcQ2NmQ42KD0MmfDNFo=;
	b=TTIzVZuaoE9ia7WpVDIBCF2QfAFHKxq8teLJ5rRapnjqBUbHMEXhuS7zE4AfhBiLkX
	5tfN5glTM0fQsqY8ye2I675AF3rrzUh83I9/h1bs8fxKgQYo2/89EpN/L7g83FcL9mxX
	xGQYayuARaCrk7ynVPgOcgZDLYTrM0jPgmoHsRI9nkZjrwES/WlElM/QInu4YKSxAZHD
	VHYD38QZljCO61UKuwq1kXFPPPj/z79x8+qxVH8YtSe/wvLAbl3v4WCxPukjf8KgoGiX
	9ciYRq97hjy7Qv18a+Gr8a1SGyvdcpjzgqjI/zYqR2fTjRGm3IVoKu4JLjHQTF8pkesC
	QEyQ==
MIME-Version: 1.0
Received: by 10.50.180.129 with SMTP id do1mr2849975igc.28.1345749986052; Thu,
	23 Aug 2012 12:26:26 -0700 (PDT)
Received: by 10.64.28.203 with HTTP; Thu, 23 Aug 2012 12:26:22 -0700 (PDT)
In-Reply-To: <CAOvdn6U_mSNQBbeboipKHuRJzcbrCe1Kj7ZY9=7N6s--AMESmQ@mail.gmail.com>
References: <50367C78.80608@citrix.com>
	<CAOvdn6U_mSNQBbeboipKHuRJzcbrCe1Kj7ZY9=7N6s--AMESmQ@mail.gmail.com>
Date: Thu, 23 Aug 2012 15:26:22 -0400
X-Google-Sender-Auth: ZofpVW36WGuNAa8pV2kCg6ZKdgI
Message-ID: <CAOvdn6V8_FzN9X37VDE-G14qiFL8xL+CibMnZPXzNYO_UFNzFg@mail.gmail.com>
From: Ben Guthro <ben@guthro.net>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	John Baboval <john.baboval@citrix.com>,
	Thomas Goetz <thomas.goetz@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen4.2 S3 regression?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Interestingly enough, just updating to the parent of this changeset,
without the hack mentioned previously, seems to be enough to
suspend/resume multiple times on this machine.



On Thu, Aug 23, 2012 at 3:06 PM, Ben Guthro <ben@guthro.net> wrote:
> No such luck.
>
> Panic below:
>
> (XEN) Preparing system for ACPI S3 state.
> (XEN) Disabling non-boot CPUs ...
> (XEN) ----[ Xen-4.2.0-rc3-pre  x86_64  debug=y  Tainted:    C ]----
> (XEN) CPU:    1
> (XEN) RIP:    e008:[<ffff82c48016773e>] irq_complete_move+0x42/0xb4
> (XEN) RFLAGS: 0000000000010086   CONTEXT: hypervisor
> (XEN) rax: 0000000000000000   rbx: ffff830148a81880   rcx: 0000000000000001
> (XEN) rdx: ffff82c480301e48   rsi: 0000000000000028   rdi: ffff830148a81880
> (XEN) rbp: ffff830148a77d80   rsp: ffff830148a77d50   r8:  0000000000000004
> (XEN) r9:  ffff82c3fffff000   r10: ffff82c4803027c0   r11: 0000000000000246
> (XEN) r12: 0000000000000018   r13: ffff830148a818a4   r14: 0000000000000000
> (XEN) r15: 0000000000000001   cr0: 000000008005003b   cr4: 00000000001026b0
> (XEN) cr3: 00000000aa2c5000   cr2: 000000000000007c
> (XEN) ds: 002b   es: 002b   fs: 0000   gs: 0000   ss: e010   cs: e008
> (XEN) Xen stack trace from rsp=ffff830148a77d50:
> (XEN)    0000000000000086 ffff830148a809a4 0000000000000000 0000000000000001
> (XEN)    ffff830148a77d80 ffff830148ae2620 ffff830148a77da0 ffff82c480144013
> (XEN)    ffff830148a81880 0000000000000018 ffff830148a77e10 ffff82c4801696b6
> (XEN)    0000000000000086 00000001030d8ac4 0000000000000001 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000202 0000000000000010
> (XEN)    ffff82c480302938 0000000000000001 0000000000000001 ffff82c480302938
> (XEN)    ffff830148a77e70 ffff82c48017e132 0000001048a77e70 ffff82c480302930
> (XEN)    ffff82c480302938 0000000000000001 000000010115fa86 0000000000000003
> (XEN)    ffff830148a77f18 0000000000000001 ffffffffffffffff ffff830148ac3080
> (XEN)    ffff830148a77e80 ffff82c4801013e3 ffff830148a77ea0 ffff82c480125fb6
> (XEN)    ffff830148ac30c0 ffff830148ac30f0 ffff830148a77ec0 ffff82c48012753e
> (XEN)    ffff82c4801256aa ffff830148ac3110 ffff830148a77ef0 ffff82c4801278a9
> (XEN)    00000000ffffffff ffff830148a77f18 ffff830148a77f18 00000008030d8ac4
> (XEN)    ffff830148a77f10 ffff82c48015888d ffff8300aa303000 ffff8300aa303000
> (XEN)    ffff830148a77e10 0000000000000000 0000000000000000 0000000000000000
> (XEN)    ffffffff81aafda0 ffff88003976df10 ffff88003976dfd8 0000000000000246
> (XEN)    00000000deadbeef 0000000000000000 aaaaaaaaaaaaaaaa 0000000000000000
> (XEN)    ffffffff8100130a 00000000deadbeef 00000000deadbeef 00000000deadbeef
> (XEN)    0000010000000000 ffffffff8100130a 000000000000e033 0000000000000246
> (XEN)    ffff88003976def8 000000000000e02b d0354dcf5bcd9824 9ea5d0ef618deca5
> (XEN) Xen call trace:
> (XEN)    [<ffff82c48016773e>] irq_complete_move+0x42/0xb4
> (XEN)    [<ffff82c480144013>] dma_msi_mask+0x1d/0x49
> (XEN)    [<ffff82c4801696b6>] fixup_irqs+0x19b/0x2ff
> (XEN)    [<ffff82c48017e132>] __cpu_disable+0x337/0x37e
> (XEN)    [<ffff82c4801013e3>] take_cpu_down+0x43/0x4a
> (XEN)    [<ffff82c480125fb6>] stopmachine_action+0x8a/0xb3
> (XEN)    [<ffff82c48012753e>] do_tasklet_work+0x8d/0xc7
> (XEN)    [<ffff82c4801278a9>] do_tasklet+0x6b/0x9b
> (XEN)    [<ffff82c48015888d>] idle_loop+0x67/0x71
> (XEN)
> (XEN) Pagetable walk from 000000000000007c:
> (XEN)  L4[0x000] = 0000000148adf063 ffffffffffffffff
> (XEN)  L3[0x000] = 0000000148ade063 ffffffffffffffff
> (XEN)  L2[0x000] = 0000000148add063 ffffffffffffffff
> (XEN)  L1[0x000] = 0000000000000000 ffffffffffffffff
> (XEN)
> (XEN) ****************************************
> (XEN) Panic on CPU 1:
> (XEN) FATAL PAGE FAULT
> (XEN) [error_code=0000]
> (XEN) Faulting linear address: 000000000000007c
> (XEN) ****************************************
> (XEN)
> (XEN) Reboot in five seconds...
>
>
> On Thu, Aug 23, 2012 at 2:54 PM, Andrew Cooper
> <andrew.cooper3@citrix.com> wrote:
>> On 23/08/12 19:03, Ben Guthro wrote:
>>
>> I did some more bisecting here, and I came up with another changeset
>> that seems to be problematic, Re: IRQs
>>
>> After bisecting the problem discussed earlier in this thread to the
>> changeset below,
>> http://xenbits.xen.org/hg/xen-unstable.hg/rev/0695a5cdcb42
>>
>>
>> I worked past that issue by the following hack:
>>
>> --- a/xen/common/event_channel.c
>> +++ b/xen/common/event_channel.c
>> @@ -1103,7 +1103,7 @@ void evtchn_destroy_final(struct domain *d)
>>  void evtchn_move_pirqs(struct vcpu *v)
>>  {
>>      struct domain *d = v->domain;
>> -    const cpumask_t *mask = cpumask_of(v->processor);
>> +    //const cpumask_t *mask = cpumask_of(v->processor);
>>      unsigned int port;
>>      struct evtchn *chn;
>>
>> @@ -1111,7 +1111,9 @@ void evtchn_move_pirqs(struct vcpu *v)
>>      for ( port = v->pirq_evtchn_head; port; port = chn->u.pirq.next_port )
>>      {
>>          chn = evtchn_from_port(d, port);
>> +#if 0
>>          pirq_set_affinity(d, chn->u.pirq.irq, mask);
>> +#endif
>>      }
>>      spin_unlock(&d->event_lock);
>>  }
>>
>>
>> This seemed to work for this rather old changeset, but it was not
>> sufficient to fix it against the 4.1, or unstable trees.
>>
>> I further bisected, in combination with this hack, and found the
>> following changeset to also be problematic:
>>
>> http://xenbits.xen.org/hg/xen-unstable.hg/rev/c2cb776a5365
>>
>>
>> That is, before this change I could resume reliably (with the hack
>> above) - and after I could not.
>> This was surprising to me, as this change also looks rather innocuous.
>>
>> And by the looks of that changeset, the logic in fixup_irqs() in irq.c
>> was changed.
>>
>> Jan: The commit message says "simplify operations [in] a few cases".
>> Was the change in fixup_irqs() deliberate?
>>
>> ~Andrew
>>
>>
>> Ben: Could you test the attached patch?  It is for unstable and undoes the
>> logical change to fixup_irqs()
>>
>> ~Andrew
>>
>>
>> Naturally, backing out this change seems to be non-trivial against the
>> tip, since so much around it has changed.
>>
>> --
>> Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
>> T: +44 (0)1223 225 900, http://www.citrix.com
>>
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel
>>
>>
>> --
>> Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
>> T: +44 (0)1223 225 900, http://www.citrix.com

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 19:38:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 19:38:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4dES-0004vP-57; Thu, 23 Aug 2012 19:38:20 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1T4dER-0004vI-8R
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 19:38:19 +0000
Received: from [85.158.143.99:19196] by server-1.bemta-4.messagelabs.com id
	74/DB-12504-AA686305; Thu, 23 Aug 2012 19:38:18 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-7.tower-216.messagelabs.com!1345750696!24254457!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjYxOTM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30084 invoked from network); 23 Aug 2012 19:38:17 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 19:38:17 -0000
X-IronPort-AV: E=Sophos;i="4.80,301,1344225600"; d="scan'208";a="35627170"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	23 Aug 2012 15:38:16 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 23 Aug 2012 15:38:15 -0400
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1T4dEN-0002B1-GF;
	Thu, 23 Aug 2012 20:38:15 +0100
Message-ID: <503686A7.5050206@citrix.com>
Date: Thu, 23 Aug 2012 20:38:15 +0100
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120714 Thunderbird/14.0
MIME-Version: 1.0
To: Ben Guthro <ben@guthro.net>
References: <50367C78.80608@citrix.com>
	<CAOvdn6U_mSNQBbeboipKHuRJzcbrCe1Kj7ZY9=7N6s--AMESmQ@mail.gmail.com>
In-Reply-To: <CAOvdn6U_mSNQBbeboipKHuRJzcbrCe1Kj7ZY9=7N6s--AMESmQ@mail.gmail.com>
X-Enigmail-Version: 1.4.3
Cc: Jan Beulich <jbeulich@suse.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	John Baboval <john.baboval@citrix.com>,
	Thomas Goetz <thomas.goetz@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen4.2 S3 regression?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On 23/08/12 20:06, Ben Guthro wrote:
> No such luck.

From xen-devel-bounces@lists.xen.org Thu Aug 23 19:38:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 19:38:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4dES-0004vP-57; Thu, 23 Aug 2012 19:38:20 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1T4dER-0004vI-8R
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 19:38:19 +0000
Received: from [85.158.143.99:19196] by server-1.bemta-4.messagelabs.com id
	74/DB-12504-AA686305; Thu, 23 Aug 2012 19:38:18 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-7.tower-216.messagelabs.com!1345750696!24254457!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjYxOTM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30084 invoked from network); 23 Aug 2012 19:38:17 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 19:38:17 -0000
X-IronPort-AV: E=Sophos;i="4.80,301,1344225600"; d="scan'208";a="35627170"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	23 Aug 2012 15:38:16 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 23 Aug 2012 15:38:15 -0400
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1T4dEN-0002B1-GF;
	Thu, 23 Aug 2012 20:38:15 +0100
Message-ID: <503686A7.5050206@citrix.com>
Date: Thu, 23 Aug 2012 20:38:15 +0100
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120714 Thunderbird/14.0
MIME-Version: 1.0
To: Ben Guthro <ben@guthro.net>
References: <50367C78.80608@citrix.com>
	<CAOvdn6U_mSNQBbeboipKHuRJzcbrCe1Kj7ZY9=7N6s--AMESmQ@mail.gmail.com>
In-Reply-To: <CAOvdn6U_mSNQBbeboipKHuRJzcbrCe1Kj7ZY9=7N6s--AMESmQ@mail.gmail.com>
X-Enigmail-Version: 1.4.3
Cc: Jan Beulich <jbeulich@suse.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	John Baboval <john.baboval@citrix.com>,
	Thomas Goetz <thomas.goetz@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen4.2 S3 regression?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On 23/08/12 20:06, Ben Guthro wrote:
> No such luck.

Huh.  It was a shot in the dark, but I was really not expecting this.

>
> Panic below:
>
> (XEN) Preparing system for ACPI S3 state.
> (XEN) Disabling non-boot CPUs ...
> (XEN) ----[ Xen-4.2.0-rc3-pre  x86_64  debug=y  Tainted:    C ]----
> (XEN) CPU:    1
> (XEN) RIP:    e008:[<ffff82c48016773e>] irq_complete_move+0x42/0xb4
> (XEN) RFLAGS: 0000000000010086   CONTEXT: hypervisor
> (XEN) rax: 0000000000000000   rbx: ffff830148a81880   rcx: 0000000000000001
> (XEN) rdx: ffff82c480301e48   rsi: 0000000000000028   rdi: ffff830148a81880
> (XEN) rbp: ffff830148a77d80   rsp: ffff830148a77d50   r8:  0000000000000004
> (XEN) r9:  ffff82c3fffff000   r10: ffff82c4803027c0   r11: 0000000000000246
> (XEN) r12: 0000000000000018   r13: ffff830148a818a4   r14: 0000000000000000
> (XEN) r15: 0000000000000001   cr0: 000000008005003b   cr4: 00000000001026b0
> (XEN) cr3: 00000000aa2c5000   cr2: 000000000000007c
> (XEN) ds: 002b   es: 002b   fs: 0000   gs: 0000   ss: e010   cs: e008
> (XEN) Xen stack trace from rsp=ffff830148a77d50:
> (XEN)    0000000000000086 ffff830148a809a4 0000000000000000 0000000000000001
> (XEN)    ffff830148a77d80 ffff830148ae2620 ffff830148a77da0 ffff82c480144013
> (XEN)    ffff830148a81880 0000000000000018 ffff830148a77e10 ffff82c4801696b6
> (XEN)    0000000000000086 00000001030d8ac4 0000000000000001 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000202 0000000000000010
> (XEN)    ffff82c480302938 0000000000000001 0000000000000001 ffff82c480302938
> (XEN)    ffff830148a77e70 ffff82c48017e132 0000001048a77e70 ffff82c480302930
> (XEN)    ffff82c480302938 0000000000000001 000000010115fa86 0000000000000003
> (XEN)    ffff830148a77f18 0000000000000001 ffffffffffffffff ffff830148ac3080
> (XEN)    ffff830148a77e80 ffff82c4801013e3 ffff830148a77ea0 ffff82c480125fb6
> (XEN)    ffff830148ac30c0 ffff830148ac30f0 ffff830148a77ec0 ffff82c48012753e
> (XEN)    ffff82c4801256aa ffff830148ac3110 ffff830148a77ef0 ffff82c4801278a9
> (XEN)    00000000ffffffff ffff830148a77f18 ffff830148a77f18 00000008030d8ac4
> (XEN)    ffff830148a77f10 ffff82c48015888d ffff8300aa303000 ffff8300aa303000
> (XEN)    ffff830148a77e10 0000000000000000 0000000000000000 0000000000000000
> (XEN)    ffffffff81aafda0 ffff88003976df10 ffff88003976dfd8 0000000000000246
> (XEN)    00000000deadbeef 0000000000000000 aaaaaaaaaaaaaaaa 0000000000000000
> (XEN)    ffffffff8100130a 00000000deadbeef 00000000deadbeef 00000000deadbeef
> (XEN)    0000010000000000 ffffffff8100130a 000000000000e033 0000000000000246
> (XEN)    ffff88003976def8 000000000000e02b d0354dcf5bcd9824 9ea5d0ef618deca5
> (XEN) Xen call trace:
> (XEN)    [<ffff82c48016773e>] irq_complete_move+0x42/0xb4
> (XEN)    [<ffff82c480144013>] dma_msi_mask+0x1d/0x49
> (XEN)    [<ffff82c4801696b6>] fixup_irqs+0x19b/0x2ff
> (XEN)    [<ffff82c48017e132>] __cpu_disable+0x337/0x37e
> (XEN)    [<ffff82c4801013e3>] take_cpu_down+0x43/0x4a
> (XEN)    [<ffff82c480125fb6>] stopmachine_action+0x8a/0xb3
> (XEN)    [<ffff82c48012753e>] do_tasklet_work+0x8d/0xc7
> (XEN)    [<ffff82c4801278a9>] do_tasklet+0x6b/0x9b
> (XEN)    [<ffff82c48015888d>] idle_loop+0x67/0x71
> (XEN)
> (XEN) Pagetable walk from 000000000000007c:
> (XEN)  L4[0x000] = 0000000148adf063 ffffffffffffffff
> (XEN)  L3[0x000] = 0000000148ade063 ffffffffffffffff
> (XEN)  L2[0x000] = 0000000148add063 ffffffffffffffff
> (XEN)  L1[0x000] = 0000000000000000 ffffffffffffffff
> (XEN)
> (XEN) ****************************************
> (XEN) Panic on CPU 1:
> (XEN) FATAL PAGE FAULT
> (XEN) [error_code=0000]
> (XEN) Faulting linear address: 000000000000007c
> (XEN) ****************************************
> (XEN)
> (XEN) Reboot in five seconds...

Looks like some pointer has turned to NULL, although I can't identify
exactly which.

Either way, I would not pay it too much heed.

~Andrew

>
>
> On Thu, Aug 23, 2012 at 2:54 PM, Andrew Cooper
> <andrew.cooper3@citrix.com> wrote:
>> On 23/08/12 19:03, Ben Guthro wrote:
>>
>> I did some more bisecting here, and I came up with another changeset
>> that seems to be problematic regarding IRQs.
>>
>> After bisecting the problem discussed earlier in this thread to the
>> changeset below,
>> http://xenbits.xen.org/hg/xen-unstable.hg/rev/0695a5cdcb42
>>
>>
>> I worked past that issue by the following hack:
>>
>> --- a/xen/common/event_channel.c
>> +++ b/xen/common/event_channel.c
>> @@ -1103,7 +1103,7 @@ void evtchn_destroy_final(struct domain *d)
>>  void evtchn_move_pirqs(struct vcpu *v)
>>  {
>>      struct domain *d = v->domain;
>> -    const cpumask_t *mask = cpumask_of(v->processor);
>> +    //const cpumask_t *mask = cpumask_of(v->processor);
>>      unsigned int port;
>>      struct evtchn *chn;
>>
>> @@ -1111,7 +1111,9 @@ void evtchn_move_pirqs(struct vcpu *v)
>>      for ( port = v->pirq_evtchn_head; port; port = chn->u.pirq.next_port )
>>      {
>>          chn = evtchn_from_port(d, port);
>> +#if 0
>>          pirq_set_affinity(d, chn->u.pirq.irq, mask);
>> +#endif
>>      }
>>      spin_unlock(&d->event_lock);
>>  }
>>
>>
>> This seemed to work for this rather old changeset, but it was not
>> sufficient to fix it against the 4.1, or unstable trees.
>>
>> I further bisected, in combination with this hack, and found the
>> following changeset to also be problematic:
>>
>> http://xenbits.xen.org/hg/xen-unstable.hg/rev/c2cb776a5365
>>
>>
>> That is, before this change I could resume reliably (with the hack
>> above) - and after I could not.
>> This was surprising to me, as this change also looks rather innocuous.
>>
>> And by the looks of that changeset, the logic in fixup_irqs() in irq.c
>> was changed.
>>
>> Jan: The commit message says "simplify operations [in] a few cases".
>> Was the change in fixup_irqs() deliberate?
>>
>> ~Andrew
>>
>>
>> Ben: Could you test the attached patch?  It is for unstable and undoes the
>> logical change to fixup_irqs().
>>
>> ~Andrew
>>
>>
>> Naturally, backing out this change seems to be non-trivial against the
>> tip, since so much around it has changed.
>>
>> --
>> Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
>> T: +44 (0)1223 225 900, http://www.citrix.com
>>
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel
>>
>>
>> --
>> Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
>> T: +44 (0)1223 225 900, http://www.citrix.com

-- 
Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
T: +44 (0)1223 225 900, http://www.citrix.com


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 19:41:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 19:41:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4dH3-00052w-PN; Thu, 23 Aug 2012 19:41:01 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T4dH1-00052m-SY
	for xen-devel@lists.xensource.com; Thu, 23 Aug 2012 19:41:00 +0000
Received: from [85.158.143.99:28835] by server-3.bemta-4.messagelabs.com id
	EA/38-08232-B4786305; Thu, 23 Aug 2012 19:40:59 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-7.tower-216.messagelabs.com!1345750855!24254794!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDcxNTE5Mg==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5702 invoked from network); 23 Aug 2012 19:40:57 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-7.tower-216.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 23 Aug 2012 19:40:57 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7NJeg7n026805
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 23 Aug 2012 19:40:44 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7NJeeYT026762
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 23 Aug 2012 19:40:41 GMT
Received: from abhmt110.oracle.com (abhmt110.oracle.com [141.146.116.62])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7NJed7N000490; Thu, 23 Aug 2012 14:40:40 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 23 Aug 2012 12:40:39 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 672974031E; Thu, 23 Aug 2012 15:30:36 -0400 (EDT)
Date: Thu, 23 Aug 2012 15:30:36 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20120823193036.GA27221@phenom.dumpdata.com>
References: <1345132488-27323-1-git-send-email-konrad.wilk@oracle.com>
	<1345132488-27323-5-git-send-email-konrad.wilk@oracle.com>
	<alpine.DEB.2.02.1208171818260.15568@kaball.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.02.1208171818260.15568@kaball.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: [Xen-devel] [PATCH 4/5] xen/swiotlb: Use the
 swiotlb_late_init_with_tbl to init Xen-SWIOTLB late when PV PCI is used.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Aug 17, 2012 at 06:25:56PM +0100, Stefano Stabellini wrote:
> On Thu, 16 Aug 2012, Konrad Rzeszutek Wilk wrote:
> > With this patch we provide the functionality to initialize the
> > Xen-SWIOTLB late in the bootup cycle - specifically for
> > Xen PCI-frontend. It will still work if the user has
> > supplied 'iommu=soft' on the Linux command line.
> > 
> > CC: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
> > [v1: Fix smatch warnings]
> > Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > ---
> >  arch/x86/include/asm/xen/swiotlb-xen.h |    2 +
> >  arch/x86/xen/pci-swiotlb-xen.c         |   17 +++++++++-
> >  drivers/xen/swiotlb-xen.c              |   54 ++++++++++++++++++++++++++-----
> >  include/xen/swiotlb-xen.h              |    1 +
> >  4 files changed, 64 insertions(+), 10 deletions(-)
> > 
> > diff --git a/arch/x86/include/asm/xen/swiotlb-xen.h b/arch/x86/include/asm/xen/swiotlb-xen.h
> > index 1be1ab7..ee52fca 100644
> > --- a/arch/x86/include/asm/xen/swiotlb-xen.h
> > +++ b/arch/x86/include/asm/xen/swiotlb-xen.h
> > @@ -5,10 +5,12 @@
> >  extern int xen_swiotlb;
> >  extern int __init pci_xen_swiotlb_detect(void);
> >  extern void __init pci_xen_swiotlb_init(void);
> > +extern int pci_xen_swiotlb_init_late(void);
> >  #else
> >  #define xen_swiotlb (0)
> >  static inline int __init pci_xen_swiotlb_detect(void) { return 0; }
> >  static inline void __init pci_xen_swiotlb_init(void) { }
> > +static inline int pci_xen_swiotlb_init_late(void) { return -ENXIO; }
> >  #endif
> >  
> >  #endif /* _ASM_X86_SWIOTLB_XEN_H */
> > diff --git a/arch/x86/xen/pci-swiotlb-xen.c b/arch/x86/xen/pci-swiotlb-xen.c
> > index 1c17227..031d8bc 100644
> > --- a/arch/x86/xen/pci-swiotlb-xen.c
> > +++ b/arch/x86/xen/pci-swiotlb-xen.c
> > @@ -12,7 +12,7 @@
> >  #include <asm/iommu.h>
> >  #include <asm/dma.h>
> >  #endif
> > -
> > +#include <linux/export.h>
> >  int xen_swiotlb __read_mostly;
> >  
> >  static struct dma_map_ops xen_swiotlb_dma_ops = {
> > @@ -76,6 +76,21 @@ void __init pci_xen_swiotlb_init(void)
> >  		pci_request_acs();
> >  	}
> >  }
> > +
> > +int pci_xen_swiotlb_init_late(void)
> > +{
> > +	int rc = xen_swiotlb_late_init(1);
> > +	if (rc)
> > +		return rc;
> > +
> > +	dma_ops = &xen_swiotlb_dma_ops;
> > +	/* Make sure ACS will be enabled */
> > +	pci_request_acs();
> > +
> > +	return 0;
> > +}
> > +EXPORT_SYMBOL_GPL(pci_xen_swiotlb_init_late);
> 
> shouldn't we be checking whether xen_swiotlb has already been
> initialized?

Yes.
> 
> 
> >  IOMMU_INIT_FINISH(pci_xen_swiotlb_detect,
> >  		  0,
> >  		  pci_xen_swiotlb_init,
> > diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
> > index 1afb4fb..1942a3e 100644
> > --- a/drivers/xen/swiotlb-xen.c
> > +++ b/drivers/xen/swiotlb-xen.c
> > @@ -145,13 +145,14 @@ xen_swiotlb_fixup(void *buf, size_t size, unsigned long nslabs)
> >  	return 0;
> >  }
> >  
> > -void __init xen_swiotlb_init(int verbose)
> > +static int __ref __xen_swiotlb_init(int verbose, bool early)
> >  {
> >  	unsigned long bytes;
> >  	int rc = -ENOMEM;
> >  	unsigned long nr_tbl;
> >  	char *m = NULL;
> >  	unsigned int repeat = 3;
> > +	unsigned long order;
> >
> >  	nr_tbl = swiotlb_nr_tbl();
> >  	if (nr_tbl)
> > @@ -161,12 +162,31 @@ void __init xen_swiotlb_init(int verbose)
> >  		xen_io_tlb_nslabs = ALIGN(xen_io_tlb_nslabs, IO_TLB_SEGSIZE);
> >  	}
> >  retry:
> > +	order = get_order(xen_io_tlb_nslabs << IO_TLB_SHIFT);
> >  	bytes = xen_io_tlb_nslabs << IO_TLB_SHIFT;
> >  
> >  	/*
> >  	 * Get IO TLB memory from any location.
> >  	 */
> > -	xen_io_tlb_start = alloc_bootmem_pages(PAGE_ALIGN(bytes));
> > +	if (early)
> > +		xen_io_tlb_start = alloc_bootmem_pages(PAGE_ALIGN(bytes));
> > +	else {
> > +#define SLABS_PER_PAGE (1 << (PAGE_SHIFT - IO_TLB_SHIFT))
> > +#define IO_TLB_MIN_SLABS ((1<<20) >> IO_TLB_SHIFT)
> 
> 
> > +		while ((SLABS_PER_PAGE << order) > IO_TLB_MIN_SLABS) {
> > +			xen_io_tlb_start = (void *)__get_free_pages(__GFP_NOWARN, order);
> > +			if (xen_io_tlb_start)
> > +				break;
> > +			order--;
> > +		}
> > +		if (order != get_order(bytes)) {
> > +			pr_warn("Warning: only able to allocate %ld MB "
> > +				"for software IO TLB\n", (PAGE_SIZE << order) >> 20);
> > +			xen_io_tlb_nslabs = SLABS_PER_PAGE << order;
> > +			bytes = xen_io_tlb_nslabs << IO_TLB_SHIFT;
> > +		}
> > +	}
> >  	if (!xen_io_tlb_start) {
> >  		m = "Cannot allocate Xen-SWIOTLB buffer!\n";
> >  		goto error;
> > @@ -179,17 +199,22 @@ retry:
> >  			       bytes,
> >  			       xen_io_tlb_nslabs);
> >  	if (rc) {
> > -		free_bootmem(__pa(xen_io_tlb_start), PAGE_ALIGN(bytes));
> > +		if (early)
> > +			free_bootmem(__pa(xen_io_tlb_start), PAGE_ALIGN(bytes));
> >  		m = "Failed to get contiguous memory for DMA from Xen!\n"\
> >  		    "You either: don't have the permissions, do not have"\
> >  		    " enough free memory under 4GB, or the hypervisor memory"\
> > -		    "is too fragmented!";
> > +		    " is too fragmented!";
> >  		goto error;
> >  	}
> >  	start_dma_addr = xen_virt_to_bus(xen_io_tlb_start);
> > -	swiotlb_init_with_tbl(xen_io_tlb_start, xen_io_tlb_nslabs, verbose);
> >  
> > -	return;
> > +	rc = 0;
> > +	if (early)
> > +		swiotlb_init_with_tbl(xen_io_tlb_start, xen_io_tlb_nslabs, verbose);
> > +	else
> > +		rc = swiotlb_late_init_with_tbl(xen_io_tlb_start, xen_io_tlb_nslabs);
> > +	return rc;
> >  error:
> >  	if (repeat--) {
> >  		xen_io_tlb_nslabs = max(1024UL, /* Min is 2MB */
> > @@ -198,10 +223,21 @@ error:
> >  		      (xen_io_tlb_nslabs << IO_TLB_SHIFT) >> 20);
> >  		goto retry;
> >  	}
> > -	xen_raw_printk("%s (rc:%d)", m, rc);
> > -	panic("%s (rc:%d)", m, rc);
> > +	pr_err("%s (rc:%d)", m, rc);
> > +	if (early)
> > +		panic("%s (rc:%d)", m, rc);
> > +	else
> > +		free_pages((unsigned long)xen_io_tlb_start, order);
> > +	return rc;
> > +}
> 
> All these "if"s make the code harder to read. Would it be possible at
> least to unify the error paths and just check on after_bootmem whether
> we need to call free_pages or free_bootmem?
> In fact using after_bootmem you might get away without an early
> parameter.


Sure thing. Here is a stab at it.


>From 573db1dbc6adf3f5efc8c699714329caffa43daf Mon Sep 17 00:00:00 2001
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Thu, 23 Aug 2012 13:55:26 -0400
Subject: [PATCH 1/3] xen/swiotlb: Move the nr_tbl determination into its
 own function.

Moving the function out of the way to prepare for the late
SWIOTLB init.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 drivers/xen/swiotlb-xen.c |   21 +++++++++++----------
 1 files changed, 11 insertions(+), 10 deletions(-)

diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 1afb4fb..a2aad6e 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -144,25 +144,26 @@ xen_swiotlb_fixup(void *buf, size_t size, unsigned long nslabs)
 	} while (i < nslabs);
 	return 0;
 }
+static unsigned long xen_set_nslabs(unsigned long nr_tbl)
+{
+	if (!nr_tbl) {
+		xen_io_tlb_nslabs = (64 * 1024 * 1024 >> IO_TLB_SHIFT);
+		xen_io_tlb_nslabs = ALIGN(xen_io_tlb_nslabs, IO_TLB_SEGSIZE);
+	} else
+		xen_io_tlb_nslabs = nr_tbl;
 
+	return xen_io_tlb_nslabs << IO_TLB_SHIFT;
+}
 void __init xen_swiotlb_init(int verbose)
 {
 	unsigned long bytes;
 	int rc = -ENOMEM;
-	unsigned long nr_tbl;
 	char *m = NULL;
 	unsigned int repeat = 3;
 
-	nr_tbl = swiotlb_nr_tbl();
-	if (nr_tbl)
-		xen_io_tlb_nslabs = nr_tbl;
-	else {
-		xen_io_tlb_nslabs = (64 * 1024 * 1024 >> IO_TLB_SHIFT);
-		xen_io_tlb_nslabs = ALIGN(xen_io_tlb_nslabs, IO_TLB_SEGSIZE);
-	}
+	xen_io_tlb_nslabs = swiotlb_nr_tbl();
 retry:
-	bytes = xen_io_tlb_nslabs << IO_TLB_SHIFT;
-
+	bytes = xen_set_nslabs(xen_io_tlb_nslabs);
 	/*
 	 * Get IO TLB memory from any location.
 	 */
-- 
1.7.7.6



>From 1bee8a244d1fed47e307c5951a98f7922cab7993 Mon Sep 17 00:00:00 2001
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Thu, 23 Aug 2012 14:03:55 -0400
Subject: [PATCH 2/3] xen/swiotlb: Move the error strings to their own function.

That way we can more easily reuse those errors when using the
late SWIOTLB init.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 drivers/xen/swiotlb-xen.c |   35 +++++++++++++++++++++++++++--------
 1 files changed, 27 insertions(+), 8 deletions(-)

diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index a2aad6e..701b103 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -154,11 +154,33 @@ static unsigned long xen_set_nslabs(unsigned long nr_tbl)
 
 	return xen_io_tlb_nslabs << IO_TLB_SHIFT;
 }
+
+enum xen_swiotlb_err {
+	XEN_SWIOTLB_UNKNOWN = 0,
+	XEN_SWIOTLB_ENOMEM,
+	XEN_SWIOTLB_EFIXUP
+};
+
+static const char *xen_swiotlb_error(enum xen_swiotlb_err err)
+{
+	switch (err) {
+	case XEN_SWIOTLB_ENOMEM:
+		return "Cannot allocate Xen-SWIOTLB buffer\n";
+	case XEN_SWIOTLB_EFIXUP:
+		return "Failed to get contiguous memory for DMA from Xen!\n"\
+		    "You either: don't have the permissions, do not have"\
+		    " enough free memory under 4GB, or the hypervisor memory"\
+		    " is too fragmented!";
+	default:
+		break;
+	}
+	return "";
+}
 void __init xen_swiotlb_init(int verbose)
 {
 	unsigned long bytes;
 	int rc = -ENOMEM;
-	char *m = NULL;
+	enum xen_swiotlb_err m_ret = XEN_SWIOTLB_UNKNOWN;
 	unsigned int repeat = 3;
 
 	xen_io_tlb_nslabs = swiotlb_nr_tbl();
@@ -169,7 +191,7 @@ retry:
 	 */
 	xen_io_tlb_start = alloc_bootmem_pages(PAGE_ALIGN(bytes));
 	if (!xen_io_tlb_start) {
-		m = "Cannot allocate Xen-SWIOTLB buffer!\n";
+		m_ret = XEN_SWIOTLB_ENOMEM;
 		goto error;
 	}
 	xen_io_tlb_end = xen_io_tlb_start + bytes;
@@ -181,10 +203,7 @@ retry:
 			       xen_io_tlb_nslabs);
 	if (rc) {
 		free_bootmem(__pa(xen_io_tlb_start), PAGE_ALIGN(bytes));
-		m = "Failed to get contiguous memory for DMA from Xen!\n"\
-		    "You either: don't have the permissions, do not have"\
-		    " enough free memory under 4GB, or the hypervisor memory"\
-		    "is too fragmented!";
+		m_ret = XEN_SWIOTLB_EFIXUP;
 		goto error;
 	}
 	start_dma_addr = xen_virt_to_bus(xen_io_tlb_start);
@@ -199,8 +218,8 @@ error:
 		      (xen_io_tlb_nslabs << IO_TLB_SHIFT) >> 20);
 		goto retry;
 	}
-	xen_raw_printk("%s (rc:%d)", m, rc);
-	panic("%s (rc:%d)", m, rc);
+	xen_raw_printk("%s (rc:%d)", xen_swiotlb_error(m_ret), rc);
+	panic("%s (rc:%d)", xen_swiotlb_error(m_ret), rc);
 }
 
 void *
-- 
1.7.7.6


and the last one:


>From d815253cbf26a5273e77f829cc414e0808500263 Mon Sep 17 00:00:00 2001
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Thu, 23 Aug 2012 14:36:15 -0400
Subject: [PATCH 3/3] xen/swiotlb: Use the swiotlb_late_init_with_tbl to init
 Xen-SWIOTLB late when PV PCI is used.

With this patch we provide the functionality to initialize the
Xen-SWIOTLB late in the bootup cycle - specifically for
Xen PCI-frontend. It will still work if the user has
supplied 'iommu=soft' on the Linux command line.

CC: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
[v1: Fix smatch warnings]
[v2: Added check for xen_swiotlb]
[v3: Rebased with new xen-swiotlb changes]
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/include/asm/xen/swiotlb-xen.h |    2 +
 arch/x86/xen/pci-swiotlb-xen.c         |   22 ++++++++++++++-
 drivers/xen/swiotlb-xen.c              |   48 +++++++++++++++++++++++++------
 include/xen/swiotlb-xen.h              |    2 +-
 4 files changed, 62 insertions(+), 12 deletions(-)

diff --git a/arch/x86/include/asm/xen/swiotlb-xen.h b/arch/x86/include/asm/xen/swiotlb-xen.h
index 1be1ab7..ee52fca 100644
--- a/arch/x86/include/asm/xen/swiotlb-xen.h
+++ b/arch/x86/include/asm/xen/swiotlb-xen.h
@@ -5,10 +5,12 @@
 extern int xen_swiotlb;
 extern int __init pci_xen_swiotlb_detect(void);
 extern void __init pci_xen_swiotlb_init(void);
+extern int pci_xen_swiotlb_init_late(void);
 #else
 #define xen_swiotlb (0)
 static inline int __init pci_xen_swiotlb_detect(void) { return 0; }
 static inline void __init pci_xen_swiotlb_init(void) { }
+static inline int pci_xen_swiotlb_init_late(void) { return -ENXIO; }
 #endif
 
 #endif /* _ASM_X86_SWIOTLB_XEN_H */
diff --git a/arch/x86/xen/pci-swiotlb-xen.c b/arch/x86/xen/pci-swiotlb-xen.c
index 1c17227..406f9c4 100644
--- a/arch/x86/xen/pci-swiotlb-xen.c
+++ b/arch/x86/xen/pci-swiotlb-xen.c
@@ -12,7 +12,7 @@
 #include <asm/iommu.h>
 #include <asm/dma.h>
 #endif
-
+#include <linux/export.h>
 int xen_swiotlb __read_mostly;
 
 static struct dma_map_ops xen_swiotlb_dma_ops = {
@@ -76,6 +76,26 @@ void __init pci_xen_swiotlb_init(void)
 		pci_request_acs();
 	}
 }
+
+int pci_xen_swiotlb_init_late(void)
+{
+	int rc;
+
+	if (xen_swiotlb)
+		return 0;
+
+	rc = xen_swiotlb_init(1);
+	if (rc)
+		return rc;
+
+	dma_ops = &xen_swiotlb_dma_ops;
+	/* Make sure ACS will be enabled */
+	pci_request_acs();
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(pci_xen_swiotlb_init_late);
+
 IOMMU_INIT_FINISH(pci_xen_swiotlb_detect,
 		  0,
 		  pci_xen_swiotlb_init,
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 701b103..f0825cb 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -176,9 +176,9 @@ static const char *xen_swiotlb_error(enum xen_swiotlb_err err)
 	}
 	return "";
 }
-void __init xen_swiotlb_init(int verbose)
+int __ref xen_swiotlb_init(int verbose)
 {
-	unsigned long bytes;
+	unsigned long bytes, order;
 	int rc = -ENOMEM;
 	enum xen_swiotlb_err m_ret = XEN_SWIOTLB_UNKNOWN;
 	unsigned int repeat = 3;
@@ -186,10 +186,28 @@ void __init xen_swiotlb_init(int verbose)
 	xen_io_tlb_nslabs = swiotlb_nr_tbl();
 retry:
 	bytes = xen_set_nslabs(xen_io_tlb_nslabs);
+	order = get_order(xen_io_tlb_nslabs << IO_TLB_SHIFT);
 	/*
 	 * Get IO TLB memory from any location.
 	 */
-	xen_io_tlb_start = alloc_bootmem_pages(PAGE_ALIGN(bytes));
+	if (!after_bootmem)
+		xen_io_tlb_start = alloc_bootmem_pages(PAGE_ALIGN(bytes));
+	else {
+#define SLABS_PER_PAGE (1 << (PAGE_SHIFT - IO_TLB_SHIFT))
+#define IO_TLB_MIN_SLABS ((1<<20) >> IO_TLB_SHIFT)
+		while ((SLABS_PER_PAGE << order) > IO_TLB_MIN_SLABS) {
+			xen_io_tlb_start = (void *)__get_free_pages(__GFP_NOWARN, order);
+			if (xen_io_tlb_start)
+				break;
+			order--;
+		}
+		if (order != get_order(bytes)) {
+			pr_warn("Warning: only able to allocate %ld MB "
+				"for software IO TLB\n", (PAGE_SIZE << order) >> 20);
+			xen_io_tlb_nslabs = SLABS_PER_PAGE << order;
+			bytes = xen_io_tlb_nslabs << IO_TLB_SHIFT;
+		}
+	}
 	if (!xen_io_tlb_start) {
 		m_ret = XEN_SWIOTLB_ENOMEM;
 		goto error;
@@ -202,14 +220,21 @@ retry:
 			       bytes,
 			       xen_io_tlb_nslabs);
 	if (rc) {
-		free_bootmem(__pa(xen_io_tlb_start), PAGE_ALIGN(bytes));
+		if (!after_bootmem)
+			free_bootmem(__pa(xen_io_tlb_start), PAGE_ALIGN(bytes));
+		else {
+			free_pages((unsigned long)xen_io_tlb_start, order);
+			xen_io_tlb_start = NULL;
+		}
 		m_ret = XEN_SWIOTLB_EFIXUP;
 		goto error;
 	}
 	start_dma_addr = xen_virt_to_bus(xen_io_tlb_start);
-	swiotlb_init_with_tbl(xen_io_tlb_start, xen_io_tlb_nslabs, verbose);
-
-	return;
+	if (!after_bootmem)
+		swiotlb_init_with_tbl(xen_io_tlb_start, xen_io_tlb_nslabs, verbose);
+	else
+		rc = swiotlb_late_init_with_tbl(xen_io_tlb_start, xen_io_tlb_nslabs);
+	return rc;
 error:
 	if (repeat--) {
 		xen_io_tlb_nslabs = max(1024UL, /* Min is 2MB */
@@ -218,10 +243,13 @@ error:
 		      (xen_io_tlb_nslabs << IO_TLB_SHIFT) >> 20);
 		goto retry;
 	}
-	xen_raw_printk("%s (rc:%d)", xen_swiotlb_error(m_ret), rc);
-	panic("%s (rc:%d)", xen_swiotlb_error(m_ret), rc);
+	pr_err("%s (rc:%d)", xen_swiotlb_error(m_ret), rc);
+	if (!after_bootmem)
+		panic("%s (rc:%d)", xen_swiotlb_error(m_ret), rc);
+	else
+		free_pages((unsigned long)xen_io_tlb_start, order);
+	return rc;
 }
-
 void *
 xen_swiotlb_alloc_coherent(struct device *hwdev, size_t size,
 			   dma_addr_t *dma_handle, gfp_t flags,
diff --git a/include/xen/swiotlb-xen.h b/include/xen/swiotlb-xen.h
index 4f4d449..f26f9f3 100644
--- a/include/xen/swiotlb-xen.h
+++ b/include/xen/swiotlb-xen.h
@@ -3,7 +3,7 @@
 
 #include <linux/swiotlb.h>
 
-extern void xen_swiotlb_init(int verbose);
+extern int xen_swiotlb_init(int verbose);
 
 extern void
 *xen_swiotlb_alloc_coherent(struct device *hwdev, size_t size,
-- 
1.7.7.6


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 19:41:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 19:41:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4dH3-00052w-PN; Thu, 23 Aug 2012 19:41:01 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T4dH1-00052m-SY
	for xen-devel@lists.xensource.com; Thu, 23 Aug 2012 19:41:00 +0000
Received: from [85.158.143.99:28835] by server-3.bemta-4.messagelabs.com id
	EA/38-08232-B4786305; Thu, 23 Aug 2012 19:40:59 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-7.tower-216.messagelabs.com!1345750855!24254794!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDcxNTE5Mg==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5702 invoked from network); 23 Aug 2012 19:40:57 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-7.tower-216.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 23 Aug 2012 19:40:57 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7NJeg7n026805
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 23 Aug 2012 19:40:44 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7NJeeYT026762
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 23 Aug 2012 19:40:41 GMT
Received: from abhmt110.oracle.com (abhmt110.oracle.com [141.146.116.62])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7NJed7N000490; Thu, 23 Aug 2012 14:40:40 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 23 Aug 2012 12:40:39 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 672974031E; Thu, 23 Aug 2012 15:30:36 -0400 (EDT)
Date: Thu, 23 Aug 2012 15:30:36 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20120823193036.GA27221@phenom.dumpdata.com>
References: <1345132488-27323-1-git-send-email-konrad.wilk@oracle.com>
	<1345132488-27323-5-git-send-email-konrad.wilk@oracle.com>
	<alpine.DEB.2.02.1208171818260.15568@kaball.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.02.1208171818260.15568@kaball.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: [Xen-devel] [PATCH 4/5] xen/swiotlb: Use the
 swiotlb_late_init_with_tbl to init Xen-SWIOTLB late when PV PCI is used.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Aug 17, 2012 at 06:25:56PM +0100, Stefano Stabellini wrote:
> On Thu, 16 Aug 2012, Konrad Rzeszutek Wilk wrote:
> > With this patch we provide the functionality to initialize the
> > Xen-SWIOTLB late in the bootup cycle - specifically for
> > Xen PCI-frontend. We will still work if the user
> > supplied 'iommu=soft' on the Linux command line.
> > 
> > CC: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
> > [v1: Fix smatch warnings]
> > Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > ---
> >  arch/x86/include/asm/xen/swiotlb-xen.h |    2 +
> >  arch/x86/xen/pci-swiotlb-xen.c         |   17 +++++++++-
> >  drivers/xen/swiotlb-xen.c              |   54 ++++++++++++++++++++++++++-----
> >  include/xen/swiotlb-xen.h              |    1 +
> >  4 files changed, 64 insertions(+), 10 deletions(-)
> > 
> > diff --git a/arch/x86/include/asm/xen/swiotlb-xen.h b/arch/x86/include/asm/xen/swiotlb-xen.h
> > index 1be1ab7..ee52fca 100644
> > --- a/arch/x86/include/asm/xen/swiotlb-xen.h
> > +++ b/arch/x86/include/asm/xen/swiotlb-xen.h
> > @@ -5,10 +5,12 @@
> >  extern int xen_swiotlb;
> >  extern int __init pci_xen_swiotlb_detect(void);
> >  extern void __init pci_xen_swiotlb_init(void);
> > +extern int pci_xen_swiotlb_init_late(void);
> >  #else
> >  #define xen_swiotlb (0)
> >  static inline int __init pci_xen_swiotlb_detect(void) { return 0; }
> >  static inline void __init pci_xen_swiotlb_init(void) { }
> > +static inline int pci_xen_swiotlb_init_late(void) { return -ENXIO; }
> >  #endif
> >  
> >  #endif /* _ASM_X86_SWIOTLB_XEN_H */
> > diff --git a/arch/x86/xen/pci-swiotlb-xen.c b/arch/x86/xen/pci-swiotlb-xen.c
> > index 1c17227..031d8bc 100644
> > --- a/arch/x86/xen/pci-swiotlb-xen.c
> > +++ b/arch/x86/xen/pci-swiotlb-xen.c
> > @@ -12,7 +12,7 @@
> >  #include <asm/iommu.h>
> >  #include <asm/dma.h>
> >  #endif
> > -
> > +#include <linux/export.h>
> >  int xen_swiotlb __read_mostly;
> >  
> >  static struct dma_map_ops xen_swiotlb_dma_ops = {
> > @@ -76,6 +76,21 @@ void __init pci_xen_swiotlb_init(void)
> >  		pci_request_acs();
> >  	}
> >  }
> > +
> > +int pci_xen_swiotlb_init_late(void)
> > +{
> > +	int rc = xen_swiotlb_late_init(1);
> > +	if (rc)
> > +		return rc;
> > +
> > +	dma_ops = &xen_swiotlb_dma_ops;
> > +	/* Make sure ACS will be enabled */
> > +	pci_request_acs();
> > +
> > +	return 0;
> > +}
> > +EXPORT_SYMBOL_GPL(pci_xen_swiotlb_init_late);
> 
> shouldn't we be checking whether the xen_swiotlb has already been
> initialized?

Yes.
> 
> 
> >  IOMMU_INIT_FINISH(pci_xen_swiotlb_detect,
> >  		  0,
> >  		  pci_xen_swiotlb_init,
> > diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
> > index 1afb4fb..1942a3e 100644
> > --- a/drivers/xen/swiotlb-xen.c
> > +++ b/drivers/xen/swiotlb-xen.c
> > @@ -145,13 +145,14 @@ xen_swiotlb_fixup(void *buf, size_t size, unsigned long nslabs)
> >  	return 0;
> >  }
> >  
> > -void __init xen_swiotlb_init(int verbose)
> > +static int __ref __xen_swiotlb_init(int verbose, bool early)
> >  {
> >  	unsigned long bytes;
> >  	int rc = -ENOMEM;
> >  	unsigned long nr_tbl;
> >  	char *m = NULL;
> >  	unsigned int repeat = 3;
> > +	unsigned long order;
> >
> >  	nr_tbl = swiotlb_nr_tbl();
> >  	if (nr_tbl)
> > @@ -161,12 +162,31 @@ void __init xen_swiotlb_init(int verbose)
> >  		xen_io_tlb_nslabs = ALIGN(xen_io_tlb_nslabs, IO_TLB_SEGSIZE);
> >  	}
> >  retry:
> > +	order = get_order(xen_io_tlb_nslabs << IO_TLB_SHIFT);
> >  	bytes = xen_io_tlb_nslabs << IO_TLB_SHIFT;
> >  
> >  	/*
> >  	 * Get IO TLB memory from any location.
> >  	 */
> > -	xen_io_tlb_start = alloc_bootmem_pages(PAGE_ALIGN(bytes));
> > +	if (early)
> > +		xen_io_tlb_start = alloc_bootmem_pages(PAGE_ALIGN(bytes));
> > +	else {
> > +#define SLABS_PER_PAGE (1 << (PAGE_SHIFT - IO_TLB_SHIFT))
> > +#define IO_TLB_MIN_SLABS ((1<<20) >> IO_TLB_SHIFT)
> 
> 
> > +		while ((SLABS_PER_PAGE << order) > IO_TLB_MIN_SLABS) {
> > +			xen_io_tlb_start = (void *)__get_free_pages(__GFP_NOWARN, order);
> > +			if (xen_io_tlb_start)
> > +				break;
> > +			order--;
> > +		}
> > +		if (order != get_order(bytes)) {
> > +			pr_warn("Warning: only able to allocate %ld MB "
> > +				"for software IO TLB\n", (PAGE_SIZE << order) >> 20);
> > +			xen_io_tlb_nslabs = SLABS_PER_PAGE << order;
> > +			bytes = xen_io_tlb_nslabs << IO_TLB_SHIFT;
> > +		}
> > +	}
> >  	if (!xen_io_tlb_start) {
> >  		m = "Cannot allocate Xen-SWIOTLB buffer!\n";
> >  		goto error;
> > @@ -179,17 +199,22 @@ retry:
> >  			       bytes,
> >  			       xen_io_tlb_nslabs);
> >  	if (rc) {
> > -		free_bootmem(__pa(xen_io_tlb_start), PAGE_ALIGN(bytes));
> > +		if (early)
> > +			free_bootmem(__pa(xen_io_tlb_start), PAGE_ALIGN(bytes));
> >  		m = "Failed to get contiguous memory for DMA from Xen!\n"\
> >  		    "You either: don't have the permissions, do not have"\
> >  		    " enough free memory under 4GB, or the hypervisor memory"\
> > -		    "is too fragmented!";
> > +		    " is too fragmented!";
> >  		goto error;
> >  	}
> >  	start_dma_addr = xen_virt_to_bus(xen_io_tlb_start);
> > -	swiotlb_init_with_tbl(xen_io_tlb_start, xen_io_tlb_nslabs, verbose);
> >  
> > -	return;
> > +	rc = 0;
> > +	if (early)
> > +		swiotlb_init_with_tbl(xen_io_tlb_start, xen_io_tlb_nslabs, verbose);
> > +	else
> > +		rc = swiotlb_late_init_with_tbl(xen_io_tlb_start, xen_io_tlb_nslabs);
> > +	return rc;
> >  error:
> >  	if (repeat--) {
> >  		xen_io_tlb_nslabs = max(1024UL, /* Min is 2MB */
> > @@ -198,10 +223,21 @@ error:
> >  		      (xen_io_tlb_nslabs << IO_TLB_SHIFT) >> 20);
> >  		goto retry;
> >  	}
> > -	xen_raw_printk("%s (rc:%d)", m, rc);
> > -	panic("%s (rc:%d)", m, rc);
> > +	pr_err("%s (rc:%d)", m, rc);
> > +	if (early)
> > +		panic("%s (rc:%d)", m, rc);
> > +	else
> > +		free_pages((unsigned long)xen_io_tlb_start, order);
> > +	return rc;
> > +}
> 
> All these "if"s make the code harder to read. Would it be possible at
> least to unify the error paths and just check on after_bootmem whether
> we need to call free_pages or free_bootmem?
> In fact using after_bootmem you might get away without an early
> parameter.


Sure thing. Here is a stab at it.


>From 573db1dbc6adf3f5efc8c699714329caffa43daf Mon Sep 17 00:00:00 2001
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Thu, 23 Aug 2012 13:55:26 -0400
Subject: [PATCH 1/3] xen/swiotlb: Move the nr_tbl determination into its
 own function.

Move the nr_tbl determination into its own helper function to prepare
for the late SWIOTLB init.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 drivers/xen/swiotlb-xen.c |   21 +++++++++++----------
 1 files changed, 11 insertions(+), 10 deletions(-)

diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 1afb4fb..a2aad6e 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -144,25 +144,26 @@ xen_swiotlb_fixup(void *buf, size_t size, unsigned long nslabs)
 	} while (i < nslabs);
 	return 0;
 }
+static unsigned long xen_set_nslabs(unsigned long nr_tbl)
+{
+	if (!nr_tbl) {
+		xen_io_tlb_nslabs = (64 * 1024 * 1024 >> IO_TLB_SHIFT);
+		xen_io_tlb_nslabs = ALIGN(xen_io_tlb_nslabs, IO_TLB_SEGSIZE);
+	} else
+		xen_io_tlb_nslabs = nr_tbl;
 
+	return xen_io_tlb_nslabs << IO_TLB_SHIFT;
+}
 void __init xen_swiotlb_init(int verbose)
 {
 	unsigned long bytes;
 	int rc = -ENOMEM;
-	unsigned long nr_tbl;
 	char *m = NULL;
 	unsigned int repeat = 3;
 
-	nr_tbl = swiotlb_nr_tbl();
-	if (nr_tbl)
-		xen_io_tlb_nslabs = nr_tbl;
-	else {
-		xen_io_tlb_nslabs = (64 * 1024 * 1024 >> IO_TLB_SHIFT);
-		xen_io_tlb_nslabs = ALIGN(xen_io_tlb_nslabs, IO_TLB_SEGSIZE);
-	}
+	xen_io_tlb_nslabs = swiotlb_nr_tbl();
 retry:
-	bytes = xen_io_tlb_nslabs << IO_TLB_SHIFT;
-
+	bytes = xen_set_nslabs(xen_io_tlb_nslabs);
 	/*
 	 * Get IO TLB memory from any location.
 	 */
-- 
1.7.7.6



>From 1bee8a244d1fed47e307c5951a98f7922cab7993 Mon Sep 17 00:00:00 2001
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Thu, 23 Aug 2012 14:03:55 -0400
Subject: [PATCH 2/3] xen/swiotlb: Move the error strings to their own function.

That way we can more easily reuse those errors when using the
late SWIOTLB init.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 drivers/xen/swiotlb-xen.c |   35 +++++++++++++++++++++++++++--------
 1 files changed, 27 insertions(+), 8 deletions(-)

diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index a2aad6e..701b103 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -154,11 +154,33 @@ static unsigned long xen_set_nslabs(unsigned long nr_tbl)
 
 	return xen_io_tlb_nslabs << IO_TLB_SHIFT;
 }
+
+enum xen_swiotlb_err {
+	XEN_SWIOTLB_UNKNOWN = 0,
+	XEN_SWIOTLB_ENOMEM,
+	XEN_SWIOTLB_EFIXUP
+};
+
+static const char *xen_swiotlb_error(enum xen_swiotlb_err err)
+{
+	switch (err) {
+	case XEN_SWIOTLB_ENOMEM:
+		return "Cannot allocate Xen-SWIOTLB buffer\n";
+	case XEN_SWIOTLB_EFIXUP:
+		return "Failed to get contiguous memory for DMA from Xen!\n"\
+		    "You either: don't have the permissions, do not have"\
+		    " enough free memory under 4GB, or the hypervisor memory"\
+		    " is too fragmented!";
+	default:
+		break;
+	}
+	return "";
+}
 void __init xen_swiotlb_init(int verbose)
 {
 	unsigned long bytes;
 	int rc = -ENOMEM;
-	char *m = NULL;
+	enum xen_swiotlb_err m_ret = XEN_SWIOTLB_UNKNOWN;
 	unsigned int repeat = 3;
 
 	xen_io_tlb_nslabs = swiotlb_nr_tbl();
@@ -169,7 +191,7 @@ retry:
 	 */
 	xen_io_tlb_start = alloc_bootmem_pages(PAGE_ALIGN(bytes));
 	if (!xen_io_tlb_start) {
-		m = "Cannot allocate Xen-SWIOTLB buffer!\n";
+		m_ret = XEN_SWIOTLB_ENOMEM;
 		goto error;
 	}
 	xen_io_tlb_end = xen_io_tlb_start + bytes;
@@ -181,10 +203,7 @@ retry:
 			       xen_io_tlb_nslabs);
 	if (rc) {
 		free_bootmem(__pa(xen_io_tlb_start), PAGE_ALIGN(bytes));
-		m = "Failed to get contiguous memory for DMA from Xen!\n"\
-		    "You either: don't have the permissions, do not have"\
-		    " enough free memory under 4GB, or the hypervisor memory"\
-		    "is too fragmented!";
+		m_ret = XEN_SWIOTLB_EFIXUP;
 		goto error;
 	}
 	start_dma_addr = xen_virt_to_bus(xen_io_tlb_start);
@@ -199,8 +218,8 @@ error:
 		      (xen_io_tlb_nslabs << IO_TLB_SHIFT) >> 20);
 		goto retry;
 	}
-	xen_raw_printk("%s (rc:%d)", m, rc);
-	panic("%s (rc:%d)", m, rc);
+	xen_raw_printk("%s (rc:%d)", xen_swiotlb_error(m_ret), rc);
+	panic("%s (rc:%d)", xen_swiotlb_error(m_ret), rc);
 }
 
 void *
-- 
1.7.7.6


and the last one:


>From d815253cbf26a5273e77f829cc414e0808500263 Mon Sep 17 00:00:00 2001
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Thu, 23 Aug 2012 14:36:15 -0400
Subject: [PATCH 3/3] xen/swiotlb: Use the swiotlb_late_init_with_tbl to init
 Xen-SWIOTLB late when PV PCI is used.

With this patch we provide the functionality to initialize the
Xen-SWIOTLB late in the bootup cycle - specifically for
Xen PCI-frontend. We will still work if the user
supplied 'iommu=soft' on the Linux command line.

CC: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
[v1: Fix smatch warnings]
[v2: Added check for xen_swiotlb]
[v3: Rebased with new xen-swiotlb changes]
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/include/asm/xen/swiotlb-xen.h |    2 +
 arch/x86/xen/pci-swiotlb-xen.c         |   22 ++++++++++++++-
 drivers/xen/swiotlb-xen.c              |   48 +++++++++++++++++++++++++------
 include/xen/swiotlb-xen.h              |    2 +-
 4 files changed, 62 insertions(+), 12 deletions(-)

diff --git a/arch/x86/include/asm/xen/swiotlb-xen.h b/arch/x86/include/asm/xen/swiotlb-xen.h
index 1be1ab7..ee52fca 100644
--- a/arch/x86/include/asm/xen/swiotlb-xen.h
+++ b/arch/x86/include/asm/xen/swiotlb-xen.h
@@ -5,10 +5,12 @@
 extern int xen_swiotlb;
 extern int __init pci_xen_swiotlb_detect(void);
 extern void __init pci_xen_swiotlb_init(void);
+extern int pci_xen_swiotlb_init_late(void);
 #else
 #define xen_swiotlb (0)
 static inline int __init pci_xen_swiotlb_detect(void) { return 0; }
 static inline void __init pci_xen_swiotlb_init(void) { }
+static inline int pci_xen_swiotlb_init_late(void) { return -ENXIO; }
 #endif
 
 #endif /* _ASM_X86_SWIOTLB_XEN_H */
diff --git a/arch/x86/xen/pci-swiotlb-xen.c b/arch/x86/xen/pci-swiotlb-xen.c
index 1c17227..406f9c4 100644
--- a/arch/x86/xen/pci-swiotlb-xen.c
+++ b/arch/x86/xen/pci-swiotlb-xen.c
@@ -12,7 +12,7 @@
 #include <asm/iommu.h>
 #include <asm/dma.h>
 #endif
-
+#include <linux/export.h>
 int xen_swiotlb __read_mostly;
 
 static struct dma_map_ops xen_swiotlb_dma_ops = {
@@ -76,6 +76,26 @@ void __init pci_xen_swiotlb_init(void)
 		pci_request_acs();
 	}
 }
+
+int pci_xen_swiotlb_init_late(void)
+{
+	int rc;
+
+	if (xen_swiotlb)
+		return 0;
+
+	rc = xen_swiotlb_init(1);
+	if (rc)
+		return rc;
+
+	dma_ops = &xen_swiotlb_dma_ops;
+	/* Make sure ACS will be enabled */
+	pci_request_acs();
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(pci_xen_swiotlb_init_late);
+
 IOMMU_INIT_FINISH(pci_xen_swiotlb_detect,
 		  0,
 		  pci_xen_swiotlb_init,
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 701b103..f0825cb 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -176,9 +176,9 @@ static const char *xen_swiotlb_error(enum xen_swiotlb_err err)
 	}
 	return "";
 }
-void __init xen_swiotlb_init(int verbose)
+int __ref xen_swiotlb_init(int verbose)
 {
-	unsigned long bytes;
+	unsigned long bytes, order;
 	int rc = -ENOMEM;
 	enum xen_swiotlb_err m_ret = XEN_SWIOTLB_UNKNOWN;
 	unsigned int repeat = 3;
@@ -186,10 +186,28 @@ void __init xen_swiotlb_init(int verbose)
 	xen_io_tlb_nslabs = swiotlb_nr_tbl();
 retry:
 	bytes = xen_set_nslabs(xen_io_tlb_nslabs);
+	order = get_order(xen_io_tlb_nslabs << IO_TLB_SHIFT);
 	/*
 	 * Get IO TLB memory from any location.
 	 */
-	xen_io_tlb_start = alloc_bootmem_pages(PAGE_ALIGN(bytes));
+	if (!after_bootmem)
+		xen_io_tlb_start = alloc_bootmem_pages(PAGE_ALIGN(bytes));
+	else {
+#define SLABS_PER_PAGE (1 << (PAGE_SHIFT - IO_TLB_SHIFT))
+#define IO_TLB_MIN_SLABS ((1<<20) >> IO_TLB_SHIFT)
+		while ((SLABS_PER_PAGE << order) > IO_TLB_MIN_SLABS) {
+			xen_io_tlb_start = (void *)__get_free_pages(__GFP_NOWARN, order);
+			if (xen_io_tlb_start)
+				break;
+			order--;
+		}
+		if (order != get_order(bytes)) {
+			pr_warn("Warning: only able to allocate %ld MB "
+				"for software IO TLB\n", (PAGE_SIZE << order) >> 20);
+			xen_io_tlb_nslabs = SLABS_PER_PAGE << order;
+			bytes = xen_io_tlb_nslabs << IO_TLB_SHIFT;
+		}
+	}
 	if (!xen_io_tlb_start) {
 		m_ret = XEN_SWIOTLB_ENOMEM;
 		goto error;
@@ -202,14 +220,21 @@ retry:
 			       bytes,
 			       xen_io_tlb_nslabs);
 	if (rc) {
-		free_bootmem(__pa(xen_io_tlb_start), PAGE_ALIGN(bytes));
+		if (!after_bootmem)
+			free_bootmem(__pa(xen_io_tlb_start), PAGE_ALIGN(bytes));
+		else {
+			free_pages((unsigned long)xen_io_tlb_start, order);
+			xen_io_tlb_start = NULL;
+		}
 		m_ret = XEN_SWIOTLB_EFIXUP;
 		goto error;
 	}
 	start_dma_addr = xen_virt_to_bus(xen_io_tlb_start);
-	swiotlb_init_with_tbl(xen_io_tlb_start, xen_io_tlb_nslabs, verbose);
-
-	return;
+	if (!after_bootmem)
+		swiotlb_init_with_tbl(xen_io_tlb_start, xen_io_tlb_nslabs, verbose);
+	else
+		rc = swiotlb_late_init_with_tbl(xen_io_tlb_start, xen_io_tlb_nslabs);
+	return rc;
 error:
 	if (repeat--) {
 		xen_io_tlb_nslabs = max(1024UL, /* Min is 2MB */
@@ -218,10 +243,13 @@ error:
 		      (xen_io_tlb_nslabs << IO_TLB_SHIFT) >> 20);
 		goto retry;
 	}
-	xen_raw_printk("%s (rc:%d)", xen_swiotlb_error(m_ret), rc);
-	panic("%s (rc:%d)", xen_swiotlb_error(m_ret), rc);
+	pr_err("%s (rc:%d)", xen_swiotlb_error(m_ret), rc);
+	if (!after_bootmem)
+		panic("%s (rc:%d)", xen_swiotlb_error(m_ret), rc);
+	else
+		free_pages((unsigned long)xen_io_tlb_start, order);
+	return rc;
 }
-
 void *
 xen_swiotlb_alloc_coherent(struct device *hwdev, size_t size,
 			   dma_addr_t *dma_handle, gfp_t flags,
diff --git a/include/xen/swiotlb-xen.h b/include/xen/swiotlb-xen.h
index 4f4d449..f26f9f3 100644
--- a/include/xen/swiotlb-xen.h
+++ b/include/xen/swiotlb-xen.h
@@ -3,7 +3,7 @@
 
 #include <linux/swiotlb.h>
 
-extern void xen_swiotlb_init(int verbose);
+extern int xen_swiotlb_init(int verbose);
 
 extern void
 *xen_swiotlb_alloc_coherent(struct device *hwdev, size_t size,
-- 
1.7.7.6



From xen-devel-bounces@lists.xen.org Thu Aug 23 19:50:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 19:50:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4dPx-0005HZ-R6; Thu, 23 Aug 2012 19:50:13 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T4dPv-0005HR-QM
	for xen-devel@lists.xensource.com; Thu, 23 Aug 2012 19:50:12 +0000
Received: from [85.158.138.51:43337] by server-2.bemta-3.messagelabs.com id
	24/98-09157-27986305; Thu, 23 Aug 2012 19:50:10 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-16.tower-174.messagelabs.com!1345751408!27629669!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNzA3ODk4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8117 invoked from network); 23 Aug 2012 19:50:09 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-16.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 23 Aug 2012 19:50:09 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7NJo5Hg010743
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 23 Aug 2012 19:50:05 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7NJo486013830
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 23 Aug 2012 19:50:04 GMT
Received: from abhmt117.oracle.com (abhmt117.oracle.com [141.146.116.69])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7NJo3pa019673; Thu, 23 Aug 2012 14:50:03 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 23 Aug 2012 12:50:03 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 973244031E; Thu, 23 Aug 2012 15:40:00 -0400 (EDT)
Date: Thu, 23 Aug 2012 15:40:00 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: David Vrabel <david.vrabel@citrix.com>
Message-ID: <20120823194000.GA11652@phenom.dumpdata.com>
References: <1345742026-10569-1-git-send-email-david.vrabel@citrix.com>
	<1345742026-10569-4-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1345742026-10569-4-git-send-email-david.vrabel@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Andres Lagar-Cavilla <andres@gridcentric.ca>, xen-devel@lists.xensource.com,
	Keir Fraser <keir@xen.org>
Subject: Re: [Xen-devel] [PATCH 3/3] xen/privcmd: add PRIVCMD_MMAPBATCH_V2
	ioctl
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Aug 23, 2012 at 06:13:46PM +0100, David Vrabel wrote:
> From: David Vrabel <david.vrabel@citrix.com>
> 
> PRIVCMD_MMAPBATCH_V2 extends PRIVCMD_MMAPBATCH with an additional
> field for reporting the error code for every frame that could not be
> mapped.  libxc prefers PRIVCMD_MMAPBATCH_V2 over PRIVCMD_MMAPBATCH.
> 
> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
> ---
>  drivers/xen/privcmd.c |   54 +++++++++++++++++++++++++++++++++++++++----------
>  include/xen/privcmd.h |   10 +++++++++
>  2 files changed, 53 insertions(+), 11 deletions(-)
> 
> diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
> index f8c1b6d..4f97160 100644
> --- a/drivers/xen/privcmd.c
> +++ b/drivers/xen/privcmd.c
> @@ -248,7 +248,8 @@ struct mmap_batch_state {
>  	struct vm_area_struct *vma;
>  	int err;
>  
> -	xen_pfn_t __user *user;
> +	xen_pfn_t __user *user_mfn;
> +	int __user *user_err;
>  };
>  
>  static int mmap_batch_fn(void *data, void *state)
> @@ -275,34 +276,58 @@ static int mmap_return_errors(void *data, void *state)
>  {
>  	xen_pfn_t *mfnp = data;
>  	struct mmap_batch_state *st = state;
> +	int ret = 0;
>  
> -	return put_user(*mfnp, st->user++);
> +	if (st->user_err) {
> +		if ((*mfnp & 0xf0000000U) == 0xf0000000U)
> +			ret = -ENOENT;
> +		else if ((*mfnp & 0xf0000000U) == 0x80000000U)
> +			ret = -EINVAL;

Yikes. Is there any way those 0xf000.. and 0x8000 values can at least be #defined?

> +		else
> +			ret = 0;
> +		return __put_user(ret, st->user_err);
> +	} else
> +		return __put_user(*mfnp, st->user_mfn++);
>  }
>  
>  static struct vm_operations_struct privcmd_vm_ops;
>  
> -static long privcmd_ioctl_mmap_batch(void __user *udata)
> +static long privcmd_ioctl_mmap_batch(void __user *udata, int version)
>  {
>  	int ret;
> -	struct privcmd_mmapbatch m;
> +	struct privcmd_mmapbatch_v2 m;
>  	struct mm_struct *mm = current->mm;
>  	struct vm_area_struct *vma;
>  	unsigned long nr_pages;
>  	LIST_HEAD(pagelist);
>  	struct mmap_batch_state state;
>  
> +	printk("%s(%d)\n", __func__, version);
> +

Hehe.
>  	if (!xen_initial_domain())
>  		return -EPERM;
>  
> -	if (copy_from_user(&m, udata, sizeof(m)))
> -		return -EFAULT;
> +	if (version == 1) {
> +		if (copy_from_user(&m, udata, sizeof(struct privcmd_mmapbatch)))
> +			return -EFAULT;
> +		if (!access_ok(VERIFY_WRITE, m.arr, m.num * sizeof(*m.arr)))
> +			return -EFAULT;

That is new..
> +		m.err = NULL;
> +	} else {

Not else if (version == 2)?
.. what if version 3 comes around?

> +		if (copy_from_user(&m, udata, sizeof(struct privcmd_mmapbatch_v2)))
> +			return -EFAULT;
> +		if (!access_ok(VERIFY_READ, m.arr, m.num * sizeof(*m.arr)))
> +			return -EFAULT;
> +		if (!access_ok(VERIFY_WRITE, m.err, m.num * (sizeof(*m.err))))
> +			return -EFAULT;

I think the VERIFY_WRITE can cover both versions?
> +	}
>  
>  	nr_pages = m.num;
>  	if ((m.num <= 0) || (nr_pages > (LONG_MAX >> PAGE_SHIFT)))
>  		return -EINVAL;
>  
>  	ret = gather_array(&pagelist, m.num, sizeof(xen_pfn_t),
> -			   m.arr);
> +			   (xen_pfn_t *)m.arr);
>  
>  	if (ret || list_empty(&pagelist))
>  		goto out;
> @@ -331,10 +356,11 @@ static long privcmd_ioctl_mmap_batch(void __user *udata)
>  	up_write(&mm->mmap_sem);
>  
>  	if (state.err > 0) {
> -		state.user = m.arr;
> +		state.user_mfn = (xen_pfn_t *)m.arr;
> +		state.user_err = m.err;
>  		ret = traverse_pages(m.num, sizeof(xen_pfn_t),
> -			       &pagelist,
> -			       mmap_return_errors, &state);
> +				     &pagelist,
> +				     mmap_return_errors, &state);
>  	}
>  
>  out:
> @@ -359,7 +385,13 @@ static long privcmd_ioctl(struct file *file,
>  		break;
>  
>  	case IOCTL_PRIVCMD_MMAPBATCH:
> -		ret = privcmd_ioctl_mmap_batch(udata);
> +		ret = privcmd_ioctl_mmap_batch(udata, 1);
> +		printk("%s() batch ret = %d\n", __func__, ret);

Pfff...
> +		break;
> +
> +	case IOCTL_PRIVCMD_MMAPBATCH_V2:
> +		ret = privcmd_ioctl_mmap_batch(udata, 2);
> +		printk("%s() batch ret = %d\n", __func__, ret);
>  		break;
>  
>  	default:
> diff --git a/include/xen/privcmd.h b/include/xen/privcmd.h
> index 17857fb..9fa27c4 100644
> --- a/include/xen/privcmd.h
> +++ b/include/xen/privcmd.h
> @@ -62,6 +62,14 @@ struct privcmd_mmapbatch {
>  	xen_pfn_t __user *arr; /* array of mfns - top nibble set on err */
>  };
>  
> +struct privcmd_mmapbatch_v2 {
> +	unsigned int num; /* number of pages to populate */

unsigned int? Not 'u32'?
> +	domid_t dom;      /* target domain */
> +	__u64 addr;       /* virtual address */
> +	const xen_pfn_t __user *arr; /* array of mfns */
> +	int __user *err;  /* array of error codes */

int? Not a specific type?
> +};
> +
>  /*
>   * @cmd: IOCTL_PRIVCMD_HYPERCALL
>   * @arg: &privcmd_hypercall_t
> @@ -73,5 +81,7 @@ struct privcmd_mmapbatch {
>  	_IOC(_IOC_NONE, 'P', 2, sizeof(struct privcmd_mmap))
>  #define IOCTL_PRIVCMD_MMAPBATCH					\
>  	_IOC(_IOC_NONE, 'P', 3, sizeof(struct privcmd_mmapbatch))
> +#define IOCTL_PRIVCMD_MMAPBATCH_V2				\
> +	_IOC(_IOC_NONE, 'P', 4, sizeof(struct privcmd_mmapbatch_v2))
>  
>  #endif /* __LINUX_PUBLIC_PRIVCMD_H__ */
> -- 
> 1.7.2.5


From xen-devel-bounces@lists.xen.org Thu Aug 23 19:50:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 19:50:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4dPx-0005HZ-R6; Thu, 23 Aug 2012 19:50:13 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T4dPv-0005HR-QM
	for xen-devel@lists.xensource.com; Thu, 23 Aug 2012 19:50:12 +0000
Received: from [85.158.138.51:43337] by server-2.bemta-3.messagelabs.com id
	24/98-09157-27986305; Thu, 23 Aug 2012 19:50:10 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-16.tower-174.messagelabs.com!1345751408!27629669!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNzA3ODk4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8117 invoked from network); 23 Aug 2012 19:50:09 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-16.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 23 Aug 2012 19:50:09 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7NJo5Hg010743
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 23 Aug 2012 19:50:05 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7NJo486013830
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 23 Aug 2012 19:50:04 GMT
Received: from abhmt117.oracle.com (abhmt117.oracle.com [141.146.116.69])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7NJo3pa019673; Thu, 23 Aug 2012 14:50:03 -0500
Received: from phenom.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 23 Aug 2012 12:50:03 -0700
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 973244031E; Thu, 23 Aug 2012 15:40:00 -0400 (EDT)
Date: Thu, 23 Aug 2012 15:40:00 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: David Vrabel <david.vrabel@citrix.com>
Message-ID: <20120823194000.GA11652@phenom.dumpdata.com>
References: <1345742026-10569-1-git-send-email-david.vrabel@citrix.com>
	<1345742026-10569-4-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1345742026-10569-4-git-send-email-david.vrabel@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Andres Lagar-Cavilla <andres@gridcentric.ca>, xen-devel@lists.xensource.com,
	Keir Fraser <keir@xen.org>
Subject: Re: [Xen-devel] [PATCH 3/3] xen/privcmd: add PRIVCMD_MMAPBATCH_V2
	ioctl
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Aug 23, 2012 at 06:13:46PM +0100, David Vrabel wrote:
> From: David Vrabel <david.vrabel@citrix.com>
> 
> PRIVCMD_MMAPBATCH_V2 extends PRIVCMD_MMAPBATCH with an additional
> field for reporting the error code for every frame that could not be
> mapped.  libxc prefers PRIVCMD_MMAPBATCH_V2 over PRIVCMD_MMAPBATCH.
> 
> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
> ---
>  drivers/xen/privcmd.c |   54 +++++++++++++++++++++++++++++++++++++++----------
>  include/xen/privcmd.h |   10 +++++++++
>  2 files changed, 53 insertions(+), 11 deletions(-)
> 
> diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
> index f8c1b6d..4f97160 100644
> --- a/drivers/xen/privcmd.c
> +++ b/drivers/xen/privcmd.c
> @@ -248,7 +248,8 @@ struct mmap_batch_state {
>  	struct vm_area_struct *vma;
>  	int err;
>  
> -	xen_pfn_t __user *user;
> +	xen_pfn_t __user *user_mfn;
> +	int __user *user_err;
>  };
>  
>  static int mmap_batch_fn(void *data, void *state)
> @@ -275,34 +276,58 @@ static int mmap_return_errors(void *data, void *state)
>  {
>  	xen_pfn_t *mfnp = data;
>  	struct mmap_batch_state *st = state;
> +	int ret = 0;
>  
> -	return put_user(*mfnp, st->user++);
> +	if (st->user_err) {
> +		if ((*mfnp & 0xf0000000U) == 0xf0000000U)
> +			ret = -ENOENT;
> +		else if ((*mfnp & 0xf0000000U) == 0x80000000U)
> +			ret = -EINVAL;

Yikes. Any way those 0xf000.. and 0x8000 can at least be #defined
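For illustration, the mapping could look roughly like this with named constants (a userspace mirror of the mmap_return_errors() logic; the macro names here are hypothetical, not from the patch):

```c
#include <errno.h>
#include <stdint.h>

/* Hypothetical names for the magic nibbles open-coded in the patch. */
#define MMAPBATCH_ERR_MASK   0xf0000000U
#define MMAPBATCH_MFN_ERROR  0xf0000000U  /* frame not present -> -ENOENT */
#define MMAPBATCH_INV_ERROR  0x80000000U  /* invalid frame     -> -EINVAL */

/* Userspace mirror of the error mapping done in mmap_return_errors(). */
static int mfn_to_errno(uint32_t mfn)
{
	switch (mfn & MMAPBATCH_ERR_MASK) {
	case MMAPBATCH_MFN_ERROR:
		return -ENOENT;
	case MMAPBATCH_INV_ERROR:
		return -EINVAL;
	default:
		return 0;
	}
}
```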

> +		else
> +			ret = 0;
> +		return __put_user(ret, st->user_err);
> +	} else
> +		return __put_user(*mfnp, st->user_mfn++);
>  }
>  
>  static struct vm_operations_struct privcmd_vm_ops;
>  
> -static long privcmd_ioctl_mmap_batch(void __user *udata)
> +static long privcmd_ioctl_mmap_batch(void __user *udata, int version)
>  {
>  	int ret;
> -	struct privcmd_mmapbatch m;
> +	struct privcmd_mmapbatch_v2 m;
>  	struct mm_struct *mm = current->mm;
>  	struct vm_area_struct *vma;
>  	unsigned long nr_pages;
>  	LIST_HEAD(pagelist);
>  	struct mmap_batch_state state;
>  
> +	printk("%s(%d)\n", __func__, version);
> +

Hehe.
>  	if (!xen_initial_domain())
>  		return -EPERM;
>  
> -	if (copy_from_user(&m, udata, sizeof(m)))
> -		return -EFAULT;
> +	if (version == 1) {
> +		if (copy_from_user(&m, udata, sizeof(struct privcmd_mmapbatch)))
> +			return -EFAULT;
> +		if (!access_ok(VERIFY_WRITE, m.arr, m.num * sizeof(*m.arr)))
> +			return -EFAULT;

That is new..
> +		m.err = NULL;
> +	} else {

Not elseif (version == 2) ?
.. what if version 3 comes around?
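A sketch of the shape being suggested: dispatch on the version explicitly so an unknown value is rejected instead of silently falling through to the v2 path (illustrative only, with the copy_from_user calls reduced to comments):

```c
#include <errno.h>

/* Illustrative only: reject unknown versions instead of treating
 * everything != 1 as version 2. */
static long mmapbatch_copy_request(int version)
{
	switch (version) {
	case 1:
		/* copy_from_user(&m, udata, sizeof(struct privcmd_mmapbatch)) ... */
		return 0;
	case 2:
		/* copy_from_user(&m, udata, sizeof(struct privcmd_mmapbatch_v2)) ... */
		return 0;
	default:
		/* a future version 3 now fails loudly here */
		return -EINVAL;
	}
}
```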

> +		if (copy_from_user(&m, udata, sizeof(struct privcmd_mmapbatch_v2)))
> +			return -EFAULT;
> +		if (!access_ok(VERIFY_READ, m.arr, m.num * sizeof(*m.arr)))
> +			return -EFAULT;
> +		if (!access_ok(VERIFY_WRITE, m.err, m.num * (sizeof(*m.err))))
> +			return -EFAULT;

I think the VERIFY_WRITE can cover both versions?
> +	}
>  
>  	nr_pages = m.num;
>  	if ((m.num <= 0) || (nr_pages > (LONG_MAX >> PAGE_SHIFT)))
>  		return -EINVAL;
>  
>  	ret = gather_array(&pagelist, m.num, sizeof(xen_pfn_t),
> -			   m.arr);
> +			   (xen_pfn_t *)m.arr);
>  
>  	if (ret || list_empty(&pagelist))
>  		goto out;
> @@ -331,10 +356,11 @@ static long privcmd_ioctl_mmap_batch(void __user *udata)
>  	up_write(&mm->mmap_sem);
>  
>  	if (state.err > 0) {
> -		state.user = m.arr;
> +		state.user_mfn = (xen_pfn_t *)m.arr;
> +		state.user_err = m.err;
>  		ret = traverse_pages(m.num, sizeof(xen_pfn_t),
> -			       &pagelist,
> -			       mmap_return_errors, &state);
> +				     &pagelist,
> +				     mmap_return_errors, &state);
>  	}
>  
>  out:
> @@ -359,7 +385,13 @@ static long privcmd_ioctl(struct file *file,
>  		break;
>  
>  	case IOCTL_PRIVCMD_MMAPBATCH:
> -		ret = privcmd_ioctl_mmap_batch(udata);
> +		ret = privcmd_ioctl_mmap_batch(udata, 1);
> +		printk("%s() batch ret = %d\n", __func__, ret);

Pfff...
> +		break;
> +
> +	case IOCTL_PRIVCMD_MMAPBATCH_V2:
> +		ret = privcmd_ioctl_mmap_batch(udata, 2);
> +		printk("%s() batch ret = %d\n", __func__, ret);
>  		break;
>  
>  	default:
> diff --git a/include/xen/privcmd.h b/include/xen/privcmd.h
> index 17857fb..9fa27c4 100644
> --- a/include/xen/privcmd.h
> +++ b/include/xen/privcmd.h
> @@ -62,6 +62,14 @@ struct privcmd_mmapbatch {
>  	xen_pfn_t __user *arr; /* array of mfns - top nibble set on err */
>  };
>  
> +struct privcmd_mmapbatch_v2 {
> +	unsigned int num; /* number of pages to populate */

unsigned int? Not 'u32'?
> +	domid_t dom;      /* target domain */
> +	__u64 addr;       /* virtual address */
> +	const xen_pfn_t __user *arr; /* array of mfns */
> +	int __user *err;  /* array of error codes */

int? Not a specific type?
> +};
> +
>  /*
>   * @cmd: IOCTL_PRIVCMD_HYPERCALL
>   * @arg: &privcmd_hypercall_t
> @@ -73,5 +81,7 @@ struct privcmd_mmapbatch {
>  	_IOC(_IOC_NONE, 'P', 2, sizeof(struct privcmd_mmap))
>  #define IOCTL_PRIVCMD_MMAPBATCH					\
>  	_IOC(_IOC_NONE, 'P', 3, sizeof(struct privcmd_mmapbatch))
> +#define IOCTL_PRIVCMD_MMAPBATCH_V2				\
> +	_IOC(_IOC_NONE, 'P', 4, sizeof(struct privcmd_mmapbatch_v2))
>  
>  #endif /* __LINUX_PUBLIC_PRIVCMD_H__ */
> -- 
> 1.7.2.5

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 19:52:19 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 19:52:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4dRo-0005OH-GK; Thu, 23 Aug 2012 19:52:08 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T4dRn-0005OA-Is
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 19:52:07 +0000
Received: from [85.158.143.99:59772] by server-1.bemta-4.messagelabs.com id
	AD/03-12504-6E986305; Thu, 23 Aug 2012 19:52:06 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-216.messagelabs.com!1345751526!26647759!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA0MjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10437 invoked from network); 23 Aug 2012 19:52:06 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-13.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 19:52:06 -0000
X-IronPort-AV: E=Sophos;i="4.80,301,1344211200"; d="scan'208";a="14155361"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	23 Aug 2012 19:52:06 +0000
Received: from [127.0.0.1] (10.80.16.67) by smtprelay.citrix.com
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Thu, 23 Aug 2012 20:52:06 +0100
Message-ID: <1345751525.23624.58.camel@dagon.hellion.org.uk>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@citrix.com>
Date: Thu, 23 Aug 2012 20:52:05 +0100
In-Reply-To: <503680C5.6070509@citrix.com>
References: <cover.1345552068.git.julien.grall@citrix.com>
	<fcf046ea782dda6cacb3bf11813bf1d16e531e6b.1345552068.git.julien.grall@citrix.com>
	<1345728471.12501.90.camel@zakaz.uk.xensource.com>
	<503680C5.6070509@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "christian.limpach@gmail.com" <christian.limpach@gmail.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [XEN][RFC PATCH V2 11/17] xc: modify save/restore
 to support multiple device models
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-23 at 20:13 +0100, Julien Grall wrote:
> On 08/23/2012 02:27 PM, Ian Campbell wrote:
> >
> >> @@ -103,6 +103,9 @@ static ssize_t rdexact(xc_interface *xch, struct restore_ctx *ctx,
> >>   #else
> >>   #define RDEXACT read_exact
> >>   #endif
> >> +
> >> +#define QEMUSIG_SIZE 21
> >> +
> >>   /*
> >>   ** In the state file (or during transfer), all page-table pages are
> >>   ** converted into a 'canonical' form where references to actual mfns
> >> @@ -467,7 +522,7 @@ static int buffer_tail_hvm(xc_interface *xch, struct restore_ctx *ctx,
> >>                              int vcpuextstate, uint32_t vcpuextstate_size)
> >>   {
> >>       uint8_t *tmp;
> >> -    unsigned char qemusig[21];
> >> +    unsigned char qemusig[QEMUSIG_SIZE + 1];
> >>      
> > An extra + 1 here?
> >    
> QEMUSIG_SIZE doesn't take the '\0' into account, so we need to add 1.
> Without the +1, if an error occurred, the output log lost the last character.

So this is just a bug fix for a pre-existing issue?

> > [...]
> >    
> >> -    qemusig[20] = '\0';
> >> +    qemusig[QEMUSIG_SIZE] = '\0';
> >>      
> > This is one bigger than it used to be now.
> >
> > Perhaps this is an unrelated bug fix (I haven't check the real length of
> > the sig), in which case please can you split it out and submit
> > separately?
> >    
> 
> #define QEMU_SIGNATURE "DeviceModelRecord0002"
> Just checked, the length seems to be 21. I will send a patch with
> this change.

Perhaps use either sizeof(QEMU_SIGNATURE) or strlen(QEMU_SIGNATURE)
(depending on which semantics you want)?
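To make the distinction concrete for this signature: sizeof() on the string literal counts the trailing NUL while strlen() does not, which is exactly the off-by-one under discussion (a small userspace check, not the xc code itself):

```c
#include <string.h>

#define QEMU_SIGNATURE "DeviceModelRecord0002"

/* sizeof on a string literal includes the trailing '\0';
 * strlen does not. */
static size_t sig_sizeof(void) { return sizeof(QEMU_SIGNATURE); }
static size_t sig_strlen(void) { return strlen(QEMU_SIGNATURE); }
```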

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 19:53:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 19:53:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4dT9-0005UG-08; Thu, 23 Aug 2012 19:53:31 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <martin.xenfan@gmail.com>) id 1T4dT7-0005Tr-LZ
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 19:53:29 +0000
X-Env-Sender: martin.xenfan@gmail.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1345751600!8659944!1
X-Originating-IP: [209.85.160.173]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3701 invoked from network); 23 Aug 2012 19:53:21 -0000
Received: from mail-gh0-f173.google.com (HELO mail-gh0-f173.google.com)
	(209.85.160.173)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 19:53:21 -0000
Received: by ghrr17 with SMTP id r17so283977ghr.32
	for <xen-devel@lists.xen.org>; Thu, 23 Aug 2012 12:53:20 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=googlemail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=+aF7gfpUgVskG/kLmh0xYntxeIo/2mkwZMtrD/cYtm4=;
	b=CzuKcBhPoh86VH8GcySxIqMKuqGhof8rBGz4Fv6sEYnEguFT5C6Pbl9oZW6ykoxNJ8
	Qwtq2PO80OZZcOmSNhGqYKFM9XJFvwKE2JyV2TaSUrKtB0nDKDXRIfjdPgxiJVsXCTg4
	Hvi9OzfSBuxlNsOesNbGMrRoMMvzO8cWnlZLNO6R/w3CAebV+T6D9Q4UqissDXlLa7Nd
	prHwf7sxEfY3I/7d7CdcJjOo9U8Xct3hYP8gkD5SAj+nD1gz3kSBNIoBNgHMcBaYFX4j
	O00S+az0Ajom3jwFTnekupHGgv2lSnpuyuT/IFo8U8TzOU+JAZ4l7RKbU3hfu8xWALQL
	693w==
MIME-Version: 1.0
Received: by 10.236.161.135 with SMTP id w7mr2386157yhk.15.1345751600061; Thu,
	23 Aug 2012 12:53:20 -0700 (PDT)
Received: by 10.100.214.1 with HTTP; Thu, 23 Aug 2012 12:53:20 -0700 (PDT)
Date: Thu, 23 Aug 2012 21:53:20 +0200
Message-ID: <CAJ0a4CFP1avOBtiieuSKP+AxCTLd6L653Pky9miTGHDdPbHUeQ@mail.gmail.com>
From: Martin Behnke <martin.xenfan@gmail.com>
To: xen-devel@lists.xen.org
Subject: [Xen-devel] Another successful story about XEN VGA passthrough -
 hardware can be added to wiki article
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Dear XEN community,

I would like to share some information about the hardware I used to
pass through an ATI graphics adapter.
Please update the article
http://wiki.xen.org/wiki/XenVGAPassthroughTestedAdapters with this
adapter.

I have to say:
GREAT!!! Xen 4.1.2 has now been running for 8 weeks without any
problems (like kernel oopses or hanging Dom0s or DomUs).

I decided to use an ATI HD5450 adapter because my first try with an
NVIDIA adapter, a GeForce GT430 1GB RAM PCIe, failed completely.
I tried nearly all the hints and how-tos - all without success. I
could not figure out why. I have to say that I liked NVIDIA and
Linux, but I have now switched to ATI.
My Dom0 uses the primary graphics adapter - Intel integrated graphics
- to provide a graphical interface on Dom0 for running virt-manager
to manage all the DomUs.
The secondary graphics adapter is used, via xen-pciback, for my home
XEN all-in-one NAS / VDR / DLNA home server :-)
The ATI HD5450 runs 'out of the box'. I had to do nothing special for
it - just compile and fix any compile errors.

Problems I ran into while installing:
- libpci-dev and pci-utils are mandatory on Debian
- resolve all the Python bindings needed to use xm list (python-xml)
- on 64-bit platforms all libs are located in /usr/lib64, but all Xen
related tools look in /usr/lib
(I had to move all Xen libs to /usr/lib and symlink /usr/lib64 to /usr/lib)
- kernel boot options:
 multiboot /xen-4.1.2.gz dom0_mem=2048M iommu=1 xen-pciback.permissive
xen-pciback.passthrough=1
xen-pciback.hide=(0000:01:00.0)(0000:01:00.1)(0000:02:00.0)
- I had to assign both PCI functions to pciback to pass through this ATI device


A short overview:

Dom0 host operating system:
Debian Wheezy 64-bit with self-compiled kernel 3.3.0-rc7

xen version:
XEN 4.1.2

Motherboard:
INTEL  DQ67OW (Intel Vt-d enabled)

CPU:
Intel Core i5 2400S (4 core)

lspci -v

01:00.0 VGA compatible controller: Advanced Micro Devices [AMD] nee
ATI Cedar PRO [Radeon HD 5450] (prog-if 00 [VGA controller])
        Subsystem: PC Partner Limited Device e164
        Flags: bus master, fast devsel, latency 0, IRQ 16
        Memory at d0000000 (64-bit, prefetchable) [size=256M]
        Memory at fe520000 (64-bit, non-prefetchable) [size=128K]
        I/O ports at e000 [size=256]
        Expansion ROM at fe500000 [disabled] [size=128K]
        Capabilities: [50] Power Management version 3
        Capabilities: [58] Express Legacy Endpoint, MSI 00
        Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
        Capabilities: [100] Vendor Specific Information: ID=0001 Rev=1
Len=010 <?>
        Capabilities: [150] Advanced Error Reporting
        Kernel driver in use: pciback

01:00.1 Audio device: Advanced Micro Devices [AMD] nee ATI Cedar HDMI
Audio [Radeon HD 5400/6300 Series]
        Subsystem: PC Partner Limited Device aa68
        Flags: bus master, fast devsel, latency 0, IRQ 17
        Memory at fe540000 (64-bit, non-prefetchable) [size=16K]
        Capabilities: [50] Power Management version 3
        Capabilities: [58] Express Legacy Endpoint, MSI 00
        Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
        Capabilities: [100] Vendor Specific Information: ID=0001 Rev=1
Len=010 <?>
        Capabilities: [150] Advanced Error Reporting
        Kernel driver in use: pciback


02:00.0 Multimedia controller: Philips Semiconductors SAA7146 (rev 01)
        Subsystem: KNC One Device 0022
        Flags: bus master, medium devsel, latency 32, IRQ 16
        Memory at fe400000 (32-bit, non-prefetchable) [size=512]
        Kernel driver in use: pciback


Communication between DomUs:
- network shares with Samba and NFS
- bridged network based on an Intel e1000 NIC

+++++++++ DomU-01:

Windows 7 32-bit, 4GB RAM, 50GB HDD image file

Hardware:
ATI Radeon HD5450

Catalyst Control Center:
Version: 2012.0611.1251.21046


+++++++++ DomU-02:
Ubuntu 32-bit, 2GB RAM, 40GB HDD image file

Hardware:
Satelco EasyWatch DVB-C





Regards,

Martin

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 19:53:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 19:53:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4dT9-0005UG-08; Thu, 23 Aug 2012 19:53:31 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <martin.xenfan@gmail.com>) id 1T4dT7-0005Tr-LZ
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 19:53:29 +0000
X-Env-Sender: martin.xenfan@gmail.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1345751600!8659944!1
X-Originating-IP: [209.85.160.173]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3701 invoked from network); 23 Aug 2012 19:53:21 -0000
Received: from mail-gh0-f173.google.com (HELO mail-gh0-f173.google.com)
	(209.85.160.173)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 19:53:21 -0000
Received: by ghrr17 with SMTP id r17so283977ghr.32
	for <xen-devel@lists.xen.org>; Thu, 23 Aug 2012 12:53:20 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=googlemail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=+aF7gfpUgVskG/kLmh0xYntxeIo/2mkwZMtrD/cYtm4=;
	b=CzuKcBhPoh86VH8GcySxIqMKuqGhof8rBGz4Fv6sEYnEguFT5C6Pbl9oZW6ykoxNJ8
	Qwtq2PO80OZZcOmSNhGqYKFM9XJFvwKE2JyV2TaSUrKtB0nDKDXRIfjdPgxiJVsXCTg4
	Hvi9OzfSBuxlNsOesNbGMrRoMMvzO8cWnlZLNO6R/w3CAebV+T6D9Q4UqissDXlLa7Nd
	prHwf7sxEfY3I/7d7CdcJjOo9U8Xct3hYP8gkD5SAj+nD1gz3kSBNIoBNgHMcBaYFX4j
	O00S+az0Ajom3jwFTnekupHGgv2lSnpuyuT/IFo8U8TzOU+JAZ4l7RKbU3hfu8xWALQL
	693w==
MIME-Version: 1.0
Received: by 10.236.161.135 with SMTP id w7mr2386157yhk.15.1345751600061; Thu,
	23 Aug 2012 12:53:20 -0700 (PDT)
Received: by 10.100.214.1 with HTTP; Thu, 23 Aug 2012 12:53:20 -0700 (PDT)
Date: Thu, 23 Aug 2012 21:53:20 +0200
Message-ID: <CAJ0a4CFP1avOBtiieuSKP+AxCTLd6L653Pky9miTGHDdPbHUeQ@mail.gmail.com>
From: Martin Behnke <martin.xenfan@gmail.com>
To: xen-devel@lists.xen.org
Subject: [Xen-devel] Another successful story about XEN VGA passthrough -
 hardware can be added to wiki article
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Dear XEN community,

I would like to send you some informations about my used hardware to
passthrough a ATI graphics adapter.
Please update the article
http://wiki.xen.org/wiki/XenVGAPassthroughTestedAdapters with this
adapter.

I have to say:
GREAT!!! XEN 4.1.2 runs now since 8 weeks without any problems. (like
kernel oops or hanging Dom0s or DomUs).

I decided to use a  ATI HD5450 adapter because my first try with a
NVIDIA adapter failed completely. It was a
NVIDIA Geforce GT430 1GB RAM PCIe. I nearly tried all hints and how
to's - all without success. I could not figure out why. I have to say
that I liked nvidia and Linux, but now I switched to ATI.
My Dom0 uses the primary graphics adapter - Intel integrated graphics
- to provide a graphical interface to Dom0 to use the tool
virt-manager for managing all Dom0's.
The secondary graphics adapter is used for my home-used XEN all-in-one
NAS - VDR - DLNA Homeserver with xen-pciback :-)
The ATI HD5450 runs 'out of the box'. I had nothing special to do for
it - just compile and solve any compile errors.

My problems while installing:
- libpci-dev and pci-utils are mandatory for debian
- resolve all python-bindings for use xm list (python-xml)
- on 64Bit platforms all libs are located in /usr/lib64  - but all xen
related tools look at /usr/lib
(I had to move all xen libs to /usr/lib and symlink /usr/lib64 to /usr/lib )
- kernel bootoptions:
 multiboot /xen-4.1.2.gz dom0_mem=2048M iommu=1 xen-pciback.permissive
xen-pciback.passthrough=1
xen-pciback.hide=(0000:01:00.0)(0000:01:00.1)(0000:02:00.0)
- I had to pciback both PCI IDs to passthrough this ATI device


A short overview:

Dom0 host operating system:
Debian Wheezy 64bit with self compiled kernel 3.3.0-rc7

xen version:
XEN 4.1.2

Motherboard:
INTEL  DQ67OW (Intel Vt-d enabled)

CPU:
Intel Core i5 2400S (4 core)

lspci -v

01:00.0 VGA compatible controller: Advanced Micro Devices [AMD] nee
ATI Cedar PRO [Radeon HD 5450] (prog-if 00 [VGA controller])
        Subsystem: PC Partner Limited Device e164
        Flags: bus master, fast devsel, latency 0, IRQ 16
        Memory at d0000000 (64-bit, prefetchable) [size=256M]
        Memory at fe520000 (64-bit, non-prefetchable) [size=128K]
        I/O ports at e000 [size=256]
        Expansion ROM at fe500000 [disabled] [size=128K]
        Capabilities: [50] Power Management version 3
        Capabilities: [58] Express Legacy Endpoint, MSI 00
        Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
        Capabilities: [100] Vendor Specific Information: ID=0001 Rev=1
Len=010 <?>
        Capabilities: [150] Advanced Error Reporting
        Kernel driver in use: pciback

01:00.1 Audio device: Advanced Micro Devices [AMD] nee ATI Cedar HDMI
Audio [Radeon HD 5400/6300 Series]
        Subsystem: PC Partner Limited Device aa68
        Flags: bus master, fast devsel, latency 0, IRQ 17
        Memory at fe540000 (64-bit, non-prefetchable) [size=16K]
        Capabilities: [50] Power Management version 3
        Capabilities: [58] Express Legacy Endpoint, MSI 00
        Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
        Capabilities: [100] Vendor Specific Information: ID=0001 Rev=1
Len=010 <?>
        Capabilities: [150] Advanced Error Reporting
        Kernel driver in use: pciback


02:00.0 Multimedia controller: Philips Semiconductors SAA7146 (rev 01)
        Subsystem: KNC One Device 0022
        Flags: bus master, medium devsel, latency 32, IRQ 16
        Memory at fe400000 (32-bit, non-prefetchable) [size=512]
        Kernel driver in use: pciback


Communication between DomU's:
- network shares with Samba and NFS
- bridged network based on an Intel e1000 NIC

+++++++++ DomU-01:

Windows 7 32-bit, 4 GB RAM, 50 GB HDD image file

Hardware:
ATI Radeon HD5450

Catalyst Control Center:
Version: 2012.0611.1251.21046


+++++++++ DomU-02:
Ubuntu 32-bit, 2 GB RAM, 40 GB HDD image file

Hardware:
Satelco EasyWatch DVB-C





Regards,

Martin

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 20:39:34 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 20:39:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4eBF-0005yh-OQ; Thu, 23 Aug 2012 20:39:05 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ben.guthro@gmail.com>) id 1T4eBE-0005yc-37
	for xen-devel@lists.xen.org; Thu, 23 Aug 2012 20:39:04 +0000
X-Env-Sender: ben.guthro@gmail.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1345754335!1787395!1
X-Originating-IP: [209.85.212.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7669 invoked from network); 23 Aug 2012 20:38:56 -0000
Received: from mail-vb0-f45.google.com (HELO mail-vb0-f45.google.com)
	(209.85.212.45)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 20:38:56 -0000
Received: by vbip1 with SMTP id p1so1643051vbi.32
	for <xen-devel@lists.xen.org>; Thu, 23 Aug 2012 13:38:54 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=KkbdlZ7Dbdu5Hecw7Dc+alZKYRawiww80NCBZ1W7gvk=;
	b=uhBIashNxlFERMJDgmPeWdlNmRgz2PXgSgbj7qGj71ejtUFvkYc5c/AuH5ViUM6nm1
	FId33bPsq8gJOpl/pLFt8SxJYqVWy3/1qG9qpWuUBX3be+23C0daSSjWCCd6We5DdaBY
	U0H9olliGvt5DbzdkBTqY/5aVzAL84Yg6j4DtwRCcHC3kJiuPyJfMhiDJYIg1gfXQ4LE
	7q9K7ysX1xsgkVbbU8/EmV8MYEBpHM8R2pS28+C8A84g2mnB6TYpkyy+fbiM57A0cIWD
	TQ77cqqIutE+j9t+RFQ66H+JGA601ZU2RAXh4Qvv84XcivwzX0nvmrT8BXO1N47yEgnc
	4WCQ==
MIME-Version: 1.0
Received: by 10.220.220.203 with SMTP id hz11mr2472768vcb.50.1345754334668;
	Thu, 23 Aug 2012 13:38:54 -0700 (PDT)
Received: by 10.58.127.232 with HTTP; Thu, 23 Aug 2012 13:38:54 -0700 (PDT)
In-Reply-To: <503686A7.5050206@citrix.com>
References: <50367C78.80608@citrix.com>
	<CAOvdn6U_mSNQBbeboipKHuRJzcbrCe1Kj7ZY9=7N6s--AMESmQ@mail.gmail.com>
	<503686A7.5050206@citrix.com>
Date: Thu, 23 Aug 2012 16:38:54 -0400
X-Google-Sender-Auth: 9mMdpphSe0bW24_qL_y0r9rCqnY
Message-ID: <CAOvdn6U5Kmsfv9e=Un8qNR_mbM-V2x-v7Ork9S+saj6EjC-sEA@mail.gmail.com>
From: Ben Guthro <ben@guthro.net>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	John Baboval <john.baboval@citrix.com>,
	Thomas Goetz <thomas.goetz@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen4.2 S3 regression?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Aug 23, 2012 at 3:38 PM, Andrew Cooper
<andrew.cooper3@citrix.com> wrote:
>
> On 23/08/12 20:06, Ben Guthro wrote:
>> No such luck.
>
> Huh.  It was a shot in the dark, but I was really not expecting this.

Thanks for taking a look.

I tested the equivalent change against the offending changeset, and it
did, in fact, solve the S3 issue back then (but not against the 4.1.3
tag).

I guess I'll do some more bisecting, with this change in place.

Ben

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 21:59:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 21:59:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4fQr-0006h6-TZ; Thu, 23 Aug 2012 21:59:17 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <cutter409@gmail.com>) id 1T4fQq-0006h1-09
	for xen-devel@lists.xensource.com; Thu, 23 Aug 2012 21:59:16 +0000
Received: from [85.158.138.51:18414] by server-2.bemta-3.messagelabs.com id
	D9/A8-09157-3B7A6305; Thu, 23 Aug 2012 21:59:15 +0000
X-Env-Sender: cutter409@gmail.com
X-Msg-Ref: server-13.tower-174.messagelabs.com!1345759152!8820516!1
X-Originating-IP: [209.85.212.43]
X-SpamReason: No, hits=2.0 required=7.0 tests=HTML_40_50,HTML_MESSAGE,
	MAILTO_TO_SPAM_ADDR,ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,USERPASS,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4683 invoked from network); 23 Aug 2012 21:59:13 -0000
Received: from mail-vb0-f43.google.com (HELO mail-vb0-f43.google.com)
	(209.85.212.43)
	by server-13.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 21:59:13 -0000
Received: by vbbfq11 with SMTP id fq11so1777464vbb.30
	for <xen-devel@lists.xensource.com>;
	Thu, 23 Aug 2012 14:59:11 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:content-type; bh=9YJN+wTc8i4cOfIM/+xDtjzGpxMkQnS3R3ZK1RVH7j4=;
	b=ubnz0DRyqWoZ9mOPOL+k8qCCE+raPQ0OckpWFO0M/hlHaCBsjv5wF2u7rK3vrfQX5C
	36urkIt+kx+sLdU1RGwf43shzTl72kTVwgaqno8DldTvYeaWh4BEcVhwAu22bZZBGmZ1
	SzsV4e3PfC2U/288Foqt5VefwBND4bN8HSZmJqUVlJDbrmyT4Z5DvCro1AfCOSWRh3BN
	Vf4HxksyNd9fOdHlm8LiW+nxGq5vvr8SHqPMfA5its9/gLfe7q36TD6C9fIYShK8zKWi
	0IJbU+p0Y58GpbBFCJpzlZ0TI9Sbo6oPrV1PSQiH9CJaNIMy0agOWg/pSZQdpGPzkcko
	vfHQ==
MIME-Version: 1.0
Received: by 10.58.189.69 with SMTP id gg5mr2969185vec.6.1345759151653; Thu,
	23 Aug 2012 14:59:11 -0700 (PDT)
Received: by 10.220.210.196 with HTTP; Thu, 23 Aug 2012 14:59:11 -0700 (PDT)
In-Reply-To: <CC5C3B36.3CC23%keir.xen@gmail.com>
References: <CAG4Ohu-4rC7xOxZ3SwnER+GF--kr6xazyfKEkgEYyDKCbk6E2w@mail.gmail.com>
	<CC5C3B36.3CC23%keir.xen@gmail.com>
Date: Thu, 23 Aug 2012 17:59:11 -0400
Message-ID: <CAG4Ohu-M+S7oMMLnR-gnBLpWuGYsMqUTwH3TUgyPn4UazjHwSw@mail.gmail.com>
From: Cutter 409 <cutter409@gmail.com>
To: xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] Foreign VCPU register change?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3775828533574162976=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3775828533574162976==
Content-Type: multipart/alternative; boundary=047d7b67295630029104c7f5f86e

--047d7b67295630029104c7f5f86e
Content-Type: text/plain; charset=windows-1252
Content-Transfer-Encoding: quoted-printable

That works perfectly, thank you for the help.

On Thu, Aug 23, 2012 at 2:55 PM, Keir Fraser <keir.xen@gmail.com> wrote:

>  If you have your own flag in v->pause_flags then indeed you do not need
> to vcpu_pause_nosync()  in the vmexit handler.
>
> The best sequence would be:
>  - vmexit handler: set flag in v->pause_flags, then vcpu_sleep_nosync()
>  - domctl entry: vcpu_sleep_sync()
>  - domctl exit: clear flag in v->pause_flags, then vcpu_wake()
>
> So that's pretty much what you had in the first place, except for the
> extra vcpu_sleep_sync() on domctl entry. That's absolutely critical, and
> why your pause_nosync on vmexit, unpause after domctl doesn't work --
> *something* is needed on domctl entry to be sure that the vcpu is
> descheduled and its state is synchronised. Of course the extra machinery
> of vcpu_pause/unpause is harmless enough, but it's not actually necessary
> here.
>
>  -- Keir
>
>
>
> On 23/08/2012 19:11, "Cutter 409" <cutter409@gmail.com> wrote:
>
> Thanks, Keir!
>
> I've spent so much time trying to track down this problem, even before I
> realized the registers weren't actually changing. You have no idea how
> helpful that was.
>
> Before I tried your example, I just wrapped the code to change the
> register in vcpu_pause() and vcpu_unpause(), which worked.
>
> Everything seems fine at the moment, is there any reason I should still
> change the vcpu_sleep_nosync() to vcpu_pause_nosync()? It seems to actually
> work as is; I'm setting a bit in v->pause_flags before I call it, then
> clear the bit before I wake it. I also tried pause_nosync on vmexit,
> unpause after domctl, but that didn't work.
>
> Thanks again!
>
>
> On Thu, Aug 23, 2012 at 1:54 PM, Keir Fraser <keir.xen@gmail.com> wrote:
>
> So, for example, one possibly-valid scheme would be:
>
>  - vcpu_pause_nosync() from the vmexit handler
>
>  - vcpu_sleep_sync() at the start of the domctl
>  - vcpu_unpause() at the end of the domctl
>
> HTH,
>  Keir
>
>
>
> On 23/08/2012 18:49, "Keir Fraser" <keir.xen@gmail.com> wrote:
>
> FWIW I would expect your approach to basically work.
>
> Except... Does your domctl do a vcpu_pause()/vcpu_unpause() on the vcpu?
> This will ensure that the vcpu is both fully de-scheduled, and all of its
> register state is synced back into its vcpu structure.
>
> Otherwise you race the vcpu_sleep_nosync() -- and that's assuming you also
> have a reason for that vcpu to sleep (e.g., non-zero pause counter), else
> vcpu_sleep_*() operations do nothing!
>
> In short, your problems are almost certainly something to do with the
> subtleties of actually putting a vcpu properly to sleep.
>
>  -- Keir
>
>
> On 23/08/2012 18:37, "Cutter 409" <cutter409@gmail.com> wrote:
>
> I'm making the register change directly from the hypervisor, inside of the
> domctl code.
>
> It's a custom domctl that I've added. I'll look into what setcontext does
> after it modifies the register values, though.
>
> Thank you!
>
> On Thu, Aug 23, 2012 at 1:34 PM, Keir Fraser <keir.xen@gmail.com> wrote:
>
> On 23/08/2012 18:25, "Cutter 409" <cutter409@gmail.com> wrote:
>
> > With Xen-4.1.2:
> >
> > I'm trying to change a register value in a paused vmx vcpu. The general
> > process looks like this:
> >
> > 1. Some vmexit calls vcpu_sleep_nosync(v) on the vcpu
> > 2. From dom0, I issue a domctl to change a register via
> > v->arch.guest_context.user_reg, then vcpu_wake(v)
>
> Which domctl? From dom0 userspace you can use the libxc functions
> xc_vcpu_getcontext() and xc_vcpu_setcontext() to read/modify register
> state.
>
> You can read the libxc sources to see what hypercall these map to, if you
> don't want to use libxc for any reason.
>
>  -- Keir
>
> > However, the guest register does not seem to be changed when I do it
> > this way. Is there something I need to do to mark the registers as
> > "dirty"? Is there a way to force the foreign vcpu to update the
> > changed registers? Or maybe I just have to change the registers
> > somewhere else?
> >
> > I've tried directly using vmcs_enter(v), __vmwrite(), vmcs_exit(v)
> > also, but that doesn't seem to make a change either.
> >
> > Thanks!
> >
> >
> >
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
>
>
>
>
> ------------------------------
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
>
>
>
>
> ------------------------------
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
>
>

--047d7b67295630029104c7f5f86e--


--===============3775828533574162976==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3775828533574162976==--


From xen-devel-bounces@lists.xen.org Thu Aug 23 21:59:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 21:59:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4fQr-0006h6-TZ; Thu, 23 Aug 2012 21:59:17 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <cutter409@gmail.com>) id 1T4fQq-0006h1-09
	for xen-devel@lists.xensource.com; Thu, 23 Aug 2012 21:59:16 +0000
Received: from [85.158.138.51:18414] by server-2.bemta-3.messagelabs.com id
	D9/A8-09157-3B7A6305; Thu, 23 Aug 2012 21:59:15 +0000
X-Env-Sender: cutter409@gmail.com
X-Msg-Ref: server-13.tower-174.messagelabs.com!1345759152!8820516!1
X-Originating-IP: [209.85.212.43]
X-SpamReason: No, hits=2.0 required=7.0 tests=HTML_40_50,HTML_MESSAGE,
	MAILTO_TO_SPAM_ADDR,ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,USERPASS,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4683 invoked from network); 23 Aug 2012 21:59:13 -0000
Received: from mail-vb0-f43.google.com (HELO mail-vb0-f43.google.com)
	(209.85.212.43)
	by server-13.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 21:59:13 -0000
Received: by vbbfq11 with SMTP id fq11so1777464vbb.30
	for <xen-devel@lists.xensource.com>;
	Thu, 23 Aug 2012 14:59:11 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:content-type; bh=9YJN+wTc8i4cOfIM/+xDtjzGpxMkQnS3R3ZK1RVH7j4=;
	b=ubnz0DRyqWoZ9mOPOL+k8qCCE+raPQ0OckpWFO0M/hlHaCBsjv5wF2u7rK3vrfQX5C
	36urkIt+kx+sLdU1RGwf43shzTl72kTVwgaqno8DldTvYeaWh4BEcVhwAu22bZZBGmZ1
	SzsV4e3PfC2U/288Foqt5VefwBND4bN8HSZmJqUVlJDbrmyT4Z5DvCro1AfCOSWRh3BN
	Vf4HxksyNd9fOdHlm8LiW+nxGq5vvr8SHqPMfA5its9/gLfe7q36TD6C9fIYShK8zKWi
	0IJbU+p0Y58GpbBFCJpzlZ0TI9Sbo6oPrV1PSQiH9CJaNIMy0agOWg/pSZQdpGPzkcko
	vfHQ==
MIME-Version: 1.0
Received: by 10.58.189.69 with SMTP id gg5mr2969185vec.6.1345759151653; Thu,
	23 Aug 2012 14:59:11 -0700 (PDT)
Received: by 10.220.210.196 with HTTP; Thu, 23 Aug 2012 14:59:11 -0700 (PDT)
In-Reply-To: <CC5C3B36.3CC23%keir.xen@gmail.com>
References: <CAG4Ohu-4rC7xOxZ3SwnER+GF--kr6xazyfKEkgEYyDKCbk6E2w@mail.gmail.com>
	<CC5C3B36.3CC23%keir.xen@gmail.com>
Date: Thu, 23 Aug 2012 17:59:11 -0400
Message-ID: <CAG4Ohu-M+S7oMMLnR-gnBLpWuGYsMqUTwH3TUgyPn4UazjHwSw@mail.gmail.com>
From: Cutter 409 <cutter409@gmail.com>
To: xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] Foreign VCPU register change?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3775828533574162976=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3775828533574162976==
Content-Type: multipart/alternative; boundary=047d7b67295630029104c7f5f86e

--047d7b67295630029104c7f5f86e
Content-Type: text/plain; charset=windows-1252
Content-Transfer-Encoding: quoted-printable

That works perfectly, thank you for the help.

On Thu, Aug 23, 2012 at 2:55 PM, Keir Fraser <keir.xen@gmail.com> wrote:

>  If you have your own flag in v->pause_flags then indeed you do not need
> to vcpu_pause_nosync()  in the vmexit handler.
>
> The best sequence would be:
>  - vmexit handler: set flag in v->pause_flags, then vcpu_sleep_nosync()
>  - domctl entry: vcpu_sleep_sync()
>  - domctl exit: clear flag in v->pause_flags, then vcpu_wake()
>
> So that=92s pretty much what you had in the first place, except for the
> extra vcpu_sleep_sync() on domctl entry. That=92s absolutely critical, an=
d
> why your pause_nosync on vmexit, unpause after domctl doesn=92t work =97 =
*
> something* is needed on domctl entry to be sure that the vcpu is
> descheduled and its state is synchronised. Of course the extra machinery =
of
> vcpu_pause/unpause is harmless enough, but it=92s not actually necessary =
here.
>
>  -- Keir
>
>
>
> On 23/08/2012 19:11, "Cutter 409" <cutter409@gmail.com> wrote:
>
> Thanks, Keir!
>
> I've spent so much time trying to track down this problem, even before I
> realized the registers weren't actually changing. You have no idea how
> helpful that was.
>
> Before I tried your example, I just wrapped the code to change the
> register in vcpu_pause() and vcpu_unpause(), which worked.
>
> Everything seems fine at the moment, is there any reason I should still
> change the vcpu_sleep_nosync() to vcpu_pause_nosync()? It seems to actual=
ly
> work as is; I'm setting a bit in v->pause_flags before I call it, then
> clear the bit before I wake it. I also tried pause_nosync on vmexit,
> unpause after domctl, but that didn't work.
>
> Thanks again!
>
>
> On Thu, Aug 23, 2012 at 1:54 PM, Keir Fraser <keir.xen@gmail.com> wrote:
>
> So, for example, one possibly-valid scheme would be:
>
>  - vcpu_pause_nosync() from the vmexit handler
>
>  - vcpu_sleep_sync() at the start of the domctl
>  - vcpu_unpause() at the end of the domctl
>
> HTH,
>  Keir
>
>
>
> On 23/08/2012 18:49, "Keir Fraser" <keir.xen@gmail.com> wrote:
>
> FWIW I would expect your approach to basically work.
>
> Except... Does your domctl do a vcpu_pause()/vcpu_unpause() on the vcpu?
> This will ensure that the vcpu is both fully de-scheduled, and all of its
> register state is synced back into its vcpu structure.
>
> Otherwise you race the vcpu_sleep_nosync() -- and that's assuming you also
> have a reason for that vcpu to sleep (e.g., non-zero pause counter), else
> vcpu_sleep_*() operations do nothing!
>
> In short, your problems are almost certainly something to do with the
> subtleties of actually putting a vcpu properly to sleep.
>
>  -- Keir
>
>
> On 23/08/2012 18:37, "Cutter 409" <cutter409@gmail.com> wrote:
>
> I'm making the register change directly from the hypervisor, inside of the
> domctl code.
>
> It's a custom domctl that I've added. I'll look into what setcontext does
> after it modifies the register values, though.
>
> Thank you!
>
> On Thu, Aug 23, 2012 at 1:34 PM, Keir Fraser <keir.xen@gmail.com> wrote:
>
> On 23/08/2012 18:25, "Cutter 409" <cutter409@gmail.com> wrote:
>
> > With Xen-4.1.2:
> >
> > I'm trying to change a register value in a paused vmx vcpu. The general
> > process looks like this:
> >
> > 1. Some vmexit calls vcpu_sleep_nosync(v) on the vcpu
> > 2. From dom0, I issue a domctl to change a register via
> > v->arch.guest_context.user_reg, then vcpu_wake(v)
>
> Which domctl? From dom0 userspace you can use the libxc functions
> xc_vcpu_getcontext() and xc_vcpu_setcontext() to read/modify register
> state.
>
> You can read the libxc sources to see what hypercall these map to, if you
> don't want to use libxc for any reason.
>
>  -- Keir
>
> > However, the guest register does not seem to be changed when I do it
> > this way. Is there something I need to do to mark the registers as
> > "dirty"? Is there a way to force the foreign vcpu to update the changed
> > registers? Or maybe I just have to change the registers somewhere else?
> >
> > I've tried directly using vmcs_enter(v), __vmwrite(), vmcs_exit(v) also,
> > but
> > that doesn't seem to make a change either.
> >
> > Thanks!
> >
> >
> >
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
>
>
>
>
> ------------------------------
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
>
>
>
>
> ------------------------------
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
>
>

--047d7b67295630029104c7f5f86e--


--===============3775828533574162976==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3775828533574162976==--


From xen-devel-bounces@lists.xen.org Thu Aug 23 22:11:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 22:11:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4fbz-0006wp-9v; Thu, 23 Aug 2012 22:10:47 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T4fby-0006wk-8D
	for xen-devel@lists.xensource.com; Thu, 23 Aug 2012 22:10:46 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1345759836!8429948!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA1OTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9364 invoked from network); 23 Aug 2012 22:10:36 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 22:10:36 -0000
X-IronPort-AV: E=Sophos;i="4.80,301,1344211200"; d="scan'208";a="14156361"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	23 Aug 2012 22:10:35 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 23 Aug 2012 23:10:35 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1T4fbn-0000pN-CO;
	Thu, 23 Aug 2012 22:10:35 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1T4fbm-0001M1-WA;
	Thu, 23 Aug 2012 23:10:35 +0100
To: xen-devel@lists.xensource.com
Message-ID: <osstest-13623-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 23 Aug 2012 23:10:35 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13623: trouble: broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13623 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13623/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-i386-i386-pv             3 host-install(3)         broken REGR. vs. 13622

Regressions which are regarded as allowable (not blocking):
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13622
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13622
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13622
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13622

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  c29ffdfae39b
baseline version:
 xen                  b02ac80ff689

------------------------------------------------------------
People who touched revisions under test:
  Keir Fraser <keir@xen.org>
  Santosh Jodh <santosh.jodh@citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            broken  
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   25777:c29ffdfae39b
tag:         tip
user:        Keir Fraser <keir@xen.org>
date:        Thu Aug 23 15:06:11 2012 +0100
    
    Update Xen version to 4.2.0-rc4-pre
    
    
changeset:   25776:5c1f69f28a34
user:        Keir Fraser <keir@xen.org>
date:        Thu Aug 23 15:05:48 2012 +0100
    
    Added signature for changeset d44f290e81df
    
    
changeset:   25775:d33db851e697
user:        Keir Fraser <keir@xen.org>
date:        Thu Aug 23 15:05:36 2012 +0100
    
    Added tag 4.2.0-rc3 for changeset d44f290e81df
    
    
changeset:   25774:d44f290e81df
tag:         4.2.0-rc3
user:        Keir Fraser <keir@xen.org>
date:        Thu Aug 23 15:05:30 2012 +0100
    
    Update Xen version to 4.2.0-rc3
    
    
changeset:   25773:b7e66cabb70f
user:        Keir Fraser <keir@xen.org>
date:        Thu Aug 23 15:02:04 2012 +0100
    
    x86,cmdline: Fix setting skip_realmode boolean on no-real-mode and tboot options
    ...effect should be cumulative.
    
    Signed-off-by: Keir Fraser <keir@xen.org>
    
    
changeset:   25772:b02ac80ff689
user:        Santosh Jodh <santosh.jodh@citrix.com>
date:        Wed Aug 22 22:29:06 2012 +0100
    
    Dump IOMMU p2m table
    
    New key handler 'o' to dump the IOMMU p2m table for each domain.
    Skips dumping table for domain 0.
    Intel and AMD specific iommu_ops handler for dumping p2m table.
    
    Incorporated feedback from Jan Beulich and Wei Wang.
    Fixed indent printing with %*s.
    Removed superfluous superpage and other attribute prints.
    Make next_level use consistent for AMD IOMMU dumps. Warn if found
    inconsistent.
    AMD IOMMU does not skip levels. Handle 2mb and 1gb IOMMU page size for
    AMD.
    
    Signed-off-by: Santosh Jodh <santosh.jodh@citrix.com>
    Committed-by: Keir Fraser <keir@xen.org>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 23 22:11:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Aug 2012 22:11:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4fbz-0006wp-9v; Thu, 23 Aug 2012 22:10:47 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T4fby-0006wk-8D
	for xen-devel@lists.xensource.com; Thu, 23 Aug 2012 22:10:46 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1345759836!8429948!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA1OTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9364 invoked from network); 23 Aug 2012 22:10:36 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Aug 2012 22:10:36 -0000
X-IronPort-AV: E=Sophos;i="4.80,301,1344211200"; d="scan'208";a="14156361"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	23 Aug 2012 22:10:35 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Thu, 23 Aug 2012 23:10:35 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1T4fbn-0000pN-CO;
	Thu, 23 Aug 2012 22:10:35 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1T4fbm-0001M1-WA;
	Thu, 23 Aug 2012 23:10:35 +0100
To: xen-devel@lists.xensource.com
Message-ID: <osstest-13623-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 23 Aug 2012 23:10:35 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13623: trouble: broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13623 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13623/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-i386-i386-pv             3 host-install(3)         broken REGR. vs. 13622

Regressions which are regarded as allowable (not blocking):
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13622
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13622
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13622
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13622

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  c29ffdfae39b
baseline version:
 xen                  b02ac80ff689

------------------------------------------------------------
People who touched revisions under test:
  Keir Fraser <keir@xen.org>
  Santosh Jodh <santosh.jodh@citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            broken  
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   25777:c29ffdfae39b
tag:         tip
user:        Keir Fraser <keir@xen.org>
date:        Thu Aug 23 15:06:11 2012 +0100
    
    Update Xen version to 4.2.0-rc4-pre
    
    
changeset:   25776:5c1f69f28a34
user:        Keir Fraser <keir@xen.org>
date:        Thu Aug 23 15:05:48 2012 +0100
    
    Added signature for changeset d44f290e81df
    
    
changeset:   25775:d33db851e697
user:        Keir Fraser <keir@xen.org>
date:        Thu Aug 23 15:05:36 2012 +0100
    
    Added tag 4.2.0-rc3 for changeset d44f290e81df
    
    
changeset:   25774:d44f290e81df
tag:         4.2.0-rc3
user:        Keir Fraser <keir@xen.org>
date:        Thu Aug 23 15:05:30 2012 +0100
    
    Update Xen version to 4.2.0-rc3
    
    
changeset:   25773:b7e66cabb70f
user:        Keir Fraser <keir@xen.org>
date:        Thu Aug 23 15:02:04 2012 +0100
    
    x86,cmdline: Fix setting skip_realmode boolean on no-real-mode and tboot options
    ...effect should be cumulative.
    
    Signed-off-by: Keir Fraser <keir@xen.org>
    
    
changeset:   25772:b02ac80ff689
user:        Santosh Jodh <santosh.jodh@citrix.com>
date:        Wed Aug 22 22:29:06 2012 +0100
    
    Dump IOMMU p2m table
    
    New key handler 'o' to dump the IOMMU p2m table for each domain.
    Skips dumping table for domain 0.
    Intel and AMD specific iommu_ops handler for dumping p2m table.
    
    Incorporated feedback from Jan Beulich and Wei Wang.
    Fixed indent printing with %*s.
    Removed superfluous superpage and other attribute prints.
    Make next_level use consistent for AMD IOMMU dumps. Warn if found
    inconsistent.
    AMD IOMMU does not skip levels. Handle 2mb and 1gb IOMMU page size for
    AMD.
    
    Signed-off-by: Santosh Jodh <santosh.jodh@citrix.com>
    Committed-by: Keir Fraser <keir@xen.org>
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 24 01:18:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 01:18:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4iX7-0003nn-3h; Fri, 24 Aug 2012 01:17:57 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1T4iX5-0003ne-JE
	for xen-devel@lists.xen.org; Fri, 24 Aug 2012 01:17:55 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1345771068!8692980!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzYxMzMy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8208 invoked from network); 24 Aug 2012 01:17:49 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-8.tower-27.messagelabs.com with SMTP;
	24 Aug 2012 01:17:49 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga102.jf.intel.com with ESMTP; 23 Aug 2012 18:17:48 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.80,301,1344236400"; d="scan'208";a="184812504"
Received: from fmsmsx106.amr.corp.intel.com ([10.19.9.37])
	by orsmga001.jf.intel.com with ESMTP; 23 Aug 2012 18:17:47 -0700
Received: from fmsmsx153.amr.corp.intel.com (10.19.17.7) by
	FMSMSX106.amr.corp.intel.com (10.19.9.37) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Thu, 23 Aug 2012 18:17:47 -0700
Received: from shsmsx101.ccr.corp.intel.com (10.239.4.153) by
	FMSMSX153.amr.corp.intel.com (10.19.17.7) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Thu, 23 Aug 2012 18:17:46 -0700
Received: from shsmsx102.ccr.corp.intel.com ([169.254.2.92]) by
	SHSMSX101.ccr.corp.intel.com ([169.254.1.239]) with mapi id
	14.01.0355.002; Fri, 24 Aug 2012 09:17:45 +0800
From: "Xu, Dongxiao" <dongxiao.xu@intel.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>, Keir Fraser
	<keir@xen.org>, Jan Beulich <JBeulich@suse.com>, Ian Jackson
	<Ian.Jackson@eu.citrix.com>
Thread-Topic: [Xen-devel] [PATCH v2] QEMU/helper2.c: Fix multiply issue for
	int and uint types
Thread-Index: AQHNfueyW97jVGKJUE+4w6w+zBT9h5doKxJw
Date: Fri, 24 Aug 2012 01:17:44 +0000
Message-ID: <40776A41FC278F40B59438AD47D147A90FE1D78C@SHSMSX102.ccr.corp.intel.com>
References: <40776A41FC278F40B59438AD47D147A90FE17D93@SHSMSX102.ccr.corp.intel.com>
	<alpine.DEB.2.02.1208201621330.15568@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1208201621330.15568@kaball.uk.xensource.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v2] QEMU/helper2.c: Fix multiply issue for
 int and uint types
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> > From d71f9be82ec0079aa88f779dea90e475b177e32f Mon Sep 17 00:00:00 2001
> > From: Dongxiao Xu <dongxiao.xu@intel.com>
> > Date: Mon, 20 Aug 2012 16:45:04 +0800
> > Subject: [PATCH] helper2: fix multiply issue for int and uint types
> >
> > If the two multiply operands are int and uint types separately, the
> > int type will be transformed to uint firstly, which is not the intent
> > in our code piece. The fix is to add (int64_t) transform for the uint
> > type before the multiply.
> >
> > This helps to fix the Xen hypervisor slow booting issue (boots more
> > than 30 minutes) on another Xen hypervisor (the nested virtualization
> > case).
> >
> > Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
> 
> Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

Is this patch supposed to be merged into Xen 4.2?

This is a fix for the nested Xen boot issue, and it has already been merged in upstream QEMU.
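The C pitfall the quoted patch fixes can be shown in isolation. Below is a minimal sketch; offset_buggy() and offset_fixed() are illustrative helpers, not QEMU functions. As in cpu_ioreq_pio()/cpu_ioreq_move(), sign and i are int while size is unsigned (mirroring req->size), so C's usual arithmetic conversions make the unfixed product unsigned:

```c
#include <stdint.h>

/* Buggy form: in sign * i * size, the int operands are converted to
 * unsigned, so with sign == -1 the product wraps to a huge positive
 * 32-bit value that is then zero-extended into the 64-bit address. */
static uint64_t offset_buggy(uint64_t base, int sign, int i, uint32_t size)
{
    return base + sign * i * size;
}

/* Fixed form, as in the patch: casting size to int64_t keeps the whole
 * multiply signed 64-bit, so a negative offset subtracts as intended. */
static uint64_t offset_fixed(uint64_t base, int sign, int i, uint32_t size)
{
    return base + sign * i * (int64_t)size;
}
```

With base 0x1000, sign -1, i 1 and size 4, offset_buggy() returns 0x100000ffc (the wrapped unsigned product, zero-extended), while offset_fixed() returns the intended 0xffc.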

We have several patches that fix issues and enable nested virtualization for Xen on Xen.
They are:

1) QEMU/helper2.c: Fix multiply issue for int and uint types.
This patch fixes the nested Xen boot issue. (It is the patch in this mail.)

2) Xiantao Zhang will (likely today) send out another patch to fix the L2 guest booting issue.

3) nvmx: fix resource relinquish for nested VMX.
This patch fixes the destroy issue (resources are not released, and xl list still shows the guest) for an L1 guest with an L2 guest running in it. (Already sent to the mailing list.)

4) Another patch, already merged by Ian J, fixes the slow-boot issue for L2 guests; see:
http://xenbits.xen.org/gitweb/?p=qemu-xen-unstable.git;a=commit;h=effd5676225761abdab90becac519716515c3be4

For the remaining three, will you accept them for the Xen 4.2 release?

Thanks,
Dongxiao

> 
> 
> > ---
> >  i386-dm/helper2.c |   16 ++++++++--------
> >  1 files changed, 8 insertions(+), 8 deletions(-)
> >
> > diff --git a/i386-dm/helper2.c b/i386-dm/helper2.c index
> > c6d049c..c093249 100644
> > --- a/i386-dm/helper2.c
> > +++ b/i386-dm/helper2.c
> > @@ -364,7 +364,7 @@ static void cpu_ioreq_pio(CPUState *env, ioreq_t
> *req)
> >              for (i = 0; i < req->count; i++) {
> >                  tmp = do_inp(env, req->addr, req->size);
> >                  write_physical((target_phys_addr_t) req->data
> > -                  + (sign * i * req->size),
> > +                  + (sign * i * (int64_t)req->size),
> >                    req->size, &tmp);
> >              }
> >          }
> > @@ -376,7 +376,7 @@ static void cpu_ioreq_pio(CPUState *env, ioreq_t
> *req)
> >                  unsigned long tmp = 0;
> >
> >                  read_physical((target_phys_addr_t) req->data
> > -                  + (sign * i * req->size),
> > +                  + (sign * i * (int64_t)req->size),
> >                    req->size, &tmp);
> >                  do_outp(env, req->addr, req->size, tmp);
> >              }
> > @@ -394,13 +394,13 @@ static void cpu_ioreq_move(CPUState *env,
> ioreq_t *req)
> >          if (req->dir == IOREQ_READ) {
> >              for (i = 0; i < req->count; i++) {
> >                  read_physical(req->addr
> > -                  + (sign * i * req->size),
> > +                  + (sign * i * (int64_t)req->size),
> >                    req->size, &req->data);
> >              }
> >          } else if (req->dir == IOREQ_WRITE) {
> >              for (i = 0; i < req->count; i++) {
> >                  write_physical(req->addr
> > -                  + (sign * i * req->size),
> > +                  + (sign * i * (int64_t)req->size),
> >                    req->size, &req->data);
> >              }
> >          }
> > @@ -410,19 +410,19 @@ static void cpu_ioreq_move(CPUState *env,
> ioreq_t *req)
> >          if (req->dir == IOREQ_READ) {
> >              for (i = 0; i < req->count; i++) {
> >                  read_physical(req->addr
> > -                  + (sign * i * req->size),
> > +                  + (sign * i * (int64_t)req->size),
> >                    req->size, &tmp);
> >                  write_physical((target_phys_addr_t )req->data
> > -                  + (sign * i * req->size),
> > +                  + (sign * i * (int64_t)req->size),
> >                    req->size, &tmp);
> >              }
> >          } else if (req->dir == IOREQ_WRITE) {
> >              for (i = 0; i < req->count; i++) {
> >                  read_physical((target_phys_addr_t) req->data
> > -                  + (sign * i * req->size),
> > +                  + (sign * i * (int64_t)req->size),
> >                    req->size, &tmp);
> >                  write_physical(req->addr
> > -                  + (sign * i * req->size),
> > +                  + (sign * i * (int64_t)req->size),
> >                    req->size, &tmp);
> >              }
> >          }
> > --
> > 1.7.1
> >
> >

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 24 01:33:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 01:33:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4ilm-00042H-KW; Fri, 24 Aug 2012 01:33:06 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andreslc@gridcentric.ca>) id 1T4ilk-000428-Jx
	for xen-devel@lists.xensource.com; Fri, 24 Aug 2012 01:33:04 +0000
X-Env-Sender: andreslc@gridcentric.ca
X-Msg-Ref: server-13.tower-27.messagelabs.com!1345771977!8624313!1
X-Originating-IP: [209.85.214.171]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23507 invoked from network); 24 Aug 2012 01:32:58 -0000
Received: from mail-ob0-f171.google.com (HELO mail-ob0-f171.google.com)
	(209.85.214.171)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Aug 2012 01:32:58 -0000
Received: by obqv19 with SMTP id v19so1740046obq.30
	for <xen-devel@lists.xensource.com>;
	Thu, 23 Aug 2012 18:32:56 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=subject:mime-version:content-type:from:in-reply-to:date:cc
	:content-transfer-encoding:message-id:references:to:x-mailer
	:x-gm-message-state;
	bh=nQmn/d0XlfH3sO/suwzQp8caxcG0aPbJjvhyJghSX4g=;
	b=jKTNAuNzsuoKekZZDj+G8+EwuF8H7rYO06Zv9+igL0N52Q9VJ4dv5Bqjl72WiDpgXh
	8n3MIy6VCnYszuLK8in1aVCrIbZXO2rQwFxOLC3JyF/P2hj2oz/EhJw70JEi9FCBwzqu
	+w4suug+UfPWk/50fC/+5n/cOU//jgbEWnVXEAyzAq/1eZ8Q75zyG8iZWHq5SLHRxKZa
	cAbktsQwf9wlnPIgyjbHpuUbHgz8b6uq17SZQOP75kP2OLxJJnSlYRqzGjjdob31fZ80
	+76oTCWP4eNyr/YYxDVgxtuu17B6EL4Pd5zFN17cZgM356mz+Klr/C+OLk4jZM/KyTB2
	DEOQ==
Received: by 10.50.157.196 with SMTP id wo4mr417691igb.22.1345771976360;
	Thu, 23 Aug 2012 18:32:56 -0700 (PDT)
Received: from [192.168.1.100] (69-196-182-10.dsl.teksavvy.com.
	[69.196.182.10])
	by mx.google.com with ESMTPS id ua5sm870492igb.10.2012.08.23.18.32.55
	(version=TLSv1/SSLv3 cipher=OTHER);
	Thu, 23 Aug 2012 18:32:55 -0700 (PDT)
Mime-Version: 1.0 (Apple Message framework v1278)
From: Andres Lagar-Cavilla <andreslc@gridcentric.ca>
In-Reply-To: <1345742026-10569-1-git-send-email-david.vrabel@citrix.com>
Date: Thu, 23 Aug 2012 21:32:57 -0400
Message-Id: <E11EC85A-BEB5-4DAF-B899-8DEE1E52D382@gridcentric.ca>
References: <1345742026-10569-1-git-send-email-david.vrabel@citrix.com>
To: David Vrabel <david.vrabel@citrix.com>
X-Mailer: Apple Mail (2.1278)
X-Gm-Message-State: ALoCoQlfZy248eZnfUbiezXGMMbhKZ4+i7qzQQd4iOefBZHOB0z7kRzSDGob8BX1s1Lim4lPsl7p
Cc: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xensource.com,
	Keir Fraser <keir@xen.org>, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [RFC PATCH 0/3] xen/privcmd: support for paged-out
	frames
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Aug 23, 2012, at 1:13 PM, David Vrabel wrote:

> This series is a straight forward-port of some functionality from
> classic kernels to support Xen hosts that do paging of guests.
> 
> This isn't functionality that XenServer makes use of, so I've not tested
> these with paging in use (GridCentric requested that our older kernels
> support this, and I'm just doing the forward port).

Thanks for this series; very timely. I may add that we are not the only consumers of paging. This functionality was first added to the classic kernels by Olaf Hering from SUSE (added to cc).

> 
> I'm not entirely happy about the approach used here because:
> 
> 1. It relies on the meaning of the return code of the update_mmu
> hypercall and it assumes the value Xen used for -ENOENT is the same
> the kernel uses. This does not appear to be a formal part of the
> hypercall ABI.
> 
> Keir, can you comment on this?

I see your point. I may add that it's likely to be more pervasive than just relying on ENOENT being 2, which is a fairly safe bet.

> 
> 2. It seems more sensible to have the kernel do the retries instead of
> libxc doing them.  The kernel has to have a mechanism for this any way
> (for mapping back/front rings).
> 
> 3. The current way of handling paged-out frames by repeatedly retrying
> is a bit lame.  Shouldn't there be some event that the guest waiting
> for the frame can wait on instead?  By moving the retry mechanism into
> the kernel we can change this without impacting the ABI to userspace.

Lame is an interesting choice of language :)

I am not a huge fan of the libxc retry, but we've been pounding it quite hard for a while and it works -- and, importantly, it yields to the scheduler :)

While kernel retry may benefit from hypothetical code reuse, "Shouldn't there be some event that the guest waiting for the frame can wait on instead?" will need to become concrete to start a real discussion.

For better or worse, since xen-4.1 (!) libxc will do the right thing if fed the appropriate errno.
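The retry pattern under discussion can be sketched roughly as follows. This is a hedged illustration, not the real libxc code: try_map() is a hypothetical stand-in for the actual privcmd/libxc mapping call, with the number of "still paged out" failures injected via a counter.

```c
#include <errno.h>

/* Hypothetical stand-in for the real mapping call: *paged_out counts
 * how many more attempts will still find the frame paged out. */
static int try_map(int *paged_out)
{
    if (*paged_out > 0) {
        (*paged_out)--;
        return -ENOENT;   /* frame paged out; pager still working */
    }
    return 0;             /* frame mapped */
}

/* Retry loop in the style of the libxc behaviour described above:
 * keep re-issuing the request while -ENOENT is reported, giving the
 * pager a chance to bring the frame back in. */
static int map_with_retry(int *paged_out, int max_tries)
{
    int tries, rc = -ENOENT;

    for (tries = 0; tries < max_tries; tries++) {
        rc = try_map(paged_out);
        if (rc != -ENOENT)
            break;        /* success, or a real error */
        /* real code would yield/sleep here before retrying */
    }
    return rc;
}
```

If the pager finishes within the retry budget the loop returns 0; otherwise the caller still sees -ENOENT and can decide whether to keep waiting.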

Let me reiterate that this is great work. Thanks,
Andres

> 
> David
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 24 01:35:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 01:35:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4ine-00047J-Fh; Fri, 24 Aug 2012 01:35:02 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andreslc@gridcentric.ca>) id 1T4inc-00047C-N8
	for xen-devel@lists.xensource.com; Fri, 24 Aug 2012 01:35:00 +0000
Received: from [85.158.143.99:6566] by server-3.bemta-4.messagelabs.com id
	5F/67-08232-44AD6305; Fri, 24 Aug 2012 01:35:00 +0000
X-Env-Sender: andreslc@gridcentric.ca
X-Msg-Ref: server-16.tower-216.messagelabs.com!1345772098!16480739!1
X-Originating-IP: [209.85.210.171]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
  RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14963 invoked from network); 24 Aug 2012 01:34:59 -0000
Received: from mail-iy0-f171.google.com (HELO mail-iy0-f171.google.com)
	(209.85.210.171)
	by server-16.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Aug 2012 01:34:59 -0000
Received: by iabz25 with SMTP id z25so2981761iab.30
	for <xen-devel@lists.xensource.com>;
	Thu, 23 Aug 2012 18:34:57 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=subject:mime-version:content-type:from:in-reply-to:date:cc
	:content-transfer-encoding:message-id:references:to:x-mailer
	:x-gm-message-state;
	bh=awrAP0t+6SErEZ3W7zGPzCzbU2q5PwEVsqbuUOPrcO0=;
	b=KIEB0o519M83X4daxUGIAOf7/ozybjONpl9PsDTbD1V7AC8Lf19TcNC6vZevRft6kc
	g3R/A9uLpNaPhWHVotTb/bM+ERmfrz1kLqS/RziLWHOgbnlwF0I9Ie55QJPIRT9pryIK
	I11SlA5NwSONwlKao3z4ZYJ9mdahuCv97BshnAIURcUjOljuGX44yDijiW3e5S2RJ/uU
	MR0f4xaSYnWJ32SaCubD8eR6G27muLleW6MvgTb5mLOqQKxtDpDRFSkHu3qAGkAKTSgb
	oLD8V7BbctQFVdW+urC8U+GQbwSWu+rj5344ApZqcu0YIwkDnQpZ0CoM/ymDylSXEEMl
	Xffg==
Received: by 10.50.189.227 with SMTP id gl3mr394357igc.34.1345772097776;
	Thu, 23 Aug 2012 18:34:57 -0700 (PDT)
Received: from [192.168.1.100] (69-196-182-10.dsl.teksavvy.com.
	[69.196.182.10])
	by mx.google.com with ESMTPS id k6sm877212igz.9.2012.08.23.18.34.56
	(version=TLSv1/SSLv3 cipher=OTHER);
	Thu, 23 Aug 2012 18:34:57 -0700 (PDT)
Mime-Version: 1.0 (Apple Message framework v1278)
From: Andres Lagar-Cavilla <andreslc@gridcentric.ca>
In-Reply-To: <1345742026-10569-3-git-send-email-david.vrabel@citrix.com>
Date: Thu, 23 Aug 2012 21:34:58 -0400
Message-Id: <FFF9D448-2D04-4E6C-B846-8FA6B4B60543@gridcentric.ca>
References: <1345742026-10569-1-git-send-email-david.vrabel@citrix.com>
	<1345742026-10569-3-git-send-email-david.vrabel@citrix.com>
To: David Vrabel <david.vrabel@citrix.com>
X-Mailer: Apple Mail (2.1278)
X-Gm-Message-State: ALoCoQlng3Nh0jAbr9OHwAmKheLuCfsTyoldd20xDrP0gxjMGx08PFsH57/aPUsBaP3wo6lHhGKM
Cc: xen-devel@lists.xensource.com, Keir Fraser <keir@xen.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [PATCH 2/3] xen/privcmd: report paged-out frames in
	PRIVCMD_MMAPBATCH ioctl
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On Aug 23, 2012, at 1:13 PM, David Vrabel wrote:

> From: David Vrabel <david.vrabel@citrix.com>
> 
> libxc handles paged-out frames in xc_map_foreign_bulk() and friends by
> retrying the map operation.  libxc expects the PRIVCMD_MMAPBATCH ioctl
> to report paged out frames by setting bit 31 in the mfn.
> 
> Do this for the PRIVCMD_MMAPBATCH ioctl if
> xen_remap_domain_mfn_range() returned -ENOENT.
> 
> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
> ---
> drivers/xen/privcmd.c |   11 ++++++++---
> 1 files changed, 8 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
> index ccee0f1..f8c1b6d 100644
> --- a/drivers/xen/privcmd.c
> +++ b/drivers/xen/privcmd.c
> @@ -255,10 +255,15 @@ static int mmap_batch_fn(void *data, void *state)
> {
> 	xen_pfn_t *mfnp = data;
> 	struct mmap_batch_state *st = state;
> +	int ret;
> 
> -	if (xen_remap_domain_mfn_range(st->vma, st->va & PAGE_MASK, *mfnp, 1,
> -				       st->vma->vm_page_prot, st->domain) < 0) {
> -		*mfnp |= 0xf0000000U;
> +	ret = xen_remap_domain_mfn_range(st->vma, st->va & PAGE_MASK, *mfnp, 1,
> +					 st->vma->vm_page_prot, st->domain);
> +	if (ret < 0) {
> +		if (ret == -ENOENT)
> +			*mfnp |= 0x80000000U;

As Konrad pointed out separately, constants here would be great. 

These two constants are defined in <xen>/include/public/domctl.h. In particular, the 0x80..0 constant is defined there but *not* used by the hypervisor. I don't see a problem in (re)defining it in the Linux interface headers shared between libxc and the kernel -- that is where it really belongs. It could even be phased out of domctl.h in 4.2 (unlikely) or 4.3.

As for the 0xf0..0 constant, I have no strong opinion -- it's a pre-existing problem.
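For illustration, named constants for the two encodings might look like this. The identifiers below are hypothetical, not the ones in domctl.h or privcmd.h; the real definitions would live in the shared privcmd interface header:

```c
#include <stdint.h>

/* Hypothetical names -- not the identifiers used in any existing header.
 * The top nibble 0xf marks a generic mapping failure; bit 31 alone
 * (0x80..0) marks a paged-out frame that the caller should retry. */
#define MMAPBATCH_MFN_ERROR   UINT32_C(0xf0000000)
#define MMAPBATCH_PAGED_ERROR UINT32_C(0x80000000)
```

With names like these, both mmap_batch_fn() and libxc could test the same symbol instead of repeating magic numbers.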

> +		else
> +			*mfnp |= 0xf0000000U;
> 		st->err++;
> 	}
> 	st->va += PAGE_SIZE;

libxc expects errno to be ENOENT if at least one map hypercall returned ENOENT. So you need to keep an extra latch in the state, e.g. st->enoents. Or recycle st->err as a tristate (ok, error, enoent), since I don't see any actual use of the err count. Either way, privcmd_ioctl_mmap_batch needs to find out whether it should return ENOENT. Note that this requirement also extends to PRIVCMD_MMAPBATCH_V2.
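One possible shape for the tristate approach, as a sketch only (the enum, struct, and function names are made up for illustration, not taken from the patch):

```c
#include <errno.h>

/* Sketch: recycle the per-batch error field as a tristate so the ioctl
 * can tell "some frame was paged out" (=> return -ENOENT) apart from a
 * plain error. All names here are hypothetical. */
enum batch_status { BATCH_OK = 0, BATCH_ERROR, BATCH_ENOENT };

struct mmap_batch_state_sketch {
	enum batch_status status;
};

static void record_remap_result(struct mmap_batch_state_sketch *st, int ret)
{
	if (ret == -ENOENT)
		st->status = BATCH_ENOENT;	/* ENOENT latches and wins */
	else if (ret < 0 && st->status == BATCH_OK)
		st->status = BATCH_ERROR;	/* never overwrite a latched ENOENT */
}
```

The key property is that once any frame reports ENOENT, later plain errors cannot mask it, which matches what libxc's retry loop needs.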

Andres
 
> -- 
> 1.7.2.5
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 24 01:36:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 01:36:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4ioU-0004BT-UJ; Fri, 24 Aug 2012 01:35:54 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andreslc@gridcentric.ca>) id 1T4ioS-0004BB-TH
	for xen-devel@lists.xensource.com; Fri, 24 Aug 2012 01:35:53 +0000
Received: from [85.158.138.51:60867] by server-12.bemta-3.messagelabs.com id
	9B/59-04073-87AD6305; Fri, 24 Aug 2012 01:35:52 +0000
X-Env-Sender: andreslc@gridcentric.ca
X-Msg-Ref: server-2.tower-174.messagelabs.com!1345772148!27708854!1
X-Originating-IP: [209.85.210.171]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26924 invoked from network); 24 Aug 2012 01:35:49 -0000
Received: from mail-iy0-f171.google.com (HELO mail-iy0-f171.google.com)
	(209.85.210.171)
	by server-2.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Aug 2012 01:35:49 -0000
Received: by iabz25 with SMTP id z25so2983214iab.30
	for <xen-devel@lists.xensource.com>;
	Thu, 23 Aug 2012 18:35:48 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=subject:mime-version:content-type:from:in-reply-to:date:cc
	:content-transfer-encoding:message-id:references:to:x-mailer
	:x-gm-message-state;
	bh=i1qZtQv/23UXBuHNL8GZO+e5pMDUZ3D0tj2lT+fWyJQ=;
	b=fgPztnKr5RecUx2sqK3eJ7Cc78uiAXu4fcpV+y7Ox+s1dXMkxCHBj9Q8Cttx4c4kxK
	s9n2zd4Uk0/Vh9KbkhZTd+MAPTp40LmHtwv8yiMWoYQr1lfJTKHGEIjV66YBAkzjOI6U
	Sk+GBWO3bRRbccW0EzZwGLumaRt0tHwUCvtFagCGwO3Yi8zPQ9k3SVtD/dp9YbkT2AHC
	sXYq+tRsA4SthApS26I3jXNmhm5/Saco/BlEVvHnojHR6aLg9b5hWL3REA5mAkR0rviI
	GdPqs8Ic9gkK8DsHwRT4Ibhs7klqSenHBkqw3yNNZ/WOPiRlplwVRsBahd3msGIYs6CJ
	sniQ==
Received: by 10.42.92.17 with SMTP id r17mr2916710icm.39.1345772148146;
	Thu, 23 Aug 2012 18:35:48 -0700 (PDT)
Received: from [192.168.1.100] (69-196-182-10.dsl.teksavvy.com.
	[69.196.182.10])
	by mx.google.com with ESMTPS id k6sm877212igz.9.2012.08.23.18.35.46
	(version=TLSv1/SSLv3 cipher=OTHER);
	Thu, 23 Aug 2012 18:35:47 -0700 (PDT)
Mime-Version: 1.0 (Apple Message framework v1278)
From: Andres Lagar-Cavilla <andreslc@gridcentric.ca>
In-Reply-To: <1345742026-10569-4-git-send-email-david.vrabel@citrix.com>
Date: Thu, 23 Aug 2012 21:35:49 -0400
Message-Id: <6F588E8C-E2AD-4CD2-9C0E-052C28244B4C@gridcentric.ca>
References: <1345742026-10569-1-git-send-email-david.vrabel@citrix.com>
	<1345742026-10569-4-git-send-email-david.vrabel@citrix.com>
To: David Vrabel <david.vrabel@citrix.com>
X-Mailer: Apple Mail (2.1278)
X-Gm-Message-State: ALoCoQkLhO5zcQsjEM70bLHpHb0aU0fGDYBPv/jzwxtN02N/Jx76SiMjJVGaEyoLpIp5s+M4KWbg
Cc: xen-devel@lists.xensource.com, Keir Fraser <keir@xen.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [PATCH 3/3] xen/privcmd: add PRIVCMD_MMAPBATCH_V2
	ioctl
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="windows-1252"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On Aug 23, 2012, at 1:13 PM, David Vrabel wrote:

> From: David Vrabel <david.vrabel@citrix.com>
> 
> PRIVCMD_MMAPBATCH_V2 extends PRIVCMD_MMAPBATCH with an additional
> field for reporting the error code for every frame that could not be
> mapped.  libxc prefers PRIVCMD_MMAPBATCH_V2 over PRIVCMD_MMAPBATCH.
> 
> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
> ---
> drivers/xen/privcmd.c |   54 +++++++++++++++++++++++++++++++++++++++----------
> include/xen/privcmd.h |   10 +++++++++
> 2 files changed, 53 insertions(+), 11 deletions(-)
> 
> diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
> index f8c1b6d..4f97160 100644
> --- a/drivers/xen/privcmd.c
> +++ b/drivers/xen/privcmd.c
> @@ -248,7 +248,8 @@ struct mmap_batch_state {
> 	struct vm_area_struct *vma;
> 	int err;
> 
> -	xen_pfn_t __user *user;
> +	xen_pfn_t __user *user_mfn;
> +	int __user *user_err;
> };
> 
> static int mmap_batch_fn(void *data, void *state)
> @@ -275,34 +276,58 @@ static int mmap_return_errors(void *data, void *state)
> {
> 	xen_pfn_t *mfnp = data;
> 	struct mmap_batch_state *st = state;
> +	int ret = 0;
> 
> -	return put_user(*mfnp, st->user++);
> +	if (st->user_err) {
> +		if ((*mfnp & 0xf0000000U) == 0xf0000000U)
> +			ret = -ENOENT;
> +		else if ((*mfnp & 0xf0000000U) == 0x80000000U)
> +			ret = -EINVAL;
Wires crossed. 0x80..0 is ENOENT, 0xf0..0 is EINVAL. Really in need of constants.

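A sketch of the decoding with the mapping the right way around (the constant and function names below are made up for illustration):

```c
#include <errno.h>

/* Hypothetical constants: 0x80..0 marks a paged-out frame (ENOENT),
 * 0xf0..0 a generic mapping failure (EINVAL). */
#define MMAPBATCH_PAGED_ERROR 0x80000000U
#define MMAPBATCH_MFN_ERROR   0xf0000000U

/* Translate the error encoding in an mfn slot back to an errno value. */
static int mfn_slot_error(unsigned int mfn)
{
	if ((mfn & MMAPBATCH_MFN_ERROR) == MMAPBATCH_MFN_ERROR)
		return -EINVAL;	/* generic mapping failure */
	if ((mfn & MMAPBATCH_MFN_ERROR) == MMAPBATCH_PAGED_ERROR)
		return -ENOENT;	/* paged out: caller should retry the map */
	return 0;
}
```

Testing against the full top nibble first, then the bit-31-only pattern, keeps the two encodings from shadowing each other.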
> +		else
> +			ret = 0;
> +		return __put_user(ret, st->user_err);
> +	} else
> +		return __put_user(*mfnp, st->user_mfn++);
> }
> 
> static struct vm_operations_struct privcmd_vm_ops;
> 
> -static long privcmd_ioctl_mmap_batch(void __user *udata)
> +static long privcmd_ioctl_mmap_batch(void __user *udata, int version)
> {
> 	int ret;
> -	struct privcmd_mmapbatch m;
> +	struct privcmd_mmapbatch_v2 m;
> 	struct mm_struct *mm = current->mm;
> 	struct vm_area_struct *vma;
> 	unsigned long nr_pages;
> 	LIST_HEAD(pagelist);
> 	struct mmap_batch_state state;
> 
> +	printk("%s(%d)\n", __func__, version);
Surely this and the other unconditional printk's are to go away in the next round…

Thanks
Andres

> +
> 	if (!xen_initial_domain())
> 		return -EPERM;
> 
> -	if (copy_from_user(&m, udata, sizeof(m)))
> -		return -EFAULT;
> +	if (version == 1) {
> +		if (copy_from_user(&m, udata, sizeof(struct privcmd_mmapbatch)))
> +			return -EFAULT;
> +		if (!access_ok(VERIFY_WRITE, m.arr, m.num * sizeof(*m.arr)))
> +			return -EFAULT;
> +		m.err = NULL;
> +	} else {
> +		if (copy_from_user(&m, udata, sizeof(struct privcmd_mmapbatch_v2)))
> +			return -EFAULT;
> +		if (!access_ok(VERIFY_READ, m.arr, m.num * sizeof(*m.arr)))
> +			return -EFAULT;
> +		if (!access_ok(VERIFY_WRITE, m.err, m.num * (sizeof(*m.err))))
> +			return -EFAULT;
> +	}
> 
> 	nr_pages = m.num;
> 	if ((m.num <= 0) || (nr_pages > (LONG_MAX >> PAGE_SHIFT)))
> 		return -EINVAL;
> 
> 	ret = gather_array(&pagelist, m.num, sizeof(xen_pfn_t),
> -			   m.arr);
> +			   (xen_pfn_t *)m.arr);
> 
> 	if (ret || list_empty(&pagelist))
> 		goto out;
> @@ -331,10 +356,11 @@ static long privcmd_ioctl_mmap_batch(void __user *udata)
> 	up_write(&mm->mmap_sem);
> 
> 	if (state.err > 0) {
> -		state.user = m.arr;
> +		state.user_mfn = (xen_pfn_t *)m.arr;
> +		state.user_err = m.err;
> 		ret = traverse_pages(m.num, sizeof(xen_pfn_t),
> -			       &pagelist,
> -			       mmap_return_errors, &state);
> +				     &pagelist,
> +				     mmap_return_errors, &state);
> 	}
> 
> out:
> @@ -359,7 +385,13 @@ static long privcmd_ioctl(struct file *file,
> 		break;
> 
> 	case IOCTL_PRIVCMD_MMAPBATCH:
> -		ret = privcmd_ioctl_mmap_batch(udata);
> +		ret = privcmd_ioctl_mmap_batch(udata, 1);
> +		printk("%s() batch ret = %d\n", __func__, ret);
> +		break;
> +
> +	case IOCTL_PRIVCMD_MMAPBATCH_V2:
> +		ret = privcmd_ioctl_mmap_batch(udata, 2);
> +		printk("%s() batch ret = %d\n", __func__, ret);
> 		break;
> 
> 	default:
> diff --git a/include/xen/privcmd.h b/include/xen/privcmd.h
> index 17857fb..9fa27c4 100644
> --- a/include/xen/privcmd.h
> +++ b/include/xen/privcmd.h
> @@ -62,6 +62,14 @@ struct privcmd_mmapbatch {
> 	xen_pfn_t __user *arr; /* array of mfns - top nibble set on err */
> };
> 
> +struct privcmd_mmapbatch_v2 {
> +	unsigned int num; /* number of pages to populate */
> +	domid_t dom;      /* target domain */
> +	__u64 addr;       /* virtual address */
> +	const xen_pfn_t __user *arr; /* array of mfns */
> +	int __user *err;  /* array of error codes */
> +};
> +
> /*
>  * @cmd: IOCTL_PRIVCMD_HYPERCALL
>  * @arg: &privcmd_hypercall_t
> @@ -73,5 +81,7 @@ struct privcmd_mmapbatch {
> 	_IOC(_IOC_NONE, 'P', 2, sizeof(struct privcmd_mmap))
> #define IOCTL_PRIVCMD_MMAPBATCH					\
> 	_IOC(_IOC_NONE, 'P', 3, sizeof(struct privcmd_mmapbatch))
> +#define IOCTL_PRIVCMD_MMAPBATCH_V2				\
> +	_IOC(_IOC_NONE, 'P', 4, sizeof(struct privcmd_mmapbatch_v2))
> 
> #endif /* __LINUX_PUBLIC_PRIVCMD_H__ */
> -- 
> 1.7.2.5
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 24 03:05:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 03:05:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4kCI-00056P-Oz; Fri, 24 Aug 2012 03:04:34 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T4kCI-00056K-5r
	for xen-devel@lists.xensource.com; Fri, 24 Aug 2012 03:04:34 +0000
Received: from [85.158.143.99:44459] by server-2.bemta-4.messagelabs.com id
	94/FF-21239-14FE6305; Fri, 24 Aug 2012 03:04:33 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-16.tower-216.messagelabs.com!1345777472!16487505!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA1OTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2275 invoked from network); 24 Aug 2012 03:04:32 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Aug 2012 03:04:32 -0000
X-IronPort-AV: E=Sophos;i="4.80,301,1344211200"; d="scan'208";a="14159381"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	24 Aug 2012 03:04:19 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 24 Aug 2012 04:04:19 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1T4kC3-0002lu-5a;
	Fri, 24 Aug 2012 03:04:19 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1T4kC2-0004bC-Rj;
	Fri, 24 Aug 2012 04:04:18 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13624-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 24 Aug 2012 04:04:18 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13624: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13624 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13624/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13622
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13622
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13622
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13622

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  4ca40e0559c3
baseline version:
 xen                  b02ac80ff689

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Keir Fraser <keir@xen.org>
  Roger Pau Monne <roger.pau@citrix.com>
  Santosh Jodh <santosh.jodh@citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=4ca40e0559c3
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable 4ca40e0559c3
+ branch=xen-unstable
+ revision=4ca40e0559c3
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/staging/xen-unstable.hg
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-unstable.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-unstable.hg
+ hg push -r 4ca40e0559c3 ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 7 changesets with 12 changes to 10 files
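The trace above shows osstest's locking pattern: `cri-lock-repos` re-execs the push script under `with-lock-ex -w`, and detects re-entry by comparing `$OSSTEST_REPOS_LOCK_LOCKED` against the lock path, so the `hg push` runs with the repository lock held. A minimal sketch of the same idea using `flock(1)` instead of osstest's own `with-lock-ex` utility (lock path and payload here are illustrative, not osstest's):

```shell
#!/bin/sh
# Sketch: run a critical section while holding an exclusive lock on a
# lock file, so concurrent pushes to the same repo serialize.
set -e
LOCKFILE="${TMPDIR:-/tmp}/repos-sketch.lock"
# flock -x blocks until the exclusive lock is granted, runs the given
# command, and releases the lock when the command exits.
flock -x "$LOCKFILE" sh -c 'echo "lock held: pushing revision"'
rm -f "$LOCKFILE"
```

osstest's real helper additionally exports the lock path in the environment so nested sourcing of `cri-lock-repos` (visible twice in the trace, before and after the `exec`) does not try to take the lock a second time.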

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 24 03:06:26 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 03:06:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4kDj-0005AL-9K; Fri, 24 Aug 2012 03:06:03 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <vijay.chander@gmail.com>) id 1T4kDi-0005A9-8O
	for xen-devel@lists.xensource.com; Fri, 24 Aug 2012 03:06:02 +0000
Received: from [85.158.139.83:10291] by server-8.bemta-5.messagelabs.com id
	2B/57-02481-99FE6305; Fri, 24 Aug 2012 03:06:01 +0000
X-Env-Sender: vijay.chander@gmail.com
X-Msg-Ref: server-10.tower-182.messagelabs.com!1345777560!27876404!1
X-Originating-IP: [74.125.82.171]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_10_20,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21920 invoked from network); 24 Aug 2012 03:06:00 -0000
Received: from mail-we0-f171.google.com (HELO mail-we0-f171.google.com)
	(74.125.82.171)
	by server-10.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Aug 2012 03:06:00 -0000
Received: by weys43 with SMTP id s43so84107wey.30
	for <multiple recipients>; Thu, 23 Aug 2012 20:06:00 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=oPjtvQHFS1SfuAQpzeDtu0lg1tfIsAyYNpllDvBUU8c=;
	b=f5aq7n3aPI1gEdGLXviXqV+qjNNDxR7oXvOe2LxlolnGxE3vKzAt4NiIEf16+989iq
	fK/vR9e664TnewsRqkL+KOTktAKnvqERkDDNw1pRkYYfflx07kxUZ+kJC50S+1+/TLfO
	jwzIeO4n9i1og4YDDNYeb6r1OJIAbxy4C7tIBVHQIGJft6fFAAB6QshNZlmmZmU2PGEJ
	kM+6bcQucWbUSTca7BHkqIAdPbgZqdLaHFFb8Axu5AmqgzWxnsdRq46ldbJBKeCRkID9
	exmPa3UPy4tu0LeCkScssm4B2ON+BWBMkfsxt7QWRu36cClIzbxzDY4gfImAW6FJO46+
	q32w==
MIME-Version: 1.0
Received: by 10.180.107.103 with SMTP id hb7mr1878148wib.3.1345777560678; Thu,
	23 Aug 2012 20:06:00 -0700 (PDT)
Received: by 10.216.50.8 with HTTP; Thu, 23 Aug 2012 20:06:00 -0700 (PDT)
Date: Thu, 23 Aug 2012 20:06:00 -0700
Message-ID: <CAJNqtuosC0bOkL8vGu-shhL2VbkCLuwLFuQEUd1co2vXUCkgYg@mail.gmail.com>
From: Vijay Chander <vijay.chander@gmail.com>
To: xen-users@lists.xensource.com, xen-devel@lists.xensource.com
Subject: [Xen-devel] ESXi guest VM on XEN
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1877480693256442400=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============1877480693256442400==
Content-Type: multipart/alternative; boundary=e89a8f3bab97736fe704c7fa41f1

--e89a8f3bab97736fe704c7fa41f1
Content-Type: text/plain; charset=ISO-8859-1

Hello,
 Has anyone tried running an ESXi guest VM on a Xen hypervisor?

  Installation of ESXi 5.1 went through, but the ESXi VM was not able to
  recognize the network adapters exported by Xen.

 Thanks,
-vijay

--e89a8f3bab97736fe704c7fa41f1
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

Hello,<br>=A0Has anyone tried running a ESXi guest VM on a XEN hypervisor ?=
<br><br>=A0 Installation of 5.1 ESXi went through but ESXi VM was not able =
to recognize<br>=A0 the network adapters exported by XEN.<br><br>=A0Thanks,=
<br>-vijay<br>

--e89a8f3bab97736fe704c7fa41f1--


--===============1877480693256442400==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1877480693256442400==--


From xen-devel-bounces@lists.xen.org Fri Aug 24 06:35:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 06:35:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4nTP-0006vf-EV; Fri, 24 Aug 2012 06:34:27 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1T4nTO-0006va-VX
	for xen-devel@lists.xensource.com; Fri, 24 Aug 2012 06:34:27 +0000
Received: from [85.158.143.35:59790] by server-1.bemta-4.messagelabs.com id
	D3/D4-12504-27027305; Fri, 24 Aug 2012 06:34:26 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-14.tower-21.messagelabs.com!1345790018!16008653!1
X-Originating-IP: [192.89.123.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjg5LjEyMy4yNSA9PiA0OTExMzI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31056 invoked from network); 24 Aug 2012 06:33:39 -0000
Received: from smtp.tele.fi (HELO smtp.tele.fi) (192.89.123.25)
	by server-14.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 24 Aug 2012 06:33:39 -0000
X-Originating-Ip: [194.89.68.22]
Received: from ydin.reaktio.net (reaktio.net [194.89.68.22])
	by smtp.tele.fi (Postfix) with ESMTP id 0CD4D1733;
	Fri, 24 Aug 2012 09:33:36 +0300 (EEST)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id CD3F12005D; Fri, 24 Aug 2012 09:33:25 +0300 (EEST)
Date: Fri, 24 Aug 2012 09:33:25 +0300
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: Vijay Chander <vijay.chander@gmail.com>
Message-ID: <20120824063325.GH19851@reaktio.net>
References: <CAJNqtuosC0bOkL8vGu-shhL2VbkCLuwLFuQEUd1co2vXUCkgYg@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAJNqtuosC0bOkL8vGu-shhL2VbkCLuwLFuQEUd1co2vXUCkgYg@mail.gmail.com>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: xen-devel@lists.xensource.com, xen-users@lists.xensource.com
Subject: Re: [Xen-devel] ESXi guest VM on XEN
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Aug 23, 2012 at 08:06:00PM -0700, Vijay Chander wrote:
>    Hello,
>     Has anyone tried running a ESXi guest VM on a XEN hypervisor ?
> 
>      Installation of 5.1 ESXi went through but ESXi VM was not able to
>    recognize
>      the network adapters exported by XEN.
> 

Which emulated network adapter did you configure for the Xen HVM guest? 

e1000? 

-- Pasi
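For reference, the emulated NIC model Pasi is asking about is chosen in the HVM guest's domain configuration on the `vif` line. A minimal fragment (bridge name illustrative) that exposes an Intel e1000 to the guest:

```
# HVM guest config fragment: emulate an Intel e1000 NIC for the guest
builder = "hvm"
vif = [ 'model=e1000,bridge=xenbr0' ]
```

If `model=` is omitted, the default emulated device (historically an rtl8139) is used, which ESXi's installer may not have a driver for.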


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 24 06:35:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 06:35:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4nTP-0006vf-EV; Fri, 24 Aug 2012 06:34:27 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1T4nTO-0006va-VX
	for xen-devel@lists.xensource.com; Fri, 24 Aug 2012 06:34:27 +0000
Received: from [85.158.143.35:59790] by server-1.bemta-4.messagelabs.com id
	D3/D4-12504-27027305; Fri, 24 Aug 2012 06:34:26 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-14.tower-21.messagelabs.com!1345790018!16008653!1
X-Originating-IP: [192.89.123.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjg5LjEyMy4yNSA9PiA0OTExMzI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31056 invoked from network); 24 Aug 2012 06:33:39 -0000
Received: from smtp.tele.fi (HELO smtp.tele.fi) (192.89.123.25)
	by server-14.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 24 Aug 2012 06:33:39 -0000
X-Originating-Ip: [194.89.68.22]
Received: from ydin.reaktio.net (reaktio.net [194.89.68.22])
	by smtp.tele.fi (Postfix) with ESMTP id 0CD4D1733;
	Fri, 24 Aug 2012 09:33:36 +0300 (EEST)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id CD3F12005D; Fri, 24 Aug 2012 09:33:25 +0300 (EEST)
Date: Fri, 24 Aug 2012 09:33:25 +0300
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: Vijay Chander <vijay.chander@gmail.com>
Message-ID: <20120824063325.GH19851@reaktio.net>
References: <CAJNqtuosC0bOkL8vGu-shhL2VbkCLuwLFuQEUd1co2vXUCkgYg@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAJNqtuosC0bOkL8vGu-shhL2VbkCLuwLFuQEUd1co2vXUCkgYg@mail.gmail.com>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: xen-devel@lists.xensource.com, xen-users@lists.xensource.com
Subject: Re: [Xen-devel] ESXi guest VM on XEN
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Aug 23, 2012 at 08:06:00PM -0700, Vijay Chander wrote:
>    Hello,
>    Has anyone tried running an ESXi guest VM on a Xen hypervisor?
> 
>    Installation of ESXi 5.1 went through, but the ESXi VM was not able
>    to recognize the network adapters exported by Xen.
> 

Which emulated network adapter did you configure for the Xen HVM guest? 

e1000? 

-- Pasi
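For reference, the emulated NIC presented to a Xen HVM guest is chosen with the `model=` key in the vif line of the guest config; a sketch (the MAC and bridge values here are illustrative):

```
# HVM guest config fragment (illustrative MAC/bridge values):
# 'model=' selects which NIC the device model emulates for the guest.
vif = [ 'mac=00:16:3e:00:00:01,bridge=xenbr0,model=e1000' ]
```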


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 24 07:23:39 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 07:23:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4oEX-0007h0-EO; Fri, 24 Aug 2012 07:23:09 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1T4oEV-0007gv-RA
	for xen-devel@lists.xen.org; Fri, 24 Aug 2012 07:23:08 +0000
Received: from [85.158.138.51:60198] by server-3.bemta-3.messagelabs.com id
	E0/5C-13809-BDB27305; Fri, 24 Aug 2012 07:23:07 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-4.tower-174.messagelabs.com!1345792986!27497800!1
X-Originating-IP: [209.85.212.169]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16662 invoked from network); 24 Aug 2012 07:23:06 -0000
Received: from mail-wi0-f169.google.com (HELO mail-wi0-f169.google.com)
	(209.85.212.169)
	by server-4.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Aug 2012 07:23:06 -0000
Received: by wibhm2 with SMTP id hm2so1364324wib.2
	for <xen-devel@lists.xen.org>; Fri, 24 Aug 2012 00:23:06 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:cc:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=tvZWWdIcUqpIMehKTSqNNgXAPHf7nXF3ETqEuXLLpIU=;
	b=NowtaK5Roo08V6NYr/ZWE/7PDAfsUUhoorYtyEtwyioFZGvAuf3cjLeE/v6AE53BaD
	sa7jKwk3zcdZW2johhuCjluB6cSth2cxqDeLgfB46+5m5HFZfSiMD5FWaqC2SD5RsjXB
	p9E9SYYEPrPkTWFXjnwdM+hW0P59vimXytCudNlBCH+Qb8bK7pVLTQozkwjZa07NtUh6
	zfUov0eHRMApcK30rmN0DaX/ZVM2KBSRZw6il6etHEXY5HLxC1Q9eFZx8k3jTfmPANRi
	mPR593gXmwuAh19iKpVhl2e6k5kuN8DQC6ZnVIk8YUUkA4CnitiEENUlBE/bPityWdic
	BbyQ==
Received: by 10.216.192.85 with SMTP id h63mr2212835wen.7.1345792986071;
	Fri, 24 Aug 2012 00:23:06 -0700 (PDT)
Received: from [192.168.1.68] (host86-184-74-117.range86-184.btcentralplus.com.
	[86.184.74.117])
	by mx.google.com with ESMTPS id dc3sm2931876wib.7.2012.08.24.00.23.03
	(version=SSLv3 cipher=OTHER); Fri, 24 Aug 2012 00:23:04 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.32.0.111121
Date: Fri, 24 Aug 2012 08:22:58 +0100
From: Keir Fraser <keir.xen@gmail.com>
To: "Xu, Dongxiao" <dongxiao.xu@intel.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Jan Beulich <JBeulich@suse.com>, Ian Jackson <Ian.Jackson@eu.citrix.com>
Message-ID: <CC5CEA62.3CCD3%keir.xen@gmail.com>
Thread-Topic: [Xen-devel] [PATCH v2] QEMU/helper2.c: Fix multiply issue for
	int and uint types
Thread-Index: AQHNfueyW97jVGKJUE+4w6w+zBT9h5doKxJwgABpsPk=
In-Reply-To: <40776A41FC278F40B59438AD47D147A90FE1D78C@SHSMSX102.ccr.corp.intel.com>
Mime-version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v2] QEMU/helper2.c: Fix multiply issue for
 int and uint types
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 24/08/2012 02:17, "Xu, Dongxiao" <dongxiao.xu@intel.com> wrote:

> 1) QEMU/helper2.c: Fix multiply issue for int and uint types.
> This patch fixes the nested Xen boot-up issue. (in this mail)
> 
> 2) Xiantao Zhang will (probably today) send out another patch to fix
> the L2 guest booting issue.
> 
> 3) nvmx: fix resource relinquish for nested VMX.
> This patch fixes the destroy issue (resources are not released, so xl
> list still shows the guest) for an L1 guest with an L2 guest running
> in it. (Already sent to the mailing list)
> 
> 4) Another patch, which fixes the slow booting issue for L2 guests,
> has already been merged by Ian J; see:
> http://xenbits.xen.org/gitweb/?p=qemu-xen-unstable.git;a=commit;h=effd5676225761abdab90becac519716515c3be4
> 
> For the remaining three, will you accept them for the Xen 4.2 release?

Yes! I will look at (3) today. The others should be strongly considered too.

 -- Keir



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 24 07:44:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 07:44:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4oYs-0007ub-BC; Fri, 24 Aug 2012 07:44:10 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T4oYr-0007uT-Aw
	for xen-devel@lists.xensource.com; Fri, 24 Aug 2012 07:44:09 +0000
Received: from [85.158.139.83:19929] by server-4.bemta-5.messagelabs.com id
	24/4B-12386-8C037305; Fri, 24 Aug 2012 07:44:08 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-16.tower-182.messagelabs.com!1345794245!20285917!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA1OTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16642 invoked from network); 24 Aug 2012 07:44:05 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Aug 2012 07:44:05 -0000
X-IronPort-AV: E=Sophos;i="4.80,302,1344211200"; d="scan'208";a="14161598"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	24 Aug 2012 07:44:03 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 24 Aug 2012 08:44:03 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1T4oYl-00052o-J8;
	Fri, 24 Aug 2012 07:44:03 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1T4oYl-0006nj-DE;
	Fri, 24 Aug 2012 08:44:03 +0100
To: xen-devel@lists.xensource.com
Message-ID: <osstest-13625-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 24 Aug 2012 08:44:03 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13625: tolerable FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13625 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13625/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13624
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13624
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13624
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13624

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  4ca40e0559c3
baseline version:
 xen                  4ca40e0559c3

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 24 08:27:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 08:27:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4pDw-0000Ay-VS; Fri, 24 Aug 2012 08:26:36 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1T4pDv-0000At-6q
	for xen-devel@lists.xen.org; Fri, 24 Aug 2012 08:26:35 +0000
Received: from [85.158.143.99:55110] by server-1.bemta-4.messagelabs.com id
	0F/E9-12504-ABA37305; Fri, 24 Aug 2012 08:26:34 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1345796792!27134569!1
X-Originating-IP: [143.182.124.21]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMjEgPT4gMTg2MTc0\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11046 invoked from network); 24 Aug 2012 08:26:33 -0000
Received: from mga03.intel.com (HELO mga03.intel.com) (143.182.124.21)
	by server-3.tower-216.messagelabs.com with SMTP;
	24 Aug 2012 08:26:33 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by azsmga101.ch.intel.com with ESMTP; 24 Aug 2012 01:26:30 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.80,302,1344236400"; 
	d="scan'208,217,223";a="184890534"
Received: from fmsmsx107.amr.corp.intel.com ([10.19.9.54])
	by orsmga001.jf.intel.com with ESMTP; 24 Aug 2012 01:26:20 -0700
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	FMSMSX107.amr.corp.intel.com (10.19.9.54) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Fri, 24 Aug 2012 01:26:19 -0700
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.239]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.175]) with mapi id
	14.01.0355.002; Fri, 24 Aug 2012 16:26:17 +0800
From: "Zhang, Xiantao" <xiantao.zhang@intel.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Thread-Topic: [PATCH 1/2] Nested: Don't set bit 55 in IA32_VMX_BASIC_MSR
Thread-Index: Ac2B0h89TfIiuDhJSry5QPs2udrG5Q==
Date: Fri, 24 Aug 2012 08:26:16 +0000
Message-ID: <B6C2EB9186482D47BD0C5A9A48345644032DF9AD@SHSMSX101.ccr.corp.intel.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: yes
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
Content-Type: multipart/mixed;
	boundary="_004_B6C2EB9186482D47BD0C5A9A48345644032DF9ADSHSMSX101ccrcor_"
MIME-Version: 1.0
Cc: "Keir Fraser \(keir.xen@gmail.com\)" <keir.xen@gmail.com>, "Zhang,
	Xiantao" <xiantao.zhang@intel.com>, "Jan Beulich
	\(JBeulich@suse.com\)" <JBeulich@suse.com>
Subject: [Xen-devel] [PATCH 1/2] Nested: Don't set bit 55 in
	IA32_VMX_BASIC_MSR
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--_004_B6C2EB9186482D47BD0C5A9A48345644032DF9ADSHSMSX101ccrcor_
Content-Type: multipart/alternative;
	boundary="_000_B6C2EB9186482D47BD0C5A9A48345644032DF9ADSHSMSX101ccrcor_"

--_000_B6C2EB9186482D47BD0C5A9A48345644032DF9ADSHSMSX101ccrcor_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable

>From b618c734fcb842de6dc7e06ca683f45f9e0235b9 Mon Sep 17 00:00:00 2001
From: Zhang Xiantao <xiantao.zhang@intel.com>
Date: Sat, 25 Aug 2012 04:02:51 +0800
Subject: [PATCH 1/2] Nested: Don't set bit 55 in IA32_VMX_BASIC_MSR

None of the related IA32_VMX_TRUE_*_MSRs are implemented,
so set this bit to 0; otherwise the L1 VMM may
get incorrect default1-class settings.

Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
---
xen/arch/x86/hvm/vmx/vvmx.c |    2 +-
1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index fc733a9..8e005cd 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1290,7 +1290,7 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
     switch (msr) {
     case MSR_IA32_VMX_BASIC:
         data = VVMCS_REVISION | ((u64)PAGE_SIZE) << 32 |
-               ((u64)MTRR_TYPE_WRBACK) << 50 | (1ULL << 55);
+               ((u64)MTRR_TYPE_WRBACK) << 50;
         break;
     case MSR_IA32_VMX_PINBASED_CTLS:
         /* 1-seetings */
--
1.7.0.4




--_000_B6C2EB9186482D47BD0C5A9A48345644032DF9ADSHSMSX101ccrcor_--

--_004_B6C2EB9186482D47BD0C5A9A48345644032DF9ADSHSMSX101ccrcor_
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--_004_B6C2EB9186482D47BD0C5A9A48345644032DF9ADSHSMSX101ccrcor_--



From xen-devel-bounces@lists.xen.org Fri Aug 24 08:30:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 08:30:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4pHT-0000Jb-Qk; Fri, 24 Aug 2012 08:30:15 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1T4pHS-0000JV-Gu
	for xen-devel@lists.xensource.com; Fri, 24 Aug 2012 08:30:14 +0000
Received: from [85.158.143.99:19676] by server-2.bemta-4.messagelabs.com id
	C0/B6-21239-59B37305; Fri, 24 Aug 2012 08:30:13 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-8.tower-216.messagelabs.com!1345797010!17540118!1
X-Originating-IP: [143.182.124.37]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMzcgPT4gMTg2MTgw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22029 invoked from network); 24 Aug 2012 08:30:11 -0000
Received: from mga14.intel.com (HELO mga14.intel.com) (143.182.124.37)
	by server-8.tower-216.messagelabs.com with SMTP;
	24 Aug 2012 08:30:11 -0000
Received: from azsmga001.ch.intel.com ([10.2.17.19])
	by azsmga102.ch.intel.com with ESMTP; 24 Aug 2012 01:30:10 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.80,302,1344236400"; 
	d="scan'208,217,223";a="184370966"
Received: from fmsmsx106.amr.corp.intel.com ([10.19.9.37])
	by azsmga001.ch.intel.com with ESMTP; 24 Aug 2012 01:30:07 -0700
Received: from fmsmsx102.amr.corp.intel.com (10.19.9.53) by
	FMSMSX106.amr.corp.intel.com (10.19.9.37) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Fri, 24 Aug 2012 01:30:07 -0700
Received: from shsmsx102.ccr.corp.intel.com (10.239.4.154) by
	FMSMSX102.amr.corp.intel.com (10.19.9.53) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Fri, 24 Aug 2012 01:30:07 -0700
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.239]) by
	SHSMSX102.ccr.corp.intel.com ([169.254.2.92]) with mapi id
	14.01.0355.002; Fri, 24 Aug 2012 16:30:05 +0800
From: "Zhang, Xiantao" <xiantao.zhang@intel.com>
To: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Thread-Topic: [PATCH 2/2] Nested: VM_ENTRY_IA32E_MODE shouldn't be in
	default1 class
Thread-Index: Ac2B0qZVt/oC3JnORxuEOE75Y1Kapg==
Date: Fri, 24 Aug 2012 08:30:04 +0000
Message-ID: <B6C2EB9186482D47BD0C5A9A48345644032DF9E6@SHSMSX101.ccr.corp.intel.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: yes
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
Content-Type: multipart/mixed;
	boundary="_004_B6C2EB9186482D47BD0C5A9A48345644032DF9E6SHSMSX101ccrcor_"
MIME-Version: 1.0
Cc: "Keir Fraser \(keir.xen@gmail.com\)" <keir.xen@gmail.com>, "Zhang,
	Xiantao" <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH 2/2] Nested: VM_ENTRY_IA32E_MODE shouldn't be in
 default1 class
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--_004_B6C2EB9186482D47BD0C5A9A48345644032DF9E6SHSMSX101ccrcor_
Content-Type: multipart/alternative;
	boundary="_000_B6C2EB9186482D47BD0C5A9A48345644032DF9E6SHSMSX101ccrcor_"

--_000_B6C2EB9186482D47BD0C5A9A48345644032DF9E6SHSMSX101ccrcor_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable

>From eb20603913ff7350cd25b39d1eb37b8fddd16053 Mon Sep 17 00:00:00 2001
From: Zhang Xiantao <xiantao.zhang@intel.com>
Date: Sat, 25 Aug 2012 04:11:08 +0800
Subject: [PATCH 2/2] Nested: VM_ENTRY_IA32E_MODE shouldn't be in default1 class
for IA32_VM_ENTRY_CTLS_MSR.

If this bit is set to 1, the L2 guest's paging mode may be
mis-judged and mis-set.

Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
---
xen/arch/x86/hvm/vmx/vvmx.c |    1 -
1 files changed, 0 insertions(+), 1 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 8e005cd..55781e9 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1334,7 +1334,6 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
     case MSR_IA32_VMX_ENTRY_CTLS:
         /* bit 0-8, and 12 must be 1 (refer G5 of SDM) */
         data = 0x11ff;
-        data |= VM_ENTRY_IA32E_MODE;
         data = (data << 32) | data;
         break;
--
1.7.0.4





--_004_B6C2EB9186482D47BD0C5A9A48345644032DF9E6SHSMSX101ccrcor_
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--_004_B6C2EB9186482D47BD0C5A9A48345644032DF9E6SHSMSX101ccrcor_--


From xen-devel-bounces@lists.xen.org Fri Aug 24 08:30:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 08:30:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4pHT-0000Jb-Qk; Fri, 24 Aug 2012 08:30:15 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1T4pHS-0000JV-Gu
	for xen-devel@lists.xensource.com; Fri, 24 Aug 2012 08:30:14 +0000
Received: from [85.158.143.99:19676] by server-2.bemta-4.messagelabs.com id
	C0/B6-21239-59B37305; Fri, 24 Aug 2012 08:30:13 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-8.tower-216.messagelabs.com!1345797010!17540118!1
X-Originating-IP: [143.182.124.37]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMzcgPT4gMTg2MTgw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22029 invoked from network); 24 Aug 2012 08:30:11 -0000
Received: from mga14.intel.com (HELO mga14.intel.com) (143.182.124.37)
	by server-8.tower-216.messagelabs.com with SMTP;
	24 Aug 2012 08:30:11 -0000
Received: from azsmga001.ch.intel.com ([10.2.17.19])
	by azsmga102.ch.intel.com with ESMTP; 24 Aug 2012 01:30:10 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.80,302,1344236400"; 
	d="scan'208,217,223";a="184370966"
Received: from fmsmsx106.amr.corp.intel.com ([10.19.9.37])
	by azsmga001.ch.intel.com with ESMTP; 24 Aug 2012 01:30:07 -0700
Received: from fmsmsx102.amr.corp.intel.com (10.19.9.53) by
	FMSMSX106.amr.corp.intel.com (10.19.9.37) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Fri, 24 Aug 2012 01:30:07 -0700
Received: from shsmsx102.ccr.corp.intel.com (10.239.4.154) by
	FMSMSX102.amr.corp.intel.com (10.19.9.53) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Fri, 24 Aug 2012 01:30:07 -0700
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.239]) by
	SHSMSX102.ccr.corp.intel.com ([169.254.2.92]) with mapi id
	14.01.0355.002; Fri, 24 Aug 2012 16:30:05 +0800
From: "Zhang, Xiantao" <xiantao.zhang@intel.com>
To: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Thread-Topic: [PATCH 2/2] Nested: VM_ENTRY_IA32E_MODE shouldn't be in
	default1 class
Thread-Index: Ac2B0qZVt/oC3JnORxuEOE75Y1Kapg==
Date: Fri, 24 Aug 2012 08:30:04 +0000
Message-ID: <B6C2EB9186482D47BD0C5A9A48345644032DF9E6@SHSMSX101.ccr.corp.intel.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: yes
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
Content-Type: multipart/mixed;
	boundary="_004_B6C2EB9186482D47BD0C5A9A48345644032DF9E6SHSMSX101ccrcor_"
MIME-Version: 1.0
Cc: "Keir Fraser \(keir.xen@gmail.com\)" <keir.xen@gmail.com>, "Zhang,
	Xiantao" <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH 2/2] Nested: VM_ENTRY_IA32E_MODE shouldn't be in
 default1 class
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



From xen-devel-bounces@lists.xen.org Fri Aug 24 08:46:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 08:46:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4pWs-0000Zf-Gw; Fri, 24 Aug 2012 08:46:10 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1T4pWq-0000ZX-QK
	for xen-devel@lists.xen.org; Fri, 24 Aug 2012 08:46:09 +0000
Received: from [85.158.139.83:51180] by server-4.bemta-5.messagelabs.com id
	AF/8C-12386-F4F37305; Fri, 24 Aug 2012 08:46:07 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-9.tower-182.messagelabs.com!1345797963!26959741!1
X-Originating-IP: [74.125.83.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30855 invoked from network); 24 Aug 2012 08:46:03 -0000
Received: from mail-ee0-f45.google.com (HELO mail-ee0-f45.google.com)
	(74.125.83.45)
	by server-9.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Aug 2012 08:46:03 -0000
Received: by eeke53 with SMTP id e53so627598eek.32
	for <xen-devel@lists.xen.org>; Fri, 24 Aug 2012 01:46:03 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:user-agent:date:subject:from:to:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=Wz3284hbpypeSRR6S4ECxkS9wmYvxZbmfOXzIVoG6T8=;
	b=OwuvDz/H++iTlEToY6byCl9YZlG5i9b8eGu88P2MELVg05DsrWJvbi5nbOfP/bBwb+
	P91TKymW0z1JagUw/x38UaXQ8lV7f/LN3tM5Czm+S/5Jp8ZaWs15W7RMQc8S/R9LH5eb
	hlu3nz00DE9+pEduISigNreEgmd2ljrn91d+7Yuxhpi8cEF/d7kUD3JBsvZFcU460bSD
	Rzog+5JG2utjaxcwsNzPNU2x0rRusJi9v5VODCL+BM4aDFq34d3v15D5PcfyD10BtK1F
	+MsDlu2ozLekZFessyKX+HBWkNJGZxJVyuBElnew/K6/olulZ2ZXsZDodei1xHuZxRhy
	ddFA==
Received: by 10.14.204.200 with SMTP id h48mr6466579eeo.7.1345797963232;
	Fri, 24 Aug 2012 01:46:03 -0700 (PDT)
Received: from [192.168.1.3] (host86-184-74-117.range86-184.btcentralplus.com.
	[86.184.74.117])
	by mx.google.com with ESMTPS id h2sm27948683eeo.3.2012.08.24.01.46.01
	(version=SSLv3 cipher=OTHER); Fri, 24 Aug 2012 01:46:02 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.33.0.120411
Date: Fri, 24 Aug 2012 09:45:56 +0100
From: Keir Fraser <keir@xen.org>
To: Dongxiao Xu <dongxiao.xu@intel.com>,
	<xen-devel@lists.xen.org>
Message-ID: <CC5CFDD4.4A084%keir@xen.org>
Thread-Topic: [Xen-devel] [PATCH] nvmx: fix resource relinquish for nested VMX
Thread-Index: Ac2B1OBPGx/S6qx2ik6MKRf7VS1RJQ==
In-Reply-To: <1345691481-6862-1-git-send-email-dongxiao.xu@intel.com>
Mime-version: 1.0
Subject: Re: [Xen-devel] [PATCH] nvmx: fix resource relinquish for nested VMX
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 23/08/2012 04:11, "Dongxiao Xu" <dongxiao.xu@intel.com> wrote:

> The previous order of relinquishing resources is:
> relinquish_domain_resources() -> vcpu_destroy() -> nvmx_vcpu_destroy().
> However, some L1 resources such as nv_vvmcx and io_bitmaps are freed in
> nvmx_vcpu_destroy(), so relinquish_domain_resources()
> will not reduce the refcnt of the domain to 0, and the later
> vcpu release functions will not be called.
> 
> To fix this issue, we need to release the nv_vvmcx and io_bitmaps in
> relinquish_domain_resources().
> 
> Besides, after destroying the nested vcpu, we need to switch vmx->vmcs
> back to the L1 VMCS and let the vcpu_destroy() logic free the L1 VMCS page.
> 
> Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>

Couple of comments below.

> diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
> index 2e0b79d..1f610eb 100644
> --- a/xen/arch/x86/hvm/vmx/vvmx.c
> +++ b/xen/arch/x86/hvm/vmx/vvmx.c
> @@ -57,6 +57,9 @@ void nvmx_vcpu_destroy(struct vcpu *v)
>  {
>      struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
>  
> +    if ( nvcpu->nv_n1vmcx )
> +        v->arch.hvm_vmx.vmcs = nvcpu->nv_n1vmcx;

Okay, this undoes the fork in nvmx_handle_vmxon()? A small code comment to
explain that would be handy.

>      nvmx_purge_vvmcs(v);

This call of nvmx_purge_vvmcs() is no longer needed, and should be removed?

 -- Keir

>      if ( nvcpu->nv_n2vmcx ) {
>          __vmpclear(virt_to_maddr(nvcpu->nv_n2vmcx));
> @@ -65,6 +68,14 @@ void nvmx_vcpu_destroy(struct vcpu *v)
>      }
>  }
>   
> +void nvmx_domain_relinquish_resources(struct domain *d)
> +{
> +    struct vcpu *v;
> +
> +    for_each_vcpu ( d, v )
> +        nvmx_purge_vvmcs(v);
> +}
> +
>  int nvmx_vcpu_reset(struct vcpu *v)
>  {
>      return 0;
> diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
> index 7243c4e..3592a8c 100644
> --- a/xen/include/asm-x86/hvm/hvm.h
> +++ b/xen/include/asm-x86/hvm/hvm.h
> @@ -179,6 +179,7 @@ struct hvm_function_table {
>      bool_t (*nhvm_vmcx_hap_enabled)(struct vcpu *v);
>  
>      enum hvm_intblk (*nhvm_intr_blocked)(struct vcpu *v);
> +    void (*nhvm_domain_relinquish_resources)(struct domain *d);
>  };
>  
>  extern struct hvm_function_table hvm_funcs;
> diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h
> b/xen/include/asm-x86/hvm/vmx/vvmx.h
> index 995f9f4..bbc34e7 100644
> --- a/xen/include/asm-x86/hvm/vmx/vvmx.h
> +++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
> @@ -96,6 +96,7 @@ uint32_t nvmx_vcpu_asid(struct vcpu *v);
>  enum hvm_intblk nvmx_intr_blocked(struct vcpu *v);
>  int nvmx_intercepts_exception(struct vcpu *v,
>                                unsigned int trap, int error_code);
> +void nvmx_domain_relinquish_resources(struct domain *d);
>  
>  int nvmx_handle_vmxon(struct cpu_user_regs *regs);
>  int nvmx_handle_vmxoff(struct cpu_user_regs *regs);




From xen-devel-bounces@lists.xen.org Fri Aug 24 08:46:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 08:46:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4pWs-0000Zf-Gw; Fri, 24 Aug 2012 08:46:10 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1T4pWq-0000ZX-QK
	for xen-devel@lists.xen.org; Fri, 24 Aug 2012 08:46:09 +0000
Received: from [85.158.139.83:51180] by server-4.bemta-5.messagelabs.com id
	AF/8C-12386-F4F37305; Fri, 24 Aug 2012 08:46:07 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-9.tower-182.messagelabs.com!1345797963!26959741!1
X-Originating-IP: [74.125.83.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30855 invoked from network); 24 Aug 2012 08:46:03 -0000
Received: from mail-ee0-f45.google.com (HELO mail-ee0-f45.google.com)
	(74.125.83.45)
	by server-9.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Aug 2012 08:46:03 -0000
Received: by eeke53 with SMTP id e53so627598eek.32
	for <xen-devel@lists.xen.org>; Fri, 24 Aug 2012 01:46:03 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:user-agent:date:subject:from:to:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=Wz3284hbpypeSRR6S4ECxkS9wmYvxZbmfOXzIVoG6T8=;
	b=OwuvDz/H++iTlEToY6byCl9YZlG5i9b8eGu88P2MELVg05DsrWJvbi5nbOfP/bBwb+
	P91TKymW0z1JagUw/x38UaXQ8lV7f/LN3tM5Czm+S/5Jp8ZaWs15W7RMQc8S/R9LH5eb
	hlu3nz00DE9+pEduISigNreEgmd2ljrn91d+7Yuxhpi8cEF/d7kUD3JBsvZFcU460bSD
	Rzog+5JG2utjaxcwsNzPNU2x0rRusJi9v5VODCL+BM4aDFq34d3v15D5PcfyD10BtK1F
	+MsDlu2ozLekZFessyKX+HBWkNJGZxJVyuBElnew/K6/olulZ2ZXsZDodei1xHuZxRhy
	ddFA==
Received: by 10.14.204.200 with SMTP id h48mr6466579eeo.7.1345797963232;
	Fri, 24 Aug 2012 01:46:03 -0700 (PDT)
Received: from [192.168.1.3] (host86-184-74-117.range86-184.btcentralplus.com.
	[86.184.74.117])
	by mx.google.com with ESMTPS id h2sm27948683eeo.3.2012.08.24.01.46.01
	(version=SSLv3 cipher=OTHER); Fri, 24 Aug 2012 01:46:02 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.33.0.120411
Date: Fri, 24 Aug 2012 09:45:56 +0100
From: Keir Fraser <keir@xen.org>
To: Dongxiao Xu <dongxiao.xu@intel.com>,
	<xen-devel@lists.xen.org>
Message-ID: <CC5CFDD4.4A084%keir@xen.org>
Thread-Topic: [Xen-devel] [PATCH] nvmx: fix resource relinquish for nested VMX
Thread-Index: Ac2B1OBPGx/S6qx2ik6MKRf7VS1RJQ==
In-Reply-To: <1345691481-6862-1-git-send-email-dongxiao.xu@intel.com>
Mime-version: 1.0
Subject: Re: [Xen-devel] [PATCH] nvmx: fix resource relinquish for nested VMX
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 23/08/2012 04:11, "Dongxiao Xu" <dongxiao.xu@intel.com> wrote:

> The previous order of relinquishing resources is:
> relinquish_domain_resources() -> vcpu_destroy() -> nvmx_vcpu_destroy().
> However, some L1 resources like nv_vvmcx and io_bitmaps are freed in
> nvmx_vcpu_destroy(), so relinquish_domain_resources() will not
> reduce the refcnt of the domain to 0, and therefore the later
> vcpu release functions will not be called.
> 
> To fix this issue, we need to release the nv_vvmcx and io_bitmaps in
> relinquish_domain_resources().
> 
> Besides, after destroying the nested vcpu, we need to switch vmx->vmcs
> back to L1 and let the vcpu_destroy() logic free the L1 VMCS page.
> 
> Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>

Couple of comments below.

> diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
> index 2e0b79d..1f610eb 100644
> --- a/xen/arch/x86/hvm/vmx/vvmx.c
> +++ b/xen/arch/x86/hvm/vmx/vvmx.c
> @@ -57,6 +57,9 @@ void nvmx_vcpu_destroy(struct vcpu *v)
>  {
>      struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
>  
> +    if ( nvcpu->nv_n1vmcx )
> +        v->arch.hvm_vmx.vmcs = nvcpu->nv_n1vmcx;

Okay, this undoes the fork in nvmx_handle_vmxon()? A small code comment to
explain that would be handy.

>      nvmx_purge_vvmcs(v);

This call of nvmx_purge_vvmcs() is no longer needed, and should be removed?

 -- Keir

>      if ( nvcpu->nv_n2vmcx ) {
>          __vmpclear(virt_to_maddr(nvcpu->nv_n2vmcx));
> @@ -65,6 +68,14 @@ void nvmx_vcpu_destroy(struct vcpu *v)
>      }
>  }
>   
> +void nvmx_domain_relinquish_resources(struct domain *d)
> +{
> +    struct vcpu *v;
> +
> +    for_each_vcpu ( d, v )
> +        nvmx_purge_vvmcs(v);
> +}
> +
>  int nvmx_vcpu_reset(struct vcpu *v)
>  {
>      return 0;
> diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
> index 7243c4e..3592a8c 100644
> --- a/xen/include/asm-x86/hvm/hvm.h
> +++ b/xen/include/asm-x86/hvm/hvm.h
> @@ -179,6 +179,7 @@ struct hvm_function_table {
>      bool_t (*nhvm_vmcx_hap_enabled)(struct vcpu *v);
>  
>      enum hvm_intblk (*nhvm_intr_blocked)(struct vcpu *v);
> +    void (*nhvm_domain_relinquish_resources)(struct domain *d);
>  };
>  
>  extern struct hvm_function_table hvm_funcs;
> diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h
> b/xen/include/asm-x86/hvm/vmx/vvmx.h
> index 995f9f4..bbc34e7 100644
> --- a/xen/include/asm-x86/hvm/vmx/vvmx.h
> +++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
> @@ -96,6 +96,7 @@ uint32_t nvmx_vcpu_asid(struct vcpu *v);
>  enum hvm_intblk nvmx_intr_blocked(struct vcpu *v);
>  int nvmx_intercepts_exception(struct vcpu *v,
>                                unsigned int trap, int error_code);
> +void nvmx_domain_relinquish_resources(struct domain *d);
>  
>  int nvmx_handle_vmxon(struct cpu_user_regs *regs);
>  int nvmx_handle_vmxoff(struct cpu_user_regs *regs);



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 24 09:09:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 09:09:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4ptR-0000qn-WB; Fri, 24 Aug 2012 09:09:29 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T4ptR-0000qi-BS
	for xen-devel@lists.xen.org; Fri, 24 Aug 2012 09:09:29 +0000
Received: from [85.158.143.35:55925] by server-2.bemta-4.messagelabs.com id
	5A/AF-21239-8C447305; Fri, 24 Aug 2012 09:09:28 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1345799302!13513583!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA2NTk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27098 invoked from network); 24 Aug 2012 09:08:22 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Aug 2012 09:08:22 -0000
X-IronPort-AV: E=Sophos;i="4.80,302,1344211200"; d="scan'208";a="14163365"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	24 Aug 2012 09:08:21 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 24 Aug 2012 10:08:20 +0100
Message-ID: <1345799299.12501.174.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Fri, 24 Aug 2012 10:08:19 +0100
In-Reply-To: <20534.29501.983170.810193@mariner.uk.xensource.com>
References: <CAFLBxZaEci0mOcDCgFX9zk=wh3z4Nf1LD5E5Fcy7Y3=ioDAM=g@mail.gmail.com>
	<1345048547.5926.245.camel@zakaz.uk.xensource.com>
	<20534.29501.983170.810193@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: George Dunlap <dunlapg@umich.edu>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [TESTDAY] xl cpupool-create segfaults if given
 invalid configuration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-23 at 19:15 +0100, Ian Jackson wrote:
> Ian Campbell writes ("Re: [Xen-devel] [TESTDAY] xl cpupool-create segfaults if given invalid configuration"):
> > This stuff is more of an Ian J thing but I wonder if when we hit the
> > syntax error that $$ still refers to the last value parsed, which we
> > think we are finished with but actually aren't? i.e. we've already
> > stashed it in the cfg and the reference in $$ is now "stale".
> 
> I will look at this tomorrow, but:
> 
> > (aside; I had to find and install the Lenny version of bison to make the
> > autogen diff readable. We should bump this to at least Squeeze in 4.3. I
> > also trimmed the diff to the generated files to just the relevant
> > looking bit -- in particular I trimmed a lot of stuff which appeared to
> > be semi-automated modifications touching the generated files, e.g. the
> > addition of emacs blocks and removal of page breaks ^L)
> 
> Perhaps we should do this now.  I don't think there's any reason to
> fear the squeeze version of bison.

I'd be happy with that.

When regenerating with Lenny's bison, I got the following spurious
changes, no doubt due to some automated tree-wide cleanup, which it
would be nice to dispose of too.

diff -r af7143d97fa2 tools/libxl/libxlu_cfg_y.c
--- a/tools/libxl/libxlu_cfg_y.c	Tue Aug 14 15:59:38 2012 +0100
+++ b/tools/libxl/libxlu_cfg_y.c	Fri Aug 24 10:07:28 2012 +0100
@@ -819,7 +819,7 @@ int yydebug;
 # define YYMAXDEPTH 10000
 #endif
 
-
+
 
 #if YYERROR_VERBOSE
 
@@ -1030,7 +1030,7 @@ yysyntax_error (char *yyresult, int yyst
     }
 }
 #endif /* YYERROR_VERBOSE */
-
+
 
 /*-----------------------------------------------.
 | Release the memory associated to this symbol.  |
@@ -1101,7 +1101,7 @@ yydestruct (yymsg, yytype, yyvaluep, yyl
 	break;
     }
 }
-
+
 
 /* Prevent warnings from -Wmissing-prototypes.  */
 
@@ -1689,11 +1689,3 @@ yyreturn:
 
 
 
-
-/*
- * Local variables:
- * mode: C
- * c-basic-offset: 4
- * indent-tabs-mode: nil
- * End:
- */
diff -r af7143d97fa2 tools/libxl/libxlu_cfg_y.h
--- a/tools/libxl/libxlu_cfg_y.h	Tue Aug 14 15:59:38 2012 +0100
+++ b/tools/libxl/libxlu_cfg_y.h	Fri Aug 24 10:07:28 2012 +0100
@@ -85,11 +85,3 @@ typedef struct YYLTYPE
 #endif
 
 
-
-/*
- * Local variables:
- * mode: C
- * c-basic-offset: 4
- * indent-tabs-mode: nil
- * End:
- */



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 24 09:15:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 09:15:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4pzF-0000ys-Q9; Fri, 24 Aug 2012 09:15:29 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T4pzE-0000yl-OT
	for xen-devel@lists.xen.org; Fri, 24 Aug 2012 09:15:28 +0000
Received: from [85.158.139.83:36774] by server-9.bemta-5.messagelabs.com id
	E8/C2-26123-F2647305; Fri, 24 Aug 2012 09:15:27 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-182.messagelabs.com!1345799724!26966367!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA2NTk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11772 invoked from network); 24 Aug 2012 09:15:24 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-9.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Aug 2012 09:15:24 -0000
X-IronPort-AV: E=Sophos;i="4.80,302,1344211200"; d="scan'208";a="14163647"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	24 Aug 2012 09:15:24 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 24 Aug 2012 10:15:24 +0100
Message-ID: <1345799722.12501.176.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Olaf Hering <olaf@aepfle.de>
Date: Fri, 24 Aug 2012 10:15:22 +0100
In-Reply-To: <52f3d52bacdecb2c8d7f.1345746247@probook.site>
References: <patchbomb.1345746246@probook.site>
	<52f3d52bacdecb2c8d7f.1345746247@probook.site>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 1 of 3] xend/pvscsi: fix passing of SCSI
 control LUNs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-23 at 19:24 +0100, Olaf Hering wrote:
> # HG changeset patch
> # User Olaf Hering <olaf@aepfle.de>
> # Date 1345743306 -7200
> # Node ID 52f3d52bacdecb2c8d7f8aa26e2600febc03b6dd
> # Parent  e6ca45ca03c2e08af3a74b404166527b68fd1218
> xend/pvscsi: fix passing of SCSI control LUNs
> 
> Currently pvscsi cannot pass through SCSI devices that have only a
> scsi_generic node. In the following example sg3 is a control LUN for the
> disk sdd. But vscsi=['4:0:2:0,0:0:0:0'] does not work because the internal
> 'devname' variable remains None. Later, writing p-devname to xenstore fails
> because None is not a valid string value.

Just out of interest, would you not need to pass through 4:0:2:1 too?

> Since devname is used for informational purposes only, also use sg as devname.
> 
> carron:~ $ lsscsi -g
> [0:0:0:0]    disk    ATA      FK0032CAAZP      HPF2  /dev/sda   /dev/sg0
> [4:0:0:0]    disk    HP       P2000G3 FC/iSCSI T100  /dev/sdb   /dev/sg1
> [4:0:1:0]    disk    HP       P2000G3 FC/iSCSI T100  /dev/sdc   /dev/sg2
> [4:0:2:0]    storage HP       HSV400           0950  -         /dev/sg3
> [4:0:2:1]    disk    HP       HSV400           0950  /dev/sdd   /dev/sg4
> [4:0:3:0]    storage HP       HSV400           0950  -         /dev/sg5
> [4:0:3:1]    disk    HP       HSV400           0950  /dev/sde   /dev/sg6
> 
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

> 
> diff -r e6ca45ca03c2 -r 52f3d52bacde tools/python/xen/util/vscsi_util.py
> --- a/tools/python/xen/util/vscsi_util.py
> +++ b/tools/python/xen/util/vscsi_util.py
> @@ -105,6 +105,8 @@ def _vscsi_get_scsidevices_by_lsscsi(opt
>              devname = None
>          try:
>              sg = s[-1].split('/dev/')[1]
> +            if devname is None:
> +                devname = sg
>              scsi_id = _vscsi_get_scsiid(sg)
>          except IndexError:
>              sg = None
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
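The fallback in the hunk above can be sketched as a small standalone parser (hypothetical function name and simplified field handling; the real code is `_vscsi_get_scsidevices_by_lsscsi()` in `tools/python/xen/util/vscsi_util.py`): when a LUN exposes only an sg node, the sg name becomes devname so the value written to xenstore is never None.

```python
# Hedged sketch (not the actual xend parser) of the devname fallback.
# Input lines mimic `lsscsi -g` output: the last two whitespace-separated
# fields are the block device (or '-') and the scsi_generic node.

def parse_lsscsi_line(line):
    s = line.split()
    try:
        # '-' contains no '/dev/', so split() yields one element and
        # indexing [1] raises IndexError, leaving devname as None.
        devname = s[-2].split('/dev/')[1]
    except IndexError:
        devname = None
    try:
        sg = s[-1].split('/dev/')[1]
        if devname is None:
            devname = sg  # the fix: fall back to the sg node name
    except IndexError:
        sg = None
    return devname, sg

# A control LUN exposes only an sg node:
assert parse_lsscsi_line(
    "[4:0:2:0] storage HP HSV400 0950 - /dev/sg3") == ("sg3", "sg3")
# A regular disk keeps its block-device name:
assert parse_lsscsi_line(
    "[4:0:2:1] disk HP HSV400 0950 /dev/sdd /dev/sg4") == ("sdd", "sg4")
```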



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 24 09:21:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 09:21:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4q4Q-00018R-IG; Fri, 24 Aug 2012 09:20:50 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T4q4O-00018L-Su
	for xen-devel@lists.xen.org; Fri, 24 Aug 2012 09:20:49 +0000
Received: from [85.158.138.51:7914] by server-4.bemta-3.messagelabs.com id
	C2/43-04276-07747305; Fri, 24 Aug 2012 09:20:48 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-174.messagelabs.com!1345800046!27895044!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA2NTk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3354 invoked from network); 24 Aug 2012 09:20:46 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-5.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Aug 2012 09:20:46 -0000
X-IronPort-AV: E=Sophos;i="4.80,302,1344211200"; d="scan'208";a="14163874"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	24 Aug 2012 09:20:46 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 24 Aug 2012 10:20:46 +0100
Message-ID: <1345800044.12501.177.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Olaf Hering <olaf@aepfle.de>
Date: Fri, 24 Aug 2012 10:20:44 +0100
In-Reply-To: <2b9992aea26cfebc2dda.1345746248@probook.site>
References: <patchbomb.1345746246@probook.site>
	<2b9992aea26cfebc2dda.1345746248@probook.site>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 2 of 3] xend/pvscsi: fix usage of persistant
 device names for SCSI devices
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-23 at 19:24 +0100, Olaf Hering wrote:
> # HG changeset patch
> # User Olaf Hering <olaf@aepfle.de>
> # Date 1345743313 -7200
> # Node ID 2b9992aea26cfebc2dda56d1a97a35dc3a5c8ce8
> # Parent  52f3d52bacdecb2c8d7f8aa26e2600febc03b6dd
> xend/pvscsi: fix usage of persistant device names for SCSI devices
> 
> Currently the callers of vscsi_get_scsidevices() do not pass a mask
> string.  This will call "lsscsi -g '[]'", which causes a lsscsi syntax
> error. As a result the sysfs parser _vscsi_get_scsidevices() is used.
> But this parser is broken and the specified names in the config file are
> not found.
> 
> Using a mask '*' if no mask was given will call lsscsi correctly and the
> following config is parsed correctly:
> 
> vscsi=[
> 	'/dev/sg3, 0:0:0:0',
> 	'/dev/disk/by-id/wwn-0x600508b4000cf1c30000800000410000, 0:0:0:1'
> ]
> 
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

> 
> diff -r 52f3d52bacde -r 2b9992aea26c tools/python/xen/util/vscsi_util.py
> --- a/tools/python/xen/util/vscsi_util.py
> +++ b/tools/python/xen/util/vscsi_util.py
> @@ -150,7 +150,7 @@ def _vscsi_get_scsidevices_by_sysfs():
>      return devices
>  
> 
> -def vscsi_get_scsidevices(mask=""):
> +def vscsi_get_scsidevices(mask="*"):
>      """ get all scsi devices information """
>  
>      devices = _vscsi_get_scsidevices_by_lsscsi("[%s]" % mask)
> @@ -279,7 +279,7 @@ def get_scsi_device(pHCTL):
>              return _make_scsi_record(scsi_info)
>      return None
>  
> -def get_all_scsi_devices(mask=""):
> +def get_all_scsi_devices(mask="*"):

This one could be = None , but I don't think it matters much.

>      scsi_records = []
>      for scsi_info in vscsi_get_scsidevices(mask):
>          scsi_record = _make_scsi_record(scsi_info)
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
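Why the old empty default broke things can be shown in two lines (hypothetical helper name; the real wrapping happens inside `vscsi_get_scsidevices()`): the mask is interpolated into brackets before being handed to lsscsi, so an empty mask yields the invalid filter `[]`, while `*` yields `[*]`, which matches every H:C:T:L tuple.

```python
# Sketch (not the real xend code) of the mask argument construction
# that vscsi_get_scsidevices() passes to `lsscsi -g`.

def lsscsi_mask_arg(mask="*"):
    # The caller wraps the mask in brackets: lsscsi -g '[<mask>]'
    return "[%s]" % mask

# Old default mask="" produced '[]', a lsscsi syntax error, which made
# xend silently fall back to the broken sysfs parser.
assert lsscsi_mask_arg("") == "[]"
# New default '*' produces a valid match-everything filter.
assert lsscsi_mask_arg() == "[*]"
# An explicit H:C:T:L mask still passes through unchanged.
assert lsscsi_mask_arg("4:0:2:0") == "[4:0:2:0]"
```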



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 24 09:21:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 09:21:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4q4Q-00018R-IG; Fri, 24 Aug 2012 09:20:50 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T4q4O-00018L-Su
	for xen-devel@lists.xen.org; Fri, 24 Aug 2012 09:20:49 +0000
Received: from [85.158.138.51:7914] by server-4.bemta-3.messagelabs.com id
	C2/43-04276-07747305; Fri, 24 Aug 2012 09:20:48 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-174.messagelabs.com!1345800046!27895044!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA2NTk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3354 invoked from network); 24 Aug 2012 09:20:46 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-5.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Aug 2012 09:20:46 -0000
X-IronPort-AV: E=Sophos;i="4.80,302,1344211200"; d="scan'208";a="14163874"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	24 Aug 2012 09:20:46 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 24 Aug 2012 10:20:46 +0100
Message-ID: <1345800044.12501.177.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Olaf Hering <olaf@aepfle.de>
Date: Fri, 24 Aug 2012 10:20:44 +0100
In-Reply-To: <2b9992aea26cfebc2dda.1345746248@probook.site>
References: <patchbomb.1345746246@probook.site>
	<2b9992aea26cfebc2dda.1345746248@probook.site>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 2 of 3] xend/pvscsi: fix usage of persistent
 device names for SCSI devices
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-23 at 19:24 +0100, Olaf Hering wrote:
> # HG changeset patch
> # User Olaf Hering <olaf@aepfle.de>
> # Date 1345743313 -7200
> # Node ID 2b9992aea26cfebc2dda56d1a97a35dc3a5c8ce8
> # Parent  52f3d52bacdecb2c8d7f8aa26e2600febc03b6dd
> xend/pvscsi: fix usage of persistent device names for SCSI devices
> 
> Currently the callers of vscsi_get_scsidevices() do not pass a mask
> string. This results in calling "lsscsi -g '[]'", which causes an lsscsi
> syntax error. As a result the sysfs parser _vscsi_get_scsidevices_by_sysfs()
> is used, but that parser is broken and the device names specified in the
> config file are not found.
> 
> Using a mask of '*' when none was given invokes lsscsi correctly, and the
> following config is then parsed correctly:
> 
> vscsi=[
> 	'/dev/sg3, 0:0:0:0',
> 	'/dev/disk/by-id/wwn-0x600508b4000cf1c30000800000410000, 0:0:0:1'
> ]
> 
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

> 
> diff -r 52f3d52bacde -r 2b9992aea26c tools/python/xen/util/vscsi_util.py
> --- a/tools/python/xen/util/vscsi_util.py
> +++ b/tools/python/xen/util/vscsi_util.py
> @@ -150,7 +150,7 @@ def _vscsi_get_scsidevices_by_sysfs():
>      return devices
>  
> 
> -def vscsi_get_scsidevices(mask=""):
> +def vscsi_get_scsidevices(mask="*"):
>      """ get all scsi devices information """
>  
>      devices = _vscsi_get_scsidevices_by_lsscsi("[%s]" % mask)
> @@ -279,7 +279,7 @@ def get_scsi_device(pHCTL):
>              return _make_scsi_record(scsi_info)
>      return None
>  
> -def get_all_scsi_devices(mask=""):
> +def get_all_scsi_devices(mask="*"):

This one could be = None, but I don't think it matters much.

>      scsi_records = []
>      for scsi_info in vscsi_get_scsidevices(mask):
>          scsi_record = _make_scsi_record(scsi_info)
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
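The default-argument change discussed above can be sketched as follows (a
minimal illustration with a hypothetical helper name, not the real xend code;
only the "[%s]" % mask pattern-building is taken from the patch):

```python
def lsscsi_pattern(mask="*"):
    """Build the H:C:T:L filter argument handed to 'lsscsi -g'.

    With the old default mask="" the pattern was '[]', which lsscsi
    rejects with a syntax error; mask="*" yields '[*]', matching all
    devices.
    """
    return "[%s]" % mask

print(lsscsi_pattern())    # '[*]' -- matches every device
print(lsscsi_pattern(""))  # '[]'  -- the pattern that triggered the error
```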



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 24 09:24:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 09:24:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4q7f-0001Hb-Bq; Fri, 24 Aug 2012 09:24:11 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T4q7d-0001HP-Rq
	for xen-devel@lists.xen.org; Fri, 24 Aug 2012 09:24:10 +0000
Received: from [85.158.143.99:41404] by server-3.bemta-4.messagelabs.com id
	FC/0E-08232-93847305; Fri, 24 Aug 2012 09:24:09 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-216.messagelabs.com!1345800246!21332532!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA2NTk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11796 invoked from network); 24 Aug 2012 09:24:06 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-10.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Aug 2012 09:24:06 -0000
X-IronPort-AV: E=Sophos;i="4.80,302,1344211200"; d="scan'208";a="14163934"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	24 Aug 2012 09:24:06 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 24 Aug 2012 10:24:06 +0100
Message-ID: <1345800245.12501.178.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Olaf Hering <olaf@aepfle.de>
Date: Fri, 24 Aug 2012 10:24:05 +0100
In-Reply-To: <482b9db173f2ceefed06.1345746249@probook.site>
References: <patchbomb.1345746246@probook.site>
	<482b9db173f2ceefed06.1345746249@probook.site>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 3 of 3] xend/pvscsi: update sysfs parser for
 Linux 3.0
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-23 at 19:24 +0100, Olaf Hering wrote:
> # HG changeset patch
> # User Olaf Hering <olaf@aepfle.de>
> # Date 1345743331 -7200
> # Node ID 482b9db173f2ceefed06346bec9e6d8cef9704fe
> # Parent  2b9992aea26cfebc2dda56d1a97a35dc3a5c8ce8
> xend/pvscsi: update sysfs parser for Linux 3.0
> 
> The sysfs parser for /sys/bus/scsi/devices only understands the layout
> used by kernel version 2.6.16, which looks as follows:
> 
> /sys/bus/scsi/devices/1:0:0:0/block:sda is a symlink to /sys/block/sda/
> /sys/bus/scsi/devices/1:0:0:0/scsi_generic:sg1 is a symlink to /sys/class/scsi_generic/sg1
> 
> Both directories contain a 'dev' file with the major:minor information.
> This patch updates the regex strings used so that they also match the
> colon, making the parser more robust against possible future changes.
> 
> 
> In kernel version 3.0 the layout changed:
> /sys/bus/scsi/devices/ now contains additional symlinks to directories
> such as host1 and target1:0:0. This patch ignores these entries, as they
> do not point to the desired SCSI devices and merely clutter the devices
> array.
> 
> The directory layout in '1:0:0:0' changed as well: the 'type:name'
> notation was replaced with 'type/name' directories:
> 
> /sys/bus/scsi/devices/1:0:0:0/block/sda/
> /sys/bus/scsi/devices/1:0:0:0/scsi_generic/sg1/
> 
> Both directories contain a 'dev' file with the major:minor information.
> This patch adds code to walk the subdirectory and look for the 'dev'
> file, to make sure the entry found really is the kernel device name.
> 
> 
> In addition, this patch makes sure devname is not None.

Did you test this with both pre- and post-3.0 kernels?

> 
> Signed-off-by: Olaf Hering <olaf@aepfle.de>
> 
> diff -r 2b9992aea26c -r 482b9db173f2 tools/python/xen/util/vscsi_util.py
> --- a/tools/python/xen/util/vscsi_util.py
> +++ b/tools/python/xen/util/vscsi_util.py
> @@ -130,20 +130,36 @@ def _vscsi_get_scsidevices_by_sysfs():
>  
>      for dirpath, dirnames, files in os.walk(sysfs_mnt + SYSFS_SCSI_PATH):
>          for hctl in dirnames:
> +            if len(hctl.split(':')) != 4:
> +                continue
>              paths = os.path.join(dirpath, hctl)
>              devname = None
>              sg = None
>              scsi_id = None
>              for f in os.listdir(paths):
>                  realpath = os.path.realpath(os.path.join(paths, f))
> -                if  re.match('^block', f) or \
> -                    re.match('^tape', f) or \
> -                    re.match('^scsi_changer', f) or \
> -                    re.match('^onstream_tape', f):
> +                if  re.match('^block:', f) or \
> +                    re.match('^tape:', f) or \
> +                    re.match('^scsi_changer:', f) or \
> +                    re.match('^onstream_tape:', f):
>                      devname = os.path.basename(realpath)
> +                elif f == "block" or \
> +                     f == "tape" or \
> +                     f == "scsi_changer" or \
> +                     f == "onstream_tape":
> +                    for dir in os.listdir(os.path.join(paths, f)):
> +                        if os.path.exists(os.path.join(paths, f, dir, "dev")):
> +                            devname = os.path.basename(dir)
>  
> -                if re.match('^scsi_generic', f):
> +                if re.match('^scsi_generic:', f):
>                      sg = os.path.basename(realpath)
> +                elif f == "scsi_generic":
> +                    for dir in os.listdir(os.path.join(paths, f)):
> +                        if os.path.exists(os.path.join(paths, f, dir, "dev")):
> +                            sg = os.path.basename(dir)
> +                if sg:
> +                    if devname is None:
> +                        devname = sg
>                      scsi_id = _vscsi_get_scsiid(sg)
>              devices.append([hctl, devname, sg, scsi_id])
>  
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
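The two sysfs layouts the patch has to cope with can be sketched like this
(a simplified, hypothetical helper covering only the 'block' entry; the real
code also resolves the symlink via os.path.realpath() and handles the tape,
scsi_changer, onstream_tape and scsi_generic cases):

```python
import os
import re


def find_devname(hctl_dir):
    """Return the kernel block-device name for one H:C:T:L directory,
    handling both the 2.6.16 'block:sda' entry and the 3.0
    'block/sda/dev' directory layout."""
    for f in os.listdir(hctl_dir):
        if re.match('^block:', f):
            # 2.6.16 style: 'block:sda' (a symlink; the name after the
            # colon matches the symlink target's basename)
            return f.split(':', 1)[1]
        if f == "block":
            # 3.0 style: block/<name>/dev identifies the kernel name
            for name in os.listdir(os.path.join(hctl_dir, f)):
                if os.path.exists(os.path.join(hctl_dir, f, name, "dev")):
                    return name
    return None
```

Entries such as host1 or target1:0:0 never reach this helper because the
caller first checks that the directory name splits into four ':'-separated
fields.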



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 24 10:27:24 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 10:27:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4r6R-00023c-5N; Fri, 24 Aug 2012 10:26:59 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@citrix.com>) id 1T4r6Q-00023X-KM
	for xen-devel@lists.xen.org; Fri, 24 Aug 2012 10:26:58 +0000
Received: from [85.158.143.99:4566] by server-1.bemta-4.messagelabs.com id
	1E/5C-12504-1F657305; Fri, 24 Aug 2012 10:26:57 +0000
X-Env-Sender: julien.grall@citrix.com
X-Msg-Ref: server-7.tower-216.messagelabs.com!1345804015!24357206!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzM5MDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3650 invoked from network); 24 Aug 2012 10:26:56 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Aug 2012 10:26:56 -0000
X-IronPort-AV: E=Sophos;i="4.80,302,1344225600"; d="scan'208";a="206127150"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	24 Aug 2012 06:26:55 -0400
Received: from [10.80.248.240] (10.80.248.240) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0; Fri, 24 Aug 2012
	06:26:55 -0400
Message-ID: <50375721.7080503@citrix.com>
Date: Fri, 24 Aug 2012 11:27:45 +0100
From: Julien Grall <julien.grall@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US;
	rv:1.9.1.16) Gecko/20120726 Icedove/3.0.11
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <cover.1345552068.git.julien.grall@citrix.com>	
	<fcf046ea782dda6cacb3bf11813bf1d16e531e6b.1345552068.git.julien.grall@citrix.com>	
	<1345728471.12501.90.camel@zakaz.uk.xensource.com>	
	<503680C5.6070509@citrix.com>
	<1345751525.23624.58.camel@dagon.hellion.org.uk>
In-Reply-To: <1345751525.23624.58.camel@dagon.hellion.org.uk>
Cc: "christian.limpach@gmail.com" <christian.limpach@gmail.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [XEN][RFC PATCH V2 11/17] xc: modify save/restore
 to support multiple device models
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/23/2012 08:52 PM, Ian Campbell wrote:
> On Thu, 2012-08-23 at 20:13 +0100, Julien Grall wrote:
>    
>> On 08/23/2012 02:27 PM, Ian Campbell wrote:
>>      
>>>        
>>>> @@ -103,6 +103,9 @@ static ssize_t rdexact(xc_interface *xch, struct restore_ctx *ctx,
>>>>    #else
>>>>    #define RDEXACT read_exact
>>>>    #endif
>>>> +
>>>> +#define QEMUSIG_SIZE 21
>>>> +
>>>>    /*
>>>>    ** In the state file (or during transfer), all page-table pages are
>>>>    ** converted into a 'canonical' form where references to actual mfns
>>>> @@ -467,7 +522,7 @@ static int buffer_tail_hvm(xc_interface *xch, struct restore_ctx *ctx,
>>>>                               int vcpuextstate, uint32_t vcpuextstate_size)
>>>>    {
>>>>        uint8_t *tmp;
>>>> -    unsigned char qemusig[21];
>>>> +    unsigned char qemusig[QEMUSIG_SIZE + 1];
>>>>
>>>>          
>>> An extra + 1 here?
>>>
>>>        
>> QEMUSIG_SIZE doesn't take the '\0' into account, so we need to add 1.
>> Without the +1, when an error occurred the output log lost the last
>> character.
>>      
> So this is just a bug fix for a pre-existing issue?
>    
Yes.

>>> [...]
>>>
>>>        
>>>> -    qemusig[20] = '\0';
>>>> +    qemusig[QEMUSIG_SIZE] = '\0';
>>>>
>>>>          
>>> This is one bigger than it used to be now.
>>>
>>> Perhaps this is an unrelated bug fix (I haven't checked the real length of
>>> the sig), in which case please can you split it out and submit
>>> separately?
>>>
>>>        
>> #define QEMU_SIGNATURE "DeviceModelRecord0002"
>> Just checked, the length seems to be 21. I will send a patch with
>> this change.
>>      
> Perhaps use either sizeof(QEMU_SIGNATURE) or strlen(QEMU_SIGNATURE)
> (depending on which semantics you want)?
>    
Here, QEMUSIG_SIZE would need to be defined as strlen(QEMU_SIGNATURE),
but QEMU_SIGNATURE is not defined in libxc; it's defined in
libxl/libxl_internal.h.
By the way, I'm wondering why the QEMU save (libxl__domain_save_device_model)
is done in libxl while the restore (dump_qemu) is done in libxc?


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 24 10:33:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 10:33:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4rBr-0002CL-Ur; Fri, 24 Aug 2012 10:32:35 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@citrix.com>) id 1T4rBq-0002CG-N7
	for xen-devel@lists.xen.org; Fri, 24 Aug 2012 10:32:34 +0000
Received: from [85.158.143.35:60387] by server-3.bemta-4.messagelabs.com id
	62/44-08232-24857305; Fri, 24 Aug 2012 10:32:34 +0000
X-Env-Sender: julien.grall@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1345804346!12659610!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzM5MDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24484 invoked from network); 24 Aug 2012 10:32:27 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Aug 2012 10:32:27 -0000
X-IronPort-AV: E=Sophos;i="4.80,302,1344225600"; d="scan'208";a="206127540"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	24 Aug 2012 06:32:25 -0400
Received: from [10.80.248.240] (10.80.248.240) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0; Fri, 24 Aug 2012
	06:32:26 -0400
Message-ID: <5037586C.5000902@citrix.com>
Date: Fri, 24 Aug 2012 11:33:16 +0100
From: Julien Grall <julien.grall@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US;
	rv:1.9.1.16) Gecko/20120726 Icedove/3.0.11
MIME-Version: 1.0
To: Keir Fraser <keir.xen@gmail.com>
References: <CC5BEE01.3CBD1%keir.xen@gmail.com>
In-Reply-To: <CC5BEE01.3CBD1%keir.xen@gmail.com>
Cc: "qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	"christian.limpach@gmail.com" <christian.limpach@gmail.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [XEN][RFC PATCH V2 01/17] hvm: Modify interface to
 support multiple ioreq server
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/23/2012 02:26 PM, Keir Fraser wrote:
> On 23/08/2012 14:18, "Ian Campbell"<Ian.Campbell@citrix.com>  wrote:
>
>    
>>> diff --git a/xen/include/public/hvm/ioreq.h b/xen/include/public/hvm/ioreq.h
>>> index 4022a1d..87aacd3 100644
>>> --- a/xen/include/public/hvm/ioreq.h
>>> +++ b/xen/include/public/hvm/ioreq.h
>>> @@ -34,6 +34,7 @@
>>>
>>>   #define IOREQ_TYPE_PIO          0 /* pio */
>>>   #define IOREQ_TYPE_COPY         1 /* mmio ops */
>>> +#define IOREQ_TYPE_PCI_CONFIG   2 /* pci config space ops */
>>>   #define IOREQ_TYPE_TIMEOFFSET   7
>>>   #define IOREQ_TYPE_INVALIDATE   8 /* mapcache */
>>>        
>> I wonder why we skip 2-6 now -- perhaps they used to be something else
>> and we are avoiding them to avoid strange errors? In which case adding
>> the new one as 9 might be a good idea.
>>      
> They were almost certainly used for representing R-M-W ALU operations back
> in the days of the old IO emulator, very long ago. Still, there's no harm in
> leaving them unused.
>    

Ok. So I will use number 9 for IOREQ_TYPE_PCI_CONFIG.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 24 10:33:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 10:33:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4rBr-0002CL-Ur; Fri, 24 Aug 2012 10:32:35 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@citrix.com>) id 1T4rBq-0002CG-N7
	for xen-devel@lists.xen.org; Fri, 24 Aug 2012 10:32:34 +0000
Received: from [85.158.143.35:60387] by server-3.bemta-4.messagelabs.com id
	62/44-08232-24857305; Fri, 24 Aug 2012 10:32:34 +0000
X-Env-Sender: julien.grall@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1345804346!12659610!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzM5MDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24484 invoked from network); 24 Aug 2012 10:32:27 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Aug 2012 10:32:27 -0000
X-IronPort-AV: E=Sophos;i="4.80,302,1344225600"; d="scan'208";a="206127540"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	24 Aug 2012 06:32:25 -0400
Received: from [10.80.248.240] (10.80.248.240) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0; Fri, 24 Aug 2012
	06:32:26 -0400
Message-ID: <5037586C.5000902@citrix.com>
Date: Fri, 24 Aug 2012 11:33:16 +0100
From: Julien Grall <julien.grall@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US;
	rv:1.9.1.16) Gecko/20120726 Icedove/3.0.11
MIME-Version: 1.0
To: Keir Fraser <keir.xen@gmail.com>
References: <CC5BEE01.3CBD1%keir.xen@gmail.com>
In-Reply-To: <CC5BEE01.3CBD1%keir.xen@gmail.com>
Cc: "qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	"christian.limpach@gmail.com" <christian.limpach@gmail.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [XEN][RFC PATCH V2 01/17] hvm: Modify interface to
 support multiple ioreq server
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/23/2012 02:26 PM, Keir Fraser wrote:
> On 23/08/2012 14:18, "Ian Campbell"<Ian.Campbell@citrix.com>  wrote:
>
>    
>>> diff --git a/xen/include/public/hvm/ioreq.h b/xen/include/public/hvm/ioreq.h
>>> index 4022a1d..87aacd3 100644
>>> --- a/xen/include/public/hvm/ioreq.h
>>> +++ b/xen/include/public/hvm/ioreq.h
>>> @@ -34,6 +34,7 @@
>>>
>>>   #define IOREQ_TYPE_PIO          0 /* pio */
>>>   #define IOREQ_TYPE_COPY         1 /* mmio ops */
>>> +#define IOREQ_TYPE_PCI_CONFIG   2 /* pci config space ops */
>>>   #define IOREQ_TYPE_TIMEOFFSET   7
>>>   #define IOREQ_TYPE_INVALIDATE   8 /* mapcache */
>>>        
>> I wonder why we skip 2-6 now -- perhaps they used to be something else
>> and we are avoiding them to prevent strange errors? In which case adding
>> the new one as 9 might be a good idea.
>>      
> They were almost certainly used for representing R-M-W ALU operations back
> in the days of the old IO emulator, very long ago. Still, there's no harm in
> leaving them unused.
>    

Ok. So I will use number 9 for IOREQ_TYPE_PCI_CONFIG.
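[Editorial note: a minimal sketch of how the IOREQ_TYPE_* constants in xen/include/public/hvm/ioreq.h would read under the plan agreed above. The value 9 for IOREQ_TYPE_PCI_CONFIG and the reason 2-6 stay unused are taken from this thread; the exact final layout is an assumption, not the committed header.]

```c
/* Sketch of the ioreq type constants after the change discussed above. */
#define IOREQ_TYPE_PIO          0 /* pio */
#define IOREQ_TYPE_COPY         1 /* mmio ops */
/* 2-6 unused: once represented R-M-W ALU ops in the old IO emulator */
#define IOREQ_TYPE_TIMEOFFSET   7
#define IOREQ_TYPE_INVALIDATE   8 /* mapcache */
#define IOREQ_TYPE_PCI_CONFIG   9 /* pci config space ops */
```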

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 24 10:36:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 10:36:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4rFf-0002ME-KG; Fri, 24 Aug 2012 10:36:31 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T4rFe-0002M7-Jy
	for xen-devel@lists.xen.org; Fri, 24 Aug 2012 10:36:30 +0000
Received: from [85.158.143.99:51126] by server-1.bemta-4.messagelabs.com id
	29/5C-12504-D2957305; Fri, 24 Aug 2012 10:36:29 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1345804518!27163221!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA2NTk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25622 invoked from network); 24 Aug 2012 10:35:19 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-3.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Aug 2012 10:35:19 -0000
X-IronPort-AV: E=Sophos;i="4.80,302,1344211200"; d="scan'208";a="14165881"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	24 Aug 2012 10:35:18 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 24 Aug 2012 11:35:18 +0100
Message-ID: <1345804516.12501.182.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@citrix.com>
Date: Fri, 24 Aug 2012 11:35:16 +0100
In-Reply-To: <50375721.7080503@citrix.com>
References: <cover.1345552068.git.julien.grall@citrix.com>
	<fcf046ea782dda6cacb3bf11813bf1d16e531e6b.1345552068.git.julien.grall@citrix.com>
	<1345728471.12501.90.camel@zakaz.uk.xensource.com>
	<503680C5.6070509@citrix.com>
	<1345751525.23624.58.camel@dagon.hellion.org.uk>
	<50375721.7080503@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "christian.limpach@gmail.com" <christian.limpach@gmail.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [XEN][RFC PATCH V2 11/17] xc: modify save/restore
 to support multiple device models
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-08-24 at 11:27 +0100, Julien Grall wrote:
> On 08/23/2012 08:52 PM, Ian Campbell wrote:
> > On Thu, 2012-08-23 at 20:13 +0100, Julien Grall wrote:
> >    
> >> On 08/23/2012 02:27 PM, Ian Campbell wrote:
> >>      
> >>>        
> >>>> @@ -103,6 +103,9 @@ static ssize_t rdexact(xc_interface *xch, struct restore_ctx *ctx,
> >>>>    #else
> >>>>    #define RDEXACT read_exact
> >>>>    #endif
> >>>> +
> >>>> +#define QEMUSIG_SIZE 21
> >>>> +
> >>>>    /*
> >>>>    ** In the state file (or during transfer), all page-table pages are
> >>>>    ** converted into a 'canonical' form where references to actual mfns
> >>>> @@ -467,7 +522,7 @@ static int buffer_tail_hvm(xc_interface *xch, struct restore_ctx *ctx,
> >>>>                               int vcpuextstate, uint32_t vcpuextstate_size)
> >>>>    {
> >>>>        uint8_t *tmp;
> >>>> -    unsigned char qemusig[21];
> >>>> +    unsigned char qemusig[QEMUSIG_SIZE + 1];
> >>>>
> >>>>          
> >>> An extra + 1 here?
> >>>
> >>>        
> >> QEMUSIG_SIZE doesn't take the '\0' into account, so we need to add 1.
> >> Without the +1, if an error occurred, the output log would lose the
> >> last character.
> >>      
> > So this is just a bug fix for a pre-existing issue?
> >    
> Yes.

Can we get it as a separate change?

> 
> >>> [...]
> >>>
> >>>        
> >>>> -    qemusig[20] = '\0';
> >>>> +    qemusig[QEMUSIG_SIZE] = '\0';
> >>>>
> >>>>          
> >>> This is one bigger than it used to be now.
> >>>
> >>> Perhaps this is an unrelated bug fix (I haven't checked the real length of
> >>> the sig), in which case please can you split it out and submit
> >>> separately?
> >>>
> >>>        
> >> #define QEMU_SIGNATURE "DeviceModelRecord0002"
> >> Just checked, the length seems to be 21. I will send a patch with
> >> this change.
> >>      
> > Perhaps use either sizeof(QEMU_SIGNATURE) or strlen(QEMU_SIGNATURE)
> > (depending on which semantics you want)?
> >    
> Here, QEMUSIG_SIZE needs to be defined as strlen(QEMU_SIGNATURE),
> but QEMU_SIGNATURE is not defined in libxc. It's defined
> in libxl/libxl_internal.h.

Oh, right, this again :-/

> By the way, I'm wondering why QEMU save (libxl__domain_save_device_model)
> is made in libxl and restore (dump_qemu) in libxc ?

Mostly historical accident, we'd really like to sort this out one way or
the other but untangling the protocol and the callbacks etc is a pretty
big job.

In the meantime perhaps libxc could provide a suitable "typedef char
device_model_signature_t[21]"?

Ian.
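[Editorial note: a standalone sketch of the sizeof/strlen distinction discussed above. QEMU_SIGNATURE is the value quoted from libxl_internal.h in this thread; the helper below merely illustrates the buffer handling proposed for buffer_tail_hvm and is not the actual libxc code.]

```c
#include <string.h>

/* Quoted from tools/libxl/libxl_internal.h in the thread above. */
#define QEMU_SIGNATURE "DeviceModelRecord0002"

/* sizeof a string literal counts the trailing '\0' (22 here);
 * subtracting 1 gives the strlen semantics wanted for QEMUSIG_SIZE (21). */
#define QEMUSIG_SIZE (sizeof(QEMU_SIGNATURE) - 1)

/* Illustrative helper: copy the signature into a caller-supplied buffer
 * sized QEMUSIG_SIZE + 1, NUL-terminate it so it can be logged safely,
 * and return its length. */
static size_t read_sig(unsigned char qemusig[QEMUSIG_SIZE + 1])
{
    memcpy(qemusig, QEMU_SIGNATURE, QEMUSIG_SIZE);
    qemusig[QEMUSIG_SIZE] = '\0';
    return strlen((const char *)qemusig);
}
```

This shows why the +1 in `unsigned char qemusig[QEMUSIG_SIZE + 1]` is needed: without it, writing the terminator at index QEMUSIG_SIZE would be out of bounds, and an unterminated buffer would truncate (or overrun) in the error log.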


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 24 10:47:12 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 10:47:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4rPe-0002YV-Mk; Fri, 24 Aug 2012 10:46:50 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T4rPd-0002YQ-6i
	for xen-devel@lists.xen.org; Fri, 24 Aug 2012 10:46:49 +0000
Received: from [85.158.143.35:9594] by server-2.bemta-4.messagelabs.com id
	F3/04-21239-79B57305; Fri, 24 Aug 2012 10:46:47 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1345805206!13530081!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA2NTk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12004 invoked from network); 24 Aug 2012 10:46:47 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Aug 2012 10:46:47 -0000
X-IronPort-AV: E=Sophos;i="4.80,304,1344211200"; d="scan'208";a="14166130"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	24 Aug 2012 10:46:38 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 24 Aug 2012 11:46:38 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1T4rPS-0007rj-DK; Fri, 24 Aug 2012 10:46:38 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1T4rPS-0000ua-8h;
	Fri, 24 Aug 2012 11:46:38 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20535.23438.176337.439860@mariner.uk.xensource.com>
Date: Fri, 24 Aug 2012 11:46:38 +0100
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1345799299.12501.174.camel@zakaz.uk.xensource.com>
References: <CAFLBxZaEci0mOcDCgFX9zk=wh3z4Nf1LD5E5Fcy7Y3=ioDAM=g@mail.gmail.com>
	<1345048547.5926.245.camel@zakaz.uk.xensource.com>
	<20534.29501.983170.810193@mariner.uk.xensource.com>
	<1345799299.12501.174.camel@zakaz.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: George Dunlap <dunlapg@umich.edu>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [TESTDAY] xl cpupool-create segfaults if given
 invalid configuration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [Xen-devel] [TESTDAY] xl cpupool-create segfaults if given invalid configuration"):
> When regenerating with Lenny's bison I got the following spurious
> changes, no doubt due to some automated tree-wide cleanup, which would
> be nice to dispose of too.

OK.  Shall I just cause bison to rerun on my squeeze workstation and
commit the result then?

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 24 10:49:47 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 10:49:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4rSE-0002eI-8Z; Fri, 24 Aug 2012 10:49:30 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T4rSD-0002e6-1v
	for xen-devel@lists.xen.org; Fri, 24 Aug 2012 10:49:29 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1345805363!1881311!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA2NTk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2801 invoked from network); 24 Aug 2012 10:49:23 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Aug 2012 10:49:23 -0000
X-IronPort-AV: E=Sophos;i="4.80,304,1344211200"; d="scan'208";a="14166180"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	24 Aug 2012 10:49:07 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 24 Aug 2012 11:49:07 +0100
Message-ID: <1345805345.12501.183.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Fri, 24 Aug 2012 11:49:05 +0100
In-Reply-To: <20535.23438.176337.439860@mariner.uk.xensource.com>
References: <CAFLBxZaEci0mOcDCgFX9zk=wh3z4Nf1LD5E5Fcy7Y3=ioDAM=g@mail.gmail.com>
	<1345048547.5926.245.camel@zakaz.uk.xensource.com>
	<20534.29501.983170.810193@mariner.uk.xensource.com>
	<1345799299.12501.174.camel@zakaz.uk.xensource.com>
	<20535.23438.176337.439860@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: George Dunlap <dunlapg@umich.edu>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [TESTDAY] xl cpupool-create segfaults if given
 invalid configuration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-08-24 at 11:46 +0100, Ian Jackson wrote:
> Ian Campbell writes ("Re: [Xen-devel] [TESTDAY] xl cpupool-create segfaults if given invalid configuration"):
> > When regenerating with Lenny's bison I got the following spurious
> > changes, no doubt due to some automated tree wide cleanup, which would
> > be nice to dispose of too.
> 
> OK.  Shall I just cause bison to rerun on my squeeze workstation and
> commit the result then ?

Please!

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 24 10:49:47 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 10:49:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4rSE-0002eI-8Z; Fri, 24 Aug 2012 10:49:30 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T4rSD-0002e6-1v
	for xen-devel@lists.xen.org; Fri, 24 Aug 2012 10:49:29 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1345805363!1881311!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA2NTk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2801 invoked from network); 24 Aug 2012 10:49:23 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Aug 2012 10:49:23 -0000
X-IronPort-AV: E=Sophos;i="4.80,304,1344211200"; d="scan'208";a="14166180"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	24 Aug 2012 10:49:07 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 24 Aug 2012 11:49:07 +0100
Message-ID: <1345805345.12501.183.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Fri, 24 Aug 2012 11:49:05 +0100
In-Reply-To: <20535.23438.176337.439860@mariner.uk.xensource.com>
References: <CAFLBxZaEci0mOcDCgFX9zk=wh3z4Nf1LD5E5Fcy7Y3=ioDAM=g@mail.gmail.com>
	<1345048547.5926.245.camel@zakaz.uk.xensource.com>
	<20534.29501.983170.810193@mariner.uk.xensource.com>
	<1345799299.12501.174.camel@zakaz.uk.xensource.com>
	<20535.23438.176337.439860@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: George Dunlap <dunlapg@umich.edu>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [TESTDAY] xl cpupool-create segfaults if given
 invalid configuration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-08-24 at 11:46 +0100, Ian Jackson wrote:
> Ian Campbell writes ("Re: [Xen-devel] [TESTDAY] xl cpupool-create segfaults if given invalid configuration"):
> > When regenerating with Lenny's bison I got the following spurious
> > changes, no doubt due to some automated tree wide cleanup, which would
> > be nice to dispose of too.
> 
> OK.  Shall I just cause bison to rerun on my squeeze workstation and
> commit the result then ?

Please!

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 24 11:14:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 11:14:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4rq8-0002xe-Df; Fri, 24 Aug 2012 11:14:12 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1T4rq7-0002xW-2m
	for xen-devel@lists.xensource.com; Fri, 24 Aug 2012 11:14:11 +0000
Received: from [85.158.139.83:7027] by server-4.bemta-5.messagelabs.com id
	75/81-12386-00267305; Fri, 24 Aug 2012 11:14:08 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-6.tower-182.messagelabs.com!1345806844!23676784!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjY0ODk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16637 invoked from network); 24 Aug 2012 11:14:06 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Aug 2012 11:14:06 -0000
X-IronPort-AV: E=Sophos;i="4.80,304,1344225600"; d="scan'208";a="35690907"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	24 Aug 2012 07:14:03 -0400
Received: from [10.80.2.76] (10.80.2.76) by FTLPMAILMX02.citrite.net
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0; Fri, 24 Aug 2012
	07:14:04 -0400
Message-ID: <503761FA.3050606@citrix.com>
Date: Fri, 24 Aug 2012 12:14:02 +0100
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20120428 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <1345742026-10569-1-git-send-email-david.vrabel@citrix.com>
	<1345742026-10569-4-git-send-email-david.vrabel@citrix.com>
	<20120823194000.GA11652@phenom.dumpdata.com>
In-Reply-To: <20120823194000.GA11652@phenom.dumpdata.com>
Cc: Andres Lagar-Cavilla <andres@gridcentric.ca>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"Keir \(Xen.org\)" <keir@xen.org>
Subject: Re: [Xen-devel] [PATCH 3/3] xen/privcmd: add PRIVCMD_MMAPBATCH_V2
	ioctl
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 23/08/12 20:40, Konrad Rzeszutek Wilk wrote:
> On Thu, Aug 23, 2012 at 06:13:46PM +0100, David Vrabel wrote:
>> From: David Vrabel <david.vrabel@citrix.com>
>>
>> PRIVCMD_MMAPBATCH_V2 extends PRIVCMD_MMAPBATCH with an additional
>> field for reporting the error code for every frame that could not be
>> mapped.  libxc prefers PRIVCMD_MMAPBATCH_V2 over PRIVCMD_MMAPBATCH.
[...]
>> diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
>> index f8c1b6d..4f97160 100644
>> --- a/drivers/xen/privcmd.c
>> +++ b/drivers/xen/privcmd.c
>> @@ -275,34 +276,58 @@ static int mmap_return_errors(void *data, void *state)
>>  {
>>  	xen_pfn_t *mfnp = data;
>>  	struct mmap_batch_state *st = state;
>> +	int ret = 0;
>>  
>> -	return put_user(*mfnp, st->user++);
>> +	if (st->user_err) {
>> +		if ((*mfnp & 0xf0000000U) == 0xf0000000U)
>> +			ret = -ENOENT;
>> +		else if ((*mfnp & 0xf0000000U) == 0x80000000U)
>> +			ret = -EINVAL;
> 
> Yikes. Any way those 0xf000.. and 0x8000 can at least be #defined

Agreed.
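Konrad's suggestion of naming the two magic masks could take the shape below. The macro names here are hypothetical (invented for illustration); only the mask values and the errno mapping come from the quoted patch, and this is a userspace mock rather than the kernel code itself.

```c
#include <errno.h>
#include <stdint.h>

/* Hypothetical names for the magic constants in mmap_return_errors();
 * the values are taken from the quoted patch.  The top nibble of a
 * returned mfn encodes the per-frame failure. */
#define MMAPBATCH_MFN_ERROR   0xf0000000U
#define MMAPBATCH_PAGED_ERROR 0x80000000U

/* Userspace mock of the mfn-to-error-code mapping in the patch. */
static int mfn_to_errno(uint32_t mfn)
{
	if ((mfn & MMAPBATCH_MFN_ERROR) == MMAPBATCH_MFN_ERROR)
		return -ENOENT;
	if ((mfn & MMAPBATCH_MFN_ERROR) == MMAPBATCH_PAGED_ERROR)
		return -EINVAL;
	return 0;
}
```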

>> +		else
>> +			ret = 0;
>> +		return __put_user(ret, st->user_err);
>> +	} else
>> +		return __put_user(*mfnp, st->user_mfn++);
>>  }
>>  
>>  static struct vm_operations_struct privcmd_vm_ops;
>>  
>> -static long privcmd_ioctl_mmap_batch(void __user *udata)
>> +static long privcmd_ioctl_mmap_batch(void __user *udata, int version)
>>  {
>>  	int ret;
>> -	struct privcmd_mmapbatch m;
>> +	struct privcmd_mmapbatch_v2 m;
>>  	struct mm_struct *mm = current->mm;
>>  	struct vm_area_struct *vma;
>>  	unsigned long nr_pages;
>>  	LIST_HEAD(pagelist);
>>  	struct mmap_batch_state state;
>>  
>> +	printk("%s(%d)\n", __func__, version);
>> +
> 
> Hehe.

Heh.  I didn't polish up these patches as I wasn't sure this new
interface was useful.

>>  	if (!xen_initial_domain())
>>  		return -EPERM;
>>  
>> -	if (copy_from_user(&m, udata, sizeof(m)))
>> -		return -EFAULT;
>> +	if (version == 1) {
>> +		if (copy_from_user(&m, udata, sizeof(struct privcmd_mmapbatch)))
>> +			return -EFAULT;
>> +		if (!access_ok(VERIFY_WRITE, m.arr, m.num * sizeof(*m.arr)))
>> +			return -EFAULT;
> 
> That is new..

We use access_ok() here so we can use the less expensive __get_user()
and __put_user() later on.

>> +		if (copy_from_user(&m, udata, sizeof(struct privcmd_mmapbatch_v2)))
>> +			return -EFAULT;
>> +		if (!access_ok(VERIFY_READ, m.arr, m.num * sizeof(*m.arr)))
>> +			return -EFAULT;
>> +		if (!access_ok(VERIFY_WRITE, m.err, m.num * (sizeof(*m.err))))
>> +			return -EFAULT;
> 
> I think the VERIFY_WRITE can cover both versions?

Yes, VERIFY_WRITE is a superset of VERIFY_READ.  In v1, m.arr is
read/write, but in v2, m.arr is const, so we use VERIFY_READ, allowing
the userspace buffer to reside in a read-only section.

>> --- a/include/xen/privcmd.h
>> +++ b/include/xen/privcmd.h
>> @@ -62,6 +62,14 @@ struct privcmd_mmapbatch {
>>  	xen_pfn_t __user *arr; /* array of mfns - top nibble set on err */
>>  };
>>  
>> +struct privcmd_mmapbatch_v2 {
>> +	unsigned int num; /* number of pages to populate */
> 
> unsigned int? Not 'u32'?
>> +	domid_t dom;      /* target domain */
>> +	__u64 addr;       /* virtual address */
>> +	const xen_pfn_t __user *arr; /* array of mfns */
>> +	int __user *err;  /* array of error codes */
> 
> int? Not a specific type?

It's an existing interface supported by classic Xen kernels and
currently being used by libxc.  So while I agree that it's not the best
interface, I don't think it can be changed.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 24 11:18:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 11:18:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4rtZ-00036J-7K; Fri, 24 Aug 2012 11:17:45 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T4rtX-000369-8s
	for xen-devel@lists.xen.org; Fri, 24 Aug 2012 11:17:43 +0000
Received: from [85.158.143.35:36038] by server-3.bemta-4.messagelabs.com id
	47/D2-08232-6D267305; Fri, 24 Aug 2012 11:17:42 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1345807059!14720902!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA2NTk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16100 invoked from network); 24 Aug 2012 11:17:40 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Aug 2012 11:17:40 -0000
X-IronPort-AV: E=Sophos;i="4.80,304,1344211200"; d="scan'208";a="14166803"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	24 Aug 2012 11:17:09 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 24 Aug 2012 12:17:09 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1T4rsz-0008D3-IF; Fri, 24 Aug 2012 11:17:09 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1T4rsz-0001J8-EN;
	Fri, 24 Aug 2012 12:17:09 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20535.25269.140223.531016@mariner.uk.xensource.com>
Date: Fri, 24 Aug 2012 12:17:09 +0100
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1345805345.12501.183.camel@zakaz.uk.xensource.com>
References: <CAFLBxZaEci0mOcDCgFX9zk=wh3z4Nf1LD5E5Fcy7Y3=ioDAM=g@mail.gmail.com>
	<1345048547.5926.245.camel@zakaz.uk.xensource.com>
	<20534.29501.983170.810193@mariner.uk.xensource.com>
	<1345799299.12501.174.camel@zakaz.uk.xensource.com>
	<20535.23438.176337.439860@mariner.uk.xensource.com>
	<1345805345.12501.183.camel@zakaz.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: George Dunlap <dunlapg@umich.edu>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [TESTDAY] xl cpupool-create segfaults if given
 invalid configuration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [Xen-devel] [TESTDAY] xl cpupool-create segfaults if given invalid configuration"):
> On Fri, 2012-08-24 at 11:46 +0100, Ian Jackson wrote:
> > OK.  Shall I just cause bison to rerun on my squeeze workstation and
> > commit the result then ?
> 
> Please!

In this context, I found this helpful.

Ian.

# HG changeset patch
# User Ian Jackson <Ian.Jackson@eu.citrix.com>
# Date 1345806920 -3600
# Node ID 7769c3a0511edea42140c6c16608e16195a15182
# Parent  4ca40e0559c33205fb5163b10249a0fd5fda39b9
libxl: provide "make realclean" target

This removes all the autogenerated files.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>

diff -r 4ca40e0559c3 -r 7769c3a0511e tools/libxl/Makefile
--- a/tools/libxl/Makefile	Thu Aug 23 19:12:28 2012 +0100
+++ b/tools/libxl/Makefile	Fri Aug 24 12:15:20 2012 +0100
@@ -212,8 +212,10 @@ clean:
 	$(RM) -f _*.h *.o *.so* *.a $(CLIENTS) $(DEPS)
 	$(RM) -f _*.c *.pyc _paths.*.tmp _*.api-for-check
 	$(RM) -f testidl.c.new testidl.c
-#	$(RM) -f $(AUTOSRCS) $(AUTOINCS)
 
 distclean: clean
 
+realclean: distclean
+	$(RM) -f $(AUTOSRCS) $(AUTOINCS)
+
 -include $(DEPS)

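The target layering the patch introduces (realclean depends on distclean, which depends on clean) can be demonstrated with a toy makefile; the file below is illustrative only, not the real tools/libxl/Makefile:

```shell
# Toy makefile showing the clean -> distclean -> realclean chaining
# the patch adds (illustrative; not tools/libxl/Makefile itself).
printf 'clean:\n\t@echo clean\ndistclean: clean\n\t@echo distclean\nrealclean: distclean\n\t@echo realclean\n' > /tmp/layering.mk
make -f /tmp/layering.mk realclean
```

With this layering, `make realclean` performs everything `distclean` does and additionally removes the autogenerated sources ($(AUTOSRCS) and $(AUTOINCS)), so a subsequent build regenerates them with the local bison.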
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 24 11:19:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 11:19:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4rvO-0003DH-O1; Fri, 24 Aug 2012 11:19:38 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1T4rvN-0003Cy-Bs
	for xen-devel@lists.xen.org; Fri, 24 Aug 2012 11:19:37 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-3.tower-27.messagelabs.com!1345807073!8769522!1
X-Originating-IP: [81.169.146.161]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MSA9PiA0MTE4NTU=\n,sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MSA9PiA0MTE4NTU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26111 invoked from network); 24 Aug 2012 11:17:53 -0000
Received: from mo-p00-ob.rzone.de (HELO mo-p00-ob.rzone.de) (81.169.146.161)
	by server-3.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 24 Aug 2012 11:17:53 -0000
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+zrwiavkK6tmQaLfmztM8TOFJjS0PFf+F
X-RZG-CLASS-ID: mo00
Received: from probook.site
	(dslb-088-065-073-164.pools.arcor-ip.net [88.65.73.164])
	by smtp.strato.de (josoe mo41) (RZmta 30.11 SBL|AUTH)
	with (DHE-RSA-AES256-SHA encrypted) ESMTPA id i0297ao7O9AZlN ;
	Fri, 24 Aug 2012 13:17:52 +0200 (CEST)
Received: by probook.site (Postfix, from userid 1000)
	id 72AD71836E; Fri, 24 Aug 2012 13:17:52 +0200 (CEST)
Date: Fri, 24 Aug 2012 13:17:52 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20120824111752.GA14745@aepfle.de>
References: <patchbomb.1345746246@probook.site>
	<52f3d52bacdecb2c8d7f.1345746247@probook.site>
	<1345799722.12501.176.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1345799722.12501.176.camel@zakaz.uk.xensource.com>
User-Agent: Mutt/1.5.21.rev5555 (2012-07-20)
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 1 of 3] xend/pvscsi: fix passing of SCSI
 control LUNs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Aug 24, Ian Campbell wrote:

> On Thu, 2012-08-23 at 19:24 +0100, Olaf Hering wrote:
> > # HG changeset patch
> > # User Olaf Hering <olaf@aepfle.de>
> > # Date 1345743306 -7200
> > # Node ID 52f3d52bacdecb2c8d7f8aa26e2600febc03b6dd
> > # Parent  e6ca45ca03c2e08af3a74b404166527b68fd1218
> > xend/pvscsi: fix passing of SCSI control LUNs
> > 
> > Currently pvscsi can not pass SCSI devices that have just a scsi_generic node.
> > In the following example sg3 is a control LUN for the disk sdd.
> > But vscsi=['4:0:2:0,0:0:0:0'] does not work because the internal 'devname'
> > variable remains None. Later writing p-devname to xenstore fails because None
> > is not a valid string variable.
> 
> Just out of interest, would you not need to pass through 4:0:2:1 too?

Yes, this is just an example of how to pass the control LUN at all.


Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 24 11:20:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 11:20:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4rwE-0003Hf-6z; Fri, 24 Aug 2012 11:20:30 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1T4rwD-0003HG-69
	for xen-devel@lists.xen.org; Fri, 24 Aug 2012 11:20:29 +0000
Received: from [85.158.138.51:39872] by server-6.bemta-3.messagelabs.com id
	06/0D-32013-C7367305; Fri, 24 Aug 2012 11:20:28 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-11.tower-174.messagelabs.com!1345807227!27730152!1
X-Originating-IP: [81.169.146.161]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MSA9PiA0MTE4NTU=\n,sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MSA9PiA0MTE4NTU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31688 invoked from network); 24 Aug 2012 11:20:27 -0000
Received: from mo-p00-ob.rzone.de (HELO mo-p00-ob.rzone.de) (81.169.146.161)
	by server-11.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 24 Aug 2012 11:20:27 -0000
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+zrwiavkK6tmQaLfmztM8TOFJjS0PFf+F
X-RZG-CLASS-ID: mo00
Received: from probook.site
	(dslb-088-065-073-164.pools.arcor-ip.net [88.65.73.164])
	by smtp.strato.de (jored mo94) (RZmta 30.11 SBL|AUTH)
	with (DHE-RSA-AES256-SHA encrypted) ESMTPA id 5039f6o7O9LF5J ;
	Fri, 24 Aug 2012 13:20:27 +0200 (CEST)
Received: by probook.site (Postfix, from userid 1000)
	id EE4491836E; Fri, 24 Aug 2012 13:20:26 +0200 (CEST)
Date: Fri, 24 Aug 2012 13:20:26 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20120824112026.GB14745@aepfle.de>
References: <patchbomb.1345746246@probook.site>
	<482b9db173f2ceefed06.1345746249@probook.site>
	<1345800245.12501.178.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1345800245.12501.178.camel@zakaz.uk.xensource.com>
User-Agent: Mutt/1.5.21.rev5555 (2012-07-20)
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 3 of 3] xend/pvscsi: update sysfs parser for
 Linux 3.0
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Aug 24, Ian Campbell wrote:

> Did you test this with both pre- and post-3.0 kernels?

I looked at a 2.6.16 and a 3.5 kernel, and runtime tested a 3.0 kernel.

Ideally the sysfs parser should be removed and lsscsi should be
mandatory.

Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 24 11:25:26 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 11:25:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4s0o-0003ZJ-To; Fri, 24 Aug 2012 11:25:14 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T4s0n-0003Z2-1w
	for xen-devel@lists.xen.org; Fri, 24 Aug 2012 11:25:13 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1345807495!8705012!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA2NTk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13879 invoked from network); 24 Aug 2012 11:24:56 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Aug 2012 11:24:56 -0000
X-IronPort-AV: E=Sophos;i="4.80,304,1344211200"; d="scan'208";a="14166937"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	24 Aug 2012 11:24:55 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 24 Aug 2012 12:24:55 +0100
Date: Fri, 24 Aug 2012 12:24:31 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: "Xu, Dongxiao" <dongxiao.xu@intel.com>
In-Reply-To: <40776A41FC278F40B59438AD47D147A90FE1D78C@SHSMSX102.ccr.corp.intel.com>
Message-ID: <alpine.DEB.2.02.1208241220400.15568@kaball.uk.xensource.com>
References: <40776A41FC278F40B59438AD47D147A90FE17D93@SHSMSX102.ccr.corp.intel.com>
	<alpine.DEB.2.02.1208201621330.15568@kaball.uk.xensource.com>
	<40776A41FC278F40B59438AD47D147A90FE1D78C@SHSMSX102.ccr.corp.intel.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"Keir \(Xen.org\)" <keir@xen.org>, Jan Beulich <JBeulich@suse.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH v2] QEMU/helper2.c: Fix multiply issue for
 int and uint types
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 24 Aug 2012, Xu, Dongxiao wrote:
> > -----Original Message-----
> > > >From d71f9be82ec0079aa88f779dea90e475b177e32f Mon Sep 17 00:00:00
> > > >2001
> > > From: Dongxiao Xu <dongxiao.xu@intel.com>
> > > Date: Mon, 20 Aug 2012 16:45:04 +0800
> > > Subject: [PATCH] helper2: fix multiply issue for int and uint types
> > >
> > > If the two operands of a multiply are of int and uint types
> > > respectively, the int operand is converted to uint first, which is
> > > not what our code intends. The fix is to cast the uint operand to
> > > (int64_t) before the multiply.
> > >
> > > This helps fix the slow boot (more than 30 minutes) of a Xen
> > > hypervisor running on another Xen hypervisor (the nested
> > > virtualization case).
> > >
> > > Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
> > 
> > Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> 
> Is this patch supposed to be merged in Xen 4.2?
> 
> This is a fix for the nested Xen boot issue, and it is already merged in QEMU upstream.
> 
> We have several patches that fix issues and enable nested virtualization for Xen on Xen.
> They are:
> 
> 1) QEMU/helper2.c: Fix multiply issue for int and uint types.
> This patch fixes the nested Xen boot issue. (in this mail)
> 
> 2) Xiantao Zhang will send out another patch (probably today) to fix the L2 guest booting issue.
> 
> 3) nvmx: fix resource relinquish for nested VMX.
> This patch fixes the destroy issue (resources are not released; xl list still shows the guest) for an L1 guest with an L2 guest running in it. (Already sent to the mailing list.)
> 
> 4) Another patch, already merged by Ian J, fixes the slow boot issue for the L2 guest, see:
> http://xenbits.xen.org/gitweb/?p=qemu-xen-unstable.git;a=commit;h=effd5676225761abdab90becac519716515c3be4
> 
> For the remaining three, will you accept them for the Xen 4.2 release?

I can only speak for qemu-upstream-unstable, but I'll certainly backport
the int/uint fix as soon as it gets accepted in QEMU upstream.

Regarding the other patches, if any of them are for
qemu-upstream-unstable, I am going to backport them only if they are
bugfixes.
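The promotion hazard in the quoted commit message can be modelled in Python by masking to 32 bits, the way C's usual arithmetic conversions turn a signed int operand into unsigned int (illustrative only, assuming 32-bit int/unsigned int; the actual fix is a cast in QEMU's helper2.c):

```python
def to_uint32(x):
    # Model C's conversion of a signed int to 32-bit unsigned int.
    return x & 0xFFFFFFFF

i = -1   # signed int operand
u = 10   # unsigned int operand

# Without the fix: i is converted to unsigned before the multiply and
# the 32-bit product wraps, so the result is a large positive value.
wrong = (to_uint32(i) * u) & 0xFFFFFFFF

# With the (int64_t) cast applied before the multiply, the signed
# value survives and the product is the expected -10.
right = i * u
```

Widened to int64_t afterwards, the unfixed product stays a huge positive number instead of -10, which is how the nested guest's time calculations went wrong.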

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 24 11:29:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 11:29:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4s4T-0003hC-IG; Fri, 24 Aug 2012 11:29:01 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T4s4S-0003h6-O3
	for xen-devel@lists.xen.org; Fri, 24 Aug 2012 11:29:00 +0000
Received: from [85.158.138.51:55075] by server-5.bemta-3.messagelabs.com id
	D7/A2-08865-C7567305; Fri, 24 Aug 2012 11:29:00 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-174.messagelabs.com!1345807739!18940918!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA2NTk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13383 invoked from network); 24 Aug 2012 11:28:59 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-7.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Aug 2012 11:28:59 -0000
X-IronPort-AV: E=Sophos;i="4.80,304,1344211200"; d="scan'208";a="14166992"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	24 Aug 2012 11:28:59 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 24 Aug 2012 12:28:59 +0100
Message-ID: <1345807737.12501.184.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Fri, 24 Aug 2012 12:28:57 +0100
In-Reply-To: <20535.25269.140223.531016@mariner.uk.xensource.com>
References: <CAFLBxZaEci0mOcDCgFX9zk=wh3z4Nf1LD5E5Fcy7Y3=ioDAM=g@mail.gmail.com>
	<1345048547.5926.245.camel@zakaz.uk.xensource.com>
	<20534.29501.983170.810193@mariner.uk.xensource.com>
	<1345799299.12501.174.camel@zakaz.uk.xensource.com>
	<20535.23438.176337.439860@mariner.uk.xensource.com>
	<1345805345.12501.183.camel@zakaz.uk.xensource.com>
	<20535.25269.140223.531016@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: George Dunlap <dunlapg@umich.edu>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [TESTDAY] xl cpupool-create segfaults if given
 invalid configuration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-08-24 at 12:17 +0100, Ian Jackson wrote:
> Ian Campbell writes ("Re: [Xen-devel] [TESTDAY] xl cpupool-create segfaults if given invalid configuration"):
> > On Fri, 2012-08-24 at 11:46 +0100, Ian Jackson wrote:
> > > OK.  Shall I just cause bison to rerun on my squeeze workstation and
> > > commit the result then ?
> > 
> > Please!
> 
> In this context, I found this helpful.
> 
> Ian.
> 
> # HG changeset patch
> # User Ian Jackson <Ian.Jackson@eu.citrix.com>
> # Date 1345806920 -3600
> # Node ID 7769c3a0511edea42140c6c16608e16195a15182
> # Parent  4ca40e0559c33205fb5163b10249a0fd5fda39b9
> libxl: provide "make realclean" target
> 
> This removes all the autogenerated files.
> 
> Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

> 
> diff -r 4ca40e0559c3 -r 7769c3a0511e tools/libxl/Makefile
> --- a/tools/libxl/Makefile	Thu Aug 23 19:12:28 2012 +0100
> +++ b/tools/libxl/Makefile	Fri Aug 24 12:15:20 2012 +0100
> @@ -212,8 +212,10 @@ clean:
>  	$(RM) -f _*.h *.o *.so* *.a $(CLIENTS) $(DEPS)
>  	$(RM) -f _*.c *.pyc _paths.*.tmp _*.api-for-check
>  	$(RM) -f testidl.c.new testidl.c
> -#	$(RM) -f $(AUTOSRCS) $(AUTOINCS)
>  
>  distclean: clean
>  
> +realclean: distclean
> +	$(RM) -f $(AUTOSRCS) $(AUTOINCS)
> +
>  -include $(DEPS)



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 24 11:41:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 11:41:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4sFr-0004GJ-18; Fri, 24 Aug 2012 11:40:47 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T4sFq-0004G3-5c
	for xen-devel@lists.xen.org; Fri, 24 Aug 2012 11:40:46 +0000
Received: from [85.158.139.83:49608] by server-12.bemta-5.messagelabs.com id
	F4/52-22359-D3867305; Fri, 24 Aug 2012 11:40:45 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-16.tower-182.messagelabs.com!1345808444!20333808!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA2NTk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20942 invoked from network); 24 Aug 2012 11:40:45 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Aug 2012 11:40:45 -0000
X-IronPort-AV: E=Sophos;i="4.80,304,1344211200"; d="scan'208";a="14167215"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	24 Aug 2012 11:40:44 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 24 Aug 2012 12:40:44 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1T4sFo-00007D-9F; Fri, 24 Aug 2012 11:40:44 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1T4sFo-0001UE-5m;
	Fri, 24 Aug 2012 12:40:44 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20535.26683.898355.30248@mariner.uk.xensource.com>
Date: Fri, 24 Aug 2012 12:40:43 +0100
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1345807737.12501.184.camel@zakaz.uk.xensource.com>
References: <CAFLBxZaEci0mOcDCgFX9zk=wh3z4Nf1LD5E5Fcy7Y3=ioDAM=g@mail.gmail.com>
	<1345048547.5926.245.camel@zakaz.uk.xensource.com>
	<20534.29501.983170.810193@mariner.uk.xensource.com>
	<1345799299.12501.174.camel@zakaz.uk.xensource.com>
	<20535.23438.176337.439860@mariner.uk.xensource.com>
	<1345805345.12501.183.camel@zakaz.uk.xensource.com>
	<20535.25269.140223.531016@mariner.uk.xensource.com>
	<1345807737.12501.184.camel@zakaz.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: George Dunlap <dunlapg@umich.edu>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [TESTDAY] xl cpupool-create segfaults if given
 invalid configuration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [Xen-devel] [TESTDAY] xl cpupool-create segfaults if given invalid configuration"):
> On Fri, 2012-08-24 at 12:17 +0100, Ian Jackson wrote:
> > Ian Campbell writes ("Re: [Xen-devel] [TESTDAY] xl cpupool-create segfaults if given invalid configuration"):
> > > On Fri, 2012-08-24 at 11:46 +0100, Ian Jackson wrote:
> > > > OK.  Shall I just cause bison to rerun on my squeeze workstation and
> > > > commit the result then ?
> > > 
> > > Please!

Now done.

> > libxl: provide "make realclean" target
> > 
> > This removes all the autogenerated files.
> > 
> > Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
> 
> Acked-by: Ian Campbell <ian.campbell@citrix.com>

and this also

Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 24 11:42:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 11:42:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4sH5-0004Oq-Ka; Fri, 24 Aug 2012 11:42:03 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T4sH3-0004OU-Bu
	for xen-devel@lists.xensource.com; Fri, 24 Aug 2012 11:42:02 +0000
Received: from [85.158.143.35:5048] by server-1.bemta-4.messagelabs.com id
	C7/C5-12504-88867305; Fri, 24 Aug 2012 11:42:00 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1345808515!5610334!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNzEwNjA0\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30722 invoked from network); 24 Aug 2012 11:41:57 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-4.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 24 Aug 2012 11:41:57 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7OBfq5s000681
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 24 Aug 2012 11:41:53 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7OBfpJn016954
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 24 Aug 2012 11:41:51 GMT
Received: from abhmt117.oracle.com (abhmt117.oracle.com [141.146.116.69])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7OBfon5023586; Fri, 24 Aug 2012 06:41:50 -0500
Received: from konrad-lan.dumpdata.com (/209.6.85.33)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 24 Aug 2012 04:41:49 -0700
Date: Fri, 24 Aug 2012 07:41:47 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: David Vrabel <david.vrabel@citrix.com>
Message-ID: <20120824114147.GF11007@konrad-lan.dumpdata.com>
References: <1345742026-10569-1-git-send-email-david.vrabel@citrix.com>
	<1345742026-10569-4-git-send-email-david.vrabel@citrix.com>
	<20120823194000.GA11652@phenom.dumpdata.com>
	<503761FA.3050606@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <503761FA.3050606@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Andres Lagar-Cavilla <andres@gridcentric.ca>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"Keir \(Xen.org\)" <keir@xen.org>
Subject: Re: [Xen-devel] [PATCH 3/3] xen/privcmd: add PRIVCMD_MMAPBATCH_V2
	ioctl
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> >> +struct privcmd_mmapbatch_v2 {
> >> +	unsigned int num; /* number of pages to populate */
> > 
> > unsigned int? Not 'u32'?
> >> +	domid_t dom;      /* target domain */
> >> +	__u64 addr;       /* virtual address */
> >> +	const xen_pfn_t __user *arr; /* array of mfns */
> >> +	int __user *err;  /* array of error codes */
> > 
> > int? Not a specific type?
> 
> It's an existing interface supported by classic Xen kernels and
> currently being used by libxc.  So while I agree that it's not the best
> interface, I don't think it can be changed.

How does it work with a 64-bit dom0 and 32-bit userspace? Is libxc
smart enough to figure out the size of the structure?
> 
> David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 24 11:50:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 11:50:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4sPH-0004pr-R2; Fri, 24 Aug 2012 11:50:31 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T4sPG-0004pk-3d
	for xen-devel@lists.xensource.com; Fri, 24 Aug 2012 11:50:30 +0000
Received: from [85.158.143.35:3126] by server-3.bemta-4.messagelabs.com id
	A8/66-08232-58A67305; Fri, 24 Aug 2012 11:50:29 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1345809026!13904619!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA2NTk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28028 invoked from network); 24 Aug 2012 11:50:27 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Aug 2012 11:50:27 -0000
X-IronPort-AV: E=Sophos;i="4.80,304,1344211200"; d="scan'208";a="14167400"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	24 Aug 2012 11:50:17 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 24 Aug 2012 12:50:18 +0100
Message-ID: <1345809016.12501.185.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Fri, 24 Aug 2012 12:50:16 +0100
In-Reply-To: <20120824114147.GF11007@konrad-lan.dumpdata.com>
References: <1345742026-10569-1-git-send-email-david.vrabel@citrix.com>
	<1345742026-10569-4-git-send-email-david.vrabel@citrix.com>
	<20120823194000.GA11652@phenom.dumpdata.com>	<503761FA.3050606@citrix.com>
	<20120824114147.GF11007@konrad-lan.dumpdata.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Andres Lagar-Cavilla <andres@gridcentric.ca>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"Keir \(Xen.org\)" <keir@xen.org>, David Vrabel <david.vrabel@citrix.com>
Subject: Re: [Xen-devel] [PATCH 3/3] xen/privcmd: add PRIVCMD_MMAPBATCH_V2
 ioctl
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-08-24 at 12:41 +0100, Konrad Rzeszutek Wilk wrote:
> > >> +struct privcmd_mmapbatch_v2 {
> > >> +	unsigned int num; /* number of pages to populate */
> > > 
> > > unsigned int? Not 'u32'?
> > >> +	domid_t dom;      /* target domain */
> > >> +	__u64 addr;       /* virtual address */
> > >> +	const xen_pfn_t __user *arr; /* array of mfns */
> > >> +	int __user *err;  /* array of error codes */
> > > 
> > > int? Not a specific type?
> > 
> > It's an existing interface supported by classic Xen kernels and
> > currently being used by libxc.  So while I agree that it's not the best
> > interface, I don't think it can be changed.
> 
> How does it work with a 64-bit dom0 and 32-bit userspace? Is the libxc
> smart enough to figure out the size of the structure?

This already doesn't work for numerous reasons.

The main one being that libxc will make 32-bit hypercalls, but when the
kernel forwards them on they look like 64-bit ones.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 24 11:58:39 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 11:58:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4sWv-00054K-US; Fri, 24 Aug 2012 11:58:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1T4sWu-00054F-SL
	for xen-devel@lists.xensource.com; Fri, 24 Aug 2012 11:58:25 +0000
Received: from [85.158.143.35:50593] by server-2.bemta-4.messagelabs.com id
	D3/F5-21239-06C67305; Fri, 24 Aug 2012 11:58:24 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-10.tower-21.messagelabs.com!1345809497!10439053!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzM5MDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19803 invoked from network); 24 Aug 2012 11:58:19 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Aug 2012 11:58:19 -0000
X-IronPort-AV: E=Sophos;i="4.80,304,1344225600"; d="scan'208";a="206133489"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	24 Aug 2012 07:58:17 -0400
Received: from [10.80.2.76] (10.80.2.76) by FTLPMAILMX01.citrite.net
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0; Fri, 24 Aug 2012
	07:58:17 -0400
Message-ID: <50376C57.2050008@citrix.com>
Date: Fri, 24 Aug 2012 12:58:15 +0100
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20120428 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Andres Lagar-Cavilla <andreslc@gridcentric.ca>
References: <1345742026-10569-1-git-send-email-david.vrabel@citrix.com>
	<E11EC85A-BEB5-4DAF-B899-8DEE1E52D382@gridcentric.ca>
In-Reply-To: <E11EC85A-BEB5-4DAF-B899-8DEE1E52D382@gridcentric.ca>
Cc: Olaf Hering <olaf@aepfle.de>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"Keir \(Xen.org\)" <keir@xen.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [RFC PATCH 0/3] xen/privcmd: support for paged-out
	frames
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 24/08/12 02:32, Andres Lagar-Cavilla wrote:
> On Aug 23, 2012, at 1:13 PM, David Vrabel wrote:
> 
>> This series is a straight forward-port of some functionality from
>> classic kernels to support Xen hosts that do paging of guests.
>>
>> This isn't functionality that XenServer makes use of, so I've not tested
>> these with paging in use (GridCentric requested that our older kernels
>> support this and I'm just doing the forward port).
> 
> Thanks for this series. Very timely. I may add that we are not the
> only consumers of paging. This functionality was first added into
> classic kernels by Olaf Hering from Suse (added to cc).

Sure.

>> I'm not entirely happy about the approach used here because:
>>
>> 1. It relies on the meaning of the return code of the update_mmu
>> hypercall and it assumes the value Xen used for -ENOENT is the same
>> the kernel uses. This does not appear to be a formal part of the
>> hypercall ABI.
>>
>> Keir, can you comment on this?
> 
> I see your point. I may add that it's likely to be more pervasive
> than just relying on ENOENT being 2, which is a fairly safe bet.
> 
>>
>> 2. It seems more sensible to have the kernel do the retries instead of
>> libxc doing them.  The kernel has to have a mechanism for this any way
>> (for mapping back/front rings).
>>
>> 3. The current way of handling paged-out frames by repeatedly retrying
>> is a bit lame.  Shouldn't there be some event that the guest waiting
>> for the frame can wait on instead?  By moving the retry mechanism into
>> the kernel we can change this without impacting the ABI to userspace.
> 
> Lame is an interesting choice of language :)

My embedded background makes me frown at anything that polls -- it's
generally bad for power consumption.

> I am not a huge fan of libxc retry, but we've been pounding it quite
> hard for a while and it works -- and, importantly, it yields the
> scheduler :)
> 
> While kernel retry may benefit from hypothetical code reuse,
> "Shouldn't there be some event that the guest waiting for the frame can
> wait on instead?" will need to become concrete to start a real discussion.

In-kernel retries don't require the event.  Using the event is something
to consider in the longer term.

> For better or worse, since xen-4.1 (!) libxc will do the right thing
> if fed the appropriate errno.

At a minimum I think we need to fix the existing interface to have the
behavior libxc expects.

Is PRIVCMD_MMAPBATCH_V2 actually required?  It doesn't seem to do much
more than the V1 interface.  Perhaps the fixes in patches #1 and #2 are
sufficient to make libxc work?

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 24 11:58:39 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 11:58:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4sWv-00054K-US; Fri, 24 Aug 2012 11:58:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1T4sWu-00054F-SL
	for xen-devel@lists.xensource.com; Fri, 24 Aug 2012 11:58:25 +0000
Received: from [85.158.143.35:50593] by server-2.bemta-4.messagelabs.com id
	D3/F5-21239-06C67305; Fri, 24 Aug 2012 11:58:24 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-10.tower-21.messagelabs.com!1345809497!10439053!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzM5MDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19803 invoked from network); 24 Aug 2012 11:58:19 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Aug 2012 11:58:19 -0000
X-IronPort-AV: E=Sophos;i="4.80,304,1344225600"; d="scan'208";a="206133489"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	24 Aug 2012 07:58:17 -0400
Received: from [10.80.2.76] (10.80.2.76) by FTLPMAILMX01.citrite.net
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0; Fri, 24 Aug 2012
	07:58:17 -0400
Message-ID: <50376C57.2050008@citrix.com>
Date: Fri, 24 Aug 2012 12:58:15 +0100
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20120428 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Andres Lagar-Cavilla <andreslc@gridcentric.ca>
References: <1345742026-10569-1-git-send-email-david.vrabel@citrix.com>
	<E11EC85A-BEB5-4DAF-B899-8DEE1E52D382@gridcentric.ca>
In-Reply-To: <E11EC85A-BEB5-4DAF-B899-8DEE1E52D382@gridcentric.ca>
Cc: Olaf Hering <olaf@aepfle.de>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"Keir \(Xen.org\)" <keir@xen.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [RFC PATCH 0/3] xen/privcmd: support for paged-out
	frames
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 24/08/12 02:32, Andres Lagar-Cavilla wrote:
> On Aug 23, 2012, at 1:13 PM, David Vrabel wrote:
> 
>> This series is a straight forward-port of some functionality from
>> classic kernels to support Xen hosts that do paging of guests.
>>
>> This isn't functionality that XenServer makes use of, so I've not tested
>> these with paging in use (GridCentric requested that our older kernels
>> support this and I'm just doing the forward port).
> 
> Thanks for this series. Very timely. I may add that we are not the
> only consumers of paging. This functionality was first added into
> classic kernels by Olaf Hering from Suse (added to cc).

Sure.

>> I'm not entirely happy about the approach used here because:
>>
>> 1. It relies on the meaning of the return code of the update_mmu
>> hypercall and it assumes the value Xen used for -ENOENT is the same
>> the kernel uses. This does not appear to be a formal part of the
>> hypercall ABI.
>>
>> Keir, can you comment on this?
> 
> I see your point. I may add that it's likely to be more pervasive
> than just relying on ENOENT being 12. Which is a fairly safe bet.
> 
>>
>> 2. It seems more sensible to have the kernel do the retries instead of
>> libxc doing them.  The kernel has to have a mechanism for this any way
>> (for mapping back/front rings).
>>
>> 3. The current way of handling paged-out frames by repeatedly retrying
>> is a bit lame.  Shouldn't there be some event that the guest waiting
>> for the frame can wait on instead?  By moving the retry mechanism into
>> the kernel we can change this without impacting the ABI to userspace.
> 
> Lame is an interesting choice of language :)

My embedded background makes me frown at anything that polls -- it's
generally bad for power consumption.

> I am not a huge fan of libxc retry, but we've been pounding it quite
> hard for a while and it works -- and, importantly, it yields the
> scheduler :)
> 
> While kernel retry may benefit from hypothetical code reuse,
> "Shouldn't there be some event that the guest waiting for the frame can
> wait on instead?" will need to become concrete to start a real discussion.

In-kernel retries don't require the event.  Using the event is something
to consider in the longer term.
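For the record, the sort of in-kernel retry I mean is nothing clever.  A
minimal userspace sketch (every name here is hypothetical; the mock stands
in for the mmu_update hypercall, and usleep() for msleep()):

```c
#include <errno.h>
#include <unistd.h>

/* Mock of the map operation: fails with -ENOENT while the frame is
 * still paged out, then succeeds once Xen has paged it back in. */
static int mock_paged_attempts = 2;

static int map_one_frame(unsigned long mfn)
{
    (void)mfn;
    if (mock_paged_attempts > 0) {
        mock_paged_attempts--;
        return -ENOENT;        /* frame paged out, try again later */
    }
    return 0;
}

/* Retry in the kernel instead of in libxc: back off briefly on each
 * -ENOENT and give up after a bounded number of attempts, so userspace
 * never has to poll. */
static int map_frame_retrying(unsigned long mfn, int max_tries)
{
    int rc = -ENOENT;

    while (max_tries-- > 0) {
        rc = map_one_frame(mfn);
        if (rc != -ENOENT)
            break;
        usleep(1000);          /* stand-in for msleep(1) in a kernel */
    }
    return rc;
}
```

Swapping the sleep for a wait on a per-frame event is then a local change
that userspace never sees.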

> For better or worse, since xen-4.1 (!) libxc will do the right thing
> if fed the appropriate errno.

At a minimum I think we need to fix the existing interface to have the
behavior libxc expects.

Is PRIVCMD_MMAPBATCH_V2 actually required?  It doesn't seem to do much
more than the V1 interface.  Perhaps the fixes in patches #1 and #2 are
sufficient to make libxc work?

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 24 12:01:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 12:01:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4sZI-0005Fa-RD; Fri, 24 Aug 2012 12:00:52 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1T4sZG-0005FK-KS
	for xen-devel@lists.xensource.com; Fri, 24 Aug 2012 12:00:50 +0000
Received: from [85.158.138.51:55765] by server-6.bemta-3.messagelabs.com id
	94/4D-32013-1FC67305; Fri, 24 Aug 2012 12:00:49 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-10.tower-174.messagelabs.com!1345809643!23665336!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjY0ODk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6227 invoked from network); 24 Aug 2012 12:00:44 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Aug 2012 12:00:44 -0000
X-IronPort-AV: E=Sophos;i="4.80,304,1344225600"; d="scan'208";a="35694254"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	24 Aug 2012 08:00:43 -0400
Received: from [10.80.2.76] (10.80.2.76) by FTLPMAILMX01.citrite.net
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0; Fri, 24 Aug 2012
	08:00:42 -0400
Message-ID: <50376CE9.1040401@citrix.com>
Date: Fri, 24 Aug 2012 13:00:41 +0100
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20120428 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <1345742026-10569-1-git-send-email-david.vrabel@citrix.com>
	<1345742026-10569-4-git-send-email-david.vrabel@citrix.com>
	<20120823194000.GA11652@phenom.dumpdata.com>
	<503761FA.3050606@citrix.com>
	<20120824114147.GF11007@konrad-lan.dumpdata.com>
In-Reply-To: <20120824114147.GF11007@konrad-lan.dumpdata.com>
Cc: Andres Lagar-Cavilla <andres@gridcentric.ca>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"Keir \(Xen.org\)" <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH 3/3] xen/privcmd: add PRIVCMD_MMAPBATCH_V2
	ioctl
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 24/08/12 12:41, Konrad Rzeszutek Wilk wrote:
>>>> +struct privcmd_mmapbatch_v2 {
>>>> +	unsigned int num; /* number of pages to populate */
>>>
>>> unsigned int? Not 'u32'?
>>>> +	domid_t dom;      /* target domain */
>>>> +	__u64 addr;       /* virtual address */
>>>> +	const xen_pfn_t __user *arr; /* array of mfns */
>>>> +	int __user *err;  /* array of error codes */
>>>
>>> int? Not a specific type?
>>
>> It's an existing interface supported by classic Xen kernels and
>> currently being used by libxc.  So while I agree that it's not the best
>> interface, I don't think it can be changed.

It's also the same as struct privcmd_mmapbatch except for the extra
'err' field and 'arr' being const.

> How does it work with a 64-bit dom0 and 32-bit userspace? Is the libxc
> smart enough to figure out the size of the structure?

privcmd doesn't support compat ioctls because there is nothing doing the
translation of the hypercalls from the 32-bit to the 64-bit ABI -- the
hypervisor won't do it as the hypercalls are called from the 64-bit
kernel.

64 bit Xen, 64 bit dom0, 32 bit tools has never worked for this reason.
 I think Ian Campbell had some hacky patches for this but he may not
want to admit to them. ;)
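To make the size mismatch concrete, here's a stand-alone sketch of the V2
layout (the typedefs are stand-ins for the real Xen/kernel headers, not
copies of them):

```c
#include <stdint.h>
#include <stddef.h>

/* Stand-ins for the Xen/kernel typedefs -- illustration only. */
typedef uint16_t domid_t;
typedef unsigned long xen_pfn_t;

struct privcmd_mmapbatch_v2 {
    unsigned int num;            /* number of pages to populate */
    domid_t dom;                 /* target domain */
    uint64_t addr;               /* virtual address */
    const xen_pfn_t *arr;        /* array of mfns */
    int *err;                    /* array of error codes */
};

/* On LP64 the two pointers are 8 bytes each and uint64_t is 8-byte
 * aligned, so the struct is 32 bytes with 'arr' at offset 16.  An
 * ILP32 caller would pass a 24-byte struct (4-byte pointers, and on
 * i386 uint64_t is only 4-byte aligned).  Since privcmd's ioctl
 * numbers encode sizeof() via _IOC(), a 64-bit privcmd wouldn't even
 * recognise the 32-bit ioctl without explicit compat handling. */
```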

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 24 12:14:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 12:14:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4smQ-0005uy-97; Fri, 24 Aug 2012 12:14:26 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T4smO-0005us-VM
	for xen-devel@lists.xensource.com; Fri, 24 Aug 2012 12:14:25 +0000
Received: from [85.158.139.83:4373] by server-1.bemta-5.messagelabs.com id
	40/5C-09980-02077305; Fri, 24 Aug 2012 12:14:24 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-182.messagelabs.com!1345810463!27078252!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA2NTk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21154 invoked from network); 24 Aug 2012 12:14:23 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-13.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Aug 2012 12:14:23 -0000
X-IronPort-AV: E=Sophos;i="4.80,304,1344211200"; d="scan'208";a="14167975"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	24 Aug 2012 12:14:23 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 24 Aug 2012 13:14:23 +0100
Message-ID: <1345810462.19419.6.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: David Vrabel <david.vrabel@citrix.com>
Date: Fri, 24 Aug 2012 13:14:22 +0100
In-Reply-To: <50376CE9.1040401@citrix.com>
References: <1345742026-10569-1-git-send-email-david.vrabel@citrix.com>
	<1345742026-10569-4-git-send-email-david.vrabel@citrix.com>
	<20120823194000.GA11652@phenom.dumpdata.com>
	<503761FA.3050606@citrix.com>
	<20120824114147.GF11007@konrad-lan.dumpdata.com>
	<50376CE9.1040401@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Andres Lagar-Cavilla <andres@gridcentric.ca>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"Keir \(Xen.org\)" <keir@xen.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [PATCH 3/3] xen/privcmd: add PRIVCMD_MMAPBATCH_V2
	ioctl
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-08-24 at 13:00 +0100, David Vrabel wrote:
> 64 bit Xen, 64 bit dom0, 32 bit tools has never worked for this reason.
>  I think Ian Campbell had some hacky patches for this but he may not
> want to admit to them. ;)

They were pretty nasty.

I think I made a 32-bit hypercall entry path available to 64 bit kernels
(via int 0x8?) so privcmd used that for 32 bit tasks. Then there was
some hacking to make the hypercall compat arg translation area work for
64 bit guests (I don't remember what I did here; I seem to recall it
wasn't pretty).

I eventually fell at the hurdle of getting the 64 bit blktap kernel
module to work with a 32 bit tapdisk process (since the ring layouts are
different). Not an impossible problem, but at this point I couldn't be
bothered any more.

> 
> David



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 24 12:15:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 12:15:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4smu-0005xF-MU; Fri, 24 Aug 2012 12:14:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T4smt-0005x1-7e
	for xen-devel@lists.xensource.com; Fri, 24 Aug 2012 12:14:55 +0000
Received: from [85.158.143.35:43746] by server-3.bemta-4.messagelabs.com id
	14/DC-08232-E3077305; Fri, 24 Aug 2012 12:14:54 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-21.messagelabs.com!1345810494!10441844!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA2NTk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30099 invoked from network); 24 Aug 2012 12:14:54 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-10.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Aug 2012 12:14:54 -0000
X-IronPort-AV: E=Sophos;i="4.80,304,1344211200"; d="scan'208";a="14167985"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	24 Aug 2012 12:14:54 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 24 Aug 2012 13:14:54 +0100
Message-ID: <1345810492.19419.7.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: David Vrabel <david.vrabel@citrix.com>
Date: Fri, 24 Aug 2012 13:14:52 +0100
In-Reply-To: <50376C57.2050008@citrix.com>
References: <1345742026-10569-1-git-send-email-david.vrabel@citrix.com>
	<E11EC85A-BEB5-4DAF-B899-8DEE1E52D382@gridcentric.ca>
	<50376C57.2050008@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Andres Lagar-Cavilla <andreslc@gridcentric.ca>,
	Olaf Hering <olaf@aepfle.de>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"Keir \(Xen.org\)" <keir@xen.org>, Konrad
	Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [RFC PATCH 0/3] xen/privcmd: support for paged-out
 frames
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-08-24 at 12:58 +0100, David Vrabel wrote:
> Is PRIVCMD_MMAPBATCH_V2 actually required?  It doesn't seem to do much
> more than the V1 interface.  Perhaps the fixes in patches #1 and #2 are
> sufficient to make libxc work?

The V1 interface has a hideous misfeature wrt error reporting, IIRC.

It doesn't report the actual per-frame error code, just an "an error"
signal made by setting the top nibble (already a nasty interface!).

Aha: http://xenbits.xen.org/hg/linux-2.6.18-xen.hg/rev/6d6c3dd995c0

It was knackered on 64 bit (actually set bits 28-31...)...
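From memory, the encoding was roughly this (a sketch only; the macro names
follow the ones later used by drivers/xen/privcmd.c, not the 2.6.18 code):

```c
typedef unsigned long xen_pfn_t;

#define PRIVCMD_MMAPBATCH_MFN_ERROR   0xf0000000U  /* generic failure */
#define PRIVCMD_MMAPBATCH_PAGED_ERROR 0x80000000U  /* frame paged out */

/* V1 reports failure by ORing a flag into the caller's mfn array entry
 * rather than returning a per-frame errno. */
static xen_pfn_t v1_mark_error(xen_pfn_t mfn, int paged)
{
    return mfn | (paged ? PRIVCMD_MMAPBATCH_PAGED_ERROR
                        : PRIVCMD_MMAPBATCH_MFN_ERROR);
}

/* ...and the flag lives in bits 28-31, so on 64-bit a perfectly valid
 * frame number above 2^28 is indistinguishable from an error. */
static int v1_entry_failed(xen_pfn_t entry)
{
    return (entry & PRIVCMD_MMAPBATCH_MFN_ERROR) != 0;
}
```

V2's separate int err[] array sidesteps all of this.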

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 24 12:50:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 12:50:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4tLF-0006fS-7u; Fri, 24 Aug 2012 12:50:25 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ptesarik@suse.cz>) id 1T4tLE-0006fL-LM
	for xen-devel@lists.xen.org; Fri, 24 Aug 2012 12:50:24 +0000
Received: from [85.158.138.51:14475] by server-4.bemta-3.messagelabs.com id
	E2/4F-04276-D8877305; Fri, 24 Aug 2012 12:50:21 +0000
X-Env-Sender: ptesarik@suse.cz
X-Msg-Ref: server-12.tower-174.messagelabs.com!1345812620!19895911!1
X-Originating-IP: [195.135.220.15]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27600 invoked from network); 24 Aug 2012 12:50:20 -0000
Received: from cantor2.suse.de (HELO mx2.suse.de) (195.135.220.15)
	by server-12.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 24 Aug 2012 12:50:20 -0000
Received: from relay2.suse.de (unknown [195.135.220.254])
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(No client certificate requested)
	by mx2.suse.de (Postfix) with ESMTP id 89A6CA329B
	for <xen-devel@lists.xen.org>; Fri, 24 Aug 2012 14:50:19 +0200 (CEST)
From: Petr Tesarik <ptesarik@suse.cz>
Organization: SUSE LINUX, s.r.o.
To: xen-devel@lists.xen.org
Date: Fri, 24 Aug 2012 14:50:13 +0200
User-Agent: KMail/1.13.6 (Linux/2.6.37.6-0.20-default; KDE/4.6.0; i686; ; )
MIME-Version: 1.0
Message-Id: <201208241450.13698.ptesarik@suse.cz>
Subject: [Xen-devel] Dumpfile filtering and PGC_xxx flags
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello folks,

I've been trying to add support for xen-4.0+ to makedumpfile (a utility that 
can filter out some content from a kernel dump file). In particular, I'm now 
struggling with implementing option "-X", which should filter out all domU 
pages, but keep hypervisor internal data and dom0 pages. Unused pages (free 
pages, broken pages, offlined pages, etc.) should also be filtered out, 
because they are usually not needed for dump analysis.

I'm relying on the contents of frame_table to do the job, but I'm lost in the 
hierarchy of PGC_xxx flags. My first naive idea was that I could keep pages 
that have:

1. PGC_allocated and
2. the right owner (dom_xen, dom_io, or dom0).

But that doesn't include Xen internal structures. In fact, the page_info
structs for pages corresponding to Xen code and static data seem to be
completely uninitialized (all zero).
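For reference, my naive predicate as code (struct page_info and
PGC_allocated are mocked here; only the shape of the test, count_info flag
plus page owner, is taken from Xen's frame_table):

```c
#define PGC_allocated (1UL << 31)   /* bit position is a stand-in */

struct domain { int id; };

struct page_info {
    unsigned long count_info;
    struct domain *owner;
};

/* Keep a page in the filtered dump iff it is allocated and owned by
 * dom0 or one of the special domains (dom_xen, dom_io). */
static int keep_page(const struct page_info *pg,
                     const struct domain *dom0,
                     const struct domain *dom_xen,
                     const struct domain *dom_io)
{
    if (!(pg->count_info & PGC_allocated))
        return 0;
    return pg->owner == dom0 || pg->owner == dom_xen ||
           pg->owner == dom_io;
}
```

An all-zero page_info, as I see for Xen's own code and data, fails both
tests, which is exactly the problem.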

Thanks for any advice,
Petr Tesarik
SUSE Linux

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello folks,

I've been trying to add support for xen-4.0+ to makedumpfile (a utility that 
can filter out some content from a kernel dump file). In particular, I'm now 
struggling with implementing option "-X", which should filter out all domU 
pages, but keep hypervisor internal data and dom0 pages. Unused pages (free 
pages, broken pages, offlined pages, etc.) should also be filtered out, 
because they are usually not needed for dump analysis.

I'm relying on the contents of frame_table to do the job, but I'm lost in the 
hierarchy of PGC_xxx flags. My first naive idea was that I could keep pages 
that have:

1. PGC_allocated and
2. the right owner (dom_xen, dom_io, or dom0).

But that doesn't include Xen internal structures. In fact, the page_info 
structs for pages corresponding to Xen code and static data seem to be 
completely uninitialized (all zero).
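
Concretely, the per-page check I had in mind is something like this (a sketch
against a simplified view of struct page_info as read out of the dump; the
PGC_allocated bit position and the field layout vary between Xen versions, and
keep_page() is just an illustrative name, not makedumpfile code):

```c
/* Sketch of the intended -X filter predicate. Assumes the dump tool has
 * already read frame_table and resolved the dom_xen/dom_io/dom0 pointers
 * from the hypervisor symbol table. */
#include <stdbool.h>

#define PGC_allocated (1UL << 31)   /* bit position varies between Xen versions */

struct page_info_view {             /* minimal view of Xen's struct page_info */
    unsigned long count_info;
    unsigned long owner;            /* the _domain field, as read from the dump */
};

static unsigned long addr_dom_xen, addr_dom_io, addr_dom0; /* from the symtab */

static bool keep_page(const struct page_info_view *pg)
{
    if (!(pg->count_info & PGC_allocated))
        return false;               /* free/broken/offlined: filter out */
    return pg->owner == addr_dom_xen ||
           pg->owner == addr_dom_io ||
           pg->owner == addr_dom0;  /* anything domU-owned is filtered out */
}
```

The open problem is exactly the one above: Xen's own code and static data pages
never pass this check because their page_info entries are all zero.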

Thanks for any advice,
Petr Tesarik
SUSE Linux

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 24 12:55:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 12:55:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4tQH-0006pV-14; Fri, 24 Aug 2012 12:55:37 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@citrix.com>) id 1T4tQG-0006pP-61
	for xen-devel@lists.xen.org; Fri, 24 Aug 2012 12:55:36 +0000
Received: from [85.158.139.83:63725] by server-5.bemta-5.messagelabs.com id
	A0/56-31019-7C977305; Fri, 24 Aug 2012 12:55:35 +0000
X-Env-Sender: julien.grall@citrix.com
X-Msg-Ref: server-13.tower-182.messagelabs.com!1345812933!27086743!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzM5MDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32271 invoked from network); 24 Aug 2012 12:55:34 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Aug 2012 12:55:34 -0000
X-IronPort-AV: E=Sophos;i="4.80,304,1344225600"; d="scan'208";a="206138511"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	24 Aug 2012 08:55:32 -0400
Received: from [10.80.248.240] (10.80.248.240) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0; Fri, 24 Aug 2012
	08:55:32 -0400
Message-ID: <503779F5.80508@citrix.com>
Date: Fri, 24 Aug 2012 13:56:21 +0100
From: Julien Grall <julien.grall@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US;
	rv:1.9.1.16) Gecko/20120726 Icedove/3.0.11
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <cover.1345552068.git.julien.grall@citrix.com>	
	<51efcbff92f713286b5839884769ef34ab0c39f7.1345552068.git.julien.grall@citrix.com>
	<1345728617.12501.92.camel@zakaz.uk.xensource.com>
In-Reply-To: <1345728617.12501.92.camel@zakaz.uk.xensource.com>
Cc: "christian.limpach@gmail.com" <christian.limpach@gmail.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [XEN][RFC PATCH V2 12/17] xl: Add interface to
 handle qemu disaggregation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/23/2012 02:30 PM, Ian Campbell wrote:
> On Wed, 2012-08-22 at 13:31 +0100, Julien Grall wrote:
>    
>> This patch modifies libxl interface for qemu disaggregation.
>>      
> I'd rather see the interface changes in the same patch as the
> implementation of the new interfaces.
>
>    
>> For the moment, due to some dependencies between devices, we
>> can't let the user choose which QEMU emulates a device.
>>
>> Moreover, this patch adds an "id" field to the nic interface.
>> It will be used in the config file to specify which QEMU handles
>> the network card.
>>      
> Is domid+devid not sufficient to identify which nic?
>    
Can the user specify or find the devid easily?
I added "id" because I would like the user
to be able to identify a network interface
without any problem.

>> A possible disaggregation is:
>>      - UI: Emulate graphic card, USB, keyboard, mouse, default devices
>>      (PIIX4, root bridge, ...)
>>      - IDE: Emulate disk
>>      - Serial: Emulate serial port
>>      - Audio: Emulate audio card
>>      - Net: Emulate one or more network cards; multiple QEMUs can emulate
>>      different cards. The emulated card is specified with its nic ID.
>>
>> Signed-off-by: Julien Grall<julien.grall@citrix.com>
>> ---
>>   tools/libxl/libxl.h         |    3 +++
>>   tools/libxl/libxl_types.idl |   15 +++++++++++++++
>>   2 files changed, 18 insertions(+), 0 deletions(-)
>>
>> diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
>> index c614d6f..71d4808 100644
>> --- a/tools/libxl/libxl.h
>> +++ b/tools/libxl/libxl.h
>> @@ -307,6 +307,7 @@ void libxl_cpuid_dispose(libxl_cpuid_policy_list *cpuid_list);
>>   #define LIBXL_PCI_FUNC_ALL (~0U)
>>
>>   typedef uint32_t libxl_domid;
>> +typedef uint32_t libxl_dmid;
>>
>>   /*
>>    * Formatting Enumerations.
>> @@ -478,12 +479,14 @@ typedef struct {
>>       libxl_domain_build_info b_info;
>>
>>       int num_disks, num_nics, num_pcidevs, num_vfbs, num_vkbs;
>> +    int num_dms;
>>
>>       libxl_device_disk *disks;
>>       libxl_device_nic *nics;
>>       libxl_device_pci *pcidevs;
>>       libxl_device_vfb *vfbs;
>>       libxl_device_vkb *vkbs;
>> +    libxl_dm *dms;
>>
>>       libxl_action_on_shutdown on_poweroff;
>>       libxl_action_on_shutdown on_reboot;
>> diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
>> index daa8c79..36c802a 100644
>> --- a/tools/libxl/libxl_types.idl
>> +++ b/tools/libxl/libxl_types.idl
>> @@ -246,6 +246,20 @@ libxl_domain_sched_params = Struct("domain_sched_params",[
>>       ("extratime",    integer, {'init_val': 'LIBXL_DOMAIN_SCHED_PARAM_EXTRATIME_DEFAULT'}),
>>       ])
>>
>> +libxl_dm_cap = Enumeration("dm_cap", [
>> +    (1, "UI"), # Emulate all UI + default device
>>      
> What does "default device" equate to?
>    
The following devices:
    - i440fx
    - piix3
    - piix4
    - dma
    - xen apic
    - xen platform


>> +    (2, "IDE"), # Emulate IDE
>> +    (4, "SERIAL"), # Emulate Serial
>> +    (8, "AUDIO"), # Emulate audio
>> +    ])
>> +
>> +libxl_dm = Struct("dm", [
>> +    ("name",         string),
>> +    ("path",         string),
>> +    ("capabilities",   uint64),
>>      
> uint64 and not libxl_dm_cap?
>    
Will be fixed in the next patch version.
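
For what it's worth, since the enumeration values above are powers of two, a
device model's capability set combines and tests as a plain bitmask once the
field uses libxl_dm_cap. A sketch (the LIBXL_DM_CAP_* constant names are what
I would expect the IDL to generate, not confirmed):

```c
#include <stdint.h>
#include <stdbool.h>

/* Values mirror the libxl_dm_cap enumeration in the patch; the generated
 * constant names are an assumption. */
enum {
    LIBXL_DM_CAP_UI     = 1,   /* UI + default (core) devices */
    LIBXL_DM_CAP_IDE    = 2,
    LIBXL_DM_CAP_SERIAL = 4,
    LIBXL_DM_CAP_AUDIO  = 8,
};

/* True if this device model emulates the given device class. */
static bool dm_has_cap(uint64_t capabilities, uint64_t cap)
{
    return (capabilities & cap) != 0;
}
```

So a QEMU configured for disk plus serial would carry
LIBXL_DM_CAP_IDE | LIBXL_DM_CAP_SERIAL in its capabilities field.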

-- 
Julien

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 24 12:58:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 12:58:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4tT7-0006wW-Ka; Fri, 24 Aug 2012 12:58:33 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@citrix.com>) id 1T4tT6-0006wO-GL
	for xen-devel@lists.xen.org; Fri, 24 Aug 2012 12:58:32 +0000
Received: from [85.158.138.51:60444] by server-12.bemta-3.messagelabs.com id
	1B/BA-04073-77A77305; Fri, 24 Aug 2012 12:58:31 +0000
X-Env-Sender: julien.grall@citrix.com
X-Msg-Ref: server-3.tower-174.messagelabs.com!1345813109!19747772!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjY0ODk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13836 invoked from network); 24 Aug 2012 12:58:31 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Aug 2012 12:58:31 -0000
X-IronPort-AV: E=Sophos;i="4.80,304,1344225600"; d="scan'208";a="35699999"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	24 Aug 2012 08:58:17 -0400
Received: from [10.80.248.240] (10.80.248.240) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0; Fri, 24 Aug 2012
	08:58:17 -0400
Message-ID: <50377A9C.7090500@citrix.com>
Date: Fri, 24 Aug 2012 13:59:08 +0100
From: Julien Grall <julien.grall@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US;
	rv:1.9.1.16) Gecko/20120726 Icedove/3.0.11
MIME-Version: 1.0
To: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
References: <cover.1345637459.git.julien.grall@citrix.com>
	<10985f0bc427cc258adb11cb97818a4e7ab133c9.1345637459.git.julien.grall@citrix.com>
	<alpine.DEB.2.02.1208231455010.15568@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1208231455010.15568@kaball.uk.xensource.com>
Cc: "christian.limpach@gmail.com" <christian.limpach@gmail.com>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [QEMU][RFC V2 06/10] xen-pci: register PCI device
 in Xen and handle IOREQ_TYPE_PCI_CONFIG
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/23/2012 03:41 PM, Stefano Stabellini wrote:
> On Wed, 22 Aug 2012, Julien Grall wrote:
>    
>> With QEMU disaggregation, QEMU needs to specify which PCI devices it is able to
>> handle. It will use the device's place in the topology (domain, bus, device,
>> function).
>> When Xen traps an access to the config space, it will forge a new
>> ioreq and forward it to the right QEMU.
>>
>> Signed-off-by: Julien Grall<julien.grall@citrix.com>
>> ---
>>   hw/pci.c   |    6 ++++++
>>   hw/xen.h   |    1 +
>>   xen-all.c  |   38 ++++++++++++++++++++++++++++++++++++++
>>   xen-stub.c |    5 +++++
>>   4 files changed, 50 insertions(+), 0 deletions(-)
>>
>> diff --git a/hw/pci.c b/hw/pci.c
>> index 4d95984..0112edf 100644
>> --- a/hw/pci.c
>> +++ b/hw/pci.c
>> @@ -33,6 +33,7 @@
>>   #include "qmp-commands.h"
>>   #include "msi.h"
>>   #include "msix.h"
>> +#include "xen.h"
>>
>>   //#define DEBUG_PCI
>>   #ifdef DEBUG_PCI
>> @@ -781,6 +782,11 @@ static PCIDevice *do_pci_register_device(PCIDevice *pci_dev, PCIBus *bus,
>>       pci_dev->devfn = devfn;
>>       pstrcpy(pci_dev->name, sizeof(pci_dev->name), name);
>>       pci_dev->irq_state = 0;
>> +
>> +    if (xen_enabled()&&  xen_register_pcidev(pci_dev)) {
>> +        return NULL;
>>      
> Is this an error condition? If so we should print an error message,
> right?
>    

Yes, it means that the BDF is already registered by another QEMU.
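
So the failure path means "this BDF is already claimed". A minimal sketch of
the kind of registry that produces that answer (illustrative only; the real
bookkeeping lives on the Xen side, and register_bdf() and the table layout
here are assumptions, not the actual hypervisor code):

```c
#include <stdint.h>

#define MAX_EMULATED_PCIDEVS 64

struct bdf_entry {
    uint16_t domain;   /* PCI segment */
    uint8_t  bus;
    uint8_t  devfn;    /* device/function, as in pci_dev->devfn */
    uint32_t dmid;     /* which device model (QEMU) owns this BDF */
};

static struct bdf_entry registry[MAX_EMULATED_PCIDEVS];
static unsigned nr_registered;

/* Returns 0 on success, -1 if another device model already owns the BDF
 * or the table is full. */
static int register_bdf(uint16_t domain, uint8_t bus, uint8_t devfn,
                        uint32_t dmid)
{
    for (unsigned i = 0; i < nr_registered; i++)
        if (registry[i].domain == domain && registry[i].bus == bus &&
            registry[i].devfn == devfn)
            return -1;                          /* BDF already claimed */
    if (nr_registered == MAX_EMULATED_PCIDEVS)
        return -1;
    registry[nr_registered++] =
        (struct bdf_entry){ domain, bus, devfn, dmid };
    return 0;
}
```

An error message on that path in do_pci_register_device() would then tell the
user which QEMU lost the race for the slot.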


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 24 13:04:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 13:04:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4tYF-0007PG-0r; Fri, 24 Aug 2012 13:03:51 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T4tYD-0007Ov-Iv
	for xen-devel@lists.xen.org; Fri, 24 Aug 2012 13:03:49 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1345813423!8786838!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA2NTk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24557 invoked from network); 24 Aug 2012 13:03:43 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Aug 2012 13:03:43 -0000
X-IronPort-AV: E=Sophos;i="4.80,304,1344211200"; d="scan'208";a="14168898"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	24 Aug 2012 13:03:43 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 24 Aug 2012 14:03:43 +0100
Message-ID: <1345813421.19419.11.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@citrix.com>
Date: Fri, 24 Aug 2012 14:03:41 +0100
In-Reply-To: <503779F5.80508@citrix.com>
References: <cover.1345552068.git.julien.grall@citrix.com>
	<51efcbff92f713286b5839884769ef34ab0c39f7.1345552068.git.julien.grall@citrix.com>
	<1345728617.12501.92.camel@zakaz.uk.xensource.com>
	<503779F5.80508@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "christian.limpach@gmail.com" <christian.limpach@gmail.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [XEN][RFC PATCH V2 12/17] xl: Add interface to
 handle qemu disaggregation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-08-24 at 13:56 +0100, Julien Grall wrote:
> On 08/23/2012 02:30 PM, Ian Campbell wrote:
> > On Wed, 2012-08-22 at 13:31 +0100, Julien Grall wrote:
> >    
> >> This patch modifies libxl interface for qemu disaggregation.
> >>      
> > I'd rather see the interface changes in the same patch as the
> > implementation of the new interfaces.
> >
> >    
> >> For the moment, due to some dependencies between devices, we
> >> can't let the user choose which QEMU emulates a device.
> >>
> >> Moreover, this patch adds an "id" field to the nic interface.
> >> It will be used in the config file to specify which QEMU handles
> >> the network card.
> >>      
> > Is domid+devid not sufficient to identify which nic?
> >    
> Can the user specify or find the devid easily?
> I added "id" because I would like the user
> to be able to identify a network interface
> without any problem.

At the libxl level the libxl_device_nic struct has a devid in it.

That's not to say that xl can't add a layer of naming and indirection on
top.
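
For example, xl could map a user-chosen nic name onto the libxl devid with
something as simple as this (the nic_view struct is illustrative; only the
devid field mirrors libxl_device_nic):

```c
#include <string.h>

/* Minimal view of a nic for name resolution at the xl layer; libxl itself
 * only knows the devid. */
struct nic_view {
    int devid;
    const char *name;   /* user-facing name from the config file */
};

/* Resolve a config-file nic name to its libxl devid, or -1 if unknown. */
static int devid_by_name(const struct nic_view *nics, int n, const char *name)
{
    for (int i = 0; i < n; i++)
        if (strcmp(nics[i].name, name) == 0)
            return nics[i].devid;
    return -1;
}
```

That keeps the naming purely in xl's config handling while libxl continues to
identify nics by domid+devid.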

> >> @@ -246,6 +246,20 @@ libxl_domain_sched_params = Struct("domain_sched_params",[
> >>       ("extratime",    integer, {'init_val': 'LIBXL_DOMAIN_SCHED_PARAM_EXTRATIME_DEFAULT'}),
> >>       ])
> >>
> >> +libxl_dm_cap = Enumeration("dm_cap", [
> >> +    (1, "UI"), # Emulate all UI + default device
> >>      
> > What does "default device" equate to?
> >    
> The following devices:
>     - i440fx
>     - piix3
>     - piix4
>     - dma
>     - xen apic
>     - xen platform

So this is more like "CORE" than "UI"?

Is there a reason why UI (which I guess means the VGA, spice and VFB
devices?) is required to be in the same emulator as these?

> >> +    (2, "IDE"), # Emulate IDE
> >> +    (4, "SERIAL"), # Emulate Serial
> >> +    (8, "AUDIO"), # Emulate audio
> >> +    ])
> >> +


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-08-24 at 13:56 +0100, Julien Grall wrote:
> On 08/23/2012 02:30 PM, Ian Campbell wrote:
> > On Wed, 2012-08-22 at 13:31 +0100, Julien Grall wrote:
> >    
>> This patch modifies the libxl interface for qemu disaggregation.
> >>      
> > I'd rather see the interfaces changes in the same patch as the
> > implementation of the new interfaces.
> >
> >    
>> For the moment, due to some dependencies between devices, we
>> can't let the user choose which QEMU emulates a device.
>>
>> Moreover, this patch adds an "id" field to the nic interface.
>> It will be used in the config file to specify which QEMU
>> handles the network card.
> >>      
> > Is domid+devid not sufficient to identify which nic?
> >    
> Can the user specify or find the devid easily?
> I added "id" because I would like the user to
> be able to identify a network interface without
> any problem.

At the libxl level the libxl_device_nic struct has a devid in it.

That's not to say that xl can't add a layer of naming and indirection on
top.

> >> @@ -246,6 +246,20 @@ libxl_domain_sched_params = Struct("domain_sched_params",[
> >>       ("extratime",    integer, {'init_val': 'LIBXL_DOMAIN_SCHED_PARAM_EXTRATIME_DEFAULT'}),
> >>       ])
> >>
> >> +libxl_dm_cap = Enumeration("dm_cap", [
> >> +    (1, "UI"), # Emulate all UI + default device
> >>      
> > What does "default device" equate to?
> >    
> The following devices:
>     - i440fx
>     - piix3
>     - piix4
>     - dma
>     - xen apic
>     - xen platform

So this is more like "CORE" than "UI"?

Is there a reason why UI (which I guess means the VGA, spice and VFB
devices?) is required to be in the same emulator as these?

> >> +    (2, "IDE"), # Emulate IDE
> >> +    (4, "SERIAL"), # Emulate Serial
> >> +    (8, "AUDIO"), # Emulate audio
> >> +    ])
> >> +



From xen-devel-bounces@lists.xen.org Fri Aug 24 13:11:42 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 13:11:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4tfb-0007ek-UT; Fri, 24 Aug 2012 13:11:27 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@citrix.com>) id 1T4tfa-0007ee-1L
	for xen-devel@lists.xen.org; Fri, 24 Aug 2012 13:11:26 +0000
Received: from [85.158.143.35:45624] by server-1.bemta-4.messagelabs.com id
	A1/01-12504-D7D77305; Fri, 24 Aug 2012 13:11:25 +0000
X-Env-Sender: julien.grall@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1345813883!13553971!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjY0ODk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21382 invoked from network); 24 Aug 2012 13:11:24 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Aug 2012 13:11:24 -0000
X-IronPort-AV: E=Sophos;i="4.80,304,1344225600"; d="scan'208";a="35703202"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	24 Aug 2012 09:11:22 -0400
Received: from [10.80.248.240] (10.80.248.240) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0; Fri, 24 Aug 2012
	09:11:22 -0400
Message-ID: <50377DAD.6000405@citrix.com>
Date: Fri, 24 Aug 2012 14:12:13 +0100
From: Julien Grall <julien.grall@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US;
	rv:1.9.1.16) Gecko/20120726 Icedove/3.0.11
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <cover.1345552068.git.julien.grall@citrix.com>	
	<557fe87e4a6c0defdc6549e23e8e5e7b2ebb7a9f.1345552068.git.julien.grall@citrix.com>
	<1345728948.12501.98.camel@zakaz.uk.xensource.com>
In-Reply-To: <1345728948.12501.98.camel@zakaz.uk.xensource.com>
Cc: "christian.limpach@gmail.com" <christian.limpach@gmail.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [XEN][RFC PATCH V2 14/17] xl-parsing: Parse new
 device_models option
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/23/2012 02:35 PM, Ian Campbell wrote:
> On Wed, 2012-08-22 at 13:32 +0100, Julien Grall wrote:
>    
>> Add a new option, "device_models". The user can specify the capabilities of
>> each QEMU (ui, vifs, ...). This option only works with upstream QEMU (qemu-xen).
>>
>> For instance:
>> device_models= [ 'name=all,vifs=nic1', 'name=qvga,ui', 'name=qide,ide' ]
>>      
> iirc you can give multiple vifs -- what does that syntax look like?
>
>    
vifs=nic1;nic2
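As a sketch, one entry of this option could be decomposed like this (an illustration of the syntax only, not the actual xl parser; the helper name is made up):

```python
def parse_dm_spec(spec):
    """Decompose one device_models entry, e.g. 'name=all,vifs=nic1;nic2'.

    key=value fields become dict entries (semicolon-separated values
    become lists); bare words such as 'ui' or 'ide' become boolean
    capabilities.
    """
    result = {}
    for field in spec.split(","):
        if "=" in field:
            key, value = field.split("=", 1)
            result[key] = value.split(";") if ";" in value else value
        else:
            result[field] = True
    return result
```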

> I didn't ask before -- what does naming the dm give you? Is it just used
> for ui things like logging or can you cross reference this in some way?
>
>    
It's used for logging and in the qemu log filename. It's not mandatory.
>> Signed-off-by: Julien Grall<julien.grall@citrix.com>
>> ---
>>   tools/libxl/Makefile     |    2 +-
>>   tools/libxl/libxlu_dm.c  |   96 ++++++++++++++++++++++++++++++++++++++++++++++
>>   tools/libxl/libxlutil.h  |    5 ++
>>   tools/libxl/xl_cmdimpl.c |   29 +++++++++++++-
>>   4 files changed, 130 insertions(+), 2 deletions(-)
>>   create mode 100644 tools/libxl/libxlu_dm.c
>>
>> diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
>> index 47fb110..2b58721 100644
>> --- a/tools/libxl/Makefile
>> +++ b/tools/libxl/Makefile
>> @@ -79,7 +79,7 @@ AUTOINCS= libxlu_cfg_y.h libxlu_cfg_l.h _libxl_list.h _paths.h \
>>   AUTOSRCS= libxlu_cfg_y.c libxlu_cfg_l.c
>>   AUTOSRCS += _libxl_save_msgs_callout.c _libxl_save_msgs_helper.c
>>   LIBXLU_OBJS = libxlu_cfg_y.o libxlu_cfg_l.o libxlu_cfg.o \
>> -	libxlu_disk_l.o libxlu_disk.o libxlu_vif.o libxlu_pci.o
>> +	libxlu_disk_l.o libxlu_disk.o libxlu_vif.o libxlu_pci.o libxlu_dm.o
>>   $(LIBXLU_OBJS): CFLAGS += $(CFLAGS_libxenctrl) # For xentoollog.h
>>
>>   CLIENTS = xl testidl libxl-save-helper
>> diff --git a/tools/libxl/libxlu_dm.c b/tools/libxl/libxlu_dm.c
>> new file mode 100644
>> index 0000000..9f0a347
>> --- /dev/null
>> +++ b/tools/libxl/libxlu_dm.c
>> @@ -0,0 +1,96 @@
>> +#include "libxl_osdeps.h" /* must come before any other headers */
>> +#include<stdlib.h>
>> +#include "libxlu_internal.h"
>> +#include "libxlu_cfg_i.h"
>> +
>> +static void split_string_into_string_list(const char *str,
>> +                                          const char *delim,
>> +                                          libxl_string_list *psl)
>>      
> Is this a cut-n-paste of the one in xl_cmdimpl.c or did it change?
>
> Probably better to add this as a common utility function somewhere.
>    
It's nearly the same, except that it skips blanks at the beginning
of a value.
For instance, if we have 'foo;   bar', the function will return
['foo', 'bar'].
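The described behaviour can be modelled in a few lines of Python (a sketch of the semantics, not the C implementation):

```python
def split_string_into_string_list(s, delim=";"):
    """Split s on delim, skipping blanks at the beginning of each value,
    as described above: 'foo;   bar' -> ['foo', 'bar']."""
    return [part.lstrip() for part in s.split(delim)]
```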

-- 
Julien


From xen-devel-bounces@lists.xen.org Fri Aug 24 13:23:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 13:23:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4tqb-000862-VL; Fri, 24 Aug 2012 13:22:49 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@citrix.com>) id 1T4tqa-00085u-MV
	for xen-devel@lists.xen.org; Fri, 24 Aug 2012 13:22:48 +0000
Received: from [85.158.143.35:9173] by server-3.bemta-4.messagelabs.com id
	59/EE-08232-82087305; Fri, 24 Aug 2012 13:22:48 +0000
X-Env-Sender: julien.grall@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1345814564!13555981!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjY0ODk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14420 invoked from network); 24 Aug 2012 13:22:46 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Aug 2012 13:22:46 -0000
X-IronPort-AV: E=Sophos;i="4.80,304,1344225600"; d="scan'208";a="35705051"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	24 Aug 2012 09:22:41 -0400
Received: from [10.80.248.240] (10.80.248.240) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0; Fri, 24 Aug 2012
	09:22:40 -0400
Message-ID: <50378054.9080506@citrix.com>
Date: Fri, 24 Aug 2012 14:23:32 +0100
From: Julien Grall <julien.grall@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US;
	rv:1.9.1.16) Gecko/20120726 Icedove/3.0.11
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <cover.1345552068.git.julien.grall@citrix.com>	
	<51efcbff92f713286b5839884769ef34ab0c39f7.1345552068.git.julien.grall@citrix.com>	
	<1345728617.12501.92.camel@zakaz.uk.xensource.com>	
	<503779F5.80508@citrix.com>
	<1345813421.19419.11.camel@zakaz.uk.xensource.com>
In-Reply-To: <1345813421.19419.11.camel@zakaz.uk.xensource.com>
Cc: "christian.limpach@gmail.com" <christian.limpach@gmail.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [XEN][RFC PATCH V2 12/17] xl: Add interface to
 handle qemu disaggregation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/24/2012 02:03 PM, Ian Campbell wrote:
>
>>>> @@ -246,6 +246,20 @@ libxl_domain_sched_params = Struct("domain_sched_params",[
>>>>        ("extratime",    integer, {'init_val': 'LIBXL_DOMAIN_SCHED_PARAM_EXTRATIME_DEFAULT'}),
>>>>        ])
>>>>
>>>> +libxl_dm_cap = Enumeration("dm_cap", [
>>>> +    (1, "UI"), # Emulate all UI + default device
>>>>
>>>>          
>>> What does "default device" equate to?
>>>
>>>        
>> The following devices:
>>      - i440fx
>>      - piix3
>>      - piix4
>>      - dma
>>      - xen apic
>>      - xen platform
>>      
> So this is more like "CORE" than "UI"?
>
> Is there a reason why UI (which I guess means the VGA, spice and VFB
> devices?) is required to be in the same emulator as these?
>
>    

VGA, keyboard and mouse (which can be plugged in via USB) need
to be in the same emulator. Otherwise we can't use VNC or
something like that.

I made this choice, after discussion with Stefano, because
these devices depend on each other. For instance, the keyboard
controller emulates the A20 gate.



From xen-devel-bounces@lists.xen.org Fri Aug 24 13:51:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 13:51:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4uHe-0000M1-0l; Fri, 24 Aug 2012 13:50:46 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@citrix.com>) id 1T4uHc-0000Lv-Lx
	for xen-devel@lists.xen.org; Fri, 24 Aug 2012 13:50:44 +0000
Received: from [85.158.143.35:7081] by server-3.bemta-4.messagelabs.com id
	16/D7-08232-4B687305; Fri, 24 Aug 2012 13:50:44 +0000
X-Env-Sender: julien.grall@citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1345816240!11496766!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjY0ODk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28463 invoked from network); 24 Aug 2012 13:50:41 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Aug 2012 13:50:41 -0000
X-IronPort-AV: E=Sophos;i="4.80,304,1344225600"; d="scan'208";a="35708720"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	24 Aug 2012 09:50:19 -0400
Received: from [10.80.248.240] (10.80.248.240) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0; Fri, 24 Aug 2012
	09:50:19 -0400
Message-ID: <503786CF.40006@citrix.com>
Date: Fri, 24 Aug 2012 14:51:11 +0100
From: Julien Grall <julien.grall@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US;
	rv:1.9.1.16) Gecko/20120726 Icedove/3.0.11
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <cover.1345552068.git.julien.grall@citrix.com>	
	<9522ee398a1fd3cdce48cfe883b307336ae6674f.1345552068.git.julien.grall@citrix.com>
	<1345730172.12501.113.camel@zakaz.uk.xensource.com>
In-Reply-To: <1345730172.12501.113.camel@zakaz.uk.xensource.com>
Cc: "christian.limpach@gmail.com" <christian.limpach@gmail.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [XEN][RFC PATCH V2 15/17] xl: support spawn/destroy
 on multiple device model
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/23/2012 02:56 PM, Ian Campbell wrote:
> On Wed, 2012-08-22 at 13:32 +0100, Julien Grall wrote:
>    
>> @@ -991,12 +1057,11 @@ static void domcreate_launch_dm(libxl__egc *egc, libxl__multidev *multidev,
>>           libxl__device_console_dispose(&console);
>>
>>           if (need_qemu) {
>> -            dcs->dmss.dm.guest_domid = domid;
>> -            libxl__spawn_local_dm(egc,&dcs->dmss.dm);
>> +            assert(dcs->dmss);
>> +            domcreate_spawn_devmodel(egc, dcs, dcs->current_dmid);
>>               return;
>>           } else {
>> -            assert(!dcs->dmss.dm.guest_domid);
>> -            domcreate_devmodel_started(egc,&dcs->dmss.dm, 0);
>> +            assert(!dcs->dmss);
>>      
> Doesn't this stop progress in this case meaning we'll never get to the
> end of the async op?
>
>    
Indeed, I will fix that in the next patch version.

>>               return;
>>           }
>>       }
>>      
> [..]
>    
>> @@ -1044,7 +1044,8 @@ int libxl__wait_for_device_model(libxl__gc *gc,
>>                                    void *check_callback_userdata)
>>   {
>>       char *path;
>> -    path = libxl__sprintf(gc, "/local/domain/0/device-model/%d/state", domid);
>> +    path = libxl__sprintf(gc, "/local/domain/0/dms/%u/%u/state",
>> +                          domid, dmid);
>>      
> Isn't this control path shared with qemu? I'm not sure we can just
> change it like that? We need to at least retain compatibility with
> pre-disag qemus.
>
>    
Indeed, as we have multiple QEMUs for the same domain, we need
one control path per QEMU.

Pre-disag QEMUs cannot work with my changes inside Xen:
by default, Xen will not forward ioreqs if there is no ioreq server.
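In other words, the state node is now keyed by both domid and dmid. A sketch of the two layouts (the paths follow the patch; the helper is hypothetical):

```python
def dm_state_path(domid, dmid=None):
    """Xenstore state node for a device model: the old single-dm layout
    when dmid is None, the per-dm layout from this patch otherwise."""
    if dmid is None:
        # pre-disaggregation: one device model per domain
        return "/local/domain/0/device-model/%d/state" % domid
    # disaggregated: one control path per QEMU (dmid)
    return "/local/domain/0/dms/%d/%d/state" % (domid, dmid)
```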
>>   const char *libxl__domain_device_model(libxl__gc *gc,
>> -                                       const libxl_domain_build_info *info)
>> +                                       uint32_t dmid,
>> +                                       const libxl_domain_build_info *b_info)
>>   {
>>       libxl_ctx *ctx = libxl__gc_owner(gc);
>>       const char *dm;
>> +    libxl_domain_config *guest_config = CONTAINER_OF(b_info, *guest_config,
>> +                                                     b_info);
>>
>> -    if (libxl_defbool_val(info->device_model_stubdomain))
>> +    if (libxl_defbool_val(guest_config->b_info.device_model_stubdomain))
>>      
> You just extracted guest_config from b_info but you still have the
> b_info point to hand. Why not use it? Likewise a few more times below.
>    
An error; it will be fixed in the next patch version.
>> +     * PCI device number. Before 3, we have IDE, ISA, SouthBridge and
>> +     * XEN PCI. These devices will be emulated in each QEMU, but only
>> +     * one QEMU (the one which emulates the default devices) will register
>> +     * these devices through the Xen PCI hypercall.
>> +     */
>> +    static unsigned int bdf = 3;
>>      
> Do you mean const rather than static?
>
>    
No, static. With QEMU disaggregation, the toolstack allocates
BDFs incrementally. QEMU is unable to know whether a BDF is
already allocated in another QEMU.
For the moment, the bdf variable is used to give a devfn to the
network card and the VGA card.
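The allocation scheme described here could be modelled as follows (an illustrative sketch, not the libxl code):

```python
class DevfnAllocator:
    """Toolstack-side PCI slot allocator: slots 0-2 are taken by the
    fixed core devices (IDE, ISA/SouthBridge, Xen PCI), so device
    numbers are handed out incrementally starting at 3."""
    FIRST_FREE_SLOT = 3

    def __init__(self):
        self._next = self.FIRST_FREE_SLOT

    def alloc(self):
        """Return the next free PCI device number."""
        slot = self._next
        self._next += 1
        return slot
```

Centralising the counter in the toolstack is what keeps two QEMUs, which cannot see each other's allocations, from claiming the same slot.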

> Isn't this baking in some implementation detail from the current qemu
> version? What happens if it changes?
>    

I don't have another way for the moment. I would be happy
if someone has a better solution.

>> +
>>       libxl_ctx *ctx = libxl__gc_owner(gc);
>>       const libxl_domain_create_info *c_info =&guest_config->c_info;
>>       const libxl_domain_build_info *b_info =&guest_config->b_info;
>> +    const libxl_dm *dm_config =&guest_config->dms[dmid];
>>       const libxl_device_disk *disks = guest_config->disks;
>>       const libxl_device_nic *nics = guest_config->nics;
>>       const int num_disks = guest_config->num_disks;
>>       const int num_nics = guest_config->num_nics;
>> -    const libxl_vnc_info *vnc = libxl__dm_vnc(guest_config);
>> +    const libxl_vnc_info *vnc = libxl__dm_vnc(dmid, guest_config);
>>       const libxl_sdl_info *sdl = dm_sdl(guest_config);
>>       const char *keymap = dm_keymap(guest_config);
>>       flexarray_t *dm_args;
>>       int i;
>>       uint64_t ram_size;
>> +    uint32_t cap_ui = dm_config->capabilities&  LIBXL_DM_CAP_UI;
>> +    uint32_t cap_ide = dm_config->capabilities&  LIBXL_DM_CAP_IDE;
>> +    uint32_t cap_serial = dm_config->capabilities&  LIBXL_DM_CAP_SERIAL;
>> +    uint32_t cap_audio = dm_config->capabilities&  LIBXL_DM_CAP_AUDIO;
>>      
> ->capabilities is defined as 64 bits, but you use 32 here, which happens
> to work if you know what the actual values of the enum are but whoever
> adds the 33rd capability will probably get it wrong.
>
>       bool cap_foo = !! (dm_....capabiltieis&  LIBXL_DM_CAP_FOO)
>
> would probably work?
>    
Indeed, will be fix in next patch version.

>>       dm_args = flexarray_make(16, 1);
>>       if (!dm_args)
>> @@ -348,11 +389,12 @@ static char ** libxl__build_device_model_args_new(libxl__gc *gc,
>>                         "-xen-domid",
>>                         libxl__sprintf(gc, "%d", guest_domid), NULL);
>>
>> +    flexarray_append(dm_args, "-nodefaults");
>>      
> Does this not cause a change in behaviour other than what you've
> accounted for here?
>    
  By default QEMU emulates VGA card, and a network card. This options,
disabled it  and avoid to add "-net none".
I added it after a discussion on my first patch series.
https://lists.gnu.org/archive/html/qemu-devel/2012-03/msg04767.html

>> @@ -528,65 +583,69 @@ static char ** libxl__build_device_model_args_new(libxl__gc *gc,
>>           abort();
>>       }
>>
>> -    ram_size = libxl__sizekb_to_mb(b_info->max_memkb - b_info->video_memkb);
>> +    // Allocate ram space of 32Mo per previous device model to store rom
>>      
> What is this about?
>
> (also that Mo looks a bit odd in among all these mb's)
>
>    
It's space for ROM allocation, like vga, rtl8139 roms ...
Each QEMU can load ROM and memory, but the memory
allocator consider that it's alone. It starts to allocate
ROM space from the end of memory RAM.

It's a solution suggest by Stefano, it's avoid modification
in QEMU. As we don't know the number of ROM and their
size per QEMU, we chose a space of 32 Mo to be sure, but in
fine most of time memory is not allocated.

-- 
Julien

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 24 13:51:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 13:51:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4uHe-0000M1-0l; Fri, 24 Aug 2012 13:50:46 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@citrix.com>) id 1T4uHc-0000Lv-Lx
	for xen-devel@lists.xen.org; Fri, 24 Aug 2012 13:50:44 +0000
Received: from [85.158.143.35:7081] by server-3.bemta-4.messagelabs.com id
	16/D7-08232-4B687305; Fri, 24 Aug 2012 13:50:44 +0000
X-Env-Sender: julien.grall@citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1345816240!11496766!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjY0ODk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28463 invoked from network); 24 Aug 2012 13:50:41 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Aug 2012 13:50:41 -0000
X-IronPort-AV: E=Sophos;i="4.80,304,1344225600"; d="scan'208";a="35708720"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	24 Aug 2012 09:50:19 -0400
Received: from [10.80.248.240] (10.80.248.240) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0; Fri, 24 Aug 2012
	09:50:19 -0400
Message-ID: <503786CF.40006@citrix.com>
Date: Fri, 24 Aug 2012 14:51:11 +0100
From: Julien Grall <julien.grall@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US;
	rv:1.9.1.16) Gecko/20120726 Icedove/3.0.11
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <cover.1345552068.git.julien.grall@citrix.com>	
	<9522ee398a1fd3cdce48cfe883b307336ae6674f.1345552068.git.julien.grall@citrix.com>
	<1345730172.12501.113.camel@zakaz.uk.xensource.com>
In-Reply-To: <1345730172.12501.113.camel@zakaz.uk.xensource.com>
Cc: "christian.limpach@gmail.com" <christian.limpach@gmail.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [XEN][RFC PATCH V2 15/17] xl: support spawn/destroy
 on multiple device model
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/23/2012 02:56 PM, Ian Campbell wrote:
> On Wed, 2012-08-22 at 13:32 +0100, Julien Grall wrote:
>    
>> @@ -991,12 +1057,11 @@ static void domcreate_launch_dm(libxl__egc *egc, libxl__multidev *multidev,
>>           libxl__device_console_dispose(&console);
>>
>>           if (need_qemu) {
>> -            dcs->dmss.dm.guest_domid = domid;
>> -            libxl__spawn_local_dm(egc,&dcs->dmss.dm);
>> +            assert(dcs->dmss);
>> +            domcreate_spawn_devmodel(egc, dcs, dcs->current_dmid);
>>               return;
>>           } else {
>> -            assert(!dcs->dmss.dm.guest_domid);
>> -            domcreate_devmodel_started(egc,&dcs->dmss.dm, 0);
>> +            assert(!dcs->dmss);
>>      
> Doesn't this stop progress in this case meaning we'll never get to the
> end of the async op?
>
>    
Indeed, I will fix this in the next patch version.

>>               return;
>>           }
>>       }
>>      
> [..]
>    
>> @@ -1044,7 +1044,8 @@ int libxl__wait_for_device_model(libxl__gc *gc,
>>                                    void *check_callback_userdata)
>>   {
>>       char *path;
>> -    path = libxl__sprintf(gc, "/local/domain/0/device-model/%d/state", domid);
>> +    path = libxl__sprintf(gc, "/local/domain/0/dms/%u/%u/state",
>> +                          domid, dmid);
>>      
> Isn't this control path shared with qemu? I'm not sure we can just
> change it like that? We need to at least retain compatibility with
> pre-disag qemus.
>
>    
Indeed, as we have multiple QEMUs for the same domain, we need
one control path per QEMU.

Pre-disag QEMUs cannot work with my changes inside Xen:
by default, Xen will not forward an ioreq if there is no ioreq server.
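For clarity, the new layout gives each device model its own state node. A minimal sketch of the path construction (plain snprintf in place of libxl's gc-based libxl__sprintf; the helper name is illustrative, not actual libxl code):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Sketch only: build the per-device-model xenstore state path.
 * Legacy (single QEMU): /local/domain/0/device-model/<domid>/state
 * Disaggregated:        /local/domain/0/dms/<domid>/<dmid>/state */
static void dm_state_path(char *buf, size_t len,
                          unsigned int domid, unsigned int dmid)
{
    snprintf(buf, len, "/local/domain/0/dms/%u/%u/state", domid, dmid);
}
```

So QEMU instance 0 of domain 1 would watch /local/domain/0/dms/1/0/state.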
>>   const char *libxl__domain_device_model(libxl__gc *gc,
>> -                                       const libxl_domain_build_info *info)
>> +                                       uint32_t dmid,
>> +                                       const libxl_domain_build_info *b_info)
>>   {
>>       libxl_ctx *ctx = libxl__gc_owner(gc);
>>       const char *dm;
>> +    libxl_domain_config *guest_config = CONTAINER_OF(b_info, *guest_config,
>> +                                                     b_info);
>>
>> -    if (libxl_defbool_val(info->device_model_stubdomain))
>> +    if (libxl_defbool_val(guest_config->b_info.device_model_stubdomain))
>>      
> You just extracted guest_config from b_info but you still have the
> b_info point to hand. Why not use it? Likewise a few more times below.
>    
That is an error; it will be fixed in the next patch version.
>> +     * PCI device number. Before 3, we have IDE, ISA, SouthBridge and
>> +     * XEN PCI. Theses devices will be emulate in each QEMU, but only
>> +     * one QEMU (the one which emulates default device) will register
>> +     * these devices through Xen PCI hypercall.
>> +     */
>> +    static unsigned int bdf = 3;
>>      
> Do you mean const rather than static?
>
>    
No, static. With QEMU disaggregation, the toolstack allocates
BDFs incrementally. A QEMU cannot know whether a BDF is already
allocated in another QEMU.
For the moment, the bdf variable is used to assign a devfn to the
network card and the VGA card.

> Isn't this baking in some implementation detail from the current qemu
> version? What happens if it changes?
>    

I don't have another way for the moment. I would be happy
if someone has a better solution.

>> +
>>       libxl_ctx *ctx = libxl__gc_owner(gc);
>>       const libxl_domain_create_info *c_info =&guest_config->c_info;
>>       const libxl_domain_build_info *b_info =&guest_config->b_info;
>> +    const libxl_dm *dm_config =&guest_config->dms[dmid];
>>       const libxl_device_disk *disks = guest_config->disks;
>>       const libxl_device_nic *nics = guest_config->nics;
>>       const int num_disks = guest_config->num_disks;
>>       const int num_nics = guest_config->num_nics;
>> -    const libxl_vnc_info *vnc = libxl__dm_vnc(guest_config);
>> +    const libxl_vnc_info *vnc = libxl__dm_vnc(dmid, guest_config);
>>       const libxl_sdl_info *sdl = dm_sdl(guest_config);
>>       const char *keymap = dm_keymap(guest_config);
>>       flexarray_t *dm_args;
>>       int i;
>>       uint64_t ram_size;
>> +    uint32_t cap_ui = dm_config->capabilities&  LIBXL_DM_CAP_UI;
>> +    uint32_t cap_ide = dm_config->capabilities&  LIBXL_DM_CAP_IDE;
>> +    uint32_t cap_serial = dm_config->capabilities&  LIBXL_DM_CAP_SERIAL;
>> +    uint32_t cap_audio = dm_config->capabilities&  LIBXL_DM_CAP_AUDIO;
>>      
> ->capabilities is defined as 64 bits, but you use 32 here, which happens
> to work if you know what the actual values of the enum are but whoever
> adds the 33rd capability will probably get it wrong.
>
>       bool cap_foo = !! (dm_....capabiltieis&  LIBXL_DM_CAP_FOO)
>
> would probably work?
>    
Indeed, this will be fixed in the next patch version.
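For illustration, Ian's suggested pattern avoids the truncation; LIBXL_DM_CAP_FOO below is a hypothetical 33rd capability bit, not an actual libxl flag:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical 33rd capability: its bit does not fit in 32 bits,
 * so a uint32_t intermediate would silently drop it. */
#define LIBXL_DM_CAP_FOO (UINT64_C(1) << 33)

/* Extract a capability bit as bool, never truncating 64-bit flags. */
static bool has_cap(uint64_t capabilities, uint64_t cap)
{
    return !!(capabilities & cap);
}
```

With the original uint32_t pattern, `(uint32_t)(capabilities & LIBXL_DM_CAP_FOO)` is always 0, which is exactly the bug Ian is pointing at.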

>>       dm_args = flexarray_make(16, 1);
>>       if (!dm_args)
>> @@ -348,11 +389,12 @@ static char ** libxl__build_device_model_args_new(libxl__gc *gc,
>>                         "-xen-domid",
>>                         libxl__sprintf(gc, "%d", guest_domid), NULL);
>>
>> +    flexarray_append(dm_args, "-nodefaults");
>>      
> Does this not cause a change in behaviour other than what you've
> accounted for here?
>    
By default, QEMU emulates a VGA card and a network card. This option
disables them and avoids having to add "-net none".
I added it after a discussion on my first patch series:
https://lists.gnu.org/archive/html/qemu-devel/2012-03/msg04767.html

>> @@ -528,65 +583,69 @@ static char ** libxl__build_device_model_args_new(libxl__gc *gc,
>>           abort();
>>       }
>>
>> -    ram_size = libxl__sizekb_to_mb(b_info->max_memkb - b_info->video_memkb);
>> +    // Allocate ram space of 32Mo per previous device model to store rom
>>      
> What is this about?
>
> (also that Mo looks a bit odd in among all these mb's)
>
>    
It's space for ROM allocation (VGA, rtl8139 ROMs, and so on).
Each QEMU can load ROMs into memory, but the memory
allocator assumes it is alone: it starts allocating
ROM space from the end of RAM.

It's a solution suggested by Stefano; it avoids modifications
in QEMU. As we don't know the number of ROMs and their
sizes per QEMU, we chose a space of 32 MB to be safe, but in
the end most of this memory is never allocated.
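As I understand the scheme, the reservation could be expressed like this (a sketch under my own reading, not the patch's actual code; the real version operates on b_info->max_memkb and b_info->video_memkb):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch: device model dmid skips 32 MiB per preceding device model,
 * so the per-QEMU ROM allocators, which all start from the end of
 * RAM, do not collide with each other. */
static uint64_t ram_size_mb(uint64_t max_memkb, uint64_t video_memkb,
                            uint32_t dmid)
{
    uint64_t mb = (max_memkb - video_memkb) / 1024;
    return mb - 32 * (uint64_t)dmid;   /* dmid 0 keeps the full size */
}
```

With 1 GiB of guest RAM and no video RAM, the third device model (dmid 2) would see 960 MB, matching Ian's "32Mo, 64Mo, 96Mo" reading below.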

-- 
Julien

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 24 14:09:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 14:09:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4uZq-0000eM-No; Fri, 24 Aug 2012 14:09:34 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T4uZp-0000eH-5p
	for xen-devel@lists.xen.org; Fri, 24 Aug 2012 14:09:33 +0000
Received: from [85.158.143.35:35150] by server-3.bemta-4.messagelabs.com id
	5A/8B-08232-C1B87305; Fri, 24 Aug 2012 14:09:32 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1345817372!5055942!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA2NTk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23222 invoked from network); 24 Aug 2012 14:09:32 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Aug 2012 14:09:32 -0000
X-IronPort-AV: E=Sophos;i="4.80,304,1344211200"; d="scan'208";a="14170460"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	24 Aug 2012 14:09:31 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 24 Aug 2012 15:09:32 +0100
Message-ID: <1345817370.19419.22.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@citrix.com>
Date: Fri, 24 Aug 2012 15:09:30 +0100
In-Reply-To: <503786CF.40006@citrix.com>
References: <cover.1345552068.git.julien.grall@citrix.com>
	<9522ee398a1fd3cdce48cfe883b307336ae6674f.1345552068.git.julien.grall@citrix.com>
	<1345730172.12501.113.camel@zakaz.uk.xensource.com>
	<503786CF.40006@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "christian.limpach@gmail.com" <christian.limpach@gmail.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [XEN][RFC PATCH V2 15/17] xl: support spawn/destroy
 on multiple device model
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-08-24 at 14:51 +0100, Julien Grall wrote:
> >> @@ -1044,7 +1044,8 @@ int libxl__wait_for_device_model(libxl__gc *gc,
> >>                                    void *check_callback_userdata)
> >>   {
> >>       char *path;
> >> -    path = libxl__sprintf(gc, "/local/domain/0/device-model/%d/state", domid);
> >> +    path = libxl__sprintf(gc, "/local/domain/0/dms/%u/%u/state",
> >> +                          domid, dmid);
> >>      
> > Isn't this control path shared with qemu? I'm not sure we can just
> > change it like that? We need to at least retain compatibility with
> > pre-disag qemus.
> >
> >    
> Indeed, as we have multiple QEMUs for the same domain, we need
> one control path per QEMU.
> 
> Pre-disag QEMUs cannot work with my changes inside Xen:
> by default, Xen will not forward an ioreq if there is no ioreq server.

We might need to consider making disagg an opt-in feature, with the
default being to behave as we do today.

> >> +     * PCI device number. Before 3, we have IDE, ISA, SouthBridge and
> >> +     * XEN PCI. Theses devices will be emulate in each QEMU, but only
> >> +     * one QEMU (the one which emulates default device) will register
> >> +     * these devices through Xen PCI hypercall.
> >> +     */
> >> +    static unsigned int bdf = 3;
> >>      
> > Do you mean const rather than static?
> >
> >    
> No, static. With QEMU disaggregation, the toolstack allocates
> BDFs incrementally. A QEMU cannot know whether a BDF is already
> allocated in another QEMU.

This is broken if the toolstack is building multiple domains, since the
bdf will be preserved across each of them.

You need to put this in some sort of data structure specific to this
particular iteration of the builder code. We must surely have something
suitable close to hand in this function. libxl__domain_build_state
perhaps?

A static variable in a library is almost always a mistake.
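A sketch of that suggestion (struct and field names are hypothetical, not actual libxl__domain_build_state members): each domain build carries its own counter, so two builds in the same process no longer share state.

```c
#include <assert.h>

/* Hypothetical per-build state, standing in for a field added to
 * something like libxl__domain_build_state. */
struct build_state {
    unsigned int next_bdf;
};

/* Slots 0-2 hold IDE/ISA/southbridge and the Xen PCI device, so the
 * first freely assignable PCI device number is 3. */
static void build_state_init(struct build_state *s)
{
    s->next_bdf = 3;
}

/* Hand out device numbers incrementally within one domain build. */
static unsigned int alloc_bdf(struct build_state *s)
{
    return s->next_bdf++;
}
```

Two concurrent domain builds each start again at 3, instead of continuing from wherever the library-wide static left off.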

> > Isn't this baking in some implementation detail from the current qemu
> > version? What happens if it changes?
> >    
> 
> I don't have another way for the moment. I would be happy
> if someone has a better solution.

Could we at least make the assignments of the 3 prior BDFs explicit on
the command line too?

> >>       dm_args = flexarray_make(16, 1);
> >>       if (!dm_args)
> >> @@ -348,11 +389,12 @@ static char ** libxl__build_device_model_args_new(libxl__gc *gc,
> >>                         "-xen-domid",
> >>                         libxl__sprintf(gc, "%d", guest_domid), NULL);
> >>
> >> +    flexarray_append(dm_args, "-nodefaults");
> >>      
> > Does this not cause a change in behaviour other than what you've
> > accounted for here?
> >    
>   By default, QEMU emulates a VGA card and a network card. This option
> disables them and avoids having to add "-net none".
> I added it after a discussion on my first patch series:
> https://lists.gnu.org/archive/html/qemu-devel/2012-03/msg04767.html

OK. 

> >> @@ -528,65 +583,69 @@ static char ** libxl__build_device_model_args_new(libxl__gc *gc,
> >>           abort();
> >>       }
> >>
> >> -    ram_size = libxl__sizekb_to_mb(b_info->max_memkb - b_info->video_memkb);
> >> +    // Allocate ram space of 32Mo per previous device model to store rom
> >>      
> > What is this about?
> >
> > (also that Mo looks a bit odd in among all these mb's)
> >
> >    
> It's space for ROM allocation (VGA, rtl8139 ROMs, and so on).
> Each QEMU can load ROMs into memory, but the memory
> allocator assumes it is alone: it starts allocating
> ROM space from the end of RAM.
> 
> It's a solution suggested by Stefano; it avoids modifications
> in QEMU. As we don't know the number of ROMs and their
> sizes per QEMU, we chose a space of 32 MB to be safe, but in
> the end most of this memory is never allocated.

"32Mo per previous device model" is the bit which struck me as odd. That
means the first device model uses 32Mo, the second 64Mo, the third 96Mo
etc?

Aren't we already modifying qemu quite substantially to implement this
functionality anyway? so why are we trying to avoid it in this one
corner? Especially at the cost of doing something which on the face of
it looks quite strange!

Isn't space for the ROMs allocated by SeaBIOS as part of enumerating the
PCI bus anyway? Or is this a different per-ROM allocation?

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 24 14:37:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 14:37:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4uzw-0000uJ-7h; Fri, 24 Aug 2012 14:36:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@citrix.com>) id 1T4uzu-0000uE-6V
	for xen-devel@lists.xen.org; Fri, 24 Aug 2012 14:36:30 +0000
Received: from [85.158.143.99:64099] by server-3.bemta-4.messagelabs.com id
	D0/55-08232-D6197305; Fri, 24 Aug 2012 14:36:29 +0000
X-Env-Sender: julien.grall@citrix.com
X-Msg-Ref: server-5.tower-216.messagelabs.com!1345818987!27456079!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjY0ODk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6919 invoked from network); 24 Aug 2012 14:36:28 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Aug 2012 14:36:28 -0000
X-IronPort-AV: E=Sophos;i="4.80,304,1344225600"; d="scan'208";a="35714961"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	24 Aug 2012 10:36:16 -0400
Received: from [10.80.248.240] (10.80.248.240) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0; Fri, 24 Aug 2012
	10:36:16 -0400
Message-ID: <50379193.3020600@citrix.com>
Date: Fri, 24 Aug 2012 15:37:07 +0100
From: Julien Grall <julien.grall@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US;
	rv:1.9.1.16) Gecko/20120726 Icedove/3.0.11
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>

From xen-devel-bounces@lists.xen.org Fri Aug 24 14:37:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 14:37:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4uzw-0000uJ-7h; Fri, 24 Aug 2012 14:36:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@citrix.com>) id 1T4uzu-0000uE-6V
	for xen-devel@lists.xen.org; Fri, 24 Aug 2012 14:36:30 +0000
Received: from [85.158.143.99:64099] by server-3.bemta-4.messagelabs.com id
	D0/55-08232-D6197305; Fri, 24 Aug 2012 14:36:29 +0000
X-Env-Sender: julien.grall@citrix.com
X-Msg-Ref: server-5.tower-216.messagelabs.com!1345818987!27456079!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjY0ODk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6919 invoked from network); 24 Aug 2012 14:36:28 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Aug 2012 14:36:28 -0000
X-IronPort-AV: E=Sophos;i="4.80,304,1344225600"; d="scan'208";a="35714961"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	24 Aug 2012 10:36:16 -0400
Received: from [10.80.248.240] (10.80.248.240) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.213.0; Fri, 24 Aug 2012
	10:36:16 -0400
Message-ID: <50379193.3020600@citrix.com>
Date: Fri, 24 Aug 2012 15:37:07 +0100
From: Julien Grall <julien.grall@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US;
	rv:1.9.1.16) Gecko/20120726 Icedove/3.0.11
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <cover.1345552068.git.julien.grall@citrix.com>	
	<9522ee398a1fd3cdce48cfe883b307336ae6674f.1345552068.git.julien.grall@citrix.com>	
	<1345730172.12501.113.camel@zakaz.uk.xensource.com>	
	<503786CF.40006@citrix.com>
	<1345817370.19419.22.camel@zakaz.uk.xensource.com>
In-Reply-To: <1345817370.19419.22.camel@zakaz.uk.xensource.com>
Cc: "christian.limpach@gmail.com" <christian.limpach@gmail.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [XEN][RFC PATCH V2 15/17] xl: support spawn/destroy
 on multiple device model
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/24/2012 03:09 PM, Ian Campbell wrote:
> On Fri, 2012-08-24 at 14:51 +0100, Julien Grall wrote:
>    
>>>> @@ -1044,7 +1044,8 @@ int libxl__wait_for_device_model(libxl__gc *gc,
>>>>                                     void *check_callback_userdata)
>>>>    {
>>>>        char *path;
>>>> -    path = libxl__sprintf(gc, "/local/domain/0/device-model/%d/state", domid);
>>>> +    path = libxl__sprintf(gc, "/local/domain/0/dms/%u/%u/state",
>>>> +                          domid, dmid);
>>>>
>>>>          
>>> Isn't this control path shared with qemu? I'm not sure we can just
>>> change it like that? We need to at least retain compatibility with
>>> pre-disag qemus.
>>>
>>>
>>>        
>> Indeed, as we have multiple QEMUs for a same domain, we need
>> to have one control path by QEMU.
>>
>> Pre-disag QEMUs cannot work with my changes inside the Xen.
>> Xen will not forward by default ioreq if there is no ioreq server.
>>      
> We might need to consider making disagg an opt in feature, with the
> default being to have as we do today.
>    
When you say feature, do you mean only for libxl, or also for Xen?
In the case of libxl, if the 'device_models' option is not specified,
we use only one QEMU, so existing configuration files remain
compatible.
In the case of Xen, compatibility is hard. We can still spawn only
one QEMU, but ioreq handling will not send an I/O request if no
device model has registered for it. There is no longer a default QEMU.

>>>> +     * PCI device number. Before 3, we have IDE, ISA, SouthBridge and
>>>> +     * XEN PCI. Theses devices will be emulate in each QEMU, but only
>>>> +     * one QEMU (the one which emulates default device) will register
>>>> +     * these devices through Xen PCI hypercall.
>>>> +     */
>>>> +    static unsigned int bdf = 3;
>>>>
>>>>          
>>> Do you mean const rather than static?
>>>
>>>
>>>        
>> No static. With QEMU disaggregation, the toolstack allocate
>> BDF incrementaly. QEMU is unable to know if a BDF is already
>> allocated in another QEMU.
>>      
> This is broken if the toolstack is building multiple domains, since the
> bdf will be preserved across each of them.
>
> You need to put this in some sort of data structure specific to this
> particular iteration of the builder code. We must surely have something
> suitable close to hand in this function. libxl__domain_build_state
> perhaps?
>
>    
This will be fixed in the next patch version.
> A static variable in a library is almost always a mistake.
>
>    
>>> Isn't this baking in some implementation detail from the current qemu
>>> version? What happens if it changes?
>>>
>>>        
>> I don't have another way for the moment. I would be happy,
>> if someone have a good solution.
>>      
> Could we at least make the assignments of the 3 prior BDFs explicit on
> the command line too?
>    
I don't understand your question. These 3 prior BDFs can't
be modified via the QEMU command line (or I don't know how).
>>>> @@ -528,65 +583,69 @@ static char ** libxl__build_device_model_args_new(libxl__gc *gc,
>>>>            abort();
>>>>        }
>>>>
>>>> -    ram_size = libxl__sizekb_to_mb(b_info->max_memkb - b_info->video_memkb);
>>>> +    // Allocate ram space of 32Mo per previous device model to store rom
>>>>
>>>>          
>>> What is this about?
>>>
>>> (also that Mo looks a bit odd in among all these mb's)
>>>
>>>
>>>        
>> It's space for ROM allocation, like vga, rtl8139 roms ...
>> Each QEMU can load ROM and memory, but the memory
>> allocator consider that it's alone. It starts to allocate
>> ROM space from the end of memory RAM.
>>
>> It's a solution suggest by Stefano, it's avoid modification
>> in QEMU. As we don't know the number of ROM and their
>> size per QEMU, we chose a space of 32 Mo to be sure, but in
>> fine most of time memory is not allocated.
>>      
> "32Mo per previous device model" is the bit which struck me as odd. That
> means the first device model uses 32Mo, the second 64Mo, the third 96Mo
> etc?
>    
That means:
     - the first QEMU can allocate ROM space starting at ram_size + 0
     - the second at ram_size + 32 MB
     - ...

It's a hack to avoid modifying the QEMU memory allocator
(find_ram_offset in QEMU's exec.c).

> Aren't we already modifying qemu quite substantially to implement this
> functionality anyway? so why are we trying to avoid it in this one
> corner? Especially at the cost of doing something which on the face of
> it looks quite strange!
>
>    
It's not possible to do this inside QEMU; otherwise the QEMUs would
need to be spawned one by one, since each QEMU needs to know the
last 'address' used by the previous one.

I made a modification along those lines, but it was abandoned because
it required XenStore.

> Isn't space for the ROMs allocated by SeaBIOS as part of enumerating the
> PCI bus anyway? Or is this a different per-ROM allocation?
>    
It's the ROM allocated via pci_add_option_rom in QEMU.
QEMU stores the ROM in memory, and SeaBIOS then
copies it to the right place.

-- 
Julien

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 24 14:45:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 14:45:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4v8L-00013q-82; Fri, 24 Aug 2012 14:45:13 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T4v8K-00013l-Hu
	for xen-devel@lists.xen.org; Fri, 24 Aug 2012 14:45:12 +0000
Received: from [85.158.138.51:50188] by server-11.bemta-3.messagelabs.com id
	55/A0-23152-77397305; Fri, 24 Aug 2012 14:45:11 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-174.messagelabs.com!1345819509!27839772!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA2NTk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9483 invoked from network); 24 Aug 2012 14:45:09 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-2.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Aug 2012 14:45:09 -0000
X-IronPort-AV: E=Sophos;i="4.80,304,1344211200"; d="scan'208";a="14171200"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	24 Aug 2012 14:45:08 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 24 Aug 2012 15:45:09 +0100
Message-ID: <1345819507.19419.29.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@citrix.com>
Date: Fri, 24 Aug 2012 15:45:07 +0100
In-Reply-To: <50379193.3020600@citrix.com>
References: <cover.1345552068.git.julien.grall@citrix.com>
	<9522ee398a1fd3cdce48cfe883b307336ae6674f.1345552068.git.julien.grall@citrix.com>
	<1345730172.12501.113.camel@zakaz.uk.xensource.com>
	<503786CF.40006@citrix.com>
	<1345817370.19419.22.camel@zakaz.uk.xensource.com>
	<50379193.3020600@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "christian.limpach@gmail.com" <christian.limpach@gmail.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [XEN][RFC PATCH V2 15/17] xl: support spawn/destroy
 on multiple device model
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-08-24 at 15:37 +0100, Julien Grall wrote:
> In case of Xen, it's hard to have a compatibility. We can
> still spawn only one QEMU, but ioreq handling will not
> send an io request if no device models registered it.
> There is no more default QEMU.

This means we've broken existing qemu on a new hypervisor, which now
that we have Xen support in upstream qemu is something we need to think
about and decide if we are happy with that or not.

Perhaps it is sufficient for this to be a compile time thing, i.e.
detect if we are building against a disagg capable hypervisor or not.

Or maybe it has to be a runtime thing with Xen only turning off the
default QEMU when the first io req region is registered, or something
like that.

> >>> Isn't this baking in some implementation detail from the current qemu
> >>> version? What happens if it changes?
> >>>
> >>>        
> >> I don't have another way for the moment. I would be happy,
> >> if someone have a good solution.
> >>      
> > Could we at least make the assignments of the 3 prior BDFs explicit on
> > the command line too?
> >    
> I don't understand your question. Theses 3 priors BDFs can't
> be modify via QEMU command line (or I don't know how).

Could qemu be modified to allow this?

> >>>> @@ -528,65 +583,69 @@ static char ** libxl__build_device_model_args_new(libxl__gc *gc,
> >>>>            abort();
> >>>>        }
> >>>>
> >>>> -    ram_size = libxl__sizekb_to_mb(b_info->max_memkb - b_info->video_memkb);
> >>>> +    // Allocate ram space of 32Mo per previous device model to store rom
> >>>>
> >>>>          
> >>> What is this about?
> >>>
> >>> (also that Mo looks a bit odd in among all these mb's)
> >>>
> >>>
> >>>        
> >> It's space for ROM allocation, like vga, rtl8139 roms ...
> >> Each QEMU can load ROM and memory, but the memory
> >> allocator consider that it's alone. It starts to allocate
> >> ROM space from the end of memory RAM.
> >>
> >> It's a solution suggest by Stefano, it's avoid modification
> >> in QEMU. As we don't know the number of ROM and their
> >> size per QEMU, we chose a space of 32 Mo to be sure, but in
> >> fine most of time memory is not allocated.
> >>      
> > "32Mo per previous device model" is the bit which struck me as odd. That
> > means the first device model uses 32Mo, the second 64Mo, the third 96Mo
> > etc?
> >    
> That means:
>      - first QEMU can allocate ROM after ram_size + 0
>      - second after ram_size + 32 mo
>      - ...
> 
> It's a hack to avoid modification in QEMU memory allocator
> (find_ram_offset exec.c in QEMU).

Why don't we enhance the memory allocator instead of adding hacks?

> > Aren't we already modifying qemu quite substantially to implement this
> > functionality anyway? so why are we trying to avoid it in this one
> > corner? Especially at the cost of doing something which on the face of
> > it looks quite strange!
> >
> >    
> It's not possible to made it in QEMU, otherwise QEMU need to
> be spawn one by one. Indeed, the next QEMU need to know
> what is the last 'address' used by the previous QEMU.

Or each one needs to be told explicitly where to put its ROMs. Encoding
a magic 32Mo*N in the interface is just too hacky.

> I made a modification in this way, but it was abandoned. Indeed,
> it required XenStore.
> 
> > Isn't space for the ROMs allocated by SeaBIOS as part of enumerating the
> > PCI bus anyway? Or is this a different per-ROM allocation?
> >    
> It's the rom allocated via pci_add_option_rom in QEMU.
> QEMU seems to store ROM in memory and then SeaBIOS
> will copy it, in the right place.

So the ROM binary (the content of the ROM_BAR) is stored in "guest"
memory? That seems a bit odd to me, I'd have thought it would be stored
in the host and provided on demand when the ROM BAR was accessed.

Is there any scope for changing this behaviour?

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Fri Aug 24 15:01:26 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 15:01:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4vNe-0001Hg-V4; Fri, 24 Aug 2012 15:01:02 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <hahn@univention.de>) id 1T4vNd-0001Hb-A5
	for xen-devel@lists.xen.org; Fri, 24 Aug 2012 15:01:01 +0000
Received: from [85.158.143.99:55585] by server-2.bemta-4.messagelabs.com id
	99/41-21239-C2797305; Fri, 24 Aug 2012 15:01:00 +0000
X-Env-Sender: hahn@univention.de
X-Msg-Ref: server-6.tower-216.messagelabs.com!1345820458!20758838!1
X-Originating-IP: [82.198.197.8]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25061 invoked from network); 24 Aug 2012 15:00:58 -0000
Received: from mail.univention.de (HELO mail.univention.de) (82.198.197.8)
	by server-6.tower-216.messagelabs.com with SMTP;
	24 Aug 2012 15:00:58 -0000
Received: from localhost (localhost [127.0.0.1])
	by slugis.knut.univention.de (Postfix) with ESMTP id F181F7C3F57;
	Fri, 24 Aug 2012 17:00:58 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
	by slugis.knut.univention.de (Postfix) with ESMTP id DF9E8164B113;
	Fri, 24 Aug 2012 17:00:58 +0200 (CEST)
X-Virus-Scanned: by amavisd-new-2.6.1 (20080629) (Debian) at knut.univention.de
Received: from mail.univention.de ([127.0.0.1])
	by localhost (slugis.knut.univention.de [127.0.0.1]) (amavisd-new,
	port 10024)
	with ESMTP id tuZg4wkvLu4H; Fri, 24 Aug 2012 17:00:58 +0200 (CEST)
Received: from stave.knut.univention.de (mail.univention.de [82.198.197.8])
	by slugis.knut.univention.de (Postfix) with ESMTPSA id 66F6D164B111;
	Fri, 24 Aug 2012 17:00:58 +0200 (CEST)
From: Philipp Hahn <hahn@univention.de>
Organization: Univention.de
To: Xen-devel@lists.xen.org
Date: Fri, 24 Aug 2012 17:00:47 +0200
User-Agent: KMail/1.9.10 (enterprise35 20100903.1171286)
MIME-Version: 1.0
Message-Id: <201208241700.52458.hahn@univention.de>
Cc: Ian Campbell <ijc@hellion.org.uk>, 666135@bugs.debian.org
Subject: [Xen-devel] XenStore tdb vs. reboot
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2680265650150720884=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2680265650150720884==
Content-Type: multipart/signed;
  boundary="nextPart2516547.zj9KOF346I";
  protocol="application/pgp-signature";
  micalg=pgp-sha1
Content-Transfer-Encoding: 7bit

--nextPart2516547.zj9KOF346I
Content-Type: text/plain;
  charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

Hello,

I noticed a strange delay when running "virsh list" on my Xen-4.1.3 test
server. On further investigation I noticed that in XenStore there were
multiple /vm/$UUID entries suffixed with -$VERSION; in particular, I had
16 entries for my dom0. Each reboot of the host creates a new entry.

After disabling Xend I get the following output after a reboot:
> root@xen5:~# xenstore-ls /local/domain/0
> name =3D "Domain-0"

If I then start Xend, I get more data added:
> root@xen5:~# /etc/init.d/xend start
> Starting xen management daemon: xend.
> root@xen5:~# xenstore-ls /local/domain/0
> vm = "/vm/00000000-0000-0000-0000-000000000000-2"
> device = ""
> control = ""
>  platform-feature-multiprocessor-suspend = "1"
> error = ""
> memory = ""
>  target = "3400192"
> guest = ""
> hvmpv = ""
> data = ""
> cpu = ""
>  1 = ""
>   availability = "online"
>  0 = ""
>   availability = "online"
> description = ""
> console = ""
>  limit = "1048576"
>  type = "xenconsoled"
> domid = "0"
> name = "Domain-0"

Notice that I now have vm=$UUID-2.

If you just restart Xend, the problem does not show; you need to actually
reboot the host, so that /local/domain/0/vm does not exist.

I found commit <http://xenbits.xen.org/hg/xen-unstable.hg/rev/265950e3df69>,
which added the suffix handling; I had never noticed it before.

I found the same issue reported at
<http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=666135>.

For comparison I just installed a fresh Debian Wheezy rc1 and noticed that
the XenStore tdb there is located in /var/_run_/xenstore/tdb instead
of /var/_lib_/xenstore/tdb. Since /var/run/ lives on a tmpfs, it is nuked on
every reboot and thus the problem doesn't exist there.

Is the XenStore tdb supposed to survive a reboot, or must it be cleared
between reboots?

Sincerely
Philipp
--
Philipp Hahn           Open Source Software Engineer      hahn@univention.de
Univention GmbH        be open.                       fon: +49 421 22 232- 0
Mary-Somerville-Str.1  D-28359 Bremen                 fax: +49 421 22 232-99
                                                   http://www.univention.de/

--nextPart2516547.zj9KOF346I
Content-Type: application/pgp-signature; name=signature.asc 
Content-Description: This is a digitally signed message part.

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.9 (GNU/Linux)

iEYEABECAAYFAlA3lyAACgkQYPlgoZpUDjmwTACcC9xktiXlz+jgBSoECHzsChsU
zHgAn3X1iIj7AMrrglMEzt+fTLA8/9ur
=hr+S
-----END PGP SIGNATURE-----

--nextPart2516547.zj9KOF346I--


--===============2680265650150720884==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2680265650150720884==--


From xen-devel-bounces@lists.xen.org Fri Aug 24 15:06:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 15:06:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4vST-0001Sv-Mf; Fri, 24 Aug 2012 15:06:01 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T4vSR-0001Sp-S1
	for Xen-devel@lists.xen.org; Fri, 24 Aug 2012 15:05:59 +0000
Received: from [85.158.139.83:15512] by server-12.bemta-5.messagelabs.com id
	F5/85-22359-75897305; Fri, 24 Aug 2012 15:05:59 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-182.messagelabs.com!1345820758!20373477!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA2NTk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13817 invoked from network); 24 Aug 2012 15:05:58 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Aug 2012 15:05:58 -0000
X-IronPort-AV: E=Sophos;i="4.80,304,1344211200"; d="scan'208";a="14171805"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	24 Aug 2012 15:05:58 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 24 Aug 2012 16:05:58 +0100
Message-ID: <1345820756.19419.31.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Philipp Hahn <hahn@univention.de>
Date: Fri, 24 Aug 2012 16:05:56 +0100
In-Reply-To: <201208241700.52458.hahn@univention.de>
References: <201208241700.52458.hahn@univention.de>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "666135@bugs.debian.org" <666135@bugs.debian.org>,
	"Xen-devel@lists.xen.org" <Xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] XenStore tdb vs. reboot
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-08-24 at 16:00 +0100, Philipp Hahn wrote:
> Is the XenStore tdb supposed to survive a reboot or must it be cleared
> between each reboot? 

Stuff in xenstore is not required to survive a reboot, and clearing it on
boot is good practice, since it avoids bugs in the C
xenstored ;-)

The ocaml xenstored doesn't even have an on-disk DB by default.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 24 15:06:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 15:06:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4vSq-0001UX-CC; Fri, 24 Aug 2012 15:06:24 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andreslc@gridcentric.ca>) id 1T4vSp-0001To-9z
	for xen-devel@lists.xensource.com; Fri, 24 Aug 2012 15:06:23 +0000
X-Env-Sender: andreslc@gridcentric.ca
X-Msg-Ref: server-14.tower-27.messagelabs.com!1345820766!1872926!1
X-Originating-IP: [209.85.210.171]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27136 invoked from network); 24 Aug 2012 15:06:08 -0000
Received: from mail-iy0-f171.google.com (HELO mail-iy0-f171.google.com)
	(209.85.210.171)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Aug 2012 15:06:08 -0000
Received: by iabz25 with SMTP id z25so4490037iab.30
	for <xen-devel@lists.xensource.com>;
	Fri, 24 Aug 2012 08:06:06 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=subject:mime-version:content-type:from:in-reply-to:date:cc
	:content-transfer-encoding:message-id:references:to:x-mailer
	:x-gm-message-state;
	bh=0dYqDFh1GwYQ3YF1G0ph7XE1LEK6UdqqGUBw9O1yK+U=;
	b=cWHdCtj9m7w0MyuaKu8AIqKYycIQoL1UABcfS8vAeKwonY1RgwbJPPpV5drPwSmnjz
	LWoaNeIkgkFmLqqw0Vh3dVedWygXjM9KX+gvR9msaYjWU/Y6IEuwhwMC1aloCbpQ7Y4y
	o64zxun2yJVhxrKZzPufnlsFfoFaTlNU1TZagF4o7tSbxLecrNns084eIRo5fLc3AyCr
	n0aWbyMiGEMQIzZqzwcoeQjJspzhWqgdJO4vK/lhHJXFV5TD5mILzUfvOXpIAuGPAoQP
	Ip+l3QX2XmeJxGuKAV79aopaR46qV1U+5cXPrVsu4vCJkkEu+JLRq/Hj5SxbDZcmPm1/
	Xt0A==
Received: by 10.50.159.130 with SMTP id xc2mr2549085igb.15.1345820765616;
	Fri, 24 Aug 2012 08:06:05 -0700 (PDT)
Received: from [192.168.7.210] ([206.223.182.18])
	by mx.google.com with ESMTPS id ut5sm100997igc.13.2012.08.24.08.06.04
	(version=TLSv1/SSLv3 cipher=OTHER);
	Fri, 24 Aug 2012 08:06:04 -0700 (PDT)
Mime-Version: 1.0 (Apple Message framework v1278)
From: Andres Lagar-Cavilla <andreslc@gridcentric.ca>
In-Reply-To: <1345810492.19419.7.camel@zakaz.uk.xensource.com>
Date: Fri, 24 Aug 2012 11:06:08 -0400
Message-Id: <DAA8E93C-563A-48EC-8B60-15A93B1C0A44@gridcentric.ca>
References: <1345742026-10569-1-git-send-email-david.vrabel@citrix.com>
	<E11EC85A-BEB5-4DAF-B899-8DEE1E52D382@gridcentric.ca>
	<50376C57.2050008@citrix.com>
	<1345810492.19419.7.camel@zakaz.uk.xensource.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
X-Mailer: Apple Mail (2.1278)
X-Gm-Message-State: ALoCoQlfCgZxRCtSlPp3s1LWuDbw8hP/k+tWunfaZXuUpxm1GSOlrZu8lc23the+Xj6OdQ0beA2B
Cc: Olaf Hering <olaf@aepfle.de>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"Keir \(Xen.org\)" <keir@xen.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Andres Lagar-Cavilla <andreslc@gridcentric.ca>,
	David Vrabel <david.vrabel@citrix.com>
Subject: Re: [Xen-devel] [RFC PATCH 0/3] xen/privcmd: support for paged-out
	frames
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On Aug 24, 2012, at 8:14 AM, Ian Campbell wrote:

> On Fri, 2012-08-24 at 12:58 +0100, David Vrabel wrote:
>> Is PRIVCMD_MMAPBATCH_V2 actually required?  It doesn't seem to do much
>> more than the V1 interface.  Perhaps the fix in patches #1 and #2
>> sufficient to make libxc work?
> 
> The V1 interface has some hideous misfeature wrt error reporting iirc.
> 
> It doesn't report the actual per-frame error code, just the "an error"
> signal through setting the top nibble (already a nasty interface!), IIRC
> 
> Aha: http://xenbits.xen.org/hg/linux-2.6.18-xen.hg/rev/6d6c3dd995c0
IIUC, David's approach in these patches was:
hypervisor rc -> encode into mfn field -> decode into error rval if available.

The changeset Ian references shows a different approach is needed. Perhaps:
1. Provide a local err array in all cases.
2. Always store the hypervisor rc into the err array.
3. If version 2, copy it into user-space; if version 1, encode it into the user-space mfn.

Andres

> 
> It was knackered on 64 bit (actually set bits 28-31...)...
> 
> Ian.
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 24 15:10:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 15:10:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4vX2-0001jr-32; Fri, 24 Aug 2012 15:10:44 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ben.guthro@gmail.com>) id 1T4vX0-0001jk-GT
	for xen-devel@lists.xen.org; Fri, 24 Aug 2012 15:10:42 +0000
Received: from [85.158.139.83:9483] by server-9.bemta-5.messagelabs.com id
	B8/B5-26123-17997305; Fri, 24 Aug 2012 15:10:41 +0000
X-Env-Sender: ben.guthro@gmail.com
X-Msg-Ref: server-2.tower-182.messagelabs.com!1345821039!27701537!1
X-Originating-IP: [209.85.220.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10280 invoked from network); 24 Aug 2012 15:10:40 -0000
Received: from mail-vc0-f173.google.com (HELO mail-vc0-f173.google.com)
	(209.85.220.173)
	by server-2.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Aug 2012 15:10:40 -0000
Received: by vcbgb23 with SMTP id gb23so2833723vcb.32
	for <xen-devel@lists.xen.org>; Fri, 24 Aug 2012 08:10:39 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=mObP9KyIccCm94UdOtQoWA7BEMADNmsNmX1mK4hns44=;
	b=OFDqVWS5+iJjeLQyxmaL8bZec9a2FZbhh4BygWEk/+QzRoj+XDogYP5q4UO1ig612d
	Y8WYIWlzYLasRKNqdpWJG7yx8pSruEyY1h/Scv1iyXywXXEc0/G66TdHl5MiCG43pgku
	5ZiL7H7q9KSgk2mcR4dXtuELpHRQdcdw3F78342euM5bHQSaiBQ0YaCdc059utv7SHM7
	o/nfFooOzP9pRCRKyJP0qrImbEx86u0fbBy7ejWKHzXaXnkcVJ+mLPtM+vOlIRb78n5J
	eSveb1HxZA39g9q35ZO2MynZN7mgfh3whr4R/+8zS0qR9vKXeSNseQ20Q0wrIpSAtjBF
	ppBA==
MIME-Version: 1.0
Received: by 10.58.151.197 with SMTP id us5mr5460648veb.14.1345821038939; Fri,
	24 Aug 2012 08:10:38 -0700 (PDT)
Received: by 10.58.127.232 with HTTP; Fri, 24 Aug 2012 08:10:38 -0700 (PDT)
In-Reply-To: <CAOvdn6U5Kmsfv9e=Un8qNR_mbM-V2x-v7Ork9S+saj6EjC-sEA@mail.gmail.com>
References: <50367C78.80608@citrix.com>
	<CAOvdn6U_mSNQBbeboipKHuRJzcbrCe1Kj7ZY9=7N6s--AMESmQ@mail.gmail.com>
	<503686A7.5050206@citrix.com>
	<CAOvdn6U5Kmsfv9e=Un8qNR_mbM-V2x-v7Ork9S+saj6EjC-sEA@mail.gmail.com>
Date: Fri, 24 Aug 2012 11:10:38 -0400
X-Google-Sender-Auth: TI2oPAJStPgmTiUvq6Zc_bKYPsY
Message-ID: <CAOvdn6VZVVZUsUASowme-t87s8JW6WkGqHRRh64YYC24k7BQLA@mail.gmail.com>
From: Ben Guthro <ben@guthro.net>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Content-Type: multipart/mixed; boundary=047d7b6d8354f50f0204c80460b6
Cc: Jan Beulich <jbeulich@suse.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	John Baboval <john.baboval@citrix.com>,
	Thomas Goetz <thomas.goetz@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen4.2 S3 regression?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--047d7b6d8354f50f0204c80460b6
Content-Type: text/plain; charset=ISO-8859-1

The attached patch is essentially the change you suggested, plus a
check for NULL in irq_complete_move()

This patch seems to fix some of the issues I was seeing, but not all.
MSIs are now delivered after a handful of suspend/resumes, which is the
issue I set out to solve here.

However, once I leave my debugging setup and run without a serial
console, things start to go bad: I am unable to get the machine to
resume from S3. The fan comes on, but I never see any graphics or HDD
activity. Of course, without a console I have no clues as to what is
wrong.

This is similar to the other S3 problem I have observed on laptops
that never get out of the "pulsing power button" state.

I suspect they are related, but with the behavior changing so
drastically when the serial connection is enabled, it is rather
difficult to narrow down where the problem might be...



On Thu, Aug 23, 2012 at 4:38 PM, Ben Guthro <ben@guthro.net> wrote:
> On Thu, Aug 23, 2012 at 3:38 PM, Andrew Cooper
> <andrew.cooper3@citrix.com> wrote:
>>
>> On 23/08/12 20:06, Ben Guthro wrote:
>>> No such luck.
>>
>> Huh.  It was a shot in the dark, but I was really not expecting this.
>
> Thanks for taking a look.
>
> I tested the equivalent change against the offending changeset, and it
> did, in fact solve the S3 issue back then (but not against the 4.1.3
> tag)
>
> I guess I'll do some more bisecting, with this change in place.
>
> Ben

--047d7b6d8354f50f0204c80460b6
Content-Type: application/octet-stream; name=s3-debug
Content-Disposition: attachment; filename=s3-debug
Content-Transfer-Encoding: base64
X-Attachment-Id: f_h69eivh50

ZGlmZiAtLWdpdCBhL3hlbi9hcmNoL3g4Ni9pcnEuYyBiL3hlbi9hcmNoL3g4Ni9pcnEuYwppbmRl
eCA3OGEwMmUzLi4zOTRkY2E0IDEwMDY0NAotLS0gYS94ZW4vYXJjaC94ODYvaXJxLmMKKysrIGIv
eGVuL2FyY2gveDg2L2lycS5jCkBAIC03MDEsMTEgKzcwMSwxNiBAQCBzdGF0aWMgdm9pZCBzZW5k
X2NsZWFudXBfdmVjdG9yKHN0cnVjdCBpcnFfZGVzYyAqZGVzYykKIHZvaWQgaXJxX2NvbXBsZXRl
X21vdmUoc3RydWN0IGlycV9kZXNjICpkZXNjKQogewogICAgIHVuc2lnbmVkIHZlY3RvciwgbWU7
CisgICAgc3RydWN0IGNwdV91c2VyX3JlZ3MgKnI7CiAKICAgICBpZiAobGlrZWx5KCFkZXNjLT5h
cmNoLm1vdmVfaW5fcHJvZ3Jlc3MpKQogICAgICAgICByZXR1cm47CiAKLSAgICB2ZWN0b3IgPSBn
ZXRfaXJxX3JlZ3MoKS0+ZW50cnlfdmVjdG9yOworICAgIHIgPSBnZXRfaXJxX3JlZ3MoKTsKKyAg
ICBpZiAoIXIpCisJcmV0dXJuOworCisgICAgdmVjdG9yID0gci0+ZW50cnlfdmVjdG9yOwogICAg
IG1lID0gc21wX3Byb2Nlc3Nvcl9pZCgpOwogCiAgICAgaWYgKCB2ZWN0b3IgPT0gZGVzYy0+YXJj
aC52ZWN0b3IgJiYKQEAgLTIxNTEsNyArMjE1Niw3IEBAIHZvaWQgZml4dXBfaXJxcyh2b2lkKQog
ICAgICAgICBzcGluX2xvY2soJmRlc2MtPmxvY2spOwogCiAgICAgICAgIGNwdW1hc2tfY29weSgm
YWZmaW5pdHksIGRlc2MtPmFmZmluaXR5KTsKLSAgICAgICAgaWYgKCAhZGVzYy0+YWN0aW9uIHx8
IGNwdW1hc2tfc3Vic2V0KCZhZmZpbml0eSwgJmNwdV9vbmxpbmVfbWFwKSApCisgICAgICAgIGlm
ICggIWRlc2MtPmFjdGlvbiB8fCBjcHVtYXNrX2VxdWFsKCZhZmZpbml0eSwgJmNwdV9vbmxpbmVf
bWFwKSApCiAgICAgICAgIHsKICAgICAgICAgICAgIHNwaW5fdW5sb2NrKCZkZXNjLT5sb2NrKTsK
ICAgICAgICAgICAgIGNvbnRpbnVlOwo=
--047d7b6d8354f50f0204c80460b6
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--047d7b6d8354f50f0204c80460b6--


From xen-devel-bounces@lists.xen.org Fri Aug 24 15:10:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 15:10:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4vX2-0001jr-32; Fri, 24 Aug 2012 15:10:44 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ben.guthro@gmail.com>) id 1T4vX0-0001jk-GT
	for xen-devel@lists.xen.org; Fri, 24 Aug 2012 15:10:42 +0000
Received: from [85.158.139.83:9483] by server-9.bemta-5.messagelabs.com id
	B8/B5-26123-17997305; Fri, 24 Aug 2012 15:10:41 +0000
X-Env-Sender: ben.guthro@gmail.com
X-Msg-Ref: server-2.tower-182.messagelabs.com!1345821039!27701537!1
X-Originating-IP: [209.85.220.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10280 invoked from network); 24 Aug 2012 15:10:40 -0000
Received: from mail-vc0-f173.google.com (HELO mail-vc0-f173.google.com)
	(209.85.220.173)
	by server-2.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Aug 2012 15:10:40 -0000
Received: by vcbgb23 with SMTP id gb23so2833723vcb.32
	for <xen-devel@lists.xen.org>; Fri, 24 Aug 2012 08:10:39 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=mObP9KyIccCm94UdOtQoWA7BEMADNmsNmX1mK4hns44=;
	b=OFDqVWS5+iJjeLQyxmaL8bZec9a2FZbhh4BygWEk/+QzRoj+XDogYP5q4UO1ig612d
	Y8WYIWlzYLasRKNqdpWJG7yx8pSruEyY1h/Scv1iyXywXXEc0/G66TdHl5MiCG43pgku
	5ZiL7H7q9KSgk2mcR4dXtuELpHRQdcdw3F78342euM5bHQSaiBQ0YaCdc059utv7SHM7
	o/nfFooOzP9pRCRKyJP0qrImbEx86u0fbBy7ejWKHzXaXnkcVJ+mLPtM+vOlIRb78n5J
	eSveb1HxZA39g9q35ZO2MynZN7mgfh3whr4R/+8zS0qR9vKXeSNseQ20Q0wrIpSAtjBF
	ppBA==
MIME-Version: 1.0
Received: by 10.58.151.197 with SMTP id us5mr5460648veb.14.1345821038939; Fri,
	24 Aug 2012 08:10:38 -0700 (PDT)
Received: by 10.58.127.232 with HTTP; Fri, 24 Aug 2012 08:10:38 -0700 (PDT)
In-Reply-To: <CAOvdn6U5Kmsfv9e=Un8qNR_mbM-V2x-v7Ork9S+saj6EjC-sEA@mail.gmail.com>
References: <50367C78.80608@citrix.com>
	<CAOvdn6U_mSNQBbeboipKHuRJzcbrCe1Kj7ZY9=7N6s--AMESmQ@mail.gmail.com>
	<503686A7.5050206@citrix.com>
	<CAOvdn6U5Kmsfv9e=Un8qNR_mbM-V2x-v7Ork9S+saj6EjC-sEA@mail.gmail.com>
Date: Fri, 24 Aug 2012 11:10:38 -0400
X-Google-Sender-Auth: TI2oPAJStPgmTiUvq6Zc_bKYPsY
Message-ID: <CAOvdn6VZVVZUsUASowme-t87s8JW6WkGqHRRh64YYC24k7BQLA@mail.gmail.com>
From: Ben Guthro <ben@guthro.net>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Content-Type: multipart/mixed; boundary=047d7b6d8354f50f0204c80460b6
Cc: Jan Beulich <jbeulich@suse.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	John Baboval <john.baboval@citrix.com>,
	Thomas Goetz <thomas.goetz@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen4.2 S3 regression?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--047d7b6d8354f50f0204c80460b6
Content-Type: text/plain; charset=ISO-8859-1

The attached patch is essentially the change you suggested, plus a
check for NULL in irq_complete_move()
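For convenience, the base64 attachment below ("s3-debug") decodes to the following diff: the NULL check in irq_complete_move(), plus a change from cpumask_subset() to cpumask_equal() in fixup_irqs():

```diff
diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index 78a02e3..394dca4 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -701,11 +701,16 @@ static void send_cleanup_vector(struct irq_desc *desc)
 void irq_complete_move(struct irq_desc *desc)
 {
     unsigned vector, me;
+    struct cpu_user_regs *r;
 
     if (likely(!desc->arch.move_in_progress))
         return;
 
-    vector = get_irq_regs()->entry_vector;
+    r = get_irq_regs();
+    if (!r)
+	return;
+
+    vector = r->entry_vector;
     me = smp_processor_id();
 
     if ( vector == desc->arch.vector &&
@@ -2151,7 +2156,7 @@ void fixup_irqs(void)
         spin_lock(&desc->lock);
 
         cpumask_copy(&affinity, desc->affinity);
-        if ( !desc->action || cpumask_subset(&affinity, &cpu_online_map) )
+        if ( !desc->action || cpumask_equal(&affinity, &cpu_online_map) )
         {
             spin_unlock(&desc->lock);
             continue;
```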

This patch seems to fix some of the issues I was seeing, but not all.
MSIs are now delivered after a handful of suspend/resume cycles, which
is the issue I set out to solve here.

However, once I leave debugging mode and run without a serial console,
things start to go bad. I am unable to get the machine to resume from
S3: the fan comes on, but I never see any graphics or HDD activity. Of
course, without a console, I have no clues as to what is wrong.

This is similar to the other S3 problem I have observed on laptops
that never get out of the "pulsing power button" state.

I suspect they are related, but with the behavior changing so
drastically when the serial connection is enabled, it is rather
difficult to narrow down where the problem might be...



On Thu, Aug 23, 2012 at 4:38 PM, Ben Guthro <ben@guthro.net> wrote:
> On Thu, Aug 23, 2012 at 3:38 PM, Andrew Cooper
> <andrew.cooper3@citrix.com> wrote:
>>
>> On 23/08/12 20:06, Ben Guthro wrote:
>>> No such luck.
>>
>> Huh.  It was a shot in the dark, but I was really not expecting this.
>
> Thanks for taking a look.
>
> I tested the equivalent change against the offending changeset, and it
> did, in fact solve the S3 issue back then (but not against the 4.1.3
> tag)
>
> I guess I'll do some more bisecting, with this change in place.
>
> Ben

--047d7b6d8354f50f0204c80460b6
Content-Type: application/octet-stream; name=s3-debug
Content-Disposition: attachment; filename=s3-debug
Content-Transfer-Encoding: base64
X-Attachment-Id: f_h69eivh50

ZGlmZiAtLWdpdCBhL3hlbi9hcmNoL3g4Ni9pcnEuYyBiL3hlbi9hcmNoL3g4Ni9pcnEuYwppbmRl
eCA3OGEwMmUzLi4zOTRkY2E0IDEwMDY0NAotLS0gYS94ZW4vYXJjaC94ODYvaXJxLmMKKysrIGIv
eGVuL2FyY2gveDg2L2lycS5jCkBAIC03MDEsMTEgKzcwMSwxNiBAQCBzdGF0aWMgdm9pZCBzZW5k
X2NsZWFudXBfdmVjdG9yKHN0cnVjdCBpcnFfZGVzYyAqZGVzYykKIHZvaWQgaXJxX2NvbXBsZXRl
X21vdmUoc3RydWN0IGlycV9kZXNjICpkZXNjKQogewogICAgIHVuc2lnbmVkIHZlY3RvciwgbWU7
CisgICAgc3RydWN0IGNwdV91c2VyX3JlZ3MgKnI7CiAKICAgICBpZiAobGlrZWx5KCFkZXNjLT5h
cmNoLm1vdmVfaW5fcHJvZ3Jlc3MpKQogICAgICAgICByZXR1cm47CiAKLSAgICB2ZWN0b3IgPSBn
ZXRfaXJxX3JlZ3MoKS0+ZW50cnlfdmVjdG9yOworICAgIHIgPSBnZXRfaXJxX3JlZ3MoKTsKKyAg
ICBpZiAoIXIpCisJcmV0dXJuOworCisgICAgdmVjdG9yID0gci0+ZW50cnlfdmVjdG9yOwogICAg
IG1lID0gc21wX3Byb2Nlc3Nvcl9pZCgpOwogCiAgICAgaWYgKCB2ZWN0b3IgPT0gZGVzYy0+YXJj
aC52ZWN0b3IgJiYKQEAgLTIxNTEsNyArMjE1Niw3IEBAIHZvaWQgZml4dXBfaXJxcyh2b2lkKQog
ICAgICAgICBzcGluX2xvY2soJmRlc2MtPmxvY2spOwogCiAgICAgICAgIGNwdW1hc2tfY29weSgm
YWZmaW5pdHksIGRlc2MtPmFmZmluaXR5KTsKLSAgICAgICAgaWYgKCAhZGVzYy0+YWN0aW9uIHx8
IGNwdW1hc2tfc3Vic2V0KCZhZmZpbml0eSwgJmNwdV9vbmxpbmVfbWFwKSApCisgICAgICAgIGlm
ICggIWRlc2MtPmFjdGlvbiB8fCBjcHVtYXNrX2VxdWFsKCZhZmZpbml0eSwgJmNwdV9vbmxpbmVf
bWFwKSApCiAgICAgICAgIHsKICAgICAgICAgICAgIHNwaW5fdW5sb2NrKCZkZXNjLT5sb2NrKTsK
ICAgICAgICAgICAgIGNvbnRpbnVlOwo=
--047d7b6d8354f50f0204c80460b6
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--047d7b6d8354f50f0204c80460b6--


From xen-devel-bounces@lists.xen.org Fri Aug 24 15:36:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 15:36:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4vvU-00021r-DS; Fri, 24 Aug 2012 15:36:00 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T4vvT-00021m-Qg
	for xen-devel@lists.xen.org; Fri, 24 Aug 2012 15:36:00 +0000
Received: from [85.158.143.35:5273] by server-1.bemta-4.messagelabs.com id
	68/74-12504-F5F97305; Fri, 24 Aug 2012 15:35:59 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1345822556!13574879!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17623 invoked from network); 24 Aug 2012 15:35:56 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-16.tower-21.messagelabs.com with SMTP;
	24 Aug 2012 15:35:56 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 24 Aug 2012 16:35:55 +0100
Message-Id: <5037AD6A020000780008A79D@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 24 Aug 2012 16:35:54 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Jonathan Tripathy" <jonnyt@abpni.co.uk>
References: <50356E43.3030208@abpni.co.uk> <50356FB1.2070904@abpni.co.uk>
	<20120823060620.GE19851@reaktio.net>
	<20120823072218.GF19851@reaktio.net> <5035DB70.8090800@abpni.co.uk>
	<5035EC5A020000780008A62A@nat28.tlf.novell.com>
	<9eb3fd179342b6962f057e499b375c4e@abpni.co.uk>
In-Reply-To: <9eb3fd179342b6962f057e499b375c4e@abpni.co.uk>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Can't see more than 3.5GB of RAM / UEFI / no e820
 memory map detected
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 23.08.12 at 10:07, Jonathan Tripathy <jonnyt@abpni.co.uk> wrote:

> On 23.08.2012 08:39, Jan Beulich wrote:
>>>>> Jonathan Tripathy <jonnyt@abpni.co.uk> 08/23/12 9:29 AM >>>
>>>I'm guessing xen.efi (from 4.2) just replaces grub??
>>
>> "Replaces" is the wrong term. It simply makes the use of grub.efi (or 
>> however
>> it's named) unnecessary.
>>
>>>Also, if I were to apply that patch from Server Fault
>>>(http://serverfault.com/questions/342109/xen-only-sees-512mb-of-system-ram-should-be-8gb-uefi-boot),
>>>would it have any bad consequences? I'm very security conscious, as
>>>the DomUs are untrusted...
>>
>> If you wanted to do that, I'd strongly recommend only removing the
>> E801 code (obviously, from your log, you don't get E820 entries
>> reported anyway, so this would be to not harm using hypervisors built
>> from the same source on other systems) or simply swapping the E801
>> and multiboot handling order (which may actually be something to
>> consider even upstream, so you'd be welcome to post such a patch).
>>
>> But in the end, in order to indeed use UEFI as intended, you'll need
>> to switch to using xen.efi and an EFI-enabled Dom0 kernel (which
>> upstream pv-ops for now isn't).
> 
> I'll submit a patch with the map entries in the if block swapped. I'll 
> make the patch, then test it for you guys, then post it. Do I just send 
> it to this list (for the benefit of others and for upstream 
> consideration)?

Yes.

> When you say "use UEFI as intended", is there something wrong with just 
> flipping the if block on its head?

That flipping has nothing to do with UEFI, just with the way grub.efi
works.
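A minimal sketch of the ordering swap discussed above, with purely hypothetical names (this is not Xen's actual boot code): the point is to consult the multiboot-provided memory map first and fall back to the legacy E801 probe only when no multiboot map is available.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative only: a boot-time memory-map source selector that prefers
 * the multiboot-supplied map over the legacy E801 BIOS probe. */
struct mem_range { unsigned long start, len; };

/* Returns the number of entries chosen and points *out at the winner. */
static size_t pick_memory_map(const struct mem_range *mb_map, size_t mb_n,
                              const struct mem_range *e801, size_t e801_n,
                              const struct mem_range **out)
{
    if ( mb_map && mb_n )   /* multiboot map first ... */
    {
        *out = mb_map;
        return mb_n;
    }
    *out = e801;            /* ... E801 only as a fallback */
    return e801_n;
}
```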

Proper UEFI support implies use of EFI's boot and run time services,
which only xen.efi currently does (and which, for those run time
services that get made available for use by Dom0, also requires an
enabled Dom0 kernel).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 24 15:39:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 15:39:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4vyR-00027t-0D; Fri, 24 Aug 2012 15:39:03 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T4vyP-00027m-4d
	for xen-devel@lists.xen.org; Fri, 24 Aug 2012 15:39:01 +0000
Received: from [85.158.138.51:19494] by server-2.bemta-3.messagelabs.com id
	CE/59-09157-410A7305; Fri, 24 Aug 2012 15:39:00 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-13.tower-174.messagelabs.com!1345822737!8978565!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20396 invoked from network); 24 Aug 2012 15:38:57 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-13.tower-174.messagelabs.com with SMTP;
	24 Aug 2012 15:38:57 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 24 Aug 2012 16:38:57 +0100
Message-Id: <5037AE20020000780008A7A8@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 24 Aug 2012 16:38:56 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Julien Grall" <julien.grall@citrix.com>
References: <cover.1345552068.git.julien.grall@citrix.com>
	<c378b04ee29071c1d6d68bd3ef48fedadb493b10.1345552068.git.julien.grall@citrix.com>
	<5035E986020000780008A617@nat28.tlf.novell.com>
	<50360B81.2070402@citrix.com>
In-Reply-To: <50360B81.2070402@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "christian.limpach@gmail.com" <christian.limpach@gmail.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [XEN][RFC PATCH V2 05/17] hvm: Modify hvm_op
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 23.08.12 at 12:52, Julien Grall <julien.grall@citrix.com> wrote:
> On 08/23/2012 08:27 AM, Jan Beulich wrote:
>>> switch ( a.index )
>>> {
>>> -            case HVM_PARAM_IOREQ_PFN:
>>>      
>> Removing sub-ops which a domain can issue for itself (which for this and
>> another one below appears to be the case) is not allowed.
>>    
> 
> I removed these 3 sub-ops because they will not work with
> QEMU disaggregation. Shared pages and event channels
> for IO requests are private to each device model.

Then they need to be made inaccessible for that specific setup, not
removed altogether.
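A sketch of what "made inaccessible for that specific setup" could look like, with hypothetical structure and field names (not the actual hvm_op code): the sub-op stays defined, but access is denied when the device model is disaggregated.

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative only: keep the sub-op, gate access per-setup. */
enum hvm_param { HVM_PARAM_IOREQ_PFN, HVM_PARAM_OTHER };

struct domain { bool qemu_disaggregated; };

/* Return 0 on success, -1 when the sub-op is inaccessible in this setup. */
static int hvm_param_access(const struct domain *d, enum hvm_param idx)
{
    switch ( idx )
    {
    case HVM_PARAM_IOREQ_PFN:
        if ( d->qemu_disaggregated )  /* private to each device model */
            return -1;
        return 0;
    default:
        return 0;
    }
}
```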

>>> +            case HVM_PARAM_IO_PFN_FIRST:
>>>      
>> I don't see where in this patch this and the other new sub-op constants
>> get defined.
>>    
> Both sub-op constants are added in patch 1:
> http://lists.xen.org/archives/html/xen-devel/2012-08/msg01767.html 

Hmm, I can certainly see reasons for breaking up things that way,
but I generally prefer patches to represent functional units.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 24 15:46:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 15:46:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4w56-0002R9-FP; Fri, 24 Aug 2012 15:45:56 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T4w55-0002R2-1V
	for xen-devel@lists.xen.org; Fri, 24 Aug 2012 15:45:55 +0000
Received: from [85.158.138.51:6853] by server-3.bemta-3.messagelabs.com id
	35/4F-13809-2B1A7305; Fri, 24 Aug 2012 15:45:54 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-3.tower-174.messagelabs.com!1345823153!19781206!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=1.4 required=7.0 tests=INFO_TLD
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13535 invoked from network); 24 Aug 2012 15:45:53 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-3.tower-174.messagelabs.com with SMTP;
	24 Aug 2012 15:45:53 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 24 Aug 2012 16:45:53 +0100
Message-Id: <5037AFBF020000780008A7AF@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 24 Aug 2012 16:45:51 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Stefano Stabellini" <stefano.stabellini@eu.citrix.com>
References: <503514280200007800096FF4@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208231441090.15568@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1208231441090.15568@kaball.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH,
 v2] x86/HVM: assorted RTC emulation adjustments
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 23.08.12 at 15:44, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
wrote:
> On Wed, 22 Aug 2012, Jan Beulich wrote:
>> - don't call rtc_timer_update() on REG_A writes when the value didn't
>>   change (doing the call always was reported to cause wall clock time
>>   lagging with the JVM running on Windows)
>> - don't call rtc_timer_update() on REG_B writes at all
>> - only call alarm_timer_update() on REG_B writes when relevant bits
>>   change
>> - only call check_update_timer() on REG_B writes when SET changes
>> - instead properly handle AF and PF when the guest is not also setting
>>   AIE/PIE respectively (for UF this was already the case, only a
>>   comment was slightly inaccurate)
>> - raise the RTC IRQ not only when UIE gets set while UF was already
>>   set, but generalize this to cover AIE and PIE as well
>> - properly mask off bit 7 when retrieving the hour values in
>>   alarm_timer_update(), and properly use RTC_HOURS_ALARM's bit 7 when
>>   converting from 12- to 24-hour value
>> - also handle the two other possible clock bases
>> - use RTC_* names in a couple of places where literal numbers were used
>>   so far
>> 
>> Note that this only improves the situation described in the thread at
>> http://lists.xen.org/archives/html/xen-devel/2012-08/msg00664.html,
>> there are still problems with the emulation when invoked at a high rate
>> as described there.
>> 
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Although this patch solves a real problem and should probably go in at
> some point, I am a bit worried about drifting too much from the original
> RTC emulator (that was taken from QEMU),

Then does that emulator have similar problems?

> because it would be nice to be able to backport features like this one:
> 
> http://marc.info/?l=qemu-devel&m=134392375010304 

I agree this would be nice to have (albeit I'm not sure how much
the original problem is actually present in the Xen code, particularly
with the patch here applied, as I think it may implicitly clean up some
of the unnecessary active timers).

Jan
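The first item in the change list quoted above (skip rtc_timer_update() on REG_A writes when the value is unchanged) can be sketched as follows; the structure and names are illustrative only, not the actual xen/arch/x86/hvm/rtc.c code:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative only: reprogram the periodic timer on a REG_A write
 * solely when the divider/rate value actually changed. */
struct rtc_state { uint8_t reg_a; int timer_updates; };

static void rtc_timer_update(struct rtc_state *s) { s->timer_updates++; }

static void rtc_write_reg_a(struct rtc_state *s, uint8_t val)
{
    if ( val != s->reg_a )  /* skip the update when nothing changed */
    {
        s->reg_a = val;
        rtc_timer_update(s);
    }
}
```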


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> some point, I am a bit worried about drifting too much from the original
> RTC emulator (that was taken from QEMU),

Then does that emulator have similar problems?

> because it would be nice to be able to backport features like this one:
> 
> http://marc.info/?l=qemu-devel&m=134392375010304 

I agree this would be nice to have (albeit I'm not sure how much
the original problem is actually present in the Xen code, particularly
with the patch here applied, as I think it may implicitly clean up some
of the unnecessary active timers).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 24 16:34:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 16:34:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4wpj-0003hj-CV; Fri, 24 Aug 2012 16:34:07 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <msw@amazon.com>) id 1T4wpi-0003he-Er
	for Xen-devel@lists.xen.org; Fri, 24 Aug 2012 16:34:06 +0000
Received: from [85.158.139.83:28835] by server-2.bemta-5.messagelabs.com id
	D4/69-10142-DFCA7305; Fri, 24 Aug 2012 16:34:05 +0000
X-Env-Sender: msw@amazon.com
X-Msg-Ref: server-3.tower-182.messagelabs.com!1345826043!27729482!1
X-Originating-IP: [207.171.184.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA3LjE3MS4xODQuMjUgPT4gMzEyMzA4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21360 invoked from network); 24 Aug 2012 16:34:04 -0000
Received: from smtp-fw-9101.amazon.com (HELO smtp-fw-9101.amazon.com)
	(207.171.184.25)
	by server-3.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Aug 2012 16:34:04 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=amazon.com; i=msw@amazon.com; q=dns/txt; s=rte02;
	t=1345826044; x=1377362044;
	h=date:from:to:cc:subject:message-id:references:
	mime-version:in-reply-to;
	bh=7kwsfElaIL75qH6JtmhCZJgps3jfXdQDIe6drDtG4EI=;
	b=HwVmAkHqGkYu5MIGq4FzVk/+lRgMrOHyZCnr79EHNAFKi8R7SlnKkMpE
	SOFiu9QVJ2LsPhLrP8XG2xLR81TZWQ==;
X-IronPort-AV: E=Sophos;i="4.77,820,1336348800"; d="scan'208";a="1015828552"
Received: from smtp-in-0102.sea3.amazon.com ([10.224.19.46])
	by smtp-border-fw-out-9101.sea19.amazon.com with
	ESMTP/TLS/DHE-RSA-AES256-SHA; 24 Aug 2012 16:34:01 +0000
Received: from ex10-hub-31005.ant.amazon.com (ex10-hub-31005.sea31.amazon.com
	[10.185.176.12])
	by smtp-in-0102.sea3.amazon.com (8.13.8/8.13.8) with ESMTP id
	q7OGY0XC005359
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=FAIL);
	Fri, 24 Aug 2012 16:34:00 GMT
Received: from US-SEA-R8XVZTX (10.224.80.38) by ex10-hub-31005.ant.amazon.com
	(10.185.176.12) with Microsoft SMTP Server id 14.2.247.3;
	Fri, 24 Aug 2012 09:33:55 -0700
Received: by US-SEA-R8XVZTX (sSMTP sendmail emulation); Fri, 24 Aug 2012
	09:33:56 -0700
Date: Fri, 24 Aug 2012 09:33:56 -0700
From: Matt Wilson <msw@amazon.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20120824163356.GB4948@US-SEA-R8XVZTX>
References: <201208241700.52458.hahn@univention.de>
	<1345820756.19419.31.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1345820756.19419.31.camel@zakaz.uk.xensource.com>
User-Agent: Mutt/1.5.20 (2009-12-10)
Cc: "666135@bugs.debian.org" <666135@bugs.debian.org>,
	"Xen-devel@lists.xen.org" <Xen-devel@lists.xen.org>,
	Philipp Hahn <hahn@univention.de>
Subject: Re: [Xen-devel] XenStore tdb vs. reboot
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Aug 24, 2012 at 04:05:56PM +0100, Ian Campbell wrote:
> On Fri, 2012-08-24 at 16:00 +0100, Philipp Hahn wrote:
> > Is the XenStore tdb supposed to survive a reboot or must it be cleared
> > between each reboot? 
> 
> Stuff in xenstore is not required to survive a reboot and clearing it on
> boot is good practice, since it avoids bugs in the C
> xenstored ;-)

Many installations that are using C xenstored mount /var/lib/xenstored
as a tmpfs. It seems that a proposed patch [1] to the Linux init
scripts (pre-xencommons) wasn't applied.
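
The tmpfs arrangement Matt describes might look like the following. This is a sketch; the path comes from the message, while the mount options are illustrative assumptions:

```shell
# Mount /var/lib/xenstored as tmpfs before xenstored starts, so the
# xenstore database is always empty after a reboot:
mount -t tmpfs -o mode=755 tmpfs /var/lib/xenstored

# Or persistently, via an /etc/fstab entry:
#   tmpfs  /var/lib/xenstored  tmpfs  mode=755  0  0
```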

> The ocaml xenstore doesn't even have an on disk DB by default.
> 
> Ian.

Matt

[1] http://lists.xen.org/archives/html/xen-devel/2010-01/msg00667.html

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 24 16:56:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 16:56:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4xAn-0004FT-2c; Fri, 24 Aug 2012 16:55:53 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72) (envelope-from <p.d@gmx.de>)
	id 1T4wyX-00045v-6j
	for xen-devel@lists.xen.org; Fri, 24 Aug 2012 16:43:13 +0000
X-Env-Sender: p.d@gmx.de
X-Msg-Ref: server-9.tower-27.messagelabs.com!1345826583!8566619!1
X-Originating-IP: [213.165.64.22]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiAyMTMuMTY1LjY0LjIyID0+IDE5ODY5Mg==\n,sa_preprocessor: 
	QmFkIElQOiAyMTMuMTY1LjY0LjIyID0+IDE5ODY5Mg==\n, ML_RADAR_SPEW_LINKS_14, 
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 553 invoked from network); 24 Aug 2012 16:43:03 -0000
Received: from mailout-de.gmx.net (HELO mailout-de.gmx.net) (213.165.64.22)
	by server-9.tower-27.messagelabs.com with SMTP;
	24 Aug 2012 16:43:03 -0000
Received: (qmail 1360 invoked by uid 0); 24 Aug 2012 16:43:02 -0000
Received: from 95.116.131.21 by www030.gmx.net with HTTP;
	Fri, 24 Aug 2012 18:43:01 +0200 (CEST)
Content-Type: multipart/mixed; boundary="========GMX46801345826581603727"
Date: Fri, 24 Aug 2012 18:43:01 +0200
From: p.d@gmx.de
Message-ID: <20120824164301.46800@gmx.net>
MIME-Version: 1.0
To: ian.campbell@citrix.com, xen-devel@lists.xen.org
X-Authenticated: #16432423
X-Flags: 0001
X-Mailer: WWW-Mail 6100 (Global Message Exchange)
X-Priority: 3
X-Provags-ID: V01U2FsdGVkX183PM1kuF03QOPFrRJGgy6YVUSO7CaH66762IAmA8
	tNmo3HHH5eizKCt+uFbk/BKA8AnU9x8Wn/6g== 
X-GMX-UID: QspFcCxzeSEqbkbekXQhBVl+IGRvb0AP
X-Mailman-Approved-At: Fri, 24 Aug 2012 16:55:50 +0000
Subject: [Xen-devel] Errors in compilation // xl_cmdimpl.c:2733:11 ...
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--========GMX46801345826581603727
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit

Hi,

Ian, I'm not sure, but I think after your patch:
http://xenbits.xen.org/hg/xen-unstable.hg/rev/4ca40e0559c3

xen-tools (+qemu+seabios) will no longer build :)

Here are last lines of "make -j7":
==============================================================
gcc  -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -Wno-unused-but-set-variable   -D__XEN_TOOLS__ -MMD -MF ._libxl_save_msgs_callout.o.d  -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -fno-optimize-sibling-calls  -Werror -Wno-format-zero-length -Wmissing-declarations -Wno-declaration-after-statement -Wformat-nonliteral -I. -fPIC -pthread -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/libxc -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/include -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/libxc -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/include -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/xenstore -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/include -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/blktap2/control -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/blktap2/include -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/include -include /usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/config.h  -c -o _libxl_save_msgs_callout.o _libxl_save_msgs_callout.c 
xl_cmdimpl.c: In function 'main_list':
xl_cmdimpl.c:2733:11: error: 'hand' may be used uninitialized in this function [-Werror=uninitialized]
xl_cmdimpl.c:2689:14: note: 'hand' was declared here
gcc  -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -Wno-unused-but-set-variable   -D__XEN_TOOLS__ -MMD -MF .libxl_qmp.o.d  -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -fno-optimize-sibling-calls  -Werror -Wno-format-zero-length -Wmissing-declarations -Wno-declaration-after-statement -Wformat-nonliteral -I. -fPIC -pthread -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/libxc -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/include -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/libxc -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/include -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/xenstore -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/include -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/blktap2/control -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/blktap2/include -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/include -include /usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/config.h  -c -o libxl_qmp.o libxl_qmp.c 
gcc  -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -Wno-unused-but-set-variable   -D__XEN_TOOLS__ -MMD -MF .libxl_event.o.d  -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -fno-optimize-sibling-calls  -Werror -Wno-format-zero-length -Wmissing-declarations -Wno-declaration-after-statement -Wformat-nonliteral -I. -fPIC -pthread -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/libxc -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/include -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/libxc -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/include -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/xenstore -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/include -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/blktap2/control -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/blktap2/include -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/include -include /usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/config.h  -c -o libxl_event.o libxl_event.c 
gcc  -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -Wno-unused-but-set-variable   -D__XEN_TOOLS__ -MMD -MF .libxl_fork.o.d  -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -fno-optimize-sibling-calls  -Werror -Wno-format-zero-length -Wmissing-declarations -Wno-declaration-after-statement -Wformat-nonliteral -I. -fPIC -pthread -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/libxc -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/include -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/libxc -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/include -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/xenstore -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/include -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/blktap2/control -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/blktap2/include -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/include -include /usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/config.h  -c -o libxl_fork.o libxl_fork.c 
gcc  -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -Wno-unused-but-set-variable   -D__XEN_TOOLS__ -MMD -MF .osdeps.o.d  -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -fno-optimize-sibling-calls  -Werror -Wno-format-zero-length -Wmissing-declarations -Wno-declaration-after-statement -Wformat-nonliteral -I. -fPIC -pthread -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/libxc -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/include -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/libxc -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/include -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/xenstore -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/include -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/blktap2/control -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/blktap2/include -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/include -include /usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/config.h  -c -o osdeps.o osdeps.c 
cc1: all warnings being treated as errors
make[3]: *** [xl_cmdimpl.o] Error 1
make[3]: *** Waiting for unfinished jobs....
make[3]: Leaving directory `/usr/src/xen_2012_08_24_rev_25779/tools/libxl'
make[2]: *** [subdir-install-libxl] Error 2
make[2]: Leaving directory `/usr/src/xen_2012_08_24_rev_25779/tools'
make[1]: *** [subdirs-install] Error 2
make[1]: Leaving directory `/usr/src/xen_2012_08_24_rev_25779/tools'
make: *** [install-tools] Error 2
==============================================================


#cat .config
QEMU_UPSTREAM_URL = git://github.com/qemu/qemu.git
QEMU_UPSTREAM_REVISION=master

SEABIOS_UPSTREAM_URL=git://git.seabios.org/seabios.git
SEABIOS_UPSTREAM_TAG=master


I'm attaching /xen-tools/config.log too.
If you need anything else, please write me.

--========GMX46801345826581603727
Content-Type: text/x-log; charset="iso-8859-15"; name="config.log"
Content-Transfer-Encoding: 8bit
Content-Disposition: attachment; filename="config.log"

This file contains any messages produced by compilers while
running configure, to aid debugging if configure makes a mistake.

It was created by Xen Hypervisor configure 4.2, which was
generated by GNU Autoconf 2.67.  Invocation command line was

  $ ./configure 

## --------- ##
## Platform. ##
## --------- ##

hostname = moj
uname -m = x86_64
uname -r = 3.2.28
uname -s = Linux
uname -v = #1 SMP Fri Aug 24 00:21:18 CEST 2012

/usr/bin/uname -p = unknown
/bin/uname -X     = unknown

/bin/arch              = unknown
/usr/bin/arch -k       = unknown
/usr/convex/getsysinfo = unknown
/usr/bin/hostinfo      = unknown
/bin/machine           = unknown
/usr/bin/oslevel       = unknown
/bin/universe          = unknown

PATH: /usr/local/sbin
PATH: /usr/local/bin
PATH: /usr/sbin
PATH: /usr/bin
PATH: /sbin
PATH: /bin
PATH: /usr/games


## ----------- ##
## Core tests. ##
## ----------- ##

configure:2200: checking build system type
configure:2214: result: x86_64-unknown-linux-gnu
configure:2234: checking host system type
configure:2247: result: x86_64-unknown-linux-gnu
configure:2762: checking for gcc
configure:2778: found /usr/bin/gcc
configure:2789: result: gcc
configure:3018: checking for C compiler version
configure:3027: gcc --version >&5
gcc (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3
Copyright (C) 2011 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

configure:3038: $? = 0
configure:3027: gcc -v >&5
Using built-in specs.
COLLECT_GCC=gcc
COLLECT_LTO_WRAPPER=/usr/lib/gcc/x86_64-linux-gnu/4.6/lto-wrapper
Target: x86_64-linux-gnu
Configured with: ../src/configure -v --with-pkgversion='Ubuntu/Linaro 4.6.3-1ubuntu5' --with-bugurl=file:///usr/share/doc/gcc-4.6/README.Bugs --enable-languages=c,c++,fortran,objc,obj-c++ --prefix=/usr --program-suffix=-4.6 --enable-shared --enable-linker-build-id --with-system-zlib --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --with-gxx-include-dir=/usr/include/c++/4.6 --libdir=/usr/lib --enable-nls --with-sysroot=/ --enable-clocale=gnu --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-gnu-unique-object --enable-plugin --enable-objc-gc --disable-werror --with-arch-32=i686 --with-tune=generic --enable-checking=release --build=x86_64-linux-gnu --host=x86_64-linux-gnu --target=x86_64-linux-gnu
Thread model: posix
gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) 
configure:3038: $? = 0
configure:3027: gcc -V >&5
gcc: error: unrecognized option '-V'
gcc: fatal error: no input files
compilation terminated.
configure:3038: $? = 4
configure:3027: gcc -qversion >&5
gcc: error: unrecognized option '-qversion'
gcc: fatal error: no input files
compilation terminated.
configure:3038: $? = 4
configure:3058: checking whether the C compiler works
configure:3080: gcc        conftest.c  >&5
configure:3084: $? = 0
configure:3132: result: yes
configure:3135: checking for C compiler default output file name
configure:3137: result: a.out
configure:3143: checking for suffix of executables
configure:3150: gcc -o conftest        conftest.c  >&5
configure:3154: $? = 0
configure:3176: result: 
configure:3198: checking whether we are cross compiling
configure:3206: gcc -o conftest        conftest.c  >&5
configure:3210: $? = 0
configure:3217: ./conftest
configure:3221: $? = 0
configure:3236: result: no
configure:3241: checking for suffix of object files
configure:3263: gcc -c     conftest.c >&5
configure:3267: $? = 0
configure:3288: result: o
configure:3292: checking whether we are using the GNU C compiler
configure:3311: gcc -c     conftest.c >&5
configure:3311: $? = 0
configure:3320: result: yes
configure:3329: checking whether gcc accepts -g
configure:3349: gcc -c -g    conftest.c >&5
configure:3349: $? = 0
configure:3390: result: yes
configure:3407: checking for gcc option to accept ISO C89
configure:3471: gcc  -c -g -O2    conftest.c >&5
configure:3471: $? = 0
configure:3484: result: none needed
configure:3504: checking whether make sets $(MAKE)
configure:3526: result: yes
configure:3549: checking for a BSD-compatible install
configure:3617: result: /usr/bin/install -c
configure:3630: checking for bison
configure:3648: found /usr/bin/bison
configure:3660: result: /usr/bin/bison
configure:3670: checking for flex
configure:3688: found /usr/bin/flex
configure:3700: result: /usr/bin/flex
configure:3710: checking for perl
configure:3728: found /usr/bin/perl
configure:3741: result: /usr/bin/perl
configure:3893: checking for ocamlc
configure:3909: found /usr/bin/ocamlc
configure:3920: result: ocamlc
configure:3945: result: OCaml version is 3.12.1
configure:3954: result: OCaml library path is /usr/lib/ocaml
configure:4004: checking for ocamlopt
configure:4020: found /usr/bin/ocamlopt
configure:4031: result: ocamlopt
configure:4114: checking for ocamlc.opt
configure:4144: result: no
configure:4218: checking for ocamlopt.opt
configure:4248: result: no
configure:4327: checking for ocaml
configure:4343: found /usr/bin/ocaml
configure:4354: result: ocaml
configure:4421: checking for ocamldep
configure:4437: found /usr/bin/ocamldep
configure:4448: result: ocamldep
configure:4515: checking for ocamlmktop
configure:4531: found /usr/bin/ocamlmktop
configure:4542: result: ocamlmktop
configure:4609: checking for ocamlmklib
configure:4625: found /usr/bin/ocamlmklib
configure:4636: result: ocamlmklib
configure:4703: checking for ocamldoc
configure:4719: found /usr/bin/ocamldoc
configure:4730: result: ocamldoc
configure:4797: checking for ocamlbuild
configure:4813: found /usr/bin/ocamlbuild
configure:4824: result: ocamlbuild
configure:4860: checking for bash
configure:4891: result: /bin/bash
configure:4909: checking how to run the C preprocessor
configure:4940: gcc -E    conftest.c
configure:4940: $? = 0
configure:4954: gcc -E    conftest.c
conftest.c:9:28: fatal error: ac_nonexistent.h: No such file or directory
compilation terminated.
configure:4954: $? = 1
configure: failed program was:
| /* confdefs.h */
| #define PACKAGE_NAME "Xen Hypervisor"
| #define PACKAGE_TARNAME "xen-hypervisor"
| #define PACKAGE_VERSION "4.2"
| #define PACKAGE_STRING "Xen Hypervisor 4.2"
| #define PACKAGE_BUGREPORT "xen-devel@lists.xen.org"
| #define PACKAGE_URL ""
| /* end confdefs.h.  */
| #include <ac_nonexistent.h>
configure:4979: result: gcc -E
configure:4999: gcc -E    conftest.c
configure:4999: $? = 0
configure:5013: gcc -E    conftest.c
conftest.c:9:28: fatal error: ac_nonexistent.h: No such file or directory
compilation terminated.
configure:5013: $? = 1
configure: failed program was:
| /* confdefs.h */
| #define PACKAGE_NAME "Xen Hypervisor"
| #define PACKAGE_TARNAME "xen-hypervisor"
| #define PACKAGE_VERSION "4.2"
| #define PACKAGE_STRING "Xen Hypervisor 4.2"
| #define PACKAGE_BUGREPORT "xen-devel@lists.xen.org"
| #define PACKAGE_URL ""
| /* end confdefs.h.  */
| #include <ac_nonexistent.h>
configure:5042: checking for grep that handles long lines and -e
configure:5100: result: /bin/grep
configure:5105: checking for egrep
configure:5167: result: /bin/grep -E
configure:5172: checking for ANSI C header files
configure:5192: gcc -c -g -O2    conftest.c >&5
configure:5192: $? = 0
configure:5265: gcc -o conftest -g -O2       conftest.c  >&5
configure:5265: $? = 0
configure:5265: ./conftest
configure:5265: $? = 0
configure:5276: result: yes
configure:5289: checking for sys/types.h
configure:5289: gcc -c -g -O2    conftest.c >&5
configure:5289: $? = 0
configure:5289: result: yes
configure:5289: checking for sys/stat.h
configure:5289: gcc -c -g -O2    conftest.c >&5
configure:5289: $? = 0
configure:5289: result: yes
configure:5289: checking for stdlib.h
configure:5289: gcc -c -g -O2    conftest.c >&5
configure:5289: $? = 0
configure:5289: result: yes
configure:5289: checking for string.h
configure:5289: gcc -c -g -O2    conftest.c >&5
configure:5289: $? = 0
configure:5289: result: yes
configure:5289: checking for memory.h
configure:5289: gcc -c -g -O2    conftest.c >&5
configure:5289: $? = 0
configure:5289: result: yes
configure:5289: checking for strings.h
configure:5289: gcc -c -g -O2    conftest.c >&5
configure:5289: $? = 0
configure:5289: result: yes
configure:5289: checking for inttypes.h
configure:5289: gcc -c -g -O2    conftest.c >&5
configure:5289: $? = 0
configure:5289: result: yes
configure:5289: checking for stdint.h
configure:5289: gcc -c -g -O2    conftest.c >&5
configure:5289: $? = 0
configure:5289: result: yes
configure:5289: checking for unistd.h
configure:5289: gcc -c -g -O2    conftest.c >&5
configure:5289: $? = 0
configure:5289: result: yes
configure:5315: checking for python
configure:5333: found /usr/bin/python
configure:5346: result: /usr/bin/python
configure:5358: checking for python version >= 2.3 
configure:5368: result: yes
configure:5378: checking for python-config
configure:5396: found /usr/bin/python-config
configure:5409: result: /usr/bin/python-config
configure:5442: checking Python.h usability
configure:5442: gcc -c -g -O2 -g -O2 -I/usr/include/python2.7 -I/usr/include/python2.7 -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector --param=ssp-buffer-size=4 -Wformat -Wformat-security -Werror=format-security conftest.c >&5
In file included from /usr/include/python2.7/Python.h:8:0,
                 from conftest.c:52:
/usr/include/python2.7/pyconfig.h:1161:0: warning: "_POSIX_C_SOURCE" redefined [enabled by default]
/usr/include/features.h:215:0: note: this is the location of the previous definition
configure:5442: $? = 0
configure:5442: result: yes
configure:5442: checking Python.h presence
configure:5442: gcc -E -g -O2 -I/usr/include/python2.7 -I/usr/include/python2.7 -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector --param=ssp-buffer-size=4 -Wformat -Wformat-security -Werror=format-security conftest.c
configure:5442: $? = 0
configure:5442: result: yes
configure:5442: checking for Python.h
configure:5442: result: yes
configure:5451: checking for PyArg_ParseTuple in -lpython2.7
configure:5476: gcc -o conftest -g -O2 -g -O2 -I/usr/include/python2.7 -I/usr/include/python2.7 -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector --param=ssp-buffer-size=4 -Wformat -Wformat-security -Werror=format-security    -L/usr/lib/python2.7/config -lpthread -ldl -lutil -lm -lpython2.7 -Xlinker -export-dynamic -Wl,-O1 -Wl,-Bsymbolic-functions conftest.c -lpython2.7   >&5
conftest.c:26:1: warning: function declaration isn't a prototype [-Wstrict-prototypes]
conftest.c:28:1: warning: function declaration isn't a prototype [-Wstrict-prototypes]
configure:5476: $? = 0
configure:5486: result: yes
configure:5506: checking for xgettext
configure:5524: found /usr/bin/xgettext
configure:5537: result: /usr/bin/xgettext
configure:5553: checking for as86
configure:5571: found /usr/bin/as86
configure:5584: result: /usr/bin/as86
configure:5598: checking for ld86
configure:5616: found /usr/bin/ld86
configure:5629: result: /usr/bin/ld86
configure:5643: checking for bcc
configure:5661: found /usr/bin/bcc
configure:5674: result: /usr/bin/bcc
configure:5688: checking for iasl
configure:5706: found /usr/bin/iasl
configure:5719: result: /usr/bin/iasl
configure:5734: checking uuid/uuid.h usability
configure:5734: gcc -c -g -O2    conftest.c >&5
configure:5734: $? = 0
configure:5734: result: yes
configure:5734: checking uuid/uuid.h presence
configure:5734: gcc -E    conftest.c
configure:5734: $? = 0
configure:5734: result: yes
configure:5734: checking for uuid/uuid.h
configure:5734: result: yes
configure:5737: checking for uuid_clear in -luuid
configure:5762: gcc -o conftest -g -O2       -L/usr/lib/python2.7/config -lpthread -ldl -lutil -lm -lpython2.7 -Xlinker -export-dynamic -Wl,-O1 -Wl,-Bsymbolic-functions conftest.c -luuid  -lpython2.7  >&5
configure:5762: $? = 0
configure:5771: result: yes
configure:5781: checking uuid.h usability
configure:5781: gcc -c -g -O2    conftest.c >&5
conftest.c:53:18: fatal error: uuid.h: No such file or directory
compilation terminated.
configure:5781: $? = 1
configure: failed program was:
| /* confdefs.h */
| #define PACKAGE_NAME "Xen Hypervisor"
| #define PACKAGE_TARNAME "xen-hypervisor"
| #define PACKAGE_VERSION "4.2"
| #define PACKAGE_STRING "Xen Hypervisor 4.2"
| #define PACKAGE_BUGREPORT "xen-devel@lists.xen.org"
| #define PACKAGE_URL ""
| #define STDC_HEADERS 1
| #define HAVE_SYS_TYPES_H 1
| #define HAVE_SYS_STAT_H 1
| #define HAVE_STDLIB_H 1
| #define HAVE_STRING_H 1
| #define HAVE_MEMORY_H 1
| #define HAVE_STRINGS_H 1
| #define HAVE_INTTYPES_H 1
| #define HAVE_STDINT_H 1
| #define HAVE_UNISTD_H 1
| #define HAVE_LIBPYTHON2_7 1
| /* end confdefs.h.  */
| #include <stdio.h>
| #ifdef HAVE_SYS_TYPES_H
| # include <sys/types.h>
| #endif
| #ifdef HAVE_SYS_STAT_H
| # include <sys/stat.h>
| #endif
| #ifdef STDC_HEADERS
| # include <stdlib.h>
| # include <stddef.h>
| #else
| # ifdef HAVE_STDLIB_H
| #  include <stdlib.h>
| # endif
| #endif
| #ifdef HAVE_STRING_H
| # if !defined STDC_HEADERS && defined HAVE_MEMORY_H
| #  include <memory.h>
| # endif
| # include <string.h>
| #endif
| #ifdef HAVE_STRINGS_H
| # include <strings.h>
| #endif
| #ifdef HAVE_INTTYPES_H
| # include <inttypes.h>
| #endif
| #ifdef HAVE_STDINT_H
| # include <stdint.h>
| #endif
| #ifdef HAVE_UNISTD_H
| # include <unistd.h>
| #endif
| #include <uuid.h>
configure:5781: result: no
configure:5781: checking uuid.h presence
configure:5781: gcc -E    conftest.c
conftest.c:20:18: fatal error: uuid.h: No such file or directory
compilation terminated.
configure:5781: $? = 1
configure: failed program was:
| /* confdefs.h */
| #define PACKAGE_NAME "Xen Hypervisor"
| #define PACKAGE_TARNAME "xen-hypervisor"
| #define PACKAGE_VERSION "4.2"
| #define PACKAGE_STRING "Xen Hypervisor 4.2"
| #define PACKAGE_BUGREPORT "xen-devel@lists.xen.org"
| #define PACKAGE_URL ""
| #define STDC_HEADERS 1
| #define HAVE_SYS_TYPES_H 1
| #define HAVE_SYS_STAT_H 1
| #define HAVE_STDLIB_H 1
| #define HAVE_STRING_H 1
| #define HAVE_MEMORY_H 1
| #define HAVE_STRINGS_H 1
| #define HAVE_INTTYPES_H 1
| #define HAVE_STDINT_H 1
| #define HAVE_UNISTD_H 1
| #define HAVE_LIBPYTHON2_7 1
| /* end confdefs.h.  */
| #include <uuid.h>
configure:5781: result: no
configure:5781: checking for uuid.h
configure:5781: result: no
configure:5794: checking curses.h usability
configure:5794: gcc -c -g -O2    conftest.c >&5
configure:5794: $? = 0
configure:5794: result: yes
configure:5794: checking curses.h presence
configure:5794: gcc -E    conftest.c
configure:5794: $? = 0
configure:5794: result: yes
configure:5794: checking for curses.h
configure:5794: result: yes
configure:5797: checking for clear in -lcurses
configure:5822: gcc -o conftest -g -O2       -L/usr/lib/python2.7/config -lpthread -ldl -lutil -lm -lpython2.7 -Xlinker -export-dynamic -Wl,-O1 -Wl,-Bsymbolic-functions conftest.c -lcurses  -lpython2.7  >&5
configure:5822: $? = 0
configure:5831: result: yes
configure:5845: checking ncurses.h usability
configure:5845: gcc -c -g -O2    conftest.c >&5
configure:5845: $? = 0
configure:5845: result: yes
configure:5845: checking ncurses.h presence
configure:5845: gcc -E    conftest.c
configure:5845: $? = 0
configure:5845: result: yes
configure:5845: checking for ncurses.h
configure:5845: result: yes
configure:5848: checking for clear in -lncurses
configure:5873: gcc -o conftest -g -O2       -L/usr/lib/python2.7/config -lpthread -ldl -lutil -lm -lpython2.7 -Xlinker -export-dynamic -Wl,-O1 -Wl,-Bsymbolic-functions conftest.c -lncurses  -lpython2.7  >&5
configure:5873: $? = 0
configure:5882: result: yes
configure:5972: checking for pkg-config
configure:5990: found /usr/bin/pkg-config
configure:6002: result: /usr/bin/pkg-config
configure:6027: checking pkg-config is at least version 0.9.0
configure:6030: result: yes
configure:6040: checking for glib
configure:6047: $PKG_CONFIG --exists --print-errors "glib-2.0 >= 2.12"
configure:6050: $? = 0
configure:6063: $PKG_CONFIG --exists --print-errors "glib-2.0 >= 2.12"
configure:6066: $? = 0
configure:6123: result: yes
configure:6154: checking bzlib.h usability
configure:6154: gcc -c -g -O2    conftest.c >&5
configure:6154: $? = 0
configure:6154: result: yes
configure:6154: checking bzlib.h presence
configure:6154: gcc -E    conftest.c
configure:6154: $? = 0
configure:6154: result: yes
configure:6154: checking for bzlib.h
configure:6154: result: yes
configure:6157: checking for BZ2_bzDecompressInit in -lbz2
configure:6182: gcc -o conftest -g -O2       -L/usr/lib/python2.7/config -lpthread -ldl -lutil -lm -lpython2.7 -Xlinker -export-dynamic -Wl,-O1 -Wl,-Bsymbolic-functions conftest.c -lbz2  -lpython2.7  >&5
configure:6182: $? = 0
configure:6191: result: yes
configure:6201: checking lzma.h usability
configure:6201: gcc -c -g -O2    conftest.c >&5
configure:6201: $? = 0
configure:6201: result: yes
configure:6201: checking lzma.h presence
configure:6201: gcc -E    conftest.c
configure:6201: $? = 0
configure:6201: result: yes
configure:6201: checking for lzma.h
configure:6201: result: yes
configure:6204: checking for lzma_stream_decoder in -llzma
configure:6229: gcc -o conftest -g -O2       -L/usr/lib/python2.7/config -lpthread -ldl -lutil -lm -lpython2.7 -Xlinker -export-dynamic -Wl,-O1 -Wl,-Bsymbolic-functions conftest.c -llzma  -lpython2.7  >&5
configure:6229: $? = 0
configure:6238: result: yes
configure:6248: checking lzo/lzo1x.h usability
configure:6248: gcc -c -g -O2    conftest.c >&5
configure:6248: $? = 0
configure:6248: result: yes
configure:6248: checking lzo/lzo1x.h presence
configure:6248: gcc -E    conftest.c
configure:6248: $? = 0
configure:6248: result: yes
configure:6248: checking for lzo/lzo1x.h
configure:6248: result: yes
configure:6251: checking for lzo1x_decompress in -llzo2
configure:6276: gcc -o conftest -g -O2       -L/usr/lib/python2.7/config -lpthread -ldl -lutil -lm -lpython2.7 -Xlinker -export-dynamic -Wl,-O1 -Wl,-Bsymbolic-functions conftest.c -llzo2  -lpython2.7  >&5
configure:6276: $? = 0
configure:6285: result: yes
configure:6296: checking for io_setup in -laio
configure:6321: gcc -o conftest -g -O2       -L/usr/lib/python2.7/config -lpthread -ldl -lutil -lm -lpython2.7 -Xlinker -export-dynamic -Wl,-O1 -Wl,-Bsymbolic-functions conftest.c -laio  -lpython2.7  >&5
configure:6321: $? = 0
configure:6330: result: yes
configure:6339: checking for MD5 in -lcrypto
configure:6364: gcc -o conftest -g -O2       -L/usr/lib/python2.7/config -lpthread -ldl -lutil -lm -lpython2.7 -Xlinker -export-dynamic -Wl,-O1 -Wl,-Bsymbolic-functions conftest.c -lcrypto  -lpython2.7  >&5
configure:6364: $? = 0
configure:6373: result: yes
configure:6386: checking for ext2fs_open2 in -lext2fs
configure:6411: gcc -o conftest -g -O2       -L/usr/lib/python2.7/config -lpthread -ldl -lutil -lm -lpython2.7 -Xlinker -export-dynamic -Wl,-O1 -Wl,-Bsymbolic-functions conftest.c -lext2fs  -lcrypto -lpython2.7  >&5
configure:6411: $? = 0
configure:6420: result: yes
configure:6429: checking for gcry_md_hash_buffer in -lgcrypt
configure:6454: gcc -o conftest -g -O2       -L/usr/lib/python2.7/config -lpthread -ldl -lutil -lm -lpython2.7 -Xlinker -export-dynamic -Wl,-O1 -Wl,-Bsymbolic-functions conftest.c -lgcrypt  -lcrypto -lpython2.7  >&5
configure:6454: $? = 0
configure:6463: result: yes
configure:6473: checking for pthread flag
configure:6509: gcc -o conftest -g -O2 -pthread       -L/usr/lib/python2.7/config -lpthread -ldl -lutil -lm -lpython2.7 -Xlinker -export-dynamic -Wl,-O1 -Wl,-Bsymbolic-functions -pthread conftest.c -lcrypto -lpython2.7   >&5
configure:6509: $? = 0
configure:6525: result: -pthread
configure:6538: checking libutil.h usability
configure:6538: gcc -c -g -O2    conftest.c >&5
conftest.c:55:21: fatal error: libutil.h: No such file or directory
compilation terminated.
configure:6538: $? = 1
configure: failed program was:
| /* confdefs.h */
| #define PACKAGE_NAME "Xen Hypervisor"
| #define PACKAGE_TARNAME "xen-hypervisor"
| #define PACKAGE_VERSION "4.2"
| #define PACKAGE_STRING "Xen Hypervisor 4.2"
| #define PACKAGE_BUGREPORT "xen-devel@lists.xen.org"
| #define PACKAGE_URL ""
| #define STDC_HEADERS 1
| #define HAVE_SYS_TYPES_H 1
| #define HAVE_SYS_STAT_H 1
| #define HAVE_STDLIB_H 1
| #define HAVE_STRING_H 1
| #define HAVE_MEMORY_H 1
| #define HAVE_STRINGS_H 1
| #define HAVE_INTTYPES_H 1
| #define HAVE_STDINT_H 1
| #define HAVE_UNISTD_H 1
| #define HAVE_LIBPYTHON2_7 1
| #define INCLUDE_CURSES_H <ncurses.h>
| #define HAVE_LIBCRYPTO 1
| /* end confdefs.h.  */
| #include <stdio.h>
| #ifdef HAVE_SYS_TYPES_H
| # include <sys/types.h>
| #endif
| #ifdef HAVE_SYS_STAT_H
| # include <sys/stat.h>
| #endif
| #ifdef STDC_HEADERS
| # include <stdlib.h>
| # include <stddef.h>
| #else
| # ifdef HAVE_STDLIB_H
| #  include <stdlib.h>
| # endif
| #endif
| #ifdef HAVE_STRING_H
| # if !defined STDC_HEADERS && defined HAVE_MEMORY_H
| #  include <memory.h>
| # endif
| # include <string.h>
| #endif
| #ifdef HAVE_STRINGS_H
| # include <strings.h>
| #endif
| #ifdef HAVE_INTTYPES_H
| # include <inttypes.h>
| #endif
| #ifdef HAVE_STDINT_H
| # include <stdint.h>
| #endif
| #ifdef HAVE_UNISTD_H
| # include <unistd.h>
| #endif
| #include <libutil.h>
configure:6538: result: no
configure:6538: checking libutil.h presence
configure:6538: gcc -E    conftest.c
conftest.c:22:21: fatal error: libutil.h: No such file or directory
compilation terminated.
configure:6538: $? = 1
configure: failed program was:
| /* confdefs.h */
| #define PACKAGE_NAME "Xen Hypervisor"
| #define PACKAGE_TARNAME "xen-hypervisor"
| #define PACKAGE_VERSION "4.2"
| #define PACKAGE_STRING "Xen Hypervisor 4.2"
| #define PACKAGE_BUGREPORT "xen-devel@lists.xen.org"
| #define PACKAGE_URL ""
| #define STDC_HEADERS 1
| #define HAVE_SYS_TYPES_H 1
| #define HAVE_SYS_STAT_H 1
| #define HAVE_STDLIB_H 1
| #define HAVE_STRING_H 1
| #define HAVE_MEMORY_H 1
| #define HAVE_STRINGS_H 1
| #define HAVE_INTTYPES_H 1
| #define HAVE_STDINT_H 1
| #define HAVE_UNISTD_H 1
| #define HAVE_LIBPYTHON2_7 1
| #define INCLUDE_CURSES_H <ncurses.h>
| #define HAVE_LIBCRYPTO 1
| /* end confdefs.h.  */
| #include <libutil.h>
configure:6538: result: no
configure:6538: checking for libutil.h
configure:6538: result: no
configure:6548: checking for openpty et al
configure:6577: gcc -o conftest -g -O2       -L/usr/lib/python2.7/config -lpthread -ldl -lutil -lm -lpython2.7 -Xlinker -export-dynamic -Wl,-O1 -Wl,-Bsymbolic-functions conftest.c -lcrypto -lpython2.7  -lutil >&5
configure:6577: $? = 0
configure:6590: result: -lutil
configure:6595: checking for yajl_alloc in -lyajl
configure:6620: gcc -o conftest -g -O2       -L/usr/lib/python2.7/config -lpthread -ldl -lutil -lm -lpython2.7 -Xlinker -export-dynamic -Wl,-O1 -Wl,-Bsymbolic-functions conftest.c -lyajl  -lcrypto -lpython2.7  -lutil >&5
configure:6620: $? = 0
configure:6629: result: yes
configure:6642: checking for deflateCopy in -lz
configure:6667: gcc -o conftest -g -O2       -L/usr/lib/python2.7/config -lpthread -ldl -lutil -lm -lpython2.7 -Xlinker -export-dynamic -Wl,-O1 -Wl,-Bsymbolic-functions conftest.c -lz  -lyajl -lcrypto -lpython2.7  -lutil >&5
configure:6667: $? = 0
configure:6676: result: yes
configure:6689: checking for libiconv_open in -liconv
configure:6714: gcc -o conftest -g -O2       -L/usr/lib/python2.7/config -lpthread -ldl -lutil -lm -lpython2.7 -Xlinker -export-dynamic -Wl,-O1 -Wl,-Bsymbolic-functions conftest.c -liconv  -lz -lyajl -lcrypto -lpython2.7  -lutil >&5
/usr/bin/ld: cannot find -liconv
collect2: ld returned 1 exit status
configure:6714: $? = 1
configure: failed program was:
| /* confdefs.h */
| #define PACKAGE_NAME "Xen Hypervisor"
| #define PACKAGE_TARNAME "xen-hypervisor"
| #define PACKAGE_VERSION "4.2"
| #define PACKAGE_STRING "Xen Hypervisor 4.2"
| #define PACKAGE_BUGREPORT "xen-devel@lists.xen.org"
| #define PACKAGE_URL ""
| #define STDC_HEADERS 1
| #define HAVE_SYS_TYPES_H 1
| #define HAVE_SYS_STAT_H 1
| #define HAVE_STDLIB_H 1
| #define HAVE_STRING_H 1
| #define HAVE_MEMORY_H 1
| #define HAVE_STRINGS_H 1
| #define HAVE_INTTYPES_H 1
| #define HAVE_STDINT_H 1
| #define HAVE_UNISTD_H 1
| #define HAVE_LIBPYTHON2_7 1
| #define INCLUDE_CURSES_H <ncurses.h>
| #define HAVE_LIBCRYPTO 1
| #define HAVE_LIBYAJL 1
| #define HAVE_LIBZ 1
| /* end confdefs.h.  */
| 
| /* Override any GCC internal prototype to avoid an error.
|    Use char because int might match the return type of a GCC
|    builtin and then its argument prototype would still apply.  */
| #ifdef __cplusplus
| extern "C"
| #endif
| char libiconv_open ();
| int
| main ()
| {
| return libiconv_open ();
|   ;
|   return 0;
| }
configure:6723: result: no
configure:6736: checking yajl/yajl_version.h usability
configure:6736: gcc -c -g -O2    conftest.c >&5
configure:6736: $? = 0
configure:6736: result: yes
configure:6736: checking yajl/yajl_version.h presence
configure:6736: gcc -E    conftest.c
configure:6736: $? = 0
configure:6736: result: yes
configure:6736: checking for yajl/yajl_version.h
configure:6736: result: yes
configure:6850: creating ./config.status

## ---------------------- ##
## Running config.status. ##
## ---------------------- ##

This file was extended by Xen Hypervisor config.status 4.2, which was
generated by GNU Autoconf 2.67.  Invocation command line was

  CONFIG_FILES    = 
  CONFIG_HEADERS  = 
  CONFIG_LINKS    = 
  CONFIG_COMMANDS = 
  $ ./config.status 

on moj

config.status:884: creating ../config/Tools.mk
config.status:884: creating config.h

## ---------------- ##
## Cache variables. ##
## ---------------- ##

ac_cv_build=x86_64-unknown-linux-gnu
ac_cv_c_compiler_gnu=yes
ac_cv_env_APPEND_INCLUDES_set=
ac_cv_env_APPEND_INCLUDES_value=
ac_cv_env_APPEND_LIB_set=
ac_cv_env_APPEND_LIB_value=
ac_cv_env_AS86_set=
ac_cv_env_AS86_value=
ac_cv_env_BASH_set=set
ac_cv_env_BASH_value=/bin/bash
ac_cv_env_BCC_set=
ac_cv_env_BCC_value=
ac_cv_env_BISON_set=
ac_cv_env_BISON_value=
ac_cv_env_CC_set=
ac_cv_env_CC_value=
ac_cv_env_CFLAGS_set=
ac_cv_env_CFLAGS_value=
ac_cv_env_CPPFLAGS_set=
ac_cv_env_CPPFLAGS_value=
ac_cv_env_CPP_set=
ac_cv_env_CPP_value=
ac_cv_env_CURL_set=
ac_cv_env_CURL_value=
ac_cv_env_FLEX_set=
ac_cv_env_FLEX_value=
ac_cv_env_IASL_set=
ac_cv_env_IASL_value=
ac_cv_env_LD86_set=
ac_cv_env_LD86_value=
ac_cv_env_LDFLAGS_set=
ac_cv_env_LDFLAGS_value=
ac_cv_env_LIBS_set=
ac_cv_env_LIBS_value=
ac_cv_env_PERL_set=
ac_cv_env_PERL_value=
ac_cv_env_PKG_CONFIG_LIBDIR_set=
ac_cv_env_PKG_CONFIG_LIBDIR_value=
ac_cv_env_PKG_CONFIG_PATH_set=
ac_cv_env_PKG_CONFIG_PATH_value=
ac_cv_env_PKG_CONFIG_set=
ac_cv_env_PKG_CONFIG_value=
ac_cv_env_PREPEND_INCLUDES_set=
ac_cv_env_PREPEND_INCLUDES_value=
ac_cv_env_PREPEND_LIB_set=
ac_cv_env_PREPEND_LIB_value=
ac_cv_env_PYTHON_set=
ac_cv_env_PYTHON_value=
ac_cv_env_XGETTEXT_set=
ac_cv_env_XGETTEXT_value=
ac_cv_env_XML_set=
ac_cv_env_XML_value=
ac_cv_env_build_alias_set=
ac_cv_env_build_alias_value=
ac_cv_env_glib_CFLAGS_set=
ac_cv_env_glib_CFLAGS_value=
ac_cv_env_glib_LIBS_set=
ac_cv_env_glib_LIBS_value=
ac_cv_env_host_alias_set=
ac_cv_env_host_alias_value=
ac_cv_env_target_alias_set=
ac_cv_env_target_alias_value=
ac_cv_header_Python_h=yes
ac_cv_header_bzlib_h=yes
ac_cv_header_curses_h=yes
ac_cv_header_inttypes_h=yes
ac_cv_header_libutil_h=no
ac_cv_header_lzma_h=yes
ac_cv_header_lzo_lzo1x_h=yes
ac_cv_header_memory_h=yes
ac_cv_header_ncurses_h=yes
ac_cv_header_stdc=yes
ac_cv_header_stdint_h=yes
ac_cv_header_stdlib_h=yes
ac_cv_header_string_h=yes
ac_cv_header_strings_h=yes
ac_cv_header_sys_stat_h=yes
ac_cv_header_sys_types_h=yes
ac_cv_header_unistd_h=yes
ac_cv_header_uuid_h=no
ac_cv_header_uuid_uuid_h=yes
ac_cv_header_yajl_yajl_version_h=yes
ac_cv_host=x86_64-unknown-linux-gnu
ac_cv_lib_aio_io_setup=yes
ac_cv_lib_bz2_BZ2_bzDecompressInit=yes
ac_cv_lib_crypto_MD5=yes
ac_cv_lib_curses_clear=yes
ac_cv_lib_ext2fs_ext2fs_open2=yes
ac_cv_lib_gcrypt_gcry_md_hash_buffer=yes
ac_cv_lib_iconv_libiconv_open=no
ac_cv_lib_lzma_lzma_stream_decoder=yes
ac_cv_lib_lzo2_lzo1x_decompress=yes
ac_cv_lib_ncurses_clear=yes
ac_cv_lib_python2_7___PyArg_ParseTuple=yes
ac_cv_lib_uuid_uuid_clear=yes
ac_cv_lib_yajl_yajl_alloc=yes
ac_cv_lib_z_deflateCopy=yes
ac_cv_objext=o
ac_cv_path_AS86=/usr/bin/as86
ac_cv_path_BASH=/bin/bash
ac_cv_path_BCC=/usr/bin/bcc
ac_cv_path_BISON=/usr/bin/bison
ac_cv_path_EGREP='/bin/grep -E'
ac_cv_path_FLEX=/usr/bin/flex
ac_cv_path_GREP=/bin/grep
ac_cv_path_IASL=/usr/bin/iasl
ac_cv_path_LD86=/usr/bin/ld86
ac_cv_path_PERL=/usr/bin/perl
ac_cv_path_PYTHONPATH=/usr/bin/python
ac_cv_path_XGETTEXT=/usr/bin/xgettext
ac_cv_path_ac_pt_PKG_CONFIG=/usr/bin/pkg-config
ac_cv_path_install='/usr/bin/install -c'
ac_cv_path_pyconfig=/usr/bin/python-config
ac_cv_prog_CPP='gcc -E'
ac_cv_prog_ac_ct_CC=gcc
ac_cv_prog_ac_ct_OCAML=ocaml
ac_cv_prog_ac_ct_OCAMLBUILD=ocamlbuild
ac_cv_prog_ac_ct_OCAMLC=ocamlc
ac_cv_prog_ac_ct_OCAMLDEP=ocamldep
ac_cv_prog_ac_ct_OCAMLDOC=ocamldoc
ac_cv_prog_ac_ct_OCAMLMKLIB=ocamlmklib
ac_cv_prog_ac_ct_OCAMLMKTOP=ocamlmktop
ac_cv_prog_ac_ct_OCAMLOPT=ocamlopt
ac_cv_prog_cc_c89=
ac_cv_prog_cc_g=yes
ac_cv_prog_make_make_set=yes
ax_cv_debug=y
ax_cv_githttp=n
ax_cv_lomount=n
ax_cv_miniterm=n
ax_cv_monitors=y
ax_cv_ocamltools=y
ax_cv_ovmf=n
ax_cv_pthread_flags=-pthread
ax_cv_ptyfuncs_libs=-lutil
ax_cv_pythontools=y
ax_cv_rombios=y
ax_cv_seabios=y
ax_cv_vtpm=n
ax_cv_xenapi=n
pkg_cv_glib_CFLAGS='-I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include  '
pkg_cv_glib_LIBS='-lglib-2.0  '

## ----------------- ##
## Output variables. ##
## ----------------- ##

APPEND_INCLUDES=''
APPEND_LIB=''
AS86='/usr/bin/as86'
BASH='/bin/bash'
BCC='/usr/bin/bcc'
BISON='/usr/bin/bison'
CC='gcc'
CFLAGS='-g -O2'
CPP='gcc -E'
CPPFLAGS='  '
CURL=''
CURSES_LIBS='-lncurses'
DEFS='-DHAVE_CONFIG_H'
ECHO_C=''
ECHO_N='-n'
ECHO_T=''
EGREP='/bin/grep -E'
EXEEXT=''
FLEX='/usr/bin/flex'
GREP='/bin/grep'
IASL='/usr/bin/iasl'
INSTALL_DATA='${INSTALL} -m 644'
INSTALL_PROGRAM='${INSTALL}'
INSTALL_SCRIPT='${INSTALL}'
LD86='/usr/bin/ld86'
LDFLAGS='   -L/usr/lib/python2.7/config -lpthread -ldl -lutil -lm -lpython2.7 -Xlinker -export-dynamic -Wl,-O1 -Wl,-Bsymbolic-functions'
LIBOBJS=''
LIBS='-lz -lyajl -lcrypto -lpython2.7  -lutil'
LIB_PATH='lib64'
LTLIBOBJS=''
OBJEXT='o'
OCAML='ocaml'
OCAMLBEST='opt'
OCAMLBUILD='ocamlbuild'
OCAMLC='ocamlc'
OCAMLCDOTOPT='no'
OCAMLDEP='ocamldep'
OCAMLDOC='ocamldoc'
OCAMLLIB='/usr/lib/ocaml'
OCAMLMKLIB='ocamlmklib'
OCAMLMKTOP='ocamlmktop'
OCAMLOPT='ocamlopt'
OCAMLOPTDOTOPT='no'
OCAMLVERSION='3.12.1'
PACKAGE_BUGREPORT='xen-devel@lists.xen.org'
PACKAGE_NAME='Xen Hypervisor'
PACKAGE_STRING='Xen Hypervisor 4.2'
PACKAGE_TARNAME='xen-hypervisor'
PACKAGE_URL=''
PACKAGE_VERSION='4.2'
PATH_SEPARATOR=':'
PERL='/usr/bin/perl'
PKG_CONFIG='/usr/bin/pkg-config'
PKG_CONFIG_LIBDIR=''
PKG_CONFIG_PATH=''
PREPEND_INCLUDES=''
PREPEND_LIB=''
PTHREAD_CFLAGS='-pthread'
PTHREAD_LDFLAGS='-pthread'
PTHREAD_LIBS=''
PTYFUNCS_LIBS='-lutil'
PYTHON='python'
PYTHONPATH='/usr/bin/python'
SET_MAKE=''
SHELL='/bin/bash'
XGETTEXT='/usr/bin/xgettext'
XML=''
ac_ct_CC='gcc'
bindir='${exec_prefix}/bin'
build='x86_64-unknown-linux-gnu'
build_alias=''
build_cpu='x86_64'
build_os='linux-gnu'
build_vendor='unknown'
datadir='${datarootdir}'
datarootdir='${prefix}/share'
debug='y'
docdir='${datarootdir}/doc/${PACKAGE_TARNAME}'
dvidir='${docdir}'
exec_prefix='/usr'
githttp='n'
glib_CFLAGS='-I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include  '
glib_LIBS='-lglib-2.0  '
host='x86_64-unknown-linux-gnu'
host_alias=''
host_cpu='x86_64'
host_os='linux-gnu'
host_vendor='unknown'
htmldir='${docdir}'
includedir='${prefix}/include'
infodir='${datarootdir}/info'
libdir='${exec_prefix}/lib'
libexecdir='${exec_prefix}/libexec'
libext2fs='y'
libgcrypt='y'
libiconv='n'
localedir='${datarootdir}/locale'
localstatedir='${prefix}/var'
lomount='n'
mandir='${datarootdir}/man'
miniterm='n'
monitors='y'
ocamltools='y'
oldincludedir='/usr/include'
ovmf='n'
pdfdir='${docdir}'
prefix='/usr'
program_transform_name='s,x,x,'
psdir='${docdir}'
pyconfig='/usr/bin/python-config'
pythontools='y'
rombios='y'
sbindir='${exec_prefix}/sbin'
seabios='y'
sharedstatedir='${prefix}/com'
sysconfdir='${prefix}/etc'
system_aio='y'
target_alias=''
vtpm='n'
xenapi='n'
zlib=' -DHAVE_BZLIB -lbz2 -DHAVE_LZMA -llzma -DHAVE_LZO1X -llzo2'

## ----------- ##
## confdefs.h. ##
## ----------- ##

/* confdefs.h */
#define PACKAGE_NAME "Xen Hypervisor"
#define PACKAGE_TARNAME "xen-hypervisor"
#define PACKAGE_VERSION "4.2"
#define PACKAGE_STRING "Xen Hypervisor 4.2"
#define PACKAGE_BUGREPORT "xen-devel@lists.xen.org"
#define PACKAGE_URL ""
#define STDC_HEADERS 1
#define HAVE_SYS_TYPES_H 1
#define HAVE_SYS_STAT_H 1
#define HAVE_STDLIB_H 1
#define HAVE_STRING_H 1
#define HAVE_MEMORY_H 1
#define HAVE_STRINGS_H 1
#define HAVE_INTTYPES_H 1
#define HAVE_STDINT_H 1
#define HAVE_UNISTD_H 1
#define HAVE_LIBPYTHON2_7 1
#define INCLUDE_CURSES_H <ncurses.h>
#define HAVE_LIBCRYPTO 1
#define HAVE_LIBYAJL 1
#define HAVE_LIBZ 1
#define HAVE_YAJL_YAJL_VERSION_H 1

configure: exit 0

--========GMX46801345826581603727
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--========GMX46801345826581603727--


From xen-devel-bounces@lists.xen.org Fri Aug 24 16:56:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 16:56:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4xAn-0004FT-2c; Fri, 24 Aug 2012 16:55:53 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72) (envelope-from <p.d@gmx.de>)
	id 1T4wyX-00045v-6j
	for xen-devel@lists.xen.org; Fri, 24 Aug 2012 16:43:13 +0000
X-Env-Sender: p.d@gmx.de
X-Msg-Ref: server-9.tower-27.messagelabs.com!1345826583!8566619!1
X-Originating-IP: [213.165.64.22]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiAyMTMuMTY1LjY0LjIyID0+IDE5ODY5Mg==\n,sa_preprocessor: 
	QmFkIElQOiAyMTMuMTY1LjY0LjIyID0+IDE5ODY5Mg==\n, ML_RADAR_SPEW_LINKS_14, 
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 553 invoked from network); 24 Aug 2012 16:43:03 -0000
Received: from mailout-de.gmx.net (HELO mailout-de.gmx.net) (213.165.64.22)
	by server-9.tower-27.messagelabs.com with SMTP;
	24 Aug 2012 16:43:03 -0000
Received: (qmail 1360 invoked by uid 0); 24 Aug 2012 16:43:02 -0000
Received: from 95.116.131.21 by www030.gmx.net with HTTP;
	Fri, 24 Aug 2012 18:43:01 +0200 (CEST)
Content-Type: multipart/mixed; boundary="========GMX46801345826581603727"
Date: Fri, 24 Aug 2012 18:43:01 +0200
From: p.d@gmx.de
Message-ID: <20120824164301.46800@gmx.net>
MIME-Version: 1.0
To: ian.campbell@citrix.com, xen-devel@lists.xen.org
X-Authenticated: #16432423
X-Flags: 0001
X-Mailer: WWW-Mail 6100 (Global Message Exchange)
X-Priority: 3
X-Provags-ID: V01U2FsdGVkX183PM1kuF03QOPFrRJGgy6YVUSO7CaH66762IAmA8
	tNmo3HHH5eizKCt+uFbk/BKA8AnU9x8Wn/6g== 
X-GMX-UID: QspFcCxzeSEqbkbekXQhBVl+IGRvb0AP
X-Mailman-Approved-At: Fri, 24 Aug 2012 16:55:50 +0000
Subject: [Xen-devel] Errors in compilation // xl_cmdimpl.c:2733:11 ...
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--========GMX46801345826581603727
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit

Hello,

Ian, I'm not sure, but I think that after your patch:
http://xenbits.xen.org/hg/xen-unstable.hg/rev/4ca40e0559c3

xen-tools (+qemu+seabios) no longer builds :)

Here are the last lines of "make -j7":
==============================================================
gcc  -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -Wno-unused-but-set-variable   -D__XEN_TOOLS__ -MMD -MF ._libxl_save_msgs_callout.o.d  -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -fno-optimize-sibling-calls  -Werror -Wno-format-zero-length -Wmissing-declarations -Wno-declaration-after-statement -Wformat-nonliteral -I. -fPIC -pthread -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/libxc -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/include -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/libxc -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/include -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/xenstore -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/include -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/blktap2/control -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/blktap2/include -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/include -include /usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/config.h  -c -o _libxl_save_msgs_callout.o _libxl_save_msgs_callout.c 
xl_cmdimpl.c: In function 'main_list':
xl_cmdimpl.c:2733:11: error: 'hand' may be used uninitialized in this function [-Werror=uninitialized]
xl_cmdimpl.c:2689:14: note: 'hand' was declared here
gcc  -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -Wno-unused-but-set-variable   -D__XEN_TOOLS__ -MMD -MF .libxl_qmp.o.d  -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -fno-optimize-sibling-calls  -Werror -Wno-format-zero-length -Wmissing-declarations -Wno-declaration-after-statement -Wformat-nonliteral -I. -fPIC -pthread -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/libxc -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/include -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/libxc -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/include -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/xenstore -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/include -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/blktap2/control -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/blktap2/include -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/include -include /usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/config.h  -c -o libxl_qmp.o libxl_qmp.c 
gcc  -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -Wno-unused-but-set-variable   -D__XEN_TOOLS__ -MMD -MF .libxl_event.o.d  -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -fno-optimize-sibling-calls  -Werror -Wno-format-zero-length -Wmissing-declarations -Wno-declaration-after-statement -Wformat-nonliteral -I. -fPIC -pthread -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/libxc -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/include -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/libxc -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/include -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/xenstore -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/include -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/blktap2/control -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/blktap2/include -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/include -include /usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/config.h  -c -o libxl_event.o libxl_event.c 
gcc  -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -Wno-unused-but-set-variable   -D__XEN_TOOLS__ -MMD -MF .libxl_fork.o.d  -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -fno-optimize-sibling-calls  -Werror -Wno-format-zero-length -Wmissing-declarations -Wno-declaration-after-statement -Wformat-nonliteral -I. -fPIC -pthread -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/libxc -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/include -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/libxc -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/include -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/xenstore -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/include -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/blktap2/control -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/blktap2/include -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/include -include /usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/config.h  -c -o libxl_fork.o libxl_fork.c 
gcc  -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -Wno-unused-but-set-variable   -D__XEN_TOOLS__ -MMD -MF .osdeps.o.d  -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -fno-optimize-sibling-calls  -Werror -Wno-format-zero-length -Wmissing-declarations -Wno-declaration-after-statement -Wformat-nonliteral -I. -fPIC -pthread -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/libxc -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/include -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/libxc -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/include -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/xenstore -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/include -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/blktap2/control -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/blktap2/include -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/include -include /usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/config.h  -c -o osdeps.o osdeps.c 
cc1: all warnings being treated as errors
make[3]: *** [xl_cmdimpl.o] Error 1
make[3]: *** Waiting for unfinished jobs....
make[3]: Leaving directory `/usr/src/xen_2012_08_24_rev_25779/tools/libxl'
make[2]: *** [subdir-install-libxl] Error 2
make[2]: Leaving directory `/usr/src/xen_2012_08_24_rev_25779/tools'
make[1]: *** [subdirs-install] Error 2
make[1]: Leaving directory `/usr/src/xen_2012_08_24_rev_25779/tools'
make: *** [install-tools] Error 2
==============================================================


#cat .config
QEMU_UPSTREAM_URL = git://github.com/qemu/qemu.git
QEMU_UPSTREAM_REVISION=master

SEABIOS_UPSTREAM_URL=git://git.seabios.org/seabios.git
SEABIOS_UPSTREAM_TAG=master


I have attached /xen-tools/config.log as well.
If you need anything else, please let me know.

This file contains any messages produced by compilers while
running configure, to aid debugging if configure makes a mistake.

It was created by Xen Hypervisor configure 4.2, which was
generated by GNU Autoconf 2.67.  Invocation command line was

  $ ./configure 

## --------- ##
## Platform. ##
## --------- ##

hostname = moj
uname -m = x86_64
uname -r = 3.2.28
uname -s = Linux
uname -v = #1 SMP Fri Aug 24 00:21:18 CEST 2012

/usr/bin/uname -p = unknown
/bin/uname -X     = unknown

/bin/arch              = unknown
/usr/bin/arch -k       = unknown
/usr/convex/getsysinfo = unknown
/usr/bin/hostinfo      = unknown
/bin/machine           = unknown
/usr/bin/oslevel       = unknown
/bin/universe          = unknown

PATH: /usr/local/sbin
PATH: /usr/local/bin
PATH: /usr/sbin
PATH: /usr/bin
PATH: /sbin
PATH: /bin
PATH: /usr/games


## ----------- ##
## Core tests. ##
## ----------- ##

configure:2200: checking build system type
configure:2214: result: x86_64-unknown-linux-gnu
configure:2234: checking host system type
configure:2247: result: x86_64-unknown-linux-gnu
configure:2762: checking for gcc
configure:2778: found /usr/bin/gcc
configure:2789: result: gcc
configure:3018: checking for C compiler version
configure:3027: gcc --version >&5
gcc (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3
Copyright (C) 2011 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

configure:3038: $? = 0
configure:3027: gcc -v >&5
Using built-in specs.
COLLECT_GCC=gcc
COLLECT_LTO_WRAPPER=/usr/lib/gcc/x86_64-linux-gnu/4.6/lto-wrapper
Target: x86_64-linux-gnu
Configured with: ../src/configure -v --with-pkgversion='Ubuntu/Linaro 4.6.3-1ubuntu5' --with-bugurl=file:///usr/share/doc/gcc-4.6/README.Bugs --enable-languages=c,c++,fortran,objc,obj-c++ --prefix=/usr --program-suffix=-4.6 --enable-shared --enable-linker-build-id --with-system-zlib --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --with-gxx-include-dir=/usr/include/c++/4.6 --libdir=/usr/lib --enable-nls --with-sysroot=/ --enable-clocale=gnu --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-gnu-unique-object --enable-plugin --enable-objc-gc --disable-werror --with-arch-32=i686 --with-tune=generic --enable-checking=release --build=x86_64-linux-gnu --host=x86_64-linux-gnu --target=x86_64-linux-gnu
Thread model: posix
gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) 
configure:3038: $? = 0
configure:3027: gcc -V >&5
gcc: error: unrecognized option '-V'
gcc: fatal error: no input files
compilation terminated.
configure:3038: $? = 4
configure:3027: gcc -qversion >&5
gcc: error: unrecognized option '-qversion'
gcc: fatal error: no input files
compilation terminated.
configure:3038: $? = 4
configure:3058: checking whether the C compiler works
configure:3080: gcc        conftest.c  >&5
configure:3084: $? = 0
configure:3132: result: yes
configure:3135: checking for C compiler default output file name
configure:3137: result: a.out
configure:3143: checking for suffix of executables
configure:3150: gcc -o conftest        conftest.c  >&5
configure:3154: $? = 0
configure:3176: result: 
configure:3198: checking whether we are cross compiling
configure:3206: gcc -o conftest        conftest.c  >&5
configure:3210: $? = 0
configure:3217: ./conftest
configure:3221: $? = 0
configure:3236: result: no
configure:3241: checking for suffix of object files
configure:3263: gcc -c     conftest.c >&5
configure:3267: $? = 0
configure:3288: result: o
configure:3292: checking whether we are using the GNU C compiler
configure:3311: gcc -c     conftest.c >&5
configure:3311: $? = 0
configure:3320: result: yes
configure:3329: checking whether gcc accepts -g
configure:3349: gcc -c -g    conftest.c >&5
configure:3349: $? = 0
configure:3390: result: yes
configure:3407: checking for gcc option to accept ISO C89
configure:3471: gcc  -c -g -O2    conftest.c >&5
configure:3471: $? = 0
configure:3484: result: none needed
configure:3504: checking whether make sets $(MAKE)
configure:3526: result: yes
configure:3549: checking for a BSD-compatible install
configure:3617: result: /usr/bin/install -c
configure:3630: checking for bison
configure:3648: found /usr/bin/bison
configure:3660: result: /usr/bin/bison
configure:3670: checking for flex
configure:3688: found /usr/bin/flex
configure:3700: result: /usr/bin/flex
configure:3710: checking for perl
configure:3728: found /usr/bin/perl
configure:3741: result: /usr/bin/perl
configure:3893: checking for ocamlc
configure:3909: found /usr/bin/ocamlc
configure:3920: result: ocamlc
configure:3945: result: OCaml version is 3.12.1
configure:3954: result: OCaml library path is /usr/lib/ocaml
configure:4004: checking for ocamlopt
configure:4020: found /usr/bin/ocamlopt
configure:4031: result: ocamlopt
configure:4114: checking for ocamlc.opt
configure:4144: result: no
configure:4218: checking for ocamlopt.opt
configure:4248: result: no
configure:4327: checking for ocaml
configure:4343: found /usr/bin/ocaml
configure:4354: result: ocaml
configure:4421: checking for ocamldep
configure:4437: found /usr/bin/ocamldep
configure:4448: result: ocamldep
configure:4515: checking for ocamlmktop
configure:4531: found /usr/bin/ocamlmktop
configure:4542: result: ocamlmktop
configure:4609: checking for ocamlmklib
configure:4625: found /usr/bin/ocamlmklib
configure:4636: result: ocamlmklib
configure:4703: checking for ocamldoc
configure:4719: found /usr/bin/ocamldoc
configure:4730: result: ocamldoc
configure:4797: checking for ocamlbuild
configure:4813: found /usr/bin/ocamlbuild
configure:4824: result: ocamlbuild
configure:4860: checking for bash
configure:4891: result: /bin/bash
configure:4909: checking how to run the C preprocessor
configure:4940: gcc -E    conftest.c
configure:4940: $? = 0
configure:4954: gcc -E    conftest.c
conftest.c:9:28: fatal error: ac_nonexistent.h: No such file or directory
compilation terminated.
configure:4954: $? = 1
configure: failed program was:
| /* confdefs.h */
| #define PACKAGE_NAME "Xen Hypervisor"
| #define PACKAGE_TARNAME "xen-hypervisor"
| #define PACKAGE_VERSION "4.2"
| #define PACKAGE_STRING "Xen Hypervisor 4.2"
| #define PACKAGE_BUGREPORT "xen-devel@lists.xen.org"
| #define PACKAGE_URL ""
| /* end confdefs.h.  */
| #include <ac_nonexistent.h>
configure:4979: result: gcc -E
configure:4999: gcc -E    conftest.c
configure:4999: $? = 0
configure:5013: gcc -E    conftest.c
conftest.c:9:28: fatal error: ac_nonexistent.h: No such file or directory
compilation terminated.
configure:5013: $? = 1
configure: failed program was:
| /* confdefs.h */
| #define PACKAGE_NAME "Xen Hypervisor"
| #define PACKAGE_TARNAME "xen-hypervisor"
| #define PACKAGE_VERSION "4.2"
| #define PACKAGE_STRING "Xen Hypervisor 4.2"
| #define PACKAGE_BUGREPORT "xen-devel@lists.xen.org"
| #define PACKAGE_URL ""
| /* end confdefs.h.  */
| #include <ac_nonexistent.h>
configure:5042: checking for grep that handles long lines and -e
configure:5100: result: /bin/grep
configure:5105: checking for egrep
configure:5167: result: /bin/grep -E
configure:5172: checking for ANSI C header files
configure:5192: gcc -c -g -O2    conftest.c >&5
configure:5192: $? = 0
configure:5265: gcc -o conftest -g -O2       conftest.c  >&5
configure:5265: $? = 0
configure:5265: ./conftest
configure:5265: $? = 0
configure:5276: result: yes
configure:5289: checking for sys/types.h
configure:5289: gcc -c -g -O2    conftest.c >&5
configure:5289: $? = 0
configure:5289: result: yes
configure:5289: checking for sys/stat.h
configure:5289: gcc -c -g -O2    conftest.c >&5
configure:5289: $? = 0
configure:5289: result: yes
configure:5289: checking for stdlib.h
configure:5289: gcc -c -g -O2    conftest.c >&5
configure:5289: $? = 0
configure:5289: result: yes
configure:5289: checking for string.h
configure:5289: gcc -c -g -O2    conftest.c >&5
configure:5289: $? = 0
configure:5289: result: yes
configure:5289: checking for memory.h
configure:5289: gcc -c -g -O2    conftest.c >&5
configure:5289: $? = 0
configure:5289: result: yes
configure:5289: checking for strings.h
configure:5289: gcc -c -g -O2    conftest.c >&5
configure:5289: $? = 0
configure:5289: result: yes
configure:5289: checking for inttypes.h
configure:5289: gcc -c -g -O2    conftest.c >&5
configure:5289: $? = 0
configure:5289: result: yes
configure:5289: checking for stdint.h
configure:5289: gcc -c -g -O2    conftest.c >&5
configure:5289: $? = 0
configure:5289: result: yes
configure:5289: checking for unistd.h
configure:5289: gcc -c -g -O2    conftest.c >&5
configure:5289: $? = 0
configure:5289: result: yes
configure:5315: checking for python
configure:5333: found /usr/bin/python
configure:5346: result: /usr/bin/python
configure:5358: checking for python version >= 2.3 
configure:5368: result: yes
configure:5378: checking for python-config
configure:5396: found /usr/bin/python-config
configure:5409: result: /usr/bin/python-config
configure:5442: checking Python.h usability
configure:5442: gcc -c -g -O2 -g -O2 -I/usr/include/python2.7 -I/usr/include/python2.7 -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector --param=ssp-buffer-size=4 -Wformat -Wformat-security -Werror=format-security conftest.c >&5
In file included from /usr/include/python2.7/Python.h:8:0,
                 from conftest.c:52:
/usr/include/python2.7/pyconfig.h:1161:0: warning: "_POSIX_C_SOURCE" redefined [enabled by default]
/usr/include/features.h:215:0: note: this is the location of the previous definition
configure:5442: $? = 0
configure:5442: result: yes
configure:5442: checking Python.h presence
configure:5442: gcc -E -g -O2 -I/usr/include/python2.7 -I/usr/include/python2.7 -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector --param=ssp-buffer-size=4 -Wformat -Wformat-security -Werror=format-security conftest.c
configure:5442: $? = 0
configure:5442: result: yes
configure:5442: checking for Python.h
configure:5442: result: yes
configure:5451: checking for PyArg_ParseTuple in -lpython2.7
configure:5476: gcc -o conftest -g -O2 -g -O2 -I/usr/include/python2.7 -I/usr/include/python2.7 -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector --param=ssp-buffer-size=4 -Wformat -Wformat-security -Werror=format-security    -L/usr/lib/python2.7/config -lpthread -ldl -lutil -lm -lpython2.7 -Xlinker -export-dynamic -Wl,-O1 -Wl,-Bsymbolic-functions conftest.c -lpython2.7   >&5
conftest.c:26:1: warning: function declaration isn't a prototype [-Wstrict-prototypes]
conftest.c:28:1: warning: function declaration isn't a prototype [-Wstrict-prototypes]
configure:5476: $? = 0
configure:5486: result: yes
configure:5506: checking for xgettext
configure:5524: found /usr/bin/xgettext
configure:5537: result: /usr/bin/xgettext
configure:5553: checking for as86
configure:5571: found /usr/bin/as86
configure:5584: result: /usr/bin/as86
configure:5598: checking for ld86
configure:5616: found /usr/bin/ld86
configure:5629: result: /usr/bin/ld86
configure:5643: checking for bcc
configure:5661: found /usr/bin/bcc
configure:5674: result: /usr/bin/bcc
configure:5688: checking for iasl
configure:5706: found /usr/bin/iasl
configure:5719: result: /usr/bin/iasl
configure:5734: checking uuid/uuid.h usability
configure:5734: gcc -c -g -O2    conftest.c >&5
configure:5734: $? = 0
configure:5734: result: yes
configure:5734: checking uuid/uuid.h presence
configure:5734: gcc -E    conftest.c
configure:5734: $? = 0
configure:5734: result: yes
configure:5734: checking for uuid/uuid.h
configure:5734: result: yes
configure:5737: checking for uuid_clear in -luuid
configure:5762: gcc -o conftest -g -O2       -L/usr/lib/python2.7/config -lpthread -ldl -lutil -lm -lpython2.7 -Xlinker -export-dynamic -Wl,-O1 -Wl,-Bsymbolic-functions conftest.c -luuid  -lpython2.7  >&5
configure:5762: $? = 0
configure:5771: result: yes
configure:5781: checking uuid.h usability
configure:5781: gcc -c -g -O2    conftest.c >&5
conftest.c:53:18: fatal error: uuid.h: No such file or directory
compilation terminated.
configure:5781: $? = 1
configure: failed program was:
| /* confdefs.h */
| #define PACKAGE_NAME "Xen Hypervisor"
| #define PACKAGE_TARNAME "xen-hypervisor"
| #define PACKAGE_VERSION "4.2"
| #define PACKAGE_STRING "Xen Hypervisor 4.2"
| #define PACKAGE_BUGREPORT "xen-devel@lists.xen.org"
| #define PACKAGE_URL ""
| #define STDC_HEADERS 1
| #define HAVE_SYS_TYPES_H 1
| #define HAVE_SYS_STAT_H 1
| #define HAVE_STDLIB_H 1
| #define HAVE_STRING_H 1
| #define HAVE_MEMORY_H 1
| #define HAVE_STRINGS_H 1
| #define HAVE_INTTYPES_H 1
| #define HAVE_STDINT_H 1
| #define HAVE_UNISTD_H 1
| #define HAVE_LIBPYTHON2_7 1
| /* end confdefs.h.  */
| #include <stdio.h>
| #ifdef HAVE_SYS_TYPES_H
| # include <sys/types.h>
| #endif
| #ifdef HAVE_SYS_STAT_H
| # include <sys/stat.h>
| #endif
| #ifdef STDC_HEADERS
| # include <stdlib.h>
| # include <stddef.h>
| #else
| # ifdef HAVE_STDLIB_H
| #  include <stdlib.h>
| # endif
| #endif
| #ifdef HAVE_STRING_H
| # if !defined STDC_HEADERS && defined HAVE_MEMORY_H
| #  include <memory.h>
| # endif
| # include <string.h>
| #endif
| #ifdef HAVE_STRINGS_H
| # include <strings.h>
| #endif
| #ifdef HAVE_INTTYPES_H
| # include <inttypes.h>
| #endif
| #ifdef HAVE_STDINT_H
| # include <stdint.h>
| #endif
| #ifdef HAVE_UNISTD_H
| # include <unistd.h>
| #endif
| #include <uuid.h>
configure:5781: result: no
configure:5781: checking uuid.h presence
configure:5781: gcc -E    conftest.c
conftest.c:20:18: fatal error: uuid.h: No such file or directory
compilation terminated.
configure:5781: $? = 1
configure: failed program was:
| /* confdefs.h */
| #define PACKAGE_NAME "Xen Hypervisor"
| #define PACKAGE_TARNAME "xen-hypervisor"
| #define PACKAGE_VERSION "4.2"
| #define PACKAGE_STRING "Xen Hypervisor 4.2"
| #define PACKAGE_BUGREPORT "xen-devel@lists.xen.org"
| #define PACKAGE_URL ""
| #define STDC_HEADERS 1
| #define HAVE_SYS_TYPES_H 1
| #define HAVE_SYS_STAT_H 1
| #define HAVE_STDLIB_H 1
| #define HAVE_STRING_H 1
| #define HAVE_MEMORY_H 1
| #define HAVE_STRINGS_H 1
| #define HAVE_INTTYPES_H 1
| #define HAVE_STDINT_H 1
| #define HAVE_UNISTD_H 1
| #define HAVE_LIBPYTHON2_7 1
| /* end confdefs.h.  */
| #include <uuid.h>
configure:5781: result: no
configure:5781: checking for uuid.h
configure:5781: result: no
configure:5794: checking curses.h usability
configure:5794: gcc -c -g -O2    conftest.c >&5
configure:5794: $? = 0
configure:5794: result: yes
configure:5794: checking curses.h presence
configure:5794: gcc -E    conftest.c
configure:5794: $? = 0
configure:5794: result: yes
configure:5794: checking for curses.h
configure:5794: result: yes
configure:5797: checking for clear in -lcurses
configure:5822: gcc -o conftest -g -O2       -L/usr/lib/python2.7/config -lpthread -ldl -lutil -lm -lpython2.7 -Xlinker -export-dynamic -Wl,-O1 -Wl,-Bsymbolic-functions conftest.c -lcurses  -lpython2.7  >&5
configure:5822: $? = 0
configure:5831: result: yes
configure:5845: checking ncurses.h usability
configure:5845: gcc -c -g -O2    conftest.c >&5
configure:5845: $? = 0
configure:5845: result: yes
configure:5845: checking ncurses.h presence
configure:5845: gcc -E    conftest.c
configure:5845: $? = 0
configure:5845: result: yes
configure:5845: checking for ncurses.h
configure:5845: result: yes
configure:5848: checking for clear in -lncurses
configure:5873: gcc -o conftest -g -O2       -L/usr/lib/python2.7/config -lpthread -ldl -lutil -lm -lpython2.7 -Xlinker -export-dynamic -Wl,-O1 -Wl,-Bsymbolic-functions conftest.c -lncurses  -lpython2.7  >&5
configure:5873: $? = 0
configure:5882: result: yes
configure:5972: checking for pkg-config
configure:5990: found /usr/bin/pkg-config
configure:6002: result: /usr/bin/pkg-config
configure:6027: checking pkg-config is at least version 0.9.0
configure:6030: result: yes
configure:6040: checking for glib
configure:6047: $PKG_CONFIG --exists --print-errors "glib-2.0 >= 2.12"
configure:6050: $? = 0
configure:6063: $PKG_CONFIG --exists --print-errors "glib-2.0 >= 2.12"
configure:6066: $? = 0
configure:6123: result: yes
configure:6154: checking bzlib.h usability
configure:6154: gcc -c -g -O2    conftest.c >&5
configure:6154: $? = 0
configure:6154: result: yes
configure:6154: checking bzlib.h presence
configure:6154: gcc -E    conftest.c
configure:6154: $? = 0
configure:6154: result: yes
configure:6154: checking for bzlib.h
configure:6154: result: yes
configure:6157: checking for BZ2_bzDecompressInit in -lbz2
configure:6182: gcc -o conftest -g -O2       -L/usr/lib/python2.7/config -lpthread -ldl -lutil -lm -lpython2.7 -Xlinker -export-dynamic -Wl,-O1 -Wl,-Bsymbolic-functions conftest.c -lbz2  -lpython2.7  >&5
configure:6182: $? = 0
configure:6191: result: yes
configure:6201: checking lzma.h usability
configure:6201: gcc -c -g -O2    conftest.c >&5
configure:6201: $? = 0
configure:6201: result: yes
configure:6201: checking lzma.h presence
configure:6201: gcc -E    conftest.c
configure:6201: $? = 0
configure:6201: result: yes
configure:6201: checking for lzma.h
configure:6201: result: yes
configure:6204: checking for lzma_stream_decoder in -llzma
configure:6229: gcc -o conftest -g -O2       -L/usr/lib/python2.7/config -lpthread -ldl -lutil -lm -lpython2.7 -Xlinker -export-dynamic -Wl,-O1 -Wl,-Bsymbolic-functions conftest.c -llzma  -lpython2.7  >&5
configure:6229: $? = 0
configure:6238: result: yes
configure:6248: checking lzo/lzo1x.h usability
configure:6248: gcc -c -g -O2    conftest.c >&5
configure:6248: $? = 0
configure:6248: result: yes
configure:6248: checking lzo/lzo1x.h presence
configure:6248: gcc -E    conftest.c
configure:6248: $? = 0
configure:6248: result: yes
configure:6248: checking for lzo/lzo1x.h
configure:6248: result: yes
configure:6251: checking for lzo1x_decompress in -llzo2
configure:6276: gcc -o conftest -g -O2       -L/usr/lib/python2.7/config -lpthread -ldl -lutil -lm -lpython2.7 -Xlinker -export-dynamic -Wl,-O1 -Wl,-Bsymbolic-functions conftest.c -llzo2  -lpython2.7  >&5
configure:6276: $? = 0
configure:6285: result: yes
configure:6296: checking for io_setup in -laio
configure:6321: gcc -o conftest -g -O2       -L/usr/lib/python2.7/config -lpthread -ldl -lutil -lm -lpython2.7 -Xlinker -export-dynamic -Wl,-O1 -Wl,-Bsymbolic-functions conftest.c -laio  -lpython2.7  >&5
configure:6321: $? = 0
configure:6330: result: yes
configure:6339: checking for MD5 in -lcrypto
configure:6364: gcc -o conftest -g -O2       -L/usr/lib/python2.7/config -lpthread -ldl -lutil -lm -lpython2.7 -Xlinker -export-dynamic -Wl,-O1 -Wl,-Bsymbolic-functions conftest.c -lcrypto  -lpython2.7  >&5
configure:6364: $? = 0
configure:6373: result: yes
configure:6386: checking for ext2fs_open2 in -lext2fs
configure:6411: gcc -o conftest -g -O2       -L/usr/lib/python2.7/config -lpthread -ldl -lutil -lm -lpython2.7 -Xlinker -export-dynamic -Wl,-O1 -Wl,-Bsymbolic-functions conftest.c -lext2fs  -lcrypto -lpython2.7  >&5
configure:6411: $? = 0
configure:6420: result: yes
configure:6429: checking for gcry_md_hash_buffer in -lgcrypt
configure:6454: gcc -o conftest -g -O2       -L/usr/lib/python2.7/config -lpthread -ldl -lutil -lm -lpython2.7 -Xlinker -export-dynamic -Wl,-O1 -Wl,-Bsymbolic-functions conftest.c -lgcrypt  -lcrypto -lpython2.7  >&5
configure:6454: $? = 0
configure:6463: result: yes
configure:6473: checking for pthread flag
configure:6509: gcc -o conftest -g -O2 -pthread       -L/usr/lib/python2.7/config -lpthread -ldl -lutil -lm -lpython2.7 -Xlinker -export-dynamic -Wl,-O1 -Wl,-Bsymbolic-functions -pthread conftest.c -lcrypto -lpython2.7   >&5
configure:6509: $? = 0
configure:6525: result: -pthread
configure:6538: checking libutil.h usability
configure:6538: gcc -c -g -O2    conftest.c >&5
conftest.c:55:21: fatal error: libutil.h: No such file or directory
compilation terminated.
configure:6538: $? = 1
configure: failed program was:
| /* confdefs.h */
| #define PACKAGE_NAME "Xen Hypervisor"
| #define PACKAGE_TARNAME "xen-hypervisor"
| #define PACKAGE_VERSION "4.2"
| #define PACKAGE_STRING "Xen Hypervisor 4.2"
| #define PACKAGE_BUGREPORT "xen-devel@lists.xen.org"
| #define PACKAGE_URL ""
| #define STDC_HEADERS 1
| #define HAVE_SYS_TYPES_H 1
| #define HAVE_SYS_STAT_H 1
| #define HAVE_STDLIB_H 1
| #define HAVE_STRING_H 1
| #define HAVE_MEMORY_H 1
| #define HAVE_STRINGS_H 1
| #define HAVE_INTTYPES_H 1
| #define HAVE_STDINT_H 1
| #define HAVE_UNISTD_H 1
| #define HAVE_LIBPYTHON2_7 1
| #define INCLUDE_CURSES_H <ncurses.h>
| #define HAVE_LIBCRYPTO 1
| /* end confdefs.h.  */
| #include <stdio.h>
| #ifdef HAVE_SYS_TYPES_H
| # include <sys/types.h>
| #endif
| #ifdef HAVE_SYS_STAT_H
| # include <sys/stat.h>
| #endif
| #ifdef STDC_HEADERS
| # include <stdlib.h>
| # include <stddef.h>
| #else
| # ifdef HAVE_STDLIB_H
| #  include <stdlib.h>
| # endif
| #endif
| #ifdef HAVE_STRING_H
| # if !defined STDC_HEADERS && defined HAVE_MEMORY_H
| #  include <memory.h>
| # endif
| # include <string.h>
| #endif
| #ifdef HAVE_STRINGS_H
| # include <strings.h>
| #endif
| #ifdef HAVE_INTTYPES_H
| # include <inttypes.h>
| #endif
| #ifdef HAVE_STDINT_H
| # include <stdint.h>
| #endif
| #ifdef HAVE_UNISTD_H
| # include <unistd.h>
| #endif
| #include <libutil.h>
configure:6538: result: no
configure:6538: checking libutil.h presence
configure:6538: gcc -E    conftest.c
conftest.c:22:21: fatal error: libutil.h: No such file or directory
compilation terminated.
configure:6538: $? = 1
configure: failed program was:
| /* confdefs.h */
| #define PACKAGE_NAME "Xen Hypervisor"
| #define PACKAGE_TARNAME "xen-hypervisor"
| #define PACKAGE_VERSION "4.2"
| #define PACKAGE_STRING "Xen Hypervisor 4.2"
| #define PACKAGE_BUGREPORT "xen-devel@lists.xen.org"
| #define PACKAGE_URL ""
| #define STDC_HEADERS 1
| #define HAVE_SYS_TYPES_H 1
| #define HAVE_SYS_STAT_H 1
| #define HAVE_STDLIB_H 1
| #define HAVE_STRING_H 1
| #define HAVE_MEMORY_H 1
| #define HAVE_STRINGS_H 1
| #define HAVE_INTTYPES_H 1
| #define HAVE_STDINT_H 1
| #define HAVE_UNISTD_H 1
| #define HAVE_LIBPYTHON2_7 1
| #define INCLUDE_CURSES_H <ncurses.h>
| #define HAVE_LIBCRYPTO 1
| /* end confdefs.h.  */
| #include <libutil.h>
configure:6538: result: no
configure:6538: checking for libutil.h
configure:6538: result: no
configure:6548: checking for openpty et al
configure:6577: gcc -o conftest -g -O2       -L/usr/lib/python2.7/config -lpthread -ldl -lutil -lm -lpython2.7 -Xlinker -export-dynamic -Wl,-O1 -Wl,-Bsymbolic-functions conftest.c -lcrypto -lpython2.7  -lutil >&5
configure:6577: $? = 0
configure:6590: result: -lutil
configure:6595: checking for yajl_alloc in -lyajl
configure:6620: gcc -o conftest -g -O2       -L/usr/lib/python2.7/config -lpthread -ldl -lutil -lm -lpython2.7 -Xlinker -export-dynamic -Wl,-O1 -Wl,-Bsymbolic-functions conftest.c -lyajl  -lcrypto -lpython2.7  -lutil >&5
configure:6620: $? = 0
configure:6629: result: yes
configure:6642: checking for deflateCopy in -lz
configure:6667: gcc -o conftest -g -O2       -L/usr/lib/python2.7/config -lpthread -ldl -lutil -lm -lpython2.7 -Xlinker -export-dynamic -Wl,-O1 -Wl,-Bsymbolic-functions conftest.c -lz  -lyajl -lcrypto -lpython2.7  -lutil >&5
configure:6667: $? = 0
configure:6676: result: yes
configure:6689: checking for libiconv_open in -liconv
configure:6714: gcc -o conftest -g -O2       -L/usr/lib/python2.7/config -lpthread -ldl -lutil -lm -lpython2.7 -Xlinker -export-dynamic -Wl,-O1 -Wl,-Bsymbolic-functions conftest.c -liconv  -lz -lyajl -lcrypto -lpython2.7  -lutil >&5
/usr/bin/ld: cannot find -liconv
collect2: ld returned 1 exit status
configure:6714: $? = 1
configure: failed program was:
| /* confdefs.h */
| #define PACKAGE_NAME "Xen Hypervisor"
| #define PACKAGE_TARNAME "xen-hypervisor"
| #define PACKAGE_VERSION "4.2"
| #define PACKAGE_STRING "Xen Hypervisor 4.2"
| #define PACKAGE_BUGREPORT "xen-devel@lists.xen.org"
| #define PACKAGE_URL ""
| #define STDC_HEADERS 1
| #define HAVE_SYS_TYPES_H 1
| #define HAVE_SYS_STAT_H 1
| #define HAVE_STDLIB_H 1
| #define HAVE_STRING_H 1
| #define HAVE_MEMORY_H 1
| #define HAVE_STRINGS_H 1
| #define HAVE_INTTYPES_H 1
| #define HAVE_STDINT_H 1
| #define HAVE_UNISTD_H 1
| #define HAVE_LIBPYTHON2_7 1
| #define INCLUDE_CURSES_H <ncurses.h>
| #define HAVE_LIBCRYPTO 1
| #define HAVE_LIBYAJL 1
| #define HAVE_LIBZ 1
| /* end confdefs.h.  */
| 
| /* Override any GCC internal prototype to avoid an error.
|    Use char because int might match the return type of a GCC
|    builtin and then its argument prototype would still apply.  */
| #ifdef __cplusplus
| extern "C"
| #endif
| char libiconv_open ();
| int
| main ()
| {
| return libiconv_open ();
|   ;
|   return 0;
| }
configure:6723: result: no
configure:6736: checking yajl/yajl_version.h usability
configure:6736: gcc -c -g -O2    conftest.c >&5
configure:6736: $? = 0
configure:6736: result: yes
configure:6736: checking yajl/yajl_version.h presence
configure:6736: gcc -E    conftest.c
configure:6736: $? = 0
configure:6736: result: yes
configure:6736: checking for yajl/yajl_version.h
configure:6736: result: yes
configure:6850: creating ./config.status

## ---------------------- ##
## Running config.status. ##
## ---------------------- ##

This file was extended by Xen Hypervisor config.status 4.2, which was
generated by GNU Autoconf 2.67.  Invocation command line was

  CONFIG_FILES    = 
  CONFIG_HEADERS  = 
  CONFIG_LINKS    = 
  CONFIG_COMMANDS = 
  $ ./config.status 

on moj

config.status:884: creating ../config/Tools.mk
config.status:884: creating config.h

## ---------------- ##
## Cache variables. ##
## ---------------- ##

ac_cv_build=x86_64-unknown-linux-gnu
ac_cv_c_compiler_gnu=yes
ac_cv_env_APPEND_INCLUDES_set=
ac_cv_env_APPEND_INCLUDES_value=
ac_cv_env_APPEND_LIB_set=
ac_cv_env_APPEND_LIB_value=
ac_cv_env_AS86_set=
ac_cv_env_AS86_value=
ac_cv_env_BASH_set=set
ac_cv_env_BASH_value=/bin/bash
ac_cv_env_BCC_set=
ac_cv_env_BCC_value=
ac_cv_env_BISON_set=
ac_cv_env_BISON_value=
ac_cv_env_CC_set=
ac_cv_env_CC_value=
ac_cv_env_CFLAGS_set=
ac_cv_env_CFLAGS_value=
ac_cv_env_CPPFLAGS_set=
ac_cv_env_CPPFLAGS_value=
ac_cv_env_CPP_set=
ac_cv_env_CPP_value=
ac_cv_env_CURL_set=
ac_cv_env_CURL_value=
ac_cv_env_FLEX_set=
ac_cv_env_FLEX_value=
ac_cv_env_IASL_set=
ac_cv_env_IASL_value=
ac_cv_env_LD86_set=
ac_cv_env_LD86_value=
ac_cv_env_LDFLAGS_set=
ac_cv_env_LDFLAGS_value=
ac_cv_env_LIBS_set=
ac_cv_env_LIBS_value=
ac_cv_env_PERL_set=
ac_cv_env_PERL_value=
ac_cv_env_PKG_CONFIG_LIBDIR_set=
ac_cv_env_PKG_CONFIG_LIBDIR_value=
ac_cv_env_PKG_CONFIG_PATH_set=
ac_cv_env_PKG_CONFIG_PATH_value=
ac_cv_env_PKG_CONFIG_set=
ac_cv_env_PKG_CONFIG_value=
ac_cv_env_PREPEND_INCLUDES_set=
ac_cv_env_PREPEND_INCLUDES_value=
ac_cv_env_PREPEND_LIB_set=
ac_cv_env_PREPEND_LIB_value=
ac_cv_env_PYTHON_set=
ac_cv_env_PYTHON_value=
ac_cv_env_XGETTEXT_set=
ac_cv_env_XGETTEXT_value=
ac_cv_env_XML_set=
ac_cv_env_XML_value=
ac_cv_env_build_alias_set=
ac_cv_env_build_alias_value=
ac_cv_env_glib_CFLAGS_set=
ac_cv_env_glib_CFLAGS_value=
ac_cv_env_glib_LIBS_set=
ac_cv_env_glib_LIBS_value=
ac_cv_env_host_alias_set=
ac_cv_env_host_alias_value=
ac_cv_env_target_alias_set=
ac_cv_env_target_alias_value=
ac_cv_header_Python_h=yes
ac_cv_header_bzlib_h=yes
ac_cv_header_curses_h=yes
ac_cv_header_inttypes_h=yes
ac_cv_header_libutil_h=no
ac_cv_header_lzma_h=yes
ac_cv_header_lzo_lzo1x_h=yes
ac_cv_header_memory_h=yes
ac_cv_header_ncurses_h=yes
ac_cv_header_stdc=yes
ac_cv_header_stdint_h=yes
ac_cv_header_stdlib_h=yes
ac_cv_header_string_h=yes
ac_cv_header_strings_h=yes
ac_cv_header_sys_stat_h=yes
ac_cv_header_sys_types_h=yes
ac_cv_header_unistd_h=yes
ac_cv_header_uuid_h=no
ac_cv_header_uuid_uuid_h=yes
ac_cv_header_yajl_yajl_version_h=yes
ac_cv_host=x86_64-unknown-linux-gnu
ac_cv_lib_aio_io_setup=yes
ac_cv_lib_bz2_BZ2_bzDecompressInit=yes
ac_cv_lib_crypto_MD5=yes
ac_cv_lib_curses_clear=yes
ac_cv_lib_ext2fs_ext2fs_open2=yes
ac_cv_lib_gcrypt_gcry_md_hash_buffer=yes
ac_cv_lib_iconv_libiconv_open=no
ac_cv_lib_lzma_lzma_stream_decoder=yes
ac_cv_lib_lzo2_lzo1x_decompress=yes
ac_cv_lib_ncurses_clear=yes
ac_cv_lib_python2_7___PyArg_ParseTuple=yes
ac_cv_lib_uuid_uuid_clear=yes
ac_cv_lib_yajl_yajl_alloc=yes
ac_cv_lib_z_deflateCopy=yes
ac_cv_objext=o
ac_cv_path_AS86=/usr/bin/as86
ac_cv_path_BASH=/bin/bash
ac_cv_path_BCC=/usr/bin/bcc
ac_cv_path_BISON=/usr/bin/bison
ac_cv_path_EGREP='/bin/grep -E'
ac_cv_path_FLEX=/usr/bin/flex
ac_cv_path_GREP=/bin/grep
ac_cv_path_IASL=/usr/bin/iasl
ac_cv_path_LD86=/usr/bin/ld86
ac_cv_path_PERL=/usr/bin/perl
ac_cv_path_PYTHONPATH=/usr/bin/python
ac_cv_path_XGETTEXT=/usr/bin/xgettext
ac_cv_path_ac_pt_PKG_CONFIG=/usr/bin/pkg-config
ac_cv_path_install='/usr/bin/install -c'
ac_cv_path_pyconfig=/usr/bin/python-config
ac_cv_prog_CPP='gcc -E'
ac_cv_prog_ac_ct_CC=gcc
ac_cv_prog_ac_ct_OCAML=ocaml
ac_cv_prog_ac_ct_OCAMLBUILD=ocamlbuild
ac_cv_prog_ac_ct_OCAMLC=ocamlc
ac_cv_prog_ac_ct_OCAMLDEP=ocamldep
ac_cv_prog_ac_ct_OCAMLDOC=ocamldoc
ac_cv_prog_ac_ct_OCAMLMKLIB=ocamlmklib
ac_cv_prog_ac_ct_OCAMLMKTOP=ocamlmktop
ac_cv_prog_ac_ct_OCAMLOPT=ocamlopt
ac_cv_prog_cc_c89=
ac_cv_prog_cc_g=yes
ac_cv_prog_make_make_set=yes
ax_cv_debug=y
ax_cv_githttp=n
ax_cv_lomount=n
ax_cv_miniterm=n
ax_cv_monitors=y
ax_cv_ocamltools=y
ax_cv_ovmf=n
ax_cv_pthread_flags=-pthread
ax_cv_ptyfuncs_libs=-lutil
ax_cv_pythontools=y
ax_cv_rombios=y
ax_cv_seabios=y
ax_cv_vtpm=n
ax_cv_xenapi=n
pkg_cv_glib_CFLAGS='-I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include  '
pkg_cv_glib_LIBS='-lglib-2.0  '

## ----------------- ##
## Output variables. ##
## ----------------- ##

APPEND_INCLUDES=''
APPEND_LIB=''
AS86='/usr/bin/as86'
BASH='/bin/bash'
BCC='/usr/bin/bcc'
BISON='/usr/bin/bison'
CC='gcc'
CFLAGS='-g -O2'
CPP='gcc -E'
CPPFLAGS='  '
CURL=''
CURSES_LIBS='-lncurses'
DEFS='-DHAVE_CONFIG_H'
ECHO_C=''
ECHO_N='-n'
ECHO_T=''
EGREP='/bin/grep -E'
EXEEXT=''
FLEX='/usr/bin/flex'
GREP='/bin/grep'
IASL='/usr/bin/iasl'
INSTALL_DATA='${INSTALL} -m 644'
INSTALL_PROGRAM='${INSTALL}'
INSTALL_SCRIPT='${INSTALL}'
LD86='/usr/bin/ld86'
LDFLAGS='   -L/usr/lib/python2.7/config -lpthread -ldl -lutil -lm -lpython2.7 -Xlinker -export-dynamic -Wl,-O1 -Wl,-Bsymbolic-functions'
LIBOBJS=''
LIBS='-lz -lyajl -lcrypto -lpython2.7  -lutil'
LIB_PATH='lib64'
LTLIBOBJS=''
OBJEXT='o'
OCAML='ocaml'
OCAMLBEST='opt'
OCAMLBUILD='ocamlbuild'
OCAMLC='ocamlc'
OCAMLCDOTOPT='no'
OCAMLDEP='ocamldep'
OCAMLDOC='ocamldoc'
OCAMLLIB='/usr/lib/ocaml'
OCAMLMKLIB='ocamlmklib'
OCAMLMKTOP='ocamlmktop'
OCAMLOPT='ocamlopt'
OCAMLOPTDOTOPT='no'
OCAMLVERSION='3.12.1'
PACKAGE_BUGREPORT='xen-devel@lists.xen.org'
PACKAGE_NAME='Xen Hypervisor'
PACKAGE_STRING='Xen Hypervisor 4.2'
PACKAGE_TARNAME='xen-hypervisor'
PACKAGE_URL=''
PACKAGE_VERSION='4.2'
PATH_SEPARATOR=':'
PERL='/usr/bin/perl'
PKG_CONFIG='/usr/bin/pkg-config'
PKG_CONFIG_LIBDIR=''
PKG_CONFIG_PATH=''
PREPEND_INCLUDES=''
PREPEND_LIB=''
PTHREAD_CFLAGS='-pthread'
PTHREAD_LDFLAGS='-pthread'
PTHREAD_LIBS=''
PTYFUNCS_LIBS='-lutil'
PYTHON='python'
PYTHONPATH='/usr/bin/python'
SET_MAKE=''
SHELL='/bin/bash'
XGETTEXT='/usr/bin/xgettext'
XML=''
ac_ct_CC='gcc'
bindir='${exec_prefix}/bin'
build='x86_64-unknown-linux-gnu'
build_alias=''
build_cpu='x86_64'
build_os='linux-gnu'
build_vendor='unknown'
datadir='${datarootdir}'
datarootdir='${prefix}/share'
debug='y'
docdir='${datarootdir}/doc/${PACKAGE_TARNAME}'
dvidir='${docdir}'
exec_prefix='/usr'
githttp='n'
glib_CFLAGS='-I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include  '
glib_LIBS='-lglib-2.0  '
host='x86_64-unknown-linux-gnu'
host_alias=''
host_cpu='x86_64'
host_os='linux-gnu'
host_vendor='unknown'
htmldir='${docdir}'
includedir='${prefix}/include'
infodir='${datarootdir}/info'
libdir='${exec_prefix}/lib'
libexecdir='${exec_prefix}/libexec'
libext2fs='y'
libgcrypt='y'
libiconv='n'
localedir='${datarootdir}/locale'
localstatedir='${prefix}/var'
lomount='n'
mandir='${datarootdir}/man'
miniterm='n'
monitors='y'
ocamltools='y'
oldincludedir='/usr/include'
ovmf='n'
pdfdir='${docdir}'
prefix='/usr'
program_transform_name='s,x,x,'
psdir='${docdir}'
pyconfig='/usr/bin/python-config'
pythontools='y'
rombios='y'
sbindir='${exec_prefix}/sbin'
seabios='y'
sharedstatedir='${prefix}/com'
sysconfdir='${prefix}/etc'
system_aio='y'
target_alias=''
vtpm='n'
xenapi='n'
zlib=' -DHAVE_BZLIB -lbz2 -DHAVE_LZMA -llzma -DHAVE_LZO1X -llzo2'

## ----------- ##
## confdefs.h. ##
## ----------- ##

/* confdefs.h */
#define PACKAGE_NAME "Xen Hypervisor"
#define PACKAGE_TARNAME "xen-hypervisor"
#define PACKAGE_VERSION "4.2"
#define PACKAGE_STRING "Xen Hypervisor 4.2"
#define PACKAGE_BUGREPORT "xen-devel@lists.xen.org"
#define PACKAGE_URL ""
#define STDC_HEADERS 1
#define HAVE_SYS_TYPES_H 1
#define HAVE_SYS_STAT_H 1
#define HAVE_STDLIB_H 1
#define HAVE_STRING_H 1
#define HAVE_MEMORY_H 1
#define HAVE_STRINGS_H 1
#define HAVE_INTTYPES_H 1
#define HAVE_STDINT_H 1
#define HAVE_UNISTD_H 1
#define HAVE_LIBPYTHON2_7 1
#define INCLUDE_CURSES_H <ncurses.h>
#define HAVE_LIBCRYPTO 1
#define HAVE_LIBYAJL 1
#define HAVE_LIBZ 1
#define HAVE_YAJL_YAJL_VERSION_H 1

configure: exit 0

--========GMX46801345826581603727
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--========GMX46801345826581603727--


From xen-devel-bounces@lists.xen.org Fri Aug 24 17:07:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 17:07:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4xLO-0004Tx-Cx; Fri, 24 Aug 2012 17:06:50 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T4xLM-0004Ts-TU
	for xen-devel@lists.xensource.com; Fri, 24 Aug 2012 17:06:49 +0000
Received: from [85.158.143.99:54758] by server-1.bemta-4.messagelabs.com id
	F3/C0-12504-8A4B7305; Fri, 24 Aug 2012 17:06:48 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-16.tower-216.messagelabs.com!1345828005!16611956!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA2NTk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6518 invoked from network); 24 Aug 2012 17:06:45 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Aug 2012 17:06:45 -0000
X-IronPort-AV: E=Sophos;i="4.80,304,1344211200"; d="scan'208";a="14173785"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	24 Aug 2012 17:06:44 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 24 Aug 2012 18:06:44 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1T4xLI-0003DX-Jq;
	Fri, 24 Aug 2012 17:06:44 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1T4xLI-0003Ts-AB;
	Fri, 24 Aug 2012 18:06:44 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13626-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 24 Aug 2012 18:06:44 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13626: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13626 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13626/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-i386-i386-xl-win        12 guest-localmigrate/x10    fail REGR. vs. 13625
 test-amd64-amd64-xl-win7-amd64  7 windows-install         fail REGR. vs. 13625

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf     11 guest-localmigrate        fail REGR. vs. 13625
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13625
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13625
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13625
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13625

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  985e836dff8b
baseline version:
 xen                  4ca40e0559c3

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Keir Fraser <keir@xen.org>
  Zhang Xiantao <xiantao.zhang@intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   25781:985e836dff8b
tag:         tip
user:        Zhang Xiantao <xiantao.zhang@intel.com>
date:        Fri Aug 24 09:49:47 2012 +0100
    
    nested vmx: Don't set bit 55 in IA32_VMX_BASIC_MSR
    
    None of the related IA32_VMX_TRUE_*_MSRs are implemented,
    so set this bit to 0; otherwise the L1 VMM may
    get incorrect default1 class settings.
    
    Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
    Committed-by: Keir Fraser <keir@xen.org>
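
    [Editorial note: the fix described in this changeset amounts to masking
    bit 55, which advertises the IA32_VMX_TRUE_*_MSR set, out of the value an
    L1 guest reads from IA32_VMX_BASIC_MSR. A minimal illustrative sketch;
    the constant and function names below are made up, not Xen's:

    ```python
    # Bit 55 of IA32_VMX_BASIC advertises the IA32_VMX_TRUE_*_MSR set.
    VMX_BASIC_TRUE_CTLS_BIT = 1 << 55  # illustrative name, not from the Xen source

    def sanitize_vmx_basic(msr_value: int) -> int:
        """Clear bit 55 so an L1 VMM does not consult TRUE control MSRs
        that the nested-VMX implementation does not provide."""
        return msr_value & ~VMX_BASIC_TRUE_CTLS_BIT
    ```

    With the bit cleared, an L1 VMM falls back to the plain control MSRs
    and sees consistent default1 class settings.]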
    
    
changeset:   25780:42f959fec02d
user:        Zhang Xiantao <xiantao.zhang@intel.com>
date:        Fri Aug 24 09:49:14 2012 +0100
    
    nested vmx: VM_ENTRY_IA32E_MODE shouldn't be in default1 class
    for IA32_VM_ENTRY_CTLS_MSR.
    
    If set to 1, the L2 guest's paging mode may be mis-judged
    and mis-set.
    
    Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
    Committed-by: Keir Fraser <keir@xen.org>
    
    
changeset:   25779:4ca40e0559c3
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Aug 23 19:12:28 2012 +0100
    
    xl: make "xl list -l" proper JSON
    
    Bastian Blank reports that the output of this command is just multiple
    JSON objects concatenated and is not a single properly formed JSON
    object.
    
    Fix this by wrapping in an array. This turned out to be a bit more
    intrusive than I was expecting due to the requirement to keep
    supporting the SXP output mode.
    
    Python's json module is happy to parse the result...
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
(qemu changes not included)
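
[Editorial note: the "xl list -l" change above can be seen from the
consumer's side: concatenated JSON objects are rejected by a strict parser,
while the array-wrapped output parses in one call. A sketch under assumed
inputs; the domain records are made up for illustration:

```python
import json

# Before the fix: multiple objects back to back -- not a valid JSON document.
concatenated = '{"name": "Domain-0"}\n{"name": "guest1"}'
try:
    json.loads(concatenated)
    parsed_ok = True
except ValueError:  # json.JSONDecodeError is a ValueError subclass
    parsed_ok = False

# After the fix: a single array containing one object per domain.
wrapped = '[{"name": "Domain-0"}, {"name": "guest1"}]'
domains = json.loads(wrapped)
names = [d["name"] for d in domains]
```

Here `parsed_ok` ends up False for the concatenated form, while the wrapped
form yields a normal list of domain records, which is why Python's json
module is happy with the fixed output.]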

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 24 17:31:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 17:31:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4xhn-0004hB-M9; Fri, 24 Aug 2012 17:29:59 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T4xhl-0004h6-It
	for xen-devel@lists.xensource.com; Fri, 24 Aug 2012 17:29:57 +0000
Received: from [85.158.143.35:64296] by server-3.bemta-4.messagelabs.com id
	02/88-08232-41AB7305; Fri, 24 Aug 2012 17:29:56 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1345829396!12843512!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA2NTk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22171 invoked from network); 24 Aug 2012 17:29:56 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Aug 2012 17:29:56 -0000
X-IronPort-AV: E=Sophos;i="4.80,304,1344211200"; d="scan'208";a="14173990"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	24 Aug 2012 17:29:56 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 24 Aug 2012 18:29:56 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1T4xhj-0003Rz-TJ; Fri, 24 Aug 2012 17:29:55 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1T4xhj-0001vY-Q8;
	Fri, 24 Aug 2012 18:29:55 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20535.47635.716045.5844@mariner.uk.xensource.com>
Date: Fri, 24 Aug 2012 18:29:55 +0100
To: xen.org <ian.jackson@eu.citrix.com>
In-Reply-To: <osstest-13626-mainreport@xen.org>
References: <osstest-13626-mainreport@xen.org>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [xen-unstable test] 13626: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

xen.org writes ("[xen-unstable test] 13626: regressions - FAIL"):
> flight 13626 xen-unstable real [real]
> http://www.chiark.greenend.org.uk/~xensrcts/logs/13626/
> 
> Regressions :-(
> 
> Tests which did not succeed and are blocking,
> including tests which could not be run:
>  test-i386-i386-xl-win     12 guest-localmigrate/x10    fail REGR. vs. 13625

This seems to be a real failure, although it's a bit mysterious.

It has done 6 repetitions of the 10x localhost migration by this
point; rep 6 took 108 seconds.  Each of these prints a message like
this:
  libxl: error: libxl_blktap2.c:80:libxl__device_destroy_tapdisk: Failed to destroy tap device id 6379 minor 7: Input/output error
  libxl: error: libxl.c:1465:devices_destroy_cb: libxl__devices_destroy failed for 13

Repetition 7 proceeds apparently OK until it would normally print the
first of those messages and then seems to hang.  After 400s the test
harness gives up and calls it a failure.

The logs seem quite uninformative.  I looked at the host serial and
kernel logs, and the xl and qemu logfiles.  Both domains exist; the
migrated-away domain has been renamed and is paused.

>  test-amd64-amd64-xl-win7-amd64  7 windows-install         fail REGR. vs. 13625

This shows Windows at the login screen but it unaccountably isn't
listening on the special xenrt agent port.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 24 17:36:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 17:36:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4xmp-0004rr-Cm; Fri, 24 Aug 2012 17:35:11 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T4xmo-0004rd-4F
	for xen-devel@lists.xen.org; Fri, 24 Aug 2012 17:35:10 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1345829703!8733525!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA2NTk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25869 invoked from network); 24 Aug 2012 17:35:03 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Aug 2012 17:35:03 -0000
X-IronPort-AV: E=Sophos;i="4.80,304,1344211200"; d="scan'208";a="14174046"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	24 Aug 2012 17:35:03 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 24 Aug 2012 18:35:03 +0100
Date: Fri, 24 Aug 2012 18:34:39 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Jan Beulich <JBeulich@suse.com>
In-Reply-To: <5037AFBF020000780008A7AF@nat28.tlf.novell.com>
Message-ID: <alpine.DEB.2.02.1208241828280.15568@kaball.uk.xensource.com>
References: <503514280200007800096FF4@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208231441090.15568@kaball.uk.xensource.com>
	<5037AFBF020000780008A7AF@nat28.tlf.novell.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: xen-devel <xen-devel@lists.xen.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH,
 v2] x86/HVM: assorted RTC emulation adjustments
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 24 Aug 2012, Jan Beulich wrote:
> >>> On 23.08.12 at 15:44, Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> wrote:
> > On Wed, 22 Aug 2012, Jan Beulich wrote:
> >> - don't call rtc_timer_update() on REG_A writes when the value didn't
> >>   change (doing the call always was reported to cause wall clock time
> >>   lagging with the JVM running on Windows)
> >> - don't call rtc_timer_update() on REG_B writes at all
> >> - only call alarm_timer_update() on REG_B writes when relevant bits
> >>   change
> >> - only call check_update_timer() on REG_B writes when SET changes
> >> - instead properly handle AF and PF when the guest is not also setting
> >>   AIE/PIE respectively (for UF this was already the case, only a
> >>   comment was slightly inaccurate)
> >> - raise the RTC IRQ not only when UIE gets set while UF was already
> >>   set, but generalize this to cover AIE and PIE as well
> >> - properly mask off bit 7 when retrieving the hour values in
> >>   alarm_timer_update(), and properly use RTC_HOURS_ALARM's bit 7 when
> >>   converting from 12- to 24-hour value
> >> - also handle the two other possible clock bases
> >> - use RTC_* names in a couple of places where literal numbers were used
> >>   so far
> >> 
> >> Note that this only improves the situation described in the thread at
> >> http://lists.xen.org/archives/html/xen-devel/2012-08/msg00664.html,
> >> there are still problems with the emulation when invoked at a high rate
> >> as described there.
> >> 
> >> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> > 
> > Although this patch solves a real problem and should probably go in at
> > some point, I am a bit worried about drifting too much from the original
> > RTC emulator (that was taken from QEMU),
> 
> Then does that emulator have similar problems?

I am not sure; it probably does.
I am afraid the code has already drifted too much to make comparisons.


> > because it would be nice to be able to backport features like this one:
> > 
> > http://marc.info/?l=qemu-devel&m=134392375010304 
> 
> I agree this would be nice to have (albeit I'm not sure how much
> the original problem is actually present in the Xen code, particularly
> with the patch here applied, as I think it may implicitly clean up some
> of the unnecessary active timers).

Maybe, but with your patch applied, are there going to be any timers
running if the guest is not making use of the RTC?

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 24 17:41:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 17:41:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4xrN-00051H-3T; Fri, 24 Aug 2012 17:39:53 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <jonnyt@abpni.co.uk>) id 1T4xrL-000514-EC
	for xen-devel@lists.xen.org; Fri, 24 Aug 2012 17:39:51 +0000
X-Env-Sender: jonnyt@abpni.co.uk
X-Msg-Ref: server-16.tower-27.messagelabs.com!1345829985!8579239!1
X-Originating-IP: [109.200.19.114]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8296 invoked from network); 24 Aug 2012 17:39:45 -0000
Received: from edge1.gosport.uk.abpni.net (HELO
	mail1.gosport.uk.corp.abpni.net) (109.200.19.114)
	by server-16.tower-27.messagelabs.com with SMTP;
	24 Aug 2012 17:39:45 -0000
Received: from localhost (mail1.gosport.corp.uk.abpni.net [127.0.0.1])
	by mail1.gosport.uk.corp.abpni.net (Postfix) with ESMTP id 546DF5A0006
	for <xen-devel@lists.xen.org>; Fri, 24 Aug 2012 18:39:21 +0100 (BST)
X-Virus-Scanned: Debian amavisd-new at mail1.mail.gosport.corp.uk.abpni.net
Received: from mail1.gosport.uk.corp.abpni.net ([127.0.0.1])
	by localhost (mail1.mail.gosport.corp.uk.abpni.net [127.0.0.1])
	(amavisd-new, port 10024)
	with ESMTP id Wg8HzUNmckbL for <xen-devel@lists.xen.org>;
	Fri, 24 Aug 2012 18:39:19 +0100 (BST)
Received: from Jonathans-MacBook-Air.local (unknown [10.87.0.109])
	by mail1.gosport.uk.corp.abpni.net (Postfix) with ESMTPSA id
	E8FF65A0005
	for <xen-devel@lists.xen.org>; Fri, 24 Aug 2012 18:39:18 +0100 (BST)
Message-ID: <5037BC5E.3010806@abpni.co.uk>
Date: Fri, 24 Aug 2012 18:39:42 +0100
From: Jonathan Tripathy <jonnyt@abpni.co.uk>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8;
	rv:14.0) Gecko/20120713 Thunderbird/14.0
MIME-Version: 1.0
To: xen-devel@lists.xen.org
References: <50356E43.3030208@abpni.co.uk> <50356FB1.2070904@abpni.co.uk>
	<20120823060620.GE19851@reaktio.net>
	<20120823072218.GF19851@reaktio.net> <5035DB70.8090800@abpni.co.uk>
	<5035EC5A020000780008A62A@nat28.tlf.novell.com>
	<9eb3fd179342b6962f057e499b375c4e@abpni.co.uk>
	<5037AD6A020000780008A79D@nat28.tlf.novell.com>
In-Reply-To: <5037AD6A020000780008A79D@nat28.tlf.novell.com>
Subject: Re: [Xen-devel] Can't see more than 3.5GB of RAM / UEFI / no e820
 memory map detected
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 24/08/2012 16:35, Jan Beulich wrote:
>>>> On 23.08.12 at 10:07, Jonathan Tripathy <jonnyt@abpni.co.uk> wrote:
>> On 23.08.2012 08:39, Jan Beulich wrote:
>>>>>> Jonathan Tripathy <jonnyt@abpni.co.uk> 08/23/12 9:29 AM >>>
>>>> I'm guessing xen.efi (from 4.2) just replaces grub??
>>> "Replaces" is the wrong term. It simply makes the use of grub.efi (or
>>> however
>>> it's named) unnecessary.
>>>
>>>> Also, if I were to apply that patch from superuser
>>>> (http://serverfault.com/questions/342109/xen-only-sees-512mb-of-system-ram-sh
>> ould-be-8gb-uefi-boot),
>>>> would it have any bad consequences? I'm very security conscious as
>>>> the DomUs are untrusted...
>>> If you wanted to do that, I'd strongly recommend only removing the
>>> E801 code
>>> (obviously, from your log, you don't get E820 entries reported
>>> anyway, so this
>>> would be to not harm using hypervisors built from the same source on
>>> other
>>> systems) or simply swapping the E801 and multiboot handling order
>>> (which may
>>> actually be something to consider even upstream, so you'd be welcome
>>> to post
>>> such a patch).
>>>
>>> But in the end, in order to indeed use UEFI as intended, you'll need
>>> to switch to
>>> using xen.efi and an EFI-enabled Dom0 kernel (which upstream pv-ops
>>> for now
>>> isn't).
>> I'll submit a patch with the map entries in the if block swapped. I'll
>> make the patch, then test it for you guys, then post it. Do I just send
>> it to this list (for the benefit of others and for upstream
>> consideration)?
> Yes.
>
>> When you say "use UEFI as intended", is there something wrong with just
>> flipping the if block on its head?
> That flipping has nothing to do with UEFI, just with the way grub.efi
> works.
>
> Proper UEFI support implies use of EFI's boot and run time services,
> which only xen.efi currently does (and which, for those run time
> services that get made available for use by Dom0, also requires an
> enabled Dom0 kernel).
>
Thanks for the clarification.

So from a security/reliability standpoint, nothing will be affected by 
flipping the if block?

Sorry that I haven't submitted the patch yet, just been very busy. This 
is on my to-do list this weekend.

Thanks

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 24 18:02:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 18:02:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4yCI-0005H7-2o; Fri, 24 Aug 2012 18:01:30 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <waldi@debian.org>) id 1T4yCG-0005H2-Vs
	for xen-devel@lists.xensource.com; Fri, 24 Aug 2012 18:01:29 +0000
Received: from [85.158.143.99:57683] by server-3.bemta-4.messagelabs.com id
	46/15-08232-871C7305; Fri, 24 Aug 2012 18:01:28 +0000
X-Env-Sender: waldi@debian.org
X-Msg-Ref: server-12.tower-216.messagelabs.com!1345831287!22341360!1
X-Originating-IP: [82.139.201.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6586 invoked from network); 24 Aug 2012 18:01:27 -0000
Received: from wavehammer.waldi.eu.org (HELO wavehammer.waldi.eu.org)
	(82.139.201.20)
	by server-12.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 24 Aug 2012 18:01:27 -0000
Received: by wavehammer.waldi.eu.org (Postfix, from userid 1000)
	id 2B4955424B; Fri, 24 Aug 2012 20:01:24 +0200 (CEST)
Date: Fri, 24 Aug 2012 20:01:24 +0200
From: Bastian Blank <waldi@debian.org>
To: David Vrabel <david.vrabel@citrix.com>
Message-ID: <20120824180124.GA7998@wavehammer.waldi.eu.org>
Mail-Followup-To: Bastian Blank <waldi@debian.org>,
	David Vrabel <david.vrabel@citrix.com>,
	xen-devel@lists.xensource.com,
	Andres Lagar-Cavilla <andres@gridcentric.ca>,
	Keir Fraser <keir@xen.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <1345742026-10569-1-git-send-email-david.vrabel@citrix.com>
	<1345742026-10569-3-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1345742026-10569-3-git-send-email-david.vrabel@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: Andres Lagar-Cavilla <andres@gridcentric.ca>, xen-devel@lists.xensource.com,
	Keir Fraser <keir@xen.org>, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [PATCH 2/3] xen/privcmd: report paged-out frames in
 PRIVCMD_MMAPBATCH ioctl
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Aug 23, 2012 at 06:13:45PM +0100, David Vrabel wrote:
> +	if (ret < 0) {
> +		if (ret == -ENOENT)
> +			*mfnp |= 0x80000000U;
> +		else
> +			*mfnp |= 0xf0000000U;

Please don't use magic constants.

Bastian

-- 
Respect is a rational process
		-- McCoy, "The Galileo Seven", stardate 2822.3

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 24 18:43:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 18:43:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4yqU-0005k6-He; Fri, 24 Aug 2012 18:43:02 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1T4yqS-0005iU-TV
	for xen-devel@lists.xen.org; Fri, 24 Aug 2012 18:43:01 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1345833773!8739280!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjY0ODk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4778 invoked from network); 24 Aug 2012 18:42:54 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Aug 2012 18:42:54 -0000
X-IronPort-AV: E=Sophos;i="4.80,304,1344225600"; d="scan'208";a="35747073"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	24 Aug 2012 14:42:26 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 24 Aug 2012 14:42:26 -0400
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1T4ypt-0003tU-Nw;
	Fri, 24 Aug 2012 19:42:25 +0100
Message-ID: <5037CB11.9000308@citrix.com>
Date: Fri, 24 Aug 2012 19:42:25 +0100
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120714 Thunderbird/14.0
MIME-Version: 1.0
To: xen-devel@lists.xen.org, p.d@gmx.de, 
	Ian Campbell <ian.campbell@citrix.com>
References: <20120824164301.46800@gmx.net>
In-Reply-To: <20120824164301.46800@gmx.net>
X-Enigmail-Version: 1.4.3
Content-Type: multipart/mixed; boundary="------------060602090904000805090102"
Subject: Re: [Xen-devel] Errors in compilation // xl_cmdimpl.c:2733:11 ...
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--------------060602090904000805090102
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On 24/08/12 17:43, p.d@gmx.de wrote:
> nice time,
>
> Ian, I'm not sure, but I think after Your patch:
> http://xenbits.xen.org/hg/xen-unstable.hg/rev/4ca40e0559c3
>
> xen-tools (+qemu+seabios) will not be maked :)
>
> Here are last lines of "make -j7":
> ======================================================================
> gcc  -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -Wno-unused-but-set-variable   -D__XEN_TOOLS__ -MMD -MF ._libxl_save_msgs_callout.o.d  -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -fno-optimize-sibling-calls  -Werror -Wno-format-zero-length -Wmissing-declarations -Wno-declaration-after-statement -Wformat-nonliteral -I. -fPIC -pthread -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/libxc -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/include -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/libxc -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/include -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/xenstore -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/include -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/blktap2/control -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/blktap2/include -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/include -include /usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/config.h  -c -o _libxl_save_msgs_callout.o _libxl_save_msgs_callout.c
> xl_cmdimpl.c: In function ‘main_list’:
> xl_cmdimpl.c:2733:11: error: ‘hand’ may be used uninitialized in this function [-Werror=uninitialized]
> xl_cmdimpl.c:2689:14: note: ‘hand’ was declared here
> <snip>

Please try the attached patch.  I have fixed the error, and also
future-proofed the logic.

@Ian: the patch can be slimmed down if default_output_format can be
guaranteed not to change for the duration of this function call, but
my cursory glance at this otherwise-unfamiliar codebase can't say for
certain.

--=20
Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
T: +44 (0)1223 225 900, http://www.citrix.com


--------------060602090904000805090102
Content-Type: text/x-patch; name="fix-cmdimpl.patch"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="fix-cmdimpl.patch"

# HG changeset patch
# Parent 4ca40e0559c33205fb5163b10249a0fd5fda39b9
tools/xl: Fix uninitialized variable error.

c/s 25779:4ca40e0559c3 introduced a compilation error for any build system using
-Werror=uninitialized, such as the default CentOS 5.7 version of gcc.

And with good reason, because if the global libxl default_output_format is
neither OUTPUT_FORMAT_SXP nor OUTPUT_FORMAT_JSON, the variable hand will be used
before being initialised.

This patch fixes the warning, and further fixes the logic to work
correctly if a new OUTPUT_FORMAT is ever added to xl.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

diff -r 4ca40e0559c3 tools/libxl/xl_cmdimpl.c
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -2686,12 +2686,13 @@ static void list_domains_details(const l
     uint8_t *data;
     int i, len, rc;
 
-    yajl_gen hand;
+    yajl_gen hand = NULL;
     yajl_gen_status s;
     const char *buf;
     libxl_yajl_length yajl_len = 0;
-
-    if (default_output_format == OUTPUT_FORMAT_JSON) {
+    const enum output_format format = default_output_format;
+
+    if (format == OUTPUT_FORMAT_JSON) {
         hand = libxl_yajl_gen_alloc(NULL);
         if (!hand) {
             fprintf(stderr, "unable to allocate JSON generator\n");
@@ -2714,10 +2715,10 @@ static void list_domains_details(const l
         CHK_ERRNO(asprintf(&config_source, "<domid %d data>", info[i].domid));
         libxl_domain_config_init(&d_config);
         parse_config_data(config_source, (char *)data, len, &d_config, NULL);
-        if (default_output_format == OUTPUT_FORMAT_SXP)
+        if (format == OUTPUT_FORMAT_JSON)
+            s = printf_info_one_json(hand, info[i].domid, &d_config);
+        else
             printf_info_sexp(domid, &d_config);
-        else
-            s = printf_info_one_json(hand, info[i].domid, &d_config);
         libxl_domain_config_dispose(&d_config);
         free(data);
         free(config_source);
@@ -2725,7 +2726,7 @@ static void list_domains_details(const l
             goto out;
     }
 
-    if (default_output_format == OUTPUT_FORMAT_JSON) {
+    if (format == OUTPUT_FORMAT_JSON) {
         s = yajl_gen_array_close(hand);
         if (s != yajl_gen_status_ok)
             goto out;
@@ -2738,7 +2739,7 @@ static void list_domains_details(const l
     }
 
 out:
-    if (default_output_format == OUTPUT_FORMAT_JSON) {
+    if (format == OUTPUT_FORMAT_JSON) {
         yajl_gen_free(hand);
         if (s != yajl_gen_status_ok)
             fprintf(stderr,

--------------060602090904000805090102
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--------------060602090904000805090102--


From xen-devel-bounces@lists.xen.org Fri Aug 24 19:13:12 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 19:13:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4zJG-0006DB-Li; Fri, 24 Aug 2012 19:12:46 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T4zJF-0006D6-OH
	for xen-devel@lists.xen.org; Fri, 24 Aug 2012 19:12:45 +0000
Received: from [85.158.139.83:11396] by server-11.bemta-5.messagelabs.com id
	B1/FB-29296-C22D7305; Fri, 24 Aug 2012 19:12:44 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-182.messagelabs.com!1345835564!27064279!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA2NTk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18885 invoked from network); 24 Aug 2012 19:12:44 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-9.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Aug 2012 19:12:44 -0000
X-IronPort-AV: E=Sophos;i="4.80,304,1344211200"; d="scan'208";a="14174874"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	24 Aug 2012 19:12:42 +0000
Received: from [127.0.0.1] (10.80.16.67) by smtprelay.citrix.com
	(10.30.203.162) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 24 Aug 2012 20:12:35 +0100
Message-ID: <1345835562.4847.3.camel@dagon.hellion.org.uk>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Date: Fri, 24 Aug 2012 20:12:42 +0100
In-Reply-To: <5037CB11.9000308@citrix.com>
References: <20120824164301.46800@gmx.net> <5037CB11.9000308@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "p.d@gmx.de" <p.d@gmx.de>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Errors in compilation // xl_cmdimpl.c:2733:11 ...
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

T24gRnJpLCAyMDEyLTA4LTI0IGF0IDE5OjQyICswMTAwLCBBbmRyZXcgQ29vcGVyIHdyb3RlOgo+
IE9uIDI0LzA4LzEyIDE3OjQzLCBwLmRAZ214LmRlIHdyb3RlOgo+ID4gbmljZSB0aW1lLAo+ID4K
PiA+IElhbiwgSSdtIG5vdCBzdXJlLCBidXQgSSB0aGluayBhZnRlciBZb3VyIHBhdGNoOgo+ID4g
aHR0cDovL3hlbmJpdHMueGVuLm9yZy9oZy94ZW4tdW5zdGFibGUuaGcvcmV2LzRjYTQwZTA1NTlj
Mwo+ID4KPiA+IHhlbi10b29scyAoK3FlbXUrc2VhYmlvcykgd2lsbCBub3QgYmUgbWFrZWQgOikK
PiA+Cj4gPiBIZXJlIGFyZSBsYXN0IGxpbmVzIG9mICJtYWtlIC1qNyI6Cj4gPiA9PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PQo+ID4g
Z2NjICAtTzEgLWZuby1vbWl0LWZyYW1lLXBvaW50ZXIgLW02NCAtZyAtZm5vLXN0cmljdC1hbGlh
c2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1h
ZnRlci1zdGF0ZW1lbnQgLVduby11bnVzZWQtYnV0LXNldC12YXJpYWJsZSAgIC1EX19YRU5fVE9P
TFNfXyAtTU1EIC1NRiAuX2xpYnhsX3NhdmVfbXNnc19jYWxsb3V0Lm8uZCAgLURfTEFSR0VGSUxF
X1NPVVJDRSAtRF9MQVJHRUZJTEU2NF9TT1VSQ0UgLWZuby1vcHRpbWl6ZS1zaWJsaW5nLWNhbGxz
ICAtV2Vycm9yIC1Xbm8tZm9ybWF0LXplcm8tbGVuZ3RoIC1XbWlzc2luZy1kZWNsYXJhdGlvbnMg
LVduby1kZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgLVdmb3JtYXQtbm9ubGl0ZXJhbCAtSS4g
LWZQSUMgLXB0aHJlYWQgLUkvdXNyL3NyYy94ZW5fMjAxMl8wOF8yNF9yZXZfMjU3NzkvdG9vbHMv
bGlieGwvLi4vLi4vdG9vbHMvbGlieGMgLUkvdXNyL3NyYy94ZW5fMjAxMl8wOF8yNF9yZXZfMjU3
NzkvdG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvaW5jbHVkZSAtSS91c3Ivc3JjL3hlbl8yMDEyXzA4
XzI0X3Jldl8yNTc3OS90b29scy9saWJ4bC8uLi8uLi90b29scy9saWJ4YyAtSS91c3Ivc3JjL3hl
bl8yMDEyXzA4XzI0X3Jldl8yNTc3OS90b29scy9saWJ4bC8uLi8uLi90b29scy9pbmNsdWRlIC1J
L3Vzci9zcmMveGVuXzIwMTJfMDhfMjRfcmV2XzI1Nzc5L3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xz
L3hlbnN0b3JlIC1JL3Vzci9zcmMveGVuXzIwMTJfMDhfMjRfcmV2XzI1Nzc5L3Rvb2xzL2xpYnhs
Ly4uLy4uL3Rvb2xzL2luY2x1ZGUgLUkvdXNyL3NyYy94ZW5fMjAxMl8wOF8yNF9yZXZfMjU3Nzkv
dG9vbHMvbGlieGwvLi4vLi4vdG9vbHMvYmxrdGFwMi9jb250cm9sIC1JL3Vzci9zcmMveGVuXzIw
MTJfMDhfMjRfcmV2XzI1Nzc5L3Rvb2xzL2xpYnhsLy4uLy4uL3Rvb2xzL2Jsa3RhcDIvaW5jbHVk
ZSAtSS91c3Ivc3JjL3hlbl8yMDEyXzA4XzI0X3Jldl8yNTc3OS90b29scy9saWJ4bC8uLi8uLi90
b29scy9pbmNsdWRlIC1pbmNsdWRlIC91c3Ivc3JjL3hlbl8yMDEyXzA4XzI0X3Jldl8yNTc3OS90
b29scy9saWJ4bC8uLi8uLi90b29scy9jb25maWcuaCAgLWMgLW8gX2xpYnhsX3NhdmVfbXNnc19j
YWxsb3V0Lm8gX2xpYnhsX3NhdmVfbXNnc19jYWxsb3V0LmMgCj4gPiB4bF9jbWRpbXBsLmM6IElu
IGZ1bmN0aW9uIOKAmG1haW5fbGlzdOKAmToKPiA+IHhsX2NtZGltcGwuYzoyNzMzOjExOiBlcnJv
cjog4oCYaGFuZOKAmSBtYXkgYmUgdXNlZCB1bmluaXRpYWxpemVkIGluIHRoaXMgZnVuY3Rpb24g
Wy1XZXJyb3I9dW5pbml0aWFsaXplZF0KPiA+IHhsX2NtZGltcGwuYzoyNjg5OjE0OiBub3RlOiDi
gJhoYW5k4oCZIHdhcyBkZWNsYXJlZCBoZXJlCj4gPiA8c25pcD4KPiAKPiBQbGVhc2UgdHJ5IHRo
ZSBhdHRhY2hlZCBwYXRjaC4gIEkgaGF2ZSBmaXhlZCB0aGUgZXJyb3IsIGFuZCBhbHNvCj4gZnV0
dXJlLXByb29mZWQgdGhlIGxvZ2ljLgo+IAo+IEBJYW46IHRoZSBwYXRjaCBjYW4gYmUgc2xpbWVk
IGRvd24gaWYgZGVmYXVsdF9vdXRwdXRfZm9ybWF0IGNhbiBiZQo+IGd1YXJhbnRlZWQgbm90IHRv
IGNoYW5nZSBhY3Jvc3MgdGhlIGR1cmF0aW9uIG9mIHRoaXMgZnVuY3Rpb24gY2FsbCwgYnV0Cj4g
bXkgY3Vyc29yeSBnbGFuY2UgYXQgdGhpcyBvdGhlcndpc2UtdW5mYW1pbGFyIGNvZGViYXNlIGNh
bnQgc2F5IGZvciBjZXJ0YWluLgoKSXQgY2FuIG9ubHkgY2hhbmdlIGR1cmluZyBhcmd1bWVudCBw
YXJzaW5nLgoKZ2NjIGlzIGJlaW5nIGEgYml0IGR1bWIgaGVyZSBzaW5jZSBkZWZhdWx0X291dHB1
dF9mb3JtYXQgaXMgYSBzdGF0aWNhbGx5CmluaXRpYWxpc2VkIGVudW0gd2hvc2Ugb25seSB2YWx1
ZXMgYXJlIE9VVFBVVF9GT1JNQVRfSlNPTiBhbmQKT1VUUFVUX0ZPUk1BVF9TWFAuCgpJIHN1c3Bl
Y3QgdGhhdCBub3cgeW91IGhhdmUgcmUtd3JpdHRlbiBpdCBzbyB0aGF0IGhhbmQgaXMgb25seSB0
b3VjaGVkCmlmZiBmb3JtYXQgPT0gSlNPTiBhbmQgZm9ybWF0IGlzIGNvbnN0IHRoYXQgaW5pdGlh
bGlzYXRpb24gb2YgaGFuZCBpcyBubwpsb25nZXIgbmVjZXNzYXJ5LiBCdXQgbm9uZXRoZWxlc3M6
CkFja2VkLWJ5OiBJYW4gQ2FtcGJlbGwgPGlhbi5jYW1wYmVsbEBjaXRyaXguY29tPgoKSWFuLgoK
Cl9fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fClhlbi1kZXZl
bCBtYWlsaW5nIGxpc3QKWGVuLWRldmVsQGxpc3RzLnhlbi5vcmcKaHR0cDovL2xpc3RzLnhlbi5v
cmcveGVuLWRldmVsCg==

From xen-devel-bounces@lists.xen.org Fri Aug 24 19:24:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 19:24:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4zU0-0006N1-UZ; Fri, 24 Aug 2012 19:23:52 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1T4zTz-0006Mw-28
	for xen-devel@lists.xen.org; Fri, 24 Aug 2012 19:23:51 +0000
Received: from [85.158.143.35:57206] by server-2.bemta-4.messagelabs.com id
	69/74-21239-6C4D7305; Fri, 24 Aug 2012 19:23:50 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1345836228!13458377!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzM5MDk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19837 invoked from network); 24 Aug 2012 19:23:49 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Aug 2012 19:23:49 -0000
X-IronPort-AV: E=Sophos;i="4.80,304,1344225600"; d="scan'208";a="206185927"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	24 Aug 2012 15:23:47 -0400
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.213.0;
	Fri, 24 Aug 2012 15:23:47 -0400
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1T4zTv-0004YX-41;
	Fri, 24 Aug 2012 20:23:47 +0100
Message-ID: <5037D4C3.4060309@citrix.com>
Date: Fri, 24 Aug 2012 20:23:47 +0100
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120714 Thunderbird/14.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <20120824164301.46800@gmx.net> <5037CB11.9000308@citrix.com>
	<1345835562.4847.3.camel@dagon.hellion.org.uk>
In-Reply-To: <1345835562.4847.3.camel@dagon.hellion.org.uk>
X-Enigmail-Version: 1.4.3
Content-Type: multipart/mixed; boundary="------------050605020601030102020804"
Cc: "p.d@gmx.de" <p.d@gmx.de>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Errors in compilation // xl_cmdimpl.c:2733:11 ...
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--------------050605020601030102020804
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable


On 24/08/12 20:12, Ian Campbell wrote:
> On Fri, 2012-08-24 at 19:42 +0100, Andrew Cooper wrote:
>> On 24/08/12 17:43, p.d@gmx.de wrote:
>>> nice time,
>>>
>>> Ian, I'm not sure, but I think after Your patch:
>>> http://xenbits.xen.org/hg/xen-unstable.hg/rev/4ca40e0559c3
>>>
>>> xen-tools (+qemu+seabios) will not be maked :)
>>>
>>> Here are last lines of "make -j7":
>>> ==================================================================
>>> gcc  -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -Wno-unused-but-set-variable   -D__XEN_TOOLS__ -MMD -MF ._libxl_save_msgs_callout.o.d  -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -fno-optimize-sibling-calls  -Werror -Wno-format-zero-length -Wmissing-declarations -Wno-declaration-after-statement -Wformat-nonliteral -I. -fPIC -pthread -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/libxc -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/include -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/libxc -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/include -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/xenstore -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/include -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/blktap2/control -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/blktap2/include -I/usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/include -include /usr/src/xen_2012_08_24_rev_25779/tools/libxl/../../tools/config.h  -c -o _libxl_save_msgs_callout.o _libxl_save_msgs_callout.c
>>> xl_cmdimpl.c: In function ‘main_list’:
>>> xl_cmdimpl.c:2733:11: error: ‘hand’ may be used uninitialized in this function [-Werror=uninitialized]
>>> xl_cmdimpl.c:2689:14: note: ‘hand’ was declared here
>>> <snip>
>> Please try the attached patch.  I have fixed the error, and also
>> future-proofed the logic.
>>
>> @Ian: the patch can be slimmed down if default_output_format can be
>> guaranteed not to change across the duration of this function call, but
>> my cursory glance at this otherwise-unfamiliar codebase can't say for certain.
> It can only change during argument parsing.
>
> gcc is being a bit dumb here since default_output_format is a statically
> initialised enum whose only values are OUTPUT_FORMAT_JSON and
> OUTPUT_FORMAT_SXP.
>
> I suspect that now you have re-written it so that hand is only touched
> iff format == JSON and format is const that initialisation of hand is no
> longer necessary. But nonetheless:
> Acked-by: Ian Campbell <ian.campbell@citrix.com>
>
> Ian.
>

Sadly, hand being touched iff format == JSON was still not enough to
satisfy my version of GCC.

Attached is a far slimmer version which explicitly sets hand to NULL at
the top, and future-proofs the use of hand in the middle of the domain loop.

-- 
Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
T: +44 (0)1223 225 900, http://www.citrix.com


--------------050605020601030102020804
Content-Type: text/x-patch; name="fix-cmdimpl-v2.patch"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="fix-cmdimpl-v2.patch"

# HG changeset patch
# Parent 4ca40e0559c33205fb5163b10249a0fd5fda39b9
tools/xl: Fix uninitialized variable error.

c/s 25779:4ca40e0559c3 introduced a compilation error for any build system using
-Werror=uninitialized, such as the default CentOS 5.7 version of gcc.

And with good reason, because if the global libxl default_output_format is
neither OUTPUT_FORMAT_SXP nor OUTPUT_FORMAT_JSON, the variable hand will be used
before being initialised.

The attached patch fixes the warning, and further fixes the logic to work
correctly when a new OUTPUT_FORMAT is added to xl.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

diff -r 4ca40e0559c3 tools/libxl/xl_cmdimpl.c
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -2686,7 +2686,7 @@ static void list_domains_details(const l
     uint8_t *data;
     int i, len, rc;
 
-    yajl_gen hand;
+    yajl_gen hand = NULL;
     yajl_gen_status s;
     const char *buf;
     libxl_yajl_length yajl_len = 0;
@@ -2714,10 +2714,10 @@ static void list_domains_details(const l
         CHK_ERRNO(asprintf(&config_source, "<domid %d data>", info[i].domid));
         libxl_domain_config_init(&d_config);
         parse_config_data(config_source, (char *)data, len, &d_config, NULL);
-        if (default_output_format == OUTPUT_FORMAT_SXP)
+        if (default_output_format == OUTPUT_FORMAT_JSON)
+            s = printf_info_one_json(hand, info[i].domid, &d_config);
+        else
             printf_info_sexp(domid, &d_config);
-        else
-            s = printf_info_one_json(hand, info[i].domid, &d_config);
         libxl_domain_config_dispose(&d_config);
         free(data);
         free(config_source);

--------------050605020601030102020804
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--------------050605020601030102020804--


From xen-devel-bounces@lists.xen.org Fri Aug 24 19:49:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 19:49:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T4zsW-0006az-6h; Fri, 24 Aug 2012 19:49:12 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T4zsU-0006au-IE
	for xen-devel@lists.xen.org; Fri, 24 Aug 2012 19:49:10 +0000
Received: from [85.158.139.83:44684] by server-5.bemta-5.messagelabs.com id
	F9/34-31019-4BAD7305; Fri, 24 Aug 2012 19:49:08 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-182.messagelabs.com!1345837747!27676097!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=1.4 required=7.0 tests=INFO_TLD
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2622 invoked from network); 24 Aug 2012 19:49:07 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-182.messagelabs.com with SMTP;
	24 Aug 2012 19:49:07 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 24 Aug 2012 20:49:06 +0100
Message-Id: <5037E8C2020000780008A7CA@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 24 Aug 2012 20:49:06 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Stefano Stabellini" <stefano.stabellini@eu.citrix.com>
References: <503514280200007800096FF4@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208231441090.15568@kaball.uk.xensource.com>
	<5037AFBF020000780008A7AF@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208241828280.15568@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1208241828280.15568@kaball.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH,
 v2] x86/HVM: assorted RTC emulation adjustments
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 24.08.12 at 19:34, Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
>> > because it would be nice to be able to backport features like this one:
>> > 
>> > http://marc.info/?l=qemu-devel&m=134392375010304 
>> 
>> I agree this would be nice to have (albeit I'm not sure how much
>> the original problem is actually present in the Xen code, particularly
>> with the patch here applied, as I think it may implicitly clean up some
>> of the unneccesary active timers).
> 
> Maybe, but with your patch applied, are there going to be any timers
> running if the guest is not making use of the RTC?

Can't tell without in depth analysis, as I didn't pay too much
attention to this aspect of the code. I just vaguely recall that
in at least one case the changes might have resulted in not
re-arming a timer under some condition where it previously
got re-armed.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 24 20:00:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 20:00:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T503M-0006oY-Bt; Fri, 24 Aug 2012 20:00:24 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T503L-0006oT-0K
	for xen-devel@lists.xen.org; Fri, 24 Aug 2012 20:00:23 +0000
Received: from [85.158.138.51:49850] by server-3.bemta-3.messagelabs.com id
	21/47-13809-65DD7305; Fri, 24 Aug 2012 20:00:22 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-11.tower-174.messagelabs.com!1345838420!27805492!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6077 invoked from network); 24 Aug 2012 20:00:21 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-11.tower-174.messagelabs.com with SMTP;
	24 Aug 2012 20:00:21 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 24 Aug 2012 21:00:19 +0100
Message-Id: <5037EB61020000780008A7CF@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 24 Aug 2012 21:00:17 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>
References: <20120822214709.296550@gmx.net>
	<1345703962.23624.57.camel@dagon.hellion.org.uk>
	<5035E421020000780008A601@nat28.tlf.novell.com>
	<1345707105.12501.38.camel@zakaz.uk.xensource.com>
	<1345708742.12501.48.camel@zakaz.uk.xensource.com>
In-Reply-To: <1345708742.12501.48.camel@zakaz.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "p.d@gmx.de" <p.d@gmx.de>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] do not remove kernels or modules on
 uninstall. (Was: Re: make uninstall can delete xen-kernels)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 23.08.12 at 09:59, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Thu, 2012-08-23 at 08:31 +0100, Ian Campbell wrote:
>> On Thu, 2012-08-23 at 08:04 +0100, Jan Beulich wrote:
>> > >>> Ian Campbell <Ian.Campbell@citrix.com> 08/23/12 8:40 AM >>>
>> > >--- a/Makefile    Wed Aug 22 17:32:37 2012 +0100
>> > >+++ b/Makefile    Thu Aug 23 07:38:10 2012 +0100
>> > >@@ -228,8 +228,6 @@ uninstall:
>> > >    rm -f  $(D)$(SYSCONFIG_DIR)/xendomains
>> > >    rm -f  $(D)$(SYSCONFIG_DIR)/xencommons
>> > >    rm -rf $(D)/var/run/xen* $(D)/var/lib/xen*
>> > >-    rm -rf $(D)/boot/*xen*
>> > 
>> > But removing this line without replacement isn't right either - we at least
>> > need to undo what "make install" did. That may imply adding an
>> > uninstall-xen sub-target,
>> 
>> Right, I totally forgot about the hypervisor itself!
>> 
>> Perhaps this target should include a
>> 	$(MAKE) -C xen uninstall
>> since that is the Makefile which knows how to undo its own install
>> target.
> 
> Like this, which handles EFI too but not (yet) tools.

Looks good to me (also the tools one you sent later).

Thanks, Jan

> make dist-xen
> make DESTDIR=$(pwd)/dist/install uninstall
> 
> Leaves just the dist/install/boot dir which I don't think we need to
> bother cleaning up (I don't think rmdir --ignore-fail-on-non-empty is
> portable).
> 
> 8<------------------------------------
> # HG changeset patch
> # User Ian Campbell <ian.campbell@citrix.com>
> # Date 1345708184 -3600
> # Node ID 101956baa3469f5f338c661f1ceab23077bd432b
> # Parent  9cb256660bfcfdf20f869ea28881115d622ef1a4
> do not remove kernels or modules on uninstall.
> 
> The pattern used is very broad and will delete any kernel with xen in
> its filename, likewise modules, including those which come packages
> from the distribution etc.
> 
> I don't think this was ever the right thing to do but it is doubly
> wrong now that Xen does not even build or install a kernel by default.
> 
> Push cleanup of the installed hypervisor down into xen/Makefile so that
> it can cleanup exactly what it actually installs.
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> 
> diff -r 9cb256660bfc -r 101956baa346 Makefile
> --- a/Makefile	Thu Aug 23 08:28:42 2012 +0100
> +++ b/Makefile	Thu Aug 23 08:49:44 2012 +0100
> @@ -220,6 +220,7 @@ help:
>  uninstall: D=$(DESTDIR)
>  uninstall:
>  	[ -d $(D)$(XEN_CONFIG_DIR) ] && mv -f $(D)$(XEN_CONFIG_DIR) 
> $(D)$(XEN_CONFIG_DIR).old-`date +%s` || true
> +	$(MAKE) -C xen uninstall
>  	rm -rf $(D)$(CONFIG_DIR)/init.d/xendomains $(D)$(CONFIG_DIR)/init.d/xend
>  	rm -rf $(D)$(CONFIG_DIR)/init.d/xencommons 
> $(D)$(CONFIG_DIR)/init.d/xen-watchdog
>  	rm -rf $(D)$(CONFIG_DIR)/hotplug/xen-backend.agent
> @@ -228,8 +229,6 @@ uninstall:
>  	rm -f  $(D)$(SYSCONFIG_DIR)/xendomains
>  	rm -f  $(D)$(SYSCONFIG_DIR)/xencommons
>  	rm -rf $(D)/var/run/xen* $(D)/var/lib/xen*
> -	rm -rf $(D)/boot/*xen*
> -	rm -rf $(D)/lib/modules/*xen*
>  	rm -rf $(D)$(LIBDIR)/xen* $(D)$(BINDIR)/lomount
>  	rm -rf $(D)$(BINDIR)/cpuperf-perfcntr $(D)$(BINDIR)/cpuperf-xen
>  	rm -rf $(D)$(BINDIR)/xc_shadow
> diff -r 9cb256660bfc -r 101956baa346 xen/Makefile
> --- a/xen/Makefile	Thu Aug 23 08:28:42 2012 +0100
> +++ b/xen/Makefile	Thu Aug 23 08:49:44 2012 +0100
> @@ -20,8 +20,8 @@ default: build
>  .PHONY: dist
>  dist: install
>  
> -.PHONY: build install clean distclean cscope TAGS tags MAP gtags
> -build install debug clean distclean cscope TAGS tags MAP gtags::
> +.PHONY: build install uninstall clean distclean cscope TAGS tags MAP gtags
> +build install uninstall debug clean distclean cscope TAGS tags MAP gtags::
>  	$(MAKE) -f Rules.mk _$@
>  
>  .PHONY: _build
> @@ -48,6 +48,21 @@ _install: $(TARGET).gz
>  		fi; \
>  	fi
>  
> +.PHONY: _uninstall
> +_uninstall: D=$(DESTDIR)
> +_uninstall: T=$(notdir $(TARGET))
> +_uninstall:
> +	rm -f $(D)/boot/$(T)-$(XEN_FULLVERSION).gz
> +	rm -f $(D)/boot/$(T)-$(XEN_VERSION).$(XEN_SUBVERSION).gz
> +	rm -f $(D)/boot/$(T)-$(XEN_VERSION).gz
> +	rm -f $(D)/boot/$(T).gz
> +	rm -f $(D)/boot/$(T)-syms-$(XEN_FULLVERSION)
> +	rm -f $(D)$(EFI_DIR)/$(T)-$(XEN_FULLVERSION).efi
> +	rm -f $(D)$(EFI_DIR)/$(T)-$(XEN_VERSION).$(XEN_SUBVERSION).efi
> +	rm -f $(D)$(EFI_DIR)/$(T)-$(XEN_VERSION).efi
> +	rm -f $(D)$(EFI_DIR)/$(T).efi
> +	rm -f $(D)$(EFI_MOUNTPOINT)/efi/$(EFI_VENDOR)/$(T)-$(XEN_FULLVERSION).efi
> +
>  .PHONY: _debug
>  _debug:
>  	objdump -D -S $(TARGET)-syms > $(TARGET).s
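On the portability remark above: a minimal shell sketch of how a cleanup step can tolerate a non-empty directory using only POSIX tools, without GNU rmdir's `--ignore-fail-on-non-empty`. The paths are illustrative scratch paths, not Xen's real install layout.

```shell
# Sketch: emulate GNU rmdir's --ignore-fail-on-non-empty with POSIX
# tools only, so a 'make uninstall' style cleanup never aborts.
# Paths below are illustrative, not Xen's actual layout.
mkdir -p dist/install/boot
touch dist/install/boot/xen.gz        # pretend an install artifact remains

# rmdir fails on a non-empty directory; '|| true' swallows the failure
# so a shell running under 'set -e' (or a make recipe) carries on.
rmdir dist/install/boot 2>/dev/null || true

# The directory and its contents survive the failed rmdir.
test -f dist/install/boot/xen.gz && echo "boot dir kept"

rm -rf dist                           # clean up the scratch tree
```

This mirrors the decision in the thread to leave `dist/install/boot` behind rather than depend on a GNU-only flag.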



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 24 20:00:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 20:00:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T503M-0006oY-Bt; Fri, 24 Aug 2012 20:00:24 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T503L-0006oT-0K
	for xen-devel@lists.xen.org; Fri, 24 Aug 2012 20:00:23 +0000
Received: from [85.158.138.51:49850] by server-3.bemta-3.messagelabs.com id
	21/47-13809-65DD7305; Fri, 24 Aug 2012 20:00:22 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-11.tower-174.messagelabs.com!1345838420!27805492!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6077 invoked from network); 24 Aug 2012 20:00:21 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-11.tower-174.messagelabs.com with SMTP;
	24 Aug 2012 20:00:21 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 24 Aug 2012 21:00:19 +0100
Message-Id: <5037EB61020000780008A7CF@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 24 Aug 2012 21:00:17 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>
References: <20120822214709.296550@gmx.net>
	<1345703962.23624.57.camel@dagon.hellion.org.uk>
	<5035E421020000780008A601@nat28.tlf.novell.com>
	<1345707105.12501.38.camel@zakaz.uk.xensource.com>
	<1345708742.12501.48.camel@zakaz.uk.xensource.com>
In-Reply-To: <1345708742.12501.48.camel@zakaz.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "p.d@gmx.de" <p.d@gmx.de>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] do not remove kernels or modules on
 uninstall. (Was: Re: make uninstall can delete xen-kernels)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 23.08.12 at 09:59, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Thu, 2012-08-23 at 08:31 +0100, Ian Campbell wrote:
>> On Thu, 2012-08-23 at 08:04 +0100, Jan Beulich wrote:
>> > >>> Ian Campbell <Ian.Campbell@citrix.com> 08/23/12 8:40 AM >>>
>> > >--- a/Makefile    Wed Aug 22 17:32:37 2012 +0100
>> > >+++ b/Makefile    Thu Aug 23 07:38:10 2012 +0100
>> > >@@ -228,8 +228,6 @@ uninstall:
>> > >    rm -f  $(D)$(SYSCONFIG_DIR)/xendomains
>> > >    rm -f  $(D)$(SYSCONFIG_DIR)/xencommons
>> > >    rm -rf $(D)/var/run/xen* $(D)/var/lib/xen*
>> > >-    rm -rf $(D)/boot/*xen*
>> > 
>> > But removing this line without replacement isn't right either - we at least
>> > need to undo what "make install" did. That may imply adding an
>> > uninstall-xen sub-target,
>> 
>> Right, I totally forgot about the hypervisor itself!
>> 
>> Perhaps this target should include a
>> 	$(MAKE) -C xen uninstall
>> since that is the Makefile which knows how to undo its own install
>> target.
> 
> Like this, which handles EFI too but not (yet) tools.

Looks good to me (also the tools one you sent later).

Thanks, Jan

> make dist-xen
> make DESTDIR=$(pwd)/dist/install uninstall
> 
> Leaves just the dist/install/boot dir which I don't think we need to
> bother cleaning up (I don't think rmdir --ignore-fail-on-non-empty is
> portable).
> 
> 8<------------------------------------
> # HG changeset patch
> # User Ian Campbell <ian.campbell@citrix.com>
> # Date 1345708184 -3600
> # Node ID 101956baa3469f5f338c661f1ceab23077bd432b
> # Parent  9cb256660bfcfdf20f869ea28881115d622ef1a4
> do not remove kernels or modules on uninstall.
> 
> The pattern used is very broad and will delete any kernel with xen in
> its filename, and likewise modules, including those which come packaged
> by the distribution etc.
> 
> I don't think this was ever the right thing to do but it is doubly
> wrong now that Xen does not even build or install a kernel by default.
> 
> Push cleanup of the installed hypervisor down into xen/Makefile so that
> it can cleanup exactly what it actually installs.
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> 
> diff -r 9cb256660bfc -r 101956baa346 Makefile
> --- a/Makefile	Thu Aug 23 08:28:42 2012 +0100
> +++ b/Makefile	Thu Aug 23 08:49:44 2012 +0100
> @@ -220,6 +220,7 @@ help:
>  uninstall: D=$(DESTDIR)
>  uninstall:
>  	[ -d $(D)$(XEN_CONFIG_DIR) ] && mv -f $(D)$(XEN_CONFIG_DIR) 
> $(D)$(XEN_CONFIG_DIR).old-`date +%s` || true
> +	$(MAKE) -C xen uninstall
>  	rm -rf $(D)$(CONFIG_DIR)/init.d/xendomains $(D)$(CONFIG_DIR)/init.d/xend
>  	rm -rf $(D)$(CONFIG_DIR)/init.d/xencommons 
> $(D)$(CONFIG_DIR)/init.d/xen-watchdog
>  	rm -rf $(D)$(CONFIG_DIR)/hotplug/xen-backend.agent
> @@ -228,8 +229,6 @@ uninstall:
>  	rm -f  $(D)$(SYSCONFIG_DIR)/xendomains
>  	rm -f  $(D)$(SYSCONFIG_DIR)/xencommons
>  	rm -rf $(D)/var/run/xen* $(D)/var/lib/xen*
> -	rm -rf $(D)/boot/*xen*
> -	rm -rf $(D)/lib/modules/*xen*
>  	rm -rf $(D)$(LIBDIR)/xen* $(D)$(BINDIR)/lomount
>  	rm -rf $(D)$(BINDIR)/cpuperf-perfcntr $(D)$(BINDIR)/cpuperf-xen
>  	rm -rf $(D)$(BINDIR)/xc_shadow
> diff -r 9cb256660bfc -r 101956baa346 xen/Makefile
> --- a/xen/Makefile	Thu Aug 23 08:28:42 2012 +0100
> +++ b/xen/Makefile	Thu Aug 23 08:49:44 2012 +0100
> @@ -20,8 +20,8 @@ default: build
>  .PHONY: dist
>  dist: install
>  
> -.PHONY: build install clean distclean cscope TAGS tags MAP gtags
> -build install debug clean distclean cscope TAGS tags MAP gtags::
> +.PHONY: build install uninstall clean distclean cscope TAGS tags MAP gtags
> +build install uninstall debug clean distclean cscope TAGS tags MAP gtags::
>  	$(MAKE) -f Rules.mk _$@
>  
>  .PHONY: _build
> @@ -48,6 +48,21 @@ _install: $(TARGET).gz
>  		fi; \
>  	fi
>  
> +.PHONY: _uninstall
> +_uninstall: D=$(DESTDIR)
> +_uninstall: T=$(notdir $(TARGET))
> +_uninstall:
> +	rm -f $(D)/boot/$(T)-$(XEN_FULLVERSION).gz
> +	rm -f $(D)/boot/$(T)-$(XEN_VERSION).$(XEN_SUBVERSION).gz
> +	rm -f $(D)/boot/$(T)-$(XEN_VERSION).gz
> +	rm -f $(D)/boot/$(T).gz
> +	rm -f $(D)/boot/$(T)-syms-$(XEN_FULLVERSION)
> +	rm -f $(D)$(EFI_DIR)/$(T)-$(XEN_FULLVERSION).efi
> +	rm -f $(D)$(EFI_DIR)/$(T)-$(XEN_VERSION).$(XEN_SUBVERSION).efi
> +	rm -f $(D)$(EFI_DIR)/$(T)-$(XEN_VERSION).efi
> +	rm -f $(D)$(EFI_DIR)/$(T).efi
> +	rm -f $(D)$(EFI_MOUNTPOINT)/efi/$(EFI_VENDOR)/$(T)-$(XEN_FULLVERSION).efi
> +
>  .PHONY: _debug
>  _debug:
>  	objdump -D -S $(TARGET)-syms > $(TARGET).s
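
Ian's note above that `rmdir --ignore-fail-on-non-empty` is not portable (it is a GNU coreutils extension) can be worked around in plain POSIX shell by simply discarding rmdir's failure. A minimal sketch; the helper name and paths are illustrative, not from the patch:

```shell
# Portable stand-in for `rmdir --ignore-fail-on-non-empty`:
# POSIX rmdir fails on a non-empty directory, so suppress the
# error and carry on, which is all the uninstall target needs.
remove_dir_if_empty() {
    rmdir "$1" 2>/dev/null || true
}

# Demonstration under a temporary directory (paths illustrative).
base=$(mktemp -d)
mkdir -p "$base/boot"
remove_dir_if_empty "$base/boot"   # empty: removed
mkdir -p "$base/boot"
touch "$base/boot/xen.gz"
remove_dir_if_empty "$base/boot"   # non-empty: left alone, no error
rm -rf "$base"
```

In a Makefile recipe the same idea is the one-liner `rmdir $(D)/boot 2>/dev/null || true`.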



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 24 20:06:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 20:06:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T5093-0006zY-54; Fri, 24 Aug 2012 20:06:17 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1T5091-0006zS-Fo
	for xen-devel@lists.xen.org; Fri, 24 Aug 2012 20:06:15 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1345838754!1912565!1
X-Originating-IP: [209.85.212.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6702 invoked from network); 24 Aug 2012 20:05:56 -0000
Received: from mail-wi0-f173.google.com (HELO mail-wi0-f173.google.com)
	(209.85.212.173)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Aug 2012 20:05:56 -0000
Received: by wibhm6 with SMTP id hm6so1094146wib.14
	for <xen-devel@lists.xen.org>; Fri, 24 Aug 2012 13:05:54 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=me/HN34SKFXyoIbloMZ6brgvPip/2DiNCX4dB5zyRto=;
	b=T/bRnYIvJRO03weC2ukCQwkOtuK6KwIIswUPufKsi96BWawF7zL1kRqwBuSHtTQ+Hq
	KXBeWJNNNnU4E1yZRMMNcDI5CZVI2KfJVxws4J2W7wpjvEsJdQhqrvK8S9SejCh70OE2
	QsATP1SL8fqB7D2ueegC+ZtZTi66a0pF1U98m1uCodbgvTjwOOvl87YONsB4EjnNRxmx
	YCy+XBwKpolv+5Uj5OjiTQXmU2Xw2CbpNjMf8Ee7C7pT9oR9ukba0c7szU8vMztZ3YuM
	p6balMOzq+SFHfqBtofQ9MA96wv6KCnwgtFEdn0LdpgJOSq+FE2cpcPHmDBrBMrOhDpF
	vk5g==
Received: by 10.180.85.167 with SMTP id i7mr7963300wiz.8.1345838754273;
	Fri, 24 Aug 2012 13:05:54 -0700 (PDT)
Received: from [192.168.1.68] (host86-184-74-117.range86-184.btcentralplus.com.
	[86.184.74.117])
	by mx.google.com with ESMTPS id bc2sm269322wib.0.2012.08.24.13.05.53
	(version=SSLv3 cipher=OTHER); Fri, 24 Aug 2012 13:05:53 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.32.0.111121
Date: Fri, 24 Aug 2012 21:05:50 +0100
From: Keir Fraser <keir.xen@gmail.com>
To: Jonathan Tripathy <jonnyt@abpni.co.uk>,
	<xen-devel@lists.xen.org>
Message-ID: <CC5D9D2E.3CD9A%keir.xen@gmail.com>
Thread-Topic: [Xen-devel] Can't see more than 3.5GB of RAM / UEFI / no e820
	memory map detected
Thread-Index: Ac2CM9tuGR7Ii8ds4k+RXiz3lJqHug==
In-Reply-To: <5037BC5E.3010806@abpni.co.uk>
Mime-version: 1.0
Subject: Re: [Xen-devel] Can't see more than 3.5GB of RAM / UEFI / no e820
 memory map detected
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 24/08/2012 18:39, "Jonathan Tripathy" <jonnyt@abpni.co.uk> wrote:

>> That flipping has nothing to do with UEFI, just with the way grub.efi
>> works.
>> 
>> Proper UEFI support implies use of EFI's boot and run time services,
>> which only xen.efi currently does (and which, for those run time
>> services that get made available for use by Dom0, also requires an
>> enabled Dom0 kernel).
>> 
> Thanks for the clarification.
> 
> So from a security/reliability standpoint, nothing will be affected by
> flipping the if block?

It should simply make it more likely that Xen sees all your RAM. ;)

 -- Keir



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 24 20:07:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 20:07:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T509W-00071f-Ik; Fri, 24 Aug 2012 20:06:46 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T509V-00071T-CZ
	for xen-devel@lists.xen.org; Fri, 24 Aug 2012 20:06:45 +0000
Received: from [85.158.138.51:20385] by server-7.bemta-3.messagelabs.com id
	22/43-01906-4DED7305; Fri, 24 Aug 2012 20:06:44 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-10.tower-174.messagelabs.com!1345838803!23739514!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26278 invoked from network); 24 Aug 2012 20:06:43 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-10.tower-174.messagelabs.com with SMTP;
	24 Aug 2012 20:06:43 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 24 Aug 2012 21:06:43 +0100
Message-Id: <5037ECE2020000780008A7D5@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 24 Aug 2012 21:06:42 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Jean Guyader" <jean.guyader@gmail.com>
References: <1344023454-31425-1-git-send-email-jean.guyader@citrix.com>
	<1344023454-31425-6-git-send-email-jean.guyader@citrix.com>
	<501FA05C0200007800092CD7@nat28.tlf.novell.com>
	<CAEBdQ90ObLybAxzYceEQyEVtpP7mMmPrn7NryH3h-+dTXBNDUw@mail.gmail.com>
In-Reply-To: <CAEBdQ90ObLybAxzYceEQyEVtpP7mMmPrn7NryH3h-+dTXBNDUw@mail.gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Jean Guyader <jean.guyader@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 5/5] xen: Add V4V implementation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 23.08.12 at 13:57, Jean Guyader <jean.guyader@gmail.com> wrote:
> On 6 August 2012 09:45, Jan Beulich <JBeulich@suse.com> wrote:
>>>>> On 03.08.12 at 21:50, Jean Guyader <jean.guyader@citrix.com> wrote:
>>>--- /dev/null
>>>+++ b/xen/include/public/v4v.h
>>>...
>>>+#define V4V_DOMID_ANY           0x7fffU
>>
>> I think I asked this before - why not use one of the pre-existing
>> DOMID values? And if there is a good reason, it should be said
>> here in a comment, to avoid the same question being asked
>> later again.
>>
>>>...
>>>+typedef uint64_t v4v_pfn_t;
>>
>> We already have xen_pfn_t, so why do you need yet another
>> flavor?
>>
>>>...
>>>+struct v4v_info
>>>+{
>>>+    uint64_t ring_magic;
>>>+    uint64_t data_magic;
>>>+    evtchn_port_t evtchn;
>>
>> Missing padding at the end?
>>
>>>+};
>>>+typedef struct v4v_info v4v_info_t;
>>>+
>>>+#define V4V_ROUNDUP(a) (((a) +0xf ) & ~0xf)
>>
>> Doesn't seem to belong here. Or is the subsequent comment
>> actually related to this (in which case it should be moved ahead
>> of the definition and made to match it).
>>
>>>+/*
>>>+ * Messages on the ring are padded to 128 bits
>>>+ * Len here refers to the exact length of the data not including the
>>>+ * 128 bit header. The message uses
>>>+ * ((len +0xf) & ~0xf) + sizeof(v4v_ring_message_header) bytes
>>>+ */
>>>...
>>>+/*
>>>+ * HYPERCALLS
>>>+ */
>>>...
>>
>> In the block below here, please get the naming (do_v4v_op()
>> vs v4v_hypercall()) and the use of newlines (either always one
>> or always two between individual hypercall descriptions)
>> consistent. Hmm, even the descriptions don't seem to always
>> match the definitions (not really obvious because apparently
>> again the descriptions follow the definitions, whereas the
>> opposite is the usual way to arrange things).
>>
>>>--- /dev/null
>>>+++ b/xen/include/xen/v4v_utils.h
>>>...
>>>+/* Compiler specific hacks */
>>>+#if defined(__GNUC__)
>>>+# define V4V_UNUSED __attribute__ ((unused))
>>>+# ifndef __STRICT_ANSI__
>>>+#  define V4V_INLINE inline
>>>+# else
>>>+#  define V4V_INLINE
>>>+# endif
>>>+#else /* !__GNUC__ */
>>>+# define V4V_UNUSED
>>>+# define V4V_INLINE
>>>+#endif
>>
>> This suggests the header is really intended to be public?
>>
>>>...
>>>+static V4V_INLINE uint32_t
>>>+v4v_ring_bytes_to_read (volatile struct v4v_ring *r)
>>
>> No space between function name and opening parenthesis
>> (throughout this file).
>>
>>>...
>>>+V4V_UNUSED static V4V_INLINE ssize_t
>>
>> V4V_UNUSED? Doesn't make sense in conjunction with
>> V4V_INLINE, at least as long as you're using GNU extensions
>> anyway (see above as to the disposition of the header).
>>
>>>+v4v_copy_out (struct v4v_ring *r, struct v4v_addr *from, uint32_t * 
> protocol,
>>>+              void *_buf, size_t t, int consume)
>>
>> Dead functions shouldn't be placed here.
>>
>>>...
>>>+static ssize_t
>>>+v4v_copy_out_offset (struct v4v_ring *r, struct v4v_addr *from,
>>>+                     uint32_t * protocol, void *_buf, size_t t, int consume,
>>>+                     size_t skip) V4V_UNUSED;
>>>+
>>>+V4V_INLINE static ssize_t
>>>+v4v_copy_out_offset (struct v4v_ring *r, struct v4v_addr *from,
>>>+                     uint32_t * protocol, void *_buf, size_t t, int consume,
>>>+                     size_t skip)
>>>+{
>>
>> What's the point of having a declaration followed immediately by
>> a definition? Also, the function is dead too.
>>
> 
> This file (v4v_utils.h) has utility functions that could be used by
> drivers. We don't use them in Xen, but we thought it would be
> convenient to have such functions accessible for anyone writing a
> v4v driver.
> 
> What would be the right place for those?

I think public/io/ would be the best matching place, but it's
certainly also not ideal. Maybe we ought to introduce public/lib/
or some such for it?

But then again I don't see how this comment of yours relates to
the earlier comments I made.

Jan
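
The `V4V_ROUNDUP` macro and the padding rule quoted above can be checked with a quick shell re-implementation; the 16-byte header size below is an assumed stand-in for `sizeof(v4v_ring_message_header)`, which the quoted comment describes as 128 bits:

```shell
# V4V_ROUNDUP from the quoted patch pads a byte count up to the
# next 16-byte (128-bit) boundary: ((a) + 0xf) & ~0xf.
v4v_roundup() {
    echo $(( ($1 + 15) & ~15 ))
}

# Per the quoted comment, a message of `len` payload bytes uses
# v4v_roundup(len) + sizeof(v4v_ring_message_header) bytes on the
# ring; the 16-byte header size here is an assumption.
ring_usage() {
    echo $(( $(v4v_roundup "$1") + 16 ))
}

v4v_roundup 1     # -> 16
v4v_roundup 16    # -> 16
v4v_roundup 100   # -> 112
ring_usage 100    # -> 128
```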


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 24 21:32:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 21:32:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T51Tc-0007Zf-AZ; Fri, 24 Aug 2012 21:31:36 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T51Ta-0007Za-RQ
	for xen-devel@lists.xensource.com; Fri, 24 Aug 2012 21:31:35 +0000
Received: from [85.158.138.51:53939] by server-7.bemta-3.messagelabs.com id
	89/FD-01906-5B2F7305; Fri, 24 Aug 2012 21:31:33 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-13.tower-174.messagelabs.com!1345843892!9013613!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA2NTk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10382 invoked from network); 24 Aug 2012 21:31:33 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-13.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Aug 2012 21:31:33 -0000
X-IronPort-AV: E=Sophos;i="4.80,304,1344211200"; d="scan'208";a="14175687"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	24 Aug 2012 21:31:27 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 24 Aug 2012 22:31:27 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1T51TT-00058d-4w;
	Fri, 24 Aug 2012 21:31:27 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1T51TS-0000I0-S3;
	Fri, 24 Aug 2012 22:31:26 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13627-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 24 Aug 2012 22:31:26 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13627: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13627 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13627/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-win-vcpus1 12 guest-localmigrate/x10   fail REGR. vs. 13625
 test-amd64-amd64-win          5 xen-boot                  fail REGR. vs. 13625
 test-i386-i386-xl-win         5 xen-boot                  fail REGR. vs. 13625
 test-amd64-amd64-pair         8 xen-boot/dst_host         fail REGR. vs. 13625
 test-amd64-amd64-pair         7 xen-boot/src_host         fail REGR. vs. 13625

Regressions which are regarded as allowable (not blocking):
From xen-devel-bounces@lists.xen.org Fri Aug 24 21:32:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 21:32:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T51Tc-0007Zf-AZ; Fri, 24 Aug 2012 21:31:36 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T51Ta-0007Za-RQ
	for xen-devel@lists.xensource.com; Fri, 24 Aug 2012 21:31:35 +0000
Received: from [85.158.138.51:53939] by server-7.bemta-3.messagelabs.com id
	89/FD-01906-5B2F7305; Fri, 24 Aug 2012 21:31:33 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-13.tower-174.messagelabs.com!1345843892!9013613!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA2NTk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10382 invoked from network); 24 Aug 2012 21:31:33 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-13.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Aug 2012 21:31:33 -0000
X-IronPort-AV: E=Sophos;i="4.80,304,1344211200"; d="scan'208";a="14175687"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	24 Aug 2012 21:31:27 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Fri, 24 Aug 2012 22:31:27 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1T51TT-00058d-4w;
	Fri, 24 Aug 2012 21:31:27 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1T51TS-0000I0-S3;
	Fri, 24 Aug 2012 22:31:26 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13627-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 24 Aug 2012 22:31:26 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13627: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13627 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13627/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-win-vcpus1 12 guest-localmigrate/x10   fail REGR. vs. 13625
 test-amd64-amd64-win          5 xen-boot                  fail REGR. vs. 13625
 test-i386-i386-xl-win         5 xen-boot                  fail REGR. vs. 13625
 test-amd64-amd64-pair         8 xen-boot/dst_host         fail REGR. vs. 13625
 test-amd64-amd64-pair         7 xen-boot/src_host         fail REGR. vs. 13625

Regressions which are regarded as allowable (not blocking):
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13625
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13625
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13625
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13625

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  1126b3079bef
baseline version:
 xen                  4ca40e0559c3

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Keir Fraser <keir@xen.org>
  Zhang Xiantao <xiantao.zhang@intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   25784:1126b3079bef
tag:         tip
user:        Ian Jackson <Ian.Jackson@eu.citrix.com>
date:        Fri Aug 24 12:38:18 2012 +0100
    
    libxl: Rerun bison
    
    This updates libxlu_cfg_y.[ch] to code generated by bison from
    Debian squeeze (1:2.4.1.dfsg-3 i386).
    
    There should be no functional change since there is no change to the
    source file, but we will inherit bugfixes and behavioural changes from
    the new version of bison.  So this is more a matter of hope than
    knowledge.
    
    Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
changeset:   25783:d4f854c3e732
user:        Ian Jackson <Ian.Jackson@eu.citrix.com>
date:        Fri Aug 24 12:38:16 2012 +0100
    
    libxl: Rerun flex
    
    This undoes some systematic changes which were made to
    libxlu_cfg_l.[ch] along with manually-edited files (eg, whitespace
    changes, emacs local variables) and returns these two files to exactly
    the output of flex (Debian squeeze 2.5.35-10 i386).
    
    Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
changeset:   25782:24dbd9d4f340
user:        Ian Jackson <Ian.Jackson@eu.citrix.com>
date:        Fri Aug 24 12:38:14 2012 +0100
    
    libxl: provide "make realclean" target
    
    This removes all the autogenerated files.
    
    Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
changeset:   25781:985e836dff8b
user:        Zhang Xiantao <xiantao.zhang@intel.com>
date:        Fri Aug 24 09:49:47 2012 +0100
    
    nested vmx: Don't set bit 55 in IA32_VMX_BASIC_MSR
    
    None of the related IA32_VMX_TRUE_*_MSRs are implemented,
    so set this bit to 0; otherwise the L1 VMM may
    get incorrect default1 class settings.
    
    Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
    Committed-by: Keir Fraser <keir@xen.org>
    
    
changeset:   25780:42f959fec02d
user:        Zhang Xiantao <xiantao.zhang@intel.com>
date:        Fri Aug 24 09:49:14 2012 +0100
    
    nested vmx: VM_ENTRY_IA32E_MODE shouldn't be in default1 class
    for IA32_VM_ENTRY_CTLS_MSR.
    
    If set to 1, the L2 guest's paging mode may be mis-judged
    and mis-set.
    
    Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
    Committed-by: Keir Fraser <keir@xen.org>
    
    
changeset:   25779:4ca40e0559c3
user:        Ian Campbell <ian.campbell@citrix.com>
date:        Thu Aug 23 19:12:28 2012 +0100
    
    xl: make "xl list -l" proper JSON
    
    Bastian Blank reports that the output of this command is just multiple
    JSON objects concatenated and is not a single properly formed JSON
    object.
    
    Fix this by wrapping in an array. This turned out to be a bit more
    intrusive than I was expecting due to the requirement to keep
    supporting the SXP output mode.
    
    Python's json module is happy to parse the result...
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    
    
(qemu changes not included)
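The "xl list -l" fix in changeset 25779 above is easy to demonstrate with Python's json module (which the commit message itself mentions): concatenated JSON objects are rejected, while the array-wrapped form parses cleanly. The "domid" field name below is only illustrative, not the exact xl output:

```python
import json

# Old output style: several JSON objects simply concatenated.
concatenated = '{"domid": 0}\n{"domid": 1}\n'
# Fixed output style: one well-formed JSON array.
wrapped = '[{"domid": 0}, {"domid": 1}]'

def parses(text):
    """Return True if text is a single valid JSON document."""
    try:
        json.loads(text)
        return True
    except ValueError:  # json.JSONDecodeError subclasses ValueError
        return False

# The concatenated form fails ("Extra data"); the array form parses.
assert not parses(concatenated)
domains = json.loads(wrapped)
assert [d["domid"] for d in domains] == [0, 1]
```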

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 24 22:11:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 22:11:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T525r-0007tz-NI; Fri, 24 Aug 2012 22:11:07 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T525p-0007tu-T1
	for xen-devel@lists.xen.org; Fri, 24 Aug 2012 22:11:06 +0000
Received: from [85.158.143.35:3009] by server-1.bemta-4.messagelabs.com id
	49/C7-12504-9FBF7305; Fri, 24 Aug 2012 22:11:05 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1345846263!16125294!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12949 invoked from network); 24 Aug 2012 22:11:03 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-14.tower-21.messagelabs.com with SMTP;
	24 Aug 2012 22:11:03 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 24 Aug 2012 23:12:38 +0100
Message-Id: <50380A04020000780008A7E4@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 24 Aug 2012 23:11:00 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>, "Ben Guthro" <ben@guthro.net>
References: <CAOvdn6WwgBD-2yf1uu3m9-16=Tb5KUrybXf2KibtnxKbP7YtwA@mail.gmail.com>
	<5024CB720200007800094124@nat28.tlf.novell.com>
	<CAOvdn6VVJdhWVYa-gNVt3euE4AbGi1TijUCUd1wGzv0SCWEUYA@mail.gmail.com>
	<CAOvdn6USnG9Y4MNBhrDZmob0oJ8BgFfKTvFEhcKbXDxNmzfZ-Q@mail.gmail.com>
	<502B75BA0200007800094FD4@nat28.tlf.novell.com>
	<CAOvdn6Ww5oEj6Xg=XT4dWCYWUBuOdPqLz-CqNjz4HmCScE+==g@mail.gmail.com>
	<CAOvdn6Wsjds_dmZ0dwDTYmD-o-OqjBH5-b9mitAuUADE+A4_ag@mail.gmail.com>
	<502BB8FD02000078000951E0@nat28.tlf.novell.com>
	<CAOvdn6VD6bp13F-tvYdkBsj4hkhE7PTGZrapndtdHrDKeZqccg@mail.gmail.com>
	<502CCC0002000078000955C4@nat28.tlf.novell.com>
	<CAOvdn6VEuX7R2_wz74ZvCxMjC-=YUir9kP5sdxEVaKVspRUd_w@mail.gmail.com>
	<502CF0A8020000780009574E@nat28.tlf.novell.com>
	<CAOvdn6VsZvn_1TCWYeuyY5YuTcP8=miK2KwE4583nsj6Qjb_vg@mail.gmail.com>
	<CAOvdn6UZ61363sKb6N_2pCeOBt-RzHK96Vxudmey4NxmUCHoDQ@mail.gmail.com>
	<502E3BC00200007800095DAB@nat28.tlf.novell.com>
	<CAOvdn6VJDqKA=a6Z71jtew5jtTZzOF6Fb4yCSyQQKA62iw_4Dw@mail.gmail.com>
	<50367858.4060701@citrix.com>
In-Reply-To: <50367858.4060701@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	John Baboval <john.baboval@citrix.com>,
	ThomasGoetz <thomas.goetz@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen4.2 S3 regression?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 23.08.12 at 20:37, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> On 23/08/12 19:03, Ben Guthro wrote:
>> I did some more bisecting here, and I came up with another changeset
>> that seems to be problematic, Re: IRQs
>>
>> After bisecting the problem discussed earlier in this thread to the
>> changeset below,
>> http://xenbits.xen.org/hg/xen-unstable.hg/rev/0695a5cdcb42 
>>
>>
>> I worked past that issue by the following hack:
>>
>> --- a/xen/common/event_channel.c
>> +++ b/xen/common/event_channel.c
>> @@ -1103,7 +1103,7 @@ void evtchn_destroy_final(struct domain *d)
>>  void evtchn_move_pirqs(struct vcpu *v)
>>  {
>>      struct domain *d = v->domain;
>> -    const cpumask_t *mask = cpumask_of(v->processor);
>> +    //const cpumask_t *mask = cpumask_of(v->processor);
>>      unsigned int port;
>>      struct evtchn *chn;
>>
>> @@ -1111,7 +1111,9 @@ void evtchn_move_pirqs(struct vcpu *v)
>>      for ( port = v->pirq_evtchn_head; port; port = chn->u.pirq.next_port )
>>      {
>>          chn = evtchn_from_port(d, port);
>> +#if 0
>>          pirq_set_affinity(d, chn->u.pirq.irq, mask);
>> +#endif
>>      }
>>      spin_unlock(&d->event_lock);
>>  }
>>
>>
>> This seemed to work for this rather old changeset, but it was not
>> sufficient to fix it against the 4.1, or unstable trees.
>>
>> I further bisected, in combination with this hack, and found the
>> following changeset to also be problematic:
>>
>> http://xenbits.xen.org/hg/xen-unstable.hg/rev/c2cb776a5365 
>>
>>
>> That is, before this change I could resume reliably (with the hack
>> above) - and after I could not.
>> This was surprising to me, as this change also looks rather innocuous.
> 
> And by the looks of that changeset, the logic in fixup_irqs() in irq.c
> was changed.
> 
> Jan: The commit message says "simplify operations [in] a few cases". 
> Was the change in fixup_irqs() deliberate?

Yes, it was: there's no need to break/adjust the affinity if it
continues to be a subset of cpu_online_map (i.e. there's no
need for the two to match exactly). A similar change was also made
to Linux's fixup_irqs() later on, without any problems that I'm
aware of.
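The subset check described here can be modeled with plain sets (a sketch only; the real hypervisor code operates on cpumask_t values via Xen's cpumask helpers, and the fallback behaviour below is an assumption for illustration):

```python
def fixup_irq_affinity(affinity, online):
    """Model of the fixup_irqs() behaviour discussed above: leave the
    IRQ affinity untouched while it is still a subset of the online
    CPUs, and only narrow it when it is not."""
    if affinity <= online:              # still a subset: nothing to adjust
        return set(affinity)
    narrowed = affinity & online        # drop CPUs that went offline
    return narrowed if narrowed else set(online)  # last resort: any online CPU

# A strict subset survives unrelated CPUs going away; a non-subset is narrowed.
assert fixup_irq_affinity({0, 1}, {0, 1, 2, 3}) == {0, 1}
assert fixup_irq_affinity({2, 5}, {0, 1, 2, 3}) == {2}
assert fixup_irq_affinity({6, 7}, {0, 1}) == {0, 1}
```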

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Fri Aug 24 22:17:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 22:17:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T52Bb-00082K-Gi; Fri, 24 Aug 2012 22:17:03 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T52BZ-00082D-Fj
	for xen-devel@lists.xen.org; Fri, 24 Aug 2012 22:17:01 +0000
Received: from [85.158.143.35:9868] by server-1.bemta-4.messagelabs.com id
	30/69-12504-C5DF7305; Fri, 24 Aug 2012 22:17:00 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1345846618!12738466!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30646 invoked from network); 24 Aug 2012 22:16:58 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-21.messagelabs.com with SMTP;
	24 Aug 2012 22:16:58 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 24 Aug 2012 23:18:34 +0100
Message-Id: <50380B69020000780008A7EC@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 24 Aug 2012 23:16:57 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>, "Ben Guthro" <ben@guthro.net>
References: <50367C78.80608@citrix.com>
	<CAOvdn6U_mSNQBbeboipKHuRJzcbrCe1Kj7ZY9=7N6s--AMESmQ@mail.gmail.com>
	<503686A7.5050206@citrix.com>
	<CAOvdn6U5Kmsfv9e=Un8qNR_mbM-V2x-v7Ork9S+saj6EjC-sEA@mail.gmail.com>
	<CAOvdn6VZVVZUsUASowme-t87s8JW6WkGqHRRh64YYC24k7BQLA@mail.gmail.com>
In-Reply-To: <CAOvdn6VZVVZUsUASowme-t87s8JW6WkGqHRRh64YYC24k7BQLA@mail.gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	John Baboval <john.baboval@citrix.com>,
	Thomas Goetz <thomas.goetz@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen4.2 S3 regression?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 24.08.12 at 17:10, Ben Guthro <ben@guthro.net> wrote:
> The attached patch is essentially the change you suggested, plus a
> check for NULL in irq_complete_move()
> 
> This patch seems to fix some of the issues I was seeing, but not all.
> MSIs are now delivered after a handful of suspend/resumes, which
> is the issue I was setting out to solve here.

But I'm afraid this is just masking some other problem (see my
response to Andrew's mail that I just sent).

Furthermore, the NULL check here seems pretty odd - I'd be very
curious what code path could get us there with the IRQ regs
pointer being NULL. It would likely be that code path that needs
fixing, not irq_complete_move(). Could you add a call to
dump_execution_state() to the early return path that you added?

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 24 22:49:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 22:49:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T52gd-0008Hw-BE; Fri, 24 Aug 2012 22:49:07 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T52gb-0008Hr-Ev
	for xen-devel@lists.xen.org; Fri, 24 Aug 2012 22:49:05 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1345848539!8556926!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 763 invoked from network); 24 Aug 2012 22:48:59 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-27.messagelabs.com with SMTP;
	24 Aug 2012 22:48:59 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 24 Aug 2012 23:48:58 +0100
Message-Id: <503812E9020000780008A7F9@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 24 Aug 2012 23:48:57 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: <xen-devel@lists.xen.org>,"Petr Tesarik" <ptesarik@suse.cz>
References: <201208241450.13698.ptesarik@suse.cz>
In-Reply-To: <201208241450.13698.ptesarik@suse.cz>
Mime-Version: 1.0
Content-Disposition: inline
Subject: Re: [Xen-devel] Dumpfile filtering and PGC_xxx flags
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 24.08.12 at 14:50, Petr Tesarik <ptesarik@suse.cz> wrote:
> Hello folks,
> 
> I've been trying to add support for xen-4.0+ to makedumpfile (a utility that
> can filter out some content from a kernel dump file). In particular, I'm now
> struggling with implementing option "-X", which should filter out all domU
> pages, but keep hypervisor internal data and dom0 pages. Unused pages (free
> pages, broken pages, offlined pages, etc.) should also be filtered out,
> because they are usually not needed for dump analysis.
> 
> I'm relying on the contents of frame_table to do the job, but I'm lost in
> the hierarchy of PGC_xxx flags. My first naive idea was that I could keep
> pages that have:
> 
> 1. PGC_allocated and
> 2. the right owner (dom_xen, dom_io, or dom0).

That looks reasonable.

> But that doesn't include Xen internal structures. In fact, the page_info
> structs for pages corresponding to Xen code and static data seem to be
> completely uninitialized (all zero).

Yes, because the page allocator never gets to see those pages.
But for the (static) Xen image it ought to be possible to determine
which pages it consists of without relying on struct page_info,
based on virtual address (and its translation to physical).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 24 22:55:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Aug 2012 22:55:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T52md-0008Qb-5F; Fri, 24 Aug 2012 22:55:19 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T52mc-0008QW-HW
	for xen-devel@lists.xen.org; Fri, 24 Aug 2012 22:55:18 +0000
Received: from [85.158.143.99:41603] by server-1.bemta-4.messagelabs.com id
	DC/02-12504-55608305; Fri, 24 Aug 2012 22:55:17 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-7.tower-216.messagelabs.com!1345848917!24453440!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4619 invoked from network); 24 Aug 2012 22:55:17 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-7.tower-216.messagelabs.com with SMTP;
	24 Aug 2012 22:55:17 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 24 Aug 2012 23:55:16 +0100
Message-Id: <50381463020000780008A7FF@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 24 Aug 2012 23:55:15 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ben Guthro" <ben@guthro.net>
References: <CAOvdn6WwgBD-2yf1uu3m9-16=Tb5KUrybXf2KibtnxKbP7YtwA@mail.gmail.com>
	<CAOvdn6UBW-yTOMd3YSf5M9JiHLRivyaKL-qzcKG8epszqaKmGg@mail.gmail.com>
	<20120807163320.GC15053@phenom.dumpdata.com>
	<CAOvdn6XcRGKtJxGwJO8J+ZBJr89wfTLc1woW5rhtMM540H9h1w@mail.gmail.com>
	<CAOvdn6XUoSeGoo7X=AJ1kpDxZB9HZEv+TJ4v-=FHcyR4=CHK-w@mail.gmail.com>
	<502240F302000078000937A6@nat28.tlf.novell.com>
	<CAOvdn6WN3kxC8Bb9g6MpfLULu47EGSGCtajPc-SFeJ-8VSUQDQ@mail.gmail.com>
	<CAOvdn6Xk1inm1hU55i0phU2qvSWyJLNoyDdn43biBAY2OewPtw@mail.gmail.com>
	<5023F5400200007800093F4E@nat28.tlf.novell.com>
	<CAOvdn6U-f3C4U8xVPvDyRFkFFLkbfiCXg_n+9zgA9V5u+q5jfw@mail.gmail.com>
	<5023F8B80200007800093F8C@nat28.tlf.novell.com>
	<CAOvdn6XKHfete_+=Hkvt20G7_cJ-4i8+9j9YD30OxzQ16Ru-+g@mail.gmail.com>
	<5024CB720200007800094124@nat28.tlf.novell.com>
	<CAOvdn6VVJdhWVYa-gNVt3euE4AbGi1TijUCUd1wGzv0SCWEUYA@mail.gmail.com>
	<CAOvdn6USnG9Y4MNBhrDZmob0oJ8BgFfKTvFEhcKbXDxNmzfZ-Q@mail.gmail.com>
	<502B75BA0200007800094FD4@nat28.tlf.novell.com>
	<CAOvdn6Ww5oEj6Xg=XT4dWCYWUBuOdPqLz-CqNjz4HmCScE+==g@mail.gmail.com>
	<CAOvdn6Wsjds_dmZ0dwDTYmD-o-OqjBH5-b9mitAuUADE+A4_ag@mail.gmail.com>
	<502BB8FD02000078000951E0@nat28.tlf.novell.com>
	<CAOvdn6VD6bp13F-tvYdkBsj4hkhE7PTGZrapndtdHrDKeZqccg@mail.gmail.com>
	<502CCC0002000078000955C4@nat28.tlf.novell.com>
	<CAOvdn6VEuX7R2_wz74ZvCxMjC-=YUir9kP5sdxEVaKVspRUd_w@mail.gmail.com>
	<502CF0A8020000780009574E@nat28.tlf.novell.com>
	<CAOvdn6VsZvn_1TCWYeuyY5YuTcP8=miK2KwE4583nsj6Qjb_vg@mail.gmail.com>
	<CAOvdn6UZ61363sKb6N_2pCeOBt-RzHK96Vxudmey4NxmUCHoDQ@mail.gmail.com>
	<502E3BC00200007800095DAB@nat28.tlf.novell.com>
	<CAOvdn6VJDqKA=a6Z71jtew5jtTZzOF6Fb4yCSyQQKA62iw_4Dw@mail.gmail.com>
In-Reply-To: <CAOvdn6VJDqKA=a6Z71jtew5jtTZzOF6Fb4yCSyQQKA62iw_4Dw@mail.gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, john.baboval@citrix.com,
	Thomas Goetz <thomas.goetz@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen4.2 S3 regression?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 23.08.12 at 20:03, Ben Guthro <ben@guthro.net> wrote:
> I did some more bisecting here, and I came up with another changeset
> that seems to be problematic, regarding IRQs.
> 
> After bisecting the problem discussed earlier in this thread to the
> changeset below,
> http://xenbits.xen.org/hg/xen-unstable.hg/rev/0695a5cdcb42 
> 
> 
> I worked past that issue by the following hack:
> 
> --- a/xen/common/event_channel.c
> +++ b/xen/common/event_channel.c
> @@ -1103,7 +1103,7 @@ void evtchn_destroy_final(struct domain *d)
>  void evtchn_move_pirqs(struct vcpu *v)
>  {
>      struct domain *d = v->domain;
> -    const cpumask_t *mask = cpumask_of(v->processor);
> +    //const cpumask_t *mask = cpumask_of(v->processor);
>      unsigned int port;
>      struct evtchn *chn;
> 
> @@ -1111,7 +1111,9 @@ void evtchn_move_pirqs(struct vcpu *v)
>      for ( port = v->pirq_evtchn_head; port; port = chn->u.pirq.next_port )
>      {
>          chn = evtchn_from_port(d, port);
> +#if 0
>          pirq_set_affinity(d, chn->u.pirq.irq, mask);
> +#endif
>      }
>      spin_unlock(&d->event_lock);
>  }

Did you also make an attempt at figuring out which of the three calls
to evtchn_move_pirqs() is the actual problematic one?

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Aug 25 00:48:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 25 Aug 2012 00:48:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T54Xu-0000yz-Gu; Sat, 25 Aug 2012 00:48:14 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ben.guthro@gmail.com>) id 1T54Xt-0000yu-Fj
	for xen-devel@lists.xen.org; Sat, 25 Aug 2012 00:48:13 +0000
Received: from [85.158.138.51:30627] by server-3.bemta-3.messagelabs.com id
	3F/16-13809-CC028305; Sat, 25 Aug 2012 00:48:12 +0000
X-Env-Sender: ben.guthro@gmail.com
X-Msg-Ref: server-3.tower-174.messagelabs.com!1345855690!19827124!1
X-Originating-IP: [209.85.210.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25643 invoked from network); 25 Aug 2012 00:48:12 -0000
Received: from mail-iy0-f173.google.com (HELO mail-iy0-f173.google.com)
	(209.85.210.173)
	by server-3.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	25 Aug 2012 00:48:12 -0000
Received: by iakx26 with SMTP id x26so5521450iak.32
	for <xen-devel@lists.xen.org>; Fri, 24 Aug 2012 17:48:10 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=3AxLZVg4GnNZE9fua4N3HQ+Wk2lx/+302P2idP2uI3w=;
	b=dPV4b/vSyjkvWV0JKjZgFNjwM7staQyp+I0Nr32MWjkWe0IGIioPSAKnQt85G7S5lV
	/s7ynAnLylaCEfN4o8WeA5rL37lV+xcS1MVMYNUSqhOWuo2ALGk9+Hb52wvozpqGEiDy
	vdy7EFfbOTuvSeOtM03IYA1pz73JfeeMGlcJEnqlnpoOJjI93ISsGqkbNaDeJAKN0cjc
	H/X1zmSyySNp4hB5NppAywPVj8gwhVoYAIN6X6SVdQQySVA4v0EPh7pcFKuaq6VNP0A0
	4mCZES9FGtlwT4o0VCYQ4cGbOFlvvJ4azJ1psn3ki54QAYfxgSJWEwF0PnnOQ4mEHdx4
	0n/w==
MIME-Version: 1.0
Received: by 10.50.156.133 with SMTP id we5mr3851527igb.62.1345855690428; Fri,
	24 Aug 2012 17:48:10 -0700 (PDT)
Received: by 10.64.28.203 with HTTP; Fri, 24 Aug 2012 17:48:10 -0700 (PDT)
In-Reply-To: <50381463020000780008A7FF@nat28.tlf.novell.com>
References: <CAOvdn6WwgBD-2yf1uu3m9-16=Tb5KUrybXf2KibtnxKbP7YtwA@mail.gmail.com>
	<CAOvdn6UBW-yTOMd3YSf5M9JiHLRivyaKL-qzcKG8epszqaKmGg@mail.gmail.com>
	<20120807163320.GC15053@phenom.dumpdata.com>
	<CAOvdn6XcRGKtJxGwJO8J+ZBJr89wfTLc1woW5rhtMM540H9h1w@mail.gmail.com>
	<CAOvdn6XUoSeGoo7X=AJ1kpDxZB9HZEv+TJ4v-=FHcyR4=CHK-w@mail.gmail.com>
	<502240F302000078000937A6@nat28.tlf.novell.com>
	<CAOvdn6WN3kxC8Bb9g6MpfLULu47EGSGCtajPc-SFeJ-8VSUQDQ@mail.gmail.com>
	<CAOvdn6Xk1inm1hU55i0phU2qvSWyJLNoyDdn43biBAY2OewPtw@mail.gmail.com>
	<5023F5400200007800093F4E@nat28.tlf.novell.com>
	<CAOvdn6U-f3C4U8xVPvDyRFkFFLkbfiCXg_n+9zgA9V5u+q5jfw@mail.gmail.com>
	<5023F8B80200007800093F8C@nat28.tlf.novell.com>
	<CAOvdn6XKHfete_+=Hkvt20G7_cJ-4i8+9j9YD30OxzQ16Ru-+g@mail.gmail.com>
	<5024CB720200007800094124@nat28.tlf.novell.com>
	<CAOvdn6VVJdhWVYa-gNVt3euE4AbGi1TijUCUd1wGzv0SCWEUYA@mail.gmail.com>
	<CAOvdn6USnG9Y4MNBhrDZmob0oJ8BgFfKTvFEhcKbXDxNmzfZ-Q@mail.gmail.com>
	<502B75BA0200007800094FD4@nat28.tlf.novell.com>
	<CAOvdn6Ww5oEj6Xg=XT4dWCYWUBuOdPqLz-CqNjz4HmCScE+==g@mail.gmail.com>
	<CAOvdn6Wsjds_dmZ0dwDTYmD-o-OqjBH5-b9mitAuUADE+A4_ag@mail.gmail.com>
	<502BB8FD02000078000951E0@nat28.tlf.novell.com>
	<CAOvdn6VD6bp13F-tvYdkBsj4hkhE7PTGZrapndtdHrDKeZqccg@mail.gmail.com>
	<502CCC0002000078000955C4@nat28.tlf.novell.com>
	<CAOvdn6VEuX7R2_wz74ZvCxMjC-=YUir9kP5sdxEVaKVspRUd_w@mail.gmail.com>
	<502CF0A8020000780009574E@nat28.tlf.novell.com>
	<CAOvdn6VsZvn_1TCWYeuyY5YuTcP8=miK2KwE4583nsj6Qjb_vg@mail.gmail.com>
	<CAOvdn6UZ61363sKb6N_2pCeOBt-RzHK96Vxudmey4NxmUCHoDQ@mail.gmail.com>
	<502E3BC00200007800095DAB@nat28.tlf.novell.com>
	<CAOvdn6VJDqKA=a6Z71jtew5jtTZzOF6Fb4yCSyQQKA62iw_4Dw@mail.gmail.com>
	<50381463020000780008A7FF@nat28.tlf.novell.com>
Date: Fri, 24 Aug 2012 20:48:10 -0400
X-Google-Sender-Auth: k4IqLGOjLPy7MllC3e5-IzkLyWs
Message-ID: <CAOvdn6VupYN13eUUjQAjoKcsEELwTzYUNi4BYAwZk_xTUHayDw@mail.gmail.com>
From: Ben Guthro <ben@guthro.net>
To: Jan Beulich <JBeulich@suse.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, john.baboval@citrix.com,
	Thomas Goetz <thomas.goetz@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen4.2 S3 regression?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Aug 24, 2012 at 6:55 PM, Jan Beulich <JBeulich@suse.com> wrote:
> Did you also make an attempt at figuring out which of the three calls
> to evtchn_move_pirqs() is the actual problematic one?
>

I did a lot of experimentation... I think this was one of the tests
that I did - though I've slept and rebooted that machine so many
times over the past few days that a lot of the tests are starting to
run together.

If I recall correctly, I was unable to isolate this issue - commenting
out just one of the three calls didn't seem to fix the problem.

I'll be traveling all next week (part of which is the Xen summit) -
but I will be able to give a more definitive answer to this when I
return to the office.

Since this is such an old changeset, I moved past it; the same fix
applied at the tip didn't have the same effect.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	<502B75BA0200007800094FD4@nat28.tlf.novell.com>
	<CAOvdn6Ww5oEj6Xg=XT4dWCYWUBuOdPqLz-CqNjz4HmCScE+==g@mail.gmail.com>
	<CAOvdn6Wsjds_dmZ0dwDTYmD-o-OqjBH5-b9mitAuUADE+A4_ag@mail.gmail.com>
	<502BB8FD02000078000951E0@nat28.tlf.novell.com>
	<CAOvdn6VD6bp13F-tvYdkBsj4hkhE7PTGZrapndtdHrDKeZqccg@mail.gmail.com>
	<502CCC0002000078000955C4@nat28.tlf.novell.com>
	<CAOvdn6VEuX7R2_wz74ZvCxMjC-=YUir9kP5sdxEVaKVspRUd_w@mail.gmail.com>
	<502CF0A8020000780009574E@nat28.tlf.novell.com>
	<CAOvdn6VsZvn_1TCWYeuyY5YuTcP8=miK2KwE4583nsj6Qjb_vg@mail.gmail.com>
	<CAOvdn6UZ61363sKb6N_2pCeOBt-RzHK96Vxudmey4NxmUCHoDQ@mail.gmail.com>
	<502E3BC00200007800095DAB@nat28.tlf.novell.com>
	<CAOvdn6VJDqKA=a6Z71jtew5jtTZzOF6Fb4yCSyQQKA62iw_4Dw@mail.gmail.com>
	<50381463020000780008A7FF@nat28.tlf.novell.com>
Date: Fri, 24 Aug 2012 20:48:10 -0400
X-Google-Sender-Auth: k4IqLGOjLPy7MllC3e5-IzkLyWs
Message-ID: <CAOvdn6VupYN13eUUjQAjoKcsEELwTzYUNi4BYAwZk_xTUHayDw@mail.gmail.com>
From: Ben Guthro <ben@guthro.net>
To: Jan Beulich <JBeulich@suse.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, john.baboval@citrix.com,
	Thomas Goetz <thomas.goetz@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen4.2 S3 regression?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Aug 24, 2012 at 6:55 PM, Jan Beulich <JBeulich@suse.com> wrote:
> Did you also make an attempt at figuring out which of the three calls
> to evtchn_move_pirqs() is the actual problematic one?
>

I did a lot of experimentation... I think this was one of the tests
that I did - though I've slept and rebooted that machine so many
times over the past few days that a lot of the tests are starting to
run together.

If I recall correctly, I was unable to isolate this issue - commenting
out just one of the three calls didn't seem to fix the problem.

I'll be traveling all next week (part of which is the Xen summit) -
but I will be able to give a more definitive answer to this when I
return to the office.

Since this is such an old changeset, I moved past it; the same
fix at the tip didn't have the same effect.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Aug 25 02:42:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 25 Aug 2012 02:42:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T56JX-0005dx-Ch; Sat, 25 Aug 2012 02:41:31 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T56JV-0005ds-HM
	for xen-devel@lists.xensource.com; Sat, 25 Aug 2012 02:41:29 +0000
Received: from [85.158.143.99:29638] by server-2.bemta-4.messagelabs.com id
	6C/89-21239-85B38305; Sat, 25 Aug 2012 02:41:28 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-2.tower-216.messagelabs.com!1345862488!22537745!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA2MDI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4133 invoked from network); 25 Aug 2012 02:41:28 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-2.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	25 Aug 2012 02:41:28 -0000
X-IronPort-AV: E=Sophos;i="4.80,307,1344211200"; d="scan'208";a="14177956"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	25 Aug 2012 02:41:12 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.213.0; Sat, 25 Aug 2012 03:41:12 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1T56JE-0007E3-37;
	Sat, 25 Aug 2012 02:41:12 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1T56JD-0002b3-QD;
	Sat, 25 Aug 2012 03:41:11 +0100
To: xen-devel@lists.xensource.com
Message-ID: <osstest-13628-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 25 Aug 2012 03:41:11 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13628: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13628 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13628/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13625
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13625
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13625
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13625

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  1126b3079bef
baseline version:
 xen                  4ca40e0559c3

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Keir Fraser <keir@xen.org>
  Zhang Xiantao <xiantao.zhang@intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=1126b3079bef
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable 1126b3079bef
+ branch=xen-unstable
+ revision=1126b3079bef
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/staging/xen-unstable.hg
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-unstable.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-unstable.hg
+ hg push -r 1126b3079bef ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 5 changesets with 7 changes to 6 files

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Aug 25 06:33:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 25 Aug 2012 06:33:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T59vs-0007zW-D3; Sat, 25 Aug 2012 06:33:20 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <p.d@gmx.de>)
	id 1T51ho-0007mN-Rh
	for xen-devel@lists.xen.org; Fri, 24 Aug 2012 21:46:17 +0000
Received: from [85.158.143.35:65179] by server-3.bemta-4.messagelabs.com id
	C1/4A-08232-826F7305; Fri, 24 Aug 2012 21:46:16 +0000
X-Env-Sender: p.d@gmx.de
X-Msg-Ref: server-10.tower-21.messagelabs.com!1345844770!10504075!1
X-Originating-IP: [213.165.64.22]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiAyMTMuMTY1LjY0LjIyID0+IDE5ODY5Mg==\n,sa_preprocessor: 
	QmFkIElQOiAyMTMuMTY1LjY0LjIyID0+IDE5ODY5Mg==\n, ML_RADAR_SPEW_LINKS_14, 
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3731 invoked from network); 24 Aug 2012 21:46:10 -0000
Received: from mailout-de.gmx.net (HELO mailout-de.gmx.net) (213.165.64.22)
	by server-10.tower-21.messagelabs.com with SMTP;
	24 Aug 2012 21:46:10 -0000
Received: (qmail 7235 invoked by uid 0); 24 Aug 2012 21:46:09 -0000
Received: from 95.116.131.21 by www020.gmx.net with HTTP;
	Fri, 24 Aug 2012 23:46:08 +0200 (CEST)
Date: Fri, 24 Aug 2012 23:46:08 +0200
From: p.d@gmx.de
In-Reply-To: <5037D4C3.4060309@citrix.com>
Message-ID: <20120824214608.46820@gmx.net>
MIME-Version: 1.0
References: <20120824164301.46800@gmx.net> <5037CB11.9000308@citrix.com>
	<1345835562.4847.3.camel@dagon.hellion.org.uk>
	<5037D4C3.4060309@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>, Ian.Campbell@citrix.com
X-Authenticated: #16432423
X-Flags: 0001
X-Mailer: WWW-Mail 6100 (Global Message Exchange)
X-Priority: 3
X-Provags-ID: V01U2FsdGVkX1/0fBQZf6z0pPcircVEy+E7cdky0Qcj/5RQDmVzjT
	0ZXm4ltEVZ96UWqsV8aoUBmZxBdF6qJSLCAA== 
X-GMX-UID: d5NFcC5zeSEqbkbekXQhy9J+IGRvb8Dd
X-Mailman-Approved-At: Sat, 25 Aug 2012 06:33:18 +0000
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Errors in compilation // xl_cmdimpl.c:2733:11 ...
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Thanks for the quick answer.

I don't have much experience with HG, so sorry for the novice question:
should I apply the patch with "hg import" as the documentation describes,
or is something different needed, given the errors below?

#hg import fix-cmdimpl.patch
applying fix-cmdimpl.patch
patching file tools/libxl/xl_cmdimpl.c
Hunk #1 FAILED at 2685
Hunk #2 FAILED at 2713
Hunk #3 FAILED at 2724
Hunk #4 FAILED at 2737
4 out of 4 hunks FAILED -- saving rejects to file tools/libxl/xl_cmdimpl.c.rej
abort: patch failed to apply
(hg output translated from German)

#hg import fix-cmdimpl-v2.patch
abort: outstanding uncommitted changes
(hg output translated from German)

I changed xl_cmdimpl.c by hand according to your patches and tried to
compile it. It works.

Best regards.
Panschinski Denis


[cut]

> Sadly, hand being touched iff format == JSON was still not enough to
> satisfy my version of GCC.
> 
> Attached is a far slimmed version which explicitly sets hand to NULL at
> the top, and future-proofs the use of hand in the middle of the domain
> loop.
> 
> -- 
> Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
> T: +44 (0)1223 225 900, http://www.citrix.com
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
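
For reference, the hunk-failure cycle the message above reports (failed hunks saved to a `.rej` file, then applied by hand) can be reproduced with any patch tool. A minimal sketch using GNU diff/patch; the file names here are made up for illustration, not the actual xl_cmdimpl.c patch:

```shell
# Create a file, a patch against it, then drift the file so the patch
# no longer applies; the failed hunk lands in a .rej file to apply by hand.
set -e
work=$(mktemp -d)
cd "$work"
printf 'line one\nline two\n' > file.c
printf 'line one\nline two fixed\n' > file.new
diff -u file.c file.new > fix.patch || true   # diff exits 1 when files differ
printf 'drifted one\ndrifted two\n' > file.c  # simulate an out-of-date tree
if ! patch file.c < fix.patch; then
    echo "inspect file.c.rej and apply the change by hand"
fi
ls file.c.rej
```

With Mercurial specifically, `hg import` also refuses to run over uncommitted local changes, which explains the second abort in the message above.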

X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

THX for quickly answer.

With HG I have not so much experience, so sorry for novice question:
should I patching with "hg import" according to the documentation, or there are some differences, because of some errors?

#hg import fix-cmdimpl.patch
Turning to fix cmdimpl.patch
Turning to patch file tools / libxl / to xl_cmdimpl.c
BUST of section # 1 at line 2685
BUST of section # 2 at line 2713
BUST of section # 3 in line 2724
BUST of section # 4 on line 2737
4 of 4 sections are FAILED - save Committee to file tools / libxl / xl_cmdimpl.c.rej
Demolition: Patch failed
(google translated)


#hg import fix-cmdimpl-v2.patch
Abbruch: Ausstehende nicht versionierte Änderungen
Demolition: Outstanding unversioned changes (w. Google translated)

I changed xl_cmdimpl.c according to Your patches by hand and tried compile it. It works.

Best regards.
Panschinski Denis


[cut]

> Sadly, hand being touched iff format == JSON was still not enough to
> satisfy my version of GCC.
> 
> Attached is a far slimmed version which explicitly sets hand to NULL at
> the top, and future-proofs the use of hand in the middle of the domain
> loop.
> 
> -- 
> Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
> T: +44 (0)1223 225 900, http://www.citrix.com
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Aug 25 08:41:12 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 25 Aug 2012 08:41:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T5Bv8-0001I1-2O; Sat, 25 Aug 2012 08:40:42 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T5Bv6-0001Hv-Mw
	for xen-devel@lists.xensource.com; Sat, 25 Aug 2012 08:40:40 +0000
Received: from [85.158.138.51:32236] by server-5.bemta-3.messagelabs.com id
	DC/81-08865-78F88305; Sat, 25 Aug 2012 08:40:39 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-5.tower-174.messagelabs.com!1345884039!28037542!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA2MDI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2215 invoked from network); 25 Aug 2012 08:40:39 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-5.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	25 Aug 2012 08:40:39 -0000
X-IronPort-AV: E=Sophos;i="4.80,309,1344211200"; d="scan'208";a="14180248"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	25 Aug 2012 08:40:39 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Sat, 25 Aug 2012 09:40:38 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1T5Bv4-0001ET-IT;
	Sat, 25 Aug 2012 08:40:38 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1T5Bv4-0007tS-7Y;
	Sat, 25 Aug 2012 09:40:38 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13629-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 25 Aug 2012 09:40:38 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13629: tolerable FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13629 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13629/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13628
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13628
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13628
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13628

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  1126b3079bef
baseline version:
 xen                  1126b3079bef

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Aug 25 08:54:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 25 Aug 2012 08:54:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T5C89-0001Vr-Ec; Sat, 25 Aug 2012 08:54:09 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <joseph.glanville@orionvm.com.au>) id 1T5C87-0001Vm-Ti
	for xen-devel@lists.xen.org; Sat, 25 Aug 2012 08:54:08 +0000
Received: from [85.158.143.99:6532] by server-2.bemta-4.messagelabs.com id
	11/22-21239-FA298305; Sat, 25 Aug 2012 08:54:07 +0000
X-Env-Sender: joseph.glanville@orionvm.com.au
X-Msg-Ref: server-11.tower-216.messagelabs.com!1345884844!20368364!1
X-Originating-IP: [209.85.214.173]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
  RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10815 invoked from network); 25 Aug 2012 08:54:06 -0000
Received: from mail-ob0-f173.google.com (HELO mail-ob0-f173.google.com)
	(209.85.214.173)
	by server-11.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	25 Aug 2012 08:54:06 -0000
Received: by obbta14 with SMTP id ta14so4404821obb.32
	for <xen-devel@lists.xen.org>; Sat, 25 Aug 2012 01:54:04 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=mime-version:x-originating-ip:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type:x-gm-message-state;
	bh=viXGLtAn3u0BOMRmEqXlfTqMKD5RXup9be8Gjq/d2yw=;
	b=FAfvRfX3CvmtTFTLXNIu68akY08dzvcSX0PGOTPrYmwN6I4RdiLTWgIHQDyP61eU6Z
	VQJ6Nm3Wp1XUIQTkGy1Lvdmqswu1nd+/rQHIo7WLf2LNjmv+y18iIY2crQDY4gLFgJTW
	xg6yBTZLJQtpLK2s2ttsr2DPZmJzy4den8aYWU6V3cX+CLhb7GBFjr1EOKFmxRh3TNgD
	uerTgyxuaRThqmftNZGwHK6azHM2dukFuxO3AVrOdNil7mxcqTLi0rpZV7Bo9SqQaPKT
	0SlvJ2RlyG5gnhbj5Z66sLvdgYhYtnRPsluMr2wQsi6Rs27mD6qKrY6NjWKusGBpGbMY
	oWeg==
MIME-Version: 1.0
Received: by 10.182.146.77 with SMTP id ta13mr5858273obb.97.1345884844296;
	Sat, 25 Aug 2012 01:54:04 -0700 (PDT)
Received: by 10.182.80.200 with HTTP; Sat, 25 Aug 2012 01:54:04 -0700 (PDT)
X-Originating-IP: [121.44.74.61]
In-Reply-To: <1345215949.10161.76.camel@zakaz.uk.xensource.com>
References: <1343657037-28495-1-git-send-email-ian.campbell@citrix.com>
	<1344506462.32142.96.camel@zakaz.uk.xensource.com>
	<20515.49901.369414.638893@mariner.uk.xensource.com>
	<1345215949.10161.76.camel@zakaz.uk.xensource.com>
Date: Sat, 25 Aug 2012 18:54:04 +1000
Message-ID: <CAOzFzEgrHOJJD_6DTcebbt+AER0dAM_CE0AZqnV9PZEmU9GYtw@mail.gmail.com>
From: Joseph Glanville <joseph.glanville@orionvm.com.au>
To: Ian Campbell <Ian.Campbell@citrix.com>
X-Gm-Message-State: ALoCoQm1RIsBNnPP503AB8Mthw95JG9F8XY5er4p5E7PCaR6NRplnGJozbPffqQwyTHQz8YGLyTB
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [DOCDAY PATCH] docs: initial documentation for
 xenstore paths
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 18 August 2012 01:05, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Thu, 2012-08-09 at 15:02 +0100, Ian Jackson wrote:
>> Ian Campbell writes ("Re: [DOCDAY PATCH] docs: initial documentation for xenstore paths"):
>> ...
>> > > --- a/docs/misc/xenstore-paths.markdown
>> > > +++ b/docs/misc/xenstore-paths.markdown
>> > > @@ -0,0 +1,294 @@
>> ...
>> > > +PATH can contain simple regex constructs following the POSIX regexp
>> > > +syntax described in regexp(7). In addition the following additional
>> > > +wild card names are defined and are evaluated before regexp expansion:
>>
>> Can we use a restricted perl re syntax ?  That avoids weirdness with
>> the rules for \.
>
> Is "restricted perl re syntax" a well defined thing (reference?) or do
> you just mean perlre(1)--?
>
> What's the weirdness with \.?
>
>> Also how does this interact with markdown ?
>
> The html version looks ok after a brief inspection.
>
>> > > +#### ~/image/device-model-pid = INTEGER   [r]
>>
>> This [r] tag is not defined above.  I assume you mean "readonly to the
>> domain" but that's the default.  Left over from an earlier version ?
>
> Yes, it's vestigial. Remove it.
>
>>
>> > > +The process ID of the device model associated with this domain, if it
>> > > +has one.
>> > > +
>> > > +XXX why is this visible to the guest?
>>
>> I think some of these things were put here just because there wasn't
>> another place for the toolstack to store things.  See also the
>> arbitrary junk stored by scripts in the device backend directories.
>
> Should we define a proper home for these? e.g. /$toolstack/$domid?
>
>> > > +#### ~/cpu/[0-9]+/availability = ("online"|"offline") [PV]
>> > > +
>> > > +One node for each virtual CPU up to the guest's configured
>> > > +maximum. Valid values are "online" and "offline".
>>
>> Should have a cross-reference to the cpu online/offline protocol,
>> which appears to be in xen/include/public/vcpu.h.  It doesn't seem to
>> be fully documented yet.
>
> vcpu.h has the hypercalls which are the mechanism by which a guest
> brings a cpu up/down but nothing on the xenstore protocol which might
> cause it to do so.
>
> I don't think a reference currently exists for that protocol. This
> probably belongs in the same (non-existent) protocol doc as
> ~/control/shutdown in so much as it is a toolstack<->guest kernel
> protocol.
>
>> > > +#### ~/memory/static-max = MEMKB []
>> > > +
>> > > +Specifies a static maximum amount memory which this domain should
>> > > +expect to be given. In the absence of in-guest memory hotplug support
>> > > +this set on domain boot and is usually the maximum amount of RAM which
>> > > +a guest can make use of .
>>
>> This should have a cross-reference to the documentation defining
>> static-max etc.  I thought we had some in tree but I can't seem to
>> find it.  The best I can find is docs/man/xl.cfg.pod.5.
>
> I think you might be thinking of tools/libxl/libxl_memory.txt.
>
> Shall we move that doc to docs/misc?
>
>>
>> > > +#### ~/memory/target = MEMKB []
>> > > +
>> > > +The current balloon target for the domain. The balloon driver within the guest is expected to make every effort
>>
>> every effort to ... ?
>
> err. yes. I appear to have got distracted there ...
>
> Perhaps:
>
>         every effort to ... reach this target
>
> ? but I'm not sure that is strictly correct, a guest can use less if it
> wants to. So perhaps
>
>         every effort to ... not use more than this
>
> ? seems clumsy though.
>
>>
>> The interaction with the Xen maximum should be stated, preferably by
>> cross-reference.  In general it might be better to have a single place
>> where all these values and their semantics are written down ?
>>
>> > > +#### ~/device/suspend/event-channel = ""|EVTCHN [w]
>> > > +
>> > > +The domain's suspend event channel. The use of a suspend event channel
>> > > +is optional at the domain's discression. If it is not used then this
>> > > +path will be left blank.
>>
>> May it be ENOENT ?  Does the toolstack create it as "" then ?
>
> libxl seems to *mkdir* it:
>     libxl__xs_mkdir(gc, t,
>                     libxl__sprintf(gc, "%s/device/suspend/event-channel", dom_path),
>                     rwperm, ARRAY_SIZE(rwperm));
>
> which I suppose is the same as writing it as "" (unless there is some
> subtle xenstore semantic difference I'm not thinking of)
>
> If xend writes this key then I can't find it. I rather suspect the
> ~/device/suspend is guest writeable in that case (but I can't find that
> either).
>
> While grepping around I noticed xs_suspend_evtchn_port which reads this.
> Seems like an odd place for it...
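[Editor's note] The "" vs ENOENT distinction being discussed here can be probed from dom0 with the xenstore CLI tools; a sketch, with DOMID as a placeholder:

```shell
# Sketch: distinguish "node absent" from "node present but blank".
DOMID=1   # placeholder domid
P="/local/domain/$DOMID/device/suspend/event-channel"

if xenstore-exists "$P"; then
    V=$(xenstore-read "$P")
    if [ -z "$V" ]; then
        # Matches the libxl__xs_mkdir() behaviour described above: the
        # toolstack created the node, the guest has not written a port yet.
        echo "present but unused"
    else
        echo "suspend event channel: $V"
    fi
else
    echo "ENOENT: node was never created"
fi
```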
>
>>
>> > > +#### ~/device/serial/$DEVID/* [HVM]
>> > > +
>> > > +An emulated serial device
>>
>> You should presumably add
>>     XXX documentation for the protocol needed
>> here.
>
> I think this is in docs/misc/console.txt along with the PV stuff, so
> I've added that as a reference.
>
>>
>> > > +#### ~/store/port = EVTCHN []
>> > > +
>> > > +The event channel used by the domains connection to XenStore.
>>
>> Apostrophe.
>>
>> > > +XXX why is this exposed to the guest?
>>
>> Is there really only one event channel ?  Ie the same evtchn is used
>> to signal to xenstore that the guest has sent a command, and to signal
>> the guest that xenstore has written the response ?
>
> Yes, event channels are bidirectional so that's quite common.
>
>> Anyway surely this is something the guest needs to know.  Why it's in
>> xenstore is a bit of a mystery since you can't use xenstore without it
>> and it's in the start_info.
>
> I should have written "why is this exposed to the guest via xenstore?"
>
>> Is this the same value as start_info.store_evtchn ?  Cross reference ?
>
> I'd be semi inclined to ditch/deprecate it unless we can figure out what
> it is for -- as you say there is something of a chicken and egg problem
> with using it.
>
>>
>> > > +#### ~/store/ring-ref = GNTREF []
>> > > +
>> > > +The grant reference of the domain's XenStore ring.
>> > > +
>> > > +XXX why is this exposed to the guest?
>>
>> See above.
>
> Yup, the same issues.
>
>> > > +#### ~/device-model/$DOMID/* []
>> > > +
>> > > +Information relating to device models running in the domain. $DOMID is
>> > > +the target domain of the device model.
>> > > +
>> > > +XXX where is the contents of this directory specified?
>>
>> I think it's not specified anywhere.  It's ad-hoc.  The guest
>> shouldn't need to see it but exposing it readonly is probably
>> harmless.  Except perhaps for the vnc password ?
>
> vnc password appears to go into /vm/$uuid/vncpass (looking at libxl code
> only).
>
> AFAIK it does nothing special with the perms, but /vm/$uuid is not guest
> readable (perms are "n0") so I think that works out ok.
>
> I wonder if that's part of the point of /vm/$uuid.

What has /vm/$UUID been used for historically?

I find it useful if you set your own UUIDs, as it provides a consistent
path across guest reboots (which of course change the domid).
A /byname shortcut sounds good as a replacement if /vm/$UUID goes away.
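[Editor's note] The consistent-path lookup described here can be sketched with the xenstore and xl CLI tools; the UUID below is a placeholder, and the sketch assumes a running guest whose config pinned that uuid:

```shell
# Sketch: resolve the reboot-dependent domid via the stable /vm/$UUID path.
UUID=00000000-0000-4000-8000-000000000001   # placeholder UUID

# /vm/$UUID survives guest reboots, so the name can be recovered from it:
NAME=$(xenstore-read "/vm/$UUID/name")

# ...and the current (reboot-dependent) domid looked up from the name:
xl domid "$NAME"
```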

>
>> > > +### /vm/$UUID/uuid = UUID []
>> > > +
>> > > +Value is the same UUID as the path.
>> > > +
>> > > +### /vm/$UUID/name = STRING []
>> > > +
>> > > +The domains name.
>>
>> IMO this should be
>>   (a) in /local/domain/$DOMID
>>   (b) also a copy in /byname/$NAME = $DOMID   for fast lookup
>> but not in 4.2.
>>
>> Guests shouldn't rely on it.  In fact do guests actually need anything
>> from here ?
>
> I'd say definitely not, but it has existed with xend for many years so
> I'd be surprised if something hadn't crept in somewhere :-(
>
> Ian.
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



-- 
CTO | Orion Virtualisation Solutions | www.orionvm.com.au
Phone: 1300 56 99 52 | Mobile: 0428 754 846

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Aug 25 08:54:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 25 Aug 2012 08:54:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T5C89-0001Vr-Ec; Sat, 25 Aug 2012 08:54:09 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <joseph.glanville@orionvm.com.au>) id 1T5C87-0001Vm-Ti
	for xen-devel@lists.xen.org; Sat, 25 Aug 2012 08:54:08 +0000
Received: from [85.158.143.99:6532] by server-2.bemta-4.messagelabs.com id
	11/22-21239-FA298305; Sat, 25 Aug 2012 08:54:07 +0000
X-Env-Sender: joseph.glanville@orionvm.com.au
X-Msg-Ref: server-11.tower-216.messagelabs.com!1345884844!20368364!1
X-Originating-IP: [209.85.214.173]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
  RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10815 invoked from network); 25 Aug 2012 08:54:06 -0000
Received: from mail-ob0-f173.google.com (HELO mail-ob0-f173.google.com)
	(209.85.214.173)
	by server-11.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	25 Aug 2012 08:54:06 -0000
Received: by obbta14 with SMTP id ta14so4404821obb.32
	for <xen-devel@lists.xen.org>; Sat, 25 Aug 2012 01:54:04 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=mime-version:x-originating-ip:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type:x-gm-message-state;
	bh=viXGLtAn3u0BOMRmEqXlfTqMKD5RXup9be8Gjq/d2yw=;
	b=FAfvRfX3CvmtTFTLXNIu68akY08dzvcSX0PGOTPrYmwN6I4RdiLTWgIHQDyP61eU6Z
	VQJ6Nm3Wp1XUIQTkGy1Lvdmqswu1nd+/rQHIo7WLf2LNjmv+y18iIY2crQDY4gLFgJTW
	xg6yBTZLJQtpLK2s2ttsr2DPZmJzy4den8aYWU6V3cX+CLhb7GBFjr1EOKFmxRh3TNgD
	uerTgyxuaRThqmftNZGwHK6azHM2dukFuxO3AVrOdNil7mxcqTLi0rpZV7Bo9SqQaPKT
	0SlvJ2RlyG5gnhbj5Z66sLvdgYhYtnRPsluMr2wQsi6Rs27mD6qKrY6NjWKusGBpGbMY
	oWeg==
MIME-Version: 1.0
Received: by 10.182.146.77 with SMTP id ta13mr5858273obb.97.1345884844296;
	Sat, 25 Aug 2012 01:54:04 -0700 (PDT)
Received: by 10.182.80.200 with HTTP; Sat, 25 Aug 2012 01:54:04 -0700 (PDT)
X-Originating-IP: [121.44.74.61]
In-Reply-To: <1345215949.10161.76.camel@zakaz.uk.xensource.com>
References: <1343657037-28495-1-git-send-email-ian.campbell@citrix.com>
	<1344506462.32142.96.camel@zakaz.uk.xensource.com>
	<20515.49901.369414.638893@mariner.uk.xensource.com>
	<1345215949.10161.76.camel@zakaz.uk.xensource.com>
Date: Sat, 25 Aug 2012 18:54:04 +1000
Message-ID: <CAOzFzEgrHOJJD_6DTcebbt+AER0dAM_CE0AZqnV9PZEmU9GYtw@mail.gmail.com>
From: Joseph Glanville <joseph.glanville@orionvm.com.au>
To: Ian Campbell <Ian.Campbell@citrix.com>
X-Gm-Message-State: ALoCoQm1RIsBNnPP503AB8Mthw95JG9F8XY5er4p5E7PCaR6NRplnGJozbPffqQwyTHQz8YGLyTB
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [DOCDAY PATCH] docs: initial documentation for
 xenstore paths
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 18 August 2012 01:05, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Thu, 2012-08-09 at 15:02 +0100, Ian Jackson wrote:
>> Ian Campbell writes ("Re: [DOCDAY PATCH] docs: initial documentation for xenstore paths"):
>> ...
>> > > --- a/docs/misc/xenstore-paths.markdown
>> > > +++ b/docs/misc/xenstore-paths.markdown
>> > > @@ -0,0 +1,294 @@
>> ...
>> > > +PATH can contain simple regex constructs following the POSIX regexp
>> > > +syntax described in regexp(7). In addition the following additional
>> > > +wild card names are defined and are evaluated before regexp expansion:
>>
>> Can we use a restricted perl re syntax ?  That avoids weirdness with
>> the rules for \.
>
> Is "restricted perl re syntax" a well defined thing (reference?) or do
> you just mean perlre(1)--?
>
> What's the weirdness with \.?
>
>> Also how does this interact with markdown ?
>
> The html version looks ok after a brief inspection.
>
>> > > +#### ~/image/device-model-pid = INTEGER   [r]
>>
>> This [r] tag is not defined above.  I assume you mean "readonly to the
>> domain" but that's the default.  Left over from an earlier version ?
>
> Yes, it's vestigial. Remove it.
>
>>
>> > > +The process ID of the device model associated with this domain, if it
>> > > +has one.
>> > > +
>> > > +XXX why is this visible to the guest?
>>
>> I think some of these things were put here just because there wasn't
>> another place for the toolstack to store things.  See also the
>> arbitrary junk stored by scripts in the device backend directories.
>
> Should we define a proper home for these? e.g. /$toolstack/$domid?
>
>> > > +#### ~/cpu/[0-9]+/availability = ("online"|"offline") [PV]
>> > > +
>> > > +One node for each virtual CPU up to the guest's configured
>> > > +maximum. Valid values are "online" and "offline".
>>
>> Should have a cross-reference to the cpu online/offline protocol,
>> which appears to be in xen/include/public/vcpu.h.  It doesn't seem to
>> be fully documented yet.
>
> vcpu.h has the hypercalls which are the mechanism by which a guest
> brings a cpu up/down but nothing on the xenstore protocol which might
> cause it to do so.
>
> I don't think a reference currently exists for that protocol. This
> probably belongs in the same (non-existent) protocol doc as
> ~/control/shutdown in so much as it is a toolstack<->guest kernel
> protocol.
>
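[Editorial sketch of the toolstack<->guest vcpu protocol described above. The helper name and dict model are hypothetical; the real guest side reacts to a xenstore watch on cpu/$N/availability and issues the VCPUOP_up/VCPUOP_down hypercalls from vcpu.h.]

```python
# Mock of the guest's watch handler for cpu/$N/availability (hypothetical
# names; the dict stands in for actual VCPUOP_up/VCPUOP_down hypercalls).

def handle_availability_watch(path, value, vcpu_state):
    # path looks like "cpu/3/availability"; value is "online" or "offline"
    n = int(path.split("/")[1])
    vcpu_state[n] = (value == "online")   # stand-in for VCPUOP_up / VCPUOP_down
    return vcpu_state

state = {0: True, 1: True}
handle_availability_watch("cpu/1/availability", "offline", state)
print(state)  # -> {0: True, 1: False}
```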
>> > > +#### ~/memory/static-max = MEMKB []
>> > > +
>> > > +Specifies a static maximum amount memory which this domain should
>> > > +expect to be given. In the absence of in-guest memory hotplug support
>> > > +this set on domain boot and is usually the maximum amount of RAM which
>> > > +a guest can make use of .
>>
>> This should have a cross-reference to the documentation defining
>> static-max etc.  I thought we had some in tree but I can't seem to
>> find it.  The best I can find is docs/man/xl.cfg.pod.5.
>
> I think you might be thinking of tools/libxl/libxl_memory.txt.
>
> Shall we move that doc to docs/misc?
>
>>
>> > > +#### ~/memory/target = MEMKB []
>> > > +
>> > > +The current balloon target for the domain. The balloon driver within the guest is expected to make every effort
>>
>> every effort to ... ?
>
> err. yes. I appear  to have got distracted there ...
>
> Perhaps:
>
>         every effort to ... reach this target
>
> ? but I'm not sure that is strictly correct, a guest can use less if it
> wants to. So perhaps
>
>         every effort to ... not use more than this
>
> ? seems clumsy though.
>
>>
>> The interaction with the Xen maximum should be stated, preferably by
>> cross-reference.  In general it might be better to have a single place
>> where all these values and their semantics are written down ?
>>
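[Editorial sketch of the target/static-max semantics being discussed. This is a toy model under the assumption that the driver moves its allocation toward min(target, static-max) in chunks; the names are hypothetical and this is not the real balloon driver.]

```python
# Toy model: the guest "makes every effort" to move toward the balloon
# target, never exceeding static-max; using less than target is permitted.

def balloon_step(current_kb, target_kb, static_max_kb, chunk_kb=1024):
    goal = min(target_kb, static_max_kb)   # never exceed static-max
    if current_kb > goal:                  # shrink toward the target
        return current_kb - min(chunk_kb, current_kb - goal)
    if current_kb < goal:                  # may grow back up to the target
        return current_kb + min(chunk_kb, goal - current_kb)
    return current_kb

print(balloon_step(4096, 2048, 8192))  # -> 3072, one chunk toward target
```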
>> > > +#### ~/device/suspend/event-channel = ""|EVTCHN [w]
>> > > +
>> > > +The domain's suspend event channel. The use of a suspend event channel
>> > > +is optional at the domain's discression. If it is not used then this
>> > > +path will be left blank.
>>
>> May it be ENOENT ?  Does the toolstack create it as "" then ?
>
> libxl seems to *mkdir* it:
>     libxl__xs_mkdir(gc, t,
>                     libxl__sprintf(gc, "%s/device/suspend/event-channel", dom_path),
>                     rwperm, ARRAY_SIZE(rwperm));
>
> which I suppose is the same as writing it as "" (unless there is some
> subtle xenstore semantic difference I'm not thinking of)
>
> If xend writes this key then I can't find it. I rather suspect the
> ~/device/suspend is guest writeable in that case (but I can't find that
> either).
>
> While grepping around I noticed xs_suspend_evtchn_port which reads this.
> Seems like an odd place for it...
>
>>
>> > > +#### ~/device/serial/$DEVID/* [HVM]
>> > > +
>> > > +An emulated serial device
>>
>> You should presumably add
>>     XXX documentation for the protocol needed
>> here.
>
> I think this is in docs/misc/console.txt along with the PV stuff, so
> I've added that as a reference.
>
>>
>> > > +#### ~/store/port = EVTCHN []
>> > > +
>> > > +The event channel used by the domains connection to XenStore.
>>
>> Apostrophe.
>>
>> > > +XXX why is this exposed to the guest?
>>
>> Is there really only one event channel ?  Ie the same evtchn is used
>> to signal to xenstore that the guest has sent a command, and to signal
>> the guest that xenstore has written the response ?
>
> Yes, event channels are bidirectional so that's quite common.
>
>> Anyway surely this is something the guest needs to know.  Why it's in
>> xenstore is a bit of a mystery since you can't use xenstore without it
>> and it's in the start_info.
>
> I should have written "why is this exposed to the guest via xenstore?"
>
>> Is this the same value as start_info.store_evtchn ?  Cross reference ?
>
> I'd be semi inclined to ditch/deprecate it unless we can figure out what
> it is for -- as you say there is something of a chicken and egg problem
> with using it.
>
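[Editorial sketch of the chicken-and-egg point above: a PV guest already has its xenstore event channel and ring frame from start_info (store_evtchn and store_mfn in xen/include/public/xen.h), and it needs them *before* it can read any xenstore key, so ~/store/port cannot be its source of truth. The dict below is a stand-in for the start_info page, not a real API.]

```python
# Bootstrap ordering: the xenstore connection parameters must come from
# start_info, because reading xenstore requires them already.

start_info = {"store_mfn": 0x1A2B, "store_evtchn": 3}  # provided by Xen at boot

def connect_xenstore(si):
    ring_frame = si["store_mfn"]       # step 1: map the shared ring page
    port = si["store_evtchn"]          # step 2: bind the event channel
    return ring_frame, port            # only now can any xenstore read happen

print(connect_xenstore(start_info))  # -> (6699, 3)
```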
>>
>> > > +#### ~/store/ring-ref = GNTREF []
>> > > +
>> > > +The grant reference of the domain's XenStore ring.
>> > > +
>> > > +XXX why is this exposed to the guest?
>>
>> See above.
>
> Yup, the same issues.
>
>> > > +#### ~/device-model/$DOMID/* []
>> > > +
>> > > +Information relating to device models running in the domain. $DOMID is
>> > > +the target domain of the device model.
>> > > +
>> > > +XXX where is the contents of this directory specified?
>>
>> I think it's not specified anywhere.  It's ad-hoc.  The guest
>> shouldn't need to see it but exposing it readonly is probably
>> harmless.  Except perhaps for the vnc password ?
>
> vnc password appears to go into /vm/$uuid/vncpass (looking at libxl code
> only).
>
> AFAIK it does nothing special with the perms, but /vm/$uuid is not guest
> readable (perms are "n0") so I think that works out ok.
>
> I wonder if that's part of the point of /vm/$uuid.

What has /vm/$UUID been used for historically?

I find it useful if you set your own UUIDs, as it provides a consistent
path across guest reboots (which of course change the domid).
A /byname shortcut sounds good as a replacement if /vm/$UUID goes away.
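[Editorial sketch of the /byname idea floated in this thread. This is a hypothetical layout, not an existing xenstore schema: keep the name under /local/domain/$DOMID and mirror it as /byname/$NAME = $DOMID for constant-time name-to-domid lookup.]

```python
# Dict as a stand-in for the xenstore tree; helper names are hypothetical.

store = {}

def register_domain(store, domid, name):
    store["/local/domain/%d/name" % domid] = name
    store["/byname/%s" % name] = str(domid)   # fast-lookup mirror

def lookup_domid(store, name):
    return int(store["/byname/%s" % name])    # no scan over all domains

register_domain(store, 7, "web01")
print(lookup_domid(store, "web01"))  # -> 7
```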

>
>> > > +### /vm/$UUID/uuid = UUID []
>> > > +
>> > > +Value is the same UUID as the path.
>> > > +
>> > > +### /vm/$UUID/name = STRING []
>> > > +
>> > > +The domains name.
>>
>> IMO this should be
>>   (a) in /local/domain/$DOMID
>>   (b) also a copy in /byname/$NAME = $DOMID   for fast lookup
>> but not in 4.2.
>>
>> Guests shouldn't rely on it.  In fact do guests actually need anything
>> from here ?
>
> I'd say definitely not, but it has existed with xend for many years so
> I'd be surprised if something hadn't crept in somewhere :-(
>
> Ian.
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



-- 
CTO | Orion Virtualisation Solutions | www.orionvm.com.au
Phone: 1300 56 99 52 | Mobile: 0428 754 846

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Aug 25 11:15:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 25 Aug 2012 11:15:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T5EKo-0002br-26; Sat, 25 Aug 2012 11:15:22 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <gbtju85@gmail.com>) id 1T5EKm-0002bm-G7
	for xen-devel@lists.xen.org; Sat, 25 Aug 2012 11:15:20 +0000
Received: from [85.158.143.35:30752] by server-2.bemta-4.messagelabs.com id
	B0/FF-21239-7C3B8305; Sat, 25 Aug 2012 11:15:19 +0000
X-Env-Sender: gbtju85@gmail.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1345893318!5145447!1
X-Originating-IP: [209.85.217.173]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_20_30,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31620 invoked from network); 25 Aug 2012 11:15:19 -0000
Received: from mail-lb0-f173.google.com (HELO mail-lb0-f173.google.com)
	(209.85.217.173)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	25 Aug 2012 11:15:19 -0000
Received: by lbbgm13 with SMTP id gm13so1922582lbb.32
	for <xen-devel@lists.xen.org>; Sat, 25 Aug 2012 04:15:18 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:cc:content-type;
	bh=Levxpb1cv/c5BwQZLRsYM7V2OJ8Sn4LGFXk1vfBZ580=;
	b=ry58UY2ZCwWFIuJnkhvvRBXUis3Xdbp6AnxN3JzVcVhXyG3XaWOLrZrsfaGeq43iNF
	IWTDlU5hx4PlzpJZ+rM7n2e88YjcWH/ccK3h/PMomgTnD+TcTYN3aajDt+lEAAS+X+cv
	CVo0LiyUrksSqAyn7Nx70cR9lEM+N6R8N5z/K6/fN6m0GKXcfTRN6Rhiph+LQaWSeNuc
	cYrA/PXLgamm+ezrWjcs5bkWJYOgR7byy6Bc5DOC9Y3KLNLaJZOYxylzAsHBZ961rfKE
	mTdKZW1XyXIVrbgAcetPh954IEs0FmmB1pizJpDA1RGq1D1Hgi7TUkWlxgQbBnWGbyIG
	MgMg==
MIME-Version: 1.0
Received: by 10.112.85.200 with SMTP id j8mr3886669lbz.41.1345893318336; Sat,
	25 Aug 2012 04:15:18 -0700 (PDT)
Received: by 10.114.2.193 with HTTP; Sat, 25 Aug 2012 04:15:18 -0700 (PDT)
Date: Sat, 25 Aug 2012 19:15:18 +0800
Message-ID: <CAEQjb-SdU+q9eGnwY4hfyoQe8mBNVM6qQp+r_bwxJ+btnR-EDQ@mail.gmail.com>
From: Bei Guan <gbtju85@gmail.com>
To: xen-devel <xen-devel@lists.xen.org>
Cc: Jordan Justen <jljusten@gmail.com>
Subject: [Xen-devel] How to decide the xenstore path for a domain
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3241590925047196288=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3241590925047196288==
Content-Type: multipart/alternative; boundary=bcaec554d60c2520aa04c815353a

--bcaec554d60c2520aa04c815353a
Content-Type: text/plain; charset=ISO-8859-1

Hi,

Can anyone tell me how to decide the number in the XenStore key path for a
DomU's front driver?
In Mini-OS, the block front driver writes its keys to
"*device/vbd/768/[ring-ref|event-channel|protocol|...]*" according to the
code in mini-os/blkfront.c.
So why does it use the number "768" in the path here? Can this number be
another one? For a new DomU's block front driver, how is this number decided?

Thank you very much.


Best Regards,
Bei Guan
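[Editorial note: the 768 comes from the classic Xen virtual-block-device numbering (described in the vbd-interface document in the Xen tree): the node name encodes a traditional Linux device number, and 768 is hda (major 3, minor 0). A sketch of the encoding, with helper names of my own:]

```python
# Classic VBD numbering: device/vbd/$N uses N = (major << 8) | minor,
# so 768 is hda and a guest could equally use e.g. 51712 for xvda.

def vbd_id_hd(disk, partition=0):
    """hdX: major 3 for hda/hdb, major 22 for hdc/hdd, 64 minors per disk."""
    major = 3 if disk < 2 else 22
    return (major << 8) | ((disk % 2) * 64 + partition)

def vbd_id_xvd(disk, partition=0):
    """xvdX (disk < 16): major 202, 16 minors per disk."""
    return (202 << 8) | (disk * 16 + partition)

print(vbd_id_hd(0))   # -> 768, the number Mini-OS picked (hda)
print(vbd_id_xvd(0))  # -> 51712, i.e. xvda
```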

--bcaec554d60c2520aa04c815353a--


--===============3241590925047196288==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3241590925047196288==--


From xen-devel-bounces@lists.xen.org Sun Aug 26 05:51:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Aug 2012 05:51:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T5Vk3-00035z-8D; Sun, 26 Aug 2012 05:50:35 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T5Vk1-00035u-3A
	for xen-devel@lists.xensource.com; Sun, 26 Aug 2012 05:50:33 +0000
Received: from [85.158.143.99:61791] by server-1.bemta-4.messagelabs.com id
	4C/52-12504-829B9305; Sun, 26 Aug 2012 05:50:32 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-12.tower-216.messagelabs.com!1345960228!22475543!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA2NTc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1681 invoked from network); 26 Aug 2012 05:50:31 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-12.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Aug 2012 05:50:31 -0000
X-IronPort-AV: E=Sophos;i="4.80,313,1344211200"; d="scan'208";a="14185083"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	26 Aug 2012 05:50:27 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Sun, 26 Aug 2012 06:50:27 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1T5Vju-0001mR-Tb;
	Sun, 26 Aug 2012 05:50:26 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1T5Vju-0003qr-GB;
	Sun, 26 Aug 2012 06:50:26 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13630-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 26 Aug 2012 06:50:26 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13630: tolerable FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13630 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13630/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13629
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13629
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13629
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13629

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  1126b3079bef
baseline version:
 xen                  1126b3079bef

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

http://www.chiark.greenend.org.uk/~xensrcts/logs/13630/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13629
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13629
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13629
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13629

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  1126b3079bef
baseline version:
 xen                  1126b3079bef

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Aug 26 14:00:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Aug 2012 14:00:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T5dNK-0006D4-49; Sun, 26 Aug 2012 13:59:38 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.xu.prc@gmail.com>) id 1T5dNH-0006Cz-St
	for xen-devel@lists.xensource.com; Sun, 26 Aug 2012 13:59:36 +0000
Received: from [85.158.138.51:57800] by server-3.bemta-3.messagelabs.com id
	68/E0-13809-6CB2A305; Sun, 26 Aug 2012 13:59:34 +0000
X-Env-Sender: wei.xu.prc@gmail.com
X-Msg-Ref: server-14.tower-174.messagelabs.com!1345989572!21480961!1
X-Originating-IP: [209.85.210.171]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9227 invoked from network); 26 Aug 2012 13:59:34 -0000
Received: from mail-iy0-f171.google.com (HELO mail-iy0-f171.google.com)
	(209.85.210.171)
	by server-14.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Aug 2012 13:59:34 -0000
Received: by iabz25 with SMTP id z25so8173892iab.30
	for <xen-devel@lists.xensource.com>;
	Sun, 26 Aug 2012 06:59:32 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=0iowQBChaLUjdX68K6N9BPjAnR1dVvlgk5vTINs186k=;
	b=vjXrwmOrv0Vs0UxiOSa1C0p5cZWLhab8nAo9tLnjv+GdYQKC2TuPQIs0FZIqAEbwkY
	agEbkBlVggiEbWCP5acCDbhJNNFDH11Y080IZOiX1qKuzcK0e/OHSueAxNZBY6NNW6yT
	uVTdyYBb6hNE83rXUOJtmLg40tFBE/FHxBrCsCuWfD0/CyVkUrjge8lMfiz6OdZdSIEl
	lgN/vU6uSh/oa8XlDYmA/qcxADhBwt9js4OK5I+rPRjXXtt0TB51Btb8VqjK+xJ+gQhJ
	WnJ0RngNQ1UH+Br8Dc1GUfbCnA8TEhSI+AGdUH2TIj9oth+ZCmyPgY3rVLlBZwikCl/2
	GeHA==
MIME-Version: 1.0
Received: by 10.50.47.200 with SMTP id f8mr7561598ign.6.1345989572260; Sun, 26
	Aug 2012 06:59:32 -0700 (PDT)
Received: by 10.64.100.139 with HTTP; Sun, 26 Aug 2012 06:59:32 -0700 (PDT)
In-Reply-To: <alpine.DEB.2.02.1208231418010.15568@kaball.uk.xensource.com>
References: <CAH=9XOZC46yEKjoQKRgD3aEUaqvNcNF4Jcs=0HDdxJbAOpWEXw@mail.gmail.com>
	<CAH=9XObhC86Cg7kWdrUvPtPQEX94RcGiLH621pLbye3Q02=9ww@mail.gmail.com>
	<20120820140241.GD7847@phenom.dumpdata.com>
	<alpine.DEB.2.02.1208201639390.15568@kaball.uk.xensource.com>
	<CAH=9XOYuPpfwoWd4swSzMTJFcFVhhXAQJyjMpp2FiVqm7xDf-w@mail.gmail.com>
	<alpine.DEB.2.02.1208231418010.15568@kaball.uk.xensource.com>
Date: Sun, 26 Aug 2012 21:59:32 +0800
Message-ID: <CAH=9XOYv8N4NP9AfkUDwgKAUfvcFh16MuSVsjG1DoAVVoKOXXw@mail.gmail.com>
From: Wei Xu <wei.xu.prc@gmail.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: xen <xen@lists.fedoraproject.org>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] DomU console driver not works for Fedora17 in HVM
	mode with Xen 4.1.2
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7620030511116812104=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7620030511116812104==
Content-Type: multipart/alternative; boundary=14dae9340cc35372b504c82b9eb9

--14dae9340cc35372b504c82b9eb9
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

Thanks=EF=BC=8Ci'll try it

On Thursday, August 23, 2012, Stefano Stabellini wrote:

> On Thu, 23 Aug 2012, Wei Xu wrote:
> > Thanks for your reply.
> >
> > I have tried the method but it looks still can't work, there is no
> "hvc0" device file in my "/dev" directory,
> > is that the root cause?
>
> Reading again the previous message, it is clear that you are still using
> xm/xend as toolstack to create VMs. Unfortunately xend doesn't support
> PV consoles for HVM guests, you need to use the new xl toolstack if
> you'd like to use that functionality.
>
> See: http://wiki.xen.org/wiki/XL
>
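
As a concrete illustration of the advice above (added for clarity; every name, path, and size here is a placeholder, not taken from this thread), a minimal xl guest config for an HVM guest might look like:

```
# Hypothetical minimal xl config for an HVM Fedora 17 guest.
# All names, paths, and sizes are placeholders.
builder = "hvm"
name    = "fedora17-hvm"
memory  = 1024
vcpus   = 1
disk    = [ 'phy:/dev/vg0/fedora17,hda,w' ]
serial  = 'pty'
```

With a config along these lines, `xl create` the guest and attach with `xl console` (newer xl versions let you pick the console type, e.g. `xl console -t pv <domain>`). Inside the guest, the PV console frontend plus `console=hvc0` on the kernel command line is what makes /dev/hvc0 appear.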

--14dae9340cc35372b504c82b9eb9
Content-Type: text/html; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

Thanks=EF=BC=8Ci&#39;ll try it<span></span><br><br>On Thursday, August 23, =
2012, Stefano Stabellini  wrote:<br><blockquote class=3D"gmail_quote" style=
=3D"margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">On Thu, =
23 Aug 2012, Wei Xu wrote:<br>

&gt; Thanks for your reply.<br>
&gt;<br>
&gt; I have tried the method but it looks still can&#39;t work, there is no=
 &quot;hvc0&quot; device file in my &quot;/dev&quot; directory,<br>
&gt; is that the root cause?<br>
<br>
Reading again the previous message, it is clear that you are still using<br=
>
xm/xend as toolstack to create VMs. Unfortunately xend doesn&#39;t support<=
br>
PV consoles for HVM guests, you need to use the new xl toolstack if<br>
you&#39;d like to use that functionality.<br>
<br>
See: <a href=3D"http://wiki.xen.org/wiki/XL" target=3D"_blank">http://wiki.=
xen.org/wiki/XL</a><br>
</blockquote>

--14dae9340cc35372b504c82b9eb9--


--===============7620030511116812104==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7620030511116812104==--


From xen-devel-bounces@lists.xen.org Sun Aug 26 23:23:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Aug 2012 23:23:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T5mAM-0008Gr-Hr; Sun, 26 Aug 2012 23:22:50 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <halcyon1981@gmail.com>) id 1T5mAK-0008Gm-AZ
	for xen-devel@lists.xen.org; Sun, 26 Aug 2012 23:22:49 +0000
Received: from [85.158.143.99:61320] by server-2.bemta-4.messagelabs.com id
	80/34-21239-7CFAA305; Sun, 26 Aug 2012 23:22:47 +0000
X-Env-Sender: halcyon1981@gmail.com
X-Msg-Ref: server-11.tower-216.messagelabs.com!1346023364!20515655!1
X-Originating-IP: [74.125.82.173]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8807 invoked from network); 26 Aug 2012 23:22:44 -0000
Received: from mail-we0-f173.google.com (HELO mail-we0-f173.google.com)
	(74.125.82.173)
	by server-11.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Aug 2012 23:22:44 -0000
Received: by weyz53 with SMTP id z53so2409799wey.32
	for <xen-devel@lists.xen.org>; Sun, 26 Aug 2012 16:22:44 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=I/4GBYggagcVeenbkKG/Kd9p05Oysa1v4KQ7oHm6YZ8=;
	b=ufrv6OD8dKMOBjJms35Y+q94mklLL34VlazcqxMZBajiEk+JLgeenH5eRWZv7OmcuX
	W27sr2QDfzdSFExgzWbT5EVr7GPW0ipuW5CiyZDNlkMmT/en0ETFNM77dTb4v+CRlXB3
	V9pHBUF4+p/8cq+TFvTIehDfOgdV5+K/ajgYiAPhdm/ltZ1QFB6kquW+Ki/dc2HMTrwE
	XdEhYSzdCpzuHNiFHjsInR+qPlhKK5me3N8dQJCpt3PgZ/mrloXS7L4wd4v3YqtmF/13
	erkru8SK3UOAmW7BUTHwwktImdy76nIhby2EKTHBfxToQgGPbUUwCfzl/Oy0IBvUd/ob
	N7FA==
MIME-Version: 1.0
Received: by 10.216.65.202 with SMTP id f52mr5376662wed.206.1346023363823;
	Sun, 26 Aug 2012 16:22:43 -0700 (PDT)
Received: by 10.223.204.2 with HTTP; Sun, 26 Aug 2012 16:22:43 -0700 (PDT)
Date: Sun, 26 Aug 2012 16:22:43 -0700
Message-ID: <CANKx4w8=5nubN6C8CZij3Lz6_18n6ROaczvTuL+Ap8jug357dg@mail.gmail.com>
From: David Erickson <halcyon1981@gmail.com>
To: xen-devel@lists.xen.org
Content-Type: multipart/mixed; boundary=000e0cd3406a75c8a504c8337ccb
Subject: [Xen-devel] Lockup in netback - Xen 4.1.2 (XS 6.0.2 hotfix 7)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--000e0cd3406a75c8a504c8337ccb
Content-Type: text/plain; charset=ISO-8859-1

Hi all-
I am seeing an intermittent lockup of my machine's networking as soon
as I apply a network load.  On a pool of 80, the first one generally
locks up within 15-20 minutes of beginning the workload.  The symptom
is a long list of the following in /var/log/messages:

Aug 16 18:32:49 localhost kernel: netback[1]: TXP193 is DMA mapped
Aug 16 18:32:49 localhost kernel: netback[1]: TXP211 is DMA mapped
Aug 16 18:32:49 localhost kernel: netback[1]: TXP232 is DMA mapped
Aug 16 18:32:49 localhost kernel: netback[1]: TXP157 is DMA mapped
Aug 16 18:32:49 localhost kernel: netback[0]: TXP44 is DMA mapped

This seems to clog up the networking pipeline, which leads to a stall
in my NIC driver:

Aug 16 18:32:58 localhost kernel: ------------[ cut here ]------------
Aug 16 18:32:58 localhost kernel: WARNING: at
net/sched/sch_generic.c:261 dev_watchdog+0x241/0x250()
Aug 16 18:32:58 localhost kernel: Hardware name: C51G,MCP51
Aug 16 18:32:58 localhost kernel: NETDEV WATCHDOG: eth0 (tg3):
transmit queue 0 timed out
Aug 16 18:32:58 localhost kernel: Modules linked in: nfs nfs_acl
auth_rpcgss sch_htb lockd sunrpc 8021q openvswitch ipt_REJECT
nf_conntrack_ipv4 nf_defrag_ipv4 xt_state nf_conntrack xt_tcpudp
iptable_filter ip_tables x_tables binfmt_misc nls_utf8 isofs video
output sbs sbshc fan container battery ac parport_pc lp parport nvram
thermal rtc_cmos processor evdev sg tg3 button thermal_sys rtc_core
sata_sil24 rtc_lib serio_raw tpm_tis tpm tpm_bios i2c_nforce2 pcspkr
i2c_core ide_generic dm_snapshot dm_zero dm_mirror dm_region_hash
dm_log dm_mod sata_nv pata_acpi ata_generic libata sd_mod scsi_mod
ext3 jbd uhci_hcd ohci_hcd ehci_hcd usbcore fbcon font tileblit
bitblit softcursor
Aug 16 18:32:58 localhost kernel: Pid: 0, comm: swapper Not tainted
2.6.32.12-0.7.1.xs6.0.2.553.170674xen #1
Aug 16 18:32:58 localhost kernel: Call Trace:
Aug 16 18:32:58 localhost kernel:  [<c031a1a1>] ? dev_watchdog+0x241/0x250
Aug 16 18:32:58 localhost kernel:  [<c031a1a1>] ? dev_watchdog+0x241/0x250
Aug 16 18:32:58 localhost kernel:  [<c012e0bc>] warn_slowpath_common+0x7c/0xa0
Aug 16 18:32:58 localhost kernel:  [<c031a1a1>] ? dev_watchdog+0x241/0x250
Aug 16 18:32:58 localhost kernel:  [<c012e126>] warn_slowpath_fmt+0x26/0x30
Aug 16 18:32:58 localhost kernel:  [<c031a1a1>] dev_watchdog+0x241/0x250
Aug 16 18:32:58 localhost kernel:  [<c02188f6>] ?
blk_rq_timed_out_timer+0xe6/0x110
Aug 16 18:32:58 localhost kernel:  [<c0137fe1>] run_timer_softirq+0x151/0x200
Aug 16 18:32:58 localhost kernel:  [<c0319f60>] ? dev_watchdog+0x0/0x250
Aug 16 18:32:58 localhost kernel:  [<c013359a>] __do_softirq+0xba/0x180
Aug 16 18:32:58 localhost kernel:  [<c015b657>] ? handle_IRQ_event+0x37/0x100
Aug 16 18:32:58 localhost kernel:  [<c015e774>] ? move_native_irq+0x14/0x50
Aug 16 18:32:58 localhost kernel:  [<c01336d5>] do_softirq+0x75/0x80
Aug 16 18:32:58 localhost kernel:  [<c01339bb>] irq_exit+0x2b/0x40
Aug 16 18:32:58 localhost kernel:  [<c029c7b7>] evtchn_do_upcall+0x1e7/0x330
Aug 16 18:32:58 localhost kernel:  [<c010470f>] hypervisor_callback+0x43/0x4b
Aug 16 18:32:58 localhost kernel:  [<c0107095>] ? xen_safe_halt+0xb5/0x150
Aug 16 18:32:58 localhost kernel:  [<c010adae>] xen_idle+0x1e/0x50
Aug 16 18:32:58 localhost kernel:  [<c0102a7b>] cpu_idle+0x3b/0x60
Aug 16 18:32:58 localhost kernel:  [<c0373c43>] rest_init+0x53/0x60
Aug 16 18:32:58 localhost kernel:  [<c04f5cea>] start_kernel+0x29a/0x340
Aug 16 18:32:58 localhost kernel:  [<c04f55f0>] ? unknown_bootoption+0x0/0x1f0
Aug 16 18:32:58 localhost kernel:  [<c04f507c>] i386_start_kernel+0x7c/0x90
Aug 16 18:32:58 localhost kernel: ---[ end trace 76ea5a31a8fc2f33 ]---

and after the NIC driver fails, netback un-stalls itself:

Aug 16 18:33:00 localhost kernel: tg3 0000:01:00.0: tg3_stop_block
timed out, ofs=1400 enable_bit=2
Aug 16 18:33:00 localhost kernel: pci 0000:00:02.0: eth0: Link is down
Aug 16 18:33:00 localhost kernel: netback[1]: DMA mapped TXP 203 released
Aug 16 18:33:00 localhost kernel: netback[1]: DMA mapped TXP 212 released
Aug 16 18:33:00 localhost kernel: netback[2]: DMA mapped TXP 94 released
Aug 16 18:33:00 localhost kernel: netback[1]: DMA mapped TXP 159 released

To get packets moving again I have to use a serial console to the
host, rmmod the tg3 driver, modprobe it again, ifconfig the interface
up, and restart OVS.

I've tried a variety of things to debug the problem:
-Turning off all hardware acceleration on the NIC from ethtool
-Different OVS versions
-Using a single dom0 vcpu
-Turning off irqbalance and MSI
-Trying the latest stable kernel in my VMs (3.5.3)
-Tried a newer TG3 driver from the Citrix crew
(http://forums.citrix.com/thread.jspa?threadID=311744)

But to no avail.  I don't ever see the "is DMA mapped" messages under
normal operation, so it seems like whatever is causing dom0 to believe
that the memory in the netback/netfront rings is DMA mapped is the
problem.  If anyone has any suggestions on how to approach or solve
this problem I am open to ideas; I've spent a couple of weeks on and
off on it with no resolution.  I'm attaching a tar with all the log
messages from the system in case they help.

Thanks in advance,
David

--000e0cd3406a75c8a504c8337ccb
Content-Type: application/x-gzip; name="crash_newdriver-1_logs.tgz"
Content-Disposition: attachment; filename="crash_newdriver-1_logs.tgz"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_h6crjjl30

H4sICN98LVACAGNyYXNoX25ld2RyaXZlci0xX2xvZ3MudGFyAOxdaXvbuBHer9GvwNN+qF2bMi+d
Tba1ZTlR14dWUo429aNSJCWzlkgtSTl2f33fAXhIoijKjrPptvZuxAMzA2AwGAwwA9D0jeBm6Npf
LN+5s/2hUp56kx+e90/GX1XXceV/a9dapabJPyiqrGhqrVJRaz/gribrPzD5h1/hbxGEhs/YD77n
hVvACtN/o3+/WE5gsptwxJQms+w7Zoc3MqPKMt8em27IqsxXf2GKjNSxsZiGTGGW49tmOJwb5q0d
BkNwMGRyifVtF6kNpa7oarWu1NnoIbQDptRrar1erbH5bcj2LN+bz22LyYfMg8BNnZkTBkxGbr8s
7AWBK/usxHwjtJk8ckB4Pg/YCFlBMPEGz8uwrCRqMB87Y284NgIgiIrcOWOlvFoVFXRcK2Aam/uO
NzPmDJVR6T9+lfGfsvpfVClNbyiNuqxpalQntao0VK2yWiVd0fS8WlUaFX33ehH01pqpz1UzvaFW
9ZqqVitRzeo1vJDrqzXT6jW5mlc1tSE36rvXjYNvrZz2XJWrVRp6RZPrtWosiqhcFQ25Wjs0W640
ghnVR9QN0Furpj9X1aA065W4Wqpeq+7Yu+TdKyOz0mWnxah3O0HomEGzxOjPvx96ZoiO32RKVWtU
K3q1UqsnaWPfmMxsl5Ll5OXCNIJIYVCCWq9BwtVqkj5bTa9UU3qj1SQ5zcgMhrbve/7KW2PqTNwN
7+89F0QWgT30zTtrNWU8zkmaGebQDP3pZhxU0vZt8T6t/Mwehp43nHruZEMx/mWMRvbqq4Vr4Y3z
b3tTJR13OLXdSXizgZa3CPMTq/rQ85EcBKK1NhGvVlDUoaLWckGQWCcYtVLJhUFilWAqipILg0SV
5yWrWi4QpeocqqKquVCUqvEyyXp+wSmVl1yXG/lFp1Re9rrSyC88pfLSN+TccoVLnaLRqOuypleU
RpJmetOpEzieu4QhZDKAFK2+G48zL8dT7wtouKHvTdP3XD6Tpk/fBo47mdo5ec4whOckYYAHMdta
fmffmxAh5y6P3NQIc5LEW8seqqEzszemaLkpem5KJTelmptSy02p56Y0clMUOT9JyU9S85O0/CQ9
PynDiayWlWs1PValYVbLVqpRUkbLaml2hu87tk8iGdgbhA1jHUCs5XfrUNbMGH7xndD+ZTheTKfZ
90Ma/NYS/fuRBf0+m4cP6FLRu+XsFFmv1JL3WQUY3vh2cONNreGNE67k6duGJXLLvF4rScrz2XzI
h8W1QqKngTMhcYdwPWvoQJvfrwLQCLoIhou5hc6SJLmOOXT8X4KVF8adh7a1sglhXoVmo8V4OP0y
i5LjtJZnQPebNpsbNCSFGGHY2PO5ed8sHVvGPESnZr1PTQaFw9hA3JSorIE0mnrmrYQR0eSlCIzZ
fGpLDo13dwavPqwNicwICZpp5fnGmdzQi5J/HxNQZXrgQyNJXZIioZYATRL5CwIohQluTcVDiqvh
aRU5XENOMxZFS8lHzwmF5Hk1PSUQ1SWlEL1ISaQv1iFKV2MobcPayH8iaN7Y5m2wmM0gIWC9CxLZ
d5D1EKjSxAhvbF+AmXMpsLmFZYRQuZInMuKJC2tO5cimisad2K7tO2YufgKA6Z0N8VjFnhr+xN6c
1IX1lCdqi9Bz7YkXOiT8r5AJZO4VXQfRtURLD1S7cKKVcEMDCTRQWVFVrzR2/NkXw7elJKFSqyjS
nVau1EujBVrdHXtgOP6astKU5bJc6oF9mwvTBSF0VzYz7p3ZYhY0RWFgueCGXTiu03wl0+1fF7OR
R/eDOL218H0yvG+gfahADHRCZBTTUGW5kEZJlnkJWe/4gs3smedDt7kfHMsxWMvz557Pm4S1Kgp7
52HicOI71sRmez5mD4a6X3rVim2AztGVxC7s2QE7WQQXRoCKHrD+3DZbD+bU5kkfO5cfJPbh7XHf
9by5xLqG3/Z9ifVDez5HyXHX7vUkdgbsE/VEYqdO0Lkc3EulV32usZqsZcwPWLV68e7fB+z96VkC
8s2M+yHdNTPu7bpDM+77tqXJuB/GTpVxP6wjDOPeticE4962pRrjfmg7WMa9bV8dMO5tW64x7oc1
c8C4H9YOknFv21Yj4962IRzj3rarA8a9fTtYxn2fXvQY9307QTDu+7aAZ9xbcYJi3NtxgmHcW3GC
Ytz37USfcd+1I8yCTnvGvV3XLRn3Q5mtWBvQUTPj3pafdox7Wy61GPd2TRXvP5oF2b4Z93Zb04iT
qjPuheR9HxebGfKkdLPxU2PI58P7o+Bvz8++vzzo/eRbc/x2zbi32xwk496uqbk2456n7MiwCNtZ
DjUsGzLu7Zrsm3FvtzVgSY9xz9nAg6QLxv3Ajg0Y9wM7NmDcW3Jsi3E/eHO6ZdwP3pyOGfeW2mOJ
cW+3Nc1zQI1xz33vkc0BBox7u44NGffDm0Qw7i2bPGDcD2FGl3FvyZYGIrx6qdx/yGyVcf/IjVcv
MfxPmZ3/h9wOOoANGfeWTDYw7gOWSf/R+Nox7gMh8iH3F0QoqjDuuU8H6mAc496WmTaMe0sedRj3
liztGPfDdJsm496qKzTj3p4rdSwYsqPwjHurtvCMe0u2jBj3wRBOkYx7S27UPYOPzsZxxr3KCXQI
c+cY99JPfTFkJD3DuFdSWNxseBQsH2qD5VtUWflMEywvxsRH3HPsse7uOYYqQ0SVvNsqj0KWPRYw
M8TbOUUU4g1pqRnxdk7IHHuAhj2gEGMVlr2W/S7oYui2mLHs++kANL4B1QEHPAq9QdRFL6AbZMZ4
a2jBjvGmU8qwqdgBh4an9WMT2yNd2WzJeFPqnTPeQFRr2j2AlYXasLIWVVY+04SViTFljVDZEImf
71MMybIPUbB5pHbZTd3K77Hswz2WPbb4jmUfGrLslbo2yz7EsuzPqSud8yx7KYnvZVK4lMOxJ2Sc
uiISAg5VMpJhHKRewt5qXd2OZR8lLCKU5C4NROTKlEQuFzJwZZZEaRRHHhFJ6/rasuy162xg2UcH
6jZZ9uFJln14gmUfNrPsMeMGyO1a8nsse9Q40dTvlGUfmrDsI1BDHL13LPsQybJXQlq4eR2Wvba4
DssexDW48JFO0R3aVacUAu0qtDxTolN2a1qvqTjTzSz7EMOyB6VeWfZhwbIXMeVJ6oec81arDsHY
C6oPY5iu5tkvPCBEAf7i6Tur4rSqCO9qcet9+Px9WC3IV6MXl1fPxs4zBWBbqnM7J5PTSWVjMH9M
6ZHNOk8nxdF8rP7zRgQGxeolGSu6yiu1d2Rw+29NhhnIrOdOvCr2OecqmS/hzHJ9d+dk6/mdfN1C
/jNgzDmbRRhVzSzPnXy2cF787MTT7OHl9c+1OsLatLYJaor5R4kXvVmAJoyF6+s35Rz0gfvRKp3D
cPJGNAhHdETpOAh8f+xMZ7ueVdloGqerye9w9lASTTYUQ+4sJUhm0PBsstzE7WqoZWzMQ+3Zdn+S
nc5cRQbem245/BuGwhI2z5/O97PnxVaD98rviZ1nMMmWb/02W/Zu5cFHscoVkpiygKduFAfSZRLm
TpiVfVdkAUlERklIEsuWhOcxj3rUlZR5LstS340Dmrrcz/Mg82QGP71bKqfwatd8+PFU/gF/b7JT
9vA6haPUSqqeLU64PywAJ+7LVVIuJcLp8Icf/0beo/An3RZW72wKbto9A17jQi1l5pPbMRAc3QV8
9uF+AXquQPx6U4Bsf+obCT8nf/bZZ5dqP04mt40zsqa0XpbQF9fIEtwzCdaIkaO/q/djpbpfBQnN
uB9zNwlI6rI4ZmpXj11BeSzTKEijPLZsKckCIgIV2v08c1kQZm4Yxrkbcp+mmU9JGoreLf3PdnXu
acVvnV1dU1pvV9cXR+/q/piwMdmLXWpDXz29/hr+fvgl0WPu89WBzG8wAL8UjZm390vZy/RuUlyF
/+kXF8+uL79X1CoH5Ms0pIIOJSMy8kYeZ05xrh4ItN7Ti68LEvCexpmy5W3e19JViVxlcfXGKk5Q
1SWwtp3t2w3VBlhI2KniO1ACAV8E23C26zjoNdHYcWGPHQfTD200wMfUa+zdqEeXfMzo2PfrD3f3
eHD64e6/T9MVCLp/LNT1PgsHeuT1wcnd9tYo90Xtke7Lye0Ues6ltGai+UGpzaeMpko/lJSkmg7q
iah5brCMXOoz/MMadt//l59mzRcdtKyMes2VUa+jyvZbhri2oWVt+01DfL9Sqw3z3UIh3n5QHFsN
fgn6iZTTbQNOF98ulzh/xIvp3mUWImIODNlVcZnF0+Wf03T049UIrrTKbpYv1yu1XuBcj3nmkTTy
8lTE0ZvNBvnV+Qmk4BD+nlr1lcXCynoqX8+LvdB5Byy+45TgGzU9HA4btbc5Ncf6sazdAnWter3g
gE4lJuLoVFKKB6e+d8U9PqNZxPA7ZvxX4ZXLFTy/vFqBFdcqcMJC39Nx28UDNLDWzzxAg+weoKHj
fsgOw18UgL6+pNlrF92FNa77vJNm91rPO8HLdfe8E3yd6OedmEjOprImwFgtbNLIFxQdNvlh2OT6
YZM1X8nAcWHTQOlo2GSYWHEkbHJkrmAaIYmfCJscGzbbVVb5FJH/tGujXnPTqNdR006GTY4Im4wj
RkUhbjYo+JmwyRvDJuOYsBkiwibx4jCWeeT5ecuwyc+GTcZthk3YgvXqtfKgXtjUFNcLm0cf5Eoj
j4ca2eW0iHl2IbHn+SkN3DCVHBJbGrpxGKYuyzMp/TDw4f8j2YXwIrqI8vJkH97XMttFeME6N8qa
j7a/sJFNN2v23l1Y40ZZ87R7o6yJkus4a6Lq1MuampL1rOmNuagvbFIWBOisKQ6zptDNmmCi+VoY
gcmaRkqbrFnXQaSKI1lToGKF0te9ZufHq4dLcsQTp2zCTRGsnA83L52HuPA3pKf3iuz0j/PpOAm5
F+QyiTPuvSmK/FoVSGfru6x4ZGwi1SUdqYKXZ/AijWE/daDqhUxni2z7VNksXsVJvDwcS/jgJ04E
aIEL0G0rq3zaGKANaqNec9Oo11HTTgZo0RigobYQMdQLcbORHp4J0KIhQKvizQEahjciQFMhacJz
InMStgvQ4kyA9kyul2kRoGELVqrXvaJDJ0Bri+sEaH9MyNHnxPpcnAxkHmx9DRHDXJP7KUtIlLi+
zGKX+Ql1wzzN3TBIM5qRmEU+ORLIfFLmMa9crfV9Hmi57SKSYa0bJGijDiNWOgwbRPUStIXuwho3
SNDn3BskaKRcpwkaWadOgtaWrCdoNvaCxsgZYMKrkdImvNZ1EDP6kfAaoKZ0pY/PJ8GJMBTgwlDb
yiqfNoYhg9oC0ty0gHTUtJMRNmiMsMhRUYi3HxT1nBc05DxVHJHzQorIebnwMxb7gRcnLRdKgzM5
Tzm2mfNgCyIOFt3kPG1xnZwH4uGxNYswinh9Fq0tQsxhxKsZ5P3Jcvb+66W7ms3ulm6gLnqDCdWj
ZAQfYKZG8sTRsKQ7NSLlHqZGujc1hgGpz4weYmZEVqkzM2pLHs6MPjs2tRJGA3Q+PCding9Fmga+
yLmbMcJclovMjSlnbhBmmUxpGGcZPbYiTot4yMoVcaqI9jpmuwiIWOcGef4R9xc2FuvleQvdhTWO
OWjpdZf+QQsl12meR9apd9TSlKwctYIx4Y0XJAdjSqpUqm34+MX3WKBOG9QbK2VUvVvcnAfbbSXv
IfoomtxJqe2F4M+LfKUEVjMHRp0CHIHZYL8g3edF7d0O7S5keSfDeiH3o92JBf2j5cCn2tOlfn3f
zpylLC+jVrfN3k4g0DrwArC89+pmRXXK8Z4DQ+i2vMp9VKsiaq6i+EzdNa5CB+xWhDIu3ZD5cDz0
Y+bGIRwec9/3Kfe9XJKoWgX1mqu4Xk/VONu6Hp8Pbl4R3JKFu4RouncT+vZGgtrN7ksHc6d73XS3
WdO25RPbeXP2WrkBx4lXq8UkWa/k9oHr6nacycwFanPPNcx/776CdA7b7yaX8Qp+/eY+Xr6q1XGE
M1c8kmZ33kM2d2rf/CUXszdv/lD48mwG5zRgRE7UqaD6kg0GayYdMoJdcAWfKDiaOgSNnFp9+4dH
tR8qCpuqVxGMCjbpcp2q73/Uvct/bst6pCx74gYL9W9lQlUIe/v2ez7KkeXb3aBxpF1/LO8zOIzu
H4aVr+dffv7i8vrK0S1afgUnF4iC32x/Vc0BRWlEoWpti83RB/KAWupZYgRU3+2VGcMHC3mLKPlN
eSje9n69xP554GKejlQ6gW7jRbddxOvbl6tdb78H59QuVKs2wUKJxtOHYTE6I3y/vltN5vFKBTh3
8/Ow+TavzxnbnqB+Xfz9MLu8q77JfLJt2UhLwMlieQ8jZqVG3TQ+1gKxEzg79tWvs+oyY5miMoeO
2IiOt3vr6EyJrcNXu3LvFT7L6dz50Hlftfb9V/fL2y0YEaN3LdWIUdbVMzvW8d3ucZ7zGHSLx9cX
TztiRP0gJL9RN21u+XuONwpGvjdSSNKRGNHR62Vxvj7i3B9RQQLB1GPX372V8tXs4wJKqsSewDtp
+iDCoKDneISERBDmvHsNA/WLeFW+77LwyRPnbeo8v3rmvHi5dr5a36l8SsSY0S38VEFREd6/vrz+
9vKb3WMD1OF7OUYUdADjtYKh87mcriHiFC9QxZ5efeo8XcPkMV1NUniBKvTt8wuoSR3vkz/VC1Sh
iz8Xk9fln19OlyuYUHDFwFm8Xmz//iJe3y1RBV8s4unyXq7i7UZ5cQUwFM2iD/+6ePYDoujPcurC
fgDrldB36sl6UO1mjlfobVRPfvLld8/HzuFDv/bfitJQPSFovVQPbnpipAoSddWYEKW6KPZJmZnp
ygO3lHSgS+tbQeSJpO22wkairhqmSvXpxbMviwtNDIXD9EA4Ig/C3/743FA3qmwG1WftN28eHgyy
PO5CV6Z1XfUWba8rD3VlB7p5XtOlpIPtq0Rqul65xfWGb+Xooh7eOYHfcK6fXqGPLaBw9tDi5aG2
q0+vvnRggnXmagNNUSkCFu5X6jklxbPuizqhBa9vFPT+4W1KCHbGV4sWN+VR9mbzsJPDVrp7bcQK
f/uz8+4lrFLAmZTz6aTcIgVdq1z6GTsFh0gilP7l7kq329aN8O++BZr+iNOaMgCuYuueepETt/Fy
LSdp6/qoXECZtSSqpOQ4PXn4zgCkKMkbJEtRWt0bWVzwzWCwDQaDwQUMrcftvyBbduJWQo9AxdFi
BPsMAGkfnmO+EieGtzAroIvcUU7en529/9jSKSwF9Nc2qCSyk2KxBLIPEIiVQPJ55wtrTiowOWkf
XRKTm9TkDtcndAR/FCHalISOrBlC+HwFhKrVNR9S7jjWX0kPotGDvydM2ORhW+gz+H4o2D4GpUWj
MXOaZItT2oT/zJ1RmATxyLC5szjBsyHUBFS4k1T04pqKXG7ECfNEoc3yki2AaM7+t0Mf4cZ2dbmB
NogFOhnOqGlZbEbO+BzFOydnyuWVtpxlgbZJNbwhmqVffffOjw9UWm7KtO7eDJP4fAWVwScnB0fv
S0KhJGTOVm98vhJCH85blyWhRBHyZgjB89W0oy+HFyUhU4nOcmcIwfPVEGpf7FWEPCTEGFWEcJ4A
nw97JyetC0JqQvCgvlqA0KQLsmTPYLZmRXd+evbllTmiJ/vkw/H7DyetExLcBWkP+2+d4cl1MOnH
sy8Lp5wcsIMh8/KgD0wQgyww4pBXpBxIA9wUgPooHO3BdIKDKwWQGzKnJth6I9aWTeTRkkSpTYFc
KnlHdnf/qIBFn1zNqSAVk9ca8IT8hioANsNc7LDYuiY4h2v99ZIc7l3ukf12G+k++bomPYYAM8M3
pbGLajLSk7oVqbQzpPf46/r54yVAXAPIiyAAetOf/Yu/IL0nX9ekZwKA4rEGELJIpuidv7/c24fm
CfQefX2B8rMA4KEO6szS2z87uzzZO0d6j7+uSe/vuM52fnRKcnQI0TSpHJ7sKS5wMXT+dGem25xO
sxw8UyYorEaRuqgmyoe0e3MCjWaSbgmUk+wOOzLyHxCGMtKhSOT6oABbBTZ8HV1WtusOvoza9hW/
VmrwlHz1DDSQF/8x0fJEV7SI8BqBqKHoo1RMtoJomHbS+AoRrmGKMkyj+lIMQHJ6c7+nQNksKFsJ
KJ8F5SsBNWdBzWVBO6cnx/NyvYF6TETcFbgUO4Kb7Pq1qGwtqHwtqOYrUY/PVGkpNHFdzS7gqjK2
XJNukXZwe9QV1UJWmPCyT8pSxwgvE+M3c7crKqSmsk3et48JNbipQaHk/fSy07446Jx9viBb4RjQ
CHx30vzf8Kvby8KgJy/4RERa9odP6FqCJEDSJ3uHl++wQ5PG+GqZVCofclEz78vfGqh76lBLhD4/
OAYtRi10FKRa2MRQy7XxrQsGmMmlb5fWHR32Ty86YFtu+yYng7wD9n5co+2E6aiob3UworwFF6rX
xSumAX3eugBon7T6oYhRK2mqQ313AJL8KWJO2ETmC2a6nJKckpibzLLI2LFtU8dSPgQgI0BJ+U+h
EPl4lzm/tWhzUUyok4TKbya/ufw2iQZM+6syBJGTo9MCa4QcnWyLy0VByHoItQXuuKK6xy3UmHSK
bB+WiUaEyXG0Jz1l04HSMLI8FjkcyZiFaQ/DU3fzbCy5yAYNUEmzUdBTRYAxVWizqbUmJP/OeLD4
JAcdevfj3n7r4y7+NP4tijzoBX14QMrtr7uj0TeKDhV4vXtzF1UP8LfO6Hh+fIhWjBsykloDuien
yDkWJNmSWfUJ3wYI0yv9r3Skd6j8nCPQOMSj+Mxk1OUVBQwAYHOLe54+iWNsJsbTFLBm1lkAJz3u
cGYtkIcWjodYrkkAd4/OP5EiuBOyGuUCj2ISjUYDnbu0Zo0TtPEAG7+IwVHg5FDCivtISAtTtSa4
CG51ss9/EBt6gt/olHo7S0bg9y5gWCCXH/erod/XaXRkbyhydDRRypljkb7oBiBVPY1wrxxl1ElR
HAo8HWklVE1E6p6QMGJuyNUkIbLhp542Cb28AQp/NZ4UlRgwumkkytMGZZFnOcpmq/3l+AwEpFNd
Zgqi0uQBp5pqb1U6MCTXHzlOpMUb6q/HHce73XFdj1v0trZYgFHR9tzbMoF00AEPb5c51i2pVjOg
+VoOv5UrXdvEZM5tecwiAOEwDBN1HV6q1Xt1XS159oJv2XikN9nCs4Hv5Qm8qNInCXWriSxcmEmC
F4RsmQ4A3WpJHT/DW4QEzFp1KTGFoLTE5NTyFsC8U6dNK0xKpzHDRJSYhJs2OdHGBKMNyEzxGU1j
isSb8ImGqQUwCWlgUZaYVmJXmBG1LavCNBeSJ2BiTSkxTc+JQ7fEtBLLTjzExBq1GOZI3Fd81gaa
mgBRNVkT8+BGRLfSvTMBz7W0qCIKQHu7yQagxhVwW5Av59i/yONZSCr7V/REwbf6WYz999mtTif7
IRU5LlyptbiDTyTtD3uiD2ONVDV1IEAFPL74pe1jPdR4HQ1NuVAOIhgImNEGg9Hx5MN/6pzqkD1Q
6oAPHUMPpELicb//rQrl4NF7bmuAVLE2rkDbmMyTdTw7JglBG1ko4QF0pGGuVPNYQBdDxrLHRt/S
nBRDEaVJGhHQvOAVKEdiWVazAQLaz7rZyfF5m2z1hv/a5ZxbTcvUMlkP0xhMHfc+0EsCcBbDdSbX
8Ugf2ld/3IdLqqOPt0U0zlEtPAL7rEDvVpJWI4NW1k/Q8/8ZvcbWcjLqR8JHdaBSKQpiw5JIi4TB
4FZnuJ20r7c3vdFbyAIE/R/LBVFUTs7+olP3cHoW9KTfL5iOkPdi2uf8E0ze9IxRamp5kEmncWi+
qCqVS2k6M9Pn+YDH2ow8ULZ0asSDRDrlt59n0u9xPCQWJtIpsweEdITzHhSqkapjE2uADoOnrUuf
XJR+mCKuz6NKgn4KSwJMZx6oChYMBHL7iTycKC8htdoK6HNqJXDODEABFEPHTHxeCtHF3lpZI0Qh
111MncKrKYCeGJBgVGNW2hUOLC2Pa+qfle55cnJwdnp0/F6qiOgdP4hFXGZD+kXq1Mcab1YASpwM
sZUgAnS31qlDYZr51cEVBSiY5A9wx6B/xJzr5LC0AqEf+zAX8L1Ap6/Sbk16rDYlba25OiSs5FAZ
zzDnKXKRj4ejcpzoakPBDpMY1vfLcbIAtPFAy9VVFshxd5DlyIt8FOZp3BUQm2oAnsaF2vqLVH6P
qstAYLkE+bdtgkGQ3kAL2IUfnSgv3pQzTbk5N4B6q88+cEEuwGJA9hXtK7hBr9UMxNebdzw8J+z8
pPWb2mlV5eOQkkNGDjk5NGEXM3xDB6BT0k+gx2mhW1dmEMy18me+mr+gwR7l71VsAegr2QrXKrbw
1WILG2yt/L1KfgxeaNAVFyuCvlpsHBGgxVc9D6g4V9ISzwzKdNZRZ2v/U2jcoHxRNEafQrMMal1j
3x+O8nKlMRaglWmtDtS93nHV5avzEq8+nv5lD2Bh5gV+Cb+l26VQ9fYtPAu7vx7Yg/XAHq4HtlXD
Mh1t6lmwoykwfVXjCbD368nwh/XA/nk9sH+pYXWm+c9ifVwh1skKsc5WWWvOVwn2ywpzebGeCtJe
D+zlemA/rQf283pgv6wH9q/rga1GSWKvGHh/XcAH6wI+XBdwqwJmDJBf3b8cTeDoKuDeryvbH9YF
/JcJ8Cry/3GlaCcrRTtbbVGfrxbul5XmtV23ksXry70Y4Cah2hYqbUIhruXhnv4c5hVayyZ33SDI
Qx+eBLHWDAwyM+OlhPYnyMUClqeZeCwRbi9R3kiQJx1fpuFgCFwMzhX5FLKvkWjO/goYi9lfZ4j6
ykwG5absZlr+AJhQfc+yMR4sxEgZ+kbNoUmaQR1QngLoK2rQ+2YCn9kQ5Mp8+2r0iFKKFOK1URAl
hWRtFKpFc0P+XCcdUdMRa6SThBM6SajoVJEhV0cjqWkkK6HBH9Dw6IRGczU0TKSBtqoJEStGfCtm
S0HX1h1e2q729y6I40OJkmGQi8FI9QqyQ8ySauX5CqVWZy6hLmZuUXPSvKlruzx/GOzp2Juo97RW
bR+iElhJKI32/utMcYTAFqMJFL0PKKsyDj9lqS6JCwdKH7UuDz68mk9TR446q4JzqPNyVJ2ZAX+W
yLT5uDB5LUy+lDDNR4VZ71oIk6p3rG8sQ4hRHSlbS6GuorYyOhHwqqBeU0FfUK+rdUUSjKSipWNi
ebw0wGWd7MG2EomuwOFKur7bZEuGxcMwOF/fwV1tUi8sj2stsR+fS/VRPONGq1xUKidXe7v0q9V3
cr2EvduiQNS0uBGxjq+uB1So5dmusxiZEOqAhqfuEq7AgA4+WhPkgkxip0HQy+ncqZwoTpCq1l5w
5B0Gsmwx1filCrDwqNTE+nqEY2lV9SFYMjmBWnoSDId6c4yVA5buEec2m17VicVIxrxulAEVAAt2
aR7oTLwu82/lFGg8wFDM0gk+KcoIqUEhZzZ50E+KRkMH7ygXAgBVMgjXM3GnddktSXKhVZafj9o+
ZuUWIlNnI2Aixr8dp2E3dNrxIbz9nEcXo9wqqz/GMbVoc4GWBbG4+oO01t0g2CJKz3SbGomDXlc6
OwCtkQohOopzjGp2C9861FUYBPDPA9bbB1B1uhiXLo1g70/RfVdOuCd+RbRhlTNqstUP/gX0uG3p
kEkh4idILx73AG2QZQtOU2eSBxgECyPBQUV4BUwsghgqtngFRJT8u04NIikdD3UEkvQg3tw3eTYD
1iUs8W4ehLiblcuCHN2I8qWyEBYBHWRV2jKGLXCrnGB08hjmMVTLDPOobz/pQaEungq8ZCuv9MrX
dCa0pPRbnEQr1dwS08LzEQyMKg+X1YRFAWlaocLeLQQB7qikHex6qnsVXFX7dTz0SjT0KJrFwjsK
ydeEgnirSQ5FOmcfq0QoML4ovLOAoezTIMUdd+QEwysa571gJC9bxvFhS7/mnZ+ey37ovL3Dp+pc
6XdFzvMsRD6VW1ic5gIizX7TYa8QOTq0pR61OPnL/qGEQJWR3jt0G74sArsRtQbjPoa8VBxCcO5C
VEWJe7YyFYAZKgi8puW596J64GqATHUeMCrdYtzMoupstfIk1d+zMjZmD2I150vVdwyEBREQD4H5
SO2Ju6MNGZGVWsafIV4nB83UKqH1O5Jq7B4PQNzx7LYTUH2Zoz+Cj4swynIxI7KB+FoH3C3rKr6Y
6NgttQFvxuEr4FBk08zpYImbKO3cRDE0zvY+RO2i5G1rAL1ZJOK35AO+eVC3sK3Wh4Pjd+RQt6U+
v4Sy1Kys4nfGNauemu1PT80+rmBq9gQ9FMSseLQE8gQaFB6KHyf2U2W6DcMQxm4VsXwwGPdDkWu1
1CfIxCIcq37xNSjYBdrboJxg40Kjin7oi8cRq5pXRYndVtJlDS1MqOb4j/lzTs0QnSWCF8VAednJ
K73eFhohYQb1gQHFG9zQ7oZmEnM1CJVTHK2+J5tpj6zByNsziEL9SFs8W2Fb/GWptpg9LM4nzSS/
rKAtPkHvbLm2+CjaYm1RZz73OJmHrUg/uGFZ4/lqazx/TY3nr6jx45kaDyoi6iRB70GFP64GS31N
UU4yi3GoljQW3Fr1wqL4fIPRcgGDiWTQweAfdW0QTzaZ80mTYXS+zayS5KrMrGcrk0nypEzO1iWT
ZGmZFAg2uPtxJfo4QeVO0P5yevAL6ettUCuiIqVkAqiZgi2UAl5k0BAh6hoGdCWfwOaww0yTRP0Y
1ywt4RGYnqmflIT9uB/Ii4hiF6knEKDBn6MR1zTiGRqePo0yxz+ujj4k+IpS5guVGaYwF0oBL5rP
lUBYl0A4XQLNxUrZeo5GUNMIZmgsUMpT+ehBacrjHiCuQxvmyjI6Ubscj4iptzEK8Ng03nhITFB0
34fDooZl3FwKuEGhQgC04QKJS9Oi1OH0tL0NJPZa77enpaRj7plAuh6zuFwqKmAMz+AIMDXnJxRi
WO3vWR6B2ocGyCEE1DXZjqkVe7OCn15uATPIIhxCvUQm1H/kUFp2jD1pwCP4QTHjZ1oa8lpKhJz/
giE0907bx76WulnENbGrIg6upyRjM26g1R2SdeU+/1D6YMGGRItC+e7vmC4EYU61ohM8pPMlTwH6
XIWVxkOlMq314yeA1IqgXw3O26DWBvGDm3EmCjjzptqdRA7PzzBA8dGnPQ3KBKgh/YDhF8cvrQJ9
wPDeaIR8xUp1w5PRNGCw/19xq7VWiafsMoYMBJujWV+elGU1mN2ghrxSwZ4NahkQMXeipxYihsR9
I8Yh5E/QYG6CUQMsiBokUwgKVq6ugLrTE0Ehyj2jYB1Vpz7t0nszeVNZ0YcBRiAAnU42yqGyoUob
ZQ8C9kTfyHF7j4CZFvV7rRA8wIFaHKgMedkA1xkwvY6zISZXpleWUAO/3W1k2AEc1Z/r+D4ACnsd
E6xkwpVMuJIJt2ZCpx+5/Vc2xtPMYlIfgUUOwBCclmd43cnTeUoHDx3ZQiBb00igt5EHP2JHmvZE
OckBx9AbFXVMxDKGD5awVnCodDAcj1CnIe2hCOCuPLRNnqm7I5+pb51xNOUR/jOwBhxleSQ4BE7Y
h8YTxMEQcqyEmnhascIrMPYs2EIBPp9ykl9uBiP1pCLtcWvGu+0J9fBoderhEzTl8nPQA8xyLR3b
NDYBPWsADrMWmSKh09Iwkb1oInjXfqDSwZvqDC7GvT/VPlvKbqmuLUoXUxydZag4C1GpWk/2VeQQ
D3A0ygaPtx8d0yvW0jmsq/MvF0c6Po+jrtmI/DuzwTjPIKQhhiIirAkh7SjT0tQ0Xnlh58p8K9La
7gl8Vz6YzzWfVt182HzzeSWxV7abh06XYnQD35dpNxtALG0YYkeDbGv/4KRpuzZ7h+FpiMUxWu0W
ZrV1P8Rgd+/ICVCu4qsCFpA0HV8wXzR9+hpGgkq5Ov/wN9QqgQsKsbkotAb5j+6DmmBckpZa2cUz
z76Akg2jgYiv2DXY5VstzQiyT7Fw8deoGPcLQJMFenDTvWi9B0xycgwtDX/stY/wz2X7LIJQ0otv
QZ8iBjPATv41GuW9K9dhngqFCzel+nNlcgOifengT6mptYKKHd7EgaXoUrkbQSvqCfZ6WuqllnIJ
aM7q0E6zqWCBB/cyRK5sBRCDVWvR6DkALXv5swA6k4p8FHWifiabDvXkNZlfLq1fCgr4reXpga+h
E1SQ9wucyI8yggFev4kA4rt+M2+3CWNlwFAyuMsDHTX9NBsYd1kvGIH+VoVIrNygWEMnt8eXnstA
IcJYdcdnyu6uedJib+iXxCo/q3A8AmV54aX3fuzjiYGZIo5958UeRIoN8jz4VuhwggDtKBgMgAcq
I9kEMfJDS1a0QQLgIh8PiJ67H6TANyepDs9OWzrpUP0mR20ofjnl3VZaPGj4RGn6OucsQmRTEJRy
XoXAmgUEJwY8ddgsXsivBqWsQcAQkGYYJc43mAr+BJGgmc1JEOVZAT+5a7sQwVNn4E6HHeX1CpaK
g3cElR4DvhxyKkZJimHPVOS0S6FVfwdJB889xygct1OOe3YDxxUVQHcc3YoRmJCUDzAoYDrdEB68
e6firuGBSui4pXa81cHY5NwGj2glrOE26DYJZQxmiW1KXYeATmxbvt2cI8hePkN5yUPKDQMCMgdd
GesRLpIMkhiTV6ujrMfDxRmS8x6yEqiPWYBVTx2xjEKEmF+ljUguHKo6iE93yhO3G3EoHy9O62DW
1vdZld6k6BYHDGRUwc546JciWBwCbWKY/zJzc4uowObSyFVuAX1SQIVUJT8czMWeXxx88rNTSc+v
PKjlx1ia7ceQq7MEFfLV+C1qVG+vVwMOfVgeKEMSgq8UNQQTyrKoe8NhTzqURzfyzBIc4nfEKNoB
04p6rWoRRhHl6XBU7MgtF4aS/GtLWJ/8j6M0yWiaREl3RRm9UMeJVwHsfbJTgLURSKh3pPaOGlZ/
BDqnTZeo0k9QAOBRBqqw8RdFA4Lxo2md9NTfVdAZF7mild0Vxl0vGBjhuGugKIMc1SlFeDVSQ1oT
UgWakQ0DY95CndzlFC7gfzQHGimiEQP+GuIejyfAMLkGNkDFjmHIoeubeorKlxGW3eCjDzFl9bgC
KEQVQbB6kuE00lA58G++QsJ8982D+eybxxMnMAXooOVytxhBboMe6Nr4Zg5K8p14jFJHUaqW7o0U
RA8pMVGYG0ABMqDUNCONq5T3hVHV9PE4hZ6Dhl4SM9szApYkhtXksRHZ3DKCKA4T5iVmHDvPlN2k
KNRha+y7vPh+fHp09n2iVPz0JffmsaL7x8Oy+8eb/6/Sq9RPj/IG+4V8/rh3StpqPQznYxAwWgwg
Bq4QQU7+0JV/wz9FwKaAKZyIbnBx5o8a+HCUDIboLMppTviNHAZ3kK12g5ykUEkAPQ7uRL9c89GF
VZOlSfEOJs7i/bSIxhmYA7BI9IGggiwFo92Bld3+eKihu8BbaNrwieb7cu/DBbKg1F28AbruM0ni
m6iXQnZ9qa9dtH751GpfQlddEsNBktt2Y+qfshY77hyouYCNSlpNU2lQCEZEHgV3Eg5hvoRbUiAA
OzravwL+CIyZ1R6JciFZqriXf8Vp9uTy4q9zNOx1yOVp0L2Dv5ROi7RBGxwwrGfSwnaIK9t0zOuq
ClVgpbZCfBAnKIYiVloNHtXUu2tgVZxHXbzGPJ6JUI7to2ySAU91d+BNGvQAiXDGGauW+BrzmC83
HrVeOFpMadPWTlfMwI+ktbiGOs2A2swC8wfojozyo85BHA+ra6KZHuuNYnJyV9Y53fT4h0QYSh83
0FVcPJcamxkeuGhbFoXGcDbofZPblctNsR/Pzs73oWlViwm/nod67Fjy84OZrUbjGCydEP2jQFql
s0BjCZxRtDqc06P2ndVgJASrU7nzbiHsfBg1cPtTDILzGAjuc7ULCtpuk7TLVfo5BOc5BHlB8nKE
Cwq5E7mBHthflVEPLUc7g6TYkW/uFH0oIHycFQJ0niQRMvrJuICx/xm6va9FP0YTTrcLdKoNExop
qjwRIIBjuw8PwByukfIjSl7mCgQkBvEwg0Y7yRL8uxVf00LsNPB9bTyRP8W8+0gl+Fqdrf7PwUh2
7G+JDMyvlnBIFAwDebxaCve2lL/KxI8pHcC7dWBgTpEKmx73S8wr23YdWZRiSFAZl7ICRlmz2XAb
zHUbnDo4ZKK+a9CGZfOmZWGfPpOFh+CA7FFAxt/EwtFhyP7EGsx2qZGRdgDOViICMkQeFOgD0KfL
A+CUNsGK+k4DnAH4MBeROuxiF+sxpWSsxxkkni3k2sr4Ne3FUZCDdxhtyP9+w7hZbtMX8euwe9k2
YdyVuEzitgYrgFX9/3Y9Di8BrapdWQe+DSLZumXMFYtqQnTFCDtdXLzyyRvfZ2+AR/DzASUf3616
ZFg6wPD/DzjjU7B3TgzecQShvuMNSrn3nX7//jUAU2qcdWH1+EK8q1yL8HVc05gHfGxZc1wMBerh
Avcwk6onNb25tNP6JpqFH+EGnKpw9Ov8R+TZk5xhWmTtGfg+joHYdcKqOZ7TVG3cKDDmV9291sOm
7GchNk4GL1TFpHQ/GI7/PU5BtC/nRuR5lte5sb7HWZ9W9mEgWCEeghyp69Gk6bih/b0fAHNy4joQ
0ix9TQ4CeQ7LyfyTxvtslHVAwDFO7n4oR5PfOOCoF2DAwXAbx9i+5VZ4KJ6xwOe5wIMYQYMQCWq/
AvQJPGEDqypSTcZ4lB+wq4afrDxNqw+Pg65496BorYUtI9MGkUFmfA3S0ZRpxCZoCKi3fJet/UdQ
JeHTtq3KBm2k8S5ldOrDDn4Ebw8lgmqSz5q8wRyvwRqM+Y5jmj+CFzLFi91MREJF07As4RmWGTpG
M7ZCo+kEjmsLMxI2xRXBDipyMK7uoux+SGkaaKAyQFOcFGhtp1o7fcMgsShASN/IL1l7YoaLemhW
Op+Y9v6dFfqs8JKV1sXFdww+BLP+N4iCw46KIIO09PGWzNp0vpqebQaBbRp2EFtY+qCs2R414LYb
0CCInYD/OIbGYixI4lq2l5jUcMMm1MpYJEYYB4FBOWU8iUM7aFo/mCXbTZosFI5BYwtYimxmBDYX
RuI2PWpGJm+6ydpZUjbc2bq3+yewXcBfZUVO4+q6Ou8Jili6PO32YPC4N25GIfk35qjYpbt/+jfd
ZvDNqsR4p0yo8j3bgfbB2pvDs93pBOy5BMF9mQBz/VSvMX8u4+W+DzxCSJ9xH8cu6ZlJ2NHRUQs1
jDDtNuQJiCmGVsr5v0vbQ2N5aCwRhC7wfNA1gPOFwC32oqIxq8ip+Y8MIwOahrBNL3Ejr+nw5vei
30E/iGtyCnpZEQVKMas9B4p+eea6nKSDlln4ZSgW6eKUCaXRqYWJOTataTaLMlJPOYWt10/k7HPy
eOev7QP1qwGBiIxE6oVzuPbjuJ+kKnUE1sdz1FtLW0Ah1zAEaMd6KFetPIeOl6NEMKrQjfTJJ1le
Z3sayPbpy2qfwyGaSXpvgA3k+/eb0Wh4TY4kU2Qk46UWAurGJcabK4bAh6gUYBlp6uqazBJk9HHO
kd8A4rfK1GLK+c4ovRfKSFPq+RympWHMW86XJhdf83QZepgfUWAZAkZCuike55oNZo5L3650WABp
zJFwHpaLNEDU5eJa3yuTOujlQTqoXPcOfR7aTa9pu7bX9L7Lmc8YSk01KnL1aKprqY7PF5b3ckbl
M1z3wl7ADELKLFsYnmXiaBtYRuAltpGYpslskyeCNmdJ8Je7A8boKmsg1zD9LrPKvcYFQmR64eH2
p2L5CTl/GtwOwDY5M6yig3iehuOR8Mu6aUgDunFz118zheHd6glEQ5BfJxFQI3O15W2WhklfbgC8
ucoGYJqPd8Efpfttte9P8oLLc5X5A+eXcg2JN0yHz+8TU9C292JmZgd3qVwpI3KWB120JVjNOHHs
kMVhmLw2s47Pm1McgcoyquwEkH4ItAGWM2lkm8mK63P+YkLTfDSh/YQ1Do14YFYapP9ReTG9BufN
hssabBsykQeoWfFZMPPl8fmHCtT17Zfl0nxELDpVg9nm6uq55/OXZceaK1RugKKjQXGFbdnzLf6i
nsC5J0cFZaTu4EudOgZNaXaFJSBwAFDbUqvhgARjuMTQo8opddgbg7ICFSq0qOVyGjeZaz5UL1ZI
Sykl81luvihkbq+oWDn1Tcs3m5ot2nGgRdMGa3BqP96kLaZTSUzmEuBkJPnfQyqNzyeNSPovXfix
G3jNpMkjFruvyVzJjvdie+YP2rNl+fTlYjA5eyYbpudGLBRNwaLXdkvW04NaC0dt2R/2hIBYptLC
DFzJAzsx7KNRv60s/F9VFAXgAy0UkFTFyYSJ8Lch8DIjBhhfLA0xWM+VZmJadmxFMbPi14rB8W37
xdJkzYfF6YIe/mJCdzYdtA2bwXCg2TZeHO1sb3Y2CtZsjLxwxaAaOQBYXldmZl8qKtuEgbJKrFsS
jpMEPRYWQDzeOZPWoiqWlw9CzRYBqFiCaZxfxw2Uu0EJXQRImizy8WCnyA25m37H85rMtBLPaFrN
xLAtkxlREnigeUd2EHKHcS/cEcy0m6EVGK4T24ZlYjgQzxWG03SswHWSyBaicXcTV5tiJMeE3pex
Ek3YruLL2N0QWM7HON6J/B74i3GP+2RErOKdrz0rW0z6JxTbyhJQn3kB2ZGmQMgRBBfJ3i2SgyWZ
9gLXosJzDWEGoWFRzozAipjhmFEYcYfHSejqy59ZpiOLgLEm20wpaGdoLaXwGYIdH3zYOz71F0m1
ZFZ1K5yvsgO/tqx3lVbtD5Jiiy2UuSXZ1C2RVbEZxuLOl74sEYyOu8xxXReelEGYOgV057s24zvw
DwKuB10wtM+jPxoPHGPujyA8TwlkIJBPKqSKnk8qgnOg1uNBxtGs6xMxE/3cbC6SFnUDAy103iKp
JqGvGdm695yOyY0gTN9pQNwL2GsbxjPbgKV05sKsU64BBvyUsdHv71C2YZDnKbTKKvKBBoRMqdIz
+c3JH+Rfm/zxmeSQhcZYyv0qiOPrrdK+vgP3dygUqQtBHsBjCmNHEWmLh7twwDyq9fiOeqO6A75C
b7MBKjRvV0Ry5tksMUPpT0iz9FAQ8TzZaQ39/G/vLz7tg/vMpxAa7Xhbhbb5iMtNEC7YaZi8YTm/
i/O+aYLabzBaSCMb6o5vlGtDKCrdYI6M+7gLpioGv9qmgDfJFvTuOZ7/8I4sBlIMx3maQX6V6LYV
3CBDe5b8fSeiRSDtVfBlr54vtgq+2HJ8eY/XUsXP4xW14jZ+uXWsktjL7WKe2jPS0pX4cxhLCZzR
ZwawbgBRDXHRZiRgrL1NIdoZ0KI7dBGQJQdrfZ1CV+9+lkuYQhav1fh0WSZb9CmNj080vneLcL/J
gtLXqhZT0NdeXLqMr764Zua6mBXpA5mABgNSkBaPYhk4WFyQiIxXBgdgmVFHxTCB3/ewB0Me6LY9
WQvHA5bzrKwvS9CsLAiAEqQDzAcjtbfiUtmoIbOBmAeYWawXg+5YFLjjCOMc9kHpidXJR1kZ8qkO
glio3UiQBPV54WvB3kXD8Y6KDO8zdfVVpOAP6lN1Bbo2/Bzcg+zwFJ00TgOoyYzAMBTJvxEasRiw
IarLTmHB2+VPE3+OikjurITf+lzRnSBJ0gHq+eNBIUZPp3xqkX4CKENrGNQzppO3Di8JlIxt2dfl
4jZG44hBEBiRJkiheH7zm98QtZ3mPqoWwMuXgHoXvSWqibR6bQM83kcdeNLBEGARNl2MLYzK8e4b
dCfZ/fTp+HA3MSOWJI5pCItR7Acco2mHjuFwl7vNSHiR58H0uzphahfPkSIw1vbwLDfwSBYjAhlU
v2ZeerNNyrXKYvfNmw1mX01QOlgiPsHvQdAHEUz6zXvUY0IQiPwqX2/knW97x3eb5Lsvy03y3Q+G
BD44glguud3fIFfQo2A/pS3O8v2GwU6joPjZ5IkWgs3KUwoJwMrTzXCDjoWG7QiVGrhvmA1qoGXA
sWYuTT4k4McwdY2XTz6G1BvMpFx96KC+gp2QdCzZIDcwcsQdCBEmn43UmZutj0dGFf1OPcJQWhuR
WUleRjZGTeXnE5ayWYT/OcYC3bS4Zlo3mfpA3APb8TbbuuOs0x0P/pNCfCf5h2QQVw9CLoso4DFE
+ITfni24SzfB40xFO/vLWlm4j2q6opeUPUKYDuAAcAzPFANfQxmXA8JSU/ygQ1XxH7h03IDS9Qpo
Ee5cN5jmjkfxZrmrjgSs5KZqVWC5P5otSFmyNshQ2XyPcRU6Z20wkL6RTvXrHfz12Pncumgfn50i
T7zhbJajv7ZOp/kpR/fN8vT5+OKys7/XbgFH9D6i8Nl0PWqdXl78rWTHjcSm2fnwt/PWxcHex4+d
8733lZgY5Zvm66i1d/npoiWb26/BGiZ3ScEbXVGGyPw+DOC6G6d5JwjB2tqxuuFmK9v5XqtzcnbY
Qpa/iTVPDl5gBqJL7B22LpCVUg/cLDvj0hsY7uNzgnfBFncfv9soW+1P7fPW6WHnYO/0oPVR1n62
UYY+fO60L/eg0/p49kWyk9je5vus873Dw4vO2dFRu3Upufrx7KC60gEKUUfGYPGrPfoQK/eH8QIf
eb52R8ZkVZ/NDCzwUbqTFIsKevGDC6YSRkl9w8KQnOCp2fkMJ2xTnIhBPFtBfrAGO+FEDDBmgPxs
RunAz5D3Z9qM6tnmPmvlSMeggw2qdP+Yt03507XJ+OOPKtEXbASiX5rZ4BexGScn+9sEVaQCGOTI
q7raRj9QATGifwpmZ1jbIEdoqpywtVE7HtSxKIPgKgAhkeZr3/bkvBH0DiMm3wiXTxumbMY3a5eS
938ag6wyzBei25fBB0kVsUh9nulKyNYwGSjLB/kd/G1armol61WSn88NsNSB6DjDETyC608ED8qT
4b4mzP5OsqoO+gptN4nW3S/OK4ho+puxbxGquKn5AVGX18J1Ns8fA/5qfmb4S9zmuvlbsP6Wi17T
9Tew3Lr+ssjxpusvPJT1l3HONl+BH+uxGN2wJf2lVhVY7u+UAMtmZdlxuNlq8YTx3/UC21XGf+SX
OmyTTM7XXXlUAu+D7iMLvq6tdd0VM3UXn8rK69HNV92Xagky+ztkdVJJ7A1XEiV/FJwUNykDpWJA
OrwxJfFa4MLbpIznOa7ciuByiuPmLMfNn4rjapveDMfBLMfBRjge5NMm2ylnO5qUnx2TT9+Gz5Rm
lMx9tgkjEmprM83y6eyYk+zQ6dvRTHaquz9/dihlKjucvZydiE2y42w0P0+OAdhUVCbr9hFOjwFJ
PDsGiLBUYP4XxgARSlVhMgZYP4X+ONU/4UwbB4Lotu6fknimf0rWvBCgPX8EBpXhTOVCDOIJx2s3
my3O5LAL1bpmkv8Ay6uGUWXjE3E8VBc6AAi/IBkSQd77hq1HHiwGAWng7ybYUjIqbT3lOkYVCRsE
JcGVD/qcq9z/ELPoqfeHXdLHYG0bNe/pMD7tUrhRq9rizA7/h7jdaAVWsZg7Eqsz5H14AYiaHELT
TxnLN9VtYlhZpYVNjefBNunL39xu2sL5afhqVnwxk3obLVQNpchDpajWibwNFbCc/3Zw/ttRDXeT
46IaEm+ggebIRFm+d8oVsPb7ARnCJaMb6RFnN59UsRsTGMaHOYSa2wRL8Ck3nWym2NTnoR0TlL4m
t+mGLJnqEwwyCJUJzW+aK4qb1DYz+sMH+4JNFhRuyXggEsbcjXlvqw/2kw/Y4p5jb2rxbk5L78k9
XcVNkItYdphVZ06rYcdNaGRvglPF1M/Si+OewZ+OjdycNkhM6wnmJpjrBeMB1K67fiUh2II0uh/B
sBYmnhsG4Sa1l1yAdlU8MUPWifxrMWc1cfxKio8HvnHMpcOLPYmoG15MiyWd8GLPAakAM0YA1HFv
69bbuXIfgsUKJwg7aZFBMHMDTycuDAePLjJsizPagAdv3y1AUJJZhso7FV7lrHN4fNE6uFwkk2pb
k4orRH4FofomMYtu0mREoM4y13Fst8mv4TEgzD6/4rzpMWehOjMTcmuZ7OrE0aIyjtYibNVxtBZJ
tQz/ZdQpqFlbdCrq1E6wZbCFWJ4LO4VlsUjQKf66oFOK3DykZuwhBmCuQ+fCHbH5gC4OfTHc0dIk
H3m2QLij/1J3Jr1Ow1AU/iuIDbAIJLYzVWKBmMQCiVlCQkLNxDyPC348cfsgaWjrcz1cF4kFvMrH
x7FfYl/S7wgzmVepzOuNGOnRE8J126MUexdfVQrbW/9hRfDWD1lCbv2HhewBKTAWBiIbzgk3imTf
B9QQHsvhm3E2hxr+uEQZgaVpGPQCTYCZXBN8FuABAY/EDVqSNAvTI5HSynKo6IIjMBtD2ERnxJdN
Z7SkFEee8o3xKf+nw6Uo5aH7D9Yws8AaWne58xkZayjzPZfvyuZHV7YHuVF0S6G88v3VcCUdn2Nn
7HYNe+ISO0vSTsZW89ie8+PVOTf+7Py5Uf5Vt8rOafVulZ7bcIj0Ler7l/Hw0L600N5Ok1/5h/0m
O1yLZpfTc3cfPT6X5WmKNr/WdbPWXz/8SWbR56O/78kP/WoY5n9Q9Wkr9eqj3jK90TXzs87c1c90
ZrkyX1bnf44fvtukyZy/er4eqmwYlEyaWsgklUWWVG23TmRfy3pYpyIrxHnn7l4Nf/qTTd7kfd0n
6ZDKpOub8ebcyiYp1llWKNGposid+5sSczZ9IpE5bn2uv34d3/jsu+TdutUdpqtRRt5cpdlq/Htx
c6nuOURIpmdBeUOyiVD7cq7r3yb6P0m3dvXHejFtfvRn/U6ffOlnoTp/fn5+Z4TPZqvm2fmrz6B1
8+z8eZL8q+GvPrROiPrTqtj2Aa0LSh/zVaA7+HcdjGrA3Vvf8KZeet1t32nOyLtXX9pvH7592dwK
ASEvOYR0cUIOoRYvcaR0TWk6EaUprSCiNMVzllLa/jFNGypmuiLvFK7/2cHsKilzVJbIivGTt8/f
bnLZf93Ul+RPMs3mDYlxnGfvb7zrv35+1X45d3H7wD83bmZvrNq0ksV66IdSOmWriGyVVqs0A8Mx
DMExIjsa896u31/4ujmk6aPANpNy/PxK93ksbF5+dPPuvdW5e5o1uc0j7Pr3r2aXthy1oWJ8rTxV
ZLY9lvuOCqrMpUVFxqy4tyJDEQArMmYhy9NaI6u0EWuViLxQiWr086NK86SuRNOlTd4J1eAFgVlF
hubeRykAHopFKcA8AkvT6GkZu/47BZk4swAPKMgsTAUZSivLoaILDqt0BLOJzogvmy4FmTP1IwWZ
ll6Q2YoSNjlKUdpOOzNKqwObHEwCi+c4k7CN5zhrDhaVvMRzuHVpjiE4UMfadluHjufYdCNTtxgM
SISWfmCWBGJDIBHfvoB4DkjExlfmkphh/O3w2Zn592LZm1M8B6Jhc8HFoWcOlvpgFrF8vuLbAHSr
fNilp7wH1DI178HsPuZE4Rsh2p46+HShxv1PFymeYysnATl/8Rz4EOB4DhvJD+/7pUDlO57jTPbE
4jmMrpbxHMuWti+O/mm+fXE0TysRIJ6Dz+NJxnOwD98unuOF7D8+fh7TNxTPwe7KNp7jzafH6sP6
1K6nPtTHvZ4x4jnYB7knniOiG6t4Dj6Xy3iO07tYQDwHu08onmMrW7u5qgmuLOI5+Dwu4jmCWqAE
YOyL5wh7gSju/o3niOsOi+dgsEWJ54hmZzeeI64jYzxHBE974jlir6NFPEdsO1A8RwRfrvEcESwv
4jnimvknniOuHSSeI4KtffEcUQ0B8RwRXP0Tz8FvB4rnCOyFEM/B4GQZz8E8MYR4Di4n5ngONifG
eA4uJ+Z4DgYnxHiOAI6Qgo5jPAe7a5d4johmd6xFdLQ3niOKH4d4DkaXeDwHu6kZuTTqRDLEc7CP
xkjC+yeeg/eZa47nmC715t8dfwHuWDyH9rOIDwlegiOuX6/xHOzDQeI52E19JMdzqG6Iuyxs4jnY
TQaN52AfzWiJFs+h8siLxDmeI75jajxHfMfUeA4+xyzxHKcwHI/xHKcwHJ/xHHzj4YjnYB+N+Rmw
iOdQ6iT2jw7xHOyObeI5TsLkMp6DofIKFFWiH8RJ8Rx8tgjpBruvyv1HZufxHFHLe4jx+SuFUatq
dLMf/yO3URewTTwHn0lTPEdf9lHWpSGeQxSViuIL3xTtxHMo1UaaYCCeg9GMdTwHn0c4noPP0hTP
EWfaKPEc3K6weA5WV/peEHOiwHgObltgPAefLat4jihbCSSeg8+N/s7gydlYxnNM+4SwL9RZxHP0
4ypaR1lHhniOrZTKQLiH2Mu/FzT+vVuXO59R+PebbvN03zeIc5naxIKYFRESGWQJIZEdFrL/dj7M
JIBIWDtseJJ9HxAseCwWbHjzCCxNw5QBaALM2ITgswAPyBZFdnQEE4qM0spyqOiCIzC+QthEZ8SX
TWcUWZ4GQJHlKeWB9A9TS1gwtay73PmMzNTKzTh3geDcA4uBbHhhhrfj2ttp8is/Y8OLGRsebT6x
4YUNG36rTmfDC4gNb1DXOsfZ8Hnb1W3byKQvGpGsy6pP6lqIpM6zRirVpGWtzjt3N2PDy27IVVYm
fdPkiRBlkVRVliZZVvTFum7yonXvz4kNb9PncTZ8mS7VWdnw4iAbXhxgwwszGx5ZNyMJnSQ/Y8ND
64So75UNLyzY8GU6qgF374kNLxA2/FaIhm+H2fA24jAbfite4QjSmtJ0IpBSWkEEUornLKW0/WOa
NlR704udgpENPyrlqzQoG15oNrxsurRWnarH24wr/3z8YzacV9m5se3XDXH9mmbGX35y9/L2v3I+
vD/3YNX2vRhkO4w3ntrdkcj2btDLsrKrgxxThOogoCVzHeSYkP0ZqRVlIWRWJVlbiUSJfkiqVK6T
rGylrEupUlXhx/BZHYTm3scBHB6K3QE8zPVHz6jY9d8pg8SZBXhAQWZhKoNQWlkOFV1wcH0hjE10
RnzZdCyDaPUjZZDOqgyiRfGthaopbaf9EKUVsLXQEo5Edi3hQGQfm6P/neGLyO7QpZk8fbB6pLsV
DET2sRvpTGQ3i5CB11rSmchuFvHtCyOym0VsfCkXSLrxt8NnZ+bfi2VvrkR2o4bNBS8OPXNg0LcW
8f98xbcB8Fb5oEtPiG/UsgXi+4QnCt8IEffUoacLNe5/uqhEdlDOK5Ed7JNCZCdLfnjfLwVqY2mk
ULW/MLpyJUUABryWPT0GvHZlx4DXLR3eTtLNz95OKrK0DMOAZ/J4qgx43uHbMeBfP/3+8dWdmL5R
BjyvK1sGfPbk/aNOnNr11GWEuNczEgOed5B7GPAR3dgy4JlcLhnwp3exMAY8r0+UAc/ryo4Bz+Rx
wYAPaoFCWTcz4GO6MzPged3BDPjQtogM+Dh2dhnwcR0hDHhuTwADntvSggEf2w7KgOf25YEBz215
wYCPa+YfBnxcOyADntuWkQHPbQhjwHO7+ocBz28HZcCH9EJjwId2AjDgQ1vAGfAsTiAGPI8ThAHP
4gRiwId2QmfA+3aEFHTcGfC8rh0Z8LHM7liL6MjMgOfy48aA53JJYsDzmtr8/GQKsqEZ8LyjMeKW
lgz4oqtC3xfNjPV06+bMzy4DPmc/3B5nwOdix1+f56H9EdevbwY873BABjyvqY90BnwTebthyYDn
NRmaAc87mtESkQEvIy8SHwz4yI4tGPCRHVsw4JkcczHgow/HLwM++nA8M+CZxsPEgOcdjfkZsGTA
i5PYP7ox4HkdWzLg45sEGPDMJsepjn8QpzLgmWwRENq7r8r9R2bnDPio5T3E+PyVwqhVNbrZj/+R
26gL2JIBz2TSwIAvetH1J+NrYsBnRZVF/Q8b86ZolwEvVKQJxhjwXGZcGPBMHikMeCZLEwM+zrQR
GfCsrmAGPJ8rfS+IOVE4A57VFs6AZ7Jlw4BvyhhOQQY8kxv9ncGTs7FkwE/7hCj7l2MM+KZtijzm
lvQIA15LKRAnIvcy4CWZAe/Q5c5nRAZ8uVLC/JXjKvf5lWMl931LupAys6WtHVYEaWuQJYS2dljI
nkAAcxcg2tcOdZ5k3wfoCx6LHXU+zATAJAVoAsxoiOCzAA/IFrd2dAQTbo3SynKo6IIjcMxC2ERn
xJdNZ9yakgFwa0pSHoGZWD51Lbhh1l3ufEbmhqnCiH+VICg+pBhInZcQFh7U3k6TX/kZdV7OqPNo
84k6Ly2p81qdTJ2XKHX+mLrWOU6dH1LVyWHok07pdI6uWCeyzNdJKbuyy9pCDk123rm7iTpf10Mu
laqToevLpBFdk0hRd0naFHm2FkPVy8a5P1fqPLnP49T54tY/6pzUeXmQOi8PUOelmTqPrJuRsU6S
n1HnoXVC1PdKnZcW1Pni1qgG3L0n6rwEqfNaiASGp1DnyeIU6vwoXuOY1ZrSdKKsUlpBlFWK5yyl
tP1jmjZUyHSekncKe6nz1SqVxrJBKQp/ZYOxR7V3P1sqaVc2OKYIlQ1AS+aywTEh+yPFWq77Nm+a
8abcZ4kq2vGulrdNUuW9zEQnZJdl+Kl1VjagufdxXoWHYndeDXP90SMddv13qgZxZgEeUJBZmKoG
lFaWQ0UXHHwcD2MTnRFfNh2rBlr9SNWgt6oaaFH8SZwrSttp+0BpBTyJtYQjpF1LOEDax+Y5WPnw
BWl36NIMoz5YbNHdFgyQ9rGb0hnSbhYhM7C1pDOk3Szi2xcGaTeL2PiqXLjpxt8On52Zfy+WvblC
2o0aFhc8Sw89c2D2txbx/3zFtwHoVvmwS0/Ub9SyBfX7hCcK3wjR9tTBpws17n+6qJB2UM4rpB3s
kwJpJ0t+eN8vBcxvTJSq8ln6yMoAkHYte3qQdu3KDtKuWzq8PqSbn70+VKZFFQbSzuTxVCHt1fjH
bfgVYfh2kHb58/6bb7dj+kYh7byubCHtw/rhjfL5qV1PXUaIez0jQdp5B7kH0h7RjS2kncnlEtJ+
ehcLg7Tz+kQh7byu7CDtTB4XkPagFigYdDOkPaY7M6Sd1x0MaQ9tiwhpj2NnF9Ie1xECaef2BEDa
uS0tIO2x7fym7sp73Cai+FexxB+ARJa5PY44BeW+QYCEUDS+2oU2WTZpC4gPz3s+YidNmrHfxIZt
dzO5fu+YN9eb8c++JO1T6xWApH1qlY9I2udV5gWS9nnV8SRpn1qtiyTtUyvkR9I+tVYvkLRPr44v
Sfs1dRlG0n5tTTxI2q+tgj9J+ySaeJG0T6OJD0n7JJp4kbRfW5PhJO2hNfJJ6NBJ2qfVmkjSPpey
B6rNqNFlkvap9KGRtE+l5SCS9mmVql7/zyRkr03SPq01F/mQjknabayv3S9eJkHvSNpRn0MS9KKc
Xz+O+nX69PUry6svvgfGb2iS9mnN8SRpn1apu8Ek7UarecNiJEn7tEpem6R9WmtApWEk7TqfOUhC
kLTPrPEIkvaZNR5B0j6RxlORtM9uTliS9tnNCUzSPpE9E5G0T2vN5THgiKRdZ/+J+SONpH1ajUeS
tM+vpAdJ+8RKQlXPvxAfStI+kVoDOK4Pj8r9j5Ttk7TPmt7zUbx/pHDWrNpwZe/+R9rOGsAjSdon
UvICSbssRJr9Z/TqSNqZkdzNoZf/pOiApF1nfKYK9iNpn0oZCkn7RDoOIWmfSKWOpH2eahtI0j6p
Vt4k7dNphX3BnBXlT9I+qVr+JO0TqTWKpN3MoaknSftE2uA1g/85NY5J2rt5QjKHci8jaS9UrNI5
p6Q1SbvXCrmETyFmHL12QamovboXeBPLPDL75xC1eRRHi7xInz6EazmjN3dP7uDylteB6hEva89r
rpBskxeROFIl8WQ2USf54tVgvniCyIP3BvLF26Uwl69+tkGvfvaRmLCgEpNTl4jHIhFjqebOI3pS
zXmp5EM1dx5oPP2CN+mEF9XZAUP9IPVDsJx52zKOof46FeBNI+FVAZd5Ma5eC94GjeWae6kFHdfc
kG+NNNU34AaQuF1DTd8aCaUmmWtOJFfgmhODBt0XSNPUCNK00SIP3htMmiYvk8orT1L5a4J5MtQr
Lwp5T+y6msLC9xjqVY+h3vfrHUO9GslQD+jDGeqVL0P9y9AR5+UM9a4QxhnBF0rJeJEymSyky90i
M8zEptC5AfZ2sriOoV4rJUsuxCJTZbLIbWkXNlfZQmut4iJO4J8gy6My1A+WeYGh/sNj9EkZ6tVZ
hnp1hqFeXWao94kb4GMfBN9jqPeKk4H4QRnq1RiG+g8BzaP37hjqlSdDPQANI5EfwlA/GHwIQz2A
D2B7T4Z8taOYHfItL4rZITpzNuS7rdLDTPVSWl1mqFd+DPXqMkO94AbeebyCWQaQFv7zAF0CVGr3
hcsj3A2tRuQ6ZwR0ave32TZ6rR7wkWzvw2WWxMIY43huEmLWQSRLyTuFsTl/+f53n8Nj70MyWWo7
uFfud8brzQKZbHvdskZROZh4v/kr+nbzfVTsHmGHW29sRt9gZ1y99Mdme3VV9npU2ZA0Nk4qUSyY
i/VCcV4unCzlwuWJS7jUzDo9sUqJlNyWVi4SadOFsmm6cMzkC/gVcc4Tl3N7dZXqrv6wat5+b108
h8dmjM33zzOI5l2BNVstzN6uGDkWj3ZpnXKCZdPb7/3B3uDwl3df/oM1X2zs3gDB4D2mvMrbh0ug
HFzcw3tv97/AX/YF92fzBdb+HDvJv+/+6KOPHmDfnd4+vNhzD4QeNuYMB/cdc4RYcrXkvAe+3t3l
v2htORKPuvWruyrNUi3mMcH9Jrz/JmQPy93NDw++/AYuREdmyC3meiCC17e9ztEupTh9LxGYe2BP
/QvK+OHnb3giUd8Pv3y/2igcAyE4J0NIQYXgOh4NwWoIpagIXIz3ZgNBtkIoMgS34yHaGtXkoLDj
fcnJFdpGlbRUiIQc2oZRETgjVyiPydVhyXYIQvtq7RAJGSImx7YQ5LjihHbeakEOTUHWgceaChFg
8GDk0AwQEnRfKronpCFDcLohCVmLhFwhltxEFb23YmRHcEVuYDyh16gld7uKXB9c0YOCCpCQ5yQB
Wqih9xOMHFVC0DtdSfamJrsiJuvAE3JYxeTmpclhJegxwRndmZzsCkHvMgV9xq3Jo4clVykXdAj6
AkgwsisEJ2vBNT2wyD2eoM8phCBHtyEjCPqMW5Grw5JjQtD7K0VuYYrsSs7JUWXIjqAnGMieFPTZ
gCBPMTkjK0FPDXBJtyOhN1C6HZreOMhKiBCrODICPU0iAgSFJhsiyf0EN3Q7ZIAhcHyFsEYLwhyx
hYjJWhAW5g2CILTSFoLuCkLSiZG3LBg5JcvIuwWM3jwaCELraM0gBxWX4ztuRk61tPVJyFA0EHQE
ymDO6EuwtkbpCGQzCKNP60t6G0/IjYOQcmrNIOtgyZVByLIwcoqekWeYrSfZ+GluA0E4QLFXgg5B
N4Ow7GD0tEADQdirYPQkZANB2IhrlSCkNxoIwpZLg0DYK2fktAAjbzEz+g5YqwS9fdEbByUZ20AQ
jh0w+jKyhdABtCD3E5SVaANB2LZpEOiH1ig7xIy+LdpC0Cd3hMR2qwRhC2sPQe8qOF0LQw+LAMvA
mAxB2Oxm9ONajH7CiNFPL7QQhCx9C0FPDnD6Mo6y595CBBjTCZn6FiJA3irAPC+mdxeEPQdG37pv
IAhnShh5O47Rd91bCE5up4KeeOKC3sjoAyrlZH8LQdi4aA0h7Joz8hlsRj4mxMKl+wmHGhl9h5MF
u9KCcr0Ho1+40kDQdz444SAEIx9e3o8g5PqgXB/QQBjyMCbomwac3mGRuysRYlpBNkPQ54qUaxQY
/Sxeq0WA9Cp9Y4/T8+WccPkLo5/cYvTz/Yx8YGnf3dAji3DgldEv+mDkK5r29UG2gzDbFPRruwT9
[Base64-encoded attachment data omitted — no readable content.]
AEThcrxzF+FBihJ4E4iQGUlbkw9FUeM9pQgPVhQlAMRpykPxKTbCT9oRmxL8iZrLstgA0ddDpB/B
cq3ypHEYWMjs+N5+ER5sIbMsgVA+pJFUe0JRCFsoi/CgRVEAQnTct5I2YgdEEWz8sGoRHqQogTeB
ELL1Xk3wVH+R/tnDfdWjAKb4JFWZNv6Ud/OEUwrhgz17w/wtqxTxKVS0ISL4ABCWMxbhQT8AHkDI
+BQaXnCyxQgxfsiqCA9WFGESiJAZ60htaigKYX9NER6sKJIBCBUz42kLTaEoZvxOrCI8WFEMTyA0
azknQUBRCMsZi/CgRdEAQnci8NHOFoGiOELHpwQPVhQnEoiQGUUbvEhEIRhtCR60KNBoT7Nbmk9x
5sG0vcEJwXNW7t1q4wQuwcoeRGskpTqPPHUPoo28CYRmreOkgQ8oCqU6L8GDFSWpzm3MjDekYzSg
KGr8DFQRHqwoyiYQmreMNnoBRdGENk4JHqwomgMI1wkedpoW9RRKdV6CByuKEwlEyIx0pMxAUSjV
eQketCjQaH3MjBakzABRFBs/N1KEBymKYilEyIzRpMwkohCMtgQPWhRgtIrFzDja4QhQFMLpxkV4
sKJolkBo2QpacYWiEA7jLcKDFkUBiNO8ixE11j8U7SJMCZ7TRbi1uSdwCdkq2n57WMoI1XkRHmwp
cyKB0LLVgmSKUBRCdV6EBy2KARCngXxDm0UHomg+vuYqwoMUJfAmECEzllZTQFHE+C5CER6sKIID
CBkz42jDjlAUwvqPIjxYUaRLIEJmPM3YoCiE9R9FeLCiKGi0qhOqZYwEAUWx47sIRXiwotgUQivq
YdyJKASjLcGDFgUarY6Z4Y40QA1EMYTF9EV4kKIE3gQiZEbQ3osHRaHUPiV4sKIktc9pMN5KNkFL
e9ouwhl8bvv/A5dQraS9KQ2WMk2ouUrwYEuZFgmEVq2mLeKCohA2GxbhQYsC/cjGzGhPygwUhVKd
l+DBimJTiJAZQzuZAYpCqc5L8KBFsQDCxcxYSxpMAqJYRugilOBBihJ4E4iQGWdIEFAUwhlHRXiw
onBotD5mxtMGqKEoYvw6vSI8WFGETyC0bhnt3GkoiiQYbQkerCgSQGgWd5Nx2rnTUBTCUaZFeLCi
WJNCuNb4x15i6eQzJ/Ua60afSHwJ7PMCY+/TiTScb4VVeeEy7jPzROIQWAYRHHsssHo+8OgztZGB
0QLbNFzg4DwvXMZ95gscvEXR1gADHEc4878IDzIbgRdAnKaTnJziIIhJO7lTgud0cm8tldM8mrUW
pKELWMr4+AZpER5sKeMugdC6NbRF1VAUMb5BWoQHK4oQAELEzFhNygwURY9vexXhwYqiUwity74B
KwQZ38ktwoMWxQII2QnTMtoxGlAUwsbDIjxYUaxKILRpBSdBQFEI72688FR9fCw0WhUzIxUJAoji
CZvRivAgRQm8CYQ21CFNKArhbP0iPFhROITQsVvFaKvFoSiEl+QW4cGKInkCoR31+EEoCuFVg0V4
sKIoDSBO00leiQla2tN2ESYEJ3URTCzqTpKyCksZZXypBA+2lFmTQGhHPTgXiuLGL90twoMVxTEA
YeNgFKPtSYCiEN7dWIQHK4q3CYT2LactYvy3KJZRxnFK8OBEibwAwsVBPENbRvI3dWeyW0kNheFX
yQtgeR6uxIoVOyTEGgU63USkE5Q009tT53Yg8DvDPf5PXQkpYkF3x199x+Wp7GOUkogZtgWPVkqK
ADEO3rvRKIhJCtHQWvCopdQJIgUXuBktSmGWHSx4tFLKfyDCYePwwUXuTmeUsrzsYMSjltIQQiKT
C7UxGqSE5c/IRjxKKRsvQASJTOUuukYpyylWjXi0UmKbIEJ2PvcdRto7ThH2BV+fIhy5UnS+Uu8/
1rLlrChGPNpaVgNAxO3HJa6qo5S+uuJnxKOV0vsEkZKL3N4alDJWxzhGPFopowJE2n5cadSQHKTE
5RU/Ix6llBgmCInM4PJOo5Tlc1dPPGdcGxZegMjbDzt5QynL+SqMeLRS8vNS2pyUNIXe/rVF48eb
rbQ/ri7kz794uP5we3nzxe3Vp9/v7n/eOr4vfvzp8vbD1cWvD1f3t5cfr768v7v7dPHL5cPD7xvZ
l7e/3tw8X255o9yHq4eHbSjgHm4uf7v6/ubux8ub7b8frm+/Pz753yVsct7VHN61H2LIP6bPcmQ8
cfH4CyQUx3+9RfDu/vLD1XM4sdc3cOR/u89P//3T03//+ek3iDri1bh6n9/nHh4hvvrp6kf5SxvB
BnN1cf3+6FAsXdzdX3y8vN1oPspY4etvLn66fLj4/MvePQeYfLIH/Pb4t28E8fL2T6ktv19efzoO
xDa+Tz9dAePlu3f3m9TteR5RJ9K0kXIJe/A1G6tfMI14tK/ZyABRJHyF6ypBSgrEIM+CRyll40UI
iUzrVGRAynqqKSMerZRYAaJuHM5zV+2hFGZ5woJHK6W0CSJkF8oeCWH3nXTuCE5NOushFRcGtb6C
tYyZdD7ynHGXlvACRJOqnrmLEFDK8jaTf3jOuoa88SKERKZyK3EgZT13lxGPVkqPANG3yLD3iICU
zEw6LXiUUnKYIVJ1nhtToBRm0mnBo5bSAGIcfHWRyyOBUhIxGrbg0UpJGSEkMqlaNrR5OdmHEY9a
CjS0wUtkiuUmx5b7epdswqOV0gNCSGRqoxajUQqx5GnCo5UyKkAEiUzntp+ClEL0PiY8SiklzBAh
u1j/Zykc9gU/ZYrwlMJh4krVjUFFFWrZeu4uIx51LWsAEQ++sTsYUEpe77lMeLRScp8gUmN3MKCU
5WQfRjxaKSUCRJLItE5VV5RSV7fnP/GcVUodE0TqzntqhQ2krOfuMuLRSmkIkQ++uxgpCJBS/foU
wYRHKWXjRQiJTOY+uKOU5WQfRjxqKdjQFolMzVRkUEoiumQLHq2U1BBCItO5dxikrKeaMuLRSpl6
nyqRGZ76toxSlu+YNeLRSukzRMgu1T2ujt53irAj+ClThKevCBNXGs5zt4pNtYxojyx41LUM26N2
8MPFRK2wgZTGdOcWPEopGy9CSGQSlw8UpKzn7jLiUUvpANElMoX7voNSEjFFsODRSkkDISQynZvR
opRM9FwWPFopE8Q4CCEHgVLq6ok5Ix6tlOoniOxd4Kb5kxSiobXgUUuBhjZ6iUziMnuAlPWcNP/w
nLWmbLwTRPbsKXyQsp6TxohHKyVGgAgSmcpFBqUQvY8Jj1ZKGjNEdrntcY581ynCnuCnTBFe+ooQ
g1T17ql1E6xlRHduwqOtZRNElKo+uD0sIGU9yY8Rj1ZKyxNEDi4kCgKkrCf5MeJRS+kAkQ4hOu8p
CJQyVrOUGPFopYwyQeTkEpcqZZJC9FwWPGopAyDy8T5hbi0WpIzlK2qNeJRSNt4JIic2NQFKIXZf
mfCopeQJolQXepogUhhvpb1tI/u17LPqgrXPmf1zxdUaTivuhOc8Kfvs54Kby7U8V3B+u+DCCFYU
rBZc5uKGKymfVtwJz3mq4CJtS+d6IcRpRFtnwaONRvMIIW3LyNSSJkphBpUWPGop2NYdv5x4bmP0
f6V074nxkwWPTorwPgdR2h6ZEPad5O4I/sokN6TgYumtxJcmuXWLNJu1H2vZcipOIx51LRsA0bZg
sflAUQpxus+ERysl1gkiZ/ZID0phRukWPFop0yi9S2Qql3EEpRCH0R55znlZmPBOELG4kff45o+N
9He/vJMW+uPd7fWnu3vR9P76w6/3l/KHc7tdX26461PLfb6nea3lrt3VFHN7seXuUv8HdyHwVPWI
8YEFj7rqFYAYh1Bc5LJ1oBTiyJ8Jj1ZKjxNELi5xh8NRyvL93kY8ainQnSUvkSmdqq4gJRB770x4
lFI2XoSQyHRuORClLF/zaMSjloI1JRz71L7HQfhd5xx7gr/Sc9XhUgglt6eOa8Kqzhfq9cdKRuyk
f+Q55xV3G+8MkauLhRq4oZTlazONeNRSGkBEiUzm0kyBlEhsvTPhUUrZCp8gYna+lh3e6m+mf/b0
XOejgEbxVSqbMf6eT/NKSxmTdzWM2vpLTWWUt7A0agMsvgBEblsTHu0LECJAJHkLW6BGSihl+YpU
Ix6tlDgQQiLTCxUZlLJ8RaoRj1ZKQoh8CM1F7ugGSiG2WZrwaKXUPEHk5lqgcotMUohO1YJHLaUD
RJHI9E5FBqUQyRpMeLRSRp0gcneBGwOClOTXV3xNeJRSNl6AOH7IaX3sMFTYdza4IzizzVK4ukuZ
iirWsuUrvox4tLUshQkid1dM58gprS/umvCopRSAaBKZxl0Si1KYJTsLHq2UEhFCItNNv0AlZsnO
gkctpQJEP4ThPPcZDKUQpztNeLRSepsg8nAxUkNylDKIhtaCRytlYEM7JDI5UqNPkJKZJSYLHqWU
jXeCyINNjoBSIjGZtODRSokAkb1Epmdqmo9S6npDa8KjlVLrBFE2Qu6KcpRCbGc04dFKaR4gjt9d
+thjNXHXKcKe4KdMEZ6SNUxc0buaqEYRaxnRnZvwaGtZbxNEic5zH4xRCtGd/81z1ldvBICI2291
aVADLZBSiOOyJjxKKRsvQkhkaqUig1KI7YwmPGopFSCSRGZwB5lRCnG685HnnKc7N94ZoiQXqmWb
Uoh7hkx41FKwoc2HmFziIoNSiNOdJjxaKa1PECWxu29RyiBeHwserZSBEEUiUyMFAVIq0/tY8Cil
bLwIIZFpjVolQylM72PBo5ZSAaLKbj/vww4j7X2nCDuCU1OEKjs4OpfyHWsZcfeQCY+2lpU2QZTu
vOkYpxKnE0x4tFJqAIh2iN0FbkiOUpju3IJHK6X1CaIMVyv1vRmlEPvmTXi0UnoEiH5I3iUumzBI
acRlBSY8SiktIMQ4eO9Kpho2lELk1jfhUUtpCCGRaYmqriglr3+uNeHRSsEz9sVvkWET505S1vfp
mfCopYwJIgXnuYYNpYz1htaERytlRIAIBx9ctLy9rjdi95UJj1pKnSBCccHvce5w1ynCnuCnTBFe
2mhUglT1zN3CArWsE925CY+ylvWAEFGqeuFur0QpxH1mJjxaKakjhESmDWpMgVKIq3JMeLRSMkpJ
Bx/ZyRtKIc4bm/BopdQyQaSNjlt2RCnEVTkmPGopAyCyRCZxl6qhFKY7t+DRShkVISQymVt2BCmD
yK1vwqOUsvECRJHI1Ei9wyglEaNhCx6tlBQRQiLTuB2UkxTi9bHgUUupAFElMiNSq2QopRKDNwse
rZQ6Q4TiYtgjc/K+U4QdwU+ZIjx9RZi4UnK+URt7sJa19QVzEx5tLWsBINrBJxe5LakopROjYQse
rZTeJ4iUXerU+49SmHUcCx6tlGkdpx98doXbfflfKcNH4vWx4NFJEV6EkMi0QC0mTVLWF8xNeNRS
GkCMLTLsKRGUkonRsAWPVkrOE0QqzkcqMiiFWXaw4FFL6QjxOUP53Nqn6OMbmXqHL3k1I7GyYO1z
lvxcca3V04o74TlPzEgsBWdX+niu4PR2wYMRrChYLXjMxVXXWzytuBOe8yTB8eD9wRcXuFQIgBPi
altnxKOMxsaLENK2RG4P8CRlta0z4lFL6QARJDKpUg0uSqmrg0ojHq2UGieIsEGEPTIn7zjJ3Rd8
fZIrXFLVc6GiirVs+SCbEY+6llWAiFLVC5fyHaWM1VG6EY9WymgIIZGpifqaAFKiX53PGfEopWzl
A0TaIsOe+kcpy9cUGvFopYQ+QaTKZi9FKcs73414tFJiBIh82AhDpiaVKKWsrlkb8WillBkib3xc
KgSUsrxR24hHLQUb2iKRSY2CQCl99du6EY9WSi8IIZHJnVrnnaSsfls34lFLGQBRJTJlUBAgJcXV
A/f/8Jzzvg7hnSGKy3GPRMf7ThF2BD9litBfmiJUqeqNuxUKaxmzZGHBo65lGSDaIXg24QZKycRo
2IJHKyUHhJDIDG4xCaUs73w34lFLKQDRDyE4zx1bRimNmGFb8GiltDpB5ODCoGa0KKUTPZcFj1ZK
9wAxJDLJUzNalMIsO1jwaKWMhhASmcx9oQQpmVl2sOBRSsm47BC8RKZwZ7lRSllvaE14tFJKQQiJ
TOVydU5S1qcIf/NQo3O1lAEQQSLTOlVdUQrR+zzynPPmYOGdIYorye8w0t51irAnODNFCEGqeudO
r2AtI7pzEx5tLeseILYf+kwCSClhvecy4VFK2XgniLzRcVtlJylEz2XBo5ZSACJJZGKiIFDK8s53
Ix6tlBQRQiJTAwWBUpZ3vhvxqKVUgMgSmcYlG0QpdX2KYMKjlVIbQkhkeqEgUMryRm0jHq2Uhg1t
kciMSq3FgpTK9D4WPEopNcxScnKeu0kDpTC9jwWPWkoBiHoIyYVBQaCUTDS0FjxaKbnOEMXVtEf2
232nCDuCU1OEKlU9cRdjYS0rxBThkeeca8MbL0A0qeo5mUphuvNHnrNKqQ0hJDKlUJM3lMJ05xY8
WilTd94lMo075wRSmiemCBY8Sikb7wSRs/OJmtFOUoiey4JHLaUCxDiE7EKhIFDK8n0NRjxaKXGC
kMjESlVXlLJ87sqIRy0FGtroJTKpU5FBKX29oTXh0UrpASEkMsVT8xSUQuy+MuFRSykAcVx3zoGK
DEjpRO9jwqOUsvE+B9HyHtlvd50i7Al+yhTh6SzCxJWLK9y3IaxlRHduwqOuZRUgogRrcMuOKCWt
91wmPFopqU0QuTrPjT5RCrH7yoRHKyUHgDhePR24A+4opayfRXjkOetRno13gsiVHWihFOIUnQmP
VkqNAJElMqlZLjv05fsajHi0UsYEIZHJ3CkRlELsvjLhUUvBhrZIZMqgIEDKiMRo2IJHKWXjnSBy
Y89MT1LWNxqZ8KilDIA4rjv33HYYVO47Gt4R/JXRcEjeBSmmPY2GJ67mYqOiirWsri+Ym/Boa1n1
E0RuLnH7dCcp62cRTHjUUjJANIlM5jJUoxRmHceCRyulB4SQyJRCTd5QCrOOY8GjllIAom+RYc9M
/0dK9D4QM+xHnvN9WjnyThCxOl/2SFGOPdd3v7yTbuvj3e31p7t70fT++sOv95fyh3NnVl/uzepT
d3a+p3mtO6vDtRFTSC91Z13qf+Py7mLVI84qmvBoq170ADGk/jdunypKYRZ3LHi0UlJDCIlMb9Ti
LkphFncseLRScpggSnF+PJcfK3hMWDU/fltLzKUuWP2c7bniyiinFXfCc56YmEsKri49X3B4s+AS
GcGKgrWCS5yL667GeFpxJzzniYKTl7ZlFGrhGHDC8l1RRjzKaGy8E0TuznM7rVHKcpLUf3jOuJ/3
yAsQ4fiJoaQdhjB/cXduva0TQRz/KhYvgITdvV+CQCDxAA8IBOIiEKp8pYE2KUlLQRy+O7u5NDB2
W69nHA5I56i3JPvb/8zu7Iy961lLCXOCP7P2Mq7gXAp2Wnr1seJLUEaFTjb5sS5EPKlOJl0PQgUI
3FkVUBQ1/cIACU+qKEoACBEtIywKAopip18YIOFJFcX2IYQquJrjXI7Pe2879et8FGBSfJaKJkud
szfPzJRC8sIxrz1/aqoUcRRKjbr3vDcAEIsUCp7kAWABhIyjUOEeIAlF8dOLriQ8qaJ4BSGiZTTh
/eyxEURQpeBJFsUBCBUsg708B0QRiCtjJDyJogReCBEtY3AXTqAoEhFUKXhSRZEQQkfLWNxpwlCU
yU8jJ+JJFcVwCBEt43DnvEJRJj9Vj4gnWRQNIEy85M+0n2GpMG82OCM45jbLyOUKj3uSA/SyyU/k
e+Q54+ExO94ehPIFwx3rA0SRbPrlCRKeRFECL4CwC+4LjjtNGIqCKdlR8KSKwi2EiJYRuBIZFAVT
sqPgSRWlV7Jz0TLSoCwDRdGIagoFT6ooWkCIaBmFu3DUEwUx0VLwJItiAISPljGMsuItMSUmCp5U
UWwPIlrGEh5dJpic/FQ9Ip5UURxIJhWLlnGEpxsLpuT0iZaEJ1GUwNuD0IFQoJK3nijTUwQSnmRR
NIDgu50GZo5q4qwpwhH8dTusIXAJVnBc2RF6GSKck/CkepkWECK6usQtyaEoiHBOwpMsigEQcZtc
IXGPyoKiuOkpAglPqijO9iA0L7RHQUBRPCJyUfCkiuI5gJDRMha3nwWIotn0NQ4JT6IogbcHoTn2
hHIoCkdMtBQ8qaJwASDUQoiC4W6qh6Ko6SnCkeecBSo9AKFFIXC7KaEoiN2dJDypomgHIHS0jMIl
b1AUTPSh4EkVxVkIES2jcbUPKAom+lDwpIrSiz67Yrywc5ycPG+KMCP4mBTBP5UimOjqFncEB/Ay
g3jIGwlPopcFXggRXd0zlKv3REGkCHueMx75vOMFEHYhZMFwj1OHomDCOQVPqigDEFoWQqLubOuJ
gohcFDzJolgA4aJlJO7xe1AUi1jjUPCkimIdhIiWUbiyIxTFISZaCp5UUZwAED7eP+g8yjJAFMxx
iCQ8iaJY1hdF64IZVJUMioLYYkfCkyoK3GKn2ULoQnAUBBQF8YBrEp5UUYyBENEyEnfOKxTFTvcU
Ep5UUSz0FL4/hWaOnbOzpghzgo9JEU5XESBXdHWFuzYEvcwjvIyCJ9XLPIMQ0dU16RZ8i7iZmoQn
WRQFIES0jCG9Jc2J6Qs/Ep5EUQIvhIiWsTgIKAriUi8JT6ooUgMIGS3jGGr1CUVBXJkk4UkVRQsI
ES3jcAVqKAriyXckPMmieAChomW8o6zFOEz0oeBJFcWzHoQ2BbMoCCCKR1yEI+FJFCXwAggdj0zR
uHUBFAUTfSh4UkURPVGiZRzu4a9QFMQzyUh4UkWRAkDsivHKiRlW2vOmCDOCj0kR/FMpgomu7nE3
GkIvw2TnFDypXmZMD0LbguOOCoOiYLJzCp5UUXrZuV0IWyjc/kYoCiacU/CkiuJ7okTLGNztGlAU
xJm1JDzJolgA4aJlLO75Yf8UhTNMOKfgSRMl8kKIaBmvUBBQFEwyScGTKkovmfQL4QqOOwMOioJJ
Jil4UkXRogehXSE0amKDomCiDwVPqij/jD5ywdhC+MJSXtjnfPKcQsSTKErgBRA8cBSKoWZ7KMrk
FIGIJ1UUKXoQ3BTauRlW2jOmCPOCj0kRTlcRelySFZqhyo7Qy8zU4hYRT6qXGQ8gRHR1jauwAVGm
H9RExJMqiu1BRMsY3PN6gCiCTV0NE/EkihJ4AUTkKBgOAooy+cl3RDypooi+KFIUDndfHBRl8tZC
Ip5UUSQHECpaxjlU7QOIMv1IFiKeVFFMT5RoGW9R13egKJjoQ8GTLIoHEDrO9lKxHoQU0r9whiwX
1k07Kze54dR+WjfUnONyXHMj+jnqrNx9w7wwQg00rNiLDXuFETih4VSBveo3JwuGexoW6P30U1eI
eBJFCbxDEE6IcTYYYfzxXidlwXGZNrQGn3prChFPqjU4dFETrSEU6sZoKIpCiELBkyqK6ovCTWH8
f2zD/bzgY5Jc/1SSa6KrS9zt99DL9NTyLBFPqpdpDSBsdHUlUeMfimIR9SUKnlRRrIAQ0TKWo4pc
PVEQC1IKnmRRPIBwC6YKrlFxG4iiJj85mIgnUZTA24OQqhC4Iy2hKJNPbyPiSRWFWwDhg2Ww50NA
USY/cYeIJ1UU2RdFKuwD26Aok5+4Q8STKooCmT9n0TLaogpVUJTJBzwT8aSK4h2EiJYxuPv1gCh6
8r4rIp5EUQIvgODRMha3oRSKMnkrNBFPqiiiLwo3hWNz7FufNUWYExyTIgQuqQuOu40Eetnks9WJ
eJK9zAMIsWC6kBIVKaAokx+SS8STKophECJaRuFuYuyJMn2NQ8KTLIoFEDJaRuMOP4WiuOl1HBKe
VFGc6kFIUzjcg/eBKNOPiiHiSRXFcwChFswVUqEsA0Qxk595T8STKErg7UFIVxjcIT5QFEz0oeBJ
FsUDCB0t4zmqSgZFsYg5hYInVRSrehDSFxx3gR+IMv0QDyKeZFEcgDAL5gupUBMbEMVOPuePiCdR
lMDbg+Cm8GyOfevzpggzgo9JEU63yvW4FCscbuNYz8sQkzQFT7KXeQBhF5wXivI0SW7l9NowCU+q
KNL0IBQvNG4zKhRl8plcRDypovQgXLSMwVXYoCiYcE7BkyqKVRAiWsbiFlpQFIdIESh4UkVxHED4
BReFx10eBaI4hhCFgidRFMf6oihZKNxCqycKIm+i4EkWBeRNgi24LDSuFgtFQRSoSHhSRTEMQkTL
GFxG2xNl+kRLwpMsigIQPFrG4h5uBUVBFKhIeFJFcQOi2IJxPsNKe9YUYU7wMSnC6SoC5Iqu7nAn
VEMvm/ykISKeZC9zAEIsuMTeaQ5E8YgLeCQ8iaIEXggRLeNxh59CURAVPxKeZFE8gIiVx4Ip1I09
UBQ1PZkk4UkVRZkehFIFF6jVJxRFT08mSXhSRdEQQkXLCIaqsEFRMOGcgidVFKcgRLSMcKhrxlAU
xPUmEp5UUTwHEHrBdaFxz+P+pyiCYaIPBU+aKJG3B6FcYXD3/kNRJp+kT8STLIoBECYuKjm3Mywq
510Nzwj+zGqYC1d46bXgp9Vwj8sVFnfuMPQyTHZOwZPqZYZBiOjqDnfOJxQFk51T8CSLogCEjZbx
uOPnoCiYcE7BkyqK64uiAoRDWQaKggnnFDypovTC+e7R0wxXoAaicEw4p+BJFCXw9iCEKbiZ44op
jFxf3TYxbN2sV8u79SbK1C1/vN+U8Y/9YGaejmbmFM7O15vnwpllhddCOvlUONs94JsrlKmh62FS
dgqeZNfzAMJH/xcCVdyFomBSdgqeVFGUgRDRMhJ34AQUBZOyU/CkiqIBhGTRMhJ3iA0QRSBu/CPh
SRQl8EKIaBmFg4CiIPY/k/CkiqIUgIjV/TiGZwgAsyZic4I/E7mMKbwQkp8CVx/LFw537gJ0MsSm
VhKeVCezvgeheaFw+yWBKBJxkwEJT6IogRdA7J7vrR0qUEBRJj83j4gnVRRl+xCqEHyOUw0+773t
1K/zUYBJ8VkqmjX+nL15ZqYUUgYeb4w5TZWQS8uC46YmOAAQB1KR8KQOAOMBhFwEQuP79UUpmQtc
7d3uyJxmub0t7+qrxf3q59X6YZXftNtt+WObG9fUqrVlXnZK5ZVvRG6lKXNetbJqrFJd1QR03Tkv
Xdl1wvHHD4sW/Wr/ednmts7emPLhbwx1yAnxQodeNfc3N79Hla6CYJEkfp899vPrT4urchMkvbq/
ayLgRwvW8FbrmjvudFZv2jKauPp9/8YvFkzVpnF1W6ra9pl4oR6fXPeiyC80/qqH/eH291VdwLcB
qKxbbx7C3wP1W3ebMLyCS1gjrZVlayquVM3btm6D1qxzUtm6abu3h3rimH6hJ6N4Di789afZZ7dl
mHe+aLtFVeuGBS/PW9bJ3Ki6yavGl3ndClnarq600dnj5y1DoWK5M0OYCNrNqry+/v3drNzNdNvL
sgu/+/6qvA4v+CEMz4/a7d1m/ftgj/RxXUfTI/CqLw/fFMvV5Y/3ASNr1jdxQGb1dVuu3ttNHP8C
1+1VuW0zdiEX2Wp9knXT/hIhd1+Xm3j61XJVh1X01XKbhX9l9o9PmZ/727oIepXL1el1R/0+Dtad
E+DuKozz5nIXFX/Ibu+3V2+Fpr+6XK72/vb4xnfGjcG3F/sIG9zx+3HvyH54i7+d0EX/CkKvb59g
fjt7bxxDQvPJCl+vw0S0+vFyP59tw4qlPnhd/Eu2Xk2YIGJUDMH353aVCTYMr3rwnMvst6DO9XJ7
165evdqtK+Kg2TvbQmT39+GLCtCmE6FFqXTeaWFyZ0sWAKqm8R2vhebZB5v2ug1j66Od2w4TWBL5
jiP7cUSEIf04kt+Lp9i9mzW/B4H3k0yWvx+GehjaYaEQfjlIZl7UhkCTuNwzrbdVMGbVqKPa5X65
+kW76/yvN3EJ9bDe/LwfNa8PLvDaw9QQvs1DY/s1NIXnvP3uabqg+LynJxPPSN3xmQAY+sAuy+s4
Rf1+GXWMQ73c3l3u1n6Xxy7W95tNcIn4c/TrYWg3I/TiEB7FhVhkzX7tEH1zfd1EO0Q139rBZeIJ
ST0NXbta3wYP21uwOJAssrDYyXaSBbhfl3W73TlJt1mvgs83kW0/IbzKgqc27/0a1uGv4kvDLzW3
hgXfijnW317Lnnnt6A/mYvwHc/HsBy+7xxc/TQtfl/1wfmNkdTBG9I3TSuXwl+dcRBSMmmrnCMXX
VfPP1482XqJXnLlPh3Xp7N2KiVLdDnWOM6Jp8tC58HUbLrW2+cNmGerUF9fr4EkXexe6YBcH5IuA
eCEudnAXx4/Mj6v0957D5aS4D7vsdP8luvsRPwtp69V2kX2f1oMmliZ+OKzWljft+v4u4+KxrnSs
4wz3THw3fgG3N/kijMLDXHkZ/xal2/94Sn3+CKDv7DD/HNssRtCDc29uLg9c23i9Yi6/PkePHj16
cwOcQVzs+7jzhR3QMA9NWvEyD3DO55DU2ZDazWa9+SfYv4sk9kgjbUdL9WQ4q9c3t9ftXfuvQowd
pIkLojP3aXJMTezWc0FKk3YuNaZykRhTDSkuQUzlgiimEpUkaIJbooOdo0fjgxsXwzxECeuLPMBL
Xg+k0cGNNoNIDW78LFQTg9s8EMtudHCblJSfoy9j5/1ldyHCz+tVyNhjYY2dA+5JocsqzNCxylld
rx+y8qH8/VH7sVwg75ma74QSpxRGOlV6r3R5TJ8+/EdFNs67h7sERjvoTHxjSrBB9mPp8aMQGN+K
n5YdkvwjA3WN9d1RrYZK7BMxi89UYng6ZsUhwcayUGbZXPw5tlmMBCOWWLEEHAS5Wt/dXt//uFfk
+NPE5VS8J3eOocBFHAoNs8403FStctOG6hn5MEOVi397qL47ii0MaDk4oK2jrWikhJJjdC7D/91S
Z+/yZ8Qcv/onW1hQ9wQzg1pHm7mPWvHHFc6/jwNW+xHq30USe6Sz2ewQPn7IPt5/UxzmnsXoFJig
vmsdTXWl15mHcnl3Gaokl/er+PN/slOIBcKhDopdJFD15NE8X4ZXXwVLLLfhsvr65z3UkXFbb5a3
d/9JU+EHE0E9abbOTB5Mr1Gn0IOJC/RgoinNzjOYSExF3MGEwUS4PJupEwmD6LXrDGLwEKSqVL2g
Gji05glqblZEHfsmfFSwyzuh1VCj3e9EaX+73W/3eDMAvZmF7HizK+LdDNtqmJP2QgW8V6y/Fv43
KB41Oxj0uWSU9qrE6b6sHyNBtw75erxL+mGRcb6IFZP1/Sbe4t7e3K435WZ5/XswbPlrubwuq+tw
12yeldvt/U2EP9z7t1pfr1c/tpuw9SAUS/63vZCF1P1ecHY8JeJVqOKEOfemXNWnkyM+WjjlJJNd
V5WlfXX49eVm02zD3BAsf3SC7IsvPoqXWmMxJxZx3htTxDkH5IebMP/+GjGPhOvqpzDKx1PGQsym
vVnHAzXKUE3bnAX7WAIKDhLvpd6027PY9LHdbTDuORqMToSyTnk08O4e+Yh+kGvxuGVvsB/muIlr
aj8G9xEqNf3oK1qwxA2FAbxPYwohFckOsbj2CRsBL5fhkmiIJ58fY8mOZnHaFXJ8Ux7flfOwzzDX
um1zW1dWtpyVLVNZc7vYXwz/7demibdPrx8uw7L358VO7+Fu0KRG/+gGAft287f3867SWos2152w
YadkWeWNUl3eyU53UpetNFX2a7P821vKsm6dDvtntOmaXPGmyr2vZG64a2VppTetzrb3sVYfi7aL
uKPu190+uy8+G9aJZv34/9eJZkEwq05S6tpy1+StUyzvqtbnHS+bnBsumbHclVIAnbSWthLO5Lb1
de5VEKvhncirtrNtI72yRj2h0zeDOima+5ZfN50qERqsWXiLDuGJcaNz7jqWa1VVla+8qqsuTSea
Kx2vm06lEU1rXBcGqPV56VqWs/DGvHZKldbZinU2Taf/wPw0QScX9KldJfKmC611iqm8FZUOC5+G
exE/reVJOjn+eoftclzYdq/j9FG+fuGIqPI7fJTC1x99Eux+ZIh37+i67GrXVuFkgeTzE0whX9zM
8myLr7Y3IW34NFt12yjr5emFwS7vpQ/D99LC32CPnHh+uAXmy/a3tg5d4ZL7zoomNFQdlupfhmwl
HHRQ7KXMjkc6uHDIRuUq5cM3reJeMOM990ywRgYaHTaEr6/3u4FjdeCmfS9s6o8bMePO8Og899t2
s8+KyvuQDMQfL7fhg0O5Ju7IfO/YkHdSdbrsFK+9Ua0QwZOVsL5sQ7ut8kNdVhrVZZg2/eM4tICc
ljWhcZKTJTUIYSQGYoQmm/bufrNqm+NrozS7lwzhWDaEw3lwluVvO5wnGtYCZ4zRDafKrsVgc3xk
cyP6mSSwE2qoYfFyww4nsBN6XMPJArvB5qQf19yIfiYJ7KUdali+2LCROIFHN5wqsBluzspxzY3o
Z4LAtuB+0JHUqeHtITZdr38M5epdd29D7fkhdDgeZKVqVgrHZdmq54OZ7IxyulYdq1tppamYa5gT
1oTAzFTH+8Fss17fnS2g2UJ4QS0FtJZFDfuIKKkRU/3XuiEwKcjBRmiX5OnS8SFE/eIQcxpntNEN
p5rC6cHmvB3X3Ih+JgmszGAwNC827BlO4NENpwrs2WBzjo9rbkQ/kwTWwzHJvtww0oP/4u5Kexsp
guhfGfEFVsLevo+sQEJCiA8IIQ4hgVbRnGwgiSPbAVZa/jvdY0/i2JVM1XTbCUggkeD4vXpVfdVM
V6GByQJrEE4LHBzCTpLARoGedWPAmiVGMBqYKHAgBsJZhoND2EkS2GlwpPpx4MQIRgOTBdYgnJU4
OISdJIE9OBUKNgrMEyMYDUwVmAMR7OaMIeEQdhIEdnMtDQQ8etTUPCmCCcBkgYEI9nMmJA4OYSdB
4AAcH9T2b/c/AN49Cb1br2/eFl+VF5e9dHHvH84MX//443cBanWzuA4/xbTp7aroL4b++hYCcsxB
Fo4eubRIOloSgKmeFIdwgs85Yzg4hJ14T/bACvKkyutJIeaMCchCNW5hynGMAkz2pAPhpMXBIewk
eDIAgy+VCoMqKq1aq5yblZKLWdk6P/OtqmaWCSc7X5aNLsP5z4hOdqLuSmsspaj0+Jd/BBrkBWhQ
UlFpXUrZWOVZaUvgoUjrOs2YkV3lGcSJw8E0em7TMmU+ogBTo1jCcJ7j4BB2kqKY3+V3RqN4xLuE
qt27XoeqdjeBJ6tqIxolKlPVpdKiFaos27oqm6Z9BVkiNLg4jx6QtOJpoYIGpoaK4hCcZAwHh7CT
FCpSewh4/ICkTJrAaGCywAaEs6DABvHYeDewofLv3DrhealnpbJ2po0zM+G9njW6qVtXtRVjLnf5
994ipIAIz9FC5u6qch4pkXXn5U7d+efgdZy689l5P1p3XvZ1549JIEfd+d0vxNWd3/2L+1LRByaO
Ftw+qJ0uNyUaOsFKr5p6VpainlnHxEwKq2e+88x3tW91NVo7PTIIm4HxyhWpyGH5rp1uaqGbkhs3
sUJ5T3e0VukR6BIiKAM6Pb7e5IC9r6Kzp7mCTz3juTSduNlBA1PXYs1BOKVxcAg7SSuXAh8qyvFc
mk7c7KCByQKDmx3NHA4OYSdJYK1BO8dzaSYxgtHAVIENB+GswMEh7CQJbBSDgEdfT9EmMYLRwGSB
DQinNQ4OYSdJYCtBz47nEG1iBFspccBUgS0H4TTDwSHsJAnsmIKAx1N7NjGC0cBkgQ0Ixy0ODmEn
TWADRvB41sklRjAamCqw4yCc4zg4hJ0EgeWccQMBj74VoZ1NEZgATBbYgnBK4OAQdpIE5gIcqePJ
MJ/yMi8FmCqwFyCctDg4hJ00gT04dMZTRz4xgtHAZIHBCBZS4uAQdpIElgwEHj3hGJYYwWhgosCB
GAjHHQ4OYSdNYOlTrh9sX9wdrnihr6GAVFSeAiRHvbiFvSCUcj92vd449P6e3p5OeapTP3VxKzII
Hred5G3rWGd8Rb20FZlagbq0BaDtX9jafOh5LmsFS5Rn6GEiSmOFN65iSj/9fnvbVF5XvFFe+KYt
K9b4xgrXcW+4b3n5fJe1gslWqxSTDyaqpFNHMh3yNG1AEtakkEBoQpi81ZwJkdI+fJ+O4CllSNL5
UH0kOETCWWgpVaMpJcOT9mQEYKqdHJhc9dwzj4ND2EmIugCsbcog2NsyoCfDQyp2LjO14/mvbxk2
q2PbgBpl6vHy0jTKWk4j6sT5/1KnrOU0ep1eYtmRdJ2yltMIOnmRR6fD9gmLmwfdE0Ij9xztEEAb
RkvxkNuznxU3y0Ud1oBgUw7a8T07Vbba1K7Wymwf1E9oG//Jn1eMMxMXw7CFjs2nF9fFYv2uXRbR
9p2v+/anb74pPhkEeXUWgjd6qLhebF4/mihlxnDANLYASWp2VH9jeAWHSslL5pjoeCNfrkNHtTqB
Q7kYcajxz+JQLh441JouJn46U7f85Tp0VKuJDv1+M181Rfw/xeJ6gumbus6LUDe3EIxA3n/YexUI
fgvo7K7vXLPrXV5crNur1Yt6uwdvahxHsLlhROEMwcPTX/I7CJMv6vji432YTHg5djdMOEg+0xuK
w6udd69EvitXxfDhPiP1pmjeB/O22avZ52HsrYtlOF2GX4LMPMvK7ImXYQNndl5exlh5fx7FjpqX
q/V5f1XkfAjc+na5jJPM1kSYtDoi6bPtq7LitTgrtsflOCwXl822gvO2KHohX8HsdB52cEXvWI2t
6CXbdgNbxRnioLC7zNNe5A36i4mtFoAvTitFX7w9vTOKOjgjxsb9W8vb//NUiLg5U3m6He0+Rri/
shJ/OwRu20D4gom8qmD7dWeKStimTHPshH7dmczaNMA+hXHYvq3hnC9f9+Tw/bp7upkmaHyHDooF
U/t195Zlnk9wHfsyBRhskclq0eO94ORez2fDsHzG+59KdKtMw/6BYe2JZACDE6bkTkbpsE24fF5K
ckMJGTJ5WdHbhJ+SBHZuIG7ITmzT5DWVaNZTi1SeE+XUNZUL4pqal26GNZWLPGsqz3QGzbOmEgPs
FBbh11QuYD78RHz2ouRlUMIubiejtF3cnsV30xa3I5G46NCL26SkAGxL3sMvdt6/6F7L8PPiOmQM
YlaXnYIcpXH5oP0peGFmtCDYSTRCrIR9T0GZr/9jb4X9ZfyG8qTzTXj0401b6dqwVrFqOC598fAW
ddVE9/dZ+2fmt5eT39ycfvAw7lPMM0z56s39IxPUH/T37Scan3QI5QI8hEo/evadChc0t5UoVSV9
bRs1LSZOyC8lJrhIiIk3qK995DGZmzubN/VFmcCHNbEM//YbjM0MdkKa+D13tuU8tyUpK9PpuOzt
K56fzt4eO5KCKeXNn+L32NFnx2SE7+99xGTu0YwZ7fP9HzAqYb+3zT6m7vmczZNrT+v5fVRXZTZw
8mDKkMU5mjGTB9MLMip5MHHxvx5ML8hVUwZTxu3ZkYwgDKIXZ0zC4MmQeXA2z6POXAMno3uCYUHN
5XUmw34OXxX88mlADZnR6KHb6/bvm83dtI8DoY+LcLBd9qmzK9hXMM/Mj9X23hA73As/B4tBs8Gh
Tx1G8z7aun8b67fIoFsUsogvqf51VnB+FpMdi9tlLHLZXt0sluXy4vJ9vOj7Z3lxWVaXbTGbFeVq
dXsVyA9v/F0vLhfXv7XLov075Dn+x1Z4wbMUmsW2ox3+aBb/alaqVs5UJdmsaZkVAZ031g9XjiSy
i3xvRp4HFg/MyMD9RbWj7XXKk0v//+uU5wx6VJ0mXL/Le50z6pRnb/HSdMp7nTPqlGexeGk65b3O
GXQy5mUvR4ju6L0ZeQ4Oed39wrqj9zrl2ZMeVacJwyL7NGvzbG+QXeS5FVXdeuurpiI3TPFzJtgI
2ycRj9hFHjOtgxbBJaANVIyjbtrWi9p3nWFPFyayzjTGVroUgndVU7bedFVt6rKzpWWuecbCRH7O
GU8xeb8ciknpIp9Oh1oMxiiQhFApJBCaEErE+LkUAqCjRqsOG5tUto8ATJXdigM4yedMehwcwk68
wAGY68PlibPiKqS+woLzIYzAkJC8Kq/rdvhdcLxTTjLZdVVZ2g/bX58vl80qzgPXzZAhKb7//ssi
/Hl8oeT2NpiPubx7CpJfLENy8s9Ic2C4qH4PoYFnGTVdtleLdUAsV+t2eRLaw/sRIXsSG0gsw5x7
UtxVcO4pAGMQJXmnHBzcLx2B+iDX2TDw4TgzQ0GqqXbAg9TplHpoGYlRJyunD9nIuRJgfbLRQtom
qZ41BZhsJ4fgvGE4OISdhElZzbnVEPBoIW2T1M2cAkwV2DMITmiPg0PYSRJYWwkBm3vg1XY/e7n4
LaSre3NvQu75r2BwrEZXqa6uvPG27Z7eAGumBROmUW3DSivq2lnBq0o3Lvx9x+XhBni5WKxPtQkO
Uhjlckux5y3LUuohHociMX4tsyAxo3ITQ2hHinQrOERxtKK55Skb6B5Y4ICpruAChFMMB4ewkySw
Y+CUOVrR3CZVCaUAkwW2IBy3ODiEnTSBLRhIoxXNrUiMYDQwVWAhQDgncXAIO0kCew1Nr3q0N5UV
iRGMBiYLbEE4o3BwCDsJAus5Yx4CHi2wa2VSBBOAqQJLAcIJhoND2EkSmDMLAY9mT6xMimACMFlg
C8JJh4ND2EkSWDDQs6O9qaxKjGA0MFVgJUA4aXFwCDtJAkvOIeDRo6ZViRGMBiYLbEE46XFwCDtp
AnsBAY8eNa1OjGA0MFVgDUawYhwHh7CTJLAy4Egd7U1ldWIEo4HJAlsQzmocHMJOksBag3aOn3BM
YgSjgakCGwHCOSQcwk6CwHYulISAx084JimCI7DCAZMFBiLYzblhODiEnQSBA7APAvdX2HaBze4G
/N16ffO2+Kq8uOyliwmuVVt8/eOP3wWo1c3iOvwU3ye4XRV9zYFf30JADkwYmfGdvtUpnozAHgdM
9aTVh3B+bp3BwSHsxHtSsUfyp2Z8p+9SEtQUYKrAjkFwlnscHMJOksCPeHZ8p+9SIpgCTBZYQ3CO
KRwcwk6SwM6Dnh3f6Sc9YqEAUwX2QATzOWcCB4ewkyBwALagneM7fZ8UwQRgssAaghPc4OAQdpIE
FvBUOLrTdywpggnARIEdAyNYMoODQ9hJEljBQ2d0p+9YYgSjgckCaxCOGxwcwk6awB4MpNGdvuMp
7xFQgKkCcw7BaSZwcAg7SQJr8KxqRnf6jqf0WKQAkwU2IJyzODiEnSSBjQI9O/rMxonECEYDUwUW
HITTEgeHsJMksOWQZ+3oCceJxAhGA5MFNiCcZjg4hJ00ga1Ied93rwcj+r1vkIr7D9wowb6Rn3LR
aq9t86FOeW6UINo2N62rm9qrznvyLYnA1CmOuiVxgHa0ts0YX0CWeO7QwySwV1y0TcO5evrlMFVr
XtUu0JdNXTtnWm9LwzvFjas5k893OyKa7PEzw57J0EQlE08diXSo07QEAlrMmcBfkpmmCWHyDmKB
uQQ7mp5y0qU4gwBMlt1BcIJbHBzCTpLAwsgUf++tjuhxD1ExL/Ia8ulXx6FDMaiRZP9LjbJeQVZm
7mWeHURSx8ndCqUgSadGSJLb2qE6Tu7yircotHNVo3RbV35bYnZCP71Nx0l3lI6TKK1O4FAunnKo
nTMpnsWhXDxwqGhFzSTnnRHlC3UoRquJDt20ED1ib8gN+Qx9NTN0/oTJSZjc5Fg4K35bllUVfRpF
jRuK7YWss9MbcexmoAKk4zRIJ3FAZxCvn8Bl0zR9o3c72cdHH+9OYx2aeQTTHJrSBFgcNgEONcfZ
K4gA93l2u5PaSWoNvsjhi3dtuVxX4Qj/4evhv+KCIlTjWe1Yrc0j5xAtUi6ApvOhHsP0flSoM2Hn
1rgUEnuieMani5KDD1GUwPeAhHZzyWwKiX1RhEwRJZ0PVRSxT0KfsUACLCbiRp8XeDX90SkNmGqn
0hCcMwYHh7ATm5gIwP6R23V+9HmB19PTcDRgqsCaHcBxNefG4OAQduIFlnzubNKEu0/HuunDOgcf
qjes2yNhzpicSy9SSDwURTLBpouSgw9NlMj3gAS3cwnUvdMfflgH6LDriUeBi0VzUReraOjtZbsM
ZNraSq6FMabxH2KplN+Wi9vr5u4jd11sutvrfs9YfLMIH4nJnaDL0CxmFTE+k3cVkj8t1u9v2s++
2yJ+cv9/Xp2O+A+XbXuzqSu8vrgsQnHhdbFpljPw8d4NtZwPeUk9lyYp1PejTJmUKEvnQ40yZfZI
2DOm514khfq+KCnzUQ4+VFGsOyAhzZz5pA3gvijepoiSzocqireHJNycG+DspvjO0am5WN30Zdpv
r/+4jp1Or8IJLeS7Z67WkjNlZrYrxUy3Ts+EqPxMceOsKy0XzgbqRjSNUJ3vKivvviyO/Z8231cs
b+rioylf/hFskBsx6JFH2Xd27p8Sgw1ScKGUa5kQGniy7VvRSlGLtm4YxEncvZQ4KvII+IcD2vDJ
do/Uzmn2k7taI7wqu9bXTLm25KxsvOeybjrFauEcr15BligpIEswJ+1dPn0I7yVWatOIVldyxrwu
Z6ZTfla2ZTWrlGy4MKJxRhXD9wW8i/VF74b2701C4fL9m23KZHVeduF3v74rL8MH3sYk9Ob0Dlqk
h3Utj0V7nxpyzPNYxrxvcbyp6s+LOiRmrjeP65+B1827ctUW7LU8C6mme1mHPszLdpsHDLmYWGL9
3cWqCP+UxYNvOT7v+xrww+fu9Ps6ePeYBPayz7H7HpzA+hQ3Bl/tpLJwf9F3Z0SbSE6ZoziA8IZl
UZiYfUZNELvJSgmT57/gO1tuMsp8k1FuKy4CeT1rZGlnDS/9zFZOzZRpfSuN65xgexllkIHVWeQb
RvbdiAhD+m4k9+/5vCma90Hg7TtBs8/DUA9DO2wUwi9BZo79Mt7eMlWTsKRZx3SpeddxUw9qlw87
cV7FLdRfi+Ufm1HzcuhiGnNmgN3t25nj+x6fTJzLGo5PLIDBBnZeXsYp6v151DEO9XK1Pu/3fueD
ifXtchmfmWzjGiTt846hh5862y6P4rU4GzL/MTYXl83QSeSTnlzxiKTe5GH3SGeZsNkpesm2DWVX
MUgOGgzxPG3u3qC/mNjyC/jitJZIxdvTO6OogzNibNzvVLb/5+kQMbkCeKSrPdp5xKg4sU3bfenR
zYoHpboFjcscRtiu+VXzmm97bw5fORt26Z+dji6iUxvFgiakJujd27aW2ayWITs3Zwow2CL3C35L
OnQX5+ju4ob9g4VNEfLxVsT89YbafR9ZmE+eg9s4n73gfJySZXlOOhhKew2kI7HnpcQ3lFC+y83q
0eVseFfkWUlg5wbihujENk1eU4lmPb5IWcazGkddU7kgramWiax0M6ypXGRZU3NblrimEgMMtkhm
tQi/uHHxvHz2ouQpSupklNCL26kobRc3pO/yspq4uB2HxEWHXtwmHcpPYQt23r/oXvPw8+I6nNhj
Yo3B5I5zij0QuhwapVSXi7+K8q/y/Z32p+CFmdGCYI9odPLTZd9bmufqA76xQoyeu6aet0LSuGId
a30pvbFqOL598TDHXTV37+c8Mz9MUhtxBYrnzlq/QaGG3PYjuwDhfznmyZoL8GTtFN+WLszrOC6i
4xrfdKLzrGxsvWX/8Avqxe1l0z/rqdrhlfq2CT/U5e2qjZb2r41sPhJy2U25LqsSegzt58zmPZRT
ZsNhgSnDv72am+nghDTxG9hMa2N+S6ZP86fksr9Iw3Rc3rMwYcMaST0vJb6hNO6zTIy2S+vb4uvN
f8y38/IZ+hSXnBk9ojF/lRfr83DQP7+9jj//J41K2DxtU3lpG6hoSZ78zZ17fgiffhc8cRHXq8Uf
G1IDx1W9vLhZn9hVmQ2cPJiSUyJHNGbyYHpBRiUPJi7+14Mpi6vyJBmnDKZ827NjGUEYRC/OmITB
k3yMz2fFv9xd6W4jRRB+lRF/YCXG9H0YgYSEED8QIA4hgVDUc0EgiSPbYVlpeXe6bU82sSuZKnfH
CUgg2Kzj7+jq6WO6q0p1nLLNE91cXhUS9lP8qtguH0bUuM24vXbT/329vbHwfiT0fhXXj8vNPtQl
3FYwz7J72/vHnQ7nws/BYvRsbNBHFqOu7Pbmu6NFvyUGw6LiVTro+3pecT5Pu0mLm2U6pd1fXi+W
YXl+8SYlgPsr1kUIzUVf1XUVVquby0R++0Vx5X+xuPqtX1b933E/4n+sQihe5MLC/SRPn387xsV+
vqfxl+r0W3Xf6VCrVupaKD0Y2ba2V82Y74mnfE9dFS7ijsdZHML+nI+3AQAZZSYH92QU4H4vVxUf
Gq216Gs9CFuHITR1p9RQD3LQg9Shl6bZz+cV2t7pQdbaDF2teNfU3jeyNtz1MljpTa8fyFX1DexT
mbHg/+9TmZcJT+rTEbnPGtG5tmXxV7QINeNG19wNrNaqaRrfeNU2AzL32c6nMofNXppPBXPEbXyS
yr3sx2xAPWalKnMkrmxzhxf3+JD6JQ5H+akTSz8+pC4zh3os+fLIIaXcZk61WghleU++pupnivsJ
to8iHiZhHj+YnYgZ87iCFFlxOB3grLqMa7sYeW93idfCVduPP4uanHKSyWFoQrBvdz8+Wy67VVQX
5/27JUA09LvP01mx9D4tvd78BPN68xQkP1vG1fdfiebIcNH80bdrPMv0Gm7ZXy7WETGs1v3yJLTH
F4BxeZAugy371eqkuKvYuKcATEGU1TphbODNJb9IfbRrfptzANQhx6XQsTrARAhc6axECOWIETMi
ROIHbFLWbm2hqQyYxbmXbhhaLofQNI8nb2eydX7QrEmZ1VO6uU7ZMAyd5A1jjVbPlbx9I5lLnyP5
ICD80VmjStAhhwHsiWY5JBCeYDNMbehILiE6U0VrJM8omLsDVjhgqu3agnCynE6SwUqC3d5PAhuZ
ZzAamGqwgeG0wcEhdBIMljNtoUASbBr4+PIDNGCywe4QTs28sDg4hE6CwXrmuIaAxTvg1W5sulj8
FrdhN3Kv457q6yg4PsWCDa1qDAtKhscHM927oEyrm4GJQbUtM2l6rhvH+8F6xQ4Hs+VisT7ZgKZn
nqnSVuy3lsuKyiehSI1f50BiWpYmhvCOEOlmxjjYxeRkF/NZz2oCMLUpPAwnHA4OoZNmsANjQE0D
Z/UKAjDZYAfCeYuDQ+gkGcy1h4D1FLBgmRGMBiYaLBgMZxkODqGTZLCQHAI208CZEYwGJhvsQDjF
cHAInTSDvYCA7SQwz4xgNDDVYA7CScZwcAidJIOlMRDw5EpI8MwIRgOTDXYgnHU4OIROksFKaQh4
ciUkRGYEo4GpBgsYznAcHEInyWAtIWA5uRISIjOC0cBkgx0IpxkODqGTZLBhCgLmk8AyM4IN0zhg
qsEShhMGB4fQSTPYgzrFNHBmBBtvcMBkg8EItrycTpLB1oERPLnCESozgtHAVINhOCc1Dg6hk2Sw
sw4CnlzhCJUZwWhgssEOhPNIOIROksFeCwh4eoWjMyPYa4kDphqsYTircXAInQSD7YzBBk+vcHRW
BBOAyQY7EM5aHBxCJ8lgLhkEPL3CyXvzQQCmGmxgOMVxcAidJIOFUhDw9Aon780HAZhsMBjBSjgc
HEInyWDNwZadXuHYzAhGA1MNtjCc0Tg4hE6SwQbsqWp6hWMzIxgNTDbYgXBa4uAQOkkGewsCT69w
XGYEo4GpBjsALlXCsDg4hE6CwanQnYeAp1c4LuswCwGYbLAH4STDwSF00gz2BgKeXuF4lWkwFphq
sFcQnGIeB4fQSTJYObCnTq5wJBN5BqOBiQZHYhCc5hwHh9BJMlhbsGUnJ+CSZR2xIgCTDbYQnGEa
B4fQSTLYCJNzpG53GGW8EoI+WglSkS/xnlj+BQbMQfl7FxjW622DxgsMsE9Pf4EhMUj5yLgfhAlR
mhXUywuJqbaoywsA2uHFhfSh57m0EJVYwvFX39pGe9WI3ujHz2y1PPhu6EJjOGu6YLSyvHONaU3f
MimHZzyA7GaOyRzJ+w8qzvMeyJl0qI9pzkESnOeQQHhCenh7Aw5Sk7snkmdOoNHAZNs9CGctDg6h
k2CwnzEhctp7b3RE93uYii/y1P+vj45dvx0dDzzSfCZdmRvUh5k3F9f3Em/GUm0lMmlCGpThExrI
BdjmY8LJqKkE7RjsQ+uH4EJk77ptvstjKkd+8Ncl48ymThlHrVReanFVLdaxEmmVtN/5uq9//Oqr
6oPRkFfzGLiphaqrxba+5JFWFgwHTE5UiKSbTJFC9vVue6N4pQblQXQN412n/Ytt0GmvjmzQ7zbh
/YRFDrfkUQUi4eKQ89tE5N3d1uXV+bq/XN0tYxkT7wKpi7SeaVnm6XhnyXCnhm366ViX7eDp7ObM
pKdzTm3l/bFdZ9R6LsGHOrXR9oCEtDPLs6pw75tiM0qll+BDNcWaPRJ+zuzMCZlDYs8UxTJKpZfg
QzQl8j0godjMuiwS+6YIlWNKPh+qKWJvz0ixOeczaWwOiX1TrD/alCJ8qKZYf0BC8Zm2WQ+2PVM0
Fzmm5PMhmhL57pHgcy4jiayW2TdFHT/6FOFDNUXZQxJ2poCtRP32+3WETrOLOMSfL7rztloloTcX
/TLtArdWci2MMZ1/m5Ju/LZc3Fx1tx+5ragw3FxtZnLVV4v4kbR4i76MhQtWCeMTeZtM8sNq/ea6
/+TbHeIH7/7m1emIf3/R99fbFIzr84sq5mFcV9v0/SMf7/2Y9vKQl1IzJl1Oq+5HmTl+OC/Chxpl
xuyREHOuZ9qpHBJ7phh+fG6LInyIpkS+ByTSFQtpckjsmyKPH86L8KGaIveHc5m6tckjsW9KTvfZ
8TnlQzryPSCh/MwJnkNi3xSX0X1K8KGa4va7j5oLPvM268G2Z4qV/HhTSvAhmhL5HpJwM8WBFb7m
8t0KvztfXW8SQt9cbYqV15dxIyFuB9di0FL4htfMqKbuuBvqpje2ViKI3sqeh95F6u0QOu4HwSSz
t1+Whs4ft99XLa/b6r1jvvw9SJDxbELQA28xb3Xub2ZEDVKYzVDVqrYFXmoOoWdGBi+ZFIec/Eyr
saUnTZ4Af3tAG96A2SN1Z9Plg/GNguJt0zeKDZ3s+8F1QfjWtkOvlPRt27evICXGalAJYkPoLp9N
CO/tFg6mD1aqrta9FbVsOlWbTppaNa1lne57Z3U1fl/EO1+fb5qh/3u773Xx5uPdPuDqLAzxZ7/8
Hi7iB35NO63bTSZIkWXjC+cyivY+NW6kzlLC5E1l0m3+cFW1cf/wavum9hl4Xf8eVn3FPpLzuH/6
ztaxfGr67/kyuhu3DFMy59/PV1X8J1T3vuXpeb/LNj1+7ta/L2PrPiWBu1usu0Jq8D7rh7g++OrO
jivuN+K2LH+FlgjsC6eXDjDn+PoBxwENf4TDB3vqn7Up6t7tqR/xgLi7p65g8pRactvXHWr73ss1
rBFpP39Qoqt7KZpaqK6pTVBeiaHlrRN7771ABlIWsW/s2bc9Inbp2568OeLxcdW9iQbvjoPUn24q
xS3jRCH+EGSm+M/TlQpzPYlDmnU2tB2zvXTt6Ha4X1TxMk2hYi6aP7e95uXQxdRYLAB7twRjie97
+GGiVdFwfGQAjBrYWbhIj6g3Z8nH1NVDrF2+mfudjRLbm+UyvQjcxTVM2jwh6flueBQfifn4girF
5uKiG2sWbMuAVOoBS8s8IB+qYREnO9XGsl3ByFUKkoNSJqpMQa2P0V9MLC4EfHFe8ZXq19M3RtXG
xoixcWemMv7NoyHitS/L6oFi1OjGI0bFiTXt5qVPListlNoeFGdYUXHYYtdN95HaVfkbv7IeZ+mf
PEaXF6WLqAlFUdDFrQlynahRmSiqDFkjtlCAnULRw9VH1V6hfMOwfKbLLSt0/WPD/oFhC82LJ20A
g/O5Ke2Vqk3EYEr6RJTUlhIyZArNx6aGs/FI07OSwD4biBOiE2s6ekwlytoMUidpMOqYygVxTC07
kywwpnJRaEx1RZVljqnEADuFIvyYygXMp+zcmDS4PT8l9OB2Kkq7we1Z2u7Iwe1pSJwP6MHtqEU5
qMU+z1oqVj5V8c+Lq7hiTxtr7BTkHjQ6jBVrmljRrAqvw5tb70/BC/NEi4Y94NHJV5ebKraqUMXh
UYX6eXr/+Kj1Tdw0Njb01tpBusaMy6XP7u9xN93t8bZn5ofZ1EZcC1Gld60/RqHGvW3x6kgHs1ay
XByuZCWbcal/foqG4yI1nG+U7VurfZDsiMA6Lb+cwOLiuQPrYxS3GH7yFeSzEWUXDJShZBydQ/x3
M9XZPktPSBM/+y8zsXgCJUePkSflsjfDeX46m3i7TwqmdLo12oYSos3KMLot1f/l9n9mu2fPHL0E
zt1Wfkoxr8P5+myIhRBvrtKf/5OiMmaeu33QrNlnQSW3zfN9/PTvsSXO063UxZ9bUiPHVbs8v17/
J5sqvzPl7ic9pZijO1MRUWX3FI7uTFxkd6YySp6mM72gpjqmMxWcnj2RCEInenFiMjpP7h5IQRWl
Ok655kkZNKOby6tCwn6KXxXb5cOIGvdot1f++r+vt9c93o+E3q/i6ni52cS7hNsK5ll2W2//rNjh
XPg5WIyejQ36yGJUlt1afHcu67fEYFhUqkqnpF/PK87nacdkcbNMR9z7y+vFMizPL96kxGl/hfOL
0Fz0VV1XYbW6uYzkx7N/V4uLxdVv/TJePYibJf9nFY4Xue1xP4HU59+OcbGfS2r8pTr9Vt2r0NaW
MVl7ZrjSXSMa7cdcUirlkuqqcBF3PM7iEPbnfLxKAcgoc77mseyH6ebJX2Hdp/0v1jhhBuFlR74s
FNlariZMfxTx1uzQtv0qhs47s+4n3uJDo7UWfa0HYeswhKbulBrqQQ56kDr00jQxVvpVOjWe0mnF
wFpvEiuOyLFbL7YJeNZpxILFlDlcfC+CCoQN3Yn7KchCaHunB1lrM3S14l1Te9/I2nDXy2ClN71+
IEHnN7BPZU5CYBJ0RmTvudHBh2PCU9oJpg+ijQk6z1eLvQSd9Lb4hNYWkBKn4btsYKY+p0xnvBGN
6B5P0CmF7Ifet063vu+ZajveNtKxZpBed1Y8U4LOreQoIEfy/hVYd3TG5JGOzKFDvQHrLEjC8xwS
CE+Q+SMTHT4TzkB0JvNmW69zGoMATLXdawhOCoaDQ+gkGCxmggMXwFh1GZcscZB5G3tgXEhehqu2
H38WG94pJ5kchiYE+3b347Plslul58BVN85sq++++zydH0tvqG5uonzMm6lTkPxsGReVfyWaI8NF
80cMDTzL5Omyv1ysI2JYrfvlSWiP77XirDddEFvGZ+5JcVexcU8BmIIoq3XC2MCboSNSH+2a33Z8
UMdtrZ9jdYCd1PGMrFhFiREfVpE4xEYacKowWTXNcZbzUCYAk3UCmxFyxq3FwSF0Eh7KEThlhtu8
o7sHbO8A/75eX/9afRFXtBvr0nRo1Vdf/vDDtxFqdb24in9Kk+2bVbU53vXLrxCQ4aChk3mhnTi6
vgkRmNqSAtjQUTMpFQ4OoZPQkirdr4aAJ6uHOXF0gnEiMNlg4JGgZ4xLHBxCJ8FgPePwHH6yepiT
WRFMAKYaLBUEJ5TCwSF0kgyWCuqpYrJ6mJNZEUwAJhvsQThvcHAInSSDlQN1TlYPcyozgtHAVIMV
GMGaKxwcQifJYG0EBDxZPcypzAhGA5MN9iCcNTg4hE6SwQbuqZPVw5zOjGCjDA6YarBWIJxBwiF0
kgy20kHAk7sDTmdGMBqYbLAH4ZTHwSF0kgx2HASenumbzAhGA1MNNgqEUxIHh9BJMtgzBgFPlmdz
4vYdJ4pbkOaj8excoMBSbm5zOPuaVHSNe6RBBXb8uk3y0Z14/QOUB+ZL92O+xy3753H7O3jOxi37
Twh7kFD0cxzKxOfVB5eVyhozgUJPUMpMa8niXIxWoyd64yqAeXSZ63jcHhn8FgsqkaBrmfiKYqMp
K0+O3o2B1dzQHDVkKl9vU7tO3jZ+7Nj1HSeKWxpr9e6lUSPKi9gR+1zAS4bnOu6RBkHY8es2yUdt
kvQP2R7sGFk+konvKlyNS8LePIE7pK5YUs5tQVtzMtbXjsZ6WU1VZ6XjU/TI4LdYEEeCrmXiBW0t
wiHUcpTR66ZUm/MoHjEJfOVud528Y/zQFH3HCYu8mkDJPjL2nh1aZSMuLnUNHlL4HmkoAjt+3Sb5
6DVQX1uwjJce5d+XlgufgPotPGeXliI/INgeiuEO6kgm3uoUWCOn1XUlBRqpq4xkUpetbr24PdEl
s2Qzh2xnijMPDH4LBWWKBF3LxCsokNgs1qN2KyPrzJQnQO0x09er3K6Tt40fa7fyjhPFrYCtOhxo
9swruy/BKWsYg+We2z3SYFp2/LpD8knc2msLRtc63n2LW4cL3UF+B8/JuHXIPyH0wRYuT0cy8ZM6
eO0zDfeWch+asjAn0JWrSate8hO9LKpjlD5aO3HsemTwWyjIQ0HXMvGFp2tdBd1qVFRd+kCSbMN7
zPSVLtl18rbxY9/XecMJ43ZYB2VotXsnn6QFmXhZdWDRDPdIQ+G849dtkg/jNr8shyv9kS4DNKx0
WJ6qzJHmyJRY1BO3TFlIqhZ9erhzzc3Ioc8TRV5HBr/FgigSdK3LAAzpbWIRdoyWFi7ehvvqpcZM
X1N018nbxo9dBX3HCeuHHXtDGtZnMWSW0tSo915GqwByjzSo+45ft0k+Ks601xaMf2uRl1/4BNRv
4Tm7tKD9gGB9aHiFuxypoPGaJ07IqaLmlNeg1GduidHWalKMcT7R1ajk7l77PJOJPzD4LRZkkaBr
FTSQB4zKI2P4FW4lsg6jEFILmeRr87/r5E3jB7/C/Y4TbgmtEFuxqtVXW3WZl9ondvdauPZ7pMFc
d/y6TfLR6b7/AcrV5uXf45YvtF74B8//tYLGGX9C6AMk3EEdqaAhRJTVKIloTaYCqXkrqQ5X99I6
wHrF7Zrk5Ll3PPE13yOD32JB4fy6VkGDy/KsdRGs8CMIlet0cOkYO7l8leHuOnnb+LGmzG848ek+
IwCZNmHNshwbjKxtNoCCNO0eachQdvy6SfLRJxeflnN5eDxFjxSLaJexgCgNaiV1o5K0wEwLqKIt
4cLt5WHLKsgZxU5M0SOD32JBFgm6VizSRUQ6EhbJ0RQdQrMU6dNrwPTWYmzXydvGj50SvuOEby25
dlNERPPee4PJuZCZrTWLVr7/1KAPRNjx6zbJR1tCf23BSr7URef70nLhE1C/hefs0iL6A4LLAyic
5kcKPrwbkq6elsyVpgslorWSY+kmtOaE/ESf1tWw9VbWidOGI4PfQkGMkaBrBR/YqdJ8/oPuQdx2
lWqMrXIuMdNXL9JdJ28bP3YV9B0nXFoGMg3JtMowp4ZiIrmOvpBHd7lHGqjkHb/ukJyOW/kD/Pqe
jXFkWY/UNkh30AKQuLaVdBVMCNwTYx9tKbKs8krxld5ZW+5NTtxWPjL4LRZUIkHXahtM28CCNtx7
VJO0ZhOWqVYoZJKv3c+uk7eNH6v7f8cJ6xBnVl+mCr0glgZ9ZOE6PNtshPP+U4M8QGnHrzskH07R
8lC5dKv/P5cWg/xp46nfxHNuaXnx/oDI/MjhKaEeSRxbc/A8Zmq9cJpWPdXaa2JaZVkeJjKe6D1n
7NRrlXyiYdyRwW+xII4EXUwcZyKFVVabUQLKW2WZnmXgCJnkK3G86+Rt48cSUO84YaNHZYPMiyxn
dNapvH79/+AltvAeaWDYmyjbJOevgv6yTOVhfql273vc8qeJ49/EczZuWX5AMD3EwsfokcTxMFm2
KqTVkFMuRVLDQkmRTeuck7E/0bmQMc5VZztxun9k8FssKFxvryWOc581T59LPYrbYjqAc64EFDL5
1+Z/18nbxo/F7TtO+CqntQoSNpvYVl7dldTUnGzppHyPNCjzjl+3ST5ab/EPQA/IGFk+lCO14uIk
ycQ9GXpJOKam0bgNyoaLXlMUxNDWAvFyYkt4ZPBbLEgiQddypN1z1rymo9Rwis6RmWjWukIm+kpZ
7Dp52/ixHOk7TjRFjTLXNRdChtq82lyCKxda1ZfYuEca0PKOX7dJzvcQ/odlfaBcSvh8X1r804Ps
38Rzdmlx/AGB/sAs/4NrL//D+zr/W/DP7+v84sr5kc2j+X0kE88TWHRKmsUktdk5jbl6QqOJawBw
zr/mQieVPOrKJ+7rHBn877ydSY5ruw2G9+KZBzIokmKT3agFMskgSIAsP/ZLKnCqiNP4VAK82bsQ
v58lHskiRd1CQR4G7LVMvBRGFLZcCMIiLxmVp4hAi5m+1updJ28bP9bc9R0nbreSF3UuuVV39AUD
eNaqc9ls5BhOFGbd8es2yfmM3h+WSR9YLrUl+vYhzHDpQ3id5+SHMEP+AcH5AWHiWo9k4rUDEGFJ
yKzJCGuC3nKaBfvQDF1ze6K715rBK4wzRV5HBr+FgkoYtxcz8WNi1zmrW3SmxVImYdU+msdMX8eu
u07eNn7sTOsdJ9zAwMLZJCO00ajXKoVn42yF2MqUe6QBgXf8uk1yvqj6aRn/BPAQ18jykaQzr6Zd
ak69FU9GttLg2lLP3FmG1VLaK7oyVK29iq4zV8oODH6LBGl8zHAt6dwJSp0tzwoSFYuUgnmOoX30
mOmrvGDXyTvGD03Rd5zw2FWW+RiLzHAZTZ3IraCuXriD8j3SYKXs+HWb5Hwd4r8s64P40jHn96WF
Lxy7/gbP2aWF+QdE9odiOM2PZOLnsAEdNblJSbmqp9WspjxlFiy2uL3ithTpfSpKqyc6Zx4Z/BYL
4kjQtUy8kNkSmYurRUsLY6um3YtyzPT1l9518rbxY20b3nHCjF6tY3gTzCgisNbyjiw2FcRM2j3S
YCo7ft0mOd+U+Q/LpA+mS3HyPW4/fqfjl3jOxm3hnxAWV9CYHsnElyZUsuY02xhJVq1prkFJejdX
mZ2nP9EVmQFg8OgnzrSODH6LBWkk6FomHnoZlaBV7DN8X4eU2JF1acj0nyKvXSdvGz9W5PWOE8at
DOswGkxty1ZxrNyZnGqFzuz3nxr8kcF2/LpNcv4q6B+Wiz2ASmDZjmTiV6uyTCkxTklDVk11aUlI
zjJGoTHGH2cmGRBLllHg+BQ9MvgtFmSRoGuZeF/LMqwxpEW3lWkOqkXZxoidzF85oF0nbxs/dlv5
HSeaotxh5moZRxnUZ9bZCwvQqjRoZrhHGrLjjl+3ST7K6NFrC1b4Ur3u96XFPu3A90s8Z5cW+wmR
9UGk0Z/jSJeB2T0b4UraQZOMupILrCSzKtTVJjM/0XOjPtcS03aiueuRwW+hIM6RoGtdBpq4cJOx
ZnhfB0ZuAl0KsQVMb5v/XSdvGj94X+cdJzwlJFOrXK06Zmre25iKKqtq0TnLPdJQkHb8uk1y/uX9
PyyTPJQksnykWIRAV1NYqWeT5GNxgl5GWnOM2vsgGa9fLaOQ5j5HyXrilbIjg99iQR4JulYsMqW0
Uq2xL4saYeS2FmnnukrIxF+XK3advG382K+Wd5zwakohESdnW9mqT1utam/cqVFZrdwjDea049dt
kvPNXf9lWR9y7df996XFP21g80s8Z5cW9x8QpTws/hIfKRbBPGaviCkzW7K+LGnhntaY05tIAfsj
bms1IayLz7wueGTwWyyII0HXikUqDFne2mrhaYO03nPO1OaInfyflMWukzeNHzxteMcJlxb1bIPZ
bSoUZ1iNrXXrYAxd5R5pcNcdv26TfHSQza8tmPIvXikzpE8bxv0Sz8m4RZIfELk8RMJpfqSCpito
GUIpt7USuY+kIJ4kVwOAbkz8RAftBhUmtXzitOHI4LdYUDi/rlXQSPfl2nvP4OF6CwKlqrJpyKRf
93V2nbxt/Fjd/ztOFLfY2Oq0zqIj2yidczXkCY2aGMg90mCYd/y6TfLRaQP/iThOWZsdqW0YxLlZ
q6l4m2kux8SoK/Eaq2VtuQI9PWyjLJ2efeiZOsQDg99CQQUiQddqG4qO6a1atRwVeblNkA4NllPM
9PX7dNfJ28aPFXm944QHYkDGDCBSGrVaShsVJlkpa/ho+R5pEJAdv26TfFSHyK8tmPGlWoLvS0v5
tIHNL/GcXVoK/oAo/CglPPw5Utsw14Q1SBKhlCSlrYSikFbxUap05+ZP9D7cW686sZ04JTwy+C0W
VCJB12obkL1yN6DuLVpalrFC1dpghkz/OcjedfK28WNPQL3jhD/ldCDgRPNc2uJRRBwZcAhwF533
SIOC7/h1m+T8UzLlT6//9OF8qRjye9zahQTUb/CcjVvjHxCZHgbh8nQkcVy5VmnqaXSpqeW6Epr3
NHNx6VUE23yiSzPIWvsYcqKW8Mjgt1gQRYKuJY7XVDItJL3Hr4LyMlOA1lfM9PWjfdfJ28aPbQnf
caK4zQNnxlVL9bUQqZW8XNQzuGEVukcavNiOX7dJPiqqLn8iehiGlo/kSA2wOYikQWWmajZTluYJ
u0wyH8sG/9Eu13IRWi71xBQ9MvgtFEThWnktR9oHglazDiUqmwNSp1xsma+Y6avcddfJ28aPlc29
4YRTtJbVmhZVxkLNRiWuFWvOjVuFRvdIg3ve8es2yadT1B4Elz7l35YWgk+fkvklnpNLCwH8gCj0
wPCU0A8ljq14yQxJgD0NUU/TcknWrUMBtqn9iZ6llawIJZ95KP7I4LdYUIkEXUscz4rciiIhRVvC
VnrOVlsvWAKmt8LmXSdvGz+2JXzHCU8bZpuCoowdmy9fE7UNQtXClnu+RxrIacev2yQfne7LH03C
/VLP3u9xS592B/kPz//1KiiR/oDIT7qw6YMfSRxbb9Awj8SFVyrSW6piLc3cW5esQJVe046gLq4T
fZ65wn1g8FssSCJBFxPHWPrIZrlqeIVbuJs3GW31kEm/vtC7Tt42fuwq6DtO+FNuTqkCo6K3WtFn
mTlzHQy1F+B2jzRQ5h2/bpN8VAP8tOwPpPw/uJn1v71S9j8Ev3SlTP5E+WHxn/JIJl5NppGPxNk4
iUxMakMS5VkwQ24N8lMFileZMEcvJ+r+jwx+iwVZJOhaJl5W67KGEGhUh6iIqkBWp5SQCb++7rtO
3jZ+rA7xHSeKeW1F26QuAgp92rCKhlhLs0zQZzhRXHXHr9skH67VZA+2X12ry4Uir9/gObtWl58Q
JT9yvMc+kolfQmvVqckwlzRFZzLGmbxDbTxAQPSFXntuvRPSmabMRwa/xYI4EnQtE78UyYdnyOHz
yGsU7tRABtSY6ev4ZtfJm8YPPo/8jhO2Wxm9ECpmzcw546hViyl2qeg+2z3SgC47ft0mOd+B72n5
X4/Y6KU97fe41QuZ+N/gORu3Kj8gnoSE4Wf0SCaetbTspCkPnGm2QYkhY3Iu4tNNGOcTPVsbPTvQ
XCeeNT8y+C0URDkSdC0Tv1qpJUMusMJeFr21Sd6wKsdMX2nOXSdvGz+W5nzHCc+0dLk3lT671yJe
61IAlIa9dGS8RxrYZMevmySf3dfRPx6sxNDyoUw8jJJ99cS9jWSImCpXSgSsLhWar/xH/bD0gn1y
9hN9w48MfgsFEUSCrmXiq9DySsrdPKwfLqs0yFCaxExfU3TXydvGj2UG3nHCKdpJpFvuIyNBNuA1
cim5EusSgHukwUx3/LpN8lGTSH1twUQufcq/Ly1+IRP/b57/Z5EXOf6AYHs4hF/iI5n4Uk0BGide
q6QFA1MtoyZiBzBehbI+0UudPbeh0sqJpsxHBr/FgjgSdC0TD04DDFfFUYK4zX0qdBrd2oqZvraE
u07eNn7sNd93nPCnXMUG2ZuNUUDXGrqyWOuuBBkkWFrYH1Bkx6/bJB+lS+yPzHe8+znSZQCWNXNt
yVrpqXbpqbRV0+Q6TUufrdvTw6TTlhAbzROvCx4Z/BYKipeWa10GxqyTF0uWFr5S1kfPDaQSccz0
tbTsOnnb+LETxnecaIr2OUvPzl3Kypq9kHrlWtvUTsPy/acGe4Dxjl+3ST7a/fxhWX91aWG60CTy
N3hOLi1M5QcE6gNKmKg5Uiyyio+pdaYhONNomFPutNKiOWtjzd7XE711mAoTtOQTDWyODH6LBZVI
0LViEa2aF9EiDuPWazEwmDoqhEzylRnYdfKm8YNx+44TxS2VnkeG3odUKuzgzXFUpMq9TYd7pAFB
dvy6Q/JZ3JI9rOiVOPket1eau/4Gz9m45Z9xy+UJEX5Gj1TQdPHhC1Zqumrq2mZiZE3sYCJ5KWl9
oru04c6cs52ooDky+C0WpJGgaxU07D7q8FlL5ajLQFfpYjRWtpDpP8WZu07eNn7sCvc7TrjeFiZo
lYmna85rkS2fkLsXQxe7/9QgD0Dc8es2yUdXQf3Vxp2inoYOR4pFFMsCY0zsqInyHGlRLonbYG7M
uY7XFM3TndExyzoxRY8MfgsFOUaCrhWLwKIGg2jUBdFBdmuv/zW90IyZvoq8dp28bfxYT8N3nLBY
BDMvZEbV4s1dSUmt5yLaBAbfIw2F9ybKNslH9cP+2oL5tStc35cWvVA//Bs8Z5cW5R8QWB5IEP05
jhSL9OmKNkZalSgxQU15VU8CfeQq2EDGE73mTOStoeYTP+WODH6LBVEk6FqxyKoyhoxGJbwKWgnM
emNS55jp67Rh18mbxg9eBX3HCbeElYu2zGDWEBoLOHqZMoGZOcM90kBWdvy6TfLRTzl/tc4BurQF
+x63V+r+f4PnbNwa/4BgenjJ0Z/jSMEHkkn3NhJkXMl8jdSkj9Ra6YbSlAc/0XV4huqui89c4T4w
+C0WxJGgawUfo7oiijUM+/3nucTHFMiTY6avuN118qbxg/3+33HCfv8DcCmJkmslrrVxg8oDm1ot
M4hb5keGsuPXbZJP4jbDnwAfphhZPpIjXZLrnGaJaK7UkUta3EuiMlcjJQZ7TVEvoL0tWFDPnDYc
GPwWCwpdeS1H2lB1VEVkCruD+JJeJkIeOWSyr/rhXSdvGz+2tLzjRFN09TryaDKkjUatFIemqwKI
kOVh958a6AEoO37dJjn/AOa/LPsj46VP+belpZTP6/5/hefk0lIK/IBAeqCEX+IjieOF0Gu3mqZn
SLnMlipCS1Rqo1bBdeoTHcDyklZAxolawiOD32JBGgm6ljjOvsQMRm8jfF2QdY2KWvsYIdN/ugzs
Onnb+LHE8TtOFLdQZhaZrWccmTJX1rZcgcmHeaN7pIERd/y6SfJRw7gMry0Y5ktddL7HrVyK2+s8
Z+NWfsYtP+k0jNsjieMxVu+5YVqsnIouSAWyJDQgURnaCj3RZyltQh48/UQt4ZHBb7GgMG6vJY7V
uxNmrq4cZeUUsXfkgpVCJvs6Jdx18rbxY6eE7zjhlrBO9d56BphSfM3JfWRz57qqSbtHGghlx6/b
JJ+0Xsj5tVshyZHlI4nj5cuHTkyupSSAXlOuZSSoRsW7a8l/9DTU3NeAZrmfaHN9ZPBbLCiMuWuJ
49kaoNZRWs3hlTL00ofapBIy6deWcNfJ28aP9TR8xwlPCUm6g6KWybCsGXn3mUcd1PrEdY80lKw7
ft0m+aRM/WXZH3TtgefvS4t93jDuV3jOLi3mPyDyE4L0f3Dt5X96X+d/CX7lvk7Of2z/PPwpeSQT
b3m10iCnjmQpW6fEID0ZqtswBvPxVFEIQL2ukteJIq8jg99iQRYJupiJHyBeoE+E6ENIqG1oazDJ
IiaCr5qLXSdvGz/2IXzHCd/pgD6sYsNRMs7BXqX1pU07waKywonCvDdRtkk+yejl/NrT8rU97fcP
4YXOmb/Cc/ZD6P4DgnNcy+hwJBOPNthBR3KzlbLISE/anhrUhRUyMbQnulvTNkrpg078Nj4y+C0W
pJGga5l4HzQZWpWSSxS3NcNkwTlphEz5a4+96+Rt48eKM99xorhlWLlZm5mYqXuV4dgXjVxLr9bt
/lMDPkBwx6/bJB/9NsY/QX4UsMByPpKJz6Q0WoM0p3taknNq3S1l51KQp+nUV3SZU4PGy+aJPfaR
wW+hoIyRoGuZeBzINmFNrzUszlzQuMrKpjHT165s18nbxo9dTXnHCZcW7Vqd1uTFxSZ5K2jFeJjN
VQbdIw1SfMevmyQfNYl8WfZHufaa37elRfjzuv9f4Tm5tAjjDwjMjxz+lMxHMvFLm0wdPRVuJeXR
OVXXnDJ4tWHe1rIn+rDStC0fbGfarRwY/BYL0kjQtUy8FgKvtchECeLWWmadmbsUDpn0a2nZdfK2
8WNPt73jxJ28VtFRcTopQ+eycGWV4YVL11zukQbCvOPXbZKPMnr42oJ5uXRl+nvcXuic+Ss8Z+O2
/IxbhgdbGLdHMvHgVSqNnIynJofhqfbREg+txGt6q/xENy9jwGj11LPmRwa/xYIkEnQtE4/Lxiwr
U9w5k0oD7Wty0xYy+VdR9a6TN4wf7pz5jhM2U2+aEXIeQtZcJgH0UdG5zSwKeI80CNGOX7dJPnnN
92m5+ENVI8uHugwgFG6zJCZaySZAqsw5VfbKTXJfOl8Zvf7ySGmO88RV0COD30JBliNB17oMVMl1
jZklW3TawNWyzKJcFsdMX/d1dp28bfzYacM7TnjsWsFYF80y2iQ1L6U05UY4Mhv4PdLgtOfXbZKP
lhZ63ZOGa81Uvy8t9vmD07/Cc3ZpsfwTwh9gYdweqaBhMq2skCoTpFGIElWENIUaCqEw0gu95axc
CzU6sbQcGfwWCnKIBF2soIGhq2NZ0DUqzixr+RwlF/aY6Stud528Y/xQ3L7jxJl41cGVmqk21Mrc
R2OY7H3RYrtHGrD4jl+3ST55p+NpmewBTpHlI8UiWh2hDU59EiYRwGTVZ1JZtrgBNs5PD3ubPnkq
K59ohHFk8FssSCJB14pFOuK0OnNbJTwQ04ZDmqNJi5gyfP1q2XXytvGDB2JvOGH9MLIOdsyFZnHo
o6/RBs5GXpvOeo80EOmOX7dJPtr90GvflQmvfMq/Ly3+eZPIX+E5u7S4/oAo9rAc/jmOFIsskgXU
OE2smBbOnMS7psFsCItsdnqiq2g3GqrazmwJDwx+CwUhRIKuFYt0yCB9DlkjOiW0TpnLaFgWxkxf
S8uuk7eNHzslfMcJ49Z68bLU5shUZwHus2ipMtdQmHqPNLjmHb/ukHwSt/zagiFeuh/zLW6VPq/7
/xWek3GrxD8h/JEpPG04UkEDo1ijqmlqWUlgSuqDRyqwJnep5h1f6FKwLKMy+4l3Oo4MfosFaSTo
WgVNFRxtuEwIu4P4rJWzkshcIRN/nSvtOnnT+MHuIG84cTP1uigvna7oY5nNyUvVPY9ZDajdIw1o
vuPXbZJPuoNk/mOlL+EUPVLbULOqgq20unEq2Gby1jA1nDyqSJ2aXwfZOhCWkdKZBjZHBr/FgsIp
eq22wUplbyBOPsJXyswrDsil5pBJvqborpN3jB+aou840RS1DOg8FOpgq1mMmldgH42XFMz3SEPJ
exNlm+Sjun9+bcEILn3Kvy8tV+r+/83z/3xwWgv8gCj6MAi/GEdqG5irotSZgMdKvc6SvPlMxqUw
sAzE/kR3LaTaTZxOXOE+MvgtFBTvcS/WNrgPMlirDomWltLQRKmMPkIm/Nr87zp52/ixBNQ7ThS3
Smx9grrJrLNlXwCU5zCvgxn6PdLgIjt+3Sb5qLaBX5tR9eirh/lZiPfnf/xhOY5QlfdU09/++vdT
AXrG8NlQFPxpzh8ocMzcAZ1/nX/7+1//MsfXv33J/eOffDNc/gTwAPLIMO4b1o8d/C/DDMcMn3aw
RuaK5WPmDug87uAMD7JLfaC+4Rh83k33V3hO/jUM5CeExl2wHY/kp/KYbK6aYOSR1hBL9YmSpjIo
tUUL8IkO3JlqRpxwpmTpwOC3WBBHgq7lpxQ6gvuQPGeUn9JC2kYFmxYyydeU33XytvFj3evfcaKV
xyX3YozSaObSXIS7dEPTNVYeeI80UMYdv26TfFQNW/71gn1o+UgqZqo6Co3kXVpyHJxGK56sMvMy
4bLklUkwBXKZOv3Ej5ojg99iQSUSdC0VAwQdKnnRjlFVXYPiCN1whE52+Jqiu07eMX5oir7jhFn+
CnWY9eWzMhGAOrl0KxV01cH3nxr0Aew7ft0k+exSZPmjRkIvVcN8X1rwQnXOb/CcXVoQf0CUEreV
cDyUn1oorAXTUJAkajPZnJSkUrfWsQrkJ3oR42mLdJz5UXNk8FssyCJB1/JTldGNyIuvaGlxs27T
Z/UxQ6b/HEbsOnnb+LGl5R0nPIwoboCOzTFP62tVEiBAmrK4drn/1CCPDHt+3Sb5pC9VltcWrJRL
W7DvcVsuXLT4DZ6zcVv8JwQ/RMPl6Uh+KncuQsSJ1rJExjMNE0/SNFfuoqbjid5WG30AdC/reNwe
GfwWC/JI0LX8lDcfAlMpW5SfAtPMnUsnGSHTfzb/u07eNn4sP/WOE5Y+sAyRYlQxYyP3xdVYp/bW
qbndIw1GsuPXbZKP1tunZX8w4//gbuH/9lLk/xD80qVIeb3/KhZ+gg/ltqRXg8yp2fCUEVpaDDW5
kQ7tjL3YU4UsGJjNptKZ3NaBwW+hIM+RoGu5rVUJTOaSWSxqPKIwbFivzhYzfbWY2XXytvFjDYPe
ccIyxaKSWamOuUbVXnOpWa3RLLkSlHCiGOuOX7dJPvoZKK89rcqlS4jf12r9vD3xr/CcXau1/IAo
9GAPl7YjCb8yrHdbngTXSoAy09TeUqnA06tIK/2JTjNbrdiHwYkHR48MfosEFcBI0LWEH3aeuXnP
KFHiYBrikum1EcdMXzfOdp28bfxY4uAdJ4pbyd0wc4a+pJiuRRltaOuWBcvEe6RBxHb8ukny0XMA
WV97WtPfTPiZ2+dx+xs8Z+PW7ScEPlDDP8eRhJ+qjErVUl+9Jps1Jy1VE7aMa9U1zeiF7quPNkG6
nWjQd2TwWyjIciToWsJPWscCC71TFLdEOmuXjp5HzPSV2t118rbxY3H7hhPvsbPDXICSm4uOuirM
7tJs8cLlfo80MOGOX7dJPtpj658IHxRmBujIZWYEnzMXSCpU00TgtDp4YvdZG9YG+TVFKRsb5Lz0
zCNmRwa/xYI4EnTtMvNEzjPXlinXqJZEK3aC3hwkZCpfx667Tt42fuxn4DtOWO7URm6lVEIDgr7m
hArIqHXwmGXdIw0llx2/bpN88ohZ1tcWzOVS7ca3pcXzhTLF3+A5ubR45h8QJT+yUvTnOHKZuWNV
Z8Y0hmjSOVoyHiVhU1k0bHSrT3SwUYbkibWe6JNxZPBbLEgiQdcuM0uuuVG3auDRltDN2HnMKhow
vXUd3HXytvFjPSTfcaK4zVgyaUcqRdVFxhAZpl3nAKui90gDw95E2Sb5qAmB/XHDK7Z85N5uHUS6
VJPPslIngORrYppeahPOg0t9RVd1MSHvJZ/IDBwZ/BYLCqfotXu7k+YSgFm9t/BXC2lHgI6IIdN/
WrnsOnnb+LF7u+840RTtnSZ78do8QzavUqd5Y1i9o1O7Rxqy7Pl1m+SjCvin5fyAculXwvelpVy4
ufJvnv/r0lL0BwT6Q0r45zhSLLIKL82WUxu2EjhgyotX8lYJxPNalp/oOioyOPqp8uIjg99iQR4J
ulYs0mDybM5cqQdxW3IriKvmgTNkkq+Zt+vkbePHnkx/x4nftZ2zwSy5F8rIdYj1QsBtdWFwukca
LOuOX7dJPsro2auBVeZLbby/x61citvrPGfjVvQnhD3Ew7g9UkFTFVlH51SlUsLZPJnPlWYFdyRf
0+iJXuri2XB1yidO948MfosFhXF7rYJGl0LuRFpbjtbb0fJcOjvwipj0P4Xku07eNn6sCcE7Tlhe
jL6gNK06qmlT8yE2LRML86z5Hmmwwjt+3Sb56OaKv55cLQKR5SPFIoh1VB+QgHwlxqmp155T6Vzc
F2Nrry3h8FJNpOfZTySdjwx+iwVRJOhasQgq9yqtjt4oSjpP1YEVuyjFTF+/WnadvGP80BR9xwm3
hEOzt0a4WpmNp6/J1K3S4ImifI80aOYdv26TfNRB219bMKRL7YC/Ly1+oX74N3jOLi0uPyDQHh6f
NhwpFimt0momqfK01OaoibvMVDKQzbnMuj/Rlw2BXMZa9UTcHhn8FgsK18prxSJ9kKnluVqYOC7T
V1Ez681Dpv90vt918qbxg4njd5ywfphHmZwXAczOAB2XuNPKQrN4CeIW/ZEJd/y6TfJRnwx/bcGI
L7UY/O+4dbhS9/8bPOfi9sX7E0IfEtYy0pGCj96gz6otDcOaOmZI0MdI4LYaTEZv8ESf1aZkmY7t
xJbwyOC3UJCE8+tawQe1hTi7FpeoqBond/DplSrFTF+b/10nbxs/VlT9jhPX/TcCc1itNzPBXmBI
bd275t693SMNlmXHr5skH71YgfAnkIeSRpaP5EhLo9VWXwnbwESsOUkWT2iCDLoK9/ZH+euY0oa2
SidOG44MfgsFMUSCruVIzQ0qoDVcUQIqNxgtc6/TNWb6Om3YdfK28WMJqHecsE8GWzYFzDLbqAua
zupgULplb9LukQbzvOPXbZJPDrIRXlswvrYF+760XHjX9ld4zi4tyj8gUB6Uoy0hH0kcD6tz6eLE
zj35mJgyd0lCtWW3Qq2NJ/pybLz6kGYn3sc8MvgtFiSRoGuJ4551zGydtXMQt02kUEbWMS1k+s9l
5l0nbxs/9hjSO07cqH2WytA6WW64fOaRRVZXrWAdyj3SwKI7ft0k+ehKGcJrC1bo0hbse9xeeNf2
V3jOxq3xT4jyUArj9kjiGGepXhSSgOS0iDWNNiQ192qAa7nJE13LFBm1Lgc5HrdHBr/FgsK4vZY4
LkPnclQUKlHLQ3OzrJnIW8j0n+aWu07eNn6sv807TlgDvLoqu2RCnpShlQndi1pVVwW7RxrMccev
2ySf9JPD/HrBXsUiy0cSxy5Aq4unWpjSpIIJWu5pzDyXVyjN8iu63JFznrT4xG3lI4PfQkGaI0HX
EsdtzN4B66wLooPs5ouWsXMrMdPXr5ZdJ+8YPzRF33HCsjmfaH00g0ZrTtdWui7Oy6G3ivkeaXDc
8+s2ySePmGF+bcHk2qf829KSM32+tPwGz8mlJecAwh+F7X9w7eV/el/nfwl+5b4O5tfWCahE8/tI
Jn4IMvGEVEajRNw8rdpngsLNGgII/pEuYS7GILbaiRcrjgx+iwVZJOhaJn7OMTtbVQgf+Ka+Jq0+
ocEKmfjrjt6ukzeNH3zg+x0nrPvvgNOljllAtRWV1Ui4V19ZxGo4UbLTjl+3ST7J6GF+7WkVL7Xx
/v4hxEsfwus8Zz+EGEDwwwyiP8eRTLx3Fm/uSaD0BIVzomEr6aCOg8lhjid6rZlGBvVqJ9IlRwa/
xYI4EnQtE9+61g599TWjuv9spJBL88IYM32dguw6edv4sbr/d5y4OLOuMpV91L7mQCfKWpqVJsua
z/tPDeUBtOfXbZJPKt8Q/wT0MAwtH8nEV8qrsGDS1VYatHrKgjl1wzlICgjaH6eGSzOVAm2e+Bl4
ZPBbLEgjQdcy8S5SYSFMnFEmfog7jjVdq4VM9NW2YdfJ28aPZeLfccKfgaM2GETatFjrU2xyNrOS
x6okeo80uOGOX7dJPjq+wdee1vKlT/n3pUU+z+j9Cs/ZpUXkBwTSg0u4tBzJxM8BBWhZGlokZc+U
cDIkynUAWFGX/ETnkUt1L1rbiRcrjgx+iwVRJOhaJp4acm8GJaOGRV5ZDZXrmhozfS0tu07eNn7s
xYp3nHhLWK12bMqraVu1us/CZIUKG/q8RxoE9vy6TfJRugT/9bropfb73+NWL8XtdZ6zcavyE4Ie
Od4SHsnEN8HlSCM55ZqKu6a2RktDrcxBLVPxJ/ooeWUFaANO3Nc5MvgtFhTOr2uZ+A5W5gLXXiDa
Ejr4nKOPNSlm+orbXSdvGz92pvWOE6ZLKnCngdY6rqlFSZuwt8Z9FUO5RxqIecev2yQfZeLptdIT
hyv9kS4DjWbT3EZaAzUxmaSCyImozOJjdYD1x5ZQG2ubFfzElvDI4LdYkESCrnUZKOYkbRaUWaPM
QF1Awst7rgHT21XQXSdvGz+WiX/HCeuHW4WMGbhVozVYCqNBp2zLmkG+RxrY9/y6Q/LZFM34gGuf
8m9LC8Ln3UF+hefk0oJQfkDgkw7DP8eRCpoJUK0OSL1MTlxYkvUhSaUsX8TFsD/RpzuuvgAazONx
e2TwWyiIIBJ0rYJGbDZaBbRreIV7gOJoQjpGzPRV97/r5G3jx+L2HSdMl6hMWr2sDmYwHLmWUmtn
6wWttHukAW3Pr5skH3XORPrjfkx4flSOFItgJhs2aupScgIdnLIvTzppDQApgu11npNXX4tn72cO
xI4MfosFaSToWrHIP6k7k11LahgMvwpix6JKsWM7MRILEFsEAsEOoYzMg5gEb88poKFJhXMqlTpI
bLrV6nsrn//MsePE4pS8Cpkg3WAR44ynEByXLtNfKdQfiny/8GMHYi/j9DPf+4gCnkUSuIKINatW
A76WSEFf69lgCR7oep/kzJWyrWRcAaeuXrZTC55/H/MSntGpBaGBoC2LO3Q353wkWMR5KDZpXkwJ
ZmHHuviS3BJcZvLZ2Rrxhs6YnKvR2IwDDqgjH3+1b1C3384Fi0TBVH0BB6b7rq2XogSYk7guk7xY
RDwU+X7hx66UvYzT67feCYaABn2UXDJ577OCxEKW1Au91rPBGv9A17skp7IMIG1LMMSpJMhtv+Xz
iacu4Rntt+z3EH5V262OIxE04g1lsrA4a/0S1JUlFsyLmiDZFVuMbP3WRielYADEgVPCIx9/tWtQ
f16Yi6BxiSqQyUZNz3EsWTDnVJ2p0md6EUHzUOT7hR9zHL+M03VAcSi+loIQrYvB1BKSmJCroIcS
wmt7G3Q1ah/oep/kTFafW8nWr4a5V/KR2Ab0HAJpXnIAu3h12x8hLYpFMGPVaMxNYSOUA6EBKwPZ
QY58/NW+Qd0+NxfbQOqdg8I2d5eEwUIoFh0EU7tM8iK24aHIdws/uCR8Gad7FRSIrLXKWaEm9iIp
UOGSawkaYn6tZwOCfaDrXZJzsQ20LcGsmRrK26llJu7/Cp7RqcXRDoL96rqHanwktiFhTmohLYjg
FkxqFw01LNn5GDQGMcg3dI8ueguKpQzkIj3y8Vf7BmnPoLnYhuDZbmU6rr1+69lXQ955Je0y/ZWX
5qHI9ws/1m9fxun12wygVjBHdJWBowtiiSB4R9E4l1/r2aDGPtD1LsmpJ9ORtyWYzPWTpt9aM7Ek
vIJnsN9a4/cQfgXuNvMjjmMoOWi1aYnFucWC44XR1EWCqblIRWfohq7FMRJgLTBwBHPk4692DRLs
GTTnOEbgaJEhWOjOt6ZWqqVQta7P9MIB9VDk+4Uf67cv43TnW/ZKmIuRhBjRoGRDFnJUtuKLe61n
g7WPdL1Pcmor9/vLPr4/0x/xkRYsWZOPC4VoFi7oFgi+LuTAGtQkpLKNjEbY2Bw1mYEl4ZGPv9o3
SHoGzflIoRavyJbQa3fXEmsFhx4gdJheeh/zocj3Cz+W0/BlnG7iKcw1VefVgs8pcizsNXEMpSSs
Cq/1bFD3qKHcJTnngOJtCebM1NMt7dSCE0vCK3hGpxakHQS7FUx3xDjkOI4GC1S7VNGwlOhoMYH8
YksA9jaSLb87oKIGRYkp1nS83x75+Kt9g7hn0KTjOJqsMdcC0fX6bS3sPLgcbOoy/RV99lDk+4Uf
i0l6GafXb9XmlB2AQ8glMZZCPiRgYixM0b7WswHZP9D1AcmZfivbEszp1Gl6229l4nT/Cp7Rfiuw
h+BV+/PtEccxhBwhF78EybhAzrI4E+1ijdcqUn0O+YZeq8+SQvIe7fF+e+Tjr/YNkp5Bc47jWNCT
NTGx9E732ZoQFb1hxC4Tvui3D0W+X/ix0/2XcbpbuZRyMZGcLSYAkrPiJHmwUorGYl/b2yCrEfdA
17skpxK0biXrKkxPuJn13CtlTwS/c6UMUFZBUJD+lbKNy/LK3WAROeKJz8V4Vl8W4YCLBaiL9VKX
xGxtMGwNxz/m6qIKqdSRPn/k46/2DdKeQXOeeMzgaoIsLnPPE59zNhAFDNUuk32xKnso8v3Cj13/
fhmn69EDX2rWEmNBLRSluMpGKtlYyBjuNhRxj3S9R3IuSSTKtqb1c3NjO1e788ldX/D8h8ldN94d
BNOq3TW2HPHEAzNWdnXx4uxSgcJiipXFYHIhsLjs5IbunXFRaswwkkz9yMdf7RvEPYPmPPHeYUB0
VYNL3fs6YtmoSVlih+mlSPGHIt8v/FhS5pdxev22OutKZYsJQFWKKzZw4hIsMwjoa3sbeLVED3S9
T3IqbYPb1rR6ab8lY8732yt4BvstGbOHoJWM61XHEU98MiioPixBQBYj4BZK2S/Gu2idTWSk3tAB
xZUQU4U64Ik/8vFXuwaB6Rk054nHIBAl51gpdvqto+IpGi4+2z7TC3fJQ5HvF37sEYSXcXr91or3
LrisKceoEchxSSFAMNHb6uJrPRtY7ANd75Ocivt3r1tarWKv5COe+MBBsyG/iCZdSFxYompcQmVb
g1efXdgUZmVwWcjlgeObIx9/tW+Q9Aya88QrFOMrZ3Ch10RTTiYJouEce0z018v7D0W+X/ixJvoy
TjdYxHlKLqdCjoIpJiZ25CRnT0I28Gs9G5j5ga73SU5OLWRX46de4WynFpi4UnYFz+jUArKDYFrR
dWf6I574IOxjUlpujHnhamRJGfPi1IeIFIzz/obODnNw2RunA8GZRz7+at8g1zNozhPvM+bIzIKo
vX6LHMimWgLaLpN/4bt9KPL9wo+5S17G6T/d5m0xxpXoJSYXEWyukkuskVMw9bWeDYT8QNf7JKem
Fv862NWo9ko+kmXAGBbxkZfMPi7gCBebKyyRCMA4o97KTWHJmXwCJxoHrqYc+firPYPA2J5Bc1kG
jAplzF4dYu/Wo7GWEjlTxfeZXjidH4p8v/Bj+YdfwuknifQSJGr16faX9aop25i8ZCOI1pvXujb4
qT12O8Cznh/gr+AZHeBZexAo/KCx3a+eU6kX/O95R/0zHrBvj10//C5vZ65ff/vN5z9++/1WQfXz
T3/6Pmz/uT+JlX89in3pv17776y5dxbr7IoOGbV/FrtxWVrBdLdSR0Jyog1gKaQlKYSFSGgBybJg
NkIANRuINytUXcqlmlTswAR+5OOv9g2yPYPmQnIoJ7ZQwGAwvdsVJSTvvDPosM/0Yqn2UOT7hR+7
E/4STt//IpKr5ZJ9LpKyL5UQs1cizilH320oKPhA1/skp/wvflvoopta6LbjskyNy/M8o+Oy6A6C
7eq6z3rLkTilCJwKKS01o1k4pbBkDLRYa6paNs5SuaFHY5VSjSnIyEX5Ax9/tWuQ77avuTglQTEc
wVXuvmKkzilzMp4B+0wvZuCHIt8t/OArRi/jdFc1KacEPhRSKVTIW4pWc2UvFAvCaz0b1NIDXe+T
nOq3+nt8AUCv5CMhOQ4pe3VhQRS3SDZuoZBwAcwpu+ysFt4mSET2ASsFHnjW+8jHX+0b1JVyLiTH
co4+QIYcSi8kh6upJmX0wl0mfJF09aHIDwo/1ERfxum6+UIoyXvNUSKjVVd8sBhsiRgyIb3Ws0HB
PtD1PsmpFLq6rW6tTL1i0E4tOhGlfQXP6NSifgdhsf8aisqhkByDUtnjUlyBRaqnBa1zi0LWlHwi
4/SGHkIFZ9UFsAOpr498/NWuQWR6Bs2F5NTirK/Gay29fltJUKwpKGr6TC8cQw9Fvl/4sX77Mk5/
wwzFo4YcNFqIjkPwqVa2jIXQ4ms9G7zyA10fkJzrt2RXkqmcJ02/5ZnbFVfwDPZbNvt+y3ZF05tv
3ZGwmopWo/G0iPrbv8TRgsmWJVdIBmrGXMqG7iuBVYJUBx7aOvLxV/sGcc+gubCaBCFL9Joy93wo
alJhE6tXxC4TvJhvH4p8v/BjPpSXcXr9tlgCAWSOlsAF64XIGQwgORAYeK1ng2X3QNe7JKdC6azZ
ZnqQbslHIkiI2SYBXFjJLzFCXSDYvLiI6iE4ybw10YAEDMk6ygMP9hz5+Ktdg5zpGTQXQaImsiP1
YrrpvVwVNZQLKscu01/ugoci3y38YHqvl3D6ESRoyQeXOCLayJwdmhRqLSShRtTXejZY+0jX+yRn
3HxbyXZlmXrFoJ1a3Pn0XpfwjE4tjncQFlbTb+ZH3PNcgINFvwTRsmDxaSFJYUEWTglNUeu2fpug
pJgoRD/gnj/y8Vf7BlHPoDn3fHGFUQVySKY3tYRSk5KtiXyf6cW50kOR7xd+7JTwZZxevzWSEnr2
RFq9M94JCSNG1GRsFXitZwMiP9D1PskZd4E12xJMZCozcttvJ95MvoRntN+q7CAYVgvdJeER97zN
rNm7sjjndQFNsBQudUnCqkFM0Upbs8spZ4yoIAP99sjHX+0b1G1fc+55jKqeCxpbS+92BbjisvhA
qXSZ/jqCeSjy/cKPbeVexun1WymGYgghaY7AISuTIpqq0WXlCK/1bCBxD3S9T3Im55mF18GshNQr
+Yh73hZBSiYvnpMuJEUWxyEvLMpoSGqksCkcmA1HqDySg+XIx1/tG9SVcs49X6QiYIKUu3fCczVF
rEQMZLpM9sX+9KHIdws/eCf8ZZz+Q1vB5qqhmkQcOEDMFaGm5DUacOW1ng3s8YGu90nOnDZsJdtV
ccoR30wtghNLwit4BqcWQd5D6OpYn+DQfuqtqD/BxTwB/I4nXmhlJCPUd8RvWBZW1O7MdsQRX4yP
xpmyeBftgpDzotbjEpxadFJdTeFmhHcFKtTsFQZOXY98/NW+Qb5n0JwjPsZccsZgSnW9nDYZM5pa
KRXuML2UveihyPcLP3aB+WWcbiQdi8nZZAygCW0pElMgB5QJkiPbbShEj3S9T3LGW3IrmWg1NHWZ
oR0H7dQ4OM8zOg7afZPiG6HvTvdHHPFoJVGQsnhxbilEfiku4II+ZQjAxle7NTs2UatqLGbAy3nk
4692DVLoGTTniMfo2Tq1YFzteUuKOE8SS8DUZ3ox4z0U+UHhh/rtyzjdU9dETgx48RajGPaJNSQA
BleYhV7r2YBED3S9T3Im8YDF142uxtpeyUcc8cCxGvSyABhYMlNcqBRZ0DsWH2s2GW8Kg7eoodQc
/cC7H0c+/mrfIOkZNOeIR9FA2WkE5N6paxBTcqlQUbtM9GKJ/VDk+4Ufu2/7Mk73coU6chwAsyat
CrV4iiawsy57seW1ng2g7oGu90nOpEuzuN08AppyoLVTizvv0LuEZ3RqcX4HgbRahiesVN/b/drf
dv13FM1C/y7VNXG4z7TmzuofLa9kVAy/WP53uPzq+wdbR4IunHiMUmgJEOPixecliasLgIVqqjLY
crOCbI7OYcpDxyBHPv5q3yDfM2gu6KIYa2rgAgV7nrGIGcmG6AVslwlfLP8finy/8GOesZdw+nc0
ayEh4xNxsYlqgkKFSwg2FIWC3YaizjzQ9T7JmYs0FrflNtJUCsl2jPZTY/Q8z+gY7X0DYbeJwtqp
5GeNKG7iNaBLeAZFcYZ3EEQr2amN2U6U8xHbl/AMi6INBP1+uZp4BqIVhc6nJriEZ1QUMjsIolXm
IHainM9leAnPsCjUQPBWM85One+2ojg8L8oVPKOiONxBEK1+DqIVZeJFz0t4hkWRBkK2mlGaihdu
RPHmfJqiS3gGRfHG7SF09cJPWOI/1zPxJ7g8AfzO3uRPHlXf35tsXMSroakLT20rg/OJKy/hGW1l
AA2E23IqAk/FZbWi4MRq+AqeUVHQ7yCIV5yrmVYUOzFzXcEzKorFBsJvNWPnaqYVRex5Ua7gGRVF
9hDEK82FNe5EmZi5ruAZFsU1ELrVDPNUzFgrip9YDV/BMyqKpx0E8WwgXSvKzLHDFTzDojQDLZmt
ZtzchZNGFJ3YTF7CMyiKktlBEK9+DqIVZWIzeQnPsCjUQMBWMypTp2StKBNZxy/hGRVFYA+hq7pn
eAGeukV4JvjMFuHGRbIamTp23LWy88egl/AMtzJuIHBLWQ5uCqIVZSIk/RKeUVFUdhAkK7qpE7Z/
iGKNmcjQegnPkCi/8zYQdqsZ66cOqFtR4Pxq+BKeUVHA7SDQrNJN9eiPpK01jozRyktFHxYrgRYo
DpaEaiJFZuPzDT2kUCBEl3wZcIke+firXYOkb1DjutuG6J++++T77/MPGyNFW0G5Irhj087777/9
yh/f2M07YH0z8bz7Tfngs29/PIr71LkGnK7WOOM6rvIXXCQrzTXOtrNMvH90Cc9oZ0FqIGjrsTyX
LK4VhScm4Ct4RkVh3kGQrDIH0YoykdHvEp5hUbSB4NcNrqK9LMCebh3y81/+CMzom6/mJfN//P6n
AevHCh61U82+OLuiMceKO2Dn9+XHn77/puQXP7uZ+/uP7Aum1fQL5scF84zAAwUPC8y94pjgWHEH
7DwuMMjqdMrL1ODARHjBJTyDtQFm361JVj8H0YoC50+ZL+EZFQVaCHkd3GpgasfTijJzInQFz6go
RDsIvBE694R13XMPP54IfuTwo5ND9QUXuRXm7uPtWtn5E+pLeIZbmW8g3NbULUztq1tRZk6EruAZ
FUV4B0FuJZryHbSiTKQyvYRnWBRtIPxWM6xTEI0oOHMidAXPoChozA6C3KxDZSfKxMx1Bc+wKNRA
6FYzOvfScCsKTvgyruAZFQVhB0F+NXMOlVaUidvpl/AMi9IMtGx+fwRfp2qmFcWdH2gv4RkVxekO
gvxKMHVQ1Yriz28RLuEZFcW3EPA6+Nkw6EYUOzH7XMIzKIo1ZgeBZgVvn7DSfuoW4ZngM1sEhq2p
i07VatvKJqbzS3iGWxk1ELg1dU9TMVetKBP5BC7hGRXF8g6CdDV2CmInysTMdQXPsCjaQNgt7AHc
1I62FYXPe9Iv4RkVhWUHQbqKmYJoRZGJmesKnlFRxDQQtNWMm8ug3oqi57cIL3j+uwjk33l3EKSr
zvXhnSgTA+0VPMOicAPBvy8VzBREIwrhxGr4Cp5BUQj3EHzjmwtu3YlyPhDlEp5hUVwD8fu5M/r/
XaqzZ4IfWQ2/FC3Y4SK4cpAmPn9gfgnPaCtjv4Ngs7KdauqtKHL+7swlPKOiCDYQ7vcYpLnDpFaU
mXOcK3hGRXG6g2AzexbbijJzjnMFz6govoXwW814njpMakRhmNhhX8EzKAoD7SDYrDqXIW8nysRA
ewXPsCi+gdDXEVYzVzOtKDPHDlfwjIpieQfBsFqcgmhF4YnN5BU8o6KweRlCXjdmqxnCqXVBI4qc
Tu5yEc+gKGJ4B8Gw8typfSvK6eiri3iGRdEG4vdzZ6vPiJV+4hbhueDntwh/cMEqZurcpG1ldHaN
cxHPaCvrQDCszkz1/50oZ3fnF/EMi+IaCHwdYTb6shHlfOrIi3hGRXF+B8Gwej/VXFtR/NnN5EU8
o6J4bCDsVjM693BzK4pOzFxX8IyKorqDYFyNmzp2bERxZmKgvYJnUBRnWgi6fXWFuW1+K4o9u5m8
iGdUFEs7CMYVeQqiFYUmus8VPKOiUNt9eKsZOxdB2YoiZz2TF/GMiiKyg2BcyU5FF7SiuLObyYt4
RkVxpoH4/ZCZzTOu7j93i/BE8DtbBLOCMCm82CB0qcRNPuPy9bfffP7jt9//fs/24xcXZn8Itdxq
OPz4wydffbvl0P31jd8b3H9A8NbvBK/fEH6+/fp2i3fLzvvKR++88tNP2zOaIiTo6wJSeSFj3KJJ
6pLQ+xJcMKJ2q/Pfsbckyl/+B8xvfp8++/wfuN/GL27dcgz5+/L1t7cG+3X44cfy/X9BHb/96fek
4p+WH18JOX9/u9jdLXY22Ua/2Fuz/ib/B+V9cCtmqmrCi9p9Y7ukdyN/Idbrf424fTNeLGtPmtGd
Bbw5G4pxMdfgbOBNVyRHk7keDw0e26cfDwoXs9wfFGyIBojL4snSQjbQEnzlpVprgS3WYvTQoHA1
9cFB4VnFboPCf1DeNihMVc3JQcExX9H52kFh5kj9Sq7hQYF7MMr/8WEsqq5M4p27s9bClXHKwdtW
2syR/xU8w5WlOwjGVWBq+9mKcvrC9UU8o6JAC+G2mnFzN6JaUezEzu8KnlFRrNlB8A1i7vJPK8rM
adIVPMOiUAPht5rxc1lzWlFO3w++iGdUFOEdBOOqfuqgohXl9P3gi3iGRdEGQl9Hu5q5DG2tKH7i
iO0KnlFRvOwg2K7AU16mRhSd2VxdwTMoirabKjCvo51N992Icj4B7EU8o6IQ7SDYzqb7bkU5nZ3i
Ip5hUXwD8XvYg5jJXdV/fxj7TPCZeI2Ny67EU0vyppWdTwB7Ec9oKxPeQbCdzZ/eijIxnV/CMyyK
NhC41YzMue3+KQqcTwB7Ec+YKBvvDoLt6uaaayvK6fvBF/EMi0INhN1qxs+FArSinE738TfPf9h9
Nt4dBNtV5/pwK8rpdB8X8QyLwg0Eba+6GjvVh1tRTt8PvohnVBSWHQTTCnMHQq0obqL7XMEzKopr
uw9vNQM6tc1vRIGZ2ecKnkFRwJgdBNOKc4nhW1FmZp8reIZFoQbiD88A4BNW2s/dIjwRfGqLIH88
vT11btK2stP3bi7iGW1llncQTLOhSa0op9N9XMQzLIo2EG6rGZ7L1diKMjOdbzz/7RWTjXcHwTeI
uZmiFeV0uo+LeEZFEdNA+K1mZC4RSyvK6XQfF/GMiqKwg2BaHU9BtKKcTvdxEc+wKNxA/J6rkWwv
W7naB9nKAYHOZWUfLnjQTgTqFee9HivugJ2HsrL/UbBfxflewfS4YJ0ReKDgYYF1X5yuiHKsuAN2
HhcYafV2ysXS4tjzAQeX8IzWht1DMK0KUy6WnSjnr8RdwjMsivsnBJqtZlSnIFpR/PmlwiU8o6J4
s4NgXs3cbatWlNOPq17EMywKNRC/u5M8+CfsFZ+6yX0m+Mwmd+PiFebeEW5amZ3wy1/CM9jKrIEd
BPOKc5eSWlEmgj4v4RkWhRsI3GrG4hREK4o9v8m9hGdUFCs7COaVzBREK8rpd40v4hkVhUwDYbea
IT/ljGtF4fNrnN+IO5veSGogDP+VEfcYu+zyx0ickJC4IRBHhGYzk2XFbhYly5fEj6edhGy6PEmm
+q0erpCdfvopd7tdtssmPFopXAYIZscFmk2QUhafa2zEo5WSg4BIPTKZocS5kJKAzdwmPEopyY8Q
zK5ECGKQAjw+FjxqKfLx4R6Zip1EJaUsPkTTiEcrhdIAwRMEVhV9kLJ8qZwJj1pKFRD3m1JojZL6
6w4RVgR/6fxaYpe4tfp057LkYtcKlDiXrWzxqaRGPNpWxm2A4Ox8hiCklAz0XBY8WilZQpSJwwWs
MqCUguRxLHi0UqofIDg7itBsgpSC5HEseNRSkoCoPTIRqwwopHAARtgWPEopHPgYRGrgfvqTeq4f
f9v3buthm23XdPXu7e83u/4/x84sP9+b5c/d2fnu5sXj2JNLMTK3491Z5+KMFs8emh7QnVnwqJte
ExCtB4uxcoFSClCpz4RHKyWVAYKzY6wInJTCwJDdgkcrhcWQPfoemVyhBK2Qkv3yNKAJj1JK9nmA
4OwKlrWWUsLyD58HnnMeQ9F5BUS4qxNNa5zmsOpAbE3wF3quHF0kapH+67iOYGVXsVkA2cgWn3Vi
xKNtZFQGCM6uYaUVpJS4/B1twqOVEoOAoD5/nQisP3W0AX83/LPP93U+CvH8v0hl8zm75t28VOsk
Zpd9y8X/91Y4wlUmZKjByQdg8TEsRjzaByDTAMETBDbjI6UAVS5NeNRSsoCIU2TQoqxSyuJjWIx4
tFLqCMHFUYLy+IMUoFO14FFLKQIi9chEgvL4Qkqh5UNkEx6llEJtgODikocgpBRgOaMJj1ZKlBDc
I5OwKjRSyuJjWIx4tFLYDxBcHGO5+EHK8oyvCY9aShIQuQ+qQlyjVty6o8EVwU9Zufc0jym5issJ
yuPLVlaXp9BNeLStrPIAwcUVgiAGKUDPZcGjltIEROmRqdixJ0JKRVJ2FjxKKdXnAYIniApBSClI
ys6CRytlSNnVHpmGnf4spSQgm2LBo5WSwgCRqsulDB1AbK1OXIdPdxtopkLFv+0+Xf6y/f361+uP
f15ffJjqy+7eHi5iqnRVrupFoUu68JkPF414f8G1Bs8tEfk2ob9pdNjH+Ma3/e7xx3ov8eP9721u
frvcfLHkx784ekNNNjV5Q/0/u5vD1IHefvr5vtT8hMnxzf5Qr3wIe3om4kC1LXM2dfTrCNRcfEwG
w0CPtZOv3v9++8tmv/u0e7O7PWw3X/6xu/my/82XvZT/we3fHCdpuJqbw/7jdFTA258233SI/gXy
H0cnm04Q2OymT6k/Dpv+lxfTX94OLFydx4qiyWbDQNbJgkfbVJgERNtSdQE7Jl5KAZa/mPBopVQ/
QHB1FKDUl5QCLH8x4VFLEYOh5HtkqEFTJ0JKA6ZyTHiUUhqVAYKriwWKjJQCTOWY8GilxCAg7uYi
Ka0x7bDqsHlN8FOGzU83vEmuihZaFK0MKS/6wHPWcn4t1QGCq2Os/peUAnTnJjxaKUwCgnpkMrbE
X0oB5ktMeLRS6gjBEwRWi11KAeZLTHjUUoqAiD0yBauwPZdCSDVMEx6dlM47QHB1FStXNUgBXrQW
PGopVUCkHpkWoVS8lIL0PhY8WilplMLNeaxclZSC9D4PPGdM2nZeAcFbn51vfoBI3ocno/3d9X76
KNr+8unTb18+3ODP98dQXX7YT2yHEnZ7erPzpRyecVUXFwtak1JrsKZjbMHzCmwnGDyxSs4DZYrn
oKxYnNehVMe5HmXLtALbCQZVcSYfzkDZCIvzOpTaODc6yhbSCmwnGNTFOSQx7pWU/zwS/bT5DPfA
vrmZLjENcI/9dKX8yk9/f/dzU8g/fJh+bvP99rAP9XJ/dUU57Z9ed8rrbjfDNaihRY+FTaTesAmP
suUF7wcIbo6wCUshBak3bMKjlpIExN26iIge9Pg/ZKZWBIcyU7k3dcJmgEUrQ+oNm/BoW1nkAYKb
i9gMsJCC1Bs24VFLaQKi9Mgk7JwmIQWpN2zCo5XCeYDg5hib7RJSkHrDJjxaKdkLiNojkwM0sSOk
IPWGTXi0UloYILi53KDICCkErIcy4VFKIZ8FROuRKdhsl5RCQLbbgkcrhUYIbq5aThYRIfO0Fjxq
KSLbzb5HpmEl96WUsvxFa8KjlVK8gAhbPxESNE4ZpCwfIvzHc8Z5kc47QFBwidc42WPVIcKa4MgQ
YeKK3gXseGjZyoDu3IRH28paEBDUm3qoEISUAhwfYMKjlsIDRJz4MvRSFFIiLf/GMeFRSomUBUTs
kYnJcqIpAqe9m/BopUQ/QETvEjYFKKUApV8eeM5Zn6rzDhApusJH8+CvHZpAkePSBLzywtr75COX
S47In3a5E+7zxDT5/YVba8cuzK9fePFMlvLCasF1vBw7TnTa5U64z1MFp/5uYWwSW+KU5ZPqJjza
aBQaIOIE0aCUppCC1Bs24VFKSV5CcI9MLhCEkILUGzbhUUspA0T0aAUlIQWpN2zCo5VCSUDkHpmK
bSQWUpB6wyY8ail1gKDgmMsKY8V1B7krgkOD3NybesP28ohWhtQbNuHRtjJuAqJsfUA3OAkpSL1h
Ex6tlDxCxIAeFiOkIPWGTXi0UqofIJhdqOHYKpH85Ivz8v10tb8Om/7/L27fvb3evb+YlpD8+fHm
1+nJvbic1qK8PWx+vz3cXO8+HL66+fjx0+a33e3t9Af7r65/f//++HXjK9e9nTauTu8yd/t+98dh
2oR4uXvftyK+u/757s7/u0J/zYR2RZf8xvt9u5fTX4ibhx/oobj711MEP97s3h6O4ZD3r+D0/+zu
7/7nz3f/8/3dTxDNc7u8uuLEe/8A8fUvh8v+R53g9nDYvLu6c9gtbaZautOim4nmQ3/ZffvdtKLn
dnP/Y/ujgCHbA/5w99fvO+Lu+u/eWv7cvbt7nV9NfJ9+OQjG3X5/M0md7ucBVZDW/qwTdlSIeMwY
ySVZ8CgfM6Y8QMTgYoOmYqQU4JAbEx6tFC4CovXIMFb8TUpBBp0WPFophQaIGFw2TbAxUJPEhEcr
pbanEGXrfY9Mwba9CSl58cFZjzznPPmn8w4QMbjaoByJlLJ4d4gRj1pKFRBh68n5BM0cSCl16TvF
iEcrpdIAQcHlvMZZMSsOOtcFXz7ovOOK5AIW1aGVLf3GMeJRt7IsIKg3dcLyK0JKCUsHnUY8Sikl
jBCRXMSOzJdSaOkcvBGPVgoFARF7ZFKFIKSUCPRcFjxaKbEOEJFcxg6VkVIS0HNZ8GilJBIQqUem
JAhiLiUASixodErCKCSSawwtnhatBPgOtqDRCSEvEHhqqehB43Mhiw/IN6LRCYllQIjREUFpxbmQ
xTX5jGh0QlIQCLnHJGJL6+dCMtDXWNDohOQ6IFBwJa9xYsm6w4EVwU8ZDrTnhgO5N/IUoenWeQsr
wGDAgkbXwkoWCKU3ck5QJm0upAEDAQsanZA2IsToMnZ6gRAC9FIWNEohRSDUHpOCZeVnQpavlzCi
UQlJlAaEGF1tEMJcSAS+dC1odEIiCYS29cl5Dw3f50IW788zotEJyX5AiMmFACV5hBDgkbGgUQoR
j0zwPSaEbeWZCeHFq/OMaFRC2I8IMbmIzdXMhYTln/4mNDohIQiE0GOSGErszIUsXoBmRKMTQnVA
oOBqWeN4ilU//dcERz79Q+iNnDMU03kLA7ptExpdC4skEGjrE7oedy5k8Wo8IxqdEG4DQkyuYKu2
50Ly8k9/ExqdkFwEQuwxaVg9mrmQxRW+jWh0QmoaECI7T1DScC6kAY+MBY1OSGsCIW09uxAhhJmQ
vHjf3SNNOl/GLkc/IER2xFBKRwgBHhkLGqWQJBC4xyQWCGEuZPHyZiManZA8IkR2CTszdC4E6WUs
aJRCikDIPSaMrXCcC2nAp78FjU5IqwMCBdfqGiX21/30fwAPK4Cf8ulfn/v0z72RF8MiHsUDn/4W
NKoWVjwJhNIbeTVcpVkC8B1jQaMTEtqAENm1DC0TnQtZXCPJiEYnhCRCvSsAXSCEmZDlJ50a0eiE
cBoQYnahQklDIQTopSxolEKqQGg9JhHLs8+FAOsqTWh0QmoeEGJ2ybDcfWnAWMiCRiekibEQ+R4T
JghhJqQuPlf7keaMR7bWOCLEjB6lOxcCrIAyoVEKKQIh9JgUhlI6cyFAL2NCoxPCaUAgcr6uUcN4
1U//NcGRT38KvZFXLIs6b2FAt21Co2xhVSDQ1hfng13Gri7es2ZEoxNS24AQi2kKswIroExodEKa
RIg9JoTtdp8JaYvPzzaiUQlpwQ8IsaBJQyEE6KUsaJRC0oDAzeUjq+uTD/Hl4leN/LISX+rL6u6R
ZNDTXQ2/XE672Kv3eFJ5r/vLBhf46D2m1y7Li9XqLqtUy8cuVkI47WKv3qNCbUFT3XOYvHzAakKj
i0POA0IsLhOEMBeyuNyzEY1OSJHPPPeYFGz58UzI8iMwjWh0QmoZEGJBM7NzIYurEhvR6IS0IBBy
j0nDlh/PhARPy6cSTHBURjruwEDkQlujpva6Q9YVwV8YsgZKjlqr8cmQVXLF6jxWZ1q0scVVeYxw
WvH3KYn4Z+GXEG/v7qUYghGmV8PDEYzPcRjCafrDYfPHWwx6xpRTrpPcMCMkl0EoprQeejt6bnBT
+friALW5i83qTdzwPLtfEz7b32EI73AF4M1YNWz8HQ7f8Y9ne/zieJQcD6G++WDIWalG7UfFewcq
CKFHoYTB++nHoIJyc5ddy50ucXla6Jjlj23XwKGWN0alIISyJRAZmhBqbtdKW3Qd/hSn5PBjDR5q
jCxPQWibipFj2ILOf7mP0VPuL7/Ig2aY7CekzIFheeYy/CRlneFQ0gge+sLjsxfZnWESBZaOhX+p
FP4v5cF9n39/tmMbh8f7EG4++yStjKlXQ+FDGS/Q6P9EprQcdmGiPFU0/SlLvN0IZ9JeBhNgIJdN
irXMZJVNaXvTwunhn/0jpVPKG+fyayWwxjA8wGjdyP++VgpcCWEsZ8ZzCM7yTkol9IDPeuKv5PDv
1UqKt8Lk1wUzl+c9tFv+/GgNHOr45HgKQgpWApGhSf6ohTiQfKbW8ZPDpSvZAaAYJsvukuZsprkM
P0kCWylShsUpw4qV7JFRDBMFRrCkOa3zzGX4SRBYtIzzlGF50jBfftMZzTBVYM5S5niuuQw/SQIL
DinD6rThogwmGCYLrJPmlMszl+EnSWCpky1VnzQsCjM42zBVYJHMYK1MnrkMP0kCg0xOQeG04cIM
zjZMFlgnzYHIM5fhJ0lgA8nO35w0LAsz2ADkGaYKLJMZbHmmuQw/SQI7kUwke9pwYQZnGyYLrJPm
HM8zl+EnQWD51OjqThpWRRlMMEwVWLGkOS3yzGX4SRJYpFZeBDu5LqcKXlxzNGzzDJMF1klzjuWZ
y/CTJLCSLmX45AxH6cIMzjZMFVizpDkDeeYy/CQJDCzp5+kZTtFpR4phssA6aU6ZPHMZfpIENkak
DJ+e4Sx/8T/RMFVgSGawFZnmMvwkCWylnt8TK9hLM5zyq2B1NOQET3l4eioFhU0l2zA5kjppDkSe
uQw/CZFULXPJQe30VMoUNRWCYarAhqXMcZ5pLsNPksAi3UZPT6VMUQYTDJMF1klz0uaZy/CTJLBU
yUHt9FSq4AUFNMNUgS1LmjM6z1yGnySBNUt2haenUkUb/hTDZIEhaY7bPHMZfpIEBpEsRE9Ppcr2
sQiGqQI7njSndZ65DD9JAicXdwQ/PZVyhRmcbZgsMCTNOZlnLsNPksCOu5Thk1MpXXC5O80wUWAE
S5rTkGcuw0+CwLpl2qYMn5xKaVaUwQTDZIEhZY4LkWcuw0+SwIKZlOGTUynNizKYYJgqMOdJc9Lm
mcvwkyTwH9SdWW8kNRDH3/kULZ5Aoid2+Q7KC5dACAlxSiAUubvdJCIXM8lyf3fcM5NsMqmsq8Yz
CTzsKskcv6p/l91dPspKCwxczHCMrIxgMpgtsEVxDmg4gp8sgbVH/SxmOKaimvoaLGlgrsAgUVyw
NBzBT5bAFlCBixmOgcoIJoPZAlsUZwwNR/CTJbDz6M28mOEYVRnBZDBXYIVGsJdEHMFPlsDeoIHk
X4MX68WBZ5e5/tnx0t2ruFj8lh2eCiL62Amhdd/34s2rCa21pvd6lF5570GNOinpnRK9kGpMyGrC
+eXl9bOtKDSzEHYuxaOrVdnsQ5C7NpEdv/axYXYmAHZtGEE7RqRnE73DTCymmkZXdSUMMPdSaInh
pFI0HMFPlsCADn5BMdU0uqpVMMBsgS2Ks4aGI/jJElh5wMDlVNNURjAZzBXYoBGspaDhCH6yBDYg
MXA51TSVEUwGswW2KE4ZGo7gJ0tgK1A/y6mmrYxgMpgrsJUoDhQNR/CTJbBTqJ/lVNNWRjAZzBbY
ojiraDiCnzyBnUamucHseJrbzrxB+9xyTusqmwoZzL2SDm0qQUgajuAn40q6mZAeA5dz2orT53lg
tsAWxXlJwxH8ZAksjcPA5ZzWV0UwA8wV2EsU54g4gp8sgcGgz73F2UlTNwvLALMFtijOORqO4CdL
YGV2sq1yve2avK8SN2Xboj7/+23X6821Q952jSpjYUtlbquCfrQsojP7rhtmccjVl1ZFdY4vbs67
PNr0uorO383VyR+LqVTo0UF+z8Hi/KDLVzRdDAfeB6n06Nugw9garWTbj9G3OvQmdmAl+O5ARADV
S9v6PplWp+x09L5v9TikpLxV+d8asjxn79XJ8C7u8daFg9Yex9XhfytHm+YDsVH2Wtw5trrwRjoJ
PzXNJ4/fuPqO6X137/r0x4OpAeQX1xXh7r2K+rP1TvH19//UfLr6YfZbPL0+Hi/n64pQ4zwX6k0X
+Zpme3PLE1neXOFqOMoG5R+z7fmPS8NyMbvJZdp7d+sEozjtU7Le/mWberOT7c48sl1K1fyeh11X
Rwn//ffyiMWpgP1an8OpRaxbyur4xaPbeFqd0tL8tTTtn2bVt1spMLTWooSuQU7F2JxRodcmuTCs
vLg7/DK3+Kz8bJGuj7OUx79eLu4Ov/wP2LpRjW19SE/+sc2U9Qmduev6Zvl17yy/7b1m+qaJPNmQ
TXj/9bk8pffip/Fkt51wW0b2XfP8OtdDPMlt8XQxncP7yypIb2N20c9Pr66ftbE6sW1jfbrHuTX/
+bue7b3hdz23bj3ue95vCPeQg+t4NZwufmmXKfTjjyz/jH1w9cpW/ZufSbftDfPNEXwrxsuFcvbN
b/34s3pmzp1ffzOfZs/O/liXgFtlActpthzZy0KD5Kqw+7DxdYnHxU3f56fq8SZXWrx3esq6a90t
+3H/e3n1oPvNh42VOlXMIoBi6Rlulc3D5mp+OQmT7S2ZlG8zRvlgQxr9KMX6lsiv7HnYxC63wSlS
sig5Ts4Pm23U2LoK5qPr81U6S3Gxx2ql2VwFektztzgTfUW0Oyo3vJEdK+tqsuNsWH356S7P8J9d
xmHS5IO7n/NgxPl5vBim0dHD5uBmke8I+e5w9cfP01FB7a9N2w5pjDdn18dx/vPiKP+efs/J9eq3
ts2t8zRdT0Mai8tcYfLkVS/yW16dH0ntheg6aHstUtu7FFrvrGw7I1TX2eBDp5opo1lmum9hTge3
n2ru1lXVvNmhYcwxm2z4Y3PCTGDBAUCpSu2gT9JE3Rrju3ZwwrcSnGpl0k4rUBrSmDuxAMFq6MIA
VtCrUlO+/G3cIY05xKxKfXOxLhEtbOpNGLUJFju9NPogIzgTQ+gxa6S47RSK8j6JRStTr9+6YQJW
m9pqE8AkFWHwOplBOpBDgs6N1o82dO+idoMrqPikDXuvTu1Qg43Y2uBSpegh3VWKzg+M5xt3yQ0z
ttcNGebq843y4nhxcnM95MbxfI+rO3dkfcrQ3l1pLvLzTTzbl0d3RygtC1GTEqnLi+Xk4dFT8eJf
wqjby9Cur0s2703ChWobCYkrx+wh39meSErvDXRWZ6TZ9+B+2OGI2134/5XNe281sIRj68PiXuu7
xR5PD2yXF4fNAyGbeHWVYu6f92zJ/Px4rcpywmBfXcC+vLhrZvPzpjS4jttQ34zu2UBpMC9hBjIy
hJkBouJ+TTaDcl12YcmDO92U3D5rdO/Cgx1EVo0Zj4Zr1zI+4xDdjh24G2++uXjmgeY6Rxh37L3M
ck3W6x92MH2D3WunzEr2yg4G+qTFxjTTV2mZeU8a304vvbB9xKmlr9df9VG+m78zfVuzfhpe2fBw
fon0AXySKTtv1PZ30v/oNNOOnHpyoukleoAaj/g9wK1jO+sCgoGtzf+vTwVN3qmtvbubaLmLqjwx
tBy0wCeH8GN7JiPudofzjXi4VOmjL29XTm2sWrr7UDt9qgXpQhu8G9tkBiFCP5gk1MaqpTzgcvnb
cb73/3L4tOl2+9vb02cWDOneKQKp7wfphRhlb7YY9gtOFix8I3Hz3IIhvezJBWZqswYLF3SJXQQY
nR372GtT2GsohO8E+K6XY+yEjb0feiUH3/vemDG83MkF2WUpbI3LGxMHrmr31NIcV2MOc7ogm4sa
IX2NEQRN6FNb2RzwmCaquMXLmZojtzhgruxGo7gQaDiCnyyB8R1lqrjFy1WVY+SAuQJbgeKCouEI
frIE1ga9ssUtXq6qSiIHzBbYoDgnaTiCnyyBjRMYWFF3PMth6JS3MX9Gv/lmNoQolADVjzCGMYle
Rt2PIvk4GuGdftmN81kK6+Wupdi8WlWF+fZjIjd+vUANC2HXhhG0Y0W6s6iJxb12ztcctM0Bsy+F
Q3Fe0HAEP1kCe+UxcHELnAtQJzAZzBU4AIrTRBzBT5bAwaBPVcUtcC5URjAZzBbYoTjraTiCnwyB
5Uw4jYGLW+C8qIpgBpgpcDYMxXlHwxH8ZAksvcHAxS1wXlRFMAPMFthhOBBEHMFPlsDg0aZTrJri
ZWUEk8FcgSUawUpYGo7gJ0vgDEHAuphqelkZwWQwW2CH4oyk4Qh+sgQ2aEvVxVTTQ2UEG29pYK7A
gEawBaDhCH6yBPYgMHAx1fRQGcFkMFtgh+K0p+EIfrIEDji4WDXFq8oIJoO5AitAcVbTcAQ/GQLD
TFg0kIoZjldVEcwAswV2KC5oGo7gJ0tgkA4DFzMcr6simAHmCqwBxSlDwxH8ZAmswGLgYobjdWUE
k8FsgR2K00DDEfxkCawNCi5nOKYygslgrsAGUJwLNBzBT5bAJqAttZzhmMoIJoPZAqMRbKWl4Qh+
sgR2Gu38yxmOrYxgMpgrsAUUFwwNR/CTJbBHi9CacoZjKyOYDGYLjEZwACKO4CdDYDUT2mDgcobj
qiJ4AlsamCuwAxRnPQ1H8JMlsLQOA5czHFcVwQwwW2CH4kKg4Qh+sgRWCo3gcobjVZ3AZDBXYI/j
vKLhCH6yBDZSYOByhuN9ncBkMFtgj+KsouEIfrIEtg4wcDnDCZURTAZzBQ4ozilFwxH8ZAnspcXA
5QwnVEYwGcwW2KM4DTQcwU+ewFbXLKnbKF5IXlqJm7L93rAHa4J3sBD42csX3i6azuULUW3KS3q3
WnQ8UfOVTTHbpSF1g2DXGcjWhfIe3adojxcbT296oYXGeiYcfZmr9BoidNr1Q19YaAy9CsZ4PQ5u
cGEAJZwelPUJBgude8GFxnomPb0H2HAZ6ZBC3Vx9tTnM7jibixkBssoIgiaMTlrPtEQTguIwVJBV
C+IYYK7sUqA4pWk4gp88gV2oud4bd0Fyu8dMMWL77Tn/97vgkFZ3QVQXuf22G2qNtuLGRNQwrTHD
dlmorWhXDssBHPS602Dtzqu1FfmoLuV9UnzzXkiXe/tp7+3TvfclYYRxjH1qexVlK6AbWityuHcp
aDPILsRuREUq10Pg2rozkT74iOkgtglwPE1nQ36ayQ41k27LO8XdZs4tJaE39B0X+3OYwU5u/2y+
Vbm/iWlhN3XcCAXmWHdS7/ReDAuyasfJDg1jP+oYzJxg3G7MIejEuoDZsNp6jSvDupvTs2E6u+bm
9+btg1dxfjC/uTjILX84mAo6Lv87/iXNs0GzD77r5c8/vN28Pe1JOfr2288+OhpVL8fRqjZpKVot
wLbBdLa14MCFPvk+2zS/fFi/sVlcncXFybq0Y4MWeXy7edXnQgaHsjlP58fn8fdDAxq8X/6aPfo5
Xa//si95VjvcVwIdX83Tqic+Mu832ZojI6H54vSD93OJhpj77SOz+m2eVmXs8ut++gtunN+BcT9P
NbnWndJtGztssjCLZQGH9vxySKvrery6xu3S/sY07VrR4/PTrsl+rP6wQt396fQ8P6I2pIBo2nk8
nwpBP/n29esz8fm3o/2zafvzIVuWmmeIo6YdU8wNKy2a/ONZzPLkvy1va8dXl/PrRmZzVu9f/Z59
z138L6uKyT89vn5mprXdwfVbBtdXU/H4H6XRUjrbgFbWe9H87u2xgjZ2pyjfSPUDv96YeVRj5MOT
ePFz+vaiv7zMQRTz5vG0KgWAQkFUOz11OKtxq2XB/g+//HZt0qKJF0OzehpZ11ZcFRbCLbE/0Eur
MNyeCqsIDb2yKaVe44+hIagAQbShh6F1zsbWAqT8Xwwu2OQ7Z6bnqFyntHkNwBU1O1f0g4+2EdSa
+nieDLnHb6bhsUblb7hO89x3Tnf/uGim41fU8viVd05enedrsdT9XcwoZ6pL4D7Ms6exzZUgGyn3
3Yfa6VOt7EVqjYp9q2VQopd9N3p7m3KbKeUeHqbccuyMMZA/NYJr4xi7dtB6bEc1mlGZmJTtNlLu
GPvk84utseOQQUPXhpDhVvqkolPBJtPkyzocL+s1Hj7ZLJ3RtdcOHXi+L1eeUhBdHO0g3KC6zYHn
ja/DjPTFG96TtNuB59PF5SRhzlhuB575+h8x9Z+kj12+jzyhvp0JZdAgxQakTLCdsB10IgyFPcKj
jaobXO9lZyIEaQYRvOq1hCH1AC84Dm1nUusalzefg6FqhrXaHG6WALgRJtQYQdCEkRvYmRboNHBx
tV6oOmefA+bKriSKU5KGI/jJE9gABg7Ufd7eWmWl8bErlQsQsht74UZphAud8VaMzgTtVHSjlt69
cLkAO7Ni51JsXq2qc/b3YyI7fi1qGIhdG0bQjhXp1mPa2eKyyVB1zj4HzL0UGu1KnJA0HMFPlsDO
WgxcXDYZ6ipFMcBsgS2KC5qGI/jJEtg7NJCKyyZD1Tn7HDBXYINGcJCKhiP4yRI4eIGBi8smQ9U5
+xwwW2Akgt1MSCKO4CdD4Ax2aEstLpsMVefsc8Bcga3EcFIYGo7gJ0tg/IhpW1w2GarO2eeA2QJb
FOeBhiP4yRIYNNr5F5dNhqrj7zlgrsBOojgbaDiCnyyBlUL7pvKKnKrj7zlgtsAWxVlDwxH8ZAmM
F4Kw5VSz6vh7DpgrsJcozgcajuAnS2Bj0EAqboALVcffL8GSBmYLbFGc1zQcwU+WwFZXjVFtrJ0j
j1WiphhTO2b9YGj//zKeT15IniWqX3eAzH6s60P/T/RCdSnOJBR1eXq6416hbSWVS27sxt5F9oSH
mzkBhcv3Bt7dlYvT8bh5krhK9+EyTbOR11P0TfPbq1mUNfkwv5ynB6fXp2nCJ1ypVfx/31jXag1P
NFYnVa1EyBlBMXv6znrq+uLmvMtjuK+nT/9urk7+WExnLBwtjzhdnN8dwJO/NxtkulaBN63XUrRy
ULFNMYU+AvTRpAOhlAfXTYI50eqQVffRjq0YIIrRSQ9pXEOu/7jKM2CnZ+ndPfueHc4xeDuB3Hwg
fnzTGUNmOkPCip+a5hOz8UazcaTT8l2f/rg6iMJsHkWTX8Udq+5pnj71Az/qw+AnRGQL6adJ5Pfu
yRvuiR+I0DUHfhg/E/XrFOpOl9nbBcq+hWrfJtV5R8BD6lXXSdkKpUM7+mFoo7Vj243QO+sHGJLA
jJX1d2F8wYjAFowIyoIRPwMl9/DItJsFI8+0R+PegpHpYuMqVd/OCetFRNJGRBNs3/Efn7KRVnHW
i9ynbW5UjNfPtlHx9XqRp8RXSpBzMMjfLFIaRFL2zXPEvu/N6G2Ig3dZiOyYVUaqNAyum2x7weUi
fpbdq3H5YRYchKw6kqPaHN4YwGQuagTYGiMImjBGBvzMYuaAK819BlFXa5QB5soOgOKMouEIfvIE
DvShl43rjQy9kNs9ZoqTsrb7320290x3Rnzo5XtcItjDc8Suhl6eRy9El11keqShF+8A+j75ZGXY
4tnBK0F5dsB5m08PL3yiWpgJoC9vFKOXg5AudjEWnhh8FINXOgXtRQidcSIboGUYh87KlF7wiSHM
pIIalzc777p6zdXmsG9dDjXC0e8g22nCuKGFmVLIDgzRnOd0Ovd8f6/3JMeLPt3+LdvltVdCjWM2
y/29/vPxfD7kIYUPbzcwjPOUt/N89dVHqzz1i5ymTrJoL0TXQdtrkdrepdB6Z2XbGaG6zgYfOoWa
6dAbvize8FXVJkYGmBsdymA4rSQNR/CTFQfGOAwMRXDd6kAGmCuwlhjOCknDEfxkCeyEx8CqDK6a
FWWA2QJbFCeJOIKfLIG9thhYF8F1qwMZYK7ABo3gAIaGI/hJFdgeiixCCBjYlMHbRzAPzBbYYjgp
HQ1H8JMlMFgUbIvgitWBPDBXYCsxnFaBhiP4yRLY4k3HlcGVEUwGswVGI9gJQcMR/GQJ7PF7jS+C
K1YH8sBcgZ1EcSrQcAQ/WQIHjT6+hDK4MoLJYLbAFsU5TcMR/OQJ7PPNPM3nl/P7YC/ugU+ur69+
aj6Jp2cr6XKinJPrT7/55suMWlxdXuTfpvGpm0WzrIzw40+PQfKJItu+PIRZsQyRB+ZeSS8xnBSS
hiP4ybiScgYCMHA5lapYhsgDswW2KE46Go7gJ0tgpdFAKqdSoTKCyWCuwEGiOCdoOIKfLIG1Rq9s
OZUKlRFMBrMFtijOEXEEP1kCG7wrLKZSUlRGMBnMFDgbhuKCouEIfrIEtjZg4GIqJUVlBJPBbIHR
CHYKaDiCnyyBPaDgYiolK47m44G5AitAcSbQcAQ/WQIHrzFwMZWSKtQJTAazBQ6PcTAToGg4gp8M
gbMA1mLgYiol9fYj+zwwV2BtUFwQNBzBT5bA4FFwMZWSZvtS1jwwV2AjMJySRBzBT5bAeCnrUMxw
pKmMYDKYLTAawQYEDUfwkyWwDaifxQxH2soIdkLQwFyBrUBxmogj+MkSOBjAwMUMR9rKCCaD2QIb
FBcEDUfwkyGwmkmHgosZjnRVEcwAcwV2AsOBsjQcwU+WwBA8Bi5nOK4qgidwoIHZAhsMp0DScAQ/
WQJrHFzOcHxlBJPBXIG9QHGOiCP4yRLYaIeByxmOr4xgMpgtsEFx3tBwBD9ZAtugMXA5wwmVEUwG
cwUOaAQ7YWk4gp8sgR2gXaGn1kDr0ujB2c56Fd686DHqPkUnh9gL2UevQ7J+HPowgBmNkfYlS+kt
pfDgdy3Fo6tV2ez3YCI7fg1qmINdG0bQjhXpTzy2FlNNEJVdCRnMvBTZMBTnAw1H8JMhsJ4Jj9yT
lCimmiCqWgUDzBbYYDipiTiCnyyBAVBwMdWEivPNeGCuwFKgOKNoOIKfLIGVkhi4mGpCxakrPDBb
YIPirKDhCH6yBNbWYOBiqglQGcFkMFdgQCPYiEDDEfxkCWwV2nSKqSZAZQSTwWyBDYrTloYj+MkS
2Em06RRTTVCVEUwGcwVWAsWBp+EIfrIE9nhXWEw1oW7ujgFmCxxQnA00HMFPlsAhoIFUTDVBb7/3
mwfmCqz1Y5yZCTA0HMFPhsBmJnFwcdIQdFUEM8BsgQOK04qGI/jJEhh2sylzvZ+bvCsTNcWJ2k25
//v93K+rc32PSaTq9y3Tq3OJUnUu74NUevRt0GFsjVay7cfoWx16EzuwEnx3ICKA6qVtfZ9Mq1P2
PnqfVR+HlJS3Kv+7X53r1cnw7p5d36I4lwRKcS4JeHGuu1dRx+rLWe2qOJcEeu2n/N49eVNdnEvC
9sW5shM7OURsP8W5dnCBbKhvR1PXzyrOlUw3dYRD20c9tCYNojUSulZGHToHsdOjwY0NuzH29bFy
n33CPFZuaYgL1femlSHz9PN0rN/8+NXp2OR/9+oqCN/H1AfZxjD0LSRn205r0QaZfOhdNzoxNhmX
h3l/ufexflAaXEotDNC3SoqxNUbYNkGS2sv8jQrR1s+s3JFL38z/WOm3FrP58rNPDhsbYtDaQDvE
7I0LKrXJ9qId7WjGfNmFC/GxWWEmRHXb27zFnY7LW9y6aa1biGjOY38kxKG0h+rjQyEP88/2g3+p
u9LeVmoo+p1fMeITCGae96WoSKwCiU2sEghVnrEHCm1TsrDz3/EsCWlyy/jGkz6Qnp7SZJJz7vH1
eu3ronHL5fU2MLJ063B5143fYlLRWGixxWivv7v85tsi/LpeuniTat9dxytEf1tdfrMM2OJ8rXOB
sk8M0GrNmeO6FKpxpZdclJa0utRKEuKt1E1jXtsW//AVUpvWU2lKR9u2FJbF+iWZKF3j65aalnuv
vj2zxoi+9LqNTfVEksv4DJjgcvgENiY/DdDp/ed1u2tyobyJ8HNnsiKj34ziZiS07A0QMrtfmavP
nK1QhJznctrRrYbLWn0RLSfFHy++WL6+uOsOyP0FYUs4MjIZeuIya2M7Ahg58+WSwXD265wbcKe0
jfNWQSVXmgbTBr0trzeGduvT0DtldIVOmthybwJIUpInJRlbxx8jvasht1Ss0/H28O9fii/LCNOz
fPXgN1+KJvTztR2H14YHI/o3U88W375EwQqgiEzzhQQnTF+W6DyUUgCYkuQYuPXWa8s4Ne2/b6UI
tRNOWKYUlZIRz1UQjnnrRPePPLcLSgcZgEPHtvg+xPazjlb8+d72VTSZMeEtaQxppHqsVMxeqXQm
odqGfD7oFsNAJBjLvnJhoOP8D5tVPyvobu8eVj4K9527jv1l0X91c18Ob4M8FJvbRw8LTOW05eeh
iC1DxSBinOi5iSVoh2qCuJhpfjTOPXeD5HC7uXFxQAaD6nlAY/7eHvM23C6Wv8VG7Oa6+Q1EtBIq
ickNAlydnowOB4z2OA3Bddly5xjLvRXTGyxuwktffvTWq4W0hL0MoUmr0oxLUBXltVYwCHhyOwLP
2kXbA/M0YGxxGnkMRytKdRpcgp0IgRl89xCnk9sReNYuWgwwVmBLQDit0uAS7EQJbJmAgCe3I/Cs
DaEYYLTAEoTbBa0zm/ojHXJCn3MSQ+tkQTrapBVLgj8gHJFXlEsIeHLbhiA5wf0eWKUBIwWOxJLh
TinvBEfEFYDhaTokFAAKmBE561jh87ha/GphJWEvQ3BcgZPdye0rgmTVdAQw2tEsBCdkIlyCnajy
FGqmGWP/6UXke+82qzi8HiNGowsUEixeCVfnyc0zIuOCCBwwtnipAOFMIlyCnaji1VxDwJObZwTN
rD/JwGiBLQgns6OhxwuLi/v9dcWowOXUaiFEzXCRpkRCEaDK3kyG6qNle8FSa7lllpS2Yb7UWrlS
MRbif85qq4Kptbwo7peLJqy6Cj6lRbfO1crGtE41xvhtr3ACpqtjsKUzdcw4flGcUgyTN2+dwOzp
1djrUvdX6HGx1eJYIFERwf6LAr3/LtK4i24ROobb92vOa8UQTRj//F+ZXwE7SdrrcOOjnZFc0fnA
aQbhm8RPw01wXeSx+yRqeoL/DgHURQw7FkzCvPkRb2bkn3+uv+8vPuvb4nhb6m0cb7ah+a25CbGb
uIjdU9zZ2F0ot686La7X4XZ1sRfwiREd8jIEzCevI5sUbP/Sk9hS/eKWPgw9WTFuAw0eghYzjarH
ZdRBqeKP/h7Cv4qx52QUgrYqL343ARn9upWcStEoRdRhjDFusYxx+arbL/VzHYt2sdrFGv8DXPcd
bowzHnjdq91Gms/7x17qf+3Vov+liDyOevc8b+LZIdSYajZQI+J45ZBfHLJMoaIAT+21Lorvlq6u
Y5H3Lcd+Xz4rv6MG643mp831crYGy8Bs6NfnqbyU/eOd7BhaVkQYSIjMrmrKT2OMSWhnnea8pSrD
KfqBXZ4zyIoKkegMM/deBsXmvL2XrJhkX8/bNj7wvljoTSDah1Zb4tqcdvzJuaLbccoGx2MJ7fjB
s2M7nlxEJ7bjx6gowNOrbGI7nsnv3O24hdhww87dmB7LEn3V8UZYwZ1opcwomb4xzS0RJUxiiczc
mFoUm/M2pqoSBxFboIF6pyuobRO0uffDPobvNmG1Lm7DenndrPZasFjMgQTVCuOI48Oa6VX/8NX4
cKdoE65/Dj7ixg0K7mb41dDt5rwdf9h914FenyB2nAGvrn4Oy27jSG97t1/rovii3tytN/1qnqhY
8cHnn71WbIaPWKUqziqhXvHLW84rRkpKVmW397XbA/xa4a+7GU1c8u1/47Xi1v2wWF4UlMSX13fd
SyK+fa24//nKL68j9EP88Wn1z8Pdq6b7QRY3pm6ubzrhBNGm+Ha3uXs1FhtUZsaq85fZ8Spd1on2
nrZ+CtrYZVZobq4rotlTkE3QGLESqivGng/trDX0J6ONdg0LkZWUPAXZBI1RrqHIc2k1OM9zjSei
jXUNDpPV9CnIJmic7hqcVdYcRwzkn5+t3bIndx+W1wt/3RSrDmJzE5YdrUZzKplSyts/u9MV3y0X
mzu/e2Q3Y2o3d8266ws/WMRHuhPOuxnTq0Ms9ZLvDpe8WvTnfz8ZEV/655OXn474Zzch3Hffj739
9U1xFwP3xTAh2/Kx1myPwRzzEqRiluRskT4qX52zZTufD7py6AMSut98qVgOiUNRpDhdlDn4YEWR
4oiEIJXiWZvpD0XJ2dw/Bx+0KOaAhOlKxlCTQ+JQFGNPF2UOPlhRjD0iIWhFBc8hcSiK5Tmi9HxE
Dh+sKPaQhO12AHOe1bAdiCIpOV2UOfggRYl8j0gIWkmaVYePRMloaOfggxblQUOrLgiJ/yqroI1J
bDLJpDw1Ux4aGGsnI8dwUWwr0+AS7EwaRA7ArBLMQMCTpx2kPvF8ExoYK7Bmx3CyItqmwSXYmS4w
7W/zyqlGh3TsqW3dTHywpWGP3Z3TShDgrBRn7J+tDHHx7r4/DL+5+/Fu8ctdeRtWK/ddKB2ngjWh
KR0jumwYr8uWG1ZK1hIRvHLGkEjds7pmta0bp9nux7ppwhfD7xXL+6Z48ZQffxE0iIoJg/70m9vb
3/Y3YHSvi52d466NTnYhrTJW+Vo2YjiCFnxR/zZ84dOL2lDjGiXahnCQy+7ejElxHwH984gmsLPk
AY+93SQv7Y7nSsN5MEw5zWxjiQ2S8ppzbzzzUriXIfJSkAkhp6gkhdO2X580bD+wlv6tLsSWbF56
iC0ZH4amucoiw16yVoYSK0pBu/+096ULRJSqoQ1pdFCNU/tBFk7OxPvBDvBb19lQrDbLLkIbOspR
+pub37r9at+7m1jZQBqWzUojtka9mJGDW3ckGhcbpc1dVDG+Fal0KBARzcysROplZNGvm9xv98TH
f50y3U75vpdz6wAxMfklM5xx7ZLzxFDNTZdjaRWWP4er8f1ltxmxC8BcSk4EZ7Hwfh1fnonQNuNK
GfuHb4u3toXU9XBSKQujynlRv4oHgNchekP4qV9gHM4r9EuLhHtHhCmNFnVJdWtKRbUpnVTON7S2
tWLno5iaomYVaYffg38WrXm2vVb82Vi05VC0ZV+0z1IMQma12dkss3uSJO90zXrjbmLrPPjl01Lx
I5cdfB+u3P/L/Xpuan1rEluwWEo3N7EF/TkUdJvyp0uZ8nMT++CiK7lFW1CIhOUzkfi1rzX3i+V6
1VvOpCruY4aIFYi6u3crE/XDoQjaZQhRa80sM4S81mU72tTxDQKD22yTxxr5aYhR97B066E3Cdsq
2c0HOkpxmTzE+PVB4i717TEtVhEyW0Px1jhwHfCGKty1YepcuLEsYNT4+pzAh1VzHduFVRuWV0Md
dV0Y5Wq9uBopBc7rQDUvnTKi9I1nQxLWYCQJpCXMKV2Urz9GV+XSRfdwHeoczprWwzGjhRe8ZA1n
JbMilKZRtNRt62rrnDTSnY9iXg+3Lfhyr+DL9aIca1yKaSf1ddF6mj0mPPRiH7odSns+nOa5MLs5
ygbttCy/U0l0WtZYbXQdSm8pKSOmLRvtSFlrYmrfSmKFOB/FPKcdynnfZZ+lmHOioyrCsy0ehhth
ed3+1u/tffv9YliQWhXh1xiuh3DtXMOcXUagH8MyrnwdY/FKEDHPuOJo7e/EtD2zE0MvAmqQjtHz
0EnQKX3JNhJTmp6FmCI0rwBnI4YswEgcoqMpn4dOgk6IAhQVoeRMxE7cTDk7MXQBSpDOXP6UoBOu
AHX+Qs4/zXXwfTrzl/p85g01uqnrpnRta8qWe1G2oWYlC7W0lhlidIgJOF3s6m/vhynVYpzNdi/j
PHf4oYuiVUooZtqSqlaWghBd2ka1ZcOMCU47oix/GbYu1puwXC6WJ1v3/Xp9/23xrru+GRzv3i1X
oXjv888/KbY9e79uFyfe/bnib76FiDCizlT+J+4s/YfYTF0Dup5YiI4koDvyhHAVlYxTb3QpTeAl
IaopmdaslDwYroiQrKn7MxZeE8KoFaRND1el/PiLoEEUNggRrnrz7aqrEpE7EaoVXshaiBoIV0VV
nfXKBWUVyGV3L+ikuI+AHoertg8ewENRKs65ErUymulaBG20bmrVGkGNjho6DdZgJe2Efo8wwIZQ
GuYlMawumZGqJFrq0rfMl7KNn7jGy9qKByEUCtK18kS6Qy36ufZDOvWDZnE47999DKFqrrNQ33iA
5X+7c/H8RReo2eLGr8dPxuNCBKJg7KnlNFDoEMbDfBE9FCR+cR2W98vQ+bhbFf9ctvPS9z/fbkPy
gM/IirJtY4vl8vBqpDjfqUYB1m7/lqTdl8ruW6Vj1JQ6OFdqbo1wbTC1Ydtbksg8tySRllpvHC+9
dKxsfCPLlrWkpJxyT7Rg1Miij2x2WU3DRS8PrM6pzgI2VPsqvX2heKRTh5qwwNAtlKyYMBPcHkWL
V3DFnckfFnftqlNuzJARZb/Ey36JlL1T3NU34fJR0YWwoEuCN4hJQTljSrQ1mbjhPHAaBCGWyZqE
IBx3TeOYs5Ez18E9l7TcW5MlJzkmHw50qMoZ6GTTwQ5vqAJJCJNDIkETxOBfVpoLiM5k1kjFTky6
hgbGys4ECGd4GlyCnSiBDcsq74OLA5PrPUhFiK9Pa/YfdIr/l55w777AXa6k4tOPYWXkfMOF8W7C
/4lMsBzqREcBxwcPZIluLhvmjCdeWOXRIwRVETI5wH4c73CMsH3sSUYJkDWCpndK0Y42hJYwI8O/
Dwu4oT4Qzy2tmdGCM8eJcFZKI7hRpH2OwwJdEUSTeGAy1EKferJ2Jjro/gnWRMkcEgmaIHotXQkJ
0plMKax4VtgEAYyVnWsITopEuAQ7UQIraSHgyRy/Spx4WBgNjBUYhtNEpsEl2IkS2BAGAU9m2VXC
5AmcDIwW2IBwXKXBJdiJEtgqCgFPprJVMtODk4GxAksAzlSEmzS4BDsRApuKWtDOyUvFlMzyYAQw
WmADwTEp0+AS7EQJrJgGgPnkOTClsjwYAYwVWIFwWtE0uAQ7UQIbYyDgyXNgSmV6cDIwWmDQg61S
aXAJdiIEthXloJ2Tt/soneXBHbBNA8YKrEE4xkgaXIKdKIEF4RDw5O0+Smd5MAIYLbAB4ZhNg0uw
EyWwlAoCnl6XM5kenAyMFdjAcNamwSXYiRJYGQ0BT89wTKYHJwOjBQY9WPNEuAQ7UQIbQyDg6RmO
zfTgZGCswBaEs4ynwSXYmS6wgC9TjcDTMxyb48EYYLTABoQzOg0uwU6UwFQzCHhyhqNJjgdjgJEC
awLCMSrS4BLsRAnMJYGAJ2c4mmR6cDIwWmDQgwXRaXAJdqIEhpfxxOQMR9NMD5ZCpQFjBaYwnGZp
cAl2ogRWCrRzcoajaaYHJwOjBQY9WDORBpdgJ0pga0DgyRmOzorD98AyDRgrMBCH71Ni0TS4BDsR
AtOKMwMBT85wdFZ4CQOMFthCcIbInJQzh9bzU3OgzcQHKwqHSdjEMkgofITXsYpIBQFPTvs0z2o3
EcBogQ0Ip2kaXIKdKIEZ5RDw5LRPZ8XRMMBYgR+BUyINLsFOlMBcSgh4ctqns+JoGGC0wKAHC2LS
4BLsRAksDIWAJ6d9OiuOhgHGCixBOMl4GlyCnSiBlWAQ8PS0LyuOhgFGC2xAOGvS4BLsRAmslYSA
p6d9WXG0HlilAWMFViCcIfPZiRLYWgIAy+lpX1YcrQemacBogQEP5hURJg0uwU6EwPyRhSg5Pe3L
iqNhgLECaxCOMZMGl2AnSmBOQEdi/wCvxo2MN4uYH+2qN/ferVa/RIO7mYETjkpaS2/Uv+98JF5R
LRvLCedEEBGIbr23oW0JNdKo452Py8Vi/VS7H6MUQvG5pTgsrazg43koov0XbCAkMXMTS9AO5ekK
bjKnp5pZAU0MMLYoDAinGUuDS7ATJbAlIPD0VDMroIkBRgtsQDiVCJdgJ0JgUREBAk9PNbMCmhhg
rMAWhtM8DS7BTpTAnFsIeHqqmRfQjMCCpAGjBTYQnCA8DS7BTpTAQhkIeHKqafICmghgpMCGgHCS
2jS4BDtRAisJOtLkVNPkBTQRwGiBDQhnbBpcgp0ogbUREPDkVNPkBTQ7YJkGjBWYgnBG6TS4BDtR
Altw84yanGqavIAmAhgtMODBsiLUpMEl2IkQWFaUUAh4cqppWJYHI4CxAjMYTiTCJdiJEphR0JEm
I6mGZXkwAhgtsAHhJEuDS7ATJTBn6QkbgCN5Bye3k49mglSik512IPd/f3J7PJ/r48ltUBl96pn2
bTLPt/uUM9WXta+cj7mKxuvl7za3dVhe/pNz5s/i/vvfVl2Gz8tn8Zlnq9tn3dWI4c4/M8ZSLloz
ZGiVgtOyaZ0phW2kq5mizNTPdNMorltZekFEKVrtS0elKJXxPjTURHA6gvQXPP78vX/5PBa74dbJ
wdCieJMcZKsmO8OGgpdUU/ZtUbx7/ODwG91zu6fe++ZZVwHih2Mit71PYXtOTRs0/v63xXvDi6q7
ivqqXSzH/EntMubXjWaMl4VekihvzAflLyOh+LLj7i97YjEHXWdy2rPzGoHIKfuYrNt3TkkT23E3
5pD74Y2sezfRjxevdjVirCn9Z8Xl+Of23tY/emp/FWPbLikELTifgs6BjA0zFZZJoV2oWzZYsbt1
Ndb4qHy1CuurLufWT4vV7tbV/wDXg9xl49068WUZUYarYaE7/Y2k2zv9SfTq3XU6U8/Cl+hEszXj
J3r2rnp+FrMHfh/r4nV3wcnix8FJx0+7xP3X9+snrawzmHTU4mzpP33To1luf5De9GzNOm57XisS
+pBna3fvr1c/ln0+zuOv9G9DXxw+Oal9UxU9uW3+dw/eivH8XFlVTLMTbRvHzLHxazbLLqh189uY
MG2YBfTRr+jZfVq+5GSu5+D4T0LE1aZpwmrVbmJewr1LT4amdWbs4/Z3cf+g+Y13hE01qhAjziaH
8ticlBfF/XLRCRP5TlHq0p86IkirrDO1HLtEfB7Mi8LVsQ52nhJFiX5ye1GcoIagp2YiPCqfT8NN
cKsz5vaMdBUjJ9Ldy4m0lxi1e7cYJ4jBg4jqXGm/s3a2qkrn34JRx8D7zcL5TpM3d6+LZnF76+58
l0r5oni2WcUeIfYO9799193wU/5UlKUPrdvcrK/c8rvVZfw7/Bon18NfZRlr53VYd0saq8VNuPz+
54bER36+vayD9KxRsiTW+dIL40qralOShgTug6fCmaKb0fQz3ReOjdaVmCtV8kFpWJaTrL4nNtP1
Ssg1G8soRMdKaISlknI4C0laQnyp6qBKz01dmtrz0iqvnTKSt7ZrxFrSOsa99JxIRA7nhB9/ETRI
gQYhczhv7saEyjU3xAimZWMUkAFNUSmUp01QrDlmYyqyy/kyKe+jsGAe5/HRAwpQJmevSdCuFaym
rGXCcKMFpaomTgbprX0Z5K3phIqPcjh7LmcGEWbWnkx4Kq+yD7u8ynHAePt4L2kqLtnpNI6XuZqb
4O6uVt9v1j5Wjqcbrs5uyHg50NlNKe7i+MbdnMui3c1HfdrmpInU4q6/aeDyfP5yAqltMZTbS5su
/1U4lc0xYeKKoe1jz/bIpHRvoTN7RmoqwSaXkjArbjv3/yPSe3VYWEqFxUq+V/u2sFfdgG1xd1E8
ELJw9/fBLYM/M5Pl7dWoSh8wOFcTAFshsq3459az22JqcR3mIM/K4UGFeX40gJWh50cjrVzym7cH
PV03uX1i7863YBbPOp3G4XLtVsYnXKKb2YDdevPm7okXmvMMQfTYZ4lyRfZqMsp1al8bZ1YhyFq0
vLEuuIMw06ehn3nHx3fhpefMLzG09Nn4U2/H3vyl7teKcTQ8cHgYX0r6Ahxkisbryel1Qh35b4WZ
ZjLq0UDT82gBcizCtwBbw2ZqAmxF+emz+f96KCjPul2gZedVq7DuFy3A4NCQCAIkYbf7VvEkHm5V
evuT7c6pg11Luy+V3bfKOgRdeurrMgTKhBKUG+MOdi3FBZfFL1c3wf148S/UTx/NP6A+A98Hu6xo
W0spWShly3TpWleXXoi2bHkrWy5d4Ko+2GXlXBNM/LCUqvWl6NCsrXmpqAncaW5VQOyy6rQxJ2vz
+JUQPuxd0qB8rWoupW+sQS+J2opJMsHwXxEPr4Xw4fleDBEt4hbcZgluP7Rc6NA0jjXe/fvxSEkb
w4Rv6rauCRXOCcJbTW1QTjWmtc/vYohosiQix+TDoErega9sOthQijEgCapzSCRoggj72crAm38n
T6VZm3N5FwYYK7tVEJwlLA0uwc50gSV55Ay8kv8OzAghLENgDDBK4J4YCGdkGlyCnSiBqaEQsJoG
zrnaBAOMFlhDcEzINLgEO1ECc8UgYJ16SLv1jaqFbJSlE51ZWytmWat5LXgbGuYJI77bTa2lCN75
53vWP0ohDZlbisPSYpnVXho6N0Ws/zKwgVCMz00sQTuUp2tqIYpmsopl5RTEAKOLwoJwQqfBJdiJ
EtjCIzA7CcxzsmJigLECcwHCUZ0Gl2AnQmBaEcYAYE2mgbM8GAGMFtiCcNymwSXYiRKYKtBOOgks
sjwYAYwVWAgIjhGaBpdgJ0pgDjsSmwbO9OBkYLTAFoRTJg0uwU6UwEJJCJhPAstMD04GxgosBQhn
dRpcgp0ogRUD7RTTwJkenAyMFtiCcIqmwSXYiRJYSwIBT081VaYHJwNjBVYChDMmDS7BTpTAxgoI
eHqqqTI9OBkYLTDowZaaNLgEOxECs4ow0E49CayzPBgBjBVYCxBOJcIl2IkSmAoJAU/PcHSWByOA
0QJbEE4nwiXYiRKYKQUBT89wTKYHJwNjBTYChNM6DS7BTpTAXBEA2EzPcEymBycDowW2IJymaXAJ
dqIEFtJAwNMzHJvpwcnAWIGtAOEMSYNLsBMlsNQMAp6e4dhMD04GRgtsQTgj0uAS7EQJrBmFgCdn
OJRkenAyMFLgSAyEUywNLsFOlMAGrqmTMxxKMj04GRgtsAXh7N/cnWlvMzUQx7/KilcgcPB9FD1I
nAKJ+5RAqPKuvVDoRZo+4vzueHfTEtopnonThwMBatMkv5m/Z22P1ztWOBzCT4LAasUlCK5mOEI0
RTABTBVYaBBnkTiEnySBhTcQuJrhCNEUwQQwWeAA4aQMOBzCT5LAyggIXM1whDRtAqPBVIGlAXFe
43AIP0kCa9jPaoYjFG8TGA2mCqw4iHMeh0P4SRLYGAWBqxmOUI0RjAaTBTYgznEcDuEnSWDrLQAO
1QxH6MYIRoOpAmswgp0IOBzCT5LATh9kZ+V2ezJ6ayVkiuf7P7D2qJuOsRtdW0o73mwoL5uOYW3c
3tr83abjiVpaVvkYnArCON5TNxwX64KoWvcQ7e5m4+VN/8xGY6NX3OG3udo0phgDj4b3f783y0eX
tDY9L/rp3Hvr3Ginf7wtNgbzz200Li5LEVpcvtshtd2rbzaH3B1r0AjlW4xAaELopPVKK3Awqi5D
CdO0IY4ApspuJIgzHodD+EkSmLK7fbe94VEQfd2Dpsjqwxv/21Ew5WUUBHVpKJPwb9NFReNV0AOL
QUdmrHLMqMGzYbCjc9b6oMIDjyR9CWqj9n8kCVvbr/pAK2iYrU7pqIWKdgv8oewql6y2o7Z9doYr
cegqf1U+qIv//+iy8xz27vPdO9GeU8w89ywlnxkflGciisjC2LvcKz1G7iGRrK4+Y0+19WAivf4m
0UHo4dHxJJ+mMtMrDnWTbvMoevsQ8J6S4C/0AxeJBEd05/avvrRXmcjC9Fwcpv4fojAhaZYxJ7hz
2ZS9Dftus7n8pns7npwuE7gy/7/K3TufffZRMeXq8uK8/DaNGddX3VwU6+tvQEP8YxTSLBzT9FjQ
AQ0jz0cdZE5Q5jDmIHQiRVIxrLXg6GJYf31ymqZQuf6pe+7lp3H98vr6/OWfpkoDU0XS+X/HP+R1
MWj15cmPp69vnuueW5cXn3z++btvPhnVIMbRKpa14ExzaVkwvWVWOunCkP3gfbe++GsB0u7q8jRe
fbetTdqBVUqf654OpRLHkejO8tnxWfzpyEgtvZ9/LR59mzfbVx5LnrlEw1ag48t1XoaEJ/aVrljz
xAjZvX/y+iulxkgsA8gTs/y2zksdxvJ3P70CG2cPYNy3U1G5be94c40ddUWYq7kCCTu7SHlp1+Ol
jdlsf2c7tlX0+Oyk74ofywsL6valk7MyX+5QAdGxdTybKpk/+Pbt31cff8jfOFcdG87S3Dk9gzjq
2JhjubDyVVd+PI1FnvLaPL4eX16sN53o2PL+7e/F9zLW/LCU/AY6T7PS8mDB9UmOqftaSK2E8Z00
Rnnru5+8PVaSxf4E5BtZLY4DFMyz94rkvPFdPP82f34+XFyUIIolw8hLLQsQ6niz01OHsywuzidO
vPHR51uTrrp4nrplWrQtDrpUxgItCdV6gfu5XaZ6nlultVHJjxmcD5veesGDZlpM/3MpsZi5ZnYQ
Ax9ctkO004SuFNrtdgCQH1bqQyv6+pv7COp8OIwhf/K7aQ2zmyZ5m7wufec0+serbjo/SM3nBz3/
3dOz7smi+wuQUR5qZd6dlVozmzJ92iZg8XzIN69Nzae94moc+xjdb9uXj9frVK60N25EGte5dBmf
fPLmUs///bmcP6ocNmRmaC81/Ze1iXlVfmm3O8sUtx9i06dYn3jPdBI9S0PsdT/2ahD2ZpnCTssU
6dlUTinCpuO5LupSTgaWqbk++l9k+q9og60qM0sUHiGSttD/iF6wLs290/87dOyK62aJ8Oe/qdr5
bzL3TijTMyW9YX6az4mkIssxhyFKOUSTX+ZKeen6STDHmQ48Mx/tyHiSkY9OeJnH3fPfxpLvvgD6
btrnJXudBGenKmyWTyfB2TtvtHeKolq+cxKcvVvMsfwVdqx5ekA9Es7CNdaKhfh6bOW9j+QNtWYe
IHRLybzihAqy1Ym2+oyP2EC6fYSeVKcdoqSc09wYz3QQgY2jH9gweMW45Vr65GSMDjTWtl/04IyV
QzNWjpmx2pWR9hEG8MNMBdvvWKHu5O1MBafGhlVyrU0HbmfZVauc7SGiTKPRcQgR2M6y+3WgkdUL
/UHa3e0scfPMtrN0k/KxLwsfD4nvHJytgIf4aqWTTDYUQytV9NIQrdeKB6O1GCTPKYTiSExilFz2
/+DmFotI0GCXH1i3bXtGt9kc6qq2hTVRTUYgNCGsZbuVAHcghuojQMI13WwggKmyOwfirMHhEH6S
BJaCt7T3nc0t6OseNEU2j9yHzeae0cgI7/T8Epao+e7lIy4EPBu9YF1ka+g8PHPYqYQrpPPcGTlG
RZ87FDOdxcwdQN692cM/W3e3eKOCQncdfpBTxGRpff77GcPg1GhMNsr1ejBJBd5nIZMcRmWV0PEf
nDG4lXauxeW7nbdv2oHZbA516PISMsJUd1S0akIa0BwHB+76M62h6aEQApgqe+AgTlscDuEnSWCv
QHD9mdamwsYUMFlgC+KswuEQfpIEDlpA4OozrZK3nI1KARMFLiQQFxQOh/CTILBfcWcgcPWZVsmb
IpgAJgtsIZyQCodD+EkSWPgAgavPtErRFMEEMFVgISCclBKHQ/hJElgGDoGrz7RK0RjBaDBZYDCC
lbA4HMJPksAK7puqz7TKpvOpKWCqwBKMYK04DofwkySwgfomzavPtErZGMFoMFlgC+KUx+EQfpIE
toZD4OrjUlI1RjAaTBVYCRDnLA6H8JMksOcCAleX7KRqjGA0mCywBXHS43AIP0kCBykhcDXDkbox
gtFgqsBagDhvcDiEnwSBw4p70M9qhiN1UwQTwGSBLYQTQuNwCD9JAksJXqn1DMc0RTABTBXYCBBn
FA6H8JMksFbgpVPPcExjBKPBZIEtiPMch0P4SRLYctDPeoZjGyMYDaYKbAWI0x6HQ/hJEtgFDYHr
GY5tjGA0mCwwGMFeInEIP0kCB9jPeobjGiMYDaYK7O5GsCjEFTceh0P4iRV4BksBgUU9w2m4RU8D
kwV2IC4YHA7hJ0lgbRUErmc4DbeXaGCqwF5COCMCDofwkySwg8H1DMc3RjAaTBbYgTiPxCH8JAkc
vIbA9QwnNEYwGkwVOAARPB+ehMMh/CQILFYC7pvqGU5oimACmCywg3BSBRwO4SdJYGU5BK5mOKrh
sFEamCiw4mAEa25xOISfJIGNERC4muEo0RjBaDBVYAFGsNUSh0P4SRLYOdDPaoajdGMEo8FUgTUY
wd4qHA7hJ0ngYEE/qxmO0o0RjAaTBQYiWK7KCzgcwk+CwAUcQD+rGY5qKHRHA1MFNhLCCWlwOISf
JIGFgVpWcvRBsqoXox/VOGTx95v8YkwmWMutz0J55Z02cXSptznnsU/unzyPeJZC+nBoKe62VkM5
nMcykRy/YAehBD+0YQjtSJGuBahdNdVUtrErQYOpTWEliNMSh0P4SRM4eAhcTTWVbbwq0GCywGCs
GyVxOISfJIGtVhC4mmoq1xjBaDBVYCdBnPU4HMJPksBeGQhcTTVV27ImAUwW2IE4q3A4hJ8kgQM8
ftVTzbZlTQKYKjC0rKlWXFocDuEnQWD1wDxS1lPNtmVNApgssANxVuFwCD9JAkvPIXA91Wxb1iSA
qQIHMIK1FTgcwk+SwJZ7CFxPNduWNQlgssAOxMmAwyH8JAkcBOhnNdXUbcuaBDBRYA0ta+oVlwaH
Q/hJEFivBDhbUtWbhpo3RTABTBbYgTjncTiEnySB5f0jDClPn915fhn9FCJoSmgutfSff34ZrkV/
K5FSzRU+8NWoeK0alfdBKD16FnQYmdFKsGGMnukwmNhLK6TvX3bDYJUbDUuaa6ZHl1gURjPrU8qD
8AUudqtRPf0uvfDIru9RjEpITDEqIeFiVLd/BR3TotWxQxWjEhJf66i8F/am+Uny5mJUQu5fjGp2
wnrV6sTjFKM6QAM50+zb3PXTilGlXkYuHMtJZcZz8kxzkVksr2vremeFAo0NB6qc9Wcd13ffJtZx
nQ0J1h7GkHX+dqqjuz5+ejJ25b+dOgJxTCoo5ZmRg2fScMlELwXrY59Nzt4oZ7uCK8u8P+x8bEhK
S5czm4ZXpgQfmTHcsiyz0F7kIShz36WwkrK531lc+mz986LfVszuo3ffPupsiEFrI1mKaWAuqMyy
HTgb7WjGqAN3IUJmKdU8C7g7xJ2M8xC3vbS2VwjvzuLwhPOjQlZvzXB+ZF/rhrhen9zcGFnHTX5y
Ps3fLjbflUYrPcZ48u2Tr7/p8k+bdSyly+fhutTs/vnqydfrTG3OV6YQYHOV3CGNQiRpGffBsTF6
yaTWnKVRByNFHMv7X7lp/uUjvPdjEsazKMaR6SATG4zULA6pH4UfVUr2G1Bj3VwpZo+x9GQsXXWl
qGN5D1jQcfsX2JnmDm3/8fNkvO1yoTqB8PtgL/7JIo5F3IYCjpMD8+a4Q1RO3zbBUkk8dcVN3v36
3HPs1Yvzqa7971g2UbxDjdeHCQjB4UJiWlVvPRmxfx0OGpiY+RbDQJyXOBzCT3zmO4HrAdsUqNOZ
rikYbk3Uptc3wf/a3GHeHuZ0MhYj57PA/hVG3j+w7Prqu7+cWPbSne98vrgwJ4q3NryyvLHQv669
t/vmeQFEv1hxDy6/VO8LmoYH9LdgjQOTo99CuPI7DofwkxD9YiVF+xRgNiym76+v5qn1dObEsnzQ
xW/jSRl0uvmj15dseRm0A+5tqvcnTUOhABqY2tBSgLgDjO+7mcztlCufXZ/GzQUsrjnQFL9UP52Z
Z/nsYv1zd3lxejL8DBJ99QAR1CzgjeXYmOe/+OCNlzoTuAJ7CcUtrg0RwUO6erSQELh609eo/Wuc
0sDUqFUBxBmLwyH8JAlsrD1oHH1W8rWXumAeCCQrmutLwzpo1dbeBzOMGg8PmGP9YcxB6ESKl+Ca
k8DFsPmvRyW+L+P1fJDlkqJuQ6yzQPjIFW+/c4E6KbYym4JME9WFK/IZTLvnfNZMKvPLPKocVR51
L4a9z32CzofdQw1ZrcS/h2XPXo2dDm83kaCtPXWgQF78GwV6922ic0fTXue42ex2Ka90S9Kz/fU/
5f4KccLtfg7Re6a7R9vuEb+7R9tyrN3Sm99+23w3H4Qwd4nfdE/PymxgzMPPw2kug9hRGVzLnd+8
yWlXddGdbPLZ1dFOXloSTw525QqYYIXuu1y06XPc/PbOzU+lraTUKfDB88HYh8a129vz2/MtaBOA
ZnvIw76DjLDGtkbRHucNz+igzEEmgttsaQmf7tf5sJbfuxuvgcmvWnFRRbcgS4P5Xqh+6LkTYriz
PlTuy5fF3NV0k+1pX+L94up2nehfYOvuVbhdI7pzKb403X35bH7b8/O3vdTN31TI24nUzuVYee+y
TAS6LVHdRJlL3bWvTKdqVBJw36H8qPt2Hfu+NPncne5OcA5q371e/LVhOk33YL24gq2RXz3OxSvk
bXQaDqGFbFs4rSCnk2lk5mbIMiUb2i5eofiztBWzxvuX0BNyDj1zM1O5s8YLv7d+8QrjoFhtnGJV
LpoihhuzcaIfghxiw3U7JySt16vSDnm9AvlgRXkQqLl4bNHvG1JEH8d+1MbHKJN6PNFxGhiukKIf
eKqrSNY87lTXrOTurQ6423lraqibjuX6Mi0rz/Px6N1Z3qxPhqudfmk+cGPgo4mj0GZZUTme33y8
vHlWdMjl0OZUuGVJOZ4u35qnO6pn2y+O307Qkz3ELhne1fHTvJ62pM6+T4++HnWf99fnm+t5qq1X
snvvs09f6a6XP8mVXSm50vbFtD5TaiU5E/yKTfe+pz0Ar3TpZJqclgWh+Tum8+m/v1gfdYKXH0/O
px+5/uaV7vLpcVqfFPRf+dt329s3zx8bpi+UryzH3x91RnPnu29uN3dcbZsNajPlge11Wpk/Z+LF
4Mt5A8D1+Q/lpOxzdlZCo0jKhuSkcY6zLIJjUvSBmSFlpnTQ2drQ+1GWFhy4z8b0MXCvb79smrV/
vnxft74cuuf2+fLnYIdsxaEHTpa59fMm6Si2y9xH6/rehKCA42WyVqPJ3Ng4DpAt9vZIiKq4D0B/
u2cmlBjt2rGTDD1/s3m6F1qnYYw2FA1lDkIUI4K2XOvQixBfgI13j381309wGwpBPkuzqXmwsZCx
3vBnYSxCY8LieDHb6so1VolSXMJ38/FqzO+mfvhPzfNIrHv4JBDNx6KJyhITsz7JYRzCyGSIlinD
M+uLVcyp6PRgXAmm/Jc5h34ku/9yu+QsTj50V9frKQ3Jk8lF+tPTn6flye/i6SYn0AwvDmpGif9Z
zGJD3ExGDLGMV9fnRcXyUjFlokCGBOH/iQvbNm3VemZmUztPC2TkdiW5PWhj9+viSfm3eHVzk678
O0XfdOtu7hrjJoOWqH+kG7dNG9bsSvFnMlTeM1u3BOkzM5scpBo09p8Z4a1uDA0TWq+tZWvPtMO9
5Dun04MKV3n9NB9vX19Pd6ymLOaJUVwrWbr8n7Y/PpJBN9uWWUk4vuneuOnapyY11gaQatvHk79Q
vyz7nkqbFzF+nNt32RIwNZoRo+9H6Rl3IbOh1wOLWo9M5DENfRqUD/bxTMTu874qZudfcnq5ePNy
adHLsh0mv7xtWrY0LZub9mWMQ/tsDZ99bg4GVHTGYXMdT8sVtMTlszUlbW25xc85/+5v8afHNm3u
XMq8p7TS6WmZdz3NnbjZuz5t/306lJl7N7XcxdgJyAjr5GGM+Gm+ai4v1pur2XNpbHdZdogCAeJW
/PaAiUbq+0sTjOuci9bWWSWlf2Xatn/dlxc4DG+fC2+vyE9yWbrK67hZ5qD59pJM3WYy6fo85bII
dOfpF/cNaJbkhzLrje1KyMKbL+G5D3OPxS1tAVPLz48Jvntpbkq/cDXm9fFyjZZ2uTg/3lwcb03S
2mSRbGR9diNzzgeWHXfMjEMxbRi4UEPHXn3I3EN05LQRbqK2X5/YEW5IWcTQs+yVYC6PIxt1ymwY
orJ2lLyo9ngmto1wNw3PdhqebS7Y9orDuLbXWFe8V829yd0oTnla5t+JYVzkPpJ1+wStas/8kEHb
WyGEjZz1Q3TMCi6ZHofIbD9opUathpQfz8S2oF3aeTdkX8a4s1eg+pXkptnjZbqR1yfj/ARwOXy9
WzKwqy7/dAIttYSVPNSaz+2DED/kdcmPIJbR5jDzinv5mm/JiQ9oGDnr9ZA5LsjDmIPQCZ/XSr4y
Vj2OYU1VHA9pGLUBnYTMsdIdxhyETqQGtNNSW16vL9Z7G/bdZnP5Tfd2PDldmuYyrq9y985nn33U
3XSd81JfyWzm3b1ffwMZ4sSBGuyeQi03wA5pGDmSLGiOFYcxB6ETKZLcARZO/hw3cpqLkzw/VycJ
SZtktWA+28DsYAyTkvfMi9EpE2zqe112PMXNJp9dLrndxTatnn4sCffyRUfdaK220o9M2NEwzblj
YbAjG6T3ObrIbVAvQN75EB5Hdt9yi+GQhlHD0/P75ogVtx6KAovY/2By6E02ujSGS2wYuWdpjD3r
jfBcuIH3fNrnasYUg1HB6pjw+x8wX/4c6JALoEOE/Q+vv7maIrHYLoZe5jzoZBO0/2GIY+BJC5F6
cd8WueK3VRur4sJQYP/DzRvv4MFtD9aJaLyKY3bCWRf8KJOP3Pk0SpXsC6DNTlT0e8AC6o3XQSbD
veyZ9MYy7oxjqdg3pVwyxSGZPui/3Hg1kLlCmT3NXa6ip31aapLc6Y22lXfKn0Gq803U1/7CSj+f
x7KJab69u+WWj5e/bPfcccgEJXWTCRNhu8+10HPHywc3eX25nnfAxavuz4p1z3/39Oz2aQvIFnP7
0D3Vlr/WFyz5zmoR4E6pwdsPselTTOnkWdDWMWN9LuYopbm5KTXID1NqMPlglXQ9895Epm3PmZdj
ZiHy0OsxZd733bwfYnqYOx/N8sDq7HtFgR3VrkplG2hveuPyYO0o9uihrLYV2x6klTqWpbjL+935
eDUpt32Mqsj+hC77E6Lsk+KxP81PHhL9oQzMQmU4c5LBGxNljPzvjwnxKWox5pTS6KVNxvJB56C4
dlp77/L9Y0Ke1REhxeXAm1y+O9FpKrrdbg55euNAI6RvMQKhCWHOrVZCKcicauFqE0xLYxDAVNmD
AXEWiUP4SRJY8tDS3neq76Kve9AUWe1aUYPif2Qk3Cm6e/tAbffJh7Ay7nDThW2B3/+ITLAc+04m
4fnBrixTOqGVGZXoB8c9eYagVspVw/hh3r05wvZtz2SWAHnjnEL3D4NQwjvN49CPfz8t6JVOSock
Y3DFZDkGpV0yo+mNjTmpf3BaoFfc4ofAXZfhHtrypvWPZnOI41MxFzJCCPzcZD9NCKOWXhm4KlD1
uAXLW26bUMBk2T2Es1gcwk+SwN4LCFw9bsGKlr2aFDBVYKEhXOAeh0P4SRDYrDg479LV4xasaKnf
RQGTBQ4gznkcDuEnSWDhJASuVvW0simCCWCqwFJDOMk5DofwkySwDA4CVwtHWtkYwWgwWWAwgpVG
4hB+kgTWXkDgam1BqxojGA2mCqzACDac43AIP0kCWxkgcLW2oG2qoUgBkwUOIM4bHA7hJ0lg50A/
qwfnWd0YwWgwVWANRrCXSBzCT5LAwYCXTvXgPKsbIxgNJgscQJzTOBzCT4LAdsXhQKquy1nTFMEE
MFVgoyGckAqHQ/hJElgEC4HrGY5pimACmCxwgHBaKRwO4SdJYKMcBK5nOE1P3VHAVIGtBnFW4XAI
P0kCW6UBsKlnOLYxgtFgssABxDmHwyH8JAnseYDA9QzHNUYwGkwV2GkQZwwOh/CTJHAwAgLXMxzX
GMFoMFngAOJ8wOEQfhIEdituLQSuZzi+KYIJYKrAXoO4oHE4hJ8kgaVQELie4fimCCaAyQIHEGc0
DofwkySw5uCVWs9w2u7DE8BUgYMBcYrjcAg/SQIbDbZsNcNxbbeXCGCiwMUwEOcsDofwkySwDRIC
VzMcxxsjGA0mCwxGsJMKh0P4SRLYSw+BqxmOazoujAKmCiw4iAsCh0P4SRI4eDCQqhmOE40RjAaT
BQYi2K84Vzgcwk+CwAXsILCtZjhONkUwAUwVWHIIJ4TD4RB+kgQW3kPgaobjZFMEE8Bkgf/g7kx7
G6iBMPydX7HiE0g4+D5AReIUIECI+xCqvGubFnqRthwC/jveJC2hneKZOC2HEIgm2z4zr2e9Hq89
BiNYKoHDIfwkCawUGEjNDMepzghGg6kCKw7iXMDhEH6SBDYcbNlmhuNUZwSjwWSBDYhTCodD+EkS
2CoLgZsZjtOdEWyVw4GpAmsO4uz+/KQJ7MFA2spwLjdr9k7Oa3G7w5W7F/Hy8qfq8LxaMTjrYhJF
Ff33i/xKtCaprKdiRhNl8VOMOo2p8FhCyOX+Ir/l+fnVky308wvn/L6luNdanbf9I5hIjl+wg/BC
7tswhHakSA/CQCa2U03T2ZUEYXFgalMYDuIUx+EQftIEhoet7VTTdN4VaDBZYAPiPMfhEH4SBA4L
bjUEbqeaXWVdKWCqwJZDOMEFDofwkySwCGAgtVNN2xXBBDBZYAPhpLA4HMJPksASvHVcO9V0nRGM
BlMFdhzEBYPDIfwkCaysg8DtVNN1RjAaTBbYgDhvcDiEnySBdfAQuJ1q9tW9IICpAnswgo3SOBzC
T5LAVggI3E41fWcEo8FkgQ2IUxyHQ/hJEtiJAIHbqWbojGA0mCpw4CDOcBwO4SdJYA/fOs2Xhq7v
5SgBTBbYgDgfcDiEnySBgxP3wITdZ3c2KaN3Id4zRdX2aO/uRG3FvYr/hd2325uUb7aiprpJGVJG
CLWjMjd1K99YVVdZfDamRUxpeG5zyODZ9elYJ4H+LK/y23Bx9MvlXMzy4MV6zYuXpy+OtUXzWXrR
+yCULrN6oTCjlWBTiZ7pMJk4SiukH18satJjdZKpnKrnqiruy1SYt1MSiUcdFN9Arn65yAc/HqXn
YY93riiz8TiuT2tcOzoMr/E7hZn5rWPrhl8dMPfNMLx1/8L135ivu73q7a9fnG+A+uWmVNjWt7A/
u+6r3vz9b26Ol17MR5cdlvPlplRQWdZSstWNzbEEB7O8tfRROqgG1f+tttcPV4bVKmezy7hr9+sE
oXzqQ7LefLJLRdTZdum+op9WyknHcXopQbTTX+FPACUja8cc+SSy08b4MXWcVvr0tu54WqmX8uYM
SA6fVgpfC58yVd3W1u0Y2be358e1UN5RvReP59Npzr9fB+nm27lG/fHF1ZPerHtw6V6Pc2P+03c9
u3tD73pu3Lrf97w8IJ4hL17Fi3R8+T1bVUa9/yurj6FfXH+zY/8WZNhRoL+P4Bsx/rlQFguud/Vt
PUidO7/pejm/1Dr5ZVMbbJ0FrN5+1cheVaBDlwt9DBv/rP13eT3NB76W61qCb/t8j3XXul/2rgcR
b3eqkEXSNges1PKL8JHEsEn1McOjTbEkr7yTm0civeQj8kjilhrG8a/21D7rM4sfsYxlNTfYXcNp
q/zPVg3Q+dNhkyDmdJ8oFxzYprxTHdo72bHpmkCbDbOqty7xWF+8n5zHNGvy2u3/zwc8n8azNBe1
fml48fqyPhHq0+Hil2/nw2zYDwNjKZd4fXJ1GJffXh7Un/PPNble/8RYvTuP89U8pXF5XksPHv04
8XrJj6cHxUg9Rm/ZWMrEioyBKW0S4+MY5zw2FquGOaNZZbrPQE5bYR+lNbzvqRCzT8OIczbVcMgc
B6ZDDlWuONhJOxeZLMqwEq1iQmXOJhlMNEJPXs3nqnuZop+4CMpzQrlixB9/FnRICsghYrni67NN
7eDoYpQ6OZllAYp9We5TmVSo/xTQmttjFJryPoiFSxavL71jAlS0WAmvhZh8yiEHKZPn01gCz5PQ
owqyPA/abVoqPmjDo5cttpDBXsmdDW6VEE75poTw+hB4+Cm5NsOq3c24P801neR4dnh5dH2V6s3x
dMPVvTuyOQfn0V0Zzur4Jp48lke3h/ysKhSjEqnzs9WZDwePFy87GHXTDGzTLsPB3wrnum1EJK4U
s1N9sj2QlG5NdHZnpHIRhPpqjzNut+H/azXvhfXEEozV3ZJv3X032MN5wHZ+9tLwFyGHeHGR4zKn
R7ZkeXq4UWX1wuCxuoDH8uLPA75Oh9bkOmxD/220ZQPmhgHNaA74+swAZoZgM8ITmIFrl35Ltp90
q+T2aaNb8m4P9hFZHWbcm67dyPiEU3R7duB2vvn67IknmvscITyxH+UtV7VeN5+5uz5r53S0hFSM
L35K8s5rpo/yKvOul9++XvqH7UO+Wvp486feqE/z5+a/NmxGwzc2bL9fQv3CAy+Z1IKH+3mW+e3j
eaZilrCadnyejqfhcp4huT7Jy+pSnpwSRlprU/htjuxvl+fXZ+n2ktsGKNdn0+r00PfO6yWrQcLV
+Y2flzPjQN2G0AvD6uX7hxvic39+84SGf3yS88X8+9dnV8cnw1md3xrW7XxjTwj+JthBu6QxO9+r
/9L3dnty6sE3d0/fpfZ5RO9SbxzbU5+qFlbvPsj5t79b6/Pu9s3VbVRd5qtVlwC/bYMPyKlGOH0z
sUs34q9rv9748GYp2p1lYLe/xObfYrLEwrJymnHhQxHR+5DuLgOrM1jnPx2e5Pj9S39n+s76/cX0
Pdj7l2VroozGGJmZKdKxWOLIktaFFVVMUSZmZcc7y9ZinLKvXzJjS2JapJGFMCpmhc8qOhVsNvhl
a1Ubs/tY6uHjJFLeOuAhTiZWL7XVhT7HrBbeNJPbvyPePVIi5X/2UInqUQB3vzhwPWdMSQRTRFZe
/P1+0+idKlb7nMNUkk7WTZNSxfk4al7Gf/CsKaUXwsgel++8pQpdO+j6zSG+m6rmQkZIrnuMQGhC
eI+qF9oYyJzmNr9gVV9jaGNxYKrsFsQZrnE4hJ8kgS0scHObX+iqlEkBkwUOIM5xHA7hJ0lgpxUE
bm7zC12VMilgqsBOgzgrcDiEnySBA3ineo7d9R5UTmKKOXtv/v5hNsWxokv2QVqRAjfFa24N91lp
EybxzxZPUGbBrd+3FHdbq6s44+OYSI1fD2vn3L4NQ2hHiHSzEHCkN/dbhtBzGj0FTG2KYEGc4zgc
wk+SwFIHCCwbYMG56BMYDaYJPBsG4hzH4RB+kgRWxkBg1QZ3RjAaTBbYgjjncDiEnySBNeynboJF
ZwSjwVSBhQBxXuBwCD9JAlsOgk0b3BnBaDBZYAvihMbhEH6SBHYw2DbBsjOCnTA4MFVgKUCcdjgc
wk+SwF6CYNcGd0YwGkwW2II4G3A4hJ8kgUMAA8k3waozgtFgqsAKiGC74NLhcAg/CQJXcFAQOLTB
XRFMAJMFthBOCInDIfwkCSzACneBN8G6K4IJYKrAGoxgyZE4hJ8kgWUAwaIN7oxgNJgsMBjBSnIc
DuEnSWCtAgRuZzimM4LRYKrARoA443A4hJ8kgQ0Mbmc4pjOC0WCywBbEuYDDIfwkCWwt6Gc7w7Gd
EYwGUwW2AsSFgMMh/CQJ7KyHwO0Mx3ZGMBpMFtiCOG9wOISfJIF9EBC4neG4zghGg6kCOzCCg/A4
HMJPgsBuwbmFwO0Mx3VFMAFMFtiCOKlwOISfJIEF3BW2MxzfFcEEMFVgLyCcFAKHQ/hJEljC4/12
huM7IxgNJgsMRrASSBzCT5LAGjoLzfB2hhNkn8BoMFXgIEGckjgcwk+SwEaC4HaGE1yfwGgwWWAH
4gwSh/CTJvDcFa52Fv0FLLfAR1dXF98Mb8Xjk7V0F3F5mYe3P/nkw4q6vKilDnJFxavry2G1L/Tr
byCQdaCHzVRK8M5bBQ0mtmQ1DMQFg8Mh/CS1pPMcAjdTKcE7bxU0mCywg3BeOBwO4SdJYK98z9q9
OyU50Ws4QVP87kv3H3V1M3ZFbU9RzpuV63V1M6zN7ptS/25180ytLWtL9GLK02SnSF7Z7BbBNVvu
IdrdVc3ri/6hFc1+IbxD3w5jyWMYYzI6hL9fBGa4H7mWvGJl4t5MWmXtRLFKhTzF6R9c0ewX0oce
l+92SH2LArrNoXbHQkBGKInvFXfThNBJ+4Xh4MOoOd8lRNfKOwKYLHsAcVLjcAg/aQLbrqC78xRE
3/egKW73nv6//hRMef0UhHXZfe/Yv00XXkRI9XOWTJRsSpNhRRbOhBIqcael8A/tffoc1MbvXlTp
36aNisaroCcWg47MWOWYUZNn02SLc9b6ajZFG8t3r9yDrVjZ3KYNGiabAU0tv7VdthJl17wG2SZe
dPBWybzv2pVNPqiLbm7ko5v3D+myVV1gq2rB1h/xrvBc0cxyxdkc4Ey5PDLJjfSqcJNVAEXyzaim
2ro3kV57g+ggtIO3HOeTVEfB1aFh1m01wrjdib2jJPgbfc+lTy1ksOfNrOkhg3csflqZRu6nqiWi
3CZpBBbUnspt3h0ayq4tUXs0jDpElsCWqbDg4lGq11Zc15aqlWG91WvXho3XxydpnnS9/nl49sUf
4/LF5fXZiz/PVRbm8rar/xx+n5fVoMVXX9h3P37t2eHZZf3w4NNP33njoKhJlGIVy1pwprm0LJjR
MiuddGHKfvJ+WJ7/tZrtcHlxEi+PNoVuB7Dk7bPDj1Mt6/KSGE7z6eFp/PklI7X0fvVj9ejbfLX5
5LHkWZenWAt0eLHMw7oIg3t5qNYcGCGH949fe7kWrIm13z4w65+WeV3Us37v509g4+QejPt2rlC4
6ZRu7rGXhrmG8KqcDTs9T3ndrofrNmYr+wc3sI2ih6fH41D9WH+wRt1+dHxah6kDKiAGtoync1n8
By/ffL/48rujjz85Hth0mlbT/E8QRwMrOdYbK18O9X9PYpWnfrZ6rB1enC+vBjGw9fWbn6vvtYv/
fl0//huo/XQIe2i/VXB9lGMavhZScGXsIJV2zqvhZ28PlWRxPAb5xsqv6NUX3b2KS68fxbNv86dn
0/l5DaJYB/Z5XccDhDrb7fTc4aznO1fHl7z+4acbky6HeJaG9WhkU2l2XWYNsGRVKhpfaIrgdh1h
OcWTmHxWyQVwGDomOZUpFCZDtEwZntlYDWRORacn4yZe8jyOqlWbhy0A6IfS+1b0tTd2EdT5/s5y
NmSLP8zTqoOqf+EqL2vfOT/94+UwH0alVodRPXf04+lwsNb9ecioIEXvk/cvufZqBn4tyJ20+/aX
2PxbbMzFsSCcY0XHVL9xc5vfpN1uTrvT05Rjqc2aDlfVazc1amCZetvuLzL9V7RBl6qZJeoehAOR
tIH+R/SCdekehvzvQ8f4Xonwp/Sp1il9Mo9OKDMyJb1hXgvORFKR5ZjDFKWcoskvcqW8dOMsmONM
B56Zj7YwnmTkxQkvc9k+pa8cn+TnH9l32nl9bi7tZvl8Xp+7c6G7U7rW8q3z+tzdkpv1W9Ax2x33
1IP73AOF2yzHF3mr1z6SN9RCfIDQPXX4NF8o3n0WS1/Rx8dqoNk35Xp9m1WnHXVVkp6E0o5pHyVL
SXNmRByZ5HIyeZycch4yVu/L2LtDQQ4NBTliKFiNslI9wgN8P0PB/jcwqLd2W0PBubFhlbrTCXDp
yrZadQp6HI2cNDfTZIGlK9t/DjLS+Vbq/CDt7tKVePVkS1eGWfk41hmFh8QPwGIqwYfT2g3VyPtt
85Ymnk355rPqnddecVXKGKP7bfPx4XKZalf8+k1KV5a5TnB89NEb6/v7/dXtjToJ6r6ZYmEMPLsK
vXjnUrhYRiGiLH+/3qbooMM4Zc5HO03KCTmm2bMUfBFusv/cepvqsg2ux+W787Zd+5P7zaHOaisL
GeGM7jECoQl+LltXsaSH1qE0tz8J3fOygQKmyq41iLMBh0P4SRJYNg9XQ932m1dd6PseNEX9y+Yr
nugBDi8+/RyW6N88X/E0esG6dCcJDw9wtqoASx/iKGJUWkbyEEculBOYIQ7MuzvI+WdrDldvTJDo
riO5XII1QUcvGyt0RYpTDpPK3topxaSnKWYnufbBSf8P1hyuLlug5EQYjnINqbF68dvbN/83t53U
KfDJ88nYhzrv7cWSs0vUp1SnPeRnFyiKs/hxy504gEQxPbV/+82hamJgIwLvMQKhCeEprxbcg/v5
mpucRVc5AgqYKrsVEE5oi8Mh/CQJrLSDwM1NzsL27I6igMkCOwinJRKH8JMksPEeAjc3OQvXs7+P
AqYK7CSEs0rjcAg/SQI7ZSFwc5OzcJ0RjAaTBXYgzjkcDuEnSWBvFQAWzU3OwndGMBpMFdiDERw0
x+EQfhIE1gsuBQRubnIWviuCCWCywA7EGYfDIfwkCSychMDtvcdd5QgoYKrAQUI4KZE4hJ8kgaUM
wG55ofa7W76CtDKQh+1Nzl11Dyhgcks6EOc5Dofwk9SSVjgI3Nw/J7vKEVDARIGrYRDOcYvDIfwk
CewFGEjNCVPZVY5gBbY4MFlgB+Kkx+EQfpIEDhwUuJlKSdEZwWgwVWAhQZz0OBzCT4LAZsFNgMDN
VEqKrgiuYMtxYLLADsR5gcMh/CQJLAXYss1USsquCCaAqQJLCeK0w+EQfpIEVjC4mUpJ2RnBaDBZ
YAfigsbhEH6SBDZgKiWbqZRUnRFstMCBqQIrMIJtcDgcwk+SwN4YCNxMpaTqjGA0mCywA3Fe4nAI
P0kCh6AhcDOVkrozgtFgqsAaiGA7TwHgcAg/CQLbhbCgn80MR2rfIzABTBbYgzgvcTiEnySBpbUQ
uJ3h9L3HIoCpAhsQp7jC4RB+kgTW2kHgdoZjOiMYDSYLDEawERKHQ/hJEtgKMJDaGU7XKawUMFVg
C+O0xuEQfpIEdlpB4HaGYzsjGA0mC+xBnEPiEH6SBPYWBLczHNcZwWgwVWAH4wLH4RB+EgR2C841
BG5nOK4rgglgssAexCmBwyH8pAkMlm1V7QzHd0XwDLY4MFVgD+KEUDgcwk+SwJKDArczHN8ZwWgw
WWAP4jQSh/CTJLASEgK3M5zQGcFoMFXgAOO0xeEQftIENgECK+wRximV0YVQkmhtysg8qiQnFTI3
kUvLXdRiNGasP0Yu0j97EnaVQge1bynutVbnbf8IJpLjF+wgjOb7NgyhHSnSrdOQic1UU/HOrsQ6
gwMTm0JxEOe0xOEQfpIE9lZA4GaqqXjnXYEGkwUGYz0YJA7hJ0Fgv+AejOBmqqlEVwQTwFSBBYgT
QuJwCD9JAtcfIXAz1VSiK4JncMCByQJ7EKcdDofwkySwCmDLNlNNJTsjGA2mCixBnJYCh0P4SRLY
wOPIZqqpZGcEo8FkgT2IMx6HQ/hJEtiCr9B0M9VUqjOC0WCqwArGOSQO4SdJYHj4opupplKdEYwG
kwX2IM5yHA7hJ0lg70E/m6mm0p0RjAZTBYZxQRocDuEnQWC4RGsFN18aqr6XowQwWWAP4rTB4RB+
kgQWEr97HN4Cur17HL0HFDRFdxc7+s/vHt8qwA9KZPsr+KBLlvFWyTLvg1C6eBZ0KMxoJdhUomc6
TCaO0grpxxeLmvRYvWUqpyqBGgXzZSrM2ymJxKMOim+XLPvxKD0Pu/5PViwTElOxTEioYtnWt6Bj
rjvs91WxTEh8Qax67SN5012xTMiuimVhYUR3MYLHqVi2hwYyqr+B5q6fVLFMKc+NtJwFPk7MTHpk
0dnMvE1uEkUknxJorO7v76qx21V033mLXEW3GuJ4f++zMmSZv52rGC8PfzwuQ/13q4qDiEaU6EKV
JkVWUlIsCG3YGBJ3ynLpshoqrk7zfr/1a1NSWrqcmUxyYkrwwozhlmWZhfYiT0GZey4ZtZB7qKK+
cumT5S9r/TZiDh++89ZLgw0xaG0kSzFNzAWVWbYTZ8UWU6IO3IUImaV06DbrziPuuKwecZtba3OH
8OE0Tgecv1TJ6s0ZXv/fvj5Mcbk8vnkxsoxX+eBsHr+dXx3VRqs9Rjn+9uDrb4b889Uy1sLxq8d1
rZj+y+XB18tMbc6X5xBgq6pfzk3euBCYSjoxlbxikygTUza4zO0Y/MRfvmn+9a/w0ZckjGdRlMJ0
kIlNRmoWpzQW4YtKyX7zyBoTnqXHpXbVjcqf9Rqw6uf6G9gZ010ab/fn53G57XKhYpLwdY/kRcdz
s4rbVeWzOuC5+Gofdes3TbCu456G6iYffn32WfbK+dlJ1eJ3LJso3r6e13sLCO/ABLT56kmbnoIf
FDAx862G3cfpBfcOh0P4ic98K1gY0xs0a8Ni+u76cjW4mM88WCdQQ/w2Htfbblj96vUFW38M2uEk
JEDzFZg2PTsYKGByQzsIJwXH4RB+khpaOvNVz5ESrR5pLvbJucxZ+WwSv+nlXl09GW+PKjsu1cjV
SXf/CiPvH8d3fXn0l/P4XrjzN5+rLqxmBG5teHl9YaV/3bp2+OY5AXRzen8FlFcj/K2hSD69Pon1
sQdC4enp5ktRbXv2tFDA1FvOShjXP6hYm7M8Xml7mk/Pl7/U1UUnx9MvINH5r/YxCnh9fWjPc599
8PoLgwlcw8HjFU5ORDuSuhTtLQRuvvTVnvcFEBpMDSDPIZyxBodD+EkQ2Cy4lxtwZ+TeM6ynquw+
DSM3kIbMEaG7ejpaJ1IDiuD22hF8UhPuF4Zgak8A4XTonrFb67D69qUa3xfxenV65zpF3Vg4OJBu
+t9coI7HbTxkQdNca5RLPgFr+3DTlkl12KG1F84a46NJO5+6BR2Ku4MarvkSYQfLnl6Nrftle3xJ
m3saIIGCkf9Ggf6g7kp7m6mB8Hd+xYpPIPC+vo+iInEKEAjEKYFQ5V3bUOhF0nLz3/FuNiEkU9YT
JwXQK9QkmzzPPB4f42P87ttI487yXudhTnq7BXulWY2Fp5f/K/Pbgmt9DzMI3zLt3ud7gP9u3+cr
Yd777Tm36vff778db8sYm8Svmx+vc2eSYv9LfxVzn3mW+/K88hvvY9hWnTWX9/F6ebYVruR4BJp2
0a1U7ij91hSFrOg2v403yPzRrPtyDUI7VVtWB11lnKGdmZ28q7E2+3XSTHFPre9T2omQP+9Cnrds
h/WkH7tctLfLTaT8H+C67XBTlLzjdS8PCw2fjo+9MP7ay834Sxl5GjNsed7Ms6tAudRsoEbkYcMu
vzxymENFAR7aa5013yx81+UiH1uO7b78qPz2GqzX+uHa3qM1WI+wOc61rfveyfhf3glAm5YpDQlR
2VXNlEiuJtr2fVRB2d6JCqcYB3Z1zmBaTmflr5E9W2tMEi4loZ2hNQ3Yk3Mtmen7m+KMrxQ3k+I7
M33ws3MNmGkVk4X1FQh95lAhQK34qevFPpGsvrdUahOSVYGdrl6UaeCELRT9yKM6HJvTjupsq7eT
lsJV7q2hoNaV6uEurCaTx3vYm+t4v7jsl9t1Mlcy5rRlIXGa5Dh5cDE+fDE9PCjax3w7dMi4efbU
X61+NQ6Lh9fTD/tvBtDLA8TOwczy4se4GHZfjrbf+OvsNJ91Dzf3D+Mslmx58/6nn7zSPKw+4q1u
BW+lfiksroVoOSWMLsmwzDssd7/ShMthhJjnPsbfGC7C/+52cdYwmv+8vBn+pPLrV5q7Hy/C4nKA
3sZfP603D49f64cf5K+M9+xn4ZSkxjZfb/YxLKdi2y8z11LmTl9m+xNfrmqG9sloY6cPHYXIcsqf
gmyBxojJxYG2mrIsn4j2EXI0Z5pC039F3aop7iejjfZgCZHVWj8F2QKNUR5s5L9Du+ZUxNPRRruG
hchaxp6CbIHGpa7BzxhvDXU1N0Lt0FG84oaqY/BBlmXmu0dC8tYqVUNiVxQhakSp54MVReySEGdM
tIyyGhK7oih6uCjH4IMVRdE9ElK0wugaEnuiyBpR6vmgRdntJeVQMlqJGhK7olh1uCjH4IMVxao9
EuOlL1UN254oFQ3tMfigRdltaNUZky13sobEjiia6sNFOQYfpCiZ7x6JXDLaVJHYFYVX9D7H4IMV
he+S0GdMtYxVkdgVpaahPQYfrChK7pEQvFUOOFSjpPtrDS/PstyNG/Qfbr6/uf3phlzn+Sz/TSQi
Rtkzx4kzzBPdd45oShnhXWB9FMynPmTqVEYaOm9iEHzzY8N632er32sWd33z/CE//jxkkJZyxqBH
7t3d2DktVw6ydyzwSDtrqErA5btG6sRl18vUdRAXY9YN5Ky4j4D+vkcTWlLd5rG1jPrC+nCztz4w
1XEtvE5OmuhC5Ml2cvCqKNKLEHlr+YyQc1SKliLXX581bHtRsvxbw+x+sXnly5PF+KXQSGWRS4ZU
UEXzP2KYsIQxR0lyLhLfmWAENb1I/G+z4fZEvP+2Z+3aDzbk4+SLYXEoDpSz9FdXvwx7RL71V/cx
QDQctUelkVujUczMwd8PJHqfG6WHm6xifitTGVD2iQzhgzwqkW6RWeR/ObBf7+LL/wZlhr19MYwz
eBFiwqnZY6J+/2T42eH3sotc3obLvllmW8PDVVzkZib2RjDFtdbB/T4c+/lmcftwEzaPbFYL08NN
fz/Mmr9/mx8ZKGxWC19eUT8XmxNXLzfjofiPJsQX/vrkxacj/slVjHejkDf3l1fNTd6K2awWI9d8
nLPrs2Ewr3pXX209v8gYeZHiajhIu4yLH+PF9P5i2FE1LD2cK0Gl4Lk2/Dz9eSJC62N1JHe4Xzdv
rL1+GDIorR2MWu/hf0P9Iu/Lz/NYWYwfxjmr1ZbVYURlg/RDr0eC6w0ZEzB0xvWkyy9F6LzXuj8d
xdJziMtMO/4aw7NszbP1zPqzqWjJqmjJWLTPSgzCH11c2exYrc1F3un7+wd/lbu7lV8+LZUwcdnA
jwt126/8z6emNjbPuUvIpXR11Xzrf4wNW5+tHE4t/djnQU0zlNxtahhEQuojkfh5rDV3t4v75Wg5
V7q5y+f3QAfRm5zYlagfrIogLWLMWmsuhFZZ+mW/eOjyG/QR8GqTpxr5cczrzXHh71fdc9xUyRzl
DJRysx/zyu3O6Wz7NUiL02PRemOKBFZ4YxVetWGnws1lAaPmv08JvFs173O7sExxcbGqo34YFlzc
315MlHzGT4JJ4jg1pOtoR5yQlqiOMdE7qo2hDXn1ZHQP6OE0P3K/+ngPR1Uvre8oYUZqQp32xDgd
iHZJJeZipzk7HcW6Hm5d8GSr4Mn9LZlqXIlpB/Z1WlRbv+vFIQ57c7Z8uMxzT8TuEKcV+riojzut
UtoIEyTRQvakS1EQqYMgMvS0s8l3RofTUaxz2lU5b7vssxJzDnJU2ZpjhaWf56An/TLu03zz3WY1
w7ds4s+XUBSqWqbru9u/H0j+Pi7yVCKEJWYntAqxdidT9eE5II5LDDurqhlERzp6HDoFOpWu1Y/E
tD7SwHCPmKorwKMRQxegAumcrABVXQEadSRH3yPm6grwaMTQBehAOtYeh06BTrgCtEfqJqbliDFn
3Qtj0jpqok2RReJMMoSH1BMlrSeRpqCC6Jilfd4Z73NXf323Cqlup2h2+DPHucMPDXYmraXmNhGm
kyKSUkNcrxPpubXRG0+1Ey9C1llzpGq8K3vFtWrHJYZ1TwPQ0S2nDvACRQuW3YIKgQfJCVehI7bz
PeHJ9YR1ToTktLaSDxu/krIpRuloTOXLbiU//jxokFCgQYhlt9ffbAdPzNyjYzGlEC1NElh2S0pE
4T2TIVCQy2a1eVbcR0D3l93WD+7AQ6ttQgqZOiOkEpSHZKJh3gjrtOh90L16EeIshJzR7xEG2KWg
ngdFLe8It0oTapQhIfFAVMqf+D6ozsm/LQU5kK5hB9Jd1aIfuzClqtu0RtsJGfPHEKqUqgr1tb9h
hV9ufN7wPyw4TbhDKedPpvMpoG8pS6soDAjTeaiMHpvhi/dxcbeIg4/7ZfNXIuMXvv3xerO1AOLi
NlemY7n8Pe10DjPalQA7Gag3XyLDt4jgIpHYBUb6JKyyRkrO0zoDNT1OBmpjmLGh14RJqYg3WhHb
2URSEMy4oTZF14wrtEOOn3g2ygOrww8sKbCh2lYpH5mi1ASpmVQ8oVso0zI+60WPoeX05nmB64Pm
Ji0H5abT9Vn2c7zs50jZB8V9dxXPYdGzYVI40CWh7OxdoL20NCqfun++PY5xEU2ngpMisU5FT1Uf
fOS8MzL2Brg97mlujhtN1sLWmLw70KlIuHQMOtjhjaUgCVPlBgWaIMbcpnVGQHRmr9vQFTd14oDR
stt9ONtSbsrgCuxECJyBDa8p751LGYrrPUSFiUOb/b91iv+XnnDrLoZNnpXm4w9hZcTxhgvTvQ//
E5lAOQ4eccPjg21Zhj472RCUiloGix4h2JZz+2XBCAHG2xsjTI89ySgBskZRRPvgjO6Y6YxL8p+H
BSLqZHXPRaKCM+8tj0bESG3Unkrv/8VhgW21EzUm77bQFVcaH4MOtn9yIAknWA2JAk0QvZZrhZIQ
ndlLogytWq1AACNlz8QgOClNGVyBnSiBDTwMnL0kytDDM0XjgNECGwjOOlEGV2BnucAyi8AUBDyb
i9yww/MCT8C6DBgrMOMgnBRlcAV2ogRmEhR4Nge4YTUejAFGC2wgOE5NGVyBnSiBJbMQ8GxGZMMr
PbgYGCsw5yCc5WVwBXaiBNZMQMCzGZENr/TgYmC0wAaE04VwBXaiBDYcLNnZ636NqPTgYmCswIKD
cM6WwRXYiRLYcQYBz173a0SlBxcDowU2IJxRZXAFdiIEZi0Da6qeve7XyCoPRgBjBZYchDOuDK7A
TpTAQoHAs/NyRlZ5MAIYLbAB4ZwugyuwEyWw4iDwfISjKj24GBgrsOIgnBVlcAV2IgTOAlDQzvkI
p+LOHxwwWmADweWXZXAFdqIEFhK0cz7CqbjhBQeMFVhzCE5SVQZXYCdKYMVAO+cjHF3pwcXAaIEN
CKddGVyBnSiBrbQQ8HyEYyo9uBgYK7ABPdhpWgZXYCdCYNFSA9o5H+GYKg9GAKMFNhAcY7QMrsBO
lMBccwh4PsKxVR6MAMYKbDkI52QZXIGdKIGlAB1pPsKpWofHAKMFthCckrQMrsBOlMAaXIc38xFO
1fISBhgrsAPhDC+EK7ATJbCVFAKej3AqElnigNECWxDOijK4AjsRAstHVjTMbIRjaZUHI4CRAlsK
w1lZBldgJ0pgzi0EPBvhWFrlwQhgtMAWhHOiDK7ATpTAwlIIeDbCsazSg4VlZcBYgRkIJ7kogyuw
EyWwYqDAsxGOZZUeXAyMFhj0YC14GVyBnSiBjZEQ8GyEY3mlBxcDYwXmIJxlrgyuwE6UwE4YCHg2
wrG80oOLgdECWxDOiTK4AjtxAju9l28/A29FOPUZ8zOQahkFRw2zoZQVVVUFAYwtSQHDKVMGV2An
oiRVKzXoQluh1HLaHHh1m3OnXYzm3vnlMt/WMWSfdIEb3rNABQv/vJuwZ9Zy45mmivbcJR+DEppr
13GfRGf2dxMubm/vn2pHYZZCyaoU3nslY2vyl9bzQTumhUgYIY/tH7tC1a1ynoQiVjvJQWKWHptY
gXao6m9BinY20Ld1K6cIYHRRGAjOMVkGV2AnQmDdUsYg4NlA39atnCKAsQIrDsJJUQZXYCdKYMYd
BDwf6NetnCKA0QIbEE7pMrgCO1ECcwECzwf6dSunCGCswJqDcIqXwRXYiRJYKNCR5gP9upVTBDBa
YAPCWVEGV2AnSmAFO9J8oF+3cooAxgpsOAinRBlcgZ0ogbWUEPB8oF+3cooARgtsQDjtyuAK7EQJ
bDmDgOcD/bqVUwQwVmDLQTjFy+AK7EQJ7CxYsvPxt6304GJgtMCAB5uWClsGV2AnQmDTcqYg4Nkl
W+uqPBgBjBXYcRBO0TK4AjtRAgtxlCOg0xHx4jOgMJXZs7VFB6Hv/f/h7PP2EfH1QeCQj4iDyuiD
E+xMyTrfHHPbtJ93ofUhNC9MV4HfPFx3cXH+V3Kb35u7b39ZDhk8z5/lZ54tr58NqfzjTXhmrWNC
JkucdIkoKQYJvSXS9cp3XDNuu2fa6BgUF6TzThIZFSc+5IeMcaHrnOXMsglkvJDgx2/Di7DFutJi
v7olYWVo07xOd7JR041hq4Ifr8L+umne3n9w9RvDc5un3vnq2VAB8odToratT0F7zGxmmUfsmX7/
6+ad1R/tcMnyRbpdTIma0iLnz81mTPdLntMsb048Fc4zoeb30f5wPhLLOeYGk8uePa4RiJyxj8m6
fueQNLADd8vmLstf37G+dWk+RV2a75iCoCUTX5bf04+GHHKkUaaS9SFSRVdWbG4JyTU+K98u4/3F
kNzrh9vl5paQ/wDXnSRp02VE23f2vwzdVu+YWt9WT7NXb+4fmnsWvnUom23Yoc3Npnp+ktMUfpvr
4uVwI8zt9ysnnT4dEvNf3t0/aWU1zNSatNfirOk/fdNzuDX4pmdt1n7b80pT0Ic8u/d34XL5PRkX
Ove/Mr4NfXH1yUHtm22ZOY0Hr8X491zZFiSx/OfkfkPj1z8shpW+q1+mzGyrKGBcEsyePeb/K07W
egqOf2VeXD70fR5Vp4ecAHHrUpOpaT0u9n77e3v3t+Y3X6o216iCjORsdcUmvzxr7ha3gzCZ7xyl
oZuR1HVcKW1jP3WJ+ISbZ43vch0cPCWLkv3k+qw5QA1BD83KuVc+H8er6JcnTCKa6Sp+aFOylXxp
KwPr8G4zBYgxgIjA3cYHZQHeiY61qdrmYFvNXG1W6C7vRri69WHQ5PXN301/e33tb8Kwh+Wsefaw
zD1C7h3ufvlmuMGH/NAQEmLyD1f3F37xzfI8v44/5+B69YqQXDsv4/0wpbG8vYrn3/7Y0/zIj9fn
MVLBQzSEKd8RKxUjxhlDlIwuWq973slmiGjGSPe5faNdy8xpcnQ7W5OM/pjEkHM2mThERzCoWruS
ZNGCciOsj0QNsXIQhhKlrCdMdUz22ncs+tyIMZ10510UNobyZNElP/48bJCCDEImi364mTI3U011
EqnjTlIg1VqnEk++60XsJMhms3I5K++jsGDC6OnRHQpQymhudac573rTc8u6nvHhUWOcTL1Krn8R
5O3EjIqPcjh10mhJIcKKsoMJzyVwDnGTwDkPGK+nXhKm4czhNPanufqr6G8ult8+3IdcOZ5uuHp0
Q6bLf05uSnOTxzf+CrbIVlu0udlozA9dFEjd3oxbPM8b+h8itS4GMpVLc/6PwrlqjgWBK4Z2yD3b
I0Hp1kRndUTqWs3rbd+qBmsDLoaR0+3NWfM3ixp/dxf9IgaQiaDHYrK4vphmvMaZ+1PVxVNZsfH3
xXUzN8tdyqFiFnXTpP2WUV9eTRbCsBU9VIHpQIWBaciT0gBmhmAa6glolLlDPZPtnm4Mbp+4UtVb
cBTPOpzG3nTtJOMTTtEd2YDNfPPDzRNPNNcZguixT7LKldk7/uURlm+gdnlY//dWBm37JITbWWb6
OI6Rd358s7z0L/MrXFr6ZPqpN/Mg4oXh15ppNLzmsL2+VPQFeJEpG+/M4R34f3SZ6UhGPbrQ9G+0
ADUW4VuAtWHHaQIUbYXVB9P/jy8FVVq3WWjZeNUy3o+TFvDiEHybTiahNnvD8CT+vlXpzY/WO6d2
di1tvkSGbxFmRSI0OUe4dcEzY3pN+c6upTzhcvvTxVX03589Tn12iq+Uej3fv+2yYqlTSvFIVOKG
+OQ7EqRMJImkklA+Ct3t7LLyvo82f0iUToFIFjriXCeIZjYKb4TTURXvshq1OTxgffzuiRC3boMw
TgjDuxiESNgp0cxQz3v/PyDu3VEV4r97A0W2yBoGVSV4+6EwTmdsS4P45zOjQUcvNUs0Bel475zg
RltBbeyd8IH+ezdQKNZSIWpM/vuiCqdVB77q6eCWUga6IAlXRaJAk/Jlv0xHUA3RmTuVxqmuuSUM
A4yVXVMQTqkyuAI7UQJLyyBgPg+s6wQuBkYLrCE4xWwZXIGdKIE1dxCwmAU2NXeoYICxAhsGwmlT
BldgJ0pgKzgELEsPaQslo7XaJjXXmTkrexk17YMTPCqpe5N8lH1IlEZv2L+bAEFlX1BHl2K3tKoS
DZ6GItZ/rQWJGXZsYgXaITydt4JKiKKarWJVyQsxwNiicCCcNKYMrsBOlMDKOAhYzwNX1opiYLTA
oK9rasrgCuxECWwYh4DNHDCrSl6IAUYKzCgMJwvhCuxECWyphYDtPHClBxcDowW2IJx0ZXAFdqIE
dpxCwG4WuCp54QjMyoCxAjMYTpoyuAI7EQKLlnKgr9GUzgNXeTACGC2wBeEkK4MrsBMlMGMg8Gyo
yaqSF2KAsQJzGE7aMrgCO1ECc6kh4NlQk1UlL8QAowW2IJwTZXAFdqIEFg60czbUZFU5BTHAWIEF
CCelLIMrsBMlsFIKApbzwJUeXAyMFtiCcE6WwRXYiRJYGwoBz0Y4TFZ6cDEwVuBH4JwtgyuwEyWw
sRoCno1wmKz04GJgtMCgB1tKy+AK7EQJbA1YsvMRjqr04GJgrMAKhHOClsEV2IkS2DkQeD7CUZUe
XAyMFhjwYNlSZsvgCuxECCxbRjkEPB/h6CoPRgBjBdYwnNRlcAV2ogTm4ACczUc4usqDEcBogUEP
FtyVwRXYiRJYUQoBz0c4ptKDi4GxAhsYTtoyuAI7UQJrxSHg+QjHVHpwMTBaYNCDDVVlcAV2ogS2
QkDA8xGOrfTgYmCswBaGM7YMrsBOhMCqpQYs2fkIx7oagRHAaIEdBMcEL4MrsBMlMJcMAp6PcJys
E7gYGCuwkyCc42VwBXaiBBZWQsDzEY6r9OBiYLTAoAdnFmVwBXaiBJbS7V8Fopk57lUgGUhx0MLZ
UIrTyqpSDIwsyUwMhDOyDK7ATlRJKqdq9u7tZJAs3sMJUdHi8CPBJ93dXLqjtiaH5Hrnet7dDGoj
Dz9D+U+7mwfUXLI+JdcZ4YOmFr2zWbWWzSZNeAxtf1fz8NC/tKNZtwKxo1l2feeV9aYz6p83gUmj
OTfSiYzOKJVW9TGzS1zRJLMM/+KOZt0qVb57d8dkqEGq2xRQTQfdHFuIhGGyhkSBJohG2rQU7udn
57s4q9pwiwDGys40CGd1GVyBnSiBmagq751esLjeQ1Q4PfyY+v+9F1w1/jHAusyenvnf6BKs04Kb
jlirPJG6o8TyFInz1HUyhUi77pGzT1/A2hx+9um/pg1NzIX8PgnKc9KHXpHMgRImmAjUSM6sQmkz
m+fr/6ON8MoKJ3vinfREaWGIEr0lfa+TMVpbJxxKGz47bitPdgUknyw6cQ0Ss7OFhs2ktZ2BsojX
cMGeCJwpblVU6thpKGfxIV0Em+0c8PT+JV22EgVsJSDY+hHZ2xSscEQxLYlJxhLHbSK9SJElG/uM
AIpkZuNILNejifT6m0gDodPN6TJehRwhZIOaQbdx9LU5pX6gJOUV/bhZTCXo6LIiPdhBeUwzpuLq
OAkqCzJnokanmrpTEMv/VR0XOyIxbPjAGUTH0JMkos3/VR03G4nVJqJdEeseLq/CMF/68HPz/LMf
/eLZ4uHm2c9DBoohU+34v4vv4yITar//9a23L/jzzfOL/Ob5Z5+9++Z5Ej1LSQsSJaNEUq6JU50m
mhtuXB9tb22zuP17YtpmeXfll99OOWsbMHvt882Pfc7Qcsaa63h9ce1/PlNccmvHl9mib+L99M6p
5BlTd0wCXdwt4qolPrevNJnNuWK8+eDy9Vdy7hmf2+1ztXq1iKv8nPlzO7wDk6tOIpzJfTMkG5wa
pXUdO2uGdMBjZhpyfRviqlwvVmVMRv6Nbcik6MX1ZddkO1ZvrKA2b11e52FqU+QQDVn46yHD/aOP
T5+3712/8Xn4pCH9dRhn6J/AjxqSos8VKy7zA+nKZ3nye2O3dnF3u7hvWENWz0+vszm5if9+lQoe
WEGwLXPqCOU3OtfH0YfmKyYNl842XDEqqGl+tvpCcOK7SxB/2OmBT7pn95InvfGtv/kmfnbT395m
J/J5YB9XOU5AUC2rjR4anNVc8HgTyRsffTZRWjb+JjSr0ciUNHaVMQ1kkjvy8pxRCLP/5O5aW2Sp
gehfafykaMckleeIgk8UVMQnKCL91NG7XpndFUX976Z7Ztcxnb2d6sqsD1DYu9sz59RJdSWVRyWM
sEw/6q6Xo3HcJYehHLjm4b/aCnC1EJ7Xo/dD3bS2t8BtB6OcxlGhAHN1BpCyQwl6jIoUfeOtLYJq
S/fnicgZfjVNOVcQvuFmOITYOfX+zXU13SsF871Sz3/389X9feYpUhaA2vP+LdeeVyeOgkRp9/2H
6ulTtXfK1YOXslaDl43iZpDC3qXdbkq7+8cpVROatf9mLkR7qt+Tlonadn+T6b+iTXYZn0kidQFP
OoH+R/RK60KO6P9717GOKlH+hXuwduGeHForQLc1SKdrpwSvRQ9NPTSD7xopu0YPL3MAJ207CWZ5
rTwfateYsea9bPhohZPDeH7h3rh/8kAAtiVGqRuu3nNT2TvDp6v3XPSgi6rQGn529Z6Lq2eGvyYN
c+SQib2Dz6WL2gWG+QXwwrMXsgZbpDAhNKlGoWdCCqoRtIKYF2sgzySnx5CgOv7WKhgbV7eqHetG
jbIeZavr1irtGytBqj5J1pL7hPRQkKeGgjxnKOiZcvwCHXiZoSB9BSZrRfNsKDg1dlol8juU3NZz
rtZbO99o3XYtdx2ktvWcf12KpFmdl3kIbbGtp7l5tG091aR804YZhYfE91YkXTS12m+gHxredrzp
xUqlp35w0ozQKN6KoW0HKY0aHTR6GHrdNv/UJh+745xJYSkmxxOihEPRJehgp4shTcKSSGRokjtJ
PNPROnkkaPXMlVTbZ/FxwFjZlUjCOZsHl2EnSmCjOKW9o00+2e99kooljwPLZnOP1DOmd7x+kZaI
vBp3wYmAx9ErrYuhus7DI4ez0sOmE53tVGCnetzYYaZpV2k+Ay8ePfyThY6DNYKBzA8d2hk9Cq6M
1urZI4ZWKuhNy5W2rei874UTQ/iHGaH3urf/4IhBMK0kxeQ4eOvtJ2tK0MF2XTqpiZEkEhmaIDq0
QMck7uPg1VVIp0Pk+/2026j5sRvufhd4OeWAwzi2TWN/P/36m8OhD1MKb94tTYyHISzUffzxW8c8
9YM5Tc26nDRBM10WzMjVs87SWJrPZANjvcPYJJzneXAZdiL8QDLuTQp49ayzJJzpxgFjBbZJOCF8
HlyGnSiBJU8Cr551loQz3UdgwfOA0QK7JBzYPLgMOxECAzMuKfDqWWdJONONA8YK7JJwXug8uAw7
UQJ7KRMHNqUqeWAzACnGtUxZuHqoWhKKROOA0S3pUnACRB5chp2IllRMmuQ7unqoWhIKOuOAsQL7
NJwzeXAZdqIEVumYu3qoWhIKOuOA0QK7JJzJhMuwEyWw1ioFvHrWGQgFnXHASIGBJ+GMMnlwGXai
BLbgUsCrx+iAcHYTB4wW2CXhLM+Dy7ATJbBTPgW8OoUJhILOOGCswCIN53keXIadCIE14zIFDKup
FBAKOp+ARR4wWmCXhFM6Dy7DTpTAQqgU8GoqBYSCzjhgrMAyDadlHlyGnSiBpfUp4NVUCggFnXHA
aIGTHgza5sFl2IkSWNnkm7qaSgFt7RIBjBUYknBa+jy4DDtRAhuRBF7NcIBQ0BkHjBbYJeGUyIPL
sBMlsHWQAl7NcIBQ0BkHjBU4DeeUzYPLsBMhsHmgwBusZjhAKOiMA0YL7JJwxufBZdiJEli6ZMuu
Zzh6e3UuHDBWYK1ScMB1HlyGnSiBwfIU8HqGQ1tuQwCjBfYpOCUz4TLsRAmsuUgBr2c4hujB2cBY
gY1KwoHOg8uwEyWwSb46aj3DMUQPzgZGC+yTcAry4DLsRAlsdRJ4PcOxRA/OBsYKbFUSzrk8uAw7
UQI761PA6xmOJXpwNjBa4KQHe6Hz4DLsRAhsGRdJR1rPcBzJgxHAWIGdSsKByoPLsBMlsFBJR1rP
cAgFnXHAaIF9Es6oPLgMO1ECS5O0cz3DIRR0xgFjBfYqCecgDy7DTpTAKv3qrGY4im+/0h0HjBQ4
EEvCGciDy7ATJ7A3KWCbe5PxqLltOu1M7/mzNz06N8AwmME3AxhpGw160KYTUnS6Ebz7Jy/EnqUw
AKWlWLSWpnnlBSii/VcniRldmliGdihPt8KmKK6mmkoQQ0k2MLYpBE/CKZUHl2EnSmBnRAp4NdVU
gvhWZAOjBdZJOJcJl2EnQmD3wO1XejXVVJLkwQhgrMCSJ+G0zIPLsBMlsFCQAl5NNZUkeTACGC2w
TsLpTLgMO1ECS6NSwKuppgKiB0uj84CxAgNPwnmeB5dhJ0pgJZN2rqaaCogenA2MFlgn4bTIg8uw
Eyewton9tLr0flrHtLcpC1dzWqWIr0o2MLYlVfJVMULkwWXYiWpJq10KeDWnVYr4qmQDowVOviqO
izy4DDtRAnupUsDrOa0menA2MFZgzZNwVuTBZdiJENgzsMk3dXV1UmmSByOA0QLrFJwSIg8uw06U
wGp5IzXm2F90cDz7+GeKihaWevr3P39w/KyofVIiIEuUXwaMr5UBc84LUKOrvfJjrRWIuhsbVyvf
6aaVRkjXvmysGXotoW4br4LWWtZNHx6y1vdt650UTpyXAfv5u/6FC5u+oQqYkDlVwIRMVwG7/2vS
MCWphpWqAiZkfpGp8OyFrCFXARNyexWw2QhHr2B0mSpgBRrIG/p7NIV+VBUwJYRtWm1q23pZc+Cu
7gfb10PbOyGbQbVKJslackNMZM8r0773DrIyrd0JzgSYMkQOw7dTZeDDNz/vxyr8f1bAoXWj5b4x
teBdVwsnIchlu7qVXcvHtuNSqirAhfnkH84+1vWgpB2GWvayq0Hwsdaam3qQg1BODJ1P5JcCmPOF
TPr08OtJv1MlrI/ee2dXGd94pbSs+6bvauthqAfT8Xo0ox4b5bn1TYqWd/QqdVEXtx/nLu70ap3e
EF5dNd2rnO8CMry942IXfja+6prDYX+3AnNoboZXf5zGb09vvguNFiLGuP/21a++roZfwlAnFGOf
u+tQhfzX61e/OgzY5nxlcoF6rkDg26DJ1HjCWF8PHfe1toOph771XdBSctu+ctf8x4/wANEL7epG
jGOtvOzrTktVN13fjsKN0Pfm6wtrjOhL92MI1SvVNMMz6Uqa81/SxnhyvaHt/ed+vA+5qQKN6ecu
ZAWh3wziEipnBgMUU7BaDD2rFvypCY610fsqmMmr3557rn7t6Y/TVNIfaWxyhdhS/XUhhwg22VU9
STrONeal7ttGjQqGu7Z5fX6f7y8tCjRvnh7vvPo3kFxezHV7/d3fbuZ6KfrO54MJcx5zz+GV44MB
/au1Z6uvnxfJxtGQTNdXFyA1ae/yDCzzgJHTEoFYEk5nwmXYmT8tMQGXGqU2/fe31/PIb7rk4Zjd
Vs23zT7ExGr+6O1P9fHXKR5GFrqR4DQAve8ph6vbJ83N0zQo0HOooxMc9jPm1XD19PBr2H3yZN/9
mkTUydmu1fVeTdoqjgHG+rNRSTijvyzRSb15vKfl+c8/fPOlSnuuX0iieZ5nXIaqqLfHOihq5qdh
tPtS5fUDdjqeihJmdTFbO0nznmxgrPc4mYQDlQeXYSeqPT34FPDqYrZ2lCpjGGC0wDYJZzPhMuxE
CKwZv+/PiRE3JuYp9WtLEsM2kBcpOkLwMnQydEI1oHSF+un5r7vgcD81t/ONmMcU9RQkQ4xMoQP9
HrWsK2dXhqtJasquUEPfKnV+YegapelOcd55o60VoPvNN1mlLprdoobW/wc1zrrs80wNN/dUpQRS
/F8p0HvvII3bVfvr05z0fUh5pTpmlad/Js1fvxXgnzGfZVyVu80gfGSK78jd4L9nd+SCy+Utnf79
95vv5hso5pD4dfXzVRjPjkP3a/dkCJ3YLnSuYeV3uBn6c9VFtb8Zrq53Z4l/yOx5MpSb1Xi5KtiG
O3pnaGeX5xR89d0QUNqhufn93bufgptIqXrPO8c7bR7qUu93BpzuNMGOPYh80COOxCqFYVKXydhO
affRfarf5lty/qjuoG0KWoH5kjK5tQIZVLOukYMbRDcYG03AhXX5MJnLpkW2n9vg70+v7yfi/gVc
z9/C0yRc9Cq+NK2+fDo/9vz8bS/NPmkD8mkgdfY6rj07z8Plmp0IE2EsFfMLw6k1VBTg1q58V317
aNo2NPkcTs8HOEX5LaL46910P3CpKK5Egk2xNYGldwr5l3cmhQi/TglB7L9XWiS8Ji3nQz90rbaD
JjjFPNqlOoNxkOkMZbt0JdJsVM67WrpLN8yKVS+keF9odM6bDgbXtp1uaXH8kbmi47iQR8dzGXH8
/tn1OB7MzvGNnDi+REUBbn9lM+M4kd+l47hMsfHWXzqYxrKckqGm961ouPQdoWXmYEprEcu4V5kt
UjiYShSbywZTx+C8ZF86QL09NdRdCLr9qT+ue317G3Kf6mq4Oey76/MIFoaWthG67cBprefk4Zv5
4W9OD0+KdkO4u7wPuGFBq3ly/NZhWoa/On1x8+0Eut8gdpgWuP7m5+Ew7WOebZ8OZu+qz9rbH29u
5yRJMVm9/+knr1S3xz9JZoIOTJkX+8MVAJO8Fvy6njZMTBtHXqn6/ZTmhVnE+Tteqa6a758edpXg
4cf9j9OPXH39SvXTz9/0h/0EfY5/97S5f3j+WDd9YSDb3u6fBOG04jbE0/sdQdenZku1mX2MNlsk
oIaT5r4fjTYyTw1mpcg6oR+DbIbGiHlzx7zi/whtyumMx6ONdo3EFIZnwpjHIJuhMcI1JtrudAzw
QrQLHCIMNEE8iiss1CVt5Hk02mgP9imyiqvHIJuhcb4HK8OEkpQJzJiOAMqEKp0Pti1FTMJNY3lw
ikIiFkWp7aKU4IMVRakFCWWY5aSp7oUojiIKnQ9alDjj8XOOoUgkYlEMYT2iBB+sKEYvSCjHtNAU
EgtRPEUUOh+0KFH/IPg8HDOkwBaJYoXcLEoRPkhRAt8FCeWZkoJCIhZFWooodD5YUaSNSIhpJGFp
XWAsCqH3KcIHK4pSCxIADMRynl7//slNc5gHUT8Nh/3Tft9V15Oht0+GQyAzdBaElsaY3v8+nTL4
9vD09sf+/pH7Cejx9sfuZppaeP9peGQ6ZXw/Af3ScbfTq3B/wOOlaj6D+9EJ8fm//vLC4xH/5Mkw
/DR9Pkye7J9UP4bNp9VxfvuOj/fu7ijKkpfmTHCgtGrsZYTuvAgftJe5iITcSc6UJb3/sSh2e891
x+dRXz3rFyQ0Z85ZColYFLc9RSjCByuKi0nATgomaSQiURzn20UpwQcpSuC7IKEFs56UpyxEIfRc
JfigRYl7LrWTgZ1zFBKxKLA9RSjCBysK6AUJLZlSJQOtA0KgLcEHLUocaPXUMkZxColYFEPokkvw
wYpi3IKElswpkrvGolB6nxJ8sKIseh+zk8AkLU+JRPGCEGhL8EGKEvguSAAwJfwFRtqXTREuSJyU
IpidBmYkafS58DJCPCrBB+1lcTyyk6s7RcpTYlEo3XkJPlhRQC9IaMW4Jr3/sSiU7rwEH7QocXfu
dlIxaUjdZyyKJaQIJfhgRbF8QUIrpjUpT1mIQui5SvBBixL3XH5qGcdJ7hqL4sV2UUrwwYrixYKE
1szSAlssCuGoRxE+aFGiQCv5ThpmLekd/rsowGF7l1yED06Uie+ChLbUWftYFL09RSjCByuKjkmI
nbTM0+ZTY1EIvU8RPlhRLF+QAGBa6guMtC+aIlySeE6K4BMpwomXdkxqUqvGXkbozovwQXuZikjI
nXQMPGmKIBJF8O09VxE+SFEC3wUJ7ZguubQSQLanCEX4oEXxEQmYWsYKTyERiyLNdlFK8MGKIs2C
hHbMORKJWBQg9Fwl+GBFgbjnUjvpGQfSQCsWxWxPEYrwwYpixIKE9kw4EolYFMLuqyJ80KLEgVZP
LQO00Wcsitu+0agIH6woTi5IaM80LaONRfHbNxoV4YMVxduIhJlaxtCWRyNRpCQkkyX4IEUJfBck
AJgBfoGR9mVThBNxcQHiz0gRhADmlJdGPZQimMnVvSRtSY29DAjxqAQfrJdBHI/sDjgTtKXVWBRF
GA2X4IMVRfmIhNtxzkCQkrdYFMo8Tgk+WFH0koR0TKpLvNZxPPrseFj0KtR0DgVXq2MN99tDM/1x
GaLMwzHKnAWppDXyAtY8K0g5zcBzDvahIOUm/9dACgqx6znCPMaJzyMuS0x8IxJ+8n9LCwoLUQjj
gxJ80KK4BQngzNNaJhIFKJM7JfggRYF4cgf4jgsmJGmGKRaFMLlThA9aFL8gIQ1z0qQqta7dVwZg
zNZzk0hgrJ3GLOHslODlwWXYmXl2cQJ2LPw7BexWge3mU/dIYKzAVizgIDBRJg8uw858gUEwsKTY
sqCzPdYV4YNujTjWiSm2aEEiEYmi+PZ8pggfpCiB74IECGZp2ygWomyfyC7CBy2KWZIAZsFcYLB7
0amESxJ/xihdCya1VPyhxUaQwdOpRxZjJ4Pt+WERPlgngyUJqRlX9gJt9dHyY/d2PR6LyNWfyapM
lnpJa57h/1I5Zpz38IwXACQTnLQSFb8AavsiUBE+2BdAiYgE7LhkkrYFKRZFb89Si/DBiqLdggQA
k7RJzlgUQxiknPg8an9sZERC7bhhoEnxOhbFE/qPEnywovglCTBMORKJWBTCylgRPmhRbERCTy1j
NGn+IBJFE87XFOGDFCXwXZAAwyyNRCyKJEzylOCDFUX6iISZWsZ5UrSPRTHbN6sU4YMVxfAFCQDm
LjLLf9nE54LEc7ZZJk9inXgZ5i3J1RdeRohHJfigvSyOR3bHLeOGVNYnFsURRsMl+GBFcWJBAiwT
lkRiIQphyq4EH7QoOiLhppaRjkQiEsUIwuxUCT5IUQLfBQmw1F3BsSiS0HOV4IMVRcY9l59aRgsS
iVgUIIyGS/DBigJ2QQIsM0BK82NRKNMOJfhgRYmnHRSfWsbSusBYFMKOgSJ8sKI4tSABlnlOGn3G
ovjtKUIRPlhRvI9IiB13jNPO3UaiWELvU4QPUpTAd0ECgHnlLjDSvmiKcEnilBRh4uWY5KR5k9jL
CN15ET5YL5M8IiEnV9e069liUfT2nqsIH6woWixIgGOWtvd8Icr2FKEIH7QoOiIBU8s4ILlrLIrd
PmFehA9WFCsXJMAxb0gbYBeiEHquEnzQosQ9l9pxzzjtlEIkiuPbU4QifJCiBL4LEuCZBFJGG4si
CIG2BB+sKCIOtHpqGaBNCEWieMrA78TnMXufwHdBAjzTvKgoQPCUEnywokDsKWZqGVN0+7knLNcW
4YMVxcgFCVCMa3WBkfZlU4QLEielCGZydWtI8yYLLyPEoxJ80F4WxyM7ubqnHXmPRSHUlyzCByuK
W5JQgaEjDbQWohDGOCX4oEWxEQm3Cww1bUzxd1EUJ5RDLMIHJ8rEd0FCceZo6zuxKIQzxH/ydi7L
ddRAGH4VvwBdunTrMlWsWLGjimJNHRInpDAOZXN9e0bHJEDrJD6tv8cbFhCibz710aglTcuFxyol
aSl9i5FCgzJaLQWore/CY5XCsxSOlCuUp2gpwDk9Fx6rFFFSJIyeqdjhfSUlAhXTXXiMUnbeCYIT
lQqN9lpKXN9FcOGxSoldQcQtJupYhXItJa9P3lx4rFJymSAyU5QjKicfmiIcCX5NinD5hPmZizNF
7PCyjjJe30Vw4bFGGQcFkbaYibFL1ZSUBHzd6sJjlLLzThCcqWJrsVoKsLXiwmOVIlFB5NEzTaCe
0VIqIOWJ5yU/g9t5ZylcKGBnWCYp6xM/Fx6zlKYgeIuFuHtGSo7rm3AuPEYpO+8EwYUKthOopaT1
ZVAXHquUlBSEjJ6p2B1gWgoDkeLBY5XCMkFwpZigNF9LKeurdi48VilFQ5QtVsrYuTglhYE7yVx4
jFJ23gkiM6VyROXkY1OEA8GhFKGMUC/YuomOMmD/24XHHGVVQdQtNsrYWqyWkoB1HA8eq5TEEwQ3
EqwYt5YCrA278JilNAXR9p5Bi1hpKQIkkx48VinSJwhu1As00dJSkNe5B49VStEQfYudIlZiWUtp
wOv8H54XldLCBMGdcoaSNy0FOHbvwmOWwv+FqFsIo2cYOwKmpMjy9VtOPEYpO+8EwZ0Ktkqmpchq
pDjxWKWIjpSnu9w7NC/QUpZLOjrxWKW0NkFkplxfosSqY4pwLPh6inDm4k6doWVHHWV9dR3Hicca
ZT0piPNd7qFD2bCSUtLqHMeJxyilpBlCdr4GDYqTlNVk0onHLKUqiDx6hl13EQoDby4PHqsU5glC
AtUCzSm0lOUzNU48Zin6zcVbihQ6tBarpVRg4ufBY5VS+wQhkRgb7bWU5ROdTjxWKU1DyLYT5gsT
LS49P1NDlmtIa7VyzQ0bn3MHu9Rc6+W65q54zqtq5T41HKn0dqlhfr7higgeDffrGjYLrnNzmVqR
65q74jmvF5wSpQrNVzROWt1bd+Kx9kaKE4QkEvFcI6ppdXPQiccsRRREGT3TIpRUaimyelTOiccq
RcoEkZm4HlHT9Ngk90Dwa5Lc9qkkt4xQ7wL9/nWULVd++8jzkmUdBq+CqFvKFLAtXy2lAvmcB49V
Sq0ThGSKCeoZLWW5UJkTj1VK02+uNnomYWdrlJQWgTUiDx6jlJ13gpAdAvtQQ0tZ/mLYiccsRb+5
+uiZjG3GaSkZWA7x4LFKyTxBSCZmaC97kgIsh3jwmKWo5ZAYRs9IgiC0FGCgdeGxSmlxgpBMJUAL
VVpKXx9TXHisUnpREHH0TME+KFVSelxfI/rA85JSepwhMpO0I6rfHpoifADPB4AjKUKMI9SrZwEQ
7nF9NuzCY46yqiDSCPXGEISWAizku/BYpXCbICSjl9JqKbI+G3bhsUqRpCDylphCgCC0lLK+u+HC
Y5VS+gQhO0SDdhO0lAq8uTx4rFKqhuDRM7FAEP+XImH5M2onHpuUwTtBCFPGKsJOUoCB1oPHLEUP
tDJ6RrDyKlrKcoFnJx6rlFwnCGEqDEFoKcjbx4PHKmV6+5TRMy1Cq2RaCrCN7MJjldJmiMxU2hHV
b49NEQ4Eh1KEMkK9YxtmOsqWKy058ZijTI9HdUtCEVuLVVLWq6I48Ril7LwThAi6FqulIK9zDx6z
FP06b6NnGMtotRTg5LsLj1VK7hOECEmHZp9aCgNvLg8eq5QJoo+eqdlxxU8isF3rwmOVUsIEIULN
cw9b4vK9Uk48ZilqoE3n6/MDdlxDSUnLH7g68Ril7LwThFRqEVr70FKAg0YuPFYpSRRE3FKjiH3S
o6Usl2Zw4rFK4VlKZqr9iOq3h6YIR4IjKcLOJQ3db9ZRtnxfgxOPNcpER1kaoV6wrVUtpa1P/Fx4
rFJanyAEvjRCS1m+fukjz4vOcVLXUvLomd6gZUclJS9Xk3TiMUrJcZYinSJW7klLWb7L14nHKiVp
KbylTrlCA5uSsl5UyInHKqXMENJJsDRfS6nreZMLj1VKDQpC9p5Biw1qKcCqnQuPVUrTKULZQqSM
HStVUtZr4DjxGKXsvBNEjiRY8qalICmCB49VCssMwdRDOGCmfWyKcCD4Z1KEGDNlTo0/mSLUPdTR
Op86ysr62rALjzXKSpsg8k7H0ERLS6nAT8+Dxyql6p9e20KiHKE3hZIiwC6CC49Rys47QaRGWY7Y
HNTj0Xe/vB6D0c/v79/9+v5haHrz7u1vD6fxH+chqnx6jCr/DlIv9zSfG6RaIeEk8XJVkDNXzlSx
qkA69JBEzIPHGnpTIta3kKlh1Qu1FF4/euHCY5XCaYLImXqFXh+TFCBSPHjMUlSk5LAFpoBtLSop
Zbm6qxOPUcrOO0FkRle8tZS4nnO48FilRFYQcfRMSlC4aim8np268FilcJ0gMlPByh1pKQJJwXms
UuSCFKEQjqgxfGgidiT4Z+Y4suNw6r1+YoqT04j0FqBI10HW1z9Ac+GxBlmPGuLp9FQ5oK++mf+3
j8/1chQq1D9L5TPHP/JpPhP/iTu13ntsn/4BZKaO5bjqB1AD9APAeYw/gJ1XQeQtCIUGDfVKClK5
xoXHKiXNUrJQwiozaCnL90A78VilZC2FR89krBi3llLWl6xceKxSikwQWYiL5652BdbxXHisUqqW
IqNnaoB6RkvpgBQPHquUPkvJQj1Ce4NKSguQFJzHKKWFWYpkSulCYbrKzxVskxZ5tTCdsWHrc0a+
1Fzp/brmrnjOKwvTjYaZJMRLDcvzDXdE8Gg4XdewWXCfmytUUriuuSue81rBZQuFouuxprZ8c7AT
j7U3pE0QWSjGI2p/H5u6HwiOHLMcXIUYuzJHRxmwh+rCY42yoqOsjlAv2Ak+LQW4L8CFxyqlzVJy
oZag17qW0iEpOI9VStdS2uiZLtDGkZLS8/pGlguPUcrOO0HkSgGrnK6lALt7LjxWKayl9C1UigWC
0FIKIMWDxyqlzFJypVQhCC2lQlJwHquUqqRwGD2Tsdoi/5dSQlwfaF14bFIG7wSRKzEGoaUs36rn
xGOVkrSUOHpGsEP1WgpwzNKFxyqFZYLIQikeUTn50BThSHAkRRhclQq2cK6jDHhzufBYo6zoQTqN
UK/YRraW0te/mnDhsUrpM0Su1LHl0EnK+mkDFx6zlKog8hYahQhNyZWUCBzWceExStl5J4jcKCfo
TaGlIHMcDx6zFP0659EzjH35pKUABwBdeKxSuE8QuVEJ0LKjliLAQOvBY5UiGkL2nkG/fNJSGvBK
9uCxSmlpgsiNeoAgJinrJZ9deMxSioIoW+gUMjTaKykJKMbrwmOUsvNOEFkopyMqJx+bIhwIDqUI
O1enhNWI1VEGLIO68FijLOvxqI5Qzw3aytBSkNe5B49VCvcJIncSrO60loK8zj14rFJEQ7TRMxUL
Vy1l+Xr3jzwveYB28E4QuaNfU2opwCacC49Zik4R+jYIsaNpSkoGzvq48Bil7LwTBAeKHYKYpAAD
rQePWUqfIXa6On8PybWk/5xbeXW3t/bn7c347188vnt7f7r74v721z/eP/y0zwa+ePXj6f7t7c1v
j7cP96efb798eP/+15tfTo+P+x94/eX9b3d3F9tt4Zl2H28fH/f5ET3enX6//f7u/avT3f7Pt+/u
vz8/+YcWdjm1vXr1w+mHKqfbdJZznmTd/PMXjK44/997D75/OL29vYjT6zM441/T09N//+/Tf//0
9DtEK+nNqeXXqbx+8w/EVz/evhp/aCfYYW5v3r05OxyWbvZvXH8+3e80P48J1Nff3Px4erx5+ste
XwJMMfsDfnv+03cD8XT/14iWP07vzlPENzvfrz/eKsbT69cPu9T9ec6oWqWE8522WH1q/TPr628p
Fx7rz6zzBMGBJEEQWgpwKsKFxyylKYg4eqZiJ0CVFI7rbykXHqOUnXeCyEKcjvhA/dCk80jwa5LO
y1/Wn7k4UMMq8ukoy+ufBv/D85In5Aevgkgj1HuHFp20lLL+gYkLj1VKiRMER4oCQWgpwAcmLjxm
KXo8yluMlLF9IC0FWEN24bFKaWmC4EhcIAgtBVhDduExSykKgkfPlAStkSgpAqwhu/AYpey8EwRH
qthJSy0F+LzRhccqJemBVraYKFRoYNNSgPtZXXisUnLTEOcMD7tVTklBit248FilsB5oy+iZjJU+
1VKAC91ceKxS6gyRhSQfUYv72BThQPBrUoT2qRShjFAXrDCNjjJgDdmFxxplTY9HdYR6FShPUVIK
sIbswmOUsvNOEJyoYWddtBRgDfkDz8tK6QqibTFTCNCpKC2FgQzbg8cqhcMEwRlN3iYpwDKoB49Z
CiuIPnomYycttRRk2cGDxyqlxAmCMzH22Z6Wgiw7ePCYpch/IdoWwuiZgl3oqKTUtDqmOPEYpey8
EwRnatj9g1rKchFxJx6zFFYQcYtMASuSq6UsfwDvxGOVIm2CyEKFj6jFfWCKcCz4eopw5mKmiNXD
1VG2nIg68VijrGqINEI9Y1NyJaUtX0XqxGOUsvNOEMzE2NFVLWW5iLgTj1mKHo/yFoUCdsullrJ8
Q54Tj1VKkgmChRJWTlpLWb5nyInHLKUrCB49w9hJSy1l+SpSJx6rlBImCBZq2AVqWsryVaROPGYp
eqCVLRZK2ERLS2mryaQTj1VKixMEF+IKTcknKcBA68FjlqIH2qfLji70DNdSnynOVXqQtSJk5oaN
z9nDxedsoVzX3BXPeVURsqeGE5XQLjXcnm14tcqbuWGr4Njn5ph6Kdc1d8VzXi84FirYUXuNk4Gf
tQePtTfyHO5ZqPIRdc6PTXIPBIeS3LJxQasDTFEGTEg9eMxRpn/zdYR6rxCEliKrZ32ceKxSpEwQ
XCliBaG1FGSW7sFjlTLN0tsWK5XuuTnYly84cOKxSulxguBKDZsAainLtZideMxS9Jurb7FRwIoS
/l9KDXH1rI8Tj03K4J0guFHCCvZMUoCB1oPHLEUNtDGMnhHPbz5rkPVXsguPVYr0CYIbNWyDX0tZ
vovaiccqpWiIOHqmYz2jpKyXmnLisUrpM0QWanJEPeVDU4QjwZEUYefijh6VVVG2XrvLicccZVVB
pC12SliRVCUlLleddOIxStl5Jwju6MK5lrJc7MOJxyolJwWRR8+UDL0plJT12l0feV4ybxq8EwR3
6gHqGSVlvXaXE49VimgI3tLOh+1QainLxT6ceKxSGk8QEkiwiZaWsnxQ24nHLEUPtDJ6pmAVR5SU
9dpdTjxGKTvvBCGBOtYzWsryd1dOPFYpKSqIsqWInnrQUpC8yYPHKqXMEFmoyxH1lI9NEQ4EvyZF
uPzB/ZlLIiUs1KcoA8YjDx5zlOnxqI5Qzwk6mqakrNfucuKxSmk8QUgkbtCyo5KyXrvLiccsRb/O
2+iZkiEIJSVHIEXw4DFK2XknCIlUOzQl11IS8Oby4LFKSRqij57p2IU1Wsryd1dOPFYpHCYI2emw
r1e0lOXvrpx4zFLUQJvC/rdSwsqraClt/ZXswmOV0uoEIYmYodmnlgJs17rwWKX0qCDi6BnBvitQ
Uhh4+7jwGKXsvBNELhTKEfWUD00RjgT/m7oz222mhgLwq4y4Aom43pdBIJC44AaBQCAEQr/SZEoj
2qSkLYuAd8dOmgDHaTvH50wQUqV/aRt//nzG23gZM0Q4vkWouZwWgRbqMMoIzTkLDzbKNITQ+Usk
RYKAUlx7y8XCg5XibAXhjJCeNEENpRB29z3xnHP5SOEFECZ/CU0rGSil+dJ3Jh6slOAqCGeEiaSS
qaQQWi4OHrSUBCBsKRlHu7AGSHGqfYjAwoOUknkrCGeETyQIKKX5qiUmHrQUCyBcKZlIexMIpRhC
b5iDByvFqArCWSFpt/hAKYRl908857wrpvACCN9rKxRtXRyUQml9OHiwUkItxXih/BSn3047RHgC
TxOAvzBEUEoLlaQ35rkhgi+hbmhnS8AoozTnHDzoKEsAIpRQd7RTSKCU1L5EloUHKyX5CsJZ4Wln
YAEpXhL6OBw8SCmZF0DkLysibdYeSqE05xw8WClG1RBRWD/FvitYSX91tyw19O1mvcoXThRNV6sf
Hrfz8s263vbPV9z+75r7fLl5qeaOQQRppQvP1dyxxH+iLcmEoUfpNHHwoEPPAYjUayckrZMCpbj2
1ZAsPFgpTlcQzgkVSG/RoBTC1joWHrQU0JwZWUpG0y7JBFKCaq+5WXiQUjJvBeG8cLSBD5TSfFUI
KiGxaMfIXRcJqQguJdYIi+hoFEL/bDb+yDA8zMVLEhjiTX4fjqg5zHljJeM+VHb+ilMMamTNYdSJ
xRPGIAc15yPzlamaTVu9PRdqWOffrFR327qTQMqNre764uNhh+K8bDSWmtCSR9cSViDKQ0rI7xdR
3oj5eFFup1F36va00vPWVkk3LDwf1l1CbnTzWw45EogCpprmjHJhrighfOWSsPIegxpZXzlwYQkM
QrD8FkZ5nhS+SvrJrhDNFzovEA3HTPOSwfqsnQY9sHeI8SCwc5PB77qe2jAwMfskr29ebVZaX73R
aGX00G335g83r9pO6tUbN69+sbix/7u5VStbaP32JPuOvcreoNqXnyoTV24Xc/v6ne1N8t1fH83X
ejabvmN1vfrxDVX3yGrWwfzePt1M1s+Wesdi2Htg/mbW5a7eMDXL1JP55q7UK/OueWNht4+15jW8
emO5WtSbal0tat19qH9dTlfPzMtOSAnxVwBcd/9+f/WGmRiq7Nfmi6s3zO6s/hfV4m6p5s/sa8vO
CNDC/T9Q/63puv+KnZrvf7g0YqvZDrlabObr1fAlY9WdWhmaRp/JAPXOdL0yW+W+0/ObLlhcvWEe
b96q1pP71tSMbrbUPGIxm3WWWifd+85MlcdvGgY/adfbXSGZ0nn64IeTdrrWXYU0fJ4df/UXXU5a
PWtsdDNO5v3CpN/SvmP/55/HHs0LwB1Nh/RW4PNz8mNQY0MUc2EhikOw/BZGhCheUEwG1PMihZNU
Wo7LWKQiS0VAJxVHqgYHvlIRiX4XiBprodPvGBxDbL8HRLklE0EC+GWPQBUFZPwY1bv6K5PGLcGo
kYUtaShWcmHLpI7gWKRi9QEuKoSJkLLwe0CU31EARxDAXypRpLggx1IgjxQEJIX+YNS4wjasXFiC
yBAsv4VRugoKRi9sAhJj0DikYkvFEaJkAaAYgYpfn4hC60idP5g/mEPo/jTsh59nQwd+e9FEf5nS
j5dmcafb1nS7J/biqEn/iBOY8nxM9zSGA7r7oWQlK0mNAYqewzA80XizPIcldKvN36VhlA1CXZrA
4ZFxHz7u70MIBubkD7f4bm8zdFBJpZCUUyHjJ8ANCe/hUNFuckoEu9asJJCgFkIqdRZZfKwYPaXY
YekbAFoTRErKEKvoGfgIOHoZ2NfwwKSmNRg1MohD6MRiPATLb2FUlCaUHaPS5wX4x4fbV6aQYMVR
o2iJWSPddBBLSt+1fHgSn8iiQMxNwlHs5I9TcYATzRrUMFES4JYl4QCJkQhF6gK5iwaDjsJhPhdF
SalZwaiRFiLkxKK+ntR+iDXtsZrsAiylClECEKzYQMmE32mb1dNWlTNd25XLcrG+fX71qD18qrtF
x07xuulsey//EjoJHRtPh2FYP/vxH8Kd1KVRYdZeHP+njW7XE2XSS9eTB21mCX0jd6vsgDY/BJjV
GtVUIcTOaJiTeB5os15sqltbNvbHz7o928HacH6qM/LcVIk1UkhXsGz0WaaiY1jhgd06lnk+5phC
gJQA4JwuEEtJHnZzdAhuUymAZBgrDDAsz+I5Xm35cu8iKEN7uqinVdZs5t2614nKmrJ27ZbpYRbm
GaXGXKG60pDHLyaHUI5i8Ie94/unlVkoq1vb7G5mxhK3EZ1r/mOUHte1+fXuR+bmG/MV237ZboG5
h2ijX8s6h3hk6BWg++e1zK5jPnoyIL6KwfaTq3+NHVty2e47eyOiQ9MQOWka+xfa1p1ldyrd5dA2
Y8HljAuPIv3N+9aM4YOsPy30RAAZPwvlJAE7MKa0arhitTpju0Qi2wOdKuMU07pzlz3Ki3neZ9nl
3dcH6iekS9gV5JYuhpN5YkWZABRVshbx6f2JBhyoudm/J3NIPciepx6cEHD0Nt5DwzwEQcSxEAog
Dc7QTDKSODj1j5bD5zQgKCBCIQNG/zA1CpUT39AxriTcLHHKqsXlWEYOvjF1cROYj+tICSfWjUQo
Vhd5TAMWwHEcowA+f6Ype8jDUSMtpE4sBsAlvNJfdyJquGHJZYgi/nKIQEUF8K4zjqENTdkNdjmW
sd4l3NzgcblBX7mxpPoTjBppIXNiYXS8x0ogA67XHVY9bburSq4386fzxS/zfFhPzCsMdQMQzpu6
krkUGOSEg8r+h1ZcN4QwZkqmqcsSalhXEIjdw2wP4+v+edlqWWUvn/Pwl53mYPKgOSd6Ojsrv/m0
sKdV9oF5Uj8zmaHTarIylyZoG6YRg0hxWOnqeMLry+uKyFrgqkKyqZ3kGA8pV783RUUBzhzR2Ls6
xFOm3sNRI32Yo2MsXAAMjwudBPgw5JAJUNOcUdbkgJU85wzTXGDSlLXmtNK22Gsua13SWskYHw55
+MtOc6gMkc5fYBFugguEpUPE8Jpjf1TYSaHJcrYxV4wY3SRUSHBAJVDaUV1KRnAjNQKqwi5GGIFj
HahPh6R02HDUSMcV0InFtiXtjlZnhKU/jkrogd8exCx7x8kvyhqUmexwVT011LWsNGccAo4BNjCi
qnVDa6IEsQFaXrnsomEa+ksuyoMZeDj2hwtx6Zv6oZM+E0f06R8320Wn5fHMoGldVSOVAa1ojd2T
cDczrZfm90b89XSWzfWv66y77D57FUJWAIAZwlmrTR2p26t/Ey9KyIO8JHf0krxj9aRk8XDU2PDA
jrFIAQkIwfJbGFGNSIEo26I6G4LQiH8ciQ5/cNAGuMIPqAmRFYAMYip1TVAFOeKwqhGrJK/ElcsC
wsWxBdynW9KZguGokZ4hkQuLShSC5bcwyjMkIseowoNKk5KUw1HjdKXAEYNpATG8WBPypZ5p1V6w
CbH00RF9CUwAnql5pQfatkNLKyxBxUuJqj8GY4bxtN2E+OL3RwMy16banW06N93lH58fmWXLiDIK
3DgkFefb1XS9NoKttFHaZKpsWgtqXROWGgIsaC5YWeYNZFUOCAA5LIkmjDMlFR+T1GLZGjftQmf/
P0tkS9ZeCHpr77bNXm8NUf2brl83/F/fbgl9fXh83j/+9RDyWV8ruktFzF7ODKLdwtm2ZXXaRx3N
zt7euFa3dgNoMVuYGD7p6t1Ste0vpuYZq7lSpS7rGjZom1l3M/ygHwJk29jO6grJqmQUlKLEpVZS
AFhWUgCBasmrbGmSiR51qYAGXN3pRyt7Rcq0He7SaDemYm1avXpko0qmNoaI/XPSmocbPVemN/Fo
CyYFJg1VDYGVZEQjRBtEEJdKN9ooJl1CIEBDIo8/3kVEWVoQeowqoRc1acI6GDU2yjp15RCEYPkt
jNCVFQCLY1TkQ4UpJ/eEo0bqCokLCzIYguW3MEpXyNlYbeZ+j3HXGRzmGeyG5sUzXbsoYAYSV1v8
yz9RmhBOjksC+0oCpWztC0eN9DUEXFiMBGH5LYzSldHUZbVDQonVO51QbGkQJw0hx22yD2VKWuO5
AL9Y1YSLFedwXFZ+1aK8XchU5/J7+xmETFieBxHa3+W0S3HZZk//4N6Jl9VTV4znBcTsOOAQX8BJ
uM8wBjXSHfEJLH6MRb0WJlXNYNRYC4ULi9AgLL+FEV5rUBkPKUO/50ShMimPUZkPNeFiuxjUyNIk
1IXFOT3G4l4LkwYqwaixFkonVpia/jKM8hyBWYiFfl0jUEUBmQNV+lCTDuAMR40sTSpdWAigECy/
hVG6IujwV+8kQtKBseGosboSFxaGPATLb2GUrtSxOAIB8MEmnXsZARupLKMuMEZJEJjfxihpuXDZ
6J2gSTqiMAI2Utq/mDuTHltqGAr/lbeEBVGcyQlLQEhsEAK2LBroBsQohgX/ntSlQVDX4Jy2A7V/
r8+XY1du4kwcJbHeuiCmFksMD/VBsmgbxfQZuS8ZqscRSJ8RYuKlNurWQrItS+mjliRs+0XWZcGI
ChtG0iRhWhLT27hu7U1W6ofUIbRprR2QRa1tklgqY0lMbyNkbamSrDonMi1WA7KgtSNJYj3FJTG9
jZC1vWRBVh2+D8uNC4Asai1LYqOWJTG9jYC1JN5uQpEV2RYto1pAFrN2YolijZbE9DZi1jILsl2V
NWXtuixqLUtiOaUlMb2NkLV1jLuX/4ii79N/U6eT5OXQmkeWWiUgC4aQBLEUYktLYnobgRD+gyxF
VdZSJAVkUWu7JFYHC2KktdG0lAXIgm1MRRJrlZYM1eMIpU+PZamNurWY7JBkkyprKa4BsmhEx71Y
DpF5SUxvI2BtDoWkNmZNNps+lnVZ0NpcJLEaqyBWtDYWSz3mJtuWZME2liiJtbomprcRSh8utBRH
PX0A2SKeeSSqmqxpdQiQRSNaJTEafUlMbyNkbWEpouqwvZpqFeuyoLW1SWLc18T0NgLW1lCy1MWr
w/ZmqbABsqC1jSQxLn1JTG8jYC2LO9iImiZbTV38uixobY2SWIllSUxvI2Qts/DLktShpqnSD8iC
1nKVxHoeS2J6GwFre4gjCrLqcLqbsvaQpSVZ0NoeJTHqfUlMbyNkbSssyKrDaVvJfV0WtbZJYtxp
SUxvI2DtCCSO+dTSRTPVENZlQWtbl8WSYK06rh2mn+p1WbCNgySxVNqSmN5GKH1yTUtx1NMHkm0k
WatOkGyrGeuyaER5XUybMnA0LYaty2JtnFiiWO1LYnobofRh4qU46umzLpvlC9UosdZaUykckAUj
SqJYprYkprcRsra0Ksg2TTZafsYAWdDa2CUxjlIcu9ZGshROAVmwjTREsdSXDNXjCKUPMy+1UbcW
ku1NWINLw3cNLlMgklwdWvNMB10AWTBzUpfE0qKY3kYghBS6VPvOUZM1ldwBWdDaXO7FJsWI/3ob
knKVxctOMk7dFMeOy/BOxxurKQMmZarP7jhR/nlJwvHg8vM/f+Ozpy/euP2dx9e+/+7VH0Kvv/nq
/Yn25XF+9Ab512ukJdjKeSlv9HSFPOJS9keSTfeU7qNEv8EhsY205WJI3UEozsPxJbK3ZtfwxuPT
7DB+fvXpj0fDfvnh1fdPr3748tefvpq9zav333v7p09ECv8XwZZg5p/KsXKM/fHzh8/hpzGWyF8C
8hzrt587h0k0e+55cPPrV8dtVI/fff7mX7uae6587EbXuA6WZ5w/WT54792DJdXHUXohfvi8PLMc
zXj13IyH31+Zn4n1Z4uO/znzcTyMUmp6Yzbiszd45Mc3Httn8Y2n9lSfHsqIPB7uaUfg4fkYF3Oy
PMblwAN2IMK9kPXY75V9Q/jxj7/eepvvjwBOewwxm3hc+eXf7Knn+PM6tuMvTd9/fvzxu9mU3197
//bxu5//yP9PRBLLK0Vy77EMNP9crI8xPnzWmWpGe5CaQnZ8R+j4J189fPP7G4jP7/B/IovSy0Vl
yyTtIw+fPus0WksVf/un5kC1Kr+iC7piL9FNe06d0bAOY6JLQDU2NyDdq/VhRj22bmhJfkuNv2OV
8flTq5/S559++vTHL+If/+qDt9756ZiavPrpy4fj9/GjD3+SdMufN5fbdE92kGWdyhELTBtqEkxr
2QVG9whImRoilx1YybJG5IgFhi6RBJNyd4HRPYJCx04enbEsexocsdDQ1XuYFqhEFxjdIyB0LXDZ
0hlky54JRywwdDnew3CITr237hEQOj42mm3BMn11flho6KoE08inC9A9AkLXQyxpB1ax1KwdscDQ
lSLBpEQuMLpHUOjqntFTsazkOGKhoesSTEtbOsxiWQc4sDI9L+a9FMthsa+OkMnnMzvZY3oHyxEL
TKEqwpS4ZaRreqPqwCrpEik0hs/nfrLH9DjWDcvn5xZNoXMv1N6Mk5N98ln3aDWFblg1bwlde/mu
MlcsMHQtSTCNqwuM7hEUusb8f3/9k4NCLHvseXmB1BULTSGWYFL3yWfdIyCFKLTq00OesPjlVS5X
LDB0TBLMaFtKJfzyKtcz1v/+2z85Ushjhz0jvnz46IqFpdCIIkxV31ldg9E9AlIohcE75rTDsEnW
FQsNnTBsy4GaT/1I9wgI3YHVL/D159C3VLQGmYaPflhgCpEwfCwhks9ARPcISKESaEtFa5Bp2OaH
hYaOJZiu7qJbg9E9AkJXQ6o7KlojvXxd2RULDF1qEkxVNwOvwegeQaHjuqMMMLJpxO2HBYYukwTT
t5RKRjaNuA+seoHf3BZa2jGXHIbd0K5YaAqNexgO1Pak0Mv3Oz9jXSGFONS+xR7Dip8rFphCpUgw
XLeM/A0rfhOrByo+46MTluFlid+x/pfF2oktwjgV/nWPoNDVvAXLsNLmigWGroowg7Z8dYaVtok1
Au2p4BhWuJ6xfDoDNHRdgsmlucDoHkGhy+UKpZIRGm2ZkNhW2vywwBRqSYLhumOdfZhW2qaBlLZ0
SqZlEkcsMHRMEkxqW0JnWiY5sHiYzlKdcerLz3a58KCxqhIEO33zZ3Msv2aOWKhHXYLpvOUXny2/
ZjesCyz7EYW4p0bTLTNIRywwhXqRYEg9H74Go3sEpNCB1S6RQjVv+cK6qRfyw0JTqIsw3QdG9whK
odrzJVJoOG0UOdkzLDPqA0s9m7mGBabQEGAmpdMJCN0jIIVSaLxltmi4ss8VCw0dCzBuAyLdIyh0
nK4wBskhN5/qx9/s4Wi4EdEVC0qhA1uCKc2nK9I9AlIoh943dEocDYfqXbHQ0AlffwlxR8fN0XCY
/nesEi/w9ZdQnGJ1sodefuDQFQtMIaoSTCWfNVndIyCFaiDy6SFPWOnlx3xdscDQpSjBFKfldN0j
KHR1x6ooR8P7W65YaOiKBNOclrF1j6DQtV4u0HG3EPcMSQwPfrlioSk0BJg9K9gcDQ+G3bCa03rj
CSub5tt+WGDosgzjdPJA9wgK3XBaLDpjmaptflho6Po9DLsNAHSPgNAdWFfouDlU8qmNnOwppvm2
HxaYQiUJMHsKtlPMNN/ugeKWIUkxzbf9sNDQsQSTnX5FdI+g0A3eMhGopnmuHxYYuirMc0cotGHz
wRQzzXMnVrpAqSTFkMuGOjLHZplvH1jVp68EU6hFCaY57YLUPVpPoYnVd1ytMbEsR30csdDQtXsY
CsnpZj3dIyB0FNhpS/QJi01fnR8WGDoWvrpJOXxgdI+A0B1YFxhxT45StwxJ2FIqccRCU2hIMC36
dEW6R1AKjR0nkDiaNkg5YoGhEzZI3Z6995n86x4BoTuwLrBjPOU9G+o5mjZIOWKhKdQlmFp9YHSP
oBTqTndYnLBMG5McscDQDQHmWEjeMmkzbUw6sOr/fi/jjSPvOAnF0bRByhELTSGWYJpT/Uj3CEih
GuKOC+OYTBuTHLGw0E1sCabs2NvGZNqYNLHq8PHojGWab/thoaFrAsyeJxummGm+PbGucLh+cnSn
/X8ne8hyWswRC0whonuYFqJT6U/3CEihA+sC+9pT27RviwxP+7pioSk0RBinsazuEZBCHOKWnRtk
2iDliAWGLhUJJuUdkzYybZCaWHXHEVYmw7PIrlho6LoEw+orX2swukdA6Hogp81bJyzTxiRHLDB0
WYLZ81wTk2lj0sTiHffrTizTPNcPCw0dSzCdfGB0j4DQjV39uGlDkCMWGLqSBJg99/4zmTYE3bCu
sMI1wnA6bXu2xzTfPrB8BpdoCrU7mDw5nboi3aP1FDqwrnAD+eToTvv/TvaYNkg5YoEpVKsIs2Vb
Ipk2SGUKqfnMSk5Ypo1Jjlhg6FoUYbbsJyfTxqSJ1fZ03M0yz3XEQkNXJBh2mrnpHgGhm+btGZI0
S3XpwPpfdpRObBHGaYOb7hEWunGBGvfkKGPL6g1b5tuOWGAKsQjTms8AQPcISqF2hacjcg7R6bb4
kz2mS8gcsdAU6gKM2w1/ukdACuVQnDaRnbC6Zd7viAWGricBZtN+cuqWef8N6wKXf02OseNdnWmP
perniIWmEN/DlBDTluFjt1T9Dqz//+HaycFuq5Ene4Zp3u+HBabQEOb9PRSnZT/dIyCFRsjJZ13k
71iJTPN+PywsdBP7DqbEUJ222eoerYeuUIhbDtskshRsHbHQ0DUJhkc3XfJ78iYly6XDN57/8NLh
g1eC6FuO1yayVLFLCqn4FIxOWKa3tByx0NA1CaY6LWPrHkGhq1eYyZbsVrg62ZMtpVBHLDCFcpFg
0pZiSMqWUugNa1wihcaeztH0ppcjFppC4x6mhLjlkH8yvek1sWryqTqcsExvaTligaErRYDZtGM8
md7Smlhjy/pMKpYSpCMWGrp+D1MDOS2o6R4BoTuwLrBTe3Jkp40iJ3tMb3o5YoEpVCUYt7qs7hGU
QmPPqLZaSpCOWGjo+B7Gb8e/7hEQOg5py+GxZLoVxRELDF1rEkxhHxjdIyh0pV/gTqTJ0Z32+Jzs
Mb3p5YgFphCTBDP6lq/f9KbXDesCmyBKD7nuscfUC/lhoSnUJJiy5weETb1QD8w+Q+wTVrcsQDhi
gaHrUYBxOzCqewSFrrcrjPyPxaItv6/dsvzoiIWmUJVgmtNyuu4RkELjGlugagzZ6db6sz2Wqp8j
FppCQ4KpW674SN1S9asUaMuJrDQspSNHLDB0o0swyakIoXsEhS7lCxx9mhxMPkP9v9uTo6V05IiF
pVCOEozb9fC6R0AKpVCjz4/sGcv09fthoaHrIsyWy/1yNH39OcS+o1PKZNm76ogFho6SBFOd6li6
R1Do6hXeHZwc3WmIfbbHUjh2xEJTiCWY4XQGW/cISqGRrzDyL6FsmVvnZOqF/LDAFEpJgqlbntHK
ydQLVbcazQkrW0pHjlhg6HK8h2khbrlqJGdL6WhilR1v+k8sS8HWEQsNXZNgqtPb1bpHQOg4kNP4
6IRVTF+dHxYYuhIlmLLlRrRcTF8dh76nHy+WQqkjFhq6eg/TAzltX9c9AkJ3YF3gtNjkaE4HM072
VNPX74cFplAVvv4RUvGB0T36jbkz6Y2sBuL4V2lxAolnucprITiwHEEgECeE0FthxJAZsrBIfHjc
nWRgXorY1bZJuAAz6fx/tTy3XeVnC1KIlG+0KtljVdW422FJQ0ccTGg07c77SBS68PSdtvCB1gpt
l0qSP79Q2hRLmEKehTHUZb3tzy+UnrBCl42/xp9fKG2KJQ1dfAgDSjc6UT/vI0HoQBnsMihVHPHR
FEsYusDCuC5vtZiKIz4SFiqIbb7c9ljnFyhvsRqdZCUNXeBgTKMT9fM+EoWu1apkh0XnT5eaYglD
R/QQxiit2+RR3keC0BnlG92K9jaW1ee/CNQUSxa6hM3BBN2jMGj1+S8C3WLBkx9pkTis0o3mJTv3
VPSVm2JJUyhyMNjlvj9b0Vc+YRl48lLJicNjj0qShappWzssYQoBCxO77CmzUDVtO2I9+cakxOEU
Nipq79xT0d9uiiVNocDB2C4X2NiK/vYd1nMYhZwKXVYgtqK/3RRLmEKIHEzsk0IV/e1brKe/Aylx
eIWxy1Sx4mCWpljSFPIcjOlyvJetOJglYQWlbZcpmjm/49cUSxg64zgYaLRXM+8jUejgWayEggqN
Dq/Zuaei398US5hCVnMwscv5kLai35+worKNVos7rIoDUZpiSUNnORjfqIqd95EgdKR0o1PH9lhV
1cd2WNLQEQcDuke7xtqq6iMp0wfLVT117bCEoXOWhenSZ7eu8qnz0AerqurXDksausjCNHqpPe8j
Wejckx9FcOI47f5d1unmx3843F/fXI+XJ+3X6+WLV8uL+XB1NPXm5XqZMLZxoxGjm91i/prG+ecf
L1/dXCxvfuT7w8fLcvz0dnMxX794dXH4/FX6kcSyHkF/vVlv1vcTW9L4yGit9Omf9w/Xf75eP/rq
TvHdf/7mvf8P/JuX6/r6+Pmbi+sXLw8X6x/Xh/W39eL68C4AqUjGGXu4SjmZPvaQCxKy6zK38jUF
DNBPdDtBwuZgoNEeqryPyp/JhOVMj86u9TXly4ZY0tAFDsZ32XNifU35MmHFPqN8qCkcNMQShi44
DoYalcHzPhKFjuIzKBsCKBPbPGZ799QsYRpiSVOIOBjXqBCW95EghUBF7LGbysaaJUxDLGHoouVg
CNsshfM+EoQOFUZfdcL8DofM+SfeN+ERxopYCNPlpQ4ba9Z1J6wnGYpi5GBcn90csWZdl7Bil8ML
LFXNIdthCUNHgYMh0yV0VDWHTFhPf7xL4jDKdNkb7PT5B0w2xZKlUMLmYGyjVyjyPhKk0BHryW8F
OHGERkcB791T0wJviCVNIc/BxC7vKztd0wIHq5C6ZHbFDVdNsYShA83BmC7HzLiKG65usZ7+mJkT
R2j0Zb93T1UxpB2WNIUcBxO7rGQdVBVDnNJdNnc4rNmE2xBLGDpkYaDL+eQOazbhJizbZVbrsGrR
1g5LGrrIwYQuhzs5rFq0eaW7vAHvTFXPph2WMHQGORjsch63M1U9G6+c7TIYmKr1djssaegCBxMb
Hcub95EgdEGh71GRdLZqndsOSxg6CxyMc10GTFu1zg2KTI/3YJytWl+2w5KGjllfRqX7LFJs1foy
YT2LRUpU2GWfhKs4macpljCFnOZgQp+Bu+JknoRFSje63HGHVXEyT1MsaeiIhWl0ZEHeR7LQ0XN4
+km52OV7zVd1bNthCVPIWwam2SFheR+JUshjePoUQq10lz25ruqEoIZY0hSKHEywbWDyPipPoRPW
M+i0IShNXdxTdVJRQyxhCgUWxjZ6hSLvI0EKgQp9Fkah6ulvhyUNXeRgomuzHMr7SBA6VLbRu9Q7
rFhTbWuIJQxdRA4mhi6hizXVthPWcxi4jTKNbivauYdqSkcNsYQpRMDCUJeCLdWUjtA3GyH3WDWl
o4ZY0tB5Doa6vJ/rqKZ0hEH5Rl3At7G8rqm1N8SShS5hszCNejZ5H4lC16cF4CsuL2qKJQwdIAdD
XWa6vuLyooQVlTU9NgN7qBow22FJQ+c5mNjnqYOqAZOUDW2+VHZYWFPlaoglDB3aBzBGK9PlvQSP
NVUuA0p32YLvKy7raYolDJ3RHAzFNjB5HwlChwqxx8TXm5r2REMsaeiIgwmNDqPK+0gUOuqDVXUC
TEMsYegsM2AapRu9hZP3kSB0R6xn0BZIHMG3WQ3s3VNTGGyIJU2hyMFE2yaf8z4SpVB0z+AIMWMV
dNm54V1NW6AhljCFHAtjuxyl7l1NW+CE9Qz624nDd7nB3FedzNMQS5pCkYFpdpdK3keCFHIKu3RN
vK9ZbzfEEobOew7Gdrk0yPua9bbxCrq0u32oaQs0xBKGLgAHYxvNZfM+EoXOdzmcxIeqp64dljR0
noOJ0GXmHyqfugjPYeYflNVdBqVYVbJJWE9ykGHC5mB8l9fefKwq2QRFjcpae6yaNxYbYklD5x7C
9Lrp1ceaNxbNs7jp9cQRG+2727unqurXDkuaQsTBUKcRsqrqR8o3Oidph0VVi6V2WMLQUeRgYqNX
W/I+EoUuPoczU6xW0Gjj9tvuCbqmZNMQS5ZCQbMw1veY+QddU7JJWBF6fK+FqqvLGmJJQxcfwoDC
Lje+hKqryxKWgy4ZVbWrpCGWMHSALEyXk4pC1a4Si0r3yaiqq7oaYklDFxiYZs32vI9EobNdzr8I
WFPlaoglDB0CB+Ma7cTN+0gUutBlW3eoupqqIZY0dJ6HcVUncO58Y2zNiaANeIROMZaDiI0mkvnE
EeSzUabROxP7mFXN3dphSUMXGZhOBfdgquZuCYueQbM9cUTd5UvWVq3c2mEJU8jyMI3uwM37SJBC
VkGjr409VtUcsh2WNHSBg8EuO5ODrZpDHrGeQdU2cdhGr7ru3OOq5rLtsIQp5ICDcbFHzya4qrns
EesZ3GCUOKjLTqTgqubU7bCkKeRZmEYVkbyPRClEwd7d4XQu1t0f/zDO83p19f3h05vLy3Rb08s/
h/H6+pbsm6+vPjh8dzgwyeMUADUm+Ga9Pv34V598dvjy9Zguu/p63T6Y12UdfaRBL1MYxnFyQ6C4
DqCX0Y9Afp3wMN/D//AG/sPhds3CsSO29h7PPlnCGRPxaJ0b4gY4zBj14IzdtmkMk5k3MbsN/ws7
jdNoyG4DkJ4HO85h8H4ch9nMeiOat4lGKbux+L+wxxECjksYjA3r4ADNAKMzQwzH37S4cZL73Rn4
X9g9onOjm1POEA1u0tMw4RoH8NOGOI/GGSG7V1p3GcR81VyhHZZwoPfAwYB+OJ6BsQlvvR6+/urT
v5YXV6/H6/mnD24ufr549fvF8EuKYmIapmmMU3TjoOM2DuOEOMDscfBhxAnJjdrFBA46zDBvOALR
m192vDnv29vfd7h8PR/eOeeXv8PaA/Fxe/5abn755c+ji35K3jqCHP/78MbMlJbq9cubHxM6uY22
oCe7rHg4xuQY4OnP2w98/cFMkxkD2bCNkUPB/ZGee5Sc5l8PKO9/cKd+2F5d/j4e0+Dw7vVluqYw
Bdwvoxu9MRuZLTgL4wy4jPMaF5zWdZzf45CDxseReYD45iF/8cvrl+kR/1rdPqAn6g/ePOVvPjMc
P5S+FHwY1jjB4Ldtm7WxZtJ0uLr81ye8Ax/Q6zSIRTM4P9nBzGlknjfY0MBkIyFvibnLA6ElfHq8
sSiFaEnuW9AEq+10RloQhcfJ/kvrr6tfkmu/ONws62/JSXdjYPqvj2rdFZRD82jgk/YP6x/rfMzQ
ebU4Y1wx3I9E36SnNt3aqW6dcbhPQeMcBRuD29aw+Y2sJoAF7TKlAX2b4uH1q1cvPzoN3+n6zPGX
9aPDi6sfXr6ax5c/XN0k599crZcfHUfYw3iTxrrj//5wlX5xmvgfvwc+uheiaOzmxs3CTN6uiG5D
i4HGdVvtaumhxVGBrbF495UQat4Xq4YRfhEEyyGgwwqEvD8E8/yoKGgGxqUsefHHCYaXjVU1vHJZ
ocMjI0YKjCkSy9socC0pGywj67OyVRX2clmpayMn5kkXieVtLHet0wp0YGRDTpZqBg+BrNC1ZDkx
RFsklrdR5FpLxMjGrGzNZkyBrNS1xIl5Z4rE8jYKXAtKIzCylJGNumartEBW5tqExYmBC0VieRtF
rg3gHspanZOFqgGhXFboWrCsmKUisbyNMtd6VzFduZ+wLuvV9eWrP4tnrA9JULngz1tp/GuR8a8F
3GkVcce1Lpxg6HKKRvA15f0j1nM4KdoZhQ4exsNCQUVj3DzMOhXMTCq6DqDTwjTgsfywztvotyUl
AiZm2ADDGl30uJRXNEp++Tu8PYGzp7yikdaTV/N4kchTRc2vc7SwbJpZuYIZgVZKpQLULEmjd9t2
QwHWvC/VEEs4HPI+Mog7mF20MmF5WPO5+7ldfLiSz2ynieawOh1NhHW2RM6GKYxm9T4G8x4HbK1l
0gsLHhda0ZOZ/BCNwcEktGGGVPWfNjPZUUc/bzpZZ1JlZ5y8H82G5Y9LyS9/h7cncPaUPy7/KsbN
1q1TqpURboF5XrxZwuRwRYMzhxIx8+Tyod3XeY4/0qDK46zSxCYn96Xpj196ZtvWdZker/K4xQUY
R4OE07KFpL16jSlKXo+b2czTVXmcVWZ/1OM+G3Jh/+8a7C4BuAdymzbw2m4mDfQUpzVVXyH1ONIf
4bZBXN/jkC3px5H/A6C0Bnv/meH4oWFanR/07ExqvMx62pxZ3Ta9XYM1xs0B4pJqtVYP27TSsMG4
DODBaB8gpuDzlsBd/gstyddgY5rTjZH8OEX5k2mVC6biQdh9f5maflA1jPBbywCHEHwNQt4fgomk
Vel/Hk+b/0qE+4HzYrt6uz5em8pOoeYchJyDwMTFzXbxzoTHR06vwxT9NMcw0aLdjGmlomNYQ1zn
1HOen3DkdCo6qLB4lxK2aopXCyN8RKxmEYgYBJNb51YdAyaQldoYOTGyvsLN+ZgLhgGvAE2R5XmH
i2QdWEbW5WSrNgUIZIVx9sCKETBiNmtj1VNcLiu1UXNiHmORQ/NxFKVPCFhkY961AtmgtNeMrM/K
Vo1O5bLSiEZODAIWieVtFLkWyDKyISdbdaePQFbo2sCKWQ1FYnkbRa4NwKVPzMpWZm2prNS1kRXD
MrG8jTLXBmJksx2Pqg66QFbo2siKRdBFYnkbBa6NygT3UNZlOx5VB8MIZKWuJVaMODHI2VjXyi6X
FdpIlhOzxheJ5W0UpY+lUBTHfPqIZL2pWULtGmbFS1iOJDrNOABzDqjr5ZfLSnOLWDEfisTyNoqC
TPjwdB1w5h/ZJj02UojhvMreeT1PanaYab71JPA3KactE+bcGpWgKpXLZWWpnLBYMWOKxPI2ilwb
G13StMPCmndFG2IJQ4OBgyETmdDYXGhM1SSvXFZoo9mLxQ90ItFMf9K5gv6kHbVbx/QOCfoIQ/A4
DSaOkEq/2o2Y/kqPYwqOGQ1OqGdIBOX9yZJf/g5vT+DsOas/OS6zN9aiQ+2ZLsiibUTEDawnFsVg
URzz6VP6ZJ9kEWknu/NAztT/7sntjOZ6cmTnNax6ms2x7R5GA2QwLEskCJ4cvcchG+cYZJ/zlD2/
KCiSFT5oFjgxa0yRWN5GQTYA+zYUuJCVPf9tW5Gs1LWeE8NYJpa3UeRa58LjDxr/2FBpJ/n+M8Px
Q4M/bs+IaYfboOPoIH3ZxfTvtzvJM2xg5qgH62AeMNhlIKPDQDbqiJMPmiJvyf1bXUJL8p1k52G2
2zJS+gIXj6GgYuBWFTEXWnf+iXUiWWH6OuTECKBILG+jIH1RAXE2Uk42nD+5FskKXRuIF9OPJ/Z/
pep9r/vlb7+83euufdhQIdoiB+T9Lgo3Wl2xB2FXoCjenfSQxCgd2HGTI4nOoA4ANML4eLN/dsfd
NXrB6DcaI7q07dNtAbTd0I6je6pm/8niAFhh8S4R6Pw2YQMY4WNJmkUgc9bOvTPKGyc9qjI573/B
g2iV5YZ7r3PPf0V9TiQrDTBxYs66IrG8jSLXktaMbKbkHHXFuzYiWZFrj1gPxZzSnorE8jYKXHs6
ZIORxZxsxbs2Ilmha8FyYhZ1kVjeRpFro+VkTU4Wq+a45bJC1yJyYmSxSCxvo8C1XiFwNtqs7PlV
TJGs1LWBFYuxSCxvo8i13nCyLidrqrK2XFboWoOcWLBYJJa3UeDaoEB7RtZnZc/ftyGSlbo2cmII
VCSWt1HkWk+cjSEnW3H4nEhW6FrnObHooEgsb6PAtVFZCIxszMn6qslXuazQtd5xYo5ckVjeRpFr
I2lGlnKyoWqsLZcVujYwYy0p7cvE8jYKXEvKctPpoLOyVTOEclmpawMn5nQoEsvbKHItoWFksyuV
WJW15bJC18aHWXskIVcklrex3LVJ1kRONrtSqThlRiQrdW3kxKz3RWJ5G0WujQYZ2exKhc5v+otk
ha4lVoxcKBLL2yhwLSgDXPpkVypVlS+BrNS1xIpRKBLL2yhybSTLyLqMLEBNB10gK3MtAHBiRFgk
lrdR4FpUEDUj67OyNUsGgazUtZ4TQxOLxPI2ilwbNGdjyMlW3IwkkhW6FoETi+iKxPI2ilwbbYsm
211XpbjL9pDEKGfNeZsLzur2JEHf5XYOqiqYHbGo7t6iXYIYOP8epTseX8UjfDoMcBAhMNv1QizY
fjgBeKvT7hhn/TSEEMcB7GIGIhNAzyMtyzG5t8X7yfrZjUso335Y8svfYe2JmrOnfPvhP8fWBNgM
eDObuHE7ZxBWsDCNm4PIkcR9NW9PkpH8m7lz242kBsLwq4y4AoluXLbLdi1wgQAJJEBInC4QWrlP
SyAkSw6cxMPjnk1g06lNucYTAITIbqbn++uvarftdrtfuVPOhs0tAow0OhunYbFuDMsQvZ2yj8kM
gONABt5gBdMjnLYJmubyjyhLebowc/3gezD0cHnxOdqsZlk/cYS1LEWPd5ErMu4qM2Ci6CINCwj7
NsQAMSYksnkYPOQ4u0h+GtOY82hH+u+WcoDvI9iGiDe12TRj3yxGWZEusRKca5Ag+6G4xGJv2CkD
kvpaDS+4UmGVhvvIwcCZKpgco8pajww2GQmLLff/FVilteg5GHpfBZNjVFlLERksSNiGLTFUWKW1
gennhd6EOpgco8La0HsIDNZK2NiyxE6BVVobDQdD76tgcowqa1OwDNaJ2JbbfQqs1lrkYOSxCibH
qLA29kDAYL2ETU0NQj1WaW0CDuYIq2ByjCprIzgGK84hpqbJvHqs1trAwRKFKpgco8La1FuIDFac
Q6SW230KrNJashzMO18Fk2NUWRsSMVhxDpGa+rX1WK21kYMlH6pgcowKa6k33jHYJGCtabrdV4/V
WWsNC7MRqmByjCprbUIGS/9gL2/G/6fnz07Onu5DfZ4vL38twa6TUSnMS6AQ8iI8+1HQRAsEP+Sc
TAoDEVpvByIaox3M/QmDi/Pzq39t0oB6Z1oGyZvJ+Op5kntKrOltcgdNIB00F194Mdgj18CmSKGl
P/koApWnLOB9WbYvMo4sS/at/uQuAskxvhFIbYptSlc9VpkEyyTB9cFQFUyOUWFtwYZ0f5cZssfc
ZWbPIQdMeFYKzzWlsB6rTKFjUuh7AKyCyTEqUuj76CKDdRK2aUMDBVZprQcOlhxVweQYFdZib01i
sF7EtvSFFVittZGDeYhVMDlGlbVIXEZRwjY9bK/AKq1Fy8GicVUwOUaVtRS5GIOIbazaWqzWWqZq
Qw/RVsHkGBXWht56Zj97qllC4HCEMMPUJTNjN4+T67KPQ+eCj7NxBgFo7UzlPCwZXYoQ6pcQ1Hz5
a2w8CEw8iiUEL23rYzDbFIM3OXlmDUEcBx9yBpMRWCnbBwC2UiTmq7cS2tC5VQTTONK4YASaRz9C
dglgSj6aotMbyG9wkh3Rw5J5AVC7KcvtMd16UBcXNJ2HKZa0miWZxc3DYu5uygLLgIh27nCxsctL
HrrJ+6Vb3IKLwzy7MHCReHO7lEQZibwpy2JTGC2t+OmAskAQzrhXsW7XIZxcnt/dVaPdraKJyXuU
Gp/QMo+jwCob2MDDkquCyTGqGlhMkRmP0JHHI6EnBCY8ksKLTZ3meqwyhZE5dWJvLDIwdiYoQ7I2
2kAW7MPzX8YDxdENFOYpJMhzJmtxWWgwIzr/n73oZB+xi3X2yllVFG3sY3INRm/EpJb7wXsxvkGM
svCS4SQkeFw/VMkh5uW01hipJlLTzEM9Vms4M/OQeoi2CibHqLA29TYggxXnrBp2FldhldaS52Au
URVMjlFlbQAO6wSsM43W1mJ11hZZLCx4BmbFGJsayHqsNkamKaTemlBlqJxHRflQ7yhUxShbq8Im
w1nrJSw0ZbQeq8woGBbmTRVMjrHeWmd6SMhgUcS2LCZRYLXWBg5m0VXB5BhV1toYG7ormzum1R1l
Tgmy1/EgGdD0jJoCq0yyBQ4WAlTB5BhVSSaiw2Y4Drob7aA3zh3lWQ75EROFD9Bbio8gy9mmduZ4
srQlGjgxjuK9WrEm1szyzokw2NyZZbAdELpudM510zR4DEA+zViEZzJEy7CkAFkxy1vx5a9x8XgD
XDwHzfIONoaI0xzHuDDTeSkPDscczWKQk4Jhk+etFIn56lneDZ2b5R1SROtScnkYvAHwGOc0LzBZ
b2Y7wBuc5ATpYcm8AKyd5b09plsP6pKloYs+UGdwnmwIJo642Xp7ATTLlLHzfvTd7N3SLUShS8aM
frDkrUt8JLdtoDISeZbXBILy7xgWf0hZUBCUvYq1fcH6P9O8rXbZ3vK1yr5mCYOzEdEOkxH2Dl68
CX4Gl5zPg3fTMjmfpxxHt0RM/9kr1vcRh2AbIt5cEpoeOGsWo7wQ8BUQwTdIkP1QXLldDzYwYpLU
cWpajKDAKg33kYWlOpgco8ra4LhSIwnbtBhBgVVai5aDkYcqmByjwlrfu8jMKIERsU2NRz1Wa23i
YJ5SFUyOUWUtIYcFCdt0G1KBVVobGBj2EU0VTI5RYW3orfMMVpw/a3hRsAqrtDayMBdiFUyOUWUt
Jg4rzog2vQpEgdVaSxwsBF8Fk2NUWRtSS49pM5FV3WO9ryT2aOxhI43DZlsK0B3nXX7ytIYiI3tZ
Nz60yfpob8llX0q+9PVLj38+y8NpOezp+rWFf3rK+4I+3U8E+D/5LyrwZVwmMwxxoRGr4E/Pzsvo
5McHNAR0DRpuR9F5HEuBfrd7//piHeec/t69GOvNUxmIXj7ZfbvTbTHy9kufDwgh2mC6FJPrMAy+
c6MZu3GBxToYfCL78uedwzFCmro5edMtw0zdAnnqIIAzIULKrnxet9Do7Z1uxLr7rt7rmlcsJmuN
m+ax83kxHU7kuiW7uRttHmeMw5QHu+YGBmNMnhaYoX7qqubLX2PjCZ6JR/GKxX82HDKTHyNNiVLi
FqI5G8g6My00WlZJOmYVfzFf7c/60g6+lPfBkx3tgF32iF1awBaHUjHM+WUZchzcuOzG2xPg6d8n
wDvdvnHiZEeQ3rZ7cnZydZJPTy7vtkJ2GRNQCBYN8o1j0xKJI0tTXrYTsoICPHqKg7WIGceSYqIO
BzN0g51TB2FYrB2zQ6dOcTRH81FOser6V6QJ178yffVLcXZVWBTN6NISx0TBUjE0X1xdP785o7/d
/3ENYP6leHN58y3f8Vg4HMs3KDx9ncsPI7k4OPQ0b5uVzVcfXerGoV8vTvYS9x8dz8u083w1Vxtk
Iex+K7OMpUyu5rM//9wH+t3uw328/Yu/fVo+cCPuNlb+6+3hQfH+s7GVQ3GakreBljHQIfanIxbo
nC/G74vItee6Oy1KSiNxVaZzL3dff/Axn4fUkH3eqIdVrHdTIKclkBuHMR7kmCxZoeCm3fnkxad+
u+xefLDEvA5GnuxCb3rbobdgDlVTm79lvnohe8pXechlAn4oI6Dyex589BJ/Fb8cPc1TTN7GkUw6
JGcEwhWiks1fJci3bA/6CPqUvQHynKgULNNGvrwCfjwtrN/m/S2bbm0g13sw+afLd6+vC+unkvmf
8umLezXrbZp/dgO4fXr63bPr09P77NSbgI/eE0kZos1T7JyPc4dgXQcZXZfiOvqcMA/KzmbqrUFm
3iPWPklOix3HMbthHPDhu2g52rSAwxmW2Y/LECAZ9NFgWNwUp+W/3YXBpd4RPXoCKQ/ZkV86ILOO
rMbYhZBzN7rRLETFFsrKBAbDzCBxw0d+XPXqjVw3Ayzu5rxJM2IIOBZqSeEwLZbmGTMuIZhs3Rus
YJsYwam24pL3fvF2HsdED1ccQphjjMEMYB05P4QxDhiymUx0hv7D5x5WFyjVdue+/nT3+nT+09pw
mzd25a75yTg/3f+utN3vf5/Pns1fnY3n50VxmZf6ZX7RnnPQZOIN9DiN9nvD+fXaT9gtp9eX3/99
zJPdW7/ki7fWz7y1Ptcz99PAy4lHuoZczNN5ydyz8pRRUXL3iCKvzHPt8rj3Zv1kVz55yQkiHw/v
IWx79rl85Kd8VhqFn0qqSphX88WSx72g0hued+WzvIh0uAi+m1KjpXxTpNnS7FwchkHdZakRfoCO
m0r7phx50z2dS+zr/8s3z6/8smf55GxX2rCPP9/labooDcS/rfi9q6v5p+er6H0Fjj9fn1yUhuSl
L/svxb2QM/Fqntz2/mz6t4V9yh307JxXVyaE8x2XL+Zuf4cin/1eWvT1x2fztF6CeYOTlZrgw5re
fetk0uLClJDCP+34y/PddkKT7NDZhKEzEWM3LXbqcBntlMcJB/LlcrQrc7K7fwBMq069dXDkLtzd
wYI3TYsmHkOgbrTgjeVkOeePLEv2TTEVVwR6eHgmne+pbRfYrR85xvK6VZB92DHeBxOaBpsr11Vx
tWUROFiy9sj9040dTc/OPIpApW9gWFmBu9tCrxp878fd3bP5arcOvd+dYx6WeYHOTGPuaETbpaFU
6DCmQBY9phLhi2P2XfXzq3ILqyu+LSfPbv7+x/n3d0tH/fzp8/Nfy1XoTDeCpz7R499LGOdpziFR
CXOIXc4DdpHS3IGZcshAYR6sagBIPQEeuRzkeq1vxrzpjQ9VZ+8G2/SIjAKrLH4LHAyIOSetEWNs
eQZDgdXGGHiYrzJUzqOqfIC4+q7e6xCiz8YNFmCW1nkntGbJ8zBGHwHjZJDCaKxHMxtK4b+dofKm
99ZVZVsuMlUC8O8EHL+/v5HqWi6NjyxVeQ45wwmMGB9NoOylKu2RWzRqQaq2pv0fFVhtOpCFkamC
yTEqrIXeBMdgrYT1TWdHPVZprTccDCxVweQYVdaCC0e+WmwFtuzXsBcYjyxQmy7PyvLAyGJfCDk6
l4dxHvKA88NXVJ/nMCElBJyNceUPEfJsyHmXCX3872bg14gTHTkRcqXoSpmESuGx6FpG2wqusu6Q
hXnTctv0ZrVr+U2//9xcHLj8/bLMCe6+z5e7H67LIRfzfmZ+/6ubMdm+wvbfV+y4OZAVx16LnJQB
37LwXYFVJoA/DdD5Kpgco6q4AxKD9RK26SEjBVZpLSYOFiFWweQYVdZSfLz5+c/O1xNqP7N+2OTG
k3IzbhXy++5mer7A/1dB1Czo/d8HUbPO4H8fRM1ql/99EDWLh+uCsD14V3Vh3rQnoeX2jQKrbDOD
ZWHEXftQjLHlkWkFVhtj5GDWmIa+9ibypv1Nm8Uo/YjASvCpqsDkulZcJ23vrP235mKaXmv4yFK1
KURWoDdV54982upS2DRqlc8klRiEf21ur+kVknupx1vo9H5ZXbV/yuK3q/0g/fvz8x/LPeeLk+dX
T4d5XbDzdI3h6e0TLX/M3/GS0uGS+GVPemWr+bgsOVs7miVoF0GtYbgjPp1w8wDLzeF/x3N+dk/3
izUpa4ClqPYro0rQ693P0/3MSfnlyZivymzGvotxcsanwDWszL+t030ET25/KEPu0hE6KxW+lIMu
S8HyYGYlTvmPz+sxfShRDN74aM1EEN1BCV8nHMvK2LO72h9P5N9lclsX3x6R9d1uOuGTFAw3dA5S
E9+0ObYCq7yMJcPBovFVMDlGVXMcwTBY8c5qapoMqcdqrU0cLEGqgskxKqx1PSDTQ3DiPVNqmsKr
xyqtJeJgFrmqFfvPTZt6K7DaGD0Hc2CqYHKMqvJxMVblUS4fFTZZy2BBwGLTXuIKrC6jRRYP8wzM
ijE2npm1WG2MxMHIpSpD5Twqysf3xtzHRtx9P5d+w1A6OH9+dPvTeiNrjHbJOLiwEC/H2qY7Su16
lLmwlhVhQ1Xi5XpT5cISMFgnYaHp2l6PVVoLiYOhNQzMSzHapkm7eqwyRhtZmKMqQ+U8qsonAodF
Ceua5nwL1poqrNJaZzlYQqyCyTGqrE0BqspHrloFFnsIjsEGCdu0N6sCq81o4mAWqAomx6iy1rL9
lyhhfdPJUo9VWustC0uhCibHqLIW2ZMlidimJr4eq7U2crBgEgMjKcamPWUVWGWMyJZPjKYKJseo
Kp+YTFUe5fJRYENvItPZ90bCNr1XV4HVZjRyMPCxCibHqLIWEzJYsTvddqO2Hqu0NkQeFhiYOHxr
u+Fej9XGaDlYsLEKJseoKp8IpiqPcvmosBSJwYp9vthYtbVYZUZjZGEEDEwcqcTGqq3FamNkqjb2
xrkqQ+U8Kson9gCpKkbZWhXWEVc+4gApNWW0HqvMaGIz6iFVweQYVdZ6x2HFkUpqahDqsVprIwdD
A1UwOUaVtcl4Biv2+ajR2lqs0lqKLMwbBiaOxqjxzKzFamO0PCxVGSrnUVc+MVXFKFurwKYeDIcl
ARtM05O89VhdRossBsbfnUQjxQhNSw3rscoYAVhYNFWGynlUlY9NdTHK1uqwlBoWwm3eI1D9/Baj
hH9DDoJoQOP5U4vV1lbgYMmEKpgcoyLJ1BsoJT1fXJxfvIy1L2G/v7p6XvZAyyenL1wrj8ldzruP
vvzy80K6fF6eqZoLKV9dX+5OT0qY337HccDQQfvOHPQmiMJz3KmKVrKzbYOIeqyyZCxwMIyxCibH
qCqZFD2DdSK26Wysx2qtDSyMQhVMjrHeWjS9IcdgvYR1LVWrwCqtdcDBgGwVTI5RZS0arnxQxLZU
7R6LVVittYGFJVsFk2NUWUvIYYOE9Y1VW4tVWuuZqoXeRFMFk2NUWAu9i47BRhHbVLX1WK21gYN5
tFUwOUaVtclw2CRhsalq67FKa5GtWvKpCibHqLDW9sEYBiuOk0LLsm4FVmltMCyMYhVMjlFhresN
t3NVEMeCoeVhMQVWay1yMJdSFUyOUWVt8JbBiiOw2FS19ViltdFwsGh9FUyOUWGt763nykccqTQ9
4qjAaq1FDubBVsHkGFXWRuBiFEcqTY/QKLBKa5PhYMnaKpgco8Ja7B16BiuOVJpeiKbAaq1FDuYx
VMHkGFXWJjAMVhypUFPV1mOV1hJTtaE3KVbB5BgV1obeGWSw4kil6ekkBVZrLXEwtKEKJseosjZZ
x2CjgI1Nj9AosDpriywORq4OJseosDb2DjlsErFNVVuP1VpLHMxjqILJMaqsJQAGSxIWmqq2Hqu0
FpiqTb3BWAWTY1RYm3rHLUGNRsQ2VW09VmstcTAPqQomx6iyNjhgsCBhbVPV1mOV1lq2amPAKpgc
o8Ja6iE5BmtFbFPV1mO11hIHcxiqYHKMKmvZWaHoJKxrqtp6rNJa5zkYBVMFk2OstZaeGNM79mTx
Eta7g61VYZXW8jAfbRVMjlFlbWQziiL28MfHVFittYmFUaiCyTEqrIUegMtokLDYVLX1WKW1yMM8
VMHkGFXWesNhxZFKw3avKqzW2sTCKFTB5BhV1kbjGaw4UgmNVVuLVVobeFgIVTA5RoW1tjeOi1Ec
qYSmqq3Haq1NHAxMqoLJMaqs9Vz5JHGkkg6/Wa7CKq1NgYU5VwWTY1RZGywXozhSocNvlquwSmsJ
WBhCFUyOUWGt6wuGwYojFWqq2hVLVVittYGDgQ9VMDlGlbVoIoN1AjaZpqqtx+qsLbJYmMcqmByj
ylpKjsF6EdtYtbVYrbVM1fre2jqYHKPCWs/vTphQwsLhtx1VWKW1gBwsoa+CyTEqrMXeBGSwQcLa
w287qrBKa63hYGDrYHKMKmujSQw2itimqq3Haq1lqjb0BkIVTI5RYW3gX1WbkoRtePGdCqu01hkO
5gGqYHKMKmsj250mEdtUtfVYrbVs1RLGKpgco8Ja6iPzegz884vb/afLEz0n59PJuLtcEden88X6
Iqa8ULYJR5zcn0Mef3x2cX59Nv39ke92n98/7O8t//49Fe9N03r0cn02rvscP6hqNansV309v7m7
XLHvumBMb/b/vLm7+v35/O7t4a+/9Ks3/r1ovjid5+fr8ddl5+bT3VnZzflmj+fXIa16wJi4uyxl
WA67rwuwjxaaNmPclB6awzeHPIoe5XmH5p4IH3rvqUnExpTU8Mbzo+hRmrKdv1h1xN64tszcNYWC
P9iU4+jRmVL0bkW82F4gNonYmEKHV8px9ChNoW2lwBNIvUmhScQdU8j4dLgpR9GjMmXVe0+ET30g
2yRiY0o6vKE9jh6lKclsRNj1qpjSY1wVt9f4T87LR67y1Xz/gv7q6/k/l/N/T/gDl/MbPUSJvZzv
dQH1ltquFHerDHxDe3QUPboqK3rvifDUJ2OaRGxMSdBiyhH0KE1JsBHhnljTR9t2/t81xbqG9ugo
enSmFL33RCD04NsunxtTAraYcgQ9SlMCbkT4JxZ6jK5JxF1TnG24nB9Fj86UoveeCLS9cW2b2G9M
CYe/pvk4epSmhK0ILDp6iG0i7prioeH0OYoenSlF7z0R6Hts7H1uTMGGq89R9ChNQdiICGtmCNtE
3DUFoeH0OYoenSkI90Ug9i60dZY2pnhqMeUIev5i7txaNCeCMPxXBm9WLxL6VIce8UIEQVAEQW8l
Xw4quuuyo4jgjzeZmdXd/tq1K1XtCIussDN58r6VdHXnTUcoSsoFBB2dds49Ou2+U4SO4C1ThFyf
IhxcAUYmnatFlWU6X2UmPMIqy3QFATj6rGvJ3xYFo+J+ZMIjEwVjCcH7LUDdaBWiZMVwbsIjFCVD
CXE4k0k3T3lbFIqKZdAHnv9UlJ23gMh7xY4eQAVRiEKKkcuERygK5SsIoDGC7sb2tijsFfMmEx6Z
KFx+HCruHKR+lPG2KNmd74ZteGSiZOdLiMMZUEIUoqTzazGPPPBfjj47bwHhD2cw6SZvhSh8/vKx
4RGKwlxCHM6Q0pm3RPH7j58ffmyARKrcAxcU4XZH9K5Hr911ktATXDNJiOEodra8I3nnFI/AbYCk
dcZ4RQE0ZiVFIYsPiuHLBEgoiw++oIi3IY8+6ChKWej89MkGSCoLXVNAHgl198ZCluDOt8U2QEJZ
duCCIh3mMOiCG6UspLiITICkspC/ooA85qQzp5AluvMLMzZAQll24IICbuOOGHUUpSxwfhXCBkgq
C0BBgcdL0T7oKEpZWFEtJkBSWZiuKOIO6HWXciFLAsVIZAIklCVBSUGHOdHpKEpZNJOpRyDdKppU
FuYrCn9vTofuu++0oSN4y7SB/2naQEe1R04qW4s6g6i4K5kACesMYnlX4qPaE+ooSlno/Cf4bYCk
slAoKQ5zQPnQtZAFo2JoNwESyrIDFxT5MAejLvRTykKK5UATIKks5EqKwxwKOopCFlKsp9sACWWh
ckE9ucMc9rqpbikLnF8RtAGSygKppDjMycqMZSELK1JrNkBCWTiU1eIPczLr+q5SFsVbHDZAUlkw
XVGEdLQ2V/1U9HTz1zeulh/uXk6/zN/f/vrixxc///ZieL7e3U3frcPmAiGH/S/JzcOSgxsw8TRM
iwvz6jdMed3J1wteFqJlmgP99cuOnuvrh9938+rlfPPemV/+XvV8PL77fP5Yfn3+/Pc3P9F1/P3m
r9P85otx/mkXbkefV+dgXlOaF7iZX63T4ezl94cf+Op2oYnzlsPsF6qh+PILRSXKvx3zjyvKj+9+
fzGPr/95wfDGt8bef/3lujBN0wVWDm72EELG1cOKgOu8TcsS6YNrcBidS+8GfydG/XrJ5z+aaAgl
vGay91WUBHqUBn0aX119gPJooE8JdX5/BkMosWlQReGoR2nQR2Ra9KED1Ok9Tw2hxKblKkrqYdrp
/VIfoej1EoQC6vMf7n5ZX9yvlvz86mGt4W7H+Gk6ljH2w9/rdHfz5ctpXzf5at1uE7LfLvNlWNiF
YdlwGjDHOFwmyi4vAabLejO9WN74kdeHHY7jDhGCH7JzfuAlIdGFLilULxWwKIBS9XB6nz1DKGlV
hjpKQj1Kgz6iqsSUO0Cd/ga5IZTYNKqiUA/TTn/b+wGKg8H4WEKd/7KmIZTUtOirKEB6lAZ9BKbh
6JzvAKXqtIygxKZBFSWxHqVBH5FpIRrocwWl6rSMoMSm5RpKdB3mNFHVaeGYYod79vnN1Q2hpKbV
UTh3aI/Pb8z+Gio9fp39FJT20+0HBI0udijnpOqGjKDElUM1FG9x52nQR1A5NIZo0C2WUOe/fWsI
JTUNfBWFDVAa9BGZltB1gFJ1Q0ZQYtOgikIGKA36yEwj//T3aMQOfSKoWjIjKHHl5BoK9WjJQNWS
0cipQzmf/waDIZTUNKyg8OicQXfYoI/AtAOKn/py59GzwbLClTKqlswISlw5VEMJ0KGZR1VLxiNg
hz7x/CeyDaGkphFUUbjDPfr857UfoCg4/QOKT3948cPdgfDT//ZJBY9MBqP1lfyqFsIISlyeuYaS
fYf1U1K1EHkMkfXl+c1nn44PkYybV/sh91KsHipF/aE+PcLCN8+n+du7dT/FPdf8zRe3N3e/vnz5
0w/7/3/x8SfHiDg9X39ZX+2KPHtWJckGJPtJ71JvHz37+8qBwDHlHIYt5WWADcKwhIBDwLAs6CDD
4p79FV55/5svdsI3ftwxA80+DNPi8uCDhyHBlgfyfuXkCdO2Pfvw/hynZdlH+7vj59dwC5fby3zr
9/9ebnF79kHtnMmp7kNvpF/+jrY8pnXWu19e/fz7ulwdNvrRO12Wtax3Pv8mhA2Q9F7AJUU4PkOC
ykD427J4l85njm2AZLIcwFcUO2IMocMLBV3fhOgJ3vImRP0F6nuuGMecdduylHXGqjozAJLWGZd1
Fu8/zEO6ai9k8YqNlmyAhLL4eE0R05gsNxXaj4Ln3xS2AZLKgrmgSIc5lHVbbhSyhHj+TWEbIKEs
O/A1BY0pVLqo4BvSyiEEFzy7AfLCQ0baBsSIA+ewbDlNYQpw3CC3tOWc/LK4rT2t3PLL36ufD9fO
51RaebpsDgJtROwqaeXINPvLmlc/b1WUciZbovzbMRvSym8y1NLKF5g57H+mNW2B8sWtFMKFcY0J
wpIuH9TAM/C7wd+J8Q/XSwqn54B2UNJrJoUqSjZAadCndQ64Q/Hos+sAdfpDfYZQYtOwhhIg61Ea
9BGZlrBDJcHpL34ZQklNA1dDATZAadBHZBqG12llBZQ4rQwI3jPPwyW7bfAz+CG4eBlmAlp5pm1e
c7EG+Pqww3HcATfvBghpGSaAZXY5xjhXC4BS7KB60lWlCZS4KlMVJSc9SoM+oqpkDB2gTn9v3BBK
bBpXUSyGogZ9BKbl0VncaksoVHVaRlBS0zBUUbJB/TToIzLNg8Gd6ApK1WkZQYlNwyoKdrjSUNVp
HVDx8YH9KSj1A/t7iEAdelBStXtGUNLKIVdDicngztOgj6hyIBjocwWl6oaMoMSmpSoKoh6lQR+R
aeQ6tGik6oaMoMSmcRWFDVAa9BGZlqPBxOp8yuK/mmF5N3ome/lZ0/dZQUnLk0MNJXivR2nQp708
dyjkqC/Pf01ZPBwqg/5Q2pTFPQlx0pNUUhYzJgY3rUNKzg95SvOQGWBIOwLmLa6rx3ekLFb2nHFb
hyUuOAS3bMOS1mlYo2OHOU8T4XXKAvA2XG55/+NvL8ttvjz74Pqcwxgiac75RMoi3UYaOeoeHxb1
Hv35De9sgIT3gh24oIBj4u5Q97C5lIUUuy+ZAEllIb6iiDxGr9sDqpAlKTaEfgT6T3cnPYALCjzM
AeWnP0tZFN+/sAGSypLcFYXHEVNt7EoNj3k5pnmZ4zREn/MQYF2GvDgcfFr4Ai74nKadPPESI8CU
Erv2x7wtv/y9+vlQ7XxOPebdHF4gheTi7GubUm0xwbJz7lA1FPLFpLVE+bdjNjzmfZOh9ph3JU9T
gGlb/UKrd4zspwXYTTPO+ZI+uAanMTh6N/g7Mf7helG80WkHJb5mqIpCqEdp0Ke1Cb2HAt8BSvFG
px2U1DTwNRRKrEdp0EdgGo/JGehTQimezdhBiU3jK5TgxpSCHqVBn3bTgh8d58chRgElfsy7oY/x
ssSBNnLD7OI2XBxtw7q4TFPEHBwWixCvDzscxx0uQDDgwtOATOjhEuOWqHaC0RvcakvVFQ+f7KCk
VYmhihKTHqVBH1FVJuxwf1M8fLKDEpuGNRSIXo/SoI/INLTQp4RSPPexg5KaRq6Kkg2GogZ9RKZl
36GSFM997KDEpqUqSnR6lAZ9BKaF0YFBUZdQiuc+dlBi07iKQgb106CPyDRPHQZaxdMQOyipaRxq
KCFFPUqDPiLTYjQo6isoVSNiBCU2DasobNDINugjMg0sxvwSKqsaESMoqWnZ1VDQG1z0DfqITKPU
oTvKqkbECEpsWqqhcOhwe8zKRiRbjPlXUMpGxARKbFqlEYmjcx0akaxqROLovX1LC07ViBhBCU3b
oWsoydlfaeBUjUgcIdkPtOBUjYgRlNg0rKFgNLhTN+gjMo3IoKhLKK9qRIygpKZ5V0PhkPUoDfqI
TMvOfpkevKoRMYISm5aqKGzfPYJXNSJp3Ckeg++noNTB9wcIth/twau6ISMoceVwDcUHA5QGfUSV
Ey2urBIqqLohIyipaSFUUaCDaUHVDaURLPITV1CqbsgISmwaVlG6mKbqhtKIucPAEVXdkBGU1LTo
I1BoCCuL1GXSArjjo0lb47sFuMXqGgiNMxwawsoidcE1cjM+mnSVvovFLVXXcRDNWBQawsoidaU0
AO7oyh+vWt8swC1W14BohqHQEFYWqXvYVyk5CbZ3hhh9HBBkYmoT5P3nzxC/zFL1LaiQq83jhc9S
RceyVGXbmJDJEiddIkoKRrrkLZGuU77lmnHbPmmFpS33knClJZGtssRbqoizvA20VYHLdjdL1c2L
8AZsea1vVCSpYhyTpIpxOEnV7aegXdU5XaZKUsU4PgtSfvZhjKlOUsV4TZKqbIMTtYm2HiZJ1QTF
41xtjoYh5hclqeoES44rQVolEkmMO2IZNcQE653xQkUVDrmyGaXVkS5z3U2W+vGHxclSMw8mpuGx
iM/7VLWL05t5avL/O8f4pfJWuVYQSoUmgipOUpslkpS3XHOnrY9NhsvTtz/tfK0LQnITI+GBd0Qw
mohSVJPII5OWxc4JdWjRcG5gEou+Wvy2Vm8jZfP5xx+eNNp5JzMECT50xDgRSdQdJUknlbx01DgP
sVK6Wue9lm2ehpZtW6vWlYM25757RukJ0yfigxPKTvJrQ5vOLxbz7VrHwq/is4u+y3a5epGLLMeK
NH/+7PsfmvjrauFzYvChkc4psX9bPvt+EUsL82nvAGRI9CRESEoyQ2LbKsK50cRmmwhjOmrvWqU7
+XRb+Ouv0NamwPp2lKVEpOOBdIpL4rvQJmaTCEH/AEv8CprQecoxeiTHY34Gzu84fALbUp1u5vhm
c55uYy2UOBB+7mGMqGgus7RV+RyVnlHGvpsiLfmmBNZ5ukOTzaTNH6+9Rt6+vDjLWvwFY9dGsama
6YncIZsk3Hc1ec7HZOyX/pL1pg08asm2RfPOUJm3V+b0NFeX6/uW/g0kDy+Ful6++MetUG/t/ebr
2YRh6HLL4en6wYz+/dizzQ+vM7BwODjTMrrWJ2nNDvYC3MI5iMwLRDMShYawsmAOQs9EfdM08PLh
x+vl0N3r7xpYD2Yb/9zPczhshq9eX5H12xANyarHL7udztsWMp5fn/nVJYzJqxNPrx1gMR8gz+P5
5eK35urybN79BgIKyKNG11UlrTnpUoBb7MkaRFPquynapvfWF4W8/s1n773VKEfFGyCaZSjbEJoW
1RttINzRVVzJa25ELcAtLUsuITTDLAoNYWWRuuApVTu6iit5TRbRAtxidR2IZiwKDWFlkbqObq2s
C3z7vISoU38qXqWlI+5iM2kc+yoPYd9qnLojkDkrJzEeUSgFzmJmzNV297e88qcn2bev/PVw++J6
MLoRLusGgQtW2ztBXW060jUFmakRWYrvMtq9mnKMUb8iFLiMkXoneDr6/iToQtNjxDDufyDGTh3e
HZOVzTA1kD7SqX+hPh9/WGjbSTNfbiedt9HkabMePm7+/C9ZP0PcyHqUPeVBaf8q1iOcd+cqVoNl
LfIw/8/Vi+EChCEY/tDcnOfGLcXut+4s5qbypOku84puXMWwqzlr5qt4vjzZGd7n8TsFY7hWtcs6
R10Fq+yMKT1JC74ZS66Fav4Y7iL5q9l2HhQEzc1o56EGsp+rEcHa6NpkxP6EUl5aznOTs36x6KbN
JXu5vJ1Y+hdw3fW3zaTSntO91a8kfDU89vrwa281wy9tKnWmsON4Y88O80pYs4EKkfsL+/xyl2EM
tQjw2AbrpHm+8G2bi3yIG7ut+KT8DsLVO11/7+pU4YpRmM00N28eeifjL71TQ9BSWkiIynZqzE/z
lZhtx4OWNCQjK5xi6NLVOoNSDukM07ZdjBaxedjGy840H11oqfG+vnOSmDOtaHXLKuP4I3MtjuOM
rx1PI+L47bPjcTybjfENTBw/RC0CPL7KIuN4Jb+HjuMMYmMof+hguifLJpgGZ9rgTTQdqyiZIZjW
loijFFkiEwdTBrNBVZhJg6k6oXRGxeF+f6OaF9EvVm30qz8/2r7qy68zPHnVCp3cXVNbt/Ogmxvu
CiYcJyFUPNHoIBZs73ICIG5/0PvvNjJfX4X1GtdwK31zHleLebfcCeyZrGgDdTJIF1w3kD0dHj5d
Pzw4WhfnNzFk3Lx65c/Wvxr71fbzzQ/75z3o/AgfzJMCy9ObuOj3J/cuMdxSeNJ83V5frK6HUpEz
3nzy1ZdPm+v1R3ymZ4LPpH4zLM6FmHFKGF2SfltEvzvkaRPm/UAvzx8Ov/G0Ofc/Xi6yF9D8cn7R
v6Tyh6fN1c1pWMx76F387dP65cP9q67/Qf60Ga7sP2mUpMY2P9xu+lluvBkqMynlw5fZocfL42fY
N7TVY9AurRd3kLXmMcgiNMZOmG9o575qXCwuFw9E+8VqdfVD86Gfn60L/covlrH56KuvPs9El1d5
uSFmon51vWyGmbnvQQ9Wjr8SdY8/GfKYtIs92EBkrdOPQRahcYEHs8dpkA5pV6RYf0zapa6hGESW
00chi9AY7xp9AdjDWWT155crvxjIXcXF/DLMu2bZQ1yfxUWmlXxynlvVqSD+7DfpPV9cXl+E20du
x7vp+qJb9U32172xL+9zXe8Cvl744cPtEPit9ZLiM6Fvt0y+1QwnWT7fsHh956M3Hs+aL89ivOq/
n3sq87PmIq/7Nusx9uuZwMxaLZzZ7u485CXojBtT1S/dL3VLazrKAyFbRai0zli6x0IPsV9MLIs8
XpZJCBXLIg9YCDozllWx2JNFMVUjywSECmXJhPdYmL5wnBFVLA5kqRhtbgjJKkLFsrhDFmZGtTuM
eZa9XI7LI62rYVP79cVPF5e/XJDzPNTPY0ESU+qs1JpI6SLR0VDCXWsIbYOPlAobTcrMqbGyla22
LPjbH+vj4tfr32sWV13z2jE//hpojxH323PHLaW3Zm4XHjN1KYVUnfaGOg5dVRojT6JLupOgtIzd
hgOYyhjmnwcsD9ZG92jsrIe+fpsSzFgaVBdTlKFtmWTRJmYkD1G2pjXdGxB3buz9Mo4xQc0Hb78+
atfuzDD+W/0cMdo6/BwxGh8LXSZs4bRtkikqbjpifbKEytSRIHgi0nihgmgj5f+cJOQPQ/sf277O
fW9Cs7xe9AsMsWechT87+63fa/HCn61iAFk4NSWLHIcGKTMFv+o5dD6Ho+uLrGF+KzPpQSAeUk/K
o11kEkMP8Wq7DW5+MejSb46LYRi+R4iI47KSyHqXen+yLs+nnfXHI5dxcRNPN+8v+l00/SzZMyWo
FDyX3K+blw/DZ3tgiuRm4YfmvW0J9e2a0orCoHpS0G8X81UecGUpfh4GV+v9iX2zb71mmvJI2sQZ
SW2XSAw6EOlc0lLnf5l+MIbY82XLzDr+HsOTbMyT7QzQk025knW5kqFcn2DsKT+StjZZ8EqTUZ7p
u9W1P8tBee2Tj8okbKjcog/Tybt/+V8fmNkQRnLkykV0dpYD501s2PbEXH+q66bLDW/TF9tlatgh
BztjurrGDhx+HSrM1eVitRzs5ko3V/nszxICFbf3bteBfrrWPy1izEJrowUX+ml/VPC6zW9QENvQ
WoM3dfGLmFdE4sKv1k1IvK2MoVn1jK4vQsxrC3unbcUPMKupIsR7m37qGm6ou0PsEg8EmwsCBs2v
78E1lbj7dXKV48EyxcXpunIOc0Onq8vTDSPPJFNcMqJ025KUhCCtaofj3cl3gQcRYkPefii2xa1a
Bq3vcSFbtcBEMEkrkmTnifTSE02VIdFFJb1P3hoPM3T1DOtatW2pk51SJ6tLsqlrGMuOat+y8a42
jOx7cIj9avqO/+K89mHIHeOwrrolQTos71ptRGcJ1a4lso2RUGo64lJkPBqqTBIPxrDOYdeFvOuu
TzDWHOWkbiY4FEM4YnIniK4NsRtqTiJGs0iYpJqIpBznWvCkeD+5Q51jtGVSMYmf3MH8+GugPZJC
9hw1uaOisV2nBRXCApM7JkklOqO5EwyiItlEfbe4mKffhu1973/crKfzlk38Na/5QLDWTAN7ezD6
p7i4iGcHUIzO+P5FVPtij6mKm77aFRqavrK201ppo5y0jIfOtEm1rHc6EZyn4g2Y+zQdzP1J34qr
FSblVTr3KznERnA6CRuESthVxTUvPRK27vcj5KGC7ddHvRI/+bj7LXjyMVunrHgY1Y9PCzApr2Lf
1BAbTeUkbBAqFfmmpnqznedYXtX7dQYeltsH0UfROi+ailepFykKsjHTsEGoVORFtn5w97IZj2HI
8vf6kOZPiyhc5wVJ0XSEiaSJN9ERG2nHNWedlSnn4fGrVTy/Ws9cXG5njPLLHN+GH8pmJq2l5jYR
ppMiklJDXKcT6bi10RtPtYMaXzajnD2M6MfnypiUV7FrSpCN0Q+w2WR/68wnl/mRfvXgcJ/M3dtk
9nfJ7BM3D0D8vl0ybDg2wJzejnVAXhJcHBKIwY7lsU26TcQYb0jbMUlE1zISvdZJ81YHFrMRJkra
JiGY9xY/2MH8+GuwPRayp2Cw8+77s75aZ+qOdcG0JtlOQivZqmuTCdF3WjiIitrPGLRPZQzzcCiw
fXAPHRoBpE44a5NRmuUnqcrcgvXRhs5HQaUGvcEofr96MIHi5dWOh0yIt4RbpQk1ypCQeCAq5U98
F1Tr5D+WVwXI1qrj2K4j0k0b1skS9+L6Jh9o/hgCtZbWgL7zD6jw24U/n3f9Gu4WNn89f7I5CAM0
z3xGraxhkD/dnlLL4LGh+YuruLhaxN69/bJ5mT379Rc357f7dCAq6vYqr0Iq/8x0ngfxs7X5+0nP
t1/6m71z642eBsLwPb8i4goksp/PY4N6AwjBBQJxFCBUOYkDhZ7YbTkIfjxOugvL7rSeWW9bTgih
0m77zPtmnNiOPW6n32r9lCPKJtEOEVxwrhfQp03Rc3GcoufGBhs7pVsXg2q9Fl1rrVCtFUFk60Nn
pW3mgdJUsyy9PruDm3PgdULvUNsm5c3ggxh1/rwfXce+NalFVl4I7T5YLqefHzvvN5fjavJtXfQh
m37CN/2Eafrkd+zO08m9lgfrsHzEDgPwTkHvRy+S1g8fRWiD62waRiWTT9HA6LWxehiEcwCi7/aP
InyaYwizYr2QukbxbmfRHb56fB0NVETD7SI6icYApiIGgiOMMYteaC+RaIpHulh3eBE2FpftOZqF
RtJoBJUsd42tybedwz/IDR6NxLnDbvZ/eRL+Ux5/22d+bKr+NB99gBvjj9ZFWB8v8g9xCXcjHJYm
eJ9g25Wc4wZGIRNECVKyewV6Ya0j9Apw3G6/YPOxJ+kZYGLAWOqtYdDd4KUV2g/x4a7AkHwSfQpd
L0U3JAvghe6S6wMo5yA8a1cgiBrFu7dmqJo3qo2G+1gCg8bgyZ2jwxxhPKwMuv/H+uL5Y7bihEAW
l+u5VxhNK0eiEVSy3DXotS6eP2a9r3OXymW76zGa1Z5EI6hkuesUxi1WVrfh8L32LC7X3YDTjCPR
CCpZ7uLPqmL9bRsqc5fKZbvrURo4Eo2gkuVuQJcPFOtvO1GZu1Qu010ncJrxJBpBJcNduxDY6b6h
WH/biarcnbiaxGW761EaAIlGUMlyVwKmsnhWtJNVuUvnct2VOC04Eo2gkuWuMgHhFs+KdrIyd6lc
trsepYEk0QgqWe5qjd0ZimdFO1WZu1Qu112F05wj0QgqWe4aj+VQccrNqcrcpXLZ7qK5C1aTaASV
LHfRp2kojmJcxekNLC7XXY3TvCDRCCoZ7rqFdFgOFUcxTlflLp3LdtdjNAU0GkEly12NttDiKMZV
1EVjcbnu4jSjBYlGUMly16LP7uIoxpnK3KVy2e56lOY1iUZQyXIXtEW45VGMrcxdKpfrrsVpACQa
QSXL3SDVHteJ8ijGVuYulct216M050k0gkqGu7CQwiDc8ijGVeUunct11+E0CyQaQSXLXS2wHCqP
YupepNO5bHcDSnOCRCOoZLlrjEW45VFM3VsiOpfrLhiU5gOJRlDJctcCdlXLoxiozF0ql+1uQGnB
kWgElSx3wWqEWx7F+MrcpXK57nqD0gKNRlDJcjdIrIWWRzG+MnepXLa7AaWBIdEIKhnu+oVAueVR
TKjKXTqX624wGE2CINEIKlnuKovlUHkUE6pyd+I6EpftbsBoWhsSjaCS5a4FTGVxFAOiMnepXKa7
INDcdU6TaASVLHchIH0VWRzFgKjMXSqX7S6au94IEo2gkuVuAEC4xVEMyMrcpXK57kokd8NCGEmi
EVQy3A0LKTTCVX9yV+vldedXuYLf6Sz2Oq5WP2W5U30MO44pAdghwcPr8bQdkxhMUiK5EYINwY4+
q7YypZCk31+Pt7y6unmyNXlhkb08shN716qqvT9GhOzcRZ1z5thxEZxjZTkEhURYHE1Cxen1LC73
Oij0HuKdJNEIKlnuhiAQbnE0CRWn17O4bHf3s1zlUKwk0Qgq6e5mrrQG4RZHk6BrcpfB5bqrDUoL
lkQjqGS5qzXGLY4mQdfk7sx1JC7b3YDS3NFUstw1ViPc4mgSTGXuUrlcd41BaSBINIJKlrtOYy20
OJoEU5m7VC7b3YDSvCbRCCpZ7nq0hZZHk1VVPBhcrrvWoDQHJBpBJctd9NmtyqNJW5m7VC7bXSR3
5UIoGo2gkuGuRDf3OlUeTbqq3KVzue46g9IgkGgElSx3lTMIt/jmD6rebzK4bHcDSvOORCOoZLmr
90YxnJ1gOxuFydsB0UjgwKoQ//iNwusdoUPeKIwa4w+sPbMpL/v2XNRk8Vk3LOIwNK+sT+a+vL3o
8gTPn1VNfmuuv/1lNdWcPXmRP/NidfFiKq6ULocX3gepzejbYMLYWqNl24/Rtyb0k1Ynle9e9Aqc
0tK3sveqNSqNrRc6thJ6rQNoI4xfQ+YSUT9+O7z6KILjXLZqU8yleVPsFE0Xf+i6u+zzUctfN807
+x+8+xvz5zafeverF1P25x+uC51t/RSX4w6Ts/7zXzfv3n2xmA6rPR2vluvqPOMyF3vOKtbn9J2I
bG4uNjSc5Hjylzn0/M05rlyibVJM+yyuAQ7TwChxfJ+pm+8cUrV4Cj340sH1mxO8tw6wF6wD7CFI
DO20+pJ+Zj4bOR/kNegASYymswef7/8cse7VxZqrquYv20yZg0TP94cgN2ehi5zUyPn++Gfx8qlZ
NqgD6w780Tg/zlX+vs0t8WyVhubq+7sc3aTsql+eXd88aVOtV7R3u9lE//T3nYPF8O87G1X7N543
GsLj48VNvB7OVt+3czXX/V+Zv43+4vyTg25uaiEPrfT2cPpuvHi+PFYLZQ+99HPXdLrv9bfL6UXV
+S/ralx3Hf/5jVZO67ngG7nK6SOE+GedvdVt36fVarzN9e62DtxZ31SPit6/8V5d/+W+m8+kK91N
sYC0KPRy2HUOX2+ul1eTLTncUkRTgUg9DkNwqRudXD8K+bUVX29il5vflCbZk5wkF683B5hhxIE9
2L2r81E6T3H1iOUic7SueOnwaLfLcG5V2py+26xHhGlAgEg5gcNK5+4Mhm3dNFmOK+jKOspdfot+
fhWHyZE3//i66a8uLuLlMBXgfr15cbvKD4L8ULj+5ZvpZKn2h6ZthzTG2/Ob07j8ZnWS/z8XrF3G
u/9r29wwz9LNNH2xuspV/r79sRf5Iz9enIzCDHocUzuYaFo1uNhqsLEFPcAge6fHTjbTGGYe2b60
r1kvlHuUqv3e1RRnOWJczOmZHDcWjdH7kxNOUaoBD7rXcYhDq8DbNio7tkH2oQ069KEL0ru5c6xU
580AYEbf0asBU/74y7gej+hhVgO+vVzX5gUhYxetsKa3SHktH0eIwvRBx4QFY8Ufc1B4MGUqVhF4
89GdCNBDbX0YVOqstqCV6wctQ3C91w5G1QOEV9GwjXjYw3tDePSqwAaLN0s7NN5Sid4hbUr0Tj3E
i82jEY0CDnYNmc/qz1O8PF19e3sz5HbxdN3TY+tYn0r16Eqay9ylieePJOiPE7fmAsCkUdPV5Xwq
xcl9ySKfIabNRWjXVyVH95Bth7cq+hiVE/WQn2f3jD+3pjOrB59Zug9fHnFm7Y/c/zWH99rdBBKK
DdVJsdXyNtTTqY92dfl68xcfm3h9neIyDY8byPLidO3J/ErgsVr/I4n486C9i6Y0f46HUN2EtkKg
NJZniAKZ/3m2KGgXRdcGsv18m4exT5zY1QKOklUHR7E7H7sx8Qln4Y4b/x/zybeXTzyRXKWD8Zh+
lFdYeuFN8RXWoQ/YaRylou06r6Xox513SB+leZCdP/7Hu6Nnjo/43ujj9Z96Oz/DX5n+WrPu/25i
2H55RPqFe94gmYU8fFj1N32HdBxN975FevrmXyWI3/w3uo7U/s3COnto9H/39zxV4javUf5MqVW6
mecn8Dc/834nLAYXNjNQ7Bj+uvro7Q83a6F2FyJtfqmdfqsdVO9aE8bY2tEDwKCEjWFnIVKeWrn6
6fQ8xe9fvz/yw7vvf4n8COHWrpsaO9urYFLrAoR2MCq0OYrQ6mj7FGFUId23bupz3BpzqDX3Hyow
pK06/9ZqZYWzg0iRPe9pFkH4QoAPAXcPFhjS8x4toOxCaWyJKLqYsHN9hJiS09A/vJVRSN11oY9d
73tvYw5NOJf6oIbRCd3Z5ztaICs2QlQo3nlnEqq2aFVHw3xTEhTuiKzJAYIjjDd6duGUQ6IpbiML
GuquBJXL9VwDSgNJohFUstwFj6ksbiMLVQUVGVyuuzjNK0uiEVSy3A3aI9ziNrJQVVCRwWW761Fa
ECQaQSXDXbcQ1iBcoG6jHiGEGA30oM3Dzy8YUwdKJKMUmGHUMgbVJ2OMtH0wAp53K352QqljO7F7
raoK+T1KhNzcdXhcRh85LoJzrCzXDouwuJ0vQM257Qwu9zqAwGhG0GgElSx3rVYIt7idL4Ctc5fK
ZbtrURpYEo2gkuUuCNjn6uJ2vuArc5fK5brrBUrTkkQjqGS568Ej3OJ2vuArc5fKZbuL5m4QgkQj
qGS4CwshNMItbucLoSp36Vyuu0GgNBNINIJKlrsSzaFi6ZUQqnKXzmW7azGaEopEI6hkuWsV1mZM
geuEqMxdqySJy3N3iguleUuiEVSy3AWBcW2ZW5m7VC7bXYvSgifRCCpZ7vrgEK4rcmVl7lK5XHcl
krt+IQSQaASVDHf9QqNtBsrcqtylc9nuWoxmpCPRCCpZ7pqAXVVf5Kqq3KVzue4qNHetkSQaQSXL
XaewHAplbmXuUrlsdy1KM0CiEVSy3AWsjI4RRa6uzF0ql+uuFigNgEQjqGS5661BuLLMrcxdKpft
rkVpACQaQSXL3eCxHFJFrqnMXSqX665BcjcshLIkGkElw92wkEohXF3mVuUunct216I060k0gkqW
u1pgLbQ8irFVuUvnct21AqUpINEIKlnuGikRbnkUYytzl8plu2tRmtUkGkEly12LttDyKMZV5i6V
y3XXCZRmaDSCSpa7Dm0z5VGMc3XuUrlsdx1KM4ZEI6hkuQs+INzyKAZknbtULtddkBjNW0miEVSy
3PU+P03nrSrb3LDF/fbm5vrr5p14dn7n23VcrlLz7ieffJhJq+u8UT5lUry5XTXz/sKvvsY4wWH6
yqMlqGwjwQGJy76K+21E51CMItEIKulXMXMV9hSz5dGSr2kjDC7XXS9RmrMkGkElz11vK9bZ7RRu
JC+2xCLROhy6zvbvtgSZsvB1ewnyH4vLc+lG1Bpz8Orsh5YgT9B8XUXwIuo0/UrPXX6cg7NWU5Yf
I7C9pcfzh55n2XEW4iV52as1SgzRuSiSe3jZVpKQdOZG7T2Y3quhT3q0IYYwdn03Pt+y46w4BKhQ
vHsrqnqlP0fjK6Jh34aRHJALIWVFDARHGDdnuZA27EUDtvk2xeVNl9Pst3c3X811EUGN0XbajeGe
eAJsxTPlHO8S1QfEvUgBsCi0c8izsjzDF2rWDzK4bJUozShFohFUspLOuJqbwk6PgHwfRCMJ/9ke
wd2jMA2YLVYc3Bv4u9nSeR8GFUKmSdOOXkA7Wudb6EdrYhqciCN5r9ZszcF7tR7VGjl21lqVWjsq
aOMYu3YwZmxHPdpR25i063asibFPPv+wtW4cWiOHrg2h062TPukIOrhEL/89WWP0odZQaz0WNz9j
cTkDSFzHLPhYDCvfyaKX0DnjFNjx2FUfi3zMFvDh32LL1o79rUoAW38EjLAWtGiz6KHtRxfb2IW+
7XTqvRh66TpkzKwWQv5tPXrzbaY+bKvxeJbOhzwcyIKayba5Y/HHhvHDHKE38iOXDDVYvNoe/Cg7
qGjohPT6KAUhCYUqGd0utTDOPEJcToqaAppHjIvZKc5xY9FYCUeJhuAS6+rluCpLvt7F1d2enQ/T
XPPtz83LL36MyxfL28sXuckPL6aasPN/Tr9PyxzP4rsvfrw+e+/l5uVl/ubJp5++9/bJqHs5jk63
yUjRGqFcG2znWqdAQeiT771vlld/LQHbrK7P4+rbdXXYBq0T+3LzY59LpLwum4t0cXoRf37dKqO8
n/83K/om3ay/80juzNUz1v6cXi/T3R34RL/R5GBOrFTN+2dvvpFrv8R8vz6xd/+3THflMPPP/fQd
PDZfH9s3U32/9d1o07xeb7Itq7kwTHtxNaS7q3p6d4XbOfxGN+3az9OLs67JMu6+MaP+/NbZRfwm
NaR0aNplvJgKyN/78fXPF/Kzy08G1bT9xTC/23iCLGraMcXcqtKqyV+ex2xP/t78NDu9vlreNLJp
7z6//v8cXr63f39Xa/1r7PIFH+ov35xaH6U4NF9JaQQ4aJQ13hjX/OzdqVZt7M4QvF7IcEjtQr1X
uuitb+PlN+nTy/7qKudQzF35dDdBg0GVrm5O881mnqqcz/h468NP1xGtmng5NHddkE2F1rlWGRoI
wJf0gk0M1dPcwpiUyKoyM6Fdz9GMySroWx9H3woz9u2g1dgaiNoOuktCuanzlCsdN1sATIcW1feA
HUPffPsQP42tLRw+x7GFb6Yp5WbqUN2kZb5tTg/9uGqmw5r0fFjTK9/+ePHHZCAWU1C1BbT/MrKe
Xz3c2bE7yN78Ujv9Vpuma6lAptaBkWowWoqgN4NsPQ2yh6cZZOeLOpzOZV/X9XNwlyqv3F9c+qdY
Q55/mBySx8+jNfMfYtc9tvyfOA8kjlkIcawnfPkAO106wE6lDqS2XauVt603UrRy0LFNMYU+KtVH
m14Irb2CbvILRGuCSK2PbmzFoKIYQXqVxu0D7Maz8/QqKl3W3lUOO8pOT+XmnJiOstM7H9Q7VV+d
2DrKTu+WrMw/xXWZSl3cM+00XkwuB0gvPJc/+zhiuKUBEZtrKgNmDdrWtrG6GpSPeHl0qJU2ec47
CKrrMgG0aZ3Tok1xtK3XVrfgndBeDj3EDovV6cfp/wms/yco/T+bk8ce/7l9nP5f/fsn0mu5rf7f
dKlxk1zlhUPX6Wyb9fbrJjgzCDkO2mPrdLb/HBqjL9yn7oPtrdOJN0+2TqeZfI9dnj+4z3ojHJaf
2OtqpfUgTa+F14Vigf2oxdCNvRydM9Ek6Mb85WCldAbG3j3jqh27cMZUKN6d+ZQ1p4dXR8OdFZYG
jSHYihgIjjDmgu0iYOUsbHGLlFRVM/V0LtdzpfZpbupUkGgElQx3M7cq33aWp5AbPBqJq+1ZHHfo
9kSPQ3zd6ueoQ/AIc0fHGvM/jV24LbVj/vt7C1tVfgdh42idCnaU7P6CWxjtCf0FHLfbY3jeksJZ
DIhAvWv4zkafwtCNcXi4l+CdNV4mk/I/ylndjcIaEH2U1ig9mmfsJbiFD6JC8d5du6aQbXU07CcW
YDEE+pPjMEcYzzFYKMCen8WtvrJqMz6Dy/VcW5SG1JUHaC7y/EC+s/+2XiYVL/u0+V52Xws56DH/
23Xw2/rbp8vlkKdI3tq8YRmXKb9s/Oijt+9G3u/PA2/SQaZYmFoqkimEi8FKAisDwi3uSJamam8T
nctNAiNRWpAkGkEly11wGLe4I1mamn15DC7bXYfSvCfRCCpZ7gaFqSzuSJa2MnepXK67VqI0AyQa
QSXDXb+Q0iLc4o5kaatyl85lu+tQmgUSjaCS5a5C20xxR7J0VblL53LddRKjaR1INIJKlrvGKoRb
3Cksq2oGMLhsdx1K85pEI6hkueuwvoor7hSWVTUDGFyuuyBRGkgSjaCS5S54jFvcdSWr9vIzuGx3
HUoLjkQjqGS5G4RHuOVJw7q9/HQu110vUZoGEo2gkuFuWAiwCLc8ivFVuTtxHYnLdtdhNCmARCOo
ZLkrXUC45VFMqMpdOpfrbpAozdNoBJUsd7UQCLc8igmVuUvlst11KE07Eo2gkuWu0ZjK4ihGicrc
pXKZ7mYQSjOGRCOoZLlrDXZHKo5ilKjMXSqX7a5DaWBJNIJKlruAttDiKEbJytwFIUlcrrtSojSj
STSCSpa7XmHc4ihGycrcpXLZ7jqU5gSJRlDJchctaQLFUYyqWwowcQOJy3UXWQpgcijSkGgElXR3
M1cqjFscxaiql1kMLttdQGk6kGgElSx3lbEItziKUbomdxlcrrtaobRAoxFUstw1qMriKEZVnSjK
4LLdRXPXqkCiEVSy3LXoVS2OYpSpzF0ql+uuQXPXKUeiEVSy3AWFqSyOYpSpzF0ql+0uoDSrSDSC
Spa73gqEWx7F2MrcpXK57lqF0pwm0QgqGe5KfNkilEcxtip36Vy2u4DSLI1GUMlyV1qDcMujGFeV
u3Qu112nUBooEo2gkuWu8libKY9iXGXuUrlsd9Hc1cKSaASVLHc1lkNeUE/hlXK0KoBIKoTC5gKl
0uAyztrR97Kzg+qc6EKUg+3BDs97knN2whl1ZCd2rxVUtvfjR8jNXUDvDKD8keMiOMfK8mADEmF5
NPk7d2fa2zARhOHv/AqLTyDYsPdRVCROgcQlTgmEqrW9C4Ve5ODmv7N2khLSKZ7JJuWQAJXWyTPz
7tjec8ZVPkOwXHI7uIc0OePSomgILwnqypkUEHd6NOmr7oeB61BcqrpegjR1NC9J6iqnAe70aNJX
xS6eS1bXgTQvUTSElyR1Ndiq06PJUBm7WC5V3SBBmlYoGsJLmroGqEXhzXFrURSOsVD0TI9aQ+U9
guWSW9FBNCssiobwktSKTgqAOzlqVVVJ+AhcorrFLpCmFYqG8JKkrtcB4E6OWhWvjF0sl6yuA2nO
o2gILwnqKrimu58ctSpRFbt4LlVdIUGa5igawkuSusJDMTQ5alWiKnbxXLK6DqJJ7lA0hJckdRWU
Cz9MrjCqqnVUApeqrpQgzWoUDeElSd2HfSLKwbi9I9Xo05GgJUZWnoz9zx+p3slwDyrka88O47Nh
8alsWN4HoXT2LOiQmSkCsi5Hz3ToBqetkL59qZPOSiU8E52XTMuUmecqMuE6pYJTmmu/mw3rh2/6
52HPa2OjIhmWkJhkWEJCybB2/gr7VZs75ljJsITEZ1sq157GmepkWELWJMMqPnhVe4udJhnWEZrH
++qEcsMzn5QMKwQVs1Ka6Rw6JnzmzKReMBeN7LvsJLcBtDVU3+/F1t2krO+8RU7KqnUJmNpwWNsx
T18PKXHnFz9c5qb8u5PVQMS20yJx1hmjmY28ZcmHzBy3SnjZKe9EU3Blmvi7nY91vdLSpcRkLzum
BM/Du8WyJJPQXqQuKPPQIzuTsvomHT36ZP7zRr1NRqgP33nrrLEhBq2NZH3sO+aCSizZjrNss8lR
B+5ChKxSojr52d6b7TKPb7bNXbW5OXhzHbtzzs+EPVNvnnFxVn62bzVdnM8vt2sq87hM5zdDl+12
+U1psvKsyJdfn3/5VZN+Kv2bkoB8fEmX1Ns/L86/nCdqY748BAAbj6yHkI3SOrDcJ8da2bdMydAz
3lojosw+qfblbeOvP8Jbn3thPIsiZ6aD7AtLaha7vs3CZ9X39itQ4qOlKiS8Qi9zeUZP5JIs18B5
JMe/wL5U520//LV5me+ftVCCQvi60zhR8bos0lbljSz2O2u+OEb6800LrPOB901xkze/Pvsse+X2
Zpjr/B1m1z7FjvWaPlo4eKm/qMmnPiXjMIAzTrrOdpzrsG2aV8ebeVubZzBzebsu7PRvMPJh9anV
4pu/lJ96ce87nysujEOXexteXl9Y6F9OXdt89ZyAG8cFYHQ+uaaoZc1OeQKXOAdR7IJoQXMUDeEl
YQ7CHa2XFftvV4uxuzfUNFgPZpv4dbwsj8Nm/Ojqjq1/DZqhDOD+5NKmVjWHTQhcaiMrAdJM9Wvn
z671Tj8gXa+u4vIWVtaaozBLQtAReZ2ub+c/l70yV5fdzyDQOZSkiKakRfKxSoKsC6E899n7r7/Y
mMD18xBNGuh+nVwt1lU5bghcasBqC9GUNSgawktSWxqrjtqWn5Thw4tNMI80pnEecHNycVqbmhyu
BC61MY2GaFZpFA3hJaEx/U6Sr7on0AO7fI36x7OL3DoetMb7o1iDUInUetoep4Mw/vWsBNtdXA1l
HjczP5ubt9y7ENzw2opLqBqqE11T0DI9MaQn10zarYE5ZVHpq7d91DkL2alkD67TBFVOPUQMI/4H
Yuy8R3bHZLQZpgbSx/J/Y7C88xbRt7PmcjFMOu8+TV5u1sPHzf/+l7yfIUq/HuQP/aG0X/P1gODd
rfkqsWYrLX77bfnNWGlhfBp+1fxwXXpYOXU/d1epvLzOyju1LOkOBSx2RS/eLtP14mxnfL/1t4Hc
d1kb1TkrgxLjSL/CwuEJvmdleYjj+Vi08RJqU/hLH7bpq926CCbcprLYELIxzNq+Z0koybQSiUWv
Rd/lkIKJf2lTcxqz//Jyvo6DC81iNU9NmX8fLC7CX139PNwR38SrEgSgFd4d04pyI45SFhPicrCh
izdNKahZNCy/KpYMEMgOX12f8aDixoUc+FEVaOfF/UGB1d22m1T+GVpk6DylftyPCjxrw0x4XmnI
enw/rLwUd6+G5bNFmv+QLja/nw9P2eb68ubcKK6VHMrAbn48jT3bCXU2v+tKHuJtbAz9a2MNh6Hy
qNDPy9RHGXKOZW1LFDTr/uvwduliztqYyIxIlpncRcazNcz6bFxndGeiP5mF2PWHRbE6/ZL6l4oz
L223NL+0aVe2blc2tutLGH8OWrIoLgdR6TIqMmO3XMWr8jpYx+STWtJvTLmnvzzcKLv/F386sWXj
Y6Q8M0sTXV2VR/YPqRHbFZXSO1pXk26GZrvNjYBs0HpymQE19bKZmly/uptfx0JhvzebIbDlENqY
6rtidP+n8V4dChkvRsmlsc1dmZYG49Jx+0XNssqEn6X7mYLkXZCRR5f2ln7KJrByF8+GbR0/tKWD
c7u4XwL6F9i61+0qyz/7/a4XhzX/T8bLnhu/7cVm/KZC3ozqd3qIU9eO/ULY7clyxweGpJB/ui8g
tL8/AFAXku+tHwx5ngYprFTKWv7ysMa5assvOMyuvh3uS2x/nW7GOsxjrypt3xJD32awaHXTp/lZ
s7dNQH8FW+WPZNXrm9Ioa9z4UhlfqvpE2NIQMLT8fELu/stiWV5Ui5zmF+u3RmmU25uL5e3FxqJe
ZdN2RjAVXPkPd4YJIzzTKSTRC+lF0A175VTWHtDd8uq4fbzHu1u+9zaKpFiXizblZ8mclpK1ppOO
m651rj+ZhXXdrW2rs51WZ8tbtrnXMJ4d2PHyuvYxsh/BfRrmA3biFxe1pzHukIDV1aNmZMB2MaaY
rWcit5ZJw3vW6iKLstElmTRvW38yC+sCdt3Iu+H6EsabQ4PUii+O27P5y6t9KEonuiSF8klmV9cL
e2Jbyb0wIdc9K3FQL+zlqS8r3TRgk7PhM8er7+X16CXNL/O4w7VUOmvWC2uLJv1UpH2IFTNR/3D/
66aK79L8Jl1BKBV2Vgzhln9zaORt267u+vVXfj0+F67Tcn7ZLXZDoxRdysHpTkvtkhktuRgvvthc
PHTeunRZBpeFW3ZgxKv1t6ZhX9z15ovj1wP08oBp4zJ9v7j4Ic2Hk0RjdAx5T86aT9vVzXI1rqDq
mWze/eTjl5vV+k9yZmdKzrR9oZ9fKzWTnAm+YMP9PuzjfLnpL4eJsrLSN37HMOj99rZ0LgUfR8PD
j1x/9XJz98NFP78c0Lv87dX2z4uHn7rhC8sAul1dXhXhjOauPGnut+cuxo82X0FtZq0+fZs9XHa1
NVsDns5s6tq11aCx3h5llPRAxJoV/iPaRVbJQ9Y4oZ6iSRGRiN8AUMz2wvwTZlelKno6s6mh4SRs
7HECFXEDEdq+qOvcU4iIaHuS2UWRk8hZVWzmiHaRg85C1ij1BL2ZhyJ6XieiUk/yGKVq7DlsrD5K
iyMikXSHKG+fQkRE25PM1sKdRE5f1Vs7nl3koNOgNf441iBUorXekfYJbGpqj0ddn1ufdU2dUMIK
ZkSfmNAusJy4Zp4r0aXOxNzGchgllmma67vl5vDk9qBR+XEYr69Wg5vZWm2lz0zYbJjm3LHQ2cw6
6X2KLnIb1POQc5bz04he1Qce7DrOCgY5ND1kjTMPN3bbYP60powe78Ypq9XNdze3P96w67RYlPEt
U7EV3MuO+c4r5rz3TLStYZJza7zzvDNy2OvlrLI2dMIHc/9lw3aOT9ff15RJrubZQ778WdAfO+HP
IyXj790sMTwborCYHkPurOp6I6QC6sZHl3KyUlorwFdN8DsBCJkyxfztgZXbC/foO1tjnttmHYmu
UyLI3HKpTQo+6KyzzFrYWDTUPXDP6JlU8u/Ve8QA6marTvaGe9ky6Y1l3BnH+ix7ZnL5Syzet0H/
ZbOVhaw1QR1m7foGGqYWxwOue4+hzRnu8mcIapWvgb76F1T/800skynDjq4ttny8/GUzt8chC7x0
NRYMgM2MZ4GnhpcPLtP8bj5uKYyL5s+MJ89988N1eciNGXjBeAlm2/UjmvLX7DRlWnG2cX8vUc32
Q2z4FJOtEGyIZyZlVJ1UbVI5bhPV8CMlqslKaN21LETTMd3qzFTWgYlOGx06XrhtM84ADyew0tmo
DiyOPqydwCfUrkglQa62ViuXnOoS+dFkZpyHCdMeg5UUSOWY8HvNTV4Mum026hbRz+minxNFH/SO
7VU6hyUvfgkOxiOUwMm2PvCeOxdM+/dpqp3vpfc+d73Sse1TDL1TVkaXk1JJiodpqp8qRXXxWCpX
4fF+3yaYmr5NrTXUHk0woA2uxgaEIoQutpkZbwFrJlNIGq7qWgLLJWpuOEizyqNoCC9J6lpvKtp6
L2Eb+oaHLHGHdgv+8ib8r7z+dvO0bU9qNB99AAqj+dG6CJuUcP8RlWA1xGFhAvcJdlUZJnx4N8Rs
3+cgyL0CO5NcInoFMG6/X7C97El6BpAzWnLso8FwJXjvs/Bd/PuuQEpKJd2ZTgajXG5j27lghfE+
S6FE9w92BezMOF/h8YNHc9U0R6015NeSB20I6K7AYYoQXlZ25g2kyGS+Y1NVYXTkBhSXqrmwEC0I
gaIhvCSo6x7p9k3mOzZVmXEJXKq6UkI0zS2KhvCSpK6RUOxO5js2VRVGCVyyug6keYeiIbwkqeuA
mgGOT+Y7NlUVRglcqroKjF3PBYqG8JKkbjAO4E5mcjJVFUYJXLK6QOz6GTcSRUN4SVDXz4T2AHcy
u5CpqjBK4FLV1RKiSRFQNISXJHWVMAB3Mt+PqaowSuCS1XUgzQsUDeElSV3tNMCdTMBjqiqMjlyD
4lLVNWDsGi5RNISXJHWtCgB3sjaNqaowSuCS1XUgzWsUDeElSV3nJcCdnnKrqjBK4FLVtWDseqFQ
NISXJHV9gLjTo5iqCqMELlldB9GCNSgawkuCumGmwJ7g9CimbgsqnktV10mQZhSKhvCSpK72EHd6
FFNV2ZLAJavrIJqTBkVDeElSN0AxJKZHMVWVLQlcqrpegjTlUTSEl1h1/RnnMyEkwJ0exVRUtiRx
yeo6kKYNiobwkqSu0hrgTo9iKipbkrhUdYOEaFp6FA3hJUldpwLAnR7FVFScXHM1R3HJ6jqI5qVB
0RBektQNTgHcyVGMrVhIJ3GJ6loO0MSMC4+iIbwkqCtmwkHcyVGMrVglInHJ6nqIJqVB0RBektRV
WgLcyVGMFVWxi+dS1RUwzXkUDeElSV1tA8CdHMVYURm7WC5ZXTB2DTQjZ5pvUpwv2xSXv729/WnY
sdc5maNplc3hEe93V4TGHZ5U92sNosoiJWiFNqhGQDQ+KeichLiTgztbsSBG4pLVdSBNexQN4SVJ
Xe8VwJ0c3Flz+P5FEpeqrjEQLeiAoiG8JKgr4aUiOTm4s/bwg4IkLlVdyyGa9B5FQ3hJUlcrBXAn
B3fWVsUunktW14A0b1A0hJckda1yAHdycGddZexiuVR1HQdpAUdDeElS10sLcCcHd9ZVxi6WS1bX
gDSLoyG8JKkbhAe4+k/uYrPr8Oq2JBu+GJ29i4vFj8XdYYua9FzFLFoTzd9vU2w73dsgDc/ap5gF
F1Fp7bUzynTK6YfbFOe3t8un2apYlFAz7t2Rldhvq4pD7SeykBq7nkN2CWWPbBdCOUKUq5nUUJRP
D7J91TMEzyW3gwFpXqFoCC9J6ipVuGk+v53vcu0O95vl8u6r5q14eTXqNtyu5TZ/+5NPPmy2ueQK
KS5Xi2asE/HlVxDHSAH4Nz2YD5X3HZZLbcXAQZrRKBrCS1IrWjB6pketFeezSFyyugaiOWlRNISX
JHW9CgB3ctTqeGXsYrlEdR3nIM1xFA3hJUndYB3AnRy1Ol4Zu1guWV0gdvWMc4miIbwkqKtnAooh
NTlqdaIqdvFcqrqCgzSjUTSElyR1pbQAd3LU6kRV7OK5ZHUNSDMBRUN4SVJXC8jLyVGrk5Wxi+VS
1ZUcpFmNoiG8JKlrwBiaHLU6WRm7WC5ZXQPStEDREF6S1LVWAdzJhVenKmMXy6WqqzhICx5FQ3hJ
UtcFscelHMTbO6eNPo0JWeKVPewA7n/+nPbmQG5fzmmDwmh+mDDbxOZvjDllZiV39yz2JRHRJsn2
usT4+Z9JZX5r7r75eTFkOz9/qVzz0uL6pba0Z7rpX/I+CKWzZ0GHzIxWgnU5elb8MbGVVkjfvhRV
TJ1pW6ZcEkzbTrNoig7eJCVkL1UvxAay/Pkunf/wTf886PChR7G3Dhcvd0p+N6/xvXId/N6vdbOP
ybu/apq3Hl64/o7xuu1Vb3/50hD95Y+btGg7f4XdMYe5s/n6r7Z1G2dD1uiLfDvfJEfK81JmoHix
yS14zou4JddTf17saX4b3e/PR7tKQrfBY9y1R/WBkFz/MVG3v6Hny1+bbiYrAQElcTgpF70zHkR7
+QU+/T0ZOSYA6nxseydN7g5O1f9P2PogLdmYrX+32jCYqt8Zv82uz0tQA6n64WuhgknFbTMT4cCU
Xfc358clJ+A35U68HKoN3n63jtFtyC66+eXd8glv1WN49OBxs7X+qZ87Fc7Qnztbrx4+eF5uEK+P
l5bxrr9cfMfGKeCHHxl/DX5w/MtBDzczs/pAff4+fLda/JNxbA9u+rFrOjz3gKrFy9uxlzzkZRzz
7aFzop7AxD/THC5W3VAuOK9KusGdGoSbh+pR0agq7xNPU8ggZyZ6COQ0k7ulq6csGtKZyq4zHQ/K
qm3VGnpqS6jK+wFieHGs1lmXuz5Zts5irZ3xQ9MhHlADeAQKuT+Fe1ii3b3BsPZVWweLXaa2llRb
VuuvbmM/KPLa/c9DYfDreNMPq3pnzUurRXkRlJfC3c9fDzUN2fcNY33KcXW1vIjzrxfn5f/TT2Us
vf4/xsqNeZmWw/TF4vYqnX/zQ8fLJT9cn8ckbbRSMK2VYy1XganYR9ZZbp1Npre9bYYxzDiyfQby
OciTZMD2ddspj2cXcXrGQ9st3UwD5USdwiRjFn2wbQiCJW8NyyF2LEoRmMpR9p2ILitXnl6O97EV
VoncRXwyZsyXPwv6IzjgDzEZ8+pmkxq50yqkVmjTxx7IbiZDjLkX3jqnQGP8zqw4ZMw0FUrIvL10
zwIoJXPbRt2rrHhs+7aVfWxdl5N3LgYftdTPQ2Ybpf9ew0dNOHlSZjCGnTOH2juVIblP2wzJY62t
x1+NbubFwaoB81ndVYo3F4tvVsu+3BdP1z09th+beogn96S5KV2aeHUih+5rPY75l1GjptubcavL
+cmC5QCbto3AtlUqz/9WNltrImKMSrG6L++zR8afO9OZ1YNPN05MH3Fm7T72fy3mvbieQIKxslbx
nTtvS70Y+mi3N2fNX3Rs4t1divPUn9aQ+fXFRpNxSeBUd/+JnPizxOt1MzV//g+YANwssBXqlFYA
8z//mBW4Rql+9O6+38Zh7BMHdrUDR4mqg63Yn4/diviEs3DHtf9+Pnl188QTyVV+EF7TJ1nCKsY7
98UR1magF2wZR2WpdettSH3cX0P6KI2D7EHi7drRP2wfct3o481XvVHe4c8N39Zs+r9bG3YXj1Af
eGQFyc+EC4dG1r90Dan45Hm1T4+uIj397V/lEP323/p1pPvfzyw/eIzxb1/nqXJuu4zyZ0gt0nKc
n4BXfsZzVaAN9zu2yTb8dffRGx9u90Ltb0TafogNn2LSy8BS6jxrs8udtl3btWZvI1KZWrn98eIq
xe/O/sbygwP7L5YfwdzafVO5NZ0MOjEbXGC9Lkgfc2Aqmi5Fl2VI5pF9U5/D0hzclf63SaNNMLGV
itkYJPOKt8wYLpnhgXOvQ2sEUZqD77nHy130aacCRSpatK3Rre45eUrYz7xwEwb+HXC/5EWf/smi
F8WhMOPGAE8YcJ+lDzLZ3LbCGTFR/ypymbnpUxZdnzrDiyWtilHn7Ezo8j9V9GL0WGpZ4fH+clI4
vOzsEayhLiIFDdmghK2wAaEIYbEzzIzXgDWTJ+wCr0o5h+cSNS92QTQrPIqG8JKkrgsK4E6esAt1
abvwXLK6HqJ561E0hJd4dQcJrAW4kyfsQlXaLgKXqq4AaSI4FA3hJUldFSTADdiT7MoLo5PqdMwT
2RCiajsjWmmEFVooaXOQKYjspely7Pk/mw2hKOGOrsR+W1WdUjmJhdTYVRyyywtxZLsQypGi3EN3
l5486RhUzamtketRXHI7GIgWxNG8JKgrZlxpgDt50jHoqvsBz6WqqzlICwJFQ3hJUleKAHAnTzoG
XRW7eC5ZXQPSvELREF6S1FXeA9zJk47BVMYulktV14Cxq7lD0RBektTVoJeTJx1DVQZBApesLhi7
RnIUDeElSV2rIS8ns9+EqgyCBC5VXctBWlAoGsJLgrpyxsFWnR5NVmUQJHDJ6hqQFgKKhvCSpK5S
DuBOjyarMggSuFR1HQdpGkdDeElT19mH+am0P25+qsLR4Ft7etRalamQwCW3ooFohksUDeElqRWt
NAB3MvtNqMq6R+BS1fUcojmlUTSElyR1wfksMz1aqsqQR+CS1QVjNwiHoiG8JKirZlxB3OnRUlXm
OgKXqm7gIC0IFA3hJUld4QzAnR4tVWWuI3DJ6hqIJpVD0RBektRVQgNcNcF1vCpzHYFLU3ewC6TZ
gKIhvCSpq5UHuHqaWxm7WgUUl6yuAWlOomgIL0nqgqMlYya5VZnrCFyqugKMXactiobwkqSutxDX
TnMrY9dbh+KS1TUgzeNoCC8J6uoZlwbgukluVeY6ApeqruQgzXgUDeElSV3hBMD109yq2MVzyeoa
iCadQ9EQXpLUtQbyMkxylahTF8ulqqsERHPCoWgIL0nqeqiqhuXTXFunLpZLVtc+pJkZFx5FQ3hJ
UNfMhJEAV0xydVXs4rlUdbWAaFJ4FA3hJUld5aAYktPcqtjFc8nqgrFrvEbREF6S1LVcV+yz28tp
id5sCVjyf9qdjdn4ursF+X7fPZTVUpia3dl/twV5gJZ2/YO7K+1xpAaif6XFJxB0Y5fvrBaJGyRA
3EggNOrDvQTmWGVmViDgv2N3MkNwvLQr1ZldEId2Zzp5r16Vj3LbZemMsmB71/IOu/04kLNWlWw/
zoClW4+3D72YbcdcNyBtaVPwnnvR8pHbzv77tq1eS+g0KK60BbBMyXEApR1nzgoG+sVtOw4WS1fc
+BOLc10R7ZU+lQ22G1Ysx0HZ4q3PxymC6Jx1fsOanl9iUpSNsQhctOY2h2a5LEIrsBKlrlVLtPfd
0Ffc4LNM3NH9+3996Bv8dujLy3L0yZuXTZbOWjeAc/UAXNajZaYelba16UclWz9o1o7Fh5ImaexL
KQ0fO6UU+FqNYOp2bLt6kHKsRzGqUajWC90l0rRt760aRa30ONSSD13tXCdqza0XrRFOe1VcAjxI
4+DoeWRpvcfZA9BZXsZkeC1Z9HGWVpzED61kXLtubOXSlR9n8bOyOP5/kWXv1P5eNYD9L2Gu7zrX
1q3XsrZ24HXnbFcrP2jeAYAxmXmaabh8aUPnnfeQ9uWOG49rfz6EeW8wqIqyTROL+0PjxylS3sgX
Lhtqcny14MfyPapwaIA0mi9SFLKgWCVi2mUax09R0NQwTTqvtBwv7KRY59nYZUp6FqiE8541xLKv
W17d7fp8iFvjbn+tXnnzWbt5c3N7+eavsRJErAs7/e/sF78JfBrx6xe/3H74SvVKPNH0+JtvPn7v
8Sh6Po5a1F5yVksGunaq07UGA8b13vbWVpurf5aBra6fnrfXP+0qxFbZWrGvVM/6UCZlxasLf3F2
0f66UiDB2umvwaIn/mb3kxOpM1XQ2Olz9nTjq22hCPmoCmQeKw7Vp+t3HoX6L23orx+r7d82flsS
M/zexp/kuVk6tyexxt+uN7prXqsqyHI9FYepL64Gv/Xq2dbD9US/klW90/PsYt1VwYztD7ZQ9z9a
X4SZaVUUDlW9aS9iEfnnPr77fTO2X71nzqq6vximrZgPEEVVPfo2tCp/HR4Yz9sgT/jZNJqdPb3a
3FS8qrfP7/4ebA99+y/beuuZraK2kVrT3TeF1pe+HaofODPCSV2BUNwyWf1q9ZmAuu3WWXjNxff4
+oXyoHzRuz+1l0/8N5f91VWIoTZM5X2+1MgEajjV5qmzmdbkpns+3v38mx2j66q9HKrtFOSuSutU
ryxLJKRb5UWbEFbHqz6MGyXwznAB2aknBD5uVKrWehhqHx6rpeC+bq3kQz8671QbJ0+h2nG1B5Cz
w2hq0e5E0DBtOUZPp8j9ZOSxB1/FtdNKhG+48ZvQbcZBv72u4oVNYrqw6dWfnl1Uj7eqv3bIyTVc
O+KI+4/Melpj38qRJtl3H6rjp2rvYaxt39l6VFZK2XslBdwl2TIm2cPDJNmxbv/ZVPp1V0MnrxLR
c/9Q6b8iTfH6Q1DIUOfdmTjaYf5H5MrKQu7L/++BA4wRFSq/xE7MXWIHPg5IqqsFWFVbyVnNB9HW
vvWubwH6Vvk3mRAWTBf1MqyWjvnatnqs2QAtGw234Mf9S+zG9bl/7bSm466zk7HknGbxOjuZPCiT
yq+a7V1nJ9OyleG3ebsU0S7svXbyOQXlNCsvPheePY0x2PKAGZkp1QGDDUpSkyBaHcoTukfT21DQ
HHcZlDCdHLUYagnG1swyVQ+iN3VnQYHW0NlR5rgaICcz2fkfy83/WMH8D1gj1Es7/6O/fyp6Lbc3
/4uuzoqkqUGW3ZCyL9Z7K9ZxZ3stbM9dZkPK/tflOCo+08afC5ZuSGlvHmxDShV1b7uwfvA86Y3M
ziuzO7X60RnoutZz/e/bU1w/Gim16VvDnLK85W3vPQPVBxWV7l7c9pRgsXOSYHG68kk6M0tmg10V
zpykBd4wbQkcChQpXwsObITQmY0b82d0LGWlHoGL1dxm0SSDIrQCK1HqSmkIvk62pxQ3+CwTS+30
l03dHmg4zG/Q/C6v0Euc8z+MXHlZgBg4z58t7JWzlV70vRrB2W5Azxf4/MT5X+DSGcOLrZ0bjLG8
eITgRvTKANfKztTOHbXXote8063VBoZWK9DcwygMG5g0L3CWwBtni8flxOJsr03ZOklmgx6x7CEH
aJihcChQBDGOQSNsZuuDqS5C5nwT6qvsNhC1l72/+1ngJRgfxBj+7Trzx+7HZ5vNEBYP3r179zBu
fHgN9+WX721z0k+nlLToms88zdwwP38k1lGOmUy4oggXGxoua6XUtgitwEpUEBjDMrizR2I5qYAy
AhepbuCVQ7OaF6EVWIlQVzSMyQzu7JFYzij3ziJw0eqaLBq4IrQCK1HqAs/hzh6J5ZwUu+W4WHU5
5NCEEEVoBVai1JVSZ3Bnj8RyTozdUly0utnYVSCL0AqsRKmbLeluZo/EciDGbikuVl2ALJooQyuw
EqWucSqDO3sklpPu+0bgotXNxq4VpgitwEqEurJhgmVwZ4/EckGK3YjLi3Cx6grIojkoQiuwEqUu
aJnBnT11xQUpdstx0eqaLJopQyuwEqWuMCaDO7toyCUxdktxserKbOxKzovQCqxEqasMZHBnsxgu
ibFbiotWNxu7WrAitAIrUeoaEBnc+SxGEWO3FBerroIsmoYitAIrUeo6xjO481mMIsZuKS5aXZNF
E64IrcBKhLqqYVnc+SxGk2K3HBerroYcGmeyCK3ASpS6kO3v57MYTYrdiAtFuGh1TRZNmSK0AitR
6gqb6e/tfBZjiLFbiotV12RjVzJThFZgJUpdxXUGdz6LMcTYLcVFq2uyaEIWoRVYiVJX25yV81kM
bStAOS5WXZtFM4IVoRVYiVLXKpfBnc9iaC+zynHR6tosmoUitAIrUeo6xzO481mMI8ZuKS5WXZdB
0w3jtgitwEqEurrhWmRw57MYR4rdcly0ujaHBgBFaAVWotQVhmVwZ7MYYKTYLcdFqgssj+ZEEVqB
lSh1pcl5dTaLAdLFpAhctLrZ2FW8DK3ASpS6CszhJSt24UtWAo7mKmPfbLYEpAtQEbhYL/I8mtRF
aAVWorxoRS56ZrMl4MQ2YoUowkWra7NoVhahFViJUtcxdthGHFu4jZiGqUx/7mazMgBSGynHxXoR
8mjWFaEVWInwoslXlnSzWRkAqY2U46LVtVk0I4vQCqzEqWtVBhdKr89lchjGUfSj4f7ft0FKzgUz
yvrwr3BOjdKo0UlhnFFMa/5ir2AOSgQqCyuR+koQ2/vyDLGxK/K89NK8CpRDRbnmOsNwNjsGQexD
SnHRfrBZNAFFaAVWotQ1Moc7mx2DJLaHUlysus9B02VoBVai1HXKZHBns2OQxNgtxUWrm4ld2zDg
RWgFViLUtQ1XIoM7nx0rUuyW42LVVXk0q4rQCqxEqStMzsr57JhUgxmBi1bXZtGcLkIrsBKlrmI2
gzuftZKK/SFwserqPJosQyuwEqWudiyDO5+1amLsluKi1c3GrpGmCK3ASpS6LvMW3rL5bNIQY7cU
F6uuyaMJU4RWYCVCXddwxjK489mkIcVuOS5aXZtFk6oIrcBKlLpgXAZ39k0m0N7XluNi1bV5NKeL
0AqsRKkrTPHR7cwBvOTodvEpzBwTKaiFVv7zR7f3KulnFTKCqFB51S02V3XLWseFHG3tpBtrJYOo
/djaOhim2g40B9u92YrW96rramE8r6XuZd2qIIhVXnAYQAyc71fdevbT8NppLT+i6BaHkqJbHHJF
t/Z+m7drsRpPxKJbHMqrOoVnT2MMuegWB0rRrWCDBWoZgNMU3VrAPVaTa6LFPh9VdEuDUGrwQ+2U
97XTrK+9ULLmuh1Bd35kcshyNdQyHpHrfvHXjz9AF38VIWDkMsVfN/5JLL27OXu2Hqvw3171hIF7
zv0ItdB+rNsexhq0aWtoR9NJ2WqnxirAheXbX/Y+1g9CggmiwgB9LTgba6WYrj14Li33vRPq0CLe
mNxKiKp+8oF+F8T646O7P8URvzcwtqoTenT5+Yfg+1cmxvcJqGnWAoSQ069A+JCFapxdpnF8vflt
F1S7glyff/zBqtKudVIqqId26GvjhK+97lk96lGNrXTMuPaQlW44UCvYpgP+epwG/F1ns+szWHXR
9o8ZW3G9Eu+vGF+FP+v3qr7dbNZ3r4A27Y1/fBmlvLr5KURycPK4fvL4hx8r/+vNpg3136e5S6h8
/tv14x82Hhvjj2K7qKe6CEpKMXKAupejqwc72toOsq+VUtJ448I/8OiuTWw/wjo7DlzZuuXjWEsH
Q90rkAFr6EZuRzEM+sesxILc0+BnFusxDF0zpTzDM9kynrvf5G2hlsw+fjaxHu+HoFx9yPxzpzGC
MIsI0pLKdgb+inyvwlIziMVcouxsSfmigvq7oNpWmB+qYDmrfn/llfqtq8u43+PPHLaV8D2lnP0c
ZhhURtG2wUQnVdvd2fH21JjvrkaKEt1cbe/VehlIHl7+dXv90z9u/3oj+c5XgwlTRnfP4dH2wYD+
w9yz1Y+v8mxgWJNbEpp9BSoc5TZpBC52buB4Ds0ZXoRWYGX50owwDRfUMuRbXu3w8+31NAuOV0ps
c/yqfdKuQ3dYTR+9fVpvf5ynARnzZ9/EClK1IQQu2sk6i6ZsEVqBlSgng6AOF0mms5t/+Ivb8zaM
dllMSR5it8pv1hPkhb+42vwWthSdr/vfsoDGfL/EAPLu9h6aV7/97N03KuWYei2HJnhuKXX25bZk
pF4p4poiXGTABl5ZNKOK0AqsRAWs0pDBnX25LbmjqVuKi1WXuzzaQncI7SL265AkvVE59ZyQ1doV
GVcgKsKZdq+iG60nSHmR9touyAsbDJBloxlbhE2BSijvWXLN0jte4berEGxP29vpls1tMroL6xDV
OXBHvp6g6ArbmanpITM3v2iHvrJq/wrSOUZxi6URoBmMbhi7o6/Jyl1ce4wYRv8PxNjrYfdzMtwK
U5XTBxR/CfX5+AOkbau4dbu9udnvTR5V2/Rx99f/kvVNwc27R9mD75TSK3ePCN79K3dVKW0h+R9/
3Pw0XXQx9YY/Vs8uwtxj9P1v/bkPg9cqjKnhTXe8P2RfdF6tb/zF9Wovvw+rv19P3/TqdCfLGxXX
Ut85ohKP/vkAh+0D/P6BsAAArxGIx449IR/69jlaeUDIAmZ7llGOXoHpa9uOtmZy7OtBwFhL0wo1
iM4z0KvqyabtuiDc5OD97nZRfgdx9XY/3YN6H1d46v+IK5tjI0Fk2aCh9lv1nCyhVcMoRWeNMAMD
gmemAZDqEcVcoUe2Lf20Hglslmow9830OPOPd0thgyHyO3WDcTk2mrNTN5hUlqnB+J6JflDOq9ET
PDM1GKpHjNGFHlm4wTgUm2OGRjgcGsPQxrJDm1XU7WhH3YYfkTVfZO1htxK4Far6feox/6zuMl11
AC1ZAzD7pocCGSNdKtlaANWBTl62hN1o4b1dE/eXPOuCZ6+u71+6vARc9+Nt98IlCbo3MsNT/KaA
vMujs3Oy/LPTO5dSs48cQQ5RUYDHJnPFIwiRH3IEKaG+318By7Nx35+m8XL4Ozp1DlqAnh+8SqzM
D155j4Rm4jptO2mk74aeEBQls73ZYBB6Vn6K7MFaC1wJ7SVYULQO7IG5lrw0PpguTN+0Uzx5aZx/
dr4DUyAL22tmPXAONQdomDl1uzgkEtQPQH0/DqPtPTtduyjTwMrSvHTZ9Q5gKDbHTOp46aRO8sbu
n+nIN7n3o6PuGtXt02H7VvXJrb++qS78zWbdX++1yeDm3hnQWrd80Ns9iWfTw2fbhydFe79+5oeA
G96Xtufbb/Vx587F7ovbJxF0fYTYYZHv+uyZ38RjGNH26XLZVfVNd3t5czu9Z5ENVJ98/dWj6nb7
K2h0I6CR+vVhcyFEA6zm7LqO25zibq9H1bCOk9PwPmD6jkfVRfvz1WZVcRb+uL6Mf2Tyx0fV02dn
w2Ydoffx757W9w9PH+vjF0LYhHS7Pg/CKcmMrX6838R3vXPboc8CUYDT+yzzcobyvvLhaKPfcLkc
--000e0cd3406a75c8a504c8337ccb
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--000e0cd3406a75c8a504c8337ccb--


From xen-devel-bounces@lists.xen.org Sun Aug 26 23:23:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Aug 2012 23:23:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T5mAM-0008Gr-Hr; Sun, 26 Aug 2012 23:22:50 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <halcyon1981@gmail.com>) id 1T5mAK-0008Gm-AZ
	for xen-devel@lists.xen.org; Sun, 26 Aug 2012 23:22:49 +0000
Received: from [85.158.143.99:61320] by server-2.bemta-4.messagelabs.com id
	80/34-21239-7CFAA305; Sun, 26 Aug 2012 23:22:47 +0000
X-Env-Sender: halcyon1981@gmail.com
X-Msg-Ref: server-11.tower-216.messagelabs.com!1346023364!20515655!1
X-Originating-IP: [74.125.82.173]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8807 invoked from network); 26 Aug 2012 23:22:44 -0000
Received: from mail-we0-f173.google.com (HELO mail-we0-f173.google.com)
	(74.125.82.173)
	by server-11.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Aug 2012 23:22:44 -0000
Received: by weyz53 with SMTP id z53so2409799wey.32
	for <xen-devel@lists.xen.org>; Sun, 26 Aug 2012 16:22:44 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=I/4GBYggagcVeenbkKG/Kd9p05Oysa1v4KQ7oHm6YZ8=;
	b=ufrv6OD8dKMOBjJms35Y+q94mklLL34VlazcqxMZBajiEk+JLgeenH5eRWZv7OmcuX
	W27sr2QDfzdSFExgzWbT5EVr7GPW0ipuW5CiyZDNlkMmT/en0ETFNM77dTb4v+CRlXB3
	V9pHBUF4+p/8cq+TFvTIehDfOgdV5+K/ajgYiAPhdm/ltZ1QFB6kquW+Ki/dc2HMTrwE
	XdEhYSzdCpzuHNiFHjsInR+qPlhKK5me3N8dQJCpt3PgZ/mrloXS7L4wd4v3YqtmF/13
	erkru8SK3UOAmW7BUTHwwktImdy76nIhby2EKTHBfxToQgGPbUUwCfzl/Oy0IBvUd/ob
	N7FA==
MIME-Version: 1.0
Received: by 10.216.65.202 with SMTP id f52mr5376662wed.206.1346023363823;
	Sun, 26 Aug 2012 16:22:43 -0700 (PDT)
Received: by 10.223.204.2 with HTTP; Sun, 26 Aug 2012 16:22:43 -0700 (PDT)
Date: Sun, 26 Aug 2012 16:22:43 -0700
Message-ID: <CANKx4w8=5nubN6C8CZij3Lz6_18n6ROaczvTuL+Ap8jug357dg@mail.gmail.com>
From: David Erickson <halcyon1981@gmail.com>
To: xen-devel@lists.xen.org
Content-Type: multipart/mixed; boundary=000e0cd3406a75c8a504c8337ccb
Subject: [Xen-devel] Lockup in netback - Xen 4.1.2 (XS 6.0.2 hotfix 7)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--000e0cd3406a75c8a504c8337ccb
Content-Type: text/plain; charset=ISO-8859-1

Hi all-
I am seeing an intermittent lockup of my machine's networking as soon
as I apply a network load.  On a pool of 80, the first one will
generally lock up within 15-20 minutes of beginning the workload.  The
symptom is a long list of the following in /var/log/messages:

Aug 16 18:32:49 localhost kernel: netback[1]: TXP193 is DMA mapped
Aug 16 18:32:49 localhost kernel: netback[1]: TXP211 is DMA mapped
Aug 16 18:32:49 localhost kernel: netback[1]: TXP232 is DMA mapped
Aug 16 18:32:49 localhost kernel: netback[1]: TXP157 is DMA mapped
Aug 16 18:32:49 localhost kernel: netback[0]: TXP44 is DMA mapped

This seems to clog up the networking pipeline, which leads to a stall
in my NIC driver:

Aug 16 18:32:58 localhost kernel: ------------[ cut here ]------------
Aug 16 18:32:58 localhost kernel: WARNING: at
net/sched/sch_generic.c:261 dev_watchdog+0x241/0x250()
Aug 16 18:32:58 localhost kernel: Hardware name: C51G,MCP51
Aug 16 18:32:58 localhost kernel: NETDEV WATCHDOG: eth0 (tg3):
transmit queue 0 timed out
Aug 16 18:32:58 localhost kernel: Modules linked in: nfs nfs_acl
auth_rpcgss sch_htb lockd sunrpc 8021q openvswitch ipt_REJECT
nf_conntrack_ipv4 nf_defrag_ipv4 xt_state nf_conntrack xt_tcpudp
iptable_filter ip_tables x_tables binfmt_misc nls_utf8 isofs video
output sbs sbshc fan container battery ac parport_pc lp parport nvram
thermal rtc_cmos processor evdev sg tg3 button thermal_sys rtc_core
sata_sil24 rtc_lib serio_raw tpm_tis tpm tpm_bios i2c_nforce2 pcspkr
i2c_core ide_generic dm_snapshot dm_zero dm_mirror dm_region_hash
dm_log dm_mod sata_nv pata_acpi ata_generic libata sd_mod scsi_mod
ext3 jbd uhci_hcd ohci_hcd ehci_hcd usbcore fbcon font tileblit
bitblit softcursor
Aug 16 18:32:58 localhost kernel: Pid: 0, comm: swapper Not tainted
2.6.32.12-0.7.1.xs6.0.2.553.170674xen #1
Aug 16 18:32:58 localhost kernel: Call Trace:
Aug 16 18:32:58 localhost kernel:  [<c031a1a1>] ? dev_watchdog+0x241/0x250
Aug 16 18:32:58 localhost kernel:  [<c031a1a1>] ? dev_watchdog+0x241/0x250
Aug 16 18:32:58 localhost kernel:  [<c012e0bc>] warn_slowpath_common+0x7c/0xa0
Aug 16 18:32:58 localhost kernel:  [<c031a1a1>] ? dev_watchdog+0x241/0x250
Aug 16 18:32:58 localhost kernel:  [<c012e126>] warn_slowpath_fmt+0x26/0x30
Aug 16 18:32:58 localhost kernel:  [<c031a1a1>] dev_watchdog+0x241/0x250
Aug 16 18:32:58 localhost kernel:  [<c02188f6>] ?
blk_rq_timed_out_timer+0xe6/0x110
Aug 16 18:32:58 localhost kernel:  [<c0137fe1>] run_timer_softirq+0x151/0x200
Aug 16 18:32:58 localhost kernel:  [<c0319f60>] ? dev_watchdog+0x0/0x250
Aug 16 18:32:58 localhost kernel:  [<c013359a>] __do_softirq+0xba/0x180
Aug 16 18:32:58 localhost kernel:  [<c015b657>] ? handle_IRQ_event+0x37/0x100
Aug 16 18:32:58 localhost kernel:  [<c015e774>] ? move_native_irq+0x14/0x50
Aug 16 18:32:58 localhost kernel:  [<c01336d5>] do_softirq+0x75/0x80
Aug 16 18:32:58 localhost kernel:  [<c01339bb>] irq_exit+0x2b/0x40
Aug 16 18:32:58 localhost kernel:  [<c029c7b7>] evtchn_do_upcall+0x1e7/0x330
Aug 16 18:32:58 localhost kernel:  [<c010470f>] hypervisor_callback+0x43/0x4b
Aug 16 18:32:58 localhost kernel:  [<c0107095>] ? xen_safe_halt+0xb5/0x150
Aug 16 18:32:58 localhost kernel:  [<c010adae>] xen_idle+0x1e/0x50
Aug 16 18:32:58 localhost kernel:  [<c0102a7b>] cpu_idle+0x3b/0x60
Aug 16 18:32:58 localhost kernel:  [<c0373c43>] rest_init+0x53/0x60
Aug 16 18:32:58 localhost kernel:  [<c04f5cea>] start_kernel+0x29a/0x340
Aug 16 18:32:58 localhost kernel:  [<c04f55f0>] ? unknown_bootoption+0x0/0x1f0
Aug 16 18:32:58 localhost kernel:  [<c04f507c>] i386_start_kernel+0x7c/0x90
Aug 16 18:32:58 localhost kernel: ---[ end trace 76ea5a31a8fc2f33 ]---

and after the NIC driver fails, netback un-stalls itself:

Aug 16 18:33:00 localhost kernel: tg3 0000:01:00.0: tg3_stop_block
timed out, ofs=1400 enable_bit=2
Aug 16 18:33:00 localhost kernel: pci 0000:00:02.0: eth0: Link is down
Aug 16 18:33:00 localhost kernel: netback[1]: DMA mapped TXP 203 released
Aug 16 18:33:00 localhost kernel: netback[1]: DMA mapped TXP 212 released
Aug 16 18:33:00 localhost kernel: netback[2]: DMA mapped TXP 94 released
Aug 16 18:33:00 localhost kernel: netback[1]: DMA mapped TXP 159 released

To get packets moving again I have to use a serial console to the
host, rmmod the tg3 driver, modprobe it again, ifconfig the interface
up, and restart OVS.
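
For reference, that manual recovery sequence can be sketched as a
small script.  The interface name (eth0) and the OVS init script path
are assumptions and may differ on other setups; it defaults to a
dry-run that only prints the commands, and setting DRY_RUN=0 executes
them for real (as root):

```shell
#!/bin/sh
# Sketch of the manual un-wedge sequence described above.
# ASSUMPTIONS: the interface name (eth0) and the OVS init script
# path (/etc/init.d/openvswitch) may differ on your system.
DRY_RUN=${DRY_RUN:-1}   # default: only print the commands

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "+ $*"
    else
        "$@"            # requires root when actually executed
    fi
}

recover_nic() {
    iface=$1
    run rmmod tg3                         # unload the wedged NIC driver
    run modprobe tg3                      # reload it
    run ifconfig "$iface" up              # bring the interface back up
    run /etc/init.d/openvswitch restart   # restart OVS
}

recover_nic eth0
```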

I've tried a variety of things to debug the problem:
-Turned off all hardware acceleration on the NIC via ethtool
-Tried different OVS versions
-Used a single dom0 vcpu
-Turned off irqbalance and MSI
-Tried the latest stable kernel (3.5.3) in my VMs
-Tried a newer tg3 driver from the Citrix crew
(http://forums.citrix.com/thread.jspa?threadID=311744)
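
The first item above (disabling hardware acceleration) can be sketched
as below; the feature names are the common ethtool offload flags, and
which ones exist depends on the driver, so this is illustrative only.
It prints the commands rather than running them (drop the echo and run
as root to apply):

```shell
#!/bin/sh
# Illustrative only: print the ethtool commands that would disable
# common NIC offloads.  Feature availability varies by driver.
print_offload_cmds() {
    for feat in tso gso gro sg rx tx; do
        # 'echo' prints instead of executing; remove it to really run
        echo ethtool -K "$1" "$feat" off
    done
}

print_offload_cmds "${IFACE:-eth0}"   # assumed interface name
```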

But to no avail.  I don't ever see the "is DMA mapped" messages under
normal operation, so whatever is causing dom0 to believe that the
memory in the netback/front rings is DMA mapped seems to be the
problem.  If anyone has suggestions on how to approach or solve this,
I am open to ideas; I've spent a couple of weeks on and off on it with
no resolution.  I'm attaching a tar with all the log messages from the
system in case they help.
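
For anyone digging through the attached logs, a quick (hypothetical)
helper to tally the "is DMA mapped" messages per netback thread, based
on the message format shown in the excerpts above:

```shell
#!/bin/sh
# Count "is DMA mapped" messages per netback thread in a log file.
# The pattern matches the /var/log/messages format quoted above.
tally_dma_mapped() {
    grep -o 'netback\[[0-9]*\]: TXP[0-9]* is DMA mapped' "$1" \
        | sed 's/:.*//' \
        | sort | uniq -c
}
```

e.g. `tally_dma_mapped /var/log/messages` prints one count per
netback[N] instance.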

Thanks in advance,
David

--000e0cd3406a75c8a504c8337ccb
Content-Type: application/x-gzip; name="crash_newdriver-1_logs.tgz"
Content-Disposition: attachment; filename="crash_newdriver-1_logs.tgz"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_h6crjjl30

dtCzJVB5O+jbpxTl3VOQ/voUpJ9SpMp2yPOdIS92hrzanY/d3UF/3jn/3lNY1n8K0uApSO+fgvTh
KUgfn4L06SlIseZglUehnTwNrfU0tNOnobVjNEUhvN10CAyjYuC3TyvQu6eh/ZSgFZfs/BGwF4+A
vXoMe7qPAf75EaXopy26zkF4mMjN32Txqh5fNhrRKrvnRgH15dLdxDCwp5LR/pl4XS71VPLlJdBP
Fpb6y+uNJsWYCY8k+bNKc3eOgrpdjsoXWNdXAQGxtAq4itAU61OoSbxgFWHT7yqJhbtEROxAZGJe
ATsUdRZeHKy5kzXVGOMPa3mLqQXLCGtddrREWIBrRtaY9UR8O7bmnogfu00kfvt0KnZKxX4ylfFI
UOG3nAotlAPfdnelMF4yb3egoGYo1FMDubELBY0o0KQkIUFrFPSrZBEz0yT+eTBWpTgaOH5pX6GQ
T94lvHEkozvMurKT2aUvNbDUgFc24DAs80YzhGbStTfCIfIuAdw47d2I1e21z9qD1ruCPLRdyr8+
kcyUf2ViuAacrUR2RpjFyqlEPJVKJ4TpiywZRd6ldvoazm6tA7i4YrsB7tggvPmw65LPkqKPeggX
Gunlqr7GpwLowkExckvApSG+zreZfxRtw44Rt8ZxBSqeeExOBW47eC+nh+S13cfbzYSKa7Uy440D
YsiHTssIq1PxLeAKo5Fxy6Q7i65sQs92tCyiuqGYaVfYipDmtN4bCnNS+boIYWeXRjYi60vIejE3
l8Eru3CzyGWoljpdsQk+N4wlcu0mYSiVwyiuJQ5DGSBI3w4IxwlubGuXSJg6aMAdXqlVl4mM0Pt2
iINJw2hSXISOJ3hB7EKkCu8tlSwqBc9H0BT5YpjxlgykQqatjRkN6kBnNEolXfbdYPlbMY8FJ3Mw
/vpSumxm2bQZBDZmtEGEuuvpcatcGvgPkWm4cOFMveXhUOMAgQi0J8QIuE0ID/w4gFe8dIaoVYCL
l9j7lQS81JRbOjcK9YeDvSmYdIcwPT70nl5dUGD4ZAJUSh/Si+H51dsmtRJUheXNsHcGYnlHKuaG
PuZu+yuQ3DM6Ixf7wp8AiYfHbIagoxGWgT6c9ZtU61v2y8ILUSOLrsNquVJWS6e43xJGwDfiR7JD
gbnpvvT90iyYzFwnMUtICxIbNRz6ZkwnvDiACzkHgtDywba9W/zul/hGD4rWoW/ltdB2KCzCkEzE
OwaT/cjOT1z3clmPTH22NzP+BWpqRd8vYRk1QKlpBy0FJXnLVvpqokEbBZ25EaKtcoEs7BCkeLhc
AHP8S5JGRYliN/ZL46k3nz9ALN9zgSQeTHxjRKHFKlWeR8MIoKhqCQrKHaekB51ETu/SyLea0Rbh
eKozRTXX3lFQTLIDNA5gCRbc0T9eTKcPPLDDAGGLpJnH67XvbIodEQfrRFZhClYujaa3oTEfigQe
n91k4l0MHLeDFsOSX34Vkt4IuCYHxIl0Yx+VXJ3cJUW3o1Prklnee9ehoFJ2ASY7UhdjK39sS/hw
Y8zH7mWXy1m3f6QucTCKGqAT9EaUB/ccRLu6pg/lEsxnGiycuoxYy59OTjkAD364r8qH+NEZdV2l
NENdm4L6DOduxgdR8gBKz+WNC5YxAivWe7XSkvigH97S58OCRMx5gAhZZzOqMKKg7lGRbNvQVkDs
nD0FWVOEfd4hCqpKATe69NeFK6kVdNPV+IlEbS1civFdjZprUvBYrLwWwciECmkuS7prf0k/lBl3
TgCOg93BbxajPOC4jinhUvwZ4Cb/djUMR/aHtnvDj1f4Az9JdeVoH/p09T475eiF6yrrZmGS14qT
O7UNT5Ztw/NC23Ajta0GYh4K1Wq9rjmwxEXiFDf9U+Yeor8HOCfTtniCK452UvKIWMlXpvNhxDhB
apKHotKckvpKQBPz1I+ag837FITT8biJlwYzbIZPGl8EjqM+gidKGRiQEpIUAK0EPNF2JfPGw/ZU
4cvlT9Q5IX8MXtgmkAVdvBB9YzVJFboiMRdK3oooKtj++oerue1uEMOrncXw5w1i6GVZkDtF+blQ
DDdS2yqGeShXG8RwM+zjxFDNI5IVEvpLm1x9TJOr+U2u5jb5YqXJMQyRhjammRbvrCm5Ejdlgvh8
sZVoTQSQwhRLdL0GGebjd+GycTcjK/DUzOm4EGxkWY4/yZWWbiwtwFwXl20ECwRmh0LE6wCFi9I7
1XCcW8Orp9Vw/IQajnNqGH0ddo0VaXNXYojnarFN5OJV8P7Hy9bPPMo6BntEswZm4Mgs+d4tf1aW
nnGB1u1j/yht9mfvYeUeKZrGzJnFRKAXg5ElbhEAO7NmBn8wZWFTyURB3UbBSilYKxTqnEJa+eeS
jSy53XhZKEDEO3WNl9oqL7VtnBilnBgtc6KxzEt9GwUjpWCsUIh5uVyGKXVJrOC52AYiTprD4k8/
0nj0wYb9qPVT6MWcdBl7O5oHKRIOW96AVqbVLCBKNRAYaDr2yqryZf8QBI7bbw9Xyp8i1OqKrtIK
Cq1wYgIXHAr7mCHt/OQYmzHQPjQZm4c3MGSPNDXJLR0oxC7NhDq1AxEQ/7FTPi+QjvmUidEfVY/+
lsvJn3lZWfdn2kp6fEkn6KFjWympz4FlXKdlpiB8iWbLNPnnWxxG3P2EMGmQZW9PjrC/DOd2n+xv
oPKRn07dFadYkI3ljccbwC4ocKZvu+JzuUwz8Iv/8wgKE64Zq9xDDNSGlXlpeXbg/iEx6dhp94qO
YTh7f1xioNWkH4V+VPrRspnRwXtE0+Lze9KTt7zfP0rS9N2hxRxC4lv3ffKhQOqbdAxRpSxL/Ekc
FCFR2JSyn4zNAW0ksWaSRYrhLxCUGyMsY3ZXcrCfMFqVaDJMxhA9H0WSY05pD2kX2Bv5Xhv/Lp6N
zw3apoHRjYvaXMw8+exwKg4+xJebGaauwtooRVNT/iadLSFXmZaZcOWLA8k0znNpFYHj8kQxWVXG
skS/tUMqShVQolfr+eSViLyyjbwSka9x8jVOvpaSr5Ru/4VVVBzeYSWbOsuMn9TnhCK7O2MKuMgr
EZRw5IAmjYMmzaJdHmnpTO3ITIJ39kZsZcR7WrDi2haFdPGlBdLqdFChgSkrLV8cmVNYk0c8Tfzi
0BjVZPgnEcfOPCzuqthVcgIhMSyDPp9BleHR6imoUgCK/pMeybbqYkvn6+XigKZ1wyaHbO7odbbr
6JVDka/7kff8S7TeScIZtfpGxEJ3BqlOnS2dWCdeVVZf4a6SGZZkWWzcmOHIy7+kzgEx5UyCT5eH
tupTaFRTGpEMJccTh6HnbpYi0ZRrkJ+7H3tn1yUcTlw2m3cazkpTPezSpf1q2FB7yM8u2y8Vhq+s
C4FCFGMPzbbWb6etr2RaP0tqh9Zbgt5RSDIuPFqow+/AmXguzt+Yo/u73l782Yh92gHGdJX28O8t
fVRlH5+rb6W73zH1qja1ahNqyG405fxsjHgY6b77G42ByAMdQZEhCfyfTN+JlwbxNy/ofLqPGM75
eZ+flWssGbTbtDc/N4PeJxNfmQgAy9neupn02m9pz/ZFBzJEN8f9M7oM+lemMQdcLinYVkP/i4lz
uz/Xqgr3/F/TSz5YfNZUOrf4enmoTAdJ6kLJIngwkUXYiMz70U6DoBgwq7vCXnpLe3Jb93z/Pm91
2iOvbE9WtydrJT80h+bM480s15sMzyyz8JcAGQHdy4RF7T3FJ9nio7Zoj/uDbWCL+4N2ewhpj4+J
de8w1pboWN47D2LuTOPT9xLXgVLWSp0BvDJQ8LTxtHOVeoBK03kzAoz9CqNFGA+H6aLpzGrSOYMe
R+RdqnfcOcXmPd94CMo8uW/SEjpalPaUkXjTfUQmAjFAwV+4rFzmz3RN3p1eXbbLfHBkZ31Ul5tV
h2IEpaOxolG21OlfsUa1KtPxwjY/rjpoim9H0YZu9ldv6tjQOaQgmLYZutfrdIdKo6Ecl7CZHJUR
XkDsqQ5wMgHlfXRn+Ef0wH9gPysY07tYKKdNqE1JEXsPQxyxUMEBBZQ3bnHuZ402b5ec+ZB7jsi2
be0zvt8FP1X6Pg0Ge1RIbBAd2Gg5dzyEXeBSdPjtkpunUqa+LfboL8xbO4Sxz92cNADslzysu92J
zaN0fBi5Pfr8Kd1Ryu0HTNuhn3GopXwIOsgbrTiB95Lra4ZRrqI3K42sLuSDBmqE88vww4+6IMfi
Ef5Jn0p1dHXlZ/bh/PiS9YVZTGKGTb4oyFvfhpyy1xN+Hf0FYkEbhSE3N2RN/kiHYtBOwyCSkdED
js6+c2Adl9GOfDXptYXTB2ZLJuiP3KgVYWojX8YwkvgVZvgY2AL+CG4pRWCkhHKBchWsGKgc6nE0
PvEThy5GczCefEfYOk1+iFzkM4wZsdslmqFwrg0+UX9IHnufSr1ua8U5srDQw+PvNUYmdDkDFZq7
Ql2e9e9g7bMRJCr2a2Uxv8SHz/3TDecWmucPjG/QFbqZmUsnO7M9YbbHU6DoGF9M1BYBRM9q8o3t
wIky0+qld4OTJlys8DYuZrQyzo0MppydnbWJOSMHRjLtTXfIl+qrv0SfUyrnIdLqOCEGdMrAo1HV
IlQ+FWUhZnPR1FqiJX2+cfwI/zg3TP4ZRaVaq9VUpUq+PmJwVHcpqXsjSeFeP4miOOrJu9QTBisG
56oOwW7weZ+iPaW7kbUyLvBSrXkZZZVoRc7D+zsq8Qha2IHiSHYfi/f8V+G/KnvNrxX24yOqqqqN
ulJNEEaFCAlvjoBiTI/gycdR4kfKkajC0Z0zPpJXXZ6PAY0YgSSlnN+7v5sEbRSHJ0rDZmooyQZy
jWJyW5lM9XG5NZA0tbl7U+eUVNefXPFnkO1HlH+VNeruoqruJqrqi6h+haiqhaJq7d7UeaL69Io/
g6g+ovyrrNF2F1VtN1HVXkT1K0RVKxRVe/emzilp5btq1UeUf5U1+u6iqu8mqvqLqH6FqOp5ovpb
M9uxyEX1xkJUE/OsrtLQCAOLo9G5xesAqqIUAGjqdgClUssBkAWArm9Phz9wO0ABfVUvAFDqtYI6
VgqYUM/lYlzFAh5p9e0AjQImV+WCDORaAUBNK6hiQQ6qqhXkoDaKiqAUZVEvoKDXiigUpBdyqbId
oLAzyAVsLGRBUQn1ohJo1QIApSiLRgGFRlFvKWhGXStiYlEV9EpRFYrqWG8UlLFQFIuYUMDDAo1R
2IpVtYiJlSJRk4skqaCMlYIi1PSiVqoXEGgUFKBaUMUiHmCfTFFnaRRkoRexuUgzV4oUc5EkqEUA
DbWoiIVGQpGwVRpFWRSxsagIagGfq4XtUFAEvaAK9UIu6gUZVAvS5SJZ1ApYUFCAWhF+kZVUxAGl
qLcVENBrRRqpKIdGUSMW5VApaoQiC6R47CqqYxGXC5nwH/auvalxG4j33/IpNNN/7iaYWrL8or12
IEBLy6tA7/oYJuNXIG1iU8e5cv303ZXtOCEEbXLmrg9oL5bt1U+7q9VqZUm2retbLF3foivB0jY2
oQnJbUtD4GoQpKcZFdimhkDHgu88ft/T5BcaCRwNg1ANjxNYtqYAT1OA5WoK0GhAWFIDIHUcaESE
rkkjo+6+pgBbx6GnsxJNJbi6AjT5PY0AUpff09zXDZ5NX0PAdQA6Al0B0tUZiampI62MUqNkoWPR
12hZc19o7jtSk18nobR1OtTZma4EoXMWWm+kU7KtRTB1CFJjKJoSbJ0STEdXD7rGoHNo0tLJ6OgI
dKbAdQiOTg3ars3VtVipszZfV4Ktc+w6BGnqeBA6a9DVlbZ/1LZrW8eDNk7S+jbX0TGp7cR1gYqG
BU/oCnB1Jm3qIhmdooXUKdrS8cB1BLoihK7VaPRka/WoswVfJ8PyIoizBULbDfi66QIdgKMxRy0H
rs5YdPboODol6LyXzph08YLeLehqWuiMSdvhc11QpA09fUdXhE6PnqmTQoegGyRJHY+mTk+uTg26
+FcnJLc0JSz1jkI3uyR0D5uFLnwUmsdzYuo9NQSmpgSH62TQsABBtk5NGgRfIwNEhxoCbU05Gi24
nk4Lj9+3hCb/QpBuzPz9yqJJwWCzRsKuZq9vvNk5Pzk8+Qa3fDBA/Fy9eAd/6214sAdHOOo9m70/
8SuacXbdMe/AjX0Ov7b54uXGt0Eeq6+bpcEIP3xk82821Yug8N0se/uv2ZsdeE3dHrwqqVws/QKW
gL/cLtcJjwYFTMgnk4SZasdMzLJJsXGsFg6P1YYK9XWebZb2x/ivF0RDXMd/08tvo+vxmCGnN0XI
cOFIzMaTFK4zWDTO/2AzC9fZ4Lbone9/t9+9ZLOr4HuD27cSr8RJPw+uy9O7ooeviEnmKPEqrHiG
BdSApZbc96r19dMl+OyuToSDtD8qerhWhKXDcW9S9LH2M5ABP9iboZC3UB/jcIz/biLWD1K1ojsY
gNLxc6WA/I4FEe5vxHXQPZBqeFuflTsxcO8gfva02dTRfBEseQsVxsbXDHTNQrWvqibvwQbAMkuW
JzO7x9S14SBk6jU9vTz4kxW3o14xGONRpcMBFDIQUS/tl7v4bqPx7e85XKrQZrZvsnjUG6fB7fgm
KzCN3xaHIyglz7McU7j8OEt7+PorPIW9uup+FrN6y3fzJgBITIGrlzqM45I2Gg8wgfskLPZbGLP6
RRIsqxNJnajehcP6cEhZHxTOcBtLCIvPodIKdcS9HdEkBy1unA3ibWZuqvcO4Qe01DZXdgICQUWp
DZViy9myxBYXhgk7H/jW3djZMrfElm1bWzAYc1x5l+C7Mja6uCX1Eiwp2d5gv34ZmRYP4L+vrtjX
S9vWioRcJGYYASGut++NYZMAbsnolS9NAnI3AupgDVToQxZQwb6R1gFS6x6kBlBwz+s7qmRYTNTL
/+ipht+DNqFSOWRJEJfzKgc4336CwPkkLUl6WEmwbwRIua3QzSkTft8xH5LLnJPKsmw/ALJeL85m
0MIAC/ZqKjt0bBfBqpfD4UfxemqdFNBaLtKaU9rEdaWiHWVvk1753bVexaQE0pmyndgGyrmSXRtI
vIbED0MggXu95G6gdB0CgayV6EduiJwlb0HAFIWY3EZgY1hYgoxZVo1lStfsA+nNu/pTgD2kxM4D
qKWFsGFN65q+rYTAd0KPg34C7XNYoGaAP1D2FDSIgwQIkWwAmlHlAkVDIAIXJQCfWRNYKIFTV5Rr
RdICAvzqp3qFGVDY1gyF7NtRAnVUbnbulW/PQkX4AconGzK7X9b4JP09he13Pfxee6a+DlrVO+83
xKYboWItWLM2D1w2EN/cUP1lksaswPbKXCcJ7ACs2+tHom9Zqgddsjmn6daa7gw3+1e7Q5fmaj72
jWlc3MhlIjfVdTAxaToqLZBC8M2G3tNCcnPmNb8A2RS1LK2FFOa2HsbFtOfLAJ6lxFpIS6UTfh9G
eg/CczPUQspHuYxADBuAMO1IOKEIbldp37Q901JZzQgmW2p4z42BaLN6mbZnmpEW0pmmJazb2LyX
lgJbUpnG3a2k6nHrNOTti0bwYEYJ9mo17i0IPqdXyS2oQmVcVmR6MPQSWki/THNIOaavYKzHDCrx
tZDBkhoX/GHI0NVCRhRTT1bSZVynOfzfwPTr6oEKb+AtJ+JcC5mUbsPkqv200cZ55YmsiOOZBtIB
M9VDcoIug9W4tAiQZZo7ygNq7ZKXbdwr66Tfii6dyqsrXVqVe+CJXxoOEljOtCikkVpI9yHBXY83
tlhXm4jMCBplooUs27jVh9q02ukohEOuHmobF96KbVxoW4/w24dc1RMRIMP2IaP2IeP2IZP2Ifut
Q1pweUkfs27fY/H2IUX7kFb7kLJ9SLt9SKd9SLd9SK99SL99yKB9yLB9yKh9yLh9yKR9yH7rkLL2
lwkM8UyvzCrnYPhc2tNGG3LWX3IHs1pOE2YtXtfHRFK0D0kJWcNm1Gv3tSGrlATIKu1ZHPoqbXcm
G385E/5Hy+ApgbWc85dTAYUjZof1YpX4UrpkwSUVsvSXUjRZ3X75pyCdkmr2ug7SNhcEr3W2bmBt
14PnRCs4FdIpuezPCmviK/n4gzER7+ufbTh8DnIBxo2ULlficrZBylYE9xYitynHa0Py9iHvRW4E
eC2ktZTLhTTR1D3wRASYlSDt9iEdMiRZl02YZTlC6O0y0I50vabTJTyBIXEZzZl6sATGatL6jiKi
PNSp015A4rLUJc48xaHptiJ4UkNalrcMxqsVAoFEoH2uHs1WT6iqx66C+DW55Hjnkc5BKS8Jo5Ug
6+oJXfx/mhVg1uVSznHptFA9vAlgypiSkNZC2u1DOu1Duu1Dzg347DaqxzPno2DCUEALSQhZxUxa
/3iZVx2FC4GJE9oPQkqrvI7cUnTpVTUuKM6NJji6YEIAuAKkMJ8AsvHqoAW+udAPuT7ETavUuOBT
LuM+ZK6y+k7UjHXg1FoJkrcPKdqHtNqHlO1D2u1DOu1Duu1Deu1D+u1DBu1Dhu1DRu1Dxu1DJu1D
9luHFJW/5KFrhV4rXl3MxkSB3waknAsNWuFSLj6qxz69TC92xg4FctlzosRbm8vK1D0leBuhgfBa
D6xFtCpk4mkhSQM+byUuRQPZzhIly2xdcGtWl9ayMIvPcC/1kHxJrG7Z6woeiXlIwlBAC0l5Yt00
VI8CuST8D9ev8chuu41b+CCCALMSZDQ74Gvj2YY09eE/xPNBlfYcaQktZG2XEnRgVTAi9KYe3lOT
DpsVTWA6esjaLiHusJb5S76a4GCXC1k9HnC+vi41w1IO+ovEapD245Cg18DkU12G+pkUOV0zKLUN
ksql/qEOB9mnkHFfOx0ncTpOW8vOalzOOjcrbkVwwoOIOg2G3tc/EpXSriO3BFteCeMH8M9fl8s6
NPDA5q1+smiXCwtW9JBQPfrnv8FKXGrmKEyLq/T0Sayvh1zW9zh8Nr0Sl7KqniBy+lG4eT9tmYmY
PnqSwvFsXwvZzFE4QcTL6WDfgjYjVTq0Avi3ki7noo1+0oap0yI3ZyVI0krwOXgtJCXasKzZtBYS
anxJoNKsuRSrtB57ptP1qwkFhJPrV49tkqqn4Tj2PS1kWT1ePxSQw9PPjxMEX1Y9PF5bcK0Lhq5H
rjLtYVeD51D6gecHYQ3Th8F5kw5n0lpPZAtt9QCXK62qt8XjrScqk6txWVZPRDMiGpdQPff01+i1
Sa/EJfjLh6thfS4hJmqqoR3B3VnBW+HSa31BtO2tuiBaz6W/6lpWAuSqyxD1kMGUSy4enh/nK0I6
hKHUqpAYuS3ZIbN5b/tL7IQisK1IC1nt6OImyt3G6hIHw6wSJoIhKsCUafN9IMFttPtQx3n4qUF/
OaSjh7QpDXKVRWnurBFxj/DUINJD8jIdKJ5l/RjUE24VB9kunJVb2TwPgmbb0kI+3vcEqFW5UvW4
cpnbCII1a9z1TEr1rBL+u/PjniVD0X5Ig9xm3+JnUcflxx7Lr1f8WnO1XR59b/tFlTnx1LWXcKFM
h+7Lq8fQT3bODsvvPJaoSDVNvMD8cBqF29grvYSk4AraV0TNDxSy+Nk9uNIbF9ltT7E9u101649f
4TKP6psdsO2/eCW0n7LDD1/Ova+0eRUFvp+C4Ssn80R90jl+lI6LRTqxSOdLEhy3fRKdMGlkkiiF
sEl0Do1MmDT2pE/SHXdoUniCRCZMWl1Ih0ZGk5VzmhAOjTmXiEaiEjZRc5xExk1am+CWS6NzHBKd
65HYky6JjFtEaX1ak/VowgpaVXCbaHY0YYVJsxROs05u0aQQFicpxaIZlLCISrFpYli0RsYdoqVY
VPdJ0x73afwJTuNPEGuN20SjsmnlejQxJK3SuEVzAz5NWIdoUiZNJ9y1SKbMiQ7eo7EnBA2OC5oz
4y6taoWgVQYnOmWidxQ0NO7SGiS1XRDDHqrqiEJIIncWrRcVwiTRcUHDk0SLIvZU3Kf1VD5Nyx7N
PolxNCd3ozTD4z5RKR5NyZImLZdE5ZEMQHBJoyMqjziUIpo7te92iK3MpNUstZUJqkO2aFohRvou
DY37NINyaULYNAMQRB1zk2Z40BppdEQxOKfRCaIXEMTYwqY5KY+mZS6IdA8FUuZDeJxEB7VLonNo
xXo0MtAxjY6bNDqLhseFpNFJi0QnOFHLnFiuoMlLNAKbRiZMGh33idISy+U2rdaERcTzabXmEuEc
WtOgcufSpOXEtuE4JDJh0aTgRFOmGbKQHpGOVqoQNCULQSsXOg0SnTBpdNyn1QbnxNrwaI5ACCKe
ReNP0si4SVSzS1SfSROXqBVu0Yol9hqSZvNcEN0PiUpIGh33iHQ2jTtO9I7UBm7TqkwQ2SO2M4+G
RvQqDk0G7tCcGdEjOzQNc6IVEz2AsGjWTiTjFk0nwqTVmEMko5VqE4UgmpNPq1iXWCoNzaOJKolo
NKOjOh1q90mTgZtEOmKpRBcriL2TT4PjJq3+Jc05cUmT1qORCRqZQ5NBEFVCtE5BNABixXJB9Im0
RsGpIwGbikezO050Yz6tWOKojJs05XFiiM+Jrp04NOc2kT2baHqciOcQ1UcNKlwanU+zAU5s39yl
qYUT3QqXNDE4MSTjxHiBU+MUqqeiPtugxsdUL+/Oml9/vI2fYnmb5PhAcIs73hbfgnghzQqgGt9m
aTxIrzebhVvPWVbOkqRxL0/gA0TjYpsdfn7KEvw4zSbDj+cUcbgJcFGR5TAm9XCP3FMWAUG0sB3p
rZjBkStm8J5AimSmDMt1hLlWGWtkIYsuPZDek09ehjC99suIZtTreMJ3nSctQzq+Z4unlUOVwUX7
ZcQzbdYSvvMEYsQz1eHBIxf/CcQIZ8uA9d1PUeWtuLc1stB9iTRt21yr0a6Rhc4W1IZlytUyQINa
LYNcz4lSWyBUORfWE1jucwMkl9HUuOtw2JjgbHzy/PdP/xuOb6PBJ0/7V27zkXBUf/eOjgMDlE9g
GhnCDde2HOcTk5vgkT5h5icf4G+CX01j7JM8y4pHyLT3/6V/pqn2z7DznWM2SkZZ/m6bpa8H8SBg
3Qy/zhngd+fwK6jl3qDdfBBfJ+xFDk0+EC83Pu1maZFnQ+UPDHacjDpsdzI+DsZFknfYxW0Sdd9F
w0TdenN48tpgr7/ZuUiz7NZgZ0G+n+cGuyiS21vwNZDaPz832AHk3hW7BtsbjA9PLu+MjU8v1J6k
bdYNbjswj3z87V8d9uPewZS2MwWDT7Re7B+96sN19tXlTpjlhcG+nCaOq8RXZVFfnqlDVcpRUCRp
BCowQbDgNggHw0ExSKDcX6W8Yt/iJwcv8Yt4oBhwgRfD4G3CwOGd5YNRkL9jhylI3Q+iZONTUMxo
FKTxNtuFAfCP6aA43HtlMkx00+IV+FnUEeoUWE76ewNk/ccjA3KqLUiVXpkJMh8M9zuse3FpQHIf
eD76/SAYDIFrAOuw/dOuAQPuU7jRPe+CDqCYw3EW7acGO7rA3/27ont5ZDBHhrP4/cE1wh8fvTl8
xZ1wULC9Pw+iQ8gAl05nLp1OgMn7VAi8QLafLgjA31uAzmMC8JUE8Bb59xbYP0/eDsZo9Yd724xv
mVZd5AF2fmggzFQ3vvn2r/rWPvaEcBkMKs/Qzk7f9odwULIB75cjYxFlamIKEMaZyqyt6iiro10d
nerolUfFQHkUcDTgKKujo46vobvGQg8SaDp5MlPatlLvQbfDjvYuLy5PzzoMFH95DJaCrCo97xjs
x8O9871FtkHliof7wvO1hedPJ3zJ2rf4uVNwMNvsTJniqfo9Q1s8xR9gFA/n8IOagAN6ovL0AK7D
8exEUZ9U5Op4flJlOKlzQAJLPcuTflJEN7jfsHKrLExuBmnMwtKB/ojf3gU/YxqmCRnAZbKTyShU
1xadT2IuOp/ji0NwIcpvsv0US+qwg8FdosSub+zEMYRyY+WDALn66yeJOm5slO6fU9z/sbpdN+th
Atb+WDdgNN2AsVY30NF0A8ZD3YDRRjdQaUWspxX+pFoxPrZWrPW0Yv/ztNJKyFBpRa6nFflRA6mn
00oTSE19jE3R0HOI+Zlplpvjr2rFORTFLZqWRVTgeg3uoztndz2tiH+eG2pPKwLGcWfdwyrEWKoV
JNm/u8W4YL6t8ZfsxW2eXRuDPkMzPMnyUTBkcRJlcXL1MRui0b7GZhriJpQU3ST46oeEXQz+AtVZ
goXvimS8gZHZNrstx3evTKAdgzrSWJ1xOJtAEfEgBTA4V3eNYYn8Clo4KGo+8IPiqpe8GNUnefsb
n5Z2ep+w/MY5EEICyIBQH1cq+Bq4zG02fxXKRS0BG5eaLpW8vn6/nNUvqEyx0i2G2wgEAX5d1yfZ
4cUOGgX8TOvmHL8mPy27jJ4HYAzR5Qg4AF6n6SqB5tGcADCM2hZ9KUTMF5Nw/A5sc/RgU9iDUV6U
KH0t5vau2Fn2Z5JDhJ0G18koSQsGD0rHmFHgmGoYXAPh2fF+d/g7MHNxCD8c/gmD7UzuupM8hyyv
zNEO0rzYMzubexz+Cfhn3WQFHqJsGHegVU3tfc8EDV1k/eJ8XBiYzyjjesRPhq9MOETBEOwM7y2K
bIPIMCjYZnWubjbBJw2fC3zU8Ht5DUZ2MOSGQquRARjMvT/G9oIi2H5YMc4TDUTul+NBObWPevEW
HNN5lhXsDMpjLy6GoL+Xm6pcNXiCmoTs21DW3VnwbpgFMePCKxvwJju7CdLiYJJG2NCrRs+OzDH7
Embs0jFc4+xLORkD0KfwnOEyANd0vrsPFnlUmmZZApozPBlAkcvH3uigMqjmqCg1e5KlxgFobgg5
y8OP6XhyixlKVXx6PryLT/O4w6piGtaU0Zz9mSOK8pdYQ0vkgavnSRCfJ38wEKC8WrJ4AdUGPjrH
toulR2WyU/IDqZolyNuZlqhq8qwaLh+lvytVKk1/JjbRqScxE1v2N5efQ+FvBnFxw+7A0e1cnB2j
Gmd1+pBKu/g6nrNjA1pjDm50DJo6OtqJClAl9B5/nmRFXS4qGFGxaaNK4y/YeXcXLLaUcXoZqicp
8mCQooWPRtgAq7q7eJdGNwariwQRC2AY8kHP/gaezakrmKrLVBpbKuJlqcjLsiw0Oyirw/aQ/cHb
pAOgx9cjRK0S2JaHhVLgTlGkuwVkAx13i3wIDu/8CCjh8mEaq8vq+G1WnA0n1416lCxYFoSCm6UT
OhqMBgUzt6ChvPmifMaIEqKxgAJuh3W5qMCyAW6z2fIPhkVV/Bk0qD30tt1R3L3Fy9+eHeZ/QJUM
oAJurhFp2svXzLJTeG0a4qh0WjFlNIxU5aMyp76sKR+IpxxMC56y0mlgysKhSVwn8TZSN0SKP4RO
sGGgL1C9C1TPXBOE86YV4lmZQncJpYBq8GHNxevBeIDkNRJU1+JlJQ1mZNBYDveUx9rE81JAhXmW
qBmyRU+Me0Pe8iv2epAXE4igUKg0GQIyEI63Pz0623/dfWUCdB8s6hWQp2N2tnO5D4p/tzso4H1V
6KHzcPvT0pGyN+fnllAHR6oDOATkFE1rGymhgwA1vFJ0TZfy6WF6BgEdulGkft01tysWoDSoVnBv
ENOgS7mEST6wOiwa2PpNOSH0DJiPwAq7rBNwhGU5kKtmru4S1OPvhlOFxi67n4MmMChpeD5JrrNa
tawRoApyrQ8Z5Haeg1xT3AtyBTXITeroM3k0yBV1kCtWD3LDflTmnrnwHOQ+B7nPQe7/Lcg1Hgpy
DUqQy5+D3Ocg9znIfQ5ytax8uCDX16/IOe6eLZ0wIcUEUejJj/hI9yPMrbz38h3/Yyzf6VCW73gL
i1+89da+PLR0x1hkvqOYVyshOsh8Z521R7xt5mkLd3AJyf924Y54moU7BrOqo6yOdnV0qqNXHivh
Da3wzwt3tAt3AuglYKxZcfZIL3F01p3vJKz1O4lHHol02uskPszSBHAY1yisWa5mx+oZs6Bgfc+D
J0NjeEjyCrr7q1rbnF0cg9zLFV3eb0nHTz4b/YQdsepI88ktWDvww3ZYnk0KiISKjB2e/8Dsqebl
guZlrXlHXk3J7AUyc5ZssbNv+8mGsQm3N+F2i082vk9yiJdZnA+ANQZqmoyxER9keZSI3ngUTsZT
olEWT4Yo20BERtpXJKVZhuAEfrzYZdF0kcNy+1SE3Snh1FCbJ6Ic1Hr6bffw6j2N978RRbIXro2j
ltEABoSiTAZ3L1ewb3O7XoQCVhuUThxgYWwBwcsmS2EgdzvT5bysjFp+35pRd8CoOx/ucd3DRp3d
RIPeTRQv2jPeMeBObcy8RWMWYMz772fM/6EhEdWYd+nGzEnGDCPlB60ZtB5OrlkZFO3unMOgPKtH
6773wNPSf6/9J0vtP5mz/wQjur19NqgHoo/FGkk+CIZs53JnthUsTnJ5Nvu1NF7IEp3iYPf0PVrE
s3tn/JHIUSZeZfZeE77wRSqz9vQNlVigih/AshapHsCSC1TRNKJ1GjL7fosWpBbdYvdEj7mMqnka
79M87zEdLptNku3NJkURZQholEPAzjIPMoYCeunbBQdS3yj9R//Zf/zb/UdI8h8hyX8EJP8RkPyH
T/If1rP/+Bf7D27qF9eUzgOI7k07zC6r4exX8A4wfammEB9bQG78JxzB4wto5L0FNHJ+AY0j32+Z
+OxCb/oKmmku+rLwBa2NkngwGX30NTOdldbMhB5xzQx2XYuDEFJjXO95Lve2TDWhV1cSZHobpBF0
HMeDKM8qxsbs153jvSv2vQeJ4maYpY78/PQWWlCW3meOXSbRTZoNs+t31aTLpBTyY+x+ecJlbgtD
xYVaKhWb5ay08qWzfW+CfAR9AdhUONyHmQhUOjzff2Xi/PYgvRjEiaGwvoVUp5xCLCdM9vOHpwDf
ewLwsTm0lTa/E3fv359FEwtTQA9ufl93Eun+/Jlxb/7MeIL5M4Myf4ZqP7+As27af6DvD/55RvZE
E7UrGdnJ5917JgZXVjcwtIn/uYFF/zwDe/Zi/3YjqwIN/t6BRh3QQIDzMYIJ48mCiUpD4r01tIcr
uJpHK/9JLVnvraVjCNeT4TBIk2wyrtX1MXT14cLTvnmFDhwbdVyONb78+qtSo/7zEKAdHf8DhwDP
0dm/ueN8Dv+fDeyDGthz+P8c/rduZGWQ8Rz+awJb/zn8J2npOfwn6Ioa/vPyTbz7xQ1O0RVzi+B2
8yyIo2w0N1FxkhQ/FTnMlLLd7rHtwrTcN4NrKKloMJpt8NWyZMHnp+5/+HHnRK0AGN1OQLfgzaN6
AsTzZfBhVsN1PlRNELfD6+bvhesvWxTHqxWejnx8CtxRc+BQNdC3I8z56TECfDm4TrM8ib9iv8bV
TtCrD7Bxu83lzXz5xu3XgyIYMuhg40lUqDlvwK9PT4JR0hh6Y9wLRj3nVKEagtjI0uE71h8kw3i8
jbvsfj07uUKDKVha7s3AFuJjE1F397tXMFN3DVWf5Dg5F+F+TOSSm47j+ga3FdkFgJSrZKYwJheW
tB3X8xXFMVBAHUwgAMNOWgUbFmeWZI4Nv4rm/PUVO0/Ui/YhNAObi34fw6zpdZbFm6za8fxi/JLl
FU0l0+d/wgRpMifUzzvQ747HoIEigKDpp59/qdkpC3oDBaE2ypxBngQokTstoZ8nCVBCRLhYO97c
QobOdCGDp13IILnv4TwmBDZRvZABry2UEd/f7g6M3GaDtGhhl7uclBuyJ+lwMALp42aje6fepruw
MfmJt8AbhC3wxofaAt95bAu8SdgCD9qd1Xmj8i8d/SZ449+9Cd5Y2AS/bDd0EwKV+9dKQwIfA1z8
uK+Y2Ds6gyL29i+gPEwedOGnCzvci8vTKrET5gXW4Z06Ay3cnR4YYBDDvsqxD+F+WcVVDXdhX3c2
NFQZx+Pfn7yMi7d5oQrp3CukQyikUxfSeayQbqms8ztVMbtBDBnUce8IE+fg/rO3CdzCPd3QO6t2
ObV/BEBNvA/Azv65aiIHg3xcVNV5hu5K9QPQZL5JUiAAkSGxjyZ68zue4xFOFy3Eij7SfnlF/l/d
L7+gZqdsiGUQW/ff5RZN3LXJHcNyjH7f6CdGwo3EN8zkARCsqzrK2p3A6AYbMQbqyxbYFdfW4uI6
vLhhijKwh450zMZFlkPINhfcX0DBcM4OR3BnE0PwLbh2CJGEJWaX6Z4Hg5gdHi4s1zXnYvr3wfuI
S3X/weG+uSzYF8RgH9rAzDLd+xiShuF8v3x5bhLNb1hdHFjUb8qaGVf8zd6VLrdtA+H+bZ8CdX/U
npoSwJucujNO4t5tUsfp3dHwtFnrqkg5bicP310eFiVbIkBJhDpTTWIdJBbfAktgsQS/LQsYTH1i
O64hvh33k+ppG641xlbbcdetMYI123FZoxfLvR3XWvViv42uveBvXmeWqnqzN2vqFb8QW6Js+u97
sjp1zG1dWWVbV7be2osVw/L64X+H9n+H9n+Htu7QbnyyIE2Gqv7I/6kf+z8Z4EG+RjCPwfSevrfH
V57lb5H3b+mdaSrTLPMh/59qqe9RxjSm/5//r4vX+fyaMJNQ3cVlCiPDCThYN3hTB1YTsGkoxBuT
PeaSWYQNlfU+qErYrmq6hlYrMc6m4W+GSTEXfOCNP87IZBqNSf/Om/WHid+H430YPOKsd3Xx3Svw
HaPZKElzjzKMxkkU1mRrhqsba2SnOFfOJuPkn8JJd8yeDhh1mAdPSZrNwJccEbUmzFBdw+IVZoMk
rceMNdIMlxmuVm8oBZ9ZufwG3h9OcvbYNlC71QjAcA2n8STTVZlLN5/E6B5VMV1dbURpuVRvRMlc
prmM8vbxZoOxQGWOGvfXLpar240AMKrgGjqvyg1mDVec2lQlnuS4Rr2Vw5tgmETjDJZuXz5/dXnx
A/hSVwQ0uo/G/oxizfhQYU/tqQA1X4qYFre88+ffkHg2GdVkbCibjBModxkps8jDOFH+Q+b5XNX5
sFIM63BtbIBZNI7eekOQRJhtOGr5WF9aHwUNvdGEHZeZTScxbY8GBWOB3QiAokE1WZ3uMpBk8lrd
xguN6ftTGbTReYYNHE7VzScZLo7n3GPL5guNGXvUmbn6ytx0700TWMzly/93+Aulqv1OZ7YKNh1l
yuWr5+/OUYXej9/1guFkHJFL19NoYPmG5WnMf3eTZdM/CG71KvSberM0Il9eXb0qk4TDt+J5UTIE
keS3P8gKHvWQ8GBP8uDRNuBhMbMML2KWH9It8RhggVz9ZW7AE3mR5wdmHNrM2RYPuF1aMx6V6mQ+
Tu5zPDuo0uSoUjV2WmXdH8y8aZikt7/hGG/84VbflaAKhuNMckoYJfdEvyX+PI6jWSoiEaO1sGlw
HpWrZxdgTUQEVJDAJXdRs1kesRpGd9GQUBFB+bAym4/76UwZYXi0b9sO0/TYVhzdiRVD15gSxJ6t
6E5geL5qMtX2+9RTVS1gpmIHkaHoUWArnm0Hih6HUaTZpgb/e3c3YRUjzhETel+yRGmnxHfz6N8p
8fJ4fJz/Hbti6HFkhB5P8M7G3lU5ZmSegjqnuSlFbqEOLdTJYGsuaHRKZpMTEQ1agrY9S6eRbSmR
5vmKTlWmeHrAFFML/EA11TD2Lf72Z7pm5l3AmMPk9AK3QnvphR+fvSDPvzz/6ntXpFRLVXkNzi3U
gU/H+kl1p9Adx+kxE1KuJUzeHtkVTB92P6LHM/UC2NUMN7gsy1KZiT42yBxUN6f68J/EeLfpjG6Q
fpuHHkEqRtBJFnpuKUhBQS6pJFX1uaSqkEvore8Fty6B8XacKUFx655ojkjZGd4SnUUxsUVKTWeT
bBJMhphX+t42B5qqeH5ywicCFk+4uLm/w+bwvdkswXkrGuc3IzhElCXxL8v/quTT/N0gn20ofueH
vXneVL95YfjHMWoSjcM+/N6n0AsWU09c8hY0g+ED53v4FW4JTofzazynOKP6hZyRjydjnMU/3lGV
j44tKlMKpwHrDCbQxwFMs6vV2rVqX/3yxeWbZy45euPDdTY/JW+T7AbvMs/vidoze5ra081PwtlI
03oqVRhNlSwZwZoyyY7g3OGQ+FE1na9U49SqGc2HWTL1Mrz48m7AeNwIwv0EfyTHMCDPPLD1EyIm
JJ3OZ8kE9C2a7rQQN57gYjr/fBcFIiKNXeAydo+L7QIX2x5XzUoLPOsMtTgaNl8du6ys+boQaK1d
tHirBjfohjnnGvY5Edwfm0UwPd4m02kEddE+FRHScn7ldwN4XeWNKGF1mG7rpPFCJsd0nZOmPjhp
JyLoZXYUvyMk5lPvvbt4ge++u5aWp6gKxl+9OMM9d+XmkzbiintQ0KQqqW5U4jLcfNhDcz9KshSO
a9DyeTAgv1TC2aS0lxZ1Piz6Q9w8gXowUvoCsK5qpcZC5GQcrQrQFwIwan49j1IMf0cZhvlwmZXd
RCTGu/tvEczUm3kjPJjmoXEsgi545HKJvQum8375MAMrvr2NkuubzKXFN3CP4eP4Hq33LpkhOdoY
2pfANBTk78E0gXeAEVVfB6kOZ5cfNfyYpcFgNAnRrvhR0b4Xx8kYXfP5OI2yDSX7k2nWR8Un81kQ
YZA0uo+C/kKgSpmqUFupF794cUWgZxyV/gH9MIK+9efJMISGIGGUeQl0z0cffUQUfJH7oDyHlCcR
sMMYDKxc+5anScB4HwzgyMAbYlV46QajEJ3jsyO8Q332Bh77PIthzI5jU1MinVEcB0zFMXxTMVVL
tZwgsgPbJrMJmnU6ge10N3cBJTDXDr30BsJUSZQRULD4tHTSEVxmxfOo6dnRkUT1iwXKAHvEJfh3
7I2gCR7GzXv0Y3xokPxPeXrv2Y8Bu/5VJu5R3m857pE3JfDCGUS3yO0ziahgRMFxirs5y/N79Js3
sfnPobUnLurltmfeSCBsUNySxN1Seg98ggCdGvhd0XpUwcW8qS991dQpubkb1b7j13WHsbREJfOQ
+wD9FRyEoB+iUCIamDnCwXDi5cfggYmcjPPbz5VrGK1nSUCKQ6TX60lps7L66WziR+ipHF5jFTEL
/5/8oQHZzbV0dZPayyW2YdpPXN3GdqgMAVThZHA9H/+TTF2Sv5HJ7Sk40FoUeGpIlM/gs21EqkVl
YFwytJff7BXCfbCoNxrG5YjgJ2MPc1ZOb8IZ/PXCcHZG78vnokfRKP0HvpqWR+l+G0gEnWV5dXRq
EMpFV+X9rNqtsCpPt7qGBSVLaOMJOptf4P6ewcvX5IwcDXHEODoAOD9eXL7+6uX3iEntmXIR/Xzx
fR1PObvLxfTjV5dXg2fnry8AEb3PnwOSbUcX319d/lLCsYJINpwvf4Hnxp6ff/vt4NX5F1UzMarK
xvX5xfnVm8uL/HL7EKNheOMGzriOBvnH9N3Ug+/XYTIbeD5EWwf6tS/X2F6dXwy+e/niAiH/HaVy
wXz78vzFxSVCKf1AuXDm49vx5G2+ORKPE/yVHNP78EQqrNdvXr+6+P7F4Pn5988vvs2tn0kF9OWP
g9dX5zBoffvypxxObNjyx6xX5y9eXA5efv7564urHFX3cNBdGUANwSDnEXGJVzzDCM/adIYFXhgZ
zAa+l0akeMmZWPCV+055sxS54zrumKoxytolN0aO5BZvJy8hYbKQRONw2UA69mAfkETIoVAAkeN0
4GuqjpaumWJkW3ntFRFPQAcvqHL7x2psyq1bk/JZVz3aECOIRmWYDT7lTDnfPTsl6CKlAFBFrMW3
U9y6GXnBzUGAXYImERGGKhewyjieFDxgY8FkNPVARC5p1fpOq8mO4IYuoqlSUK4NTKHpyY1L5b8X
AVmpHVm7QTVIo+tR/nwNKfeUVc21fighx9N4XEQ+yCfw7uhWcZXs10luGBvj8SCbDKYZHILvb8io
yOvkkgewnxRQvQy++oZpdx7gwtDfUnyL0ALNAg80dfk9pLp8fAzwLfAs4Yuo5Hnlkf2WN70W9ptb
7MJ+WWDadfuFg7n9MlVl8g34qRGLUcmR9IarCpvwk6IBy8tK101HrlmsCf5btmdYefA/x0tNJhPk
I9ud3vydqiP0fbDjF9a6sN1oyXbxaG68NpVvuk1WgmA/QaiVkWiRZCMp2h8bLm9uQoolWTKOJ/kP
ixavGhy/ymzjR4irbUXwcYHYWUbsHBTicp8KWULsLSP2pCAez+oh29pmO1qt+vqauvh58cKr8/ES
8ZQwkos6lnNZrldHe1CH1n8OFurUfj18dShlhToqa1YnYA/qmFL1WTsH4KVSKLm4Pvz6HBCHy3NA
5JcOzH9hDoj83FV4mAPCg/Afa+MTrrRxIghu8x8WLb5o8HjPNwK4148IEANnpRbROHxAvPewmTjI
6TWY9QKk2kHklSOoIn0h7s2CGxgAsvk0BxR5s+HfePXAZYN3nm7gXQasoo3KWE91H+OBdpDkwss9
6Mtb5f5DYHGn3qdnMEhlwY3U8B4P8PqWQqlRNXGw0/8QWqkGPJ+GXhYNclkDuBEBJ0ClmuonWS1Y
LmvYDIaRV3hhtfncOyWj/LNqqLGc5fSTuJwKF7NsXbNl4OJ3imx0ihY+kSmpg/P17wDXv4PiwpU5
LxZT4g2mVkMQZf/e5VsBF/t+sA3zfYtSRsTlh0/K/YAkhml8OkvGmQxI8CofOpHTbcXrURwTvUBH
NaikSGbx8saTMRnB5VdHRfEhNTmzPyH5WCCzo/CRjEdNwpglbfd2/srHyUewVNs0ZN28W/HSh/kz
XemNN4vCfMCsBnNaTTtWTIP9bs9ag7QAdSijOD4zeHAwZlo9IFH3EywZ4IbefAzWdTeqWuiUBNl9
BtOaH/ua7TsyXdJZBN5VyrVCjuEskMkcctwAilTP2k7mWRwS8+F7MoavFlHCyJ9fw7OcpJ+NpvB4
ywmJ7vEh87Bg7ggmYUTUFShPk9g47anC1knkpgrjgsRDFbZRUE6lo3jJJH/o9fjjlbafQigLVw79
JJ3071Mlm0yGqWIifaZi6CqjPTjw8YlAhXk1bWo5KXrv5eDFV5eQBFVEyeJ5p4IjiLw/nUUP/EM3
SZwR8hvOWaZhOeofcBgkLB//TVUdm5l/iFS5RJ/VRl0eTiyac2KJwFpwYomUaoPfLTCCZR3TGoNU
3ztWmBDkFQop7AsRAiljOwKporoVkTYnKZEBwiyTrvAgGctML3hGIw9S6yqfOMbNg2S6lINHU7Xp
7kgboUrjKVtwVMNuNxJvksg1EnNCah6JNwlqT2TCTd/CRRpYZ6LRheDvgi+QW5f1YyOr8wW+PRHR
oCVobkIWrg54xDAjZkS76AVuhThmKOyFiVAvLGYokVItVeU1OG46xP3A5O2RXcHckrURpW+YdP3G
SbeqcFWoyBzI1NVpV5h+cIsql44J0g9Ctc4TzdfPf+oXCy4QWuQV798lcZ/CPAYTYeINEyRl6kpY
GsySaZYqUMolz4sVnpeSI2gdAr8dERCfhK5BQDq8U5LzBeEQdZeCLx/ctJBddNNuxb+Osgw0RaEG
5nC8ekOYQSlv8fMwrJXOJlVeBFyuVPvZcXkbx/V/vNIXrlQyRZfpFmPbZWXbSy/lRPcZGIY3VJIw
dY/u4eBImc+T8OjsiOk2pb6vKgEMQUpgRY5iWyZTfINqvm86tuNrR1tXl8RVfbFlaaqnWYpuBp4S
GpquODS2FMuELgkdwwoCe+v6xlH2djK7reqkvh2HzIDBn8UxDL1qqASGqiteEPoxs2MtDM3t6vSy
DHNThsrIC7BCmmeLuHApc+Gz+WyT9MkdSE6DDC7b3MN5l39599X3n798t7gy+vN01vfhYn44nShK
VqQJO9MofIF/0MwQCkmzFK6ZoYI3Myu4ioLGlP9U2e/iCFpcnkMs9oKo+v1oScPfa1bz+9HZ71x2
8/vRkZB4gF/J57ITQfmVVVR1cNmFSB11K8AKHtsBSOMYvYvR+sHSsFowgukMxsM0mE/maT4Ucgj6
8uqZC8sfb4z5KyYxCYaYv5ahlZEkJSneielh7tg0wVtUM/WvMo9+byvhqpBwpvOzNTsiRRdkzSKl
OMiaTSHMjIqUrUCLqcoH2hD2FJ5XHsxyyiJNb44baOpT+UCKu8iY09b1bWZ7ganHAdW2iivkkPSn
HV5bs1rEFZolPhlXEBHAGVdoFtRyzQHjp6lZsaGEOtUVPbZCGAUNXTHtMIwC6JkwZPzL2lpcQQz9
Lha03Kq0WNA2a9ASNO+aj6/9l8IKcnqBW6G99MIirCBSqqWqvAbHt17fG0zeHtkVzG3CCpX09WGF
QDysUArln6p1XaTswr8QKbVmquYTwZcMohTRNhlEUZw3NLKTZBDbVdlMer8mGlNUa+47GURRjb1d
0gUuIWJc+80iOZJUcAnZNS6OZBBcQrbFJZyfofHq2GVlzdeFQGvtosVbNbhB1805fDkGmoW0nF/5
3QBeV3k9yh1lF+CFLJpdoBm9zI7id4TEfOq9dxcv8N13l1AyCH5xu0sGwV8ndzKINiIn42hVwOHF
RnafnwLFHl5+ikZUa/NTYMktaLGr4vnOSThgqnvIT9EdxoPMT9G5+u3yU/yU/DV8lsnEzZWfonNU
bfNT/PCSPh9rh9aeGGeQ254y8lN0ruQT+SkkommVn6I7lKv5KQ6vsTjyU3SOkys/BYrd4sGKsrgA
qhb5KbrDuJKfYq8QRDJAPJWfYr8NJILucX4Kuej48lN0AEskP4U0OMv5KeQiasxPIQHTE/kpZNvR
Sn4K2XC48lNIwLVtfgoJkFfyU8gF8yg/hVw4PPkpJMB6Kj+FVEAc+SkkoHqUn6J7OFz5KfaMRSA/
RQdIVvNTdNwxAvkpukLSnJ+iMySN+Sm6QtKcn6IDJIL5KfaAiCegs2V+is5Rb5OfQiLYJWgSET2Z
n0IKni3yU3SIkj8/ReegatSdUjuyg/wUnWvTSAW3mp/CUtm+x8Xm/A+0QFPiWc5P4fjy8THAt8Cz
hC9y9j4/C9rvTvNTdK4OT36KzkFNhfNTGPt3ZVsF/x/np5AJcq/5KTrXBiCJ5afQbclGsnV+CvmI
RfNTyEcsmp+iO8Sd5Kc4BHV2mJ/iENTZZX6K7vTpIj9F59o0zwEr+Sl08yD8xy3yU3SOuE1+ioMA
uZqfooPIK0dQRfpCXCg/RXewBOj9l7fK/YfA1vNTSA3v8QCvbymUGlUTBzv9D6GVasBt8lN0B7Ih
P4UZ2zQ4GFyL/BRqbDNTBi5+p2gpP4Vu7n1bW/v8FB2CaZ2fojuM3PkpuoO0yE8hp9tE8lN0jYov
P0WnqHAskNlRnPkpuobFmZ+iO1it8lPIGcQ58lN0hwafGTw4GI/zU1R+giED3Kb8FLHn27ZMl7TI
T8G1Qo7hLJTJus5PUUAxOalPzCcp500xyvntqlw6JkI5r+Y0iPoTz1erVGdmO562TRK5eNo4ITXz
tG0S1J67gJuxgYsnbIn/XQj+LijCuHVpwf/erEFL0NwcDFwd8IhUQsyIdtEL3Aq1I2pr0GBB1CZS
qqWqvAbHzYC2H5i8PbIrmFsStaH0nRO1oVCRCYmpq3OgMOPYFlUuHRNkHGM8lO0mD2X7noVx8r+b
zQTt/LKLbtqt+Br/u1njf+ctvuB/N9vwv6P0NvzvJhf/e4N0lLOZ/92PjFANTEOhjhfCOGl7imP6
tkIDGmlhFDLds4+2rm7B/x6EMWOhaoJT71gKDIWqouo6VcJYdwyVebGhWVvXtxX/e5s6G/jfz5el
d8z/bq7lfzfX8L+bzfzvPHYDbOdC4mv871x2Iih/p/zvZhv+93OQxjF6L/jfTR7+dxQkStHOzf/e
Rjg3/3sunKn8BK2OSNEFP6tIKQ5+ViaEmVGRshVoMVX5QGvCnsJj/vdcksqRN07XjM0cZ5GuxUZE
DdOLg205zgCS86TDa1O9bVxhvUTOuAIXJJ64wnpB7dccsRboPnV8RYtCT9E1nyl2HMSKbQYhC6mn
OxrlX9bW4gpi6HexoOVWpe2Cdh/tz7vm42v/5rDC3nuBW6G99MIirCBSqqWqvAYnsF7fB0zeHtkV
zK3DCqqzIawQtgwraJR/qtYdkbIL/0KkFNdUrdEt+d9RxBb871icMzSyK/73Laps5rneFI3R1A74
37Garfnfm4UI02ujyK3535uF7BoXH/97s5A2uLRtKNkbr45dVtZ8XazWti3/e6OMNg2ur5tzuGnF
Ucju51d+N4DXVV6PckeE4ryQWxCKH3BH8TtCYj713ruLF/juu0uU/51T3E753znrFOF/FxY5GUer
Aqzm2IhB1R3l1C+qdPbA745iD4/fHVG143fHklvsPMLi5c4jxlRnP/zuHWE8VH73btVvx+/+68/m
16+fycTNy+/eLaq2/O6//Hnz+io5tPbEOILc9pTE796tkk/wu0tE05bfvSOUq/zuh9dY/3J3rk1u
02AU/ise+ACdwUGyJcvOAENZlvtC2ZbLDMPs+CJvM91NQi5QoPx3XtnJJnYufiXb8sIOtE2yOjqy
ZOtYsR/j+O52fWL57nZdmfHdLXms8d17taBDUG/muw/prpnvbtcdmu/ety1Nvvswdqp892EdYfju
tj0h+O62LdX47kPbwfLdbfvqgO9u23KN7z6smQO++7B2kHx327Ya+e62DeH47rZdHfDd7dvB8t37
9KLHd+/bCYLv3rcFPN/dihMU392OEwzf3YoTFN+9byf6fPeuHWEWdNrz3e26bsl3H8psxdqAjpr5
7rb8tOO723KpxXe3a6p4/9EsyPbNd7fbmkaUUp3vHmTWF7jO8d2VnyrfnbPh/VHwt+dn35/kA88r
vfPd7TYHyXe3a2quzXdnSTTssDDku9s12Tff3W5rwJIm390feJB0wXcf2LEB331gxwZ8d0uObfHd
B29Ot3z3wZvTMd/dUnss8d3ttqZ5Dqjz3b1HkR/b8d3tOjbkuw9vEsF3t2wSunr4E3FdvrslWxp4
7Oqlcv8hs/t890GX9zDG9y8pHHRVTd/s/D/kdtABbMh3t2Syge/uZz4Rj8bXHt9diHzITkWEoirf
3QsG6mAc392WmTZ8d0sedfjulizt+O7DdJsm392qKzTf3Z4rdSwYsqPwfHertvB8d0u2TPjuySCz
IZLvbsmNumfw0dk4wXeHnOAPYe4c3z31Qs8fMr2UfHfUGbLiuyvNcAC+Ox0zLNpEHOW7C22+e4sq
K59p890ZQ9z+TP0ub39mR6FE1OfCFP12WhGJfkNZwqDfTguZ4xDQEAgUeqyClNey3wV1DN0WU6R8
Hx2AxjqgOqCZU9F7L6AbZMp+O9OCffabTinDpmIHnAZUrQ+b2B7pymZr9hvrg/3GIp058ABiJgwg
ZsZVVj7ThpjxZrarQFPg+xNDIuUFkvmO0i67qVv5PaS82EPKY4vvkPLCGCnPfW2kvMAj5U+rK53z
SPmceyyJw8BN8jx1cy+OXJ/xzCVJEvNU0DgP/LdaV7dDyguRhlxEUEvGMvgj9N2UQs1+EAlJgiQK
U9K6vvZIec06G5DyFwfqNpHy4iRSXpxAyotmpDxm3ABAXUt+DymPGiea+p0i5YUJUv4C1BBH7x1S
XqCR8tzXor7rIeU1xfWQ8jzAM18jnaI75KtOKRTyVcczJTplt6b1moozLbSTwgmkPI8a1w0o9eGT
uxtIGSs5fXOpNomzermQceYUX05COzdfnd7L1WKSLp13ywnfEU+cT8eCRkFIs9wjOWu76hDgFjpE
2N1CB1QpjibwiFCzhY5ziqiFDqSl5oWOc0LmJ0GBCGTGPd9N4oi5THIYwhn8khBRlsDh3qOh4bPz
tNx3cYaNborZGXY/2x97Eorb/pV1jmF6Ad2gXnpht86hU8qwqdgBh15A6Mcmtke6stlynUOpn1nn
kEbrHEoUnx040ym7Czw6pRDZQUm0ZNwriRaMe1UcuVbTFeO+RZXNLO+Ty0Oq2tAC4x6qiVoz7ptF
tBHiSrI1475ZpGtfOMZ9s0hbX9rY+ca9o8vKmveLam2ctGXcN2oYbHBOT805aHS6Eul+fsXHAGxU
Pu2yI2g61rIBNP0RdxQ+COll6t67C2u8++7SZdwj5Tpl3CPr1GHca0sC474uwJvXPgIqulz74KIH
xr2SfXyMe+XKjHGvSra4+koV31x9BT9BP4x7Sx4fK+PebvPNGPev/rr87MYb0jeWcW/XlSnj/qv7
ix+z549te6p1hGG350CMe7uNPMK4H9CNKePekss64/7xbSwc496uTyzj3q4rM8a9JY81xn2vFnQo
8s2M+yHdNTPu7bpDM+77tqXJuB/GTpVxP6wjDOPeticE4962pRrjfmg7WMa9bV8dMO5tW64x7oc1
c8C4H9YOknFv21Yj4962IRzj3rarA8a9fTtYxn2fXvQY9307QTDu+7aAZ9xbcYJi3NtxgmHcW3GC
Ytz37USfcd+1I8yCTnvGvV3XLRn3Q5mtWBvQUTPj3pafdox7Wy61GPd2TRXvP5oF2b4Z93Zb04iT
qjPuheR9HxebGfKkdLPxU2PI58P7o+Bvz8++vzzo/eRbc/x2zbi32xwk496uqbk2456n7MiwCNtZ
DjUsGzLu7Zrsm3FvtzVgSY9xz9nAg6QLxv3Ajg0Y9wM7NmDcW3Jsi3E/eHO6ZdwP3pyOGfeW2mOJ
cW+3Nc1zQI1xz33vkc0BBox7u44NGffDm0Qw7i2bPGDcD2FGl3FvyZYGIrx6qdx/yGyVcf/IjVcv
MfxPmZ3/h9wOOoANGfeWTDYw7gOWSf/R+Nox7gMh8iH3F0QoqjDuuU8H6mAc496WmTaMe0sedRj3
liztGPfDdJsm496qKzTj3p4rdSwYsqPwjHurtvCMe0u2jBj3wRBOkYx7S27UPYOPzsZxxr3KCXQI
c+cY99JPfTFkJD3DuFdSWNxseBQsH2qD5VtUWflMEywvxsRH3HPsse7uOYYqQ0SVvNsqj0KWPRYw
M8TbOUUU4g1pqRnxdk7IHHuAhj2gEGMVlr2W/S7oYui2mLHs++kANL4B1QEHPAq9QdRFL6AbZMZ4
a2jBjvGmU8qwqdgBh4an9WMT2yNd2WzJeFPqnTPeQFRr2j2AlYXasLIWVVY+04SViTFljVDZEImf
71MMybIPUbB5pHbZTd3K77Hswz2WPbb4jmUfGrLslbo2yz7EsuzPqSud8yx7KYnvZVK4lMOxJ2Sc
uiISAg5VMpJhHKRewt5qXd2OZR8lLCKU5C4NROTKlEQuFzJwZZZEaRRHHhFJ6/rasuy162xg2UcH
6jZZ9uFJln14gmUfNrPsMeMGyO1a8nsse9Q40dTvlGUfmrDsI1BDHL13LPsQybJXQlq4eR2Wvba4
DssexDW48JFO0R3aVacUAu0qtDxTolN2a1qvqTjTzSz7EMOyB6VeWfZhwbIXMeVJ6oec81arDsHY
C6oPY5iu5tkvPCBEAf7i6Tur4rSqCO9qcet9+Px9WC3IV6MXl1fPxs4zBWBbqnM7J5PTSWVjMH9M
6ZHNOk8nxdF8rP7zRgQGxeolGSu6yiu1d2Rw+29NhhnIrOdOvCr2OecqmS/hzHJ9d+dk6/mdfN1C
/jNgzDmbRRhVzSzPnXy2cF787MTT7OHl9c+1OsLatLYJaor5R4kXvVmAJoyF6+s35Rz0gfvRKp3D
cPJGNAhHdETpOAh8f+xMZ7ueVdloGqerye9w9lASTTYUQ+4sJUhm0PBsstzE7WqoZWzMQ+3Zdn+S
nc5cRQbem245/BuGwhI2z5/O97PnxVaD98rviZ1nMMmWb/02W/Zu5cFHscoVkpiygKduFAfSZRLm
TpiVfVdkAUlERklIEsuWhOcxj3rUlZR5LstS340Dmrrcz/Mg82QGP71bKqfwatd8+PFU/gF/b7JT
9vA6haPUSqqeLU64PywAJ+7LVVIuJcLp8Icf/0beo/An3RZW72wKbto9A17jQi1l5pPbMRAc3QV8
zoPq3a8JP+C7ZEoHmPx/LfzRl0yRnGXBF+TFteA3/vev8r66vq9+X9Xz5WLiIvXGv/Bi7qWldH+2
r4M/ATTTSr0BnIHgRhn3ngE37usJewPhtavqbJskb2zLsh742ujn/9EXZ8Cvmb0G/R2zU8anDH58
ZMC4fLx3h8PjF199/vVjucmLfNFMJDel17m/3HB/e6/5oDt63js+kt58IQ2qzYqlXlo93mxXi7u8
Fe/eJw95cbNl0yRbLPKFm9Nu4t9m8zy5X92qCUucIWzKbZ3oA1mnCU+y+kRZ/ULW/F1WCS9rTpQ1
L2TtsayZgvSy9kRZ+0I2PZKVYgrWy6aP28ZPH93F06NXe//6w3q+mb64ioGp1kJM3bVMdylTtW66
KjLnvPer+o9ksy1v8iS7bdxXJVXuJBeVu1BZVXuzLu56GPxACNYxND0bhXgWnEwmYU04QzNfHOvp
/agK/ljeul7W5WyxWd3O7kp3SVCU25f6zQXbXdaeOvYfJ+7wXjbnyV8+TpZZtUzq7GadH7eSTpXx
rYiYVlxU9w0dKSn21H95Rv+bs+h8XVbuULpxY5rn27/rc+b11Tn6z2PWq6nP0XTMt9lq7ZivXlwM
bPPbnevHO+HGzCmN/Z6t2isXvj+2k5v8ttzmrp2Oo6ch7huypzTUE6k+zfQczZMj5RuTJ03PfaTk
KZHSvqFz5uxzpPo1+TmaJ0eqa0yc0tg+UvaESAnhG5KPDd+sOZ0+/vDRt189flLu1ovmZnPfB2fQ
zdt6W67XzpSvnBf5c+2+Z88fvl25s0RVX7/ofbb4dVVV7t3+a+4b69X8j3eSWycELyy+UWt60aD0
8ErP+5pusdc8xy3yRVjPHMf0q9JdwRR3LnorfzxcN58lv5aL/FhFTAXzKjbGwc87TzRpAuVbSWNa
CZ0nzJTDXkmxGKVzeA9agZhW3BfmedWcfSRLdbLJ5r/kddUEn02ETdzBlhfrbNuAvGi0zWDwBuQq
Bdt+kLuvatXMq3q3ce9ZUi3dZdB8V1fv/J1RMs8YdYZ/ZrSQ8kNGO8ioZcsoDJNPkNy9TqQUjxs3
aWfrstz4o/TTbbnZOB5oLrf396ot0Tqr6sQJWfFsMVe/upneOIzLQaz/ePf44+yufCdZ7Frg/KHp
6ereGVJW52EieUz0e/ZLvtvsZX765usvv/ziq5+TxlQdpkmudsUvRfm7c5miLGarws2n6U/5vSNq
hH5+p0ncrVc37zXDcO3MZfcwmU/BcSZXKn0r+eSb75Nd08mBKCmCTJogkyHIZAkypfSYNCPIBASZ
OEEmQZCJoI9rgj6uCfq4JujjmqCPa4I+bgj6uCHo44agjxuCPm4I+rgh6OOGoI8bgj5uCPq4Iejj
lqCPW4I+bgn6uCXo45agj1uCPm4J+rgl6OOWoI9bgj6eEvTxlKCPpwR9PCXo4ylBH08J+nhK0MdT
gj6eEvTxlJ6PA6Pn48Do+Tgwej4OjJ6PA6Pn48Do+Tgwej4OjJ6PA6Pn48AI+jgQ9HEg6ONA0MeB
oI8DQR8Hgj4OBH0cCPo4EPRxIOjjnKCPc4I+zgn6OCfo45ygj3OCPs4J+jgn6OOcoI9zgj4uCPq4
IOjjgqCPC4I+Lgj6uCDo44KgjwuCPi4I+rgg6OOSoI9Lgj4uCfq4JOjjkqCPS4I+Lgn6uCTo45Kg
j0uCPq4I+rgi6OOKoI8rgj5OsJ4TCNZzAsF6TiBYzwkE6zmBYD0nEKznBIL1nECwnhMI1nMCwXpO
IFjPCQTrOYFgPScQrOcEgvWcQLCeEwjWcwLBek6gVc8JnkkGn/gCJvDQFwg+7wXOe9SLR6FVxumZ
aJVxcs9kQkPGGVdIQ+ZRaFVveiZa1ZuiZbIsOMtSaZGGrEMZMWt2/WuVdI8wbzrePbXqvVVxfZMV
C9dzqXV8x0nZsPRMIjgYimukwehQ5KsMhuFxHcc02FEm5Zn0wGBIpMHwKKhllbHhwTTYUSbtmdLg
kHHBkIbMo6BWU0aGB7WacpTJeCYeHjJlkIbMo6AWUVqICw9mMmOUyXomFRwyzbBmmUdBrZ2MDQ+m
WY8ypZ7JBodMKqwh8yioJZNx4eGoJZNjTIp5JggOmVFIt78dCmqlpGVx4UE16zEm8Exy4IoRa8g8
CmqBZGx4UM16jIl7JhNOMjGONGQeBbUuMjY8qGY9xiRaJmBhY9RI92UdCmo5pOVx4eGn3bjvd6J5
z++s4prjnMuI5qRvLpyyEAZrNDwKaoFjbHhQfXiMSXmmcGJDaqRbrg4Fta4xNjyoPjzGpD1TOLGh
ACmx3qGgljOaNCo8qOWMo0zGM/GwMVqBNGQdijjtlNCby43vOKYNjzJZzxROWUCKdP/boaCWJcaG
B9OGR5lSz2TD80enSEPmUVCrEa2MCg9qNeIwE58y5pngsXGR+2ztiRQkzlgadb9Dn/sguQLm/rHd
EPbdpH1b/VHV+a/vJKsqaXaeSbTtdh+ul3nye766W9b5wo11VjyrtMJX2kCqRVJlze7O1QAXf5w3
0tl+E8HPml4n+39LrvJNOV8mXu4999Fq646IXfHOu+2GNR9kNlvcqoWZhlsQhy0c7qjzsHZHV7J/
gUN+z/0ptJQKZFhTHmk2GxzWs2q3afdtTQ5eKWs1E6U0KBXWVMea6/L3Wf5Qb7N5nRy+QECrCcxy
mfKwpj7SXP6abWabrF6WbpKu5tn6QDV5r3vDhQ1rmh5Nx7lpdsg9fnHVairLuB3itEea7fy5Xcya
/ciONYG3mpZpoWAgnumR5q5erWcP2Xpdzv+GmUqnCTbVwJgN912yI80ir910/GXWuJnzqkXPGHFI
mVQirAkBzSov/jbumneahnMjw5r7WXW4NWeDWRa3q7vdNn+h2hyf3UtAWDM0j27Xu2q5F+jRTAb6
LgOafg+rmbvo6dEEaYwJa6p+zabz9+1Oas3EOtbULOUDmjqg2W4fnc+aDU17+y4HjiUz5EtuPjmn
W/xdc/CYl7bXl+bLrLjLF7Ptbt077mDTgWMp7dXcNueSVdP7vnEHJkUa7rtix/P9l/pmdzvzm9y3
86iXU3AV1jyeR00c25GZZYvF0PGZhjV5n2Zz/Dwf9f2aMqwpgpqb/eHeP0YsrCl7NN0xlM93+/Hp
m0caGA9rqt547rZ3jeJQPCGseTyPfs3ms3WebYtVceff5ItezQFO83f/XOT3s7vmhFxnddXHyYFz
MTDuveejP/NtOdvfi/X1nVszcMwfz6N2p8jZb7t8ly+C8TTc6rCmZn2a3ZnjXE047ntVb/Ps19Y6
k/N8Xh/Pox7Jfk0T1jyeR7ti9TCv1wfn4hOPJS2Dmpv1ykmeo3k8j+6bMTrser+mDmvqPs3enh+e
4wQLa5pjzchjyciwpu2fm5Wbm+3mrW7a9/adQ1gz7dfMtpvmjuuX3aaPkwOHsKZhYQ9Z3a6KRf7Q
P0bAw5oQ1lz+ni0W2ySgycKaPBTPvebpnmxEv2ZeL+uyXIePTzPAKY8118X+KmTwmLcDnMfzSOik
vUWukiJvbk6Xqzr8n8N5dSZRkkxPKKj76URmB1BXZI8ygWdKw0kmibLg9gnFsldJ0kZ23GJmzEeZ
uGcKZ8ylRMn4PaGg7otjRVx4MPPqo0zCM4Xz6gZnwfozin6d+RPZcVSDHWOSnmkgY46zlPYZJcUZ
DN8a6lrrUSblmSBsZjjL955QUDessRAXHlSDHWPSnmlg+Z5AWQv2jKJeZ/5EdhzzJ8lRJuOZTNjM
0C6mPQrqxjOx4UH9SXKMyTZMgrHwkOEsUnlCQd1vxvK48KDa8BhT6pkGVu/hrFh+QkHdZiY2PHRW
7/EpMM+kB8rikYyxQ0HdXSY2PKhmPcYEnikdMEasIfMoqJvKWBEVHtxV1GNM3DPxy1++dyioe8nE
huc0s5asWfczYRM+4XYqjZD6/esPIeUT0HYCE4Cp1kI8gflfbq5vFw2XjMESHmtgmZ9GuoLvUFB3
kYkND6pfjzFJz2QHvBEpg7RHeaWkhYzqOOo66lEm5Zlg4A4YazA8CupuMLHhQXXiMSbtmeTAMnOs
IfMoqJvAxIYH1YbHmIxnCqc2JMMaMo+CuveLhbjwUEptgG2ZBAsbI0P6aapDQd3yJTY8qGY9xpR6
JhGeZRxryDqU13mWUmzHUW14hIkzz6QHLixQiqGeUFC3brEsLjyoNjzGBJ5p4FlKWL+GdCioO7ZE
hgd1x5ZRJu6ZwkkLjXX726GgbtQSG57Tkhb/MpPwTCpsjBLpN5MOBXV/ltHzhmdC3Z9llEl6poGM
Bc7jr55QULdliQwP6rYso0zKM0HYGBXW5YdHQd2NJTY8qGY9xqQ9UzivIVOsc5lHQd2EZfS84ZlQ
N2EZZTKeyYTPZRblUcLPKPZ17rgiO45qw2NMtmXSAxkLgXVh0aG8znOdYzuOarBjTKlnGshFKJSH
JD2hoO6OYnlceFANdoRJMM8UzlhwnCeGPKOY15k/kR1HzUWMMYFnGshFYKVfOxTUXU4iw4O6y0ks
Ez+sNb9ryqqKcuvLqh7y4mbLpon7QhP19kJcWJn4kvx2ENkkVUn1bpIX62x7tyru2mdxJHV2s247
Bdwmv3ycXBngsv0od19WQqdudOvdxv2FJdWy3NbzXV31cXLPKcKHFke6aNmjoFpzz8/zPUyoO5+M
MgnPpAcWLSLdGnQouHV2keFBNfAxJumZBp5fh3Y29SgW1cDjwoNbjTfGpDzTwAo4rBuIPcrrPL9u
9MzlW8Otsxtj0p5JhQdDY52lOhTUzENkeFBteIzJeCZ7+aX1exTU/ERceHBr9saYrGeCy+dMOhTc
mr3I8KCmiceYUs8kB85SWMboUV6rZi+y45RsWDLPNLSwDelnlg4Ft2YvMjyUbFhCwyQHavY01u/P
HQqpmr2OiVTNnuSeaSCZDEjJsA6FVM1ex0SqZk8Kz6QH8pdYs8yjkKrZ65hI1exJ6ZkGUs5YTzbp
UEjV7HVMpGr2pPJMA0/cx0rl7lFeKWMR2XFUGx5j0p5JhQcDZ8e5JxRSpXgdE6lSPGk8UzhjARzr
8t2jnFqK13S8fSyp67OCNLrLuEV4Y0zWMw1kIbBW8e5RTtxL6lwb863hlteNMaWeaSC/YJCeqdWh
4JbXRYaHkrX+xdyZ9bZORHH8q1g8gcQtsy8V8MCOhEAChHiLTOK2UdMkJE65oH54xj5eGjez2AoH
5+He25vmP7854zme/D2LJMAUmIaGc/BOh4K7vI6khQfVhYgx0ZopsLxOYKW8FgVnslpTGu7CuRgT
Ayb+f5811qOgDmwTw4OahmNMHJgCMyI40rOmFuVKk9VEWsVRE2yMSQCT9TeGwmoMQMFdOJcWHtyF
czEmCUzMb/bgnDXWoeAunEsMD2oajjEpYAr4CwRpYN6g4C6cI2nhQfUXYkwamPz+gsRywVuUkf7C
1FEelIa7JC7GZICJBtZXYd2lAAV3SVxieFDTcIzJApPfiyBoAwtAwV0Sx9LCg5qGg0y8OfNChJbE
4Ry13aFca0lcYsVRXYYYE62ZVMhlQHmk16OgDoITw4OahmNMDJgCqzMEyiO9HmXkJj5vH0nQtCqj
ptYYEwemgL9gUdYv9iioA9vE8KC6EDEmAUwBF0KjuOAdCu6SubTw4C6ZizFJYGKB8R3K96MeBXX4
K9LCMx8XwjEpYAps34MzF69HQR3+JoYHNVnHmDQw+V0IgXNKUo+COkhOCw/u8rgYkwGmgFeBY8/2
KBNnRDR/v6uCUFWfSJkwCG7KRE3GMSYLTIFth3H2x+9RUAfMieFBTcYRJkqAKeBFaKThRYuC40U0
pc1o+ZtjojWTDXgRWF9sWxTUoXBieObkRVAGTIF5ETjHk/YoqEPhxPDMKQ1TDkwB3wLnPIMe5Urz
IhIrjupIxJgEMAVWVCgkEwlQJEEd5CaFR85o+ZtjksDE/E3GUSbx9yhzGgQ3TDNa/uaYFDBJf5NZ
rFFeg3KlE5kTK47qNcSYNDAFVlTgbHvfo1xpxoNIqjjukrUYkwEm6u8ZOJsA9yiow9vE8KAm2BiT
BaaQy4CVzBoU1OFtYnhQXYYIEyPApP29DKvJWpQruQyJFUd1GWJMtGZixN8YOBva9Ciow9vE8Mwp
DTMGTNyf8rBcuhYFdXibGJ45pWHGgSk0OwJpYNGioA6CE8OD6kXEmAQwBfaj5EgPARsUTnDuUk1p
s0rDEpgCO03i7EHUo1xp34bEis8qwSpgCqyrEFg9o0FBHd7ytPCgJtgYkwamwIwGgrJ6tkdBHQSn
hQd3EVyMyQAT9fcyg+SFtygTZzSM2uOhKw13eVuMyQKT32WgOCeF9iiow9vE8MwpDXMCTDrQf5CG
DC0K6vA2MTxzSsOc1kyS/P9LYVqUK+3xkFhxVJchxsSAif/fu9v1KKiDYJoWHtQ0HGPiwKT8TUaR
XIYWBXUQnBge1DQcYxLA5HcZqEYamDcoCnUQnBYe3EVwMSYJTCxgvyLNy2tRUIfKIi08qMk6xqSA
Sc4gMTYoV5rxkFhx1DQcY9LAZPyNgXNsa4+COghOCw/uUrgYkwGmwH6TONvm9ig4XkRTGu4itxiT
BabAHg9Yk1QalBkd/dYx4S5yizAJAkzan/JwtgjtUGZ09FvHhLvILcZEaybj9yI4zknVPcpIL2Kw
rUA8y7floLoQMSYGTIEVFTh7hHcouIe+JYYHNQHHmDgwqf97c9AOBffQt8TwoCbgGJMAJhtYBIP0
eKNBmdGhbx3TrJa/CQlMARcCZz/kDmVGh751TLNa/iYUMMlAYsQaUgDKjI6G65lQk3WMSQOTmUEv
a1BGrs4Yed5FW47CXfgWYzLA5PcfNNb0yRYFx39oSsNd0hZjssAUmAuhsIbkDcqVztNMrDhq0oww
SQJM2p+gBNImGw0K7hFtNC08qM5CjInWTDSw4gLrntKg4B7RlhgeVBcixsSAiftTnkFKeQ0K7kFu
ieFBdSFiTByY/C6EwNlJvEPBPchNpYUHNVnHmAQwhc7TxEqMgIJ76FtaeHCXx8WYJDAxf5Nh7fra
oOAeDZcYHtRkHWNSwBRYvWGQnkU1KLhHwyWGZ1bJWgOTCSRGrOFHgzLShQhsbphcfT6rZGyAye9I
EI7k5TUouAfEJYZnVsnYApPwNxnWJLAW5Uq+hU6rOGqaDTKJW0KAye9bCJx5rz3KlfaASKw4qiMR
Y6I1kyCBxkAxkXqUkXMdpjZGUxpq6owxMWDigVPLUO4sHQruoW48LTyoXkOMiQNTwGvA2bawQ8E9
1C0xPKheQ4xJAJP1NxlFGVJ3KPJKuzukVVyiuggxJglMfheB4mxI2KHgHuom08KDmoZjTAqY/C4C
wdnpuEPBPdQtMTyoaTjGpIEptO4Ca2ABKLhHv6WFB3f5W4zJABMNPN3F+pYEKLhHv+m08KAm6xiT
BSavy2Ap1lAQSHBPiUuMDmqujjBRAkx+K4IKlIXuHQruKXGJ4UHN1TEmWjNpv2FBcDat7FGuZFjY
tIqjZuEYEwOmgGGB86S9Q8E9/y0xPKhpOMbEgSmwUQSW4deg4J7/lhge1DQcYxLAZANLcFEe23Yo
uOe/yaTwGFRbI8YkgSlga+AsdO9RrrRpZWLFUdNwjEkBkwzMB0My/BoU3JPfEsODmoZjTBqYAosv
cJ6x9ygTpz0M+49KqjjuGrgYkwEm6k9mHGvI0KBcaQlGYsVRTYYYkwWmwFQGnC2IOhTcM90Sw4Oa
YCNMjABTwGXAWRXdo+BMeGhLm5N/AKc6aEL8jYGzHXKHgnumW2J4UF2GGBMDpoDLgPXAtkHBPdMt
MTyoaTjGxIEp4DKgNRmg4J78lhgeVJchxiSAyfp7GUN6fNGg0CtNi0irOO5KuBiTBKaQf4Bk+bQo
V/IPEis+qwSrgMnvHyiCMnu4Rxm5heRgJ4CEISuUM6N1a45JA1PgIAuD1QwNCo4n0JQ2oxVpjskA
U8ATEFg3eEDBPactMTxzcg6YBSYxg3sKoOCe05YYnjk5B5wAU2h+AlLKa1Bwz2lLDA+qvxBjojUT
DxxkQZCmlDQouKe5JYZnTsmaM2Di//cJjR0K7plvieGZU7LmHJiUv8lwNkLuUHDPfEsMz6yStQCm
wLILjmS8Nii4Z76lhUfMKllLYPK7EMRi3csABfdkuMTwzCpZK2AKLM5AelTbkkTG00LL63xLbsuL
5GHOr7hBgStVQ6leUwLpwV7LYSPRFvZK0Yby5KxSqAEm+tJUalGJLO53z8Vhuzu8fP/jNz+9vC+2
fxzIbeZ+oYqoayGmKJPZPl8+FmXdQPKGkOz4cXZ8OKy3j+vtffaQHx+yMv9jU1dKiezxi+xDyokx
9VuF+2WraNVy5WnvfiDus7tDuTyVxwucggMnG8dp7QCShiA5qyGV1KJjFIbaVMYmlnwcIxmDSFWN
yJnRHSITJBVRNmEUKYir9bHY3uf3juJMyd4K2t57pXzZFuWqeF7U1x1cz0X5UO52m2y5e3py/SP7
+tfvfv3ppx8W3/7y9a/VVe0+8dfu8Ji5j62XRfa8vjM3JLvL15tidZv9uMuOp+VD866/ZPVSdcnn
fAOFUmoz102rHuECWRYH9072Ian+73QsDh9n9T+Pfx/L4umjbH3Mqjpn1JK6Z7v+/VBkfxXr+4ey
WLnck297FVp98kNLhGQmO+ZP+01x/MgPps/Bljun877Mjn+ty6VrsNuMZc+7zWlb5geXu4grp/vR
L2qcjgN2vRga6ZfH9X7vWsaFqXSBO7qk5BLIfrNeulyX1ckhaz+R3e0O9dXy2fKPFaU8Z9WlVux3
LspQq7fliqZce5X2lSPatylZkUH7SjKhfZkY077qrH19YPSa7duKsv+ufdXb9mW3UkO5/Crtq5Pb
ty9Z4LevZbZrXz+YvF779qLqv2tfV6thud3TKKWv0r4quX37kg16+0rGeNe+fjB7vfbtRDX5z9q3
qpW/XHqhfaHBbrPdH67obQUBv5Mdy7w8uvbMs+f97lA2TZp9eN6k/tBpdqG09W5Zbj785fufvvz2
+2++//Grr3//qLqIAKKRHHvxaB6oVv1f79yVkt0XJdSprQm1fkkxpid89fNvVTNerTfo64yTppQ8
GCdpNqEbUhrvharvhSbeC3V0lCQm9MLhKOnr83724as+ZT5xb60PRXY4bT/6GLpfTpf5kml+6y/B
npdwPo5+v6k6d//S2SfuT+PKo9x4Nc0wdzw85ftF8X5fXRPDF1WgyYgwIU060GyuPtdjFtBjXr94
rZkxxjTlfk12rglfY+9Wi7/ydfmGk1WalBjFlJV+TT7QPJXrzeJ9vtnslkNJZSpOxi0VXHPq1xT+
uq/v1ttV8f41qNNsXqG6y8uaTRcekLJeUyu/pnqr6bLD46IyNpxtcd740O6SU0209Wtqj+ax6gjD
lwRNVX1O+zWHveqPw3p1X1SYu+3d+v50eH3Nk77uXPg1ff3obnM6PrQCFzQzfxtZ4tGE79eL3am8
oEm1YsyvST2arvLPi92+2N5tdn8NNZVUwvg1mUfztF+5/LGobs0X6y6VX5OH8pLLJS7Trd5qcmW1
X3PYj+7ypQvj8iHf3herxeG0udju1Brq15QXNQ/VjWBd1f5Su1MiTaju6qLm8bTfH4rj8eK1xISy
nAfqroe57rH843TnQMvDuoC+eanugjK/5rAfVW1Tt/YiX61C17z1a9pLmtU1CT3Jr+ntm4YQr+b+
VHo5qdXar0kvaRbvi+XJtblPk1rCA5zsYjxPh3tQ9Ned+jWH/aiqtuMsD/my9GhyxZRRxK8pLt3f
93n5sHM+93qZby5yCksDnMN+9JQvF5siP1TDffhHsbpYd+bXVJfu7/8Uh92iMb8vcnKipF9z2I/q
KZWLP0/FqeG7pKkVZwFNc1ET7nBTNe2w7sfyUORPdYrPJt2PDB32o4GkX1P7NelA87Rdv1+Wm7Mx
w8h2p8yr6b6vOskpmsN+9Fy1UV91v6bya4pLmsOaX7gXS+rXlEPNxGuJBzTV5bHisShdtncOgOui
F+vOApr6smZ+2FcPkh5P+0ucjBEViKfxj5Mf/nK3pIOnjSjxa1p/3UFzfE5mZKi52bajkOC1ZLVf
kw40uQJT5pht3d/OjlmXrz/MyC2lzeJWw5gbYpT5ZnFc32/bL7BlcXhab/Oyalz3HfYhgzczKrMP
f23eK1YfDUWb/abdi748u9sB0FT9pFjVj0Hv1psi++Q5P3zifvqkeuMZviN/sns+vmv+vbpxb/ql
2Us9Vt8Wy7Lv4be16OG0PRNd/XFz3C0fbzP4/ao6Nzc3fmk+XbpYDWXFLaEg2z2gakSPtZXmgl7d
NrPvv8pI/WKckTtpVlL6pSR8Xzn77pK1z7jcFVmswMOoRo1ZUT6Qynqpx+DUL6penMKxOLhLBXRd
ZbeOsfrs78X2l/od5yrsjuW702m9ynj+B6FCFu+M4OKd4Ll4l5s7+e6Oc04lZ3cFsf7i9Hk4Xgei
7Qjlro5JUYg7wnKhCqr9eualaoOn+wb+PBquWrXfVFkyh91m43rDB3t/s8KHb57un8oP3pZIBZRo
02pAyasXXXr1KAnVYH9YPzmf6KwG5XJ/Sy27ocrc0BtKb5Xi3A9MKQwrXst/+u7ziyqDzuLXZBM0
3/QSaW65BD0+5tJ29iG96S9udiZLK0zZyIqRsuyVLB/K6luhQHZUR3Sy/JWsGMq6IDSyaqSseCUr
B7LcuvCC7MCflHSCbcpY3DZltW1KtZb9syk/lYm5pjTJNT0XtUmuaY14wTQVhZSUaXPrLYCRMQbS
a0NO+TXpGCO21RSGUe3XZCOMWCZAk1JqpPVr8tGmqdOmnFDi1xRjTFNZa3JjuLDcrynHm5EO1HAb
aCM1xow0raY2oTbSU41D5tc0U4zDyt8Ufk07xTjMqApc85xMNA4Z82vSScZhxjT3a7JJxmFGJfFr
8gnGYZZJKrVfU0wy+cKccqrJJ/2aaqrJF2gjPcnkC9fdTDH5KqdL+DXtVUy+M01Bxpt87qWEsX5N
OsnkC+YlwaaafMKvyQOa0N9TjMNzTTHmYWOnybkJcMpJZmTGKfVrqilmZEaVDbS7nmJGRjTNVczI
c0071Yz0t7skU00+6tekU0y+jCqq/ZpsismXUU2IX5NPNPloIJ5iislX3+T8mnLMw/AzE9qvqSYa
h5nya+qrGIfnmmbMQ/uzB9d+TTvRjOT+NlLDfsRN2Iyk1cxCLuDDSVP324ncWfs2eJQwnbyfYv62
FEGglJET7wWx5vW8dnJjuZvX7jg2+eHeO/Weasm7ee2UMhmd195QypoS9WgDdXaktp8Jc6lTKhPm
utRUJsxNBFKZMBeopjJhLlBNZcLcTSCRSWMus0plwlypmsqEua1AKtMM8zjqKQmpTDPM43qGeRz1
uIRUphnmcT3DPI56bkIq0wzzuJlhHv+XuXPJgRoGguhVWLIB2Y4dOwhxFyRgxwa4v/iUCR8Rd4PC
mz5BnttKjafS5R4BdRwds+BlCqjj6LwFL1NAHR8BdXwE1HF0PIOXKaCOHwF1HJ3m4GUKqOPoWAcv
U0AdPwLq+BFQx9EpED6mnOLpeEbHQXiZ4ul4TvF0PKNzIbxM8XQ8p3g6ntEBESbT/o0JHRDhZQql
42LKoXRcTOg8CS9TKB2fTKF0XEzo+AkvUygdn0yhdFxM6MwKL1NAHc8BdRwdceFlCqjj6KwLL1NA
HS8BdRwdeuFlCqjjJaCOo9MvvEwBdXwLqOPoGAwvU0Ad3wLqODoPw8sUUMe3gDqODsbwMgXU8S2g
jqMTMrxMAXW8BtRxdFSGlymgjteAOl4D6ngNqOM1oI7XgDqOjunwMgXU8RZQx1tAHW8BdbwF1PEW
UMdbQB1vAXW8BdTxPaCO7wF1fA+o47HynGKKlecUU6w8p5hi5TnFFCvPKaZYeU4xxcpziilWnlNM
sfKcYoqV5xRTrDynmGLlOcUUK88pplh5TjHFynOKKVaeU0yx8pxiipXnFFOsPKeYYuU5J9PluP/+
/0fH/0piqHfyjez1rjuULncxXc/xb+W/D4X/BSVWLFNMsWKZQ0zH5ZbtNUFbJpRYaUwxxUpjTqa/
vPdq5KP8du9V/3Hv1eVc7K3Wn+Y5tzq8914dorzU5Vz3/z5N/VeUm4R5N7ZHT2MDlwZTS2JaCPM4
mM34jtKRzZhPYxOUFlMW00JyM/QrKZTCBidd5SlscNJiKmIq11uWoF/J7yiGb5Gd70/3LRx1JCym
TUztcjN6osRMKGwA0lke9ORrMVUxjev3p1RoyybKwbw/ehqbaLSYmpjy9cmsUe+PUNAgY0++8pDG
sMm0i6leb9no0JYJBc0vestDyrDJ1MXUr4/cfUBbJhQ0tugtD2kDm0zjG1NJ129Z36EtEwqbVtx9
5SHF2mQ6xHTtMuydOphPFMZlmE9j44dLpu1FSmLaF5uRic34gUK4DOfT2DyhxZTFdO0ytB1xGU4U
NEbYi6s8aIzQZCpiWrgMG7VlQkHTg97ykF6EybSJ6dqLqJjkCQUNDXrLQ3oRJlMV01j8/aXeMqGw
WcHuKg+bFbSYmpjy4i2jtmyilHsOFs6FkzJsMu1iqgv7FfHyfqA0ZjP0NDbNZzF1MfXFkQFpf/mB
Mu7ZjOFbOCqdFtP4xtTSoz9MnChsKs9ZHtQ/sJgOMW3XW9YpMZsof+kffF34t8GOX9bc8uFfMuoc
GEw5iWnhHFSkWeQHyj3OQc++hZPOgcmUxbRyDhDn+kRBg3LO8qBBOZOpiKks+nsQ5/oHysa8P3oa
mnwzmTYxLTyBDHkCEwUNvPXNVx7UE7CYqpjG9fuToZPbREFzbs7yoDk3k6mJKV9v2QGd3CYKGm/z
lod0DkymXUzXzsGOHSyEgqbaevKVBxVri6mLaeEvdMhfmChomM1bHlSsLabxjWmk6y0byCfxEwXN
sHnLg4q1xXSIaVsYR5QwThTyQL0fvvKQYm0xlSSma8eiUls2Udiom7M8pFibTFlMx6IJGTrkTxQ2
6uYrDxt1s5iKmBa+BvV5faKg8wZ79pWHFGuTaRNTWxw/oP9lEwUdM+gtDynWJlMV01hsGfVbJhR0
uqCvPBsakjOZmpjyQhip3zKhoEMFe/KVBxVri2kXU73eMib2eaKgswS95UHF2mLqYlp1V1BvmVDQ
EYLe8qBibTGNb0w5Xb9l2P+yiZJv+RJmHpX1NDRKZzIdYlqkMxr0WX+isKMAneUhZdhi2pKY9sf/
Sn1HuacTw7twUmBNpiymY/GNCxKziYKO9OvZVR40JGcyFTGVa1+QuWnlBwrTifH9aaTAmkybmBad
GAf09eo7yv53m/Fbd59/yZH8ha2KadVdgaTXf6Acf7cN/9ikPJ+GBt9MpiamvPhPQ/2mTJSbshTO
hZOegMm0i2mRpdihb7UThZ2D5ywP6QmYTF1Mi3sdKiVmQkHH3/XqKw/qCVhM4xtTTQuzlNqyiXKP
J+BdOOkJmEyHmBb3Qlbok95EQcfYectDyrDFVJOYrj2BTtk4EwWdXtebrzyoDFtMWUwL54C5MOpE
QYfWOcuDxuNMpiKmsvgKC/1ZnSjorDpveVCxtpg2MbWF2UMJ40T5S3/hH82e709DZdhiqmIaj4+t
fUe5x2VwLjxUSK42MS1chgKd8iYKOkTOWx5UYC2mXUyL/gTqYP4d5Z57HXryLZx0GUymLqZV5wEU
6Jwo6DA4b3lQl8FiGt+Y+sJloJpFJgo6A85bHlSGLaZDTNvCfqWO3BPlL29/+KPk2Um8+TQ2/mYw
tSSmhctQoc6DiYLOcvOWB5VhiymL6bjeMurCwokyEvL+zKexwTaLqYhpkZWgjgzfUdDjbfGVB/UP
LKZNTKv+BOjz33eUv/QPfutP8C8ZlVaLqYppPPqq9h8o6MHWVx40rGYyNTGtbn+ADtvfUSIdbCcT
G1azmHYx1ccHwyYKOqLNW55ILkTrYlq4EAd1pBAKOsjNWx5UrC2m8ZWpprQwXqnfMqGgg9zMQ7KY
2LCaxXSI6dqFaI3aMqGg49685UHFeslUX6QkpodPujhR0HFv3vKgYm0xZTEdAbZMKPkmr6K4Fp5R
GbaYipjKo5uQf6DclJJwLhwVWItpE1O73gxmUsyJwg5yc5YHFViLqYppPLpX6ERhB7lVV3nYsJrF
1MSUH32VzYmCDnLzlgd1LCymXUyLdAZzK/kPlJs6IpwLR2XYYupi6tebwVzRe6Kg49685Qklw+Mb
05Ye7fidKOy4N2d5QsnwIabtessYX/1EYce9OcuDehEGU05iWngRTBPyicKOe3OWJ5JY5yym49Hd
fScKOu7NWZ4aSaxzEdO1YzES9I/rO8pNjoVz4ZFkOG9iWvRNFKRv4gfKX/ZN/FOr//k0Nv5mMVUx
jUdHzU4UdCicszxs/M1iamJapDOYCzl+oNxzB4R34ajLYDHtYlr0RVTIC/+OcpPLUHwLR10Gi6mL
adHxwAw3/YFyzzxN78JR6bSYxjemfZWooP7ZCIUd+uYsD+ofWEyHmLaFmEH+wXeUmxIV1bfwSM5A
SWLaF5sBHZMnCjv0zVmeSM5AyWI6Hv8tdqKwQ9985WGDbRZTEVN59CW3Jwo79M1Znkj+QdnEtOh4
YC6JPlHYoW/O8oQS6yqmsXjLKGGcKH95u8NvURn3kkcoAW5iWt0eSR0phMIOcnOWJ5QA72Kqj+4I
P1HQQW7e8oQS4C6mRa8Dc5XAD5SbXAjnwlEXwmIa35iOtGgcpiRPKOiINm95QsnwIaZFooK5fegH
CuNCzKexwTaDaUtiWrgQTLzlRGGHrznLE8mF2LKYVvc6QO+PUFq6KSvRPAtvbGTNYipiWvgLBfpY
9x3lps4D58JR58Bi2sS0cA6oLtSJwg5Mc5bn7wS2pidPc3qenpfnZbxotaT+8tmrfJTneR/P8/Oc
X+z7tp1gHz99ePv6/bN3b75yVQ9WFdZ4/BHuO8pfmgdXr9DuWnhGLQSLqYlpEYTI1I/LZ+bOZddp
GIqiv9IhA0CO7cROhwyQmMAnIF4CBAJ0uTwGfDzlnvSUV+zdUna3hIQQ0Cw7t7vOqZePoVB7pqHT
Qy0h9JgmY2ocDUl7/xgKtWcaOj3UJO4xFWMqDamctWowFGpnte4ngjFxrbUeU71higK6xB7lyDYV
v5W48SFTA7jHNBtT41BITge3A8qZigfgwKnR2mHKwZgaxYNIuhkLCrez2oRNDzVae0yDMa0XD1Im
yQ0LCrX/Gjg9XBOtxxSNKTbqpaQtDHuUM5UYwIFTSww9pmRMreMYSMuAPcqZ5IaIDZwasD2mbEz1
8pXQBYXbWQ2bHq491mMajWm4/BbUBYXbfw2cHmp9occ0GdN6fWFm7brfo5xJgcjYwKmVgx5TMabG
5oPKijxD6XVWi2e6GXY1rj3WY7IH5DE0Pn9Y7wxD4fZMA6eHGrA9ptmYGgoEy1rZo5ynfjCBA2fW
D3pMYzCmRlOJQLoZCwq1Zxo6Pcz6QZdpMKa5cctI67cFhdpZDZweqonWZYrG1KgfsDZnLyjczmoJ
mx5qlaHHlIxpvPwJnAsKt/8aOD3MsO4yZWNqtakgFVYXFG7/NWx6uL5aj2k0plabCtLyfY9ypoMW
MjZwagz3mCZjypffVb+gcPuvgdNDjeEeUzGmcvm9QnsUjgixvxo1YHtM9rBdQ2MjN+tmLCj/e3/C
ch2uY9Zjmo1pvb5QI2uxvaCcp76ADpwamk2mcRuCMU2NpuOUnXEHlEK4GYerMSsHXabBmNYrB7VQ
zuhzFG7PtAmaHq491mOKxnTxwxMchdszDZweauWgx5SMqVE54DyGOgq3Zxo4PdTKQY8pG1O9tLVy
QDnSf/hz5TYgQ54CNYB7TKMxrVcDCufY3wPKmaoB4MCp0dpjmoypUQ2IlNKMo3A7nIHTQ43WHlMx
pkY1ILAW24Yi1OHswEStGfSY6g3T0KgZcAo4B5QjawYndYj3q3HVsh7TbEyNYxE4XaMchdsHDZwe
agx3mIZgTI39CwMp8hYUbh+0CZseagz3mAZjWq9CjKwqxIISz3R4AjbwqFRfGKIxxUt3sD6gMMyG
w9WoAdtjSsbUqBxwNtM7CrfDWcKmhxqwPaZsTI09BwMrzAyF2+EMmx6uYNZjGo2pUWXgyPqOwu1w
Bk4PtRbRY5qMKTfMb8pXeo4i1OHswEQN6x5TMabSsIxYn2ULCmNngl+Nq6H1mOxhO69XGerAWuUZ
ilDvMmfiamg9ptmY0qVPpT2gnGkXAzhwpYCNwZimS7cgchRuhzNweqhVhh7TYEyNRhGFtGRYULgd
zrDpEZLVdkzRmBqWBGfLvaNw+6AlbHqoFYseUzKm8dJii6Nwu6WB00MN6x5TNqb1ikVNpIrFHuWf
9zpgQ6Zqal2m0ZiGxiMtqQqxoHD7pBVseqgB3GOajCk39rKS9uItKNxuauD0UAO4x1SMqTT8Ita7
bEE5UxUCHDi1CtFjssfs0tjrkFnvH0Ph9kkbsemhxnCPaTamdPnvavcoZ6pCgAOnBmyHKQVjargU
nIM3HYXbAQ2cHmoVosc0GFNjr8PIumWGwu2Whk1PVYrhFI2psSOCoyQ7CrdbGjg91CpEjykZ03oV
orA+pfYoR54IeaR4ebiOVABnY1qvLORAOVLdUXp90ubzLBaWq3Flth7TaEzr9QXWrTCQXv+zNJQz
3YzJrtcJzWleWykuv9/5Mek/pjuMI37V3BllOPMopdan1Zimb8ugHv94kccv339+cfXu/dW3Bw/v
P/r29cW7p1dhu9n9gx8/NzdnV6Zp8+HJszcvrm9+Cse7YfeTdnvz8dXV63dvXr97uXn15OOrzfWT
p29tDuPmzb3NrVxyvPmbF7t/G6ex7H46rz992P0h7P7r+6vrZ5+uP/6FMiejLMdR1nQM4zDdMMY0
J2ccUh5QxtkYK8L4/PXHF+9ePnm5o/jtlcZhm5dompFX2r/MZv/Xmy+vr1/ZYH4a4O9XyQtvCQG5
SoN3Wvb4lTAgr3Qi77zs0SghIldZ443bELY52isl5JVO4Y3bkLZ54c3IVRq88zYtd2pEXulE3nk5
trWEI5Mgh/mXJAh3y7x7l+043j65ermaBENNP73LYqr9d1ncDmF5b5R1XWr1mIgx1PVPyTHUYz8o
f8E5sUD414+t8WdjbH0KqEoUyETtyYUyMR9UUSZmvRBlYj6xokzMJ1aUibl8Q5mY39+gTMzHXJSJ
WWdEmQRzPArmOFXBQpkEczwK5jjV2EKZBHOcqm6hTII5Tu0NhjIJ5jjV4UKZBHOcKnOhTII5Tu0p
hjIJ5jjV6kKZBHM8CeY4VQIDmai9yFAmwRyn2mAok2COZ8Ecp8pjKJNgjlMtMpRJMMepLc9AJqpO
hjIJ5ji19xnKJJjjVK8MZRLMcapghjIJ5ji1ZxrKJJjjVCsNZRLMcaqehjIJ5ji1ixrKJJjjVE8N
ZRLM8Ukwx6laG8hE7auGMgnmONVvQ5kEc7wI5jhVh0OZBHOc6sWhTII5Tm3fBjJRBTmUSTDHqT3f
UCbBHKeaciiTYI5XwRyn6nUok2COV8Ecp9p4KJNgjs+COT4L5vgsmONU/Q9lEsxxarM5lEkwx2e9
HK/UDnUok16O16CX45Xa0A5l0svxSu1shzLp5XgNejleg16OV0Gfswr6nFXQ56yCPmcV9DmroM9Z
BX3OKuhzVkGfswr6nFXQ56yCPmcV9DmroM9ZBX3OKuhzVkGfswr6nFXL5xyNaf34uSH+/14Ev6Ac
ebD9X88CgweuJWhOxjSs3YwY5v9/JOPPKFpepjFpeZnFmPLqLRsC65YZipaOaUxCOuaw+2VMZe2W
1bJyunDMq7cs5hPumbMIaZjOJKRhOpOQhulMQhqmMwlpmM4kpGE6k5CG6UxCGqYzCWmYziSkYTqT
kIbpTEIapjMJaZjOJKRhOpOQhulMQhqmMwlpmM4kpGE6k5CG6UxCGqYzCWmYziSkYTqTkIbpTEIa
pjMJaZjOJKRhOpOQhulMQhqmMwlpmAemclrd9oQjuP2aQpqlMwlpls4kpFk6k5Bm6UxCmqUzCWmW
ziSkWTqTkGbpTEKapTMJaZbOJKRZOpOQZulMQpqlMwlpls4kpFk6k5Bm6UxCmqUzCWmWO6ZoTOvd
Y+M0ML7edRQhu9KZhOzKHVMypnn9G/n0/7v4/YwiJFU6k5BUuWPKxhTXe0qF/9/j/BeUdNrD7FGb
kPxqQpZk3IZqTKt9YedhZUPYuL49Yjxle4SzcO3IhM0PNYdBJmoOg0zUZTTENHPtSJCJGsggE3UZ
DTJRl9EgEzW/QSbqMhpk0svxmWtHgkx6OT5z7UiMiWtHgkyCOc61I0EmwRzn2pEgk2COc+1IkEkw
x7l2JMgkmONcOxJkEsxxrh0JMgnmONeOBJkEc5xrR4JMgjnO7XYJMgnmOFemBJkEc5xrVYJMgjnO
7XYJMgnmOFevBJkEc5yrWX5n7kx2nIihKPorWbJAqGyXa+glCyQ28AmISYBADepuhgUfT9F2HCYn
lwhdzhJ1Uz7ll75xHJ96IhMwx72apcgEzHGvZikyAXPcq1mKTMAc92qWIhMwx72apcgEzHGvZiky
AXPcq1mKTMAc92qWIhMwx72apcgEzHGvZikyAXPcq1mKTMAc92qWIhMwx72apcgEzHGvZikyAXPc
q2GKTMAc92qYIhMwx70apsgEzHGvhikyAXPcq2GKTMAc92qYIhMwx70apsgEzHGvhikyAXPcq2GK
TMAc93a7FJmAOe71MUUmYI57xUyRCZjj3m6XIhMwx70ep8gEzHGgz7kCfc4V6HOuPJ8zDDyfMww8
nzMMPJ8zDCyfcy1MY78bSrY8O+KAkv/NgwjEGycFcxgK09x/kMdkKsYeZbEUo46GMi5DuGUKQ7cY
abT0OTmgBE8xymgohTLEwpT6xUiTqRgFxWtORm16rCvfU0ypME3dko2jq2QV5czntf7096PfuHUp
e4ppLExrvxjTaCpGQfEakNr0eA3IU0y5MMV+a7RseRBbQ/GKj+L0oGJ4Kkz5yPptNZWsoHh9R3F6
rPsOp5jmwrT0SxYs3SAbildz1KbHqzmeYIprYer2jFxjb10R+yWLZ9Wssnj1xlGbH2tai0zWtBaZ
rItmkcka2yKTNbZFJusaW2Sy5rfG5NUbRSbrYltkAua4V28UmYA57tUbRSZgjnv1RpEJmONevVFj
8uqNIhMwx716o8gEzHGv3igykXI8DYXpSDcE1yZcRfFajeL0kOI7hcJ0ZKt7GUwlKyhemVGbHq/M
eIopFqbYL5nrq9aK4nUYxekhhXVKhSn3SxZdJSsoKHWxMqHUxTQWpiNb3dm01V1RUMZiZUIZiykX
ptA/9hBMXyjtUeK/+Q5dvHFrDJ9gGlNhGr/Wm3ry/SJPXr3/9PLq8v3V14ePHjz++uXl5bOr4WK3
/cL3+dxKlMMcdh+ePn/78ua2OvneMOyu7+6uX1+9uXz75vLV7vXT69e7m6fP3t3eU5h2b+/v7sQ1
Dbc/ebn9bgxj3Kp28/HD9o9h+6/vr26ef7y5/hPlUiizQvnizfXLy1dPX20Uv15p3J+QmyflSvvL
7PY/3n1+c/O63szhBn8dZd6flphnZZQjvMvFOJQrLcqVzuON4WLMZZRVGaXPG9N+fpdBudKZvGl/
tGEJyiiHV+04rNOPr9rh3py2V+3G8e7p1avfX7Up3r5qwxzG9qqd5lV50cZpf35piX8LOY+//GkF
5U9ribFBxmEJKuVcKJNCeaT080VaypWkKDmz9PPFuJZRpCg4Vvp1VUq/xNBmNcScpFld9lHfbzi4
Dsuf3+LWaei+ya3T8Pdvcwcar+N2rD/ZgcnruIlM1hWcxuR13EQm68dtkcm6SSoyWRd8IpP1c7fI
ZN0kFZmsH8BFJmCOex03kYmX48HruIlMvBwPXsdNZOLlePD2LBSZeDkevEqcyMTL8TDwcjx4DTqN
yduzUGQC5rhXuBOZgDnuNe9EJmCOe3sWikzAHPeKeiITMMe9xp7IBMxxb89CkQmY4151T2QC5rjX
4ROZgDnu7VkoMgFz3CvziUzAHPdKfSITMMe9Up/IBMxxr9QnMgFz3Cv1iUzAHPdKfSITMMe9Up/I
BMxxr9QnMgFz3Cv1iUzAHPdKfSITMMe9Up/IBMxxr9QnMgFz3Cv1iUzAHPf2LBSZgDnutftEJmCO
ezU/kQmY496ehSITMMe9vp/IBMxxr/gnMgFz3NuzUGQC5rjXABSZgDk+A3PcKwyKTMAc9/YsFJmA
Oe7tWSgyAXPc27NQY/L2LBSZgDnu7VkoMgFz3NuzUGQC5ri3Z6HIBMxxoM8ZgD5nAPqcAehzBqDP
GYA+ZwD6nAHocwagzxmAPmcA+pwB6HNGoM8ZgT5nBPqcEehzRqDPGYE+Z2T5nFNhmrsPVM2L47mB
P6D8oxaH4o2jgnm+Zeq3OAxzdnQF+wHlH7U4FG8clbRLYUrdYizLYipGRRk9xSijoRTKnApT96FU
qfcI4dyvRT6rGBUFZU5WJpQ5WZlQ5mRlQpmTlQllTlYmlDlZmVDmZGVCmZOVCWVOViaUOVmZUOZk
ZUKZk5UJZU5WJpQ5WZlQ5mRlQpmTlQllTlYmlDlZmVDmZGVCmZOVCWVOViaUOVmZUOZkZUKZk5UJ
ZU5WJpQ5WZlQ5mRlQpmTlQllTlYmlDmZx8I09/fbsmm/bY/ylzvR32/8ahvtcrvnHFb9llGRXL6k
yEf2oMfZVIaK4tmDrqOhrMY8Fab+HvQ6OfrNHFBQMmNlQsmMeS1M3Z3qsVOw1C1YOq9gBQRlMFYm
lMFYmVAGY2VCGYyVCWUwViaQwbheDENh6mf2FCxvoAcUx/eGbTSQkrgxhcI0HVnNOHrs/YAye4pR
RgM5hhtTLEz9TrBDspw1aSggtbAxgdTCjSkXpm4n2GxZzTQQr0+YtcmxLolFJmsIi0zW3QaRybo6
FpmsmX2KaSpM65HzgZYjaQ3FqxFq0+PVCE8xzYUp9jfSRkeP/AOK1x4UpweV2kthyv2ShWwqWUHx
SoPi9JDCOuXCtHT3eTonEuPcrViczypZRQG5go0J5Ao2JpAr2JhArmBjArmCjQnkCjYmkCvYmECu
YGMCuYKNCeQKNiaQK7hnSiBXsDGBXMHGBHIFGxPIFWxMIFewMYFcwcYEcgUbE6j344GJl+MJpBY2
JlDvxwMTMMdBJmJjAvV+PDABcxwkLjYmUO/HxgQyGBsTyGBsTCCDsTGBDMbGBDIYGxPIYGxMIIOx
MYEMxsYEMhgbE8hgbEwgg7ExgQzGxgQyGBsTyGBsTCCDsTGBDMbGBDIYGxPIYGxMIIOxMYEMxsYE
MhgbE8hgbEwgg7ExgQzGxgQyGBsTyGDcmKbClPqHg1fTSe2KAhIXGxNIXNyY5sJ05Dx3cpWsoIA6
PR6YUKm9FKb+CcJlNh1Hqyh5sBzB34+GiuG1MMX/70PsUdLfFeN3yXbUbtkarSeYxqEw5b4JMVpc
54bibb4oTo81Wk8xhcK0dEsWginGKorXWPzG3pn1Nk9EYZhrfoXFDSDxmdkXC5BYBRICBOIKIZQm
BiIgKVlYJH48M3HSwJmmneMzHnxBhUS/NvV5/J7xO/s4T566OxafY1IDE7+VMlNp88oZpO42RZkn
TtVBjEymqladyVS1FZzJVNWzM5mqevZzTHpgcre34opKm1fOKHXfr5gnT91NkM8xmYGJ30yZ55X2
iJ5R6r5WMVOeWbm2HZjU7ZQpVillA0rd3Y+Z8szJrLUZmOzNltGNhPmbCfOjEnYGqbvlUeSJU9Wq
85jq7n3MZKo6vpHJVNWzM5mqenYmU9WWdiZTVfPOZKpq3plMVVvamUwz9PG62yLzmOpui8xkmqGP
190Wmck0Qx+vuy0yk2lWPm4HpifmDWsdPXRGqbsbMlOeWdm3G5j87W6lN5VSdkJRdTdBZsmj6m6C
fI7JD0xPnDwiKs3OX1CQs4uPTvXm3/hsbFiy8KmBSd9KhnU3UiFu50Lgk3FFqbuZMVOe2djwlanu
ZsZMptm0oq9MdTczZjLNxo+vTHU3M2YyzaYVfWWqu5kxk2k2regrU93NjJlMM/TxupsZM5lm6ON1
NzNmMs3Qx+tuZsxkmqGP193MmMk0Qx+vu5kxk2mGPl53M2Mm06x8XJyYJLvd7zc11kZeUeruYcyU
Z1b2LQcmeXsFcpWhmn+gFHpLQeaNz8qP1cBkbg91ilrPzxml0FsKMm98VgarByZ/Oxm6xrjzFUUV
2q+Sd+N1tw0+x2QGpv/6LOsrSt33HWbKM6sGrx2Y9O2UVVkO+A8UU+f5GaLV3RD4DJMwA5PDTgIU
Xeh3Ban7qsM8cfSchh/OTHU3DWYyzan9emGakxufmeruNMxkmlNz98xUd8thJtMMzbvu2xIzmWbo
43X3HmYyzdDH625DzGSaoY/X3YaYyTRDH6+7DfE5JjswPfEWcVdjZ/0Vpe4rGDPlmZV9DyvZLLs9
FOkr9fHPKHVfwZgpz6xc2w9M8vZb4aq8JP6KUvcljJnyzMmslRyYzF/n4Y3v4kW++2H7W7/bbHd/
ffLZR5//9Ue/uduxrgkfiKqHRCohRXO/WP7UH0451C0LeXqj2f+4W29+Wm9+aH5c7H9sDou7n0/3
xE3z03vNa8IYcfpNHz7LvXcht4fjffgHC3+63R2Wx8P+MUo3UNocytV6329+WPwQKOCVRMfOOXA5
V7pcprn8uvl9ffjxfDPXG0yjaDVE8TlRnuTVw+hr2OF3Ot3mrw+//PL8h2+9eOewvO+4Fy03ruUt
550xUnbNZhsen/39drM/ab/eLJaH9W/rw58xgXd9s/j+0O8afXmu3ogE8eL98vA4gxkY+MBwon8W
4nrBtm0fuaYdrinGXHO7OY3LrZowMnf70hJz6d8X68jaiAe3ueu/3+76oOQQ9bFAfgikMIGe1EWe
35atwm637xfrn7/b3vebwSPe3x5/XoXUHi5XCKmN3x52wQJCOl8LpXMfiufyxyb+ZRDn+stX9819
MNUX/9Bvsfplvd+H7y4fa4KVrJd/vt6Eu274tXCcrhaViSiP8PKB14zR4Kk8Xi9tMZe+5FFl5zEE
0kMghwn0XB7PZcOPueZTujxc2jPMpS+6OIQu3A6Byj33shN6uGbx5/56adRzHz4ZWI9RmkO0zcPu
z3+EO9Vthx/75i5UdT/stsfNqrkLswH74/19sNh9/LPvj7vwiV1s1jxm4epSjr16eKTPbF8d1j//
HCPEn7+IP29+2a4u/mzNQ6qu/hzu/fvd9pd/PNswnu6YGOJpXDwu7aiApmNyCGiQAb0bFdB2TA0B
LS6g0H5UQHdxCO9wAaVgowJeXiCsvEcGdHxMQMXOr7/VjOECKiVGBeTnl7dqxnEBNZOjAl5afpoJ
ZECjRgUM9smGgBIX0Ag9KqDqOB8CIo3GuFFOo3THxRAQ6TRWjXIaZTouh4BIp3FslNMoe16JohnS
aZwZ5TTKdVwPAZFO4+Uop1H+vFZAM6TTeD/KafRlPlxzpNNwpkdZjeYdd0NEpNVwzkd5jRYd90NE
pNdwbkeZjZadYENEpNlwIUe5jVad4ENEpNtw4UfZjdadEENEpN1wqUf5jTadkENEpN9wxUcZjrad
UENEpOFwZUc5jnbnZrPmSMfhWo2yHO3PEwSaIy2HGzbCc2THLuPbWmA9x5gRnhMi8vPwrBZYz7Ei
03NeKv8V+kr77XG37NvQl3lpmi/OmLW2eamJX+xf/+eCK6GMCN8wLoU1UsrwY8tE+Dx76f+vqb++
EYwL5kIZZiyWYS5aE5+cVX93/OGvH8MIEmNh04MOZXixO40m3Pe79Xa1Xjb75Y/96hjHoT7o+qWV
XAtjzMr/de04P3zk2+bd1WroOW+GTv2n2/CRw+JwGq789dgf+zCSHGO8LRlr2enrjebw533/9hfn
iK9df/P6y9XAv/q57+/j3x834TFuNv0fh+Y0Dt9ceLx/cP6US9qWx15BePy/3165fPNjH7ju+sXh
r48v3wUeIdTKs6VjS23++mNxv/42qHIMhhAG69arromm7ZvXlosI2LwdHOXnff968+AZp4H3QNWK
lndKyQl44nj4d3EIabfqV9+d6QJcGBjf/xRc7/D2Z9tN3+y/Cz97+8QLIETHbKsUp0AAUYRg40Up
wYMUJfAmECEzTmsKRCKKoohC50GLogCE7JhrmRUUCCiK4uNFKcGDFUXxBEK6VmhScU1E0RRR6Dxo
UTSAUDEzyioKBBTFmvGilODBimJNAhEyYyTpGYaiOILRnnkMhQcrioNGq2NmvCFlBori7XhRSvBg
RfE2gZC+5bxkSZGMYLQleJCiBF4AYQJHy5RJGpWSWx64+sOLL794/6/QZbtfhDnY7rxi48V5ocuL
xUrdqeXy+xd3asVf8JXhL5Y9My+UXfjVYqkDsw3oKiRfrawJH2cPF4stz6/PK0B298vmlTEXf+XR
GzL2rOqNGzr9uN31oVG+P3wX28TH+4BpubVmuTBMLW5lXBEMszQbNvvKpECmNc6ds08Gevcu7ikK
Inz/8zGsIlktwjqSxb7vmjd/W+zejJ95cx/7Q+3q7gYJXZpdv9p+Fzr63zYfRYjQq3ngiGSLMEpx
WjfSN/GTL8In9ymLb4XylAcTFhtNqFJK8GCLimYJBBetlXKC3ue03eYJwXO6zf5Wt9lE+1fcUbKa
lDJCZ6gED7qUKQBhY1HXigQBRbGEOroED1YUyxMI6YM3kpqUiSiEzlAJHrQoGkC4mBnHLQUCiKIY
oW4vwYMUJfAmECEznlZcoSicUHOV4MGKwhmA8F0k9KTxQCiKIHSGSvBgRRE2gVCBT5J6ZFAUSTDa
EjxYUSQwWh7/a6UnNbSgKHZ8lVyEByuKVQlEyIxWpOKaiOIootB50KI4AMFjZoyVFAggiibUPkV4
kKIE3hRCtE76CVrak3YRpgTP6SLcmlnjPBZ1x0hD8bCUEarzIjzYUsYZgBBdIPSaVNShKIT5kgtP
zZor8CYQioc2RVlRxncRivCgRdEAQnacU9sUUBQjxotSggcrihEJRMiMdCSIRBRCzVWCBy2KARAq
ZsY40lA8FIUwX1KEByuKtwlEyIxTpB4tEMUQ5kuK8CBFCbwAQsfMeE8a+4CiSEKVXIIHK4pkCYQK
dILUT0lEIfSbSvCgRVEAwpzmPLV5bEZDhebg+o/TjMaN2zf/dI/D7oi4e1xg7H0am4bzrdIsL1zG
fe7C1undJvzg8tlwu6ePJIF5kEDZxwLrZwNbQRAYExgrsBVpONEq6fPCZdwnQmAZGnF6gm7RtP25
CcFJ/bkTl6CtTIT5JSzgKcKDLt4mgVABwpHGiKEontBKL8GDFcULAGFjZqQhQQBRLBu/prYID1KU
wJtAhMwo2jw4FIVLiih0HqwoHEK4jqtWSdLKRCiKIrS9SvBgRVEqgVCqNbS57EQUwph1CR60KA5A
+JgZa0huD0UxhDGiEjxYUYxOIEJmHG3gPBGFYLQleNCiAKMVgUO3TJCeYSCK4+Or5CI8SFECbwKh
dMtpA1WJKOMbb0V40KIYAMFPLUjGJ2hpT9pFmBKc0kWIXLqVkvT8w1JGqM6L8GBLmVIJhNKtohV1
KAqhOi/CgxbFAQgRM2M0qfUJRbHja64iPFhRrE8gQmYcIw2cQ1Hc+C5CER6sKA5CyJgZT9vmBkTx
bPyYdREepCiBN4FQpuWM1CRPRCEYbQketCjQaFXHTSsMaTYBiiLHdxEuPDWnfAJvAhEyo2grOxNR
CEZbggctCjRaHTOjDcntoSia0BouwYMVRZsEImTGShIEFMUQjHbgqbpULvACiNMgs2RTHFswbRdh
QvAnughcilYFjusxiY9wmdbRivq/S5lhjNA7L8GDK2WRN4FQtmWCVNShKIQFmUV40KJAP7Idty23
JAgoiiC0hkvwYEURKUTIjDSkwSQoCmE7QBEetCgWQLiYGc1IEFAUTehhl+DBiqJdAiF06+QU08iw
5vr6PuxR7cOJWZv1YbuLMn2//uG4W5x+mVRm5nZtZm5UZw93Yya4m6eqM2NbZYU0/FZ15mL5t5w0
wgSLHmGVZhEebNEzAkD4WP4d7bwRKAplcKcED1YU6xMI5VpGqz6gKJTBnRI8WFEcgJCs464VgjRu
AEThcrxzF+FBihJ4E4iQGUlbkw9FUeM9pQgPVhQlAMRpykPxKTbCT9oRmxL8iZrLstgA0ddDpB/B
cq3ypHEYWMjs+N5+ER5sIbMsgVA+pJFUe0JRCFsoi/CgRVEAQnTct5I2YgdEEWz8sGoRHqQogTeB
ELL1Xk3wVH+R/tnDfdWjAKb4JFWZNv6Ud/OEUwrhgz17w/wtqxTxKVS0ISL4ABCWMxbhQT8AHkDI
+BQaXnCyxQgxfsiqCA9WFGESiJAZ60htaigKYX9NER6sKJIBCBUz42kLTaEoZvxOrCI8WFEMTyA0
azknQUBRCMsZi/CgRdEAQnci8NHOFoGiOELHpwQPVhQnEoiQGUUbvEhEIRhtCR60KNBoT7Nbmk9x
5sG0vcEJwXNW7t1q4wQuwcoeRGskpTqPPHUPoo28CYRmreOkgQ8oCqU6L8GDFSWpzm3MjDekYzSg
KGr8DFQRHqwoyiYQmreMNnoBRdGENk4JHqwomgMI1wkedpoW9RRKdV6CByuKEwlEyIx0pMxAUSjV
eQketCjQaH3MjBakzABRFBs/N1KEBymKYilEyIzRpMwkohCMtgQPWhRgtIrFzDja4QhQFMLpxkV4
sKJolkBo2QpacYWiEA7jLcKDFkUBiNO8ixE11j8U7SJMCZ7TRbi1uSdwCdkq2n57WMoI1XkRHmwp
cyKB0LLVgmSKUBRCdV6EBy2KARCngXxDm0UHomg+vuYqwoMUJfAmECEzllZTQFHE+C5CER6sKIID
CBkz42jDjlAUwvqPIjxYUaRLIEJmPM3YoCiE9R9FeLCiKGi0qhOqZYwEAUWx47sIRXiwotgUQivq
YdyJKASjLcGDFgUarY6Z4Y40QA1EMYTF9EV4kKIE3gQiZEbQ3osHRaHUPiV4sKIktc9pMN5KNkFL
e9ouwhl8bvv/A5dQraS9KQ2WMk2ouUrwYEuZFgmEVq2mLeKCohA2GxbhQYsC/cjGzGhPygwUhVKd
l+DBimJTiJAZQzuZAYpCqc5L8KBFsQDCxcxYSxpMAqJYRugilOBBihJ4E4iQGWdIEFAUwhlHRXiw
onBotD5mxtMGqKEoYvw6vSI8WFGETyC0bhnt3GkoiiQYbQkerCgSQGgWd5Nx2rnTUBTCUaZFeLCi
WJNCuNb4x15i6eQzJ/Ua60afSHwJ7PMCY+/TiTScb4VVeeEy7jPzROIQWAYRHHsssHo+8OgztZGB
0QLbNFzg4DwvXMZ95gscvEXR1gADHEc4878IDzIbgRdAnKaTnJziIIhJO7lTgud0cm8tldM8mrUW
pKELWMr4+AZpER5sKeMugdC6NbRF1VAUMb5BWoQHK4oQAELEzFhNygwURY9vexXhwYqiUwity74B
KwQZ38ktwoMWxQII2QnTMtoxGlAUwsbDIjxYUaxKILRpBSdBQFEI72688FR9fCw0WhUzIxUJAoji
CZvRivAgRQm8CYQ21CFNKArhbP0iPFhROITQsVvFaKvFoSiEl+QW4cGKInkCoR31+EEoCuFVg0V4
sKIoDSBO00leiQla2tN2ESYEJ3URTCzqTpKyCksZZXypBA+2lFmTQGhHPTgXiuLGL90twoMVxTEA
YeNgFKPtSYCiEN7dWIQHK4q3CYT2LactYvy3KJZRxnFK8OBEibwAwsVBPENbRvI3dWeyW0kNheFX
M0lqcz1zw2f6nPxvOvm18X2ng9zjuKf6LPZTX49FJsozNYgRZ6XWO5QpPK+eBjDa8OtrkreOXrB1
Cqbntl4+xG2O756N/EU8B+N24X2GEP8OWIl7Kmgsy8zrieNTBoSuZoEjS8C1CwedlfP8KNxKPcUR
gexAdn9P4xdfkHmCzlXQjFELjI9D58yroGmMwhpbjOQy8Wd2f9PJG8Z3xe09jruFG3SknlsqxeIc
rTce1BVzlFy4jKunQRJs+PU1yVufcvZxe4m5r+ie2oaK8rHoHuKMKTDnFHLTFjBPLsyo2vNHHlab
ZQKlfCDbsKfxiy/IfUXP1TawioxRMGdmb+E41ZF7iRxj8pg0fnZGm05+bXxf/fA9jls/nGzawNLr
YOjTZP22aKZRR1z+pVw9DcZxw6+vSd6aEto3WiyfO7P3cWg5U/f/FTxHh5bHuv8Uv0XxdzxL3FPw
kXvFOVMJM6Ue5ogURCkGHZRUY6WpsKCn5f0gpdLKoTOEdzR+8QWRJ+hcwUcHKMZpcurqZRs65J4R
MJG5TN+f9KaTXxvfdzrIPY5bk8Qf+cCchRpLKiwZUxHCppbijFdPg8Utv74meWe/ToofZaGmnvfy
Up34q79+WP4vEcr3H21//uNf9gfoQcNHQ5H12dx68KvsM7dD5x/Hn//yx9+N/vmzq9yPH3k2TLck
7utJm4YFzzh4NZz2GT7qYHHNLb7cZ26Hzv0OBrmleOpI30ccfX/k+RKeo09D5RlCb5rV61D2rE+1
nnLSWEKd1oMBa2jTakCjOESq5pkW9DpLTUm6tCMXEe9p/OIK8ofSc+tTkoaUniP04Z0DOUTH4uCP
s2J8ps8nvenk18b3nQN5j+MmIyRlsW6zN2wGyRS6jTSlQBpQ6PqswW4x5g2/viY5fqHph+W8fk6R
Z3nPUkyZiWe0vvwBaqgjrzfCVFwzXAgtssy5To50oiGnTi0dWIrZ0/jFF+TG3LmlGOZSS8dq4h55
WIrRqCVFk+IywWevvOnkl8Z3Hnl4j+MuoVYwMu3rEtlolGj5k2SMRVteNx9fPQ0qecOvr0neOYJp
sUxyy3iq1OBhaKH4/o6zf/H8X6thKT5CpH9+A6D3OHatT2WCFg0CwpTQWl07fqwBaID1KlwwLuhp
LdXqFVCOLKHuafziCxJP0Ln1KcSsMnAkvxp2lhKFRTM3dpm+r2hsOvmV8b3VsPc47h2Ac/Cg2NrE
MevU2Wnk2GOGKII9Xz0NFvOGX1+SvJXnXiwnvUW2/8HGrf/pjrP/JfieHWfq7zhbuYBvhG4Ccs/a
FszYpEZYb8Hm0GPF0OpMIWsvqFGKzDXmKTFGULR+5HjiPY1ffEHmCTq3tsUta+6FqRT1cuRpsLTc
m9XoMuXPmulNJ782vi+RcYfjl01obDZbLYVAOZsMlFR6Y1Tm3MB9Udjyhl9fk7xTNrFaXm/mP5Xw
exyr8f1ypy/hOTpWIzxB5PUsOu81T3vWttYNdCLQwqwNA1jNgRVKIJWY6kiZR17QW4NOadFB9cBY
vafxiysI0RN0bm0rpZZ4gGEb7vHE1bBxnpEAfabPWdmmk18b3xe39zh+BXyHnnV2zi2OOZomgEx9
0vI+g9r1WQPfEuqGX1+TvLMpMqV1Tsty6irYx7jNp+L2PM/RuM2PcQvf4hq30Xscexb8sJpqzBJy
ZQ2lzxSQZIQaoyJgziP39bXjqZFwQMoHzpDc0/jFF+TG7bkFv5xKHkwlsYA33lJn6H0MntVn+ixI
3XTya+P7Dvq6x/HPomOV2qJGWe8gjJAtltRGRIgwsV09DSp5w68bJO/E7ceNTEjkWd6ztjWj8aJ0
BgQtISXLIbPm0HJLlHtvvaR1aLHBhRqV0Y+UKe5o/OILUk/QybUtiyy9QLGm7iuKMKv0RE1dJv7M
MG46+bXxfUPLPY47JZQsKQ9tI+XGxqMhWJt9zIyRk149DZRww6+vSd4502q1LDfTU+VFj0PLiZtm
voTn6NCi9ASR/8tZdJL2bGZGzjV2m6FHxiAxt9C55xCtVqRUMtoat2kiROtSLR84hGBP4xdfEHuC
zm1mxpKSpi4DUN3NzJQo06yTs8v0fRKx6eTXxvfF7T2OF7frHLDRx16VvIZtzGslYcSRmAWjXJ81
0C0hbfh1g+S9uCW9pXObhx/j1k7F7Xmeo3Frj3GLH7MVcV/zPZuZO/NsLC1kKD1omSnwaD0UEKMk
WgzqR7YfJCpV0HygvHhP4xdfkHmCzm1mBllsF64A1bxjxcUSTtbOii6Tfj7pTSe/Nr6vTPEexy1T
pEkspZaitUNbTywXLAONl/fZGl49DYZbL8prkrc+5fBjpNfsWd6zmXlQMZ5SAlKfAQjbemacBJ2M
vdHUovLxsbzmnxJi4QNTwj2NX3xB4gk6t5lZY68JE2Sj4t5rWxXHuhs4T5fp+1npm05+bXzfzpV7
HPerhWF8ODDxWDOEQ3Mb1mOLWkpPcPU0iNiGX1+TvHPy/WpZbyCndoo8DC184jKkL+E5OLQsvE8Q
Od8iuXG7p1hkGCPnbKHUMUJK2ELMg4Iy9ZitKNNY0IVZukabpkfidkfjF1+QG7fnikVyGczZeDE1
vE+5EptprTZLdZm+V8BvOvm18X3nUt3juJ9ynUpuvYmUhpmEacYaNfcWJYK1q6cBkm749TXJW9kG
XKdgeG6nyGPc4qm4Pc9zNG7xMW7zt4i3BG7c7qmgiYKzrxXAyYDCxCghsmKwmiWr0UBep4RRTXAI
mPKBbQF7Gr/4gty4PVdBoyNrK9Ja7V6WUGceESRPnOoyfT/ycNPJr43vyxLe4fjHimeCyHmi1pR0
UESAYrGJgeUS69XTAAobfn1N8lYKJn+DdFP/ie4pFuGuqEks8OQY2KytlR0Wch6iWKotktehpffW
tBPNeuR8mx2NX1xBGD1B54pFKo7SqAsW8b5aJLUMttBy9J38/TKkTSe/Nr7vq+Uexy0WiZhAuPeR
Y+IstVeELlw1dpRSr54GU9rw6wbJm6+o3jKeqtd9HFpOXIb0JTxHhxaRJ4gcb+ZPCfcUfNQylTrN
sHC1IIQSWoEUxpBIGkdqNBf0NqJIHTHHUffH7Z7GL74g8QSdK/iIpXaWMgW4eTtXbFQpgpTYd/L3
KeGmk18b31fkdY/jXz44CFLnYZCbNNWpUtocreiagEjXZw3plqJt+HWD5L24Jb0RnIqTx7g9U/f/
FTxH4/ap7p++xYUPvTJ52FPwMZo1tdKDAJcwDUfQRBAq94EFbEz7yBJmErGsiofutd3R+MUXlD1B
5wo+aKIZNIu9TS+7P6vi5KaVwGXKn6U9m05+bXxf3f89jpslHK2kDINLUSRrmazgjJAVEhjEq6cB
betF2SB5J27pW7IbJ/Us76ltKLNxzVpCGx8F+ZCDLS2FMWCmTOu9cPixfjKYWpVF/4H99nsav7iC
wHXludoG6LmZdgNlL0tYIkXUBAKz+EyfleubTn5tfF+W8B7HPzqt94mxTOSUOsG0OpQtASFnlHn1
NIjYhl9fk7xV90/rFIzjV9b9C75/mu6X8BwcWhbeJwi0GybwHseeheMxE1kbGpLWGCzTCAathTkm
wJhSms6PO7iwYZFuagemhHsav/iCyBN0cuF4FkvFMFkX737MXID77LEMcZngc+F408mvje87Ou0e
x80SFmhMZUCxNaFvXQ0w1yalpAhJr56GzLLh19ckb00J6WOh1h9a9qyRtooWO0noEHtokjlQmRhm
TzGnrFZTWb9aJMVsNRbuB4aWPY1fXEH+0HJujXRxcSXJsWJUdwEKciVOdfD0mT7nuZtOfm18X23D
PY47tIzK2EucOVOrKdkwwPVACyi2fsBcPQ0gsOHX1yRvLUB9WGY9VUvwOLTQidqGr+A5OrQQPUDw
xw326PbEexaONUYtQD0oQgl9NA1Tew4VZ1meVE/aZEEXAeg2DXDm/XG7p/GLL8g8QecWjrGsizPQ
e63V261cK6RZTbkPlyl/PulNJ782vu/unnscf+HYMINNkTpBlJlaHpNTJDYizddnDXKLuuXX1yTv
nIK9WtZbkvw/2Jn1v91S9j8E37OlzLnE7JML9CZ8Kl3y2BHaibr/r+A52hEaPEEg3yKx937vWYmv
YpANODQqOVijGQC4h6rrfCNFRtCPBakWG9ZRYRwoztzT+MUVxNETdG4lvjCmMVolzNlL31iDLBAr
NvKZPp/0ppNfG993huQ9jntgEKGuZzbILBaRLfXFghkNRegJ4OppgIwbfn1J8tYNUf/g7VyWHchp
MPwu2WXhlHWzJJ6CDQ/gK8WCS1Hw/iQDgXCO6O6kzzC7qamxvl/HajuWLN8ti9yQQstHks4obKXl
nrgaJ6YxUieeKaPDasSV2nosNVhmYTKo+sYJ45HBL7GgMObOJZ1FyGorw3Jb0R47m4s4KgmETPxM
Ou86edv4sRPGV5xoimrhwiO7Ws7ea3aQ1qX1Jg0ydb1GGjjzjl+3ST7pRfewbDeTU20Sviwtlj9v
T/wjPG8uLZbtC4Q++jMhh7+Nj2TiicA6r5ywc0ulykqVMiQmkiZaMpHc0XNWK3nd2emNh4KPDH4J
BQlGgs5l4imDtGWdIDy+mdaIpnTqWWKm519618mbxg8e37zihFdBO67c2wBtOduUvvJytNK1cZ4D
rpEGzmXHr5skH7U5BX1swfxce5OvcUuftyf+EZ5345bgGwTxTUr45ziSideGJoqcFi5LtoYnh9qT
aB8+1uTV6Y6uXVFIO/t4Y0t4ZPBLKEhzJOhcJr50zjioTGsUZeJbFs2zDZwaMz1/G+86ecf4obh9
xQnXW+xuQoWbZGs6QRsXmmWN0mQOuUYalGTHr5sknx276m+EbhjenKRDSefSaLmvxHmWpOQ9IU5M
0NoSyLzgl05egzuB6Kjgb7Q5PTL4JRRUIBJ0LulcrXOWOlB5BFMUMiKNlqmTxkzPHNCuk3eMH5qi
rzhxdxDCBpLNeOZemGy5lJLraIqlyDXSwMA7ft0m+XSK+i3DqU/516WFTxSL/ATPu0sL6xcI+w3g
zTSM2yOZeMSpi+tMWDslGLWl0hnTnGK6qEykckevZWUQRZP6xpbwyOCXUJCFcXsyE2/my6tKC7sM
5DwE27KxisdMz7jddfKm8YNdBl5xwnSJGulSWDVbXYXqkkyzKhvMjAWv3zXQLVPZ8es2yUftie2X
9pR+6tGwr3FrJ+r+f4Ln3bg1/gZBeCsZoz/HodYL1AeRWPLWJNVFLUlzST5WdihldqFH3IrObqtO
lDfq/o8MfokFlUjQudYLpSIMQq40wxcr1PvCCVPKCJngmYnfdfK28WNF1a84UdzmsiDnzLYwS6Pu
CioZc54+ceZ1jTSoyY5ft0k+Kqq2xwv2JOGX+EixCEqdS4okbW7JsrWEDi2tiVwn2Wi5/ZKJzzCr
OWd9o//wkcEvsSCOBJ0rFnm4GAY45HBL6HVJdqVaYIZM/+58v+vkTeMHt4SvOOFBtvWJQMV65YGT
61p1jCliRXvp7RppkKw7ft0k+ehd24dlvxHRmU/5l6XFcz6ztJzneXNpufN+gfDHC/ZE4TQ/UkEz
RmMlzanoHGlptTRBKbUuw3qbVGw90Gdu6FBneafI68jgl1iQRoLOVdB4URrWVhsc1f0XMx08Zzep
IdO/T/d3nbxt/Fjd/ytOeNrQ5xhQTRwmlcHSOw7oIjrE55JrpIEdd/y6TfJRcab/8roonmqs+DVu
8URT5p/geTdu8TsE5Rtb+FPuSAWNtl4ql57KcExQSJMXL6mNztpqrdbbHV1bJQBqzctbp4T7g19C
QQ6RoHMVNHM1lS6jGq6oqNqNDGQpe42YJD/jdtfJ28aPZeVeccIKmjUXc1uzDwWuw+e0Rlxa6y3n
3q+RhiJ7ft0m+TBuJd+gYGT5SG3D6kO7VkzuBdNgWKlJ1cQLCtUikKH8MkXJtSAWXW8UZx4Z/BIL
kkjQudoGglka0gNpRgko9FxW8YrNAqaXR192nbxt/FiXgVeccIpWnYs6lzkysLWiDU0zk5c8htg1
0kAIO37dJvnotrI/tmACpz7lX5eWM3X/P8Hz7tLyte4f8+MFe+cc/TmOFHy0hqOSzDSKlwQ6ctJh
nqbYarbWQOx39Dqa50kZ3npK5sjgl1gQRYLOFXyoFFqtVJhTotP9YQulZJqYY6bnefCuk7eNH3un
4xUnTBzb6sALiBdPR8kVK3QkWqbCWK7fNDymsvuOX7dJPun9jb88YgPGkeUjOdLsBJJxpmpU0gIq
qdBoibnMctcMhnr3sGEnRFtt8htdBo4MfokFaSToXI5UZtXSWy06otMGy7xsjlyy9pDJn79Pd528
bfzYacMrTjRFpRXN3bw4mRQp2AuSVcQJLNnwGmkgLjt+3Sb55LWjXyyfbSv930tLyfnzuv8f4Xlv
aXnwfoNgu6FGcctHEscjY1ldMcmikZyrpiowU+8wmEib9X5H741Ap9JgfSNxfGTwSyxII0HnEscE
2bNCXo0kitsOIGAEWDhksmfc7jp52/ixpeUVJ4rb1Vafw5pXAJXuVpWk9jKXoGTma6SBcc+vmyQf
9SLF/NiCqZxq0PY1buFU3J7neTdu4Wvcwi+v8BNFf44jieOltU/UlrxbT6sWSw2XpmGeRWgu9nJH
L9qzjJzrlDfa0x8Z/BILKpGgc4lj7qodtSxuJYjbxYqa70y915Dp36eEu07eNn7sPeoXnPi0wWSI
9+yW3VdlWrYmZ+hcOhjbNdKApjt+3Sb55LThbhkelvOvcO3lV72v82uCn7mvc+dCvTGFa/WRpLMT
kMrgNGejxFNroiU1aW+AbVqu2u4qbKGV3Kgpv/Ez8Mjgl1iQRoLOJZ2tjznRpIhCFPOusyzmLCwh
Ez/X6l0n7xg/FvMvOFHMT2L3OnjxGAbUmQCwT5M10DnPcKKI+Y5ft0k+qR9+WPab0ak97de1mj8v
FvkRnnfXauZvEFxuLOE0P5KJh1V7reYJuOXkippggadpzE4599zmHR0MZ8PSqM83mrseGfwSCioQ
CTqXiSeYuXPBUalF9cOl4ZxtlTY0ZnoWee06edv4sbu1rzhR3GLx1SaMsTKwZJFiZprVsBetbV4j
DQVsx6/bJJ8UZyI89rSOpzLfX+NWTsXteZ5341a+xi3+JpebcDjNj2TiwZdozZAgS004ykwZ+0xz
YV05U2Frd3RZtLRpBu1vFGceGfwSC5JI0LlM/LRCPLHzwOhMS7B1a2Z3shYyyfOAfdfJ28aPnWm9
4kRxa8RWoGmBJcSliIGu0d2YseSh10iD5r2Jsk3ySVPmu2WUu2WPLB9JOsMY2kg09awtEUxPWBnT
gAZFivfZ7LG0lEnD6+IGb7RtODL4JRQEGAk6l3TGOz313gU92hLOwguykktdMdMzM7Dr5G3jx7aE
rzhhk8imua61QKWUxloWAI+F3oCgIV4jDaZ5x6/bJJ+0W0F8pLtzPnVc8nVp8c/r/n+E592lxfUb
BMsNJDztOJKJd6u5Ql1pibYEBVearWiC6bpm9lbauKPPPDuZTpzwRrrkyOCXWJBHgs5l4mflLALZ
lkVxO4SzF+mLuAVML48G7Tp52/ixuH3F+V99w8HrhFIJkByH9L5cdKJzAb1GGghox6/bJB8tLfRo
T5ntVMe7L3EL+UTc/gTPm3F75/0OwTf08M9xJBNfRwesrqlUG0nLyInUWpI2TTNm7dB/2dFI8+5s
rb2RLjky+CUWVCJB5zLxhSkr6kQs0U+53mfv7gO4rYiJ8vNH+66Tt40f+yn3ghOvt8JtdCaBloWH
CjaA3JfTHFIVr5EGFtjx6zbJR+st/QbxZi6R5UNdBkRVarXkCzAtHDVJ7zVVzpOLYdZFvySSCbVB
cVhvTNEjg19iQR4JOtdlYFIfWacCzWhpcbGxKNsECJ3s+bm07Dp52/ixpeUVJzwlXESZqJdp2fsq
RtkQHAox2Cz5+l0D3bLAjl+3ST65moL02IKBnup493VpoXJmaTnP8+7SQuUbBNMN4x+RRypoYA5Q
XS0tm5pMjFJXaYmaNXZegwzv6Ow8Wkfm6m9k9I4MfokFSSToXAUNgOdSO425MIpbLEWcczeQkOnf
5Xy7Tt42fuwq6CtOeNqwxirZlg1ks8FDNJc51ao2dKvXSAMJ7/h1m+Sj031+bMFQTsXJ17jlE3H7
Ezzvxi2X7xB4y2FRtRypoKE5q0uuCahYkqGavDEmrdpIXWDmckcvNEabHYX1jVPCI4NfYkESCTpX
QQOw+hgjz9GiuJ0MmYBmhTlDJn0+9rXr5G3jx+L2FSf8KTda5absrJY7MuBs1UBKZa3F+zXSgFh2
/LpN8sk9u7tlhBsWjywfKRYZsHwN5GS1YzLMJdXFkFbmprh6k/5LwzjteTnl7ELHp+iRwS+hIKVI
0LlikTkr1G7VOsdXQbFJ1WpFKWZ6nhLuOnnT+OGroP/BCeuHZXCpbaJkNu2CDiUXFCoNSUmukQam
Pb9uk3zycC3yYwtGdOpT/nVpsc/r/n+E592lxfwbBGOcOFY5UvAxnEZetaaVBdO02VLD1dP07ABc
ifu4o9dZZ3PSltsbieMjg19CQSVHgs4VfIwGurSMqZOipWWwSs+DqOWY6dmLdNfJ28aPNZ56xQmL
M6fmZlDGXEtmq8Vbh47oq2BeuK6RhgKy49dtko/q/uWxBWM41bP3a9z6ibj9CZ5349b9OwTcNP6M
Hin4MBgqS0fyhS3J4pV6LSW5ZWy2uFJpj7hVXuiLfI031tsjg19iQRoJOlfwYWCM2rN3Crv6yOi1
WSeYHDL9u1Br18nbxo9tCV9xwoIPrU17LeqTFg/KDXqXlnmyUHW6RhrMeMev2yQfFVXLbzDfrFhk
+UjiuLn2ulQTjlzTrIppZugpsyz00agve9T954FUZQn1NxJQRwa/hIIUI0HnEscNc53E4Nw9mqKV
+hTR6RljpmfKYtfJ28aPNbB5xQnv61Rsy1YXXl46Ly6NpLEJiRYSuX7XALeMe37dJvnklTKUf94/
P5Xw+bK04ImroD/C8+bSguTfIPhOGP+UO5I4rtTq8G4JsudUO+Yk4p5WpdkrTGk67+gr11zLIM/t
jdOGI4NfQkHxT7lzieNSVdSqkuYSJY5ZXCVTwTljpmeLsV0nbxs/Vvf/ihPG7RTr3nUWXSZepQnd
gfusTUYt6xppQCw7ft0k+ey0QR61eyWeokdypG5WBAalWXpO7DiTDi9JRWFqp4lGj4Ns0FbNIWPT
41P0yOCXUFC8tJzLkYIzlVG91vAFBejdeXQUWBIzPT9Gu07eNH7wBYUXnLhMHStMGETDVr3DZOsg
umwyrj57uUYaDGHHr9skHx1kl8evBNFTvxK+Li0nnoD6EZ53lxaRbxBgN7LyK1x7+XXv6/yK4Bv3
dQD1pgag9LyvE3D5TS38EB7JxCN07LXP1IvW1JqMJBUhNSlW2vAic9xVdFMa6CZjvLHHPjL4JRTk
GAk6l4mHBpxRy1xlRmt1qROtE+EsMdPz2HXXydvGj7VbecUJmzJ3nlpI85ojd6iZS1bqOnrrmcDD
ieKcd/y6Q/LZh5DyrcipPe3XD+GJJ6D+xfP/fKfjwfsdwm8C4TQ/komn6kOalQRYNNUFNU0aNY0y
V4OJQMvu6EyrSms1+zudM48MfokFSSToXCa+ls5OUqzMsJdFLWAT5nSUkOnfz6zuOnnb+LF0yStO
FLcquY7GC0du0riXbkWVS5VRh7heIw2l6I5ft0k+3MCI3TSsrShHks6dJvHIkFygp+ljJgIbCbAp
zmngCx8HZELS5sgN37macmTwSyzII0Hnks5THIoUJ8wU1SFSm9WtGgwPmeCZA9p18rbxY5mBV5xw
jy1VzduYgN1l9YxjlAq1igtpw2ukwUrZ8es2ySd9w1Efe1o993TLl6WFTjwB9SM8by4tlO0bBNiN
LZzmhzLxkkUqaZKaS8odS1qtUmpTVBrjFIE7epm5iktdFd54uu3I4JdQkGMk6FwmHiZWXh1yzTk8
vtHFeXFdecZMz+ObXSfvGD8Ut684YdzWJtJLVsrcKsMsFdBRFnOjNss10lCYdvy6TfLR8Y0+tmAG
p/rrf43bE09A/QjPu3FL8B3Cblly9Oc4kolnHKK9loTqnKDATDDUU9NmS3vDRuWO3llh2rQ18xvN
XY8MfokFUSToXCaebQ7O1L2HxZkG7gCWpZPETM+/9K6TN40fLM58xQl/ys2ZR1ZvwkxlqOdFrdSB
nYRXG9dIAxLs+HWb5KMtof5G9IYQWj6SdC6r19x7SZjLSmMtSMWspTzqKqXi7FMfU7TgKAuVcL7R
N/zI4JdYkEaCziWdV9FCOQO1VaP+w0QFsPQCkEMmfF4e3HXyjvFDU/QVJ5qinchXH1XmcqizKqya
l/pqSFbYr5EGKr7j122Sj4pF7LEFs3NPt3xdWk48AfUjPO8uLazfIKDc0MId1JEuA9NZZ1+eyh0r
ISingr0lpMWlzJXXL3G7aluzQytu5XjcHhn8EguSSNC5LgM0+zR3heUaxC2NRrlnrFw9ZPJn3f+u
k7eNH3sV9BUnPG2Y3WTWKtJyni7MTYhhmlas2ewaaWCyHb9uk3z0U85+g+WGKr/CAfvXzMDv/jIe
aYE//vlPf/jbn//6iOT1h9///a/18R+/JwvK/8wWvPyn6/9PzVa6QPEGAAYlThc8uCjfXE99jb5+
HU88tPUjPO9+HY2/Q8jNNNwEHKlToqJUW7E0hWtykZVAGifqCrn3XrvNO3oWlmys6vbOWeyBwS+h
IINI0Lk6Jc1QaPpSV4kOugD7gMXVwWOm50HXrpO3jR/rT/qCE38dqVbE0VvrNpxyZ5Np7s1smgIH
0VNumfcmyjbJh19H4ZvE3+UjJTnoiGvUniTnkqjpSEg0Ulm96+yt4C9V2szQvfIodbxx4fbI4JdQ
EIdT9FxJTsUyS5uNGkUlOTwETLtnzCtmeiaqdp28bfxYSc4LTnwnfDWGlic6gC2cNdelhqMTldyx
XCMNxWnHr9skH90J90edaS6nep58WVr4xENbP8Lz5tJy5/0GAXzLFG68j9QpjYJG2TQt7jWhK6WF
1dMgcycTlF823nX1IcMm8XyjSvvI4JdYkESCztUpEa1VJuTsOUrzzV4ZndGyj5CJn2m+XSdvGz+W
5nvFCau0+yorgzXxLty5ErUh5M5DddV6jTSAlx2/bpN8dFHef0NwAz7VvvJr3J54aOtHeN6NWwwg
8KYensUeKasRX1OtzSS8ZmrdRjKxkToMb82XVaBfStcBoAML4htxe2TwSyyIIkHnympaK8069Yoe
X5Q3yj5F55wB00tZza6Tt40fO4t9xQnX26zQfFov7E6ujoo4e1YdLrnDNdLgzDt+3ST5rATWH09q
arHI8pEKEnBHbYVS9uKpzLJSl0lJVmltKLQ5/JGlktVZodc+30jzHRn8EgpSiASdqyCxNQWK5zZ6
VPmlkyeMlTuvGjM9t4S7Tt42fqzy6xUnvHALipgdqtNghsUdeGqhfgfDruMaaXD0Hb9uk3zSnZ3y
YwuGeGoL9nVpOXG74kd43l1ayL9BANwUoy+GHimryWNoQ1gpV5dEw2qybjW5CigMUSW+oy9tfYwB
C8YbpetHBr/EgjQSdK6sZvWcTWotPdfwWe/SDXkZlhwy0fMvvevkbePHciivOFHcDkQGJMDJornK
oAoTiF1nplbHNdJg6jt+3Sb5JIdC+bEFo3PtK7/ErcDn6fkf4Xkzbu+8EUT89qoeSs+r58Ytp46d
k2CvaRBTapR54PIym93RrbtwhzUI3tgSHhn8EgpyiASdS89jF51Ya6GK0RHMHIV6aVqXx0zPSwq7
Tt42fmxL+IoTxS0Qcs4ZJsHAloe13LmZtcJutc1rpKEw7fh1m+STU8K7Zcm3kkPLR9LzK0MB5Zaa
YktDxkrNdSZk51VVC7o8pmgptbGKsb9RQXJk8EssSCNB59LzZpJNqCmphw/kaZlEsAZAyATPO+G7
Tt42fuzC7StONEWzGPZVTZfjXJgJWy5TBzGM7FOvkQYtecevOySfTFF4bMHITjXd/7q04Ofp+R/h
eXdpQf0GAXADD+P2SHqeG+ZZ50zT60oZdCQrZGk0K12HgORxRx9iYyxBGvJGAurI4JdYUBi359Lz
s8As3o3NKbooP1ep09DIMWLCf28idp28bfxYpfUrTlj5ZThnIVkLhmOvUgeRzsHTc7E8r5EGYt/x
6zbJJxflH5btsVn4FRLav+rFvV8TfCMTX+SmSI4YJ+IfWHQnlvAveSTnLDVjKVTTUIIEwppW7pia
scwmZSxvdxEA1M2zCeQ3KnKODH4JBRWMBJ3LOa/eCwlOMV9ByAvw8JxtwICY6Xl6s+vkbePHXvJ/
xQmLtA1qJkQaCLiyI7SRK1utzXpGDicKgu74dZvkw6X6sRcvp5bGr0u1fN6J80d43l2qpXyDYL8R
hkv1kUQ8DMfc3FNvA1MrQ5M17KlOhsrYJzPf0UVBBLUasx+P2yODX2JBJRJ0LhEP6F6dmmNtUdyK
N+Lmo/UVMv37RbVdJ28bP9ac/RUnvm9bF6MOz0bWCuGU4lbYS81zQL1GGthgx6/bJJ8k4gkfW1rh
U20Fv8ZtORG3P8HzbtyW8h3C4obGqkcS8XMUIEFKVGmk1WpLFQcmG7NWACtS8x296KhYy6Sy3rgn
f2TwSyzIIkHnEvFVTWpZaL1SdOrKWTP1URbGTv53e9JdJ28bP7bFfsUJE/ETV/fKusCq4CwAanV4
qYvaqv0aaRDUHb9uk3xyT/5uGfmGhr/CTvW33/+3f4deTEG/AsWXjf4m1c/U4f6aajZ2/0h3o9k1
23P7H3CVm8fbiCP1BVmyr549FaORsraWevWZuE7DmhXXWA8VNKaXuRrzG8nbI4NfYkElEnSuvqA3
KQpTUEZYcqplqQEJWQ2Z/p0E2nXytvGDJacvOGESqLWh5nznqH1kt76siXVtrZPaCCaK3u7/tuPX
bZKPkrf4yzE28pll+8s2ouQTyduf4HlzG1Gyf4NgubmGv3KPFF301vok8OSt5FQWj7S856RMwlCw
+qh39NoZ1LH4tDcuwB0Z/BILkkjQuaKLhkI6lg3l6IR9ycK2Vil3qpDJntv/XSdvGz92wv6KE27/
q4yh1BvwrNbZtLfiVbh3zhOCuOVyA7Qdv26SfFTPR/TYbms+td3+GrdwIm5/gufduAX/DiE3CHuZ
2JGii9X7GBlm4qyWclk1jZo9OeaujFYlrzu6KzrMXFE7HI/bI4NfYkEWCTpXdCFSZsU6Os5ovbU2
+iQGkNVDJnj+pXedvG382Hr7ihPFLTcXQq6MNEZekHl0r22wi3egeo00iOqOX3dIPotb5BsYRZaP
tO2oXHU0G8lrg8RtWEJcNSnzmIuKe/0lCWTFl/dlA944WToy+CUWFLryXNsOL3VyxTFL46i+gFdG
IiObGjL5cxOx6+Rt48daWr7ixF1XHXCoFam9w1jKtTqQdvS5dI1rpIFK3vHrNsknL7gTPbZg6qda
SH5dWvjzdjs/wvPu0sL2DYL5BojRn+NIXRDMbmUsSUN4JibXVAZ4WtI1Y0EX1zu6sHUeVC3DG1vC
I4NfYkESCTpXF8TQB2pXwRplcsYcZc3OjllDpn+fCO86edv4sUzOC04YtzbzRGJChskIPrNWkDyW
qGFufI00oMGOX7dJPopb/ucTAKfi5Gvcyom4/Qmed+NW7DsE3dzD9fZIsVRdbGB1pTnHSkAoyefi
hL07+hza2O/ok2fpLl4Kv/HwzZHBL7GgEgk6VyzlQlW8zp5XdCJMWmfNa8xZ5TsT3zI8j2B2nbxt
/NiJ8CtOFLdOVXlloDzFM+OsvVWV3IDEppVrpAFkb6Jsk3zSbuduGTk+/FE7Uhe0hDhPy6lVs9Sn
WhIaK+HgAU4Tx4RflpYCtc5BvbyRtDgy+CUWJJGgc3VBWlx66a0UiaaodZ2a0ZdnDJn8ubTsOnnb
+LEp+ooTvs20VHWWYp0zulGWO/Fg0AJeh7VrpAHJd/y6RfLZAxrEjy2Yl1MdD78sLZpPlIr/BM+b
S8ud9xsE043CslU7UtwzpZRWsyY0luTgJSFVT0W6je69WX/ErZcp7uac/Y3OqUcGv4SCIEeCzhX3
5HpHkdI1W3TrF7XWhlYLE8ZMz03ErpO3jR+79fuCE8atKpmOXCjLbDqg1u5SmnajIWvqNdLABXf8
uk3yUdzKL4+dYoksH6lj8TozFqCUOfe0kHJCs5WA50Rdyl373cPFLJfO6jO/8avlyOCXUBCFU/Rk
HYuhzuEKMjSaojKzTnLAtmKm54HYrpO3jR/rCPWKE3Zy61bKyq41r15ZVivEICWb19wyXkMNJZ/5
oH79wNPnbR1+hOfdDzzlCAJMdibb9p/no+KeX57KkniaH6ljKezNZvGEmDmxjpk6Wkm0TBsUUy6P
uK0mtHTpbCzH4/bI4JdYEEeCztWxFK99qEAlj997IMRCToAzZnp2pdp18rbxY7d+X3HCuNUuJkai
c9U1lacWbGMO7opc4RppKAY7ft0m+eRi+t0y4Q3ONcH++sWQE1vCn+B594sh8A2C800o/HMcSRxD
ybO7QJqzltRsSJJOnnBgz3WqufsdvfdZizUbnd+44nFk8EssSCNB5xLHNBbUNkcdAtEVD+vWBXVU
awHTSwOnXSfvGD8Ut684UdzeAaXN3KqQ8STgrLMOZJgoSF6ukYbivuPXHZLP4lY8biihfiRHSjLX
gLESF7dEjTSx9JJIDbQb1sH5cSAGtfPq0Pyd04Yjg19CQQqRoHM50p5HkTwGaoWwocT0mcHUzWKm
5xTddfK28WNT9BUnPG3IxbBpWatALt59otiY3FsbritfIw2OuOPXHZJPpmj5TcazDRy+Li1+YjP6
EzzvLi2ev0GA3cT1V6hx/HWvZv2K4BvFmf/icX8pzgy4jMIP4ZFM/MjYMqqkrpNTa4apQpmJ6wIc
Oq1Cu6tAxQx9uufyRmbgyOCXUBCHH8JzmfjRZSqPLCv8bcyZ1Je2OWePmZ4fwl0nbxo/+Nv4FScs
8pLSK/gotQ1Fb7muNmstwNO9VAonijvu+HWb5KNikfLY0+K5ruJfPoSWP+/j+yM8b34I77zfIBhu
VDT6cxzJxN+JpqzRU4WJKZOORG6YFqAJ2cRBcEfvtHLRUkzxjSKvI4NfQkHxBuZcJt7Xqjit65or
Lqqe1SuoVIuZnnG76+Rt48cy8S848QaGC2UmvoNWaLX6oloGZdFs3Fa9RhoEy45ft0k+2sD88kAU
W2j5SNJ5YAM3qGnJwgScW+IJkHhSW1gFWeQxRbmZlTLkrYzekcEvsSCPBJ1LOpMQS3OucWZgdcAx
p+e2IGTy57HrrpO3jB/NDLzgxB06+urQMsP9nyXFMA+uLl6GryqC10hD4bzj122Sj6796mNPS+da
sn9dWujzfqA/wvPu0kLfIcDjpwPVj2Ti5yQupj2VtlqazT2JuCQurQB7QyC9o69MvXWVNeo7Gb0D
g19CQYCRoHOZeKdqHVtHD98bzwquXrUNnDHTc2nZdfKm8YPvjb/ihMUikqHUNrs35kxrgZS+Og2h
qWrrGmmwIjt+3ST57NhVH1swPpcY+Rq3fCpuz/O8G7cRhN9Qwi3hkUx8z5aRBBK2RWn1VhKqeAIp
2nhYKaT/XG+zoi6id95UOzL4JRRUwnXhXCZ+SUPmZsY9LM6U0Z2hTKAZMz0z8btO3jZ+bEv4ghPG
7VQfzr5yg9n64sXEdTg3FWNHvkYaGGDHr9skH8at2A2FI8tHMvFZs6LgTHkhpS6NklfgpIVhZMBJ
ff1SLGKs3Wq19saW8Mjgl1iQRYLOZeJVFVrtWBCjY9fG1Ep2B6gaMv27d8iuk7eNHzt2fcUJ+wrm
Na2brgqORNOhrzoWECoTU75GGjjv+XWb5KP6YXtswUROfcq/Li32+QvuP8Lz7tJi+A0C9MbxacOR
THwD09HLTK2VnPJiT7NLT9UHQK9itPodnTog5mzT5Y3ThiODX0JB8WnDyUx8E80qdQmFHSWa16nD
cmsUMz2Xll0nbxs/Vpz5ihPG7Wxjde7YRht5jImLes9VOhtyK9dIQ0Hc8esOyWdxS3grdCo98TVu
/UQnmJ/geTduvXyHKDfO4Z/jSOuFsnAhm6eKaAk614QjawJoa03RrlMe6AunrUzc5Y3+20cGv8SC
JBJ0rvXCkjzNVy893BK2XmbNjJVJQyZ4fqF3nbxp/OCW8BUnbrZoMNEn1LoQquBgdh2962SvbV4j
DSK249dtkg/XWym3oh5ZPlIskg1rB8U0citpNGuJTTDBmNJEOBvCY2mB7CRY58pvtJo+MvglFGQU
CTpXLJI1dyK32laPTgkXtMY8BZhjpue50q6Td4wfmqKvOHGxSFUi8DqyjkkKddYKfY3RtKvwNdJg
vOfXHZJPpug/eDuXXUdyGwy/i3deyCBFUiTzNroC2WQRJEAeP3YnJ3HOYeri6gSDHswADfH7abGk
EimWv7Zgipe+0vNtafEsny8t/+T5fy4tnuUHBOqDoqXF4EgFDfTZy0BOo1tNpVpJY6CmTujccusD
5hOdNasj59nGiVPCI4PfYkESCbpWQTN5uS1QGShRUbU1n1CMcMRO/tfSsuvkbePHugy844Sn+2UW
HZzb8NqLd1wFZ+9gQKRr5HukgcV3/LpN8tGVMn9twQwuddH5Hrd0KW6v85yNW5KfEOUBotHPcaTg
Q3lyUWtJWm3Jdc3ES3paykVEK7RVn+h9MVSBVus6cUp4ZPBbKKhAJOhawUdhldK9c28YxW1BBa9D
qEHM9JWV23XytvFjRzDvOOGWMIuqOVcuikCtDym8Oio3AHG4Rxoy7vl1m+SjT7L4H0QeRSSyfKS2
oQvymAvSFJkpd19JsngafVhppLk5vbaEjduw559KJ2qSjgx+iwV5JOhabUMvdUirtWKNapJQqGa3
NXVAyPSvKbrr5G3jx2qS3nHi0wYAZGYYYlxqb+hallDOxTtNvEcaDHHHr9skn0xRhtcWzK59SuH7
0mKfnzb8Fp6zS4uVHxBYHpnCLeGRgo+qWg1pJKDOiRaMVIq2JK3nOVuGUvWJvppCaVhbXide5Y4M
fosFSSToWsEHqkycgE5h4pirde1oUBqHTPy1+d918qbxg4njd5wobttaU0btjXAQrs5DdGKGYeQO
vd0jDWR7ft0m+eRVjuG1BfNyaQv2n3GrcOHDzb+F51zcvnh/QtCDmaIpJc+q4j/+7R/TPJYP5U3+
X/7813PqTxg+rbNE5pzkmLkDOv88//LXP/9pjq+/+5L766/8NMwPdY4Ml13DiFccfMLwWQcj/jQn
D6JyzNwBnSccXB7i4cpzKD/ViXJpKyGUnJRsJHT1hM3crS2bY/1Ki1ZAwLkynfjCyJHBb7EgiQRd
y0+N3Gg4ABq3qBqWZq1lDLMGEVOBr8OIXSdvGz/2pYJ3nLDFcKHWmUdeqCzDZhm5C4v2zp3Z75EG
lb2Jskny0UsNw+v4MqtHlo+kYmA0m8V7KoMx1VZqKuglOfOoVFhax9cUrSKrFFt05r37yOC3UJCF
rryWioG8zDoIskWpGEPpzcZqvWrM9HX9ddfJ28aPpWLecf7bJxV1rIxAfaJWIGyNJhJ1WqP1e6Sh
CO74dZPko9IHxldTLbj28YrvD3W5sDn6HTxn1zSBHxAoDyvhedmR/FSvdbXnP4kWtwRttNSVKYmt
xh20cKUn+mwgbhXVznwK9cjgt1CQQiToWn5quhnDXF47RYcRVrLK0l6dY6avjcOuk7eNHyt9eMcJ
l5ahssxNgRvQ1KyrFBy+HBSklPtPDeUBWXb8uk3y0UvN07I9FP4XH374n16K/F+CX7kU+eQiemC+
9OD5/iAslx6E13nOPggL/ITgh+TouYFHEn5GS4BNUp6Yk0iDNGz25LVkEijWTZ/oTtwVp4LIiW4s
Rwa/xYI4EnQt4ZeLrGw2sxWLNjBenQBwIo2A6a0by66Td4wfehC+44RtxbM3cG+SnYa3iRVHLcjg
WrAOuUcaiuGOX7dJPvkcwNOy8CODRJaP5LbYcivWZwLsmmxyTb2xJEI3HpTZgZ8exoo+2ygN6zo+
RY8MfosFeSToWm4re1FqTWYZ0R571aFTVMYqLWTCrzOFXSdvGz+2x37HCSvgsS7MML3mppOVfGFR
NtQJqhXukQaSPb9uk3zSeITza0+b4VI74O9Li3/+xYrfwnN2aXH/AYEUlxcbHkn4FSVtbbaUq9XU
zGvKPEdS7azUuWOevxIHK9uCnIlO1JIcGfwWClKMBF1L+FEj06qcRw+7sXRfpEUsy4qZvn7pXSdv
Gz+Wk37HieK2lFIcnTswAilog1UY3KkINmj3SEPJe37dJvmkFx3n10G9EkeWj+S2ipYyOK9ELWvq
1HtCBU9tms5alCHTK7q6qY0CJY8TxzdHBr/FgiwSdC23tZozj1m8YXSZmdoEVJnuVkOmf+UDdp28
bfzYZeZ3nPC+PQoy0oDm0NQqQp0w88plNIba75EG87zj102Sjz5i9rJMj+yX3hK+LS144WNIv4Xn
5NKC6D8gJMftiQ2PXGYWb8UyU8q4SipqlpSI0ihjLGdtpa4nOtJ4/kezVfTElvDI4LdQEGIk6Npl
ZmqoFYubkoflTponUm6tacz0dZl518nbxo99xOwdJyxTxOyz80RQmIUmqDQYi6wX7Dr7PdLA4jt+
3Sb56NiVXlswUr0SJ9/jlj+/cfZbeM7GLecfEPikKxT9HEcuM09rbapzQhg5EVJJWrAmYMCBXjMC
P9HXytL6dIBy4qOhRwa/xYJKJOjaZeaJIsiN1mjRacN0KdVIe1UImfxrS7jr5G3jx04b3nHCuM1V
wV2Qa1WgSTohM1tF1wHN75EGYt7x6zbJRxm9X19kAgyn6JHLzG1WHDZXakKSvDVONrkl7R2nZeDB
r6XFvNauQgXpxBQ9MvgtFqSRoGuXmaFx61AQ6wz7ZGTtpWslai1kyl/lTrtO3jZ+rNzpHSeaotVa
JSqjLFlldoHedczluBZ3nHSPNKDijl+3ST46baBfBTZy6VH+fWm58DGk38JzdmkR+wEh8LB4S3ik
WES6lAWjJmrdk/fGyWmV1AcUAl2l9vpEdxZ0EeoTTjQhODL4LRQUbwmvFYuI94yduK4RnzYIqDYd
Vi1m+lpadp28bfzoacO/ccIyRW/sc3aR3ky5545WDUcBbmgF7pEGL3t+3ST5qHUa82sLJtfaeH+P
W/+8rfhv4Tkbt44/IPDJx+HPcaSCpleZrjMnyATJm7SUwXuqxkPGWk4rP9HN16BaEcuZ0/0jg99i
QRIJulZBw4Mk64RSZUZbwpqlzCpgs4ZM8lXktevkbePHPgfwjhNuCcG1MzgvlQalUB5VlYZWng2W
3yMN5L7j122Sj45g+A8ED3CILB8pFmmZC7pREuKVWKGkziRJSZsvBWwCTw+vOciK9jXKic73Rwa/
xYI4EnStWARH9QpMffmKtoTSdMxmNbcZMSF8nRLuOnnb+LEE1DtO3N9m4spjWSEwHFKxy6rdUYb2
0co90pBFdvy6TfJJC6aXZXoUvLQF+7a0ZLy0tFznObm0ZMQfEOzxJ9MtH6ltyFZLnXUlg6lpEUFC
LStVHlQz09DxituOVm0ZqY8TRV5HBr/FgjwSdK22YfIqoqhrlGhL6MhKVXOrNgOmt0+m7zp52/ix
LeE7TljklbGtOnLLthhXb9ja4FxWGZa5lnukIZey49dtko/iVl5bsHKt6/T3uOULN85+B8/ZuOXy
E8IeHCax8pGCDx1rrCItldo0LR0lNUJI2WcjmQPyrE/0YsNzyXNaP3EEc2TwWyiIMRJ0reBjsEir
S5uOsDhzcp7VR+44YqavJ/Suk7eNHyzOfMMJa5IWZitZ2yrApQ9oQASgNJcCtXaPNBTIO37dJvmo
Jkn+kDW+yWH5SG3DyA6dqCWyOpIVxTRELLXSSVV1lmmvJ2NtXADbIq/Hp+iRwW+RoAIcCbpY21Bn
piUD55CoK6cNF5y9tLZCpn/1ydh18rbxY30y3nHCbrrVgHuZLXO1jtant6bcpOblk/AeaVChHb9u
knz0nT2W1xZM7VKLwe9LS7mQgPodPGeXlpJ/QHB5aA6fxEcSx3XizFI1zVUx1aolTbeZCrItmJ3y
yE90WaUAATadJ1oeHhn8FgsK4/Za4tg1ExlJ7jNKQLVGdVHvCthCJvr6pXedvG38WALqHSeKW8uS
F3dyQJo5ZxQ0y6tL06GSxz3SYKo7ft0m+eh0v7y2YHat6/S3uCW4sCX8HTwn45ag/ISQB8fL05HE
MdDSAtQSQaEEpbe0sI00O2QaYkUdnujVvS2DWXmdaB5yZPBbLCicX9cSx61Y5sFKkkfYPERhNStS
yEIm/MrK7Tp52/ixQq13nHBLCFTy4NG1tdIBiyzwDDBrz0rY7pEGKbDj122Sj073//E5KLD/wbWX
/+19nf8h+KX7OuUPmR+m4WnHkUy8kWdozslrG4lbXklam0kyDnfyntWeKtQr1A515nki5o8MfgsF
WY4EXcvEq4zWilZ2gajuv4kCwZjNPWb6ygHtOnnH+KGYf8cJY54mlMy1jyxVem0TvKzSgdpAHBhM
FHkA6Y5fd0g+i3mih/Olno3f1+p8aa2+znN2rc7lBwTLI8evPEcy8VS10ForGZKmbINS65yTzVVR
lCv39USX1mcWrp6XH4/bI4PfYkElEnQtE28EypN6VQzbik8jyrSktxoyla9fetfJ28aPHd+844R7
7DXrggKdppqxI4oKtIUsmaSte6SBgXf8uk3yUUZPf+1pTSPLR5LOpK6Aa6WuKmlgy0lYLbXZGlOB
4kyvKepzAhTLbic6aB8Z/BYKcowEXUs6Lx5zTtGuNMLMgPe1crXiFjN9FYvsOnnb+LHt5DtOmBno
K4sskM4KXZHWnFVGlQxYgeX+UwM/IF86D//+gJfPm43+Fp6zD3iREKLAzmTb/nk+2mPrHzI9jHJk
+UgmXuoqczCnStJSRcM0aZbkGQmg0gB6ZQZG47rYqs584rbykcFvsaBwabmWia9UpgNo8VmCuJXR
qZMpYK0hE38tLbtO3jF+KG7fccIirzYa+1wElOvqXQ17t1br8jKX9XukwY12/LpN8uHSQvzAa228
vz8xyqUnxnWes0+MIj8gmB4UVtDQkUw8ZBFb7om7W2o2JVGnloothJwHTbUn+lL3Odtoa55Yb48M
fosFUSToWia+rzEZmxrgioq8ZC5czaXUGjExfP3Su07eNn6sguYdJ7zCnXvjijQYG3TEwoutDFbG
rpXoHmkw2JsomySfxa29diucQ8tHks6T5lhlSOqltdQnrES6KHVRmcIZxflXQxvmugjV1oml5cjg
t1hQOEWvJZ19uoy5DAGjun+i3svMULqMmOlr+7Lr5G3jx+r+33HCjB4JmCsYUEcmGr14Z3YWaDAb
3SMNorzj122Sj5LO9tp3of/GL1Yow4W6/3/y/D+XFgb7AZHzg4tFP8eRTLxMkOoAyWZeqcwxEg6S
BIMNWL0o1Sd6XYMll4qofDxujwx+CwUpRoKuZeKbCcDMpSyv0ZbQa2t1sEzEmOmrWGTXyTvGD8Xt
O04Ut1QIRmHFybV2EMo+YYKb4/Rmdo80lMw7ft0m+eR71GyvLVjWS228v8ctXorb6zxn4xZ/xi3j
QzGM2yOtFzJr701Hgu6c3LQmtDGTa+6tOJdS+gt9zQVzObV84pTwyOC3UFAO4/Za6wVry3IpY7VK
UdwKtem9jDIwZvqK210nbxs/dkr4jhOeEtJoCi6rjWHeeZKT2nKsiCyz3yMNrmXHr9skH8Wt/wHw
ARBO0SPFIpmHQe2emjqnVVZLXjOnbh1YF+IUfp02EPZCVscqJ4pFjgx+CwWhRoKuFYuIdGrVqsso
Ud1/BXUpA3h5yJS/Hka7Tt42fuy04R0nvAoqmVbvWKV3qU26ZTJkxlJFZp73SAMq7fh1m+SjBJS/
tmBULj3Kvy8trJ8vLb+D5+zSwvoDIsND4i3hkWKRXnH10UsSE0ujZ05OfSVHtrY6FgN5ojtLXtTZ
spyoHz4y+C0UpDkSdK1YpPLsDQd3h/hKWc9MJmydY6avpWXXydvGj14p+zdOeNog3Hsj7ow2S+1Z
CIhsAheuefA90qCZdvy6TfJRc1d/bcGYL30h4nvcyqW4vc5zNm5Ff0L4gzzcQR0p+CCSZtpWKnXV
NFuTtGh4YhyqpcAarb+mHVJt3GT0M8WZRwa/xYI4EnSt4KMO0r7MJkJU8KHZnWlVIKkx09cRzK6T
t40fK/h4w4m7cRmMNWfvmd375IFDuC1vjcRqW/dIg7Ds+HWb5KNTQn99wV7cI8tHahv6NMCJM61q
PekgT6NnTdykESzz/itxPG3NNfJqhie6DBwZ/BYJKkCRoGu1DctnXdLVjKOlJfdC2UfhSRgzfdUk
7Tp52/ixpeUdJ5yiOWfuzGtJ7oaoGd2gTGz9+a/V75EGFdnx6zbJJ28tAq8tmFxrK/1taRH8vO7/
t/CcXFoE8w8I9Efm8M3nUMFHnzRrrQmL5zTngJSHjLSA8+w+dXV6TTtiJc5m7UzjqSOD30JBApGg
awUfCLn2VkEaWvQq11lWsTYza8z0VfCx6+Rt48fq/t9xorhF90GTePAyJO6sRcXMURQzGt8jDQx7
ft0m+aRhnMCvBttwKU6+xy3Rlbi9znM2bimAsAeEBR98JHHcYRQtUNLAIgm8cFpZWtIyaGEGbzO/
4raAVSiLrZ64Z3dk8FssSCJBFxPH03gMQHKL1ttKtdfZxhyrhUz8dcNj18nbxo+tt+84cXt6G6ut
UY3WcnWpGUmEa3OzNeUeaUCzHb9uk3xyX+dpWfRRskSWjySOFy8CniMB9J5EAVPRCQlK4yZlNej6
9LAvK9wAGOqJt5Yjg99iQaErryWOh6Gg1G7aJaol1LHYHW0Bhkz09X666+Rt48eugr7hxAkolL6e
f4Z6cxu9jdwHm9lyXE5+jzSo7U2UbZKPtoT42oIVu/Qo/760XLgK+lt4zi4tJf+AQH1A2BqWjySO
c2FqbJKgzpVqVUjUlNNYNTtXAuT5RIdem04seqoX6ZHBb6GgQpGgi4njzjp6ZyFY0dJic1H3wm1Z
zPR12rDr5G3jx2qS3nGiuO2re+Oea9XctY+Vi7axxrLVxDHfIw0ZZcev2ySflLu+LNvDkf8HN7P+
p1fK/pfgV66UCb72tFoufRfj+4NQLz0Ir/OcfRBq/gkhD+NwvT+SiR8iUkqFNGftyRpa0kySuDCU
wQvR/In++q8yBQ3pROXbkcFvoSDJkaBrmfiCq/KTBJtydKa1HLAMQls5ZvpKl+w6edv4scq3d5ww
E1+ht0m5aRlujdHWrAWsMKG26fdIg7vv+HWH5LMHociDC0aWj2TiR3ayNXPCCTXNqpywNk9Qs9ty
mE7y8nBtlK3aGnJij31k8FssSCJB1zLxS0Zh9Ocfoqhtg1RZa7RaMIdM+lVeuOvkbePHikXecMIp
Kmyg0Mqo0lEdW5ngo03lalhGvUcaSt6bKNskH03R/NrT+rXM97elpeDndf+/hefk0lJQfkCgPEg4
+jmOZOLVuzNoT83qTLlVSdwyplnWMNI5AF9xi5JHtgxUy4njmyOD32JBGgm6loknJyhlmiyJ4hYR
UNpsYhkCpreai10nbxs/FrfvOOEeW3smqK02IYNaKuZWh00kyVxXuUcaBMqOX7dJPun3L/m1BQOG
yPKRpPOizgrmyVZpaZahaaDmNAipeW9twisz4DBEHVdt9UQd4pHBb7GgMOauJZ19ra7QFSW36Pgm
U0NVrm2tmOnrNXDXydvGj31k/g0n/kpZnTVDXZppVZ3caeQhEwcgtYLrHmlALzt+3Sb5pJPXy7I8
QC/16f6+tNDnzV1/C8/ZpYXwB4TwAzncQR3JxOemUIxHYmJPZtDTqASpV8WedQzQ8UI37XORsOKJ
LeGRwW+xIIsEXcvE+6iTRrMqYb//mpcAaaMCOWSSr2PXXSdvGj/Y7/8dJzx2tSF1TS955YrE1Gsr
UDznOrBJuUcasvuOX7dJPtoS0qsXEeqlY87vcSv8edz+Dp6zcSv8AwLpIRY+Ro9k4kfL3noeqY06
0ySE1K2PhIZchHkNWk/0CUuKCVP3E/d1jgx+CwU5RIKuZeKpDhqlmYlHRV6SuVQqzNprxFTw61Vu
18nbxo8Veb3hxFe4e+2ZKOduE1Wdc80ufVFFbmTjHmlQlR2/bpN8UuT1tEz5kZUiy0e6DHCeQj4s
zaWcuJKmKXUmJ5zNHdkH/qpRKmzGpZdVjk/RI4PfYkFhzF3rMpCpGq8qTVa0JTQYTLkR+YCQyb6m
6K6Tt40f2xK+44RLi49Zc1u6GkGexblIU25i1mdTvUcaOOuOX7dJPnprodcWjK49yr8vLQpXlpbr
PGeXFv05pSTH1zBMjhSLGNbWWmlJM3sSZ0rdcaXRqcDgqr3AE92WQ+YMM585bTgy+C0WVCJB14pF
XOYyRyIKm0RmwVZw4eDCIZN9XfrddfKm8YNNIt9x4uLMJVZy7QCAprkMXkaLtdrUSfUeaSDamyjb
JJ/U/Qu/tmCsl/r3fItbhc/r/n8Lz8m4VdAfEIgPCito5EgFTSsKS70lmy2nWqonnOxJeBH7sDJW
f6FPUKzGqKsdj9sjg99iQRYJulZBQ1odOEPp5NF6myesqaUKt5DpXxU0u07eNH7wa77vOFHcKmgp
pS2VuUy8SlllNJuzV1DQdY80sPqOX7dJPiry+vVlHy4QWT5SLFLqzL25JQf3NAA5uUlLvFYrhn01
0l/RNZDLIJt2ohfpkcFvkSDN4dJyrViEqVpnrAtWdJCtULwR5uq1hkz/mqK7Tt42fuwg+x0n7A7C
DiqVa/XVB2chs5ZbXwoO5HqPNJjRjl83ST5qYPOyLA/RS326vy8t+UL98O/gObu05J8Qgg8M6/7l
SG1D0WU0CVLj2hP5wtRFWwJxgqbZVx5PdC2wbGpGlHkibg8MfgsFCUSCrtU2OEthajULtihuPS/N
rVX3ETN9VbHsOnnb+LFXuXec8Cpo4b7YF5G1ge4AA2xkalQmtyn3SAPBnl+3ST7p6iPy2oIp/s6u
PnrhE1D/5Pm/NoxTLj8gnoSk4c9xpOADl7lDwzTXyqn1bMkntaRafayC9R+vcrq0ldFrZjtxBHNk
8FssiCJB1wo+vM0KYHmqRDVJZQgjq7DIjJm+8ji7Tt42fqwm6R0nilvsY1bv02WwKKyi5qLNelNY
3fweaZC859dtkk8+3SbyWunRJbJ8JHFMsiRz51RL9kTNS1JqM5nl4cpEsvBV/tqg8UKdWU/UDx8Z
/BYL8kjQtcTxMAIoJGq9R7UNZDZ9LTSTiCn/6xLSrpN3jB+aou844dLCsjpm8NGxuIotJRq9q9TR
M/g90kCSd/y6Q/LhFJWH5Uvlpd+XFr2wJfwdPGeXFv0Jwf7I8dJyJHHsY2AXkJTRRlqAlhCqJ/al
WHTo9PVEpyITahnd1zget0cGv8WCKBJ0MXFcaOa2equZgrhdJNbrP/ZtMdPXudKuk7eNH3uVe8OJ
E8czj67iU5Am5+HVBbyrkplb4XukgSnv+HWb5KPEcXltwZx+55bQ8MLp/u/gORm3hvATwh+AHP0c
RxLHUnoeWiCx15zQiJK491SyKvQOalCe6Ew8PZdafZz4CveRwW+xIIsEXUscr4Hd3VGFLSrUIssZ
u0JDD5jeunHtOnnb+LEr3O84YeJY2xhIubRMKiprdBXNrY3i0pbcIw1Y9ibKNskn3bielvHlPfgf
XHv5397X+R+CX7qvU/6Q/aEUhsiRTLy1iR1Zk6xaE64MqRcfyUduRGsu+sceu8Bk4lxdT1xNOTL4
LRTEORJ0LRNPhcbMJqWLREVeQzib59mwxUxfr4G7Tt42fuxu7TtOFPM4GyHmjqarrJpLZ0E3qGjS
FWY4Ucxpx6+bJB99rlHKU/wD8FLnrO9r9YVPQP0WnrNrdS4/INgfEtbKliOZeK59ULOcELkm6iUn
cKY0kMF4NXK1J7rSFNI1uNOJouojg99iQR4JupaJH1nEFaxa1agDX4E6n//qXjFgemvQsevkHeOH
4vYdJ4pbkOaj8excoMBSbm5zOPuaVHSNe6RBBXb8uk3y0Z14/QOUB+ZL92O+xy3753H7O3jOxi37
Twh7kFD0cxzKxOfVB5eVyhozgUJPUMpMa8niXIxWoyd64yqAeXSZ63jcHhn8FgsqkaBrmfiKYqMp
K0+O3o2B1dzQHDVkKl9vU7tO3jZ+7Nj1HSeKWxpr9e6lUSPKi9gR+1zAS4bnOu6RBkHY8es2yUdt
kvQP2R7sGFk+konvKlyNS8LePIE7pK5YUs5tQVtzMtbXjsZ6WU1VZ6XjU/TI4LdYEEeCrmXiBW0t
wiHUcpTR66ZUm/MoHjEJfOVud528Y/zQFH3HCYu8mkDJPjL2nh1aZSMuLnUNHlL4HmkoAjt+3Sb5
6DVQX1uwjJce5d+XlgufgPotPGeXliI/INgeiuEO6kgm3uoUWCOn1XUlBRqpq4xkUpetbr24PdEl
s2Qzh2xnijMPDH4LBWWKBF3LxCsokNgs1qN2KyPrzJQnQO0x09er3K6Tt40fa7fyjhPFrYCtOhxo
9swruy/BKWsYg+We2z3SYFp2/LpD8knc2msLRtc63n2LW4cL3UF+B8/JuHXIPyH0wRYuT0cy8ZM6
eO0zDfeWch+asjAn0JWrSate8hO9LKpjlD5aO3HsemTwWyjIQ0HXMvGFp2tdBd1qVFRd+kCSbMN7
zPSVLtl18rbxY9/XecMJ43ZYB2VotXsnn6QFmXhZdWDRDPdIQ+G849dtkg/jNr8shyv9kS4DNKx0
WJ6qzJHmyJRY1BO3TFlIqhZ9erhzzc3Ioc8TRV5HBr/FgigSdK3LAAzpbWIRdoyWFi7ehvvqpcZM
X1N018nbxo9dBX3HCeuHHXtDGtZnMWSW0tSo915GqwByjzSo+45ft0k+Ks601xaMf2uRl1/4BNRv
4Tm7tKD9gGB9aHiFuxypoPGaJ07IqaLmlNeg1GduidHWalKMcT7R1ajk7l77PJOJPzD4LRZkkaBr
FTSQB4zKI2P4FW4lsg6jEFILmeRr87/r5E3jB7/C/Y4TbgmtEFuxqtVXW3WZl9ondvdauPZ7pMFc
d/y6TfLR6b7/AcrV5uXf45YvtF74B8//tYLGGX9C6AMk3EEdqaAhRJTVKIloTaYCqXkrqQ5X99I6
wHrF7Zrk5Ll3PPE13yOD32JB4fy6VkGDy/KsdRGs8CMIlet0cOkYO7l8leHuOnnb+LGmzG848ek+
IwCZNmHNshwbjKxtNoCCNO0eachQdvy6SfLRJxeflnN5eDxFjxSLaJexgCgNaiV1o5K0wEwLqKIt
4cLt5WHLKsgZxU5M0SOD32JBFgm6VizSRUQ6EhbJ0RQdQrMU6dNrwPTWYmzXydvGj50SvuOEby25
dlNERPPee4PJuZCZrTWLVr7/1KAPRNjx6zbJR1tCf23BSr7URef70nLhE1C/hefs0iL6A4LLAyic
5kcKPrwbkq6elsyVpgslorWSY+kmtOaE/ESf1tWw9VbWidOGI4PfQkGMkaBrBR/YqdJ8/oPuQdx2
lWqMrXIuMdNXL9JdJ28bP3YV9B0nXFoGMg3JtMowp4ZiIrmOvpBHd7lHGqjkHb/ukJyOW/kD/Pqe
jXFkWY/UNkh30AKQuLaVdBVMCNwTYx9tKbKs8krxld5ZW+5NTtxWPjL4LRZUIkHXahtM28CCNtx7
K7aQKhUWUMI8cTxwjErKqiS3IIMqXEEJ88TxwLEqqcDQNCyFO9EOlHRiePXAMSrpbWGQGvo40Vj1
bDlwEJwqB5rmeeN2g0CODaIc8MAx5tiIwNA1zye3ORyUMI9wDxyjktkXhtzZZ+ZTyWQe4R44NiVz
eYSPK3Z2SyYoyUQ54IFjVJIFGTQswr2hQCXE88oDx6qkAcPUsAj30RooqfubZr/icLtSjErqypB7
yJ7b9+fu11FeOFYlMLxK1LDURi20PpT0mPaHVxcckxLFRQYNS29+P5weif1TLjhWJRUYkoZlckul
oIR44rjgGJVkWRikhjFPfE1+tBw4Cc6UAzdXHiEm6gEBOUY8wl1wrDnWgEGuOELiGEBJ239eueAY
lbSODBoW4RZaQQmxf8oFx6ikJ2DIGpbK7bsAJcQWaBcco5IxkEHD0hK1hwuU7J7x5IVjVDIFGIqG
pXMMTyWJ2CzkgmNTcuMig4ZlcAygJBNjiQeOUUlOwFCvOEPiPkcCJXV/Z4wLjlFJnQtDnkEGVaWB
krZfR7vgGJU0ZGgalswxgJK5v6/bBceoZJaFQVqI8cTX5GfLgYPgr5QDN4AOdTn375UDTfO8cG/B
MMeIWY0HjjXHBjB0zfPBNdd8KhFmYeILzgf2uVfchSFPtgMGKCE+7HLBsSqZwDCuGzA57uTsUok6
2gPHqKS2hUFamHLiDSgOzj/++bOOzL9/+uO3vz59Vku//Pbr359/0n9cx+v2/QG7/Tdif9zdvDZi
9xlilzzztxF74SoxiOvw1Pbf57rgGBOvRWCYmvyV+5YIlHSi6PLAMSrpHRk0LI2b0oMSooWSC45R
CbZQylHDMrh1taeSTLypc8GxKcl1ZSg3g+OO156JN3UuOFYlHRjSlVIQrvcvKNntY+iFY1TSy8Ig
LaRYDzwJjxZdJ8FfeYTXEapISvHbE3zBKilUoWb0kGK7R078D+cDx+YbFxhE07xxrRqeSsru0fhe
ODYlN+7CIOVmONFt4If1v327rY+jgN/+q1Q+c/qTd/PKgCAlhZLmbPN7I4LoT3Bw508905/pvOeC
Y0z/hAz5ShJio+YCoCTvlzkuOEYlOS4MRYJwby5QCTE18cCxKinAUDQsmTtQFpTsntrvhWNU0urC
UCRU7oiTpxKm854LjlXJXBhqDjHKwlBiy693Teul193ecMbrGu+y15euVnp539Xevst39obT65Yg
Lb903fLWdUdk7Bqua7Q74ktXG1Hed7W37/K9dqsOKUOoZ/GTphI7R7/ifGBTXsVdGIqEye1efSph
Ou+54FiVVGBoV8ohOu5C70ybORcco5LSFgZpQVI6MNc/u2pxEJzZOXpzlcweTQk5VokZuQeOMcdq
BIaueZ65z2pBCbFz1AXHqKR1ZNCwFG4N96mE6bzngmNU0hMwDA1L4w5eeSph2sy54NiU3LjIoGEZ
mcpUUEL0QXLBsSppwDCvVELkTqoHJUKsAXngGJXIylAKO8Q/lTCd91xwrEpgeC1Rw5K5VnegZOwP
ry44RiUjLQylhOJ4GFtnOu+54FiVVGBIGpbGlSRPJZ144rjg2JTcuAuDtJBTPzCrPloOnARnyoGb
q9QQK1XjPXOMaWXogmPNsQYMcqUacqRGQ1CS959XLjhGJbkjg4alcN+jP5UwnfdccIxKSgKGrGHp
hWIAJXX/9b8LjlFJHcigYZncGeRPJUznPRcco5ImwFCu1ELk9lWCkrlfDrjgGJXMlaG0kArF8FTC
dN5zwbEq6cBQNSzC/XifSkbef9nngmNTcuMig4YlV2rFB5QU4ofjgWNUsjA0DUvpVEkCSog9jS44
RiW9LAxyM0g+MKs+Ww4cBKfKgaZ53iKV55BjxKZGFxxrjg1g6JrnI1KjISiZxMKEB45RyazIoGGZ
g1oJQCXE88oDx6pkAsO4Ug/CheWpZBIb3VxwbEpu3IWh9JC5U8pACbHRzQXHqqQAw9SwtEjtuwAl
laijPXCMSmpCBg3LyNT+Q1RCDK8eOFYlMLzWeKURhHvqPZSMGPcrJBcckxLFXRjKCJnby/xUwnTe
c8ExKkkJGJKGpXKNHECJ7E/VXHCMSmQsDNJClXlgVn20HDgJzpQDNWme905NI545xrQydMEx5lgW
YBDN88ltRQElRIcbFxyjkrYylBkSd3j/UwnTec8Fx6qkA0O+0gzCHdwGSsb+3NcFx6hkFGTQsFTu
lLKnEqbznguOVckAhqJh6RzDU0kSYlbjgWNTcuMig4ZlRIoBleyXAy44ViUVGKqGZXDbsEEJ0fHA
BceopK4MZYbJfY4ESoiOB19xPvCDMcUFhnbJDcjNjUAJM7x64BiVjLEytNDyicYBZ8uBg+BUOdCu
GkPiPp+FHCM6xbrgGHNsCjB0zXPhVgKeSoQ4ptMFx6bkxkUGDUvhnpmohJj7euBYlRRgGBqWyj0z
QQlxaJoLjlFJqcigYenc9wuoZP8dkguOVckEhnlJCpHbcwxKiGM6XXCMSlpbGGoKwu3hAiWdGF49
cIxK/qHu3FItt4EoOiMhld4eRX4ygEBIyEcehGT+cZEX3rp9c0q7dCDQP03TeHlVHcslS6X+eLym
K0YNS+H27j6V5O2jH5xwbEpuXGTQsNRGdRIFJWU3S5xwjEoKZknSsHTPCmm/AZoTjlFJLytDC72c
2JF7sBw4C75fDiTlqhJion76zxzb74DmhGPNsQEMcokE4VqOPZXsd0BzwrEpuXGRQcNSuM1lTyX7
XbGccIxKEjLkS4Tdu/tUst8VywnHqCRHZNCwTG4pylPJflcsJxyrkgIM5ZIcUqEK16eS/a5YTjhG
Ja0uDDWHzO3dfSrZ74rlhGNVMleGHuQDhhJ7+ryj0djuimW+rvEue/3oam3W167233f5Ut+mP687
QukfXlf+47rbXbHM1zXaHXG92gwxzteu9t93+ardqo+Uyq0se9LUSLxAeuDYQnHjLgw1h8YdigFK
0u4UuxOOUUkSYGgalsEt3AUl21uBnHCMSj5iaGGUEx1wz5ayB8GpUrZpX8DEnRuMObb7QdkJx5pj
HRj6JSVkbq8oKGnEG7kHjlFJK8igYSmTWhyOSojxygPHqmQAw9CwtE7VjqBkeyvQvzhvrNtu3IWh
FnaH1lNJi8R45YFjU9IiMkwNy+TWH4ISISY8PHCMSiQuDLWGxB0JgkqIx6sHjlUJPF5TvKQGadSP
F5QQ88kuOEYlfSCDhiVz/dtAyfYBUE44RiVDgCFpWCq3ngOUECOOC45RyZwrQwuzyoG36qPlwElw
phxISfO8CTUT8MyxTgzhLji2HOsRGUTzvHNdaUEJMUHvgmNUkgsyaFjGoMKCSojxygPHqmQAQ9Yf
dMwUAyjZ3s3rhGNUUuvCUFsQrqUdKNk+Jf5fnDe+6N24wFA0LKVQYQElc78ccMExKpkRGTQsrVK1
/FPJiMQPxwPHpuTGBYaqYZncIx6UMCOOB45RSS4LQ+0hcS1YQQkz4njgWJUMYGiX9BDriQOez777
HgSn3n2bcmVu5w7kGPGN3QXHmGNtIoPmeeWWUYGSTpQDHjhGJR0Zuoald2qAACXMEO6BY1QyIzJo
WCa37xyVEOOVB45VSQGGcckIcVATrU8lc3s3rxOOTcmNuzDUEUQoBlRCPF49cKxKJjBMDUvmWrCC
ku1jr/7BeedCtxsXGTQsbVDrLkBJ2+0M4IRjVNLSk0GihmVwc88PJTPG/cerC45JieIuDHWGyBWu
oIRYP+WCY1UygOH+07UkOfBWfbQcOAn+SjnwONQauGaQ7PeBf0ZiCHfBMeaY1IWhTrYkASXEEO6C
Y1UygUE0LDX5DeEztv1yQHHe29xHcZFBw9K408hACbF+ygXHqqQAQ9awdK5lOCjZPvbKCceoZCRk
0LBMbvssKNk+9soJx6qkAkO58g3IzT0/laTt8+KccGxKblxgqFeMIXFtPUBJJn44HjhGJTkhg4Yl
ZypTQUkh3n09cIxKygCGpmEp3OJHUFL3Fwu54BiVVFkYpAfp8cBb9dly4CD4J+VASjO0OmvMXyoH
muZ583zRS2N/KvwvnHe+6KWBDF3zfHRXJcR45YFjVdIXhpxC5D4MPpUIM1fjgWNTIstczbhiCsI1
6gUlzBDugWNUktPCID3EfOILKD6cv/7lW30y//jzTz/89vOvaum7H77//ddv9B/X53X78gO7/fvE
ft/dfPbEHjFIzjHXLz2xhyZ/5fYSQeJtHy7mhGNNvAkMU5O/F2o6G5TU/Q8OLjhGJbUtDDmxXV1B
yfaB/044RiUdHk85XlHYFqZPJVn2X6ddcGxKssjCkCUIdwweKCEal7jgGJXkCAxJw1Icl3fOTOwc
dcExKikdGf5csnNik+/Rousk+CdDeO0hp9R7/3cER6wsoXFdByHFtk+NcsIxplhNwCCa5oNb3PJU
wrQec8ExKuljYZASejlxwvxX63/757beRwG//U+pfN7pT97NJw8EKRKazJ7Tl54IcuUcYqZqfEj/
sf8C64JjTP/RgCFfMYfUqGn6p5JCbNlywbEpKXFh0LBkx735s2yfI+eEY1XSgaFoWKrjUrPJtJlz
wTEqyQUZNCxdqMkxULJ9vIgTjlFJRYaqYZnclwtQst112gnHqKTHhSGXkCo1RwFKiH7Cf+G8sRuM
4gJDu2IJmRv1nkoqsWXLBcem5MZdGKSHMk50Kj5b+R0Ef2X13fjSi17TPC+NCirkGNFP+C+cd35I
uXGBoWueV+60b1AixLuvB45RiTRk0LA0bi0yKGGm5TxwjEqWabmhYemdqshBCbGA3gXHqKQlZNCw
jE7NloISop+wC45VSQWGecXCHtwGSoj2Oi44RiVDFgYpofa0MJQ4IvSCXW9+7Ha8NV7XepdjvVoN
UfprV/vvu3yx4+2f1x1JPrpu+q/rzszYNVzXaHd+cLUWchuvXe2/7/J1u7mGOKkFvU+axkx4eODY
QtFwwqPEK9YgXHsqUFL3Xw3+xnnjOHjjIoOGJQs1NYdK9ktZFxyrkgIMScNSuZ7GoIT4+v8Xzjsn
T2/chUF6qOPE1P/RUvZv8DevQ/rPUrYkzfPOfZnGHNt//XTBseZYBQbRPJ/cfvSnkh73S1kXHJuS
G3dhyC2k7Dhe9e3zVP/BeeeH3RsXGPIVG9v5GpQQ+2FccIxKpCODhqUmaoAAJcRiWhcco5KcgKFo
WJrnTGFv+6WsC45RSRNk0LB0LlNBSSd+OB44RiW9A0PVsIxCzTA8lQxmxPHAsSm5cReG3NhJF1DC
jDh/4ryxi6fiAkO7Yg+JaxkPSgrxePXAMSopaWGQHtrMB96qz5YDB8GpcqBduQfhDq+AHNs+Et0J
x5pjFRi65nnlGEAJM4R74BiVNEEGDcuI1JohULJ9JLoTjlVJA4ZxxRESV6WBkkm81XjgGJXMvjDk
ETL3KempZEZivPoT5519JW5cYJgaljaoNUOgJO0vkHbBMSpJAxk0LJNr5ABK8v5CNxcco5IMDDVe
cYbIHYYMSvr+49UFx6ikt4Uhz5AKxQBKxn454IJjVDIiMCQNi3C9LUAJMeK44BiVzL4wSA99zgNv
1UfLgZPgr5QDX2ozV5PmeRa/9aUpRmIMd+ExJVlSXoAQzfSSKAiUkvcrAhceq5QsCKGRqZGCWKQQ
g5YHj1lKA4iskanciXYohVjF78JjlVIXCI1Mm37fkvQixMDlwWOW0gGiXHGyU64oZexXBi48Vilj
LBDlJkwUBEhJxO4xFx6jlBQRol43oXBrVlEKcWqnC49VikSE0MgU7ox4lEKc2+nCY5ZSAKJpZBrX
NRKlEMd+ufBYpdS6QEgPM9YDb9pnS4SD4K+UCI8vBk+uEtmD9zHLiJPUXHjMWTYBol8phej49TLF
xEzhePBYpfS2QJQUcqJGCpTCTOJ48FilLLM4QyNTMzXfCFIkERW2B49Rys2LEBqZwRVvixRi5PLg
MUupADGvdNNlCgKlFOJt2IPHKuUDiCLs9lqUQjR6cuExS3kUk3LFqJGpHARIyXF3gsqJxyjl5kUI
jUzn9jiglO1FVk48ZikNINKVhG3oh1LK7pDsxGOVUuICISPEdGLX+cES4Sz4fokgylVyiNwM25Jl
u9W5E485ywpAyJUy2+oXpWzvvHDisUrpFSE0MoXbtr9I2S0mnXjMUiZAZI1Mc1xHk2KexMjlwWOV
MhtCaGRGpCaTQEqJxMjlwWOUcvMCRNHITK4ZLUrZ3oHhxGOVktMCUUpIiUpXlLJ9kKUTj1VKaQBR
r1SCCAWBUtpuMenEY5XSFgiNTOYis0jZLSadeMxSOkA0jUzh9tqhlLn7ZdKJxypljgVCRkjpf7YN
+Sw4VSI0TfXKrR6DLKvMlIUHjzHLbl6A6JrqTahVhiglEW/DHjxWKWkihEamc805UYoQI5cHj1WK
IMTQyAxuXydK2e6e4cRjlVLLAlFqiNwpVIsUYuTy4DFLGQAxr1TZFy2Ust2k2onHKmU0hNDIFO4U
VpQyiWLSg8cqZUIxmaJGpnEdcEBK296Z4cRjlNLyAqGRGZlK10XKfongwmOW0gEiXamFWKkHG0jZ
793kxGOVUssCISOInNjhe7REOAn+Sonw2Ivw5CotCLewB7OMGM5deMxZNgBCNNULt/oSpYz9EsGF
xyplTITQyDRunS5KmcTI5cFjlTIRImtkGvd9B6Ts93By4jFKuXkRQiPTuU8ZixRi5PLgMUspAFF0
Cdis7aM+oRP6hK63X/peO1Tzha33Wfp6OQlFymuXe+E+X2qI+ueFc7j/+sGFZ/zPC1dhBBsubBVc
5aPL9RJfu9wL92kQ3MLgTiBEnO0GFU481mi0tkCUFiZ3nilK2T4/xYnHKqVHgKhX6iEWap4XpUzi
/cmDxyplzgWidPYYF5AytvddOfEYpYyIEE0jU7guCChl+5AqJx6rlFwWCBkhy4l962eL3IPgrxS5
40tFbtNUr1xHliXLiCLXg8ecZQMguqZ6TxQEStneyObEY5VS6wJRRohchyeUsr3vyonHLGUCxLj/
sIfgohRiIt+FxyplRoTQyExuEeMihRi5PHjMUgpAzPtPkEh9TQApc3vflROPUcrNu0CUGQq3Iw6l
bJ/i5cRjlSJQN0nUyFRuowZKafsPWhceq5Q2EUIjUyc1pYlS+n6J4MJjldIRImlkGrfLFqUQo48L
j1XKjAuEjFByOfCmfbREOAnOlAg3V5nsHDFmGTGcu/CYs6wAhA4W7K7/p5QUZb9EcOGxSVFehNDI
zEpBLFKIkcuDxyxlAkS+JLLdLFEKsR3AhccqpbQFosaQKlXRopTtA8mceKxSagSIopHJ3NI0lDL3
1yS78FilTEEIjUxpVLqiFGIzmguPWUoDiKqRadxqcZCSEvE27MFjlJLSAqGR6Z2KzCJlf6ncXzxv
PK8/KS9AtEtSiJV6JUcpZX9+14XHKqWMFWKEWuKBN+2zJcJB8E9KhJRGKHPU2P8tEZCrpiBCzcVi
llVi5PLgsWZZFYDomuqlUcMnSmHmcTx4rFLaRAiNTGvUQxGlMPM4f/G8cVGA8gLE0MgM7qR8kCKR
qLA9eIxSbt4VoodUThxgig/pr3/5Vp/QP/780w+//fyravruh+9///Ub/cf1ud2+/OBu/z6533c3
nz25Rwp55ljzv09u5KoSklDj8ZJ6xPuBB4859QZAzEskZK6NAEphJnc8eKxSpCKERqZxTbVQCtG6
yYXHKgVbN+WokRnc8imUMvczxYXHKmXWBaLmELlmxCiFWOPmwmOWMgEiXZKDRGouEqTktD+P4cJj
lHLzrhAjtNIOjIpHC7GT4J8M57WFFFOXL21ZykkzPXNbFjHJtnu1O/FYk0wiQMglmV24iFLq/soL
Fx6rlJpWiBJGPNGX46v1v/1zX++jgN//p1Q+7/gn7+YP4s5tN5IaCMOvEnG/xudDS1wgJCTukBCX
CA3JcBBLgpJdDm9PF8sC+Z1lp/xXBwkhwWbTX3/lsV0eu/wfnULMyfU02ng0x3/MVZIr3EIkfgCI
7YwmPOoPQAGIJJ/C2qjEH6W09SWrtzzPOZ3deRFCItOGrRRikmLBo5ZSASJLZAa3+xak5OVLNI14
lFJ23gmiZOe5FVaUEomPjwWPVkqMAFFkAAhcaR6UkojEx4JHKyUNhJDIRG5TGEpZLiNvxKOVMkFU
iUzuVMeGUtr6iq8Jj1ZKyzNEd63EA+ZPx2aDB4IzO/dSlabeuGOgUysjRi4LHnUr6wDRpKn3QaVf
KIVZsrPg0UoZZYIoxXnuu0GUwizZWfCopQyA6FssLnKrFyClJGI1xYJHKWXnRQiJTOZqxKIU4liq
CY9aSgaIIZGpgVpMRSnMEpMFj1ZKCRNE8S70ebNK9qP+qxrD9cv9ab+dr+TPXzz88N3t6eWL2/Or
X+/uf9wHvhfX359uvztfvX4439+efjp/dH939+rq59PDw6872Ue3r1++fPq58T3PfTg/POxTAffw
8vTL+euXd9enl/u/v/vh9us/3/ztE3Y59ZSzjzcnf/q2vJEj84mrv36BhOLPv71H8O7+9N35KZwY
w3tw5H87efsd45+3//rN2+8QNzcx5tDzTb72f0F88v35Wn5oJ9hhzlc/fPunQ7F0tX+N/dPpdqf5
SeYKn31+9f3p4erNL7t5EjA1e8Av/vzpl4J4uv1dWsuvpx9eyX9+u/O9+v4MjKebm/tdqryP/Nqn
VBbXuZNI8DGrxJfVb3meM5Wo+GV19vJZH4UaFVAK0feY8GillDBBlOoCN1SiFGLJ04RHLaUARNhi
dYm7VQ6lEEueJjxaKS3OEN31csTGnEOTziPBmaQzB2nquVFzD2xlxBqyCY+6lVWAiFusxtsem1/f
m27Co5Sy8yKERKZza7YohTgSb8KjlRICQCSJzOCOoKOUuD7HMeHRSol9gijNeW6ihVISMXJZ8Gil
pAgQeYvNBW7hCKUQV+qY8Gil1BkiZZdDeqoEYMcSgPPrj9VSh8oHq99zPPW43uplj7vgPS8sdSgP
Lq7W8tSDx3sf3DIjWPFgreCW58dVF0a/7HEXvOflgktziSveizidGAAseLTR6B0givQtmdsBDVI6
cbbPhEcpZedFCIlM7VSHi1IiMX+y4NFKiQEgqkSmcwcMUQpxFM2ERyulxBmiu1GPKC1ybJJ7IDiV
5NatdOe5TnFqZUSSa8GjbmUVIJoEK3TqqxiUQpzte8vznJl/bxOERCZVauVvkkKMXBY8aikNILpE
pmQqMiBlEAv5b3meU8rOixASmcZBoJRAjFwWPFopIQLEkMgMrkwhSiFKCpnwaKXEMUGU4TyXaaMU
4n57Ex6tFLzfvvgtDhe5k9wopa93tCY8Wim9IYREJgdqSROljPUUwYRHK2UEgAgSGd+OqKJx6Gz4
SPBLZsNw7Ay4KhfVx60semI4N+HRtTLhRQhp6m1QECiFGM5NeLRSQgSIuKWdkOsUUQqxGdyERytl
gkib3/m481CTFGLksuBRS2kIIZEpnqpBglLq+hynCM/zVkEWXoDIEpmabaUQHa0Fj1pKRwiJTOMu
r0UpxN5lEx6tlDEAokhkOle7EaQE4qI5Ex6llJ13gkjBBW5XOUpJxGzYgkcrJQWAqJsPLnKHMiYp
61vlTHjUUsoEEYcL7Yj6DsemCAeCX5Ii9HelCFWaerJc8YuBuCHPhEfbymoFiLb5wF67jlKIG/JM
eLRSmkcIiUzhasmiFGYdx4JHK2Vax+kSmZ6p73dASmSGcwsepZSdFyEkMmNQkUEpxBl0Ex6tlBgB
Ymx+p0vUWixKIc6gm/BopaQxQaToInehI0phlh0seLRSHkNIji+RKdw9GiAl+dWJnxGPUsrOO0Gk
yK6SoZTlyupGPFopASGCRGZwG6NRyvIClRGPVkpuE0QcLvZwwEz7wBThWPBLUoRH3yI85krJea5T
hFa2XqfNiEfbykoAiLj55CJ3rw9KWS7hYMSjldI6QkhkCjenQCl9NZk04tFK6RUg5B/XEzV8gpS8
fIm2EY9SSvYzRMoucNcJo5Tlc1dGPGopDSDy/o+L3N2FKGW5hIMRj1ZKygghkSlckWCUkleTSSMe
rZQcAaJIZOqgenuUsrxAZcSjldI8QkhkOlc5GaSs1+4y4lFLyQBRJTJ9UPviQErxxOTNgkcpZeed
IOJwqR9REPbYFOFAcCpFqHIGzEeqqU+tbHVxy4hH3coGQLTNFxcK9SU8Slm+RNuIRyslVoSQyCRP
JW8gZb12lxGPVkryANElMplboEYpleikLXi0UmqYIFJhCy9PUohlUAsetZQCEGPz1XkOAqV0YjZs
waOV0uMEkapL3IIQSmGWHSx41FKgow1e0vza+1w9LPiyc51f/XlO+uaHh59Pr66/317f/nh79+vt
i5/2Ql976a8XIZbT6ab2Fzfn6/rCp3N/Mc6n/uKbc7kO/pRiPEepLnZ9Tqdv2zffpn7++5fJfOLL
N7/v6v7n66sPVn75B0+9UAserOILyf929+d9qvXw6muZ6bz+ecc81ZJKPp9LiPnpiNflL9MOYFNG
f2efgbJL6W30aaCPv7l7/UokfPvy9cP3V/u1bKdvTg/n7erDX073H8rPfPggs1x38807SHg19+eb
O6kE+NXVpwIhc9W3HEK2F7S7Ou2T7l/OV/KTL/affJhZqiuF2lWMzYb4vsSER9tUcpsgUnWVu1QQ
pKzXqTPi0UopASCCRKZz1XVQyvKGXiMerZTaJ4g4XB7pgOzz0LT5SPBL0uZHm+8ec6XqBrdbB1rZ
euE/Ix5tK2sRIOLmm/OVggApjfi+xIRHKaX5GSI1F7ileJCyXqfOiEctpQFEksgkbps2SonrC74m
PFopMSOERKYkaoEVpKzXqTPiUUvpAJElMi1ScwqUUoh8wYJHK6UMhJDI9E5FBqSs16kz4tFKqTNE
Ts7n8VS60t5TXiwu16lTP1j9nuOpx5UUL3vcBe95URm1Nw/OLj394P7eB6/WqVM/WCu45ace15u/
7HEXvOelgsvmu/OWF+jFTuyUNOFRRqOHGSJ1F7mdQCBlvU6dEY9aSgOIKpFJ3DI/SsnEVMGCRysl
9wkiDlfGOCBXPDbJPRCcSnKrNPXCXX0OrWy98J8Rj7aVlQgQTZp6S1SniFIqMSG14NFKqQMhJDKd
u1UUpKzXqTPi0UppCNE3P9jdziBleCLJteBRStl5J4g0XOTqfIOU9Tp1RjxqKR0ghkQmcRVNUUpc
/8LchEcrJRaEkMiUSEGAlPU6dX/zPGu9hRGho41eItMy9W0CSunrHa0Jj1ZKnyHyTsgdsgYp63Xq
jHjUUhpAhG0njNyescdSkidGHxMenRThnSDicM2XA2bah6YIR4IzKUIM0tQzd+sNtLL1OnVGPOpW
1gEiSlNvXOlolELscTHh0UpJAyEkMoNbTAIp63XqjHi0UiYI+ebWhUF1iiilru+pNeHRSql+gsjB
ZU+tsIGU9Tp1RjxqKRkgskSmcmf/UMryHdtGPFopoyCERKZXavYJUtbr1BnxqKUMgChbiM4PCgKk
hEjMhi14lFJ23gkiR3ZjL0hZr1NnxKOVkgJAVIlM5e6VRSl1fX3XhEcrpcYJIg7XwxG1HI5NEQ4E
/48UIYTmUuopl3elCFWaeucuPYBWtl6n7m+e8pzD+c4LEG0LyYVIdYoohVnHseDRSukzRE4uD2pO
AVLW69QZ8ailNIDoEpnG5SkgJQYiw7bgUUrZeSeI2FwsR3xjip30lz/fSA/9093tD6/u7kXTtz98
9/r+JH8499v13R13/afnfr63+a+eu0dXSuxlvKvn7tL+Bzdzg6a3Xg3QiEfb9GIEiLGF7CK34g1S
1qsBGvFopaQxQeTsEnd5I0hZrwZoxKOVghDJbyGzwwdIScTRehMepZSdFyEkMpWrEgxSmGqAJjxa
KQEhgkSmcTfIoRTiaL0Jj1ZK8hNEHG6EIypmHJqIHQn+H8N5ya6PUGN7x2ieJNA7FhVUbGREjaq/
eOJz5mE7L0DELRR2TwxKIUoVmPBopdQyQcTseu0HNODPp7/2z3s9HwV8/v+TymaOf+Tb/EenEHNx
Po82Hq3OPObKxQXuokH8ABDbGU141B+AARBJPoWJ2+MNUphqmCY8Wim9IoREJnOFFlHKICYpFjxa
KcMDRJbI1ExFBqTk5QvjjXiUUnZehJDING73LUohtjOa8KilFIAoEpnBHRxEKYVIfCx4tFLKDJGr
85WCmKSsr/ia8KilNICoW6guWt7Tl3JfX/E14dFK6X2CSDthPKIQxLHZ4IHgzM69VKWpJ67wK7ay
sb64a8KjbWUjAkSTpl64XS0ghamGacKjlLLzIoREpnKnAVAKs2RnwaOVEhCiS2RapyKDUpYvjDfi
0UrJGSEkMoM7IoFSiGOpJjxqKR0gxhaa843q2EAKUw3ThEcrpZYJIjcXAwWBUphlBwsetRToaLOX
yKRmuUJb43pHa8KjlFLjBCGRqYHKaFEKsZ3RhEctpQFEkMj0TGW0KIUYfUx4tFJyniCSdyEescR6
aIrwFvyZ93+8N0XYuXJ3nqvniK2MGM5NeNStrANE3EJ3MVLLjiilrY9cJjxaKW0ghEQmc5tQUAqx
ndGERyulI0SSyNRANVeQ0vz6grkJj1LKzosQEpnGlR1HKcSxVBMetZQMEFkiMxo1p0ApyxfGG/Fo
paQyQeThArfsiFKIjXYmPGopAyDK5qNr4alSaKFAbbL59ZtfrcGmfLD2PZufH5dcaPmyx13wnhfW
YHvz4FHaUw+u739wYQQrHqwWXObHZZdHuexxF7zn5YLDcJHbhYg4nfhYv+F51nXnnXeCyMPlRs1X
UMog5k8WPFopAyGqRKZy525BSieKBJvwKKXsvBNE2vlSPiBXPDbJ/Qv8/6pg8eiQw2OuPNi9rtjK
iKrLJjzqVtYBoklTH9wFiCglExNSCx6tlFwmiOJd8FSmPUkhRi4LHrWUARB9i94lT638oZROJLkW
PFop3SOERKZxkZmkECOXBY9aSgaIIZHpXGkekDKI+0xNeJRSdt4JogTnuapYkxSio7XgUUuBjrb4
LQaXuJKmKIWo+W/Co5VSBkJIZEqlZp8ohaj5b8KjlVIRIkhkWqQig1KI0ceERyul+wkieZeyP2Cm
fWiKcCT4JSnCu74HK0Ga+khUp4itjBjOTXjUrSwDRNziTleo4fOxlOzD+shlwqOTIrwTRIkucWvE
kxRi5LLgUUsZAJEkMoWbaKGUtH6axoRHKyVVhJDINMuqONlnYuSy4NFKyR4gskRmNCoyKKWtpwgm
PFopLUwQJbnAfRmHUojDaCY8Wim9AkTZYnKxU3uAQUogrho04VFKCX6GKIk9EYdSiKsGTXjUUhpA
VIlMLdTSIUpJ6+u7JjxaKalPEMm7nOsBM+1jU4QDwakUoUpTH4HaRoKtLK+fpvmL5zn3+ggvQLQt
ZucHNadAKcw6jgWPVkoZE0TJLnE7zVEKs45jwaOVUhGiS2Qyd2wZpQwiw37D85w3TQkvQkhkaqAg
JinEyGXBo5bSAWJsMbPlB0FKZJYdLHiUUnZehJDIDK5qDEoh6keZ8GilPK4flTfvt1hcCFRkUEpf
bSlGPFopvUwQpbhYqciglOXdV0Y8aikDIIJEJnOTJZCSls9dGfEopaQ4QyTvSokHzLQPTBGOBb8k
RXi00egxVymucKdXsJUtH2Qz4lG3sgYQUZp6C9RGQ5RSVuc4f/E871U5wosQEpleTKUs1+U34tFK
qREg0har895w33BOy+eujHi0UtqYIEp1oVAQKGX53JURj1ZKR4gskYlcmv8HdeeWa7cNQ9EZCXpR
D4+iPx1AgaJFPtoUQTv/mkCam2zdx6E2dYAC+QsuvLzIY5myRIGUmnaLSSceo5SbFyE0MrVQEChl
+zRNJx6rlDwBQjQywm1wRyl199OKE49VSm0IoZHpmYJAKbJbTDrxWKVIBIimkRlcW0qUMna/TDrx
WKWMtECUGJr8zzbcnwWnSoR2SQ+RW2iIWcZMWXjwmLNMAKJfuYfEnfoAUiQSb8MePEYpNy9CaGSK
UJFZpBAjl/I89zwz5QWIoZERbrUTSilEhe3BY5VSOkJoZBo3QY1SKjFyefBYpdQEEFMj07k+nyiF
mXbw4LFKkYEQGpnBHb+HUjpRTHrwWKV0gEhRIzMbla4gpaX9B60Lj1HKzbtAyAiRO+cUpeT9EsGF
xyolR4BIVx4hc1OHKIUYfVx4rFJKXyBKDL39z7YrnwV/pET4bqERcskIpVHLNTDLiOHchceaZTUB
RNZUF27xMkrp+yWCC49VSs8IoZHp3NvnIoUYuTx4zFIaQBSNzODSFaVsN9pw4rFKmSuEzBALBbFI
IUYuDx6zlA4Q9cozZG61E0jp230hvvE8dfnIzYsQGpnCNadAKdvnPDrxWKUsEKKREdfFi50ZfTx4
rFJ6RgiNTHddEN2Z0ceDxyylAUTTyMxIpStIGdsbNJx4jFJu3gWixDD6iV2/Z0uEg+BUidCvGEPi
TvbBLEtEieDBY82ylBBCg5UnBQFS9ts8feN56leEsQznQyNTh6uUQoxcHjxWKSUjhEamcb3vUMr2
Bg0nHquUhhBTIzMqFZlFCjFyefCYpXSE0MhMbrUTSNlv8+TEY5Uy6gKhr+R5hagp1w8ai9YxNzvU
mi9svc8ZX7tcn+2xyz1wnw81UK1Xjpr60sdrF5aPL7zZodZ8YbNgWS+X9LXhscs9cJ8GwSkUrs/n
grO/VM6FxxyNuUCUFKpQECBlyv6rgguPUcrNCxBJI9MyNVG1SNkvcl14zFLaAqGjUD+xRftoPXcS
nKnnlCuFzg3rmGXEd3kXHmuWdYTImuqjUqm+SNl/IXXhMUvpCKGRmdyi6h+lSIz7Ra4Lj02K8gJE
uWIOkVs/DFL2e3I58VilpLlAlJsuU8MnSiG2A7jwWKVklFI1Mpk72g2lMMO5B49VijSE0MgUrncw
Smn7Ra4Lj1VKqwAhGpnCLb9HKduHJ7/wPFXKzAihkamD+kK5SCF+Ph48ZikNIJpGRrjlriAlEXvL
XXiMUlJeIUoKcaQDb9pnS4SD4O+UCCm1kPoYNb9VIjRN9cbt5cYsK/uffFx4rFlWEkB0TfXBdTZC
KdvHVzrxWKXUsUCUEjLX2QilMPM4HjxWKcs8zrhiCbVTS2VRCrG33IXHKqXPBSL3UNqJU2nxIf3z
X7/qE/qPz39++vvzF9X026ff//nyi/7n+txubz+428uT+3l3896Te5TQU0lR3npyD83/lqlQY+oN
Yh7Dg8eaeqMDxNT875GCACl5+5BQJx6jlJsXITQynWsxi1KYyZ2vPE+cBlTeHyFK1MiM5iql7+9E
ceGxSukRITQykzuaY5GynykuPGYpmCnpijVErokNSpn7e6tdeKxSpiwQJYU0+oFR8WghdhL8neFc
Smgjpxa/G80Rq4bMnf+CSUZ8OnbhMSfZBIismV65uV2QUra7UzrxGKXcvAtErmHmEy0oflr+7OW+
nkcBv/93qXze8U/ezTsPhVxbyDKnjLeeCll/hW1Sv8LlB7D/OuvCY/4BVIAoV5QQuXdqlCL7U1Yu
PFYpkhaIIiG5vuMXIV5SPHjMUgQgqkamcFPWKGX7ABQnHquU0RBCI9Mi9fqIUiYxqHrwWKXMCBBy
RWG7YoGUmojCx4PHKOXmXSDKCJNrsYxS8v6MrwuPVUrOANGuOEPk1o+jFNn/LOfCY5UiK0RJIc9y
4P3pbDV4EJxZuadck935tGTZ/uSuC485yzpAdE31zI0UKKUTb8MePFYpvS4QZbLnh6EUZsrOg8cs
ZQDE0MgUbjIVpEgkZlM8eIxSbl6E0MjUSk27o5REjFwePFYpCSGmRka4he8ohZli8uCxSikRITQy
LVIQKIWZdvDgMUuBB22NGpnJdcVCKWP/QevCY5UyBkJoZCbXoRylEMsZXXisUmYGiHQp4aCKN5DS
iNHHhcco5eZdIEoKZZ6YYj1aIpwEZ0qEm6vGkLhvo5hlxHDuwmPNsoQQ+UqRXRSGUur+yOXCY5VS
K0JoZEqm5k1QCrGc0YXHLGUARNHICJeuKKXtT5i78FilNEEIjUyrVGQWKcTI5cFjljIBompkOpeu
IKXH/RLBhcco5eZFCI3M4HoyohRioZ0Lj1lKBQjRyIxBTQihFGb08eCxSqmLFI3M5FosoxRm9PHg
MUsZANGulEKsVPGGUoh14y48Vil9LhAlBYknOiefLREOglMlQrtqCkmot0/MMuKQNxcea5YNhOhX
SuzBsSBlMMO5B49Rys2LEBqZyj0UUQoznHvwmKVUgBgamc6dpotSClEiePBYpRRBCI3M4HrELlKI
kcuDxyxlAsS8Ug6RiwxKIfb/u/BYpUhbIOpN16nIoJRGPGg9eKxSGjxoJWpkaqSe9iBlEkeMfuV5
at108yKERkYSNWuPUojVVy48ZikDIJJGpnXqZQmlEKOPKM9zj99S3gWipNDSic7JR0uEk+BMiSBJ
U31whwNglhHDuQuPOcsmQOQrZfYrIkoh9ga68Fil9LhA1BKS54H7MonVVy48ZikVIMqVSsjc8Xso
Ze7vRXDhsUqZCSE0MrVQkUEpxIZJFx6zFAGIqpFp3C6RH6W0mPdLBBcemxTlXSCq8870FonVVy48
ViklrhAj1FcgaioF2xevt9922zQbL2y+z/ba5Wavj13ugft8sE2zXniGLq/eZ/3wwjUxgg0Xtgqu
abmc3CTjwcs9cJ+PChZ9tsxOva8gTiPqOQ8eazTaWCBq9T0WrEXisCcXHquUngGiXamGws3JgJQU
979QuPAYpaS4QhSd5z3RTvpskXsQ/JEid75V5DZNdcnUCyBmGXF6lguPOcs6QHRN9cYdko1SMlHP
efBYpeSKEBqZznX+RCnEYU8uPGYpAyCGRmZyc8QoRYg5Ig8eqxSZC0SVELnvTiiFOOzJhccqpSHE
vJKExJ0xj1IGUc958FiljIgQGpnM/YZRCnHYkwuPWcoPD1q5YtTIVG7vJEjJspspTjxGKTcvQmhk
hFvZuUjZzRQnHrMUzJR0JWFPsUMp2/uunHisUsZYIEoKI+cDb9oHS4Sz4I+UCN99B0OuKmFwK80x
y7Y3sjnxWLNsZoDIVxJ2qSxIKdst6J14jFJKXiFqC5FbVL1I2S0mnXjMUjpAlCu1kLjIoJTtle9O
PFYptSKERiZzJzuhlO2V7048ZikDIKpGpnD9alDK9sp3Jx6rlD4RQiMjlUpXlLK9UNuJxyplIIRo
ZLpQ6QpS6vZp5E48Rik3L0JoZCb3tF+k7H5bd+IxSxGAaFfqIXPnnKKUuvtt3YnHKqW2BaKkMPOJ
RsdnSwQFf34z1o9LhHbVznb/xSxjpiw8eKxZtkxZdE11GdSsPUrZPt79G88zGycqL0JoZPqk3ilQ
SidGLg8eq5SeAGJoZCY3FwtSJBIVtgePUcrNu0DUERJX5i9SiJHLg8cspQHEvNIIuVKRQSnMtIMH
j1VKXiA0MqVQkUEpzLSDB49ZCjxoU9TIVK4tJUoh6iYXHquUsUBoZIT7lIFStg+GcuIxS8FMSRoZ
mRQESGnbG1ydeIxSbt4FouQQy4nut0dLhJPgTImQkqZ649YPY5Ztn7TlxGPOsgEQWVO9N6rPJ0rZ
3mLkxGOVUiZCaGQG1/wUpVRi5PLgsUpZIMqVBnvqP0rZ3jHsxGOV0uICUWeIXEWLUraPUXfiMUup
AFGvNEPmNrijlO0tRk48VilTEEIjU4SKDErZPkPMiccsZQKEaGSE2xEHUnoh3oY9eIxSbl6E0Mh0
z73lrdf9CSoXHquUmgCiaWQml64oZXuDhhOPVUrPC0S56cqJ04/PlggHwR8pEfpLiYBcEtlty0uW
7c/4ufCYs6wBRL9yDMWzK0rrk3gb9uCxSpkLhEZGOIhFCjFyefCYpXSAGBqZJlSdAlLG9gYNJx6j
lJsXITQyg+veiFIKMXJ58FillAwQ88opRO7MFZRSibdhDx6rlDoXCEmhcGtYUMr26WVOPFYpAhA5
amQkU2U+SJlp/+fjwmOUcvMihEamF+qVfJGy//Nx4TFLmQCRNDKTW0EJUva7Jb3wPDVTyiql5JCr
HHjTPloinAR/pET47isCckkOyTfLiOVsLjzWLKsNIPL9jx0pUMr21kInHquUERFCI9MSNUWAUohP
vS48ZikdIIpGZjbqofijlB63d/s78dikKO8CITWkRqUrStnuO/qN57lSUgWIev8Lg+v2gVLq/lcE
Fx6rlCoIoZGZ3PcdlCLE27AHj1WKZIAQ7SDUywpRk6QPWin12Oteyyjzha332et6uRpynI9d7oH7
fKhl1H8XnuPVC+ePLzwZwYYLmwXP9XISRNJjl3vgPh8XnCXEQdVPiENMsX7leWYXAuVdIERC4iBA
Sto+C8KJxyjl5gWIppHJ3MJolLJ9+utXnucekKG8C0TR1/10oFY8W+QeBH+nyE1JQkwzxfJWkds0
1Qu30hyzjPgE7cJjzbKaAKJfWdj1wyilEVI8eKxS2iJFIyNcuyeUst1N0onHLGUAxNDIyKAgQEom
dtO48Bil3LwrRA+1n/jmjw/pn//6VZ/Qf3z+89Pfn7+opt8+/f7Pl1/0P9fndnv7wd1entzPu5v3
ntxDS9CSc355ciOXtNCECjWm3vbx1i88T1zBoLwAMTVYlVtpjVK223s78Vil5LZASA/SKQiUUoia
w4PHKqUARIkamcYtPwcphXhyu/AYpdy8C4R0trcCSiGeKS48ZikTIJJGZnDfXVBKJqR48Fil5FVK
yaHK/66twUnwd4ZzyaH2nHp9Gc0RSzq7xwGTrOwPXF95ntnHVHkBIutE/BwnWlD8tPzZy309jwJS
/V0qn9fZk3fzTv7n2kNtc4703Q8AuUaIhUo4/AFsH/bkxGP9AfS4QMgIiestskjZ/4bqwmOW0gGi
aGQy93UbpRALcl14rFLmKkVm6JN6KQApldhY7cJjlHLzAkS9SgyFm8dHKdtnNzrxWKUUAQhtKBwq
N4+PUipR+HjwWKXUjBAaGXGdTKl9v0R24bFK6QjRNDKN68mIUoi1Xi48VikjIoRGphcKAqQIsYLH
hcco5eZdIXKQdqKd9Nlq8F/qzrRHchoIw3+lxRdYiWR9lu1GIHEIgQQCcUsIjZzEgRFz0T2zHOLH
U57uGZbEQ7tih+MLzO5sdz31VtkuH3GO4P+1GyykialuRdES2TTLxPIdqCo81CwTfAoRU93xIoip
KAV7lVV4qKKoqSh2y3jLyq71mYpScLtxFR6yKHYGIRHCFUVmKootqHFq8FBFsWIC4WJkhC0qySei
QMEpmyo8RFGQdwoRIyPLIKaiFNz8V4WHLMpk3U2xGBlVBjEVpeBA7pGn7HZTqihKzyAkr3vFqim5
AKYKD1UUUBMIHiMDtmjtYyqKXT4kV+GhimL5FCJGxpRBzEQpaD41eMii6DmEaAHWWHdedYqwJnjO
FME9MUVQIqa6LYvqJMtMweOyVXiIWYa8U4iY6q7sBpupKGJ5jVOFhyqKYBMIuWWiZVVPEpfcvVSF
hyqKNDMIiXS6aNlxIkrJ3UtVeKiiKD6BUDEyouz07VSUgjOeVXioooCdQsTIyLLzFhNRSq4KqsJD
FcXABELHyCgo6u0nolhWUOPU4CGKgrwzCMlbzualguJWIFe4vX+qaDjf3/jb/oft3dWPV9c/XzWX
Yb/334eGQXDGsKFxPfeNHQbeSN9DwyGM1qrAB8MQvfdeuEEysFo+flmsJ748fN9md9NvXlny5a8k
HRLihEO/D3eXl79GlX5AwSJJ/Hnz6OdXH7f9BUqH7COTvB97N3Sj2vQ7jAWGtvv18IHPtl6y3nTa
eMm7FIt4PAR/UtwnjP4+w3x7/+tV3z788wnEZrze/exjQmxeu91hEYeh17LzMIxDb7XBBPKeMzBc
6U4PXLmuf5YiB8ZOkP8txxNNRvClT9w9UPEKVNSGI3iShasKLBkKZT6rd6Ayyq1BpcviVoeKHDed
ZLGyAkuGQrS4WTziF3a7690yqh9ub2++27zvzy8OEbnxu33YfPDFF58ix/7m+gr/hJPK27v95uIc
P/7tdykKq2q0r5k2i5+zrUlFzh6XZAGowJKhECl7HKjjmFdC9dH5/jZc4QQ/jiKHmf0eOS58XDSI
9qNS+80nNx5XKT4L41YEN0rDRROGwJqhG3zDA3fNIDwbgh9Y6NnGXw1/fuTRbBPtNpIPXdPhiN64
IJXF8W7UYOceijj/WkF3KUsysxYVNTNlkoVXGRszFCJkpmiFq9Hzz6hMWdzqUJHjZlIsUq4xHklT
FjcFa2STKqr/alFR46Z4ikWLNSpkVVT/IZW2/3odIVoou4ZwIkrJBZpVeMgJ45IQZo0JgyoqzWtR
kRXSKRYjTQWWDIVITcqpGoP8lEoXlh51qKhx02kWU2M4zVCIEDfZclFDoRlVUelRi4ocN5Nk0TXa
foZCpLipkyuSOVTvn1+d7yPDxX92ViNbrdaoIaCoyqpFRU1R4EkWu0aKQlGVpVphTYUU/erD99vD
mvJmhzYxHVO2JK9h6/24l7+59P3ZPqCTeOzgq4+3m/3dzc3FOf7547ffjaWevwy3YYeavPpqCsWJ
Gi0T3Ua1xzdf/bP9wCh6bUNofNfrxnEWmtBr1vCBgfdMjpzpVx/X31/76uOI+OfHu0HE9fKxEc5D
IzV+vEP2xkhvVK9Nz8bw6hv3Tvph2OEOQ/x8gK2yWzlumd9CtxX61cRKOLRClun/0gL+n6vzxx2H
sL/dXf8aholdiLtDVhUdSp2mvC3Y16zBQ+0OrJ1BxArRqBUOtqx7ImdF8JwTOX8e2p9zyVaXnQef
Zpkr2CiuwUPNMgcTCLNlsjWmaLd6Iorjy5+sqsJDFMXxGUSMjCt7jnYmSsGJnBo8ZFHMBMJumWo1
FJ2VmoqiCg7Y1uChiqLUDEKq1rii1ZapKAXXAFfhoYqixQTCbZlumSl65m0qChQsidXgoYoCbgbB
XSuBpWormXH4RHXGW6Nlw7keGjxj4nGCBrbpelDcQW+NGRGdMctGz+1oOM8/fJLz5a+kHbJJhxYd
Puk7o3THldWjSRw+4SM32gduuoGlWNTjtulJcZ8wmnH45GWI1OGTDjOnC103AhMjdFKKvhsdWA1j
Z2ywz1LkTp8i/1uOJ5qMgcXT4opU1IZjIMliWQWWDIVyp8VuK1A+DStQWVYQt2pU1LhZlmQBU4El
QyFS3LhQa1Atvr/+gUpXoCLHTSVZoEbbz1CIFjejj6NNCRX52AcTIgg/hqYfcLnFMq+awVnVgB3Z
yMXQBTVOFkgfzDbRbmOM0400nW8UAwXjoBh+S8pDyWq03Znutiwz61CRM9MmWbStwJKhECkzNZMr
UDlRFjfNavRz1Lg5kWQxa/QoTpTFzao1xidXUnlVoyLHDZIsdpXMLqy8rIXjsY9lVMXHPg7vKeX1
+yLLWFH9V4uKlj2ROslia9Q0GQoRsoe33NUYz2ZURfVfLSpy3FSKRUCNkSNDIVLcpKnRL86oiqqj
WlTkuNkUi64y98tQiBQ3c3JlZgkVL6qOalFR48ZFkmWVfpIXVUe8tbbGSDKjKqqOalGR4wYpFidr
tP0MhQhxQ+l4/dUyy0RRXVKLiho3wZIsUCOHMhQixU0ytwZVUV1Si4ocN5VkETX67AyFSHFTUEOh
GVVRXVKLihw3m2Rxa9SToqguEa22YgUqWVSXIFWVGTc1blKkWECu0d5kUV0iWqNWiVtRXVKLihw3
SLK4NeoSWViXOF4js6dUqrAuqUNFjZtiSRZVo8/OUIgQN9lyUSOzZ1RFdUktKnLcVJLFrdFPqqK6
RLZS1cjsGVVRXVKLihw3m2RxNeYkGQqR4qarKDSl0kV1SS0qaty0SLKYNda5dFFdIlswNXqkGVVR
XVKLihw3SLEYvkZdoovqEllpRWlKBUV1SS0qatyApVhclT2lDIUIcVMts6rCeZDlT879YwdDVMvN
GhNoKCrBalGRU1SlWARfowSDohJMtYZBhRQ9+eTcwRavYav0yTlE0S3jrAJK4sk5C9JrqW3Tj0w0
yinbGBEGbE7cqw4bYxf43z05xwQL4FxjrTdN14FrwHlswMEIBZ0Xo7bzJ+dk2HZ+CyZe7ij77RDi
k3NTp6GVhhc5veDJObeVulWs6CGIacqX3Ahag4fcHUwqe83i43syfS2izjhqD1KACNbi080Wc0yF
vuF4pLthpuuFHJU2tkd04512qmMDOpB/1D7ny19JOwRJhxYdtRdhHGA02FowpPOj9j0zoeMDN1zy
JMvjduVJcZ8wmnHU/mWI1FF7PvSiR816DjCirB60HowO1gw4pIrwLEUOYE6Q/y1Husnw5fc81qQi
NhykTrIYqMCSoVDmOHqgMqKGQlOq5fc81qQix00nWZSswJKhEC1u1q5BtfiGxZpU5Li5FItlq8Rt
8Q2LR6rHWxhKqMhH7YW2TlocGKGTvjHCmGaQumvEaDx0Ogxe+cmM6sFsE+3iZAo/J63Eb+CC9V4L
p4ek7k7yFXRffsNiTSpqZso0C6wxEiy/YTFSyZY5tQbV4muOalKR42ZSLJzXYMlQiBQ3Yd0KVMtv
WKxJRY2b4ikWKWqMShkKkeKmeI0aZ0q1/Bq/mlTkuOkki63RZ2coRIqbYatkU1HlVYuKHDeXYrGS
VWDJUIgUN+dq9EhTquXXL9akosZNJ1hUy7SowJKhECFucf1ZVaiYl29G/GOls2olrFFgLL9psiYV
OUVNikWxGs0lQyFCiuqWGV4hRZ/YjJjasq6CrcLNiAOKlFABJbEZ4aRAqrFreqF0o6yTje973oAd
wIgxaCPs32xGMM4dCGx0vWG64WOP24fWxJf/KNc7cLKz43wzwsNWyG0vt924lchwvMZv6jS4ouvr
pqkHy1/+VIWH2ixBzyGgdaJsVYO+Q6PZVurWld3HPQ1GwWVIVXjIwXATCL5lrDWgU8GAjB2aXgfr
2OAabH8c777su2aI98/2o+x1Nyjo+YDoWuFYaLVRVvj8HZqcL38l7ZBLOrRoh8bKPkAvgg0QEjs0
wQe8OxRPDFieFNdyfYzwSXGfMJqxQ/MyRGqHRippNbfKBS8ldxYzyHNQJnjgPgzdszk5b7liJ8j/
luOJJrP8MqSaVNSGYyDJ4kQFlgyFcosLHqmE5itQLb8MqSYVNW6WpVgkNxVYMhQixU0puwbV4pNF
NanIcVNJFisrsGQoRIube5hvllCRd2jG4D3I0TUw4LXT2muDP/muGfEnEaSwik1ui38020S7Dd5N
LZuxx6vmue2Vd5wNWid119atofvi4+s1qciZaVMsIFfJzMXH149U6uGdPMuoSi9nOVBYgBW0WX4l
U00qavY4kWRZpV9bfiVTpBKtcCuMkoIt3yeqSEWMG2KkWKTUFVgyFCLFTbkaldaUii9ft65IRY0b
T7JoWSOHMhQixc2INbKJL1/MrUhFjptJslSZ+2UoRIqbY7xC/bd8v+EfKwRly9gKA5UoOLRZkYqa
ooInWdQKSwKi4NAmj1Sga6Toyf2Ggy0rKtgq3W9AFNVyKSugJPYbBlBsFLjBwJm0TfBMNF5I3DXQ
nVK2Mx244W/2G0I/eotrcU0ncc/PG3xrF1jum9GqQbh+GJ3u5vsNTmyD2Opu23ts81sTH36YOa3j
knKR0wuW1vmWmVbKoheozFK+YJ+jBg+5O9AzCClaa9kKb99Z9bVBa4LnvDbI/fnaoBmXaQFESVRn
WVawgVODh5xlbgIhYqpbWfRKjakowJaLcuT5J7cYkXcGIW2rRFFkZqIsf0NOFR6yKGoCIbfMtiBN
CcRUFMuXi1KDhyqK5VOIGBmjiyIzE6Vg5KrBQxZFTyBUjIyFojY8EUXy5e9nq8JDFAV5ZxDStYwX
QUxFEQUdbQ0eqiiCTSD0lrmWu6JCayqKWv7UaxUeqijKziCEbZlMnqmwGWcqVO+kDt42LE7Jje9M
45SARvYDtyoMajAO0Vlv5MDGbuSqzz9TkfPlr6Qc4tIkHSKdqYhFaZS9w2sZhO7G3rjUC6ZE6Cy+
uAlfrNrJFIvQD2l/UtwnjD55puL4z8+uryYcqWMVvRo17zqhtVbcK+NNb4zrR2etkoz5Zyl4dTIz
TqH8fvsDSjac3Vf6321u7vY/vPbi8uzifAz9r/1FwCR9ffPw8ZOOPdsepgzY2r7N/9Tmu9d4tnvC
6t+n0Nc3U+Znmzfz7eeaJip7cY2Bvfr+7JAae5xq9T/dne8w6PE3h2WIl8/tOiedcKxxvcCrDwz4
BoQI+B/vjINgO6PvOxTst34MVxuhV+I+dqXxt1tcNYk+4GLJDieEP4SIjNJfXPy6Od9vfvAX2NiS
GEZUxcDe6F5MZPC3EaL3V3HhCFXEv0KUaCUFAkJWBel2SBFB7m42w/WlP7/Cb4rK4BLS3T4uocW5
85zEtVzaUpLLcHm9+/UMRzBcOrn4Lm6cht2LcHb8+52/+j5sLs+v3tSSKSkweL8cf1wJ6Jdwtb+9
3oUGx4fvNu8+BCmOcBrAJa0qVtfq17vzW1zkRjGwIe1vN3f7iBALgH4chIZBNc5z1/Qd/mS98I3u
FXCQcYQy6yFe32CT/zkOHNvD/yLWET1my+0P+y32ks/3iB1+C8Nz9Ob5w1Z4/CGGtjmEtrkP7fMc
hzbfHTuJ88twfXe74eJxceZhMSTtM5T6nJWdvr+98xfYOx/y8p9FGY4sj+bfiK3lzz/FBrM22nEd
+GOM0sUF9qAvwoZj73V7c3H3/ffYf7zocQzexMhdjxueglBQqUvDZERlbq53uCkUPRcaNjfvfvpl
MkG0eTghU2j140MIxl2IZYKxQkhgb+Di3+6uw79gSeO2Wov8LHwfrsLusAMW++1jk4zzgYiEi4xh
hw3zOQ7S/uL5oY9/rr9LY5WnwxHr3WPherB3aMKxD9Nr2Y2xSFmNP69peNo0b7Ff2I9hd3Zooz4u
Qp/dXp8dkYxVox1ANaB7aJzx+J8RpxxYejse8P2rwg6b5q3VcBeMcNpWHlefHuGGUcZeH6diEqDp
e9c3o+qGxjJlhXSjCsDXQywb4R4C37wU+Ob2ujm2uBzXFo512plS76dZPIQL3D18KYfzMncluiVJ
61xdq08nLTPWKW+HRgPesqidlw03A/5Rj6wfO2U9+PUQy5L2EOeXU/Z5jjtLElXy1qhyjw/lBm6x
jb9GT79670MMSVyQ2m/CL+eJSZOULePlw220e1iOiWZ/DDtc+UrZctzUqSuma396+WUEdcGoi4Ba
z3FUCwzq4GTolHseBMF0K4VcCWz5rQR1wcgBdEkcqISToRMpgCD5OmCw/JhnXTBqACGN4yrhZOhE
C6DjFbtrNPrVO+9tXru7Q4Gc1q5XssfnJB00SlvZ4F2scY0yCKY5G60dnr2x8TjUX94cDxUeZ7Px
R5znxi+Kfo4ACoQdGw6jbhRjBr8E8AFMYW3wxjNw8lnKOyvZSrIvP81aF4ycnmaOAy2zyZVfl7FL
pI1SuI7EGjNK1TgTNIbaQGOkVdb5zioGuOnBuedIrp3QKn+XKOfLX0k5xFXaIcIu0TvvtTETkX0Y
eHwKlgH+L7FLZAAfXbVhcM6PKRahHzrvk+I+YXS+S/TwDyfmU5tDgwyjkZ2xRmmneuGE6HWPVeaA
4fDd8CzFDNKd0O8JAvLOhRg0swJvJbEaGmY0XlIyCqyAR/yN7/Ego1N/2bmAFK4BthD30IpedMNZ
/O120hvhn7CRx1+nrDrBi6y+/Rdbw69X/vK8j/sjD3ZjXxp/E1dVNmyOYFopZRFCtDCEF+eY2Gg9
bBh+EE+y3uxCzHG/33wTrl7DM3vs2ea1H15cPu6Ep1iM0sc8p7LE+RM297Pzy5uL7+I0oz0KEHN9
+5gsjx9q4qeaLl6vL8wgcfZkJAsBpJJsM9xsMV7P2fNfXgx+s9+99HkpMfE5TreCVazBa4ddM3I/
4LDCJQPDrZdi82I4f/kjXluJzabxOFFDQ9I0WvYWFxVgNAbAOuk29xuKP+OkNWzv5UmrAwsjleyo
XlbpvS0w11sJo0Yecg9lWy5PNp+nrP2+v8TjlB9vrsZ9VO7s8I9Q9jfpsr9JlD0q7ruL8OYTotvW
WvH3KYn4Z+GXEG/v7qUYghGmV8PDEYzPcRjCafrDYfPHWwx6xpRTrpPcMCMkl0EoprQeejt6bnBT
+friALW5i83qTdzwPLtfEz7b32EI73AF4M1YNWz8HQ7f8Y9ne/zieJQcD6G++WDIWalG7UfFewcq
CKFHoYTB++nHoIJyc5ddy50ucXla6Jjlj23XwKGWN0alIISyJRAZmhBqbtdKW3Qd/hSn5PBjDR5q
jCxPQWibipFj2ILOf7mP0VPuL7/Ig2aY7CekzIFheeYy/CRlneFQ0gge+sLjsxfZnWESBZaOhX+p
FP4v5cF9n39/tmMbh8f7EG4++yStjKlXQ+FDGS/Q6P9EprQcdmGiPFU0/SlLvN0IZ9JeBhNgIJdN
irXMZJVNaXvTwunhn/0jpVPKG+fyayWwxjA8wGjdyP++VgpcCWEsZ8ZzCM7yTkol9IDPeuKv5PDv
1UqKt8Lk1wUzl+c9tFv+/GgNHOr45HgKQgpWApGhSf6ohTiQfKbW8ZPDpSvZAaAYJsvukuZsprkM
P0kCWylShsUpw4qV7JFRDBMFRrCkOa3zzGX4SRBYtIzzlGF50jBfftMZzTBVYM5S5niuuQw/SQIL
DinD6rThogwmGCYLrJPmlMszl+EnSWCpky1VnzQsCjM42zBVYJHMYK1MnrkMP0kCg0xOQeG04cIM
zjZMFlgnzYHIM5fhJ0lgA8nO35w0LAsz2ADkGaYKLJMZbHmmuQw/SQI7kUwke9pwYQZnGyYLrJPm
HM8zl+EnQWD51OjqThpWRRlMMEwVWLGkOS3yzGX4SRJYpFZeBDu5LqcKXlxzNGzzDJMF1klzjuWZ
y/CTJLCSLmX45AxH6cIMzjZMFVizpDkDeeYy/CQJDCzp5+kZTtFpR4phssA6aU6ZPHMZfpIENkak
DJ+e4Sx/8T/RMFVgSGawFZnmMvwkCWylnt8TK9hLM5zyq2B1NOQET3l4eioFhU0l2zA5kjppDkSe
uQw/CZFULXPJQe30VMoUNRWCYarAhqXMcZ5pLsNPksAi3UZPT6VMUQYTDJMF1klz0uaZy/CTJLBU
yUHt9FSq4AUFNMNUgS1LmjM6z1yGnySBNUt2haenUkUb/hTDZIEhaY7bPHMZfpIEBpEsRE9Ppcr2
sQiGqQI7njSndZ65DD9JAicXdwQ/PZVyhRmcbZgsMCTNOZlnLsNPksCOu5Thk1MpXXC5O80wUWAE
S5rTkGcuw0+CwLpl2qYMn5xKaVaUwQTDZIEhZY4LkWcuw0+SwIKZlOGTUynNizKYYJgqMOdJc9Lm
mcvwkyTwH9SdWW8kNRDH3/kULZ5Aoid2+Q7KC5dACAlxSiAUubvdJCIXM8lyf3fcM5NsMqmsq8Yz
CTzsKskcv6p/l91dPspKCwxczHCMrIxgMpgtsEVxDmg4gp8sgbVH/SxmOKaimvoaLGlgrsAgUVyw
NBzBT5bAFlCBixmOgcoIJoPZAlsUZwwNR/CTJbDz6M28mOEYVRnBZDBXYIVGsJdEHMFPlsDeoIHk
X4MX68WBZ5e5/tnx0t2ruFj8lh2eCiL62Amhdd/34s2rCa21pvd6lF5570GNOinpnRK9kGpMyGrC
+eXl9bOtKDSzEHYuxaOrVdnsQ5C7NpEdv/axYXYmAHZtGEE7RqRnE73DTCymmkZXdSUMMPdSaInh
BW+lfiksroVoOSWMLsmwzDssd7/ShMthhJjnPsbfGC7C/+52cdYwmv+8vBn+pPLrV5q7Hy/C4nKA
3sZfP603D49f64cf5K+M9+xn4ZSkxjZfb/YxLKdi2y8z11LmTl9m+xNfrmqG9sloY6cPHYXIcsqf
gmyBxojJxYG2mrIsn4j2EXI0Z5pC039F3aop7iejjfZgCZHVWj8F2QKNUR5s5L9Du+ZUxNPRRruG
hchaxp6CbIHGpa7BzxhvDXU1N0Lt0FG84oaqY/BBlmXmu0dC8tYqVUNiVxQhakSp54MVReySEGdM
tIyyGhK7oih6uCjH4IMVRdE9ElK0wugaEnuiyBpR6vmgRdntJeVQMlqJGhK7olh1uCjH4IMVxao9
EuOlL1UN254oFQ3tMfigRdltaNUZky13sobEjiia6sNFOQYfpCiZ7x6JXDLaVJHYFYVX9D7H4IMV
he+S0GdMtYxVkdgVpaahPQYfrChK7pEQvFUOOFSjpPtrDS/PstyNG/Qfbr6/uf3phlzn+Sz/TSQi
Rtkzx4kzzBPdd45oShnhXWB9FMynPmTqVEYaOm9iEHzzY8N632er32sWd33z/CE//jxkkJZyxqBH
7t3d2DktVw6ydyzwSDtrqErA5btG6sRl18vUdRAXY9YN5Ky4j4D+vkcTWlLd5rG1jPrC+nCztz4w
1XEtvE5OmuhC5Ml2cvCqKNKLEHlr+YyQc1SKliLXX581bHtRsvxbw+x+sXnly5PF+KXQSGWRS4ZU
UEXzP2KYsIQxR0lyLhLfmWAENb1I/G+z4fZEvP+2Z+3aDzbk4+SLYXEoDpSz9FdXvwx7RL71V/cx
QDQctUelkVujUczMwd8PJHqfG6WHm6xifitTGVD2iQzhgzwqkW6RWeR/ObBf7+LL/wZlhr19MYwz
eBFiwqnZY6J+/2T42eH3sotc3obLvllmW8PDVVzkZib2RjDFtdbB/T4c+/lmcftwEzaPbFYL08NN
fz/Mmr9/mx8ZKGxWC19eUT8XmxNXLzfjofiPJsQX/vrkxacj/slVjHejkDf3l1fNTd6K2awWI9d8
nLPrs2Ewr3pXX209v8gYeZHiajhIu4yLH+PF9P5i2FE1LD2cK0Gl4Lk2/Dz9eSJC62N1JHe4Xzdv
rL1+GDIorR2MWu/hf0P9Iu/Lz/NYWYwfxjmr1ZbVYURlg/RDr0eC6w0ZEzB0xvWkyy9F6LzXuj8d
xdJziMtMO/4aw7NszbP1zPqzqWjJqmjJWLTPSgzCH11c2exYrc1F3un7+wd/lbu7lV8+LZUwcdnA
jwt126/8z6emNjbPuUvIpXR11Xzrf4wNW5+tHE4t/djnQU0zlNxtahhEQuojkfh5rDV3t4v75Wg5
V7q5y+f3QAfRm5zYlagfrIogLWLMWmsuhFZZ+mW/eOjyG/QR8GqTpxr5cczrzXHh71fdc9xUyRzl
DJRysx/zyu3O6Wz7NUiL02PRemOKBFZ4YxVetWGnws1lAaPmv08JvFs173O7sExxcbGqo34YFlzc
315MlHzGT4JJ4jg1pOtoR5yQlqiOMdE7qo2hDXn1ZHQP6OE0P3K/+ngPR1Uvre8oYUZqQp32xDgd
iHZJJeZipzk7HcW6Hm5d8GSr4Mn9LZlqXIlpB/Z1WlRbv+vFIQ57c7Z8uMxzT8TuEKcV+riojzut
UtoIEyTRQvakS1EQqYMgMvS0s8l3RofTUaxz2lU5b7vssxJzDnJU2ZpjhaWf56An/TLu03zz3WY1
w7ds4s+XUBSqWqbru9u/H0j+Pi7yVCKEJWYntAqxdidT9eE5II5LDDurqhlERzp6HDoFOpWu1Y/E
tD7SwHCPmKorwKMRQxegAumcrABVXQEadSRH3yPm6grwaMTQBehAOtYeh06BTrgCtEfqJqbliDFn
3Qtj0jpqok2RReJMMoSH1BMlrSeRpqCC6Jilfd4Z73NXf323Cqlup2h2+DPHucMPDXYmraXmNhGm
kyKSUkNcrxPpubXRG0+1Ey9C1llzpGq8K3vFtWrHJYZ1TwPQ0S2nDvACRQuW3YIKgQfJCVehI7bz
PeHJ9YR1ToTktLaSDxu/krIpRuloTOXLbiU//jxokFCgQYhlt9ffbAdPzNyjYzGlEC1NElh2S0pE
4T2TIVCQy2a1eVbcR0D3l93WD+7AQ6ttQgqZOiOkEpSHZKJh3gjrtOh90L16EeIshJzR7xEG2KWg
ngdFLe8It0oTapQhIfFAVMqf+D6ozsm/LQU5kK5hB9Jd1aIfuzClqtu0RtsJGfPHEKqUqgr1tb9h
hV9ufN7wPyw4TbhDKedPpvMpoG8pS6soDAjTeaiMHpvhi/dxcbeIg4/7ZfNXIuMXvv3xerO1AOLi
NlemY7n8Pe10DjPalQA7Gag3XyLDt4jgIpHYBUb6JKyyRkrO0zoDNT1OBmpjmLGh14RJqYg3WhHb
2URSEMy4oTZF14wrtEOOn3g2ygOrww8sKbCh2lYpH5mi1ASpmVQ8oVso0zI+60WPoeX05nmB64Pm
Ji0H5abT9Vn2c7zs50jZB8V9dxXPYdGzYVI40CWh7OxdoL20NCqfun++PY5xEU2ngpMisU5FT1Uf
fOS8MzL2Brg97mlujhtN1sLWmLw70KlIuHQMOtjhjaUgCVPlBgWaIMbcpnVGQHRmr9vQFTd14oDR
stt9ONtSbsrgCuxECJyBDa8p751LGYrrPUSFiUOb/b91iv+XnnDrLoZNnpXm4w9hZcTxhgvTvQ//
E5lAOQ4eccPjg21Zhj472RCUiloGix4h2JZz+2XBCAHG2xsjTI89ySgBskZRRPvgjO6Y6YxL8p+H
BSLqZHXPRaKCM+8tj0bESG3Unkrv/8VhgW21EzUm77bQFVcaH4MOtn9yIAknWA2JAk0QvZZrhZIQ
ndlLogytWq1AACNlz8QgOClNGVyBnSiBDTwMnL0kytDDM0XjgNECGwjOOlEGV2BnucAyi8AUBDyb
i9yww/MCT8C6DBgrMOMgnBRlcAV2ogRmEhR4Nge4YTUejAFGC2wgOE5NGVyBnSiBJbMQ8GxGZMMr
PbgYGCsw5yCc5WVwBXaiBNZMQMCzGZENr/TgYmC0wAaE04VwBXaiBDYcLNnZ636NqPTgYmCswIKD
cM6WwRXYiRLYcQYBz173a0SlBxcDowU2IJxRZXAFdiIEZi0Da6qeve7XyCoPRgBjBZYchDOuDK7A
TpTAQoHAs/NyRlZ5MAIYLbAB4ZwugyuwEyWw4iDwfISjKj24GBgrsOIgnBVlcAV2IgTOAlDQzvkI
p+LOHxwwWmADweWXZXAFdqIEFhK0cz7CqbjhBQeMFVhzCE5SVQZXYCdKYMVAO+cjHF3pwcXAaIEN
CKddGVyBnSiBrbQQ8HyEYyo9uBgYK7ABPdhpWgZXYCdCYNFSA9o5H+GYKg9GAKMFNhAcY7QMrsBO
lMBccwh4PsKxVR6MAMYKbDkI52QZXIGdKIGlAB1pPsKpWofHAKMFthCckrQMrsBOlMAaXIc38xFO
1fISBhgrsAPhDC+EK7ATJbCVFAKej3AqElnigNECWxDOijK4AjsRAstHVjTMbIRjaZUHI4CRAlsK
w1lZBldgJ0pgzi0EPBvhWFrlwQhgtMAWhHOiDK7ATpTAwlIIeDbCsazSg4VlZcBYgRkIJ7kogyuw
EyWwYqDAsxGOZZUeXAyMFhj0YC14GVyBnSiBjZEQ8GyEY3mlBxcDYwXmIJxlrgyuwE6UwE4YCHg2
wrG80oOLgdECWxDOiTK4AjtxAju9l28/A29FOPUZ8zOQahkFRw2zoZQVVVUFAYwtSQHDKVMGV2An
oiRVKzXoQluh1HLaHHh1m3OnXYzm3vnlMt/WMWSfdIEb3rNABQv/vJuwZ9Zy45mmivbcJR+DEppr
13GfRGf2dxMubm/vn2pHYZZCyaoU3nslY2vyl9bzQTumhUgYIY/tH7tC1a1ynoQiVjvJQWKWHptY
gXao6m9BinY20Ld1K6cIYHRRGAjOMVkGV2AnQmDdUsYg4NlA39atnCKAsQIrDsJJUQZXYCdKYMYd
BDwf6NetnCKA0QIbEE7pMrgCO1ECcwECzwf6dSunCGCswJqDcIqXwRXYiRJYKNCR5gP9upVTBDBa
YAPCWVEGV2AnSmAFO9J8oF+3cooAxgpsOAinRBlcgZ0ogbWUEPB8oF+3cooARgtsQDjtyuAK7EQJ
bDmDgOcD/bqVUwQwVmDLQTjFy+AK7EQJ7CxYsvPxt6304GJgtMCAB5uWClsGV2AnQmDTcqYg4Nkl
W+uqPBgBjBXYcRBO0TK4AjtRAgtxlCOg0xHx4jOgMJXZs7VFB6Hv/f/h7PP2EfH1QeCQj4iDyuiD
E+xMyTrfHHPbtJ93ofUhNC9MV4HfPFx3cXH+V3Kb35u7b39ZDhk8z5/lZ54tr58NqfzjTXhmrWNC
JkucdIkoKQYJvSXS9cp3XDNuu2fa6BgUF6TzThIZFSc+5IeMcaHrnOXMsglkvJDgx2/Di7DFutJi
v7olYWVo07xOd7JR041hq4Ifr8L+umne3n9w9RvDc5un3vnq2VAB8odToratT0F7zGxmmUfsmX7/
6+ad1R/tcMnyRbpdTIma0iLnz81mTPdLntMsb048Fc4zoeb30f5wPhLLOeYGk8uePa4RiJyxj8m6
fueQNLADd8vmLstf37G+dWk+RV2a75iCoCUTX5bf04+GHHKkUaaS9SFSRVdWbG4JyTU+K98u4/3F
kNzrh9vl5paQ/wDXnSRp02VE23f2vwzdVu+YWt9WT7NXb+4fmnsWvnUom23Yoc3Npnp+ktMUfpvr
4uVwI8zt9ysnnT4dEvNf3t0/aWU1zNSatNfirOk/fdNzuDX4pmdt1n7b80pT0Ic8u/d34XL5PRkX
Ove/Mr4NfXH1yUHtm22ZOY0Hr8X491zZFiSx/OfkfkPj1z8shpW+q1+mzGyrKGBcEsyePeb/K07W
egqOf2VeXD70fR5Vp4ecAHHrUpOpaT0u9n77e3v3t+Y3X6o216iCjORsdcUmvzxr7ha3gzCZ7xyl
oZuR1HVcKW1jP3WJ+ISbZ43vch0cPCWLkv3k+qw5QA1BD83KuVc+H8er6JcnTCKa6Sp+aFOylXxp
KwPr8G4zBYgxgIjA3cYHZQHeiY61qdrmYFvNXG1W6C7vRri69WHQ5PXN301/e33tb8Kwh+Wsefaw
zD1C7h3ufvlmuMGH/NAQEmLyD1f3F37xzfI8v44/5+B69YqQXDsv4/0wpbG8vYrn3/7Y0/zIj9fn
MVLBQzSEKd8RKxUjxhlDlIwuWq973slmiGjGSPe5faNdy8xpcnQ7W5OM/pjEkHM2mThERzCoWruS
ZNGCciOsj0QNsXIQhhKlrCdMdUz22ncs+tyIMZ10510UNobyZNElP/48bJCCDEImi364mTI3U011
EqnjTlIg1VqnEk++60XsJMhms3I5K++jsGDC6OnRHQpQymhudac573rTc8u6nvHhUWOcTL1Krn8R
5O3EjIqPcjh10mhJIcKKsoMJzyVwDnGTwDkPGK+nXhKm4czhNPanufqr6G8ult8+3IdcOZ5uuHp0
Q6bLf05uSnOTxzf+CrbIVlu0udlozA9dFEjd3oxbPM8b+h8itS4GMpVLc/6PwrlqjgWBK4Z2yD3b
I0Hp1kRndUTqWs3rbd+qBmsDLoaR0+3NWfM3ixp/dxf9IgaQiaDHYrK4vphmvMaZ+1PVxVNZsfH3
xXUzN8tdyqFiFnXTpP2WUV9eTRbCsBU9VIHpQIWBaciT0gBmhmAa6glolLlDPZPtnm4Mbp+4UtVb
cBTPOpzG3nTtJOMTTtEd2YDNfPPDzRNPNNcZguixT7LKldk7/uURlm+gdnlY//dWBm37JITbWWb6
OI6Rd358s7z0L/MrXFr6ZPqpN/Mg4oXh15ppNLzmsL2+VPQFeJEpG+/M4R34f3SZ6UhGPbrQ9G+0
ADUW4VuAtWHHaQIUbYXVB9P/jy8FVVq3WWjZeNUy3o+TFvDiEHybTiahNnvD8CT+vlXpzY/WO6d2
di1tvkSGbxFmRSI0OUe4dcEzY3pN+c6upTzhcvvTxVX03589Tn12iq+Uej3fv+2yYqlTSvFIVOKG
+OQ7EqRMJImkklA+Ct3t7LLyvo82f0iUToFIFjriXCeIZjYKb4TTURXvshq1OTxgffzuiRC3boMw
TgjDuxiESNgp0cxQz3v/PyDu3VEV4r97A0W2yBoGVSV4+6EwTmdsS4P45zOjQUcvNUs0Bel475zg
RltBbeyd8IH+ezdQKNZSIWpM/vuiCqdVB77q6eCWUga6IAlXRaJAk/Jlv0xHUA3RmTuVxqmuuSUM
A4yVXVMQTqkyuAI7UQJLyyBgPg+s6wQuBkYLrCE4xWwZXIGdKIE1dxCwmAU2NXeoYICxAhsGwmlT
BldgJ0pgKzgELEsPaQslo7XaJjXXmTkrexk17YMTPCqpe5N8lH1IlEZv2L+bAEFlX1BHl2K3tKoS
DZ6GItZ/rQWJGXZsYgXaITydt4JKiKKarWJVyQsxwNiicCCcNKYMrsBOlMDKOAhYzwNX1opiYLTA
oK9rasrgCuxECWwYh4DNHDCrSl6IAUYKzCgMJwvhCuxECWyphYDtPHClBxcDowW2IJx0ZXAFdqIE
dpxCwG4WuCp54QjMyoCxAjMYTpoyuAI7EQKLlnKgr9GUzgNXeTACGC2wBeEkK4MrsBMlMGMg8Gyo
yaqSF2KAsQJzGE7aMrgCO1ECc6kh4NlQk1UlL8QAowW2IJwTZXAFdqIEFg60czbUZFU5BTHAWIEF
CCelLIMrsBMlsFIKApbzwJUeXAyMFtiCcE6WwRXYiRJYGwoBz0Y4TFZ6cDEwVuBH4JwtgyuwEyWw
sRoCno1wmKz04GJgtMCgB1tKy+AK7EQJbA1YsvMRjqr04GJgrMAKhHOClsEV2IkS2DkQeD7CUZUe
XAyMFhjwYNlSZsvgCuxECCxbRjkEPB/h6CoPRgBjBdYwnNRlcAV2ogTm4ACczUc4usqDEcBogUEP
FtyVwRXYiRJYUQoBz0c4ptKDi4GxAhsYTtoyuAI7UQJrxSHg+QjHVHpwMTBaYNCDDVVlcAV2ogS2
QkDA8xGOrfTgYmCswBaGM7YMrsBOhMCqpQYs2fkIx7oagRHAaIEdBMcEL4MrsBMlMJcMAp6PcJys
E7gYGCuwkyCc42VwBXaiBBZWQsDzEY6r9OBiYLTAoAdnFmVwBXaiBJbS7V8Fopk57lUgGUhx0MLZ
UIrTyqpSDIwsyUwMhDOyDK7ATlRJKqdq9u7tZJAs3sMJUdHi8CPBJ93dXLqjtiaH5Hrnet7dDGoj
Dz9D+U+7mwfUXLI+JdcZ4YOmFr2zWbWWzSZNeAxtf1fz8NC/tKNZtwKxo1l2feeV9aYz6p83gUmj
OTfSiYzOKJVW9TGzS1zRJLMM/+KOZt0qVb57d8dkqEGq2xRQTQfdHFuIhGGyhkSBJohG2rQU7udn
57s4q9pwiwDGys40CGd1GVyBnSiBmagq751esLjeQ1Q4PfyY+v+9F1w1/jHAusyenvnf6BKs04Kb
jlirPJG6o8TyFInz1HUyhUi77pGzT1/A2hx+9um/pg1NzIX8PgnKc9KHXpHMgRImmAjUSM6sQmkz
m+fr/6ON8MoKJ3vinfREaWGIEr0lfa+TMVpbJxxKGz47bitPdgUknyw6cQ0Ss7OFhs2ktZ2BsojX
cMGeCJwpblVU6thpKGfxIV0Em+0c8PT+JV22EgVsJSDY+hHZ2xSscEQxLYlJxhLHbSK9SJElG/uM
AIpkZuNILNejifT6m0gDodPN6TJehRwhZIOaQbdx9LU5pX6gJOUV/bhZTCXo6LIiPdhBeUwzpuLq
OAkqCzJnokanmrpTEMv/VR0XOyIxbPjAGUTH0JMkos3/VR03G4nVJqJdEeseLq/CMF/68HPz/LMf
/eLZ4uHm2c9DBoohU+34v4vv4yITar//9a23L/jzzfOL/Ob5Z5+9++Z5Ej1LSQsSJaNEUq6JU50m
mhtuXB9tb22zuP17YtpmeXfll99OOWsbMHvt882Pfc7Qcsaa63h9ce1/PlNccmvHl9mib+L99M6p
5BlTd0wCXdwt4qolPrevNJnNuWK8+eDy9Vdy7hmf2+1ztXq1iKv8nPlzO7wDk6tOIpzJfTMkG5wa
pXUdO2uGdMBjZhpyfRviqlwvVmVMRv6Nbcik6MX1ZddkO1ZvrKA2b11e52FqU+QQDVn46yHD/aOP
T5+3712/8Xn4pCH9dRhn6J/AjxqSos8VKy7zA+nKZ3nye2O3dnF3u7hvWENWz0+vszm5if9+lQoe
WEGwLXPqCOU3OtfH0YfmKyYNl842XDEqqGl+tvpCcOK7SxB/2OmBT7pn95InvfGtv/kmfnbT395m
J/J5YB9XOU5AUC2rjR4anNVc8HgTyRsffTZRWjb+JjSr0ciUNHaVMQ1kkjvy8pxRCLP/5O5aW2Sp
gehfafykaMckleeIgk8UVMQnKCL91NG7XpndFUX976Z7Ztcxnb2d6sqsD1DYu9sz59RJdSWVRyWM
sEw/6q6Xo3HcJYehHLjm4b/aCnC1EJ7Xo/dD3bS2t8BtB6OcxlGhAHN1BpCyQwl6jIoUfeOtLYJq
S/fnicgZfjVNOVcQvuFmOITYOfX+zXU13SsF871Sz3/389X9feYpUhaA2vP+LdeeVyeOgkRp9/2H
6ulTtXfK1YOXslaDl43iZpDC3qXdbkq7+8cpVROatf9mLkR7qt+Tlonadn+T6b+iTXYZn0kidQFP
OoH+R/RK60KO6P9717GOKlH+hXuwduGeHForQLc1SKdrpwSvRQ9NPTSD7xopu0YPL3MAJ207CWZ5
rTwfateYsea9bPhohZPDeH7h3rh/8kAAtiVGqRuu3nNT2TvDp6v3XPSgi6rQGn529Z6Lq2eGvyYN
c+SQib2Dz6WL2gWG+QXwwrMXsgZbpDAhNKlGoWdCCqoRtIKYF2sgzySnx5CgOv7WKhgbV7eqHetG
jbIeZavr1irtGytBqj5J1pL7hPRQkKeGgjxnKOiZcvwCHXiZoSB9BSZrRfNsKDg1dlol8juU3NZz
rtZbO99o3XYtdx2ktvWcf12KpFmdl3kIbbGtp7l5tG091aR804YZhYfE91YkXTS12m+gHxredrzp
xUqlp35w0ozQKN6KoW0HKY0aHTR6GHrdNv/UJh+745xJYSkmxxOihEPRJehgp4shTcKSSGRokjtJ
PNPROnkkaPXMlVTbZ/FxwFjZlUjCOZsHl2EnSmCjOKW9o00+2e99kooljwPLZnOP1DOmd7x+kZaI
vBp3wYmAx9ErrYuhus7DI4ez0sOmE53tVGCnetzYYaZpV2k+Ay8ePfyThY6DNYKBzA8d2hk9Cq6M
1urZI4ZWKuhNy5W2rei874UTQ/iHGaH3urf/4IhBMK0kxeQ4eOvtJ2tK0MF2XTqpiZEkEhmaIDq0
QMck7uPg1VVIp0Pk+/2026j5sRvufhd4OeWAwzi2TWN/P/36m8OhD1MKb94tTYyHISzUffzxW8c8
9YM5Tc26nDRBM10WzMjVs87SWJrPZANjvcPYJJzneXAZdiL8QDLuTQp49ayzJJzpxgFjBbZJOCF8
HlyGnSiBJU8Cr551loQz3UdgwfOA0QK7JBzYPLgMOxECAzMuKfDqWWdJONONA8YK7JJwXug8uAw7
UQJ7KRMHNqUqeWAzACnGtUxZuHqoWhKKROOA0S3pUnACRB5chp2IllRMmuQ7unqoWhIKOuOAsQL7
NJwzeXAZdqIEVumYu3qoWhIKOuOA0QK7JJzJhMuwEyWw1ioFvHrWGQgFnXHASIGBJ+GMMnlwGXai
BLbgUsCrx+iAcHYTB4wW2CXhLM+Dy7ATJbBTPgW8OoUJhILOOGCswCIN53keXIadCIE14zIFDKup
FBAKOp+ARR4wWmCXhFM6Dy7DTpTAQqgU8GoqBYSCzjhgrMAyDadlHlyGnSiBpfUp4NVUCggFnXHA
aIGTHgza5sFl2IkSWNnkm7qaSgFt7RIBjBUYknBa+jy4DDtRAhuRBF7NcIBQ0BkHjBbYJeGUyIPL
sBMlsHWQAl7NcIBQ0BkHjBU4DeeUzYPLsBMhsHmgwBusZjhAKOiMA0YL7JJwxufBZdiJEli6ZMuu
Zzh6e3UuHDBWYK1ScMB1HlyGnSiBwfIU8HqGQ1tuQwCjBfYpOCUz4TLsRAmsuUgBr2c4hujB2cBY
gY1KwoHOg8uwEyWwSb46aj3DMUQPzgZGC+yTcAry4DLsRAlsdRJ4PcOxRA/OBsYKbFUSzrk8uAw7
UQI761PA6xmOJXpwNjBa4KQHe6Hz4DLsRAhsGRdJR1rPcBzJgxHAWIGdSsKByoPLsBMlsFBJR1rP
cAgFnXHAaIF9Es6oPLgMO1ECS5O0cz3DIRR0xgFjBfYqCecgDy7DTpTAKv3qrGY4im+/0h0HjBQ4
EEvCGciDy7ATJ7A3KWCbe5PxqLltOu1M7/mzNz06N8AwmME3AxhpGw160KYTUnS6Ebz7Jy/EnqUw
AKWlWLSWpnnlBSii/VcniRldmliGdihPt8KmKK6mmkoQQ0k2MLYpBE/CKZUHl2EnSmBnRAp4NdVU
gvhWZAOjBdZJOJcJl2EnQmD3wO1XejXVVJLkwQhgrMCSJ+G0zIPLsBMlsFCQAl5NNZUkeTACGC2w
TsLpTLgMO1ECS6NSwKuppgKiB0uj84CxAgNPwnmeB5dhJ0pgJZN2rqaaCogenA2MFlgn4bTIg8uw
Eyewton9tLr0flrHtLcpC1dzWqWIr0o2MLYlVfJVMULkwWXYiWpJq10KeDWnVYr4qmQDowVOviqO
izy4DDtRAnupUsDrOa0menA2MFZgzZNwVuTBZdiJENgzsMk3dXV1UmmSByOA0QLrFJwSIg8uw06U
wGp5IzXm2F90cDz7+GeKihaWevr3P39w/KyofVIiIEuUXwaMr5UBc84LUKOrvfJjrRWIuhsbVyvf
6aaVRkjXvmysGXotoW4br4LWWtZNHx6y1vdt650UTpyXAfv5u/6FC5u+oQqYkDlVwIRMVwG7/2vS
MCWphpWqAiZkfpGp8OyFrCFXARNyexWw2QhHr2B0mSpgBRrIG/p7NIV+VBUwJYRtWm1q23pZc+Cu
7gfb10PbOyGbQbVKJslackNMZM8r0773DrIyrd0JzgSYMkQOw7dTZeDDNz/vxyr8f1bAoXWj5b4x
teBdVwsnIchlu7qVXcvHtuNSqirAhfnkH84+1vWgpB2GWvayq0Hwsdaam3qQg1BODJ1P5JcCmPOF
TPr08OtJv1MlrI/ee2dXGd94pbSs+6bvauthqAfT8Xo0ox4b5bn1TYqWd/QqdVEXtx/nLu70ap3e
EF5dNd2rnO8CMry942IXfja+6prDYX+3AnNoboZXf5zGb09vvguNFiLGuP/21a++roZfwlAnFGOf
u+tQhfzX61e/OgzY5nxlcoF6rkDg26DJ1HjCWF8PHfe1toOph771XdBSctu+ctf8x4/wANEL7epG
jGOtvOzrTktVN13fjsKN0Pfm6wtrjOhL92MI1SvVNMMz6Uqa81/SxnhyvaHt/ed+vA+5qQKN6ecu
ZAWh3wziEipnBgMUU7BaDD2rFvypCY610fsqmMmr3557rn7t6Y/TVNIfaWxyhdhS/XUhhwg22VU9
STrONeal7ttGjQqGu7Z5fX6f7y8tCjRvnh7vvPo3kFxezHV7/d3fbuZ6KfrO54MJcx5zz+GV44MB
/au1Z6uvnxfJxtGQTNdXFyA1ae/yDCzzgJHTEoFYEk5nwmXYmT8tMQGXGqU2/fe31/PIb7rk4Zjd
Vs23zT7ExGr+6O1P9fHXKR5GFrqR4DQAve8ph6vbJ83N0zQo0HOooxMc9jPm1XD19PBr2H3yZN/9
mkTUydmu1fVeTdoqjgHG+rNRSTijvyzRSb15vKfl+c8/fPOlSnuuX0iieZ5nXIaqqLfHOihq5qdh
tPtS5fUDdjqeihJmdTFbO0nznmxgrPc4mYQDlQeXYSeqPT34FPDqYrZ2lCpjGGC0wDYJZzPhMuxE
CKwZv+/PiRE3JuYp9WtLEsM2kBcpOkLwMnQydEI1oHSF+un5r7vgcD81t/ONmMcU9RQkQ4xMoQP9
HrWsK2dXhqtJasquUEPfKnV+YegapelOcd55o60VoPvNN1mlLprdoobW/wc1zrrs80wNN/dUpQRS
/F8p0HvvII3bVfvr05z0fUh5pTpmlad/Js1fvxXgnzGfZVyVu80gfGSK78jd4L9nd+SCy+Utnf79
95vv5hso5pD4dfXzVRjPjkP3a/dkCJ3YLnSuYeV3uBn6c9VFtb8Zrq53Z4l/yOx5MpSb1Xi5KtiG
O3pnaGeX5xR89d0QUNqhufn93bufgptIqXrPO8c7bR7qUu93BpzuNMGOPYh80COOxCqFYVKXydhO
affRfarf5lty/qjuoG0KWoH5kjK5tQIZVLOukYMbRDcYG03AhXX5MJnLpkW2n9vg70+v7yfi/gVc
z9/C0yRc9Cq+NK2+fDo/9vz8bS/NPmkD8mkgdfY6rj07z8Plmp0IE2EsFfMLw6k1VBTg1q58V317
aNo2NPkcTs8HOEX5LaL46910P3CpKK5Egk2xNYGldwr5l3cmhQi/TglB7L9XWiS8Ji3nQz90rbaD
JjjFPNqlOoNxkOkMZbt0JdJsVM67WrpLN8yKVS+keF9odM6bDgbXtp1uaXH8kbmi47iQR8dzGXH8
/tn1OB7MzvGNnDi+REUBbn9lM+M4kd+l47hMsfHWXzqYxrKckqGm961ouPQdoWXmYEprEcu4V5kt
UjiYShSbywZTx+C8ZF86QL09NdRdCLr9qT+ue317G3Kf6mq4Oey76/MIFoaWthG67cBprefk4Zv5
4W9OD0+KdkO4u7wPuGFBq3ly/NZhWoa/On1x8+0Eut8gdpgWuP7m5+Ew7WOebZ8OZu+qz9rbH29u
5yRJMVm9/+knr1S3xz9JZoIOTJkX+8MVAJO8Fvy6njZMTBtHXqn6/ZTmhVnE+Tteqa6a758edpXg
4cf9j9OPXH39SvXTz9/0h/0EfY5/97S5f3j+WDd9YSDb3u6fBOG04jbE0/sdQdenZku1mX2MNlsk
oIaT5r4fjTYyTw1mpcg6oR+DbIbGiHlzx7zi/whtyumMx6ONdo3EFIZnwpjHIJuhMcI1JtrudAzw
QrQLHCIMNEE8iiss1CVt5Hk02mgP9imyiqvHIJuhcb4HK8OEkpQJzJiOAMqEKp0Pti1FTMJNY3lw
ikIiFkWp7aKU4IMVRakFCWWY5aSp7oUojiIKnQ9alDjj8XOOoUgkYlEMYT2iBB+sKEYvSCjHtNAU
EgtRPEUUOh+0KFH/IPg8HDOkwBaJYoXcLEoRPkhRAt8FCeWZkoJCIhZFWooodD5YUaSNSIhpJGFp
XWAsCqH3KcIHK4pSCxIADMRynl7//slNc5gHUT8Nh/3Tft9V15Oht0+GQyAzdBaElsaY3v8+nTL4
9vD09sf+/pH7Cejx9sfuZppaeP9peGQ6ZXw/Af3ScbfTq3B/wOOlaj6D+9EJ8fm//vLC4xH/5Mkw
/DR9Pkye7J9UP4bNp9VxfvuOj/fu7ijKkpfmTHCgtGrsZYTuvAgftJe5iITcSc6UJb3/sSh2e891
x+dRXz3rFyQ0Z85ZColYFLc9RSjCByuKi0nATgomaSQiURzn20UpwQcpSuC7IKEFs56UpyxEIfRc
JfigRYl7LrWTgZ1zFBKxKLA9RSjCBysK6AUJLZlSJQOtA0KgLcEHLUocaPXUMkZxColYFEPokkvw
wYpi3IKElswpkrvGolB6nxJ8sKIseh+zk8AkLU+JRPGCEGhL8EGKEvguSAAwJfwFRtqXTREuSJyU
IpidBmYkafS58DJCPCrBB+1lcTyyk6s7RcpTYlEo3XkJPlhRQC9IaMW4Jr3/sSiU7rwEH7QocXfu
dlIxaUjdZyyKJaQIJfhgRbF8QUIrpjUpT1mIQui5SvBBixL3XH5qGcdJ7hqL4sV2UUrwwYrixYKE
1szSAlssCuGoRxE+aFGiQCv5ThpmLekd/rsowGF7l1yED06Uie+ChLbUWftYFL09RSjCByuKjkmI
nbTM0+ZTY1EIvU8RPlhRLF+QAGBa6guMtC+aIlySeE6K4BMpwomXdkxqUqvGXkbozovwQXuZikjI
nXQMPGmKIBJF8O09VxE+SFEC3wUJ7ZguubQSQLanCEX4oEXxEQmYWsYKTyERiyLNdlFK8MGKIs2C
hHbMORKJWBQg9Fwl+GBFgbjnUjvpGQfSQCsWxWxPEYrwwYpixIKE9kw4EolYFMLuqyJ80KLEgVZP
LQO00Wcsitu+0agIH6woTi5IaM80LaONRfHbNxoV4YMVxduIhJlaxtCWRyNRpCQkkyX4IEUJfBck
AJgBfoGR9mVThBNxcQHiz0gRhADmlJdGPZQimMnVvSRtSY29DAjxqAQfrJdBHI/sDjgTtKXVWBRF
GA2X4IMVRfmIhNtxzkCQkrdYFMo8Tgk+WFH0koR0TKpLvNZxPPrseFj0KtR0DgVXq2MN99tDM/1x
GaLMwzHKnAWppDXyAtY8K0g5zcBzDvahIOUm/9dACgqx6znCPMaJzyMuS0x8IxJ+8n9LCwoLUQjj
gxJ80KK4BQngzNNaJhIFKJM7JfggRYF4cgf4jgsmJGmGKRaFMLlThA9aFL8gIQ1z0qQqta7dVwZg
zNZzk0hgrJ3GLOHslODlwWXYmXl2cQJ2LPw7BexWge3mU/dIYKzAVizgIDBRJg8uw858gUEwsKTY
sqCzPdYV4YNujTjWiSm2aEEiEYmi+PZ8pggfpCiB74IECGZp2ygWomyfyC7CBy2KWZIAZsFcYLB7
0amESxJ/xihdCya1VPyhxUaQwdOpRxZjJ4Pt+WERPlgngyUJqRlX9gJt9dHyY/d2PR6LyNWfyapM
lnpJa57h/1I5Zpz38IwXACQTnLQSFb8AavsiUBE+2BdAiYgE7LhkkrYFKRZFb89Si/DBiqLdggQA
k7RJzlgUQxiknPg8an9sZERC7bhhoEnxOhbFE/qPEnywovglCTBMORKJWBTCylgRPmhRbERCTy1j
NGn+IBJFE87XFOGDFCXwXZAAwyyNRCyKJEzylOCDFUX6iISZWsZ5UrSPRTHbN6sU4YMVxfAFCQDm
LjLLf9nE54LEc7ZZJk9inXgZ5i3J1RdeRohHJfigvSyOR3bHLeOGVNYnFsURRsMl+GBFcWJBAiwT
lkRiIQphyq4EH7QoOiLhppaRjkQiEsUIwuxUCT5IUQLfBQmw1F3BsSiS0HOV4IMVRcY9l59aRgsS
iVgUIIyGS/DBigJ2QQIsM0BK82NRKNMOJfhgRYmnHRSfWsbSusBYFMKOgSJ8sKI4tSABlnlOGn3G
ovjtKUIRPlhRvI9IiB13jNPO3UaiWELvU4QPUpTAd0ECgHnlLjDSvmiKcEnilBRh4uWY5KR5k9jL
CN15ET5YL5M8IiEnV9e069liUfT2nqsIH6woWixIgGOWtvd8Icr2FKEIH7QoOiIBU8s4ILlrLIrd
PmFehA9WFCsXJMAxb0gbYBeiEHquEnzQosQ9l9pxzzjtlEIkiuPbU4QifJCiBL4LEuCZBFJGG4si
CIG2BB+sKCIOtHpqGaBNCEWieMrA78TnMXufwHdBAjzTvKgoQPCUEnywokDsKWZqGVN0+7knLNcW
4YMVxcgFCVCMa3WBkfZlU4QLEielCGZydWtI8yYLLyPEoxJ80F4WxyM7ubqnHXmPRSHUlyzCByuK
W5JQgaEjDbQWohDGOCX4oEWxEQm3Cww1bUzxd1EUJ5RDLMIHJ8rEd0FCceZo6zuxKIQzxH/ydi7L
ddRAGH4VvwBdunTrMlWsWLGjimJNHRInpDAOZXN9e0bHJEDrJD6tv8cbFhCibz710aglTcuFxyol
aSl9i5FCgzJaLQWore/CY5XCsxSOlCuUp2gpwDk9Fx6rFFFSJIyeqdjhfSUlAhXTXXiMUnbeCYIT
lQqN9lpKXN9FcOGxSoldQcQtJupYhXItJa9P3lx4rFJymSAyU5QjKicfmiIcCX5NinD5hPmZizNF
7PCyjjJe30Vw4bFGGQcFkbaYibFL1ZSUBHzd6sJjlLLzThCcqWJrsVoKsLXiwmOVIlFB5NEzTaCe
0VIqIOWJ5yU/g9t5ZylcKGBnWCYp6xM/Fx6zlKYgeIuFuHtGSo7rm3AuPEYpO+8EwYUKthOopaT1
ZVAXHquUlBSEjJ6p2B1gWgoDkeLBY5XCMkFwpZigNF9LKeurdi48VilFQ5QtVsrYuTglhYE7yVx4
jFJ23gkiM6VyROXkY1OEA8GhFKGMUC/YuomOMmD/24XHHGVVQdQtNsrYWqyWkoB1HA8eq5TEEwQ3
EqwYt5YCrA278JilNAXR9p5Bi1hpKQIkkx48VinSJwhu1As00dJSkNe5B49VStEQfYudIlZiWUtp
wOv8H54XldLCBMGdcoaSNy0FOHbvwmOWwv+FqFsIo2cYOwKmpMjy9VtOPEYpO+8EwZ0Ktkqmpchq
pDjxWKWIjpSnu9w7NC/QUpZLOjrxWKW0NkFkplxfosSqY4pwLPh6inDm4k6doWVHHWV9dR3Hicca
ZT0piPNd7qFD2bCSUtLqHMeJxyilpBlCdr4GDYqTlNVk0onHLKUqiDx6hl13EQoDby4PHqsU5glC
AtUCzSm0lOUzNU48Zin6zcVbihQ6tBarpVRg4ufBY5VS+wQhkRgb7bWU5ROdTjxWKU1DyLYT5gsT
LS49P1NDlmtIa7VyzQ0bn3MHu9Rc6+W65q54zqtq5T41HKn0dqlhfr7higgeDffrGjYLrnNzmVqR
65q74jmvF5wSpQrNVzROWt1bd+Kx9kaKE4QkEvFcI6ppdXPQiccsRRREGT3TIpRUaimyelTOiccq
RcoEkZm4HlHT9Ngk90Dwa5Lc9qkkt4xQ7wL9/nWULVd++8jzkmUdBq+CqFvKFLAtXy2lAvmcB49V
Sq0ThGSKCeoZLWW5UJkTj1VK02+uNnomYWdrlJQWgTUiDx6jlJ13gpAdAvtQQ0tZ/mLYiccsRb+5
+uiZjG3GaSkZWA7x4LFKyTxBSCZmaC97kgIsh3jwmKWo5ZAYRs9IgiC0FGCgdeGxSmlxgpBMJUAL
VVpKXx9TXHisUnpREHH0TME+KFVSelxfI/rA85JSepwhMpO0I6rfHpoifADPB4AjKUKMI9SrZwEQ
7nF9NuzCY46yqiDSCPXGEISWAizku/BYpXCbICSjl9JqKbI+G3bhsUqRpCDylphCgCC0lLK+u+HC
Y5VS+gQhO0SDdhO0lAq8uTx4rFKqhuDRM7FAEP+XImH5M2onHpuUwTtBCFPGKsJOUoCB1oPHLEUP
tDJ6RrDyKlrKcoFnJx6rlFwnCGEqDEFoKcjbx4PHKmV6+5TRMy1Cq2RaCrCN7MJjldJmiMxU2hHV
b49NEQ4Eh1KEMkK9YxtmOsqWKy058ZijTI9HdUtCEVuLVVLWq6I48Ril7LwThAi6FqulIK9zDx6z
FP06b6NnGMtotRTg5LsLj1VK7hOECEmHZp9aCgNvLg8eq5QJoo+eqdlxxU8isF3rwmOVUsIEIULN
cw9b4vK9Uk48ZilqoE3n6/MDdlxDSUnLH7g68Ril7LwThFRqEVr70FKAg0YuPFYpSRRE3FKjiH3S
o6Usl2Zw4rFK4VlKZqr9iOq3h6YIR4IjKcLOJQ3db9ZRtnxfgxOPNcpER1kaoV6wrVUtpa1P/Fx4
rFJanyAEvjRCS1m+fukjz4vOcVLXUvLomd6gZUclJS9Xk3TiMUrJcZYinSJW7klLWb7L14nHKiVp
KbylTrlCA5uSsl5UyInHKqXMENJJsDRfS6nreZMLj1VKDQpC9p5Biw1qKcCqnQuPVUrTKULZQqSM
HStVUtZr4DjxGKXsvBNEjiRY8qalICmCB49VCssMwdRDOGCmfWyKcCD4Z1KEGDNlTo0/mSLUPdTR
Op86ysr62rALjzXKSpsg8k7H0ERLS6nAT8+Dxyql6p9e20KiHKE3hZIiwC6CC49Rys47QaRGWY7Y
HNTj0Xe/vB6D0c/v79/9+v5haHrz7u1vD6fxH+chqnx6jCr/DlIv9zSfG6RaIeEk8XJVkDNXzlSx
qkA69JBEzIPHGnpTIta3kKlh1Qu1FF4/euHCY5XCaYLImXqFXh+TFCBSPHjMUlSk5LAFpoBtLSop
Zbm6qxOPUcrOO0FkRle8tZS4nnO48FilRFYQcfRMSlC4aim8np268FilcJ0gMlPByh1pKQJJwXms
UuSCFKEQjqgxfGgidiT4Z+Y4suNw6r1+YoqT04j0FqBI10HW1z9Ac+GxBlmPGuLp9FQ5oK++mf+3
j8/1chQq1D9L5TPHP/JpPhP/iTu13ntsn/4BZKaO5bjqB1AD9APAeYw/gJ1XQeQtCIUGDfVKClK5
xoXHKiXNUrJQwiozaCnL90A78VilZC2FR89krBi3llLWl6xceKxSikwQWYiL5652BdbxXHisUqqW
IqNnaoB6RkvpgBQPHquUPkvJQj1Ce4NKSguQFJzHKKWFWYpkSulCYbrKzxVskxZ5tTCdsWHrc0a+
1Fzp/brmrnjOKwvTjYaZJMRLDcvzDXdE8Gg4XdewWXCfmytUUriuuSue81rBZQuFouuxprZ8c7AT
j7U3pE0QWSjGI2p/H5u6HwiOHLMcXIUYuzJHRxmwh+rCY42yoqOsjlAv2Ak+LQW4L8CFxyqlzVJy
oZag17qW0iEpOI9VStdS2uiZLtDGkZLS8/pGlguPUcrOO0HkSgGrnK6lALt7LjxWKayl9C1UigWC
0FIKIMWDxyqlzFJypVQhCC2lQlJwHquUqqRwGD2Tsdoi/5dSQlwfaF14bFIG7wSRKzEGoaUs36rn
xGOVkrSUOHpGsEP1WgpwzNKFxyqFZYLIQikeUTn50BThSHAkRRhclQq2cK6jDHhzufBYo6zoQTqN
UK/YRraW0te/mnDhsUrpM0Su1LHl0EnK+mkDFx6zlKog8hYahQhNyZWUCBzWceExStl5J4jcKCfo
TaGlIHMcDx6zFP0659EzjH35pKUABwBdeKxSuE8QuVEJ0LKjliLAQOvBY5UiGkL2nkG/fNJSGvBK
9uCxSmlpgsiNeoAgJinrJZ9deMxSioIoW+gUMjTaKykJKMbrwmOUsvNOEFkopyMqJx+bIhwIDqUI
O1enhNWI1VEGLIO68FijLOvxqI5Qzw3aytBSkNe5B49VCvcJIncSrO60loK8zj14rFJEQ7TRMxUL
Vy1l+Xr3jzwveYB28E4QuaNfU2opwCacC49Zik4R+jYIsaNpSkoGzvq48Bil7LwTBAeKHYKYpAAD
rQePWUqfIXa6On8PybWk/5xbeXW3t/bn7c347188vnt7f7r74v721z/eP/y0zwa+ePXj6f7t7c1v
j7cP96efb798eP/+15tfTo+P+x94/eX9b3d3F9tt4Zl2H28fH/f5ET3enX6//f7u/avT3f7Pt+/u
vz8/+YcWdjm1vXr1w+mHKqfbdJZznmTd/PMXjK44/997D75/OL29vYjT6zM441/T09N//+/Tf//0
9DtEK+nNqeXXqbx+8w/EVz/evhp/aCfYYW5v3r05OxyWbvZvXH8+3e80P48J1Nff3Px4erx5+ste
XwJMMfsDfnv+03cD8XT/14iWP07vzlPENzvfrz/eKsbT69cPu9T9ec6oWqWE8522WH1q/TPr628p
Fx7rz6zzBMGBJEEQWgpwKsKFxyylKYg4eqZiJ0CVFI7rbykXHqOUnXeCyEKcjvhA/dCk80jwa5LO
y1/Wn7k4UMMq8ukoy+ufBv/D85In5Aevgkgj1HuHFp20lLL+gYkLj1VKiRMER4oCQWgpwAcmLjxm
KXo8yluMlLF9IC0FWEN24bFKaWmC4EhcIAgtBVhDduExSykKgkfPlAStkSgpAqwhu/AYpey8EwRH
qthJSy0F+LzRhccqJemBVraYKFRoYNNSgPtZXXisUnLTEOcMD7tVTklBit248FilsB5oy+iZjJU+
1VKAC91ceKxS6gyRhSQfUYv72BThQPBrUoT2qRShjFAXrDCNjjJgDdmFxxplTY9HdYR6FShPUVIK
sIbswmOUsvNOEJyoYWddtBRgDfkDz8tK6QqibTFTCNCpKC2FgQzbg8cqhcMEwRlN3iYpwDKoB49Z
CiuIPnomYycttRRk2cGDxyqlxAmCMzH22Z6Wgiw7ePCYpch/IdoWwuiZgl3oqKTUtDqmOPEYpey8
EwRnatj9g1rKchFxJx6zFFYQcYtMASuSq6UsfwDvxGOVIm2CyEKFj6jFfWCKcCz4eopw5mKmiNXD
1VG2nIg68VijrGqINEI9Y1NyJaUtX0XqxGOUsvNOEMzE2NFVLWW5iLgTj1mKHo/yFoUCdsullrJ8
Q54Tj1VKkgmChRJWTlpLWb5nyInHLKUrCB49w9hJSy1l+SpSJx6rlBImCBZq2AVqWsryVaROPGYp
eqCVLRZK2ERLS2mryaQTj1VKixMEF+IKTcknKcBA68FjlqIH2qfLji70DNdSnynOVXqQtSJk5oaN
z9nDxedsoVzX3BXPeVURsqeGE5XQLjXcnm14tcqbuWGr4Njn5ph6Kdc1d8VzXi84FirYUXuNk4Gf
tQePtTfyHO5ZqPIRdc6PTXIPBIeS3LJxQasDTFEGTEg9eMxRpn/zdYR6rxCEliKrZ32ceKxSpEwQ
XCliBaG1FGSW7sFjlTLN0tsWK5XuuTnYly84cOKxSulxguBKDZsAainLtZideMxS9Jurb7FRwIoS
/l9KDXH1rI8Tj03K4J0guFHCCvZMUoCB1oPHLEUNtDGMnhHPbz5rkPVXsguPVYr0CYIbNWyDX0tZ
vovaiccqpWiIOHqmYz2jpKyXmnLisUrpM0QWanJEPeVDU4QjwZEUYefijh6VVVG2XrvLicccZVVB
pC12SliRVCUlLleddOIxStl5Jwju6MK5lrJc7MOJxyolJwWRR8+UDL0plJT12l0feV4ybxq8EwR3
6gHqGSVlvXaXE49VimgI3tLOh+1QainLxT6ceKxSGk8QEkiwiZaWsnxQ24nHLEUPtDJ6pmAVR5SU
9dpdTjxGKTvvBCGBOtYzWsryd1dOPFYpKSqIsqWInnrQUpC8yYPHKqXMEFmoyxH1lI9NEQ4EvyZF
uPzB/ZlLIiUs1KcoA8YjDx5zlOnxqI5Qzwk6mqakrNfucuKxSmk8QUgkbtCyo5KyXrvLiccsRb/O
2+iZkiEIJSVHIEXw4DFK2XknCIlUOzQl11IS8Oby4LFKSRqij57p2IU1Wsryd1dOPFYpHCYI2emw
r1e0lOXvrpx4zFLUQJvC/rdSwsqraClt/ZXswmOV0uoEIYmYodmnlgJs17rwWKX0qCDi6BnBvitQ
Uhh4+7jwGKXsvBNELhTKEfWUD00RjgT/m7oz222mhgLwq4y4Aom43pdBIJC44AaBQCAEQr/SZEoj
2qSkLYuAd8dOmgDHaTvH50wQUqV/aRt//nzG23gZM0Q4vkWouZwWgRbqMMoIzTkLDzbKNITQ+Usk
RYKAUlx7y8XCg5XibAXhjJCeNEENpRB29z3xnHP5SOEFECZ/CU0rGSil+dJ3Jh6slOAqCGeEiaSS
qaQQWi4OHrSUBCBsKRlHu7AGSHGqfYjAwoOUknkrCGeETyQIKKX5qiUmHrQUCyBcKZlIexMIpRhC
b5iDByvFqArCWSFpt/hAKYRl908857wrpvACCN9rKxRtXRyUQml9OHiwUkItxXih/BSn3047RHgC
TxOAvzBEUEoLlaQ35rkhgi+hbmhnS8AoozTnHDzoKEsAIpRQd7RTSKCU1L5EloUHKyX5CsJZ4Wln
YAEpXhL6OBw8SCmZF0DkLysibdYeSqE05xw8WClG1RBRWD/FvitYSX91tyw19O1mvcoXThRNV6sf
Hrfz8s263vbPV9z+75r7fLl5qeaOQQRppQvP1dyxxH+iLcmEoUfpNHHwoEPPAYjUayckrZMCpbj2
1ZAsPFgpTlcQzgkVSG/RoBTC1joWHrQU0JwZWUpG0y7JBFKCaq+5WXiQUjJvBeG8cLSBD5TSfFUI
Ew9aigMQqtdeBNryKSjFts+is/BgpZyAMF7oMMUZw5MOxKYEf6E5t1FEY6yTp1vzguW8SLRjwKsg
a391zMKDDrIAIHSpE9UkXa/P61875ut8FCDUX6Ti6c5OmZsX4l87JZSSSv7dna25gpCBNDsLH4DQ
/gaKhQf7AIRYQbggtCN1H6GU5lthmHiwUqIGEKaUjPWk2RAoJbVPWbHwYKWkVEG4ILwlDfyBlCgJ
nRQOHqSUKCGELSUTDalkoBTT/gaKhQcrxdgKwkUhaZOJUErzrTBMPGgpEUC4XkehaG+RoRRHGPhw
8GClOFdBuCgM55V+IRIORWDhQUtJAML3KtOdqO1tCO6Vs8lCTLr1DDZkwth8Jn0qOW/DuORG5HPk
GWwlYSOsM6cS9q8nHCiCEQmjBYc6OSecjOOSG5HP8YJ1FM6T5g8ATlKErgIHD7I0kqohjBcmTnEg
+rTzGROCU9aeZi6XhKJN3cEoI2yXZeFBR1kAEKHXGSKS3llCKZQOKQcPVoqxFYRLQtNu1YRSKB1S
Dh60lAggYikZY0nhCqV4Qt+LgwcrxacKwiVhaYvVoBTC7RAsPFgpAUKkUjKOdlMvlJLa12Ww8GCl
JLAuw8peZj7avttKSntFy8KDlmIrCCOFox248m8pUZr2JpmFByel8AIIVUrGRVLvE0qx7e+BWXiw
UqyqIIwUntYlh1IIrQ8LD1aKTzWEFzZO8eZk0iHClOCUIYLVJdSDJb1igVFGaM5ZeLBRFmoII0Wk
bd0AUhThsicWHqSUzAsgTC+VUJLUp4BSCJc9sfCgpcQKwiihFan3CaUQLnti4cFK0Q5A2FIyRpMm
zqEUwmVPLDxoKamCMEpY2qpgKIVwjDwLD1aKlwDClZJxNAgohXDZEwsPWoqtIAz5FBoohXC6MQsP
VkqCEL6UTDCkwRuUQjiM94nnnGedFN4KwigRaRBAiiZcYsLCg5SSeWsIL1ya4jjpaYcIE4KThgih
hHqiTTvCKLOEjh8HDzbKbKwgjBaSNk6BUijNOQcPVkrVnMdeGuFomwqgFMLZSyw8WCkxVBDGCE+D
gFIIt8Kw8GClJAcgUi+tSLTJJCDFEG6FYeFBSsm8FYTxwtBO74RSCLfCsPCgpYA9Qk720gtnSYvg
oRTCAigWHqwUlyoIE4RUpJKBUginG7PwYKV4CKF6GYSKJAgghXLIHQsPVkpSFYTxwqcpTk6etDc8
JTilN1y4gvC0t4gwygjNOQsPOsocgNAl1IMjhTqQYnV7y8XCg5SSeSsIE0SknecIpZj2IQILD1aK
kQDC9DIKSVt+D6UQ9tix8GCl2FBBmCiUI82wQSmO0HJx8GClOAUgbCkZHUgQUAph3xULD1ZK1BWE
icLQdqhXUggVLQcPWooHEK6UjKVNCAEpjrDvioUHKcXJGsJE4Qyptq+kECpaDh60lAAgfCkZT1sX
B6UQlrmy8GClmFhBGC+inOLk5GmHCBOCjxki/H0rTM0Vhad1tGCUES55Y+HBRpmDEKGEeqBNO0Ip
gdAb5uDBSgmygjBRRNr9YZWU9vffLDxoKRZAxFIyyZAggBQvCSNsDh6klMxbQZgkJK35rKQQpkE5
eNBSEoBIvUxCaVLJQCmUaQcOHqwU7SsIk4SmzcVCKZRpBw4erJR/TzukXspeJuqrDCglttYpTDxY
KdFVECZDJNJiwUpKa53CxIOWkgCEKiVjE2lCCEgJzfd1MPEgpQRdQxgvkvyfHfk8LfiYIcLfbxFq
riQc7fo9GGXNA1EmHmyUmQggdC8TdWUClNK844WJByvFpwrCJBFoJQOlNG/QYOLBSgkQwpSSibQN
WlBK837LI89ZX/Vm3grCZkJFKhkopXm/JRMPWooFELZXkrpLBEiJzRs0mHiQUjJvBWEl9RIjKKV5
gwYTD1pKAhCulIyhraCEUmzrYJKJByvF+grCSupFjVCKI1S0HDxYKU4CCF9KxtFmyaCU2Pq6lokH
KyUqCLFfAjbFmb7TDhEmBCcNEXwJde9Jzz+MsuYLUJh4sFGWPIAIJdRDIIU6kNJ+bhYTD1JKUjWE
lSLSjskFUtqPeWLiQUsJACKWkkmR1NGCUpp3rTDxYKXYWEFYJRRtsSCU0nw1BxMPVorTACL1Sgmt
SdOOQEr7MU9MPFgpPlUQVglDWxcHpLQf88TEg5USAISSpWSsJUH8W0qSzRs0jjznPOC58FYQVglH
g4BSdPsQgYUHK0VLAKF6aYULpw4WjfaVg0WTNKntAFV0wth8mlQn54T0dlxyI/I56gDVQ8LxdD7d
qwlbSxGMSBgr2No6uSik9uOSG5HP8YKVEp62BhjiNN9GzsSDLQ0XKwgThNJTHPQ66SB3SvAxg9zj
UrmayyoRHKkDCKPMt3dIWXiwUeY1gNAl1CMNAkppPgCEiQcrJdUQVonkSIuqoZTmA0CYeNBSAoAw
mUNI2vJ7IEU1X+/OxIOUknkrCJvpPGmHbiWF0HJx8KClRABhe6WpBypDKbZ9kMvCg5ViUwVhtTCe
9IIfSmleqM3Eg5XiIIQrJWNp4QqlNO+7YuLBSom6grCaej8ClNK874qJBy3FA4j99SS0kgFSNGHe
mYUHKSXzVhAmCK2nOOh12iHChOBjhgjxuSGCL6EeaG8TYJQZQn3EwYONMqMBROiVpi44glIozTkH
D1aKTRWE1SIl0jgFSqE05xw8WCkOQsReGSETqWSglEgYInDwYKVEW0FYI7TknMjXkTBE4OBBSwFD
hPJlhJGkcAVSTPO+KyYepJTMW0FYI6wiQUApzfuumHjQUkBFq2UpGadJc7FQCuE1MgsPVorTFYQ1
wtPCtZLSPkRg4UFL8QBClZLxtCYQSiG8Rj7wnHOCyoQawgRhzBQHvU46RJgSnDJE0KqEeoikUoVR
RmjOWXiwURYjgNC9skJazkfPqvaWi4UHKSXzVhDWUvdyQymE3X0sPFgpGkKYXjkhadOOUIptX8HB
woOVYmUFYZ1QiTSZVElpHzex8KClWABhe+WoexKglNA+RGDhwUoJroKwTljOzWjJBkJFy8GDlpIA
hCsl42klA6UQVq8feM64qLLwVhDWiehIFRuQ4iShouXgQUrJvADCl5JJtLM6oRTTvpuGhQcrxagK
wgRhzRSn3047RJgQ/IUhglJKSJ2cCqeHCIXLeqFoi6phlBF297HwYKPMegAReuWFjqRpRyil+Rhx
Jh6sFF9DWC9sII1TKintbzVZeNBSAoCIpWQ8bfk9lJIII2wOHqyUFCsIHYULU+wchJX0V3fLUkPf
btarh822aLpa/fC4nZdv1vW2f77i9n/X3OfLzUs1d4wi+oz1bM0dS/wH2sAHhJ6XhMnmJ55zdq8z
L4BIJf4TbUkmlEKZ3OHgwUpRqYKwQQTagRNQCmVyh4MHK0UDCCN7FUSkNR9QSmivuVl4sFJCrCBs
FJJ27i6UQljjxsKDlRI1gFC9ikI5EgSQElT76JSFBykl81YQJghn9QSt4qQDsQP4mQ9HtkF4bUwy
p1vzgmWjMIo05QCDjLBinYUHHWQWQOgS6ZY2DwOl2PZpVRYerBTrKgjthNZTBPDn9a8d83U+CvD8
v0jF08efMjcvVAraaWG0lMadrhUKl43C03pK8AEgLGdk4UE/AAlAmPIUBsc58Am+fcqKhQcrxfsK
wkbqQlMopfkYeSYerJQgAYTtVRKSdn4akBJl+xQ6Cw9SSuatIGwSivUNVCQsZ2ThQUtxAML1KlFn
WKEUTRj4cPBgpWhdQdgkrCTNxUMpzcfIH3nUeaV4AOFLyTjaC1MoxRPmDfY87qxSfKwgTBDeTnGc
9LSjwQnBKSv3jC+h7mnPP4yyQKiPOHiwURY0gAgl1COt9wmlEM7lZ+HBSompgrCJOsMMpRAORWDh
wUpJECL2WgplOCvppAmzKRw8SCmZt4JwUphAeoYrKYSWi4MHLSUCiFRKxtFO2oNSKFNMHDxYKdZV
EE4Kb0nzXFAKZdqBgwctBVS0VpaSiY70DEMpqb1JZuHBSkm6gnBKKNo5r1AKYTkjCw9aigcQqldB
GHXq8LekXz6bzEipXOsZbMiEUfncgZ1KLuqRyY3I58gz2ErCUQR5MmHzasJaUgQjEsYK1rJKTith
tR+X3Ih8jhestVC0k9MhDqH/xMKDLg1bQRgtUn1WXCkNm7mGh11pLFf3d/OHxXX/uP5xvfllPbsd
7u/nPwyzaIaF1n45c5eDmi2XwzAzanE1i/MUli4tzGKZMrp10jrjl/7y0hw/rAwov9p/Xre9W3Rv
tXz4W6czZF/J0B/Lx9vb34ql6yyskJS/d8d8fv2puJ5vs9Lrx4dlZip5SGke/TCPZeXpYjvMSxFf
/rb/xS96t9RykfTVIsxTzWSENIfZwFclv5L4HxX2R/e/rRcC/hqA6q4221/mJVC6tx+2eXSfQyKl
Ybm0QRut03x+peylurRXy6vLxZX3c63eOZUTHfwrORnF8xTCX3/afXY3z5MeXwxXvTTSyfw1C8rE
mVJJzq4y42x+GZbByLAwV7o7ft4qL5Bb7Yohz0MM2/X85ua397r5bk7l/s38Kv/fd9fzm/wD3+fH
8+Ph/mG7+e1Ujpw7vCXjyRH4qS+f/iJW6zc/PGaMbrm5zfZjt7gZ5uv3dxXHf8B1dz2/Hzp5Yfpu
vflb63b4qUDu/lxtS8W6Wi/y1NT16r7LX/PuX58yPfc3C5F9zbO8w88d/X2SS3dKgIfr/Jwv3+ym
5L7v7h7vr9/OSX/1ZrXex9vxF98d9wy+0++n93I4fjfuN7rv31bvjM2iMTL9AaE3d88wv9O9P45h
bPIthm82uSJa//BmX5/d57nRxVPUle90m3VDBVFaxdz4/jisO2tOw9sKXinT/Zrt3KzuH4b1H3/s
pjXLQ7MPtj52j4/5j2GQRi+HMFNufjmL1qlZSCHMnB3SEOd+oS9t9+F2uBnys/XxLmxPEgQefcc6
9PBE5Ef6+CS/XzpI73XL37LgfSXTzT7Ij3p+tHNHIf/nSbKUXnPD4CQ3ablXM5iYp5nDZTjYnu8n
xr8Ydpn/+bZ0oX7ZbH/cPzWncL2U/wEujNp91ZD/OsuJ7SfwOSLnnff+ri44Pu/ZysRHxxqOLzSA
OQ/yzfymVFG/vSkey6M+v394s+v7vTlkcfG43eaQKP8ucX0KOig1IXT/1DzqC913y33focTm5mZZ
yqHYfHsH18V3TtNpHrphvbnLEbYvQfFE0ne5s9PtlGW4n1eL4X4XJFfbzTrHfO7iHSqEP7ocqcv3
f75c5r+WH12+71TwMsdWecXzj5+VL/zs6A9WevwHK/3iB6+ujj/8PC38ue7704XBVN2eLIxukQuj
xMbfPZXDd14KkRHjLiTVLhDE15fLf//86MJDRsXpPDE9lHWenvqlk2erDJQWwzkyl/+8z1t8htkv
21V++Xtxs8mRdLEPoQt58YR8kREv4sUO7uLwkbNDL/398+H+shud7v8o4X7A7/Kw9fq+777D5WBZ
pia+f+qtrW6HzeNDp/TxtfbhNfLpnJlvx3fg9kXe56fwqa58U75X1O3/+ffQ5/cM+u4O88+xyVKE
PgX39vbNE9d9WQQwVVyfyNGImRJcjo4Rvb0FwRAv9nncxcIO6L/lAcH5ElI4G9Kw3W62/wY7jRTP
lSLBpEQuMLpHUOjqntFTsazkOGKhoesSTEtbOsxiWQc4sDI9L+a9FMthsa+OkMnnMzvZY3oHyxEL
TKEqwpS4ZaRreqPqwCrpEik0hs/nfrLH9DjWDcvn5xZNoXMv1N6Mk5N98ln3aDWFblg1bwlde/mu
MlcsMHQtSTCNqwuM7hEUusb8f3/9k4NCLHvseXmB1BULTSGWYFL3yWfdIyCFKLTq00OesPjlVS5X
LDB0TBLMaFtKJfzyKtcz1v/+2z85Ushjhz0jvnz46IqFpdCIIkxV31ldg9E9AlIohcE75rTDsEnW
FQsNnTBsy4GaT/1I9wgI3YHVL/D159C3VLQGmYaPflhgCpEwfCwhks9ARPcISKESaEtFa5Bp2OaH
hYaOJZiu7qJbg9E9AkJXQ6o7KlojvXxd2RULDF1qEkxVNwOvwegeQaHjuqMMMLJpxO2HBYYukwTT
t5RKRjaNuA+seoHf3BZa2jGXHIbd0K5YaAqNexgO1Pak0Mv3Oz9jXSGFONS+xR7Dip8rFphCpUgw
XLeM/A0rfhOrByo+46MTluFlid+x/pfF2oktwjgV/nWPoNDVvAXLsNLmigWGroowg7Z8dYaVtok1
Au2p4BhWuJ6xfDoDNHRdgsmlucDoHkGhy+UKpZIRGm2ZkNhW2vywwBRqSYLhumOdfZhW2qaBlLZ0
SqZlEkcsMHRMEkxqW0JnWiY5sHiYzlKdcerLz3a58KCxqhIEO33zZ3Msv2aOWKhHXYLpvOUXny2/
ZjesCyz7EYW4p0bTLTNIRywwhXqRYEg9H74Go3sEpNCB1S6RQjVv+cK6qRfyw0JTqIsw3QdG9whK
odrzJVJoOG0UOdkzLDPqA0s9m7mGBabQEGAmpdMJCN0jIIVSaLxltmi4ss8VCw0dCzBuAyLdIyh0
nK4wBskhN5/qx9/s4Wi4EdEVC0qhA1uCKc2nK9I9AlIoh943dEocDYfqXbHQ0AlffwlxR8fN0XCY
/nesEi/w9ZdQnGJ1sodefuDQFQtMIaoSTCWfNVndIyCFaiDy6SFPWOnlx3xdscDQpSjBFKfldN0j
KHR1x6ooR8P7W65YaOiKBNOclrF1j6DQtV4u0HG3EPcMSQwPfrlioSk0BJg9K9gcDQ+G3bCa03rj
CSub5tt+WGDosgzjdPJA9wgK3XBaLDpjmaptflho6Po9DLsNAHSPgNAdWFfouDlU8qmNnOwppvm2
HxaYQiUJMHsKtlPMNN/ugeKWIUkxzbf9sNDQsQSTnX5FdI+g0A3eMhGopnmuHxYYuirMc0cotGHz
wRQzzXMnVrpAqSTFkMuGOjLHZplvH1jVp68EU6hFCaY57YLUPVpPoYnVd1ytMbEsR30csdDQtXsY
CsnpZj3dIyB0FNhpS/QJi01fnR8WGDoWvrpJOXxgdI+A0B1YFxhxT45StwxJ2FIqccRCU2hIMC36
dEW6R1AKjR0nkDiaNkg5YoGhEzZI3Z6995n86x4BoTuwLrBjPOU9G+o5mjZIOWKhKdQlmFp9YHSP
oBTqTndYnLBMG5McscDQDQHmWEjeMmkzbUw6sOr/fi/jjSPvOAnF0bRByhELTSGWYJpT/Uj3CEih
GuKOC+OYTBuTHLGw0E1sCabs2NvGZNqYNLHq8PHojGWab/thoaFrAsyeJxummGm+PbGucLh+cnSn
/X8ne8hyWswRC0whonuYFqJT6U/3CEihA+sC+9pT27RviwxP+7pioSk0RBinsazuEZBCHOKWnRtk
2iDliAWGLhUJJuUdkzYybZCaWHXHEVYmw7PIrlho6LoEw+orX2swukdA6Hogp81bJyzTxiRHLDB0
WYLZ81wTk2lj0sTiHffrTizTPNcPCw0dSzCdfGB0j4DQjV39uGlDkCMWGLqSBJg99/4zmTYE3bCu
sMI1wnA6bXu2xzTfPrB8BpdoCrU7mDw5nboi3aP1FDqwrnAD+eToTvv/TvaYNkg5YoEpVKsIs2Vb
Ipk2SGUKqfnMSk5Ypo1Jjlhg6FoUYbbsJyfTxqSJ1fZ03M0yz3XEQkNXJBh2mrnpHgGhm+btGZI0
S3XpwPpfdpRObBHGaYOb7hEWunGBGvfkKGPL6g1b5tuOWGAKsQjTms8AQPcISqF2hacjcg7R6bb4
kz2mS8gcsdAU6gKM2w1/ukdACuVQnDaRnbC6Zd7viAWGricBZtN+cuqWef8N6wKXf02OseNdnWmP
perniIWmEN/DlBDTluFjt1T9Dqz//+HaycFuq5Ene4Zp3u+HBabQEOb9PRSnZT/dIyCFRsjJZ13k
71iJTPN+PywsdBP7DqbEUJ222eoerYeuUIhbDtskshRsHbHQ0DUJhkc3XfJ78iYly6XDN57/8NLh
g1eC6FuO1yayVLFLCqn4FIxOWKa3tByx0NA1CaY6LWPrHkGhq1eYyZbsVrg62ZMtpVBHLDCFcpFg
0pZiSMqWUugNa1wihcaeztH0ppcjFppC4x6mhLjlkH8yvek1sWryqTqcsExvaTligaErRYDZtGM8
md7Smlhjy/pMKpYSpCMWGrp+D1MDOS2o6R4BoTuwLrBTe3Jkp40iJ3tMb3o5YoEpVCUYt7qs7hGU
QmPPqLZaSpCOWGjo+B7Gb8e/7hEQOg5py+GxZLoVxRELDF1rEkxhHxjdIyh0pV/gTqTJ0Z32+Jzs
Mb3p5YgFphCTBDP6lq/f9KbXDesCmyBKD7nuscfUC/lhoSnUJJiy5weETb1QD8w+Q+wTVrcsQDhi
gaHrUYBxOzCqewSFrrcrjPyPxaItv6/dsvzoiIWmUJVgmtNyuu4RkELjGlugagzZ6db6sz2Wqp8j
FppCQ4KpW674SN1S9asUaMuJrDQspSNHLDB0o0swyakIoXsEhS7lCxx9mhxMPkP9v9uTo6V05IiF
pVCOEozb9fC6R0AKpVCjz4/sGcv09fthoaHrIsyWy/1yNH39OcS+o1PKZNm76ogFho6SBFOd6li6
R1Do6hXeHZwc3WmIfbbHUjh2xEJTiCWY4XQGW/cISqGRrzDyL6FsmVvnZOqF/LDAFEpJgqlbntHK
ydQLVbcazQkrW0pHjlhg6HK8h2khbrlqJGdL6WhilR1v+k8sS8HWEQsNXZNgqtPb1bpHQOg4kNP4
6IRVTF+dHxYYuhIlmLLlRrRcTF8dh76nHy+WQqkjFhq6eg/TAzltX9c9AkJ3YF3gtNjkaE4HM072
VNPX74cFplAVvv4RUvGB0T36jbkz6Y2sBuL4V2lxAolnucprITiwHEEgECeE0FthxJAZsrBIfHjc
nWRgXorY1bZJuAAz6fx/tTy3XeVnC1KIlG+0KtljVdW422FJQ0ccTGg07c77SBS68PSdtvCB1gpt
l0qSP79Q2hRLmEKehTHUZb3tzy+UnrBCl42/xp9fKG2KJQ1dfAgDSjc6UT/vI0HoQBnsMihVHPHR
FEsYusDCuC5vtZiKIz4SFiqIbb7c9ljnFyhvsRqdZCUNXeBgTKMT9fM+EoWu1apkh0XnT5eaYglD
R/QQxiit2+RR3keC0BnlG92K9jaW1ee/CNQUSxa6hM3BBN2jMGj1+S8C3WLBkx9pkTis0o3mJTv3
VPSVm2JJUyhyMNjlvj9b0Vc+YRl48lLJicNjj0qShappWzssYQoBCxO77CmzUDVtO2I9+cakxOEU
Nipq79xT0d9uiiVNocDB2C4X2NiK/vYd1nMYhZwKXVYgtqK/3RRLmEKIHEzsk0IV/e1brKe/Aylx
eIWxy1Sx4mCWpljSFPIcjOlyvJetOJglYQWlbZcpmjm/49cUSxg64zgYaLRXM+8jUejgWayEggqN
Dq/Zuaei398US5hCVnMwscv5kLai35+worKNVos7rIoDUZpiSUNnORjfqIqd95EgdKR0o1PH9lhV
1cd2WNLQEQcDuke7xtqq6iMp0wfLVT117bCEoXOWhenSZ7eu8qnz0AerqurXDksausjCNHqpPe8j
Wejckx9FcOI47f5d1unmx3843F/fXI+XJ+3X6+WLV8uL+XB1NPXm5XqZMLZxoxGjm91i/prG+ecf
L1/dXCxvfuT7w8fLcvz0dnMxX794dXH4/FX6kcSyHkF/vVlv1vcTW9L4yGit9Omf9w/Xf75eP/rq
TvHdf/7mvf8P/JuX6/r6+Pmbi+sXLw8X6x/Xh/W39eL68C4AqUjGGXu4SjmZPvaQCxKy6zK38jUF
DNBPdDtBwuZgoNEeqryPyp/JhOVMj86u9TXly4ZY0tAFDsZ32XNifU35MmHFPqN8qCkcNMQShi44
DoYalcHzPhKFjuIzKBsCKBPbPGZ799QsYRpiSVOIOBjXqBCW95EghUBF7LGbysaaJUxDLGHoouVg
CNsshfM+EoQOFUZfdcL8DofM+SfeN+ERxopYCNPlpQ4ba9Z1J6wnGYpi5GBcn90csWZdl7Bil8ML
LFXNIdthCUNHgYMh0yV0VDWHTFhPf7xL4jDKdNkb7PT5B0w2xZKlUMLmYGyjVyjyPhKk0BHryW8F
OHGERkcB791T0wJviCVNIc/BxC7vKztd0wIHq5C6ZHbFDVdNsYShA83BmC7HzLiKG65usZ7+mJkT
R2j0Zb93T1UxpB2WNIUcBxO7rGQdVBVDnNJdNnc4rNmE2xBLGDpkYaDL+eQOazbhJizbZVbrsGrR
1g5LGrrIwYQuhzs5rFq0eaW7vAHvTFXPph2WMHQGORjsch63M1U9G6+c7TIYmKr1djssaegCBxMb
Hcub95EgdEGh71GRdLZqndsOSxg6CxyMc10GTFu1zg2KTI/3YJytWl+2w5KGjllfRqX7LFJs1foy
YT2LRUpU2GWfhKs4macpljCFnOZgQp+Bu+JknoRFSje63HGHVXEyT1MsaeiIhWl0ZEHeR7LQ0XN4
+km52OV7zVd1bNthCVPIWwam2SFheR+JUshjePoUQq10lz25ruqEoIZY0hSKHEywbWDyPipPoRPW
M+i0IShNXdxTdVJRQyxhCgUWxjZ6hSLvI0EKgQp9Fkah6ulvhyUNXeRgomuzHMr7SBA6VLbRu9Q7
rFhTbWuIJQxdRA4mhi6hizXVthPWcxi4jTKNbivauYdqSkcNsYQpRMDCUJeCLdWUjtA3GyH3WDWl
o4ZY0tB5Doa6vJ/rqKZ0hEH5Rl3At7G8rqm1N8SShS5hszCNejZ5H4lC16cF4CsuL2qKJQwdIAdD
XWa6vuLyooQVlTU9NgN7qBow22FJQ+c5mNjnqYOqAZOUDW2+VHZYWFPlaoglDB3aBzBGK9PlvQSP
NVUuA0p32YLvKy7raYolDJ3RHAzFNjB5HwlChwqxx8TXm5r2REMsaeiIgwmNDqPK+0gUOuqDVXUC
TEMsYegsM2AapRu9hZP3kSB0R6xn0BZIHMG3WQ3s3VNTGGyIJU2hyMFE2yaf8z4SpVB0z+AIMWMV
dNm54V1NW6AhljCFHAtjuxyl7l1NW+CE9Qz624nDd7nB3FedzNMQS5pCkYFpdpdK3keCFHIKu3RN
vK9ZbzfEEobOew7Gdrk0yPua9bbxCrq0u32oaQs0xBKGLgAHYxvNZfM+EoXOdzmcxIeqp64dljR0
noOJ0GXmHyqfugjPYeYflNVdBqVYVbJJWE9ykGHC5mB8l9fefKwq2QRFjcpae6yaNxYbYklD5x7C
9Lrp1ceaNxbNs7jp9cQRG+2727unqurXDkuaQsTBUKcRsqrqR8o3Oidph0VVi6V2WMLQUeRgYqNX
W/I+EoUuPoczU6xW0Gjj9tvuCbqmZNMQS5ZCQbMw1veY+QddU7JJWBF6fK+FqqvLGmJJQxcfwoDC
Lje+hKqryxKWgy4ZVbWrpCGWMHSALEyXk4pC1a4Si0r3yaiqq7oaYklDFxiYZs32vI9EobNdzr8I
WFPlaoglDB0CB+Ma7cTN+0gUutBlW3eoupqqIZY0dJ6HcVUncO58Y2zNiaANeIROMZaDiI0mkvnE
EeSzUabROxP7mFXN3dphSUMXGZhOBfdgquZuCYueQbM9cUTd5UvWVq3c2mEJU8jyMI3uwM37SJBC
VkGjr409VtUcsh2WNHSBg8EuO5ODrZpDHrGeQdU2cdhGr7ru3OOq5rLtsIQp5ICDcbFHzya4qrns
EesZ3GCUOKjLTqTgqubU7bCkKeRZmEYVkbyPRClEwd7d4XQu1t0f/zDO83p19f3h05vLy3Rb08s/
h/H6+pbsm6+vPjh8dzgwyeMUADUm+Ga9Pv34V598dvjy9Zguu/p63T6Y12UdfaRBL1MYxnFyQ6C4
DqCX0Y9Afp3wMN/D//AG/sPhds3CsSO29h7PPlnCGRPxaJ0b4gY4zBj14IzdtmkMk5k3MbsN/ws7
jdNoyG4DkJ4HO85h8H4ch9nMeiOat4lGKbux+L+wxxECjksYjA3r4ADNAKMzQwzH37S4cZL73Rn4
X9g9onOjm1POEA1u0tMw4RoH8NOGOI/GGSG7V1p3GcR81VyhHZZwoPfAwYB+OJ6BsQlvvR6+/urT
v5YXV6/H6/mnD24ufr549fvF8EuKYmIapmmMU3TjoOM2DuOEOMDscfBhxAnJjdrFBA46zDBvOALR
m192vDnv29vfd7h8PR/eOeeXv8PaA/Fxe/5abn755c+ji35K3jqCHP/78MbMlJbq9cubHxM6uY22
oCe7rHg4xuQY4OnP2w98/cFMkxkD2bCNkUPB/ZGee5Sc5l8PKO9/cKd+2F5d/j4e0+Dw7vVluqYw
Bdwvoxu9MRuZLTgL4wy4jPMaF5zWdZzf45CDxseReYD45iF/8cvrl+kR/1rdPqAn6g/ePOVvPjMc
P5S+FHwY1jjB4Ldtm7WxZtJ0uLr81ye8Ax/Q6zSIRTM4P9nBzGlknjfY0MBkIyFvibnLA6ElfHq8
sSiFaEnuW9AEq+10RloQhcfJ/kvrr6tfkmu/ONws62/JSXdjYPqvj2rdFZRD82jgk/YP6x/rfMzQ
ebU4Y1wx3I9E36SnNt3aqW6dcbhPQeMcBRuD29aw+Y2sJoAF7TKlAX2b4uH1q1cvPzoN3+n6zPGX
9aPDi6sfXr6ax5c/XN0k599crZcfHUfYw3iTxrrj//5wlX5xmvgfvwc+uheiaOzmxs3CTN6uiG5D
i4HGdVvtaumhxVGBrbF495UQat4Xq4YRfhEEyyGgwwqEvD8E8/yoKGgGxqUsefHHCYaXjVU1vHJZ
ocMjI0YKjCkSy9socC0pGywj67OyVRX2clmpayMn5kkXieVtLHet0wp0YGRDTpZqBg+BrNC1ZDkx
RFsklrdR5FpLxMjGrGzNZkyBrNS1xIl5Z4rE8jYKXAtKIzCylJGNumartEBW5tqExYmBC0VieRtF
rg3gHspanZOFqgGhXFboWrCsmKUisbyNMtd6VzFduZ+wLuvV9eWrP4tnrA9JULngz1tp/GuR8a8F
3GkVcce1Lpxg6HKKRvA15f0j1nM4KdoZhQ4exsNCQUVj3DzMOhXMTCq6DqDTwjTgsfywztvotyUl
AiZm2ADDGl30uJRXNEp++Tu8PYGzp7yikdaTV/N4kchTRc2vc7SwbJpZuYIZgVZKpQLULEmjd9t2
QwHWvC/VEEs4HPI+Mog7mF20MmF5WPO5+7ldfLiSz2ynieawOh1NhHW2RM6GKYxm9T4G8x4HbK1l
0gsLHhda0ZOZ/BCNwcEktGGGVPWfNjPZUUc/bzpZZ1JlZ5y8H82G5Y9LyS9/h7cncPaUPy7/KsbN
1q1TqpURboF5XrxZwuRwRYMzhxIx8+Tyod3XeY4/0qDK46zSxCYn96Xpj196ZtvWdZker/K4xQUY
R4OE07KFpL16jSlKXo+b2czTVXmcVWZ/1OM+G3Jh/+8a7C4BuAdymzbw2m4mDfQUpzVVXyH1ONIf
4bZBXN/jkC3px5H/A6C0Bnv/meH4oWFanR/07ExqvMx62pxZ3Ta9XYM1xs0B4pJqtVYP27TSsMG4
DODBaB8gpuDzlsBd/gstyddgY5rTjZH8OEX5k2mVC6biQdh9f5maflA1jPBbywCHEHwNQt4fgomk
Vel/Hk+b/0qE+4HzYrt6uz5em8pOoeYchJyDwMTFzXbxzoTHR06vwxT9NMcw0aLdjGmlomNYQ1zn
1HOen3DkdCo6qLB4lxK2aopXCyN8RKxmEYgYBJNb51YdAyaQldoYOTGyvsLN+ZgLhgGvAE2R5XmH
i2QdWEbW5WSrNgUIZIVx9sCKETBiNmtj1VNcLiu1UXNiHmORQ/NxFKVPCFhkY961AtmgtNeMrM/K
Vo1O5bLSiEZODAIWieVtFLkWyDKyISdbdaePQFbo2sCKWQ1FYnkbRa4NwKVPzMpWZm2prNS1kRXD
MrG8jTLXBmJksx2Pqg66QFbo2siKRdBFYnkbBa6NygT3UNZlOx5VB8MIZKWuJVaMODHI2VjXyi6X
FdpIlhOzxheJ5W0UpY+lUBTHfPqIZL2pWULtGmbFS1iOJDrNOABzDqjr5ZfLSnOLWDEfisTyNoqC
TPjwdB1w5h/ZJj02UojhvMreeT1PanaYab71JPA3KactE+bcGpWgKpXLZWWpnLBYMWOKxPI2ilwb
G13StMPCmndFG2IJQ4OBgyETmdDYXGhM1SSvXFZoo9mLxQ90ItFMf9K5gv6kHbVbx/QOCfoIQ/A4
DSaOkEq/2o2Y/kqPYwqOGQ1OqGdIBOX9yZJf/g5vT+DsOas/OS6zN9aiQ+2ZLsiibUTEDawnFsVg
URzz6VP6ZJ9kEWknu/NAztT/7sntjOZ6cmTnNax6ms2x7R5GA2QwLEskCJ4cvcchG+cYZJ/zlD2/
KCiSFT5oFjgxa0yRWN5GQTYA+zYUuJCVPf9tW5Gs1LWeE8NYJpa3UeRa58LjDxr/2FBpJ/n+M8Px
Q4M/bs+IaYfboOPoIH3ZxfTvtzvJM2xg5qgH62AeMNhlIKPDQDbqiJMPmiJvyf1bXUJL8p1k52G2
2zJS+gIXj6GgYuBWFTEXWnf+iXUiWWH6OuTECKBILG+jIH1RAXE2Uk42nD+5FskKXRuIF9OPJ/Z/
pep9r/vlb7+83euufdhQIdoiB+T9Lgo3Wl2xB2FXoCjenfSQxCgd2HGTI4nOoA4ANML4eLN/dsfd
NXrB6DcaI7q07dNtAbTd0I6je6pm/8niAFhh8S4R6Pw2YQMY4WNJmkUgc9bOvTPKGyc9qjI573/B
g2iV5YZ7r3PPf0V9TiQrDTBxYs66IrG8jSLXktaMbKbkHHXFuzYiWZFrj1gPxZzSnorE8jYKXHs6
ZIORxZxsxbs2Ilmha8FyYhZ1kVjeRpFro+VkTU4Wq+a45bJC1yJyYmSxSCxvo8C1XiFwNtqs7PlV
TJGs1LWBFYuxSCxvo8i13nCyLidrqrK2XFboWoOcWLBYJJa3UeDaoEB7RtZnZc/ftyGSlbo2cmII
VCSWt1HkWk+cjSEnW3H4nEhW6FrnObHooEgsb6PAtVFZCIxszMn6qslXuazQtd5xYo5ckVjeRpFr
I2lGlnKyoWqsLZcVujYwYy0p7cvE8jYKXEvKctPpoLOyVTOEclmpawMn5nQoEsvbKHItoWFksyuV
WJW15bJC18aHWXskIVcklrex3LVJ1kRONrtSqThlRiQrdW3kxKz3RWJ5G0WujQYZ2exKhc5v+otk
ha4lVoxcKBLL2yhwLSgDXPpkVypVlS+BrNS1xIpRKBLL2yhybSTLyLqMLEBNB10gK3MtAHBiRFgk
lrdR4FpUEDUj67OyNUsGgazUtZ4TQxOLxPI2ilwbNGdjyMlW3IwkkhW6FoETi+iKxPI2ilwbbYsm
211XpbjL9pDEKGfNeZsLzur2JEHf5XYOqiqYHbGo7t6iXYIYOP8epTseX8UjfDoMcBAhMNv1QizY
fjgBeKvT7hhn/TSEEMcB7GIGIhNAzyMtyzG5t8X7yfrZjUso335Y8svfYe2JmrOnfPvhP8fWBNgM
eDObuHE7ZxBWsDCNm4PIkcR9NW9PkpH8m7lz242kBsLwq4y4AoluXLbLdi1wgQAJJEBInC4QWrlP
SyAkSw6cxMPjnk1g06lNucYTAITIbqbn++uvarftdrtfuVPOhs0tAow0OhunYbFuDMsQvZ2yj8kM
gONABt5gBdMjnLYJmubyjyhLebowc/3gezD0cHnxOdqsZlk/cYS1LEWPd5ErMu4qM2Ci6CINCwj7
NsQAMSYksnkYPOQ4u0h+GtOY82hH+u+WcoDvI9iGiDe12TRj3yxGWZEusRKca5Ag+6G4xGJv2CkD
kvpaDS+4UmGVhvvIwcCZKpgco8pajww2GQmLLff/FVilteg5GHpfBZNjVFlLERksSNiGLTFUWKW1
gennhd6EOpgco8La0HsIDNZK2NiyxE6BVVobDQdD76tgcowqa1OwDNaJ2JbbfQqs1lrkYOSxCibH
qLA29kDAYL2ETU0NQj1WaW0CDuYIq2ByjCprIzgGK84hpqbJvHqs1trAwRKFKpgco8La1FuIDFac
Q6SW230KrNJashzMO18Fk2NUWRsSMVhxDpGa+rX1WK21kYMlH6pgcowKa6k33jHYJGCtabrdV4/V
WWsNC7MRqmByjCprbUIGS/9gL2/G/6fnz07Onu5DfZ4vL38twa6TUSnMS6AQ8iI8+1HQRAsEP+Sc
TAoDEVpvByIaox3M/QmDi/Pzq39t0oB6Z1oGyZvJ+Op5kntKrOltcgdNIB00F194Mdgj18CmSKGl
P/koApWnLOB9WbYvMo4sS/at/uQuAskxvhFIbYptSlc9VpkEyyTB9cFQFUyOUWFtwYZ0f5cZssfc
ZWbPIQdMeFYKzzWlsB6rTKFjUuh7AKyCyTEqUuj76CKDdRK2aUMDBVZprQcOlhxVweQYFdZib01i
sF7EtvSFFVittZGDeYhVMDlGlbVIXEZRwjY9bK/AKq1Fy8GicVUwOUaVtRS5GIOIbazaWqzWWqZq
Qw/RVsHkGBXWht56Zj97qllC4HCEMMPUJTNjN4+T67KPQ+eCj7NxBgFo7UzlPCwZXYoQ6pcQ1Hz5
a2w8CEw8iiUEL23rYzDbFIM3OXlmDUEcBx9yBpMRWCnbBwC2UiTmq7cS2tC5VQTTONK4YASaRz9C
dglgSj6aotMbyG9wkh3Rw5J5AVC7KcvtMd16UBcXNJ2HKZa0miWZxc3DYu5uygLLgIh27nCxsctL
HrrJ+6Vb3IKLwzy7MHCReHO7lEQZibwpy2JTGC2t+OmAskAQzrhXsW7XIZxcnt/dVaPdraKJyXuU
Gp/QMo+jwCob2MDDkquCyTGqGlhMkRmP0JHHI6EnBCY8ksKLTZ3meqwyhZE5dWJvLDIwdiYoQ7I2
2kAW7MPzX8YDxdENFOYpJMhzJmtxWWgwIzr/n73oZB+xi3X2yllVFG3sY3INRm/EpJb7wXsxvkGM
svCS4SQkeFw/VMkh5uW01hipJlLTzEM9Vms4M/OQeoi2CibHqLA29TYggxXnrBp2FldhldaS52Au
URVMjlFlbQAO6wSsM43W1mJ11hZZLCx4BmbFGJsayHqsNkamKaTemlBlqJxHRflQ7yhUxShbq8Im
w1nrJSw0ZbQeq8woGBbmTRVMjrHeWmd6SMhgUcS2LCZRYLXWBg5m0VXB5BhV1toYG7ormzum1R1l
Tgmy1/EgGdD0jJoCq0yyBQ4WAlTB5BhVSSaiw2Y4Drob7aA3zh3lWQ75EROFD9Bbio8gy9mmduZ4
srQlGjgxjuK9WrEm1szyzokw2NyZZbAdELpudM510zR4DEA+zViEZzJEy7CkAFkxy1vx5a9x8XgD
XDwHzfIONoaI0xzHuDDTeSkPDscczWKQk4Jhk+etFIn56lneDZ2b5R1SROtScnkYvAHwGOc0LzBZ
b2Y7wBuc5ATpYcm8AKyd5b09plsP6pKloYs+UGdwnmwIJo642Xp7ATTLlLHzfvTd7N3SLUShS8aM
frDkrUt8JLdtoDISeZbXBILy7xgWf0hZUBCUvYq1fcH6P9O8rXbZ3vK1yr5mCYOzEdEOkxH2Dl68
CX4Gl5zPg3fTMjmfpxxHt0RM/9kr1vcRh2AbIt5cEpoeOGsWo7wQ8BUQwTdIkP1QXLldDzYwYpLU
cWpajKDAKg33kYWlOpgco8ra4LhSIwnbtBhBgVVai5aDkYcqmByjwlrfu8jMKIERsU2NRz1Wa23i
YJ5SFUyOUWUtIYcFCdt0G1KBVVobGBj2EU0VTI5RYW3orfMMVpw/a3hRsAqrtDayMBdiFUyOUWUt
Jg4rzog2vQpEgdVaSxwsBF8Fk2NUWRtSS49pM5FV3WO9ryT2aOxhI43DZlsK0B3nXX7ytIYiI3tZ
Nz60yfpob8llX0q+9PVLj38+y8NpOezp+rWFf3rK+4I+3U8E+D/5LyrwZVwmMwxxoRGr4E/Pzsvo
5McHNAR0DRpuR9F5HEuBfrd7//piHeec/t69GOvNUxmIXj7ZfbvTbTHy9kufDwgh2mC6FJPrMAy+
c6MZu3GBxToYfCL78uedwzFCmro5edMtw0zdAnnqIIAzIULKrnxet9Do7Z1uxLr7rt7rmlcsJmuN
m+ax83kxHU7kuiW7uRttHmeMw5QHu+YGBmNMnhaYoX7qqubLX2PjCZ6JR/GKxX82HDKTHyNNiVLi
FqI5G8g6My00WlZJOmYVfzFf7c/60g6+lPfBkx3tgF32iF1awBaHUjHM+WUZchzcuOzG2xPg6d8n
wDvdvnHiZEeQ3rZ7cnZydZJPTy7vtkJ2GRNQCBYN8o1j0xKJI0tTXrYTsoICPHqKg7WIGceSYqIO
BzN0g51TB2FYrB2zQ6dOcTRH81FOser6V6QJ178yffVLcXZVWBTN6NISx0TBUjE0X1xdP785o7/d
/3ENYP6leHN58y3f8Vg4HMs3KDx9ncsPI7k4OPQ0b5uVzVcfXerGoV8vTvYS9x8dz8u083w1Vxtk
Iex+K7OMpUyu5rM//9wH+t3uw328/Yu/fVo+cCPuNlb+6+3hQfH+s7GVQ3GakreBljHQIfanIxbo
nC/G74vItee6Oy1KSiNxVaZzL3dff/Axn4fUkH3eqIdVrHdTIKclkBuHMR7kmCxZoeCm3fnkxad+
u+xefLDEvA5GnuxCb3rbobdgDlVTm79lvnohe8pXechlAn4oI6Dyex589BJ/Fb8cPc1TTN7GkUw6
JGcEwhWiks1fJci3bA/6CPqUvQHynKgULNNGvrwCfjwtrN/m/S2bbm0g13sw+afLd6+vC+unkvmf
8umLezXrbZp/dgO4fXr63bPr09P77NSbgI/eE0kZos1T7JyPc4dgXQcZXZfiOvqcMA/KzmbqrUFm
3iPWPklOix3HMbthHPDhu2g52rSAwxmW2Y/LECAZ9NFgWNwUp+W/3YXBpd4RPXoCKQ/ZkV86ILOO
rMbYhZBzN7rRLETFFsrKBAbDzCBxw0d+XPXqjVw3Ayzu5rxJM2IIOBZqSeEwLZbmGTMuIZhs3Rus
YJsYwam24pL3fvF2HsdED1ccQphjjMEMYB05P4QxDhiymUx0hv7D5x5WFyjVdue+/nT3+nT+09pw
mzd25a75yTg/3f+utN3vf5/Pns1fnY3n50VxmZf6ZX7RnnPQZOIN9DiN9nvD+fXaT9gtp9eX3/99
zJPdW7/ki7fWz7y1Ptcz99PAy4lHuoZczNN5ydyz8pRRUXL3iCKvzHPt8rj3Zv1kVz55yQkiHw/v
IWx79rl85Kd8VhqFn0qqSphX88WSx72g0hued+WzvIh0uAi+m1KjpXxTpNnS7FwchkHdZakRfoCO
m0r7phx50z2dS+zr/8s3z6/8smf55GxX2rCPP9/labooDcS/rfi9q6v5p+er6H0Fjj9fn1yUhuSl
L/svxb2QM/Fqntz2/mz6t4V9yh307JxXVyaE8x2XL+Zuf4cin/1eWvT1x2fztF6CeYOTlZrgw5re
fetk0uLClJDCP+34y/PddkKT7NDZhKEzEWM3LXbqcBntlMcJB/LlcrQrc7K7fwBMq069dXDkLtzd
wYI3TYsmHkOgbrTgjeVkOeePLEv2TTEVVwR6eHgmne+pbRfYrR85xvK6VZB92DHeBxOaBpsr11Vx
tWUROFiy9sj9040dTc/OPIpApW9gWFmBu9tCrxp878fd3bP5arcOvd+dYx6WeYHOTGPuaETbpaFU
6DCmQBY9phLhi2P2XfXzq3ILqyu+LSfPbv7+x/n3d0tH/fzp8/Nfy1XoTDeCpz7R499LGOdpziFR
CXOIXc4DdpHS3IGZcshAYR6sagBIPQEeuRzkeq1vxrzpjQ9VZ+8G2/SIjAKrLH4LHAyIOSetEWNs
eQZDgdXGGHiYrzJUzqOqfIC4+q7e6xCiz8YNFmCW1nkntGbJ8zBGHwHjZJDCaKxHMxtK4b+dofKm
99ZVZVsuMlUC8O8EHL+/v5HqWi6NjyxVeQ45wwmMGB9NoOylKu2RWzRqQaq2pv0fFVhtOpCFkamC
yTEqrIXeBMdgrYT1TWdHPVZprTccDCxVweQYVdaCC0e+WmwFtuzXsBcYjyxQmy7PyvLAyGJfCDk6
l4dxHvKA88NXVJ/nMCElBJyNceUPEfJsyHmXCX3872bg14gTHTkRcqXoSpmESuGx6FpG2wqusu6Q
hXnTctv0ZrVr+U2//9xcHLj8/bLMCe6+z5e7H67LIRfzfmZ+/6ubMdm+wvbfV+y4OZAVx16LnJQB
37LwXYFVJoA/DdD5Kpgco6q4AxKD9RK26SEjBVZpLSYOFiFWweQYVdZSfLz5+c/O1xNqP7N+2OTG
k3IzbhXy++5mer7A/1dB1Czo/d8HUbPO4H8fRM1ql/99EDWLh+uCsD14V3Vh3rQnoeX2jQKrbDOD
ZWHEXftQjLHlkWkFVhtj5GDWmIa+9ibypv1Nm8Uo/YjASvCpqsDkulZcJ23vrP235mKaXmv4yFK1
KURWoDdV54982upS2DRqlc8klRiEf21ur+kVknupx1vo9H5ZXbV/yuK3q/0g/fvz8x/LPeeLk+dX
T4d5XbDzdI3h6e0TLX/M3/GS0uGS+GVPemWr+bgsOVs7miVoF0GtYbgjPp1w8wDLzeF/x3N+dk/3
izUpa4ClqPYro0rQ693P0/3MSfnlyZivymzGvotxcsanwDWszL+t030ET25/KEPu0hE6KxW+lIMu
S8HyYGYlTvmPz+sxfShRDN74aM1EEN1BCV8nHMvK2LO72h9P5N9lclsX3x6R9d1uOuGTFAw3dA5S
E9+0ObYCq7yMJcPBovFVMDlGVXMcwTBY8c5qapoMqcdqrU0cLEGqgskxKqx1PSDTQ3DiPVNqmsKr
xyqtJeJgFrmqFfvPTZt6K7DaGD0Hc2CqYHKMqvJxMVblUS4fFTZZy2BBwGLTXuIKrC6jRRYP8wzM
ijE2npm1WG2MxMHIpSpD5Twqysf3xtzHRtx9P5d+w1A6OH9+dPvTeiNrjHbJOLiwEC/H2qY7Su16
lLmwlhVhQ1Xi5XpT5cISMFgnYaHp2l6PVVoLiYOhNQzMSzHapkm7eqwyRhtZmKMqQ+U8qsonAodF
Ceua5nwL1poqrNJaZzlYQqyCyTGqrE0BqspHrloFFnsIjsEGCdu0N6sCq81o4mAWqAomx6iy1rL9
lyhhfdPJUo9VWustC0uhCibHqLIW2ZMlidimJr4eq7U2crBgEgMjKcamPWUVWGWMyJZPjKYKJseo
Kp+YTFUe5fJRYENvItPZ90bCNr1XV4HVZjRyMPCxCibHqLIWEzJYsTvddqO2Hqu0NkQeFhiYOHxr
u+Fej9XGaDlYsLEKJseoKp8IpiqPcvmosBSJwYp9vthYtbVYZUZjZGEEDEwcqcTGqq3FamNkqjb2
xrkqQ+U8Kson9gCpKkbZWhXWEVc+4gApNWW0HqvMaGIz6iFVweQYVdZ6x2HFkUpqahDqsVprIwdD
A1UwOUaVtcl4Biv2+ajR2lqs0lqKLMwbBiaOxqjxzKzFamO0PCxVGSrnUVc+MVXFKFurwKYeDIcl
ARtM05O89VhdRossBsbfnUQjxQhNSw3rscoYAVhYNFWGynlUlY9NdTHK1uqwlBoWwm3eI1D9/Baj
hH9DDoJoQOP5U4vV1lbgYMmEKpgcoyLJ1BsoJT1fXJxfvIy1L2G/v7p6XvZAyyenL1wrj8ldzruP
vvzy80K6fF6eqZoLKV9dX+5OT0qY337HccDQQfvOHPQmiMJz3KmKVrKzbYOIeqyyZCxwMIyxCibH
qCqZFD2DdSK26Wysx2qtDSyMQhVMjrHeWjS9IcdgvYR1LVWrwCqtdcDBgGwVTI5RZS0arnxQxLZU
7R6LVVittYGFJVsFk2NUWUvIYYOE9Y1VW4tVWuuZqoXeRFMFk2NUWAu9i47BRhHbVLX1WK21gYN5
tFUwOUaVtclw2CRhsalq67FKa5GtWvKpCibHqLDW9sEYBiuOk0LLsm4FVmltMCyMYhVMjlFhresN
t3NVEMeCoeVhMQVWay1yMJdSFUyOUWVt8JbBiiOw2FS19ViltdFwsGh9FUyOUWGt763nykccqTQ9
4qjAaq1FDubBVsHkGFXWRuBiFEcqTY/QKLBKa5PhYMnaKpgco8Ja7B16BiuOVJpeiKbAaq1FDuYx
VMHkGFXWJjAMVhypUFPV1mOV1hJTtaE3KVbB5BgV1obeGWSw4kil6ekkBVZrLXEwtKEKJseosjZZ
x2CjgI1Nj9AosDpriywORq4OJseosDb2DjlsErFNVVuP1VpLHMxjqILJMaqsJQAGSxIWmqq2Hqu0
FpiqTb3BWAWTY1RYm3rHLUGNRsQ2VW09VmstcTAPqQomx6iyNjhgsCBhbVPV1mOV1lq2amPAKpgc
o8Ja6iE5BmtFbFPV1mO11hIHcxiqYHKMKmvZWaHoJKxrqtp6rNJa5zkYBVMFk2OstZaeGNM79mTx
Eta7g61VYZXW8jAfbRVMjlFlbWQziiL28MfHVFittYmFUaiCyTEqrIUegMtokLDYVLX1WKW1yMM8
VMHkGFXWesNhxZFKw3avKqzW2sTCKFTB5BhV1kbjGaw4UgmNVVuLVVobeFgIVTA5RoW1tjeOi1Ec
qYSmqq3Haq1NHAxMqoLJMaqs9Vz5JHGkkg6/Wa7CKq1NgYU5VwWTY1RZGywXozhSocNvlquwSmsJ
WBhCFUyOUWGt6wuGwYojFWqq2hVLVVittYGDgQ9VMDlGlbVoIoN1AjaZpqqtx+qsLbJYmMcqmByj
ylpKjsF6EdtYtbVYrbVM1fre2jqYHKPCWs/vTphQwsLhtx1VWKW1gBwsoa+CyTEqrMXeBGSwQcLa
w287qrBKa63hYGDrYHKMKmujSQw2itimqq3Haq1lqjb0BkIVTI5RYW3gX1WbkoRtePGdCqu01hkO
5gGqYHKMKmsj250mEdtUtfVYrbVs1RLGKpgco8Ja6iPzegz884vb/afLEz0n59PJuLtcEden88X6
Iqa8ULYJR5zcn0Mef3x2cX59Nv39ke92n98/7O8t//49Fe9N03r0cn02rvscP6hqNansV309v7m7
XLHvumBMb/b/vLm7+v35/O7t4a+/9Ks3/r1ovjid5+fr8ddl5+bT3VnZzflmj+fXIa16wJi4uyxl
WA67rwuwjxaaNmPclB6awzeHPIoe5XmH5p4IH3rvqUnExpTU8Mbzo+hRmrKdv1h1xN64tszcNYWC
P9iU4+jRmVL0bkW82F4gNonYmEKHV8px9ChNoW2lwBNIvUmhScQdU8j4dLgpR9GjMmXVe0+ET30g
2yRiY0o6vKE9jh6lKclsRNj1qpjSY1wVt9f4T87LR67y1Xz/gv7q6/k/l/N/T/gDl/MbPUSJvZzv
dQH1ltquFHerDHxDe3QUPboqK3rvifDUJ2OaRGxMSdBiyhH0KE1JsBHhnljTR9t2/t81xbqG9ugo
enSmFL33RCD04NsunxtTAraYcgQ9SlMCbkT4JxZ6jK5JxF1TnG24nB9Fj86UoveeCLS9cW2b2G9M
CYe/pvk4epSmhK0ILDp6iG0i7prioeH0OYoenSlF7z0R6Hts7H1uTMGGq89R9ChNQdiICGtmCNtE
3DUFoeH0OYoenSkI90Ug9i60dZY2pnhqMeUIev5i7txaNCeCMPxXBm9WLxL6VIce8UIEQVAEQW8l
Xw4quuuyo4jgjzeZmdXd/tq1K1XtCIussDN58r6VdHXnTUcoSsoFBB2dds49Ou2+U4SO4C1ThFyf
IhxcAUYmnatFlWU6X2UmPMIqy3QFATj6rGvJ3xYFo+J+ZMIjEwVjCcH7LUDdaBWiZMVwbsIjFCVD
CXE4k0k3T3lbFIqKZdAHnv9UlJ23gMh7xY4eQAVRiEKKkcuERygK5SsIoDGC7sb2tijsFfMmEx6Z
KFx+HCruHKR+lPG2KNmd74ZteGSiZOdLiMMZUEIUoqTzazGPPPBfjj47bwHhD2cw6SZvhSh8/vKx
4RGKwlxCHM6Q0pm3RPH7j58ffmyARKrcAxcU4XZH9K5Hr911ktATXDNJiOEodra8I3nnFI/AbYCk
dcZ4RQE0ZiVFIYsPiuHLBEgoiw++oIi3IY8+6ChKWej89MkGSCoLXVNAHgl198ZCluDOt8U2QEJZ
duCCIh3mMOiCG6UspLiITICkspC/ooA85qQzp5AluvMLMzZAQll24IICbuOOGHUUpSxwfhXCBkgq
C0BBgcdL0T7oKEpZWFEtJkBSWZiuKOIO6HWXciFLAsVIZAIklCVBSUGHOdHpKEpZNJOpRyDdKppU
FuYrCn9vTofuu++0oSN4y7SB/2naQEe1R04qW4s6g6i4K5kACesMYnlX4qPaE+ooSlno/Cf4bYCk
slAoKQ5zQPnQtZAFo2JoNwESyrIDFxT5MAejLvRTykKK5UATIKks5EqKwxwKOopCFlKsp9sACWWh
ckE9ucMc9rqpbikLnF8RtAGSygKppDjMycqMZSELK1JrNkBCWTiU1eIPczLr+q5SFsVbHDZAUlkw
XVGEdLQ2V/1U9HTz1zeulh/uXk6/zN/f/vrixxc///ZieL7e3U3frcPmAiGH/S/JzcOSgxsw8TRM
iwvz6jdMed3J1wteFqJlmgP99cuOnuvrh9938+rlfPPemV/+XvV8PL77fP5Yfn3+/Pc3P9F1/P3m
r9P85otx/mkXbkefV+dgXlOaF7iZX63T4ezl94cf+Op2oYnzlsPsF6qh+PILRSXKvx3zjyvKj+9+
fzGPr/95wfDGt8bef/3lujBN0wVWDm72EELG1cOKgOu8TcsS6YNrcBidS+8GfydG/XrJ5z+aaAgl
vGay91WUBHqUBn0aX119gPJooE8JdX5/BkMosWlQReGoR2nQR2Ra9KED1Ok9Tw2hxKblKkrqYdrp
/VIfoej1EoQC6vMf7n5ZX9yvlvz86mGt4W7H+Gk6ljH2w9/rdHfz5ctpXzf5at1uE7LfLvNlWNiF
YdlwGjDHOFwmyi4vAabLejO9WN74kdeHHY7jDhGCH7JzfuAlIdGFLilULxWwKIBS9XB6nz1DKGlV
hjpKQj1Kgz6iqsSUO0Cd/ga5IZTYNKqiUA/TTn/b+wGKg8H4WEKd/7KmIZTUtOirKEB6lAZ9BKbh
6JzvAKXqtIygxKZBFSWxHqVBH5FpIRrocwWl6rSMoMSm5RpKdB3mNFHVaeGYYod79vnN1Q2hpKbV
UTh3aI/Pb8z+Gio9fp39FJT20+0HBI0udijnpOqGjKDElUM1FG9x52nQR1A5NIZo0C2WUOe/fWsI
JTUNfBWFDVAa9BGZltB1gFJ1Q0ZQYtOgikIGKA36yEwj//T3aMQOfSKoWjIjKHHl5BoK9WjJQNWS
0cipQzmf/waDIZTUNKyg8OicQXfYoI/AtAOKn/py59GzwbLClTKqlswISlw5VEMJ0KGZR1VLxiNg
hz7x/CeyDaGkphFUUbjDPfr857UfoCg4/QOKT3948cPdgfDT//ZJBY9MBqP1lfyqFsIISlyeuYaS
fYf1U1K1EHkMkfXl+c1nn44PkYybV/sh91KsHipF/aE+PcLCN8+n+du7dT/FPdf8zRe3N3e/vnz5
0w/7/3/x8SfHiDg9X39ZX+2KPHtWJckGJPtJ71JvHz37+8qBwDHlHIYt5WWADcKwhIBDwLAs6CDD
4p79FV55/5svdsI3ftwxA80+DNPi8uCDhyHBlgfyfuXkCdO2Pfvw/hynZdlH+7vj59dwC5fby3zr
9/9ebnF79kHtnMmp7kNvpF/+jrY8pnXWu19e/fz7ulwdNvrRO12Wtax3Pv8mhA2Q9F7AJUU4PkOC
ykD427J4l85njm2AZLIcwFcUO2IMocMLBV3fhOgJ3vImRP0F6nuuGMecdduylHXGqjozAJLWGZd1
Fu8/zEO6ai9k8YqNlmyAhLL4eE0R05gsNxXaj4Ln3xS2AZLKgrmgSIc5lHVbbhSyhHj+TWEbIKEs
O/A1BY0pVLqo4BvSyiEEFzy7AfLCQ0baBsSIA+ewbDlNYQpw3CC3tOWc/LK4rT2t3PLL36ufD9fO
51RaebpsDgJtROwqaeXINPvLmlc/b1WUciZbovzbMRvSym8y1NLKF5g57H+mNW2B8sWtFMKFcY0J
wpIuH9TAM/C7wd+J8Q/XSwqn54B2UNJrJoUqSjZAadCndQ64Q/Hos+sAdfpDfYZQYtOwhhIg61Ea
9BGZlrBDJcHpL34ZQklNA1dDATZAadBHZBqG12llBZQ4rQwI3jPPwyW7bfAz+CG4eBlmAlp5pm1e
c7EG+Pqww3HcATfvBghpGSaAZXY5xjhXC4BS7KB60lWlCZS4KlMVJSc9SoM+oqpkDB2gTn9v3BBK
bBpXUSyGogZ9BKbl0VncaksoVHVaRlBS0zBUUbJB/TToIzLNg8Gd6ApK1WkZQYlNwyoKdrjSUNVp
HVDx8YH9KSj1A/t7iEAdelBStXtGUNLKIVdDicngztOgj6hyIBjocwWl6oaMoMSmpSoKoh6lQR+R
aeQ6tGik6oaMoMSmcRWFDVAa9BGZlqPBxOp8yuK/mmF5N3ome/lZ0/dZQUnLk0MNJXivR2nQp708
dyjkqC/Pf01ZPBwqg/5Q2pTFPQlx0pNUUhYzJgY3rUNKzg95SvOQGWBIOwLmLa6rx3ekLFb2nHFb
hyUuOAS3bMOS1mlYo2OHOU8T4XXKAvA2XG55/+NvL8ttvjz74Pqcwxgiac75RMoi3UYaOeoeHxb1
Hv35De9sgIT3gh24oIBj4u5Q97C5lIUUuy+ZAEllIb6iiDxGr9sDqpAlKTaEfgT6T3cnPYALCjzM
AeWnP0tZFN+/sAGSypLcFYXHEVNt7EoNj3k5pnmZ4zREn/MQYF2GvDgcfFr4Ai74nKadPPESI8CU
Erv2x7wtv/y9+vlQ7XxOPebdHF4gheTi7GubUm0xwbJz7lA1FPLFpLVE+bdjNjzmfZOh9ph3JU9T
gGlb/UKrd4zspwXYTTPO+ZI+uAanMTh6N/g7Mf7helG80WkHJb5mqIpCqEdp0Ke1Cb2HAt8BSvFG
px2U1DTwNRRKrEdp0EdgGo/JGehTQimezdhBiU3jK5TgxpSCHqVBn3bTgh8d58chRgElfsy7oY/x
ssSBNnLD7OI2XBxtw7q4TFPEHBwWixCvDzscxx0uQDDgwtOATOjhEuOWqHaC0RvcakvVFQ+f7KCk
VYmhihKTHqVBH1FVJuxwf1M8fLKDEpuGNRSIXo/SoI/INLTQp4RSPPexg5KaRq6Kkg2GogZ9RKZl
36GSFM997KDEpqUqSnR6lAZ9BKaF0YFBUZdQiuc+dlBi07iKQgb106CPyDRPHQZaxdMQOyipaRxq
KCFFPUqDPiLTYjQo6isoVSNiBCU2DasobNDINugjMg0sxvwSKqsaESMoqWnZ1VDQG1z0DfqITKPU
oTvKqkbECEpsWqqhcOhwe8zKRiRbjPlXUMpGxARKbFqlEYmjcx0akaxqROLovX1LC07ViBhBCU3b
oWsoydlfaeBUjUgcIdkPtOBUjYgRlNg0rKFgNLhTN+gjMo3IoKhLKK9qRIygpKZ5V0PhkPUoDfqI
TMvOfpkevKoRMYISm5aqKGzfPYJXNSJp3Ckeg++noNTB9wcIth/twau6ISMoceVwDcUHA5QGfUSV
Ey2urBIqqLohIyipaSFUUaCDaUHVDaURLPITV1CqbsgISmwaVlG6mKbqhtKIucPAEVXdkBGU1LTo
aijkOnRDUdUNpZF7TIaiqhsyghKblqooqYdpqm4IRucMVkCuoFSNiBGU2DSuogQDlAZ9RKb5ZDAv
K6EU+9HYQUlNS6GGEnvcHhX70dxDpQQdoFSNiBGU2DSsoYBF/K5BH5FpXdaKFPvR2EFJTQNXRbGY
MjboIzKNY4fLX7Fdix2U2LRURemQ4wbFdi33UDl3GGhVkWArKLFplUYER0cGKA36CEzDMXSIPIEu
MWsEJTUNQxXFIjPToI/ItNQhxgu6xKwRlNg0rKDYhHcb9BGZhmjQqJVQusSsEZTUNHI1FOrwxgTo
ErM4cja4E5VQusSsEZTYtFRDyT1W+XWJWRoddOiOdIlZIyixaVxD8c6gfhr0EZkWQofLX5eYNYKS
msahisIdxjRdYpbG1GOZRpeYNYISm4ZVFDZAadBHZBp6gzG/hNIlZo2gpKZlV0GxeYuoQR+RaYQG
w0cJpUvMGkGJTUs1FO4QcwZdYpbGjB0GWl1i1ghKbFqlEeHRQQ/TVI3IAYVPnS7iMTiD22GhDOpi
u0ZQwsrZoaso0QClQR9R5fTIEqMutmsEJTYNqygWiw0N+ohMA4sxrITSxXaNoKSm1WK7efQWjVmD
PgLT8hjQfuBAXWzXCEpsWqqiWKwQNegjMy37px5Y85g6pEFQF9s1ghJXDldQbHKNDfqIKgdcfvrK
IW+wFlQqo8sOG0FJKyeEGgqDAUqDPqLKYYCnr5xs8RJOqYwuwLxDWWybIK4cvEKJO6fFgl2DPu2V
s0P5aNA3l1CqALMVlNS06KooHd4MQlWAeYcK1KFZVQWYraDEpqUaSrRoNBr0EZkW41P3hTtE6vDG
PapS1FZQ4srhKkqHHAyqUtQ7FFCHZlWVoj6gLGJeUtNSqKGgxdZaDfqITCOLHRtKKFWK2gpKbBrW
UBg6dEOqFPUOlakDlCpFbQUlNQ0q3ZAfHRncqRv0EZjmx2DR4pdQqhS1FZTYtFRDiRYXfYM+ItMg
dphGq1LUVlBi07iGgh3CS6hKUR9QuUMjokpRW0FJTcNQQbF5daJBH5FpucfjKFWK2gpKbFqlEQmj
cx1aflWK+h7qqZ/O7xC+w478qIpyW0FJK4dcDSX0mHeootyxz+6VqIpyW0GJTUs1lGSxKVuDPiLT
wGIFpIRSRbmtoMSmcQWlzwqIKsq9Q2GP5x2qKLcVlNQ0DjUUtnhlu0EfgWlxdBZfqSihVFFuKyix
aVhB6fL2Jqqi3DtU6LHsoIpyW0FJTcuuhhI77DmIqih3jP+H5x1xTLlDi6bKk1tBiSsn1VAgdZj8
qPLkOxSiwTJMCaXKk1tBiU3jKgob3Hka9JGZxk8dQdkhuMPLtaTKk1tBCStnh66ikAFKgz6Cykld
Xo8gVZ7cCkpsGtZQvMWO1A36iEwL2b7lIFWe3ApKapp3NZRoMXlu0EdkWrKYHJZQqjy5FZTYtEo3
1Ge3SlLlyXcozh3u2aootxWU2DSuoWSLtcUGfQSmwRg7PFQgVYraCkpqWggVlC7PpEmVot6hUoeP
GZIqwGwFJTYNayhgsUzVoI/INESDyXQJpQswG0FJTYuuimKRHmrQR2ban8ydO66VMQyEt8IKosQn
D4eagm0AAlFQASWL5wKiIBgpoxkDHR2f/OWeP49J7P98b2gUV6TMz8pwKWoRFDxyeoSyE07KFpei
nqUmHCosLsAsgoKleYii+Fxc1AeS1ragPicUF2AWQaHSukUoplgCXdQHktYT3h5YXIBZBAVLmwGK
Zt16UR9I2jTBFP+E4gLMIihU2qgRyko471hcgHmWrXgm+4TiAswiKFhaD1EUN8su6gNIW6Up3sw7
obgAswgKluYhiiLheVEfSJp5AhQXYBZBodKmRSiPhEs5iwswP0Gt9a8Xiyslgba4FLUICh45M0RR
9Jy6qA80csb+D0bOyliRcSlqERQ6claNUFzxtsdFfaCRk7L3waWoRVCwtB6iKB6BuqgPIM1LVVwN
OKG4FLUICpbmAYrmTuJFfSBpXXGSeEJxKWoRFCrNLUIZirDgRX0gaevxe33WePb+7auPn1+/ffX5
y8uf/3r24nl7s+zdq/H6Md/tmGf9kjV99+rDJ1QYC4S62jWi8Iz8ABcuF0Gh9fEZoiiaA1zUBxrL
Pv75JHGXmrFa5nLKIih05OweoTRF8+iL+gAjZxdTfCVOKC6nLIKCpXmAkhPy4HLKu/SEnnLORYRF
UKC0J+gIZSTcYXcuIrzLnIKNjhOKiwiLoGBpM0JZTZAauKgPJM0VzZFPKC4iLIJCpbUaoWzFvYCL
+txL61XTHPmEoiLCKihYWg9RFLf+LuoDSbOEax1ORYRVULA0j1AeTfBHf1EfSFp/CD4fJxQVEVZB
odLMIpSpSCtf1AeSthKudTgVEVZBwdJmgKJplXRRH0jaVuyAnFBURFgFhUp71BClJ0ijIsK9laq4
b3tCUelcFRQsrYcoio7fF/WBpFnVb1Y7lc5VQcHSPEJ5JHSZcSqd+wQ1Ero3OpXOVUGh0rpFKFPx
KuRFfSBpKyET51Q6VwUFS5shiuIq8EV9IGlb0fD/hKLSuSooVNoIJiJWqgt+qS/qA0izlCZTTqVz
VVCwtB6hPHqGNGoiYpoN9ROKSueqoGBpHqBoXjq+qA8kbSrOG08oKp2rgkKlTYtQluLA4aI+gLRH
qVUwqE8oKhirgoKlzRAl4RDUqWDsE5Q1QX1OKCqTqoJCpa0aooyEbxqVSX2Ceihu5JxQVCZVBQVL
6xFKV/zRX9QHkjYSHm9yKpOqgoKleYCS0ufAqUzqNyhFOv6EojKpKihUmluEshT33S/qA0nbiqOr
E4oKX36HEnz9YWnzd5Se0n7SqfDlE5Ql5Pqcetn3G9Q/6CX/BB2iuGD8XNQHkvZIuEfgVFhVBQVL
6xFKt4RvGhVWfYIaj4TjdCqsqoKCpXmEMlvCzyMVVv0GZfaPs+lPECsh8rSpxKwKChw5T9ARiid8
wzaVmH2C2gm7sptKzKqgYGkzRHH9b/SmErN9lJZwPLypxKwKCpXWaoCi6f10UR9ImiW8mbC5xKwI
CpbWQ5SE1+o3l5gdKXtFm0vMfoMSrGBhaR6iKA6FL+oDSRuKd6ROKC4xK4JCpZmFKAnnQ5tLzI4y
p+CX6ITiErMiKFjaDFBSXiTbXGJ2FPcEKC4xK4JCpT1qhLJnwjeNS8zOlDuym0vMiqBgaT1CMUv4
eeQSs7M8lmBQn1BcYlYEBUvzCKWbYMl4UR9IWsYd2c0lZkVQqLRuIUpCNn1zidlZZvvXDaqeIJZl
VIaaDYmg4JEzQxQXfC4u6gONnOX934+cndBsenPZYREUOnKi7PAqVbF4vqgPMHJWaYp1xgnFZYdF
ULC0HqIkpPQ3lx1eKd1UN5cdFkHB0jxCGZawd8Zlh1dZil+iE4rLDougUGnTApScQyouO7zKVkw8
TiguOyyCgqXNECXhPszmssPfoP75mbSXlnBav7kAswgKHTmrhigJXZ82F2D2lGc4NhdgfoL6J9J6
iDISfqO5ALOXqQjDnVBcgFkEBUvzAEXT6u2iPpA0STrmhOICzCIoVJpbhOIZi0UuwOxlPxJGEhdg
FkHB0maIshI+H1yAeZeasWHFBZhFUKi0XQMUzf2gi/pA0oZiin9CcQFmERQsrYcoCXcWNxdgFj23
eUJxAWYRFCzNQ5QtQLmoDyTNFR1qfoWyymWHRVCYtG/QAUpGHw2rVHZ41NL02w5WqeywCgqWNiMU
079MaJXKDj9BdcWZ/QlFZYdVUKi0ViOUqXj496I+kLSleE/uhKKywyooWFqPUFzR7vWiPpC0rd9b
s0plh1VQsDQPUfTpD6tUdni00vR7a1ap7LAKCpVmFqFYE/xSX9QHkvbQvyZrlcoOq6BgaTNC6SND
GjURaWUqtvlOKCo7rIJCpT1qhLL09/CtUtnhJ6iteFzihKKywyooWFowEbFSXYByUR9AmhVT7Bid
UFR2WAUFS/MQRbFkvKgPJK0rdoxOKCo7rIJCpXULUbbgl/qiPpC0mQJFTUREULC0GaGsKb8uaJWK
7Q7LeBDIKpWYVUGh0kYwEXmUquiqc1EfQNo3qMc/Thc9QZgiE3JWhortqqDgkdNDlIxdayq2Ox6a
B4BOKCq2q4KCpXmE0lOkUbOhR86uLBXbVUGh0qaFKIp3vi/qA0lb7V/fh3mCcMUthrMyVHZYBQWP
nBmhbMUgvqgPMHJ6afo+lVap2K4KCpW2aoCS0V3QKhXbfYIyfQd6q1RsVwUFS+sRysMEu/gX9YGk
9Yz9fCq2q4KCpXmEMhT3qS7qA0mbGbuMVGxXBYVKcwtQMhogWKViu6NrvvknFBXbVUHB0maEshXP
+l/UB5A2SstYDFGxXRUUKm3XAEXTi+GiPpC0hE5DVqnYrgoKltYjlJ4ijZqIfIPyf71YHGVknLlS
2WEVFDxyPEDRfOMv6gONHNe/12SNyg6roEBpT9AhiqLB+kV9IGlbn0K3xmWHRVCwtPk7yixVcf5z
UR9A2ixN3+7IGpcdFkGh0lqNUB6KHnAX9YGkjanf4GtcdlgEBUvrIYrij/6iPpC0pXgt9oTissMi
KFiaByiaO8IX9YGkueI65QnFZYdFUKg0swhl69tBWeOyw6vUJqjPCcVlh0VQsLQZoujfHbbGZYeX
pvPSCcVlh5fmuhcq7VEjlEfCoV7jssOrdMVe4wnFZYdFULC0HqEMxUT2oj6QNMnTZCcUlx0WQcHS
PELxnvFLRE1EvNSV8KHlssMiKFRatwilKc4XL+oDSUt4dtwalx0WQcHSZoTyqILxc1EfSFrv+g3R
xmWHRVCotFEjlJExUeOyw15mxjqEi+2KoGBpPUJZXbD6uKgPIG1nvNtsjYvtiqBgaR6i7Axp1ERk
l4eirfEJxcV2n6D+fgOEb9Ahir7nmTUutrvLUEzUTiguMSuCgqXNCGUq5kQX9YGkrYzzIi4xK4JC
pa0aomQczXCJ2V22YqJ2QnGJ2W9Q/0JaP1DW81pz9h6JxOx3qEfTX7VoRGJWBwVL8xBFkbi+qA8m
bf/bqyjfIUbCRcpGxHZ1UOjIcYtQpmI2fVEfaOQsRev5E4qI7eqgYGkzQvGMvTQitvsdait6UZ9Q
RGxXB4VK2/V3lFaaYrPhoj6AtFYeil7UJxQR29VBwdJ6hNIV988u6gNJG4pBfUIRiVkdFCzNQ5SM
dQeRmP0OtRT1OaCMSMzqoEBpT9AhSsK6zIjE7DcoTVvjE4pIzOqgYGkzQtkP/c+jEYnZJygrVfEI
4AlFJGZ/QCmux6HSWo1Q2hCgXNQHktbG/teLRSuPIViXnZWxxY0cCRQ6cmyFKPrW6t/+K27k9CWo
zwn1aJw0CRQq7dEilOkClIv6QNJWwiGMPQYnTQIFSxsRiit2iC7qA0nbiieRf4PanDQJFCxthygJ
AXl7bEbaI2c21B+MNBEUKu0PKIrZ9EV9IGmWkAaxTk1ERFCwtBWhPFrCFLZTE5FHyovfNqiJiAgK
lTZahDIs4Zs2qInIo8wh2AH5DYqaiIigYGkjQNE8ZHtRH0iam/640wY1ERFBwdJ2iDIypJETEcmL
8SfUJCciEihU2gxQesqTvzapiUhPObSySU1ERFCwtBWhdMWlwYv6QNL6P34D9BtEys0qW9RsSASF
LHfRp2kojmJcxekNLC7XXY3TvCDRCCoZ7rqFdFgOFUcxTlflLp3LdtdjNAU0GkEly12NttDiKMZV
1EVjcbnu4jSjBYlGUMly16LP7uIoxpnK3KVy2e56lOY1iUZQyXIXtEW45VGMrcxdKpfrrsVpACQa
QSXL3SDVHteJ8ijGVuYulct216M050k0gkqGu7CQwiDc8ijGVeUunct11+E0CyQaQSXLXS2wHCqP
YupepNO5bHcDSnOCRCOoZLlrjEW45VFM3VsiOpfrLhiU5gOJRlDJctcCdlXLoxiozF0ql+1uQGnB
kWgElSx3wWqEWx7F+MrcpXK57nqD0gKNRlDJcjdIrIWWRzG+MnepXLa7AaWBIdEIKhnu+oVAueVR
TKjKXTqX624wGE2CINEIKlnuKovlUHkUE6pyd+I6EpftbsBoWhsSjaCS5a4FTGVxFAOiMnepXKa7
INDcdU6TaASVLHchIH0VWRzFgKjMXSqX7S6au94IEo2gkuVuAEC4xVEMyMrcpXK57kokd8NCGEmi
EVQy3A0LKTTCVX9yV+vldedXuYLf6Sz2Oq5WP2W5U30MO44pAdghwcPr8bQdkxhMUiK5EYINwY4+
q7YypZCk31+Pt7y6unmyNXlhkb08shN716qqvT9GhOzcRZ1z5thxEZxjZTkEhURYHE1Cxen1LC73
Oij0HuKdJNEIKlnuhiAQbnE0CRWn17O4bHf3s1zlUKwk0Qgq6e5mrrQG4RZHk6BrcpfB5bqrDUoL
lkQjqGS5qzXGLY4mQdfk7sx1JC7b3YDS3NFUstw1ViPc4mgSTGXuUrlcd41BaSBINIJKlrtOYy20
OJoEU5m7VC7b3YDSvCbRCCpZ7nq0hZZHk1VVPBhcrrvWoDQHJBpBJctd9NmtyqNJW5m7VC7bXSR3
5UIoGo2gkuGuRDf3OlUeTbqq3KVzue46g9IgkGgElSx3lTMIt/jmD6rebzK4bHcDSvOORCOoZLmr
90YxnJ1gOxuFydsB0UjgwKoQ//iNwusdoUPeKIwa4w+sPbMpL/v2XNRk8Vk3LOIwNK+sT+a+vL3o
8gTPn1VNfmuuv/1lNdWcPXmRP/NidfFiKq6ULocX3gepzejbYMLYWqNl24/Rtyb0k1Ynle9e9Aqc
0tK3sveqNSqNrRc6thJ6rQNoI4xfQ+YSUT9+O7z6KILjXLZqU8yleVPsFE0Xf+i6u+zzUctfN807
+x+8+xvz5zafeverF1P25x+uC51t/RSX4w6Ts/7zXzfv3n2xmA6rPR2vluvqPOMyF3vOKtbn9J2I
bG4uNjSc5Hjylzn0/M05rlyibVJM+yyuAQ7TwChxfJ+pm+8cUrV4Cj340sH1mxO8tw6wF6wD7CFI
DO20+pJ+Zj4bOR/kNegASYymswef7/8cse7VxZqrquYv20yZg0TP94cgN2ehi5zUyPn++Gfx8qlZ
NqgD6w780Tg/zlX+vs0t8WyVhubq+7sc3aTsql+eXd88aVOtV7R3u9lE//T3nYPF8O87G1X7N543
GsLj48VNvB7OVt+3czXX/V+Zv43+4vyTg25uaiEPrfT2cPpuvHi+PFYLZQ+99HPXdLrv9bfL6UXV
+S/ralx3Hf/5jVZO67ngG7nK6SOE+GedvdVt36fVarzN9e62DtxZ31SPit6/8V5d/+W+m8+kK91N
sYC0KPRy2HUOX2+ul1eTLTncUkRTgUg9DkNwqRudXD8K+bUVX29il5vflCbZk5wkF683B5hhxIE9
2L2r81E6T3H1iOUic7SueOnwaLfLcG5V2py+26xHhGlAgEg5gcNK5+4Mhm3dNFmOK+jKOspdfot+
fhWHyZE3//i66a8uLuLlMBXgfr15cbvKD4L8ULj+5ZvpZKn2h6ZthzTG2/Ob07j8ZnWS/z8XrF3G
u/9r29wwz9LNNH2xuspV/r79sRf5Iz9enIzCDHocUzuYaFo1uNhqsLEFPcAge6fHTjbTGGYe2b60
r1kvlHuUqv3e1RRnOWJczOmZHDcWjdH7kxNOUaoBD7rXcYhDq8DbNio7tkH2oQ069KEL0ru5c6xU
580AYEbf0asBU/74y7gej+hhVgO+vVzX5gUhYxetsKa3SHktH0eIwvRBx4QFY8Ufc1B4MGUqVhF4
89GdCNBDbX0YVOqstqCV6wctQ3C91w5G1QOEV9GwjXjYw3tDePSqwAaLN0s7NN5Sid4hbUr0Tj3E
i82jEY0CDnYNmc/qz1O8PF19e3sz5HbxdN3TY+tYn0r16Eqay9ylieePJOiPE7fmAsCkUdPV5Xwq
xcl9ySKfIabNRWjXVyVH95Bth7cq+hiVE/WQn2f3jD+3pjOrB59Zug9fHnFm7Y/c/zWH99rdBBKK
DdVJsdXyNtTTqY92dfl68xcfm3h9neIyDY8byPLidO3J/ErgsVr/I4n486C9i6Y0f46HUN2EtkKg
NJZniAKZ/3m2KGgXRdcGsv18m4exT5zY1QKOklUHR7E7H7sx8Qln4Y4b/x/zybeXTzyRXKWD8Zh+
lFdYeuFN8RXWoQ/YaRylou06r6Xox513SB+leZCdP/7Hu6Nnjo/43ujj9Z96Oz/DX5n+WrPu/25i
2H55RPqFe94gmYU8fFj1N32HdBxN975FevrmXyWI3/w3uo7U/s3COnto9H/39zxV4javUf5MqVW6
mecn8Dc/834nLAYXNjNQ7Bj+uvro7Q83a6F2FyJtfqmdfqsdVO9aE8bY2tEDwKCEjWFnIVKeWrn6
6fQ8xe9fvz/yw7vvf4n8COHWrpsaO9urYFLrAoR2MCq0OYrQ6mj7FGFUId23bupz3BpzqDX3Hyow
pK06/9ZqZYWzg0iRPe9pFkH4QoAPAXcPFhjS8x4toOxCaWyJKLqYsHN9hJiS09A/vJVRSN11oY9d
73tvYw5NOJf6oIbRCd3Z5ztaICs2QlQo3nlnEqq2aFVHw3xTEhTuiKzJAYIjjDd6duGUQ6IpbiML
GuquBJXL9VwDSgNJohFUstwFj6ksbiMLVQUVGVyuuzjNK0uiEVSy3A3aI9ziNrJQVVCRwWW761Fa
ECQaQSXDXbcQ1iBcoG6jHiGEGA30oM3Dzy8YUwdKJKMUmGHUMgbVJ2OMtH0wAp53K352QqljO7F7
raoK+T1KhNzcdXhcRh85LoJzrCzXDouwuJ0vQM257Qwu9zqAwGhG0GgElSx3rVYIt7idL4Ctc5fK
ZbtrURpYEo2gkuUuCNjn6uJ2vuArc5fK5brrBUrTkkQjqGS568Ej3OJ2vuArc5fKZbuL5m4QgkQj
qGS4CwshNMItbucLoSp36Vyuu0GgNBNINIJKlrsSzaFi6ZUQqnKXzmW7azGaEopEI6hkuWsV1mZM
geuEqMxdqySJy3N3iguleUuiEVSy3AWBcW2ZW5m7VC7bXYvSgifRCCpZ7vrgEK4rcmVl7lK5XHcl
krt+IQSQaASVDHf9QqNtBsrcqtylc9nuWoxmpCPRCCpZ7pqAXVVf5Kqq3KVzue4qNHetkSQaQSXL
XaewHAplbmXuUrlsdy1KM0CiEVSy3AWsjI4RRa6uzF0ql+uuFigNgEQjqGS5661BuLLMrcxdKpft
rkVpACQaQSXL3eCxHFJFrqnMXSqX665BcjcshLIkGkElw92wkEohXF3mVuUunct216I060k0gkqW
u1pgLbQ8irFVuUvnct21AqUpINEIKlnuGikRbnkUYytzl8plu2tRmtUkGkEly12LttDyKMZV5i6V
y3XXCZRmaDSCSpa7Dm0z5VGMc3XuUrlsdx1KM4ZEI6hkuQs+INzyKAZknbtULtddkBjNW0miEVSy
3PU+P03nrSrb3LDF/fbm5vrr5p14dn7n23VcrlLz7ieffJhJq+u8UT5lUry5XTXz/sKvvsY4wWH6
yqMlqGwjwQGJy76K+21E51CMItEIKulXMXMV9hSz5dGSr2kjDC7XXS9RmrMkGkElz11vK9bZ7RRu
JC+2xCLROhy6zvbvtgSZsvB1ewnyH4vLc+lG1Bpz8Orsh5YgT9B8XUXwIuo0/UrPXX6cg7NWU5Yf
I7C9pcfzh55n2XEW4iV52as1SgzRuSiSe3jZVpKQdOZG7T2Y3quhT3q0IYYwdn03Pt+y46w4BKhQ
vHsrqnqlP0fjK6Jh34aRHJALIWVFDARHGDdnuZA27EUDtvk2xeVNl9Pst3c3X811EUGN0XbajeGe
eAJsxTPlHO8S1QfEvUgBsCi0c8izsjzDF2rWDzK4bJUozShFohFUspLOuJqbwk6PgHwfRCMJ/9ke
wd2jMA2YLVYc3Bv4u9nSeR8GFUKmSdOOXkA7Wudb6EdrYhqciCN5r9ZszcF7tR7VGjl21lqVWjsq
aOMYu3YwZmxHPdpR25i063asibFPPv+wtW4cWiOHrg2h062TPukIOrhEL/89WWP0odZQaz0WNz9j
cTkDSFzHLPhYDCvfyaKX0DnjFNjx2FUfi3zMFvDh32LL1o79rUoAW38EjLAWtGiz6KHtRxfb2IW+
7XTqvRh66TpkzKwWQv5tPXrzbaY+bKvxeJbOhzwcyIKayba5Y/HHhvHDHKE38iOXDDVYvNoe/Cg7
qGjohPT6KAUhCYUqGd0utTDOPEJcToqaAppHjIvZKc5xY9FYCUeJhuAS6+rluCpLvt7F1d2enQ/T
XPPtz83LL36MyxfL28sXuckPL6aasPN/Tr9PyxzP4rsvfrw+e+/l5uVl/ubJp5++9/bJqHs5jk63
yUjRGqFcG2znWqdAQeiT771vlld/LQHbrK7P4+rbdXXYBq0T+3LzY59LpLwum4t0cXoRf37dKqO8
n/83K/om3ay/80juzNUz1v6cXi/T3R34RL/R5GBOrFTN+2dvvpFrv8R8vz6xd/+3THflMPPP/fQd
PDZfH9s3U32/9d1o07xeb7Itq7kwTHtxNaS7q3p6d4XbOfxGN+3az9OLs67JMu6+MaP+/NbZRfwm
NaR0aNplvJgKyN/78fXPF/Kzy08G1bT9xTC/23iCLGraMcXcqtKqyV+ex2xP/t78NDu9vlreNLJp
7z6//v8cXr63f39Xa/1r7PIFH+ov35xaH6U4NF9JaQQ4aJQ13hjX/OzdqVZt7M4QvF7IcEjtQr1X
uuitb+PlN+nTy/7qKudQzF35dDdBg0GVrm5O881mnqqcz/h468NP1xGtmng5NHddkE2F1rlWGRoI
wJf0gk0M1dPcwpiUyKoyM6Fdz9GMySroWx9H3woz9u2g1dgaiNoOuktCuanzlCsdN1sATIcW1feA
HUPffPsQP42tLRw+x7GFb6Yp5WbqUN2kZb5tTg/9uGqmw5r0fFjTK9/+ePHHZCAWU1C1BbT/MrKe
Xz3c2bE7yN78Ujv9Vpuma6lAptaBkWowWoqgN4NsPQ2yh6cZZOeLOpzOZV/X9XNwlyqv3F9c+qdY
Q55/mBySx8+jNfMfYtc9tvyfOA8kjlkIcawnfPkAO106wE6lDqS2XauVt603UrRy0LFNMYU+KtVH
m14Irb2CbvILRGuCSK2PbmzFoKIYQXqVxu0D7Maz8/QqKl3W3lUOO8pOT+XmnJiOstM7H9Q7VV+d
2DrKTu+WrMw/xXWZSl3cM+00XkwuB0gvPJc/+zhiuKUBEZtrKgNmDdrWtrG6GpSPeHl0qJU2ec47
CKrrMgG0aZ3Tok1xtK3XVrfgndBeDj3EDovV6cfp/wms/yco/T+bk8ce/7l9nP5f/fsn0mu5rf7f
dKlxk1zlhUPX6Wyb9fbrJjgzCDkO2mPrdLb/HBqjL9yn7oPtrdOJN0+2TqeZfI9dnj+4z3ojHJaf
2OtqpfUgTa+F14Vigf2oxdCNvRydM9Ek6Mb85WCldAbG3j3jqh27cMZUKN6d+ZQ1p4dXR8OdFZYG
jSHYihgIjjDmgu0iYOUsbHGLlFRVM/V0LtdzpfZpbupUkGgElQx3M7cq33aWp5AbPBqJq+1ZHHfo
9kSPQ3zd6ueoQ/AIc0fHGvM/jV24LbVj/vt7C1tVfgdh42idCnaU7P6CWxjtCf0FHLfbY3jeksJZ
DIhAvWv4zkafwtCNcXi4l+CdNV4mk/I/ylndjcIaEH2U1ig9mmfsJbiFD6JC8d5du6aQbXU07CcW
YDEE+pPjMEcYzzFYKMCen8WtvrJqMz6Dy/VcW5SG1JUHaC7y/EC+s/+2XiYVL/u0+V52Xws56DH/
23Xw2/rbp8vlkKdI3tq8YRmXKb9s/Oijt+9G3u/PA2/SQaZYmFoqkimEi8FKAisDwi3uSJamam8T
nctNAiNRWpAkGkEly11wGLe4I1mamn15DC7bXYfSvCfRCCpZ7gaFqSzuSJa2MnepXK67VqI0AyQa
QSXDXb+Q0iLc4o5kaatyl85lu+tQmgUSjaCS5a5C20xxR7J0VblL53LddRKjaR1INIJKlrvGKoRb
3Cksq2oGMLhsdx1K85pEI6hkueuwvoor7hSWVTUDGFyuuyBRGkgSjaCS5S54jFvcdSWr9vIzuGx3
HUoLjkQjqGS5G4RHuOVJw7q9/HQu110vUZoGEo2gkuFuWAiwCLc8ivFVuTtxHYnLdtdhNCmARCOo
ZLkrXUC45VFMqMpdOpfrbpAozdNoBJUsd7UQCLc8igmVuUvlst11KE07Eo2gkuWu0ZjK4ihGicrc
pXKZ7mYQSjOGRCOoZLlrDXZHKo5ilKjMXSqX7a5DaWBJNIJKlruAttDiKEbJytwFIUlcrrtSojSj
STSCSpa7XmHc4ihGycrcpXLZ7jqU5gSJRlDJchctaQLFUYyqWwowcQOJy3UXWQpgcijSkGgElXR3
M1cqjFscxaiql1kMLttdQGk6kGgElSx3lbEItziKUbomdxlcrrtaobRAoxFUstw1qMriKEZVnSjK
4LLdRXPXqkCiEVSy3LXoVS2OYpSpzF0ql+uuQXPXKUeiEVSy3AWFqSyOYpSpzF0ql+0uoDSrSDSC
Spa73gqEWx7F2MrcpXK57lqF0pwm0QgqGe5KfNkilEcxtip36Vy2u4DSLI1GUMlyV1qDcMujGFeV
u3Qu112nUBooEo2gkuWu8libKY9iXGXuUrlsd9Hc1cKSaASVLHc1lkNeUE/hlXK0KoBIKoTC5gKl
0uAyztrR97Kzg+qc6EKUg+3BDs97knN2whl1ZCd2rxVUtvfjR8jNXUDvDKD8keMiOMfK8mADEmF5
NPk7d2fa2zARhOHv/AqLTyDYsPdRVCROgcQlTgmEqrW9C4Ve5ODmv7N2khLSKZ7JJuWQAJXWyTPz
7tjec8ZVPkOwXHI7uIc0OePSomgILwnqypkUEHd6NOmr7oeB61BcqrpegjR1NC9J6iqnAe70aNJX
xS6eS1bXgTQvUTSElyR1Ndiq06PJUBm7WC5V3SBBmlYoGsJLmroGqEXhzXFrURSOsVD0TI9aQ+U9
guWSW9FBNCssiobwktSKTgqAOzlqVVVJ+AhcorrFLpCmFYqG8JKkrtcB4E6OWhWvjF0sl6yuA2nO
o2gILwnqKrimu58ctSpRFbt4LlVdIUGa5igawkuSusJDMTQ5alWiKnbxXLK6DqJJ7lA0hJckdRWU
Cz9MrjCqqnVUApeqrpQgzWoUDeElSd2HfSLKwbi9I9Xo05GgJUZWnoz9zx+p3slwDyrka88O47Nh
8alsWN4HoXT2LOiQmSkCsi5Hz3ToBqetkL59qZPOSiU8E52XTMuUmecqMuE6pYJTmmu/mw3rh2/6
52HPa2OjIhmWkJhkWEJCybB2/gr7VZs75ljJsITEZ1sq157GmepkWELWJMMqPnhVe4udJhnWEZrH
++qEcsMzn5QMKwQVs1Ka6Rw6JnzmzKReMBeN7LvsJLcBtDVU3+/F1t2krO+8RU7KqnUJmNpwWNsx
T18PKXHnFz9c5qb8u5PVQMS20yJx1hmjmY28ZcmHzBy3SnjZKe9EU3Blmvi7nY91vdLSpcRkLzum
BM/Du8WyJJPQXqQuKPPQIzuTsvomHT36ZP7zRr1NRqgP33nrrLEhBq2NZH3sO+aCSizZjrNss8lR
B+5ChKxSojr52d6b7TKPb7bNXbW5OXhzHbtzzs+EPVNvnnFxVn62bzVdnM8vt2sq87hM5zdDl+12
+U1psvKsyJdfn3/5VZN+Kv2bkoB8fEmX1Ns/L86/nCdqY748BAAbj6yHkI3SOrDcJ8da2bdMydAz
3lojosw+qfblbeOvP8Jbn3thPIsiZ6aD7AtLaha7vs3CZ9X39itQ4qOlKiS8Qi9zeUZP5JIs18B5
JMe/wL5U520//LV5me+ftVCCQvi60zhR8bos0lbljSz2O2u+OEb6800LrPOB901xkze/Pvsse+X2
Zpjr/B1m1z7FjvWaPlo4eKm/qMmnPiXjMIAzTrrOdpzrsG2aV8ebeVubZzBzebsu7PRvMPJh9anV
4pu/lJ96ce87nysujEOXexteXl9Y6F9OXdt89ZyAG8cFYHQ+uaaoZc1OeQKXOAdR7IJoQXMUDeEl
YQ7CHa2XFftvV4uxuzfUNFgPZpv4dbwsj8Nm/Ojqjq1/DZqhDOD+5NKmVjWHTQhcaiMrAdJM9Wvn
z671Tj8gXa+u4vIWVtaaozBLQtAReZ2ub+c/l70yV5fdzyDQOZSkiKakRfKxSoKsC6E899n7r7/Y
mMD18xBNGuh+nVwt1lU5bghcasBqC9GUNSgawktSWxqrjtqWn5Thw4tNMI80pnEecHNycVqbmhyu
BC61MY2GaFZpFA3hJaEx/U6Sr7on0AO7fI36x7OL3DoetMb7o1iDUInUetoep4Mw/vWsBNtdXA1l
HjczP5ubt9y7ENzw2opLqBqqE11T0DI9MaQn10zarYE5ZVHpq7d91DkL2alkD67TBFVOPUQMI/4H
Yuy8R3bHZLQZpgbSx/J/Y7C88xbRt7PmcjFMOu8+TV5u1sPHzf/+l7yfIUq/HuQP/aG0X/P1gODd
rfkqsWYrLX77bfnNWGlhfBp+1fxwXXpYOXU/d1epvLzOyju1LOkOBSx2RS/eLtP14mxnfL/1t4Hc
d1kb1TkrgxLjSL/CwuEJvmdleYjj+Vi08RJqU/hLH7bpq926CCbcprLYELIxzNq+Z0koybQSiUWv
Rd/lkIKJf2lTcxqz//Jyvo6DC81iNU9NmX8fLC7CX139PNwR38SrEgSgFd4d04pyI45SFhPicrCh
izdNKahZNCy/KpYMEMgOX12f8aDixoUc+FEVaOfF/UGB1d22m1T+GVpk6DylftyPCjxrw0x4XmnI
enw/rLwUd6+G5bNFmv+QLja/nw9P2eb68ubcKK6VHMrAbn48jT3bCXU2v+tKHuJtbAz9a2MNh6Hy
qNDPy9RHGXKOZW1LFDTr/uvwduliztqYyIxIlpncRcazNcz6bFxndGeiP5mF2PWHRbE6/ZL6l4oz
L223NL+0aVe2blc2tutLGH8OWrIoLgdR6TIqMmO3XMWr8jpYx+STWtJvTLmnvzzcKLv/F386sWXj
Y6Q8M0sTXV2VR/YPqRHbFZXSO1pXk26GZrvNjYBs0HpymQE19bKZmly/uptfx0JhvzebIbDlENqY
6rtidP+n8V4dChkvRsmlsc1dmZYG49Jx+0XNssqEn6X7mYLkXZCRR5f2ln7KJrByF8+GbR0/tKWD
c7u4XwL6F9i61+0qyz/7/a4XhzX/T8bLnhu/7cVm/KZC3ozqd3qIU9eO/ULY7clyxweGpJB/ui8g
tL8/AFAXku+tHwx5ngYprFTKWv7ysMa5assvOMyuvh3uS2x/nW7GOsxjrypt3xJD32awaHXTp/lZ
s7dNQH8FW+WPZNXrm9Ioa9z4UhlfqvpE2NIQMLT8fELu/stiWV5Ui5zmF+u3RmmU25uL5e3FxqJe
ZdN2RjAVXPkPd4YJIzzTKSTRC+lF0A175VTWHtDd8uq4fbzHu1u+9zaKpFiXizblZ8mclpK1ppOO
m651rj+ZhXXdrW2rs51WZ8tbtrnXMJ4d2PHyuvYxsh/BfRrmA3biFxe1pzHukIDV1aNmZMB2MaaY
rWcit5ZJw3vW6iKLstElmTRvW38yC+sCdt3Iu+H6EsabQ4PUii+O27P5y6t9KEonuiSF8klmV9cL
e2Jbyb0wIdc9K3FQL+zlqS8r3TRgk7PhM8er7+X16CXNL/O4w7VUOmvWC2uLJv1UpH2IFTNR/3D/
66aK79L8Jl1BKBV2Vgzhln9zaORt267u+vVXfj0+F67Tcn7ZLXZDoxRdysHpTkvtkhktuRgvvthc
PHTeunRZBpeFW3ZgxKv1t6ZhX9z15ovj1wP08oBp4zJ9v7j4Ic2Hk0RjdAx5T86aT9vVzXI1rqDq
mWze/eTjl5vV+k9yZmdKzrR9oZ9fKzWTnAm+YMP9PuzjfLnpL4eJsrLSN37HMOj99rZ0LgUfR8PD
j1x/9XJz98NFP78c0Lv87dX2z4uHn7rhC8sAul1dXhXhjOauPGnut+cuxo82X0FtZq0+fZs9XHa1
NVsDns5s6tq11aCx3h5llPRAxJoV/iPaRVbJQ9Y4oZ6iSRGRiN8AUMz2wvwTZlelKno6s6mh4SRs
7HECFXEDEdq+qOvcU4iIaHuS2UWRk8hZVWzmiHaRg85C1ij1BL2ZhyJ6XieiUk/yGKVq7DlsrD5K
iyMikXSHKG+fQkRE25PM1sKdRE5f1Vs7nl3koNOgNf441iBUorXekfYJbGpqj0ddn1ufdU2dUMIK
ZkSfmNAusJy4Zp4r0aXOxNzGchgllmma67vl5vDk9qBR+XEYr69Wg5vZWm2lz0zYbJjm3LHQ2cw6
6X2KLnIb1POQc5bz04he1Qce7DrOCgY5ND1kjTMPN3bbYP60powe78Ypq9XNdze3P96w67RYlPEt
U7EV3MuO+c4r5rz3TLStYZJza7zzvDNy2OvlrLI2dMIHc/9lw3aOT9ff15RJrubZQ778WdAfO+HP
IyXj790sMTwborCYHkPurOp6I6QC6sZHl3KyUlorwFdN8DsBCJkyxfztgZXbC/foO1tjnttmHYmu
UyLI3HKpTQo+6KyzzFrYWDTUPXDP6JlU8u/Ve8QA6marTvaGe9ky6Y1l3BnH+ix7ZnL5Syzet0H/
ZbOVhaw1QR1m7foGGqYWxwOue4+hzRnu8mcIapWvgb76F1T/800skynDjq4ttny8/GUzt8chC7x0
NRYMgM2MZ4GnhpcPLtP8bj5uKYyL5s+MJ89988N1eciNGXjBeAlm2/UjmvLX7DRlWnG2cX8vUc32
Q2z4FJOtEGyIZyZlVJ1UbVI5bhPV8CMlqslKaN21LETTMd3qzFTWgYlOGx06XrhtM84ADyew0tmo
DiyOPqydwCfUrkglQa62ViuXnOoS+dFkZpyHCdMeg5UUSOWY8HvNTV4Mum026hbRz+minxNFH/SO
7VU6hyUvfgkOxiOUwMm2PvCeOxdM+/dpqp3vpfc+d73Sse1TDL1TVkaXk1JJiodpqp8qRXXxWCpX
4fF+3yaYmr5NrTXUHk0woA2uxgaEIoQutpkZbwFrJlNIGq7qWgLLJWpuOEizyqNoCC9J6lpvKtp6
L2Eb+oaHLHGHdgv+8ib8r7z+dvO0bU9qNB99AAqj+dG6CJuUcP8RlWA1xGFhAvcJdlUZJnx4N8Rs
3+cgyL0CO5NcInoFMG6/X7C97El6BpAzWnLso8FwJXjvs/Bd/PuuQEpKJd2ZTgajXG5j27lghfE+
S6FE9w92BezMOF/h8YNHc9U0R6015NeSB20I6K7AYYoQXlZ25g2kyGS+Y1NVYXTkBhSXqrmwEC0I
gaIhvCSo6x7p9k3mOzZVmXEJXKq6UkI0zS2KhvCSpK6RUOxO5js2VRVGCVyyug6keYeiIbwkqeuA
mgGOT+Y7NlUVRglcqroKjF3PBYqG8JKkbjAO4E5mcjJVFUYJXLK6QOz6GTcSRUN4SVDXz4T2AHcy
u5CpqjBK4FLV1RKiSRFQNISXJHWVMAB3Mt+PqaowSuCS1XUgzQsUDeElSV3tNMCdTMBjqiqMjlyD
4lLVNWDsGi5RNISXJHWtCgB3sjaNqaowSuCS1XUgzWsUDeElSV3nJcCdnnKrqjBK4FLVtWDseqFQ
NISXJHV9gLjTo5iqCqMELlldB9GCNSgawkuCumGmwJ7g9CimbgsqnktV10mQZhSKhvCSpK72EHd6
FFNV2ZLAJavrIJqTBkVDeElSN0AxJKZHMVWVLQlcqrpegjTlUTSEl1h1/RnnMyEkwJ0exVRUtiRx
yeo6kKYNiobwkqSu0hrgTo9iKipbkrhUdYOEaFp6FA3hJUldpwLAnR7FVFScXHM1R3HJ6jqI5qVB
0RBektQNTgHcyVGMrVhIJ3GJ6loO0MSMC4+iIbwkqCtmwkHcyVGMrVglInHJ6nqIJqVB0RBektRV
WgLcyVGMFVWxi+dS1RUwzXkUDeElSV1tA8CdHMVYURm7WC5ZXTB2DTQjZ5pvUpwv2xSXv729/WnY
sdc5maNplc3hEe93V4TGHZ5U92sNosoiJWiFNqhGQDQ+KeichLiTgztbsSBG4pLVdSBNexQN4SVJ
Xe8VwJ0c3Flz+P5FEpeqrjEQLeiAoiG8JKgr4aUiOTm4s/bwg4IkLlVdyyGa9B5FQ3hJUlcrBXAn
B3fWVsUunktW14A0b1A0hJckda1yAHdycGddZexiuVR1HQdpAUdDeElS10sLcCcHd9ZVxi6WS1bX
gDSLoyG8JKkbhAe4+k/uYrPr8Oq2JBu+GJ29i4vFj8XdYYua9FzFLFoTzd9vU2w73dsgDc/ap5gF
F1Fp7bUzynTK6YfbFOe3t8un2apYlFAz7t2Rldhvq4pD7SeykBq7nkN2CWWPbBdCOUKUq5nUUJRP
D7J91TMEzyW3gwFpXqFoCC9J6ipVuGk+v53vcu0O95vl8u6r5q14eTXqNtyu5TZ/+5NPPmy2ueQK
KS5Xi2asE/HlVxDHSAH4Nz2YD5X3HZZLbcXAQZrRKBrCS1IrWjB6pketFeezSFyyugaiOWlRNISX
JHW9CgB3ctTqeGXsYrlEdR3nIM1xFA3hJUndYB3AnRy1Ol4Zu1guWV0gdvWMc4miIbwkqKtnAooh
NTlqdaIqdvFcqrqCgzSjUTSElyR1pbQAd3LU6kRV7OK5ZHUNSDMBRUN4SVJXC8jLyVGrk5Wxi+VS
1ZUcpFmNoiG8JKlrwBiaHLU6WRm7WC5ZXQPStEDREF6S1LVWAdzJhVenKmMXy6WqqzhICx5FQ3hJ
UtcFscelHMTbO6eNPo0JWeKVPewA7n/+nPbmQG5fzmmDwmh+mDDbxOZvjDllZiV39yz2JRHRJsn2
usT4+Z9JZX5r7r75eTFkOz9/qVzz0uL6pba0Z7rpX/I+CKWzZ0GHzIxWgnU5elb8MbGVVkjfvhRV
TJ1pW6ZcEkzbTrNoig7eJCVkL1UvxAay/Pkunf/wTf886PChR7G3Dhcvd0p+N6/xvXId/N6vdbOP
ybu/apq3Hl64/o7xuu1Vb3/50hD95Y+btGg7f4XdMYe5s/n6r7Z1G2dD1uiLfDvfJEfK81JmoHix
yS14zou4JddTf17saX4b3e/PR7tKQrfBY9y1R/WBkFz/MVG3v6Hny1+bbiYrAQElcTgpF70zHkR7
+QU+/T0ZOSYA6nxseydN7g5O1f9P2PogLdmYrX+32jCYqt8Zv82uz0tQA6n64WuhgknFbTMT4cCU
Xfc358clJ+A35U68HKoN3n63jtFtyC66+eXd8glv1WN49OBxs7X+qZ87Fc7Qnztbrx4+eF5uEK+P
l5bxrr9cfMfGKeCHHxl/DX5w/MtBDzczs/pAff4+fLda/JNxbA9u+rFrOjz3gKrFy9uxlzzkZRzz
7aFzop7AxD/THC5W3VAuOK9KusGdGoSbh+pR0agq7xNPU8ggZyZ6COQ0k7ulq6csGtKZyq4zHQ/K
qm3VGnpqS6jK+wFieHGs1lmXuz5Zts5irZ3xQ9MhHlADeAQKuT+Fe1ii3b3BsPZVWweLXaa2llRb
VuuvbmM/KPLa/c9DYfDreNMPq3pnzUurRXkRlJfC3c9fDzUN2fcNY33KcXW1vIjzrxfn5f/TT2Us
vf4/xsqNeZmWw/TF4vYqnX/zQ8fLJT9cn8ckbbRSMK2VYy1XganYR9ZZbp1Npre9bYYxzDiyfQby
OciTZMD2ddspj2cXcXrGQ9st3UwD5USdwiRjFn2wbQiCJW8NyyF2LEoRmMpR9p2ILitXnl6O97EV
VoncRXwyZsyXPwv6IzjgDzEZ8+pmkxq50yqkVmjTxx7IbiZDjLkX3jqnQGP8zqw4ZMw0FUrIvL10
zwIoJXPbRt2rrHhs+7aVfWxdl5N3LgYftdTPQ2Ybpf9ew0dNOHlSZjCGnTOH2juVIblP2wzJY62t
x1+NbubFwaoB81ndVYo3F4tvVsu+3BdP1z09th+beogn96S5KV2aeHUih+5rPY75l1GjptubcavL
+cmC5QCbto3AtlUqz/9WNltrImKMSrG6L++zR8afO9OZ1YNPN05MH3Fm7T72fy3mvbieQIKxslbx
nTtvS70Y+mi3N2fNX3Rs4t1divPUn9aQ+fXFRpNxSeBUd/+JnPizxOt1MzV//g+YANwssBXqlFYA
8z//mBW4Rql+9O6+38Zh7BMHdrUDR4mqg63Yn4/diviEs3DHtf9+Pnl188QTyVV+EF7TJ1nCKsY7
98UR1magF2wZR2WpdettSH3cX0P6KI2D7EHi7drRP2wfct3o481XvVHe4c8N39Zs+r9bG3YXj1Af
eGQFyc+EC4dG1r90Dan45Hm1T4+uIj397V/lEP323/p1pPvfzyw/eIzxb1/nqXJuu4zyZ0gt0nKc
n4BXfsZzVaAN9zu2yTb8dffRGx9u90Ltb0TafogNn2LSy8BS6jxrs8udtl3btWZvI1KZWrn98eIq
xe/O/sbygwP7L5YfwdzafVO5NZ0MOjEbXGC9Lkgfc2Aqmi5Fl2VI5pF9U5/D0hzclf63SaNNMLGV
itkYJPOKt8wYLpnhgXOvQ2sEUZqD77nHy130aacCRSpatK3Rre45eUrYz7xwEwb+HXC/5EWf/smi
F8WhMOPGAE8YcJ+lDzLZ3LbCGTFR/ypymbnpUxZdnzrDiyWtilHn7Ezo8j9V9GL0WGpZ4fH+clI4
vOzsEayhLiIFDdmghK2wAaEIYbEzzIzXgDWTJ+wCr0o5h+cSNS92QTQrPIqG8JKkrgsK4E6esAt1
abvwXLK6HqJ561E0hJd4dQcJrAW4kyfsQlXaLgKXqq4AaSI4FA3hJUldFSTADdiT7MoLo5PqdMwT
2RCiajsjWmmEFVooaXOQKYjspely7Pk/mw2hKOGOrsR+W1WdUjmJhdTYVRyyywtxZLsQypGi3EN3
l5486RhUzamtketRXHI7GIgWxNG8JKgrZlxpgDt50jHoqvsBz6WqqzlICwJFQ3hJUleKAHAnTzoG
XRW7eC5ZXQPSvELREF6S1FXeA9zJk47BVMYulktV14Cxq7lD0RBektTVoJeTJx1DVQZBApesLhi7
RnIUDeElSV2rIS8ns9+EqgyCBC5VXctBWlAoGsJLgrpyxsFWnR5NVmUQJHDJ6hqQFgKKhvCSpK5S
DuBOjyarMggSuFR1HQdpGkdDeElT19mH+am0P25+qsLR4Ft7etRalamQwCW3ooFohksUDeElqRWt
NAB3MvtNqMq6R+BS1fUcojmlUTSElyR1wfksMz1aqsqQR+CS1QVjNwiHoiG8JKirZlxB3OnRUlXm
OgKXqm7gIC0IFA3hJUld4QzAnR4tVWWuI3DJ6hqIJpVD0RBektRVQgNcNcF1vCpzHYFLU3ewC6TZ
gKIhvCSpq5UHuHqaWxm7WgUUl6yuAWlOomgIL0nqgqMlYya5VZnrCFyqugKMXactiobwkqSutxDX
TnMrY9dbh+KS1TUgzeNoCC8J6uoZlwbgukluVeY6ApeqruQgzXgUDeElSV3hBMD109yq2MVzyeoa
iCadQ9EQXpLUtQbyMkxylahTF8ulqqsERHPCoWgIL0nqeqiqhuXTXFunLpZLVtc+pJkZFx5FQ3hJ
UNfMhJEAV0xydVXs4rlUdbWAaFJ4FA3hJUld5aAYktPcqtjFc8nqgrFrvEbREF6S1LVcV+yz28tp
id5sCVjyf9qdjdn4ursF+X7fPZTVUpia3dl/twV5gJZ2/YO7K+1xpAaif6XFJxB0Y5fvrBaJGyRA
3EggNOrDvQTmWGVmViDgv2N3MkNwvLQr1ZldEId2Zzp5r16Vj3LbZemMsmB71/IOu/04kLNWlWw/
zoClW4+3D72YbcdcNyBtaVPwnnvR8pHbzv77tq1eS+g0KK60BbBMyXEApR1nzgoG+sVtOw4WS1fc
+BOLc10R7ZU+lQ22G1Ysx0HZ4q3PxymC6Jx1fsOanl9iUpSNsQhctOY2h2a5LEIrsBKlrlVLtPfd
0Ffc4LNM3NH9+3996Bv8dujLy3L0yZuXTZbOWjeAc/UAXNajZaYelba16UclWz9o1o7Fh5ImaexL
KQ0fO6UU+FqNYOp2bLt6kHKsRzGqUajWC90l0rRt760aRa30ONSSD13tXCdqza0XrRFOe1VcAjxI
4+DoeWRpvcfZA9BZXsZkeC1Z9HGWVpzED61kXLtubOXSlR9n8bOyOP5/kWXv1P5eNYD9L2Gu7zrX
1q3XsrZ24HXnbFcrP2jeAYAxmXmaabh8aUPnnfeQ9uWOG49rfz6EeW8wqIqyTROL+0PjxylS3sgX
Lhtqcny14MfyPapwaIA0mi9SFLKgWCVi2mUax09R0NQwTTqvtBwv7KRY59nYZUp6FqiE8541xLKv
W17d7fp8iFvjbn+tXnnzWbt5c3N7+eavsRJErAs7/e/sF78JfBrx6xe/3H74SvVKPNH0+JtvPn7v
8Sh6Po5a1F5yVksGunaq07UGA8b13vbWVpurf5aBra6fnrfXP+0qxFbZWrGvVM/6UCZlxasLf3F2
0f66UiDB2umvwaIn/mb3kxOpM1XQ2Olz9nTjq22hCPmoCmQeKw7Vp+t3HoX6L23orx+r7d82flsS
M/zexp/kuVk6tyexxt+uN7prXqsqyHI9FYepL64Gv/Xq2dbD9US/klW90/PsYt1VwYztD7ZQ9z9a
X4SZaVUUDlW9aS9iEfnnPr77fTO2X71nzqq6vximrZgPEEVVPfo2tCp/HR4Yz9sgT/jZNJqdPb3a
3FS8qrfP7/4ebA99+y/beuuZraK2kVrT3TeF1pe+HaofODPCSV2BUNwyWf1q9ZmAuu3WWXjNxff4
+oXyoHzRuz+1l0/8N5f91VWIoTZM5X2+1MgEajjV5qmzmdbkpns+3v38mx2j66q9HKrtFOSuSutU
ryxLJKRb5UWbEFbHqz6MGyXwznAB2aknBD5uVKrWehhqHx6rpeC+bq3kQz8671QbJ0+h2nG1B5Cz
w2hq0e5E0DBtOUZPp8j9ZOSxB1/FtdNKhG+48ZvQbcZBv72u4oVNYrqw6dWfnl1Uj7eqv3bIyTVc
O+KI+4/Melpj38qRJtl3H6rjp2rvYaxt39l6VFZK2XslBdwl2TIm2cPDJNmxbv/ZVPp1V0MnrxLR
c/9Q6b8iTfH6Q1DIUOfdmTjaYf5H5MrKQu7L/++BA4wRFSq/xE7MXWIHPg5IqqsFWFVbyVnNB9HW
vvWubwH6Vvk3mRAWTBf1MqyWjvnatnqs2QAtGw234Mf9S+zG9bl/7bSm466zk7HknGbxOjuZPCiT
yq+a7V1nJ9OyleG3ebsU0S7svXbyOQXlNCsvPheePY0x2PKAGZkp1QGDDUpSkyBaHcoTukfT21DQ
HHcZlDCdHLUYagnG1swyVQ+iN3VnQYHW0NlR5rgaICcz2fkfy83/WMH8D1gj1Es7/6O/fyp6Lbc3
/4uuzoqkqUGW3ZCyL9Z7K9ZxZ3stbM9dZkPK/tflOCo+08afC5ZuSGlvHmxDShV1b7uwfvA86Y3M
ziuzO7X60RnoutZz/e/bU1w/Gim16VvDnLK85W3vPQPVBxWV7l7c9pRgsXOSYHG68kk6M0tmg10V
zpykBd4wbQkcChQpXwsObITQmY0b82d0LGWlHoGL1dxm0SSDIrQCK1HqSmkIvk62pxQ3+CwTS+30
l03dHmg4zG/Q/C6v0Euc8z+MXHlZgBg4z58t7JWzlV70vRrB2W5Azxf4/MT5X+DSGcOLrZ0bjLG8
eITgRvTKANfKztTOHbXXote8063VBoZWK9DcwygMG5g0L3CWwBtni8flxOJsr03ZOklmgx6x7CEH
aJihcChQBDGOQSNsZuuDqS5C5nwT6qvsNhC1l72/+1ngJRgfxBj+7Trzx+7HZ5vNEBYP3r179zBu
fHgN9+WX721z0k+nlLToms88zdwwP38k1lGOmUy4oggXGxoua6XUtgitwEpUEBjDMrizR2I5qYAy
AhepbuCVQ7OaF6EVWIlQVzSMyQzu7JFYzij3ziJw0eqaLBq4IrQCK1HqAs/hzh6J5ZwUu+W4WHU5
5NCEEEVoBVai1JVSZ3Bnj8RyTozdUly0utnYVSCL0AqsRKmbLeluZo/EciDGbikuVl2ALJooQyuw
EqWucSqDO3sklpPu+0bgotXNxq4VpgitwEqEurJhgmVwZ4/EckGK3YjLi3Cx6grIojkoQiuwEqUu
aJnBnT11xQUpdstx0eqaLJopQyuwEqWuMCaDO7toyCUxdktxserKbOxKzovQCqxEqasMZHBnsxgu
ibFbiotWNxu7WrAitAIrUeoaEBnc+SxGEWO3FBerroIsmoYitAIrUeo6xjO481mMIsZuKS5aXZNF
E64IrcBKhLqqYVnc+SxGk2K3HBerroYcGmeyCK3ASpS6kO3v57MYTYrdiAtFuGh1TRZNmSK0AitR
6gqb6e/tfBZjiLFbiotV12RjVzJThFZgJUpdxXUGdz6LMcTYLcVFq2uyaEIWoRVYiVJX25yV81kM
bStAOS5WXZtFM4IVoRVYiVLXKpfBnc9iaC+zynHR6tosmoUitAIrUeo6xzO481mMI8ZuKS5WXZdB
0w3jtgitwEqEurrhWmRw57MYR4rdcly0ujaHBgBFaAVWotQVhmVwZ7MYYKTYLcdFqgssj+ZEEVqB
lSh1pcl5dTaLAdLFpAhctLrZ2FW8DK3ASpS6CszhJSt24UtWAo7mKmPfbLYEpAtQEbhYL/I8mtRF
aAVWorxoRS56ZrMl4MQ2YoUowkWra7NoVhahFViJUtcxdthGHFu4jZiGqUx/7mazMgBSGynHxXoR
8mjWFaEVWInwoslXlnSzWRkAqY2U46LVtVk0I4vQCqzEqWtVBhdKr89lchjGUfSj4f7ft0FKzgUz
yvrwr3BOjdKo0UlhnFFMa/5ir2AOSgQqCyuR+koQ2/vyDLGxK/K89NK8CpRDRbnmOsNwNjsGQexD
SnHRfrBZNAFFaAVWotQ1Moc7mx2DJLaHUlysus9B02VoBVai1HXKZHBns2OQxNgtxUWrm4ld2zDg
RWgFViLUtQ1XIoM7nx0rUuyW42LVVXk0q4rQCqxEqStMzsr57JhUgxmBi1bXZtGcLkIrsBKlrmI2
gzuftZKK/SFwserqPJosQyuwEqWudiyDO5+1amLsluKi1c3GrpGmCK3ASpS6LvMW3rL5bNIQY7cU
F6uuyaMJU4RWYCVCXddwxjK489mkIcVuOS5aXZtFk6oIrcBKlLpgXAZ39k0m0N7XluNi1bV5NKeL
0AqsRKkrTPHR7cwBvOTodvEpzBwTKaiFVv7zR7f3KulnFTKCqFB51S02V3XLWseFHG3tpBtrJYOo
/djaOhim2g40B9u92YrW96rramE8r6XuZd2qIIhVXnAYQAyc71fdevbT8NppLT+i6BaHkqJbHHJF
t/Z+m7drsRpPxKJbHMqrOoVnT2MMuegWB0rRrWCDBWoZgNMU3VrAPVaTa6LFPh9VdEuDUGrwQ+2U
97XTrK+9ULLmuh1Bd35kcshyNdQyHpHrfvHXjz9AF38VIWDkMsVfN/5JLL27OXu2Hqvw3171hIF7
zv0ItdB+rNsexhq0aWtoR9NJ2WqnxirAheXbX/Y+1g9CggmiwgB9LTgba6WYrj14Li33vRPq0CLe
mNxKiKp+8oF+F8T646O7P8URvzcwtqoTenT5+Yfg+1cmxvcJqGnWAoSQ069A+JCFapxdpnF8vflt
F1S7glyff/zBqtKudVIqqId26GvjhK+97lk96lGNrXTMuPaQlW44UCvYpgP+epwG/F1ns+szWHXR
9o8ZW3G9Eu+vGF+FP+v3qr7dbNZ3r4A27Y1/fBmlvLr5KURycPK4fvL4hx8r/+vNpg3136e5S6h8
/tv14x82Hhvjj2K7qKe6CEpKMXKAupejqwc72toOsq+VUtJ448I/8OiuTWw/wjo7DlzZuuXjWEsH
Q90rkAFr6EZuRzEM+sesxILc0+BnFusxDF0zpTzDM9kynrvf5G2hlsw+fjaxHu+HoFx9yPxzpzGC
MIsI0pLKdgb+inyvwlIziMVcouxsSfmigvq7oNpWmB+qYDmrfn/llfqtq8u43+PPHLaV8D2lnP0c
ZhhURtG2wUQnVdvd2fH21JjvrkaKEt1cbe/VehlIHl7+dXv90z9u/3oj+c5XgwlTRnfP4dH2wYD+
w9yz1Y+v8mxgWJNbEpp9BSoc5TZpBC52buB4Ds0ZXoRWYGX50owwDRfUMuRbXu3w8+31NAuOV0ps
c/yqfdKuQ3dYTR+9fVpvf5ynARnzZ9/EClK1IQQu2sk6i6ZsEVqBlSgng6AOF0mms5t/+Ivb8zaM
dllMSR5it8pv1hPkhb+42vwWthSdr/vfsoDGfL/EAPLu9h6aV7/97N03KuWYei2HJnhuKXX25bZk
pF4p4poiXGTABl5ZNKOK0AqsRAWs0pDBnX25LbmjqVuKi1WXuzzaQncI7SL265AkvVE59ZyQ1doV
GVcgKsKZdq+iG60nSHmR9touyAsbDJBloxlbhE2BSijvWXLN0jte4berEGxP29vpls1tMroL6xDV
OXBHvp6g6ArbmanpITM3v2iHvrJq/wrSOUZxi6URoBmMbhi7o6/Jyl1ce4wYRv8PxNjrYfdzMtwK
U5XTBxR/CfX5+AOkbau4dbu9udnvTR5V2/Rx99f/kvVNwc27R9mD75TSK3ePCN79K3dVKW0h+R9/
3Pw0XXQx9YY/Vs8uwtxj9P1v/bkPg9cqjKnhTXe8P2RfdF6tb/zF9Wovvw+rv19P3/TqdCfLGxXX
Ut85ohKP/vkAh+0D/P6BsAAArxGIx449IR/69jlaeUDIAmZ7llGOXoHpa9uOtmZy7OtBwFhL0wo1
iM4z0KvqyabtuiDc5OD97nZRfgdx9XY/3YN6H1d46v+IK5tjI0Fk2aCh9lv1nCyhVcMoRWeNMAMD
gmemAZDqEcVcoUe2Lf20Hglslmow9830OPOPd0thgyHyO3WDcTk2mrNTN5hUlqnB+J6JflDOq9ET
PDM1GKpHjNGFHlm4wTgUm2OGRjgcGsPQxrJDm1XU7WhH3YYfkTVfZO1htxK4Far6feox/6zuMl11
AC1ZAzD7pocCGSNdKtlaANWBTl62hN1o4b1dE/eXPOuCZ6+u71+6vARc9+Nt98IlCbo3MsNT/KaA
vMujs3Oy/LPTO5dSs48cQQ5RUYDHJnPFIwiRH3IEKaG+318By7Nx35+m8XL4Ozp1DlqAnh+8SqzM
D155j4Rm4jptO2mk74aeEBQls73ZYBB6Vn6K7MFaC1wJ7SVYULQO7IG5lrw0PpguTN+0Uzx5aZx/
dr4DUyAL22tmPXAONQdomDl1uzgkEtQPQH0/DqPtPTtduyjTwMrSvHTZ9Q5gKDbHTOp46aRO8sbu
n+nIN7n3o6PuGtXt02H7VvXJrb++qS78zWbdX++1yeDm3hnQWrd80Ns9iWfTw2fbhydFe79+5oeA
G96Xtufbb/Vx587F7ovbJxF0fYTYYZHv+uyZ38RjGNH26XLZVfVNd3t5czu9Z5ENVJ98/dWj6nb7
K2h0I6CR+vVhcyFEA6zm7LqO25zibq9H1bCOk9PwPmD6jkfVRfvz1WZVcRb+uL6Mf2Tyx0fV02dn
w2Ydoffx757W9w9PH+vjF0LYhHS7Pg/CKcmMrX6838R3vXPboc8CUYDT+yzzcobyvvLhaKPfcLkc
WSUfoF0UaVz+AizQNvaF0CZVFnk42tjQEBmyouGaPQTZAo0RoRFpi101qhPRXqCYVaAJUr8QdSmV
gifa5iFooyPYZMnal0Xj0gh2KwYN45Z0MCLlo+TxJzUWIYT1ppKHLFxjTeZMxB9f3QTo4LM4SV5f
Deu+uo6W3p77TSAztqNrwapeDeKPuA/6yebq9nK4f+TH6vPDj93b9XAsdnnkeHvZ34T51L+yuk8t
39ju33gs9P3+9Deq6TTl3cdf3fvVaw9nzVfn3j+Nnw/TyPV5dRm2QFXb3PVVbnnDnRXO3m2lP+Ql
oAHOSRF30AQspQksQAjdBJKXSMBiQxTKkViksjhztCzLEMLK4kzKIjpHM01ikciimKLIsgAhpCyB
cMKCR+dYYsymsgAcL8sihLCyABywEKJhQGNxIIumyLIAIbQsOmEBKyYaIMZsKosSx8uyCCGsLOqQ
BecNt+IEI2Q63n9yFR6JpSAOB/fnj+1/D+0PR/z5Q/sdH/fcoR0gRrskRnsaZ/r4c8jLEMLGmeYJ
CxGjXRnaaehUFnP8jGcZQlhZjE1ZROdYoLFIZbGEMWwRQlhZbDqGyRWTDdOGxCKRRTN3vCyLEELK
EggfsBCy4YqWqKaycML8eBFCWFl4Oj9W0TkgaSxSWcTxCwrLEMLKImTKIjpHCFryciALoctdhBBa
lrTL1dE5UjASi1QWTcimFiGElUWrlEV0juI0FqkshjA/XoQQVhaTsjDROcrRWKSyOHa8LIsQwsri
2AELzhuwp1hYO23acELipLTBxGjXBkhuPYgzwhi2CCF0nKVjmI3RbhSNRSKLAUJnvQghpCyBcMoi
OscK2iz9QBbC/HgRQmhZ0vmxi85xXJJYpLJIwtLXIoSwskh9wEKohjEai1QWRRjDFiGElUUlY5hg
KxZYWBqLRBZLWFZfhhBSlkA4ZRGdw7UisUhlIeTeyxBCy+ISFjw6BxSNRSoLYVl9GUJYWUCnLKJz
hKCtYqeyiOP7lmUIYWURad8C0TkSaCxSWQirwMsQwsqi+QELzhvh1Alm3ydNG05JnJI2CIjRrohb
V9I4IyxmLEMIHWfpGCZitGtOW65NZSEsqy9DCCuLhZRFdI4hsjiQhTCGLUIILUs6hsnoHCtoMZvI
4gjL6ssQQsoSCKcsptLBxOXaVBYgjGGLEMLKAukYplZMN4y4XJvKQlhWX4YQVhZhUxZTeVsii1QW
Sehyd4Qe9L23k2mXq6NzgMgilYWwrL4MIaws5pBFcI4gsjiQhdDlLkIILUva5ZroHCloq4upLIRV
4GUIYWVx8oAF541i7ASz79OmDSckTkobzFT4mujWgzgjjGFbQg+7fuxcOobZGO2ayOKfslhGWFZf
hhBOlkg4ZRGdYyRtXTKVRRDGsEUIYWURKQsXnWOBxiKVhbCjdBlCWFnUIYvgHAe0dckDWQhD+yKE
0LIkQ7tkK2YaRlyAS2Th5vjcexlCSFkC4ZTFVJ6cuAs6lcUev358R+ghlyQi4YQFj84B4uaXVBbC
SZhlCGFlcSZlsa1ETmvKiSzAjl+SuCNEC1+kLIFAwgKic5SlsUhlISxJLEMIK4uwByw4bzTTJ5h9
nzRtOCVxStogYSoCamlJchpnkhRnCxDCxplM40ysAkWuaPOuVBZNkGURQlhZ9KEskjdATF5SWQhH
PpYhhJXFpLLIaU1A00bSRBbBj18oXYYQUpZAOGURnaOJm19SWQinOZchhJUFUllUdI5xS67UWEEo
lrAMIawsSqYspnpXlsYilUWTZFmAEFYWncqio3OcprFIZXHHbwdYhhBWFqcOWEhomKR1/IkskhH6
lkUIIWWRLO1bzNSUOZxgmnna+fEJiZPmx4FX8DdxAS6NM8oYtgghbJzBYZyFaAegTTBSWQRJlgUI
YWURqSw2OkcI2kpTKosmrAguQggri9Ypi6m+IPHwZCoLZaF0EUJYWUwqi4vOUZq2+SWVxRFkWYQQ
VhZ3KEtwjmW0kTSRRTGSLAsQQsqiWCKLYtE5jrgwksqiju9ylyGElUXBAQspyOfVD2Q5/gXvMoT+
Yu/cetwmwjB8z6+wuKEr4emcD0EgIQQCiZPK4QZVK8d2dgPZZJVkgQr478w4aYHP2WrH38xsRCuh
km66yTPPN7bn9djjaC0OULAZE0RwHAXUYhBakgDFajEjLaE4iuKGCVAL4nLZNECxWizctwxnpy2z
GUbfWWNDTnBMbAhcghjk4iKgn2k+/UKCNECR/UzzMYWUhCKnHKEWxK1FaYBitQgKKMSMScIZbtwF
tSAuO0kDFKtFjShCcSTyxhOoRSN6SxKgWC0a9hYZiqM17gwA1IK4kCANUKwWZyBFKI5TuAlqoMXQ
6eeP0wBFavHAgELNmCJM48IL1MIRWpIAxWrhYy1SEeFwmzLUIqZfdpIGKFaLYIBCh+Jo5MUvUAvi
5sM0QLFatIIUoTjW4aIu1GJQWhIAxWoxUMtw0t6JHAt25o0NGcFRscFzacKQSzXCfoaY7EsDFNvP
3LifSU0EcrkIoAWzrEcaoEgtlkItNhRHK1ymg1oE4hxPEqBYLcJBilAchxylQy0SpSUBUKwWCbW4
GTOEIYsDtWiEliRAsVr0WIs0RCD3cFALYm4qDVCsFqP/TcFnlIbiKOT96kCLm7wAayqgSC0eeEQh
Hfp0LdQiUVoSAMVqkVALm3FKDHJThlrs1DSVCihWi2UjCsUINbjwArU4lJYEQLFaHNTCw8QhFTkW
7MwYG/KCT48NAxdn6NO1/+1njk5eUjMVUFw/C8CQIvR2SXEhGWoRKC0JgGK1CKhFhOJo5BVkUItC
aEkCFKtFjbSE4liGu64Vapl86isVUKwWDbXIGeeEIi9+gVomX6SUCihWi9MjCuXxDO4MANDC6NQp
u1RAkVoYhRQqFIc73KYMtUxeUjMVUKwWriFFKI5Ero4LtUxebiEVUKwWASl0KI5C3sYPtZipE7yp
gGK1GAopQnE08jZ+qGXySkqpgKK1GEBxuPJdygyj77yxISP4Q2KDuy82mNDbDbKsoJ9xitj8kgBF
9jMPDClCb7ccd+5gpAWx+SUBitYCNz8biuOQF3BCLRJxMiMJUKwWyUYUSqCXahxpQcSGJEDRWmBs
cDMuCOO4MwBQCyZNJQGK1aLHWpRAr7kCtUy+kCAVUKwWA7QwGoojkFEXaBGT16VNBRSpxQNDilAc
SXEUUMvki5RSAcVqEVALC8VRFDdKh1oUQksSoFgtaqQlFEcjiwO1TH5AQSqgWC0aagn/Ea5yLNiZ
NTbkBH9NbGDGECuU4Pp0bAhc3JfV4YajoJ9NX5EgFVBsP3MKUoTebhzuvCTQMn1FglRAkVo8MKAQ
oTjW4iigFjb1spNUQLFamIMUoTgOec081II4UZoGKFYL14BCzrgkFHnNPNSip58oTQMUq0WPKZQk
TOIGGFAL4kRpGqBYLYYCCjXziDzLWtzw0P7DbReO6zeb9XK/2QZPi+XV3bYJb46P9vr+w73+53hf
rjWvO95zTqhiQvP7jvcqbAIc+fgt0PnU5KtvUwFFdj4PPKJQilCH2wSgFobSkgAoVguDWvSMKyIM
7hoHoGX6MgWpgGK1KD6iUAqdhYCW6csUpAKK1uIAhRnWp1E5HsKWN5wdwQuv58qYIYpLxd19O2sT
ertVuN4O+tn0dR9SAcX2MzPuZ0oTanCjW6jFIsaVSYBitVhIYWdcE4HcCQAtevJj0lIBRWrxwCMK
pdHrIEEtHKUlAVCsFg61uBl1xNkci218O/61V+0qRwEOGa+lSjToz9ia1x1HLCdCW2Xo6eNI4OKG
MORUPNwEJGITSAIUuwnI8SagDFHI66aglslPQkkFFKtFAS2czrglDrn6ItBiJi/ZlgooUosHHlEo
RyRyKTCohU0fdaQBitXCIAWbCUoE8goFqEUgeksSoFgtAvYWPqOUGIkb+0AtcvpJvjRAsVokHVEw
TqRmGY6QWSNiTnDMZX+ch95uBe6QAfuZmT5PnAYotp8ZBihE6O2O4SigFsQ9jGmAYrXYsRbBCE36
NC9nJz+fKRVQpBZLoRY5o4wwhptFhFoYQksSoFgtbKQlFEcgKaAWxNWQaYBitUgFKFQojnS4AQbU
oqY+NycVUKwWZSFFKI5B3iYKtSCuhkwDFKvFwN6iZxT/hA2gxVFEb0kCFKnFA48oBEcvBQa1MERs
SAIUq4XB2GBmlBOLfHoP1IK4tywNUKwWMaYY1rI3GUbfeWNDRnBUbDAzIQijuHMHsJ9JVD9LABTb
z0YUdkYF4ch7+qEWg9CSBChWixlTCIFexQhqsdNPlKYBitViDaBwoTgy5VP6OaWIe8vSAEVpGYAh
RSiOErg+C7Ug5iHTAMVqYUCLoL446PtuoRbEmZo0QLFaDIMUoTgGuV4I1GKnx4Y0QLFarAIUzBcH
PUoHWhjiAvQ0QJFaPPCIQkjCBe6yOKgFMTeVBihaiwYUfEYlkcj1QqCWyYuHpgKK1SLciIJxok2O
lYGzxoac4A+JDfetTSZ46O0GeYMp7Gdq+qxWGqDYfqYooBhSnjE5rvvI288ygmP6mRz6v7U5FtnI
KjQnOEroUGhncyx2mFdoRnCUUDsTiqhwT3m/3W62/9rx2D/vbrtmAPvxq8ubfr9dtjty099sti8u
m3Z/16yqxbJfdbuqWXfVt198tvO4nWuMc8Ip1sz/vN7vb59XnzXL1WEHeNtsd331+ffff1tt+93t
Zr3rg9L93a5aLf2e6afnJelumt2+314Ou+ih5M+rT5q7q+t99bnnvmxXS++QDK+3ve8Au73//8/D
3vzJu39U/juvN2FX/+03333/QXW3XfrXT7eeYN9fdnMP0fa73Qeh62xf+Ld+qqrnH4QDwt5/7OWq
X1/tr4cfMyfCO/tts94tfM37dbsJXdG/+UH1a7/deTT/mhEWfn3zy7Iffu12s1ld+qpu+/2HrWsW
vW2bWlLX1QtneN21XNfK6gVni1Z2nXnau9Z2dsFrx7Wole5oLedS1p3VVjdmsWiUfEqpdHreqVoz
Ma+dFrRuF9LWam55a/veiKYPtOGQMgDu7uaHw8vC//VV++qwXQw/uNt5x82V/5n/azhOPWXEVX+9
e39HHDaQ/KV+9Tr0zMM/qLpl35HqC38kXa1CU/yX3g3bv5fsi7hcd/1i6W+26lcvqie7u9utr/Bh
p7G/2/bVanN1Ff7qXeyvl7vqxr/tm35BHrtbn0Vby9T11d6QH/fEL/d81bxfbLb9oXnhHxw9/PPL
hJyAN0TxYvDHMdus+iGo9mg3N/63w75xVj29222f7ubL9dOXA8960VVz3jStkrYWHWtrxSytlVC2
7pidGypVu1jQd9453SxXvFkvya+bXRg271/MqiefffrZZvtL9YTb9zlVlF1cnB3ucNg/TfXq+rxS
VP+KFGEYPyirFuHFPXxZrA2B4+UWdLP0B01f01/76ue73d5vYYOwvpv5lwv/t+vwVevNur4NR7Nd
OEBUHqDx4vzOoz9+znu74YfzZteXb4r/xaurfhs+qFlXy5ubvls2+/5lA/xuDjbg+LFP+qvq8H0X
5al3/bqDyPAeBcjDRjxO0j99r9otO9/l+nUzX/nPvAxvh2+li5Z2zcJ13B2/9fN+FTyQ079zud6s
--000e0cd3406a75c8a504c8337ccb
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--000e0cd3406a75c8a504c8337ccb--


From xen-devel-bounces@lists.xen.org Mon Aug 27 00:56:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Aug 2012 00:56:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T5nbs-0000cA-8H; Mon, 27 Aug 2012 00:55:20 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1T5nbq-0000c5-6z
	for xen-devel@lists.xen.org; Mon, 27 Aug 2012 00:55:18 +0000
Received: from [85.158.143.99:42157] by server-2.bemta-4.messagelabs.com id
	E9/EE-21239-575CA305; Mon, 27 Aug 2012 00:55:17 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-12.tower-216.messagelabs.com!1346028915!22551832!1
X-Originating-IP: [143.182.124.21]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMjEgPT4gMTg2OTk0\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14035 invoked from network); 27 Aug 2012 00:55:16 -0000
Received: from mga03.intel.com (HELO mga03.intel.com) (143.182.124.21)
	by server-12.tower-216.messagelabs.com with SMTP;
	27 Aug 2012 00:55:16 -0000
Received: from azsmga001.ch.intel.com ([10.2.17.19])
	by azsmga101.ch.intel.com with ESMTP; 26 Aug 2012 17:55:14 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.80,316,1344236400"; d="scan'208";a="185155568"
Received: from fmsmsx105.amr.corp.intel.com ([10.19.9.36])
	by azsmga001.ch.intel.com with ESMTP; 26 Aug 2012 17:55:14 -0700
Received: from FMSMSX110.amr.corp.intel.com (10.19.9.29) by
	FMSMSX105.amr.corp.intel.com (10.19.9.36) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Sun, 26 Aug 2012 17:55:13 -0700
Received: from shsmsx101.ccr.corp.intel.com (10.239.4.153) by
	fmsmsx110.amr.corp.intel.com (10.19.9.29) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Sun, 26 Aug 2012 17:55:13 -0700
Received: from shsmsx102.ccr.corp.intel.com ([169.254.2.92]) by
	SHSMSX101.ccr.corp.intel.com ([169.254.1.239]) with mapi id
	14.01.0355.002; Mon, 27 Aug 2012 08:55:11 +0800
From: "Xu, Dongxiao" <dongxiao.xu@intel.com>
To: Keir Fraser <keir@xen.org>, "xen-devel@lists.xen.org"
	<xen-devel@lists.xen.org>
Thread-Topic: [Xen-devel] [PATCH] nvmx: fix resource relinquish for nested VMX
Thread-Index: Ac2B1OBPGx/S6qx2ik6MKRf7VS1RJQCGQN+Q
Date: Mon, 27 Aug 2012 00:55:11 +0000
Message-ID: <40776A41FC278F40B59438AD47D147A90FE1F6DB@SHSMSX102.ccr.corp.intel.com>
References: <1345691481-6862-1-git-send-email-dongxiao.xu@intel.com>
	<CC5CFDD4.4A084%keir@xen.org>
In-Reply-To: <CC5CFDD4.4A084%keir@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Subject: Re: [Xen-devel] [PATCH] nvmx: fix resource relinquish for nested VMX
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Keir Fraser [mailto:keir.xen@gmail.com] On Behalf Of Keir Fraser
> Sent: Friday, August 24, 2012 4:46 PM
> To: Xu, Dongxiao; xen-devel@lists.xen.org
> Subject: Re: [Xen-devel] [PATCH] nvmx: fix resource relinquish for nested VMX
> 
> On 23/08/2012 04:11, "Dongxiao Xu" <dongxiao.xu@intel.com> wrote:
> 
> > The previous order of relinquish resource is:
> > relinquish_domain_resources() -> vcpu_destroy() -> nvmx_vcpu_destroy().
> > However some L1 resources like nv_vvmcx and io_bitmaps are free in
> > nvmx_vcpu_destroy(), therefore the relinquish_domain_resources() will
> > not reduce the refcnt of the domain to 0, therefore the latter vcpu
> > release functions will not be called.
> >
> > To fix this issue, we need to release the nv_vvmcx and io_bitmaps in
> > relinquish_domain_resources().
> >
> > Besides, after destroy the nested vcpu, we need to switch the
> > vmx->vmcs back to the L1 and let the vcpu_destroy() logic to free the L1
> VMCS page.
> >
> > Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
> 
> Couple of comments below.
> 
> > diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
> > index 2e0b79d..1f610eb 100644
> > --- a/xen/arch/x86/hvm/vmx/vvmx.c
> > +++ b/xen/arch/x86/hvm/vmx/vvmx.c
> > @@ -57,6 +57,9 @@ void nvmx_vcpu_destroy(struct vcpu *v)  {
> >      struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
> >
> > +    if ( nvcpu->nv_n1vmcx )
> > +        v->arch.hvm_vmx.vmcs = nvcpu->nv_n1vmcx;
> 
> Okay, this undoes the fork in nvmx_handle_vmxon()? A small code comment to
> explain that would be handy.

Consider the following case:
When the vcpu is running on behalf of the L2 guest, v->arch.hvm_vmx.vmcs points
to the L2's VMCS (also known as the shadow VMCS, nvcpu->nv_n2vmcx). If the user
destroys the L1 guest with "xl destroy" at this point, we need to set
v->arch.hvm_vmx.vmcs back to the L1's VMCS; otherwise the L2's VMCS will be
freed twice while the L1's VMCS is never freed.
I will add a comment to the code.

> 
> >      nvmx_purge_vvmcs(v);
> 
> This call of nvmx_purge_vvmcs() is no longer needed, and should be removed?
Yes, this could be removed. I will send out a new version.

Thanks,
Dongxiao


> 
>  -- Keir
> 
> >      if ( nvcpu->nv_n2vmcx ) {
> >          __vmpclear(virt_to_maddr(nvcpu->nv_n2vmcx));
> > @@ -65,6 +68,14 @@ void nvmx_vcpu_destroy(struct vcpu *v)
> >      }
> >  }
> >
> > +void nvmx_domain_relinquish_resources(struct domain *d) {
> > +    struct vcpu *v;
> > +
> > +    for_each_vcpu ( d, v )
> > +        nvmx_purge_vvmcs(v);
> > +}
> > +
> >  int nvmx_vcpu_reset(struct vcpu *v)
> >  {
> >      return 0;
> > diff --git a/xen/include/asm-x86/hvm/hvm.h
> > b/xen/include/asm-x86/hvm/hvm.h index 7243c4e..3592a8c 100644
> > --- a/xen/include/asm-x86/hvm/hvm.h
> > +++ b/xen/include/asm-x86/hvm/hvm.h
> > @@ -179,6 +179,7 @@ struct hvm_function_table {
> >      bool_t (*nhvm_vmcx_hap_enabled)(struct vcpu *v);
> >
> >      enum hvm_intblk (*nhvm_intr_blocked)(struct vcpu *v);
> > +    void (*nhvm_domain_relinquish_resources)(struct domain *d);
> >  };
> >
> >  extern struct hvm_function_table hvm_funcs; diff --git
> > a/xen/include/asm-x86/hvm/vmx/vvmx.h
> > b/xen/include/asm-x86/hvm/vmx/vvmx.h
> > index 995f9f4..bbc34e7 100644
> > --- a/xen/include/asm-x86/hvm/vmx/vvmx.h
> > +++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
> > @@ -96,6 +96,7 @@ uint32_t nvmx_vcpu_asid(struct vcpu *v);  enum
> > hvm_intblk nvmx_intr_blocked(struct vcpu *v);  int
> > nvmx_intercepts_exception(struct vcpu *v,
> >                                unsigned int trap, int error_code);
> > +void nvmx_domain_relinquish_resources(struct domain *d);
> >
> >  int nvmx_handle_vmxon(struct cpu_user_regs *regs);  int
> > nvmx_handle_vmxoff(struct cpu_user_regs *regs);
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 27 02:34:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Aug 2012 02:34:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T5p9K-00055J-WB; Mon, 27 Aug 2012 02:33:59 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1T5p9J-00055E-F5
	for xen-devel@lists.xen.org; Mon, 27 Aug 2012 02:33:57 +0000
Received: from [85.158.143.99:43601] by server-3.bemta-4.messagelabs.com id
	9F/33-08232-49CDA305; Mon, 27 Aug 2012 02:33:56 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1346034835!21002389!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzM1Njc2\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18291 invoked from network); 27 Aug 2012 02:33:56 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-6.tower-216.messagelabs.com with SMTP;
	27 Aug 2012 02:33:56 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga102.jf.intel.com with ESMTP; 26 Aug 2012 19:33:53 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.80,316,1344236400"; d="scan'208";a="191630539"
Received: from unknown (HELO localhost) ([10.239.36.11])
	by orsmga002.jf.intel.com with ESMTP; 26 Aug 2012 19:33:35 -0700
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Mon, 27 Aug 2012 10:27:12 +0800
Message-Id: <1346034432-29872-1-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
Subject: [Xen-devel] [PATCH v2] nvmx: fix resource relinquish for nested VMX
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Changes from v1:
 - Add a comment explaining the VMCS pointer switch when destroying
   nested vcpus.
 - Remove the unnecessary nvmx_purge_vvmcs() call when destroying
   nested vcpus.

The previous order of relinquishing resources is:
relinquish_domain_resources() -> vcpu_destroy() -> nvmx_vcpu_destroy().
However, some L1 resources such as nv_vvmcx and the io_bitmaps are freed
in nvmx_vcpu_destroy(), so relinquish_domain_resources() will not reduce
the refcnt of the domain to 0, and the later vcpu release functions will
not be called.

To fix this issue, we need to release the nv_vvmcx and io_bitmaps in
relinquish_domain_resources().

Besides, after destroying the nested vcpu, we need to switch vmx->vmcs
back to the L1 VMCS and let the vcpu_destroy() logic free the L1 VMCS page.

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/hvm.c             |    3 +++
 xen/arch/x86/hvm/vmx/vmx.c         |    3 ++-
 xen/arch/x86/hvm/vmx/vvmx.c        |   18 +++++++++++++++++-
 xen/include/asm-x86/hvm/hvm.h      |    1 +
 xen/include/asm-x86/hvm/vmx/vvmx.h |    1 +
 5 files changed, 24 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 7f8a025..0576a24 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -561,6 +561,9 @@ int hvm_domain_initialise(struct domain *d)
 
 void hvm_domain_relinquish_resources(struct domain *d)
 {
+    if ( hvm_funcs.nhvm_domain_relinquish_resources )
+        hvm_funcs.nhvm_domain_relinquish_resources(d);
+
     hvm_destroy_ioreq_page(d, &d->arch.hvm_domain.ioreq);
     hvm_destroy_ioreq_page(d, &d->arch.hvm_domain.buf_ioreq);
 
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index ffb86c1..3ea7012 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -1547,7 +1547,8 @@ static struct hvm_function_table __read_mostly vmx_function_table = {
     .nhvm_vcpu_asid       = nvmx_vcpu_asid,
     .nhvm_vmcx_guest_intercepts_trap = nvmx_intercepts_exception,
     .nhvm_vcpu_vmexit_trap = nvmx_vmexit_trap,
-    .nhvm_intr_blocked    = nvmx_intr_blocked
+    .nhvm_intr_blocked    = nvmx_intr_blocked,
+    .nhvm_domain_relinquish_resources = nvmx_domain_relinquish_resources
 };
 
 struct hvm_function_table * __init start_vmx(void)
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 2e0b79d..5f6553d 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -57,7 +57,15 @@ void nvmx_vcpu_destroy(struct vcpu *v)
 {
     struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
 
-    nvmx_purge_vvmcs(v);
+    /* 
+     * When destroying the vcpu, it may be running on behalf of L2 guest.
+     * Therefore we need to switch the VMCS pointer back to the L1 VMCS,
+     * in order to avoid double free of L2 VMCS and the possible memory
+     * leak of L1 VMCS page.
+     */
+    if ( nvcpu->nv_n1vmcx )
+        v->arch.hvm_vmx.vmcs = nvcpu->nv_n1vmcx;
+
     if ( nvcpu->nv_n2vmcx ) {
         __vmpclear(virt_to_maddr(nvcpu->nv_n2vmcx));
         free_xenheap_page(nvcpu->nv_n2vmcx);
@@ -65,6 +73,14 @@ void nvmx_vcpu_destroy(struct vcpu *v)
     }
 }
  
+void nvmx_domain_relinquish_resources(struct domain *d)
+{
+    struct vcpu *v;
+
+    for_each_vcpu ( d, v )
+        nvmx_purge_vvmcs(v);
+}
+
 int nvmx_vcpu_reset(struct vcpu *v)
 {
     return 0;
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index 7243c4e..3592a8c 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -179,6 +179,7 @@ struct hvm_function_table {
     bool_t (*nhvm_vmcx_hap_enabled)(struct vcpu *v);
 
     enum hvm_intblk (*nhvm_intr_blocked)(struct vcpu *v);
+    void (*nhvm_domain_relinquish_resources)(struct domain *d);
 };
 
 extern struct hvm_function_table hvm_funcs;
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index 995f9f4..bbc34e7 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -96,6 +96,7 @@ uint32_t nvmx_vcpu_asid(struct vcpu *v);
 enum hvm_intblk nvmx_intr_blocked(struct vcpu *v);
 int nvmx_intercepts_exception(struct vcpu *v, 
                               unsigned int trap, int error_code);
+void nvmx_domain_relinquish_resources(struct domain *d);
 
 int nvmx_handle_vmxon(struct cpu_user_regs *regs);
 int nvmx_handle_vmxoff(struct cpu_user_regs *regs);
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 27 02:34:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Aug 2012 02:34:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T5p9K-00055J-WB; Mon, 27 Aug 2012 02:33:59 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1T5p9J-00055E-F5
	for xen-devel@lists.xen.org; Mon, 27 Aug 2012 02:33:57 +0000
Received: from [85.158.143.99:43601] by server-3.bemta-4.messagelabs.com id
	9F/33-08232-49CDA305; Mon, 27 Aug 2012 02:33:56 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1346034835!21002389!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzM1Njc2\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18291 invoked from network); 27 Aug 2012 02:33:56 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-6.tower-216.messagelabs.com with SMTP;
	27 Aug 2012 02:33:56 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga102.jf.intel.com with ESMTP; 26 Aug 2012 19:33:53 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.80,316,1344236400"; d="scan'208";a="191630539"
Received: from unknown (HELO localhost) ([10.239.36.11])
	by orsmga002.jf.intel.com with ESMTP; 26 Aug 2012 19:33:35 -0700
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Mon, 27 Aug 2012 10:27:12 +0800
Message-Id: <1346034432-29872-1-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
Subject: [Xen-devel] [PATCH v2] nvmx: fix resource relinquish for nested VMX
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Changes from v1:
 - Add a comment explaining the VMCS pointer switch when destroying
   nested vcpus.
 - Remove the unnecessary nvmx_purge_vvmcs() call when destroying
   nested vcpus.

The previous resource relinquish order was:
relinquish_domain_resources() -> vcpu_destroy() -> nvmx_vcpu_destroy().
However, some L1 resources such as nv_vvmcx and the io_bitmaps are freed
in nvmx_vcpu_destroy(), so relinquish_domain_resources() cannot reduce
the domain's refcnt to 0, and the later vcpu release functions are never
called.

To fix this issue, we release nv_vvmcx and the io_bitmaps in
relinquish_domain_resources() instead.

Besides, after destroying the nested vcpu, we switch vmx->vmcs back to
the L1 VMCS so that the vcpu_destroy() logic can free the L1 VMCS page.

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/hvm.c             |    3 +++
 xen/arch/x86/hvm/vmx/vmx.c         |    3 ++-
 xen/arch/x86/hvm/vmx/vvmx.c        |   18 +++++++++++++++++-
 xen/include/asm-x86/hvm/hvm.h      |    1 +
 xen/include/asm-x86/hvm/vmx/vvmx.h |    1 +
 5 files changed, 24 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 7f8a025..0576a24 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -561,6 +561,9 @@ int hvm_domain_initialise(struct domain *d)
 
 void hvm_domain_relinquish_resources(struct domain *d)
 {
+    if ( hvm_funcs.nhvm_domain_relinquish_resources )
+        hvm_funcs.nhvm_domain_relinquish_resources(d);
+
     hvm_destroy_ioreq_page(d, &d->arch.hvm_domain.ioreq);
     hvm_destroy_ioreq_page(d, &d->arch.hvm_domain.buf_ioreq);
 
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index ffb86c1..3ea7012 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -1547,7 +1547,8 @@ static struct hvm_function_table __read_mostly vmx_function_table = {
     .nhvm_vcpu_asid       = nvmx_vcpu_asid,
     .nhvm_vmcx_guest_intercepts_trap = nvmx_intercepts_exception,
     .nhvm_vcpu_vmexit_trap = nvmx_vmexit_trap,
-    .nhvm_intr_blocked    = nvmx_intr_blocked
+    .nhvm_intr_blocked    = nvmx_intr_blocked,
+    .nhvm_domain_relinquish_resources = nvmx_domain_relinquish_resources
 };
 
 struct hvm_function_table * __init start_vmx(void)
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 2e0b79d..5f6553d 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -57,7 +57,15 @@ void nvmx_vcpu_destroy(struct vcpu *v)
 {
     struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
 
-    nvmx_purge_vvmcs(v);
+    /* 
+     * When destroying the vcpu, it may be running on behalf of L2 guest.
+     * Therefore we need to switch the VMCS pointer back to the L1 VMCS,
+     * in order to avoid double free of L2 VMCS and the possible memory
+     * leak of L1 VMCS page.
+     */
+    if ( nvcpu->nv_n1vmcx )
+        v->arch.hvm_vmx.vmcs = nvcpu->nv_n1vmcx;
+
     if ( nvcpu->nv_n2vmcx ) {
         __vmpclear(virt_to_maddr(nvcpu->nv_n2vmcx));
         free_xenheap_page(nvcpu->nv_n2vmcx);
@@ -65,6 +73,14 @@ void nvmx_vcpu_destroy(struct vcpu *v)
     }
 }
  
+void nvmx_domain_relinquish_resources(struct domain *d)
+{
+    struct vcpu *v;
+
+    for_each_vcpu ( d, v )
+        nvmx_purge_vvmcs(v);
+}
+
 int nvmx_vcpu_reset(struct vcpu *v)
 {
     return 0;
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index 7243c4e..3592a8c 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -179,6 +179,7 @@ struct hvm_function_table {
     bool_t (*nhvm_vmcx_hap_enabled)(struct vcpu *v);
 
     enum hvm_intblk (*nhvm_intr_blocked)(struct vcpu *v);
+    void (*nhvm_domain_relinquish_resources)(struct domain *d);
 };
 
 extern struct hvm_function_table hvm_funcs;
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index 995f9f4..bbc34e7 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -96,6 +96,7 @@ uint32_t nvmx_vcpu_asid(struct vcpu *v);
 enum hvm_intblk nvmx_intr_blocked(struct vcpu *v);
 int nvmx_intercepts_exception(struct vcpu *v, 
                               unsigned int trap, int error_code);
+void nvmx_domain_relinquish_resources(struct domain *d);
 
 int nvmx_handle_vmxon(struct cpu_user_regs *regs);
 int nvmx_handle_vmxoff(struct cpu_user_regs *regs);
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 27 02:50:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Aug 2012 02:50:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T5pPL-0005GC-Kw; Mon, 27 Aug 2012 02:50:31 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1T5pPK-0005G7-S2
	for xen-devel@lists.xen.org; Mon, 27 Aug 2012 02:50:31 +0000
Received: from [85.158.138.51:60827] by server-1.bemta-3.messagelabs.com id
	78/EF-09327-570EA305; Mon, 27 Aug 2012 02:50:29 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-3.tower-174.messagelabs.com!1346035827!20025891!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzM1Njc2\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7579 invoked from network); 27 Aug 2012 02:50:28 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-3.tower-174.messagelabs.com with SMTP;
	27 Aug 2012 02:50:28 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga102.jf.intel.com with ESMTP; 26 Aug 2012 19:50:26 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.80,316,1344236400"; d="scan'208";a="185641342"
Received: from fmsmsx105.amr.corp.intel.com ([10.19.9.36])
	by orsmga001.jf.intel.com with ESMTP; 26 Aug 2012 19:50:26 -0700
Received: from shsmsx101.ccr.corp.intel.com (10.239.4.153) by
	FMSMSX105.amr.corp.intel.com (10.19.9.36) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Sun, 26 Aug 2012 19:50:26 -0700
Received: from shsmsx102.ccr.corp.intel.com ([169.254.2.92]) by
	SHSMSX101.ccr.corp.intel.com ([169.254.1.239]) with mapi id
	14.01.0355.002; Mon, 27 Aug 2012 10:50:24 +0800
From: "Xu, Dongxiao" <dongxiao.xu@intel.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Thread-Topic: [Xen-devel] [PATCH v2] QEMU/helper2.c: Fix multiply issue for
	int and uint types
Thread-Index: AQHNgeshpsbLn3d2ukGegzirk+Cwa5ds+Adw
Date: Mon, 27 Aug 2012 02:50:24 +0000
Message-ID: <40776A41FC278F40B59438AD47D147A90FE1F771@SHSMSX102.ccr.corp.intel.com>
References: <40776A41FC278F40B59438AD47D147A90FE17D93@SHSMSX102.ccr.corp.intel.com>
	<alpine.DEB.2.02.1208201621330.15568@kaball.uk.xensource.com>
	<40776A41FC278F40B59438AD47D147A90FE1D78C@SHSMSX102.ccr.corp.intel.com>
	<alpine.DEB.2.02.1208241220400.15568@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1208241220400.15568@kaball.uk.xensource.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>, "Keir \(Xen.org\)" <keir@xen.org>,
	Jan Beulich <JBeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v2] QEMU/helper2.c: Fix multiply issue for
 int and uint types
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Stefano Stabellini [mailto:stefano.stabellini@eu.citrix.com]
> Sent: Friday, August 24, 2012 7:25 PM
> To: Xu, Dongxiao
> Cc: Stefano Stabellini; Keir (Xen.org); Jan Beulich; Ian Jackson;
> xen-devel@lists.xen.org
> Subject: RE: [Xen-devel] [PATCH v2] QEMU/helper2.c: Fix multiply issue for int
> and uint types
> 
> 
> I can only speak for qemu-upstream-unstable, but I'll certainly backport the
> int/uint fix as soon as it gets accepted in QEMU upstream.
> 
> Regarding the other patches, if any of them are for qemu-upstream-unstable, I
> am going to backport them only if they are bugfixes.

It seems the int/uint patch has already been merged by Anthony L; see commit b100fcfe4966aa41d4d6908d0c4c510bcf8f82dd.

Sorry, I have been away from the Xen list for a long time and am not familiar with the current rules.
For QEMU bug fixes, is the convention that the patch developer fixes QEMU upstream and Stefano then backports it, or does the patch developer need to fix both trees?

Thanks,
Dongxiao

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 27 03:03:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Aug 2012 03:03:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T5pbp-0005qT-37; Mon, 27 Aug 2012 03:03:25 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <chao.zhou@intel.com>) id 1T5pbn-0005qO-2Z
	for xen-devel@lists.xen.org; Mon, 27 Aug 2012 03:03:23 +0000
Received: from [85.158.139.83:19925] by server-4.bemta-5.messagelabs.com id
	DD/E4-12386-A73EA305; Mon, 27 Aug 2012 03:03:22 +0000
X-Env-Sender: chao.zhou@intel.com
X-Msg-Ref: server-12.tower-182.messagelabs.com!1346036600!27887045!1
X-Originating-IP: [143.182.124.21]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMjEgPT4gMTg3MTIy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9435 invoked from network); 27 Aug 2012 03:03:21 -0000
Received: from mga03.intel.com (HELO mga03.intel.com) (143.182.124.21)
	by server-12.tower-182.messagelabs.com with SMTP;
	27 Aug 2012 03:03:21 -0000
Received: from azsmga001.ch.intel.com ([10.2.17.19])
	by azsmga101.ch.intel.com with ESMTP; 26 Aug 2012 20:03:20 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.80,316,1344236400"; d="scan'208";a="185184695"
Received: from fmsmsx103.amr.corp.intel.com ([10.19.9.34])
	by azsmga001.ch.intel.com with ESMTP; 26 Aug 2012 20:03:19 -0700
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	FMSMSX103.amr.corp.intel.com (10.19.9.34) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Sun, 26 Aug 2012 20:03:19 -0700
Received: from shsmsx102.ccr.corp.intel.com ([169.254.2.92]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.175]) with mapi id
	14.01.0355.002; Mon, 27 Aug 2012 11:03:17 +0800
From: "Zhou, Chao" <chao.zhou@intel.com>
To: Ian Campbell <Ian.Campbell@citrix.com>, Stefano Stabellini
	<Stefano.Stabellini@eu.citrix.com>
Thread-Topic: [Xen-devel] error when pass through device to guest	with
	qemu-xen-dir-remote
Thread-Index: AQHNhACBIcVJkHzSNEKAhiEtlfNv7A==
Date: Mon, 27 Aug 2012 03:03:16 +0000
Message-ID: <40352EBA8B4DF841A9907B883F22B59B0FDD25C7@SHSMSX102.ccr.corp.intel.com>
References: <A9667DDFB95DB7438FA9D7D576C3D87E203D54@SHSMSX101.ccr.corp.intel.com>
	<alpine.DEB.2.02.1208031122170.4645@kaball.uk.xensource.com>
	<1343990187.21372.48.camel@zakaz.uk.xensource.com>
	<40352EBA8B4DF841A9907B883F22B59B0FDBCE1E@SHSMSX102.ccr.corp.intel.com>
In-Reply-To: <40352EBA8B4DF841A9907B883F22B59B0FDBCE1E@SHSMSX102.ccr.corp.intel.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "Zhang, Yang Z" <yang.z.zhang@intel.com>,
	Anthony Perard <anthony.perard@citrix.com>, "Shan,
	Haitao" <haitao.shan@intel.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] error when pass through device to
	guest	with	qemu-xen-dir-remote
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Here's the output of 'xl -vvv create'
libxl: debug: libxl_create.c:1143:do_domain_create: ao 0xfc43f0: create: how=(nil) callback=(nil) poller=0xfc4c50
libxl: debug: libxl_device.c:229:libxl__device_disk_set_backend: Disk vdev=hda spec.backend=unknown
libxl: debug: libxl_device.c:175:disk_try_backend: Disk vdev=hda, backend phy unsuitable as phys path not a block device
libxl: debug: libxl_device.c:184:disk_try_backend: Disk vdev=hda, backend tap unsuitable because blktap not available
libxl: debug: libxl_device.c:265:libxl__device_disk_set_backend: Disk vdev=hda, using backend qdisk
libxl: debug: libxl_create.c:677:initiate_domain_create: running bootloader
libxl: debug: libxl_bootloader.c:321:libxl__bootloader_run: not a PV domain, skipping bootloader
libxl: debug: libxl_event.c:561:libxl__ev_xswatch_deregister: watch w=0xfc4fb0: deregister unregistered
xc: detail: elf_parse_binary: phdr: paddr=0x100000 memsz=0x9a6e4
xc: detail: elf_parse_binary: memory: 0x100000 -> 0x19a6e4
xc: info: VIRTUAL MEMORY ARRANGEMENT:
  Loader:        0000000000100000->000000000019a6e4
  TOTAL:         0000000000000000->000000007f800000
  ENTRY ADDRESS: 0000000000100000
xc: info: PHYSICAL MEMORY ALLOCATION:
  4KB PAGES: 0x0000000000000200
  2MB PAGES: 0x00000000000003fb
  1GB PAGES: 0x0000000000000000
xc: detail: elf_load_binary: phdr 0 at 0x0x7fbcdfd49000 -> 0x0x7fbcdfdda55c
libxl: debug: libxl_device.c:229:libxl__device_disk_set_backend: Disk vdev=hda spec.backend=qdisk
libxl: debug: libxl_dm.c:1142:libxl__spawn_local_dm: Spawning device-model /usr/local/qemu-test/bin/qemu-system-x86_64 with arguments:
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   /usr/local/qemu-test/bin/qemu-system-x86_64
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -xen-domid
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   2
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -chardev
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-2,server,nowait
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -mon
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   chardev=libxl-cmd,mode=control
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -name
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   example.hvm
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -vnc
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   127.0.0.1:0,to=99
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -serial
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   pty
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -vga
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   cirrus
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -boot
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   order=cda
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -smp
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   4,maxcpus=4
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -net
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   none
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -M
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   xenfv
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -m
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   2040
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -drive
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   file=/root/czhou/ia32e_rhel6u2.img,if=ide,index=0,media=disk,format=raw
libxl: debug: libxl_event.c:512:libxl__ev_xswatch_register: watch w=0xfc51e8 wpath=/local/domain/0/device-model/2/state token=3/0: register slotnum=3
libxl: debug: libxl_create.c:1156:do_domain_create: ao 0xfc43f0: inprogress: poller=0xfc4c50, flags=i
libxl: debug: libxl_event.c:457:watchfd_callback: watch w=0xfc51e8 wpath=/local/domain/0/device-model/2/state token=3/0: event epath=/local/domain/0/device-model/2/state
libxl: debug: libxl_event.c:457:watchfd_callback: watch w=0xfc51e8 wpath=/local/domain/0/device-model/2/state token=3/0: event epath=/local/domain/0/device-model/2/state
libxl: debug: libxl_event.c:549:libxl__ev_xswatch_deregister: watch w=0xfc51e8 wpath=/local/domain/0/device-model/2/state token=3/0: deregister slotnum=3
libxl: debug: libxl_event.c:561:libxl__ev_xswatch_deregister: watch w=0xfc51e8: deregister unregistered
libxl: debug: libxl_qmp.c:646:libxl__qmp_initialize: connected to /var/run/xen/qmp-libxl-2
libxl: debug: libxl_qmp.c:298:qmp_handle_response: message type: qmp
libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
    "execute": "qmp_capabilities",
    "id": 1
}
'
libxl: debug: libxl_qmp.c:298:qmp_handle_response: message type: return
libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
    "execute": "query-chardev",
    "id": 2
}
'
libxl: debug: libxl_qmp.c:298:qmp_handle_response: message type: return
libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
    "execute": "query-vnc",
    "id": 3
}
'
libxl: debug: libxl_qmp.c:298:qmp_handle_response: message type: return
libxl: debug: libxl_qmp.c:646:libxl__qmp_initialize: connected to /var/run/xen/qmp-libxl-2
libxl: debug: libxl_qmp.c:298:qmp_handle_response: message type: qmp
libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
    "execute": "qmp_capabilities",
    "id": 1
}
'
libxl: debug: libxl_qmp.c:298:qmp_handle_response: message type: return
libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
    "execute": "device_add",
    "id": 2,
    "arguments": {
        "driver": "xen-pci-passthrough",
        "id": "pci-pt-09_11.0",
        "hostaddr": "0000:09:11.0"
    }
}
'
libxl: debug: libxl_qmp.c:298:qmp_handle_response: message type: return
libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
    "execute": "query-pci",
    "id": 3
}
'
libxl: debug: libxl_qmp.c:298:qmp_handle_response: message type: return
libxl: debug: libxl_pci.c:85:libxl__create_pci_backend: Creating pci backend
libxl: debug: libxl_event.c:1667:libxl__ao_progress_report: ao 0xfc43f0: progress report: ignored
libxl: debug: libxl_event.c:1497:libxl__ao_complete: ao 0xfc43f0: complete, rc=0
libxl: debug: libxl_event.c:1469:libxl__ao__destroy: ao 0xfc43f0: destroy
xc: debug: hypercall buffer: total allocations:979 total releases:979
xc: debug: hypercall buffer: current allocations:0 maximum allocations:2
xc: debug: hypercall buffer: cache current size:2
xc: debug: hypercall buffer: cache hits:975 misses:2 toobig:2
Parsing config from xlexample.hvm
Daemon running with PID 13533

Thanks,
Zhou Chao

-----Original Message-----
From: xen-devel-bounces@lists.xen.org [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Zhou, Chao
Sent: Thursday, August 09, 2012 2:49 PM
To: Ian Campbell; Stefano Stabellini
Cc: Zhang, Yang Z; Anthony Perard; xen-devel
Subject: Re: [Xen-devel] error when pass through device to guest with qemu-xen-dir-remote

I rebuilt upstream QEMU according to the wiki, but static device assignment does not work: there is no lspci output in the guest. However, hotplug & unplug work fine.

-----Original Message-----
From: xen-devel-bounces@lists.xen.org [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Ian Campbell
Sent: Friday, August 03, 2012 6:36 PM
To: Stefano Stabellini
Cc: Zhang, Yang Z; Anthony Perard; xen-devel
Subject: Re: [Xen-devel] error when pass through device to guest with qemu-xen-dir-remote

On Fri, 2012-08-03 at 11:29 +0100, Stefano Stabellini wrote:
> On Fri, 3 Aug 2012, Zhang, Yang Z wrote:
> > When creating a guest with a device assigned, the following error is shown and the device does not work inside the guest:
> > libxl: error: libxl_qmp.c:288:qmp_handle_error_response: received an 
> > error message from QMP server: Parameter 'driver' expects a driver 
> > name
> > 
> > It only fails with qemu-xen-dir-remote (is this tree closer to upstream qemu?). I don't see the error with the traditional qemu.
> > I also tried qemu-upstream, but it fails when I try to enable PCI pass-through for Xen. I think Anthony's patch adding PCI pass-through support for Xen has been accepted into qemu-upstream, am I right?
> 
> Yes, it was accepted, but it is present only in upstream QEMU (from 
> git://git.qemu.org/qemu.git), not the tree we are currently using in 
> xen-unstable for development 
> (git://xenbits.xensource.com/qemu-upstream-unstable.git).
> Make sure you are using the right tree!

http://wiki.xen.org/wiki/QEMU_Upstream has some notes on how to use the upstream qemu tree instead of our stable branch of upstream.

> 
> Anthony is currently on vacation and is going to be back in about a 
> week.
> 
> > Another question:
> > Now I am trying to add some features (related to device pass-through) to QEMU; which tree should I use? Traditional qemu differs greatly from qemu-upstream and is too old to base patches on, but apart from the old tree I cannot find a working qemu.
> 
> You should use upstream QEMU, I am going to rebase our tree on that 
> early on in the 4.3 release cycle.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 27 03:03:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Aug 2012 03:03:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T5pbp-0005qT-37; Mon, 27 Aug 2012 03:03:25 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <chao.zhou@intel.com>) id 1T5pbn-0005qO-2Z
	for xen-devel@lists.xen.org; Mon, 27 Aug 2012 03:03:23 +0000
Received: from [85.158.139.83:19925] by server-4.bemta-5.messagelabs.com id
	DD/E4-12386-A73EA305; Mon, 27 Aug 2012 03:03:22 +0000
X-Env-Sender: chao.zhou@intel.com
X-Msg-Ref: server-12.tower-182.messagelabs.com!1346036600!27887045!1
X-Originating-IP: [143.182.124.21]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMjEgPT4gMTg3MTIy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9435 invoked from network); 27 Aug 2012 03:03:21 -0000
Received: from mga03.intel.com (HELO mga03.intel.com) (143.182.124.21)
	by server-12.tower-182.messagelabs.com with SMTP;
	27 Aug 2012 03:03:21 -0000
Received: from azsmga001.ch.intel.com ([10.2.17.19])
	by azsmga101.ch.intel.com with ESMTP; 26 Aug 2012 20:03:20 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.80,316,1344236400"; d="scan'208";a="185184695"
Received: from fmsmsx103.amr.corp.intel.com ([10.19.9.34])
	by azsmga001.ch.intel.com with ESMTP; 26 Aug 2012 20:03:19 -0700
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	FMSMSX103.amr.corp.intel.com (10.19.9.34) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Sun, 26 Aug 2012 20:03:19 -0700
Received: from shsmsx102.ccr.corp.intel.com ([169.254.2.92]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.175]) with mapi id
	14.01.0355.002; Mon, 27 Aug 2012 11:03:17 +0800
From: "Zhou, Chao" <chao.zhou@intel.com>
To: Ian Campbell <Ian.Campbell@citrix.com>, Stefano Stabellini
	<Stefano.Stabellini@eu.citrix.com>
Thread-Topic: [Xen-devel] error when pass through device to guest	with
	qemu-xen-dir-remote
Thread-Index: AQHNhACBIcVJkHzSNEKAhiEtlfNv7A==
Date: Mon, 27 Aug 2012 03:03:16 +0000
Message-ID: <40352EBA8B4DF841A9907B883F22B59B0FDD25C7@SHSMSX102.ccr.corp.intel.com>
References: <A9667DDFB95DB7438FA9D7D576C3D87E203D54@SHSMSX101.ccr.corp.intel.com>
	<alpine.DEB.2.02.1208031122170.4645@kaball.uk.xensource.com>
	<1343990187.21372.48.camel@zakaz.uk.xensource.com>
	<40352EBA8B4DF841A9907B883F22B59B0FDBCE1E@SHSMSX102.ccr.corp.intel.com>
In-Reply-To: <40352EBA8B4DF841A9907B883F22B59B0FDBCE1E@SHSMSX102.ccr.corp.intel.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "Zhang, Yang Z" <yang.z.zhang@intel.com>,
	Anthony Perard <anthony.perard@citrix.com>, "Shan,
	Haitao" <haitao.shan@intel.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] error when pass through device to
	guest	with	qemu-xen-dir-remote
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Here's the output of `xl -vvv create`:
libxl: debug: libxl_create.c:1143:do_domain_create: ao 0xfc43f0: create: how=(nil) callback=(nil) poller=0xfc4c50
libxl: debug: libxl_device.c:229:libxl__device_disk_set_backend: Disk vdev=hda spec.backend=unknown
libxl: debug: libxl_device.c:175:disk_try_backend: Disk vdev=hda, backend phy unsuitable as phys path not a block device
libxl: debug: libxl_device.c:184:disk_try_backend: Disk vdev=hda, backend tap unsuitable because blktap not available
libxl: debug: libxl_device.c:265:libxl__device_disk_set_backend: Disk vdev=hda, using backend qdisk
libxl: debug: libxl_create.c:677:initiate_domain_create: running bootloader
libxl: debug: libxl_bootloader.c:321:libxl__bootloader_run: not a PV domain, skipping bootloader
libxl: debug: libxl_event.c:561:libxl__ev_xswatch_deregister: watch w=0xfc4fb0: deregister unregistered
xc: detail: elf_parse_binary: phdr: paddr=0x100000 memsz=0x9a6e4
xc: detail: elf_parse_binary: memory: 0x100000 -> 0x19a6e4
xc: info: VIRTUAL MEMORY ARRANGEMENT:
  Loader:        0000000000100000->000000000019a6e4
  TOTAL:         0000000000000000->000000007f800000
  ENTRY ADDRESS: 0000000000100000
xc: info: PHYSICAL MEMORY ALLOCATION:
  4KB PAGES: 0x0000000000000200
  2MB PAGES: 0x00000000000003fb
  1GB PAGES: 0x0000000000000000
xc: detail: elf_load_binary: phdr 0 at 0x0x7fbcdfd49000 -> 0x0x7fbcdfdda55c
libxl: debug: libxl_device.c:229:libxl__device_disk_set_backend: Disk vdev=hda spec.backend=qdisk
libxl: debug: libxl_dm.c:1142:libxl__spawn_local_dm: Spawning device-model /usr/local/qemu-test/bin/qemu-system-x86_64 with arguments:
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   /usr/local/qemu-test/bin/qemu-system-x86_64
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -xen-domid
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   2
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -chardev
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-2,server,nowait
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -mon
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   chardev=libxl-cmd,mode=control
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -name
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   example.hvm
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -vnc
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   127.0.0.1:0,to=99
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -serial
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   pty
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -vga
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   cirrus
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -boot
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   order=cda
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -smp
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   4,maxcpus=4
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -net
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   none
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -M
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   xenfv
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -m
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   2040
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   -drive
libxl: debug: libxl_dm.c:1144:libxl__spawn_local_dm:   file=/root/czhou/ia32e_rhel6u2.img,if=ide,index=0,media=disk,format=raw
libxl: debug: libxl_event.c:512:libxl__ev_xswatch_register: watch w=0xfc51e8 wpath=/local/domain/0/device-model/2/state token=3/0: register slotnum=3
libxl: debug: libxl_create.c:1156:do_domain_create: ao 0xfc43f0: inprogress: poller=0xfc4c50, flags=i
libxl: debug: libxl_event.c:457:watchfd_callback: watch w=0xfc51e8 wpath=/local/domain/0/device-model/2/state token=3/0: event epath=/local/domain/0/device-model/2/state
libxl: debug: libxl_event.c:457:watchfd_callback: watch w=0xfc51e8 wpath=/local/domain/0/device-model/2/state token=3/0: event epath=/local/domain/0/device-model/2/state
libxl: debug: libxl_event.c:549:libxl__ev_xswatch_deregister: watch w=0xfc51e8 wpath=/local/domain/0/device-model/2/state token=3/0: deregister slotnum=3
libxl: debug: libxl_event.c:561:libxl__ev_xswatch_deregister: watch w=0xfc51e8: deregister unregistered
libxl: debug: libxl_qmp.c:646:libxl__qmp_initialize: connected to /var/run/xen/qmp-libxl-2
libxl: debug: libxl_qmp.c:298:qmp_handle_response: message type: qmp
libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
    "execute": "qmp_capabilities",
    "id": 1
}
'
libxl: debug: libxl_qmp.c:298:qmp_handle_response: message type: return
libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
    "execute": "query-chardev",
    "id": 2
}
'
libxl: debug: libxl_qmp.c:298:qmp_handle_response: message type: return
libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
    "execute": "query-vnc",
    "id": 3
}
'
libxl: debug: libxl_qmp.c:298:qmp_handle_response: message type: return
libxl: debug: libxl_qmp.c:646:libxl__qmp_initialize: connected to /var/run/xen/qmp-libxl-2
libxl: debug: libxl_qmp.c:298:qmp_handle_response: message type: qmp
libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
    "execute": "qmp_capabilities",
    "id": 1
}
'
libxl: debug: libxl_qmp.c:298:qmp_handle_response: message type: return
libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
    "execute": "device_add",
    "id": 2,
    "arguments": {
        "driver": "xen-pci-passthrough",
        "id": "pci-pt-09_11.0",
        "hostaddr": "0000:09:11.0"
    }
}
'
libxl: debug: libxl_qmp.c:298:qmp_handle_response: message type: return
libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
    "execute": "query-pci",
    "id": 3
}
'
libxl: debug: libxl_qmp.c:298:qmp_handle_response: message type: return
libxl: debug: libxl_pci.c:85:libxl__create_pci_backend: Creating pci backend
libxl: debug: libxl_event.c:1667:libxl__ao_progress_report: ao 0xfc43f0: progress report: ignored
libxl: debug: libxl_event.c:1497:libxl__ao_complete: ao 0xfc43f0: complete, rc=0
libxl: debug: libxl_event.c:1469:libxl__ao__destroy: ao 0xfc43f0: destroy
xc: debug: hypercall buffer: total allocations:979 total releases:979
xc: debug: hypercall buffer: current allocations:0 maximum allocations:2
xc: debug: hypercall buffer: cache current size:2
xc: debug: hypercall buffer: cache hits:975 misses:2 toobig:2
Parsing config from xlexample.hvm
Daemon running with PID 13533
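For reference, the log above shows libxl driving QEMU over its QMP control socket (`/var/run/xen/qmp-libxl-2`): first `qmp_capabilities`, then `device_add` with the `xen-pci-passthrough` driver. A minimal Python sketch of how such messages are shaped (the `make_qmp_cmd` helper name is mine, not a libxl function; the host address is the one from this log):

```python
import json

def make_qmp_cmd(execute, cmd_id, arguments=None):
    """Build a QMP command dict in the shape seen in the libxl log above."""
    cmd = {"execute": execute, "id": cmd_id}
    if arguments is not None:
        cmd["arguments"] = arguments
    return cmd

# Capabilities negotiation must precede any other QMP command.
caps = make_qmp_cmd("qmp_capabilities", 1)

# The passthrough request from the log: attach host device 0000:09:11.0.
passthrough = make_qmp_cmd("device_add", 2, {
    "driver": "xen-pci-passthrough",
    "id": "pci-pt-09_11.0",
    "hostaddr": "0000:09:11.0",
})

print(json.dumps(passthrough, indent=4))
```

Each dict would be serialized as one JSON object and written to the QMP socket; the error being discussed in this thread ("Parameter 'driver' expects a driver name") is QEMU rejecting the `driver` value because the build lacked the `xen-pci-passthrough` device.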

Thanks,
Zhou Chao

-----Original Message-----
From: xen-devel-bounces@lists.xen.org [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Zhou, Chao
Sent: Thursday, August 09, 2012 2:49 PM
To: Ian Campbell; Stefano Stabellini
Cc: Zhang, Yang Z; Anthony Perard; xen-devel
Subject: Re: [Xen-devel] error when pass through device to guest with qemu-xen-dir-remote

I rebuilt the upstream QEMU according to the wiki, but static device assignment still doesn't work: there is no lspci output in the guest. Hotplug and unplug, however, work fine.

-----Original Message-----
From: xen-devel-bounces@lists.xen.org [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Ian Campbell
Sent: Friday, August 03, 2012 6:36 PM
To: Stefano Stabellini
Cc: Zhang, Yang Z; Anthony Perard; xen-devel
Subject: Re: [Xen-devel] error when pass through device to guest with qemu-xen-dir-remote

On Fri, 2012-08-03 at 11:29 +0100, Stefano Stabellini wrote:
> On Fri, 3 Aug 2012, Zhang, Yang Z wrote:
> > When creating a guest with a device assigned, the following error is shown and the device does not work inside the guest:
> > libxl: error: libxl_qmp.c:288:qmp_handle_error_response: received an 
> > error message from QMP server: Parameter 'driver' expects a driver 
> > name
> > 
> > It only fails with qemu-xen-dir-remote (is this tree closer to upstream qemu?). I don't see the error with the traditional qemu.
> > I also tried qemu-upstream, but it fails when I try to enable PCI pass-through for Xen. I think Anthony's patch adding PCI pass-through support for Xen has been accepted into upstream qemu, am I right?
> 
> Yes, it was accepted, but it is present only in upstream QEMU (from 
> git://git.qemu.org/qemu.git), not the tree we are currently using in 
> xen-unstable for development 
> (git://xenbits.xensource.com/qemu-upstream-unstable.git).
> Make sure you are using the right tree!

http://wiki.xen.org/wiki/QEMU_Upstream has some notes on how to use the upstream qemu tree instead of our stable branch of upstream.

> 
> Anthony is currently on vacation and is going to be back in about a 
> week.
> 
> > Another question:
> > Now I am trying to add some features (relevant to device pass-through) to QEMU; which tree should I use? Since traditional qemu differs greatly from qemu-upstream, it is too old to develop patches against. But besides the old one, I cannot find a working qemu.
> 
> You should use upstream QEMU, I am going to rebase our tree on that 
> early on in the 4.3 release cycle.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Mon Aug 27 05:25:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Aug 2012 05:25:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T5rou-0006YO-5L; Mon, 27 Aug 2012 05:25:04 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T5ros-0006YJ-Mn
	for xen-devel@lists.xensource.com; Mon, 27 Aug 2012 05:25:03 +0000
Received: from [85.158.139.83:57528] by server-11.bemta-5.messagelabs.com id
	28/58-29296-DA40B305; Mon, 27 Aug 2012 05:25:01 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-14.tower-182.messagelabs.com!1346045098!23692371!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA2NTk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22261 invoked from network); 27 Aug 2012 05:24:59 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-14.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Aug 2012 05:24:59 -0000
X-IronPort-AV: E=Sophos;i="4.80,318,1344211200"; d="scan'208";a="14203230"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	27 Aug 2012 05:24:58 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Mon, 27 Aug 2012 06:24:57 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1T5ron-0002oJ-KW;
	Mon, 27 Aug 2012 05:24:57 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1T5ron-0006hz-64;
	Mon, 27 Aug 2012 06:24:57 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13631-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 27 Aug 2012 06:24:57 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13631: tolerable FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13631 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13631/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13630
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13630
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13630
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13630

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  1126b3079bef
baseline version:
 xen                  1126b3079bef

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 27 07:20:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Aug 2012 07:20:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T5tbd-0007VB-SY; Mon, 27 Aug 2012 07:19:29 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <aware.why@gmail.com>) id 1T5tbc-0007V6-0k
	for xen-devel@lists.xensource.com; Mon, 27 Aug 2012 07:19:28 +0000
Received: from [85.158.143.35:65371] by server-2.bemta-4.messagelabs.com id
	9B/5E-21239-F7F1B305; Mon, 27 Aug 2012 07:19:27 +0000
X-Env-Sender: aware.why@gmail.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1346051966!16323459!1
X-Originating-IP: [209.85.215.171]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_20_30,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18508 invoked from network); 27 Aug 2012 07:19:26 -0000
Received: from mail-ey0-f171.google.com (HELO mail-ey0-f171.google.com)
	(209.85.215.171)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Aug 2012 07:19:26 -0000
Received: by eaah11 with SMTP id h11so1099744eaa.30
	for <xen-devel@lists.xensource.com>;
	Mon, 27 Aug 2012 00:19:26 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=/rYJFpqVhTYWF4z3RPswtVaWJuxx0fC4IVn5zE7Qx7g=;
	b=bQDqm/uMVU0M6g0VuqTuBSCu8bPRS9gtJiCfIKrPdpx6eAQbaxNHW5SF/nTuNrAK8z
	BX1HjKTxdFpeH2ZZnTwZ7usIJQepsM4x2yRqFvzF6Mqux6lL4nUrWg0BJQJesHIPzr3T
	hFxFgXrwq6wEY5xAl3l/qu9Rt7w8L3uEEPOXdqjTVI1Ipz+0h5eUp0na67yLCX5wOhrj
	Ot9OEkX5eiYOhV9GUrUJKnrey0TMVrOmF0Jibt3FbBZinn/HbMrHR/pdBwo1zcuTz6Ms
	yJXBNhu9DCUIlqM3TsUCylSGUJul7S0dO3eebqyMb4adRFT2Ofn4/dji2VCgvnCzaqkI
	gPuw==
MIME-Version: 1.0
Received: by 10.14.172.193 with SMTP id t41mr16454724eel.25.1346051966115;
	Mon, 27 Aug 2012 00:19:26 -0700 (PDT)
Received: by 10.14.100.71 with HTTP; Mon, 27 Aug 2012 00:19:26 -0700 (PDT)
Date: Mon, 27 Aug 2012 15:19:26 +0800
Message-ID: <CA+ePHTDwcLE0xpked8wiXBi9j9c_rST0gTcN8eqAvOKui6T1AA@mail.gmail.com>
From: =?UTF-8?B?6ams56OK?= <aware.why@gmail.com>
To: xen-devel@lists.xensource.com
Subject: [Xen-devel] =?utf-8?b?44CQcHJvYmxlbeOAkVdoeSBjYW4ndCBtb2RpZnkg?=
	=?utf-8?q?mac_addr_in_the_saved_state_file_to_restore_a_vm_with_th?=
	=?utf-8?q?e_specific_mac_addr=3F?=
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2768854772225703942=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2768854772225703942==
Content-Type: multipart/alternative; boundary=047d7b603ef24a1d9804c83a2524

--047d7b603ef24a1d9804c83a2524
Content-Type: text/plain; charset=ISO-8859-1

Hi all,
    `xl save domid saved-state-file` generates a saved state file which
consists of three parts: the vm configuration, the memory dump, and the qemu
state info.

    So I tried to modify the `vifspec` in the vm configuration for a different
mac addr, and to modify the 6-byte mac in the rtl8139 section of the qemu
state info.

    But when restoring from the modified saved state file, even after
restarting the network within the vm, I couldn't get the specific mac addr;
it's still the original mac.

   Why does this happen?  It's odd.

Thanks
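[Editor's note: the config-side half of the edit described above, rewriting the `mac=` field inside an xl `vif` spec string, can be sketched as below. The `set_vif_mac` helper name is mine, for illustration only; it does not touch the qemu device state section, which carries its own copy of the MAC in the emulated NIC's saved registers.]

```python
import re

def set_vif_mac(vifspec, new_mac):
    """Replace (or append) the mac= field in an xl vif spec string,
    e.g. "mac=00:16:3e:12:34:56,bridge=xenbr0"."""
    if re.search(r"mac=[0-9a-fA-F:]+", vifspec):
        return re.sub(r"mac=[0-9a-fA-F:]+", "mac=" + new_mac, vifspec)
    # No mac= field yet: append one.
    return vifspec + ",mac=" + new_mac if vifspec else "mac=" + new_mac

spec = "mac=00:16:3e:12:34:56,bridge=xenbr0"
print(set_vif_mac(spec, "00:16:3e:aa:bb:cc"))
# -> mac=00:16:3e:aa:bb:cc,bridge=xenbr0
```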

--047d7b603ef24a1d9804c83a2524--


--===============2768854772225703942==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2768854772225703942==--


	=?utf-8?q?mac_addr_in_the_saved_state_file_to_restore_a_vm_with_th?=
	=?utf-8?q?e_specific_mac_addr=3F?=
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2768854772225703942=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2768854772225703942==
Content-Type: multipart/alternative; boundary=047d7b603ef24a1d9804c83a2524

--047d7b603ef24a1d9804c83a2524
Content-Type: text/plain; charset=ISO-8859-1

Hi all,
    `xl save domid saved-state-file` will generate a saved-state file that
consists of three parts: the VM configuration, a memory dump, and the qemu
state info.

    So I tried to modify the `vifspec` in the VM configuration to use a
different MAC address, and to modify the 6-byte MAC in the rtl8139 section
of the qemu state info.

    But when restoring from the modified saved-state file, even though I
restarted the network inside the VM, I didn't get the specified MAC
address; it was still the original MAC.

   Why does this happen?  It's odd.

Thanks

--047d7b603ef24a1d9804c83a2524--


--===============2768854772225703942==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2768854772225703942==--


From xen-devel-bounces@lists.xen.org Mon Aug 27 10:06:26 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Aug 2012 10:06:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T5wCb-00012a-J8; Mon, 27 Aug 2012 10:05:49 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <aware.why@gmail.com>) id 1T5wCa-00012S-5G
	for xen-devel@lists.xensource.com; Mon, 27 Aug 2012 10:05:48 +0000
Received: from [85.158.139.83:27362] by server-5.bemta-5.messagelabs.com id
	16/B0-31019-A764B305; Mon, 27 Aug 2012 10:05:46 +0000
X-Env-Sender: aware.why@gmail.com
X-Msg-Ref: server-9.tower-182.messagelabs.com!1346061945!27326213!1
X-Originating-IP: [209.85.215.171]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20669 invoked from network); 27 Aug 2012 10:05:45 -0000
Received: from mail-ey0-f171.google.com (HELO mail-ey0-f171.google.com)
	(209.85.215.171)
	by server-9.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Aug 2012 10:05:45 -0000
Received: by eaah11 with SMTP id h11so1155914eaa.30
	for <xen-devel@lists.xensource.com>;
	Mon, 27 Aug 2012 03:05:45 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:content-type; bh=hCdl9M6iW7O/8qzwQTgG8RmeeejnZSwRcyxcoof1nrA=;
	b=PaaaO93/XzBecf76Ky3fWyR+lzUYTKk+S7ViTRZxmZBWcCZnjYXSmKf2N2ueJKOgqu
	yewIvfQJKmjlQrY26mspfT+vxaA9CQCYqQYJawnimGCLRHOJqPvPrb/JpcDsbBO+mBrE
	q2FLTKE5b0S99830GKAQndLpPgtzrYWlYIvKQOfTtR6fsVsv1TOvxNu2f6dDyDA+zQNT
	sGaiX+H92n1Xn49D3YmU01RPfpITlblKLTEVrOw4tXf75gshoXoathD1Yg+sno805jAP
	1Ck+HaAYTh8PrU+yF2Fk+3vuLxKKLZ+R3Vd65k3M27k4xK6vqoZUrUFA3DyuVWS+1A7Q
	DHew==
MIME-Version: 1.0
Received: by 10.14.225.200 with SMTP id z48mr16768745eep.39.1346061945305;
	Mon, 27 Aug 2012 03:05:45 -0700 (PDT)
Received: by 10.14.100.71 with HTTP; Mon, 27 Aug 2012 03:05:45 -0700 (PDT)
In-Reply-To: <CA+ePHTDwcLE0xpked8wiXBi9j9c_rST0gTcN8eqAvOKui6T1AA@mail.gmail.com>
References: <CA+ePHTDwcLE0xpked8wiXBi9j9c_rST0gTcN8eqAvOKui6T1AA@mail.gmail.com>
Date: Mon, 27 Aug 2012 18:05:45 +0800
Message-ID: <CA+ePHTDjMe+ZfS7yrw9dXEunO2dr6TxEiyz3_95zq1k3DowGfQ@mail.gmail.com>
From: =?UTF-8?B?6ams56OK?= <aware.why@gmail.com>
To: xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] =?utf-8?b?44CQcHJvYmxlbeOAkVdoeSBjYW4ndCBtb2RpZnkg?=
	=?utf-8?q?mac_addr_in_the_saved_state_file_to_restore_a_vm_with_th?=
	=?utf-8?q?e_specific_mac_addr=3F?=
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2159602775626215652=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2159602775626215652==
Content-Type: multipart/alternative; boundary=047d7b6707171878ea04c83c78b8

--047d7b6707171878ea04c83c78b8
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

On Mon, Aug 27, 2012 at 3:19 PM, 马磊 <aware.why@gmail.com> wrote:

> Hi all,
>     `xl save domid saved-state-file` will generate a saved-state file
> that consists of three parts: the VM configuration, a memory dump, and
> the qemu state info.
>
>     So I tried to modify the `vifspec` in the VM configuration to use a
> different MAC address, and to modify the 6-byte MAC in the rtl8139
> section of the qemu state info.
>
>     But when restoring from the modified saved-state file, even though I
> restarted the network inside the VM, I didn't get the specified MAC
> address; it was still the original MAC.
>
>    Why does this happen?  It's odd.
>
> Thanks
>

With `xm restore`, the same method achieves the expected goal. Why doesn't
it work with `xl`?

--047d7b6707171878ea04c83c78b8--


--===============2159602775626215652==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2159602775626215652==--


From xen-devel-bounces@lists.xen.org Mon Aug 27 11:13:39 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Aug 2012 11:13:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T5xFh-0001k5-9K; Mon, 27 Aug 2012 11:13:05 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jinsong.liu@intel.com>) id 1T5xFf-0001jq-1o
	for xen-devel@lists.xensource.com; Mon, 27 Aug 2012 11:13:03 +0000
Received: from [85.158.138.51:47643] by server-3.bemta-3.messagelabs.com id
	B9/2B-13809-D365B305; Mon, 27 Aug 2012 11:13:01 +0000
X-Env-Sender: jinsong.liu@intel.com
X-Msg-Ref: server-14.tower-174.messagelabs.com!1346065977!21601427!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzM1MzUw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12641 invoked from network); 27 Aug 2012 11:12:58 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-14.tower-174.messagelabs.com with SMTP;
	27 Aug 2012 11:12:58 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga102.jf.intel.com with ESMTP; 27 Aug 2012 04:12:56 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.80,320,1344236400"; d="scan'208";a="191766576"
Received: from fmsmsx105.amr.corp.intel.com ([10.19.9.36])
	by orsmga002.jf.intel.com with ESMTP; 27 Aug 2012 04:12:53 -0700
Received: from fmsmsx151.amr.corp.intel.com (10.19.17.220) by
	FMSMSX105.amr.corp.intel.com (10.19.9.36) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Mon, 27 Aug 2012 04:12:53 -0700
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	FMSMSX151.amr.corp.intel.com (10.19.17.220) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Mon, 27 Aug 2012 04:12:52 -0700
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.239]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.175]) with mapi id
	14.01.0355.002; Mon, 27 Aug 2012 19:12:51 +0800
From: "Liu, Jinsong" <jinsong.liu@intel.com>
To: Jan Beulich <JBeulich@suse.com>, Ian Campbell <Ian.Campbell@citrix.com>
Thread-Topic: [PATCH] X86/vMCE: guest broken page handling when migration
Thread-Index: Ac2EROVXZp6ZOLjuREqA3X3AgCYhPw==
Date: Mon, 27 Aug 2012 11:12:51 +0000
Message-ID: <DE8DF0795D48FD4CA783C40EC8292335317D90@SHSMSX101.ccr.corp.intel.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: yes
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
Content-Type: multipart/mixed;
	boundary="_002_DE8DF0795D48FD4CA783C40EC8292335317D90SHSMSX101ccrcorpi_"
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Keir Fraser <keir@xen.org>
Subject: [Xen-devel] [PATCH] X86/vMCE: guest broken page handling when
	migration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--_002_DE8DF0795D48FD4CA783C40EC8292335317D90SHSMSX101ccrcorpi_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable

X86/vMCE: guest broken page handling during migration

This patch handles guest broken pages during migration.

At the sender, a broken page is not mapped and its contents are not
copied to the target, since doing so could trigger a more serious error
(e.g. an SRAR error). Its pfn_type and pfn number are still transferred
to the target so that the target can take the appropriate action.

At the target, the p2m entry of the broken page is set to p2m_ram_broken,
so that if the guest accesses the broken page again, the guest is killed
as expected.

Signed-off-by: Liu, Jinsong <jinsong.liu@intel.com>

diff -r b17fb3cb92d2 tools/libxc/xc_domain.c
--- a/tools/libxc/xc_domain.c	Mon Aug 27 05:27:54 2012 +0800
+++ b/tools/libxc/xc_domain.c	Mon Aug 27 23:25:43 2012 +0800
@@ -314,6 +314,22 @@
     return ret ? -1 : 0;
 }
 
+/* set broken page p2m */
+int xc_set_broken_page_p2m(xc_interface *xch,
+                           uint32_t domid,
+                           unsigned long pfn)
+{
+    int ret;
+    DECLARE_DOMCTL;
+
+    domctl.cmd = XEN_DOMCTL_set_broken_page_p2m;
+    domctl.domain = (domid_t)domid;
+    domctl.u.set_broken_page_p2m.pfn = pfn;
+    ret = do_domctl(xch, &domctl);
+
+    return ret ? -1 : 0;
+}
+
 /* get info from hvm guest for save */
 int xc_domain_hvm_getcontext(xc_interface *xch,
                              uint32_t domid,
diff -r b17fb3cb92d2 tools/libxc/xc_domain_restore.c
--- a/tools/libxc/xc_domain_restore.c	Mon Aug 27 05:27:54 2012 +0800
+++ b/tools/libxc/xc_domain_restore.c	Mon Aug 27 23:25:43 2012 +0800
@@ -962,9 +962,15 @@
 
     countpages = count;
     for (i = oldcount; i < buf->nr_pages; ++i)
-        if ((buf->pfn_types[i] & XEN_DOMCTL_PFINFO_LTAB_MASK) == XEN_DOMCTL_PFINFO_XTAB
-            ||(buf->pfn_types[i] & XEN_DOMCTL_PFINFO_LTAB_MASK) == XEN_DOMCTL_PFINFO_XALLOC)
+    {
+        unsigned long pagetype;
+
+        pagetype = buf->pfn_types[i] & XEN_DOMCTL_PFINFO_LTAB_MASK;
+        if ( pagetype == XEN_DOMCTL_PFINFO_XTAB ||
+             pagetype == XEN_DOMCTL_PFINFO_BROKEN ||
+             pagetype == XEN_DOMCTL_PFINFO_XALLOC )
             --countpages;
+    }
 
     if (!countpages)
         return count;
@@ -1200,6 +1206,17 @@
             /* a bogus/unmapped/allocate-only page: skip it */
             continue;
 
+        if ( pagetype == XEN_DOMCTL_PFINFO_BROKEN )
+        {
+            if ( xc_set_broken_page_p2m(xch, dom, pfn) )
+            {
+                ERROR("Set p2m for broken page fail, "
+                      "dom=%d, pfn=%lx\n", dom, pfn);
+                goto err_mapped;
+            }
+            continue;
+        }
+
        if (pfn_err[i])
        {
            ERROR("unexpected PFN mapping failure pfn %lx map_mfn %lx p2m_mfn %lx",
diff -r b17fb3cb92d2 tools/libxc/xc_domain_save.c
--- a/tools/libxc/xc_domain_save.c	Mon Aug 27 05:27:54 2012 +0800
+++ b/tools/libxc/xc_domain_save.c	Mon Aug 27 23:25:43 2012 +0800
@@ -1285,6 +1285,13 @@
                 if ( !hvm )
                     gmfn = pfn_to_mfn(gmfn);
 
+                if ( pfn_type[j] == XEN_DOMCTL_PFINFO_BROKEN )
+                {
+                    pfn_type[j] |= pfn_batch[j];
+                    ++run;
+                    continue;
+                }
+
                 if ( pfn_err[j] )
                 {
                     if ( pfn_type[j] == XEN_DOMCTL_PFINFO_XTAB )
@@ -1379,8 +1386,12 @@
                     }
                 }
 
-                /* skip pages that aren't present or are alloc-only */
+                /*
+                 * skip pages that aren't present,
+                 * or are broken, or are alloc-only
+                 */
                 if ( pagetype == XEN_DOMCTL_PFINFO_XTAB
+                    || pagetype == XEN_DOMCTL_PFINFO_BROKEN
                     || pagetype == XEN_DOMCTL_PFINFO_XALLOC )
                     continue;
 
diff -r b17fb3cb92d2 tools/libxc/xenctrl.h
--- a/tools/libxc/xenctrl.h	Mon Aug 27 05:27:54 2012 +0800
+++ b/tools/libxc/xenctrl.h	Mon Aug 27 23:25:43 2012 +0800
@@ -588,6 +588,17 @@
                                int *vmce_while_migrate);
 
 /**
+ * This function sets the p2m type for a broken page
+ * @parm xch a handle to an open hypervisor interface
+ * @parm domid the domain id which the broken page belongs to
+ * @parm pfn the pfn number of the broken page
+ * @return 0 on success, -1 on failure
+ */
+int xc_set_broken_page_p2m(xc_interface *xch,
+                           uint32_t domid,
+                           unsigned long pfn);
+
+/**
  * This function returns information about the context of a hvm domain
  * @parm xch a handle to an open hypervisor interface
  * @parm domid the domain to get information from
diff -r b17fb3cb92d2 xen/arch/x86/domctl.c
--- a/xen/arch/x86/domctl.c	Mon Aug 27 05:27:54 2012 +0800
+++ b/xen/arch/x86/domctl.c	Mon Aug 27 23:25:43 2012 +0800
@@ -203,12 +203,18 @@
                 for ( j = 0; j < k; j++ )
                 {
                     unsigned long type = 0;
+                    p2m_type_t t;
 
-                    page = get_page_from_gfn(d, arr[j], NULL, P2M_ALLOC);
+                    page = get_page_from_gfn(d, arr[j], &t, P2M_ALLOC);
 
                     if ( unlikely(!page) ||
                          unlikely(is_xen_heap_page(page)) )
-                        type = XEN_DOMCTL_PFINFO_XTAB;
+                    {
+                        if ( p2m_is_broken(t) )
+                            type = XEN_DOMCTL_PFINFO_BROKEN;
+                        else
+                            type = XEN_DOMCTL_PFINFO_XTAB;
+                    }
                     else if ( xsm_getpageframeinfo(page) != 0 )
                         ;
                     else
@@ -231,6 +237,9 @@
 
                         if ( page->u.inuse.type_info & PGT_pinned )
                             type |= XEN_DOMCTL_PFINFO_LPINTAB;
+
+                        if ( page->count_info & PGC_broken )
+                            type = XEN_DOMCTL_PFINFO_BROKEN;
                     }
 
                     if ( page )
@@ -1552,6 +1561,28 @@
     }
     break;
 
+    case XEN_DOMCTL_set_broken_page_p2m:
+    {
+        struct domain *d;
+        p2m_type_t pt;
+        unsigned long pfn;
+
+        d = rcu_lock_domain_by_id(domctl->domain);
+        if ( d != NULL )
+        {
+            pfn = domctl->u.set_broken_page_p2m.pfn;
+
+            get_gfn_query(d, pfn, &pt);
+            p2m_change_type(d, pfn, pt, p2m_ram_broken);
+            put_gfn(d, pfn);
+
+            rcu_unlock_domain(d);
+        }
+        else
+            ret = -ESRCH;
+    }
+    break;
+
     default:
         ret = iommu_do_domctl(domctl, u_domctl);
         break;
diff -r b17fb3cb92d2 xen/include/public/domctl.h
--- a/xen/include/public/domctl.h	Mon Aug 27 05:27:54 2012 +0800
+++ b/xen/include/public/domctl.h	Mon Aug 27 23:25:43 2012 +0800
@@ -136,6 +136,7 @@
 #define XEN_DOMCTL_PFINFO_LPINTAB (0x1U<<31)
 #define XEN_DOMCTL_PFINFO_XTAB    (0xfU<<28) /* invalid page */
 #define XEN_DOMCTL_PFINFO_XALLOC  (0xeU<<28) /* allocate-only page */
+#define XEN_DOMCTL_PFINFO_BROKEN  (0xdU<<28) /* broken page */
 #define XEN_DOMCTL_PFINFO_PAGEDTAB (0x8U<<28)
 #define XEN_DOMCTL_PFINFO_LTAB_MASK (0xfU<<28)
 
@@ -856,6 +857,12 @@
 typedef struct xen_domctl_vmce_monitor xen_domctl_vmce_monitor_t;
 DEFINE_XEN_GUEST_HANDLE(xen_domctl_vmce_monitor_t);
 
+struct xen_domctl_set_broken_page_p2m {
+    uint64_t pfn;
+};
+typedef struct xen_domctl_set_broken_page_p2m xen_domctl_set_broken_page_p2m_t;
+DEFINE_XEN_GUEST_HANDLE(xen_domctl_set_broken_page_p2m_t);
+
 struct xen_domctl {
     uint32_t cmd;
 #define XEN_DOMCTL_createdomain                   1
@@ -923,6 +930,7 @@
 #define XEN_DOMCTL_set_virq_handler              66
 #define XEN_DOMCTL_vmce_monitor_start            67
 #define XEN_DOMCTL_vmce_monitor_end              68
+#define XEN_DOMCTL_set_broken_page_p2m           69
 #define XEN_DOMCTL_gdbsx_guestmemio            1000
 #define XEN_DOMCTL_gdbsx_pausevcpu             1001
 #define XEN_DOMCTL_gdbsx_unpausevcpu           1002
@@ -980,6 +988,7 @@
         struct xen_domctl_set_virq_handler  set_virq_handler;
         struct xen_domctl_vmce_monitor      vmce_monitor;
         struct xen_domctl_gdbsx_memio       gdbsx_guest_memio;
+        struct xen_domctl_set_broken_page_p2m set_broken_page_p2m;
         struct xen_domctl_gdbsx_pauseunp_vcpu gdbsx_pauseunp_vcpu;
         struct xen_domctl_gdbsx_domstatus   gdbsx_domstatus;
         uint8_t                             pad[128];

ZG9tYWluX3NhdmUuYwlNb24gQXVnIDI3IDIzOjI1OjQzIDIwMTIgKzA4MDANCkBAIC0xMjg1LDYg
KzEyODUsMTMgQEANCiAgICAgICAgICAgICAgICAgaWYgKCAhaHZtICkNCiAgICAgICAgICAgICAg
ICAgICAgIGdtZm4gPSBwZm5fdG9fbWZuKGdtZm4pOw0KIA0KKyAgICAgICAgICAgICAgICBpZiAo
IHBmbl90eXBlW2pdID09IFhFTl9ET01DVExfUEZJTkZPX0JST0tFTiApDQorICAgICAgICAgICAg
ICAgIHsNCisgICAgICAgICAgICAgICAgICAgIHBmbl90eXBlW2pdIHw9IHBmbl9iYXRjaFtqXTsN
CisgICAgICAgICAgICAgICAgICAgICsrcnVuOw0KKyAgICAgICAgICAgICAgICAgICAgY29udGlu
dWU7DQorICAgICAgICAgICAgICAgIH0NCisNCiAgICAgICAgICAgICAgICAgaWYgKCBwZm5fZXJy
W2pdICkNCiAgICAgICAgICAgICAgICAgew0KICAgICAgICAgICAgICAgICAgICAgaWYgKCBwZm5f
dHlwZVtqXSA9PSBYRU5fRE9NQ1RMX1BGSU5GT19YVEFCICkNCkBAIC0xMzc5LDggKzEzODYsMTIg
QEANCiAgICAgICAgICAgICAgICAgICAgIH0NCiAgICAgICAgICAgICAgICAgfQ0KIA0KLSAgICAg
ICAgICAgICAgICAvKiBza2lwIHBhZ2VzIHRoYXQgYXJlbid0IHByZXNlbnQgb3IgYXJlIGFsbG9j
LW9ubHkgKi8NCisgICAgICAgICAgICAgICAgLyogDQorICAgICAgICAgICAgICAgICAqIHNraXAg
cGFnZXMgdGhhdCBhcmVuJ3QgcHJlc2VudCwNCisgICAgICAgICAgICAgICAgICogb3IgYXJlIGJy
b2tlbiwgb3IgYXJlIGFsbG9jLW9ubHkNCisgICAgICAgICAgICAgICAgICovDQogICAgICAgICAg
ICAgICAgIGlmICggcGFnZXR5cGUgPT0gWEVOX0RPTUNUTF9QRklORk9fWFRBQg0KKyAgICAgICAg
ICAgICAgICAgICAgfHwgcGFnZXR5cGUgPT0gWEVOX0RPTUNUTF9QRklORk9fQlJPS0VODQogICAg
ICAgICAgICAgICAgICAgICB8fCBwYWdldHlwZSA9PSBYRU5fRE9NQ1RMX1BGSU5GT19YQUxMT0Mg
KQ0KICAgICAgICAgICAgICAgICAgICAgY29udGludWU7DQogDQpkaWZmIC1yIGIxN2ZiM2NiOTJk
MiB0b29scy9saWJ4Yy94ZW5jdHJsLmgNCi0tLSBhL3Rvb2xzL2xpYnhjL3hlbmN0cmwuaAlNb24g
QXVnIDI3IDA1OjI3OjU0IDIwMTIgKzA4MDANCisrKyBiL3Rvb2xzL2xpYnhjL3hlbmN0cmwuaAlN
b24gQXVnIDI3IDIzOjI1OjQzIDIwMTIgKzA4MDANCkBAIC01ODgsNiArNTg4LDE3IEBADQogICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgIGludCAqdm1jZV93aGlsZV9taWdyYXRlKTsNCiAN
CiAvKioNCisgKiBUaGlzIGZ1bmN0aW9uIHNldCBwMm0gZm9yIGJyb2tlbiBwYWdlDQorICogJnBh
cm0geGNoIGEgaGFuZGxlIHRvIGFuIG9wZW4gaHlwZXJ2aXNvciBpbnRlcmZhY2UNCisgKiBAcGFy
bSBkb21pZCB0aGUgZG9tYWluIGlkIHdoaWNoIGJyb2tlbiBwYWdlIGJlbG9uZyB0bw0KKyAqIEBw
YXJtIHBmbiB0aGUgcGZuIG51bWJlciBvZiB0aGUgYnJva2VuIHBhZ2UNCisgKiBAcmV0dXJuIDAg
b24gc3VjY2VzcywgLTEgb24gZmFpbHVyZQ0KKyAqLw0KK2ludCB4Y19zZXRfYnJva2VuX3BhZ2Vf
cDJtKHhjX2ludGVyZmFjZSAqeGNoLA0KKyAgICAgICAgICAgICAgICAgICAgICAgICAgIHVpbnQz
Ml90IGRvbWlkLA0KKyAgICAgICAgICAgICAgICAgICAgICAgICAgIHVuc2lnbmVkIGxvbmcgcGZu
KTsNCisNCisvKioNCiAgKiBUaGlzIGZ1bmN0aW9uIHJldHVybnMgaW5mb3JtYXRpb24gYWJvdXQg
dGhlIGNvbnRleHQgb2YgYSBodm0gZG9tYWluDQogICogQHBhcm0geGNoIGEgaGFuZGxlIHRvIGFu
IG9wZW4gaHlwZXJ2aXNvciBpbnRlcmZhY2UNCiAgKiBAcGFybSBkb21pZCB0aGUgZG9tYWluIHRv
IGdldCBpbmZvcm1hdGlvbiBmcm9tDQpkaWZmIC1yIGIxN2ZiM2NiOTJkMiB4ZW4vYXJjaC94ODYv
ZG9tY3RsLmMNCi0tLSBhL3hlbi9hcmNoL3g4Ni9kb21jdGwuYwlNb24gQXVnIDI3IDA1OjI3OjU0
IDIwMTIgKzA4MDANCisrKyBiL3hlbi9hcmNoL3g4Ni9kb21jdGwuYwlNb24gQXVnIDI3IDIzOjI1
OjQzIDIwMTIgKzA4MDANCkBAIC0yMDMsMTIgKzIwMywxOCBAQA0KICAgICAgICAgICAgICAgICBm
b3IgKCBqID0gMDsgaiA8IGs7IGorKyApDQogICAgICAgICAgICAgICAgIHsNCiAgICAgICAgICAg
ICAgICAgICAgIHVuc2lnbmVkIGxvbmcgdHlwZSA9IDA7DQorICAgICAgICAgICAgICAgICAgICBw
Mm1fdHlwZV90IHQ7DQogDQotICAgICAgICAgICAgICAgICAgICBwYWdlID0gZ2V0X3BhZ2VfZnJv
bV9nZm4oZCwgYXJyW2pdLCBOVUxMLCBQMk1fQUxMT0MpOw0KKyAgICAgICAgICAgICAgICAgICAg
cGFnZSA9IGdldF9wYWdlX2Zyb21fZ2ZuKGQsIGFycltqXSwgJnQsIFAyTV9BTExPQyk7DQogDQog
ICAgICAgICAgICAgICAgICAgICBpZiAoIHVubGlrZWx5KCFwYWdlKSB8fA0KICAgICAgICAgICAg
ICAgICAgICAgICAgICB1bmxpa2VseShpc194ZW5faGVhcF9wYWdlKHBhZ2UpKSApDQotICAgICAg
ICAgICAgICAgICAgICAgICAgdHlwZSA9IFhFTl9ET01DVExfUEZJTkZPX1hUQUI7DQorICAgICAg
ICAgICAgICAgICAgICB7DQorICAgICAgICAgICAgICAgICAgICAgICAgaWYgKCBwMm1faXNfYnJv
a2VuKHQpICkNCisgICAgICAgICAgICAgICAgICAgICAgICAgICAgdHlwZSA9IFhFTl9ET01DVExf
UEZJTkZPX0JST0tFTjsNCisgICAgICAgICAgICAgICAgICAgICAgICBlbHNlDQorICAgICAgICAg
ICAgICAgICAgICAgICAgICAgIHR5cGUgPSBYRU5fRE9NQ1RMX1BGSU5GT19YVEFCOw0KKyAgICAg
ICAgICAgICAgICAgICAgfQ0KICAgICAgICAgICAgICAgICAgICAgZWxzZSBpZiAoIHhzbV9nZXRw
YWdlZnJhbWVpbmZvKHBhZ2UpICE9IDAgKQ0KICAgICAgICAgICAgICAgICAgICAgICAgIDsNCiAg
ICAgICAgICAgICAgICAgICAgIGVsc2UNCkBAIC0yMzEsNiArMjM3LDkgQEANCiANCiAgICAgICAg
ICAgICAgICAgICAgICAgICBpZiAoIHBhZ2UtPnUuaW51c2UudHlwZV9pbmZvICYgUEdUX3Bpbm5l
ZCApDQogICAgICAgICAgICAgICAgICAgICAgICAgICAgIHR5cGUgfD0gWEVOX0RPTUNUTF9QRklO
Rk9fTFBJTlRBQjsNCisNCisgICAgICAgICAgICAgICAgICAgICAgICBpZiAoIHBhZ2UtPmNvdW50
X2luZm8gJiBQR0NfYnJva2VuICkNCisgICAgICAgICAgICAgICAgICAgICAgICAgICAgdHlwZSA9
IFhFTl9ET01DVExfUEZJTkZPX0JST0tFTjsNCiAgICAgICAgICAgICAgICAgICAgIH0NCiANCiAg
ICAgICAgICAgICAgICAgICAgIGlmICggcGFnZSApDQpAQCAtMTU1Miw2ICsxNTYxLDI4IEBADQog
ICAgIH0NCiAgICAgYnJlYWs7DQogDQorICAgIGNhc2UgWEVOX0RPTUNUTF9zZXRfYnJva2VuX3Bh
Z2VfcDJtOg0KKyAgICB7DQorICAgICAgICBzdHJ1Y3QgZG9tYWluICpkOw0KKyAgICAgICAgcDJt
X3R5cGVfdCBwdDsNCisgICAgICAgIHVuc2lnbmVkIGxvbmcgcGZuOw0KKw0KKyAgICAgICAgZCA9
IHJjdV9sb2NrX2RvbWFpbl9ieV9pZChkb21jdGwtPmRvbWFpbik7DQorICAgICAgICBpZiAoIGQg
IT0gTlVMTCApDQorICAgICAgICB7DQorICAgICAgICAgICAgcGZuID0gZG9tY3RsLT51LnNldF9i
cm9rZW5fcGFnZV9wMm0ucGZuOw0KKw0KKyAgICAgICAgICAgIGdldF9nZm5fcXVlcnkoZCwgcGZu
LCAmcHQpOw0KKyAgICAgICAgICAgIHAybV9jaGFuZ2VfdHlwZShkLCBwZm4sIHB0LCBwMm1fcmFt
X2Jyb2tlbik7DQorICAgICAgICAgICAgcHV0X2dmbihkLCBwZm4pOw0KKw0KKyAgICAgICAgICAg
IHJjdV91bmxvY2tfZG9tYWluKGQpOw0KKyAgICAgICAgfQ0KKyAgICAgICAgZWxzZQ0KKyAgICAg
ICAgICAgIHJldCA9IC1FU1JDSDsNCisgICAgfQ0KKyAgICBicmVhazsNCisNCiAgICAgZGVmYXVs
dDoNCiAgICAgICAgIHJldCA9IGlvbW11X2RvX2RvbWN0bChkb21jdGwsIHVfZG9tY3RsKTsNCiAg
ICAgICAgIGJyZWFrOw0KZGlmZiAtciBiMTdmYjNjYjkyZDIgeGVuL2luY2x1ZGUvcHVibGljL2Rv
bWN0bC5oDQotLS0gYS94ZW4vaW5jbHVkZS9wdWJsaWMvZG9tY3RsLmgJTW9uIEF1ZyAyNyAwNToy
Nzo1NCAyMDEyICswODAwDQorKysgYi94ZW4vaW5jbHVkZS9wdWJsaWMvZG9tY3RsLmgJTW9uIEF1
ZyAyNyAyMzoyNTo0MyAyMDEyICswODAwDQpAQCAtMTM2LDYgKzEzNiw3IEBADQogI2RlZmluZSBY
RU5fRE9NQ1RMX1BGSU5GT19MUElOVEFCICgweDFVPDwzMSkNCiAjZGVmaW5lIFhFTl9ET01DVExf
UEZJTkZPX1hUQUIgICAgKDB4ZlU8PDI4KSAvKiBpbnZhbGlkIHBhZ2UgKi8NCiAjZGVmaW5lIFhF
Tl9ET01DVExfUEZJTkZPX1hBTExPQyAgKDB4ZVU8PDI4KSAvKiBhbGxvY2F0ZS1vbmx5IHBhZ2Ug
Ki8NCisjZGVmaW5lIFhFTl9ET01DVExfUEZJTkZPX0JST0tFTiAgKDB4ZFU8PDI4KSAvKiBicm9r
ZW4gcGFnZSAqLw0KICNkZWZpbmUgWEVOX0RPTUNUTF9QRklORk9fUEFHRURUQUIgKDB4OFU8PDI4
KQ0KICNkZWZpbmUgWEVOX0RPTUNUTF9QRklORk9fTFRBQl9NQVNLICgweGZVPDwyOCkNCiANCkBA
IC04NTYsNiArODU3LDEyIEBADQogdHlwZWRlZiBzdHJ1Y3QgeGVuX2RvbWN0bF92bWNlX21vbml0
b3IgeGVuX2RvbWN0bF92bWNlX21vbml0b3JfdDsNCiBERUZJTkVfWEVOX0dVRVNUX0hBTkRMRSh4
ZW5fZG9tY3RsX3ZtY2VfbW9uaXRvcl90KTsNCiANCitzdHJ1Y3QgeGVuX2RvbWN0bF9zZXRfYnJv
a2VuX3BhZ2VfcDJtIHsNCisgICAgdWludDY0X3QgcGZuOw0KK307DQordHlwZWRlZiBzdHJ1Y3Qg
eGVuX2RvbWN0bF9zZXRfYnJva2VuX3BhZ2VfcDJtIHhlbl9kb21jdGxfc2V0X2Jyb2tlbl9wYWdl
X3AybV90Ow0KK0RFRklORV9YRU5fR1VFU1RfSEFORExFKHhlbl9kb21jdGxfc2V0X2Jyb2tlbl9w
YWdlX3AybV90KTsNCisNCiBzdHJ1Y3QgeGVuX2RvbWN0bCB7DQogICAgIHVpbnQzMl90IGNtZDsN
CiAjZGVmaW5lIFhFTl9ET01DVExfY3JlYXRlZG9tYWluICAgICAgICAgICAgICAgICAgIDENCkBA
IC05MjMsNiArOTMwLDcgQEANCiAjZGVmaW5lIFhFTl9ET01DVExfc2V0X3ZpcnFfaGFuZGxlciAg
ICAgICAgICAgICAgNjYNCiAjZGVmaW5lIFhFTl9ET01DVExfdm1jZV9tb25pdG9yX3N0YXJ0ICAg
ICAgICAgICAgNjcNCiAjZGVmaW5lIFhFTl9ET01DVExfdm1jZV9tb25pdG9yX2VuZCAgICAgICAg
ICAgICAgNjgNCisjZGVmaW5lIFhFTl9ET01DVExfc2V0X2Jyb2tlbl9wYWdlX3AybSAgICAgICAg
ICAgNjkNCiAjZGVmaW5lIFhFTl9ET01DVExfZ2Ric3hfZ3Vlc3RtZW1pbyAgICAgICAgICAgIDEw
MDANCiAjZGVmaW5lIFhFTl9ET01DVExfZ2Ric3hfcGF1c2V2Y3B1ICAgICAgICAgICAgIDEwMDEN
CiAjZGVmaW5lIFhFTl9ET01DVExfZ2Ric3hfdW5wYXVzZXZjcHUgICAgICAgICAgIDEwMDINCkBA
IC05ODAsNiArOTg4LDcgQEANCiAgICAgICAgIHN0cnVjdCB4ZW5fZG9tY3RsX3NldF92aXJxX2hh
bmRsZXIgIHNldF92aXJxX2hhbmRsZXI7DQogICAgICAgICBzdHJ1Y3QgeGVuX2RvbWN0bF92bWNl
X21vbml0b3IgICAgICB2bWNlX21vbml0b3I7DQogICAgICAgICBzdHJ1Y3QgeGVuX2RvbWN0bF9n
ZGJzeF9tZW1pbyAgICAgICBnZGJzeF9ndWVzdF9tZW1pbzsNCisgICAgICAgIHN0cnVjdCB4ZW5f
ZG9tY3RsX3NldF9icm9rZW5fcGFnZV9wMm0gc2V0X2Jyb2tlbl9wYWdlX3AybTsNCiAgICAgICAg
IHN0cnVjdCB4ZW5fZG9tY3RsX2dkYnN4X3BhdXNldW5wX3ZjcHUgZ2Ric3hfcGF1c2V1bnBfdmNw
dTsNCiAgICAgICAgIHN0cnVjdCB4ZW5fZG9tY3RsX2dkYnN4X2RvbXN0YXR1cyAgIGdkYnN4X2Rv
bXN0YXR1czsNCiAgICAgICAgIHVpbnQ4X3QgICAgICAgICAgICAgICAgICAgICAgICAgICAgIHBh
ZFsxMjhdOw0K

--_002_DE8DF0795D48FD4CA783C40EC8292335317D90SHSMSX101ccrcorpi_
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--_002_DE8DF0795D48FD4CA783C40EC8292335317D90SHSMSX101ccrcorpi_--


From xen-devel-bounces@lists.xen.org Mon Aug 27 11:13:39 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Aug 2012 11:13:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T5xFh-0001k5-9K; Mon, 27 Aug 2012 11:13:05 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jinsong.liu@intel.com>) id 1T5xFf-0001jq-1o
	for xen-devel@lists.xensource.com; Mon, 27 Aug 2012 11:13:03 +0000
Received: from [85.158.138.51:47643] by server-3.bemta-3.messagelabs.com id
	B9/2B-13809-D365B305; Mon, 27 Aug 2012 11:13:01 +0000
X-Env-Sender: jinsong.liu@intel.com
X-Msg-Ref: server-14.tower-174.messagelabs.com!1346065977!21601427!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzM1MzUw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12641 invoked from network); 27 Aug 2012 11:12:58 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-14.tower-174.messagelabs.com with SMTP;
	27 Aug 2012 11:12:58 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga102.jf.intel.com with ESMTP; 27 Aug 2012 04:12:56 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.80,320,1344236400"; d="scan'208";a="191766576"
Received: from fmsmsx105.amr.corp.intel.com ([10.19.9.36])
	by orsmga002.jf.intel.com with ESMTP; 27 Aug 2012 04:12:53 -0700
Received: from fmsmsx151.amr.corp.intel.com (10.19.17.220) by
	FMSMSX105.amr.corp.intel.com (10.19.9.36) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Mon, 27 Aug 2012 04:12:53 -0700
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	FMSMSX151.amr.corp.intel.com (10.19.17.220) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Mon, 27 Aug 2012 04:12:52 -0700
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.239]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.175]) with mapi id
	14.01.0355.002; Mon, 27 Aug 2012 19:12:51 +0800
From: "Liu, Jinsong" <jinsong.liu@intel.com>
To: Jan Beulich <JBeulich@suse.com>, Ian Campbell <Ian.Campbell@citrix.com>
Thread-Topic: [PATCH] X86/vMCE: guest broken page handling when migration
Thread-Index: Ac2EROVXZp6ZOLjuREqA3X3AgCYhPw==
Date: Mon, 27 Aug 2012 11:12:51 +0000
Message-ID: <DE8DF0795D48FD4CA783C40EC8292335317D90@SHSMSX101.ccr.corp.intel.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: yes
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
Content-Type: multipart/mixed;
	boundary="_002_DE8DF0795D48FD4CA783C40EC8292335317D90SHSMSX101ccrcorpi_"
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Keir Fraser <keir@xen.org>
Subject: [Xen-devel] [PATCH] X86/vMCE: guest broken page handling when
	migration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--_002_DE8DF0795D48FD4CA783C40EC8292335317D90SHSMSX101ccrcorpi_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable

X86/vMCE: guest broken page handling during migration

This patch handles guest broken pages during migration.

At the sender, a broken page is not mapped, and its content is not
copied to the target, since touching it could trigger a more serious
error (e.g. an SRAR error). Its pfn_type and pfn number are still
transferred to the target so that the target can take appropriate
action.

At the target, the p2m entry of a broken page is set to
p2m_ram_broken, so that if the guest ever accesses the broken page
again, it is killed as expected.

Signed-off-by: Liu, Jinsong <jinsong.liu@intel.com>

diff -r b17fb3cb92d2 tools/libxc/xc_domain.c
--- a/tools/libxc/xc_domain.c	Mon Aug 27 05:27:54 2012 +0800
+++ b/tools/libxc/xc_domain.c	Mon Aug 27 23:25:43 2012 +0800
@@ -314,6 +314,22 @@
     return ret ? -1 : 0;
 }
 
+/* set broken page p2m */
+int xc_set_broken_page_p2m(xc_interface *xch,
+                           uint32_t domid,
+                           unsigned long pfn)
+{
+    int ret;
+    DECLARE_DOMCTL;
+
+    domctl.cmd = XEN_DOMCTL_set_broken_page_p2m;
+    domctl.domain = (domid_t)domid;
+    domctl.u.set_broken_page_p2m.pfn = pfn;
+    ret = do_domctl(xch, &domctl);
+
+    return ret ? -1 : 0;
+}
+
 /* get info from hvm guest for save */
 int xc_domain_hvm_getcontext(xc_interface *xch,
                              uint32_t domid,
diff -r b17fb3cb92d2 tools/libxc/xc_domain_restore.c
--- a/tools/libxc/xc_domain_restore.c	Mon Aug 27 05:27:54 2012 +0800
+++ b/tools/libxc/xc_domain_restore.c	Mon Aug 27 23:25:43 2012 +0800
@@ -962,9 +962,15 @@
 
     countpages = count;
     for (i = oldcount; i < buf->nr_pages; ++i)
-        if ((buf->pfn_types[i] & XEN_DOMCTL_PFINFO_LTAB_MASK) == XEN_DOMCTL_PFINFO_XTAB
-            ||(buf->pfn_types[i] & XEN_DOMCTL_PFINFO_LTAB_MASK) == XEN_DOMCTL_PFINFO_XALLOC)
+    {
+        unsigned long pagetype;
+
+        pagetype = buf->pfn_types[i] & XEN_DOMCTL_PFINFO_LTAB_MASK;
+        if ( pagetype == XEN_DOMCTL_PFINFO_XTAB ||
+             pagetype == XEN_DOMCTL_PFINFO_BROKEN ||
+             pagetype == XEN_DOMCTL_PFINFO_XALLOC )
             --countpages;
+    }
 
     if (!countpages)
         return count;
@@ -1200,6 +1206,17 @@
             /* a bogus/unmapped/allocate-only page: skip it */
             continue;
 
+        if ( pagetype == XEN_DOMCTL_PFINFO_BROKEN )
+        {
+            if ( xc_set_broken_page_p2m(xch, dom, pfn) )
+            {
+                ERROR("Failed to set p2m for broken page, "
+                      "dom=%d, pfn=%lx\n", dom, pfn);
+                goto err_mapped;
+            }
+            continue;
+        }
+
         if (pfn_err[i])
         {
             ERROR("unexpected PFN mapping failure pfn %lx map_mfn %lx p2m_mfn %lx",
diff -r b17fb3cb92d2 tools/libxc/xc_domain_save.c
--- a/tools/libxc/xc_domain_save.c	Mon Aug 27 05:27:54 2012 +0800
+++ b/tools/libxc/xc_domain_save.c	Mon Aug 27 23:25:43 2012 +0800
@@ -1285,6 +1285,13 @@
                 if ( !hvm )
                     gmfn = pfn_to_mfn(gmfn);
 
+                if ( pfn_type[j] == XEN_DOMCTL_PFINFO_BROKEN )
+                {
+                    pfn_type[j] |= pfn_batch[j];
+                    ++run;
+                    continue;
+                }
+
                 if ( pfn_err[j] )
                 {
                     if ( pfn_type[j] == XEN_DOMCTL_PFINFO_XTAB )
@@ -1379,8 +1386,12 @@
                     }
                 }
 
-                /* skip pages that aren't present or are alloc-only */
+                /*
+                 * skip pages that aren't present,
+                 * or are broken, or are alloc-only
+                 */
                 if ( pagetype == XEN_DOMCTL_PFINFO_XTAB
+                    || pagetype == XEN_DOMCTL_PFINFO_BROKEN
                     || pagetype == XEN_DOMCTL_PFINFO_XALLOC )
                     continue;
 
diff -r b17fb3cb92d2 tools/libxc/xenctrl.h
--- a/tools/libxc/xenctrl.h	Mon Aug 27 05:27:54 2012 +0800
+++ b/tools/libxc/xenctrl.h	Mon Aug 27 23:25:43 2012 +0800
@@ -588,6 +588,17 @@
                                int *vmce_while_migrate);
 
 /**
+ * This function sets the p2m type for a broken page
+ * @parm xch a handle to an open hypervisor interface
+ * @parm domid the domain id to which the broken page belongs
+ * @parm pfn the pfn number of the broken page
+ * @return 0 on success, -1 on failure
+ */
+int xc_set_broken_page_p2m(xc_interface *xch,
+                           uint32_t domid,
+                           unsigned long pfn);
+
+/**
  * This function returns information about the context of a hvm domain
  * @parm xch a handle to an open hypervisor interface
  * @parm domid the domain to get information from
diff -r b17fb3cb92d2 xen/arch/x86/domctl.c
--- a/xen/arch/x86/domctl.c	Mon Aug 27 05:27:54 2012 +0800
+++ b/xen/arch/x86/domctl.c	Mon Aug 27 23:25:43 2012 +0800
@@ -203,12 +203,18 @@
                 for ( j = 0; j < k; j++ )
                 {
                     unsigned long type = 0;
+                    p2m_type_t t;
 
-                    page = get_page_from_gfn(d, arr[j], NULL, P2M_ALLOC);
+                    page = get_page_from_gfn(d, arr[j], &t, P2M_ALLOC);
 
                     if ( unlikely(!page) ||
                          unlikely(is_xen_heap_page(page)) )
-                        type = XEN_DOMCTL_PFINFO_XTAB;
+                    {
+                        if ( p2m_is_broken(t) )
+                            type = XEN_DOMCTL_PFINFO_BROKEN;
+                        else
+                            type = XEN_DOMCTL_PFINFO_XTAB;
+                    }
                     else if ( xsm_getpageframeinfo(page) != 0 )
                         ;
                     else
@@ -231,6 +237,9 @@
 
                         if ( page->u.inuse.type_info & PGT_pinned )
                             type |= XEN_DOMCTL_PFINFO_LPINTAB;
+
+                        if ( page->count_info & PGC_broken )
+                            type = XEN_DOMCTL_PFINFO_BROKEN;
                     }
 
                     if ( page )
@@ -1552,6 +1561,28 @@
     }
     break;
 
+    case XEN_DOMCTL_set_broken_page_p2m:
+    {
+        struct domain *d;
+        p2m_type_t pt;
+        unsigned long pfn;
+
+        d = rcu_lock_domain_by_id(domctl->domain);
+        if ( d != NULL )
+        {
+            pfn = domctl->u.set_broken_page_p2m.pfn;
+
+            get_gfn_query(d, pfn, &pt);
+            p2m_change_type(d, pfn, pt, p2m_ram_broken);
+            put_gfn(d, pfn);
+
+            rcu_unlock_domain(d);
+        }
+        else
+            ret = -ESRCH;
+    }
+    break;
+
     default:
         ret = iommu_do_domctl(domctl, u_domctl);
         break;
diff -r b17fb3cb92d2 xen/include/public/domctl.h
--- a/xen/include/public/domctl.h	Mon Aug 27 05:27:54 2012 +0800
+++ b/xen/include/public/domctl.h	Mon Aug 27 23:25:43 2012 +0800
@@ -136,6 +136,7 @@
 #define XEN_DOMCTL_PFINFO_LPINTAB (0x1U<<31)
 #define XEN_DOMCTL_PFINFO_XTAB    (0xfU<<28) /* invalid page */
 #define XEN_DOMCTL_PFINFO_XALLOC  (0xeU<<28) /* allocate-only page */
+#define XEN_DOMCTL_PFINFO_BROKEN  (0xdU<<28) /* broken page */
 #define XEN_DOMCTL_PFINFO_PAGEDTAB (0x8U<<28)
 #define XEN_DOMCTL_PFINFO_LTAB_MASK (0xfU<<28)
 
@@ -856,6 +857,12 @@
 typedef struct xen_domctl_vmce_monitor xen_domctl_vmce_monitor_t;
 DEFINE_XEN_GUEST_HANDLE(xen_domctl_vmce_monitor_t);
 
+struct xen_domctl_set_broken_page_p2m {
+    uint64_t pfn;
+};
+typedef struct xen_domctl_set_broken_page_p2m xen_domctl_set_broken_page_p2m_t;
+DEFINE_XEN_GUEST_HANDLE(xen_domctl_set_broken_page_p2m_t);
+
 struct xen_domctl {
     uint32_t cmd;
 #define XEN_DOMCTL_createdomain                   1
@@ -923,6 +930,7 @@
 #define XEN_DOMCTL_set_virq_handler              66
 #define XEN_DOMCTL_vmce_monitor_start            67
 #define XEN_DOMCTL_vmce_monitor_end              68
+#define XEN_DOMCTL_set_broken_page_p2m           69
 #define XEN_DOMCTL_gdbsx_guestmemio            1000
 #define XEN_DOMCTL_gdbsx_pausevcpu             1001
 #define XEN_DOMCTL_gdbsx_unpausevcpu           1002
@@ -980,6 +988,7 @@
         struct xen_domctl_set_virq_handler  set_virq_handler;
         struct xen_domctl_vmce_monitor      vmce_monitor;
         struct xen_domctl_gdbsx_memio       gdbsx_guest_memio;
+        struct xen_domctl_set_broken_page_p2m set_broken_page_p2m;
         struct xen_domctl_gdbsx_pauseunp_vcpu gdbsx_pauseunp_vcpu;
         struct xen_domctl_gdbsx_domstatus   gdbsx_domstatus;
         uint8_t                             pad[128];

MDANCiAjZGVmaW5lIFhFTl9ET01DVExfZ2Ric3hfcGF1c2V2Y3B1ICAgICAgICAgICAgIDEwMDEN
CiAjZGVmaW5lIFhFTl9ET01DVExfZ2Ric3hfdW5wYXVzZXZjcHUgICAgICAgICAgIDEwMDINCkBA
IC05ODAsNiArOTg4LDcgQEANCiAgICAgICAgIHN0cnVjdCB4ZW5fZG9tY3RsX3NldF92aXJxX2hh
bmRsZXIgIHNldF92aXJxX2hhbmRsZXI7DQogICAgICAgICBzdHJ1Y3QgeGVuX2RvbWN0bF92bWNl
X21vbml0b3IgICAgICB2bWNlX21vbml0b3I7DQogICAgICAgICBzdHJ1Y3QgeGVuX2RvbWN0bF9n
ZGJzeF9tZW1pbyAgICAgICBnZGJzeF9ndWVzdF9tZW1pbzsNCisgICAgICAgIHN0cnVjdCB4ZW5f
ZG9tY3RsX3NldF9icm9rZW5fcGFnZV9wMm0gc2V0X2Jyb2tlbl9wYWdlX3AybTsNCiAgICAgICAg
IHN0cnVjdCB4ZW5fZG9tY3RsX2dkYnN4X3BhdXNldW5wX3ZjcHUgZ2Ric3hfcGF1c2V1bnBfdmNw
dTsNCiAgICAgICAgIHN0cnVjdCB4ZW5fZG9tY3RsX2dkYnN4X2RvbXN0YXR1cyAgIGdkYnN4X2Rv
bXN0YXR1czsNCiAgICAgICAgIHVpbnQ4X3QgICAgICAgICAgICAgICAgICAgICAgICAgICAgIHBh
ZFsxMjhdOw0K

--_002_DE8DF0795D48FD4CA783C40EC8292335317D90SHSMSX101ccrcorpi_
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--_002_DE8DF0795D48FD4CA783C40EC8292335317D90SHSMSX101ccrcorpi_--


From xen-devel-bounces@lists.xen.org Mon Aug 27 14:01:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Aug 2012 14:01:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T5zrm-0003pf-UB; Mon, 27 Aug 2012 14:00:34 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1T5zrl-0003pa-2T
	for xen-devel@lists.xen.org; Mon, 27 Aug 2012 14:00:33 +0000
Received: from [85.158.143.99:48419] by server-3.bemta-4.messagelabs.com id
	2A/DF-08232-08D7B305; Mon, 27 Aug 2012 14:00:32 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-13.tower-216.messagelabs.com!1346076026!27157375!1
X-Originating-IP: [81.169.146.162]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MiA9PiAzODUwMjc=\n,sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MiA9PiAzODUwMjc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31174 invoked from network); 27 Aug 2012 14:00:27 -0000
Received: from mo-p00-ob.rzone.de (HELO mo-p00-ob.rzone.de) (81.169.146.162)
	by server-13.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 27 Aug 2012 14:00:27 -0000
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+zrwiavkK6tmQaLfmwtM48/lq287uF5o=
X-RZG-CLASS-ID: mo00
Received: from probook.site
	(dslb-084-057-072-055.pools.arcor-ip.net [84.57.72.55])
	by smtp.strato.de (josoe mo35) (RZmta 30.12 DYNA|AUTH)
	with (DHE-RSA-AES256-SHA encrypted) ESMTPA id a063e7o7RCvGfK
	for <xen-devel@lists.xen.org>; Mon, 27 Aug 2012 16:00:26 +0200 (CEST)
Received: from probook.site (localhost [IPv6:::1])
	by probook.site (Postfix) with ESMTP id B477A183A5
	for <xen-devel@lists.xen.org>; Mon, 27 Aug 2012 16:00:25 +0200 (CEST)
MIME-Version: 1.0
X-Mercurial-Node: 03ef29089830b119b6efd87db8ab5c38b7428938
Message-Id: <03ef29089830b119b6ef.1346076024@probook.site>
User-Agent: Mercurial-patchbomb/1.7.5
Date: Mon, 27 Aug 2012 16:00:24 +0200
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH] docs: update xenpaging.txt
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

# HG changeset patch
# User Olaf Hering <olaf@aepfle.de>
# Date 1346072833 -7200
# Node ID 03ef29089830b119b6efd87db8ab5c38b7428938
# Parent  e6ca45ca03c2e08af3a74b404166527b68fd1218
docs: update xenpaging.txt

Signed-off-by: Olaf Hering <olaf@aepfle.de>

diff -r e6ca45ca03c2 -r 03ef29089830 docs/misc/xenpaging.txt
--- a/docs/misc/xenpaging.txt
+++ b/docs/misc/xenpaging.txt
@@ -12,37 +12,34 @@ access the paged-out memory, the page is
 memory.  This allows the sum of all running guests to use more memory
 than physically available on the host.
 
+Requirements:
+
+xenpaging relies on Intel EPT or AMD RVI; other hardware is not
+supported. Only HVM guests are supported.  The dom0 kernel needs
+paging-aware backend drivers to handle paged granttable entries.
+Currently only dom0 kernels based on classic Xen Linux support this
+functionality.
+
 Usage:
 
-Once the guest is running, run xenpaging with the guest_id and the
-number of pages to page-out:
+xenpaging is not yet integrated into libxl/xend, so it has to be
+started manually for each guest.
 
-  chdir /var/lib/xen/xenpaging
-  xenpaging <guest_id>  <number_of_pages>
+Once the guest is running, run xenpaging with the guest_id and the path
+to the pagefile:
+ 
+ /usr/lib/xen/bin/xenpaging -f /path/to/page_file -d dom_id &
 
-To obtain the guest_id, run 'xm list'.
-xenpaging will write the pagefile to the current directory.
-Example with 128MB pagefile on guest 1:
+Once xenpaging runs it needs a memory target, which is the memory
+footprint of the guest. This value (in KiB) must be written manually to
+xenstore. The following example sets the target to 512MB:
 
-  xenpaging 1 32768
+ xenstore-write /local/domain/<dom_id>/memory/target-tot_pages $((1024*512))
 
-Caution: stopping xenpaging manually will cause the guest to stall or
-crash because the paged-out memory is not written back into the guest!
-
-After a reboot of a guest, its guest_id changes, the current xenpaging
-binary has no target anymore. To automate restarting of xenpaging after
-guest reboot, specify the number if pages in the guest configuration
-file /etc/xen/vm/<guest_name>:
-
-xenpaging=32768
-
-Redo the guest with 'xm create /etc/xen/vm/<guest_name>' to activate the
-changes.
-
+Now xenpaging pages out as many pages as needed to keep the overall
+memory footprint of the guest at 512MB.
 
 Todo:
-- implement stopping of xenpaging
-- implement/test live migration
-
+- integrate xenpaging into libxl
 
 # vim: tw=72
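The two usage steps above (start the pager, then write the memory target in KiB to xenstore) can be sketched as a small shell helper. This is an illustrative sketch, not part of the patch: the domain id, page file path, and target size are example values, and the xenpaging/xenstore-write invocations are left as comments because they require a running Xen host.

```shell
# Example values only -- adjust for your guest.
DOM_ID=1
PAGE_FILE=/var/lib/xen/xenpaging/page_file
TARGET_MB=512

# The xenstore target is expressed in KiB: MiB * 1024.
TARGET_KIB=$((TARGET_MB * 1024))
echo "memory/target-tot_pages for domain $DOM_ID: $TARGET_KIB KiB"

# On a Xen host you would then run:
#   /usr/lib/xen/bin/xenpaging -f "$PAGE_FILE" -d "$DOM_ID" &
#   xenstore-write /local/domain/$DOM_ID/memory/target-tot_pages $TARGET_KIB
```

With TARGET_MB=512 this computes 524288, matching the $((1024*512)) example in the text above.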

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 27 14:01:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Aug 2012 14:01:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T5zsO-0003rO-BG; Mon, 27 Aug 2012 14:01:12 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <janez3k@gmail.com>) id 1T5zsM-0003r5-M2
	for xen-devel@lists.xen.org; Mon, 27 Aug 2012 14:01:10 +0000
Received: from [85.158.138.51:51764] by server-11.bemta-3.messagelabs.com id
	0D/13-23152-5AD7B305; Mon, 27 Aug 2012 14:01:09 +0000
X-Env-Sender: janez3k@gmail.com
X-Msg-Ref: server-16.tower-174.messagelabs.com!1346076054!28149025!1
X-Originating-IP: [209.85.214.173]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_10_20,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20407 invoked from network); 27 Aug 2012 14:00:56 -0000
Received: from mail-ob0-f173.google.com (HELO mail-ob0-f173.google.com)
	(209.85.214.173)
	by server-16.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Aug 2012 14:00:56 -0000
Received: by obbta14 with SMTP id ta14so7644043obb.32
	for <xen-devel@lists.xen.org>; Mon, 27 Aug 2012 07:00:54 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=lI7Cgt9t96QGUCNcsKlQB1u7wUekojILVQdqBDYe3uY=;
	b=Uz2fHqDf/htCjltYbZIKPkVaXpgWnXHiIYFP03kSgCHGhUt/U+Jlg14HcWuYT9HeRh
	Dka+kG53XtgtLSgxvhIYhnmKNI8jhGHI/QEMZsqQ3BUYEgqFY1DzzEBrf5qi4TZpYk4Q
	MhzdolyyU33HzSZOwpIiw2eVdSbQW2MPpzIgdc6NjWM1DHHKkJgmaQLgdDMSlYJZ6Fbi
	3lrc642pVXZ0FJp5CV38iqmnNPHtotxCPu4SmBi/6MOtJD23iNDyiZOkCbs8bzGAijM6
	oObrGSyijLOKTi0IQSLTl+O0A32Ab82VhAR8hLozHQG1FY++umuhVr4I+R3uo0PRrXVo
	vfwA==
MIME-Version: 1.0
Received: by 10.60.25.38 with SMTP id z6mr10342088oef.15.1346076054243; Mon,
	27 Aug 2012 07:00:54 -0700 (PDT)
Received: by 10.76.79.36 with HTTP; Mon, 27 Aug 2012 07:00:54 -0700 (PDT)
Date: Mon, 27 Aug 2012 16:00:54 +0200
Message-ID: <CACC+8CTi1NAZaD2fwOCN3RGxrRd=5nzLGTWWFqJHKtieCH+rYg@mail.gmail.com>
From: =?UTF-8?Q?Kristijan_Le=C4=8Dnik?= <janez3k@gmail.com>
To: xen-devel@lists.xen.org
Subject: [Xen-devel] PCI passthrough amd/ati
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3178810640224846293=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3178810640224846293==
Content-Type: multipart/alternative; boundary=e89a8ff253ae0dc8af04c83fc10c

--e89a8ff253ae0dc8af04c83fc10c
Content-Type: text/plain; charset=ISO-8859-1

Hi,

I compiled Xen from git (26.08.2012) with kernel 3.4.7 (3.5 has USB
problems), and I have passed a PCI-e AMD/ATI 6670 through to a Windows 7
guest. After installing the drivers and rebooting, Windows automatically
selects the AMD/ATI card as the primary graphics card, and inside Windows
it works fine. I stress-tested it for a few hours without problems, but
with some games the display driver amdkmdap constantly restarts: I get a
black screen for 10 to 20 seconds, and once the driver has restarted it
works fine again until the next restart. Has anybody seen similar
problems?

Best Regards,
Kristijan Lecnik


--e89a8ff253ae0dc8af04c83fc10c--


--===============3178810640224846293==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3178810640224846293==--


From xen-devel-bounces@lists.xen.org Mon Aug 27 14:06:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Aug 2012 14:06:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T5zxR-000470-MB; Mon, 27 Aug 2012 14:06:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T5zxQ-00046l-6y
	for xen-devel@lists.xensource.com; Mon, 27 Aug 2012 14:06:24 +0000
Received: from [85.158.143.35:41092] by server-1.bemta-4.messagelabs.com id
	33/92-12504-FDE7B305; Mon, 27 Aug 2012 14:06:23 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1346076382!12294117!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA2MDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8510 invoked from network); 27 Aug 2012 14:06:22 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Aug 2012 14:06:22 -0000
X-IronPort-AV: E=Sophos;i="4.80,321,1344211200"; d="scan'208";a="14211913"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	27 Aug 2012 14:06:00 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Mon, 27 Aug 2012 15:06:00 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1T5zx2-0006Xm-3P;
	Mon, 27 Aug 2012 14:06:00 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1T5zx2-0000F9-0Z;
	Mon, 27 Aug 2012 15:06:00 +0100
To: xen-devel@lists.xensource.com
Message-ID: <osstest-13632-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 27 Aug 2012 15:06:00 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.0 test] 13632: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13632 linux-3.0 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13632/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-pvops             4 kernel-build              fail REGR. vs. 13608
 build-amd64                   4 xen-build                 fail REGR. vs. 13608

Regressions which are regarded as allowable (not blocking):
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13608

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 linux                5aa287dcf1b5879aa0150b0511833c52885f5b4c
baseline version:
 linux                a422ca75bd264cd26bafeb6305655245d2ea7c6b

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  fail    
 build-i386                                                   pass    
 build-amd64-pvops                                            fail    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 5aa287dcf1b5879aa0150b0511833c52885f5b4c
Author: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Date:   Sun Aug 26 15:12:29 2012 -0700

    Linux 3.0.42

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 27 14:06:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Aug 2012 14:06:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T5zxR-000470-MB; Mon, 27 Aug 2012 14:06:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T5zxQ-00046l-6y
	for xen-devel@lists.xensource.com; Mon, 27 Aug 2012 14:06:24 +0000
Received: from [85.158.143.35:41092] by server-1.bemta-4.messagelabs.com id
	33/92-12504-FDE7B305; Mon, 27 Aug 2012 14:06:23 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1346076382!12294117!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA2MDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8510 invoked from network); 27 Aug 2012 14:06:22 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Aug 2012 14:06:22 -0000
X-IronPort-AV: E=Sophos;i="4.80,321,1344211200"; d="scan'208";a="14211913"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	27 Aug 2012 14:06:00 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Mon, 27 Aug 2012 15:06:00 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1T5zx2-0006Xm-3P;
	Mon, 27 Aug 2012 14:06:00 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1T5zx2-0000F9-0Z;
	Mon, 27 Aug 2012 15:06:00 +0100
To: xen-devel@lists.xensource.com
Message-ID: <osstest-13632-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 27 Aug 2012 15:06:00 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.0 test] 13632: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13632 linux-3.0 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13632/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-pvops             4 kernel-build              fail REGR. vs. 13608
 build-amd64                   4 xen-build                 fail REGR. vs. 13608

Regressions which are regarded as allowable (not blocking):
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13608

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win-vcpus1  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-i386-win-vcpus1    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-win          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-win           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win       1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 linux                5aa287dcf1b5879aa0150b0511833c52885f5b4c
baseline version:
 linux                a422ca75bd264cd26bafeb6305655245d2ea7c6b
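
To inspect the delta this report covers, the two hashes above can be compared in a linux-3.0 checkout. The sketch below only prints the git command (it assumes you have a local clone to run it in, so it does not invoke git itself):

```shell
#!/bin/sh
# Hashes copied from the report above.
BASELINE=a422ca75bd264cd26bafeb6305655245d2ea7c6b
TARGET=5aa287dcf1b5879aa0150b0511833c52885f5b4c

# Printed rather than executed, since this sketch assumes a linux-3.0
# clone is available to run it in.
printf 'git log --oneline %s..%s\n' "$BASELINE" "$TARGET"
```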

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  fail    
 build-i386                                                   pass    
 build-amd64-pvops                                            fail    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-win-vcpus1                                   blocked 
 test-amd64-i386-xl-win-vcpus1                                blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-amd64-win                                         blocked 
 test-amd64-i386-win                                          blocked 
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      blocked 
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 5aa287dcf1b5879aa0150b0511833c52885f5b4c
Author: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Date:   Sun Aug 26 15:12:29 2012 -0700

    Linux 3.0.42

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 27 14:29:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Aug 2012 14:29:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T60JG-0004tv-AB; Mon, 27 Aug 2012 14:28:58 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <imammedo@redhat.com>) id 1T60JF-0004tn-Dg
	for xen-devel@lists.xensource.com; Mon, 27 Aug 2012 14:28:57 +0000
X-Env-Sender: imammedo@redhat.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1346077730!9127366!1
X-Originating-IP: [209.132.183.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjQgPT4gOTU4MzQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23516 invoked from network); 27 Aug 2012 14:28:51 -0000
Received: from mx3-phx2.redhat.com (HELO mx3-phx2.redhat.com) (209.132.183.24)
	by server-3.tower-27.messagelabs.com with SMTP;
	27 Aug 2012 14:28:51 -0000
Received: from zmail16.collab.prod.int.phx2.redhat.com
	(zmail16.collab.prod.int.phx2.redhat.com [10.5.83.18])
	by mx3-phx2.redhat.com (8.13.8/8.13.8) with ESMTP id q7RESchc017614;
	Mon, 27 Aug 2012 10:28:38 -0400
Date: Mon, 27 Aug 2012 10:28:38 -0400 (EDT)
From: Igor Mammedov <imammedo@redhat.com>
To: Peter Maydell <peter.maydell@linaro.org>
Message-ID: <703606518.14100692.1346077718872.JavaMail.root@redhat.com>
In-Reply-To: <CAFEAcA9NHk3wdExh=EaQRLs7=txF=3HXLRK+bxik1X8UuskUXA@mail.gmail.com>
MIME-Version: 1.0
X-Originating-IP: [10.34.1.247]
X-Mailer: Zimbra 7.2.0_GA_2669 (ZimbraWebClient - FF3.0 (Linux)/7.2.0_GA_2669)
Cc: jan kiszka <jan.kiszka@siemens.com>, mjt@tls.msk.ru, qemu-devel@nongnu.org,
	armbru@redhat.com, blauwirbel@gmail.com, kraxel@redhat.com,
	Eduardo Habkost <ehabkost@redhat.com>, xen-devel@lists.xensource.com,
	i mitsyanko <i.mitsyanko@samsung.com>, mdroth@linux.vnet.ibm.com,
	avi@redhat.com, anthony perard <anthony.perard@citrix.com>,
	lersek@redhat.com, stefanha@linux.vnet.ibm.com,
	stefano stabellini <stefano.stabellini@eu.citrix.com>,
	sw@weilnetz.de, lcapitulino@redhat.com, rth@twiddle.net,
	kwolf@redhat.com, aliguori@us.ibm.com, mtosatti@redhat.com,
	pbonzini@redhat.com, afaerber@suse.de
Subject: Re: [Xen-devel] [RFC 1/8] move qemu_irq typedef out of cpu-common.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

----- Original Message -----
> From: "Peter Maydell" <peter.maydell@linaro.org>
...
> 
> I'm not objecting to this patch if it helps us move forwards,
> but adding the #include to sysemu.h is effectively just adding
> the definition to another grabbag header (183 files include
> sysemu.h). It would be nicer long-term to separate out the
> one thing in this header that cares about qemu_irq (the extern
> declaration of qemu_system_powerdown).
                 ^^^^
Is there a preference or suggestion as to which header it should be declared in?

...
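
For what it's worth, the split being discussed could be sketched like this; the file names and layout are my assumptions, not the actual QEMU tree:

```shell
#!/bin/sh
# Sketch of pulling the qemu_irq typedef out of grabbag headers into a
# tiny header of its own, plus a home for the one sysemu.h user Peter
# mentions. File names are hypothetical.
cd "$(mktemp -d)"

cat > irq-types.h <<'EOF'
#ifndef IRQ_TYPES_H
#define IRQ_TYPES_H
/* Opaque handle; struct IRQState stays private to the irq code. */
typedef struct IRQState *qemu_irq;
#endif
EOF

cat > powerdown.h <<'EOF'
#ifndef POWERDOWN_H
#define POWERDOWN_H
#include "irq-types.h"
/* The lone qemu_irq user formerly stranded in sysemu.h. */
extern qemu_irq qemu_system_powerdown;
#endif
EOF

# Files needing only the typedef now include a five-line header instead
# of a header pulled in by ~183 other files.
grep -l qemu_irq irq-types.h powerdown.h
```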

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 27 14:33:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Aug 2012 14:33:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T60NO-00054p-3L; Mon, 27 Aug 2012 14:33:14 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ipavlikevich@gmail.com>) id 1T60Ls-00050o-Lt
	for xen-devel@lists.xen.org; Mon, 27 Aug 2012 14:31:40 +0000
Received: from [85.158.143.99:30632] by server-2.bemta-4.messagelabs.com id
	9A/18-21239-CC48B305; Mon, 27 Aug 2012 14:31:40 +0000
X-Env-Sender: ipavlikevich@gmail.com
X-Msg-Ref: server-9.tower-216.messagelabs.com!1346077897!28094200!1
X-Originating-IP: [209.85.160.45]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_10_20,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10998 invoked from network); 27 Aug 2012 14:31:38 -0000
Received: from mail-pb0-f45.google.com (HELO mail-pb0-f45.google.com)
	(209.85.160.45)
	by server-9.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Aug 2012 14:31:38 -0000
Received: by pbbjt11 with SMTP id jt11so7334847pbb.32
	for <xen-devel@lists.xen.org>; Mon, 27 Aug 2012 07:31:36 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:content-type; bh=USfpgH41dpqR29PNQj2M4zc2iULxeIlXiUuvoYO4qyc=;
	b=yIhTsg2Ply593S0lUijKytiKV1Aa7IpV6lTQlQ4fWQsScGW9JkLSsvybEtL9QwLdoX
	vlU++j88iLeaGiGTY95ADV7NFodgkbh3ZdrSxSBIq0j+p4p824E4EehZbVLn6NKhHfqu
	tAMNvfSjbHvuYDBoYbdAItMa8Fap0E+Cw1tgYJgkDGEwqe0XLss2PU0SbgL4lSguqB0/
	UhH6jNNDBXdSoTNkS1p1Bhoffv52/h4/1g7CoZ2oL2KUMnNpDJ0nW9rPk60DrPCYXHRX
	g2QJlP01w36kziZWoKi2W7JOCnPefq3qzCG+0//jiIq/sTj/x9lCAK9LK3bWl7w9yy5h
	kdSA==
MIME-Version: 1.0
Received: by 10.68.217.100 with SMTP id ox4mr34812957pbc.87.1346077896625;
	Mon, 27 Aug 2012 07:31:36 -0700 (PDT)
Received: by 10.66.251.71 with HTTP; Mon, 27 Aug 2012 07:31:36 -0700 (PDT)
In-Reply-To: <CA+O_+Eww4mCi10X=zpd2v18j4gwZzzjFDwemSqcxL0coqJ=6VA@mail.gmail.com>
References: <CA+O_+Eww4mCi10X=zpd2v18j4gwZzzjFDwemSqcxL0coqJ=6VA@mail.gmail.com>
Date: Mon, 27 Aug 2012 18:31:36 +0400
Message-ID: <CA+O_+Ez525JyiO-=U1z-A9tzd_Ras8KGAC9VYVBTce0UYKkBWA@mail.gmail.com>
From: Igor Pavlikevich <ipavlikevich@gmail.com>
To: xen-devel@lists.xen.org
X-Mailman-Approved-At: Mon, 27 Aug 2012 14:33:12 +0000
Subject: [Xen-devel] Xen 4.1.3 restore problem with 2.6.32-45 domU
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1208401993436322832=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============1208401993436322832==
Content-Type: multipart/alternative; boundary=e89a8ff245dfde4d9204c8402e44

--e89a8ff245dfde4d9204c8402e44
Content-Type: text/plain; charset=ISO-8859-1

Hi guys,

After upgrading the Debian Dom0 from
xen-hypervisor-amd64-4.1.3~rc1+hg-20120614.a9c0a89c08f2-4 to
xen-hypervisor-amd64-4.1.3-1, all PV domains running Debian squeeze with the
stable kernel linux-image-2.6.32-45 fail to 'xm restore' with this trace:

[296707.372141] INFO: task xenwatch:16 blocked for more than 120 seconds.
[296707.372869] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[296707.373605] xenwatch      D 0000000000000002     0    16      2 0x00000000
[296707.374353]  ffffffff814891f0 0000000000000246 ffff88007ff57138 ffffffff81041238
[296707.375113]  ffff88007ff57138 ffffffff81311f90 000000000000f9e0 ffff88007ff9dfd8
[296707.375884]  0000000000015780 0000000000015780 ffff88007ff57100 ffff88007ff573f8
[296707.376663] Call Trace:
[296707.377428]  [<ffffffff81041238>] ? pick_next_task_fair+0xca/0xd6
[296707.378210]  [<ffffffff812fba70>] ? thread_return+0x79/0xe0
[296707.378994]  [<ffffffff8100e15c>] ? xen_vcpuop_set_next_event+0x0/0x60
[296707.379802]  [<ffffffff812fbe2d>] ? schedule_timeout+0x2e/0xdd
[296707.380585]  [<ffffffff8100e22f>] ? xen_restore_fl_direct_end+0x0/0x1
[296707.381232]  [<ffffffff812fce0a>] ? _spin_unlock_irqrestore+0xd/0xe
[296707.381832]  [<ffffffff810ad4e8>] ? cpupri_set+0x10c/0x135
[296707.382419]  [<ffffffff812fbce4>] ? wait_for_common+0xde/0x15b
[296707.383059]  [<ffffffff8104a46c>] ? default_wake_function+0x0/0x9
[296707.383671]  [<ffffffff81061cb0>] ? flush_cpu_workqueue+0x5e/0x75
[296707.384287]  [<ffffffff81064dda>] ? kthread_stop+0x5d/0xa2
[296707.384913]  [<ffffffff81061d0b>] ? cleanup_workqueue_thread+0x44/0x51
[296707.385507]  [<ffffffff81061db2>] ? destroy_workqueue+0x76/0xad
[296707.386148]  [<ffffffff8108b03b>] ? stop_machine_destroy+0x2e/0x47
[296707.386791]  [<ffffffff811f0969>] ? shutdown_handler+0x230/0x25c
[296707.387443]  [<ffffffff812fc3a6>] ? mutex_lock+0xd/0x31
[296707.387986]  [<ffffffff811f1c8f>] ? xenwatch_thread+0x117/0x14a
[296707.388564]  [<ffffffff81065042>] ? autoremove_wake_function+0x0/0x2e
[296707.389097]  [<ffffffff811f1b78>] ? xenwatch_thread+0x0/0x14a
[296707.389656]  [<ffffffff81064d75>] ? kthread+0x79/0x81
[296707.390197]  [<ffffffff81011baa>] ? child_rip+0xa/0x20
[296707.390751]  [<ffffffff81010d61>] ? int_ret_from_sys_call+0x7/0x1b
[296707.391519]  [<ffffffff8101151d>] ? retint_restore_args+0x5/0x6
[296707.392185]  [<ffffffff8100e22f>] ? xen_restore_fl_direct_end+0x0/0x1
[296707.392757]  [<ffffffff81011ba0>] ? child_rip+0x0/0x20
[296707.393303] INFO: task getty:634 blocked for more than 120 seconds.
[296707.393866] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[296707.394449] getty         D 0000000000000002     0   634      1 0x00000000
[296707.395084]  ffffffff814891f0 0000000000000286 ffff88007ef0cde8 ffffffff81041238
[296707.395673]  ffff88007ef0cde8 ffffffff81311f90 000000000000f9e0 ffff88007f239fd8
[296707.396300]  0000000000015780 0000000000015780 ffff88007ef0cdb0 ffff88007ef0d0a8
[296707.396954] Call Trace:
[296707.397518]  [<ffffffff81041238>] ? pick_next_task_fair+0xca/0xd6
[296707.398103]  [<ffffffff812fba70>] ? thread_return+0x79/0xe0
[296707.398675]  [<ffffffff812fbe2d>] ? schedule_timeout+0x2e/0xdd
[296707.399244]  [<ffffffff81044007>] ? select_task_rq_fair+0x7dc/0x843
[296707.399810]  [<ffffffff812fbce4>] ? wait_for_common+0xde/0x15b
[296707.400355]  [<ffffffff8104a46c>] ? default_wake_function+0x0/0x9
[296707.400920]  [<ffffffff8104b09b>] ? sched_exec+0x114/0x130
[296707.401471]  [<ffffffff810f4b2b>] ? do_execve+0xe8/0x2c3
[296707.402013]  [<ffffffff8100f500>] ? sys_execve+0x35/0x4c
[296707.402552]  [<ffffffff81010f9a>] ? stub_execve+0x6a/0xc0
...

Upgrading the domU kernel to Debian testing's 3.2.23-1 fixes this problem.
Saving under xen 4.1.3 and restoring under xen
4.1.3~rc1+hg-20120614.a9c0a89c08f2-4 also works fine.
Is there any way to save/restore Debian squeeze VMs running on a 4.1.3 Dom0?
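
For reference, the failing cycle is the classic xm save/restore pair. The helper below just prints the commands involved; the domain name and checkpoint path are made-up example values:

```shell
#!/bin/sh
# Prints the save/restore command pair from the failing scenario above;
# "squeeze-pv" and the checkpoint path are example values, not real ones.
save_restore_cmds() {
    domain=$1
    chk=/var/lib/xen/save/$domain.chk
    printf 'xm save %s %s\n' "$domain" "$chk"   # checkpoint the PV guest
    printf 'xm restore %s\n' "$chk"             # the step that hangs here
}
save_restore_cmds squeeze-pv
```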

--e89a8ff245dfde4d9204c8402e44--


--===============1208401993436322832==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1208401993436322832==--


From xen-devel-bounces@lists.xen.org Mon Aug 27 14:33:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Aug 2012 14:33:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T60NO-00054p-3L; Mon, 27 Aug 2012 14:33:14 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ipavlikevich@gmail.com>) id 1T60Ls-00050o-Lt
	for xen-devel@lists.xen.org; Mon, 27 Aug 2012 14:31:40 +0000
Received: from [85.158.143.99:30632] by server-2.bemta-4.messagelabs.com id
	9A/18-21239-CC48B305; Mon, 27 Aug 2012 14:31:40 +0000
X-Env-Sender: ipavlikevich@gmail.com
X-Msg-Ref: server-9.tower-216.messagelabs.com!1346077897!28094200!1
X-Originating-IP: [209.85.160.45]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_10_20,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10998 invoked from network); 27 Aug 2012 14:31:38 -0000
Received: from mail-pb0-f45.google.com (HELO mail-pb0-f45.google.com)
	(209.85.160.45)
	by server-9.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Aug 2012 14:31:38 -0000
Received: by pbbjt11 with SMTP id jt11so7334847pbb.32
	for <xen-devel@lists.xen.org>; Mon, 27 Aug 2012 07:31:36 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:content-type; bh=USfpgH41dpqR29PNQj2M4zc2iULxeIlXiUuvoYO4qyc=;
	b=yIhTsg2Ply593S0lUijKytiKV1Aa7IpV6lTQlQ4fWQsScGW9JkLSsvybEtL9QwLdoX
	vlU++j88iLeaGiGTY95ADV7NFodgkbh3ZdrSxSBIq0j+p4p824E4EehZbVLn6NKhHfqu
	tAMNvfSjbHvuYDBoYbdAItMa8Fap0E+Cw1tgYJgkDGEwqe0XLss2PU0SbgL4lSguqB0/
	UhH6jNNDBXdSoTNkS1p1Bhoffv52/h4/1g7CoZ2oL2KUMnNpDJ0nW9rPk60DrPCYXHRX
	g2QJlP01w36kziZWoKi2W7JOCnPefq3qzCG+0//jiIq/sTj/x9lCAK9LK3bWl7w9yy5h
	kdSA==
MIME-Version: 1.0
Received: by 10.68.217.100 with SMTP id ox4mr34812957pbc.87.1346077896625;
	Mon, 27 Aug 2012 07:31:36 -0700 (PDT)
Received: by 10.66.251.71 with HTTP; Mon, 27 Aug 2012 07:31:36 -0700 (PDT)
In-Reply-To: <CA+O_+Eww4mCi10X=zpd2v18j4gwZzzjFDwemSqcxL0coqJ=6VA@mail.gmail.com>
References: <CA+O_+Eww4mCi10X=zpd2v18j4gwZzzjFDwemSqcxL0coqJ=6VA@mail.gmail.com>
Date: Mon, 27 Aug 2012 18:31:36 +0400
Message-ID: <CA+O_+Ez525JyiO-=U1z-A9tzd_Ras8KGAC9VYVBTce0UYKkBWA@mail.gmail.com>
From: Igor Pavlikevich <ipavlikevich@gmail.com>
To: xen-devel@lists.xen.org
X-Mailman-Approved-At: Mon, 27 Aug 2012 14:33:12 +0000
Subject: [Xen-devel] Xen 4.1.3 restore problem with 2.6.32-45 domU
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1208401993436322832=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============1208401993436322832==
Content-Type: multipart/alternative; boundary=e89a8ff245dfde4d9204c8402e44

--e89a8ff245dfde4d9204c8402e44
Content-Type: text/plain; charset=ISO-8859-1

Hi guys,

After upgrading Debian Dom0 from
xen-hypervisor-amd64-4.1.3~rc1+hg-20120614.a9c0a89c08f2-4 to
xen-hypervisor-amd64-4.1.3-1, all PV domains running Debian squeeze with the
stable kernel linux-image-2.6.32-45 fail to 'xm restore' with this trace:

[296707.372141] INFO: task xenwatch:16 blocked for more than 120 seconds.
[296707.372869] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[296707.373605] xenwatch      D 0000000000000002     0    16      2 0x00000000
[296707.374353]  ffffffff814891f0 0000000000000246 ffff88007ff57138 ffffffff81041238
[296707.375113]  ffff88007ff57138 ffffffff81311f90 000000000000f9e0 ffff88007ff9dfd8
[296707.375884]  0000000000015780 0000000000015780 ffff88007ff57100 ffff88007ff573f8
[296707.376663] Call Trace:
[296707.377428]  [<ffffffff81041238>] ? pick_next_task_fair+0xca/0xd6
[296707.378210]  [<ffffffff812fba70>] ? thread_return+0x79/0xe0
[296707.378994]  [<ffffffff8100e15c>] ? xen_vcpuop_set_next_event+0x0/0x60
[296707.379802]  [<ffffffff812fbe2d>] ? schedule_timeout+0x2e/0xdd
[296707.380585]  [<ffffffff8100e22f>] ? xen_restore_fl_direct_end+0x0/0x1
[296707.381232]  [<ffffffff812fce0a>] ? _spin_unlock_irqrestore+0xd/0xe
[296707.381832]  [<ffffffff810ad4e8>] ? cpupri_set+0x10c/0x135
[296707.382419]  [<ffffffff812fbce4>] ? wait_for_common+0xde/0x15b
[296707.383059]  [<ffffffff8104a46c>] ? default_wake_function+0x0/0x9
[296707.383671]  [<ffffffff81061cb0>] ? flush_cpu_workqueue+0x5e/0x75
[296707.384287]  [<ffffffff81064dda>] ? kthread_stop+0x5d/0xa2
[296707.384913]  [<ffffffff81061d0b>] ? cleanup_workqueue_thread+0x44/0x51
[296707.385507]  [<ffffffff81061db2>] ? destroy_workqueue+0x76/0xad
[296707.386148]  [<ffffffff8108b03b>] ? stop_machine_destroy+0x2e/0x47
[296707.386791]  [<ffffffff811f0969>] ? shutdown_handler+0x230/0x25c
[296707.387443]  [<ffffffff812fc3a6>] ? mutex_lock+0xd/0x31
[296707.387986]  [<ffffffff811f1c8f>] ? xenwatch_thread+0x117/0x14a
[296707.388564]  [<ffffffff81065042>] ? autoremove_wake_function+0x0/0x2e
[296707.389097]  [<ffffffff811f1b78>] ? xenwatch_thread+0x0/0x14a
[296707.389656]  [<ffffffff81064d75>] ? kthread+0x79/0x81
[296707.390197]  [<ffffffff81011baa>] ? child_rip+0xa/0x20
[296707.390751]  [<ffffffff81010d61>] ? int_ret_from_sys_call+0x7/0x1b
[296707.391519]  [<ffffffff8101151d>] ? retint_restore_args+0x5/0x6
[296707.392185]  [<ffffffff8100e22f>] ? xen_restore_fl_direct_end+0x0/0x1
[296707.392757]  [<ffffffff81011ba0>] ? child_rip+0x0/0x20
[296707.393303] INFO: task getty:634 blocked for more than 120 seconds.
[296707.393866] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[296707.394449] getty         D 0000000000000002     0   634      1 0x00000000
[296707.395084]  ffffffff814891f0 0000000000000286 ffff88007ef0cde8 ffffffff81041238
[296707.395673]  ffff88007ef0cde8 ffffffff81311f90 000000000000f9e0 ffff88007f239fd8
[296707.396300]  0000000000015780 0000000000015780 ffff88007ef0cdb0 ffff88007ef0d0a8
[296707.396954] Call Trace:
[296707.397518]  [<ffffffff81041238>] ? pick_next_task_fair+0xca/0xd6
[296707.398103]  [<ffffffff812fba70>] ? thread_return+0x79/0xe0
[296707.398675]  [<ffffffff812fbe2d>] ? schedule_timeout+0x2e/0xdd
[296707.399244]  [<ffffffff81044007>] ? select_task_rq_fair+0x7dc/0x843
[296707.399810]  [<ffffffff812fbce4>] ? wait_for_common+0xde/0x15b
[296707.400355]  [<ffffffff8104a46c>] ? default_wake_function+0x0/0x9
[296707.400920]  [<ffffffff8104b09b>] ? sched_exec+0x114/0x130
[296707.401471]  [<ffffffff810f4b2b>] ? do_execve+0xe8/0x2c3
[296707.402013]  [<ffffffff8100f500>] ? sys_execve+0x35/0x4c
[296707.402552]  [<ffffffff81010f9a>] ? stub_execve+0x6a/0xc0
...

Upgrading the domU kernel to Debian testing's 3.2.23-1 fixes this problem.
Saving under xen 4.3.1 and restoring under xen
4.1.3~rc1+hg-20120614.a9c0a89c08f2-4 also works fine.
Is there any way to save/restore Debian squeeze VMs running on a 4.1.3 Dom0?

--e89a8ff245dfde4d9204c8402e44--



--===============1208401993436322832==--


From xen-devel-bounces@lists.xen.org Mon Aug 27 16:47:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Aug 2012 16:47:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T62Se-0006am-T9; Mon, 27 Aug 2012 16:46:48 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andres@lagarcavilla.org>) id 1T62Sd-0006ae-6k
	for xen-devel@lists.xen.org; Mon, 27 Aug 2012 16:46:47 +0000
Received: from [85.158.138.51:60082] by server-8.bemta-3.messagelabs.com id
	2C/A9-29583-674AB305; Mon, 27 Aug 2012 16:46:46 +0000
X-Env-Sender: andres@lagarcavilla.org
X-Msg-Ref: server-10.tower-174.messagelabs.com!1346086004!24081759!1
X-Originating-IP: [208.97.132.5]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA4Ljk3LjEzMi41ID0+IDI5OTM1\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5168 invoked from network); 27 Aug 2012 16:46:45 -0000
Received: from mailbigip.dreamhost.com (HELO homiemail-a11.g.dreamhost.com)
	(208.97.132.5) by server-10.tower-174.messagelabs.com with SMTP;
	27 Aug 2012 16:46:45 -0000
Received: from homiemail-a11.g.dreamhost.com (localhost [127.0.0.1])
	by homiemail-a11.g.dreamhost.com (Postfix) with ESMTP id 10A8E6E075;
	Mon, 27 Aug 2012 09:46:44 -0700 (PDT)
DomainKey-Signature: a=rsa-sha1; c=nofws; d=lagarcavilla.org; h=from:to:cc
	:subject:date:message-id; q=dns; s=lagarcavilla.org; b=udRV4dH+E
	rjyGjmOQgzagOO1VGUyXcICCC987numnBmSHNWmYfPvK94HZt2bBIltTl8EAz7Id
	Y3vREQMRnBmEArmT+5KpO006T3yDmBS9/XO3pUknV+Fxn0L9GWYrsJECjzFe4w69
	LkyG32jetj6JBM136Q8kT5KoeDzsJy1wVs=
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed; d=lagarcavilla.org; h=from
	:to:cc:subject:date:message-id; s=lagarcavilla.org; bh=t++T1rI/2
	Yur2U7w5wEsYHwGZaA=; b=TxTUz9FQrFuiBW2A44S89h8Z2KXubRoLBOu+KZBw5
	lEGINbdEMcx9mPb9yqmDu4HF96erTY+bOJdzJLS707LNgx2XyyORLdkqxM7ojaJD
	EGPzHa1wpZBHxjq4nct46OD7UqwKgAARnbOQbkB7Mzd6GspKTt42Jlb16xIH6kSq
	ZQ=
Received: from xdev.gridcentric.ca (unknown [206.223.182.18])
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(No client certificate requested)
	(Authenticated sender: andres@lagarcavilla.com)
	by homiemail-a11.g.dreamhost.com (Postfix) with ESMTPSA id DF1826E06F; 
	Mon, 27 Aug 2012 09:46:42 -0700 (PDT)
From: andres@lagarcavilla.org
To: xen-devel@lists.xen.org,
	xen-devel@xen.org
Date: Mon, 27 Aug 2012 12:51:27 -0400
Message-Id: <1346086287-17674-1-git-send-email-andres@lagarcavilla.org>
X-Mailer: git-send-email 1.7.8.2
Cc: Andres Lagar-Cavilla <andres@gridcentric.ca>,
	Andres Lagar-Cavilla <andres@lagarcavilla.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH] Xen backend support for paged out grant targets.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Andres Lagar-Cavilla <andres@lagarcavilla.org>

Since Xen 4.2, HVM domains may have portions of their memory paged out. When a
foreign domain (such as dom0) attempts to map these frames, the map will
initially fail. The hypervisor returns a suitable errno and kicks off an
asynchronous page-in operation carried out by a helper. The foreign domain is
expected to retry the mapping operation until it eventually succeeds. The
foreign domain is not put to sleep, because it could itself be the one running
the pager assist (the typical scenario for dom0).
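
The retry protocol described above can be illustrated with a minimal
user-space sketch. The GNTST_* values match the patch below; fake_map_grant
and the page-in counter are hypothetical stand-ins for the real hypercall
interface, used only to simulate an asynchronous page-in completing after a
few attempts:

```c
#define GNTST_okay    0
#define GNTST_eagain  (-12)

/* Stand-in for HYPERVISOR_grant_table_op(): reports GNTST_eagain while the
 * (simulated) page-in is still pending, then succeeds. */
static int pending_pageins = 3;

static int fake_map_grant(void)
{
	if (pending_pageins > 0) {
		pending_pageins--;
		return GNTST_eagain;	/* frame is still being paged in */
	}
	return GNTST_okay;		/* page-in done; mapping succeeds */
}

/* The caller's side of the protocol: keep reissuing the map operation
 * while the status is GNTST_eagain. */
int map_with_retry(void)
{
	int status;

	do {
		status = fake_map_grant();
		/* a real caller would msleep() here between attempts */
	} while (status == GNTST_eagain);

	return status;	/* guaranteed to differ from GNTST_eagain */
}
```

The caller polls rather than blocking in the hypervisor because, as noted
above, it may itself be the domain running the pager.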

This patch adds support for this mechanism to backend drivers using grant
mapping and copying operations. Specifically, this covers the blkback and
gntdev drivers (which map foreign grants), and the netback driver (which
copies foreign grants).

* Add GNTST_eagain, already exposed by Xen, to the grant interface.
* Add a retry method for grants that fail with GNTST_eagain (i.e. because the
  target foreign frame is paged out).
* Insert hooks with appropriate macro decorators in the aforementioned drivers.

The retry loop is only invoked if the grant operation status is GNTST_eagain.
It is guaranteed to exit with a status code different from GNTST_eagain. Any
other status code results in identical code execution as before.

The retry loop performs up to 256 attempts with increasing sleep intervals,
spread over a roughly 32 second period. It uses msleep to yield while waiting
for the next retry.
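
The "32 second period" claim can be checked with a back-of-the-envelope
sketch: with msleep(delay++) and an 8-bit delay starting at 1, the loop sleeps
1 + 2 + ... + 255 ms before delay wraps to 0 and the loop gives up.
total_backoff_ms is a hypothetical helper that totals the delays instead of
actually sleeping:

```c
/* Total the msleep() delays of the retry loop in gnttab_retry_eagain_gop(),
 * without sleeping. */
int total_backoff_ms(void)
{
	unsigned char delay = 1;	/* u8, as in the patch below */
	int total = 0;

	do {
		total += delay++;	/* stands in for msleep(delay++) */
	} while (delay);		/* u8 wraps 255 -> 0, ending the loop */

	return total;	/* 1 + 2 + ... + 255 = 32640 ms, ~32.6 seconds */
}
```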

Signed-off-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
---
 drivers/net/xen-netback/netback.c   |   13 +++++++++++++
 drivers/xen/grant-table.c           |   25 +++++++++++++++++++++++++
 drivers/xen/xenbus/xenbus_client.c  |    6 ++----
 include/xen/grant_table.h           |   25 +++++++++++++++++++++++++
 include/xen/interface/grant_table.h |    2 ++
 5 files changed, 67 insertions(+), 4 deletions(-)

diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index 682633b..063adf5 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -548,6 +548,8 @@ static int netbk_check_gop(struct xenvif *vif, int nr_meta_slots,
 
 	for (i = 0; i < nr_meta_slots; i++) {
 		copy_op = npo->copy + npo->copy_cons++;
+		if (copy_op->status == GNTST_eagain)
+			gnttab_retry_eagain_copy(copy_op);
 		if (copy_op->status != GNTST_okay) {
 			netdev_dbg(vif->dev,
 				   "Bad status %d from copy to DOM%d.\n",
@@ -976,6 +978,11 @@ static int xen_netbk_tx_check_gop(struct xen_netbk *netbk,
 
 	/* Check status of header. */
 	err = gop->status;
+	if (unlikely(err == GNTST_eagain))
+	{
+		gnttab_retry_eagain_copy(gop);
+		err = gop->status;
+	}
 	if (unlikely(err)) {
 		pending_ring_idx_t index;
 		index = pending_index(netbk->pending_prod++);
@@ -1001,6 +1008,12 @@ static int xen_netbk_tx_check_gop(struct xen_netbk *netbk,
 			if (unlikely(err))
 				xen_netbk_idx_release(netbk, pending_idx);
 			continue;
+		} else {
+			if (newerr == GNTST_eagain)
+			{
+				gnttab_retry_eagain_copy(gop);
+				newerr = gop->status;
+			}
 		}
 
 		/* Error on this fragment: respond to client with an error. */
diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index eea81cf..2b62a63 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -38,6 +38,7 @@
 #include <linux/vmalloc.h>
 #include <linux/uaccess.h>
 #include <linux/io.h>
+#include <linux/delay.h>
 #include <linux/hardirq.h>
 
 #include <xen/xen.h>
@@ -823,6 +824,25 @@ unsigned int gnttab_max_grant_frames(void)
 }
 EXPORT_SYMBOL_GPL(gnttab_max_grant_frames);
 
+void
+gnttab_retry_eagain_gop(unsigned int cmd, void *gop, int16_t *status,
+						const char *func)
+{
+	u8 delay = 1;
+
+	do {
+		BUG_ON(HYPERVISOR_grant_table_op(cmd, gop, 1));
+		if (*status == GNTST_eagain)
+			msleep(delay++);
+	} while ((*status == GNTST_eagain) && delay);
+
+	if (delay == 0) {
+		printk(KERN_ERR "%s: %s eagain grant\n", func, current->comm);
+		*status = GNTST_bad_page;
+	}
+}
+EXPORT_SYMBOL_GPL(gnttab_retry_eagain_gop);
+
 int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
 		    struct gnttab_map_grant_ref *kmap_ops,
 		    struct page **pages, unsigned int count)
@@ -836,6 +856,11 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
 	if (ret)
 		return ret;
 
+	/* Retry eagain maps */
+	for (i = 0; i < count; i++)
+		if (map_ops[i].status == GNTST_eagain)
+			gnttab_retry_eagain_map(map_ops + i);
+
 	if (xen_feature(XENFEAT_auto_translated_physmap))
 		return ret;
 
diff --git a/drivers/xen/xenbus/xenbus_client.c b/drivers/xen/xenbus/xenbus_client.c
index b3e146e..749f6a3 100644
--- a/drivers/xen/xenbus/xenbus_client.c
+++ b/drivers/xen/xenbus/xenbus_client.c
@@ -490,8 +490,7 @@ static int xenbus_map_ring_valloc_pv(struct xenbus_device *dev,
 
 	op.host_addr = arbitrary_virt_to_machine(pte).maddr;
 
-	if (HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, &op, 1))
-		BUG();
+	gnttab_map_grant_no_eagain(&op);
 
 	if (op.status != GNTST_okay) {
 		free_vm_area(area);
@@ -572,8 +571,7 @@ int xenbus_map_ring(struct xenbus_device *dev, int gnt_ref,
 	gnttab_set_map_op(&op, (unsigned long)vaddr, GNTMAP_host_map, gnt_ref,
 			  dev->otherend_id);
 
-	if (HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, &op, 1))
-		BUG();
+	gnttab_map_grant_no_eagain(&op);
 
 	if (op.status != GNTST_okay) {
 		xenbus_dev_fatal(dev, op.status,
diff --git a/include/xen/grant_table.h b/include/xen/grant_table.h
index 11e27c3..0f75184 100644
--- a/include/xen/grant_table.h
+++ b/include/xen/grant_table.h
@@ -183,6 +183,31 @@ unsigned int gnttab_max_grant_frames(void);
 
 #define gnttab_map_vaddr(map) ((void *)(map.host_virt_addr))
 
+/* Retry a grant map/copy operation when the hypervisor returns GNTST_eagain.
+ * This is typically due to paged out target frames.
+ * Generic entry-point, use macro decorators below for specific grant
+ * operations.
+ * Will retry for 1, 2, ... 255 ms, i.e. 256 times during 32 seconds.
+ * Return value in *status guaranteed to no longer be GNTST_eagain. */
+void gnttab_retry_eagain_gop(unsigned int cmd, void *gop, int16_t *status,
+                             const char *func);
+
+#define gnttab_retry_eagain_map(_gop)                       \
+    gnttab_retry_eagain_gop(GNTTABOP_map_grant_ref, (_gop), \
+                            &(_gop)->status, __func__)
+
+#define gnttab_retry_eagain_copy(_gop)                  \
+    gnttab_retry_eagain_gop(GNTTABOP_copy, (_gop),      \
+                            &(_gop)->status, __func__)
+
+#define gnttab_map_grant_no_eagain(_gop)                                    \
+do {                                                                        \
+    if ( HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, (_gop), 1))      \
+        BUG();                                                              \
+    if ((_gop)->status == GNTST_eagain)                                     \
+        gnttab_retry_eagain_map((_gop));                                    \
+} while(0)
+
 int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
 		    struct gnttab_map_grant_ref *kmap_ops,
 		    struct page **pages, unsigned int count);
diff --git a/include/xen/interface/grant_table.h b/include/xen/interface/grant_table.h
index 7da811b..66cb734 100644
--- a/include/xen/interface/grant_table.h
+++ b/include/xen/interface/grant_table.h
@@ -520,6 +520,7 @@ DEFINE_GUEST_HANDLE_STRUCT(gnttab_get_version);
 #define GNTST_permission_denied (-8) /* Not enough privilege for operation.  */
 #define GNTST_bad_page         (-9) /* Specified page was invalid for op.    */
 #define GNTST_bad_copy_arg    (-10) /* copy arguments cross page boundary */
+#define GNTST_eagain          (-12) /* Retry.                                */
 
 #define GNTTABOP_error_msgs {                   \
     "okay",                                     \
@@ -533,6 +534,7 @@ DEFINE_GUEST_HANDLE_STRUCT(gnttab_get_version);
     "permission denied",                        \
     "bad page",                                 \
-    "copy arguments cross page boundary"        \
+    "copy arguments cross page boundary",       \
+    "retry"                                     \
 }
 
 #endif /* __XEN_PUBLIC_GRANT_TABLE_H__ */
-- 
1.7.5.4



 
 #define gnttab_map_vaddr(map) ((void *)(map.host_virt_addr))
 
+/* Retry a grant map/copy operation when the hypervisor returns GNTST_eagain.
+ * This is typically due to paged-out target frames.
+ * Generic entry point; use the macro decorators below for specific grant
+ * operations.
+ * Will retry with delays of 1, 2, ... 255 ms, i.e. up to 255 attempts over
+ * roughly 32 seconds. On return, *status is guaranteed not to be GNTST_eagain. */
+void gnttab_retry_eagain_gop(unsigned int cmd, void *gop, int16_t *status,
+                             const char *func);
+
+#define gnttab_retry_eagain_map(_gop)                       \
+    gnttab_retry_eagain_gop(GNTTABOP_map_grant_ref, (_gop), \
+                            &(_gop)->status, __func__)
+
+#define gnttab_retry_eagain_copy(_gop)                  \
+    gnttab_retry_eagain_gop(GNTTABOP_copy, (_gop),      \
+                            &(_gop)->status, __func__)
+
+#define gnttab_map_grant_no_eagain(_gop)                                    \
+do {                                                                        \
+    if (HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, (_gop), 1))       \
+        BUG();                                                              \
+    if ((_gop)->status == GNTST_eagain)                                     \
+        gnttab_retry_eagain_map((_gop));                                    \
+} while (0)
+
 int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
 		    struct gnttab_map_grant_ref *kmap_ops,
 		    struct page **pages, unsigned int count);
diff --git a/include/xen/interface/grant_table.h b/include/xen/interface/grant_table.h
index 7da811b..66cb734 100644
--- a/include/xen/interface/grant_table.h
+++ b/include/xen/interface/grant_table.h
@@ -520,6 +520,7 @@ DEFINE_GUEST_HANDLE_STRUCT(gnttab_get_version);
 #define GNTST_permission_denied (-8) /* Not enough privilege for operation.  */
 #define GNTST_bad_page         (-9) /* Specified page was invalid for op.    */
 #define GNTST_bad_copy_arg    (-10) /* copy arguments cross page boundary */
+#define GNTST_eagain          (-12) /* Retry.                                */
 
 #define GNTTABOP_error_msgs {                   \
     "okay",                                     \
@@ -533,6 +534,7 @@ DEFINE_GUEST_HANDLE_STRUCT(gnttab_get_version);
     "permission denied",                        \
     "bad page",                                 \
-    "copy arguments cross page boundary"        \
+    "copy arguments cross page boundary",       \
+    "retry"                                     \
 }
 
 #endif /* __XEN_PUBLIC_GRANT_TABLE_H__ */
-- 
1.7.5.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 27 17:14:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Aug 2012 17:14:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T62tD-00077B-0v; Mon, 27 Aug 2012 17:14:15 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T62tB-000776-CW
	for xen-devel@lists.xen.org; Mon, 27 Aug 2012 17:14:13 +0000
Received: from [85.158.143.99:13467] by server-1.bemta-4.messagelabs.com id
	29/1A-12504-4EAAB305; Mon, 27 Aug 2012 17:14:12 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-7.tower-216.messagelabs.com!1346087652!24771282!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA2MDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14506 invoked from network); 27 Aug 2012 17:14:12 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-7.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Aug 2012 17:14:12 -0000
X-IronPort-AV: E=Sophos;i="4.80,321,1344211200"; d="scan'208";a="14214702"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	27 Aug 2012 17:14:11 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Mon, 27 Aug 2012 18:14:11 +0100
Date: Mon, 27 Aug 2012 18:13:45 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: "Xu, Dongxiao" <dongxiao.xu@intel.com>
In-Reply-To: <40776A41FC278F40B59438AD47D147A90FE1F771@SHSMSX102.ccr.corp.intel.com>
Message-ID: <alpine.DEB.2.02.1208271812020.15568@kaball.uk.xensource.com>
References: <40776A41FC278F40B59438AD47D147A90FE17D93@SHSMSX102.ccr.corp.intel.com>
	<alpine.DEB.2.02.1208201621330.15568@kaball.uk.xensource.com>
	<40776A41FC278F40B59438AD47D147A90FE1D78C@SHSMSX102.ccr.corp.intel.com>
	<alpine.DEB.2.02.1208241220400.15568@kaball.uk.xensource.com>
	<40776A41FC278F40B59438AD47D147A90FE1F771@SHSMSX102.ccr.corp.intel.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"Keir \(Xen.org\)" <keir@xen.org>, Jan Beulich <JBeulich@suse.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH v2] QEMU/helper2.c: Fix multiply issue for
 int and uint types
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 27 Aug 2012, Xu, Dongxiao wrote:
> > -----Original Message-----
> > From: Stefano Stabellini [mailto:stefano.stabellini@eu.citrix.com]
> > Sent: Friday, August 24, 2012 7:25 PM
> > To: Xu, Dongxiao
> > Cc: Stefano Stabellini; Keir (Xen.org); Jan Beulich; Ian Jackson;
> > xen-devel@lists.xen.org
> > Subject: RE: [Xen-devel] [PATCH v2] QEMU/helper2.c: Fix multiply issue for int
> > and uint types
> > 
> > 
> > I can only speak for qemu-upstream-unstable, but I'll certainly backport the
> > int/uint fix as soon as it gets accepted in QEMU upstream.
> > 
> > Regarding the other patches, if any of them are for qemu-upstream-unstable, I
> > am going to backport them only if they are bugfixes.
> 
> It seems that the int/uint patch has already been merged by Anthony L; see commit b100fcfe4966aa41d4d6908d0c4c510bcf8f82dd.

Thanks for letting me know.
I am currently away at a conference, but I'll try to get the backport done by
the end of the week nonetheless.


> Sorry, I have been away from the Xen list for a long time and am not familiar with the current rules.
> For QEMU bug fixes, is the rule that the patch developer needs to fix QEMU upstream, and Stefano is in charge of backporting it? Or does the patch developer need to fix both?
 
That's right: the fix needs to go upstream first, then I'll backport it.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 27 17:55:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Aug 2012 17:55:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T63Wm-0007MC-8t; Mon, 27 Aug 2012 17:55:08 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1T63Wl-0007M7-RX
	for xen-devel@lists.xen.org; Mon, 27 Aug 2012 17:55:07 +0000
Received: from [85.158.138.51:5439] by server-1.bemta-3.messagelabs.com id
	A1/7A-09327-A74BB305; Mon, 27 Aug 2012 17:55:06 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-11.tower-174.messagelabs.com!1346090106!28156419!1
X-Originating-IP: [81.169.146.160]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MCA9PiA0MDkxMzY=\n,sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MCA9PiA0MDkxMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30022 invoked from network); 27 Aug 2012 17:55:06 -0000
Received: from mo-p00-ob.rzone.de (HELO mo-p00-ob.rzone.de) (81.169.146.160)
	by server-11.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 27 Aug 2012 17:55:06 -0000
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+zrwiavkK6tmQaLfmwtM48/lq287uF5o=
X-RZG-CLASS-ID: mo00
Received: from probook.site
	(dslb-084-057-072-055.pools.arcor-ip.net [84.57.72.55])
	by smtp.strato.de (jored mo32) (RZmta 30.12 DYNA|AUTH)
	with (DHE-RSA-AES256-SHA encrypted) ESMTPA id V0289eo7RHAIq3
	for <xen-devel@lists.xen.org>; Mon, 27 Aug 2012 19:55:05 +0200 (CEST)
Received: by probook.site (Postfix, from userid 1000)
	id 2933D183AD; Mon, 27 Aug 2012 19:55:05 +0200 (CEST)
Date: Mon, 27 Aug 2012 19:55:05 +0200
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xen.org
Message-ID: <20120827175504.GA13051@aepfle.de>
MIME-Version: 1.0
Content-Disposition: inline
User-Agent: Mutt/1.5.21.rev5555 (2012-07-20)
Subject: [Xen-devel] fixed location of share info page in HVM guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Recently I tried to move the shared_info page in the pvops kernel during
shutdown; see the "xen PVonHVM: move shared_info to MMIO before kexec"
patch. But this change was reverted because it caused reboot failures:
the actual moving of the shared page is fragile.

So my question is what can be done about it?
Right now the shared_info page is in the middle of RAM and will be
overwritten during kexec because its area is not reserved.
Can the toolstack help to provide a reserved page which can then be used
by the kernel?

Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 27 18:11:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Aug 2012 18:11:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T63lx-0007oN-BU; Mon, 27 Aug 2012 18:10:49 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1T63lv-0007oF-5r
	for xen-devel@lists.xen.org; Mon, 27 Aug 2012 18:10:47 +0000
Received: from [85.158.143.99:43920] by server-2.bemta-4.messagelabs.com id
	33/65-21239-628BB305; Mon, 27 Aug 2012 18:10:46 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-16.tower-216.messagelabs.com!1346091044!16965785!1
X-Originating-IP: [209.85.210.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19137 invoked from network); 27 Aug 2012 18:10:46 -0000
Received: from mail-pz0-f45.google.com (HELO mail-pz0-f45.google.com)
	(209.85.210.45)
	by server-16.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Aug 2012 18:10:46 -0000
Received: by dadn15 with SMTP id n15so2537242dad.32
	for <xen-devel@lists.xen.org>; Mon, 27 Aug 2012 11:10:44 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=L5WsNlcVDtM/3RVYdfV8AIPk8Xd4a8yAKjTcSPfAp1s=;
	b=U72FyTRy2kfXkDObnTNEqIO0wc407veLwYjIzKQUIzCkXE1lYBKyZzj2+wKsVz89G8
	CVdtbPoL/pq0ZfAF7HYqACfHWRhQtfCFVht/CfYmljmyyODreqnth5+pzVTEMpVmrtGx
	mZyhgcOzJxXBkm9U27y7FVVRDh9+7FQ8B0jdSNNdrPvkFzF40khEFXRDQkA/zf3GEurg
	dtpPonDJWdnnfHl+J8Ue9zTPTqryw+1SpRcgs5cLg6+lVf0SSyMweJCQSlTJ4AkCIfmq
	Fg/SUreLKvEfxhfhOkZrPVzYsZS15IRJQdgMSnAsVT9VHM7+vcFW30jV0R9P95D0iSTA
	1rCg==
Received: by 10.68.136.137 with SMTP id qa9mr36021257pbb.140.1346091043925;
	Mon, 27 Aug 2012 11:10:43 -0700 (PDT)
Received: from [10.183.91.105] ([38.96.16.254])
	by mx.google.com with ESMTPS id jz4sm15180789pbc.17.2012.08.27.11.10.39
	(version=SSLv3 cipher=OTHER); Mon, 27 Aug 2012 11:10:42 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.32.0.111121
Date: Mon, 27 Aug 2012 19:10:34 +0100
From: Keir Fraser <keir.xen@gmail.com>
To: Olaf Hering <olaf@aepfle.de>,
	<xen-devel@lists.xen.org>
Message-ID: <CC6176AA.3CEB4%keir.xen@gmail.com>
Thread-Topic: [Xen-devel] fixed location of share info page in HVM guests
Thread-Index: Ac2Ef0BpIpoLBfjdc0qwua02KzfjWQ==
In-Reply-To: <20120827175504.GA13051@aepfle.de>
Mime-version: 1.0
Subject: Re: [Xen-devel] fixed location of share info page in HVM guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 27/08/2012 18:55, "Olaf Hering" <olaf@aepfle.de> wrote:

> 
> Recently I tried to move the shared_info page in the pvops kernel during
> shutdown; see the "xen PVonHVM: move shared_info to MMIO before kexec"
> patch. But this change was reverted because it caused reboot failures:
> the actual moving of the shared page is fragile.

How was it fragile? Moving it into MMIO should just work?

 -- Keir

> So my question is what can be done about it?
> Right now the shared_info page is in the middle of RAM and will be
> overwritten during kexec because its area is not reserved.
> Can the toolstack help to provide a reserved page which can then be used
> by the kernel?
> 
> Olaf
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 27 19:07:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Aug 2012 19:07:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T64eT-0008TR-Id; Mon, 27 Aug 2012 19:07:09 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jinsong.liu@intel.com>) id 1T64eS-0008TL-8p
	for xen-devel@lists.xensource.com; Mon, 27 Aug 2012 19:07:08 +0000
Received: from [85.158.143.99:60915] by server-2.bemta-4.messagelabs.com id
	BD/72-21239-B55CB305; Mon, 27 Aug 2012 19:07:07 +0000
X-Env-Sender: jinsong.liu@intel.com
X-Msg-Ref: server-8.tower-216.messagelabs.com!1346094426!18008584!1
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDM0MTY3Ng==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12349 invoked from network); 27 Aug 2012 19:07:07 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-8.tower-216.messagelabs.com with SMTP;
	27 Aug 2012 19:07:07 -0000
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
	by fmsmga102.fm.intel.com with ESMTP; 27 Aug 2012 12:07:05 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.80,322,1344236400"; d="scan'208";a="214537047"
Received: from fmsmsx105.amr.corp.intel.com ([10.19.9.36])
	by fmsmga002.fm.intel.com with ESMTP; 27 Aug 2012 12:06:55 -0700
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	FMSMSX105.amr.corp.intel.com (10.19.9.36) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Mon, 27 Aug 2012 12:06:55 -0700
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.239]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.175]) with mapi id
	14.01.0355.002; Tue, 28 Aug 2012 03:06:53 +0800
From: "Liu, Jinsong" <jinsong.liu@intel.com>
To: Jan Beulich <JBeulich@suse.com>, "xen-devel@lists.xensource.com"
	<xen-devel@lists.xensource.com>
Thread-Topic: vMCE patches rebase
Thread-Index: Ac2Ehx33MFc/5LR8TPK0QvHZaV42Zw==
Date: Mon, 27 Aug 2012 19:06:52 +0000
Message-ID: <DE8DF0795D48FD4CA783C40EC82923353192A5@SHSMSX101.ccr.corp.intel.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] vMCE patches rebase
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jan,

Is Xen 4.2 ready now? If so, I would like to rebase the vMCE patches (which I sent several weeks ago) and update them according to your comments.

Thanks,
Jinsong
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 27 21:32:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Aug 2012 21:32:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T66vA-0001Xt-8v; Mon, 27 Aug 2012 21:32:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1T66v8-0001Xo-L7
	for xen-devel@lists.xen.org; Mon, 27 Aug 2012 21:32:30 +0000
Received: from [85.158.143.99:64653] by server-1.bemta-4.messagelabs.com id
	05/DC-12504-D67EB305; Mon, 27 Aug 2012 21:32:29 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-2.tower-216.messagelabs.com!1346103147!22871241!1
X-Originating-IP: [81.169.146.161]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MSA9PiA0MTM2NTQ=\n,sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MSA9PiA0MTM2NTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12776 invoked from network); 27 Aug 2012 21:32:27 -0000
Received: from mo-p00-ob.rzone.de (HELO mo-p00-ob.rzone.de) (81.169.146.161)
	by server-2.tower-216.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 27 Aug 2012 21:32:27 -0000
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+zrwiavkK6tmQaLfmwtM48/lq287uF5o=
X-RZG-CLASS-ID: mo00
Received: from probook.site
	(dslb-084-057-072-055.pools.arcor-ip.net [84.57.72.55])
	by smtp.strato.de (jorabe mo39) (RZmta 30.12 DYNA|AUTH)
	with (DHE-RSA-AES256-SHA encrypted) ESMTPA id q012c4o7RJXioz ;
	Mon, 27 Aug 2012 23:32:27 +0200 (CEST)
Received: by probook.site (Postfix, from userid 1000)
	id B3645183AD; Mon, 27 Aug 2012 23:32:26 +0200 (CEST)
Date: Mon, 27 Aug 2012 23:32:26 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Keir Fraser <keir.xen@gmail.com>
Message-ID: <20120827213226.GA17915@aepfle.de>
References: <20120827175504.GA13051@aepfle.de>
	<CC6176AA.3CEB4%keir.xen@gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CC6176AA.3CEB4%keir.xen@gmail.com>
User-Agent: Mutt/1.5.21.rev5555 (2012-07-20)
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] fixed location of share info page in HVM guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Aug 27, Keir Fraser wrote:

> On 27/08/2012 18:55, "Olaf Hering" <olaf@aepfle.de> wrote:
> 
> > 
> > Recently I tried to move the shared_info page in the pvops kernel during
> > shutdown, see "xen PVonHVM: move shared_info to MMIO before kexec"
> > patch. But this change was reverted: it caused reboot failures because
> > the actual moving of the shared page is fragile.
> 
> How was it fragile? Moving it into MMIO should just work?

The moving itself is not the issue; the problem is possible access to the
page during the move. It's not atomic, nor can it easily be made atomic.

Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 27 21:52:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Aug 2012 21:52:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T67E8-0001j8-61; Mon, 27 Aug 2012 21:52:08 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1T67E6-0001j3-4C
	for xen-devel@lists.xen.org; Mon, 27 Aug 2012 21:52:06 +0000
Received: from [85.158.138.51:16165] by server-2.bemta-3.messagelabs.com id
	1D/8E-09157-50CEB305; Mon, 27 Aug 2012 21:52:05 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-14.tower-174.messagelabs.com!1346104321!21690125!1
X-Originating-IP: [209.85.160.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11027 invoked from network); 27 Aug 2012 21:52:03 -0000
Received: from mail-pb0-f45.google.com (HELO mail-pb0-f45.google.com)
	(209.85.160.45)
	by server-14.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Aug 2012 21:52:03 -0000
Received: by pbbjt11 with SMTP id jt11so7984453pbb.32
	for <xen-devel@lists.xen.org>; Mon, 27 Aug 2012 14:52:01 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:cc:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=EB99fFq/F7NxxT62V8YVWPwWUAj662dT4qRcYBivarY=;
	b=Qqx8x2hoIAW3k0YzGXNJwnEaFo7kc2rXm7zs952S3gCjsjOwsxxLcwgA35WQKBcYeo
	2VVbijNTBpJHxnu5+d9Os3FO2eFmYnz1uuPG2GYMe4pbi+Vl5Z3Ngi5K/UDl21vakPq9
	WdAwOVn+fMF7rqX89/Nm6XoqauKWxErowNJw1f6DIOoMZYA/RYqE1CRFYoBByXXBxW9L
	H51qYHN/srT8LSZ8ho8ct8ipj02pfcdpYx5HNScMxemKpnhC5rP2K3+kwqKm/jQyszAB
	K5az87PYXzkbaRApzrxDzUAzzXph0wtLjoUB4k9vw3LYhF3VlJU+DEvm8quXOrCbHQfo
	Tx8Q==
Received: by 10.66.83.234 with SMTP id t10mr33039245pay.39.1346104321049;
	Mon, 27 Aug 2012 14:52:01 -0700 (PDT)
Received: from [10.183.91.105] ([38.96.16.254]) by mx.google.com with ESMTPS id
	pj10sm15485175pbb.46.2012.08.27.14.51.58
	(version=SSLv3 cipher=OTHER); Mon, 27 Aug 2012 14:51:59 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.32.0.111121
Date: Mon, 27 Aug 2012 22:51:52 +0100
From: Keir Fraser <keir.xen@gmail.com>
To: Olaf Hering <olaf@aepfle.de>
Message-ID: <CC61AA88.3CEFA%keir.xen@gmail.com>
Thread-Topic: [Xen-devel] fixed location of share info page in HVM guests
Thread-Index: Ac2Eniq3esKJiQo82Uyv3a32GOexXA==
In-Reply-To: <20120827213226.GA17915@aepfle.de>
Mime-version: 1.0
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] fixed location of share info page in HVM guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 27/08/2012 22:32, "Olaf Hering" <olaf@aepfle.de> wrote:

>>> Recently I tried to move the shared_info page in the pvops kernel during
>>> shutdown, see "xen PVonHVM: move shared_info to MMIO before kexec"
>>> patch. But this change was reverted: it caused reboot failures because
>>> the actual moving of the shared page is fragile.
>> 
>> How was it fragile? Moving it into MMIO should just work?
> 
> The moving itself is not the issue; the problem is possible access to the
> page during the move. It's not atomic, nor can it easily be made atomic.

Why not map it into MMIO in the first place, rather than into the middle of
RAM? Do that early during boot, and early during resume from
save/restore/migrate (i.e., the places where you presumably already map the
shared_info page, but into the middle of RAM)?

 -- Keir



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Aug 27 21:57:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Aug 2012 21:57:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T67IY-0001sf-2u; Mon, 27 Aug 2012 21:56:42 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1T67IW-0001sa-T6
	for xen-devel@lists.xen.org; Mon, 27 Aug 2012 21:56:41 +0000
Received: from [85.158.139.83:16808] by server-4.bemta-5.messagelabs.com id
	0D/C6-12386-81DEB305; Mon, 27 Aug 2012 21:56:40 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-14.tower-182.messagelabs.com!1346104597!23827557!1
X-Originating-IP: [81.169.146.160]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MCA9PiA0MDkxMzY=\n,sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MCA9PiA0MDkxMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30856 invoked from network); 27 Aug 2012 21:56:38 -0000
Received: from mo-p00-ob.rzone.de (HELO mo-p00-ob.rzone.de) (81.169.146.160)
	by server-14.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 27 Aug 2012 21:56:38 -0000
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+zrwiavkK6tmQaLfmwtM48/lq287uF5o=
X-RZG-CLASS-ID: mo00
Received: from probook.site
	(dslb-084-057-072-055.pools.arcor-ip.net [84.57.72.55])
	by smtp.strato.de (jorabe mo18) (RZmta 30.12 DYNA|AUTH)
	with (DHE-RSA-AES256-SHA encrypted) ESMTPA id 500c17o7RL8IGK ;
	Mon, 27 Aug 2012 23:56:37 +0200 (CEST)
Received: by probook.site (Postfix, from userid 1000)
	id 8E20A183AD; Mon, 27 Aug 2012 23:56:36 +0200 (CEST)
Date: Mon, 27 Aug 2012 23:56:36 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Keir Fraser <keir.xen@gmail.com>
Message-ID: <20120827215636.GA22031@aepfle.de>
References: <20120827213226.GA17915@aepfle.de>
	<CC61AA88.3CEFA%keir.xen@gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CC61AA88.3CEFA%keir.xen@gmail.com>
User-Agent: Mutt/1.5.21.rev5555 (2012-07-20)
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] fixed location of share info page in HVM guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Aug 27, Keir Fraser wrote:

> On 27/08/2012 22:32, "Olaf Hering" <olaf@aepfle.de> wrote:
> 
> >>> Recently I tried to move the shared_info page in the pvops kernel during
> >>> shutdown, see "xen PVonHVM: move shared_info to MMIO before kexec"
> >>> patch. But this change was reverted: it caused reboot failures because
> >>> the actual moving of the shared page is fragile.
> >> 
> >> How was it fragile? Moving it into MMIO should just work?
> > 
> > The moving itself is not the issue; the problem is possible access to the
> > page during the move. It's not atomic, nor can it easily be made atomic.
> 
> Why not map it into MMIO in the first place, rather than into the middle of
> RAM? Do that early during boot, and early during resume from
> save/restore/migrate (i.e., places you presumably already map the
> shared_info page, but into the middle of RAM)?

I think there is no easy way to know where the PCI device is located at
the time the shared_info page is configured during early kernel setup.

Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 28 00:14:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Aug 2012 00:14:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T69Rb-0002pK-Q3; Tue, 28 Aug 2012 00:14:11 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1T69Ra-0002pC-CC
	for xen-devel@lists.xen.org; Tue, 28 Aug 2012 00:14:10 +0000
Received: from [85.158.138.51:56373] by server-1.bemta-3.messagelabs.com id
	76/CF-09327-15D0C305; Tue, 28 Aug 2012 00:14:09 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-6.tower-174.messagelabs.com!1346112846!20274879!1
X-Originating-IP: [209.85.160.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16779 invoked from network); 28 Aug 2012 00:14:08 -0000
Received: from mail-pb0-f45.google.com (HELO mail-pb0-f45.google.com)
	(209.85.160.45)
	by server-6.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Aug 2012 00:14:08 -0000
Received: by pbbjt11 with SMTP id jt11so8157603pbb.32
	for <xen-devel@lists.xen.org>; Mon, 27 Aug 2012 17:14:06 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:cc:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=n9CqZZ8EdrFrscV+tyd09mFwL9jM1cApFD6yEZ4iieg=;
	b=VkqgG5lr1b7B0mWv3AOAjANKhOaPeykyF6waWQ38+TSYBNz3Gh+ahm/qwcLa6wPkC8
	xldHgjMP6k5ZrWOKZ8VHV8Xu0eNT93fROEK/X8uedTQk2YH7czFkKuFWGRmSoy14VOx6
	y8Q1yguX5WcHibrzlaFxX8q5wR4MA8BKGXPnaCR9TRdgtzzNjKaxytUQo4bxz0gAR8ZX
	6+rO277xN+eSic+nhZ/Bpta/EXnwM59Uc2Px8maXSFURSKLWw3lfKpYvl8YSVvgMnEvX
	Jde/OX2V7O4gFF/nKBkrK7F+NSo+Fq+YOXP80aVmmLzSN0K9KgbvZBxN8ODZwBax9VQv
	FrWg==
Received: by 10.68.136.137 with SMTP id qa9mr37925067pbb.140.1346112846258;
	Mon, 27 Aug 2012 17:14:06 -0700 (PDT)
Received: from [10.183.91.105] ([38.96.16.254])
	by mx.google.com with ESMTPS id oc2sm15700457pbb.69.2012.08.27.17.14.02
	(version=SSLv3 cipher=OTHER); Mon, 27 Aug 2012 17:14:05 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.32.0.111121
Date: Tue, 28 Aug 2012 01:13:57 +0100
From: Keir Fraser <keir.xen@gmail.com>
To: Olaf Hering <olaf@aepfle.de>
Message-ID: <CC61CBD5.3CF38%keir.xen@gmail.com>
Thread-Topic: [Xen-devel] fixed location of share info page in HVM guests
Thread-Index: Ac2EsgQDsUiFmxaJIkyqtQkd3uEvow==
In-Reply-To: <20120827215636.GA22031@aepfle.de>
Mime-version: 1.0
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] fixed location of share info page in HVM guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 27/08/2012 22:56, "Olaf Hering" <olaf@aepfle.de> wrote:

> On Mon, Aug 27, Keir Fraser wrote:
> 
>> On 27/08/2012 22:32, "Olaf Hering" <olaf@aepfle.de> wrote:
>> 
>>>>> Recently I tried to move the shared_info page in the pvops kernel during
>>>>> shutdown, see "xen PVonHVM: move shared_info to MMIO before kexec"
>>>>> patch. But this change was reverted: it caused reboot failures
>>>>> because the actual moving of the shared page is fragile.
>>>> 
>>>> How was it fragile? Moving it into MMIO should just work?
>>> 
>>> The moving itself is not the issue, but the possible access to the page
>>> during the move. It's not atomic, nor can it easily be made atomic.
>> 
>> Why not map it into MMIO in the first place, rather than into the middle of
>> RAM? Do that early during boot, and early during resume from
>> save/restore/migrate (i.e., places you presumably already map the
>> shared_info page, but into the middle of RAM)?
> 
> I think there is no easy way to know where the PCI device is located at
> the time the shared info page is configured in the early kernel setup.

How about we guarantee that guests can use the 1MB address range
0xFED00000-0xFEDFFFFF for whatever mappings they like, guaranteed unused (or
at least, mapped with nothing useful) when the guest kernel starts?

This is already, and has always been, the case. But we can etch it in stone
quite easily. :)

Then you can stick shared_info at fed00000 always, plus there's space there
for plenty of other stuff that might one day be useful to map very early.

 -- Keir

> Olaf



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 28 06:00:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Aug 2012 06:00:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6Epp-00088e-TW; Tue, 28 Aug 2012 05:59:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T6Epo-00088Z-Ul
	for xen-devel@lists.xensource.com; Tue, 28 Aug 2012 05:59:33 +0000
Received: from [85.158.139.83:48232] by server-2.bemta-5.messagelabs.com id
	96/85-11456-44E5C305; Tue, 28 Aug 2012 05:59:32 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-4.tower-182.messagelabs.com!1346133571!24815938!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA1ODA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28447 invoked from network); 28 Aug 2012 05:59:31 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-4.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Aug 2012 05:59:31 -0000
X-IronPort-AV: E=Sophos;i="4.80,325,1344211200"; d="scan'208";a="14220489"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	28 Aug 2012 05:59:30 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Tue, 28 Aug 2012 06:59:30 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1T6Epm-0004PG-A2;
	Tue, 28 Aug 2012 05:59:30 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1T6Epl-0007pR-Vs;
	Tue, 28 Aug 2012 06:59:30 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13633-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 28 Aug 2012 06:59:30 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13633: tolerable FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13633 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13633/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13631
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13631
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13631
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13631

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  1126b3079bef
baseline version:
 xen                  1126b3079bef

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 28 06:11:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Aug 2012 06:11:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6F0k-0008Mj-49; Tue, 28 Aug 2012 06:10:50 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <robherring2@gmail.com>) id 1T68Lh-0002FE-Jk
	for xen-devel@lists.xensource.com; Mon, 27 Aug 2012 23:04:01 +0000
Received: from [85.158.138.51:36590] by server-3.bemta-3.messagelabs.com id
	23/CD-13809-FDCFB305; Mon, 27 Aug 2012 23:03:59 +0000
X-Env-Sender: robherring2@gmail.com
X-Msg-Ref: server-6.tower-174.messagelabs.com!1346108637!20268381!1
X-Originating-IP: [209.85.160.43]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22333 invoked from network); 27 Aug 2012 23:03:59 -0000
Received: from mail-pb0-f43.google.com (HELO mail-pb0-f43.google.com)
	(209.85.160.43)
	by server-6.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Aug 2012 23:03:59 -0000
Received: by pbbrq2 with SMTP id rq2so9449397pbb.30
	for <xen-devel@lists.xensource.com>;
	Mon, 27 Aug 2012 16:03:57 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=message-id:date:from:user-agent:mime-version:to:cc:subject
	:references:in-reply-to:content-type:content-transfer-encoding;
	bh=ww27WSKnfmewKPaGi5x81QxnGTRirHL+IYGd4oB7J70=;
	b=Mz5et3jPWrFtVP+5OzsLLLZYXpwuE/IXe6rhRSsL9X3YUCIhNwXDq9PO3GkHcdB2TK
	5UFRuvo9bYnXQAKOlo1s41Pi8uHRxGluovIBg7/XAVG2LfB9LhB+t74lNyNwohQ0q2d3
	j/0CCRNZM/1CVj9VJNdJlgtP5BAk+fRjSAx2Uz3Rj3cWEGpNgM9n5X4lUsPKGJdUHP4p
	T7ytSg9enDLjdg3d7RFFPazfNF92NPtljzHQFMIksWaAWaTa0tb5QRMmYe/8UxgB0/Jt
	VFEPs9HMOaZSKMelgmFkqzGmGml74braRoHKex2uFGl3ctQJ2OLZ6PoZ5ycDUVU8U7ia
	l4Vg==
Received: by 10.68.224.161 with SMTP id rd1mr38381644pbc.133.1346108636898;
	Mon, 27 Aug 2012 16:03:56 -0700 (PDT)
Received: from [10.11.12.53] ([38.96.16.75])
	by mx.google.com with ESMTPS id oj8sm15593209pbb.54.2012.08.27.16.03.53
	(version=SSLv3 cipher=OTHER); Mon, 27 Aug 2012 16:03:55 -0700 (PDT)
Message-ID: <503BFCD8.5090804@gmail.com>
Date: Mon, 27 Aug 2012 18:03:52 -0500
From: Rob Herring <robherring2@gmail.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120714 Thunderbird/14.0
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1208161618550.4850@kaball.uk.xensource.com>
	<1345131377-14713-6-git-send-email-stefano.stabellini@eu.citrix.com>
In-Reply-To: <1345131377-14713-6-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailman-Approved-At: Tue, 28 Aug 2012 06:10:48 +0000
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, konrad.wilk@oracle.com,
	catalin.marinas@arm.com, devicetree-discuss@lists.ozlabs.org,
	tim@xen.org, linux-kernel@vger.kernel.org,
	David Vrabel <david.vrabel@citrix.com>,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH v3 06/25] docs: Xen ARM DT bindings
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/16/2012 10:35 AM, Stefano Stabellini wrote:
> Add a doc to describe the Xen ARM device tree bindings
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> CC: devicetree-discuss@lists.ozlabs.org
> CC: David Vrabel <david.vrabel@citrix.com>
> ---
>  Documentation/devicetree/bindings/arm/xen.txt |   22 ++++++++++++++++++++++
>  1 files changed, 22 insertions(+), 0 deletions(-)
>  create mode 100644 Documentation/devicetree/bindings/arm/xen.txt
> 
> diff --git a/Documentation/devicetree/bindings/arm/xen.txt b/Documentation/devicetree/bindings/arm/xen.txt
> new file mode 100644
> index 0000000..ec6d884
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/arm/xen.txt
> @@ -0,0 +1,22 @@
> +* Xen hypervisor device tree bindings
> +
> +Xen ARM virtual platforms shall have the following properties:
> +
> +- compatible:
> +	compatible = "xen,xen", "xen,xen-<version>";
> +  where <version> is the version of the Xen ABI of the platform.
> +
> +- reg: specifies the base physical address and size of a region in
> +  memory where the grant table should be mapped to, using a
> +  HYPERVISOR_memory_op hypercall.
> +
> +- interrupts: the interrupt used by Xen to inject event notifications.

You should look at ePAPR 1.1, which defines hypervisor-related bindings.
While it is a PPC doc, we should reuse or extend whatever makes sense.

See section 7.5:

https://www.power.org/resources/downloads/Power_ePAPR_APPROVED_v1.1.pdf

Rob

> +
> +
> +Example:
> +
> +hypervisor {
> +	compatible = "xen,xen", "xen,xen-4.3";
> +	reg = <0xb0000000 0x20000>;
> +	interrupts = <1 15 0xf08>;
> +};
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120714 Thunderbird/14.0
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1208161618550.4850@kaball.uk.xensource.com>
	<1345131377-14713-6-git-send-email-stefano.stabellini@eu.citrix.com>
In-Reply-To: <1345131377-14713-6-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailman-Approved-At: Tue, 28 Aug 2012 06:10:48 +0000
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Ian.Campbell@citrix.com, arnd@arndb.de, konrad.wilk@oracle.com,
	catalin.marinas@arm.com, devicetree-discuss@lists.ozlabs.org,
	tim@xen.org, linux-kernel@vger.kernel.org,
	David Vrabel <david.vrabel@citrix.com>,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH v3 06/25] docs: Xen ARM DT bindings
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/16/2012 10:35 AM, Stefano Stabellini wrote:
> Add a doc to describe the Xen ARM device tree bindings
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> CC: devicetree-discuss@lists.ozlabs.org
> CC: David Vrabel <david.vrabel@citrix.com>
> ---
>  Documentation/devicetree/bindings/arm/xen.txt |   22 ++++++++++++++++++++++
>  1 files changed, 22 insertions(+), 0 deletions(-)
>  create mode 100644 Documentation/devicetree/bindings/arm/xen.txt
> 
> diff --git a/Documentation/devicetree/bindings/arm/xen.txt b/Documentation/devicetree/bindings/arm/xen.txt
> new file mode 100644
> index 0000000..ec6d884
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/arm/xen.txt
> @@ -0,0 +1,22 @@
> +* Xen hypervisor device tree bindings
> +
> +Xen ARM virtual platforms shall have the following properties:
> +
> +- compatible:
> +	compatible = "xen,xen", "xen,xen-<version>";
> +  where <version> is the version of the Xen ABI of the platform.
> +
> +- reg: specifies the base physical address and size of a region in
> +  memory where the grant table should be mapped, using a
> +  HYPERVISOR_memory_op hypercall.
> +
> +- interrupts: the interrupt used by Xen to inject event notifications.

You should look at ePAPR 1.1 which defines hypervisor related bindings.
While it is a PPC doc, we should reuse or extend what makes sense.

See section 7.5:

https://www.power.org/resources/downloads/Power_ePAPR_APPROVED_v1.1.pdf

Rob

> +
> +
> +Example:
> +
> +hypervisor {
> +	compatible = "xen,xen", "xen,xen-4.3";
> +	reg = <0xb0000000 0x20000>;
> +	interrupts = <1 15 0xf08>;
> +};
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 28 08:23:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Aug 2012 08:23:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6H4p-00010b-9W; Tue, 28 Aug 2012 08:23:11 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1T6H4n-00010W-Pi
	for xen-devel@lists.xen.org; Tue, 28 Aug 2012 08:23:09 +0000
Received: from [85.158.143.35:62028] by server-3.bemta-4.messagelabs.com id
	6A/54-08232-DEF7C305; Tue, 28 Aug 2012 08:23:09 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-13.tower-21.messagelabs.com!1346142183!15157024!1
X-Originating-IP: [81.169.146.161]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MSA9PiA0MTQ4NTE=\n,sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MSA9PiA0MTQ4NTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2130 invoked from network); 28 Aug 2012 08:23:06 -0000
Received: from mo-p00-ob.rzone.de (HELO mo-p00-ob.rzone.de) (81.169.146.161)
	by server-13.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 28 Aug 2012 08:23:06 -0000
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+zrwiavkK6tmQaLfmwtM48/lq3M7pEW89
X-RZG-CLASS-ID: mo00
Received: from probook.site
	(dslb-084-057-075-237.pools.arcor-ip.net [84.57.75.237])
	by smtp.strato.de (josoe mo6) (RZmta 30.12 DYNA|AUTH)
	with (DHE-RSA-AES256-SHA encrypted) ESMTPA id z04845o7S7C4PZ ;
	Tue, 28 Aug 2012 10:23:03 +0200 (CEST)
Received: by probook.site (Postfix, from userid 1000)
	id 83ABD183AD; Tue, 28 Aug 2012 10:23:02 +0200 (CEST)
Date: Tue, 28 Aug 2012 10:23:02 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Keir Fraser <keir.xen@gmail.com>
Message-ID: <20120828082302.GA27309@aepfle.de>
References: <20120827215636.GA22031@aepfle.de>
	<CC61CBD5.3CF38%keir.xen@gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CC61CBD5.3CF38%keir.xen@gmail.com>
User-Agent: Mutt/1.5.21.rev5555 (2012-07-20)
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] fixed location of share info page in HVM guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Aug 28, Keir Fraser wrote:

> How about we guarantee that guests can use the 1MB address range
> 0xFED00000-0xFEDFFFFF for whatever mappings they like, guaranteed unused (or
> at least, mapped with nothing useful) when the guest kernel starts?
> 
> This is already, and has always been, the case. But we can etch it in stone
> quite easily. :)

0xFED00000UL is apparently the HPET base address. But if there is room
after that, then let's use that. However, I'm not familiar with these
things. Should the area appear in the E820 map as reserved? If so,
where is the code which configures the guest's E820 map?

Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 28 08:26:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Aug 2012 08:26:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6H7W-00015u-Rg; Tue, 28 Aug 2012 08:25:58 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yongjie.ren@intel.com>) id 1T6H7V-00015o-VV
	for xen-devel@lists.xen.org; Tue, 28 Aug 2012 08:25:58 +0000
Received: from [85.158.138.51:32395] by server-6.bemta-3.messagelabs.com id
	50/20-32013-5908C305; Tue, 28 Aug 2012 08:25:57 +0000
X-Env-Sender: yongjie.ren@intel.com
X-Msg-Ref: server-9.tower-174.messagelabs.com!1346142343!28427343!1
X-Originating-IP: [143.182.124.21]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMjEgPT4gMTg3OTIz\n,
	ML_RADAR_SPEW_LINKS_8, spamassassin: ,
	async_handler: YXN5bmNfZGVsYXk6IDcwNTIxOTAgKHRpbWVvdXQp\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30341 invoked from network); 28 Aug 2012 08:25:44 -0000
Received: from mga03.intel.com (HELO mga03.intel.com) (143.182.124.21)
	by server-9.tower-174.messagelabs.com with SMTP;
	28 Aug 2012 08:25:44 -0000
Received: from azsmga002.ch.intel.com ([10.2.17.35])
	by azsmga101.ch.intel.com with ESMTP; 28 Aug 2012 01:25:42 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.80,325,1344236400"; d="scan'208";a="138635726"
Received: from fmsmsx106.amr.corp.intel.com ([10.19.9.37])
	by AZSMGA002.ch.intel.com with ESMTP; 28 Aug 2012 01:25:42 -0700
Received: from FMSMSX110.amr.corp.intel.com (10.19.9.29) by
	FMSMSX106.amr.corp.intel.com (10.19.9.37) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Tue, 28 Aug 2012 01:25:42 -0700
Received: from shsmsx102.ccr.corp.intel.com (10.239.4.154) by
	fmsmsx110.amr.corp.intel.com (10.19.9.29) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Tue, 28 Aug 2012 01:25:41 -0700
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.239]) by
	SHSMSX102.ccr.corp.intel.com ([169.254.2.92]) with mapi id
	14.01.0355.002; Tue, 28 Aug 2012 16:25:39 +0800
From: "Ren, Yongjie" <yongjie.ren@intel.com>
To: Konrad Rzeszutek Wilk <konrad@kernel.org>
Thread-Topic: [Xen-devel] Regression in kernel 3.5 as Dom0 regarding PCI
	Passthrough?!
Thread-Index: Ac2E9rQ673lcU/qTRZe3dqvfe2uFkA==
Date: Tue, 28 Aug 2012 08:25:39 +0000
Message-ID: <1B4B44D9196EFF41AE41FDA404FC0A101881CE@SHSMSX101.ccr.corp.intel.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: Tobias Geiger <tobias.geiger@vido.info>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] Regression in kernel 3.5 as Dom0 regarding PCI
 Passthrough?!
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Konrad Rzeszutek [mailto:ketuzsezr@gmail.com] On Behalf Of
> Konrad Rzeszutek Wilk
> Sent: Tuesday, August 21, 2012 10:23 PM
> To: Ren, Yongjie
> Cc: Konrad Rzeszutek Wilk; Tobias Geiger; xen-devel@lists.xen.org
> Subject: Re: [Xen-devel] Regression in kernel 3.5 as Dom0 regarding PCI
> Passthrough?!
> 
> On Tue, Aug 21, 2012 at 02:41:36AM +0000, Ren, Yongjie wrote:
> > > -----Original Message-----
> > > From: xen-devel-bounces@lists.xen.org
> > > [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Konrad
> Rzeszutek
> > > Wilk
> > > Sent: Tuesday, August 21, 2012 7:30 AM
> > > To: Tobias Geiger
> > > Cc: xen-devel@lists.xen.org
> > > Subject: Re: [Xen-devel] Regression in kernel 3.5 as Dom0 regarding PCI
> > > Passthrough?!
> > >
> > > On Mon, Aug 06, 2012 at 12:16:33PM -0400, Konrad Rzeszutek Wilk
> wrote:
> > > > On Wed, Jul 25, 2012 at 09:43:57AM -0400, Konrad Rzeszutek Wilk
> > > wrote:
> > > > > On Wed, Jul 25, 2012 at 02:30:00PM +0200, Tobias Geiger wrote:
> > > > > > Hi!
> > > > > >
> > > > > > i notice a serious regression with 3.5 as Dom0 kernel (3.4 was
> > > > > > rock stable):
> > > > > >
> > > > > > 1st: only the GPU PCI Passthrough works, the PCI USB Controller is
> > > > > > not recognized within the DomU (HVM Win7 64)
> > > > > > Dom0 cmdline is:
> > > > > > ro root=LABEL=dom0root
> > > > > > xen-pciback.hide=(08:00.0)(08:00.1)(00:1d.0)(00:1d.1)(00:1d.2)(00:1d.7)
> > > > > > security=apparmor noirqdebug nouveau.msi=1
> > > > > >
> > > > > > Only 8:00.0 and 8:00.1 get passed through without problems, all the
> > > > > > USB Controller IDs are not correctly passed through and get an
> > > > > > exclamation mark within the win7 device manager ("could not be
> > > > > > started").
> > > > >
> > > > > Ok, but they do get passed in though? As in, QEMU sees them.
> > > > > If you boot a Live Ubuntu/Fedora CD within the guest with the PCI
> > > > > passed in devices do you see them? Meaning lspci shows them?
> > > > >
> > > > >
> > > > > Is the lspci -vvv output in dom0 different from 3.4 vs 3.5?
> > > > >
> > > > > >
> > > > > >
> > > > > > 2nd: After DomU shutdown, Dom0 panics (100% reproducible) - sorry
> > > > > > that i have no full stacktrace, all i have is a "screenshot" which i
> > > > > > uploaded here:
> > > > > >
> > > http://imageshack.us/photo/my-images/52/img20120724235921.jpg/
> > > > >
> > > > > Ugh, that looks like somebody removed a large chunk of a pagetable.
> > > > >
> > > > > Hmm. Are you using the dom0_mem=max parameter? If not, can you try
> > > > > that and also disable ballooning in the xm/xl config file pls?
> > > > >
> > > > > >
> > > > > >
> > > > > > With 3.4 both issues were not there - everything worked perfectly.
> > > > > > Tell me which debugging info you need, i may be able to re-install
> > > > > > my netconsole to get the full stacktrace (but i had not much luck
> > > > > > with netconsole regarding kernel panics - rarely this info gets sent
> > > > > > before the "panic"...)
> > > >
> > > > So I am able to reproduce this with a Windows 7 with an ATI 4870 and
> > > > an Intel 82574L NIC. The video card still works, but the NIC stopped
> > > > working. Same version of hypervisor/toolstack/etc, only change is the
> > > > kernel (v3.4.6->v3.5).
> > > >
> > > > Time to get my hands greasy with this..
> > >
> > > And it's due to a patch I added in v3.4
> > > (cd9db80e5257682a7f7ab245a2459648b3c8d268)
> > > - which did not work properly in v3.4, but with v3.5 got it working
> > > (977f857ca566a1e68045fcbb7cfc9c4acb077cf0), which causes v3.5 to not
> > > work anymore.
> > >
> > > Anyhow, for right now just revert
> > > cd9db80e5257682a7f7ab245a2459648b3c8d268
> > > and it should work for you.
> > >
Confirmed: after reverting that commit, VT-d works fine.
Will you fix this and push it to upstream Linux, Konrad?

> > Also, our team reported a VT-d bug 2 months ago.
> > http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1824
>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 28 08:33:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Aug 2012 08:33:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6HEE-0001LQ-TC; Tue, 28 Aug 2012 08:32:54 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1T6HED-0001LL-H7
	for xen-devel@lists.xen.org; Tue, 28 Aug 2012 08:32:53 +0000
Received: from [85.158.138.51:60714] by server-9.bemta-3.messagelabs.com id
	AB/62-23952-4328C305; Tue, 28 Aug 2012 08:32:52 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-10.tower-174.messagelabs.com!1346142772!24208785!1
X-Originating-IP: [81.169.146.160]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MCA9PiA0MTAzMDE=\n,sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MCA9PiA0MTAzMDE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24122 invoked from network); 28 Aug 2012 08:32:52 -0000
Received: from mo-p00-ob.rzone.de (HELO mo-p00-ob.rzone.de) (81.169.146.160)
	by server-10.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 28 Aug 2012 08:32:52 -0000
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+zrwiavkK6tmQaLfmwtM48/lq3M7pEW89
X-RZG-CLASS-ID: mo00
Received: from probook.site
	(dslb-084-057-075-237.pools.arcor-ip.net [84.57.75.237])
	by smtp.strato.de (jored mo13) (RZmta 30.12 DYNA|AUTH)
	with (DHE-RSA-AES256-SHA encrypted) ESMTPA id y00bc2o7S7VMAY
	for <xen-devel@lists.xen.org>; Tue, 28 Aug 2012 10:32:52 +0200 (CEST)
Received: from probook.site (localhost [IPv6:::1])
	by probook.site (Postfix) with ESMTP id 60AED183A5
	for <xen-devel@lists.xen.org>; Tue, 28 Aug 2012 10:32:51 +0200 (CEST)
MIME-Version: 1.0
X-Mercurial-Node: b088e473c7fb1a47b9578693c1750cd27767cc25
Message-Id: <b088e473c7fb1a47b957.1346142770@probook.site>
User-Agent: Mercurial-patchbomb/1.7.5
Date: Tue, 28 Aug 2012 10:32:50 +0200
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH] xenpaging: use poll timeout if no paging target
	exists
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

# HG changeset patch
# User Olaf Hering <olaf@aepfle.de>
# Date 1346142745 -7200
# Node ID b088e473c7fb1a47b9578693c1750cd27767cc25
# Parent  03ef29089830b119b6efd87db8ab5c38b7428938
xenpaging: use poll timeout if no paging target exists

Currently xenpaging will use 100% CPU time if a paging target is not yet
set via xenstore. The reason is that ->use_poll_timeout is initialized
to zero. Another case is when the paging target is set to zero; in this
case ->use_poll_timeout will not be set to 1.

Fix the first case by initializing use_poll_timeout to 1 in
xenpaging_init. The second case is fixed by setting use_poll_timeout in
case the target is zero and there are no more pages left to resume.

Signed-off-by: Olaf Hering <olaf@aepfle.de>

diff -r 03ef29089830 -r b088e473c7fb tools/xenpaging/xenpaging.c
--- a/tools/xenpaging/xenpaging.c
+++ b/tools/xenpaging/xenpaging.c
@@ -482,6 +482,9 @@ static struct xenpaging *xenpaging_init(
         goto err;
     }
 
+    /* Use a poll timeout until xenpaging got a target from xenstore */
+    paging->use_poll_timeout = 1;
+
     return paging;
 
  err:
@@ -1019,6 +1022,8 @@ int main(int argc, char *argv[])
         {
             if ( paging->num_paged_out )
                 resume_pages(paging, paging->num_paged_out);
+            else
+                paging->use_poll_timeout = 1;
         }
         /* Evict more pages if target not reached */
         else if ( tot_pages > paging->target_tot_pages )

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 28 08:36:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Aug 2012 08:36:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6HHS-0001X4-4y; Tue, 28 Aug 2012 08:36:14 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T6HHR-0001Wo-BQ
	for xen-devel@lists.xen.org; Tue, 28 Aug 2012 08:36:13 +0000
Received: from [85.158.143.99:7419] by server-2.bemta-4.messagelabs.com id
	9C/C7-21239-CF28C305; Tue, 28 Aug 2012 08:36:12 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-216.messagelabs.com!1346142972!21869394!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA1ODA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16378 invoked from network); 28 Aug 2012 08:36:12 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-10.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Aug 2012 08:36:12 -0000
X-IronPort-AV: E=Sophos;i="4.80,325,1344211200"; d="scan'208";a="14223236"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	28 Aug 2012 08:36:12 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1;
	Tue, 28 Aug 2012 09:36:11 +0100
Message-ID: <1346142969.16230.3.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Bei Guan <gbtju85@gmail.com>
Date: Tue, 28 Aug 2012 09:36:09 +0100
In-Reply-To: <CAEQjb-SdU+q9eGnwY4hfyoQe8mBNVM6qQp+r_bwxJ+btnR-EDQ@mail.gmail.com>
References: <CAEQjb-SdU+q9eGnwY4hfyoQe8mBNVM6qQp+r_bwxJ+btnR-EDQ@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Jordan Justen <jljusten@gmail.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] How to decide the xenstore path for a domain
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sat, 2012-08-25 at 12:15 +0100, Bei Guan wrote:
> Hi,
> 
> 
> Can anyone tell me how to decide the number in the Xenstore key path
> for a DomU's front driver? 
> In Mini-OS, the block front driver writes the key-value pairs to
> "device/vbd/768/[ring-ref|event-channel|protocol|...]" according to the
> code in mini-os/blkfront.c.
> So why does it use the number "768" in the path here? Can this number
> be another one? For a new DomU's block front driver, how to decide
> this number?

It is the VBD number, as described in docs/misc/vbd-interface.txt. 768
is "3<<8 | 0" per the "Concrete encoding in the VBD interface (in
xenstore)" section.

You (as frontend driver author) don't need to decide it since it is
provided by the toolstack/guest config. You should just process each
sub-directory of "device/vbd" as a separate disk.

Ian.




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 28 08:39:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Aug 2012 08:39:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6HKY-0001qL-17; Tue, 28 Aug 2012 08:39:26 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T6HKW-0001qB-Ce
	for xen-devel@lists.xen.org; Tue, 28 Aug 2012 08:39:24 +0000
Received: from [85.158.143.99:49850] by server-2.bemta-4.messagelabs.com id
	B3/CE-21239-BB38C305; Tue, 28 Aug 2012 08:39:23 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-216.messagelabs.com!1346143162!28233543!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA1ODA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15020 invoked from network); 28 Aug 2012 08:39:23 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Aug 2012 08:39:23 -0000
X-IronPort-AV: E=Sophos;i="4.80,325,1344211200"; d="scan'208";a="14223291"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	28 Aug 2012 08:39:22 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1;
	Tue, 28 Aug 2012 09:39:22 +0100
Message-ID: <1346143161.16230.7.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: David Erickson <halcyon1981@gmail.com>
Date: Tue, 28 Aug 2012 09:39:21 +0100
In-Reply-To: <CANKx4w8=5nubN6C8CZij3Lz6_18n6ROaczvTuL+Ap8jug357dg@mail.gmail.com>
References: <CANKx4w8=5nubN6C8CZij3Lz6_18n6ROaczvTuL+Ap8jug357dg@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Lockup in netback - Xen 4.1.2 (XS 6.0.2 hotfix 7)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2012-08-27 at 00:22 +0100, David Erickson wrote:
> Hi all-
> I am seeing an intermittent lockup on my machine's networking as soon
> as I apply a network load.

Hi David, 

This list is for development of the Xen.org version of Xen.

For user support with XenServer you should contact either your XS
support representative or use the XenServer user forums:
http://forums.citrix.com/category.jspa?categoryID=101

For user support with the Xen.org version of Xen you should use the
xen-users@ list.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 28 10:06:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Aug 2012 10:06:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6IgZ-0003nY-DC; Tue, 28 Aug 2012 10:06:15 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T6IgY-0003nP-Kh
	for xen-devel@lists.xen.org; Tue, 28 Aug 2012 10:06:14 +0000
Received: from [85.158.143.35:14699] by server-2.bemta-4.messagelabs.com id
	86/4D-21239-5189C305; Tue, 28 Aug 2012 10:06:13 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1346148364!12494303!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA1NDA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9169 invoked from network); 28 Aug 2012 10:06:12 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Aug 2012 10:06:12 -0000
X-IronPort-AV: E=Sophos;i="4.80,325,1344211200"; d="scan'208";a="14225463"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	28 Aug 2012 10:06:03 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1;
	Tue, 28 Aug 2012 11:06:03 +0100
Message-ID: <1346148361.9975.3.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: xen-devel <xen-devel@lists.xen.org>, xen-users <xen-users@lists.xen.org>
Date: Tue, 28 Aug 2012 11:06:01 +0100
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Subject: [Xen-devel] Xen 4.2 TODO / Release Plan
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Plan for a 4.2 release:
http://lists.xen.org/archives/html/xen-devel/2012-03/msg00793.html

The time line is as follows:

19 March        -- TODO list locked down
2 April         -- Feature Freeze
30 July         -- First release candidate
Weekly          -- RCN+1 until release          << WE ARE HERE

Keir released 4.2.0 rc3 on Thursday. Please test.

The updated TODO list follows.

hypervisor, blockers:

    * None

tools, blockers:

    * libxl stable API -- we would like 4.2 to define a stable API
      which downstreams can start to rely on not changing. Aspects of
      this are:

        * None known

    * xl compatibility with xm:

        * No known issues

    * [CHECK] More formally deprecate xm/xend. Manpage patches already
      in tree. Needs release noting and communication around -rc1 to
      remind people to test xl.

    * [CHECK] Confirm that migration from Xen 4.1 -> 4.2 works.

    * Bump library SONAMES as necessary.
      <20502.39440.969619.824976@mariner.uk.xensource.com>

    * [BUG] qemu-traditional has 50% cpu utilization on an idle
      Windows system if USB is enabled. Not 100% clear whether this is
      Xen or qemu.  George Dunlap is performing initial
      investigations.

    * Disable generation id from xl. Microsoft changed the
      specification of this value between W8 beta and RC and we now
      implement the old spec. Disable for 4.2 and revisit implementing
      the new spec for 4.3. (Paul Durrant)

hypervisor, nice to have:

    * [BUG(?)] Under certain conditions, the p2m_pod_sweep code will
      stop halfway through searching, causing a guest to crash even if
      there was zeroed memory available.  This is NOT a regression
      from 4.1, and is a very rare case, so probably shouldn't be a
      blocker.  (In fact, I'd be open to the idea that it should wait
      until after the release to get more testing.)
            (George Dunlap)

    * S3 regression(s?) reported by Ben Guthro (Ben & Jan Beulich)

    * fix high change rate to CMOS RTC periodic interrupt causing
      guest wall clock time to lag (possible fix outlined, needs to be
      put in patch form and thoroughly reviewed/tested for unwanted
      side effects, Jan Beulich)

tools, nice to have:

    * xl compatibility with xm:

        * the parameters io and irq in domU config files are not
          evaluated by xl.  So it is not possible to pass through a
          parallel port for my printer to domU when I start the domU
          with the xl command. (reported by Dieter Bloms,
          <20120814100704.GA19704@bloms.de>)

    * xl.cfg(5) documentation patch for qemu-upstream
      videoram/videomem support:
      http://lists.xen.org/archives/html/xen-devel/2012-05/msg00250.html
      qemu-upstream doesn't support specifying videomem size for the
      HVM guest cirrus/stdvga.  (but this works with
      qemu-xen-traditional). (Pasi Kärkkäinen)

    * [BUG] long stop during the guest boot process with qcow image,
      reported by Intel: http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1821
      (Done)

    * [BUG] vcpu-set doesn't take effect on guest, reported by Intel:
      http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1822

    * Load blktap driver from xencommons initscript if available, thread at:
      <db614e92faf743e20b3f.1337096977@kodo2>. To be fixed more
      properly in 4.3. (Patch posted, discussion, plan to take simple
      xencommons patch for 4.2 and revisit for 4.3. Ping sent)

    * [BUG] xl allows same PCI device to be assigned to multiple
      guests. http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1826
      (<E4558C0C96688748837EB1B05BEED75A0FD5574A@SHSMSX102.ccr.corp.intel.com>)

    * address PoD problems with early host side accesses to guest
      address space (Jan Beulich, DONE)

    * fix ipxe build problems with gcc 4.7 (fedora 17).
      The following files fail to build:
        - ipxe/src/drivers/bus/isa.c
        - ipxe/src/drivers/net/myri10ge.c
        - ipxe/src/drivers/infiniband/qib7322.c
      Patches have been posted to the ipxe-devel mailing list, so we need
      to update our ipxe version or grab the patches. (DONE, Keir)

    * "xl list -l" does not produce proper json. Should be possible to
      make it into an array. Reported by Bastian Blank,
      <20120814121741.GA10214@wavehammer.waldi.eu.org>. (Ian Campbell, DONE)

    * "xl cpupool-create" segfault on incorrect input. Reported by
      George Dunlap,
      <CAFLBxZaEci0mOcDCgFX9zk=wh3z4Nf1LD5E5Fcy7Y3=ioDAM=g@mail.gmail.com>
      (Ian Campbell, patch posted)



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 28 10:06:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Aug 2012 10:06:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6IgZ-0003nY-DC; Tue, 28 Aug 2012 10:06:15 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T6IgY-0003nP-Kh
	for xen-devel@lists.xen.org; Tue, 28 Aug 2012 10:06:14 +0000
Received: from [85.158.143.35:14699] by server-2.bemta-4.messagelabs.com id
	86/4D-21239-5189C305; Tue, 28 Aug 2012 10:06:13 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1346148364!12494303!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA1NDA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9169 invoked from network); 28 Aug 2012 10:06:12 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Aug 2012 10:06:12 -0000
X-IronPort-AV: E=Sophos;i="4.80,325,1344211200"; d="scan'208";a="14225463"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	28 Aug 2012 10:06:03 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1;
	Tue, 28 Aug 2012 11:06:03 +0100
Message-ID: <1346148361.9975.3.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: xen-devel <xen-devel@lists.xen.org>, xen-users <xen-users@lists.xen.org>
Date: Tue, 28 Aug 2012 11:06:01 +0100
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Subject: [Xen-devel] Xen 4.2 TODO / Release Plan
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Plan for a 4.2 release:
http://lists.xen.org/archives/html/xen-devel/2012-03/msg00793.html

The timeline is as follows:

19 March        -- TODO list locked down
2 April         -- Feature Freeze
30 July         -- First release candidate
Weekly          -- RC N+1 until release          << WE ARE HERE

Keir released 4.2.0 rc3 on Thursday. Please test.

The updated TODO list follows.

hypervisor, blockers:

    * None

tools, blockers:

    * libxl stable API -- we would like 4.2 to define a stable API
      which downstreams can start to rely on not changing. Aspects of
      this are:

        * None known

    * xl compatibility with xm:

        * No known issues

    * [CHECK] More formally deprecate xm/xend. Manpage patches already
      in tree. Needs release noting and communication around -rc1 to
      remind people to test xl.

    * [CHECK] Confirm that migration from Xen 4.1 -> 4.2 works.

    * Bump library SONAMEs as necessary.
      <20502.39440.969619.824976@mariner.uk.xensource.com>

    * [BUG] qemu-traditional has 50% cpu utilization on an idle
      Windows system if USB is enabled. Not 100% clear whether this is
      Xen or qemu.  George Dunlap is performing initial
      investigations.

    * Disable generation id from xl. Microsoft changed the
      specification of this value between W8 beta and RC and we now
      implement the old spec. Disable for 4.2 and revisit implementing
      the new spec for 4.3. (Paul Durrant)

hypervisor, nice to have:

    * [BUG(?)] Under certain conditions, the p2m_pod_sweep code will
      stop halfway through searching, causing a guest to crash even if
      there was zeroed memory available.  This is NOT a regression
      from 4.1, and is a very rare case, so probably shouldn't be a
      blocker.  (In fact, I'd be open to the idea that it should wait
      until after the release to get more testing.)
      (George Dunlap)

    * S3 regression(s?) reported by Ben Guthro (Ben & Jan Beulich)

    * fix high change rate of the CMOS RTC periodic interrupt causing
      guest wall clock time to lag (possible fix outlined, needs to be
      put in patch form and thoroughly reviewed/tested for unwanted
      side effects, Jan Beulich)

tools, nice to have:

    * xl compatibility with xm:

        * the parameters io and irq in domU config files are not
          evaluated by xl.  So it is not possible to pass through a
          parallel port for my printer to domU when I start the domU
          with the xl command. (reported by Dieter Bloms,
          <20120814100704.GA19704@bloms.de>)

    * xl.cfg(5) documentation patch for qemu-upstream
      videoram/videomem support:
      http://lists.xen.org/archives/html/xen-devel/2012-05/msg00250.html
      qemu-upstream doesn't support specifying videomem size for the
      HVM guest cirrus/stdvga.  (but this works with
      qemu-xen-traditional). (Pasi Kärkkäinen)

    * [BUG] long stall during the guest boot process with qcow image,
      reported by Intel: http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1821
      (Done)

    * [BUG] vcpu-set doesn't take effect on guest, reported by Intel:
      http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1822

    * Load blktap driver from xencommons initscript if available, thread at:
      <db614e92faf743e20b3f.1337096977@kodo2>. To be fixed more
      properly in 4.3. (Patch posted, discussion, plan to take simple
      xencommons patch for 4.2 and revisit for 4.3. Ping sent)

    * [BUG] xl allows the same PCI device to be assigned to multiple
      guests. http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1826
      (<E4558C0C96688748837EB1B05BEED75A0FD5574A@SHSMSX102.ccr.corp.intel.com>)

    * address PoD problems with early host side accesses to guest
      address space (Jan Beulich, DONE)

    * fix ipxe build problems with gcc 4.7 (fedora 17).
      The following files fail to build:
        - ipxe/src/drivers/bus/isa.c
        - ipxe/src/drivers/net/myri10ge.c
        - ipxe/src/drivers/infiniband/qib7322.c
      Patches have been posted to the ipxe-devel mailing list, so we need
      to update our ipxe version or grab the patches. (DONE, Keir)

    * "xl list -l" does not produce proper JSON. Should be possible to
      make it into an array. Reported by Bastian Blank,
      <20120814121741.GA10214@wavehammer.waldi.eu.org>. (Ian Campbell, DONE)

    * "xl cpupool-create" segfaults on incorrect input. Reported by
      George Dunlap,
      <CAFLBxZaEci0mOcDCgFX9zk=wh3z4Nf1LD5E5Fcy7Y3=ioDAM=g@mail.gmail.com>
      (Ian Campbell, patch posted)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 28 11:41:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Aug 2012 11:41:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6KAH-0005AJ-O1; Tue, 28 Aug 2012 11:41:01 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1T6KAF-0005AE-TF
	for xen-devel@lists.xen.org; Tue, 28 Aug 2012 11:41:00 +0000
Received: from [85.158.143.35:10406] by server-3.bemta-4.messagelabs.com id
	22/94-08232-B4EAC305; Tue, 28 Aug 2012 11:40:59 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1346154054!14015695!1
X-Originating-IP: [209.85.160.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27121 invoked from network); 28 Aug 2012 11:40:56 -0000
Received: from mail-pb0-f45.google.com (HELO mail-pb0-f45.google.com)
	(209.85.160.45)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Aug 2012 11:40:56 -0000
Received: by pbbjt11 with SMTP id jt11so9065479pbb.32
	for <xen-devel@lists.xen.org>; Tue, 28 Aug 2012 04:40:54 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:cc:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=vIWNvbYFqRhqD8Zf/8qGZkllkcjrms6IXJDAeL0s20I=;
	b=rTdPvt8s+wnaxhMsZXDu8OCuvLtdGH9hXE5ZMcumlIEynS/DjBXSSgSzpNihAMD+Ms
	01HqptRfKcLxIsGTe0aiXxXN7Cv8UHRqQNbfzbbN6Ym7AazrnaHI3HVeJibqtfhGrajI
	va6kQ8mVWv44d4byWn4fBY1o1/bpH0zh5wxyl/4UgVCJfgdpj67BYID+orKPJ4rB9e9e
	vWSJ7NJx71O8kzYzgbtOPAbx7v6v27B1LMH+qxmdiq7SkHm1Fkf6nWsFwM0F8HKWja3X
	NaKyBoeqPNguU6OP4DwD1E7b21X7MVMbQQJ96VYTZIJrmzQzNbpkWkdNTY/86wIOXYVd
	dhog==
Received: by 10.66.76.130 with SMTP id k2mr37164896paw.19.1346154053892;
	Tue, 28 Aug 2012 04:40:53 -0700 (PDT)
Received: from [10.10.1.100] (0127ahost2.starwoodbroadband.com. [12.105.246.2])
	by mx.google.com with ESMTPS id
	pj10sm16867650pbb.46.2012.08.28.04.40.46
	(version=SSLv3 cipher=OTHER); Tue, 28 Aug 2012 04:40:52 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.32.0.111121
Date: Tue, 28 Aug 2012 12:40:41 +0100
From: Keir Fraser <keir.xen@gmail.com>
To: Olaf Hering <olaf@aepfle.de>
Message-ID: <CC626CC9.3CFA1%keir.xen@gmail.com>
Thread-Topic: [Xen-devel] fixed location of share info page in HVM guests
Thread-Index: Ac2FEfODKTAF7z1XN0OSTYHltJ9szA==
In-Reply-To: <20120828082302.GA27309@aepfle.de>
Mime-version: 1.0
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] fixed location of share info page in HVM guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 28/08/2012 09:23, "Olaf Hering" <olaf@aepfle.de> wrote:

> On Tue, Aug 28, Keir Fraser wrote:
> 
>> How about we guarantee that guests can use the 1MB in address range
>> 0xFED00000-0xFEDFFFFF for whatever mappings they like, guaranteed unused (or
>> at least, mapped with nothing useful) when guest kernel starts?
>> 
>> This is already, and has always been, the case. But we can etch it in stone
>> quite easily. :)
> 
> 0xFED00000UL is apparently the HPET base address. But if there is room
> after that, then let's use that. However, I'm not familiar with these
> things. Should the area appear in the E820 map as reserved? If so,
> where is the code which configures the guest's E820 map?

Okay, that was a bit too clever, trying to hide between IOAPIC and LAPIC
pages. How about a bit lower in memory -- FE700000-FE7FFFFF?

Everything in range FC000000-FFFFFFFF should already be marked
E820_RESERVED. You can test that, and also see
tools/firmware/hvmloader/e820.c:build_e820_table() (and note that
RESERVED_MEMBASE == FC000000).

Can document in hvmloader/config.h and have mem_alloc() test against it
rather than hvm_info->reserved_mem_pgstart.
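
The layout described above can be illustrated with a small model (a sketch
only, not the real tools/firmware/hvmloader code; the constants are taken
from this thread):

```python
# Toy model of the HVM guest's high-memory E820 layout described above.
# The real table is built in tools/firmware/hvmloader/e820.c:build_e820_table().
E820_RESERVED = 2              # E820 type code for a reserved range
RESERVED_MEMBASE = 0xFC000000  # start of hvmloader's reserved window

# Entries are (start, end-exclusive, type); the whole top of the
# 32-bit address space is covered by one reserved entry.
e820 = [(RESERVED_MEMBASE, 0x1_0000_0000, E820_RESERVED)]

def is_reserved(addr):
    """True if addr falls inside a reserved E820 entry."""
    return any(lo <= addr < hi and t == E820_RESERVED for lo, hi, t in e820)

# The window proposed in this thread, FE700000-FE7FFFFF, sits inside it:
print(is_reserved(0xFE700000) and is_reserved(0xFE7FFFFF))  # -> True
```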

 -- Keir

> Olaf



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 28 12:26:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Aug 2012 12:26:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6KrN-0005Xl-4M; Tue, 28 Aug 2012 12:25:33 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <greg@wind.enjellic.com>) id 1T6KrL-0005Xg-Ol
	for xen-devel@lists.xen.org; Tue, 28 Aug 2012 12:25:31 +0000
Received: from [85.158.138.51:59593] by server-6.bemta-3.messagelabs.com id
	36/D7-32013-AB8BC305; Tue, 28 Aug 2012 12:25:30 +0000
X-Env-Sender: greg@wind.enjellic.com
X-Msg-Ref: server-6.tower-174.messagelabs.com!1346156723!20434118!1
X-Originating-IP: [76.10.64.91]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17402 invoked from network); 28 Aug 2012 12:25:24 -0000
Received: from wind.enjellic.com (HELO wind.enjellic.com) (76.10.64.91)
	by server-6.tower-174.messagelabs.com with SMTP;
	28 Aug 2012 12:25:24 -0000
Received: from wind.enjellic.com (localhost [127.0.0.1])
	by wind.enjellic.com (8.14.3/8.14.3) with ESMTP id q7SCP14Y017491
	for <xen-devel@lists.xen.org>; Tue, 28 Aug 2012 07:25:01 -0500
Received: (from greg@localhost)
	by wind.enjellic.com (8.14.3/8.14.3/Submit) id q7SCP1WO017490
	for xen-devel@lists.xen.org; Tue, 28 Aug 2012 07:25:01 -0500
Date: Tue, 28 Aug 2012 07:25:01 -0500
From: "Dr. Greg Wettstein" <greg@wind.enjellic.com>
Message-Id: <201208281225.q7SCP1WO017490@wind.enjellic.com>
X-Mailer: Mail User's Shell (7.2.6-ESD1.0 03/31/2012)
To: xen-devel@lists.xen.org
X-Greylist: Sender passed SPF test, not delayed by milter-greylist-4.2.3
	(wind.enjellic.com [0.0.0.0]);
	Tue, 28 Aug 2012 07:25:01 -0500 (CDT)
Subject: [Xen-devel] XEN 4.1.3 blktap2 patches.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: greg@enjellic.com
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Good morning, hope the day is going well for everyone.

The patches to fix the blktap2 issues which result in orphaned
tapdisk2 processes and the transitory deadlock on guest shutdown
didn't make it into the 4.1.3 release.  Updated patches to address
these problems are available at the following locations:

	ftp://ftp.enjellic.com/pub/xen/xen-4.1.3.blktap1.patch
	ftp://ftp.enjellic.com/pub/xen/xen-4.1.3.blktap2.patch

The patches are designed to be applied in order and have been verified
to work against the 4.1.3 release.

Best wishes for a productive week.

As always,
Dr. G.W. Wettstein, Ph.D.   Enjellic Systems Development, LLC.
4206 N. 19th Ave.           Specializing in information infra-structure
Fargo, ND  58102            development.
PH: 701-281-1686
FAX: 701-281-3949           EMAIL: greg@enjellic.com
------------------------------------------------------------------------------
"There are two things that are infinite; Human stupidity and the
 universe.  And I'm not sure about the universe."
                                -- Albert Einstein

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 28 12:43:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Aug 2012 12:43:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6L8E-0005jn-QZ; Tue, 28 Aug 2012 12:42:58 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1T6L8E-0005ji-4J
	for xen-devel@lists.xen.org; Tue, 28 Aug 2012 12:42:58 +0000
Received: from [85.158.143.35:47364] by server-3.bemta-4.messagelabs.com id
	23/72-08232-1DCBC305; Tue, 28 Aug 2012 12:42:57 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-8.tower-21.messagelabs.com!1346157776!13273531!1
X-Originating-IP: [81.169.146.160]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MCA9PiA0MTA4OTk=\n,sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MCA9PiA0MTA4OTk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6085 invoked from network); 28 Aug 2012 12:42:57 -0000
Received: from mo-p00-ob.rzone.de (HELO mo-p00-ob.rzone.de) (81.169.146.160)
	by server-8.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 28 Aug 2012 12:42:57 -0000
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+zrwiavkK6tmQaLfmwtM48/lq3M7pEW89
X-RZG-CLASS-ID: mo00
Received: from probook.site
	(dslb-084-057-075-237.pools.arcor-ip.net [84.57.75.237])
	by smtp.strato.de (jorabe mo31) (RZmta 30.12 DYNA|AUTH)
	with (DHE-RSA-AES256-SHA encrypted) ESMTPA id e06389o7SBioRW ;
	Tue, 28 Aug 2012 14:42:56 +0200 (CEST)
Received: by probook.site (Postfix, from userid 1000)
	id 09529183AD; Tue, 28 Aug 2012 14:42:55 +0200 (CEST)
Date: Tue, 28 Aug 2012 14:42:55 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Keir Fraser <keir.xen@gmail.com>
Message-ID: <20120828124255.GA32452@aepfle.de>
References: <20120828082302.GA27309@aepfle.de>
	<CC626CC9.3CFA1%keir.xen@gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CC626CC9.3CFA1%keir.xen@gmail.com>
User-Agent: Mutt/1.5.21.rev5555 (2012-07-20)
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] fixed location of share info page in HVM guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Aug 28, Keir Fraser wrote:

> Okay, that was a bit too clever, trying to hide between IOAPIC and LAPIC
> pages. How about a bit lower in memory -- FE700000-FE7FFFFF?
> 
> Everything in range FC000000-FFFFFFFF should already be marked
> E820_RESERVED. You can test that, and also see
> tools/firmware/hvmloader/e820.c:build_e820_table() (and note that
> RESERVED_MEMBASE == FC000000).

Yes, FC000000-FFFFFFFF already has an E820_RESERVED entry. Within that
range the kernel finds the IOAPIC, LAPIC and the HPET, perhaps because
they are listed in the ACPI tables or because they are discovered in
other ways.

To make the location of the shared pages configurable from the
tools, does tools/firmware/hvmloader/acpi/dsdt.asl have a way to
describe such a special region? Maybe the kernel parses that table
early enough, before the shared_info page gets used.
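
The collision concern can be made concrete with a sketch (the IOAPIC, HPET,
and LAPIC bases below are the conventional x86 defaults, not values read
from this guest; the candidate window is the one proposed earlier in the
thread):

```python
PAGE = 0x1000  # each device page occupies at least 4 KiB

# Conventional x86 default MMIO base addresses inside FC000000-FFFFFFFF.
devices = {"IOAPIC": 0xFEC00000, "HPET": 0xFED00000, "LAPIC": 0xFEE00000}

# The candidate guest-usable window from the thread: FE700000-FE7FFFFF.
win_lo, win_hi = 0xFE700000, 0xFE800000

def collides(base, size=PAGE):
    """True if [base, base + size) overlaps the candidate window."""
    return base < win_hi and base + size > win_lo

clear = sorted(name for name, base in devices.items() if not collides(base))
print(clear)  # -> ['HPET', 'IOAPIC', 'LAPIC']  (no device page overlaps)
```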

Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 28 12:43:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Aug 2012 12:43:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6L8E-0005jn-QZ; Tue, 28 Aug 2012 12:42:58 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1T6L8E-0005ji-4J
	for xen-devel@lists.xen.org; Tue, 28 Aug 2012 12:42:58 +0000
Received: from [85.158.143.35:47364] by server-3.bemta-4.messagelabs.com id
	23/72-08232-1DCBC305; Tue, 28 Aug 2012 12:42:57 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-8.tower-21.messagelabs.com!1346157776!13273531!1
X-Originating-IP: [81.169.146.160]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MCA9PiA0MTA4OTk=\n,sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MCA9PiA0MTA4OTk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6085 invoked from network); 28 Aug 2012 12:42:57 -0000
Received: from mo-p00-ob.rzone.de (HELO mo-p00-ob.rzone.de) (81.169.146.160)
	by server-8.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 28 Aug 2012 12:42:57 -0000
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+zrwiavkK6tmQaLfmwtM48/lq3M7pEW89
X-RZG-CLASS-ID: mo00
Received: from probook.site
	(dslb-084-057-075-237.pools.arcor-ip.net [84.57.75.237])
	by smtp.strato.de (jorabe mo31) (RZmta 30.12 DYNA|AUTH)
	with (DHE-RSA-AES256-SHA encrypted) ESMTPA id e06389o7SBioRW ;
	Tue, 28 Aug 2012 14:42:56 +0200 (CEST)
Received: by probook.site (Postfix, from userid 1000)
	id 09529183AD; Tue, 28 Aug 2012 14:42:55 +0200 (CEST)
Date: Tue, 28 Aug 2012 14:42:55 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Keir Fraser <keir.xen@gmail.com>
Message-ID: <20120828124255.GA32452@aepfle.de>
References: <20120828082302.GA27309@aepfle.de>
	<CC626CC9.3CFA1%keir.xen@gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CC626CC9.3CFA1%keir.xen@gmail.com>
User-Agent: Mutt/1.5.21.rev5555 (2012-07-20)
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] fixed location of share info page in HVM guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Aug 28, Keir Fraser wrote:

> Okay, that was a bit too clever, trying to hide between IOAPIC and LAPIC
> pages. How about a bit lower in memory -- FE700000-FE7FFFFF?
> 
> Everything in range FC000000-FFFFFFFF should already be marked
> E820_RESERVED. You can test that, and also see
> tools/firmware/hvmloader/e820.c:build_e820_table() (and note that
> RESERVED_MEMBASE == FC000000).

Yes, FC000000-FFFFFFFF already has an E820_RESERVED entry. Within that
range the kernel finds the IOAPIC, LAPIC and the HPET, perhaps because
they are listed in the ACPI tables or because they are discovered in
other ways.

To make the location of the shared pages configurable from the tools,
does tools/firmware/hvmloader/acpi/dsdt.asl have a way to describe such
a special region? Maybe the kernel parses that table early enough,
before the shared_info page gets used.

Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 28 13:20:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Aug 2012 13:20:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6Lhn-0005zd-Rx; Tue, 28 Aug 2012 13:19:43 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T6Lhl-0005zY-SQ
	for xen-devel@lists.xen.org; Tue, 28 Aug 2012 13:19:42 +0000
Received: from [85.158.143.35:24988] by server-3.bemta-4.messagelabs.com id
	2D/54-08232-D65CC305; Tue, 28 Aug 2012 13:19:41 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1346159978!5113976!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDcyMzIwMQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12618 invoked from network); 28 Aug 2012 13:19:39 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-9.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 28 Aug 2012 13:19:39 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7SDJWRn011613
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 28 Aug 2012 13:19:32 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7SDJVrk009200
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 28 Aug 2012 13:19:32 GMT
Received: from abhmt120.oracle.com (abhmt120.oracle.com [141.146.116.72])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7SDJVI0019320; Tue, 28 Aug 2012 08:19:31 -0500
Received: from localhost.localdomain (/10.159.167.73)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 28 Aug 2012 06:19:30 -0700
Date: Tue, 28 Aug 2012 09:19:28 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: "Ren, Yongjie" <yongjie.ren@intel.com>
Message-ID: <20120828131928.GB20870@localhost.localdomain>
References: <1B4B44D9196EFF41AE41FDA404FC0A101881CE@SHSMSX101.ccr.corp.intel.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1B4B44D9196EFF41AE41FDA404FC0A101881CE@SHSMSX101.ccr.corp.intel.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Konrad Rzeszutek Wilk <konrad@kernel.org>,
	Tobias Geiger <tobias.geiger@vido.info>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Regression in kernel 3.5 as Dom0 regarding PCI
 Passthrough?!
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> > > > Anyhow, for right now just revert
> > > > cd9db80e5257682a7f7ab245a2459648b3c8d268
> > > > and it should work for you.
> > > >
> Confirmed, after reverting that commit, VT-d will work fine.
> Will you fix this and push it to upstream Linux, Konrad?

Yes, I plan to fix it - though I am not sure exactly how. The
reset functionality works (too well, one could say) - perhaps
what I also need to do is enable the device after the reset.

> 
> > > Also, our team reported a VT-d bug 2 months ago.
> > > http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1824
> >

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 28 13:36:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Aug 2012 13:36:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6Lx7-0006C5-Fj; Tue, 28 Aug 2012 13:35:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1T6Lx6-0006C0-9X
	for xen-devel@lists.xen.org; Tue, 28 Aug 2012 13:35:32 +0000
Received: from [85.158.139.83:21648] by server-10.bemta-5.messagelabs.com id
	75/F8-10969-329CC305; Tue, 28 Aug 2012 13:35:31 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-3.tower-182.messagelabs.com!1346160926!28262746!1
X-Originating-IP: [209.85.210.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6209 invoked from network); 28 Aug 2012 13:35:28 -0000
Received: from mail-pz0-f45.google.com (HELO mail-pz0-f45.google.com)
	(209.85.210.45)
	by server-3.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Aug 2012 13:35:28 -0000
Received: by dadn15 with SMTP id n15so3153654dad.32
	for <xen-devel@lists.xen.org>; Tue, 28 Aug 2012 06:35:26 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:cc:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=8CYDJEknuzFmmOHI3jHIbKklmVSWY5GC9NAuTXvx2zs=;
	b=Vt9DmLLMrMcZgRSVpDE2sHJbwXfqSsE0IfjCHkUQ4QSmdSYl6/PGf72Lpoc1zFChRJ
	SaINRo9RKoPHp4KYb/vVgKdE9QYrJhDGrZuapqVpA3Ui2JKLIaGVCXqgtv7yyZvFZfQb
	zmHmpP21JJJhWYQw95iuQsusNSFyggBr3xVRbwxv8RZ57eg6XkyMNI8tjbLW166nHi4A
	5XjRfEP/WyLkCFJIsWhTMz6d6UOc81WKUyMwCBwaYA72aB9cWxCx7U8tDYTy8UAuDucn
	zJ6da4VnTo0oJhkV80Yy1R2HhtCWG0kKO0eN1HnCgz7vUSw1W+NJt6MhpLdQI6BHqOeQ
	ihCg==
Received: by 10.68.228.100 with SMTP id sh4mr42637891pbc.45.1346160926293;
	Tue, 28 Aug 2012 06:35:26 -0700 (PDT)
Received: from [10.10.1.100] (0127ahost2.starwoodbroadband.com. [12.105.246.2])
	by mx.google.com with ESMTPS id qn3sm17075033pbc.6.2012.08.28.06.35.22
	(version=SSLv3 cipher=OTHER); Tue, 28 Aug 2012 06:35:24 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.32.0.111121
Date: Tue, 28 Aug 2012 14:35:19 +0100
From: Keir Fraser <keir.xen@gmail.com>
To: Olaf Hering <olaf@aepfle.de>
Message-ID: <CC6287A7.3D199%keir.xen@gmail.com>
Thread-Topic: [Xen-devel] fixed location of share info page in HVM guests
Thread-Index: Ac2FIfceLgog6kZBWk2Re4JoolWrbw==
In-Reply-To: <20120828124255.GA32452@aepfle.de>
Mime-version: 1.0
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] fixed location of share info page in HVM guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 28/08/2012 13:42, "Olaf Hering" <olaf@aepfle.de> wrote:

> On Tue, Aug 28, Keir Fraser wrote:
> 
>> Okay, that was a bit too clever, trying to hide between IOAPIC and LAPIC
>> pages. How about a bit lower in memory -- FE700000-FE7FFFFF?
>> 
>> Everything in range FC000000-FFFFFFFF should already be marked
>> E820_RESERVED. You can test that, and also see
>> tools/firmware/hvmloader/e820.c:build_e820_table() (and note that
>> RESERVED_MEMBASE == FC000000).
> 
> Yes, FC000000-FFFFFFFF already has an E820_RESERVED entry. Within that
> range the kernel finds the IOAPIC, LAPIC and the HPET, perhaps because
> they are listed in the ACPI tables or because they are discovered in
> other ways.

Yes they are all listed in various ACPI tables.

> To make the location of the shared pages configurable from the tools,
> does tools/firmware/hvmloader/acpi/dsdt.asl have a way to describe such
> a special region? Maybe the kernel parses that table early enough,
> before the shared_info page gets used.

If you have access to the ASL parser that early, that would be awesome. You
can then just add a new name or something in the DSDT, like "Name (\XENR,
0xFE700000)" and then evaluate that Name object during boot to get the
numeric value (acpi_evaluate_object? Or acpi_evaluate_integer? Just
guessing!).

I have my doubts though... Some static ACPI tables are parsed very early,
but you need some more sophistication brought online before you can parse
out the DSDT and eval its contents. Well worth checking however, as sticking
this in the DSDT would be about the best option if it can work.

 -- Keir

> Olaf



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 28 13:58:22 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Aug 2012 13:58:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6MIs-0006ft-BU; Tue, 28 Aug 2012 13:58:02 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jbeulich@suse.com>) id 1T6MIq-0006fl-E4
	for xen-devel@lists.xen.org; Tue, 28 Aug 2012 13:58:00 +0000
Received: from [85.158.139.83:40956] by server-1.bemta-5.messagelabs.com id
	2A/FC-32692-76ECC305; Tue, 28 Aug 2012 13:57:59 +0000
X-Env-Sender: jbeulich@suse.com
X-Msg-Ref: server-12.tower-182.messagelabs.com!1346162277!28193579!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9526 invoked from network); 28 Aug 2012 13:57:57 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-182.messagelabs.com with SMTP;
	28 Aug 2012 13:57:57 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 28 Aug 2012 14:57:57 +0100
Message-Id: <503CDC73020000780008AA00@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Tue, 28 Aug 2012 14:57:55 +0100
From: "Jan Beulich" <jbeulich@suse.com>
To: <jinsong.liu@intel.com>
References: <DE8DF0795D48FD4CA783C40EC82923353192A5@SHSMSX101.ccr.corp.intel.com>
In-Reply-To: <DE8DF0795D48FD4CA783C40EC82923353192A5@SHSMSX101.ccr.corp.intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: keir@xen.org, Ian.Campbell@citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] vMCE patches rebase
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> "Liu, Jinsong" <jinsong.liu@intel.com> 08/27/12 12:07 PM >>>
> Is Xen 4.2 ready now? If yes, I would like to rebase the vMCE patches (which
> I sent several weeks ago, and would update according to your comments).

No, we're at RC3 (I think). A few more weeks, perhaps.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 28 14:01:22 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Aug 2012 14:01:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6MLl-0006qc-Tn; Tue, 28 Aug 2012 14:01:01 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T6MLk-0006qU-G9
	for xen-devel@lists.xen.org; Tue, 28 Aug 2012 14:01:00 +0000
Received: from [85.158.143.99:19731] by server-1.bemta-4.messagelabs.com id
	3F/F0-12504-B1FCC305; Tue, 28 Aug 2012 14:00:59 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-216.messagelabs.com!1346162440!28320638!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA1NDA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32455 invoked from network); 28 Aug 2012 14:00:41 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-9.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Aug 2012 14:00:41 -0000
X-IronPort-AV: E=Sophos;i="4.80,326,1344211200"; d="scan'208";a="14230536"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	28 Aug 2012 14:00:40 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1;
	Tue, 28 Aug 2012 15:00:40 +0100
Message-ID: <1346162438.9975.18.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Date: Tue, 28 Aug 2012 15:00:38 +0100
In-Reply-To: <5037D4C3.4060309@citrix.com>
References: <20120824164301.46800@gmx.net> <5037CB11.9000308@citrix.com>
	<1345835562.4847.3.camel@dagon.hellion.org.uk>
	<5037D4C3.4060309@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "p.d@gmx.de" <p.d@gmx.de>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Errors in compilation // xl_cmdimpl.c:2733:11 ...
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-08-24 at 20:23 +0100, Andrew Cooper wrote:
> Attached is a far slimmer version which explicitly sets hand to NULL at
> the top, and future-proofs the use of hand in the middle of the domain
> loop.

Applied, thanks.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 28 14:01:22 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Aug 2012 14:01:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6MLl-0006qc-Tn; Tue, 28 Aug 2012 14:01:01 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T6MLk-0006qU-G9
	for xen-devel@lists.xen.org; Tue, 28 Aug 2012 14:01:00 +0000
Received: from [85.158.143.99:19731] by server-1.bemta-4.messagelabs.com id
	3F/F0-12504-B1FCC305; Tue, 28 Aug 2012 14:00:59 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-216.messagelabs.com!1346162440!28320638!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA1NDA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32455 invoked from network); 28 Aug 2012 14:00:41 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-9.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Aug 2012 14:00:41 -0000
X-IronPort-AV: E=Sophos;i="4.80,326,1344211200"; d="scan'208";a="14230536"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	28 Aug 2012 14:00:40 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1;
	Tue, 28 Aug 2012 15:00:40 +0100
Message-ID: <1346162438.9975.18.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Date: Tue, 28 Aug 2012 15:00:38 +0100
In-Reply-To: <5037D4C3.4060309@citrix.com>
References: <20120824164301.46800@gmx.net> <5037CB11.9000308@citrix.com>
	<1345835562.4847.3.camel@dagon.hellion.org.uk>
	<5037D4C3.4060309@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "p.d@gmx.de" <p.d@gmx.de>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Errors in compilation // xl_cmdimpl.c:2733:11 ...
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-08-24 at 20:23 +0100, Andrew Cooper wrote:
> 
> 
> Attached is a far slimmer version which explicitly sets hand to NULL at
> the top, and future-proofs the use of hand in the middle of the domain
> loop.

Applied, thanks.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 28 14:06:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Aug 2012 14:06:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6MQW-000741-Ko; Tue, 28 Aug 2012 14:05:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jbeulich@suse.com>) id 1T6MQV-00073t-68
	for xen-devel@lists.xen.org; Tue, 28 Aug 2012 14:05:55 +0000
Received: from [85.158.143.35:31132] by server-2.bemta-4.messagelabs.com id
	30/8A-21239-240DC305; Tue, 28 Aug 2012 14:05:54 +0000
X-Env-Sender: jbeulich@suse.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1346162750!13904045!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 738 invoked from network); 28 Aug 2012 14:05:51 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-3.tower-21.messagelabs.com with SMTP;
	28 Aug 2012 14:05:51 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 28 Aug 2012 15:05:50 +0100
Message-Id: <503CDE4C020000780008AA09@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Tue, 28 Aug 2012 15:05:48 +0100
From: "Jan Beulich" <jbeulich@suse.com>
To: <Ian.Campbell@citrix.com>,<jinsong.liu@intel.com>
References: <DE8DF0795D48FD4CA783C40EC8292335317D90@SHSMSX101.ccr.corp.intel.com>
In-Reply-To: <DE8DF0795D48FD4CA783C40EC8292335317D90@SHSMSX101.ccr.corp.intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: keir@xen.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] X86/vMCE: guest broken page handling when
 migration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> "Liu, Jinsong" <jinsong.liu@intel.com> 08/27/12 4:13 AM >>> 
>X86/vMCE: guest broken page handling when migration 
>
>This patch handles guest broken pages during migration. 
>
>At the sender, a broken page is not mapped, and its content is not 
>copied to the target, since reading it could trigger a more serious 
>error (e.g. an SRAR error). Its pfn_type and pfn number are still 
>transferred to the target so that the target can take appropriate action. 
>
>At the target, the p2m entry for a broken page is set to p2m_ram_broken, 
>so that if the guest wrongly accesses the broken page again, the guest is killed as expected. 

Looks okay to me, but it would of course also need review by a tools person.

Please add it to your series when you resubmit for post-4.2.

 

Thanks, Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 28 14:25:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Aug 2012 14:25:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6MjZ-0007FH-EM; Tue, 28 Aug 2012 14:25:37 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jinsong.liu@intel.com>) id 1T6MjX-0007FC-KC
	for xen-devel@lists.xen.org; Tue, 28 Aug 2012 14:25:35 +0000
Received: from [85.158.138.51:60829] by server-4.bemta-3.messagelabs.com id
	62/36-04276-BD4DC305; Tue, 28 Aug 2012 14:25:31 +0000
X-Env-Sender: jinsong.liu@intel.com
X-Msg-Ref: server-4.tower-174.messagelabs.com!1346163926!28220308!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMjg4NDA5\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10703 invoked from network); 28 Aug 2012 14:25:28 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-4.tower-174.messagelabs.com with SMTP;
	28 Aug 2012 14:25:28 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga101.jf.intel.com with ESMTP; 28 Aug 2012 07:25:26 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.80,327,1344236400"; d="scan'208";a="192318286"
Received: from fmsmsx104.amr.corp.intel.com ([10.19.9.35])
	by orsmga002.jf.intel.com with ESMTP; 28 Aug 2012 07:25:24 -0700
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	FMSMSX104.amr.corp.intel.com (10.19.9.35) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Tue, 28 Aug 2012 07:25:24 -0700
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.239]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.175]) with mapi id
	14.01.0355.002; Tue, 28 Aug 2012 22:25:22 +0800
From: "Liu, Jinsong" <jinsong.liu@intel.com>
To: Jan Beulich <jbeulich@suse.com>
Thread-Topic: vMCE patches rebase
Thread-Index: AQHNhSUfMFc/5LR8TPK0QvHZaV42Z5dvR2og
Date: Tue, 28 Aug 2012 14:25:22 +0000
Message-ID: <DE8DF0795D48FD4CA783C40EC8292335319B31@SHSMSX101.ccr.corp.intel.com>
References: <DE8DF0795D48FD4CA783C40EC82923353192A5@SHSMSX101.ccr.corp.intel.com>
	<503CDC73020000780008AA00@nat28.tlf.novell.com>
In-Reply-To: <503CDC73020000780008AA00@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "keir@xen.org" <keir@xen.org>,
	"Ian.Campbell@citrix.com" <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] vMCE patches rebase
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jan Beulich wrote:
>>>> "Liu, Jinsong" <jinsong.liu@intel.com> 08/27/12 12:07 PM >>>
>> Is Xen 4.2 ready now? If yes, I would like to rebase the vMCE patches
>> (which I sent several weeks ago, and would update according to your
>> comments).
>
> No, we're at RC3 (I think). A few more weeks, perhaps.

OK, thanks! I will keep an eye on it. Jinsong
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 28 14:31:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Aug 2012 14:31:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6Mp9-0007O7-88; Tue, 28 Aug 2012 14:31:23 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jinsong.liu@intel.com>) id 1T6Mp6-0007Ny-Qw
	for xen-devel@lists.xen.org; Tue, 28 Aug 2012 14:31:21 +0000
Received: from [85.158.139.83:37951] by server-7.bemta-5.messagelabs.com id
	FB/28-19703-736DC305; Tue, 28 Aug 2012 14:31:19 +0000
X-Env-Sender: jinsong.liu@intel.com
X-Msg-Ref: server-15.tower-182.messagelabs.com!1346164272!27522375!1
X-Originating-IP: [143.182.124.37]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMzcgPT4gMTg4NTQ3\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12710 invoked from network); 28 Aug 2012 14:31:13 -0000
Received: from mga14.intel.com (HELO mga14.intel.com) (143.182.124.37)
	by server-15.tower-182.messagelabs.com with SMTP;
	28 Aug 2012 14:31:13 -0000
Received: from azsmga002.ch.intel.com ([10.2.17.35])
	by azsmga102.ch.intel.com with ESMTP; 28 Aug 2012 07:31:12 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.80,327,1344236400"; d="scan'208";a="138746698"
Received: from fmsmsx106.amr.corp.intel.com ([10.19.9.37])
	by AZSMGA002.ch.intel.com with ESMTP; 28 Aug 2012 07:30:49 -0700
Received: from fmsmsx102.amr.corp.intel.com (10.19.9.53) by
	FMSMSX106.amr.corp.intel.com (10.19.9.37) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Tue, 28 Aug 2012 07:30:38 -0700
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	FMSMSX102.amr.corp.intel.com (10.19.9.53) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Tue, 28 Aug 2012 07:30:38 -0700
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.239]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.175]) with mapi id
	14.01.0355.002; Tue, 28 Aug 2012 22:30:36 +0800
From: "Liu, Jinsong" <jinsong.liu@intel.com>
To: Jan Beulich <jbeulich@suse.com>, "Ian.Campbell@citrix.com"
	<Ian.Campbell@citrix.com>
Thread-Topic: [PATCH] X86/vMCE: guest broken page handling when migration
Thread-Index: AQHNhSY5Zp6ZOLjuREqA3X3AgCYhP5dvR6Cw
Date: Tue, 28 Aug 2012 14:30:36 +0000
Message-ID: <DE8DF0795D48FD4CA783C40EC8292335319B7E@SHSMSX101.ccr.corp.intel.com>
References: <DE8DF0795D48FD4CA783C40EC8292335317D90@SHSMSX101.ccr.corp.intel.com>
	<503CDE4C020000780008AA09@nat28.tlf.novell.com>
In-Reply-To: <503CDE4C020000780008AA09@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: yes
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
Content-Type: multipart/mixed;
	boundary="_002_DE8DF0795D48FD4CA783C40EC8292335319B7ESHSMSX101ccrcorpi_"
MIME-Version: 1.0
Cc: "keir@xen.org" <keir@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] X86/vMCE: guest broken page handling when
	migration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--_002_DE8DF0795D48FD4CA783C40EC8292335319B7ESHSMSX101ccrcorpi_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable

Jan Beulich wrote:
>>>> "Liu, Jinsong" <jinsong.liu@intel.com> 08/27/12 4:13 AM >>>
>> X86/vMCE: guest broken page handling when migration
>>
>> This patch handles guest broken pages during migration.
>>
>> At the sender, a broken page is not mapped, and its content is not
>> copied to the target, since reading it could trigger a more serious
>> error (e.g. an SRAR error). Its pfn_type and pfn number are still
>> transferred to the target so that the target can take appropriate
>> action.
>>
>> At the target, the p2m entry for a broken page is set to
>> p2m_ram_broken, so that if the guest wrongly accesses the broken page
>> again, the guest is killed as expected.
>
> Looks okay to me, but it would of course also need review by a tools
> person.
>
> Please add it to your series when you resubmit for post-4.2.
>

Yep. Ian/Keir, would you please help me have a look at the tools side, or
recommend someone?

Thanks,
Jinsong


--_002_DE8DF0795D48FD4CA783C40EC8292335319B7ESHSMSX101ccrcorpi_
Content-Type: application/octet-stream;
	name="9_vmce_migration_pfntype_broken.patch"
Content-Description: 9_vmce_migration_pfntype_broken.patch
Content-Disposition: attachment;
	filename="9_vmce_migration_pfntype_broken.patch"; size=8730;
	creation-date="Mon, 27 Aug 2012 11:03:05 GMT";
	modification-date="Mon, 27 Aug 2012 11:08:32 GMT"
Content-Transfer-Encoding: base64

WDg2L3ZNQ0U6IGd1ZXN0IGJyb2tlbiBwYWdlIGhhbmRsaW5nIHdoZW4gbWlncmF0aW9uDQoNClRo
aXMgcGF0Y2ggaXMgdXNlZCB0byBoYW5kbGUgZ3Vlc3QgYnJva2VuIHBhZ2Ugd2hlbiBtaWdyYXRp
b24uDQoNCkF0IHNlbmRlciwgdGhlIGJyb2tlbiBwYWdlIHdvdWxkIG5vdCBiZSBtYXBwZWQsIGFu
ZCB0aGUgZXJyb3IgcGFnZSANCmNvbnRlbnQgd291bGQgbm90IGJlIGNvcGllZCB0byB0YXJnZXQs
IG90aGVyd2lzZSBpdCBtYXkgdHJpZ2dlciBtb3JlIA0Kc2VyaW91cyBlcnJvciAoaS5lLiBTUkFS
IGVycm9yKS4gV2hpbGUgaXRzIHBmbl90eXBlIGFuZCBwZm4gbnVtYmVyIA0Kd291bGQgYmUgdHJh
bnNmZXJyZWQgdG8gdGFyZ2V0IHNvIHRoYXQgdGFyZ2V0IHRha2UgYXBwcm9wcmlhdGUgYWN0aW9u
Lg0KDQpBdCB0YXJnZXQsIGl0IHdvdWxkIHNldCBwMm0gYXMgcDJtX3JhbV9icm9rZW4gZm9yIGJy
b2tlbiBwYWdlLCBzbyB0aGF0IA0KaWYgZ3Vlc3QgY3JhenkgYWNjZXNzIHRoZSBicm9rZW4gcGFn
ZSBhZ2FpbiwgaXQgd291bGQga2lsbCBndWVzdCBhcyBleHBlY3RlZC4NCg0KU2lnbmVkLW9mZi1i
eTogTGl1LCBKaW5zb25nIDxqaW5zb25nLmxpdUBpbnRlbC5jb20+DQoNCmRpZmYgLXIgYjE3ZmIz
Y2I5MmQyIHRvb2xzL2xpYnhjL3hjX2RvbWFpbi5jDQotLS0gYS90b29scy9saWJ4Yy94Y19kb21h
aW4uYwlNb24gQXVnIDI3IDA1OjI3OjU0IDIwMTIgKzA4MDANCisrKyBiL3Rvb2xzL2xpYnhjL3hj
X2RvbWFpbi5jCU1vbiBBdWcgMjcgMjM6MjU6NDMgMjAxMiArMDgwMA0KQEAgLTMxNCw2ICszMTQs
MjIgQEANCiAgICAgcmV0dXJuIHJldCA/IC0xIDogMDsNCiB9DQogDQorLyogc2V0IGJyb2tlbiBw
YWdlIHAybSAqLw0KK2ludCB4Y19zZXRfYnJva2VuX3BhZ2VfcDJtKHhjX2ludGVyZmFjZSAqeGNo
LA0KKyAgICAgICAgICAgICAgICAgICAgICAgICAgIHVpbnQzMl90IGRvbWlkLA0KKyAgICAgICAg
ICAgICAgICAgICAgICAgICAgIHVuc2lnbmVkIGxvbmcgcGZuKQ0KK3sNCisgICAgaW50IHJldDsN
CisgICAgREVDTEFSRV9ET01DVEw7DQorDQorICAgIGRvbWN0bC5jbWQgPSBYRU5fRE9NQ1RMX3Nl
dF9icm9rZW5fcGFnZV9wMm07DQorICAgIGRvbWN0bC5kb21haW4gPSAoZG9taWRfdClkb21pZDsN
CisgICAgZG9tY3RsLnUuc2V0X2Jyb2tlbl9wYWdlX3AybS5wZm4gPSBwZm47DQorICAgIHJldCA9
IGRvX2RvbWN0bCh4Y2gsICZkb21jdGwpOw0KKw0KKyAgICByZXR1cm4gcmV0ID8gLTEgOiAwOw0K
K30NCisNCiAvKiBnZXQgaW5mbyBmcm9tIGh2bSBndWVzdCBmb3Igc2F2ZSAqLw0KIGludCB4Y19k
b21haW5faHZtX2dldGNvbnRleHQoeGNfaW50ZXJmYWNlICp4Y2gsDQogICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICB1aW50MzJfdCBkb21pZCwNCmRpZmYgLXIgYjE3ZmIzY2I5MmQyIHRvb2xz
L2xpYnhjL3hjX2RvbWFpbl9yZXN0b3JlLmMNCi0tLSBhL3Rvb2xzL2xpYnhjL3hjX2RvbWFpbl9y
ZXN0b3JlLmMJTW9uIEF1ZyAyNyAwNToyNzo1NCAyMDEyICswODAwDQorKysgYi90b29scy9saWJ4
Yy94Y19kb21haW5fcmVzdG9yZS5jCU1vbiBBdWcgMjcgMjM6MjU6NDMgMjAxMiArMDgwMA0KQEAg
LTk2Miw5ICs5NjIsMTUgQEANCiANCiAgICAgY291bnRwYWdlcyA9IGNvdW50Ow0KICAgICBmb3Ig
KGkgPSBvbGRjb3VudDsgaSA8IGJ1Zi0+bnJfcGFnZXM7ICsraSkNCi0gICAgICAgIGlmICgoYnVm
LT5wZm5fdHlwZXNbaV0gJiBYRU5fRE9NQ1RMX1BGSU5GT19MVEFCX01BU0spID09IFhFTl9ET01D
VExfUEZJTkZPX1hUQUINCi0gICAgICAgICAgICB8fChidWYtPnBmbl90eXBlc1tpXSAmIFhFTl9E
T01DVExfUEZJTkZPX0xUQUJfTUFTSykgPT0gWEVOX0RPTUNUTF9QRklORk9fWEFMTE9DKQ0KKyAg
ICB7DQorICAgICAgICB1bnNpZ25lZCBsb25nIHBhZ2V0eXBlOw0KKw0KKyAgICAgICAgcGFnZXR5
cGUgPSBidWYtPnBmbl90eXBlc1tpXSAmIFhFTl9ET01DVExfUEZJTkZPX0xUQUJfTUFTSzsNCisg
ICAgICAgIGlmICggcGFnZXR5cGUgPT0gWEVOX0RPTUNUTF9QRklORk9fWFRBQiB8fA0KKyAgICAg
ICAgICAgICBwYWdldHlwZSA9PSBYRU5fRE9NQ1RMX1BGSU5GT19CUk9LRU4gfHwNCisgICAgICAg
ICAgICAgcGFnZXR5cGUgPT0gWEVOX0RPTUNUTF9QRklORk9fWEFMTE9DICkNCiAgICAgICAgICAg
ICAtLWNvdW50cGFnZXM7DQorICAgIH0NCiANCiAgICAgaWYgKCFjb3VudHBhZ2VzKQ0KICAgICAg
ICAgcmV0dXJuIGNvdW50Ow0KQEAgLTEyMDAsNiArMTIwNiwxNyBAQA0KICAgICAgICAgICAgIC8q
IGEgYm9ndXMvdW5tYXBwZWQvYWxsb2NhdGUtb25seSBwYWdlOiBza2lwIGl0ICovDQogICAgICAg
ICAgICAgY29udGludWU7DQogDQorICAgICAgICBpZiAoIHBhZ2V0eXBlID09IFhFTl9ET01DVExf
UEZJTkZPX0JST0tFTiApDQorICAgICAgICB7DQorICAgICAgICAgICAgaWYgKCB4Y19zZXRfYnJv
a2VuX3BhZ2VfcDJtKHhjaCwgZG9tLCBwZm4pICkNCisgICAgICAgICAgICB7DQorICAgICAgICAg
ICAgICAgIEVSUk9SKCJTZXQgcDJtIGZvciBicm9rZW4gcGFnZSBmYWlsLCAiDQorICAgICAgICAg
ICAgICAgICAgICAgICJkb209JWQsIHBmbj0lbHhcbiIsIGRvbSwgcGZuKTsNCisgICAgICAgICAg
ICAgICAgZ290byBlcnJfbWFwcGVkOw0KKyAgICAgICAgICAgIH0NCisgICAgICAgICAgICBjb250
aW51ZTsNCisgICAgICAgIH0NCisNCiAgICAgICAgIGlmIChwZm5fZXJyW2ldKQ0KICAgICAgICAg
ew0KICAgICAgICAgICAgIEVSUk9SKCJ1bmV4cGVjdGVkIFBGTiBtYXBwaW5nIGZhaWx1cmUgcGZu
ICVseCBtYXBfbWZuICVseCBwMm1fbWZuICVseCIsDQpkaWZmIC1yIGIxN2ZiM2NiOTJkMiB0b29s
cy9saWJ4Yy94Y19kb21haW5fc2F2ZS5jDQotLS0gYS90b29scy9saWJ4Yy94Y19kb21haW5fc2F2
ZS5jCU1vbiBBdWcgMjcgMDU6Mjc6NTQgMjAxMiArMDgwMA0KKysrIGIvdG9vbHMvbGlieGMveGNf
ZG9tYWluX3NhdmUuYwlNb24gQXVnIDI3IDIzOjI1OjQzIDIwMTIgKzA4MDANCkBAIC0xMjg1LDYg
KzEyODUsMTMgQEANCiAgICAgICAgICAgICAgICAgaWYgKCAhaHZtICkNCiAgICAgICAgICAgICAg
ICAgICAgIGdtZm4gPSBwZm5fdG9fbWZuKGdtZm4pOw0KIA0KKyAgICAgICAgICAgICAgICBpZiAo
IHBmbl90eXBlW2pdID09IFhFTl9ET01DVExfUEZJTkZPX0JST0tFTiApDQorICAgICAgICAgICAg
ICAgIHsNCisgICAgICAgICAgICAgICAgICAgIHBmbl90eXBlW2pdIHw9IHBmbl9iYXRjaFtqXTsN
CisgICAgICAgICAgICAgICAgICAgICsrcnVuOw0KKyAgICAgICAgICAgICAgICAgICAgY29udGlu
dWU7DQorICAgICAgICAgICAgICAgIH0NCisNCiAgICAgICAgICAgICAgICAgaWYgKCBwZm5fZXJy
W2pdICkNCiAgICAgICAgICAgICAgICAgew0KICAgICAgICAgICAgICAgICAgICAgaWYgKCBwZm5f
dHlwZVtqXSA9PSBYRU5fRE9NQ1RMX1BGSU5GT19YVEFCICkNCkBAIC0xMzc5LDggKzEzODYsMTIg
QEANCiAgICAgICAgICAgICAgICAgICAgIH0NCiAgICAgICAgICAgICAgICAgfQ0KIA0KLSAgICAg
ICAgICAgICAgICAvKiBza2lwIHBhZ2VzIHRoYXQgYXJlbid0IHByZXNlbnQgb3IgYXJlIGFsbG9j
LW9ubHkgKi8NCisgICAgICAgICAgICAgICAgLyogDQorICAgICAgICAgICAgICAgICAqIHNraXAg
cGFnZXMgdGhhdCBhcmVuJ3QgcHJlc2VudCwNCisgICAgICAgICAgICAgICAgICogb3IgYXJlIGJy
b2tlbiwgb3IgYXJlIGFsbG9jLW9ubHkNCisgICAgICAgICAgICAgICAgICovDQogICAgICAgICAg
ICAgICAgIGlmICggcGFnZXR5cGUgPT0gWEVOX0RPTUNUTF9QRklORk9fWFRBQg0KKyAgICAgICAg
ICAgICAgICAgICAgfHwgcGFnZXR5cGUgPT0gWEVOX0RPTUNUTF9QRklORk9fQlJPS0VODQogICAg
ICAgICAgICAgICAgICAgICB8fCBwYWdldHlwZSA9PSBYRU5fRE9NQ1RMX1BGSU5GT19YQUxMT0Mg
KQ0KICAgICAgICAgICAgICAgICAgICAgY29udGludWU7DQogDQpkaWZmIC1yIGIxN2ZiM2NiOTJk
MiB0b29scy9saWJ4Yy94ZW5jdHJsLmgNCi0tLSBhL3Rvb2xzL2xpYnhjL3hlbmN0cmwuaAlNb24g
QXVnIDI3IDA1OjI3OjU0IDIwMTIgKzA4MDANCisrKyBiL3Rvb2xzL2xpYnhjL3hlbmN0cmwuaAlN
b24gQXVnIDI3IDIzOjI1OjQzIDIwMTIgKzA4MDANCkBAIC01ODgsNiArNTg4LDE3IEBADQogICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgIGludCAqdm1jZV93aGlsZV9taWdyYXRlKTsNCiAN
CiAvKioNCisgKiBUaGlzIGZ1bmN0aW9uIHNldCBwMm0gZm9yIGJyb2tlbiBwYWdlDQorICogJnBh
cm0geGNoIGEgaGFuZGxlIHRvIGFuIG9wZW4gaHlwZXJ2aXNvciBpbnRlcmZhY2UNCisgKiBAcGFy
bSBkb21pZCB0aGUgZG9tYWluIGlkIHdoaWNoIGJyb2tlbiBwYWdlIGJlbG9uZyB0bw0KKyAqIEBw
YXJtIHBmbiB0aGUgcGZuIG51bWJlciBvZiB0aGUgYnJva2VuIHBhZ2UNCisgKiBAcmV0dXJuIDAg
b24gc3VjY2VzcywgLTEgb24gZmFpbHVyZQ0KKyAqLw0KK2ludCB4Y19zZXRfYnJva2VuX3BhZ2Vf
cDJtKHhjX2ludGVyZmFjZSAqeGNoLA0KKyAgICAgICAgICAgICAgICAgICAgICAgICAgIHVpbnQz
Ml90IGRvbWlkLA0KKyAgICAgICAgICAgICAgICAgICAgICAgICAgIHVuc2lnbmVkIGxvbmcgcGZu
KTsNCisNCisvKioNCiAgKiBUaGlzIGZ1bmN0aW9uIHJldHVybnMgaW5mb3JtYXRpb24gYWJvdXQg
dGhlIGNvbnRleHQgb2YgYSBodm0gZG9tYWluDQogICogQHBhcm0geGNoIGEgaGFuZGxlIHRvIGFu
IG9wZW4gaHlwZXJ2aXNvciBpbnRlcmZhY2UNCiAgKiBAcGFybSBkb21pZCB0aGUgZG9tYWluIHRv
IGdldCBpbmZvcm1hdGlvbiBmcm9tDQpkaWZmIC1yIGIxN2ZiM2NiOTJkMiB4ZW4vYXJjaC94ODYv
ZG9tY3RsLmMNCi0tLSBhL3hlbi9hcmNoL3g4Ni9kb21jdGwuYwlNb24gQXVnIDI3IDA1OjI3OjU0
IDIwMTIgKzA4MDANCisrKyBiL3hlbi9hcmNoL3g4Ni9kb21jdGwuYwlNb24gQXVnIDI3IDIzOjI1
OjQzIDIwMTIgKzA4MDANCkBAIC0yMDMsMTIgKzIwMywxOCBAQA0KICAgICAgICAgICAgICAgICBm
b3IgKCBqID0gMDsgaiA8IGs7IGorKyApDQogICAgICAgICAgICAgICAgIHsNCiAgICAgICAgICAg
ICAgICAgICAgIHVuc2lnbmVkIGxvbmcgdHlwZSA9IDA7DQorICAgICAgICAgICAgICAgICAgICBw
Mm1fdHlwZV90IHQ7DQogDQotICAgICAgICAgICAgICAgICAgICBwYWdlID0gZ2V0X3BhZ2VfZnJv
bV9nZm4oZCwgYXJyW2pdLCBOVUxMLCBQMk1fQUxMT0MpOw0KKyAgICAgICAgICAgICAgICAgICAg
cGFnZSA9IGdldF9wYWdlX2Zyb21fZ2ZuKGQsIGFycltqXSwgJnQsIFAyTV9BTExPQyk7DQogDQog
ICAgICAgICAgICAgICAgICAgICBpZiAoIHVubGlrZWx5KCFwYWdlKSB8fA0KICAgICAgICAgICAg
ICAgICAgICAgICAgICB1bmxpa2VseShpc194ZW5faGVhcF9wYWdlKHBhZ2UpKSApDQotICAgICAg
ICAgICAgICAgICAgICAgICAgdHlwZSA9IFhFTl9ET01DVExfUEZJTkZPX1hUQUI7DQorICAgICAg
ICAgICAgICAgICAgICB7DQorICAgICAgICAgICAgICAgICAgICAgICAgaWYgKCBwMm1faXNfYnJv
a2VuKHQpICkNCisgICAgICAgICAgICAgICAgICAgICAgICAgICAgdHlwZSA9IFhFTl9ET01DVExf
UEZJTkZPX0JST0tFTjsNCisgICAgICAgICAgICAgICAgICAgICAgICBlbHNlDQorICAgICAgICAg
ICAgICAgICAgICAgICAgICAgIHR5cGUgPSBYRU5fRE9NQ1RMX1BGSU5GT19YVEFCOw0KKyAgICAg
ICAgICAgICAgICAgICAgfQ0KICAgICAgICAgICAgICAgICAgICAgZWxzZSBpZiAoIHhzbV9nZXRw
YWdlZnJhbWVpbmZvKHBhZ2UpICE9IDAgKQ0KICAgICAgICAgICAgICAgICAgICAgICAgIDsNCiAg
ICAgICAgICAgICAgICAgICAgIGVsc2UNCkBAIC0yMzEsNiArMjM3LDkgQEANCiANCiAgICAgICAg
ICAgICAgICAgICAgICAgICBpZiAoIHBhZ2UtPnUuaW51c2UudHlwZV9pbmZvICYgUEdUX3Bpbm5l
ZCApDQogICAgICAgICAgICAgICAgICAgICAgICAgICAgIHR5cGUgfD0gWEVOX0RPTUNUTF9QRklO
Rk9fTFBJTlRBQjsNCisNCisgICAgICAgICAgICAgICAgICAgICAgICBpZiAoIHBhZ2UtPmNvdW50
X2luZm8gJiBQR0NfYnJva2VuICkNCisgICAgICAgICAgICAgICAgICAgICAgICAgICAgdHlwZSA9
IFhFTl9ET01DVExfUEZJTkZPX0JST0tFTjsNCiAgICAgICAgICAgICAgICAgICAgIH0NCiANCiAg
ICAgICAgICAgICAgICAgICAgIGlmICggcGFnZSApDQpAQCAtMTU1Miw2ICsxNTYxLDI4IEBADQog
ICAgIH0NCiAgICAgYnJlYWs7DQogDQorICAgIGNhc2UgWEVOX0RPTUNUTF9zZXRfYnJva2VuX3Bh
Z2VfcDJtOg0KKyAgICB7DQorICAgICAgICBzdHJ1Y3QgZG9tYWluICpkOw0KKyAgICAgICAgcDJt
X3R5cGVfdCBwdDsNCisgICAgICAgIHVuc2lnbmVkIGxvbmcgcGZuOw0KKw0KKyAgICAgICAgZCA9
IHJjdV9sb2NrX2RvbWFpbl9ieV9pZChkb21jdGwtPmRvbWFpbik7DQorICAgICAgICBpZiAoIGQg
IT0gTlVMTCApDQorICAgICAgICB7DQorICAgICAgICAgICAgcGZuID0gZG9tY3RsLT51LnNldF9i
cm9rZW5fcGFnZV9wMm0ucGZuOw0KKw0KKyAgICAgICAgICAgIGdldF9nZm5fcXVlcnkoZCwgcGZu
LCAmcHQpOw0KKyAgICAgICAgICAgIHAybV9jaGFuZ2VfdHlwZShkLCBwZm4sIHB0LCBwMm1fcmFt
X2Jyb2tlbik7DQorICAgICAgICAgICAgcHV0X2dmbihkLCBwZm4pOw0KKw0KKyAgICAgICAgICAg
IHJjdV91bmxvY2tfZG9tYWluKGQpOw0KKyAgICAgICAgfQ0KKyAgICAgICAgZWxzZQ0KKyAgICAg
ICAgICAgIHJldCA9IC1FU1JDSDsNCisgICAgfQ0KKyAgICBicmVhazsNCisNCiAgICAgZGVmYXVs
dDoNCiAgICAgICAgIHJldCA9IGlvbW11X2RvX2RvbWN0bChkb21jdGwsIHVfZG9tY3RsKTsNCiAg
ICAgICAgIGJyZWFrOw0KZGlmZiAtciBiMTdmYjNjYjkyZDIgeGVuL2luY2x1ZGUvcHVibGljL2Rv
bWN0bC5oDQotLS0gYS94ZW4vaW5jbHVkZS9wdWJsaWMvZG9tY3RsLmgJTW9uIEF1ZyAyNyAwNToy
Nzo1NCAyMDEyICswODAwDQorKysgYi94ZW4vaW5jbHVkZS9wdWJsaWMvZG9tY3RsLmgJTW9uIEF1
ZyAyNyAyMzoyNTo0MyAyMDEyICswODAwDQpAQCAtMTM2LDYgKzEzNiw3IEBADQogI2RlZmluZSBY
RU5fRE9NQ1RMX1BGSU5GT19MUElOVEFCICgweDFVPDwzMSkNCiAjZGVmaW5lIFhFTl9ET01DVExf
UEZJTkZPX1hUQUIgICAgKDB4ZlU8PDI4KSAvKiBpbnZhbGlkIHBhZ2UgKi8NCiAjZGVmaW5lIFhF
Tl9ET01DVExfUEZJTkZPX1hBTExPQyAgKDB4ZVU8PDI4KSAvKiBhbGxvY2F0ZS1vbmx5IHBhZ2Ug
Ki8NCisjZGVmaW5lIFhFTl9ET01DVExfUEZJTkZPX0JST0tFTiAgKDB4ZFU8PDI4KSAvKiBicm9r
ZW4gcGFnZSAqLw0KICNkZWZpbmUgWEVOX0RPTUNUTF9QRklORk9fUEFHRURUQUIgKDB4OFU8PDI4
KQ0KICNkZWZpbmUgWEVOX0RPTUNUTF9QRklORk9fTFRBQl9NQVNLICgweGZVPDwyOCkNCiANCkBA
IC04NTYsNiArODU3LDEyIEBADQogdHlwZWRlZiBzdHJ1Y3QgeGVuX2RvbWN0bF92bWNlX21vbml0
b3IgeGVuX2RvbWN0bF92bWNlX21vbml0b3JfdDsNCiBERUZJTkVfWEVOX0dVRVNUX0hBTkRMRSh4
ZW5fZG9tY3RsX3ZtY2VfbW9uaXRvcl90KTsNCiANCitzdHJ1Y3QgeGVuX2RvbWN0bF9zZXRfYnJv
a2VuX3BhZ2VfcDJtIHsNCisgICAgdWludDY0X3QgcGZuOw0KK307DQordHlwZWRlZiBzdHJ1Y3Qg
eGVuX2RvbWN0bF9zZXRfYnJva2VuX3BhZ2VfcDJtIHhlbl9kb21jdGxfc2V0X2Jyb2tlbl9wYWdl
X3AybV90Ow0KK0RFRklORV9YRU5fR1VFU1RfSEFORExFKHhlbl9kb21jdGxfc2V0X2Jyb2tlbl9w
YWdlX3AybV90KTsNCisNCiBzdHJ1Y3QgeGVuX2RvbWN0bCB7DQogICAgIHVpbnQzMl90IGNtZDsN
CiAjZGVmaW5lIFhFTl9ET01DVExfY3JlYXRlZG9tYWluICAgICAgICAgICAgICAgICAgIDENCkBA
IC05MjMsNiArOTMwLDcgQEANCiAjZGVmaW5lIFhFTl9ET01DVExfc2V0X3ZpcnFfaGFuZGxlciAg
ICAgICAgICAgICAgNjYNCiAjZGVmaW5lIFhFTl9ET01DVExfdm1jZV9tb25pdG9yX3N0YXJ0ICAg
ICAgICAgICAgNjcNCiAjZGVmaW5lIFhFTl9ET01DVExfdm1jZV9tb25pdG9yX2VuZCAgICAgICAg
ICAgICAgNjgNCisjZGVmaW5lIFhFTl9ET01DVExfc2V0X2Jyb2tlbl9wYWdlX3AybSAgICAgICAg
ICAgNjkNCiAjZGVmaW5lIFhFTl9ET01DVExfZ2Ric3hfZ3Vlc3RtZW1pbyAgICAgICAgICAgIDEw
MDANCiAjZGVmaW5lIFhFTl9ET01DVExfZ2Ric3hfcGF1c2V2Y3B1ICAgICAgICAgICAgIDEwMDEN
CiAjZGVmaW5lIFhFTl9ET01DVExfZ2Ric3hfdW5wYXVzZXZjcHUgICAgICAgICAgIDEwMDINCkBA
IC05ODAsNiArOTg4LDcgQEANCiAgICAgICAgIHN0cnVjdCB4ZW5fZG9tY3RsX3NldF92aXJxX2hh
bmRsZXIgIHNldF92aXJxX2hhbmRsZXI7DQogICAgICAgICBzdHJ1Y3QgeGVuX2RvbWN0bF92bWNl
X21vbml0b3IgICAgICB2bWNlX21vbml0b3I7DQogICAgICAgICBzdHJ1Y3QgeGVuX2RvbWN0bF9n
ZGJzeF9tZW1pbyAgICAgICBnZGJzeF9ndWVzdF9tZW1pbzsNCisgICAgICAgIHN0cnVjdCB4ZW5f
ZG9tY3RsX3NldF9icm9rZW5fcGFnZV9wMm0gc2V0X2Jyb2tlbl9wYWdlX3AybTsNCiAgICAgICAg
IHN0cnVjdCB4ZW5fZG9tY3RsX2dkYnN4X3BhdXNldW5wX3ZjcHUgZ2Ric3hfcGF1c2V1bnBfdmNw
dTsNCiAgICAgICAgIHN0cnVjdCB4ZW5fZG9tY3RsX2dkYnN4X2RvbXN0YXR1cyAgIGdkYnN4X2Rv
bXN0YXR1czsNCiAgICAgICAgIHVpbnQ4X3QgICAgICAgICAgICAgICAgICAgICAgICAgICAgIHBh
ZFsxMjhdOw0K

--_002_DE8DF0795D48FD4CA783C40EC8292335319B7ESHSMSX101ccrcorpi_
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--_002_DE8DF0795D48FD4CA783C40EC8292335319B7ESHSMSX101ccrcorpi_--


From xen-devel-bounces@lists.xen.org Tue Aug 28 14:31:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Aug 2012 14:31:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6Mp9-0007O7-88; Tue, 28 Aug 2012 14:31:23 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jinsong.liu@intel.com>) id 1T6Mp6-0007Ny-Qw
	for xen-devel@lists.xen.org; Tue, 28 Aug 2012 14:31:21 +0000
Received: from [85.158.139.83:37951] by server-7.bemta-5.messagelabs.com id
	FB/28-19703-736DC305; Tue, 28 Aug 2012 14:31:19 +0000
X-Env-Sender: jinsong.liu@intel.com
X-Msg-Ref: server-15.tower-182.messagelabs.com!1346164272!27522375!1
X-Originating-IP: [143.182.124.37]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMzcgPT4gMTg4NTQ3\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12710 invoked from network); 28 Aug 2012 14:31:13 -0000
Received: from mga14.intel.com (HELO mga14.intel.com) (143.182.124.37)
	by server-15.tower-182.messagelabs.com with SMTP;
	28 Aug 2012 14:31:13 -0000
Received: from azsmga002.ch.intel.com ([10.2.17.35])
	by azsmga102.ch.intel.com with ESMTP; 28 Aug 2012 07:31:12 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.80,327,1344236400"; d="scan'208";a="138746698"
Received: from fmsmsx106.amr.corp.intel.com ([10.19.9.37])
	by AZSMGA002.ch.intel.com with ESMTP; 28 Aug 2012 07:30:49 -0700
Received: from fmsmsx102.amr.corp.intel.com (10.19.9.53) by
	FMSMSX106.amr.corp.intel.com (10.19.9.37) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Tue, 28 Aug 2012 07:30:38 -0700
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	FMSMSX102.amr.corp.intel.com (10.19.9.53) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Tue, 28 Aug 2012 07:30:38 -0700
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.239]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.175]) with mapi id
	14.01.0355.002; Tue, 28 Aug 2012 22:30:36 +0800
From: "Liu, Jinsong" <jinsong.liu@intel.com>
To: Jan Beulich <jbeulich@suse.com>, "Ian.Campbell@citrix.com"
	<Ian.Campbell@citrix.com>
Thread-Topic: [PATCH] X86/vMCE: guest broken page handling when migration
Thread-Index: AQHNhSY5Zp6ZOLjuREqA3X3AgCYhP5dvR6Cw
Date: Tue, 28 Aug 2012 14:30:36 +0000
Message-ID: <DE8DF0795D48FD4CA783C40EC8292335319B7E@SHSMSX101.ccr.corp.intel.com>
References: <DE8DF0795D48FD4CA783C40EC8292335317D90@SHSMSX101.ccr.corp.intel.com>
	<503CDE4C020000780008AA09@nat28.tlf.novell.com>
In-Reply-To: <503CDE4C020000780008AA09@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: yes
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
Content-Type: multipart/mixed;
	boundary="_002_DE8DF0795D48FD4CA783C40EC8292335319B7ESHSMSX101ccrcorpi_"
MIME-Version: 1.0
Cc: "keir@xen.org" <keir@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] X86/vMCE: guest broken page handling when
	migration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--_002_DE8DF0795D48FD4CA783C40EC8292335319B7ESHSMSX101ccrcorpi_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable

Jan Beulich wrote:
>>>> "Liu, Jinsong" <jinsong.liu@intel.com> 08/27/12 4:13 AM >>>
>> X86/vMCE: guest broken page handling during migration
>>
>> This patch handles guest broken pages during migration.
>>
>> At the sender, a broken page is not mapped, and its contents are not
>> copied to the target, since copying it could trigger a more serious
>> error (i.e. an SRAR error). Its pfn_type and pfn number are still
>> transferred so that the target can take appropriate action.
>>
>> At the target, the p2m type of a broken page is set to
>> p2m_ram_broken, so that if the guest accesses the broken page again,
>> it is killed as expected.
>=20
> Looks okay to me, but would also need looking at by a tools person of
> course.=20
>=20
> Please add to your series when you resubmit past-4.2.
>=20

Yep. Ian/Keir, would you please take a look at the tools side, or =
recommend someone?

Thanks,
Jinsong


--_002_DE8DF0795D48FD4CA783C40EC8292335319B7ESHSMSX101ccrcorpi_
Content-Type: application/octet-stream;
	name="9_vmce_migration_pfntype_broken.patch"
Content-Description: 9_vmce_migration_pfntype_broken.patch
Content-Disposition: attachment;
	filename="9_vmce_migration_pfntype_broken.patch"; size=8730;
	creation-date="Mon, 27 Aug 2012 11:03:05 GMT";
	modification-date="Mon, 27 Aug 2012 11:08:32 GMT"
Content-Transfer-Encoding: base64

WDg2L3ZNQ0U6IGd1ZXN0IGJyb2tlbiBwYWdlIGhhbmRsaW5nIHdoZW4gbWlncmF0aW9uDQoNClRo
aXMgcGF0Y2ggaXMgdXNlZCB0byBoYW5kbGUgZ3Vlc3QgYnJva2VuIHBhZ2Ugd2hlbiBtaWdyYXRp
b24uDQoNCkF0IHNlbmRlciwgdGhlIGJyb2tlbiBwYWdlIHdvdWxkIG5vdCBiZSBtYXBwZWQsIGFu
ZCB0aGUgZXJyb3IgcGFnZSANCmNvbnRlbnQgd291bGQgbm90IGJlIGNvcGllZCB0byB0YXJnZXQs
IG90aGVyd2lzZSBpdCBtYXkgdHJpZ2dlciBtb3JlIA0Kc2VyaW91cyBlcnJvciAoaS5lLiBTUkFS
IGVycm9yKS4gV2hpbGUgaXRzIHBmbl90eXBlIGFuZCBwZm4gbnVtYmVyIA0Kd291bGQgYmUgdHJh
bnNmZXJyZWQgdG8gdGFyZ2V0IHNvIHRoYXQgdGFyZ2V0IHRha2UgYXBwcm9wcmlhdGUgYWN0aW9u
Lg0KDQpBdCB0YXJnZXQsIGl0IHdvdWxkIHNldCBwMm0gYXMgcDJtX3JhbV9icm9rZW4gZm9yIGJy
b2tlbiBwYWdlLCBzbyB0aGF0IA0KaWYgZ3Vlc3QgY3JhenkgYWNjZXNzIHRoZSBicm9rZW4gcGFn
ZSBhZ2FpbiwgaXQgd291bGQga2lsbCBndWVzdCBhcyBleHBlY3RlZC4NCg0KU2lnbmVkLW9mZi1i
eTogTGl1LCBKaW5zb25nIDxqaW5zb25nLmxpdUBpbnRlbC5jb20+DQoNCmRpZmYgLXIgYjE3ZmIz
Y2I5MmQyIHRvb2xzL2xpYnhjL3hjX2RvbWFpbi5jDQotLS0gYS90b29scy9saWJ4Yy94Y19kb21h
aW4uYwlNb24gQXVnIDI3IDA1OjI3OjU0IDIwMTIgKzA4MDANCisrKyBiL3Rvb2xzL2xpYnhjL3hj
X2RvbWFpbi5jCU1vbiBBdWcgMjcgMjM6MjU6NDMgMjAxMiArMDgwMA0KQEAgLTMxNCw2ICszMTQs
MjIgQEANCiAgICAgcmV0dXJuIHJldCA/IC0xIDogMDsNCiB9DQogDQorLyogc2V0IGJyb2tlbiBw
YWdlIHAybSAqLw0KK2ludCB4Y19zZXRfYnJva2VuX3BhZ2VfcDJtKHhjX2ludGVyZmFjZSAqeGNo
LA0KKyAgICAgICAgICAgICAgICAgICAgICAgICAgIHVpbnQzMl90IGRvbWlkLA0KKyAgICAgICAg
ICAgICAgICAgICAgICAgICAgIHVuc2lnbmVkIGxvbmcgcGZuKQ0KK3sNCisgICAgaW50IHJldDsN
CisgICAgREVDTEFSRV9ET01DVEw7DQorDQorICAgIGRvbWN0bC5jbWQgPSBYRU5fRE9NQ1RMX3Nl
dF9icm9rZW5fcGFnZV9wMm07DQorICAgIGRvbWN0bC5kb21haW4gPSAoZG9taWRfdClkb21pZDsN
CisgICAgZG9tY3RsLnUuc2V0X2Jyb2tlbl9wYWdlX3AybS5wZm4gPSBwZm47DQorICAgIHJldCA9
IGRvX2RvbWN0bCh4Y2gsICZkb21jdGwpOw0KKw0KKyAgICByZXR1cm4gcmV0ID8gLTEgOiAwOw0K
K30NCisNCiAvKiBnZXQgaW5mbyBmcm9tIGh2bSBndWVzdCBmb3Igc2F2ZSAqLw0KIGludCB4Y19k
b21haW5faHZtX2dldGNvbnRleHQoeGNfaW50ZXJmYWNlICp4Y2gsDQogICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICB1aW50MzJfdCBkb21pZCwNCmRpZmYgLXIgYjE3ZmIzY2I5MmQyIHRvb2xz
L2xpYnhjL3hjX2RvbWFpbl9yZXN0b3JlLmMNCi0tLSBhL3Rvb2xzL2xpYnhjL3hjX2RvbWFpbl9y
ZXN0b3JlLmMJTW9uIEF1ZyAyNyAwNToyNzo1NCAyMDEyICswODAwDQorKysgYi90b29scy9saWJ4
Yy94Y19kb21haW5fcmVzdG9yZS5jCU1vbiBBdWcgMjcgMjM6MjU6NDMgMjAxMiArMDgwMA0KQEAg
LTk2Miw5ICs5NjIsMTUgQEANCiANCiAgICAgY291bnRwYWdlcyA9IGNvdW50Ow0KICAgICBmb3Ig
KGkgPSBvbGRjb3VudDsgaSA8IGJ1Zi0+bnJfcGFnZXM7ICsraSkNCi0gICAgICAgIGlmICgoYnVm
LT5wZm5fdHlwZXNbaV0gJiBYRU5fRE9NQ1RMX1BGSU5GT19MVEFCX01BU0spID09IFhFTl9ET01D
VExfUEZJTkZPX1hUQUINCi0gICAgICAgICAgICB8fChidWYtPnBmbl90eXBlc1tpXSAmIFhFTl9E
T01DVExfUEZJTkZPX0xUQUJfTUFTSykgPT0gWEVOX0RPTUNUTF9QRklORk9fWEFMTE9DKQ0KKyAg
ICB7DQorICAgICAgICB1bnNpZ25lZCBsb25nIHBhZ2V0eXBlOw0KKw0KKyAgICAgICAgcGFnZXR5
cGUgPSBidWYtPnBmbl90eXBlc1tpXSAmIFhFTl9ET01DVExfUEZJTkZPX0xUQUJfTUFTSzsNCisg
ICAgICAgIGlmICggcGFnZXR5cGUgPT0gWEVOX0RPTUNUTF9QRklORk9fWFRBQiB8fA0KKyAgICAg
ICAgICAgICBwYWdldHlwZSA9PSBYRU5fRE9NQ1RMX1BGSU5GT19CUk9LRU4gfHwNCisgICAgICAg
ICAgICAgcGFnZXR5cGUgPT0gWEVOX0RPTUNUTF9QRklORk9fWEFMTE9DICkNCiAgICAgICAgICAg
ICAtLWNvdW50cGFnZXM7DQorICAgIH0NCiANCiAgICAgaWYgKCFjb3VudHBhZ2VzKQ0KICAgICAg
ICAgcmV0dXJuIGNvdW50Ow0KQEAgLTEyMDAsNiArMTIwNiwxNyBAQA0KICAgICAgICAgICAgIC8q
IGEgYm9ndXMvdW5tYXBwZWQvYWxsb2NhdGUtb25seSBwYWdlOiBza2lwIGl0ICovDQogICAgICAg
ICAgICAgY29udGludWU7DQogDQorICAgICAgICBpZiAoIHBhZ2V0eXBlID09IFhFTl9ET01DVExf
UEZJTkZPX0JST0tFTiApDQorICAgICAgICB7DQorICAgICAgICAgICAgaWYgKCB4Y19zZXRfYnJv
a2VuX3BhZ2VfcDJtKHhjaCwgZG9tLCBwZm4pICkNCisgICAgICAgICAgICB7DQorICAgICAgICAg
ICAgICAgIEVSUk9SKCJTZXQgcDJtIGZvciBicm9rZW4gcGFnZSBmYWlsLCAiDQorICAgICAgICAg
ICAgICAgICAgICAgICJkb209JWQsIHBmbj0lbHhcbiIsIGRvbSwgcGZuKTsNCisgICAgICAgICAg
ICAgICAgZ290byBlcnJfbWFwcGVkOw0KKyAgICAgICAgICAgIH0NCisgICAgICAgICAgICBjb250
aW51ZTsNCisgICAgICAgIH0NCisNCiAgICAgICAgIGlmIChwZm5fZXJyW2ldKQ0KICAgICAgICAg
ew0KICAgICAgICAgICAgIEVSUk9SKCJ1bmV4cGVjdGVkIFBGTiBtYXBwaW5nIGZhaWx1cmUgcGZu
ICVseCBtYXBfbWZuICVseCBwMm1fbWZuICVseCIsDQpkaWZmIC1yIGIxN2ZiM2NiOTJkMiB0b29s
cy9saWJ4Yy94Y19kb21haW5fc2F2ZS5jDQotLS0gYS90b29scy9saWJ4Yy94Y19kb21haW5fc2F2
ZS5jCU1vbiBBdWcgMjcgMDU6Mjc6NTQgMjAxMiArMDgwMA0KKysrIGIvdG9vbHMvbGlieGMveGNf
ZG9tYWluX3NhdmUuYwlNb24gQXVnIDI3IDIzOjI1OjQzIDIwMTIgKzA4MDANCkBAIC0xMjg1LDYg
KzEyODUsMTMgQEANCiAgICAgICAgICAgICAgICAgaWYgKCAhaHZtICkNCiAgICAgICAgICAgICAg
ICAgICAgIGdtZm4gPSBwZm5fdG9fbWZuKGdtZm4pOw0KIA0KKyAgICAgICAgICAgICAgICBpZiAo
IHBmbl90eXBlW2pdID09IFhFTl9ET01DVExfUEZJTkZPX0JST0tFTiApDQorICAgICAgICAgICAg
ICAgIHsNCisgICAgICAgICAgICAgICAgICAgIHBmbl90eXBlW2pdIHw9IHBmbl9iYXRjaFtqXTsN
CisgICAgICAgICAgICAgICAgICAgICsrcnVuOw0KKyAgICAgICAgICAgICAgICAgICAgY29udGlu
dWU7DQorICAgICAgICAgICAgICAgIH0NCisNCiAgICAgICAgICAgICAgICAgaWYgKCBwZm5fZXJy
W2pdICkNCiAgICAgICAgICAgICAgICAgew0KICAgICAgICAgICAgICAgICAgICAgaWYgKCBwZm5f
dHlwZVtqXSA9PSBYRU5fRE9NQ1RMX1BGSU5GT19YVEFCICkNCkBAIC0xMzc5LDggKzEzODYsMTIg
QEANCiAgICAgICAgICAgICAgICAgICAgIH0NCiAgICAgICAgICAgICAgICAgfQ0KIA0KLSAgICAg
ICAgICAgICAgICAvKiBza2lwIHBhZ2VzIHRoYXQgYXJlbid0IHByZXNlbnQgb3IgYXJlIGFsbG9j
LW9ubHkgKi8NCisgICAgICAgICAgICAgICAgLyogDQorICAgICAgICAgICAgICAgICAqIHNraXAg
cGFnZXMgdGhhdCBhcmVuJ3QgcHJlc2VudCwNCisgICAgICAgICAgICAgICAgICogb3IgYXJlIGJy
b2tlbiwgb3IgYXJlIGFsbG9jLW9ubHkNCisgICAgICAgICAgICAgICAgICovDQogICAgICAgICAg
ICAgICAgIGlmICggcGFnZXR5cGUgPT0gWEVOX0RPTUNUTF9QRklORk9fWFRBQg0KKyAgICAgICAg
ICAgICAgICAgICAgfHwgcGFnZXR5cGUgPT0gWEVOX0RPTUNUTF9QRklORk9fQlJPS0VODQogICAg
ICAgICAgICAgICAgICAgICB8fCBwYWdldHlwZSA9PSBYRU5fRE9NQ1RMX1BGSU5GT19YQUxMT0Mg
KQ0KICAgICAgICAgICAgICAgICAgICAgY29udGludWU7DQogDQpkaWZmIC1yIGIxN2ZiM2NiOTJk
MiB0b29scy9saWJ4Yy94ZW5jdHJsLmgNCi0tLSBhL3Rvb2xzL2xpYnhjL3hlbmN0cmwuaAlNb24g
QXVnIDI3IDA1OjI3OjU0IDIwMTIgKzA4MDANCisrKyBiL3Rvb2xzL2xpYnhjL3hlbmN0cmwuaAlN
b24gQXVnIDI3IDIzOjI1OjQzIDIwMTIgKzA4MDANCkBAIC01ODgsNiArNTg4LDE3IEBADQogICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgIGludCAqdm1jZV93aGlsZV9taWdyYXRlKTsNCiAN
CiAvKioNCisgKiBUaGlzIGZ1bmN0aW9uIHNldCBwMm0gZm9yIGJyb2tlbiBwYWdlDQorICogJnBh
cm0geGNoIGEgaGFuZGxlIHRvIGFuIG9wZW4gaHlwZXJ2aXNvciBpbnRlcmZhY2UNCisgKiBAcGFy
bSBkb21pZCB0aGUgZG9tYWluIGlkIHdoaWNoIGJyb2tlbiBwYWdlIGJlbG9uZyB0bw0KKyAqIEBw
YXJtIHBmbiB0aGUgcGZuIG51bWJlciBvZiB0aGUgYnJva2VuIHBhZ2UNCisgKiBAcmV0dXJuIDAg
b24gc3VjY2VzcywgLTEgb24gZmFpbHVyZQ0KKyAqLw0KK2ludCB4Y19zZXRfYnJva2VuX3BhZ2Vf
cDJtKHhjX2ludGVyZmFjZSAqeGNoLA0KKyAgICAgICAgICAgICAgICAgICAgICAgICAgIHVpbnQz
Ml90IGRvbWlkLA0KKyAgICAgICAgICAgICAgICAgICAgICAgICAgIHVuc2lnbmVkIGxvbmcgcGZu
KTsNCisNCisvKioNCiAgKiBUaGlzIGZ1bmN0aW9uIHJldHVybnMgaW5mb3JtYXRpb24gYWJvdXQg
dGhlIGNvbnRleHQgb2YgYSBodm0gZG9tYWluDQogICogQHBhcm0geGNoIGEgaGFuZGxlIHRvIGFu
IG9wZW4gaHlwZXJ2aXNvciBpbnRlcmZhY2UNCiAgKiBAcGFybSBkb21pZCB0aGUgZG9tYWluIHRv
IGdldCBpbmZvcm1hdGlvbiBmcm9tDQpkaWZmIC1yIGIxN2ZiM2NiOTJkMiB4ZW4vYXJjaC94ODYv
ZG9tY3RsLmMNCi0tLSBhL3hlbi9hcmNoL3g4Ni9kb21jdGwuYwlNb24gQXVnIDI3IDA1OjI3OjU0
IDIwMTIgKzA4MDANCisrKyBiL3hlbi9hcmNoL3g4Ni9kb21jdGwuYwlNb24gQXVnIDI3IDIzOjI1
OjQzIDIwMTIgKzA4MDANCkBAIC0yMDMsMTIgKzIwMywxOCBAQA0KICAgICAgICAgICAgICAgICBm
b3IgKCBqID0gMDsgaiA8IGs7IGorKyApDQogICAgICAgICAgICAgICAgIHsNCiAgICAgICAgICAg
ICAgICAgICAgIHVuc2lnbmVkIGxvbmcgdHlwZSA9IDA7DQorICAgICAgICAgICAgICAgICAgICBw
Mm1fdHlwZV90IHQ7DQogDQotICAgICAgICAgICAgICAgICAgICBwYWdlID0gZ2V0X3BhZ2VfZnJv
bV9nZm4oZCwgYXJyW2pdLCBOVUxMLCBQMk1fQUxMT0MpOw0KKyAgICAgICAgICAgICAgICAgICAg
cGFnZSA9IGdldF9wYWdlX2Zyb21fZ2ZuKGQsIGFycltqXSwgJnQsIFAyTV9BTExPQyk7DQogDQog
ICAgICAgICAgICAgICAgICAgICBpZiAoIHVubGlrZWx5KCFwYWdlKSB8fA0KICAgICAgICAgICAg
ICAgICAgICAgICAgICB1bmxpa2VseShpc194ZW5faGVhcF9wYWdlKHBhZ2UpKSApDQotICAgICAg
ICAgICAgICAgICAgICAgICAgdHlwZSA9IFhFTl9ET01DVExfUEZJTkZPX1hUQUI7DQorICAgICAg
ICAgICAgICAgICAgICB7DQorICAgICAgICAgICAgICAgICAgICAgICAgaWYgKCBwMm1faXNfYnJv
a2VuKHQpICkNCisgICAgICAgICAgICAgICAgICAgICAgICAgICAgdHlwZSA9IFhFTl9ET01DVExf
UEZJTkZPX0JST0tFTjsNCisgICAgICAgICAgICAgICAgICAgICAgICBlbHNlDQorICAgICAgICAg
ICAgICAgICAgICAgICAgICAgIHR5cGUgPSBYRU5fRE9NQ1RMX1BGSU5GT19YVEFCOw0KKyAgICAg
ICAgICAgICAgICAgICAgfQ0KICAgICAgICAgICAgICAgICAgICAgZWxzZSBpZiAoIHhzbV9nZXRw
YWdlZnJhbWVpbmZvKHBhZ2UpICE9IDAgKQ0KICAgICAgICAgICAgICAgICAgICAgICAgIDsNCiAg
ICAgICAgICAgICAgICAgICAgIGVsc2UNCkBAIC0yMzEsNiArMjM3LDkgQEANCiANCiAgICAgICAg
ICAgICAgICAgICAgICAgICBpZiAoIHBhZ2UtPnUuaW51c2UudHlwZV9pbmZvICYgUEdUX3Bpbm5l
ZCApDQogICAgICAgICAgICAgICAgICAgICAgICAgICAgIHR5cGUgfD0gWEVOX0RPTUNUTF9QRklO
Rk9fTFBJTlRBQjsNCisNCisgICAgICAgICAgICAgICAgICAgICAgICBpZiAoIHBhZ2UtPmNvdW50
X2luZm8gJiBQR0NfYnJva2VuICkNCisgICAgICAgICAgICAgICAgICAgICAgICAgICAgdHlwZSA9
IFhFTl9ET01DVExfUEZJTkZPX0JST0tFTjsNCiAgICAgICAgICAgICAgICAgICAgIH0NCiANCiAg
ICAgICAgICAgICAgICAgICAgIGlmICggcGFnZSApDQpAQCAtMTU1Miw2ICsxNTYxLDI4IEBADQog
ICAgIH0NCiAgICAgYnJlYWs7DQogDQorICAgIGNhc2UgWEVOX0RPTUNUTF9zZXRfYnJva2VuX3Bh
Z2VfcDJtOg0KKyAgICB7DQorICAgICAgICBzdHJ1Y3QgZG9tYWluICpkOw0KKyAgICAgICAgcDJt
X3R5cGVfdCBwdDsNCisgICAgICAgIHVuc2lnbmVkIGxvbmcgcGZuOw0KKw0KKyAgICAgICAgZCA9
IHJjdV9sb2NrX2RvbWFpbl9ieV9pZChkb21jdGwtPmRvbWFpbik7DQorICAgICAgICBpZiAoIGQg
IT0gTlVMTCApDQorICAgICAgICB7DQorICAgICAgICAgICAgcGZuID0gZG9tY3RsLT51LnNldF9i
cm9rZW5fcGFnZV9wMm0ucGZuOw0KKw0KKyAgICAgICAgICAgIGdldF9nZm5fcXVlcnkoZCwgcGZu
LCAmcHQpOw0KKyAgICAgICAgICAgIHAybV9jaGFuZ2VfdHlwZShkLCBwZm4sIHB0LCBwMm1fcmFt
X2Jyb2tlbik7DQorICAgICAgICAgICAgcHV0X2dmbihkLCBwZm4pOw0KKw0KKyAgICAgICAgICAg
IHJjdV91bmxvY2tfZG9tYWluKGQpOw0KKyAgICAgICAgfQ0KKyAgICAgICAgZWxzZQ0KKyAgICAg
ICAgICAgIHJldCA9IC1FU1JDSDsNCisgICAgfQ0KKyAgICBicmVhazsNCisNCiAgICAgZGVmYXVs
dDoNCiAgICAgICAgIHJldCA9IGlvbW11X2RvX2RvbWN0bChkb21jdGwsIHVfZG9tY3RsKTsNCiAg
ICAgICAgIGJyZWFrOw0KZGlmZiAtciBiMTdmYjNjYjkyZDIgeGVuL2luY2x1ZGUvcHVibGljL2Rv
bWN0bC5oDQotLS0gYS94ZW4vaW5jbHVkZS9wdWJsaWMvZG9tY3RsLmgJTW9uIEF1ZyAyNyAwNToy
Nzo1NCAyMDEyICswODAwDQorKysgYi94ZW4vaW5jbHVkZS9wdWJsaWMvZG9tY3RsLmgJTW9uIEF1
ZyAyNyAyMzoyNTo0MyAyMDEyICswODAwDQpAQCAtMTM2LDYgKzEzNiw3IEBADQogI2RlZmluZSBY
RU5fRE9NQ1RMX1BGSU5GT19MUElOVEFCICgweDFVPDwzMSkNCiAjZGVmaW5lIFhFTl9ET01DVExf
UEZJTkZPX1hUQUIgICAgKDB4ZlU8PDI4KSAvKiBpbnZhbGlkIHBhZ2UgKi8NCiAjZGVmaW5lIFhF
Tl9ET01DVExfUEZJTkZPX1hBTExPQyAgKDB4ZVU8PDI4KSAvKiBhbGxvY2F0ZS1vbmx5IHBhZ2Ug
Ki8NCisjZGVmaW5lIFhFTl9ET01DVExfUEZJTkZPX0JST0tFTiAgKDB4ZFU8PDI4KSAvKiBicm9r
ZW4gcGFnZSAqLw0KICNkZWZpbmUgWEVOX0RPTUNUTF9QRklORk9fUEFHRURUQUIgKDB4OFU8PDI4
KQ0KICNkZWZpbmUgWEVOX0RPTUNUTF9QRklORk9fTFRBQl9NQVNLICgweGZVPDwyOCkNCiANCkBA
IC04NTYsNiArODU3LDEyIEBADQogdHlwZWRlZiBzdHJ1Y3QgeGVuX2RvbWN0bF92bWNlX21vbml0
b3IgeGVuX2RvbWN0bF92bWNlX21vbml0b3JfdDsNCiBERUZJTkVfWEVOX0dVRVNUX0hBTkRMRSh4
ZW5fZG9tY3RsX3ZtY2VfbW9uaXRvcl90KTsNCiANCitzdHJ1Y3QgeGVuX2RvbWN0bF9zZXRfYnJv
a2VuX3BhZ2VfcDJtIHsNCisgICAgdWludDY0X3QgcGZuOw0KK307DQordHlwZWRlZiBzdHJ1Y3Qg
eGVuX2RvbWN0bF9zZXRfYnJva2VuX3BhZ2VfcDJtIHhlbl9kb21jdGxfc2V0X2Jyb2tlbl9wYWdl
X3AybV90Ow0KK0RFRklORV9YRU5fR1VFU1RfSEFORExFKHhlbl9kb21jdGxfc2V0X2Jyb2tlbl9w
YWdlX3AybV90KTsNCisNCiBzdHJ1Y3QgeGVuX2RvbWN0bCB7DQogICAgIHVpbnQzMl90IGNtZDsN
CiAjZGVmaW5lIFhFTl9ET01DVExfY3JlYXRlZG9tYWluICAgICAgICAgICAgICAgICAgIDENCkBA
IC05MjMsNiArOTMwLDcgQEANCiAjZGVmaW5lIFhFTl9ET01DVExfc2V0X3ZpcnFfaGFuZGxlciAg
ICAgICAgICAgICAgNjYNCiAjZGVmaW5lIFhFTl9ET01DVExfdm1jZV9tb25pdG9yX3N0YXJ0ICAg
ICAgICAgICAgNjcNCiAjZGVmaW5lIFhFTl9ET01DVExfdm1jZV9tb25pdG9yX2VuZCAgICAgICAg
ICAgICAgNjgNCisjZGVmaW5lIFhFTl9ET01DVExfc2V0X2Jyb2tlbl9wYWdlX3AybSAgICAgICAg
ICAgNjkNCiAjZGVmaW5lIFhFTl9ET01DVExfZ2Ric3hfZ3Vlc3RtZW1pbyAgICAgICAgICAgIDEw
MDANCiAjZGVmaW5lIFhFTl9ET01DVExfZ2Ric3hfcGF1c2V2Y3B1ICAgICAgICAgICAgIDEwMDEN
CiAjZGVmaW5lIFhFTl9ET01DVExfZ2Ric3hfdW5wYXVzZXZjcHUgICAgICAgICAgIDEwMDINCkBA
IC05ODAsNiArOTg4LDcgQEANCiAgICAgICAgIHN0cnVjdCB4ZW5fZG9tY3RsX3NldF92aXJxX2hh
bmRsZXIgIHNldF92aXJxX2hhbmRsZXI7DQogICAgICAgICBzdHJ1Y3QgeGVuX2RvbWN0bF92bWNl
X21vbml0b3IgICAgICB2bWNlX21vbml0b3I7DQogICAgICAgICBzdHJ1Y3QgeGVuX2RvbWN0bF9n
ZGJzeF9tZW1pbyAgICAgICBnZGJzeF9ndWVzdF9tZW1pbzsNCisgICAgICAgIHN0cnVjdCB4ZW5f
ZG9tY3RsX3NldF9icm9rZW5fcGFnZV9wMm0gc2V0X2Jyb2tlbl9wYWdlX3AybTsNCiAgICAgICAg
IHN0cnVjdCB4ZW5fZG9tY3RsX2dkYnN4X3BhdXNldW5wX3ZjcHUgZ2Ric3hfcGF1c2V1bnBfdmNw
dTsNCiAgICAgICAgIHN0cnVjdCB4ZW5fZG9tY3RsX2dkYnN4X2RvbXN0YXR1cyAgIGdkYnN4X2Rv
bXN0YXR1czsNCiAgICAgICAgIHVpbnQ4X3QgICAgICAgICAgICAgICAgICAgICAgICAgICAgIHBh
ZFsxMjhdOw0K

--_002_DE8DF0795D48FD4CA783C40EC8292335319B7ESHSMSX101ccrcorpi_
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--_002_DE8DF0795D48FD4CA783C40EC8292335319B7ESHSMSX101ccrcorpi_--


From xen-devel-bounces@lists.xen.org Tue Aug 28 14:33:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Aug 2012 14:33:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6Mqz-0007Ww-Ue; Tue, 28 Aug 2012 14:33:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <gbtju85@gmail.com>) id 1T6Mqy-0007Wp-AZ
	for xen-devel@lists.xen.org; Tue, 28 Aug 2012 14:33:16 +0000
Received: from [85.158.143.99:25521] by server-2.bemta-4.messagelabs.com id
	37/21-21239-BA6DC305; Tue, 28 Aug 2012 14:33:15 +0000
X-Env-Sender: gbtju85@gmail.com
X-Msg-Ref: server-10.tower-216.messagelabs.com!1346164392!21967254!1
X-Originating-IP: [209.85.215.45]
X-SpamReason: No, hits=0.9 required=7.0 tests=HTML_40_50,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25953 invoked from network); 28 Aug 2012 14:33:13 -0000
Received: from mail-lpp01m010-f45.google.com (HELO
	mail-lpp01m010-f45.google.com) (209.85.215.45)
	by server-10.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Aug 2012 14:33:13 -0000
Received: by lagz14 with SMTP id z14so3842575lag.32
	for <xen-devel@lists.xen.org>; Tue, 28 Aug 2012 07:33:12 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=izc3U69JFHgl5HgI4BwSLK5B4J7T+RXloNcVff6Mr0I=;
	b=WXba7/1ERCLtaBuD+8ND9o+3zNh3+1A/9SMRLzGAM/Aa97ORbupwH7j3TPMFOt5tMv
	GoV58geXPqlKxKDCR9jvRPYUpmYSVowO469bZeHUhnJrmD7529hOcHdYZQsf20Tu3J++
	77iA686Cfc+jVQLBJFSI72JUgDmLgumN+h5nO1HWe5HsyQzo43u4/5jX68r5XJOe069u
	fy6gMlbmhjTUg6dOOINr1564L2sjCNYYc8hGvyF8qWacqE26oBxK+4Xr7C0bDf5PoHFZ
	BJGJyhYhLgaQIYzevucVhq+OUZcWuoGnyyenHi4kbN9TXFemRrikYQpQlC2yMqKYkgQl
	yetw==
MIME-Version: 1.0
Received: by 10.152.146.169 with SMTP id td9mr18575770lab.42.1346164392604;
	Tue, 28 Aug 2012 07:33:12 -0700 (PDT)
Received: by 10.114.2.193 with HTTP; Tue, 28 Aug 2012 07:33:12 -0700 (PDT)
In-Reply-To: <1346142969.16230.3.camel@zakaz.uk.xensource.com>
References: <CAEQjb-SdU+q9eGnwY4hfyoQe8mBNVM6qQp+r_bwxJ+btnR-EDQ@mail.gmail.com>
	<1346142969.16230.3.camel@zakaz.uk.xensource.com>
Date: Tue, 28 Aug 2012 22:33:12 +0800
Message-ID: <CAEQjb-S8Ym1TTr-P65abOWwfRCEOa7Q3JdRhMvJMMT5q+Zb0kg@mail.gmail.com>
From: Bei Guan <gbtju85@gmail.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: Jordan Justen <jljusten@gmail.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] How to decide the xenstore path for a domain
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3175140117307060235=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3175140117307060235==
Content-Type: multipart/alternative; boundary=e89a8f2345896e327304c854529e

--e89a8f2345896e327304c854529e
Content-Type: text/plain; charset=ISO-8859-1

2012/8/28 Ian Campbell <Ian.Campbell@citrix.com>

> On Sat, 2012-08-25 at 12:15 +0100, Bei Guan wrote:
> > Hi,
> >
> >
> > Can anyone tell me how to decide the number in the Xenstore key path
> > for a DomU's front driver?
> > In Mini-OS, the block front driver writes its key-value pairs to
> > "device/vbd/768/[ring-ref|event-channel|protocol|...]" according to the
> > code in mini-os/blkfront.c.
> > So why does it use the number "768" in the path here? Could it be a
> > different number? For a new DomU's block front driver, how is this
> > number decided?
>
> It is the VBD number, as described in docs/misc/vbd-interface.txt. 768
> is "3<<8 | 0" per the "Concrete encoding in the VBD interface (in
> xenstore)" section.
>
> You (as frontend driver author) don't need to decide it since it is
> provided by the toolstack/guest config. You should just process each
> sub-directory of "device/vbd" as a separate disk.
>
Ok, I got it. Thank you so much.


Thanks,
Bei Guan

>
> Ian.
>
>
>
>


-- 
Best Regards,
Bei Guan

--e89a8f2345896e327304c854529e
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

<br><br><div class=3D"gmail_quote">2012/8/28 Ian Campbell <span dir=3D"ltr"=
>&lt;<a href=3D"mailto:Ian.Campbell@citrix.com" target=3D"_blank">Ian.Campb=
ell@citrix.com</a>&gt;</span><br><blockquote class=3D"gmail_quote" style=3D=
"margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div class=3D"im">On Sat, 2012-08-25 at 12:15 +0100, Bei Guan wrote:<br>
&gt; Hi,<br>
&gt;<br>
&gt;<br>
&gt; Can anyone tell me how to decide the number in the Xenstore key path<b=
r>
&gt; for a DomU&#39;s front driver?<br>
&gt; In Mini-OS, the block front driver write the key-value to<br>
&gt; &quot;device/vbd/768/[ring-ref|event-channelprotocol|...]&quot; accord=
ing to the<br>
&gt; code mini-os/blkfront.c.<br>
&gt; So why does it use the number &quot;768&quot; in the path here? Can th=
is number<br>
&gt; be another one? For a new DomU&#39;s block front driver, how to decide=
<br>
&gt; this number?<br>
<br>
</div>It is the VBD number, as described in docs/misc/vbd-interface.txt. 76=
8<br>
is &quot;3&lt;&lt;8 | 0&quot; per the &quot;Concrete encoding in the VBD in=
terface (in<br>
xenstore)&quot; section.<br>
<br>
You (as frontend driver author) don&#39;t need to decide it since it is<br>
provided by the toolstack/guest config. You should just process each<br>
sub-directory of &quot;device/vbd&quot; as a separate disk.<br></blockquote=
><div>Ok, I got it. Thank you so much.=A0</div><div><br></div><div><br></di=
v><div>Thanks,</div><div>Bei Guan</div><blockquote class=3D"gmail_quote" st=
yle=3D"margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">

<span class=3D"HOEnZb"><font color=3D"#888888"><br>
Ian.<br>
<br>
<br>
<br>
</font></span></blockquote></div><br><br clear=3D"all"><div><br></div>-- <b=
r>Best Regards,<div>Bei Guan</div><br>

--e89a8f2345896e327304c854529e--


--===============3175140117307060235==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3175140117307060235==--



From xen-devel-bounces@lists.xen.org Tue Aug 28 17:17:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Aug 2012 17:17:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6PP2-00019H-57; Tue, 28 Aug 2012 17:16:36 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1T6PP1-00019C-3M
	for xen-devel@lists.xen.org; Tue, 28 Aug 2012 17:16:35 +0000
Received: from [85.158.143.35:33333] by server-3.bemta-4.messagelabs.com id
	C8/AC-08232-2FCFC305; Tue, 28 Aug 2012 17:16:34 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-14.tower-21.messagelabs.com!1346174193!16583005!1
X-Originating-IP: [192.89.123.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjg5LjEyMy4yNSA9PiA0OTQxOTg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9370 invoked from network); 28 Aug 2012 17:16:33 -0000
Received: from smtp.tele.fi (HELO smtp.tele.fi) (192.89.123.25)
	by server-14.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 28 Aug 2012 17:16:33 -0000
X-Originating-Ip: [194.89.68.22]
Received: from ydin.reaktio.net (reaktio.net [194.89.68.22])
	by smtp.tele.fi (Postfix) with ESMTP id 8190C2DB0;
	Tue, 28 Aug 2012 20:16:32 +0300 (EEST)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id C8B8F2005D; Tue, 28 Aug 2012 20:16:31 +0300 (EEST)
Date: Tue, 28 Aug 2012 20:16:31 +0300
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: greg@enjellic.com
Message-ID: <20120828171631.GJ19851@reaktio.net>
References: <201208281225.q7SCP1WO017490@wind.enjellic.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <201208281225.q7SCP1WO017490@wind.enjellic.com>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] XEN 4.1.3 blktap2 patches.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Aug 28, 2012 at 07:25:01AM -0500, Dr. Greg Wettstein wrote:
> Good morning, hope the day is going well for everyone.
> 

Hello,

> The patches to fix the blktap2 issues which result in orphaned
> tapdisk2 processes and the transitory deadlock on guest shutdown
> didn't make it into the 4.1.3 release.  Updated patches to address
> these problems are available at the following location:
> 
> 	ftp://ftp.enjellic.com/pub/xen/xen-4.1.3.blktap1.patch
> 	ftp://ftp.enjellic.com/pub/xen/xen-4.1.3.blktap2.patch
> 
> The patches are designed to be applied in order and have been verified
> to work against the 4.1.3 release.
> 

If these patches are already in xen-unstable then they should definitely be
applied to xen-4.1-testing.hg for Xen 4.1.4.

Thanks,

-- Pasi

> Best wishes for a productive week.
> 
> As always,
> Dr. G.W. Wettstein, Ph.D.   Enjellic Systems Development, LLC.
> 4206 N. 19th Ave.           Specializing in information infra-structure
> Fargo, ND  58102            development.
> PH: 701-281-1686
> FAX: 701-281-3949           EMAIL: greg@enjellic.com
> ------------------------------------------------------------------------------
> "There are two things that are infinite; Human stupidity and the
>  universe.  And I'm not sure about the universe."
>                                 -- Albert Einstein
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 28 17:27:19 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Aug 2012 17:27:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6PYy-0001Iw-8g; Tue, 28 Aug 2012 17:26:52 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T6PYx-0001Ir-8H
	for xen-devel@lists.xensource.com; Tue, 28 Aug 2012 17:26:51 +0000
Received: from [85.158.143.35:15545] by server-1.bemta-4.messagelabs.com id
	4D/FF-12504-A5FFC305; Tue, 28 Aug 2012 17:26:50 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1346174807!12572797!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA1NDA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32291 invoked from network); 28 Aug 2012 17:26:48 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Aug 2012 17:26:48 -0000
X-IronPort-AV: E=Sophos;i="4.80,327,1344211200"; d="scan'208";a="14234595"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	28 Aug 2012 17:26:37 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Tue, 28 Aug 2012 18:26:37 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1T6PYi-0002jd-T4;
	Tue, 28 Aug 2012 17:26:36 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1T6PYi-00068X-6p;
	Tue, 28 Aug 2012 18:26:36 +0100
To: xen-devel@lists.xensource.com
Message-ID: <osstest-13634-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 28 Aug 2012 18:26:36 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.0 test] 13634: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13634 linux-3.0 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13634/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-credit2   14 guest-localmigrate/x10    fail REGR. vs. 13608

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf      9 guest-start                  fail   like 13608
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13608
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13608
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13608
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13608

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  8 debian-fixup                fail never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass

version targeted for testing:
 linux                5aa287dcf1b5879aa0150b0511833c52885f5b4c
baseline version:
 linux                a422ca75bd264cd26bafeb6305655245d2ea7c6b

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   fail    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 5aa287dcf1b5879aa0150b0511833c52885f5b4c
Author: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Date:   Sun Aug 26 15:12:29 2012 -0700

    Linux 3.0.42

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 28 18:35:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Aug 2012 18:35:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6QdJ-0003iG-Qf; Tue, 28 Aug 2012 18:35:25 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>)
	id 1T6QdH-0003hw-Rs; Tue, 28 Aug 2012 18:35:24 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-13.tower-27.messagelabs.com!1346178917!9205236!1
X-Originating-IP: [192.89.123.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjg5LjEyMy4yNSA9PiA0OTQxOTg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13625 invoked from network); 28 Aug 2012 18:35:17 -0000
Received: from smtp.tele.fi (HELO smtp.tele.fi) (192.89.123.25)
	by server-13.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 28 Aug 2012 18:35:17 -0000
X-Originating-Ip: [194.89.68.22]
Received: from ydin.reaktio.net (reaktio.net [194.89.68.22])
	by smtp.tele.fi (Postfix) with ESMTP id 90A4028F6;
	Tue, 28 Aug 2012 21:35:16 +0300 (EEST)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id 6F24F2005D; Tue, 28 Aug 2012 21:35:16 +0300 (EEST)
Date: Tue, 28 Aug 2012 21:35:16 +0300
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: Matthew Dean <mcd40@cam.ac.uk>
Message-ID: <20120828183516.GR19851@reaktio.net>
References: <50376807.3060608@cam.ac.uk>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50376807.3060608@cam.ac.uk>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: xen-users@lists.xen.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [Xen-users] Physical cdrom passthrough to windows 7
 DomU, bsod bug
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Aug 24, 2012 at 12:39:51PM +0100, Matthew Dean wrote:
> I'm currently using Xen 4.1 on an Ubuntu 12.04 Dom0 and I'm trying
> to pass through the machine's physical cdrom drive as
> 'phy:/dev/cdrom,hdc:cdrom,r' in the config file.  If a CD is present
> in the drive on bootup everything works fine; I can read data from
> the disk, eject it and so on.  If, however, the drive is empty on
> startup I get a BSOD of type DRIVER_IRQL_NOT_LESS_OR_EQUAL.  Does
> anyone know what causes this, or a workaround?
> 
> Thank you in advance for your help.
> 
> Matthew Dean
>

Hello,

I added xen-devel to the CC list.
Has anyone else seen this bug?

Some questions:
- Is your Windows VM using PV drivers? If yes, which version?
- Is it 32-bit or 64-bit Win7?
- What exact Xen hypervisor version are you using?
- Did you try the latest Xen 4.1.3 release?
- Do you get errors in the dom0 kernel dmesg, the Xen dmesg, or the xend log?


-- Pasi
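
[Editor's note: for context, the config line quoted above sits in the guest's disk list. A hypothetical xm-style (Xen 4.1) fragment, where only the cdrom entry is taken from the thread and the system-disk path is invented for illustration:]

```python
# Hypothetical domU config fragment (xm configs are Python syntax).
# Only the cdrom line comes from the thread; 'r' makes it read-only,
# and ':cdrom' tells the backend the device is removable media.
disk = ['phy:/dev/vg0/win7-disk,hda,w',   # invented system disk
        'phy:/dev/cdrom,hdc:cdrom,r']     # host's physical drive
```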
 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 28 18:35:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Aug 2012 18:35:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6QdJ-0003iG-Qf; Tue, 28 Aug 2012 18:35:25 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>)
	id 1T6QdH-0003hw-Rs; Tue, 28 Aug 2012 18:35:24 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-13.tower-27.messagelabs.com!1346178917!9205236!1
X-Originating-IP: [192.89.123.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjg5LjEyMy4yNSA9PiA0OTQxOTg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13625 invoked from network); 28 Aug 2012 18:35:17 -0000
Received: from smtp.tele.fi (HELO smtp.tele.fi) (192.89.123.25)
	by server-13.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 28 Aug 2012 18:35:17 -0000
X-Originating-Ip: [194.89.68.22]
Received: from ydin.reaktio.net (reaktio.net [194.89.68.22])
	by smtp.tele.fi (Postfix) with ESMTP id 90A4028F6;
	Tue, 28 Aug 2012 21:35:16 +0300 (EEST)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id 6F24F2005D; Tue, 28 Aug 2012 21:35:16 +0300 (EEST)
Date: Tue, 28 Aug 2012 21:35:16 +0300
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: Matthew Dean <mcd40@cam.ac.uk>
Message-ID: <20120828183516.GR19851@reaktio.net>
References: <50376807.3060608@cam.ac.uk>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <50376807.3060608@cam.ac.uk>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: xen-users@lists.xen.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [Xen-users] Physical cdrom passthrough to windows 7
 DomU, bsod bug
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Aug 24, 2012 at 12:39:51PM +0100, Matthew Dean wrote:
> I'm currently using Xen 4.1 on an Ubuntu 12.04 Dom0 and I'm trying
> to pass through the machine's physical cdrom drive as
> 'phy:/dev/cdrom,hdc:cdrom,r' in the config file.  If a CD is present
> in the drive at bootup everything works fine; I can read data from
> the disc, eject it, and so on.  If, however, the drive is empty on
> startup, I get a BSOD of type DRIVER_IRQL_NOT_LESS_OR_EQUAL.  Does
> anyone know what causes this, or a workaround?
> 
> Thank you in advance for your help.
> 
> Matthew Dean
>
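For reference, the passthrough described above corresponds to a guest configuration along these lines (an illustrative xm/xend-style sketch; only the cdrom disk entry is taken from the report, the guest name, memory size, and system disk path are hypothetical):

```
# Illustrative HVM guest config; only the cdrom line is from the report.
builder = 'hvm'
name    = 'win7'
memory  = 2048
disk    = [ 'phy:/dev/vg0/win7-disk,hda,w',
            'phy:/dev/cdrom,hdc:cdrom,r' ]   # physical cdrom passthrough, read-only
boot    = 'dc'
```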

Hello,

I added xen-devel to the CC list.
Has anyone else seen this bug?

Some questions:
- Is your Windows VM using PV drivers? If yes, which version?
- Is it 32-bit or 64-bit Win7?
- What exact Xen hypervisor version are you using?
- Did you try with the latest Xen 4.1.3 release?
- Do you get errors in the dom0 kernel dmesg, in the Xen dmesg, or in the xend log?


-- Pasi
 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 28 19:36:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Aug 2012 19:36:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6RaK-00057T-OJ; Tue, 28 Aug 2012 19:36:24 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jonnyt@abpni.co.uk>) id 1T6RaJ-00057O-1H
	for xen-devel@lists.xen.org; Tue, 28 Aug 2012 19:36:23 +0000
Received: from [85.158.139.83:34592] by server-1.bemta-5.messagelabs.com id
	51/D4-32692-6BD1D305; Tue, 28 Aug 2012 19:36:22 +0000
X-Env-Sender: jonnyt@abpni.co.uk
X-Msg-Ref: server-7.tower-182.messagelabs.com!1346182581!24341448!1
X-Originating-IP: [109.200.19.114]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16791 invoked from network); 28 Aug 2012 19:36:21 -0000
Received: from edge1.gosport.uk.abpni.net (HELO
	mail1.gosport.uk.corp.abpni.net) (109.200.19.114)
	by server-7.tower-182.messagelabs.com with SMTP;
	28 Aug 2012 19:36:21 -0000
Received: from localhost (mail1.gosport.corp.uk.abpni.net [127.0.0.1])
	by mail1.gosport.uk.corp.abpni.net (Postfix) with ESMTP id 642EE5A0006
	for <xen-devel@lists.xen.org>; Tue, 28 Aug 2012 20:35:54 +0100 (BST)
X-Virus-Scanned: Debian amavisd-new at mail1.mail.gosport.corp.uk.abpni.net
Received: from mail1.gosport.uk.corp.abpni.net ([127.0.0.1])
	by localhost (mail1.mail.gosport.corp.uk.abpni.net [127.0.0.1])
	(amavisd-new, port 10024)
	with ESMTP id QW9svyTJpJDg for <xen-devel@lists.xen.org>;
	Tue, 28 Aug 2012 20:35:51 +0100 (BST)
Received: from Jonathans-MacBook-Air.local (unknown [10.87.0.109])
	by mail1.gosport.uk.corp.abpni.net (Postfix) with ESMTPSA id
	AB0E75A0005
	for <xen-devel@lists.xen.org>; Tue, 28 Aug 2012 20:35:51 +0100 (BST)
Message-ID: <503D1DB3.9000203@abpni.co.uk>
Date: Tue, 28 Aug 2012 20:36:19 +0100
From: Jonathan Tripathy <jonnyt@abpni.co.uk>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8;
	rv:14.0) Gecko/20120713 Thunderbird/14.0
MIME-Version: 1.0
To: xen-devel@lists.xen.org
References: <CC5D9D2E.3CD9A%keir.xen@gmail.com>
In-Reply-To: <CC5D9D2E.3CD9A%keir.xen@gmail.com>
Subject: Re: [Xen-devel] Can't see more than 3.5GB of RAM / UEFI / no e820
 memory map detected
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 24/08/2012 21:05, Keir Fraser wrote:
> On 24/08/2012 18:39, "Jonathan Tripathy" <jonnyt@abpni.co.uk> wrote:
>
>>> That flipping has nothing to do with UEFI, just with the way grub.efi
>>> works.
>>>
>>> Proper UEFI support implies use of EFI's boot and run time services,
>>> which only xen.efi currently does (and which, for those run time
>>> services that get made available for use by Dom0, also requires an
>>> enabled Dom0 kernel).
>>>
>> Thanks for the clarification.
>>
>> So from a security/reliability standpoint, nothing will be affected by
>> flipping the if block?
> It should simply make it more likely that Xen sees all your RAM. ;)
>
>   -- Keir
>
>

Hi Everyone,

I reversed the if block in setup.c and now my server can see the full 
32GB of RAM. I haven't submitted a patch yet, as we have run into another 
(possibly unrelated to Xen) issue with this server build that we are 
working on. Once we complete our full testing, a patch will be submitted :)

Thanks

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 28 19:42:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Aug 2012 19:42:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6Rfp-0005HC-Gz; Tue, 28 Aug 2012 19:42:05 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T6Rfn-0005H6-Al
	for xen-devel@lists.xensource.com; Tue, 28 Aug 2012 19:42:03 +0000
Received: from [85.158.143.35:3738] by server-3.bemta-4.messagelabs.com id
	37/46-08232-A0F1D305; Tue, 28 Aug 2012 19:42:02 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1346182920!15269327!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA1NDA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13857 invoked from network); 28 Aug 2012 19:42:00 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Aug 2012 19:42:00 -0000
X-IronPort-AV: E=Sophos;i="4.80,328,1344211200"; d="scan'208";a="14235935"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	28 Aug 2012 19:41:59 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Tue, 28 Aug 2012 20:41:59 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1T6Rfj-0003pc-BL;
	Tue, 28 Aug 2012 19:41:59 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1T6Rfj-0006r5-5j;
	Tue, 28 Aug 2012 20:41:59 +0100
To: xen-devel@lists.xensource.com
Message-ID: <osstest-13635-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 28 Aug 2012 20:41:59 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13635: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13635 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13635/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13633
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13633
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13633
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13633

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  3908b256ff34
baseline version:
 xen                  1126b3079bef

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=3908b256ff34
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable 3908b256ff34
+ branch=xen-unstable
+ revision=3908b256ff34
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/staging/xen-unstable.hg
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-unstable.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-unstable.hg
+ hg push -r 3908b256ff34 ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 1 changesets with 1 changes to 1 files

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 28 20:00:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Aug 2012 20:00:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6Rxb-0005eN-MZ; Tue, 28 Aug 2012 20:00:27 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1T6Rxa-0005eI-5W
	for xen-devel@lists.xen.org; Tue, 28 Aug 2012 20:00:26 +0000
Received: from [85.158.138.51:54679] by server-10.bemta-3.messagelabs.com id
	49/68-20518-9532D305; Tue, 28 Aug 2012 20:00:25 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-11.tower-174.messagelabs.com!1346184021!28448904!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzQ2NTU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4307 invoked from network); 28 Aug 2012 20:00:24 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Aug 2012 20:00:24 -0000
X-IronPort-AV: E=Sophos;i="4.80,328,1344211200"; d="scan'208";a="206469272"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	28 Aug 2012 19:59:54 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Tue, 28 Aug 2012 15:59:53 -0400
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1T6Rx3-0006gq-3P;
	Tue, 28 Aug 2012 20:59:53 +0100
Message-ID: <503D2339.1020705@citrix.com>
Date: Tue, 28 Aug 2012 20:59:53 +0100
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120714 Thunderbird/14.0
MIME-Version: 1.0
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>, 
	Jan Beulich <jbeulich@suse.com>, Keir Fraser <keir.xen@gmail.com>
X-Enigmail-Version: 1.4.4
Content-Type: multipart/mixed; boundary="------------020503080005010508030202"
Subject: [Xen-devel] [RFC] Spurious PIC interrupts
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--------------020503080005010508030202
Content-Type: text/plain; charset="ISO-8859-1"
Content-Transfer-Encoding: 7bit

Since Jan's patch to print and mask bogus PIC vectors, we have found
some issues on older hardware where spurious PIC vectors are being
repeatedly logged, as spurious vectors ignore the relevant mask bit.

The log message is deceptive in the case of a spurious vector.  I have
attached an RFC patch which changes the bogus_8259A_irq logic to be able
to detect spurious vectors and be rather less verbose about them.

The new bogus_8259A_irq() function is basically a copy of
_mask_and_ack_8259A_irq(), but returning a boolean indicating whether it
was a genuine interrupt or not, which controls whether the "No irq
handler" message in do_IRQ gets printed or not.

Jan: are you happy with the style of the adjustment, or could you
suggest a better way of doing it?

-- 
Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
T: +44 (0)1223 225 900, http://www.citrix.com


--------------020503080005010508030202
Content-Type: text/x-patch; name="pic-bogus-spurious.patch"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="pic-bogus-spurious.patch"

# HG changeset patch
# Parent 1126b3079bef37e1bb5a97b90c14a51d4e1c91c3

diff -r 1126b3079bef xen/arch/x86/i8259.c
--- a/xen/arch/x86/i8259.c
+++ b/xen/arch/x86/i8259.c
@@ -87,8 +87,82 @@ static DEFINE_SPINLOCK(i8259A_lock);
 
 static void _mask_and_ack_8259A_irq(unsigned int irq);
 
-void (*__read_mostly bogus_8259A_irq)(unsigned int irq) =
-    _mask_and_ack_8259A_irq;
+/*
+ * Deal with a bogus 8259A interrupt.  Return True if
+ * it is a genuine interrupt, and False if it is spurious.
+ */
+bool_t bogus_8259A_irq(unsigned int irq)
+{
+    unsigned int irqmask = 1 << irq;
+    unsigned long flags;
+    bool_t real_irq = 1;
+    bool_t need_eoi = i8259A_irq_type.ack != disable_8259A_irq;
+
+    spin_lock_irqsave(&i8259A_lock, flags);
+    /*
+     * Lightweight spurious IRQ detection. We do not want
+     * to overdo spurious IRQ handling - it's usually a sign
+     * of hardware problems, so we only do the checks we can
+     * do without slowing down good hardware unnecessarily.
+     *
+     * Note that IRQ7 and IRQ15 (the two spurious IRQs
+     * usually resulting from the 8259A-1|2 PICs) occur
+     * even if the IRQ is masked in the 8259A. Thus we
+     * can check spurious 8259A IRQs without doing the
+     * quite slow i8259A_irq_real() call for every IRQ.
+     * This does not cover 100% of spurious interrupts,
+     * but should be enough to warn the user that there
+     * is something bad going on ...
+     */
+    if (cached_irq_mask & irqmask)
+        goto spurious_8259A_irq;
+    cached_irq_mask |= irqmask;
+
+ handle_real_irq:
+    if (irq & 8) {
+        inb(0xA1);              /* DUMMY - (do we need this?) */
+        outb(cached_A1,0xA1);
+        if ( need_eoi ) {
+            outb(0x60+(irq&7),0xA0);/* 'Specific EOI' to slave */
+            outb(0x62,0x20);        /* 'Specific EOI' to master-IRQ2 */
+        }
+    } else {
+        inb(0x21);              /* DUMMY - (do we need this?) */
+        outb(cached_21,0x21);
+        if ( need_eoi )
+            outb(0x60+irq,0x20);    /* 'Specific EOI' to master */
+    }
+
+    goto out;
+
+ spurious_8259A_irq:
+    /*
+     * this is the slow path - should happen rarely.
+     */
+    if (i8259A_irq_real(irq))
+        /*
+         * oops, the IRQ _is_ in service according to the
+         * 8259A - not spurious, go handle it.
+         */
+        goto handle_real_irq;
+
+    {
+        static int spurious_irq_mask;
+        /*
+         * At this point we can be sure the IRQ is spurious,
+         * let's ACK and report it. [once per IRQ]
+         */
+        if (!(spurious_irq_mask & irqmask)) {
+            printk("spurious 8259A interrupt: IRQ%d.\n", irq);
+            spurious_irq_mask |= irqmask;
+        }
+        real_irq = 0;
+    }
+
+ out:
+    spin_unlock_irqrestore(&i8259A_lock, flags);
+    return real_irq;
+}
 
 static void mask_and_ack_8259A_irq(struct irq_desc *desc)
 {
@@ -367,19 +441,13 @@ void __devinit init_8259A(int auto_eoi)
                                is to be investigated) */
 
     if (auto_eoi)
-    {
         /*
          * in AEOI mode we just have to mask the interrupt
          * when acking.
          */
         i8259A_irq_type.ack = disable_8259A_irq;
-        bogus_8259A_irq = _disable_8259A_irq;
-    }
     else
-    {
         i8259A_irq_type.ack = mask_and_ack_8259A_irq;
-        bogus_8259A_irq = _mask_and_ack_8259A_irq;
-    }
 
     udelay(100);            /* wait for 8259A to initialize */
 
diff -r 1126b3079bef xen/arch/x86/irq.c
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -817,11 +817,11 @@ void do_IRQ(struct cpu_user_regs *regs)
                 ack_APIC_irq();
             else
                 kind = "";
-            if ( vector >= FIRST_LEGACY_VECTOR &&
-                 vector <= LAST_LEGACY_VECTOR )
-                bogus_8259A_irq(vector - FIRST_LEGACY_VECTOR);
-            printk("CPU%u: No irq handler for vector %02x (IRQ %d%s)\n",
-                   smp_processor_id(), vector, irq, kind);
+            if ( ! ( vector >= FIRST_LEGACY_VECTOR &&
+                     vector <= LAST_LEGACY_VECTOR &&
+                     bogus_8259A_irq(vector - FIRST_LEGACY_VECTOR) ) )
+                printk("CPU%u: No irq handler for vector %02x (IRQ %d%s)\n",
+                       smp_processor_id(), vector, irq, kind);
             TRACE_1D(TRC_HW_IRQ_UNMAPPED_VECTOR, vector);
         }
         goto out_no_unlock;
diff -r 1126b3079bef xen/include/asm-x86/irq.h
--- a/xen/include/asm-x86/irq.h
+++ b/xen/include/asm-x86/irq.h
@@ -104,7 +104,7 @@ void mask_8259A(void);
 void unmask_8259A(void);
 void init_8259A(int aeoi);
 void make_8259A_irq(unsigned int irq);
-extern void (*bogus_8259A_irq)(unsigned int irq);
+bool_t bogus_8259A_irq(unsigned int irq);
 int i8259A_suspend(void);
 int i8259A_resume(void);
 

--------------020503080005010508030202
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--------------020503080005010508030202--


From xen-devel-bounces@lists.xen.org Tue Aug 28 21:11:18 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Aug 2012 21:11:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6T3m-00063W-7t; Tue, 28 Aug 2012 21:10:54 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1T6T3k-00063R-Jn
	for xen-devel@lists.xen.org; Tue, 28 Aug 2012 21:10:52 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1346188242!9044031!1
X-Originating-IP: [209.85.160.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17636 invoked from network); 28 Aug 2012 21:10:44 -0000
Received: from mail-pb0-f45.google.com (HELO mail-pb0-f45.google.com)
	(209.85.160.45)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Aug 2012 21:10:44 -0000
Received: by pbbjt11 with SMTP id jt11so9947434pbb.32
	for <xen-devel@lists.xen.org>; Tue, 28 Aug 2012 14:10:42 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=ZF6xiywBplMPOTY55IsG8NwMQ/u+O3qa+rhVMCN7hCA=;
	b=NarmhMOFwqo1vLMPl+va24Qeai4thPoYKhP+R/Jor3C5bWaVuhRkMSW2ggmqaLADNb
	SK0bKyqZGd81ccRLXyU06H+7lpvctAQWJJQ/NdX/IHfgtLYmmS7smVv0bfsxM9a6wm2I
	oSjEEjs6KDqKbOhR/7GN2p3EC9CnyrccQqmuOZjzkAAt6FkKR5SEW77E/oAgIuAdXnVw
	tjAMhysI4KEChbbA0OqrN/5HKmLC/+zTUERt5i+vQ59QhlsY4i3KVZVw37RCcBgAh3fP
	iehUSodY8l+8Aa6CoV2TD8B4B30dEhryHJuWqIeXXMm4sOPD/gSvkAlpvkkd4TZXI+yy
	v2Mw==
Received: by 10.66.82.103 with SMTP id h7mr36828367pay.61.1346188242310;
	Tue, 28 Aug 2012 14:10:42 -0700 (PDT)
Received: from [10.10.1.100] (0127ahost2.starwoodbroadband.com. [12.105.246.2])
	by mx.google.com with ESMTPS id th6sm17772273pbc.0.2012.08.28.14.10.39
	(version=SSLv3 cipher=OTHER); Tue, 28 Aug 2012 14:10:40 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.32.0.111121
Date: Tue, 28 Aug 2012 22:10:36 +0100
From: Keir Fraser <keir.xen@gmail.com>
To: Jonathan Tripathy <jonnyt@abpni.co.uk>,
	<xen-devel@lists.xen.org>
Message-ID: <CC62F25C.3D207%keir.xen@gmail.com>
Thread-Topic: [Xen-devel] Can't see more than 3.5GB of RAM / UEFI / no e820
	memory map detected
Thread-Index: Ac2FYZFRLTpqtsN/qUicKLCJZO7n8w==
In-Reply-To: <503D1DB3.9000203@abpni.co.uk>
Mime-version: 1.0
Subject: Re: [Xen-devel] Can't see more than 3.5GB of RAM / UEFI / no e820
 memory map detected
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 28/08/2012 20:36, "Jonathan Tripathy" <jonnyt@abpni.co.uk> wrote:

>>> Thanks for the clarification.
>>> 
>>> So from a security/reliability standpoint, nothing will be affected by
>>> flipping the if block?
>> It should simply make it more likely that Xen sees all your RAM. ;)
>> 
>>   -- Keir
>> 
>> 
> 
> Hi Everyone,
> 
> I reversed the if block in setup.c and now my server can see the full
> 32GB of RAM. I haven't submitted a patch yet as we have run into another
> (possibly unrelated to xen) issue with this server build that we are
> working on. Once we complete our full testing, a patch will be submitted :)

In this case, I will re-make the patch myself and check it in, since it is a
trivial one.

  Thanks,
 Keir




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 28 21:13:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Aug 2012 21:13:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6T6L-00068y-Pw; Tue, 28 Aug 2012 21:13:33 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <jonnyt@abpni.co.uk>) id 1T6T6L-00068m-AD
	for xen-devel@lists.xen.org; Tue, 28 Aug 2012 21:13:33 +0000
X-Env-Sender: jonnyt@abpni.co.uk
X-Msg-Ref: server-2.tower-27.messagelabs.com!1346188405!9262289!1
X-Originating-IP: [109.200.19.114]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8289 invoked from network); 28 Aug 2012 21:13:26 -0000
Received: from edge1.gosport.uk.abpni.net (HELO
	mail1.gosport.uk.corp.abpni.net) (109.200.19.114)
	by server-2.tower-27.messagelabs.com with SMTP;
	28 Aug 2012 21:13:26 -0000
Received: from localhost (mail1.gosport.corp.uk.abpni.net [127.0.0.1])
	by mail1.gosport.uk.corp.abpni.net (Postfix) with ESMTP id D06D15A0007
	for <xen-devel@lists.xen.org>; Tue, 28 Aug 2012 22:12:58 +0100 (BST)
X-Virus-Scanned: Debian amavisd-new at mail1.mail.gosport.corp.uk.abpni.net
Received: from mail1.gosport.uk.corp.abpni.net ([127.0.0.1])
	by localhost (mail1.mail.gosport.corp.uk.abpni.net [127.0.0.1])
	(amavisd-new, port 10024)
	with ESMTP id xd6PyibZC3By for <xen-devel@lists.xen.org>;
	Tue, 28 Aug 2012 22:12:56 +0100 (BST)
Received: from enda-pc.abpni.local (unknown [10.87.0.100])
	by mail1.gosport.uk.corp.abpni.net (Postfix) with ESMTPSA id
	5FF415A0005; Tue, 28 Aug 2012 22:12:55 +0100 (BST)
Message-ID: <503D3473.5050105@abpni.co.uk>
Date: Tue, 28 Aug 2012 22:13:23 +0100
From: Jonathan Tripathy <jonnyt@abpni.co.uk>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8;
	rv:14.0) Gecko/20120713 Thunderbird/14.0
MIME-Version: 1.0
To: Keir Fraser <keir.xen@gmail.com>
References: <CC62F25C.3D207%keir.xen@gmail.com>
In-Reply-To: <CC62F25C.3D207%keir.xen@gmail.com>
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Can't see more than 3.5GB of RAM / UEFI / no e820
 memory map detected
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Keir Fraser wrote:
> On 28/08/2012 20:36, "Jonathan Tripathy" <jonnyt@abpni.co.uk> wrote:
>
>>>> Thanks for the clarification.
>>>>
>>>> So from a security/reliability standpoint, nothing will be affected by
>>>> flipping the if block?
>>> It should simply make it more likely that Xen sees all your RAM. ;)
>>>
>>>    -- Keir
>>>
>>>
>> Hi Everyone,
>>
>> I reversed the if block in setup.c and now my server can see the full
>> 32GB of RAM. I haven't submitted a patch yet as we have run into another
>> (possibly unrelated to xen) issue with this server build that we are
>> working on. Once we complete our full testing, a patch will be submitted :)
> In this case, I will re-make the patch myself and check it in. Since it is a
> trivial one.
>
>    Thanks,
>   Keir
Thanks Keir

Will this be backported to 4.1?

Cheers


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

References: <CC62F25C.3D207%keir.xen@gmail.com>
In-Reply-To: <CC62F25C.3D207%keir.xen@gmail.com>
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Can't see more than 3.5GB of RAM / UEFI / no e820
 memory map detected
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Keir Fraser wrote:
> On 28/08/2012 20:36, "Jonathan Tripathy" <jonnyt@abpni.co.uk> wrote:
>
>>>> Thanks for the clarification.
>>>>
>>>> So from a security/reliability standpoint, nothing will be affected by
>>>> flipping the if block?
>>> It should simply make it more likely that Xen sees all your RAM. ;)
>>>
>>>    -- Keir
>>>
>>>
>> Hi Everyone,
>>
>> I reversed the if block in setup.c and now my server can see the full
>> 32GB of RAM. I haven't submitted a patch yet as we have run into another
>> (possibly unrelated to xen) issue with this server build that we are
>> working on. Once we complete our full testing, a patch will be submitted :)
> In this case, I will re-make the patch myself and check it in. Since it is a
> trivial one.
>
>    Thanks,
>   Keir
Thanks Keir

Will this be backported to 4.1?

Cheers


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 28 21:21:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Aug 2012 21:21:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6TDm-0006LR-OE; Tue, 28 Aug 2012 21:21:14 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jonnyt@abpni.co.uk>) id 1T6TDl-0006LM-CJ
	for xen-devel@lists.xen.org; Tue, 28 Aug 2012 21:21:13 +0000
Received: from [85.158.138.51:33259] by server-9.bemta-3.messagelabs.com id
	37/F4-23952-8463D305; Tue, 28 Aug 2012 21:21:12 +0000
X-Env-Sender: jonnyt@abpni.co.uk
X-Msg-Ref: server-2.tower-174.messagelabs.com!1346188871!28513831!1
X-Originating-IP: [109.200.19.114]
X-SpamReason: No, hits=0.6 required=7.0 tests=HTML_40_50,HTML_MESSAGE
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20994 invoked from network); 28 Aug 2012 21:21:11 -0000
Received: from edge1.gosport.uk.abpni.net (HELO
	mail1.gosport.uk.corp.abpni.net) (109.200.19.114)
	by server-2.tower-174.messagelabs.com with SMTP;
	28 Aug 2012 21:21:11 -0000
Received: from localhost (mail1.gosport.corp.uk.abpni.net [127.0.0.1])
	by mail1.gosport.uk.corp.abpni.net (Postfix) with ESMTP id 8FAF65A0006
	for <xen-devel@lists.xen.org>; Tue, 28 Aug 2012 22:20:44 +0100 (BST)
X-Virus-Scanned: Debian amavisd-new at mail1.mail.gosport.corp.uk.abpni.net
Received: from mail1.gosport.uk.corp.abpni.net ([127.0.0.1])
	by localhost (mail1.mail.gosport.corp.uk.abpni.net [127.0.0.1])
	(amavisd-new, port 10024)
	with ESMTP id bB8XDs8tO2OU for <xen-devel@lists.xen.org>;
	Tue, 28 Aug 2012 22:20:44 +0100 (BST)
Received: from enda-pc.abpni.local (unknown [10.87.0.100])
	by mail1.gosport.uk.corp.abpni.net (Postfix) with ESMTPSA id
	828885A0005; Tue, 28 Aug 2012 22:20:42 +0100 (BST)
Message-ID: <503D3646.7020800@abpni.co.uk>
Date: Tue, 28 Aug 2012 22:21:10 +0100
From: Jonathan Tripathy <jonnyt@abpni.co.uk>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8;
	rv:14.0) Gecko/20120713 Thunderbird/14.0
MIME-Version: 1.0
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>, 
	James Harper <james.harper@bendigoit.com.au>
Subject: Re: [Xen-devel] Xen and 4K Sectors (was blkback and bcache)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============9129877140981159981=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.
--===============9129877140981159981==
Content-Type: multipart/alternative;
 boundary="------------060609010306030004070902"

This is a multi-part message in MIME format.
--------------060609010306030004070902
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit

On Mon, Aug 13, 2012 at 01:25:07PM +0000, James Harper wrote:
> >
> > I think the problem is definitely related to the 4K sector issue.
> >
> > qemu appears to always present 512 byte sectors, thus only booting from a
> > 512 byte sector partition table. Once the PV drivers take over though it all
> > falls down because PV drivers are passed a 4K sector size and nothing
> > matches up anymore.
> >

Hi James,

I'm curious as to how you came to the conclusion that qemu always presents 512 byte sectors?
When using bcache formatted to a 4k sector size, Windows Server 2008 just flat out refuses to install...
This is true regardless of whether I'm passing an LV directly to the DomU (phy), or whether I'm using tap::aio or file.

When formatting the bcache to 512 byte size, Windows tries to install. I say "tries" as then my system kernel
panics and reboots, but that's a bcache issue (I've posted the trace to the bcache list).

Thanks





--------------060609010306030004070902
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: 7bit

<html>
  <head>

    <meta http-equiv="content-type" content="text/html; charset=ISO-8859-1">
  </head>
  <body bgcolor="#FFFFFF" text="#000000">
    <meta charset="utf-8">
    <pre style="color: rgb(0, 0, 0); font-style: normal; font-variant: normal; font-weight: normal; letter-spacing: normal; line-height: normal; orphans: 2; text-align: start; text-indent: 0px; text-transform: none; widows: 2; word-spacing: 0px; -webkit-text-size-adjust: auto; -webkit-text-stroke-width: 0px; "><meta charset="utf-8">On Mon, Aug 13, 2012 at 01:25:07PM +0000, James Harper wrote:
&gt;<i> &gt; </i>
&gt;<i> &gt; I think the problem is definitely related to the 4K sector issue.</i>
&gt;<i> &gt; </i>
&gt;<i> &gt; qemu appears to always present 512 byte sectors, thus only booting from a</i>
&gt;<i> &gt; 512 byte sector partition table. Once the PV drivers take over though it all</i>
&gt;<i> &gt; falls down because PV drivers are passed a 4K sector size and nothing</i>
&gt;<i> &gt; matches up anymore.</i>
&gt;<i> &gt; 

</i>Hi James,

I'm curious as to how you came to the conclusion that qemu always presents 512 byte sectors? 
When using bcache formatted to a 4k sector size, Windows Server 2008 just flat out refuses to install...
This is true regardless of whether I'm passing an LV directly to the DomU (phy), or whether I'm using tap::aio or file.

When formatting the bcache to 512 byte size, Windows tries to install. I say "tries" as then my system kernel 
panics and reboots, but that's a bcache issue (I've posted the trace to the bcache list).

Thanks

<i></i></pre>
    <br>
    <br>
  </body>
</html>

--------------060609010306030004070902--


--===============9129877140981159981==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============9129877140981159981==--


From xen-devel-bounces@lists.xen.org Tue Aug 28 21:32:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Aug 2012 21:32:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6TOg-0006VP-Us; Tue, 28 Aug 2012 21:32:30 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1T6TOf-0006VK-F0
	for xen-devel@lists.xen.org; Tue, 28 Aug 2012 21:32:29 +0000
Received: from [85.158.139.83:59294] by server-10.bemta-5.messagelabs.com id
	76/FF-10969-CE83D305; Tue, 28 Aug 2012 21:32:28 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-12.tower-182.messagelabs.com!1346189546!28260469!1
X-Originating-IP: [209.85.160.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11035 invoked from network); 28 Aug 2012 21:32:27 -0000
Received: from mail-pb0-f45.google.com (HELO mail-pb0-f45.google.com)
	(209.85.160.45)
	by server-12.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Aug 2012 21:32:27 -0000
Received: by pbbjt11 with SMTP id jt11so9975399pbb.32
	for <xen-devel@lists.xen.org>; Tue, 28 Aug 2012 14:32:25 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=GHKv8j0f8OCyRyUR/d7eTjIcm1oW1MgRJ/q9g1QQ1sQ=;
	b=kwH0IkCfDZiqUEsmgrnHh2Z73MbBk7T3Y/5mHG1jJi6ownNZz9uzVV/akxJuGzuEfc
	aBXbFncJQutGC0BPtca4DdFwjP0cIaoTaxfI4OyNZpli6IUq0Ytg4VgHus3abpks/Ein
	fwtPDum0dyobzH5Zms1Szo4gwa56suRZsRSVod+eg2FNr+dKeEq8nEhmRG1rA1tJagWH
	yIQ2CaVC4dy46ruJ1QIiunK8GLUn+b0w/wTJ39mr6YaQxdJrLAptTuQb8vo0pBiZ1IRD
	QpDsaGD1Bdta2k2duHC546qNJoOIIYb1/u1EZFr37CAoN7WCqrzdyItnlW8qBm+Gl+Ou
	hUww==
Received: by 10.68.232.138 with SMTP id to10mr45560452pbc.77.1346189545641;
	Tue, 28 Aug 2012 14:32:25 -0700 (PDT)
Received: from [10.10.1.100] (0127ahost2.starwoodbroadband.com. [12.105.246.2])
	by mx.google.com with ESMTPS id to6sm17798069pbc.12.2012.08.28.14.32.20
	(version=SSLv3 cipher=OTHER); Tue, 28 Aug 2012 14:32:24 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.32.0.111121
Date: Tue, 28 Aug 2012 22:32:13 +0100
From: Keir Fraser <keir.xen@gmail.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Jan Beulich <jbeulich@suse.com>
Message-ID: <CC62F76D.3D210%keir.xen@gmail.com>
Thread-Topic: [RFC] Spurious PIC interrupts
Thread-Index: Ac2FZJZk6zmdyYv14kqnxmoxeNhp9Q==
In-Reply-To: <503D2339.1020705@citrix.com>
Mime-version: 1.0
Subject: Re: [Xen-devel] [RFC] Spurious PIC interrupts
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 28/08/2012 20:59, "Andrew Cooper" <andrew.cooper3@citrix.com> wrote:

> Since Jan's patch to print and mask bogus PIC vectors, we have found
> some issues on older hardware where spurious PIC vectors are being
> repeatedly logged, as spurious vectors will ignore the relevant mask bit.
> 
> The log message is deceptive in the case of a spurious vector.  I have
> attached an RFC patch which changes the bogus_8259A_irq logic to be able
> to detect spurious vectors and be rather less verbose about them.
> 
> The new bogus_8259A_irq() function is basically a copy of
> _mask_and_ack_8259A_irq(), but returning a boolean indicating whether it
> was a genuine interrupt or not, which controls whether the "No irq
> handler" message in do_IRQ gets printed or not.
> 
> Jan: are you happy with the style of the adjustment, or could you
> suggest a better way of doing it?

No, you should make the change to _mask_and_ack_8259A_irq() itself, and
callers which do not care about the return code can simply discard it.

I hardly even want one instance of that appalling function in the tree, let
alone two! Terrible abuse of goto, and I'm pretty laid back about goto use.

And we don't use True/False anywhere else in Xen. We don't even have any
centralised definition of true/false of any kind, so you should really just
use 1/0 directly. Alternatively you could propose a patch to define
true/false in our types.h, and use those. I don't know where the capitalised
True/False came from!

 -- Keir



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 28 21:37:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Aug 2012 21:37:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6TTP-0006g1-M5; Tue, 28 Aug 2012 21:37:23 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1T6TTO-0006fu-DF
	for xen-devel@lists.xen.org; Tue, 28 Aug 2012 21:37:22 +0000
Received: from [85.158.138.51:47400] by server-6.bemta-3.messagelabs.com id
	F4/B9-32013-11A3D305; Tue, 28 Aug 2012 21:37:21 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-14.tower-174.messagelabs.com!1346189836!21897825!1
X-Originating-IP: [209.85.160.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23083 invoked from network); 28 Aug 2012 21:37:18 -0000
Received: from mail-pb0-f45.google.com (HELO mail-pb0-f45.google.com)
	(209.85.160.45)
	by server-14.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Aug 2012 21:37:18 -0000
Received: by pbbjt11 with SMTP id jt11so9981143pbb.32
	for <xen-devel@lists.xen.org>; Tue, 28 Aug 2012 14:37:16 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:cc:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=SDWnyfWuEaGsQpxsm9N+xjdpSOPf8wHdONqg1dQCbGk=;
	b=UFUDXfJDneuO/6tgy48q2YcEQgaXah2/Rz5aeScIio5SWrGqih5NZfauEFox8ekuEk
	kS/31iGQeHEcbg962eRNWRG6L3Bbxa73hzUCdgOy97O3kl2BpFQ2a+G7jr0ptMOLEkbg
	MV4DilJqUVcEUVCa6e+9GmC7oBxT/7Xr1FzkjVjrU/AnoTcPr01GIODndBX36dFcfU2O
	Yvza+wibRlRVWrIfAEJ9K8FNPRn6M9+4TrJYc+JopAoQY8ft/4QRq87KMzaJpWnTPWPN
	EE+DiCmJBTxCD3jLh9U6yH0P80VjgVlMAg55hxt3YG9Y3P1iS8zJR38tYlI8YTvSJSbE
	eOCw==
Received: by 10.68.204.137 with SMTP id ky9mr45395782pbc.90.1346189836379;
	Tue, 28 Aug 2012 14:37:16 -0700 (PDT)
Received: from [10.10.1.100] (0127ahost2.starwoodbroadband.com. [12.105.246.2])
	by mx.google.com with ESMTPS id kp3sm17775199pbc.64.2012.08.28.14.37.14
	(version=SSLv3 cipher=OTHER); Tue, 28 Aug 2012 14:37:15 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.32.0.111121
Date: Tue, 28 Aug 2012 22:37:11 +0100
From: Keir Fraser <keir.xen@gmail.com>
To: Jonathan Tripathy <jonnyt@abpni.co.uk>
Message-ID: <CC62F897.3D213%keir.xen@gmail.com>
Thread-Topic: [Xen-devel] Can't see more than 3.5GB of RAM / UEFI / no e820
	memory map detected
Thread-Index: Ac2FZUgD+T0PKnSKvEeRdj8OS19I1A==
In-Reply-To: <503D3473.5050105@abpni.co.uk>
Mime-version: 1.0
Cc: Jan Beulich <jbeulich@suse.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Can't see more than 3.5GB of RAM / UEFI / no e820
 memory map detected
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 28/08/2012 22:13, "Jonathan Tripathy" <jonnyt@abpni.co.uk> wrote:

> Keir Fraser wrote:
>> On 28/08/2012 20:36, "Jonathan Tripathy" <jonnyt@abpni.co.uk> wrote:
>> 
>>>>> Thanks for the clarification.
>>>>> 
>>>>> So from a security/reliability standpoint, nothing will be affected by
>>>>> flipping the if block?
>>>> It should simply make it more likely that Xen sees all your RAM. ;)
>>>> 
>>>>    -- Keir
>>>> 
>>>> 
>>> Hi Everyone,
>>> 
>>> I reversed the if block in setup.c and now my server can see the full
>>> 32GB of RAM. I haven't submitted a patch yet as we have run into another
>>> (possibly unrelated to xen) issue with this server build that we are
>>> working on. Once we complete our full testing, a patch will be submitted :)
>> In this case, I will re-make the patch myself and check it in. Since it is a
>> trivial one.
>> 
>>    Thanks,
>>   Keir
> Thanks Keir
> 
> Will this be backported to 4.1?

Erm. Yes, I think so. It's not really sane ever to be using e801-style
memory information on modern systems, so preferring Multiboot-e820 over
Xen-e801 makes a lot of sense imo. It'll be Jan who actually does the
backports from now on, however.
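[Editor's sketch] Keir's rationale, prefer any e820-style source over e801-style limits, can be illustrated as a standalone selection function. This is illustrative only: `pick_memmap_source` and its parameters are invented for this example (the real logic is the if/else chain in xen/arch/x86/setup.c), and the MBI_* bits here mirror the Multiboot info flags as I understand them:

```c
#include <assert.h>
#include <string.h>

/* Illustrative sketch only -- the real logic is the if/else chain in
 * xen/arch/x86/setup.c. The function name and parameters are invented
 * for this example; the MBI_* bits mirror the Multiboot info flags. */
#define MBI_MEMLIMITS (1u << 0)   /* bootloader passed mem_lower/mem_upper */
#define MBI_MEMMAP    (1u << 6)   /* bootloader passed a full e820-style map */

/* Prefer any e820-style map over e801-style limits: e801 reports only two
 * RAM ranges and cannot describe holes or memory above them, which is how
 * a 32GB machine ends up looking like 3.5GB. */
static const char *pick_memmap_source(unsigned int mbi_flags,
                                      int have_xen_e820, int have_xen_e801)
{
    if ( mbi_flags & MBI_MEMMAP )
        return "Multiboot-e820";
    if ( have_xen_e820 )
        return "Xen-e820";      /* map probed by Xen's own realmode code */
    if ( mbi_flags & MBI_MEMLIMITS )
        return "Multiboot-e801";
    if ( have_xen_e801 )
        return "Xen-e801";
    return "none";
}
```

With both a Multiboot memory map and e801 limits present, this picks "Multiboot-e820", matching the reordering discussed above; the fallback order shown is one plausible choice, not necessarily the committed one.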

 -- Keir

> Cheers
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 28 21:42:19 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Aug 2012 21:42:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6TXv-0006on-D5; Tue, 28 Aug 2012 21:42:03 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1T6TXt-0006oh-RU
	for xen-devel@lists.xen.org; Tue, 28 Aug 2012 21:42:02 +0000
Received: from [85.158.143.99:24369] by server-3.bemta-4.messagelabs.com id
	6E/A7-08232-92B3D305; Tue, 28 Aug 2012 21:42:01 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-2.tower-216.messagelabs.com!1346190118!23126096!1
X-Originating-IP: [209.85.160.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30587 invoked from network); 28 Aug 2012 21:42:00 -0000
Received: from mail-pb0-f45.google.com (HELO mail-pb0-f45.google.com)
	(209.85.160.45)
	by server-2.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Aug 2012 21:42:00 -0000
Received: by pbbjt11 with SMTP id jt11so9987286pbb.32
	for <xen-devel@lists.xen.org>; Tue, 28 Aug 2012 14:41:58 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:cc:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=EydWE5LVNzK1APn535fMI9CpK5Ee3JqAolVaHsLPEng=;
	b=igx1zhlEJkIxAGc7EU7jdgqkpMQ0Z8SSslPyRSU4g6ORhdngRK3rAidYRW4DJqZq82
	ZElMkFb7Pi7NK2Z2ubXGBu2wc8BHlCVb4ERcKsVy0/KOAb3PpCxAuwht2vrVilYvSbXT
	dmTwY05TrdERami4OtLrfUnYRCHKbql8oNhabdR7SjTuL91oB/+GrUifibNfahw2ryOy
	/DzAb/Tf7uAdvPhMjzKHjLoit6Wm3jM2OoVYQUsv1Vf6E3Ou3hRkiuSZKBjAsMuctZdY
	7TDdgtW7ApoiO+dvA/8Ze61suNhKYgZgUIfFOrNqDsLhtqpPOoGXBBUz1NHA3+m50kzr
	EvvA==
Received: by 10.68.189.193 with SMTP id gk1mr16085086pbc.123.1346190118539;
	Tue, 28 Aug 2012 14:41:58 -0700 (PDT)
Received: from [10.10.1.100] (0127ahost2.starwoodbroadband.com. [12.105.246.2])
	by mx.google.com with ESMTPS id mu8sm17790820pbc.49.2012.08.28.14.41.56
	(version=SSLv3 cipher=OTHER); Tue, 28 Aug 2012 14:41:57 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.32.0.111121
Date: Tue, 28 Aug 2012 22:41:52 +0100
From: Keir Fraser <keir.xen@gmail.com>
To: Jonathan Tripathy <jonnyt@abpni.co.uk>
Message-ID: <CC62F9B0.3D214%keir.xen@gmail.com>
Thread-Topic: [Xen-devel] Can't see more than 3.5GB of RAM / UEFI / no e820
	memory map detected
Thread-Index: Ac2FZe+Aj02821cYyE6lvcOV3YvR5w==
In-Reply-To: <503D3473.5050105@abpni.co.uk>
Mime-version: 1.0
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Can't see more than 3.5GB of RAM / UEFI / no e820
 memory map detected
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 28/08/2012 22:13, "Jonathan Tripathy" <jonnyt@abpni.co.uk> wrote:

> Keir Fraser wrote:
>> On 28/08/2012 20:36, "Jonathan Tripathy" <jonnyt@abpni.co.uk> wrote:
>> 
>>>>> Thanks for the clarification.
>>>>> 
>>>>> So from a security/reliability standpoint, nothing will be affected by
>>>>> flipping the if block?
>>>> It should simply make it more likely that Xen sees all your RAM. ;)
>>>> 
>>>>    -- Keir
>>>> 
>>>> 
>>> Hi Everyone,
>>> 
>>> I reversed the if block in setup.c and now my server can see the full
>>> 32GB of RAM. I haven't submitted a patch yet as we have run into another
>>> (possibly unrelated to xen) issue with this server build that we are
>>> working on. Once we complete our full testing, a patch will be submitted :)
>> In this case, I will re-make the patch myself and check it in. Since it is a
>> trivial one.
>> 
>>    Thanks,
>>   Keir
> Thanks Keir

Now done. xen-unstable:25786

 -- Keir

> Will this be backported to 4.1?
> 
> Cheers
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 28 21:51:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Aug 2012 21:51:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6TgI-00072A-Mm; Tue, 28 Aug 2012 21:50:42 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jonnyt@abpni.co.uk>) id 1T6TgH-000725-Ca
	for xen-devel@lists.xen.org; Tue, 28 Aug 2012 21:50:41 +0000
Received: from [85.158.139.83:48926] by server-11.bemta-5.messagelabs.com id
	4D/0F-24658-03D3D305; Tue, 28 Aug 2012 21:50:40 +0000
X-Env-Sender: jonnyt@abpni.co.uk
X-Msg-Ref: server-3.tower-182.messagelabs.com!1346190639!28335294!1
X-Originating-IP: [109.200.19.114]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25316 invoked from network); 28 Aug 2012 21:50:39 -0000
Received: from edge1.gosport.uk.abpni.net (HELO
	mail1.gosport.uk.corp.abpni.net) (109.200.19.114)
	by server-3.tower-182.messagelabs.com with SMTP;
	28 Aug 2012 21:50:39 -0000
Received: from localhost (mail1.gosport.corp.uk.abpni.net [127.0.0.1])
	by mail1.gosport.uk.corp.abpni.net (Postfix) with ESMTP id 8B1E05A0006
	for <xen-devel@lists.xen.org>; Tue, 28 Aug 2012 22:50:12 +0100 (BST)
X-Virus-Scanned: Debian amavisd-new at mail1.mail.gosport.corp.uk.abpni.net
Received: from mail1.gosport.uk.corp.abpni.net ([127.0.0.1])
	by localhost (mail1.mail.gosport.corp.uk.abpni.net [127.0.0.1])
	(amavisd-new, port 10024)
	with ESMTP id TsdZFZGJdVtH for <xen-devel@lists.xen.org>;
	Tue, 28 Aug 2012 22:50:10 +0100 (BST)
Received: from enda-pc.abpni.local (unknown [10.87.0.100])
	by mail1.gosport.uk.corp.abpni.net (Postfix) with ESMTPSA id
	2A6FE5A0005
	for <xen-devel@lists.xen.org>; Tue, 28 Aug 2012 22:50:10 +0100 (BST)
Message-ID: <503D3D2E.70103@abpni.co.uk>
Date: Tue, 28 Aug 2012 22:50:38 +0100
From: Jonathan Tripathy <jonnyt@abpni.co.uk>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8;
	rv:14.0) Gecko/20120713 Thunderbird/14.0
MIME-Version: 1.0
To: xen-devel@lists.xen.org
References: <CC62F9B0.3D214%keir.xen@gmail.com>
In-Reply-To: <CC62F9B0.3D214%keir.xen@gmail.com>
Subject: Re: [Xen-devel] Can't see more than 3.5GB of RAM / UEFI / no e820
 memory map detected
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On 28/08/2012 22:41, Keir Fraser wrote:
> On 28/08/2012 22:13, "Jonathan Tripathy" <jonnyt@abpni.co.uk> wrote:
>
>> Keir Fraser wrote:
>>> On 28/08/2012 20:36, "Jonathan Tripathy" <jonnyt@abpni.co.uk> wrote:
>>>
>>>>>> Thanks for the clarification.
>>>>>>
>>>>>> So from a security/reliability standpoint, nothing will be affected by
>>>>>> flipping the if block?
>>>>> It should simply make it more likely that Xen sees all your RAM. ;)
>>>>>
>>>>>     -- Keir
>>>>>
>>>>>
>>>> Hi Everyone,
>>>>
>>>> I reversed the if block in setup.c and now my server can see the full
>>>> 32GB of RAM. I haven't submitted a patch yet as we have run into another
>>>> (possibly unrelated to xen) issue with this server build that we are
>>>> working on. Once we complete our full testing, a patch will be submitted :)
>>> In this case, I will re-make the patch myself and check it in. Since it is a
>>> trivial one.
>>>
>>>     Thanks,
>>>    Keir
>> Thanks Keir
> Now done. xen-unstable:25786
>
>   -- Keir
Hi Keir,

Thanks for doing that. However, the ordering of the if block I used was
slightly different from your commit. Perhaps this doesn't make much of a
difference?

Thanks

if ( mbi->flags & MBI_MEMMAP )
{
    memmap_type = "Multiboot-e820";
    while ( (bytes < mbi->mmap_length) && (e820_raw_nr < E820MAX) )
    {
        memory_map_t *map = __va(mbi->mmap_addr + bytes);

        /*
         * This is a gross workaround for a BIOS bug. Some bootloaders do
         * not write e820 map entries into pre-zeroed memory. This is
         * okay if the BIOS fills in all fields of the map entry, but
         * some broken BIOSes do not bother to write the high word of
         * the length field if the length is smaller than 4GB. We
         * detect and fix this by flagging sections below 4GB that
         * appear to be larger than 4GB in size.
         */
        if ( (map->base_addr_high == 0) && (map->length_high != 0) )
        {
            if ( !e820_warn )
            {
                printk("WARNING: Buggy e820 map detected and fixed "
                       "(truncated length fields).\n");
                e820_warn = 1;
            }
            map->length_high = 0;
        }

        e820_raw[e820_raw_nr].addr =
            ((u64)map->base_addr_high << 32) | (u64)map->base_addr_low;
        e820_raw[e820_raw_nr].size =
            ((u64)map->length_high << 32) | (u64)map->length_low;
        e820_raw[e820_raw_nr].type = map->type;
        e820_raw_nr++;

        bytes += map->size + 4;
    }
}
else if ( mbi->flags & MBI_MEMLIMITS )
{
    memmap_type = "Multiboot-e801";
    e820_raw[0].addr = 0;
    e820_raw[0].size = mbi->mem_lower << 10;
    e820_raw[0].type = E820_RAM;
    e820_raw[1].addr = 0x100000;
    e820_raw[1].size = mbi->mem_upper << 10;
    e820_raw[1].type = E820_RAM;
    e820_raw_nr = 2;
}
if ( e820_raw_nr != 0 )
{
    memmap_type = "Xen-e820";
}
else if ( bootsym(lowmem_kb) )
{
    memmap_type = "Xen-e801";
    e820_raw[0].addr = 0;
    e820_raw[0].size = bootsym(lowmem_kb) << 10;
    e820_raw[0].type = E820_RAM;
    e820_raw[1].addr = 0x100000;
    e820_raw[1].size = bootsym(highmem_kb) << 10;
    e820_raw[1].type = E820_RAM;
    e820_raw_nr = 2;
}
else
{
    EARLY_FAIL("Bootloader provided no memory information.\n");
}
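[Editor's sketch] The BIOS workaround in the comment above can be exercised in isolation. This is a hedged sketch: the helper name `e820_fixup_size` is invented here, and unlike the real code (which modifies the map entry in place), it just returns the fixed-up 64-bit size:

```c
#include <assert.h>
#include <stdint.h>

/* Minimal, self-contained sketch of the BIOS workaround in the snippet
 * above. The function name is made up for illustration; the real code
 * operates on a memory_map_t in xen/arch/x86/setup.c. An entry based
 * below 4GB (base_addr_high == 0) whose high length word is non-zero is
 * assumed to be a truncated length field left in non-pre-zeroed memory,
 * so the garbage high word is dropped before assembling the 64-bit size. */
static uint64_t e820_fixup_size(uint32_t base_addr_high,
                                uint32_t length_low, uint32_t length_high)
{
    if ( (base_addr_high == 0) && (length_high != 0) )
        length_high = 0;    /* discard the bogus high word */
    return ((uint64_t)length_high << 32) | length_low;
}
```

For example, a 4KB entry at a sub-4GB base whose length_high word was left as garbage comes back as 0x1000, while an entry based above 4GB keeps its genuine high length word.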



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 28 21:52:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Aug 2012 21:52:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6TiG-00077X-6S; Tue, 28 Aug 2012 21:52:44 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jonnyt@abpni.co.uk>) id 1T6TiF-00077S-1H
	for xen-devel@lists.xen.org; Tue, 28 Aug 2012 21:52:43 +0000
Received: from [85.158.138.51:5616] by server-11.bemta-3.messagelabs.com id
	E8/BB-23152-AAD3D305; Tue, 28 Aug 2012 21:52:42 +0000
X-Env-Sender: jonnyt@abpni.co.uk
X-Msg-Ref: server-6.tower-174.messagelabs.com!1346190761!20545270!1
X-Originating-IP: [109.200.19.114]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27448 invoked from network); 28 Aug 2012 21:52:41 -0000
Received: from edge1.gosport.uk.abpni.net (HELO
	mail1.gosport.uk.corp.abpni.net) (109.200.19.114)
	by server-6.tower-174.messagelabs.com with SMTP;
	28 Aug 2012 21:52:41 -0000
Received: from localhost (mail1.gosport.corp.uk.abpni.net [127.0.0.1])
	by mail1.gosport.uk.corp.abpni.net (Postfix) with ESMTP id 525C45A0006
	for <xen-devel@lists.xen.org>; Tue, 28 Aug 2012 22:52:14 +0100 (BST)
X-Virus-Scanned: Debian amavisd-new at mail1.mail.gosport.corp.uk.abpni.net
Received: from mail1.gosport.uk.corp.abpni.net ([127.0.0.1])
	by localhost (mail1.mail.gosport.corp.uk.abpni.net [127.0.0.1])
	(amavisd-new, port 10024)
	with ESMTP id 7afAtVE7RiNE for <xen-devel@lists.xen.org>;
	Tue, 28 Aug 2012 22:52:12 +0100 (BST)
Received: from enda-pc.abpni.local (unknown [10.87.0.100])
	by mail1.gosport.uk.corp.abpni.net (Postfix) with ESMTPSA id
	E70D35A0005
	for <xen-devel@lists.xen.org>; Tue, 28 Aug 2012 22:52:11 +0100 (BST)
Message-ID: <503D3DA8.3070002@abpni.co.uk>
Date: Tue, 28 Aug 2012 22:52:40 +0100
From: Jonathan Tripathy <jonnyt@abpni.co.uk>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8;
	rv:14.0) Gecko/20120713 Thunderbird/14.0
MIME-Version: 1.0
To: xen-devel@lists.xen.org
References: <CC62F9B0.3D214%keir.xen@gmail.com> <503D3D2E.70103@abpni.co.uk>
In-Reply-To: <503D3D2E.70103@abpni.co.uk>
Subject: Re: [Xen-devel] Can't see more than 3.5GB of RAM / UEFI / no e820
 memory map detected
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 28/08/2012 22:50, Jonathan Tripathy wrote:
>
>
> On 28/08/2012 22:41, Keir Fraser wrote:
>> On 28/08/2012 22:13, "Jonathan Tripathy" <jonnyt@abpni.co.uk> wrote:
>>
>>> Keir Fraser wrote:
>>>> On 28/08/2012 20:36, "Jonathan Tripathy" <jonnyt@abpni.co.uk> wrote:
>>>>
>>>>>>> Thanks for the clarification.
>>>>>>>
>>>>>>> So from a security/reliability standpoint, nothing will be 
>>>>>>> affected by
>>>>>>> flipping the if block?
>>>>>> It should simply make it more likely that Xen sees all your RAM. ;)
>>>>>>
>>>>>>     -- Keir
>>>>>>
>>>>>>
>>>>> Hi Everyone,
>>>>>
>>>>> I reversed the if block in setup.c and now my server can see the full
>>>>> 32GB of RAM. I haven't submitted a patch yet as we have run into 
>>>>> another
>>>>> (possibly unrelated to xen) issue with this server build that we are
>>>>> working on. Once we complete our full testing, a patch will be 
>>>>> submitted :)
>>>> In this case, I will re-make the patch myself and check it in. 
>>>> Since it is a
>>>> trivial one.
>>>>
>>>>     Thanks,
>>>>    Keir
>>> Thanks Keir
>> Now done. xen-unstable:25786
>>
>>   -- Keir
> Hi Keir,
>
> Thanks for doing that. However, the ordering of the if block in my 
> code that I used was slightly different from your commit. Perhaps this 
> doesn't make too much of a difference?
>
> Thanks
>
Oops, I typo'd the code (I had to recreate the changes since I deleted my 
source tree). Here is the version I used:

if ( mbi->flags & MBI_MEMMAP )
     {
         memmap_type = "Multiboot-e820";
         while ( (bytes < mbi->mmap_length) && (e820_raw_nr < E820MAX) )
         {
             memory_map_t *map = __va(mbi->mmap_addr + bytes);

             /*
              * This is a gross workaround for a BIOS bug. Some bootloaders do
              * not write e820 map entries into pre-zeroed memory. This is
              * okay if the BIOS fills in all fields of the map entry, but
              * some broken BIOSes do not bother to write the high word of
              * the length field if the length is smaller than 4GB. We
              * detect and fix this by flagging sections below 4GB that
              * appear to be larger than 4GB in size.
              */
             if ( (map->base_addr_high == 0) && (map->length_high != 0) )
             {
                 if ( !e820_warn )
                 {
                     printk("WARNING: Buggy e820 map detected and fixed "
                            "(truncated length fields).\n");
                     e820_warn = 1;
                 }
                 map->length_high = 0;
             }

             e820_raw[e820_raw_nr].addr =
                 ((u64)map->base_addr_high << 32) | (u64)map->base_addr_low;
             e820_raw[e820_raw_nr].size =
                 ((u64)map->length_high << 32) | (u64)map->length_low;
             e820_raw[e820_raw_nr].type = map->type;
             e820_raw_nr++;

             bytes += map->size + 4;
         }
     }
     else if ( mbi->flags & MBI_MEMLIMITS )
     {
         memmap_type = "Multiboot-e801";
         e820_raw[0].addr = 0;
         e820_raw[0].size = mbi->mem_lower << 10;
         e820_raw[0].type = E820_RAM;
         e820_raw[1].addr = 0x100000;
         e820_raw[1].size = mbi->mem_upper << 10;
         e820_raw[1].type = E820_RAM;
         e820_raw_nr = 2;
     }
     else if ( e820_raw_nr != 0 )
     {
         memmap_type = "Xen-e820";
     }
     else if ( bootsym(lowmem_kb) )
     {
         memmap_type = "Xen-e801";
         e820_raw[0].addr = 0;
         e820_raw[0].size = bootsym(lowmem_kb) << 10;
         e820_raw[0].type = E820_RAM;
         e820_raw[1].addr = 0x100000;
         e820_raw[1].size = bootsym(highmem_kb) << 10;
         e820_raw[1].type = E820_RAM;
         e820_raw_nr = 2;
     }
     else
     {
         EARLY_FAIL("Bootloader provided no memory information.\n");
     }


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 28 22:06:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Aug 2012 22:06:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6Tv6-0007Q7-Ja; Tue, 28 Aug 2012 22:06:00 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jonnyt@abpni.co.uk>) id 1T6Tv5-0007Q2-CL
	for xen-devel@lists.xen.org; Tue, 28 Aug 2012 22:05:59 +0000
Received: from [85.158.138.51:17307] by server-7.bemta-3.messagelabs.com id
	29/7C-01906-6C04D305; Tue, 28 Aug 2012 22:05:58 +0000
X-Env-Sender: jonnyt@abpni.co.uk
X-Msg-Ref: server-11.tower-174.messagelabs.com!1346191555!28462275!1
X-Originating-IP: [109.200.19.114]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2998 invoked from network); 28 Aug 2012 22:05:56 -0000
Received: from edge1.gosport.uk.abpni.net (HELO
	mail1.gosport.uk.corp.abpni.net) (109.200.19.114)
	by server-11.tower-174.messagelabs.com with SMTP;
	28 Aug 2012 22:05:56 -0000
Received: from localhost (mail1.gosport.corp.uk.abpni.net [127.0.0.1])
	by mail1.gosport.uk.corp.abpni.net (Postfix) with ESMTP id A78E75A0006
	for <xen-devel@lists.xen.org>; Tue, 28 Aug 2012 23:05:28 +0100 (BST)
X-Virus-Scanned: Debian amavisd-new at mail1.mail.gosport.corp.uk.abpni.net
Received: from mail1.gosport.uk.corp.abpni.net ([127.0.0.1])
	by localhost (mail1.mail.gosport.corp.uk.abpni.net [127.0.0.1])
	(amavisd-new, port 10024)
	with ESMTP id FlovpKn1jTsu for <xen-devel@lists.xen.org>;
	Tue, 28 Aug 2012 23:05:26 +0100 (BST)
Received: from enda-pc.abpni.local (unknown [10.87.0.100])
	by mail1.gosport.uk.corp.abpni.net (Postfix) with ESMTPSA id
	3DC2F5A0005
	for <xen-devel@lists.xen.org>; Tue, 28 Aug 2012 23:05:26 +0100 (BST)
Message-ID: <503D40C2.6040407@abpni.co.uk>
Date: Tue, 28 Aug 2012 23:05:54 +0100
From: Jonathan Tripathy <jonnyt@abpni.co.uk>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8;
	rv:14.0) Gecko/20120713 Thunderbird/14.0
MIME-Version: 1.0
To: xen-devel@lists.xen.org
References: <CC62F9B0.3D214%keir.xen@gmail.com> <503D3D2E.70103@abpni.co.uk>
	<503D3DA8.3070002@abpni.co.uk>
In-Reply-To: <503D3DA8.3070002@abpni.co.uk>
Subject: Re: [Xen-devel] Can't see more than 3.5GB of RAM / UEFI / no e820
 memory map detected
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 28/08/2012 22:52, Jonathan Tripathy wrote:
> On 28/08/2012 22:50, Jonathan Tripathy wrote:
>>
>>
>> On 28/08/2012 22:41, Keir Fraser wrote:
>>> On 28/08/2012 22:13, "Jonathan Tripathy" <jonnyt@abpni.co.uk> wrote:
>>>
>>>> Keir Fraser wrote:
>>>>> On 28/08/2012 20:36, "Jonathan Tripathy" <jonnyt@abpni.co.uk> wrote:
>>>>>
>>>>>>>> Thanks for the clarification.
>>>>>>>>
>>>>>>>> So from a security/reliability standpoint, nothing will be 
>>>>>>>> affected by
>>>>>>>> flipping the if block?
>>>>>>> It should simply make it more likely that Xen sees all your RAM. ;)
>>>>>>>
>>>>>>>     -- Keir
>>>>>>>
>>>>>>>
>>>>>> Hi Everyone,
>>>>>>
>>>>>> I reversed the if block in setup.c and now my server can see the 
>>>>>> full
>>>>>> 32GB of RAM. I haven't submitted a patch yet as we have run into 
>>>>>> another
>>>>>> (possibly unrelated to xen) issue with this server build that we are
>>>>>> working on. Once we complete our full testing, a patch will be 
>>>>>> submitted :)
>>>>> In this case, I will re-make the patch myself and check it in. 
>>>>> Since it is a
>>>>> trivial one.
>>>>>
>>>>>     Thanks,
>>>>>    Keir
>>>> Thanks Keir
>>> Now done. xen-unstable:25786
>>>
>>>   -- Keir
>> Hi Keir,
>>
>> Thanks for doing that. However, the ordering of the if block in my 
>> code that I used was slightly different from your commit. Perhaps 
>> this doesn't make too much of a difference?
>>
>> Thanks
>>
> Ooops, typo'd the code (I had to recreate the changes as I deleted my 
> source tree). Here is the version I used:
>
> if ( mbi->flags & MBI_MEMMAP )
>     {
>         memmap_type = "Multiboot-e820";
>         while ( (bytes < mbi->mmap_length) && (e820_raw_nr < E820MAX) )
>         {
>             memory_map_t *map = __va(mbi->mmap_addr + bytes);
>
>             /*
>              * This is a gross workaround for a BIOS bug. Some bootloaders do
>              * not write e820 map entries into pre-zeroed memory. This is
>              * okay if the BIOS fills in all fields of the map entry, but
>              * some broken BIOSes do not bother to write the high word of
>              * the length field if the length is smaller than 4GB. We
>              * detect and fix this by flagging sections below 4GB that
>              * appear to be larger than 4GB in size.
>              */
>             if ( (map->base_addr_high == 0) && (map->length_high != 0) )
>             {
>                 if ( !e820_warn )
>                 {
>                     printk("WARNING: Buggy e820 map detected and fixed "
>                            "(truncated length fields).\n");
>                     e820_warn = 1;
>                 }
>                 map->length_high = 0;
>             }
>
>             e820_raw[e820_raw_nr].addr =
>                 ((u64)map->base_addr_high << 32) | (u64)map->base_addr_low;
>             e820_raw[e820_raw_nr].size =
>                 ((u64)map->length_high << 32) | (u64)map->length_low;
>             e820_raw[e820_raw_nr].type = map->type;
>             e820_raw_nr++;
>
>             bytes += map->size + 4;
>         }
>     }
>     else if ( mbi->flags & MBI_MEMLIMITS )
>     {
>         memmap_type = "Multiboot-e801";
>         e820_raw[0].addr = 0;
>         e820_raw[0].size = mbi->mem_lower << 10;
>         e820_raw[0].type = E820_RAM;
>         e820_raw[1].addr = 0x100000;
>         e820_raw[1].size = mbi->mem_upper << 10;
>         e820_raw[1].type = E820_RAM;
>         e820_raw_nr = 2;
>     }
>     else if ( e820_raw_nr != 0 )
>     {
>         memmap_type = "Xen-e820";
>     }
>     else if ( bootsym(lowmem_kb) )
>     {
>         memmap_type = "Xen-e801";
>         e820_raw[0].addr = 0;
>         e820_raw[0].size = bootsym(lowmem_kb) << 10;
>         e820_raw[0].type = E820_RAM;
>         e820_raw[1].addr = 0x100000;
>         e820_raw[1].size = bootsym(highmem_kb) << 10;
>         e820_raw[1].type = E820_RAM;
>         e820_raw_nr = 2;
>     }
>     else
>     {
>         EARLY_FAIL("Bootloader provided no memory information.\n");
>     }
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
Anyway, the question is moot. I rebuilt my code using your ordering of the 
if block and it works just fine. All 32GB is detected. I actually agree 
with your order because, as you say, e801 is outdated.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 28 22:32:26 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Aug 2012 22:32:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6UKI-0007cA-Vp; Tue, 28 Aug 2012 22:32:02 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1T6UKH-0007c5-3P
	for xen-devel@lists.xen.org; Tue, 28 Aug 2012 22:32:01 +0000
Received: from [85.158.143.99:35613] by server-2.bemta-4.messagelabs.com id
	E7/F3-21239-0E64D305; Tue, 28 Aug 2012 22:32:00 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-14.tower-216.messagelabs.com!1346193118!17940897!1
X-Originating-IP: [209.85.160.54]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29449 invoked from network); 28 Aug 2012 22:31:59 -0000
Received: from mail-pb0-f54.google.com (HELO mail-pb0-f54.google.com)
	(209.85.160.54)
	by server-14.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Aug 2012 22:31:59 -0000
Received: by pbbrp2 with SMTP id rp2so10493260pbb.13
	for <xen-devel@lists.xen.org>; Tue, 28 Aug 2012 15:31:57 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=mt55MuGF1uTfYXijbcs39reGlnNn5QBA5a0NbkMnUm4=;
	b=Co0yWy+0aXmq4YRiQeonoMPP+HPDUEfvot5S6s8ZVMZ5rIC/0pP8QDejIeFX3+IgvG
	CAsszl+AAEYh1TUY/fQgOY7thzaRx3/XzQzG0EoXOVOQ67aYRURfbY46zE5iXIg48082
	oM8XXLmXIKdUuJDmFy2kgYR33YRxPlicntNOi6l83aZ+GtX2Sr8jvLPZ/457LJU1A2fX
	x1hhPP2v40MDiyUkcSItlcLQ9dHySjATpmdgZV2JdxEK+f6GfRxAfqgwWcFTzShaAOzQ
	Rp+FdRhn9BLx5ONDsJ1ukVZuGc6O5rUu5d7j4uD3q0So3zcocX8svzgOkYEh7Ks3UC6T
	PZTw==
Received: by 10.68.211.105 with SMTP id nb9mr46459619pbc.67.1346193117698;
	Tue, 28 Aug 2012 15:31:57 -0700 (PDT)
Received: from [10.183.91.105] ([38.96.16.254])
	by mx.google.com with ESMTPS id kt1sm195724pbc.45.2012.08.28.15.31.54
	(version=SSLv3 cipher=OTHER); Tue, 28 Aug 2012 15:31:56 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.32.0.111121
Date: Tue, 28 Aug 2012 23:31:49 +0100
From: Keir Fraser <keir.xen@gmail.com>
To: Jonathan Tripathy <jonnyt@abpni.co.uk>,
	<xen-devel@lists.xen.org>
Message-ID: <CC630565.3D228%keir.xen@gmail.com>
Thread-Topic: [Xen-devel] Can't see more than 3.5GB of RAM / UEFI / no e820
	memory map detected
Thread-Index: Ac2FbOnaOCTss69U0Ea4U3HQIZm12g==
In-Reply-To: <503D3D2E.70103@abpni.co.uk>
Mime-version: 1.0
Subject: Re: [Xen-devel] Can't see more than 3.5GB of RAM / UEFI / no e820
 memory map detected
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 28/08/2012 22:50, "Jonathan Tripathy" <jonnyt@abpni.co.uk> wrote:

>>>>> I reversed the if block in setup.c and now my server can see the full
>>>>> 32GB of RAM. I haven't submitted a patch yet as we have run into another
>>>>> (possibly unrelated to xen) issue with this server build that we are
>>>>> working on. Once we complete our full testing, a patch will be submitted
>>>>> :)
>>>> In this case, I will re-make the patch myself and check it in. Since it is
>>>> a
>>>> trivial one.
>>>> 
>>>>     Thanks,
>>>>    Keir
>>> Thanks Keir
>> Now done. xen-unstable:25786
>> 
>>   -- Keir
> Hi Keir,
> 
> Thanks for doing that. However, the ordering of the if block in my code
> that I used was slightly different from your commit. Perhaps this
> doesn't make too much of a difference?

Your ordering favours Multiboot-e820 over Xen-e820, which we do not want,
and Multiboot-e801 over Xen-e820, which is definitely not sane.

I see you tested the checked-in version and it worked okay for you, which is
expected, and is a more sensible re-ordering. :)

 -- Keir



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 28 22:34:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Aug 2012 22:34:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6UM7-0007hx-GA; Tue, 28 Aug 2012 22:33:55 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jonnyt@abpni.co.uk>) id 1T6UM6-0007hq-2U
	for xen-devel@lists.xen.org; Tue, 28 Aug 2012 22:33:54 +0000
Received: from [85.158.143.99:15874] by server-1.bemta-4.messagelabs.com id
	6D/DA-12504-1574D305; Tue, 28 Aug 2012 22:33:53 +0000
X-Env-Sender: jonnyt@abpni.co.uk
X-Msg-Ref: server-16.tower-216.messagelabs.com!1346193232!17220263!1
X-Originating-IP: [109.200.19.114]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1097 invoked from network); 28 Aug 2012 22:33:52 -0000
Received: from edge1.gosport.uk.abpni.net (HELO
	mail1.gosport.uk.corp.abpni.net) (109.200.19.114)
	by server-16.tower-216.messagelabs.com with SMTP;
	28 Aug 2012 22:33:52 -0000
Received: from localhost (mail1.gosport.corp.uk.abpni.net [127.0.0.1])
	by mail1.gosport.uk.corp.abpni.net (Postfix) with ESMTP id B3B685A0006
	for <xen-devel@lists.xen.org>; Tue, 28 Aug 2012 23:33:25 +0100 (BST)
X-Virus-Scanned: Debian amavisd-new at mail1.mail.gosport.corp.uk.abpni.net
Received: from mail1.gosport.uk.corp.abpni.net ([127.0.0.1])
	by localhost (mail1.mail.gosport.corp.uk.abpni.net [127.0.0.1])
	(amavisd-new, port 10024)
	with ESMTP id KwtRJaX7AB-t for <xen-devel@lists.xen.org>;
	Tue, 28 Aug 2012 23:33:23 +0100 (BST)
Received: from enda-pc.abpni.local (unknown [10.87.0.100])
	by mail1.gosport.uk.corp.abpni.net (Postfix) with ESMTPSA id
	44C475A0005; Tue, 28 Aug 2012 23:33:22 +0100 (BST)
Message-ID: <503D474E.9090103@abpni.co.uk>
Date: Tue, 28 Aug 2012 23:33:50 +0100
From: Jonathan Tripathy <jonnyt@abpni.co.uk>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8;
	rv:14.0) Gecko/20120713 Thunderbird/14.0
MIME-Version: 1.0
To: Keir Fraser <keir.xen@gmail.com>
References: <CC630565.3D228%keir.xen@gmail.com>
In-Reply-To: <CC630565.3D228%keir.xen@gmail.com>
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Can't see more than 3.5GB of RAM / UEFI / no e820
 memory map detected
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 28/08/2012 23:31, Keir Fraser wrote:
> On 28/08/2012 22:50, "Jonathan Tripathy" <jonnyt@abpni.co.uk> wrote:
>
>>>>>> I reversed the if block in setup.c and now my server can see the full
>>>>>> 32GB of RAM. I haven't submitted a patch yet as we have run into another
>>>>>> (possibly unrelated to xen) issue with this server build that we are
>>>>>> working on. Once we complete our full testing, a patch will be submitted
>>>>>> :)
>>>>> In this case, I will re-make the patch myself and check it in, since it
>>>>> is a trivial one.
>>>>>
>>>>>      Thanks,
>>>>>     Keir
>>>> Thanks Keir
>>> Now done. xen-unstable:25786
>>>
>>>    -- Keir
>> Hi Keir,
>>
>> Thanks for doing that. However, the ordering of the if block in my code
>> that I used was slightly different from your commit. Perhaps this
>> doesn't make too much of a difference?
> Your ordering favours Multiboot-e820 over Xen-e820, which we do not want to
> do, and Multiboot-e801 over Xen-e820, which is definitely not sane.
>
> I see you tested the checked-in version and it worked okay for you, which is
> expected, as it is the more sensible re-ordering. :)
>
Yup, I completely agree with you :)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 28 22:38:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Aug 2012 22:38:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6UQR-0007uV-6w; Tue, 28 Aug 2012 22:38:23 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <msw@amazon.com>) id 1T6UQQ-0007uP-IK
	for xen-devel@lists.xen.org; Tue, 28 Aug 2012 22:38:22 +0000
Received: from [85.158.139.83:7132] by server-3.bemta-5.messagelabs.com id
	57/81-21836-D584D305; Tue, 28 Aug 2012 22:38:21 +0000
X-Env-Sender: msw@amazon.com
X-Msg-Ref: server-7.tower-182.messagelabs.com!1346193499!24359925!1
X-Originating-IP: [207.171.189.228]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA3LjE3MS4xODkuMjI4ID0+IDc4OTk4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16906 invoked from network); 28 Aug 2012 22:38:20 -0000
Received: from smtp-fw-33001.amazon.com (HELO smtp-fw-33001.amazon.com)
	(207.171.189.228)
	by server-7.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Aug 2012 22:38:20 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=amazon.com; i=msw@amazon.com; q=dns/txt; s=rte02;
	t=1346193500; x=1377729500;
	h=mime-version:content-transfer-encoding:subject:
	message-id:date:from:to:cc;
	bh=XL/CZrSQLrHM1OxCjIBHHJpLhlfjnT7aEEjBbLwFZG0=;
	b=Fxw3IfAuEGP3qYRDHTVTyZiQ/sJmiFhCJH3VgOtk8wDVfWLP7J6Jl+4k
	bnmz+rDcZg3uS9HOa1TpKhTbOOuc1w==;
X-IronPort-AV: E=Sophos;i="4.80,330,1344211200"; d="scan'208";a="352273466"
Received: from smtp-in-31001.sea31.amazon.com ([10.184.168.27])
	by smtp-border-fw-out-33001.sea14.amazon.com with
	ESMTP/TLS/DHE-RSA-AES256-SHA; 28 Aug 2012 22:38:18 +0000
Received: from ex10-hub-9004.ant.amazon.com (ex10-hub-9004.ant.amazon.com
	[10.185.137.182])
	by smtp-in-31001.sea31.amazon.com (8.13.8/8.13.8) with ESMTP id
	q7SMcH1a026233
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=FAIL);
	Tue, 28 Aug 2012 22:38:17 GMT
Received: from u002268147cd4502c336d.ant.amazon.com (10.224.80.41) by
	ex10-hub-9004.ant.amazon.com (10.185.137.182) with Microsoft SMTP
	Server id 14.2.247.3; Tue, 28 Aug 2012 15:38:11 -0700
MIME-Version: 1.0
X-Mercurial-Node: d7e4efa17fb0b9b69c58b63293cad48afa8495af
Message-ID: <d7e4efa17fb0b9b69c58.1346193435@u002268147cd4502c336d.ant.amazon.com>
User-Agent: Mercurial-patchbomb/2.0.2
Date: Tue, 28 Aug 2012 15:37:15 -0700
From: Matt Wilson <msw@amazon.com>
To: Ian Campbell <ian.campbell@citrix.com>, Ian Jackson
	<Ian.Jackson@eu.citrix.com>, Jan Beulich <JBeulich@suse.com>
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: [Xen-devel] [PATCH] tools: remove vestigial default_lib.m4 macros
 and adjust substitutions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

LIB_PATH is no longer used, so the AX_DEFAULT_LIB macro is no longer
needed. Additionally, lower-case make variables are now used as
autoconf substitutions, which allows for more correct overrides at
build time.

I've compared the file layout in dist/install between builds made
before and after this change, using these ./configure invocations:
 1) ./configure (no flags provided)
 2) ./configure --libdir=/usr/lib/x86_64-linux-gnu (Debian style)
 3) ./configure --libdir='${exec_prefix}/lib' (late variable expansion)

Signed-off-by: Matt Wilson <msw@amazon.com>

diff -r c95d96c70108 -r d7e4efa17fb0 config/Tools.mk.in
--- a/config/Tools.mk.in	Tue Aug 28 13:36:29 2012 -0700
+++ b/config/Tools.mk.in	Tue Aug 28 15:35:08 2012 -0700
@@ -1,7 +1,9 @@
 # Prefix and install folder
-PREFIX              := @prefix@
+prefix              := @prefix@
+PREFIX              := $(prefix)
 exec_prefix         := @exec_prefix@
-LIBDIR              := @libdir@
+libdir              := @libdir@
+LIBDIR              := $(libdir)
 
 # A debug build of tools?
 debug               := @debug@
diff -r c95d96c70108 -r d7e4efa17fb0 tools/configure.ac
--- a/tools/configure.ac	Tue Aug 28 13:36:29 2012 -0700
+++ b/tools/configure.ac	Tue Aug 28 15:35:08 2012 -0700
@@ -28,7 +28,6 @@
 m4_include([m4/python_version.m4])
 m4_include([m4/python_devel.m4])
 m4_include([m4/ocaml.m4])
-m4_include([m4/default_lib.m4])
 m4_include([m4/set_cflags_ldflags.m4])
 m4_include([m4/uuid.m4])
 m4_include([m4/pkg.m4])
@@ -121,9 +120,6 @@
  AX_CHECK_CURSES
 PKG_CHECK_MODULES(glib, [glib-2.0 >= 2.12])
 
-# Check library path
-AX_DEFAULT_LIB
-
 # Checks for libraries.
 AC_CHECK_HEADER([bzlib.h], [
 AC_CHECK_LIB([bz2], [BZ2_bzDecompressInit], [zlib="$zlib -DHAVE_BZLIB -lbz2"])
diff -r c95d96c70108 -r d7e4efa17fb0 tools/m4/default_lib.m4
--- a/tools/m4/default_lib.m4	Tue Aug 28 13:36:29 2012 -0700
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,14 +0,0 @@
-AC_DEFUN([AX_DEFAULT_LIB],
-[AS_IF([test "\${exec_prefix}/lib" = "$libdir"],
-    [AS_IF([test "$exec_prefix" = "NONE" && test "$prefix" != "NONE"],
-        [exec_prefix=$prefix])
-    AS_IF([test "$exec_prefix" = "NONE"], [exec_prefix=$ac_default_prefix])
-    AS_IF([test -d "${exec_prefix}/lib64"], [
-        LIB_PATH="lib64"
-    ],[
-        LIB_PATH="lib"
-    ])
-], [
-    LIB_PATH="${libdir:`expr length "$exec_prefix" + 1`}"
-])
-AC_SUBST(LIB_PATH)])
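The third ./configure invocation in the commit message above relies on libdir being stored as the literal string '${exec_prefix}/lib' and expanded only later, after any build-time override of exec_prefix; keeping the value in a lower-case variable (as the Tools.mk.in hunk does) preserves that. A minimal shell sketch of the late expansion, with purely illustrative values:

```shell
# Sketch of late variable expansion for --libdir='${exec_prefix}/lib'.
# The directory values here are made up for illustration.
prefix='/usr/local'
exec_prefix='/opt/xen'            # pretend a build-time override happened
libdir='${exec_prefix}/lib'       # stored unexpanded, as passed to --libdir
eval "expanded_libdir=${libdir}"  # expanded only now, after the override
echo "$expanded_libdir"           # reflects the overridden exec_prefix
```

Expanding eagerly at configure time would instead bake in the original exec_prefix, which is the "more correct overrides" the commit message refers to.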

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Aug 28 23:29:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Aug 2012 23:29:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6VD7-0000p2-O6; Tue, 28 Aug 2012 23:28:41 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <msw@amazon.com>) id 1T6VD6-0000ox-Mb
	for xen-devel@lists.xen.org; Tue, 28 Aug 2012 23:28:40 +0000
Received: from [85.158.139.83:21961] by server-10.bemta-5.messagelabs.com id
	66/0A-10969-7245D305; Tue, 28 Aug 2012 23:28:39 +0000
X-Env-Sender: msw@amazon.com
X-Msg-Ref: server-5.tower-182.messagelabs.com!1346196516!28376788!1
X-Originating-IP: [207.171.184.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA3LjE3MS4xODQuMjUgPT4gMzEzMzI5\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30974 invoked from network); 28 Aug 2012 23:28:39 -0000
Received: from smtp-fw-9101.amazon.com (HELO smtp-fw-9101.amazon.com)
	(207.171.184.25)
	by server-5.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Aug 2012 23:28:39 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=amazon.com; i=msw@amazon.com; q=dns/txt; s=rte02;
	t=1346196519; x=1377732519;
	h=date:from:to:cc:subject:message-id:references:
	mime-version:in-reply-to;
	bh=2+18v8JUK6DllBH0VciMdxO6K+jGetfo/ejBgXVnq7E=;
	b=IEy0Utz95/ZBMln9uce+7CK6aFaNF1OSQZVC8VJmOSP3g1UJMfSZuvt9
	TvVmVoSViN6Zt7n1BeTiI7Z8zEZ20Q==;
X-IronPort-AV: E=Sophos;i="4.80,330,1344211200"; d="scan'208";a="1017856036"
Received: from smtp-in-0102.sea3.amazon.com ([10.224.19.46])
	by smtp-border-fw-out-9101.sea19.amazon.com with
	ESMTP/TLS/DHE-RSA-AES256-SHA; 28 Aug 2012 23:28:35 +0000
Received: from ex10-hub-31005.ant.amazon.com (ex10-hub-31005.sea31.amazon.com
	[10.185.176.12])
	by smtp-in-0102.sea3.amazon.com (8.13.8/8.13.8) with ESMTP id
	q7SNSYTH015140
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=FAIL);
	Tue, 28 Aug 2012 23:28:34 GMT
Received: from u002268147cd4502c336d.ant.amazon.com (10.224.80.38) by
	ex10-hub-31005.ant.amazon.com (10.185.176.12) with Microsoft SMTP
	Server id 14.2.247.3; Tue, 28 Aug 2012 16:28:32 -0700
Received: by u002268147cd4502c336d.ant.amazon.com (sSMTP sendmail emulation); 
	Tue, 28 Aug 2012 16:28:32 -0700
Date: Tue, 28 Aug 2012 16:28:32 -0700
From: Matt Wilson <msw@amazon.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20120828232829.GA26703@u002268147cd4502c336d.ant.amazon.com>
References: <1809175cdc9b3a606d75.1343749000@kaos-source-31003.sea31.amazon.com>
	<20120807032453.GB4324@US-SEA-R8XVZTX>
	<1345211930.10161.33.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1345211930.10161.33.camel@zakaz.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>, Keir Fraser <keir@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH DOCDAY v3] docs: improve documentation of
 Xen command line parameters
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Aug 17, 2012 at 02:58:50PM +0100, Ian Campbell wrote:
> On Tue, 2012-08-07 at 04:24 +0100, Matt Wilson wrote:
> > On Tue, Jul 31, 2012 at 08:36:40AM -0700, Matt Wilson wrote:
> > > This change improves documentation for several Xen command line
> > > parameters. Some of the Itanium-specific options are now removed. A
> > > more thorough check should be performed to remove any other remnants.
> > >
> > > I've reformatted some of the entries to fit in 80 column terminals.
> > >
> > > Options that are yet undocumented but accept standard boolean /
> > > integer values are now annotated as such.
> > >
> > > The size suffixes have been corrected to use the binary prefixes
> > > instead of decimal prefixes.
> > >
> > > Changes since v2:
> > >  * Change *bi prefixes to GiB, MiB, KiB
> > >
> > > Signed-off-by: Matt Wilson <msw@amazon.com>
> > > Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
> > 
> > George's concerns were addressed in this version, and Andrew gave an
> > Ack. Anything else keeping this from landing in staging?
> 
> Keir, do you want to apply this sort of thing, or shall I? (It's not
> really a tools doc, so I haven't touched it.)

Ping...

Matt

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 Xen command line parameters
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Aug 17, 2012 at 02:58:50PM +0100, Ian Campbell wrote:
> On Tue, 2012-08-07 at 04:24 +0100, Matt Wilson wrote:
> > On Tue, Jul 31, 2012 at 08:36:40AM -0700, Matt Wilson wrote:
> > > This change improves documentation for several Xen command line
> > > parameters. Some of the Itanium-specific options are now removed. A
> > > more thorough check should be performed to remove any other remnants.
> > >
> > > I've reformatted some of the entries to fit in 80 column terminals.
> > >
> > > Options that are yet undocumented but accept standard boolean /
> > > integer values are now annotated as such.
> > >
> > > The size suffixes have been corrected to use the binary prefixes
> > > instead of decimal prefixes.
> > >
> > > Changes since v2:
> > >  * Change *bi prefixes to GiB, MiB, KiB
> > >
> > > Signed-off-by: Matt Wilson <msw@amazon.com>
> > > Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
> > 
> > George's concerns were addressed in this version, and Andrew gave an
> > Ack. Anything else keeping this from landing in staging?
> 
> Keir, do you want to apply this sort of thing or shall I (it's not
> really a tools doc so I haven't touched it)

Ping...

Matt

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 29 01:01:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Aug 2012 01:01:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6WeR-0001xs-7Q; Wed, 29 Aug 2012 01:00:59 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1T6WeP-0000jA-BL
	for xen-devel@lists.xen.org; Wed, 29 Aug 2012 01:00:57 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1346202046!2421856!1
X-Originating-IP: [209.85.160.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26350 invoked from network); 29 Aug 2012 01:00:48 -0000
Received: from mail-pb0-f45.google.com (HELO mail-pb0-f45.google.com)
	(209.85.160.45)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Aug 2012 01:00:48 -0000
Received: by pbbjt11 with SMTP id jt11so169319pbb.32
	for <xen-devel@lists.xen.org>; Tue, 28 Aug 2012 18:00:46 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:cc:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=UjtTCW/8cJ2mcOcD0vyexLhw+QQ4B1AKVoVOeJpOWWY=;
	b=Y7nCt+qRSgOPWOo9vLogM2HVD+wscdHQcOOfSygQfYgwM8q0UHn2plY1CzV23URbFU
	7qagl4Rk/iIkYoS4Y6mF6D8Zn6S0BJ+bsbhJ4uyvwVN05wAw2hI/Nyyih6JYC0YU3JtT
	DNE2vorrkzIXHrktWplFvvE48UQdxMVvtz91Un69KSBNy/0YZJ8jtVdnU+nfZccT3qhg
	MUpszaX2uDcmlO7ueRz5ooONewEEM2xRPFHz6LKMjs4Pwo5xdDPVJKkjACh77xOflzfb
	PJLkW4vA9tY7BDFA1l/aBjURWraoZdnjko4roKyvRngsuDtZTZSVW3HSTw9yz3capKS4
	k2tA==
Received: by 10.68.231.168 with SMTP id th8mr686107pbc.14.1346202046005;
	Tue, 28 Aug 2012 18:00:46 -0700 (PDT)
Received: from [10.10.1.100] (0127ahost2.starwoodbroadband.com. [12.105.246.2])
	by mx.google.com with ESMTPS id uu6sm18096769pbc.70.2012.08.28.18.00.40
	(version=SSLv3 cipher=OTHER); Tue, 28 Aug 2012 18:00:44 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.32.0.111121
Date: Wed, 29 Aug 2012 02:00:34 +0100
From: Keir Fraser <keir.xen@gmail.com>
To: Matt Wilson <msw@amazon.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <CC632842.3D258%keir.xen@gmail.com>
Thread-Topic: [Xen-devel] [PATCH DOCDAY v3] docs: improve documentation of Xen
	command line parameters
Thread-Index: Ac2FgbGRkyr1cEu9mkO7IZN+SY3ddw==
In-Reply-To: <20120828232829.GA26703@u002268147cd4502c336d.ant.amazon.com>
Mime-version: 1.0
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Keir Fraser <keir@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH DOCDAY v3] docs: improve documentation of
 Xen command line parameters
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 29/08/2012 00:28, "Matt Wilson" <msw@amazon.com> wrote:

> On Fri, Aug 17, 2012 at 02:58:50PM +0100, Ian Campbell wrote:
>> On Tue, 2012-08-07 at 04:24 +0100, Matt Wilson wrote:
>>> On Tue, Jul 31, 2012 at 08:36:40AM -0700, Matt Wilson wrote:
>>>> This change improves documentation for several Xen command line
>>>> parameters. Some of the Itanium-specific options are now removed. A
>>>> more thorough check should be performed to remove any other remnants.
>>>> 
>>>> I've reformatted some of the entries to fit in 80 column terminals.
>>>> 
>>>> Options that are yet undocumented but accept standard boolean /
>>>> integer values are now annotated as such.
>>>> 
>>>> The size suffixes have been corrected to use the binary prefixes
>>>> instead of decimal prefixes.
>>>> 
>>>> Changes since v2:
>>>>  * Change *bi prefixes to GiB, MiB, KiB
>>>> 
>>>> Signed-off-by: Matt Wilson <msw@amazon.com>
>>>> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>> 
>>> George's concerns were addressed in this version, and Andrew gave an
>>> Ack. Anything else keeping this from landing in staging?
>> 
>> Keir, do you want to apply this sort of thing or shall I (it's not
>> really a tools doc so I haven't touched it)

No need to wait for me on changes under docs/

 -- Keir

> Ping...
> 
> Matt



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 29 03:51:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Aug 2012 03:51:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6ZIe-0006iD-53; Wed, 29 Aug 2012 03:50:40 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1T6ZId-0006i8-6n
	for xen-devel@lists.xen.org; Wed, 29 Aug 2012 03:50:39 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1346212230!8471161!1
X-Originating-IP: [192.55.52.88]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjg4ID0+IDMxNTgxMA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18138 invoked from network); 29 Aug 2012 03:50:30 -0000
Received: from mga01.intel.com (HELO mga01.intel.com) (192.55.52.88)
	by server-2.tower-27.messagelabs.com with SMTP;
	29 Aug 2012 03:50:30 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga101.fm.intel.com with ESMTP; 28 Aug 2012 20:50:29 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.80,330,1344236400"; d="scan'208";a="215165037"
Received: from dongxiao-vt.sh.intel.com (HELO localhost) ([10.239.36.11])
	by fmsmga001.fm.intel.com with ESMTP; 28 Aug 2012 20:50:27 -0700
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Wed, 29 Aug 2012 11:43:59 +0800
Message-Id: <1346211839-32707-1-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.1
Subject: [Xen-devel] [PATCH] nvmx: fix unhandled nested XSETBV VMExit
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

If the L2 guest issues an XSETBV instruction, we need to deliver it to
the L1 guest.

This fixes the hang seen when booting Fedora 17 as an L2 guest.

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c |    1 +
 1 files changed, 1 insertions(+), 0 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 5f6553d..73ccf63 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1506,6 +1506,7 @@ int nvmx_n2_vmexit_handler(struct cpu_user_regs *regs,
     case EXIT_REASON_VMXOFF:
     case EXIT_REASON_VMXON:
     case EXIT_REASON_INVEPT:
+    case EXIT_REASON_XSETBV:
         /* inject to L1 */
         nvcpu->nv_vmexit_pending = 1;
         break;
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 29 05:19:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Aug 2012 05:19:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6ag8-0007In-3T; Wed, 29 Aug 2012 05:19:00 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zhenzhong.duan@oracle.com>) id 1T6ag6-0007Ii-I4
	for xen-devel@lists.xen.org; Wed, 29 Aug 2012 05:18:58 +0000
Received: from [85.158.143.99:41139] by server-2.bemta-4.messagelabs.com id
	4A/87-21239-146AD305; Wed, 29 Aug 2012 05:18:57 +0000
X-Env-Sender: zhenzhong.duan@oracle.com
X-Msg-Ref: server-7.tower-216.messagelabs.com!1346217536!23927040!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDcyNDU1Mw==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19750 invoked from network); 29 Aug 2012 05:18:57 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-7.tower-216.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 29 Aug 2012 05:18:57 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7T5InVR009294
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 29 Aug 2012 05:18:49 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7T5Im4w001646
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 29 Aug 2012 05:18:48 GMT
Received: from abhmt116.oracle.com (abhmt116.oracle.com [141.146.116.68])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7T5ImrZ004595; Wed, 29 Aug 2012 00:18:48 -0500
Received: from [10.191.9.71] (/10.191.9.71)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 28 Aug 2012 22:18:47 -0700
Message-ID: <503DA678.4040801@oracle.com>
Date: Wed, 29 Aug 2012 13:19:52 +0800
From: "zhenzhong.duan" <zhenzhong.duan@oracle.com>
Organization: oracle
User-Agent: Mozilla/5.0 (Windows NT 5.1;
	rv:12.0) Gecko/20120428 Thunderbird/12.0.1
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
References: <5020C24A.3060604@oracle.com>
	<5020F003020000780009322C@nat28.tlf.novell.com>
	<502235E8.9040309@oracle.com>
	<50229B840200007800093A73@nat28.tlf.novell.com>
	<5023860E.7080908@oracle.com>
	<5023AE960200007800093DE8@nat28.tlf.novell.com>
	<502490A7.7020603@oracle.com>
	<502535280200007800094322@nat28.tlf.novell.com>
	<5028B3AB.7060705@oracle.com>
	<5028E53202000078000946B1@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208131200480.21096@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1208131200480.21096@kaball.uk.xensource.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Satish Kantheti <satish.kantheti@oracle.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Feng Jin <joe.jin@oracle.com>, Jan Beulich <JBeulich@suse.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] kernel bootup slow issue on ovm3.1.1
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: zhenzhong.duan@oracle.com
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On 2012-08-13 19:08, Stefano Stabellini wrote:
> On Mon, 13 Aug 2012, Jan Beulich wrote:
>
> I tried to use PV spinlocks on PV on HVM guests but I found that:
>
> commit f10cd522c5fbfec9ae3cc01967868c9c2401ed23
> Author: Stefano Stabellini<stefano.stabellini@eu.citrix.com>
> Date:   Tue Sep 6 17:41:47 2011 +0100
>
>      xen: disable PV spinlocks on HVM
>
>      PV spinlocks cannot possibly work with the current code because they are
>      enabled after pvops patching has already been done, and because PV
>      spinlocks use a different data structure than native spinlocks so we
>      cannot switch between them dynamically. A spinlock that has been taken
>      once by the native code (__ticket_spin_lock) cannot be taken by
>      __xen_spin_lock even after it has been released.
>
>      Reported-and-Tested-by: Stefan Bader<stefan.bader@canonical.com>
>      Signed-off-by: Stefano Stabellini<stefano.stabellini@eu.citrix.com>
>      Signed-off-by: Konrad Rzeszutek Wilk<konrad.wilk@oracle.com>
>
>
> at that time Jeremy was finishing off his PV ticket locks series, that
> has the nice side effect of making it much easier to implement PV on HVM
> spin locks so I just decided to wait and append the following patch
> to his series:
>
> http://marc.info/?l=xen-devel&m=131846828430409&w=2
>
> that clearly never went upstream.
Hi Stefano,
Is there a schedule for merging those patches upstream?

zduan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 29 05:19:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Aug 2012 05:19:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6ag8-0007In-3T; Wed, 29 Aug 2012 05:19:00 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zhenzhong.duan@oracle.com>) id 1T6ag6-0007Ii-I4
	for xen-devel@lists.xen.org; Wed, 29 Aug 2012 05:18:58 +0000
Received: from [85.158.143.99:41139] by server-2.bemta-4.messagelabs.com id
	4A/87-21239-146AD305; Wed, 29 Aug 2012 05:18:57 +0000
X-Env-Sender: zhenzhong.duan@oracle.com
X-Msg-Ref: server-7.tower-216.messagelabs.com!1346217536!23927040!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDcyNDU1Mw==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19750 invoked from network); 29 Aug 2012 05:18:57 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-7.tower-216.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 29 Aug 2012 05:18:57 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7T5InVR009294
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 29 Aug 2012 05:18:49 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7T5Im4w001646
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 29 Aug 2012 05:18:48 GMT
Received: from abhmt116.oracle.com (abhmt116.oracle.com [141.146.116.68])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7T5ImrZ004595; Wed, 29 Aug 2012 00:18:48 -0500
Received: from [10.191.9.71] (/10.191.9.71)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 28 Aug 2012 22:18:47 -0700
Message-ID: <503DA678.4040801@oracle.com>
Date: Wed, 29 Aug 2012 13:19:52 +0800
From: "zhenzhong.duan" <zhenzhong.duan@oracle.com>
Organization: oracle
User-Agent: Mozilla/5.0 (Windows NT 5.1;
	rv:12.0) Gecko/20120428 Thunderbird/12.0.1
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
References: <5020C24A.3060604@oracle.com>
	<5020F003020000780009322C@nat28.tlf.novell.com>
	<502235E8.9040309@oracle.com>
	<50229B840200007800093A73@nat28.tlf.novell.com>
	<5023860E.7080908@oracle.com>
	<5023AE960200007800093DE8@nat28.tlf.novell.com>
	<502490A7.7020603@oracle.com>
	<502535280200007800094322@nat28.tlf.novell.com>
	<5028B3AB.7060705@oracle.com>
	<5028E53202000078000946B1@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208131200480.21096@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1208131200480.21096@kaball.uk.xensource.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Satish Kantheti <satish.kantheti@oracle.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Feng Jin <joe.jin@oracle.com>, Jan Beulich <JBeulich@suse.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] kernel bootup slow issue on ovm3.1.1
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: zhenzhong.duan@oracle.com
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On 2012-08-13 19:08, Stefano Stabellini wrote:
> On Mon, 13 Aug 2012, Jan Beulich wrote:
>
> I tried to use PV spinlocks on PV on HVM guests but I found that:
>
> commit f10cd522c5fbfec9ae3cc01967868c9c2401ed23
> Author: Stefano Stabellini<stefano.stabellini@eu.citrix.com>
> Date:   Tue Sep 6 17:41:47 2011 +0100
>
>      xen: disable PV spinlocks on HVM
>
>      PV spinlocks cannot possibly work with the current code because they are
>      enabled after pvops patching has already been done, and because PV
>      spinlocks use a different data structure than native spinlocks so we
>      cannot switch between them dynamically. A spinlock that has been taken
>      once by the native code (__ticket_spin_lock) cannot be taken by
>      __xen_spin_lock even after it has been released.
>
>      Reported-and-Tested-by: Stefan Bader<stefan.bader@canonical.com>
>      Signed-off-by: Stefano Stabellini<stefano.stabellini@eu.citrix.com>
>      Signed-off-by: Konrad Rzeszutek Wilk<konrad.wilk@oracle.com>
>
>
> at that time Jeremy was finishing off his PV ticket locks series, which
> has the nice side effect of making it much easier to implement PV on HVM
> spin locks, so I decided to wait and simply append the following patch
> to his series:
>
> http://marc.info/?l=xen-devel&m=131846828430409&w=2
>
> that clearly never went upstream.
Hi Stefano,
Is there a schedule for those patches to be merged upstream?

zduan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 29 05:36:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Aug 2012 05:36:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6awP-0007U1-Lz; Wed, 29 Aug 2012 05:35:49 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zhenzhong.duan@oracle.com>) id 1T6awO-0007Tu-LR
	for xen-devel@lists.xen.org; Wed, 29 Aug 2012 05:35:48 +0000
Received: from [85.158.143.99:35018] by server-3.bemta-4.messagelabs.com id
	A0/5C-08232-33AAD305; Wed, 29 Aug 2012 05:35:47 +0000
X-Env-Sender: zhenzhong.duan@oracle.com
X-Msg-Ref: server-9.tower-216.messagelabs.com!1346218546!27322242!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDcyNDU1Mw==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28883 invoked from network); 29 Aug 2012 05:35:47 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-9.tower-216.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 29 Aug 2012 05:35:47 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7T5Zddq022591
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 29 Aug 2012 05:35:42 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7T5ZcFD025996
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 29 Aug 2012 05:35:39 GMT
Received: from abhmt118.oracle.com (abhmt118.oracle.com [141.146.116.70])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7T5ZcJn019656; Wed, 29 Aug 2012 00:35:38 -0500
Received: from [10.191.9.71] (/10.191.9.71)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 28 Aug 2012 22:35:37 -0700
Message-ID: <503DAA5F.5030306@oracle.com>
Date: Wed, 29 Aug 2012 13:36:31 +0800
From: "zhenzhong.duan" <zhenzhong.duan@oracle.com>
Organization: oracle
User-Agent: Mozilla/5.0 (Windows NT 5.1;
	rv:12.0) Gecko/20120428 Thunderbird/12.0.1
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <5020C24A.3060604@oracle.com>
	<5020F003020000780009322C@nat28.tlf.novell.com>
	<502235E8.9040309@oracle.com>
	<50229B840200007800093A73@nat28.tlf.novell.com>
	<5023860E.7080908@oracle.com>
	<5023AE960200007800093DE8@nat28.tlf.novell.com>
	<502490A7.7020603@oracle.com>
	<502535280200007800094322@nat28.tlf.novell.com>
	<5028B3AB.7060705@oracle.com>
	<5028E53202000078000946B1@nat28.tlf.novell.com>
In-Reply-To: <5028E53202000078000946B1@nat28.tlf.novell.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Satish Kantheti <satish.kantheti@oracle.com>,
	xen-devel <xen-devel@lists.xen.org>, Feng Jin <joe.jin@oracle.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] kernel bootup slow issue on ovm3.1.1
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: zhenzhong.duan@oracle.com
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: base64
Content-Type: text/plain; charset="utf-8"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 2012-08-13 17:29, Jan Beulich wrote:
>>>> On 13.08.12 at 09:58, "zhenzhong.duan"<zhenzhong.duan@oracle.com>  wrote:
>> On 2012-08-10 22:22, Jan Beulich wrote:
>>> Going back to your original mail, I wonder however why this
>>> gets done at all. You said it got there via
>>>
>>> mtrr_aps_init()
>>>    \->   set_mtrr()
>>>        \->   mtrr_work_handler()
>>>
>>> yet this isn't done unconditionally - see the comment before
>>> checking mtrr_aps_delayed_init. Can you find out where the
>>> obviously necessary call(s) to set_mtrr_aps_delayed_init()
>>> come(s) from?
>> At bootup stage, set_mtrr_aps_delayed_init is called by
>> native_smp_prepare_cpus.
>> mtrr_aps_delayed_init is always set to true for Intel processors in
>> upstream code.
> Indeed, and that (in one form or another) has been done
> virtually forever in Linux. I wonder why the problem wasn't
> noticed (or looked into, if it was noticed) so far.
>
> As it's going to be rather difficult to convince the Linux folks
> to change their code (plus this wouldn't help with existing
> kernels anyway), we'll need to find a way to improve this in
> the hypervisor.
Hi Jan, Tim,
Is this issue improvable from the Xen side?
thanks
zduan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 29 05:59:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Aug 2012 05:59:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6bJ9-0007f8-0u; Wed, 29 Aug 2012 05:59:19 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T6bJ7-0007f3-MM
	for xen-devel@lists.xensource.com; Wed, 29 Aug 2012 05:59:18 +0000
Received: from [85.158.138.51:61389] by server-3.bemta-3.messagelabs.com id
	00/10-13809-4BFAD305; Wed, 29 Aug 2012 05:59:16 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-12.tower-174.messagelabs.com!1346219954!19407016!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA2NTA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27427 invoked from network); 29 Aug 2012 05:59:14 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-12.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Aug 2012 05:59:14 -0000
X-IronPort-AV: E=Sophos;i="4.80,332,1344211200"; d="scan'208";a="14240677"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	29 Aug 2012 05:59:14 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Wed, 29 Aug 2012 06:59:14 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1T6bJ4-00084v-1A;
	Wed, 29 Aug 2012 05:59:14 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1T6bJ3-0005i6-Sn;
	Wed, 29 Aug 2012 06:59:13 +0100
To: xen-devel@lists.xensource.com
Message-ID: <osstest-13636-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 29 Aug 2012 06:59:13 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13636: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13636 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13636/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13635
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13635
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13635
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13635

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  a0b5f8102a00
baseline version:
 xen                  3908b256ff34

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Jonathan Tripathy <jonnyt@abpni.co.uk>
  Keir Fraser <keir@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=a0b5f8102a00
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable a0b5f8102a00
+ branch=xen-unstable
+ revision=a0b5f8102a00
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/staging/xen-unstable.hg
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-unstable.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-unstable.hg
+ hg push -r a0b5f8102a00 ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 1 changesets with 1 changes to 1 files

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 29 07:00:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Aug 2012 07:00:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6cGG-0000Hb-Dk; Wed, 29 Aug 2012 07:00:24 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <james.harper@bendigoit.com.au>) id 1T6cGF-0000HW-NM
	for xen-devel@lists.xen.org; Wed, 29 Aug 2012 07:00:23 +0000
Received: from [85.158.143.99:15637] by server-1.bemta-4.messagelabs.com id
	D0/35-12504-60EBD305; Wed, 29 Aug 2012 07:00:22 +0000
X-Env-Sender: james.harper@bendigoit.com.au
X-Msg-Ref: server-9.tower-216.messagelabs.com!1346223618!27336495!1
X-Originating-IP: [203.16.224.4]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29556 invoked from network); 29 Aug 2012 07:00:21 -0000
Received: from smtp1.bendigoit.com.au (HELO smtp1.bendigoit.com.au)
	(203.16.224.4)
	by server-9.tower-216.messagelabs.com with AES256-SHA encrypted SMTP;
	29 Aug 2012 07:00:21 -0000
Received: from mail.bendigoit.com.au ([203.16.207.99])
	by smtp1.bendigoit.com.au with esmtp (Exim 4.69)
	(envelope-from <james.harper@bendigoit.com.au>)
	id 1T6cG3-0002Yf-Qs; Wed, 29 Aug 2012 17:00:11 +1000
Received: from BITCOM1.int.sbss.com.au ([192.168.200.237]) by
	mail.bendigoit.com.au with Microsoft SMTPSVC(6.0.3790.4675); 
	Wed, 29 Aug 2012 17:00:11 +1000
Received: from BITCOM1.int.sbss.com.au ([fe80::a5ca:4fd3:14f:ad5d]) by
	BITCOM1.int.sbss.com.au ([fe80::a5ca:4fd3:14f:ad5d%12]) with mapi id
	14.01.0379.000; Wed, 29 Aug 2012 17:00:11 +1000
From: James Harper <james.harper@bendigoit.com.au>
To: Jonathan Tripathy <jonnyt@abpni.co.uk>, "xen-devel@lists.xen.org"
	<xen-devel@lists.xen.org>
Thread-Topic: [Xen-devel] Xen and 4K Sectors (was blkback and bcache)
Thread-Index: AQHNhWMszDIlmHdMcESm7Pdn3r03ppdwLogg
Date: Wed, 29 Aug 2012 07:00:11 +0000
Message-ID: <6035A0D088A63A46850C3988ED045A4B29A78CB7@BITCOM1.int.sbss.com.au>
References: <503D3646.7020800@abpni.co.uk>
In-Reply-To: <503D3646.7020800@abpni.co.uk>
Accept-Language: en-AU, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [203.193.213.162]
x-tm-as-product-ver: SMEX-10.2.0.1135-7.000.1014-19146.004
x-tm-as-result: No--26.211000-0.000000-31
x-tm-as-user-approved-sender: Yes
x-tm-as-user-blocked-sender: No
MIME-Version: 1.0
X-OriginalArrivalTime: 29 Aug 2012 07:00:11.0949 (UTC)
	FILETIME=[EF075DD0:01CD85B3]
X-Really-From-Bendigo-IT: magichashvalue
Subject: Re: [Xen-devel] Xen and 4K Sectors (was blkback and bcache)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> 
> I'm curious as to how you came to the conclusion that qemu always presents
> 512 byte sectors?

I checked the code. I could be wrong, but the value 512 seemed pretty much hardcoded.

> When using bcache formatted to a 4k sector size, Windows Server 2008 just
> flat out refuses to install...
> This is true regardless of whether I'm passing an LV directly to the DomU
> (phy), or whether I'm using tap::aio or file.
> 
> When formatting the bcache to 512 byte size, Windows tries to install. I say
> "tries" as then my system kernel panics and reboots, but that's a bcache
> issue (I've posted the trace to the bcache list).
> 

I'm watching your thread on bcache list too.

Ideally, we'd be able to emulate 512e so Windows can know that the underlying block size is 4k and will (in Win7 and newer) align to that boundary where possible.

lvm seems pretty flexible wrt block size. I had my lvm on a pv with a 4k block size and wanted a 512 byte block size, so I split my raid, created a new raid with the now-spare disk, and created bcache with a 512 byte block size. I then added the 512 byte pv to the lvm (which was then 2 pv's, one 512, one 4k), pvmove'd everything across, removed the 4k pv, and joined it to the new raid.
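[The migration just described can be sketched as a few LVM commands. This is an illustrative reconstruction, not James's actual commands: the device names (/dev/md0, /dev/md1) and VG name (vg0) are placeholders, and with DRY_RUN=1 (the default here) the script only prints what it would run.]

```shell
# Rough sketch of the PV migration described above. Device and VG names
# (/dev/md0, /dev/md1, vg0) are placeholders, not details from this thread.
# With DRY_RUN=1 (the default) the commands are only printed, not executed.
set -eu
OLD_PV=${OLD_PV:-/dev/md0}   # existing PV with 4k block size
NEW_PV=${NEW_PV:-/dev/md1}   # replacement device with 512 byte block size
VG=${VG:-vg0}
run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "+ $*"; else "$@"; fi; }

run pvcreate "$NEW_PV"          # label the new device as a PV
run vgextend "$VG" "$NEW_PV"    # VG now spans both PVs
run pvmove "$OLD_PV" "$NEW_PV"  # migrate every extent off the 4k PV
run vgreduce "$VG" "$OLD_PV"    # detach the 4k PV from the VG
run pvremove "$OLD_PV"          # wipe its PV label; disk can join the new raid
```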
 
James

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 29 07:29:24 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Aug 2012 07:29:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6chp-0000eL-Ot; Wed, 29 Aug 2012 07:28:53 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <killian.de.volder@scarlet.be>) id 1T6chn-0000eD-Rl
	for xen-devel@lists.xen.org; Wed, 29 Aug 2012 07:28:52 +0000
X-Env-Sender: killian.de.volder@scarlet.be
X-Msg-Ref: server-3.tower-27.messagelabs.com!1346225311!8597282!1
X-Originating-IP: [217.70.183.196]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjE3LjcwLjE4My4xOTYgPT4gNDQ3NTg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22374 invoked from network); 29 Aug 2012 07:28:31 -0000
Received: from relay4-d.mail.gandi.net (HELO relay4-d.mail.gandi.net)
	(217.70.183.196) by server-3.tower-27.messagelabs.com with SMTP;
	29 Aug 2012 07:28:31 -0000
X-Originating-IP: 217.70.178.144
Received: from mfilter16-d.gandi.net (mfilter16-d.gandi.net [217.70.178.144])
	by relay4-d.mail.gandi.net (Postfix) with ESMTP id 25F1D172090
	for <xen-devel@lists.xen.org>; Wed, 29 Aug 2012 09:28:31 +0200 (CEST)
X-Virus-Scanned: Debian amavisd-new at mfilter16-d.gandi.net
Received: from relay4-d.mail.gandi.net ([217.70.183.196])
	by mfilter16-d.gandi.net (mfilter16-d.gandi.net [10.0.15.180])
	(amavisd-new, port 10024)
	with ESMTP id 1c4NnQpgPY31 for <xen-devel@lists.xen.org>;
	Wed, 29 Aug 2012 09:28:29 +0200 (CEST)
X-Originating-IP: 78.20.80.220
Received: from [172.17.0.70] (78-20-80-220.access.telenet.be [78.20.80.220])
	(Authenticated sender: killian.de.volder@megasoft.be)
	by relay4-d.mail.gandi.net (Postfix) with ESMTPSA id 9A4CD1720AA
	for <xen-devel@lists.xen.org>; Wed, 29 Aug 2012 09:28:29 +0200 (CEST)
Message-ID: <503DC49D.4090903@scarlet.be>
Date: Wed, 29 Aug 2012 09:28:29 +0200
From: Killian De Volder <killian.de.volder@scarlet.be>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:10.0.4) Gecko/20120522 Thunderbird/10.0.4
MIME-Version: 1.0
To: xen-devel@lists.xen.org
References: <503D3646.7020800@abpni.co.uk>
In-Reply-To: <503D3646.7020800@abpni.co.uk>
Subject: Re: [Xen-devel] Xen and 4K Sectors (was blkback and bcache)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8507634150989229098=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.
--===============8507634150989229098==
Content-Type: multipart/alternative;
 boundary="------------010405020904080804050701"

This is a multi-part message in MIME format.
--------------010405020904080804050701
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit

Word of warning:
Since you are messing with 4k size blocks:

If you add a 4k dm device to a 512b dm device, the dm becomes a 4k device.
If you combine this with a phy:// disk and you are using Windows, it starts trashing your data.

I don't think this bug has been fixed yet, but strictly speaking, it's Windows not supporting block-size changes on the fly.
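[As an aside not taken from the thread: one way to see which geometry a backing device will advertise is to read the kernel's logical and physical block sizes from sysfs; a 512e device shows logical=512 physical=4096, while a native 4k device reports 4096 for both. The helper below is a hypothetical sketch; the SYSFS override exists only so it can be exercised without a real disk.]

```shell
# Hypothetical helper, not from the thread: print the logical and physical
# block sizes the kernel reports for a device, e.g. `sector_sizes /dev/sda`.
# SYSFS can be overridden for testing; it defaults to the real /sys.
sector_sizes() {
    dev=${1#/dev/}                       # strip any /dev/ prefix
    q=${SYSFS:-/sys}/block/$dev/queue
    if [ -r "$q/logical_block_size" ]; then
        echo "$dev: logical=$(cat "$q/logical_block_size") physical=$(cat "$q/physical_block_size")"
    else
        echo "$dev: no queue info in ${SYSFS:-/sys}" >&2
        return 1
    fi
}
```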

On 28-08-12 23:21, Jonathan Tripathy wrote:
> On Mon, Aug 13, 2012 at 01:25:07PM +0000, James Harper wrote:
> >/  >  /
> >/  >  I think the problem is definitely related to the 4K sector issue./
> >/  >  /
> >/  >  qemu appears to always present 512 byte sectors, thus only booting from a/
> >/  >  512 byte sector partition table. Once the PV drivers take over though it all/
> >/  >  falls down because PV drivers are passed a 4K sector size and nothing/
> >/  >  matches up anymore./
> >/  >
>
> /Hi James,
>
> I'm curious as to how you came to the conclusion that qemu always presents 512 byte sectors?
> When using bcache formatted to a 4k sector size, Windows Server 2008 just flat out refuses to install...
> This is true regardless of whether I'm passing an LV directly to the DomU (phy), or whether I'm using tap::aio or file.
>
> When formatting the bcache to 512 byte size, Windows tries to install. I say "tries" as then my system kernel
> panics and reboots, but that's a bcache issue (I've posted the trace to the bcache list).
>
> Thanks
>
>
>
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


--------------010405020904080804050701
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: 7bit

<html>
  <head>
    <meta content="text/html; charset=ISO-8859-1"
      http-equiv="Content-Type">
  </head>
  <body bgcolor="#FFFFFF" text="#000000">
    Word of warning:<br>
    Since you are messing with 4k size blocks:<br>
    <br>
    If you add a 4k dm device to a 512b dm device, the dm becomes a 4k
    device.<br>
    If you combine this with a phy:// disk and you are using Windows,
    it starts trashing your data.<br>
    <br>
    I don't think this bug has been fixed yet, but strictly speaking,
    it's Windows not supporting block-size changes on the fly.<br>
    <br>
    On 28-08-12 23:21, Jonathan Tripathy wrote:
    <blockquote cite="mid:503D3646.7020800@abpni.co.uk" type="cite">
      <meta http-equiv="content-type" content="text/html;
        charset=ISO-8859-1">
      <meta charset="utf-8">
      <pre style="color: rgb(0, 0, 0); font-style: normal; font-variant: normal; font-weight: normal; letter-spacing: normal; line-height: normal; orphans: 2; text-align: start; text-indent: 0px; text-transform: none; widows: 2; word-spacing: 0px; -webkit-text-size-adjust: auto; -webkit-text-stroke-width: 0px; "><meta charset="utf-8">On Mon, Aug 13, 2012 at 01:25:07PM +0000, James Harper wrote:
&gt;<i> &gt; </i>
&gt;<i> &gt; I think the problem is definitely related to the 4K sector issue.</i>
&gt;<i> &gt; </i>
&gt;<i> &gt; qemu appears to always present 512 byte sectors, thus only booting from a</i>
&gt;<i> &gt; 512 byte sector partition table. Once the PV drivers take over though it all</i>
&gt;<i> &gt; falls down because PV drivers are passed a 4K sector size and nothing</i>
&gt;<i> &gt; matches up anymore.</i>
&gt;<i> &gt; 

</i>Hi James,

I'm curious as to how you came to the conclusion that qemu always presents 512 byte sectors? 
When using bcache formatted to a 4k sector size, Windows Server 2008 just flat out refuses to install...
This is true regardless of whether I'm passing an LV directly to the DomU (phy), or whether I'm using tap::aio or file.

When formatting the bcache to 512 byte size, Windows tries to install. I say "tries" as then my system kernel 
panics and reboots, but that's a bcache issue (I've posted the trace to the bcache list).

Thanks

</pre>
      <br>
      <br>
      <br>
      <fieldset class="mimeAttachmentHeader"></fieldset>
      <br>
      <pre wrap="">_______________________________________________
Xen-devel mailing list
<a class="moz-txt-link-abbreviated" href="mailto:Xen-devel@lists.xen.org">Xen-devel@lists.xen.org</a>
<a class="moz-txt-link-freetext" href="http://lists.xen.org/xen-devel">http://lists.xen.org/xen-devel</a>
</pre>
    </blockquote>
    <br>
  </body>
</html>

--------------010405020904080804050701--


--===============8507634150989229098==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8507634150989229098==--


From xen-devel-bounces@lists.xen.org Wed Aug 29 07:55:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Aug 2012 07:55:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6d7Q-0001Y1-5l; Wed, 29 Aug 2012 07:55:20 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T6d7O-0001Xl-HC
	for xen-devel@lists.xen.org; Wed, 29 Aug 2012 07:55:18 +0000
Received: from [85.158.138.51:9013] by server-1.bemta-3.messagelabs.com id
	D2/C9-09327-5EACD305; Wed, 29 Aug 2012 07:55:17 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-174.messagelabs.com!1346226915!27404831!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA2NTA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26000 invoked from network); 29 Aug 2012 07:55:15 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-2.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Aug 2012 07:55:15 -0000
X-IronPort-AV: E=Sophos;i="4.80,332,1344211200"; d="scan'208";a="14242844"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	29 Aug 2012 07:54:29 +0000
Received: from [127.0.0.1] (10.80.16.67) by smtprelay.citrix.com
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1;
	Wed, 29 Aug 2012 08:54:29 +0100
Message-ID: <1346226868.4847.21.camel@dagon.hellion.org.uk>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "greg@enjellic.com" <greg@enjellic.com>
From xen-devel-bounces@lists.xen.org Wed Aug 29 07:55:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Aug 2012 07:55:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6d7Q-0001Y1-5l; Wed, 29 Aug 2012 07:55:20 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T6d7O-0001Xl-HC
	for xen-devel@lists.xen.org; Wed, 29 Aug 2012 07:55:18 +0000
Received: from [85.158.138.51:9013] by server-1.bemta-3.messagelabs.com id
	D2/C9-09327-5EACD305; Wed, 29 Aug 2012 07:55:17 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-174.messagelabs.com!1346226915!27404831!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA2NTA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26000 invoked from network); 29 Aug 2012 07:55:15 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-2.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Aug 2012 07:55:15 -0000
X-IronPort-AV: E=Sophos;i="4.80,332,1344211200"; d="scan'208";a="14242844"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	29 Aug 2012 07:54:29 +0000
Received: from [127.0.0.1] (10.80.16.67) by smtprelay.citrix.com
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1;
	Wed, 29 Aug 2012 08:54:29 +0100
Message-ID: <1346226868.4847.21.camel@dagon.hellion.org.uk>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "greg@enjellic.com" <greg@enjellic.com>
Date: Wed, 29 Aug 2012 08:54:28 +0100
In-Reply-To: <201208281225.q7SCP1WO017490@wind.enjellic.com>
References: <201208281225.q7SCP1WO017490@wind.enjellic.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] XEN 4.1.3 blktap2 patches.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-08-28 at 13:25 +0100, Dr. Greg Wettstein wrote:
> Good morning, hope the day is going well for everyone.
> 
> The patches to fix the blktap2 issues which result in orphaned
> tapdisk2 processes and the transitory deadlock on guest shutdown
> didn't make it into the 4.1.3 release.  Updated patches to address
> these problems are available at the following location:
> 
> 	ftp://ftp.enjellic.com/pub/xen/xen-4.1.3.blktap1.patch
> 	ftp://ftp.enjellic.com/pub/xen/xen-4.1.3.blktap2.patch
> 
> The patches are designed to be applied in order and have been verified
> to work against the 4.1.3 release.

Please can you post these patches as emails with a changelog and a
signed-off-by. You should also CC Ian Jackson since he is the one who
does the tools backports.

The changelog should be clear about the backport status. IIRC one of
these is a backport from unstable (so the changelog should say which
commit it comes from and preserve the original S-o-b) and the other is a
reimplementation, since too much has changed in unstable for there to be
a plausible backport (and the changelog should mention this).

Ideally these mails would be threaded with subjects "[PATCH 1/2] ..."
and "[PATCH 2/2] ..." so that it is obvious that there is a sequence to
them. A tool such as hg email will do this for you.
http://wiki.xen.org/wiki/Submitting_Xen_Patches has some hints on how to
use it.

If nothing has changed since the previous posting, it may be sufficient
to reply to that previous posting with a "ping" message and CC Ian
there.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 29 09:20:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Aug 2012 09:20:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6eR1-0002zT-Vk; Wed, 29 Aug 2012 09:19:39 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1T6eQz-0002zO-PZ
	for xen-devel@lists.xen.org; Wed, 29 Aug 2012 09:19:37 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1346231318!8567151!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzQ5MjE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8822 invoked from network); 29 Aug 2012 09:08:40 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Aug 2012 09:08:40 -0000
X-IronPort-AV: E=Sophos;i="4.80,332,1344211200"; d="scan'208";a="206515784"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	29 Aug 2012 09:08:37 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.279.1;
	Wed, 29 Aug 2012 05:08:35 -0400
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1T6eGJ-0001YX-2G;
	Wed, 29 Aug 2012 10:08:35 +0100
Message-ID: <503DDC13.30206@citrix.com>
Date: Wed, 29 Aug 2012 10:08:35 +0100
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120714 Thunderbird/14.0
MIME-Version: 1.0
To: Keir Fraser <keir.xen@gmail.com>
References: <CC62F76D.3D210%keir.xen@gmail.com>
In-Reply-To: <CC62F76D.3D210%keir.xen@gmail.com>
X-Enigmail-Version: 1.4.4
Cc: Jan Beulich <jbeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [RFC] Spurious PIC interrupts
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 28/08/12 22:32, Keir Fraser wrote:
> On 28/08/2012 20:59, "Andrew Cooper" <andrew.cooper3@citrix.com> wrote:
>
>> Since Jan's patch to print and mask bogus PIC vectors, we have found
>> some issues on older hardware where spurious PIC vectors are being
>> repeatedly logged, as spurious vectors will ignore the relevant mask bit.
>>
>> The log message is deceptive in the case of a spurious vector.  I have
>> attached an RFC patch which changes the bogus_8259A_irq logic to be able
>> to detect spurious vectors and be rather less verbose about them.
>>
>> The new bogus_8259A_irq() function is basically a copy of
>> _mask_and_ack_8259A_irq(), but returning a boolean indicating whether it
>> was a genuine interrupt or not, which controls whether the "No irq
>> handler" message in do_IRQ gets printed or not.
>>
>> Jan: are you happy with the style of the adjustment, or could you
>> suggest a better way of doing it?
> No, you should make the change to _mask_and_ack_8259A_irq() itself, and
> callers which do not care about the return code can simply discard it.

Ok - I initially avoided that because _mask_and_ack_8259A_irq() is used
to fill a function pointer structure, and I preferred a smaller change to
the core.

I will re-design somewhat with these points in mind.

~Andrew

>
> I hardly even want one instance of that appalling function in the tree, let
> alone two! Terrible abuse of goto, and I'm pretty laid back about goto use.
>
> And we don't use True/False anywhere else in Xen. We don't even have any
> centralised definition of true/false of any kind, so you should really just
> use 1/0 directly. Alternatively you could propose a patch to define
> true/false in our types.h, and use those. I don't know where the capitalised
> True/False came from!
>
>  -- Keir
>
>

-- 
Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
T: +44 (0)1223 225 900, http://www.citrix.com


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 29 09:57:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Aug 2012 09:57:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6f1M-0003qL-D8; Wed, 29 Aug 2012 09:57:12 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1T6f1L-0003q5-IJ
	for xen-devel@lists.xen.org; Wed, 29 Aug 2012 09:57:11 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1346233810!8549744!1
X-Originating-IP: [209.85.160.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27351 invoked from network); 29 Aug 2012 09:50:12 -0000
Received: from mail-pb0-f45.google.com (HELO mail-pb0-f45.google.com)
	(209.85.160.45)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Aug 2012 09:50:12 -0000
Received: by pbbjt11 with SMTP id jt11so857130pbb.32
	for <xen-devel@lists.xen.org>; Wed, 29 Aug 2012 02:50:09 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:cc:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=Rd5xjCcn8bWn5FUmgkPBJk8bUsvsdbAKXj/wmU57D8I=;
	b=Pir64GVpNnbctgSG2ACYu6I3rvnAAfbDcvF2k334grcwXK7fr9NWx+a6Y0QlS1BhoN
	+CNMjt76tlDdQqO9FGYSN3Kk/BKIiRoikUtVN3thjwbvPRJysrY3+Zz/QUXfDzOn5eCw
	WHF8abucrAL+cwFzfuf+thnQwTOIRMSk0+eyUwuWuylQGoy0HnF3IE8Nmxv3uAMe/22m
	rp7zq4r7RoCa9EDfQyvXtVedipnLB1GYlCWKL4CWD/e4kM8Ql6Rg2rL70u427P77l5m6
	QH6NX/FASig4G6QAsnxz2svcRrkGtc+G7LZ85QNOdMu1KNeA+T8B0KPNlE1pgaPLnpWG
	DVGQ==
Received: by 10.66.74.195 with SMTP id w3mr2306085pav.64.1346233809649;
	Wed, 29 Aug 2012 02:50:09 -0700 (PDT)
Received: from [10.10.1.100] (0127ahost2.starwoodbroadband.com. [12.105.246.2])
	by mx.google.com with ESMTPS id ky8sm1987539pbc.43.2012.08.29.02.50.05
	(version=SSLv3 cipher=OTHER); Wed, 29 Aug 2012 02:50:08 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.32.0.111121
Date: Wed, 29 Aug 2012 10:50:00 +0100
From: Keir Fraser <keir.xen@gmail.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <CC63A458.3D30F%keir.xen@gmail.com>
Thread-Topic: [RFC] Spurious PIC interrupts
Thread-Index: Ac2Fy6eUHivRF2oQO0iGjeLbq2vvqQ==
In-Reply-To: <503DDC13.30206@citrix.com>
Mime-version: 1.0
Cc: Jan Beulich <jbeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [RFC] Spurious PIC interrupts
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 29/08/2012 10:08, "Andrew Cooper" <andrew.cooper3@citrix.com> wrote:

>> No, you should make the change to _mask_and_ack_8259A_irq() itself, and
>> callers which do not care about the return code can simply discard it.
> 
> Ok - I initially avoided that because _mask_and_ack_8259A_irq() is used
> to fill a function pointer structure, and preferred less change to the core.
> 
> I will re-design somewhat with these points in mind.

Worst case rename to __mask_and_ack_8259A_irq() and implement new
_mask_and_ack_8259A_irq() which simply calls it and discards the return
code. Still much better than duplicating the function.

 -- Keir



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 29 11:07:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Aug 2012 11:07:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6g72-0005DD-6D; Wed, 29 Aug 2012 11:07:08 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1T6g70-0005D7-N5
	for xen-devel@lists.xen.org; Wed, 29 Aug 2012 11:07:06 +0000
Received: from [85.158.143.99:2978] by server-2.bemta-4.messagelabs.com id
	F9/FF-21239-AD7FD305; Wed, 29 Aug 2012 11:07:06 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-4.tower-216.messagelabs.com!1346238424!22097469!1
X-Originating-IP: [188.40.164.121]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3884 invoked from network); 29 Aug 2012 11:07:04 -0000
Received: from static.121.164.40.188.clients.your-server.de (HELO
	smtp.eikelenboom.it) (188.40.164.121)
	by server-4.tower-216.messagelabs.com with AES256-SHA encrypted SMTP;
	29 Aug 2012 11:07:04 -0000
Received: from 26-69-ftth.onsneteindhoven.nl ([88.159.69.26]:55047
	helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.72) (envelope-from <linux@eikelenboom.it>)
	id 1T6g3o-0002Kz-7A; Wed, 29 Aug 2012 13:03:50 +0200
Date: Wed, 29 Aug 2012 13:06:58 +0200
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <13955109.20120829130658@eikelenboom.it>
To: Martin Behnke <martin.xenfan@gmail.com>
In-Reply-To: <CAJ0a4CFP1avOBtiieuSKP+AxCTLd6L653Pky9miTGHDdPbHUeQ@mail.gmail.com>
References: <CAJ0a4CFP1avOBtiieuSKP+AxCTLd6L653Pky9miTGHDdPbHUeQ@mail.gmail.com>
MIME-Version: 1.0
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Another successful story about XEN VGA passthrough
	- hardware can be added to wiki article
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Martin,

Could you also give the particular brand/model of the AMD/ATI HD5450 you used?

--
Sander

Thursday, August 23, 2012, 9:53:20 PM, you wrote:

> Dear XEN community,

> I would like to send you some information about the hardware I used to
> pass through an ATI graphics adapter.
> Please update the article
> http://wiki.xen.org/wiki/XenVGAPassthroughTestedAdapters with this
> adapter.

> I have to say:
> GREAT!!! XEN 4.1.2 has now been running for 8 weeks without any problems
> (like kernel oopses or hanging Dom0s or DomUs).

> I decided to use an ATI HD5450 adapter because my first try with an
> NVIDIA adapter, a GeForce GT430 1GB PCIe, failed completely. I tried
> nearly all the hints and how-tos - all without success. I could not
> figure out why. I have to say that I liked NVIDIA and Linux, but now I
> have switched to ATI.
> My Dom0 uses the primary graphics adapter - Intel integrated graphics
> - to provide a graphical interface for the tool virt-manager, which
> manages all the DomUs.
> The secondary graphics adapter is used for my home XEN all-in-one
> NAS - VDR - DLNA home server with xen-pciback :-)
> The ATI HD5450 runs 'out of the box'. I had to do nothing special for
> it - just compile and resolve any compile errors.

> My problems while installing:
> - libpci-dev and pci-utils are mandatory on Debian
> - resolve all the Python bindings needed to use xm list (python-xml)
> - on 64-bit platforms all libs are located in /usr/lib64, but all
> Xen-related tools look in /usr/lib
> (I had to move all Xen libs to /usr/lib and symlink /usr/lib64 to /usr/lib)
> - kernel boot options:
>  multiboot /xen-4.1.2.gz dom0_mem=2048M iommu=1 xen-pciback.permissive
> xen-pciback.passthrough=1
> xen-pciback.hide=(0000:01:00.0)(0000:01:00.1)(0000:02:00.0)
> - I had to assign both PCI functions to pciback to pass this ATI device
> through


> A short overview:

> Dom0 host operating system:
> Debian Wheezy 64bit with self compiled kernel 3.3.0-rc7

> xen version:
> XEN 4.1.2

> Motherboard:
> INTEL  DQ67OW (Intel Vt-d enabled)

> CPU:
> Intel Core i5 2400S (4 core)

> lspci -v

> 01:00.0 VGA compatible controller: Advanced Micro Devices [AMD] nee
> ATI Cedar PRO [Radeon HD 5450] (prog-if 00 [VGA controller])
>         Subsystem: PC Partner Limited Device e164
>         Flags: bus master, fast devsel, latency 0, IRQ 16
>         Memory at d0000000 (64-bit, prefetchable) [size=256M]
>         Memory at fe520000 (64-bit, non-prefetchable) [size=128K]
>         I/O ports at e000 [size=256]
>         Expansion ROM at fe500000 [disabled] [size=128K]
>         Capabilities: [50] Power Management version 3
>         Capabilities: [58] Express Legacy Endpoint, MSI 00
>         Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
>         Capabilities: [100] Vendor Specific Information: ID=0001 Rev=1
> Len=010 <?>
>         Capabilities: [150] Advanced Error Reporting
>         Kernel driver in use: pciback

> 01:00.1 Audio device: Advanced Micro Devices [AMD] nee ATI Cedar HDMI
> Audio [Radeon HD 5400/6300 Series]
>         Subsystem: PC Partner Limited Device aa68
>         Flags: bus master, fast devsel, latency 0, IRQ 17
>         Memory at fe540000 (64-bit, non-prefetchable) [size=16K]
>         Capabilities: [50] Power Management version 3
>         Capabilities: [58] Express Legacy Endpoint, MSI 00
>         Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
>         Capabilities: [100] Vendor Specific Information: ID=0001 Rev=1
> Len=010 <?>
>         Capabilities: [150] Advanced Error Reporting
>         Kernel driver in use: pciback


> 02:00.0 Multimedia controller: Philips Semiconductors SAA7146 (rev 01)
>         Subsystem: KNC One Device 0022
>         Flags: bus master, medium devsel, latency 32, IRQ 16
>         Memory at fe400000 (32-bit, non-prefetchable) [size=512]
>         Kernel driver in use: pciback


> Communication between DomU's:
> - networkshares with samba and nfs
> - bridged network based on a Intel e1000 NIC

> +++++++++ DomU-01:

> Windows7 32 bit, 4GB Ram, 50GB HDD imagefile

> Hardware:
> ATI Radeon HD5450

> Catalyst Control Center:
> Version: 2012.0611.1251.21046


> +++++++++ DomU-02:
> Ubuntu 32 bit, 2Gb RAM, 40 GB HDD imagefile

> Hardware:
> Satelco EasyWatch DVB-C





> Regards,

> Martin




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 29 11:07:37 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Aug 2012 11:07:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6g72-0005DD-6D; Wed, 29 Aug 2012 11:07:08 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1T6g70-0005D7-N5
	for xen-devel@lists.xen.org; Wed, 29 Aug 2012 11:07:06 +0000
Received: from [85.158.143.99:2978] by server-2.bemta-4.messagelabs.com id
	F9/FF-21239-AD7FD305; Wed, 29 Aug 2012 11:07:06 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-4.tower-216.messagelabs.com!1346238424!22097469!1
X-Originating-IP: [188.40.164.121]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3884 invoked from network); 29 Aug 2012 11:07:04 -0000
Received: from static.121.164.40.188.clients.your-server.de (HELO
	smtp.eikelenboom.it) (188.40.164.121)
	by server-4.tower-216.messagelabs.com with AES256-SHA encrypted SMTP;
	29 Aug 2012 11:07:04 -0000
Received: from 26-69-ftth.onsneteindhoven.nl ([88.159.69.26]:55047
	helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.72) (envelope-from <linux@eikelenboom.it>)
	id 1T6g3o-0002Kz-7A; Wed, 29 Aug 2012 13:03:50 +0200
Date: Wed, 29 Aug 2012 13:06:58 +0200
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <13955109.20120829130658@eikelenboom.it>
To: Martin Behnke <martin.xenfan@gmail.com>
In-Reply-To: <CAJ0a4CFP1avOBtiieuSKP+AxCTLd6L653Pky9miTGHDdPbHUeQ@mail.gmail.com>
References: <CAJ0a4CFP1avOBtiieuSKP+AxCTLd6L653Pky9miTGHDdPbHUeQ@mail.gmail.com>
MIME-Version: 1.0
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Another successful story about XEN VGA passthrough
	- hardware can be added to wiki article
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Martin,

Could you also give the particular brand/model of the AMD/ATI HD5450 you used?

--
Sander

Thursday, August 23, 2012, 9:53:20 PM, you wrote:

> Dear XEN community,

> I would like to send you some information about the hardware I used to
> pass through an ATI graphics adapter.
> Please update the article
> http://wiki.xen.org/wiki/XenVGAPassthroughTestedAdapters with this
> adapter.

> I have to say:
> GREAT!!! XEN 4.1.2 has now been running for 8 weeks without any
> problems (like kernel oopses or hanging Dom0 or DomUs).

> I decided to use an ATI HD5450 adapter because my first try with an
> NVIDIA adapter failed completely. It was an NVIDIA GeForce GT430 1GB
> RAM PCIe. I tried nearly all the hints and how-tos - all without
> success. I could not figure out why. I have to say that I liked NVIDIA
> and Linux, but now I have switched to ATI.
> My Dom0 uses the primary graphics adapter - Intel integrated graphics
> - to provide a graphical interface on Dom0 for the tool virt-manager,
> which I use for managing all DomUs.
> The secondary graphics adapter is used for my home XEN all-in-one
> NAS - VDR - DLNA home server with xen-pciback :-)
> The ATI HD5450 runs 'out of the box'. There was nothing special to do
> for it - just compile and fix any compile errors.

> My problems while installing:
> - libpci-dev and pci-utils are mandatory on Debian
> - resolve all Python bindings needed to use xm list (python-xml)
> - on 64-bit platforms all libs are located in /usr/lib64 - but all xen
> related tools look in /usr/lib
> (I had to move all xen libs to /usr/lib and symlink /usr/lib64 to /usr/lib)
> - kernel bootoptions:
>  multiboot /xen-4.1.2.gz dom0_mem=2048M iommu=1 xen-pciback.permissive
> xen-pciback.passthrough=1
> xen-pciback.hide=(0000:01:00.0)(0000:01:00.1)(0000:02:00.0)
> - I had to pciback both PCI IDs to pass through this ATI device
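[Editorial note: the xen-pciback.hide boot parameter above lists each device as a parenthesised BDF, and both functions of the multi-function ATI card (the GPU at 01:00.0 and its HDMI audio at 01:00.1) must appear for the passthrough to work. A minimal, illustrative Python sketch of that syntax follows; `parse_pciback_hide` and `functions_of` are hypothetical helpers written for this note, not part of any Xen tool.]

```python
import re

def parse_pciback_hide(value):
    """Parse an xen-pciback.hide value such as
    "(0000:01:00.0)(0000:01:00.1)(0000:02:00.0)" into a list of BDFs."""
    return re.findall(r"\(([0-9a-fA-F:.]+)\)", value)

def functions_of(bdfs, device):
    """Return the function numbers hidden for a given domain:bus:device."""
    return sorted(int(b.rsplit(".", 1)[1]) for b in bdfs
                  if b.rsplit(".", 1)[0] == device)

hide = "(0000:01:00.0)(0000:01:00.1)(0000:02:00.0)"
bdfs = parse_pciback_hide(hide)
# Both functions of the ATI adapter must be hidden, as the mail notes.
print(functions_of(bdfs, "0000:01:00"))  # [0, 1]
```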


> A short overview:

> Dom0 host operating system:
> Debian Wheezy 64bit with self compiled kernel 3.3.0-rc7

> xen version:
> XEN 4.1.2

> Motherboard:
> Intel DQ67OW (Intel VT-d enabled)

> CPU:
> Intel Core i5 2400S (4 core)

> lspci -v

> 01:00.0 VGA compatible controller: Advanced Micro Devices [AMD] nee
> ATI Cedar PRO [Radeon HD 5450] (prog-if 00 [VGA controller])
>         Subsystem: PC Partner Limited Device e164
>         Flags: bus master, fast devsel, latency 0, IRQ 16
>         Memory at d0000000 (64-bit, prefetchable) [size=256M]
>         Memory at fe520000 (64-bit, non-prefetchable) [size=128K]
>         I/O ports at e000 [size=256]
>         Expansion ROM at fe500000 [disabled] [size=128K]
>         Capabilities: [50] Power Management version 3
>         Capabilities: [58] Express Legacy Endpoint, MSI 00
>         Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
>         Capabilities: [100] Vendor Specific Information: ID=0001 Rev=1
> Len=010 <?>
>         Capabilities: [150] Advanced Error Reporting
>         Kernel driver in use: pciback

> 01:00.1 Audio device: Advanced Micro Devices [AMD] nee ATI Cedar HDMI
> Audio [Radeon HD 5400/6300 Series]
>         Subsystem: PC Partner Limited Device aa68
>         Flags: bus master, fast devsel, latency 0, IRQ 17
>         Memory at fe540000 (64-bit, non-prefetchable) [size=16K]
>         Capabilities: [50] Power Management version 3
>         Capabilities: [58] Express Legacy Endpoint, MSI 00
>         Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
>         Capabilities: [100] Vendor Specific Information: ID=0001 Rev=1
> Len=010 <?>
>         Capabilities: [150] Advanced Error Reporting
>         Kernel driver in use: pciback


> 02:00.0 Multimedia controller: Philips Semiconductors SAA7146 (rev 01)
>         Subsystem: KNC One Device 0022
>         Flags: bus master, medium devsel, latency 32, IRQ 16
>         Memory at fe400000 (32-bit, non-prefetchable) [size=512]
>         Kernel driver in use: pciback


> Communication between DomUs:
> - network shares with Samba and NFS
> - bridged network based on an Intel e1000 NIC

> +++++++++ DomU-01:

> Windows 7 32-bit, 4 GB RAM, 50 GB HDD image file

> Hardware:
> ATI Radeon HD5450

> Catalyst Control Center:
> Version: 2012.0611.1251.21046


> +++++++++ DomU-02:
> Ubuntu 32-bit, 2 GB RAM, 40 GB HDD image file

> Hardware:
> Satelco EasyWatch DVB-C





> Regards,

> Martin





From xen-devel-bounces@lists.xen.org Wed Aug 29 12:21:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Aug 2012 12:21:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6hGr-0005pB-BL; Wed, 29 Aug 2012 12:21:21 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Siva.Palagummi@ca.com>) id 1T6hGp-0005p6-Mg
	for xen-devel@lists.xen.org; Wed, 29 Aug 2012 12:21:20 +0000
Received: from [85.158.143.99:52966] by server-1.bemta-4.messagelabs.com id
	8F/B7-12504-E390E305; Wed, 29 Aug 2012 12:21:18 +0000
X-Env-Sender: Siva.Palagummi@ca.com
X-Msg-Ref: server-13.tower-216.messagelabs.com!1346242874!26491806!1
X-Originating-IP: [74.125.149.211]
X-SpamReason: No, hits=0.8 required=7.0 tests=HTML_90_100,
	HTML_ATTR_UNIQUE,HTML_MESSAGE
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15323 invoked from network); 29 Aug 2012 12:21:17 -0000
Received: from na3sys009aog114.obsmtp.com (HELO na3sys009aog114.obsmtp.com)
	(74.125.149.211)
	by server-13.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 29 Aug 2012 12:21:17 -0000
Received: from INHYMS191.ca.com ([155.35.46.48]) (using TLSv1) by
	na3sys009aob114.postini.com ([74.125.148.12]) with SMTP
	ID DSNKUD4JOmodZSb3YsFoKVExeSulQcb+IKoD@postini.com;
	Wed, 29 Aug 2012 05:21:16 PDT
Received: from INHYMS171.ca.com (155.35.35.45) by INHYMS191.ca.com
	(155.35.46.48) with Microsoft SMTP Server (TLS) id 14.1.355.2;
	Wed, 29 Aug 2012 17:51:11 +0530
Received: from INHYMS111A.ca.com ([169.254.3.18]) by INHYMS171.ca.com
	([155.35.35.45]) with mapi id 14.01.0355.002;
	Wed, 29 Aug 2012 17:51:11 +0530
From: "Palagummi, Siva" <Siva.Palagummi@ca.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Thread-Topic: [PATCH RFC V2] xen/netback: Count ring slots properly when
	larger MTU sizes are used
Thread-Index: Ac2F4HBfCFIrW+OHQRWCwjPvwOEhAg==
Date: Wed, 29 Aug 2012 12:21:10 +0000
Message-ID: <7D7C26B1462EB14CB0E7246697A18C1312C3D2@INHYMS111A.ca.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: yes
X-MS-TNEF-Correlator: 
x-cr-puzzleid: {BF37575F-06D2-4591-A9D7-3B2B596B5F16}
x-cr-hashedpuzzle: DPBf Iq0c Iy0w I/pe JJSG KpCq NK4N Qeia RGl6 SHZo TLKW
	Wbn9 Wx/g XBPr XDqR XFQB; 2;
	aQBhAG4ALgBjAGEAbQBwAGIAZQBsAGwAQABjAGkAdAByAGkAeAAuAGMAbwBtADsAeABlAG4ALQBkAGUAdgBlAGwAQABsAGkAcwB0AHMALgB4AGUAbgAuAG8AcgBnAA==;
	Sosha1_v1; 7; {BF37575F-06D2-4591-A9D7-3B2B596B5F16};
	cwBpAHYAYQAuAHAAYQBsAGEAZwB1AG0AbQBpAEAAYwBhAC4AYwBvAG0A; Wed,
	29 Aug 2012 12:19:01 GMT;
	WwBQAEEAVABDAEgAIABSAEYAQwAgAFYAMgBdACAAeABlAG4ALwBuAGUAdABiAGEAYwBrADoAIABDAG8AdQBuAHQAIAByAGkAbgBnACAAcwBsAG8AdABzACAAcAByAG8AcABlAHIAbAB5ACAAdwBoAGUAbgAgAGwAYQByAGcAZQByACAATQBUAFUAIABzAGkAegBlAHMAIABhAHIAZQAgAHUAcwBlAGQA
x-originating-ip: [10.134.16.218]
Content-Type: multipart/mixed;
	boundary="_004_7D7C26B1462EB14CB0E7246697A18C1312C3D2INHYMS111Acacom_"
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: [Xen-devel] [PATCH RFC V2] xen/netback: Count ring slots properly
 when larger MTU sizes are used
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--_004_7D7C26B1462EB14CB0E7246697A18C1312C3D2INHYMS111Acacom_
Content-Type: multipart/alternative;
	boundary="_000_7D7C26B1462EB14CB0E7246697A18C1312C3D2INHYMS111Acacom_"

--_000_7D7C26B1462EB14CB0E7246697A18C1312C3D2INHYMS111Acacom_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable

This patch contains the modifications that are discussed in thread
http://lists.xen.org/archives/html/xen-devel/2012-08/msg01730.html

Ian,
Instead of using max_required_rx_slots, I used the count that we already
have in hand to verify whether we have enough room in the batch queue for
the next skb. Please let me know if that is not appropriate. Things worked
fine in my environment. Under heavy load we now seem to be consuming most
of the slots in the queue, and no BUG_ON :-)
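[Editorial note: to make the counting issue concrete, here is a small, illustrative Python model of the arithmetic the patch description is about. The constants and function names (`PAGE_SIZE`, `RING_SIZE`, `slots_for_headlen`, `batch_has_room`) are stand-ins written for this note, not the kernel code: with a large MTU, skb_headlen can exceed PAGE_SIZE, so the linear area alone needs several ring slots, and the batch-room check can use the precomputed per-skb slot count instead of the pessimistic count + MAX_SKB_FRAGS bound.]

```python
PAGE_SIZE = 4096
RING_SIZE = 256  # stands in for XEN_NETIF_RX_RING_SIZE

def div_round_up(n, d):
    # Same as the kernel's DIV_ROUND_UP macro.
    return -(-n // d)

def slots_for_headlen(headlen):
    # A linear area larger than PAGE_SIZE needs more than one ring slot.
    return div_round_up(headlen, PAGE_SIZE)

def batch_has_room(used, next_skb_slots):
    # Check the actual precomputed slot count of the next skb against
    # the ring size, rather than a worst-case per-skb bound.
    return used + next_skb_slots < RING_SIZE

print(slots_for_headlen(9000))  # 3 slots for a 9000-byte jumbo-frame head
print(batch_has_room(254, 3))   # False: 254 + 3 >= 256
```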

Thanks
Siva

--_000_7D7C26B1462EB14CB0E7246697A18C1312C3D2INHYMS111Acacom_--

--_004_7D7C26B1462EB14CB0E7246697A18C1312C3D2INHYMS111Acacom_
Content-Type: application/octet-stream;
	name="netback_slots_counting_v2.patch"
Content-Description: netback_slots_counting_v2.patch
Content-Disposition: attachment; filename="netback_slots_counting_v2.patch";
	size=2462; creation-date="Tue, 28 Aug 2012 21:59:13 GMT";
	modification-date="Tue, 28 Aug 2012 22:03:28 GMT"
Content-Transfer-Encoding: base64

RnJvbTogU2l2YSBQYWxhZ3VtbWkgPFNpdmEuUGFsYWd1bW1pQGNhLmNvbT4KCmNvdW50IHZhcmlh
YmxlIGluIHhlbl9uZXRia19yeF9hY3Rpb24gbmVlZCB0byBiZSBpbmNyZW1lbnRlZApjb3JyZWN0
bHkgdG8gdGFrZSBpbnRvIGFjY291bnQgb2YgZXh0cmEgc2xvdHMgcmVxdWlyZWQgd2hlbiBza2Jf
aGVhZGxlbiBpcyAKZ3JlYXRlciB0aGFuIFBBR0VfU0laRSB3aGVuIGxhcmdlciBNVFUgdmFsdWVz
IGFyZSB1c2VkLiBXaXRob3V0IHRoaXMgY2hhbmdlCkJVR19PTihucG8ubWV0YV9wcm9kID4gQVJS
QVlfU0laRShuZXRiay0+bWV0YSkpIGlzIGNhdXNpbmcgbmV0YmFjayB0aHJlYWQgCnRvIGV4aXQu
CgpUaGUgZml4IGlzIHRvIHN0YXNoIHRoZSBjb3VudGluZyBhbHJlYWR5IGRvbmUgaW4geGVuX25l
dGJrX2NvdW50X3NrYl9zbG90cwppbiBza2JfY2Jfb3ZlcmxheSBhbmQgdXNlIGl0IGRpcmVjdGx5
IGluIHhlbl9uZXRia19yeF9hY3Rpb24uCgpBbHNvIGltcHJvdmVkIHRoZSBjaGVja2luZyBmb3Ig
ZmlsbGluZyB0aGUgYmF0Y2ggcXVldWUuIAoKQWxzbyBtZXJnZWQgYSBjaGFuZ2UgZnJvbSBhIHBh
dGNoIGNyZWF0ZWQgZm9yIHhlbl9uZXRia19jb3VudF9za2Jfc2xvdHMgCmZ1bmN0aW9uIGFzIHBl
ciB0aHJlYWQgCmh0dHA6Ly9saXN0cy54ZW4ub3JnL2FyY2hpdmVzL2h0bWwveGVuLWRldmVsLzIw
MTItMDUvbXNnMDE4NjQuaHRtbAoKVGhlIHByb2JsZW0gaXMgc2VlbiB3aXRoIGxpbnV4IDMuMi4y
IGtlcm5lbCBvbiBJbnRlbCAxMEdicHMgbmV0d29yawoKClNpZ25lZC1vZmYtYnk6IFNpdmEgUGFs
YWd1bW1pIDxTaXZhLlBhbGFndW1taUBjYS5jb20+Ci0tLQpkaWZmIC11cHJOIGEvZHJpdmVycy9u
ZXQveGVuLW5ldGJhY2svbmV0YmFjay5jIGIvZHJpdmVycy9uZXQveGVuLW5ldGJhY2svbmV0YmFj
ay5jCi0tLSBhL2RyaXZlcnMvbmV0L3hlbi1uZXRiYWNrL25ldGJhY2suYwkyMDEyLTAxLTI1IDE5
OjM5OjMyLjAwMDAwMDAwMCAtMDUwMAorKysgYi9kcml2ZXJzL25ldC94ZW4tbmV0YmFjay9uZXRi
YWNrLmMJMjAxMi0wOC0yOCAxNzozMToyMi4wMDAwMDAwMDAgLTA0MDAKQEAgLTgwLDYgKzgwLDEx
IEBAIHVuaW9uIHBhZ2VfZXh0IHsKIAl2b2lkICptYXBwaW5nOwogfTsKIAorc3RydWN0IHNrYl9j
Yl9vdmVybGF5IHsKKwlpbnQgbWV0YV9zbG90c191c2VkOworCWludCBjb3VudDsKK307CisKIHN0
cnVjdCB4ZW5fbmV0YmsgewogCXdhaXRfcXVldWVfaGVhZF90IHdxOwogCXN0cnVjdCB0YXNrX3N0
cnVjdCAqdGFzazsKQEAgLTMyNCw5ICszMjksOSBAQCB1bnNpZ25lZCBpbnQgeGVuX25ldGJrX2Nv
dW50X3NrYl9zbG90cyhzCiB7CiAJdW5zaWduZWQgaW50IGNvdW50OwogCWludCBpLCBjb3B5X29m
ZjsKKwlzdHJ1Y3Qgc2tiX2NiX292ZXJsYXkgKnNjbzsKIAotCWNvdW50ID0gRElWX1JPVU5EX1VQ
KAotCQkJb2Zmc2V0X2luX3BhZ2Uoc2tiLT5kYXRhKStza2JfaGVhZGxlbihza2IpLCBQQUdFX1NJ
WkUpOworCWNvdW50ID0gRElWX1JPVU5EX1VQKHNrYl9oZWFkbGVuKHNrYiksIFBBR0VfU0laRSk7
CiAKIAljb3B5X29mZiA9IHNrYl9oZWFkbGVuKHNrYikgJSBQQUdFX1NJWkU7CiAKQEAgLTM1Miw2
ICszNTcsOCBAQCB1bnNpZ25lZCBpbnQgeGVuX25ldGJrX2NvdW50X3NrYl9zbG90cyhzCiAJCQlz
aXplIC09IGJ5dGVzOwogCQl9CiAJfQorCXNjbyA9IChzdHJ1Y3Qgc2tiX2NiX292ZXJsYXkgKilz
a2ItPmNiOworCXNjby0+Y291bnQgPSBjb3VudDsKIAlyZXR1cm4gY291bnQ7CiB9CiAKQEAgLTU4
Niw5ICs1OTMsNiBAQCBzdGF0aWMgdm9pZCBuZXRia19hZGRfZnJhZ19yZXNwb25zZXMoc3RyCiAJ
fQogfQogCi1zdHJ1Y3Qgc2tiX2NiX292ZXJsYXkgewotCWludCBtZXRhX3Nsb3RzX3VzZWQ7Ci19
OwogCiBzdGF0aWMgdm9pZCB4ZW5fbmV0YmtfcnhfYWN0aW9uKHN0cnVjdCB4ZW5fbmV0YmsgKm5l
dGJrKQogewpAQCAtNjIxLDEyICs2MjUsMTYgQEAgc3RhdGljIHZvaWQgeGVuX25ldGJrX3J4X2Fj
dGlvbihzdHJ1Y3QgeAogCQlzY28gPSAoc3RydWN0IHNrYl9jYl9vdmVybGF5ICopc2tiLT5jYjsK
IAkJc2NvLT5tZXRhX3Nsb3RzX3VzZWQgPSBuZXRia19nb3Bfc2tiKHNrYiwgJm5wbyk7CiAKLQkJ
Y291bnQgKz0gbnJfZnJhZ3MgKyAxOworCQljb3VudCArPSBzY28tPmNvdW50OwogCiAJCV9fc2ti
X3F1ZXVlX3RhaWwoJnJ4cSwgc2tiKTsKIAorCQlza2IgPSBza2JfcGVlaygmbmV0YmstPnJ4X3F1
ZXVlKTsKKwkJaWYgKHNrYiA9PSBOVUxMKQorCQkJYnJlYWs7CisJCXNjbyA9IChzdHJ1Y3Qgc2ti
X2NiX292ZXJsYXkgKilza2ItPmNiOwogCQkvKiBGaWxsZWQgdGhlIGJhdGNoIHF1ZXVlPyAqLwot
CQlpZiAoY291bnQgKyBNQVhfU0tCX0ZSQUdTID49IFhFTl9ORVRJRl9SWF9SSU5HX1NJWkUpCisJ
CWlmIChjb3VudCArIHNjby0+Y291bnQgPj0gWEVOX05FVElGX1JYX1JJTkdfU0laRSkKIAkJCWJy
ZWFrOwogCX0KIAo=

--_004_7D7C26B1462EB14CB0E7246697A18C1312C3D2INHYMS111Acacom_
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--_004_7D7C26B1462EB14CB0E7246697A18C1312C3D2INHYMS111Acacom_--


From xen-devel-bounces@lists.xen.org Wed Aug 29 12:21:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Aug 2012 12:21:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6hGr-0005pB-BL; Wed, 29 Aug 2012 12:21:21 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Siva.Palagummi@ca.com>) id 1T6hGp-0005p6-Mg
	for xen-devel@lists.xen.org; Wed, 29 Aug 2012 12:21:20 +0000
Received: from [85.158.143.99:52966] by server-1.bemta-4.messagelabs.com id
	8F/B7-12504-E390E305; Wed, 29 Aug 2012 12:21:18 +0000
X-Env-Sender: Siva.Palagummi@ca.com
X-Msg-Ref: server-13.tower-216.messagelabs.com!1346242874!26491806!1
X-Originating-IP: [74.125.149.211]
X-SpamReason: No, hits=0.8 required=7.0 tests=HTML_90_100,
	HTML_ATTR_UNIQUE,HTML_MESSAGE
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15323 invoked from network); 29 Aug 2012 12:21:17 -0000
Received: from na3sys009aog114.obsmtp.com (HELO na3sys009aog114.obsmtp.com)
	(74.125.149.211)
	by server-13.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 29 Aug 2012 12:21:17 -0000
Received: from INHYMS191.ca.com ([155.35.46.48]) (using TLSv1) by
	na3sys009aob114.postini.com ([74.125.148.12]) with SMTP
	ID DSNKUD4JOmodZSb3YsFoKVExeSulQcb+IKoD@postini.com;
	Wed, 29 Aug 2012 05:21:16 PDT
Received: from INHYMS171.ca.com (155.35.35.45) by INHYMS191.ca.com
	(155.35.46.48) with Microsoft SMTP Server (TLS) id 14.1.355.2;
	Wed, 29 Aug 2012 17:51:11 +0530
Received: from INHYMS111A.ca.com ([169.254.3.18]) by INHYMS171.ca.com
	([155.35.35.45]) with mapi id 14.01.0355.002;
	Wed, 29 Aug 2012 17:51:11 +0530
From: "Palagummi, Siva" <Siva.Palagummi@ca.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Thread-Topic: [PATCH RFC V2] xen/netback: Count ring slots properly when
	larger MTU sizes are used
Thread-Index: Ac2F4HBfCFIrW+OHQRWCwjPvwOEhAg==
Date: Wed, 29 Aug 2012 12:21:10 +0000
Message-ID: <7D7C26B1462EB14CB0E7246697A18C1312C3D2@INHYMS111A.ca.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: yes
X-MS-TNEF-Correlator: 
x-cr-puzzleid: {BF37575F-06D2-4591-A9D7-3B2B596B5F16}
x-cr-hashedpuzzle: DPBf Iq0c Iy0w I/pe JJSG KpCq NK4N Qeia RGl6 SHZo TLKW
	Wbn9 Wx/g XBPr XDqR XFQB; 2;
	aQBhAG4ALgBjAGEAbQBwAGIAZQBsAGwAQABjAGkAdAByAGkAeAAuAGMAbwBtADsAeABlAG4ALQBkAGUAdgBlAGwAQABsAGkAcwB0AHMALgB4AGUAbgAuAG8AcgBnAA==;
	Sosha1_v1; 7; {BF37575F-06D2-4591-A9D7-3B2B596B5F16};
	cwBpAHYAYQAuAHAAYQBsAGEAZwB1AG0AbQBpAEAAYwBhAC4AYwBvAG0A; Wed,
	29 Aug 2012 12:19:01 GMT;
	WwBQAEEAVABDAEgAIABSAEYAQwAgAFYAMgBdACAAeABlAG4ALwBuAGUAdABiAGEAYwBrADoAIABDAG8AdQBuAHQAIAByAGkAbgBnACAAcwBsAG8AdABzACAAcAByAG8AcABlAHIAbAB5ACAAdwBoAGUAbgAgAGwAYQByAGcAZQByACAATQBUAFUAIABzAGkAegBlAHMAIABhAHIAZQAgAHUAcwBlAGQA
x-originating-ip: [10.134.16.218]
Content-Type: multipart/mixed;
	boundary="_004_7D7C26B1462EB14CB0E7246697A18C1312C3D2INHYMS111Acacom_"
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: [Xen-devel] [PATCH RFC V2] xen/netback: Count ring slots properly
 when larger MTU sizes are used
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--_004_7D7C26B1462EB14CB0E7246697A18C1312C3D2INHYMS111Acacom_
Content-Type: multipart/alternative;
	boundary="_000_7D7C26B1462EB14CB0E7246697A18C1312C3D2INHYMS111Acacom_"

--_000_7D7C26B1462EB14CB0E7246697A18C1312C3D2INHYMS111Acacom_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable

This patch contains the modifications that are discussed in thread http://l=
ists.xen.org/archives/html/xen-devel/2012-08/msg01730.html

Ian,
Instead of using max_required_rx_slots, I used the count that we already ha=
ve in hand to verify if we have enough room in the batch queue for next skb=
. Please let me know if that is not appropriate. Things worked fine in my e=
nvironment. Under heavy load now we seems to be consuming most of the slots=
 in the queue and no BUG_ON :-)

Thanks
Siva

--_000_7D7C26B1462EB14CB0E7246697A18C1312C3D2INHYMS111Acacom_
Content-Type: text/html; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable

<html xmlns:v=3D"urn:schemas-microsoft-com:vml" xmlns:o=3D"urn:schemas-micr=
osoft-com:office:office" xmlns:w=3D"urn:schemas-microsoft-com:office:word" =
xmlns:x=3D"urn:schemas-microsoft-com:office:excel" xmlns:p=3D"urn:schemas-m=
icrosoft-com:office:powerpoint" xmlns:a=3D"urn:schemas-microsoft-com:office=
:access" xmlns:dt=3D"uuid:C2F41010-65B3-11d1-A29F-00AA00C14882" xmlns:s=3D"=
uuid:BDC6E3F0-6DA3-11d1-A2A3-00AA00C14882" xmlns:rs=3D"urn:schemas-microsof=
t-com:rowset" xmlns:z=3D"#RowsetSchema" xmlns:b=3D"urn:schemas-microsoft-co=
m:office:publisher" xmlns:ss=3D"urn:schemas-microsoft-com:office:spreadshee=
t" xmlns:c=3D"urn:schemas-microsoft-com:office:component:spreadsheet" xmlns=
:odc=3D"urn:schemas-microsoft-com:office:odc" xmlns:oa=3D"urn:schemas-micro=
soft-com:office:activation" xmlns:html=3D"http://www.w3.org/TR/REC-html40" =
xmlns:q=3D"http://schemas.xmlsoap.org/soap/envelope/" xmlns:rtc=3D"http://m=
icrosoft.com/officenet/conferencing" xmlns:D=3D"DAV:" xmlns:Repl=3D"http://=
schemas.microsoft.com/repl/" xmlns:mt=3D"http://schemas.microsoft.com/share=
point/soap/meetings/" xmlns:x2=3D"http://schemas.microsoft.com/office/excel=
/2003/xml" xmlns:ppda=3D"http://www.passport.com/NameSpace.xsd" xmlns:ois=
=3D"http://schemas.microsoft.com/sharepoint/soap/ois/" xmlns:dir=3D"http://=
schemas.microsoft.com/sharepoint/soap/directory/" xmlns:ds=3D"http://www.w3=
.org/2000/09/xmldsig#" xmlns:dsp=3D"http://schemas.microsoft.com/sharepoint=
/dsp" xmlns:udc=3D"http://schemas.microsoft.com/data/udc" xmlns:xsd=3D"http=
://www.w3.org/2001/XMLSchema" xmlns:sub=3D"http://schemas.microsoft.com/sha=
repoint/soap/2002/1/alerts/" xmlns:ec=3D"http://www.w3.org/2001/04/xmlenc#"=
 xmlns:sp=3D"http://schemas.microsoft.com/sharepoint/" xmlns:sps=3D"http://=
schemas.microsoft.com/sharepoint/soap/" xmlns:xsi=3D"http://www.w3.org/2001=
/XMLSchema-instance" xmlns:udcs=3D"http://schemas.microsoft.com/data/udc/so=
ap" xmlns:udcxf=3D"http://schemas.microsoft.com/data/udc/xmlfile" xmlns:udc=
p2p=3D"http://schemas.microsoft.com/data/udc/parttopart" xmlns:wf=3D"http:/=
/schemas.microsoft.com/sharepoint/soap/workflow/" xmlns:dsss=3D"http://sche=
mas.microsoft.com/office/2006/digsig-setup" xmlns:dssi=3D"http://schemas.mi=
crosoft.com/office/2006/digsig" xmlns:mdssi=3D"http://schemas.openxmlformat=
s.org/package/2006/digital-signature" xmlns:mver=3D"http://schemas.openxmlf=
ormats.org/markup-compatibility/2006" xmlns:m=3D"http://schemas.microsoft.c=
om/office/2004/12/omml" xmlns:mrels=3D"http://schemas.openxmlformats.org/pa=
ckage/2006/relationships" xmlns:spwp=3D"http://microsoft.com/sharepoint/web=
partpages" xmlns:ex12t=3D"http://schemas.microsoft.com/exchange/services/20=
06/types" xmlns:ex12m=3D"http://schemas.microsoft.com/exchange/services/200=
6/messages" xmlns:pptsl=3D"http://schemas.microsoft.com/sharepoint/soap/Sli=
deLibrary/" xmlns:spsl=3D"http://microsoft.com/webservices/SharePointPortal=
Server/PublishedLinksService" xmlns:Z=3D"urn:schemas-microsoft-com:" xmlns:=
st=3D"&#1;" xmlns=3D"http://www.w3.org/TR/REC-html40">
<head>
<meta http-equiv=3D"Content-Type" content=3D"text/html; charset=3Dus-ascii"=
>
<meta name=3D"Generator" content=3D"Microsoft Word 12 (filtered medium)">
<style>
<!--
 /* Font Definitions */
 @font-face
	{font-family:Calibri;
	panose-1:2 15 5 2 2 2 4 3 2 4;}
@font-face
	{font-family:Tahoma;
	panose-1:2 11 6 4 3 5 4 4 2 4;}
 /* Style Definitions */
 p.MsoNormal, li.MsoNormal, div.MsoNormal
	{margin:0in;
	margin-bottom:.0001pt;
	font-size:11.0pt;
	font-family:"Calibri","sans-serif";}
a:link, span.MsoHyperlink
	{mso-style-priority:99;
	color:blue;
	text-decoration:underline;}
a:visited, span.MsoHyperlinkFollowed
	{mso-style-priority:99;
	color:purple;
	text-decoration:underline;}
span.EmailStyle17
	{mso-style-type:personal-compose;
	font-family:"Calibri","sans-serif";
	color:windowtext;}
.MsoChpDefault
	{mso-style-type:export-only;}
@page WordSection1
	{size:8.5in 11.0in;
	margin:1.0in 1.0in 1.0in 1.0in;}
div.WordSection1
	{page:WordSection1;}
-->
</style><!--[if gte mso 9]><xml>
 <o:shapedefaults v:ext=3D"edit" spidmax=3D"1026" />
</xml><![endif]--><!--[if gte mso 9]><xml>
 <o:shapelayout v:ext=3D"edit">
  <o:idmap v:ext=3D"edit" data=3D"1" />
 </o:shapelayout></xml><![endif]-->
</head>
<body lang="EN-US" link="blue" vlink="purple">
<div class="WordSection1">
<p class="MsoNormal">This patch contains the modifications discussed in the thread
<span style="font-size:10.0pt;font-family:&quot;Tahoma&quot;,&quot;sans-serif&quot;;
color:black"><a href="http://lists.xen.org/archives/html/xen-devel/2012-08/msg01730.html" target="_blank">http://lists.xen.org/archives/html/xen-devel/2012-08/msg01730.html</a><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:10.0pt;font-family:&quot;Tahoma&quot;,&quot;sans-serif&quot;;
color:black"><o:p>&nbsp;</o:p></span></p>
<p class="MsoNormal"><span style="font-size:10.0pt;font-family:&quot;Tahoma&quot;,&quot;sans-serif&quot;;
color:black">Ian,<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:10.0pt;font-family:&quot;Tahoma&quot;,&quot;sans-serif&quot;;
color:black">Instead of using max_required_rx_slots, I used the count that we already have in hand to verify whether we have enough room in the batch queue for the next skb. Please let me know if that is not appropriate. Things worked fine in my environment. Under heavy load we now seem to be consuming most of the slots in the queue, with no BUG_ON :-)<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:10.0pt;font-family:&quot;Tahoma&quot;,&quot;sans-serif&quot;;
color:black"><o:p>&nbsp;</o:p></span></p>
<p class="MsoNormal"><span style="font-size:10.0pt;font-family:&quot;Tahoma&quot;,&quot;sans-serif&quot;;
color:black">Thanks<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:10.0pt;font-family:&quot;Tahoma&quot;,&quot;sans-serif&quot;;
color:black">Siva<o:p></o:p></span></p>
</div>
</body>
</html>

--_000_7D7C26B1462EB14CB0E7246697A18C1312C3D2INHYMS111Acacom_--

--_004_7D7C26B1462EB14CB0E7246697A18C1312C3D2INHYMS111Acacom_
Content-Type: application/octet-stream;
	name="netback_slots_counting_v2.patch"
Content-Description: netback_slots_counting_v2.patch
Content-Disposition: attachment; filename="netback_slots_counting_v2.patch";
	size=2462; creation-date="Tue, 28 Aug 2012 21:59:13 GMT";
	modification-date="Tue, 28 Aug 2012 22:03:28 GMT"
Content-Transfer-Encoding: 7bit

From: Siva Palagummi <Siva.Palagummi@ca.com>

The count variable in xen_netbk_rx_action needs to be incremented
correctly to take into account the extra slots required when skb_headlen
is greater than PAGE_SIZE, as happens when larger MTU values are used.
Without this change, BUG_ON(npo.meta_prod > ARRAY_SIZE(netbk->meta))
causes the netback thread to exit.

The fix is to stash the count already computed in
xen_netbk_count_skb_slots in skb_cb_overlay and use it directly in
xen_netbk_rx_action.

Also improved the check for filling the batch queue.

Also merged a change from a patch created for the
xen_netbk_count_skb_slots function as per thread
http://lists.xen.org/archives/html/xen-devel/2012-05/msg01864.html

The problem is seen with the Linux 3.2.2 kernel on an Intel 10Gbps
network.

Signed-off-by: Siva Palagummi <Siva.Palagummi@ca.com>
---
diff -uprN a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
--- a/drivers/net/xen-netback/netback.c	2012-01-25 19:39:32.000000000 -0500
+++ b/drivers/net/xen-netback/netback.c	2012-08-28 17:31:22.000000000 -0400
@@ -80,6 +80,11 @@ union page_ext {
 	void *mapping;
 };
 
+struct skb_cb_overlay {
+	int meta_slots_used;
+	int count;
+};
+
 struct xen_netbk {
 	wait_queue_head_t wq;
 	struct task_struct *task;
@@ -324,9 +329,9 @@ unsigned int xen_netbk_count_skb_slots(s
 {
 	unsigned int count;
 	int i, copy_off;
+	struct skb_cb_overlay *sco;
 
-	count = DIV_ROUND_UP(
-			offset_in_page(skb->data)+skb_headlen(skb), PAGE_SIZE);
+	count = DIV_ROUND_UP(skb_headlen(skb), PAGE_SIZE);
 
 	copy_off = skb_headlen(skb) % PAGE_SIZE;
 
@@ -352,6 +357,8 @@ unsigned int xen_netbk_count_skb_slots(s
 			size -= bytes;
 		}
 	}
+	sco = (struct skb_cb_overlay *)skb->cb;
+	sco->count = count;
 	return count;
 }
 
@@ -586,9 +593,6 @@ static void netbk_add_frag_responses(str
 	}
 }
 
-struct skb_cb_overlay {
-	int meta_slots_used;
-};
 
 static void xen_netbk_rx_action(struct xen_netbk *netbk)
 {
@@ -621,12 +625,16 @@ static void xen_netbk_rx_action(struct x
 		sco = (struct skb_cb_overlay *)skb->cb;
 		sco->meta_slots_used = netbk_gop_skb(skb, &npo);
 
-		count += nr_frags + 1;
+		count += sco->count;
 
 		__skb_queue_tail(&rxq, skb);
 
+		skb = skb_peek(&netbk->rx_queue);
+		if (skb == NULL)
+			break;
+		sco = (struct skb_cb_overlay *)skb->cb;
 		/* Filled the batch queue? */
-		if (count + MAX_SKB_FRAGS >= XEN_NETIF_RX_RING_SIZE)
+		if (count + sco->count >= XEN_NETIF_RX_RING_SIZE)
 			break;
 	}
 

--_004_7D7C26B1462EB14CB0E7246697A18C1312C3D2INHYMS111Acacom_
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--_004_7D7C26B1462EB14CB0E7246697A18C1312C3D2INHYMS111Acacom_--


From xen-devel-bounces@lists.xen.org Wed Aug 29 13:16:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Aug 2012 13:16:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6i7x-00069S-Ru; Wed, 29 Aug 2012 13:16:13 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1T6i7x-00069N-6n
	for xen-devel@lists.xensource.com; Wed, 29 Aug 2012 13:16:13 +0000
Received: from [85.158.143.35:28458] by server-1.bemta-4.messagelabs.com id
	07/2D-12504-C161E305; Wed, 29 Aug 2012 13:16:12 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1346246164!4696063!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzQ5MjE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12336 invoked from network); 29 Aug 2012 13:16:09 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Aug 2012 13:16:09 -0000
X-IronPort-AV: E=Sophos;i="4.80,334,1344211200"; d="scan'208";a="206535744"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	29 Aug 2012 13:15:57 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.279.1;
	Wed, 29 Aug 2012 09:15:57 -0400
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1T6i7g-0005ii-Da;
	Wed, 29 Aug 2012 14:15:56 +0100
From: David Vrabel <david.vrabel@citrix.com>
To: xen-devel@lists.xensource.com
Date: Wed, 29 Aug 2012 14:15:14 +0100
Message-ID: <1346246116-29999-1-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
MIME-Version: 1.0
Cc: Andres Lagar-Cavilla <andres@gridcentric.ca>,
	David Vrabel <david.vrabel@citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCHv2 0/2] xen/privcmd: support for paged-out frames
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This series is a straight forward-port of some functionality from the
classic Xen kernels to support Xen hosts that do paging of guests.

This isn't functionality that XenServer makes use of, so I've not
tested these with paging in use.

Changes since v1:

- Don't change PRIVCMD_MMAPBATCH (except to #define a constant for the
  error).  It's broken, not sensibly fixable, and libxc will use V2 if
  it is available.
- Return -ENOENT if all failures were -ENOENT.
- Clear arg->err on success (libxc expects this).
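
The second bullet's aggregation rule can be sketched as a small
stand-alone helper.  This is illustrative only, not the in-kernel code
(which keeps the running result in mmap_batch_state): fold per-frame
results so the ioctl returns -ENOENT only when every failure was
-ENOENT, and the first "real" error otherwise.

```c
#include <errno.h>

/* Illustrative sketch, not the kernel implementation: fold one
 * per-frame mapping result into the overall ioctl error.  The result
 * stays -ENOENT only while every failure so far was -ENOENT; the
 * first non-ENOENT error is kept thereafter. */
static int fold_err(int overall, int frame_err)
{
	if (frame_err == 0)
		return overall;      /* successful frames change nothing */
	if (overall == 0 || overall == -ENOENT)
		return frame_err;    /* first error, or upgrade from -ENOENT */
	return overall;              /* keep the first non-ENOENT error */
}
```

Applied over a batch, a run of { -ENOENT, 0, -ENOENT } folds to
-ENOENT, while any -EFAULT in the mix takes over and is not later
replaced.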

I think this should probably get a "Tested-by" from Andres or someone
else who uses paging before being applied.

David


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 29 13:16:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Aug 2012 13:16:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6i81-00069h-7h; Wed, 29 Aug 2012 13:16:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1T6i7z-00069Z-C4
	for xen-devel@lists.xensource.com; Wed, 29 Aug 2012 13:16:15 +0000
Received: from [85.158.143.35:28586] by server-3.bemta-4.messagelabs.com id
	EA/43-08232-E161E305; Wed, 29 Aug 2012 13:16:14 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1346246164!4696063!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzQ5MjE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12441 invoked from network); 29 Aug 2012 13:16:10 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Aug 2012 13:16:10 -0000
X-IronPort-AV: E=Sophos;i="4.80,334,1344211200"; d="scan'208";a="206535745"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	29 Aug 2012 13:15:57 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.279.1;
	Wed, 29 Aug 2012 09:15:57 -0400
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1T6i7g-0005ii-FK;
	Wed, 29 Aug 2012 14:15:56 +0100
From: David Vrabel <david.vrabel@citrix.com>
To: xen-devel@lists.xensource.com
Date: Wed, 29 Aug 2012 14:15:15 +0100
Message-ID: <1346246116-29999-2-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1346246116-29999-1-git-send-email-david.vrabel@citrix.com>
References: <1346246116-29999-1-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
Cc: Andres Lagar-Cavilla <andres@gridcentric.ca>,
	David Vrabel <david.vrabel@citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH 1/2] xen/mm: return more precise error from
	xen_remap_domain_range()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: David Vrabel <david.vrabel@citrix.com>

Callers of xen_remap_domain_mfn_range() need to know if the remap
failed because a frame is currently paged out, so that they can retry
the remap later on.  Return -ENOENT in this case.

This assumes that the error codes returned by Xen are a subset of
those used by the kernel.  It is unclear if this is defined as part of
the hypercall ABI.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 arch/x86/xen/mmu.c |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index b65a761..fb187ea 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -2355,8 +2355,8 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
 		if (err)
 			goto out;
 
-		err = -EFAULT;
-		if (HYPERVISOR_mmu_update(mmu_update, batch, NULL, domid) < 0)
+		err = HYPERVISOR_mmu_update(mmu_update, batch, NULL, domid);
+		if (err < 0)
 			goto out;
 
 		nr -= batch;
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 29 13:16:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Aug 2012 13:16:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6i8B-0006AY-KE; Wed, 29 Aug 2012 13:16:27 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1T6i89-0006AR-QT
	for xen-devel@lists.xensource.com; Wed, 29 Aug 2012 13:16:26 +0000
Received: from [85.158.143.35:29224] by server-1.bemta-4.messagelabs.com id
	E0/9D-12504-9261E305; Wed, 29 Aug 2012 13:16:25 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1346246169!13924046!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzQ5MjE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7930 invoked from network); 29 Aug 2012 13:16:11 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Aug 2012 13:16:11 -0000
X-IronPort-AV: E=Sophos;i="4.80,334,1344211200"; d="scan'208";a="206535746"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	29 Aug 2012 13:15:57 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.279.1;
	Wed, 29 Aug 2012 09:15:57 -0400
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1T6i7g-0005ii-Ft;
	Wed, 29 Aug 2012 14:15:56 +0100
From: David Vrabel <david.vrabel@citrix.com>
To: xen-devel@lists.xensource.com
Date: Wed, 29 Aug 2012 14:15:16 +0100
Message-ID: <1346246116-29999-3-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1346246116-29999-1-git-send-email-david.vrabel@citrix.com>
References: <1346246116-29999-1-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
Cc: Andres Lagar-Cavilla <andres@gridcentric.ca>,
	David Vrabel <david.vrabel@citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH 2/2] xen/privcmd: add PRIVCMD_MMAPBATCH_V2 ioctl
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: David Vrabel <david.vrabel@citrix.com>

PRIVCMD_MMAPBATCH_V2 extends PRIVCMD_MMAPBATCH with an additional
field for reporting the error code for every frame that could not be
mapped.  libxc prefers PRIVCMD_MMAPBATCH_V2 over PRIVCMD_MMAPBATCH.
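
As a sketch of what the new interface gives user space: the per-frame
err[] array lets a caller pick out exactly the frames that failed with
-ENOENT (paged out) and retry just those.  The struct below mirrors
the uapi definition in the diff; the simplified typedefs and the
helper function are illustrative assumptions, not part of the patch.

```c
#include <stdint.h>
#include <errno.h>

typedef unsigned long xen_pfn_t;   /* simplified stand-in for this sketch */
typedef uint16_t domid_t;          /* simplified stand-in for this sketch */

/* User-space mirror of struct privcmd_mmapbatch_v2 from
 * include/xen/privcmd.h */
struct privcmd_mmapbatch_v2 {
	unsigned int num;            /* number of pages to populate */
	domid_t dom;                 /* target domain */
	uint64_t addr;               /* virtual address */
	const xen_pfn_t *arr;        /* array of mfns */
	int *err;                    /* array of per-frame error codes */
};

/* After IOCTL_PRIVCMD_MMAPBATCH_V2 returns, count the frames that
 * failed only because they were paged out and are worth retrying. */
static unsigned int frames_to_retry(const struct privcmd_mmapbatch_v2 *m)
{
	unsigned int i, n = 0;

	for (i = 0; i < m->num; i++)
		if (m->err[i] == -ENOENT)
			n++;
	return n;
}
```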

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 drivers/xen/privcmd.c |   78 +++++++++++++++++++++++++++++++++++++-----------
 include/xen/privcmd.h |   21 ++++++++++++-
 2 files changed, 80 insertions(+), 19 deletions(-)

diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
index ccee0f1..ddd32cf 100644
--- a/drivers/xen/privcmd.c
+++ b/drivers/xen/privcmd.c
@@ -248,18 +248,23 @@ struct mmap_batch_state {
 	struct vm_area_struct *vma;
 	int err;
 
-	xen_pfn_t __user *user;
+	xen_pfn_t __user *user_mfn;
+	int __user *user_err;
 };
 
 static int mmap_batch_fn(void *data, void *state)
 {
 	xen_pfn_t *mfnp = data;
+	int *err = data;
 	struct mmap_batch_state *st = state;
+	int ret;
 
-	if (xen_remap_domain_mfn_range(st->vma, st->va & PAGE_MASK, *mfnp, 1,
-				       st->vma->vm_page_prot, st->domain) < 0) {
-		*mfnp |= 0xf0000000U;
-		st->err++;
+	ret = xen_remap_domain_mfn_range(st->vma, st->va & PAGE_MASK, *mfnp, 1,
+					 st->vma->vm_page_prot, st->domain);
+	if (ret < 0) {
+		*err = ret;
+		if (st->err == 0 || st->err == -ENOENT)
+			st->err = ret;
 	}
 	st->va += PAGE_SIZE;
 
@@ -268,18 +273,30 @@ static int mmap_batch_fn(void *data, void *state)
 
 static int mmap_return_errors(void *data, void *state)
 {
-	xen_pfn_t *mfnp = data;
+	int *err = data;
 	struct mmap_batch_state *st = state;
+	int ret;
+
+	if (st->user_err)
+		return __put_user(*err, st->user_err++);
+	else {
+		xen_pfn_t mfn;
 
-	return put_user(*mfnp, st->user++);
+		ret = __get_user(mfn, st->user_mfn);
+		if (ret < 0)
+			return ret;
+
+		mfn |= PRIVCMD_MMAPBATCH_MFN_ERROR;
+		return __put_user(mfn, st->user_mfn++);
+	}
 }
 
 static struct vm_operations_struct privcmd_vm_ops;
 
-static long privcmd_ioctl_mmap_batch(void __user *udata)
+static long privcmd_ioctl_mmap_batch(void __user *udata, int version)
 {
 	int ret;
-	struct privcmd_mmapbatch m;
+	struct privcmd_mmapbatch_v2 m;
 	struct mm_struct *mm = current->mm;
 	struct vm_area_struct *vma;
 	unsigned long nr_pages;
@@ -289,15 +306,32 @@ static long privcmd_ioctl_mmap_batch(void __user *udata)
 	if (!xen_initial_domain())
 		return -EPERM;
 
-	if (copy_from_user(&m, udata, sizeof(m)))
-		return -EFAULT;
+	switch (version) {
+	case 1:
+		if (copy_from_user(&m, udata, sizeof(struct privcmd_mmapbatch)))
+			return -EFAULT;
+		/* Returns per-frame error in m.arr. */
+		m.err = NULL;
+		if (!access_ok(VERIFY_WRITE, m.arr, m.num * sizeof(*m.arr)))
+			return -EFAULT;
+		break;
+	case 2:
+		if (copy_from_user(&m, udata, sizeof(struct privcmd_mmapbatch_v2)))
+			return -EFAULT;
+		/* Returns per-frame error code in m.err. */
+		if (!access_ok(VERIFY_WRITE, m.err, m.num * (sizeof(*m.err))))
+			return -EFAULT;
+		break;
+	default:
+		return -EINVAL;
+	}
 
 	nr_pages = m.num;
 	if ((m.num <= 0) || (nr_pages > (LONG_MAX >> PAGE_SHIFT)))
 		return -EINVAL;
 
 	ret = gather_array(&pagelist, m.num, sizeof(xen_pfn_t),
-			   m.arr);
+			   (xen_pfn_t *)m.arr);
 
 	if (ret || list_empty(&pagelist))
 		goto out;
@@ -325,12 +359,16 @@ static long privcmd_ioctl_mmap_batch(void __user *udata)
 
 	up_write(&mm->mmap_sem);
 
-	if (state.err > 0) {
-		state.user = m.arr;
+	if (state.err) {
+		state.user_mfn = (xen_pfn_t *)m.arr;
+		state.user_err = m.err;
 		ret = traverse_pages(m.num, sizeof(xen_pfn_t),
-			       &pagelist,
-			       mmap_return_errors, &state);
-	}
+				     &pagelist,
+				     mmap_return_errors, &state);
+		if (!ret)
+			ret = state.err;
+	} else if (m.err)
+		__clear_user(m.err, m.num * sizeof(*m.err));
 
 out:
 	free_page_list(&pagelist);
@@ -354,7 +392,11 @@ static long privcmd_ioctl(struct file *file,
 		break;
 
 	case IOCTL_PRIVCMD_MMAPBATCH:
-		ret = privcmd_ioctl_mmap_batch(udata);
+		ret = privcmd_ioctl_mmap_batch(udata, 1);
+		break;
+
+	case IOCTL_PRIVCMD_MMAPBATCH_V2:
+		ret = privcmd_ioctl_mmap_batch(udata, 2);
 		break;
 
 	default:
diff --git a/include/xen/privcmd.h b/include/xen/privcmd.h
index 17857fb..37e5255 100644
--- a/include/xen/privcmd.h
+++ b/include/xen/privcmd.h
@@ -59,13 +59,30 @@ struct privcmd_mmapbatch {
 	int num;     /* number of pages to populate */
 	domid_t dom; /* target domain */
 	__u64 addr;  /* virtual address */
-	xen_pfn_t __user *arr; /* array of mfns - top nibble set on err */
+	xen_pfn_t __user *arr; /* array of mfns - or'd with
+				  PRIVCMD_MMAPBATCH_MFN_ERROR on err */
+};
+
+#define PRIVCMD_MMAPBATCH_MFN_ERROR 0xf0000000U
+
+struct privcmd_mmapbatch_v2 {
+	unsigned int num; /* number of pages to populate */
+	domid_t dom;      /* target domain */
+	__u64 addr;       /* virtual address */
+	const xen_pfn_t __user *arr; /* array of mfns */
+	int __user *err;  /* array of error codes */
 };
 
 /*
  * @cmd: IOCTL_PRIVCMD_HYPERCALL
  * @arg: &privcmd_hypercall_t
  * Return: Value returned from execution of the specified hypercall.
+ *
+ * @cmd: IOCTL_PRIVCMD_MMAPBATCH_V2
+ * @arg: &struct privcmd_mmapbatch_v2
+ * Return: 0 if all pages were mapped successfully. -ENOENT if all
+ * failed mappings returned -ENOENT, otherwise the error code of the
+ * first failed mapping.
  */
 #define IOCTL_PRIVCMD_HYPERCALL					\
 	_IOC(_IOC_NONE, 'P', 0, sizeof(struct privcmd_hypercall))
@@ -73,5 +90,7 @@ struct privcmd_mmapbatch {
 	_IOC(_IOC_NONE, 'P', 2, sizeof(struct privcmd_mmap))
 #define IOCTL_PRIVCMD_MMAPBATCH					\
 	_IOC(_IOC_NONE, 'P', 3, sizeof(struct privcmd_mmapbatch))
+#define IOCTL_PRIVCMD_MMAPBATCH_V2				\
+	_IOC(_IOC_NONE, 'P', 4, sizeof(struct privcmd_mmapbatch_v2))
 
 #endif /* __LINUX_PUBLIC_PRIVCMD_H__ */
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

-				       st->vma->vm_page_prot, st->domain) < 0) {
-		*mfnp |= 0xf0000000U;
-		st->err++;
+	ret = xen_remap_domain_mfn_range(st->vma, st->va & PAGE_MASK, *mfnp, 1,
+					 st->vma->vm_page_prot, st->domain);
+	if (ret < 0) {
+		*err = ret;
+		if (st->err == 0 || st->err == -ENOENT)
+			st->err = ret;
 	}
 	st->va += PAGE_SIZE;
 
@@ -268,18 +273,30 @@ static int mmap_batch_fn(void *data, void *state)
 
 static int mmap_return_errors(void *data, void *state)
 {
-	xen_pfn_t *mfnp = data;
+	int *err = data;
 	struct mmap_batch_state *st = state;
+	int ret;
+
+	if (st->user_err)
+		return __put_user(*err, st->user_err++);
+	else {
+		xen_pfn_t mfn;
 
-	return put_user(*mfnp, st->user++);
+		ret = __get_user(mfn, st->user_mfn);
+		if (ret < 0)
+			return ret;
+
+		mfn |= PRIVCMD_MMAPBATCH_MFN_ERROR;
+		return __put_user(mfn, st->user_mfn++);
+	}
 }
 
 static struct vm_operations_struct privcmd_vm_ops;
 
-static long privcmd_ioctl_mmap_batch(void __user *udata)
+static long privcmd_ioctl_mmap_batch(void __user *udata, int version)
 {
 	int ret;
-	struct privcmd_mmapbatch m;
+	struct privcmd_mmapbatch_v2 m;
 	struct mm_struct *mm = current->mm;
 	struct vm_area_struct *vma;
 	unsigned long nr_pages;
@@ -289,15 +306,32 @@ static long privcmd_ioctl_mmap_batch(void __user *udata)
 	if (!xen_initial_domain())
 		return -EPERM;
 
-	if (copy_from_user(&m, udata, sizeof(m)))
-		return -EFAULT;
+	switch (version) {
+	case 1:
+		if (copy_from_user(&m, udata, sizeof(struct privcmd_mmapbatch)))
+			return -EFAULT;
+		/* Returns per-frame error in m.arr. */
+		m.err = NULL;
+		if (!access_ok(VERIFY_WRITE, m.arr, m.num * sizeof(*m.arr)))
+			return -EFAULT;
+		break;
+	case 2:
+		if (copy_from_user(&m, udata, sizeof(struct privcmd_mmapbatch_v2)))
+			return -EFAULT;
+		/* Returns per-frame error code in m.err. */
+		if (!access_ok(VERIFY_WRITE, m.err, m.num * (sizeof(*m.err))))
+			return -EFAULT;
+		break;
+	default:
+		return -EINVAL;
+	}
 
 	nr_pages = m.num;
 	if ((m.num <= 0) || (nr_pages > (LONG_MAX >> PAGE_SHIFT)))
 		return -EINVAL;
 
 	ret = gather_array(&pagelist, m.num, sizeof(xen_pfn_t),
-			   m.arr);
+			   (xen_pfn_t *)m.arr);
 
 	if (ret || list_empty(&pagelist))
 		goto out;
@@ -325,12 +359,16 @@ static long privcmd_ioctl_mmap_batch(void __user *udata)
 
 	up_write(&mm->mmap_sem);
 
-	if (state.err > 0) {
-		state.user = m.arr;
+	if (state.err) {
+		state.user_mfn = (xen_pfn_t *)m.arr;
+		state.user_err = m.err;
 		ret = traverse_pages(m.num, sizeof(xen_pfn_t),
-			       &pagelist,
-			       mmap_return_errors, &state);
-	}
+				     &pagelist,
+				     mmap_return_errors, &state);
+		if (!ret)
+			ret = state.err;
+	} else if (m.err)
+		__clear_user(m.err, m.num * sizeof(*m.err));
 
 out:
 	free_page_list(&pagelist);
@@ -354,7 +392,11 @@ static long privcmd_ioctl(struct file *file,
 		break;
 
 	case IOCTL_PRIVCMD_MMAPBATCH:
-		ret = privcmd_ioctl_mmap_batch(udata);
+		ret = privcmd_ioctl_mmap_batch(udata, 1);
+		break;
+
+	case IOCTL_PRIVCMD_MMAPBATCH_V2:
+		ret = privcmd_ioctl_mmap_batch(udata, 2);
 		break;
 
 	default:
diff --git a/include/xen/privcmd.h b/include/xen/privcmd.h
index 17857fb..37e5255 100644
--- a/include/xen/privcmd.h
+++ b/include/xen/privcmd.h
@@ -59,13 +59,30 @@ struct privcmd_mmapbatch {
 	int num;     /* number of pages to populate */
 	domid_t dom; /* target domain */
 	__u64 addr;  /* virtual address */
-	xen_pfn_t __user *arr; /* array of mfns - top nibble set on err */
+	xen_pfn_t __user *arr; /* array of mfns - or'd with
+				  PRIVCMD_MMAPBATCH_MFN_ERROR on err */
+};
+
+#define PRIVCMD_MMAPBATCH_MFN_ERROR 0xf0000000U
+
+struct privcmd_mmapbatch_v2 {
+	unsigned int num; /* number of pages to populate */
+	domid_t dom;      /* target domain */
+	__u64 addr;       /* virtual address */
+	const xen_pfn_t __user *arr; /* array of mfns */
+	int __user *err;  /* array of error codes */
 };
 
 /*
  * @cmd: IOCTL_PRIVCMD_HYPERCALL
  * @arg: &privcmd_hypercall_t
  * Return: Value returned from execution of the specified hypercall.
+ *
+ * @cmd: IOCTL_PRIVCMD_MMAPBATCH_V2
+ * @arg: &struct privcmd_mmapbatch_v2
+ * Return: 0 if all pages were mapped successfully. -ENOENT if all
+ * failed mappings returned -ENOENT, otherwise the error code of the
+ * first failed mapping.
  */
 #define IOCTL_PRIVCMD_HYPERCALL					\
 	_IOC(_IOC_NONE, 'P', 0, sizeof(struct privcmd_hypercall))
@@ -73,5 +90,7 @@ struct privcmd_mmapbatch {
 	_IOC(_IOC_NONE, 'P', 2, sizeof(struct privcmd_mmap))
 #define IOCTL_PRIVCMD_MMAPBATCH					\
 	_IOC(_IOC_NONE, 'P', 3, sizeof(struct privcmd_mmapbatch))
+#define IOCTL_PRIVCMD_MMAPBATCH_V2				\
+	_IOC(_IOC_NONE, 'P', 4, sizeof(struct privcmd_mmapbatch_v2))
 
 #endif /* __LINUX_PUBLIC_PRIVCMD_H__ */
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 29 15:19:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Aug 2012 15:19:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6k2x-00070P-1i; Wed, 29 Aug 2012 15:19:11 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <joseph.glanville@orionvm.com.au>) id 1T6k2w-00070K-9R
	for xen-devel@lists.xen.org; Wed, 29 Aug 2012 15:19:10 +0000
Received: from [85.158.138.51:48709] by server-7.bemta-3.messagelabs.com id
	DA/19-01906-DE23E305; Wed, 29 Aug 2012 15:19:09 +0000
X-Env-Sender: joseph.glanville@orionvm.com.au
X-Msg-Ref: server-2.tower-174.messagelabs.com!1346253545!27522210!1
X-Originating-IP: [209.85.214.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9178 invoked from network); 29 Aug 2012 15:19:06 -0000
Received: from mail-ob0-f173.google.com (HELO mail-ob0-f173.google.com)
	(209.85.214.173)
	by server-2.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Aug 2012 15:19:06 -0000
Received: by obbta14 with SMTP id ta14so1591430obb.32
	for <xen-devel@lists.xen.org>; Wed, 29 Aug 2012 08:19:05 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=mime-version:x-originating-ip:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type:x-gm-message-state;
	bh=T0KHBIEAdCkg8dCyY5P+Qlb8A/INMQTF9///tPEpKjY=;
	b=PYvVfE/u4Kwg+VAWzBM/4eKlWJSDc4z3JobwD9YtRVpYjvdyFk2zp4T4UFz3P2O+wd
	Alw5GlHFQVPColuZkmkBa9lwee/vyXbj0unFt2PjVQWsEPXjO3agzHkRakGTZniTlk3n
	tqrZzSJZ0ZQMubfH6fSqo9I294Nm7Lig+bdiZ+BOnvZGc4sOg8T37hOLhgD3IDScM5c2
	StZX8sHENaQIpZrlta2TnaoMRzaEDCauykR+v3RGpWxtB37LljPZDY8ZHvMcEM12qx/u
	rf7y61yfqLiWs9hWFFZ2gMQhNV/LXgZCvyococ5u2JZHqYwZY9R2f7m2e1c4J/0+i9NQ
	hNIw==
MIME-Version: 1.0
Received: by 10.60.29.230 with SMTP id n6mr1551443oeh.123.1346253545139; Wed,
	29 Aug 2012 08:19:05 -0700 (PDT)
Received: by 10.182.80.200 with HTTP; Wed, 29 Aug 2012 08:19:04 -0700 (PDT)
X-Originating-IP: [121.44.74.61]
In-Reply-To: <CADvuQREkbfxTfzdjqYof00BF7GVa+ft+SXFzHzO0NwJOkSDP3g@mail.gmail.com>
References: <503D0A00.2080905@inktank.com>
	<CAF6-1L6QKQbgPZvXYMQw+0_vaLrP0cr_P0_nZs56Lh+A2B9k8Q@mail.gmail.com>
	<503DE69B.70804@widodh.nl>
	<CAF6-1L4sVnMsx79APO5X0zEEUyDoBD2PUvjTZw6Gzb7rgfpKkg@mail.gmail.com>
	<503E1BC6.8080308@widodh.nl>
	<CADvuQREkbfxTfzdjqYof00BF7GVa+ft+SXFzHzO0NwJOkSDP3g@mail.gmail.com>
Date: Thu, 30 Aug 2012 01:19:04 +1000
Message-ID: <CAOzFzEj=_YZcoXm6+gGemd1X-EU_PhZrFdscFMG5xPt_3uY1sg@mail.gmail.com>
From: Joseph Glanville <joseph.glanville@orionvm.com.au>
To: Tommi Virtanen <tv@inktank.com>
X-Gm-Message-State: ALoCoQkghJmv08v7fPHGPNpee/vRmWLxsbcfC483V6Jq8z3kcS7L/IGJivDCBSOxb/IeWS7NtcJQ
Cc: "ceph-devel@vger.kernel.org" <ceph-devel@vger.kernel.org>,
	Sylvain Munaut <s.munaut@whatever-company.com>,
	Wido den Hollander <wido@widodh.nl>, Ross Turk <ross@inktank.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Integration work
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 29 August 2012 23:43, Tommi Virtanen <tv@inktank.com> wrote:
> On Wed, Aug 29, 2012 at 9:40 AM, Wido den Hollander <wido@widodh.nl> wrote:
>>> Huh ... I've never heard this. Also the guys in ##xen haven't either.
>>> I'm not really involved in xen dev and don't follow it closely but
>>> that seems unlikely. The few slides I looked at from the Xen Summit a
>>> couple days ago show that they really like their PV model.
>> I must be wrong then!
>
> They are (at least, Red Hat is) looking at using more qemu for
> xen-hvm. Whether that has any effect on the PV side, I wouldn't know.
> It might make sense for them to use virtio even for PV, so they might
> use qemu to implement the hypervisor side of virtio too, and that
> would get you librbd support.
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html

I don't think there is much going on in terms of increasing use of
QEMU, only that Xen can now use upstream QEMU rather than the
Xen-specific fork (qemu-xen-traditional).
There was a GSoC project to build a virtio frontend/backend for Xen
but I am not sure whether that would be the way to go.
As far as I can see Xen dominates KVM in terms of network and I/O
performance on every benchmark, so apart from compatibility the gains
of using virtio don't seem that great... Xen's blkback/netback PV
system is just that much faster and more scalable with large numbers
of domains or 100k+ IOPS.

With regards to blktap: blktap is currently in a state where blktap2
is included in a minimal number of distros and is non-upstreamable.
blktap3, which is coming, will be fully userspace, but I have never
been a big fan of userspace block devices, YMMV.
That being said, building blktap devices is really easy (similar to tuntap).

Ideally improving the kernel RBD device would provide the best
performance across the board and the most compatibility (anything can
use a raw block device).

Joseph.

-- 
CTO | Orion Virtualisation Solutions | www.orionvm.com.au
Phone: 1300 56 99 52 | Mobile: 0428 754 846

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 29 15:46:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Aug 2012 15:46:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6kSZ-0007JK-Ju; Wed, 29 Aug 2012 15:45:39 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T6kSY-0007JF-9Y
	for xen-devel@lists.xen.org; Wed, 29 Aug 2012 15:45:38 +0000
Received: from [85.158.143.35:51554] by server-2.bemta-4.messagelabs.com id
	9B/09-21239-1293E305; Wed, 29 Aug 2012 15:45:37 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1346255133!12467380!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA3MTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23052 invoked from network); 29 Aug 2012 15:45:34 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Aug 2012 15:45:34 -0000
X-IronPort-AV: E=Sophos;i="4.80,334,1344211200"; d="scan'208";a="14253985"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	29 Aug 2012 15:45:05 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Wed, 29 Aug 2012 16:45:05 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1T6kS1-0005DD-7r; Wed, 29 Aug 2012 15:45:05 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1T6kS1-0008Kh-3v;
	Wed, 29 Aug 2012 16:45:05 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20542.14593.18652.74782@mariner.uk.xensource.com>
Date: Wed, 29 Aug 2012 16:45:05 +0100
To: xen-devel@lists.xen.org
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [RFC PATCH] xen: comment opaque expression in
	__page_to_virt
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

mm.h's __page_to_virt has a rather opaque expression.  Comment it.

The diff below shows the effect that the extra division and
multiplication has on gcc's output; the "-" lines are the result of
compiling
    return (void *)(DIRECTMAP_VIRT_START +
                    ((unsigned long)pg - FRAMETABLE_VIRT_START) /
                    (sizeof(*pg) ) *
                    (PAGE_SIZE )
                    );
instead.

NB that this patch is an RFC because I don't actually know whether
what I wrote in the comment about the purpose of the code, and about
its x86 performance, is correct.  Jan, please confirm/deny/correct as
appropriate.

Reported-By: Ian Campbell <ian.campbell@citrix.com>
Cc: Jan Beulich <jbeulich@novell.com>
Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>

--- page_alloc.tmp.mariner.31972.s	2012-08-29 16:32:44.000000000 +0100
+++ page_alloc.tmp.mariner.31960.s	2012-08-29 16:32:09.000000000 +0100
@@ -5338,15 +5338,15 @@
 # 325 "/u/iwj/work/xen-unstable-tools.hg/xen/include/asm/mm.h" 1
 	ud2 ; ret $1303; movl $.LC31, %esp; movl $.LC41, %esp
 # 0 "" 2
-	.loc 10 327 0
+	.loc 10 333 0
 #NO_APP
-	movl	$3, %ebx
+	movl	$24, %ebx
 .LVL543:
 	movl	$0, %edx
 	divl	%ebx
-	addl	$8355840, %eax
+	addl	$1044480, %eax
 	movl	%eax, %ebx
-	sall	$9, %ebx
+	sall	$12, %ebx
 .LBE737:
 .LBE736:
 	.loc 1 1179 0
@@ -5368,13 +5368,13 @@
 .LBE739:
 .LBB741:
 .LBB738:
-	.loc 10 327 0
+	.loc 10 333 0
 	movl	$-1431655765, %edx
 	mull	%edx
-	shrl	%edx
-	leal	8355840(%edx), %ebx
+	shrl	$4, %edx
+	leal	1044480(%edx), %ebx
 .LVL545:
-	sall	$9, %ebx
+	sall	$12, %ebx
 .LBE738:
 .LBE741:
 	.loc 1 1179 0

diff -r a0b5f8102a00 xen/include/asm-x86/mm.h
--- a/xen/include/asm-x86/mm.h	Tue Aug 28 22:40:45 2012 +0100
+++ b/xen/include/asm-x86/mm.h	Wed Aug 29 16:44:58 2012 +0100
@@ -323,6 +323,13 @@ static inline struct page_info *__virt_t
 static inline void *__page_to_virt(const struct page_info *pg)
 {
     ASSERT((unsigned long)pg - FRAMETABLE_VIRT_START < FRAMETABLE_VIRT_END);
+    /* (sizeof(*pg) & -sizeof(*pg)) selects the LS bit of sizeof(*pg).
+     * The division and re-multiplication arranges to do the easy part
+     * of the division with a shift, and then puts the shifted-out
+     * power of 2 back again in the multiplication.  This is
+     * beneficial because with gcc (at least with 4.4.5) it generates
+     * a division by 3 instead of a division by 8 which is faster.
+     */
     return (void *)(DIRECTMAP_VIRT_START +
                     ((unsigned long)pg - FRAMETABLE_VIRT_START) /
                     (sizeof(*pg) / (sizeof(*pg) & -sizeof(*pg))) *

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 29 15:51:07 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Aug 2012 15:51:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6kXP-0007RC-AQ; Wed, 29 Aug 2012 15:50:39 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <imammedo@redhat.com>) id 1T6kXN-0007Qy-7s
	for xen-devel@lists.xensource.com; Wed, 29 Aug 2012 15:50:37 +0000
X-Env-Sender: imammedo@redhat.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1346254591!8676167!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTUyMTk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10008 invoked from network); 29 Aug 2012 15:36:32 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-3.tower-27.messagelabs.com with SMTP;
	29 Aug 2012 15:36:32 -0000
Received: from int-mx01.intmail.prod.int.phx2.redhat.com
	(int-mx01.intmail.prod.int.phx2.redhat.com [10.5.11.11])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id q7TFaGqt028185
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 29 Aug 2012 11:36:16 -0400
Received: from thinkpad.mammed.net (ovpn-112-25.ams2.redhat.com [10.36.112.25])
	by int-mx01.intmail.prod.int.phx2.redhat.com (8.13.8/8.13.8) with ESMTP
	id q7TFa5sd015077
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES128-SHA bits=128 verify=NO);
	Wed, 29 Aug 2012 11:36:07 -0400
Date: Wed, 29 Aug 2012 17:36:04 +0200
From: Igor Mammedov <imammedo@redhat.com>
To: Peter Maydell <peter.maydell@linaro.org>
Message-ID: <20120829173604.77f90e52@thinkpad.mammed.net>
In-Reply-To: <CAFEAcA9NHk3wdExh=EaQRLs7=txF=3HXLRK+bxik1X8UuskUXA@mail.gmail.com>
References: <1345563782-11224-1-git-send-email-ehabkost@redhat.com>
	<1345563782-11224-2-git-send-email-ehabkost@redhat.com>
	<CAFEAcA9NHk3wdExh=EaQRLs7=txF=3HXLRK+bxik1X8UuskUXA@mail.gmail.com>
Mime-Version: 1.0
X-Scanned-By: MIMEDefang 2.67 on 10.5.11.11
Cc: jan.kiszka@siemens.com, mjt@tls.msk.ru, qemu-devel@nongnu.org,
	lcapitulino@redhat.com, blauwirbel@gmail.com, kraxel@redhat.com,
	mdroth@linux.vnet.ibm.com, stefanha@linux.vnet.ibm.com,
	xen-devel@lists.xensource.com, i.mitsyanko@samsung.com,
	armbru@redhat.com, avi@redhat.com, anthony.perard@citrix.com,
	lersek@redhat.com, Eduardo Habkost <ehabkost@redhat.com>,
	stefano.stabellini@eu.citrix.com, sw@weilnetz.de,
	rth@twiddle.net, kwolf@redhat.com, aliguori@us.ibm.com,
	mtosatti@redhat.com, pbonzini@redhat.com, afaerber@suse.de
Subject: Re: [Xen-devel] [RFC 1/8] move qemu_irq typedef out of cpu-common.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 21 Aug 2012 17:10:48 +0100
Peter Maydell <peter.maydell@linaro.org> wrote:

> On 21 August 2012 16:42, Eduardo Habkost <ehabkost@redhat.com> wrote:
> > diff --git a/qemu-common.h b/qemu-common.h
> > index e5c2bcd..6677a30 100644
> > --- a/qemu-common.h
> > +++ b/qemu-common.h
> > @@ -273,7 +273,6 @@ typedef struct PCIEPort PCIEPort;
> >  typedef struct PCIESlot PCIESlot;
> >  typedef struct MSIMessage MSIMessage;
> >  typedef struct SerialState SerialState;
> > -typedef struct IRQState *qemu_irq;
> >  typedef struct PCMCIACardState PCMCIACardState;
> >  typedef struct MouseTransformInfo MouseTransformInfo;
> >  typedef struct uWireSlave uWireSlave;
> > diff --git a/sysemu.h b/sysemu.h
> > index 65552ac..f765821 100644
> > --- a/sysemu.h
> > +++ b/sysemu.h
> > @@ -9,6 +9,7 @@
> >  #include "qapi-types.h"
> >  #include "notify.h"
> >  #include "main-loop.h"
> > +#include "hw/irq.h"
> >
> >  /* vl.c */
> 
> I'm not objecting to this patch if it helps us move forwards,
> but adding the #include to sysemu.h is effectively just adding
> the definition to another grabbag header (183 files include
> sysemu.h). It would be nicer long-term to separate out the
> one thing in this header that cares about qemu_irq (the extern
> declaration of qemu_system_powerdown).
> [I'm not really convinced that a qemu_irq is even the right
> way to signal "hey the system has actually powered down now"...]

Instead of the global qemu_system_powerdown we could use notifiers, as is done
for suspend. I'll post patches today after testing them on target-i386.

BTW, getting rid of qemu_system_powerdown is orthogonal to the topic of this
series. I hope you won't object to this patch, provided there is a follow-on
series to deal with qemu_system_powerdown.

> 
> -- PMM
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


-- 
Regards,
  Igor

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 29 15:52:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Aug 2012 15:52:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6kYq-0007Vt-Pu; Wed, 29 Aug 2012 15:52:08 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <peter.maydell@linaro.org>) id 1T6kYq-0007VX-4B
	for xen-devel@lists.xensource.com; Wed, 29 Aug 2012 15:52:08 +0000
X-Env-Sender: peter.maydell@linaro.org
X-Msg-Ref: server-8.tower-27.messagelabs.com!1346254693!8596816!1
X-Originating-IP: [209.85.210.171]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22174 invoked from network); 29 Aug 2012 15:38:14 -0000
Received: from mail-iy0-f171.google.com (HELO mail-iy0-f171.google.com)
	(209.85.210.171)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Aug 2012 15:38:14 -0000
Received: by iabz25 with SMTP id z25so1685628iab.30
	for <xen-devel@lists.xensource.com>;
	Wed, 29 Aug 2012 08:38:13 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type:x-gm-message-state;
	bh=YobpcMQlqVU8OnkdW35TL+sfg5fd5rs8E3tbpB+zfco=;
	b=EVFMAdt2s2ot3o9l1Z2r3GiTAE0jvezCD218+aHZ6j5OGSRvn3pp92DLn7/GqnY31Z
	Tz8tuR3m56P4WxeJCxBu6rRNXpnmuOqLQWS7Cb5b+W0aSdSRWA+H5UF/6byiM7RKs/qz
	buiaEtm4R8QiTkDDZBpFk5Dl5fbnKEEDqX4VRm8EknohWN8+wMeTqaOJQhp7dppawc8K
	tAAFaBrHsTNuasnClS3SMwRaqZEDytJ2P25Wi3CGBy9bm5broYrpObR0wd/YxiRP0DlK
	EYRlN3UcLJLlUBV0ytIp4D72sY9P8yYjq2ApDCJzGnDfQNjjqhdqiIKWGbC0G7yhajGN
	TxfQ==
MIME-Version: 1.0
Received: by 10.50.213.39 with SMTP id np7mr2109767igc.51.1346254693034; Wed,
	29 Aug 2012 08:38:13 -0700 (PDT)
Received: by 10.50.160.161 with HTTP; Wed, 29 Aug 2012 08:38:13 -0700 (PDT)
In-Reply-To: <20120829173604.77f90e52@thinkpad.mammed.net>
References: <1345563782-11224-1-git-send-email-ehabkost@redhat.com>
	<1345563782-11224-2-git-send-email-ehabkost@redhat.com>
	<CAFEAcA9NHk3wdExh=EaQRLs7=txF=3HXLRK+bxik1X8UuskUXA@mail.gmail.com>
	<20120829173604.77f90e52@thinkpad.mammed.net>
Date: Wed, 29 Aug 2012 16:38:13 +0100
Message-ID: <CAFEAcA8Lof1sK=q40ZVvw=UnMwmTXiyHvF92hgSZz-TuTSgVDw@mail.gmail.com>
From: Peter Maydell <peter.maydell@linaro.org>
To: Igor Mammedov <imammedo@redhat.com>
X-Gm-Message-State: ALoCoQmFX8/z8RFrJ6A/XNC+XKptWFlqYpkQcGzcl3e7E5gBlb/IOESP7Bj6hdyxrFHEDQmnib7F
Cc: jan.kiszka@siemens.com, mjt@tls.msk.ru, qemu-devel@nongnu.org,
	lcapitulino@redhat.com, blauwirbel@gmail.com, kraxel@redhat.com,
	mdroth@linux.vnet.ibm.com, stefanha@linux.vnet.ibm.com,
	xen-devel@lists.xensource.com, i.mitsyanko@samsung.com,
	armbru@redhat.com, avi@redhat.com, anthony.perard@citrix.com,
	lersek@redhat.com, Eduardo Habkost <ehabkost@redhat.com>,
	stefano.stabellini@eu.citrix.com, sw@weilnetz.de,
	rth@twiddle.net, kwolf@redhat.com, aliguori@us.ibm.com,
	mtosatti@redhat.com, pbonzini@redhat.com, afaerber@suse.de
Subject: Re: [Xen-devel] [RFC 1/8] move qemu_irq typedef out of cpu-common.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 29 August 2012 16:36, Igor Mammedov <imammedo@redhat.com> wrote:
> Peter Maydell <peter.maydell@linaro.org> wrote:
>> I'm not objecting to this patch if it helps us move forwards,
>> but adding the #include to sysemu.h is effectively just adding
>> the definition to another grabbag header (183 files include
>> sysemu.h). It would be nicer long-term to separate out the
>> one thing in this header that cares about qemu_irq (the extern
>> declaration of qemu_system_powerdown).
>> [I'm not really convinced that a qemu_irq is even the right
>> way to signal "hey the system has actually powered down now"...]
>
> Instead of global qemu_system_powerdown we could use notifiers like it's done
> for suspend, I'll post patches today after testing them on target-i386.
>
> BTW getting rid of qemu_system_powerdown is orthogonal to topic of this series.
> I hope you won't object to this patch providing there will be follow on series
> to deal with qemu_system_powerdown.

Yes, as I say, I don't object if this patch is useful in the
meantime.

-- PMM

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 29 16:15:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Aug 2012 16:15:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6kul-0008Px-E3; Wed, 29 Aug 2012 16:14:47 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andreslc@gridcentric.ca>) id 1T6kuj-0008Pn-10
	for xen-devel@lists.xensource.com; Wed, 29 Aug 2012 16:14:47 +0000
Received: from [85.158.138.51:20076] by server-4.bemta-3.messagelabs.com id
	96/81-04276-9EF3E305; Wed, 29 Aug 2012 16:14:33 +0000
X-Env-Sender: andreslc@gridcentric.ca
X-Msg-Ref: server-11.tower-174.messagelabs.com!1346256870!27503437!1
X-Originating-IP: [209.85.160.43]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24107 invoked from network); 29 Aug 2012 16:14:32 -0000
Received: from mail-pb0-f43.google.com (HELO mail-pb0-f43.google.com)
	(209.85.160.43)
	by server-11.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Aug 2012 16:14:32 -0000
Received: by pbbrq2 with SMTP id rq2so1673319pbb.30
	for <xen-devel@lists.xensource.com>;
	Wed, 29 Aug 2012 09:14:16 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=subject:mime-version:content-type:from:in-reply-to:date:cc
	:content-transfer-encoding:message-id:references:to:x-mailer
	:x-gm-message-state;
	bh=hkcsOj/tMbYR9O6C7NWOBt9CHga7CV0h95unRSZpM3g=;
	b=lldPlFyePMAnO+k9IABsUV5Fo9qNYygIXHYE24GDCDRfJjpX26KIdxnPxVsKEIxFwK
	M63eFiNQxWgr52N05he5LDMRvYy2sbtbSL5RW8HYTWNts4RuUkqksdUvdchI0TIvuX4p
	zYlcRZwtldh0oyA2ocz064u4UE2nl581LvyadghQeJL/cc/RmrbgHbLmPzibtkCGw/2A
	8eqD6a2AjLiCIQsZkC84QZpaVYFBrI9TkblwVSKTkBfGCMYQqMI7YciGw3Alt1hmaksR
	TUdK7M0kqQz56BBvwE38IbbjRry1ecfEM3H3IILRzm33Re7uZTz1JlJahL0F8B6ALjuC
	djBA==
Received: by 10.68.141.46 with SMTP id rl14mr5825640pbb.2.1346256855750;
	Wed, 29 Aug 2012 09:14:15 -0700 (PDT)
Received: from [10.10.4.42] (0127ahost2.starwoodbroadband.com. [12.105.246.2])
	by mx.google.com with ESMTPS id
	gv1sm19578177pbc.38.2012.08.29.09.14.14
	(version=TLSv1/SSLv3 cipher=OTHER);
	Wed, 29 Aug 2012 09:14:15 -0700 (PDT)
Mime-Version: 1.0 (Apple Message framework v1278)
From: Andres Lagar-Cavilla <andreslc@gridcentric.ca>
In-Reply-To: <1346246116-29999-3-git-send-email-david.vrabel@citrix.com>
Date: Wed, 29 Aug 2012 12:14:17 -0400
Message-Id: <7392D0E0-02A4-48D7-8B16-4F93EA01F3AF@gridcentric.ca>
References: <1346246116-29999-1-git-send-email-david.vrabel@citrix.com>
	<1346246116-29999-3-git-send-email-david.vrabel@citrix.com>
To: David Vrabel <david.vrabel@citrix.com>
X-Mailer: Apple Mail (2.1278)
X-Gm-Message-State: ALoCoQl9bNZRiT6go1FTHwxTiz3hqeuK6eQgtRCC9iJTLrPn9aJ4vAdtb/cKKMOLzboWqaI1pmi3
Cc: xen-devel@lists.xensource.com,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [PATCH 2/2] xen/privcmd: add PRIVCMD_MMAPBATCH_V2
	ioctl
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="windows-1252"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On Aug 29, 2012, at 9:15 AM, David Vrabel wrote:

> From: David Vrabel <david.vrabel@citrix.com>
>
> PRIVCMD_MMAPBATCH_V2 extends PRIVCMD_MMAPBATCH with an additional
> field for reporting the error code for every frame that could not be
> mapped.  libxc prefers PRIVCMD_MMAPBATCH_V2 over PRIVCMD_MMAPBATCH.
>
> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
> ---
> drivers/xen/privcmd.c |   78 +++++++++++++++++++++++++++++++++++++-----------
> include/xen/privcmd.h |   21 ++++++++++++-
> 2 files changed, 80 insertions(+), 19 deletions(-)
>
> diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
> index ccee0f1..ddd32cf 100644
> --- a/drivers/xen/privcmd.c
> +++ b/drivers/xen/privcmd.c
> @@ -248,18 +248,23 @@ struct mmap_batch_state {
> 	struct vm_area_struct *vma;
> 	int err;
>
> -	xen_pfn_t __user *user;
> +	xen_pfn_t __user *user_mfn;
> +	int __user *user_err;
> };
>
> static int mmap_batch_fn(void *data, void *state)
> {
> 	xen_pfn_t *mfnp = data;
> +	int *err = data;
Am I missing something or is there an aliasing here? Both mfnp and err point to data?
> 	struct mmap_batch_state *st = state;
> +	int ret;
>
> -	if (xen_remap_domain_mfn_range(st->vma, st->va & PAGE_MASK, *mfnp, 1,
> -				       st->vma->vm_page_prot, st->domain) < 0) {
> -		*mfnp |= 0xf0000000U;
> -		st->err++;
> +	ret = xen_remap_domain_mfn_range(st->vma, st->va & PAGE_MASK, *mfnp, 1,
> +					 st->vma->vm_page_prot, st->domain);
> +	if (ret < 0) {
> +		*err = ret;
> +		if (st->err == 0 || st->err == -ENOENT)
> +			st->err = ret;
This will unset -ENOENT if a frame after an ENOENT error fails differently.

> 	}
> 	st->va += PAGE_SIZE;
>
> @@ -268,18 +273,30 @@ static int mmap_batch_fn(void *data, void *state)
>
> static int mmap_return_errors(void *data, void *state)
> {
> -	xen_pfn_t *mfnp = data;
> +	int *err = data;
> 	struct mmap_batch_state *st = state;
> +	int ret;
> +
> +	if (st->user_err)
> +		return __put_user(*err, st->user_err++);
> +	else {
> +		xen_pfn_t mfn;
>
> -	return put_user(*mfnp, st->user++);
> +		ret = __get_user(mfn, st->user_mfn);
> +		if (ret < 0)
> +			return ret;
> +
> +		mfn |= PRIVCMD_MMAPBATCH_MFN_ERROR;
> +		return __put_user(mfn, st->user_mfn++);
> +	}
> }
>
> static struct vm_operations_struct privcmd_vm_ops;
>
> -static long privcmd_ioctl_mmap_batch(void __user *udata)
> +static long privcmd_ioctl_mmap_batch(void __user *udata, int version)
> {
> 	int ret;
> -	struct privcmd_mmapbatch m;
> +	struct privcmd_mmapbatch_v2 m;
> 	struct mm_struct *mm = current->mm;
> 	struct vm_area_struct *vma;
> 	unsigned long nr_pages;
> @@ -289,15 +306,32 @@ static long privcmd_ioctl_mmap_batch(void __user *udata)
> 	if (!xen_initial_domain())
> 		return -EPERM;
>
> -	if (copy_from_user(&m, udata, sizeof(m)))
> -		return -EFAULT;
> +	switch (version) {
> +	case 1:
> +		if (copy_from_user(&m, udata, sizeof(struct privcmd_mmapbatch)))
> +			return -EFAULT;
> +		/* Returns per-frame error in m.arr. */
> +		m.err = NULL;
> +		if (!access_ok(VERIFY_WRITE, m.arr, m.num * sizeof(*m.arr)))
> +			return -EFAULT;
> +		break;
> +	case 2:
> +		if (copy_from_user(&m, udata, sizeof(struct privcmd_mmapbatch_v2)))
> +			return -EFAULT;
> +		/* Returns per-frame error code in m.err. */
> +		if (!access_ok(VERIFY_WRITE, m.err, m.num * (sizeof(*m.err))))
> +			return -EFAULT;
> +		break;
> +	default:
> +		return -EINVAL;
> +	}
>
> 	nr_pages = m.num;
> 	if ((m.num <= 0) || (nr_pages > (LONG_MAX >> PAGE_SHIFT)))
> 		return -EINVAL;
>
> 	ret = gather_array(&pagelist, m.num, sizeof(xen_pfn_t),
> -			   m.arr);
> +			   (xen_pfn_t *)m.arr);
>
> 	if (ret || list_empty(&pagelist))
> 		goto out;
> @@ -325,12 +359,16 @@ static long privcmd_ioctl_mmap_batch(void __user *udata)
>
> 	up_write(&mm->mmap_sem);
>
> -	if (state.err > 0) {
> -		state.user = m.arr;
> +	if (state.err) {
> +		state.user_mfn = (xen_pfn_t *)m.arr;
> +		state.user_err = m.err;
> 		ret = traverse_pages(m.num, sizeof(xen_pfn_t),
> -			       &pagelist,
> -			       mmap_return_errors, &state);
> -	}
> +				     &pagelist,
> +				     mmap_return_errors, &state);
The callback now maps data to err (instead of mfnp … but I see no change to the gather_array other than a cast … am I missing something?

Thanks
Andres
> +		if (!ret)
> +			ret = state.err;
> +	} else if (m.err)
> +		__clear_user(m.err, m.num * sizeof(*m.err));
>
> out:
> 	free_page_list(&pagelist);
> @@ -354,7 +392,11 @@ static long privcmd_ioctl(struct file *file,
> 		break;
>
> 	case IOCTL_PRIVCMD_MMAPBATCH:
> -		ret = privcmd_ioctl_mmap_batch(udata, 1);
> +		break;
> +
> +	case IOCTL_PRIVCMD_MMAPBATCH_V2:
> +		ret = privcmd_ioctl_mmap_batch(udata, 2);
> 		break;
>
> 	default:
> diff --git a/include/xen/privcmd.h b/include/xen/privcmd.h
> index 17857fb..37e5255 100644
> --- a/include/xen/privcmd.h
> +++ b/include/xen/privcmd.h
> @@ -59,13 +59,30 @@ struct privcmd_mmapbatch {
> 	int num;     /* number of pages to populate */
> 	domid_t dom; /* target domain */
> 	__u64 addr;  /* virtual address */
> -	xen_pfn_t __user *arr; /* array of mfns - top nibble set on err */
> +	xen_pfn_t __user *arr; /* array of mfns - or'd with
> +				  PRIVCMD_MMAPBATCH_MFN_ERROR on err */
> +};
> +
> +#define PRIVCMD_MMAPBATCH_MFN_ERROR 0xf0000000U
> +
> +struct privcmd_mmapbatch_v2 {
> +	unsigned int num; /* number of pages to populate */
> +	domid_t dom;      /* target domain */
> +	__u64 addr;       /* virtual address */
> +	const xen_pfn_t __user *arr; /* array of mfns */
> +	int __user *err;  /* array of error codes */
> };
>
> /*
>  * @cmd: IOCTL_PRIVCMD_HYPERCALL
>  * @arg: &privcmd_hypercall_t
>  * Return: Value returned from execution of the specified hypercall.
> + *
> + * @cmd: IOCTL_PRIVCMD_MMAPBATCH_V2
> + * @arg: &struct privcmd_mmapbatch_v2
> + * Return: 0 if all pages were mapped successfully. -ENOENT if all
> + * failed mappings returned -ENOENT, otherwise the error code of the
> + * first failed mapping.
>  */
> #define IOCTL_PRIVCMD_HYPERCALL					\
> 	_IOC(_IOC_NONE, 'P', 0, sizeof(struct privcmd_hypercall))
> @@ -73,5 +90,7 @@ struct privcmd_mmapbatch {
> 	_IOC(_IOC_NONE, 'P', 2, sizeof(struct privcmd_mmap))
> #define IOCTL_PRIVCMD_MMAPBATCH					\
> 	_IOC(_IOC_NONE, 'P', 3, sizeof(struct privcmd_mmapbatch))
> +#define IOCTL_PRIVCMD_MMAPBATCH_V2				\
> +	_IOC(_IOC_NONE, 'P', 4, sizeof(struct privcmd_mmapbatch_v2))
>
> #endif /* __LINUX_PUBLIC_PRIVCMD_H__ */
> -- 
> 1.7.2.5
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	<1346246116-29999-3-git-send-email-david.vrabel@citrix.com>
To: David Vrabel <david.vrabel@citrix.com>
X-Mailer: Apple Mail (2.1278)
X-Gm-Message-State: ALoCoQl9bNZRiT6go1FTHwxTiz3hqeuK6eQgtRCC9iJTLrPn9aJ4vAdtb/cKKMOLzboWqaI1pmi3
Cc: xen-devel@lists.xensource.com,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [PATCH 2/2] xen/privcmd: add PRIVCMD_MMAPBATCH_V2
	ioctl
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="windows-1252"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On Aug 29, 2012, at 9:15 AM, David Vrabel wrote:

> From: David Vrabel <david.vrabel@citrix.com>
> 
> PRIVCMD_MMAPBATCH_V2 extends PRIVCMD_MMAPBATCH with an additional
> field for reporting the error code for every frame that could not be
> mapped.  libxc prefers PRIVCMD_MMAPBATCH_V2 over PRIVCMD_MMAPBATCH.
> 
> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
> ---
> drivers/xen/privcmd.c |   78 +++++++++++++++++++++++++++++++++++++-----------
> include/xen/privcmd.h |   21 ++++++++++++-
> 2 files changed, 80 insertions(+), 19 deletions(-)
> 
> diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
> index ccee0f1..ddd32cf 100644
> --- a/drivers/xen/privcmd.c
> +++ b/drivers/xen/privcmd.c
> @@ -248,18 +248,23 @@ struct mmap_batch_state {
> 	struct vm_area_struct *vma;
> 	int err;
> 
> -	xen_pfn_t __user *user;
> +	xen_pfn_t __user *user_mfn;
> +	int __user *user_err;
> };
> 
> static int mmap_batch_fn(void *data, void *state)
> {
> 	xen_pfn_t *mfnp = data;
> +	int *err = data;
Am I missing something or is there an aliasing here? Both mfnp and err point to data?
> 	struct mmap_batch_state *st = state;
> +	int ret;
> 
> -	if (xen_remap_domain_mfn_range(st->vma, st->va & PAGE_MASK, *mfnp, 1,
> -				       st->vma->vm_page_prot, st->domain) < 0) {
> -		*mfnp |= 0xf0000000U;
> -		st->err++;
> +	ret = xen_remap_domain_mfn_range(st->vma, st->va & PAGE_MASK, *mfnp, 1,
> +					 st->vma->vm_page_prot, st->domain);
> +	if (ret < 0) {
> +		*err = ret;
> +		if (st->err == 0 || st->err == -ENOENT)
> +			st->err = ret;
This will unset -ENOENT if a frame after an ENOENT error fails differently.

> 	}
> 	st->va += PAGE_SIZE;
> 
> @@ -268,18 +273,30 @@ static int mmap_batch_fn(void *data, void *state)
> 
> static int mmap_return_errors(void *data, void *state)
> {
> -	xen_pfn_t *mfnp = data;
> +	int *err = data;
> 	struct mmap_batch_state *st = state;
> +	int ret;
> +
> +	if (st->user_err)
> +		return __put_user(*err, st->user_err++);
> +	else {
> +		xen_pfn_t mfn;
> 
> -	return put_user(*mfnp, st->user++);
> +		ret = __get_user(mfn, st->user_mfn);
> +		if (ret < 0)
> +			return ret;
> +
> +		mfn |= PRIVCMD_MMAPBATCH_MFN_ERROR;
> +		return __put_user(mfn, st->user_mfn++);
> +	}
> }
> 
> static struct vm_operations_struct privcmd_vm_ops;
> 
> -static long privcmd_ioctl_mmap_batch(void __user *udata)
> +static long privcmd_ioctl_mmap_batch(void __user *udata, int version)
> {
> 	int ret;
> -	struct privcmd_mmapbatch m;
> +	struct privcmd_mmapbatch_v2 m;
> 	struct mm_struct *mm = current->mm;
> 	struct vm_area_struct *vma;
> 	unsigned long nr_pages;
> @@ -289,15 +306,32 @@ static long privcmd_ioctl_mmap_batch(void __user *udata)
> 	if (!xen_initial_domain())
> 		return -EPERM;
> 
> -	if (copy_from_user(&m, udata, sizeof(m)))
> -		return -EFAULT;
> +	switch (version) {
> +	case 1:
> +		if (copy_from_user(&m, udata, sizeof(struct privcmd_mmapbatch)))
> +			return -EFAULT;
> +		/* Returns per-frame error in m.arr. */
> +		m.err = NULL;
> +		if (!access_ok(VERIFY_WRITE, m.arr, m.num * sizeof(*m.arr)))
> +			return -EFAULT;
> +		break;
> +	case 2:
> +		if (copy_from_user(&m, udata, sizeof(struct privcmd_mmapbatch_v2)))
> +			return -EFAULT;
> +		/* Returns per-frame error code in m.err. */
> +		if (!access_ok(VERIFY_WRITE, m.err, m.num * (sizeof(*m.err))))
> +			return -EFAULT;
> +		break;
> +	default:
> +		return -EINVAL;
> +	}
> 
> 	nr_pages = m.num;
> 	if ((m.num <= 0) || (nr_pages > (LONG_MAX >> PAGE_SHIFT)))
> 		return -EINVAL;
> 
> 	ret = gather_array(&pagelist, m.num, sizeof(xen_pfn_t),
> -			   m.arr);
> +			   (xen_pfn_t *)m.arr);
> 
> 	if (ret || list_empty(&pagelist))
> 		goto out;
> @@ -325,12 +359,16 @@ static long privcmd_ioctl_mmap_batch(void __user *udata)
> 
> 	up_write(&mm->mmap_sem);
> 
> -	if (state.err > 0) {
> -		state.user = m.arr;
> +	if (state.err) {
> +		state.user_mfn = (xen_pfn_t *)m.arr;
> +		state.user_err = m.err;
> 		ret = traverse_pages(m.num, sizeof(xen_pfn_t),
> -			       &pagelist,
> -			       mmap_return_errors, &state);
> -	}
> +				     &pagelist,
> +				     mmap_return_errors, &state);
The callback now maps data to err (instead of mfnp … but I see no change to the gather_array other than a cast … am I missing something?

Thanks
Andres
> +		if (!ret)
> +			ret = state.err;
> +	} else if (m.err)
> +		__clear_user(m.err, m.num * sizeof(*m.err));
> 
> out:
> 	free_page_list(&pagelist);
> @@ -354,7 +392,11 @@ static long privcmd_ioctl(struct file *file,
> 		break;
> 
> 	case IOCTL_PRIVCMD_MMAPBATCH:
> -		ret = privcmd_ioctl_mmap_batch(udata);
> +		ret = privcmd_ioctl_mmap_batch(udata, 1);
> +		break;
> +
> +	case IOCTL_PRIVCMD_MMAPBATCH_V2:
> +		ret = privcmd_ioctl_mmap_batch(udata, 2);
> 		break;
> 
> 	default:
> diff --git a/include/xen/privcmd.h b/include/xen/privcmd.h
> index 17857fb..37e5255 100644
> --- a/include/xen/privcmd.h
> +++ b/include/xen/privcmd.h
> @@ -59,13 +59,30 @@ struct privcmd_mmapbatch {
> 	int num;     /* number of pages to populate */
> 	domid_t dom; /* target domain */
> 	__u64 addr;  /* virtual address */
> -	xen_pfn_t __user *arr; /* array of mfns - top nibble set on err */
> +	xen_pfn_t __user *arr; /* array of mfns - or'd with
> +				  PRIVCMD_MMAPBATCH_MFN_ERROR on err */
> +};
> +
> +#define PRIVCMD_MMAPBATCH_MFN_ERROR 0xf0000000U
> +
> +struct privcmd_mmapbatch_v2 {
> +	unsigned int num; /* number of pages to populate */
> +	domid_t dom;      /* target domain */
> +	__u64 addr;       /* virtual address */
> +	const xen_pfn_t __user *arr; /* array of mfns */
> +	int __user *err;  /* array of error codes */
> };
> 
> /*
>  * @cmd: IOCTL_PRIVCMD_HYPERCALL
>  * @arg: &privcmd_hypercall_t
>  * Return: Value returned from execution of the specified hypercall.
> + *
> + * @cmd: IOCTL_PRIVCMD_MMAPBATCH_V2
> + * @arg: &struct privcmd_mmapbatch_v2
> + * Return: 0 if all pages were mapped successfully. -ENOENT if all
> + * failed mappings returned -ENOENT, otherwise the error code of the
> + * first failed mapping.
>  */
> #define IOCTL_PRIVCMD_HYPERCALL					\
> 	_IOC(_IOC_NONE, 'P', 0, sizeof(struct privcmd_hypercall))
> @@ -73,5 +90,7 @@ struct privcmd_mmapbatch {
> 	_IOC(_IOC_NONE, 'P', 2, sizeof(struct privcmd_mmap))
> #define IOCTL_PRIVCMD_MMAPBATCH					\
> 	_IOC(_IOC_NONE, 'P', 3, sizeof(struct privcmd_mmapbatch))
> +#define IOCTL_PRIVCMD_MMAPBATCH_V2				\
> +	_IOC(_IOC_NONE, 'P', 4, sizeof(struct privcmd_mmapbatch_v2))
> 
> #endif /* __LINUX_PUBLIC_PRIVCMD_H__ */
> --
> 1.7.2.5
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 29 16:23:47 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Aug 2012 16:23:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6l3A-0000DE-EY; Wed, 29 Aug 2012 16:23:28 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T6l39-0000D8-9T
	for xen-devel@lists.xensource.com; Wed, 29 Aug 2012 16:23:27 +0000
Received: from [85.158.143.99:52708] by server-3.bemta-4.messagelabs.com id
	47/CA-08232-EF14E305; Wed, 29 Aug 2012 16:23:26 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-216.messagelabs.com!1346257404!22130028!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA3MTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30011 invoked from network); 29 Aug 2012 16:23:24 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-2.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Aug 2012 16:23:24 -0000
X-IronPort-AV: E=Sophos;i="4.80,335,1344211200"; d="scan'208";a="14254795"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	29 Aug 2012 16:23:23 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1;
	Wed, 29 Aug 2012 17:23:23 +0100
Message-ID: <1346257402.20019.9.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: David Vrabel <david.vrabel@citrix.com>
Date: Wed, 29 Aug 2012 17:23:22 +0100
In-Reply-To: <1346246116-29999-1-git-send-email-david.vrabel@citrix.com>
References: <1346246116-29999-1-git-send-email-david.vrabel@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Andres Lagar-Cavilla <andres@gridcentric.ca>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [PATCHv2 0/2] xen/privcmd: support for paged-out
 frames
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-08-29 at 14:15 +0100, David Vrabel wrote:
> This series is a straight forward-port of some functionality from
> classic kernels to support Xen hosts that do paging of guests.
> 
> This isn't functionality the XenServer makes use of so I've not tested
> these with paging in use.
> 
> Changes since v1:
> 
> - Don't change PRIVCMD_MMAPBATCH (except to #define a constant for the
>   error).  It's broken and not really fixable sensibly and libxc will
>   use V2 if it is available.
> - Return -ENOENT if all failures were -ENOENT.

Is this behaviour a requirement from something?

Usually hypercalls of this type return a global error only if something
went wrong with the general mechanics of the hypercall (e.g. faults
reading arguments etc) and leave reporting of the individual failures of
subops to the op specific field, even if all the subops fail in the same
way.

Ian.

> - Clear arg->err on success (libxc expected this).
> 
> I think this should probably get a "Tested-by" Andres or someone else
> who uses paging before being applied.
> 
> David
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 29 16:37:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Aug 2012 16:37:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6lG5-0000Pq-Rh; Wed, 29 Aug 2012 16:36:49 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1T6lG4-0000Pl-9g
	for xen-devel@lists.xensource.com; Wed, 29 Aug 2012 16:36:48 +0000
Received: from [85.158.143.35:34685] by server-3.bemta-4.messagelabs.com id
	E1/6A-08232-F154E305; Wed, 29 Aug 2012 16:36:47 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1346258203!12474798!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjc3NTk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3102 invoked from network); 29 Aug 2012 16:36:45 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Aug 2012 16:36:45 -0000
X-IronPort-AV: E=Sophos;i="4.80,335,1344211200"; d="scan'208";a="36200921"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	29 Aug 2012 16:36:43 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPMAILMX01.citrite.net
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1; Wed, 29 Aug 2012
	12:36:43 -0400
Message-ID: <503E451A.20107@citrix.com>
Date: Wed, 29 Aug 2012 17:36:42 +0100
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20120428 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Andres Lagar-Cavilla <andreslc@gridcentric.ca>
References: <1346246116-29999-1-git-send-email-david.vrabel@citrix.com>
	<1346246116-29999-3-git-send-email-david.vrabel@citrix.com>
	<7392D0E0-02A4-48D7-8B16-4F93EA01F3AF@gridcentric.ca>
In-Reply-To: <7392D0E0-02A4-48D7-8B16-4F93EA01F3AF@gridcentric.ca>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [PATCH 2/2] xen/privcmd: add PRIVCMD_MMAPBATCH_V2
	ioctl
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="windows-1252"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 29/08/12 17:14, Andres Lagar-Cavilla wrote:
> 
> On Aug 29, 2012, at 9:15 AM, David Vrabel wrote:
> 
>> From: David Vrabel <david.vrabel@citrix.com>
>>
>> PRIVCMD_MMAPBATCH_V2 extends PRIVCMD_MMAPBATCH with an additional
>> field for reporting the error code for every frame that could not be
>> mapped.  libxc prefers PRIVCMD_MMAPBATCH_V2 over PRIVCMD_MMAPBATCH.
[...]
>> diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
>> index ccee0f1..ddd32cf 100644
>> --- a/drivers/xen/privcmd.c
>> +++ b/drivers/xen/privcmd.c
>> @@ -248,18 +248,23 @@ struct mmap_batch_state {
>> 	struct vm_area_struct *vma;
>> 	int err;
>>
>> -	xen_pfn_t __user *user;
>> +	xen_pfn_t __user *user_mfn;
>> +	int __user *user_err;
>> };
>>
>> static int mmap_batch_fn(void *data, void *state)
>> {
>> 	xen_pfn_t *mfnp = data;
>> +	int *err = data;
> Am I missing something or is there an aliasing here? Both mfnp and err point to data?

There is deliberate aliasing here.  We use the mfn array to save the
error codes for later processing.

>> 	struct mmap_batch_state *st = state;
>> +	int ret;
>>
>> -	if (xen_remap_domain_mfn_range(st->vma, st->va & PAGE_MASK, *mfnp, 1,
>> -				       st->vma->vm_page_prot, st->domain) < 0) {
>> -		*mfnp |= 0xf0000000U;
>> -		st->err++;
>> +	ret = xen_remap_domain_mfn_range(st->vma, st->va & PAGE_MASK, *mfnp, 1,
>> +					 st->vma->vm_page_prot, st->domain);
>> +	if (ret < 0) {
>> +		*err = ret;
>> +		if (st->err == 0 || st->err == -ENOENT)
>> +			st->err = ret;
> This will unset -ENOENT if a frame after an ENOENT error fails differently.

I thought that was what the original implementation did but it seems it
does not.

>> @@ -325,12 +359,16 @@ static long privcmd_ioctl_mmap_batch(void __user *udata)
>>
>> 	up_write(&mm->mmap_sem);
>>
>> -	if (state.err > 0) {
>> -		state.user = m.arr;
>> +	if (state.err) {
>> +		state.user_mfn = (xen_pfn_t *)m.arr;
>> +		state.user_err = m.err;
>> 		ret = traverse_pages(m.num, sizeof(xen_pfn_t),
>> -			       &pagelist,
>> -			       mmap_return_errors, &state);
>> -	}
>> +				     &pagelist,
>> +				     mmap_return_errors, &state);

> The callback now maps data to err (instead of mfnp … but I see no
> change to the gather_array other than a cast … am I missing something?

The kernel mfn and the err array are aliased.

I could have made gather_array() allow the kernel array to have larger
elements than the user array but that looked like too much work.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 29 16:57:07 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Aug 2012 16:57:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6lZL-0000hb-2Q; Wed, 29 Aug 2012 16:56:43 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1T6lZJ-0000hW-P6
	for xen-devel@lists.xensource.com; Wed, 29 Aug 2012 16:56:41 +0000
Received: from [85.158.143.99:16777] by server-3.bemta-4.messagelabs.com id
	E3/DD-08232-9C94E305; Wed, 29 Aug 2012 16:56:41 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-7.tower-216.messagelabs.com!1346259399!24083762!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzQ5MjE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17712 invoked from network); 29 Aug 2012 16:56:40 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Aug 2012 16:56:40 -0000
X-IronPort-AV: E=Sophos;i="4.80,335,1344211200"; d="scan'208";a="206567486"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	29 Aug 2012 16:56:32 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPMAILMX01.citrite.net
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1; Wed, 29 Aug 2012
	12:56:31 -0400
Message-ID: <503E49BE.7080704@citrix.com>
Date: Wed, 29 Aug 2012 17:56:30 +0100
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20120428 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1346246116-29999-1-git-send-email-david.vrabel@citrix.com>
	<1346257402.20019.9.camel@zakaz.uk.xensource.com>
In-Reply-To: <1346257402.20019.9.camel@zakaz.uk.xensource.com>
Cc: Andres Lagar-Cavilla <andres@gridcentric.ca>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [PATCHv2 0/2] xen/privcmd: support for paged-out
 frames
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 29/08/12 17:23, Ian Campbell wrote:
> On Wed, 2012-08-29 at 14:15 +0100, David Vrabel wrote:
>> This series is a straight forward-port of some functionality from
>> classic kernels to support Xen hosts that do paging of guests.
>>
>> This isn't functionality that XenServer makes use of, so I've not
>> tested these with paging in use.
>>
>> Changes since v1:
>>
>> - Don't change PRIVCMD_MMAPBATCH (except to #define a constant for the
>>   error).  It's broken and not really fixable sensibly and libxc will
>>   use V2 if it is available.
>> - Return -ENOENT if all failures were -ENOENT.
> 
> Is this behaviour a requirement from something?

It's the behaviour libxc is expecting.  It doesn't retry unless errno ==
ENOENT.

> Usually hypercalls of this type return a global error only if something
> went wrong with the general mechanics of the hypercall (e.g. faults
> reading arguments etc) and leave reporting of the individual failures of
> subops to the op specific field, even if all the subops fail in the same
> way.

I didn't design this interface...  Feel free to propose (and implement)
an alternate MMAPBATCH_V3 interface.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 29 18:05:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Aug 2012 18:05:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6mdf-0001Bx-Q3; Wed, 29 Aug 2012 18:05:15 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T6mde-0001Bs-3S
	for xen-devel@lists.xensource.com; Wed, 29 Aug 2012 18:05:14 +0000
Received: from [85.158.143.99:34291] by server-2.bemta-4.messagelabs.com id
	D9/A3-21239-9D95E305; Wed, 29 Aug 2012 18:05:13 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-13.tower-216.messagelabs.com!1346263511!26562845!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA3MTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26368 invoked from network); 29 Aug 2012 18:05:11 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-13.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Aug 2012 18:05:11 -0000
X-IronPort-AV: E=Sophos;i="4.80,335,1344211200"; d="scan'208";a="14256169"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	29 Aug 2012 18:05:11 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Wed, 29 Aug 2012 19:05:11 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1T6mda-0006Kf-Px;
	Wed, 29 Aug 2012 18:05:10 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1T6mda-0002f8-E9;
	Wed, 29 Aug 2012 19:05:10 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13637-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 29 Aug 2012 19:05:10 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.0 test] 13637: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13637 linux-3.0 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13637/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf      9 guest-start                  fail   like 13608
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13608
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13608
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13608
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13608

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  8 debian-fixup                fail never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass

version targeted for testing:
 linux                5aa287dcf1b5879aa0150b0511833c52885f5b4c
baseline version:
 linux                a422ca75bd264cd26bafeb6305655245d2ea7c6b

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=linux-3.0
+ revision=5aa287dcf1b5879aa0150b0511833c52885f5b4c
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push linux-3.0 5aa287dcf1b5879aa0150b0511833c52885f5b4c
+ branch=linux-3.0
+ revision=5aa287dcf1b5879aa0150b0511833c52885f5b4c
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=linux
+ xenbranch=xen-unstable
+ '[' xlinux = xlinux ']'
+ linuxbranch=linux-3.0
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/staging/xen-unstable.hg
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.linux-3.0
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.linux-3.0
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-unstable.git
+ info_linux_tree linux-3.0
+ case $1 in
+ : git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
+ : linux-3.0.y
+ : linux-3.0.y
+ : git
+ : git
+ : git://xenbits.xen.org/linux-pvops.git
+ : xen@xenbits.xensource.com:git/linux-pvops.git
+ : tested/linux-3.0
+ : tested/linux-3.0
+ return 0
+ cd /export/home/osstest/repos/linux
+ git push xen@xenbits.xensource.com:git/linux-pvops.git 5aa287dcf1b5879aa0150b0511833c52885f5b4c:tested/linux-3.0
Counting objects: 153, done.
Compressing objects: 100% (19/19), done.
Writing objects: 100% (107/107), 18.84 KiB, done.
Total 107 (delta 88), reused 107 (delta 88)
To xen@xenbits.xensource.com:git/linux-pvops.git
   a422ca7..5aa287d  5aa287dcf1b5879aa0150b0511833c52885f5b4c -> tested/linux-3.0
+ exit 0

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 29 18:05:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Aug 2012 18:05:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6mdf-0001Bx-Q3; Wed, 29 Aug 2012 18:05:15 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T6mde-0001Bs-3S
	for xen-devel@lists.xensource.com; Wed, 29 Aug 2012 18:05:14 +0000
Received: from [85.158.143.99:34291] by server-2.bemta-4.messagelabs.com id
	D9/A3-21239-9D95E305; Wed, 29 Aug 2012 18:05:13 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-13.tower-216.messagelabs.com!1346263511!26562845!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA3MTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26368 invoked from network); 29 Aug 2012 18:05:11 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-13.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Aug 2012 18:05:11 -0000
X-IronPort-AV: E=Sophos;i="4.80,335,1344211200"; d="scan'208";a="14256169"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	29 Aug 2012 18:05:11 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Wed, 29 Aug 2012 19:05:11 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1T6mda-0006Kf-Px;
	Wed, 29 Aug 2012 18:05:10 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1T6mda-0002f8-E9;
	Wed, 29 Aug 2012 19:05:10 +0100
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-13637-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 29 Aug 2012 19:05:10 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.0 test] 13637: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13637 linux-3.0 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13637/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf      9 guest-start                  fail   like 13608
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13608
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13608
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13608
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13608

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  8 debian-fixup                fail never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass

version targeted for testing:
 linux                5aa287dcf1b5879aa0150b0511833c52885f5b4c
baseline version:
 linux                a422ca75bd264cd26bafeb6305655245d2ea7c6b

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=linux-3.0
+ revision=5aa287dcf1b5879aa0150b0511833c52885f5b4c
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push linux-3.0 5aa287dcf1b5879aa0150b0511833c52885f5b4c
+ branch=linux-3.0
+ revision=5aa287dcf1b5879aa0150b0511833c52885f5b4c
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=linux
+ xenbranch=xen-unstable
+ '[' xlinux = xlinux ']'
+ linuxbranch=linux-3.0
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/staging/xen-unstable.hg
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.linux-3.0
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.linux-3.0
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-unstable.git
+ info_linux_tree linux-3.0
+ case $1 in
+ : git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
+ : linux-3.0.y
+ : linux-3.0.y
+ : git
+ : git
+ : git://xenbits.xen.org/linux-pvops.git
+ : xen@xenbits.xensource.com:git/linux-pvops.git
+ : tested/linux-3.0
+ : tested/linux-3.0
+ return 0
+ cd /export/home/osstest/repos/linux
+ git push xen@xenbits.xensource.com:git/linux-pvops.git 5aa287dcf1b5879aa0150b0511833c52885f5b4c:tested/linux-3.0
Counting objects: 153, done.
Compressing objects: 100% (19/19), done.
Writing objects: 100% (107/107), 18.84 KiB, done.
Total 107 (delta 88), reused 107 (delta 88)
To xen@xenbits.xensource.com:git/linux-pvops.git
   a422ca7..5aa287d  5aa287dcf1b5879aa0150b0511833c52885f5b4c -> tested/linux-3.0
+ exit 0

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 29 18:05:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Aug 2012 18:05:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6mdp-0001CR-6Y; Wed, 29 Aug 2012 18:05:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T6mdn-0001CH-Fy
	for xen-devel@lists.xensource.com; Wed, 29 Aug 2012 18:05:23 +0000
Received: from [85.158.143.35:14064] by server-1.bemta-4.messagelabs.com id
	DB/1F-12504-2E95E305; Wed, 29 Aug 2012 18:05:22 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1346263521!13206934!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA3MTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12368 invoked from network); 29 Aug 2012 18:05:22 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Aug 2012 18:05:22 -0000
X-IronPort-AV: E=Sophos;i="4.80,335,1344211200"; d="scan'208";a="14256174"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	29 Aug 2012 18:05:21 +0000
Received: from [127.0.0.1] (10.80.16.67) by smtprelay.citrix.com
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1;
	Wed, 29 Aug 2012 19:05:21 +0100
Message-ID: <1346263520.6655.4.camel@dagon.hellion.org.uk>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: David Vrabel <david.vrabel@citrix.com>
Date: Wed, 29 Aug 2012 19:05:20 +0100
In-Reply-To: <503E49BE.7080704@citrix.com>
References: <1346246116-29999-1-git-send-email-david.vrabel@citrix.com>
	<1346257402.20019.9.camel@zakaz.uk.xensource.com>
	<503E49BE.7080704@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Andres Lagar-Cavilla <andres@gridcentric.ca>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [PATCHv2 0/2] xen/privcmd: support for paged-out
 frames
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-08-29 at 17:56 +0100, David Vrabel wrote:
> On 29/08/12 17:23, Ian Campbell wrote:
> > On Wed, 2012-08-29 at 14:15 +0100, David Vrabel wrote:
> >> This series is a straight forward-port of some functionality from
> >> classic kernels to support Xen hosts that do paging of guests.
> >>
> >> This isn't functionality the XenServer makes use of so I've not tested
> >> these with paging in use.
> >>
> >> Changes since v1:
> >>
> >> - Don't change PRIVCMD_MMAPBATCH (except to #define a constant for the
> >>   error).  It's broken and not really fixable sensibly and libxc will
> >>   use V2 if it is available.
> >> - Return -ENOENT if all failures were -ENOENT.
> > 
> > Is this behaviour a requirement from something?
> 
> It's the behaviour libxc is expecting.  It doesn't retry unless errno ==
> ENOENT.

Surely if that is the case you must return -ENOENT if *any* failure was
-ENOENT? That seems to be what the linux-2.6.18-xen.hg implementation
does.

> > Usually hypercalls of this type return a global error only if something
> > went wrong with the general mechanics of the hypercall (e.g. faults
> > reading arguments etc) and leave reporting of the individual failures of
> > subops to the op specific field, even if all the subops fail in the same
> > way.
> 
> I didn't design this interface...

The interface you described doesn't make any sense, so I was trying to
suggest a way in which you might have misunderstood the interface you
were trying to implement, by pointing out the common pattern. It turns
out this interface doesn't follow that common pattern, but it's still
different from what you described, AFAICT.
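The convention quoted above (a global error only when the mechanics of the call fail, with each subop's result reported in a per-entry field) can be sketched in a few lines of C. This is a minimal illustrative model, not Xen's actual hypercall code; `struct subop`, `do_one()` and `do_batch()` are hypothetical names.

```c
#include <errno.h>
#include <stddef.h>

/* One entry in a batched operation: input plus a per-entry result slot. */
struct subop {
	int arg;	/* input for this entry */
	int status;	/* per-subop result, filled in by the batch */
};

/* Stand-in for the real per-entry work. */
static int do_one(int arg)
{
	return arg < 0 ? -EINVAL : 0;
}

/*
 * The global return value reports only whole-call failures (e.g. bad
 * arguments); individual subop failures go to ops[i].status, even if
 * every subop fails in the same way.
 */
static int do_batch(struct subop *ops, size_t n)
{
	size_t i;

	if (ops == NULL)
		return -EFAULT;	/* whole-call failure */

	for (i = 0; i < n; i++)
		ops[i].status = do_one(ops[i].arg);

	return 0;	/* mechanics succeeded, even if subops did not */
}
```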

> Feel free to propose (and implement) an alternate MMAPBATCH_V3 interface.

No thanks ;-)

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 29 18:10:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Aug 2012 18:10:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6mis-0001SB-Ut; Wed, 29 Aug 2012 18:10:38 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andreslc@gridcentric.ca>) id 1T6mir-0001Rz-Jg
	for xen-devel@lists.xensource.com; Wed, 29 Aug 2012 18:10:37 +0000
Received: from [85.158.138.51:60845] by server-1.bemta-3.messagelabs.com id
	92/42-09327-C1B5E305; Wed, 29 Aug 2012 18:10:36 +0000
X-Env-Sender: andreslc@gridcentric.ca
X-Msg-Ref: server-12.tower-174.messagelabs.com!1346263832!19576389!1
X-Originating-IP: [209.85.160.43]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22618 invoked from network); 29 Aug 2012 18:10:34 -0000
Received: from mail-pb0-f43.google.com (HELO mail-pb0-f43.google.com)
	(209.85.160.43)
	by server-12.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Aug 2012 18:10:34 -0000
Received: by pbbrq2 with SMTP id rq2so1862756pbb.30
	for <xen-devel@lists.xensource.com>;
	Wed, 29 Aug 2012 11:10:32 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=subject:mime-version:content-type:from:in-reply-to:date:cc
	:content-transfer-encoding:message-id:references:to:x-mailer
	:x-gm-message-state;
	bh=HzyGisBMJwa8kXRFzpHEQqhrKcH1y4aZZAaeXO/RgLs=;
	b=jM6sC7ihNVRWn/JfP/FVLfjkHLJ6XOhFYHlpd86vkAKwWp2kJvPxebDePjTmkNp2Ww
	jsE8E1Z9zKiI5kcoSRnvXQrEjH68CBy23lTHzyycCM+07TjiGtvZo2QjZFCNsIp3JsIl
	OITRnEWtVM9HeQOZz2Obj0LTDHzDVI1qGf9knKOD+4Vk3vSQ3WjhWjaX236X5Hv5I0zJ
	nq5x8Y61jxSEISRO7uCmLUi9Uy+7/Y+v8PRoTNHMdBkkJGMimRyg/RmP6hV0PSChVYJ6
	4oPrLjC1txni3mVCOMtdvUXmPLcosmDQmOA9d43ngvz40Vy4GU11bfwL1uVCx4jwbuYi
	C+0Q==
Received: by 10.68.239.103 with SMTP id vr7mr6412811pbc.0.1346263832123;
	Wed, 29 Aug 2012 11:10:32 -0700 (PDT)
Received: from [192.168.1.130] (wsip-174-79-253-34.sd.sd.cox.net.
	[174.79.253.34])
	by mx.google.com with ESMTPS id sj5sm19729355pbc.30.2012.08.29.11.10.30
	(version=TLSv1/SSLv3 cipher=OTHER);
	Wed, 29 Aug 2012 11:10:31 -0700 (PDT)
Mime-Version: 1.0 (Apple Message framework v1278)
From: Andres Lagar-Cavilla <andreslc@gridcentric.ca>
In-Reply-To: <503E451A.20107@citrix.com>
Date: Wed, 29 Aug 2012 14:10:34 -0400
Message-Id: <85F40D8D-2FBE-4F5A-A895-E299DA20C445@gridcentric.ca>
References: <1346246116-29999-1-git-send-email-david.vrabel@citrix.com>
	<1346246116-29999-3-git-send-email-david.vrabel@citrix.com>
	<7392D0E0-02A4-48D7-8B16-4F93EA01F3AF@gridcentric.ca>
	<503E451A.20107@citrix.com>
To: David Vrabel <david.vrabel@citrix.com>
X-Mailer: Apple Mail (2.1278)
X-Gm-Message-State: ALoCoQlTuzWFS8IL3QfQuP9j+KFcYSAVc4rleddayIJSyZQnytNj8CM8NIEpn9ny28zCTb0ESVVx
Cc: Andres Lagar-Cavilla <andreslc@gridcentric.ca>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [PATCH 2/2] xen/privcmd: add PRIVCMD_MMAPBATCH_V2
	ioctl
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="windows-1252"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On Aug 29, 2012, at 12:36 PM, David Vrabel wrote:

> On 29/08/12 17:14, Andres Lagar-Cavilla wrote:
>> 
>> On Aug 29, 2012, at 9:15 AM, David Vrabel wrote:
>> 
>>> From: David Vrabel <david.vrabel@citrix.com>
>>> 
>>> PRIVCMD_MMAPBATCH_V2 extends PRIVCMD_MMAPBATCH with an additional
>>> field for reporting the error code for every frame that could not be
>>> mapped.  libxc prefers PRIVCMD_MMAPBATCH_V2 over PRIVCMD_MMAPBATCH.
> [...]
>>> diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
>>> index ccee0f1..ddd32cf 100644
>>> --- a/drivers/xen/privcmd.c
>>> +++ b/drivers/xen/privcmd.c
>>> @@ -248,18 +248,23 @@ struct mmap_batch_state {
>>> 	struct vm_area_struct *vma;
>>> 	int err;
>>> 
>>> -	xen_pfn_t __user *user;
>>> +	xen_pfn_t __user *user_mfn;
>>> +	int __user *user_err;
>>> };
>>> 
>>> static int mmap_batch_fn(void *data, void *state)
>>> {
>>> 	xen_pfn_t *mfnp = data;
>>> +	int *err = data;
>> Am I missing something or is there an aliasing here? Both mfnp and err point to data?
> 
> There is deliberate aliasing here.  We use the mfn array to save the
> error codes for later processing.

May I suggest a comment to clarify this here? Are xen_pfn_t and int the same size in both bitnesses? The very fact that I raise the question is an argument against this black-magic aliasing. Imho.

An explicit union for each slot in the *data, or passing both arrays to the callback, looks better to me.
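The explicit-union alternative suggested above could look roughly like this. It is a sketch only, under the assumption that each slot of the gathered array carries either an MFN (input) or an errno value (output); `mmap_batch_slot` and `slot_store_err` are hypothetical names, not the actual privcmd code.

```c
#include <errno.h>
#include <stdint.h>

typedef uint64_t xen_pfn_t;	/* stand-in for the kernel typedef */

/*
 * Each slot is either a frame number to map (before the mapping
 * attempt) or an errno value (after a failed attempt).  The union
 * makes the reuse of the storage explicit instead of relying on
 * xen_pfn_t and int aliasing quietly.
 */
union mmap_batch_slot {
	xen_pfn_t mfn;	/* input: frame to map */
	int err;	/* output: errno for a failed mapping */
};

/* Record a mapping failure in the slot, making the overwrite explicit. */
static inline void slot_store_err(union mmap_batch_slot *slot, int ret)
{
	slot->err = ret;
}
```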

> 
>>> 	struct mmap_batch_state *st = state;
>>> +	int ret;
>>> 
>>> -	if (xen_remap_domain_mfn_range(st->vma, st->va & PAGE_MASK, *mfnp, 1,
>>> -				       st->vma->vm_page_prot, st->domain) < 0) {
>>> -		*mfnp |= 0xf0000000U;
>>> -		st->err++;
>>> +	ret = xen_remap_domain_mfn_range(st->vma, st->va & PAGE_MASK, *mfnp, 1,
>>> +					 st->vma->vm_page_prot, st->domain);
>>> +	if (ret < 0) {
>>> +		*err = ret;
>>> +		if (st->err == 0 || st->err == -ENOENT)
>>> +			st->err = ret;
>> This will unset -ENOENT if a frame after an ENOENT error fails differently.
> 
> I thought that was what the original implementation did, but it seems it
> does not.

I think the best way to do this is:

if ((ret == -ENOENT) && (st->err == 0))
	st->err = -ENOENT;

Then st->err is -ENOENT if there was at least one individual -ENOENT, or zero otherwise, which is what libxc expects (barring an EFAULT or some other higher-level whole-operation error).
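The folding rule proposed above can be modelled as a small standalone helper: the whole-batch status becomes -ENOENT as soon as any per-frame error is -ENOENT, and stays 0 otherwise, with non-ENOENT per-frame errors reported only in the per-frame array. `aggregate_status()` is a hypothetical name for illustration, not part of the patch.

```c
#include <errno.h>
#include <stddef.h>

/*
 * Fold an array of per-frame errno values into a single batch status:
 * -ENOENT if any entry is -ENOENT (libxc retries on that), 0 otherwise.
 * Other per-frame errors do not affect the batch status.
 */
static int aggregate_status(const int *errs, size_t n)
{
	int status = 0;
	size_t i;

	for (i = 0; i < n; i++) {
		if (errs[i] == -ENOENT)
			status = -ENOENT;	/* sticky once seen */
	}
	return status;
}
```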

Andres

> .
> 
>>> @@ -325,12 +359,16 @@ static long privcmd_ioctl_mmap_batch(void __user *udata)
>>> 
>>> 	up_write(&mm->mmap_sem);
>>> 
>>> -	if (state.err > 0) {
>>> -		state.user = m.arr;
>>> +	if (state.err) {
>>> +		state.user_mfn = (xen_pfn_t *)m.arr;
>>> +		state.user_err = m.err;
>>> 		ret = traverse_pages(m.num, sizeof(xen_pfn_t),
>>> -			       &pagelist,
>>> -			       mmap_return_errors, &state);
>>> -	}
>>> +				     &pagelist,
>>> +				     mmap_return_errors, &state);
> 
>> The callback now maps data to err (instead of mfnp) … but I see no
>> change to the gather_array other than a cast … am I missing something?
> 
> The kernel mfn and the err array are aliased.
> 
> I could have made gather_array() allow the kernel array to have larger
> elements than the user array, but that looked like too much work.
> 
> David


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 29 18:30:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Aug 2012 18:30:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6n1e-0001m4-DG; Wed, 29 Aug 2012 18:30:02 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@eu.citrix.com>)
	id 1T6n1c-0001lo-AR
	for xen-devel@lists.xen.org; Wed, 29 Aug 2012 18:30:00 +0000
X-Env-Sender: Stefano.Stabellini@eu.citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1346264994!8657249!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA3MTY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10447 invoked from network); 29 Aug 2012 18:29:54 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Aug 2012 18:29:54 -0000
X-IronPort-AV: E=Sophos;i="4.80,335,1344211200"; d="scan'208";a="14256574"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	29 Aug 2012 18:29:00 +0000
Received: from kaball.uk.xensource.com (10.80.2.59) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Wed, 29 Aug 2012 19:29:01 +0100
Date: Wed, 29 Aug 2012 19:28:32 +0100
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: zhenzhong.duan <zhenzhong.duan@oracle.com>
In-Reply-To: <503DA678.4040801@oracle.com>
Message-ID: <alpine.DEB.2.02.1208291925550.15568@kaball.uk.xensource.com>
References: <5020C24A.3060604@oracle.com>
	<5020F003020000780009322C@nat28.tlf.novell.com>
	<502235E8.9040309@oracle.com>
	<50229B840200007800093A73@nat28.tlf.novell.com>
	<5023860E.7080908@oracle.com>
	<5023AE960200007800093DE8@nat28.tlf.novell.com>
	<502490A7.7020603@oracle.com>
	<502535280200007800094322@nat28.tlf.novell.com>
	<5028B3AB.7060705@oracle.com>
	<5028E53202000078000946B1@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1208131200480.21096@kaball.uk.xensource.com>
	<503DA678.4040801@oracle.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Satish Kantheti <satish.kantheti@oracle.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	Feng Jin <joe.jin@oracle.com>, xen-devel <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] kernel bootup slow issue on ovm3.1.1
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 29 Aug 2012, zhenzhong.duan wrote:
> 
> On 2012-08-13 19:08, Stefano Stabellini wrote:
> > On Mon, 13 Aug 2012, Jan Beulich wrote:
> >
> > I tried to use PV spinlocks on PV on HVM guests but I found that:
> >
> > commit f10cd522c5fbfec9ae3cc01967868c9c2401ed23
> > Author: Stefano Stabellini<stefano.stabellini@eu.citrix.com>
> > Date:   Tue Sep 6 17:41:47 2011 +0100
> >
> >      xen: disable PV spinlocks on HVM
> >
> >      PV spinlocks cannot possibly work with the current code because they are
> >      enabled after pvops patching has already been done, and because PV
> >      spinlocks use a different data structure than native spinlocks so we
> >      cannot switch between them dynamically. A spinlock that has been taken
> >      once by the native code (__ticket_spin_lock) cannot be taken by
> >      __xen_spin_lock even after it has been released.
> >
> >      Reported-and-Tested-by: Stefan Bader<stefan.bader@canonical.com>
> >      Signed-off-by: Stefano Stabellini<stefano.stabellini@eu.citrix.com>
> >      Signed-off-by: Konrad Rzeszutek Wilk<konrad.wilk@oracle.com>
> >
> >
> > at that time Jeremy was finishing off his PV ticket locks series, which
> > has the nice side effect of making it much easier to implement PV on HVM
> > spin locks, so I decided to wait and append the following patch
> > to his series:
> >
> > http://marc.info/?l=xen-devel&m=131846828430409&w=2
> >
> > that clearly never went upstream.
> Hi Stefano,
> Is there a schedule for those patches to be merged upstream?

They are currently being handled by the KVM guys:

https://lkml.org/lkml/2012/5/2/119
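The data-structure incompatibility described in the quoted commit message can be sketched in a few lines of C. These layouts are hypothetical simplifications, not the real kernel structures: a native ticket lock and an old-style byte lock share the same lock word, and a word that has been through one native lock/unlock cycle is never again "free" as far as the byte-lock view is concerned:

```c
#include <stdint.h>

/* Hypothetical lock-word layouts: the native ticket view and the old
 * byte-lock view occupy the same storage. */
typedef union {
    struct { uint8_t head, tail; } ticket;  /* __ticket_spin_lock-style view */
    uint8_t byte;                           /* __xen_spin_lock-style view: 0 = free */
} lock_t;

static void ticket_lock(lock_t *l)   { l->ticket.tail++; /* take a ticket */ }
static void ticket_unlock(lock_t *l) { l->ticket.head++; /* serve the next ticket */ }

/* The byte-lock view believes the lock is free only when its byte is 0. */
static int byte_view_is_free(const lock_t *l) { return l->byte == 0; }

static int fresh_lock_is_free(void)
{
    lock_t l = { .ticket = { 0, 0 } };
    return byte_view_is_free(&l);
}

/* After one native lock/unlock cycle the word holds head == tail == 1:
 * free in the ticket view, permanently "held" in the byte view. */
static int free_after_native_use(void)
{
    lock_t l = { .ticket = { 0, 0 } };
    ticket_lock(&l);
    ticket_unlock(&l);
    return byte_view_is_free(&l);
}
```

This is why the implementations cannot be switched dynamically after pvops patching: the two views disagree about what an unlocked word looks like.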
 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 29 18:41:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Aug 2012 18:41:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6nC6-0001zF-Oh; Wed, 29 Aug 2012 18:40:50 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1T6nC6-0001zA-0g
	for xen-devel@lists.xen.org; Wed, 29 Aug 2012 18:40:50 +0000
Received: from [85.158.139.83:6445] by server-4.bemta-5.messagelabs.com id
	C4/03-23042-1326E305; Wed, 29 Aug 2012 18:40:49 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-10.tower-182.messagelabs.com!1346265647!27841920!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzQ5MjE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4835 invoked from network); 29 Aug 2012 18:40:48 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Aug 2012 18:40:48 -0000
X-IronPort-AV: E=Sophos;i="4.80,335,1344211200"; d="scan'208";a="206581963"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	29 Aug 2012 18:40:26 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Wed, 29 Aug 2012 14:40:25 -0400
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1T6nBh-0003Rh-NA;
	Wed, 29 Aug 2012 19:40:25 +0100
MIME-Version: 1.0
X-Mercurial-Node: 3cacac4d8a4dc7fc110717d5d61dae7a4b2610e1
Message-ID: <3cacac4d8a4dc7fc1107.1346265625@andrewcoop.uk.xensource.com>
User-Agent: Mercurial-patchbomb/2.3
Date: Wed, 29 Aug 2012 19:40:25 +0100
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: xen-devel@lists.xen.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: [Xen-devel] [PATCH V2] x86/i8259: Handle bogus spurious interrupts
	more quietly
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

c/s 25336:edd7c7ad1ad2 introduced the concept of a bogus vector, for irqs
delivered through the i8259 PIC after the IO-APICs have been set up.

However, if spurious PIC vectors are received, many "No irq handler for vector"
log messages appear on the console.

This patch extends the bogus vector logic to detect spurious PIC vectors and
simply ignore them.  _mask_and_ack_8259A_irq() has been modified to return a
boolean indicating whether the irq is real, and in the case of a spurious
vector the error in do_IRQ() is not printed.

One complication is that _mask_and_ack_8259A_irq() can now be called whatever
the ack mode is, so it has been altered to work out whether it should EOI the
irq or not.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

--
Changes since v1
    * Modify _mask_and_ack_8259A_irq() rather than basically duplicating it

diff -r 1126b3079bef -r 3cacac4d8a4d xen/arch/x86/i8259.c
--- a/xen/arch/x86/i8259.c
+++ b/xen/arch/x86/i8259.c
@@ -85,10 +85,12 @@ BUILD_16_IRQS(0xc) BUILD_16_IRQS(0xd) BU
 
 static DEFINE_SPINLOCK(i8259A_lock);
 
-static void _mask_and_ack_8259A_irq(unsigned int irq);
+static bool_t _mask_and_ack_8259A_irq(unsigned int irq);
 
-void (*__read_mostly bogus_8259A_irq)(unsigned int irq) =
-    _mask_and_ack_8259A_irq;
+bool_t bogus_8259A_irq(unsigned int irq)
+{
+    return _mask_and_ack_8259A_irq(irq);
+}
 
 static void mask_and_ack_8259A_irq(struct irq_desc *desc)
 {
@@ -239,12 +241,15 @@ static inline int i8259A_irq_real(unsign
  * Careful! The 8259A is a fragile beast, it pretty
  * much _has_ to be done exactly like this (mask it
  * first, _then_ send the EOI, and the order of EOI
- * to the two 8259s is important!
+ * to the two 8259s is important!  Return a boolean
+ * indicating whether the irq was genuine or spurious.
  */
-static void _mask_and_ack_8259A_irq(unsigned int irq)
+static bool_t _mask_and_ack_8259A_irq(unsigned int irq)
 {
     unsigned int irqmask = 1 << irq;
     unsigned long flags;
+    bool_t real_irq = 1; /* Assume real unless spurious */
+    bool_t need_eoi = i8259A_irq_type.ack != disable_8259A_irq;
 
     spin_lock_irqsave(&i8259A_lock, flags);
     /*
@@ -270,15 +275,19 @@ static void _mask_and_ack_8259A_irq(unsi
     if (irq & 8) {
         inb(0xA1);              /* DUMMY - (do we need this?) */
         outb(cached_A1,0xA1);
-        outb(0x60 + (irq & 7), 0xA0);/* 'Specific EOI' to slave */
-        outb(0x62,0x20);        /* 'Specific EOI' to master-IRQ2 */
+        if ( need_eoi )
+        {
+            outb(0x60 + (irq & 7), 0xA0);/* 'Specific EOI' to slave */
+            outb(0x62,0x20);        /* 'Specific EOI' to master-IRQ2 */
+        }
     } else {
         inb(0x21);              /* DUMMY - (do we need this?) */
         outb(cached_21,0x21);
-        outb(0x60 + irq, 0x20);/* 'Specific EOI' to master */
+        if ( need_eoi )
+            outb(0x60 + irq, 0x20);/* 'Specific EOI' to master */
     }
     spin_unlock_irqrestore(&i8259A_lock, flags);
-    return;
+    return real_irq;
 
  spurious_8259A_irq:
     /*
@@ -293,6 +302,7 @@ static void _mask_and_ack_8259A_irq(unsi
 
     {
         static int spurious_irq_mask;
+        real_irq = 0;
         /*
          * At this point we can be sure the IRQ is spurious,
          * lets ACK and report it. [once per IRQ]
@@ -367,19 +377,13 @@ void __devinit init_8259A(int auto_eoi)
                                is to be investigated) */
 
     if (auto_eoi)
-    {
         /*
          * in AEOI mode we just have to mask the interrupt
          * when acking.
          */
         i8259A_irq_type.ack = disable_8259A_irq;
-        bogus_8259A_irq = _disable_8259A_irq;
-    }
     else
-    {
         i8259A_irq_type.ack = mask_and_ack_8259A_irq;
-        bogus_8259A_irq = _mask_and_ack_8259A_irq;
-    }
 
     udelay(100);            /* wait for 8259A to initialize */
 
diff -r 1126b3079bef -r 3cacac4d8a4d xen/arch/x86/irq.c
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -817,11 +817,11 @@ void do_IRQ(struct cpu_user_regs *regs)
                 ack_APIC_irq();
             else
                 kind = "";
-            if ( vector >= FIRST_LEGACY_VECTOR &&
-                 vector <= LAST_LEGACY_VECTOR )
-                bogus_8259A_irq(vector - FIRST_LEGACY_VECTOR);
-            printk("CPU%u: No irq handler for vector %02x (IRQ %d%s)\n",
-                   smp_processor_id(), vector, irq, kind);
+            if ( ! ( vector >= FIRST_LEGACY_VECTOR &&
+                     vector <= LAST_LEGACY_VECTOR &&
+                     bogus_8259A_irq(vector - FIRST_LEGACY_VECTOR) ) )
+                printk("CPU%u: No irq handler for vector %02x (IRQ %d%s)\n",
+                       smp_processor_id(), vector, irq, kind);
             TRACE_1D(TRC_HW_IRQ_UNMAPPED_VECTOR, vector);
         }
         goto out_no_unlock;
diff -r 1126b3079bef -r 3cacac4d8a4d xen/include/asm-x86/irq.h
--- a/xen/include/asm-x86/irq.h
+++ b/xen/include/asm-x86/irq.h
@@ -104,7 +104,7 @@ void mask_8259A(void);
 void unmask_8259A(void);
 void init_8259A(int aeoi);
 void make_8259A_irq(unsigned int irq);
-extern void (*bogus_8259A_irq)(unsigned int irq);
+bool_t bogus_8259A_irq(unsigned int irq);
 int i8259A_suspend(void);
 int i8259A_resume(void);
 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
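The gating the patch above adds to do_IRQ() can be condensed into a small sketch. The constants and the stub's claim rule are hypothetical stand-ins (the real bogus_8259A_irq() masks and acks the PIC and reports whether the irq was genuine); the sketch only mirrors the condition under which the "No irq handler" message is suppressed:

```c
/* Hypothetical vector-range constants mirroring FIRST/LAST_LEGACY_VECTOR. */
enum { FIRST_LEGACY_VECTOR = 0x20, LAST_LEGACY_VECTOR = 0x2f };

/* Stand-in for the patch's bogus_8259A_irq(): pretend the PIC code
 * claims IRQ 7 (returns nonzero) and disowns everything else. */
static int bogus_8259A_irq(unsigned int irq)
{
    return irq == 7;
}

/* Mirrors the patched condition in do_IRQ(): the warning is suppressed
 * only when the vector is in the legacy range AND bogus_8259A_irq()
 * returns nonzero for the corresponding PIC irq. */
static int should_warn(unsigned int vector)
{
    return !(vector >= FIRST_LEGACY_VECTOR &&
             vector <= LAST_LEGACY_VECTOR &&
             bogus_8259A_irq(vector - FIRST_LEGACY_VECTOR));
}
```

Non-legacy vectors always warn; legacy vectors warn only when the 8259A code does not claim them, which is exactly the quieting behaviour the changelog describes.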

From xen-devel-bounces@lists.xen.org Wed Aug 29 20:54:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Aug 2012 20:54:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6pHJ-0002Yt-6q; Wed, 29 Aug 2012 20:54:21 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dan.magenheimer@oracle.com>) id 1T6pHI-0002Yo-6k
	for xen-devel@lists.xen.org; Wed, 29 Aug 2012 20:54:20 +0000
Received: from [85.158.143.35:25507] by server-3.bemta-4.messagelabs.com id
	B2/2D-08232-B718E305; Wed, 29 Aug 2012 20:54:19 +0000
X-Env-Sender: dan.magenheimer@oracle.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1346273657!15767199!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDcyNTk5OQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19376 invoked from network); 29 Aug 2012 20:54:18 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-6.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 29 Aug 2012 20:54:18 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7TKsB0b031233
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 29 Aug 2012 20:54:12 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7TKsA1C004680
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 29 Aug 2012 20:54:11 GMT
Received: from abhmt102.oracle.com (abhmt102.oracle.com [141.146.116.54])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7TKsAE1026580; Wed, 29 Aug 2012 15:54:10 -0500
MIME-Version: 1.0
Message-ID: <f9a8835e-95d8-4495-8c9d-4fa769913549@default>
Date: Wed, 29 Aug 2012 13:53:33 -0700 (PDT)
From: Dan Magenheimer <dan.magenheimer@oracle.com>
To: George Dunlap <George.Dunlap@eu.citrix.com>, xen-devel@lists.xen.org
References: <CAFLBxZaGmvSYgL4DcHcSwE_0shqKC0DRbf7ab=uhFrerPCxRig@mail.gmail.com>
In-Reply-To: <CAFLBxZaGmvSYgL4DcHcSwE_0shqKC0DRbf7ab=uhFrerPCxRig@mail.gmail.com>
X-Priority: 3
X-Mailer: Oracle Beehive Extensions for Outlook 2.0.1.7  (607090) [OL
	12.0.6661.5003 (x86)]
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Subject: Re: [Xen-devel] Xen 4.3 release planning proposal
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> From: George Dunlap [mailto:George.Dunlap@eu.citrix.com]
> Subject: [Xen-devel] Xen 4.3 release planning proposal
> 
> Hello everyone!  With the completion of our first few release candidates
> for 4.2, it's time to look forward and start planning for the 4.3
> release.  I've volunteered to step up and help coordinate the release
> for this cycle.
> 
> The 4.2 release cycle this time has been nearly a year and a half.
> One of the problems with having such a long release is that people who
> get in features early have to wait a long time for that feature to be
> in a published version; they then have to wait even longer for it to
> be part of a released distribution.  Historically the cycle has been
> around 9 months, but this has not been made explicit.  Many people
> (including myself) think that the 9 month release cycle was a good
> cadence that we should aim for.
> 
> So I propose that we move to a time-based release schedule.  Rather
> than aiming for a release date, I propose that we aim to do a "feature
> freeze" six months after the 4.2 release -- that would be around March
> 1, 2013.  That way we'll probably end up releasing in 9 months' time,
> around June 2013.  This is one of the things we can discuss at the Dev
> Meeting before the Xen Summit next week.  If you have other opinions,
> please let us know.

Hi George --

(Sorry if I missed relevant discussion on this... I'm way behind
on xen-devel.)

Just a thought...

Maybe it is time to move to match the well-known, highly-greased
Linux kernel release process?  This would include, for example, a short
merge window for new functionality and a xen-next tree for shaking out,
merging, and testing new functionality before the window opens.  As has
been pointed out, xen-unstable is, well, unstable for far too long.

It may not be necessary to aggressively match Linus' 8-9 week release
cycle or weekly rcN releases, but the core process is known to
work very well, is reasonably well documented, and will be familiar
to many in the open source community.

Dan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Subject: Re: [Xen-devel] Xen 4.3 release planning proposal
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> From: George Dunlap [mailto:George.Dunlap@eu.citrix.com]
> Subject: [Xen-devel] Xen 4.3 release planning proposal
> 
> Hello everyone!  With the completion of our first few release candidates
> for 4.2, it's time to look forward and start planning for the 4.3
> release.  I've volunteered to step up and help coordinate the release
> for this cycle.
> 
> The 4.2 release cycle this time has been nearly a year and a half.
> One of the problems with having such a long release is that people who
> get in features early have to wait a long time for that feature to be
> in a published version; they then have to wait even longer for it to
> be part of a released distribution.  Historically the cycle has been
> around 9 months, but this has not been made explicit.  Many people
> (including myself) think that the 9 month release cycle was a good
> cadence that we should aim for.
> 
> So I propose that we move to a time-based release schedule.  Rather
> than aiming for a release date, I propose that we aim to do a "feature
> freeze" six months after the 4.2 release -- that would be around March
> 1, 2013.  That way we'll probably end up releasing in 9 months' time,
> around June 2013.  This is one of the things we can discuss at the Dev
> Meeting before the Xen Summit next week.  If you have other opinions,
> please let us know.

Hi George --

(Sorry if I missed relevant discussion on this... I'm way behind
on xen-devel.)

Just a thought...

Maybe it is time to adopt the well-known, well-oiled Linux
kernel release process?  This would include, for example, a short
merge window for new functionality, and a xen-next tree for shaking
out, merging, and testing new functionality before the window opens.
As has been pointed out, xen-unstable is, well, unstable for far too long.

It may not be necessary to match Linus' 8-9 week release
cycle or weekly rcN releases exactly, but the core process is known to
work very well, is reasonably well documented, and will be familiar
to many in the open source community.

Dan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 29 21:10:22 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Aug 2012 21:10:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6pWN-0002mS-N5; Wed, 29 Aug 2012 21:09:55 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <msw@amazon.com>) id 1T6pWL-0002mN-VL
	for xen-devel@lists.xen.org; Wed, 29 Aug 2012 21:09:54 +0000
Received: from [85.158.138.51:22436] by server-10.bemta-3.messagelabs.com id
	06/5F-20518-1258E305; Wed, 29 Aug 2012 21:09:53 +0000
X-Env-Sender: msw@amazon.com
X-Msg-Ref: server-2.tower-174.messagelabs.com!1346274590!27569398!1
X-Originating-IP: [207.171.189.228]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA3LjE3MS4xODkuMjI4ID0+IDc5MTk2\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11010 invoked from network); 29 Aug 2012 21:09:52 -0000
Received: from smtp-fw-33001.amazon.com (HELO smtp-fw-33001.amazon.com)
	(207.171.189.228)
	by server-2.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Aug 2012 21:09:52 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=amazon.com; i=msw@amazon.com; q=dns/txt; s=rte02;
	t=1346274592; x=1377810592;
	h=mime-version:content-transfer-encoding:subject:
	message-id:date:from:to:cc;
	bh=gA+XBlxnupJyEL9jRagaUPzlVQLt43nAJYDt0ZkyVP8=;
	b=UVUUo3cqUdSBkd9ZUX/ucNsJMCodYxNPPeLBwuGWL3z4ivDvs6wmBOUB
	YnGemLU2FlmPTJFvV0Pyq9/l0lihIg==;
X-IronPort-AV: E=Sophos;i="4.80,336,1344211200"; d="scan'208";a="352781033"
Received: from smtp-in-9001.sea19.amazon.com ([10.186.144.32])
	by smtp-border-fw-out-33001.sea14.amazon.com with
	ESMTP/TLS/DHE-RSA-AES256-SHA; 29 Aug 2012 21:09:49 +0000
Received: from ex10-hub-9004.ant.amazon.com (ex10-hub-9004.ant.amazon.com
	[10.185.137.182])
	by smtp-in-9001.sea19.amazon.com (8.13.8/8.13.8) with ESMTP id
	q7TL9ngv012446
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=FAIL);
	Wed, 29 Aug 2012 21:09:49 GMT
Received: from u002268147cd4502c336d.ant.amazon.com (10.224.80.43) by
	ex10-hub-9004.ant.amazon.com (10.185.137.182) with Microsoft SMTP
	Server id 14.2.247.3; Wed, 29 Aug 2012 14:09:47 -0700
MIME-Version: 1.0
Message-ID: <patchbomb.1346274072@u002268147cd4502c336d.ant.amazon.com>
User-Agent: Mercurial-patchbomb/2.0.2
Date: Wed, 29 Aug 2012 14:01:12 -0700
From: Matt Wilson <msw@amazon.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH 0 of 3] improve checking for documentation tools
	and formatting
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch series is a follow-up to a change that I posted for the last
Xen Documentation Day, which used lynx to create plain text from
markdown files. I've added ./configure-time checks for all the tools
used in the docs/ tree. The user can persistently override any of
these tools at ./configure time by setting the appropriate environment
variable. The docs/ tree maintains the list of default tools, so
running ./configure is not a prerequisite for running "make -C docs".
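The optional-include arrangement can be demonstrated with a tiny
standalone Makefile pair (a sketch with hypothetical file contents;
only the MARKDOWN variable name is borrowed from the series):

```shell
# Throwaway demo of GNU make's `-include`: Docs.mk carries the default,
# and Tools.mk (written by ./configure in the real tree) overrides it
# when present -- but its absence is not an error.
demo=$(mktemp -d)
cd "$demo"
printf 'MARKDOWN ?= markdown\n' > Docs.mk
printf 'include Docs.mk\n-include Tools.mk\nshow:\n\t@echo $(MARKDOWN)\n' > Makefile
make -s show          # prints the default: markdown
printf 'MARKDOWN := /opt/bin/markdown\n' > Tools.mk
make -s show          # now prints the override: /opt/bin/markdown
```

Because the include is optional, a user who never runs ./configure
still gets the defaults from Docs.mk.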

I've switched from lynx to elinks for creating the text
documentation, as it produces better formatting than the other tools
I tried. Of course, the user can override the tool and its flags if
they'd like to use something else.

Here's a sample of ./configure output after this series is applied:

checking for ps2pdf... /usr/bin/ps2pdf
checking for dvips... no
configure: WARNING: dvips is not available so some documentation won't be built
checking for latex... no
configure: WARNING: latex is not available so some documentation won't be built
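For readers unfamiliar with autoconf, the per-tool check-and-warn
behavior shown above reduces, roughly, to the following shell logic (a
sketch of what the generated configure script ends up doing; the
function name is mine, not something from the patch):

```shell
#!/bin/sh
# Sketch of the per-tool probe: report the tool's path if found on
# PATH, otherwise print "no" plus a warning on stderr (mirroring the
# AX_DOCS_TOOL_PROG macro introduced later in this series).
check_docs_tool() {
    tool=$1
    path=$(command -v "$tool" || true)
    if [ -n "$path" ] && [ -x "$path" ]; then
        echo "checking for $tool... $path"
    else
        echo "checking for $tool... no"
        echo "configure: WARNING: $tool is not available so some documentation won't be built" >&2
    fi
}

for t in ps2pdf dvips latex markdown; do
    check_docs_tool "$t"
done
```

Missing tools only cost a warning; the build proceeds and skips the
affected documents.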

Please let me know if there are other concerns that need to be
addressed.

Matt

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 29 21:10:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Aug 2012 21:10:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6pWT-0002mk-36; Wed, 29 Aug 2012 21:10:01 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <msw@amazon.com>) id 1T6pWR-0002me-NV
	for xen-devel@lists.xen.org; Wed, 29 Aug 2012 21:10:00 +0000
Received: from [85.158.143.35:49803] by server-1.bemta-4.messagelabs.com id
	BA/CB-12504-6258E305; Wed, 29 Aug 2012 21:09:58 +0000
X-Env-Sender: msw@amazon.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1346274597!12500975!1
X-Originating-IP: [72.21.198.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNzIuMjEuMTk4LjI1ID0+IDE3MDAxMA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21001 invoked from network); 29 Aug 2012 21:09:58 -0000
Received: from smtp-fw-4101.amazon.com (HELO smtp-fw-4101.amazon.com)
	(72.21.198.25)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Aug 2012 21:09:58 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=amazon.com; i=msw@amazon.com; q=dns/txt; s=rte02;
	t=1346274598; x=1377810598;
	h=mime-version:content-transfer-encoding:subject:
	message-id:in-reply-to:references:date:from:to:cc;
	bh=VLMfPuD71c+0psZu+jwvh8yat9SIZenvZ/tEZWZAfU0=;
	b=haLCZ+S/c9BYvnWOJSgnh7vhin+dCgASWTAjDMsyk5Fbjgpead/x8u6Z
	pwMATfCbyar9xCgdJfmp1Com9ZXE4Q==;
X-IronPort-AV: E=Sophos;i="4.80,336,1344211200"; d="scan'208";a="788164489"
Received: from smtp-in-0102.sea3.amazon.com ([10.224.19.46])
	by smtp-border-fw-out-4101.iad4.amazon.com with
	ESMTP/TLS/DHE-RSA-AES256-SHA; 29 Aug 2012 21:09:55 +0000
Received: from ex10-hub-9004.ant.amazon.com (ex10-hub-9004.ant.amazon.com
	[10.185.137.182])
	by smtp-in-0102.sea3.amazon.com (8.13.8/8.13.8) with ESMTP id
	q7TL9tdt027800
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=FAIL);
	Wed, 29 Aug 2012 21:09:55 GMT
Received: from u002268147cd4502c336d.ant.amazon.com (10.224.80.43) by
	ex10-hub-9004.ant.amazon.com (10.185.137.182) with Microsoft SMTP
	Server id 14.2.247.3; Wed, 29 Aug 2012 14:09:48 -0700
MIME-Version: 1.0
X-Mercurial-Node: 674b694814c8fb4f3c4b5c549d69793f32e12a6d
Message-ID: <674b694814c8fb4f3c4b.1346274073@u002268147cd4502c336d.ant.amazon.com>
In-Reply-To: <patchbomb.1346274072@u002268147cd4502c336d.ant.amazon.com>
References: <patchbomb.1346274072@u002268147cd4502c336d.ant.amazon.com>
User-Agent: Mercurial-patchbomb/2.0.2
Date: Wed, 29 Aug 2012 14:01:13 -0700
From: Matt Wilson <msw@amazon.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH 1 of 3] tools: check for documentation
 generation tools at configure time
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

It is sometimes hard to discover all of the optional documentation
generation tools that must be installed to build every piece of Xen
documentation. By checking for these tools at ./configure time and
displaying a warning when one is missing, Xen packagers will more
easily learn about new optional build dependencies, such as markdown,
when they are introduced.

Signed-off-by: Matt Wilson <msw@amazon.com>

diff -r d7e4efa17fb0 -r 674b694814c8 README
--- a/README	Tue Aug 28 15:35:08 2012 -0700
+++ b/README	Wed Aug 29 11:07:52 2012 -0700
@@ -28,8 +28,10 @@
 your system. For full documentation, see the Xen User Manual. If this
 is a pre-built release then you can find the manual at:
  dist/install/usr/share/doc/xen/pdf/user.pdf
-If you have a source release, then 'make -C docs' will build the
-manual at docs/pdf/user.pdf.
+If you have a source release and the required documentation generation
+tools, then 'make -C docs' will build the manual at docs/pdf/user.pdf.
+Running ./configure will check for the full suite of documentation
+tools and will display a warning if missing tools are detected.
 
 Quick-Start Guide
 =================
@@ -59,7 +61,6 @@
     * GNU gettext
     * 16-bit x86 assembler, loader and compiler (dev86 rpm or bin86 & bcc debs)
     * ACPI ASL compiler (iasl)
-    * markdown
 
 In addition to the above there are a number of optional build
 prerequisites. Omitting these will cause the related features to be
@@ -67,6 +68,7 @@
     * Development install of Ocaml (e.g. ocaml-nox and
       ocaml-findlib). Required to build ocaml components which
       includes the alternative ocaml xenstored.
+    * markdown
 
 Second, you need to acquire a suitable kernel for use in domain 0. If
 possible you should use a kernel provided by your OS distributor. If
diff -r d7e4efa17fb0 -r 674b694814c8 config/Tools.mk.in
--- a/config/Tools.mk.in	Tue Aug 28 15:35:08 2012 -0700
+++ b/config/Tools.mk.in	Wed Aug 29 11:07:52 2012 -0700
@@ -22,6 +22,17 @@
 LD86                := @LD86@
 BCC                 := @BCC@
 IASL                := @IASL@
+PS2PDF              := @PS2PDF@
+DVIPS               := @DVIPS@
+LATEX               := @LATEX@
+FIG2DEV             := @FIG2DEV@
+LATEX2HTML          := @LATEX2HTML@
+DOXYGEN             := @DOXYGEN@
+POD2MAN             := @POD2MAN@
+POD2TEXT            := @POD2TEXT@
+DOT                 := @DOT@
+NEATO               := @NEATO@
+MARKDOWN            := @MARKDOWN@
 
 # Extra folder for libs/includes
 PREPEND_INCLUDES    := @PREPEND_INCLUDES@
diff -r d7e4efa17fb0 -r 674b694814c8 docs/Makefile
--- a/docs/Makefile	Tue Aug 28 15:35:08 2012 -0700
+++ b/docs/Makefile	Wed Aug 29 11:07:52 2012 -0700
@@ -3,6 +3,11 @@
 XEN_ROOT=$(CURDIR)/..
 include $(XEN_ROOT)/Config.mk
 include $(XEN_ROOT)/docs/Docs.mk
+# The default documentation tools specified in Docs.mk can be
+# persistently overridden by the user via ./configure, but running
+# ./configure is not required to build the docs tree. Thus Tools.mk is
+# optionally included.
+-include $(XEN_ROOT)/config/Tools.mk
 
 VERSION		= xen-unstable
 
diff -r d7e4efa17fb0 -r 674b694814c8 tools/configure.ac
--- a/tools/configure.ac	Tue Aug 28 15:35:08 2012 -0700
+++ b/tools/configure.ac	Wed Aug 29 11:07:52 2012 -0700
@@ -34,6 +34,7 @@
 m4_include([m4/curses.m4])
 m4_include([m4/pthread.m4])
 m4_include([m4/ptyfuncs.m4])
+m4_include([m4/docs_tool.m4])
 
 # Enable/disable options
 AX_ARG_DEFAULT_DISABLE([githttp], [Download GIT repositories via HTTP])
@@ -80,6 +81,17 @@
 AC_PATH_PROG([BISON], [bison])
 AC_PATH_PROG([FLEX], [flex])
 AX_PATH_PROG_OR_FAIL([PERL], [perl])
+AX_DOCS_TOOL_PROG([PS2PDF], [ps2pdf])
+AX_DOCS_TOOL_PROG([DVIPS], [dvips])
+AX_DOCS_TOOL_PROG([LATEX], [latex])
+AX_DOCS_TOOL_PROG([FIG2DEV], [fig2dev])
+AX_DOCS_TOOL_PROG([LATEX2HTML], [latex2html])
+AX_DOCS_TOOL_PROG([DOXYGEN], [doxygen])
+AX_DOCS_TOOL_PROG([POD2MAN], [pod2man])
+AX_DOCS_TOOL_PROG([POD2TEXT], [pod2text])
+AX_DOCS_TOOL_PROG([DOT], [dot])
+AX_DOCS_TOOL_PROG([NEATO], [neato])
+AX_DOCS_TOOL_PROG([MARKDOWN], [markdown])
 AS_IF([test "x$xapi" = "xy"], [
     AX_PATH_PROG_OR_FAIL([CURL], [curl-config])
     AX_PATH_PROG_OR_FAIL([XML], [xml2-config])
diff -r d7e4efa17fb0 -r 674b694814c8 tools/m4/docs_tool.m4
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/tools/m4/docs_tool.m4	Wed Aug 29 11:07:52 2012 -0700
@@ -0,0 +1,8 @@
+AC_DEFUN([AX_DOCS_TOOL_PROG], [
+dnl
+    AC_ARG_VAR([$1], [Path to $2 tool])
+    AC_PATH_PROG([$1], [$2])
+    AS_IF([! test -x "$ac_cv_path_$1"], [
+        AC_MSG_WARN([$2 is not available so some documentation won't be built])
+    ])
+])

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 29 21:10:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Aug 2012 21:10:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6pWZ-0002nR-Fr; Wed, 29 Aug 2012 21:10:07 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <msw@amazon.com>) id 1T6pWY-0002nF-CI
	for xen-devel@lists.xen.org; Wed, 29 Aug 2012 21:10:06 +0000
Received: from [85.158.139.83:63950] by server-9.bemta-5.messagelabs.com id
	C6/D0-20529-D258E305; Wed, 29 Aug 2012 21:10:05 +0000
X-Env-Sender: msw@amazon.com
X-Msg-Ref: server-11.tower-182.messagelabs.com!1346274602!20350728!1
X-Originating-IP: [207.171.184.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA3LjE3MS4xODQuMjUgPT4gMzEzNTEy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25491 invoked from network); 29 Aug 2012 21:10:04 -0000
Received: from smtp-fw-9101.amazon.com (HELO smtp-fw-9101.amazon.com)
	(207.171.184.25)
	by server-11.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Aug 2012 21:10:04 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=amazon.com; i=msw@amazon.com; q=dns/txt; s=rte02;
	t=1346274604; x=1377810604;
	h=mime-version:content-transfer-encoding:subject:
	message-id:in-reply-to:references:date:from:to:cc;
	bh=R/ntF5lmmnQ5/vpcqoA5ZbtfVH1ipDbOydeSwjSRKCQ=;
	b=cWJK8iLYeZLTNiA0reB4EssvGE9OJRKoMShMAHj+RBs2itK7hKNSrekn
	4nrtO5DTXghY90ONuldG0XUqelvJrw==;
X-IronPort-AV: E=Sophos;i="4.80,336,1344211200"; d="scan'208";a="1018321309"
Received: from smtp-in-9002.sea19.amazon.com ([10.186.174.20])
	by smtp-border-fw-out-9101.sea19.amazon.com with
	ESMTP/TLS/DHE-RSA-AES256-SHA; 29 Aug 2012 21:10:00 +0000
Received: from ex10-hub-9004.ant.amazon.com (ex10-hub-9004.ant.amazon.com
	[10.185.137.182])
	by smtp-in-9002.sea19.amazon.com (8.13.8/8.13.8) with ESMTP id
	q7TLA0bF028920
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=FAIL);
	Wed, 29 Aug 2012 21:10:00 GMT
Received: from u002268147cd4502c336d.ant.amazon.com (10.224.80.43) by
	ex10-hub-9004.ant.amazon.com (10.185.137.182) with Microsoft SMTP
	Server id 14.2.247.3; Wed, 29 Aug 2012 14:09:54 -0700
MIME-Version: 1.0
X-Mercurial-Node: 9a308e4fdc19336ce3ca879fc8032188e8b0e8cb
Message-ID: <9a308e4fdc19336ce3ca.1346274074@u002268147cd4502c336d.ant.amazon.com>
In-Reply-To: <patchbomb.1346274072@u002268147cd4502c336d.ant.amazon.com>
References: <patchbomb.1346274072@u002268147cd4502c336d.ant.amazon.com>
User-Agent: Mercurial-patchbomb/2.0.2
Date: Wed, 29 Aug 2012 14:01:14 -0700
From: Matt Wilson <msw@amazon.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH 2 of 3] docs: use elinks to format
 markdown-generated html to text
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Markdown, while easy to read and write, isn't the most consumable
format for users reading documentation on a terminal. This patch uses
elinks to format markdown-produced HTML into text files.
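
The logic of the new make rule can be sketched as standalone shell, with the same graceful fallback when either tool is missing. This is an illustration, not the patch itself; the variable names and defaults mirror the ones this series configures:

```shell
#!/bin/sh
# Sketch of the txt/%.txt rule: render markdown to HTML, then dump the
# HTML to plain text; fall back to a plain copy if either tool is missing.
MARKDOWN=${MARKDOWN:-markdown}
HTMLDUMP=${HTMLDUMP:-elinks}
HTMLDUMPFLAGS=${HTMLDUMPFLAGS:--dump}

md_to_txt() {
    src=$1
    dst=$2
    if command -v "$MARKDOWN" >/dev/null 2>&1 &&
       command -v "$HTMLDUMP" >/dev/null 2>&1; then
        # HTMLDUMPFLAGS is intentionally unquoted so it can carry
        # multiple flags (or none).
        "$MARKDOWN" "$src" | "$HTMLDUMP" $HTMLDUMPFLAGS > "$dst"
    else
        echo "markdown or html dump tool not installed; copying $src" >&2
        cp "$src" "$dst"
    fi
}
```

The same availability check the Makefile performs with `which` is done here with `command -v`, which is the POSIX-specified spelling.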

Changes since v3:
 * check for html to text dump tool in ./configure
 * switch to using elinks
 * allow command line flags to dump tool to be specified

Signed-off-by: Matt Wilson <msw@amazon.com>

diff -r 674b694814c8 -r 9a308e4fdc19 config/Tools.mk.in
--- a/config/Tools.mk.in	Wed Aug 29 11:07:52 2012 -0700
+++ b/config/Tools.mk.in	Wed Aug 29 11:50:44 2012 -0700
@@ -33,6 +33,8 @@
 DOT                 := @DOT@
 NEATO               := @NEATO@
 MARKDOWN            := @MARKDOWN@
+HTMLDUMP            := @HTMLDUMP@
+HTMLDUMPFLAGS       := @HTMLDUMPFLAGS@
 
 # Extra folder for libs/includes
 PREPEND_INCLUDES    := @PREPEND_INCLUDES@
diff -r 674b694814c8 -r 9a308e4fdc19 docs/Docs.mk
--- a/docs/Docs.mk	Wed Aug 29 11:07:52 2012 -0700
+++ b/docs/Docs.mk	Wed Aug 29 11:50:44 2012 -0700
@@ -10,3 +10,5 @@
 DOT		:= dot
 NEATO		:= neato
 MARKDOWN	:= markdown
+HTMLDUMP        := elinks
+HTMLDUMPFLAGS   := -dump
diff -r 674b694814c8 -r 9a308e4fdc19 docs/Makefile
--- a/docs/Makefile	Wed Aug 29 11:07:52 2012 -0700
+++ b/docs/Makefile	Wed Aug 29 11:50:44 2012 -0700
@@ -136,9 +136,17 @@
 	$(call move-if-changed,$@.tmp,$@)
 
 txt/%.txt: %.markdown
-	$(INSTALL_DIR) $(@D)
-	cp $< $@.tmp
-	$(call move-if-changed,$@.tmp,$@)
+	@$(INSTALL_DIR) $(@D)
+	set -e ; \
+	if which $(MARKDOWN) >/dev/null 2>&1 && \
+		which $(HTMLDUMP) >/dev/null 2>&1 ; then \
+		echo "Running markdown to generate $*.txt ... "; \
+		$(MARKDOWN) $< | $(HTMLDUMP) $(HTMLDUMPFLAGS) > $@.tmp ; \
+		$(call move-if-changed,$@.tmp,$@) ; \
+	else \
+		echo "markdown or html dump tool (like links) not installed; just copying $<."; \
+		cp $< $@; \
+	fi
 
 txt/man/%.1.txt: man/%.pod.1 Makefile
 	$(INSTALL_DIR) $(@D)
diff -r 674b694814c8 -r 9a308e4fdc19 tools/configure.ac
--- a/tools/configure.ac	Wed Aug 29 11:07:52 2012 -0700
+++ b/tools/configure.ac	Wed Aug 29 11:50:44 2012 -0700
@@ -92,6 +92,16 @@
 AX_DOCS_TOOL_PROG([DOT], [dot])
 AX_DOCS_TOOL_PROG([NEATO], [neato])
 AX_DOCS_TOOL_PROG([MARKDOWN], [markdown])
+
+AC_ARG_VAR([HTMLDUMP],
+           [Path to html-to-text generation tool (default: elinks)])
+AC_PATH_PROG([HTMLDUMP], [elinks])
+AS_IF([! test -x "$ac_cv_path_HTMLDUMP"], [
+    AC_MSG_WARN([$ac_cv_path_HTMLDUMP is not available so text documentation will be unformatted markdown])
+])
+AC_SUBST([HTMLDUMPFLAGS], ["-dump"])
+AC_ARG_VAR([HTMLDUMPFLAGS], [Flags passed to html to text translation tool])
+
 AS_IF([test "x$xapi" = "xy"], [
     AX_PATH_PROG_OR_FAIL([CURL], [curl-config])
     AX_PATH_PROG_OR_FAIL([XML], [xml2-config])

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 29 21:10:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Aug 2012 21:10:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6pWf-0002oN-14; Wed, 29 Aug 2012 21:10:13 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <msw@amazon.com>) id 1T6pWc-0002o1-Vh
	for xen-devel@lists.xen.org; Wed, 29 Aug 2012 21:10:11 +0000
Received: from [85.158.143.99:29157] by server-2.bemta-4.messagelabs.com id
	04/5C-21239-2358E305; Wed, 29 Aug 2012 21:10:10 +0000
X-Env-Sender: msw@amazon.com
X-Msg-Ref: server-7.tower-216.messagelabs.com!1346274608!24113225!1
X-Originating-IP: [72.21.196.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNzIuMjEuMTk2LjI1ID0+IDExOTAxOQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6122 invoked from network); 29 Aug 2012 21:10:09 -0000
Received: from smtp-fw-2101.amazon.com (HELO smtp-fw-2101.amazon.com)
	(72.21.196.25)
	by server-7.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Aug 2012 21:10:09 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=amazon.com; i=msw@amazon.com; q=dns/txt; s=rte02;
	t=1346274609; x=1377810609;
	h=mime-version:content-transfer-encoding:subject:
	message-id:in-reply-to:references:date:from:to:cc;
	bh=oUmh/TT5V8a+WS0jUtBImaETHhMUqYRTCB8LVcQ6o0g=;
	b=EJax9B8b3Yd+qU60smPstupHeRCGiwdw1/7iP7Np/xEqZmNOZaE34UL3
	ZkVM7LUZ0TVCW8ivi+23P9xrWThd4g==;
X-IronPort-AV: E=Sophos;i="4.80,336,1344211200"; d="scan'208";a="431533786"
Received: from smtp-in-1104.vdc.amazon.com ([10.140.10.25])
	by smtp-border-fw-out-2101.iad2.amazon.com with
	ESMTP/TLS/DHE-RSA-AES256-SHA; 29 Aug 2012 21:10:07 +0000
Received: from ex10-hub-9004.ant.amazon.com (ex10-hub-9004.ant.amazon.com
	[10.185.137.182])
	by smtp-in-1104.vdc.amazon.com (8.13.8/8.13.8) with ESMTP id
	q7TLA6Dn005307
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=FAIL);
	Wed, 29 Aug 2012 21:10:06 GMT
Received: from u002268147cd4502c336d.ant.amazon.com (10.224.80.43) by
	ex10-hub-9004.ant.amazon.com (10.185.137.182) with Microsoft SMTP
	Server id 14.2.247.3; Wed, 29 Aug 2012 14:09:59 -0700
MIME-Version: 1.0
X-Mercurial-Node: f5a57d912d9f57f19472072fa9d6049e932352d8
Message-ID: <f5a57d912d9f57f19472.1346274075@u002268147cd4502c336d.ant.amazon.com>
In-Reply-To: <patchbomb.1346274072@u002268147cd4502c336d.ant.amazon.com>
References: <patchbomb.1346274072@u002268147cd4502c336d.ant.amazon.com>
User-Agent: Mercurial-patchbomb/2.0.2
Date: Wed, 29 Aug 2012 14:01:15 -0700
From: Matt Wilson <msw@amazon.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH 3 of 3] tools/docs: allow markdown_py to be used
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

An alternative Python markdown implementation is available on some
systems as markdown_py, so look for that path as well.
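
For reference, AC_PATH_PROGS tries each candidate name in order and keeps the first one found on PATH. Its search behaves roughly like the following sketch (an illustration, not autoconf's actual implementation):

```shell
#!/bin/sh
# Roughly what AC_PATH_PROGS([MARKDOWN], [markdown markdown_py]) does:
# walk the candidate list and report the full path of the first program
# found on PATH; fail if none is found.
first_prog() {
    for cand in "$@"; do
        path=$(command -v "$cand" 2>/dev/null) && {
            printf '%s\n' "$path"
            return 0
        }
    done
    return 1
}
```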

Signed-off-by: Matt Wilson <msw@amazon.com>

diff -r 9a308e4fdc19 -r f5a57d912d9f tools/configure.ac
--- a/tools/configure.ac	Wed Aug 29 11:50:44 2012 -0700
+++ b/tools/configure.ac	Wed Aug 29 11:58:30 2012 -0700
@@ -91,7 +91,7 @@
 AX_DOCS_TOOL_PROG([POD2TEXT], [pod2text])
 AX_DOCS_TOOL_PROG([DOT], [dot])
 AX_DOCS_TOOL_PROG([NEATO], [neato])
-AX_DOCS_TOOL_PROG([MARKDOWN], [markdown])
+AX_DOCS_TOOL_PROGS([MARKDOWN], [markdown markdown_py])
 
 AC_ARG_VAR([HTMLDUMP],
            [Path to html-to-text generation tool (default: elinks)])
diff -r 9a308e4fdc19 -r f5a57d912d9f tools/m4/docs_tool.m4
--- a/tools/m4/docs_tool.m4	Wed Aug 29 11:50:44 2012 -0700
+++ b/tools/m4/docs_tool.m4	Wed Aug 29 11:58:30 2012 -0700
@@ -6,3 +6,12 @@
         AC_MSG_WARN([$2 is not available so some documentation won't be built])
     ])
 ])
+
+AC_DEFUN([AX_DOCS_TOOL_PROGS], [
+dnl
+    AC_ARG_VAR([$1], [Path to $2 tool])
+    AC_PATH_PROGS([$1], [$2])
+    AS_IF([! test -x "$ac_cv_path_$1"], [
+        AC_MSG_WARN([$2 is not available so some documentation won't be built])
+    ])
+])

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 29 21:35:42 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Aug 2012 21:35:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6pv0-0003Qu-8x; Wed, 29 Aug 2012 21:35:22 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1T6puz-0003Qp-98
	for Xen-devel@lists.xensource.com; Wed, 29 Aug 2012 21:35:21 +0000
Received: from [85.158.139.83:45045] by server-1.bemta-5.messagelabs.com id
	13/76-32692-81B8E305; Wed, 29 Aug 2012 21:35:20 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-3.tower-182.messagelabs.com!1346276118!27575233!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNzE4Njk5\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5463 invoked from network); 29 Aug 2012 21:35:19 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-3.tower-182.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 29 Aug 2012 21:35:19 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7TLZEUY013102
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 29 Aug 2012 21:35:15 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7TLZDu7002545
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 29 Aug 2012 21:35:14 GMT
Received: from abhmt112.oracle.com (abhmt112.oracle.com [141.146.116.64])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7TLZDTY013069; Wed, 29 Aug 2012 16:35:13 -0500
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 29 Aug 2012 14:35:13 -0700
Date: Wed, 29 Aug 2012 14:35:12 -0700
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	msw@amazon.com
Message-ID: <20120829143512.7c579fb1@mantra.us.oracle.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.7.6 (GTK+ 2.18.9; x86_64-redhat-linux-gnu)
Mime-Version: 1.0
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Ben Guthro <ben@guthro.net>
Subject: [Xen-devel] xen debugger (kdb/xdb/hdb) patch for c/s 25467
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Guys,

Thanks for the interest in the Xen hypervisor debugger, previously
known as kdb. By the way, I'm going to rename it to xdb (xen debugger)
or hdb (hypervisor debugger); the name KDB confuses people with the
Linux kdb debugger, and I often get emails from people who think they
also need to apply the Linux kdb patch...

Anyway, attached is a patch with the stray debug code I accidentally
left in the previous posting removed. It should apply cleanly to
c/s 25467.

JFYI, http://xenbits.xen.org/ext/debuggers.hg/ has gotten outdated;
I wasn't sure anyone was even looking at it. But it looks like there
is much interest, so I will just submit the patch to Keir for his
feedback on merging it into Xen.

Please voice your opinion here. Good seeing you all at the summit :).

Thanks,
Mukesh



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 29 22:23:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Aug 2012 22:23:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6qfU-0003kf-G6; Wed, 29 Aug 2012 22:23:24 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dan.magenheimer@oracle.com>) id 1T6qfS-0003ka-LR
	for xen-devel@lists.xen.org; Wed, 29 Aug 2012 22:23:23 +0000
Received: from [85.158.143.99:17092] by server-1.bemta-4.messagelabs.com id
	D0/A7-12504-9569E305; Wed, 29 Aug 2012 22:23:21 +0000
X-Env-Sender: dan.magenheimer@oracle.com
X-Msg-Ref: server-13.tower-216.messagelabs.com!1346278999!26588161!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDcyNTk5OQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16936 invoked from network); 29 Aug 2012 22:23:21 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-13.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 29 Aug 2012 22:23:21 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7TMNHHC008800
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK)
	for <xen-devel@lists.xen.org>; Wed, 29 Aug 2012 22:23:18 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7TMNHTq005174
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO)
	for <xen-devel@lists.xen.org>; Wed, 29 Aug 2012 22:23:17 GMT
Received: from abhmt102.oracle.com (abhmt102.oracle.com [141.146.116.54])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7TMNHEp016631
	for <xen-devel@lists.xen.org>; Wed, 29 Aug 2012 17:23:17 -0500
MIME-Version: 1.0
Message-ID: <76048170-772b-4365-87d2-00a3bf4aa5f8@default>
Date: Wed, 29 Aug 2012 15:22:40 -0700 (PDT)
From: Dan Magenheimer <dan.magenheimer@oracle.com>
To: xen-devel@lists.xen.org
X-Priority: 3
X-Mailer: Oracle Beehive Extensions for Outlook 2.0.1.7  (607090) [OL
	12.0.6661.5003 (x86)]
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Konrad Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] Xen 4.2.0 won't boot with FC17 dom0 on Dell Optiplex
 790, boots fine with 4.1.3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Has anybody else seen this: an FC17 dom0 that boots fine under Xen 4.1.3
crashes early in boot with Xen 4.2.0-rc4?

The hardware is a new Dell Optiplex 790 which I've noticed has a couple
of idiosyncrasies when booting recent upstream versions of Linux:
1) reboot=pci is required or reboot hangs
2) reducing visible memory via memmap= on a Linux command line is
   fairly difficult (requires a very complex sequence of Linux boot
   parameters, presumably due to a weird e820 RAM map in this hardware)

These idiosyncrasies may or may not be related to the dom0 crash,
just mentioning them in case they suggest an easy workaround.
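For anyone trying the memmap= route, the usual shape is one mem=/memmap= entry per region to cap or hide; a rough sketch with made-up addresses (not taken from this box's e820 map) follows. Note that in a GRUB 2 grub.cfg the '$' in memmap=size$start must be escaped, or GRUB eats it as a variable reference.

```shell
# Hypothetical GRUB 2 kernel line -- addresses and paths are illustrative only.
# mem= caps usable RAM; memmap=SIZE\$START marks a region as reserved.
linux /boot/vmlinuz-3.3.4-5.fc17.x86_64 root=/dev/sda2 ro \
    reboot=pci mem=4G memmap=2M\$0x20000000 memmap=2M\$0x40000000
```

When booting the same kernel as a dom0, dom0 memory is instead capped with dom0_mem= on the Xen command line, as in the log below.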

The stock FC17 Xen booted; then I built/booted xen-4.1-testing.hg
and it was fine.  Then I built/booted 4.2.0-rc4 and saw the dom0 crash.
The FC17 kernel/dom0 rev is 3.3.4-5.fc17.x86_64.  I can still reboot the
box to 4.1.3 (but can't do anything there due to, at least, a 4.1/4.2
libxenlight.so version difference... I'll have to reinstall the 4.1
tools, I guess).

I think some Xen memmap issues have been fixed in recent
upstream Linux, so I will try building/booting linux-3.5 while
everyone flies home from Xen Summit ;-).  But this seems
unpromising since Xen 4.1.3 works fine.

Tombstone attached:

[first, tail end of previous boot with dom0 crash, then full log with dom0 crash]

(XEN) elf_load_binary: phdr 3 at 0xffffffff81aed000 -> 0xffffffff81bd9000
(XEN) Scrubbing Free RAM: ...........................................................................done.
(XEN) Initial low memory virq threshold set at 0x4000 pages.
(XEN) Std. Loglevel: All
(XEN) Guest Loglevel: All
(XEN) Xen is relinquishing VGA console.
(XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen)
(XEN) Freed 260kB init memory.
mapping kernel into physical memory
Xen: setup ISA identity maps
about to get started...
(XEN) Domain 0 crashed: rebooting machine in 5 seconds.
(XEN) Resetting with ACPI MEMORY or I/O RESET_REG.
 __  __            _  _    ____    ___              _  _
 \ \/ /___ _ __   | || |  |___ \  / _ \    _ __ ___| || |     _ __  _ __ ___
  \  // _ \ '_ \  | || |_   __) || | | |__| '__/ __| || |_ __| '_ \| '__/ _ \
  /  \  __/ | | | |__   _| / __/ | |_| |__| | | (__|__   _|__| |_) | | |  __/
 /_/\_\___|_| |_|    |_|(_)_____(_)___/   |_|  \___|  |_|    | .__/|_|  \___|
                                                             |_|
(XEN) Xen version 4.2.0-rc4-pre (root@vpn.oracle.com) (gcc (GCC) 4.7.0 20120507
(Red Hat 4.7.0-5)) Wed Aug 29 15:27:44 MDT 2012
(XEN) Latest ChangeSet: Tue Aug 28 22:40:45 2012 +0100 25786:a0b5f8102a00
(XEN) Bootloader: GRUB 2.00~beta4
(XEN) Command line: placeholder com1=115200,8n1 dom0_mem=512M console=vga,com1 loglvl=all guest_loglvl=all cpuidle=off
(XEN) Video information:
(XEN)  VGA is text mode 80x25, font 8x16
(XEN)  VBE/DDC methods: V2; EDID transfer time: 1 seconds
(XEN) Disc information:
(XEN)  Found 1 MBR signatures
(XEN)  Found 1 EDD information structures
(XEN) Xen-e820 RAM map:
(XEN)  0000000000000000 - 000000000009ac00 (usable)
(XEN)  000000000009ac00 - 00000000000a0000 (reserved)
(XEN)  00000000000e0000 - 0000000000100000 (reserved)
(XEN)  0000000000100000 - 0000000020000000 (usable)
(XEN)  0000000020000000 - 0000000020200000 (reserved)
(XEN)  0000000020200000 - 0000000040000000 (usable)
(XEN)  0000000040000000 - 0000000040200000 (reserved)
(XEN)  0000000040200000 - 00000000ca9f7000 (usable)
(XEN)  00000000ca9f7000 - 00000000caa3b000 (reserved)
(XEN)  00000000caa3b000 - 00000000cadb7000 (usable)
(XEN)  00000000cadb7000 - 00000000cade7000 (reserved)
(XEN)  00000000cade7000 - 00000000cafe7000 (ACPI NVS)
(XEN)  00000000cafe7000 - 00000000cafff000 (ACPI data)
(XEN)  00000000cafff000 - 00000000cb000000 (usable)
(XEN)  00000000cb800000 - 00000000cfa00000 (reserved)
(XEN)  00000000fed1c000 - 00000000fed20000 (reserved)
(XEN)  00000000ffc00000 - 00000000ffc20000 (reserved)
(XEN)  0000000100000000 - 000000022e000000 (usable)
(XEN) ACPI: RSDP 000FE300, 0024 (r2 DELL  )
(XEN) ACPI: XSDT CAFFDE18, 007C (r1 DELL    CBX3     6222004 MSFT    10013)
(XEN) ACPI: FACP CAF87D98, 00F4 (r4 DELL    CBX3     6222004 MSFT    10013)
(XEN) ACPI: DSDT CAF71018, 7341 (r2 INT430 SYSFexxx     1001 INTL 20090903)
(XEN) ACPI: FACS CAFE4D40, 0040
(XEN) ACPI: APIC CAFFCF18, 00CC (r2 DELL    CBX3     6222004 MSFT    10013)
(XEN) ACPI: TCPA CAFE5D18, 0032 (r2                        0             0)
(XEN) ACPI: SSDT CAF88A98, 02F9 (r1 DELLTP      TPM     3000 INTL 20090903)
(XEN) ACPI: MCFG CAFE5C98, 003C (r1 DELL   SNDYBRDG  6222004 MSFT       97)
(XEN) ACPI: HPET CAFE5C18, 0038 (r1 A M I   PCHHPET  6222004 AMI.        3)
(XEN) ACPI: BOOT CAFE5B98, 0028 (r1 DELL   CBX3      6222004 AMI     10013)
(XEN) ACPI: SSDT CAF7E818, 07C2 (r1  PmRef  Cpu0Ist     3000 INTL 20090903)
(XEN) ACPI: SSDT CAF7D018, 0996 (r1  PmRef    CpuPm     3000 INTL 20090903)
(XEN) ACPI: DMAR CAF87C18, 00E8 (r1 INTEL      SNB         1 INTL        1)
(XEN) ACPI: SLIC CAF86C18, 0176 (r3 DELL    CBX3     6222004 MSFT    10013)
(XEN) System RAM: 8073MB (8266808kB)
(XEN) No NUMA configuration found
(XEN) Faking a node at 0000000000000000-000000022e000000
(XEN) Domain heap initialised
(XEN) found SMP MP-table at 000f1fb0
(XEN) DMI 2.6 present.
(XEN) Using APIC driver default
(XEN) ACPI: PM-Timer IO Port: 0x408
(XEN) ACPI: ACPI SLEEP INFO: pm1x_cnt[404,0], pm1x_evt[400,0]
(XEN) ACPI: 32/64X FACS address mismatch in FADT - cafe4e40/00000000cafe4d40, using 32
(XEN) ACPI:                  wakeup_vec[cafe4e4c], vec_size[20]
(XEN) ACPI: Local APIC address 0xfee00000
(XEN) ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled)
(XEN) Processor #0 6:10 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
(XEN) Processor #2 6:10 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x03] lapic_id[0x04] enabled)
(XEN) Processor #4 6:10 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x04] lapic_id[0x06] enabled)
(XEN) Processor #6 6:10 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x05] lapic_id[0x04] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x06] lapic_id[0x05] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x07] lapic_id[0x06] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x08] lapic_id[0x07] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x09] lapic_id[0x08] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x0a] lapic_id[0x09] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x0b] lapic_id[0x0a] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x0c] lapic_id[0x0b] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x0d] lapic_id[0x0c] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x0e] lapic_id[0x0d] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x0f] lapic_id[0x0e] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x10] lapic_id[0x0f] disabled)
(XEN) ACPI: IOAPIC (id[0x02] address[0xfec00000] gsi_base[0])
(XEN) IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-23
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
(XEN) ACPI: IRQ0 used by override.
(XEN) ACPI: IRQ2 used by override.
(XEN) ACPI: IRQ9 used by override.
(XEN) Enabling APIC mode:  Flat.  Using 1 I/O APICs
(XEN) ACPI: HPET id: 0x8086a701 base: 0xfed00000
(XEN) Table is not found!
(XEN) Using ACPI (MADT) for SMP configuration information
(XEN) SMP: Allowing 16 CPUs (12 hotplug CPUs)
(XEN) IRQ limits: 24 GSI, 760 MSI/MSI-X
(XEN) Switched to APIC driver x2apic_cluster.
(XEN) Using scheduler: SMP Credit Scheduler (credit)
(XEN) Detected 3093.024 MHz processor.
(XEN) Initing memory sharing.
(XEN) xstate_init: using cntxt_size: 0x340 and states: 0x7
(XEN) mce_intel.c:1239: MCA Capability: BCAST 1 SER 0 CMCI 1 firstbank 0 extended MCE MSR 0
(XEN) Intel machine check reporting enabled
(XEN) PCI: MCFG configuration 0: base f8000000 segment 0000 buses 00 - 3f
(XEN) PCI: Not using MCFG for segment 0000 bus 00-3f
(XEN) Intel VT-d supported page sizes: 4kB.
(XEN) Intel VT-d supported page sizes: 4kB.
(XEN) Intel VT-d Snoop Control not enabled.
(XEN) Intel VT-d Dom0 DMA Passthrough not enabled.
(XEN) Intel VT-d Queued Invalidation enabled.
(XEN) Intel VT-d Interrupt Remapping enabled.
(XEN) Intel VT-d Shared EPT tables not enabled.
(XEN) I/O virtualisation enabled
(XEN)  - Dom0 mode: Relaxed
(XEN) Enabled directed EOI with ioapic_ack_old on!
(XEN) ENABLING IO-APIC IRQs
(XEN)  -> Using old ACK method
(XEN) ..TIMER: vector=0xF0 apic1=0 pin1=2 apic2=-1 pin2=-1
(XEN) TSC deadline timer enabled
(XEN) Platform timer is 14.318MHz HPET
(XEN) Allocated console ring of 32 KiB.
(XEN) VMX: Supported advanced features:
(XEN)  - APIC MMIO access virtualisation
(XEN)  - APIC TPR shadow
(XEN)  - Extended Page Tables (EPT)
(XEN)  - Virtual-Processor Identifiers (VPID)
(XEN)  - Virtual NMI
(XEN)  - MSR direct-access bitmap
(XEN)  - Unrestricted Guest
(XEN) HVM: ASIDs enabled.
(XEN) HVM: VMX enabled
(XEN) HVM: Hardware Assisted Paging (HAP) detected
(XEN) HVM: HAP page sizes: 4kB, 2MB
(XEN) Brought up 4 CPUs
(XEN) ACPI sleep modes: S3
(XEN) mcheck_poll: Machine check polling timer started.
(XEN) mtrr: your CPUs had inconsistent variable MTRR settings
(XEN) mtrr: probably your BIOS does not setup all CPUs.
(XEN) mtrr: corrected configuration.
(XEN) *** LOADING DOMAIN 0 ***
(XEN) elf_parse_binary: phdr: paddr=0x1000000 memsz=0x885000
(XEN) elf_parse_binary: phdr: paddr=0x1a00000 memsz=0xd70e0
(XEN) elf_parse_binary: phdr: paddr=0x1ad8000 memsz=0x14240
(XEN) elf_parse_binary: phdr: paddr=0x1aed000 memsz=0x6bf000
(XEN) elf_parse_binary: memory: 0x1000000 -> 0x21ac000
(XEN) elf_xen_parse_note: GUEST_OS = "linux"
(XEN) elf_xen_parse_note: GUEST_VERSION = "2.6"
(XEN) elf_xen_parse_note: XEN_VERSION = "xen-3.0"
(XEN) elf_xen_parse_note: VIRT_BASE = 0xffffffff80000000
(XEN) elf_xen_parse_note: ENTRY = 0xffffffff81aed200
(XEN) elf_xen_parse_note: HYPERCALL_PAGE = 0xffffffff81001000
(XEN) elf_xen_parse_note: FEATURES = "!writable_page_tables|pae_pgdir_above_4gb"
(XEN) elf_xen_parse_note: PAE_MODE = "yes"
(XEN) elf_xen_parse_note: LOADER = "generic"
(XEN) elf_xen_parse_note: unknown xen elf note (0xd)
(XEN) elf_xen_parse_note: SUSPEND_CANCEL = 0x1
(XEN) elf_xen_parse_note: HV_START_LOW = 0xffff800000000000
(XEN) elf_xen_parse_note: PADDR_OFFSET = 0x0
(XEN) elf_xen_addr_calc_check: addresses:
(XEN)     virt_base        = 0xffffffff80000000
(XEN)     elf_paddr_offset = 0x0
(XEN)     virt_offset      = 0xffffffff80000000
(XEN)     virt_kstart      = 0xffffffff81000000
(XEN)     virt_kend        = 0xffffffff821ac000
(XEN)     virt_entry       = 0xffffffff81aed200
(XEN)     p2m_base         = 0xffffffffffffffff
(XEN)  Xen  kernel: 64-bit, lsb, compat32
(XEN)  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x21ac000
(XEN) PHYSICAL MEMORY ARRANGEMENT:
(XEN)  Dom0 alloc.:   0000000220000000->0000000224000000 (103435 pages to be allocated)
(XEN)  Init. ramdisk: 000000022b40b000->000000022dfffa00
(XEN) VIRTUAL MEMORY ARRANGEMENT:
(XEN)  Loaded kernel: ffffffff81000000->ffffffff821ac000
(XEN)  Init. ramdisk: ffffffff821ac000->ffffffff84da0a00
(XEN)  Phys-Mach map: ffffffff84da1000->ffffffff84ea1000
(XEN)  Start info:    ffffffff84ea1000->ffffffff84ea14b4
(XEN)  Page tables:   ffffffff84ea2000->ffffffff84ecd000
(XEN)  Boot stack:    ffffffff84ecd000->ffffffff84ece000
(XEN)  TOTAL:         ffffffff80000000->ffffffff85000000
(XEN)  ENTRY ADDRESS: ffffffff81aed200
(XEN) Dom0 has maximum 4 VCPUs
(XEN) elf_load_binary: phdr 0 at 0xffffffff81000000 -> 0xffffffff81885000
(XEN) elf_load_binary: phdr 1 at 0xffffffff81a00000 -> 0xffffffff81ad70e0
(XEN) elf_load_binary: phdr 2 at 0xffffffff81ad8000 -> 0xffffffff81aec240
(XEN) elf_load_binary: phdr 3 at 0xffffffff81aed000 -> 0xffffffff81bd9000
(XEN) Scrubbing Free RAM: ...........................................................................done.
(XEN) Initial low memory virq threshold set at 0x4000 pages.
(XEN) Std. Loglevel: All
(XEN) Guest Loglevel: All
(XEN) Xen is relinquishing VGA console.
(XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen)
(XEN) Freed 260kB init memory.
mapping kernel into physical memory
Xen: setup ISA identity maps
about to get started...
(XEN) Domain 0 crashed: rebooting machine in 5 seconds.
(XEN) Resetting with ACPI MEMORY or I/O RESET_REG.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 29 22:45:19 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Aug 2012 22:45:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6r0E-0003x0-Dq; Wed, 29 Aug 2012 22:44:50 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T6r0C-0003wu-9l
	for xen-devel@lists.xen.org; Wed, 29 Aug 2012 22:44:48 +0000
Received: from [85.158.138.51:13155] by server-1.bemta-3.messagelabs.com id
	F2/63-09327-F5B9E305; Wed, 29 Aug 2012 22:44:47 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-14.tower-174.messagelabs.com!1346280283!21203579!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNzIwMDI0\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23566 invoked from network); 29 Aug 2012 22:44:45 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-14.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 29 Aug 2012 22:44:45 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7TMigXj001515
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK)
	for <xen-devel@lists.xen.org>; Wed, 29 Aug 2012 22:44:43 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7TMifEB027913
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO)
	for <xen-devel@lists.xen.org>; Wed, 29 Aug 2012 22:44:42 GMT
Received: from abhmt113.oracle.com (abhmt113.oracle.com [141.146.116.65])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7TMifSV029947
	for <xen-devel@lists.xen.org>; Wed, 29 Aug 2012 17:44:41 -0500
Received: from localhost.localdomain (/38.96.16.75)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 29 Aug 2012 15:44:40 -0700
Date: Wed, 29 Aug 2012 18:44:35 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Dan Magenheimer <dan.magenheimer@oracle.com>
Message-ID: <20120829224434.GA4650@localhost.localdomain>
References: <76048170-772b-4365-87d2-00a3bf4aa5f8@default>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <76048170-772b-4365-87d2-00a3bf4aa5f8@default>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen 4.2.0 won't boot with FC17 dom0 on Dell
 Optiplex 790, boots fine with 4.1.3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Aug 29, 2012 at 03:22:40PM -0700, Dan Magenheimer wrote:
> Anybody else see this: a FC17 dom0 which boots fine for 4.1.3
> crashes early in boot with Xen 4.2.0-rc4?
> 
> The hardware is a new Dell Optiplex 790 which I've noticed has a couple
> of idiosyncrasies when booting recent upstream versions of Linux:
> 1) reboot=pci is required or reboot hangs
> 2) reducing visible memory via memmap= on a Linux command line is
>    fairly difficult (requires a very complex sequence of Linux boot
>    parameters, presumably due to a weird e820 RAM map in this hardware)
> 
> These idiosyncrasies may or may not be related to the dom0 crash,
> just mentioning them in case they suggest an easy workaround.


What happens if you run with 'console=vga vga=text,keep' on the
hypervisor line and on the Linux command line: 'console=hvc0
earlyprintk=xen debug loglevel=8'?
> 
> The stock FC17 Xen booted, then I built/booted xen-4.1-testing.hg
> and it was fine.  Then I built/booted 4.2.0-rc4 and see the dom0 crash.
> FC17 kernel/dom0 rev is 3.3.4-5.fc17.x86_64.  I can reboot the
> box to 4.1.3 still (but can't do anything due to at least a 4.1/4.2
> libxenlight.so version difference... will have to reinstall 4.1
> tools I guess).
> 
> I think there have been some Xen memmap issues fixed in recent
> upstream Linux, so will try building/booting linux-3.5 while

Use v3.5.3 if you can.
> everyone flies home from Xen Summit ;-).  But this seems

There was a bug in 3.5 that got fixed in 3.6 and it should be in
the stable tree. It is commit 5bc6f9888db5739abfa0cae279b4b442e4db8049.
> unpromising since Xen 4.1.3 works fine.
> 
> Tombstone attached:
> 
> [first, tail end of previous boot with dom0 crash, then full log with dom0 crash]
> 
> (XEN) elf_load_binary: phdr 3 at 0xffffffff81aed000 -> 0xffffffff81bd9000
> (XEN) Scrubbing Free RAM: ......................................................
> .....................done.
> (XEN) Initial low memory virq threshold set at 0x4000 pages.
> (XEN) Std. Loglevel: All
> (XEN) Guest Loglevel: All
> (XEN) Xen is relinquishing VGA console.
> (XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen
> )
> (XEN) Freed 260kB init memory.
> mapping kernel into physical memory
> Xen: setup ISA identity maps
> about to get started...
> (XEN) Domain 0 crashed: rebooting machine in 5 seconds.
> (XEN) Resetting with ACPI MEMORY or I/O RESET_REG.
>  __  __            _  _    ____    ___              _  _
>  \ \/ /___ _ __   | || |  |___ \  / _ \    _ __ ___| || |     _ __  _ __ ___
>   \  // _ \ '_ \  | || |_   __) || | | |__| '__/ __| || |_ __| '_ \| '__/ _ \
>   /  \  __/ | | | |__   _| / __/ | |_| |__| | | (__|__   _|__| |_) | | |  __/
>  /_/\_\___|_| |_|    |_|(_)_____(_)___/   |_|  \___|  |_|    | .__/|_|  \___|
>                                                              |_|
> (XEN) Xen version 4.2.0-rc4-pre (root@vpn.oracle.com) (gcc (GCC) 4.7.0 20120507
> (Red Hat 4.7.0-5)) Wed Aug 29 15:27:44 MDT 2012
> (XEN) Latest ChangeSet: Tue Aug 28 22:40:45 2012 +0100 25786:a0b5f8102a00
> (XEN) Bootloader: GRUB 2.00~beta4
> (XEN) Command line: placeholder com1=115200,8n1 dom0_mem=512M console=vga,com1 l
> oglvl=all guest_loglvl=all cpuidle=off
> (XEN) Video information:
> (XEN)  VGA is text mode 80x25, font 8x16
> (XEN)  VBE/DDC methods: V2; EDID transfer time: 1 seconds
> (XEN) Disc information:
> (XEN)  Found 1 MBR signatures
> (XEN)  Found 1 EDD information structures
> (XEN) Xen-e820 RAM map:
> (XEN)  0000000000000000 - 000000000009ac00 (usable)
> (XEN)  000000000009ac00 - 00000000000a0000 (reserved)
> (XEN)  00000000000e0000 - 0000000000100000 (reserved)
> (XEN)  0000000000100000 - 0000000020000000 (usable)
> (XEN)  0000000020000000 - 0000000020200000 (reserved)
> (XEN)  0000000020200000 - 0000000040000000 (usable)
> (XEN)  0000000040000000 - 0000000040200000 (reserved)
> (XEN)  0000000040200000 - 00000000ca9f7000 (usable)
> (XEN)  00000000ca9f7000 - 00000000caa3b000 (reserved)
> (XEN)  00000000caa3b000 - 00000000cadb7000 (usable)
> (XEN)  00000000cadb7000 - 00000000cade7000 (reserved)
> (XEN)  00000000cade7000 - 00000000cafe7000 (ACPI NVS)
> (XEN)  00000000cafe7000 - 00000000cafff000 (ACPI data)
> (XEN)  00000000cafff000 - 00000000cb000000 (usable)
> (XEN)  00000000cb800000 - 00000000cfa00000 (reserved)
> (XEN)  00000000fed1c000 - 00000000fed20000 (reserved)
> (XEN)  00000000ffc00000 - 00000000ffc20000 (reserved)
> (XEN)  0000000100000000 - 000000022e000000 (usable)
> (XEN) ACPI: RSDP 000FE300, 0024 (r2 DELL  )
> (XEN) ACPI: XSDT CAFFDE18, 007C (r1 DELL    CBX3     6222004 MSFT    10013)
> (XEN) ACPI: FACP CAF87D98, 00F4 (r4 DELL    CBX3     6222004 MSFT    10013)
> (XEN) ACPI: DSDT CAF71018, 7341 (r2 INT430 SYSFexxx     1001 INTL 20090903)
> (XEN) ACPI: FACS CAFE4D40, 0040
> (XEN) ACPI: APIC CAFFCF18, 00CC (r2 DELL    CBX3     6222004 MSFT    10013)
> (XEN) ACPI: TCPA CAFE5D18, 0032 (r2                        0             0)
> (XEN) ACPI: SSDT CAF88A98, 02F9 (r1 DELLTP      TPM     3000 INTL 20090903)
> (XEN) ACPI: MCFG CAFE5C98, 003C (r1 DELL   SNDYBRDG  6222004 MSFT       97)
> (XEN) ACPI: HPET CAFE5C18, 0038 (r1 A M I   PCHHPET  6222004 AMI.        3)
> (XEN) ACPI: BOOT CAFE5B98, 0028 (r1 DELL   CBX3      6222004 AMI     10013)
> (XEN) ACPI: SSDT CAF7E818, 07C2 (r1  PmRef  Cpu0Ist     3000 INTL 20090903)
> (XEN) ACPI: SSDT CAF7D018, 0996 (r1  PmRef    CpuPm     3000 INTL 20090903)
> (XEN) ACPI: DMAR CAF87C18, 00E8 (r1 INTEL      SNB         1 INTL        1)
> (XEN) ACPI: SLIC CAF86C18, 0176 (r3 DELL    CBX3     6222004 MSFT    10013)
> (XEN) System RAM: 8073MB (8266808kB)
> (XEN) No NUMA configuration found
> (XEN) Faking a node at 0000000000000000-000000022e000000
> (XEN) Domain heap initialised
> (XEN) found SMP MP-table at 000f1fb0
> (XEN) DMI 2.6 present.
> (XEN) Using APIC driver default
> (XEN) ACPI: PM-Timer IO Port: 0x408
> (XEN) ACPI: ACPI SLEEP INFO: pm1x_cnt[404,0], pm1x_evt[400,0]
> (XEN) ACPI: 32/64X FACS address mismatch in FADT - cafe4e40/00000000cafe4d40, us
> ing 32
> (XEN) ACPI:                  wakeup_vec[cafe4e4c], vec_size[20]
> (XEN) ACPI: Local APIC address 0xfee00000
> (XEN) ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled)
> (XEN) Processor #0 6:10 APIC version 21
> (XEN) ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
> (XEN) Processor #2 6:10 APIC version 21
> (XEN) ACPI: LAPIC (acpi_id[0x03] lapic_id[0x04] enabled)
> (XEN) Processor #4 6:10 APIC version 21
> (XEN) ACPI: LAPIC (acpi_id[0x04] lapic_id[0x06] enabled)
> (XEN) Processor #6 6:10 APIC version 21
> (XEN) ACPI: LAPIC (acpi_id[0x05] lapic_id[0x04] disabled)
> (XEN) ACPI: LAPIC (acpi_id[0x06] lapic_id[0x05] disabled)
> (XEN) ACPI: LAPIC (acpi_id[0x07] lapic_id[0x06] disabled)
> (XEN) ACPI: LAPIC (acpi_id[0x08] lapic_id[0x07] disabled)
> (XEN) ACPI: LAPIC (acpi_id[0x09] lapic_id[0x08] disabled)
> (XEN) ACPI: LAPIC (acpi_id[0x0a] lapic_id[0x09] disabled)
> (XEN) ACPI: LAPIC (acpi_id[0x0b] lapic_id[0x0a] disabled)
> (XEN) ACPI: LAPIC (acpi_id[0x0c] lapic_id[0x0b] disabled)
> (XEN) ACPI: LAPIC (acpi_id[0x0d] lapic_id[0x0c] disabled)
> (XEN) ACPI: LAPIC (acpi_id[0x0e] lapic_id[0x0d] disabled)
> (XEN) ACPI: LAPIC (acpi_id[0x0f] lapic_id[0x0e] disabled)
> (XEN) ACPI: LAPIC (acpi_id[0x10] lapic_id[0x0f] disabled)
> (XEN) ACPI: IOAPIC (id[0x02] address[0xfec00000] gsi_base[0])
> (XEN) IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-23
> (XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
> (XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
> (XEN) ACPI: IRQ0 used by override.
> (XEN) ACPI: IRQ2 used by override.
> (XEN) ACPI: IRQ9 used by override.
> (XEN) Enabling APIC mode:  Flat.  Using 1 I/O APICs
> (XEN) ACPI: HPET id: 0x8086a701 base: 0xfed00000
> (XEN) Table is not found!
> (XEN) Using ACPI (MADT) for SMP configuration information
> (XEN) SMP: Allowing 16 CPUs (12 hotplug CPUs)
> (XEN) IRQ limits: 24 GSI, 760 MSI/MSI-X
> (XEN) Switched to APIC driver x2apic_cluster.
> (XEN) Using scheduler: SMP Credit Scheduler (credit)
> (XEN) Detected 3093.024 MHz processor.
> (XEN) Initing memory sharing.
> (XEN) xstate_init: using cntxt_size: 0x340 and states: 0x7
> (XEN) mce_intel.c:1239: MCA Capability: BCAST 1 SER 0 CMCI 1 firstbank 0 extende
> d MCE MSR 0
> (XEN) Intel machine check reporting enabled
> (XEN) PCI: MCFG configuration 0: base f8000000 segment 0000 buses 00 - 3f
> (XEN) PCI: Not using MCFG for segment 0000 bus 00-3f
> (XEN) Intel VT-d supported page sizes: 4kB.
> (XEN) Intel VT-d supported page sizes: 4kB.
> (XEN) Intel VT-d Snoop Control not enabled.
> (XEN) Intel VT-d Dom0 DMA Passthrough not enabled.
> (XEN) Intel VT-d Queued Invalidation enabled.
> (XEN) Intel VT-d Interrupt Remapping enabled.
> (XEN) Intel VT-d Shared EPT tables not enabled.
> (XEN) I/O virtualisation enabled
> (XEN)  - Dom0 mode: Relaxed
> (XEN) Enabled directed EOI with ioapic_ack_old on!
> (XEN) ENABLING IO-APIC IRQs
> (XEN)  -> Using old ACK method
> (XEN) ..TIMER: vector=0xF0 apic1=0 pin1=2 apic2=-1 pin2=-1
> (XEN) TSC deadline timer enabled
> (XEN) Platform timer is 14.318MHz HPET
> (XEN) Allocated console ring of 32 KiB.
> (XEN) VMX: Supported advanced features:
> (XEN)  - APIC MMIO access virtualisation
> (XEN)  - APIC TPR shadow
> (XEN)  - Extended Page Tables (EPT)
> (XEN)  - Virtual-Processor Identifiers (VPID)
> (XEN)  - Virtual NMI
> (XEN)  - MSR direct-access bitmap
> (XEN)  - Unrestricted Guest
> (XEN) HVM: ASIDs enabled.
> (XEN) HVM: VMX enabled
> (XEN) HVM: Hardware Assisted Paging (HAP) detected
> (XEN) HVM: HAP page sizes: 4kB, 2MB
> (XEN) Brought up 4 CPUs
> (XEN) ACPI sleep modes: S3
> (XEN) mcheck_poll: Machine check polling timer started.
> (XEN) mtrr: your CPUs had inconsistent variable MTRR settings
> (XEN) mtrr: probably your BIOS does not setup all CPUs.
> (XEN) mtrr: corrected configuration.
> (XEN) *** LOADING DOMAIN 0 ***
> (XEN) elf_parse_binary: phdr: paddr=0x1000000 memsz=0x885000
> (XEN) elf_parse_binary: phdr: paddr=0x1a00000 memsz=0xd70e0
> (XEN) elf_parse_binary: phdr: paddr=0x1ad8000 memsz=0x14240
> (XEN) elf_parse_binary: phdr: paddr=0x1aed000 memsz=0x6bf000
> (XEN) elf_parse_binary: memory: 0x1000000 -> 0x21ac000
> (XEN) elf_xen_parse_note: GUEST_OS = "linux"
> (XEN) elf_xen_parse_note: GUEST_VERSION = "2.6"
> (XEN) elf_xen_parse_note: XEN_VERSION = "xen-3.0"
> (XEN) elf_xen_parse_note: VIRT_BASE = 0xffffffff80000000
> (XEN) elf_xen_parse_note: ENTRY = 0xffffffff81aed200
> (XEN) elf_xen_parse_note: HYPERCALL_PAGE = 0xffffffff81001000
> (XEN) elf_xen_parse_note: FEATURES = "!writable_page_tables|pae_pgdir_above_4gb"
> 
> (XEN) elf_xen_parse_note: PAE_MODE = "yes"
> (XEN) elf_xen_parse_note: LOADER = "generic"
> (XEN) elf_xen_parse_note: unknown xen elf note (0xd)
> (XEN) elf_xen_parse_note: SUSPEND_CANCEL = 0x1
> (XEN) elf_xen_parse_note: HV_START_LOW = 0xffff800000000000
> (XEN) elf_xen_parse_note: PADDR_OFFSET = 0x0
> (XEN) elf_xen_addr_calc_check: addresses:
> (XEN)     virt_base        = 0xffffffff80000000
> (XEN)     elf_paddr_offset = 0x0
> (XEN)     virt_offset      = 0xffffffff80000000
> (XEN)     virt_kstart      = 0xffffffff81000000
> (XEN)     virt_kend        = 0xffffffff821ac000
> (XEN)     virt_entry       = 0xffffffff81aed200
> (XEN)     p2m_base         = 0xffffffffffffffff
> (XEN)  Xen  kernel: 64-bit, lsb, compat32
> (XEN)  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x21ac000
> (XEN) PHYSICAL MEMORY ARRANGEMENT:
> (XEN)  Dom0 alloc.:   0000000220000000->0000000224000000 (103435 pages to be all
> ocated)
> (XEN)  Init. ramdisk: 000000022b40b000->000000022dfffa00
> (XEN) VIRTUAL MEMORY ARRANGEMENT:
> (XEN)  Loaded kernel: ffffffff81000000->ffffffff821ac000
> (XEN)  Init. ramdisk: ffffffff821ac000->ffffffff84da0a00
> (XEN)  Phys-Mach map: ffffffff84da1000->ffffffff84ea1000
> (XEN)  Start info:    ffffffff84ea1000->ffffffff84ea14b4
> (XEN)  Page tables:   ffffffff84ea2000->ffffffff84ecd000
> (XEN)  Boot stack:    ffffffff84ecd000->ffffffff84ece000
> (XEN)  TOTAL:         ffffffff80000000->ffffffff85000000
> (XEN)  ENTRY ADDRESS: ffffffff81aed200
> (XEN) Dom0 has maximum 4 VCPUs
> (XEN) elf_load_binary: phdr 0 at 0xffffffff81000000 -> 0xffffffff81885000
> (XEN) elf_load_binary: phdr 1 at 0xffffffff81a00000 -> 0xffffffff81ad70e0
> (XEN) elf_load_binary: phdr 2 at 0xffffffff81ad8000 -> 0xffffffff81aec240
> (XEN) elf_load_binary: phdr 3 at 0xffffffff81aed000 -> 0xffffffff81bd9000
> (XEN) Scrubbing Free RAM: ......................................................
> .....................done.
> (XEN) Initial low memory virq threshold set at 0x4000 pages.
> (XEN) Std. Loglevel: All
> (XEN) Guest Loglevel: All
> (XEN) Xen is relinquishing VGA console.
> (XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen
> )
> (XEN) Freed 260kB init memory.
> mapping kernel into physical memory
> Xen: setup ISA identity maps
> about to get started...
> (XEN) Domain 0 crashed: rebooting machine in 5 seconds.
> (XEN) Resetting with ACPI MEMORY or I/O RESET_REG.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Aug 29 23:18:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Aug 2012 23:18:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6rWm-0004SN-39; Wed, 29 Aug 2012 23:18:28 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1T6rWl-0004SI-0o
	for Xen-devel@lists.xensource.com; Wed, 29 Aug 2012 23:18:27 +0000
Received: from [85.158.143.99:14064] by server-3.bemta-4.messagelabs.com id
	23/9E-08232-243AE305; Wed, 29 Aug 2012 23:18:26 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-2.tower-216.messagelabs.com!1346282303!22174272!1
X-Originating-IP: [209.85.160.43]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8695 invoked from network); 29 Aug 2012 23:18:25 -0000
Received: from mail-pb0-f43.google.com (HELO mail-pb0-f43.google.com)
	(209.85.160.43)
	by server-2.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Aug 2012 23:18:25 -0000
Received: by pbbrq2 with SMTP id rq2so2315132pbb.30
	for <Xen-devel@lists.xensource.com>;
	Wed, 29 Aug 2012 16:18:23 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:cc:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=P7ZH8fD3jlUH5z6ZEo1siOXxI/Cli/D8pndzhiUeqcg=;
	b=fQ/oeZMKSCV1R+vxBez2mc4fmayRWymE4sQCrJiOYEb/Ocr14QjkOibPxyISRwSld/
	kYVon72SoH9zrQtQmZEbkIvtMvo3a6HobxPC12YLXmZW4lLJ2MULa/D00a40jgL8sPKd
	ItPJ3fMAc1LDW7QVPEVrvibUCFxQyyTD3mwOnIWd7fGjoy4DWxjxiKgrPlhG/TbJ64xD
	TfPdDdbXwqb49YWAWdM0gmv4fM1RW+p9rxFE7OVpBmczyP4wgJKNJ0u7RR/lV/z7tV8A
	ADkn6cH4r5rpYJeCTEzWjVdFafagZ+OMlvJFKl8j45Zj6rIBS0iT70mXyHq4DucTS/ri
	IAOQ==
Received: by 10.66.85.4 with SMTP id d4mr6389201paz.11.1346282303420;
	Wed, 29 Aug 2012 16:18:23 -0700 (PDT)
Received: from [206.87.97.153] ([206.87.97.153])
	by mx.google.com with ESMTPS id kt2sm212956pbc.73.2012.08.29.16.18.21
	(version=SSLv3 cipher=OTHER); Wed, 29 Aug 2012 16:18:22 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.32.0.111121
Date: Thu, 30 Aug 2012 00:18:18 +0100
From: Keir Fraser <keir.xen@gmail.com>
To: Mukesh Rathor <mukesh.rathor@oracle.com>,
	"Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	<msw@amazon.com>
Message-ID: <CC6461CA.3D3AD%keir.xen@gmail.com>
Thread-Topic: [Xen-devel] xen debugger (kdb/xdb/hdb) patch for c/s 25467
Thread-Index: Ac2GPJKkPyAOb11Km0yoNX6UuKFHMw==
In-Reply-To: <20120829143512.7c579fb1@mantra.us.oracle.com>
Mime-version: 1.0
Cc: Ben Guthro <ben@guthro.net>
Subject: Re: [Xen-devel] xen debugger (kdb/xdb/hdb) patch for c/s 25467
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 29/08/2012 22:35, "Mukesh Rathor" <mukesh.rathor@oracle.com> wrote:

> Hi Guys,
> 
> Thanks for the interest in the xen hypervisor debugger, prev known as
> kdb. Btw. I'm gonna rename it to xdb for xen-debugger or hdb for
> hypervisor debugger. KDB is confusing people with linux kdb debugger
> and I often get emails where people think they need to apply linux kdb
> patch also... 
> 
> Anyways, attaching patch that is cleaned up of my debug code that I
> accidentally left in prev posting. Should apply cleanly to c/s 25467.
> 
> JFYI...  http://xenbits.xen.org/ext/debuggers.hg/ has gotten outdated,
> I wasn't sure if anyone was even looking at it. But, it looks like
> there is much interest, so I will just submit patch to Keir for
> his feedback on merging it in xen.
> 
> Please voice your opinion here. Good seeing all at the summit :).

Split off the support bits you need scattered into Xen from the main bulk of
the debugger. Then we can review and apply those trickiest bits first.

 -- Keir

> Thanks,
> Mukesh
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 30 05:13:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Aug 2012 05:13:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6x46-0002jC-NY; Thu, 30 Aug 2012 05:13:14 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1T6q18-0003aO-EY
	for Xen-devel@lists.xensource.com; Wed, 29 Aug 2012 21:41:43 +0000
Received: from [85.158.143.35:9558] by server-3.bemta-4.messagelabs.com id
	D8/21-08232-59C8E305; Wed, 29 Aug 2012 21:41:41 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1346276496!13262681!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDcyNTk5OQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24942 invoked from network); 29 Aug 2012 21:41:37 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-16.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 29 Aug 2012 21:41:37 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7TLfUmF008689
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 29 Aug 2012 21:41:31 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7TLfRe1022882
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 29 Aug 2012 21:41:28 GMT
Received: from abhmt114.oracle.com (abhmt114.oracle.com [141.146.116.66])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7TLfQuE024986; Wed, 29 Aug 2012 16:41:26 -0500
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 29 Aug 2012 14:41:26 -0700
Date: Wed, 29 Aug 2012 14:41:25 -0700
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>
Message-ID: <20120829144125.25a5eb3e@mantra.us.oracle.com>
In-Reply-To: <20120829143512.7c579fb1@mantra.us.oracle.com>
References: <20120829143512.7c579fb1@mantra.us.oracle.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.7.6 (GTK+ 2.18.9; x86_64-redhat-linux-gnu)
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="MP_/vtJK/xRhlenzx/+qfkRLsPz"
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
X-Mailman-Approved-At: Thu, 30 Aug 2012 05:13:13 +0000
Cc: Ben Guthro <ben@guthro.net>, msw@amazon.com
Subject: Re: [Xen-devel] xen debugger (kdb/xdb/hdb) patch for c/s 25467
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--MP_/vtJK/xRhlenzx/+qfkRLsPz
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

On Wed, 29 Aug 2012 14:35:12 -0700
Mukesh Rathor <mukesh.rathor@oracle.com> wrote:


> 
> Anyways, attaching patch that is cleaned up of my debug code that I
> accidentally left in prev posting. Should apply cleanly to c/s 25467.

really attaching this time :).


--MP_/vtJK/xRhlenzx/+qfkRLsPz
Content-Type: text/x-patch
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename=kdb.diff

diff -r 32034d1914a6 xen/Makefile
--- a/xen/Makefile	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/Makefile	Wed Aug 29 14:39:57 2012 -0700
@@ -61,6 +61,7 @@
 	$(MAKE) -f $(BASEDIR)/Rules.mk -C xsm clean
 	$(MAKE) -f $(BASEDIR)/Rules.mk -C crypto clean
 	$(MAKE) -f $(BASEDIR)/Rules.mk -C arch/$(TARGET_ARCH) clean
+	$(MAKE) -f $(BASEDIR)/Rules.mk -C kdb clean
 	rm -f include/asm *.o $(TARGET) $(TARGET).gz $(TARGET)-syms *~ core
 	rm -f include/asm-*/asm-offsets.h
 	[ -d tools/figlet ] && rm -f .banner*
@@ -129,7 +130,7 @@
 	  echo ""; \
 	  echo "#endif") <$< >$@
 
-SUBDIRS = xsm arch/$(TARGET_ARCH) common drivers
+SUBDIRS = xsm arch/$(TARGET_ARCH) common drivers kdb
 define all_sources
     ( find include/asm-$(TARGET_ARCH) -name '*.h' -print; \
       find include -name 'asm-*' -prune -o -name '*.h' -print; \
diff -r 32034d1914a6 xen/Rules.mk
--- a/xen/Rules.mk	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/Rules.mk	Wed Aug 29 14:39:57 2012 -0700
@@ -10,6 +10,7 @@
 crash_debug   ?= n
 frame_pointer ?= n
 lto           ?= n
+kdb           ?= n
 
 include $(XEN_ROOT)/Config.mk
 
@@ -40,6 +41,7 @@
 ALL_OBJS-y               += $(BASEDIR)/xsm/built_in.o
 ALL_OBJS-y               += $(BASEDIR)/arch/$(TARGET_ARCH)/built_in.o
 ALL_OBJS-$(x86)          += $(BASEDIR)/crypto/built_in.o
+ALL_OBJS-$(kdb)          += $(BASEDIR)/kdb/built_in.o
 
 CFLAGS-y                += -g -D__XEN__ -include $(BASEDIR)/include/xen/config.h
 CFLAGS-$(XSM_ENABLE)    += -DXSM_ENABLE
@@ -53,6 +55,7 @@
 CFLAGS-$(HAS_ACPI)      += -DHAS_ACPI
 CFLAGS-$(HAS_PASSTHROUGH) += -DHAS_PASSTHROUGH
 CFLAGS-$(frame_pointer) += -fno-omit-frame-pointer -DCONFIG_FRAME_POINTER
+CFLAGS-$(kdb)           += -DXEN_KDB_CONFIG
 
 ifneq ($(max_phys_cpus),)
 CFLAGS-y                += -DMAX_PHYS_CPUS=$(max_phys_cpus)
diff -r 32034d1914a6 xen/arch/x86/hvm/svm/entry.S
--- a/xen/arch/x86/hvm/svm/entry.S	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/arch/x86/hvm/svm/entry.S	Wed Aug 29 14:39:57 2012 -0700
@@ -59,12 +59,23 @@
         get_current(bx)
         CLGI
 
+#ifdef XEN_KDB_CONFIG
+#if defined(__x86_64__)
+        testl $1, kdb_session_begun(%rip)
+#else
+        testl $1, kdb_session_begun
+#endif
+        jnz  .Lkdb_skip_softirq
+#endif
         mov  VCPU_processor(r(bx)),%eax
         shl  $IRQSTAT_shift,r(ax)
         lea  addr_of(irq_stat),r(dx)
         testl $~0,(r(dx),r(ax),1)
         jnz  .Lsvm_process_softirqs
 
+#ifdef XEN_KDB_CONFIG
+.Lkdb_skip_softirq:
+#endif
         testb $0, VCPU_nsvm_hap_enabled(r(bx))
 UNLIKELY_START(nz, nsvm_hap)
         mov  VCPU_nhvm_p2m(r(bx)),r(ax)
diff -r 32034d1914a6 xen/arch/x86/hvm/svm/svm.c
--- a/xen/arch/x86/hvm/svm/svm.c	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/arch/x86/hvm/svm/svm.c	Wed Aug 29 14:39:57 2012 -0700
@@ -2170,6 +2170,10 @@
         break;
 
     case VMEXIT_EXCEPTION_DB:
+#ifdef XEN_KDB_CONFIG
+        if (kdb_handle_trap_entry(TRAP_debug, regs))
+            break;
+#endif
         if ( !v->domain->debugger_attached )
             goto exit_and_crash;
         domain_pause_for_debugger();
@@ -2182,6 +2186,10 @@
         if ( (inst_len = __get_instruction_length(v, INSTR_INT3)) == 0 )
             break;
         __update_guest_eip(regs, inst_len);
+#ifdef XEN_KDB_CONFIG
+        if (kdb_handle_trap_entry(TRAP_int3, regs))
+            break;
+#endif
         current->arch.gdbsx_vcpu_event = TRAP_int3;
         domain_pause_for_debugger();
         break;
diff -r 32034d1914a6 xen/arch/x86/hvm/svm/vmcb.c
--- a/xen/arch/x86/hvm/svm/vmcb.c	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/arch/x86/hvm/svm/vmcb.c	Wed Aug 29 14:39:57 2012 -0700
@@ -315,6 +315,36 @@
     register_keyhandler('v', &vmcb_dump_keyhandler);
 }
 
+#if defined(XEN_KDB_CONFIG)
+/* did == 0 : display for all HVM domains. domid 0 is never HVM.
+ * vid == -1 : display for all HVM VCPUs
+ */
+void kdb_dump_vmcb(domid_t did, int vid)
+{
+    struct domain *dp;
+    struct vcpu *vp;
+
+    rcu_read_lock(&domlist_read_lock);
+    for_each_domain (dp) {
+        if (!is_hvm_or_hyb_domain(dp) || dp->is_dying)
+            continue;
+        if (did != 0 && did != dp->domain_id)
+            continue;
+
+        for_each_vcpu (dp, vp) {
+            if (vid != -1 && vid != vp->vcpu_id)
+                continue;
+
+            kdbp("  VMCB [domid: %d  vcpu:%d]:\n", dp->domain_id, vp->vcpu_id);
+            svm_vmcb_dump("kdb", vp->arch.hvm_svm.vmcb);
+            kdbp("\n");
+        }
+        kdbp("\n");
+    }
+    rcu_read_unlock(&domlist_read_lock);
+}
+#endif
+
 /*
  * Local variables:
  * mode: C
diff -r 32034d1914a6 xen/arch/x86/hvm/vmx/entry.S
--- a/xen/arch/x86/hvm/vmx/entry.S	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/arch/x86/hvm/vmx/entry.S	Wed Aug 29 14:39:57 2012 -0700
@@ -124,12 +124,23 @@
         get_current(bx)
         cli
 
+#ifdef XEN_KDB_CONFIG
+#if defined(__x86_64__)
+        testl $1, kdb_session_begun(%rip)
+#else
+        testl $1, kdb_session_begun
+#endif
+        jnz  .Lkdb_skip_softirq
+#endif
         mov  VCPU_processor(r(bx)),%eax
         shl  $IRQSTAT_shift,r(ax)
         lea  addr_of(irq_stat),r(dx)
         cmpl $0,(r(dx),r(ax),1)
         jnz  .Lvmx_process_softirqs
 
+#ifdef XEN_KDB_CONFIG
+.Lkdb_skip_softirq:
+#endif
         testb $0xff,VCPU_vmx_emulate(r(bx))
         jnz .Lvmx_goto_emulator
         testb $0xff,VCPU_vmx_realmode(r(bx))
diff -r 32034d1914a6 xen/arch/x86/hvm/vmx/vmcs.c
--- a/xen/arch/x86/hvm/vmx/vmcs.c	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/arch/x86/hvm/vmx/vmcs.c	Wed Aug 29 14:39:57 2012 -0700
@@ -1117,6 +1117,13 @@
         hvm_asid_flush_vcpu(v);
     }
 
+#if defined(XEN_KDB_CONFIG)
+    if (kdb_dr7)
+        __vmwrite(GUEST_DR7, kdb_dr7);
+    else
+        __vmwrite(GUEST_DR7, 0);
+#endif
+
     debug_state = v->domain->debugger_attached
                   || v->domain->arch.hvm_domain.params[HVM_PARAM_MEMORY_EVENT_INT3]
                   || v->domain->arch.hvm_domain.params[HVM_PARAM_MEMORY_EVENT_SINGLE_STEP];
@@ -1326,6 +1333,220 @@
     register_keyhandler('v', &vmcs_dump_keyhandler);
 }
 
+#if defined(XEN_KDB_CONFIG)
+#define GUEST_EFER      0x2806   /* see page 23-20 */
+#define GUEST_EFER_HIGH 0x2807   /* see page 23-20 */
+
+/* it's a shame we can't use vmcs_dump_vcpu(), but it does vmx_vmcs_enter which
+ * will IPI other CPUs. also, print a subset relevant to software debugging */
+static void noinline kdb_print_vmcs(struct vcpu *vp)
+{
+    struct cpu_user_regs *regs = &vp->arch.user_regs;
+    unsigned long long x;
+
+    kdbp("*** Guest State ***\n");
+    kdbp("CR0: actual=0x%016llx, shadow=0x%016llx, gh_mask=%016llx\n",
+         (unsigned long long)vmr(GUEST_CR0),
+         (unsigned long long)vmr(CR0_READ_SHADOW), 
+         (unsigned long long)vmr(CR0_GUEST_HOST_MASK));
+    kdbp("CR4: actual=0x%016llx, shadow=0x%016llx, gh_mask=%016llx\n",
+         (unsigned long long)vmr(GUEST_CR4),
+         (unsigned long long)vmr(CR4_READ_SHADOW), 
+         (unsigned long long)vmr(CR4_GUEST_HOST_MASK));
+    kdbp("CR3: actual=0x%016llx, target_count=%d\n",
+         (unsigned long long)vmr(GUEST_CR3),
+         (int)vmr(CR3_TARGET_COUNT));
+    kdbp("     target0=%016llx, target1=%016llx\n",
+         (unsigned long long)vmr(CR3_TARGET_VALUE0),
+         (unsigned long long)vmr(CR3_TARGET_VALUE1));
+    kdbp("     target2=%016llx, target3=%016llx\n",
+         (unsigned long long)vmr(CR3_TARGET_VALUE2),
+         (unsigned long long)vmr(CR3_TARGET_VALUE3));
+    kdbp("RSP = 0x%016llx (0x%016llx)  RIP = 0x%016llx (0x%016llx)\n", 
+         (unsigned long long)vmr(GUEST_RSP),
+         (unsigned long long)regs->esp,
+         (unsigned long long)vmr(GUEST_RIP),
+         (unsigned long long)regs->eip);
+    kdbp("RFLAGS=0x%016llx (0x%016llx)  DR7 = 0x%016llx\n", 
+         (unsigned long long)vmr(GUEST_RFLAGS),
+         (unsigned long long)regs->eflags,
+         (unsigned long long)vmr(GUEST_DR7));
+    kdbp("Sysenter RSP=%016llx CS:RIP=%04x:%016llx\n",
+         (unsigned long long)vmr(GUEST_SYSENTER_ESP),
+         (int)vmr(GUEST_SYSENTER_CS),
+         (unsigned long long)vmr(GUEST_SYSENTER_EIP));
+    vmx_dump_sel("CS", GUEST_CS_SELECTOR);
+    vmx_dump_sel("DS", GUEST_DS_SELECTOR);
+    vmx_dump_sel("SS", GUEST_SS_SELECTOR);
+    vmx_dump_sel("ES", GUEST_ES_SELECTOR);
+    vmx_dump_sel("FS", GUEST_FS_SELECTOR);
+    vmx_dump_sel("GS", GUEST_GS_SELECTOR);
+    vmx_dump_sel2("GDTR", GUEST_GDTR_LIMIT);
+    vmx_dump_sel("LDTR", GUEST_LDTR_SELECTOR);
+    vmx_dump_sel2("IDTR", GUEST_IDTR_LIMIT);
+    vmx_dump_sel("TR", GUEST_TR_SELECTOR);
+    kdbp("Guest EFER = 0x%08x%08x\n",
+           (uint32_t)vmr(GUEST_EFER_HIGH), (uint32_t)vmr(GUEST_EFER));
+    kdbp("Guest PAT = 0x%08x%08x\n",
+           (uint32_t)vmr(GUEST_PAT_HIGH), (uint32_t)vmr(GUEST_PAT));
+    x  = (unsigned long long)vmr(TSC_OFFSET_HIGH) << 32;
+    x |= (uint32_t)vmr(TSC_OFFSET);
+    kdbp("TSC Offset = %016llx\n", x);
+    x  = (unsigned long long)vmr(GUEST_IA32_DEBUGCTL_HIGH) << 32;
+    x |= (uint32_t)vmr(GUEST_IA32_DEBUGCTL);
+    kdbp("DebugCtl=%016llx DebugExceptions=%016llx\n", x,
+           (unsigned long long)vmr(GUEST_PENDING_DBG_EXCEPTIONS));
+    kdbp("Interruptibility=%04x ActivityState=%04x\n",
+           (int)vmr(GUEST_INTERRUPTIBILITY_INFO),
+           (int)vmr(GUEST_ACTIVITY_STATE));
+
+    kdbp("MSRs: entry_load:$%d exit_load:$%d exit_store:$%d\n",
+         vmr(VM_ENTRY_MSR_LOAD_COUNT), vmr(VM_EXIT_MSR_LOAD_COUNT),
+         vmr(VM_EXIT_MSR_STORE_COUNT));
+
+    kdbp("\n*** Host State ***\n");
+    kdbp("RSP = 0x%016llx  RIP = 0x%016llx\n", 
+           (unsigned long long)vmr(HOST_RSP),
+           (unsigned long long)vmr(HOST_RIP));
+    kdbp("CS=%04x DS=%04x ES=%04x FS=%04x GS=%04x SS=%04x TR=%04x\n",
+           (uint16_t)vmr(HOST_CS_SELECTOR),
+           (uint16_t)vmr(HOST_DS_SELECTOR),
+           (uint16_t)vmr(HOST_ES_SELECTOR),
+           (uint16_t)vmr(HOST_FS_SELECTOR),
+           (uint16_t)vmr(HOST_GS_SELECTOR),
+           (uint16_t)vmr(HOST_SS_SELECTOR),
+           (uint16_t)vmr(HOST_TR_SELECTOR));
+    kdbp("FSBase=%016llx GSBase=%016llx TRBase=%016llx\n",
+           (unsigned long long)vmr(HOST_FS_BASE),
+           (unsigned long long)vmr(HOST_GS_BASE),
+           (unsigned long long)vmr(HOST_TR_BASE));
+    kdbp("GDTBase=%016llx IDTBase=%016llx\n",
+           (unsigned long long)vmr(HOST_GDTR_BASE),
+           (unsigned long long)vmr(HOST_IDTR_BASE));
+    kdbp("CR0=%016llx CR3=%016llx CR4=%016llx\n",
+           (unsigned long long)vmr(HOST_CR0),
+           (unsigned long long)vmr(HOST_CR3),
+           (unsigned long long)vmr(HOST_CR4));
+    kdbp("Sysenter RSP=%016llx CS:RIP=%04x:%016llx\n",
+           (unsigned long long)vmr(HOST_SYSENTER_ESP),
+           (int)vmr(HOST_SYSENTER_CS),
+           (unsigned long long)vmr(HOST_SYSENTER_EIP));
+    kdbp("Host PAT = 0x%08x%08x\n",
+           (uint32_t)vmr(HOST_PAT_HIGH), (uint32_t)vmr(HOST_PAT));
+
+    kdbp("\n*** Control State ***\n");
+    kdbp("PinBased=%08x CPUBased=%08x SecondaryExec=%08x\n",
+           (uint32_t)vmr(PIN_BASED_VM_EXEC_CONTROL),
+           (uint32_t)vmr(CPU_BASED_VM_EXEC_CONTROL),
+           (uint32_t)vmr(SECONDARY_VM_EXEC_CONTROL));
+    kdbp("EntryControls=%08x ExitControls=%08x\n",
+           (uint32_t)vmr(VM_ENTRY_CONTROLS),
+           (uint32_t)vmr(VM_EXIT_CONTROLS));
+    kdbp("ExceptionBitmap=%08x\n",
+           (uint32_t)vmr(EXCEPTION_BITMAP));
+    kdbp("PAGE_FAULT_ERROR_CODE  MASK:0x%lx  MATCH:0x%lx\n", 
+         (unsigned long)vmr(PAGE_FAULT_ERROR_CODE_MASK),
+         (unsigned long)vmr(PAGE_FAULT_ERROR_CODE_MATCH));
+    kdbp("VMEntry: intr_info=%08x errcode=%08x ilen=%08x\n",
+           (uint32_t)vmr(VM_ENTRY_INTR_INFO),
+           (uint32_t)vmr(VM_ENTRY_EXCEPTION_ERROR_CODE),
+           (uint32_t)vmr(VM_ENTRY_INSTRUCTION_LEN));
+    kdbp("VMExit: intr_info=%08x errcode=%08x ilen=%08x\n",
+           (uint32_t)vmr(VM_EXIT_INTR_INFO),
+           (uint32_t)vmr(VM_EXIT_INTR_ERROR_CODE),
+           (uint32_t)vmr(VM_EXIT_INSTRUCTION_LEN));
+    kdbp("        reason=%08x qualification=%08x\n",
+           (uint32_t)vmr(VM_EXIT_REASON),
+           (uint32_t)vmr(EXIT_QUALIFICATION));
+    kdbp("IDTVectoring: info=%08x errcode=%08x\n",
+           (uint32_t)vmr(IDT_VECTORING_INFO),
+           (uint32_t)vmr(IDT_VECTORING_ERROR_CODE));
+    kdbp("TPR Threshold = 0x%02x\n",
+           (uint32_t)vmr(TPR_THRESHOLD));
+    kdbp("EPT pointer = 0x%08x%08x\n",
+           (uint32_t)vmr(EPT_POINTER_HIGH), (uint32_t)vmr(EPT_POINTER));
+    kdbp("Virtual processor ID = 0x%04x\n",
+           (uint32_t)vmr(VIRTUAL_PROCESSOR_ID));
+    kdbp("=================================================================\n");
+}
+
+/* Flush VMCS on this cpu if it needs to: 
+ *   - Upon leaving kdb, the HVM cpu will resume in vmx_vmexit_handler() and 
+ *     do __vmreads. So, the VMCS pointer can't be left cleared.
+ *   - Doing __vmpclear will set the vmx state to 'clear', so to resume a
+ *     vmlaunch must be done and not vmresume. This means, we must clear 
+ *     arch_vmx->launched.
+ */
+void kdb_curr_cpu_flush_vmcs(void)
+{
+    struct domain *dp;
+    struct vcpu *vp;
+    int ccpu = smp_processor_id();
+    struct vmcs_struct *cvp = this_cpu(current_vmcs);
+
+    if (this_cpu(current_vmcs) == NULL)
+        return;             /* no HVM active on this CPU */
+
+    kdbp("KDB:[%d] curvmcs:%lx/%lx\n", ccpu, cvp, virt_to_maddr(cvp));
+
+    /* looks like we got one. unfortunately, current_vmcs points to vmcs 
+     * and not VCPU, so we gotta search the entire list... */
+    for_each_domain (dp) {
+        if ( !(is_hvm_or_hyb_domain(dp)) || dp->is_dying)
+            continue;
+        for_each_vcpu (dp, vp) {
+            if ( vp->arch.hvm_vmx.vmcs == cvp ) {
+                __vmpclear(virt_to_maddr(vp->arch.hvm_vmx.vmcs));
+                __vmptrld(virt_to_maddr(vp->arch.hvm_vmx.vmcs));
+                vp->arch.hvm_vmx.launched = 0;
+                this_cpu(current_vmcs) = NULL;
+                kdbp("KDB:[%d] %d:%d current_vmcs:%lx/%lx flushed\n",
+                     ccpu, dp->domain_id, vp->vcpu_id, cvp, virt_to_maddr(cvp));
+            }
+        }
+    }
+}
+
+/*
+ * domid == 0 : display for all HVM domains  (dom0 is never an HVM domain)
+ * vcpu id == -1 : display all vcpuids
+ * PreCondition: all HVM cpus (including current cpu) have flushed VMCS
+ */
+void kdb_dump_vmcs(domid_t did, int vid)
+{
+    struct domain *dp;
+    struct vcpu *vp;
+    struct vmcs_struct  *vmcsp;
+    u64 addr = -1;
+
+    ASSERT(!local_irq_is_enabled());     /* kdb should always run disabled */
+    __vmptrst(&addr);
+
+    for_each_domain (dp) {
+        if ( !(is_hvm_or_hyb_domain(dp)) || dp->is_dying)
+            continue;
+        if (did != 0 && did != dp->domain_id)
+            continue;
+
+        for_each_vcpu (dp, vp) {
+            if (vid != -1 && vid != vp->vcpu_id)
+                continue;
+
+	    vmcsp = vp->arch.hvm_vmx.vmcs;
+            kdbp("VMCS %lx/%lx [domid:%d (%p)  vcpu:%d (%p)]:\n", vmcsp,
+	         virt_to_maddr(vmcsp), dp->domain_id, dp, vp->vcpu_id, vp);
+            __vmptrld(virt_to_maddr(vmcsp));
+            kdb_print_vmcs(vp);
+            __vmpclear(virt_to_maddr(vmcsp));
+            vp->arch.hvm_vmx.launched = 0;
+        }
+        kdbp("\n");
+    }
+    /* restore orig vmcs pointer for __vmreads in vmx_vmexit_handler() */
+    if (addr && addr != (u64)-1)
+        __vmptrld(addr);
+}
+#endif
 
 /*
  * Local variables:
diff -r 32034d1914a6 xen/arch/x86/hvm/vmx/vmx.c
--- a/xen/arch/x86/hvm/vmx/vmx.c	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/arch/x86/hvm/vmx/vmx.c	Wed Aug 29 14:39:57 2012 -0700
@@ -2183,11 +2183,14 @@
         printk("reason not known yet!");
         break;
     }
-
+#if defined(XEN_KDB_CONFIG)
+    kdbp("\n************* VMCS Area **************\n");
+    kdb_dump_vmcs(curr->domain->domain_id, (curr)->vcpu_id);
+#else
     printk("************* VMCS Area **************\n");
     vmcs_dump_vcpu(curr);
     printk("**************************************\n");
-
+#endif
     domain_crash(curr->domain);
 }
 
@@ -2415,6 +2418,12 @@
             write_debugreg(6, exit_qualification | 0xffff0ff0);
             if ( !v->domain->debugger_attached || cpu_has_monitor_trap_flag )
                 goto exit_and_crash;
+
+#if defined(XEN_KDB_CONFIG)
+            /* TRAP_debug: IP points correctly to next instr */
+            if (kdb_handle_trap_entry(vector, regs))
+                break;
+#endif
             domain_pause_for_debugger();
             break;
         case TRAP_int3: 
@@ -2423,6 +2432,13 @@
             if ( v->domain->debugger_attached )
             {
                 update_guest_eip(); /* Safe: INT3 */            
+#if defined(XEN_KDB_CONFIG)
+                /* vmcs.IP points to bp, kdb expects bp+1. Hence after the above
+                 * update_guest_eip which updates to bp+1. works for gdbsx too 
+                 */
+                if (kdb_handle_trap_entry(vector, regs))
+                    break;
+#endif
                 current->arch.gdbsx_vcpu_event = TRAP_int3;
                 domain_pause_for_debugger();
                 break;
@@ -2707,6 +2723,10 @@
     case EXIT_REASON_MONITOR_TRAP_FLAG:
         v->arch.hvm_vmx.exec_control &= ~CPU_BASED_MONITOR_TRAP_FLAG;
         vmx_update_cpu_exec_control(v);
+#if defined(XEN_KDB_CONFIG)
+        if (kdb_handle_trap_entry(TRAP_debug, regs))
+            break;
+#endif
         if ( v->arch.hvm_vcpu.single_step ) {
           hvm_memory_event_single_step(regs->eip);
           if ( v->domain->debugger_attached )
diff -r 32034d1914a6 xen/arch/x86/irq.c
--- a/xen/arch/x86/irq.c	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/arch/x86/irq.c	Wed Aug 29 14:39:57 2012 -0700
@@ -2305,3 +2305,29 @@
     return is_hvm_domain(d) && pirq &&
            pirq->arch.hvm.emuirq != IRQ_UNBOUND; 
 }
+
+#ifdef XEN_KDB_CONFIG
+void kdb_prnt_guest_mapped_irqs(void)
+{
+    int irq, j;
+    char affstr[NR_CPUS/4+NR_CPUS/32+2];    /* courtesy dump_irqs() */
+
+    kdbp("irq  vec  aff  type  domid:mapped-pirq pairs  (all in decimal)\n");
+    for (irq=0; irq < nr_irqs; irq++) {
+        irq_desc_t  *dp = irq_to_desc(irq);
+        struct arch_irq_desc *archp = &dp->arch;
+        irq_guest_action_t *actp = (irq_guest_action_t *)dp->action;
+
+        if (!dp->handler ||dp->handler==&no_irq_type || !(dp->status&IRQ_GUEST))
+            continue;
+
+        cpumask_scnprintf(affstr, sizeof(affstr), dp->affinity);
+        kdbp("[%3ld] %3d %3s %-13s ", irq, archp->vector, affstr,
+             dp->handler->typename);
+        for (j=0; j < actp->nr_guests; j++)
+            kdbp("%03d:%04d ", actp->guest[j]->domain_id,
+                 domain_irq_to_pirq(actp->guest[j], irq));
+        kdbp("\n");
+    }
+}
+#endif
diff -r 32034d1914a6 xen/arch/x86/setup.c
--- a/xen/arch/x86/setup.c	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/arch/x86/setup.c	Wed Aug 29 14:39:57 2012 -0700
@@ -47,6 +47,13 @@
 #include <xen/cpu.h>
 #include <asm/nmi.h>
 
+#ifdef XEN_KDB_CONFIG
+#include <asm/debugger.h>
+
+int opt_earlykdb=0;
+boolean_param("earlykdb", opt_earlykdb);
+#endif
+
 /* opt_nosmp: If true, secondary processors are ignored. */
 static bool_t __initdata opt_nosmp;
 boolean_param("nosmp", opt_nosmp);
@@ -1242,6 +1249,11 @@
 
     trap_init();
 
+#ifdef XEN_KDB_CONFIG
+    kdb_init();
+    if (opt_earlykdb)
+        kdb_trap_immed(KDB_TRAP_NONFATAL);
+#endif
     rcu_init();
     
     early_time_init();
diff -r 32034d1914a6 xen/arch/x86/smp.c
--- a/xen/arch/x86/smp.c	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/arch/x86/smp.c	Wed Aug 29 14:39:57 2012 -0700
@@ -273,7 +273,7 @@
  * Structure and data for smp_call_function()/on_selected_cpus().
  */
 
-static void __smp_call_function_interrupt(void);
+static void __smp_call_function_interrupt(struct cpu_user_regs *regs);
 static DEFINE_SPINLOCK(call_lock);
 static struct call_data_struct {
     void (*func) (void *info);
@@ -321,7 +321,7 @@
     if ( cpumask_test_cpu(smp_processor_id(), &call_data.selected) )
     {
         local_irq_disable();
-        __smp_call_function_interrupt();
+        __smp_call_function_interrupt(NULL);
         local_irq_enable();
     }
 
@@ -390,7 +390,7 @@
     this_cpu(irq_count)++;
 }
 
-static void __smp_call_function_interrupt(void)
+static void __smp_call_function_interrupt(struct cpu_user_regs *regs)
 {
     void (*func)(void *info) = call_data.func;
     void *info = call_data.info;
@@ -411,6 +411,11 @@
     {
         mb();
         cpumask_clear_cpu(cpu, &call_data.selected);
+#ifdef XEN_KDB_CONFIG
+        if (info && !strcmp(info, "XENKDB")) {           /* called from kdb */
+                (*(void (*)(struct cpu_user_regs *, void *))func)(regs, info);
+        } else
+#endif
         (*func)(info);
     }
 
@@ -421,5 +426,5 @@
 {
     ack_APIC_irq();
     perfc_incr(ipis);
-    __smp_call_function_interrupt();
+    __smp_call_function_interrupt(regs);
 }
diff -r 32034d1914a6 xen/arch/x86/time.c
--- a/xen/arch/x86/time.c	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/arch/x86/time.c	Wed Aug 29 14:39:57 2012 -0700
@@ -2007,6 +2007,46 @@
 }
 __initcall(setup_dump_softtsc);
 
+#ifdef XEN_KDB_CONFIG
+void kdb_time_resume(int update_domains)
+{
+        s_time_t now;
+        int ccpu = smp_processor_id();
+        struct cpu_time *t = &this_cpu(cpu_time);
+
+        if (!plt_src.read_counter)            /* not initialized for earlykdb */
+                return;
+
+        if (update_domains) {
+                plt_stamp = plt_src.read_counter();
+                platform_timer_stamp = plt_stamp64;
+                platform_time_calibration();
+                do_settime(get_cmos_time(), 0, read_platform_stime());
+        }
+        if (local_irq_is_enabled())
+                kdbp("kdb BUG: irqs enabled in time_resume(). ccpu:%d\n", ccpu);
+
+        rdtscll(t->local_tsc_stamp);
+        now = read_platform_stime();
+        t->stime_master_stamp = now;
+        t->stime_local_stamp  = now;
+
+        update_vcpu_system_time(current);
+
+        if (update_domains)
+                set_timer(&calibration_timer, NOW() + EPOCH);
+}
+
+void kdb_dump_time_pcpu(void)
+{
+    int cpu;
+    for_each_online_cpu(cpu) {
+        kdbp("[%d]: cpu_time: %016lx\n", cpu, &per_cpu(cpu_time, cpu));
+        kdbp("[%d]: cpu_calibration: %016lx\n", cpu, 
+             &per_cpu(cpu_calibration, cpu));
+    }
+}
+#endif
 /*
  * Local variables:
  * mode: C
diff -r 32034d1914a6 xen/arch/x86/traps.c
--- a/xen/arch/x86/traps.c	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/arch/x86/traps.c	Wed Aug 29 14:39:57 2012 -0700
@@ -225,7 +225,7 @@
 
 #else
 
-static void show_trace(struct cpu_user_regs *regs)
+void show_trace(struct cpu_user_regs *regs)
 {
     unsigned long *frame, next, addr, low, high;
 
@@ -3326,6 +3326,10 @@
     if ( nmi_callback(regs, cpu) )
         return;
 
+#ifdef XEN_KDB_CONFIG
+    if (kdb_enabled && kdb_handle_trap_entry(TRAP_nmi, regs))
+        return;
+#endif
     if ( nmi_watchdog )
         nmi_watchdog_tick(regs);
 
diff -r 32034d1914a6 xen/arch/x86/x86_64/compat/entry.S
--- a/xen/arch/x86/x86_64/compat/entry.S	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/arch/x86/x86_64/compat/entry.S	Wed Aug 29 14:39:57 2012 -0700
@@ -95,6 +95,10 @@
 /* %rbx: struct vcpu */
 ENTRY(compat_test_all_events)
         cli                             # tests must not race interrupts
+#ifdef XEN_KDB_CONFIG
+        testl $1, kdb_session_begun(%rip)
+        jnz   compat_restore_all_guest
+#endif
 /*compat_test_softirqs:*/
         movl  VCPU_processor(%rbx),%eax
         shlq  $IRQSTAT_shift,%rax
diff -r 32034d1914a6 xen/arch/x86/x86_64/entry.S
--- a/xen/arch/x86/x86_64/entry.S	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/arch/x86/x86_64/entry.S	Wed Aug 29 14:39:57 2012 -0700
@@ -184,6 +184,10 @@
 /* %rbx: struct vcpu */
 test_all_events:
         cli                             # tests must not race interrupts
+#ifdef XEN_KDB_CONFIG                   /* 64bit dom0 will resume here */
+        testl $1, kdb_session_begun(%rip)
+        jnz   restore_all_guest
+#endif
 /*test_softirqs:*/  
         movl  VCPU_processor(%rbx),%eax
         shl   $IRQSTAT_shift,%rax
@@ -546,6 +550,13 @@
 
 ENTRY(int3)
         pushq $0
+#ifdef XEN_KDB_CONFIG
+        pushq %rax
+        GET_CPUINFO_FIELD(CPUINFO_processor_id, %rax)
+        movq  (%rax), %rax
+        lock  bts %rax, kdb_cpu_traps(%rip)
+        popq  %rax
+#endif
         movl  $TRAP_int3,4(%rsp)
         jmp   handle_exception
 
diff -r 32034d1914a6 xen/common/domain.c
--- a/xen/common/domain.c	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/common/domain.c	Wed Aug 29 14:39:57 2012 -0700
@@ -530,6 +530,14 @@
 {
     struct vcpu *v;
 
+#ifdef XEN_KDB_CONFIG
+    if (reason == SHUTDOWN_crash) {
+        if ( IS_PRIV(d) )
+            kdb_trap_immed(KDB_TRAP_FATAL);
+        else
+            kdb_trap_immed(KDB_TRAP_NONFATAL);
+    }
+#endif
     spin_lock(&d->shutdown_lock);
 
     if ( d->shutdown_code == -1 )
@@ -624,7 +632,9 @@
     for_each_vcpu ( d, v )
         vcpu_sleep_nosync(v);
 
-    send_global_virq(VIRQ_DEBUGGER);
+    /* send VIRQ_DEBUGGER to guest only if gdbsx_vcpu_event is not active */
+    if (current->arch.gdbsx_vcpu_event == 0)
+        send_global_virq(VIRQ_DEBUGGER);
 }
 
 /* Complete domain destroy after RCU readers are not holding old references. */
diff -r 32034d1914a6 xen/common/sched_credit.c
--- a/xen/common/sched_credit.c	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/common/sched_credit.c	Wed Aug 29 14:39:57 2012 -0700
@@ -1475,6 +1475,33 @@
     printk("\n");
 }
 
+#ifdef XEN_KDB_CONFIG
+static void kdb_csched_dump(int cpu)
+{
+    struct csched_pcpu *pcpup = CSCHED_PCPU(cpu);
+    struct vcpu *scurrvp = (CSCHED_VCPU(current))->vcpu;
+    struct list_head *tmp, *runq = RUNQ(cpu);
+
+    kdbp("    csched_pcpu: %p\n", pcpup);
+    kdbp("    curr csched:%p {vcpu:%p id:%d domid:%d}\n", (current)->sched_priv,
+         scurrvp, scurrvp->vcpu_id, scurrvp->domain->domain_id);
+    kdbp("    runq:\n");
+
+    /* runq_elem.next is first in the struct, so avoid the list macros */
+    if (offsetof(struct csched_vcpu, runq_elem.next) != 0) {
+        kdbp("next is not first in struct csched_vcpu. please fixme\n");
+        return;        /* otherwise for loop will crash */
+    }
+    for (tmp = runq->next; tmp != runq; tmp = tmp->next) {
+
+        struct csched_vcpu *csp = (struct csched_vcpu *)tmp;
+        struct vcpu *vp = csp->vcpu;
+        kdbp("      csp:%p pri:%02d vcpu: {p:%p id:%d domid:%d}\n", csp,
+             csp->pri, vp, vp->vcpu_id, vp->domain->domain_id);
+    }
+}
+#endif
+
 static void
 csched_dump_pcpu(const struct scheduler *ops, int cpu)
 {
@@ -1484,6 +1511,10 @@
     int loop;
 #define cpustr keyhandler_scratch
 
+#ifdef XEN_KDB_CONFIG
+    kdb_csched_dump(cpu);
+    return;
+#endif
     spc = CSCHED_PCPU(cpu);
     runq = &spc->runq;
 
diff -r 32034d1914a6 xen/common/schedule.c
--- a/xen/common/schedule.c	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/common/schedule.c	Wed Aug 29 14:39:57 2012 -0700
@@ -1454,6 +1454,25 @@
     schedule();
 }
 
+#ifdef XEN_KDB_CONFIG
+void kdb_print_sched_info(void)
+{
+    int cpu;
+
+    kdbp("Scheduler: name:%s opt_name:%s id:%d\n", ops.name, ops.opt_name,
+         ops.sched_id);
+    kdbp("per cpu schedule_data:\n");
+    for_each_online_cpu(cpu) {
+        struct schedule_data *p =  &per_cpu(schedule_data, cpu);
+        kdbp("  cpu:%d  &(per cpu)schedule_data:%p\n", cpu, p);
+        kdbp("         curr:%p sched_priv:%p\n", p->curr, p->sched_priv);
+        kdbp("\n");
+        ops.dump_cpu_state(&ops, cpu);
+        kdbp("\n");
+    }
+}
+#endif
+
 #ifdef CONFIG_COMPAT
 #include "compat/schedule.c"
 #endif
diff -r 32034d1914a6 xen/common/symbols.c
--- a/xen/common/symbols.c	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/common/symbols.c	Wed Aug 29 14:39:57 2012 -0700
@@ -168,3 +168,21 @@
 
     spin_unlock_irqrestore(&lock, flags);
 }
+
+#ifdef XEN_KDB_CONFIG
+/*
+ * Given a symbol, return its address.
+ */
+unsigned long address_lookup(char *symp)
+{
+    int i, off = 0;
+    char namebuf[KSYM_NAME_LEN+1];
+
+    for (i=0; i < symbols_num_syms; i++) {
+        off = symbols_expand_symbol(off, namebuf);
+        if (strcmp(namebuf, symp) == 0)                  /* found it */
+            return symbols_address(i);
+    }
+    return 0;
+}
+#endif
diff -r 32034d1914a6 xen/common/timer.c
--- a/xen/common/timer.c	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/common/timer.c	Wed Aug 29 14:39:57 2012 -0700
@@ -643,6 +643,48 @@
     register_keyhandler('a', &dump_timerq_keyhandler);
 }
 
+#ifdef XEN_KDB_CONFIG
+#include <xen/symbols.h>
+void kdb_dump_timer_queues(void)
+{
+    struct timer  *t;
+    struct timers *ts;
+    unsigned long sz, offs;
+    char buf[KSYM_NAME_LEN+1];
+    int cpu, j;
+    u64 tsc;
+
+    for_each_online_cpu( cpu )
+    {
+        ts = &per_cpu(timers, cpu);
+        kdbp("CPU[%02d]:", cpu);
+
+        if (cpu == smp_processor_id()) {
+            s_time_t now = NOW();
+            rdtscll(tsc);
+            kdbp("NOW:0x%08x%08x TSC:0x%016lx\n", (u32)(now>>32),(u32)now, tsc);
+        } else
+            kdbp("\n");
+
+        /* timers in the heap */
+        for ( j = 1; j <= GET_HEAP_SIZE(ts->heap); j++ ) {
+            t = ts->heap[j];
+            kdbp("  %d: exp=0x%08x%08x fn:%s data:%p\n",
+                 j, (u32)(t->expires>>32), (u32)t->expires,
+                 symbols_lookup((unsigned long)t->function, &sz, &offs, buf),
+                 t->data);
+        }
+        /* timers on the link list */
+        for ( t = ts->list, j = 0; t != NULL; t = t->list_next, j++ ) {
+            kdbp(" L%d: exp=0x%08x%08x fn:%s data:%p\n",
+                 j, (u32)(t->expires>>32), (u32)t->expires,
+                 symbols_lookup((unsigned long)t->function, &sz, &offs, buf),
+                 t->data);
+        }
+    }
+}
+#endif
+
 /*
  * Local variables:
  * mode: C
diff -r 32034d1914a6 xen/drivers/char/console.c
--- a/xen/drivers/char/console.c	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/drivers/char/console.c	Wed Aug 29 14:39:57 2012 -0700
@@ -295,6 +295,21 @@
 {
     static int switch_code_count = 0;
 
+#ifdef XEN_KDB_CONFIG
+    /* if ctrl-\ pressed and kdb handles it, return */
+    if (kdb_enabled && c == 0x1c) {
+        if (!kdb_session_begun) {
+            if (kdb_keyboard(regs))
+                return;
+        } else {
+            kdbp("Sorry... kdb session already active.. please try again..\n");
+            return;
+        }
+    }
+    if (kdb_session_begun)      /* kdb should already be polling */
+        return;                 /* swallow chars so they don't buffer in dom0 */
+#endif
+
     if ( switch_code && (c == switch_code) )
     {
         /* We eat CTRL-<switch_char> in groups of 3 to switch console input. */
@@ -710,6 +725,18 @@
     atomic_dec(&print_everything);
 }
 
+#ifdef XEN_KDB_CONFIG
+void console_putc(char c)
+{
+    serial_putc(sercon_handle, c);
+}
+
+int console_getc(void)
+{
+    return serial_getc(sercon_handle);
+}
+#endif
+
 /*
  * printk rate limiting, lifted from Linux.
  *
diff -r 32034d1914a6 xen/include/asm-x86/debugger.h
--- a/xen/include/asm-x86/debugger.h	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/include/asm-x86/debugger.h	Wed Aug 29 14:39:57 2012 -0700
@@ -39,7 +39,11 @@
 #define DEBUGGER_trap_fatal(_v, _r) \
     if ( debugger_trap_fatal(_v, _r) ) return;
 
-#if defined(CRASH_DEBUG)
+#if defined(XEN_KDB_CONFIG)
+#define debugger_trap_immediate() kdb_trap_immed(KDB_TRAP_NONFATAL)
+#define debugger_trap_fatal(_v, _r) kdb_trap_fatal(_v, _r)
+
+#elif defined(CRASH_DEBUG)
 
 #include <xen/gdbstub.h>
 
@@ -70,6 +74,10 @@
 {
     struct vcpu *v = current;
 
+#ifdef XEN_KDB_CONFIG
+    if (kdb_handle_trap_entry(vector, regs))
+        return 1;
+#endif
     if ( guest_kernel_mode(v, regs) && v->domain->debugger_attached &&
          ((vector == TRAP_int3) || (vector == TRAP_debug)) )
     {
diff -r 32034d1914a6 xen/include/xen/lib.h
--- a/xen/include/xen/lib.h	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/include/xen/lib.h	Wed Aug 29 14:39:57 2012 -0700
@@ -116,4 +116,7 @@
 struct cpu_user_regs;
 void dump_execstate(struct cpu_user_regs *);
 
+#ifdef XEN_KDB_CONFIG
+#include "../../kdb/include/kdb_extern.h"
+#endif
 #endif /* __LIB_H__ */
diff -r 32034d1914a6 xen/include/xen/sched.h
--- a/xen/include/xen/sched.h	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/include/xen/sched.h	Wed Aug 29 14:39:57 2012 -0700
@@ -576,11 +576,14 @@
 unsigned long hypercall_create_continuation(
     unsigned int op, const char *format, ...);
 void hypercall_cancel_continuation(void);
-
+#ifdef XEN_KDB_CONFIG
+#define hypercall_preempt_check() (0)
+#else
 #define hypercall_preempt_check() (unlikely(    \
         softirq_pending(smp_processor_id()) |   \
         local_events_need_delivery()            \
     ))
+#endif
 
 extern struct domain *domain_list;
 
diff -r 32034d1914a6 xen/kdb/Makefile
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/xen/kdb/Makefile	Wed Aug 29 14:39:57 2012 -0700
@@ -0,0 +1,5 @@
+
+obj-y		+= kdbmain.o kdb_cmds.o kdb_io.o 
+
+subdir-y += x86 guest
+
diff -r 32034d1914a6 xen/kdb/README
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/xen/kdb/README	Wed Aug 29 14:39:57 2012 -0700
@@ -0,0 +1,243 @@
+
+Welcome to kdb for xen, a built-in hypervisor debugger.
+
+FEATURES:
+   - set breakpoints in hypervisor
+   - examine virt/machine memory, registers, domains, vcpus, etc...
+   - single step, single step till jump/call, step over call to next
+     instruction after the call.
+   - examine memory of a PV/HVM guest. 
+   - set breakpoints, single step, etc... for a PV guest.
+   - breaking into the debugger will freeze the system, all CPUs will pause,
+     no interrupts are acknowledged in the debugger. (Hence, the wall clock
+     will drift)
+   - single step will step only that cpu.
+   - earlykdb: break into kdb very early during boot. Put "earlykdb" on the
+               xen command line in grub.conf.
+   - generic tracing functions (see below) for quick tracing to debug timing
+     related problems. To use:
+        o set KDBTRCMAX to the max number of records in the circular trace
+          buffer in kdbmain.c
+        o call kdb_trc() from anywhere in xen
+        o turn tracing on by setting kdb_trcon in kdbmain.c or via the trcon
+          command
+        o trcp in kdb will give hints to dump trace records. Use dd to see
+          the buffer
+        o trcz will zero out the entire buffer if needed
+
+NOTE:
+   - since almost all numbers are in hex, 0x is not prefixed. Instead, decimal
+     numbers are preceded by $, as in $17 (sorry, one gets used to it). Note,
+     vcpu num, cpu num, domid are always displayed in decimal, without $.
+   - watchdog must be disabled to use kdb
+
+ISSUES:
+   - Currently, a debug build of the hypervisor is not supported. Make sure
+     NDEBUG is defined, or compile with debug=n
+   - "timer went backwards" messages on dom0, but kdb/hyp should be fine.
+     I usually do "echo 2 > /proc/sys/kernel/printk" when using kdb.
+   - 32bit hypervisor may hang. Tested on 64bit hypervisor only.
+    
+
+TO BUILD:
+ - run: make kdb=y
+
+HOW TO USE:
+  1. A serial line is needed to use the debugger. Set up a serial line
+     from the source machine to the target machine. Make sure the serial
+     line is working properly by getting a login prompt, logging in, etc.
+
+  2. Add following to grub.conf:
+        kernel /xen.kdb console=com1,vga com1=57600,8n1 dom0_mem=542M
+
+        (57600 or whatever used in step 1 above)
+
+  3. Boot the hypervisor built with the debugger. 
+
+  4. ctrl-\ (ctrl and backslash) will break into the debugger. If the system is
+     badly hung, sending an NMI will also break into it. However, once kdb is
+     entered via NMI, normal execution can't continue.
+
+  5. type 'h' for list of commands.
+
+  6. Command line editing is limited to backspace. ctrl-c to start a new cmd.
+
+
+
+GUEST debug:
+  - type sym in the debugger
+  - for REL4, grep kallsyms_names, kallsyms_addresses, and kallsyms_num_syms
+    in the guest System.map* file. Run sym again with domid and the three
+    values on the command line.
+  - Now basic symbols can be used for guest debug. Note, if the binary is not
+    built with symbols, only function names are available, but not global vars.
+
+    Eg: sym 0 c0696084 c068a590 c0696080 c06b43e8 c06b4740
+        will set symbols for dom 0. Then :
+
+        [4]xkdb> bp some_function 0
+
+        will set a bp at some_function in dom 0
+
+        [3]xkdb> dw c068a590 32 0 : display 32 bytes of dom0 memory
+
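The grep step above can look like the following; the file and every address here are made up for illustration, and the real values must come from the guest's own System.map-* file.

```shell
# Create a toy System.map to show the shape of the lookup; a real one has
# thousands of entries in the same "address type name" format.
cat > /tmp/System.map-demo <<'EOF'
c068a590 R kallsyms_addresses
c0696080 R kallsyms_num_syms
c0696084 R kallsyms_names
c06b43e8 R kallsyms_token_table
c06b4740 R kallsyms_token_index
EOF

grep -E 'kallsyms_(names|addresses|num_syms)' /tmp/System.map-demo
```

The three matching addresses (plus the token_table/token_index pair, when present) are the values passed to the sym command after the domid.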
+
+Tips:
+  - In "[0]xkdb>"  : 0 is the cpu number in decimal
+  - In
+      00000000c042645c: 0:do_timer+17                  push %ebp
+    0:do_timer : 0 is the domid in hex
+    offset +17 is in hex.
+
+    absence of the "0:" prefix indicates a hypervisor function
+
+  - commands starting with kdb (kdb*) are for kdb debug only.
+
+
+Finally,
+ - think hex.
+ - bug/problem: enter kdbdbg, reproduce, and send me the output.
+   If the output is not enough, I may ask to run kdbdbg twice, then collect
+   output.
+
+
+Thanks,
+Mukesh Rathor
+Oracle Corporation,
+Redwood Shores, CA 94065
+
+--------------------------------------------------------------------------------
+COMMAND DESCRIPTION:
+
+info:  Print basic info like version, compile flags, etc..
+
+cur:  print current domain id and vcpu id
+
+f: display current stack. If a vcpu ptr is given, then print stack for that
+   VCPU by using its IP and SP.
+
+fg: display stack for a guest given domid, SP and IP.
+
+dw: display words of memory. 'num' of bytes is optional, but if displaying
+    guest memory, then it is required.
+
+dd: same as above, but display doublewords.
+
+dwm: same as above but the address is machine address instead of virtual.
+
+ddm: same as above, but display doublewords.
+
+dr: display registers. If 'sp' is specified then print a few extra registers.
+
+drg: display guest context saved on stack bottom.
+
+dis: disassemble instructions. If disassembling for a guest, then 'num' must
+     be specified. 'num' is the number of instrs to display.
+
+dism: toggle disassembly mode between Intel and ATT/GAS.
+
+mw: modify word in memory given virtual address. 'domid' may be specified if
+    modifying guest memory. value is assumed in hex even without 0x.
+
+md: same as above but modify doubleword.
+
+mr: modify register. value is assumed hex.
+
+bc: clear given or all breakpoints
+
+bp: display breakpoints or set a breakpoint. Domid may be specified to set a bp
+    in guest. kdb functions may not be specified if debugging kdb.
+    Example:
+      xkdb> bp acpi_processor_idle  : will set bp in xen
+      xkdb> bp default_idle 0 :   will set bp in domid 0
+      xkdb> bp idle_cpu 9 :   will set bp in domid 9
+
+     Conditions may be specified for a bp: lhs == rhs or lhs != rhs
+     where : lhs is register like 'r6', 'rax', etc...  or memory location
+             rhs is hex value with or without leading 0x.
+     Thus,
+      xkdb> bp acpi_processor_idle rdi == c000 
+      xkdb> bp 0xffffffff80062ebc 0 rsi == ffff880021edbc98 : will break into
+            kdb at 0xffffffff80062ebc in dom0 when rsi is ffff880021edbc98 
+
+btp: break point trace. Upon bp, print some info and continue without stopping.
+   Ex: btp idle_cpu 7 rax rbx 0x20ef5a5 r9
+
+   will print: rax, rbx, *(long *)0x20ef5a5, r9 upon hitting idle_cpu() and 
+               continue.
+
+wp: set a watchpoint at a virtual address which can belong to hypervisor or
+    any guest. Do not specify wp in kdb path if debugging kdb.
+
+wc: clear given or all watchpoints.
+
+ni: single step, stepping over function calls.
+
+ss: single step. Be careful when in interrupt handlers or context switches.
+    
+ssb: single step to branch. Use with care.
+
+go: leave kdb and continue.
+
+cpu: go back to the original cpu that entered kdb. If 'cpu number' given, then
+     switch to that cpu. If 'all', then show status of all cpus.
+
+nmi: Only available in hung/crash state. Send NMI to a cpu that may be hung.
+
+sym: Initialize a symbol table for debugging a guest. Look into the System.map
+     file of guest for certain symbol values and provide them here.
+
+vcpuh: Given vcpu ptr, display hvm_vcpu struct.
+
+vcpu: Display current vcpu struct. If 'vcpu-ptr' given, display that vcpu.
+
+dom: display current domain. If 'domid' then display that domid. If 'all', then
+     display all domains.
+
+sched: show scheduler info and run queues.
+
+mmu: print basic mmu info
+
+p2m: convert a gpfn to mfn given a domid. value in hex even without 0x.
+
+m2p: convert mfn to pfn. value in hex even without 0x.
+
+dpage: display struct page given an mfn or struct page ptr. Since no info is
+       kept on the page type, we display all possible page types.
+
+dtrq: display timer queues.
+
+didt: dump IDT table.
+
+dgt: dump GDT table.
+
+dirq: display IRQ bindings.
+
+dvmc: display all or given dom/vcpu VMCS or VMCB.
+
+trcon: turn tracing on. Trace hooks must be added in xen and kdb function
+       called directly from there.
+
+trcoff: turn tracing off.
+
+trcz: zero trace buffer.
+
+trcp: give hints to print the circular trace buffer, like current active ptr.
+
+usr1: allows adding an arbitrary command quickly.
+
+--------------------------------------------------------------------------------
+/*
+ * Copyright (C) 2008 Oracle.  All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public
+ * License v2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public
+ * License along with this program; if not, write to the
+ * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
+ * Boston, MA 021110-1307, USA.
+ */
diff -r 32034d1914a6 xen/kdb/guest/Makefile
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/xen/kdb/guest/Makefile	Wed Aug 29 14:39:57 2012 -0700
@@ -0,0 +1,3 @@
+
+obj-y           := kdb_guest.o
+
diff -r 32034d1914a6 xen/kdb/guest/kdb_guest.c
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/xen/kdb/guest/kdb_guest.c	Wed Aug 29 14:39:57 2012 -0700
@@ -0,0 +1,342 @@
+/*
+ * Copyright (C) 2009, Mukesh Rathor, Oracle Corp.  All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public
+ * License v2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public
+ * License along with this program; if not, write to the
+ * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
+ * Boston, MA 021110-1307, USA.
+ */
+
+#include "../include/kdbinc.h"
+
+/* information about symbols for a guest (including dom0) is saved here */
+struct gst_syminfo {           /* guest symbols info */
+    int   domid;               /* which domain */
+    int   bitness;             /* 32 or 64 */
+    void *addrtblp;            /* ptr to (32/64)addresses tbl */
+    u8   *toktbl;              /* ptr to kallsyms_token_table */
+    u16  *tokidxtbl;           /* ptr to kallsyms_token_index */
+    u8   *kallsyms_names;      /* ptr to kallsyms_names */
+    long  kallsyms_num_syms;   /* ptr to kallsyms_num_syms */
+    kdbva_t  stext;            /* value of _stext in guest */
+    kdbva_t  etext;            /* value of _etext in guest */
+    kdbva_t  sinittext;        /* value of _sinittext in guest */
+    kdbva_t  einittext;        /* value of _einittext in guest */
+};
+
+#define MAX_CACHE 16                              /* cache up to 16 guests */
+struct gst_syminfo gst_syminfoa[MAX_CACHE];       /* guest symbol info array */
+
+static struct gst_syminfo *
+kdb_get_syminfo_slot(void)
+{
+    int i;
+    for (i=0; i < MAX_CACHE; i++)
+        if (gst_syminfoa[i].addrtblp == NULL)
+            return (&gst_syminfoa[i]);      
+
+    return NULL;
+}
+
+static struct gst_syminfo *
+kdb_domid2syminfop(domid_t domid)
+{
+    int i;
+    for (i=0; i < MAX_CACHE; i++)
+        if (gst_syminfoa[i].domid == domid)
+            return (&gst_syminfoa[i]);      
+
+    return NULL;
+}
+
+/* check if an address looks like text address in guest */
+int
+kdb_is_addr_guest_text(kdbva_t addr, int domid)
+{
+    struct gst_syminfo *gp = kdb_domid2syminfop(domid);
+
+    if (!gp || !gp->stext || !gp->etext)
+        return 0;
+    KDBGP1("guestaddr: addr:%lx domid:%d\n", addr, domid);
+
+    return ( (addr >= gp->stext && addr <= gp->etext) ||
+             (addr >= gp->sinittext && addr <= gp->einittext) );
+}
+
+/*
+ * returns: value of kallsyms_addresses[idx];
+ */
+static kdbva_t
+kdb_rd_guest_addrtbl(struct gst_syminfo *gp, int idx)
+{
+    kdbva_t addr, retaddr=0;
+    int num = gp->bitness/8;       /* whether 4 byte or 8 byte ptrs */
+    domid_t id = gp->domid;
+
+    addr = (kdbva_t)(((char *)gp->addrtblp) + idx * num);
+    KDBGP1("rdguestaddrtbl:addr:%lx idx:%d\n", addr, idx);
+
+    if (kdb_read_mem(addr, (kdbbyt_t *)&retaddr,num,id) != num) {
+        kdbp("Can't read addrtbl domid:%d at:%lx\n", id, addr);
+        return 0;
+    }
+    KDBGP1("rdguestaddrtbl:exit:retaddr:%lx\n", retaddr);
+    return retaddr;
+}
+
+/* Based on el5 kallsyms.c file. */
+static unsigned int 
+kdb_expand_el5_sym(struct gst_syminfo *gp, unsigned int off, char *result)
+{   
+    int len, skipped_first = 0;
+    u8 u8idx, *tptr, *datap;
+    domid_t domid = gp->domid;
+
+    *result = '\0';
+
+    /* get the compressed symbol length from the first symbol byte */
+    datap = gp->kallsyms_names + off;
+    len = 0;
+    if ((kdb_read_mem((kdbva_t)datap, (kdbbyt_t *)&len, 1, domid)) != 1) {
+        KDBGP("failed to read guest memory\n");
+        return 0;
+    }
+    datap++;
+
+    /* update the offset to return the offset for the next symbol on
+     * the compressed stream */
+    off += len + 1;
+
+    /* for every byte on the compressed symbol data, copy the table
+     * entry for that byte */
+    while(len) {
+        u16 u16idx, *u16p;
+        if (kdb_read_mem((kdbva_t)datap,(kdbbyt_t *)&u8idx,1,domid)!=1){
+            kdbp("memory (u8idx) read error:%p\n",gp->tokidxtbl);
+            return 0;
+        }
+        u16p = u8idx + gp->tokidxtbl;
+        if (kdb_read_mem((kdbva_t)u16p,(kdbbyt_t *)&u16idx,2,domid)!=2){
+            kdbp("tokidxtbl read error:%p\n", u16p);
+            return 0;
+        }
+        tptr = gp->toktbl + u16idx;
+        datap++;
+        len--;
+
+        while ((kdb_read_mem((kdbva_t)tptr, (kdbbyt_t *)&u8idx, 1, domid)==1) &&
+               u8idx) {
+
+            if(skipped_first) {
+                *result = u8idx;
+                result++;
+            } else
+                skipped_first = 1;
+            tptr++;
+        }
+    }
+    *result = '\0';
+    return off;          /* return to offset to the next symbol */
+}
+
+#define EL4_NMLEN 127
+/* so much pain, not sure it's worth it.. :) */
+static kdbva_t
+kdb_expand_el4_sym(struct gst_syminfo *gp, int low, char *result, char *symp)
+{   
+    int i, j;
+    u8 *nmp = gp->kallsyms_names;       /* guest address space */
+    kdbbyt_t byte, prefix;
+    domid_t id = gp->domid;
+    kdbva_t addr;
+
+    KDBGP1("Eel4sym:nmp:%p maxidx:$%d sym:%s\n", nmp, low, symp);
+    for (i=0; i <= low; i++) {
+        /* unsigned prefix = *name++; */
+        if (kdb_read_mem((kdbva_t)nmp, &prefix, 1, id) != 1) {
+            kdbp("failed to read:%p domid:%x\n", nmp, id);
+            return 0;
+        }
+        KDBGP2("el4:i:%d prefix:%x\n", i, prefix);
+        nmp++;
+        /* strncpy(namebuf + prefix, name, KSYM_NAME_LEN - prefix); */
+        addr = (long)result + prefix;
+        for (j=0; j < EL4_NMLEN-prefix; j++) {
+            if (kdb_read_mem((kdbva_t)nmp, &byte, 1, id) != 1) {
+                kdbp("failed read:%p domid:%x\n", nmp, id);
+                return 0;
+            }
+            KDBGP2("el4:j:%d byte:%x\n", j, byte);
+            *(kdbbyt_t *)addr = byte;
+            addr++; nmp++;
+            if (byte == '\0')
+                break;
+        }
+        KDBGP2("el4sym:i:%d res:%s\n", i, result);
+        if (symp && strcmp(result, symp) == 0)
+            return(kdb_rd_guest_addrtbl(gp, i));
+
+        /* kallsyms.c: name += strlen(name) + 1; */
+        if (j == EL4_NMLEN-prefix && byte != '\0')
+            while (kdb_read_mem((kdbva_t)nmp, &byte, 1, id) && byte != '\0')
+                nmp++;
+    }
+    KDBGP1("Xel4sym: na-ga-da\n");
+    return 0;
+}
+
+static unsigned int
+kdb_get_el5_symoffset(struct gst_syminfo *gp, long pos)
+{
+    int i;
+    u8 data, *namep;
+    domid_t domid = gp->domid;
+
+    namep = gp->kallsyms_names;
+    for (i=0; i < pos; i++) {
+        if (kdb_read_mem((kdbva_t)namep, &data, 1, domid) != 1) {
+            kdbp("Can't read id:$%d mem:%p\n", domid, namep);
+            return 0;
+        }
+        namep = namep + data + 1;
+    }
+    return namep - gp->kallsyms_names;
+}
+
+/*
+ * for a given guest domid (domid >= 0 && < KDB_HYPDOMID), convert addr to
+ * symbol. offset is set to  addr - symbolstart
+ */
+char *
+kdb_guest_addr2sym(unsigned long addr, domid_t domid, ulong *offsp)
+{
+    static char namebuf[KSYM_NAME_LEN+1];
+    unsigned long low, high, mid;
+    struct gst_syminfo *gp = kdb_domid2syminfop(domid);
+
+    *offsp = 0;
+    if(!gp || gp->kallsyms_num_syms == 0)
+        return " ??? ";
+
+    namebuf[0] = namebuf[KSYM_NAME_LEN] = '\0';
+    if (1) {
+        /* do a binary search on the sorted kallsyms_addresses array */
+        low = 0;
+        high = gp->kallsyms_num_syms;
+
+        while (high-low > 1) {
+            mid = (low + high) / 2;
+            if (kdb_rd_guest_addrtbl(gp, mid) <= addr) 
+                low = mid;
+            else 
+                high = mid;
+        }
+        /* Grab name */
+        if (gp->toktbl) {
+            int symoff = kdb_get_el5_symoffset(gp,low);
+            kdb_expand_el5_sym(gp, symoff, namebuf);
+        } else
+            kdb_expand_el4_sym(gp, low, namebuf, NULL);
+        *offsp = addr - kdb_rd_guest_addrtbl(gp, low);
+        return namebuf;
+    }
+    return " ???? ";
+}
+
+
+/*
+ * Save guest (dom0 and others) symbol info: the domid and the addresses of
+ *     kallsyms_names, kallsyms_addresses, kallsyms_num_syms,
+ *     kallsyms_token_table and kallsyms_token_index.
+ */
+void
+kdb_sav_dom_syminfo(domid_t domid, long namesp, long addrap, long nump,
+                    long toktblp, long tokidxp)
+{
+    int bytes;
+    long val = 0;    /* must be set to zero for 32 on 64 cases */
+    struct gst_syminfo *gp = kdb_get_syminfo_slot();
+
+    if (gp == NULL) {
+        kdbp("kdb:kdb_sav_dom_syminfo():Table full.. symbols not saved\n");
+        return;
+    }
+    memset(gp, 0, sizeof(*gp));
+
+    gp->domid = domid;
+    gp->bitness = kdb_guest_bitness(domid);
+    gp->addrtblp = (void *)addrap;
+    gp->kallsyms_names = (u8 *)namesp;
+    gp->toktbl = (u8 *)toktblp;
+    gp->tokidxtbl = (u16 *)tokidxp;
+
+    KDBGP("domid:%x bitness:$%d numsyms:$%ld arrayp:%p\n", domid,
+          gp->bitness, gp->kallsyms_num_syms, gp->addrtblp);
+
+    bytes = gp->bitness/8;
+    if (kdb_read_mem(nump, (kdbbyt_t *)&val, bytes, domid) != bytes) {
+
+        kdbp("Unable to read number of symbols from:%lx\n", nump);
+        memset(gp, 0, sizeof(*gp));
+        return;
+    } else
+        kdbp("Number of symbols:$%ld\n", val);
+
+    gp->kallsyms_num_syms = val;
+
+    bytes = (gp->bitness/8) * gp->kallsyms_num_syms;
+    gp->stext = kdb_guest_sym2addr("_stext", domid);
+    gp->etext = kdb_guest_sym2addr("_etext", domid);
+    if (!gp->stext || !gp->etext)
+        kdbp("Warn: Can't find stext/etext\n");
+
+    if (gp->toktbl && gp->tokidxtbl) {
+        gp->sinittext = kdb_guest_sym2addr("_sinittext", domid);
+        gp->einittext = kdb_guest_sym2addr("_einittext", domid);
+        if (!gp->sinittext || !gp->einittext) {
+            kdbp("Warn: Can't find sinittext/einittext\n");
+        }
+    }
+    KDBGP1("stxt:%lx etxt:%lx sitxt:%lx eitxt:%lx\n", gp->stext, gp->etext,
+           gp->sinittext, gp->einittext);
+    kdbp("Successfully saved symbol info\n");
+}
+
+/*
+ * given a symbol string for a guest/domid, return its address
+ */
+kdbva_t
+kdb_guest_sym2addr(char *symp, domid_t domid)
+{
+    char namebuf[KSYM_NAME_LEN+1];
+    int i, off=0;
+    struct gst_syminfo *gp = kdb_domid2syminfop(domid);
+
+    KDBGP("sym2a: sym:%s domid:%x numsyms:%ld\n", symp, domid,
+          gp ? gp->kallsyms_num_syms: -1);
+
+    if (!gp)
+        return 0;
+
+    if (gp->toktbl == 0 || gp->tokidxtbl == 0)
+        return(kdb_expand_el4_sym(gp, gp->kallsyms_num_syms, namebuf, symp));
+
+    for (i=0; i < gp->kallsyms_num_syms; i++) {
+        off = kdb_expand_el5_sym(gp, off, namebuf);
+        KDBGP1("i:%d namebuf:%s\n", i, namebuf);
+        if (strcmp(namebuf, symp) == 0) {
+            return(kdb_rd_guest_addrtbl(gp, i));
+        }
+    }
+    KDBGP("sym2a:exit:na-ga-da\n");
+    return 0;
+}
diff -r 32034d1914a6 xen/kdb/include/kdb_extern.h
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/xen/kdb/include/kdb_extern.h	Wed Aug 29 14:39:57 2012 -0700
@@ -0,0 +1,66 @@
+/*
+ * Copyright (C) 2009, Mukesh Rathor, Oracle Corp.  All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public
+ * License v2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public
+ * License along with this program; if not, write to the
+ * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
+ * Boston, MA 02111-1307, USA.
+ */
+
+#ifndef _KDB_EXTERN_H
+#define _KDB_EXTERN_H
+
+#define KDB_TRAP_FATAL     1    /* trap is fatal. can't resume from kdb */
+#define KDB_TRAP_NONFATAL  2    /* can resume from kdb */
+#define KDB_TRAP_KDBSTACK  3    /* to debug kdb itself. dump kdb stack */
+
+/* The following can be called from anywhere in Xen for debugging. */
+extern void kdb_trap_immed(int);
+extern void kdbtrc(unsigned int, unsigned int, uint64_t, uint64_t, uint64_t);
+extern void kdbp(const char *fmt, ...);
+
+typedef unsigned long kdbva_t;
+typedef unsigned char kdbbyt_t;
+typedef unsigned long kdbma_t;
+
+extern unsigned long kdb_dr7;
+
+
+extern volatile int kdb_session_begun;
+extern volatile int kdb_enabled;
+extern void kdb_init(void);
+extern int kdb_keyboard(struct cpu_user_regs *);
+extern void kdb_ssni_reenter(struct cpu_user_regs *);
+extern int kdb_handle_trap_entry(int, struct cpu_user_regs *);
+extern int kdb_trap_fatal(int, struct cpu_user_regs *);  /* fatal with regs */
+extern void kdb_dump_vmcs(uint16_t did, int vid);
+void kdb_dump_vmcb(uint16_t did, int vid);
+extern void kdb_dump_time_pcpu(void);
+
+
+#define VMPTRST_OPCODE  ".byte 0x0f,0xc7\n"     /* reg/opcode: /7 */
+#define MODRM_EAX_07    ".byte 0x38\n"          /* [EAX], with reg/opcode: /7 */
+static inline void __vmptrst(u64 *addr)
+{
+    asm volatile ( VMPTRST_OPCODE
+                   MODRM_EAX_07
+                   :
+                   : "a" (addr)
+                   : "memory");
+}
+
+#define is_hvm_or_hyb_domain is_hvm_domain
+#define is_hvm_or_hyb_vcpu is_hvm_vcpu
+#define is_hybrid_vcpu(x) (0)
+
+
+#endif  /* _KDB_EXTERN_H */
diff -r 32034d1914a6 xen/kdb/include/kdbdefs.h
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/xen/kdb/include/kdbdefs.h	Wed Aug 29 14:39:57 2012 -0700
@@ -0,0 +1,86 @@
+/*
+ * Copyright (C) 2009, Mukesh Rathor, Oracle Corp.  All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public
+ * License v2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public
+ * License along with this program; if not, write to the
+ * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
+ * Boston, MA 02111-1307, USA.
+ */
+
+#ifndef _KDBDEFS_H
+#define _KDBDEFS_H
+
+/* reason we are entering kdbmain (bp == breakpoint) */
+typedef enum {
+    KDB_REASON_KEYBOARD=1,  /* Keyboard entry - always 1 */
+    KDB_REASON_BPEXCP,      /* #BP excp: sw bp (INT3) */
+    KDB_REASON_DBEXCP,      /* #DB excp: TF flag or HW bp */
+    KDB_REASON_PAUSE_IPI,   /* received pause IPI from another CPU */
+} kdb_reason_t;
+
+
+/* cpu state: past, present, and future */
+typedef enum {
+    KDB_CPU_INVAL=0,     /* invalid value. not in or leaving kdb */
+    KDB_CPU_QUIT,        /* main cpu does GO. all others do QUIT */
+    KDB_CPU_PAUSE,       /* cpu is paused */
+    KDB_CPU_DISABLE,     /* disable interrupts */
+    KDB_CPU_SHOWPC,      /* all cpus must display their pc */
+    KDB_CPU_DO_VMEXIT,   /* all cpus must do vmcs vmexit. intel only */
+    KDB_CPU_MAIN_KDB,    /* cpu in kdb main command loop */
+    KDB_CPU_GO,          /* user entered go for this cpu */
+    KDB_CPU_SS,          /* single step for this cpu */
+    KDB_CPU_NI,          /* go to next instr after the call instr */
+    KDB_CPU_INSTALL_BP,  /* delayed install of sw bp(s) by this cpu */
+} kdb_cpu_cmd_t;
+
+/* ============= kdb commands ============================================= */
+
+typedef kdb_cpu_cmd_t (*kdb_func_t)(int, const char **, struct cpu_user_regs *);
+typedef kdb_cpu_cmd_t (*kdb_usgf_t)(void);
+
+typedef enum {
+    KDB_REPEAT_NONE = 0,    /* Do not repeat this command */
+    KDB_REPEAT_NO_ARGS,     /* Repeat the command without arguments */
+    KDB_REPEAT_WITH_ARGS,   /* Repeat the command including its arguments */
+} kdb_repeat_t;
+
+typedef struct _kdbtab {
+    char        *kdb_cmd_name;        /* Command name */
+    kdb_func_t   kdb_cmd_func;        /* ptr to function to execute command */
+    kdb_usgf_t   kdb_cmd_usgf;        /* usage function ptr */
+    int          kdb_cmd_crash_avail; /* available in sys fatal/crash state */
+    kdb_repeat_t kdb_cmd_repeat;      /* Does command auto repeat on enter? */
+} kdbtab_t;
+
+
+/* ============= types and stuff ========================================= */
+#define BFD_INVAL (~0UL)            /* invalid bfd_vma */
+
+#if defined(__x86_64__)
+  #define KDBIP rip
+  #define KDBSP rsp
+#else
+  #define KDBIP eip
+  #define KDBSP esp
+#endif
+
+/* ============= macros ================================================== */
+extern volatile int kdbdbg;
+#define KDBGP(...) {(kdbdbg) ? kdbp(__VA_ARGS__):0;}
+#define KDBGP1(...) {(kdbdbg>1) ? kdbp(__VA_ARGS__):0;}
+#define KDBGP2(...) {(kdbdbg>2) ? kdbp(__VA_ARGS__):0;}
+#define KDBGP3(...) {0;};
+
+#define KDBMIN(x,y) (((x)<(y))?(x):(y))
+
+#endif  /* !_KDBDEFS_H */
diff -r 32034d1914a6 xen/kdb/include/kdbinc.h
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/xen/kdb/include/kdbinc.h	Wed Aug 29 14:39:57 2012 -0700
@@ -0,0 +1,69 @@
+/*
+ * Copyright (C) 2009, Mukesh Rathor, Oracle Corp.  All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public
+ * License v2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public
+ * License along with this program; if not, write to the
+ * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
+ * Boston, MA 02111-1307, USA.
+ */
+
+#ifndef _KDBINC_H
+#define _KDBINC_H
+
+#include <xen/compile.h>
+#include <xen/config.h>
+#include <xen/version.h>
+#include <xen/compat.h>
+#include <xen/init.h>
+#include <xen/lib.h>
+#include <xen/errno.h>
+#include <xen/sched.h>
+#include <xen/domain.h>
+#include <xen/mm.h>
+#include <xen/event.h>
+#include <xen/time.h>
+#include <xen/console.h>
+#include <xen/softirq.h>
+#include <xen/domain_page.h>
+#include <xen/rangeset.h>
+#include <xen/guest_access.h>
+#include <xen/hypercall.h>
+#include <xen/delay.h>
+#include <xen/shutdown.h>
+#include <xen/percpu.h>
+#include <xen/multicall.h>
+#include <xen/rcupdate.h>
+#include <xen/ctype.h>
+#include <xen/symbols.h>
+#include <xen/shutdown.h>
+#include <xen/serial.h>
+#include <xen/grant_table.h>
+#include <asm/debugger.h>
+#include <asm/shared.h>
+#include <asm/apicdef.h>
+
+#include <asm/nmi.h>
+#include <asm/p2m.h>
+#include <asm/debugreg.h>
+#include <public/sched.h>
+#include <public/vcpu.h>
+#ifdef _XEN_LATEST
+#include <xsm/xsm.h>
+#endif
+
+#include <asm/hvm/vmx/vmx.h>
+
+#include "kdb_extern.h"
+#include "kdbdefs.h"
+#include "kdbproto.h"
+
+#endif /* !_KDBINC_H */
diff -r 32034d1914a6 xen/kdb/include/kdbproto.h
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/xen/kdb/include/kdbproto.h	Wed Aug 29 14:39:57 2012 -0700
@@ -0,0 +1,80 @@
+/*
+ * Copyright (C) 2009, Mukesh Rathor, Oracle Corp.  All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public
+ * License v2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public
+ * License along with this program; if not, write to the
+ * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
+ * Boston, MA 02111-1307, USA.
+ */
+
+#ifndef _KDBPROTO_H
+#define _KDBPROTO_H
+
+/* hypervisor interfaces used by kdb, or kdb interfaces in xen files */
+extern void console_putc(char);
+extern int console_getc(void);
+extern void show_trace(struct cpu_user_regs *);
+extern void kdb_dump_timer_queues(void);
+extern void kdb_time_resume(int);
+extern void kdb_print_sched_info(void);
+extern void kdb_curr_cpu_flush_vmcs(void);
+extern unsigned long address_lookup(char *);
+extern void kdb_prnt_guest_mapped_irqs(void);
+
+/* kdb globals */
+extern kdbtab_t *kdb_cmd_tbl;
+extern char kdb_prompt[32];
+extern volatile int kdb_sys_crash;
+extern volatile kdb_cpu_cmd_t kdb_cpu_cmd[NR_CPUS];
+extern volatile int kdb_trcon;
+
+/* kdb interfaces */
+extern void __init kdb_io_init(void);
+extern void kdb_init_cmdtab(void);
+extern void kdb_do_cmds(struct cpu_user_regs *);
+extern int kdb_check_sw_bkpts(struct cpu_user_regs *);
+extern int kdb_check_watchpoints(struct cpu_user_regs *);
+extern void kdb_do_watchpoints(kdbva_t, int, int);
+extern void kdb_install_watchpoints(void);
+extern void kdb_clear_wps(int);
+extern kdbma_t kdb_rd_dbgreg(int);
+
+
+
+extern char *kdb_get_cmdline(char *);
+extern void kdb_clear_prev_cmd(void);
+extern void kdb_toggle_dis_syntax(void);
+extern int kdb_check_call_instr(domid_t, kdbva_t);
+extern void kdb_display_pc(struct cpu_user_regs *);
+extern kdbva_t kdb_print_instr(kdbva_t, long, domid_t);
+extern int kdb_read_mmem(kdbva_t, kdbbyt_t *, int);
+extern int kdb_read_mem(kdbva_t, kdbbyt_t *, int, domid_t);
+extern int kdb_write_mem(kdbva_t, kdbbyt_t *, int, domid_t);
+
+extern void kdb_install_all_swbp(void);
+extern void kdb_uninstall_all_swbp(void);
+extern int kdb_swbp_exists(void);
+extern void kdb_flush_swbp_table(void);
+extern int kdb_is_addr_guest_text(kdbva_t, int);
+extern kdbva_t kdb_guest_sym2addr(char *, domid_t);
+extern char *kdb_guest_addr2sym(unsigned long, domid_t, ulong *);
+extern void kdb_prnt_addr2sym(domid_t, kdbva_t, char *);
+extern void kdb_sav_dom_syminfo(domid_t, long, long, long, long, long);
+extern int kdb_guest_bitness(domid_t);
+extern void kdb_nmi_pause_cpus(cpumask_t);
+
+extern void kdb_trczero(void);
+void kdb_trcp(void);
+
+
+
+#endif /* !_KDBPROTO_H */
diff -r 32034d1914a6 xen/kdb/kdb_cmds.c
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/xen/kdb/kdb_cmds.c	Wed Aug 29 14:39:57 2012 -0700
@@ -0,0 +1,3789 @@
+/*
+ * Copyright (C) 2009, Mukesh Rathor, Oracle Corp.  All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public
+ * License v2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public
+ * License along with this program; if not, write to the
+ * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
+ * Boston, MA 02111-1307, USA.
+ */
+
+#include "include/kdbinc.h"
+
+#if defined(__x86_64__)
+    #define KDBF64 "%lx"
+    #define KDBFL "%016lx"         /* print long all digits */
+#else
+    #define KDBF64 "%llx"
+    #define KDBFL "%08lx"
+#endif
+
+#if XEN_SUBVERSION > 4 || XEN_VERSION == 4              /* xen 3.5.x or above */
+    #define KDB_LKDEF(l) ((l).raw.lock)
+    #define KDB_PGLLE(t) ((t).tail)    /* page list last element ^%$#@ */
+#else
+    #define KDB_LKDEF(l) ((l).lock)
+    #define KDB_PGLLE(t) ((t).prev)    /* page list last element ^%$#@ */
+#endif
+
+#define KDB_CMD_HISTORY_COUNT   32
+#define CMD_BUFLEN              200     /* kdb_printf: max printline == 256 */
+
+#define KDBMAXSBP 16                    /* max number of software breakpoints */
+#define KDB_MAXARGC 16                  /* max args in a kdb command */
+#define KDB_MAXBTP  8                   /* max display args in btp */
+
+/* condition is: 'r6 == 0x123f' or '0xffffffff82800000 != deadbeef'  */
+struct kdb_bpcond {
+    kdbbyt_t bp_cond_status;       /* 0 == off, 1 == register, 2 == memory */
+    kdbbyt_t bp_cond_type;         /* 0 == bad, 1 == equal, 2 == not equal */
+    ulong    bp_cond_lhs;          /* lhs of condition: reg offset or mem loc */
+    ulong    bp_cond_rhs;          /* right hand side of condition */
+};
+
+/* software breakpoint structure */
+struct kdb_sbrkpt {
+    kdbva_t  bp_addr;              /* address the bp is set at */
+    domid_t  bp_domid;             /* which domain the bp belongs to */
+    kdbbyt_t bp_originst;          /* save orig instr/s here */
+    kdbbyt_t bp_deleted;           /* delete pending on this bp */
+    kdbbyt_t bp_ni;                /* set for KDB_CPU_NI */
+    kdbbyt_t bp_just_added;        /* added in the current kdb session */
+    kdbbyt_t bp_type;              /* 0 = normal, 1 == cond,  2 == btp */
+    union {
+        struct kdb_bpcond bp_cond;
+        ulong *bp_btp;
+    } u;
+};
+
+/* don't use kmalloc in kdb which hijacks all cpus */
+static ulong kdb_btp_argsa[KDBMAXSBP][KDB_MAXBTP];
+static ulong *kdb_btp_ap[KDBMAXSBP];
+
+static struct kdb_reg_nmofs {
+    char *reg_nm;
+    int reg_offs;
+} kdb_reg_nm_offs[] =  {
+       { "rax", offsetof(struct cpu_user_regs, rax) },
+       { "rbx", offsetof(struct cpu_user_regs, rbx) },
+       { "rcx", offsetof(struct cpu_user_regs, rcx) },
+       { "rdx", offsetof(struct cpu_user_regs, rdx) },
+       { "rsi", offsetof(struct cpu_user_regs, rsi) },
+       { "rdi", offsetof(struct cpu_user_regs, rdi) },
+       { "rbp", offsetof(struct cpu_user_regs, rbp) },
+       { "rsp", offsetof(struct cpu_user_regs, rsp) },
+       { "r8",  offsetof(struct cpu_user_regs, r8) },
+       { "r9",  offsetof(struct cpu_user_regs, r9) },
+       { "r10", offsetof(struct cpu_user_regs, r10) },
+       { "r11", offsetof(struct cpu_user_regs, r11) },
+       { "r12", offsetof(struct cpu_user_regs, r12) },
+       { "r13", offsetof(struct cpu_user_regs, r13) },
+       { "r14", offsetof(struct cpu_user_regs, r14) },
+       { "r15", offsetof(struct cpu_user_regs, r15) },
+       { "rflags", offsetof(struct cpu_user_regs, rflags) } };
+
+static const int KDBBPSZ=1;                   /* size of kdb_bpinst is 1 byte */
+static kdbbyt_t kdb_bpinst = 0xcc;            /* breakpoint instr: INT3 */
+static struct kdb_sbrkpt kdb_sbpa[KDBMAXSBP]; /* soft brkpt array/table */
+static kdbtab_t *tbp;
+
+static int kdb_set_bp(domid_t, kdbva_t, int, ulong *, char*, char*, char*);
+static void kdb_print_uregs(struct cpu_user_regs *);
+
+
+/* ===================== cmdline functions  ================================ */
+
+/* lp points to a string of alphanumeric chars terminated by '\n'.
+ * Parse the string into argv pointers and RETURN argc.
+ * E.g. if lp --> "dr  sp\n":  argv[0]=="dr\0"  argv[1]=="sp\0"  argc==2
+ */
+static int
+kdb_parse_cmdline(char *lp, const char **argv)
+{
+    int i=0;
+
+    for (; *lp == ' '; lp++);      /* note: isspace() skips '\n' also */
+    while ( *lp != '\n' ) {
+        if (i == KDB_MAXARGC) {
+            printk("kdb: max args exceeded\n");
+            break;
+        }
+        argv[i++] = lp;
+        for (; *lp != ' ' && *lp != '\n'; lp++);
+        if (*lp != '\n')
+            *lp++ = '\0';
+        for (; *lp == ' '; lp++);
+    }
+    *lp = '\0';
+    return i;
+}
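The tokenizer above can be modelled standalone: split a '\n'-terminated line on spaces, NUL-terminating each token in place and returning argc. Hypothetical version with the arg cap passed as a parameter; the in-tree function also warns when KDB_MAXARGC is exceeded:

```c
/* Minimal standalone model of kdb_parse_cmdline(). */
static int parse_cmdline(char *lp, const char **argv, int maxargc)
{
    int i = 0;

    while (*lp == ' ')                  /* skip leading spaces */
        lp++;
    while (*lp != '\n' && i < maxargc) {
        argv[i++] = lp;                 /* start of next token */
        while (*lp != ' ' && *lp != '\n')
            lp++;
        if (*lp != '\n')
            *lp++ = '\0';               /* terminate token in place */
        while (*lp == ' ')
            lp++;
    }
    *lp = '\0';                         /* terminate the last token */
    return i;
}
```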
+
+void
+kdb_clear_prev_cmd()             /* so previous command is not repeated */
+{
+    tbp = NULL;
+}
+
+void
+kdb_do_cmds(struct cpu_user_regs *regs)
+{
+    char *cmdlinep;
+    const char *argv[KDB_MAXARGC];
+    int argc = 0, curcpu = smp_processor_id();
+    kdb_cpu_cmd_t result = KDB_CPU_MAIN_KDB;
+
+    snprintf(kdb_prompt, sizeof(kdb_prompt), "[%d]xkdb> ", curcpu);
+
+    while (result == KDB_CPU_MAIN_KDB) {
+        cmdlinep = kdb_get_cmdline(kdb_prompt);
+        if (*cmdlinep == '\n') {
+            if (tbp==NULL || tbp->kdb_cmd_func==NULL)
+                continue;
+            else
+                argc = -1;    /* repeat prev command */
+        } else {
+            argc = kdb_parse_cmdline(cmdlinep, argv);
+            for(tbp=kdb_cmd_tbl; tbp->kdb_cmd_func; tbp++)  {
+                if (strcmp(argv[0], tbp->kdb_cmd_name)==0) 
+                    break;
+            }
+        }
+        if (kdb_sys_crash && tbp->kdb_cmd_func && !tbp->kdb_cmd_crash_avail) {
+            kdbp("cmd not available in fatal/crashed state....\n");
+            continue;
+        }
+        if (tbp->kdb_cmd_func) {
+            result = (*tbp->kdb_cmd_func)(argc, argv, regs);
+            if (tbp->kdb_cmd_repeat == KDB_REPEAT_NONE)
+                tbp = NULL;
+        } else
+            kdbp("kdb: Unknown cmd: %s\n", cmdlinep);
+    }
+    kdb_cpu_cmd[curcpu] = result;
+    return;
+}
+
+/* ===================== Util functions  ==================================== */
+
+int
+kdb_vcpu_valid(struct vcpu *in_vp)
+{
+    struct domain *dp;
+    struct vcpu *vp;
+
+    for(dp=domain_list; in_vp && dp; dp=dp->next_in_list)
+        for_each_vcpu(dp, vp)
+            if (in_vp == vp)
+                return 1;
+    return 0;     /* not found */
+}
+
+/*
+ * Given a symbol, find its address
+ */
+static kdbva_t
+kdb_sym2addr(const char *p, domid_t domid)
+{
+    kdbva_t addr;
+
+    KDBGP1("sym2addr: p:%s domid:%d\n", p, domid);
+    if (domid == DOMID_IDLE)
+        addr = address_lookup((char *)p);
+    else
+        addr = (kdbva_t)kdb_guest_sym2addr((char *)p, domid);
+    KDBGP1("sym2addr: exit: addr returned:0x%lx\n", addr);
+    return addr;
+}
+
+/*
+ * Convert ASCII to decimal int (base 10).
+ * Return: 0 if conversion failed, otherwise 1.
+ */
+static int
+kdb_str2deci(const char *strp, int *intp)
+{
+    const char *endp;
+
+    KDBGP2("str2deci: str:%s\n", strp);
+    if (!isdigit(*strp))
+        return 0;
+    *intp = (int)simple_strtoul(strp, &endp, 10);
+    if (endp != strp+strlen(strp))
+        return 0;
+    KDBGP2("str2deci: intval:$%d\n", *intp);
+    return 1;
+}
+/*
+ * Convert ASCII to a long. NOTE: base is 16.
+ * Return: 0 if conversion failed, otherwise 1.
+ */
+static int
+kdb_str2ulong(const char *strp, ulong *longp)
+{
+    ulong val;
+    const char *endp;
+
+    KDBGP2("str2long: str:%s\n", strp);
+    if (!isxdigit(*strp))
+        return 0;
+    val = (long)simple_strtoul(strp, &endp, 16);   /* handles leading 0x */
+    if (endp != strp+strlen(strp))
+        return 0;
+    if (longp)
+        *longp = val;
+    KDBGP2("str2long: val:0x%lx\n", val);
+    return 1;
+}
+/*
+ * Convert a symbol or ASCII address to a hex address.
+ * Return: 0 if conversion failed, otherwise 1.
+ */
+static int
+kdb_str2addr(const char *strp, kdbva_t *addrp, domid_t id)
+{
+    kdbva_t addr;
+    const char *endp;
+
+    /* assume it's an address */
+    KDBGP2("str2addr: str:%s id:%d\n", strp, id);
+    addr = (kdbva_t)simple_strtoul(strp, &endp, 16); /*handles leading 0x */
+    if (endp != strp+strlen(strp))
+        if ( !(addr=kdb_sym2addr(strp, id)) )
+            return 0;
+    *addrp = addr;
+    KDBGP2("str2addr: addr:0x%lx\n", addr);
+    return 1;
+}
+
+/* Given a domid, return ptr to its struct domain:
+ *   if domid == DOMID_IDLE, return ptr to the idle domain;
+ *   if domid is a valid domain, return ptr to its domain struct;
+ *   else domid is bad, return NULL.
+ */
+static struct domain *
+kdb_domid2ptr(domid_t domid)
+{
+    struct domain *dp;
+
+    /* get_domain_by_id() ret NULL for both DOMID_IDLE and bad domids */
+    if (domid == DOMID_IDLE)
+        dp = idle_vcpu[smp_processor_id()]->domain;
+    else 
+        dp = get_domain_by_id(domid);   /* NULL now means bad domid */
+    return dp;
+}
+
+/*
+ * Returns: 0: failed (invalid domid or string; *idp unchanged), otherwise 1.
+ */
+static int
+kdb_str2domid(const char *domstr, domid_t *idp, int perr)
+{
+    int id;
+    if (!kdb_str2deci(domstr, &id) || !kdb_domid2ptr((domid_t)id)) {
+        if (perr)
+            kdbp("Invalid domid:%s\n", domstr);
+        return 0;
+    }
+    *idp = (domid_t)id;
+    return 1;
+}
+
+static struct domain *
+kdb_strdomid2ptr(const char *domstr, int perror)
+{
+    domid_t domid;
+    if (kdb_str2domid(domstr, &domid, perror)) {
+        return(kdb_domid2ptr(domid));
+    }
+    return NULL;
+}
+
+/* return a guest bitness: 32 or 64 */
+int
+kdb_guest_bitness(domid_t domid)
+{
+    const int HYPSZ = sizeof(long) * 8;
+    struct domain *dp = kdb_domid2ptr(domid);
+    int retval; 
+
+    if (is_idle_domain(dp))
+        retval = HYPSZ;
+    else if (is_hvm_or_hyb_domain(dp))
+        retval = (hvm_long_mode_enabled(dp->vcpu[0])) ? HYPSZ : 32;
+    else 
+        retval = is_pv_32bit_domain(dp) ? 32 : HYPSZ;
+    KDBGP1("gbitness: domid:%d dp:%p bitness:%d\n", domid, dp, retval);
+    return retval;
+}
+
+/* kdb_print_spin_lock("xyz_lock:", &xyz_lock, "\n"); */
+static void
+kdb_print_spin_lock(char *strp, spinlock_t *lkp, char *nlp)
+{
+    kdbp("%s %04hx %d %d%s", strp, KDB_LKDEF(*lkp), lkp->recurse_cpu,
+         lkp->recurse_cnt, nlp);
+}
+
+/* Check if the register string is valid. If yes, return the offset of the
+ * register in cpu_user_regs, else return -1. */
+static int
+kdb_valid_reg(const char *nmp) 
+{
+    int i;
+    for (i=0; i < sizeof(kdb_reg_nm_offs)/sizeof(kdb_reg_nm_offs[0]); i++)
+        if (strcmp(kdb_reg_nm_offs[i].reg_nm, nmp) == 0)
+            return kdb_reg_nm_offs[i].reg_offs;
+    return -1;
+}
+
+/* Given the offset of a register, return its name string. If the offset is
+ * invalid, return NULL. */
+static char *kdb_regoffs_to_name(int offs)
+{
+    int i;
+    for (i=0; i < sizeof(kdb_reg_nm_offs)/sizeof(kdb_reg_nm_offs[0]); i++)
+        if (kdb_reg_nm_offs[i].reg_offs == offs)
+            return kdb_reg_nm_offs[i].reg_nm;
+    return NULL;
+}
+
+/* ===================== util struct funcs ================================= */
+static void
+kdb_prnt_timer(struct timer *tp)
+{
+#if XEN_SUBVERSION == 0 
+    kdbp(" expires:%016lx expires_end:%016lx cpu:%d status:%x\n", tp->expires, 
+         tp->expires_end, tp->cpu, tp->status);
+#else
+    kdbp(" expires:%016lx cpu:%d status:%x\n", tp->expires, tp->cpu,tp->status);
+#endif
+    kdbp(" function data:%p ptr:%p ", tp->data, tp->function);
+    kdb_prnt_addr2sym(DOMID_IDLE, (kdbva_t)tp->function, "\n");
+}
+
+static void 
+kdb_prnt_periodic_time(struct periodic_time *ptp)
+{
+    kdbp(" next:%p prev:%p\n", ptp->list.next, ptp->list.prev);
+    kdbp(" on_list:%d one_shot:%d dont_freeze:%d irq_issued:%d src:%x irq:%x\n",
+         ptp->on_list, ptp->one_shot, ptp->do_not_freeze, ptp->irq_issued,
+         ptp->source, ptp->irq);
+    kdbp(" vcpu:%p pending_intr_nr:%08x period:%016lx\n", ptp->vcpu,
+         ptp->pending_intr_nr, ptp->period);
+    kdbp(" scheduled:%016lx last_plt_gtime:%016lx\n", ptp->scheduled,
+         ptp->last_plt_gtime);
+    kdbp(" \n          timer info:\n");
+    kdb_prnt_timer(&ptp->timer);
+    kdbp("\n");
+}
+
+/* ===================== cmd functions  ==================================== */
+
+/*
+ * FUNCTION: Disassemble instructions
+ */
+static kdb_cpu_cmd_t
+kdb_usgf_dis(void)
+{
+    kdbp("dis [addr|sym][num][domid] : Disassemble instrs\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t 
+kdb_cmdf_dis(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    int num = 8;                           /* display 8 instr by default */
+    static kdbva_t addr = BFD_INVAL;
+    static domid_t domid;
+
+    if (argc > 1 && *argv[1] == '?')
+        return kdb_usgf_dis();
+
+    if (argc != -1)      /* not a command repeat */
+        domid = guest_mode(regs) ?  current->domain->domain_id : DOMID_IDLE;
+
+    if (argc >= 4 && !kdb_str2domid(argv[3], &domid, 1)) { 
+        return KDB_CPU_MAIN_KDB;
+    } 
+    if (argc >= 3 && !kdb_str2deci(argv[2], &num)) {
+        kdbp("kdb:Invalid num\n");
+        return KDB_CPU_MAIN_KDB;
+    } 
+    if (argc > 1 && !kdb_str2addr(argv[1], &addr, domid)) {
+        kdbp("kdb:Invalid addr/sym\n");
+        kdbp("(num has to be specified if providing domid)\n");
+        return KDB_CPU_MAIN_KDB;
+    } 
+    if (argc == 1)                    /* not command repeat */
+        addr = regs->KDBIP;           /* PC is the default */
+    else if (addr == BFD_INVAL) {
+        kdbp("kdb:Invalid addr/sym\n");
+        return KDB_CPU_MAIN_KDB;
+    }
+    addr = kdb_print_instr(addr, num, domid);
+    return KDB_CPU_MAIN_KDB;
+}
+
+/* FUNCTION: kdb_cmdf_dism() Toggle disassembly syntax from Intel to ATT/GAS */
+static kdb_cpu_cmd_t
+kdb_usgf_dism(void)
+{
+    kdbp("dism: toggle disassembly mode between ATT/GAS and INTEL\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t 
+kdb_cmdf_dism(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    if (argc > 1 && *argv[1] == '?')
+        return kdb_usgf_dism();
+
+    kdb_toggle_dis_syntax();
+    return KDB_CPU_MAIN_KDB;
+}
+
+static void
+_kdb_show_guest_stack(domid_t domid, kdbva_t ipaddr, kdbva_t spaddr)
+{
+    kdbva_t val;
+    int num=0, max=0, rd = kdb_guest_bitness(domid)/8;
+
+    kdb_print_instr(ipaddr, 1, domid);
+    KDBGP("_guest_stack:sp:%lx domid:%d rd:$%d\n", spaddr, domid, rd);
+    val = 0;                          /* must zero, in case guest is 32bit */
+    while((kdb_read_mem(spaddr,(kdbbyt_t *)&val,rd,domid)==rd) && num < 16){
+        KDBGP1("gstk:addr:%lx val:%lx\n", spaddr, val);
+        if (kdb_is_addr_guest_text(val, domid)) {
+            kdb_print_instr(val, 1, domid);
+            num++;
+        }
+        if (max++ > 10000)            /* don't walk down the stack forever */
+            break;                    /* 10k chosen arbitrarily */
+        spaddr += rd;
+    }
+}
+
+/* Read guest memory and display addresses that look like text. */
+static void
+kdb_show_guest_stack(struct cpu_user_regs *regs, struct vcpu *vcpup)
+{
+    kdbva_t ipaddr=regs->KDBIP, spaddr = regs->KDBSP;
+    domid_t domid = vcpup->domain->domain_id;
+
+    ASSERT(domid != DOMID_IDLE);
+    _kdb_show_guest_stack(domid, ipaddr, spaddr);
+}
+
+/* Display the stack. If a vcpu ptr is given, display that vcpu's stack;
+ * otherwise use the current regs. */
+static kdb_cpu_cmd_t
+kdb_usgf_f(void)
+{
+    kdbp("f [vcpu-ptr]: dump current/vcpu stack\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t 
+kdb_cmdf_f(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    if (argc > 1 && *argv[1] == '?')
+        return kdb_usgf_f();
+
+    if (argc > 1 ) {
+        struct vcpu *vp;
+        if (!kdb_str2ulong(argv[1], (ulong *)&vp) || !kdb_vcpu_valid(vp)) {
+            kdbp("kdb: Bad VCPU ptr:%s\n", argv[1]);
+            return KDB_CPU_MAIN_KDB;
+        }
+        kdb_show_guest_stack(&vp->arch.user_regs, vp);
+        return KDB_CPU_MAIN_KDB;
+    }
+    if (guest_mode(regs))
+        kdb_show_guest_stack(regs, current);
+    else
+        show_trace(regs);
+    return KDB_CPU_MAIN_KDB;
+}
+
+/* given an spaddr and domid for guest, dump stack */
+static kdb_cpu_cmd_t
+kdb_usgf_fg(void)
+{
+    kdbp("fg domid RIP ESP: dump guest stack given domid, RIP, and ESP\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t 
+kdb_cmdf_fg(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    domid_t domid;
+    kdbva_t ipaddr, spaddr;
+
+    if (argc != 4) 
+        return kdb_usgf_fg();
+
+    if (kdb_str2domid(argv[1], &domid, 1)==0) {
+        return KDB_CPU_MAIN_KDB;
+    }
+    if (kdb_str2ulong(argv[2], &ipaddr)==0) {
+        kdbp("Bad ipaddr:%s\n", argv[2]);
+        return KDB_CPU_MAIN_KDB;
+    }
+    if (kdb_str2ulong(argv[3], &spaddr)==0) {
+        kdbp("Bad spaddr:%s\n", argv[3]);
+        return KDB_CPU_MAIN_KDB;
+    }
+    _kdb_show_guest_stack(domid, ipaddr, spaddr);
+    return KDB_CPU_MAIN_KDB;
+}
+
+/* Display kdb stack. for debugging kdb itself */
+static kdb_cpu_cmd_t
+kdb_usgf_kdbf(void)
+{
+    kdbp("kdbf: display kdb stack. for debugging kdb only\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t 
+kdb_cmdf_kdbf(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    if (argc > 1 && *argv[1] == '?')
+        return kdb_usgf_kdbf();
+
+    kdb_trap_immed(KDB_TRAP_KDBSTACK);
+    return KDB_CPU_MAIN_KDB;
+}
+
+/* Worker function to display memory. The request may be for any guest
+ * (domid), and the address may be machine or virtual. */
+static void
+_kdb_display_mem(kdbva_t *addrp, int *lenp, int wordsz, int domid, int is_maddr)
+{
+    #define DDBUFSZ 4096
+
+    kdbbyt_t buf[DDBUFSZ], *bp;
+    int numrd, bytes;
+    int len = *lenp;
+    kdbva_t addr = *addrp;
+
+    /* round len down to a wordsz boundary: on a little-endian machine,
+     * dumping raw bytes is not useful (longs and ints can't be
+     * interpreted easily) */
+    len &= ~(wordsz-1);
+    len = KDBMIN(DDBUFSZ, len);
+    len = len ? len : wordsz;
+
+    KDBGP("dmem:addr:%lx buf:%p len:$%d domid:%d sz:$%d maddr:%d\n", addr,
+          buf, len, domid, wordsz, is_maddr);
+    if (is_maddr)
+        numrd=kdb_read_mmem((kdbma_t)addr, buf, len);
+    else
+        numrd=kdb_read_mem(addr, buf, len, domid);
+    if (numrd != len)
+        kdbp("Memory read error. Bytes read:$%d\n", numrd);
+
+    /* stop once fewer than wordsz bytes remain so a short read can't hang */
+    for (bp = buf; numrd >= wordsz;) {
+        kdbp("%016lx: ", addr); 
+
+        /* display 16 bytes per line */
+        for (bytes=0; bytes < 16 && numrd > 0; bytes += wordsz) {
+            if (numrd >= wordsz) {
+                if (wordsz == 8)
+                    kdbp(" %016lx", *(long *)bp);
+                else
+                    kdbp(" %08x", *(int *)bp);
+                bp += wordsz;
+                numrd -= wordsz;
+                addr += wordsz;
+            }
+        }
+        kdbp("\n");
+    }
+    *lenp = len;
+    *addrp = addr;
+}
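The length clamping at the top of `_kdb_display_mem` can be sketched in isolation. This is a hosted C mirror (the `DDBUFSZ` value and `KDBMIN` macro are taken from the patch; `wordsz` is assumed to be a power of two, as it is for the 4- and 8-byte callers):

```c
#include <assert.h>

#define DDBUFSZ 4096
#define KDBMIN(a, b) ((a) < (b) ? (a) : (b))

/* Mirror of the clamping in _kdb_display_mem: round len down to a wordsz
 * multiple, cap it at the buffer size, and never return zero. */
static int clamp_len(int len, int wordsz)
{
    len &= ~(wordsz - 1);          /* wordsz must be a power of two */
    len = KDBMIN(DDBUFSZ, len);
    return len ? len : wordsz;
}
```

So a request of 3 bytes with a 4-byte word size still dumps one full word, and an oversized request is silently capped at the buffer size.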
+
+/* display machine mem, ie, the given address is machine address */
+static kdb_cpu_cmd_t 
+kdb_display_mmem(int argc, const char **argv, int wordsz, kdb_usgf_t usg_fp)
+{
+    static kdbma_t maddr;
+    static int len;
+    static domid_t id = DOMID_IDLE;
+
+    if (argc == -1) {
+        _kdb_display_mem(&maddr, &len, wordsz, id, 1);  /* cmd repeat */
+        return KDB_CPU_MAIN_KDB;
+    }
+    if (argc <= 1 || *argv[1] == '?')
+        return (*usg_fp)();
+
+    /* check if num of bytes to display is given by user */
+    if (argc >= 3) {
+        if (!kdb_str2deci(argv[2], &len)) {
+            kdbp("Invalid length:%s\n", argv[2]);
+            return KDB_CPU_MAIN_KDB;
+        } 
+    } else
+        len = 32;                                     /* default read len */
+
+    if (!kdb_str2ulong(argv[1], &maddr)) {
+        kdbp("Invalid argument:%s\n", argv[1]);
+        return KDB_CPU_MAIN_KDB;
+    }
+    _kdb_display_mem(&maddr, &len, wordsz, id, 1);
+    return KDB_CPU_MAIN_KDB;
+}
+
+/* 
+ * FUNCTION: Display machine Memory Word
+ */
+static kdb_cpu_cmd_t
+kdb_usgf_dwm(void)
+{
+    kdbp("dwm:  maddr|sym [num] : dump memory word given machine addr\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t 
+kdb_cmdf_dwm(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    return kdb_display_mmem(argc, argv, 4, kdb_usgf_dwm);
+}
+
+/* 
+ * FUNCTION: Display machine Memory DoubleWord
+ */
+static kdb_cpu_cmd_t
+kdb_usgf_ddm(void)
+{
+    kdbp("ddm:  maddr|sym [num] : dump double word given machine addr\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t 
+kdb_cmdf_ddm(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    return kdb_display_mmem(argc, argv, 8, kdb_usgf_ddm);
+}
+
+/* 
+ * FUNCTION: Display Memory: word or doubleword
+ *           wordsz : bytes in word. 4 or 8
+ *
+ *           We display up to DDBUFSZ bytes. User can just press enter for
+ *           more. addr is always in hex, with or without a leading 0x
+ */
+static kdb_cpu_cmd_t 
+kdb_display_mem(int argc, const char **argv, int wordsz, kdb_usgf_t usg_fp)
+{
+    static kdbva_t addr;
+    static int len;
+    static domid_t id = DOMID_IDLE;
+
+    if (argc == -1) {
+        _kdb_display_mem(&addr, &len, wordsz, id, 0);  /* cmd repeat */
+        return KDB_CPU_MAIN_KDB;
+    }
+    if (argc <= 1 || *argv[1] == '?')
+        return (*usg_fp)();
+
+    id = DOMID_IDLE;                /* not a command repeat, reset dom id */
+    if (argc >= 4) { 
+        if (!kdb_str2domid(argv[3], &id, 1)) 
+            return KDB_CPU_MAIN_KDB;
+    }
+    /* check if num of bytes to display is given by user */
+    if (argc >= 3) {
+        if (!kdb_str2deci(argv[2], &len)) {
+            kdbp("Invalid length:%s\n", argv[2]);
+            return KDB_CPU_MAIN_KDB;
+        } 
+    } else
+        len = 32;                       /* default read len */
+    if (!kdb_str2addr(argv[1], &addr, id)) {
+        kdbp("Invalid argument:%s\n", argv[1]);
+        return KDB_CPU_MAIN_KDB;
+    }
+
+    _kdb_display_mem(&addr, &len, wordsz, id, 0);
+    return KDB_CPU_MAIN_KDB;
+}
+
+/* 
+ * FUNCTION: Display Memory Word
+ */
+static kdb_cpu_cmd_t
+kdb_usgf_dw(void)
+{
+    kdbp("dw vaddr|sym [num][domid] : dump mem word. num required for domid\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t 
+kdb_cmdf_dw(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    return kdb_display_mem(argc, argv, 4, kdb_usgf_dw);
+}
+
+/* 
+ * FUNCTION: Display Memory DoubleWord
+ */
+static kdb_cpu_cmd_t
+kdb_usgf_dd(void)
+{
+    kdbp("dd vaddr|sym [num][domid] : dump dword. num required for domid\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t 
+kdb_cmdf_dd(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    return kdb_display_mem(argc, argv, 8, kdb_usgf_dd);
+}
+
+/* 
+ * FUNCTION: Modify Memory Word 
+ */
+static kdb_cpu_cmd_t
+kdb_usgf_mw(void)
+{
+    kdbp("mw vaddr|sym val [domid] : modify memory word in vaddr\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t 
+kdb_cmdf_mw(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    ulong val;
+    kdbva_t addr;
+    domid_t id = DOMID_IDLE;
+
+    if (argc < 3) {
+        return kdb_usgf_mw();
+    }
+    if (argc >=4) {
+        if (!kdb_str2domid(argv[3], &id, 1)) 
+            return KDB_CPU_MAIN_KDB;
+    }
+    if (!kdb_str2ulong(argv[2], &val)) {
+        kdbp("Invalid val: %s\n", argv[2]);
+        return KDB_CPU_MAIN_KDB;
+    }
+    if (!kdb_str2addr(argv[1], &addr, id)) {
+        kdbp("Invalid addr/sym: %s\n", argv[1]);
+        return KDB_CPU_MAIN_KDB;
+    }
+    if (kdb_write_mem(addr, (kdbbyt_t *)&val, 4, id) != 4)
+        kdbp("Unable to set 0x%lx to 0x%lx\n", addr, val);
+    return KDB_CPU_MAIN_KDB;
+}
+
+/* 
+ * FUNCTION: Modify Memory DoubleWord 
+ */
+static kdb_cpu_cmd_t
+kdb_usgf_md(void)
+{
+    kdbp("md vaddr|sym val [domid] : modify memory dword in vaddr\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t 
+kdb_cmdf_md(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    ulong val;
+    kdbva_t addr;
+    domid_t id = DOMID_IDLE;
+
+    if (argc < 3) {
+        return kdb_usgf_md();
+    }
+    if (argc >=4) {
+        if (!kdb_str2domid(argv[3], &id, 1)) {
+            return KDB_CPU_MAIN_KDB;
+        }
+    }
+    if (!kdb_str2ulong(argv[2], &val)) {
+        kdbp("Invalid val: %s\n", argv[2]);
+        return KDB_CPU_MAIN_KDB;
+    }
+    if (!kdb_str2addr(argv[1], &addr, id)) {
+        kdbp("Invalid addr/sym: %s\n", argv[1]);
+        return KDB_CPU_MAIN_KDB;
+    }
+    if (kdb_write_mem(addr, (kdbbyt_t *)&val,sizeof(val),id) != sizeof(val))
+        kdbp("Unable to set 0x%lx to 0x%lx\n", addr, val);
+
+    return KDB_CPU_MAIN_KDB;
+}
+
+struct  Xgt_desc_struct {
+    unsigned short size;
+    unsigned long address __attribute__((packed));
+};
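As a sanity check on the descriptor struct above: because the `address` member is marked packed, `sidt`/`sgdt` can store their result (2-byte limit followed immediately by the base) straight into the struct with no padding between the fields. A hosted sketch, assuming a GCC-compatible compiler:

```c
#include <assert.h>
#include <stddef.h>

/* Same layout as the patch's Xgt_desc_struct: a 2-byte limit followed
 * immediately by the base address, with no alignment padding between. */
struct Xgt_desc_struct {
    unsigned short size;
    unsigned long address __attribute__((packed));
};
```

Without the `packed` attribute the compiler would insert padding after `size` to align `address`, and the `sidt`/`sgdt` stores would land in the wrong place.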
+
+void
+kdb_show_special_regs(struct cpu_user_regs *regs)
+{
+    struct Xgt_desc_struct desc;
+    unsigned short tr;                 /* Task Register segment selector */
+    __u64 efer;
+
+    kdbp("\nSpecial Registers:\n");
+    __asm__ __volatile__ ("sidt  (%0) \n" :: "a"(&desc) : "memory");
+    kdbp("IDTR: addr: %016lx limit: %04x\n", desc.address, desc.size);
+    __asm__ __volatile__ ("sgdt  (%0) \n" :: "a"(&desc) : "memory");
+    kdbp("GDTR: addr: %016lx limit: %04x\n", desc.address, desc.size);
+
+    kdbp("cr0: %016lx  cr2: %016lx\n", read_cr0(), read_cr2());
+    kdbp("cr3: %016lx  cr4: %016lx\n", read_cr3(), read_cr4());
+    __asm__ __volatile__ ("str (%0) \n":: "a"(&tr) : "memory");
+    kdbp("TR: %x\n", tr);
+
+    rdmsrl(MSR_EFER, efer);    /* IA32_EFER */
+    kdbp("efer:"KDBF64" LMA(IA-32e mode):%d SCE(syscall/sysret):%d\n",
+         efer, ((efer&EFER_LMA) != 0), ((efer&EFER_SCE) != 0));
+
+    kdbp("DR0: %016lx  DR1:%016lx  DR2:%016lx\n", kdb_rd_dbgreg(0),
+         kdb_rd_dbgreg(1), kdb_rd_dbgreg(2)); 
+    kdbp("DR3: %016lx  DR6:%016lx  DR7:%016lx\n", kdb_rd_dbgreg(3),
+         kdb_rd_dbgreg(6), kdb_rd_dbgreg(7)); 
+}
+
+/* 
+ * FUNCTION: Display Registers. If "sp" argument, then display additional regs
+ */
+static kdb_cpu_cmd_t
+kdb_usgf_dr(void)
+{
+    kdbp("dr [sp]: display registers. sp to display special regs also\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t 
+kdb_cmdf_dr(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    if (argc > 1 && *argv[1] == '?')
+        return kdb_usgf_dr();
+
+    KDBGP1("regs:%p .rsp:%lx .rip:%lx\n", regs, regs->rsp, regs->rip);
+    show_registers(regs);
+    if (argc > 1 && !strcmp(argv[1], "sp")) 
+        kdb_show_special_regs(regs);
+    return KDB_CPU_MAIN_KDB;
+}
+
+/* show registers on stack bottom where guest context is. same as dr if
+ * not running in guest mode */
+static kdb_cpu_cmd_t
+kdb_usgf_drg(void)
+{
+    kdbp("drg: display active guest registers at stack bottom\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t 
+kdb_cmdf_drg(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    if (argc > 1 && *argv[1] == '?')
+        return kdb_usgf_drg();
+
+    kdbp("\tNote: ds/es/fs/gs etc.. are not saved from the cpu\n");
+    kdb_print_uregs(guest_cpu_user_regs());
+    return KDB_CPU_MAIN_KDB;
+}
+
+/* 
+ * FUNCTION: Modify Register
+ */
+static kdb_cpu_cmd_t
+kdb_usgf_mr(void)
+{
+    kdbp("mr reg val : Modify Register. val assumed in hex\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t 
+kdb_cmdf_mr(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    const char *argp;
+    int regoffs;
+    ulong val;
+
+    if (argc != 3 || !kdb_str2ulong(argv[2], &val)) {
+        return kdb_usgf_mr();
+    }
+    argp = argv[1];
+
+#if defined(__x86_64__)
+    if ((regoffs=kdb_valid_reg(argp)) != -1)
+        *((uint64_t *)((char *)regs+regoffs)) = val;
+#else
+    if (!strcmp(argp, "eax"))
+        regs->eax = val;
+    else if (!strcmp(argp, "ebx"))
+        regs->ebx = val;
+    else if (!strcmp(argp, "ecx"))
+        regs->ecx = val;
+    else if (!strcmp(argp, "edx"))
+        regs->edx = val;
+    else if (!strcmp(argp, "esi"))
+        regs->esi = val;
+    else if (!strcmp(argp, "edi"))
+        regs->edi = val;
+    else if (!strcmp(argp, "ebp"))
+        regs->ebp = val;
+    else if (!strcmp(argp, "esp"))
+        regs->esp = val;
+    else if (!strcmp(argp, "eflags") || !strcmp(argp, "rflags"))
+        regs->eflags = val;
+#endif
+    else
+        kdbp("Error. Bad register : %s\n", argp);
+
+    return KDB_CPU_MAIN_KDB;
+}
+
+/* 
+ * FUNCTION: Single Step
+ */
+static kdb_cpu_cmd_t
+kdb_usgf_ss(void)
+{
+    kdbp("ss: single step\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t 
+kdb_cmdf_ss(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    #define KDB_HALT_INSTR 0xf4
+
+    kdbbyt_t byte;
+    struct domain *dp = current->domain;
+    domid_t id = guest_mode(regs) ? dp->domain_id : DOMID_IDLE;
+
+    if (argc > 1 && *argv[1] == '?')
+        return kdb_usgf_ss();
+
+    KDBGP("enter kdb_cmdf_ss \n");
+    if (!regs) {
+        kdbp("%s: regs not available\n", __FUNCTION__);
+        return KDB_CPU_MAIN_KDB;
+    }
+    if (kdb_read_mem(regs->KDBIP, &byte, 1, id) == 1) {
+        if (byte == KDB_HALT_INSTR) {
+            kdbp("kdb: jumping over halt instruction\n");
+            regs->KDBIP++;
+        }
+    } else {
+        kdbp("kdb: Failed to read byte at: %lx\n", regs->KDBIP);
+        return KDB_CPU_MAIN_KDB;
+    }
+    if (guest_mode(regs) && is_hvm_or_hyb_vcpu(current)) {
+        dp->debugger_attached = 1;  /* see svm_do_resume/vmx_do_ */
+        current->arch.hvm_vcpu.single_step = 1;
+    } else
+        regs->eflags |= X86_EFLAGS_TF;
+
+    return KDB_CPU_SS;
+}
+
+/* 
+ * FUNCTION: Next Instruction, step over the call instr to the next instr
+ */
+static kdb_cpu_cmd_t
+kdb_usgf_ni(void)
+{
+    kdbp("ni: single step, stepping over function calls\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t 
+kdb_cmdf_ni(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    int sz, i;
+    domid_t id=guest_mode(regs) ? current->domain->domain_id:DOMID_IDLE;
+
+    if (argc > 1 && *argv[1] == '?')
+        return kdb_usgf_ni();
+
+    KDBGP("enter kdb_cmdf_ni \n");
+    if (!regs) {
+        kdbp("%s: regs not available\n", __FUNCTION__);
+        return KDB_CPU_MAIN_KDB;
+    }
+    if ((sz=kdb_check_call_instr(id, regs->KDBIP)) == 0)  /* !call instr */
+        return kdb_cmdf_ss(argc, argv, regs);         /* just do ss */
+
+    if ((i=kdb_set_bp(id, regs->KDBIP+sz, 1,0,0,0,0)) >= KDBMAXSBP) /* failed */
+        return KDB_CPU_MAIN_KDB;
+
+    kdb_sbpa[i].bp_ni = 1;
+    if (guest_mode(regs) && is_hvm_or_hyb_vcpu(current))
+        current->arch.hvm_vcpu.single_step = 0;
+    else
+        regs->eflags &= ~X86_EFLAGS_TF;
+
+    return KDB_CPU_NI;
+}
+
+static void
+kdb_btf_enable(void)
+{
+    u64 debugctl;
+    rdmsrl(MSR_IA32_DEBUGCTLMSR, debugctl);
+    wrmsrl(MSR_IA32_DEBUGCTLMSR, debugctl | 0x2);
+}
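`kdb_btf_enable` sets bit 1 (BTF, the branch-trap flag) of IA32_DEBUGCTL so that, with EFLAGS.TF also set, the CPU traps on branches instead of on every instruction. The read-modify-write can be modeled in hosted C; the MSR is replaced by a plain variable, and the bit position is the architectural one from the Intel SDM:

```c
#include <assert.h>
#include <stdint.h>

#define DEBUGCTL_BTF (1ULL << 1)   /* branch-trap flag: with EFLAGS.TF set,
                                      trap on branches, not every instr */

/* Hosted mirror of the rdmsrl/wrmsrl pair in kdb_btf_enable: OR in the
 * BTF bit while preserving every other bit of the register. */
static uint64_t set_btf(uint64_t debugctl)
{
    return debugctl | DEBUGCTL_BTF;
}
```

The preserve-other-bits property matters because DEBUGCTL also carries unrelated controls (e.g. LBR) that must not be clobbered.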
+
+/* 
+ * FUNCTION: Single Step to branch. Doesn't seem to work very well.
+ */
+static kdb_cpu_cmd_t
+kdb_usgf_ssb(void)
+{
+    kdbp("ssb: single step to branch\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t 
+kdb_cmdf_ssb(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    if (argc > 1 && *argv[1] == '?')
+        return kdb_usgf_ssb();
+
+    KDBGP("MUK: enter kdb_cmdf_ssb\n");
+    if (!regs) {
+        kdbp("%s: regs not available\n", __FUNCTION__);
+        return KDB_CPU_MAIN_KDB;
+    }
+    if (is_hvm_or_hyb_vcpu(current)) 
+        current->domain->debugger_attached = 1;        /* vmx/svm_do_resume()*/
+
+    regs->eflags |= X86_EFLAGS_TF;
+    kdb_btf_enable();
+    return KDB_CPU_SS;
+}
+
+/* 
+ * FUNCTION: Continue Execution. TF must be cleared here as this could run on 
+ *           any cpu. Hence not OK to do it from kdb_end_session.
+ */
+static kdb_cpu_cmd_t
+kdb_usgf_go(void)
+{
+    kdbp("go: leave kdb and continue execution\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t 
+kdb_cmdf_go(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    if (argc > 1 && *argv[1] == '?')
+        return kdb_usgf_go();
+
+    regs->eflags &= ~X86_EFLAGS_TF;
+    return KDB_CPU_GO;
+}
+
+/* All cpus must display their current context */
+static kdb_cpu_cmd_t 
+kdb_cpu_status_all(int ccpu, struct cpu_user_regs *regs)
+{
+    int cpu;
+    for_each_online_cpu(cpu) {
+        if (cpu == ccpu) {
+            kdbp("[%d]", ccpu);
+            kdb_display_pc(regs);
+        } else {
+            if (kdb_cpu_cmd[cpu] != KDB_CPU_PAUSE)   /* hung cpu */
+                continue;
+            kdb_cpu_cmd[cpu] = KDB_CPU_SHOWPC;
+            while (kdb_cpu_cmd[cpu]==KDB_CPU_SHOWPC);
+        }
+    }
+    return KDB_CPU_MAIN_KDB;
+}
+
+/* 
+ * display/switch CPU. 
+ *  Argument:
+ *     none:   just go back to initial cpu
+ *     cpunum: switch to the given cpu
+ *     "all":  show one line status of all cpus
+ */
+extern volatile int kdb_init_cpu;
+static kdb_cpu_cmd_t
+kdb_usgf_cpu(void)
+{
+    kdbp("cpu [all|num]: none will switch back to initial cpu\n");
+    kdbp("               cpunum to switch to the vcpu. all to show status\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t 
+kdb_cmdf_cpu(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    int cpu;
+    int ccpu = smp_processor_id();
+
+    if (argc > 1 && *argv[1] == '?')
+        return kdb_usgf_cpu();
+
+    if (argc > 1) {
+        if (!strcmp(argv[1], "all"))
+            return kdb_cpu_status_all(ccpu, regs);
+
+        cpu = (int)simple_strtoul(argv[1], NULL, 0);  /* handles 0x */
+        if (cpu >= 0 && cpu < NR_CPUS && cpu != ccpu &&
+            cpu_online(cpu) && kdb_cpu_cmd[cpu] == KDB_CPU_PAUSE) {
+            kdbp("Switching to cpu:%d\n", cpu);
+            kdb_cpu_cmd[cpu] = KDB_CPU_MAIN_KDB;
+
+            /* clear any single step on the current cpu */
+            regs->eflags &= ~X86_EFLAGS_TF;
+            return KDB_CPU_PAUSE;
+        } else {
+            if (cpu != ccpu)
+                kdbp("Unable to switch to cpu:%d\n", cpu);
+            else
+                kdb_display_pc(regs);
+            return KDB_CPU_MAIN_KDB;
+        }
+    }
+    /* no arg means back to initial cpu */
+    if (!kdb_sys_crash && ccpu != kdb_init_cpu) {
+        if (kdb_cpu_cmd[kdb_init_cpu] == KDB_CPU_PAUSE) {
+            regs->eflags &= ~X86_EFLAGS_TF;
+            kdb_cpu_cmd[kdb_init_cpu] = KDB_CPU_MAIN_KDB;
+            return KDB_CPU_PAUSE;
+        } else
+            kdbp("Unable to switch to: %d\n", kdb_init_cpu);
+    }
+    return KDB_CPU_MAIN_KDB;
+}
+
+/* send NMI to all or given CPU. Must be crashed/fatal state */
+static kdb_cpu_cmd_t
+kdb_usgf_nmi(void)
+{
+    kdbp("nmi cpu#|all: send nmi cpu/s. must reboot when done with kdb\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t 
+kdb_cmdf_nmi(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    cpumask_t cpumask;
+    int ccpu = smp_processor_id();
+
+    if (argc <= 1 || (argc > 1 && *argv[1] == '?'))
+        return kdb_usgf_nmi();
+
+    if (!kdb_sys_crash) {
+        kdbp("kdb: nmi cmd available in crashed state only\n");
+        return KDB_CPU_MAIN_KDB;
+    }
+    if (!strcmp(argv[1], "all"))
+        cpumask = cpu_online_map;
+    else {
+        int cpu = (int)simple_strtoul(argv[1], NULL, 0);
+        if (cpu >= 0 && cpu < NR_CPUS && cpu != ccpu && cpu_online(cpu))
+            cpumask = *cpumask_of(cpu);
+        else {
+            kdbp("KDB nmi: invalid cpu %s\n", argv[1]);
+            return KDB_CPU_MAIN_KDB;
+        }
+    }
+    kdb_nmi_pause_cpus(cpumask);
+    return KDB_CPU_MAIN_KDB;
+}
+
+static kdb_cpu_cmd_t
+kdb_usgf_percpu(void)
+{
+    kdbp("percpu: display per cpu pointers\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t 
+kdb_cmdf_percpu(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    if (argc > 1 && *argv[1] == '?')
+        return kdb_usgf_percpu();
+    kdb_dump_time_pcpu();
+    return KDB_CPU_MAIN_KDB;
+}
+
+/* ========================= Breakpoints ==================================== */
+
+static void
+kdb_prnt_bp_cond(int bpnum)
+{
+    struct kdb_bpcond *bpcp = &kdb_sbpa[bpnum].u.bp_cond;
+
+    if (bpcp->bp_cond_status == 1) {
+        kdbp("     ( %s %c%c %lx )\n", 
+             kdb_regoffs_to_name(bpcp->bp_cond_lhs),
+             bpcp->bp_cond_type == 1 ? '=' : '!', '=', bpcp->bp_cond_rhs);
+    } else {
+        kdbp("     ( %lx %c%c %lx )\n", bpcp->bp_cond_lhs,
+             bpcp->bp_cond_type == 1 ? '=' : '!', '=', bpcp->bp_cond_rhs);
+    }
+}
+
+static void
+kdb_prnt_bp_extra(int bpnum)
+{
+    if (kdb_sbpa[bpnum].bp_type == 2) {
+        ulong i, arg, *btp = kdb_sbpa[bpnum].u.bp_btp;
+        
+        kdbp("   will trace ");
+        for (i=0; i < KDB_MAXBTP && btp[i]; i++)
+            if ((arg=btp[i]) < sizeof (struct cpu_user_regs)) {
+                kdbp(" %s ", kdb_regoffs_to_name(arg));
+            } else {
+                kdbp(" %lx ", arg);
+            }
+        kdbp("\n");
+
+    } else if (kdb_sbpa[bpnum].bp_type == 1)
+        kdb_prnt_bp_cond(bpnum);
+}
+
+/*
+ * List software breakpoints
+ */
+static kdb_cpu_cmd_t
+kdb_display_sbkpts(void)
+{
+    int i;
+    for(i=0; i < KDBMAXSBP; i++)
+        if (kdb_sbpa[i].bp_addr && !kdb_sbpa[i].bp_deleted) {
+            struct domain *dp = kdb_domid2ptr(kdb_sbpa[i].bp_domid);
+
+            if (dp == NULL || dp->is_dying) {
+                memset(&kdb_sbpa[i], 0, sizeof(kdb_sbpa[i]));
+                continue;
+            }
+            kdbp("[%d]: domid:%d 0x%lx   ", i, 
+                 kdb_sbpa[i].bp_domid, kdb_sbpa[i].bp_addr);
+            kdb_prnt_addr2sym(kdb_sbpa[i].bp_domid, kdb_sbpa[i].bp_addr,"\n");
+            kdb_prnt_bp_extra(i);
+        }
+    return KDB_CPU_MAIN_KDB;
+}
+
+/*
+ * Check if there are any breakpoints that need to be installed (delayed
+ * install).
+ * Returns: 1 if yes, 0 if none.
+ */
+int
+kdb_swbp_exists(void)
+{
+    int i;
+    for (i=0; i < KDBMAXSBP; i++)
+        if (kdb_sbpa[i].bp_addr && !kdb_sbpa[i].bp_deleted)
+            return 1;
+    return 0;
+}
+/*
+ * Check if any breakpoints were deleted this kdb session
+ * Returns: 0 if none, 1 if yes
+ */
+static int
+kdb_swbp_deleted(void)
+{
+    int i;
+    for (i=0; i < KDBMAXSBP; i++)
+        if (kdb_sbpa[i].bp_addr && kdb_sbpa[i].bp_deleted)
+            return 1;
+    return 0;
+}
+
+/*
+ * Flush deleted sw breakpoints
+ */
+void
+kdb_flush_swbp_table(void)
+{
+    int i;
+    KDBGP("ccpu:%d flush_swbp_table: deleted:%x\n", smp_processor_id(), 
+          kdb_swbp_deleted());
+    for(i=0; i < KDBMAXSBP; i++)
+        if (kdb_sbpa[i].bp_addr && kdb_sbpa[i].bp_deleted) {
+            KDBGP("flush:[%x] addr:0x%lx\n",i,kdb_sbpa[i].bp_addr);
+            memset(&kdb_sbpa[i], 0, sizeof(kdb_sbpa[i]));
+        }
+}
+
+/*
+ * Delete/Clear a sw breakpoint
+ */
+static kdb_cpu_cmd_t
+kdb_usgf_bc(void)
+{
+    kdbp("bc $num|all : clear given or all breakpoints\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t 
+kdb_cmdf_bc(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    int i, bpnum = -1, delall = 0;
+    const char *argp;
+
+    if (argc != 2 || *argv[1] == '?')
+        return kdb_usgf_bc();
+
+    if (!kdb_swbp_exists()) {
+        kdbp("No breakpoints are set\n");
+        return KDB_CPU_MAIN_KDB;
+    }
+    argp = argv[1];
+
+    if (!strcmp(argp, "all"))
+        delall = 1;
+    else if (!kdb_str2deci(argp, &bpnum) || bpnum < 0 || bpnum >= KDBMAXSBP) {
+        kdbp("Invalid bpnum: %s\n", argp);
+        return KDB_CPU_MAIN_KDB;
+    }
+    for (i=0; i < KDBMAXSBP; i++) {
+        if (delall && kdb_sbpa[i].bp_addr) {
+            kdbp("Deleted breakpoint [%x] addr:0x%lx domid:%d\n", 
+                 (int)i, kdb_sbpa[i].bp_addr, kdb_sbpa[i].bp_domid);
+            if (kdb_sbpa[i].bp_just_added)
+                memset(&kdb_sbpa[i], 0, sizeof(kdb_sbpa[i]));
+            else
+                kdb_sbpa[i].bp_deleted = 1;
+            continue;
+        }
+        if (bpnum != -1 && bpnum == i) {
+            kdbp("Deleted breakpoint [%x] at 0x%lx domid:%d\n", 
+                 (int)i, kdb_sbpa[i].bp_addr, kdb_sbpa[i].bp_domid);
+            if (kdb_sbpa[i].bp_just_added)
+                memset(&kdb_sbpa[i], 0, sizeof(kdb_sbpa[i]));
+            else
+                kdb_sbpa[i].bp_deleted = 1;
+            break;
+        }
+    }
+    if (i >= KDBMAXSBP && !delall)
+        kdbp("Unable to delete breakpoint: %s\n", argp);
+
+    return KDB_CPU_MAIN_KDB;
+}
+
+/*
+ * Install a breakpoint in the given array entry
+ * Returns: 0 : failed to install
+ *          1 : installed successfully
+ */
+static int
+kdb_install_swbp(int idx)                   /* which entry in the bp array */
+{
+    kdbva_t addr = kdb_sbpa[idx].bp_addr;
+    domid_t domid = kdb_sbpa[idx].bp_domid;
+    kdbbyt_t *p = &kdb_sbpa[idx].bp_originst;
+    struct domain *dp = kdb_domid2ptr(domid);
+
+    if (dp == NULL || dp->is_dying) {
+        memset(&kdb_sbpa[idx], 0, sizeof(kdb_sbpa[idx]));
+        kdbp("Removed bp %d addr:0x%lx domid:%d\n", idx, addr, domid);
+        return 0;
+    }
+
+    if (kdb_read_mem(addr, p, KDBBPSZ, domid) != KDBBPSZ){
+        kdbp("Failed(R) to install bp:%x at:0x%lx domid:%d\n",
+             idx, kdb_sbpa[idx].bp_addr, domid);
+        return 0;
+    }
+    if (kdb_write_mem(addr, &kdb_bpinst, KDBBPSZ, domid) != KDBBPSZ) {
+        kdbp("Failed(W) to install bp:%x at:0x%lx domid:%d\n",
+             idx, kdb_sbpa[idx].bp_addr, domid);
+        return 0;
+    }
+    KDBGP("install_swbp: installed bp:%x at:0x%lx ccpu:%x domid:%d\n",
+          idx, kdb_sbpa[idx].bp_addr, smp_processor_id(), domid);
+    return 1;
+}
+
+/*
+ * Install all the software breakpoints
+ */
+void
+kdb_install_all_swbp(void)
+{
+    int i;
+    for(i=0; i < KDBMAXSBP; i++)
+        if (!kdb_sbpa[i].bp_deleted && kdb_sbpa[i].bp_addr)
+            kdb_install_swbp(i);
+}
+
+static void
+kdb_uninstall_a_swbp(int i)
+{
+    kdbva_t addr = kdb_sbpa[i].bp_addr;
+    kdbbyt_t originst = kdb_sbpa[i].bp_originst;
+    domid_t id = kdb_sbpa[i].bp_domid;
+
+    kdb_sbpa[i].bp_just_added = 0;
+    if (!addr)
+        return;
+    if (kdb_write_mem(addr, &originst, KDBBPSZ, id) != KDBBPSZ) {
+        kdbp("Failed to uninstall breakpoint %x at:0x%lx domid:%d\n",
+             i, kdb_sbpa[i].bp_addr, id);
+    }
+}
+
+/*
+ * Uninstall all the software breakpoints at beginning of kdb session
+ */
+void
+kdb_uninstall_all_swbp(void)
+{
+    int i;
+    for(i=0; i < KDBMAXSBP; i++) 
+        kdb_uninstall_a_swbp(i);
+    KDBGP("ccpu:%d uninstalled all bps\n", smp_processor_id());
+}
+
+/* RETURNS: rc == 2: condition was not met,  rc == 3: condition was met */
+static int
+kdb_check_bp_condition(int bpnum, struct cpu_user_regs *regs, domid_t domid)
+{
+    ulong res = 0, lhsval=0;
+    struct kdb_bpcond *bpcp = &kdb_sbpa[bpnum].u.bp_cond;
+
+    if (bpcp->bp_cond_status == 1) {             /* register condition */
+        uint64_t *rp = (uint64_t *)((char *)regs + bpcp->bp_cond_lhs);
+        lhsval = *rp;
+    } else if (bpcp->bp_cond_status == 2) {      /* memaddr condition */
+        ulong addr = bpcp->bp_cond_lhs;
+        int num = sizeof(lhsval);
+
+        if (kdb_read_mem(addr, (kdbbyt_t *)&lhsval, num, domid) != num) {
+            kdbp("kdb: unable to read %d bytes at %lx\n", num, addr);
+            return 3;
+        }
+    }
+    if (bpcp->bp_cond_type == 1)                 /* lhs == rhs */
+        res = (lhsval == bpcp->bp_cond_rhs);
+    else                                         /* lhs != rhs */
+        res = (lhsval != bpcp->bp_cond_rhs);
+
+    if (!res)
+        kdbp("KDB: [%d]Ignoring bp:%d condition not met. val:%lx\n", 
+              smp_processor_id(), bpnum, lhsval); 
+
+    KDBGP1("bpnum:%d domid:%d cond: %d %d %lx %lx res:%d\n", bpnum, domid, 
+           bpcp->bp_cond_status, bpcp->bp_cond_type, bpcp->bp_cond_lhs, 
+           bpcp->bp_cond_rhs, res);
+
+    return (res ? 3 : 2);
+}
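The tail of `kdb_check_bp_condition` reduces to a small pure function. This is a hosted mirror of the `==`/`!=` evaluation and the 2-vs-3 return convention (2: condition not met, resume; 3: condition met, stay in kdb):

```c
#include <assert.h>

/* Mirror of the tail of kdb_check_bp_condition: cond_type 1 means
 * "lhs == rhs", cond_type 2 means "lhs != rhs". Return 3 when the
 * condition holds (stay in kdb), 2 when it does not (resume). */
static int bp_cond_result(int cond_type, unsigned long lhs, unsigned long rhs)
{
    unsigned long res = (cond_type == 1) ? (lhs == rhs) : (lhs != rhs);
    return res ? 3 : 2;
}
```

Note that in the real function an unreadable memory operand also returns 3, erring on the side of stopping in the debugger.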
+
+static void
+kdb_prnt_btp_info(int bpnum, struct cpu_user_regs *regs, domid_t domid)
+{
+    ulong i, arg, val, num, *btp = kdb_sbpa[bpnum].u.bp_btp;
+
+    kdb_prnt_addr2sym(domid, regs->KDBIP, "\n");
+    num = kdb_guest_bitness(domid)/8;
+    for (i=0; i < KDB_MAXBTP && (arg=btp[i]); i++) {
+        if (arg < sizeof (struct cpu_user_regs)) {
+            uint64_t *rp = (uint64_t *)((char *)regs + arg);
+            kdbp(" %s: %016lx ", kdb_regoffs_to_name(arg), *rp);
+        } else {
+            if (kdb_read_mem(arg, (kdbbyt_t *)&val, num, domid) != num)
+                kdbp("kdb: unable to read %ld bytes at %lx\n", num, arg);
+            if (num == 8)
+                kdbp(" %016lx:%016lx ", arg, val);
+            else
+                kdbp(" %08lx:%08lx ", arg, val);
+        }
+    }
+    kdbp("\n");
+    KDBGP1("bpnum:%d domid:%d btp:%p num:%ld\n", bpnum, domid, btp, num);
+}
+
+/*
+ * Check if the BP trap belongs to us. 
+ * Return: 0 : not one of ours. IP not changed. (leave kdb)
+ *         1 : one of ours but deleted. IP decremented. (leave kdb)
+ *         2 : one of ours but condition not met, or btp. IP decremented.(leave)
+ *         3 : one of ours and active. IP decremented. (stay in kdb)
+ */
+int 
+kdb_check_sw_bkpts(struct cpu_user_regs *regs)
+{
+    int i, rc=0;
+    domid_t curid;
+
+    curid = guest_mode(regs) ? current->domain->domain_id : DOMID_IDLE;
+    for(i=0; i < KDBMAXSBP; i++) {
+        if (kdb_sbpa[i].bp_domid == curid  && 
+            kdb_sbpa[i].bp_addr == (regs->KDBIP - KDBBPSZ)) {
+
+            regs->KDBIP -= KDBBPSZ;
+            rc = 3;
+
+            if (kdb_sbpa[i].bp_ni) {
+                kdb_uninstall_a_swbp(i);
+                memset(&kdb_sbpa[i], 0, sizeof(kdb_sbpa[i]));
+            } else if (kdb_sbpa[i].bp_deleted) {
+                rc = 1;
+            } else if (kdb_sbpa[i].bp_type == 1) {
+                rc = kdb_check_bp_condition(i, regs, curid);
+            } else if (kdb_sbpa[i].bp_type == 2) {
+                kdb_prnt_btp_info(i, regs, curid);
+                rc = 2;
+            }
+            KDBGP1("ccpu:%d rc:%d curid:%d domid:%d addr:%lx\n", 
+                   smp_processor_id(), rc, curid, kdb_sbpa[i].bp_domid, 
+                   kdb_sbpa[i].bp_addr);
+            break;
+        }
+    }
+    return (rc);
+}
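The matching arithmetic in `kdb_check_sw_bkpts` relies on the saved IP pointing just past the opcode after an INT3 trap. A hosted sketch of the match-and-rewind step (a `KDBBPSZ` of 1 is assumed here, the size of the one-byte 0xCC opcode):

```c
#include <assert.h>

#define KDBBPSZ 1   /* size of the INT3 (0xCC) opcode, assumed */

/* After an INT3 trap the saved IP points just past the opcode, so a
 * breakpoint at bp_addr matches when bp_addr == ip - KDBBPSZ; on a match
 * the IP is rewound so the original instruction re-executes on resume. */
static int match_and_rewind(unsigned long bp_addr, unsigned long *ip)
{
    if (bp_addr != *ip - KDBBPSZ)
        return 0;
    *ip -= KDBBPSZ;
    return 1;
}
```

The rewind is what lets kdb later restore the original byte and continue transparently from the breakpoint address.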
+
+/* Eg: r6 == 0x123EDF  or 0xFFFF2034 != 0xDEADBEEF
+ * regoffs: -1 means lhs is not reg. else offset of reg in cpu_user_regs
+ * addr: memory location if lhs is not register, eg, 0xFFFF2034
+ * condp : points to != or ==
+ * rhsval : right hand side value
+ */
+static void
+kdb_set_bp_cond(int bpnum, int regoffs, ulong addr, char *condp, ulong rhsval)
+{
+    if (bpnum >= KDBMAXSBP) {
+        kdbp("BUG: %s got invalid bpnum\n", __FUNCTION__);
+        return;
+    }
+    if (regoffs != -1) {
+        kdb_sbpa[bpnum].u.bp_cond.bp_cond_status = 1;
+        kdb_sbpa[bpnum].u.bp_cond.bp_cond_lhs = regoffs;
+    } else if (addr != 0) {
+        kdb_sbpa[bpnum].u.bp_cond.bp_cond_status = 2;
+        kdb_sbpa[bpnum].u.bp_cond.bp_cond_lhs = addr;
+    } else {
+        kdbp("error: invalid call to kdb_set_bp_cond\n");
+        return;
+    }
+    kdb_sbpa[bpnum].u.bp_cond.bp_cond_rhs = rhsval;
+
+    if (*condp == '!')
+        kdb_sbpa[bpnum].u.bp_cond.bp_cond_type = 2;
+    else
+        kdb_sbpa[bpnum].u.bp_cond.bp_cond_type = 1;
+}
+
+/* install breakpt at given addr. 
+ * ni: bp for next instr 
+ * btpa: ptr to args for btp for printing when bp is hit
+ * lhsp/condp/rhsp: point to strings of condition
+ *
+ * RETURNS: the index in array where installed. KDBMAXSBP if error 
+ */
+static int
+kdb_set_bp(domid_t domid, kdbva_t addr, int ni, ulong *btpa, char *lhsp, 
+           char *condp, char *rhsp)
+{
+    int i, pre_existing = 0, regoffs = -1;
+    ulong memloc=0, rhsval=0, tmpul;
+
+    if (btpa && (lhsp || rhsp || condp)) {
+        kdbp("internal error. btpa and (lhsp || rhsp || condp) set\n");
+        return KDBMAXSBP;
+    }
+    if (lhsp && ((regoffs=kdb_valid_reg(lhsp)) == -1)  &&
+        kdb_str2ulong(lhsp, &memloc) &&
+        kdb_read_mem(memloc, (kdbbyt_t *)&tmpul, sizeof(tmpul), domid)==0) {
+
+        kdbp("error: invalid argument: %s\n", lhsp);
+        return KDBMAXSBP;
+    }
+    if (rhsp && ! kdb_str2ulong(rhsp, &rhsval)) {
+        kdbp("error: invalid argument: %s\n", rhsp);
+        return KDBMAXSBP;
+    }
+
+    /* see if bp already set */
+    for (i=0; i < KDBMAXSBP; i++) {
+        if (kdb_sbpa[i].bp_addr==addr && kdb_sbpa[i].bp_domid==domid) {
+
+            if (kdb_sbpa[i].bp_deleted) {
+                /* just re-set this bp again */
+                memset(&kdb_sbpa[i], 0, sizeof(kdb_sbpa[i]));
+                pre_existing = 1;
+            } else {
+                kdbp("Breakpoint already set \n");
+                return KDBMAXSBP;
+            }
+        }
+    }
+    /* see if any room left for another breakpoint */
+    for (i=0; i < KDBMAXSBP; i++)
+        if (!kdb_sbpa[i].bp_addr)
+            break;
+    if (i >= KDBMAXSBP) {
+        kdbp("ERROR: Breakpoint table full....\n");
+        return i;
+    }
+    kdb_sbpa[i].bp_addr = addr;
+    kdb_sbpa[i].bp_domid = domid;
+    if (btpa) {
+        kdb_sbpa[i].bp_type = 2;
+        kdb_sbpa[i].u.bp_btp = btpa;
+    } else if (regoffs != -1 || memloc) {
+        kdb_sbpa[i].bp_type = 1;
+        kdb_set_bp_cond(i, regoffs, memloc, condp, rhsval);
+    } else
+        kdb_sbpa[i].bp_type = 0;
+
+    if (kdb_install_swbp(i)) {                  /* make sure it can be done */
+        if (ni)
+            return i;
+
+        kdb_uninstall_a_swbp(i);                /* don't show the user INT3 */
+        if (!pre_existing)              /* make sure no cpu is sitting on it */
+            kdb_sbpa[i].bp_just_added = 1;
+
+        kdbp("bp %d set for domid:%d at: 0x%lx ", i, kdb_sbpa[i].bp_domid, 
+             kdb_sbpa[i].bp_addr);
+        kdb_prnt_addr2sym(domid, addr, "\n");
+        kdb_prnt_bp_extra(i);
+    } else {
+        kdbp("ERROR: Can't install bp: 0x%lx domid:%d\n", addr, domid);
+        if (pre_existing)     /* in case a cpu is sitting on this bp in traps */
+            kdb_sbpa[i].bp_deleted = 1;
+        else
+            memset(&kdb_sbpa[i], 0, sizeof(kdb_sbpa[i]));
+        return KDBMAXSBP;
+    }
+    /* make sure swbp reporting is enabled in the vmcb/vmcs */
+    if (is_hvm_or_hyb_domain(kdb_domid2ptr(domid))) {
+        struct domain *dp = kdb_domid2ptr(domid);
+        dp->debugger_attached = 1;              /* see svm_do_resume/vmx_do_ */
+        KDBGP("debugger_attached set. domid:%d\n", domid);
+    }
+    return i;
+}
+
+/* 
+ * Set/List Software Breakpoint/s
+ */
+static kdb_cpu_cmd_t
+kdb_usgf_bp(void)
+{
+    kdbp("bp [addr|sym][domid][condition]: display or set a breakpoint\n");
+    kdbp("  where cond is like: r6 == 0x123F or rax != DEADBEEF or \n");
+    kdbp("       ffff82c48038fe58 == 321E or 0xffff82c48038fe58 != 0\n");
+    kdbp("  regs: rax rbx rcx rdx rsi rdi rbp rsp r8 r9");
+    kdbp(" r10 r11 r12 r13 r14 r15 rflags\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t 
+kdb_cmdf_bp(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    kdbva_t addr;
+    int idx = -1;
+    domid_t domid = DOMID_IDLE;
+    char *domidstrp, *lhsp=NULL, *condp=NULL, *rhsp=NULL;
+
+    if ((argc > 1 && *argv[1] == '?') || argc == 4 || argc > 6)
+        return kdb_usgf_bp();
+
+    if (argc < 2 || kdb_sys_crash)         /* list all set breakpoints */
+        return kdb_display_sbkpts();
+
+    /* valid argc is 2, 3, 5, or 6:
+     * 'bp idle_loop r6 == 0xc000' OR 'bp idle_loop 3 r9 != 0xdeadbeef' */
+    idx = (argc == 5) ? 2 : ((argc == 6) ? 3 : idx);
+    if (argc >= 5 ) {
+        lhsp = (char *)argv[idx];
+        condp = (char *)argv[idx+1];
+        rhsp = (char *)argv[idx+2];
+
+        if (!kdb_str2ulong(rhsp, NULL) || *(condp+1) != '=' || 
+            (*condp != '=' && *condp != '!')) {
+
+            return kdb_usgf_bp();
+        }
+    }
+    domidstrp = (argc == 3 || argc == 6 ) ? (char *)argv[2] : NULL;
+    if (domidstrp && !kdb_str2domid(domidstrp, &domid, 1)) {
+        return kdb_usgf_bp();
+    }
+    if (argc > 3 && is_hvm_or_hyb_domain(kdb_domid2ptr(domid))) {
+        kdbp("HVM domain not supported yet for conditional bp\n");
+        return KDB_CPU_MAIN_KDB;
+    }
+
+    if (!kdb_str2addr(argv[1], &addr, domid) || addr == 0) {
+        kdbp("Invalid argument:%s\n", argv[1]);
+        return KDB_CPU_MAIN_KDB;
+    }
+
+    /* make sure a xen addr is in xen text; otherwise the bp would be set in
+     * 64bit dom0/domU */
+    if (domid == DOMID_IDLE && 
+        (addr < XEN_VIRT_START || addr > XEN_VIRT_END))
+    {
+        kdbp("addr:%lx not in xen text\n", addr);
+        return KDB_CPU_MAIN_KDB;
+    }
+    kdb_set_bp(domid, addr, 0, NULL, lhsp, condp, rhsp);     /* 0 is ni flag */
+    return KDB_CPU_MAIN_KDB;
+}
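The compact `idx` computation above can be spelled out as a standalone sketch. `cond_start_index` is a hypothetical helper, assuming the same argc layouts described in the comment in kdb_cmdf_bp:

```c
#include <assert.h>

/* Hypothetical sketch of the idx computation in kdb_cmdf_bp:
 * argc==5: 'bp <sym> <lhs> <cond> <rhs>'         -> condition at argv[2]
 * argc==6: 'bp <sym> <domid> <lhs> <cond> <rhs>' -> condition at argv[3]
 * any other argc carries no condition triple. */
static int cond_start_index(int argc)
{
    return (argc == 5) ? 2 : ((argc == 6) ? 3 : -1);
}
```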
+
+
+/* trace breakpoint: upon hitting the bp, trace/print some info and continue */
+
+static kdb_cpu_cmd_t
+kdb_usgf_btp(void)
+{
+    kdbp("btp addr|sym [domid] reg|domid-mem-addr... : breakpoint trace\n");
+    kdbp("  regs: rax rbx rcx rdx rsi rdi rbp rsp r8 r9 ");
+    kdbp("r10 r11 r12 r13 r14 r15 rflags\n");
+    kdbp("  Eg. btp idle_cpu 7 rax rbx 0x20ef5a5 r9\n");
+    kdbp("      will print rax, rbx, *(long *)0x20ef5a5, r9 and continue\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t 
+kdb_cmdf_btp(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    int i, btpidx, numrd, argsidx, regoffs = -1;
+    kdbva_t addr, memloc=0;
+    domid_t domid = DOMID_IDLE;
+    ulong *btpa, tmpul;
+
+    if ((argc > 1 && *argv[1] == '?') || argc < 3)
+        return kdb_usgf_btp();
+
+    argsidx = 2;                   /* assume 3rd arg is not domid */
+    if (argc > 3 && kdb_str2domid(argv[2], &domid, 0)) {
+
+        if (is_hvm_or_hyb_domain(kdb_domid2ptr(domid))) {
+            kdbp("HVM domains are not currently supported\n");
+            return KDB_CPU_MAIN_KDB;
+        } else
+            argsidx = 3;               /* 3rd arg is a domid */
+    }
+    if (!kdb_str2addr(argv[1], &addr, domid) || addr == 0) {
+        kdbp("Invalid argument:%s\n", argv[1]);
+        return KDB_CPU_MAIN_KDB;
+    }
+    /* make sure a xen addr is in xen text; otherwise it would trace 64bit
+     * dom0/domU */
+    if (domid == DOMID_IDLE && 
+        (addr < XEN_VIRT_START || addr > XEN_VIRT_END))
+    {
+        kdbp("addr:%lx not in xen text\n", addr);
+        return KDB_CPU_MAIN_KDB;
+    }
+
+    numrd = kdb_guest_bitness(domid)/8;
+    if (kdb_read_mem(addr, (kdbbyt_t *)&tmpul, numrd, domid) != numrd) {
+        kdbp("Unable to read mem from %s (%lx)\n", argv[1], addr);
+        return KDB_CPU_MAIN_KDB;
+    }
+
+    for (btpidx=0; btpidx < KDBMAXSBP && kdb_btp_ap[btpidx]; btpidx++);
+    if (btpidx >= KDBMAXSBP) {
+        kdbp("error: table full. delete a few breakpoints\n");
+        return KDB_CPU_MAIN_KDB;
+    }
+    btpa = kdb_btp_argsa[btpidx];
+    memset(btpa, 0, sizeof(kdb_btp_argsa[0]));
+
+    for (i=0; argv[argsidx]; i++, argsidx++) {
+
+        if (((regoffs=kdb_valid_reg(argv[argsidx])) == -1)  &&
+            kdb_str2ulong(argv[argsidx], &memloc) &&
+            (memloc < sizeof (struct cpu_user_regs) ||
+            kdb_read_mem(memloc, (kdbbyt_t *)&tmpul, sizeof(tmpul), domid)==0)){
+
+            kdbp("error: invalid argument: %s\n", argv[argsidx]);
+            return KDB_CPU_MAIN_KDB;
+        }
+        if (i >= KDB_MAXBTP) {
+            kdbp("error: cannot specify more than %d args\n", KDB_MAXBTP);
+            return KDB_CPU_MAIN_KDB;
+        }
+        btpa[i] = (regoffs == -1) ? memloc : regoffs;
+    }
+
+    i = kdb_set_bp(domid, addr, 0, btpa, 0, 0, 0);     /* 0 is ni flag */
+    if (i < KDBMAXSBP)
+        kdb_btp_ap[btpidx] = kdb_btp_argsa[btpidx];
+
+    return KDB_CPU_MAIN_KDB;
+}
+
+/* 
+ * Set/List watchpoints, ie, hardware breakpoint/s, in hypervisor
+ *   Usage: wp [sym|addr] [w|i]   w == write only data watchpoint
+ *                                i == IO watchpoint (read/write)
+ *
+ *   Eg:  wp        : list all watchpoints set
+ *        wp addr   : set a read/write wp at given addr
+ *        wp addr w : set a write only wp at given addr
+ *        wp addr i : set an IO wp at given addr (16bits port #)
+ *
+ *  TBD: allow to be set on particular cpu
+ */
+static kdb_cpu_cmd_t
+kdb_usgf_wp(void)
+{
+    kdbp("wp [addr|sym][w|i]: display or set watchpoint. writeonly or IO\n");
+    kdbp("\tnote: watchpoint is triggered after the instruction executes\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t 
+kdb_cmdf_wp(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    kdbva_t addr;
+    domid_t domid = DOMID_IDLE;
+    int rw = 3, len = 4;       /* for now just default to 4 bytes len */
+
+    if (argc > 1 && *argv[1] == '?')
+        return kdb_usgf_wp();
+
+    if (argc <= 1 || kdb_sys_crash) {       /* list all set watchpoints */
+        kdb_do_watchpoints(0, 0, 0);
+        return KDB_CPU_MAIN_KDB;
+    }
+    if (!kdb_str2addr(argv[1], &addr, domid) || addr == 0) {
+        kdbp("Invalid argument:%s\n", argv[1]);
+        return KDB_CPU_MAIN_KDB;
+    }
+    if (argc > 2) {
+        if (!strcmp(argv[2], "w"))
+            rw = 1;
+        else if (!strcmp(argv[2], "i"))
+            rw = 2;
+        else {
+            return kdb_usgf_wp();
+        }
+    }
+    kdb_do_watchpoints(addr, rw, len);
+    return KDB_CPU_MAIN_KDB;
+}
+
+static kdb_cpu_cmd_t
+kdb_usgf_wc(void)
+{
+    kdbp("wc $num|all : clear given or all watchpoints\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t 
+kdb_cmdf_wc(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    const char *argp;
+    int wpnum;              /* wp num to delete. -1 for all */
+
+    if (argc != 2 || *argv[1] == '?') 
+        return kdb_usgf_wc();
+
+    argp = argv[1];
+
+    if (!strcmp(argp, "all"))
+        wpnum = -1;
+    else if (!kdb_str2deci(argp, &wpnum)) {
+        kdbp("Invalid wpnum: %s\n", argp);
+        return KDB_CPU_MAIN_KDB;
+    }
+    kdb_clear_wps(wpnum);
+    return KDB_CPU_MAIN_KDB;
+}
+
+static void
+kdb_display_hvm_vcpu(struct vcpu *vp)
+{
+    struct hvm_vcpu *hvp;
+    struct vlapic *vlp;
+    struct hvm_io_op *ioop;
+
+    hvp = &vp->arch.hvm_vcpu;
+    vlp = &hvp->vlapic;
+    kdbp("vcpu:%lx id:%d domid:%d\n", vp, vp->vcpu_id, vp->domain->domain_id);
+
+#if 0
+    if (is_hybrid_vcpu(vp)) {
+        struct hybrid_ext *hp = &hvp->hv_hybrid;
+        kdbp("    &hybrid_ext:%p limit:%x iopl:%x vcpu_info_mfn:%lx\n",
+             hp, hp->hyb_iobmp_limit, hp->hyb_iopl, hp->hyb_vcpu_info_mfn);
+    }
+#endif
+
+    ioop = NULL;   /* silence compiler warning */
+    kdbp("    &hvm_vcpu:%lx  guest_efer:"KDBFL"\n", hvp, hvp->guest_efer);
+    kdbp("      guest_cr: [0]:"KDBFL" [1]:"KDBFL" [2]:"KDBFL"\n", 
+         hvp->guest_cr[0], hvp->guest_cr[1],hvp->guest_cr[2]);
+    kdbp("                [3]:"KDBFL" [4]:"KDBFL"\n", hvp->guest_cr[3],
+         hvp->guest_cr[4]);
+    kdbp("      hw_cr: [0]:"KDBFL" [1]:"KDBFL" [2]:"KDBFL"\n", hvp->hw_cr[0],
+         hvp->hw_cr[1], hvp->hw_cr[2]);
+    kdbp("              [3]:"KDBFL" [4]:"KDBFL"\n", hvp->hw_cr[3], 
+         hvp->hw_cr[4]);
+
+    kdbp("      VLAPIC: base msr:"KDBF64" dis:%x tmrdiv:%x\n", 
+         vlp->hw.apic_base_msr, vlp->hw.disabled, vlp->hw.timer_divisor);
+    kdbp("          regs:%p regs_page:%p\n", vlp->regs, vlp->regs_page);
+    kdbp("          periodic time:\n"); 
+    kdb_prnt_periodic_time(&vlp->pt);
+
+    kdbp("      xen_port:%x flag_dr_dirty:%x dbg_st_latch:%x\n", hvp->xen_port,
+         hvp->flag_dr_dirty, hvp->debug_state_latch);
+
+    if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL) {
+
+        struct arch_vmx_struct *vxp = &hvp->u.vmx;
+        kdbp("      &vmx: %p vmcs:%lx active_cpu:%x launched:%x\n", vxp, 
+             vxp->vmcs, vxp->active_cpu, vxp->launched);
+#if XEN_VERSION != 4               /* xen 3.x.x */
+        kdbp("        exec_ctrl:%x vpid:$%d\n", vxp->exec_control, vxp->vpid);
+#endif
+        kdbp("        host_cr0: "KDBFL" vmx: {realm:%x emulate:%x}\n",
+             vxp->host_cr0, vxp->vmx_realmode, vxp->vmx_emulate);
+
+#ifdef __x86_64__
+        kdbp("        &msr_state:%p exception_bitmap:%lx\n", &vxp->msr_state,
+             vxp->exception_bitmap);
+#endif
+    } else if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD) {
+        struct arch_svm_struct *svp = &hvp->u.svm;
+#if XEN_VERSION != 4               /* xen 3.x.x */
+        kdbp("  &svm: vmcb:%lx pa:"KDBF64" asid:"KDBF64"\n", svp, svp->vmcb,
+             svp->vmcb_pa, svp->asid_generation);
+#endif
+        kdbp("    msrpm:%p lnch_core:%x vmcb_sync:%x\n", svp->msrpm, 
+             svp->launch_core, svp->vmcb_in_sync);
+    }
+    kdbp("      cachemode:%x io: {state: %x data: "KDBFL"}\n", hvp->cache_mode,
+         hvp->hvm_io.io_state, hvp->hvm_io.io_data);
+    kdbp("      mmio: {gva: "KDBFL" gpfn: "KDBFL"}\n", hvp->hvm_io.mmio_gva,
+         hvp->hvm_io.mmio_gpfn);
+}
+
+/* display struct hvm_vcpu{} in struct vcpu.arch{} */
+static kdb_cpu_cmd_t
+kdb_usgf_vcpuh(void)
+{
+    kdbp("vcpuh vcpu-ptr : display hvm_vcpu struct\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t 
+kdb_cmdf_vcpuh(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    struct vcpu *vp;
+
+    if (argc < 2 || *argv[1] == '?') 
+        return kdb_usgf_vcpuh();
+
+    if (!kdb_str2ulong(argv[1], (ulong *)&vp) || !kdb_vcpu_valid(vp) ||
+        !is_hvm_or_hyb_vcpu(vp)) {
+
+        kdbp("kdb: Bad VCPU: %s\n", argv[1]);
+        return KDB_CPU_MAIN_KDB;
+    }
+    kdb_display_hvm_vcpu(vp);
+    return KDB_CPU_MAIN_KDB;
+}
+
+/* also look into arch_get_info_guest() to get context */
+static void
+kdb_print_uregs(struct cpu_user_regs *regs)
+{
+#ifdef __x86_64__
+    kdbp("      rflags: %016lx   rip: %016lx\n", regs->rflags, regs->rip);
+    kdbp("         rax: %016lx   rbx: %016lx   rcx: %016lx\n",
+         regs->rax, regs->rbx, regs->rcx);
+    kdbp("         rdx: %016lx   rsi: %016lx   rdi: %016lx\n",
+         regs->rdx, regs->rsi, regs->rdi);
+    kdbp("         rbp: %016lx   rsp: %016lx    r8: %016lx\n",
+         regs->rbp, regs->rsp, regs->r8);
+    kdbp("          r9:  %016lx  r10: %016lx   r11: %016lx\n",
+         regs->r9,  regs->r10, regs->r11);
+    kdbp("         r12: %016lx   r13: %016lx   r14: %016lx\n",
+         regs->r12, regs->r13, regs->r14);
+    kdbp("         r15: %016lx\n", regs->r15);
+    kdbp("      ds: %04x   es: %04x   fs: %04x   gs: %04x   "
+         "      ss: %04x   cs: %04x\n", regs->ds, regs->es, regs->fs,
+         regs->gs, regs->ss, regs->cs);
+    kdbp("      errcode:%08lx entryvec:%08lx upcall_mask:%lx\n",
+         regs->error_code, regs->entry_vector, regs->saved_upcall_mask);
+#else
+    kdbp("      eflags: %08x   eip: %08x\n", regs->eflags, regs->eip);
+    kdbp("      eax: %08x   ebx: %08x   ecx: %08x   edx: %08x\n",
+         regs->eax, regs->ebx, regs->ecx, regs->edx);
+    kdbp("      esi: %08x   edi: %08x   ebp: %08x   esp: %08x\n",
+         regs->esi, regs->edi, regs->ebp, regs->esp);
+    kdbp("      ds: %04x   es: %04x   fs: %04x   gs: %04x   "
+     "      ss: %04x   cs: %04x\n", regs->ds, regs->es, regs->fs,
+         regs->gs, regs->ss, regs->cs);
+    kdbp("      errcode:%04lx entryvec:%04lx upcall_mask:%lx\n", 
+         regs->error_code, regs->entry_vector, regs->saved_upcall_mask);
+#endif
+}
+
+#if XEN_SUBVERSION < 3             /* xen 3.1.x or xen 3.2.x */
+#ifdef CONFIG_COMPAT
+    #undef vcpu_info
+    #define vcpu_info(v, field)             \
+    (*(!has_32bit_shinfo((v)->domain) ?                                       \
+       (typeof(&(v)->vcpu_info->compat.field))&(v)->vcpu_info->native.field : \
+       (typeof(&(v)->vcpu_info->compat.field))&(v)->vcpu_info->compat.field))
+
+    #undef __shared_info
+    #define __shared_info(d, s, field)                      \
+    (*(!has_32bit_shinfo(d) ?                           \
+       (typeof(&(s)->compat.field))&(s)->native.field : \
+       (typeof(&(s)->compat.field))&(s)->compat.field))
+#endif
+#endif
+
+static void kdb_display_pv_vcpu(struct vcpu *vp)
+{
+    int i;
+    struct pv_vcpu *gp = &vp->arch.pv_vcpu;
+
+    kdbp("      GDT_VIRT_START(vcpu): %lx\n", GDT_VIRT_START(vp));
+    kdbp("      GDT: entries:0x%lx  frames:\n", gp->gdt_ents);
+    for (i=0; i < 16; i=i+4) 
+        kdbp("          %016lx %016lx %016lx %016lx\n", gp->gdt_frames[i], 
+             gp->gdt_frames[i+1], gp->gdt_frames[i+2],gp->gdt_frames[i+3]);
+    
+    kdbp("      trap_ctxt:%lx kernel_ss:%lx kernel_sp:%lx\n", gp->trap_ctxt,
+         gp->kernel_ss, gp->kernel_sp);
+    kdbp("      ctrlregs:\n");
+    for (i=0; i < 8; i=i+4)
+        kdbp("          %016lx %016lx %016lx %016lx\n", gp->ctrlreg[i], 
+             gp->ctrlreg[i+1], gp->ctrlreg[i+2], gp->ctrlreg[i+3]);
+#ifdef __x86_64__
+    kdbp("      callback:   event: %016lx   failsafe: %016lx\n", 
+         gp->event_callback_eip, gp->failsafe_callback_eip);
+    kdbp("      base: fs:0x%lx gskern:0x%lx gsuser:0x%lx\n", 
+         gp->fs_base, gp->gs_base_kernel, gp->gs_base_user);
+#else
+    kdbp("      callback:   event: %08lx:%08lx   failsafe: %08lx:%08lx\n", 
+         gp->event_callback_cs, gp->event_callback_eip, 
+         gp->failsafe_callback_cs, gp->failsafe_callback_eip);
+#endif
+    kdbp("    vcpu_info_mfn: %lx  iopl: %x\n", gp->vcpu_info_mfn, gp->iopl);
+    kdbp("\n");
+}
+
+/* Display one VCPU info */
+static void
+kdb_display_vcpu(struct vcpu *vp)
+{
+    int i;
+    struct arch_vcpu *avp = &vp->arch;
+    struct paging_vcpu *pvp = &vp->arch.paging;
+    int domid = vp->domain->domain_id;
+
+    kdbp("\nVCPU:  vcpu-id:%d  vcpu-ptr:%p ", vp->vcpu_id, vp);
+    kdbp("  processor:%d domid:%d  domp:%p\n", vp->processor, domid,vp->domain);
+
+    if (domid == DOMID_IDLE) {
+        kdbp("    IDLE vcpu.\n");
+        return;
+    }
+    kdbp("  pause: flags:0x%016lx count:%x\n", vp->pause_flags, 
+         vp->pause_count.counter);
+    kdbp("  vcpu: initdone:%d running:%d\n", 
+         vp->is_initialised, vp->is_running);
+    kdbp("  mcepend:%d nmipend:%d shut: def:%d paused:%d\n", 
+         vp->mce_pending,  vp->nmi_pending, vp->defer_shutdown, 
+         vp->paused_for_shutdown);
+    kdbp("  &vcpu_info:%p : evtchn_upc_pend:%x _mask:%x\n",
+         vp->vcpu_info, vcpu_info(vp, evtchn_upcall_pending),
+         vcpu_info(vp, evtchn_upcall_mask));
+    kdbp("  evt_pend_sel:%lx poll_evtchn:%x ", 
+         *(unsigned long *)&vcpu_info(vp, evtchn_pending_sel), vp->poll_evtchn);
+    kdb_print_spin_lock("virq_lock:", &vp->virq_lock, "\n");
+    for (i=0; i < NR_VIRQS; i++)
+        if (vp->virq_to_evtchn[i] != 0)
+            kdbp("      virq:$%d port:$%d\n", i, vp->virq_to_evtchn[i]);
+
+    kdbp("  next:%p periodic: period:0x%lx last_event:0x%lx\n", 
+         vp->next_in_list, vp->periodic_period, vp->periodic_last_event);
+    kdbp("  cpu_affinity:0x%lx vcpu_dirty_cpumask:%p sched_priv:0x%p\n",
+         vp->cpu_affinity, vp->vcpu_dirty_cpumask, vp->sched_priv);
+    kdbp("  &runstate: %p state: %x (eg. RUNSTATE_running) guestptr:%p\n", 
+         &vp->runstate, vp->runstate.state, runstate_guest(vp));
+    kdbp("\n");
+    kdbp("  arch info: (%p)\n", &vp->arch);
+    kdbp("    guest_context: VGCF_ flags:%lx", 
+         vp->arch.vgc_flags); /* VGCF_in_kernel */
+    if (is_hvm_or_hyb_vcpu(vp))
+        kdbp("    (HVM guest: IP, SP, EFLAGS may be stale)");
+    kdbp("\n");
+    kdb_print_uregs(&vp->arch.user_regs);
+    kdbp("      debugregs:\n");
+    for (i=0; i < 8; i=i+4)
+        kdbp("          %016lx %016lx %016lx %016lx\n", avp->debugreg[i], 
+             avp->debugreg[i+1], avp->debugreg[i+2], avp->debugreg[i+3]);
+
+    if (is_hvm_or_hyb_vcpu(vp))
+        kdb_display_hvm_vcpu(vp);
+    else
+        kdb_display_pv_vcpu(vp);
+
+    kdbp("    TF_flags: %016lx  guest_table: %016lx cr3:%016lx\n", 
+         vp->arch.flags, vp->arch.guest_table.pfn, avp->cr3); 
+    kdbp("    paging: \n");
+    kdbp("      vtlb:%p\n", &pvp->vtlb);
+    kdbp("      &pg_mode:%p gstlevels:%d &shadow:%p shlevels:%d\n",
+         pvp->mode, pvp->mode->guest_levels, &pvp->mode->shadow,
+         pvp->mode->shadow.shadow_levels);
+    kdbp("      shadow_vcpu:\n");
+    kdbp("        guest_vtable:%p last em_mfn:"KDBFL"\n",
+         pvp->shadow.guest_vtable, pvp->shadow.last_emulated_mfn);
+#if CONFIG_PAGING_LEVELS >= 3
+    kdbp("         l3tbl: 3:"KDBFL" 2:"KDBFL"\n"
+         "                1:"KDBFL" 0:"KDBFL"\n",
+     pvp->shadow.l3table[3].l3, pvp->shadow.l3table[2].l3, 
+     pvp->shadow.l3table[1].l3, pvp->shadow.l3table[0].l3);
+    kdbp("        gl3tbl: 3:"KDBFL" 2:"KDBFL"\n"
+         "                1:"KDBFL" 0:"KDBFL"\n",
+     pvp->shadow.gl3e[3].l3, pvp->shadow.gl3e[2].l3, 
+     pvp->shadow.gl3e[1].l3, pvp->shadow.gl3e[0].l3);
+#endif
+    kdbp("  gdbsx_vcpu_event:%x\n", vp->arch.gdbsx_vcpu_event);
+}
+
+/*
+ * FUNCTION: Display (current) VCPU/s
+ */
+static kdb_cpu_cmd_t
+kdb_usgf_vcpu(void)
+{
+    kdbp("vcpu [vcpu-ptr] : display current/vcpu-ptr vcpu info\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t 
+kdb_cmdf_vcpu(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    struct vcpu *v = current;
+
+    if (argc > 2 || (argc > 1 && *argv[1] == '?'))
+        kdb_usgf_vcpu();
+    else if (argc <= 1)
+        kdb_display_vcpu(v);
+    else if (kdb_str2ulong(argv[1], (ulong *)&v) && kdb_vcpu_valid(v))
+        kdb_display_vcpu(v);
+    else 
+        kdbp("Invalid usage/argument:%s v:%lx\n", argv[1], (long)v);
+    return KDB_CPU_MAIN_KDB;
+}
+
+/* from paging_dump_domain_info() */
+static void kdb_pr_dom_pg_modes(struct domain *d)
+{
+    if (paging_mode_enabled(d)) {
+        kdbp(" paging mode enabled");
+        if ( paging_mode_shadow(d) )
+            kdbp(" shadow(PG_SH_enable)");
+        if ( paging_mode_hap(d) )
+            kdbp(" hap(PG_HAP_enable) ");
+        if ( paging_mode_refcounts(d) )
+            kdbp(" refcounts(PG_refcounts) ");
+        if ( paging_mode_log_dirty(d) )
+            kdbp(" log_dirty(PG_log_dirty) ");
+        if ( paging_mode_translate(d) )
+            kdbp(" translate(PG_translate) ");
+        if ( paging_mode_external(d) )
+            kdbp(" external(PG_external) ");
+    } else
+        kdbp(" disabled");
+    kdbp("\n");
+}
+
+/* print event channel info for a given domain.
+ * NOTE: "port" and "event channel" refer to the same thing. evtchn is an
+ * array of pointers to buckets, each holding 128 struct evtchn{}. While
+ * 64bit xen can handle at most 4096 channels, a 32bit guest is limited
+ * to 1024. */
+static void noinline kdb_print_dom_eventinfo(struct domain *dp)
+{
+    uint chn;
+
+    kdbp("\n");
+    kdbp("  Evt: MAX_EVTCHNS:$%d ptr:%p pollmsk:%08lx ",
+         MAX_EVTCHNS(dp), dp->evtchn, dp->poll_mask[0]);
+    kdb_print_spin_lock("lk:", &dp->event_lock, "\n");
+    kdbp("    &evtchn_pending:%p &evtchn_mask:%p\n", 
+         shared_info(dp, evtchn_pending), shared_info(dp, evtchn_mask));
+
+    kdbp("   Channels info: (everything is in decimal):\n");
+    for (chn=0; chn < MAX_EVTCHNS(dp); chn++ ) {
+        struct evtchn *bktp = dp->evtchn[chn/EVTCHNS_PER_BUCKET];
+        struct evtchn *chnp = &bktp[chn & (EVTCHNS_PER_BUCKET-1)];
+        char pbit = test_bit(chn, &shared_info(dp, evtchn_pending)) ? 'Y' : 'N';
+        char mbit = test_bit(chn, &shared_info(dp, evtchn_mask)) ? 'Y' : 'N';
+
+        if (bktp==NULL || chnp->state==ECS_FREE)
+            continue;
+
+        kdbp("    chn:%4u st:%d _xen=%d _vcpu_id:%2d ", chn, chnp->state,
+             chnp->xen_consumer, chnp->notify_vcpu_id);
+        if (chnp->state == ECS_UNBOUND)
+            kdbp(" rem-domid:%d", chnp->u.unbound.remote_domid);
+        else if (chnp->state == ECS_INTERDOMAIN)
+            kdbp(" rem-port:%d rem-dom:%d", chnp->u.interdomain.remote_port,
+                 chnp->u.interdomain.remote_dom->domain_id);
+        else if (chnp->state == ECS_PIRQ)
+            kdbp(" pirq:%d", chnp->u.pirq);
+        else if (chnp->state == ECS_VIRQ)
+            kdbp(" virq:%d", chnp->u.virq);
+
+        kdbp("  pend:%c mask:%c\n", pbit, mbit);
+    }
+#if 0
+    kdbp("pirq to evtchn mapping (pirq:evtchn) (all decimal):\n");
+    for (i=0; i < dp->nr_pirqs; i ++)
+        if (dp->pirq_to_evtchn[i])
+            kdbp("(%d:%d) ", i, dp->pirq_to_evtchn[i]);
+    kdbp("\n");
+#endif
+}
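The bucket arithmetic used in the loop above (`chn/EVTCHNS_PER_BUCKET` for the bucket, `chn & (EVTCHNS_PER_BUCKET-1)` for the slot) relies on the bucket size being a power of two. A minimal standalone sketch, assuming the value of 128 used here:

```c
#include <assert.h>

#define EVTCHNS_PER_BUCKET 128   /* assumption: power of two, as in Xen */

/* bucket holding channel chn, and its slot within that bucket */
static unsigned bucket_of(unsigned chn) { return chn / EVTCHNS_PER_BUCKET; }
static unsigned slot_of(unsigned chn)   { return chn & (EVTCHNS_PER_BUCKET - 1); }
```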
+
+static void kdb_prnt_hvm_dom_info(struct domain *dp)
+{
+    struct hvm_domain *hvp = &dp->arch.hvm_domain;
+
+    kdbp("    HVM info: Hap is%s enabled\n", 
+         dp->arch.hvm_domain.hap_enabled ? "" : " not");
+
+    if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL) {
+        struct vmx_domain *vdp = &dp->arch.hvm_domain.vmx;
+        kdbp("    EPT: ept_mt:%x ept_wl:%x asr:%013lx\n", 
+             vdp->ept_control.ept_mt, vdp->ept_control.ept_wl, 
+             vdp->ept_control.asr);
+    }
+    if (hvp == NULL)
+        return;
+
+    if (hvp->irq.callback_via_type == HVMIRQ_callback_vector)
+        kdbp("    HVMIRQ_callback_vector: %x\n", hvp->irq.callback_via.vector);
+
+    if (!is_hvm_domain(dp))
+        return;
+
+    kdbp("    HVM PARAMS (all in hex):\n");
+    kdbp("\tioreq.page:%lx ioreq.va:%lx\n", hvp->ioreq.page, hvp->ioreq.va);
+    kdbp("\tbuf_ioreq.page:%lx buf_ioreq.va:%lx\n", hvp->buf_ioreq.page,
+         hvp->buf_ioreq.va);
+    kdbp("\tHVM_PARAM_CALLBACK_IRQ: %x\n", hvp->params[HVM_PARAM_CALLBACK_IRQ]);
+    kdbp("\tHVM_PARAM_STORE_PFN: %x\n", hvp->params[HVM_PARAM_STORE_PFN]);
+    kdbp("\tHVM_PARAM_STORE_EVTCHN: %x\n", hvp->params[HVM_PARAM_STORE_EVTCHN]);
+    kdbp("\tHVM_PARAM_PAE_ENABLED: %x\n", hvp->params[HVM_PARAM_PAE_ENABLED]);
+    kdbp("\tHVM_PARAM_IOREQ_PFN: %x\n", hvp->params[HVM_PARAM_IOREQ_PFN]);
+    kdbp("\tHVM_PARAM_BUFIOREQ_PFN: %x\n", hvp->params[HVM_PARAM_BUFIOREQ_PFN]);
+    kdbp("\tHVM_PARAM_VIRIDIAN: %x\n", hvp->params[HVM_PARAM_VIRIDIAN]);
+    kdbp("\tHVM_PARAM_TIMER_MODE: %x\n", hvp->params[HVM_PARAM_TIMER_MODE]);
+    kdbp("\tHVM_PARAM_HPET_ENABLED: %x\n", hvp->params[HVM_PARAM_HPET_ENABLED]);
+    kdbp("\tHVM_PARAM_IDENT_PT: %x\n", hvp->params[HVM_PARAM_IDENT_PT]);
+    kdbp("\tHVM_PARAM_DM_DOMAIN: %x\n", hvp->params[HVM_PARAM_DM_DOMAIN]);
+    kdbp("\tHVM_PARAM_ACPI_S_STATE: %x\n", hvp->params[HVM_PARAM_ACPI_S_STATE]);
+    kdbp("\tHVM_PARAM_VM86_TSS: %x\n", hvp->params[HVM_PARAM_VM86_TSS]);
+    kdbp("\tHVM_PARAM_VPT_ALIGN: %x\n", hvp->params[HVM_PARAM_VPT_ALIGN]);
+    kdbp("\tHVM_PARAM_CONSOLE_PFN: %x\n", hvp->params[HVM_PARAM_CONSOLE_PFN]);
+    kdbp("\tHVM_PARAM_CONSOLE_EVTCHN: %x\n", 
+         hvp->params[HVM_PARAM_CONSOLE_EVTCHN]);
+    kdbp("\tHVM_PARAM_ACPI_IOPORTS_LOCATION: %x\n", 
+         hvp->params[HVM_PARAM_ACPI_IOPORTS_LOCATION]);
+    kdbp("\tHVM_PARAM_MEMORY_EVENT_SINGLE_STEP: %x\n", 
+         hvp->params[HVM_PARAM_MEMORY_EVENT_SINGLE_STEP]);
+}
+static void kdb_print_rangesets(struct domain *dp)
+{
+    int locked = spin_is_locked(&dp->rangesets_lock);
+
+    if (locked)
+        spin_unlock(&dp->rangesets_lock);
+    rangeset_domain_printk(dp);
+    if (locked)
+        spin_lock(&dp->rangesets_lock);
+}
+
+static void kdb_pr_vtsc_info(struct arch_domain *ap)
+{
+    kdbp("    VTSC info: tsc_mode:%x  vtsc:%x  vtsc_last:%016lx\n", 
+         ap->tsc_mode, ap->vtsc, ap->vtsc_last);
+    kdbp("        vtsc_offset:%016lx tsc_khz:%08lx incarnation:%x\n", 
+         ap->vtsc_offset, (ulong)ap->tsc_khz, ap->incarnation);
+    kdbp("        vtsc_kerncount:%016lx _usercount:%016lx\n",
+         ap->vtsc_kerncount, ap->vtsc_usercount);
+}
+
+/* display one domain info */
+static void
+kdb_display_dom(struct domain *dp)
+{
+    struct vcpu *vp;
+    int printed = 0;
+    struct grant_table *gp = dp->grant_table;
+    struct arch_domain *ap = &dp->arch;
+
+    kdbp("\nDOMAIN :    domid:0x%04x ptr:0x%p\n", dp->domain_id, dp);
+    if (dp->domain_id == DOMID_IDLE) {
+        kdbp("    IDLE domain.\n");
+        return;
+    }
+    if (dp->is_dying) {
+        kdbp("    domain is DYING.\n");
+        return;
+    }
+#if 0
+    kdb_print_spin_lock("  pgalk:", &dp->page_alloc_lock, "\n");
+    kdbp("  pglist:  0x%p 0x%p\n", dp->page_list.next,KDB_PGLLE(dp->page_list));
+    kdbp("  xpglist: 0x%p 0x%p\n", dp->xenpage_list.next, 
+         KDB_PGLLE(dp->xenpage_list));
+    kdbp("  next:0x%p hashnext:0x%p\n", 
+         dp->next_in_list, dp->next_in_hashbucket);
+#endif
+    kdbp("  PAGES: tot:0x%08x max:0x%08x xenheap:0x%08x\n", 
+         dp->tot_pages, dp->max_pages, dp->xenheap_pages);
+
+    kdb_print_rangesets(dp);
+    kdb_print_dom_eventinfo(dp);
+    kdbp("\n");
+    kdbp("  Grant table: gp:0x%p\n", gp);
+    if (gp) {
+        kdbp("    nr_frames:0x%08x shpp:0x%p active:0x%p\n",
+             gp->nr_grant_frames, gp->shared_raw, gp->active);
+        kdbp("    maptrk:0x%p maphd:0x%08x maplmt:0x%08x\n", 
+             gp->maptrack, gp->maptrack_head, gp->maptrack_limit);
+        kdbp("    mapcnt:");
+        kdb_print_spin_lock("mapcnt: lk:", &gp->lock, "\n");
+    }
+    kdbp("  hvm:%d priv:%d need_iommu:%d dbg:%d dying:%d paused:%d\n",
+         dp->is_hvm, dp->is_privileged, dp->need_iommu,
+         dp->debugger_attached, dp->is_dying, dp->is_paused_by_controller);
+    kdb_print_spin_lock("  shutdown: lk:", &dp->shutdown_lock, "\n");
+    kdbp("  shutn:%d shut:%d code:%d \n", dp->is_shutting_down,
+         dp->is_shut_down, dp->shutdown_code);
+    kdbp("  pausecnt:0x%08x vm_assist:0x"KDBFL" refcnt:0x%08x\n",
+         dp->pause_count.counter, dp->vm_assist, dp->refcnt.counter);
+    kdbp("  &domain_dirty_cpumask:%p\n", &dp->domain_dirty_cpumask); 
+
+    kdbp("  shared == vcpu_info[]: %p\n",  dp->shared_info); 
+    kdbp("    arch_shared: maxpfn: %lx pfn-mfn-frame-ll mfn: %lx\n", 
+         arch_get_max_pfn(dp), arch_get_pfn_to_mfn_frame_list_list(dp));
+    kdbp("\n");
+    kdbp("  arch_domain at : %p\n", ap);
+
+#ifdef CONFIG_X86_64
+    kdbp("    pt_pages:0x%p ", ap->mm_perdomain_pt_pages);
+    kdbp("    l2:0x%p l3:0x%p\n", ap->mm_perdomain_l2, ap->mm_perdomain_l3);
+#else
+    kdbp("    pt:0x%p ", ap->mm_perdomain_pt);
+#endif
+#ifdef CONFIG_X86_32
+    kdbp("    &mapcache:0x%p\n", &ap->mapcache);
+#endif
+    kdbp("    ioport:0x%p &hvm_dom:0x%p\n", ap->ioport_caps, &ap->hvm_domain);
+    if (is_hvm_or_hyb_domain(dp))
+        kdb_prnt_hvm_dom_info(dp);
+
+    kdbp("    &pging_dom:%p mode: %lx", &ap->paging, ap->paging.mode); 
+    kdb_pr_dom_pg_modes(dp);
+    kdbp("    p2m ptr:%p  pages:{%p, %p}\n", ap->p2m, ap->p2m->pages.next,
+         KDB_PGLLE(ap->p2m->pages));
+    kdbp("       max_mapped_pfn:"KDBFL, ap->p2m->max_mapped_pfn);
+#if XEN_SUBVERSION > 0 && XEN_VERSION == 4              /* xen 4.1 and above */
+    kdbp("  phys_table:%p\n", ap->p2m->phys_table.pfn);
+#else
+    kdbp("  phys_table.pfn:"KDBFL"\n", ap->phys_table.pfn);
+#endif
+    kdbp("    physaddr_bitsz:%d 32bit_pv:%d has_32bit_shinfo:%d\n", 
+         ap->physaddr_bitsize, ap->is_32bit_pv, ap->has_32bit_shinfo);
+    kdb_pr_vtsc_info(ap);
+    kdbp("  sched:0x%p  &handle:0x%p\n", dp->sched_priv, &dp->handle);
+    kdbp("  vcpu ptrs:\n   ");
+    for_each_vcpu(dp, vp) {
+        kdbp(" %d:%p", vp->vcpu_id, vp);
+        if (++printed % 4 == 0) kdbp("\n   ");
+    }
+    kdbp("\n");
+}
+
+/*
+ * FUNCTION: Display (current) domain/s
+ */
+static kdb_cpu_cmd_t
+kdb_usgf_dom(void)
+{
+    kdbp("dom [all|domid]: Display current/all/given domain/s\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t 
+kdb_cmdf_dom(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    int id;
+    struct domain *dp = current->domain;
+
+    if (argc > 1 && *argv[1] == '?')
+        return kdb_usgf_dom();
+
+    if (argc > 1) {
+        for(dp=domain_list; dp; dp=dp->next_in_list)
+            if (kdb_str2deci(argv[1], &id) && dp->domain_id==id)
+                kdb_display_dom(dp);
+            else if (!strcmp(argv[1], "all")) 
+                kdb_display_dom(dp);
+    } else {
+        kdbp("Displaying current domain :\n");
+        kdb_display_dom(dp);
+    }
+    return KDB_CPU_MAIN_KDB;
+}
+
+/* Dump irq desc table */
+static kdb_cpu_cmd_t
+kdb_usgf_dirq(void)
+{
+    kdbp("dirq : dump irq bindings\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t
+kdb_cmdf_dirq(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    unsigned long irq, sz, offs, addr;
+    char buf[KSYM_NAME_LEN+1];
+    char affstr[NR_CPUS/4+NR_CPUS/32+2];    /* courtesy dump_irqs() */
+
+    if (argc > 1 && *argv[1] == '?')
+        return kdb_usgf_dirq();
+
+#if XEN_VERSION < 4 && XEN_SUBVERSION < 5           /* xen 3.4.x or below */
+    kdbp("idx/irq#/status: all are in decimal\n");
+    kdbp("idx  irq#  status   action(handler name devid)\n");
+    for (irq=0; irq < NR_VECTORS; irq++) {
+        irq_desc_t  *dp = &irq_desc[irq];
+        if (!dp->action)
+            continue;
+        addr = (unsigned long)dp->action->handler;
+        kdbp("[%3ld]:irq:%3d st:%3d f:%s devnm:%s devid:0x%p\n",
+             irq, vector_to_irq(irq), dp->status, (dp->status & IRQ_GUEST) ? 
+                            "GUEST IRQ" : symbols_lookup(addr, &sz, &offs, buf),
+             dp->action->name, dp->action->dev_id);
+    }
+#else
+    kdbp("irq_desc[]:%p nr_irqs: $%d nr_irqs_gsi: $%d\n", irq_desc, nr_irqs, 
+          nr_irqs_gsi);
+    kdbp("irq/vec#/status: in decimal. affinity in hex, not bitmap\n");
+    kdbp("irq-- vec sta function----------- name---- type--------- ");
+    kdbp("aff devid------------\n");
+    for (irq=0; irq < nr_irqs; irq++) {
+        void *devidp;
+        const char *symp, *nmp;
+        irq_desc_t  *dp = irq_to_desc(irq);
+        struct arch_irq_desc *archp = &dp->arch;
+
+        if (!dp->handler || dp->handler==&no_irq_type || dp->status & IRQ_GUEST)
+            continue;
+
+        addr = dp->action ? (unsigned long)dp->action->handler : 0;
+        symp = addr ? symbols_lookup(addr, &sz, &offs, buf) : "n/a ";
+        nmp = addr ? dp->action->name : "n/a ";
+        devidp = addr ? dp->action->dev_id : NULL;
+        cpumask_scnprintf(affstr, sizeof(affstr), dp->affinity);
+        kdbp("[%3ld] %03d %03d %-19s %-8s %-13s %3s 0x%p\n", irq, archp->vector,
+             dp->status, symp, nmp, dp->handler->typename, affstr, devidp);
+    }
+    kdb_prnt_guest_mapped_irqs();
+#endif
+    return KDB_CPU_MAIN_KDB;
+}
+
+static void
+kdb_prnt_vec_irq_table(int cpu)
+{
+    int i,j, *tbl = per_cpu(vector_irq, cpu);
+
+    kdbp("CPU %d : ", cpu);
+    for (i=0, j=0; i < NR_VECTORS; i++)
+        if (tbl[i] != -1) {
+            kdbp("(%3d:%3d) ", i, tbl[i]);
+            if (!(++j % 5))
+                kdbp("\n        ");
+        }
+    kdbp("\n");
+}
+
+/* Dump irq desc table */
+static kdb_cpu_cmd_t
+kdb_usgf_dvit(void)
+{
+    kdbp("dvit [cpu|all]: dump (per cpu) vector irq table\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t
+kdb_cmdf_dvit(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    int cpu, ccpu = smp_processor_id();
+
+    if (argc > 1 && *argv[1] == '?')
+        return kdb_usgf_dvit();
+    
+    if (argc > 1) {
+        if (!strcmp(argv[1], "all")) 
+            cpu = -1;
+        else if (!kdb_str2deci(argv[1], &cpu)) {
+            kdbp("Invalid cpu:%s\n", argv[1]);
+            return kdb_usgf_dvit();
+        }
+    } else
+        cpu = ccpu;
+
+    kdbp("Per CPU vector irq table pairs (vector:irq) (all decimals):\n");
+    if (cpu != -1) 
+        kdb_prnt_vec_irq_table(cpu);
+    else
+        for_each_online_cpu(cpu) 
+            kdb_prnt_vec_irq_table(cpu);
+
+    return KDB_CPU_MAIN_KDB;
+}
+
+/* do vmexit on all cpu's so intel VMCS can be dumped */
+static kdb_cpu_cmd_t 
+kdb_all_cpu_flush_vmcs(void)
+{
+    int cpu, ccpu = smp_processor_id();
+    for_each_online_cpu(cpu) {
+        if (cpu == ccpu) {
+            kdb_curr_cpu_flush_vmcs();
+        } else {
+            if (kdb_cpu_cmd[cpu] != KDB_CPU_PAUSE){  /* hung cpu */
+                kdbp("Skipping (hung?) cpu %d\n", cpu);
+                continue;
+            }
+            kdb_cpu_cmd[cpu] = KDB_CPU_DO_VMEXIT;
+            while (kdb_cpu_cmd[cpu]==KDB_CPU_DO_VMEXIT);
+        }
+    }
+    return KDB_CPU_MAIN_KDB;
+}
+
+/* Display VMCS or VMCB */
+static kdb_cpu_cmd_t
+kdb_usgf_dvmc(void)
+{
+    kdbp("dvmc [domid][vcpuid] : Dump vmcs/vmcb\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t
+kdb_cmdf_dvmc(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    domid_t domid = 0;  /* unsigned type doesn't like -1 */
+    int vcpuid = -1;
+
+    if (argc > 1 && *argv[1] == '?')
+        return kdb_usgf_dvmc();
+
+    if (argc > 1) { 
+        if (!kdb_str2domid(argv[1], &domid, 1))
+            return KDB_CPU_MAIN_KDB;
+    }
+    if (argc > 2 && !kdb_str2deci(argv[2], &vcpuid)) {
+        kdbp("Bad vcpuid: 0x%x\n", vcpuid);
+        return KDB_CPU_MAIN_KDB;
+    }
+    if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL) {
+        kdb_all_cpu_flush_vmcs();
+        kdb_dump_vmcs(domid, (int)vcpuid);
+    } else {
+        kdb_dump_vmcb(domid, (int)vcpuid);
+    }
+    return KDB_CPU_MAIN_KDB;
+}
+
+static kdb_cpu_cmd_t
+kdb_usgf_mmio(void)
+{
+    kdbp("mmio: dump mmio related info\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t
+kdb_cmdf_mmio(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    if (argc > 1 && *argv[1] == '?')
+        return kdb_usgf_mmio();
+
+    kdbp("r/o mmio ranges:\n");
+    rangeset_printk(mmio_ro_ranges);
+    kdbp("\n");
+    return KDB_CPU_MAIN_KDB;
+}
+
+/* Dump timer/timers queues */
+static kdb_cpu_cmd_t
+kdb_usgf_dtrq(void)
+{
+    kdbp("dtrq: dump timer queues on all cpus\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t
+kdb_cmdf_dtrq(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    if (argc > 1 && *argv[1] == '?')
+        return kdb_usgf_dtrq();
+
+    kdb_dump_timer_queues();
+    return KDB_CPU_MAIN_KDB;
+}
+
+struct idte {
+    uint16_t offs0_15;
+    uint16_t selector;
+    uint16_t meta;
+    uint16_t offs16_31;
+    uint32_t offs32_63;
+    uint32_t resvd;
+};
+
+#ifdef __x86_64__
+static void
+kdb_print_idte(int num, struct idte *idtp) 
+{
+    uint16_t mta = idtp->meta;
+    char dpl = ((mta & 0x6000) >> 13);
+    char present = ((mta & 0x8000) >> 15);
+    int tval = ((mta & 0x300) >> 8);
+    char *type = (tval == 1) ? "Task" : ((tval == 2) ? "Intr" : "Trap");
+    domid_t domid = idtp->selector==__HYPERVISOR_CS64 ? DOMID_IDLE :
+                    current->domain->domain_id;
+    uint64_t addr = idtp->offs0_15 | ((uint64_t)idtp->offs16_31 << 16) | 
+                    ((uint64_t)idtp->offs32_63 << 32);
+
+    kdbp("[%03d]: %s %x  %x %04x:%016lx ", num, type, dpl, present,
+         idtp->selector, addr); 
+    kdb_prnt_addr2sym(domid, addr, "\n");
+}
+
+/* Dump 64bit idt table currently on this cpu. Intel Vol 3 section 5.14.1 */
+static kdb_cpu_cmd_t
+kdb_usgf_didt(void)
+{
+    kdbp("didt : dump IDT table on the current cpu\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t
+kdb_cmdf_didt(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    int i;
+    struct idte *idtp = (struct idte *)idt_tables[smp_processor_id()];
+
+    if (argc > 1 && *argv[1] == '?')
+        return kdb_usgf_didt();
+
+    kdbp("IDT at:%p\n", idtp);
+    kdbp("idt#  Type DPL P addr (all hex except idt#)\n");
+    for (i=0; i < 256; i++, idtp++) 
+        kdb_print_idte(i, idtp);
+    return KDB_CPU_MAIN_KDB;
+}
+#else
+static kdb_cpu_cmd_t
+kdb_cmdf_didt(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    kdbp("kdb: Please implement me in 32bit hypervisor\n");
+    return KDB_CPU_MAIN_KDB;
+}
+#endif
+
+struct gdte {             /* same for TSS and LDT */
+    ulong limit0:16;
+    ulong base0:24;       /* linear address base, not pa */
+    ulong acctype:4;      /* Type: access rights */
+    ulong S:1;            /* S: 0 = system, 1 = code/data */
+    ulong DPL:2;          /* DPL */
+    ulong P:1;            /* P: Segment Present */
+    ulong limit1:4;
+    ulong AVL:1;          /* AVL: avail for use by system software */
+    ulong L:1;            /* L: 64bit code segment */
+    ulong DB:1;           /* D/B */
+    ulong G:1;            /* G: granularity */
+    ulong base1:8;        /* linear address base, not pa */
+};
+
+union gdte_u {
+    struct gdte gdte;
+    u64 gval;
+};
+
+struct call_gdte {
+    unsigned short offs0:16;
+    unsigned short sel:16;
+    unsigned short misc0:16;
+    unsigned short offs1:16;
+};
+
+struct idt_gdte {
+    unsigned long offs0:16;
+    unsigned long sel:16;
+    unsigned long ist:3;
+    unsigned long unused0:13;
+    unsigned long offs1:16;
+};
+union sgdte_u {
+    struct call_gdte cgdte;
+    struct idt_gdte igdte;
+    u64 sgval;
+};
+
+/* return binary string form of a 4-bit value: "0000" to "1111" */
+static char *kdb_ret_acctype(uint acctype)
+{
+    static char buf[16];
+    char *p = buf;
+    int i;
+
+    if (acctype > 0xf) {
+        buf[0] = buf[1] = buf[2] = buf[3] = '?';
+        buf[4] = '\0';
+        return buf;
+    }
+    for (i=0; i < 4; i++, p++, acctype=acctype>>1)
+        *p = (acctype & 0x1) ? '1' : '0';
+    buf[4] = '\0';
+
+    return buf;
+}
+
+/* Display GDT table. IA-32e mode is assumed. */
+/* first display non system descriptors then display system descriptors */
+static kdb_cpu_cmd_t
+kdb_usgf_dgdt(void)
+{
+    kdbp("dgdt [gdt-ptr decimal-byte-size] dump GDT table on current cpu or "
+         "for given vcpu\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t
+kdb_cmdf_dgdt(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    struct Xgt_desc_struct desc;
+    union gdte_u u1;
+    ulong start_addr, end_addr, taddr=0;
+    domid_t domid = DOMID_IDLE;
+    int idx;
+
+    if (argc > 1 && *argv[1] == '?')
+        return kdb_usgf_dgdt();
+
+    if (argc > 1) {
+        if (argc != 3)
+            return kdb_usgf_dgdt();
+
+        if (kdb_str2ulong(argv[1], (ulong *)&start_addr) && 
+            kdb_str2deci(argv[2], (int *)&taddr)) {
+            end_addr = start_addr + taddr;
+        } else {
+            kdbp("dgdt: Bad arg:%s or %s\n", argv[1], argv[2]);
+            return kdb_usgf_dgdt();
+        }
+    } else {
+        __asm__ __volatile__ ("sgdt  (%0) \n" :: "a"(&desc) : "memory");
+        start_addr = (ulong)desc.address; 
+        end_addr = (ulong)desc.address + desc.size;
+    }
+    kdbp("GDT: Will skip null desc at 0, start:%lx end:%lx\n", start_addr, 
+         end_addr);
+    kdbp("[idx]   sel --- val --------  Accs DPL P AVL L DB G "
+         "--Base Addr ----  Limit\n");
+    kdbp("                              Type\n");
+
+    /* skip first 8 null bytes */
+    /* the cpu multiplies the index by 8 and adds to GDT.base */
+    for (taddr = start_addr+8; taddr < end_addr;  taddr += sizeof(ulong)) {
+
+        /* not all entries are mapped. do this to avoid GP even if hyp */
+        if (!kdb_read_mem(taddr, (kdbbyt_t *)&u1, sizeof(u1),domid) || !u1.gval)
+            continue;
+
+        if (u1.gval == 0xffffffffffffffff || u1.gval == 0x5555555555555555)
+            continue;               /* skip poison/debug fill patterns */
+
+        idx = (taddr - start_addr) / 8;
+        if (u1.gdte.S == 0) {       /* System Desc are 16 bytes in 64bit mode */
+            taddr += sizeof(ulong);
+            continue;
+        }
+        kdbp("[%04x] %04x %016lx  %4s  %x  %d  %d  %d  %d %d %016lx  %05x\n",
+             idx, (idx<<3), u1.gval, kdb_ret_acctype(u1.gdte.acctype), 
+             u1.gdte.DPL, 
+             u1.gdte.P, u1.gdte.AVL, u1.gdte.L, u1.gdte.DB, u1.gdte.G,  
+             (u64)((u64)u1.gdte.base0 | (u64)((u64)u1.gdte.base1<<24)), 
+             u1.gdte.limit0 | (u1.gdte.limit1<<16));
+    }
+
+    kdbp("\nSystem descriptors (S=0) : (skipping 0th entry)\n");
+    for (taddr=start_addr+8;  taddr < end_addr;  taddr += sizeof(ulong)) {
+        uint acctype;
+        u64 upper, addr64=0;
+
+        /* not all entries are mapped. do this to avoid GP even if hyp */
+        if (kdb_read_mem(taddr, (kdbbyt_t *)&u1, sizeof(u1), domid)==0 || 
+            u1.gval == 0 || u1.gdte.S == 1) {
+            continue;
+        }
+        idx = (taddr - start_addr) / 8;
+        taddr += sizeof(ulong);
+        if (kdb_read_mem(taddr, (kdbbyt_t *)&upper, 8, domid) == 0) {
+            kdbp("Could not read upper 8 bytes of system desc\n");
+            upper = 0;
+        }
+        acctype = u1.gdte.acctype;
+        if (acctype != 2 && acctype != 9 && acctype != 11 && acctype !=12 &&
+            acctype != 14 && acctype != 15)
+            continue;
+
+        kdbp("[%04x] %04x val:%016lx DPL:%x P:%d type:%x ",
+             idx, (idx<<3), u1.gval, u1.gdte.DPL, u1.gdte.P, acctype); 
+
+        upper = (u64)((u64)(upper & 0xFFFFFFFF) << 32);
+
+        /* Vol 3A: table: 3-2  page: 3-19 */
+        if (acctype == 2) {
+            kdbp("LDT gate (0010)\n");
+        }
+        else if (acctype == 9) {
+            kdbp("TSS avail gate(1001)\n");
+        }
+        else if (acctype == 11) {
+            kdbp("TSS busy gate(1011)\n");
+        }
+        else if (acctype == 12) {
+            kdbp("CALL gate (1100)\n");
+        }
+        else if (acctype == 14) {
+            kdbp("IDT gate (1110)\n");
+        }
+        else if (acctype == 15) {
+            kdbp("Trap gate (1111)\n"); 
+        }
+
+        if (acctype == 2 || acctype == 9 || acctype == 11) {
+            kdbp("        AVL:%d G:%d Base Addr:%016lx Limit:%x\n",
+                 u1.gdte.AVL, u1.gdte.G,  
+                 (u64)((u64)u1.gdte.base0 | ((u64)u1.gdte.base1<<24)| upper),
+                 (u32)u1.gdte.limit0 | (u32)((u32)u1.gdte.limit1<<16));
+
+        } else if (acctype == 12) {
+            union sgdte_u u2;
+            u2.sgval = u1.gval;
+
+            addr64 = (u64)((u64)u2.cgdte.offs0 | 
+                           (u64)((u64)u2.cgdte.offs1<<16) | upper);
+            kdbp("        Entry: %04x:%016lx\n", u2.cgdte.sel, addr64);
+        } else if (acctype == 14 || acctype == 15) {
+            union sgdte_u u2;
+            u2.sgval = u1.gval;
+
+            addr64 = (u64)((u64)u2.igdte.offs0 | 
+                           (u64)((u64)u2.igdte.offs1<<16) | upper);
+            kdbp("        Entry: %04x:%016lx ist:%03x\n", u2.igdte.sel, addr64,
+                 u2.igdte.ist);
+        } else 
+            kdbp(" Error: Unrecognized type:%x\n", acctype);
+    }
+    return KDB_CPU_MAIN_KDB;
+}
+
+/* Display scheduler basic and extended info */
+static kdb_cpu_cmd_t
+kdb_usgf_sched(void)
+{
+    kdbp("sched: show scheduler info and run queues\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t
+kdb_cmdf_sched(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    if (argc > 1 && *argv[1] == '?')
+        return kdb_usgf_sched();
+
+    kdb_print_sched_info();
+    return KDB_CPU_MAIN_KDB;
+}
+
+/* Display MMU basic and extended info */
+static kdb_cpu_cmd_t
+kdb_usgf_mmu(void)
+{
+    kdbp("mmu: print basic MMU info\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t
+kdb_cmdf_mmu(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    int cpu;
+
+    if (argc > 1 && *argv[1] == '?')
+        return kdb_usgf_mmu();
+
+    kdbp("MMU Info:\n");
+    kdbp("total  pages: %lx\n", total_pages);
+    kdbp("max page/mfn: %lx\n", max_page);
+    kdbp("frame_table:  %p\n", frame_table);
+    kdbp("DIRECTMAP_VIRT_START:  %lx\n", DIRECTMAP_VIRT_START);
+    kdbp("HYPERVISOR_VIRT_START: %lx\n", HYPERVISOR_VIRT_START);
+    kdbp("HYPERVISOR_VIRT_END:   %lx\n", HYPERVISOR_VIRT_END);
+    kdbp("RO_MPT_VIRT_START:     %lx\n", RO_MPT_VIRT_START);
+    kdbp("PERDOMAIN_VIRT_START:  %lx\n", PERDOMAIN_VIRT_START);
+    kdbp("CONFIG_PAGING_LEVELS:%d\n", CONFIG_PAGING_LEVELS);
+    kdbp("__HYPERVISOR_COMPAT_VIRT_START: %lx\n", 
+         (ulong)__HYPERVISOR_COMPAT_VIRT_START);
+    kdbp("&MPT[0] == %016lx\n", &machine_to_phys_mapping[0]);
+
+    kdbp("\nFIRST_RESERVED_GDT_PAGE: %x\n", FIRST_RESERVED_GDT_PAGE);
+    kdbp("FIRST_RESERVED_GDT_ENTRY: %lx\n", (ulong)FIRST_RESERVED_GDT_ENTRY);
+    kdbp("LAST_RESERVED_GDT_ENTRY: %lx\n", (ulong)LAST_RESERVED_GDT_ENTRY);
+    kdbp("  Per cpu non-compat gdt_table:\n");
+    for_each_online_cpu(cpu) {
+        kdbp("\tcpu:%d  gdt_table:%p\n", cpu, per_cpu(gdt_table, cpu));
+    }
+    kdbp("  Per cpu compat gdt_table:\n");
+    for_each_online_cpu(cpu) {
+        kdbp("\tcpu:%d  gdt_table:%p\n", cpu, per_cpu(compat_gdt_table, cpu));
+    }
+    kdbp("\n");
+    kdbp("  Per cpu tss:\n");
+    for_each_online_cpu(cpu) {
+        struct tss_struct *tssp = &per_cpu(init_tss, cpu);
+        kdbp("\tcpu:%d  tss:%p (rsp0:%016lx)\n", cpu, tssp, tssp->rsp0);
+    }
+#ifdef USER_MAPPINGS_ARE_GLOBAL
+    kdbp("USER_MAPPINGS_ARE_GLOBAL is defined\n");
+#else
+    kdbp("USER_MAPPINGS_ARE_GLOBAL is NOT defined\n");
+#endif
+    kdbp("\n");
+    return KDB_CPU_MAIN_KDB;
+}
+
+/* For HVM/HYB guests, go through EPT. For PV guests we walk the btree:
+ * pfn_to_mfn_frame_list_list is the root that holds mfns of up to 16
+ * pages (call 'em l2 nodes) that contain mfns of guest p2m table pages.
+ * NOTE: num of entries in a p2m page is same as num of entries in l2 node */
+static noinline ulong
+kdb_gpfn2mfn(struct domain *dp, ulong gpfn, p2m_type_t *typep) 
+{
+    int idx;
+
+    if ( !paging_mode_translate(dp) ) {
+        mfn_t *mfn_va, mfn = arch_get_pfn_to_mfn_frame_list_list(dp);
+        int g_longsz = kdb_guest_bitness(dp->domain_id)/8;
+        int entries_per_pg = PAGE_SIZE/g_longsz;
+        const int shift = get_count_order(entries_per_pg);
+
+	if ( !mfn_valid(mfn) ) {
+	    kdbp("Invalid frame_list_list mfn:%lx for non-xlate guest\n", mfn);
+	    return INVALID_MFN;
+	}
+
+        mfn_va = map_domain_page(mfn);
+        idx = gpfn >> 2*shift;     /* index in root page/node */
+        if (idx > 15) {
+            kdbp("gpfn:%lx idx:%x not in frame list limit of 16\n", gpfn, idx);
+            unmap_domain_page(mfn_va);
+            return INVALID_MFN;
+        }
+        mfn = (g_longsz == 4) ? ((int *)mfn_va)[idx] : mfn_va[idx];
+        if (mfn==0) {
+            kdbp("No mfn for idx:%d for gpfn:%lx in root pg\n", idx, gpfn);
+            unmap_domain_page(mfn_va);
+            return INVALID_MFN;
+        }
+        mfn_va = map_domain_page(mfn);
+        KDBGP1("p2m: idx:%x fll:%lx mfn of 2nd lvl page:%lx\n", idx,
+               arch_get_pfn_to_mfn_frame_list_list(dp), mfn);
+
+        idx = (gpfn>>shift) & ((1<<shift)-1);     /* idx in l2 node */
+        mfn = (g_longsz == 4) ? ((int *)mfn_va)[idx] : mfn_va[idx];
+        unmap_domain_page(mfn_va);
+        if (mfn == 0) {
+            kdbp("No mfn entry at:%x in 2nd lvl pg for gpfn:%lx\n", idx, gpfn);
+            return INVALID_MFN;
+        }
+        KDBGP1("p2m: idx:%x  mfn of p2m page:%lx\n", idx, mfn); 
+        mfn_va = map_domain_page(mfn);
+        idx = gpfn & ((1<<shift)-1);
+        mfn = (g_longsz == 4) ? ((int *)mfn_va)[idx] : mfn_va[idx];
+        unmap_domain_page(mfn_va);
+
+	*typep = -1;
+        return mfn;
+    } else
+        return mfn_x(get_gfn_query_unlocked(dp, gpfn, typep));
+
+    return INVALID_MFN;
+}
+
+/* given a pfn, find its mfn */
+static kdb_cpu_cmd_t
+kdb_usgf_p2m(void)
+{
+    kdbp("p2m domid 0xgpfn : gpfn to mfn\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t
+kdb_cmdf_p2m(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    struct domain *dp;
+    ulong gpfn, mfn=0xdeadbeef;
+    p2m_type_t p2mtype = -1;
+
+    if (argc < 3                                   ||
+        (dp=kdb_strdomid2ptr(argv[1], 1)) == NULL  ||
+        !kdb_str2ulong(argv[2], &gpfn)) {
+
+        return kdb_usgf_p2m();
+    }
+    mfn = kdb_gpfn2mfn(dp, gpfn, &p2mtype);
+    if ( paging_mode_translate(dp) )
+        kdbp("p2m[%lx] == %lx type:%d/0x%x\n", gpfn, mfn, p2mtype, p2mtype);
+    else 
+        kdbp("p2m[%lx] == %lx type:N/A(PV)\n", gpfn, mfn);
+
+    return KDB_CPU_MAIN_KDB;
+}
+
+/* given an mfn, lookup pfn in the MPT */
+static kdb_cpu_cmd_t
+kdb_usgf_m2p(void)
+{
+    kdbp("m2p 0xmfn: mfn to pfn\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t
+kdb_cmdf_m2p(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    mfn_t mfn;
+    if (argc > 1 && kdb_str2ulong(argv[1], &mfn))
+        if (mfn_valid(mfn))
+            kdbp("mpt[%lx] == %lx\n", mfn, machine_to_phys_mapping[mfn]);
+        else
+            kdbp("Invalid mfn:%lx\n", mfn);
+    else
+        kdb_usgf_m2p();
+    return KDB_CPU_MAIN_KDB;
+}
+
+static void 
+kdb_pr_pg_pgt_flds(unsigned long type_info)
+{
+    switch (type_info & PGT_type_mask) {
+        case (PGT_l1_page_table):
+            kdbp("    page is PGT_l1_page_table\n");
+            break;
+        case PGT_l2_page_table:
+            kdbp("    page is PGT_l2_page_table\n");
+            break;
+        case PGT_l3_page_table:
+            kdbp("    page is PGT_l3_page_table\n");
+            break;
+        case PGT_l4_page_table:
+            kdbp("    page is PGT_l4_page_table\n");
+            break;
+        case PGT_seg_desc_page:
+            kdbp("    page is seg desc page\n");
+            break;
+        case PGT_writable_page:
+            kdbp("    page is writable page\n");
+            break;
+        case PGT_shared_page:
+            kdbp("    page is shared page\n");
+            break;
+    }
+    if (type_info & PGT_pinned)
+        kdbp("    page is pinned\n");
+    if (type_info & PGT_validated)
+        kdbp("    page is validated\n");
+    if (type_info & PGT_pae_xen_l2)
+        kdbp("    page is PGT_pae_xen_l2\n");
+    if (type_info & PGT_partial)
+        kdbp("    page is PGT_partial\n");
+    if (type_info & PGT_locked)
+        kdbp("    page is PGT_locked\n");
+}
+
+static void
+kdb_pr_pg_pgc_flds(unsigned long count_info)
+{
+    if (count_info & PGC_allocated)
+        kdbp("  PGC_allocated");
+    if (count_info & PGC_xen_heap)
+        kdbp("  PGC_xen_heap");
+    if (count_info & PGC_page_table)
+        kdbp("  PGC_page_table");
+    if (count_info & PGC_broken)
+        kdbp("  PGC_broken");
+#if XEN_VERSION < 4                                 /* xen 3.x.x */
+    if (count_info & PGC_offlining)
+        kdbp("  PGC_offlining");
+    if (count_info & PGC_offlined)
+        kdbp("  PGC_offlined");
+#else
+    if (count_info & PGC_state_inuse)
+        kdbp("  PGC_inuse");
+    if (count_info & PGC_state_offlining)
+        kdbp("  PGC_state_offlining");
+    if (count_info & PGC_state_offlined)
+        kdbp("  PGC_state_offlined");
+    if (count_info & PGC_state_free)
+        kdbp("  PGC_state_free");
+#endif
+    kdbp("\n");
+}
+
+/* print struct page_info{} given a ptr to it or an mfn.
+ * NOTE: given an mfn there seems to be no way of knowing how it's used, so
+ *       here we just print all info and let the user decide what's applicable */
+static kdb_cpu_cmd_t
+kdb_usgf_dpage(void)
+{
+    kdbp("dpage mfn|page-ptr : Display struct page\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t
+kdb_cmdf_dpage(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    unsigned long val;
+    struct page_info *pgp;
+    struct domain *dp;
+
+    if (argc <= 1 || *argv[1] == '?') 
+        return kdb_usgf_dpage();
+
+    if ((kdb_str2ulong(argv[1], &val) == 0)      ||
+        (val <  (ulong)frame_table && !mfn_valid(val))) {
+
+        kdbp("Invalid arg:%s\n", argv[1]);
+        return KDB_CPU_MAIN_KDB;
+    }
+    kdbp("Page Info:\n");
+    if (val <= (ulong)frame_table) {       /* arg is mfn */
+        pgp = mfn_to_page(val);
+        kdbp("  mfn: %lx page_info:%p\n", val, pgp);
+    } else {
+        pgp = (struct page_info *)val; /* arg is struct page{} */
+        if (pgp < frame_table || pgp >= frame_table+max_page) {
+            kdbp("Invalid page ptr. below/beyond max_page\n");
+            return KDB_CPU_MAIN_KDB;
+        }
+        kdbp("  mfn: %lx page_info:%p\n", page_to_mfn(pgp), pgp);
+    } 
+    kdbp("  count_info: %016lx  (refcnt: %x)\n", pgp->count_info,
+         pgp->count_info & PGC_count_mask);
+#if XEN_VERSION > 3 || XEN_SUBVERSION > 3             /* xen 3.4.x or later */
+    kdb_pr_pg_pgc_flds(pgp->count_info);
+
+    kdbp("In use info:\n");
+    kdbp("  type_info:%016lx\n", pgp->u.inuse.type_info);
+    kdb_pr_pg_pgt_flds(pgp->u.inuse.type_info);
+    dp = page_get_owner(pgp);
+    kdbp("  domid:%d (pickled:%lx)\n", dp ? dp->domain_id : -1, 
+         pgp->v.inuse._domain);
+
+    kdbp("Shadow Info:\n");
+    kdbp("  type:%x pinned:%x count:%x\n", pgp->u.sh.type, pgp->u.sh.pinned,
+         pgp->u.sh.count);
+    kdbp("  back:%lx  shadow_flags:%x  next_shadow:%lx\n", pgp->v.sh.back,
+         pgp->shadow_flags, pgp->next_shadow);
+
+    kdbp("Free Info\n");
+    kdbp("  need_tlbflush:%d order:%d tlbflush_timestamp:%x\n",
+         pgp->u.free.need_tlbflush, pgp->v.free.order, 
+         pgp->tlbflush_timestamp);
+#else
+    if (pgp->count_info & PGC_allocated)            /* page allocated */
+        kdbp("  PGC_allocated");
+    if (pgp->count_info & PGC_page_table)           /* page table page */
+        kdbp("  PGC_page_table");
+    kdbp("\n");
+    kdbp("  page is %s xen heap page\n", is_xen_heap_page(pgp) ? "a":"NOT");
+    kdbp("  cacheattr:%x\n", (pgp->count_info>>PGC_cacheattr_base) & 7);
+    if (pgp->count_info & PGC_count_mask) {         /* page in use */
+        dp = pgp->u.inuse._domain;         /* pickled domain */
+        kdbp("  page is in use\n");
+        kdbp("    domid: %d  (pickled dom:%x)\n", 
+             dp ? (unpickle_domptr(dp))->domain_id : -1, dp);
+        kdbp("    type_info: %lx\n", pgp->u.inuse.type_info);
+        kdb_prt_pg_type(pgp->u.inuse.type_info);
+    } else {                                         /* page is free */
+        kdbp("  page is free\n");
+        kdbp("    order: %x\n", pgp->u.free.order);
+        kdbp("    cpumask: %lx\n", pgp->u.free.cpumask.bits);
+    }
+    kdbp("  tlbflush/shadow_flags: %lx\n", pgp->shadow_flags);
+#endif
+    return KDB_CPU_MAIN_KDB;
+}
+
+/* display asked msr value */
+static kdb_cpu_cmd_t
+kdb_usgf_dmsr(void)
+{
+    kdbp("dmsr address : Display msr value\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t
+kdb_cmdf_dmsr(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    unsigned long addr, val;
+
+    if (argc <= 1 || *argv[1] == '?') 
+        return kdb_usgf_dmsr();
+
+    if ((kdb_str2ulong(argv[1], &addr) == 0)) {
+        kdbp("Invalid arg:%s\n", argv[1]);
+        return KDB_CPU_MAIN_KDB;
+    }
+    rdmsrl(addr, val);
+    kdbp("msr: %lx  val:%lx\n", addr, val);
+
+    return KDB_CPU_MAIN_KDB;
+}
+
+/* execute cpuid for given value */
+static kdb_cpu_cmd_t
+kdb_usgf_cpuid(void)
+{
+    kdbp("cpuid eax : Display cpuid value returned in rax\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t
+kdb_cmdf_cpuid(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    unsigned long rax=0, rbx=0, rcx=0, rdx=0;
+
+    if (argc <= 1 || *argv[1] == '?') 
+        return kdb_usgf_cpuid();
+
+    if ((kdb_str2ulong(argv[1], &rax) == 0)) {
+        kdbp("Invalid arg:%s\n", argv[1]);
+        return KDB_CPU_MAIN_KDB;
+    }
+    cpuid(rax, &rax, &rbx, &rcx, &rdx);
+    kdbp("rax: %016lx  rbx:%016lx rcx:%016lx rdx:%016lx\n", rax, rbx,
+         rcx, rdx);
+    return KDB_CPU_MAIN_KDB;
+}
+
+/* walk the EPT table for a given domid and gfn */
+static kdb_cpu_cmd_t
+kdb_usgf_wept(void)
+{
+    kdbp("wept domid gfn: walk ept table for given domid and gfn\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t
+kdb_cmdf_wept(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    struct domain *dp;
+    ulong gfn;
+
+    if ((argc > 1 && *argv[1] == '?') || argc != 3)
+        return kdb_usgf_wept();
+    if ((dp=kdb_strdomid2ptr(argv[1], 1)) && kdb_str2ulong(argv[2], &gfn))
+        ept_walk_table(dp, gfn);
+    else
+        kdb_usgf_wept();
+
+    return KDB_CPU_MAIN_KDB;
+}
+
+/*
+ * Save symbols info for a guest, dom0 or other...
+ */
+static kdb_cpu_cmd_t
+kdb_usgf_sym(void)
+{
+   kdbp("sym domid &kallsyms_names &kallsyms_addresses &kallsyms_num_syms\n");
+   kdbp("\t [&kallsyms_token_table] [&kallsyms_token_index]\n");
+   kdbp("\ttoken _table and _index MUST be specified for el5\n");
+   return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t
+kdb_cmdf_sym(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    ulong namesp, addrap, nump, toktblp, tokidxp;
+    domid_t domid;
+
+    if (argc < 5) {
+        return kdb_usgf_sym();
+    }
+    toktblp = tokidxp = 0;     /* optional parameters */
+    if (kdb_str2domid(argv[1], &domid, 1) &&
+        kdb_str2ulong(argv[2], &namesp)   &&
+        kdb_str2ulong(argv[3], &addrap)   &&
+        kdb_str2ulong(argv[4], &nump)     && 
+        (argc==5 || (argc==7 && kdb_str2ulong(argv[5], &toktblp) &&
+                                kdb_str2ulong(argv[6], &tokidxp)))) {
+
+        kdb_sav_dom_syminfo(domid, namesp, addrap,nump,toktblp,tokidxp);
+    } else
+        kdb_usgf_sym();
+    return KDB_CPU_MAIN_KDB;
+}
+
+
+/* mods is the address of &modules; modules is struct {next, prev}, not a ptr */
+static void
+kdb_dump_linux_modules(domid_t domid, ulong mods, uint nxtoffs, uint nmoffs, 
+                       uint coreoffs)
+{
+    const int bufsz = 56;
+    char buf[bufsz];
+    uint64_t addr, addrval, *nxtptr, *modptr;
+    uint i, num = 8;
+
+    if (kdb_guest_bitness(domid) == 32)
+        num = 4;
+
+    /* first read modules{}.next ptr */
+    if (kdb_read_mem(mods, (kdbbyt_t *)&nxtptr, num, domid) != num) {
+        kdbp("ERROR: Could not read next at mod:%p\n", (void *)mods);
+        return;
+    }
+
+    KDBGP("mods:%p nxtptr:%p nmoffs:%x coreoffs:%x\n", (void *)mods, nxtptr,
+          nmoffs, coreoffs);
+
+    while ((uint64_t)nxtptr != mods) {
+
+        modptr = (uint64_t *) ((ulong)nxtptr - nxtoffs);
+
+        addr = (ulong)modptr + coreoffs;
+        if (kdb_read_mem(addr, (kdbbyt_t *)&addrval, num, domid) != num) {
+            kdbp("ERROR: Could not read mod addr at :%p\n", (void *)addr);
+            return;
+        }
+
+        KDBGP("modptr:%p addr:%p\n", modptr, (void *)addr);
+        addr = (ulong)modptr + nmoffs;
+        i=0;
+        do {
+            if (kdb_read_mem(addr, (kdbbyt_t *)&buf[i], 1, domid) != 1) {
+                kdbp("ERROR:Could not read name ch at addr:%p\n", (void *)addr);
+                return;
+            }
+            addr++;
+        } while (buf[i] && ++i < bufsz);
+        buf[bufsz-1] = '\0';
+
+        kdbp("%016lx %016lx %s\n", modptr, addrval, buf);
+
+        if (kdb_read_mem((ulong)nxtptr, (kdbbyt_t *)&nxtptr, num, domid)!=num) {
+            kdbp("ERROR: Could not read next at mod:%p\n", (void *)mods);
+            return;
+        }
+        KDBGP("nxtptr:%p addr:%p\n", nxtptr, (void *)addr);
+    } 
+}
+
+/* Display modules loaded in linux guest */
+static kdb_cpu_cmd_t
+kdb_usgf_mod(void)
+{
+   kdbp("mod domid &modules next-offs name-offs module_core-offs\n");
+   kdbp("\twhere next-offs: &((struct module *)0)->list.next\n");
+   kdbp("\tname-offs: &((struct module *)0)->name etc..\n");
+   kdbp("\tDisplays all loaded modules in the linux guest\n");
+   kdbp("\tEg: mod 0 ffffffff80302780 8 0x18 0x178\n");
+
+   return KDB_CPU_MAIN_KDB;
+}
+
+static kdb_cpu_cmd_t
+kdb_cmdf_mod(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    ulong mods, nxtoffs, nmoffs, coreoffs;
+    domid_t domid;
+
+    if (argc < 6) {
+        return kdb_usgf_mod();
+    }
+    if (kdb_str2domid(argv[1], &domid, 1) &&
+        kdb_str2ulong(argv[2], &mods)     &&
+        kdb_str2ulong(argv[3], &nxtoffs)  &&
+        kdb_str2ulong(argv[4], &nmoffs)   &&
+        kdb_str2ulong(argv[5], &coreoffs)) {
+
+        kdbp("modptr address name\n");
+        kdb_dump_linux_modules(domid, mods, nxtoffs, nmoffs, coreoffs);
+    } else
+        kdb_usgf_mod();
+    return KDB_CPU_MAIN_KDB;
+}
+
+/* toggle kdb debug trace level */
+static kdb_cpu_cmd_t
+kdb_usgf_kdbdbg(void)
+{
+    kdbp("kdbdbg : trace info to debug kdb\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t
+kdb_cmdf_kdbdbg(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    if (argc > 1 && *argv[1] == '?')
+        return kdb_usgf_kdbdbg();
+
+    kdbdbg = (kdbdbg==3) ? 0 : (kdbdbg+1);
+    kdbp("kdbdbg set to:%d\n", kdbdbg);
+    return KDB_CPU_MAIN_KDB;
+}
+
+static kdb_cpu_cmd_t
+kdb_usgf_reboot(void)
+{
+    kdbp("reboot: reboot system\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t
+kdb_cmdf_reboot(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    if (argc > 1 && *argv[1] == '?')
+        return kdb_usgf_reboot();
+
+    machine_restart(500);
+    return KDB_CPU_MAIN_KDB;              /* not reached */
+}
+
+
+static kdb_cpu_cmd_t
+kdb_usgf_trcon(void)
+{
+    kdbp("trcon: turn user added kdb tracing on\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t
+kdb_cmdf_trcon(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    if (argc > 1 && *argv[1] == '?')
+        return kdb_usgf_trcon();
+
+    kdb_trcon = 1;
+    kdbp("kdb tracing is now on\n");
+    return KDB_CPU_MAIN_KDB;
+}
+
+static kdb_cpu_cmd_t
+kdb_usgf_trcoff(void)
+{
+    kdbp("trcoff: turn user added kdb tracing off\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t
+kdb_cmdf_trcoff(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    if (argc > 1 && *argv[1] == '?')
+        return kdb_usgf_trcoff();
+
+    kdb_trcon = 0;
+    kdbp("kdb tracing is now off\n");
+    return KDB_CPU_MAIN_KDB;
+}
+
+static kdb_cpu_cmd_t
+kdb_usgf_trcz(void)
+{
+    kdbp("trcz : zero entire trace buffer\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t
+kdb_cmdf_trcz(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    if (argc > 1 && *argv[1] == '?')
+        return kdb_usgf_trcz();
+
+    kdb_trczero();
+    return KDB_CPU_MAIN_KDB;
+}
+
+static kdb_cpu_cmd_t
+kdb_usgf_trcp(void)
+{
+    kdbp("trcp : give hints to dump trace buffer via dw/dd command\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t
+kdb_cmdf_trcp(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    if (argc > 1 && *argv[1] == '?')
+        return kdb_usgf_trcp();
+
+    kdb_trcp();
+    return KDB_CPU_MAIN_KDB;
+}
+
+/* print some basic info, constants, etc.. */
+static kdb_cpu_cmd_t
+kdb_usgf_info(void)
+{
+    kdbp("info : display basic info, constants, etc..\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t
+kdb_cmdf_info(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    struct domain *dp;
+    struct cpuinfo_x86 *bcdp;
+
+    if (argc > 1 && *argv[1] == '?')
+        return kdb_usgf_info();
+
+    kdbp("Version: %d.%d.%s (%s@%s) %s\n", xen_major_version(), 
+         xen_minor_version(), xen_extra_version(), xen_compile_by(), 
+         xen_compile_domain(), xen_compile_date());
+    kdbp("__XEN_LATEST_INTERFACE_VERSION__ : 0x%x\n", 
+         __XEN_LATEST_INTERFACE_VERSION__);
+    kdbp("__XEN_INTERFACE_VERSION__: 0x%x\n", __XEN_INTERFACE_VERSION__);
+
+    bcdp = &boot_cpu_data;
+    kdbp("CPU: (all decimal)");
+    if (bcdp->x86_vendor == X86_VENDOR_AMD)
+        kdbp(" AMD");
+    else
+        kdbp(" INTEL");
+    kdbp(" family:%d model:%d\n", bcdp->x86, bcdp->x86_model);
+    kdbp("     vendor_id:%16s model_id:%64s\n", bcdp->x86_vendor_id,
+         bcdp->x86_model_id);
+    kdbp("     cpuidlvl:%d cache:sz:%d align:%d\n", bcdp->cpuid_level,
+         bcdp->x86_cache_size, bcdp->x86_cache_alignment);
+    kdbp("     power:%d cores: max:%d booted:%d siblings:%d apicid:%d\n",
+         bcdp->x86_power, bcdp->x86_max_cores, bcdp->booted_cores,
+         bcdp->x86_num_siblings, bcdp->apicid);
+    kdbp("     ");
+    if (cpu_has_apic)
+        kdbp("_apic");
+    if (cpu_has_sep)
+        kdbp("|_sep");
+    if (cpu_has_xmm3)
+        kdbp("|_xmm3");
+    if (cpu_has_ht)
+        kdbp("|_ht");
+    if (cpu_has_nx)
+        kdbp("|_nx");
+    if (cpu_has_clflush)
+        kdbp("|_clflush");
+    if (cpu_has_page1gb)
+        kdbp("|_page1gb");
+    if (cpu_has_ffxsr)
+        kdbp("|_ffxsr");
+    if (cpu_has_x2apic)
+        kdbp("|_x2apic");
+    kdbp("\n\n");
+    kdbp("CC:");
+#if defined(CONFIG_X86_64)
+    kdbp(" CONFIG_X86_64");
+#endif
+#if defined(CONFIG_COMPAT)
+    kdbp(" CONFIG_COMPAT");
+#endif
+#if defined(CONFIG_PAGING_ASSISTANCE)
+    kdbp(" CONFIG_PAGING_ASSISTANCE");
+#endif
+    kdbp("\n");
+    kdbp("cpu has following features:\n");
+    kdbp("  %s\n", boot_cpu_has(X86_FEATURE_TSC_RELIABLE) ? 
+         "X86_FEATURE_TSC_RELIABLE" : "");
+    kdbp("  %s\n", 
+         boot_cpu_has(X86_FEATURE_CONSTANT_TSC)? "X86_FEATURE_CONSTANT_TSC":"");
+    kdbp("  %s\n", 
+         boot_cpu_has(X86_FEATURE_NONSTOP_TSC) ? "X86_FEATURE_NONSTOP_TSC" :"");
+    kdbp("  %s\n", 
+         boot_cpu_has(X86_FEATURE_RDTSCP) ?  "X86_FEATURE_RDTSCP" : "");
+    kdbp("  %s\n", boot_cpu_has(X86_FEATURE_FXSR) ?  "X86_FEATURE_FXSR" : "");
+    kdbp("  %s\n", boot_cpu_has(X86_FEATURE_CPUID_FAULTING) ?  
+         "X86_FEATURE_CPUID_FAULTING" : "");
+    kdbp("  %s\n", 
+         boot_cpu_has(X86_FEATURE_PAGE1GB) ?  "X86_FEATURE_PAGE1GB" : "");
+    kdbp("  %s\n", boot_cpu_has(X86_FEATURE_MWAIT) ?  "X86_FEATURE_MWAIT" : "");
+    kdbp("  %s\n", boot_cpu_has(X86_FEATURE_X2APIC) ?  "X86_FEATURE_X2APIC":"");
+    kdbp("  %s\n", boot_cpu_has(X86_FEATURE_XSAVE) ?  "X86_FEATURE_XSAVE":"");
+    kdbp("\n");
+
+    kdbp("MAX_VIRT_CPUS:$%d  MAX_HVM_VCPUS:$%d\n", MAX_VIRT_CPUS,MAX_HVM_VCPUS);
+    kdbp("NR_EVENT_CHANNELS: $%d\n", NR_EVENT_CHANNELS);
+    kdbp("NR_EVTCHN_BUCKETS: $%d\n", NR_EVTCHN_BUCKETS);
+
+    kdbp("\nDomains and their vcpus:\n");
+    for_each_domain(dp) {
+        struct vcpu *vp;
+        int printed=0;
+        kdbp("  Domain: {id:%d 0x%x   ptr:%p%s}  VCPUs:\n", 
+             dp->domain_id, dp->domain_id, dp, dp->is_dying ? " DYING":"");
+        for(vp=dp->vcpu[0]; vp; vp = vp->next_in_list) {
+            kdbp("  {id:%d p:%p runstate:%d}", vp->vcpu_id, vp, 
+                 vp->runstate.state);
+            if (++printed % 2 == 0) kdbp("\n");
+        }
+        kdbp("\n");
+    }
+    return KDB_CPU_MAIN_KDB;
+}
+
+static kdb_cpu_cmd_t
+kdb_usgf_cur(void)
+{
+    kdbp("cur : display current domid and vcpu\n");
+    return KDB_CPU_MAIN_KDB;
+}
+
+/* Checking guest_mode() is not feasible here: if dom0 -> hypercall -> bp in
+ * xen, then guest_mode() will show xen, but the vcpu is still dom0. Hence,
+ * just look at current. */
+static kdb_cpu_cmd_t
+kdb_cmdf_cur(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    domid_t id = current->domain->domain_id;
+
+    if (argc > 1 && *argv[1] == '?')
+        return kdb_usgf_cur();
+
+    kdbp("domid: %d{%p} %s vcpu:%d {%p} ", id, current->domain,
+         (id==DOMID_IDLE) ? "(IDLE)" : "", current->vcpu_id, current);
+
+    /* if (id != DOMID_IDLE) { */
+        if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL) {
+            u64 addr = -1;
+            __vmptrst(&addr);
+            kdbp(" VMCS:"KDBFL, addr);
+        }
+    /* } */
+    kdbp("\n");
+    return KDB_CPU_MAIN_KDB;
+}
+
+/* stub to quickly and easily add a new command */
+static kdb_cpu_cmd_t
+kdb_usgf_usr1(void)
+{
+    kdbp("usr1: add any arbitrary cmd using this in kdb_cmds.c\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t
+kdb_cmdf_usr1(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    return KDB_CPU_MAIN_KDB;
+}
+
+static kdb_cpu_cmd_t
+kdb_usgf_h(void)
+{
+    kdbp("h: display all commands. See kdb/README for more info\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t
+kdb_cmdf_h(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    kdbtab_t *tbp;
+
+    kdbp(" - ccpu is current cpu \n");
+    kdbp(" - following are always in decimal:\n");
+    kdbp("     vcpu num, cpu num, domid\n");
+    kdbp(" - otherwise, almost all numbers are in hex (0x not needed)\n");
+    kdbp(" - output: $17 means decimal 17\n");
+    kdbp(" - domid 7fff($32767) refers to hypervisor\n");
+    kdbp(" - if no domid before function name, then it's hypervisor\n");
+    kdbp(" - earlykdb in xen grub line to break into kdb during boot\n");
+    kdbp(" - command ? will show the command usage\n");
+    kdbp("\n");
+
+    for(tbp=kdb_cmd_tbl; tbp->kdb_cmd_usgf; tbp++)
+        (*tbp->kdb_cmd_usgf)();
+    return KDB_CPU_MAIN_KDB;
+}
+
+/* ===================== cmd table initialization ========================== */
+void __init
+kdb_init_cmdtab(void)
+{
+  static kdbtab_t _kdb_cmd_table[] = {
+
+    {"info", kdb_cmdf_info, kdb_usgf_info, 1, KDB_REPEAT_NONE},
+    {"cur",  kdb_cmdf_cur, kdb_usgf_cur, 1, KDB_REPEAT_NONE},
+
+    {"f",  kdb_cmdf_f,  kdb_usgf_f,  1, KDB_REPEAT_NONE},
+    {"fg", kdb_cmdf_fg, kdb_usgf_fg, 1, KDB_REPEAT_NONE},
+
+    {"dw",  kdb_cmdf_dw,  kdb_usgf_dw,  1, KDB_REPEAT_NO_ARGS},
+    {"dd",  kdb_cmdf_dd,  kdb_usgf_dd,  1, KDB_REPEAT_NO_ARGS},
+    {"dwm", kdb_cmdf_dwm, kdb_usgf_dwm, 1, KDB_REPEAT_NO_ARGS},
+    {"ddm", kdb_cmdf_ddm, kdb_usgf_ddm, 1, KDB_REPEAT_NO_ARGS},
+    {"dr",  kdb_cmdf_dr,  kdb_usgf_dr,  1, KDB_REPEAT_NONE},
+    {"drg", kdb_cmdf_drg, kdb_usgf_drg, 1, KDB_REPEAT_NONE},
+
+    {"dis", kdb_cmdf_dis,  kdb_usgf_dis,  1, KDB_REPEAT_NO_ARGS},
+    {"dism",kdb_cmdf_dism, kdb_usgf_dism, 1, KDB_REPEAT_NO_ARGS},
+
+    {"mw", kdb_cmdf_mw, kdb_usgf_mw, 1, KDB_REPEAT_NONE},
+    {"md", kdb_cmdf_md, kdb_usgf_md, 1, KDB_REPEAT_NONE},
+    {"mr", kdb_cmdf_mr, kdb_usgf_mr, 1, KDB_REPEAT_NONE},
+
+    {"bc", kdb_cmdf_bc, kdb_usgf_bc, 0, KDB_REPEAT_NONE},
+    {"bp", kdb_cmdf_bp, kdb_usgf_bp, 1, KDB_REPEAT_NONE},
+    {"btp", kdb_cmdf_btp, kdb_usgf_btp, 1, KDB_REPEAT_NONE},
+
+    {"wp", kdb_cmdf_wp, kdb_usgf_wp, 1, KDB_REPEAT_NONE},
+    {"wc", kdb_cmdf_wc, kdb_usgf_wc, 0, KDB_REPEAT_NONE},
+
+    {"ni", kdb_cmdf_ni, kdb_usgf_ni, 0, KDB_REPEAT_NO_ARGS},
+    {"ss", kdb_cmdf_ss, kdb_usgf_ss, 1, KDB_REPEAT_NO_ARGS},
+    {"ssb",kdb_cmdf_ssb,kdb_usgf_ssb,0, KDB_REPEAT_NO_ARGS},
+    {"go", kdb_cmdf_go, kdb_usgf_go, 0, KDB_REPEAT_NONE},
+
+    {"cpu",kdb_cmdf_cpu, kdb_usgf_cpu, 1, KDB_REPEAT_NONE},
+    {"nmi",kdb_cmdf_nmi, kdb_usgf_nmi, 1, KDB_REPEAT_NONE},
+    {"percpu",kdb_cmdf_percpu, kdb_usgf_percpu, 1, KDB_REPEAT_NONE},
+
+    {"sym",  kdb_cmdf_sym,   kdb_usgf_sym,   1, KDB_REPEAT_NONE},
+    {"mod",  kdb_cmdf_mod,   kdb_usgf_mod,   1, KDB_REPEAT_NONE},
+
+    {"vcpuh",kdb_cmdf_vcpuh, kdb_usgf_vcpuh, 1, KDB_REPEAT_NONE},
+    {"vcpu", kdb_cmdf_vcpu,  kdb_usgf_vcpu,  1, KDB_REPEAT_NONE},
+    {"dom",  kdb_cmdf_dom,   kdb_usgf_dom,   1, KDB_REPEAT_NONE},
+
+    {"sched", kdb_cmdf_sched, kdb_usgf_sched, 1, KDB_REPEAT_NONE},
+    {"mmu",   kdb_cmdf_mmu,   kdb_usgf_mmu,   1, KDB_REPEAT_NONE},
+    {"p2m",   kdb_cmdf_p2m,   kdb_usgf_p2m,   1, KDB_REPEAT_NONE},
+    {"m2p",   kdb_cmdf_m2p,   kdb_usgf_m2p,   1, KDB_REPEAT_NONE},
+    {"dpage", kdb_cmdf_dpage, kdb_usgf_dpage, 1, KDB_REPEAT_NONE},
+    {"dmsr",  kdb_cmdf_dmsr,  kdb_usgf_dmsr, 1, KDB_REPEAT_NONE},
+    {"cpuid",  kdb_cmdf_cpuid,  kdb_usgf_cpuid, 1, KDB_REPEAT_NONE},
+    {"wept",  kdb_cmdf_wept,  kdb_usgf_wept, 1, KDB_REPEAT_NONE},
+
+    {"dtrq", kdb_cmdf_dtrq,  kdb_usgf_dtrq, 1, KDB_REPEAT_NONE},
+    {"didt", kdb_cmdf_didt,  kdb_usgf_didt, 1, KDB_REPEAT_NONE},
+    {"dgdt", kdb_cmdf_dgdt,  kdb_usgf_dgdt, 1, KDB_REPEAT_NONE},
+    {"dirq", kdb_cmdf_dirq,  kdb_usgf_dirq, 1, KDB_REPEAT_NONE},
+    {"dvit", kdb_cmdf_dvit,  kdb_usgf_dvit, 1, KDB_REPEAT_NONE},
+    {"dvmc", kdb_cmdf_dvmc,  kdb_usgf_dvmc, 1, KDB_REPEAT_NONE},
+    {"mmio", kdb_cmdf_mmio,  kdb_usgf_mmio, 1, KDB_REPEAT_NONE},
+
+    /* tracing related commands */
+    {"trcon", kdb_cmdf_trcon,  kdb_usgf_trcon,  0, KDB_REPEAT_NONE},
+    {"trcoff",kdb_cmdf_trcoff, kdb_usgf_trcoff, 0, KDB_REPEAT_NONE},
+    {"trcz",  kdb_cmdf_trcz,   kdb_usgf_trcz,   0, KDB_REPEAT_NONE},
+    {"trcp",  kdb_cmdf_trcp,   kdb_usgf_trcp,   1, KDB_REPEAT_NONE},
+
+    {"usr1",  kdb_cmdf_usr1,   kdb_usgf_usr1,   1, KDB_REPEAT_NONE},
+    {"kdbf",  kdb_cmdf_kdbf,   kdb_usgf_kdbf,   1, KDB_REPEAT_NONE},
+    {"kdbdbg",kdb_cmdf_kdbdbg, kdb_usgf_kdbdbg, 1, KDB_REPEAT_NONE},
+    {"reboot",kdb_cmdf_reboot, kdb_usgf_reboot, 1, KDB_REPEAT_NONE},
+    {"h",     kdb_cmdf_h,      kdb_usgf_h,      1, KDB_REPEAT_NONE},
+
+    {"", NULL, NULL, 0, 0},
+  };
+    kdb_cmd_tbl = _kdb_cmd_table;
+    return;
+}
diff -r 32034d1914a6 xen/kdb/kdb_io.c
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/xen/kdb/kdb_io.c	Wed Aug 29 14:39:57 2012 -0700
@@ -0,0 +1,174 @@
+/*
+ * Copyright (C) 2009, Mukesh Rathor, Oracle Corp.  All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public
+ * License v2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public
+ * License along with this program; if not, write to the
+ * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
+ * Boston, MA 021110-1307, USA.
+ */
+#include "include/kdbinc.h"
+
+#define K_BACKSPACE  0x8                   /* ctrl-H */
+#define K_BACKSPACE1 0x7f                  /* ctrl-? */
+#define K_UNDERSCORE 0x5f
+#define K_CMD_BUFSZ  160
+#define K_CMD_MAXI   (K_CMD_BUFSZ - 1)     /* max index in buffer */
+
+#if 0        /* make a history array some day */
+#define K_UP_ARROW                         /* sequence : 1b 5b 41 ie, '\e[A' */
+#define K_DN_ARROW                         /* sequence : 1b 5b 42 ie, '\e[B' */
+#define K_NUM_HIST   32
+static int cursor;
+static char cmds_a[K_NUM_HIST][K_CMD_BUFSZ];
+#endif
+
+static char cmds_a[K_CMD_BUFSZ];
+
+
+static int
+kdb_key_valid(int key)
+{
+    /* note: isspace() matches more than just ' ', hence we don't use it here */
+    if (isalnum(key) || key == ' ' || key == K_BACKSPACE || key == '\n' ||
+        key == '?' || key == K_UNDERSCORE || key == '=' || key == '!')
+            return 1;
+    return 0;
+}
+
+/* display kdb prompt and read command from the console 
+ * RETURNS: a '\n' terminated command buffer */
+char *
+kdb_get_cmdline(char *prompt)
+{
+    #define K_BELL     0x7
+    #define K_CTRL_C   0x3
+
+    int key, i=0;
+
+    kdbp("%s", prompt);
+    memset(cmds_a, 0, K_CMD_BUFSZ);
+    cmds_a[K_CMD_BUFSZ-1] = '\n';
+
+    do {
+        key = console_getc();
+        if (key == '\r') 
+            key = '\n';
+        if (key == K_BACKSPACE1) 
+            key = K_BACKSPACE;
+
+        if (key == K_CTRL_C || (i==K_CMD_MAXI && key != '\n')) {
+            console_putc('\n');
+            if (i >= K_CMD_MAXI) {
+                kdbp("KDB: cmd buffer overflow\n");
+                console_putc(K_BELL);
+            }
+            memset(cmds_a, 0, K_CMD_BUFSZ);
+            i = 0;
+            kdbp("%s", prompt);
+            continue;
+        }
+        if (!kdb_key_valid(key)) {
+            console_putc(K_BELL);
+            continue;
+        }
+        if (key == K_BACKSPACE) {
+            if (i==0) {
+                console_putc(K_BELL);
+                continue;
+            } else {
+                cmds_a[--i] = '\0';
+                console_putc(K_BACKSPACE);
+                console_putc(' ');        /* erase character */
+            }
+        } else
+            cmds_a[i++] = key;
+
+        console_putc(key);
+
+    } while (key != '\n');
+
+    return cmds_a;
+}
+
+/*
+ * printk() takes a lock; an NMI could come in after that, and another cpu
+ * may then spin on it. Also, the console lock is forcibly unlocked on a
+ * crash, which has been seen to cause a panic on an 8-way system. Hence,
+ * no printk() calls here.
+ */
+static volatile int kdbp_gate = 0;
+void
+kdbp(const char *fmt, ...)
+{
+    static char buf[1024];
+    va_list args;
+    char *p;
+    int i=0;
+
+    while ((__cmpxchg(&kdbp_gate, 0,1, sizeof(kdbp_gate)) != 0) && i++<1000)
+        mdelay(10);
+
+    va_start(args, fmt);
+    (void)vsnprintf(buf, sizeof(buf), fmt, args);
+    va_end(args);
+
+    for (p=buf; *p != '\0'; p++)
+        console_putc(*p);
+    kdbp_gate = 0;
+}
+
+
+/*
+ * copy/read machine memory. 
+ * RETURNS: number of bytes copied 
+ */
+int
+kdb_read_mmem(kdbma_t maddr, kdbbyt_t *dbuf, int len)
+{
+    ulong remain, orig=len;
+
+    while (len > 0) {
+        ulong pagecnt = min_t(long, PAGE_SIZE-(maddr&~PAGE_MASK), len);
+        char *va = map_domain_page(maddr >> PAGE_SHIFT);
+
+        va = va + (maddr & (PAGE_SIZE-1));        /* add page offset */
+        remain = __copy_from_user(dbuf, (void *)va, pagecnt);
+        KDBGP1("maddr:%lx va:%p len:%x pagecnt:%lx rem:%lx\n", 
+               maddr, va, len, pagecnt, remain);
+        unmap_domain_page(va);
+        len = len  - (pagecnt - remain);
+        if (remain != 0)
+            break;
+        maddr += pagecnt;
+        dbuf += pagecnt;
+    }
+    return orig - len;
+}
+
+
+/*
+ * copy/read guest or hypervisor memory. (domid == DOMID_IDLE) => hyp
+ * RETURNS: number of bytes copied 
+ */
+int
+kdb_read_mem(kdbva_t saddr, kdbbyt_t *dbuf, int len, domid_t domid)
+{
+    return (len - dbg_rw_mem(saddr, dbuf, len, domid, 0, 0));
+}
+
+/*
+ * write guest or hypervisor memory. (domid == DOMID_IDLE) => hyp
+ * RETURNS: number of bytes written
+ */
+int
+kdb_write_mem(kdbva_t daddr, kdbbyt_t *sbuf, int len, domid_t domid)
+{
+    return (len - dbg_rw_mem(daddr, sbuf, len, domid, 1, 0));
+}
diff -r 32034d1914a6 xen/kdb/kdbmain.c
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/xen/kdb/kdbmain.c	Wed Aug 29 14:39:57 2012 -0700
@@ -0,0 +1,739 @@
+/*
+ * Copyright (C) 2009, Mukesh Rathor, Oracle Corp.  All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public
+ * License v2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public
+ * License along with this program; if not, write to the
+ * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
+ * Boston, MA 021110-1307, USA.
+ */
+
+#include "include/kdbinc.h"
+
+static int kdbmain(kdb_reason_t, struct cpu_user_regs *);
+static int kdbmain_fatal(struct cpu_user_regs *, int);
+static const char *kdb_gettrapname(int);
+
+/* ======================== GLOBAL VARIABLES =============================== */
+/* All global variables used by KDB must be defined here only. Module specific
+ * static variables must be declared in respective modules.
+ */
+kdbtab_t *kdb_cmd_tbl;
+char kdb_prompt[32];
+
+volatile kdb_cpu_cmd_t kdb_cpu_cmd[NR_CPUS];
+cpumask_t kdb_cpu_traps;           /* bit per cpu to tell which cpus hit int3 */
+
+#ifndef NDEBUG
+    #error KDB is not supported on debug xen. Turn debug off
+#endif
+
+volatile int kdb_init_cpu = -1;           /* initial kdb cpu */
+volatile int kdb_session_begun = 0;       /* active kdb session? */
+volatile int kdb_enabled = 1;             /* kdb enabled currently? */
+volatile int kdb_sys_crash = 0;           /* are we in crashed state? */
+volatile int kdbdbg = 0;                  /* to debug kdb itself */
+
+static volatile int kdb_trap_immed_reason = 0;   /* reason for immed trap */
+
+static cpumask_t kdb_fatal_cpumask;       /* which cpus in fatal path */
+
+/* return index of first bit set in val. if val is 0, retval is undefined */
+static inline unsigned int kdb_firstbit(unsigned long val)
+{
+    __asm__ ( "bsf %1,%0" : "=r" (val) : "r" (val), "0" (BITS_PER_LONG) );
+    return (unsigned int)val;
+}
+
+static void 
+kdb_dbg_prnt_ctrps(char *label, int ccpu)
+{
+    int i;
+    if (!kdbdbg)
+        return;
+
+    if (label && *label)
+        kdbp("%s ", label);
+    if (ccpu != -1)
+        kdbp("ccpu:%d ", ccpu);
+    kdbp("cputrps:");
+    for (i=sizeof(kdb_cpu_traps)/sizeof(kdb_cpu_traps.bits[0]) - 1; i >=0; i--)
+        kdbp(" %lx", kdb_cpu_traps.bits[i]);
+    kdbp("\n");
+}
+
+/* 
+ * Hold this cpu. Don't disable until all CPUs in kdb to avoid IPI deadlock 
+ */
+static void
+kdb_hold_this_cpu(int ccpu, struct cpu_user_regs *regs)
+{
+    KDBGP("ccpu:%d hold. cmd:%x\n", ccpu, kdb_cpu_cmd[ccpu]);
+    do {
+        for(; kdb_cpu_cmd[ccpu] == KDB_CPU_PAUSE; cpu_relax());
+        KDBGP("ccpu:%d hold. cmd:%x\n", kdb_cpu_cmd[ccpu]);
+
+        if (kdb_cpu_cmd[ccpu] == KDB_CPU_DISABLE) {
+            local_irq_disable();
+            kdb_cpu_cmd[ccpu] = KDB_CPU_PAUSE;
+        }
+        if (kdb_cpu_cmd[ccpu] == KDB_CPU_DO_VMEXIT) {
+            kdb_curr_cpu_flush_vmcs();
+            kdb_cpu_cmd[ccpu] = KDB_CPU_PAUSE;
+        }
+        if (kdb_cpu_cmd[ccpu] == KDB_CPU_SHOWPC) {
+            kdbp("[%d]", ccpu);
+            kdb_display_pc(regs);
+            kdb_cpu_cmd[ccpu] = KDB_CPU_PAUSE;
+        }
+    } while (kdb_cpu_cmd[ccpu] == KDB_CPU_PAUSE);     /* No goto, eh! */
+    KDBGP1("un hold: ccpu:%d cmd:%d\n", ccpu, kdb_cpu_cmd[ccpu]);
+}
+
+/*
+ * Pause this cpu while one CPU does main kdb processing. If that CPU does
+ * a "cpu switch" to this cpu, this cpu will become the main kdb cpu. If the
+ * user next does single step of some sort, this function will be exited,
+ * and this cpu will come back into kdb via kdb_handle_trap_entry function.
+ */
+static void 
+kdb_pause_this_cpu(struct cpu_user_regs *regs, void *unused)
+{
+    kdbmain(KDB_REASON_PAUSE_IPI, regs);
+}
+
+/* pause other cpus via an IPI. Note, disabled CPUs can't receive IPIs until
+ * enabled */
+static void
+kdb_smp_pause_cpus(void)
+{
+    int cpu, wait_count = 0;
+    int ccpu = smp_processor_id();      /* current cpu */
+    cpumask_t cpumask = cpu_online_map;
+
+    cpumask_clear_cpu(smp_processor_id(), &cpumask);
+    for_each_cpu(cpu, &cpumask)
+        if (kdb_cpu_cmd[cpu] != KDB_CPU_INVAL) {
+            kdbp("KDB: won't pause cpu:%d, cmd[cpu]=%d\n",cpu,kdb_cpu_cmd[cpu]);
+            cpumask_clear_cpu(cpu, &cpumask);
+        }
+    KDBGP("ccpu:%d will pause cpus. mask:0x%lx\n", ccpu, cpumask.bits[0]);
+#if XEN_SUBVERSION > 4 || XEN_VERSION == 4              /* xen 3.5.x or above */
+    on_selected_cpus(&cpumask, (void (*)(void *))kdb_pause_this_cpu, 
+                     "XENKDB", 0);
+#else
+    on_selected_cpus(cpumask, (void (*)(void *))kdb_pause_this_cpu, 
+                     "XENKDB", 0, 0);
+#endif
+    mdelay(300);                     /* wait a bit for other CPUs to stop */
+    while (wait_count++ < 10) {
+        int bummer = 0;
+        for_each_cpu(cpu, &cpumask)
+            if (kdb_cpu_cmd[cpu] != KDB_CPU_PAUSE)
+                bummer = 1;
+        if (!bummer)
+            break;
+        kdbp("ccpu:%d trying to stop other cpus...\n", ccpu);
+        mdelay(100);  /* wait 100 ms */
+    }
+    for_each_cpu(cpu, &cpumask)          /* now check who is with us */
+        if (kdb_cpu_cmd[cpu] != KDB_CPU_PAUSE)
+            kdbp("Bummer cpu %d not paused. ccpu:%d\n", cpu,ccpu);
+        else {
+            kdb_cpu_cmd[cpu] = KDB_CPU_DISABLE;  /* tell it to disable ints */
+            while (kdb_cpu_cmd[cpu] != KDB_CPU_PAUSE);
+        }
+}
+
+/* 
+ * Do once per kdb session: a kdb session lasts from
+ *    keyboard/HWBP/SWBP until KDB_CPU_INSTALL_BP is done. Within a session,
+ *    the user may do several cpu switches, single steps, next instrs, etc..
+ *
+ * DO: 1. pause other cpus if they are not already. they would already be 
+ *        if we are in single step mode
+ *     2. watchdog_disable() 
+ *     3. uninstall all sw breakpoints so that user doesn't see them
+ */
+static void
+kdb_begin_session(void)
+{
+    if (!kdb_session_begun) {
+        kdb_session_begun = 1;
+        kdb_smp_pause_cpus();
+        local_irq_disable();
+        watchdog_disable();
+        kdb_uninstall_all_swbp();
+    }
+}
+
+static void
+kdb_smp_unpause_cpus(int ccpu)
+{
+    int cpu;
+
+    int wait_count = 0;
+    cpumask_t cpumask = cpu_online_map;
+
+    cpumask_clear_cpu(smp_processor_id(), &cpumask);
+
+    KDBGP("kdb_smp_unpause_other_cpus(). ccpu:%d\n", ccpu);
+    for_each_cpu(cpu, &cpumask)
+        kdb_cpu_cmd[cpu] = KDB_CPU_QUIT;
+
+    while (wait_count++ < 10) {
+        int bummer = 0;
+        for_each_cpu(cpu, &cpumask)
+            if (kdb_cpu_cmd[cpu] != KDB_CPU_INVAL)
+                bummer = 1;
+        if (!bummer)
+            break;
+        mdelay(90);   /* wait 90 ms; 50 was too short on large systems */
+    }
+    /* now verify that they have all resumed */
+    for_each_cpu(cpu, &cpumask)
+        if (kdb_cpu_cmd[cpu] != KDB_CPU_INVAL)
+            kdbp("KDB: cpu %d still paused (cmd==%d). ccpu:%d\n",
+                 cpu, kdb_cpu_cmd[cpu], ccpu);
+}
+
+/*
+ * End of KDB session.
+ *   This is called at the very end. In case multiple cpus hit BPs and are
+ *   sitting in trap handlers, the last cpu to exit will call this.
+ *   - re-install all sw breakpoints, and purge deleted ones from the table.
+ *   - also clear TF here, in case "go" is entered on a different cpu after
+ *     a switch.
+ */
+static void
+kdb_end_session(int ccpu, struct cpu_user_regs *regs)
+{
+    ASSERT(!cpumask_empty(&kdb_cpu_traps));
+    ASSERT(kdb_session_begun);
+    kdb_install_all_swbp();
+    kdb_flush_swbp_table();
+    kdb_install_watchpoints();
+
+    regs->eflags &= ~X86_EFLAGS_TF;
+    kdb_cpu_cmd[ccpu] = KDB_CPU_INVAL;
+    kdb_time_resume(1);
+    kdb_session_begun = 0;      /* before unpause for kdb_install_watchpoints */
+    kdb_smp_unpause_cpus(ccpu);
+    watchdog_enable();
+    KDBGP("end_session:ccpu:%d\n", ccpu);
+}
+
+/* 
+ * check if we entered kdb because of DB trap. If yes, then check if
+ * we caused it or someone else.
+ * RETURNS: 0 : not one of ours. hypervisor must handle it. 
+ *          1 : #DB for delayed sw bp install. 
+ *          2 : this cpu must stay in kdb.
+ */
+static noinline int
+kdb_check_dbtrap(kdb_reason_t *reasp, int ss_mode, struct cpu_user_regs *regs) 
+{
+    int rc = 2;
+    int ccpu = smp_processor_id();
+
+    /* DB excp caused by hw breakpoint or the TF flag. The TF flag is set
+     * by us for ss mode or to install breakpoints. In ss mode, none of the
+     * breakpoints are installed. Check to make sure we intended BP INSTALL
+     * so we don't do it on a spurious DB trap.
+     * check for kdb_cpu_traps here also, because each cpu sitting on a trap
+     * must execute the instruction without the BP before passing control
+     * to next cpu in kdb_cpu_traps.
+     */
+    if (*reasp == KDB_REASON_DBEXCP && !ss_mode) {
+        if (kdb_cpu_cmd[ccpu] == KDB_CPU_INSTALL_BP) {
+            if (!cpumask_empty(&kdb_cpu_traps)) {
+                int a_trap_cpu = cpumask_first(&kdb_cpu_traps);
+                KDBGP("ccpu:%d trapcpu:%d\n", ccpu, a_trap_cpu);
+                kdb_cpu_cmd[a_trap_cpu] = KDB_CPU_QUIT;
+                *reasp = KDB_REASON_PAUSE_IPI;
+                regs->eflags &= ~X86_EFLAGS_TF;  /* hvm: exit handler ss = 0 */
+                kdb_init_cpu = -1;
+            } else {
+                kdb_end_session(ccpu, regs);
+                rc = 1;
+            }
+        } else if (! kdb_check_watchpoints(regs)) {
+            rc = 0;                        /* hyp must handle it */
+        }
+    }
+    return rc;
+}
+
+/* 
+ * Misc processing on kdb entry like displaying PC, adjust IP for sw bp.... 
+ */
+static void
+kdb_main_entry_misc(kdb_reason_t reason, struct cpu_user_regs *regs, 
+                    int ccpu, int ss_mode, int enabled)
+{
+    if (reason == KDB_REASON_KEYBOARD)
+        kdbp("\nEnter kdb (cpu:%d reason:%d vcpu=%d domid:%d"
+             " eflg:0x%lx irqs:%d)\n", ccpu, reason, current->vcpu_id, 
+             current->domain->domain_id, regs->eflags, enabled);
+    else if (ss_mode)
+        KDBGP1("KDBG: KDB single step mode. ccpu:%d\n", ccpu);
+
+    if (reason == KDB_REASON_BPEXCP && !ss_mode) 
+        kdbp("Breakpoint on cpu %d at 0x%lx\n", ccpu, regs->KDBIP);
+
+    /* display the current PC and instruction at it */
+    if (reason != KDB_REASON_PAUSE_IPI)
+        kdb_display_pc(regs);
+    console_start_sync();
+}
+
+/* 
+ * The MAIN kdb function. All cpus go through this. IRQs are enabled on entry
+ * because a cpu could hit a bp set in disabled code.
+ * IPI: Even the main cpu must enable in case another CPU is trying to IPI us.
+ *      That way, it would IPI us, then get out and be ready for our pause IPI.
+ * IRQs: The reason irq enable/disable is scattered is that on a typical
+ *       system IPIs are constantly going on among CPUs in a set of any size.
+ *       As a result, to avoid deadlock, cpus have to loop enabled until a
+ *       quorum is established and the session has begun.
+ * Step: Intel Vol3B 18.3.1.4: an external interrupt may be serviced upon
+ *       single step. Since the likely external timer_interrupt and
+ *       apic_timer_interrupt don't mess with the time data structs, we are
+ *       probably OK leaving interrupts enabled.
+ * Time: Very messy. Most platform timers are read-only, so we can't stop time
+ *       in the debugger. The only resort is to let the TSC and platform timer
+ *       run as normal and, upon leaving, "attempt" to bring everybody to the
+ *       current time.
+ * kdb_cpu_traps: bit per cpu. Each cpu sets its bit in entry.S. The bit is
+ *       reliable because interrupts are disabled upon a trap, and the bit is
+ *       set before they are re-enabled.
+ *
+ * RETURNS: 0 : kdb was called for an event it was not responsible for
+ *          1 : event owned and handled by kdb 
+ */
+static int
+kdbmain(kdb_reason_t reason, struct cpu_user_regs *regs)
+{
+    int ccpu = smp_processor_id();                /* current cpu */
+    int rc = 1, cmd = kdb_cpu_cmd[ccpu];
+    int ss_mode = (cmd == KDB_CPU_SS || cmd == KDB_CPU_NI);
+    int delayed_install = (kdb_cpu_cmd[ccpu] == KDB_CPU_INSTALL_BP);
+    int enabled = local_irq_is_enabled();
+
+    KDBGP("kdbmain:ccpu:%d rsn:%d eflgs:0x%lx cmd:%d initc:%d irqs:%d "
+          "regs:%lx IP:%lx ", ccpu, reason, regs->eflags, cmd, 
+          kdb_init_cpu, enabled, regs, regs->KDBIP);
+    kdb_dbg_prnt_ctrps("", -1);
+
+    if (!ss_mode && !delayed_install)    /* initial kdb enter */
+        local_irq_enable();              /* so we can receive IPI */
+
+    if (!ss_mode && ccpu != kdb_init_cpu && reason != KDB_REASON_PAUSE_IPI){
+        int sz = sizeof(kdb_init_cpu);
+        while (__cmpxchg(&kdb_init_cpu, -1, ccpu, sz) != -1)
+            for(; kdb_init_cpu != -1; cpu_relax());
+    }
+    if (kdb_session_begun)
+        local_irq_disable();             /* kdb always runs disabled */
+
+    if (reason == KDB_REASON_BPEXCP) {             /* INT 3 */
+        cpumask_clear_cpu(ccpu, &kdb_cpu_traps);   /* remove ourself */
+        rc = kdb_check_sw_bkpts(regs);
+        if (rc == 0) {               /* not one of ours. leave kdb */
+            kdb_init_cpu = -1;
+            goto out;
+        } else if (rc == 1) {        /* one of ours but deleted */
+            if (cpumask_empty(&kdb_cpu_traps)) {
+                kdb_end_session(ccpu,regs);     
+                kdb_init_cpu = -1;
+                goto out;
+            } else {                 
+                /* release another trap cpu, and put ourself in a pause mode */
+                int a_trap_cpu = cpumask_first(&kdb_cpu_traps);
+                KDBGP("ccpu:%d cmd:%d rsn:%d atrpcpu:%d initcpu:%d\n", ccpu, 
+                      kdb_cpu_cmd[ccpu], reason, a_trap_cpu, kdb_init_cpu);
+                kdb_cpu_cmd[a_trap_cpu] = KDB_CPU_QUIT;
+                reason = KDB_REASON_PAUSE_IPI;
+                kdb_init_cpu = -1;
+            }
+        } else if (rc == 2) {        /* one of ours but condition not met */
+                kdb_begin_session();
+                if (guest_mode(regs) && is_hvm_or_hyb_vcpu(current))
+                    current->arch.hvm_vcpu.single_step = 1;
+                else
+                    regs->eflags |= X86_EFLAGS_TF;  
+                kdb_cpu_cmd[ccpu] = KDB_CPU_INSTALL_BP;
+                goto out;
+        }
+    }
+
+    /* following will take care of KDB_CPU_INSTALL_BP, and also release
+     * kdb_init_cpu. it should not be done twice */
+    if ((rc=kdb_check_dbtrap(&reason, ss_mode, regs)) == 0 || rc == 1) {
+        kdb_init_cpu = -1;       /* leaving kdb */
+        goto out;                /* rc properly set to 0 or 1 */
+    }
+    if (reason != KDB_REASON_PAUSE_IPI) {
+        kdb_cpu_cmd[ccpu] = KDB_CPU_MAIN_KDB;
+    } else
+        kdb_cpu_cmd[ccpu] = KDB_CPU_PAUSE;
+
+    if (kdb_cpu_cmd[ccpu] == KDB_CPU_MAIN_KDB && !ss_mode)
+        kdb_begin_session(); 
+
+    kdb_main_entry_misc(reason, regs, ccpu, ss_mode, enabled);
+    /* note, one or more cpu switches may occur in between */
+    while (1) {
+        if (kdb_cpu_cmd[ccpu] == KDB_CPU_PAUSE)
+            kdb_hold_this_cpu(ccpu, regs);
+        if (kdb_cpu_cmd[ccpu] == KDB_CPU_MAIN_KDB)
+            kdb_do_cmds(regs);
+
+        if (kdb_cpu_cmd[ccpu] == KDB_CPU_GO) {
+            if (ccpu != kdb_init_cpu) {
+                kdb_cpu_cmd[kdb_init_cpu] = KDB_CPU_GO;
+                kdb_cpu_cmd[ccpu] = KDB_CPU_PAUSE;
+                continue;               /* for the pause guy */
+            }
+            if (!cpumask_empty(&kdb_cpu_traps)) {
+                /* execute current instruction without 0xcc */
+                kdb_dbg_prnt_ctrps("nempty:", ccpu);
+                if (guest_mode(regs) && is_hvm_or_hyb_vcpu(current))
+                    current->arch.hvm_vcpu.single_step = 1;
+                else
+                    regs->eflags |= X86_EFLAGS_TF;  
+                kdb_cpu_cmd[ccpu] = KDB_CPU_INSTALL_BP;
+                goto out;
+            }
+        }
+        if (kdb_cpu_cmd[ccpu] != KDB_CPU_PAUSE  && 
+            kdb_cpu_cmd[ccpu] != KDB_CPU_MAIN_KDB)
+                break;
+    }
+    if (kdb_cpu_cmd[ccpu] == KDB_CPU_GO) {
+        ASSERT(cpumask_empty(&kdb_cpu_traps));
+        if (kdb_swbp_exists()) {
+            if (reason == KDB_REASON_BPEXCP) {
+                /* do delayed install */
+                if (guest_mode(regs) && is_hvm_or_hyb_vcpu(current))
+                    current->arch.hvm_vcpu.single_step = 1;
+                else
+                    regs->eflags |= X86_EFLAGS_TF;  
+                kdb_cpu_cmd[ccpu] = KDB_CPU_INSTALL_BP;
+                goto out;
+            } 
+        }
+        kdb_end_session(ccpu, regs);
+        kdb_init_cpu = -1;
+    }
+out:
+    if (kdb_cpu_cmd[ccpu] == KDB_CPU_QUIT) {
+        KDBGP("ccpu:%d _quit IP: %lx\n", ccpu, regs->KDBIP);
+        if (! kdb_session_begun)
+            kdb_install_watchpoints();
+        kdb_time_resume(0);
+        kdb_cpu_cmd[ccpu] = KDB_CPU_INVAL;
+    }
+
+    /* for ss and delayed install, TF is set. not much in EXT INT handlers */
+    if (kdb_cpu_cmd[ccpu] == KDB_CPU_NI)
+        kdb_time_resume(1);
+    if (enabled)
+        local_irq_enable();
+
+    KDBGP("kdbmain:X:ccpu:%d rc:%d cmd:%d eflg:0x%lx initc:%d sesn:%d " 
+          "cs:%x irqs:%d ", ccpu, rc, kdb_cpu_cmd[ccpu], regs->eflags, 
+          kdb_init_cpu, kdb_session_begun, regs->cs, local_irq_is_enabled());
+    kdb_dbg_prnt_ctrps("", -1);
+    return (rc ? 1 : 0);
+}
+
+/* 
+ * kdb entry point when invoked via the keyboard
+ * RETURNS: 0 : kdb was called for an event it was not responsible for
+ *          1 : the event was owned and handled by kdb
+ */
+int
+kdb_keyboard(struct cpu_user_regs *regs)
+{
+    return kdbmain(KDB_REASON_KEYBOARD, regs);
+}
+
+#if 0
+/*
+ * This function is called when a kdb session is active and the user presses
+ * ctrl-\ again. The assumption is that the user typed an ni/ss cmd and it
+ * never got back into kdb, or the user is impatient. In either case, we just
+ * pretend that the SS did finish. Since all other kdb cpus must be holding
+ * with interrupts disabled, the interrupt will be on the CPU that did the
+ * ss/ni cmd.
+ */
+void
+kdb_ssni_reenter(struct cpu_user_regs *regs)
+{
+    int ccpu = smp_processor_id();
+    int ccmd = kdb_cpu_cmd[ccpu];
+
+    if(ccmd == KDB_CPU_SS || ccmd == KDB_CPU_INSTALL_BP)
+        kdbmain(KDB_REASON_DBEXCP, regs); 
+    else 
+        kdbmain(KDB_REASON_KEYBOARD, regs);
+}
+#endif
+
+/* 
+ * All traps are routed through here. We only care about the BP trap (#3,
+ * INT 3) and the DB trap (#1).
+ * returns: 0 kdb has nothing to do with this trap
+ *          1 kdb handled this trap
+ */
+int
+kdb_handle_trap_entry(int vector, struct cpu_user_regs *regs)
+{
+    int rc = 0;
+    int ccpu = smp_processor_id();
+
+    if (vector == TRAP_int3) {
+        rc = kdbmain(KDB_REASON_BPEXCP, regs);
+
+    } else if (vector == TRAP_debug) {
+        KDBGP("ccpu:%d trapdbg reas:%d\n", ccpu, kdb_trap_immed_reason);
+
+        if (kdb_trap_immed_reason == KDB_TRAP_FATAL) { 
+            KDBGP("kdbtrp:fatal ccpu:%d vec:%d\n", ccpu, vector);
+            rc = kdbmain_fatal(regs, vector);
+            BUG();                             /* no return */
+
+        } else if (kdb_trap_immed_reason == KDB_TRAP_KDBSTACK) {
+            kdb_trap_immed_reason = 0;         /* show kdb stack */
+            show_registers(regs);
+            show_stack(regs);
+            regs->eflags &= ~X86_EFLAGS_TF;
+            rc = 1;
+
+        } else if (kdb_trap_immed_reason == KDB_TRAP_NONFATAL) {
+            kdb_trap_immed_reason = 0;
+            rc = kdb_keyboard(regs);
+        } else {                         /* ss/ni/delayed install... */
+            if (guest_mode(regs) && is_hvm_or_hyb_vcpu(current))
+                current->arch.hvm_vcpu.single_step = 0;
+            rc = kdbmain(KDB_REASON_DBEXCP, regs); 
+        }
+
+    } else if (vector == TRAP_nmi) {                   /* external nmi */
+        /* When NMI is pressed, it could go to one, several, or all cpus
+         * depending on the hardware. Also, for now assume it's fatal */
+        KDBGP("kdbtrp:ccpu:%d vec:%d\n", ccpu, vector);
+        rc = kdbmain_fatal(regs, TRAP_nmi);
+    } 
+    return rc;
+}
+
+int
+kdb_trap_fatal(int vector, struct cpu_user_regs *regs)
+{
+    kdbmain_fatal(regs, vector);
+    return 0;
+}
+
+/* Based on smp_send_nmi_allbutself() in crash.c, which is static */
+void
+kdb_nmi_pause_cpus(cpumask_t cpumask)
+{
+    int ccpu = smp_processor_id();
+    mdelay(200);
+    cpumask_complement(&cpumask, &cpumask);              /* flip bit map */
+    cpumask_and(&cpumask, &cpumask, &cpu_online_map);    /* remove extra bits */
+    cpumask_clear_cpu(ccpu, &cpumask);/* absolutely make sure we're not on it */
+
+    KDBGP("ccpu:%d nmi pause. mask:0x%lx\n", ccpu, cpumask.bits[0]);
+    if ( !cpumask_empty(&cpumask) )
+#if XEN_SUBVERSION > 4 || XEN_VERSION == 4              /* xen 3.5.x or above */
+        send_IPI_mask(&cpumask, APIC_DM_NMI);
+#else
+        send_IPI_mask(cpumask, APIC_DM_NMI);
+#endif
+    mdelay(200);
+    KDBGP("ccpu:%d nmi pause done...\n", ccpu);
+}
+
+/* 
+ * Kept separate from kdbmain to keep both functions at a sane size.
+ */
+DEFINE_SPINLOCK(kdb_fatal_lk);
+static int
+kdbmain_fatal(struct cpu_user_regs *regs, int vector)
+{
+    int ccpu = smp_processor_id();
+
+    console_start_sync();
+
+    KDBGP("mainf:ccpu:%d vec:%d irq:%d\n", ccpu, vector,local_irq_is_enabled());
+    cpumask_set_cpu(ccpu, &kdb_fatal_cpumask);        /* uses LOCK_PREFIX */
+
+    if (spin_trylock(&kdb_fatal_lk)) {
+
+        kdbp("*** kdb (Fatal Error on cpu:%d vec:%d %s):\n", ccpu,
+             vector, kdb_gettrapname(vector));
+        kdb_cpu_cmd[ccpu] = KDB_CPU_MAIN_KDB;
+        kdb_display_pc(regs);
+
+        watchdog_disable();     /* important */
+        kdb_sys_crash = 1;
+        kdb_session_begun = 0;  /* in case a session is already active */
+        local_irq_enable();
+        kdb_nmi_pause_cpus(kdb_fatal_cpumask);
+
+        kdb_clear_prev_cmd();   /* buffered CRs will repeat prev cmd */
+        kdb_session_begun = 1;  /* for kdb_hold_this_cpu() */
+        local_irq_disable();
+    } else {
+        kdb_cpu_cmd[ccpu] = KDB_CPU_PAUSE;
+    }
+    while (1) {
+        if (kdb_cpu_cmd[ccpu] == KDB_CPU_PAUSE)
+            kdb_hold_this_cpu(ccpu, regs);
+        if (kdb_cpu_cmd[ccpu] == KDB_CPU_MAIN_KDB)
+            kdb_do_cmds(regs);
+#if 0
+        /* dump is the only way to exit in crashed state */
+        if (kdb_cpu_cmd[ccpu] == KDB_CPU_DUMP)
+            kdb_do_dump(regs);
+#endif
+    }
+    return 0;
+}
+
+/* Mostly called in fatal cases; earlykdb calls it for the non-fatal case.
+ * kdb_trap_immed_reason is global, so allow only one cpu at a time. Also,
+ * multiple cpus may be crashing at the same time. We enable interrupts
+ * because, if there is a bad hang, at least ctrl-\ will break into kdb.
+ * Also, we don't call kdb_keyboard directly because we don't have the
+ * register context.
+ */
+DEFINE_SPINLOCK(kdb_immed_lk);
+void
+kdb_trap_immed(int reason)            /* fatal, non-fatal, kdb stack etc... */
+{
+    int ccpu = smp_processor_id();
+    int disabled = !local_irq_is_enabled();
+
+    KDBGP("trapimm:ccpu:%d reas:%d\n", ccpu, reason);
+    local_irq_enable();
+    spin_lock(&kdb_immed_lk);
+    kdb_trap_immed_reason = reason;
+    barrier();
+    __asm__ __volatile__ ( "int $1" );
+    kdb_trap_immed_reason = 0;
+
+    spin_unlock(&kdb_immed_lk);
+    if (disabled)
+        local_irq_disable();
+}
+
+/* called very early during init, even before all CPUs are brought online */
+void 
+kdb_init(void)
+{
+        kdb_init_cmdtab();      /* Initialize Command Table */
+}
+
+static const char *
+kdb_gettrapname(int trapno)
+{
+    char *ret;
+    switch (trapno) {
+        case  0:  ret = "Divide Error"; break;
+        case  2:  ret = "NMI Interrupt"; break;
+        case  3:  ret = "Int 3 Trap"; break;
+        case  4:  ret = "Overflow Error"; break;
+        case  6:  ret = "Invalid Opcode"; break;
+        case  8:  ret = "Double Fault"; break;
+        case 10:  ret = "Invalid TSS"; break;
+        case 11:  ret = "Segment Not Present"; break;
+        case 12:  ret = "Stack-Segment Fault"; break;
+        case 13:  ret = "General Protection"; break;
+        case 14:  ret = "Page Fault"; break;
+        case 17:  ret = "Alignment Check"; break;
+        default: ret = " ????? ";
+    }
+    return ret;
+}
+
+
+/* ====================== Generic tracing subsystem ======================== */
+
+#define KDBTRCMAX 1       /* set this to the max number of recs to trace; each
+                           * rec is 32 bytes */
+volatile int kdb_trcon=1; /* turn tracing ON: set here or via the trcon cmd */
+
+typedef struct {
+    union {
+        struct { uint d0; uint cpu_trcid; } s0;
+        uint64_t l0;
+    }u;
+    uint64_t l1, l2, l3; 
+} trc_rec_t;
+
+static volatile unsigned int trcidx;    /* points to where new entry will go */
+static trc_rec_t trca[KDBTRCMAX];       /* trace array */
+
+/* atomically: add i to *p, and return the prev value of *p (ie, before add) */
+static int
+kdb_fetch_and_add(int i, uint *p)
+{
+    asm volatile("lock xaddl %0, %1;" : "=r"(i) : "m"(*p), "0"(i));
+    return i;
+}
+
+/* zero out the entire buffer */
+void 
+kdb_trczero(void)
+{
+    for (trcidx = KDBTRCMAX-1; trcidx; trcidx--) {
+        memset(&trca[trcidx], 0, sizeof(trc_rec_t));
+    }
+    memset(&trca[trcidx], 0, sizeof(trc_rec_t));
+    kdbp("kdb trace buffer has been zeroed\n");
+}
+
+/* add a trace entry. eg: kdbtrc(0xe0f099, intdata, vcpu, domain, 0)
+ *    where 0xe0f099 is the trcid (24 bits max); the lower 8 bits of the
+ *    stored id hold the cpuid */
+void
+kdbtrc(uint trcid, uint int_d0, uint64_t d1_64, uint64_t d2_64, uint64_t d3_64)
+{
+    uint idx;
+
+    if (!kdb_trcon)
+        return;
+
+    idx = kdb_fetch_and_add(1, (uint*)&trcidx);
+    idx = idx % KDBTRCMAX;
+
+#if 0
+    trca[idx].u.s0.cpu_trcid = (smp_processor_id()<<24) | trcid;
+#endif
+    trca[idx].u.s0.cpu_trcid = (trcid<<8) | smp_processor_id();
+    trca[idx].u.s0.d0 = int_d0;
+    trca[idx].l1 = d1_64;
+    trca[idx].l2 = d2_64;
+    trca[idx].l3 = d3_64;
+}
+
+/* Print hints so the user can dump the trc buffer via the dd command; the
+ * last entry printed is the most recent one */
+void
+kdb_trcp(void)
+{
+    int i = trcidx % KDBTRCMAX;
+
+    i = (i==0) ? KDBTRCMAX-1 : i-1;
+    kdbp("trcbuf:    [0]: %016lx [MAX-1]: %016lx\n", &trca[0],
+         &trca[KDBTRCMAX-1]);
+    kdbp(" [most recent]: %016lx   trcidx: 0x%x\n", &trca[i], trcidx);
+}
+
diff -r 32034d1914a6 xen/kdb/x86/Makefile
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/xen/kdb/x86/Makefile	Wed Aug 29 14:39:57 2012 -0700
@@ -0,0 +1,3 @@
+
+obj-y    := kdb_wp.o
+subdir-y += udis86-1.7
diff -r 32034d1914a6 xen/kdb/x86/kdb_wp.c
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/xen/kdb/x86/kdb_wp.c	Wed Aug 29 14:39:57 2012 -0700
@@ -0,0 +1,310 @@
+/*
+ * Copyright (C) 2009, Mukesh Rathor, Oracle Corp.  All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public
+ * License v2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public
+ * License along with this program; if not, write to the
+ * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
+ * Boston, MA 021110-1307, USA.
+ */
+
+#include "../include/kdbinc.h"
+
+#if 0
+#define DR6_BT  0x00008000
+#define DR6_BS  0x00004000
+#define DR6_BD  0x00002000
+#endif
+#define DR6_B3  0x00000008
+#define DR6_B2  0x00000004
+#define DR6_B1  0x00000002
+#define DR6_B0  0x00000001
+
+#define KDB_MAXWP 4                          /* DR0 thru DR3 */
+
+struct kdb_wp {
+    kdbma_t  wp_addr;
+    int      wp_rwflag;
+    int      wp_len;
+    int      wp_deleted;                     /* pending delete */
+};
+static struct kdb_wp kdb_wpa[KDB_MAXWP];
+
+/* The following is needed because the vmcs has its own dr7. When the vmcs
+ * runs, it clobbers the native dr7, so we need to save/restore it */
+unsigned long kdb_dr7;
+
+
+/* Set the G0-G3 bits in DR7. This does a global enable of the corresponding wp */
+static void
+kdb_set_gx_in_dr7(int regno, kdbma_t *dr7p)
+{
+    if (regno == 0)
+        *dr7p = *dr7p | 0x2;
+    else if (regno == 1)
+        *dr7p = *dr7p | 0x8;
+    else if (regno == 2)
+        *dr7p = *dr7p | 0x20;
+    else if (regno == 3)
+        *dr7p = *dr7p | 0x80;
+}
+
+/* Set LEN0 - LEN3 pair bits in DR7 (len should be 1 2 4 or 8) */
+static void
+kdb_set_len_in_dr7(int regno, kdbma_t *dr7p, int len)
+{
+    int lenbits = (len == 8) ? 2 : len-1;
+
+    *dr7p &= ~(0x3 << (18 + 4*regno));
+    *dr7p |= ((ulong)(lenbits & 0x3) << (18 + 4*regno));
+}
+
+static void
+kdb_set_dr7_rw(int regno, kdbma_t *dr7p, int rw)
+{
+    *dr7p &= ~(0x3 << (16 + 4*regno));
+    *dr7p |= ((ulong)(rw & 0x3)) << (16 + 4*regno);
+}
+
+/* get the value of a debug register: DR0-DR3, DR6, DR7. Other regnums return 0 */
+kdbma_t
+kdb_rd_dbgreg(int regnum)
+{
+    kdbma_t contents = 0;
+
+    if (regnum == 0)
+        __asm__ ("movq %%db0,%0\n\t":"=r"(contents));
+    else if (regnum == 1)
+        __asm__ ("movq %%db1,%0\n\t":"=r"(contents));
+    else if (regnum == 2)
+        __asm__ ("movq %%db2,%0\n\t":"=r"(contents));
+    else if (regnum == 3)
+        __asm__ ("movq %%db3,%0\n\t":"=r"(contents));
+    else if (regnum == 6)
+        __asm__ ("movq %%db6,%0\n\t":"=r"(contents));
+    else if (regnum == 7)
+        __asm__ ("movq %%db7,%0\n\t":"=r"(contents));
+
+    return contents;
+}
+
+static void
+kdb_wr_dbgreg(int regnum, kdbma_t contents)
+{
+    if (regnum == 0)
+        __asm__ ("movq %0,%%db0\n\t"::"r"(contents));
+    else if (regnum == 1)
+        __asm__ ("movq %0,%%db1\n\t"::"r"(contents));
+    else if (regnum == 2)
+        __asm__ ("movq %0,%%db2\n\t"::"r"(contents));
+    else if (regnum == 3)
+        __asm__ ("movq %0,%%db3\n\t"::"r"(contents));
+    else if (regnum == 6)
+        __asm__ ("movq %0,%%db6\n\t"::"r"(contents));
+    else if (regnum == 7)
+        __asm__ ("movq %0,%%db7\n\t"::"r"(contents));
+}
+
+static void
+kdb_print_wp_info(char *strp, int idx)
+{
+    kdbp("%s[%d]:%016lx len:%d ", strp, idx, kdb_wpa[idx].wp_addr,
+         kdb_wpa[idx].wp_len);
+    if (kdb_wpa[idx].wp_rwflag == 1)
+        kdbp("on data write only\n");
+    else if (kdb_wpa[idx].wp_rwflag == 2)
+        kdbp("on IO read/write\n");
+    else 
+        kdbp("on data read/write\n");
+}
+
+/*
+ * Returns : 0 if not one of ours
+ *           1 if one of ours
+ */
+int
+kdb_check_watchpoints(struct cpu_user_regs *regs)
+{
+    int wpnum;
+    kdbma_t dr6 = kdb_rd_dbgreg(6);
+
+    KDBGP1("check_wp: IP:%lx EFLAGS:%lx\n", regs->rip, regs->rflags);
+    if (dr6 & DR6_B0)
+        wpnum = 0;
+    else if (dr6 & DR6_B1)
+        wpnum = 1;
+    else if (dr6 & DR6_B2)
+        wpnum = 2;
+    else if (dr6 & DR6_B3)
+        wpnum = 3;
+    else
+        return 0;
+
+    kdb_print_wp_info("Watchpoint ", wpnum);
+    return 1;
+}
+
+/* set a watchpoint at the given address.
+ * Precondition: addr != 0 */
+static void
+kdb_set_wp(kdbva_t addr, int rwflag, int len)
+{
+    int regno;
+
+    for (regno=0; regno < KDB_MAXWP; regno++) {
+        if (kdb_wpa[regno].wp_addr == addr && !kdb_wpa[regno].wp_deleted) {
+            kdbp("Watchpoint already set\n");
+            return;
+        }
+        if (kdb_wpa[regno].wp_deleted)
+            memset(&kdb_wpa[regno], 0, sizeof(kdb_wpa[regno]));
+    }
+    for (regno=0; regno < KDB_MAXWP && kdb_wpa[regno].wp_addr; regno++);
+    if (regno >= KDB_MAXWP) {
+        kdbp("watchpoint table full. limit:%d\n", KDB_MAXWP);
+        return;
+    }
+    kdb_wpa[regno].wp_addr = addr;
+    kdb_wpa[regno].wp_rwflag = rwflag;
+    kdb_wpa[regno].wp_len = len;
+    kdb_print_wp_info("Watchpoint set ", regno);
+}
+
+/* write reg DR0-3 with address. Update corresponding bits in DR7 */
+static void
+kdb_install_watchpoint(int regno, kdbma_t *dr7p)
+{
+    kdb_set_gx_in_dr7(regno, dr7p);
+    kdb_set_len_in_dr7(regno, dr7p, kdb_wpa[regno].wp_len); 
+    kdb_set_dr7_rw(regno, dr7p, kdb_wpa[regno].wp_rwflag);
+    kdb_wr_dbgreg(regno, kdb_wpa[regno].wp_addr);
+
+    KDBGP1("ccpu:%d installed wp. addr:%lx rw:%x len:%x dr7:%016lx\n",
+           smp_processor_id(), kdb_wpa[regno].wp_addr, 
+           kdb_wpa[regno].wp_rwflag, kdb_wpa[regno].wp_len, *dr7p);
+}
+
+/* clear G0-G3 bits in DR7 for given DR0-3 */
+static void
+kdb_clear_dr7_gx(int regno, kdbma_t *dr7p)
+{
+    if (regno == 0)
+        *dr7p = *dr7p & ~0x2;
+    else if (regno == 1)
+        *dr7p = *dr7p & ~0x8;
+    else if (regno == 2)
+        *dr7p = *dr7p & ~0x20;
+    else if (regno == 3)
+        *dr7p = *dr7p & ~0x80;
+}
+
+/* Update dr7 only once, as it's slow to update debug regs, and the cpus will
+ * still be paused when leaving kdb.
+ *
+ * Just leave DR0-3 clobbered, but remove the bits from DR7 to disable the wp.
+ */
+void
+kdb_install_watchpoints(void)
+{
+    int regno;
+    kdbma_t dr7 = kdb_rd_dbgreg(7);
+
+    for (regno=0; regno < KDB_MAXWP; regno++) {
+        /* do not clear wp_deleted here as all cpus must clear wps */
+        if (kdb_wpa[regno].wp_deleted) {
+            kdb_clear_dr7_gx(regno, &dr7);
+            continue;
+        }
+        if (kdb_wpa[regno].wp_addr)
+            kdb_install_watchpoint(regno, &dr7);
+    }
+    /* always clear DR6 when leaving */
+    kdb_wr_dbgreg(6, 0);
+    kdb_wr_dbgreg(7, dr7);
+
+    if (dr7 & DR7_ACTIVE_MASK)
+        kdb_dr7 = dr7;
+    else
+        kdb_dr7 = 0;
+#if 0
+    for(dp=domain_list; dp; dp=dp->next_in_list) {
+        struct vcpu *vp;
+        for_each_vcpu(dp, vp) {
+            for (regno=0; regno < KDB_MAXWP; regno++)
+                vp->arch.guest_context.debugreg[regno] = kdb_wpa[regno].wp_addr;
+
+            vp->arch.guest_context.debugreg[6] = 0;
+            vp->arch.guest_context.debugreg[7] = dr7;
+            KDBGP("kdb_install_watchpoints(): v:%p dr7:%lx\n", vp, dr7);
+            /* hvm_set_info_guest(vp);: Can't because can't vmcs_enter in kdb */
+        }
+    }
+#endif
+}
+
+/* clear watchpoint/s. wpnum == -1 to clear all watchpoints */
+void
+kdb_clear_wps(int wpnum)
+{
+    int i;
+
+    if (wpnum >= KDB_MAXWP) {
+        kdbp("Invalid wpnum %d\n", wpnum);
+        return;
+    }
+    if (wpnum >=0) {
+        if (kdb_wpa[wpnum].wp_addr) {
+            kdb_wpa[wpnum].wp_deleted = 1;
+            kdb_print_wp_info("Deleted watchpoint", wpnum);
+        } else
+            kdbp("watchpoint %d not set\n", wpnum);
+        return;
+    }
+    for (i=0; i < KDB_MAXWP; i++) {
+        if (kdb_wpa[i].wp_addr) {
+            kdb_wpa[i].wp_deleted = 1;
+            kdb_print_wp_info("Deleted watchpoint", i);
+        }
+    }
+}
+
+/* display any watchpoints that are set */
+static void
+kdb_display_wps(void)
+{
+    int i;
+    for (i=0; i < KDB_MAXWP; i++)
+        if (kdb_wpa[i].wp_addr && !kdb_wpa[i].wp_deleted) 
+            kdb_print_wp_info("", i);
+}
+
+/* 
+ * Display or set hardware breakpoints, i.e., watchpoints:
+ *   - Up to 4 are allowed
+ *
+ *  rw_flag should be one of:
+ *     01 == break on data write only
+ *     10 == break on IO read/write
+ *     11 == break on data reads or writes
+ *
+ *  len should be one of: 1, 2, 4, or 8
+ */
+void
+kdb_do_watchpoints(kdbva_t addr, int rw_flag, int len)
+{
+    if (addr == 0) {
+        kdb_display_wps();        /* display set watchpoints */
+        return;
+    }
+    kdb_set_wp(addr, rw_flag, len);
+    return;
+}
+
diff -r 32034d1914a6 xen/kdb/x86/udis86-1.7/LICENSE
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/xen/kdb/x86/udis86-1.7/LICENSE	Wed Aug 29 14:39:57 2012 -0700
@@ -0,0 +1,22 @@
+Copyright (c) 2002, 2003, 2004, 2005, 2006 <vivek@sig9.com>
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without modification, 
+are permitted provided that the following conditions are met:
+
+    * Redistributions of source code must retain the above copyright notice, 
+      this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright notice, 
+      this list of conditions and the following disclaimer in the documentation 
+      and/or other materials provided with the distribution.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND 
+ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED 
+WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE 
+DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR 
+ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES 
+(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; 
+LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON 
+ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT 
+(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS 
+SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
diff -r 32034d1914a6 xen/kdb/x86/udis86-1.7/Makefile
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/xen/kdb/x86/udis86-1.7/Makefile	Wed Aug 29 14:39:57 2012 -0700
@@ -0,0 +1,5 @@
+
+CFLAGS		+= -D__UD_STANDALONE__
+obj-y		:= decode.o input.o itab.o kdb_dis.o syn-att.o syn.o \
+                   syn-intel.o udis86.o
+
diff -r 32034d1914a6 xen/kdb/x86/udis86-1.7/README
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/xen/kdb/x86/udis86-1.7/README	Wed Aug 29 14:39:57 2012 -0700
@@ -0,0 +1,10 @@
+
+http://udis86.sourceforge.net/
+udis86-1.6 : 
+  - cd libudis86
+  - cp *c to here
+  - cp *h to here
+   
+Mukesh Rathor
+04/30/2008
+
diff -r 32034d1914a6 xen/kdb/x86/udis86-1.7/decode.c
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/xen/kdb/x86/udis86-1.7/decode.c	Wed Aug 29 14:39:57 2012 -0700
@@ -0,0 +1,1197 @@
+/* -----------------------------------------------------------------------------
+ * decode.c
+ *
+ * Copyright (c) 2005, 2006, Vivek Mohan <vivek@sig9.com>
+ * All rights reserved. See LICENSE
+ * -----------------------------------------------------------------------------
+ */
+
+#if 0
+#include <assert.h>
+#include <string.h>
+#endif
+
+#include "types.h"
+#include "itab.h"
+#include "input.h"
+#include "decode.h"
+
+/* The max number of prefixes to an instruction */
+#define MAX_PREFIXES    15
+
+static struct ud_itab_entry ie_invalid = { UD_Iinvalid, O_NONE, O_NONE, O_NONE, P_none };
+static struct ud_itab_entry ie_pause   = { UD_Ipause,   O_NONE, O_NONE, O_NONE, P_none };
+static struct ud_itab_entry ie_nop     = { UD_Inop,     O_NONE, O_NONE, O_NONE, P_none };
+
+
+/* Looks up mnemonic code in the mnemonic string table
+ * Returns NULL if the mnemonic code is invalid
+ */
+const char * ud_lookup_mnemonic( enum ud_mnemonic_code c )
+{
+    if ( c < UD_Id3vil )
+        return ud_mnemonics_str[ c ];
+    return NULL;
+}
+
+
+/* Extracts instruction prefixes.
+ */
+static int get_prefixes( struct ud* u )
+{
+    unsigned int have_pfx = 1;
+    unsigned int i;
+    uint8_t curr;
+
+    /* if in error state, bail out */
+    if ( u->error ) 
+        return -1; 
+
+    /* keep going as long as there are prefixes available */
+    for ( i = 0; have_pfx ; ++i ) {
+
+        /* Get next byte. */
+        inp_next(u); 
+        if ( u->error ) 
+            return -1;
+        curr = inp_curr( u );
+
+        /* rex prefixes in 64bit mode */
+        if ( u->dis_mode == 64 && ( curr & 0xF0 ) == 0x40 ) {
+            u->pfx_rex = curr;  
+        } else {
+            switch ( curr )  
+            {
+            case 0x2E : 
+                u->pfx_seg = UD_R_CS; 
+                u->pfx_rex = 0;
+                break;
+            case 0x36 :     
+                u->pfx_seg = UD_R_SS; 
+                u->pfx_rex = 0;
+                break;
+            case 0x3E : 
+                u->pfx_seg = UD_R_DS; 
+                u->pfx_rex = 0;
+                break;
+            case 0x26 : 
+                u->pfx_seg = UD_R_ES; 
+                u->pfx_rex = 0;
+                break;
+            case 0x64 : 
+                u->pfx_seg = UD_R_FS; 
+                u->pfx_rex = 0;
+                break;
+            case 0x65 : 
+                u->pfx_seg = UD_R_GS; 
+                u->pfx_rex = 0;
+                break;
+            case 0x67 : /* address-size override prefix */
+                u->pfx_adr = 0x67;
+                u->pfx_rex = 0;
+                break;
+            case 0xF0 : 
+                u->pfx_lock = 0xF0;
+                u->pfx_rex  = 0;
+                break;
+            case 0x66: 
+                /* the 0x66 sse prefix is only effective if no other sse prefix
+                 * has already been specified.
+                 */
+                if ( !u->pfx_insn ) u->pfx_insn = 0x66;
+                u->pfx_opr = 0x66;           
+                u->pfx_rex = 0;
+                break;
+            case 0xF2:
+                u->pfx_insn  = 0xF2;
+                u->pfx_repne = 0xF2; 
+                u->pfx_rex   = 0;
+                break;
+            case 0xF3:
+                u->pfx_insn = 0xF3;
+                u->pfx_rep  = 0xF3; 
+                u->pfx_repe = 0xF3; 
+                u->pfx_rex  = 0;
+                break;
+            default : 
+                /* No more prefixes */
+                have_pfx = 0;
+                break;
+            }
+        }
+
+        /* check if we reached max instruction length */
+        if ( i + 1 == MAX_INSN_LENGTH ) {
+            u->error = 1;
+            break;
+        }
+    }
+
+    /* return status */
+    if ( u->error ) 
+        return -1; 
+
+    /* rewind back one byte in stream, since the above loop 
+     * stops with a non-prefix byte. 
+     */
+    inp_back(u);
+
+    /* speculatively determine the effective operand mode,
+     * based on the prefixes and the current disassembly
+     * mode. This may be inaccurate, but useful for mode
+     * dependent decoding.
+     */
+    if ( u->dis_mode == 64 ) {
+        u->opr_mode = REX_W( u->pfx_rex ) ? 64 : ( ( u->pfx_opr ) ? 16 : 32 ) ;
+        u->adr_mode = ( u->pfx_adr ) ? 32 : 64;
+    } else if ( u->dis_mode == 32 ) {
+        u->opr_mode = ( u->pfx_opr ) ? 16 : 32;
+        u->adr_mode = ( u->pfx_adr ) ? 16 : 32;
+    } else if ( u->dis_mode == 16 ) {
+        u->opr_mode = ( u->pfx_opr ) ? 32 : 16;
+        u->adr_mode = ( u->pfx_adr ) ? 32 : 16;
+    }
+
+    return 0;
+}
+
+
+/* Searches the instruction tables for the right entry.
+ */
+static int search_itab( struct ud * u )
+{
+    struct ud_itab_entry * e = NULL;
+    enum ud_itab_index table;
+    uint8_t peek;
+    uint8_t did_peek = 0;
+    uint8_t curr; 
+    uint8_t index;
+
+    /* if in state of error, return */
+    if ( u->error ) 
+        return -1;
+
+    /* get first byte of opcode. */
+    inp_next(u); 
+    if ( u->error ) 
+        return -1;
+    curr = inp_curr(u); 
+
+    /* resolve xchg, nop, pause craziness */
+    if ( 0x90 == curr ) {
+        if ( !( u->dis_mode == 64 && REX_B( u->pfx_rex ) ) ) {
+            if ( u->pfx_rep ) {
+                u->pfx_rep = 0;
+                e = & ie_pause;
+            } else {
+                e = & ie_nop;
+            }
+            goto found_entry;
+        }
+    }
+
+    /* get top-level table */
+    if ( 0x0F == curr ) {
+        table = ITAB__0F;
+        curr  = inp_next(u);
+        if ( u->error )
+            return -1;
+
+        /* 2byte opcodes can be modified by 0x66, F3, and F2 prefixes */
+        if ( 0x66 == u->pfx_insn ) {
+            if ( ud_itab_list[ ITAB__PFX_SSE66__0F ][ curr ].mnemonic != UD_Iinvalid ) {
+                table = ITAB__PFX_SSE66__0F;
+                u->pfx_opr = 0;
+            }
+        } else if ( 0xF2 == u->pfx_insn ) {
+            if ( ud_itab_list[ ITAB__PFX_SSEF2__0F ][ curr ].mnemonic != UD_Iinvalid ) {
+                table = ITAB__PFX_SSEF2__0F; 
+                u->pfx_repne = 0;
+            }
+        } else if ( 0xF3 == u->pfx_insn ) {
+            if ( ud_itab_list[ ITAB__PFX_SSEF3__0F ][ curr ].mnemonic != UD_Iinvalid ) {
+                table = ITAB__PFX_SSEF3__0F;
+                u->pfx_repe = 0;
+                u->pfx_rep  = 0;
+            }
+        }
+    /* pick an instruction from the 1byte table */
+    } else {
+        table = ITAB__1BYTE; 
+    }
+
+    index = curr;
+
+search:
+
+    e = & ud_itab_list[ table ][ index ];
+
+    /* if mnemonic constant is a standard instruction constant
+     * our search is over.
+     */
+    
+    if ( e->mnemonic < UD_Id3vil ) {
+        if ( e->mnemonic == UD_Iinvalid ) {
+            if ( did_peek ) {
+                inp_next( u ); if ( u->error ) return -1;
+            }
+            goto found_entry;
+        }
+        goto found_entry;
+    }
+
+    table = e->prefix;
+
+    switch ( e->mnemonic )
+    {
+    case UD_Igrp_reg:
+        peek     = inp_peek( u );
+        did_peek = 1;
+        index    = MODRM_REG( peek );
+        break;
+
+    case UD_Igrp_mod:
+        peek     = inp_peek( u );
+        did_peek = 1;
+        index    = MODRM_MOD( peek );
+        if ( index == 3 )
+           index = ITAB__MOD_INDX__11;
+        else 
+           index = ITAB__MOD_INDX__NOT_11; 
+        break;
+
+    case UD_Igrp_rm:
+        curr     = inp_next( u );
+        did_peek = 0;
+        if ( u->error )
+            return -1;
+        index    = MODRM_RM( curr );
+        break;
+
+    case UD_Igrp_x87:
+        curr     = inp_next( u );
+        did_peek = 0;
+        if ( u->error )
+            return -1;
+        index    = curr - 0xC0;
+        break;
+
+    case UD_Igrp_osize:
+        if ( u->opr_mode == 64 ) 
+            index = ITAB__MODE_INDX__64;
+        else if ( u->opr_mode == 32 ) 
+            index = ITAB__MODE_INDX__32;
+        else
+            index = ITAB__MODE_INDX__16;
+        break;
+ 
+    case UD_Igrp_asize:
+        if ( u->adr_mode == 64 ) 
+            index = ITAB__MODE_INDX__64;
+        else if ( u->adr_mode == 32 ) 
+            index = ITAB__MODE_INDX__32;
+        else
+            index = ITAB__MODE_INDX__16;
+        break;               
+
+    case UD_Igrp_mode:
+        if ( u->dis_mode == 64 ) 
+            index = ITAB__MODE_INDX__64;
+        else if ( u->dis_mode == 32 ) 
+            index = ITAB__MODE_INDX__32;
+        else
+            index = ITAB__MODE_INDX__16;
+        break;
+
+    case UD_Igrp_vendor:
+        if ( u->vendor == UD_VENDOR_INTEL ) 
+            index = ITAB__VENDOR_INDX__INTEL; 
+        else if ( u->vendor == UD_VENDOR_AMD )
+            index = ITAB__VENDOR_INDX__AMD;
+        else {
+            kdbp("KDB:search_itab(): unrecognized vendor id\n");
+            return -1;
+        }
+        break;
+
+    case UD_Id3vil:
+        kdbp("KDB:search_itab(): invalid instr mnemonic constant Id3vil\n");
+        return -1;
+
+    default:
+        kdbp("KDB:search_itab(): invalid instruction mnemonic constant\n");
+        return -1;
+    }
+
+    goto search;
+
+found_entry:
+
+    u->itab_entry = e;
+    u->mnemonic = u->itab_entry->mnemonic;
+
+    return 0;
+}
+
+
+static unsigned int resolve_operand_size( const struct ud * u, unsigned int s )
+{
+    switch ( s ) 
+    {
+    case SZ_V:
+        return ( u->opr_mode );
+    case SZ_Z:  
+        return ( u->opr_mode == 16 ) ? 16 : 32;
+    case SZ_P:  
+        return ( u->opr_mode == 16 ) ? SZ_WP : SZ_DP;
+    case SZ_MDQ:
+        return ( u->opr_mode == 16 ) ? 32 : u->opr_mode;
+    case SZ_RDQ:
+        return ( u->dis_mode == 64 ) ? 64 : 32;
+    default:
+        return s;
+    }
+}
+
+
+static int resolve_mnemonic( struct ud* u )
+{
+  /* far/near flags */
+  u->br_far = 0;
+  u->br_near = 0;
+  /* readjust operand sizes for call/jmp instructions */
+  if ( u->mnemonic == UD_Icall || u->mnemonic == UD_Ijmp ) {
+    /* WP: 16bit pointer */
+    if ( u->operand[ 0 ].size == SZ_WP ) {
+        u->operand[ 0 ].size = 16;
+        u->br_far = 1;
+        u->br_near= 0;
+    /* DP: 32bit pointer */
+    } else if ( u->operand[ 0 ].size == SZ_DP ) {
+        u->operand[ 0 ].size = 32;
+        u->br_far = 1;
+        u->br_near= 0;
+    } else {
+        u->br_far = 0;
+        u->br_near= 1;
+    }
+  /* resolve 3dnow weirdness. */
+  } else if ( u->mnemonic == UD_I3dnow ) {
+    u->mnemonic = ud_itab_list[ ITAB__3DNOW ][ inp_curr( u )  ].mnemonic;
+  }
+  /* SWAPGS is only valid in 64-bit mode */
+  if ( u->mnemonic == UD_Iswapgs && u->dis_mode != 64 ) {
+    u->error = 1;
+    return -1;
+  }
+
+  return 0;
+}
+
+
+/* -----------------------------------------------------------------------------
+ * decode_a() - Decodes operands of the type seg:offset
+ * -----------------------------------------------------------------------------
+ */
+static void 
+decode_a(struct ud* u, struct ud_operand *op)
+{
+  if (u->opr_mode == 16) {  
+    /* seg16:off16 */
+    op->type = UD_OP_PTR;
+    op->size = 32;
+    op->lval.ptr.off = inp_uint16(u);
+    op->lval.ptr.seg = inp_uint16(u);
+  } else {
+    /* seg16:off32 */
+    op->type = UD_OP_PTR;
+    op->size = 48;
+    op->lval.ptr.off = inp_uint32(u);
+    op->lval.ptr.seg = inp_uint16(u);
+  }
+}
+
+/* -----------------------------------------------------------------------------
+ * decode_gpr() - Returns decoded General Purpose Register 
+ * -----------------------------------------------------------------------------
+ */
+static enum ud_type 
+decode_gpr(register struct ud* u, unsigned int s, unsigned char rm)
+{
+  s = resolve_operand_size(u, s);
+        
+  switch (s) {
+    case 64:
+        return UD_R_RAX + rm;
+    case SZ_DP:
+    case 32:
+        return UD_R_EAX + rm;
+    case SZ_WP:
+    case 16:
+        return UD_R_AX  + rm;
+    case  8:
+        if (u->dis_mode == 64 && u->pfx_rex) {
+            if (rm >= 4)
+                return UD_R_SPL + (rm-4);
+            return UD_R_AL + rm;
+        } else return UD_R_AL + rm;
+    default:
+        return 0;
+  }
+}
+
+/* -----------------------------------------------------------------------------
+ * resolve_gpr64() - 64-bit general purpose register selection. 
+ * -----------------------------------------------------------------------------
+ */
+static enum ud_type 
+resolve_gpr64(struct ud* u, enum ud_operand_code gpr_op)
+{
+  if (gpr_op >= OP_rAXr8 && gpr_op <= OP_rDIr15)
+    gpr_op = (gpr_op - OP_rAXr8) | (REX_B(u->pfx_rex) << 3);          
+  else  gpr_op = (gpr_op - OP_rAX);
+
+  if (u->opr_mode == 16)
+    return gpr_op + UD_R_AX;
+  if (u->dis_mode == 32 || 
+    (u->opr_mode == 32 && ! (REX_W(u->pfx_rex) || u->default64))) {
+    return gpr_op + UD_R_EAX;
+  }
+
+  return gpr_op + UD_R_RAX;
+}
+
+/* -----------------------------------------------------------------------------
+ * resolve_gpr32() - 32-bit general purpose register selection. 
+ * -----------------------------------------------------------------------------
+ */
+static enum ud_type 
+resolve_gpr32(struct ud* u, enum ud_operand_code gpr_op)
+{
+  gpr_op = gpr_op - OP_eAX;
+
+  if (u->opr_mode == 16) 
+    return gpr_op + UD_R_AX;
+
+  return gpr_op +  UD_R_EAX;
+}
+
+/* -----------------------------------------------------------------------------
+ * resolve_reg() - Resolves the register type 
+ * -----------------------------------------------------------------------------
+ */
+static enum ud_type 
+resolve_reg(struct ud* u, unsigned int type, unsigned char i)
+{
+  switch (type) {
+    case T_MMX :    return UD_R_MM0  + (i & 7);
+    case T_XMM :    return UD_R_XMM0 + i;
+    case T_CRG :    return UD_R_CR0  + i;
+    case T_DBG :    return UD_R_DR0  + i;
+    case T_SEG :    return UD_R_ES   + (i & 7);
+    case T_NONE:
+    default:    return UD_NONE;
+  }
+}
+
+/* -----------------------------------------------------------------------------
+ * decode_imm() - Decodes Immediate values.
+ * -----------------------------------------------------------------------------
+ */
+static void 
+decode_imm(struct ud* u, unsigned int s, struct ud_operand *op)
+{
+  op->size = resolve_operand_size(u, s);
+  op->type = UD_OP_IMM;
+
+  switch (op->size) {
+    case  8: op->lval.sbyte = inp_uint8(u);   break;
+    case 16: op->lval.uword = inp_uint16(u);  break;
+    case 32: op->lval.udword = inp_uint32(u); break;
+    case 64: op->lval.uqword = inp_uint64(u); break;
+    default: return;
+  }
+}
+
+/* -----------------------------------------------------------------------------
+ * decode_modrm() - Decodes ModRM Byte
+ * -----------------------------------------------------------------------------
+ */
+static void 
+decode_modrm(struct ud* u, struct ud_operand *op, unsigned int s, 
+         unsigned char rm_type, struct ud_operand *opreg, 
+         unsigned char reg_size, unsigned char reg_type)
+{
+  unsigned char mod, rm, reg;
+
+  inp_next(u);
+
+  /* get mod, r/m and reg fields */
+  mod = MODRM_MOD(inp_curr(u));
+  rm  = (REX_B(u->pfx_rex) << 3) | MODRM_RM(inp_curr(u));
+  reg = (REX_R(u->pfx_rex) << 3) | MODRM_REG(inp_curr(u));
+
+  op->size = resolve_operand_size(u, s);
+
+  /* if mod is 11b, then modrm.rm specifies a gpr/mmx/sse/control/debug register */
+  if (mod == 3) {
+    op->type = UD_OP_REG;
+    if (rm_type ==  T_GPR)
+        op->base = decode_gpr(u, op->size, rm);
+    else    op->base = resolve_reg(u, rm_type, (REX_B(u->pfx_rex) << 3) | (rm&7));
+  } 
+  /* else it's memory addressing */
+  else {
+    op->type = UD_OP_MEM;
+
+    /* 64bit addressing */
+    if (u->adr_mode == 64) {
+
+        op->base = UD_R_RAX + rm;
+
+        /* get offset type */
+        if (mod == 1)
+            op->offset = 8;
+        else if (mod == 2)
+            op->offset = 32;
+        else if (mod == 0 && (rm & 7) == 5) {           
+            op->base = UD_R_RIP;
+            op->offset = 32;
+        } else  op->offset = 0;
+
+        /* Scale-Index-Base (SIB) */
+        if ((rm & 7) == 4) {
+            inp_next(u);
+            
+            op->scale = (1 << SIB_S(inp_curr(u))) & ~1;
+            op->index = UD_R_RAX + (SIB_I(inp_curr(u)) | (REX_X(u->pfx_rex) << 3));
+            op->base  = UD_R_RAX + (SIB_B(inp_curr(u)) | (REX_B(u->pfx_rex) << 3));
+
+            /* special conditions for base reference */
+            if (op->index == UD_R_RSP) {
+                op->index = UD_NONE;
+                op->scale = UD_NONE;
+            }
+
+            if (op->base == UD_R_RBP || op->base == UD_R_R13) {
+                if (mod == 0) 
+                    op->base = UD_NONE;
+                if (mod == 1)
+                    op->offset = 8;
+                else op->offset = 32;
+            }
+        }
+    } 
+
+    /* 32-Bit addressing mode */
+    else if (u->adr_mode == 32) {
+
+        /* get base */
+        op->base = UD_R_EAX + rm;
+
+        /* get offset type */
+        if (mod == 1)
+            op->offset = 8;
+        else if (mod == 2)
+            op->offset = 32;
+        else if (mod == 0 && rm == 5) {
+            op->base = UD_NONE;
+            op->offset = 32;
+        } else  op->offset = 0;
+
+        /* Scale-Index-Base (SIB) */
+        if ((rm & 7) == 4) {
+            inp_next(u);
+
+            op->scale = (1 << SIB_S(inp_curr(u))) & ~1;
+            op->index = UD_R_EAX + (SIB_I(inp_curr(u)) | (REX_X(u->pfx_rex) << 3));
+            op->base  = UD_R_EAX + (SIB_B(inp_curr(u)) | (REX_B(u->pfx_rex) << 3));
+
+            if (op->index == UD_R_ESP) {
+                op->index = UD_NONE;
+                op->scale = UD_NONE;
+            }
+
+            /* special condition for base reference */
+            if (op->base == UD_R_EBP) {
+                if (mod == 0)
+                    op->base = UD_NONE;
+                if (mod == 1)
+                    op->offset = 8;
+                else op->offset = 32;
+            }
+        }
+    } 
+
+    /* 16bit addressing mode */
+    else  {
+        switch (rm) {
+            case 0: op->base = UD_R_BX; op->index = UD_R_SI; break;
+            case 1: op->base = UD_R_BX; op->index = UD_R_DI; break;
+            case 2: op->base = UD_R_BP; op->index = UD_R_SI; break;
+            case 3: op->base = UD_R_BP; op->index = UD_R_DI; break;
+            case 4: op->base = UD_R_SI; break;
+            case 5: op->base = UD_R_DI; break;
+            case 6: op->base = UD_R_BP; break;
+            case 7: op->base = UD_R_BX; break;
+        }
+
+        if (mod == 0 && rm == 6) {
+            op->offset= 16;
+            op->base = UD_NONE;
+        }
+        else if (mod == 1)
+            op->offset = 8;
+        else if (mod == 2) 
+            op->offset = 16;
+    }
+  }  
+
+  /* extract offset, if any */
+  switch(op->offset) {
+    case 8 : op->lval.ubyte  = inp_uint8(u);  break;
+    case 16: op->lval.uword  = inp_uint16(u);  break;
+    case 32: op->lval.udword = inp_uint32(u); break;
+    case 64: op->lval.uqword = inp_uint64(u); break;
+    default: break;
+  }
+
+  /* resolve register encoded in reg field */
+  if (opreg) {
+    opreg->type = UD_OP_REG;
+    opreg->size = resolve_operand_size(u, reg_size);
+    if (reg_type == T_GPR) 
+        opreg->base = decode_gpr(u, opreg->size, reg);
+    else opreg->base = resolve_reg(u, reg_type, reg);
+  }
+}
+
+/* -----------------------------------------------------------------------------
+ * decode_o() - Decodes offset
+ * -----------------------------------------------------------------------------
+ */
+static void 
+decode_o(struct ud* u, unsigned int s, struct ud_operand *op)
+{
+  switch (u->adr_mode) {
+    case 64:
+        op->offset = 64; 
+        op->lval.uqword = inp_uint64(u); 
+        break;
+    case 32:
+        op->offset = 32; 
+        op->lval.udword = inp_uint32(u); 
+        break;
+    case 16:
+        op->offset = 16; 
+        op->lval.uword  = inp_uint16(u); 
+        break;
+    default:
+        return;
+  }
+  op->type = UD_OP_MEM;
+  op->size = resolve_operand_size(u, s);
+}
+
+/* -----------------------------------------------------------------------------
+ * disasm_operands() - Disassembles Operands.
+ * -----------------------------------------------------------------------------
+ */
+static int disasm_operands(register struct ud* u)
+{
+
+  /* mopXt = map entry, operand X, type */
+  enum ud_operand_code mop1t = u->itab_entry->operand1.type;
+  enum ud_operand_code mop2t = u->itab_entry->operand2.type;
+  enum ud_operand_code mop3t = u->itab_entry->operand3.type;
+
+  /* mopXs = map entry, operand X, size */
+  unsigned int mop1s = u->itab_entry->operand1.size;
+  unsigned int mop2s = u->itab_entry->operand2.size;
+  unsigned int mop3s = u->itab_entry->operand3.size;
+
+  /* iop = instruction operand */
+  register struct ud_operand* iop = u->operand;
+    
+  switch(mop1t) {
+    
+    case OP_A :
+        decode_a(u, &(iop[0]));
+        break;
+    
+    /* M[b] ... */
+    case OP_M :
+        if (MODRM_MOD(inp_peek(u)) == 3)
+            u->error= 1;
+    /* fall through: E, G/P/V/I/CL/1/S */
+    case OP_E :
+        if (mop2t == OP_G) {
+            decode_modrm(u, &(iop[0]), mop1s, T_GPR, &(iop[1]), mop2s, T_GPR);
+            if (mop3t == OP_I)
+                decode_imm(u, mop3s, &(iop[2]));
+            else if (mop3t == OP_CL) {
+                iop[2].type = UD_OP_REG;
+                iop[2].base = UD_R_CL;
+                iop[2].size = 8;
+            }
+        }
+        else if (mop2t == OP_P)
+            decode_modrm(u, &(iop[0]), mop1s, T_GPR, &(iop[1]), mop2s, T_MMX);
+        else if (mop2t == OP_V)
+            decode_modrm(u, &(iop[0]), mop1s, T_GPR, &(iop[1]), mop2s, T_XMM);
+        else if (mop2t == OP_S)
+            decode_modrm(u, &(iop[0]), mop1s, T_GPR, &(iop[1]), mop2s, T_SEG);
+        else {
+            decode_modrm(u, &(iop[0]), mop1s, T_GPR, NULL, 0, T_NONE);
+            if (mop2t == OP_CL) {
+                iop[1].type = UD_OP_REG;
+                iop[1].base = UD_R_CL;
+                iop[1].size = 8;
+            } else if (mop2t == OP_I1) {
+                iop[1].type = UD_OP_CONST;
+                u->operand[1].lval.udword = 1;
+            } else if (mop2t == OP_I) {
+                decode_imm(u, mop2s, &(iop[1]));
+            }
+        }
+        break;
+
+    /* G, E/PR[,I]/VR */
+    case OP_G :
+        if (mop2t == OP_M) {
+            if (MODRM_MOD(inp_peek(u)) == 3)
+                u->error= 1;
+            decode_modrm(u, &(iop[1]), mop2s, T_GPR, &(iop[0]), mop1s, T_GPR);
+        } else if (mop2t == OP_E) {
+            decode_modrm(u, &(iop[1]), mop2s, T_GPR, &(iop[0]), mop1s, T_GPR);
+            if (mop3t == OP_I)
+                decode_imm(u, mop3s, &(iop[2]));
+        } else if (mop2t == OP_PR) {
+            decode_modrm(u, &(iop[1]), mop2s, T_MMX, &(iop[0]), mop1s, T_GPR);
+            if (mop3t == OP_I)
+                decode_imm(u, mop3s, &(iop[2]));
+        } else if (mop2t == OP_VR) {
+            if (MODRM_MOD(inp_peek(u)) != 3)
+                u->error = 1;
+            decode_modrm(u, &(iop[1]), mop2s, T_XMM, &(iop[0]), mop1s, T_GPR);
+        } else if (mop2t == OP_W)
+            decode_modrm(u, &(iop[1]), mop2s, T_XMM, &(iop[0]), mop1s, T_GPR);
+        break;
+
+    /* AL..BH, I/O/DX */
+    case OP_AL : case OP_CL : case OP_DL : case OP_BL :
+    case OP_AH : case OP_CH : case OP_DH : case OP_BH :
+
+        iop[0].type = UD_OP_REG;
+        iop[0].base = UD_R_AL + (mop1t - OP_AL);
+        iop[0].size = 8;
+
+        if (mop2t == OP_I)
+            decode_imm(u, mop2s, &(iop[1]));
+        else if (mop2t == OP_DX) {
+            iop[1].type = UD_OP_REG;
+            iop[1].base = UD_R_DX;
+            iop[1].size = 16;
+        }
+        else if (mop2t == OP_O)
+            decode_o(u, mop2s, &(iop[1]));
+        break;
+
+    /* rAX[r8]..rDI[r15], I/rAX..rDI/O */
+    case OP_rAXr8 : case OP_rCXr9 : case OP_rDXr10 : case OP_rBXr11 :
+    case OP_rSPr12: case OP_rBPr13: case OP_rSIr14 : case OP_rDIr15 :
+    case OP_rAX : case OP_rCX : case OP_rDX : case OP_rBX :
+    case OP_rSP : case OP_rBP : case OP_rSI : case OP_rDI :
+
+        iop[0].type = UD_OP_REG;
+        iop[0].base = resolve_gpr64(u, mop1t);
+
+        if (mop2t == OP_I)
+            decode_imm(u, mop2s, &(iop[1]));
+        else if (mop2t >= OP_rAX && mop2t <= OP_rDI) {
+            iop[1].type = UD_OP_REG;
+            iop[1].base = resolve_gpr64(u, mop2t);
+        }
+        else if (mop2t == OP_O) {
+            decode_o(u, mop2s, &(iop[1]));  
+            iop[0].size = resolve_operand_size(u, mop2s);
+        }
+        break;
+
+    /* AL[r8b]..BH[r15b], I */
+    case OP_ALr8b : case OP_CLr9b : case OP_DLr10b : case OP_BLr11b :
+    case OP_AHr12b: case OP_CHr13b: case OP_DHr14b : case OP_BHr15b :
+    {
+        ud_type_t gpr = (mop1t - OP_ALr8b) + UD_R_AL + 
+                        (REX_B(u->pfx_rex) << 3);
+        if (UD_R_AH <= gpr && u->pfx_rex)
+            gpr = gpr + 4;
+        iop[0].type = UD_OP_REG;
+        iop[0].base = gpr;
+        if (mop2t == OP_I)
+            decode_imm(u, mop2s, &(iop[1]));
+        break;
+    }
+
+    /* eAX..eDX, DX/I */
+    case OP_eAX : case OP_eCX : case OP_eDX : case OP_eBX :
+    case OP_eSP : case OP_eBP : case OP_eSI : case OP_eDI :
+        iop[0].type = UD_OP_REG;
+        iop[0].base = resolve_gpr32(u, mop1t);
+        if (mop2t == OP_DX) {
+            iop[1].type = UD_OP_REG;
+            iop[1].base = UD_R_DX;
+            iop[1].size = 16;
+        } else if (mop2t == OP_I)
+            decode_imm(u, mop2s, &(iop[1]));
+        break;
+
+    /* ES..GS */
+    case OP_ES : case OP_CS : case OP_DS :
+    case OP_SS : case OP_FS : case OP_GS :
+
+        /* in 64-bit mode, only fs and gs are allowed */
+        if (u->dis_mode == 64)
+            if (mop1t != OP_FS && mop1t != OP_GS)
+                u->error= 1;
+        iop[0].type = UD_OP_REG;
+        iop[0].base = (mop1t - OP_ES) + UD_R_ES;
+        iop[0].size = 16;
+
+        break;
+
+    /* J */
+    case OP_J :
+        decode_imm(u, mop1s, &(iop[0]));        
+        iop[0].type = UD_OP_JIMM;
+        break ;
+
+    /* PR, I */
+    case OP_PR:
+        if (MODRM_MOD(inp_peek(u)) != 3)
+            u->error = 1;
+        decode_modrm(u, &(iop[0]), mop1s, T_MMX, NULL, 0, T_NONE);
+        if (mop2t == OP_I)
+            decode_imm(u, mop2s, &(iop[1]));
+        break; 
+
+    /* VR, I */
+    case OP_VR:
+        if (MODRM_MOD(inp_peek(u)) != 3)
+            u->error = 1;
+        decode_modrm(u, &(iop[0]), mop1s, T_XMM, NULL, 0, T_NONE);
+        if (mop2t == OP_I)
+            decode_imm(u, mop2s, &(iop[1]));
+        break; 
+
+    /* P, Q[,I]/W/E[,I],VR */
+    case OP_P :
+        if (mop2t == OP_Q) {
+            decode_modrm(u, &(iop[1]), mop2s, T_MMX, &(iop[0]), mop1s, T_MMX);
+            if (mop3t == OP_I)
+                decode_imm(u, mop3s, &(iop[2]));
+        } else if (mop2t == OP_W) {
+            decode_modrm(u, &(iop[1]), mop2s, T_XMM, &(iop[0]), mop1s, T_MMX);
+        } else if (mop2t == OP_VR) {
+            if (MODRM_MOD(inp_peek(u)) != 3)
+                u->error = 1;
+            decode_modrm(u, &(iop[1]), mop2s, T_XMM, &(iop[0]), mop1s, T_MMX);
+        } else if (mop2t == OP_E) {
+            decode_modrm(u, &(iop[1]), mop2s, T_GPR, &(iop[0]), mop1s, T_MMX);
+            if (mop3t == OP_I)
+                decode_imm(u, mop3s, &(iop[2]));
+        }
+        break;
+
+    /* R, C/D */
+    case OP_R :
+        if (mop2t == OP_C)
+            decode_modrm(u, &(iop[0]), mop1s, T_GPR, &(iop[1]), mop2s, T_CRG);
+        else if (mop2t == OP_D)
+            decode_modrm(u, &(iop[0]), mop1s, T_GPR, &(iop[1]), mop2s, T_DBG);
+        break;
+
+    /* C, R */
+    case OP_C :
+        decode_modrm(u, &(iop[1]), mop2s, T_GPR, &(iop[0]), mop1s, T_CRG);
+        break;
+
+    /* D, R */
+    case OP_D :
+        decode_modrm(u, &(iop[1]), mop2s, T_GPR, &(iop[0]), mop1s, T_DBG);
+        break;
+
+    /* Q, P */
+    case OP_Q :
+        decode_modrm(u, &(iop[0]), mop1s, T_MMX, &(iop[1]), mop2s, T_MMX);
+        break;
+
+    /* S, E */
+    case OP_S :
+        decode_modrm(u, &(iop[1]), mop2s, T_GPR, &(iop[0]), mop1s, T_SEG);
+        break;
+
+    /* W, V */
+    case OP_W :
+        decode_modrm(u, &(iop[0]), mop1s, T_XMM, &(iop[1]), mop2s, T_XMM);
+        break;
+
+    /* V, W[,I]/Q/M/E */
+    case OP_V :
+        if (mop2t == OP_W) {
+            /* special cases for movlps and movhps */
+            if (MODRM_MOD(inp_peek(u)) == 3) {
+                if (u->mnemonic == UD_Imovlps)
+                    u->mnemonic = UD_Imovhlps;
+                else
+                if (u->mnemonic == UD_Imovhps)
+                    u->mnemonic = UD_Imovlhps;
+            }
+            decode_modrm(u, &(iop[1]), mop2s, T_XMM, &(iop[0]), mop1s, T_XMM);
+            if (mop3t == OP_I)
+                decode_imm(u, mop3s, &(iop[2]));
+        } else if (mop2t == OP_Q)
+            decode_modrm(u, &(iop[1]), mop2s, T_MMX, &(iop[0]), mop1s, T_XMM);
+        else if (mop2t == OP_M) {
+            if (MODRM_MOD(inp_peek(u)) == 3)
+                u->error= 1;
+            decode_modrm(u, &(iop[1]), mop2s, T_GPR, &(iop[0]), mop1s, T_XMM);
+        } else if (mop2t == OP_E) {
+            decode_modrm(u, &(iop[1]), mop2s, T_GPR, &(iop[0]), mop1s, T_XMM);
+        } else if (mop2t == OP_PR) {
+            decode_modrm(u, &(iop[1]), mop2s, T_MMX, &(iop[0]), mop1s, T_XMM);
+        }
+        break;
+
+    /* DX, eAX/AL */
+    case OP_DX :
+        iop[0].type = UD_OP_REG;
+        iop[0].base = UD_R_DX;
+        iop[0].size = 16;
+
+        if (mop2t == OP_eAX) {
+            iop[1].type = UD_OP_REG;    
+            iop[1].base = resolve_gpr32(u, mop2t);
+        } else if (mop2t == OP_AL) {
+            iop[1].type = UD_OP_REG;
+            iop[1].base = UD_R_AL;
+            iop[1].size = 8;
+        }
+
+        break;
+
+    /* I, I/AL/eAX */
+    case OP_I :
+        decode_imm(u, mop1s, &(iop[0]));
+        if (mop2t == OP_I)
+            decode_imm(u, mop2s, &(iop[1]));
+        else if (mop2t == OP_AL) {
+            iop[1].type = UD_OP_REG;
+            iop[1].base = UD_R_AL;
+            iop[1].size = 8;
+        } else if (mop2t == OP_eAX) {
+            iop[1].type = UD_OP_REG;    
+            iop[1].base = resolve_gpr32(u, mop2t);
+        }
+        break;
+
+    /* O, AL/eAX */
+    case OP_O :
+        decode_o(u, mop1s, &(iop[0]));
+        iop[1].type = UD_OP_REG;
+        iop[1].size = resolve_operand_size(u, mop1s);
+        if (mop2t == OP_AL)
+            iop[1].base = UD_R_AL;
+        else if (mop2t == OP_eAX)
+            iop[1].base = resolve_gpr32(u, mop2t);
+        else if (mop2t == OP_rAX)
+            iop[1].base = resolve_gpr64(u, mop2t);      
+        break;
+
+    /* 3 */
+    case OP_I3 :
+        iop[0].type = UD_OP_CONST;
+        iop[0].lval.sbyte = 3;
+        break;
+
+    /* ST(n), ST(n) */
+    case OP_ST0 : case OP_ST1 : case OP_ST2 : case OP_ST3 :
+    case OP_ST4 : case OP_ST5 : case OP_ST6 : case OP_ST7 :
+
+        iop[0].type = UD_OP_REG;
+        iop[0].base = (mop1t-OP_ST0) + UD_R_ST0;
+        iop[0].size = 0;
+
+        if (mop2t >= OP_ST0 && mop2t <= OP_ST7) {
+            iop[1].type = UD_OP_REG;
+            iop[1].base = (mop2t-OP_ST0) + UD_R_ST0;
+            iop[1].size = 0;
+        }
+        break;
+
+    /* AX */
+    case OP_AX:
+        iop[0].type = UD_OP_REG;
+        iop[0].base = UD_R_AX;
+        iop[0].size = 16;
+        break;
+
+    /* none */
+    default :
+        iop[0].type = iop[1].type = iop[2].type = UD_NONE;
+  }
+
+  return 0;
+}
+
+/* -----------------------------------------------------------------------------
+ * clear_insn() - clear instruction state 
+ * -----------------------------------------------------------------------------
+ */
+static int clear_insn(register struct ud* u)
+{
+  u->error     = 0;
+  u->pfx_seg   = 0;
+  u->pfx_opr   = 0;
+  u->pfx_adr   = 0;
+  u->pfx_lock  = 0;
+  u->pfx_repne = 0;
+  u->pfx_rep   = 0;
+  u->pfx_repe  = 0;
+  u->pfx_rex   = 0;
+  u->pfx_insn  = 0;
+  u->mnemonic  = UD_Inone;
+  u->itab_entry = NULL;
+
+  memset( &u->operand[ 0 ], 0, sizeof( struct ud_operand ) );
+  memset( &u->operand[ 1 ], 0, sizeof( struct ud_operand ) );
+  memset( &u->operand[ 2 ], 0, sizeof( struct ud_operand ) );
+ 
+  return 0;
+}
+
+static int do_mode( struct ud* u )
+{
+  /* if in error state, bail out */
+  if ( u->error ) return -1; 
+
+  /* propagate prefix effects */
+  if ( u->dis_mode == 64 ) {  /* set 64bit-mode flags */
+
+    /* Check whether the instruction is valid in 64-bit mode */
+    if ( P_INV64( u->itab_entry->prefix ) ) {
+        u->error = 1;
+        return -1;
+    }
+
+    /* The effective rex prefix is the raw rex prefix masked by
+     * the rex mask hard-coded for this instruction in the opcode map.
+     */
+    u->pfx_rex = ( u->pfx_rex & 0x40 ) | 
+                 ( u->pfx_rex & REX_PFX_MASK( u->itab_entry->prefix ) ); 
+
+    /* whether this instruction has a default operand size of
+     * 64-bit, also hard-coded into the opcode map.
+     */
+    u->default64 = P_DEF64( u->itab_entry->prefix ); 
+    /* calculate effective operand size */
+    if ( REX_W( u->pfx_rex ) ) {
+        u->opr_mode = 64;
+    } else if ( u->pfx_opr ) {
+        u->opr_mode = 16;
+    } else {
+        /* unless the default opr size of instruction is 64,
+         * the effective operand size in the absence of rex.w
+         * prefix is 32.
+         */
+        u->opr_mode = ( u->default64 ) ? 64 : 32;
+    }
+
+    /* calculate effective address size */
+    u->adr_mode = (u->pfx_adr) ? 32 : 64;
+  } else if ( u->dis_mode == 32 ) { /* set 32bit-mode flags */
+    u->opr_mode = ( u->pfx_opr ) ? 16 : 32;
+    u->adr_mode = ( u->pfx_adr ) ? 16 : 32;
+  } else if ( u->dis_mode == 16 ) { /* set 16bit-mode flags */
+    u->opr_mode = ( u->pfx_opr ) ? 32 : 16;
+    u->adr_mode = ( u->pfx_adr ) ? 32 : 16;
+  }
+
+  /* These flags determine which operand to apply the operand size
+   * cast to.
+   */
+  u->c1 = ( P_C1( u->itab_entry->prefix ) ) ? 1 : 0;
+  u->c2 = ( P_C2( u->itab_entry->prefix ) ) ? 1 : 0;
+  u->c3 = ( P_C3( u->itab_entry->prefix ) ) ? 1 : 0;
+
+  /* set flags for implicit addressing */
+  u->implicit_addr = P_IMPADDR( u->itab_entry->prefix );
+
+  return 0;
+}
+
+static int gen_hex( struct ud *u )
+{
+  unsigned int i;
+  unsigned char *src_ptr = inp_sess( u );
+  char* src_hex;
+
+  /* bail out if in error state. */
+  if ( u->error ) return -1; 
+  /* output buffer pointer */
+  src_hex = ( char* ) u->insn_hexcode;
+  /* for each byte used to decode instruction */
+  for ( i = 0; i < u->inp_ctr; ++i, ++src_ptr) {
+    snprintf( src_hex, 3, "%02x", *src_ptr & 0xFF );
+    src_hex += 2;
+  }
+  return 0;
+}
+
+/* =============================================================================
+ * ud_decode() - Instruction decoder. Returns the number of bytes decoded.
+ * =============================================================================
+ */
+unsigned int ud_decode( struct ud* u )
+{
+  inp_start(u);
+
+  if ( clear_insn( u ) ) {
+    ; /* error */
+  } else if ( get_prefixes( u ) != 0 ) {
+    ; /* error */
+  } else if ( search_itab( u ) != 0 ) {
+    ; /* error */
+  } else if ( do_mode( u ) != 0 ) {
+    ; /* error */
+  } else if ( disasm_operands( u ) != 0 ) {
+    ; /* error */
+  } else if ( resolve_mnemonic( u ) != 0 ) {
+    ; /* error */
+  }
+
+  /* Handle decode error. */
+  if ( u->error ) {
+    /* clear out the decode data. */
+    clear_insn( u );
+    /* mark the sequence of bytes as invalid. */
+    u->itab_entry = & ie_invalid;
+    u->mnemonic = u->itab_entry->mnemonic;
+  } 
+
+  u->insn_offset = u->pc; /* set offset of instruction */
+  u->insn_fill = 0;   /* set translation buffer index to 0 */
+  u->pc += u->inp_ctr;    /* move program counter by bytes decoded */
+  gen_hex( u );       /* generate hex code */
+
+  /* return number of bytes disassembled. */
+  return u->inp_ctr;
+}
+
+/* vim:cindent
+ * vim:ts=4
+ * vim:sw=4
+ * vim:expandtab
+ */
diff -r 32034d1914a6 xen/kdb/x86/udis86-1.7/decode.h
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/xen/kdb/x86/udis86-1.7/decode.h	Wed Aug 29 14:39:57 2012 -0700
@@ -0,0 +1,273 @@
+#ifndef UD_DECODE_H
+#define UD_DECODE_H
+
+#define MAX_INSN_LENGTH 15
+
+/* register classes */
+#define T_NONE  0
+#define T_GPR   1
+#define T_MMX   2
+#define T_CRG   3
+#define T_DBG   4
+#define T_SEG   5
+#define T_XMM   6
+
+/* itab prefix bits */
+#define P_none          ( 0 )
+#define P_c1            ( 1 << 0 )
+#define P_C1(n)         ( ( n >> 0 ) & 1 )
+#define P_rexb          ( 1 << 1 )
+#define P_REXB(n)       ( ( n >> 1 ) & 1 )
+#define P_depM          ( 1 << 2 )
+#define P_DEPM(n)       ( ( n >> 2 ) & 1 )
+#define P_c3            ( 1 << 3 )
+#define P_C3(n)         ( ( n >> 3 ) & 1 )
+#define P_inv64         ( 1 << 4 )
+#define P_INV64(n)      ( ( n >> 4 ) & 1 )
+#define P_rexw          ( 1 << 5 )
+#define P_REXW(n)       ( ( n >> 5 ) & 1 )
+#define P_c2            ( 1 << 6 )
+#define P_C2(n)         ( ( n >> 6 ) & 1 )
+#define P_def64         ( 1 << 7 )
+#define P_DEF64(n)      ( ( n >> 7 ) & 1 )
+#define P_rexr          ( 1 << 8 )
+#define P_REXR(n)       ( ( n >> 8 ) & 1 )
+#define P_oso           ( 1 << 9 )
+#define P_OSO(n)        ( ( n >> 9 ) & 1 )
+#define P_aso           ( 1 << 10 )
+#define P_ASO(n)        ( ( n >> 10 ) & 1 )
+#define P_rexx          ( 1 << 11 )
+#define P_REXX(n)       ( ( n >> 11 ) & 1 )
+#define P_ImpAddr       ( 1 << 12 )
+#define P_IMPADDR(n)    ( ( n >> 12 ) & 1 )
+
+/* rex prefix bits */
+#define REX_W(r)        ( ( 0xF & ( r ) )  >> 3 )
+#define REX_R(r)        ( ( 0x7 & ( r ) )  >> 2 )
+#define REX_X(r)        ( ( 0x3 & ( r ) )  >> 1 )
+#define REX_B(r)        ( ( 0x1 & ( r ) )  >> 0 )
+#define REX_PFX_MASK(n) ( ( P_REXW(n) << 3 ) | \
+                          ( P_REXR(n) << 2 ) | \
+                          ( P_REXX(n) << 1 ) | \
+                          ( P_REXB(n) << 0 ) )
+
+/* scale-index-base bits */
+#define SIB_S(b)        ( ( b ) >> 6 )
+#define SIB_I(b)        ( ( ( b ) >> 3 ) & 7 )
+#define SIB_B(b)        ( ( b ) & 7 )
+
+/* modrm bits */
+#define MODRM_REG(b)    ( ( ( b ) >> 3 ) & 7 )
+#define MODRM_NNN(b)    ( ( ( b ) >> 3 ) & 7 )
+#define MODRM_MOD(b)    ( ( ( b ) >> 6 ) & 3 )
+#define MODRM_RM(b)     ( ( b ) & 7 )
+
+/* operand type constants -- order is important! */
+
+enum ud_operand_code {
+    OP_NONE,
+
+    OP_A,      OP_E,      OP_M,       OP_G,       
+    OP_I,
+
+    OP_AL,     OP_CL,     OP_DL,      OP_BL,
+    OP_AH,     OP_CH,     OP_DH,      OP_BH,
+
+    OP_ALr8b,  OP_CLr9b,  OP_DLr10b,  OP_BLr11b,
+    OP_AHr12b, OP_CHr13b, OP_DHr14b,  OP_BHr15b,
+
+    OP_AX,     OP_CX,     OP_DX,      OP_BX,
+    OP_SI,     OP_DI,     OP_SP,      OP_BP,
+
+    OP_rAX,    OP_rCX,    OP_rDX,     OP_rBX,  
+    OP_rSP,    OP_rBP,    OP_rSI,     OP_rDI,
+
+    OP_rAXr8,  OP_rCXr9,  OP_rDXr10,  OP_rBXr11,  
+    OP_rSPr12, OP_rBPr13, OP_rSIr14,  OP_rDIr15,
+
+    OP_eAX,    OP_eCX,    OP_eDX,     OP_eBX,
+    OP_eSP,    OP_eBP,    OP_eSI,     OP_eDI,
+
+    OP_ES,     OP_CS,     OP_SS,      OP_DS,  
+    OP_FS,     OP_GS,
+
+    OP_ST0,    OP_ST1,    OP_ST2,     OP_ST3,
+    OP_ST4,    OP_ST5,    OP_ST6,     OP_ST7,
+
+    OP_J,      OP_S,      OP_O,          
+    OP_I1,     OP_I3, 
+
+    OP_V,      OP_W,      OP_Q,       OP_P, 
+
+    OP_R,      OP_C,  OP_D,       OP_VR,  OP_PR
+};
+
+
+/* operand size constants */
+
+enum ud_operand_size {
+    SZ_NA  = 0,
+    SZ_Z   = 1,
+    SZ_V   = 2,
+    SZ_P   = 3,
+    SZ_WP  = 4,
+    SZ_DP  = 5,
+    SZ_MDQ = 6,
+    SZ_RDQ = 7,
+
+    /* The following values are used as-is and are thus
+     * hard-coded; changing them will break internals.
+     */
+    SZ_B   = 8,
+    SZ_W   = 16,
+    SZ_D   = 32,
+    SZ_Q   = 64,
+    SZ_T   = 80,
+};
+
+/* itab entry operand definitions */
+
+#define O_rSPr12  { OP_rSPr12,   SZ_NA    }
+#define O_BL      { OP_BL,       SZ_NA    }
+#define O_BH      { OP_BH,       SZ_NA    }
+#define O_BP      { OP_BP,       SZ_NA    }
+#define O_AHr12b  { OP_AHr12b,   SZ_NA    }
+#define O_BX      { OP_BX,       SZ_NA    }
+#define O_Jz      { OP_J,        SZ_Z     }
+#define O_Jv      { OP_J,        SZ_V     }
+#define O_Jb      { OP_J,        SZ_B     }
+#define O_rSIr14  { OP_rSIr14,   SZ_NA    }
+#define O_GS      { OP_GS,       SZ_NA    }
+#define O_D       { OP_D,        SZ_NA    }
+#define O_rBPr13  { OP_rBPr13,   SZ_NA    }
+#define O_Ob      { OP_O,        SZ_B     }
+#define O_P       { OP_P,        SZ_NA    }
+#define O_Ow      { OP_O,        SZ_W     }
+#define O_Ov      { OP_O,        SZ_V     }
+#define O_Gw      { OP_G,        SZ_W     }
+#define O_Gv      { OP_G,        SZ_V     }
+#define O_rDX     { OP_rDX,      SZ_NA    }
+#define O_Gx      { OP_G,        SZ_MDQ   }
+#define O_Gd      { OP_G,        SZ_D     }
+#define O_Gb      { OP_G,        SZ_B     }
+#define O_rBXr11  { OP_rBXr11,   SZ_NA    }
+#define O_rDI     { OP_rDI,      SZ_NA    }
+#define O_rSI     { OP_rSI,      SZ_NA    }
+#define O_ALr8b   { OP_ALr8b,    SZ_NA    }
+#define O_eDI     { OP_eDI,      SZ_NA    }
+#define O_Gz      { OP_G,        SZ_Z     }
+#define O_eDX     { OP_eDX,      SZ_NA    }
+#define O_DHr14b  { OP_DHr14b,   SZ_NA    }
+#define O_rSP     { OP_rSP,      SZ_NA    }
+#define O_PR      { OP_PR,       SZ_NA    }
+#define O_NONE    { OP_NONE,     SZ_NA    }
+#define O_rCX     { OP_rCX,      SZ_NA    }
+#define O_jWP     { OP_J,        SZ_WP    }
+#define O_rDXr10  { OP_rDXr10,   SZ_NA    }
+#define O_Md      { OP_M,        SZ_D     }
+#define O_C       { OP_C,        SZ_NA    }
+#define O_G       { OP_G,        SZ_NA    }
+#define O_Mb      { OP_M,        SZ_B     }
+#define O_Mt      { OP_M,        SZ_T     }
+#define O_S       { OP_S,        SZ_NA    }
+#define O_Mq      { OP_M,        SZ_Q     }
+#define O_W       { OP_W,        SZ_NA    }
+#define O_ES      { OP_ES,       SZ_NA    }
+#define O_rBX     { OP_rBX,      SZ_NA    }
+#define O_Ed      { OP_E,        SZ_D     }
+#define O_DLr10b  { OP_DLr10b,   SZ_NA    }
+#define O_Mw      { OP_M,        SZ_W     }
+#define O_Eb      { OP_E,        SZ_B     }
+#define O_Ex      { OP_E,        SZ_MDQ   }
+#define O_Ez      { OP_E,        SZ_Z     }
+#define O_Ew      { OP_E,        SZ_W     }
+#define O_Ev      { OP_E,        SZ_V     }
+#define O_Ep      { OP_E,        SZ_P     }
+#define O_FS      { OP_FS,       SZ_NA    }
+#define O_Ms      { OP_M,        SZ_W     }
+#define O_rAXr8   { OP_rAXr8,    SZ_NA    }
+#define O_eBP     { OP_eBP,      SZ_NA    }
+#define O_Isb     { OP_I,        SZ_SB    }
+#define O_eBX     { OP_eBX,      SZ_NA    }
+#define O_rCXr9   { OP_rCXr9,    SZ_NA    }
+#define O_jDP     { OP_J,        SZ_DP    }
+#define O_CH      { OP_CH,       SZ_NA    }
+#define O_CL      { OP_CL,       SZ_NA    }
+#define O_R       { OP_R,        SZ_RDQ   }
+#define O_V       { OP_V,        SZ_NA    }
+#define O_CS      { OP_CS,       SZ_NA    }
+#define O_CHr13b  { OP_CHr13b,   SZ_NA    }
+#define O_eCX     { OP_eCX,      SZ_NA    }
+#define O_eSP     { OP_eSP,      SZ_NA    }
+#define O_SS      { OP_SS,       SZ_NA    }
+#define O_SP      { OP_SP,       SZ_NA    }
+#define O_BLr11b  { OP_BLr11b,   SZ_NA    }
+#define O_SI      { OP_SI,       SZ_NA    }
+#define O_eSI     { OP_eSI,      SZ_NA    }
+#define O_DL      { OP_DL,       SZ_NA    }
+#define O_DH      { OP_DH,       SZ_NA    }
+#define O_DI      { OP_DI,       SZ_NA    }
+#define O_DX      { OP_DX,       SZ_NA    }
+#define O_rBP     { OP_rBP,      SZ_NA    }
+#define O_Gvw     { OP_G,        SZ_MDQ   }
+#define O_I1      { OP_I1,       SZ_NA    }
+#define O_I3      { OP_I3,       SZ_NA    }
+#define O_DS      { OP_DS,       SZ_NA    }
+#define O_ST4     { OP_ST4,      SZ_NA    }
+#define O_ST5     { OP_ST5,      SZ_NA    }
+#define O_ST6     { OP_ST6,      SZ_NA    }
+#define O_ST7     { OP_ST7,      SZ_NA    }
+#define O_ST0     { OP_ST0,      SZ_NA    }
+#define O_ST1     { OP_ST1,      SZ_NA    }
+#define O_ST2     { OP_ST2,      SZ_NA    }
+#define O_ST3     { OP_ST3,      SZ_NA    }
+#define O_E       { OP_E,        SZ_NA    }
+#define O_AH      { OP_AH,       SZ_NA    }
+#define O_M       { OP_M,        SZ_NA    }
+#define O_AL      { OP_AL,       SZ_NA    }
+#define O_CLr9b   { OP_CLr9b,    SZ_NA    }
+#define O_Q       { OP_Q,        SZ_NA    }
+#define O_eAX     { OP_eAX,      SZ_NA    }
+#define O_VR      { OP_VR,       SZ_NA    }
+#define O_AX      { OP_AX,       SZ_NA    }
+#define O_rAX     { OP_rAX,      SZ_NA    }
+#define O_Iz      { OP_I,        SZ_Z     }
+#define O_rDIr15  { OP_rDIr15,   SZ_NA    }
+#define O_Iw      { OP_I,        SZ_W     }
+#define O_Iv      { OP_I,        SZ_V     }
+#define O_Ap      { OP_A,        SZ_P     }
+#define O_CX      { OP_CX,       SZ_NA    }
+#define O_Ib      { OP_I,        SZ_B     }
+#define O_BHr15b  { OP_BHr15b,   SZ_NA    }
+
+
+/* A single operand of an entry in the instruction table. 
+ * (internal use only)
+ */
+struct ud_itab_entry_operand 
+{
+  enum ud_operand_code type;
+  enum ud_operand_size size;
+};
+
+
+/* A single entry in an instruction table. 
+ * (internal use only)
+ */
+struct ud_itab_entry 
+{
+  enum ud_mnemonic_code         mnemonic;
+  struct ud_itab_entry_operand  operand1;
+  struct ud_itab_entry_operand  operand2;
+  struct ud_itab_entry_operand  operand3;
+  uint32_t                      prefix;
+};
+
+#endif /* UD_DECODE_H */
+
+/* vim:cindent
+ * vim:expandtab
+ * vim:ts=4
+ * vim:sw=4
+ */
diff -r 32034d1914a6 xen/kdb/x86/udis86-1.7/extern.h
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/xen/kdb/x86/udis86-1.7/extern.h	Wed Aug 29 14:39:57 2012 -0700
@@ -0,0 +1,67 @@
+/* -----------------------------------------------------------------------------
+ * extern.h
+ *
+ * Copyright (c) 2004, 2005, 2006, Vivek Mohan <vivek@sig9.com>
+ * All rights reserved. See LICENSE
+ * -----------------------------------------------------------------------------
+ */
+#ifndef UD_EXTERN_H
+#define UD_EXTERN_H
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/* #include <stdio.h> */
+#include "types.h"
+
+/* ============================= PUBLIC API ================================= */
+
+extern void ud_init(struct ud*);
+
+extern void ud_set_mode(struct ud*, uint8_t);
+
+extern void ud_set_pc(struct ud*, uint64_t);
+
+extern void ud_set_input_hook(struct ud*, int (*)(struct ud*));
+
+extern void ud_set_input_buffer(struct ud*, uint8_t*, size_t);
+
+#ifndef __UD_STANDALONE__
+extern void ud_set_input_file(struct ud*, FILE*);
+#endif /* __UD_STANDALONE__ */
+
+extern void ud_set_vendor(struct ud*, unsigned);
+
+extern void ud_set_syntax(struct ud*, void (*)(struct ud*));
+
+extern void ud_input_skip(struct ud*, size_t);
+
+extern int ud_input_end(struct ud*);
+
+extern unsigned int ud_decode(struct ud*);
+
+extern unsigned int ud_disassemble(struct ud*);
+
+extern void ud_translate_intel(struct ud*);
+
+extern void ud_translate_att(struct ud*);
+
+extern char* ud_insn_asm(struct ud* u);
+
+extern uint8_t* ud_insn_ptr(struct ud* u);
+
+extern uint64_t ud_insn_off(struct ud*);
+
+extern char* ud_insn_hex(struct ud*);
+
+extern unsigned int ud_insn_len(struct ud* u);
+
+extern const char* ud_lookup_mnemonic(enum ud_mnemonic_code c);
+
+/* ========================================================================== */
+
+#ifdef __cplusplus
+}
+#endif
+#endif
diff -r 32034d1914a6 xen/kdb/x86/udis86-1.7/input.c
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/xen/kdb/x86/udis86-1.7/input.c	Wed Aug 29 14:39:57 2012 -0700
@@ -0,0 +1,226 @@
+/* -----------------------------------------------------------------------------
+ * input.c
+ *
+ * Copyright (c) 2004, 2005, 2006, Vivek Mohan <vivek@sig9.com>
+ * All rights reserved. See LICENSE
+ * -----------------------------------------------------------------------------
+ */
+#include "extern.h"
+#include "types.h"
+#include "input.h"
+
+/* -----------------------------------------------------------------------------
+ * inp_buff_hook() - Hook for buffered inputs.
+ * -----------------------------------------------------------------------------
+ */
+static int 
+inp_buff_hook(struct ud* u)
+{
+  if (u->inp_buff < u->inp_buff_end)
+	return *u->inp_buff++;
+  else	return -1;
+}
+
+#ifndef __UD_STANDALONE__
+/* -----------------------------------------------------------------------------
+ * inp_file_hook() - Hook for FILE inputs.
+ * -----------------------------------------------------------------------------
+ */
+static int 
+inp_file_hook(struct ud* u)
+{
+  return fgetc(u->inp_file);
+}
+#endif /* __UD_STANDALONE__*/
+
+/* =============================================================================
+ * ud_set_input_hook() - Sets the input hook.
+ * =============================================================================
+ */
+extern void 
+ud_set_input_hook(register struct ud* u, int (*hook)(struct ud*))
+{
+  u->inp_hook = hook;
+  inp_init(u);
+}
+
+/* =============================================================================
+ * ud_set_input_buffer() - Sets a buffer as the input source.
+ * =============================================================================
+ */
+extern void 
+ud_set_input_buffer(register struct ud* u, uint8_t* buf, size_t len)
+{
+  u->inp_hook = inp_buff_hook;
+  u->inp_buff = buf;
+  u->inp_buff_end = buf + len;
+  inp_init(u);
+}
+
+#ifndef __UD_STANDALONE__
+/* =============================================================================
+ * ud_set_input_file() - Sets a FILE as the input source.
+ * =============================================================================
+ */
+extern void 
+ud_set_input_file(register struct ud* u, FILE* f)
+{
+  u->inp_hook = inp_file_hook;
+  u->inp_file = f;
+  inp_init(u);
+}
+#endif /* __UD_STANDALONE__ */
+
+/* =============================================================================
+ * ud_input_skip() - Skip n input bytes.
+ * =============================================================================
+ */
+extern void 
+ud_input_skip(struct ud* u, size_t n)
+{
+  while (n--) {
+	u->inp_hook(u);
+  }
+}
+
+/* =============================================================================
+ * ud_input_end() - Test for end of input.
+ * =============================================================================
+ */
+extern int 
+ud_input_end(struct ud* u)
+{
+  return (u->inp_curr == u->inp_fill) && u->inp_end;
+}
+
+/* -----------------------------------------------------------------------------
+ * inp_next() - Loads and returns the next byte from input.
+ *
+ * inp_curr and inp_fill are 8-bit indices into the input cache; incrementing
+ * them wraps around past 255, so the 256-byte cache behaves as a circular
+ * buffer. The size is somewhat arbitrary, but makes the wrap-around free.
+ *
+ * A separate buffer, inp_sess, records the bytes consumed during a single
+ * disassembly session.
+ * -----------------------------------------------------------------------------
+ */
+extern uint8_t inp_next(struct ud* u) 
+{
+  int c = -1;
+  /* if the current index has not yet caught up with the
+   * fill point of the input cache.
+   */
+  if ( u->inp_curr != u->inp_fill ) {
+	c = u->inp_cache[ ++u->inp_curr ];
+  /* otherwise, if not at end-of-input, call the input hook for a byte */
+  } else if ( u->inp_end || ( c = u->inp_hook( u ) ) == -1 ) {
+	/* end-of-input: mark it as an error, since the decoder
+	 * expected one more byte.
+	 */
+	u->error = 1;
+	/* flag end of input */
+	u->inp_end = 1;
+	return 0;
+  } else {
+	/* increment pointers, we have a new byte.  */
+	u->inp_curr = ++u->inp_fill;
+	/* add the byte to the cache */
+	u->inp_cache[ u->inp_fill ] = c;
+  }
+  /* record bytes input per decode-session. */
+  u->inp_sess[ u->inp_ctr++ ] = c;
+  /* return byte */
+  return ( uint8_t ) c;
+}
+
+/* -----------------------------------------------------------------------------
+ * inp_back() - Move back a single byte in the stream.
+ * -----------------------------------------------------------------------------
+ */
+extern void
+inp_back(struct ud* u) 
+{
+  if ( u->inp_ctr > 0 ) {
+	--u->inp_curr;
+	--u->inp_ctr;
+  }
+}
+
+/* -----------------------------------------------------------------------------
+ * inp_peek() - Peek at the next byte of input without consuming it.
+ * -----------------------------------------------------------------------------
+ */
+extern uint8_t
+inp_peek(struct ud* u) 
+{
+  uint8_t r = inp_next(u);
+  if ( !u->error ) inp_back(u); /* Don't backup if there was an error */
+  return r;
+}
+
+/* -----------------------------------------------------------------------------
+ * inp_move() - Move ahead n input bytes.
+ * -----------------------------------------------------------------------------
+ */
+extern void
+inp_move(struct ud* u, size_t n) 
+{
+  while (n--)
+	inp_next(u);
+}
+
+/*------------------------------------------------------------------------------
+ *  inp_uintN() - return uintN from source.
+ *------------------------------------------------------------------------------
+ */
+extern uint8_t 
+inp_uint8(struct ud* u)
+{
+  return inp_next(u);
+}
+
+extern uint16_t 
+inp_uint16(struct ud* u)
+{
+  uint16_t r, ret;
+
+  ret = inp_next(u);
+  r = inp_next(u);
+  return ret | (r << 8);
+}
+
+extern uint32_t 
+inp_uint32(struct ud* u)
+{
+  uint32_t r, ret;
+
+  ret = inp_next(u);
+  r = inp_next(u);
+  ret = ret | (r << 8);
+  r = inp_next(u);
+  ret = ret | (r << 16);
+  r = inp_next(u);
+  return ret | (r << 24);
+}
+
+extern uint64_t 
+inp_uint64(struct ud* u)
+{
+  uint64_t r, ret;
+
+  ret = inp_next(u);
+  r = inp_next(u);
+  ret = ret | (r << 8);
+  r = inp_next(u);
+  ret = ret | (r << 16);
+  r = inp_next(u);
+  ret = ret | (r << 24);
+  r = inp_next(u);
+  ret = ret | (r << 32);
+  r = inp_next(u);
+  ret = ret | (r << 40);
+  r = inp_next(u);
+  ret = ret | (r << 48);
+  r = inp_next(u);
+  return ret | (r << 56);
+}
diff -r 32034d1914a6 xen/kdb/x86/udis86-1.7/input.h
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/xen/kdb/x86/udis86-1.7/input.h	Wed Aug 29 14:39:57 2012 -0700
@@ -0,0 +1,49 @@
+/* -----------------------------------------------------------------------------
+ * input.h
+ *
+ * Copyright (c) 2006, Vivek Mohan <vivek@sig9.com>
+ * All rights reserved. See LICENSE
+ * -----------------------------------------------------------------------------
+ */
+#ifndef UD_INPUT_H
+#define UD_INPUT_H
+
+#include "types.h"
+
+uint8_t inp_next(struct ud*);
+uint8_t inp_peek(struct ud*);
+uint8_t inp_uint8(struct ud*);
+uint16_t inp_uint16(struct ud*);
+uint32_t inp_uint32(struct ud*);
+uint64_t inp_uint64(struct ud*);
+void inp_move(struct ud*, size_t);
+void inp_back(struct ud*);
+
+/* inp_init() - Initializes the input system. */
+#define inp_init(u) \
+do { \
+  u->inp_curr = 0; \
+  u->inp_fill = 0; \
+  u->inp_ctr  = 0; \
+  u->inp_end  = 0; \
+} while (0)
+
+/* inp_start() - Should be called before each decode operation. */
+#define inp_start(u) u->inp_ctr = 0
+
+/* inp_reset() - Resets the current index to its position before the current
+ * instruction's disassembly was started.
+ */
+#define inp_reset(u) \
+do { \
+  u->inp_curr -= u->inp_ctr; \
+  u->inp_ctr = 0; \
+} while (0)
+
+/* inp_sess() - Returns a pointer to the current session buffer. */
+#define inp_sess(u) (u->inp_sess)
+
+/* inp_curr() - Returns the current input byte. */
+#define inp_curr(u) ((u)->inp_cache[(u)->inp_curr])
+
+#endif
diff -r 32034d1914a6 xen/kdb/x86/udis86-1.7/itab.c
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/xen/kdb/x86/udis86-1.7/itab.c	Wed Aug 29 14:39:57 2012 -0700
@@ -0,0 +1,3668 @@
+
+/* itab.c -- auto generated by opgen.py, do not edit. */
+
+#include "types.h"
+#include "decode.h"
+#include "itab.h"
+
+const char * ud_mnemonics_str[] = {
+  "3dnow",
+  "aaa",
+  "aad",
+  "aam",
+  "aas",
+  "adc",
+  "add",
+  "addpd",
+  "addps",
+  "addsd",
+  "addss",
+  "addsubpd",
+  "addsubps",
+  "and",
+  "andpd",
+  "andps",
+  "andnpd",
+  "andnps",
+  "arpl",
+  "movsxd",
+  "bound",
+  "bsf",
+  "bsr",
+  "bswap",
+  "bt",
+  "btc",
+  "btr",
+  "bts",
+  "call",
+  "cbw",
+  "cwde",
+  "cdqe",
+  "clc",
+  "cld",
+  "clflush",
+  "clgi",
+  "cli",
+  "clts",
+  "cmc",
+  "cmovo",
+  "cmovno",
+  "cmovb",
+  "cmovae",
+  "cmovz",
+  "cmovnz",
+  "cmovbe",
+  "cmova",
+  "cmovs",
+  "cmovns",
+  "cmovp",
+  "cmovnp",
+  "cmovl",
+  "cmovge",
+  "cmovle",
+  "cmovg",
+  "cmp",
+  "cmppd",
+  "cmpps",
+  "cmpsb",
+  "cmpsw",
+  "cmpsd",
+  "cmpsq",
+  "cmpss",
+  "cmpxchg",
+  "cmpxchg8b",
+  "comisd",
+  "comiss",
+  "cpuid",
+  "cvtdq2pd",
+  "cvtdq2ps",
+  "cvtpd2dq",
+  "cvtpd2pi",
+  "cvtpd2ps",
+  "cvtpi2ps",
+  "cvtpi2pd",
+  "cvtps2dq",
+  "cvtps2pi",
+  "cvtps2pd",
+  "cvtsd2si",
+  "cvtsd2ss",
+  "cvtsi2ss",
+  "cvtss2si",
+  "cvtss2sd",
+  "cvttpd2pi",
+  "cvttpd2dq",
+  "cvttps2dq",
+  "cvttps2pi",
+  "cvttsd2si",
+  "cvtsi2sd",
+  "cvttss2si",
+  "cwd",
+  "cdq",
+  "cqo",
+  "daa",
+  "das",
+  "dec",
+  "div",
+  "divpd",
+  "divps",
+  "divsd",
+  "divss",
+  "emms",
+  "enter",
+  "f2xm1",
+  "fabs",
+  "fadd",
+  "faddp",
+  "fbld",
+  "fbstp",
+  "fchs",
+  "fclex",
+  "fcmovb",
+  "fcmove",
+  "fcmovbe",
+  "fcmovu",
+  "fcmovnb",
+  "fcmovne",
+  "fcmovnbe",
+  "fcmovnu",
+  "fucomi",
+  "fcom",
+  "fcom2",
+  "fcomp3",
+  "fcomi",
+  "fucomip",
+  "fcomip",
+  "fcomp",
+  "fcomp5",
+  "fcompp",
+  "fcos",
+  "fdecstp",
+  "fdiv",
+  "fdivp",
+  "fdivr",
+  "fdivrp",
+  "femms",
+  "ffree",
+  "ffreep",
+  "ficom",
+  "ficomp",
+  "fild",
+  "fncstp",
+  "fninit",
+  "fiadd",
+  "fidivr",
+  "fidiv",
+  "fisub",
+  "fisubr",
+  "fist",
+  "fistp",
+  "fisttp",
+  "fld",
+  "fld1",
+  "fldl2t",
+  "fldl2e",
+  "fldlpi",
+  "fldlg2",
+  "fldln2",
+  "fldz",
+  "fldcw",
+  "fldenv",
+  "fmul",
+  "fmulp",
+  "fimul",
+  "fnop",
+  "fpatan",
+  "fprem",
+  "fprem1",
+  "fptan",
+  "frndint",
+  "frstor",
+  "fnsave",
+  "fscale",
+  "fsin",
+  "fsincos",
+  "fsqrt",
+  "fstp",
+  "fstp1",
+  "fstp8",
+  "fstp9",
+  "fst",
+  "fnstcw",
+  "fnstenv",
+  "fnstsw",
+  "fsub",
+  "fsubp",
+  "fsubr",
+  "fsubrp",
+  "ftst",
+  "fucom",
+  "fucomp",
+  "fucompp",
+  "fxam",
+  "fxch",
+  "fxch4",
+  "fxch7",
+  "fxrstor",
+  "fxsave",
+  "fpxtract",
+  "fyl2x",
+  "fyl2xp1",
+  "haddpd",
+  "haddps",
+  "hlt",
+  "hsubpd",
+  "hsubps",
+  "idiv",
+  "in",
+  "imul",
+  "inc",
+  "insb",
+  "insw",
+  "insd",
+  "int1",
+  "int3",
+  "int",
+  "into",
+  "invd",
+  "invlpg",
+  "invlpga",
+  "iretw",
+  "iretd",
+  "iretq",
+  "jo",
+  "jno",
+  "jb",
+  "jae",
+  "jz",
+  "jnz",
+  "jbe",
+  "ja",
+  "js",
+  "jns",
+  "jp",
+  "jnp",
+  "jl",
+  "jge",
+  "jle",
+  "jg",
+  "jcxz",
+  "jecxz",
+  "jrcxz",
+  "jmp",
+  "lahf",
+  "lar",
+  "lddqu",
+  "ldmxcsr",
+  "lds",
+  "lea",
+  "les",
+  "lfs",
+  "lgs",
+  "lidt",
+  "lss",
+  "leave",
+  "lfence",
+  "lgdt",
+  "lldt",
+  "lmsw",
+  "lock",
+  "lodsb",
+  "lodsw",
+  "lodsd",
+  "lodsq",
+  "loopnz",
+  "loope",
+  "loop",
+  "lsl",
+  "ltr",
+  "maskmovq",
+  "maxpd",
+  "maxps",
+  "maxsd",
+  "maxss",
+  "mfence",
+  "minpd",
+  "minps",
+  "minsd",
+  "minss",
+  "monitor",
+  "mov",
+  "movapd",
+  "movaps",
+  "movd",
+  "movddup",
+  "movdqa",
+  "movdqu",
+  "movdq2q",
+  "movhpd",
+  "movhps",
+  "movlhps",
+  "movlpd",
+  "movlps",
+  "movhlps",
+  "movmskpd",
+  "movmskps",
+  "movntdq",
+  "movnti",
+  "movntpd",
+  "movntps",
+  "movntq",
+  "movq",
+  "movqa",
+  "movq2dq",
+  "movsb",
+  "movsw",
+  "movsd",
+  "movsq",
+  "movsldup",
+  "movshdup",
+  "movss",
+  "movsx",
+  "movupd",
+  "movups",
+  "movzx",
+  "mul",
+  "mulpd",
+  "mulps",
+  "mulsd",
+  "mulss",
+  "mwait",
+  "neg",
+  "nop",
+  "not",
+  "or",
+  "orpd",
+  "orps",
+  "out",
+  "outsb",
+  "outsw",
+  "outsd",
+  "outsq",
+  "packsswb",
+  "packssdw",
+  "packuswb",
+  "paddb",
+  "paddw",
+  "paddq",
+  "paddsb",
+  "paddsw",
+  "paddusb",
+  "paddusw",
+  "pand",
+  "pandn",
+  "pause",
+  "pavgb",
+  "pavgw",
+  "pcmpeqb",
+  "pcmpeqw",
+  "pcmpeqd",
+  "pcmpgtb",
+  "pcmpgtw",
+  "pcmpgtd",
+  "pextrw",
+  "pinsrw",
+  "pmaddwd",
+  "pmaxsw",
+  "pmaxub",
+  "pminsw",
+  "pminub",
+  "pmovmskb",
+  "pmulhuw",
+  "pmulhw",
+  "pmullw",
+  "pmuludq",
+  "pop",
+  "popa",
+  "popad",
+  "popfw",
+  "popfd",
+  "popfq",
+  "por",
+  "prefetch",
+  "prefetchnta",
+  "prefetcht0",
+  "prefetcht1",
+  "prefetcht2",
+  "psadbw",
+  "pshufd",
+  "pshufhw",
+  "pshuflw",
+  "pshufw",
+  "pslldq",
+  "psllw",
+  "pslld",
+  "psllq",
+  "psraw",
+  "psrad",
+  "psrlw",
+  "psrld",
+  "psrlq",
+  "psrldq",
+  "psubb",
+  "psubw",
+  "psubd",
+  "psubq",
+  "psubsb",
+  "psubsw",
+  "psubusb",
+  "psubusw",
+  "punpckhbw",
+  "punpckhwd",
+  "punpckhdq",
+  "punpckhqdq",
+  "punpcklbw",
+  "punpcklwd",
+  "punpckldq",
+  "punpcklqdq",
+  "pi2fw",
+  "pi2fd",
+  "pf2iw",
+  "pf2id",
+  "pfnacc",
+  "pfpnacc",
+  "pfcmpge",
+  "pfmin",
+  "pfrcp",
+  "pfrsqrt",
+  "pfsub",
+  "pfadd",
+  "pfcmpgt",
+  "pfmax",
+  "pfrcpit1",
+  "pfrspit1",
+  "pfsubr",
+  "pfacc",
+  "pfcmpeq",
+  "pfmul",
+  "pfrcpit2",
+  "pmulhrw",
+  "pswapd",
+  "pavgusb",
+  "push",
+  "pusha",
+  "pushad",
+  "pushfw",
+  "pushfd",
+  "pushfq",
+  "pxor",
+  "rcl",
+  "rcr",
+  "rol",
+  "ror",
+  "rcpps",
+  "rcpss",
+  "rdmsr",
+  "rdpmc",
+  "rdtsc",
+  "rdtscp",
+  "repne",
+  "rep",
+  "ret",
+  "retf",
+  "rsm",
+  "rsqrtps",
+  "rsqrtss",
+  "sahf",
+  "sal",
+  "salc",
+  "sar",
+  "shl",
+  "shr",
+  "sbb",
+  "scasb",
+  "scasw",
+  "scasd",
+  "scasq",
+  "seto",
+  "setno",
+  "setb",
+  "setnb",
+  "setz",
+  "setnz",
+  "setbe",
+  "seta",
+  "sets",
+  "setns",
+  "setp",
+  "setnp",
+  "setl",
+  "setge",
+  "setle",
+  "setg",
+  "sfence",
+  "sgdt",
+  "shld",
+  "shrd",
+  "shufpd",
+  "shufps",
+  "sidt",
+  "sldt",
+  "smsw",
+  "sqrtps",
+  "sqrtpd",
+  "sqrtsd",
+  "sqrtss",
+  "stc",
+  "std",
+  "stgi",
+  "sti",
+  "skinit",
+  "stmxcsr",
+  "stosb",
+  "stosw",
+  "stosd",
+  "stosq",
+  "str",
+  "sub",
+  "subpd",
+  "subps",
+  "subsd",
+  "subss",
+  "swapgs",
+  "syscall",
+  "sysenter",
+  "sysexit",
+  "sysret",
+  "test",
+  "ucomisd",
+  "ucomiss",
+  "ud2",
+  "unpckhpd",
+  "unpckhps",
+  "unpcklps",
+  "unpcklpd",
+  "verr",
+  "verw",
+  "vmcall",
+  "vmclear",
+  "vmxon",
+  "vmptrld",
+  "vmptrst",
+  "vmresume",
+  "vmxoff",
+  "vmrun",
+  "vmmcall",
+  "vmload",
+  "vmsave",
+  "wait",
+  "wbinvd",
+  "wrmsr",
+  "xadd",
+  "xchg",
+  "xlatb",
+  "xor",
+  "xorpd",
+  "xorps",
+  "db",
+  "invalid",
+};
+
+
+
+static struct ud_itab_entry itab__0f[256] = {
+  /* 00 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_00__REG },
+  /* 01 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_01__REG },
+  /* 02 */  { UD_Ilar,         O_Gv,    O_Ew,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 03 */  { UD_Ilsl,         O_Gv,    O_Ew,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 04 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 05 */  { UD_Isyscall,     O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 06 */  { UD_Iclts,        O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 07 */  { UD_Isysret,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 08 */  { UD_Iinvd,        O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 09 */  { UD_Iwbinvd,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 0A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 0B */  { UD_Iud2,         O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 0C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 0D */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_0D__REG },
+  /* 0E */  { UD_Ifemms,       O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 0F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 10 */  { UD_Imovups,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 11 */  { UD_Imovups,      O_W,     O_V,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 12 */  { UD_Imovlps,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 13 */  { UD_Imovlps,      O_M,     O_V,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 14 */  { UD_Iunpcklps,    O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 15 */  { UD_Iunpckhps,    O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 16 */  { UD_Imovhps,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 17 */  { UD_Imovhps,      O_M,     O_V,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 18 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_18__REG },
+  /* 19 */  { UD_Inop,         O_M,     O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 1A */  { UD_Inop,         O_M,     O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 1B */  { UD_Inop,         O_M,     O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 1C */  { UD_Inop,         O_M,     O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 1D */  { UD_Inop,         O_M,     O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 1E */  { UD_Inop,         O_M,     O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 1F */  { UD_Inop,         O_M,     O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 20 */  { UD_Imov,         O_R,     O_C,     O_NONE,  P_rexr },
+  /* 21 */  { UD_Imov,         O_R,     O_D,     O_NONE,  P_rexr },
+  /* 22 */  { UD_Imov,         O_C,     O_R,     O_NONE,  P_rexr },
+  /* 23 */  { UD_Imov,         O_D,     O_R,     O_NONE,  P_rexr },
+  /* 24 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 25 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 26 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 27 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 28 */  { UD_Imovaps,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 29 */  { UD_Imovaps,      O_W,     O_V,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 2A */  { UD_Icvtpi2ps,    O_V,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 2B */  { UD_Imovntps,     O_M,     O_V,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 2C */  { UD_Icvttps2pi,   O_P,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 2D */  { UD_Icvtps2pi,    O_P,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 2E */  { UD_Iucomiss,     O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 2F */  { UD_Icomiss,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 30 */  { UD_Iwrmsr,       O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 31 */  { UD_Irdtsc,       O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 32 */  { UD_Irdmsr,       O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 33 */  { UD_Irdpmc,       O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 34 */  { UD_Isysenter,    O_NONE,  O_NONE,  O_NONE,  P_inv64|P_none },
+  /* 35 */  { UD_Isysexit,     O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 36 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 37 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 38 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 39 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 40 */  { UD_Icmovo,       O_Gv,    O_Ev,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 41 */  { UD_Icmovno,      O_Gv,    O_Ev,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 42 */  { UD_Icmovb,       O_Gv,    O_Ev,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 43 */  { UD_Icmovae,      O_Gv,    O_Ev,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 44 */  { UD_Icmovz,       O_Gv,    O_Ev,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 45 */  { UD_Icmovnz,      O_Gv,    O_Ev,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 46 */  { UD_Icmovbe,      O_Gv,    O_Ev,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 47 */  { UD_Icmova,       O_Gv,    O_Ev,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 48 */  { UD_Icmovs,       O_Gv,    O_Ev,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 49 */  { UD_Icmovns,      O_Gv,    O_Ev,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 4A */  { UD_Icmovp,       O_Gv,    O_Ev,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 4B */  { UD_Icmovnp,      O_Gv,    O_Ev,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 4C */  { UD_Icmovl,       O_Gv,    O_Ev,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 4D */  { UD_Icmovge,      O_Gv,    O_Ev,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 4E */  { UD_Icmovle,      O_Gv,    O_Ev,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 4F */  { UD_Icmovg,       O_Gv,    O_Ev,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 50 */  { UD_Imovmskps,    O_Gd,    O_VR,    O_NONE,  P_oso|P_rexr|P_rexb },
+  /* 51 */  { UD_Isqrtps,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 52 */  { UD_Irsqrtps,     O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 53 */  { UD_Ircpps,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 54 */  { UD_Iandps,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 55 */  { UD_Iandnps,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 56 */  { UD_Iorps,        O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 57 */  { UD_Ixorps,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 58 */  { UD_Iaddps,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 59 */  { UD_Imulps,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 5A */  { UD_Icvtps2pd,    O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 5B */  { UD_Icvtdq2ps,    O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 5C */  { UD_Isubps,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 5D */  { UD_Iminps,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 5E */  { UD_Idivps,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 5F */  { UD_Imaxps,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 60 */  { UD_Ipunpcklbw,   O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 61 */  { UD_Ipunpcklwd,   O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 62 */  { UD_Ipunpckldq,   O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 63 */  { UD_Ipacksswb,    O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 64 */  { UD_Ipcmpgtb,     O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 65 */  { UD_Ipcmpgtw,     O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 66 */  { UD_Ipcmpgtd,     O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 67 */  { UD_Ipackuswb,    O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 68 */  { UD_Ipunpckhbw,   O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 69 */  { UD_Ipunpckhwd,   O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 6A */  { UD_Ipunpckhdq,   O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 6B */  { UD_Ipackssdw,    O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 6C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 6D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 6E */  { UD_Imovd,        O_P,     O_Ex,    O_NONE,  P_c2|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 6F */  { UD_Imovq,        O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 70 */  { UD_Ipshufw,      O_P,     O_Q,     O_Ib,    P_aso|P_rexr|P_rexx|P_rexb },
+  /* 71 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_71__REG },
+  /* 72 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_72__REG },
+  /* 73 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_73__REG },
+  /* 74 */  { UD_Ipcmpeqb,     O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 75 */  { UD_Ipcmpeqw,     O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 76 */  { UD_Ipcmpeqd,     O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 77 */  { UD_Iemms,        O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 78 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 79 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 7A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 7B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 7C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 7D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 7E */  { UD_Imovd,        O_Ex,    O_P,     O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 7F */  { UD_Imovq,        O_Q,     O_P,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 80 */  { UD_Ijo,          O_Jz,    O_NONE,  O_NONE,  P_c1|P_def64|P_depM|P_oso },
+  /* 81 */  { UD_Ijno,         O_Jz,    O_NONE,  O_NONE,  P_c1|P_def64|P_depM|P_oso },
+  /* 82 */  { UD_Ijb,          O_Jz,    O_NONE,  O_NONE,  P_c1|P_def64|P_depM|P_oso },
+  /* 83 */  { UD_Ijae,         O_Jz,    O_NONE,  O_NONE,  P_c1|P_def64|P_depM|P_oso },
+  /* 84 */  { UD_Ijz,          O_Jz,    O_NONE,  O_NONE,  P_c1|P_def64|P_depM|P_oso },
+  /* 85 */  { UD_Ijnz,         O_Jz,    O_NONE,  O_NONE,  P_c1|P_def64|P_depM|P_oso },
+  /* 86 */  { UD_Ijbe,         O_Jz,    O_NONE,  O_NONE,  P_c1|P_def64|P_depM|P_oso },
+  /* 87 */  { UD_Ija,          O_Jz,    O_NONE,  O_NONE,  P_c1|P_def64|P_depM|P_oso },
+  /* 88 */  { UD_Ijs,          O_Jz,    O_NONE,  O_NONE,  P_c1|P_def64|P_depM|P_oso },
+  /* 89 */  { UD_Ijns,         O_Jz,    O_NONE,  O_NONE,  P_c1|P_def64|P_depM|P_oso },
+  /* 8A */  { UD_Ijp,          O_Jz,    O_NONE,  O_NONE,  P_c1|P_def64|P_depM|P_oso },
+  /* 8B */  { UD_Ijnp,         O_Jz,    O_NONE,  O_NONE,  P_c1|P_def64|P_depM|P_oso },
+  /* 8C */  { UD_Ijl,          O_Jz,    O_NONE,  O_NONE,  P_c1|P_def64|P_depM|P_oso },
+  /* 8D */  { UD_Ijge,         O_Jz,    O_NONE,  O_NONE,  P_c1|P_def64|P_depM|P_oso },
+  /* 8E */  { UD_Ijle,         O_Jz,    O_NONE,  O_NONE,  P_c1|P_def64|P_depM|P_oso },
+  /* 8F */  { UD_Ijg,          O_Jz,    O_NONE,  O_NONE,  P_c1|P_def64|P_depM|P_oso },
+  /* 90 */  { UD_Iseto,        O_Eb,    O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 91 */  { UD_Isetno,       O_Eb,    O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 92 */  { UD_Isetb,        O_Eb,    O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 93 */  { UD_Isetnb,       O_Eb,    O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 94 */  { UD_Isetz,        O_Eb,    O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 95 */  { UD_Isetnz,       O_Eb,    O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 96 */  { UD_Isetbe,       O_Eb,    O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 97 */  { UD_Iseta,        O_Eb,    O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 98 */  { UD_Isets,        O_Eb,    O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 99 */  { UD_Isetns,       O_Eb,    O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 9A */  { UD_Isetp,        O_Eb,    O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 9B */  { UD_Isetnp,       O_Eb,    O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 9C */  { UD_Isetl,        O_Eb,    O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 9D */  { UD_Isetge,       O_Eb,    O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 9E */  { UD_Isetle,       O_Eb,    O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 9F */  { UD_Isetg,        O_Eb,    O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* A0 */  { UD_Ipush,        O_FS,    O_NONE,  O_NONE,  P_none },
+  /* A1 */  { UD_Ipop,         O_FS,    O_NONE,  O_NONE,  P_none },
+  /* A2 */  { UD_Icpuid,       O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* A3 */  { UD_Ibt,          O_Ev,    O_Gv,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* A4 */  { UD_Ishld,        O_Ev,    O_Gv,    O_Ib,    P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* A5 */  { UD_Ishld,        O_Ev,    O_Gv,    O_CL,    P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* A6 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A7 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A8 */  { UD_Ipush,        O_GS,    O_NONE,  O_NONE,  P_none },
+  /* A9 */  { UD_Ipop,         O_GS,    O_NONE,  O_NONE,  P_none },
+  /* AA */  { UD_Irsm,         O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* AB */  { UD_Ibts,         O_Ev,    O_Gv,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* AC */  { UD_Ishrd,        O_Ev,    O_Gv,    O_Ib,    P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* AD */  { UD_Ishrd,        O_Ev,    O_Gv,    O_CL,    P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* AE */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_AE__REG },
+  /* AF */  { UD_Iimul,        O_Gv,    O_Ev,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* B0 */  { UD_Icmpxchg,     O_Eb,    O_Gb,    O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* B1 */  { UD_Icmpxchg,     O_Ev,    O_Gv,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* B2 */  { UD_Ilss,         O_Gz,    O_M,     O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* B3 */  { UD_Ibtr,         O_Ev,    O_Gv,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* B4 */  { UD_Ilfs,         O_Gz,    O_M,     O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* B5 */  { UD_Ilgs,         O_Gz,    O_M,     O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* B6 */  { UD_Imovzx,       O_Gv,    O_Eb,    O_NONE,  P_c2|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* B7 */  { UD_Imovzx,       O_Gv,    O_Ew,    O_NONE,  P_c2|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* B8 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B9 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* BA */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_BA__REG },
+  /* BB */  { UD_Ibtc,         O_Ev,    O_Gv,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* BC */  { UD_Ibsf,         O_Gv,    O_Ev,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* BD */  { UD_Ibsr,         O_Gv,    O_Ev,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* BE */  { UD_Imovsx,       O_Gv,    O_Eb,    O_NONE,  P_c2|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* BF */  { UD_Imovsx,       O_Gv,    O_Ew,    O_NONE,  P_c2|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* C0 */  { UD_Ixadd,        O_Eb,    O_Gb,    O_NONE,  P_aso|P_oso|P_rexr|P_rexx|P_rexb },
+  /* C1 */  { UD_Ixadd,        O_Ev,    O_Gv,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* C2 */  { UD_Icmpps,       O_V,     O_W,     O_Ib,    P_aso|P_rexr|P_rexx|P_rexb },
+  /* C3 */  { UD_Imovnti,      O_M,     O_Gvw,   O_NONE,  P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* C4 */  { UD_Ipinsrw,      O_P,     O_Ew,    O_Ib,    P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* C5 */  { UD_Ipextrw,      O_Gd,    O_PR,    O_Ib,    P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* C6 */  { UD_Ishufps,      O_V,     O_W,     O_Ib,    P_aso|P_rexr|P_rexx|P_rexb },
+  /* C7 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_C7__REG },
+  /* C8 */  { UD_Ibswap,       O_rAXr8, O_NONE,  O_NONE,  P_oso|P_rexw|P_rexb },
+  /* C9 */  { UD_Ibswap,       O_rCXr9, O_NONE,  O_NONE,  P_oso|P_rexw|P_rexb },
+  /* CA */  { UD_Ibswap,       O_rDXr10, O_NONE,  O_NONE, P_oso|P_rexw|P_rexb },
+  /* CB */  { UD_Ibswap,       O_rBXr11, O_NONE,  O_NONE, P_oso|P_rexw|P_rexb },
+  /* CC */  { UD_Ibswap,       O_rSPr12, O_NONE,  O_NONE, P_oso|P_rexw|P_rexb },
+  /* CD */  { UD_Ibswap,       O_rBPr13, O_NONE,  O_NONE, P_oso|P_rexw|P_rexb },
+  /* CE */  { UD_Ibswap,       O_rSIr14, O_NONE,  O_NONE, P_oso|P_rexw|P_rexb },
+  /* CF */  { UD_Ibswap,       O_rDIr15, O_NONE,  O_NONE, P_oso|P_rexw|P_rexb },
+  /* D0 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* D1 */  { UD_Ipsrlw,       O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* D2 */  { UD_Ipsrld,       O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* D3 */  { UD_Ipsrlq,       O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* D4 */  { UD_Ipaddq,       O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* D5 */  { UD_Ipmullw,      O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* D6 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* D7 */  { UD_Ipmovmskb,    O_Gd,    O_PR,    O_NONE,  P_oso|P_rexr|P_rexb },
+  /* D8 */  { UD_Ipsubusb,     O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* D9 */  { UD_Ipsubusw,     O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* DA */  { UD_Ipminub,      O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* DB */  { UD_Ipand,        O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* DC */  { UD_Ipaddusb,     O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* DD */  { UD_Ipaddusw,     O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* DE */  { UD_Ipmaxub,      O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* DF */  { UD_Ipandn,       O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* E0 */  { UD_Ipavgb,       O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* E1 */  { UD_Ipsraw,       O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* E2 */  { UD_Ipsrad,       O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* E3 */  { UD_Ipavgw,       O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* E4 */  { UD_Ipmulhuw,     O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* E5 */  { UD_Ipmulhw,      O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* E6 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* E7 */  { UD_Imovntq,      O_M,     O_P,     O_NONE,  P_none },
+  /* E8 */  { UD_Ipsubsb,      O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* E9 */  { UD_Ipsubsw,      O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* EA */  { UD_Ipminsw,      O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* EB */  { UD_Ipor,         O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* EC */  { UD_Ipaddsb,      O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* ED */  { UD_Ipaddsw,      O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* EE */  { UD_Ipmaxsw,      O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* EF */  { UD_Ipxor,        O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* F0 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* F1 */  { UD_Ipsllw,       O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* F2 */  { UD_Ipslld,       O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* F3 */  { UD_Ipsllq,       O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* F4 */  { UD_Ipmuludq,     O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* F5 */  { UD_Ipmaddwd,     O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* F6 */  { UD_Ipsadbw,      O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* F7 */  { UD_Imaskmovq,    O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* F8 */  { UD_Ipsubb,       O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* F9 */  { UD_Ipsubw,       O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* FA */  { UD_Ipsubd,       O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* FB */  { UD_Ipsubq,       O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* FC */  { UD_Ipaddb,       O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* FD */  { UD_Ipaddw,       O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* FE */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* FF */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__0f__op_00__reg[8] = {
+  /* 00 */  { UD_Isldt,        O_Ev,    O_NONE,  O_NONE,  P_aso|P_oso|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Istr,         O_Ev,    O_NONE,  O_NONE,  P_aso|P_oso|P_rexr|P_rexx|P_rexb },
+  /* 02 */  { UD_Illdt,        O_Ew,    O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 03 */  { UD_Iltr,         O_Ew,    O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 04 */  { UD_Iverr,        O_Ew,    O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 05 */  { UD_Iverw,        O_Ew,    O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 06 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 07 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__0f__op_01__reg[8] = {
+  /* 00 */  { UD_Igrp_mod,     O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_01__REG__OP_00__MOD },
+  /* 01 */  { UD_Igrp_mod,     O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_01__REG__OP_01__MOD },
+  /* 02 */  { UD_Igrp_mod,     O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_01__REG__OP_02__MOD },
+  /* 03 */  { UD_Igrp_mod,     O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_01__REG__OP_03__MOD },
+  /* 04 */  { UD_Igrp_mod,     O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_01__REG__OP_04__MOD },
+  /* 05 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 06 */  { UD_Igrp_mod,     O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_01__REG__OP_06__MOD },
+  /* 07 */  { UD_Igrp_mod,     O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_01__REG__OP_07__MOD },
+};
+
+static struct ud_itab_entry itab__0f__op_01__reg__op_00__mod[2] = {
+  /* 00 */  { UD_Isgdt,        O_M,     O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Igrp_rm,      O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_01__REG__OP_00__MOD__OP_01__RM },
+};
+
+static struct ud_itab_entry itab__0f__op_01__reg__op_00__mod__op_01__rm[8] = {
+  /* 00 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 01 */  { UD_Igrp_vendor,  O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_01__REG__OP_00__MOD__OP_01__RM__OP_01__VENDOR },
+  /* 02 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 03 */  { UD_Igrp_vendor,  O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_01__REG__OP_00__MOD__OP_01__RM__OP_03__VENDOR },
+  /* 04 */  { UD_Igrp_vendor,  O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_01__REG__OP_00__MOD__OP_01__RM__OP_04__VENDOR },
+  /* 05 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 06 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 07 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__0f__op_01__reg__op_00__mod__op_01__rm__op_01__vendor[2] = {
+  /* 00 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 01 */  { UD_Ivmcall,      O_NONE,  O_NONE,  O_NONE,  P_none },
+};
+
+static struct ud_itab_entry itab__0f__op_01__reg__op_00__mod__op_01__rm__op_03__vendor[2] = {
+  /* 00 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 01 */  { UD_Ivmresume,    O_NONE,  O_NONE,  O_NONE,  P_none },
+};
+
+static struct ud_itab_entry itab__0f__op_01__reg__op_00__mod__op_01__rm__op_04__vendor[2] = {
+  /* 00 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 01 */  { UD_Ivmxoff,      O_NONE,  O_NONE,  O_NONE,  P_none },
+};
+
+static struct ud_itab_entry itab__0f__op_01__reg__op_01__mod[2] = {
+  /* 00 */  { UD_Isidt,        O_M,     O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Igrp_rm,      O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_01__REG__OP_01__MOD__OP_01__RM },
+};
+
+static struct ud_itab_entry itab__0f__op_01__reg__op_01__mod__op_01__rm[8] = {
+  /* 00 */  { UD_Imonitor,     O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 01 */  { UD_Imwait,       O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 02 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 03 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 04 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 05 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 06 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 07 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__0f__op_01__reg__op_02__mod[2] = {
+  /* 00 */  { UD_Ilgdt,        O_M,     O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__0f__op_01__reg__op_03__mod[2] = {
+  /* 00 */  { UD_Ilidt,        O_M,     O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Igrp_rm,      O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_01__REG__OP_03__MOD__OP_01__RM },
+};
+
+static struct ud_itab_entry itab__0f__op_01__reg__op_03__mod__op_01__rm[8] = {
+  /* 00 */  { UD_Igrp_vendor,  O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_01__REG__OP_03__MOD__OP_01__RM__OP_00__VENDOR },
+  /* 01 */  { UD_Igrp_vendor,  O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_01__REG__OP_03__MOD__OP_01__RM__OP_01__VENDOR },
+  /* 02 */  { UD_Igrp_vendor,  O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_01__REG__OP_03__MOD__OP_01__RM__OP_02__VENDOR },
+  /* 03 */  { UD_Igrp_vendor,  O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_01__REG__OP_03__MOD__OP_01__RM__OP_03__VENDOR },
+  /* 04 */  { UD_Igrp_vendor,  O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_01__REG__OP_03__MOD__OP_01__RM__OP_04__VENDOR },
+  /* 05 */  { UD_Igrp_vendor,  O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_01__REG__OP_03__MOD__OP_01__RM__OP_05__VENDOR },
+  /* 06 */  { UD_Igrp_vendor,  O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_01__REG__OP_03__MOD__OP_01__RM__OP_06__VENDOR },
+  /* 07 */  { UD_Igrp_vendor,  O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_01__REG__OP_03__MOD__OP_01__RM__OP_07__VENDOR },
+};
+
+static struct ud_itab_entry itab__0f__op_01__reg__op_03__mod__op_01__rm__op_00__vendor[2] = {
+  /* 00 */  { UD_Ivmrun,       O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__0f__op_01__reg__op_03__mod__op_01__rm__op_01__vendor[2] = {
+  /* 00 */  { UD_Ivmmcall,     O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__0f__op_01__reg__op_03__mod__op_01__rm__op_02__vendor[2] = {
+  /* 00 */  { UD_Ivmload,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__0f__op_01__reg__op_03__mod__op_01__rm__op_03__vendor[2] = {
+  /* 00 */  { UD_Ivmsave,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__0f__op_01__reg__op_03__mod__op_01__rm__op_04__vendor[2] = {
+  /* 00 */  { UD_Istgi,        O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__0f__op_01__reg__op_03__mod__op_01__rm__op_05__vendor[2] = {
+  /* 00 */  { UD_Iclgi,        O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__0f__op_01__reg__op_03__mod__op_01__rm__op_06__vendor[2] = {
+  /* 00 */  { UD_Iskinit,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__0f__op_01__reg__op_03__mod__op_01__rm__op_07__vendor[2] = {
+  /* 00 */  { UD_Iinvlpga,     O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__0f__op_01__reg__op_04__mod[2] = {
+  /* 00 */  { UD_Ismsw,        O_M,     O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__0f__op_01__reg__op_06__mod[2] = {
+  /* 00 */  { UD_Ilmsw,        O_Ew,    O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__0f__op_01__reg__op_07__mod[2] = {
+  /* 00 */  { UD_Iinvlpg,      O_M,     O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Igrp_rm,      O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_01__REG__OP_07__MOD__OP_01__RM },
+};
+
+static struct ud_itab_entry itab__0f__op_01__reg__op_07__mod__op_01__rm[8] = {
+  /* 00 */  { UD_Iswapgs,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 01 */  { UD_Igrp_vendor,  O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_01__REG__OP_07__MOD__OP_01__RM__OP_01__VENDOR },
+  /* 02 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 03 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 04 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 05 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 06 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 07 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__0f__op_01__reg__op_07__mod__op_01__rm__op_01__vendor[2] = {
+  /* 00 */  { UD_Irdtscp,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__0f__op_0d__reg[8] = {
+  /* 00 */  { UD_Iprefetch,    O_M,     O_NONE,  O_NONE,  P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Iprefetch,    O_M,     O_NONE,  O_NONE,  P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 02 */  { UD_Iprefetch,    O_M,     O_NONE,  O_NONE,  P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 03 */  { UD_Iprefetch,    O_M,     O_NONE,  O_NONE,  P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 04 */  { UD_Iprefetch,    O_M,     O_NONE,  O_NONE,  P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 05 */  { UD_Iprefetch,    O_M,     O_NONE,  O_NONE,  P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 06 */  { UD_Iprefetch,    O_M,     O_NONE,  O_NONE,  P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 07 */  { UD_Iprefetch,    O_M,     O_NONE,  O_NONE,  P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+};
+
+static struct ud_itab_entry itab__0f__op_18__reg[8] = {
+  /* 00 */  { UD_Iprefetchnta, O_M,     O_NONE,  O_NONE,  P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Iprefetcht0,  O_M,     O_NONE,  O_NONE,  P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 02 */  { UD_Iprefetcht1,  O_M,     O_NONE,  O_NONE,  P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 03 */  { UD_Iprefetcht2,  O_M,     O_NONE,  O_NONE,  P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 04 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 05 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 06 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 07 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__0f__op_71__reg[8] = {
+  /* 00 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 02 */  { UD_Ipsrlw,       O_PR,    O_Ib,    O_NONE,  P_none },
+  /* 03 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 04 */  { UD_Ipsraw,       O_PR,    O_Ib,    O_NONE,  P_none },
+  /* 05 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 06 */  { UD_Ipsllw,       O_PR,    O_Ib,    O_NONE,  P_none },
+  /* 07 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__0f__op_72__reg[8] = {
+  /* 00 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 02 */  { UD_Ipsrld,       O_PR,    O_Ib,    O_NONE,  P_none },
+  /* 03 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 04 */  { UD_Ipsrad,       O_PR,    O_Ib,    O_NONE,  P_none },
+  /* 05 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 06 */  { UD_Ipslld,       O_PR,    O_Ib,    O_NONE,  P_none },
+  /* 07 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__0f__op_73__reg[8] = {
+  /* 00 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 02 */  { UD_Ipsrlq,       O_PR,    O_Ib,    O_NONE,  P_none },
+  /* 03 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 04 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 05 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 06 */  { UD_Ipsllq,       O_PR,    O_Ib,    O_NONE,  P_none },
+  /* 07 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__0f__op_ae__reg[8] = {
+  /* 00 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 02 */  { UD_Ildmxcsr,     O_Md,    O_NONE,  O_NONE,  P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 03 */  { UD_Istmxcsr,     O_Md,    O_NONE,  O_NONE,  P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 04 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 05 */  { UD_Igrp_mod,     O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_AE__REG__OP_05__MOD },
+  /* 06 */  { UD_Igrp_mod,     O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_AE__REG__OP_06__MOD },
+  /* 07 */  { UD_Igrp_mod,     O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_AE__REG__OP_07__MOD },
+};
+
+static struct ud_itab_entry itab__0f__op_ae__reg__op_05__mod[2] = {
+  /* 00 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 01 */  { UD_Igrp_rm,      O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_AE__REG__OP_05__MOD__OP_01__RM },
+};
+
+static struct ud_itab_entry itab__0f__op_ae__reg__op_05__mod__op_01__rm[8] = {
+  /* 00 */  { UD_Ilfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 01 */  { UD_Ilfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 02 */  { UD_Ilfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 03 */  { UD_Ilfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 04 */  { UD_Ilfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 05 */  { UD_Ilfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 06 */  { UD_Ilfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 07 */  { UD_Ilfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
+};
+
+static struct ud_itab_entry itab__0f__op_ae__reg__op_06__mod[2] = {
+  /* 00 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 01 */  { UD_Igrp_rm,      O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_AE__REG__OP_06__MOD__OP_01__RM },
+};
+
+static struct ud_itab_entry itab__0f__op_ae__reg__op_06__mod__op_01__rm[8] = {
+  /* 00 */  { UD_Imfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 01 */  { UD_Imfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 02 */  { UD_Imfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 03 */  { UD_Imfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 04 */  { UD_Imfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 05 */  { UD_Imfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 06 */  { UD_Imfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 07 */  { UD_Imfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
+};
+
+static struct ud_itab_entry itab__0f__op_ae__reg__op_07__mod[2] = {
+  /* 00 */  { UD_Iclflush,     O_M,     O_NONE,  O_NONE,  P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Igrp_rm,      O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_AE__REG__OP_07__MOD__OP_01__RM },
+};
+
+static struct ud_itab_entry itab__0f__op_ae__reg__op_07__mod__op_01__rm[8] = {
+  /* 00 */  { UD_Isfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 01 */  { UD_Isfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 02 */  { UD_Isfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 03 */  { UD_Isfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 04 */  { UD_Isfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 05 */  { UD_Isfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 06 */  { UD_Isfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 07 */  { UD_Isfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
+};
+
+static struct ud_itab_entry itab__0f__op_ba__reg[8] = {
+  /* 00 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 02 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 03 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 04 */  { UD_Ibt,          O_Ev,    O_Ib,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 05 */  { UD_Ibts,         O_Ev,    O_Ib,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 06 */  { UD_Ibtr,         O_Ev,    O_Ib,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 07 */  { UD_Ibtc,         O_Ev,    O_Ib,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+};
+
+static struct ud_itab_entry itab__0f__op_c7__reg[8] = {
+  /* 00 */  { UD_Igrp_vendor,  O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_C7__REG__OP_00__VENDOR },
+  /* 01 */  { UD_Icmpxchg8b,   O_M,     O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 02 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 03 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 04 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 05 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 06 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 07 */  { UD_Igrp_vendor,  O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_C7__REG__OP_07__VENDOR },
+};
+
+static struct ud_itab_entry itab__0f__op_c7__reg__op_00__vendor[2] = {
+  /* 00 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 01 */  { UD_Ivmptrld,     O_Mq,    O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+};
+
+static struct ud_itab_entry itab__0f__op_c7__reg__op_07__vendor[2] = {
+  /* 00 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 01 */  { UD_Ivmptrst,     O_Mq,    O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+};
+
+static struct ud_itab_entry itab__0f__op_d9__mod[2] = {
+  /* 00 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 01 */  { UD_Igrp_x87,     O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_D9__MOD__OP_01__X87 },
+};
+
+static struct ud_itab_entry itab__0f__op_d9__mod__op_01__x87[64] = {
+  /* 00 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 02 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 03 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 04 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 05 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 06 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 07 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 08 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 09 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 0A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 0B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 0C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 0D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 0E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 0F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 10 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 11 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 12 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 13 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 14 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 15 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 16 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 17 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 18 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 19 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 1A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 1B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 1C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 1D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 1E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 1F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 20 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 21 */  { UD_Ifabs,        O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 22 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 23 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 24 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 25 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 26 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 27 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 28 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 29 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 2A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 2B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 2C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 2D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 2E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 2F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 30 */  { UD_If2xm1,       O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 31 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 32 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 33 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 34 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 35 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 36 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 37 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 38 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 39 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__1byte[256] = {
+  /* 00 */  { UD_Iadd,         O_Eb,    O_Gb,    O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Iadd,         O_Ev,    O_Gv,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 02 */  { UD_Iadd,         O_Gb,    O_Eb,    O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 03 */  { UD_Iadd,         O_Gv,    O_Ev,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 04 */  { UD_Iadd,         O_AL,    O_Ib,    O_NONE,  P_none },
+  /* 05 */  { UD_Iadd,         O_rAX,   O_Iz,    O_NONE,  P_oso|P_rexw },
+  /* 06 */  { UD_Ipush,        O_ES,    O_NONE,  O_NONE,  P_inv64|P_none },
+  /* 07 */  { UD_Ipop,         O_ES,    O_NONE,  O_NONE,  P_inv64|P_none },
+  /* 08 */  { UD_Ior,          O_Eb,    O_Gb,    O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 09 */  { UD_Ior,          O_Ev,    O_Gv,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 0A */  { UD_Ior,          O_Gb,    O_Eb,    O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 0B */  { UD_Ior,          O_Gv,    O_Ev,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 0C */  { UD_Ior,          O_AL,    O_Ib,    O_NONE,  P_none },
+  /* 0D */  { UD_Ior,          O_rAX,   O_Iz,    O_NONE,  P_oso|P_rexw },
+  /* 0E */  { UD_Ipush,        O_CS,    O_NONE,  O_NONE,  P_inv64|P_none },
+  /* 0F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 10 */  { UD_Iadc,         O_Eb,    O_Gb,    O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 11 */  { UD_Iadc,         O_Ev,    O_Gv,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 12 */  { UD_Iadc,         O_Gb,    O_Eb,    O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 13 */  { UD_Iadc,         O_Gv,    O_Ev,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 14 */  { UD_Iadc,         O_AL,    O_Ib,    O_NONE,  P_none },
+  /* 15 */  { UD_Iadc,         O_rAX,   O_Iz,    O_NONE,  P_oso|P_rexw },
+  /* 16 */  { UD_Ipush,        O_SS,    O_NONE,  O_NONE,  P_inv64|P_none },
+  /* 17 */  { UD_Ipop,         O_SS,    O_NONE,  O_NONE,  P_inv64|P_none },
+  /* 18 */  { UD_Isbb,         O_Eb,    O_Gb,    O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 19 */  { UD_Isbb,         O_Ev,    O_Gv,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 1A */  { UD_Isbb,         O_Gb,    O_Eb,    O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 1B */  { UD_Isbb,         O_Gv,    O_Ev,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 1C */  { UD_Isbb,         O_AL,    O_Ib,    O_NONE,  P_none },
+  /* 1D */  { UD_Isbb,         O_rAX,   O_Iz,    O_NONE,  P_oso|P_rexw },
+  /* 1E */  { UD_Ipush,        O_DS,    O_NONE,  O_NONE,  P_inv64|P_none },
+  /* 1F */  { UD_Ipop,         O_DS,    O_NONE,  O_NONE,  P_inv64|P_none },
+  /* 20 */  { UD_Iand,         O_Eb,    O_Gb,    O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 21 */  { UD_Iand,         O_Ev,    O_Gv,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 22 */  { UD_Iand,         O_Gb,    O_Eb,    O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 23 */  { UD_Iand,         O_Gv,    O_Ev,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 24 */  { UD_Iand,         O_AL,    O_Ib,    O_NONE,  P_none },
+  /* 25 */  { UD_Iand,         O_rAX,   O_Iz,    O_NONE,  P_oso|P_rexw },
+  /* 26 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 27 */  { UD_Idaa,         O_NONE,  O_NONE,  O_NONE,  P_inv64|P_none },
+  /* 28 */  { UD_Isub,         O_Eb,    O_Gb,    O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 29 */  { UD_Isub,         O_Ev,    O_Gv,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 2A */  { UD_Isub,         O_Gb,    O_Eb,    O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 2B */  { UD_Isub,         O_Gv,    O_Ev,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 2C */  { UD_Isub,         O_AL,    O_Ib,    O_NONE,  P_none },
+  /* 2D */  { UD_Isub,         O_rAX,   O_Iz,    O_NONE,  P_oso|P_rexw },
+  /* 2E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 2F */  { UD_Idas,         O_NONE,  O_NONE,  O_NONE,  P_inv64|P_none },
+  /* 30 */  { UD_Ixor,         O_Eb,    O_Gb,    O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 31 */  { UD_Ixor,         O_Ev,    O_Gv,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 32 */  { UD_Ixor,         O_Gb,    O_Eb,    O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 33 */  { UD_Ixor,         O_Gv,    O_Ev,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 34 */  { UD_Ixor,         O_AL,    O_Ib,    O_NONE,  P_none },
+  /* 35 */  { UD_Ixor,         O_rAX,   O_Iz,    O_NONE,  P_oso|P_rexw },
+  /* 36 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 37 */  { UD_Iaaa,         O_NONE,  O_NONE,  O_NONE,  P_inv64|P_none },
+  /* 38 */  { UD_Icmp,         O_Eb,    O_Gb,    O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 39 */  { UD_Icmp,         O_Ev,    O_Gv,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 3A */  { UD_Icmp,         O_Gb,    O_Eb,    O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 3B */  { UD_Icmp,         O_Gv,    O_Ev,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 3C */  { UD_Icmp,         O_AL,    O_Ib,    O_NONE,  P_none },
+  /* 3D */  { UD_Icmp,         O_rAX,   O_Iz,    O_NONE,  P_oso|P_rexw },
+  /* 3E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3F */  { UD_Iaas,         O_NONE,  O_NONE,  O_NONE,  P_inv64|P_none },
+  /* 40 */  { UD_Iinc,         O_eAX,   O_NONE,  O_NONE,  P_oso },
+  /* 41 */  { UD_Iinc,         O_eCX,   O_NONE,  O_NONE,  P_oso },
+  /* 42 */  { UD_Iinc,         O_eDX,   O_NONE,  O_NONE,  P_oso },
+  /* 43 */  { UD_Iinc,         O_eBX,   O_NONE,  O_NONE,  P_oso },
+  /* 44 */  { UD_Iinc,         O_eSP,   O_NONE,  O_NONE,  P_oso },
+  /* 45 */  { UD_Iinc,         O_eBP,   O_NONE,  O_NONE,  P_oso },
+  /* 46 */  { UD_Iinc,         O_eSI,   O_NONE,  O_NONE,  P_oso },
+  /* 47 */  { UD_Iinc,         O_eDI,   O_NONE,  O_NONE,  P_oso },
+  /* 48 */  { UD_Idec,         O_eAX,   O_NONE,  O_NONE,  P_oso },
+  /* 49 */  { UD_Idec,         O_eCX,   O_NONE,  O_NONE,  P_oso },
+  /* 4A */  { UD_Idec,         O_eDX,   O_NONE,  O_NONE,  P_oso },
+  /* 4B */  { UD_Idec,         O_eBX,   O_NONE,  O_NONE,  P_oso },
+  /* 4C */  { UD_Idec,         O_eSP,   O_NONE,  O_NONE,  P_oso },
+  /* 4D */  { UD_Idec,         O_eBP,   O_NONE,  O_NONE,  P_oso },
+  /* 4E */  { UD_Idec,         O_eSI,   O_NONE,  O_NONE,  P_oso },
+  /* 4F */  { UD_Idec,         O_eDI,   O_NONE,  O_NONE,  P_oso },
+  /* 50 */  { UD_Ipush,        O_rAXr8, O_NONE,  O_NONE,  P_def64|P_depM|P_oso|P_rexb },
+  /* 51 */  { UD_Ipush,        O_rCXr9, O_NONE,  O_NONE,  P_def64|P_depM|P_oso|P_rexb },
+  /* 52 */  { UD_Ipush,        O_rDXr10, O_NONE,  O_NONE, P_def64|P_depM|P_oso|P_rexb },
+  /* 53 */  { UD_Ipush,        O_rBXr11, O_NONE,  O_NONE, P_def64|P_depM|P_oso|P_rexb },
+  /* 54 */  { UD_Ipush,        O_rSPr12, O_NONE,  O_NONE, P_def64|P_depM|P_oso|P_rexb },
+  /* 55 */  { UD_Ipush,        O_rBPr13, O_NONE,  O_NONE, P_def64|P_depM|P_oso|P_rexb },
+  /* 56 */  { UD_Ipush,        O_rSIr14, O_NONE,  O_NONE, P_def64|P_depM|P_oso|P_rexb },
+  /* 57 */  { UD_Ipush,        O_rDIr15, O_NONE,  O_NONE, P_def64|P_depM|P_oso|P_rexb },
+  /* 58 */  { UD_Ipop,         O_rAXr8, O_NONE,  O_NONE,  P_def64|P_depM|P_oso|P_rexb },
+  /* 59 */  { UD_Ipop,         O_rCXr9, O_NONE,  O_NONE,  P_def64|P_depM|P_oso|P_rexb },
+  /* 5A */  { UD_Ipop,         O_rDXr10, O_NONE,  O_NONE, P_def64|P_depM|P_oso|P_rexb },
+  /* 5B */  { UD_Ipop,         O_rBXr11, O_NONE,  O_NONE, P_def64|P_depM|P_oso|P_rexb },
+  /* 5C */  { UD_Ipop,         O_rSPr12, O_NONE,  O_NONE, P_def64|P_depM|P_oso|P_rexb },
+  /* 5D */  { UD_Ipop,         O_rBPr13, O_NONE,  O_NONE, P_def64|P_depM|P_oso|P_rexb },
+  /* 5E */  { UD_Ipop,         O_rSIr14, O_NONE,  O_NONE, P_def64|P_depM|P_oso|P_rexb },
+  /* 5F */  { UD_Ipop,         O_rDIr15, O_NONE,  O_NONE, P_def64|P_depM|P_oso|P_rexb },
+  /* 60 */  { UD_Igrp_osize,   O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_60__OSIZE },
+  /* 61 */  { UD_Igrp_osize,   O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_61__OSIZE },
+  /* 62 */  { UD_Ibound,       O_Gv,    O_M,     O_NONE,  P_inv64|P_aso|P_oso },
+  /* 63 */  { UD_Igrp_mode,    O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_63__MODE },
+  /* 64 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 65 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 66 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 67 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 68 */  { UD_Ipush,        O_Iz,    O_NONE,  O_NONE,  P_c1|P_oso },
+  /* 69 */  { UD_Iimul,        O_Gv,    O_Ev,    O_Iz,    P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 6A */  { UD_Ipush,        O_Ib,    O_NONE,  O_NONE,  P_none },
+  /* 6B */  { UD_Iimul,        O_Gv,    O_Ev,    O_Ib,    P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 6C */  { UD_Iinsb,        O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 6D */  { UD_Igrp_osize,   O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_6D__OSIZE },
+  /* 6E */  { UD_Ioutsb,       O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 6F */  { UD_Igrp_osize,   O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_6F__OSIZE },
+  /* 70 */  { UD_Ijo,          O_Jb,    O_NONE,  O_NONE,  P_none },
+  /* 71 */  { UD_Ijno,         O_Jb,    O_NONE,  O_NONE,  P_none },
+  /* 72 */  { UD_Ijb,          O_Jb,    O_NONE,  O_NONE,  P_none },
+  /* 73 */  { UD_Ijae,         O_Jb,    O_NONE,  O_NONE,  P_none },
+  /* 74 */  { UD_Ijz,          O_Jb,    O_NONE,  O_NONE,  P_none },
+  /* 75 */  { UD_Ijnz,         O_Jb,    O_NONE,  O_NONE,  P_none },
+  /* 76 */  { UD_Ijbe,         O_Jb,    O_NONE,  O_NONE,  P_none },
+  /* 77 */  { UD_Ija,          O_Jb,    O_NONE,  O_NONE,  P_none },
+  /* 78 */  { UD_Ijs,          O_Jb,    O_NONE,  O_NONE,  P_none },
+  /* 79 */  { UD_Ijns,         O_Jb,    O_NONE,  O_NONE,  P_none },
+  /* 7A */  { UD_Ijp,          O_Jb,    O_NONE,  O_NONE,  P_none },
+  /* 7B */  { UD_Ijnp,         O_Jb,    O_NONE,  O_NONE,  P_none },
+  /* 7C */  { UD_Ijl,          O_Jb,    O_NONE,  O_NONE,  P_none },
+  /* 7D */  { UD_Ijge,         O_Jb,    O_NONE,  O_NONE,  P_none },
+  /* 7E */  { UD_Ijle,         O_Jb,    O_NONE,  O_NONE,  P_none },
+  /* 7F */  { UD_Ijg,          O_Jb,    O_NONE,  O_NONE,  P_none },
+  /* 80 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_80__REG },
+  /* 81 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_81__REG },
+  /* 82 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_82__REG },
+  /* 83 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_83__REG },
+  /* 84 */  { UD_Itest,        O_Eb,    O_Gb,    O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 85 */  { UD_Itest,        O_Ev,    O_Gv,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 86 */  { UD_Ixchg,        O_Eb,    O_Gb,    O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 87 */  { UD_Ixchg,        O_Ev,    O_Gv,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 88 */  { UD_Imov,         O_Eb,    O_Gb,    O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 89 */  { UD_Imov,         O_Ev,    O_Gv,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 8A */  { UD_Imov,         O_Gb,    O_Eb,    O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 8B */  { UD_Imov,         O_Gv,    O_Ev,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 8C */  { UD_Imov,         O_Ev,    O_S,     O_NONE,  P_aso|P_oso|P_rexr|P_rexx|P_rexb },
+  /* 8D */  { UD_Ilea,         O_Gv,    O_M,     O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 8E */  { UD_Imov,         O_S,     O_Ev,    O_NONE,  P_aso|P_oso|P_rexr|P_rexx|P_rexb },
+  /* 8F */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_8F__REG },
+  /* 90 */  { UD_Ixchg,        O_rAXr8, O_rAX,   O_NONE,  P_oso|P_rexw|P_rexb },
+  /* 91 */  { UD_Ixchg,        O_rCXr9, O_rAX,   O_NONE,  P_oso|P_rexw|P_rexb },
+  /* 92 */  { UD_Ixchg,        O_rDXr10, O_rAX,   O_NONE, P_oso|P_rexw|P_rexb },
+  /* 93 */  { UD_Ixchg,        O_rBXr11, O_rAX,   O_NONE, P_oso|P_rexw|P_rexb },
+  /* 94 */  { UD_Ixchg,        O_rSPr12, O_rAX,   O_NONE, P_oso|P_rexw|P_rexb },
+  /* 95 */  { UD_Ixchg,        O_rBPr13, O_rAX,   O_NONE, P_oso|P_rexw|P_rexb },
+  /* 96 */  { UD_Ixchg,        O_rSIr14, O_rAX,   O_NONE, P_oso|P_rexw|P_rexb },
+  /* 97 */  { UD_Ixchg,        O_rDIr15, O_rAX,   O_NONE, P_oso|P_rexw|P_rexb },
+  /* 98 */  { UD_Igrp_osize,   O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_98__OSIZE },
+  /* 99 */  { UD_Igrp_osize,   O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_99__OSIZE },
+  /* 9A */  { UD_Icall,        O_Ap,    O_NONE,  O_NONE,  P_inv64|P_oso },
+  /* 9B */  { UD_Iwait,        O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 9C */  { UD_Igrp_mode,    O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_9C__MODE },
+  /* 9D */  { UD_Igrp_mode,    O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_9D__MODE },
+  /* 9E */  { UD_Isahf,        O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 9F */  { UD_Ilahf,        O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* A0 */  { UD_Imov,         O_AL,    O_Ob,    O_NONE,  P_none },
+  /* A1 */  { UD_Imov,         O_rAX,   O_Ov,    O_NONE,  P_aso|P_oso|P_rexw },
+  /* A2 */  { UD_Imov,         O_Ob,    O_AL,    O_NONE,  P_none },
+  /* A3 */  { UD_Imov,         O_Ov,    O_rAX,   O_NONE,  P_aso|P_oso|P_rexw },
+  /* A4 */  { UD_Imovsb,       O_NONE,  O_NONE,  O_NONE,  P_ImpAddr|P_none },
+  /* A5 */  { UD_Igrp_osize,   O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_A5__OSIZE },
+  /* A6 */  { UD_Icmpsb,       O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* A7 */  { UD_Igrp_osize,   O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_A7__OSIZE },
+  /* A8 */  { UD_Itest,        O_AL,    O_Ib,    O_NONE,  P_none },
+  /* A9 */  { UD_Itest,        O_rAX,   O_Iz,    O_NONE,  P_oso|P_rexw },
+  /* AA */  { UD_Istosb,       O_NONE,  O_NONE,  O_NONE,  P_ImpAddr|P_none },
+  /* AB */  { UD_Igrp_osize,   O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_AB__OSIZE },
+  /* AC */  { UD_Ilodsb,       O_NONE,  O_NONE,  O_NONE,  P_ImpAddr|P_none },
+  /* AD */  { UD_Igrp_osize,   O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_AD__OSIZE },
+  /* AE */  { UD_Iscasb,       O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* AF */  { UD_Igrp_osize,   O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_AF__OSIZE },
+  /* B0 */  { UD_Imov,         O_ALr8b, O_Ib,    O_NONE,  P_rexb },
+  /* B1 */  { UD_Imov,         O_CLr9b, O_Ib,    O_NONE,  P_rexb },
+  /* B2 */  { UD_Imov,         O_DLr10b, O_Ib,    O_NONE, P_rexb },
+  /* B3 */  { UD_Imov,         O_BLr11b, O_Ib,    O_NONE, P_rexb },
+  /* B4 */  { UD_Imov,         O_AHr12b, O_Ib,    O_NONE, P_rexb },
+  /* B5 */  { UD_Imov,         O_CHr13b, O_Ib,    O_NONE, P_rexb },
+  /* B6 */  { UD_Imov,         O_DHr14b, O_Ib,    O_NONE, P_rexb },
+  /* B7 */  { UD_Imov,         O_BHr15b, O_Ib,    O_NONE, P_rexb },
+  /* B8 */  { UD_Imov,         O_rAXr8, O_Iv,    O_NONE,  P_oso|P_rexw|P_rexb },
+  /* B9 */  { UD_Imov,         O_rCXr9, O_Iv,    O_NONE,  P_oso|P_rexw|P_rexb },
+  /* BA */  { UD_Imov,         O_rDXr10, O_Iv,    O_NONE, P_oso|P_rexw|P_rexb },
+  /* BB */  { UD_Imov,         O_rBXr11, O_Iv,    O_NONE, P_oso|P_rexw|P_rexb },
+  /* BC */  { UD_Imov,         O_rSPr12, O_Iv,    O_NONE, P_oso|P_rexw|P_rexb },
+  /* BD */  { UD_Imov,         O_rBPr13, O_Iv,    O_NONE, P_oso|P_rexw|P_rexb },
+  /* BE */  { UD_Imov,         O_rSIr14, O_Iv,    O_NONE, P_oso|P_rexw|P_rexb },
+  /* BF */  { UD_Imov,         O_rDIr15, O_Iv,    O_NONE, P_oso|P_rexw|P_rexb },
+  /* C0 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_C0__REG },
+  /* C1 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_C1__REG },
+  /* C2 */  { UD_Iret,         O_Iw,    O_NONE,  O_NONE,  P_none },
+  /* C3 */  { UD_Iret,         O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* C4 */  { UD_Iles,         O_Gv,    O_M,     O_NONE,  P_inv64|P_aso|P_oso },
+  /* C5 */  { UD_Ilds,         O_Gv,    O_M,     O_NONE,  P_inv64|P_aso|P_oso },
+  /* C6 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_C6__REG },
+  /* C7 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_C7__REG },
+  /* C8 */  { UD_Ienter,       O_Iw,    O_Ib,    O_NONE,  P_def64|P_depM|P_none },
+  /* C9 */  { UD_Ileave,       O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* CA */  { UD_Iretf,        O_Iw,    O_NONE,  O_NONE,  P_none },
+  /* CB */  { UD_Iretf,        O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* CC */  { UD_Iint3,        O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* CD */  { UD_Iint,         O_Ib,    O_NONE,  O_NONE,  P_none },
+  /* CE */  { UD_Iinto,        O_NONE,  O_NONE,  O_NONE,  P_inv64|P_none },
+  /* CF */  { UD_Igrp_osize,   O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_CF__OSIZE },
+  /* D0 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_D0__REG },
+  /* D1 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_D1__REG },
+  /* D2 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_D2__REG },
+  /* D3 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_D3__REG },
+  /* D4 */  { UD_Iaam,         O_Ib,    O_NONE,  O_NONE,  P_inv64|P_none },
+  /* D5 */  { UD_Iaad,         O_Ib,    O_NONE,  O_NONE,  P_inv64|P_none },
+  /* D6 */  { UD_Isalc,        O_NONE,  O_NONE,  O_NONE,  P_inv64|P_none },
+  /* D7 */  { UD_Ixlatb,       O_NONE,  O_NONE,  O_NONE,  P_rexw },
+  /* D8 */  { UD_Igrp_mod,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_D8__MOD },
+  /* D9 */  { UD_Igrp_mod,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_D9__MOD },
+  /* DA */  { UD_Igrp_mod,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_DA__MOD },
+  /* DB */  { UD_Igrp_mod,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_DB__MOD },
+  /* DC */  { UD_Igrp_mod,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_DC__MOD },
+  /* DD */  { UD_Igrp_mod,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_DD__MOD },
+  /* DE */  { UD_Igrp_mod,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_DE__MOD },
+  /* DF */  { UD_Igrp_mod,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_DF__MOD },
+  /* E0 */  { UD_Iloopnz,      O_Jb,    O_NONE,  O_NONE,  P_none },
+  /* E1 */  { UD_Iloope,       O_Jb,    O_NONE,  O_NONE,  P_none },
+  /* E2 */  { UD_Iloop,        O_Jb,    O_NONE,  O_NONE,  P_none },
+  /* E3 */  { UD_Igrp_asize,   O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_E3__ASIZE },
+  /* E4 */  { UD_Iin,          O_AL,    O_Ib,    O_NONE,  P_none },
+  /* E5 */  { UD_Iin,          O_eAX,   O_Ib,    O_NONE,  P_oso },
+  /* E6 */  { UD_Iout,         O_Ib,    O_AL,    O_NONE,  P_none },
+  /* E7 */  { UD_Iout,         O_Ib,    O_eAX,   O_NONE,  P_oso },
+  /* E8 */  { UD_Icall,        O_Jz,    O_NONE,  O_NONE,  P_def64|P_oso },
+  /* E9 */  { UD_Ijmp,         O_Jz,    O_NONE,  O_NONE,  P_def64|P_depM|P_oso },
+  /* EA */  { UD_Ijmp,         O_Ap,    O_NONE,  O_NONE,  P_inv64|P_none },
+  /* EB */  { UD_Ijmp,         O_Jb,    O_NONE,  O_NONE,  P_none },
+  /* EC */  { UD_Iin,          O_AL,    O_DX,    O_NONE,  P_none },
+  /* ED */  { UD_Iin,          O_eAX,   O_DX,    O_NONE,  P_oso },
+  /* EE */  { UD_Iout,         O_DX,    O_AL,    O_NONE,  P_none },
+  /* EF */  { UD_Iout,         O_DX,    O_eAX,   O_NONE,  P_oso },
+  /* F0 */  { UD_Ilock,        O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* F1 */  { UD_Iint1,        O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* F2 */  { UD_Irepne,       O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* F3 */  { UD_Irep,         O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* F4 */  { UD_Ihlt,         O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* F5 */  { UD_Icmc,         O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* F6 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_F6__REG },
+  /* F7 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_F7__REG },
+  /* F8 */  { UD_Iclc,         O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* F9 */  { UD_Istc,         O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* FA */  { UD_Icli,         O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* FB */  { UD_Isti,         O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* FC */  { UD_Icld,         O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* FD */  { UD_Istd,         O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* FE */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_FE__REG },
+  /* FF */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_FF__REG },
+};
+
+static struct ud_itab_entry itab__1byte__op_60__osize[3] = {
+  /* 00 */  { UD_Ipusha,       O_NONE,  O_NONE,  O_NONE,  P_inv64|P_oso },
+  /* 01 */  { UD_Ipushad,      O_NONE,  O_NONE,  O_NONE,  P_inv64|P_oso },
+  /* 02 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__1byte__op_61__osize[3] = {
+  /* 00 */  { UD_Ipopa,        O_NONE,  O_NONE,  O_NONE,  P_inv64|P_oso },
+  /* 01 */  { UD_Ipopad,       O_NONE,  O_NONE,  O_NONE,  P_inv64|P_oso },
+  /* 02 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__1byte__op_63__mode[3] = {
+  /* 00 */  { UD_Iarpl,        O_Ew,    O_Gw,    O_NONE,  P_inv64|P_aso },
+  /* 01 */  { UD_Iarpl,        O_Ew,    O_Gw,    O_NONE,  P_inv64|P_aso },
+  /* 02 */  { UD_Imovsxd,      O_Gv,    O_Ed,    O_NONE,  P_c2|P_aso|P_oso|P_rexw|P_rexx|P_rexr|P_rexb },
+};
+
+static struct ud_itab_entry itab__1byte__op_6d__osize[3] = {
+  /* 00 */  { UD_Iinsw,        O_NONE,  O_NONE,  O_NONE,  P_oso },
+  /* 01 */  { UD_Iinsd,        O_NONE,  O_NONE,  O_NONE,  P_oso },
+  /* 02 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__1byte__op_6f__osize[3] = {
+  /* 00 */  { UD_Ioutsw,       O_NONE,  O_NONE,  O_NONE,  P_oso },
+  /* 01 */  { UD_Ioutsd,       O_NONE,  O_NONE,  O_NONE,  P_oso },
+  /* 02 */  { UD_Ioutsq,       O_NONE,  O_NONE,  O_NONE,  P_oso },
+};
+
+static struct ud_itab_entry itab__1byte__op_80__reg[8] = {
+  /* 00 */  { UD_Iadd,         O_Eb,    O_Ib,    O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Ior,          O_Eb,    O_Ib,    O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 02 */  { UD_Iadc,         O_Eb,    O_Ib,    O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 03 */  { UD_Isbb,         O_Eb,    O_Ib,    O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 04 */  { UD_Iand,         O_Eb,    O_Ib,    O_NONE,  P_c1|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 05 */  { UD_Isub,         O_Eb,    O_Ib,    O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 06 */  { UD_Ixor,         O_Eb,    O_Ib,    O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 07 */  { UD_Icmp,         O_Eb,    O_Ib,    O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+};
+
+static struct ud_itab_entry itab__1byte__op_81__reg[8] = {
+  /* 00 */  { UD_Iadd,         O_Ev,    O_Iz,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Ior,          O_Ev,    O_Iz,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 02 */  { UD_Iadc,         O_Ev,    O_Iz,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 03 */  { UD_Isbb,         O_Ev,    O_Iz,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 04 */  { UD_Iand,         O_Ev,    O_Iz,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 05 */  { UD_Isub,         O_Ev,    O_Iz,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 06 */  { UD_Ixor,         O_Ev,    O_Iz,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 07 */  { UD_Icmp,         O_Ev,    O_Iz,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+};
+
+static struct ud_itab_entry itab__1byte__op_82__reg[8] = {
+  /* 00 */  { UD_Iadd,         O_Eb,    O_Ib,    O_NONE,  P_c1|P_inv64|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Ior,          O_Eb,    O_Ib,    O_NONE,  P_c1|P_inv64|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 02 */  { UD_Iadc,         O_Eb,    O_Ib,    O_NONE,  P_c1|P_inv64|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 03 */  { UD_Isbb,         O_Eb,    O_Ib,    O_NONE,  P_c1|P_inv64|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 04 */  { UD_Iand,         O_Eb,    O_Ib,    O_NONE,  P_c1|P_inv64|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 05 */  { UD_Isub,         O_Eb,    O_Ib,    O_NONE,  P_c1|P_inv64|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 06 */  { UD_Ixor,         O_Eb,    O_Ib,    O_NONE,  P_c1|P_inv64|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 07 */  { UD_Icmp,         O_Eb,    O_Ib,    O_NONE,  P_c1|P_inv64|P_aso|P_rexr|P_rexx|P_rexb },
+};
+
+static struct ud_itab_entry itab__1byte__op_83__reg[8] = {
+  /* 00 */  { UD_Iadd,         O_Ev,    O_Ib,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Ior,          O_Ev,    O_Ib,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 02 */  { UD_Iadc,         O_Ev,    O_Ib,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 03 */  { UD_Isbb,         O_Ev,    O_Ib,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 04 */  { UD_Iand,         O_Ev,    O_Ib,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 05 */  { UD_Isub,         O_Ev,    O_Ib,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 06 */  { UD_Ixor,         O_Ev,    O_Ib,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 07 */  { UD_Icmp,         O_Ev,    O_Ib,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+};
+
+static struct ud_itab_entry itab__1byte__op_8f__reg[8] = {
+  /* 00 */  { UD_Ipop,         O_Ev,    O_NONE,  O_NONE,  P_c1|P_def64|P_depM|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 02 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 03 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 04 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 05 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 06 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 07 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__1byte__op_98__osize[3] = {
+  /* 00 */  { UD_Icbw,         O_NONE,  O_NONE,  O_NONE,  P_oso|P_rexw },
+  /* 01 */  { UD_Icwde,        O_NONE,  O_NONE,  O_NONE,  P_oso|P_rexw },
+  /* 02 */  { UD_Icdqe,        O_NONE,  O_NONE,  O_NONE,  P_oso|P_rexw },
+};
+
+static struct ud_itab_entry itab__1byte__op_99__osize[3] = {
+  /* 00 */  { UD_Icwd,         O_NONE,  O_NONE,  O_NONE,  P_oso|P_rexw },
+  /* 01 */  { UD_Icdq,         O_NONE,  O_NONE,  O_NONE,  P_oso|P_rexw },
+  /* 02 */  { UD_Icqo,         O_NONE,  O_NONE,  O_NONE,  P_oso|P_rexw },
+};
+
+static struct ud_itab_entry itab__1byte__op_9c__mode[3] = {
+  /* 00 */  { UD_Igrp_osize,   O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_9C__MODE__OP_00__OSIZE },
+  /* 01 */  { UD_Igrp_osize,   O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_9C__MODE__OP_01__OSIZE },
+  /* 02 */  { UD_Ipushfq,      O_NONE,  O_NONE,  O_NONE,  P_def64|P_oso|P_rexw },
+};
+
+static struct ud_itab_entry itab__1byte__op_9c__mode__op_00__osize[3] = {
+  /* 00 */  { UD_Ipushfw,      O_NONE,  O_NONE,  O_NONE,  P_def64|P_oso },
+  /* 01 */  { UD_Ipushfd,      O_NONE,  O_NONE,  O_NONE,  P_def64|P_oso },
+  /* 02 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__1byte__op_9c__mode__op_01__osize[3] = {
+  /* 00 */  { UD_Ipushfw,      O_NONE,  O_NONE,  O_NONE,  P_def64|P_oso },
+  /* 01 */  { UD_Ipushfd,      O_NONE,  O_NONE,  O_NONE,  P_def64|P_oso },
+  /* 02 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__1byte__op_9d__mode[3] = {
+  /* 00 */  { UD_Igrp_osize,   O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_9D__MODE__OP_00__OSIZE },
+  /* 01 */  { UD_Igrp_osize,   O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_9D__MODE__OP_01__OSIZE },
+  /* 02 */  { UD_Ipopfq,       O_NONE,  O_NONE,  O_NONE,  P_def64|P_depM|P_oso },
+};
+
+static struct ud_itab_entry itab__1byte__op_9d__mode__op_00__osize[3] = {
+  /* 00 */  { UD_Ipopfw,       O_NONE,  O_NONE,  O_NONE,  P_def64|P_depM|P_oso },
+  /* 01 */  { UD_Ipopfd,       O_NONE,  O_NONE,  O_NONE,  P_def64|P_depM|P_oso },
+  /* 02 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__1byte__op_9d__mode__op_01__osize[3] = {
+  /* 00 */  { UD_Ipopfw,       O_NONE,  O_NONE,  O_NONE,  P_def64|P_depM|P_oso },
+  /* 01 */  { UD_Ipopfd,       O_NONE,  O_NONE,  O_NONE,  P_def64|P_depM|P_oso },
+  /* 02 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__1byte__op_a5__osize[3] = {
+  /* 00 */  { UD_Imovsw,       O_NONE,  O_NONE,  O_NONE,  P_ImpAddr|P_oso|P_rexw },
+  /* 01 */  { UD_Imovsd,       O_NONE,  O_NONE,  O_NONE,  P_ImpAddr|P_oso|P_rexw },
+  /* 02 */  { UD_Imovsq,       O_NONE,  O_NONE,  O_NONE,  P_ImpAddr|P_oso|P_rexw },
+};
+
+static struct ud_itab_entry itab__1byte__op_a7__osize[3] = {
+  /* 00 */  { UD_Icmpsw,       O_NONE,  O_NONE,  O_NONE,  P_oso|P_rexw },
+  /* 01 */  { UD_Icmpsd,       O_NONE,  O_NONE,  O_NONE,  P_oso|P_rexw },
+  /* 02 */  { UD_Icmpsq,       O_NONE,  O_NONE,  O_NONE,  P_oso|P_rexw },
+};
+
+static struct ud_itab_entry itab__1byte__op_ab__osize[3] = {
+  /* 00 */  { UD_Istosw,       O_NONE,  O_NONE,  O_NONE,  P_ImpAddr|P_oso|P_rexw },
+  /* 01 */  { UD_Istosd,       O_NONE,  O_NONE,  O_NONE,  P_ImpAddr|P_oso|P_rexw },
+  /* 02 */  { UD_Istosq,       O_NONE,  O_NONE,  O_NONE,  P_ImpAddr|P_oso|P_rexw },
+};
+
+static struct ud_itab_entry itab__1byte__op_ad__osize[3] = {
+  /* 00 */  { UD_Ilodsw,       O_NONE,  O_NONE,  O_NONE,  P_ImpAddr|P_oso|P_rexw },
+  /* 01 */  { UD_Ilodsd,       O_NONE,  O_NONE,  O_NONE,  P_ImpAddr|P_oso|P_rexw },
+  /* 02 */  { UD_Ilodsq,       O_NONE,  O_NONE,  O_NONE,  P_ImpAddr|P_oso|P_rexw },
+};
+
+static struct ud_itab_entry itab__1byte__op_ae__mod[2] = {
+  /* 00 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_AE__MOD__OP_00__REG },
+  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__1byte__op_ae__mod__op_00__reg[8] = {
+  /* 00 */  { UD_Ifxsave,      O_M,     O_NONE,  O_NONE,  P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Ifxrstor,     O_M,     O_NONE,  O_NONE,  P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 02 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 03 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 04 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 05 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 06 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 07 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__1byte__op_af__osize[3] = {
+  /* 00 */  { UD_Iscasw,       O_NONE,  O_NONE,  O_NONE,  P_oso|P_rexw },
+  /* 01 */  { UD_Iscasd,       O_NONE,  O_NONE,  O_NONE,  P_oso|P_rexw },
+  /* 02 */  { UD_Iscasq,       O_NONE,  O_NONE,  O_NONE,  P_oso|P_rexw },
+};
+
+static struct ud_itab_entry itab__1byte__op_c0__reg[8] = {
+  /* 00 */  { UD_Irol,         O_Eb,    O_Ib,    O_NONE,  P_c1|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Iror,         O_Eb,    O_Ib,    O_NONE,  P_c1|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 02 */  { UD_Ircl,         O_Eb,    O_Ib,    O_NONE,  P_c1|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 03 */  { UD_Ircr,         O_Eb,    O_Ib,    O_NONE,  P_c1|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 04 */  { UD_Ishl,         O_Eb,    O_Ib,    O_NONE,  P_c1|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 05 */  { UD_Ishr,         O_Eb,    O_Ib,    O_NONE,  P_c1|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 06 */  { UD_Ishl,         O_Eb,    O_Ib,    O_NONE,  P_c1|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 07 */  { UD_Isar,         O_Eb,    O_Ib,    O_NONE,  P_c1|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+};
+
+static struct ud_itab_entry itab__1byte__op_c1__reg[8] = {
+  /* 00 */  { UD_Irol,         O_Ev,    O_Ib,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Iror,         O_Ev,    O_Ib,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 02 */  { UD_Ircl,         O_Ev,    O_Ib,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 03 */  { UD_Ircr,         O_Ev,    O_Ib,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 04 */  { UD_Ishl,         O_Ev,    O_Ib,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 05 */  { UD_Ishr,         O_Ev,    O_Ib,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 06 */  { UD_Ishl,         O_Ev,    O_Ib,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 07 */  { UD_Isar,         O_Ev,    O_Ib,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+};
+
+static struct ud_itab_entry itab__1byte__op_c6__reg[8] = {
+  /* 00 */  { UD_Imov,         O_Eb,    O_Ib,    O_NONE,  P_c1|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 02 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 03 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 04 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 05 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 06 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 07 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__1byte__op_c7__reg[8] = {
+  /* 00 */  { UD_Imov,         O_Ev,    O_Iz,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 02 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 03 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 04 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 05 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 06 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 07 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__1byte__op_cf__osize[3] = {
+  /* 00 */  { UD_Iiretw,       O_NONE,  O_NONE,  O_NONE,  P_oso|P_rexw },
+  /* 01 */  { UD_Iiretd,       O_NONE,  O_NONE,  O_NONE,  P_oso|P_rexw },
+  /* 02 */  { UD_Iiretq,       O_NONE,  O_NONE,  O_NONE,  P_oso|P_rexw },
+};
+
+static struct ud_itab_entry itab__1byte__op_d0__reg[8] = {
+  /* 00 */  { UD_Irol,         O_Eb,    O_I1,    O_NONE,  P_c1|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Iror,         O_Eb,    O_I1,    O_NONE,  P_c1|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 02 */  { UD_Ircl,         O_Eb,    O_I1,    O_NONE,  P_c1|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 03 */  { UD_Ircr,         O_Eb,    O_I1,    O_NONE,  P_c1|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 04 */  { UD_Ishl,         O_Eb,    O_I1,    O_NONE,  P_c1|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 05 */  { UD_Ishr,         O_Eb,    O_I1,    O_NONE,  P_c1|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 06 */  { UD_Ishl,         O_Eb,    O_I1,    O_NONE,  P_c1|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 07 */  { UD_Isar,         O_Eb,    O_I1,    O_NONE,  P_c1|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+};
+
+static struct ud_itab_entry itab__1byte__op_d1__reg[8] = {
+  /* 00 */  { UD_Irol,         O_Ev,    O_I1,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Iror,         O_Ev,    O_I1,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 02 */  { UD_Ircl,         O_Ev,    O_I1,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 03 */  { UD_Ircr,         O_Ev,    O_I1,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 04 */  { UD_Ishl,         O_Ev,    O_I1,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 05 */  { UD_Ishr,         O_Ev,    O_I1,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 06 */  { UD_Ishl,         O_Ev,    O_I1,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 07 */  { UD_Isar,         O_Ev,    O_I1,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+};
+
+static struct ud_itab_entry itab__1byte__op_d2__reg[8] = {
+  /* 00 */  { UD_Irol,         O_Eb,    O_CL,    O_NONE,  P_c1|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Iror,         O_Eb,    O_CL,    O_NONE,  P_c1|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 02 */  { UD_Ircl,         O_Eb,    O_CL,    O_NONE,  P_c1|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 03 */  { UD_Ircr,         O_Eb,    O_CL,    O_NONE,  P_c1|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 04 */  { UD_Ishl,         O_Eb,    O_CL,    O_NONE,  P_c1|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 05 */  { UD_Ishr,         O_Eb,    O_CL,    O_NONE,  P_c1|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 06 */  { UD_Ishl,         O_Eb,    O_CL,    O_NONE,  P_c1|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 07 */  { UD_Isar,         O_Eb,    O_CL,    O_NONE,  P_c1|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+};
+
+static struct ud_itab_entry itab__1byte__op_d3__reg[8] = {
+  /* 00 */  { UD_Irol,         O_Ev,    O_CL,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Iror,         O_Ev,    O_CL,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 02 */  { UD_Ircl,         O_Ev,    O_CL,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 03 */  { UD_Ircr,         O_Ev,    O_CL,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 04 */  { UD_Ishl,         O_Ev,    O_CL,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 05 */  { UD_Ishr,         O_Ev,    O_CL,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 06 */  { UD_Ishl,         O_Ev,    O_CL,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 07 */  { UD_Isar,         O_Ev,    O_CL,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+};
+
+static struct ud_itab_entry itab__1byte__op_d8__mod[2] = {
+  /* 00 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_D8__MOD__OP_00__REG },
+  /* 01 */  { UD_Igrp_x87,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_D8__MOD__OP_01__X87 },
+};
+
+static struct ud_itab_entry itab__1byte__op_d8__mod__op_00__reg[8] = {
+  /* 00 */  { UD_Ifadd,        O_Md,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Ifmul,        O_Md,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 02 */  { UD_Ifcom,        O_Md,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 03 */  { UD_Ifcomp,       O_Md,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 04 */  { UD_Ifsub,        O_Md,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 05 */  { UD_Ifsubr,       O_Md,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 06 */  { UD_Ifdiv,        O_Md,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 07 */  { UD_Ifdivr,       O_Md,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+};
+
+static struct ud_itab_entry itab__1byte__op_d8__mod__op_01__x87[64] = {
+  /* 00 */  { UD_Ifadd,        O_ST0,   O_ST0,   O_NONE,  P_none },
+  /* 01 */  { UD_Ifadd,        O_ST0,   O_ST1,   O_NONE,  P_none },
+  /* 02 */  { UD_Ifadd,        O_ST0,   O_ST2,   O_NONE,  P_none },
+  /* 03 */  { UD_Ifadd,        O_ST0,   O_ST3,   O_NONE,  P_none },
+  /* 04 */  { UD_Ifadd,        O_ST0,   O_ST4,   O_NONE,  P_none },
+  /* 05 */  { UD_Ifadd,        O_ST0,   O_ST5,   O_NONE,  P_none },
+  /* 06 */  { UD_Ifadd,        O_ST0,   O_ST6,   O_NONE,  P_none },
+  /* 07 */  { UD_Ifadd,        O_ST0,   O_ST7,   O_NONE,  P_none },
+  /* 08 */  { UD_Ifmul,        O_ST0,   O_ST0,   O_NONE,  P_none },
+  /* 09 */  { UD_Ifmul,        O_ST0,   O_ST1,   O_NONE,  P_none },
+  /* 0A */  { UD_Ifmul,        O_ST0,   O_ST2,   O_NONE,  P_none },
+  /* 0B */  { UD_Ifmul,        O_ST0,   O_ST3,   O_NONE,  P_none },
+  /* 0C */  { UD_Ifmul,        O_ST0,   O_ST4,   O_NONE,  P_none },
+  /* 0D */  { UD_Ifmul,        O_ST0,   O_ST5,   O_NONE,  P_none },
+  /* 0E */  { UD_Ifmul,        O_ST0,   O_ST6,   O_NONE,  P_none },
+  /* 0F */  { UD_Ifmul,        O_ST0,   O_ST7,   O_NONE,  P_none },
+  /* 10 */  { UD_Ifcom,        O_ST0,   O_ST0,   O_NONE,  P_none },
+  /* 11 */  { UD_Ifcom,        O_ST0,   O_ST1,   O_NONE,  P_none },
+  /* 12 */  { UD_Ifcom,        O_ST0,   O_ST2,   O_NONE,  P_none },
+  /* 13 */  { UD_Ifcom,        O_ST0,   O_ST3,   O_NONE,  P_none },
+  /* 14 */  { UD_Ifcom,        O_ST0,   O_ST4,   O_NONE,  P_none },
+  /* 15 */  { UD_Ifcom,        O_ST0,   O_ST5,   O_NONE,  P_none },
+  /* 16 */  { UD_Ifcom,        O_ST0,   O_ST6,   O_NONE,  P_none },
+  /* 17 */  { UD_Ifcom,        O_ST0,   O_ST7,   O_NONE,  P_none },
+  /* 18 */  { UD_Ifcomp,       O_ST0,   O_ST0,   O_NONE,  P_none },
+  /* 19 */  { UD_Ifcomp,       O_ST0,   O_ST1,   O_NONE,  P_none },
+  /* 1A */  { UD_Ifcomp,       O_ST0,   O_ST2,   O_NONE,  P_none },
+  /* 1B */  { UD_Ifcomp,       O_ST0,   O_ST3,   O_NONE,  P_none },
+  /* 1C */  { UD_Ifcomp,       O_ST0,   O_ST4,   O_NONE,  P_none },
+  /* 1D */  { UD_Ifcomp,       O_ST0,   O_ST5,   O_NONE,  P_none },
+  /* 1E */  { UD_Ifcomp,       O_ST0,   O_ST6,   O_NONE,  P_none },
+  /* 1F */  { UD_Ifcomp,       O_ST0,   O_ST7,   O_NONE,  P_none },
+  /* 20 */  { UD_Ifsub,        O_ST0,   O_ST0,   O_NONE,  P_none },
+  /* 21 */  { UD_Ifsub,        O_ST0,   O_ST1,   O_NONE,  P_none },
+  /* 22 */  { UD_Ifsub,        O_ST0,   O_ST2,   O_NONE,  P_none },
+  /* 23 */  { UD_Ifsub,        O_ST0,   O_ST3,   O_NONE,  P_none },
+  /* 24 */  { UD_Ifsub,        O_ST0,   O_ST4,   O_NONE,  P_none },
+  /* 25 */  { UD_Ifsub,        O_ST0,   O_ST5,   O_NONE,  P_none },
+  /* 26 */  { UD_Ifsub,        O_ST0,   O_ST6,   O_NONE,  P_none },
+  /* 27 */  { UD_Ifsub,        O_ST0,   O_ST7,   O_NONE,  P_none },
+  /* 28 */  { UD_Ifsubr,       O_ST0,   O_ST0,   O_NONE,  P_none },
+  /* 29 */  { UD_Ifsubr,       O_ST0,   O_ST1,   O_NONE,  P_none },
+  /* 2A */  { UD_Ifsubr,       O_ST0,   O_ST2,   O_NONE,  P_none },
+  /* 2B */  { UD_Ifsubr,       O_ST0,   O_ST3,   O_NONE,  P_none },
+  /* 2C */  { UD_Ifsubr,       O_ST0,   O_ST4,   O_NONE,  P_none },
+  /* 2D */  { UD_Ifsubr,       O_ST0,   O_ST5,   O_NONE,  P_none },
+  /* 2E */  { UD_Ifsubr,       O_ST0,   O_ST6,   O_NONE,  P_none },
+  /* 2F */  { UD_Ifsubr,       O_ST0,   O_ST7,   O_NONE,  P_none },
+  /* 30 */  { UD_Ifdiv,        O_ST0,   O_ST0,   O_NONE,  P_none },
+  /* 31 */  { UD_Ifdiv,        O_ST0,   O_ST1,   O_NONE,  P_none },
+  /* 32 */  { UD_Ifdiv,        O_ST0,   O_ST2,   O_NONE,  P_none },
+  /* 33 */  { UD_Ifdiv,        O_ST0,   O_ST3,   O_NONE,  P_none },
+  /* 34 */  { UD_Ifdiv,        O_ST0,   O_ST4,   O_NONE,  P_none },
+  /* 35 */  { UD_Ifdiv,        O_ST0,   O_ST5,   O_NONE,  P_none },
+  /* 36 */  { UD_Ifdiv,        O_ST0,   O_ST6,   O_NONE,  P_none },
+  /* 37 */  { UD_Ifdiv,        O_ST0,   O_ST7,   O_NONE,  P_none },
+  /* 38 */  { UD_Ifdivr,       O_ST0,   O_ST0,   O_NONE,  P_none },
+  /* 39 */  { UD_Ifdivr,       O_ST0,   O_ST1,   O_NONE,  P_none },
+  /* 3A */  { UD_Ifdivr,       O_ST0,   O_ST2,   O_NONE,  P_none },
+  /* 3B */  { UD_Ifdivr,       O_ST0,   O_ST3,   O_NONE,  P_none },
+  /* 3C */  { UD_Ifdivr,       O_ST0,   O_ST4,   O_NONE,  P_none },
+  /* 3D */  { UD_Ifdivr,       O_ST0,   O_ST5,   O_NONE,  P_none },
+  /* 3E */  { UD_Ifdivr,       O_ST0,   O_ST6,   O_NONE,  P_none },
+  /* 3F */  { UD_Ifdivr,       O_ST0,   O_ST7,   O_NONE,  P_none },
+};
+
+static struct ud_itab_entry itab__1byte__op_d9__mod[2] = {
+  /* 00 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_D9__MOD__OP_00__REG },
+  /* 01 */  { UD_Igrp_x87,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_D9__MOD__OP_01__X87 },
+};
+
+static struct ud_itab_entry itab__1byte__op_d9__mod__op_00__reg[8] = {
+  /* 00 */  { UD_Ifld,         O_Md,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 02 */  { UD_Ifst,         O_Md,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 03 */  { UD_Ifstp,        O_Md,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 04 */  { UD_Ifldenv,      O_M,     O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 05 */  { UD_Ifldcw,       O_Mw,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 06 */  { UD_Ifnstenv,     O_M,     O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 07 */  { UD_Ifnstcw,      O_Mw,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+};
+
+static struct ud_itab_entry itab__1byte__op_d9__mod__op_01__x87[64] = {
+  /* 00 */  { UD_Ifld,         O_ST0,   O_ST0,   O_NONE,  P_none },
+  /* 01 */  { UD_Ifld,         O_ST0,   O_ST1,   O_NONE,  P_none },
+  /* 02 */  { UD_Ifld,         O_ST0,   O_ST2,   O_NONE,  P_none },
+  /* 03 */  { UD_Ifld,         O_ST0,   O_ST3,   O_NONE,  P_none },
+  /* 04 */  { UD_Ifld,         O_ST0,   O_ST4,   O_NONE,  P_none },
+  /* 05 */  { UD_Ifld,         O_ST0,   O_ST5,   O_NONE,  P_none },
+  /* 06 */  { UD_Ifld,         O_ST0,   O_ST6,   O_NONE,  P_none },
+  /* 07 */  { UD_Ifld,         O_ST0,   O_ST7,   O_NONE,  P_none },
+  /* 08 */  { UD_Ifxch,        O_ST0,   O_ST0,   O_NONE,  P_none },
+  /* 09 */  { UD_Ifxch,        O_ST0,   O_ST1,   O_NONE,  P_none },
+  /* 0A */  { UD_Ifxch,        O_ST0,   O_ST2,   O_NONE,  P_none },
+  /* 0B */  { UD_Ifxch,        O_ST0,   O_ST3,   O_NONE,  P_none },
+  /* 0C */  { UD_Ifxch,        O_ST0,   O_ST4,   O_NONE,  P_none },
+  /* 0D */  { UD_Ifxch,        O_ST0,   O_ST5,   O_NONE,  P_none },
+  /* 0E */  { UD_Ifxch,        O_ST0,   O_ST6,   O_NONE,  P_none },
+  /* 0F */  { UD_Ifxch,        O_ST0,   O_ST7,   O_NONE,  P_none },
+  /* 10 */  { UD_Ifnop,        O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 11 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 12 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 13 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 14 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 15 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 16 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 17 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 18 */  { UD_Ifstp1,       O_ST0,   O_NONE,  O_NONE,  P_none },
+  /* 19 */  { UD_Ifstp1,       O_ST1,   O_NONE,  O_NONE,  P_none },
+  /* 1A */  { UD_Ifstp1,       O_ST2,   O_NONE,  O_NONE,  P_none },
+  /* 1B */  { UD_Ifstp1,       O_ST3,   O_NONE,  O_NONE,  P_none },
+  /* 1C */  { UD_Ifstp1,       O_ST4,   O_NONE,  O_NONE,  P_none },
+  /* 1D */  { UD_Ifstp1,       O_ST5,   O_NONE,  O_NONE,  P_none },
+  /* 1E */  { UD_Ifstp1,       O_ST6,   O_NONE,  O_NONE,  P_none },
+  /* 1F */  { UD_Ifstp1,       O_ST7,   O_NONE,  O_NONE,  P_none },
+  /* 20 */  { UD_Ifchs,        O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 21 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 22 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 23 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 24 */  { UD_Iftst,        O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 25 */  { UD_Ifxam,        O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 26 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 27 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 28 */  { UD_Ifld1,        O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 29 */  { UD_Ifldl2t,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 2A */  { UD_Ifldl2e,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 2B */  { UD_Ifldpi,       O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 2C */  { UD_Ifldlg2,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 2D */  { UD_Ifldln2,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 2E */  { UD_Ifldz,        O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 2F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 30 */  { UD_If2xm1,       O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 31 */  { UD_Ifyl2x,       O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 32 */  { UD_Ifptan,       O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 33 */  { UD_Ifpatan,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 34 */  { UD_Ifpxtract,    O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 35 */  { UD_Ifprem1,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 36 */  { UD_Ifdecstp,     O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 37 */  { UD_Ifincstp,     O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 38 */  { UD_Ifprem,       O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 39 */  { UD_Ifyl2xp1,     O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 3A */  { UD_Ifsqrt,       O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 3B */  { UD_Ifsincos,     O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 3C */  { UD_Ifrndint,     O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 3D */  { UD_Ifscale,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 3E */  { UD_Ifsin,        O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 3F */  { UD_Ifcos,        O_NONE,  O_NONE,  O_NONE,  P_none },
+};
+
+static struct ud_itab_entry itab__1byte__op_da__mod[2] = {
+  /* 00 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_DA__MOD__OP_00__REG },
+  /* 01 */  { UD_Igrp_x87,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_DA__MOD__OP_01__X87 },
+};
+
+static struct ud_itab_entry itab__1byte__op_da__mod__op_00__reg[8] = {
+  /* 00 */  { UD_Ifiadd,       O_Md,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Ifimul,       O_Md,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 02 */  { UD_Ificom,       O_Md,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 03 */  { UD_Ificomp,      O_Md,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 04 */  { UD_Ifisub,       O_Md,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 05 */  { UD_Ifisubr,      O_Md,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 06 */  { UD_Ifidiv,       O_Md,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 07 */  { UD_Ifidivr,      O_Md,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+};
+
+static struct ud_itab_entry itab__1byte__op_da__mod__op_01__x87[64] = {
+  /* 00 */  { UD_Ifcmovb,      O_ST0,   O_ST0,   O_NONE,  P_none },
+  /* 01 */  { UD_Ifcmovb,      O_ST0,   O_ST1,   O_NONE,  P_none },
+  /* 02 */  { UD_Ifcmovb,      O_ST0,   O_ST2,   O_NONE,  P_none },
+  /* 03 */  { UD_Ifcmovb,      O_ST0,   O_ST3,   O_NONE,  P_none },
+  /* 04 */  { UD_Ifcmovb,      O_ST0,   O_ST4,   O_NONE,  P_none },
+  /* 05 */  { UD_Ifcmovb,      O_ST0,   O_ST5,   O_NONE,  P_none },
+  /* 06 */  { UD_Ifcmovb,      O_ST0,   O_ST6,   O_NONE,  P_none },
+  /* 07 */  { UD_Ifcmovb,      O_ST0,   O_ST7,   O_NONE,  P_none },
+  /* 08 */  { UD_Ifcmove,      O_ST0,   O_ST0,   O_NONE,  P_none },
+  /* 09 */  { UD_Ifcmove,      O_ST0,   O_ST1,   O_NONE,  P_none },
+  /* 0A */  { UD_Ifcmove,      O_ST0,   O_ST2,   O_NONE,  P_none },
+  /* 0B */  { UD_Ifcmove,      O_ST0,   O_ST3,   O_NONE,  P_none },
+  /* 0C */  { UD_Ifcmove,      O_ST0,   O_ST4,   O_NONE,  P_none },
+  /* 0D */  { UD_Ifcmove,      O_ST0,   O_ST5,   O_NONE,  P_none },
+  /* 0E */  { UD_Ifcmove,      O_ST0,   O_ST6,   O_NONE,  P_none },
+  /* 0F */  { UD_Ifcmove,      O_ST0,   O_ST7,   O_NONE,  P_none },
+  /* 10 */  { UD_Ifcmovbe,     O_ST0,   O_ST0,   O_NONE,  P_none },
+  /* 11 */  { UD_Ifcmovbe,     O_ST0,   O_ST1,   O_NONE,  P_none },
+  /* 12 */  { UD_Ifcmovbe,     O_ST0,   O_ST2,   O_NONE,  P_none },
+  /* 13 */  { UD_Ifcmovbe,     O_ST0,   O_ST3,   O_NONE,  P_none },
+  /* 14 */  { UD_Ifcmovbe,     O_ST0,   O_ST4,   O_NONE,  P_none },
+  /* 15 */  { UD_Ifcmovbe,     O_ST0,   O_ST5,   O_NONE,  P_none },
+  /* 16 */  { UD_Ifcmovbe,     O_ST0,   O_ST6,   O_NONE,  P_none },
+  /* 17 */  { UD_Ifcmovbe,     O_ST0,   O_ST7,   O_NONE,  P_none },
+  /* 18 */  { UD_Ifcmovu,      O_ST0,   O_ST0,   O_NONE,  P_none },
+  /* 19 */  { UD_Ifcmovu,      O_ST0,   O_ST1,   O_NONE,  P_none },
+  /* 1A */  { UD_Ifcmovu,      O_ST0,   O_ST2,   O_NONE,  P_none },
+  /* 1B */  { UD_Ifcmovu,      O_ST0,   O_ST3,   O_NONE,  P_none },
+  /* 1C */  { UD_Ifcmovu,      O_ST0,   O_ST4,   O_NONE,  P_none },
+  /* 1D */  { UD_Ifcmovu,      O_ST0,   O_ST5,   O_NONE,  P_none },
+  /* 1E */  { UD_Ifcmovu,      O_ST0,   O_ST6,   O_NONE,  P_none },
+  /* 1F */  { UD_Ifcmovu,      O_ST0,   O_ST7,   O_NONE,  P_none },
+  /* 20 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 21 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 22 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 23 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 24 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 25 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 26 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 27 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 28 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 29 */  { UD_Ifucompp,     O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 2A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 2B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 2C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 2D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 2E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 2F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 30 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 31 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 32 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 33 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 34 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 35 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 36 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 37 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 38 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 39 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__1byte__op_db__mod[2] = {
+  /* 00 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_DB__MOD__OP_00__REG },
+  /* 01 */  { UD_Igrp_x87,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_DB__MOD__OP_01__X87 },
+};
+
+static struct ud_itab_entry itab__1byte__op_db__mod__op_00__reg[8] = {
+  /* 00 */  { UD_Ifild,        O_Md,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Ifisttp,      O_Md,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 02 */  { UD_Ifist,        O_Md,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 03 */  { UD_Ifistp,       O_Md,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 04 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 05 */  { UD_Ifld,         O_Mt,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 06 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 07 */  { UD_Ifstp,        O_Mt,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+};
+
+static struct ud_itab_entry itab__1byte__op_db__mod__op_01__x87[64] = {
+  /* 00 */  { UD_Ifcmovnb,     O_ST0,   O_ST0,   O_NONE,  P_none },
+  /* 01 */  { UD_Ifcmovnb,     O_ST0,   O_ST1,   O_NONE,  P_none },
+  /* 02 */  { UD_Ifcmovnb,     O_ST0,   O_ST2,   O_NONE,  P_none },
+  /* 03 */  { UD_Ifcmovnb,     O_ST0,   O_ST3,   O_NONE,  P_none },
+  /* 04 */  { UD_Ifcmovnb,     O_ST0,   O_ST4,   O_NONE,  P_none },
+  /* 05 */  { UD_Ifcmovnb,     O_ST0,   O_ST5,   O_NONE,  P_none },
+  /* 06 */  { UD_Ifcmovnb,     O_ST0,   O_ST6,   O_NONE,  P_none },
+  /* 07 */  { UD_Ifcmovnb,     O_ST0,   O_ST7,   O_NONE,  P_none },
+  /* 08 */  { UD_Ifcmovne,     O_ST0,   O_ST0,   O_NONE,  P_none },
+  /* 09 */  { UD_Ifcmovne,     O_ST0,   O_ST1,   O_NONE,  P_none },
+  /* 0A */  { UD_Ifcmovne,     O_ST0,   O_ST2,   O_NONE,  P_none },
+  /* 0B */  { UD_Ifcmovne,     O_ST0,   O_ST3,   O_NONE,  P_none },
+  /* 0C */  { UD_Ifcmovne,     O_ST0,   O_ST4,   O_NONE,  P_none },
+  /* 0D */  { UD_Ifcmovne,     O_ST0,   O_ST5,   O_NONE,  P_none },
+  /* 0E */  { UD_Ifcmovne,     O_ST0,   O_ST6,   O_NONE,  P_none },
+  /* 0F */  { UD_Ifcmovne,     O_ST0,   O_ST7,   O_NONE,  P_none },
+  /* 10 */  { UD_Ifcmovnbe,    O_ST0,   O_ST0,   O_NONE,  P_none },
+  /* 11 */  { UD_Ifcmovnbe,    O_ST0,   O_ST1,   O_NONE,  P_none },
+  /* 12 */  { UD_Ifcmovnbe,    O_ST0,   O_ST2,   O_NONE,  P_none },
+  /* 13 */  { UD_Ifcmovnbe,    O_ST0,   O_ST3,   O_NONE,  P_none },
+  /* 14 */  { UD_Ifcmovnbe,    O_ST0,   O_ST4,   O_NONE,  P_none },
+  /* 15 */  { UD_Ifcmovnbe,    O_ST0,   O_ST5,   O_NONE,  P_none },
+  /* 16 */  { UD_Ifcmovnbe,    O_ST0,   O_ST6,   O_NONE,  P_none },
+  /* 17 */  { UD_Ifcmovnbe,    O_ST0,   O_ST7,   O_NONE,  P_none },
+  /* 18 */  { UD_Ifcmovnu,     O_ST0,   O_ST0,   O_NONE,  P_none },
+  /* 19 */  { UD_Ifcmovnu,     O_ST0,   O_ST1,   O_NONE,  P_none },
+  /* 1A */  { UD_Ifcmovnu,     O_ST0,   O_ST2,   O_NONE,  P_none },
+  /* 1B */  { UD_Ifcmovnu,     O_ST0,   O_ST3,   O_NONE,  P_none },
+  /* 1C */  { UD_Ifcmovnu,     O_ST0,   O_ST4,   O_NONE,  P_none },
+  /* 1D */  { UD_Ifcmovnu,     O_ST0,   O_ST5,   O_NONE,  P_none },
+  /* 1E */  { UD_Ifcmovnu,     O_ST0,   O_ST6,   O_NONE,  P_none },
+  /* 1F */  { UD_Ifcmovnu,     O_ST0,   O_ST7,   O_NONE,  P_none },
+  /* 20 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 21 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 22 */  { UD_Ifclex,       O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 23 */  { UD_Ifninit,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 24 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 25 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 26 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 27 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 28 */  { UD_Ifucomi,      O_ST0,   O_ST0,   O_NONE,  P_none },
+  /* 29 */  { UD_Ifucomi,      O_ST0,   O_ST1,   O_NONE,  P_none },
+  /* 2A */  { UD_Ifucomi,      O_ST0,   O_ST2,   O_NONE,  P_none },
+  /* 2B */  { UD_Ifucomi,      O_ST0,   O_ST3,   O_NONE,  P_none },
+  /* 2C */  { UD_Ifucomi,      O_ST0,   O_ST4,   O_NONE,  P_none },
+  /* 2D */  { UD_Ifucomi,      O_ST0,   O_ST5,   O_NONE,  P_none },
+  /* 2E */  { UD_Ifucomi,      O_ST0,   O_ST6,   O_NONE,  P_none },
+  /* 2F */  { UD_Ifucomi,      O_ST0,   O_ST7,   O_NONE,  P_none },
+  /* 30 */  { UD_Ifcomi,       O_ST0,   O_ST0,   O_NONE,  P_none },
+  /* 31 */  { UD_Ifcomi,       O_ST0,   O_ST1,   O_NONE,  P_none },
+  /* 32 */  { UD_Ifcomi,       O_ST0,   O_ST2,   O_NONE,  P_none },
+  /* 33 */  { UD_Ifcomi,       O_ST0,   O_ST3,   O_NONE,  P_none },
+  /* 34 */  { UD_Ifcomi,       O_ST0,   O_ST4,   O_NONE,  P_none },
+  /* 35 */  { UD_Ifcomi,       O_ST0,   O_ST5,   O_NONE,  P_none },
+  /* 36 */  { UD_Ifcomi,       O_ST0,   O_ST6,   O_NONE,  P_none },
+  /* 37 */  { UD_Ifcomi,       O_ST0,   O_ST7,   O_NONE,  P_none },
+  /* 38 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 39 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__1byte__op_dc__mod[2] = {
+  /* 00 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_DC__MOD__OP_00__REG },
+  /* 01 */  { UD_Igrp_x87,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_DC__MOD__OP_01__X87 },
+};
+
+static struct ud_itab_entry itab__1byte__op_dc__mod__op_00__reg[8] = {
+  /* 00 */  { UD_Ifadd,        O_Mq,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Ifmul,        O_Mq,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 02 */  { UD_Ifcom,        O_Mq,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 03 */  { UD_Ifcomp,       O_Mq,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 04 */  { UD_Ifsub,        O_Mq,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 05 */  { UD_Ifsubr,       O_Mq,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 06 */  { UD_Ifdiv,        O_Mq,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 07 */  { UD_Ifdivr,       O_Mq,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+};
+
+static struct ud_itab_entry itab__1byte__op_dc__mod__op_01__x87[64] = {
+  /* 00 */  { UD_Ifadd,        O_ST0,   O_ST0,   O_NONE,  P_none },
+  /* 01 */  { UD_Ifadd,        O_ST1,   O_ST0,   O_NONE,  P_none },
+  /* 02 */  { UD_Ifadd,        O_ST2,   O_ST0,   O_NONE,  P_none },
+  /* 03 */  { UD_Ifadd,        O_ST3,   O_ST0,   O_NONE,  P_none },
+  /* 04 */  { UD_Ifadd,        O_ST4,   O_ST0,   O_NONE,  P_none },
+  /* 05 */  { UD_Ifadd,        O_ST5,   O_ST0,   O_NONE,  P_none },
+  /* 06 */  { UD_Ifadd,        O_ST6,   O_ST0,   O_NONE,  P_none },
+  /* 07 */  { UD_Ifadd,        O_ST7,   O_ST0,   O_NONE,  P_none },
+  /* 08 */  { UD_Ifmul,        O_ST0,   O_ST0,   O_NONE,  P_none },
+  /* 09 */  { UD_Ifmul,        O_ST1,   O_ST0,   O_NONE,  P_none },
+  /* 0A */  { UD_Ifmul,        O_ST2,   O_ST0,   O_NONE,  P_none },
+  /* 0B */  { UD_Ifmul,        O_ST3,   O_ST0,   O_NONE,  P_none },
+  /* 0C */  { UD_Ifmul,        O_ST4,   O_ST0,   O_NONE,  P_none },
+  /* 0D */  { UD_Ifmul,        O_ST5,   O_ST0,   O_NONE,  P_none },
+  /* 0E */  { UD_Ifmul,        O_ST6,   O_ST0,   O_NONE,  P_none },
+  /* 0F */  { UD_Ifmul,        O_ST7,   O_ST0,   O_NONE,  P_none },
+  /* 10 */  { UD_Ifcom2,       O_ST0,   O_NONE,  O_NONE,  P_none },
+  /* 11 */  { UD_Ifcom2,       O_ST1,   O_NONE,  O_NONE,  P_none },
+  /* 12 */  { UD_Ifcom2,       O_ST2,   O_NONE,  O_NONE,  P_none },
+  /* 13 */  { UD_Ifcom2,       O_ST3,   O_NONE,  O_NONE,  P_none },
+  /* 14 */  { UD_Ifcom2,       O_ST4,   O_NONE,  O_NONE,  P_none },
+  /* 15 */  { UD_Ifcom2,       O_ST5,   O_NONE,  O_NONE,  P_none },
+  /* 16 */  { UD_Ifcom2,       O_ST6,   O_NONE,  O_NONE,  P_none },
+  /* 17 */  { UD_Ifcom2,       O_ST7,   O_NONE,  O_NONE,  P_none },
+  /* 18 */  { UD_Ifcomp3,      O_ST0,   O_NONE,  O_NONE,  P_none },
+  /* 19 */  { UD_Ifcomp3,      O_ST1,   O_NONE,  O_NONE,  P_none },
+  /* 1A */  { UD_Ifcomp3,      O_ST2,   O_NONE,  O_NONE,  P_none },
+  /* 1B */  { UD_Ifcomp3,      O_ST3,   O_NONE,  O_NONE,  P_none },
+  /* 1C */  { UD_Ifcomp3,      O_ST4,   O_NONE,  O_NONE,  P_none },
+  /* 1D */  { UD_Ifcomp3,      O_ST5,   O_NONE,  O_NONE,  P_none },
+  /* 1E */  { UD_Ifcomp3,      O_ST6,   O_NONE,  O_NONE,  P_none },
+  /* 1F */  { UD_Ifcomp3,      O_ST7,   O_NONE,  O_NONE,  P_none },
+  /* 20 */  { UD_Ifsubr,       O_ST0,   O_ST0,   O_NONE,  P_none },
+  /* 21 */  { UD_Ifsubr,       O_ST1,   O_ST0,   O_NONE,  P_none },
+  /* 22 */  { UD_Ifsubr,       O_ST2,   O_ST0,   O_NONE,  P_none },
+  /* 23 */  { UD_Ifsubr,       O_ST3,   O_ST0,   O_NONE,  P_none },
+  /* 24 */  { UD_Ifsubr,       O_ST4,   O_ST0,   O_NONE,  P_none },
+  /* 25 */  { UD_Ifsubr,       O_ST5,   O_ST0,   O_NONE,  P_none },
+  /* 26 */  { UD_Ifsubr,       O_ST6,   O_ST0,   O_NONE,  P_none },
+  /* 27 */  { UD_Ifsubr,       O_ST7,   O_ST0,   O_NONE,  P_none },
+  /* 28 */  { UD_Ifsub,        O_ST0,   O_ST0,   O_NONE,  P_none },
+  /* 29 */  { UD_Ifsub,        O_ST1,   O_ST0,   O_NONE,  P_none },
+  /* 2A */  { UD_Ifsub,        O_ST2,   O_ST0,   O_NONE,  P_none },
+  /* 2B */  { UD_Ifsub,        O_ST3,   O_ST0,   O_NONE,  P_none },
+  /* 2C */  { UD_Ifsub,        O_ST4,   O_ST0,   O_NONE,  P_none },
+  /* 2D */  { UD_Ifsub,        O_ST5,   O_ST0,   O_NONE,  P_none },
+  /* 2E */  { UD_Ifsub,        O_ST6,   O_ST0,   O_NONE,  P_none },
+  /* 2F */  { UD_Ifsub,        O_ST7,   O_ST0,   O_NONE,  P_none },
+  /* 30 */  { UD_Ifdivr,       O_ST0,   O_ST0,   O_NONE,  P_none },
+  /* 31 */  { UD_Ifdivr,       O_ST1,   O_ST0,   O_NONE,  P_none },
+  /* 32 */  { UD_Ifdivr,       O_ST2,   O_ST0,   O_NONE,  P_none },
+  /* 33 */  { UD_Ifdivr,       O_ST3,   O_ST0,   O_NONE,  P_none },
+  /* 34 */  { UD_Ifdivr,       O_ST4,   O_ST0,   O_NONE,  P_none },
+  /* 35 */  { UD_Ifdivr,       O_ST5,   O_ST0,   O_NONE,  P_none },
+  /* 36 */  { UD_Ifdivr,       O_ST6,   O_ST0,   O_NONE,  P_none },
+  /* 37 */  { UD_Ifdivr,       O_ST7,   O_ST0,   O_NONE,  P_none },
+  /* 38 */  { UD_Ifdiv,        O_ST0,   O_ST0,   O_NONE,  P_none },
+  /* 39 */  { UD_Ifdiv,        O_ST1,   O_ST0,   O_NONE,  P_none },
+  /* 3A */  { UD_Ifdiv,        O_ST2,   O_ST0,   O_NONE,  P_none },
+  /* 3B */  { UD_Ifdiv,        O_ST3,   O_ST0,   O_NONE,  P_none },
+  /* 3C */  { UD_Ifdiv,        O_ST4,   O_ST0,   O_NONE,  P_none },
+  /* 3D */  { UD_Ifdiv,        O_ST5,   O_ST0,   O_NONE,  P_none },
+  /* 3E */  { UD_Ifdiv,        O_ST6,   O_ST0,   O_NONE,  P_none },
+  /* 3F */  { UD_Ifdiv,        O_ST7,   O_ST0,   O_NONE,  P_none },
+};
+
+static struct ud_itab_entry itab__1byte__op_dd__mod[2] = {
+  /* 00 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_DD__MOD__OP_00__REG },
+  /* 01 */  { UD_Igrp_x87,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_DD__MOD__OP_01__X87 },
+};
+
+static struct ud_itab_entry itab__1byte__op_dd__mod__op_00__reg[8] = {
+  /* 00 */  { UD_Ifld,         O_Mq,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Ifisttp,      O_Mq,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 02 */  { UD_Ifst,         O_Mq,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 03 */  { UD_Ifstp,        O_Mq,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 04 */  { UD_Ifrstor,      O_M,     O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 05 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 06 */  { UD_Ifnsave,      O_M,     O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 07 */  { UD_Ifnstsw,      O_Mw,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+};
+
+static struct ud_itab_entry itab__1byte__op_dd__mod__op_01__x87[64] = {
+  /* 00 */  { UD_Iffree,       O_ST0,   O_NONE,  O_NONE,  P_none },
+  /* 01 */  { UD_Iffree,       O_ST1,   O_NONE,  O_NONE,  P_none },
+  /* 02 */  { UD_Iffree,       O_ST2,   O_NONE,  O_NONE,  P_none },
+  /* 03 */  { UD_Iffree,       O_ST3,   O_NONE,  O_NONE,  P_none },
+  /* 04 */  { UD_Iffree,       O_ST4,   O_NONE,  O_NONE,  P_none },
+  /* 05 */  { UD_Iffree,       O_ST5,   O_NONE,  O_NONE,  P_none },
+  /* 06 */  { UD_Iffree,       O_ST6,   O_NONE,  O_NONE,  P_none },
+  /* 07 */  { UD_Iffree,       O_ST7,   O_NONE,  O_NONE,  P_none },
+  /* 08 */  { UD_Ifxch4,       O_ST0,   O_NONE,  O_NONE,  P_none },
+  /* 09 */  { UD_Ifxch4,       O_ST1,   O_NONE,  O_NONE,  P_none },
+  /* 0A */  { UD_Ifxch4,       O_ST2,   O_NONE,  O_NONE,  P_none },
+  /* 0B */  { UD_Ifxch4,       O_ST3,   O_NONE,  O_NONE,  P_none },
+  /* 0C */  { UD_Ifxch4,       O_ST4,   O_NONE,  O_NONE,  P_none },
+  /* 0D */  { UD_Ifxch4,       O_ST5,   O_NONE,  O_NONE,  P_none },
+  /* 0E */  { UD_Ifxch4,       O_ST6,   O_NONE,  O_NONE,  P_none },
+  /* 0F */  { UD_Ifxch4,       O_ST7,   O_NONE,  O_NONE,  P_none },
+  /* 10 */  { UD_Ifst,         O_ST0,   O_NONE,  O_NONE,  P_none },
+  /* 11 */  { UD_Ifst,         O_ST1,   O_NONE,  O_NONE,  P_none },
+  /* 12 */  { UD_Ifst,         O_ST2,   O_NONE,  O_NONE,  P_none },
+  /* 13 */  { UD_Ifst,         O_ST3,   O_NONE,  O_NONE,  P_none },
+  /* 14 */  { UD_Ifst,         O_ST4,   O_NONE,  O_NONE,  P_none },
+  /* 15 */  { UD_Ifst,         O_ST5,   O_NONE,  O_NONE,  P_none },
+  /* 16 */  { UD_Ifst,         O_ST6,   O_NONE,  O_NONE,  P_none },
+  /* 17 */  { UD_Ifst,         O_ST7,   O_NONE,  O_NONE,  P_none },
+  /* 18 */  { UD_Ifstp,        O_ST0,   O_NONE,  O_NONE,  P_none },
+  /* 19 */  { UD_Ifstp,        O_ST1,   O_NONE,  O_NONE,  P_none },
+  /* 1A */  { UD_Ifstp,        O_ST2,   O_NONE,  O_NONE,  P_none },
+  /* 1B */  { UD_Ifstp,        O_ST3,   O_NONE,  O_NONE,  P_none },
+  /* 1C */  { UD_Ifstp,        O_ST4,   O_NONE,  O_NONE,  P_none },
+  /* 1D */  { UD_Ifstp,        O_ST5,   O_NONE,  O_NONE,  P_none },
+  /* 1E */  { UD_Ifstp,        O_ST6,   O_NONE,  O_NONE,  P_none },
+  /* 1F */  { UD_Ifstp,        O_ST7,   O_NONE,  O_NONE,  P_none },
+  /* 20 */  { UD_Ifucom,       O_ST0,   O_NONE,  O_NONE,  P_none },
+  /* 21 */  { UD_Ifucom,       O_ST1,   O_NONE,  O_NONE,  P_none },
+  /* 22 */  { UD_Ifucom,       O_ST2,   O_NONE,  O_NONE,  P_none },
+  /* 23 */  { UD_Ifucom,       O_ST3,   O_NONE,  O_NONE,  P_none },
+  /* 24 */  { UD_Ifucom,       O_ST4,   O_NONE,  O_NONE,  P_none },
+  /* 25 */  { UD_Ifucom,       O_ST5,   O_NONE,  O_NONE,  P_none },
+  /* 26 */  { UD_Ifucom,       O_ST6,   O_NONE,  O_NONE,  P_none },
+  /* 27 */  { UD_Ifucom,       O_ST7,   O_NONE,  O_NONE,  P_none },
+  /* 28 */  { UD_Ifucomp,      O_ST0,   O_NONE,  O_NONE,  P_none },
+  /* 29 */  { UD_Ifucomp,      O_ST1,   O_NONE,  O_NONE,  P_none },
+  /* 2A */  { UD_Ifucomp,      O_ST2,   O_NONE,  O_NONE,  P_none },
+  /* 2B */  { UD_Ifucomp,      O_ST3,   O_NONE,  O_NONE,  P_none },
+  /* 2C */  { UD_Ifucomp,      O_ST4,   O_NONE,  O_NONE,  P_none },
+  /* 2D */  { UD_Ifucomp,      O_ST5,   O_NONE,  O_NONE,  P_none },
+  /* 2E */  { UD_Ifucomp,      O_ST6,   O_NONE,  O_NONE,  P_none },
+  /* 2F */  { UD_Ifucomp,      O_ST7,   O_NONE,  O_NONE,  P_none },
+  /* 30 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 31 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 32 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 33 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 34 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 35 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 36 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 37 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 38 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 39 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__1byte__op_de__mod[2] = {
+  /* 00 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_DE__MOD__OP_00__REG },
+  /* 01 */  { UD_Igrp_x87,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_DE__MOD__OP_01__X87 },
+};
+
+static struct ud_itab_entry itab__1byte__op_de__mod__op_00__reg[8] = {
+  /* 00 */  { UD_Ifiadd,       O_Mw,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Ifimul,       O_Mw,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 02 */  { UD_Ificom,       O_Mw,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 03 */  { UD_Ificomp,      O_Mw,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 04 */  { UD_Ifisub,       O_Mw,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 05 */  { UD_Ifisubr,      O_Mw,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 06 */  { UD_Ifidiv,       O_Mw,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 07 */  { UD_Ifidivr,      O_Mw,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+};
+
+static struct ud_itab_entry itab__1byte__op_de__mod__op_01__x87[64] = {
+  /* 00 */  { UD_Ifaddp,       O_ST0,   O_ST0,   O_NONE,  P_none },
+  /* 01 */  { UD_Ifaddp,       O_ST1,   O_ST0,   O_NONE,  P_none },
+  /* 02 */  { UD_Ifaddp,       O_ST2,   O_ST0,   O_NONE,  P_none },
+  /* 03 */  { UD_Ifaddp,       O_ST3,   O_ST0,   O_NONE,  P_none },
+  /* 04 */  { UD_Ifaddp,       O_ST4,   O_ST0,   O_NONE,  P_none },
+  /* 05 */  { UD_Ifaddp,       O_ST5,   O_ST0,   O_NONE,  P_none },
+  /* 06 */  { UD_Ifaddp,       O_ST6,   O_ST0,   O_NONE,  P_none },
+  /* 07 */  { UD_Ifaddp,       O_ST7,   O_ST0,   O_NONE,  P_none },
+  /* 08 */  { UD_Ifmulp,       O_ST0,   O_ST0,   O_NONE,  P_none },
+  /* 09 */  { UD_Ifmulp,       O_ST1,   O_ST0,   O_NONE,  P_none },
+  /* 0A */  { UD_Ifmulp,       O_ST2,   O_ST0,   O_NONE,  P_none },
+  /* 0B */  { UD_Ifmulp,       O_ST3,   O_ST0,   O_NONE,  P_none },
+  /* 0C */  { UD_Ifmulp,       O_ST4,   O_ST0,   O_NONE,  P_none },
+  /* 0D */  { UD_Ifmulp,       O_ST5,   O_ST0,   O_NONE,  P_none },
+  /* 0E */  { UD_Ifmulp,       O_ST6,   O_ST0,   O_NONE,  P_none },
+  /* 0F */  { UD_Ifmulp,       O_ST7,   O_ST0,   O_NONE,  P_none },
+  /* 10 */  { UD_Ifcomp5,      O_ST0,   O_NONE,  O_NONE,  P_none },
+  /* 11 */  { UD_Ifcomp5,      O_ST1,   O_NONE,  O_NONE,  P_none },
+  /* 12 */  { UD_Ifcomp5,      O_ST2,   O_NONE,  O_NONE,  P_none },
+  /* 13 */  { UD_Ifcomp5,      O_ST3,   O_NONE,  O_NONE,  P_none },
+  /* 14 */  { UD_Ifcomp5,      O_ST4,   O_NONE,  O_NONE,  P_none },
+  /* 15 */  { UD_Ifcomp5,      O_ST5,   O_NONE,  O_NONE,  P_none },
+  /* 16 */  { UD_Ifcomp5,      O_ST6,   O_NONE,  O_NONE,  P_none },
+  /* 17 */  { UD_Ifcomp5,      O_ST7,   O_NONE,  O_NONE,  P_none },
+  /* 18 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 19 */  { UD_Ifcompp,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 1A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 1B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 1C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 1D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 1E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 1F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 20 */  { UD_Ifsubrp,      O_ST0,   O_ST0,   O_NONE,  P_none },
+  /* 21 */  { UD_Ifsubrp,      O_ST1,   O_ST0,   O_NONE,  P_none },
+  /* 22 */  { UD_Ifsubrp,      O_ST2,   O_ST0,   O_NONE,  P_none },
+  /* 23 */  { UD_Ifsubrp,      O_ST3,   O_ST0,   O_NONE,  P_none },
+  /* 24 */  { UD_Ifsubrp,      O_ST4,   O_ST0,   O_NONE,  P_none },
+  /* 25 */  { UD_Ifsubrp,      O_ST5,   O_ST0,   O_NONE,  P_none },
+  /* 26 */  { UD_Ifsubrp,      O_ST6,   O_ST0,   O_NONE,  P_none },
+  /* 27 */  { UD_Ifsubrp,      O_ST7,   O_ST0,   O_NONE,  P_none },
+  /* 28 */  { UD_Ifsubp,       O_ST0,   O_ST0,   O_NONE,  P_none },
+  /* 29 */  { UD_Ifsubp,       O_ST1,   O_ST0,   O_NONE,  P_none },
+  /* 2A */  { UD_Ifsubp,       O_ST2,   O_ST0,   O_NONE,  P_none },
+  /* 2B */  { UD_Ifsubp,       O_ST3,   O_ST0,   O_NONE,  P_none },
+  /* 2C */  { UD_Ifsubp,       O_ST4,   O_ST0,   O_NONE,  P_none },
+  /* 2D */  { UD_Ifsubp,       O_ST5,   O_ST0,   O_NONE,  P_none },
+  /* 2E */  { UD_Ifsubp,       O_ST6,   O_ST0,   O_NONE,  P_none },
+  /* 2F */  { UD_Ifsubp,       O_ST7,   O_ST0,   O_NONE,  P_none },
+  /* 30 */  { UD_Ifdivrp,      O_ST0,   O_ST0,   O_NONE,  P_none },
+  /* 31 */  { UD_Ifdivrp,      O_ST1,   O_ST0,   O_NONE,  P_none },
+  /* 32 */  { UD_Ifdivrp,      O_ST2,   O_ST0,   O_NONE,  P_none },
+  /* 33 */  { UD_Ifdivrp,      O_ST3,   O_ST0,   O_NONE,  P_none },
+  /* 34 */  { UD_Ifdivrp,      O_ST4,   O_ST0,   O_NONE,  P_none },
+  /* 35 */  { UD_Ifdivrp,      O_ST5,   O_ST0,   O_NONE,  P_none },
+  /* 36 */  { UD_Ifdivrp,      O_ST6,   O_ST0,   O_NONE,  P_none },
+  /* 37 */  { UD_Ifdivrp,      O_ST7,   O_ST0,   O_NONE,  P_none },
+  /* 38 */  { UD_Ifdivp,       O_ST0,   O_ST0,   O_NONE,  P_none },
+  /* 39 */  { UD_Ifdivp,       O_ST1,   O_ST0,   O_NONE,  P_none },
+  /* 3A */  { UD_Ifdivp,       O_ST2,   O_ST0,   O_NONE,  P_none },
+  /* 3B */  { UD_Ifdivp,       O_ST3,   O_ST0,   O_NONE,  P_none },
+  /* 3C */  { UD_Ifdivp,       O_ST4,   O_ST0,   O_NONE,  P_none },
+  /* 3D */  { UD_Ifdivp,       O_ST5,   O_ST0,   O_NONE,  P_none },
+  /* 3E */  { UD_Ifdivp,       O_ST6,   O_ST0,   O_NONE,  P_none },
+  /* 3F */  { UD_Ifdivp,       O_ST7,   O_ST0,   O_NONE,  P_none },
+};
+
+static struct ud_itab_entry itab__1byte__op_df__mod[2] = {
+  /* 00 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_DF__MOD__OP_00__REG },
+  /* 01 */  { UD_Igrp_x87,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_DF__MOD__OP_01__X87 },
+};
+
+static struct ud_itab_entry itab__1byte__op_df__mod__op_00__reg[8] = {
+  /* 00 */  { UD_Ifild,        O_Mw,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Ifisttp,      O_Mw,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 02 */  { UD_Ifist,        O_Mw,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 03 */  { UD_Ifistp,       O_Mw,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 04 */  { UD_Ifbld,        O_Mt,    O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 05 */  { UD_Ifild,        O_Mq,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 06 */  { UD_Ifbstp,       O_Mt,    O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 07 */  { UD_Ifistp,       O_Mq,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+};
+
+static struct ud_itab_entry itab__1byte__op_df__mod__op_01__x87[64] = {
+  /* 00 */  { UD_Iffreep,      O_ST0,   O_NONE,  O_NONE,  P_none },
+  /* 01 */  { UD_Iffreep,      O_ST1,   O_NONE,  O_NONE,  P_none },
+  /* 02 */  { UD_Iffreep,      O_ST2,   O_NONE,  O_NONE,  P_none },
+  /* 03 */  { UD_Iffreep,      O_ST3,   O_NONE,  O_NONE,  P_none },
+  /* 04 */  { UD_Iffreep,      O_ST4,   O_NONE,  O_NONE,  P_none },
+  /* 05 */  { UD_Iffreep,      O_ST5,   O_NONE,  O_NONE,  P_none },
+  /* 06 */  { UD_Iffreep,      O_ST6,   O_NONE,  O_NONE,  P_none },
+  /* 07 */  { UD_Iffreep,      O_ST7,   O_NONE,  O_NONE,  P_none },
+  /* 08 */  { UD_Ifxch7,       O_ST0,   O_NONE,  O_NONE,  P_none },
+  /* 09 */  { UD_Ifxch7,       O_ST1,   O_NONE,  O_NONE,  P_none },
+  /* 0A */  { UD_Ifxch7,       O_ST2,   O_NONE,  O_NONE,  P_none },
+  /* 0B */  { UD_Ifxch7,       O_ST3,   O_NONE,  O_NONE,  P_none },
+  /* 0C */  { UD_Ifxch7,       O_ST4,   O_NONE,  O_NONE,  P_none },
+  /* 0D */  { UD_Ifxch7,       O_ST5,   O_NONE,  O_NONE,  P_none },
+  /* 0E */  { UD_Ifxch7,       O_ST6,   O_NONE,  O_NONE,  P_none },
+  /* 0F */  { UD_Ifxch7,       O_ST7,   O_NONE,  O_NONE,  P_none },
+  /* 10 */  { UD_Ifstp8,       O_ST0,   O_NONE,  O_NONE,  P_none },
+  /* 11 */  { UD_Ifstp8,       O_ST1,   O_NONE,  O_NONE,  P_none },
+  /* 12 */  { UD_Ifstp8,       O_ST2,   O_NONE,  O_NONE,  P_none },
+  /* 13 */  { UD_Ifstp8,       O_ST3,   O_NONE,  O_NONE,  P_none },
+  /* 14 */  { UD_Ifstp8,       O_ST4,   O_NONE,  O_NONE,  P_none },
+  /* 15 */  { UD_Ifstp8,       O_ST5,   O_NONE,  O_NONE,  P_none },
+  /* 16 */  { UD_Ifstp8,       O_ST6,   O_NONE,  O_NONE,  P_none },
+  /* 17 */  { UD_Ifstp8,       O_ST7,   O_NONE,  O_NONE,  P_none },
+  /* 18 */  { UD_Ifstp9,       O_ST0,   O_NONE,  O_NONE,  P_none },
+  /* 19 */  { UD_Ifstp9,       O_ST1,   O_NONE,  O_NONE,  P_none },
+  /* 1A */  { UD_Ifstp9,       O_ST2,   O_NONE,  O_NONE,  P_none },
+  /* 1B */  { UD_Ifstp9,       O_ST3,   O_NONE,  O_NONE,  P_none },
+  /* 1C */  { UD_Ifstp9,       O_ST4,   O_NONE,  O_NONE,  P_none },
+  /* 1D */  { UD_Ifstp9,       O_ST5,   O_NONE,  O_NONE,  P_none },
+  /* 1E */  { UD_Ifstp9,       O_ST6,   O_NONE,  O_NONE,  P_none },
+  /* 1F */  { UD_Ifstp9,       O_ST7,   O_NONE,  O_NONE,  P_none },
+  /* 20 */  { UD_Ifnstsw,      O_AX,    O_NONE,  O_NONE,  P_none },
+  /* 21 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 22 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 23 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 24 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 25 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 26 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 27 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 28 */  { UD_Ifucomip,     O_ST0,   O_ST0,   O_NONE,  P_none },
+  /* 29 */  { UD_Ifucomip,     O_ST0,   O_ST1,   O_NONE,  P_none },
+  /* 2A */  { UD_Ifucomip,     O_ST0,   O_ST2,   O_NONE,  P_none },
+  /* 2B */  { UD_Ifucomip,     O_ST0,   O_ST3,   O_NONE,  P_none },
+  /* 2C */  { UD_Ifucomip,     O_ST0,   O_ST4,   O_NONE,  P_none },
+  /* 2D */  { UD_Ifucomip,     O_ST0,   O_ST5,   O_NONE,  P_none },
+  /* 2E */  { UD_Ifucomip,     O_ST0,   O_ST6,   O_NONE,  P_none },
+  /* 2F */  { UD_Ifucomip,     O_ST0,   O_ST7,   O_NONE,  P_none },
+  /* 30 */  { UD_Ifcomip,      O_ST0,   O_ST0,   O_NONE,  P_none },
+  /* 31 */  { UD_Ifcomip,      O_ST0,   O_ST1,   O_NONE,  P_none },
+  /* 32 */  { UD_Ifcomip,      O_ST0,   O_ST2,   O_NONE,  P_none },
+  /* 33 */  { UD_Ifcomip,      O_ST0,   O_ST3,   O_NONE,  P_none },
+  /* 34 */  { UD_Ifcomip,      O_ST0,   O_ST4,   O_NONE,  P_none },
+  /* 35 */  { UD_Ifcomip,      O_ST0,   O_ST5,   O_NONE,  P_none },
+  /* 36 */  { UD_Ifcomip,      O_ST0,   O_ST6,   O_NONE,  P_none },
+  /* 37 */  { UD_Ifcomip,      O_ST0,   O_ST7,   O_NONE,  P_none },
+  /* 38 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 39 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__1byte__op_e3__asize[3] = {
+  /* 00 */  { UD_Ijcxz,        O_Jb,    O_NONE,  O_NONE,  P_aso },
+  /* 01 */  { UD_Ijecxz,       O_Jb,    O_NONE,  O_NONE,  P_aso },
+  /* 02 */  { UD_Ijrcxz,       O_Jb,    O_NONE,  O_NONE,  P_aso },
+};
+
+static struct ud_itab_entry itab__1byte__op_f6__reg[8] = {
+  /* 00 */  { UD_Itest,        O_Eb,    O_Ib,    O_NONE,  P_c1|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Itest,        O_Eb,    O_Ib,    O_NONE,  P_c1|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 02 */  { UD_Inot,         O_Eb,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 03 */  { UD_Ineg,         O_Eb,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 04 */  { UD_Imul,         O_Eb,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 05 */  { UD_Iimul,        O_Eb,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 06 */  { UD_Idiv,         O_Eb,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 07 */  { UD_Iidiv,        O_Eb,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+};
+
+static struct ud_itab_entry itab__1byte__op_f7__reg[8] = {
+  /* 00 */  { UD_Itest,        O_Ev,    O_Iz,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Itest,        O_Ev,    O_Iz,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 02 */  { UD_Inot,         O_Ev,    O_NONE,  O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 03 */  { UD_Ineg,         O_Ev,    O_NONE,  O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 04 */  { UD_Imul,         O_Ev,    O_NONE,  O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 05 */  { UD_Iimul,        O_Ev,    O_NONE,  O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 06 */  { UD_Idiv,         O_Ev,    O_NONE,  O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 07 */  { UD_Iidiv,        O_Ev,    O_NONE,  O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+};
+
+static struct ud_itab_entry itab__1byte__op_fe__reg[8] = {
+  /* 00 */  { UD_Iinc,         O_Eb,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Idec,         O_Eb,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 02 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 03 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 04 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 05 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 06 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 07 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__1byte__op_ff__reg[8] = {
+  /* 00 */  { UD_Iinc,         O_Ev,    O_NONE,  O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Idec,         O_Ev,    O_NONE,  O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 02 */  { UD_Icall,        O_Ev,    O_NONE,  O_NONE,  P_c1|P_def64|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 03 */  { UD_Icall,        O_Ep,    O_NONE,  O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 04 */  { UD_Ijmp,         O_Ev,    O_NONE,  O_NONE,  P_c1|P_def64|P_depM|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 05 */  { UD_Ijmp,         O_Ep,    O_NONE,  O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 06 */  { UD_Ipush,        O_Ev,    O_NONE,  O_NONE,  P_c1|P_def64|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 07 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__3dnow[256] = {
+  /* 00 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 02 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 03 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 04 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 05 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 06 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 07 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 08 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 09 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 0A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 0B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 0C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 0D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 0E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 0F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 10 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 11 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 12 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 13 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 14 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 15 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 16 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 17 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 18 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 19 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 1A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 1B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 1C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 1D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 1E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 1F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 20 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 21 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 22 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 23 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 24 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 25 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 26 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 27 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 28 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 29 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 2A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 2B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 2C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 2D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 2E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 2F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 30 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 31 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 32 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 33 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 34 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 35 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 36 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 37 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 38 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 39 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 40 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 41 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 42 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 43 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 44 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 45 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 46 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 47 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 48 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 49 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 4A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 4B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 4C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 4D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 4E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 4F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 50 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 51 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 52 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 53 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 54 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 55 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 56 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 57 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 58 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 59 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 5A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 5B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 5C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 5D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 5E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 5F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 60 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 61 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 62 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 63 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 64 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 65 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 66 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 67 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 68 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 69 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 6A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 6B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 6C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 6D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 6E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 6F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 70 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 71 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 72 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 73 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 74 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 75 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 76 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 77 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 78 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 79 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 7A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 7B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 7C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 7D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 7E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 7F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 80 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 81 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 82 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 83 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 84 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 85 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 86 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 87 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 88 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 89 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 8A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 8B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 8C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 8D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 8E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 8F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 90 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 91 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 92 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 93 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 94 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 95 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 96 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 97 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 98 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 99 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 9A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 9B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 9C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 9D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 9E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 9F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A0 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A1 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A2 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A3 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A4 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A5 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A6 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A7 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A8 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A9 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* AA */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* AB */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* AC */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* AD */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* AE */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* AF */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B0 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B1 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B2 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B3 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B4 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B5 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B6 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B7 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B8 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B9 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* BA */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* BB */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* BC */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* BD */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* BE */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* BF */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* C0 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* C1 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* C2 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* C3 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* C4 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* C5 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* C6 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* C7 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* C8 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* C9 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* CA */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* CB */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* CC */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* CD */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* CE */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* CF */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* D0 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* D1 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* D2 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* D3 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* D4 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* D5 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* D6 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* D7 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* D8 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* D9 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* DA */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* DB */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* DC */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* DD */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* DE */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* DF */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* E0 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* E1 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* E2 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* E3 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* E4 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* E5 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* E6 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* E7 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* E8 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* E9 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* EA */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* EB */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* EC */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* ED */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* EE */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* EF */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* F0 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* F1 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* F2 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* F3 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* F4 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* F5 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* F6 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* F7 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* F8 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* F9 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* FA */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* FB */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* FC */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* FD */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* FE */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* FF */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__pfx_sse66__0f[256] = {
+  /* 00 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 02 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 03 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 04 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 05 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 06 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 07 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 08 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 09 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 0A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 0B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 0C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 0D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 0E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 0F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 10 */  { UD_Imovupd,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 11 */  { UD_Imovupd,      O_W,     O_V,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 12 */  { UD_Imovlpd,      O_V,     O_M,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 13 */  { UD_Imovlpd,      O_M,     O_V,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 14 */  { UD_Iunpcklpd,    O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 15 */  { UD_Iunpckhpd,    O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 16 */  { UD_Imovhpd,      O_V,     O_M,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 17 */  { UD_Imovhpd,      O_M,     O_V,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 18 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 19 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 1A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 1B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 1C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 1D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 1E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 1F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 20 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 21 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 22 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 23 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 24 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 25 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 26 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 27 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 28 */  { UD_Imovapd,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 29 */  { UD_Imovapd,      O_W,     O_V,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 2A */  { UD_Icvtpi2pd,    O_V,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 2B */  { UD_Imovntpd,     O_M,     O_V,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 2C */  { UD_Icvttpd2pi,   O_P,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 2D */  { UD_Icvtpd2pi,    O_P,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 2E */  { UD_Iucomisd,     O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 2F */  { UD_Icomisd,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 30 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 31 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 32 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 33 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 34 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 35 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 36 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 37 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 38 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 39 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 40 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 41 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 42 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 43 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 44 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 45 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 46 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 47 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 48 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 49 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 4A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 4B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 4C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 4D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 4E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 4F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 50 */  { UD_Imovmskpd,    O_Gd,    O_VR,    O_NONE,  P_oso|P_rexr|P_rexb },
+  /* 51 */  { UD_Isqrtpd,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 52 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 53 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 54 */  { UD_Iandpd,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 55 */  { UD_Iandnpd,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 56 */  { UD_Iorpd,        O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 57 */  { UD_Ixorpd,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 58 */  { UD_Iaddpd,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 59 */  { UD_Imulpd,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 5A */  { UD_Icvtpd2ps,    O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 5B */  { UD_Icvtps2dq,    O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 5C */  { UD_Isubpd,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 5D */  { UD_Iminpd,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 5E */  { UD_Idivpd,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 5F */  { UD_Imaxpd,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 60 */  { UD_Ipunpcklbw,   O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 61 */  { UD_Ipunpcklwd,   O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 62 */  { UD_Ipunpckldq,   O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 63 */  { UD_Ipacksswb,    O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 64 */  { UD_Ipcmpgtb,     O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 65 */  { UD_Ipcmpgtw,     O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 66 */  { UD_Ipcmpgtd,     O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 67 */  { UD_Ipackuswb,    O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 68 */  { UD_Ipunpckhbw,   O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 69 */  { UD_Ipunpckhwd,   O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 6A */  { UD_Ipunpckhdq,   O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 6B */  { UD_Ipackssdw,    O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 6C */  { UD_Ipunpcklqdq,  O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 6D */  { UD_Ipunpckhqdq,  O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 6E */  { UD_Imovd,        O_V,     O_Ex,    O_NONE,  P_c2|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 6F */  { UD_Imovdqa,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 70 */  { UD_Ipshufd,      O_V,     O_W,     O_Ib,    P_aso|P_rexr|P_rexx|P_rexb },
+  /* 71 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__PFX_SSE66__0F__OP_71__REG },
+  /* 72 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__PFX_SSE66__0F__OP_72__REG },
+  /* 73 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__PFX_SSE66__0F__OP_73__REG },
+  /* 74 */  { UD_Ipcmpeqb,     O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 75 */  { UD_Ipcmpeqw,     O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 76 */  { UD_Ipcmpeqd,     O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 77 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 78 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 79 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 7A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 7B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 7C */  { UD_Ihaddpd,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 7D */  { UD_Ihsubpd,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 7E */  { UD_Imovd,        O_Ex,    O_V,     O_NONE,  P_c1|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 7F */  { UD_Imovdqa,      O_W,     O_V,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 80 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 81 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 82 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 83 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 84 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 85 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 86 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 87 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 88 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 89 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 8A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 8B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 8C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 8D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 8E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 8F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 90 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 91 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 92 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 93 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 94 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 95 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 96 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 97 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 98 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 99 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 9A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 9B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 9C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 9D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 9E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 9F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A0 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A1 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A2 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A3 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A4 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A5 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A6 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A7 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A8 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A9 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* AA */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* AB */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* AC */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* AD */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* AE */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* AF */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B0 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B1 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B2 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B3 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B4 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B5 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B6 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B7 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B8 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B9 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* BA */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* BB */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* BC */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* BD */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* BE */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* BF */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* C0 */  { UD_Ixadd,        O_Eb,    O_Gb,    O_NONE,  P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* C1 */  { UD_Ixadd,        O_Ev,    O_Gv,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* C2 */  { UD_Icmppd,       O_V,     O_W,     O_Ib,    P_aso|P_rexr|P_rexx|P_rexb },
+  /* C3 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* C4 */  { UD_Ipinsrw,      O_V,     O_Ew,    O_Ib,    P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* C5 */  { UD_Ipextrw,      O_Gd,    O_VR,    O_Ib,    P_aso|P_rexr|P_rexb },
+  /* C6 */  { UD_Ishufpd,      O_V,     O_W,     O_Ib,    P_aso|P_rexr|P_rexx|P_rexb },
+  /* C7 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__PFX_SSE66__0F__OP_C7__REG },
+  /* C8 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* C9 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* CA */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* CB */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* CC */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* CD */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* CE */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* CF */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* D0 */  { UD_Iaddsubpd,    O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* D1 */  { UD_Ipsrlw,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* D2 */  { UD_Ipsrld,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* D3 */  { UD_Ipsrlq,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* D4 */  { UD_Ipaddq,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* D5 */  { UD_Ipmullw,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* D6 */  { UD_Imovq,        O_W,     O_V,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* D7 */  { UD_Ipmovmskb,    O_Gd,    O_VR,    O_NONE,  P_rexr|P_rexb },
+  /* D8 */  { UD_Ipsubusb,     O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* D9 */  { UD_Ipsubusw,     O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* DA */  { UD_Ipminub,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* DB */  { UD_Ipand,        O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* DC */  { UD_Ipaddusb,     O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* DD */  { UD_Ipaddusw,     O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* DE */  { UD_Ipmaxub,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* DF */  { UD_Ipandn,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* E0 */  { UD_Ipavgb,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* E1 */  { UD_Ipsraw,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* E2 */  { UD_Ipsrad,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* E3 */  { UD_Ipavgw,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* E4 */  { UD_Ipmulhuw,     O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* E5 */  { UD_Ipmulhw,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* E6 */  { UD_Icvttpd2dq,   O_V,     O_W,     O_NONE,  P_none },
+  /* E7 */  { UD_Imovntdq,     O_M,     O_V,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* E8 */  { UD_Ipsubsb,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* E9 */  { UD_Ipsubsw,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* EA */  { UD_Ipminsw,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* EB */  { UD_Ipor,         O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* EC */  { UD_Ipaddsb,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* ED */  { UD_Ipaddsw,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* EE */  { UD_Ipmaxsw,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* EF */  { UD_Ipxor,        O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* F0 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* F1 */  { UD_Ipsllw,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* F2 */  { UD_Ipslld,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* F3 */  { UD_Ipsllq,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* F4 */  { UD_Ipmuludq,     O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* F5 */  { UD_Ipmaddwd,     O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* F6 */  { UD_Ipsadbw,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* F7 */  { UD_Imaskmovq,    O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* F8 */  { UD_Ipsubb,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* F9 */  { UD_Ipsubw,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* FA */  { UD_Ipsubd,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* FB */  { UD_Ipsubq,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* FC */  { UD_Ipaddb,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* FD */  { UD_Ipaddw,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* FE */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* FF */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__pfx_sse66__0f__op_71__reg[8] = {
+  /* 00 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 02 */  { UD_Ipsrlw,       O_VR,    O_Ib,    O_NONE,  P_rexb },
+  /* 03 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 04 */  { UD_Ipsraw,       O_VR,    O_Ib,    O_NONE,  P_rexb },
+  /* 05 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 06 */  { UD_Ipsllw,       O_VR,    O_Ib,    O_NONE,  P_rexb },
+  /* 07 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__pfx_sse66__0f__op_72__reg[8] = {
+  /* 00 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 02 */  { UD_Ipsrld,       O_VR,    O_Ib,    O_NONE,  P_rexb },
+  /* 03 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 04 */  { UD_Ipsrad,       O_VR,    O_Ib,    O_NONE,  P_rexb },
+  /* 05 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 06 */  { UD_Ipslld,       O_VR,    O_Ib,    O_NONE,  P_rexb },
+  /* 07 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__pfx_sse66__0f__op_73__reg[8] = {
+  /* 00 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 02 */  { UD_Ipsrlq,       O_VR,    O_Ib,    O_NONE,  P_rexb },
+  /* 03 */  { UD_Ipsrldq,      O_VR,    O_Ib,    O_NONE,  P_rexb },
+  /* 04 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 05 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 06 */  { UD_Ipsllq,       O_VR,    O_Ib,    O_NONE,  P_rexb },
+  /* 07 */  { UD_Ipslldq,      O_VR,    O_Ib,    O_NONE,  P_rexb },
+};
+
+static struct ud_itab_entry itab__pfx_sse66__0f__op_c7__reg[8] = {
+  /* 00 */  { UD_Igrp_vendor,  O_NONE, O_NONE, O_NONE,    ITAB__PFX_SSE66__0F__OP_C7__REG__OP_00__VENDOR },
+  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 02 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 03 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 04 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 05 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 06 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 07 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__pfx_sse66__0f__op_c7__reg__op_00__vendor[2] = {
+  /* 00 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 01 */  { UD_Ivmclear,     O_Mq,    O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+};
+
+static struct ud_itab_entry itab__pfx_ssef2__0f[256] = {
+  /* 00 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 02 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 03 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 04 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 05 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 06 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 07 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 08 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 09 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 0A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 0B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 0C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 0D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 0E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 0F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 10 */  { UD_Imovsd,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 11 */  { UD_Imovsd,       O_W,     O_V,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 12 */  { UD_Imovddup,     O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 13 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 14 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 15 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 16 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 17 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 18 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 19 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 1A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 1B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 1C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 1D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 1E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 1F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 20 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 21 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 22 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 23 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 24 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 25 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 26 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 27 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 28 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 29 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 2A */  { UD_Icvtsi2sd,    O_V,     O_Ex,    O_NONE,  P_c2|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 2B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 2C */  { UD_Icvttsd2si,   O_Gvw,   O_W,     O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 2D */  { UD_Icvtsd2si,    O_Gvw,   O_W,     O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 2E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 2F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 30 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 31 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 32 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 33 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 34 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 35 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 36 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 37 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 38 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 39 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 40 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 41 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 42 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 43 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 44 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 45 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 46 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 47 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 48 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 49 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 4A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 4B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 4C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 4D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 4E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 4F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 50 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 51 */  { UD_Isqrtsd,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 52 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 53 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 54 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 55 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 56 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 57 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 58 */  { UD_Iaddsd,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 59 */  { UD_Imulsd,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 5A */  { UD_Icvtsd2ss,    O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 5B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 5C */  { UD_Isubsd,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 5D */  { UD_Iminsd,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 5E */  { UD_Idivsd,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 5F */  { UD_Imaxsd,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 60 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 61 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 62 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 63 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 64 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 65 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 66 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 67 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 68 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 69 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 6A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 6B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 6C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 6D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 6E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 6F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 70 */  { UD_Ipshuflw,     O_V,     O_W,     O_Ib,    P_aso|P_rexr|P_rexx|P_rexb },
+  /* 71 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 72 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 73 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 74 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 75 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 76 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 77 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 78 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 79 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 7A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 7B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 7C */  { UD_Ihaddps,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 7D */  { UD_Ihsubps,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 7E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 7F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 80 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 81 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 82 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 83 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 84 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 85 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 86 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 87 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 88 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 89 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 8A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 8B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 8C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 8D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 8E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 8F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 90 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 91 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 92 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 93 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 94 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 95 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 96 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 97 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 98 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 99 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 9A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 9B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 9C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 9D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 9E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 9F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A0 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A1 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A2 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A3 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A4 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A5 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A6 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A7 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A8 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A9 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* AA */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* AB */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* AC */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* AD */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* AE */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* AF */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B0 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B1 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B2 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B3 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B4 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B5 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B6 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B7 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B8 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B9 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* BA */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* BB */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* BC */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* BD */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* BE */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* BF */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* C0 */  { UD_Ixadd,        O_Eb,    O_Gb,    O_NONE,  P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* C1 */  { UD_Ixadd,        O_Ev,    O_Gv,    O_NONE,  P_aso|P_oso|P_rexr|P_rexx|P_rexb },
+  /* C2 */  { UD_Icmpsd,       O_V,     O_W,     O_Ib,    P_aso|P_rexr|P_rexx|P_rexb },
+  /* C3 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* C4 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* C5 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* C6 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* C7 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* C8 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* C9 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* CA */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* CB */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* CC */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* CD */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* CE */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* CF */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* D0 */  { UD_Iaddsubps,    O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* D1 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* D2 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* D3 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* D4 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* D5 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* D6 */  { UD_Imovdq2q,     O_P,     O_VR,    O_NONE,  P_aso|P_rexb },
+  /* D7 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* D8 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* D9 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* DA */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* DB */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* DC */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* DD */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* DE */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* DF */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* E0 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* E1 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* E2 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* E3 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* E4 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* E5 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* E6 */  { UD_Icvtpd2dq,    O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* E7 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* E8 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* E9 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* EA */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* EB */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* EC */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* ED */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* EE */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* EF */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* F0 */  { UD_Ilddqu,       O_V,     O_M,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* F1 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* F2 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* F3 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* F4 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* F5 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* F6 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* F7 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* F8 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* F9 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* FA */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* FB */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* FC */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* FD */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* FE */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* FF */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__pfx_ssef3__0f[256] = {
+  /* 00 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 02 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 03 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 04 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 05 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 06 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 07 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 08 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 09 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 0A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 0B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 0C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 0D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 0E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 0F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 10 */  { UD_Imovss,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 11 */  { UD_Imovss,       O_W,     O_V,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 12 */  { UD_Imovsldup,    O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 13 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 14 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 15 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 16 */  { UD_Imovshdup,    O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 17 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 18 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 19 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 1A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 1B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 1C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 1D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 1E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 1F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 20 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 21 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 22 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 23 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 24 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 25 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 26 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 27 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 28 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 29 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 2A */  { UD_Icvtsi2ss,    O_V,     O_Ex,    O_NONE,  P_c2|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 2B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 2C */  { UD_Icvttss2si,   O_Gvw,   O_W,     O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 2D */  { UD_Icvtss2si,    O_Gvw,   O_W,     O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 2E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 2F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 30 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 31 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 32 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 33 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 34 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 35 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 36 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 37 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 38 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 39 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 40 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 41 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 42 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 43 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 44 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 45 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 46 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 47 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 48 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 49 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 4A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 4B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 4C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 4D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 4E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 4F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 50 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 51 */  { UD_Isqrtss,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 52 */  { UD_Irsqrtss,     O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 53 */  { UD_Ircpss,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 54 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 55 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 56 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 57 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 58 */  { UD_Iaddss,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 59 */  { UD_Imulss,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 5A */  { UD_Icvtss2sd,    O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 5B */  { UD_Icvttps2dq,   O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 5C */  { UD_Isubss,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 5D */  { UD_Iminss,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 5E */  { UD_Idivss,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 5F */  { UD_Imaxss,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 60 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 61 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 62 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 63 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 64 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 65 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 66 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 67 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 68 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 69 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 6A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 6B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 6C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 6D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 6E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 6F */  { UD_Imovdqu,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 70 */  { UD_Ipshufhw,     O_V,     O_W,     O_Ib,    P_aso|P_rexr|P_rexx|P_rexb },
+  /* 71 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 72 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 73 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 74 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 75 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 76 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 77 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 78 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 79 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 7A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 7B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 7C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 7D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 7E */  { UD_Imovq,        O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 7F */  { UD_Imovdqu,      O_W,     O_V,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 80 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 81 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 82 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 83 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 84 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 85 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 86 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 87 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 88 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 89 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 8A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 8B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 8C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 8D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 8E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 8F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 90 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 91 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 92 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 93 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 94 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 95 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 96 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 97 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 98 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 99 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 9A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 9B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 9C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 9D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 9E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 9F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A0 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A1 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A2 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A3 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A4 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A5 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A6 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A7 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A8 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A9 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* AA */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* AB */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* AC */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* AD */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* AE */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* AF */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B0 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B1 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B2 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B3 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B4 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B5 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B6 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B7 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B8 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B9 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* BA */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* BB */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* BC */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* BD */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* BE */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* BF */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* C0 */  { UD_Ixadd,        O_Eb,    O_Gb,    O_NONE,  P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* C1 */  { UD_Ixadd,        O_Ev,    O_Gv,    O_NONE,  P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* C2 */  { UD_Icmpss,       O_V,     O_W,     O_Ib,    P_aso|P_rexr|P_rexx|P_rexb },
+  /* C3 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* C4 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* C5 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* C6 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* C7 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__PFX_SSEF3__0F__OP_C7__REG },
+  /* C8 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* C9 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* CA */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* CB */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* CC */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* CD */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* CE */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* CF */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* D0 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* D1 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* D2 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* D3 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* D4 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* D5 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* D6 */  { UD_Imovq2dq,     O_V,     O_PR,    O_NONE,  P_aso },
+  /* D7 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* D8 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* D9 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* DA */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* DB */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* DC */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* DD */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* DE */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* DF */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* E0 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* E1 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* E2 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* E3 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* E4 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* E5 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* E6 */  { UD_Icvtdq2pd,    O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* E7 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* E8 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* E9 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* EA */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* EB */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* EC */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* ED */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* EE */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* EF */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* F0 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* F1 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* F2 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* F3 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* F4 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* F5 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* F6 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* F7 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* F8 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* F9 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* FA */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* FB */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* FC */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* FD */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* FE */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* FF */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__pfx_ssef3__0f__op_c7__reg[8] = {
+  /* 00 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 02 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 03 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 04 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 05 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 06 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 07 */  { UD_Igrp_vendor,  O_NONE, O_NONE, O_NONE,    ITAB__PFX_SSEF3__0F__OP_C7__REG__OP_07__VENDOR },
+};
+
+static struct ud_itab_entry itab__pfx_ssef3__0f__op_c7__reg__op_07__vendor[2] = {
+  /* 00 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 01 */  { UD_Ivmxon,       O_Mq,    O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+};
+
+/* the order of this table matches enum ud_itab_index */
+struct ud_itab_entry * ud_itab_list[] = {
+  itab__0f,
+  itab__0f__op_00__reg,
+  itab__0f__op_01__reg,
+  itab__0f__op_01__reg__op_00__mod,
+  itab__0f__op_01__reg__op_00__mod__op_01__rm,
+  itab__0f__op_01__reg__op_00__mod__op_01__rm__op_01__vendor,
+  itab__0f__op_01__reg__op_00__mod__op_01__rm__op_03__vendor,
+  itab__0f__op_01__reg__op_00__mod__op_01__rm__op_04__vendor,
+  itab__0f__op_01__reg__op_01__mod,
+  itab__0f__op_01__reg__op_01__mod__op_01__rm,
+  itab__0f__op_01__reg__op_02__mod,
+  itab__0f__op_01__reg__op_03__mod,
+  itab__0f__op_01__reg__op_03__mod__op_01__rm,
+  itab__0f__op_01__reg__op_03__mod__op_01__rm__op_00__vendor,
+  itab__0f__op_01__reg__op_03__mod__op_01__rm__op_01__vendor,
+  itab__0f__op_01__reg__op_03__mod__op_01__rm__op_02__vendor,
+  itab__0f__op_01__reg__op_03__mod__op_01__rm__op_03__vendor,
+  itab__0f__op_01__reg__op_03__mod__op_01__rm__op_04__vendor,
+  itab__0f__op_01__reg__op_03__mod__op_01__rm__op_05__vendor,
+  itab__0f__op_01__reg__op_03__mod__op_01__rm__op_06__vendor,
+  itab__0f__op_01__reg__op_03__mod__op_01__rm__op_07__vendor,
+  itab__0f__op_01__reg__op_04__mod,
+  itab__0f__op_01__reg__op_06__mod,
+  itab__0f__op_01__reg__op_07__mod,
+  itab__0f__op_01__reg__op_07__mod__op_01__rm,
+  itab__0f__op_01__reg__op_07__mod__op_01__rm__op_01__vendor,
+  itab__0f__op_0d__reg,
+  itab__0f__op_18__reg,
+  itab__0f__op_71__reg,
+  itab__0f__op_72__reg,
+  itab__0f__op_73__reg,
+  itab__0f__op_ae__reg,
+  itab__0f__op_ae__reg__op_05__mod,
+  itab__0f__op_ae__reg__op_05__mod__op_01__rm,
+  itab__0f__op_ae__reg__op_06__mod,
+  itab__0f__op_ae__reg__op_06__mod__op_01__rm,
+  itab__0f__op_ae__reg__op_07__mod,
+  itab__0f__op_ae__reg__op_07__mod__op_01__rm,
+  itab__0f__op_ba__reg,
+  itab__0f__op_c7__reg,
+  itab__0f__op_c7__reg__op_00__vendor,
+  itab__0f__op_c7__reg__op_07__vendor,
+  itab__0f__op_d9__mod,
+  itab__0f__op_d9__mod__op_01__x87,
+  itab__1byte,
+  itab__1byte__op_60__osize,
+  itab__1byte__op_61__osize,
+  itab__1byte__op_63__mode,
+  itab__1byte__op_6d__osize,
+  itab__1byte__op_6f__osize,
+  itab__1byte__op_80__reg,
+  itab__1byte__op_81__reg,
+  itab__1byte__op_82__reg,
+  itab__1byte__op_83__reg,
+  itab__1byte__op_8f__reg,
+  itab__1byte__op_98__osize,
+  itab__1byte__op_99__osize,
+  itab__1byte__op_9c__mode,
+  itab__1byte__op_9c__mode__op_00__osize,
+  itab__1byte__op_9c__mode__op_01__osize,
+  itab__1byte__op_9d__mode,
+  itab__1byte__op_9d__mode__op_00__osize,
+  itab__1byte__op_9d__mode__op_01__osize,
+  itab__1byte__op_a5__osize,
+  itab__1byte__op_a7__osize,
+  itab__1byte__op_ab__osize,
+  itab__1byte__op_ad__osize,
+  itab__1byte__op_ae__mod,
+  itab__1byte__op_ae__mod__op_00__reg,
+  itab__1byte__op_af__osize,
+  itab__1byte__op_c0__reg,
+  itab__1byte__op_c1__reg,
+  itab__1byte__op_c6__reg,
+  itab__1byte__op_c7__reg,
+  itab__1byte__op_cf__osize,
+  itab__1byte__op_d0__reg,
+  itab__1byte__op_d1__reg,
+  itab__1byte__op_d2__reg,
+  itab__1byte__op_d3__reg,
+  itab__1byte__op_d8__mod,
+  itab__1byte__op_d8__mod__op_00__reg,
+  itab__1byte__op_d8__mod__op_01__x87,
+  itab__1byte__op_d9__mod,
+  itab__1byte__op_d9__mod__op_00__reg,
+  itab__1byte__op_d9__mod__op_01__x87,
+  itab__1byte__op_da__mod,
+  itab__1byte__op_da__mod__op_00__reg,
+  itab__1byte__op_da__mod__op_01__x87,
+  itab__1byte__op_db__mod,
+  itab__1byte__op_db__mod__op_00__reg,
+  itab__1byte__op_db__mod__op_01__x87,
+  itab__1byte__op_dc__mod,
+  itab__1byte__op_dc__mod__op_00__reg,
+  itab__1byte__op_dc__mod__op_01__x87,
+  itab__1byte__op_dd__mod,
+  itab__1byte__op_dd__mod__op_00__reg,
+  itab__1byte__op_dd__mod__op_01__x87,
+  itab__1byte__op_de__mod,
+  itab__1byte__op_de__mod__op_00__reg,
+  itab__1byte__op_de__mod__op_01__x87,
+  itab__1byte__op_df__mod,
+  itab__1byte__op_df__mod__op_00__reg,
+  itab__1byte__op_df__mod__op_01__x87,
+  itab__1byte__op_e3__asize,
+  itab__1byte__op_f6__reg,
+  itab__1byte__op_f7__reg,
+  itab__1byte__op_fe__reg,
+  itab__1byte__op_ff__reg,
+  itab__3dnow,
+  itab__pfx_sse66__0f,
+  itab__pfx_sse66__0f__op_71__reg,
+  itab__pfx_sse66__0f__op_72__reg,
+  itab__pfx_sse66__0f__op_73__reg,
+  itab__pfx_sse66__0f__op_c7__reg,
+  itab__pfx_sse66__0f__op_c7__reg__op_00__vendor,
+  itab__pfx_ssef2__0f,
+  itab__pfx_ssef3__0f,
+  itab__pfx_ssef3__0f__op_c7__reg,
+  itab__pfx_ssef3__0f__op_c7__reg__op_07__vendor,
+};
diff -r 32034d1914a6 xen/kdb/x86/udis86-1.7/itab.h
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/xen/kdb/x86/udis86-1.7/itab.h	Wed Aug 29 14:39:57 2012 -0700
@@ -0,0 +1,719 @@
+
+/* itab.h -- auto generated by opgen.py, do not edit. */
+
+#ifndef UD_ITAB_H
+#define UD_ITAB_H
+
+
+
+enum ud_itab_vendor_index {
+  ITAB__VENDOR_INDX__AMD,
+  ITAB__VENDOR_INDX__INTEL,
+};
+
+
+enum ud_itab_mode_index {
+  ITAB__MODE_INDX__16,
+  ITAB__MODE_INDX__32,
+  ITAB__MODE_INDX__64
+};
+
+
+enum ud_itab_mod_index {
+  ITAB__MOD_INDX__NOT_11,
+  ITAB__MOD_INDX__11
+};
+
+
+enum ud_itab_index {
+  ITAB__0F,
+  ITAB__0F__OP_00__REG,
+  ITAB__0F__OP_01__REG,
+  ITAB__0F__OP_01__REG__OP_00__MOD,
+  ITAB__0F__OP_01__REG__OP_00__MOD__OP_01__RM,
+  ITAB__0F__OP_01__REG__OP_00__MOD__OP_01__RM__OP_01__VENDOR,
+  ITAB__0F__OP_01__REG__OP_00__MOD__OP_01__RM__OP_03__VENDOR,
+  ITAB__0F__OP_01__REG__OP_00__MOD__OP_01__RM__OP_04__VENDOR,
+  ITAB__0F__OP_01__REG__OP_01__MOD,
+  ITAB__0F__OP_01__REG__OP_01__MOD__OP_01__RM,
+  ITAB__0F__OP_01__REG__OP_02__MOD,
+  ITAB__0F__OP_01__REG__OP_03__MOD,
+  ITAB__0F__OP_01__REG__OP_03__MOD__OP_01__RM,
+  ITAB__0F__OP_01__REG__OP_03__MOD__OP_01__RM__OP_00__VENDOR,
+  ITAB__0F__OP_01__REG__OP_03__MOD__OP_01__RM__OP_01__VENDOR,
+  ITAB__0F__OP_01__REG__OP_03__MOD__OP_01__RM__OP_02__VENDOR,
+  ITAB__0F__OP_01__REG__OP_03__MOD__OP_01__RM__OP_03__VENDOR,
+  ITAB__0F__OP_01__REG__OP_03__MOD__OP_01__RM__OP_04__VENDOR,
+  ITAB__0F__OP_01__REG__OP_03__MOD__OP_01__RM__OP_05__VENDOR,
+  ITAB__0F__OP_01__REG__OP_03__MOD__OP_01__RM__OP_06__VENDOR,
+  ITAB__0F__OP_01__REG__OP_03__MOD__OP_01__RM__OP_07__VENDOR,
+  ITAB__0F__OP_01__REG__OP_04__MOD,
+  ITAB__0F__OP_01__REG__OP_06__MOD,
+  ITAB__0F__OP_01__REG__OP_07__MOD,
+  ITAB__0F__OP_01__REG__OP_07__MOD__OP_01__RM,
+  ITAB__0F__OP_01__REG__OP_07__MOD__OP_01__RM__OP_01__VENDOR,
+  ITAB__0F__OP_0D__REG,
+  ITAB__0F__OP_18__REG,
+  ITAB__0F__OP_71__REG,
+  ITAB__0F__OP_72__REG,
+  ITAB__0F__OP_73__REG,
+  ITAB__0F__OP_AE__REG,
+  ITAB__0F__OP_AE__REG__OP_05__MOD,
+  ITAB__0F__OP_AE__REG__OP_05__MOD__OP_01__RM,
+  ITAB__0F__OP_AE__REG__OP_06__MOD,
+  ITAB__0F__OP_AE__REG__OP_06__MOD__OP_01__RM,
+  ITAB__0F__OP_AE__REG__OP_07__MOD,
+  ITAB__0F__OP_AE__REG__OP_07__MOD__OP_01__RM,
+  ITAB__0F__OP_BA__REG,
+  ITAB__0F__OP_C7__REG,
+  ITAB__0F__OP_C7__REG__OP_00__VENDOR,
+  ITAB__0F__OP_C7__REG__OP_07__VENDOR,
+  ITAB__0F__OP_D9__MOD,
+  ITAB__0F__OP_D9__MOD__OP_01__X87,
+  ITAB__1BYTE,
+  ITAB__1BYTE__OP_60__OSIZE,
+  ITAB__1BYTE__OP_61__OSIZE,
+  ITAB__1BYTE__OP_63__MODE,
+  ITAB__1BYTE__OP_6D__OSIZE,
+  ITAB__1BYTE__OP_6F__OSIZE,
+  ITAB__1BYTE__OP_80__REG,
+  ITAB__1BYTE__OP_81__REG,
+  ITAB__1BYTE__OP_82__REG,
+  ITAB__1BYTE__OP_83__REG,
+  ITAB__1BYTE__OP_8F__REG,
+  ITAB__1BYTE__OP_98__OSIZE,
+  ITAB__1BYTE__OP_99__OSIZE,
+  ITAB__1BYTE__OP_9C__MODE,
+  ITAB__1BYTE__OP_9C__MODE__OP_00__OSIZE,
+  ITAB__1BYTE__OP_9C__MODE__OP_01__OSIZE,
+  ITAB__1BYTE__OP_9D__MODE,
+  ITAB__1BYTE__OP_9D__MODE__OP_00__OSIZE,
+  ITAB__1BYTE__OP_9D__MODE__OP_01__OSIZE,
+  ITAB__1BYTE__OP_A5__OSIZE,
+  ITAB__1BYTE__OP_A7__OSIZE,
+  ITAB__1BYTE__OP_AB__OSIZE,
+  ITAB__1BYTE__OP_AD__OSIZE,
+  ITAB__1BYTE__OP_AE__MOD,
+  ITAB__1BYTE__OP_AE__MOD__OP_00__REG,
+  ITAB__1BYTE__OP_AF__OSIZE,
+  ITAB__1BYTE__OP_C0__REG,
+  ITAB__1BYTE__OP_C1__REG,
+  ITAB__1BYTE__OP_C6__REG,
+  ITAB__1BYTE__OP_C7__REG,
+  ITAB__1BYTE__OP_CF__OSIZE,
+  ITAB__1BYTE__OP_D0__REG,
+  ITAB__1BYTE__OP_D1__REG,
+  ITAB__1BYTE__OP_D2__REG,
+  ITAB__1BYTE__OP_D3__REG,
+  ITAB__1BYTE__OP_D8__MOD,
+  ITAB__1BYTE__OP_D8__MOD__OP_00__REG,
+  ITAB__1BYTE__OP_D8__MOD__OP_01__X87,
+  ITAB__1BYTE__OP_D9__MOD,
+  ITAB__1BYTE__OP_D9__MOD__OP_00__REG,
+  ITAB__1BYTE__OP_D9__MOD__OP_01__X87,
+  ITAB__1BYTE__OP_DA__MOD,
+  ITAB__1BYTE__OP_DA__MOD__OP_00__REG,
+  ITAB__1BYTE__OP_DA__MOD__OP_01__X87,
+  ITAB__1BYTE__OP_DB__MOD,
+  ITAB__1BYTE__OP_DB__MOD__OP_00__REG,
+  ITAB__1BYTE__OP_DB__MOD__OP_01__X87,
+  ITAB__1BYTE__OP_DC__MOD,
+  ITAB__1BYTE__OP_DC__MOD__OP_00__REG,
+  ITAB__1BYTE__OP_DC__MOD__OP_01__X87,
+  ITAB__1BYTE__OP_DD__MOD,
+  ITAB__1BYTE__OP_DD__MOD__OP_00__REG,
+  ITAB__1BYTE__OP_DD__MOD__OP_01__X87,
+  ITAB__1BYTE__OP_DE__MOD,
+  ITAB__1BYTE__OP_DE__MOD__OP_00__REG,
+  ITAB__1BYTE__OP_DE__MOD__OP_01__X87,
+  ITAB__1BYTE__OP_DF__MOD,
+  ITAB__1BYTE__OP_DF__MOD__OP_00__REG,
+  ITAB__1BYTE__OP_DF__MOD__OP_01__X87,
+  ITAB__1BYTE__OP_E3__ASIZE,
+  ITAB__1BYTE__OP_F6__REG,
+  ITAB__1BYTE__OP_F7__REG,
+  ITAB__1BYTE__OP_FE__REG,
+  ITAB__1BYTE__OP_FF__REG,
+  ITAB__3DNOW,
+  ITAB__PFX_SSE66__0F,
+  ITAB__PFX_SSE66__0F__OP_71__REG,
+  ITAB__PFX_SSE66__0F__OP_72__REG,
+  ITAB__PFX_SSE66__0F__OP_73__REG,
+  ITAB__PFX_SSE66__0F__OP_C7__REG,
+  ITAB__PFX_SSE66__0F__OP_C7__REG__OP_00__VENDOR,
+  ITAB__PFX_SSEF2__0F,
+  ITAB__PFX_SSEF3__0F,
+  ITAB__PFX_SSEF3__0F__OP_C7__REG,
+  ITAB__PFX_SSEF3__0F__OP_C7__REG__OP_07__VENDOR,
+};
+
+
+enum ud_mnemonic_code {
+  UD_I3dnow,
+  UD_Iaaa,
+  UD_Iaad,
+  UD_Iaam,
+  UD_Iaas,
+  UD_Iadc,
+  UD_Iadd,
+  UD_Iaddpd,
+  UD_Iaddps,
+  UD_Iaddsd,
+  UD_Iaddss,
+  UD_Iaddsubpd,
+  UD_Iaddsubps,
+  UD_Iand,
+  UD_Iandpd,
+  UD_Iandps,
+  UD_Iandnpd,
+  UD_Iandnps,
+  UD_Iarpl,
+  UD_Imovsxd,
+  UD_Ibound,
+  UD_Ibsf,
+  UD_Ibsr,
+  UD_Ibswap,
+  UD_Ibt,
+  UD_Ibtc,
+  UD_Ibtr,
+  UD_Ibts,
+  UD_Icall,
+  UD_Icbw,
+  UD_Icwde,
+  UD_Icdqe,
+  UD_Iclc,
+  UD_Icld,
+  UD_Iclflush,
+  UD_Iclgi,
+  UD_Icli,
+  UD_Iclts,
+  UD_Icmc,
+  UD_Icmovo,
+  UD_Icmovno,
+  UD_Icmovb,
+  UD_Icmovae,
+  UD_Icmovz,
+  UD_Icmovnz,
+  UD_Icmovbe,
+  UD_Icmova,
+  UD_Icmovs,
+  UD_Icmovns,
+  UD_Icmovp,
+  UD_Icmovnp,
+  UD_Icmovl,
+  UD_Icmovge,
+  UD_Icmovle,
+  UD_Icmovg,
+  UD_Icmp,
+  UD_Icmppd,
+  UD_Icmpps,
+  UD_Icmpsb,
+  UD_Icmpsw,
+  UD_Icmpsd,
+  UD_Icmpsq,
+  UD_Icmpss,
+  UD_Icmpxchg,
+  UD_Icmpxchg8b,
+  UD_Icomisd,
+  UD_Icomiss,
+  UD_Icpuid,
+  UD_Icvtdq2pd,
+  UD_Icvtdq2ps,
+  UD_Icvtpd2dq,
+  UD_Icvtpd2pi,
+  UD_Icvtpd2ps,
+  UD_Icvtpi2ps,
+  UD_Icvtpi2pd,
+  UD_Icvtps2dq,
+  UD_Icvtps2pi,
+  UD_Icvtps2pd,
+  UD_Icvtsd2si,
+  UD_Icvtsd2ss,
+  UD_Icvtsi2ss,
+  UD_Icvtss2si,
+  UD_Icvtss2sd,
+  UD_Icvttpd2pi,
+  UD_Icvttpd2dq,
+  UD_Icvttps2dq,
+  UD_Icvttps2pi,
+  UD_Icvttsd2si,
+  UD_Icvtsi2sd,
+  UD_Icvttss2si,
+  UD_Icwd,
+  UD_Icdq,
+  UD_Icqo,
+  UD_Idaa,
+  UD_Idas,
+  UD_Idec,
+  UD_Idiv,
+  UD_Idivpd,
+  UD_Idivps,
+  UD_Idivsd,
+  UD_Idivss,
+  UD_Iemms,
+  UD_Ienter,
+  UD_If2xm1,
+  UD_Ifabs,
+  UD_Ifadd,
+  UD_Ifaddp,
+  UD_Ifbld,
+  UD_Ifbstp,
+  UD_Ifchs,
+  UD_Ifclex,
+  UD_Ifcmovb,
+  UD_Ifcmove,
+  UD_Ifcmovbe,
+  UD_Ifcmovu,
+  UD_Ifcmovnb,
+  UD_Ifcmovne,
+  UD_Ifcmovnbe,
+  UD_Ifcmovnu,
+  UD_Ifucomi,
+  UD_Ifcom,
+  UD_Ifcom2,
+  UD_Ifcomp3,
+  UD_Ifcomi,
+  UD_Ifucomip,
+  UD_Ifcomip,
+  UD_Ifcomp,
+  UD_Ifcomp5,
+  UD_Ifcompp,
+  UD_Ifcos,
+  UD_Ifdecstp,
+  UD_Ifdiv,
+  UD_Ifdivp,
+  UD_Ifdivr,
+  UD_Ifdivrp,
+  UD_Ifemms,
+  UD_Iffree,
+  UD_Iffreep,
+  UD_Ificom,
+  UD_Ificomp,
+  UD_Ifild,
+  UD_Ifncstp,
+  UD_Ifninit,
+  UD_Ifiadd,
+  UD_Ifidivr,
+  UD_Ifidiv,
+  UD_Ifisub,
+  UD_Ifisubr,
+  UD_Ifist,
+  UD_Ifistp,
+  UD_Ifisttp,
+  UD_Ifld,
+  UD_Ifld1,
+  UD_Ifldl2t,
+  UD_Ifldl2e,
+  UD_Ifldlpi,
+  UD_Ifldlg2,
+  UD_Ifldln2,
+  UD_Ifldz,
+  UD_Ifldcw,
+  UD_Ifldenv,
+  UD_Ifmul,
+  UD_Ifmulp,
+  UD_Ifimul,
+  UD_Ifnop,
+  UD_Ifpatan,
+  UD_Ifprem,
+  UD_Ifprem1,
+  UD_Ifptan,
+  UD_Ifrndint,
+  UD_Ifrstor,
+  UD_Ifnsave,
+  UD_Ifscale,
+  UD_Ifsin,
+  UD_Ifsincos,
+  UD_Ifsqrt,
+  UD_Ifstp,
+  UD_Ifstp1,
+  UD_Ifstp8,
+  UD_Ifstp9,
+  UD_Ifst,
+  UD_Ifnstcw,
+  UD_Ifnstenv,
+  UD_Ifnstsw,
+  UD_Ifsub,
+  UD_Ifsubp,
+  UD_Ifsubr,
+  UD_Ifsubrp,
+  UD_Iftst,
+  UD_Ifucom,
+  UD_Ifucomp,
+  UD_Ifucompp,
+  UD_Ifxam,
+  UD_Ifxch,
+  UD_Ifxch4,
+  UD_Ifxch7,
+  UD_Ifxrstor,
+  UD_Ifxsave,
+  UD_Ifpxtract,
+  UD_Ifyl2x,
+  UD_Ifyl2xp1,
+  UD_Ihaddpd,
+  UD_Ihaddps,
+  UD_Ihlt,
+  UD_Ihsubpd,
+  UD_Ihsubps,
+  UD_Iidiv,
+  UD_Iin,
+  UD_Iimul,
+  UD_Iinc,
+  UD_Iinsb,
+  UD_Iinsw,
+  UD_Iinsd,
+  UD_Iint1,
+  UD_Iint3,
+  UD_Iint,
+  UD_Iinto,
+  UD_Iinvd,
+  UD_Iinvlpg,
+  UD_Iinvlpga,
+  UD_Iiretw,
+  UD_Iiretd,
+  UD_Iiretq,
+  UD_Ijo,
+  UD_Ijno,
+  UD_Ijb,
+  UD_Ijae,
+  UD_Ijz,
+  UD_Ijnz,
+  UD_Ijbe,
+  UD_Ija,
+  UD_Ijs,
+  UD_Ijns,
+  UD_Ijp,
+  UD_Ijnp,
+  UD_Ijl,
+  UD_Ijge,
+  UD_Ijle,
+  UD_Ijg,
+  UD_Ijcxz,
+  UD_Ijecxz,
+  UD_Ijrcxz,
+  UD_Ijmp,
+  UD_Ilahf,
+  UD_Ilar,
+  UD_Ilddqu,
+  UD_Ildmxcsr,
+  UD_Ilds,
+  UD_Ilea,
+  UD_Iles,
+  UD_Ilfs,
+  UD_Ilgs,
+  UD_Ilidt,
+  UD_Ilss,
+  UD_Ileave,
+  UD_Ilfence,
+  UD_Ilgdt,
+  UD_Illdt,
+  UD_Ilmsw,
+  UD_Ilock,
+  UD_Ilodsb,
+  UD_Ilodsw,
+  UD_Ilodsd,
+  UD_Ilodsq,
+  UD_Iloopnz,
+  UD_Iloope,
+  UD_Iloop,
+  UD_Ilsl,
+  UD_Iltr,
+  UD_Imaskmovq,
+  UD_Imaxpd,
+  UD_Imaxps,
+  UD_Imaxsd,
+  UD_Imaxss,
+  UD_Imfence,
+  UD_Iminpd,
+  UD_Iminps,
+  UD_Iminsd,
+  UD_Iminss,
+  UD_Imonitor,
+  UD_Imov,
+  UD_Imovapd,
+  UD_Imovaps,
+  UD_Imovd,
+  UD_Imovddup,
+  UD_Imovdqa,
+  UD_Imovdqu,
+  UD_Imovdq2q,
+  UD_Imovhpd,
+  UD_Imovhps,
+  UD_Imovlhps,
+  UD_Imovlpd,
+  UD_Imovlps,
+  UD_Imovhlps,
+  UD_Imovmskpd,
+  UD_Imovmskps,
+  UD_Imovntdq,
+  UD_Imovnti,
+  UD_Imovntpd,
+  UD_Imovntps,
+  UD_Imovntq,
+  UD_Imovq,
+  UD_Imovqa,
+  UD_Imovq2dq,
+  UD_Imovsb,
+  UD_Imovsw,
+  UD_Imovsd,
+  UD_Imovsq,
+  UD_Imovsldup,
+  UD_Imovshdup,
+  UD_Imovss,
+  UD_Imovsx,
+  UD_Imovupd,
+  UD_Imovups,
+  UD_Imovzx,
+  UD_Imul,
+  UD_Imulpd,
+  UD_Imulps,
+  UD_Imulsd,
+  UD_Imulss,
+  UD_Imwait,
+  UD_Ineg,
+  UD_Inop,
+  UD_Inot,
+  UD_Ior,
+  UD_Iorpd,
+  UD_Iorps,
+  UD_Iout,
+  UD_Ioutsb,
+  UD_Ioutsw,
+  UD_Ioutsd,
+  UD_Ioutsq,
+  UD_Ipacksswb,
+  UD_Ipackssdw,
+  UD_Ipackuswb,
+  UD_Ipaddb,
+  UD_Ipaddw,
+  UD_Ipaddq,
+  UD_Ipaddsb,
+  UD_Ipaddsw,
+  UD_Ipaddusb,
+  UD_Ipaddusw,
+  UD_Ipand,
+  UD_Ipandn,
+  UD_Ipause,
+  UD_Ipavgb,
+  UD_Ipavgw,
+  UD_Ipcmpeqb,
+  UD_Ipcmpeqw,
+  UD_Ipcmpeqd,
+  UD_Ipcmpgtb,
+  UD_Ipcmpgtw,
+  UD_Ipcmpgtd,
+  UD_Ipextrw,
+  UD_Ipinsrw,
+  UD_Ipmaddwd,
+  UD_Ipmaxsw,
+  UD_Ipmaxub,
+  UD_Ipminsw,
+  UD_Ipminub,
+  UD_Ipmovmskb,
+  UD_Ipmulhuw,
+  UD_Ipmulhw,
+  UD_Ipmullw,
+  UD_Ipmuludq,
+  UD_Ipop,
+  UD_Ipopa,
+  UD_Ipopad,
+  UD_Ipopfw,
+  UD_Ipopfd,
+  UD_Ipopfq,
+  UD_Ipor,
+  UD_Iprefetch,
+  UD_Iprefetchnta,
+  UD_Iprefetcht0,
+  UD_Iprefetcht1,
+  UD_Iprefetcht2,
+  UD_Ipsadbw,
+  UD_Ipshufd,
+  UD_Ipshufhw,
+  UD_Ipshuflw,
+  UD_Ipshufw,
+  UD_Ipslldq,
+  UD_Ipsllw,
+  UD_Ipslld,
+  UD_Ipsllq,
+  UD_Ipsraw,
+  UD_Ipsrad,
+  UD_Ipsrlw,
+  UD_Ipsrld,
+  UD_Ipsrlq,
+  UD_Ipsrldq,
+  UD_Ipsubb,
+  UD_Ipsubw,
+  UD_Ipsubd,
+  UD_Ipsubq,
+  UD_Ipsubsb,
+  UD_Ipsubsw,
+  UD_Ipsubusb,
+  UD_Ipsubusw,
+  UD_Ipunpckhbw,
+  UD_Ipunpckhwd,
+  UD_Ipunpckhdq,
+  UD_Ipunpckhqdq,
+  UD_Ipunpcklbw,
+  UD_Ipunpcklwd,
+  UD_Ipunpckldq,
+  UD_Ipunpcklqdq,
+  UD_Ipi2fw,
+  UD_Ipi2fd,
+  UD_Ipf2iw,
+  UD_Ipf2id,
+  UD_Ipfnacc,
+  UD_Ipfpnacc,
+  UD_Ipfcmpge,
+  UD_Ipfmin,
+  UD_Ipfrcp,
+  UD_Ipfrsqrt,
+  UD_Ipfsub,
+  UD_Ipfadd,
+  UD_Ipfcmpgt,
+  UD_Ipfmax,
+  UD_Ipfrcpit1,
+  UD_Ipfrspit1,
+  UD_Ipfsubr,
+  UD_Ipfacc,
+  UD_Ipfcmpeq,
+  UD_Ipfmul,
+  UD_Ipfrcpit2,
+  UD_Ipmulhrw,
+  UD_Ipswapd,
+  UD_Ipavgusb,
+  UD_Ipush,
+  UD_Ipusha,
+  UD_Ipushad,
+  UD_Ipushfw,
+  UD_Ipushfd,
+  UD_Ipushfq,
+  UD_Ipxor,
+  UD_Ircl,
+  UD_Ircr,
+  UD_Irol,
+  UD_Iror,
+  UD_Ircpps,
+  UD_Ircpss,
+  UD_Irdmsr,
+  UD_Irdpmc,
+  UD_Irdtsc,
+  UD_Irdtscp,
+  UD_Irepne,
+  UD_Irep,
+  UD_Iret,
+  UD_Iretf,
+  UD_Irsm,
+  UD_Irsqrtps,
+  UD_Irsqrtss,
+  UD_Isahf,
+  UD_Isal,
+  UD_Isalc,
+  UD_Isar,
+  UD_Ishl,
+  UD_Ishr,
+  UD_Isbb,
+  UD_Iscasb,
+  UD_Iscasw,
+  UD_Iscasd,
+  UD_Iscasq,
+  UD_Iseto,
+  UD_Isetno,
+  UD_Isetb,
+  UD_Isetnb,
+  UD_Isetz,
+  UD_Isetnz,
+  UD_Isetbe,
+  UD_Iseta,
+  UD_Isets,
+  UD_Isetns,
+  UD_Isetp,
+  UD_Isetnp,
+  UD_Isetl,
+  UD_Isetge,
+  UD_Isetle,
+  UD_Isetg,
+  UD_Isfence,
+  UD_Isgdt,
+  UD_Ishld,
+  UD_Ishrd,
+  UD_Ishufpd,
+  UD_Ishufps,
+  UD_Isidt,
+  UD_Isldt,
+  UD_Ismsw,
+  UD_Isqrtps,
+  UD_Isqrtpd,
+  UD_Isqrtsd,
+  UD_Isqrtss,
+  UD_Istc,
+  UD_Istd,
+  UD_Istgi,
+  UD_Isti,
+  UD_Iskinit,
+  UD_Istmxcsr,
+  UD_Istosb,
+  UD_Istosw,
+  UD_Istosd,
+  UD_Istosq,
+  UD_Istr,
+  UD_Isub,
+  UD_Isubpd,
+  UD_Isubps,
+  UD_Isubsd,
+  UD_Isubss,
+  UD_Iswapgs,
+  UD_Isyscall,
+  UD_Isysenter,
+  UD_Isysexit,
+  UD_Isysret,
+  UD_Itest,
+  UD_Iucomisd,
+  UD_Iucomiss,
+  UD_Iud2,
+  UD_Iunpckhpd,
+  UD_Iunpckhps,
+  UD_Iunpcklps,
+  UD_Iunpcklpd,
+  UD_Iverr,
+  UD_Iverw,
+  UD_Ivmcall,
+  UD_Ivmclear,
+  UD_Ivmxon,
+  UD_Ivmptrld,
+  UD_Ivmptrst,
+  UD_Ivmresume,
+  UD_Ivmxoff,
+  UD_Ivmrun,
+  UD_Ivmmcall,
+  UD_Ivmload,
+  UD_Ivmsave,
+  UD_Iwait,
+  UD_Iwbinvd,
+  UD_Iwrmsr,
+  UD_Ixadd,
+  UD_Ixchg,
+  UD_Ixlatb,
+  UD_Ixor,
+  UD_Ixorpd,
+  UD_Ixorps,
+  UD_Idb,
+  UD_Iinvalid,
+  UD_Id3vil,
+  UD_Ina,
+  UD_Igrp_reg,
+  UD_Igrp_rm,
+  UD_Igrp_vendor,
+  UD_Igrp_x87,
+  UD_Igrp_mode,
+  UD_Igrp_osize,
+  UD_Igrp_asize,
+  UD_Igrp_mod,
+  UD_Inone,
+};
+
+
+
+extern const char* ud_mnemonics_str[];
+extern struct ud_itab_entry* ud_itab_list[];
+
+#endif
diff -r 32034d1914a6 xen/kdb/x86/udis86-1.7/kdb_dis.c
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/xen/kdb/x86/udis86-1.7/kdb_dis.c	Wed Aug 29 14:39:57 2012 -0700
@@ -0,0 +1,204 @@
+/*
+ * Copyright (C) 2009, Mukesh Rathor, Oracle Corp.  All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public
+ * License v2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public
+ * License along with this program; if not, write to the
+ * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
+ * Boston, MA 02111-1307, USA.
+ */
+
+#include <xen/compile.h>                /* for XEN_SUBVERSION */
+#include "../../include/kdbinc.h"
+#include "extern.h"
+
+static void (*dis_syntax)(ud_t*) = UD_SYN_ATT; /* default disassembly syntax */
+
+static struct {                         /* info for kdb_read_byte_for_ud() */
+    kdbva_t kud_instr_addr;
+    domid_t kud_domid;
+} kdb_ud_rd_info;
+
+/*
+ * Called via function pointer by ud while disassembling; kdb context is
+ * passed via kdb_ud_rd_info{}.
+ */
+static int
+kdb_read_byte_for_ud(struct ud *udp)
+{
+    kdbbyt_t bytebuf;
+    domid_t domid = kdb_ud_rd_info.kud_domid;
+    kdbva_t addr = kdb_ud_rd_info.kud_instr_addr;
+
+    if (kdb_read_mem(addr, &bytebuf, 1, domid) == 1) {
+        kdb_ud_rd_info.kud_instr_addr++;
+        KDBGP1("udrd:addr:%lx domid:%d byt:%x\n", addr, domid, bytebuf);
+        return bytebuf;
+    }
+    KDBGP1("udrd:addr:%lx domid:%d err\n", addr, domid);
+    return UD_EOI;
+}
+
+/*
+ * Given a domid, convert addr to a symbol and print it.
+ * Eg: ffff828c801235e2: idle_loop+52                  jmp  idle_loop+55
+ * Called twice here for idle_loop: first with nl pointing at an empty
+ * string, then with nl == "\n".
+ */
+void
+kdb_prnt_addr2sym(domid_t domid, kdbva_t addr, char *nl)
+{
+    unsigned long sz, offs;
+    char buf[KSYM_NAME_LEN+1], pbuf[150], prefix[8];
+    char *p = buf;
+
+    prefix[0]='\0';
+    if (domid != DOMID_IDLE) {
+        snprintf(prefix, 8, "%x:", domid);
+        p = kdb_guest_addr2sym(addr, domid, &offs);
+    } else
+        symbols_lookup(addr, &sz, &offs, buf);
+
+    snprintf(pbuf, 150, "%s%s+%lx", prefix, p, offs);
+    if (*nl != '\n')
+        kdbp("%-30s%s", pbuf, nl);  /* prints more than 30 if needed */
+    else
+        kdbp("%s%s", pbuf, nl);
+
+}
+
+static int
+kdb_jump_instr(enum ud_mnemonic_code mnemonic)
+{
+    return (mnemonic >= UD_Ijo && mnemonic <= UD_Ijmp);
+}
+
+/*
+ * print one instr: function so that we can print offsets of jmp etc.. as
+ *  symbol+offset instead of just address
+ */
+static void
+kdb_print_one_instr(struct ud *udp, domid_t domid)
+{
+    signed long val = 0;
+    ud_type_t type = udp->operand[0].type;
+
+    if ((udp->mnemonic == UD_Icall || kdb_jump_instr(udp->mnemonic)) &&
+        type == UD_OP_JIMM) {
+        
+        int sz = udp->operand[0].size;
+        char *p, ibuf[40], *q = ibuf;
+        kdbva_t addr;
+
+        if (sz == 8) val = udp->operand[0].lval.sbyte;
+        else if (sz == 16) val = udp->operand[0].lval.sword;
+        else if (sz == 32) val = udp->operand[0].lval.sdword;
+        else if (sz == 64) val = udp->operand[0].lval.sqword;
+        else kdbp("kdb_print_one_instr: Invalid sz:%d\n", sz);
+
+        addr = udp->pc + val;
+        for(p=ud_insn_asm(udp); (*q=*p) && *p!=' '; p++,q++);
+        *q='\0';
+        kdbp(" %-4s ", ibuf);    /* space before for long func names */
+        kdb_prnt_addr2sym(domid, addr, "\n");
+    } else
+        kdbp(" %-24s\n", ud_insn_asm(udp));
+#if 0
+    kdbp("mnemonic:z%d ", udp->mnemonic);
+    if (type == UD_OP_CONST) kdbp("type is const\n");
+    else if (type == UD_OP_JIMM) kdbp("type is JIMM\n");
+    else if (type == UD_OP_IMM) kdbp("type is IMM\n");
+    else if (type == UD_OP_PTR) kdbp("type is PTR\n");
+#endif
+}
+
+static void
+kdb_setup_ud(struct ud *udp, kdbva_t addr, domid_t domid)
+{
+    int bitness = kdb_guest_bitness(domid);
+    uint vendor = (boot_cpu_data.x86_vendor == X86_VENDOR_AMD) ?
+                                           UD_VENDOR_AMD : UD_VENDOR_INTEL;
+
+    KDBGP1("setup_ud:domid:%d bitness:%d addr:%lx\n", domid, bitness, addr);
+    ud_init(udp);
+    ud_set_mode(udp, bitness);
+    ud_set_syntax(udp, dis_syntax);
+    ud_set_vendor(udp, vendor);           /* HVM: vmx/svm different instrs*/
+    ud_set_pc(udp, addr);                 /* for numbers printed on left */
+    ud_set_input_hook(udp, kdb_read_byte_for_ud);
+    kdb_ud_rd_info.kud_instr_addr = addr;
+    kdb_ud_rd_info.kud_domid = domid;
+}
+
+/*
+ * given an addr, print given number of instructions.
+ * Returns: address of next instruction in the stream
+ */
+kdbva_t
+kdb_print_instr(kdbva_t addr, long num, domid_t domid)
+{
+    struct ud ud_s;
+
+    KDBGP1("print_instr:addr:0x%lx num:%ld domid:%x\n", addr, num, domid);
+
+    kdb_setup_ud(&ud_s, addr, domid);
+    while(num--) {
+        if (ud_disassemble(&ud_s)) {
+            uint64_t pc = ud_insn_off(&ud_s);
+            /* kdbp("%08x: ",(int)pc); */
+            kdbp("%016lx: ", pc);
+            kdb_prnt_addr2sym(domid, pc, "");
+            kdb_print_one_instr(&ud_s, domid);
+        } else
+            kdbp("KDB:Couldn't disassemble PC:0x%lx\n", addr);
+        /* for stack reads, don't always display error */
+    }
+    KDBGP1("print_instr:kudaddr:0x%lx\n", kdb_ud_rd_info.kud_instr_addr);
+    return kdb_ud_rd_info.kud_instr_addr;
+}
+
+void
+kdb_display_pc(struct cpu_user_regs *regs)
+{   
+    domid_t domid;
+    struct cpu_user_regs regs1 = *regs;
+    domid = guest_mode(regs) ? current->domain->domain_id : DOMID_IDLE;
+
+    regs1.KDBIP = regs->KDBIP;
+    kdb_print_instr(regs1.KDBIP, 1, domid);
+}
+
+/* check if the instr at the addr is call instruction
+ * RETURNS: size of the instr if it's a call instr, else 0
+ */
+int
+kdb_check_call_instr(domid_t domid, kdbva_t addr)
+{
+    struct ud ud_s;
+    int sz;
+
+    kdb_setup_ud(&ud_s, addr, domid);
+    if ((sz=ud_disassemble(&ud_s)) && ud_s.mnemonic == UD_Icall)
+        return (sz);
+    return 0;
+}
+
+/* toggle ATT and Intel syntaxes */
+void
+kdb_toggle_dis_syntax(void)
+{
+    if (dis_syntax == UD_SYN_INTEL) {
+        dis_syntax = UD_SYN_ATT;
+        kdbp("dis syntax now set to ATT (Gas)\n");
+    } else {
+        dis_syntax = UD_SYN_INTEL;
+        kdbp("dis syntax now set to Intel (NASM)\n");
+    }
+}
diff -r 32034d1914a6 xen/kdb/x86/udis86-1.7/syn-att.c
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/xen/kdb/x86/udis86-1.7/syn-att.c	Wed Aug 29 14:39:57 2012 -0700
@@ -0,0 +1,211 @@
+/* -----------------------------------------------------------------------------
+ * syn-att.c
+ *
+ * Copyright (c) 2004, 2005, 2006 Vivek Mohan <vivek@sig9.com>
+ * All rights reserved. See (LICENSE)
+ * -----------------------------------------------------------------------------
+ */
+
+#include "types.h"
+#include "extern.h"
+#include "decode.h"
+#include "itab.h"
+#include "syn.h"
+
+/* -----------------------------------------------------------------------------
+ * opr_cast() - Prints an operand cast.
+ * -----------------------------------------------------------------------------
+ */
+static void 
+opr_cast(struct ud* u, struct ud_operand* op)
+{
+  switch(op->size) {
+	case 16 : case 32 :
+		mkasm(u, "*");   break;
+	default: break;
+  }
+}
+
+/* -----------------------------------------------------------------------------
+ * gen_operand() - Generates assembly output for each operand.
+ * -----------------------------------------------------------------------------
+ */
+static void 
+gen_operand(struct ud* u, struct ud_operand* op)
+{
+  switch(op->type) {
+	case UD_OP_REG:
+		mkasm(u, "%%%s", ud_reg_tab[op->base - UD_R_AL]);
+		break;
+
+	case UD_OP_MEM:
+		if (u->br_far) opr_cast(u, op);
+		if (u->pfx_seg)
+			mkasm(u, "%%%s:", ud_reg_tab[u->pfx_seg - UD_R_AL]);
+		if (op->offset == 8) {
+			if (op->lval.sbyte < 0)
+				mkasm(u, "-0x%x", (-op->lval.sbyte) & 0xff);
+			else	mkasm(u, "0x%x", op->lval.sbyte);
+		} 
+		else if (op->offset == 16) 
+			mkasm(u, "0x%x", op->lval.uword);
+		else if (op->offset == 32) 
+			mkasm(u, "0x%lx", op->lval.udword);
+		else if (op->offset == 64) 
+			mkasm(u, "0x" FMT64 "x", op->lval.uqword);
+
+		if (op->base)
+			mkasm(u, "(%%%s", ud_reg_tab[op->base - UD_R_AL]);
+		if (op->index) {
+			if (op->base)
+				mkasm(u, ",");
+			else mkasm(u, "(");
+			mkasm(u, "%%%s", ud_reg_tab[op->index - UD_R_AL]);
+		}
+		if (op->scale)
+			mkasm(u, ",%d", op->scale);
+		if (op->base || op->index)
+			mkasm(u, ")");
+		break;
+
+	case UD_OP_IMM:
+		switch (op->size) {
+			case  8: mkasm(u, "$0x%x", op->lval.ubyte);    break;
+			case 16: mkasm(u, "$0x%x", op->lval.uword);    break;
+			case 32: mkasm(u, "$0x%lx", op->lval.udword);  break;
+			case 64: mkasm(u, "$0x" FMT64 "x", op->lval.uqword); break;
+			default: break;
+		}
+		break;
+
+	case UD_OP_JIMM:
+		switch (op->size) {
+			case  8:
+				mkasm(u, "0x" FMT64 "x", u->pc + op->lval.sbyte); 
+				break;
+			case 16:
+				mkasm(u, "0x" FMT64 "x", u->pc + op->lval.sword);
+				break;
+			case 32:
+				mkasm(u, "0x" FMT64 "x", u->pc + op->lval.sdword);
+				break;
+			default:break;
+		}
+		break;
+
+	case UD_OP_PTR:
+		switch (op->size) {
+			case 32:
+				mkasm(u, "$0x%x, $0x%x", op->lval.ptr.seg, 
+					op->lval.ptr.off & 0xFFFF);
+				break;
+			case 48:
+				mkasm(u, "$0x%x, $0x%lx", op->lval.ptr.seg, 
+					op->lval.ptr.off);
+				break;
+		}
+		break;
+			
+	default: return;
+  }
+}
+
+/* =============================================================================
+ * translates to AT&T syntax 
+ * =============================================================================
+ */
+extern void 
+ud_translate_att(struct ud *u)
+{
+  int size = 0;
+
+  /* check if P_OSO prefix is used */
+  if (! P_OSO(u->itab_entry->prefix) && u->pfx_opr) {
+	switch (u->dis_mode) {
+		case 16: 
+			mkasm(u, "o32 ");
+			break;
+		case 32:
+		case 64:
+ 			mkasm(u, "o16 ");
+			break;
+	}
+  }
+
+  /* check if P_ASO prefix was used */
+  if (! P_ASO(u->itab_entry->prefix) && u->pfx_adr) {
+	switch (u->dis_mode) {
+		case 16: 
+			mkasm(u, "a32 ");
+			break;
+		case 32:
+ 			mkasm(u, "a16 ");
+			break;
+		case 64:
+ 			mkasm(u, "a32 ");
+			break;
+	}
+  }
+
+  if (u->pfx_lock)
+  	mkasm(u,  "lock ");
+  if (u->pfx_rep)
+	mkasm(u,  "rep ");
+  if (u->pfx_repne)
+		mkasm(u,  "repne ");
+
+  /* special instructions */
+  switch (u->mnemonic) {
+	case UD_Iretf: 
+		mkasm(u, "lret "); 
+		break;
+	case UD_Idb:
+		mkasm(u, ".byte 0x%x", u->operand[0].lval.ubyte);
+		return;
+	case UD_Ijmp:
+	case UD_Icall:
+		if (u->br_far) mkasm(u,  "l");
+		mkasm(u, "%s", ud_lookup_mnemonic(u->mnemonic));
+		break;
+	case UD_Ibound:
+	case UD_Ienter:
+		if (u->operand[0].type != UD_NONE)
+			gen_operand(u, &u->operand[0]);
+		if (u->operand[1].type != UD_NONE) {
+			mkasm(u, ",");
+			gen_operand(u, &u->operand[1]);
+		}
+		return;
+	default:
+		mkasm(u, "%s", ud_lookup_mnemonic(u->mnemonic));
+  }
+
+  if (u->c1)
+	size = u->operand[0].size;
+  else if (u->c2)
+	size = u->operand[1].size;
+  else if (u->c3)
+	size = u->operand[2].size;
+
+  if (size == 8)
+	mkasm(u, "b");
+  else if (size == 16)
+	mkasm(u, "w");
+  else if (size == 64)
+ 	mkasm(u, "q");
+
+  mkasm(u, " ");
+
+  if (u->operand[2].type != UD_NONE) {
+	gen_operand(u, &u->operand[2]);
+	mkasm(u, ", ");
+  }
+
+  if (u->operand[1].type != UD_NONE) {
+	gen_operand(u, &u->operand[1]);
+	mkasm(u, ", ");
+  }
+
+  if (u->operand[0].type != UD_NONE)
+	gen_operand(u, &u->operand[0]);
+}
diff -r 32034d1914a6 xen/kdb/x86/udis86-1.7/syn-intel.c
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/xen/kdb/x86/udis86-1.7/syn-intel.c	Wed Aug 29 14:39:57 2012 -0700
@@ -0,0 +1,208 @@
+/* -----------------------------------------------------------------------------
+ * syn-intel.c
+ *
+ * Copyright (c) 2002, 2003, 2004 Vivek Mohan <vivek@sig9.com>
+ * All rights reserved. See (LICENSE)
+ * -----------------------------------------------------------------------------
+ */
+
+#include "types.h"
+#include "extern.h"
+#include "decode.h"
+#include "itab.h"
+#include "syn.h"
+
+/* -----------------------------------------------------------------------------
+ * opr_cast() - Prints an operand cast.
+ * -----------------------------------------------------------------------------
+ */
+static void 
+opr_cast(struct ud* u, struct ud_operand* op)
+{
+  switch(op->size) {
+	case  8: mkasm(u, "byte " ); break;
+	case 16: mkasm(u, "word " ); break;
+	case 32: mkasm(u, "dword "); break;
+	case 64: mkasm(u, "qword "); break;
+	case 80: mkasm(u, "tword "); break;
+	default: break;
+  }
+  if (u->br_far)
+	mkasm(u, "far "); 
+  else if (u->br_near)
+	mkasm(u, "near ");
+}
+
+/* -----------------------------------------------------------------------------
+ * gen_operand() - Generates assembly output for each operand.
+ * -----------------------------------------------------------------------------
+ */
+static void gen_operand(struct ud* u, struct ud_operand* op, int syn_cast)
+{
+  switch(op->type) {
+	case UD_OP_REG:
+		mkasm(u, ud_reg_tab[op->base - UD_R_AL]);
+		break;
+
+	case UD_OP_MEM: {
+
+		int op_f = 0;
+
+		if (syn_cast) 
+			opr_cast(u, op);
+
+		mkasm(u, "[");
+
+		if (u->pfx_seg)
+			mkasm(u, "%s:", ud_reg_tab[u->pfx_seg - UD_R_AL]);
+
+		if (op->base) {
+			mkasm(u, "%s", ud_reg_tab[op->base - UD_R_AL]);
+			op_f = 1;
+		}
+
+		if (op->index) {
+			if (op_f)
+				mkasm(u, "+");
+			mkasm(u, "%s", ud_reg_tab[op->index - UD_R_AL]);
+			op_f = 1;
+		}
+
+		if (op->scale)
+			mkasm(u, "*%d", op->scale);
+
+		if (op->offset == 8) {
+			if (op->lval.sbyte < 0)
+				mkasm(u, "-0x%x", -op->lval.sbyte);
+			else	mkasm(u, "%s0x%x", (op_f) ? "+" : "", op->lval.sbyte);
+		}
+		else if (op->offset == 16)
+			mkasm(u, "%s0x%x", (op_f) ? "+" : "", op->lval.uword);
+		else if (op->offset == 32) {
+			if (u->adr_mode == 64) {
+				if (op->lval.sdword < 0)
+					mkasm(u, "-0x%x", -op->lval.sdword);
+				else	mkasm(u, "%s0x%x", (op_f) ? "+" : "", op->lval.sdword);
+			} 
+			else	mkasm(u, "%s0x%lx", (op_f) ? "+" : "", op->lval.udword);
+		}
+		else if (op->offset == 64) 
+			mkasm(u, "%s0x" FMT64 "x", (op_f) ? "+" : "", op->lval.uqword);
+
+		mkasm(u, "]");
+		break;
+	}
+			
+	case UD_OP_IMM:
+		if (syn_cast) opr_cast(u, op);
+		switch (op->size) {
+			case  8: mkasm(u, "0x%x", op->lval.ubyte);    break;
+			case 16: mkasm(u, "0x%x", op->lval.uword);    break;
+			case 32: mkasm(u, "0x%lx", op->lval.udword);  break;
+			case 64: mkasm(u, "0x" FMT64 "x", op->lval.uqword); break;
+			default: break;
+		}
+		break;
+
+	case UD_OP_JIMM:
+		if (syn_cast) opr_cast(u, op);
+		switch (op->size) {
+			case  8:
+				mkasm(u, "0x" FMT64 "x", u->pc + op->lval.sbyte); 
+				break;
+			case 16:
+				mkasm(u, "0x" FMT64 "x", u->pc + op->lval.sword);
+				break;
+			case 32:
+				mkasm(u, "0x" FMT64 "x", u->pc + op->lval.sdword);
+				break;
+			default:break;
+		}
+		break;
+
+	case UD_OP_PTR:
+		switch (op->size) {
+			case 32:
+				mkasm(u, "word 0x%x:0x%x", op->lval.ptr.seg, 
+					op->lval.ptr.off & 0xFFFF);
+				break;
+			case 48:
+				mkasm(u, "dword 0x%x:0x%lx", op->lval.ptr.seg, 
+					op->lval.ptr.off);
+				break;
+		}
+		break;
+
+	case UD_OP_CONST:
+		if (syn_cast) opr_cast(u, op);
+		mkasm(u, "%d", op->lval.udword);
+		break;
+
+	default: return;
+  }
+}
+
+/* =============================================================================
+ * translates to intel syntax 
+ * =============================================================================
+ */
+extern void ud_translate_intel(struct ud* u)
+{
+  /* -- prefixes -- */
+
+  /* check if P_OSO prefix is used */
+  if (! P_OSO(u->itab_entry->prefix) && u->pfx_opr) {
+	switch (u->dis_mode) {
+		case 16: 
+			mkasm(u, "o32 ");
+			break;
+		case 32:
+		case 64:
+ 			mkasm(u, "o16 ");
+			break;
+	}
+  }
+
+  /* check if P_ASO prefix was used */
+  if (! P_ASO(u->itab_entry->prefix) && u->pfx_adr) {
+	switch (u->dis_mode) {
+		case 16: 
+			mkasm(u, "a32 ");
+			break;
+		case 32:
+ 			mkasm(u, "a16 ");
+			break;
+		case 64:
+ 			mkasm(u, "a32 ");
+			break;
+	}
+  }
+
+  if (u->pfx_lock)
+	mkasm(u, "lock ");
+  if (u->pfx_rep)
+	mkasm(u, "rep ");
+  if (u->pfx_repne)
+	mkasm(u, "repne ");
+  if (u->implicit_addr && u->pfx_seg)
+	mkasm(u, "%s ", ud_reg_tab[u->pfx_seg - UD_R_AL]);
+
+  /* print the instruction mnemonic */
+  mkasm(u, "%s ", ud_lookup_mnemonic(u->mnemonic));
+
+  /* operand 1 */
+  if (u->operand[0].type != UD_NONE) {
+	gen_operand(u, &u->operand[0], u->c1);
+  }
+  /* operand 2 */
+  if (u->operand[1].type != UD_NONE) {
+	mkasm(u, ", ");
+	gen_operand(u, &u->operand[1], u->c2);
+  }
+
+  /* operand 3 */
+  if (u->operand[2].type != UD_NONE) {
+	mkasm(u, ", ");
+	gen_operand(u, &u->operand[2], u->c3);
+  }
+}
diff -r 32034d1914a6 xen/kdb/x86/udis86-1.7/syn.c
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/xen/kdb/x86/udis86-1.7/syn.c	Wed Aug 29 14:39:57 2012 -0700
@@ -0,0 +1,61 @@
+/* -----------------------------------------------------------------------------
+ * syn.c
+ *
+ * Copyright (c) 2002, 2003, 2004 Vivek Mohan <vivek@sig9.com>
+ * All rights reserved. See (LICENSE)
+ * -----------------------------------------------------------------------------
+ */
+
+/* -----------------------------------------------------------------------------
+ * Intel Register Table - Order Matters (types.h)!
+ * -----------------------------------------------------------------------------
+ */
+const char* ud_reg_tab[] = 
+{
+  "al",		"cl",		"dl",		"bl",
+  "ah",		"ch",		"dh",		"bh",
+  "spl",	"bpl",		"sil",		"dil",
+  "r8b",	"r9b",		"r10b",		"r11b",
+  "r12b",	"r13b",		"r14b",		"r15b",
+
+  "ax",		"cx",		"dx",		"bx",
+  "sp",		"bp",		"si",		"di",
+  "r8w",	"r9w",		"r10w",		"r11w",
+  "r12w",	"r13w",		"r14w",		"r15w",
+	
+  "eax",	"ecx",		"edx",		"ebx",
+  "esp",	"ebp",		"esi",		"edi",
+  "r8d",	"r9d",		"r10d",		"r11d",
+  "r12d",	"r13d",		"r14d",		"r15d",
+	
+  "rax",	"rcx",		"rdx",		"rbx",
+  "rsp",	"rbp",		"rsi",		"rdi",
+  "r8",		"r9",		"r10",		"r11",
+  "r12",	"r13",		"r14",		"r15",
+
+  "es",		"cs",		"ss",		"ds",
+  "fs",		"gs",	
+
+  "cr0",	"cr1",		"cr2",		"cr3",
+  "cr4",	"cr5",		"cr6",		"cr7",
+  "cr8",	"cr9",		"cr10",		"cr11",
+  "cr12",	"cr13",		"cr14",		"cr15",
+	
+  "dr0",	"dr1",		"dr2",		"dr3",
+  "dr4",	"dr5",		"dr6",		"dr7",
+  "dr8",	"dr9",		"dr10",		"dr11",
+  "dr12",	"dr13",		"dr14",		"dr15",
+
+  "mm0",	"mm1",		"mm2",		"mm3",
+  "mm4",	"mm5",		"mm6",		"mm7",
+
+  "st0",	"st1",		"st2",		"st3",
+  "st4",	"st5",		"st6",		"st7", 
+
+  "xmm0",	"xmm1",		"xmm2",		"xmm3",
+  "xmm4",	"xmm5",		"xmm6",		"xmm7",
+  "xmm8",	"xmm9",		"xmm10",	"xmm11",
+  "xmm12",	"xmm13",	"xmm14",	"xmm15",
+
+  "rip"
+};
diff -r 32034d1914a6 xen/kdb/x86/udis86-1.7/syn.h
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/xen/kdb/x86/udis86-1.7/syn.h	Wed Aug 29 14:39:57 2012 -0700
@@ -0,0 +1,27 @@
+/* -----------------------------------------------------------------------------
+ * syn.h
+ *
+ * Copyright (c) 2006, Vivek Mohan <vivek@sig9.com>
+ * All rights reserved. See LICENSE
+ * -----------------------------------------------------------------------------
+ */
+#ifndef UD_SYN_H
+#define UD_SYN_H
+
+#if 0
+#include <stdio.h>
+#include <stdarg.h>
+#endif
+#include "types.h"
+
+extern const char* ud_reg_tab[];
+
+static void mkasm(struct ud* u, const char* fmt, ...)
+{
+  va_list ap;
+  va_start(ap, fmt);
+  u->insn_fill += vsnprintf((char*) u->insn_buffer + u->insn_fill, 64, fmt, ap);
+  va_end(ap);
+}
+
+#endif
diff -r 32034d1914a6 xen/kdb/x86/udis86-1.7/types.h
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/xen/kdb/x86/udis86-1.7/types.h	Wed Aug 29 14:39:57 2012 -0700
@@ -0,0 +1,186 @@
+/* -----------------------------------------------------------------------------
+ * types.h
+ *
+ * Copyright (c) 2006, Vivek Mohan <vivek@sig9.com>
+ * All rights reserved. See LICENSE
+ * -----------------------------------------------------------------------------
+ */
+#ifndef UD_TYPES_H
+#define UD_TYPES_H
+
+
+#include "../../include/kdbinc.h"
+
+#define FMT64 "%ll"
+#include "itab.h"
+
+/* -----------------------------------------------------------------------------
+ * All possible "types" of objects in udis86. Order is Important!
+ * -----------------------------------------------------------------------------
+ */
+enum ud_type
+{
+  UD_NONE,
+
+  /* 8 bit GPRs */
+  UD_R_AL,	UD_R_CL,	UD_R_DL,	UD_R_BL,
+  UD_R_AH,	UD_R_CH,	UD_R_DH,	UD_R_BH,
+  UD_R_SPL,	UD_R_BPL,	UD_R_SIL,	UD_R_DIL,
+  UD_R_R8B,	UD_R_R9B,	UD_R_R10B,	UD_R_R11B,
+  UD_R_R12B,	UD_R_R13B,	UD_R_R14B,	UD_R_R15B,
+
+  /* 16 bit GPRs */
+  UD_R_AX,	UD_R_CX,	UD_R_DX,	UD_R_BX,
+  UD_R_SP,	UD_R_BP,	UD_R_SI,	UD_R_DI,
+  UD_R_R8W,	UD_R_R9W,	UD_R_R10W,	UD_R_R11W,
+  UD_R_R12W,	UD_R_R13W,	UD_R_R14W,	UD_R_R15W,
+	
+  /* 32 bit GPRs */
+  UD_R_EAX,	UD_R_ECX,	UD_R_EDX,	UD_R_EBX,
+  UD_R_ESP,	UD_R_EBP,	UD_R_ESI,	UD_R_EDI,
+  UD_R_R8D,	UD_R_R9D,	UD_R_R10D,	UD_R_R11D,
+  UD_R_R12D,	UD_R_R13D,	UD_R_R14D,	UD_R_R15D,
+	
+  /* 64 bit GPRs */
+  UD_R_RAX,	UD_R_RCX,	UD_R_RDX,	UD_R_RBX,
+  UD_R_RSP,	UD_R_RBP,	UD_R_RSI,	UD_R_RDI,
+  UD_R_R8,	UD_R_R9,	UD_R_R10,	UD_R_R11,
+  UD_R_R12,	UD_R_R13,	UD_R_R14,	UD_R_R15,
+
+  /* segment registers */
+  UD_R_ES,	UD_R_CS,	UD_R_SS,	UD_R_DS,
+  UD_R_FS,	UD_R_GS,	
+
+  /* control registers*/
+  UD_R_CR0,	UD_R_CR1,	UD_R_CR2,	UD_R_CR3,
+  UD_R_CR4,	UD_R_CR5,	UD_R_CR6,	UD_R_CR7,
+  UD_R_CR8,	UD_R_CR9,	UD_R_CR10,	UD_R_CR11,
+  UD_R_CR12,	UD_R_CR13,	UD_R_CR14,	UD_R_CR15,
+	
+  /* debug registers */
+  UD_R_DR0,	UD_R_DR1,	UD_R_DR2,	UD_R_DR3,
+  UD_R_DR4,	UD_R_DR5,	UD_R_DR6,	UD_R_DR7,
+  UD_R_DR8,	UD_R_DR9,	UD_R_DR10,	UD_R_DR11,
+  UD_R_DR12,	UD_R_DR13,	UD_R_DR14,	UD_R_DR15,
+
+  /* mmx registers */
+  UD_R_MM0,	UD_R_MM1,	UD_R_MM2,	UD_R_MM3,
+  UD_R_MM4,	UD_R_MM5,	UD_R_MM6,	UD_R_MM7,
+
+  /* x87 registers */
+  UD_R_ST0,	UD_R_ST1,	UD_R_ST2,	UD_R_ST3,
+  UD_R_ST4,	UD_R_ST5,	UD_R_ST6,	UD_R_ST7, 
+
+  /* extended multimedia registers */
+  UD_R_XMM0,	UD_R_XMM1,	UD_R_XMM2,	UD_R_XMM3,
+  UD_R_XMM4,	UD_R_XMM5,	UD_R_XMM6,	UD_R_XMM7,
+  UD_R_XMM8,	UD_R_XMM9,	UD_R_XMM10,	UD_R_XMM11,
+  UD_R_XMM12,	UD_R_XMM13,	UD_R_XMM14,	UD_R_XMM15,
+
+  UD_R_RIP,
+
+  /* Operand Types */
+  UD_OP_REG,	UD_OP_MEM,	UD_OP_PTR,	UD_OP_IMM,	
+  UD_OP_JIMM,	UD_OP_CONST
+};
+
+/* -----------------------------------------------------------------------------
+ * struct ud_operand - Disassembled instruction Operand.
+ * -----------------------------------------------------------------------------
+ */
+struct ud_operand 
+{
+  enum ud_type		type;
+  uint8_t		size;
+  union {
+	int8_t		sbyte;
+	uint8_t		ubyte;
+	int16_t		sword;
+	uint16_t	uword;
+	int32_t		sdword;
+	uint32_t	udword;
+	int64_t		sqword;
+	uint64_t	uqword;
+
+	struct {
+		uint16_t seg;
+		uint32_t off;
+	} ptr;
+  } lval;
+
+  enum ud_type		base;
+  enum ud_type		index;
+  uint8_t		offset;
+  uint8_t		scale;	
+};
+
+/* -----------------------------------------------------------------------------
+ * struct ud - The udis86 object.
+ * -----------------------------------------------------------------------------
+ */
+struct ud
+{
+  int 			(*inp_hook) (struct ud*);
+  uint8_t		inp_curr;
+  uint8_t		inp_fill;
+  uint8_t		inp_ctr;
+  uint8_t*		inp_buff;
+  uint8_t*		inp_buff_end;
+  uint8_t		inp_end;
+  void			(*translator)(struct ud*);
+  uint64_t		insn_offset;
+  char			insn_hexcode[32];
+  char			insn_buffer[64];
+  unsigned int		insn_fill;
+  uint8_t		dis_mode;
+  uint64_t		pc;
+  uint8_t		vendor;
+  struct map_entry*	mapen;
+  enum ud_mnemonic_code	mnemonic;
+  struct ud_operand	operand[3];
+  uint8_t		error;
+  uint8_t	 	pfx_rex;
+  uint8_t 		pfx_seg;
+  uint8_t 		pfx_opr;
+  uint8_t 		pfx_adr;
+  uint8_t 		pfx_lock;
+  uint8_t 		pfx_rep;
+  uint8_t 		pfx_repe;
+  uint8_t 		pfx_repne;
+  uint8_t 		pfx_insn;
+  uint8_t		default64;
+  uint8_t		opr_mode;
+  uint8_t		adr_mode;
+  uint8_t		br_far;
+  uint8_t		br_near;
+  uint8_t		implicit_addr;
+  uint8_t		c1;
+  uint8_t		c2;
+  uint8_t		c3;
+  uint8_t 		inp_cache[256];
+  uint8_t		inp_sess[64];
+  struct ud_itab_entry * itab_entry;
+};
+
+/* -----------------------------------------------------------------------------
+ * Type-definitions
+ * -----------------------------------------------------------------------------
+ */
+typedef enum ud_type 		ud_type_t;
+typedef enum ud_mnemonic_code	ud_mnemonic_code_t;
+
+typedef struct ud 		ud_t;
+typedef struct ud_operand 	ud_operand_t;
+
+#define UD_SYN_INTEL		ud_translate_intel
+#define UD_SYN_ATT		ud_translate_att
+#define UD_EOI			-1
+#define UD_INP_CACHE_SZ		32
+#define UD_VENDOR_AMD		0
+#define UD_VENDOR_INTEL		1
+
+#define bail_out(ud,error_code) longjmp( (ud)->bailout, error_code )
+#define try_decode(ud) if ( setjmp( (ud)->bailout ) == 0 )
+#define catch_error() else
+
+#endif
diff -r 32034d1914a6 xen/kdb/x86/udis86-1.7/udis86.c
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/xen/kdb/x86/udis86-1.7/udis86.c	Wed Aug 29 14:39:57 2012 -0700
@@ -0,0 +1,156 @@
+/* -----------------------------------------------------------------------------
+ * udis86.c
+ *
+ * Copyright (c) 2004, 2005, 2006, Vivek Mohan <vivek@sig9.com>
+ * All rights reserved. See LICENSE
+ * -----------------------------------------------------------------------------
+ */
+
+#if 0
+#include <stdlib.h>
+#include <stdio.h>
+#include <string.h>
+#endif
+
+#include "input.h"
+#include "extern.h"
+
+/* =============================================================================
+ * ud_init() - Initializes ud_t object.
+ * =============================================================================
+ */
+extern void 
+ud_init(struct ud* u)
+{
+  memset((void*)u, 0, sizeof(struct ud));
+  ud_set_mode(u, 16);
+  u->mnemonic = UD_Iinvalid;
+  ud_set_pc(u, 0);
+#ifndef __UD_STANDALONE__
+  ud_set_input_file(u, stdin);
+#endif /* __UD_STANDALONE__ */
+}
+
+/* =============================================================================
+ * ud_disassemble() - disassembles one instruction and returns the number of 
+ * bytes disassembled. A zero means end of disassembly.
+ * =============================================================================
+ */
+extern unsigned int
+ud_disassemble(struct ud* u)
+{
+  if (ud_input_end(u))
+	return 0;
+
+
+  u->insn_buffer[0] = u->insn_hexcode[0] = 0;
+
+
+  if (ud_decode(u) == 0)
+	return 0;
+  if (u->translator)
+	u->translator(u);
+  return ud_insn_len(u);
+}
+
+/* =============================================================================
+ * ud_set_mode() - Set Disassembly Mode.
+ * =============================================================================
+ */
+extern void 
+ud_set_mode(struct ud* u, uint8_t m)
+{
+  switch(m) {
+	case 16:
+	case 32:
+	case 64: u->dis_mode = m ; return;
+	default: u->dis_mode = 16; return;
+  }
+}
+
+/* =============================================================================
+ * ud_set_vendor() - Set vendor.
+ * =============================================================================
+ */
+extern void 
+ud_set_vendor(struct ud* u, unsigned v)
+{
+  switch(v) {
+	case UD_VENDOR_INTEL:
+		u->vendor = v;
+		break;
+	default:
+		u->vendor = UD_VENDOR_AMD;
+  }
+}
+
+/* =============================================================================
+ * ud_set_pc() - Sets code origin. 
+ * =============================================================================
+ */
+extern void 
+ud_set_pc(struct ud* u, uint64_t o)
+{
+  u->pc = o;
+}
+
+/* =============================================================================
+ * ud_set_syntax() - Sets the output syntax.
+ * =============================================================================
+ */
+extern void 
+ud_set_syntax(struct ud* u, void (*t)(struct ud*))
+{
+  u->translator = t;
+}
+
+/* =============================================================================
+ * ud_insn_asm() - Returns the disassembled instruction in assembly form.
+ * =============================================================================
+ */
+extern char* 
+ud_insn_asm(struct ud* u) 
+{
+  return u->insn_buffer;
+}
+
+/* =============================================================================
+ * ud_insn_off() - Returns the offset of the disassembled instruction.
+ * =============================================================================
+ */
+extern uint64_t
+ud_insn_off(struct ud* u) 
+{
+  return u->insn_offset;
+}
+
+
+/* =============================================================================
+ * ud_insn_hex() - Returns hex form of disassembled instruction.
+ * =============================================================================
+ */
+extern char* 
+ud_insn_hex(struct ud* u) 
+{
+  return u->insn_hexcode;
+}
+
+/* =============================================================================
+ * ud_insn_ptr() - Returns a pointer to the bytes of the disassembled instruction.
+ * =============================================================================
+ */
+extern uint8_t* 
+ud_insn_ptr(struct ud* u) 
+{
+  return u->inp_sess;
+}
+
+/* =============================================================================
+ * ud_insn_len() - Returns the count of bytes disassembled.
+ * =============================================================================
+ */
+extern unsigned int 
+ud_insn_len(struct ud* u) 
+{
+  return u->inp_ctr;
+}

--MP_/vtJK/xRhlenzx/+qfkRLsPz
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--MP_/vtJK/xRhlenzx/+qfkRLsPz--


From xen-devel-bounces@lists.xen.org Thu Aug 30 05:13:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Aug 2012 05:13:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6x46-0002jC-NY; Thu, 30 Aug 2012 05:13:14 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1T6q18-0003aO-EY
	for Xen-devel@lists.xensource.com; Wed, 29 Aug 2012 21:41:43 +0000
Received: from [85.158.143.35:9558] by server-3.bemta-4.messagelabs.com id
	D8/21-08232-59C8E305; Wed, 29 Aug 2012 21:41:41 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1346276496!13262681!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDcyNTk5OQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24942 invoked from network); 29 Aug 2012 21:41:37 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-16.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 29 Aug 2012 21:41:37 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7TLfUmF008689
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 29 Aug 2012 21:41:31 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7TLfRe1022882
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 29 Aug 2012 21:41:28 GMT
Received: from abhmt114.oracle.com (abhmt114.oracle.com [141.146.116.66])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7TLfQuE024986; Wed, 29 Aug 2012 16:41:26 -0500
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 29 Aug 2012 14:41:26 -0700
Date: Wed, 29 Aug 2012 14:41:25 -0700
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>
Message-ID: <20120829144125.25a5eb3e@mantra.us.oracle.com>
In-Reply-To: <20120829143512.7c579fb1@mantra.us.oracle.com>
References: <20120829143512.7c579fb1@mantra.us.oracle.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.7.6 (GTK+ 2.18.9; x86_64-redhat-linux-gnu)
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="MP_/vtJK/xRhlenzx/+qfkRLsPz"
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
X-Mailman-Approved-At: Thu, 30 Aug 2012 05:13:13 +0000
Cc: Ben Guthro <ben@guthro.net>, msw@amazon.com
Subject: Re: [Xen-devel] xen debugger (kdb/xdb/hdb) patch for c/s 25467
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--MP_/vtJK/xRhlenzx/+qfkRLsPz
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

On Wed, 29 Aug 2012 14:35:12 -0700
Mukesh Rathor <mukesh.rathor@oracle.com> wrote:


> 
> Anyways, attaching patch that is cleaned up of my debug code that I
> accidentally left in prev posting. Should apply cleanly to c/s 25467.

really attaching this time :).


--MP_/vtJK/xRhlenzx/+qfkRLsPz
Content-Type: text/x-patch
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename=kdb.diff

diff -r 32034d1914a6 xen/Makefile
--- a/xen/Makefile	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/Makefile	Wed Aug 29 14:39:57 2012 -0700
@@ -61,6 +61,7 @@
 	$(MAKE) -f $(BASEDIR)/Rules.mk -C xsm clean
 	$(MAKE) -f $(BASEDIR)/Rules.mk -C crypto clean
 	$(MAKE) -f $(BASEDIR)/Rules.mk -C arch/$(TARGET_ARCH) clean
+	$(MAKE) -f $(BASEDIR)/Rules.mk -C kdb clean
 	rm -f include/asm *.o $(TARGET) $(TARGET).gz $(TARGET)-syms *~ core
 	rm -f include/asm-*/asm-offsets.h
 	[ -d tools/figlet ] && rm -f .banner*
@@ -129,7 +130,7 @@
 	  echo ""; \
 	  echo "#endif") <$< >$@
 
-SUBDIRS = xsm arch/$(TARGET_ARCH) common drivers
+SUBDIRS = xsm arch/$(TARGET_ARCH) common drivers kdb
 define all_sources
     ( find include/asm-$(TARGET_ARCH) -name '*.h' -print; \
       find include -name 'asm-*' -prune -o -name '*.h' -print; \
diff -r 32034d1914a6 xen/Rules.mk
--- a/xen/Rules.mk	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/Rules.mk	Wed Aug 29 14:39:57 2012 -0700
@@ -10,6 +10,7 @@
 crash_debug   ?= n
 frame_pointer ?= n
 lto           ?= n
+kdb           ?= n
 
 include $(XEN_ROOT)/Config.mk
 
@@ -40,6 +41,7 @@
 ALL_OBJS-y               += $(BASEDIR)/xsm/built_in.o
 ALL_OBJS-y               += $(BASEDIR)/arch/$(TARGET_ARCH)/built_in.o
 ALL_OBJS-$(x86)          += $(BASEDIR)/crypto/built_in.o
+ALL_OBJS-$(kdb)          += $(BASEDIR)/kdb/built_in.o
 
 CFLAGS-y                += -g -D__XEN__ -include $(BASEDIR)/include/xen/config.h
 CFLAGS-$(XSM_ENABLE)    += -DXSM_ENABLE
@@ -53,6 +55,7 @@
 CFLAGS-$(HAS_ACPI)      += -DHAS_ACPI
 CFLAGS-$(HAS_PASSTHROUGH) += -DHAS_PASSTHROUGH
 CFLAGS-$(frame_pointer) += -fno-omit-frame-pointer -DCONFIG_FRAME_POINTER
+CFLAGS-$(kdb)           += -DXEN_KDB_CONFIG
 
 ifneq ($(max_phys_cpus),)
 CFLAGS-y                += -DMAX_PHYS_CPUS=$(max_phys_cpus)
diff -r 32034d1914a6 xen/arch/x86/hvm/svm/entry.S
--- a/xen/arch/x86/hvm/svm/entry.S	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/arch/x86/hvm/svm/entry.S	Wed Aug 29 14:39:57 2012 -0700
@@ -59,12 +59,23 @@
         get_current(bx)
         CLGI
 
+#ifdef XEN_KDB_CONFIG
+#if defined(__x86_64__)
+        testl $1, kdb_session_begun(%rip)
+#else
+        testl $1, kdb_session_begun
+#endif
+        jnz  .Lkdb_skip_softirq
+#endif
         mov  VCPU_processor(r(bx)),%eax
         shl  $IRQSTAT_shift,r(ax)
         lea  addr_of(irq_stat),r(dx)
         testl $~0,(r(dx),r(ax),1)
         jnz  .Lsvm_process_softirqs
 
+#ifdef XEN_KDB_CONFIG
+.Lkdb_skip_softirq:
+#endif
         testb $0, VCPU_nsvm_hap_enabled(r(bx))
 UNLIKELY_START(nz, nsvm_hap)
         mov  VCPU_nhvm_p2m(r(bx)),r(ax)
diff -r 32034d1914a6 xen/arch/x86/hvm/svm/svm.c
--- a/xen/arch/x86/hvm/svm/svm.c	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/arch/x86/hvm/svm/svm.c	Wed Aug 29 14:39:57 2012 -0700
@@ -2170,6 +2170,10 @@
         break;
 
     case VMEXIT_EXCEPTION_DB:
+#ifdef XEN_KDB_CONFIG
+        if (kdb_handle_trap_entry(TRAP_debug, regs))
+	    break;
+#endif
         if ( !v->domain->debugger_attached )
             goto exit_and_crash;
         domain_pause_for_debugger();
@@ -2182,6 +2186,10 @@
         if ( (inst_len = __get_instruction_length(v, INSTR_INT3)) == 0 )
             break;
         __update_guest_eip(regs, inst_len);
+#ifdef XEN_KDB_CONFIG
+        if (kdb_handle_trap_entry(TRAP_int3, regs))
+            break;
+#endif
         current->arch.gdbsx_vcpu_event = TRAP_int3;
         domain_pause_for_debugger();
         break;
diff -r 32034d1914a6 xen/arch/x86/hvm/svm/vmcb.c
--- a/xen/arch/x86/hvm/svm/vmcb.c	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/arch/x86/hvm/svm/vmcb.c	Wed Aug 29 14:39:57 2012 -0700
@@ -315,6 +315,36 @@
     register_keyhandler('v', &vmcb_dump_keyhandler);
 }
 
+#if defined(XEN_KDB_CONFIG)
+/* did == 0 : display for all HVM domains. domid 0 is never HVM.
+ * vid == -1 : display for all HVM VCPUs
+ */
+void kdb_dump_vmcb(domid_t did, int vid)
+{
+    struct domain *dp;
+    struct vcpu *vp;
+
+    rcu_read_lock(&domlist_read_lock);
+    for_each_domain (dp) {
+        if (!is_hvm_or_hyb_domain(dp) || dp->is_dying)
+            continue;
+        if (did != 0 && did != dp->domain_id)
+            continue;
+
+        for_each_vcpu (dp, vp) {
+            if (vid != -1 && vid != vp->vcpu_id)
+                continue;
+
+            kdbp("  VMCB [domid: %d  vcpu:%d]:\n", dp->domain_id, vp->vcpu_id);
+            svm_vmcb_dump("kdb", vp->arch.hvm_svm.vmcb);
+            kdbp("\n");
+        }
+        kdbp("\n");
+    }
+    rcu_read_unlock(&domlist_read_lock);
+}
+#endif
+
 /*
  * Local variables:
  * mode: C
diff -r 32034d1914a6 xen/arch/x86/hvm/vmx/entry.S
--- a/xen/arch/x86/hvm/vmx/entry.S	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/arch/x86/hvm/vmx/entry.S	Wed Aug 29 14:39:57 2012 -0700
@@ -124,12 +124,23 @@
         get_current(bx)
         cli
 
+#ifdef XEN_KDB_CONFIG
+#if defined(__x86_64__)
+        testl $1, kdb_session_begun(%rip)
+#else
+        testl $1, kdb_session_begun
+#endif
+        jnz  .Lkdb_skip_softirq
+#endif
         mov  VCPU_processor(r(bx)),%eax
         shl  $IRQSTAT_shift,r(ax)
         lea  addr_of(irq_stat),r(dx)
         cmpl $0,(r(dx),r(ax),1)
         jnz  .Lvmx_process_softirqs
 
+#ifdef XEN_KDB_CONFIG
+.Lkdb_skip_softirq:
+#endif
         testb $0xff,VCPU_vmx_emulate(r(bx))
         jnz .Lvmx_goto_emulator
         testb $0xff,VCPU_vmx_realmode(r(bx))
diff -r 32034d1914a6 xen/arch/x86/hvm/vmx/vmcs.c
--- a/xen/arch/x86/hvm/vmx/vmcs.c	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/arch/x86/hvm/vmx/vmcs.c	Wed Aug 29 14:39:57 2012 -0700
@@ -1117,6 +1117,13 @@
         hvm_asid_flush_vcpu(v);
     }
 
+#if defined(XEN_KDB_CONFIG)
+    if (kdb_dr7)
+        __vmwrite(GUEST_DR7, kdb_dr7);
+    else
+        __vmwrite(GUEST_DR7, 0);
+#endif
+
     debug_state = v->domain->debugger_attached
                   || v->domain->arch.hvm_domain.params[HVM_PARAM_MEMORY_EVENT_INT3]
                   || v->domain->arch.hvm_domain.params[HVM_PARAM_MEMORY_EVENT_SINGLE_STEP];
@@ -1326,6 +1333,220 @@
     register_keyhandler('v', &vmcs_dump_keyhandler);
 }
 
+#if defined(XEN_KDB_CONFIG)
+#define GUEST_EFER      0x2806   /* see page 23-20 */
+#define GUEST_EFER_HIGH 0x2807   /* see page 23-20 */
+
+/* We can't reuse vmcs_dump_vcpu() because it calls vmx_vmcs_enter(), which
+ * would IPI other CPUs. Also, print only a subset relevant to sw debugging. */
+static void noinline kdb_print_vmcs(struct vcpu *vp)
+{
+    struct cpu_user_regs *regs = &vp->arch.user_regs;
+    unsigned long long x;
+
+    kdbp("*** Guest State ***\n");
+    kdbp("CR0: actual=0x%016llx, shadow=0x%016llx, gh_mask=%016llx\n",
+         (unsigned long long)vmr(GUEST_CR0),
+         (unsigned long long)vmr(CR0_READ_SHADOW), 
+         (unsigned long long)vmr(CR0_GUEST_HOST_MASK));
+    kdbp("CR4: actual=0x%016llx, shadow=0x%016llx, gh_mask=%016llx\n",
+         (unsigned long long)vmr(GUEST_CR4),
+         (unsigned long long)vmr(CR4_READ_SHADOW), 
+         (unsigned long long)vmr(CR4_GUEST_HOST_MASK));
+    kdbp("CR3: actual=0x%016llx, target_count=%d\n",
+         (unsigned long long)vmr(GUEST_CR3),
+         (int)vmr(CR3_TARGET_COUNT));
+    kdbp("     target0=%016llx, target1=%016llx\n",
+         (unsigned long long)vmr(CR3_TARGET_VALUE0),
+         (unsigned long long)vmr(CR3_TARGET_VALUE1));
+    kdbp("     target2=%016llx, target3=%016llx\n",
+         (unsigned long long)vmr(CR3_TARGET_VALUE2),
+         (unsigned long long)vmr(CR3_TARGET_VALUE3));
+    kdbp("RSP = 0x%016llx (0x%016llx)  RIP = 0x%016llx (0x%016llx)\n", 
+         (unsigned long long)vmr(GUEST_RSP),
+         (unsigned long long)regs->esp,
+         (unsigned long long)vmr(GUEST_RIP),
+         (unsigned long long)regs->eip);
+    kdbp("RFLAGS=0x%016llx (0x%016llx)  DR7 = 0x%016llx\n", 
+         (unsigned long long)vmr(GUEST_RFLAGS),
+         (unsigned long long)regs->eflags,
+         (unsigned long long)vmr(GUEST_DR7));
+    kdbp("Sysenter RSP=%016llx CS:RIP=%04x:%016llx\n",
+         (unsigned long long)vmr(GUEST_SYSENTER_ESP),
+         (int)vmr(GUEST_SYSENTER_CS),
+         (unsigned long long)vmr(GUEST_SYSENTER_EIP));
+    vmx_dump_sel("CS", GUEST_CS_SELECTOR);
+    vmx_dump_sel("DS", GUEST_DS_SELECTOR);
+    vmx_dump_sel("SS", GUEST_SS_SELECTOR);
+    vmx_dump_sel("ES", GUEST_ES_SELECTOR);
+    vmx_dump_sel("FS", GUEST_FS_SELECTOR);
+    vmx_dump_sel("GS", GUEST_GS_SELECTOR);
+    vmx_dump_sel2("GDTR", GUEST_GDTR_LIMIT);
+    vmx_dump_sel("LDTR", GUEST_LDTR_SELECTOR);
+    vmx_dump_sel2("IDTR", GUEST_IDTR_LIMIT);
+    vmx_dump_sel("TR", GUEST_TR_SELECTOR);
+    kdbp("Guest EFER = 0x%08x%08x\n",
+           (uint32_t)vmr(GUEST_EFER_HIGH), (uint32_t)vmr(GUEST_EFER));
+    kdbp("Guest PAT = 0x%08x%08x\n",
+           (uint32_t)vmr(GUEST_PAT_HIGH), (uint32_t)vmr(GUEST_PAT));
+    x  = (unsigned long long)vmr(TSC_OFFSET_HIGH) << 32;
+    x |= (uint32_t)vmr(TSC_OFFSET);
+    kdbp("TSC Offset = %016llx\n", x);
+    x  = (unsigned long long)vmr(GUEST_IA32_DEBUGCTL_HIGH) << 32;
+    x |= (uint32_t)vmr(GUEST_IA32_DEBUGCTL);
+    kdbp("DebugCtl=%016llx DebugExceptions=%016llx\n", x,
+           (unsigned long long)vmr(GUEST_PENDING_DBG_EXCEPTIONS));
+    kdbp("Interruptibility=%04x ActivityState=%04x\n",
+           (int)vmr(GUEST_INTERRUPTIBILITY_INFO),
+           (int)vmr(GUEST_ACTIVITY_STATE));
+
+    kdbp("MSRs: entry_load:$%d exit_load:$%d exit_store:$%d\n",
+         (int)vmr(VM_ENTRY_MSR_LOAD_COUNT), (int)vmr(VM_EXIT_MSR_LOAD_COUNT),
+         (int)vmr(VM_EXIT_MSR_STORE_COUNT));
+
+    kdbp("\n*** Host State ***\n");
+    kdbp("RSP = 0x%016llx  RIP = 0x%016llx\n", 
+           (unsigned long long)vmr(HOST_RSP),
+           (unsigned long long)vmr(HOST_RIP));
+    kdbp("CS=%04x DS=%04x ES=%04x FS=%04x GS=%04x SS=%04x TR=%04x\n",
+           (uint16_t)vmr(HOST_CS_SELECTOR),
+           (uint16_t)vmr(HOST_DS_SELECTOR),
+           (uint16_t)vmr(HOST_ES_SELECTOR),
+           (uint16_t)vmr(HOST_FS_SELECTOR),
+           (uint16_t)vmr(HOST_GS_SELECTOR),
+           (uint16_t)vmr(HOST_SS_SELECTOR),
+           (uint16_t)vmr(HOST_TR_SELECTOR));
+    kdbp("FSBase=%016llx GSBase=%016llx TRBase=%016llx\n",
+           (unsigned long long)vmr(HOST_FS_BASE),
+           (unsigned long long)vmr(HOST_GS_BASE),
+           (unsigned long long)vmr(HOST_TR_BASE));
+    kdbp("GDTBase=%016llx IDTBase=%016llx\n",
+           (unsigned long long)vmr(HOST_GDTR_BASE),
+           (unsigned long long)vmr(HOST_IDTR_BASE));
+    kdbp("CR0=%016llx CR3=%016llx CR4=%016llx\n",
+           (unsigned long long)vmr(HOST_CR0),
+           (unsigned long long)vmr(HOST_CR3),
+           (unsigned long long)vmr(HOST_CR4));
+    kdbp("Sysenter RSP=%016llx CS:RIP=%04x:%016llx\n",
+           (unsigned long long)vmr(HOST_SYSENTER_ESP),
+           (int)vmr(HOST_SYSENTER_CS),
+           (unsigned long long)vmr(HOST_SYSENTER_EIP));
+    kdbp("Host PAT = 0x%08x%08x\n",
+           (uint32_t)vmr(HOST_PAT_HIGH), (uint32_t)vmr(HOST_PAT));
+
+    kdbp("\n*** Control State ***\n");
+    kdbp("PinBased=%08x CPUBased=%08x SecondaryExec=%08x\n",
+           (uint32_t)vmr(PIN_BASED_VM_EXEC_CONTROL),
+           (uint32_t)vmr(CPU_BASED_VM_EXEC_CONTROL),
+           (uint32_t)vmr(SECONDARY_VM_EXEC_CONTROL));
+    kdbp("EntryControls=%08x ExitControls=%08x\n",
+           (uint32_t)vmr(VM_ENTRY_CONTROLS),
+           (uint32_t)vmr(VM_EXIT_CONTROLS));
+    kdbp("ExceptionBitmap=%08x\n",
+           (uint32_t)vmr(EXCEPTION_BITMAP));
+    kdbp("PAGE_FAULT_ERROR_CODE  MASK:0x%lx  MATCH:0x%lx\n", 
+         (unsigned long)vmr(PAGE_FAULT_ERROR_CODE_MASK),
+         (unsigned long)vmr(PAGE_FAULT_ERROR_CODE_MATCH));
+    kdbp("VMEntry: intr_info=%08x errcode=%08x ilen=%08x\n",
+           (uint32_t)vmr(VM_ENTRY_INTR_INFO),
+           (uint32_t)vmr(VM_ENTRY_EXCEPTION_ERROR_CODE),
+           (uint32_t)vmr(VM_ENTRY_INSTRUCTION_LEN));
+    kdbp("VMExit: intr_info=%08x errcode=%08x ilen=%08x\n",
+           (uint32_t)vmr(VM_EXIT_INTR_INFO),
+           (uint32_t)vmr(VM_EXIT_INTR_ERROR_CODE),
+           (uint32_t)vmr(VM_EXIT_INSTRUCTION_LEN));
+    kdbp("        reason=%08x qualification=%08x\n",
+           (uint32_t)vmr(VM_EXIT_REASON),
+           (uint32_t)vmr(EXIT_QUALIFICATION));
+    kdbp("IDTVectoring: info=%08x errcode=%08x\n",
+           (uint32_t)vmr(IDT_VECTORING_INFO),
+           (uint32_t)vmr(IDT_VECTORING_ERROR_CODE));
+    kdbp("TPR Threshold = 0x%02x\n",
+           (uint32_t)vmr(TPR_THRESHOLD));
+    kdbp("EPT pointer = 0x%08x%08x\n",
+           (uint32_t)vmr(EPT_POINTER_HIGH), (uint32_t)vmr(EPT_POINTER));
+    kdbp("Virtual processor ID = 0x%04x\n",
+           (uint32_t)vmr(VIRTUAL_PROCESSOR_ID));
+    kdbp("=================================================================\n");
+}
+
+/* Flush the VMCS on this cpu if it needs it:
+ *   - Upon leaving kdb, the HVM cpu will resume in vmx_vmexit_handler() and
+ *     do __vmreads, so the VMCS pointer can't be left cleared.
+ *   - __vmpclear sets the vmx state to 'clear', so the next entry must use
+ *     vmlaunch rather than vmresume. Hence we must also clear
+ *     arch_vmx->launched.
+ */
+void kdb_curr_cpu_flush_vmcs(void)
+{
+    struct domain *dp;
+    struct vcpu *vp;
+    int ccpu = smp_processor_id();
+    struct vmcs_struct *cvp = this_cpu(current_vmcs);
+
+    if (this_cpu(current_vmcs) == NULL)
+        return;             /* no HVM active on this CPU */
+
+    kdbp("KDB:[%d] curvmcs:%lx/%lx\n", ccpu, cvp, virt_to_maddr(cvp));
+
+    /* current_vmcs points to the VMCS rather than the VCPU, so we must
+     * search the entire domain list to find the owning vcpu. */
+    for_each_domain (dp) {
+        if ( !(is_hvm_or_hyb_domain(dp)) || dp->is_dying)
+            continue;
+        for_each_vcpu (dp, vp) {
+            if ( vp->arch.hvm_vmx.vmcs == cvp ) {
+                __vmpclear(virt_to_maddr(vp->arch.hvm_vmx.vmcs));
+                __vmptrld(virt_to_maddr(vp->arch.hvm_vmx.vmcs));
+                vp->arch.hvm_vmx.launched = 0;
+                this_cpu(current_vmcs) = NULL;
+                kdbp("KDB:[%d] %d:%d current_vmcs:%lx/%lx flushed\n",
+                     ccpu, dp->domain_id, vp->vcpu_id, cvp, virt_to_maddr(cvp));
+            }
+        }
+    }
+}
+
+/*
+ * domid == 0 : display for all HVM domains  (dom0 is never an HVM domain)
+ * vcpu id == -1 : display all vcpuids
+ * PreCondition: all HVM cpus (including current cpu) have flushed VMCS
+ */
+void kdb_dump_vmcs(domid_t did, int vid)
+{
+    struct domain *dp;
+    struct vcpu *vp;
+    struct vmcs_struct  *vmcsp;
+    u64 addr = -1;
+
+    ASSERT(!local_irq_is_enabled());     /* kdb should always run disabled */
+    __vmptrst(&addr);
+
+    for_each_domain (dp) {
+        if ( !(is_hvm_or_hyb_domain(dp)) || dp->is_dying)
+            continue;
+        if (did != 0 && did != dp->domain_id)
+            continue;
+
+        for_each_vcpu (dp, vp) {
+            if (vid != -1 && vid != vp->vcpu_id)
+                continue;
+
+            vmcsp = vp->arch.hvm_vmx.vmcs;
+            kdbp("VMCS %lx/%lx [domid:%d (%p)  vcpu:%d (%p)]:\n", vmcsp,
+                 virt_to_maddr(vmcsp), dp->domain_id, dp, vp->vcpu_id, vp);
+            __vmptrld(virt_to_maddr(vmcsp));
+            kdb_print_vmcs(vp);
+            __vmpclear(virt_to_maddr(vmcsp));
+            vp->arch.hvm_vmx.launched = 0;
+        }
+        kdbp("\n");
+    }
+    /* restore orig vmcs pointer for __vmreads in vmx_vmexit_handler() */
+    if (addr && addr != (u64)-1)
+        __vmptrld(addr);
+}
+#endif
 
 /*
  * Local variables:
diff -r 32034d1914a6 xen/arch/x86/hvm/vmx/vmx.c
--- a/xen/arch/x86/hvm/vmx/vmx.c	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/arch/x86/hvm/vmx/vmx.c	Wed Aug 29 14:39:57 2012 -0700
@@ -2183,11 +2183,14 @@
         printk("reason not known yet!");
         break;
     }
-
+#if defined(XEN_KDB_CONFIG)
+    kdbp("\n************* VMCS Area **************\n");
+    kdb_dump_vmcs(curr->domain->domain_id, (curr)->vcpu_id);
+#else
     printk("************* VMCS Area **************\n");
     vmcs_dump_vcpu(curr);
     printk("**************************************\n");
-
+#endif
     domain_crash(curr->domain);
 }
 
@@ -2415,6 +2418,12 @@
             write_debugreg(6, exit_qualification | 0xffff0ff0);
             if ( !v->domain->debugger_attached || cpu_has_monitor_trap_flag )
                 goto exit_and_crash;
+
+#if defined(XEN_KDB_CONFIG)
+            /* TRAP_debug: IP points correctly to next instr */
+            if (kdb_handle_trap_entry(vector, regs))
+                break;
+#endif
             domain_pause_for_debugger();
             break;
         case TRAP_int3: 
@@ -2423,6 +2432,13 @@
             if ( v->domain->debugger_attached )
             {
                 update_guest_eip(); /* Safe: INT3 */            
+#if defined(XEN_KDB_CONFIG)
+                /* vmcs.IP points to the bp instruction but kdb expects bp+1;
+                 * update_guest_eip() above has already advanced to bp+1.
+                 * This works for gdbsx too. */
+                if (kdb_handle_trap_entry(vector, regs))
+                    break;
+#endif
                 current->arch.gdbsx_vcpu_event = TRAP_int3;
                 domain_pause_for_debugger();
                 break;
@@ -2707,6 +2723,10 @@
     case EXIT_REASON_MONITOR_TRAP_FLAG:
         v->arch.hvm_vmx.exec_control &= ~CPU_BASED_MONITOR_TRAP_FLAG;
         vmx_update_cpu_exec_control(v);
+#if defined(XEN_KDB_CONFIG)
+        if (kdb_handle_trap_entry(TRAP_debug, regs))
+            break;
+#endif
         if ( v->arch.hvm_vcpu.single_step ) {
           hvm_memory_event_single_step(regs->eip);
           if ( v->domain->debugger_attached )
diff -r 32034d1914a6 xen/arch/x86/irq.c
--- a/xen/arch/x86/irq.c	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/arch/x86/irq.c	Wed Aug 29 14:39:57 2012 -0700
@@ -2305,3 +2305,29 @@
     return is_hvm_domain(d) && pirq &&
            pirq->arch.hvm.emuirq != IRQ_UNBOUND; 
 }
+
+#ifdef XEN_KDB_CONFIG
+void kdb_prnt_guest_mapped_irqs(void)
+{
+    int irq, j;
+    char affstr[NR_CPUS/4+NR_CPUS/32+2];    /* courtesy dump_irqs() */
+
+    kdbp("irq  vec  aff  type  domid:mapped-pirq pairs  (all in decimal)\n");
+    for (irq=0; irq < nr_irqs; irq++) {
+        irq_desc_t  *dp = irq_to_desc(irq);
+        struct arch_irq_desc *archp = &dp->arch;
+        irq_guest_action_t *actp = (irq_guest_action_t *)dp->action;
+
+        if (!dp->handler || dp->handler==&no_irq_type || !(dp->status&IRQ_GUEST))
+            continue;
+
+        cpumask_scnprintf(affstr, sizeof(affstr), dp->affinity);
+        kdbp("[%3d] %3d %3s %-13s ", irq, archp->vector, affstr,
+             dp->handler->typename);
+        for (j=0; j < actp->nr_guests; j++)
+            kdbp("%03d:%04d ", actp->guest[j]->domain_id,
+                 domain_irq_to_pirq(actp->guest[j], irq));
+        kdbp("\n");
+    }
+}
+#endif
diff -r 32034d1914a6 xen/arch/x86/setup.c
--- a/xen/arch/x86/setup.c	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/arch/x86/setup.c	Wed Aug 29 14:39:57 2012 -0700
@@ -47,6 +47,13 @@
 #include <xen/cpu.h>
 #include <asm/nmi.h>
 
+#ifdef XEN_KDB_CONFIG
+#include <asm/debugger.h>
+
+int opt_earlykdb=0;
+boolean_param("earlykdb", opt_earlykdb);
+#endif
+
 /* opt_nosmp: If true, secondary processors are ignored. */
 static bool_t __initdata opt_nosmp;
 boolean_param("nosmp", opt_nosmp);
@@ -1242,6 +1249,11 @@
 
     trap_init();
 
+#ifdef XEN_KDB_CONFIG
+    kdb_init();
+    if (opt_earlykdb)
+        kdb_trap_immed(KDB_TRAP_NONFATAL);
+#endif
     rcu_init();
     
     early_time_init();
diff -r 32034d1914a6 xen/arch/x86/smp.c
--- a/xen/arch/x86/smp.c	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/arch/x86/smp.c	Wed Aug 29 14:39:57 2012 -0700
@@ -273,7 +273,7 @@
  * Structure and data for smp_call_function()/on_selected_cpus().
  */
 
-static void __smp_call_function_interrupt(void);
+static void __smp_call_function_interrupt(struct cpu_user_regs *regs);
 static DEFINE_SPINLOCK(call_lock);
 static struct call_data_struct {
     void (*func) (void *info);
@@ -321,7 +321,7 @@
     if ( cpumask_test_cpu(smp_processor_id(), &call_data.selected) )
     {
         local_irq_disable();
-        __smp_call_function_interrupt();
+        __smp_call_function_interrupt(NULL);
         local_irq_enable();
     }
 
@@ -390,7 +390,7 @@
     this_cpu(irq_count)++;
 }
 
-static void __smp_call_function_interrupt(void)
+static void __smp_call_function_interrupt(struct cpu_user_regs *regs)
 {
     void (*func)(void *info) = call_data.func;
     void *info = call_data.info;
@@ -411,6 +411,11 @@
     {
         mb();
         cpumask_clear_cpu(cpu, &call_data.selected);
+#ifdef XEN_KDB_CONFIG
+        if (info && !strcmp(info, "XENKDB")) {           /* called from kdb */
+                (*(void (*)(struct cpu_user_regs *, void *))func)(regs, info);
+        } else
+#endif
         (*func)(info);
     }
 
@@ -421,5 +426,5 @@
 {
     ack_APIC_irq();
     perfc_incr(ipis);
-    __smp_call_function_interrupt();
+    __smp_call_function_interrupt(regs);
 }
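A side note for reviewers on the smp.c change above: when kdb initiates the cross-CPU call it passes the literal string "XENKDB" as the info cookie, and the handler is then invoked through a regs-taking function type. A minimal, self-contained C sketch of that dispatch convention (the types, `dispatch`, and the handlers below are stand-ins, not the real Xen definitions):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Stand-in for the Xen type; not the real definition. */
struct cpu_user_regs { unsigned long ip; };

typedef void (*call_fn)(void *info);
typedef void (*kdb_fn)(struct cpu_user_regs *regs, void *info);

static struct cpu_user_regs *seen_regs;
static int plain_called;

static void kdb_ipi_handler(struct cpu_user_regs *regs, void *info)
{
    (void)info;
    seen_regs = regs;               /* kdb wants the interrupted context */
}

static void plain_handler(void *info)
{
    (void)info;
    plain_called = 1;
}

/* Mirrors the dispatch added to __smp_call_function_interrupt(): the
 * "XENKDB" cookie in info selects the regs-taking calling convention. */
static void dispatch(call_fn func, void *info, struct cpu_user_regs *regs)
{
    if (info && !strcmp(info, "XENKDB"))
        ((kdb_fn)func)(regs, info);
    else
        func(info);
}
```

The cast round-trips the function pointer back to its original type before the call, which is what keeps the indirect call well-defined; calling through a mismatched type would not be.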
diff -r 32034d1914a6 xen/arch/x86/time.c
--- a/xen/arch/x86/time.c	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/arch/x86/time.c	Wed Aug 29 14:39:57 2012 -0700
@@ -2007,6 +2007,46 @@
 }
 __initcall(setup_dump_softtsc);
 
+#ifdef XEN_KDB_CONFIG
+void kdb_time_resume(int update_domains)
+{
+        s_time_t now;
+        int ccpu = smp_processor_id();
+        struct cpu_time *t = &this_cpu(cpu_time);
+
+        if (!plt_src.read_counter)            /* not initialized for earlykdb */
+                return;
+
+        if (update_domains) {
+                plt_stamp = plt_src.read_counter();
+                platform_timer_stamp = plt_stamp64;
+                platform_time_calibration();
+                do_settime(get_cmos_time(), 0, read_platform_stime());
+        }
+        if (local_irq_is_enabled())
+                kdbp("kdb BUG: enabled in time_resume(). ccpu:%d\n", ccpu);
+
+        rdtscll(t->local_tsc_stamp);
+        now = read_platform_stime();
+        t->stime_master_stamp = now;
+        t->stime_local_stamp  = now;
+
+        update_vcpu_system_time(current);
+
+        if (update_domains)
+                set_timer(&calibration_timer, NOW() + EPOCH);
+}
+
+void kdb_dump_time_pcpu(void)
+{
+    int cpu;
+    for_each_online_cpu(cpu) {
+        kdbp("[%d]: cpu_time: %016lx\n", cpu, &per_cpu(cpu_time, cpu));
+        kdbp("[%d]: cpu_calibration: %016lx\n", cpu, 
+             &per_cpu(cpu_calibration, cpu));
+    }
+}
+#endif
 /*
  * Local variables:
  * mode: C
diff -r 32034d1914a6 xen/arch/x86/traps.c
--- a/xen/arch/x86/traps.c	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/arch/x86/traps.c	Wed Aug 29 14:39:57 2012 -0700
@@ -225,7 +225,7 @@
 
 #else
 
-static void show_trace(struct cpu_user_regs *regs)
+void show_trace(struct cpu_user_regs *regs)
 {
     unsigned long *frame, next, addr, low, high;
 
@@ -3326,6 +3326,10 @@
     if ( nmi_callback(regs, cpu) )
         return;
 
+#ifdef XEN_KDB_CONFIG
+    if (kdb_enabled && kdb_handle_trap_entry(TRAP_nmi, regs))
+        return;
+#endif
     if ( nmi_watchdog )
         nmi_watchdog_tick(regs);
 
diff -r 32034d1914a6 xen/arch/x86/x86_64/compat/entry.S
--- a/xen/arch/x86/x86_64/compat/entry.S	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/arch/x86/x86_64/compat/entry.S	Wed Aug 29 14:39:57 2012 -0700
@@ -95,6 +95,10 @@
 /* %rbx: struct vcpu */
 ENTRY(compat_test_all_events)
         cli                             # tests must not race interrupts
+#ifdef XEN_KDB_CONFIG
+        testl $1, kdb_session_begun(%rip)
+        jnz   compat_restore_all_guest
+#endif
 /*compat_test_softirqs:*/
         movl  VCPU_processor(%rbx),%eax
         shlq  $IRQSTAT_shift,%rax
diff -r 32034d1914a6 xen/arch/x86/x86_64/entry.S
--- a/xen/arch/x86/x86_64/entry.S	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/arch/x86/x86_64/entry.S	Wed Aug 29 14:39:57 2012 -0700
@@ -184,6 +184,10 @@
 /* %rbx: struct vcpu */
 test_all_events:
         cli                             # tests must not race interrupts
+#ifdef XEN_KDB_CONFIG                   /* 64bit dom0 will resume here */
+        testl $1, kdb_session_begun(%rip)
+        jnz   restore_all_guest
+#endif
 /*test_softirqs:*/  
         movl  VCPU_processor(%rbx),%eax
         shl   $IRQSTAT_shift,%rax
@@ -546,6 +550,13 @@
 
 ENTRY(int3)
         pushq $0
+#ifdef XEN_KDB_CONFIG
+        pushq %rax
+        GET_CPUINFO_FIELD(CPUINFO_processor_id, %rax)
+        movq  (%rax), %rax
+        lock  bts %rax, kdb_cpu_traps(%rip)
+        popq  %rax
+#endif
         movl  $TRAP_int3,4(%rsp)
         jmp   handle_exception
 
diff -r 32034d1914a6 xen/common/domain.c
--- a/xen/common/domain.c	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/common/domain.c	Wed Aug 29 14:39:57 2012 -0700
@@ -530,6 +530,14 @@
 {
     struct vcpu *v;
 
+#ifdef XEN_KDB_CONFIG
+    if (reason == SHUTDOWN_crash) {
+        if ( IS_PRIV(d) )
+            kdb_trap_immed(KDB_TRAP_FATAL);
+        else
+            kdb_trap_immed(KDB_TRAP_NONFATAL);
+    }
+#endif
     spin_lock(&d->shutdown_lock);
 
     if ( d->shutdown_code == -1 )
@@ -624,7 +632,9 @@
     for_each_vcpu ( d, v )
         vcpu_sleep_nosync(v);
 
-    send_global_virq(VIRQ_DEBUGGER);
+    /* send VIRQ_DEBUGGER to guest only if gdbsx_vcpu_event is not active */
+    if (current->arch.gdbsx_vcpu_event == 0)
+        send_global_virq(VIRQ_DEBUGGER);
 }
 
 /* Complete domain destroy after RCU readers are not holding old references. */
diff -r 32034d1914a6 xen/common/sched_credit.c
--- a/xen/common/sched_credit.c	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/common/sched_credit.c	Wed Aug 29 14:39:57 2012 -0700
@@ -1475,6 +1475,33 @@
     printk("\n");
 }
 
+#ifdef XEN_KDB_CONFIG
+static void kdb_csched_dump(int cpu)
+{
+    struct csched_pcpu *pcpup = CSCHED_PCPU(cpu);
+    struct vcpu *scurrvp = (CSCHED_VCPU(current))->vcpu;
+    struct list_head *tmp, *runq = RUNQ(cpu);
+
+    kdbp("    csched_pcpu: %p\n", pcpup);
+    kdbp("    curr csched:%p {vcpu:%p id:%d domid:%d}\n", (current)->sched_priv,
+         scurrvp, scurrvp->vcpu_id, scurrvp->domain->domain_id);
+    kdbp("    runq:\n");
+
+    /* runq_elem.next is first in the struct, so cast directly, no macros */
+    if (offsetof(struct csched_vcpu, runq_elem.next) != 0) {
+        kdbp("next is not first in struct csched_vcpu. please fixme\n");
+        return;        /* otherwise for loop will crash */
+    }
+    for (tmp = runq->next; tmp != runq; tmp = tmp->next) {
+
+        struct csched_vcpu *csp = (struct csched_vcpu *)tmp;
+        struct vcpu *vp = csp->vcpu;
+        kdbp("      csp:%p pri:%02d vcpu: {p:%p id:%d domid:%d}\n", csp,
+             csp->pri, vp, vp->vcpu_id, vp->domain->domain_id);
+    }
+}
+#endif
+
 static void
 csched_dump_pcpu(const struct scheduler *ops, int cpu)
 {
@@ -1484,6 +1511,10 @@
     int loop;
 #define cpustr keyhandler_scratch
 
+#ifdef XEN_KDB_CONFIG
+    kdb_csched_dump(cpu);
+    return;
+#endif
     spc = CSCHED_PCPU(cpu);
     runq = &spc->runq;
 
diff -r 32034d1914a6 xen/common/schedule.c
--- a/xen/common/schedule.c	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/common/schedule.c	Wed Aug 29 14:39:57 2012 -0700
@@ -1454,6 +1454,25 @@
     schedule();
 }
 
+#ifdef XEN_KDB_CONFIG
+void kdb_print_sched_info(void)
+{
+    int cpu;
+
+    kdbp("Scheduler: name:%s opt_name:%s id:%d\n", ops.name, ops.opt_name,
+         ops.sched_id);
+    kdbp("per cpu schedule_data:\n");
+    for_each_online_cpu(cpu) {
+        struct schedule_data *p =  &per_cpu(schedule_data, cpu);
+        kdbp("  cpu:%d  &(per cpu)schedule_data:%p\n", cpu, p);
+        kdbp("         curr:%p sched_priv:%p\n", p->curr, p->sched_priv);
+        kdbp("\n");
+        ops.dump_cpu_state(&ops, cpu);
+        kdbp("\n");
+    }
+}
+#endif
+
 #ifdef CONFIG_COMPAT
 #include "compat/schedule.c"
 #endif
diff -r 32034d1914a6 xen/common/symbols.c
--- a/xen/common/symbols.c	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/common/symbols.c	Wed Aug 29 14:39:57 2012 -0700
@@ -168,3 +168,21 @@
 
     spin_unlock_irqrestore(&lock, flags);
 }
+
+#ifdef XEN_KDB_CONFIG
+/*
+ * Given a symbol name, return its address.
+ */
+unsigned long address_lookup(char *symp)
+{
+    int i, off = 0;
+    char namebuf[KSYM_NAME_LEN+1];
+
+    for (i=0; i < symbols_num_syms; i++) {
+        off = symbols_expand_symbol(off, namebuf);
+        if (strcmp(namebuf, symp) == 0)                  /* found it */
+            return symbols_address(i);
+    }
+    return 0;
+}
+#endif
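For reference, address_lookup() above is a linear scan that expands the compressed symbol table one name at a time. The same idea over a flat table (the names and addresses below are made up for illustration; the real code walks kallsyms-style compressed arrays):

```c
#include <assert.h>
#include <string.h>

/* Illustrative flat symbol table; stand-in for the compressed arrays
 * that symbols_expand_symbol() decodes entry by entry. */
struct sym { const char *name; unsigned long addr; };

static const struct sym table[] = {
    { "do_timer", 0xc042645cUL },
    { "idle_cpu", 0xc0431000UL },
};

static unsigned long address_lookup(const char *symp)
{
    size_t i;

    for (i = 0; i < sizeof(table) / sizeof(table[0]); i++)
        if (strcmp(table[i].name, symp) == 0)   /* found it */
            return table[i].addr;
    return 0;                                   /* not found */
}
```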
diff -r 32034d1914a6 xen/common/timer.c
--- a/xen/common/timer.c	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/common/timer.c	Wed Aug 29 14:39:57 2012 -0700
@@ -643,6 +643,48 @@
     register_keyhandler('a', &dump_timerq_keyhandler);
 }
 
+#ifdef XEN_KDB_CONFIG
+#include <xen/symbols.h>
+void kdb_dump_timer_queues(void)
+{
+    struct timer  *t;
+    struct timers *ts;
+    unsigned long sz, offs;
+    char buf[KSYM_NAME_LEN+1];
+    int cpu, j;
+    u64 tsc;
+
+    for_each_online_cpu( cpu )
+    {
+        ts = &per_cpu(timers, cpu);
+        kdbp("CPU[%02d]:", cpu);
+
+        if (cpu == smp_processor_id()) {
+            s_time_t now = NOW();
+            rdtscll(tsc);
+            kdbp("NOW:0x%08x%08x TSC:0x%016lx\n", (u32)(now>>32),(u32)now, tsc);
+        } else
+            kdbp("\n");
+
+        /* timers in the heap */
+        for ( j = 1; j <= GET_HEAP_SIZE(ts->heap); j++ ) {
+            t = ts->heap[j];
+            kdbp("  %d: exp=0x%08x%08x fn:%s data:%p\n",
+                 j, (u32)(t->expires>>32), (u32)t->expires,
+                 symbols_lookup((unsigned long)t->function, &sz, &offs, buf),
+                 t->data);
+        }
+        /* timers on the link list */
+        for ( t = ts->list, j = 0; t != NULL; t = t->list_next, j++ ) {
+            kdbp(" L%d: exp=0x%08x%08x fn:%s data:%p\n",
+                 j, (u32)(t->expires>>32), (u32)t->expires,
+                 symbols_lookup((unsigned long)t->function, &sz, &offs, buf),
+                 t->data);
+        }
+    }
+}
+#endif
+
 /*
  * Local variables:
  * mode: C
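kdb_dump_timer_queues() above walks the per-cpu timer heap from index 1 through GET_HEAP_SIZE(), then the overflow linked list. A toy model of the 1-based heap walk (assuming, as the loop bounds suggest, that slot 0 is reserved for metadata rather than a timer):

```c
#include <assert.h>

#define HEAP_CAP 8

struct timer { unsigned long long expires; };

static struct timer *heap[HEAP_CAP + 1];    /* slot 0 unused, as in the dump */
static int heap_size;

static void heap_add(struct timer *t)
{
    heap[++heap_size] = t;          /* insertion order only; no sifting here */
}

static unsigned long long earliest_expiry(void)
{
    unsigned long long min = ~0ULL;
    int j;

    for (j = 1; j <= heap_size; j++)    /* note: 1..size, matching the dump */
        if (heap[j]->expires < min)
            min = heap[j]->expires;
    return min;
}
```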
diff -r 32034d1914a6 xen/drivers/char/console.c
--- a/xen/drivers/char/console.c	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/drivers/char/console.c	Wed Aug 29 14:39:57 2012 -0700
@@ -295,6 +295,21 @@
 {
     static int switch_code_count = 0;
 
+#ifdef XEN_KDB_CONFIG
+    /* if ctrl-\ pressed and kdb handles it, return */
+    if (kdb_enabled && c == 0x1c) {
+        if (!kdb_session_begun) {
+            if (kdb_keyboard(regs))
+                return;
+        } else {
+            kdbp("Sorry... kdb session already active.. please try again..\n");
+            return;
+        }
+    }
+    if (kdb_session_begun)      /* kdb should already be polling */
+        return;                 /* swallow chars so they don't buffer in dom0 */
+#endif
+
     if ( switch_code && (c == switch_code) )
     {
         /* We eat CTRL-<switch_char> in groups of 3 to switch console input. */
@@ -710,6 +725,18 @@
     atomic_dec(&print_everything);
 }
 
+#ifdef XEN_KDB_CONFIG
+void console_putc(char c)
+{
+    serial_putc(sercon_handle, c);
+}
+
+int console_getc(void)
+{
+    return serial_getc(sercon_handle);
+}
+#endif
+
 /*
  * printk rate limiting, lifted from Linux.
  *
diff -r 32034d1914a6 xen/include/asm-x86/debugger.h
--- a/xen/include/asm-x86/debugger.h	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/include/asm-x86/debugger.h	Wed Aug 29 14:39:57 2012 -0700
@@ -39,7 +39,11 @@
 #define DEBUGGER_trap_fatal(_v, _r) \
     if ( debugger_trap_fatal(_v, _r) ) return;
 
-#if defined(CRASH_DEBUG)
+#if defined(XEN_KDB_CONFIG)
+#define debugger_trap_immediate() kdb_trap_immed(KDB_TRAP_NONFATAL)
+#define debugger_trap_fatal(_v, _r) kdb_trap_fatal(_v, _r)
+
+#elif defined(CRASH_DEBUG)
 
 #include <xen/gdbstub.h>
 
@@ -70,6 +74,10 @@
 {
     struct vcpu *v = current;
 
+#ifdef XEN_KDB_CONFIG
+    if (kdb_handle_trap_entry(vector, regs))
+        return 1;
+#endif
     if ( guest_kernel_mode(v, regs) && v->domain->debugger_attached &&
          ((vector == TRAP_int3) || (vector == TRAP_debug)) )
     {
diff -r 32034d1914a6 xen/include/xen/lib.h
--- a/xen/include/xen/lib.h	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/include/xen/lib.h	Wed Aug 29 14:39:57 2012 -0700
@@ -116,4 +116,7 @@
 struct cpu_user_regs;
 void dump_execstate(struct cpu_user_regs *);
 
+#ifdef XEN_KDB_CONFIG
+#include "../../kdb/include/kdb_extern.h"
+#endif
 #endif /* __LIB_H__ */
diff -r 32034d1914a6 xen/include/xen/sched.h
--- a/xen/include/xen/sched.h	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/include/xen/sched.h	Wed Aug 29 14:39:57 2012 -0700
@@ -576,11 +576,14 @@
 unsigned long hypercall_create_continuation(
     unsigned int op, const char *format, ...);
 void hypercall_cancel_continuation(void);
-
+#ifdef XEN_KDB_CONFIG
+#define hypercall_preempt_check() (0)
+#else
 #define hypercall_preempt_check() (unlikely(    \
         softirq_pending(smp_processor_id()) |   \
         local_events_need_delivery()            \
     ))
+#endif
 
 extern struct domain *domain_list;
 
diff -r 32034d1914a6 xen/kdb/Makefile
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/xen/kdb/Makefile	Wed Aug 29 14:39:57 2012 -0700
@@ -0,0 +1,5 @@
+
+obj-y		+= kdbmain.o kdb_cmds.o kdb_io.o 
+
+subdir-y += x86 guest
+
diff -r 32034d1914a6 xen/kdb/README
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/xen/kdb/README	Wed Aug 29 14:39:57 2012 -0700
@@ -0,0 +1,243 @@
+
+Welcome to kdb for Xen, a debugger built into the hypervisor.
+
+FEATURES:
+   - set breakpoints in hypervisor
+   - examine virt/machine memory, registers, domains, vcpus, etc...
+   - single step, single step till jump/call, step over call to next
+     instruction after the call.
+   - examine memory of a PV/HVM guest. 
+   - set breakpoints, single step, etc... for a PV guest.
+   - breaking into the debugger will freeze the system, all CPUs will pause,
+     no interrupts are acknowledged in the debugger. (Hence, the wall clock
+     will drift)
+   - single step will step only that cpu.
+   - earlykdb: break into kdb very early during boot. Put "earlykdb" on the
+               xen command line in grub.conf.
+   - generic tracing functions (see below) for quick tracing to debug timing
+     related problems. To use:
+        o set KDBTRCMAX to max num of recs in circular trc buffer in kdbmain.c
+	o call kdb_trc() from anywhere in xen
+	o turn tracing on by setting kdb_trcon in kdbmain.c or trcon command.
+	o trcp in kdb will give hints to dump trace recs. Use dd to see buffer
+	o trcz will zero out the entire buffer if needed.
+
+NOTE:
+   - since almost all numbers are in hex, 0x is not prefixed. Instead, decimal
+     numbers are preceded by $, as in $17 (sorry, one gets used to it). Note,
+     vcpu num, cpu num, domid are always displayed in decimal, without $.
+   - watchdog must be disabled to use kdb
+
+ISSUES:
+   - Currently, debug hypervisor is not supported. Make sure NDEBUG is defined
+     or compile with debug=n
+   - "timer went backwards" messages on dom0, but kdb/hyp should be fine.
+     I usually do "echo 2 > /proc/sys/kernel/printk" when using kdb.
+   - 32bit hypervisor may hang. Tested on 64bit hypervisor only.
+    
+
+TO BUILD:
+ - build with: make kdb=y
+
+HOW TO USE:
+  1. A serial line is needed to use the debugger. Set up a serial line
+     from the source machine to the target victim. Verify the serial line
+     works by displaying the login prompt, logging in, etc.
+
+  2. Add following to grub.conf:
+        kernel /xen.kdb console=com1,vga com1=57600,8n1 dom0_mem=542M
+
+        (57600 or whatever baud rate was used in step 1 above)
+
+  3. Boot the hypervisor built with the debugger. 
+
+  4. ctrl-\ (ctrl and backslash) will break into the debugger. If the system is
+     badly hung, sending an NMI will also break into it. However, once kdb is
+     entered via NMI, normal execution can't continue.
+
+  5. type 'h' for list of commands.
+
+  6. Command line editing is limited to backspace. ctrl-c to start a new cmd.
+
+
+
+GUEST debug:
+  - type sym in the debugger
+  - for REL4, grep kallsyms_names, kallsyms_addresses, and kallsyms_num_syms
+    in the guest System.map* file. Run sym again with domid and the three
+    values on the command line.
+  - Now basic symbols can be used for guest debug. Note, if the binary is not
+    built with symbols, only function names are available, but not global vars.
+
+    Eg: sym 0 c0696084 c068a590 c0696080 c06b43e8 c06b4740
+        will set symbols for dom 0. Then :
+
+        [4]xkdb> bp some_function 0
+
+        will set a bp at some_function in dom 0
+
+        [3]xkdb> dw c068a590 32 0 : display 32 bytes of dom0 memory
+
+
+Tips:
+  - In "[0]xkdb>"  : 0 is the cpu number in decimal
+  - In
+      00000000c042645c: 0:do_timer+17                  push %ebp
+    0:do_timer : 0 is the domid in hex
+    the offset +17 is in hex.
+
+    absence of a "0:" prefix indicates a hypervisor function
+
+  - commands starting with kdb (kdb*) are for kdb debug only.
+
+
+Finally,
+ - think hex.
+ - bug/problem: enter kdbdbg, reproduce, and send me the output.
+   If the output is not enough, I may ask to run kdbdbg twice, then collect
+   output.
+
+
+Thanks,
+Mukesh Rathor
+Oracle Corporation,
+Redwood Shores, CA 94065
+
+--------------------------------------------------------------------------------
+COMMAND DESCRIPTION:
+
+info:  Print basic info like version, compile flags, etc..
+
+cur:  print current domain id and vcpu id
+
+f: display current stack. If a vcpu ptr is given, then print stack for that
+   VCPU by using its IP and SP.
+
+fg: display stack for a guest given domid, SP and IP.
+
+dw: display words of memory. 'num' of bytes is optional, but it is required
+    when displaying guest memory.
+
+dd: same as above, but display doublewords.
+
+dwm: same as above but the address is a machine address instead of virtual.
+
+ddm: same as above, but display doublewords.
+
+dr: display registers. If 'sp' is specified then print a few extra registers.
+
+drg: display guest context saved on stack bottom.
+
+dis: disassemble instructions. If disassembling for a guest, then 'num' must
+     be specified. 'num' is the number of instructions to display.
+
+dism: toggle disassembly mode between Intel and AT&T/GAS.
+
+mw: modify a word in memory given a virtual address. 'domid' may be specified
+    if modifying guest memory. The value is assumed to be hex even without 0x.
+
+md: same as above but modify doubleword.
+
+mr: modify register. The value is assumed to be hex.
+
+bc: clear given or all breakpoints
+
+bp: display breakpoints or set a breakpoint. Domid may be specified to set a bp
+    in guest. kdb functions may not be specified if debugging kdb.
+    Example:
+      xkdb> bp acpi_processor_idle  : will set bp in xen
+      xkdb> bp default_idle 0 :   will set bp in domid 0
+      xkdb> bp idle_cpu 9 :   will set bp in domid 9
+
+     Conditions may be specified for a bp: lhs == rhs or lhs != rhs,
+     where lhs is a register like 'r6', 'rax', etc., or a memory location,
+     and rhs is a hex value with or without a leading 0x.
+     Thus,
+      xkdb> bp acpi_processor_idle rdi == c000 
+      xkdb> bp 0xffffffff80062ebc 0 rsi == ffff880021edbc98 : will break into
+            kdb at 0xffffffff80062ebc in dom0 when rsi is ffff880021edbc98 
+
+btp: breakpoint trace. Upon bp, print some info and continue without stopping.
+   Ex: btp idle_cpu 7 rax rbx 0x20ef5a5 r9
+
+   will print: rax, rbx, *(long *)0x20ef5a5, r9 upon hitting idle_cpu() and 
+               continue.
+
+wp: set a watchpoint at a virtual address which can belong to hypervisor or
+    any guest. Do not specify wp in kdb path if debugging kdb.
+
+wc: clear given or all watchpoints.
+
+ni: single step, stepping over function calls.
+
+ss: single step. Be careful when in interrupt handlers or context switches.
+    
+ssb: single step to branch. Use with care.
+
+go: leave kdb and continue.
+
+cpu: switch back to the cpu that originally entered kdb. If a 'cpu number' is
+     given, switch to that cpu. If 'all', then show the status of all cpus.
+
+nmi: Only available in hung/crash state. Send NMI to a cpu that may be hung.
+
+sym: Initialize a symbol table for debugging a guest. Look into the System.map
+     file of guest for certain symbol values and provide them here.
+
+vcpuh: Given vcpu ptr, display hvm_vcpu struct.
+
+vcpu: Display current vcpu struct. If 'vcpu-ptr' given, display that vcpu.
+
+dom: display current domain. If 'domid' then display that domid. If 'all', then
+     display all domains.
+
+sched: show scheduler info and run queues.
+
+mmu: print basic mmu info
+
+p2m: convert a gpfn to mfn given a domid. value in hex even without 0x.
+
+m2p: convert mfn to pfn. value in hex even without 0x.
+
+dpage: display struct page given an mfn or struct page ptr. Since no info is
+       kept on the page type, all possible page types are displayed.
+
+dtrq: display timer queues.
+
+didt: dump IDT table.
+
+dgt: dump GDT table.
+
+dirq: display IRQ bindings.
+
+dvmc: display all or given dom/vcpu VMCS or VMCB.
+
+trcon: turn tracing on. Trace hooks must be added in xen and the kdb trace
+       function called directly from there.
+
+trcoff: turn tracing off.
+
+trcz: zero trace buffer.
+
+trcp: give hints to print the circular trace buffer, like current active ptr.
+
+usr1: allows you to add an arbitrary command quickly.
+
+--------------------------------------------------------------------------------
+/*
+ * Copyright (C) 2008 Oracle.  All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public
+ * License v2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public
+ * License along with this program; if not, write to the
+ * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
+ * Boston, MA 021110-1307, USA.
+ */
diff -r 32034d1914a6 xen/kdb/guest/Makefile
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/xen/kdb/guest/Makefile	Wed Aug 29 14:39:57 2012 -0700
@@ -0,0 +1,3 @@
+
+obj-y           := kdb_guest.o
+
diff -r 32034d1914a6 xen/kdb/guest/kdb_guest.c
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/xen/kdb/guest/kdb_guest.c	Wed Aug 29 14:39:57 2012 -0700
@@ -0,0 +1,342 @@
+/*
+ * Copyright (C) 2009, Mukesh Rathor, Oracle Corp.  All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public
+ * License v2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public
+ * License along with this program; if not, write to the
+ * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
+ * Boston, MA 021110-1307, USA.
+ */
+
+#include "../include/kdbinc.h"
+
+/* information about symbols for a guest (including dom0) is saved here */
+struct gst_syminfo {           /* guest symbols info */
+    int   domid;               /* which domain */
+    int   bitness;             /* 32 or 64 */
+    void *addrtblp;            /* ptr to (32/64)addresses tbl */
+    u8   *toktbl;              /* ptr to kallsyms_token_table */
+    u16  *tokidxtbl;           /* ptr to kallsyms_token_index */
+    u8   *kallsyms_names;      /* ptr to kallsyms_names */
+    long  kallsyms_num_syms;   /* ptr to kallsyms_num_syms */
+    kdbva_t  stext;            /* value of _stext in guest */
+    kdbva_t  etext;            /* value of _etext in guest */
+    kdbva_t  sinittext;        /* value of _sinittext in guest */
+    kdbva_t  einittext;        /* value of _einittext in guest */
+};
+
+#define MAX_CACHE 16                              /* cache up to 16 guests */
+struct gst_syminfo gst_syminfoa[MAX_CACHE];       /* guest symbol info array */
+
+static struct gst_syminfo *
+kdb_get_syminfo_slot(void)
+{
+    int i;
+    for (i=0; i < MAX_CACHE; i++)
+        if (gst_syminfoa[i].addrtblp == NULL)
+            return (&gst_syminfoa[i]);      
+
+    return NULL;
+}
+
+static struct gst_syminfo *
+kdb_domid2syminfop(domid_t domid)
+{
+    int i;
+    for (i=0; i < MAX_CACHE; i++)
+        if (gst_syminfoa[i].domid == domid)
+            return (&gst_syminfoa[i]);      
+
+    return NULL;
+}
+
+/* check if an address looks like text address in guest */
+int
+kdb_is_addr_guest_text(kdbva_t addr, int domid)
+{
+    struct gst_syminfo *gp = kdb_domid2syminfop(domid);
+
+    if (!gp || !gp->stext || !gp->etext)
+        return 0;
+    KDBGP1("guestaddr: addr:%lx domid:%d\n", addr, domid);
+
+    return ( (addr >= gp->stext && addr <= gp->etext) ||
+             (addr >= gp->sinittext && addr <= gp->einittext) );
+}
+
+/*
+ * returns: value of kallsyms_addresses[idx];
+ */
+static kdbva_t
+kdb_rd_guest_addrtbl(struct gst_syminfo *gp, int idx)
+{
+    kdbva_t addr, retaddr=0;
+    int num = gp->bitness/8;       /* whether 4 byte or 8 byte ptrs */
+    domid_t id = gp->domid;
+
+    addr = (kdbva_t)(((char *)gp->addrtblp) + idx * num);
+    KDBGP1("rdguestaddrtbl:addr:%lx idx:%d\n", addr, idx);
+
+    if (kdb_read_mem(addr, (kdbbyt_t *)&retaddr,num,id) != num) {
+        kdbp("Can't read addrtbl domid:%d at:%lx\n", id, addr);
+        return 0;
+    }
+    KDBGP1("rdguestaddrtbl:exit:retaddr:%lx\n", retaddr);
+    return retaddr;
+}
+
+/* Based on el5 kallsyms.c file. */
+static unsigned int 
+kdb_expand_el5_sym(struct gst_syminfo *gp, unsigned int off, char *result)
+{   
+    int len, skipped_first = 0;
+    u8 u8idx, *tptr, *datap;
+    domid_t domid = gp->domid;
+
+    *result = '\0';
+
+    /* get the compressed symbol length from the first symbol byte */
+    datap = gp->kallsyms_names + off;
+    len = 0;
+    if ((kdb_read_mem((kdbva_t)datap, (kdbbyt_t *)&len, 1, domid)) != 1) {
+        KDBGP("failed to read guest memory\n");
+        return 0;
+    }
+    datap++;
+
+    /* update the offset to return the offset for the next symbol on
+     * the compressed stream */
+    off += len + 1;
+
+    /* for every byte on the compressed symbol data, copy the table
+     * entry for that byte */
+    while(len) {
+        u16 u16idx, *u16p;
+        if (kdb_read_mem((kdbva_t)datap,(kdbbyt_t *)&u8idx,1,domid)!=1){
+            kdbp("memory (u8idx) read error:%p\n",gp->tokidxtbl);
+            return 0;
+        }
+        u16p = u8idx + gp->tokidxtbl;
+        if (kdb_read_mem((kdbva_t)u16p,(kdbbyt_t *)&u16idx,2,domid)!=2){
+            kdbp("tokidxtbl read error:%p\n", u16p);
+            return 0;
+        }
+        tptr = gp->toktbl + u16idx;
+        datap++;
+        len--;
+
+        while ((kdb_read_mem((kdbva_t)tptr, (kdbbyt_t *)&u8idx, 1, domid)==1) &&
+               u8idx) {
+
+            if(skipped_first) {
+                *result = u8idx;
+                result++;
+            } else
+                skipped_first = 1;
+            tptr++;
+        }
+    }
+    *result = '\0';
+    return off;          /* return the offset of the next symbol */
+}
+
+#define EL4_NMLEN 127
+/* so much pain, not sure it's worth it... :) */
+static kdbva_t
+kdb_expand_el4_sym(struct gst_syminfo *gp, int low, char *result, char *symp)
+{   
+    int i, j;
+    u8 *nmp = gp->kallsyms_names;       /* guest address space */
+    kdbbyt_t byte, prefix;
+    domid_t id = gp->domid;
+    kdbva_t addr;
+
+    KDBGP1("Eel4sym:nmp:%p maxidx:$%d sym:%s\n", nmp, low, symp);
+    for (i=0; i <= low; i++) {
+        /* unsigned prefix = *name++; */
+        if (kdb_read_mem((kdbva_t)nmp, &prefix, 1, id) != 1) {
+            kdbp("failed to read:%p domid:%x\n", nmp, id);
+            return 0;
+        }
+        KDBGP2("el4:i:%d prefix:%x\n", i, prefix);
+        nmp++;
+        /* strncpy(namebuf + prefix, name, KSYM_NAME_LEN - prefix); */
+        addr = (long)result + prefix;
+        for (j=0; j < EL4_NMLEN-prefix; j++) {
+            if (kdb_read_mem((kdbva_t)nmp, &byte, 1, id) != 1) {
+                kdbp("failed read:%p domid:%x\n", nmp, id);
+                return 0;
+            }
+            KDBGP2("el4:j:%d byte:%x\n", j, byte);
+            *(kdbbyt_t *)addr = byte;
+            addr++; nmp++;
+            if (byte == '\0')
+                break;
+        }
+        KDBGP2("el4sym:i:%d res:%s\n", i, result);
+        if (symp && strcmp(result, symp) == 0)
+            return(kdb_rd_guest_addrtbl(gp, i));
+
+        /* kallsyms.c: name += strlen(name) + 1; */
+        if (j == EL4_NMLEN-prefix && byte != '\0')
+            while (kdb_read_mem((kdbva_t)nmp, &byte, 1, id) && byte != '\0')
+                nmp++;
+    }
+    KDBGP1("Xel4sym: na-ga-da\n");
+    return 0;
+}
+
+static unsigned int
+kdb_get_el5_symoffset(struct gst_syminfo *gp, long pos)
+{
+    int i;
+    u8 data, *namep;
+    domid_t domid = gp->domid;
+
+    namep = gp->kallsyms_names;
+    for (i=0; i < pos; i++) {
+        if (kdb_read_mem((kdbva_t)namep, &data, 1, domid) != 1) {
+            kdbp("Can't read id:$%d mem:%p\n", domid, namep);
+            return 0;
+        }
+        namep = namep + data + 1;
+    }
+    return namep - gp->kallsyms_names;
+}
+
+/*
+ * for a given guest domid (domid >= 0 && < KDB_HYPDOMID), convert addr to
+ * symbol. *offsp is set to addr - symbolstart.
+ */
+char *
+kdb_guest_addr2sym(unsigned long addr, domid_t domid, ulong *offsp)
+{
+    static char namebuf[KSYM_NAME_LEN+1];
+    unsigned long low, high, mid;
+    struct gst_syminfo *gp = kdb_domid2syminfop(domid);
+
+    *offsp = 0;
+    if(!gp || gp->kallsyms_num_syms == 0)
+        return " ??? ";
+
+    namebuf[0] = namebuf[KSYM_NAME_LEN] = '\0';
+    if (1) {
+        /* do a binary search on the sorted kallsyms_addresses array */
+        low = 0;
+        high = gp->kallsyms_num_syms;
+
+        while (high-low > 1) {
+            mid = (low + high) / 2;
+            if (kdb_rd_guest_addrtbl(gp, mid) <= addr) 
+                low = mid;
+            else 
+                high = mid;
+        }
+        /* Grab name */
+        if (gp->toktbl) {
+            int symoff = kdb_get_el5_symoffset(gp,low);
+            kdb_expand_el5_sym(gp, symoff, namebuf);
+        } else
+            kdb_expand_el4_sym(gp, low, namebuf, NULL);
+        *offsp = addr - kdb_rd_guest_addrtbl(gp, low);
+        return namebuf;
+    }
+    return " ???? ";
+}
+
+
+/* 
+ * save guest (dom0 and others) symbols info : domid and following addresses:
+ *     &kallsyms_names &kallsyms_addresses &kallsyms_num_syms \
+ *     &kallsyms_token_table &kallsyms_token_index
+ */
+void
+kdb_sav_dom_syminfo(domid_t domid, long namesp, long addrap, long nump,
+                    long toktblp, long tokidxp)
+{
+    int bytes;
+    long val = 0;    /* must be set to zero for 32 on 64 cases */
+    struct gst_syminfo *gp = kdb_get_syminfo_slot();
+
+    if (gp == NULL) {
+        kdbp("kdb:kdb_sav_dom_syminfo():Table full.. symbols not saved\n");
+        return;
+    }
+    memset(gp, 0, sizeof(*gp));
+
+    gp->domid = domid;
+    gp->bitness = kdb_guest_bitness(domid);
+    gp->addrtblp = (void *)addrap;
+    gp->kallsyms_names = (u8 *)namesp;
+    gp->toktbl = (u8 *)toktblp;
+    gp->tokidxtbl = (u16 *)tokidxp;
+
+    KDBGP("domid:%x bitness:$%d numsyms:$%ld arrayp:%p\n", domid,
+          gp->bitness, gp->kallsyms_num_syms, gp->addrtblp);
+
+    bytes = gp->bitness/8;
+    if (kdb_read_mem(nump, (kdbbyt_t *)&val, bytes, domid) != bytes) {
+
+        kdbp("Unable to read number of symbols from:%lx\n", nump);
+        memset(gp, 0, sizeof(*gp));
+        return;
+    } else
+        kdbp("Number of symbols:$%ld\n", val);
+
+    gp->kallsyms_num_syms = val;
+
+    bytes = (gp->bitness/8) * gp->kallsyms_num_syms;
+    gp->stext = kdb_guest_sym2addr("_stext", domid);
+    gp->etext = kdb_guest_sym2addr("_etext", domid);
+    if (!gp->stext || !gp->etext)
+        kdbp("Warn: Can't find stext/etext\n");
+
+    if (gp->toktbl && gp->tokidxtbl) {
+        gp->sinittext = kdb_guest_sym2addr("_sinittext", domid);
+        gp->einittext = kdb_guest_sym2addr("_einittext", domid);
+        if (!gp->sinittext || !gp->einittext) {
+            kdbp("Warn: Can't find sinittext/einittext\n");
+        }
+    }
+    KDBGP1("stxt:%lx etxt:%lx sitxt:%lx eitxt:%lx\n", gp->stext, gp->etext,
+           gp->sinittext, gp->einittext);
+    kdbp("Successfully saved symbol info\n");
+}
+
+/*
+ * given a symbol string for a guest/domid, return its address
+ */
+kdbva_t
+kdb_guest_sym2addr(char *symp, domid_t domid)
+{
+    char namebuf[KSYM_NAME_LEN+1];
+    int i, off=0;
+    struct gst_syminfo *gp = kdb_domid2syminfop(domid);
+
+    KDBGP("sym2a: sym:%s domid:%x numsyms:%ld\n", symp, domid,
+          gp ? gp->kallsyms_num_syms: -1);
+
+    if (!gp)
+        return 0;
+
+    if (gp->toktbl == 0 || gp->tokidxtbl == 0)
+        return(kdb_expand_el4_sym(gp, gp->kallsyms_num_syms, namebuf, symp));
+
+    for (i=0; i < gp->kallsyms_num_syms; i++) {
+        off = kdb_expand_el5_sym(gp, off, namebuf);
+        KDBGP1("i:%d namebuf:%s\n", i, namebuf);
+        if (strcmp(namebuf, symp) == 0) {
+            return(kdb_rd_guest_addrtbl(gp, i));
+        }
+    }
+    KDBGP("sym2a:exit:na-ga-da\n");
+    return 0;
+}
diff -r 32034d1914a6 xen/kdb/include/kdb_extern.h
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/xen/kdb/include/kdb_extern.h	Wed Aug 29 14:39:57 2012 -0700
@@ -0,0 +1,66 @@
+/*
+ * Copyright (C) 2009, Mukesh Rathor, Oracle Corp.  All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public
+ * License v2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public
+ * License along with this program; if not, write to the
+ * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
+ * Boston, MA 021110-1307, USA.
+ */
+
+#ifndef _KDB_EXTERN_H
+#define _KDB_EXTERN_H
+
+#define KDB_TRAP_FATAL     1    /* trap is fatal. can't resume from kdb */
+#define KDB_TRAP_NONFATAL  2    /* can resume from kdb */
+#define KDB_TRAP_KDBSTACK  3    /* to debug kdb itself. dump kdb stack */
+
+/* following can be called from anywhere in xen to debug */
+extern void kdb_trap_immed(int);
+extern void kdbtrc(unsigned int, unsigned int, uint64_t, uint64_t, uint64_t);
+extern void kdbp(const char *fmt, ...);
+
+typedef unsigned long kdbva_t;
+typedef unsigned char kdbbyt_t;
+typedef unsigned long kdbma_t;
+
+extern unsigned long kdb_dr7;
+
+
+extern volatile int kdb_session_begun;
+extern volatile int kdb_enabled;
+extern void kdb_init(void);
+extern int kdb_keyboard(struct cpu_user_regs *);
+extern void kdb_ssni_reenter(struct cpu_user_regs *);
+extern int kdb_handle_trap_entry(int, struct cpu_user_regs *);
+extern int kdb_trap_fatal(int, struct cpu_user_regs *);  /* fatal with regs */
+extern void kdb_dump_vmcs(uint16_t did, int vid);
+void kdb_dump_vmcb(uint16_t did, int vid);
+extern void kdb_dump_time_pcpu(void);
+
+
+#define VMPTRST_OPCODE  ".byte 0x0f,0xc7\n"     /* reg/opcode: /7 */
+#define MODRM_EAX_07    ".byte 0x38\n"          /* [EAX], with reg/opcode: /7 */
+static inline void __vmptrst(u64 *addr)
+{
+    asm volatile ( VMPTRST_OPCODE
+                   MODRM_EAX_07
+                   :
+                   : "a" (addr)
+                   : "memory");
+}
+
+#define is_hvm_or_hyb_domain is_hvm_domain
+#define is_hvm_or_hyb_vcpu is_hvm_vcpu
+#define is_hybrid_vcpu(x) (0)
+
+
+#endif  /* _KDB_EXTERN_H */
diff -r 32034d1914a6 xen/kdb/include/kdbdefs.h
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/xen/kdb/include/kdbdefs.h	Wed Aug 29 14:39:57 2012 -0700
@@ -0,0 +1,86 @@
+/*
+ * Copyright (C) 2009, Mukesh Rathor, Oracle Corp.  All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public
+ * License v2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public
+ * License along with this program; if not, write to the
+ * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
+ * Boston, MA 021110-1307, USA.
+ */
+
+#ifndef _KDBDEFS_H
+#define _KDBDEFS_H
+
+/* reason we are entering kdbmain (bp == breakpoint) */
+typedef enum {
+    KDB_REASON_KEYBOARD=1,  /* Keyboard entry - always 1 */
+    KDB_REASON_BPEXCP,      /* #BP excp: sw bp (INT3) */
+    KDB_REASON_DBEXCP,      /* #DB excp: TF flag or HW bp */
+    KDB_REASON_PAUSE_IPI,   /* received pause IPI from another CPU */
+} kdb_reason_t;
+
+
+/* cpu state: past, present, and future */
+typedef enum {
+    KDB_CPU_INVAL=0,     /* invalid value. not in or leaving kdb */
+    KDB_CPU_QUIT,        /* main cpu does GO. all others do QUIT */
+    KDB_CPU_PAUSE,       /* cpu is paused */
+    KDB_CPU_DISABLE,     /* disable interrupts */
+    KDB_CPU_SHOWPC,      /* all cpus must display their pc */
+    KDB_CPU_DO_VMEXIT,   /* all cpus must do vmcs vmexit. intel only */
+    KDB_CPU_MAIN_KDB,    /* cpu in kdb main command loop */
+    KDB_CPU_GO,          /* user entered go for this cpu */
+    KDB_CPU_SS,          /* single step for this cpu */
+    KDB_CPU_NI,          /* go to next instr after the call instr */
+    KDB_CPU_INSTALL_BP,  /* delayed install of sw bp(s) by this cpu */
+} kdb_cpu_cmd_t;
+
+/* ============= kdb commands ============================================= */
+
+typedef kdb_cpu_cmd_t (*kdb_func_t)(int, const char **, struct cpu_user_regs *);
+typedef kdb_cpu_cmd_t (*kdb_usgf_t)(void);
+
+typedef enum {
+    KDB_REPEAT_NONE = 0,    /* Do not repeat this command */
+    KDB_REPEAT_NO_ARGS,     /* Repeat the command without arguments */
+    KDB_REPEAT_WITH_ARGS,   /* Repeat the command including its arguments */
+} kdb_repeat_t;
+
+typedef struct _kdbtab {
+    char        *kdb_cmd_name;        /* Command name */
+    kdb_func_t   kdb_cmd_func;        /* ptr to function to execute command */
+    kdb_usgf_t   kdb_cmd_usgf;        /* usage function ptr */
+    int          kdb_cmd_crash_avail; /* available in sys fatal/crash state */
+    kdb_repeat_t kdb_cmd_repeat;      /* Does command auto repeat on enter? */
+} kdbtab_t;
+
+
+/* ============= types and stuff ========================================= */
+#define BFD_INVAL (~0UL)            /* invalid bfd_vma */
+
+#if defined(__x86_64__)
+  #define KDBIP rip
+  #define KDBSP rsp
+#else
+  #define KDBIP eip
+  #define KDBSP esp
+#endif
+
+/* ============= macros ================================================== */
+extern volatile int kdbdbg;
+#define KDBGP(...) {(kdbdbg) ? kdbp(__VA_ARGS__):0;}
+#define KDBGP1(...) {(kdbdbg>1) ? kdbp(__VA_ARGS__):0;}
+#define KDBGP2(...) {(kdbdbg>2) ? kdbp(__VA_ARGS__):0;}
+#define KDBGP3(...) {0;};
+
+#define KDBMIN(x,y) (((x)<(y))?(x):(y))
+
+#endif  /* !_KDBDEFS_H */
diff -r 32034d1914a6 xen/kdb/include/kdbinc.h
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/xen/kdb/include/kdbinc.h	Wed Aug 29 14:39:57 2012 -0700
@@ -0,0 +1,69 @@
+/*
+ * Copyright (C) 2009, Mukesh Rathor, Oracle Corp.  All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public
+ * License v2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public
+ * License along with this program; if not, write to the
+ * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
+ * Boston, MA 021110-1307, USA.
+ */
+
+#ifndef _KDBINC_H
+#define _KDBINC_H
+
+#include <xen/compile.h>
+#include <xen/config.h>
+#include <xen/version.h>
+#include <xen/compat.h>
+#include <xen/init.h>
+#include <xen/lib.h>
+#include <xen/errno.h>
+#include <xen/sched.h>
+#include <xen/domain.h>
+#include <xen/mm.h>
+#include <xen/event.h>
+#include <xen/time.h>
+#include <xen/console.h>
+#include <xen/softirq.h>
+#include <xen/domain_page.h>
+#include <xen/rangeset.h>
+#include <xen/guest_access.h>
+#include <xen/hypercall.h>
+#include <xen/delay.h>
+#include <xen/shutdown.h>
+#include <xen/percpu.h>
+#include <xen/multicall.h>
+#include <xen/rcupdate.h>
+#include <xen/ctype.h>
+#include <xen/symbols.h>
+#include <xen/shutdown.h>
+#include <xen/serial.h>
+#include <xen/grant_table.h>
+#include <asm/debugger.h>
+#include <asm/shared.h>
+#include <asm/apicdef.h>
+
+#include <asm/nmi.h>
+#include <asm/p2m.h>
+#include <asm/debugreg.h>
+#include <public/sched.h>
+#include <public/vcpu.h>
+#ifdef _XEN_LATEST
+#include <xsm/xsm.h>
+#endif
+
+#include <asm/hvm/vmx/vmx.h>
+
+#include "kdb_extern.h"
+#include "kdbdefs.h"
+#include "kdbproto.h"
+
+#endif /* !_KDBINC_H */
diff -r 32034d1914a6 xen/kdb/include/kdbproto.h
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/xen/kdb/include/kdbproto.h	Wed Aug 29 14:39:57 2012 -0700
@@ -0,0 +1,80 @@
+/*
+ * Copyright (C) 2009, Mukesh Rathor, Oracle Corp.  All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public
+ * License v2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public
+ * License along with this program; if not, write to the
+ * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
+ * Boston, MA 021110-1307, USA.
+ */
+
+#ifndef _KDBPROTO_H
+#define _KDBPROTO_H
+
+/* hypervisor interfaces used by kdb, or kdb interfaces used in xen files */
+extern void console_putc(char);
+extern int console_getc(void);
+extern void show_trace(struct cpu_user_regs *);
+extern void kdb_dump_timer_queues(void);
+extern void kdb_time_resume(int);
+extern void kdb_print_sched_info(void);
+extern void kdb_curr_cpu_flush_vmcs(void);
+extern unsigned long address_lookup(char *);
+extern void kdb_prnt_guest_mapped_irqs(void);
+
+/* kdb globals */
+extern kdbtab_t *kdb_cmd_tbl;
+extern char kdb_prompt[32];
+extern volatile int kdb_sys_crash;
+extern volatile kdb_cpu_cmd_t kdb_cpu_cmd[NR_CPUS];
+extern volatile int kdb_trcon;
+
+/* kdb interfaces */
+extern void __init kdb_io_init(void);
+extern void kdb_init_cmdtab(void);
+extern void kdb_do_cmds(struct cpu_user_regs *);
+extern int kdb_check_sw_bkpts(struct cpu_user_regs *);
+extern int kdb_check_watchpoints(struct cpu_user_regs *);
+extern void kdb_do_watchpoints(kdbva_t, int, int);
+extern void kdb_install_watchpoints(void);
+extern void kdb_clear_wps(int);
+extern kdbma_t kdb_rd_dbgreg(int);
+
+
+
+extern char *kdb_get_cmdline(char *);
+extern void kdb_clear_prev_cmd(void);
+extern void kdb_toggle_dis_syntax(void);
+extern int kdb_check_call_instr(domid_t, kdbva_t);
+extern void kdb_display_pc(struct cpu_user_regs *);
+extern kdbva_t kdb_print_instr(kdbva_t, long, domid_t);
+extern int kdb_read_mmem(kdbva_t, kdbbyt_t *, int);
+extern int kdb_read_mem(kdbva_t, kdbbyt_t *, int, domid_t);
+extern int kdb_write_mem(kdbva_t, kdbbyt_t *, int, domid_t);
+
+extern void kdb_install_all_swbp(void);
+extern void kdb_uninstall_all_swbp(void);
+extern int kdb_swbp_exists(void);
+extern void kdb_flush_swbp_table(void);
+extern int kdb_is_addr_guest_text(kdbva_t, int);
+extern kdbva_t kdb_guest_sym2addr(char *, domid_t);
+extern char *kdb_guest_addr2sym(unsigned long, domid_t, ulong *);
+extern void kdb_prnt_addr2sym(domid_t, kdbva_t, char *);
+extern void kdb_sav_dom_syminfo(domid_t, long, long, long, long, long);
+extern int kdb_guest_bitness(domid_t);
+extern void kdb_nmi_pause_cpus(cpumask_t);
+
+extern void kdb_trczero(void);
+void kdb_trcp(void);
+
+
+
+#endif /* !_KDBPROTO_H */
diff -r 32034d1914a6 xen/kdb/kdb_cmds.c
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/xen/kdb/kdb_cmds.c	Wed Aug 29 14:39:57 2012 -0700
@@ -0,0 +1,3789 @@
+/*
+ * Copyright (C) 2009, Mukesh Rathor, Oracle Corp.  All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public
+ * License v2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public
+ * License along with this program; if not, write to the
+ * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
+ * Boston, MA 021110-1307, USA.
+ */
+
+#include "include/kdbinc.h"
+
+#if defined(__x86_64__)
+    #define KDBF64 "%lx"
+    #define KDBFL "%016lx"         /* print long all digits */
+#else
+    #define KDBF64 "%llx"
+    #define KDBFL "%08lx"
+#endif
+
+#if XEN_SUBVERSION > 4 || XEN_VERSION == 4              /* xen 3.5.x or above */
+    #define KDB_LKDEF(l) ((l).raw.lock)
+    #define KDB_PGLLE(t) ((t).tail)    /* page list last element ^%$#@ */
+#else
+    #define KDB_LKDEF(l) ((l).lock)
+    #define KDB_PGLLE(t) ((t).prev)    /* page list last element ^%$#@ */
+#endif
+
+#define KDB_CMD_HISTORY_COUNT   32
+#define CMD_BUFLEN              200     /* kdb_printf: max printline == 256 */
+
+#define KDBMAXSBP 16                    /* max number of software breakpoints */
+#define KDB_MAXARGC 16                  /* max args in a kdb command */
+#define KDB_MAXBTP  8                   /* max display args in btp */
+
+/* condition is: 'r6 == 0x123f' or '0xffffffff82800000 != deadbeef'  */
+struct kdb_bpcond {
+    kdbbyt_t bp_cond_status;       /* 0 == off, 1 == register, 2 == memory */
+    kdbbyt_t bp_cond_type;         /* 0 == bad, 1 == equal, 2 == not equal */
+    ulong    bp_cond_lhs;          /* lhs of condition: reg offset or mem loc */
+    ulong    bp_cond_rhs;          /* right hand side of condition */
+};
+
+/* software breakpoint structure */
+struct kdb_sbrkpt {
+    kdbva_t  bp_addr;              /* address the bp is set at */
+    domid_t  bp_domid;             /* which domain the bp belongs to */
+    kdbbyt_t bp_originst;          /* save orig instr/s here */
+    kdbbyt_t bp_deleted;           /* delete pending on this bp */
+    kdbbyt_t bp_ni;                /* set for KDB_CPU_NI */
+    kdbbyt_t bp_just_added;        /* added in the current kdb session */
+    kdbbyt_t bp_type;              /* 0 = normal, 1 == cond,  2 == btp */
+    union {
+        struct kdb_bpcond bp_cond;
+        ulong *bp_btp;
+    } u;
+};
+
+/* don't use kmalloc in kdb which hijacks all cpus */
+static ulong kdb_btp_argsa[KDBMAXSBP][KDB_MAXBTP];
+static ulong *kdb_btp_ap[KDBMAXSBP];
+
+static struct kdb_reg_nmofs {
+    char *reg_nm;
+    int reg_offs;
+} kdb_reg_nm_offs[] =  {
+       { "rax", offsetof(struct cpu_user_regs, rax) },
+       { "rbx", offsetof(struct cpu_user_regs, rbx) },
+       { "rcx", offsetof(struct cpu_user_regs, rcx) },
+       { "rdx", offsetof(struct cpu_user_regs, rdx) },
+       { "rsi", offsetof(struct cpu_user_regs, rsi) },
+       { "rdi", offsetof(struct cpu_user_regs, rdi) },
+       { "rbp", offsetof(struct cpu_user_regs, rbp) },
+       { "rsp", offsetof(struct cpu_user_regs, rsp) },
+       { "r8",  offsetof(struct cpu_user_regs, r8) },
+       { "r9",  offsetof(struct cpu_user_regs, r9) },
+       { "r10", offsetof(struct cpu_user_regs, r10) },
+       { "r11", offsetof(struct cpu_user_regs, r11) },
+       { "r12", offsetof(struct cpu_user_regs, r12) },
+       { "r13", offsetof(struct cpu_user_regs, r13) },
+       { "r14", offsetof(struct cpu_user_regs, r14) },
+       { "r15", offsetof(struct cpu_user_regs, r15) },
+       { "rflags", offsetof(struct cpu_user_regs, rflags) } };
+
+static const int KDBBPSZ=1;                   /* size of KDB_BPINST is 1 byte*/
+static kdbbyt_t kdb_bpinst = 0xcc;            /* breakpoint instr: INT3 */
+static struct kdb_sbrkpt kdb_sbpa[KDBMAXSBP]; /* soft brkpt array/table */
+static kdbtab_t *tbp;
+
+static int kdb_set_bp(domid_t, kdbva_t, int, ulong *, char*, char*, char*);
+static void kdb_print_uregs(struct cpu_user_regs *);
+
+
+/* ===================== cmdline functions  ================================ */
+
+/* lp points to a string of only alphanumeric chars terminated by '\n'.
+ * Parse the string into argv pointers, and RETURN argc
+ * Eg:  if lp --> "dr  sp\n" :  argv[0]=="dr\0"  argv[1]=="sp\0"  argc==2
+ */
+static int
+kdb_parse_cmdline(char *lp, const char **argv)
+{
+    int i=0;
+
+    for (; *lp == ' '; lp++);      /* note: isspace() skips '\n' also */
+    while ( *lp != '\n' ) {
+        if (i == KDB_MAXARGC) {
+            printk("kdb: max args exceeded\n");
+            break;
+        }
+        argv[i++] = lp;
+        for (; *lp != ' ' && *lp != '\n'; lp++);
+        if (*lp != '\n')
+            *lp++ = '\0';
+        for (; *lp == ' '; lp++);
+    }
+    *lp = '\0';
+    return i;
+}
+
+void
+kdb_clear_prev_cmd(void)         /* so previous command is not repeated */
+{
+    tbp = NULL;
+}
+
+void
+kdb_do_cmds(struct cpu_user_regs *regs)
+{
+    char *cmdlinep;
+    const char *argv[KDB_MAXARGC];
+    int argc = 0, curcpu = smp_processor_id();
+    kdb_cpu_cmd_t result = KDB_CPU_MAIN_KDB;
+
+    snprintf(kdb_prompt, sizeof(kdb_prompt), "[%d]xkdb> ", curcpu);
+
+    while (result == KDB_CPU_MAIN_KDB) {
+        cmdlinep = kdb_get_cmdline(kdb_prompt);
+        if (*cmdlinep == '\n') {
+            if (tbp==NULL || tbp->kdb_cmd_func==NULL)
+                continue;
+            else
+                argc = -1;    /* repeat prev command */
+        } else {
+            argc = kdb_parse_cmdline(cmdlinep, argv);
+            for(tbp=kdb_cmd_tbl; tbp->kdb_cmd_func; tbp++)  {
+                if (strcmp(argv[0], tbp->kdb_cmd_name)==0) 
+                    break;
+            }
+        }
+        if (kdb_sys_crash && tbp->kdb_cmd_func && !tbp->kdb_cmd_crash_avail) {
+            kdbp("cmd not available in fatal/crashed state....\n");
+            continue;
+        }
+        if (tbp->kdb_cmd_func) {
+            result = (*tbp->kdb_cmd_func)(argc, argv, regs);
+            if (tbp->kdb_cmd_repeat == KDB_REPEAT_NONE)
+                tbp = NULL;
+        } else
+            kdbp("kdb: Unknown cmd: %s\n", cmdlinep);
+    }
+    kdb_cpu_cmd[curcpu] = result;
+    return;
+}
+
+/* ===================== Util functions  ==================================== */
+
+int
+kdb_vcpu_valid(struct vcpu *in_vp)
+{
+    struct domain *dp;
+    struct vcpu *vp;
+
+    for(dp=domain_list; in_vp && dp; dp=dp->next_in_list)
+        for_each_vcpu(dp, vp)
+            if (in_vp == vp)
+                return 1;
+    return 0;     /* not found */
+}
+
+/*
+ * Given a symbol, find its address
+ */
+static kdbva_t
+kdb_sym2addr(const char *p, domid_t domid)
+{
+    kdbva_t addr;
+
+    KDBGP1("sym2addr: p:%s domid:%d\n", p, domid);
+    if (domid == DOMID_IDLE)
+        addr = address_lookup((char *)p);
+    else
+        addr = (kdbva_t)kdb_guest_sym2addr((char *)p, domid);
+    KDBGP1("sym2addr: exit: addr returned:0x%lx\n", addr);
+    return addr;
+}
+
+/*
+ * Convert ASCII string to int decimal (base 10).
+ * Return: 0 if conversion failed, otherwise 1
+ */
+static int
+kdb_str2deci(const char *strp, int *intp)
+{
+    const char *endp;
+
+    KDBGP2("str2deci: str:%s\n", strp);
+    if (!isdigit(*strp))
+        return 0;
+    *intp = (int)simple_strtoul(strp, &endp, 10);
+    if (endp != strp+strlen(strp))
+        return 0;
+    KDBGP2("str2deci: intval:$%d\n", *intp);
+    return 1;
+}
+/*
+ * Convert ASCII string to ulong. NOTE: base is 16.
+ * Return: 0 if conversion failed, otherwise 1
+ */
+static int
+kdb_str2ulong(const char *strp, ulong *longp)
+{
+    ulong val;
+    const char *endp;
+
+    KDBGP2("str2long: str:%s\n", strp);
+    if (!isxdigit(*strp))
+        return 0;
+    val = (ulong)simple_strtoul(strp, &endp, 16);  /* handles leading 0x */
+    if (endp != strp+strlen(strp))
+        return 0;
+    if (longp)
+        *longp = val;
+    KDBGP2("str2long: val:0x%lx\n", val);
+    return 1;
+}
+/*
+ * Convert a symbol or ASCII address to a hex address.
+ * Return: 0 if conversion failed, otherwise 1
+ */
+static int
+kdb_str2addr(const char *strp, kdbva_t *addrp, domid_t id)
+{
+    kdbva_t addr;
+    const char *endp;
+
+    /* assume it's an address */
+    KDBGP2("str2addr: str:%s id:%d\n", strp, id);
+    addr = (kdbva_t)simple_strtoul(strp, &endp, 16); /*handles leading 0x */
+    if (endp != strp+strlen(strp))
+        if ( !(addr=kdb_sym2addr(strp, id)) )
+            return 0;
+    *addrp = addr;
+    KDBGP2("str2addr: addr:0x%lx\n", addr);
+    return 1;
+}
+
+/* Given a domid, return a ptr to the struct domain:
+ * if domid == DOMID_IDLE, return ptr to the idle domain
+ * if domid is a valid domain, return ptr to its domain struct
+ * else domid is bad; return NULL
+ */
+static struct domain *
+kdb_domid2ptr(domid_t domid)
+{
+    struct domain *dp;
+
+    /* get_domain_by_id() ret NULL for both DOMID_IDLE and bad domids */
+    if (domid == DOMID_IDLE)
+        dp = idle_vcpu[smp_processor_id()]->domain;
+    else 
+        dp = get_domain_by_id(domid);   /* NULL now means bad domid */
+    return dp;
+}
+
+/*
+ * Returns:  1: success, *idp set to the domid.
+ *           0: failed. invalid domid or string, *idp not changed.
+ */
+static int
+kdb_str2domid(const char *domstr, domid_t *idp, int perr)
+{
+    int id;
+    if (!kdb_str2deci(domstr, &id) || !kdb_domid2ptr((domid_t)id)) {
+        if (perr)
+            kdbp("Invalid domid:%s\n", domstr);
+        return 0;
+    }
+    *idp = (domid_t)id;
+    return 1;
+}
+
+static struct domain *
+kdb_strdomid2ptr(const char *domstr, int perror)
+{
+    domid_t domid;
+    if (kdb_str2domid(domstr, &domid, perror)) {
+        return(kdb_domid2ptr(domid));
+    }
+    return NULL;
+}
+
+/* return a guest bitness: 32 or 64 */
+int
+kdb_guest_bitness(domid_t domid)
+{
+    const int HYPSZ = sizeof(long) * 8;
+    struct domain *dp = kdb_domid2ptr(domid);
+    int retval; 
+
+    if (is_idle_domain(dp))
+        retval = HYPSZ;
+    else if (is_hvm_or_hyb_domain(dp))
+        retval = (hvm_long_mode_enabled(dp->vcpu[0])) ? HYPSZ : 32;
+    else 
+        retval = is_pv_32bit_domain(dp) ? 32 : HYPSZ;
+    KDBGP1("gbitness: domid:%d dp:%p bitness:%d\n", domid, dp, retval);
+    return retval;
+}
+
+/* kdb_print_spin_lock(&xyz_lock, "xyz_lock:", "\n"); */
+static void
+kdb_print_spin_lock(char *strp, spinlock_t *lkp, char *nlp)
+{
+    kdbp("%s %04hx %d %d%s", strp, KDB_LKDEF(*lkp), lkp->recurse_cpu,
+         lkp->recurse_cnt, nlp);
+}
+
+/* check if register string is valid. if yes, return offset to the register
+ * in cpu_user_regs, else return -1 */
+static int
+kdb_valid_reg(const char *nmp) 
+{
+    int i;
+    for (i=0; i < sizeof(kdb_reg_nm_offs)/sizeof(kdb_reg_nm_offs[0]); i++)
+        if (strcmp(kdb_reg_nm_offs[i].reg_nm, nmp) == 0)
+            return kdb_reg_nm_offs[i].reg_offs;
+    return -1;
+}
+
+/* given offset of register, return register name string. if offset is invalid
+ * return NULL */
+static char *kdb_regoffs_to_name(int offs)
+{
+    int i;
+    for (i=0; i < sizeof(kdb_reg_nm_offs)/sizeof(kdb_reg_nm_offs[0]); i++)
+        if (kdb_reg_nm_offs[i].reg_offs == offs)
+            return kdb_reg_nm_offs[i].reg_nm;
+    return NULL;
+}
+
+/* ===================== util struct funcs ================================= */
+static void
+kdb_prnt_timer(struct timer *tp)
+{
+#if XEN_SUBVERSION == 0 
+    kdbp(" expires:%016lx expires_end:%016lx cpu:%d status:%x\n", tp->expires, 
+         tp->expires_end, tp->cpu, tp->status);
+#else
+    kdbp(" expires:%016lx cpu:%d status:%x\n", tp->expires, tp->cpu,tp->status);
+#endif
+    kdbp(" function data:%p ptr:%p ", tp->data, tp->function);
+    kdb_prnt_addr2sym(DOMID_IDLE, (kdbva_t)tp->function, "\n");
+}
+
+static void 
+kdb_prnt_periodic_time(struct periodic_time *ptp)
+{
+    kdbp(" next:%p prev:%p\n", ptp->list.next, ptp->list.prev);
+    kdbp(" on_list:%d one_shot:%d dont_freeze:%d irq_issued:%d src:%x irq:%x\n",
+         ptp->on_list, ptp->one_shot, ptp->do_not_freeze, ptp->irq_issued,
+         ptp->source, ptp->irq);
+    kdbp(" vcpu:%p pending_intr_nr:%08x period:%016lx\n", ptp->vcpu,
+         ptp->pending_intr_nr, ptp->period);
+    kdbp(" scheduled:%016lx last_plt_gtime:%016lx\n", ptp->scheduled,
+         ptp->last_plt_gtime);
+    kdbp(" \n          timer info:\n");
+    kdb_prnt_timer(&ptp->timer);
+    kdbp("\n");
+}
+
+/* ===================== cmd functions  ==================================== */
+
+/*
+ * FUNCTION: Disassemble instructions
+ */
+static kdb_cpu_cmd_t
+kdb_usgf_dis(void)
+{
+    kdbp("dis [addr|sym][num][domid] : Disassemble instrs\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t 
+kdb_cmdf_dis(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    int num = 8;                           /* display 8 instr by default */
+    static kdbva_t addr = BFD_INVAL;
+    static domid_t domid;
+
+    if (argc > 1 && *argv[1] == '?')
+        return kdb_usgf_dis();
+
+    if (argc != -1)      /* not a command repeat */
+        domid = guest_mode(regs) ?  current->domain->domain_id : DOMID_IDLE;
+
+    if (argc >= 4 && !kdb_str2domid(argv[3], &domid, 1)) { 
+        return KDB_CPU_MAIN_KDB;
+    } 
+    if (argc >= 3 && !kdb_str2deci(argv[2], &num)) {
+        kdbp("kdb:Invalid num\n");
+        return KDB_CPU_MAIN_KDB;
+    } 
+    if (argc > 1 && !kdb_str2addr(argv[1], &addr, domid)) {
+        kdbp("kdb:Invalid addr/sym\n");
+        kdbp("(num has to be specified if providing domid)\n");
+        return KDB_CPU_MAIN_KDB;
+    } 
+    if (argc == 1)                    /* not command repeat */
+        addr = regs->KDBIP;           /* PC is the default */
+    else if (addr == BFD_INVAL) {
+        kdbp("kdb:Invalid addr/sym\n");
+        return KDB_CPU_MAIN_KDB;
+    }
+    addr = kdb_print_instr(addr, num, domid);
+    return KDB_CPU_MAIN_KDB;
+}
+
+/* FUNCTION: kdb_cmdf_dism() Toggle disassembly syntax from Intel to ATT/GAS */
+static kdb_cpu_cmd_t
+kdb_usgf_dism(void)
+{
+    kdbp("dism: toggle disassembly mode between ATT/GAS and INTEL\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t 
+kdb_cmdf_dism(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    if (argc > 1 && *argv[1] == '?')
+        return kdb_usgf_dism();
+
+    kdb_toggle_dis_syntax();
+    return KDB_CPU_MAIN_KDB;
+}
+
+static void
+_kdb_show_guest_stack(domid_t domid, kdbva_t ipaddr, kdbva_t spaddr)
+{
+    kdbva_t val;
+    int num=0, max=0, rd = kdb_guest_bitness(domid)/8;
+
+    kdb_print_instr(ipaddr, 1, domid);
+    KDBGP("_guest_stack:sp:%lx domid:%d rd:$%d\n", spaddr, domid, rd);
+    val = 0;                          /* must zero, in case guest is 32bit */
+    while((kdb_read_mem(spaddr,(kdbbyt_t *)&val,rd,domid)==rd) && num < 16){
+        KDBGP1("gstk:addr:%lx val:%lx\n", spaddr, val);
+        if (kdb_is_addr_guest_text(val, domid)) {
+            kdb_print_instr(val, 1, domid);
+            num++;
+        }
+        if (max++ > 10000)            /* don't walk down the stack forever */
+            break;                    /* 10k chosen arbitrarily */
+        spaddr += rd;
+    }
+}
+
+/* Read guest memory and display address that looks like text. */
+static void
+kdb_show_guest_stack(struct cpu_user_regs *regs, struct vcpu *vcpup)
+{
+    kdbva_t ipaddr=regs->KDBIP, spaddr = regs->KDBSP;
+    domid_t domid = vcpup->domain->domain_id;
+
+    ASSERT(domid != DOMID_IDLE);
+    _kdb_show_guest_stack(domid, ipaddr, spaddr);
+}
+
+/* display stack. if vcpu ptr given, then display stack for that. Otherwise,
+ * use current regs */
+static kdb_cpu_cmd_t
+kdb_usgf_f(void)
+{
+    kdbp("f [vcpu-ptr]: dump current/vcpu stack\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t 
+kdb_cmdf_f(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    if (argc > 1 && *argv[1] == '?')
+        return kdb_usgf_f();
+
+    if (argc > 1 ) {
+        struct vcpu *vp;
+        if (!kdb_str2ulong(argv[1], (ulong *)&vp) || !kdb_vcpu_valid(vp)) {
+            kdbp("kdb: Bad VCPU ptr:%s\n", argv[1]);
+            return KDB_CPU_MAIN_KDB;
+        }
+        kdb_show_guest_stack(&vp->arch.user_regs, vp);
+        return KDB_CPU_MAIN_KDB;
+    }
+    if (guest_mode(regs))
+        kdb_show_guest_stack(regs, current);
+    else
+        show_trace(regs);
+    return KDB_CPU_MAIN_KDB;
+}
+
+/* given an spaddr and domid for guest, dump stack */
+static kdb_cpu_cmd_t
+kdb_usgf_fg(void)
+{
+    kdbp("fg domid RIP ESP: dump guest stack given domid, RIP, and ESP\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t 
+kdb_cmdf_fg(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    domid_t domid;
+    kdbva_t ipaddr, spaddr;
+
+    if (argc != 4) 
+        return kdb_usgf_fg();
+
+    if (kdb_str2domid(argv[1], &domid, 1)==0) {
+        return KDB_CPU_MAIN_KDB;
+    }
+    if (kdb_str2ulong(argv[2], &ipaddr)==0) {
+        kdbp("Bad ipaddr:%s\n", argv[2]);
+        return KDB_CPU_MAIN_KDB;
+    }
+    if (kdb_str2ulong(argv[3], &spaddr)==0) {
+        kdbp("Bad spaddr:%s\n", argv[3]);
+        return KDB_CPU_MAIN_KDB;
+    }
+    _kdb_show_guest_stack(domid, ipaddr, spaddr);
+    return KDB_CPU_MAIN_KDB;
+}
+
+/* Display kdb stack. for debugging kdb itself */
+static kdb_cpu_cmd_t
+kdb_usgf_kdbf(void)
+{
+    kdbp("kdbf: display kdb stack. for debugging kdb only\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t 
+kdb_cmdf_kdbf(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    if (argc > 1 && *argv[1] == '?')
+        return kdb_usgf_kdbf();
+
+    kdb_trap_immed(KDB_TRAP_KDBSTACK);
+    return KDB_CPU_MAIN_KDB;
+}
+
+/* worker function to display memory. Request could be for any guest, domid.
+ * Also address could be machine or virtual */
+static void
+_kdb_display_mem(kdbva_t *addrp, int *lenp, int wordsz, int domid, int is_maddr)
+{
+    #define DDBUFSZ 4096
+
+    kdbbyt_t buf[DDBUFSZ], *bp;
+    int numrd, bytes;
+    int len = *lenp;
+    kdbva_t addr = *addrp;
+
+    /* round len down to a wordsz boundary: with little-endian byte order,
+     * printing individual bytes is not prudent (longs and ints can't be
+     * interpreted easily) */
+    len &= ~(wordsz-1);
+    len = KDBMIN(DDBUFSZ, len);
+    len = len ? len : wordsz;
+
+    KDBGP("dmem:addr:%lx buf:%p len:$%d domid:%d sz:$%d maddr:%d\n", addr,
+          buf, len, domid, wordsz, is_maddr);
+    if (is_maddr)
+        numrd=kdb_read_mmem((kdbma_t)addr, buf, len);
+    else
+        numrd=kdb_read_mem(addr, buf, len, domid);
+    if (numrd != len)
+        kdbp("Memory read error. Bytes read:$%d\n", numrd);
+
+    for (bp = buf; numrd >= wordsz;) {  /* >= wordsz so a partial-read tail
+                                         * can't make the loop spin forever */
+        kdbp("%016lx: ", addr);
+
+        /* display 16 bytes per line */
+        for (bytes=0; bytes < 16 && numrd >= wordsz; bytes += wordsz) {
+            if (wordsz == 8)
+                kdbp(" %016lx", *(long *)bp);
+            else
+                kdbp(" %08x", *(int *)bp);
+            bp += wordsz;
+            numrd -= wordsz;
+            addr += wordsz;
+        }
+        kdbp("\n");
+    }
+    *lenp = len;
+    *addrp = addr;
+}
+
+/* display machine mem, ie, the given address is machine address */
+static kdb_cpu_cmd_t 
+kdb_display_mmem(int argc, const char **argv, int wordsz, kdb_usgf_t usg_fp)
+{
+    static kdbma_t maddr;
+    static int len;
+    static domid_t id = DOMID_IDLE;
+
+    if (argc == -1) {
+        _kdb_display_mem(&maddr, &len, wordsz, id, 1);  /* cmd repeat */
+        return KDB_CPU_MAIN_KDB;
+    }
+    if (argc <= 1 || *argv[1] == '?')
+        return (*usg_fp)();
+
+    /* check if num of bytes to display is given by user */
+    if (argc >= 3) {
+        if (!kdb_str2deci(argv[2], &len)) {
+            kdbp("Invalid length:%s\n", argv[2]);
+            return KDB_CPU_MAIN_KDB;
+        } 
+    } else
+        len = 32;                                     /* default read len */
+
+    if (!kdb_str2ulong(argv[1], &maddr)) {
+        kdbp("Invalid argument:%s\n", argv[1]);
+        return KDB_CPU_MAIN_KDB;
+    }
+    _kdb_display_mem(&maddr, &len, wordsz, 0, 1);
+    return KDB_CPU_MAIN_KDB;
+}
+
+/* 
+ * FUNCTION: Display machine Memory Word
+ */
+static kdb_cpu_cmd_t
+kdb_usgf_dwm(void)
+{
+    kdbp("dwm:  maddr|sym [num] : dump memory word given machine addr\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t 
+kdb_cmdf_dwm(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    return kdb_display_mmem(argc, argv, 4, kdb_usgf_dwm);
+}
+
+/* 
+ * FUNCTION: Display machine Memory DoubleWord
+ */
+static kdb_cpu_cmd_t
+kdb_usgf_ddm(void)
+{
+    kdbp("ddm:  maddr|sym [num] : dump double word given machine addr\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t 
+kdb_cmdf_ddm(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    return kdb_display_mmem(argc, argv, 8, kdb_usgf_ddm);
+}
+
+/* 
+ * FUNCTION: Display Memory : word or doubleword
+ *           wordsz : bytes in word. 4 or 8
+ *
+ *           We display up to DDBUFSZ bytes. User can just press enter for more.
+ *           addr is always in hex with or without leading 0x
+ */
+static kdb_cpu_cmd_t 
+kdb_display_mem(int argc, const char **argv, int wordsz, kdb_usgf_t usg_fp)
+{
+    static kdbva_t addr;
+    static int len;
+    static domid_t id = DOMID_IDLE;
+
+    if (argc == -1) {
+        _kdb_display_mem(&addr, &len, wordsz, id, 0);  /* cmd repeat */
+        return KDB_CPU_MAIN_KDB;
+    }
+    if (argc <= 1 || *argv[1] == '?')
+        return (*usg_fp)();
+
+    id = DOMID_IDLE;                /* not a command repeat, reset dom id */
+    if (argc >= 4) { 
+        if (!kdb_str2domid(argv[3], &id, 1)) 
+            return KDB_CPU_MAIN_KDB;
+    }
+    /* check if num of bytes to display is given by user */
+    if (argc >= 3) {
+        if (!kdb_str2deci(argv[2], &len)) {
+            kdbp("Invalid length:%s\n", argv[2]);
+            return KDB_CPU_MAIN_KDB;
+        } 
+    } else
+        len = 32;                       /* default read len */
+    if (!kdb_str2addr(argv[1], &addr, id)) {
+        kdbp("Invalid argument:%s\n", argv[1]);
+        return KDB_CPU_MAIN_KDB;
+    }
+
+    _kdb_display_mem(&addr, &len, wordsz, id, 0);
+    return KDB_CPU_MAIN_KDB;
+}
+
+/* 
+ * FUNCTION: Display Memory Word
+ */
+static kdb_cpu_cmd_t
+kdb_usgf_dw(void)
+{
+    kdbp("dw vaddr|sym [num][domid] : dump mem word. num required for domid\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t 
+kdb_cmdf_dw(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    return kdb_display_mem(argc, argv, 4, kdb_usgf_dw);
+}
+
+/* 
+ * FUNCTION: Display Memory DoubleWord
+ */
+static kdb_cpu_cmd_t
+kdb_usgf_dd(void)
+{
+    kdbp("dd vaddr|sym [num][domid] : dump dword. num required for domid\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t 
+kdb_cmdf_dd(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    return kdb_display_mem(argc, argv, 8, kdb_usgf_dd);
+}
+
+/* 
+ * FUNCTION: Modify Memory Word 
+ */
+static kdb_cpu_cmd_t
+kdb_usgf_mw(void)
+{
+    kdbp("mw vaddr|sym val [domid] : modify memory word in vaddr\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t 
+kdb_cmdf_mw(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    ulong val;
+    kdbva_t addr;
+    domid_t id = DOMID_IDLE;
+
+    if (argc < 3) {
+        return kdb_usgf_mw();
+    }
+    if (argc >=4) {
+        if (!kdb_str2domid(argv[3], &id, 1)) 
+            return KDB_CPU_MAIN_KDB;
+    }
+    if (!kdb_str2ulong(argv[2], &val)) {
+        kdbp("Invalid val: %s\n", argv[2]);
+        return KDB_CPU_MAIN_KDB;
+    }
+    if (!kdb_str2addr(argv[1], &addr, id)) {
+        kdbp("Invalid addr/sym: %s\n", argv[1]);
+        return KDB_CPU_MAIN_KDB;
+    }
+    if (kdb_write_mem(addr, (kdbbyt_t *)&val, 4, id) != 4)
+        kdbp("Unable to set 0x%lx to 0x%lx\n", addr, val);
+    return KDB_CPU_MAIN_KDB;
+}
+
+/* 
+ * FUNCTION: Modify Memory DoubleWord 
+ */
+static kdb_cpu_cmd_t
+kdb_usgf_md(void)
+{
+    kdbp("md vaddr|sym val [domid] : modify memory dword in vaddr\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t 
+kdb_cmdf_md(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    ulong val;
+    kdbva_t addr;
+    domid_t id = DOMID_IDLE;
+
+    if (argc < 3) {
+        return kdb_usgf_md();
+    }
+    if (argc >=4) {
+        if (!kdb_str2domid(argv[3], &id, 1)) {
+            return KDB_CPU_MAIN_KDB;
+        }
+    }
+    if (!kdb_str2ulong(argv[2], &val)) {
+        kdbp("Invalid val: %s\n", argv[2]);
+        return KDB_CPU_MAIN_KDB;
+    }
+    if (!kdb_str2addr(argv[1], &addr, id)) {
+        kdbp("Invalid addr/sym: %s\n", argv[1]);
+        return KDB_CPU_MAIN_KDB;
+    }
+    if (kdb_write_mem(addr, (kdbbyt_t *)&val,sizeof(val),id) != sizeof(val))
+        kdbp("Unable to set 0x%lx to 0x%lx\n", addr, val);
+
+    return KDB_CPU_MAIN_KDB;
+}
+
+struct  Xgt_desc_struct {
+    unsigned short size;
+    unsigned long address __attribute__((packed));
+};
+
+void
+kdb_show_special_regs(struct cpu_user_regs *regs)
+{
+    struct Xgt_desc_struct desc;
+    unsigned short tr;                 /* Task Register segment selector */
+    __u64 efer;
+
+    kdbp("\nSpecial Registers:\n");
+    __asm__ __volatile__ ("sidt  (%0) \n" :: "a"(&desc) : "memory");
+    kdbp("IDTR: addr: %016lx limit: %04x\n", desc.address, desc.size);
+    __asm__ __volatile__ ("sgdt  (%0) \n" :: "a"(&desc) : "memory");
+    kdbp("GDTR: addr: %016lx limit: %04x\n", desc.address, desc.size);
+
+    kdbp("cr0: %016lx  cr2: %016lx\n", read_cr0(), read_cr2());
+    kdbp("cr3: %016lx  cr4: %016lx\n", read_cr3(), read_cr4());
+    __asm__ __volatile__ ("str (%0) \n":: "a"(&tr) : "memory");
+    kdbp("TR: %x\n", tr);
+
+    rdmsrl(MSR_EFER, efer);    /* IA32_EFER */
+    kdbp("efer:"KDBF64" LMA(IA-32e mode):%d SCE(syscall/sysret):%d\n",
+         efer, ((efer&EFER_LMA) != 0), ((efer&EFER_SCE) != 0));
+
+    kdbp("DR0: %016lx  DR1:%016lx  DR2:%016lx\n", kdb_rd_dbgreg(0),
+         kdb_rd_dbgreg(1), kdb_rd_dbgreg(2)); 
+    kdbp("DR3: %016lx  DR6:%016lx  DR7:%016lx\n", kdb_rd_dbgreg(3),
+         kdb_rd_dbgreg(6), kdb_rd_dbgreg(7)); 
+}
+
+/* 
+ * FUNCTION: Display Registers. If "sp" argument, then display additional regs
+ */
+static kdb_cpu_cmd_t
+kdb_usgf_dr(void)
+{
+    kdbp("dr [sp]: display registers. sp to display special regs also\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t 
+kdb_cmdf_dr(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    if (argc > 1 && *argv[1] == '?')
+        return kdb_usgf_dr();
+
+    KDBGP1("regs:%p .rsp:%lx .rip:%lx\n", regs, regs->rsp, regs->rip);
+    show_registers(regs);
+    if (argc > 1 && !strcmp(argv[1], "sp")) 
+        kdb_show_special_regs(regs);
+    return KDB_CPU_MAIN_KDB;
+}
+
+/* show registers on stack bottom where guest context is. same as dr if
+ * not running in guest mode */
+static kdb_cpu_cmd_t
+kdb_usgf_drg(void)
+{
+    kdbp("drg: display active guest registers at stack bottom\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t 
+kdb_cmdf_drg(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    if (argc > 1 && *argv[1] == '?')
+        return kdb_usgf_drg();
+
+    kdbp("\tNote: ds/es/fs/gs etc.. are not saved from the cpu\n");
+    kdb_print_uregs(guest_cpu_user_regs());
+    return KDB_CPU_MAIN_KDB;
+}
+
+/* 
+ * FUNCTION: Modify Register
+ */
+static kdb_cpu_cmd_t
+kdb_usgf_mr(void)
+{
+    kdbp("mr reg val : Modify Register. val assumed in hex\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t 
+kdb_cmdf_mr(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    const char *argp;
+    int regoffs;
+    ulong val;
+
+    if (argc != 3 || !kdb_str2ulong(argv[2], &val)) {
+        return kdb_usgf_mr();
+    }
+    argp = argv[1];
+
+#if defined(__x86_64__)
+    if ((regoffs=kdb_valid_reg(argp)) != -1)
+        *((uint64_t *)((char *)regs+regoffs)) = val;
+#else
+    if (!strcmp(argp, "eax"))
+        regs->eax = val;
+    else if (!strcmp(argp, "ebx"))
+        regs->ebx = val;
+    else if (!strcmp(argp, "ecx"))
+        regs->ecx = val;
+    else if (!strcmp(argp, "edx"))
+        regs->edx = val;
+    else if (!strcmp(argp, "esi"))
+        regs->esi = val;
+    else if (!strcmp(argp, "edi"))
+        regs->edi = val;
+    else if (!strcmp(argp, "ebp"))
+        regs->ebp = val;
+    else if (!strcmp(argp, "esp"))
+        regs->esp = val;
+    else if (!strcmp(argp, "eflags") || !strcmp(argp, "rflags"))
+        regs->eflags = val;
+#endif
+    else
+        kdbp("Error. Bad register : %s\n", argp);
+
+    return KDB_CPU_MAIN_KDB;
+}
+
+/* 
+ * FUNCTION: Single Step
+ */
+static kdb_cpu_cmd_t
+kdb_usgf_ss(void)
+{
+    kdbp("ss: single step\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t 
+kdb_cmdf_ss(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    #define KDB_HALT_INSTR 0xf4
+
+    kdbbyt_t byte;
+    struct domain *dp = current->domain;
+    domid_t id = guest_mode(regs) ? dp->domain_id : DOMID_IDLE;
+
+    if (argc > 1 && *argv[1] == '?')
+        return kdb_usgf_ss();
+
+    KDBGP("enter kdb_cmdf_ss \n");
+    if (!regs) {
+        kdbp("%s: regs not available\n", __FUNCTION__);
+        return KDB_CPU_MAIN_KDB;
+    }
+    if (kdb_read_mem(regs->KDBIP, &byte, 1, id) == 1) {
+        if (byte == KDB_HALT_INSTR) {
+            kdbp("kdb: jumping over halt instruction\n");
+            regs->KDBIP++;
+        }
+    } else {
+        kdbp("kdb: Failed to read byte at: %lx\n", regs->KDBIP);
+        return KDB_CPU_MAIN_KDB;
+    }
+    if (guest_mode(regs) && is_hvm_or_hyb_vcpu(current)) {
+        dp->debugger_attached = 1;  /* see svm_do_resume/vmx_do_ */
+        current->arch.hvm_vcpu.single_step = 1;
+    } else
+        regs->eflags |= X86_EFLAGS_TF;
+
+    return KDB_CPU_SS;
+}
+
+/* 
+ * FUNCTION: Next Instruction, step over the call instr to the next instr
+ */
+static kdb_cpu_cmd_t
+kdb_usgf_ni(void)
+{
+    kdbp("ni: single step, stepping over function calls\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t 
+kdb_cmdf_ni(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    int sz, i;
+    domid_t id=guest_mode(regs) ? current->domain->domain_id:DOMID_IDLE;
+
+    if (argc > 1 && *argv[1] == '?')
+        return kdb_usgf_ni();
+
+    KDBGP("enter kdb_cmdf_ni \n");
+    if (!regs) {
+        kdbp("%s: regs not available\n", __FUNCTION__);
+        return KDB_CPU_MAIN_KDB;
+    }
+    if ((sz=kdb_check_call_instr(id, regs->KDBIP)) == 0)  /* !call instr */
+        return kdb_cmdf_ss(argc, argv, regs);         /* just do ss */
+
+    if ((i=kdb_set_bp(id, regs->KDBIP+sz, 1,0,0,0,0)) >= KDBMAXSBP) /* failed */
+        return KDB_CPU_MAIN_KDB;
+
+    kdb_sbpa[i].bp_ni = 1;
+    if (guest_mode(regs) && is_hvm_or_hyb_vcpu(current))
+        current->arch.hvm_vcpu.single_step = 0;
+    else
+        regs->eflags &= ~X86_EFLAGS_TF;
+
+    return KDB_CPU_NI;
+}
+
+static void
+kdb_btf_enable(void)
+{
+    u64 debugctl;
+    rdmsrl(MSR_IA32_DEBUGCTLMSR, debugctl);
+    wrmsrl(MSR_IA32_DEBUGCTLMSR, debugctl | 0x2);  /* set BTF: with TF set,
+                                                    * trap only on branches */
+}
+
+/* 
+ * FUNCTION: Single Step to branch. Doesn't seem to work very well.
+ */
+static kdb_cpu_cmd_t
+kdb_usgf_ssb(void)
+{
+    kdbp("ssb: single step to branch\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t 
+kdb_cmdf_ssb(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    if (argc > 1 && *argv[1] == '?')
+        return kdb_usgf_ssb();
+
+    KDBGP("MUK: enter kdb_cmdf_ssb\n");
+    if (!regs) {
+        kdbp("%s: regs not available\n", __FUNCTION__);
+        return KDB_CPU_MAIN_KDB;
+    }
+    if (is_hvm_or_hyb_vcpu(current)) 
+        current->domain->debugger_attached = 1;        /* vmx/svm_do_resume()*/
+
+    regs->eflags |= X86_EFLAGS_TF;
+    kdb_btf_enable();
+    return KDB_CPU_SS;
+}
+
+/* 
+ * FUNCTION: Continue Execution. TF must be cleared here as this could run on 
+ *           any cpu. Hence not OK to do it from kdb_end_session.
+ */
+static kdb_cpu_cmd_t
+kdb_usgf_go(void)
+{
+    kdbp("go: leave kdb and continue execution\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t 
+kdb_cmdf_go(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    if (argc > 1 && *argv[1] == '?')
+        return kdb_usgf_go();
+
+    regs->eflags &= ~X86_EFLAGS_TF;
+    return KDB_CPU_GO;
+}
+
+/* All cpus must display their current context */
+static kdb_cpu_cmd_t 
+kdb_cpu_status_all(int ccpu, struct cpu_user_regs *regs)
+{
+    int cpu;
+    for_each_online_cpu(cpu) {
+        if (cpu == ccpu) {
+            kdbp("[%d]", ccpu);
+            kdb_display_pc(regs);
+        } else {
+            if (kdb_cpu_cmd[cpu] != KDB_CPU_PAUSE)   /* hung cpu */
+                continue;
+            kdb_cpu_cmd[cpu] = KDB_CPU_SHOWPC;
+            while (kdb_cpu_cmd[cpu]==KDB_CPU_SHOWPC);
+        }
+    }
+    return KDB_CPU_MAIN_KDB;
+}
+
+/* 
+ * display/switch CPU. 
+ *  Argument:
+ *     none:   just go back to initial cpu
+ *     cpunum: switch to given vcpu
+ *     "all":  show one line status of all cpus
+ */
+extern volatile int kdb_init_cpu;
+static kdb_cpu_cmd_t
+kdb_usgf_cpu(void)
+{
+    kdbp("cpu [all|num]: none will switch back to initial cpu\n");
+    kdbp("               cpunum to switch to the vcpu. all to show status\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t 
+kdb_cmdf_cpu(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    int cpu;
+    int ccpu = smp_processor_id();
+
+    if (argc > 1 && *argv[1] == '?')
+        return kdb_usgf_cpu();
+
+    if (argc > 1) {
+        if (!strcmp(argv[1], "all"))
+            return kdb_cpu_status_all(ccpu, regs);
+
+        cpu = (int)simple_strtoul(argv[1], NULL, 0);     /* handles 0x */
+        if (cpu >= 0 && cpu < NR_CPUS && cpu != ccpu &&
+            cpu_online(cpu) && kdb_cpu_cmd[cpu] == KDB_CPU_PAUSE)
+        {
+            kdbp("Switching to cpu:%d\n", cpu);
+            kdb_cpu_cmd[cpu] = KDB_CPU_MAIN_KDB;
+
+            /* clear any single step on the current cpu */
+            regs->eflags &= ~X86_EFLAGS_TF;
+            return KDB_CPU_PAUSE;
+        } else {
+            if (cpu != ccpu)
+                kdbp("Unable to switch to cpu:%d\n", cpu);
+            else
+                kdb_display_pc(regs);
+            return KDB_CPU_MAIN_KDB;
+        }
+    }
+    /* no arg means back to initial cpu */
+    if (!kdb_sys_crash && ccpu != kdb_init_cpu) {
+        if (kdb_cpu_cmd[kdb_init_cpu] == KDB_CPU_PAUSE) {
+            regs->eflags &= ~X86_EFLAGS_TF;
+            kdb_cpu_cmd[kdb_init_cpu] = KDB_CPU_MAIN_KDB;
+            return KDB_CPU_PAUSE;
+        } else
+            kdbp("Unable to switch to: %d\n", kdb_init_cpu);
+    }
+    return KDB_CPU_MAIN_KDB;
+}
+
+/* send NMI to all or given CPU. Must be crashed/fatal state */
+static kdb_cpu_cmd_t
+kdb_usgf_nmi(void)
+{
+    kdbp("nmi cpu#|all: send nmi cpu/s. must reboot when done with kdb\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t 
+kdb_cmdf_nmi(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    cpumask_t cpumask;
+    int ccpu = smp_processor_id();
+
+    if (argc <= 1 || (argc > 1 && *argv[1] == '?'))
+        return kdb_usgf_nmi();
+
+    if (!kdb_sys_crash) {
+        kdbp("kdb: nmi cmd available in crashed state only\n");
+        return KDB_CPU_MAIN_KDB;
+    }
+    if (!strcmp(argv[1], "all"))
+        cpumask = cpu_online_map;
+    else {
+        int cpu = (int)simple_strtoul(argv[1], NULL, 0);
+        if (cpu >= 0 && cpu < NR_CPUS && cpu != ccpu && cpu_online(cpu))
+            cpumask = *cpumask_of(cpu);
+        else {
+            kdbp("KDB nmi: invalid cpu %s\n", argv[1]);
+            return KDB_CPU_MAIN_KDB;
+        }
+    }
+    kdb_nmi_pause_cpus(cpumask);
+    return KDB_CPU_MAIN_KDB;
+}
+
+static kdb_cpu_cmd_t
+kdb_usgf_percpu(void)
+{
+    kdbp("percpu: display per cpu pointers\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t 
+kdb_cmdf_percpu(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    if (argc > 1 && *argv[1] == '?')
+        return kdb_usgf_percpu();
+    kdb_dump_time_pcpu();
+    return KDB_CPU_MAIN_KDB;
+}
+
+/* ========================= Breakpoints ==================================== */
+
+static void
+kdb_prnt_bp_cond(int bpnum)
+{
+    struct kdb_bpcond *bpcp = &kdb_sbpa[bpnum].u.bp_cond;
+
+    if (bpcp->bp_cond_status == 1) {
+        kdbp("     ( %s %c%c %lx )\n", 
+             kdb_regoffs_to_name(bpcp->bp_cond_lhs),
+             bpcp->bp_cond_type == 1 ? '=' : '!', '=', bpcp->bp_cond_rhs);
+    } else {
+        kdbp("     ( %lx %c%c %lx )\n", bpcp->bp_cond_lhs,
+             bpcp->bp_cond_type == 1 ? '=' : '!', '=', bpcp->bp_cond_rhs);
+    }
+}
+
+static void
+kdb_prnt_bp_extra(int bpnum)
+{
+    if (kdb_sbpa[bpnum].bp_type == 2) {
+        ulong i, arg, *btp = kdb_sbpa[bpnum].u.bp_btp;
+        
+        kdbp("   will trace ");
+        for (i=0; i < KDB_MAXBTP && btp[i]; i++)
+            if ((arg=btp[i]) < sizeof (struct cpu_user_regs)) {
+                kdbp(" %s ", kdb_regoffs_to_name(arg));
+            } else {
+                kdbp(" %lx ", arg);
+            }
+        kdbp("\n");
+
+    } else if (kdb_sbpa[bpnum].bp_type == 1)
+        kdb_prnt_bp_cond(bpnum);
+}
+
+/*
+ * List software breakpoints
+ */
+static kdb_cpu_cmd_t
+kdb_display_sbkpts(void)
+{
+    int i;
+    for(i=0; i < KDBMAXSBP; i++)
+        if (kdb_sbpa[i].bp_addr && !kdb_sbpa[i].bp_deleted) {
+            struct domain *dp = kdb_domid2ptr(kdb_sbpa[i].bp_domid);
+
+            if (dp == NULL || dp->is_dying) {
+                memset(&kdb_sbpa[i], 0, sizeof(kdb_sbpa[i]));
+                continue;
+            }
+            kdbp("[%d]: domid:%d 0x%lx   ", i, 
+                 kdb_sbpa[i].bp_domid, kdb_sbpa[i].bp_addr);
+            kdb_prnt_addr2sym(kdb_sbpa[i].bp_domid, kdb_sbpa[i].bp_addr,"\n");
+            kdb_prnt_bp_extra(i);
+        }
+    return KDB_CPU_MAIN_KDB;
+}
+
+/*
+ * Check whether any breakpoints still need to be installed (delayed install).
+ * Returns: 1 if yes, 0 if none.
+ */
+int
+kdb_swbp_exists(void)
+{
+    int i;
+    for (i=0; i < KDBMAXSBP; i++)
+        if (kdb_sbpa[i].bp_addr && !kdb_sbpa[i].bp_deleted)
+            return 1;
+    return 0;
+}
+/*
+ * Check if any breakpoints were deleted this kdb session
+ * Returns: 0 if none, 1 if yes
+ */
+static int
+kdb_swbp_deleted(void)
+{
+    int i;
+    for (i=0; i < KDBMAXSBP; i++)
+        if (kdb_sbpa[i].bp_addr && kdb_sbpa[i].bp_deleted)
+            return 1;
+    return 0;
+}
+
+/*
+ * Flush deleted sw breakpoints
+ */
+void
+kdb_flush_swbp_table(void)
+{
+    int i;
+    KDBGP("ccpu:%d flush_swbp_table: deleted:%x\n", smp_processor_id(), 
+          kdb_swbp_deleted());
+    for(i=0; i < KDBMAXSBP; i++)
+        if (kdb_sbpa[i].bp_addr && kdb_sbpa[i].bp_deleted) {
+            KDBGP("flush:[%x] addr:0x%lx\n",i,kdb_sbpa[i].bp_addr);
+            memset(&kdb_sbpa[i], 0, sizeof(kdb_sbpa[i]));
+        }
+}
+
+/*
+ * Delete/Clear a sw breakpoint
+ */
+static kdb_cpu_cmd_t
+kdb_usgf_bc(void)
+{
+    kdbp("bc $num|all : clear given or all breakpoints\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t 
+kdb_cmdf_bc(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    int i, bpnum = -1, delall = 0;
+    const char *argp;
+
+    if (argc != 2 || *argv[1] == '?')
+        return kdb_usgf_bc();
+
+    if (!kdb_swbp_exists()) {
+        kdbp("No breakpoints are set\n");
+        return KDB_CPU_MAIN_KDB;
+    }
+    argp = argv[1];
+
+    if (!strcmp(argp, "all"))
+        delall = 1;
+    else if (!kdb_str2deci(argp, &bpnum) || bpnum < 0 || bpnum >= KDBMAXSBP) {
+        kdbp("Invalid bpnum: %s\n", argp);
+        return KDB_CPU_MAIN_KDB;
+    }
+    for (i=0; i < KDBMAXSBP; i++) {
+        if (delall && kdb_sbpa[i].bp_addr) {
+            kdbp("Deleted breakpoint [%x] addr:0x%lx domid:%d\n", 
+                 (int)i, kdb_sbpa[i].bp_addr, kdb_sbpa[i].bp_domid);
+            if (kdb_sbpa[i].bp_just_added)
+                memset(&kdb_sbpa[i], 0, sizeof(kdb_sbpa[i]));
+            else
+                kdb_sbpa[i].bp_deleted = 1;
+            continue;
+        }
+        if (bpnum != -1 && bpnum == i) {
+            kdbp("Deleted breakpoint [%x] at 0x%lx domid:%d\n", 
+                 (int)i, kdb_sbpa[i].bp_addr, kdb_sbpa[i].bp_domid);
+            if (kdb_sbpa[i].bp_just_added)
+                memset(&kdb_sbpa[i], 0, sizeof(kdb_sbpa[i]));
+            else
+                kdb_sbpa[i].bp_deleted = 1;
+            break;
+        }
+    }
+    if (i >= KDBMAXSBP && !delall)
+        kdbp("Unable to delete breakpoint: %s\n", argp);
+
+    return KDB_CPU_MAIN_KDB;
+}
+
+/*
+ * Install a breakpoint in the given array entry
+ * Returns: 0 : failed to install
+ *          1 : installed successfully
+ */
+static int
+kdb_install_swbp(int idx)                   /* which entry in the bp array */
+{
+    kdbva_t addr = kdb_sbpa[idx].bp_addr;
+    domid_t domid = kdb_sbpa[idx].bp_domid;
+    kdbbyt_t *p = &kdb_sbpa[idx].bp_originst;
+    struct domain *dp = kdb_domid2ptr(domid);
+
+    if (dp == NULL || dp->is_dying) {
+        memset(&kdb_sbpa[idx], 0, sizeof(kdb_sbpa[idx]));
+        kdbp("Removed bp %d addr:0x%lx domid:%d\n", idx, addr, domid);
+        return 0;
+    }
+
+    if (kdb_read_mem(addr, p, KDBBPSZ, domid) != KDBBPSZ){
+        kdbp("Failed(R) to install bp:%x at:0x%lx domid:%d\n",
+             idx, kdb_sbpa[idx].bp_addr, domid);
+        return 0;
+    }
+    if (kdb_write_mem(addr, &kdb_bpinst, KDBBPSZ, domid) != KDBBPSZ) {
+        kdbp("Failed(W) to install bp:%x at:0x%lx domid:%d\n",
+             idx, kdb_sbpa[idx].bp_addr, domid);
+        return 0;
+    }
+    KDBGP("install_swbp: installed bp:%x at:0x%lx ccpu:%x domid:%d\n",
+          idx, kdb_sbpa[idx].bp_addr, smp_processor_id(), domid);
+    return 1;
+}
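
At its core, the install/uninstall pair swaps one instruction byte for the x86 INT3 opcode (0xCC) and later puts it back. A minimal standalone sketch of that byte swap, operating on a plain buffer rather than guest memory (the `kdb_read_mem`/`kdb_write_mem` helpers and the `kdb_sbpa` table are not reproduced; the names below are illustrative, not from this patch):

```c
#include <assert.h>

#define BP_INST 0xCC            /* x86 INT3 opcode, a single byte */

struct swbp {
    unsigned long addr;         /* offset into "text" for this sketch */
    unsigned char originst;     /* saved original instruction byte */
};

/* Save the original byte and plant INT3; returns 1 on success. */
static int install_swbp(unsigned char *text, struct swbp *bp)
{
    bp->originst = text[bp->addr];      /* stands in for kdb_read_mem() */
    text[bp->addr] = BP_INST;           /* stands in for kdb_write_mem() */
    return 1;
}

/* Put the original instruction byte back (uninstall). */
static void uninstall_swbp(unsigned char *text, struct swbp *bp)
{
    text[bp->addr] = bp->originst;
}
```

The real code additionally validates the owning domain and reports per-slot read/write failures.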
+
+/*
+ * Install all the software breakpoints
+ */
+void
+kdb_install_all_swbp(void)
+{
+    int i;
+    for(i=0; i < KDBMAXSBP; i++)
+        if (!kdb_sbpa[i].bp_deleted && kdb_sbpa[i].bp_addr)
+            kdb_install_swbp(i);
+}
+
+static void
+kdb_uninstall_a_swbp(int i)
+{
+    kdbva_t addr = kdb_sbpa[i].bp_addr;
+    kdbbyt_t originst = kdb_sbpa[i].bp_originst;
+    domid_t id = kdb_sbpa[i].bp_domid;
+
+    kdb_sbpa[i].bp_just_added = 0;
+    if (!addr)
+        return;
+    if (kdb_write_mem(addr, &originst, KDBBPSZ, id) != KDBBPSZ) {
+        kdbp("Failed to uninstall breakpoint %x at:0x%lx domid:%d\n",
+             i, kdb_sbpa[i].bp_addr, id);
+    }
+}
+
+/*
+ * Uninstall all the software breakpoints at the beginning of a kdb session
+ */
+void
+kdb_uninstall_all_swbp(void)
+{
+    int i;
+    for(i=0; i < KDBMAXSBP; i++) 
+        kdb_uninstall_a_swbp(i);
+    KDBGP("ccpu:%d uninstalled all bps\n", smp_processor_id());
+}
+
+/* RETURNS: rc == 2: condition was not met,  rc == 3: condition was met */
+static int
+kdb_check_bp_condition(int bpnum, struct cpu_user_regs *regs, domid_t domid)
+{
+    ulong res = 0, lhsval=0;
+    struct kdb_bpcond *bpcp = &kdb_sbpa[bpnum].u.bp_cond;
+
+    if (bpcp->bp_cond_status == 1) {             /* register condition */
+        uint64_t *rp = (uint64_t *)((char *)regs + bpcp->bp_cond_lhs);
+        lhsval = *rp;
+    } else if (bpcp->bp_cond_status == 2) {      /* memaddr condition */
+        ulong addr = bpcp->bp_cond_lhs;
+        int num = sizeof(lhsval);
+
+        if (kdb_read_mem(addr, (kdbbyt_t *)&lhsval, num, domid) != num) {
+            kdbp("kdb: unable to read %d bytes at %lx\n", num, addr);
+            return 3;
+        }
+    }
+    if (bpcp->bp_cond_type == 1)                 /* lhs == rhs */
+        res = (lhsval == bpcp->bp_cond_rhs);
+    else                                         /* lhs != rhs */
+        res = (lhsval != bpcp->bp_cond_rhs);
+
+    if (!res)
+        kdbp("KDB: [%d] ignoring bp:%d, condition not met. val:%lx\n",
+              smp_processor_id(), bpnum, lhsval);
+
+    KDBGP1("bpnum:%d domid:%d cond: %d %d %lx %lx res:%d\n", bpnum, domid, 
+           bpcp->bp_cond_status, bpcp->bp_cond_type, bpcp->bp_cond_lhs, 
+           bpcp->bp_cond_rhs, res);
+
+    return (res ? 3 : 2);
+}
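
The register branch of the condition check reduces to pointer arithmetic: the condition stores the register's byte offset inside the saved frame, and the value is fetched by adding that offset to the frame pointer. A self-contained sketch using a stand-in frame struct (the real `cpu_user_regs` and the `kdb_sbpa` condition fields are not reproduced):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Stand-in for the saved register frame. */
struct regs_frame {
    uint64_t rax, rbx, rcx, rdx, rsi, rdi;
};

/* cond_type: 1 means "==", 2 means "!=" (mirroring bp_cond_type). */
static int bp_cond_met(struct regs_frame *regs, size_t reg_offs,
                       uint64_t rhs, int cond_type)
{
    /* Fetch the register value by its byte offset into the frame. */
    uint64_t lhs = *(uint64_t *)((char *)regs + reg_offs);
    return cond_type == 1 ? (lhs == rhs) : (lhs != rhs);
}
```

This is why `kdb_valid_reg()` hands back an offset rather than a register index: the offset is directly usable on any saved frame.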
+
+static void
+kdb_prnt_btp_info(int bpnum, struct cpu_user_regs *regs, domid_t domid)
+{
+    ulong i, arg, val, num, *btp = kdb_sbpa[bpnum].u.bp_btp;
+
+    kdb_prnt_addr2sym(domid, regs->KDBIP, "\n");
+    num = kdb_guest_bitness(domid)/8;
+    for (i=0; i < KDB_MAXBTP && (arg=btp[i]); i++) {
+        if (arg < sizeof (struct cpu_user_regs)) {
+            uint64_t *rp = (uint64_t *)((char *)regs + arg);
+            kdbp(" %s: %016lx ", kdb_regoffs_to_name(arg), *rp);
+        } else {
+            if (kdb_read_mem(arg, (kdbbyt_t *)&val, num, domid) != num)
+                kdbp("kdb: unable to read %ld bytes at %lx\n", num, arg);
+            if (num == 8)
+                kdbp(" %016lx:%016lx ", arg, val);
+            else
+                kdbp(" %08lx:%08lx ", arg, val);
+        }
+    }
+    kdbp("\n");
+    KDBGP1("bpnum:%d domid:%d btp:%p num:%ld\n", bpnum, domid, btp, num);
+}
+
+/*
+ * Check if the BP trap belongs to us. 
+ * Return: 0 : not one of ours. IP not changed. (leave kdb)
+ *         1 : one of ours but deleted. IP decremented. (leave kdb)
+ *         2 : one of ours but condition not met, or btp. IP decremented.(leave)
+ *         3 : one of ours and active. IP decremented. (stay in kdb)
+ */
+int 
+kdb_check_sw_bkpts(struct cpu_user_regs *regs)
+{
+    int i, rc=0;
+    domid_t curid;
+
+    curid = guest_mode(regs) ? current->domain->domain_id : DOMID_IDLE;
+    for(i=0; i < KDBMAXSBP; i++) {
+        if (kdb_sbpa[i].bp_domid == curid  && 
+            kdb_sbpa[i].bp_addr == (regs->KDBIP- KDBBPSZ)) {
+
+            regs->KDBIP -= KDBBPSZ;
+            rc = 3;
+
+            if (kdb_sbpa[i].bp_ni) {
+                kdb_uninstall_a_swbp(i);
+                memset(&kdb_sbpa[i], 0, sizeof(kdb_sbpa[i]));
+            } else if (kdb_sbpa[i].bp_deleted) {
+                rc = 1;
+            } else if (kdb_sbpa[i].bp_type == 1) {
+                rc = kdb_check_bp_condition(i, regs, curid);
+            } else if (kdb_sbpa[i].bp_type == 2) {
+                kdb_prnt_btp_info(i, regs, curid);
+                rc = 2;
+            }
+            KDBGP1("ccpu:%d rc:%d curid:%d domid:%d addr:%lx\n", 
+                   smp_processor_id(), rc, curid, kdb_sbpa[i].bp_domid, 
+                   kdb_sbpa[i].bp_addr);
+            break;
+        }
+    }
+    return (rc);
+}
+
+/* Eg: r6 == 0x123EDF  or 0xFFFF2034 != 0xDEADBEEF
+ * regoffs: -1 means lhs is not reg. else offset of reg in cpu_user_regs
+ * addr: memory location if lhs is not register, eg, 0xFFFF2034
+ * condp : points to != or ==
+ * rhsval : right hand side value
+ */
+static void
+kdb_set_bp_cond(int bpnum, int regoffs, ulong addr, char *condp, ulong rhsval)
+{
+    if (bpnum >= KDBMAXSBP) {
+        kdbp("BUG: %s got invalid bpnum\n", __FUNCTION__);
+        return;
+    }
+    if (regoffs != -1) {
+        kdb_sbpa[bpnum].u.bp_cond.bp_cond_status = 1;
+        kdb_sbpa[bpnum].u.bp_cond.bp_cond_lhs = regoffs;
+    } else if (addr != 0) {
+        kdb_sbpa[bpnum].u.bp_cond.bp_cond_status = 2;
+        kdb_sbpa[bpnum].u.bp_cond.bp_cond_lhs = addr;
+    } else {
+        kdbp("error: invalid call to kdb_set_bp_cond\n");
+        return;
+    }
+    kdb_sbpa[bpnum].u.bp_cond.bp_cond_rhs = rhsval;
+
+    if (*condp == '!')
+        kdb_sbpa[bpnum].u.bp_cond.bp_cond_type = 2;
+    else
+        kdb_sbpa[bpnum].u.bp_cond.bp_cond_type = 1;
+}
+
+/* Install a breakpoint at the given addr.
+ * ni: breakpoint is for the next instruction
+ * btpa: ptr to the btp args, printed when the bp is hit
+ * lhsp/condp/rhsp: point to the strings of the condition
+ *
+ * RETURNS: the index in the array where installed; KDBMAXSBP on error
+ */
+static int
+kdb_set_bp(domid_t domid, kdbva_t addr, int ni, ulong *btpa, char *lhsp, 
+           char *condp, char *rhsp)
+{
+    int i, pre_existing = 0, regoffs = -1;
+    ulong memloc=0, rhsval=0, tmpul;
+
+    if (btpa && (lhsp || rhsp || condp)) {
+        kdbp("internal error. btpa and (lhsp || rhsp || condp) set\n");
+        return KDBMAXSBP;
+    }
+    if (lhsp && ((regoffs=kdb_valid_reg(lhsp)) == -1)  &&
+        kdb_str2ulong(lhsp, &memloc) &&
+        kdb_read_mem(memloc, (kdbbyt_t *)&tmpul, sizeof(tmpul), domid)==0) {
+
+        kdbp("error: invalid argument: %s\n", lhsp);
+        return KDBMAXSBP;
+    }
+    if (rhsp && ! kdb_str2ulong(rhsp, &rhsval)) {
+        kdbp("error: invalid argument: %s\n", rhsp);
+        return KDBMAXSBP;
+    }
+
+    /* see if bp already set */
+    for (i=0; i < KDBMAXSBP; i++) {
+        if (kdb_sbpa[i].bp_addr==addr && kdb_sbpa[i].bp_domid==domid) {
+
+            if (kdb_sbpa[i].bp_deleted) {
+                /* just re-set this bp again */
+                memset(&kdb_sbpa[i], 0, sizeof(kdb_sbpa[i]));
+                pre_existing = 1;
+            } else {
+                kdbp("Breakpoint already set\n");
+                return KDBMAXSBP;
+            }
+        }
+    }
+    /* see if any room left for another breakpoint */
+    for (i=0; i < KDBMAXSBP; i++)
+        if (!kdb_sbpa[i].bp_addr)
+            break;
+    if (i >= KDBMAXSBP) {
+        kdbp("ERROR: Breakpoint table full....\n");
+        return i;
+    }
+    kdb_sbpa[i].bp_addr = addr;
+    kdb_sbpa[i].bp_domid = domid;
+    if (btpa) {
+        kdb_sbpa[i].bp_type = 2;
+        kdb_sbpa[i].u.bp_btp = btpa;
+    } else if (regoffs != -1 || memloc) {
+        kdb_sbpa[i].bp_type = 1;
+        kdb_set_bp_cond(i, regoffs, memloc, condp, rhsval);
+    } else
+        kdb_sbpa[i].bp_type = 0;
+
+    if (kdb_install_swbp(i)) {                  /* make sure it can be done */
+        if (ni)
+            return i;
+
+        kdb_uninstall_a_swbp(i);                /* don't show the user INT3 */
+        if (!pre_existing)               /* make sure no cpu is sitting on it */
+            kdb_sbpa[i].bp_just_added = 1;
+
+        kdbp("bp %d set for domid:%d at: 0x%lx ", i, kdb_sbpa[i].bp_domid, 
+             kdb_sbpa[i].bp_addr);
+        kdb_prnt_addr2sym(domid, addr, "\n");
+        kdb_prnt_bp_extra(i);
+    } else {
+        kdbp("ERROR:Can't install bp: 0x%lx domid:%d\n", addr, domid);
+        if (pre_existing)     /* in case a cpu is sitting on this bp in traps */
+            kdb_sbpa[i].bp_deleted = 1;
+        else
+            memset(&kdb_sbpa[i], 0, sizeof(kdb_sbpa[i]));
+        return KDBMAXSBP;
+    }
+    /* make sure swbp reporting is enabled in the vmcb/vmcs */
+    if (is_hvm_or_hyb_domain(kdb_domid2ptr(domid))) {
+        struct domain *dp = kdb_domid2ptr(domid);
+        dp->debugger_attached = 1;              /* see svm_do_resume/vmx_do_ */
+        KDBGP("debugger_attached set. domid:%d\n", domid);
+    }
+    return i;
+}
+
+/* 
+ * Set/List Software Breakpoint/s
+ */
+static kdb_cpu_cmd_t
+kdb_usgf_bp(void)
+{
+    kdbp("bp [addr|sym][domid][condition]: display or set a breakpoint\n");
+    kdbp("  where cond is like: r6 == 0x123F or rax != DEADBEEF or \n");
+    kdbp("       ffff82c48038fe58 == 321E or 0xffff82c48038fe58 != 0\n");
+    kdbp("  regs: rax rbx rcx rdx rsi rdi rbp rsp r8 r9");
+    kdbp(" r10 r11 r12 r13 r14 r15 rflags\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t 
+kdb_cmdf_bp(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    kdbva_t addr;
+    int idx = -1;
+    domid_t domid = DOMID_IDLE;
+    char *domidstrp, *lhsp=NULL, *condp=NULL, *rhsp=NULL;
+
+    if ((argc > 1 && *argv[1] == '?') || argc == 4 || argc > 6)
+        return kdb_usgf_bp();
+
+    if (argc < 2 || kdb_sys_crash)         /* list all set breakpoints */
+        return kdb_display_sbkpts();
+
+    /* valid argc is: 2, 3, 5 or 6, eg.:
+     * 'bp idle_loop r6 == 0xc000' OR 'bp idle_loop 3 r9 != 0xdeadbeef' */
+    idx = (argc == 5) ? 2 : ((argc == 6) ? 3 : idx);
+    if (argc >= 5 ) {
+        lhsp = (char *)argv[idx];
+        condp = (char *)argv[idx+1];
+        rhsp = (char *)argv[idx+2];
+
+        if (!kdb_str2ulong(rhsp, NULL) || *(condp+1) != '=' || 
+            (*condp != '=' && *condp != '!')) {
+
+            return kdb_usgf_bp();
+        }
+    }
+    domidstrp = (argc == 3 || argc == 6 ) ? (char *)argv[2] : NULL;
+    if (domidstrp && !kdb_str2domid(domidstrp, &domid, 1)) {
+        return kdb_usgf_bp();
+    }
+    if (argc > 3 && is_hvm_or_hyb_domain(kdb_domid2ptr(domid))) {
+        kdbp("HVM domain not supported yet for conditional bp\n");
+        return KDB_CPU_MAIN_KDB;
+    }
+
+    if (!kdb_str2addr(argv[1], &addr, domid) || addr == 0) {
+        kdbp("Invalid argument:%s\n", argv[1]);
+        return KDB_CPU_MAIN_KDB;
+    }
+
+    /* make sure a xen addr is in xen text, else the bp lands in 64bit dom0/U */
+    if (domid == DOMID_IDLE && 
+        (addr < XEN_VIRT_START || addr > XEN_VIRT_END))
+    {
+        kdbp("addr:%lx not in xen text\n", addr);
+        return KDB_CPU_MAIN_KDB;
+    }
+    kdb_set_bp(domid, addr, 0, NULL, lhsp, condp, rhsp);     /* 0 is ni flag */
+    return KDB_CPU_MAIN_KDB;
+}
+
+
+/* trace breakpoint, meaning, upon bp trace/print some info and continue */
+
+static kdb_cpu_cmd_t
+kdb_usgf_btp(void)
+{
+    kdbp("btp addr|sym [domid] reg|domid-mem-addr... : breakpoint trace\n");
+    kdbp("  regs: rax rbx rcx rdx rsi rdi rbp rsp r8 r9 ");
+    kdbp("r10 r11 r12 r13 r14 r15 rflags\n");
+    kdbp("  Eg. btp idle_cpu 7 rax rbx 0x20ef5a5 r9\n");
+    kdbp("      will print rax, rbx, *(long *)0x20ef5a5, r9 and continue\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t 
+kdb_cmdf_btp(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    int i, btpidx, numrd, argsidx, regoffs = -1;
+    kdbva_t addr, memloc=0;
+    domid_t domid = DOMID_IDLE;
+    ulong *btpa, tmpul;
+
+    if ((argc > 1 && *argv[1] == '?') || argc < 3)
+        return kdb_usgf_btp();
+
+    argsidx = 2;                   /* assume 3rd arg is not domid */
+    if (argc > 3 && kdb_str2domid(argv[2], &domid, 0)) {
+
+        if (is_hvm_or_hyb_domain(kdb_domid2ptr(domid))) {
+            kdbp("HVM domains are not currently supported\n");
+            return KDB_CPU_MAIN_KDB;
+        } else
+            argsidx = 3;               /* 3rd arg is a domid */
+    }
+    if (!kdb_str2addr(argv[1], &addr, domid) || addr == 0) {
+        kdbp("Invalid argument:%s\n", argv[1]);
+        return KDB_CPU_MAIN_KDB;
+    }
+    /* make sure xen addr is in xen text, otherwise will trace 64bit dom0/U */
+    if (domid == DOMID_IDLE && 
+        (addr < XEN_VIRT_START || addr > XEN_VIRT_END))
+    {
+        kdbp("addr:%lx not in xen text\n", addr);
+        return KDB_CPU_MAIN_KDB;
+    }
+
+    numrd = kdb_guest_bitness(domid)/8;
+    if (kdb_read_mem(addr, (kdbbyt_t *)&tmpul, numrd, domid) != numrd) {
+        kdbp("Unable to read mem from %s (%lx)\n", argv[1], addr);
+        return KDB_CPU_MAIN_KDB;
+    }
+
+    for (btpidx=0; btpidx < KDBMAXSBP && kdb_btp_ap[btpidx]; btpidx++);
+    if (btpidx >= KDBMAXSBP) {
+        kdbp("error: table full. delete a few breakpoints\n");
+        return KDB_CPU_MAIN_KDB;
+    }
+    btpa = kdb_btp_argsa[btpidx];
+    memset(btpa, 0, sizeof(kdb_btp_argsa[0]));
+
+    for (i=0; argv[argsidx]; i++, argsidx++) {
+
+        if (((regoffs=kdb_valid_reg(argv[argsidx])) == -1)  &&
+            kdb_str2ulong(argv[argsidx], &memloc) &&
+            (memloc < sizeof (struct cpu_user_regs) ||
+            kdb_read_mem(memloc, (kdbbyt_t *)&tmpul, sizeof(tmpul), domid)==0)){
+
+            kdbp("error: invalid argument: %s\n", argv[argsidx]);
+            return KDB_CPU_MAIN_KDB;
+        }
+        if (i >= KDB_MAXBTP) {
+            kdbp("error: cannot specify more than %d args\n", KDB_MAXBTP);
+            return KDB_CPU_MAIN_KDB;
+        }
+        btpa[i] = (regoffs == -1) ? memloc : regoffs;
+    }
+
+    i = kdb_set_bp(domid, addr, 0, btpa, 0, 0, 0);     /* 0 is ni flag */
+    if (i < KDBMAXSBP)
+        kdb_btp_ap[btpidx] = kdb_btp_argsa[btpidx];
+
+    return KDB_CPU_MAIN_KDB;
+}
+
+/* 
+ * Set/List watchpoints, ie, hardware breakpoint/s, in hypervisor
+ *   Usage: wp [sym|addr] [w|i]   w == write only data watchpoint
+ *                                i == IO watchpoint (read/write)
+ *
+ *   Eg:  wp        : list all watchpoints set
+ *        wp addr   : set a read/write wp at given addr
+ *        wp addr w : set a write only wp at given addr
+ *        wp addr i : set an IO wp at given addr (16bits port #)
+ *
+ *  TBD: allow to be set on particular cpu
+ */
+static kdb_cpu_cmd_t
+kdb_usgf_wp(void)
+{
+    kdbp("wp [addr|sym][w|i]: display or set watchpoint. writeonly or IO\n");
+    kdbp("\tnote: watchpoint is triggered after the instruction executes\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t 
+kdb_cmdf_wp(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    kdbva_t addr;
+    domid_t domid = DOMID_IDLE;
+    int rw = 3, len = 4;       /* for now just default to 4 bytes len */
+
+    if (argc > 1 && *argv[1] == '?')
+        return kdb_usgf_wp();
+
+    if (argc <= 1 || kdb_sys_crash) {       /* list all set watchpoints */
+        kdb_do_watchpoints(0, 0, 0);
+        return KDB_CPU_MAIN_KDB;
+    }
+    if (!kdb_str2addr(argv[1], &addr, domid) || addr == 0) {
+        kdbp("Invalid argument:%s\n", argv[1]);
+        return KDB_CPU_MAIN_KDB;
+    }
+    if (argc > 2) {
+        if (!strcmp(argv[2], "w"))
+            rw = 1;
+        else if (!strcmp(argv[2], "i"))
+            rw = 2;
+        else {
+            return kdb_usgf_wp();
+        }
+    }
+    kdb_do_watchpoints(addr, rw, len);
+    return KDB_CPU_MAIN_KDB;
+}
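
The rw/len pair chosen above is ultimately programmed into the x86 debug registers by `kdb_do_watchpoints()`: the address goes into one of DR0-DR3 and DR7 carries the per-slot enable, type (R/W) and length fields. A sketch of the DR7 field layout for one slot, using the architectural encodings from the x86 manuals (these values are not taken from this patch; the helper name is illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* x86 DR7 R/W encodings: 00 exec, 01 write, 10 I/O (CR4.DE=1), 11 read/write */
#define DR_RW_WRITE 0x1
#define DR_RW_IO    0x2
#define DR_RW_RDWR  0x3

/* DR7 LEN encodings: 00 = 1 byte, 01 = 2 bytes, 11 = 4 bytes */
#define DR_LEN_4    0x3

/* Build a DR7 value globally enabling breakpoint slot 'slot' (0-3). */
static uint64_t dr7_encode(int slot, unsigned rw, unsigned len)
{
    uint64_t v = 0;
    v |= 2UL << (slot * 2);                 /* G<slot> global-enable bit */
    v |= (uint64_t)rw  << (16 + slot * 4);  /* R/W field for the slot    */
    v |= (uint64_t)len << (18 + slot * 4);  /* LEN field for the slot    */
    return v;
}
```

Note the usage comment above: data watchpoints fire after the accessing instruction retires, which is inherent to the hardware mechanism, not a kdb limitation.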
+
+static kdb_cpu_cmd_t
+kdb_usgf_wc(void)
+{
+    kdbp("wc $num|all : clear given or all watchpoints\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t 
+kdb_cmdf_wc(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    const char *argp;
+    int wpnum;              /* wp num to delete. -1 for all */
+
+    if (argc != 2 || *argv[1] == '?') 
+        return kdb_usgf_wc();
+
+    argp = argv[1];
+
+    if (!strcmp(argp, "all"))
+        wpnum = -1;
+    else if (!kdb_str2deci(argp, &wpnum)) {
+        kdbp("Invalid wpnum: %s\n", argp);
+        return KDB_CPU_MAIN_KDB;
+    }
+    kdb_clear_wps(wpnum);
+    return KDB_CPU_MAIN_KDB;
+}
+
+static void
+kdb_display_hvm_vcpu(struct vcpu *vp)
+{
+    struct hvm_vcpu *hvp;
+    struct vlapic *vlp;
+    struct hvm_io_op *ioop;
+
+    hvp = &vp->arch.hvm_vcpu;
+    vlp = &hvp->vlapic;
+    kdbp("vcpu:%p id:%d domid:%d\n", vp, vp->vcpu_id, vp->domain->domain_id);
+
+#if 0
+    if (is_hybrid_vcpu(vp)) {
+        struct hybrid_ext *hp = &hvp->hv_hybrid;
+        kdbp("    &hybrid_ext:%p limit:%x iopl:%x vcpu_info_mfn:%lx\n",
+             hp, hp->hyb_iobmp_limit, hp->hyb_iopl, hp->hyb_vcpu_info_mfn);
+    }
+#endif
+
+    ioop = NULL;   /* compiler warning */
+    kdbp("    &hvm_vcpu:%lx  guest_efer:"KDBFL"\n", hvp, hvp->guest_efer);
+    kdbp("      guest_cr: [0]:"KDBFL" [1]:"KDBFL" [2]:"KDBFL"\n", 
+         hvp->guest_cr[0], hvp->guest_cr[1],hvp->guest_cr[2]);
+    kdbp("                [3]:"KDBFL" [4]:"KDBFL"\n", hvp->guest_cr[3],
+         hvp->guest_cr[4]);
+    kdbp("      hw_cr: [0]:"KDBFL" [1]:"KDBFL" [2]:"KDBFL"\n", hvp->hw_cr[0],
+         hvp->hw_cr[1], hvp->hw_cr[2]);
+    kdbp("              [3]:"KDBFL" [4]:"KDBFL"\n", hvp->hw_cr[3], 
+         hvp->hw_cr[4]);
+
+    kdbp("      VLAPIC: base msr:"KDBF64" dis:%x tmrdiv:%x\n", 
+         vlp->hw.apic_base_msr, vlp->hw.disabled, vlp->hw.timer_divisor);
+    kdbp("          regs:%p regs_page:%p\n", vlp->regs, vlp->regs_page);
+    kdbp("          periodic time:\n"); 
+    kdb_prnt_periodic_time(&vlp->pt);
+
+    kdbp("      xen_port:%x flag_dr_dirty:%x dbg_st_latch:%x\n", hvp->xen_port,
+         hvp->flag_dr_dirty, hvp->debug_state_latch);
+
+    if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL) {
+
+        struct arch_vmx_struct *vxp = &hvp->u.vmx;
+        kdbp("      &vmx: %p vmcs:%lx active_cpu:%x launched:%x\n", vxp, 
+             vxp->vmcs, vxp->active_cpu, vxp->launched);
+#if XEN_VERSION != 4               /* xen 3.x.x */
+        kdbp("        exec_ctrl:%x vpid:$%d\n", vxp->exec_control, vxp->vpid);
+#endif
+        kdbp("        host_cr0: "KDBFL" vmx: {realm:%x emulate:%x}\n",
+             vxp->host_cr0, vxp->vmx_realmode, vxp->vmx_emulate);
+
+#ifdef __x86_64__
+        kdbp("        &msr_state:%p exception_bitmap:%lx\n", &vxp->msr_state,
+             vxp->exception_bitmap);
+#endif
+    } else if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD) {
+        struct arch_svm_struct *svp = &hvp->u.svm;
+#if XEN_VERSION != 4               /* xen 3.x.x */
+        kdbp("  &svm:%p vmcb:%lx pa:"KDBF64" asid:"KDBF64"\n", svp, svp->vmcb,
+             svp->vmcb_pa, svp->asid_generation);
+#endif
+        kdbp("    msrpm:%p lnch_core:%x vmcb_sync:%x\n", svp->msrpm, 
+             svp->launch_core, svp->vmcb_in_sync);
+    }
+    kdbp("      cachemode:%x io: {state: %x data: "KDBFL"}\n", hvp->cache_mode,
+         hvp->hvm_io.io_state, hvp->hvm_io.io_data);
+    kdbp("      mmio: {gva: "KDBFL" gpfn: "KDBFL"}\n", hvp->hvm_io.mmio_gva,
+         hvp->hvm_io.mmio_gpfn);
+}
+
+/* display struct hvm_vcpu{} in struct vcpu.arch{} */
+static kdb_cpu_cmd_t
+kdb_usgf_vcpuh(void)
+{
+    kdbp("vcpuh vcpu-ptr : display hvm_vcpu struct\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t 
+kdb_cmdf_vcpuh(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    struct vcpu *vp;
+
+    if (argc < 2 || *argv[1] == '?') 
+        return kdb_usgf_vcpuh();
+
+    if (!kdb_str2ulong(argv[1], (ulong *)&vp) || !kdb_vcpu_valid(vp) ||
+        !is_hvm_or_hyb_vcpu(vp)) {
+
+        kdbp("kdb: Bad VCPU: %s\n", argv[1]);
+        return KDB_CPU_MAIN_KDB;
+    }
+    kdb_display_hvm_vcpu(vp);
+    return KDB_CPU_MAIN_KDB;
+}
+
+/* also look into arch_get_info_guest() to get context */
+static void
+kdb_print_uregs(struct cpu_user_regs *regs)
+{
+#ifdef __x86_64__
+    kdbp("      rflags: %016lx   rip: %016lx\n", regs->rflags, regs->rip);
+    kdbp("         rax: %016lx   rbx: %016lx   rcx: %016lx\n",
+         regs->rax, regs->rbx, regs->rcx);
+    kdbp("         rdx: %016lx   rsi: %016lx   rdi: %016lx\n",
+         regs->rdx, regs->rsi, regs->rdi);
+    kdbp("         rbp: %016lx   rsp: %016lx    r8: %016lx\n",
+         regs->rbp, regs->rsp, regs->r8);
+    kdbp("          r9:  %016lx  r10: %016lx   r11: %016lx\n",
+         regs->r9,  regs->r10, regs->r11);
+    kdbp("         r12: %016lx   r13: %016lx   r14: %016lx\n",
+         regs->r12, regs->r13, regs->r14);
+    kdbp("         r15: %016lx\n", regs->r15);
+    kdbp("      ds: %04x   es: %04x   fs: %04x   gs: %04x   "
+         "      ss: %04x   cs: %04x\n", regs->ds, regs->es, regs->fs,
+         regs->gs, regs->ss, regs->cs);
+    kdbp("      errcode:%08lx entryvec:%08lx upcall_mask:%lx\n",
+         regs->error_code, regs->entry_vector, regs->saved_upcall_mask);
+#else
+    kdbp("      eflags: %08lx eip: %08lx\n", regs->eflags, regs->eip);
+    kdbp("      eax: %08x   ebx: %08x   ecx: %08x   edx: %08x\n",
+         regs->eax, regs->ebx, regs->ecx, regs->edx);
+    kdbp("      esi: %08x   edi: %08x   ebp: %08x   esp: %08x\n",
+         regs->esi, regs->edi, regs->ebp, regs->esp);
+    kdbp("      ds: %04x   es: %04x   fs: %04x   gs: %04x   "
+     "      ss: %04x   cs: %04x\n", regs->ds, regs->es, regs->fs,
+         regs->gs, regs->ss, regs->cs);
+    kdbp("      errcode:%04lx entryvec:%04lx upcall_mask:%lx\n", 
+         regs->error_code, regs->entry_vector, regs->saved_upcall_mask);
+#endif
+}
+
+#if XEN_SUBVERSION < 3             /* xen 3.1.x or xen 3.2.x */
+#ifdef CONFIG_COMPAT
+    #undef vcpu_info
+    #define vcpu_info(v, field)             \
+    (*(!has_32bit_shinfo((v)->domain) ?                                       \
+       (typeof(&(v)->vcpu_info->compat.field))&(v)->vcpu_info->native.field : \
+       (typeof(&(v)->vcpu_info->compat.field))&(v)->vcpu_info->compat.field))
+
+    #undef __shared_info
+    #define __shared_info(d, s, field)                      \
+    (*(!has_32bit_shinfo(d) ?                           \
+       (typeof(&(s)->compat.field))&(s)->native.field : \
+       (typeof(&(s)->compat.field))&(s)->compat.field))
+#endif
+#endif
+
+static void kdb_display_pv_vcpu(struct vcpu *vp)
+{
+    int i;
+    struct pv_vcpu *gp = &vp->arch.pv_vcpu;
+
+    kdbp("      GDT_VIRT_START(vcpu): %lx\n", GDT_VIRT_START(vp));
+    kdbp("      GDT: entries:0x%lx  frames:\n", gp->gdt_ents);
+    for (i=0; i < 16; i=i+4) 
+        kdbp("          %016lx %016lx %016lx %016lx\n", gp->gdt_frames[i], 
+             gp->gdt_frames[i+1], gp->gdt_frames[i+2],gp->gdt_frames[i+3]);
+    
+    kdbp("      trap_ctxt:%lx kernel_ss:%lx kernel_sp:%lx\n", gp->trap_ctxt,
+         gp->kernel_ss, gp->kernel_sp);
+    kdbp("      ctrlregs:\n");
+    for (i=0; i < 8; i=i+4)
+        kdbp("          %016lx %016lx %016lx %016lx\n", gp->ctrlreg[i], 
+             gp->ctrlreg[i+1], gp->ctrlreg[i+2], gp->ctrlreg[i+3]);
+#ifdef __x86_64__
+    kdbp("      callback:   event: %016lx   failsafe: %016lx\n", 
+         gp->event_callback_eip, gp->failsafe_callback_eip);
+    kdbp("      base: fs:0x%lx gskern:0x%lx gsuser:0x%lx\n", 
+         gp->fs_base, gp->gs_base_kernel, gp->gs_base_user);
+#else
+    kdbp("      callback:   event: %08lx:%08lx   failsafe: %08lx:%08lx\n", 
+         gp->event_callback_cs, gp->event_callback_eip, 
+         gp->failsafe_callback_cs, gp->failsafe_callback_eip);
+#endif
+    kdbp("    vcpu_info_mfn: %lx  iopl: %x\n", gp->vcpu_info_mfn, gp->iopl);
+    kdbp("\n");
+}
+
+/* Display one VCPU info */
+static void
+kdb_display_vcpu(struct vcpu *vp)
+{
+    int i;
+    struct arch_vcpu *avp = &vp->arch;
+    struct paging_vcpu *pvp = &vp->arch.paging;
+    int domid = vp->domain->domain_id;
+
+    kdbp("\nVCPU:  vcpu-id:%d  vcpu-ptr:%p ", vp->vcpu_id, vp);
+    kdbp("  processor:%d domid:%d  domp:%p\n", vp->processor, domid,vp->domain);
+
+    if (domid == DOMID_IDLE) {
+        kdbp("    IDLE vcpu.\n");
+        return;
+    }
+    kdbp("  pause: flags:0x%016lx count:%x\n", vp->pause_flags, 
+         vp->pause_count.counter);
+    kdbp("  vcpu: initdone:%d running:%d\n", 
+         vp->is_initialised, vp->is_running);
+    kdbp("  mcepend:%d nmipend:%d shut: def:%d paused:%d\n", 
+         vp->mce_pending,  vp->nmi_pending, vp->defer_shutdown, 
+         vp->paused_for_shutdown);
+    kdbp("  &vcpu_info:%p : evtchn_upc_pend:%x _mask:%x\n",
+         vp->vcpu_info, vcpu_info(vp, evtchn_upcall_pending),
+         vcpu_info(vp, evtchn_upcall_mask));
+    kdbp("  evt_pend_sel:%lx poll_evtchn:%x ", 
+         *(unsigned long *)&vcpu_info(vp, evtchn_pending_sel), vp->poll_evtchn);
+    kdb_print_spin_lock("virq_lock:", &vp->virq_lock, "\n");
+    for (i=0; i < NR_VIRQS; i++)
+        if (vp->virq_to_evtchn[i] != 0)
+            kdbp("      virq:$%d port:$%d\n", i, vp->virq_to_evtchn[i]);
+
+    kdbp("  next:%p periodic: period:0x%lx last_event:0x%lx\n", 
+         vp->next_in_list, vp->periodic_period, vp->periodic_last_event);
+    kdbp("  cpu_affinity:0x%lx vcpu_dirty_cpumask:%p sched_priv:0x%p\n",
+         vp->cpu_affinity, vp->vcpu_dirty_cpumask, vp->sched_priv);
+    kdbp("  &runstate: %p state: %x (eg. RUNSTATE_running) guestptr:%p\n", 
+         &vp->runstate, vp->runstate.state, runstate_guest(vp));
+    kdbp("\n");
+    kdbp("  arch info: (%p)\n", &vp->arch);
+    kdbp("    guest_context: VGCF_ flags:%lx", 
+         vp->arch.vgc_flags); /* VGCF_in_kernel */
+    if (is_hvm_or_hyb_vcpu(vp))
+        kdbp("    (HVM guest: IP, SP, EFLAGS may be stale)");
+    kdbp("\n");
+    kdb_print_uregs(&vp->arch.user_regs);
+    kdbp("      debugregs:\n");
+    for (i=0; i < 8; i=i+4)
+        kdbp("          %016lx %016lx %016lx %016lx\n", avp->debugreg[i], 
+             avp->debugreg[i+1], avp->debugreg[i+2], avp->debugreg[i+3]);
+
+    if (is_hvm_or_hyb_vcpu(vp))
+        kdb_display_hvm_vcpu(vp);
+    else
+        kdb_display_pv_vcpu(vp);
+
+    kdbp("    TF_flags: %016lx  guest_table: %016lx cr3:%016lx\n", 
+         vp->arch.flags, vp->arch.guest_table.pfn, avp->cr3); 
+    kdbp("    paging: \n");
+    kdbp("      vtlb:%p\n", &pvp->vtlb);
+    kdbp("      &pg_mode:%p gstlevels:%d &shadow:%p shlevels:%d\n",
+         pvp->mode, pvp->mode->guest_levels, &pvp->mode->shadow,
+         pvp->mode->shadow.shadow_levels);
+    kdbp("      shadow_vcpu:\n");
+    kdbp("        guest_vtable:%p last em_mfn:"KDBFL"\n",
+         pvp->shadow.guest_vtable, pvp->shadow.last_emulated_mfn);
+#if CONFIG_PAGING_LEVELS >= 3
+    kdbp("         l3tbl: 3:"KDBFL" 2:"KDBFL"\n"
+         "                1:"KDBFL" 0:"KDBFL"\n",
+     pvp->shadow.l3table[3].l3, pvp->shadow.l3table[2].l3, 
+     pvp->shadow.l3table[1].l3, pvp->shadow.l3table[0].l3);
+    kdbp("        gl3tbl: 3:"KDBFL" 2:"KDBFL"\n"
+         "                1:"KDBFL" 0:"KDBFL"\n",
+     pvp->shadow.gl3e[3].l3, pvp->shadow.gl3e[2].l3, 
+     pvp->shadow.gl3e[1].l3, pvp->shadow.gl3e[0].l3);
+#endif
+    kdbp("  gdbsx_vcpu_event:%x\n", vp->arch.gdbsx_vcpu_event);
+}
+
+/* 
+ * FUNCTION: Display (current) VCPU(s)
+ */
+static kdb_cpu_cmd_t
+kdb_usgf_vcpu(void)
+{
+    kdbp("vcpu [vcpu-ptr] : display current/vcpu-ptr vcpu info\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t 
+kdb_cmdf_vcpu(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    struct vcpu *v = current;
+
+    if (argc > 2 || (argc > 1 && *argv[1] == '?'))
+        kdb_usgf_vcpu();
+    else if (argc <= 1)
+        kdb_display_vcpu(v);
+    else if (kdb_str2ulong(argv[1], (ulong *)&v) && kdb_vcpu_valid(v))
+        kdb_display_vcpu(v);
+    else 
+        kdbp("Invalid usage/argument:%s v:%lx\n", argv[1], (long)v);
+    return KDB_CPU_MAIN_KDB;
+}
+
+/* from paging_dump_domain_info() */
+static void kdb_pr_dom_pg_modes(struct domain *d)
+{
+    if (paging_mode_enabled(d)) {
+        kdbp(" paging mode enabled");
+        if ( paging_mode_shadow(d) )
+            kdbp(" shadow(PG_SH_enable)");
+        if ( paging_mode_hap(d) )
+            kdbp(" hap(PG_HAP_enable) ");
+        if ( paging_mode_refcounts(d) )
+            kdbp(" refcounts(PG_refcounts) ");
+        if ( paging_mode_log_dirty(d) )
+            kdbp(" log_dirty(PG_log_dirty) ");
+        if ( paging_mode_translate(d) )
+            kdbp(" translate(PG_translate) ");
+        if ( paging_mode_external(d) )
+            kdbp(" external(PG_external) ");
+    } else
+        kdbp(" disabled");
+    kdbp("\n");
+}
+
+/* print event channel info for a given domain 
+ * NOTE: confusingly, "port" and "event channel" refer to the same thing. evtchn
+ * is an array of pointers, each to a bucket of 128 struct evtchn{}. While
+ * 64bit xen can handle 4096 max channels, a 32bit guest is limited to 1024 */
+static void noinline kdb_print_dom_eventinfo(struct domain *dp)
+{
+    uint chn;
+
+    kdbp("\n");
+    kdbp("  Evt: MAX_EVTCHNS:$%d ptr:%p pollmsk:%08lx ",
+         MAX_EVTCHNS(dp), dp->evtchn, dp->poll_mask[0]);
+    kdb_print_spin_lock("lk:", &dp->event_lock, "\n");
+    kdbp("    &evtchn_pending:%p &evtchn_mask:%p\n", 
+         shared_info(dp, evtchn_pending), shared_info(dp, evtchn_mask));
+
+    kdbp("   Channels info: (everything is in decimal):\n");
+    for (chn=0; chn < MAX_EVTCHNS(dp); chn++ ) {
+        struct evtchn *bktp = dp->evtchn[chn/EVTCHNS_PER_BUCKET];
+        struct evtchn *chnp = &bktp[chn & (EVTCHNS_PER_BUCKET-1)];
+        char pbit = test_bit(chn, &shared_info(dp, evtchn_pending)) ? 'Y' : 'N';
+        char mbit = test_bit(chn, &shared_info(dp, evtchn_mask)) ? 'Y' : 'N';
+
+        if (bktp==NULL || chnp->state==ECS_FREE)
+            continue;
+
+        kdbp("    chn:%4u st:%d _xen=%d _vcpu_id:%2d ", chn, chnp->state,
+             chnp->xen_consumer, chnp->notify_vcpu_id);
+        if (chnp->state == ECS_UNBOUND)
+            kdbp(" rem-domid:%d", chnp->u.unbound.remote_domid);
+        else if (chnp->state == ECS_INTERDOMAIN)
+            kdbp(" rem-port:%d rem-dom:%d", chnp->u.interdomain.remote_port,
+                 chnp->u.interdomain.remote_dom->domain_id);
+        else if (chnp->state == ECS_PIRQ)
+            kdbp(" pirq:%d", chnp->u.pirq);
+        else if (chnp->state == ECS_VIRQ)
+            kdbp(" virq:%d", chnp->u.virq);
+
+        kdbp("  pend:%c mask:%c\n", pbit, mbit);
+    }
+#if 0
+    kdbp("pirq to evtchn mapping (pirq:evtchn) (all decimal):\n");
+    for (i=0; i < dp->nr_pirqs; i ++)
+        if (dp->pirq_to_evtchn[i])
+            kdbp("(%d:%d) ", i, dp->pirq_to_evtchn[i]);
+    kdbp("\n");
+#endif
+}
+
+static void kdb_prnt_hvm_dom_info(struct domain *dp)
+{
+    struct hvm_domain *hvp = &dp->arch.hvm_domain;
+
+    kdbp("    HVM info: Hap is%s enabled\n", 
+         dp->arch.hvm_domain.hap_enabled ? "" : " not");
+
+    if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL) {
+        struct vmx_domain *vdp = &dp->arch.hvm_domain.vmx;
+        kdbp("    EPT: ept_mt:%x ept_wl:%x asr:%013lx\n", 
+             vdp->ept_control.ept_mt, vdp->ept_control.ept_wl, 
+             vdp->ept_control.asr);
+    }
+    if (hvp == NULL)
+        return;
+
+    if (hvp->irq.callback_via_type == HVMIRQ_callback_vector)
+        kdbp("    HVMIRQ_callback_vector: %x\n", hvp->irq.callback_via.vector);
+
+    if (!is_hvm_domain(dp))
+        return;
+
+    kdbp("    HVM PARAMS (all in hex):\n");
+    kdbp("\tioreq.page:%lx ioreq.va:%lx\n", hvp->ioreq.page, hvp->ioreq.va);
+    kdbp("\tbuf_ioreq.page:%lx buf_ioreq.va:%lx\n", hvp->buf_ioreq.page, 
+         hvp->buf_ioreq.va);
+    kdbp("\tHVM_PARAM_CALLBACK_IRQ: %x\n", hvp->params[HVM_PARAM_CALLBACK_IRQ]);
+    kdbp("\tHVM_PARAM_STORE_PFN: %x\n", hvp->params[HVM_PARAM_STORE_PFN]);
+    kdbp("\tHVM_PARAM_STORE_EVTCHN: %x\n", hvp->params[HVM_PARAM_STORE_EVTCHN]);
+    kdbp("\tHVM_PARAM_PAE_ENABLED: %x\n", hvp->params[HVM_PARAM_PAE_ENABLED]);
+    kdbp("\tHVM_PARAM_IOREQ_PFN: %x\n", hvp->params[HVM_PARAM_IOREQ_PFN]);
+    kdbp("\tHVM_PARAM_BUFIOREQ_PFN: %x\n", hvp->params[HVM_PARAM_BUFIOREQ_PFN]);
+    kdbp("\tHVM_PARAM_VIRIDIAN: %x\n", hvp->params[HVM_PARAM_VIRIDIAN]);
+    kdbp("\tHVM_PARAM_TIMER_MODE: %x\n", hvp->params[HVM_PARAM_TIMER_MODE]);
+    kdbp("\tHVM_PARAM_HPET_ENABLED: %x\n", hvp->params[HVM_PARAM_HPET_ENABLED]);
+    kdbp("\tHVM_PARAM_IDENT_PT: %x\n", hvp->params[HVM_PARAM_IDENT_PT]);
+    kdbp("\tHVM_PARAM_DM_DOMAIN: %x\n", hvp->params[HVM_PARAM_DM_DOMAIN]);
+    kdbp("\tHVM_PARAM_ACPI_S_STATE: %x\n", hvp->params[HVM_PARAM_ACPI_S_STATE]);
+    kdbp("\tHVM_PARAM_VM86_TSS: %x\n", hvp->params[HVM_PARAM_VM86_TSS]);
+    kdbp("\tHVM_PARAM_VPT_ALIGN: %x\n", hvp->params[HVM_PARAM_VPT_ALIGN]);
+    kdbp("\tHVM_PARAM_CONSOLE_PFN: %x\n", hvp->params[HVM_PARAM_CONSOLE_PFN]);
+    kdbp("\tHVM_PARAM_CONSOLE_EVTCHN: %x\n", 
+         hvp->params[HVM_PARAM_CONSOLE_EVTCHN]);
+    kdbp("\tHVM_PARAM_ACPI_IOPORTS_LOCATION: %x\n", 
+         hvp->params[HVM_PARAM_ACPI_IOPORTS_LOCATION]);
+    kdbp("\tHVM_PARAM_MEMORY_EVENT_SINGLE_STEP: %x\n", 
+         hvp->params[HVM_PARAM_MEMORY_EVENT_SINGLE_STEP]);
+}
+static void kdb_print_rangesets(struct domain *dp)
+{
+    int locked = spin_is_locked(&dp->rangesets_lock);
+
+    if (locked)
+        spin_unlock(&dp->rangesets_lock);
+    rangeset_domain_printk(dp);
+    if (locked)
+        spin_lock(&dp->rangesets_lock);
+}
+
+static void kdb_pr_vtsc_info(struct arch_domain *ap)
+{
+    kdbp("    VTSC info: tsc_mode:%x  vtsc:%x  vtsc_last:%016lx\n", 
+         ap->tsc_mode, ap->vtsc, ap->vtsc_last);
+    kdbp("        vtsc_offset:%016lx tsc_khz:%08lx incarnation:%x\n", 
+         ap->vtsc_offset, ap->tsc_khz, ap->incarnation);
+    kdbp("        vtsc_kerncount:%016lx _usercount:%016lx\n",
+         ap->vtsc_kerncount, ap->vtsc_usercount);
+}
+
+/* display one domain info */
+static void
+kdb_display_dom(struct domain *dp)
+{
+    struct vcpu *vp;
+    int printed = 0;
+    struct grant_table *gp = dp->grant_table;
+    struct arch_domain *ap = &dp->arch;
+
+    kdbp("\nDOMAIN :    domid:0x%04x ptr:0x%p\n", dp->domain_id, dp);
+    if (dp->domain_id == DOMID_IDLE) {
+        kdbp("    IDLE domain.\n");
+        return;
+    }
+    if (dp->is_dying) {
+        kdbp("    domain is DYING.\n");
+        return;
+    }
+#if 0
+    kdb_print_spin_lock("  pgalk:", &dp->page_alloc_lock, "\n");
+    kdbp("  pglist:  0x%p 0x%p\n", dp->page_list.next,KDB_PGLLE(dp->page_list));
+    kdbp("  xpglist: 0x%p 0x%p\n", dp->xenpage_list.next, 
+         KDB_PGLLE(dp->xenpage_list));
+    kdbp("  next:0x%p hashnext:0x%p\n", 
+         dp->next_in_list, dp->next_in_hashbucket);
+#endif
+    kdbp("  PAGES: tot:0x%08x max:0x%08x xenheap:0x%08x\n", 
+         dp->tot_pages, dp->max_pages, dp->xenheap_pages);
+
+    kdb_print_rangesets(dp);
+    kdb_print_dom_eventinfo(dp);
+    kdbp("\n");
+    kdbp("  Grant table: gp:0x%p\n", gp);
+    if (gp) {
+        kdbp("    nr_frames:0x%08x shpp:0x%p active:0x%p\n",
+             gp->nr_grant_frames, gp->shared_raw, gp->active);
+        kdbp("    maptrk:0x%p maphd:0x%08x maplmt:0x%08x\n", 
+             gp->maptrack, gp->maptrack_head, gp->maptrack_limit);
+        kdb_print_spin_lock("    mapcnt: lk:", &gp->lock, "\n");
+    }
+    kdbp("  hvm:%d priv:%d need_iommu:%d dbg:%d dying:%d paused:%d\n",
+         dp->is_hvm, dp->is_privileged, dp->need_iommu,
+         dp->debugger_attached, dp->is_dying, dp->is_paused_by_controller);
+    kdb_print_spin_lock("  shutdown: lk:", &dp->shutdown_lock, "\n");
+    kdbp("  shutn:%d shut:%d code:%d \n", dp->is_shutting_down,
+         dp->is_shut_down, dp->shutdown_code);
+    kdbp("  pausecnt:0x%08x vm_assist:0x"KDBFL" refcnt:0x%08x\n",
+         dp->pause_count.counter, dp->vm_assist, dp->refcnt.counter);
+    kdbp("  &domain_dirty_cpumask:%p\n", &dp->domain_dirty_cpumask); 
+
+    kdbp("  shared == vcpu_info[]: %p\n",  dp->shared_info); 
+    kdbp("    arch_shared: maxpfn: %lx pfn-mfn-frame-ll mfn: %lx\n", 
+         arch_get_max_pfn(dp), arch_get_pfn_to_mfn_frame_list_list(dp));
+    kdbp("\n");
+    kdbp("  arch_domain at : %p\n", ap);
+
+#ifdef CONFIG_X86_64
+    kdbp("    pt_pages:0x%p ", ap->mm_perdomain_pt_pages);
+    kdbp("    l2:0x%p l3:0x%p\n", ap->mm_perdomain_l2, ap->mm_perdomain_l3);
+#else
+    kdbp("    pt:0x%p ", ap->mm_perdomain_pt);
+#endif
+#ifdef CONFIG_X86_32
+    kdbp("    &mapcache:0x%p\n", &ap->mapcache);
+#endif
+    kdbp("    ioport:0x%p &hvm_dom:0x%p\n", ap->ioport_caps, &ap->hvm_domain);
+    if (is_hvm_or_hyb_domain(dp))
+        kdb_prnt_hvm_dom_info(dp);
+
+    kdbp("    &paging_dom:%p mode: %lx", &ap->paging, ap->paging.mode); 
+    kdb_pr_dom_pg_modes(dp);
+    kdbp("    p2m ptr:%p  pages:{%p, %p}\n", ap->p2m, ap->p2m->pages.next,
+         KDB_PGLLE(ap->p2m->pages));
+    kdbp("       max_mapped_pfn:"KDBFL, ap->p2m->max_mapped_pfn);
+#if XEN_SUBVERSION > 0 && XEN_VERSION == 4              /* xen 4.1 and above */
+    kdbp("  phys_table.pfn:"KDBFL"\n", ap->p2m->phys_table.pfn);
+#else
+    kdbp("  phys_table.pfn:"KDBFL"\n", ap->phys_table.pfn);
+#endif
+    kdbp("    physaddr_bitsz:%d 32bit_pv:%d has_32bit_shinfo:%d\n", 
+         ap->physaddr_bitsize, ap->is_32bit_pv, ap->has_32bit_shinfo);
+    kdb_pr_vtsc_info(ap);
+    kdbp("  sched:0x%p  &handle:0x%p\n", dp->sched_priv, &dp->handle);
+    kdbp("  vcpu ptrs:\n   ");
+    for_each_vcpu(dp, vp) {
+        kdbp(" %d:%p", vp->vcpu_id, vp);
+        if (++printed % 4 == 0) kdbp("\n   ");
+    }
+    kdbp("\n");
+}
+
+/* 
+ * FUNCTION: Display (current) domain(s)
+ */
+static kdb_cpu_cmd_t
+kdb_usgf_dom(void)
+{
+    kdbp("dom [all|domid]: Display current/all/given domain/s\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t 
+kdb_cmdf_dom(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    int id;
+    struct domain *dp = current->domain;
+
+    if (argc > 1 && *argv[1] == '?')
+        return kdb_usgf_dom();
+
+    if (argc > 1) {
+        for(dp=domain_list; dp; dp=dp->next_in_list)
+            if (kdb_str2deci(argv[1], &id) && dp->domain_id==id)
+                kdb_display_dom(dp);
+            else if (!strcmp(argv[1], "all")) 
+                kdb_display_dom(dp);
+    } else {
+        kdbp("Displaying current domain :\n");
+        kdb_display_dom(dp);
+    }
+    return KDB_CPU_MAIN_KDB;
+}
+
+/* Dump irq desc table */
+static kdb_cpu_cmd_t
+kdb_usgf_dirq(void)
+{
+    kdbp("dirq : dump irq bindings\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t
+kdb_cmdf_dirq(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    unsigned long irq, sz, offs, addr;
+    char buf[KSYM_NAME_LEN+1];
+    char affstr[NR_CPUS/4+NR_CPUS/32+2];    /* courtesy dump_irqs() */
+
+    if (argc > 1 && *argv[1] == '?')
+        return kdb_usgf_dirq();
+
+#if XEN_VERSION < 4 && XEN_SUBVERSION < 5           /* xen 3.4.x or below */
+    kdbp("idx/irq#/status: all are in decimal\n");
+    kdbp("idx  irq#  status   action(handler name devid)\n");
+    for (irq=0; irq < NR_VECTORS; irq++) {
+        irq_desc_t  *dp = &irq_desc[irq];
+        if (!dp->action)
+            continue;
+        addr = (unsigned long)dp->action->handler;
+        kdbp("[%3ld]:irq:%3d st:%3d f:%s devnm:%s devid:0x%p\n",
+             irq, vector_to_irq(irq), dp->status, (dp->status & IRQ_GUEST) ? 
+                            "GUEST IRQ" : symbols_lookup(addr, &sz, &offs, buf),
+             dp->action->name, dp->action->dev_id);
+    }
+#else
+    kdbp("irq_desc[]:%p nr_irqs: $%d nr_irqs_gsi: $%d\n", irq_desc, nr_irqs, 
+          nr_irqs_gsi);
+    kdbp("irq/vec#/status: in decimal. affinity in hex, not bitmap\n");
+    kdbp("irq-- vec sta function----------- name---- type--------- ");
+    kdbp("aff devid------------\n");
+    for (irq=0; irq < nr_irqs; irq++) {
+        void *devidp;
+        const char *symp, *nmp;
+        irq_desc_t  *dp = irq_to_desc(irq);
+        struct arch_irq_desc *archp = &dp->arch;
+
+        if (!dp->handler || dp->handler==&no_irq_type || dp->status & IRQ_GUEST)
+            continue;
+
+        addr = dp->action ? (unsigned long)dp->action->handler : 0;
+        symp = addr ? symbols_lookup(addr, &sz, &offs, buf) : "n/a ";
+        nmp = addr ? dp->action->name : "n/a ";
+        devidp = addr ? dp->action->dev_id : NULL;
+        cpumask_scnprintf(affstr, sizeof(affstr), dp->affinity);
+        kdbp("[%3ld] %03d %03d %-19s %-8s %-13s %3s 0x%p\n", irq, archp->vector,
+             dp->status, symp, nmp, dp->handler->typename, affstr, devidp);
+    }
+    kdb_prnt_guest_mapped_irqs();
+#endif
+    return KDB_CPU_MAIN_KDB;
+}
+
+static void
+kdb_prnt_vec_irq_table(int cpu)
+{
+    int i,j, *tbl = per_cpu(vector_irq, cpu);
+
+    kdbp("CPU %d : ", cpu);
+    for (i=0, j=0; i < NR_VECTORS; i++)
+        if (tbl[i] != -1) {
+            kdbp("(%3d:%3d) ", i, tbl[i]);
+            if (!(++j % 5))
+                kdbp("\n        ");
+        }
+    kdbp("\n");
+}
+
+/* Dump irq desc table */
+static kdb_cpu_cmd_t
+kdb_usgf_dvit(void)
+{
+    kdbp("dvit [cpu|all]: dump (per cpu) vector irq table\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t
+kdb_cmdf_dvit(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    int cpu, ccpu = smp_processor_id();
+
+    if (argc > 1 && *argv[1] == '?')
+        return kdb_usgf_dvit();
+    
+    if (argc > 1) {
+        if (!strcmp(argv[1], "all")) 
+            cpu = -1;
+        else if (!kdb_str2deci(argv[1], &cpu)) {
+            kdbp("Invalid cpu:%s\n", argv[1]);
+            return kdb_usgf_dvit();
+        }
+    } else
+        cpu = ccpu;
+
+    kdbp("Per CPU vector irq table pairs (vector:irq) (all decimals):\n");
+    if (cpu != -1) 
+        kdb_prnt_vec_irq_table(cpu);
+    else
+        for_each_online_cpu(cpu) 
+            kdb_prnt_vec_irq_table(cpu);
+
+    return KDB_CPU_MAIN_KDB;
+}
+
+/* do vmexit on all cpu's so intel VMCS can be dumped */
+static kdb_cpu_cmd_t 
+kdb_all_cpu_flush_vmcs(void)
+{
+    int cpu, ccpu = smp_processor_id();
+    for_each_online_cpu(cpu) {
+        if (cpu == ccpu) {
+            kdb_curr_cpu_flush_vmcs();
+        } else {
+            if (kdb_cpu_cmd[cpu] != KDB_CPU_PAUSE){  /* hung cpu */
+                kdbp("Skipping (hung?) cpu %d\n", cpu);
+                continue;
+            }
+            kdb_cpu_cmd[cpu] = KDB_CPU_DO_VMEXIT;
+            while (kdb_cpu_cmd[cpu]==KDB_CPU_DO_VMEXIT);
+        }
+    }
+    return KDB_CPU_MAIN_KDB;
+}
+
+/* Display VMCS or VMCB */
+static kdb_cpu_cmd_t
+kdb_usgf_dvmc(void)
+{
+    kdbp("dvmc [domid] [vcpuid] : Dump vmcs/vmcb\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t
+kdb_cmdf_dvmc(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    domid_t domid = 0;  /* unsigned type doesn't like -1 */
+    int vcpuid = -1;
+
+    if (argc > 1 && *argv[1] == '?')
+        return kdb_usgf_dvmc();
+
+    if (argc > 1) { 
+        if (!kdb_str2domid(argv[1], &domid, 1))
+            return KDB_CPU_MAIN_KDB;
+    }
+    if (argc > 2 && !kdb_str2deci(argv[2], &vcpuid)) {
+        kdbp("Bad vcpuid: 0x%x\n", vcpuid);
+        return KDB_CPU_MAIN_KDB;
+    }
+    if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL) {
+        kdb_all_cpu_flush_vmcs();
+        kdb_dump_vmcs(domid, (int)vcpuid);
+    } else {
+        kdb_dump_vmcb(domid, (int)vcpuid);
+    }
+    return KDB_CPU_MAIN_KDB;
+}
+
+static kdb_cpu_cmd_t
+kdb_usgf_mmio(void)
+{
+    kdbp("mmio: dump mmio related info\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t
+kdb_cmdf_mmio(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    if (argc > 1 && *argv[1] == '?')
+        return kdb_usgf_mmio();
+
+    kdbp("r/o mmio ranges:\n");
+    rangeset_printk(mmio_ro_ranges);
+    kdbp("\n");
+    return KDB_CPU_MAIN_KDB;
+}
+
+/* Dump timer/timers queues */
+static kdb_cpu_cmd_t
+kdb_usgf_dtrq(void)
+{
+    kdbp("dtrq: dump timer queues on all cpus\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t
+kdb_cmdf_dtrq(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    if (argc > 1 && *argv[1] == '?')
+        return kdb_usgf_dtrq();
+
+    kdb_dump_timer_queues();
+    return KDB_CPU_MAIN_KDB;
+}
+
+struct idte {
+    uint16_t offs0_15;
+    uint16_t selector;
+    uint16_t meta;
+    uint16_t offs16_31;
+    uint32_t offs32_63;
+    uint32_t resvd;
+};
+
+#ifdef __x86_64__
+static void
+kdb_print_idte(int num, struct idte *idtp) 
+{
+    uint16_t mta = idtp->meta;
+    char dpl = ((mta & 0x6000) >> 13);
+    char present = ((mta &0x8000) >> 15);
+    int tval = ((mta &0x300) >> 8);
+    char *type = (tval == 1) ? "Task" : ((tval== 2) ? "Intr" : "Trap");
+    domid_t domid = idtp->selector==__HYPERVISOR_CS64 ? DOMID_IDLE :
+                    current->domain->domain_id;
+    uint64_t addr = idtp->offs0_15 | ((uint64_t)idtp->offs16_31 << 16) | 
+                    ((uint64_t)idtp->offs32_63 << 32);
+
+    kdbp("[%03d]: %s %x  %x %04x:%016lx ", num, type, dpl, present,
+         idtp->selector, addr); 
+    kdb_prnt_addr2sym(domid, addr, "\n");
+}
+
+/* Dump 64bit idt table currently on this cpu. Intel Vol 3 section 5.14.1 */
+static kdb_cpu_cmd_t
+kdb_usgf_didt(void)
+{
+    kdbp("didt : dump IDT table on the current cpu\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t
+kdb_cmdf_didt(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    int i;
+    struct idte *idtp = (struct idte *)idt_tables[smp_processor_id()];
+
+    if (argc > 1 && *argv[1] == '?')
+        return kdb_usgf_didt();
+
+    kdbp("IDT at:%p\n", idtp);
+    kdbp("idt#  Type DPL P addr (all hex except idt#)\n");
+    for (i=0; i < 256; i++, idtp++) 
+        kdb_print_idte(i, idtp);
+    return KDB_CPU_MAIN_KDB;
+}
+#else
+static kdb_cpu_cmd_t
+kdb_cmdf_didt(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    kdbp("kdb: Please implement me in 32bit hypervisor\n");
+    return KDB_CPU_MAIN_KDB;
+}
+#endif
+
+struct gdte {             /* same for TSS and LDT */
+    ulong limit0:16;
+    ulong base0:24;       /* linear address base, not pa */
+    ulong acctype:4;      /* Type: access rights */
+    ulong S:1;            /* S: 0 = system, 1 = code/data */
+    ulong DPL:2;          /* DPL */
+    ulong P:1;            /* P: Segment Present */
+    ulong limit1:4;
+    ulong AVL:1;          /* AVL: avail for use by system software */
+    ulong L:1;            /* L: 64bit code segment */
+    ulong DB:1;           /* D/B */
+    ulong G:1;            /* G: granularity */
+    ulong base1:8;        /* linear address base, not pa */
+};
+
+union gdte_u {
+    struct gdte gdte;
+    u64 gval;
+};
+
+struct call_gdte {
+    unsigned short offs0:16;
+    unsigned short sel:16;
+    unsigned short misc0:16;
+    unsigned short offs1:16;
+};
+
+struct idt_gdte {
+    unsigned long offs0:16;
+    unsigned long sel:16;
+    unsigned long ist:3;
+    unsigned long unused0:13;
+    unsigned long offs1:16;
+};
+union sgdte_u {
+    struct call_gdte cgdte;
+    struct idt_gdte igdte;
+    u64 sgval;
+};
+
+/* return binary string form of a hex nibble: max 4 chars, 0000 to 1111 */
+static char *kdb_ret_acctype(uint acctype)
+{
+    static char buf[16];
+    char *p = buf;
+    int i;
+
+    if (acctype > 0xf) {
+        buf[0] = buf[1] = buf[2] = buf[3] = '?';
+        buf[4] = '\0';
+        return buf;
+    }
+    for (i=0; i < 4; i++, p++, acctype=acctype>>1)
+        *p = (acctype & 0x1) ? '1' : '0';
+
+    return buf;
+}
+
+/* Display GDT table. IA-32e mode is assumed. */
+/* first display non system descriptors then display system descriptors */
+static kdb_cpu_cmd_t
+kdb_usgf_dgdt(void)
+{
+    kdbp("dgdt [gdt-ptr decimal-byte-size] : dump GDT table on current cpu or "
+         "for given vcpu\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t
+kdb_cmdf_dgdt(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    struct Xgt_desc_struct desc;
+    union gdte_u u1;
+    ulong start_addr, end_addr, taddr=0;
+    domid_t domid = DOMID_IDLE;
+    int idx;
+
+    if (argc > 1 && *argv[1] == '?')
+        return kdb_usgf_dgdt();
+
+    if (argc > 1) {
+        if (argc != 3)
+            return kdb_usgf_dgdt();
+
+        if (kdb_str2ulong(argv[1], (ulong *)&start_addr) && 
+            kdb_str2deci(argv[2], (int *)&taddr)) {
+            end_addr = start_addr + taddr;
+        } else {
+            kdbp("dgdt: Bad arg:%s or %s\n", argv[1], argv[2]);
+            return kdb_usgf_dgdt();
+        }
+    } else {
+        __asm__ __volatile__ ("sgdt  (%0) \n" :: "a"(&desc) : "memory");
+        start_addr = (ulong)desc.address; 
+        end_addr = (ulong)desc.address + desc.size;
+    }
+    kdbp("GDT: Will skip null desc at 0, start:%lx end:%lx\n", start_addr, 
+         end_addr);
+    kdbp("[idx]   sel --- val --------  Accs DPL P AVL L DB G "
+         "--Base Addr ----  Limit\n");
+    kdbp("                              Type\n");
+
+    /* skip first 8 null bytes */
+    /* the cpu multiplies the index by 8 and adds to GDT.base */
+    for (taddr = start_addr+8; taddr < end_addr;  taddr += sizeof(ulong)) {
+
+        /* not all entries are mapped. do this to avoid GP even if hyp */
+        if (!kdb_read_mem(taddr, (kdbbyt_t *)&u1, sizeof(u1),domid) || !u1.gval)
+            continue;
+
+        if (u1.gval == 0xffffffffffffffff || u1.gval == 0x5555555555555555)
+            continue;               /* what an effin x86 mess */
+
+        idx = (taddr - start_addr) / 8;
+        if (u1.gdte.S == 0) {       /* System Desc are 16 bytes in 64bit mode */
+            taddr += sizeof(ulong);
+            continue;
+        }
+        kdbp("[%04x] %04x %016lx  %4s  %x  %d  %d  %d  %d %d %016lx  %05x\n",
+             idx, (idx<<3), u1.gval, kdb_ret_acctype(u1.gdte.acctype), 
+             u1.gdte.DPL, 
+             u1.gdte.P, u1.gdte.AVL, u1.gdte.L, u1.gdte.DB, u1.gdte.G,  
+             (u64)((u64)u1.gdte.base0 | (u64)((u64)u1.gdte.base1<<24)), 
+             u1.gdte.limit0 | (u1.gdte.limit1<<16));
+    }
+
+    kdbp("\nSystem descriptors (S=0) : (skipping 0th entry)\n");
+    for (taddr=start_addr+8;  taddr < end_addr;  taddr += sizeof(ulong)) {
+        uint acctype;
+        u64 upper, addr64=0;
+
+        /* not all entries are mapped. do this to avoid GP even if hyp */
+        if (kdb_read_mem(taddr, (kdbbyt_t *)&u1, sizeof(u1), domid)==0 || 
+            u1.gval == 0 || u1.gdte.S == 1) {
+            continue;
+        }
+        idx = (taddr - start_addr) / 8;
+        taddr += sizeof(ulong);
+        if (kdb_read_mem(taddr, (kdbbyt_t *)&upper, 8, domid) == 0) {
+            kdbp("Could not read upper 8 bytes of system desc\n");
+            upper = 0;
+        }
+        acctype = u1.gdte.acctype;
+        if (acctype != 2 && acctype != 9 && acctype != 11 && acctype !=12 &&
+            acctype != 14 && acctype != 15)
+            continue;
+
+        kdbp("[%04x] %04x val:%016lx DPL:%x P:%d type:%x ",
+             idx, (idx<<3), u1.gval, u1.gdte.DPL, u1.gdte.P, acctype); 
+
+        upper = (u64)((u64)(upper & 0xFFFFFFFF) << 32);
+
+        /* Vol 3A: table: 3-2  page: 3-19 */
+        if (acctype == 2) {
+            kdbp("LDT gate (0010)\n");
+        }
+        else if (acctype == 9) {
+            kdbp("TSS avail gate(1001)\n");
+        }
+        else if (acctype == 11) {
+            kdbp("TSS busy gate(1011)\n");
+        }
+        else if (acctype == 12) {
+            kdbp("CALL gate (1100)\n");
+        }
+        else if (acctype == 14) {
+            kdbp("IDT gate (1110)\n");
+        }
+        else if (acctype == 15) {
+            kdbp("Trap gate (1111)\n"); 
+        }
+
+        if (acctype == 2 || acctype == 9 || acctype == 11) {
+            kdbp("        AVL:%d G:%d Base Addr:%016lx Limit:%x\n",
+                 u1.gdte.AVL, u1.gdte.G,  
+                 (u64)((u64)u1.gdte.base0 | ((u64)u1.gdte.base1<<24)| upper),
+                 (u32)u1.gdte.limit0 | (u32)((u32)u1.gdte.limit1<<16));
+
+        } else if (acctype == 12) {
+            union sgdte_u u2;
+            u2.sgval = u1.gval;
+
+            addr64 = (u64)((u64)u2.cgdte.offs0 | 
+                           (u64)((u64)u2.cgdte.offs1<<16) | upper);
+            kdbp("        Entry: %04x:%016lx\n", u2.cgdte.sel, addr64);
+        } else if (acctype == 14 || acctype == 15) {
+            union sgdte_u u2;
+            u2.sgval = u1.gval;
+
+            addr64 = (u64)((u64)u2.igdte.offs0 | 
+                           (u64)((u64)u2.igdte.offs1<<16) | upper);
+            kdbp("        Entry: %04x:%016lx ist:%03x\n", u2.igdte.sel, addr64,
+                 u2.igdte.ist);
+        } else 
+            kdbp(" Error: Unrecognized type:%x\n", acctype);
+    }
+    return KDB_CPU_MAIN_KDB;
+}
+
+/* Display scheduler basic and extended info */
+static kdb_cpu_cmd_t
+kdb_usgf_sched(void)
+{
+    kdbp("sched: show scheduler info and run queues\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t
+kdb_cmdf_sched(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    if (argc > 1 && *argv[1] == '?')
+        return kdb_usgf_sched();
+
+    kdb_print_sched_info();
+    return KDB_CPU_MAIN_KDB;
+}
+
+/* Display MMU basic and extended info */
+static kdb_cpu_cmd_t
+kdb_usgf_mmu(void)
+{
+    kdbp("mmu: print basic MMU info\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t
+kdb_cmdf_mmu(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    int cpu;
+
+    if (argc > 1 && *argv[1] == '?')
+        return kdb_usgf_mmu();
+
+    kdbp("MMU Info:\n");
+    kdbp("total  pages: %lx\n", total_pages);
+    kdbp("max page/mfn: %lx\n", max_page);
+    kdbp("frame_table:  %p\n", frame_table);
+    kdbp("DIRECTMAP_VIRT_START:  %lx\n", DIRECTMAP_VIRT_START);
+    kdbp("HYPERVISOR_VIRT_START: %lx\n", HYPERVISOR_VIRT_START);
+    kdbp("HYPERVISOR_VIRT_END:   %lx\n", HYPERVISOR_VIRT_END);
+    kdbp("RO_MPT_VIRT_START:     %lx\n", RO_MPT_VIRT_START);
+    kdbp("PERDOMAIN_VIRT_START:  %lx\n", PERDOMAIN_VIRT_START);
+    kdbp("CONFIG_PAGING_LEVELS:%d\n", CONFIG_PAGING_LEVELS);
+    kdbp("__HYPERVISOR_COMPAT_VIRT_START: %lx\n", 
+         (ulong)__HYPERVISOR_COMPAT_VIRT_START);
+    kdbp("&MPT[0] == %016lx\n", &machine_to_phys_mapping[0]);
+
+    kdbp("\nFIRST_RESERVED_GDT_PAGE: %x\n", FIRST_RESERVED_GDT_PAGE);
+    kdbp("FIRST_RESERVED_GDT_ENTRY: %lx\n", (ulong)FIRST_RESERVED_GDT_ENTRY);
+    kdbp("LAST_RESERVED_GDT_ENTRY: %lx\n", (ulong)LAST_RESERVED_GDT_ENTRY);
+    kdbp("  Per cpu non-compat gdt_table:\n");
+    for_each_online_cpu(cpu) {
+        kdbp("\tcpu:%d  gdt_table:%p\n", cpu, per_cpu(gdt_table, cpu));
+    }
+    kdbp("  Per cpu compat gdt_table:\n");
+    for_each_online_cpu(cpu) {
+        kdbp("\tcpu:%d  gdt_table:%p\n", cpu, per_cpu(compat_gdt_table, cpu));
+    }
+    kdbp("\n");
+    kdbp("  Per cpu tss:\n");
+    for_each_online_cpu(cpu) {
+        struct tss_struct *tssp = &per_cpu(init_tss, cpu);
+        kdbp("\tcpu:%d  tss:%p (rsp0:%016lx)\n", cpu, tssp, tssp->rsp0);
+    }
+#ifdef USER_MAPPINGS_ARE_GLOBAL
+    kdbp("USER_MAPPINGS_ARE_GLOBAL is defined\n");
+#else
+    kdbp("USER_MAPPINGS_ARE_GLOBAL is NOT defined\n");
+#endif
+    kdbp("\n");
+    return KDB_CPU_MAIN_KDB;
+}
+
+/* for HVM/HYB guests, go thru EPT. For PV guests we need to walk the btree: 
+ * pfn_to_mfn_frame_list_list is the root that points to (has mfns of) up to 16
+ * pages (call 'em l2 nodes) that contain mfns of guest p2m table pages.
+ * NOTE: num of entries in a p2m page is same as num of entries in l2 node */
+static noinline ulong
+kdb_gpfn2mfn(struct domain *dp, ulong gpfn, p2m_type_t *typep) 
+{
+    int idx;
+
+    if ( !paging_mode_translate(dp) ) {
+        mfn_t *mfn_va, mfn = arch_get_pfn_to_mfn_frame_list_list(dp);
+        int g_longsz = kdb_guest_bitness(dp->domain_id)/8;
+        int entries_per_pg = PAGE_SIZE/g_longsz;
+        const int shift = get_count_order(entries_per_pg);
+
+        if ( !mfn_valid(mfn) ) {
+            kdbp("Invalid frame_list_list mfn:%lx for non-xlate guest\n", mfn);
+            return INVALID_MFN;
+        }
+
+        mfn_va = map_domain_page(mfn);
+        idx = gpfn >> (2*shift);   /* index in root page/node */
+        if (idx > 15) {
+            kdbp("gpfn:%lx idx:%x not in frame list limit of 16\n", gpfn, idx);
+            unmap_domain_page(mfn_va);
+            return INVALID_MFN;
+        }
+        mfn = (g_longsz == 4) ? ((int *)mfn_va)[idx] : mfn_va[idx];
+        if (mfn==0) {
+            kdbp("No mfn for idx:%d for gpfn:%lx in root pg\n", idx, gpfn);
+            unmap_domain_page(mfn_va);
+            return INVALID_MFN;
+        }
+        mfn_va = map_domain_page(mfn);
+        KDBGP1("p2m: idx:%x fll:%lx mfn of 2nd lvl page:%lx\n", idx,
+               arch_get_pfn_to_mfn_frame_list_list(dp), mfn);
+
+        idx = (gpfn>>shift) & ((1<<shift)-1);     /* idx in l2 node */
+        mfn = (g_longsz == 4) ? ((int *)mfn_va)[idx] : mfn_va[idx];
+        unmap_domain_page(mfn_va);
+        if (mfn == 0) {
+            kdbp("No mfn entry at:%x in 2nd lvl pg for gpfn:%lx\n", idx, gpfn);
+            return INVALID_MFN;
+        }
+        KDBGP1("p2m: idx:%x  mfn of p2m page:%lx\n", idx, mfn); 
+        mfn_va = map_domain_page(mfn);
+        idx = gpfn & ((1<<shift)-1);
+        mfn = (g_longsz == 4) ? ((int *)mfn_va)[idx] : mfn_va[idx];
+        unmap_domain_page(mfn_va);
+
+        *typep = -1;
+        return mfn;
+    } else
+        return mfn_x(get_gfn_query_unlocked(dp, gpfn, typep));
+
+    return INVALID_MFN;
+}
+
+/* given a pfn, find it's mfn */
+static kdb_cpu_cmd_t
+kdb_usgf_p2m(void)
+{
+    kdbp("p2m domid 0xgpfn : gpfn to mfn\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t
+kdb_cmdf_p2m(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    struct domain *dp;
+    ulong gpfn, mfn=0xdeadbeef;
+    p2m_type_t p2mtype = -1;
+
+    if (argc < 3                                   ||
+        (dp=kdb_strdomid2ptr(argv[1], 1)) == NULL  ||
+        !kdb_str2ulong(argv[2], &gpfn)) {
+
+        return kdb_usgf_p2m();
+    }
+    mfn = kdb_gpfn2mfn(dp, gpfn, &p2mtype);
+    if ( paging_mode_translate(dp) )
+        kdbp("p2m[%lx] == %lx type:%d/0x%x\n", gpfn, mfn, p2mtype, p2mtype);
+    else 
+        kdbp("p2m[%lx] == %lx type:N/A(PV)\n", gpfn, mfn);
+
+    return KDB_CPU_MAIN_KDB;
+}
+
+/* given an mfn, lookup pfn in the MPT */
+static kdb_cpu_cmd_t
+kdb_usgf_m2p(void)
+{
+    kdbp("m2p 0xmfn: mfn to pfn\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t
+kdb_cmdf_m2p(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    unsigned long mfn;
+
+    if (argc > 1 && kdb_str2ulong(argv[1], &mfn)) {
+        if (mfn_valid(mfn))
+            kdbp("mpt[%lx] == %lx\n", mfn, machine_to_phys_mapping[mfn]);
+        else
+            kdbp("Invalid mfn:%lx\n", mfn);
+    } else
+        kdb_usgf_m2p();
+    return KDB_CPU_MAIN_KDB;
+}
+
+static void 
+kdb_pr_pg_pgt_flds(unsigned long type_info)
+{
+    switch (type_info & PGT_type_mask) {
+        case (PGT_l1_page_table):
+            kdbp("    page is PGT_l1_page_table\n");
+            break;
+        case PGT_l2_page_table:
+            kdbp("    page is PGT_l2_page_table\n");
+            break;
+        case PGT_l3_page_table:
+            kdbp("    page is PGT_l3_page_table\n");
+            break;
+        case PGT_l4_page_table:
+            kdbp("    page is PGT_l4_page_table\n");
+            break;
+        case PGT_seg_desc_page:
+            kdbp("    page is seg desc page\n");
+            break;
+        case PGT_writable_page:
+            kdbp("    page is writable page\n");
+            break;
+        case PGT_shared_page:
+            kdbp("    page is shared page\n");
+            break;
+    }
+    if (type_info & PGT_pinned)
+        kdbp("    page is pinned\n");
+    if (type_info & PGT_validated)
+        kdbp("    page is validated\n");
+    if (type_info & PGT_pae_xen_l2)
+        kdbp("    page is PGT_pae_xen_l2\n");
+    if (type_info & PGT_partial)
+        kdbp("    page is PGT_partial\n");
+    if (type_info & PGT_locked)
+        kdbp("    page is PGT_locked\n");
+}
+
+static void
+kdb_pr_pg_pgc_flds(unsigned long count_info)
+{
+    if (count_info & PGC_allocated)
+        kdbp("  PGC_allocated");
+    if (count_info & PGC_xen_heap)
+        kdbp("  PGC_xen_heap");
+    if (count_info & PGC_page_table)
+        kdbp("  PGC_page_table");
+    if (count_info & PGC_broken)
+        kdbp("  PGC_broken");
+#if XEN_VERSION < 4                                 /* xen 3.x.x */
+    if (count_info & PGC_offlining)
+        kdbp("  PGC_offlining");
+    if (count_info & PGC_offlined)
+        kdbp("  PGC_offlined");
+#else
+    if (count_info & PGC_state_inuse)
+        kdbp("  PGC_inuse");
+    if (count_info & PGC_state_offlining)
+        kdbp("  PGC_state_offlining");
+    if (count_info & PGC_state_offlined)
+        kdbp("  PGC_state_offlined");
+    if (count_info & PGC_state_free)
+        kdbp("  PGC_state_free");
+#endif
+    kdbp("\n");
+}
+
+/* print struct page_info{} given a ptr to it or an mfn.
+ * NOTE: given just an mfn there is no way of knowing how the page is used,
+ *       so we print all the info and let the user decide what applies. */
+static kdb_cpu_cmd_t
+kdb_usgf_dpage(void)
+{
+    kdbp("dpage mfn|page-ptr : Display struct page\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t
+kdb_cmdf_dpage(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    unsigned long val;
+    struct page_info *pgp;
+    struct domain *dp;
+
+    if (argc <= 1 || *argv[1] == '?') 
+        return kdb_usgf_dpage();
+
+    if ((kdb_str2ulong(argv[1], &val) == 0)      ||
+        (val <  (ulong)frame_table && !mfn_valid(val))) {
+
+        kdbp("Invalid arg:%s\n", argv[1]);
+        return KDB_CPU_MAIN_KDB;
+    }
+    kdbp("Page Info:\n");
+    if (val < (ulong)frame_table) {        /* arg is mfn */
+        pgp = mfn_to_page(val);
+        kdbp("  mfn: %lx page_info:%p\n", val, pgp);
+    } else {
+        pgp = (struct page_info *)val; /* arg is struct page{} */
+        if (pgp < frame_table || pgp >= frame_table+max_page) {
+            kdbp("Invalid page ptr. below/beyond max_page\n");
+            return KDB_CPU_MAIN_KDB;
+        }
+        kdbp("  mfn: %lx page_info:%p\n", page_to_mfn(pgp), pgp);
+    } 
+    kdbp("  count_info: %016lx  (refcnt: %x)\n", pgp->count_info,
+         pgp->count_info & PGC_count_mask);
+#if XEN_VERSION > 3 || XEN_SUBVERSION > 3             /* xen 3.4.x or later */
+    kdb_pr_pg_pgc_flds(pgp->count_info);
+
+    kdbp("In use info:\n");
+    kdbp("  type_info:%016lx\n", pgp->u.inuse.type_info);
+    kdb_pr_pg_pgt_flds(pgp->u.inuse.type_info);
+    dp = page_get_owner(pgp);
+    kdbp("  domid:%d (pickled:%lx)\n", dp ? dp->domain_id : -1, 
+         pgp->v.inuse._domain);
+
+    kdbp("Shadow Info:\n");
+    kdbp("  type:%x pinned:%x count:%x\n", pgp->u.sh.type, pgp->u.sh.pinned,
+         pgp->u.sh.count);
+    kdbp("  back:%lx  shadow_flags:%x  next_shadow:%lx\n", pgp->v.sh.back,
+         pgp->shadow_flags, pgp->next_shadow);
+
+    kdbp("Free Info\n");
+    kdbp("  need_tlbflush:%d order:%d tlbflush_timestamp:%x\n",
+         pgp->u.free.need_tlbflush, pgp->v.free.order, 
+         pgp->tlbflush_timestamp);
+#else
+    if (pgp->count_info & PGC_allocated)            /* page allocated */
+        kdbp("  PGC_allocated");
+    if (pgp->count_info & PGC_page_table)           /* page table page */
+        kdbp("  PGC_page_table");
+    kdbp("\n");
+    kdbp("  page is %s xen heap page\n", is_xen_heap_page(pgp) ? "a":"NOT");
+    kdbp("  cacheattr:%x\n", (pgp->count_info>>PGC_cacheattr_base) & 7);
+    if (pgp->count_info & PGC_count_mask) {         /* page in use */
+        dp = pgp->u.inuse._domain;         /* pickled domain */
+        kdbp("  page is in use\n");
+        kdbp("    domid: %d  (pickled dom:%x)\n", 
+             dp ? (unpickle_domptr(dp))->domain_id : -1, dp);
+        kdbp("    type_info: %lx\n", pgp->u.inuse.type_info);
+        kdb_prt_pg_type(pgp->u.inuse.type_info);
+    } else {                                         /* page is free */
+        kdbp("  page is free\n");
+        kdbp("    order: %x\n", pgp->u.free.order);
+        kdbp("    cpumask: %lx\n", pgp->u.free.cpumask.bits);
+    }
+    kdbp("  tlbflush/shadow_flags: %lx\n", pgp->shadow_flags);
+#endif
+    return KDB_CPU_MAIN_KDB;
+}
+
+/* display asked msr value */
+static kdb_cpu_cmd_t
+kdb_usgf_dmsr(void)
+{
+    kdbp("dmsr address : Display msr value\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t
+kdb_cmdf_dmsr(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    unsigned long addr, val;
+
+    if (argc <= 1 || *argv[1] == '?') 
+        return kdb_usgf_dmsr();
+
+    if ((kdb_str2ulong(argv[1], &addr) == 0)) {
+        kdbp("Invalid arg:%s\n", argv[1]);
+        return KDB_CPU_MAIN_KDB;
+    }
+    rdmsrl(addr, val);
+    kdbp("msr: %lx  val:%lx\n", addr, val);
+
+    return KDB_CPU_MAIN_KDB;
+}
+
+/* execute cpuid for given value */
+static kdb_cpu_cmd_t
+kdb_usgf_cpuid(void)
+{
+    kdbp("cpuid eax : Display cpuid value returned in rax\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t
+kdb_cmdf_cpuid(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    unsigned long rax=0, rbx=0, rcx=0, rdx=0;
+
+    if (argc <= 1 || *argv[1] == '?') 
+        return kdb_usgf_cpuid();
+
+    if ((kdb_str2ulong(argv[1], &rax) == 0)) {
+        kdbp("Invalid arg:%s\n", argv[1]);
+        return KDB_CPU_MAIN_KDB;
+    }
+    cpuid(rax, &rax, &rbx, &rcx, &rdx);
+    kdbp("rax: %016lx  rbx:%016lx rcx:%016lx rdx:%016lx\n", rax, rbx,
+         rcx, rdx);
+    return KDB_CPU_MAIN_KDB;
+}
+
+/* walk the EPT table for a given domid and gfn */
+static kdb_cpu_cmd_t
+kdb_usgf_wept(void)
+{
+    kdbp("wept domid gfn: walk ept table for given domid and gfn\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t
+kdb_cmdf_wept(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    struct domain *dp;
+    ulong gfn;
+
+    if ((argc > 1 && *argv[1] == '?') || argc != 3)
+        return kdb_usgf_wept();
+    if ((dp=kdb_strdomid2ptr(argv[1], 1)) && kdb_str2ulong(argv[2], &gfn))
+        ept_walk_table(dp, gfn);
+    else
+        kdb_usgf_wept();
+
+    return KDB_CPU_MAIN_KDB;
+}
+
+/*
+ * Save symbols info for a guest, dom0 or other...
+ */
+static kdb_cpu_cmd_t
+kdb_usgf_sym(void)
+{
+   kdbp("sym domid &kallsyms_names &kallsyms_addresses &kallsyms_num_syms\n");
+   kdbp("\t [&kallsyms_token_table] [&kallsyms_token_index]\n");
+   kdbp("\ttoken _table and _index MUST be specified for el5\n");
+   return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t
+kdb_cmdf_sym(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    ulong namesp, addrap, nump, toktblp, tokidxp;
+    domid_t domid;
+
+    if (argc < 5) {
+        return kdb_usgf_sym();
+    }
+    toktblp = tokidxp = 0;     /* optional parameters */
+    if (kdb_str2domid(argv[1], &domid, 1) &&
+        kdb_str2ulong(argv[2], &namesp)   &&
+        kdb_str2ulong(argv[3], &addrap)   &&
+        kdb_str2ulong(argv[4], &nump)     && 
+        (argc==5 || (argc==7 && kdb_str2ulong(argv[5], &toktblp) &&
+                                kdb_str2ulong(argv[6], &tokidxp)))) {
+
+        kdb_sav_dom_syminfo(domid, namesp, addrap,nump,toktblp,tokidxp);
+    } else
+        kdb_usgf_sym();
+    return KDB_CPU_MAIN_KDB;
+}
+
+
+/* mods is the guest address of &modules. modules is a struct {next, prev},
+ * not a pointer */
+static void
+kdb_dump_linux_modules(domid_t domid, ulong mods, uint nxtoffs, uint nmoffs, 
+                       uint coreoffs)
+{
+    const int bufsz = 56;
+    char buf[bufsz];
+    uint64_t addr, addrval, *nxtptr, *modptr;
+    uint i, num = 8;
+
+    if (kdb_guest_bitness(domid) == 32)
+        num = 4;
+
+    /* first read modules{}.next ptr */
+    if (kdb_read_mem(mods, (kdbbyt_t *)&nxtptr, num, domid) != num) {
+        kdbp("ERROR: Could not read next at mod:%p\n", (void *)mods);
+        return;
+    }
+
+    KDBGP("mods:%p nxtptr:%p nmoffs:%x coreoffs:%x\n", (void *)mods, nxtptr,
+          nmoffs, coreoffs);
+
+    while ((uint64_t)nxtptr != mods) {
+
+        modptr = (uint64_t *) ((ulong)nxtptr - nxtoffs);
+
+        addr = (ulong)modptr + coreoffs;
+        if (kdb_read_mem(addr, (kdbbyt_t *)&addrval, num, domid) != num) {
+            kdbp("ERROR: Could not read mod addr at :%p\n", (void *)addr);
+            return;
+        }
+
+        KDBGP("modptr:%p addr:%p\n", modptr, (void *)addr);
+        addr = (ulong)modptr + nmoffs;
+        i=0;
+        do {
+            if (kdb_read_mem(addr, (kdbbyt_t *)&buf[i], 1, domid) != 1) {
+                kdbp("ERROR:Could not read name ch at addr:%p\n", (void *)addr);
+                return;
+            }
+            addr++;
+        } while (buf[i] && ++i < bufsz - 1);
+        buf[bufsz-1] = '\0';
+
+        kdbp("%016lx %016lx %s\n", (ulong)modptr, addrval, buf);
+
+        if (kdb_read_mem((ulong)nxtptr, (kdbbyt_t *)&nxtptr, num, domid)!=num) {
+            kdbp("ERROR: Could not read next at mod:%p\n", (void *)mods);
+            return;
+        }
+        KDBGP("nxtptr:%p addr:%p\n", nxtptr, (void *)addr);
+    } 
+}
+
+/* Display modules loaded in linux guest */
+static kdb_cpu_cmd_t
+kdb_usgf_mod(void)
+{
+   kdbp("mod domid &modules next-offs name-offs module_core-offs\n");
+   kdbp("\twhere next-offs: &((struct module *)0)->list.next\n");
+   kdbp("\tname-offs: &((struct module *)0)->name etc..\n");
+   kdbp("\tDisplays all loaded modules in the linux guest\n");
+   kdbp("\tEg: mod 0 ffffffff80302780 8 0x18 0x178\n");
+
+   return KDB_CPU_MAIN_KDB;
+}
+
+static kdb_cpu_cmd_t
+kdb_cmdf_mod(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    ulong mods, nxtoffs, nmoffs, coreoffs;
+    domid_t domid;
+
+    if (argc < 6) {
+        return kdb_usgf_mod();
+    }
+    if (kdb_str2domid(argv[1], &domid, 1) &&
+        kdb_str2ulong(argv[2], &mods)     &&
+        kdb_str2ulong(argv[3], &nxtoffs)  &&
+        kdb_str2ulong(argv[4], &nmoffs)   &&
+        kdb_str2ulong(argv[5], &coreoffs)) {
+
+        kdbp("modptr address name\n");
+        kdb_dump_linux_modules(domid, mods, nxtoffs, nmoffs, coreoffs);
+    } else
+        kdb_usgf_mod();
+    return KDB_CPU_MAIN_KDB;
+}
+
+/* toggle kdb debug trace level */
+static kdb_cpu_cmd_t
+kdb_usgf_kdbdbg(void)
+{
+    kdbp("kdbdbg : trace info to debug kdb\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t
+kdb_cmdf_kdbdbg(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    if (argc > 1 && *argv[1] == '?')
+        return kdb_usgf_kdbdbg();
+
+    kdbdbg = (kdbdbg==3) ? 0 : (kdbdbg+1);
+    kdbp("kdbdbg set to:%d\n", kdbdbg);
+    return KDB_CPU_MAIN_KDB;
+}
+
+static kdb_cpu_cmd_t
+kdb_usgf_reboot(void)
+{
+    kdbp("reboot: reboot system\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t
+kdb_cmdf_reboot(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    if (argc > 1 && *argv[1] == '?')
+        return kdb_usgf_reboot();
+
+    machine_restart(500);
+    return KDB_CPU_MAIN_KDB;              /* not reached */
+}
+
+
+static kdb_cpu_cmd_t
+kdb_usgf_trcon(void)
+{
+    kdbp("trcon: turn user added kdb tracing on\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t
+kdb_cmdf_trcon(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    if (argc > 1 && *argv[1] == '?')
+        return kdb_usgf_trcon();
+
+    kdb_trcon = 1;
+    kdbp("kdb tracing is now on\n");
+    return KDB_CPU_MAIN_KDB;
+}
+
+static kdb_cpu_cmd_t
+kdb_usgf_trcoff(void)
+{
+    kdbp("trcoff: turn user added kdb tracing off\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t
+kdb_cmdf_trcoff(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    if (argc > 1 && *argv[1] == '?')
+        return kdb_usgf_trcoff();
+
+    kdb_trcon = 0;
+    kdbp("kdb tracing is now off\n");
+    return KDB_CPU_MAIN_KDB;
+}
+
+static kdb_cpu_cmd_t
+kdb_usgf_trcz(void)
+{
+    kdbp("trcz : zero entire trace buffer\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t
+kdb_cmdf_trcz(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    if (argc > 1 && *argv[1] == '?')
+        return kdb_usgf_trcz();
+
+    kdb_trczero();
+    return KDB_CPU_MAIN_KDB;
+}
+
+static kdb_cpu_cmd_t
+kdb_usgf_trcp(void)
+{
+    kdbp("trcp : give hints to dump trace buffer via dw/dd command\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t
+kdb_cmdf_trcp(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    if (argc > 1 && *argv[1] == '?')
+        return kdb_usgf_trcp();
+
+    kdb_trcp();
+    return KDB_CPU_MAIN_KDB;
+}
+
+/* print some basic info, constants, etc.. */
+static kdb_cpu_cmd_t
+kdb_usgf_info(void)
+{
+    kdbp("info : display basic info, constants, etc..\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t
+kdb_cmdf_info(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    struct domain *dp;
+    struct cpuinfo_x86 *bcdp;
+
+    if (argc > 1 && *argv[1] == '?')
+        return kdb_usgf_info();
+
+    kdbp("Version: %d.%d.%s (%s@%s) %s\n", xen_major_version(), 
+         xen_minor_version(), xen_extra_version(), xen_compile_by(), 
+         xen_compile_domain(), xen_compile_date());
+    kdbp("__XEN_LATEST_INTERFACE_VERSION__ : 0x%x\n", 
+         __XEN_LATEST_INTERFACE_VERSION__);
+    kdbp("__XEN_INTERFACE_VERSION__: 0x%x\n", __XEN_INTERFACE_VERSION__);
+
+    bcdp = &boot_cpu_data;
+    kdbp("CPU: (all decimal)");
+    if (bcdp->x86_vendor == X86_VENDOR_AMD)
+        kdbp(" AMD");
+    else
+        kdbp(" INTEL");
+    kdbp(" family:%d model:%d\n", bcdp->x86, bcdp->x86_model);
+    kdbp("     vendor_id:%16s model_id:%64s\n", bcdp->x86_vendor_id,
+         bcdp->x86_model_id);
+    kdbp("     cpuidlvl:%d cache:sz:%d align:%d\n", bcdp->cpuid_level,
+         bcdp->x86_cache_size, bcdp->x86_cache_alignment);
+    kdbp("     power:%d cores: max:%d booted:%d siblings:%d apicid:%d\n",
+         bcdp->x86_power, bcdp->x86_max_cores, bcdp->booted_cores,
+         bcdp->x86_num_siblings, bcdp->apicid);
+    kdbp("     ");
+    if (cpu_has_apic)
+        kdbp("_apic");
+    if (cpu_has_sep)
+        kdbp("|_sep");
+    if (cpu_has_xmm3)
+        kdbp("|_xmm3");
+    if (cpu_has_ht)
+        kdbp("|_ht");
+    if (cpu_has_nx)
+        kdbp("|_nx");
+    if (cpu_has_clflush)
+        kdbp("|_clflush");
+    if (cpu_has_page1gb)
+        kdbp("|_page1gb");
+    if (cpu_has_ffxsr)
+        kdbp("|_ffxsr");
+    if (cpu_has_x2apic)
+        kdbp("|_x2apic");
+    kdbp("\n\n");
+    kdbp("CC:");
+#if defined(CONFIG_X86_64)
+        kdbp(" CONFIG_X86_64");
+#endif
+#if defined(CONFIG_COMPAT)
+        kdbp(" CONFIG_COMPAT");
+#endif
+#if defined(CONFIG_PAGING_ASSISTANCE)
+        kdbp(" CONFIG_PAGING_ASSISTANCE");
+#endif
+    kdbp("\n");
+    kdbp("cpu has following features:\n");
+    kdbp("  %s\n", boot_cpu_has(X86_FEATURE_TSC_RELIABLE) ? 
+         "X86_FEATURE_TSC_RELIABLE" : "");
+    kdbp("  %s\n", 
+         boot_cpu_has(X86_FEATURE_CONSTANT_TSC)? "X86_FEATURE_CONSTANT_TSC":"");
+    kdbp("  %s\n", 
+         boot_cpu_has(X86_FEATURE_NONSTOP_TSC) ? "X86_FEATURE_NONSTOP_TSC" :"");
+    kdbp("  %s\n", 
+         boot_cpu_has(X86_FEATURE_RDTSCP) ?  "X86_FEATURE_RDTSCP" : "");
+    kdbp("  %s\n", boot_cpu_has(X86_FEATURE_FXSR) ?  "X86_FEATURE_FXSR" : "");
+    kdbp("  %s\n", boot_cpu_has(X86_FEATURE_CPUID_FAULTING) ?  
+         "X86_FEATURE_CPUID_FAULTING" : "");
+    kdbp("  %s\n", 
+         boot_cpu_has(X86_FEATURE_PAGE1GB) ?  "X86_FEATURE_PAGE1GB" : "");
+    kdbp("  %s\n", boot_cpu_has(X86_FEATURE_MWAIT) ?  "X86_FEATURE_MWAIT" : "");
+    kdbp("  %s\n", boot_cpu_has(X86_FEATURE_X2APIC) ?  "X86_FEATURE_X2APIC":"");
+    kdbp("  %s\n", boot_cpu_has(X86_FEATURE_XSAVE) ?  "X86_FEATURE_XSAVE":"");
+    kdbp("\n");
+
+    kdbp("MAX_VIRT_CPUS:$%d  MAX_HVM_VCPUS:$%d\n", MAX_VIRT_CPUS,MAX_HVM_VCPUS);
+    kdbp("NR_EVENT_CHANNELS: $%d\n", NR_EVENT_CHANNELS);
+    kdbp("NR_EVTCHN_BUCKETS: $%d\n", NR_EVTCHN_BUCKETS);
+
+    kdbp("\nDomains and their vcpus:\n");
+    for_each_domain(dp) {
+        struct vcpu *vp;
+        int printed=0;
+        kdbp("  Domain: {id:%d 0x%x   ptr:%p%s}  VCPUs:\n", 
+             dp->domain_id, dp->domain_id, dp, dp->is_dying ? " DYING":"");
+        for(vp=dp->vcpu[0]; vp; vp = vp->next_in_list) {
+            kdbp("  {id:%d p:%p runstate:%d}", vp->vcpu_id, vp, 
+                 vp->runstate.state);
+            if (++printed % 2 == 0) kdbp("\n");
+        }
+        kdbp("\n");
+    }
+    return KDB_CPU_MAIN_KDB;
+}
+
+static kdb_cpu_cmd_t
+kdb_usgf_cur(void)
+{
+    kdbp("cur : display current domid and vcpu\n");
+    return KDB_CPU_MAIN_KDB;
+}
+
+/* Checking guest_mode() is not feasible here: if dom0 makes a hypercall that
+ * hits a breakpoint in xen, guest_mode() reports xen while the vcpu is still
+ * dom0's. Hence just look at current. */
+static kdb_cpu_cmd_t
+kdb_cmdf_cur(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    domid_t id = current->domain->domain_id;
+
+    if (argc > 1 && *argv[1] == '?')
+        return kdb_usgf_cur();
+
+    kdbp("domid: %d{%p} %s vcpu:%d {%p} ", id, current->domain,
+         (id==DOMID_IDLE) ? "(IDLE)" : "", current->vcpu_id, current);
+
+    if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL) {
+        u64 addr = -1;
+        __vmptrst(&addr);
+        kdbp(" VMCS:"KDBFL, addr);
+    }
+    kdbp("\n");
+    return KDB_CPU_MAIN_KDB;
+}
+
+/* stub to quickly and easily add a new command */
+static kdb_cpu_cmd_t
+kdb_usgf_usr1(void)
+{
+    kdbp("usr1: add any arbitrary cmd using this in kdb_cmds.c\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t
+kdb_cmdf_usr1(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    return KDB_CPU_MAIN_KDB;
+}
+
+static kdb_cpu_cmd_t
+kdb_usgf_h(void)
+{
+    kdbp("h: display all commands. See kdb/README for more info\n");
+    return KDB_CPU_MAIN_KDB;
+}
+static kdb_cpu_cmd_t
+kdb_cmdf_h(int argc, const char **argv, struct cpu_user_regs *regs)
+{
+    kdbtab_t *tbp;
+
+    kdbp(" - ccpu is current cpu \n");
+    kdbp(" - following are always in decimal:\n");
+    kdbp("     vcpu num, cpu num, domid\n");
+    kdbp(" - otherwise, almost all numbers are in hex (0x not needed)\n");
+    kdbp(" - output: $17 means decimal 17\n");
+    kdbp(" - domid 7fff($32767) refers to hypervisor\n");
+    kdbp(" - if no domid before function name, then it's hypervisor\n");
+    kdbp(" - earlykdb in xen grub line to break into kdb during boot\n");
+    kdbp(" - command ? will show the command usage\n");
+    kdbp("\n");
+
+    for(tbp=kdb_cmd_tbl; tbp->kdb_cmd_usgf; tbp++)
+        (*tbp->kdb_cmd_usgf)();
+    return KDB_CPU_MAIN_KDB;
+}
+
+/* ===================== cmd table initialization ========================== */
+void __init
+kdb_init_cmdtab(void)
+{
+  static kdbtab_t _kdb_cmd_table[] = {
+
+    {"info", kdb_cmdf_info, kdb_usgf_info, 1, KDB_REPEAT_NONE},
+    {"cur",  kdb_cmdf_cur, kdb_usgf_cur, 1, KDB_REPEAT_NONE},
+
+    {"f",  kdb_cmdf_f,  kdb_usgf_f,  1, KDB_REPEAT_NONE},
+    {"fg", kdb_cmdf_fg, kdb_usgf_fg, 1, KDB_REPEAT_NONE},
+
+    {"dw",  kdb_cmdf_dw,  kdb_usgf_dw,  1, KDB_REPEAT_NO_ARGS},
+    {"dd",  kdb_cmdf_dd,  kdb_usgf_dd,  1, KDB_REPEAT_NO_ARGS},
+    {"dwm", kdb_cmdf_dwm, kdb_usgf_dwm, 1, KDB_REPEAT_NO_ARGS},
+    {"ddm", kdb_cmdf_ddm, kdb_usgf_ddm, 1, KDB_REPEAT_NO_ARGS},
+    {"dr",  kdb_cmdf_dr,  kdb_usgf_dr,  1, KDB_REPEAT_NONE},
+    {"drg", kdb_cmdf_drg, kdb_usgf_drg, 1, KDB_REPEAT_NONE},
+
+    {"dis", kdb_cmdf_dis,  kdb_usgf_dis,  1, KDB_REPEAT_NO_ARGS},
+    {"dism",kdb_cmdf_dism, kdb_usgf_dism, 1, KDB_REPEAT_NO_ARGS},
+
+    {"mw", kdb_cmdf_mw, kdb_usgf_mw, 1, KDB_REPEAT_NONE},
+    {"md", kdb_cmdf_md, kdb_usgf_md, 1, KDB_REPEAT_NONE},
+    {"mr", kdb_cmdf_mr, kdb_usgf_mr, 1, KDB_REPEAT_NONE},
+
+    {"bc", kdb_cmdf_bc, kdb_usgf_bc, 0, KDB_REPEAT_NONE},
+    {"bp", kdb_cmdf_bp, kdb_usgf_bp, 1, KDB_REPEAT_NONE},
+    {"btp", kdb_cmdf_btp, kdb_usgf_btp, 1, KDB_REPEAT_NONE},
+
+    {"wp", kdb_cmdf_wp, kdb_usgf_wp, 1, KDB_REPEAT_NONE},
+    {"wc", kdb_cmdf_wc, kdb_usgf_wc, 0, KDB_REPEAT_NONE},
+
+    {"ni", kdb_cmdf_ni, kdb_usgf_ni, 0, KDB_REPEAT_NO_ARGS},
+    {"ss", kdb_cmdf_ss, kdb_usgf_ss, 1, KDB_REPEAT_NO_ARGS},
+    {"ssb",kdb_cmdf_ssb,kdb_usgf_ssb,0, KDB_REPEAT_NO_ARGS},
+    {"go", kdb_cmdf_go, kdb_usgf_go, 0, KDB_REPEAT_NONE},
+
+    {"cpu",kdb_cmdf_cpu, kdb_usgf_cpu, 1, KDB_REPEAT_NONE},
+    {"nmi",kdb_cmdf_nmi, kdb_usgf_nmi, 1, KDB_REPEAT_NONE},
+    {"percpu",kdb_cmdf_percpu, kdb_usgf_percpu, 1, KDB_REPEAT_NONE},
+
+    {"sym",  kdb_cmdf_sym,   kdb_usgf_sym,   1, KDB_REPEAT_NONE},
+    {"mod",  kdb_cmdf_mod,   kdb_usgf_mod,   1, KDB_REPEAT_NONE},
+
+    {"vcpuh",kdb_cmdf_vcpuh, kdb_usgf_vcpuh, 1, KDB_REPEAT_NONE},
+    {"vcpu", kdb_cmdf_vcpu,  kdb_usgf_vcpu,  1, KDB_REPEAT_NONE},
+    {"dom",  kdb_cmdf_dom,   kdb_usgf_dom,   1, KDB_REPEAT_NONE},
+
+    {"sched", kdb_cmdf_sched, kdb_usgf_sched, 1, KDB_REPEAT_NONE},
+    {"mmu",   kdb_cmdf_mmu,   kdb_usgf_mmu,   1, KDB_REPEAT_NONE},
+    {"p2m",   kdb_cmdf_p2m,   kdb_usgf_p2m,   1, KDB_REPEAT_NONE},
+    {"m2p",   kdb_cmdf_m2p,   kdb_usgf_m2p,   1, KDB_REPEAT_NONE},
+    {"dpage", kdb_cmdf_dpage, kdb_usgf_dpage, 1, KDB_REPEAT_NONE},
+    {"dmsr",  kdb_cmdf_dmsr,  kdb_usgf_dmsr, 1, KDB_REPEAT_NONE},
+    {"cpuid",  kdb_cmdf_cpuid,  kdb_usgf_cpuid, 1, KDB_REPEAT_NONE},
+    {"wept",  kdb_cmdf_wept,  kdb_usgf_wept, 1, KDB_REPEAT_NONE},
+
+    {"dtrq", kdb_cmdf_dtrq,  kdb_usgf_dtrq, 1, KDB_REPEAT_NONE},
+    {"didt", kdb_cmdf_didt,  kdb_usgf_didt, 1, KDB_REPEAT_NONE},
+    {"dgdt", kdb_cmdf_dgdt,  kdb_usgf_dgdt, 1, KDB_REPEAT_NONE},
+    {"dirq", kdb_cmdf_dirq,  kdb_usgf_dirq, 1, KDB_REPEAT_NONE},
+    {"dvit", kdb_cmdf_dvit,  kdb_usgf_dvit, 1, KDB_REPEAT_NONE},
+    {"dvmc", kdb_cmdf_dvmc,  kdb_usgf_dvmc, 1, KDB_REPEAT_NONE},
+    {"mmio", kdb_cmdf_mmio,  kdb_usgf_mmio, 1, KDB_REPEAT_NONE},
+
+    /* tracing related commands */
+    {"trcon", kdb_cmdf_trcon,  kdb_usgf_trcon,  0, KDB_REPEAT_NONE},
+    {"trcoff",kdb_cmdf_trcoff, kdb_usgf_trcoff, 0, KDB_REPEAT_NONE},
+    {"trcz",  kdb_cmdf_trcz,   kdb_usgf_trcz,   0, KDB_REPEAT_NONE},
+    {"trcp",  kdb_cmdf_trcp,   kdb_usgf_trcp,   1, KDB_REPEAT_NONE},
+
+    {"usr1",  kdb_cmdf_usr1,   kdb_usgf_usr1,   1, KDB_REPEAT_NONE},
+    {"kdbf",  kdb_cmdf_kdbf,   kdb_usgf_kdbf,   1, KDB_REPEAT_NONE},
+    {"kdbdbg",kdb_cmdf_kdbdbg, kdb_usgf_kdbdbg, 1, KDB_REPEAT_NONE},
+    {"reboot",kdb_cmdf_reboot, kdb_usgf_reboot, 1, KDB_REPEAT_NONE},
+    {"h",     kdb_cmdf_h,      kdb_usgf_h,      1, KDB_REPEAT_NONE},
+
+    {"", NULL, NULL, 0, 0},
+  };
+    kdb_cmd_tbl = _kdb_cmd_table;
+    return;
+}
diff -r 32034d1914a6 xen/kdb/kdb_io.c
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/xen/kdb/kdb_io.c	Wed Aug 29 14:39:57 2012 -0700
@@ -0,0 +1,174 @@
+/*
+ * Copyright (C) 2009, Mukesh Rathor, Oracle Corp.  All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public
+ * License v2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public
+ * License along with this program; if not, write to the
+ * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
+ * Boston, MA 021110-1307, USA.
+ */
+#include "include/kdbinc.h"
+
+#define K_BACKSPACE  0x8                   /* ctrl-H */
+#define K_BACKSPACE1 0x7f                  /* ctrl-? */
+#define K_UNDERSCORE 0x5f
+#define K_CMD_BUFSZ  160
+#define K_CMD_MAXI   (K_CMD_BUFSZ - 1)     /* max index in buffer */
+
+#if 0        /* make a history array some day */
+#define K_UP_ARROW                         /* sequence : 1b 5b 41 ie, '\e[A' */
+#define K_DN_ARROW                         /* sequence : 1b 5b 42 ie, '\e[B' */
+#define K_NUM_HIST   32
+static int cursor;
+static char cmds_a[K_NUM_HIST][K_CMD_BUFSZ];
+#endif
+
+static char cmds_a[K_CMD_BUFSZ];
+
+
+static int
+kdb_key_valid(int key)
+{
+    /* note: isspace() is more than ' ', hence we don't use it here */
+    if (isalnum(key) || key == ' ' || key == K_BACKSPACE || key == '\n' ||
+        key == '?' || key == K_UNDERSCORE || key == '=' || key == '!')
+            return 1;
+    return 0;
+}
+
+/* display kdb prompt and read command from the console 
+ * RETURNS: a '\n' terminated command buffer */
+char *
+kdb_get_cmdline(char *prompt)
+{
+    #define K_BELL     0x7
+    #define K_CTRL_C   0x3
+
+    int key, i=0;
+
+    kdbp("%s", prompt);
+    memset(cmds_a, 0, K_CMD_BUFSZ);
+    cmds_a[K_CMD_BUFSZ-1] = '\n';
+
+    do {
+        key = console_getc();
+        if (key == '\r') 
+            key = '\n';
+        if (key == K_BACKSPACE1) 
+            key = K_BACKSPACE;
+
+        if (key == K_CTRL_C || (i==K_CMD_MAXI && key != '\n')) {
+            console_putc('\n');
+            if (i >= K_CMD_MAXI) {
+                kdbp("KDB: cmd buffer overflow\n");
+                console_putc(K_BELL);
+            }
+            memset(cmds_a, 0, K_CMD_BUFSZ);
+            i = 0;
+            kdbp("%s", prompt);
+            continue;
+        }
+        if (!kdb_key_valid(key)) {
+            console_putc(K_BELL);
+            continue;
+        }
+        if (key == K_BACKSPACE) {
+            if (i == 0) {
+                console_putc(K_BELL);
+                continue;
+            }
+            cmds_a[--i] = '\0';
+            console_putc(K_BACKSPACE);
+            console_putc(' ');    /* erase char; putc(key) below backs up */
+        } else
+            cmds_a[i++] = key;
+
+        console_putc(key);
+
+    } while (key != '\n');
+
+    return cmds_a;
+}
+
+/*
+ * printk() takes a lock; an NMI could come in after that and another cpu may
+ * spin forever. Also, the console lock is forcibly unlocked, and panics have
+ * been seen on an 8-way box because of it. Hence no printk() calls here.
+ */
+static volatile int kdbp_gate = 0;
+void
+kdbp(const char *fmt, ...)
+{
+    static char buf[1024];
+    va_list args;
+    char *p;
+    int i=0;
+
+    while ((__cmpxchg(&kdbp_gate, 0, 1, sizeof(kdbp_gate)) != 0) && i++ < 1000)
+        mdelay(10);
+
+    va_start(args, fmt);
+    (void)vsnprintf(buf, sizeof(buf), fmt, args);
+    va_end(args);
+
+    for (p=buf; *p != '\0'; p++)
+        console_putc(*p);
+    kdbp_gate = 0;
+}
+
+
+/*
+ * copy/read machine memory. 
+ * RETURNS: number of bytes copied 
+ */
+int
+kdb_read_mmem(kdbma_t maddr, kdbbyt_t *dbuf, int len)
+{
+    ulong remain, orig=len;
+
+    while (len > 0) {
+        ulong pagecnt = min_t(long, PAGE_SIZE-(maddr&~PAGE_MASK), len);
+        char *va = map_domain_page(maddr >> PAGE_SHIFT);
+
+        va = va + (maddr & (PAGE_SIZE-1));        /* add page offset */
+        remain = __copy_from_user(dbuf, (void *)va, pagecnt);
+        KDBGP1("maddr:%lx va:%p len:%x pagecnt:%lx rem:%lx\n",
+               maddr, va, len, pagecnt, remain);
+        unmap_domain_page(va);
+        len = len  - (pagecnt - remain);
+        if (remain != 0)
+            break;
+        maddr += pagecnt;
+        dbuf += pagecnt;
+    }
+    return orig - len;
+}
+
+
+/*
+ * copy/read guest or hypervisor memory. (domid == DOMID_IDLE) => hyp
+ * RETURNS: number of bytes copied 
+ */
+int
+kdb_read_mem(kdbva_t saddr, kdbbyt_t *dbuf, int len, domid_t domid)
+{
+    return (len - dbg_rw_mem(saddr, dbuf, len, domid, 0, 0));
+}
+
+/*
+ * write guest or hypervisor memory. (domid == DOMID_IDLE) => hyp
+ * RETURNS: number of bytes written
+ */
+int
+kdb_write_mem(kdbva_t daddr, kdbbyt_t *sbuf, int len, domid_t domid)
+{
+    return (len - dbg_rw_mem(daddr, sbuf, len, domid, 1, 0));
+}
diff -r 32034d1914a6 xen/kdb/kdbmain.c
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/xen/kdb/kdbmain.c	Wed Aug 29 14:39:57 2012 -0700
@@ -0,0 +1,739 @@
+/*
+ * Copyright (C) 2009, Mukesh Rathor, Oracle Corp.  All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public
+ * License v2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public
+ * License along with this program; if not, write to the
+ * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
+ * Boston, MA 021110-1307, USA.
+ */
+
+#include "include/kdbinc.h"
+
+static int kdbmain(kdb_reason_t, struct cpu_user_regs *);
+static int kdbmain_fatal(struct cpu_user_regs *, int);
+static const char *kdb_gettrapname(int);
+
+/* ======================== GLOBAL VARIABLES =============================== */
+/* All global variables used by KDB must be defined here only. Module specific
+ * static variables must be declared in respective modules.
+ */
+kdbtab_t *kdb_cmd_tbl;
+char kdb_prompt[32];
+
+volatile kdb_cpu_cmd_t kdb_cpu_cmd[NR_CPUS];
+cpumask_t kdb_cpu_traps;           /* bit per cpu to tell which cpus hit int3 */
+
+#ifndef NDEBUG
+    #error KDB is not supported on debug xen. Turn debug off
+#endif
+
+volatile int kdb_init_cpu = -1;           /* initial kdb cpu */
+volatile int kdb_session_begun = 0;       /* active kdb session? */
+volatile int kdb_enabled = 1;             /* kdb enabled currently? */
+volatile int kdb_sys_crash = 0;           /* are we in crashed state? */
+volatile int kdbdbg = 0;                  /* to debug kdb itself */
+
+static volatile int kdb_trap_immed_reason = 0;   /* reason for immed trap */
+
+static cpumask_t kdb_fatal_cpumask;       /* which cpus in fatal path */
+
+/* return index of first bit set in val. if val is 0, retval is undefined */
+static inline unsigned int kdb_firstbit(unsigned long val)
+{
+    __asm__ ( "bsf %1,%0" : "=r" (val) : "r" (val), "0" (BITS_PER_LONG) );
+    return (unsigned int)val;
+}
+
+static void 
+kdb_dbg_prnt_ctrps(char *label, int ccpu)
+{
+    int i;
+    if (!kdbdbg)
+        return;
+
+    if (label && *label)
+        kdbp("%s ", label);
+    if (ccpu != -1)
+        kdbp("ccpu:%d ", ccpu);
+    kdbp("cputrps:");
+    for (i=sizeof(kdb_cpu_traps)/sizeof(kdb_cpu_traps.bits[0]) - 1; i >=0; i--)
+        kdbp(" %lx", kdb_cpu_traps.bits[i]);
+    kdbp("\n");
+}
+
+/* 
+ * Hold this cpu. Don't disable interrupts until all CPUs are in kdb, to
+ * avoid IPI deadlock.
+ */
+static void
+kdb_hold_this_cpu(int ccpu, struct cpu_user_regs *regs)
+{
+    KDBGP("ccpu:%d hold. cmd:%x\n", ccpu, kdb_cpu_cmd[ccpu]);
+    do {
+        for(; kdb_cpu_cmd[ccpu] == KDB_CPU_PAUSE; cpu_relax());
+        KDBGP("ccpu:%d hold. cmd:%x\n", kdb_cpu_cmd[ccpu]);
+
+        if (kdb_cpu_cmd[ccpu] == KDB_CPU_DISABLE) {
+            local_irq_disable();
+            kdb_cpu_cmd[ccpu] = KDB_CPU_PAUSE;
+        }
+        if (kdb_cpu_cmd[ccpu] == KDB_CPU_DO_VMEXIT) {
+            kdb_curr_cpu_flush_vmcs();
+            kdb_cpu_cmd[ccpu] = KDB_CPU_PAUSE;
+        }
+        if (kdb_cpu_cmd[ccpu] == KDB_CPU_SHOWPC) {
+            kdbp("[%d]", ccpu);
+            kdb_display_pc(regs);
+            kdb_cpu_cmd[ccpu] = KDB_CPU_PAUSE;
+        }
+    } while (kdb_cpu_cmd[ccpu] == KDB_CPU_PAUSE);     /* No goto, eh! */
+    KDBGP1("un hold: ccpu:%d cmd:%d\n", ccpu, kdb_cpu_cmd[ccpu]);
+}
+
+/*
+ * Pause this cpu while one CPU does main kdb processing. If that CPU does
+ * a "cpu switch" to this cpu, this cpu will become the main kdb cpu. If the
+ * user next does single step of some sort, this function will be exited,
+ * and this cpu will come back into kdb via kdb_handle_trap_entry function.
+ */
+static void 
+kdb_pause_this_cpu(struct cpu_user_regs *regs, void *unused)
+{
+    kdbmain(KDB_REASON_PAUSE_IPI, regs);
+}
+
+/* pause other cpus via an IPI. Note, disabled CPUs can't receive IPIs until
+ * enabled */
+static void
+kdb_smp_pause_cpus(void)
+{
+    int cpu, wait_count = 0;
+    int ccpu = smp_processor_id();      /* current cpu */
+    cpumask_t cpumask = cpu_online_map;
+
+    cpumask_clear_cpu(smp_processor_id(), &cpumask);
+    for_each_cpu(cpu, &cpumask)
+        if (kdb_cpu_cmd[cpu] != KDB_CPU_INVAL) {
+            kdbp("KDB: won't pause cpu:%d, cmd[cpu]=%d\n",cpu,kdb_cpu_cmd[cpu]);
+            cpumask_clear_cpu(cpu, &cpumask);
+        }
+    KDBGP("ccpu:%d will pause cpus. mask:0x%lx\n", ccpu, cpumask.bits[0]);
+#if XEN_SUBVERSION > 4 || XEN_VERSION == 4              /* xen 3.5.x or above */
+    on_selected_cpus(&cpumask, (void (*)(void *))kdb_pause_this_cpu, 
+                     "XENKDB", 0);
+#else
+    on_selected_cpus(cpumask, (void (*)(void *))kdb_pause_this_cpu, 
+                     "XENKDB", 0, 0);
+#endif
+    mdelay(300);                     /* wait a bit for other CPUs to stop */
+    while(wait_count++ < 10) {
+        int bummer = 0;
+        for_each_cpu(cpu, &cpumask)
+            if (kdb_cpu_cmd[cpu] != KDB_CPU_PAUSE)
+                bummer = 1;
+        if (!bummer)
+            break;
+        kdbp("ccpu:%d trying to stop other cpus...\n", ccpu);
+        mdelay(100);  /* wait 100 ms */
+    };
+    for_each_cpu(cpu, &cpumask)          /* now check who is with us */
+        if (kdb_cpu_cmd[cpu] != KDB_CPU_PAUSE)
+            kdbp("Bummer cpu %d not paused. ccpu:%d\n", cpu,ccpu);
+        else {
+            kdb_cpu_cmd[cpu] = KDB_CPU_DISABLE;  /* tell it to disable ints */
+            while (kdb_cpu_cmd[cpu] != KDB_CPU_PAUSE);
+        }
+}
+
+/* 
+ * Do once per kdb session:  A kdb session lasts from 
+ *    keyboard/HWBP/SWBP till KDB_CPU_INSTALL_BP is done. Within a session,
+ *    user may do several cpu switches, single step, next instr,  etc..
+ *
+ * DO: 1. pause other cpus if they are not already. they would already be 
+ *        if we are in single step mode
+ *     2. watchdog_disable() 
+ *     3. uninstall all sw breakpoints so that user doesn't see them
+ */
+static void
+kdb_begin_session(void)
+{
+    if (!kdb_session_begun) {
+        kdb_session_begun = 1;
+        kdb_smp_pause_cpus();
+        local_irq_disable();
+        watchdog_disable();
+        kdb_uninstall_all_swbp();
+    }
+}
+
+static void
+kdb_smp_unpause_cpus(int ccpu)
+{
+    int cpu;
+
+    int wait_count = 0;
+    cpumask_t cpumask = cpu_online_map;
+
+    cpumask_clear_cpu(smp_processor_id(), &cpumask);
+
+    KDBGP("kdb_smp_unpause_other_cpus(). ccpu:%d\n", ccpu);
+    for_each_cpu(cpu, &cpumask)
+        kdb_cpu_cmd[cpu] = KDB_CPU_QUIT;
+
+    while(wait_count++ < 10) {
+        int bummer = 0;
+        for_each_cpu(cpu, &cpumask)
+            if (kdb_cpu_cmd[cpu] != KDB_CPU_INVAL)
+                bummer = 1;
+        if (!bummer)
+            break;
+        mdelay(90);  /* wait 90 ms, 50 too short on large systems */
+    };
+    /* now make sure they are all in there */
+    for_each_cpu(cpu, &cpumask)
+        if (kdb_cpu_cmd[cpu] != KDB_CPU_INVAL)
+            kdbp("KDB: cpu %d still paused (cmd==%d). ccpu:%d\n",
+                 cpu, kdb_cpu_cmd[cpu], ccpu);
+}
+
+/*
+ * End of KDB session. 
+ *   This is called at the very end. In case of multiple cpus hitting BPs
+ *   and sitting in trap handlers, the last cpu to exit will call this.
+ *   - install all sw breakpoints, and purge deleted ones from the table.
+ *   - clear TF here also in case go is entered on a different cpu after switch
+ */
+static void
+kdb_end_session(int ccpu, struct cpu_user_regs *regs)
+{
+    ASSERT(!cpumask_empty(&kdb_cpu_traps));
+    ASSERT(kdb_session_begun);
+    kdb_install_all_swbp();
+    kdb_flush_swbp_table();
+    kdb_install_watchpoints();
+
+    regs->eflags &= ~X86_EFLAGS_TF;
+    kdb_cpu_cmd[ccpu] = KDB_CPU_INVAL;
+    kdb_time_resume(1);
+    kdb_session_begun = 0;      /* before unpause for kdb_install_watchpoints */
+    kdb_smp_unpause_cpus(ccpu);
+    watchdog_enable();
+    KDBGP("end_session:ccpu:%d\n", ccpu);
+}
+
+/* 
+ * check if we entered kdb because of DB trap. If yes, then check if
+ * we caused it or someone else.
+ * RETURNS: 0 : not one of ours. hypervisor must handle it. 
+ *          1 : #DB for delayed sw bp install. 
+ *          2 : this cpu must stay in kdb.
+ */
+static noinline int
+kdb_check_dbtrap(kdb_reason_t *reasp, int ss_mode, struct cpu_user_regs *regs) 
+{
+    int rc = 2;
+    int ccpu = smp_processor_id();
+
+    /* DB excp caused by hw breakpoint or the TF flag. The TF flag is set
+     * by us for ss mode or to install breakpoints. In ss mode, none of the
+     * breakpoints are installed. Check to make sure we intended BP INSTALL
+     * so we don't do it on a spurious DB trap.
+     * check for kdb_cpu_traps here also, because each cpu sitting on a trap
+     * must execute the instruction without the BP before passing control
+     * to next cpu in kdb_cpu_traps.
+     */
+    if (*reasp == KDB_REASON_DBEXCP && !ss_mode) {
+        if (kdb_cpu_cmd[ccpu] == KDB_CPU_INSTALL_BP) {
+            if (!cpumask_empty(&kdb_cpu_traps)) {
+                int a_trap_cpu = cpumask_first(&kdb_cpu_traps);
+                KDBGP("ccpu:%d trapcpu:%d\n", ccpu, a_trap_cpu);
+                kdb_cpu_cmd[a_trap_cpu] = KDB_CPU_QUIT;
+                *reasp = KDB_REASON_PAUSE_IPI;
+                regs->eflags &= ~X86_EFLAGS_TF;  /* hvm: exit handler ss = 0 */
+                kdb_init_cpu = -1;
+            } else {
+                kdb_end_session(ccpu, regs);
+                rc = 1;
+            }
+        } else if (! kdb_check_watchpoints(regs)) {
+            rc = 0;                        /* hyp must handle it */
+        }
+    }
+    return rc;
+}
+
+/* 
+ * Misc processing on kdb entry like displaying PC, adjust IP for sw bp.... 
+ */
+static void
+kdb_main_entry_misc(kdb_reason_t reason, struct cpu_user_regs *regs, 
+                    int ccpu, int ss_mode, int enabled)
+{
+    if (reason == KDB_REASON_KEYBOARD)
+        kdbp("\nEnter kdb (cpu:%d reason:%d vcpu=%d domid:%d"
+             " eflg:0x%lx irqs:%d)\n", ccpu, reason, current->vcpu_id, 
+             current->domain->domain_id, regs->eflags, enabled);
+    else if (ss_mode)
+        KDBGP1("KDBG: KDB single step mode. ccpu:%d\n", ccpu);
+
+    if (reason == KDB_REASON_BPEXCP && !ss_mode) 
+        kdbp("Breakpoint on cpu %d at 0x%lx\n", ccpu, regs->KDBIP);
+
+    /* display the current PC and instruction at it */
+    if (reason != KDB_REASON_PAUSE_IPI)
+        kdb_display_pc(regs);
+    console_start_sync();
+}
+
+/* 
+ * The MAIN kdb function. All cpus go thru this. IRQ is enabled on entry because
+ * a cpu could hit a bp set in disabled code.
+ * IPI: Even the main cpu must enable in case another CPU is trying to IPI us.
+ *      That way, it would IPI us, then get out and be ready for our pause IPI.
+ * IRQs: The reason irq enable/disable is scattered is that on a typical
+ *       system IPIs are constantly going on among CPUs in a set of any size.
+ *       As a result, to avoid deadlock, cpus have to loop enabled until a
+ *       quorum is established and the session has begun.
+ * Step: Intel Vol3B 18.3.1.4: An external interrupt may be serviced upon
+ *       single step. Since the likely ext timer_interrupt and
+ *       apic_timer_interrupt don't mess with time data structs, we are prob OK
+ *       leaving them enabled.
+ * Time: Very messy. Most platform timers are readonly, so we can't stop time
+ *       in the debugger. We take the only resort, let the TSC and plt run as
+ *       normal, upon leaving, "attempt" to bring everybody to current time.
+ * kdbcputraps: bit per cpu. each cpu sets its bit in entry.S. The bit is
+ *              reliable because upon traps, ints are disabled; the bit is set
+ *              before ints are enabled.
+ *
+ * RETURNS: 0 : kdb was called for event it was not responsible
+ *          1 : event owned and handled by kdb 
+ */
+static int
+kdbmain(kdb_reason_t reason, struct cpu_user_regs *regs)
+{
+    int ccpu = smp_processor_id();                /* current cpu */
+    int rc = 1, cmd = kdb_cpu_cmd[ccpu];
+    int ss_mode = (cmd == KDB_CPU_SS || cmd == KDB_CPU_NI);
+    int delayed_install = (kdb_cpu_cmd[ccpu] == KDB_CPU_INSTALL_BP);
+    int enabled = local_irq_is_enabled();
+
+    KDBGP("kdbmain:ccpu:%d rsn:%d eflgs:0x%lx cmd:%d initc:%d irqs:%d "
+          "regs:%lx IP:%lx ", ccpu, reason, regs->eflags, cmd, 
+          kdb_init_cpu, enabled, regs, regs->KDBIP);
+    kdb_dbg_prnt_ctrps("", -1);
+
+    if (!ss_mode && !delayed_install)    /* initial kdb enter */
+        local_irq_enable();              /* so we can receive IPI */
+
+    if (!ss_mode && ccpu != kdb_init_cpu && reason != KDB_REASON_PAUSE_IPI){
+        int sz = sizeof(kdb_init_cpu);
+        while (__cmpxchg(&kdb_init_cpu, -1, ccpu, sz) != -1)
+            for(; kdb_init_cpu != -1; cpu_relax());
+    }
+    if (kdb_session_begun)
+        local_irq_disable();             /* kdb always runs disabled */
+
+    if (reason == KDB_REASON_BPEXCP) {             /* INT 3 */
+        cpumask_clear_cpu(ccpu, &kdb_cpu_traps);   /* remove ourself */
+        rc = kdb_check_sw_bkpts(regs);
+        if (rc == 0) {               /* not one of ours. leave kdb */
+            kdb_init_cpu = -1;
+            goto out;
+        } else if (rc == 1) {        /* one of ours but deleted */
+            if (cpumask_empty(&kdb_cpu_traps)) {
+                kdb_end_session(ccpu,regs);     
+                kdb_init_cpu = -1;
+                goto out;
+            } else {                 
+                /* release another trap cpu, and put ourself in a pause mode */
+                int a_trap_cpu = cpumask_first(&kdb_cpu_traps);
+                KDBGP("ccpu:%d cmd:%d rsn:%d atrpcpu:%d initcpu:%d\n", ccpu, 
+                      kdb_cpu_cmd[ccpu], reason, a_trap_cpu, kdb_init_cpu);
+                kdb_cpu_cmd[a_trap_cpu] = KDB_CPU_QUIT;
+                reason = KDB_REASON_PAUSE_IPI;
+                kdb_init_cpu = -1;
+            }
+        } else if (rc == 2) {        /* one of ours but condition not met */
+                kdb_begin_session();
+                if (guest_mode(regs) && is_hvm_or_hyb_vcpu(current))
+                    current->arch.hvm_vcpu.single_step = 1;
+                else
+                    regs->eflags |= X86_EFLAGS_TF;  
+                kdb_cpu_cmd[ccpu] = KDB_CPU_INSTALL_BP;
+                goto out;
+        }
+    }
+
+    /* following will take care of KDB_CPU_INSTALL_BP, and also release
+     * kdb_init_cpu. it should not be done twice */
+    if ((rc=kdb_check_dbtrap(&reason, ss_mode, regs)) == 0 || rc == 1) {
+        kdb_init_cpu = -1;       /* leaving kdb */
+        goto out;                /* rc properly set to 0 or 1 */
+    }
+    if (reason != KDB_REASON_PAUSE_IPI) {
+        kdb_cpu_cmd[ccpu] = KDB_CPU_MAIN_KDB;
+    } else
+        kdb_cpu_cmd[ccpu] = KDB_CPU_PAUSE;
+
+    if (kdb_cpu_cmd[ccpu] == KDB_CPU_MAIN_KDB && !ss_mode)
+        kdb_begin_session(); 
+
+    kdb_main_entry_misc(reason, regs, ccpu, ss_mode, enabled);
+    /* note, one or more cpu switches may occur in between */
+    while (1) {
+        if (kdb_cpu_cmd[ccpu] == KDB_CPU_PAUSE)
+            kdb_hold_this_cpu(ccpu, regs);
+        if (kdb_cpu_cmd[ccpu] == KDB_CPU_MAIN_KDB)
+            kdb_do_cmds(regs);
+
+        if (kdb_cpu_cmd[ccpu] == KDB_CPU_GO) {
+            if (ccpu != kdb_init_cpu) {
+                kdb_cpu_cmd[kdb_init_cpu] = KDB_CPU_GO;
+                kdb_cpu_cmd[ccpu] = KDB_CPU_PAUSE;
+                continue;               /* for the pause guy */
+            }
+            if (!cpumask_empty(&kdb_cpu_traps)) {
+                /* execute current instruction without 0xcc */
+                kdb_dbg_prnt_ctrps("nempty:", ccpu);
+                if (guest_mode(regs) && is_hvm_or_hyb_vcpu(current))
+                    current->arch.hvm_vcpu.single_step = 1;
+                else
+                    regs->eflags |= X86_EFLAGS_TF;  
+                kdb_cpu_cmd[ccpu] = KDB_CPU_INSTALL_BP;
+                goto out;
+            }
+        }
+        if (kdb_cpu_cmd[ccpu] != KDB_CPU_PAUSE  && 
+            kdb_cpu_cmd[ccpu] != KDB_CPU_MAIN_KDB)
+                break;
+    }
+    if (kdb_cpu_cmd[ccpu] == KDB_CPU_GO) {
+        ASSERT(cpumask_empty(&kdb_cpu_traps));
+        if (kdb_swbp_exists()) {
+            if (reason == KDB_REASON_BPEXCP) {
+                /* do delayed install */
+                if (guest_mode(regs) && is_hvm_or_hyb_vcpu(current))
+                    current->arch.hvm_vcpu.single_step = 1;
+                else
+                    regs->eflags |= X86_EFLAGS_TF;  
+                kdb_cpu_cmd[ccpu] = KDB_CPU_INSTALL_BP;
+                goto out;
+            } 
+        }
+        kdb_end_session(ccpu, regs);
+        kdb_init_cpu = -1;
+    }
+out:
+    if (kdb_cpu_cmd[ccpu] == KDB_CPU_QUIT) {
+        KDBGP("ccpu:%d _quit IP: %lx\n", ccpu, regs->KDBIP);
+        if (! kdb_session_begun)
+            kdb_install_watchpoints();
+        kdb_time_resume(0);
+        kdb_cpu_cmd[ccpu] = KDB_CPU_INVAL;
+    }
+
+    /* for ss and delayed install, TF is set. not much in EXT INT handlers*/
+    if (kdb_cpu_cmd[ccpu] == KDB_CPU_NI)
+        kdb_time_resume(1);
+    if (enabled)
+        local_irq_enable();
+
+    KDBGP("kdbmain:X:ccpu:%d rc:%d cmd:%d eflg:0x%lx initc:%d sesn:%d " 
+          "cs:%x irqs:%d ", ccpu, rc, kdb_cpu_cmd[ccpu], regs->eflags, 
+          kdb_init_cpu, kdb_session_begun, regs->cs, local_irq_is_enabled());
+    kdb_dbg_prnt_ctrps("", -1);
+    return (rc ? 1 : 0);
+}
+
+/* 
+ * kdb entry function when coming in via a keyboard
+ * RETURNS: 0 : kdb was called for event it was not responsible
+ *          1 : event owned and handled by kdb 
+ */
+int
+kdb_keyboard(struct cpu_user_regs *regs)
+{
+    return kdbmain(KDB_REASON_KEYBOARD, regs);
+}
+
+#if 0
+/*
+ * This function is called when a kdb session is active and the user presses
+ * ctrl\ again. The assumption is that the user typed a ni/ss cmd and it never
+ * got back into kdb, or the user is impatient. In either case, we just fake
+ * that the SS did finish. Since all other kdb cpus must be holding with
+ * interrupts disabled, the interrupt would be on the CPU that did the ss/ni cmd.
+ */
+void
+kdb_ssni_reenter(struct cpu_user_regs *regs)
+{
+    int ccpu = smp_processor_id();
+    int ccmd = kdb_cpu_cmd[ccpu];
+
+    if(ccmd == KDB_CPU_SS || ccmd == KDB_CPU_INSTALL_BP)
+        kdbmain(KDB_REASON_DBEXCP, regs); 
+    else 
+        kdbmain(KDB_REASON_KEYBOARD, regs);
+}
+#endif
+
+/* 
+ * All traps are routed thru here. We care about BP (#3) trap (INT 3) and
+ * the DB trap(#1) only. 
+ * returns: 0 kdb has nothing do with this trap
+ *          1 kdb handled this trap 
+ */
+int
+kdb_handle_trap_entry(int vector, struct cpu_user_regs *regs)
+{
+    int rc = 0;
+    int ccpu = smp_processor_id();
+
+    if (vector == TRAP_int3) {
+        rc = kdbmain(KDB_REASON_BPEXCP, regs);
+
+    } else if (vector == TRAP_debug) {
+        KDBGP("ccpu:%d trapdbg reas:%d\n", ccpu, kdb_trap_immed_reason);
+
+        if (kdb_trap_immed_reason == KDB_TRAP_FATAL) { 
+            KDBGP("kdbtrp:fatal ccpu:%d vec:%d\n", ccpu, vector);
+            rc = kdbmain_fatal(regs, vector);
+            BUG();                             /* no return */
+
+        } else if (kdb_trap_immed_reason == KDB_TRAP_KDBSTACK) {
+            kdb_trap_immed_reason = 0;         /* show kdb stack */
+            show_registers(regs);
+            show_stack(regs);
+            regs->eflags &= ~X86_EFLAGS_TF;
+            rc = 1;
+
+        } else if (kdb_trap_immed_reason == KDB_TRAP_NONFATAL) {
+            kdb_trap_immed_reason = 0;
+            rc = kdb_keyboard(regs);
+        } else {                         /* ss/ni/delayed install... */
+            if (guest_mode(regs) && is_hvm_or_hyb_vcpu(current))
+                current->arch.hvm_vcpu.single_step = 0;
+            rc = kdbmain(KDB_REASON_DBEXCP, regs); 
+        }
+
+    } else if (vector == TRAP_nmi) {                   /* external nmi */
+        /* when nmi is pressed, it could go to one or more or all cpus
+         * depending on the hardware. Also, for now assume it's fatal */
+        KDBGP("kdbtrp:ccpu:%d vec:%d\n", ccpu, vector);
+        rc = kdbmain_fatal(regs, TRAP_nmi);
+    } 
+    return rc;
+}
+
+int
+kdb_trap_fatal(int vector, struct cpu_user_regs *regs)
+{
+    kdbmain_fatal(regs, vector);
+    return 0;
+}
+
+/* From smp_send_nmi_allbutself() in crash.c which is static */
+void
+kdb_nmi_pause_cpus(cpumask_t cpumask)
+{
+    int ccpu = smp_processor_id();
+    mdelay(200);
+    cpumask_complement(&cpumask, &cpumask);              /* flip bit map */
+    cpumask_and(&cpumask, &cpumask, &cpu_online_map);    /* remove extra bits */
+    cpumask_clear_cpu(ccpu, &cpumask);/* absolutely make sure we're not on it */
+
+    KDBGP("ccpu:%d nmi pause. mask:0x%lx\n", ccpu, cpumask.bits[0]);
+    if ( !cpumask_empty(&cpumask) )
+#if XEN_SUBVERSION > 4 || XEN_VERSION == 4              /* xen 3.5.x or above */
+        send_IPI_mask(&cpumask, APIC_DM_NMI);
+#else
+        send_IPI_mask(cpumask, APIC_DM_NMI);
+#endif
+    mdelay(200);
+    KDBGP("ccpu:%d nmi pause done...\n", ccpu);
+}
+
+/* 
+ * Separate function from kdbmain to keep both within sanity levels.
+ */
+DEFINE_SPINLOCK(kdb_fatal_lk);
+static int
+kdbmain_fatal(struct cpu_user_regs *regs, int vector)
+{
+    int ccpu = smp_processor_id();
+
+    console_start_sync();
+
+    KDBGP("mainf:ccpu:%d vec:%d irq:%d\n", ccpu, vector,local_irq_is_enabled());
+    cpumask_set_cpu(ccpu, &kdb_fatal_cpumask);        /* uses LOCK_PREFIX */
+
+    if (spin_trylock(&kdb_fatal_lk)) {
+
+        kdbp("*** kdb (Fatal Error on cpu:%d vec:%d %s):\n", ccpu,
+             vector, kdb_gettrapname(vector));
+        kdb_cpu_cmd[ccpu] = KDB_CPU_MAIN_KDB;
+        kdb_display_pc(regs);
+
+        watchdog_disable();     /* important */
+        kdb_sys_crash = 1;
+        kdb_session_begun = 0;  /* in case session already active */
+        local_irq_enable();
+        kdb_nmi_pause_cpus(kdb_fatal_cpumask);
+
+        kdb_clear_prev_cmd();   /* buffered CRs will repeat prev cmd */
+        kdb_session_begun = 1;  /* for kdb_hold_this_cpu() */
+        local_irq_disable();
+    } else {
+        kdb_cpu_cmd[ccpu] = KDB_CPU_PAUSE;
+    }
+    while (1) {
+        if (kdb_cpu_cmd[ccpu] == KDB_CPU_PAUSE)
+            kdb_hold_this_cpu(ccpu, regs);
+        if (kdb_cpu_cmd[ccpu] == KDB_CPU_MAIN_KDB)
+            kdb_do_cmds(regs);
+#if 0
+        /* dump is the only way to exit in crashed state */
+        if (kdb_cpu_cmd[ccpu] == KDB_CPU_DUMP)
+            kdb_do_dump(regs);
+#endif
+    }
+    return 0;
+}
+
+/* Mostly called in fatal cases. earlykdb calls non-fatal.
+ * kdb_trap_immed_reason is global, so allow only one cpu at a time. Also,
+ * multiple cpus may be crashing at the same time. We enable interrupts because
+ * if there is a bad hang, at least ctrl-\ will break into kdb. Also, we don't
+ * call kdb_keyboard directly because we don't have the register context.
+ */
+DEFINE_SPINLOCK(kdb_immed_lk);
+void
+kdb_trap_immed(int reason)            /* fatal, non-fatal, kdb stack etc... */
+{
+    int ccpu = smp_processor_id();
+    int disabled = !local_irq_is_enabled();
+
+    KDBGP("trapimm:ccpu:%d reas:%d\n", ccpu, reason);
+    local_irq_enable();
+    spin_lock(&kdb_immed_lk);
+    kdb_trap_immed_reason = reason;
+    barrier();
+    __asm__ __volatile__ ( "int $1" );
+    kdb_trap_immed_reason = 0;
+
+    spin_unlock(&kdb_immed_lk);
+    if (disabled)
+        local_irq_disable();
+}
+
+/* called very early during init, even before all CPUs are brought online */
+void 
+kdb_init(void)
+{
+        kdb_init_cmdtab();      /* Initialize Command Table */
+}
+
+static const char *
+kdb_gettrapname(int trapno)
+{
+    char *ret;
+    switch (trapno) {
+        case  0:  ret = "Divide Error"; break;
+        case  2:  ret = "NMI Interrupt"; break;
+        case  3:  ret = "Int 3 Trap"; break;
+        case  4:  ret = "Overflow Error"; break;
+        case  6:  ret = "Invalid Opcode"; break;
+        case  8:  ret = "Double Fault"; break;
+        case 10:  ret = "Invalid TSS"; break;
+        case 11:  ret = "Segment Not Present"; break;
+        case 12:  ret = "Stack-Segment Fault"; break;
+        case 13:  ret = "General Protection"; break;
+        case 14:  ret = "Page Fault"; break;
+        case 17:  ret = "Alignment Check"; break;
+        default: ret = " ????? ";
+    }
+    return ret;
+}
+
+
+/* ====================== Generic tracing subsystem ======================== */
+
+#define KDBTRCMAX 1       /* set this to max number of recs to trace. each rec 
+                           * is 32 bytes */
+volatile int kdb_trcon=1; /* turn tracing ON: set here or via the trcon cmd */
+
+typedef struct {
+    union {
+        struct { uint d0; uint cpu_trcid; } s0;
+        uint64_t l0;
+    }u;
+    uint64_t l1, l2, l3; 
+} trc_rec_t;
+
+static volatile unsigned int trcidx;    /* points to where new entry will go */
+static trc_rec_t trca[KDBTRCMAX];       /* trace array */
+
+/* atomically: add i to *p, return prev value of *p (ie, val before add) */
+static int
+kdb_fetch_and_add(int i, uint *p)
+{
+    asm volatile("lock; xaddl %0, %1" : "+r"(i), "+m"(*p) : : "memory");
+    return i;
+}
+
+/* zero out the entire buffer */
+void 
+kdb_trczero(void)
+{
+    for (trcidx = KDBTRCMAX-1; trcidx; trcidx--) {
+        memset(&trca[trcidx], 0, sizeof(trc_rec_t));
+    }
+    memset(&trca[trcidx], 0, sizeof(trc_rec_t));
+    kdbp("kdb trace buffer has been zeroed\n");
+}
+
+/* add trace entry: eg.: kdbtrc(0xe0f099, intdata, vcpu, domain, 0)
+ *    where:  0xe0f099 : 24bits max trcid, lower 8 bits are set to cpuid */
+void
+kdbtrc(uint trcid, uint int_d0, uint64_t d1_64, uint64_t d2_64, uint64_t d3_64)
+{
+    uint idx;
+
+    if (!kdb_trcon)
+        return;
+
+    idx = kdb_fetch_and_add(1, (uint*)&trcidx);
+    idx = idx % KDBTRCMAX;
+
+#if 0
+    trca[idx].u.s0.cpu_trcid = (smp_processor_id()<<24) | trcid;
+#endif
+    trca[idx].u.s0.cpu_trcid = (trcid<<8) | smp_processor_id();
+    trca[idx].u.s0.d0 = int_d0;
+    trca[idx].l1 = d1_64;
+    trca[idx].l2 = d2_64;
+    trca[idx].l3 = d3_64;
+}
+
+/* give hints so user can print trc buffer via the dd command. last has the
+ * most recent entry */
+void
+kdb_trcp(void)
+{
+    int i = trcidx % KDBTRCMAX;
+
+    i = (i==0) ? KDBTRCMAX-1 : i-1;
+    kdbp("trcbuf:    [0]: %016lx [MAX-1]: %016lx\n", &trca[0],
+         &trca[KDBTRCMAX-1]);
+    kdbp(" [most recent]: %016lx   trcidx: 0x%x\n", &trca[i], trcidx);
+}
+
diff -r 32034d1914a6 xen/kdb/x86/Makefile
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/xen/kdb/x86/Makefile	Wed Aug 29 14:39:57 2012 -0700
@@ -0,0 +1,3 @@
+
+obj-y    := kdb_wp.o
+subdir-y += udis86-1.7
diff -r 32034d1914a6 xen/kdb/x86/kdb_wp.c
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/xen/kdb/x86/kdb_wp.c	Wed Aug 29 14:39:57 2012 -0700
@@ -0,0 +1,310 @@
+/*
+ * Copyright (C) 2009, Mukesh Rathor, Oracle Corp.  All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public
+ * License v2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public
+ * License along with this program; if not, write to the
+ * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
+ * Boston, MA 02111-1307, USA.
+ */
+
+#include "../include/kdbinc.h"
+
+#if 0
+#define DR6_BT  0x00008000
+#define DR6_BS  0x00004000
+#define DR6_BD  0x00002000
+#endif
+#define DR6_B3  0x00000008
+#define DR6_B2  0x00000004
+#define DR6_B1  0x00000002
+#define DR6_B0  0x00000001
+
+#define KDB_MAXWP 4                          /* DR0 thru DR3 */
+
+struct kdb_wp {
+    kdbma_t  wp_addr;
+    int      wp_rwflag;
+    int      wp_len;
+    int      wp_deleted;                     /* pending delete */
+};
+static struct kdb_wp kdb_wpa[KDB_MAXWP];
+
+/* following because the vmcs has its own dr7: when the vmcs runs, it clobbers
+ * the native dr7, so we need to save/restore it */
+unsigned long kdb_dr7;
+
+
+/* Set G0-G3 bits in DR7. this does global enable of the corresponding wp */
+static void
+kdb_set_gx_in_dr7(int regno, kdbma_t *dr7p)
+{
+    if (regno == 0)
+        *dr7p = *dr7p | 0x2;
+    else if (regno == 1)
+        *dr7p = *dr7p | 0x8;
+    else if (regno == 2)
+        *dr7p = *dr7p | 0x20;
+    else if (regno == 3)
+        *dr7p = *dr7p | 0x80;
+}
+
+/* Set LEN0 - LEN3 pair bits in DR7 (len should be 1 2 4 or 8) */
+static void
+kdb_set_len_in_dr7(int regno, kdbma_t *dr7p, int len)
+{
+    int lenbits = (len == 8) ? 2 : len-1;
+
+    *dr7p &= ~(0x3 << (18 + 4*regno));
+    *dr7p |= ((ulong)(lenbits & 0x3) << (18 + 4*regno));
+}
+
+static void
+kdb_set_dr7_rw(int regno, kdbma_t *dr7p, int rw)
+{
+    *dr7p &= ~(0x3 << (16 + 4*regno));
+    *dr7p |= ((ulong)(rw & 0x3)) << (16 + 4*regno);
+}
+
+/* get value of a debug register: DR0-DR3 DR6 DR7. other values return 0 */
+kdbma_t
+kdb_rd_dbgreg(int regnum)
+{
+    kdbma_t contents = 0;
+
+    if (regnum == 0)
+        __asm__ ("movq %%db0,%0\n\t":"=r"(contents));
+    else if (regnum == 1)
+        __asm__ ("movq %%db1,%0\n\t":"=r"(contents));
+    else if (regnum == 2)
+        __asm__ ("movq %%db2,%0\n\t":"=r"(contents));
+    else if (regnum == 3)
+        __asm__ ("movq %%db3,%0\n\t":"=r"(contents));
+    else if (regnum == 6)
+        __asm__ ("movq %%db6,%0\n\t":"=r"(contents));
+    else if (regnum == 7)
+        __asm__ ("movq %%db7,%0\n\t":"=r"(contents));
+
+    return contents;
+}
+
+static void
+kdb_wr_dbgreg(int regnum, kdbma_t contents)
+{
+    if (regnum == 0)
+        __asm__ ("movq %0,%%db0\n\t"::"r"(contents));
+    else if (regnum == 1)
+        __asm__ ("movq %0,%%db1\n\t"::"r"(contents));
+    else if (regnum == 2)
+        __asm__ ("movq %0,%%db2\n\t"::"r"(contents));
+    else if (regnum == 3)
+        __asm__ ("movq %0,%%db3\n\t"::"r"(contents));
+    else if (regnum == 6)
+        __asm__ ("movq %0,%%db6\n\t"::"r"(contents));
+    else if (regnum == 7)
+        __asm__ ("movq %0,%%db7\n\t"::"r"(contents));
+}
+
+static void
+kdb_print_wp_info(char *strp, int idx)
+{
+    kdbp("%s[%d]:%016lx len:%d ", strp, idx, kdb_wpa[idx].wp_addr,
+         kdb_wpa[idx].wp_len);
+    if (kdb_wpa[idx].wp_rwflag == 1)
+        kdbp("on data write only\n");
+    else if (kdb_wpa[idx].wp_rwflag == 2)
+        kdbp("on IO read/write\n");
+    else 
+        kdbp("on data read/write\n");
+}
+
+/*
+ * Returns : 0 if not one of ours
+ *           1 if one of ours
+ */
+int
+kdb_check_watchpoints(struct cpu_user_regs *regs)
+{
+    int wpnum;
+    kdbma_t dr6 = kdb_rd_dbgreg(6);
+
+    KDBGP1("check_wp: IP:%lx EFLAGS:%lx\n", regs->rip, regs->rflags);
+    if (dr6 & DR6_B0)
+        wpnum = 0;
+    else if (dr6 & DR6_B1)
+        wpnum = 1;
+    else if (dr6 & DR6_B2)
+        wpnum = 2;
+    else if (dr6 & DR6_B3)
+        wpnum = 3;
+    else
+        return 0;
+
+    kdb_print_wp_info("Watchpoint ", wpnum);
+    return 1;
+}
+
+/* set a watchpoint at a given address 
+ * PreCondition: addr != 0 */
+static void
+kdb_set_wp(kdbva_t addr, int rwflag, int len)
+{
+    int regno;
+
+    for (regno=0; regno < KDB_MAXWP; regno++) {
+        if (kdb_wpa[regno].wp_addr == addr && !kdb_wpa[regno].wp_deleted) {
+            kdbp("Watchpoint already set\n");
+            return;
+        }
+        if (kdb_wpa[regno].wp_deleted)
+            memset(&kdb_wpa[regno], 0, sizeof(kdb_wpa[regno]));
+    }
+    for (regno=0; regno < KDB_MAXWP && kdb_wpa[regno].wp_addr; regno++);
+    if (regno >= KDB_MAXWP) {
+        kdbp("watchpoint table full. limit:%d\n", KDB_MAXWP);
+        return;
+    }
+    kdb_wpa[regno].wp_addr = addr;
+    kdb_wpa[regno].wp_rwflag = rwflag;
+    kdb_wpa[regno].wp_len = len;
+    kdb_print_wp_info("Watchpoint set ", regno);
+}
+
+/* write reg DR0-3 with address. Update corresponding bits in DR7 */
+static void
+kdb_install_watchpoint(int regno, kdbma_t *dr7p)
+{
+    kdb_set_gx_in_dr7(regno, dr7p);
+    kdb_set_len_in_dr7(regno, dr7p, kdb_wpa[regno].wp_len); 
+    kdb_set_dr7_rw(regno, dr7p, kdb_wpa[regno].wp_rwflag);
+    kdb_wr_dbgreg(regno, kdb_wpa[regno].wp_addr);
+
+    KDBGP1("ccpu:%d installed wp. addr:%lx rw:%x len:%x dr7:%016lx\n",
+           smp_processor_id(), kdb_wpa[regno].wp_addr, 
+           kdb_wpa[regno].wp_rwflag, kdb_wpa[regno].wp_len, *dr7p);
+}
+
+/* clear G0-G3 bits in DR7 for given DR0-3 */
+static void
+kdb_clear_dr7_gx(int regno, kdbma_t *dr7p)
+{
+    if (regno == 0)
+        *dr7p = *dr7p & ~0x2;
+    else if (regno == 1)
+        *dr7p = *dr7p & ~0x8;
+    else if (regno == 2)
+        *dr7p = *dr7p & ~0x20;
+    else if (regno == 3)
+        *dr7p = *dr7p & ~0x80;
+}
+
+/* update dr7 once, as it's slow to update debug regs and cpus will still be 
+ * paused when leaving kdb.
+ *
+ * Just leave DR0-3 clobbered but remove bits from DR7 to disable wp 
+ */
+void
+kdb_install_watchpoints(void)
+{
+    int regno;
+    kdbma_t dr7 = kdb_rd_dbgreg(7);
+
+    for (regno=0; regno < KDB_MAXWP; regno++) {
+        /* do not clear wp_deleted here as all cpus must clear wps */
+        if (kdb_wpa[regno].wp_deleted) {
+            kdb_clear_dr7_gx(regno, &dr7);
+            continue;
+        }
+        if (kdb_wpa[regno].wp_addr)
+            kdb_install_watchpoint(regno, &dr7);
+    }
+    /* always clear DR6 when leaving */
+    kdb_wr_dbgreg(6, 0);
+    kdb_wr_dbgreg(7, dr7);
+
+    if (dr7 & DR7_ACTIVE_MASK)
+        kdb_dr7 = dr7;
+    else
+        kdb_dr7 = 0;
+#if 0
+    for(dp=domain_list; dp; dp=dp->next_in_list) {
+        struct vcpu *vp;
+        for_each_vcpu(dp, vp) {
+            for (regno=0; regno < KDB_MAXWP; regno++)
+                vp->arch.guest_context.debugreg[regno] = kdb_wpa[regno].wp_addr;
+
+            vp->arch.guest_context.debugreg[6] = 0;
+            vp->arch.guest_context.debugreg[7] = dr7;
+            KDBGP("kdb_install_watchpoints(): v:%p dr7:%lx\n", vp, dr7);
+            /* hvm_set_info_guest(vp);: Can't because can't vmcs_enter in kdb */
+        }
+    }
+#endif
+}
+
+/* Clear watchpoint(s); wpnum == -1 clears all watchpoints */
+void
+kdb_clear_wps(int wpnum)
+{
+    int i;
+
+    if (wpnum >= KDB_MAXWP) {
+        kdbp("Invalid wpnum %d\n", wpnum);
+        return;
+    }
+    if (wpnum >= 0) {
+        if (kdb_wpa[wpnum].wp_addr) {
+            kdb_wpa[wpnum].wp_deleted = 1;
+            kdb_print_wp_info("Deleted watchpoint", wpnum);
+        } else
+            kdbp("watchpoint %d not set\n", wpnum);
+        return;
+    }
+    for (i=0; i < KDB_MAXWP; i++) {
+        if (kdb_wpa[i].wp_addr) {
+            kdb_wpa[i].wp_deleted = 1;
+            kdb_print_wp_info("Deleted watchpoint", i);
+        }
+    }
+}
+
+/* display any watchpoints that are set */
+static void
+kdb_display_wps(void)
+{
+    int i;
+    for (i=0; i < KDB_MAXWP; i++)
+        if (kdb_wpa[i].wp_addr && !kdb_wpa[i].wp_deleted) 
+            kdb_print_wp_info("", i);
+}
+
+/*
+ * Display or set hardware breakpoints, i.e. watchpoints:
+ *   - Up to 4 are allowed
+ *
+ *  rw_flag should be one of:
+ *     01 == break on data writes only
+ *     10 == break on I/O reads or writes
+ *     11 == break on data reads or writes
+ *
+ *  len should be one of: 1 2 4 8
+ */
+void
+kdb_do_watchpoints(kdbva_t addr, int rw_flag, int len)
+{
+    if (addr == 0) {
+        kdb_display_wps();        /* display set watchpoints */
+        return;
+    }
+    kdb_set_wp(addr, rw_flag, len);
+    return;
+}
+
diff -r 32034d1914a6 xen/kdb/x86/udis86-1.7/LICENSE
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/xen/kdb/x86/udis86-1.7/LICENSE	Wed Aug 29 14:39:57 2012 -0700
@@ -0,0 +1,22 @@
+Copyright (c) 2002, 2003, 2004, 2005, 2006 <vivek@sig9.com>
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without modification, 
+are permitted provided that the following conditions are met:
+
+    * Redistributions of source code must retain the above copyright notice, 
+      this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright notice, 
+      this list of conditions and the following disclaimer in the documentation 
+      and/or other materials provided with the distribution.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND 
+ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED 
+WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE 
+DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR 
+ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES 
+(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; 
+LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON 
+ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT 
+(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS 
+SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
diff -r 32034d1914a6 xen/kdb/x86/udis86-1.7/Makefile
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/xen/kdb/x86/udis86-1.7/Makefile	Wed Aug 29 14:39:57 2012 -0700
@@ -0,0 +1,5 @@
+
+CFLAGS		+= -D__UD_STANDALONE__
+obj-y		:= decode.o input.o itab.o kdb_dis.o syn-att.o syn.o \
+                   syn-intel.o udis86.o
+
diff -r 32034d1914a6 xen/kdb/x86/udis86-1.7/README
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/xen/kdb/x86/udis86-1.7/README	Wed Aug 29 14:39:57 2012 -0700
@@ -0,0 +1,10 @@
+
+http://udis86.sourceforge.net/
+udis86-1.6 : 
+  - cd libudis86
+  - cp *.c to here
+  - cp *.h to here
+   
+Mukesh Rathor
+04/30/2008
+
diff -r 32034d1914a6 xen/kdb/x86/udis86-1.7/decode.c
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/xen/kdb/x86/udis86-1.7/decode.c	Wed Aug 29 14:39:57 2012 -0700
@@ -0,0 +1,1197 @@
+/* -----------------------------------------------------------------------------
+ * decode.c
+ *
+ * Copyright (c) 2005, 2006, Vivek Mohan <vivek@sig9.com>
+ * All rights reserved. See LICENSE
+ * -----------------------------------------------------------------------------
+ */
+
+#if 0
+#include <assert.h>
+#include <string.h>
+#endif
+
+#include "types.h"
+#include "itab.h"
+#include "input.h"
+#include "decode.h"
+
+/* The maximum number of prefixes for an instruction */
+#define MAX_PREFIXES    15
+
+static struct ud_itab_entry ie_invalid = { UD_Iinvalid, O_NONE, O_NONE, O_NONE, P_none };
+static struct ud_itab_entry ie_pause   = { UD_Ipause,   O_NONE, O_NONE, O_NONE, P_none };
+static struct ud_itab_entry ie_nop     = { UD_Inop,     O_NONE, O_NONE, O_NONE, P_none };
+
+
+/* Looks up the mnemonic code in the mnemonic string table.
+ * Returns NULL if the mnemonic code is invalid.
+ */
+const char * ud_lookup_mnemonic( enum ud_mnemonic_code c )
+{
+    if ( c < UD_Id3vil )
+        return ud_mnemonics_str[ c ];
+    return NULL;
+}
+
+
+/* Extracts instruction prefixes.
+ */
+static int get_prefixes( struct ud* u )
+{
+    unsigned int have_pfx = 1;
+    unsigned int i;
+    uint8_t curr;
+
+    /* if in error state, bail out */
+    if ( u->error ) 
+        return -1; 
+
+    /* keep going as long as there are prefixes available */
+    for ( i = 0; have_pfx ; ++i ) {
+
+        /* Get next byte. */
+        inp_next(u); 
+        if ( u->error ) 
+            return -1;
+        curr = inp_curr( u );
+
+        /* REX prefixes in 64-bit mode */
+        if ( u->dis_mode == 64 && ( curr & 0xF0 ) == 0x40 ) {
+            u->pfx_rex = curr;  
+        } else {
+            switch ( curr )  
+            {
+            case 0x2E : 
+                u->pfx_seg = UD_R_CS; 
+                u->pfx_rex = 0;
+                break;
+            case 0x36 :     
+                u->pfx_seg = UD_R_SS; 
+                u->pfx_rex = 0;
+                break;
+            case 0x3E : 
+                u->pfx_seg = UD_R_DS; 
+                u->pfx_rex = 0;
+                break;
+            case 0x26 : 
+                u->pfx_seg = UD_R_ES; 
+                u->pfx_rex = 0;
+                break;
+            case 0x64 : 
+                u->pfx_seg = UD_R_FS; 
+                u->pfx_rex = 0;
+                break;
+            case 0x65 : 
+                u->pfx_seg = UD_R_GS; 
+                u->pfx_rex = 0;
+                break;
+            case 0x67 : /* address-size override prefix */
+                u->pfx_adr = 0x67;
+                u->pfx_rex = 0;
+                break;
+            case 0xF0 : 
+                u->pfx_lock = 0xF0;
+                u->pfx_rex  = 0;
+                break;
+            case 0x66: 
+                /* the 0x66 sse prefix is only effective if no other sse prefix
+                 * has already been specified.
+                 */
+                if ( !u->pfx_insn ) u->pfx_insn = 0x66;
+                u->pfx_opr = 0x66;           
+                u->pfx_rex = 0;
+                break;
+            case 0xF2:
+                u->pfx_insn  = 0xF2;
+                u->pfx_repne = 0xF2; 
+                u->pfx_rex   = 0;
+                break;
+            case 0xF3:
+                u->pfx_insn = 0xF3;
+                u->pfx_rep  = 0xF3; 
+                u->pfx_repe = 0xF3; 
+                u->pfx_rex  = 0;
+                break;
+            default : 
+                /* No more prefixes */
+                have_pfx = 0;
+                break;
+            }
+        }
+
+        /* check if we reached max instruction length */
+        if ( i + 1 == MAX_INSN_LENGTH ) {
+            u->error = 1;
+            break;
+        }
+    }
+
+    /* return status */
+    if ( u->error ) 
+        return -1; 
+
+    /* rewind one byte in the stream, since the loop above
+     * stops at a non-prefix byte.
+     */
+    inp_back(u);
+
+    /* speculatively determine the effective operand mode,
+     * based on the prefixes and the current disassembly
+     * mode. This may be inaccurate, but useful for mode
+     * dependent decoding.
+     */
+    if ( u->dis_mode == 64 ) {
+        u->opr_mode = REX_W( u->pfx_rex ) ? 64 : ( ( u->pfx_opr ) ? 16 : 32 ) ;
+        u->adr_mode = ( u->pfx_adr ) ? 32 : 64;
+    } else if ( u->dis_mode == 32 ) {
+        u->opr_mode = ( u->pfx_opr ) ? 16 : 32;
+        u->adr_mode = ( u->pfx_adr ) ? 16 : 32;
+    } else if ( u->dis_mode == 16 ) {
+        u->opr_mode = ( u->pfx_opr ) ? 32 : 16;
+        u->adr_mode = ( u->pfx_adr ) ? 32 : 16;
+    }
+
+    return 0;
+}
+
+
+/* Searches the instruction tables for the right entry.
+ */
+static int search_itab( struct ud * u )
+{
+    struct ud_itab_entry * e = NULL;
+    enum ud_itab_index table;
+    uint8_t peek;
+    uint8_t did_peek = 0;
+    uint8_t curr; 
+    uint8_t index;
+
+    /* if in state of error, return */
+    if ( u->error ) 
+        return -1;
+
+    /* get first byte of opcode. */
+    inp_next(u); 
+    if ( u->error ) 
+        return -1;
+    curr = inp_curr(u); 
+
+    /* resolve xchg, nop, pause craziness */
+    if ( 0x90 == curr ) {
+        if ( !( u->dis_mode == 64 && REX_B( u->pfx_rex ) ) ) {
+            if ( u->pfx_rep ) {
+                u->pfx_rep = 0;
+                e = & ie_pause;
+            } else {
+                e = & ie_nop;
+            }
+            goto found_entry;
+        }
+    }
+
+    /* get top-level table */
+    if ( 0x0F == curr ) {
+        table = ITAB__0F;
+        curr  = inp_next(u);
+        if ( u->error )
+            return -1;
+
+        /* 2byte opcodes can be modified by 0x66, F3, and F2 prefixes */
+        if ( 0x66 == u->pfx_insn ) {
+            if ( ud_itab_list[ ITAB__PFX_SSE66__0F ][ curr ].mnemonic != UD_Iinvalid ) {
+                table = ITAB__PFX_SSE66__0F;
+                u->pfx_opr = 0;
+            }
+        } else if ( 0xF2 == u->pfx_insn ) {
+            if ( ud_itab_list[ ITAB__PFX_SSEF2__0F ][ curr ].mnemonic != UD_Iinvalid ) {
+                table = ITAB__PFX_SSEF2__0F; 
+                u->pfx_repne = 0;
+            }
+        } else if ( 0xF3 == u->pfx_insn ) {
+            if ( ud_itab_list[ ITAB__PFX_SSEF3__0F ][ curr ].mnemonic != UD_Iinvalid ) {
+                table = ITAB__PFX_SSEF3__0F;
+                u->pfx_repe = 0;
+                u->pfx_rep  = 0;
+            }
+        }
+    /* pick an instruction from the 1byte table */
+    } else {
+        table = ITAB__1BYTE; 
+    }
+
+    index = curr;
+
+search:
+
+    e = & ud_itab_list[ table ][ index ];
+
+    /* if mnemonic constant is a standard instruction constant
+     * our search is over.
+     */
+    
+    if ( e->mnemonic < UD_Id3vil ) {
+        if ( e->mnemonic == UD_Iinvalid ) {
+            if ( did_peek ) {
+                inp_next( u ); if ( u->error ) return -1;
+            }
+            goto found_entry;
+        }
+        goto found_entry;
+    }
+
+    table = e->prefix;
+
+    switch ( e->mnemonic )
+    {
+    case UD_Igrp_reg:
+        peek     = inp_peek( u );
+        did_peek = 1;
+        index    = MODRM_REG( peek );
+        break;
+
+    case UD_Igrp_mod:
+        peek     = inp_peek( u );
+        did_peek = 1;
+        index    = MODRM_MOD( peek );
+        if ( index == 3 )
+           index = ITAB__MOD_INDX__11;
+        else 
+           index = ITAB__MOD_INDX__NOT_11; 
+        break;
+
+    case UD_Igrp_rm:
+        curr     = inp_next( u );
+        did_peek = 0;
+        if ( u->error )
+            return -1;
+        index    = MODRM_RM( curr );
+        break;
+
+    case UD_Igrp_x87:
+        curr     = inp_next( u );
+        did_peek = 0;
+        if ( u->error )
+            return -1;
+        index    = curr - 0xC0;
+        break;
+
+    case UD_Igrp_osize:
+        if ( u->opr_mode == 64 ) 
+            index = ITAB__MODE_INDX__64;
+        else if ( u->opr_mode == 32 ) 
+            index = ITAB__MODE_INDX__32;
+        else
+            index = ITAB__MODE_INDX__16;
+        break;
+ 
+    case UD_Igrp_asize:
+        if ( u->adr_mode == 64 ) 
+            index = ITAB__MODE_INDX__64;
+        else if ( u->adr_mode == 32 ) 
+            index = ITAB__MODE_INDX__32;
+        else
+            index = ITAB__MODE_INDX__16;
+        break;               
+
+    case UD_Igrp_mode:
+        if ( u->dis_mode == 64 ) 
+            index = ITAB__MODE_INDX__64;
+        else if ( u->dis_mode == 32 ) 
+            index = ITAB__MODE_INDX__32;
+        else
+            index = ITAB__MODE_INDX__16;
+        break;
+
+    case UD_Igrp_vendor:
+        if ( u->vendor == UD_VENDOR_INTEL ) 
+            index = ITAB__VENDOR_INDX__INTEL; 
+        else if ( u->vendor == UD_VENDOR_AMD )
+            index = ITAB__VENDOR_INDX__AMD;
+        else {
+            kdbp("KDB:search_itab(): unrecognized vendor id\n");
+            return -1;
+        }
+        break;
+
+    case UD_Id3vil:
+        kdbp("KDB:search_itab(): invalid instr mnemonic constant Id3vil\n");
+        return -1;
+
+    default:
+        kdbp("KDB:search_itab(): invalid instruction mnemonic constant\n");
+        return -1;
+    }
+
+    goto search;
+
+found_entry:
+
+    u->itab_entry = e;
+    u->mnemonic = u->itab_entry->mnemonic;
+
+    return 0;
+}
+
+
+static unsigned int resolve_operand_size( const struct ud * u, unsigned int s )
+{
+    switch ( s ) 
+    {
+    case SZ_V:
+        return ( u->opr_mode );
+    case SZ_Z:  
+        return ( u->opr_mode == 16 ) ? 16 : 32;
+    case SZ_P:  
+        return ( u->opr_mode == 16 ) ? SZ_WP : SZ_DP;
+    case SZ_MDQ:
+        return ( u->opr_mode == 16 ) ? 32 : u->opr_mode;
+    case SZ_RDQ:
+        return ( u->dis_mode == 64 ) ? 64 : 32;
+    default:
+        return s;
+    }
+}
+
+
+static int resolve_mnemonic( struct ud* u )
+{
+  /* far/near flags */
+  u->br_far = 0;
+  u->br_near = 0;
+  /* readjust operand sizes for call/jmp instructions */
+  if ( u->mnemonic == UD_Icall || u->mnemonic == UD_Ijmp ) {
+    /* WP: 16bit pointer */
+    if ( u->operand[ 0 ].size == SZ_WP ) {
+        u->operand[ 0 ].size = 16;
+        u->br_far = 1;
+        u->br_near= 0;
+    /* DP: 32bit pointer */
+    } else if ( u->operand[ 0 ].size == SZ_DP ) {
+        u->operand[ 0 ].size = 32;
+        u->br_far = 1;
+        u->br_near= 0;
+    } else {
+        u->br_far = 0;
+        u->br_near= 1;
+    }
+  /* resolve 3dnow weirdness. */
+  } else if ( u->mnemonic == UD_I3dnow ) {
+    u->mnemonic = ud_itab_list[ ITAB__3DNOW ][ inp_curr( u )  ].mnemonic;
+  }
+  /* SWAPGS is only valid in 64-bit mode */
+  if ( u->mnemonic == UD_Iswapgs && u->dis_mode != 64 ) {
+    u->error = 1;
+    return -1;
+  }
+
+  return 0;
+}
+
+
+/* -----------------------------------------------------------------------------
+ * decode_a()- Decodes operands of the type seg:offset
+ * -----------------------------------------------------------------------------
+ */
+static void 
+decode_a(struct ud* u, struct ud_operand *op)
+{
+  if (u->opr_mode == 16) {  
+    /* seg16:off16 */
+    op->type = UD_OP_PTR;
+    op->size = 32;
+    op->lval.ptr.off = inp_uint16(u);
+    op->lval.ptr.seg = inp_uint16(u);
+  } else {
+    /* seg16:off32 */
+    op->type = UD_OP_PTR;
+    op->size = 48;
+    op->lval.ptr.off = inp_uint32(u);
+    op->lval.ptr.seg = inp_uint16(u);
+  }
+}
+
+/* -----------------------------------------------------------------------------
+ * decode_gpr() - Returns decoded General Purpose Register 
+ * -----------------------------------------------------------------------------
+ */
+static enum ud_type 
+decode_gpr(register struct ud* u, unsigned int s, unsigned char rm)
+{
+  s = resolve_operand_size(u, s);
+        
+  switch (s) {
+    case 64:
+        return UD_R_RAX + rm;
+    case SZ_DP:
+    case 32:
+        return UD_R_EAX + rm;
+    case SZ_WP:
+    case 16:
+        return UD_R_AX  + rm;
+    case  8:
+        if (u->dis_mode == 64 && u->pfx_rex) {
+            if (rm >= 4)
+                return UD_R_SPL + (rm-4);
+            return UD_R_AL + rm;
+        } else return UD_R_AL + rm;
+    default:
+        return 0;
+  }
+}
+
+/* -----------------------------------------------------------------------------
+ * resolve_gpr64() - 64-bit general purpose register selection.
+ * -----------------------------------------------------------------------------
+ */
+static enum ud_type 
+resolve_gpr64(struct ud* u, enum ud_operand_code gpr_op)
+{
+  if (gpr_op >= OP_rAXr8 && gpr_op <= OP_rDIr15)
+    gpr_op = (gpr_op - OP_rAXr8) | (REX_B(u->pfx_rex) << 3);          
+  else  gpr_op = (gpr_op - OP_rAX);
+
+  if (u->opr_mode == 16)
+    return gpr_op + UD_R_AX;
+  if (u->dis_mode == 32 || 
+    (u->opr_mode == 32 && ! (REX_W(u->pfx_rex) || u->default64))) {
+    return gpr_op + UD_R_EAX;
+  }
+
+  return gpr_op + UD_R_RAX;
+}
+
+/* -----------------------------------------------------------------------------
+ * resolve_gpr32() - 32-bit general purpose register selection.
+ * -----------------------------------------------------------------------------
+ */
+static enum ud_type 
+resolve_gpr32(struct ud* u, enum ud_operand_code gpr_op)
+{
+  gpr_op = gpr_op - OP_eAX;
+
+  if (u->opr_mode == 16) 
+    return gpr_op + UD_R_AX;
+
+  return gpr_op +  UD_R_EAX;
+}
+
+/* -----------------------------------------------------------------------------
+ * resolve_reg() - Resolves the register type 
+ * -----------------------------------------------------------------------------
+ */
+static enum ud_type 
+resolve_reg(struct ud* u, unsigned int type, unsigned char i)
+{
+  switch (type) {
+    case T_MMX :    return UD_R_MM0  + (i & 7);
+    case T_XMM :    return UD_R_XMM0 + i;
+    case T_CRG :    return UD_R_CR0  + i;
+    case T_DBG :    return UD_R_DR0  + i;
+    case T_SEG :    return UD_R_ES   + (i & 7);
+    case T_NONE:
+    default:    return UD_NONE;
+  }
+}
+
+/* -----------------------------------------------------------------------------
+ * decode_imm() - Decodes Immediate values.
+ * -----------------------------------------------------------------------------
+ */
+static void 
+decode_imm(struct ud* u, unsigned int s, struct ud_operand *op)
+{
+  op->size = resolve_operand_size(u, s);
+  op->type = UD_OP_IMM;
+
+  switch (op->size) {
+    case  8: op->lval.sbyte = inp_uint8(u);   break;
+    case 16: op->lval.uword = inp_uint16(u);  break;
+    case 32: op->lval.udword = inp_uint32(u); break;
+    case 64: op->lval.uqword = inp_uint64(u); break;
+    default: return;
+  }
+}
+
+/* -----------------------------------------------------------------------------
+ * decode_modrm() - Decodes ModRM Byte
+ * -----------------------------------------------------------------------------
+ */
+static void 
+decode_modrm(struct ud* u, struct ud_operand *op, unsigned int s, 
+         unsigned char rm_type, struct ud_operand *opreg, 
+         unsigned char reg_size, unsigned char reg_type)
+{
+  unsigned char mod, rm, reg;
+
+  inp_next(u);
+
+  /* get mod, r/m and reg fields */
+  mod = MODRM_MOD(inp_curr(u));
+  rm  = (REX_B(u->pfx_rex) << 3) | MODRM_RM(inp_curr(u));
+  reg = (REX_R(u->pfx_rex) << 3) | MODRM_REG(inp_curr(u));
+
+  op->size = resolve_operand_size(u, s);
+
+  /* if mod is 11b, the r/m field specifies a gpr/mmx/sse/control/debug register */
+  if (mod == 3) {
+    op->type = UD_OP_REG;
+    if (rm_type ==  T_GPR)
+        op->base = decode_gpr(u, op->size, rm);
+    else    op->base = resolve_reg(u, rm_type, (REX_B(u->pfx_rex) << 3) | (rm&7));
+  } 
+  /* else it's memory addressing */
+  else {
+    op->type = UD_OP_MEM;
+
+    /* 64bit addressing */
+    if (u->adr_mode == 64) {
+
+        op->base = UD_R_RAX + rm;
+
+        /* get offset type */
+        if (mod == 1)
+            op->offset = 8;
+        else if (mod == 2)
+            op->offset = 32;
+        else if (mod == 0 && (rm & 7) == 5) {           
+            op->base = UD_R_RIP;
+            op->offset = 32;
+        } else  op->offset = 0;
+
+        /* Scale-Index-Base (SIB) */
+        if ((rm & 7) == 4) {
+            inp_next(u);
+            
+            op->scale = (1 << SIB_S(inp_curr(u))) & ~1;
+            op->index = UD_R_RAX + (SIB_I(inp_curr(u)) | (REX_X(u->pfx_rex) << 3));
+            op->base  = UD_R_RAX + (SIB_B(inp_curr(u)) | (REX_B(u->pfx_rex) << 3));
+
+            /* special conditions for base reference */
+            if (op->index == UD_R_RSP) {
+                op->index = UD_NONE;
+                op->scale = UD_NONE;
+            }
+
+            if (op->base == UD_R_RBP || op->base == UD_R_R13) {
+                if (mod == 0) 
+                    op->base = UD_NONE;
+                if (mod == 1)
+                    op->offset = 8;
+                else op->offset = 32;
+            }
+        }
+    } 
+
+    /* 32-Bit addressing mode */
+    else if (u->adr_mode == 32) {
+
+        /* get base */
+        op->base = UD_R_EAX + rm;
+
+        /* get offset type */
+        if (mod == 1)
+            op->offset = 8;
+        else if (mod == 2)
+            op->offset = 32;
+        else if (mod == 0 && rm == 5) {
+            op->base = UD_NONE;
+            op->offset = 32;
+        } else  op->offset = 0;
+
+        /* Scale-Index-Base (SIB) */
+        if ((rm & 7) == 4) {
+            inp_next(u);
+
+            op->scale = (1 << SIB_S(inp_curr(u))) & ~1;
+            op->index = UD_R_EAX + (SIB_I(inp_curr(u)) | (REX_X(u->pfx_rex) << 3));
+            op->base  = UD_R_EAX + (SIB_B(inp_curr(u)) | (REX_B(u->pfx_rex) << 3));
+
+            if (op->index == UD_R_ESP) {
+                op->index = UD_NONE;
+                op->scale = UD_NONE;
+            }
+
+            /* special condition for base reference */
+            if (op->base == UD_R_EBP) {
+                if (mod == 0)
+                    op->base = UD_NONE;
+                if (mod == 1)
+                    op->offset = 8;
+                else op->offset = 32;
+            }
+        }
+    } 
+
+    /* 16bit addressing mode */
+    else  {
+        switch (rm) {
+            case 0: op->base = UD_R_BX; op->index = UD_R_SI; break;
+            case 1: op->base = UD_R_BX; op->index = UD_R_DI; break;
+            case 2: op->base = UD_R_BP; op->index = UD_R_SI; break;
+            case 3: op->base = UD_R_BP; op->index = UD_R_DI; break;
+            case 4: op->base = UD_R_SI; break;
+            case 5: op->base = UD_R_DI; break;
+            case 6: op->base = UD_R_BP; break;
+            case 7: op->base = UD_R_BX; break;
+        }
+
+        if (mod == 0 && rm == 6) {
+            op->offset= 16;
+            op->base = UD_NONE;
+        }
+        else if (mod == 1)
+            op->offset = 8;
+        else if (mod == 2) 
+            op->offset = 16;
+    }
+  }  
+
+  /* extract offset, if any */
+  switch(op->offset) {
+    case 8 : op->lval.ubyte  = inp_uint8(u);  break;
+    case 16: op->lval.uword  = inp_uint16(u);  break;
+    case 32: op->lval.udword = inp_uint32(u); break;
+    case 64: op->lval.uqword = inp_uint64(u); break;
+    default: break;
+  }
+
+  /* resolve register encoded in reg field */
+  if (opreg) {
+    opreg->type = UD_OP_REG;
+    opreg->size = resolve_operand_size(u, reg_size);
+    if (reg_type == T_GPR) 
+        opreg->base = decode_gpr(u, opreg->size, reg);
+    else opreg->base = resolve_reg(u, reg_type, reg);
+  }
+}
+
+/* -----------------------------------------------------------------------------
+ * decode_o() - Decodes offset
+ * -----------------------------------------------------------------------------
+ */
+static void 
+decode_o(struct ud* u, unsigned int s, struct ud_operand *op)
+{
+  switch (u->adr_mode) {
+    case 64:
+        op->offset = 64; 
+        op->lval.uqword = inp_uint64(u); 
+        break;
+    case 32:
+        op->offset = 32; 
+        op->lval.udword = inp_uint32(u); 
+        break;
+    case 16:
+        op->offset = 16; 
+        op->lval.uword  = inp_uint16(u); 
+        break;
+    default:
+        return;
+  }
+  op->type = UD_OP_MEM;
+  op->size = resolve_operand_size(u, s);
+}
+
+/* -----------------------------------------------------------------------------
+ * disasm_operands() - Disassembles Operands.
+ * -----------------------------------------------------------------------------
+ */
+static int disasm_operands(register struct ud* u)
+{
+
+
+  /* mopXt = map entry, operand X, type; */
+  enum ud_operand_code mop1t = u->itab_entry->operand1.type;
+  enum ud_operand_code mop2t = u->itab_entry->operand2.type;
+  enum ud_operand_code mop3t = u->itab_entry->operand3.type;
+
+  /* mopXs = map entry, operand X, size */
+  unsigned int mop1s = u->itab_entry->operand1.size;
+  unsigned int mop2s = u->itab_entry->operand2.size;
+  unsigned int mop3s = u->itab_entry->operand3.size;
+
+  /* iop = instruction operand */
+  register struct ud_operand* iop = u->operand;
+    
+  switch(mop1t) {
+    
+    case OP_A :
+        decode_a(u, &(iop[0]));
+        break;
+    
+    /* M[b] ... */
+    case OP_M :
+        if (MODRM_MOD(inp_peek(u)) == 3)
+            u->error = 1; /* fall through */
+    /* E, G/P/V/I/CL/1/S */
+    case OP_E :
+        if (mop2t == OP_G) {
+            decode_modrm(u, &(iop[0]), mop1s, T_GPR, &(iop[1]), mop2s, T_GPR);
+            if (mop3t == OP_I)
+                decode_imm(u, mop3s, &(iop[2]));
+            else if (mop3t == OP_CL) {
+                iop[2].type = UD_OP_REG;
+                iop[2].base = UD_R_CL;
+                iop[2].size = 8;
+            }
+        }
+        else if (mop2t == OP_P)
+            decode_modrm(u, &(iop[0]), mop1s, T_GPR, &(iop[1]), mop2s, T_MMX);
+        else if (mop2t == OP_V)
+            decode_modrm(u, &(iop[0]), mop1s, T_GPR, &(iop[1]), mop2s, T_XMM);
+        else if (mop2t == OP_S)
+            decode_modrm(u, &(iop[0]), mop1s, T_GPR, &(iop[1]), mop2s, T_SEG);
+        else {
+            decode_modrm(u, &(iop[0]), mop1s, T_GPR, NULL, 0, T_NONE);
+            if (mop2t == OP_CL) {
+                iop[1].type = UD_OP_REG;
+                iop[1].base = UD_R_CL;
+                iop[1].size = 8;
+            } else if (mop2t == OP_I1) {
+                iop[1].type = UD_OP_CONST;
+                u->operand[1].lval.udword = 1;
+            } else if (mop2t == OP_I) {
+                decode_imm(u, mop2s, &(iop[1]));
+            }
+        }
+        break;
+
+    /* G, E/PR[,I]/VR */
+    case OP_G :
+        if (mop2t == OP_M) {
+            if (MODRM_MOD(inp_peek(u)) == 3)
+                u->error= 1;
+            decode_modrm(u, &(iop[1]), mop2s, T_GPR, &(iop[0]), mop1s, T_GPR);
+        } else if (mop2t == OP_E) {
+            decode_modrm(u, &(iop[1]), mop2s, T_GPR, &(iop[0]), mop1s, T_GPR);
+            if (mop3t == OP_I)
+                decode_imm(u, mop3s, &(iop[2]));
+        } else if (mop2t == OP_PR) {
+            decode_modrm(u, &(iop[1]), mop2s, T_MMX, &(iop[0]), mop1s, T_GPR);
+            if (mop3t == OP_I)
+                decode_imm(u, mop3s, &(iop[2]));
+        } else if (mop2t == OP_VR) {
+            if (MODRM_MOD(inp_peek(u)) != 3)
+                u->error = 1;
+            decode_modrm(u, &(iop[1]), mop2s, T_XMM, &(iop[0]), mop1s, T_GPR);
+        } else if (mop2t == OP_W)
+            decode_modrm(u, &(iop[1]), mop2s, T_XMM, &(iop[0]), mop1s, T_GPR);
+        break;
+
+    /* AL..BH, I/O/DX */
+    case OP_AL : case OP_CL : case OP_DL : case OP_BL :
+    case OP_AH : case OP_CH : case OP_DH : case OP_BH :
+
+        iop[0].type = UD_OP_REG;
+        iop[0].base = UD_R_AL + (mop1t - OP_AL);
+        iop[0].size = 8;
+
+        if (mop2t == OP_I)
+            decode_imm(u, mop2s, &(iop[1]));
+        else if (mop2t == OP_DX) {
+            iop[1].type = UD_OP_REG;
+            iop[1].base = UD_R_DX;
+            iop[1].size = 16;
+        }
+        else if (mop2t == OP_O)
+            decode_o(u, mop2s, &(iop[1]));
+        break;
+
+    /* rAX[r8]..rDI[r15], I/rAX..rDI/O */
+    case OP_rAXr8 : case OP_rCXr9 : case OP_rDXr10 : case OP_rBXr11 :
+    case OP_rSPr12: case OP_rBPr13: case OP_rSIr14 : case OP_rDIr15 :
+    case OP_rAX : case OP_rCX : case OP_rDX : case OP_rBX :
+    case OP_rSP : case OP_rBP : case OP_rSI : case OP_rDI :
+
+        iop[0].type = UD_OP_REG;
+        iop[0].base = resolve_gpr64(u, mop1t);
+
+        if (mop2t == OP_I)
+            decode_imm(u, mop2s, &(iop[1]));
+        else if (mop2t >= OP_rAX && mop2t <= OP_rDI) {
+            iop[1].type = UD_OP_REG;
+            iop[1].base = resolve_gpr64(u, mop2t);
+        }
+        else if (mop2t == OP_O) {
+            decode_o(u, mop2s, &(iop[1]));  
+            iop[0].size = resolve_operand_size(u, mop2s);
+        }
+        break;
+
+    /* AL[r8b]..BH[r15b], I */
+    case OP_ALr8b : case OP_CLr9b : case OP_DLr10b : case OP_BLr11b :
+    case OP_AHr12b: case OP_CHr13b: case OP_DHr14b : case OP_BHr15b :
+    {
+        ud_type_t gpr = (mop1t - OP_ALr8b) + UD_R_AL + 
+                        (REX_B(u->pfx_rex) << 3);
+        if (UD_R_AH <= gpr && u->pfx_rex)
+            gpr = gpr + 4;
+        iop[0].type = UD_OP_REG;
+        iop[0].base = gpr;
+        if (mop2t == OP_I)
+            decode_imm(u, mop2s, &(iop[1]));
+        break;
+    }
+
+    /* eAX..eDX, DX/I */
+    case OP_eAX : case OP_eCX : case OP_eDX : case OP_eBX :
+    case OP_eSP : case OP_eBP : case OP_eSI : case OP_eDI :
+        iop[0].type = UD_OP_REG;
+        iop[0].base = resolve_gpr32(u, mop1t);
+        if (mop2t == OP_DX) {
+            iop[1].type = UD_OP_REG;
+            iop[1].base = UD_R_DX;
+            iop[1].size = 16;
+        } else if (mop2t == OP_I)
+            decode_imm(u, mop2s, &(iop[1]));
+        break;
+
+    /* ES..GS */
+    case OP_ES : case OP_CS : case OP_DS :
+    case OP_SS : case OP_FS : case OP_GS :
+
+        /* in 64-bit mode, only FS and GS are allowed */
+        if (u->dis_mode == 64)
+            if (mop1t != OP_FS && mop1t != OP_GS)
+                u->error= 1;
+        iop[0].type = UD_OP_REG;
+        iop[0].base = (mop1t - OP_ES) + UD_R_ES;
+        iop[0].size = 16;
+
+        break;
+
+    /* J */
+    case OP_J :
+        decode_imm(u, mop1s, &(iop[0]));        
+        iop[0].type = UD_OP_JIMM;
+        break ;
+
+    /* PR, I */
+    case OP_PR:
+        if (MODRM_MOD(inp_peek(u)) != 3)
+            u->error = 1;
+        decode_modrm(u, &(iop[0]), mop1s, T_MMX, NULL, 0, T_NONE);
+        if (mop2t == OP_I)
+            decode_imm(u, mop2s, &(iop[1]));
+        break; 
+
+    /* VR, I */
+    case OP_VR:
+        if (MODRM_MOD(inp_peek(u)) != 3)
+            u->error = 1;
+        decode_modrm(u, &(iop[0]), mop1s, T_XMM, NULL, 0, T_NONE);
+        if (mop2t == OP_I)
+            decode_imm(u, mop2s, &(iop[1]));
+        break; 
+
+    /* P, Q[,I]/W/E[,I],VR */
+    case OP_P :
+        if (mop2t == OP_Q) {
+            decode_modrm(u, &(iop[1]), mop2s, T_MMX, &(iop[0]), mop1s, T_MMX);
+            if (mop3t == OP_I)
+                decode_imm(u, mop3s, &(iop[2]));
+        } else if (mop2t == OP_W) {
+            decode_modrm(u, &(iop[1]), mop2s, T_XMM, &(iop[0]), mop1s, T_MMX);
+        } else if (mop2t == OP_VR) {
+            if (MODRM_MOD(inp_peek(u)) != 3)
+                u->error = 1;
+            decode_modrm(u, &(iop[1]), mop2s, T_XMM, &(iop[0]), mop1s, T_MMX);
+        } else if (mop2t == OP_E) {
+            decode_modrm(u, &(iop[1]), mop2s, T_GPR, &(iop[0]), mop1s, T_MMX);
+            if (mop3t == OP_I)
+                decode_imm(u, mop3s, &(iop[2]));
+        }
+        break;
+
+    /* R, C/D */
+    case OP_R :
+        if (mop2t == OP_C)
+            decode_modrm(u, &(iop[0]), mop1s, T_GPR, &(iop[1]), mop2s, T_CRG);
+        else if (mop2t == OP_D)
+            decode_modrm(u, &(iop[0]), mop1s, T_GPR, &(iop[1]), mop2s, T_DBG);
+        break;
+
+    /* C, R */
+    case OP_C :
+        decode_modrm(u, &(iop[1]), mop2s, T_GPR, &(iop[0]), mop1s, T_CRG);
+        break;
+
+    /* D, R */
+    case OP_D :
+        decode_modrm(u, &(iop[1]), mop2s, T_GPR, &(iop[0]), mop1s, T_DBG);
+        break;
+
+    /* Q, P */
+    case OP_Q :
+        decode_modrm(u, &(iop[0]), mop1s, T_MMX, &(iop[1]), mop2s, T_MMX);
+        break;
+
+    /* S, E */
+    case OP_S :
+        decode_modrm(u, &(iop[1]), mop2s, T_GPR, &(iop[0]), mop1s, T_SEG);
+        break;
+
+    /* W, V */
+    case OP_W :
+        decode_modrm(u, &(iop[0]), mop1s, T_XMM, &(iop[1]), mop2s, T_XMM);
+        break;
+
+    /* V, W[,I]/Q/M/E */
+    case OP_V :
+        if (mop2t == OP_W) {
+            /* special cases for movlps and movhps */
+            if (MODRM_MOD(inp_peek(u)) == 3) {
+                if (u->mnemonic == UD_Imovlps)
+                    u->mnemonic = UD_Imovhlps;
+                else
+                if (u->mnemonic == UD_Imovhps)
+                    u->mnemonic = UD_Imovlhps;
+            }
+            decode_modrm(u, &(iop[1]), mop2s, T_XMM, &(iop[0]), mop1s, T_XMM);
+            if (mop3t == OP_I)
+                decode_imm(u, mop3s, &(iop[2]));
+        } else if (mop2t == OP_Q)
+            decode_modrm(u, &(iop[1]), mop2s, T_MMX, &(iop[0]), mop1s, T_XMM);
+        else if (mop2t == OP_M) {
+            if (MODRM_MOD(inp_peek(u)) == 3)
+                u->error = 1;
+            decode_modrm(u, &(iop[1]), mop2s, T_GPR, &(iop[0]), mop1s, T_XMM);
+        } else if (mop2t == OP_E) {
+            decode_modrm(u, &(iop[1]), mop2s, T_GPR, &(iop[0]), mop1s, T_XMM);
+        } else if (mop2t == OP_PR) {
+            decode_modrm(u, &(iop[1]), mop2s, T_MMX, &(iop[0]), mop1s, T_XMM);
+        }
+        break;
+
+    /* DX, eAX/AL */
+    case OP_DX :
+        iop[0].type = UD_OP_REG;
+        iop[0].base = UD_R_DX;
+        iop[0].size = 16;
+
+        if (mop2t == OP_eAX) {
+            iop[1].type = UD_OP_REG;    
+            iop[1].base = resolve_gpr32(u, mop2t);
+        } else if (mop2t == OP_AL) {
+            iop[1].type = UD_OP_REG;
+            iop[1].base = UD_R_AL;
+            iop[1].size = 8;
+        }
+
+        break;
+
+    /* I, I/AL/eAX */
+    case OP_I :
+        decode_imm(u, mop1s, &(iop[0]));
+        if (mop2t == OP_I)
+            decode_imm(u, mop2s, &(iop[1]));
+        else if (mop2t == OP_AL) {
+            iop[1].type = UD_OP_REG;
+            iop[1].base = UD_R_AL;
+            iop[1].size = 8;  /* AL is an 8-bit register */
+        } else if (mop2t == OP_eAX) {
+            iop[1].type = UD_OP_REG;    
+            iop[1].base = resolve_gpr32(u, mop2t);
+        }
+        break;
+
+    /* O, AL/eAX */
+    case OP_O :
+        decode_o(u, mop1s, &(iop[0]));
+        iop[1].type = UD_OP_REG;
+        iop[1].size = resolve_operand_size(u, mop1s);
+        if (mop2t == OP_AL)
+            iop[1].base = UD_R_AL;
+        else if (mop2t == OP_eAX)
+            iop[1].base = resolve_gpr32(u, mop2t);
+        else if (mop2t == OP_rAX)
+            iop[1].base = resolve_gpr64(u, mop2t);      
+        break;
+
+    /* 3 */
+    case OP_I3 :
+        iop[0].type = UD_OP_CONST;
+        iop[0].lval.sbyte = 3;
+        break;
+
+    /* ST(n), ST(n) */
+    case OP_ST0 : case OP_ST1 : case OP_ST2 : case OP_ST3 :
+    case OP_ST4 : case OP_ST5 : case OP_ST6 : case OP_ST7 :
+
+        iop[0].type = UD_OP_REG;
+        iop[0].base = (mop1t-OP_ST0) + UD_R_ST0;
+        iop[0].size = 0;
+
+        if (mop2t >= OP_ST0 && mop2t <= OP_ST7) {
+            iop[1].type = UD_OP_REG;
+            iop[1].base = (mop2t-OP_ST0) + UD_R_ST0;
+            iop[1].size = 0;
+        }
+        break;
+
+    /* AX */
+    case OP_AX:
+        iop[0].type = UD_OP_REG;
+        iop[0].base = UD_R_AX;
+        iop[0].size = 16;
+        break;
+
+    /* none */
+    default :
+        iop[0].type = iop[1].type = iop[2].type = UD_NONE;
+  }
+
+  return 0;
+}
+
+/* -----------------------------------------------------------------------------
+ * clear_insn() - clear instruction decode state
+ * -----------------------------------------------------------------------------
+ */
+static int clear_insn(register struct ud* u)
+{
+  u->error     = 0;
+  u->pfx_seg   = 0;
+  u->pfx_opr   = 0;
+  u->pfx_adr   = 0;
+  u->pfx_lock  = 0;
+  u->pfx_repne = 0;
+  u->pfx_rep   = 0;
+  u->pfx_repe  = 0;
+  u->pfx_rex   = 0;
+  u->pfx_insn  = 0;
+  u->mnemonic  = UD_Inone;
+  u->itab_entry = NULL;
+
+  memset( &u->operand[ 0 ], 0, sizeof( struct ud_operand ) );
+  memset( &u->operand[ 1 ], 0, sizeof( struct ud_operand ) );
+  memset( &u->operand[ 2 ], 0, sizeof( struct ud_operand ) );
+ 
+  return 0;
+}
+
+static int do_mode( struct ud* u )
+{
+  /* if in error state, bail out */
+  if ( u->error ) return -1; 
+
+  /* propagate prefix effects */
+  if ( u->dis_mode == 64 ) {  /* set 64bit-mode flags */
+
+    /* Check validity of the instruction in 64-bit mode */
+    if ( P_INV64( u->itab_entry->prefix ) ) {
+        u->error = 1;
+        return -1;
+    }
+
+    /* The effective rex prefix is the rex prefix masked by the
+     * rex mask hard-coded for the instruction in the opcode map.
+     */
+    u->pfx_rex = ( u->pfx_rex & 0x40 ) | 
+                 ( u->pfx_rex & REX_PFX_MASK( u->itab_entry->prefix ) ); 
+
+    /* whether this instruction has a default operand size of
+     * 64 bits, also hard-coded into the opcode map.
+     */
+    u->default64 = P_DEF64( u->itab_entry->prefix ); 
+    /* calculate effective operand size */
+    if ( REX_W( u->pfx_rex ) ) {
+        u->opr_mode = 64;
+    } else if ( u->pfx_opr ) {
+        u->opr_mode = 16;
+    } else {
+        /* unless the instruction's default operand size is 64,
+         * the effective operand size in the absence of a rex.w
+         * prefix is 32.
+         */
+        u->opr_mode = ( u->default64 ) ? 64 : 32;
+    }
+
+    /* calculate effective address size */
+    u->adr_mode = (u->pfx_adr) ? 32 : 64;
+  } else if ( u->dis_mode == 32 ) { /* set 32bit-mode flags */
+    u->opr_mode = ( u->pfx_opr ) ? 16 : 32;
+    u->adr_mode = ( u->pfx_adr ) ? 16 : 32;
+  } else if ( u->dis_mode == 16 ) { /* set 16bit-mode flags */
+    u->opr_mode = ( u->pfx_opr ) ? 32 : 16;
+    u->adr_mode = ( u->pfx_adr ) ? 32 : 16;
+  }
+
+  /* These flags determine which operand to apply the operand size
+   * cast to.
+   */
+  u->c1 = ( P_C1( u->itab_entry->prefix ) ) ? 1 : 0;
+  u->c2 = ( P_C2( u->itab_entry->prefix ) ) ? 1 : 0;
+  u->c3 = ( P_C3( u->itab_entry->prefix ) ) ? 1 : 0;
+
+  /* set flags for implicit addressing */
+  u->implicit_addr = P_IMPADDR( u->itab_entry->prefix );
+
+  return 0;
+}
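The 64-bit operand-size rules that do_mode() applies above (REX.W wins, then the 0x66 operand-size prefix, then the instruction's default) can be restated as a tiny standalone function. This is an illustrative sketch, not part of the patch; the function name and parameters are invented for the example:

```c
/* Effective operand size in 64-bit mode: REX.W forces 64; otherwise
 * the 0x66 prefix forces 16; otherwise 32, unless the instruction's
 * default operand size is 64.
 */
static int effective_opr_size(int rex_w, int pfx_opr, int def64)
{
    if (rex_w)
        return 64;
    if (pfx_opr)
        return 16;
    return def64 ? 64 : 32;
}
```

Note that REX.W takes precedence over the 0x66 prefix, mirroring the if/else ordering in do_mode().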
+
+static int gen_hex( struct ud *u )
+{
+  unsigned int i;
+  unsigned char *src_ptr = inp_sess( u );
+  char* src_hex;
+
+  /* bail out if in error state. */
+  if ( u->error ) return -1; 
+  /* output buffer pointer */
+  src_hex = ( char* ) u->insn_hexcode;
+  /* for each byte used to decode instruction */
+  for ( i = 0; i < u->inp_ctr; ++i, ++src_ptr) {
+    snprintf( src_hex, 3, "%02x", *src_ptr & 0xFF );
+    src_hex += 2;
+  }
+  return 0;
+}
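gen_hex() walks the bytes consumed in the current session and renders each as two hex digits. The same loop can be sketched independently of struct ud; note that snprintf's size argument counts the terminating NUL, so each two-character "%02x" write needs at least 3 bytes of room. The helper below is invented for illustration:

```c
#include <stdio.h>

/* Render n bytes as lowercase hex into out (capacity >= 2*n + 1).
 * snprintf's size includes the trailing NUL, hence 3 per byte.
 */
static void hex_dump(const unsigned char *src, unsigned n, char *out)
{
    unsigned i;
    for (i = 0; i < n; ++i)
        snprintf(out + 2 * i, 3, "%02x", src[i]);
}
```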
+
+/* =============================================================================
+ * ud_decode() - Instruction decoder. Returns the number of bytes decoded.
+ * =============================================================================
+ */
+unsigned int ud_decode( struct ud* u )
+{
+  inp_start(u);
+
+  if ( clear_insn( u ) ) {
+    ; /* error */
+  } else if ( get_prefixes( u ) != 0 ) {
+    ; /* error */
+  } else if ( search_itab( u ) != 0 ) {
+    ; /* error */
+  } else if ( do_mode( u ) != 0 ) {
+    ; /* error */
+  } else if ( disasm_operands( u ) != 0 ) {
+    ; /* error */
+  } else if ( resolve_mnemonic( u ) != 0 ) {
+    ; /* error */
+  }
+
+  /* Handle decode error. */
+  if ( u->error ) {
+    /* clear out the decode data. */
+    clear_insn( u );
+    /* mark the sequence of bytes as invalid. */
+    u->itab_entry = & ie_invalid;
+    u->mnemonic = u->itab_entry->mnemonic;
+  } 
+
+  u->insn_offset = u->pc; /* set offset of instruction */
+  u->insn_fill = 0;   /* set translation buffer index to 0 */
+  u->pc += u->inp_ctr;    /* move program counter by bytes decoded */
+  gen_hex( u );       /* generate hex code */
+
+  /* return number of bytes disassembled. */
+  return u->inp_ctr;
+}
+
+/* vim:cindent
+ * vim:ts=4
+ * vim:sw=4
+ * vim:expandtab
+ */
diff -r 32034d1914a6 xen/kdb/x86/udis86-1.7/decode.h
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/xen/kdb/x86/udis86-1.7/decode.h	Wed Aug 29 14:39:57 2012 -0700
@@ -0,0 +1,273 @@
+#ifndef UD_DECODE_H
+#define UD_DECODE_H
+
+#define MAX_INSN_LENGTH 15
+
+/* register classes */
+#define T_NONE  0
+#define T_GPR   1
+#define T_MMX   2
+#define T_CRG   3
+#define T_DBG   4
+#define T_SEG   5
+#define T_XMM   6
+
+/* itab prefix bits */
+#define P_none          ( 0 )
+#define P_c1            ( 1 << 0 )
+#define P_C1(n)         ( ( n >> 0 ) & 1 )
+#define P_rexb          ( 1 << 1 )
+#define P_REXB(n)       ( ( n >> 1 ) & 1 )
+#define P_depM          ( 1 << 2 )
+#define P_DEPM(n)       ( ( n >> 2 ) & 1 )
+#define P_c3            ( 1 << 3 )
+#define P_C3(n)         ( ( n >> 3 ) & 1 )
+#define P_inv64         ( 1 << 4 )
+#define P_INV64(n)      ( ( n >> 4 ) & 1 )
+#define P_rexw          ( 1 << 5 )
+#define P_REXW(n)       ( ( n >> 5 ) & 1 )
+#define P_c2            ( 1 << 6 )
+#define P_C2(n)         ( ( n >> 6 ) & 1 )
+#define P_def64         ( 1 << 7 )
+#define P_DEF64(n)      ( ( n >> 7 ) & 1 )
+#define P_rexr          ( 1 << 8 )
+#define P_REXR(n)       ( ( n >> 8 ) & 1 )
+#define P_oso           ( 1 << 9 )
+#define P_OSO(n)        ( ( n >> 9 ) & 1 )
+#define P_aso           ( 1 << 10 )
+#define P_ASO(n)        ( ( n >> 10 ) & 1 )
+#define P_rexx          ( 1 << 11 )
+#define P_REXX(n)       ( ( n >> 11 ) & 1 )
+#define P_ImpAddr       ( 1 << 12 )
+#define P_IMPADDR(n)    ( ( n >> 12 ) & 1 )
+
+/* rex prefix bits */
+#define REX_W(r)        ( ( 0xF & ( r ) )  >> 3 )
+#define REX_R(r)        ( ( 0x7 & ( r ) )  >> 2 )
+#define REX_X(r)        ( ( 0x3 & ( r ) )  >> 1 )
+#define REX_B(r)        ( ( 0x1 & ( r ) )  >> 0 )
+#define REX_PFX_MASK(n) ( ( P_REXW(n) << 3 ) | \
+                          ( P_REXR(n) << 2 ) | \
+                          ( P_REXX(n) << 1 ) | \
+                          ( P_REXB(n) << 0 ) )
+
+/* scale-index-base bits */
+#define SIB_S(b)        ( ( b ) >> 6 )
+#define SIB_I(b)        ( ( ( b ) >> 3 ) & 7 )
+#define SIB_B(b)        ( ( b ) & 7 )
+
+/* modrm bits */
+#define MODRM_REG(b)    ( ( ( b ) >> 3 ) & 7 )
+#define MODRM_NNN(b)    ( ( ( b ) >> 3 ) & 7 )
+#define MODRM_MOD(b)    ( ( ( b ) >> 6 ) & 3 )
+#define MODRM_RM(b)     ( ( b ) & 7 )
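As a quick sanity check of the field layout these macros encode (mod in bits 7:6, reg/nnn in bits 5:3, rm in bits 2:0), here is a standalone restatement with a worked example byte; the function names are invented for the example:

```c
/* ModRM byte layout: mod[7:6] reg[5:3] rm[2:0].
 * e.g. 0xC8 == 11 001 000b -> mod=3, reg=1, rm=0.
 */
static unsigned modrm_mod(unsigned char b) { return (b >> 6) & 3; }
static unsigned modrm_reg(unsigned char b) { return (b >> 3) & 7; }
static unsigned modrm_rm(unsigned char b)  { return b & 7; }
```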
+
+/* operand type constants -- order is important! */
+
+enum ud_operand_code {
+    OP_NONE,
+
+    OP_A,      OP_E,      OP_M,       OP_G,       
+    OP_I,
+
+    OP_AL,     OP_CL,     OP_DL,      OP_BL,
+    OP_AH,     OP_CH,     OP_DH,      OP_BH,
+
+    OP_ALr8b,  OP_CLr9b,  OP_DLr10b,  OP_BLr11b,
+    OP_AHr12b, OP_CHr13b, OP_DHr14b,  OP_BHr15b,
+
+    OP_AX,     OP_CX,     OP_DX,      OP_BX,
+    OP_SI,     OP_DI,     OP_SP,      OP_BP,
+
+    OP_rAX,    OP_rCX,    OP_rDX,     OP_rBX,  
+    OP_rSP,    OP_rBP,    OP_rSI,     OP_rDI,
+
+    OP_rAXr8,  OP_rCXr9,  OP_rDXr10,  OP_rBXr11,  
+    OP_rSPr12, OP_rBPr13, OP_rSIr14,  OP_rDIr15,
+
+    OP_eAX,    OP_eCX,    OP_eDX,     OP_eBX,
+    OP_eSP,    OP_eBP,    OP_eSI,     OP_eDI,
+
+    OP_ES,     OP_CS,     OP_SS,      OP_DS,  
+    OP_FS,     OP_GS,
+
+    OP_ST0,    OP_ST1,    OP_ST2,     OP_ST3,
+    OP_ST4,    OP_ST5,    OP_ST6,     OP_ST7,
+
+    OP_J,      OP_S,      OP_O,          
+    OP_I1,     OP_I3, 
+
+    OP_V,      OP_W,      OP_Q,       OP_P, 
+
+    OP_R,      OP_C,  OP_D,       OP_VR,  OP_PR
+};
+
+
+/* operand size constants */
+
+enum ud_operand_size {
+    SZ_NA  = 0,
+    SZ_Z   = 1,
+    SZ_V   = 2,
+    SZ_P   = 3,
+    SZ_WP  = 4,
+    SZ_DP  = 5,
+    SZ_MDQ = 6,
+    SZ_RDQ = 7,
+
+    /* the following values are used as is,
+     * and thus hard-coded. changing them 
+     * will break internals 
+     */
+    SZ_B   = 8,
+    SZ_W   = 16,
+    SZ_D   = 32,
+    SZ_Q   = 64,
+    SZ_T   = 80,
+};
+
+/* itab entry operand definitions */
+
+#define O_rSPr12  { OP_rSPr12,   SZ_NA    }
+#define O_BL      { OP_BL,       SZ_NA    }
+#define O_BH      { OP_BH,       SZ_NA    }
+#define O_BP      { OP_BP,       SZ_NA    }
+#define O_AHr12b  { OP_AHr12b,   SZ_NA    }
+#define O_BX      { OP_BX,       SZ_NA    }
+#define O_Jz      { OP_J,        SZ_Z     }
+#define O_Jv      { OP_J,        SZ_V     }
+#define O_Jb      { OP_J,        SZ_B     }
+#define O_rSIr14  { OP_rSIr14,   SZ_NA    }
+#define O_GS      { OP_GS,       SZ_NA    }
+#define O_D       { OP_D,        SZ_NA    }
+#define O_rBPr13  { OP_rBPr13,   SZ_NA    }
+#define O_Ob      { OP_O,        SZ_B     }
+#define O_P       { OP_P,        SZ_NA    }
+#define O_Ow      { OP_O,        SZ_W     }
+#define O_Ov      { OP_O,        SZ_V     }
+#define O_Gw      { OP_G,        SZ_W     }
+#define O_Gv      { OP_G,        SZ_V     }
+#define O_rDX     { OP_rDX,      SZ_NA    }
+#define O_Gx      { OP_G,        SZ_MDQ   }
+#define O_Gd      { OP_G,        SZ_D     }
+#define O_Gb      { OP_G,        SZ_B     }
+#define O_rBXr11  { OP_rBXr11,   SZ_NA    }
+#define O_rDI     { OP_rDI,      SZ_NA    }
+#define O_rSI     { OP_rSI,      SZ_NA    }
+#define O_ALr8b   { OP_ALr8b,    SZ_NA    }
+#define O_eDI     { OP_eDI,      SZ_NA    }
+#define O_Gz      { OP_G,        SZ_Z     }
+#define O_eDX     { OP_eDX,      SZ_NA    }
+#define O_DHr14b  { OP_DHr14b,   SZ_NA    }
+#define O_rSP     { OP_rSP,      SZ_NA    }
+#define O_PR      { OP_PR,       SZ_NA    }
+#define O_NONE    { OP_NONE,     SZ_NA    }
+#define O_rCX     { OP_rCX,      SZ_NA    }
+#define O_jWP     { OP_J,        SZ_WP    }
+#define O_rDXr10  { OP_rDXr10,   SZ_NA    }
+#define O_Md      { OP_M,        SZ_D     }
+#define O_C       { OP_C,        SZ_NA    }
+#define O_G       { OP_G,        SZ_NA    }
+#define O_Mb      { OP_M,        SZ_B     }
+#define O_Mt      { OP_M,        SZ_T     }
+#define O_S       { OP_S,        SZ_NA    }
+#define O_Mq      { OP_M,        SZ_Q     }
+#define O_W       { OP_W,        SZ_NA    }
+#define O_ES      { OP_ES,       SZ_NA    }
+#define O_rBX     { OP_rBX,      SZ_NA    }
+#define O_Ed      { OP_E,        SZ_D     }
+#define O_DLr10b  { OP_DLr10b,   SZ_NA    }
+#define O_Mw      { OP_M,        SZ_W     }
+#define O_Eb      { OP_E,        SZ_B     }
+#define O_Ex      { OP_E,        SZ_MDQ   }
+#define O_Ez      { OP_E,        SZ_Z     }
+#define O_Ew      { OP_E,        SZ_W     }
+#define O_Ev      { OP_E,        SZ_V     }
+#define O_Ep      { OP_E,        SZ_P     }
+#define O_FS      { OP_FS,       SZ_NA    }
+#define O_Ms      { OP_M,        SZ_W     }
+#define O_rAXr8   { OP_rAXr8,    SZ_NA    }
+#define O_eBP     { OP_eBP,      SZ_NA    }
+#define O_Isb     { OP_I,        SZ_SB    }
+#define O_eBX     { OP_eBX,      SZ_NA    }
+#define O_rCXr9   { OP_rCXr9,    SZ_NA    }
+#define O_jDP     { OP_J,        SZ_DP    }
+#define O_CH      { OP_CH,       SZ_NA    }
+#define O_CL      { OP_CL,       SZ_NA    }
+#define O_R       { OP_R,        SZ_RDQ   }
+#define O_V       { OP_V,        SZ_NA    }
+#define O_CS      { OP_CS,       SZ_NA    }
+#define O_CHr13b  { OP_CHr13b,   SZ_NA    }
+#define O_eCX     { OP_eCX,      SZ_NA    }
+#define O_eSP     { OP_eSP,      SZ_NA    }
+#define O_SS      { OP_SS,       SZ_NA    }
+#define O_SP      { OP_SP,       SZ_NA    }
+#define O_BLr11b  { OP_BLr11b,   SZ_NA    }
+#define O_SI      { OP_SI,       SZ_NA    }
+#define O_eSI     { OP_eSI,      SZ_NA    }
+#define O_DL      { OP_DL,       SZ_NA    }
+#define O_DH      { OP_DH,       SZ_NA    }
+#define O_DI      { OP_DI,       SZ_NA    }
+#define O_DX      { OP_DX,       SZ_NA    }
+#define O_rBP     { OP_rBP,      SZ_NA    }
+#define O_Gvw     { OP_G,        SZ_MDQ   }
+#define O_I1      { OP_I1,       SZ_NA    }
+#define O_I3      { OP_I3,       SZ_NA    }
+#define O_DS      { OP_DS,       SZ_NA    }
+#define O_ST4     { OP_ST4,      SZ_NA    }
+#define O_ST5     { OP_ST5,      SZ_NA    }
+#define O_ST6     { OP_ST6,      SZ_NA    }
+#define O_ST7     { OP_ST7,      SZ_NA    }
+#define O_ST0     { OP_ST0,      SZ_NA    }
+#define O_ST1     { OP_ST1,      SZ_NA    }
+#define O_ST2     { OP_ST2,      SZ_NA    }
+#define O_ST3     { OP_ST3,      SZ_NA    }
+#define O_E       { OP_E,        SZ_NA    }
+#define O_AH      { OP_AH,       SZ_NA    }
+#define O_M       { OP_M,        SZ_NA    }
+#define O_AL      { OP_AL,       SZ_NA    }
+#define O_CLr9b   { OP_CLr9b,    SZ_NA    }
+#define O_Q       { OP_Q,        SZ_NA    }
+#define O_eAX     { OP_eAX,      SZ_NA    }
+#define O_VR      { OP_VR,       SZ_NA    }
+#define O_AX      { OP_AX,       SZ_NA    }
+#define O_rAX     { OP_rAX,      SZ_NA    }
+#define O_Iz      { OP_I,        SZ_Z     }
+#define O_rDIr15  { OP_rDIr15,   SZ_NA    }
+#define O_Iw      { OP_I,        SZ_W     }
+#define O_Iv      { OP_I,        SZ_V     }
+#define O_Ap      { OP_A,        SZ_P     }
+#define O_CX      { OP_CX,       SZ_NA    }
+#define O_Ib      { OP_I,        SZ_B     }
+#define O_BHr15b  { OP_BHr15b,   SZ_NA    }
+
+
+/* A single operand of an entry in the instruction table. 
+ * (internal use only)
+ */
+struct ud_itab_entry_operand 
+{
+  enum ud_operand_code type;
+  enum ud_operand_size size;
+};
+
+
+/* A single entry in an instruction table. 
+ *(internal use only)
+ */
+struct ud_itab_entry 
+{
+  enum ud_mnemonic_code         mnemonic;
+  struct ud_itab_entry_operand  operand1;
+  struct ud_itab_entry_operand  operand2;
+  struct ud_itab_entry_operand  operand3;
+  uint32_t                      prefix;
+};
+
+#endif /* UD_DECODE_H */
+
+/* vim:cindent
+ * vim:expandtab
+ * vim:ts=4
+ * vim:sw=4
+ */
diff -r 32034d1914a6 xen/kdb/x86/udis86-1.7/extern.h
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/xen/kdb/x86/udis86-1.7/extern.h	Wed Aug 29 14:39:57 2012 -0700
@@ -0,0 +1,67 @@
+/* -----------------------------------------------------------------------------
+ * extern.h
+ *
+ * Copyright (c) 2004, 2005, 2006, Vivek Mohan <vivek@sig9.com>
+ * All rights reserved. See LICENSE
+ * -----------------------------------------------------------------------------
+ */
+#ifndef UD_EXTERN_H
+#define UD_EXTERN_H
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/* #include <stdio.h> */
+#include "types.h"
+
+/* ============================= PUBLIC API ================================= */
+
+extern void ud_init(struct ud*);
+
+extern void ud_set_mode(struct ud*, uint8_t);
+
+extern void ud_set_pc(struct ud*, uint64_t);
+
+extern void ud_set_input_hook(struct ud*, int (*)(struct ud*));
+
+extern void ud_set_input_buffer(struct ud*, uint8_t*, size_t);
+
+#ifndef __UD_STANDALONE__
+extern void ud_set_input_file(struct ud*, FILE*);
+#endif /* __UD_STANDALONE__ */
+
+extern void ud_set_vendor(struct ud*, unsigned);
+
+extern void ud_set_syntax(struct ud*, void (*)(struct ud*));
+
+extern void ud_input_skip(struct ud*, size_t);
+
+extern int ud_input_end(struct ud*);
+
+extern unsigned int ud_decode(struct ud*);
+
+extern unsigned int ud_disassemble(struct ud*);
+
+extern void ud_translate_intel(struct ud*);
+
+extern void ud_translate_att(struct ud*);
+
+extern char* ud_insn_asm(struct ud* u);
+
+extern uint8_t* ud_insn_ptr(struct ud* u);
+
+extern uint64_t ud_insn_off(struct ud*);
+
+extern char* ud_insn_hex(struct ud*);
+
+extern unsigned int ud_insn_len(struct ud* u);
+
+extern const char* ud_lookup_mnemonic(enum ud_mnemonic_code c);
+
+/* ========================================================================== */
+
+#ifdef __cplusplus
+}
+#endif
+#endif
diff -r 32034d1914a6 xen/kdb/x86/udis86-1.7/input.c
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/xen/kdb/x86/udis86-1.7/input.c	Wed Aug 29 14:39:57 2012 -0700
@@ -0,0 +1,226 @@
+/* -----------------------------------------------------------------------------
+ * input.c
+ *
+ * Copyright (c) 2004, 2005, 2006, Vivek Mohan <vivek@sig9.com>
+ * All rights reserved. See LICENSE
+ * -----------------------------------------------------------------------------
+ */
+#include "extern.h"
+#include "types.h"
+#include "input.h"
+
+/* -----------------------------------------------------------------------------
+ * inp_buff_hook() - Hook for buffered inputs.
+ * -----------------------------------------------------------------------------
+ */
+static int 
+inp_buff_hook(struct ud* u)
+{
+  if (u->inp_buff < u->inp_buff_end)
+	return *u->inp_buff++;
+  else	return -1;
+}
+
+#ifndef __UD_STANDALONE__
+/* -----------------------------------------------------------------------------
+ * inp_file_hook() - Hook for FILE inputs.
+ * -----------------------------------------------------------------------------
+ */
+static int 
+inp_file_hook(struct ud* u)
+{
+  return fgetc(u->inp_file);
+}
+#endif /* __UD_STANDALONE__*/
+
+/* =============================================================================
+ * ud_set_input_hook() - Sets input hook.
+ * =============================================================================
+ */
+extern void 
+ud_set_input_hook(register struct ud* u, int (*hook)(struct ud*))
+{
+  u->inp_hook = hook;
+  inp_init(u);
+}
+
+/* =============================================================================
+ * ud_set_input_buffer() - Sets buffer as input.
+ * =============================================================================
+ */
+extern void 
+ud_set_input_buffer(register struct ud* u, uint8_t* buf, size_t len)
+{
+  u->inp_hook = inp_buff_hook;
+  u->inp_buff = buf;
+  u->inp_buff_end = buf + len;
+  inp_init(u);
+}
+
+#ifndef __UD_STANDALONE__
+/* =============================================================================
+ * ud_set_input_file() - Sets file as input.
+ * =============================================================================
+ */
+extern void 
+ud_set_input_file(register struct ud* u, FILE* f)
+{
+  u->inp_hook = inp_file_hook;
+  u->inp_file = f;
+  inp_init(u);
+}
+#endif /* __UD_STANDALONE__ */
+
+/* =============================================================================
+ * ud_input_skip() - Skip n input bytes.
+ * =============================================================================
+ */
+extern void 
+ud_input_skip(struct ud* u, size_t n)
+{
+  while (n--) {
+	u->inp_hook(u);
+  }
+}
+
+/* =============================================================================
+ * ud_input_end() - Test for end of input.
+ * =============================================================================
+ */
+extern int 
+ud_input_end(struct ud* u)
+{
+  return (u->inp_curr == u->inp_fill) && u->inp_end;
+}
+
+/* -----------------------------------------------------------------------------
+ * inp_next() - Loads and returns the next byte from input.
+ *
+ * inp_curr and inp_fill are 8-bit indices into the cache. Because they
+ * wrap around on overflow, the 256-byte cache behaves as a circular
+ * buffer; no explicit bounds handling is needed.
+ *
+ * A buffer inp_sess stores the bytes disassembled for a single session.
+ * -----------------------------------------------------------------------------
+ */
+extern uint8_t inp_next(struct ud* u) 
+{
+  int c = -1;
+  /* if the current pointer is not up to the fill point in the
+   * input cache.
+   */
+  if ( u->inp_curr != u->inp_fill ) {
+	c = u->inp_cache[ ++u->inp_curr ];
+  /* if !end-of-input, call the input hook and get a byte */
+  } else if ( u->inp_end || ( c = u->inp_hook( u ) ) == -1 ) {
+	/* end-of-input: mark it as an error, since the decoder
+	 * expected one more byte.
+	 */
+	u->error = 1;
+	/* flag end of input */
+	u->inp_end = 1;
+	return 0;
+  } else {
+	/* increment pointers, we have a new byte.  */
+	u->inp_curr = ++u->inp_fill;
+	/* add the byte to the cache */
+	u->inp_cache[ u->inp_fill ] = c;
+  }
+  /* record bytes input per decode-session. */
+  u->inp_sess[ u->inp_ctr++ ] = c;
+  /* return byte */
+  return ( uint8_t ) c;
+}
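The circular-buffer property relied on above follows from C's unsigned arithmetic: incrementing a uint8_t index past 255 wraps to 0, so a 256-byte array indexed by it is implicitly a ring. A minimal demonstration (not part of the patch):

```c
#include <stdint.h>

/* Incrementing a uint8_t index wraps modulo 256 by definition,
 * which is what makes the 256-byte input cache circular.
 */
static uint8_t ring_advance(uint8_t idx)
{
    return (uint8_t)(idx + 1);
}
```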
+
+/* -----------------------------------------------------------------------------
+ * inp_back() - Move back a single byte in the stream.
+ * -----------------------------------------------------------------------------
+ */
+extern void
+inp_back(struct ud* u) 
+{
+  if ( u->inp_ctr > 0 ) {
+	--u->inp_curr;
+	--u->inp_ctr;
+  }
+}
+
+/* -----------------------------------------------------------------------------
+ * inp_peek() - Peek into the next byte in source. 
+ * -----------------------------------------------------------------------------
+ */
+extern uint8_t
+inp_peek(struct ud* u) 
+{
+  uint8_t r = inp_next(u);
+  if ( !u->error ) inp_back(u); /* Don't backup if there was an error */
+  return r;
+}
+
+/* -----------------------------------------------------------------------------
+ * inp_move() - Move ahead n input bytes.
+ * -----------------------------------------------------------------------------
+ */
+extern void
+inp_move(struct ud* u, size_t n) 
+{
+  while (n--)
+	inp_next(u);
+}
+
+/*------------------------------------------------------------------------------
+ *  inp_uintN() - return uintN from source.
+ *------------------------------------------------------------------------------
+ */
+extern uint8_t 
+inp_uint8(struct ud* u)
+{
+  return inp_next(u);
+}
+
+extern uint16_t 
+inp_uint16(struct ud* u)
+{
+  uint16_t r, ret;
+
+  ret = inp_next(u);
+  r = inp_next(u);
+  return ret | (r << 8);
+}
+
+extern uint32_t 
+inp_uint32(struct ud* u)
+{
+  uint32_t r, ret;
+
+  ret = inp_next(u);
+  r = inp_next(u);
+  ret = ret | (r << 8);
+  r = inp_next(u);
+  ret = ret | (r << 16);
+  r = inp_next(u);
+  return ret | (r << 24);
+}
+
+extern uint64_t 
+inp_uint64(struct ud* u)
+{
+  uint64_t r, ret;
+
+  ret = inp_next(u);
+  r = inp_next(u);
+  ret = ret | (r << 8);
+  r = inp_next(u);
+  ret = ret | (r << 16);
+  r = inp_next(u);
+  ret = ret | (r << 24);
+  r = inp_next(u);
+  ret = ret | (r << 32);
+  r = inp_next(u);
+  ret = ret | (r << 40);
+  r = inp_next(u);
+  ret = ret | (r << 48);
+  r = inp_next(u);
+  return ret | (r << 56);
+}
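The inp_uintN() readers assemble multi-byte values least-significant byte first (x86 is little-endian). The same shift-and-or pattern, written over a plain byte array rather than the input hook, looks like this (an illustrative helper, not part of the patch):

```c
#include <stdint.h>

/* Assemble a 32-bit little-endian value from four consecutive bytes,
 * least significant byte first -- the pattern used by inp_uint32().
 */
static uint32_t le32(const uint8_t *p)
{
    return (uint32_t)p[0]
         | ((uint32_t)p[1] << 8)
         | ((uint32_t)p[2] << 16)
         | ((uint32_t)p[3] << 24);
}
```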
diff -r 32034d1914a6 xen/kdb/x86/udis86-1.7/input.h
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/xen/kdb/x86/udis86-1.7/input.h	Wed Aug 29 14:39:57 2012 -0700
@@ -0,0 +1,49 @@
+/* -----------------------------------------------------------------------------
+ * input.h
+ *
+ * Copyright (c) 2006, Vivek Mohan <vivek@sig9.com>
+ * All rights reserved. See LICENSE
+ * -----------------------------------------------------------------------------
+ */
+#ifndef UD_INPUT_H
+#define UD_INPUT_H
+
+#include "types.h"
+
+uint8_t inp_next(struct ud*);
+uint8_t inp_peek(struct ud*);
+uint8_t inp_uint8(struct ud*);
+uint16_t inp_uint16(struct ud*);
+uint32_t inp_uint32(struct ud*);
+uint64_t inp_uint64(struct ud*);
+void inp_move(struct ud*, size_t);
+void inp_back(struct ud*);
+
+/* inp_init() - Initializes the input system. */
+#define inp_init(u) \
+do { \
+  u->inp_curr = 0; \
+  u->inp_fill = 0; \
+  u->inp_ctr  = 0; \
+  u->inp_end  = 0; \
+} while (0)
+
+/* inp_start() - Should be called before each decode operation. */
+#define inp_start(u) u->inp_ctr = 0
+
+/* inp_reset() - Resets the current pointer to its position before the current
+ * instruction disassembly was started.
+ */
+#define inp_reset(u) \
+do { \
+  u->inp_curr -= u->inp_ctr; \
+  u->inp_ctr = 0; \
+} while (0)
+
+/* inp_sess() - Returns the pointer to current session. */
+#define inp_sess(u) (u->inp_sess)
+
+/* inp_curr() - Returns the current input byte. */
+#define inp_curr(u) ((u)->inp_cache[(u)->inp_curr])
+
+#endif
diff -r 32034d1914a6 xen/kdb/x86/udis86-1.7/itab.c
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/xen/kdb/x86/udis86-1.7/itab.c	Wed Aug 29 14:39:57 2012 -0700
@@ -0,0 +1,3668 @@
+
+/* itab.c -- auto generated by opgen.py, do not edit. */
+
+#include "types.h"
+#include "decode.h"
+#include "itab.h"
+
+const char * ud_mnemonics_str[] = {
+  "3dnow",
+  "aaa",
+  "aad",
+  "aam",
+  "aas",
+  "adc",
+  "add",
+  "addpd",
+  "addps",
+  "addsd",
+  "addss",
+  "addsubpd",
+  "addsubps",
+  "and",
+  "andpd",
+  "andps",
+  "andnpd",
+  "andnps",
+  "arpl",
+  "movsxd",
+  "bound",
+  "bsf",
+  "bsr",
+  "bswap",
+  "bt",
+  "btc",
+  "btr",
+  "bts",
+  "call",
+  "cbw",
+  "cwde",
+  "cdqe",
+  "clc",
+  "cld",
+  "clflush",
+  "clgi",
+  "cli",
+  "clts",
+  "cmc",
+  "cmovo",
+  "cmovno",
+  "cmovb",
+  "cmovae",
+  "cmovz",
+  "cmovnz",
+  "cmovbe",
+  "cmova",
+  "cmovs",
+  "cmovns",
+  "cmovp",
+  "cmovnp",
+  "cmovl",
+  "cmovge",
+  "cmovle",
+  "cmovg",
+  "cmp",
+  "cmppd",
+  "cmpps",
+  "cmpsb",
+  "cmpsw",
+  "cmpsd",
+  "cmpsq",
+  "cmpss",
+  "cmpxchg",
+  "cmpxchg8b",
+  "comisd",
+  "comiss",
+  "cpuid",
+  "cvtdq2pd",
+  "cvtdq2ps",
+  "cvtpd2dq",
+  "cvtpd2pi",
+  "cvtpd2ps",
+  "cvtpi2ps",
+  "cvtpi2pd",
+  "cvtps2dq",
+  "cvtps2pi",
+  "cvtps2pd",
+  "cvtsd2si",
+  "cvtsd2ss",
+  "cvtsi2ss",
+  "cvtss2si",
+  "cvtss2sd",
+  "cvttpd2pi",
+  "cvttpd2dq",
+  "cvttps2dq",
+  "cvttps2pi",
+  "cvttsd2si",
+  "cvtsi2sd",
+  "cvttss2si",
+  "cwd",
+  "cdq",
+  "cqo",
+  "daa",
+  "das",
+  "dec",
+  "div",
+  "divpd",
+  "divps",
+  "divsd",
+  "divss",
+  "emms",
+  "enter",
+  "f2xm1",
+  "fabs",
+  "fadd",
+  "faddp",
+  "fbld",
+  "fbstp",
+  "fchs",
+  "fclex",
+  "fcmovb",
+  "fcmove",
+  "fcmovbe",
+  "fcmovu",
+  "fcmovnb",
+  "fcmovne",
+  "fcmovnbe",
+  "fcmovnu",
+  "fucomi",
+  "fcom",
+  "fcom2",
+  "fcomp3",
+  "fcomi",
+  "fucomip",
+  "fcomip",
+  "fcomp",
+  "fcomp5",
+  "fcompp",
+  "fcos",
+  "fdecstp",
+  "fdiv",
+  "fdivp",
+  "fdivr",
+  "fdivrp",
+  "femms",
+  "ffree",
+  "ffreep",
+  "ficom",
+  "ficomp",
+  "fild",
+  "fncstp",
+  "fninit",
+  "fiadd",
+  "fidivr",
+  "fidiv",
+  "fisub",
+  "fisubr",
+  "fist",
+  "fistp",
+  "fisttp",
+  "fld",
+  "fld1",
+  "fldl2t",
+  "fldl2e",
+  "fldlpi",
+  "fldlg2",
+  "fldln2",
+  "fldz",
+  "fldcw",
+  "fldenv",
+  "fmul",
+  "fmulp",
+  "fimul",
+  "fnop",
+  "fpatan",
+  "fprem",
+  "fprem1",
+  "fptan",
+  "frndint",
+  "frstor",
+  "fnsave",
+  "fscale",
+  "fsin",
+  "fsincos",
+  "fsqrt",
+  "fstp",
+  "fstp1",
+  "fstp8",
+  "fstp9",
+  "fst",
+  "fnstcw",
+  "fnstenv",
+  "fnstsw",
+  "fsub",
+  "fsubp",
+  "fsubr",
+  "fsubrp",
+  "ftst",
+  "fucom",
+  "fucomp",
+  "fucompp",
+  "fxam",
+  "fxch",
+  "fxch4",
+  "fxch7",
+  "fxrstor",
+  "fxsave",
+  "fpxtract",
+  "fyl2x",
+  "fyl2xp1",
+  "haddpd",
+  "haddps",
+  "hlt",
+  "hsubpd",
+  "hsubps",
+  "idiv",
+  "in",
+  "imul",
+  "inc",
+  "insb",
+  "insw",
+  "insd",
+  "int1",
+  "int3",
+  "int",
+  "into",
+  "invd",
+  "invlpg",
+  "invlpga",
+  "iretw",
+  "iretd",
+  "iretq",
+  "jo",
+  "jno",
+  "jb",
+  "jae",
+  "jz",
+  "jnz",
+  "jbe",
+  "ja",
+  "js",
+  "jns",
+  "jp",
+  "jnp",
+  "jl",
+  "jge",
+  "jle",
+  "jg",
+  "jcxz",
+  "jecxz",
+  "jrcxz",
+  "jmp",
+  "lahf",
+  "lar",
+  "lddqu",
+  "ldmxcsr",
+  "lds",
+  "lea",
+  "les",
+  "lfs",
+  "lgs",
+  "lidt",
+  "lss",
+  "leave",
+  "lfence",
+  "lgdt",
+  "lldt",
+  "lmsw",
+  "lock",
+  "lodsb",
+  "lodsw",
+  "lodsd",
+  "lodsq",
+  "loopnz",
+  "loope",
+  "loop",
+  "lsl",
+  "ltr",
+  "maskmovq",
+  "maxpd",
+  "maxps",
+  "maxsd",
+  "maxss",
+  "mfence",
+  "minpd",
+  "minps",
+  "minsd",
+  "minss",
+  "monitor",
+  "mov",
+  "movapd",
+  "movaps",
+  "movd",
+  "movddup",
+  "movdqa",
+  "movdqu",
+  "movdq2q",
+  "movhpd",
+  "movhps",
+  "movlhps",
+  "movlpd",
+  "movlps",
+  "movhlps",
+  "movmskpd",
+  "movmskps",
+  "movntdq",
+  "movnti",
+  "movntpd",
+  "movntps",
+  "movntq",
+  "movq",
+  "movqa",
+  "movq2dq",
+  "movsb",
+  "movsw",
+  "movsd",
+  "movsq",
+  "movsldup",
+  "movshdup",
+  "movss",
+  "movsx",
+  "movupd",
+  "movups",
+  "movzx",
+  "mul",
+  "mulpd",
+  "mulps",
+  "mulsd",
+  "mulss",
+  "mwait",
+  "neg",
+  "nop",
+  "not",
+  "or",
+  "orpd",
+  "orps",
+  "out",
+  "outsb",
+  "outsw",
+  "outsd",
+  "outsq",
+  "packsswb",
+  "packssdw",
+  "packuswb",
+  "paddb",
+  "paddw",
+  "paddq",
+  "paddsb",
+  "paddsw",
+  "paddusb",
+  "paddusw",
+  "pand",
+  "pandn",
+  "pause",
+  "pavgb",
+  "pavgw",
+  "pcmpeqb",
+  "pcmpeqw",
+  "pcmpeqd",
+  "pcmpgtb",
+  "pcmpgtw",
+  "pcmpgtd",
+  "pextrw",
+  "pinsrw",
+  "pmaddwd",
+  "pmaxsw",
+  "pmaxub",
+  "pminsw",
+  "pminub",
+  "pmovmskb",
+  "pmulhuw",
+  "pmulhw",
+  "pmullw",
+  "pmuludq",
+  "pop",
+  "popa",
+  "popad",
+  "popfw",
+  "popfd",
+  "popfq",
+  "por",
+  "prefetch",
+  "prefetchnta",
+  "prefetcht0",
+  "prefetcht1",
+  "prefetcht2",
+  "psadbw",
+  "pshufd",
+  "pshufhw",
+  "pshuflw",
+  "pshufw",
+  "pslldq",
+  "psllw",
+  "pslld",
+  "psllq",
+  "psraw",
+  "psrad",
+  "psrlw",
+  "psrld",
+  "psrlq",
+  "psrldq",
+  "psubb",
+  "psubw",
+  "psubd",
+  "psubq",
+  "psubsb",
+  "psubsw",
+  "psubusb",
+  "psubusw",
+  "punpckhbw",
+  "punpckhwd",
+  "punpckhdq",
+  "punpckhqdq",
+  "punpcklbw",
+  "punpcklwd",
+  "punpckldq",
+  "punpcklqdq",
+  "pi2fw",
+  "pi2fd",
+  "pf2iw",
+  "pf2id",
+  "pfnacc",
+  "pfpnacc",
+  "pfcmpge",
+  "pfmin",
+  "pfrcp",
+  "pfrsqrt",
+  "pfsub",
+  "pfadd",
+  "pfcmpgt",
+  "pfmax",
+  "pfrcpit1",
+  "pfrsqit1",
+  "pfsubr",
+  "pfacc",
+  "pfcmpeq",
+  "pfmul",
+  "pfrcpit2",
+  "pmulhrw",
+  "pswapd",
+  "pavgusb",
+  "push",
+  "pusha",
+  "pushad",
+  "pushfw",
+  "pushfd",
+  "pushfq",
+  "pxor",
+  "rcl",
+  "rcr",
+  "rol",
+  "ror",
+  "rcpps",
+  "rcpss",
+  "rdmsr",
+  "rdpmc",
+  "rdtsc",
+  "rdtscp",
+  "repne",
+  "rep",
+  "ret",
+  "retf",
+  "rsm",
+  "rsqrtps",
+  "rsqrtss",
+  "sahf",
+  "sal",
+  "salc",
+  "sar",
+  "shl",
+  "shr",
+  "sbb",
+  "scasb",
+  "scasw",
+  "scasd",
+  "scasq",
+  "seto",
+  "setno",
+  "setb",
+  "setnb",
+  "setz",
+  "setnz",
+  "setbe",
+  "seta",
+  "sets",
+  "setns",
+  "setp",
+  "setnp",
+  "setl",
+  "setge",
+  "setle",
+  "setg",
+  "sfence",
+  "sgdt",
+  "shld",
+  "shrd",
+  "shufpd",
+  "shufps",
+  "sidt",
+  "sldt",
+  "smsw",
+  "sqrtps",
+  "sqrtpd",
+  "sqrtsd",
+  "sqrtss",
+  "stc",
+  "std",
+  "stgi",
+  "sti",
+  "skinit",
+  "stmxcsr",
+  "stosb",
+  "stosw",
+  "stosd",
+  "stosq",
+  "str",
+  "sub",
+  "subpd",
+  "subps",
+  "subsd",
+  "subss",
+  "swapgs",
+  "syscall",
+  "sysenter",
+  "sysexit",
+  "sysret",
+  "test",
+  "ucomisd",
+  "ucomiss",
+  "ud2",
+  "unpckhpd",
+  "unpckhps",
+  "unpcklps",
+  "unpcklpd",
+  "verr",
+  "verw",
+  "vmcall",
+  "vmclear",
+  "vmxon",
+  "vmptrld",
+  "vmptrst",
+  "vmresume",
+  "vmxoff",
+  "vmrun",
+  "vmmcall",
+  "vmload",
+  "vmsave",
+  "wait",
+  "wbinvd",
+  "wrmsr",
+  "xadd",
+  "xchg",
+  "xlatb",
+  "xor",
+  "xorpd",
+  "xorps",
+  "db",
+  "invalid",
+};
+
+
+
+static struct ud_itab_entry itab__0f[256] = {
+  /* 00 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_00__REG },
+  /* 01 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_01__REG },
+  /* 02 */  { UD_Ilar,         O_Gv,    O_Ew,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 03 */  { UD_Ilsl,         O_Gv,    O_Ew,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 04 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 05 */  { UD_Isyscall,     O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 06 */  { UD_Iclts,        O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 07 */  { UD_Isysret,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 08 */  { UD_Iinvd,        O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 09 */  { UD_Iwbinvd,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 0A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 0B */  { UD_Iud2,         O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 0C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 0D */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_0D__REG },
+  /* 0E */  { UD_Ifemms,       O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 0F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 10 */  { UD_Imovups,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 11 */  { UD_Imovups,      O_W,     O_V,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 12 */  { UD_Imovlps,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 13 */  { UD_Imovlps,      O_M,     O_V,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 14 */  { UD_Iunpcklps,    O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 15 */  { UD_Iunpckhps,    O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 16 */  { UD_Imovhps,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 17 */  { UD_Imovhps,      O_M,     O_V,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 18 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_18__REG },
+  /* 19 */  { UD_Inop,         O_M,     O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 1A */  { UD_Inop,         O_M,     O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 1B */  { UD_Inop,         O_M,     O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 1C */  { UD_Inop,         O_M,     O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 1D */  { UD_Inop,         O_M,     O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 1E */  { UD_Inop,         O_M,     O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 1F */  { UD_Inop,         O_M,     O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 20 */  { UD_Imov,         O_R,     O_C,     O_NONE,  P_rexr },
+  /* 21 */  { UD_Imov,         O_R,     O_D,     O_NONE,  P_rexr },
+  /* 22 */  { UD_Imov,         O_C,     O_R,     O_NONE,  P_rexr },
+  /* 23 */  { UD_Imov,         O_D,     O_R,     O_NONE,  P_rexr },
+  /* 24 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 25 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 26 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 27 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 28 */  { UD_Imovaps,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 29 */  { UD_Imovaps,      O_W,     O_V,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 2A */  { UD_Icvtpi2ps,    O_V,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 2B */  { UD_Imovntps,     O_M,     O_V,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 2C */  { UD_Icvttps2pi,   O_P,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 2D */  { UD_Icvtps2pi,    O_P,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 2E */  { UD_Iucomiss,     O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 2F */  { UD_Icomiss,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 30 */  { UD_Iwrmsr,       O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 31 */  { UD_Irdtsc,       O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 32 */  { UD_Irdmsr,       O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 33 */  { UD_Irdpmc,       O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 34 */  { UD_Isysenter,    O_NONE,  O_NONE,  O_NONE,  P_inv64|P_none },
+  /* 35 */  { UD_Isysexit,     O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 36 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 37 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 38 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 39 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 40 */  { UD_Icmovo,       O_Gv,    O_Ev,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 41 */  { UD_Icmovno,      O_Gv,    O_Ev,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 42 */  { UD_Icmovb,       O_Gv,    O_Ev,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 43 */  { UD_Icmovae,      O_Gv,    O_Ev,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 44 */  { UD_Icmovz,       O_Gv,    O_Ev,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 45 */  { UD_Icmovnz,      O_Gv,    O_Ev,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 46 */  { UD_Icmovbe,      O_Gv,    O_Ev,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 47 */  { UD_Icmova,       O_Gv,    O_Ev,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 48 */  { UD_Icmovs,       O_Gv,    O_Ev,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 49 */  { UD_Icmovns,      O_Gv,    O_Ev,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 4A */  { UD_Icmovp,       O_Gv,    O_Ev,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 4B */  { UD_Icmovnp,      O_Gv,    O_Ev,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 4C */  { UD_Icmovl,       O_Gv,    O_Ev,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 4D */  { UD_Icmovge,      O_Gv,    O_Ev,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 4E */  { UD_Icmovle,      O_Gv,    O_Ev,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 4F */  { UD_Icmovg,       O_Gv,    O_Ev,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 50 */  { UD_Imovmskps,    O_Gd,    O_VR,    O_NONE,  P_oso|P_rexr|P_rexb },
+  /* 51 */  { UD_Isqrtps,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 52 */  { UD_Irsqrtps,     O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 53 */  { UD_Ircpps,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 54 */  { UD_Iandps,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 55 */  { UD_Iandnps,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 56 */  { UD_Iorps,        O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 57 */  { UD_Ixorps,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 58 */  { UD_Iaddps,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 59 */  { UD_Imulps,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 5A */  { UD_Icvtps2pd,    O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 5B */  { UD_Icvtdq2ps,    O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 5C */  { UD_Isubps,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 5D */  { UD_Iminps,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 5E */  { UD_Idivps,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 5F */  { UD_Imaxps,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 60 */  { UD_Ipunpcklbw,   O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 61 */  { UD_Ipunpcklwd,   O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 62 */  { UD_Ipunpckldq,   O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 63 */  { UD_Ipacksswb,    O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 64 */  { UD_Ipcmpgtb,     O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 65 */  { UD_Ipcmpgtw,     O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 66 */  { UD_Ipcmpgtd,     O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 67 */  { UD_Ipackuswb,    O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 68 */  { UD_Ipunpckhbw,   O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 69 */  { UD_Ipunpckhwd,   O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 6A */  { UD_Ipunpckhdq,   O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 6B */  { UD_Ipackssdw,    O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 6C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 6D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 6E */  { UD_Imovd,        O_P,     O_Ex,    O_NONE,  P_c2|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 6F */  { UD_Imovq,        O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 70 */  { UD_Ipshufw,      O_P,     O_Q,     O_Ib,    P_aso|P_rexr|P_rexx|P_rexb },
+  /* 71 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_71__REG },
+  /* 72 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_72__REG },
+  /* 73 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_73__REG },
+  /* 74 */  { UD_Ipcmpeqb,     O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 75 */  { UD_Ipcmpeqw,     O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 76 */  { UD_Ipcmpeqd,     O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 77 */  { UD_Iemms,        O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 78 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 79 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 7A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 7B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 7C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 7D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 7E */  { UD_Imovd,        O_Ex,    O_P,     O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 7F */  { UD_Imovq,        O_Q,     O_P,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 80 */  { UD_Ijo,          O_Jz,    O_NONE,  O_NONE,  P_c1|P_def64|P_depM|P_oso },
+  /* 81 */  { UD_Ijno,         O_Jz,    O_NONE,  O_NONE,  P_c1|P_def64|P_depM|P_oso },
+  /* 82 */  { UD_Ijb,          O_Jz,    O_NONE,  O_NONE,  P_c1|P_def64|P_depM|P_oso },
+  /* 83 */  { UD_Ijae,         O_Jz,    O_NONE,  O_NONE,  P_c1|P_def64|P_depM|P_oso },
+  /* 84 */  { UD_Ijz,          O_Jz,    O_NONE,  O_NONE,  P_c1|P_def64|P_depM|P_oso },
+  /* 85 */  { UD_Ijnz,         O_Jz,    O_NONE,  O_NONE,  P_c1|P_def64|P_depM|P_oso },
+  /* 86 */  { UD_Ijbe,         O_Jz,    O_NONE,  O_NONE,  P_c1|P_def64|P_depM|P_oso },
+  /* 87 */  { UD_Ija,          O_Jz,    O_NONE,  O_NONE,  P_c1|P_def64|P_depM|P_oso },
+  /* 88 */  { UD_Ijs,          O_Jz,    O_NONE,  O_NONE,  P_c1|P_def64|P_depM|P_oso },
+  /* 89 */  { UD_Ijns,         O_Jz,    O_NONE,  O_NONE,  P_c1|P_def64|P_depM|P_oso },
+  /* 8A */  { UD_Ijp,          O_Jz,    O_NONE,  O_NONE,  P_c1|P_def64|P_depM|P_oso },
+  /* 8B */  { UD_Ijnp,         O_Jz,    O_NONE,  O_NONE,  P_c1|P_def64|P_depM|P_oso },
+  /* 8C */  { UD_Ijl,          O_Jz,    O_NONE,  O_NONE,  P_c1|P_def64|P_depM|P_oso },
+  /* 8D */  { UD_Ijge,         O_Jz,    O_NONE,  O_NONE,  P_c1|P_def64|P_depM|P_oso },
+  /* 8E */  { UD_Ijle,         O_Jz,    O_NONE,  O_NONE,  P_c1|P_def64|P_depM|P_oso },
+  /* 8F */  { UD_Ijg,          O_Jz,    O_NONE,  O_NONE,  P_c1|P_def64|P_depM|P_oso },
+  /* 90 */  { UD_Iseto,        O_Eb,    O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 91 */  { UD_Isetno,       O_Eb,    O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 92 */  { UD_Isetb,        O_Eb,    O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 93 */  { UD_Isetnb,       O_Eb,    O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 94 */  { UD_Isetz,        O_Eb,    O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 95 */  { UD_Isetnz,       O_Eb,    O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 96 */  { UD_Isetbe,       O_Eb,    O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 97 */  { UD_Iseta,        O_Eb,    O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 98 */  { UD_Isets,        O_Eb,    O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 99 */  { UD_Isetns,       O_Eb,    O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 9A */  { UD_Isetp,        O_Eb,    O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 9B */  { UD_Isetnp,       O_Eb,    O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 9C */  { UD_Isetl,        O_Eb,    O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 9D */  { UD_Isetge,       O_Eb,    O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 9E */  { UD_Isetle,       O_Eb,    O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 9F */  { UD_Isetg,        O_Eb,    O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* A0 */  { UD_Ipush,        O_FS,    O_NONE,  O_NONE,  P_none },
+  /* A1 */  { UD_Ipop,         O_FS,    O_NONE,  O_NONE,  P_none },
+  /* A2 */  { UD_Icpuid,       O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* A3 */  { UD_Ibt,          O_Ev,    O_Gv,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* A4 */  { UD_Ishld,        O_Ev,    O_Gv,    O_Ib,    P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* A5 */  { UD_Ishld,        O_Ev,    O_Gv,    O_CL,    P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* A6 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A7 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A8 */  { UD_Ipush,        O_GS,    O_NONE,  O_NONE,  P_none },
+  /* A9 */  { UD_Ipop,         O_GS,    O_NONE,  O_NONE,  P_none },
+  /* AA */  { UD_Irsm,         O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* AB */  { UD_Ibts,         O_Ev,    O_Gv,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* AC */  { UD_Ishrd,        O_Ev,    O_Gv,    O_Ib,    P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* AD */  { UD_Ishrd,        O_Ev,    O_Gv,    O_CL,    P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* AE */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_AE__REG },
+  /* AF */  { UD_Iimul,        O_Gv,    O_Ev,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* B0 */  { UD_Icmpxchg,     O_Eb,    O_Gb,    O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* B1 */  { UD_Icmpxchg,     O_Ev,    O_Gv,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* B2 */  { UD_Ilss,         O_Gz,    O_M,     O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* B3 */  { UD_Ibtr,         O_Ev,    O_Gv,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* B4 */  { UD_Ilfs,         O_Gz,    O_M,     O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* B5 */  { UD_Ilgs,         O_Gz,    O_M,     O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* B6 */  { UD_Imovzx,       O_Gv,    O_Eb,    O_NONE,  P_c2|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* B7 */  { UD_Imovzx,       O_Gv,    O_Ew,    O_NONE,  P_c2|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* B8 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B9 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* BA */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_BA__REG },
+  /* BB */  { UD_Ibtc,         O_Ev,    O_Gv,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* BC */  { UD_Ibsf,         O_Gv,    O_Ev,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* BD */  { UD_Ibsr,         O_Gv,    O_Ev,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* BE */  { UD_Imovsx,       O_Gv,    O_Eb,    O_NONE,  P_c2|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* BF */  { UD_Imovsx,       O_Gv,    O_Ew,    O_NONE,  P_c2|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* C0 */  { UD_Ixadd,        O_Eb,    O_Gb,    O_NONE,  P_aso|P_oso|P_rexr|P_rexx|P_rexb },
+  /* C1 */  { UD_Ixadd,        O_Ev,    O_Gv,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* C2 */  { UD_Icmpps,       O_V,     O_W,     O_Ib,    P_aso|P_rexr|P_rexx|P_rexb },
+  /* C3 */  { UD_Imovnti,      O_M,     O_Gvw,   O_NONE,  P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* C4 */  { UD_Ipinsrw,      O_P,     O_Ew,    O_Ib,    P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* C5 */  { UD_Ipextrw,      O_Gd,    O_PR,    O_Ib,    P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* C6 */  { UD_Ishufps,      O_V,     O_W,     O_Ib,    P_aso|P_rexr|P_rexx|P_rexb },
+  /* C7 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_C7__REG },
+  /* C8 */  { UD_Ibswap,       O_rAXr8, O_NONE,  O_NONE,  P_oso|P_rexw|P_rexb },
+  /* C9 */  { UD_Ibswap,       O_rCXr9, O_NONE,  O_NONE,  P_oso|P_rexw|P_rexb },
+  /* CA */  { UD_Ibswap,       O_rDXr10, O_NONE,  O_NONE, P_oso|P_rexw|P_rexb },
+  /* CB */  { UD_Ibswap,       O_rBXr11, O_NONE,  O_NONE, P_oso|P_rexw|P_rexb },
+  /* CC */  { UD_Ibswap,       O_rSPr12, O_NONE,  O_NONE, P_oso|P_rexw|P_rexb },
+  /* CD */  { UD_Ibswap,       O_rBPr13, O_NONE,  O_NONE, P_oso|P_rexw|P_rexb },
+  /* CE */  { UD_Ibswap,       O_rSIr14, O_NONE,  O_NONE, P_oso|P_rexw|P_rexb },
+  /* CF */  { UD_Ibswap,       O_rDIr15, O_NONE,  O_NONE, P_oso|P_rexw|P_rexb },
+  /* D0 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* D1 */  { UD_Ipsrlw,       O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* D2 */  { UD_Ipsrld,       O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* D3 */  { UD_Ipsrlq,       O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* D4 */  { UD_Ipaddq,       O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* D5 */  { UD_Ipmullw,      O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* D6 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* D7 */  { UD_Ipmovmskb,    O_Gd,    O_PR,    O_NONE,  P_oso|P_rexr|P_rexb },
+  /* D8 */  { UD_Ipsubusb,     O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* D9 */  { UD_Ipsubusw,     O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* DA */  { UD_Ipminub,      O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* DB */  { UD_Ipand,        O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* DC */  { UD_Ipaddusb,     O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* DD */  { UD_Ipaddusw,     O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* DE */  { UD_Ipmaxub,      O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* DF */  { UD_Ipandn,       O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* E0 */  { UD_Ipavgb,       O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* E1 */  { UD_Ipsraw,       O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* E2 */  { UD_Ipsrad,       O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* E3 */  { UD_Ipavgw,       O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* E4 */  { UD_Ipmulhuw,     O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* E5 */  { UD_Ipmulhw,      O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* E6 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* E7 */  { UD_Imovntq,      O_M,     O_P,     O_NONE,  P_none },
+  /* E8 */  { UD_Ipsubsb,      O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* E9 */  { UD_Ipsubsw,      O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* EA */  { UD_Ipminsw,      O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* EB */  { UD_Ipor,         O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* EC */  { UD_Ipaddsb,      O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* ED */  { UD_Ipaddsw,      O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* EE */  { UD_Ipmaxsw,      O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* EF */  { UD_Ipxor,        O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* F0 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* F1 */  { UD_Ipsllw,       O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* F2 */  { UD_Ipslld,       O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* F3 */  { UD_Ipsllq,       O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* F4 */  { UD_Ipmuludq,     O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* F5 */  { UD_Ipmaddwd,     O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* F6 */  { UD_Ipsadbw,      O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* F7 */  { UD_Imaskmovq,    O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* F8 */  { UD_Ipsubb,       O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* F9 */  { UD_Ipsubw,       O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* FA */  { UD_Ipsubd,       O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* FB */  { UD_Ipsubq,       O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* FC */  { UD_Ipaddb,       O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* FD */  { UD_Ipaddw,       O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* FE */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* FF */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__0f__op_00__reg[8] = {
+  /* 00 */  { UD_Isldt,        O_Ev,    O_NONE,  O_NONE,  P_aso|P_oso|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Istr,         O_Ev,    O_NONE,  O_NONE,  P_aso|P_oso|P_rexr|P_rexx|P_rexb },
+  /* 02 */  { UD_Illdt,        O_Ew,    O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 03 */  { UD_Iltr,         O_Ew,    O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 04 */  { UD_Iverr,        O_Ew,    O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 05 */  { UD_Iverw,        O_Ew,    O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 06 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 07 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__0f__op_01__reg[8] = {
+  /* 00 */  { UD_Igrp_mod,     O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_01__REG__OP_00__MOD },
+  /* 01 */  { UD_Igrp_mod,     O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_01__REG__OP_01__MOD },
+  /* 02 */  { UD_Igrp_mod,     O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_01__REG__OP_02__MOD },
+  /* 03 */  { UD_Igrp_mod,     O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_01__REG__OP_03__MOD },
+  /* 04 */  { UD_Igrp_mod,     O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_01__REG__OP_04__MOD },
+  /* 05 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 06 */  { UD_Igrp_mod,     O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_01__REG__OP_06__MOD },
+  /* 07 */  { UD_Igrp_mod,     O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_01__REG__OP_07__MOD },
+};
+
+static struct ud_itab_entry itab__0f__op_01__reg__op_00__mod[2] = {
+  /* 00 */  { UD_Isgdt,        O_M,     O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Igrp_rm,      O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_01__REG__OP_00__MOD__OP_01__RM },
+};
+
+static struct ud_itab_entry itab__0f__op_01__reg__op_00__mod__op_01__rm[8] = {
+  /* 00 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 01 */  { UD_Igrp_vendor,  O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_01__REG__OP_00__MOD__OP_01__RM__OP_01__VENDOR },
+  /* 02 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 03 */  { UD_Igrp_vendor,  O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_01__REG__OP_00__MOD__OP_01__RM__OP_03__VENDOR },
+  /* 04 */  { UD_Igrp_vendor,  O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_01__REG__OP_00__MOD__OP_01__RM__OP_04__VENDOR },
+  /* 05 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 06 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 07 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__0f__op_01__reg__op_00__mod__op_01__rm__op_01__vendor[2] = {
+  /* 00 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 01 */  { UD_Ivmcall,      O_NONE,  O_NONE,  O_NONE,  P_none },
+};
+
+static struct ud_itab_entry itab__0f__op_01__reg__op_00__mod__op_01__rm__op_03__vendor[2] = {
+  /* 00 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 01 */  { UD_Ivmresume,    O_NONE,  O_NONE,  O_NONE,  P_none },
+};
+
+static struct ud_itab_entry itab__0f__op_01__reg__op_00__mod__op_01__rm__op_04__vendor[2] = {
+  /* 00 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 01 */  { UD_Ivmxoff,      O_NONE,  O_NONE,  O_NONE,  P_none },
+};
+
+static struct ud_itab_entry itab__0f__op_01__reg__op_01__mod[2] = {
+  /* 00 */  { UD_Isidt,        O_M,     O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Igrp_rm,      O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_01__REG__OP_01__MOD__OP_01__RM },
+};
+
+static struct ud_itab_entry itab__0f__op_01__reg__op_01__mod__op_01__rm[8] = {
+  /* 00 */  { UD_Imonitor,     O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 01 */  { UD_Imwait,       O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 02 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 03 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 04 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 05 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 06 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 07 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__0f__op_01__reg__op_02__mod[2] = {
+  /* 00 */  { UD_Ilgdt,        O_M,     O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__0f__op_01__reg__op_03__mod[2] = {
+  /* 00 */  { UD_Ilidt,        O_M,     O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Igrp_rm,      O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_01__REG__OP_03__MOD__OP_01__RM },
+};
+
+static struct ud_itab_entry itab__0f__op_01__reg__op_03__mod__op_01__rm[8] = {
+  /* 00 */  { UD_Igrp_vendor,  O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_01__REG__OP_03__MOD__OP_01__RM__OP_00__VENDOR },
+  /* 01 */  { UD_Igrp_vendor,  O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_01__REG__OP_03__MOD__OP_01__RM__OP_01__VENDOR },
+  /* 02 */  { UD_Igrp_vendor,  O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_01__REG__OP_03__MOD__OP_01__RM__OP_02__VENDOR },
+  /* 03 */  { UD_Igrp_vendor,  O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_01__REG__OP_03__MOD__OP_01__RM__OP_03__VENDOR },
+  /* 04 */  { UD_Igrp_vendor,  O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_01__REG__OP_03__MOD__OP_01__RM__OP_04__VENDOR },
+  /* 05 */  { UD_Igrp_vendor,  O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_01__REG__OP_03__MOD__OP_01__RM__OP_05__VENDOR },
+  /* 06 */  { UD_Igrp_vendor,  O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_01__REG__OP_03__MOD__OP_01__RM__OP_06__VENDOR },
+  /* 07 */  { UD_Igrp_vendor,  O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_01__REG__OP_03__MOD__OP_01__RM__OP_07__VENDOR },
+};
+
+static struct ud_itab_entry itab__0f__op_01__reg__op_03__mod__op_01__rm__op_00__vendor[2] = {
+  /* 00 */  { UD_Ivmrun,       O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__0f__op_01__reg__op_03__mod__op_01__rm__op_01__vendor[2] = {
+  /* 00 */  { UD_Ivmmcall,     O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__0f__op_01__reg__op_03__mod__op_01__rm__op_02__vendor[2] = {
+  /* 00 */  { UD_Ivmload,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__0f__op_01__reg__op_03__mod__op_01__rm__op_03__vendor[2] = {
+  /* 00 */  { UD_Ivmsave,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__0f__op_01__reg__op_03__mod__op_01__rm__op_04__vendor[2] = {
+  /* 00 */  { UD_Istgi,        O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__0f__op_01__reg__op_03__mod__op_01__rm__op_05__vendor[2] = {
+  /* 00 */  { UD_Iclgi,        O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__0f__op_01__reg__op_03__mod__op_01__rm__op_06__vendor[2] = {
+  /* 00 */  { UD_Iskinit,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__0f__op_01__reg__op_03__mod__op_01__rm__op_07__vendor[2] = {
+  /* 00 */  { UD_Iinvlpga,     O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__0f__op_01__reg__op_04__mod[2] = {
+  /* 00 */  { UD_Ismsw,        O_M,     O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__0f__op_01__reg__op_06__mod[2] = {
+  /* 00 */  { UD_Ilmsw,        O_Ew,    O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__0f__op_01__reg__op_07__mod[2] = {
+  /* 00 */  { UD_Iinvlpg,      O_M,     O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Igrp_rm,      O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_01__REG__OP_07__MOD__OP_01__RM },
+};
+
+static struct ud_itab_entry itab__0f__op_01__reg__op_07__mod__op_01__rm[8] = {
+  /* 00 */  { UD_Iswapgs,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 01 */  { UD_Igrp_vendor,  O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_01__REG__OP_07__MOD__OP_01__RM__OP_01__VENDOR },
+  /* 02 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 03 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 04 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 05 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 06 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 07 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__0f__op_01__reg__op_07__mod__op_01__rm__op_01__vendor[2] = {
+  /* 00 */  { UD_Irdtscp,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__0f__op_0d__reg[8] = {
+  /* 00 */  { UD_Iprefetch,    O_M,     O_NONE,  O_NONE,  P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Iprefetch,    O_M,     O_NONE,  O_NONE,  P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 02 */  { UD_Iprefetch,    O_M,     O_NONE,  O_NONE,  P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 03 */  { UD_Iprefetch,    O_M,     O_NONE,  O_NONE,  P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 04 */  { UD_Iprefetch,    O_M,     O_NONE,  O_NONE,  P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 05 */  { UD_Iprefetch,    O_M,     O_NONE,  O_NONE,  P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 06 */  { UD_Iprefetch,    O_M,     O_NONE,  O_NONE,  P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 07 */  { UD_Iprefetch,    O_M,     O_NONE,  O_NONE,  P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+};
+
+static struct ud_itab_entry itab__0f__op_18__reg[8] = {
+  /* 00 */  { UD_Iprefetchnta, O_M,     O_NONE,  O_NONE,  P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Iprefetcht0,  O_M,     O_NONE,  O_NONE,  P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 02 */  { UD_Iprefetcht1,  O_M,     O_NONE,  O_NONE,  P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 03 */  { UD_Iprefetcht2,  O_M,     O_NONE,  O_NONE,  P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 04 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 05 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 06 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 07 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__0f__op_71__reg[8] = {
+  /* 00 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 02 */  { UD_Ipsrlw,       O_PR,    O_Ib,    O_NONE,  P_none },
+  /* 03 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 04 */  { UD_Ipsraw,       O_PR,    O_Ib,    O_NONE,  P_none },
+  /* 05 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 06 */  { UD_Ipsllw,       O_PR,    O_Ib,    O_NONE,  P_none },
+  /* 07 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__0f__op_72__reg[8] = {
+  /* 00 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 02 */  { UD_Ipsrld,       O_PR,    O_Ib,    O_NONE,  P_none },
+  /* 03 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 04 */  { UD_Ipsrad,       O_PR,    O_Ib,    O_NONE,  P_none },
+  /* 05 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 06 */  { UD_Ipslld,       O_PR,    O_Ib,    O_NONE,  P_none },
+  /* 07 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__0f__op_73__reg[8] = {
+  /* 00 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 02 */  { UD_Ipsrlq,       O_PR,    O_Ib,    O_NONE,  P_none },
+  /* 03 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 04 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 05 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 06 */  { UD_Ipsllq,       O_PR,    O_Ib,    O_NONE,  P_none },
+  /* 07 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__0f__op_ae__reg[8] = {
+  /* 00 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 02 */  { UD_Ildmxcsr,     O_Md,    O_NONE,  O_NONE,  P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 03 */  { UD_Istmxcsr,     O_Md,    O_NONE,  O_NONE,  P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 04 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 05 */  { UD_Igrp_mod,     O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_AE__REG__OP_05__MOD },
+  /* 06 */  { UD_Igrp_mod,     O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_AE__REG__OP_06__MOD },
+  /* 07 */  { UD_Igrp_mod,     O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_AE__REG__OP_07__MOD },
+};
+
+static struct ud_itab_entry itab__0f__op_ae__reg__op_05__mod[2] = {
+  /* 00 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 01 */  { UD_Igrp_rm,      O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_AE__REG__OP_05__MOD__OP_01__RM },
+};
+
+static struct ud_itab_entry itab__0f__op_ae__reg__op_05__mod__op_01__rm[8] = {
+  /* 00 */  { UD_Ilfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 01 */  { UD_Ilfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 02 */  { UD_Ilfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 03 */  { UD_Ilfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 04 */  { UD_Ilfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 05 */  { UD_Ilfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 06 */  { UD_Ilfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 07 */  { UD_Ilfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
+};
+
+static struct ud_itab_entry itab__0f__op_ae__reg__op_06__mod[2] = {
+  /* 00 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 01 */  { UD_Igrp_rm,      O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_AE__REG__OP_06__MOD__OP_01__RM },
+};
+
+static struct ud_itab_entry itab__0f__op_ae__reg__op_06__mod__op_01__rm[8] = {
+  /* 00 */  { UD_Imfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 01 */  { UD_Imfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 02 */  { UD_Imfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 03 */  { UD_Imfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 04 */  { UD_Imfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 05 */  { UD_Imfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 06 */  { UD_Imfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 07 */  { UD_Imfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
+};
+
+static struct ud_itab_entry itab__0f__op_ae__reg__op_07__mod[2] = {
+  /* 00 */  { UD_Iclflush,     O_M,     O_NONE,  O_NONE,  P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Igrp_rm,      O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_AE__REG__OP_07__MOD__OP_01__RM },
+};
+
+static struct ud_itab_entry itab__0f__op_ae__reg__op_07__mod__op_01__rm[8] = {
+  /* 00 */  { UD_Isfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 01 */  { UD_Isfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 02 */  { UD_Isfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 03 */  { UD_Isfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 04 */  { UD_Isfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 05 */  { UD_Isfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 06 */  { UD_Isfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 07 */  { UD_Isfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
+};
+
+static struct ud_itab_entry itab__0f__op_ba__reg[8] = {
+  /* 00 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 02 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 03 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 04 */  { UD_Ibt,          O_Ev,    O_Ib,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 05 */  { UD_Ibts,         O_Ev,    O_Ib,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 06 */  { UD_Ibtr,         O_Ev,    O_Ib,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 07 */  { UD_Ibtc,         O_Ev,    O_Ib,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+};
+
+static struct ud_itab_entry itab__0f__op_c7__reg[8] = {
+  /* 00 */  { UD_Igrp_vendor,  O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_C7__REG__OP_00__VENDOR },
+  /* 01 */  { UD_Icmpxchg8b,   O_M,     O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 02 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 03 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 04 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 05 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 06 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 07 */  { UD_Igrp_vendor,  O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_C7__REG__OP_07__VENDOR },
+};
+
+static struct ud_itab_entry itab__0f__op_c7__reg__op_00__vendor[2] = {
+  /* 00 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 01 */  { UD_Ivmptrld,     O_Mq,    O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+};
+
+static struct ud_itab_entry itab__0f__op_c7__reg__op_07__vendor[2] = {
+  /* 00 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 01 */  { UD_Ivmptrst,     O_Mq,    O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+};
+
+static struct ud_itab_entry itab__0f__op_d9__mod[2] = {
+  /* 00 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 01 */  { UD_Igrp_x87,     O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_D9__MOD__OP_01__X87 },
+};
+
+static struct ud_itab_entry itab__0f__op_d9__mod__op_01__x87[64] = {
+  /* 00 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 02 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 03 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 04 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 05 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 06 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 07 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 08 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 09 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 0A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 0B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 0C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 0D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 0E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 0F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 10 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 11 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 12 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 13 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 14 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 15 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 16 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 17 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 18 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 19 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 1A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 1B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 1C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 1D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 1E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 1F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 20 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 21 */  { UD_Ifabs,        O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 22 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 23 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 24 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 25 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 26 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 27 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 28 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 29 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 2A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 2B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 2C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 2D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 2E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 2F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 30 */  { UD_If2xm1,       O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 31 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 32 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 33 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 34 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 35 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 36 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 37 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 38 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 39 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__1byte[256] = {
+  /* 00 */  { UD_Iadd,         O_Eb,    O_Gb,    O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Iadd,         O_Ev,    O_Gv,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 02 */  { UD_Iadd,         O_Gb,    O_Eb,    O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 03 */  { UD_Iadd,         O_Gv,    O_Ev,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 04 */  { UD_Iadd,         O_AL,    O_Ib,    O_NONE,  P_none },
+  /* 05 */  { UD_Iadd,         O_rAX,   O_Iz,    O_NONE,  P_oso|P_rexw },
+  /* 06 */  { UD_Ipush,        O_ES,    O_NONE,  O_NONE,  P_inv64|P_none },
+  /* 07 */  { UD_Ipop,         O_ES,    O_NONE,  O_NONE,  P_inv64|P_none },
+  /* 08 */  { UD_Ior,          O_Eb,    O_Gb,    O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 09 */  { UD_Ior,          O_Ev,    O_Gv,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 0A */  { UD_Ior,          O_Gb,    O_Eb,    O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 0B */  { UD_Ior,          O_Gv,    O_Ev,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 0C */  { UD_Ior,          O_AL,    O_Ib,    O_NONE,  P_none },
+  /* 0D */  { UD_Ior,          O_rAX,   O_Iz,    O_NONE,  P_oso|P_rexw },
+  /* 0E */  { UD_Ipush,        O_CS,    O_NONE,  O_NONE,  P_inv64|P_none },
+  /* 0F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 10 */  { UD_Iadc,         O_Eb,    O_Gb,    O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 11 */  { UD_Iadc,         O_Ev,    O_Gv,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 12 */  { UD_Iadc,         O_Gb,    O_Eb,    O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 13 */  { UD_Iadc,         O_Gv,    O_Ev,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 14 */  { UD_Iadc,         O_AL,    O_Ib,    O_NONE,  P_none },
+  /* 15 */  { UD_Iadc,         O_rAX,   O_Iz,    O_NONE,  P_oso|P_rexw },
+  /* 16 */  { UD_Ipush,        O_SS,    O_NONE,  O_NONE,  P_inv64|P_none },
+  /* 17 */  { UD_Ipop,         O_SS,    O_NONE,  O_NONE,  P_inv64|P_none },
+  /* 18 */  { UD_Isbb,         O_Eb,    O_Gb,    O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 19 */  { UD_Isbb,         O_Ev,    O_Gv,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 1A */  { UD_Isbb,         O_Gb,    O_Eb,    O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 1B */  { UD_Isbb,         O_Gv,    O_Ev,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 1C */  { UD_Isbb,         O_AL,    O_Ib,    O_NONE,  P_none },
+  /* 1D */  { UD_Isbb,         O_rAX,   O_Iz,    O_NONE,  P_oso|P_rexw },
+  /* 1E */  { UD_Ipush,        O_DS,    O_NONE,  O_NONE,  P_inv64|P_none },
+  /* 1F */  { UD_Ipop,         O_DS,    O_NONE,  O_NONE,  P_inv64|P_none },
+  /* 20 */  { UD_Iand,         O_Eb,    O_Gb,    O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 21 */  { UD_Iand,         O_Ev,    O_Gv,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 22 */  { UD_Iand,         O_Gb,    O_Eb,    O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 23 */  { UD_Iand,         O_Gv,    O_Ev,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 24 */  { UD_Iand,         O_AL,    O_Ib,    O_NONE,  P_none },
+  /* 25 */  { UD_Iand,         O_rAX,   O_Iz,    O_NONE,  P_oso|P_rexw },
+  /* 26 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 27 */  { UD_Idaa,         O_NONE,  O_NONE,  O_NONE,  P_inv64|P_none },
+  /* 28 */  { UD_Isub,         O_Eb,    O_Gb,    O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 29 */  { UD_Isub,         O_Ev,    O_Gv,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 2A */  { UD_Isub,         O_Gb,    O_Eb,    O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 2B */  { UD_Isub,         O_Gv,    O_Ev,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 2C */  { UD_Isub,         O_AL,    O_Ib,    O_NONE,  P_none },
+  /* 2D */  { UD_Isub,         O_rAX,   O_Iz,    O_NONE,  P_oso|P_rexw },
+  /* 2E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 2F */  { UD_Idas,         O_NONE,  O_NONE,  O_NONE,  P_inv64|P_none },
+  /* 30 */  { UD_Ixor,         O_Eb,    O_Gb,    O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 31 */  { UD_Ixor,         O_Ev,    O_Gv,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 32 */  { UD_Ixor,         O_Gb,    O_Eb,    O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 33 */  { UD_Ixor,         O_Gv,    O_Ev,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 34 */  { UD_Ixor,         O_AL,    O_Ib,    O_NONE,  P_none },
+  /* 35 */  { UD_Ixor,         O_rAX,   O_Iz,    O_NONE,  P_oso|P_rexw },
+  /* 36 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 37 */  { UD_Iaaa,         O_NONE,  O_NONE,  O_NONE,  P_inv64|P_none },
+  /* 38 */  { UD_Icmp,         O_Eb,    O_Gb,    O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 39 */  { UD_Icmp,         O_Ev,    O_Gv,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 3A */  { UD_Icmp,         O_Gb,    O_Eb,    O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 3B */  { UD_Icmp,         O_Gv,    O_Ev,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 3C */  { UD_Icmp,         O_AL,    O_Ib,    O_NONE,  P_none },
+  /* 3D */  { UD_Icmp,         O_rAX,   O_Iz,    O_NONE,  P_oso|P_rexw },
+  /* 3E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3F */  { UD_Iaas,         O_NONE,  O_NONE,  O_NONE,  P_inv64|P_none },
+  /* 40 */  { UD_Iinc,         O_eAX,   O_NONE,  O_NONE,  P_oso },
+  /* 41 */  { UD_Iinc,         O_eCX,   O_NONE,  O_NONE,  P_oso },
+  /* 42 */  { UD_Iinc,         O_eDX,   O_NONE,  O_NONE,  P_oso },
+  /* 43 */  { UD_Iinc,         O_eBX,   O_NONE,  O_NONE,  P_oso },
+  /* 44 */  { UD_Iinc,         O_eSP,   O_NONE,  O_NONE,  P_oso },
+  /* 45 */  { UD_Iinc,         O_eBP,   O_NONE,  O_NONE,  P_oso },
+  /* 46 */  { UD_Iinc,         O_eSI,   O_NONE,  O_NONE,  P_oso },
+  /* 47 */  { UD_Iinc,         O_eDI,   O_NONE,  O_NONE,  P_oso },
+  /* 48 */  { UD_Idec,         O_eAX,   O_NONE,  O_NONE,  P_oso },
+  /* 49 */  { UD_Idec,         O_eCX,   O_NONE,  O_NONE,  P_oso },
+  /* 4A */  { UD_Idec,         O_eDX,   O_NONE,  O_NONE,  P_oso },
+  /* 4B */  { UD_Idec,         O_eBX,   O_NONE,  O_NONE,  P_oso },
+  /* 4C */  { UD_Idec,         O_eSP,   O_NONE,  O_NONE,  P_oso },
+  /* 4D */  { UD_Idec,         O_eBP,   O_NONE,  O_NONE,  P_oso },
+  /* 4E */  { UD_Idec,         O_eSI,   O_NONE,  O_NONE,  P_oso },
+  /* 4F */  { UD_Idec,         O_eDI,   O_NONE,  O_NONE,  P_oso },
+  /* 50 */  { UD_Ipush,        O_rAXr8, O_NONE,  O_NONE,  P_def64|P_depM|P_oso|P_rexb },
+  /* 51 */  { UD_Ipush,        O_rCXr9, O_NONE,  O_NONE,  P_def64|P_depM|P_oso|P_rexb },
+  /* 52 */  { UD_Ipush,        O_rDXr10, O_NONE,  O_NONE, P_def64|P_depM|P_oso|P_rexb },
+  /* 53 */  { UD_Ipush,        O_rBXr11, O_NONE,  O_NONE, P_def64|P_depM|P_oso|P_rexb },
+  /* 54 */  { UD_Ipush,        O_rSPr12, O_NONE,  O_NONE, P_def64|P_depM|P_oso|P_rexb },
+  /* 55 */  { UD_Ipush,        O_rBPr13, O_NONE,  O_NONE, P_def64|P_depM|P_oso|P_rexb },
+  /* 56 */  { UD_Ipush,        O_rSIr14, O_NONE,  O_NONE, P_def64|P_depM|P_oso|P_rexb },
+  /* 57 */  { UD_Ipush,        O_rDIr15, O_NONE,  O_NONE, P_def64|P_depM|P_oso|P_rexb },
+  /* 58 */  { UD_Ipop,         O_rAXr8, O_NONE,  O_NONE,  P_def64|P_depM|P_oso|P_rexb },
+  /* 59 */  { UD_Ipop,         O_rCXr9, O_NONE,  O_NONE,  P_def64|P_depM|P_oso|P_rexb },
+  /* 5A */  { UD_Ipop,         O_rDXr10, O_NONE,  O_NONE, P_def64|P_depM|P_oso|P_rexb },
+  /* 5B */  { UD_Ipop,         O_rBXr11, O_NONE,  O_NONE, P_def64|P_depM|P_oso|P_rexb },
+  /* 5C */  { UD_Ipop,         O_rSPr12, O_NONE,  O_NONE, P_def64|P_depM|P_oso|P_rexb },
+  /* 5D */  { UD_Ipop,         O_rBPr13, O_NONE,  O_NONE, P_def64|P_depM|P_oso|P_rexb },
+  /* 5E */  { UD_Ipop,         O_rSIr14, O_NONE,  O_NONE, P_def64|P_depM|P_oso|P_rexb },
+  /* 5F */  { UD_Ipop,         O_rDIr15, O_NONE,  O_NONE, P_def64|P_depM|P_oso|P_rexb },
+  /* 60 */  { UD_Igrp_osize,   O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_60__OSIZE },
+  /* 61 */  { UD_Igrp_osize,   O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_61__OSIZE },
+  /* 62 */  { UD_Ibound,       O_Gv,    O_M,     O_NONE,  P_inv64|P_aso|P_oso },
+  /* 63 */  { UD_Igrp_mode,    O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_63__MODE },
+  /* 64 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 65 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 66 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 67 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 68 */  { UD_Ipush,        O_Iz,    O_NONE,  O_NONE,  P_c1|P_oso },
+  /* 69 */  { UD_Iimul,        O_Gv,    O_Ev,    O_Iz,    P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 6A */  { UD_Ipush,        O_Ib,    O_NONE,  O_NONE,  P_none },
+  /* 6B */  { UD_Iimul,        O_Gv,    O_Ev,    O_Ib,    P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 6C */  { UD_Iinsb,        O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 6D */  { UD_Igrp_osize,   O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_6D__OSIZE },
+  /* 6E */  { UD_Ioutsb,       O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 6F */  { UD_Igrp_osize,   O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_6F__OSIZE },
+  /* 70 */  { UD_Ijo,          O_Jb,    O_NONE,  O_NONE,  P_none },
+  /* 71 */  { UD_Ijno,         O_Jb,    O_NONE,  O_NONE,  P_none },
+  /* 72 */  { UD_Ijb,          O_Jb,    O_NONE,  O_NONE,  P_none },
+  /* 73 */  { UD_Ijae,         O_Jb,    O_NONE,  O_NONE,  P_none },
+  /* 74 */  { UD_Ijz,          O_Jb,    O_NONE,  O_NONE,  P_none },
+  /* 75 */  { UD_Ijnz,         O_Jb,    O_NONE,  O_NONE,  P_none },
+  /* 76 */  { UD_Ijbe,         O_Jb,    O_NONE,  O_NONE,  P_none },
+  /* 77 */  { UD_Ija,          O_Jb,    O_NONE,  O_NONE,  P_none },
+  /* 78 */  { UD_Ijs,          O_Jb,    O_NONE,  O_NONE,  P_none },
+  /* 79 */  { UD_Ijns,         O_Jb,    O_NONE,  O_NONE,  P_none },
+  /* 7A */  { UD_Ijp,          O_Jb,    O_NONE,  O_NONE,  P_none },
+  /* 7B */  { UD_Ijnp,         O_Jb,    O_NONE,  O_NONE,  P_none },
+  /* 7C */  { UD_Ijl,          O_Jb,    O_NONE,  O_NONE,  P_none },
+  /* 7D */  { UD_Ijge,         O_Jb,    O_NONE,  O_NONE,  P_none },
+  /* 7E */  { UD_Ijle,         O_Jb,    O_NONE,  O_NONE,  P_none },
+  /* 7F */  { UD_Ijg,          O_Jb,    O_NONE,  O_NONE,  P_none },
+  /* 80 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_80__REG },
+  /* 81 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_81__REG },
+  /* 82 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_82__REG },
+  /* 83 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_83__REG },
+  /* 84 */  { UD_Itest,        O_Eb,    O_Gb,    O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 85 */  { UD_Itest,        O_Ev,    O_Gv,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 86 */  { UD_Ixchg,        O_Eb,    O_Gb,    O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 87 */  { UD_Ixchg,        O_Ev,    O_Gv,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 88 */  { UD_Imov,         O_Eb,    O_Gb,    O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 89 */  { UD_Imov,         O_Ev,    O_Gv,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 8A */  { UD_Imov,         O_Gb,    O_Eb,    O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 8B */  { UD_Imov,         O_Gv,    O_Ev,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 8C */  { UD_Imov,         O_Ev,    O_S,     O_NONE,  P_aso|P_oso|P_rexr|P_rexx|P_rexb },
+  /* 8D */  { UD_Ilea,         O_Gv,    O_M,     O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 8E */  { UD_Imov,         O_S,     O_Ev,    O_NONE,  P_aso|P_oso|P_rexr|P_rexx|P_rexb },
+  /* 8F */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_8F__REG },
+  /* 90 */  { UD_Ixchg,        O_rAXr8, O_rAX,   O_NONE,  P_oso|P_rexw|P_rexb },
+  /* 91 */  { UD_Ixchg,        O_rCXr9, O_rAX,   O_NONE,  P_oso|P_rexw|P_rexb },
+  /* 92 */  { UD_Ixchg,        O_rDXr10, O_rAX,   O_NONE, P_oso|P_rexw|P_rexb },
+  /* 93 */  { UD_Ixchg,        O_rBXr11, O_rAX,   O_NONE, P_oso|P_rexw|P_rexb },
+  /* 94 */  { UD_Ixchg,        O_rSPr12, O_rAX,   O_NONE, P_oso|P_rexw|P_rexb },
+  /* 95 */  { UD_Ixchg,        O_rBPr13, O_rAX,   O_NONE, P_oso|P_rexw|P_rexb },
+  /* 96 */  { UD_Ixchg,        O_rSIr14, O_rAX,   O_NONE, P_oso|P_rexw|P_rexb },
+  /* 97 */  { UD_Ixchg,        O_rDIr15, O_rAX,   O_NONE, P_oso|P_rexw|P_rexb },
+  /* 98 */  { UD_Igrp_osize,   O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_98__OSIZE },
+  /* 99 */  { UD_Igrp_osize,   O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_99__OSIZE },
+  /* 9A */  { UD_Icall,        O_Ap,    O_NONE,  O_NONE,  P_inv64|P_oso },
+  /* 9B */  { UD_Iwait,        O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 9C */  { UD_Igrp_mode,    O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_9C__MODE },
+  /* 9D */  { UD_Igrp_mode,    O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_9D__MODE },
+  /* 9E */  { UD_Isahf,        O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 9F */  { UD_Ilahf,        O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* A0 */  { UD_Imov,         O_AL,    O_Ob,    O_NONE,  P_none },
+  /* A1 */  { UD_Imov,         O_rAX,   O_Ov,    O_NONE,  P_aso|P_oso|P_rexw },
+  /* A2 */  { UD_Imov,         O_Ob,    O_AL,    O_NONE,  P_none },
+  /* A3 */  { UD_Imov,         O_Ov,    O_rAX,   O_NONE,  P_aso|P_oso|P_rexw },
+  /* A4 */  { UD_Imovsb,       O_NONE,  O_NONE,  O_NONE,  P_ImpAddr|P_none },
+  /* A5 */  { UD_Igrp_osize,   O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_A5__OSIZE },
+  /* A6 */  { UD_Icmpsb,       O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* A7 */  { UD_Igrp_osize,   O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_A7__OSIZE },
+  /* A8 */  { UD_Itest,        O_AL,    O_Ib,    O_NONE,  P_none },
+  /* A9 */  { UD_Itest,        O_rAX,   O_Iz,    O_NONE,  P_oso|P_rexw },
+  /* AA */  { UD_Istosb,       O_NONE,  O_NONE,  O_NONE,  P_ImpAddr|P_none },
+  /* AB */  { UD_Igrp_osize,   O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_AB__OSIZE },
+  /* AC */  { UD_Ilodsb,       O_NONE,  O_NONE,  O_NONE,  P_ImpAddr|P_none },
+  /* AD */  { UD_Igrp_osize,   O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_AD__OSIZE },
+  /* AE */  { UD_Iscasb,       O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* AF */  { UD_Igrp_osize,   O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_AF__OSIZE },
+  /* B0 */  { UD_Imov,         O_ALr8b, O_Ib,    O_NONE,  P_rexb },
+  /* B1 */  { UD_Imov,         O_CLr9b, O_Ib,    O_NONE,  P_rexb },
+  /* B2 */  { UD_Imov,         O_DLr10b, O_Ib,    O_NONE, P_rexb },
+  /* B3 */  { UD_Imov,         O_BLr11b, O_Ib,    O_NONE, P_rexb },
+  /* B4 */  { UD_Imov,         O_AHr12b, O_Ib,    O_NONE, P_rexb },
+  /* B5 */  { UD_Imov,         O_CHr13b, O_Ib,    O_NONE, P_rexb },
+  /* B6 */  { UD_Imov,         O_DHr14b, O_Ib,    O_NONE, P_rexb },
+  /* B7 */  { UD_Imov,         O_BHr15b, O_Ib,    O_NONE, P_rexb },
+  /* B8 */  { UD_Imov,         O_rAXr8, O_Iv,    O_NONE,  P_oso|P_rexw|P_rexb },
+  /* B9 */  { UD_Imov,         O_rCXr9, O_Iv,    O_NONE,  P_oso|P_rexw|P_rexb },
+  /* BA */  { UD_Imov,         O_rDXr10, O_Iv,    O_NONE, P_oso|P_rexw|P_rexb },
+  /* BB */  { UD_Imov,         O_rBXr11, O_Iv,    O_NONE, P_oso|P_rexw|P_rexb },
+  /* BC */  { UD_Imov,         O_rSPr12, O_Iv,    O_NONE, P_oso|P_rexw|P_rexb },
+  /* BD */  { UD_Imov,         O_rBPr13, O_Iv,    O_NONE, P_oso|P_rexw|P_rexb },
+  /* BE */  { UD_Imov,         O_rSIr14, O_Iv,    O_NONE, P_oso|P_rexw|P_rexb },
+  /* BF */  { UD_Imov,         O_rDIr15, O_Iv,    O_NONE, P_oso|P_rexw|P_rexb },
+  /* C0 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_C0__REG },
+  /* C1 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_C1__REG },
+  /* C2 */  { UD_Iret,         O_Iw,    O_NONE,  O_NONE,  P_none },
+  /* C3 */  { UD_Iret,         O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* C4 */  { UD_Iles,         O_Gv,    O_M,     O_NONE,  P_inv64|P_aso|P_oso },
+  /* C5 */  { UD_Ilds,         O_Gv,    O_M,     O_NONE,  P_inv64|P_aso|P_oso },
+  /* C6 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_C6__REG },
+  /* C7 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_C7__REG },
+  /* C8 */  { UD_Ienter,       O_Iw,    O_Ib,    O_NONE,  P_def64|P_depM|P_none },
+  /* C9 */  { UD_Ileave,       O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* CA */  { UD_Iretf,        O_Iw,    O_NONE,  O_NONE,  P_none },
+  /* CB */  { UD_Iretf,        O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* CC */  { UD_Iint3,        O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* CD */  { UD_Iint,         O_Ib,    O_NONE,  O_NONE,  P_none },
+  /* CE */  { UD_Iinto,        O_NONE,  O_NONE,  O_NONE,  P_inv64|P_none },
+  /* CF */  { UD_Igrp_osize,   O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_CF__OSIZE },
+  /* D0 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_D0__REG },
+  /* D1 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_D1__REG },
+  /* D2 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_D2__REG },
+  /* D3 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_D3__REG },
+  /* D4 */  { UD_Iaam,         O_Ib,    O_NONE,  O_NONE,  P_inv64|P_none },
+  /* D5 */  { UD_Iaad,         O_Ib,    O_NONE,  O_NONE,  P_inv64|P_none },
+  /* D6 */  { UD_Isalc,        O_NONE,  O_NONE,  O_NONE,  P_inv64|P_none },
+  /* D7 */  { UD_Ixlatb,       O_NONE,  O_NONE,  O_NONE,  P_rexw },
+  /* D8 */  { UD_Igrp_mod,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_D8__MOD },
+  /* D9 */  { UD_Igrp_mod,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_D9__MOD },
+  /* DA */  { UD_Igrp_mod,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_DA__MOD },
+  /* DB */  { UD_Igrp_mod,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_DB__MOD },
+  /* DC */  { UD_Igrp_mod,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_DC__MOD },
+  /* DD */  { UD_Igrp_mod,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_DD__MOD },
+  /* DE */  { UD_Igrp_mod,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_DE__MOD },
+  /* DF */  { UD_Igrp_mod,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_DF__MOD },
+  /* E0 */  { UD_Iloopnz,      O_Jb,    O_NONE,  O_NONE,  P_none },
+  /* E1 */  { UD_Iloope,       O_Jb,    O_NONE,  O_NONE,  P_none },
+  /* E2 */  { UD_Iloop,        O_Jb,    O_NONE,  O_NONE,  P_none },
+  /* E3 */  { UD_Igrp_asize,   O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_E3__ASIZE },
+  /* E4 */  { UD_Iin,          O_AL,    O_Ib,    O_NONE,  P_none },
+  /* E5 */  { UD_Iin,          O_eAX,   O_Ib,    O_NONE,  P_oso },
+  /* E6 */  { UD_Iout,         O_Ib,    O_AL,    O_NONE,  P_none },
+  /* E7 */  { UD_Iout,         O_Ib,    O_eAX,   O_NONE,  P_oso },
+  /* E8 */  { UD_Icall,        O_Jz,    O_NONE,  O_NONE,  P_def64|P_oso },
+  /* E9 */  { UD_Ijmp,         O_Jz,    O_NONE,  O_NONE,  P_def64|P_depM|P_oso },
+  /* EA */  { UD_Ijmp,         O_Ap,    O_NONE,  O_NONE,  P_inv64|P_none },
+  /* EB */  { UD_Ijmp,         O_Jb,    O_NONE,  O_NONE,  P_none },
+  /* EC */  { UD_Iin,          O_AL,    O_DX,    O_NONE,  P_none },
+  /* ED */  { UD_Iin,          O_eAX,   O_DX,    O_NONE,  P_oso },
+  /* EE */  { UD_Iout,         O_DX,    O_AL,    O_NONE,  P_none },
+  /* EF */  { UD_Iout,         O_DX,    O_eAX,   O_NONE,  P_oso },
+  /* F0 */  { UD_Ilock,        O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* F1 */  { UD_Iint1,        O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* F2 */  { UD_Irepne,       O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* F3 */  { UD_Irep,         O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* F4 */  { UD_Ihlt,         O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* F5 */  { UD_Icmc,         O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* F6 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_F6__REG },
+  /* F7 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_F7__REG },
+  /* F8 */  { UD_Iclc,         O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* F9 */  { UD_Istc,         O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* FA */  { UD_Icli,         O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* FB */  { UD_Isti,         O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* FC */  { UD_Icld,         O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* FD */  { UD_Istd,         O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* FE */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_FE__REG },
+  /* FF */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_FF__REG },
+};
+
+static struct ud_itab_entry itab__1byte__op_60__osize[3] = {
+  /* 00 */  { UD_Ipusha,       O_NONE,  O_NONE,  O_NONE,  P_inv64|P_oso },
+  /* 01 */  { UD_Ipushad,      O_NONE,  O_NONE,  O_NONE,  P_inv64|P_oso },
+  /* 02 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__1byte__op_61__osize[3] = {
+  /* 00 */  { UD_Ipopa,        O_NONE,  O_NONE,  O_NONE,  P_inv64|P_oso },
+  /* 01 */  { UD_Ipopad,       O_NONE,  O_NONE,  O_NONE,  P_inv64|P_oso },
+  /* 02 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__1byte__op_63__mode[3] = {
+  /* 00 */  { UD_Iarpl,        O_Ew,    O_Gw,    O_NONE,  P_inv64|P_aso },
+  /* 01 */  { UD_Iarpl,        O_Ew,    O_Gw,    O_NONE,  P_inv64|P_aso },
+  /* 02 */  { UD_Imovsxd,      O_Gv,    O_Ed,    O_NONE,  P_c2|P_aso|P_oso|P_rexw|P_rexx|P_rexr|P_rexb },
+};
+
+static struct ud_itab_entry itab__1byte__op_6d__osize[3] = {
+  /* 00 */  { UD_Iinsw,        O_NONE,  O_NONE,  O_NONE,  P_oso },
+  /* 01 */  { UD_Iinsd,        O_NONE,  O_NONE,  O_NONE,  P_oso },
+  /* 02 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__1byte__op_6f__osize[3] = {
+  /* 00 */  { UD_Ioutsw,       O_NONE,  O_NONE,  O_NONE,  P_oso },
+  /* 01 */  { UD_Ioutsd,       O_NONE,  O_NONE,  O_NONE,  P_oso },
+  /* 02 */  { UD_Ioutsq,       O_NONE,  O_NONE,  O_NONE,  P_oso },
+};
+
+static struct ud_itab_entry itab__1byte__op_80__reg[8] = {
+  /* 00 */  { UD_Iadd,         O_Eb,    O_Ib,    O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Ior,          O_Eb,    O_Ib,    O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 02 */  { UD_Iadc,         O_Eb,    O_Ib,    O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 03 */  { UD_Isbb,         O_Eb,    O_Ib,    O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 04 */  { UD_Iand,         O_Eb,    O_Ib,    O_NONE,  P_c1|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 05 */  { UD_Isub,         O_Eb,    O_Ib,    O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 06 */  { UD_Ixor,         O_Eb,    O_Ib,    O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 07 */  { UD_Icmp,         O_Eb,    O_Ib,    O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+};
+
+static struct ud_itab_entry itab__1byte__op_81__reg[8] = {
+  /* 00 */  { UD_Iadd,         O_Ev,    O_Iz,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Ior,          O_Ev,    O_Iz,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 02 */  { UD_Iadc,         O_Ev,    O_Iz,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 03 */  { UD_Isbb,         O_Ev,    O_Iz,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 04 */  { UD_Iand,         O_Ev,    O_Iz,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 05 */  { UD_Isub,         O_Ev,    O_Iz,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 06 */  { UD_Ixor,         O_Ev,    O_Iz,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 07 */  { UD_Icmp,         O_Ev,    O_Iz,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+};
+
+static struct ud_itab_entry itab__1byte__op_82__reg[8] = {
+  /* 00 */  { UD_Iadd,         O_Eb,    O_Ib,    O_NONE,  P_c1|P_inv64|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Ior,          O_Eb,    O_Ib,    O_NONE,  P_c1|P_inv64|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 02 */  { UD_Iadc,         O_Eb,    O_Ib,    O_NONE,  P_c1|P_inv64|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 03 */  { UD_Isbb,         O_Eb,    O_Ib,    O_NONE,  P_c1|P_inv64|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 04 */  { UD_Iand,         O_Eb,    O_Ib,    O_NONE,  P_c1|P_inv64|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 05 */  { UD_Isub,         O_Eb,    O_Ib,    O_NONE,  P_c1|P_inv64|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 06 */  { UD_Ixor,         O_Eb,    O_Ib,    O_NONE,  P_c1|P_inv64|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 07 */  { UD_Icmp,         O_Eb,    O_Ib,    O_NONE,  P_c1|P_inv64|P_aso|P_rexr|P_rexx|P_rexb },
+};
+
+static struct ud_itab_entry itab__1byte__op_83__reg[8] = {
+  /* 00 */  { UD_Iadd,         O_Ev,    O_Ib,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Ior,          O_Ev,    O_Ib,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 02 */  { UD_Iadc,         O_Ev,    O_Ib,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 03 */  { UD_Isbb,         O_Ev,    O_Ib,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 04 */  { UD_Iand,         O_Ev,    O_Ib,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 05 */  { UD_Isub,         O_Ev,    O_Ib,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 06 */  { UD_Ixor,         O_Ev,    O_Ib,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 07 */  { UD_Icmp,         O_Ev,    O_Ib,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+};
+
+static struct ud_itab_entry itab__1byte__op_8f__reg[8] = {
+  /* 00 */  { UD_Ipop,         O_Ev,    O_NONE,  O_NONE,  P_c1|P_def64|P_depM|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 02 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 03 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 04 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 05 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 06 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 07 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__1byte__op_98__osize[3] = {
+  /* 00 */  { UD_Icbw,         O_NONE,  O_NONE,  O_NONE,  P_oso|P_rexw },
+  /* 01 */  { UD_Icwde,        O_NONE,  O_NONE,  O_NONE,  P_oso|P_rexw },
+  /* 02 */  { UD_Icdqe,        O_NONE,  O_NONE,  O_NONE,  P_oso|P_rexw },
+};
+
+static struct ud_itab_entry itab__1byte__op_99__osize[3] = {
+  /* 00 */  { UD_Icwd,         O_NONE,  O_NONE,  O_NONE,  P_oso|P_rexw },
+  /* 01 */  { UD_Icdq,         O_NONE,  O_NONE,  O_NONE,  P_oso|P_rexw },
+  /* 02 */  { UD_Icqo,         O_NONE,  O_NONE,  O_NONE,  P_oso|P_rexw },
+};
+
+static struct ud_itab_entry itab__1byte__op_9c__mode[3] = {
+  /* 00 */  { UD_Igrp_osize,   O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_9C__MODE__OP_00__OSIZE },
+  /* 01 */  { UD_Igrp_osize,   O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_9C__MODE__OP_01__OSIZE },
+  /* 02 */  { UD_Ipushfq,      O_NONE,  O_NONE,  O_NONE,  P_def64|P_oso|P_rexw },
+};
+
+static struct ud_itab_entry itab__1byte__op_9c__mode__op_00__osize[3] = {
+  /* 00 */  { UD_Ipushfw,      O_NONE,  O_NONE,  O_NONE,  P_def64|P_oso },
+  /* 01 */  { UD_Ipushfd,      O_NONE,  O_NONE,  O_NONE,  P_def64|P_oso },
+  /* 02 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__1byte__op_9c__mode__op_01__osize[3] = {
+  /* 00 */  { UD_Ipushfw,      O_NONE,  O_NONE,  O_NONE,  P_def64|P_oso },
+  /* 01 */  { UD_Ipushfd,      O_NONE,  O_NONE,  O_NONE,  P_def64|P_oso },
+  /* 02 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__1byte__op_9d__mode[3] = {
+  /* 00 */  { UD_Igrp_osize,   O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_9D__MODE__OP_00__OSIZE },
+  /* 01 */  { UD_Igrp_osize,   O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_9D__MODE__OP_01__OSIZE },
+  /* 02 */  { UD_Ipopfq,       O_NONE,  O_NONE,  O_NONE,  P_def64|P_depM|P_oso },
+};
+
+static struct ud_itab_entry itab__1byte__op_9d__mode__op_00__osize[3] = {
+  /* 00 */  { UD_Ipopfw,       O_NONE,  O_NONE,  O_NONE,  P_def64|P_depM|P_oso },
+  /* 01 */  { UD_Ipopfd,       O_NONE,  O_NONE,  O_NONE,  P_def64|P_depM|P_oso },
+  /* 02 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__1byte__op_9d__mode__op_01__osize[3] = {
+  /* 00 */  { UD_Ipopfw,       O_NONE,  O_NONE,  O_NONE,  P_def64|P_depM|P_oso },
+  /* 01 */  { UD_Ipopfd,       O_NONE,  O_NONE,  O_NONE,  P_def64|P_depM|P_oso },
+  /* 02 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__1byte__op_a5__osize[3] = {
+  /* 00 */  { UD_Imovsw,       O_NONE,  O_NONE,  O_NONE,  P_ImpAddr|P_oso|P_rexw },
+  /* 01 */  { UD_Imovsd,       O_NONE,  O_NONE,  O_NONE,  P_ImpAddr|P_oso|P_rexw },
+  /* 02 */  { UD_Imovsq,       O_NONE,  O_NONE,  O_NONE,  P_ImpAddr|P_oso|P_rexw },
+};
+
+static struct ud_itab_entry itab__1byte__op_a7__osize[3] = {
+  /* 00 */  { UD_Icmpsw,       O_NONE,  O_NONE,  O_NONE,  P_oso|P_rexw },
+  /* 01 */  { UD_Icmpsd,       O_NONE,  O_NONE,  O_NONE,  P_oso|P_rexw },
+  /* 02 */  { UD_Icmpsq,       O_NONE,  O_NONE,  O_NONE,  P_oso|P_rexw },
+};
+
+static struct ud_itab_entry itab__1byte__op_ab__osize[3] = {
+  /* 00 */  { UD_Istosw,       O_NONE,  O_NONE,  O_NONE,  P_ImpAddr|P_oso|P_rexw },
+  /* 01 */  { UD_Istosd,       O_NONE,  O_NONE,  O_NONE,  P_ImpAddr|P_oso|P_rexw },
+  /* 02 */  { UD_Istosq,       O_NONE,  O_NONE,  O_NONE,  P_ImpAddr|P_oso|P_rexw },
+};
+
+static struct ud_itab_entry itab__1byte__op_ad__osize[3] = {
+  /* 00 */  { UD_Ilodsw,       O_NONE,  O_NONE,  O_NONE,  P_ImpAddr|P_oso|P_rexw },
+  /* 01 */  { UD_Ilodsd,       O_NONE,  O_NONE,  O_NONE,  P_ImpAddr|P_oso|P_rexw },
+  /* 02 */  { UD_Ilodsq,       O_NONE,  O_NONE,  O_NONE,  P_ImpAddr|P_oso|P_rexw },
+};
+
+static struct ud_itab_entry itab__1byte__op_ae__mod[2] = {
+  /* 00 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_AE__MOD__OP_00__REG },
+  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__1byte__op_ae__mod__op_00__reg[8] = {
+  /* 00 */  { UD_Ifxsave,      O_M,     O_NONE,  O_NONE,  P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Ifxrstor,     O_M,     O_NONE,  O_NONE,  P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 02 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 03 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 04 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 05 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 06 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 07 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__1byte__op_af__osize[3] = {
+  /* 00 */  { UD_Iscasw,       O_NONE,  O_NONE,  O_NONE,  P_oso|P_rexw },
+  /* 01 */  { UD_Iscasd,       O_NONE,  O_NONE,  O_NONE,  P_oso|P_rexw },
+  /* 02 */  { UD_Iscasq,       O_NONE,  O_NONE,  O_NONE,  P_oso|P_rexw },
+};
+
+static struct ud_itab_entry itab__1byte__op_c0__reg[8] = {
+  /* 00 */  { UD_Irol,         O_Eb,    O_Ib,    O_NONE,  P_c1|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Iror,         O_Eb,    O_Ib,    O_NONE,  P_c1|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 02 */  { UD_Ircl,         O_Eb,    O_Ib,    O_NONE,  P_c1|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 03 */  { UD_Ircr,         O_Eb,    O_Ib,    O_NONE,  P_c1|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 04 */  { UD_Ishl,         O_Eb,    O_Ib,    O_NONE,  P_c1|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 05 */  { UD_Ishr,         O_Eb,    O_Ib,    O_NONE,  P_c1|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 06 */  { UD_Ishl,         O_Eb,    O_Ib,    O_NONE,  P_c1|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 07 */  { UD_Isar,         O_Eb,    O_Ib,    O_NONE,  P_c1|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+};
+
+static struct ud_itab_entry itab__1byte__op_c1__reg[8] = {
+  /* 00 */  { UD_Irol,         O_Ev,    O_Ib,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Iror,         O_Ev,    O_Ib,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 02 */  { UD_Ircl,         O_Ev,    O_Ib,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 03 */  { UD_Ircr,         O_Ev,    O_Ib,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 04 */  { UD_Ishl,         O_Ev,    O_Ib,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 05 */  { UD_Ishr,         O_Ev,    O_Ib,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 06 */  { UD_Ishl,         O_Ev,    O_Ib,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 07 */  { UD_Isar,         O_Ev,    O_Ib,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+};
+
+static struct ud_itab_entry itab__1byte__op_c6__reg[8] = {
+  /* 00 */  { UD_Imov,         O_Eb,    O_Ib,    O_NONE,  P_c1|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 02 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 03 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 04 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 05 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 06 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 07 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__1byte__op_c7__reg[8] = {
+  /* 00 */  { UD_Imov,         O_Ev,    O_Iz,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 02 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 03 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 04 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 05 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 06 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 07 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__1byte__op_cf__osize[3] = {
+  /* 00 */  { UD_Iiretw,       O_NONE,  O_NONE,  O_NONE,  P_oso|P_rexw },
+  /* 01 */  { UD_Iiretd,       O_NONE,  O_NONE,  O_NONE,  P_oso|P_rexw },
+  /* 02 */  { UD_Iiretq,       O_NONE,  O_NONE,  O_NONE,  P_oso|P_rexw },
+};
+
+static struct ud_itab_entry itab__1byte__op_d0__reg[8] = {
+  /* 00 */  { UD_Irol,         O_Eb,    O_I1,    O_NONE,  P_c1|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Iror,         O_Eb,    O_I1,    O_NONE,  P_c1|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 02 */  { UD_Ircl,         O_Eb,    O_I1,    O_NONE,  P_c1|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 03 */  { UD_Ircr,         O_Eb,    O_I1,    O_NONE,  P_c1|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 04 */  { UD_Ishl,         O_Eb,    O_I1,    O_NONE,  P_c1|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 05 */  { UD_Ishr,         O_Eb,    O_I1,    O_NONE,  P_c1|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 06 */  { UD_Ishl,         O_Eb,    O_I1,    O_NONE,  P_c1|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 07 */  { UD_Isar,         O_Eb,    O_I1,    O_NONE,  P_c1|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+};
+
+static struct ud_itab_entry itab__1byte__op_d1__reg[8] = {
+  /* 00 */  { UD_Irol,         O_Ev,    O_I1,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Iror,         O_Ev,    O_I1,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 02 */  { UD_Ircl,         O_Ev,    O_I1,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 03 */  { UD_Ircr,         O_Ev,    O_I1,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 04 */  { UD_Ishl,         O_Ev,    O_I1,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 05 */  { UD_Ishr,         O_Ev,    O_I1,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 06 */  { UD_Ishl,         O_Ev,    O_I1,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 07 */  { UD_Isar,         O_Ev,    O_I1,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+};
+
+static struct ud_itab_entry itab__1byte__op_d2__reg[8] = {
+  /* 00 */  { UD_Irol,         O_Eb,    O_CL,    O_NONE,  P_c1|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Iror,         O_Eb,    O_CL,    O_NONE,  P_c1|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 02 */  { UD_Ircl,         O_Eb,    O_CL,    O_NONE,  P_c1|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 03 */  { UD_Ircr,         O_Eb,    O_CL,    O_NONE,  P_c1|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 04 */  { UD_Ishl,         O_Eb,    O_CL,    O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 05 */  { UD_Ishr,         O_Eb,    O_CL,    O_NONE,  P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 06 */  { UD_Ishl,         O_Eb,    O_CL,    O_NONE,  P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 07 */  { UD_Isar,         O_Eb,    O_CL,    O_NONE,  P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+};
+
+static struct ud_itab_entry itab__1byte__op_d3__reg[8] = {
+  /* 00 */  { UD_Irol,         O_Ev,    O_CL,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Iror,         O_Ev,    O_CL,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 02 */  { UD_Ircl,         O_Ev,    O_CL,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 03 */  { UD_Ircr,         O_Ev,    O_CL,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 04 */  { UD_Ishl,         O_Ev,    O_CL,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 05 */  { UD_Ishr,         O_Ev,    O_CL,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 06 */  { UD_Ishl,         O_Ev,    O_CL,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 07 */  { UD_Isar,         O_Ev,    O_CL,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+};
+
+static struct ud_itab_entry itab__1byte__op_d8__mod[2] = {
+  /* 00 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_D8__MOD__OP_00__REG },
+  /* 01 */  { UD_Igrp_x87,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_D8__MOD__OP_01__X87 },
+};
+
+static struct ud_itab_entry itab__1byte__op_d8__mod__op_00__reg[8] = {
+  /* 00 */  { UD_Ifadd,        O_Md,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Ifmul,        O_Md,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 02 */  { UD_Ifcom,        O_Md,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 03 */  { UD_Ifcomp,       O_Md,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 04 */  { UD_Ifsub,        O_Md,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 05 */  { UD_Ifsubr,       O_Md,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 06 */  { UD_Ifdiv,        O_Md,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 07 */  { UD_Ifdivr,       O_Md,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+};
+
+static struct ud_itab_entry itab__1byte__op_d8__mod__op_01__x87[64] = {
+  /* 00 */  { UD_Ifadd,        O_ST0,   O_ST0,   O_NONE,  P_none },
+  /* 01 */  { UD_Ifadd,        O_ST0,   O_ST1,   O_NONE,  P_none },
+  /* 02 */  { UD_Ifadd,        O_ST0,   O_ST2,   O_NONE,  P_none },
+  /* 03 */  { UD_Ifadd,        O_ST0,   O_ST3,   O_NONE,  P_none },
+  /* 04 */  { UD_Ifadd,        O_ST0,   O_ST4,   O_NONE,  P_none },
+  /* 05 */  { UD_Ifadd,        O_ST0,   O_ST5,   O_NONE,  P_none },
+  /* 06 */  { UD_Ifadd,        O_ST0,   O_ST6,   O_NONE,  P_none },
+  /* 07 */  { UD_Ifadd,        O_ST0,   O_ST7,   O_NONE,  P_none },
+  /* 08 */  { UD_Ifmul,        O_ST0,   O_ST0,   O_NONE,  P_none },
+  /* 09 */  { UD_Ifmul,        O_ST0,   O_ST1,   O_NONE,  P_none },
+  /* 0A */  { UD_Ifmul,        O_ST0,   O_ST2,   O_NONE,  P_none },
+  /* 0B */  { UD_Ifmul,        O_ST0,   O_ST3,   O_NONE,  P_none },
+  /* 0C */  { UD_Ifmul,        O_ST0,   O_ST4,   O_NONE,  P_none },
+  /* 0D */  { UD_Ifmul,        O_ST0,   O_ST5,   O_NONE,  P_none },
+  /* 0E */  { UD_Ifmul,        O_ST0,   O_ST6,   O_NONE,  P_none },
+  /* 0F */  { UD_Ifmul,        O_ST0,   O_ST7,   O_NONE,  P_none },
+  /* 10 */  { UD_Ifcom,        O_ST0,   O_ST0,   O_NONE,  P_none },
+  /* 11 */  { UD_Ifcom,        O_ST0,   O_ST1,   O_NONE,  P_none },
+  /* 12 */  { UD_Ifcom,        O_ST0,   O_ST2,   O_NONE,  P_none },
+  /* 13 */  { UD_Ifcom,        O_ST0,   O_ST3,   O_NONE,  P_none },
+  /* 14 */  { UD_Ifcom,        O_ST0,   O_ST4,   O_NONE,  P_none },
+  /* 15 */  { UD_Ifcom,        O_ST0,   O_ST5,   O_NONE,  P_none },
+  /* 16 */  { UD_Ifcom,        O_ST0,   O_ST6,   O_NONE,  P_none },
+  /* 17 */  { UD_Ifcom,        O_ST0,   O_ST7,   O_NONE,  P_none },
+  /* 18 */  { UD_Ifcomp,       O_ST0,   O_ST0,   O_NONE,  P_none },
+  /* 19 */  { UD_Ifcomp,       O_ST0,   O_ST1,   O_NONE,  P_none },
+  /* 1A */  { UD_Ifcomp,       O_ST0,   O_ST2,   O_NONE,  P_none },
+  /* 1B */  { UD_Ifcomp,       O_ST0,   O_ST3,   O_NONE,  P_none },
+  /* 1C */  { UD_Ifcomp,       O_ST0,   O_ST4,   O_NONE,  P_none },
+  /* 1D */  { UD_Ifcomp,       O_ST0,   O_ST5,   O_NONE,  P_none },
+  /* 1E */  { UD_Ifcomp,       O_ST0,   O_ST6,   O_NONE,  P_none },
+  /* 1F */  { UD_Ifcomp,       O_ST0,   O_ST7,   O_NONE,  P_none },
+  /* 20 */  { UD_Ifsub,        O_ST0,   O_ST0,   O_NONE,  P_none },
+  /* 21 */  { UD_Ifsub,        O_ST0,   O_ST1,   O_NONE,  P_none },
+  /* 22 */  { UD_Ifsub,        O_ST0,   O_ST2,   O_NONE,  P_none },
+  /* 23 */  { UD_Ifsub,        O_ST0,   O_ST3,   O_NONE,  P_none },
+  /* 24 */  { UD_Ifsub,        O_ST0,   O_ST4,   O_NONE,  P_none },
+  /* 25 */  { UD_Ifsub,        O_ST0,   O_ST5,   O_NONE,  P_none },
+  /* 26 */  { UD_Ifsub,        O_ST0,   O_ST6,   O_NONE,  P_none },
+  /* 27 */  { UD_Ifsub,        O_ST0,   O_ST7,   O_NONE,  P_none },
+  /* 28 */  { UD_Ifsubr,       O_ST0,   O_ST0,   O_NONE,  P_none },
+  /* 29 */  { UD_Ifsubr,       O_ST0,   O_ST1,   O_NONE,  P_none },
+  /* 2A */  { UD_Ifsubr,       O_ST0,   O_ST2,   O_NONE,  P_none },
+  /* 2B */  { UD_Ifsubr,       O_ST0,   O_ST3,   O_NONE,  P_none },
+  /* 2C */  { UD_Ifsubr,       O_ST0,   O_ST4,   O_NONE,  P_none },
+  /* 2D */  { UD_Ifsubr,       O_ST0,   O_ST5,   O_NONE,  P_none },
+  /* 2E */  { UD_Ifsubr,       O_ST0,   O_ST6,   O_NONE,  P_none },
+  /* 2F */  { UD_Ifsubr,       O_ST0,   O_ST7,   O_NONE,  P_none },
+  /* 30 */  { UD_Ifdiv,        O_ST0,   O_ST0,   O_NONE,  P_none },
+  /* 31 */  { UD_Ifdiv,        O_ST0,   O_ST1,   O_NONE,  P_none },
+  /* 32 */  { UD_Ifdiv,        O_ST0,   O_ST2,   O_NONE,  P_none },
+  /* 33 */  { UD_Ifdiv,        O_ST0,   O_ST3,   O_NONE,  P_none },
+  /* 34 */  { UD_Ifdiv,        O_ST0,   O_ST4,   O_NONE,  P_none },
+  /* 35 */  { UD_Ifdiv,        O_ST0,   O_ST5,   O_NONE,  P_none },
+  /* 36 */  { UD_Ifdiv,        O_ST0,   O_ST6,   O_NONE,  P_none },
+  /* 37 */  { UD_Ifdiv,        O_ST0,   O_ST7,   O_NONE,  P_none },
+  /* 38 */  { UD_Ifdivr,       O_ST0,   O_ST0,   O_NONE,  P_none },
+  /* 39 */  { UD_Ifdivr,       O_ST0,   O_ST1,   O_NONE,  P_none },
+  /* 3A */  { UD_Ifdivr,       O_ST0,   O_ST2,   O_NONE,  P_none },
+  /* 3B */  { UD_Ifdivr,       O_ST0,   O_ST3,   O_NONE,  P_none },
+  /* 3C */  { UD_Ifdivr,       O_ST0,   O_ST4,   O_NONE,  P_none },
+  /* 3D */  { UD_Ifdivr,       O_ST0,   O_ST5,   O_NONE,  P_none },
+  /* 3E */  { UD_Ifdivr,       O_ST0,   O_ST6,   O_NONE,  P_none },
+  /* 3F */  { UD_Ifdivr,       O_ST0,   O_ST7,   O_NONE,  P_none },
+};
+
+static struct ud_itab_entry itab__1byte__op_d9__mod[2] = {
+  /* 00 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_D9__MOD__OP_00__REG },
+  /* 01 */  { UD_Igrp_x87,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_D9__MOD__OP_01__X87 },
+};
+
+static struct ud_itab_entry itab__1byte__op_d9__mod__op_00__reg[8] = {
+  /* 00 */  { UD_Ifld,         O_Md,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 02 */  { UD_Ifst,         O_Md,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 03 */  { UD_Ifstp,        O_Md,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 04 */  { UD_Ifldenv,      O_M,     O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 05 */  { UD_Ifldcw,       O_Mw,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 06 */  { UD_Ifnstenv,     O_M,     O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 07 */  { UD_Ifnstcw,      O_Mw,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+};
+
+static struct ud_itab_entry itab__1byte__op_d9__mod__op_01__x87[64] = {
+  /* 00 */  { UD_Ifld,         O_ST0,   O_ST0,   O_NONE,  P_none },
+  /* 01 */  { UD_Ifld,         O_ST0,   O_ST1,   O_NONE,  P_none },
+  /* 02 */  { UD_Ifld,         O_ST0,   O_ST2,   O_NONE,  P_none },
+  /* 03 */  { UD_Ifld,         O_ST0,   O_ST3,   O_NONE,  P_none },
+  /* 04 */  { UD_Ifld,         O_ST0,   O_ST4,   O_NONE,  P_none },
+  /* 05 */  { UD_Ifld,         O_ST0,   O_ST5,   O_NONE,  P_none },
+  /* 06 */  { UD_Ifld,         O_ST0,   O_ST6,   O_NONE,  P_none },
+  /* 07 */  { UD_Ifld,         O_ST0,   O_ST7,   O_NONE,  P_none },
+  /* 08 */  { UD_Ifxch,        O_ST0,   O_ST0,   O_NONE,  P_none },
+  /* 09 */  { UD_Ifxch,        O_ST0,   O_ST1,   O_NONE,  P_none },
+  /* 0A */  { UD_Ifxch,        O_ST0,   O_ST2,   O_NONE,  P_none },
+  /* 0B */  { UD_Ifxch,        O_ST0,   O_ST3,   O_NONE,  P_none },
+  /* 0C */  { UD_Ifxch,        O_ST0,   O_ST4,   O_NONE,  P_none },
+  /* 0D */  { UD_Ifxch,        O_ST0,   O_ST5,   O_NONE,  P_none },
+  /* 0E */  { UD_Ifxch,        O_ST0,   O_ST6,   O_NONE,  P_none },
+  /* 0F */  { UD_Ifxch,        O_ST0,   O_ST7,   O_NONE,  P_none },
+  /* 10 */  { UD_Ifnop,        O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 11 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 12 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 13 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 14 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 15 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 16 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 17 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 18 */  { UD_Ifstp1,       O_ST0,   O_NONE,  O_NONE,  P_none },
+  /* 19 */  { UD_Ifstp1,       O_ST1,   O_NONE,  O_NONE,  P_none },
+  /* 1A */  { UD_Ifstp1,       O_ST2,   O_NONE,  O_NONE,  P_none },
+  /* 1B */  { UD_Ifstp1,       O_ST3,   O_NONE,  O_NONE,  P_none },
+  /* 1C */  { UD_Ifstp1,       O_ST4,   O_NONE,  O_NONE,  P_none },
+  /* 1D */  { UD_Ifstp1,       O_ST5,   O_NONE,  O_NONE,  P_none },
+  /* 1E */  { UD_Ifstp1,       O_ST6,   O_NONE,  O_NONE,  P_none },
+  /* 1F */  { UD_Ifstp1,       O_ST7,   O_NONE,  O_NONE,  P_none },
+  /* 20 */  { UD_Ifchs,        O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 21 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 22 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 23 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 24 */  { UD_Iftst,        O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 25 */  { UD_Ifxam,        O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 26 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 27 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 28 */  { UD_Ifld1,        O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 29 */  { UD_Ifldl2t,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 2A */  { UD_Ifldl2e,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 2B */  { UD_Ifldlpi,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 2C */  { UD_Ifldlg2,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 2D */  { UD_Ifldln2,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 2E */  { UD_Ifldz,        O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 2F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 30 */  { UD_If2xm1,       O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 31 */  { UD_Ifyl2x,       O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 32 */  { UD_Ifptan,       O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 33 */  { UD_Ifpatan,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 34 */  { UD_Ifxtract,     O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 35 */  { UD_Ifprem1,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 36 */  { UD_Ifdecstp,     O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 37 */  { UD_Ifncstp,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 38 */  { UD_Ifprem,       O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 39 */  { UD_Ifyl2xp1,     O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 3A */  { UD_Ifsqrt,       O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 3B */  { UD_Ifsincos,     O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 3C */  { UD_Ifrndint,     O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 3D */  { UD_Ifscale,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 3E */  { UD_Ifsin,        O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 3F */  { UD_Ifcos,        O_NONE,  O_NONE,  O_NONE,  P_none },
+};
+
+static struct ud_itab_entry itab__1byte__op_da__mod[2] = {
+  /* 00 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_DA__MOD__OP_00__REG },
+  /* 01 */  { UD_Igrp_x87,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_DA__MOD__OP_01__X87 },
+};
+
+static struct ud_itab_entry itab__1byte__op_da__mod__op_00__reg[8] = {
+  /* 00 */  { UD_Ifiadd,       O_Md,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Ifimul,       O_Md,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 02 */  { UD_Ificom,       O_Md,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 03 */  { UD_Ificomp,      O_Md,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 04 */  { UD_Ifisub,       O_Md,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 05 */  { UD_Ifisubr,      O_Md,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 06 */  { UD_Ifidiv,       O_Md,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 07 */  { UD_Ifidivr,      O_Md,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+};
+
+static struct ud_itab_entry itab__1byte__op_da__mod__op_01__x87[64] = {
+  /* 00 */  { UD_Ifcmovb,      O_ST0,   O_ST0,   O_NONE,  P_none },
+  /* 01 */  { UD_Ifcmovb,      O_ST0,   O_ST1,   O_NONE,  P_none },
+  /* 02 */  { UD_Ifcmovb,      O_ST0,   O_ST2,   O_NONE,  P_none },
+  /* 03 */  { UD_Ifcmovb,      O_ST0,   O_ST3,   O_NONE,  P_none },
+  /* 04 */  { UD_Ifcmovb,      O_ST0,   O_ST4,   O_NONE,  P_none },
+  /* 05 */  { UD_Ifcmovb,      O_ST0,   O_ST5,   O_NONE,  P_none },
+  /* 06 */  { UD_Ifcmovb,      O_ST0,   O_ST6,   O_NONE,  P_none },
+  /* 07 */  { UD_Ifcmovb,      O_ST0,   O_ST7,   O_NONE,  P_none },
+  /* 08 */  { UD_Ifcmove,      O_ST0,   O_ST0,   O_NONE,  P_none },
+  /* 09 */  { UD_Ifcmove,      O_ST0,   O_ST1,   O_NONE,  P_none },
+  /* 0A */  { UD_Ifcmove,      O_ST0,   O_ST2,   O_NONE,  P_none },
+  /* 0B */  { UD_Ifcmove,      O_ST0,   O_ST3,   O_NONE,  P_none },
+  /* 0C */  { UD_Ifcmove,      O_ST0,   O_ST4,   O_NONE,  P_none },
+  /* 0D */  { UD_Ifcmove,      O_ST0,   O_ST5,   O_NONE,  P_none },
+  /* 0E */  { UD_Ifcmove,      O_ST0,   O_ST6,   O_NONE,  P_none },
+  /* 0F */  { UD_Ifcmove,      O_ST0,   O_ST7,   O_NONE,  P_none },
+  /* 10 */  { UD_Ifcmovbe,     O_ST0,   O_ST0,   O_NONE,  P_none },
+  /* 11 */  { UD_Ifcmovbe,     O_ST0,   O_ST1,   O_NONE,  P_none },
+  /* 12 */  { UD_Ifcmovbe,     O_ST0,   O_ST2,   O_NONE,  P_none },
+  /* 13 */  { UD_Ifcmovbe,     O_ST0,   O_ST3,   O_NONE,  P_none },
+  /* 14 */  { UD_Ifcmovbe,     O_ST0,   O_ST4,   O_NONE,  P_none },
+  /* 15 */  { UD_Ifcmovbe,     O_ST0,   O_ST5,   O_NONE,  P_none },
+  /* 16 */  { UD_Ifcmovbe,     O_ST0,   O_ST6,   O_NONE,  P_none },
+  /* 17 */  { UD_Ifcmovbe,     O_ST0,   O_ST7,   O_NONE,  P_none },
+  /* 18 */  { UD_Ifcmovu,      O_ST0,   O_ST0,   O_NONE,  P_none },
+  /* 19 */  { UD_Ifcmovu,      O_ST0,   O_ST1,   O_NONE,  P_none },
+  /* 1A */  { UD_Ifcmovu,      O_ST0,   O_ST2,   O_NONE,  P_none },
+  /* 1B */  { UD_Ifcmovu,      O_ST0,   O_ST3,   O_NONE,  P_none },
+  /* 1C */  { UD_Ifcmovu,      O_ST0,   O_ST4,   O_NONE,  P_none },
+  /* 1D */  { UD_Ifcmovu,      O_ST0,   O_ST5,   O_NONE,  P_none },
+  /* 1E */  { UD_Ifcmovu,      O_ST0,   O_ST6,   O_NONE,  P_none },
+  /* 1F */  { UD_Ifcmovu,      O_ST0,   O_ST7,   O_NONE,  P_none },
+  /* 20 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 21 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 22 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 23 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 24 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 25 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 26 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 27 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 28 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 29 */  { UD_Ifucompp,     O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 2A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 2B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 2C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 2D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 2E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 2F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 30 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 31 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 32 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 33 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 34 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 35 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 36 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 37 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 38 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 39 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__1byte__op_db__mod[2] = {
+  /* 00 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_DB__MOD__OP_00__REG },
+  /* 01 */  { UD_Igrp_x87,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_DB__MOD__OP_01__X87 },
+};
+
+static struct ud_itab_entry itab__1byte__op_db__mod__op_00__reg[8] = {
+  /* 00 */  { UD_Ifild,        O_Md,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Ifisttp,      O_Md,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 02 */  { UD_Ifist,        O_Md,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 03 */  { UD_Ifistp,       O_Md,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 04 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 05 */  { UD_Ifld,         O_Mt,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 06 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 07 */  { UD_Ifstp,        O_Mt,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+};
+
+static struct ud_itab_entry itab__1byte__op_db__mod__op_01__x87[64] = {
+  /* 00 */  { UD_Ifcmovnb,     O_ST0,   O_ST0,   O_NONE,  P_none },
+  /* 01 */  { UD_Ifcmovnb,     O_ST0,   O_ST1,   O_NONE,  P_none },
+  /* 02 */  { UD_Ifcmovnb,     O_ST0,   O_ST2,   O_NONE,  P_none },
+  /* 03 */  { UD_Ifcmovnb,     O_ST0,   O_ST3,   O_NONE,  P_none },
+  /* 04 */  { UD_Ifcmovnb,     O_ST0,   O_ST4,   O_NONE,  P_none },
+  /* 05 */  { UD_Ifcmovnb,     O_ST0,   O_ST5,   O_NONE,  P_none },
+  /* 06 */  { UD_Ifcmovnb,     O_ST0,   O_ST6,   O_NONE,  P_none },
+  /* 07 */  { UD_Ifcmovnb,     O_ST0,   O_ST7,   O_NONE,  P_none },
+  /* 08 */  { UD_Ifcmovne,     O_ST0,   O_ST0,   O_NONE,  P_none },
+  /* 09 */  { UD_Ifcmovne,     O_ST0,   O_ST1,   O_NONE,  P_none },
+  /* 0A */  { UD_Ifcmovne,     O_ST0,   O_ST2,   O_NONE,  P_none },
+  /* 0B */  { UD_Ifcmovne,     O_ST0,   O_ST3,   O_NONE,  P_none },
+  /* 0C */  { UD_Ifcmovne,     O_ST0,   O_ST4,   O_NONE,  P_none },
+  /* 0D */  { UD_Ifcmovne,     O_ST0,   O_ST5,   O_NONE,  P_none },
+  /* 0E */  { UD_Ifcmovne,     O_ST0,   O_ST6,   O_NONE,  P_none },
+  /* 0F */  { UD_Ifcmovne,     O_ST0,   O_ST7,   O_NONE,  P_none },
+  /* 10 */  { UD_Ifcmovnbe,    O_ST0,   O_ST0,   O_NONE,  P_none },
+  /* 11 */  { UD_Ifcmovnbe,    O_ST0,   O_ST1,   O_NONE,  P_none },
+  /* 12 */  { UD_Ifcmovnbe,    O_ST0,   O_ST2,   O_NONE,  P_none },
+  /* 13 */  { UD_Ifcmovnbe,    O_ST0,   O_ST3,   O_NONE,  P_none },
+  /* 14 */  { UD_Ifcmovnbe,    O_ST0,   O_ST4,   O_NONE,  P_none },
+  /* 15 */  { UD_Ifcmovnbe,    O_ST0,   O_ST5,   O_NONE,  P_none },
+  /* 16 */  { UD_Ifcmovnbe,    O_ST0,   O_ST6,   O_NONE,  P_none },
+  /* 17 */  { UD_Ifcmovnbe,    O_ST0,   O_ST7,   O_NONE,  P_none },
+  /* 18 */  { UD_Ifcmovnu,     O_ST0,   O_ST0,   O_NONE,  P_none },
+  /* 19 */  { UD_Ifcmovnu,     O_ST0,   O_ST1,   O_NONE,  P_none },
+  /* 1A */  { UD_Ifcmovnu,     O_ST0,   O_ST2,   O_NONE,  P_none },
+  /* 1B */  { UD_Ifcmovnu,     O_ST0,   O_ST3,   O_NONE,  P_none },
+  /* 1C */  { UD_Ifcmovnu,     O_ST0,   O_ST4,   O_NONE,  P_none },
+  /* 1D */  { UD_Ifcmovnu,     O_ST0,   O_ST5,   O_NONE,  P_none },
+  /* 1E */  { UD_Ifcmovnu,     O_ST0,   O_ST6,   O_NONE,  P_none },
+  /* 1F */  { UD_Ifcmovnu,     O_ST0,   O_ST7,   O_NONE,  P_none },
+  /* 20 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 21 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 22 */  { UD_Ifclex,       O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 23 */  { UD_Ifninit,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 24 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 25 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 26 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 27 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 28 */  { UD_Ifucomi,      O_ST0,   O_ST0,   O_NONE,  P_none },
+  /* 29 */  { UD_Ifucomi,      O_ST0,   O_ST1,   O_NONE,  P_none },
+  /* 2A */  { UD_Ifucomi,      O_ST0,   O_ST2,   O_NONE,  P_none },
+  /* 2B */  { UD_Ifucomi,      O_ST0,   O_ST3,   O_NONE,  P_none },
+  /* 2C */  { UD_Ifucomi,      O_ST0,   O_ST4,   O_NONE,  P_none },
+  /* 2D */  { UD_Ifucomi,      O_ST0,   O_ST5,   O_NONE,  P_none },
+  /* 2E */  { UD_Ifucomi,      O_ST0,   O_ST6,   O_NONE,  P_none },
+  /* 2F */  { UD_Ifucomi,      O_ST0,   O_ST7,   O_NONE,  P_none },
+  /* 30 */  { UD_Ifcomi,       O_ST0,   O_ST0,   O_NONE,  P_none },
+  /* 31 */  { UD_Ifcomi,       O_ST0,   O_ST1,   O_NONE,  P_none },
+  /* 32 */  { UD_Ifcomi,       O_ST0,   O_ST2,   O_NONE,  P_none },
+  /* 33 */  { UD_Ifcomi,       O_ST0,   O_ST3,   O_NONE,  P_none },
+  /* 34 */  { UD_Ifcomi,       O_ST0,   O_ST4,   O_NONE,  P_none },
+  /* 35 */  { UD_Ifcomi,       O_ST0,   O_ST5,   O_NONE,  P_none },
+  /* 36 */  { UD_Ifcomi,       O_ST0,   O_ST6,   O_NONE,  P_none },
+  /* 37 */  { UD_Ifcomi,       O_ST0,   O_ST7,   O_NONE,  P_none },
+  /* 38 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 39 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__1byte__op_dc__mod[2] = {
+  /* 00 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_DC__MOD__OP_00__REG },
+  /* 01 */  { UD_Igrp_x87,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_DC__MOD__OP_01__X87 },
+};
+
+static struct ud_itab_entry itab__1byte__op_dc__mod__op_00__reg[8] = {
+  /* 00 */  { UD_Ifadd,        O_Mq,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Ifmul,        O_Mq,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 02 */  { UD_Ifcom,        O_Mq,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 03 */  { UD_Ifcomp,       O_Mq,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 04 */  { UD_Ifsub,        O_Mq,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 05 */  { UD_Ifsubr,       O_Mq,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 06 */  { UD_Ifdiv,        O_Mq,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 07 */  { UD_Ifdivr,       O_Mq,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+};
+
+static struct ud_itab_entry itab__1byte__op_dc__mod__op_01__x87[64] = {
+  /* 00 */  { UD_Ifadd,        O_ST0,   O_ST0,   O_NONE,  P_none },
+  /* 01 */  { UD_Ifadd,        O_ST1,   O_ST0,   O_NONE,  P_none },
+  /* 02 */  { UD_Ifadd,        O_ST2,   O_ST0,   O_NONE,  P_none },
+  /* 03 */  { UD_Ifadd,        O_ST3,   O_ST0,   O_NONE,  P_none },
+  /* 04 */  { UD_Ifadd,        O_ST4,   O_ST0,   O_NONE,  P_none },
+  /* 05 */  { UD_Ifadd,        O_ST5,   O_ST0,   O_NONE,  P_none },
+  /* 06 */  { UD_Ifadd,        O_ST6,   O_ST0,   O_NONE,  P_none },
+  /* 07 */  { UD_Ifadd,        O_ST7,   O_ST0,   O_NONE,  P_none },
+  /* 08 */  { UD_Ifmul,        O_ST0,   O_ST0,   O_NONE,  P_none },
+  /* 09 */  { UD_Ifmul,        O_ST1,   O_ST0,   O_NONE,  P_none },
+  /* 0A */  { UD_Ifmul,        O_ST2,   O_ST0,   O_NONE,  P_none },
+  /* 0B */  { UD_Ifmul,        O_ST3,   O_ST0,   O_NONE,  P_none },
+  /* 0C */  { UD_Ifmul,        O_ST4,   O_ST0,   O_NONE,  P_none },
+  /* 0D */  { UD_Ifmul,        O_ST5,   O_ST0,   O_NONE,  P_none },
+  /* 0E */  { UD_Ifmul,        O_ST6,   O_ST0,   O_NONE,  P_none },
+  /* 0F */  { UD_Ifmul,        O_ST7,   O_ST0,   O_NONE,  P_none },
+  /* 10 */  { UD_Ifcom2,       O_ST0,   O_NONE,  O_NONE,  P_none },
+  /* 11 */  { UD_Ifcom2,       O_ST1,   O_NONE,  O_NONE,  P_none },
+  /* 12 */  { UD_Ifcom2,       O_ST2,   O_NONE,  O_NONE,  P_none },
+  /* 13 */  { UD_Ifcom2,       O_ST3,   O_NONE,  O_NONE,  P_none },
+  /* 14 */  { UD_Ifcom2,       O_ST4,   O_NONE,  O_NONE,  P_none },
+  /* 15 */  { UD_Ifcom2,       O_ST5,   O_NONE,  O_NONE,  P_none },
+  /* 16 */  { UD_Ifcom2,       O_ST6,   O_NONE,  O_NONE,  P_none },
+  /* 17 */  { UD_Ifcom2,       O_ST7,   O_NONE,  O_NONE,  P_none },
+  /* 18 */  { UD_Ifcomp3,      O_ST0,   O_NONE,  O_NONE,  P_none },
+  /* 19 */  { UD_Ifcomp3,      O_ST1,   O_NONE,  O_NONE,  P_none },
+  /* 1A */  { UD_Ifcomp3,      O_ST2,   O_NONE,  O_NONE,  P_none },
+  /* 1B */  { UD_Ifcomp3,      O_ST3,   O_NONE,  O_NONE,  P_none },
+  /* 1C */  { UD_Ifcomp3,      O_ST4,   O_NONE,  O_NONE,  P_none },
+  /* 1D */  { UD_Ifcomp3,      O_ST5,   O_NONE,  O_NONE,  P_none },
+  /* 1E */  { UD_Ifcomp3,      O_ST6,   O_NONE,  O_NONE,  P_none },
+  /* 1F */  { UD_Ifcomp3,      O_ST7,   O_NONE,  O_NONE,  P_none },
+  /* 20 */  { UD_Ifsubr,       O_ST0,   O_ST0,   O_NONE,  P_none },
+  /* 21 */  { UD_Ifsubr,       O_ST1,   O_ST0,   O_NONE,  P_none },
+  /* 22 */  { UD_Ifsubr,       O_ST2,   O_ST0,   O_NONE,  P_none },
+  /* 23 */  { UD_Ifsubr,       O_ST3,   O_ST0,   O_NONE,  P_none },
+  /* 24 */  { UD_Ifsubr,       O_ST4,   O_ST0,   O_NONE,  P_none },
+  /* 25 */  { UD_Ifsubr,       O_ST5,   O_ST0,   O_NONE,  P_none },
+  /* 26 */  { UD_Ifsubr,       O_ST6,   O_ST0,   O_NONE,  P_none },
+  /* 27 */  { UD_Ifsubr,       O_ST7,   O_ST0,   O_NONE,  P_none },
+  /* 28 */  { UD_Ifsub,        O_ST0,   O_ST0,   O_NONE,  P_none },
+  /* 29 */  { UD_Ifsub,        O_ST1,   O_ST0,   O_NONE,  P_none },
+  /* 2A */  { UD_Ifsub,        O_ST2,   O_ST0,   O_NONE,  P_none },
+  /* 2B */  { UD_Ifsub,        O_ST3,   O_ST0,   O_NONE,  P_none },
+  /* 2C */  { UD_Ifsub,        O_ST4,   O_ST0,   O_NONE,  P_none },
+  /* 2D */  { UD_Ifsub,        O_ST5,   O_ST0,   O_NONE,  P_none },
+  /* 2E */  { UD_Ifsub,        O_ST6,   O_ST0,   O_NONE,  P_none },
+  /* 2F */  { UD_Ifsub,        O_ST7,   O_ST0,   O_NONE,  P_none },
+  /* 30 */  { UD_Ifdivr,       O_ST0,   O_ST0,   O_NONE,  P_none },
+  /* 31 */  { UD_Ifdivr,       O_ST1,   O_ST0,   O_NONE,  P_none },
+  /* 32 */  { UD_Ifdivr,       O_ST2,   O_ST0,   O_NONE,  P_none },
+  /* 33 */  { UD_Ifdivr,       O_ST3,   O_ST0,   O_NONE,  P_none },
+  /* 34 */  { UD_Ifdivr,       O_ST4,   O_ST0,   O_NONE,  P_none },
+  /* 35 */  { UD_Ifdivr,       O_ST5,   O_ST0,   O_NONE,  P_none },
+  /* 36 */  { UD_Ifdivr,       O_ST6,   O_ST0,   O_NONE,  P_none },
+  /* 37 */  { UD_Ifdivr,       O_ST7,   O_ST0,   O_NONE,  P_none },
+  /* 38 */  { UD_Ifdiv,        O_ST0,   O_ST0,   O_NONE,  P_none },
+  /* 39 */  { UD_Ifdiv,        O_ST1,   O_ST0,   O_NONE,  P_none },
+  /* 3A */  { UD_Ifdiv,        O_ST2,   O_ST0,   O_NONE,  P_none },
+  /* 3B */  { UD_Ifdiv,        O_ST3,   O_ST0,   O_NONE,  P_none },
+  /* 3C */  { UD_Ifdiv,        O_ST4,   O_ST0,   O_NONE,  P_none },
+  /* 3D */  { UD_Ifdiv,        O_ST5,   O_ST0,   O_NONE,  P_none },
+  /* 3E */  { UD_Ifdiv,        O_ST6,   O_ST0,   O_NONE,  P_none },
+  /* 3F */  { UD_Ifdiv,        O_ST7,   O_ST0,   O_NONE,  P_none },
+};
+
+static struct ud_itab_entry itab__1byte__op_dd__mod[2] = {
+  /* 00 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_DD__MOD__OP_00__REG },
+  /* 01 */  { UD_Igrp_x87,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_DD__MOD__OP_01__X87 },
+};
+
+static struct ud_itab_entry itab__1byte__op_dd__mod__op_00__reg[8] = {
+  /* 00 */  { UD_Ifld,         O_Mq,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Ifisttp,      O_Mq,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 02 */  { UD_Ifst,         O_Mq,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 03 */  { UD_Ifstp,        O_Mq,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 04 */  { UD_Ifrstor,      O_M,     O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 05 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 06 */  { UD_Ifnsave,      O_M,     O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 07 */  { UD_Ifnstsw,      O_Mw,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+};
+
+static struct ud_itab_entry itab__1byte__op_dd__mod__op_01__x87[64] = {
+  /* 00 */  { UD_Iffree,       O_ST0,   O_NONE,  O_NONE,  P_none },
+  /* 01 */  { UD_Iffree,       O_ST1,   O_NONE,  O_NONE,  P_none },
+  /* 02 */  { UD_Iffree,       O_ST2,   O_NONE,  O_NONE,  P_none },
+  /* 03 */  { UD_Iffree,       O_ST3,   O_NONE,  O_NONE,  P_none },
+  /* 04 */  { UD_Iffree,       O_ST4,   O_NONE,  O_NONE,  P_none },
+  /* 05 */  { UD_Iffree,       O_ST5,   O_NONE,  O_NONE,  P_none },
+  /* 06 */  { UD_Iffree,       O_ST6,   O_NONE,  O_NONE,  P_none },
+  /* 07 */  { UD_Iffree,       O_ST7,   O_NONE,  O_NONE,  P_none },
+  /* 08 */  { UD_Ifxch4,       O_ST0,   O_NONE,  O_NONE,  P_none },
+  /* 09 */  { UD_Ifxch4,       O_ST1,   O_NONE,  O_NONE,  P_none },
+  /* 0A */  { UD_Ifxch4,       O_ST2,   O_NONE,  O_NONE,  P_none },
+  /* 0B */  { UD_Ifxch4,       O_ST3,   O_NONE,  O_NONE,  P_none },
+  /* 0C */  { UD_Ifxch4,       O_ST4,   O_NONE,  O_NONE,  P_none },
+  /* 0D */  { UD_Ifxch4,       O_ST5,   O_NONE,  O_NONE,  P_none },
+  /* 0E */  { UD_Ifxch4,       O_ST6,   O_NONE,  O_NONE,  P_none },
+  /* 0F */  { UD_Ifxch4,       O_ST7,   O_NONE,  O_NONE,  P_none },
+  /* 10 */  { UD_Ifst,         O_ST0,   O_NONE,  O_NONE,  P_none },
+  /* 11 */  { UD_Ifst,         O_ST1,   O_NONE,  O_NONE,  P_none },
+  /* 12 */  { UD_Ifst,         O_ST2,   O_NONE,  O_NONE,  P_none },
+  /* 13 */  { UD_Ifst,         O_ST3,   O_NONE,  O_NONE,  P_none },
+  /* 14 */  { UD_Ifst,         O_ST4,   O_NONE,  O_NONE,  P_none },
+  /* 15 */  { UD_Ifst,         O_ST5,   O_NONE,  O_NONE,  P_none },
+  /* 16 */  { UD_Ifst,         O_ST6,   O_NONE,  O_NONE,  P_none },
+  /* 17 */  { UD_Ifst,         O_ST7,   O_NONE,  O_NONE,  P_none },
+  /* 18 */  { UD_Ifstp,        O_ST0,   O_NONE,  O_NONE,  P_none },
+  /* 19 */  { UD_Ifstp,        O_ST1,   O_NONE,  O_NONE,  P_none },
+  /* 1A */  { UD_Ifstp,        O_ST2,   O_NONE,  O_NONE,  P_none },
+  /* 1B */  { UD_Ifstp,        O_ST3,   O_NONE,  O_NONE,  P_none },
+  /* 1C */  { UD_Ifstp,        O_ST4,   O_NONE,  O_NONE,  P_none },
+  /* 1D */  { UD_Ifstp,        O_ST5,   O_NONE,  O_NONE,  P_none },
+  /* 1E */  { UD_Ifstp,        O_ST6,   O_NONE,  O_NONE,  P_none },
+  /* 1F */  { UD_Ifstp,        O_ST7,   O_NONE,  O_NONE,  P_none },
+  /* 20 */  { UD_Ifucom,       O_ST0,   O_NONE,  O_NONE,  P_none },
+  /* 21 */  { UD_Ifucom,       O_ST1,   O_NONE,  O_NONE,  P_none },
+  /* 22 */  { UD_Ifucom,       O_ST2,   O_NONE,  O_NONE,  P_none },
+  /* 23 */  { UD_Ifucom,       O_ST3,   O_NONE,  O_NONE,  P_none },
+  /* 24 */  { UD_Ifucom,       O_ST4,   O_NONE,  O_NONE,  P_none },
+  /* 25 */  { UD_Ifucom,       O_ST5,   O_NONE,  O_NONE,  P_none },
+  /* 26 */  { UD_Ifucom,       O_ST6,   O_NONE,  O_NONE,  P_none },
+  /* 27 */  { UD_Ifucom,       O_ST7,   O_NONE,  O_NONE,  P_none },
+  /* 28 */  { UD_Ifucomp,      O_ST0,   O_NONE,  O_NONE,  P_none },
+  /* 29 */  { UD_Ifucomp,      O_ST1,   O_NONE,  O_NONE,  P_none },
+  /* 2A */  { UD_Ifucomp,      O_ST2,   O_NONE,  O_NONE,  P_none },
+  /* 2B */  { UD_Ifucomp,      O_ST3,   O_NONE,  O_NONE,  P_none },
+  /* 2C */  { UD_Ifucomp,      O_ST4,   O_NONE,  O_NONE,  P_none },
+  /* 2D */  { UD_Ifucomp,      O_ST5,   O_NONE,  O_NONE,  P_none },
+  /* 2E */  { UD_Ifucomp,      O_ST6,   O_NONE,  O_NONE,  P_none },
+  /* 2F */  { UD_Ifucomp,      O_ST7,   O_NONE,  O_NONE,  P_none },
+  /* 30 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 31 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 32 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 33 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 34 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 35 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 36 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 37 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 38 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 39 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__1byte__op_de__mod[2] = {
+  /* 00 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_DE__MOD__OP_00__REG },
+  /* 01 */  { UD_Igrp_x87,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_DE__MOD__OP_01__X87 },
+};
+
+static struct ud_itab_entry itab__1byte__op_de__mod__op_00__reg[8] = {
+  /* 00 */  { UD_Ifiadd,       O_Mw,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Ifimul,       O_Mw,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 02 */  { UD_Ificom,       O_Mw,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 03 */  { UD_Ificomp,      O_Mw,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 04 */  { UD_Ifisub,       O_Mw,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 05 */  { UD_Ifisubr,      O_Mw,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 06 */  { UD_Ifidiv,       O_Mw,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 07 */  { UD_Ifidivr,      O_Mw,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+};
+
+static struct ud_itab_entry itab__1byte__op_de__mod__op_01__x87[64] = {
+  /* 00 */  { UD_Ifaddp,       O_ST0,   O_ST0,   O_NONE,  P_none },
+  /* 01 */  { UD_Ifaddp,       O_ST1,   O_ST0,   O_NONE,  P_none },
+  /* 02 */  { UD_Ifaddp,       O_ST2,   O_ST0,   O_NONE,  P_none },
+  /* 03 */  { UD_Ifaddp,       O_ST3,   O_ST0,   O_NONE,  P_none },
+  /* 04 */  { UD_Ifaddp,       O_ST4,   O_ST0,   O_NONE,  P_none },
+  /* 05 */  { UD_Ifaddp,       O_ST5,   O_ST0,   O_NONE,  P_none },
+  /* 06 */  { UD_Ifaddp,       O_ST6,   O_ST0,   O_NONE,  P_none },
+  /* 07 */  { UD_Ifaddp,       O_ST7,   O_ST0,   O_NONE,  P_none },
+  /* 08 */  { UD_Ifmulp,       O_ST0,   O_ST0,   O_NONE,  P_none },
+  /* 09 */  { UD_Ifmulp,       O_ST1,   O_ST0,   O_NONE,  P_none },
+  /* 0A */  { UD_Ifmulp,       O_ST2,   O_ST0,   O_NONE,  P_none },
+  /* 0B */  { UD_Ifmulp,       O_ST3,   O_ST0,   O_NONE,  P_none },
+  /* 0C */  { UD_Ifmulp,       O_ST4,   O_ST0,   O_NONE,  P_none },
+  /* 0D */  { UD_Ifmulp,       O_ST5,   O_ST0,   O_NONE,  P_none },
+  /* 0E */  { UD_Ifmulp,       O_ST6,   O_ST0,   O_NONE,  P_none },
+  /* 0F */  { UD_Ifmulp,       O_ST7,   O_ST0,   O_NONE,  P_none },
+  /* 10 */  { UD_Ifcomp5,      O_ST0,   O_NONE,  O_NONE,  P_none },
+  /* 11 */  { UD_Ifcomp5,      O_ST1,   O_NONE,  O_NONE,  P_none },
+  /* 12 */  { UD_Ifcomp5,      O_ST2,   O_NONE,  O_NONE,  P_none },
+  /* 13 */  { UD_Ifcomp5,      O_ST3,   O_NONE,  O_NONE,  P_none },
+  /* 14 */  { UD_Ifcomp5,      O_ST4,   O_NONE,  O_NONE,  P_none },
+  /* 15 */  { UD_Ifcomp5,      O_ST5,   O_NONE,  O_NONE,  P_none },
+  /* 16 */  { UD_Ifcomp5,      O_ST6,   O_NONE,  O_NONE,  P_none },
+  /* 17 */  { UD_Ifcomp5,      O_ST7,   O_NONE,  O_NONE,  P_none },
+  /* 18 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 19 */  { UD_Ifcompp,      O_NONE,  O_NONE,  O_NONE,  P_none },
+  /* 1A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 1B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 1C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 1D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 1E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 1F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 20 */  { UD_Ifsubrp,      O_ST0,   O_ST0,   O_NONE,  P_none },
+  /* 21 */  { UD_Ifsubrp,      O_ST1,   O_ST0,   O_NONE,  P_none },
+  /* 22 */  { UD_Ifsubrp,      O_ST2,   O_ST0,   O_NONE,  P_none },
+  /* 23 */  { UD_Ifsubrp,      O_ST3,   O_ST0,   O_NONE,  P_none },
+  /* 24 */  { UD_Ifsubrp,      O_ST4,   O_ST0,   O_NONE,  P_none },
+  /* 25 */  { UD_Ifsubrp,      O_ST5,   O_ST0,   O_NONE,  P_none },
+  /* 26 */  { UD_Ifsubrp,      O_ST6,   O_ST0,   O_NONE,  P_none },
+  /* 27 */  { UD_Ifsubrp,      O_ST7,   O_ST0,   O_NONE,  P_none },
+  /* 28 */  { UD_Ifsubp,       O_ST0,   O_ST0,   O_NONE,  P_none },
+  /* 29 */  { UD_Ifsubp,       O_ST1,   O_ST0,   O_NONE,  P_none },
+  /* 2A */  { UD_Ifsubp,       O_ST2,   O_ST0,   O_NONE,  P_none },
+  /* 2B */  { UD_Ifsubp,       O_ST3,   O_ST0,   O_NONE,  P_none },
+  /* 2C */  { UD_Ifsubp,       O_ST4,   O_ST0,   O_NONE,  P_none },
+  /* 2D */  { UD_Ifsubp,       O_ST5,   O_ST0,   O_NONE,  P_none },
+  /* 2E */  { UD_Ifsubp,       O_ST6,   O_ST0,   O_NONE,  P_none },
+  /* 2F */  { UD_Ifsubp,       O_ST7,   O_ST0,   O_NONE,  P_none },
+  /* 30 */  { UD_Ifdivrp,      O_ST0,   O_ST0,   O_NONE,  P_none },
+  /* 31 */  { UD_Ifdivrp,      O_ST1,   O_ST0,   O_NONE,  P_none },
+  /* 32 */  { UD_Ifdivrp,      O_ST2,   O_ST0,   O_NONE,  P_none },
+  /* 33 */  { UD_Ifdivrp,      O_ST3,   O_ST0,   O_NONE,  P_none },
+  /* 34 */  { UD_Ifdivrp,      O_ST4,   O_ST0,   O_NONE,  P_none },
+  /* 35 */  { UD_Ifdivrp,      O_ST5,   O_ST0,   O_NONE,  P_none },
+  /* 36 */  { UD_Ifdivrp,      O_ST6,   O_ST0,   O_NONE,  P_none },
+  /* 37 */  { UD_Ifdivrp,      O_ST7,   O_ST0,   O_NONE,  P_none },
+  /* 38 */  { UD_Ifdivp,       O_ST0,   O_ST0,   O_NONE,  P_none },
+  /* 39 */  { UD_Ifdivp,       O_ST1,   O_ST0,   O_NONE,  P_none },
+  /* 3A */  { UD_Ifdivp,       O_ST2,   O_ST0,   O_NONE,  P_none },
+  /* 3B */  { UD_Ifdivp,       O_ST3,   O_ST0,   O_NONE,  P_none },
+  /* 3C */  { UD_Ifdivp,       O_ST4,   O_ST0,   O_NONE,  P_none },
+  /* 3D */  { UD_Ifdivp,       O_ST5,   O_ST0,   O_NONE,  P_none },
+  /* 3E */  { UD_Ifdivp,       O_ST6,   O_ST0,   O_NONE,  P_none },
+  /* 3F */  { UD_Ifdivp,       O_ST7,   O_ST0,   O_NONE,  P_none },
+};
+
+static struct ud_itab_entry itab__1byte__op_df__mod[2] = {
+  /* 00 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_DF__MOD__OP_00__REG },
+  /* 01 */  { UD_Igrp_x87,     O_NONE, O_NONE, O_NONE,    ITAB__1BYTE__OP_DF__MOD__OP_01__X87 },
+};
+
+static struct ud_itab_entry itab__1byte__op_df__mod__op_00__reg[8] = {
+  /* 00 */  { UD_Ifild,        O_Mw,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Ifisttp,      O_Mw,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 02 */  { UD_Ifist,        O_Mw,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 03 */  { UD_Ifistp,       O_Mw,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 04 */  { UD_Ifbld,        O_Mt,    O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 05 */  { UD_Ifild,        O_Mq,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 06 */  { UD_Ifbstp,       O_Mt,    O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 07 */  { UD_Ifistp,       O_Mq,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+};
+
+static struct ud_itab_entry itab__1byte__op_df__mod__op_01__x87[64] = {
+  /* 00 */  { UD_Iffreep,      O_ST0,   O_NONE,  O_NONE,  P_none },
+  /* 01 */  { UD_Iffreep,      O_ST1,   O_NONE,  O_NONE,  P_none },
+  /* 02 */  { UD_Iffreep,      O_ST2,   O_NONE,  O_NONE,  P_none },
+  /* 03 */  { UD_Iffreep,      O_ST3,   O_NONE,  O_NONE,  P_none },
+  /* 04 */  { UD_Iffreep,      O_ST4,   O_NONE,  O_NONE,  P_none },
+  /* 05 */  { UD_Iffreep,      O_ST5,   O_NONE,  O_NONE,  P_none },
+  /* 06 */  { UD_Iffreep,      O_ST6,   O_NONE,  O_NONE,  P_none },
+  /* 07 */  { UD_Iffreep,      O_ST7,   O_NONE,  O_NONE,  P_none },
+  /* 08 */  { UD_Ifxch7,       O_ST0,   O_NONE,  O_NONE,  P_none },
+  /* 09 */  { UD_Ifxch7,       O_ST1,   O_NONE,  O_NONE,  P_none },
+  /* 0A */  { UD_Ifxch7,       O_ST2,   O_NONE,  O_NONE,  P_none },
+  /* 0B */  { UD_Ifxch7,       O_ST3,   O_NONE,  O_NONE,  P_none },
+  /* 0C */  { UD_Ifxch7,       O_ST4,   O_NONE,  O_NONE,  P_none },
+  /* 0D */  { UD_Ifxch7,       O_ST5,   O_NONE,  O_NONE,  P_none },
+  /* 0E */  { UD_Ifxch7,       O_ST6,   O_NONE,  O_NONE,  P_none },
+  /* 0F */  { UD_Ifxch7,       O_ST7,   O_NONE,  O_NONE,  P_none },
+  /* 10 */  { UD_Ifstp8,       O_ST0,   O_NONE,  O_NONE,  P_none },
+  /* 11 */  { UD_Ifstp8,       O_ST1,   O_NONE,  O_NONE,  P_none },
+  /* 12 */  { UD_Ifstp8,       O_ST2,   O_NONE,  O_NONE,  P_none },
+  /* 13 */  { UD_Ifstp8,       O_ST3,   O_NONE,  O_NONE,  P_none },
+  /* 14 */  { UD_Ifstp8,       O_ST4,   O_NONE,  O_NONE,  P_none },
+  /* 15 */  { UD_Ifstp8,       O_ST5,   O_NONE,  O_NONE,  P_none },
+  /* 16 */  { UD_Ifstp8,       O_ST6,   O_NONE,  O_NONE,  P_none },
+  /* 17 */  { UD_Ifstp8,       O_ST7,   O_NONE,  O_NONE,  P_none },
+  /* 18 */  { UD_Ifstp9,       O_ST0,   O_NONE,  O_NONE,  P_none },
+  /* 19 */  { UD_Ifstp9,       O_ST1,   O_NONE,  O_NONE,  P_none },
+  /* 1A */  { UD_Ifstp9,       O_ST2,   O_NONE,  O_NONE,  P_none },
+  /* 1B */  { UD_Ifstp9,       O_ST3,   O_NONE,  O_NONE,  P_none },
+  /* 1C */  { UD_Ifstp9,       O_ST4,   O_NONE,  O_NONE,  P_none },
+  /* 1D */  { UD_Ifstp9,       O_ST5,   O_NONE,  O_NONE,  P_none },
+  /* 1E */  { UD_Ifstp9,       O_ST6,   O_NONE,  O_NONE,  P_none },
+  /* 1F */  { UD_Ifstp9,       O_ST7,   O_NONE,  O_NONE,  P_none },
+  /* 20 */  { UD_Ifnstsw,      O_AX,    O_NONE,  O_NONE,  P_none },
+  /* 21 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 22 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 23 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 24 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 25 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 26 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 27 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 28 */  { UD_Ifucomip,     O_ST0,   O_ST0,   O_NONE,  P_none },
+  /* 29 */  { UD_Ifucomip,     O_ST0,   O_ST1,   O_NONE,  P_none },
+  /* 2A */  { UD_Ifucomip,     O_ST0,   O_ST2,   O_NONE,  P_none },
+  /* 2B */  { UD_Ifucomip,     O_ST0,   O_ST3,   O_NONE,  P_none },
+  /* 2C */  { UD_Ifucomip,     O_ST0,   O_ST4,   O_NONE,  P_none },
+  /* 2D */  { UD_Ifucomip,     O_ST0,   O_ST5,   O_NONE,  P_none },
+  /* 2E */  { UD_Ifucomip,     O_ST0,   O_ST6,   O_NONE,  P_none },
+  /* 2F */  { UD_Ifucomip,     O_ST0,   O_ST7,   O_NONE,  P_none },
+  /* 30 */  { UD_Ifcomip,      O_ST0,   O_ST0,   O_NONE,  P_none },
+  /* 31 */  { UD_Ifcomip,      O_ST0,   O_ST1,   O_NONE,  P_none },
+  /* 32 */  { UD_Ifcomip,      O_ST0,   O_ST2,   O_NONE,  P_none },
+  /* 33 */  { UD_Ifcomip,      O_ST0,   O_ST3,   O_NONE,  P_none },
+  /* 34 */  { UD_Ifcomip,      O_ST0,   O_ST4,   O_NONE,  P_none },
+  /* 35 */  { UD_Ifcomip,      O_ST0,   O_ST5,   O_NONE,  P_none },
+  /* 36 */  { UD_Ifcomip,      O_ST0,   O_ST6,   O_NONE,  P_none },
+  /* 37 */  { UD_Ifcomip,      O_ST0,   O_ST7,   O_NONE,  P_none },
+  /* 38 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 39 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__1byte__op_e3__asize[3] = {
+  /* 00 */  { UD_Ijcxz,        O_Jb,    O_NONE,  O_NONE,  P_aso },
+  /* 01 */  { UD_Ijecxz,       O_Jb,    O_NONE,  O_NONE,  P_aso },
+  /* 02 */  { UD_Ijrcxz,       O_Jb,    O_NONE,  O_NONE,  P_aso },
+};
+
+static struct ud_itab_entry itab__1byte__op_f6__reg[8] = {
+  /* 00 */  { UD_Itest,        O_Eb,    O_Ib,    O_NONE,  P_c1|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Itest,        O_Eb,    O_Ib,    O_NONE,  P_c1|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 02 */  { UD_Inot,         O_Eb,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 03 */  { UD_Ineg,         O_Eb,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 04 */  { UD_Imul,         O_Eb,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 05 */  { UD_Iimul,        O_Eb,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 06 */  { UD_Idiv,         O_Eb,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 07 */  { UD_Iidiv,        O_Eb,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+};
+
+static struct ud_itab_entry itab__1byte__op_f7__reg[8] = {
+  /* 00 */  { UD_Itest,        O_Ev,    O_Iz,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Itest,        O_Ev,    O_Iz,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 02 */  { UD_Inot,         O_Ev,    O_NONE,  O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 03 */  { UD_Ineg,         O_Ev,    O_NONE,  O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 04 */  { UD_Imul,         O_Ev,    O_NONE,  O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 05 */  { UD_Iimul,        O_Ev,    O_NONE,  O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 06 */  { UD_Idiv,         O_Ev,    O_NONE,  O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 07 */  { UD_Iidiv,        O_Ev,    O_NONE,  O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+};
+
+static struct ud_itab_entry itab__1byte__op_fe__reg[8] = {
+  /* 00 */  { UD_Iinc,         O_Eb,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Idec,         O_Eb,    O_NONE,  O_NONE,  P_c1|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 02 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 03 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 04 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 05 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 06 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 07 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__1byte__op_ff__reg[8] = {
+  /* 00 */  { UD_Iinc,         O_Ev,    O_NONE,  O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 01 */  { UD_Idec,         O_Ev,    O_NONE,  O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 02 */  { UD_Icall,        O_Ev,    O_NONE,  O_NONE,  P_c1|P_def64|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 03 */  { UD_Icall,        O_Ep,    O_NONE,  O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 04 */  { UD_Ijmp,         O_Ev,    O_NONE,  O_NONE,  P_c1|P_def64|P_depM|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 05 */  { UD_Ijmp,         O_Ep,    O_NONE,  O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 06 */  { UD_Ipush,        O_Ev,    O_NONE,  O_NONE,  P_c1|P_def64|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 07 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__3dnow[256] = {
+  /* 00 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 02 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 03 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 04 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 05 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 06 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 07 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 08 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 09 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 0A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 0B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 0C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 0D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 0E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 0F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 10 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 11 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 12 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 13 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 14 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 15 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 16 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 17 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 18 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 19 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 1A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 1B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 1C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 1D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 1E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 1F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 20 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 21 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 22 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 23 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 24 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 25 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 26 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 27 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 28 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 29 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 2A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 2B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 2C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 2D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 2E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 2F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 30 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 31 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 32 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 33 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 34 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 35 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 36 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 37 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 38 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 39 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 40 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 41 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 42 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 43 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 44 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 45 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 46 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 47 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 48 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 49 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 4A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 4B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 4C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 4D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 4E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 4F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 50 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 51 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 52 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 53 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 54 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 55 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 56 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 57 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 58 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 59 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 5A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 5B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 5C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 5D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 5E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 5F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 60 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 61 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 62 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 63 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 64 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 65 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 66 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 67 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 68 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 69 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 6A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 6B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 6C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 6D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 6E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 6F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 70 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 71 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 72 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 73 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 74 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 75 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 76 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 77 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 78 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 79 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 7A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 7B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 7C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 7D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 7E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 7F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 80 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 81 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 82 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 83 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 84 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 85 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 86 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 87 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 88 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 89 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 8A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 8B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 8C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 8D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 8E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 8F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 90 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 91 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 92 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 93 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 94 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 95 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 96 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 97 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 98 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 99 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 9A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 9B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 9C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 9D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 9E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 9F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A0 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A1 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A2 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A3 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A4 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A5 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A6 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A7 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A8 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A9 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* AA */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* AB */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* AC */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* AD */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* AE */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* AF */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B0 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B1 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B2 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B3 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B4 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B5 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B6 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B7 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B8 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B9 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* BA */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* BB */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* BC */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* BD */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* BE */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* BF */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* C0 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* C1 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* C2 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* C3 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* C4 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* C5 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* C6 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* C7 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* C8 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* C9 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* CA */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* CB */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* CC */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* CD */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* CE */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* CF */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* D0 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* D1 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* D2 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* D3 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* D4 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* D5 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* D6 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* D7 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* D8 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* D9 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* DA */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* DB */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* DC */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* DD */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* DE */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* DF */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* E0 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* E1 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* E2 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* E3 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* E4 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* E5 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* E6 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* E7 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* E8 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* E9 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* EA */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* EB */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* EC */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* ED */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* EE */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* EF */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* F0 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* F1 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* F2 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* F3 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* F4 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* F5 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* F6 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* F7 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* F8 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* F9 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* FA */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* FB */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* FC */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* FD */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* FE */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* FF */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__pfx_sse66__0f[256] = {
+  /* 00 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 02 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 03 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 04 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 05 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 06 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 07 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 08 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 09 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 0A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 0B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 0C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 0D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 0E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 0F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 10 */  { UD_Imovupd,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 11 */  { UD_Imovupd,      O_W,     O_V,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 12 */  { UD_Imovlpd,      O_V,     O_M,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 13 */  { UD_Imovlpd,      O_M,     O_V,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 14 */  { UD_Iunpcklpd,    O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 15 */  { UD_Iunpckhpd,    O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 16 */  { UD_Imovhpd,      O_V,     O_M,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 17 */  { UD_Imovhpd,      O_M,     O_V,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 18 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 19 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 1A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 1B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 1C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 1D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 1E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 1F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 20 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 21 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 22 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 23 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 24 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 25 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 26 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 27 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 28 */  { UD_Imovapd,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 29 */  { UD_Imovapd,      O_W,     O_V,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 2A */  { UD_Icvtpi2pd,    O_V,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 2B */  { UD_Imovntpd,     O_M,     O_V,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 2C */  { UD_Icvttpd2pi,   O_P,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 2D */  { UD_Icvtpd2pi,    O_P,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 2E */  { UD_Iucomisd,     O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 2F */  { UD_Icomisd,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 30 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 31 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 32 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 33 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 34 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 35 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 36 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 37 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 38 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 39 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 40 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 41 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 42 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 43 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 44 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 45 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 46 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 47 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 48 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 49 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 4A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 4B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 4C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 4D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 4E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 4F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 50 */  { UD_Imovmskpd,    O_Gd,    O_VR,    O_NONE,  P_oso|P_rexr|P_rexb },
+  /* 51 */  { UD_Isqrtpd,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 52 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 53 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 54 */  { UD_Iandpd,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 55 */  { UD_Iandnpd,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 56 */  { UD_Iorpd,        O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 57 */  { UD_Ixorpd,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 58 */  { UD_Iaddpd,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 59 */  { UD_Imulpd,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 5A */  { UD_Icvtpd2ps,    O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 5B */  { UD_Icvtps2dq,    O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 5C */  { UD_Isubpd,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 5D */  { UD_Iminpd,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 5E */  { UD_Idivpd,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 5F */  { UD_Imaxpd,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 60 */  { UD_Ipunpcklbw,   O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 61 */  { UD_Ipunpcklwd,   O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 62 */  { UD_Ipunpckldq,   O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 63 */  { UD_Ipacksswb,    O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 64 */  { UD_Ipcmpgtb,     O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 65 */  { UD_Ipcmpgtw,     O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 66 */  { UD_Ipcmpgtd,     O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 67 */  { UD_Ipackuswb,    O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 68 */  { UD_Ipunpckhbw,   O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 69 */  { UD_Ipunpckhwd,   O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 6A */  { UD_Ipunpckhdq,   O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 6B */  { UD_Ipackssdw,    O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 6C */  { UD_Ipunpcklqdq,  O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 6D */  { UD_Ipunpckhqdq,  O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 6E */  { UD_Imovd,        O_V,     O_Ex,    O_NONE,  P_c2|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 6F */  { UD_Imovdqa,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 70 */  { UD_Ipshufd,      O_V,     O_W,     O_Ib,    P_aso|P_rexr|P_rexx|P_rexb },
+  /* 71 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__PFX_SSE66__0F__OP_71__REG },
+  /* 72 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__PFX_SSE66__0F__OP_72__REG },
+  /* 73 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__PFX_SSE66__0F__OP_73__REG },
+  /* 74 */  { UD_Ipcmpeqb,     O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 75 */  { UD_Ipcmpeqw,     O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 76 */  { UD_Ipcmpeqd,     O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 77 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 78 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 79 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 7A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 7B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 7C */  { UD_Ihaddpd,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 7D */  { UD_Ihsubpd,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 7E */  { UD_Imovd,        O_Ex,    O_V,     O_NONE,  P_c1|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 7F */  { UD_Imovdqa,      O_W,     O_V,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 80 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 81 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 82 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 83 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 84 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 85 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 86 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 87 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 88 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 89 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 8A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 8B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 8C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 8D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 8E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 8F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 90 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 91 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 92 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 93 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 94 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 95 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 96 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 97 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 98 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 99 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 9A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 9B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 9C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 9D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 9E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 9F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A0 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A1 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A2 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A3 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A4 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A5 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A6 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A7 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A8 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A9 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* AA */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* AB */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* AC */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* AD */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* AE */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* AF */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B0 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B1 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B2 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B3 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B4 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B5 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B6 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B7 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B8 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B9 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* BA */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* BB */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* BC */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* BD */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* BE */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* BF */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* C0 */  { UD_Ixadd,        O_Eb,    O_Gb,    O_NONE,  P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* C1 */  { UD_Ixadd,        O_Ev,    O_Gv,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* C2 */  { UD_Icmppd,       O_V,     O_W,     O_Ib,    P_aso|P_rexr|P_rexx|P_rexb },
+  /* C3 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* C4 */  { UD_Ipinsrw,      O_V,     O_Ew,    O_Ib,    P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* C5 */  { UD_Ipextrw,      O_Gd,    O_VR,    O_Ib,    P_aso|P_rexr|P_rexb },
+  /* C6 */  { UD_Ishufpd,      O_V,     O_W,     O_Ib,    P_aso|P_rexr|P_rexx|P_rexb },
+  /* C7 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__PFX_SSE66__0F__OP_C7__REG },
+  /* C8 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* C9 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* CA */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* CB */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* CC */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* CD */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* CE */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* CF */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* D0 */  { UD_Iaddsubpd,    O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* D1 */  { UD_Ipsrlw,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* D2 */  { UD_Ipsrld,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* D3 */  { UD_Ipsrlq,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* D4 */  { UD_Ipaddq,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* D5 */  { UD_Ipmullw,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* D6 */  { UD_Imovq,        O_W,     O_V,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* D7 */  { UD_Ipmovmskb,    O_Gd,    O_VR,    O_NONE,  P_rexr|P_rexb },
+  /* D8 */  { UD_Ipsubusb,     O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* D9 */  { UD_Ipsubusw,     O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* DA */  { UD_Ipminub,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* DB */  { UD_Ipand,        O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* DC */  { UD_Ipaddusb,     O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* DD */  { UD_Ipaddusw,     O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* DE */  { UD_Ipmaxub,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* DF */  { UD_Ipandn,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* E0 */  { UD_Ipavgb,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* E1 */  { UD_Ipsraw,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* E2 */  { UD_Ipsrad,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* E3 */  { UD_Ipavgw,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* E4 */  { UD_Ipmulhuw,     O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* E5 */  { UD_Ipmulhw,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* E6 */  { UD_Icvttpd2dq,   O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* E7 */  { UD_Imovntdq,     O_M,     O_V,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* E8 */  { UD_Ipsubsb,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* E9 */  { UD_Ipsubsw,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* EA */  { UD_Ipminsw,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* EB */  { UD_Ipor,         O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* EC */  { UD_Ipaddsb,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* ED */  { UD_Ipaddsw,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* EE */  { UD_Ipmaxsw,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* EF */  { UD_Ipxor,        O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* F0 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* F1 */  { UD_Ipsllw,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* F2 */  { UD_Ipslld,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* F3 */  { UD_Ipsllq,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* F4 */  { UD_Ipmuludq,     O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* F5 */  { UD_Ipmaddwd,     O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* F6 */  { UD_Ipsadbw,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* F7 */  { UD_Imaskmovdqu,  O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* F8 */  { UD_Ipsubb,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* F9 */  { UD_Ipsubw,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* FA */  { UD_Ipsubd,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* FB */  { UD_Ipsubq,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* FC */  { UD_Ipaddb,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* FD */  { UD_Ipaddw,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* FE */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* FF */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__pfx_sse66__0f__op_71__reg[8] = {
+  /* 00 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 02 */  { UD_Ipsrlw,       O_VR,    O_Ib,    O_NONE,  P_rexb },
+  /* 03 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 04 */  { UD_Ipsraw,       O_VR,    O_Ib,    O_NONE,  P_rexb },
+  /* 05 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 06 */  { UD_Ipsllw,       O_VR,    O_Ib,    O_NONE,  P_rexb },
+  /* 07 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__pfx_sse66__0f__op_72__reg[8] = {
+  /* 00 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 02 */  { UD_Ipsrld,       O_VR,    O_Ib,    O_NONE,  P_rexb },
+  /* 03 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 04 */  { UD_Ipsrad,       O_VR,    O_Ib,    O_NONE,  P_rexb },
+  /* 05 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 06 */  { UD_Ipslld,       O_VR,    O_Ib,    O_NONE,  P_rexb },
+  /* 07 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__pfx_sse66__0f__op_73__reg[8] = {
+  /* 00 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 02 */  { UD_Ipsrlq,       O_VR,    O_Ib,    O_NONE,  P_rexb },
+  /* 03 */  { UD_Ipsrldq,      O_VR,    O_Ib,    O_NONE,  P_rexb },
+  /* 04 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 05 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 06 */  { UD_Ipsllq,       O_VR,    O_Ib,    O_NONE,  P_rexb },
+  /* 07 */  { UD_Ipslldq,      O_VR,    O_Ib,    O_NONE,  P_rexb },
+};
+
+static struct ud_itab_entry itab__pfx_sse66__0f__op_c7__reg[8] = {
+  /* 00 */  { UD_Igrp_vendor,  O_NONE, O_NONE, O_NONE,    ITAB__PFX_SSE66__0F__OP_C7__REG__OP_00__VENDOR },
+  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 02 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 03 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 04 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 05 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 06 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 07 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__pfx_sse66__0f__op_c7__reg__op_00__vendor[2] = {
+  /* 00 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 01 */  { UD_Ivmclear,     O_Mq,    O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+};
+
+static struct ud_itab_entry itab__pfx_ssef2__0f[256] = {
+  /* 00 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 02 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 03 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 04 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 05 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 06 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 07 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 08 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 09 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 0A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 0B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 0C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 0D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 0E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 0F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 10 */  { UD_Imovsd,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 11 */  { UD_Imovsd,       O_W,     O_V,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 12 */  { UD_Imovddup,     O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 13 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 14 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 15 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 16 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 17 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 18 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 19 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 1A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 1B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 1C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 1D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 1E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 1F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 20 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 21 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 22 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 23 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 24 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 25 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 26 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 27 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 28 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 29 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 2A */  { UD_Icvtsi2sd,    O_V,     O_Ex,    O_NONE,  P_c2|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* 2B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 2C */  { UD_Icvttsd2si,   O_Gvw,   O_W,     O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 2D */  { UD_Icvtsd2si,    O_Gvw,   O_W,     O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 2E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 2F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 30 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 31 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 32 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 33 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 34 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 35 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 36 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 37 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 38 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 39 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 40 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 41 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 42 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 43 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 44 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 45 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 46 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 47 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 48 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 49 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 4A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 4B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 4C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 4D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 4E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 4F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 50 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 51 */  { UD_Isqrtsd,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 52 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 53 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 54 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 55 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 56 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 57 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 58 */  { UD_Iaddsd,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 59 */  { UD_Imulsd,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 5A */  { UD_Icvtsd2ss,    O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 5B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 5C */  { UD_Isubsd,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 5D */  { UD_Iminsd,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 5E */  { UD_Idivsd,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 5F */  { UD_Imaxsd,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 60 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 61 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 62 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 63 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 64 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 65 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 66 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 67 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 68 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 69 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 6A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 6B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 6C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 6D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 6E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 6F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 70 */  { UD_Ipshuflw,     O_V,     O_W,     O_Ib,    P_aso|P_rexr|P_rexx|P_rexb },
+  /* 71 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 72 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 73 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 74 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 75 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 76 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 77 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 78 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 79 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 7A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 7B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 7C */  { UD_Ihaddps,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 7D */  { UD_Ihsubps,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 7E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 7F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 80 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 81 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 82 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 83 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 84 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 85 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 86 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 87 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 88 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 89 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 8A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 8B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 8C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 8D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 8E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 8F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 90 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 91 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 92 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 93 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 94 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 95 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 96 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 97 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 98 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 99 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 9A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 9B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 9C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 9D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 9E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 9F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A0 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A1 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A2 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A3 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A4 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A5 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A6 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A7 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A8 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A9 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* AA */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* AB */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* AC */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* AD */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* AE */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* AF */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B0 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B1 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B2 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B3 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B4 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B5 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B6 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B7 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B8 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B9 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* BA */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* BB */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* BC */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* BD */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* BE */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* BF */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* C0 */  { UD_Ixadd,        O_Eb,    O_Gb,    O_NONE,  P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* C1 */  { UD_Ixadd,        O_Ev,    O_Gv,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* C2 */  { UD_Icmpsd,       O_V,     O_W,     O_Ib,    P_aso|P_rexr|P_rexx|P_rexb },
+  /* C3 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* C4 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* C5 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* C6 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* C7 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* C8 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* C9 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* CA */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* CB */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* CC */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* CD */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* CE */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* CF */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* D0 */  { UD_Iaddsubps,    O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* D1 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* D2 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* D3 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* D4 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* D5 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* D6 */  { UD_Imovdq2q,     O_P,     O_VR,    O_NONE,  P_aso|P_rexb },
+  /* D7 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* D8 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* D9 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* DA */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* DB */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* DC */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* DD */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* DE */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* DF */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* E0 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* E1 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* E2 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* E3 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* E4 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* E5 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* E6 */  { UD_Icvtpd2dq,    O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* E7 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* E8 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* E9 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* EA */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* EB */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* EC */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* ED */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* EE */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* EF */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* F0 */  { UD_Ilddqu,       O_V,     O_M,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* F1 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* F2 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* F3 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* F4 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* F5 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* F6 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* F7 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* F8 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* F9 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* FA */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* FB */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* FC */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* FD */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* FE */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* FF */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__pfx_ssef3__0f[256] = {
+  /* 00 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 02 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 03 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 04 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 05 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 06 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 07 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 08 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 09 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 0A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 0B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 0C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 0D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 0E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 0F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 10 */  { UD_Imovss,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 11 */  { UD_Imovss,       O_W,     O_V,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 12 */  { UD_Imovsldup,    O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 13 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 14 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 15 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 16 */  { UD_Imovshdup,    O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 17 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 18 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 19 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 1A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 1B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 1C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 1D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 1E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 1F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 20 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 21 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 22 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 23 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 24 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 25 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 26 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 27 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 28 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 29 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 2A */  { UD_Icvtsi2ss,    O_V,     O_Ex,    O_NONE,  P_c2|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 2B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 2C */  { UD_Icvttss2si,   O_Gvw,   O_W,     O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 2D */  { UD_Icvtss2si,    O_Gvw,   O_W,     O_NONE,  P_c1|P_aso|P_rexr|P_rexx|P_rexb },
+  /* 2E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 2F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 30 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 31 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 32 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 33 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 34 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 35 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 36 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 37 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 38 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 39 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 3F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 40 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 41 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 42 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 43 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 44 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 45 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 46 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 47 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 48 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 49 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 4A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 4B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 4C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 4D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 4E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 4F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 50 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 51 */  { UD_Isqrtss,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 52 */  { UD_Irsqrtss,     O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 53 */  { UD_Ircpss,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 54 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 55 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 56 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 57 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 58 */  { UD_Iaddss,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 59 */  { UD_Imulss,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 5A */  { UD_Icvtss2sd,    O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 5B */  { UD_Icvttps2dq,   O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 5C */  { UD_Isubss,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 5D */  { UD_Iminss,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 5E */  { UD_Idivss,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 5F */  { UD_Imaxss,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 60 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 61 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 62 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 63 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 64 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 65 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 66 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 67 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 68 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 69 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 6A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 6B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 6C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 6D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 6E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 6F */  { UD_Imovdqu,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 70 */  { UD_Ipshufhw,     O_V,     O_W,     O_Ib,    P_aso|P_rexr|P_rexx|P_rexb },
+  /* 71 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 72 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 73 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 74 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 75 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 76 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 77 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 78 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 79 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 7A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 7B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 7C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 7D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 7E */  { UD_Imovq,        O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 7F */  { UD_Imovdqu,      O_W,     O_V,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* 80 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 81 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 82 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 83 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 84 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 85 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 86 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 87 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 88 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 89 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 8A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 8B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 8C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 8D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 8E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 8F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 90 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 91 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 92 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 93 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 94 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 95 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 96 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 97 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 98 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 99 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 9A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 9B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 9C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 9D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 9E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 9F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A0 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A1 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A2 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A3 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A4 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A5 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A6 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A7 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A8 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* A9 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* AA */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* AB */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* AC */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* AD */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* AE */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* AF */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B0 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B1 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B2 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B3 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B4 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B5 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B6 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B7 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B8 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* B9 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* BA */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* BB */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* BC */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* BD */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* BE */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* BF */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* C0 */  { UD_Ixadd,        O_Eb,    O_Gb,    O_NONE,  P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* C1 */  { UD_Ixadd,        O_Ev,    O_Gv,    O_NONE,  P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
+  /* C2 */  { UD_Icmpss,       O_V,     O_W,     O_Ib,    P_aso|P_rexr|P_rexx|P_rexb },
+  /* C3 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* C4 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* C5 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* C6 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* C7 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__PFX_SSEF3__0F__OP_C7__REG },
+  /* C8 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* C9 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* CA */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* CB */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* CC */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* CD */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* CE */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* CF */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* D0 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* D1 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* D2 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* D3 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* D4 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* D5 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* D6 */  { UD_Imovq2dq,     O_V,     O_PR,    O_NONE,  P_aso },
+  /* D7 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* D8 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* D9 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* DA */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* DB */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* DC */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* DD */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* DE */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* DF */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* E0 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* E1 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* E2 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* E3 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* E4 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* E5 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* E6 */  { UD_Icvtdq2pd,    O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+  /* E7 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* E8 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* E9 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* EA */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* EB */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* EC */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* ED */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* EE */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* EF */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* F0 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* F1 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* F2 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* F3 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* F4 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* F5 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* F6 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* F7 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* F8 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* F9 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* FA */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* FB */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* FC */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* FD */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* FE */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* FF */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+};
+
+static struct ud_itab_entry itab__pfx_ssef3__0f__op_c7__reg[8] = {
+  /* 00 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 02 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 03 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 04 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 05 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 06 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 07 */  { UD_Igrp_vendor,  O_NONE, O_NONE, O_NONE,    ITAB__PFX_SSEF3__0F__OP_C7__REG__OP_07__VENDOR },
+};
+
+static struct ud_itab_entry itab__pfx_ssef3__0f__op_c7__reg__op_07__vendor[2] = {
+  /* 00 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
+  /* 01 */  { UD_Ivmxon,       O_Mq,    O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
+};
+
+/* the order of this table matches enum ud_itab_index */
+struct ud_itab_entry * ud_itab_list[] = {
+  itab__0f,
+  itab__0f__op_00__reg,
+  itab__0f__op_01__reg,
+  itab__0f__op_01__reg__op_00__mod,
+  itab__0f__op_01__reg__op_00__mod__op_01__rm,
+  itab__0f__op_01__reg__op_00__mod__op_01__rm__op_01__vendor,
+  itab__0f__op_01__reg__op_00__mod__op_01__rm__op_03__vendor,
+  itab__0f__op_01__reg__op_00__mod__op_01__rm__op_04__vendor,
+  itab__0f__op_01__reg__op_01__mod,
+  itab__0f__op_01__reg__op_01__mod__op_01__rm,
+  itab__0f__op_01__reg__op_02__mod,
+  itab__0f__op_01__reg__op_03__mod,
+  itab__0f__op_01__reg__op_03__mod__op_01__rm,
+  itab__0f__op_01__reg__op_03__mod__op_01__rm__op_00__vendor,
+  itab__0f__op_01__reg__op_03__mod__op_01__rm__op_01__vendor,
+  itab__0f__op_01__reg__op_03__mod__op_01__rm__op_02__vendor,
+  itab__0f__op_01__reg__op_03__mod__op_01__rm__op_03__vendor,
+  itab__0f__op_01__reg__op_03__mod__op_01__rm__op_04__vendor,
+  itab__0f__op_01__reg__op_03__mod__op_01__rm__op_05__vendor,
+  itab__0f__op_01__reg__op_03__mod__op_01__rm__op_06__vendor,
+  itab__0f__op_01__reg__op_03__mod__op_01__rm__op_07__vendor,
+  itab__0f__op_01__reg__op_04__mod,
+  itab__0f__op_01__reg__op_06__mod,
+  itab__0f__op_01__reg__op_07__mod,
+  itab__0f__op_01__reg__op_07__mod__op_01__rm,
+  itab__0f__op_01__reg__op_07__mod__op_01__rm__op_01__vendor,
+  itab__0f__op_0d__reg,
+  itab__0f__op_18__reg,
+  itab__0f__op_71__reg,
+  itab__0f__op_72__reg,
+  itab__0f__op_73__reg,
+  itab__0f__op_ae__reg,
+  itab__0f__op_ae__reg__op_05__mod,
+  itab__0f__op_ae__reg__op_05__mod__op_01__rm,
+  itab__0f__op_ae__reg__op_06__mod,
+  itab__0f__op_ae__reg__op_06__mod__op_01__rm,
+  itab__0f__op_ae__reg__op_07__mod,
+  itab__0f__op_ae__reg__op_07__mod__op_01__rm,
+  itab__0f__op_ba__reg,
+  itab__0f__op_c7__reg,
+  itab__0f__op_c7__reg__op_00__vendor,
+  itab__0f__op_c7__reg__op_07__vendor,
+  itab__0f__op_d9__mod,
+  itab__0f__op_d9__mod__op_01__x87,
+  itab__1byte,
+  itab__1byte__op_60__osize,
+  itab__1byte__op_61__osize,
+  itab__1byte__op_63__mode,
+  itab__1byte__op_6d__osize,
+  itab__1byte__op_6f__osize,
+  itab__1byte__op_80__reg,
+  itab__1byte__op_81__reg,
+  itab__1byte__op_82__reg,
+  itab__1byte__op_83__reg,
+  itab__1byte__op_8f__reg,
+  itab__1byte__op_98__osize,
+  itab__1byte__op_99__osize,
+  itab__1byte__op_9c__mode,
+  itab__1byte__op_9c__mode__op_00__osize,
+  itab__1byte__op_9c__mode__op_01__osize,
+  itab__1byte__op_9d__mode,
+  itab__1byte__op_9d__mode__op_00__osize,
+  itab__1byte__op_9d__mode__op_01__osize,
+  itab__1byte__op_a5__osize,
+  itab__1byte__op_a7__osize,
+  itab__1byte__op_ab__osize,
+  itab__1byte__op_ad__osize,
+  itab__1byte__op_ae__mod,
+  itab__1byte__op_ae__mod__op_00__reg,
+  itab__1byte__op_af__osize,
+  itab__1byte__op_c0__reg,
+  itab__1byte__op_c1__reg,
+  itab__1byte__op_c6__reg,
+  itab__1byte__op_c7__reg,
+  itab__1byte__op_cf__osize,
+  itab__1byte__op_d0__reg,
+  itab__1byte__op_d1__reg,
+  itab__1byte__op_d2__reg,
+  itab__1byte__op_d3__reg,
+  itab__1byte__op_d8__mod,
+  itab__1byte__op_d8__mod__op_00__reg,
+  itab__1byte__op_d8__mod__op_01__x87,
+  itab__1byte__op_d9__mod,
+  itab__1byte__op_d9__mod__op_00__reg,
+  itab__1byte__op_d9__mod__op_01__x87,
+  itab__1byte__op_da__mod,
+  itab__1byte__op_da__mod__op_00__reg,
+  itab__1byte__op_da__mod__op_01__x87,
+  itab__1byte__op_db__mod,
+  itab__1byte__op_db__mod__op_00__reg,
+  itab__1byte__op_db__mod__op_01__x87,
+  itab__1byte__op_dc__mod,
+  itab__1byte__op_dc__mod__op_00__reg,
+  itab__1byte__op_dc__mod__op_01__x87,
+  itab__1byte__op_dd__mod,
+  itab__1byte__op_dd__mod__op_00__reg,
+  itab__1byte__op_dd__mod__op_01__x87,
+  itab__1byte__op_de__mod,
+  itab__1byte__op_de__mod__op_00__reg,
+  itab__1byte__op_de__mod__op_01__x87,
+  itab__1byte__op_df__mod,
+  itab__1byte__op_df__mod__op_00__reg,
+  itab__1byte__op_df__mod__op_01__x87,
+  itab__1byte__op_e3__asize,
+  itab__1byte__op_f6__reg,
+  itab__1byte__op_f7__reg,
+  itab__1byte__op_fe__reg,
+  itab__1byte__op_ff__reg,
+  itab__3dnow,
+  itab__pfx_sse66__0f,
+  itab__pfx_sse66__0f__op_71__reg,
+  itab__pfx_sse66__0f__op_72__reg,
+  itab__pfx_sse66__0f__op_73__reg,
+  itab__pfx_sse66__0f__op_c7__reg,
+  itab__pfx_sse66__0f__op_c7__reg__op_00__vendor,
+  itab__pfx_ssef2__0f,
+  itab__pfx_ssef3__0f,
+  itab__pfx_ssef3__0f__op_c7__reg,
+  itab__pfx_ssef3__0f__op_c7__reg__op_07__vendor,
+};
diff -r 32034d1914a6 xen/kdb/x86/udis86-1.7/itab.h
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/xen/kdb/x86/udis86-1.7/itab.h	Wed Aug 29 14:39:57 2012 -0700
@@ -0,0 +1,719 @@
+
+/* itab.h -- auto generated by opgen.py, do not edit. */
+
+#ifndef UD_ITAB_H
+#define UD_ITAB_H
+
+
+
+enum ud_itab_vendor_index {
+  ITAB__VENDOR_INDX__AMD,
+  ITAB__VENDOR_INDX__INTEL,
+};
+
+
+enum ud_itab_mode_index {
+  ITAB__MODE_INDX__16,
+  ITAB__MODE_INDX__32,
+  ITAB__MODE_INDX__64
+};
+
+
+enum ud_itab_mod_index {
+  ITAB__MOD_INDX__NOT_11,
+  ITAB__MOD_INDX__11
+};
+
+
+enum ud_itab_index {
+  ITAB__0F,
+  ITAB__0F__OP_00__REG,
+  ITAB__0F__OP_01__REG,
+  ITAB__0F__OP_01__REG__OP_00__MOD,
+  ITAB__0F__OP_01__REG__OP_00__MOD__OP_01__RM,
+  ITAB__0F__OP_01__REG__OP_00__MOD__OP_01__RM__OP_01__VENDOR,
+  ITAB__0F__OP_01__REG__OP_00__MOD__OP_01__RM__OP_03__VENDOR,
+  ITAB__0F__OP_01__REG__OP_00__MOD__OP_01__RM__OP_04__VENDOR,
+  ITAB__0F__OP_01__REG__OP_01__MOD,
+  ITAB__0F__OP_01__REG__OP_01__MOD__OP_01__RM,
+  ITAB__0F__OP_01__REG__OP_02__MOD,
+  ITAB__0F__OP_01__REG__OP_03__MOD,
+  ITAB__0F__OP_01__REG__OP_03__MOD__OP_01__RM,
+  ITAB__0F__OP_01__REG__OP_03__MOD__OP_01__RM__OP_00__VENDOR,
+  ITAB__0F__OP_01__REG__OP_03__MOD__OP_01__RM__OP_01__VENDOR,
+  ITAB__0F__OP_01__REG__OP_03__MOD__OP_01__RM__OP_02__VENDOR,
+  ITAB__0F__OP_01__REG__OP_03__MOD__OP_01__RM__OP_03__VENDOR,
+  ITAB__0F__OP_01__REG__OP_03__MOD__OP_01__RM__OP_04__VENDOR,
+  ITAB__0F__OP_01__REG__OP_03__MOD__OP_01__RM__OP_05__VENDOR,
+  ITAB__0F__OP_01__REG__OP_03__MOD__OP_01__RM__OP_06__VENDOR,
+  ITAB__0F__OP_01__REG__OP_03__MOD__OP_01__RM__OP_07__VENDOR,
+  ITAB__0F__OP_01__REG__OP_04__MOD,
+  ITAB__0F__OP_01__REG__OP_06__MOD,
+  ITAB__0F__OP_01__REG__OP_07__MOD,
+  ITAB__0F__OP_01__REG__OP_07__MOD__OP_01__RM,
+  ITAB__0F__OP_01__REG__OP_07__MOD__OP_01__RM__OP_01__VENDOR,
+  ITAB__0F__OP_0D__REG,
+  ITAB__0F__OP_18__REG,
+  ITAB__0F__OP_71__REG,
+  ITAB__0F__OP_72__REG,
+  ITAB__0F__OP_73__REG,
+  ITAB__0F__OP_AE__REG,
+  ITAB__0F__OP_AE__REG__OP_05__MOD,
+  ITAB__0F__OP_AE__REG__OP_05__MOD__OP_01__RM,
+  ITAB__0F__OP_AE__REG__OP_06__MOD,
+  ITAB__0F__OP_AE__REG__OP_06__MOD__OP_01__RM,
+  ITAB__0F__OP_AE__REG__OP_07__MOD,
+  ITAB__0F__OP_AE__REG__OP_07__MOD__OP_01__RM,
+  ITAB__0F__OP_BA__REG,
+  ITAB__0F__OP_C7__REG,
+  ITAB__0F__OP_C7__REG__OP_00__VENDOR,
+  ITAB__0F__OP_C7__REG__OP_07__VENDOR,
+  ITAB__0F__OP_D9__MOD,
+  ITAB__0F__OP_D9__MOD__OP_01__X87,
+  ITAB__1BYTE,
+  ITAB__1BYTE__OP_60__OSIZE,
+  ITAB__1BYTE__OP_61__OSIZE,
+  ITAB__1BYTE__OP_63__MODE,
+  ITAB__1BYTE__OP_6D__OSIZE,
+  ITAB__1BYTE__OP_6F__OSIZE,
+  ITAB__1BYTE__OP_80__REG,
+  ITAB__1BYTE__OP_81__REG,
+  ITAB__1BYTE__OP_82__REG,
+  ITAB__1BYTE__OP_83__REG,
+  ITAB__1BYTE__OP_8F__REG,
+  ITAB__1BYTE__OP_98__OSIZE,
+  ITAB__1BYTE__OP_99__OSIZE,
+  ITAB__1BYTE__OP_9C__MODE,
+  ITAB__1BYTE__OP_9C__MODE__OP_00__OSIZE,
+  ITAB__1BYTE__OP_9C__MODE__OP_01__OSIZE,
+  ITAB__1BYTE__OP_9D__MODE,
+  ITAB__1BYTE__OP_9D__MODE__OP_00__OSIZE,
+  ITAB__1BYTE__OP_9D__MODE__OP_01__OSIZE,
+  ITAB__1BYTE__OP_A5__OSIZE,
+  ITAB__1BYTE__OP_A7__OSIZE,
+  ITAB__1BYTE__OP_AB__OSIZE,
+  ITAB__1BYTE__OP_AD__OSIZE,
+  ITAB__1BYTE__OP_AE__MOD,
+  ITAB__1BYTE__OP_AE__MOD__OP_00__REG,
+  ITAB__1BYTE__OP_AF__OSIZE,
+  ITAB__1BYTE__OP_C0__REG,
+  ITAB__1BYTE__OP_C1__REG,
+  ITAB__1BYTE__OP_C6__REG,
+  ITAB__1BYTE__OP_C7__REG,
+  ITAB__1BYTE__OP_CF__OSIZE,
+  ITAB__1BYTE__OP_D0__REG,
+  ITAB__1BYTE__OP_D1__REG,
+  ITAB__1BYTE__OP_D2__REG,
+  ITAB__1BYTE__OP_D3__REG,
+  ITAB__1BYTE__OP_D8__MOD,
+  ITAB__1BYTE__OP_D8__MOD__OP_00__REG,
+  ITAB__1BYTE__OP_D8__MOD__OP_01__X87,
+  ITAB__1BYTE__OP_D9__MOD,
+  ITAB__1BYTE__OP_D9__MOD__OP_00__REG,
+  ITAB__1BYTE__OP_D9__MOD__OP_01__X87,
+  ITAB__1BYTE__OP_DA__MOD,
+  ITAB__1BYTE__OP_DA__MOD__OP_00__REG,
+  ITAB__1BYTE__OP_DA__MOD__OP_01__X87,
+  ITAB__1BYTE__OP_DB__MOD,
+  ITAB__1BYTE__OP_DB__MOD__OP_00__REG,
+  ITAB__1BYTE__OP_DB__MOD__OP_01__X87,
+  ITAB__1BYTE__OP_DC__MOD,
+  ITAB__1BYTE__OP_DC__MOD__OP_00__REG,
+  ITAB__1BYTE__OP_DC__MOD__OP_01__X87,
+  ITAB__1BYTE__OP_DD__MOD,
+  ITAB__1BYTE__OP_DD__MOD__OP_00__REG,
+  ITAB__1BYTE__OP_DD__MOD__OP_01__X87,
+  ITAB__1BYTE__OP_DE__MOD,
+  ITAB__1BYTE__OP_DE__MOD__OP_00__REG,
+  ITAB__1BYTE__OP_DE__MOD__OP_01__X87,
+  ITAB__1BYTE__OP_DF__MOD,
+  ITAB__1BYTE__OP_DF__MOD__OP_00__REG,
+  ITAB__1BYTE__OP_DF__MOD__OP_01__X87,
+  ITAB__1BYTE__OP_E3__ASIZE,
+  ITAB__1BYTE__OP_F6__REG,
+  ITAB__1BYTE__OP_F7__REG,
+  ITAB__1BYTE__OP_FE__REG,
+  ITAB__1BYTE__OP_FF__REG,
+  ITAB__3DNOW,
+  ITAB__PFX_SSE66__0F,
+  ITAB__PFX_SSE66__0F__OP_71__REG,
+  ITAB__PFX_SSE66__0F__OP_72__REG,
+  ITAB__PFX_SSE66__0F__OP_73__REG,
+  ITAB__PFX_SSE66__0F__OP_C7__REG,
+  ITAB__PFX_SSE66__0F__OP_C7__REG__OP_00__VENDOR,
+  ITAB__PFX_SSEF2__0F,
+  ITAB__PFX_SSEF3__0F,
+  ITAB__PFX_SSEF3__0F__OP_C7__REG,
+  ITAB__PFX_SSEF3__0F__OP_C7__REG__OP_07__VENDOR,
+};
+
+
+enum ud_mnemonic_code {
+  UD_I3dnow,
+  UD_Iaaa,
+  UD_Iaad,
+  UD_Iaam,
+  UD_Iaas,
+  UD_Iadc,
+  UD_Iadd,
+  UD_Iaddpd,
+  UD_Iaddps,
+  UD_Iaddsd,
+  UD_Iaddss,
+  UD_Iaddsubpd,
+  UD_Iaddsubps,
+  UD_Iand,
+  UD_Iandpd,
+  UD_Iandps,
+  UD_Iandnpd,
+  UD_Iandnps,
+  UD_Iarpl,
+  UD_Imovsxd,
+  UD_Ibound,
+  UD_Ibsf,
+  UD_Ibsr,
+  UD_Ibswap,
+  UD_Ibt,
+  UD_Ibtc,
+  UD_Ibtr,
+  UD_Ibts,
+  UD_Icall,
+  UD_Icbw,
+  UD_Icwde,
+  UD_Icdqe,
+  UD_Iclc,
+  UD_Icld,
+  UD_Iclflush,
+  UD_Iclgi,
+  UD_Icli,
+  UD_Iclts,
+  UD_Icmc,
+  UD_Icmovo,
+  UD_Icmovno,
+  UD_Icmovb,
+  UD_Icmovae,
+  UD_Icmovz,
+  UD_Icmovnz,
+  UD_Icmovbe,
+  UD_Icmova,
+  UD_Icmovs,
+  UD_Icmovns,
+  UD_Icmovp,
+  UD_Icmovnp,
+  UD_Icmovl,
+  UD_Icmovge,
+  UD_Icmovle,
+  UD_Icmovg,
+  UD_Icmp,
+  UD_Icmppd,
+  UD_Icmpps,
+  UD_Icmpsb,
+  UD_Icmpsw,
+  UD_Icmpsd,
+  UD_Icmpsq,
+  UD_Icmpss,
+  UD_Icmpxchg,
+  UD_Icmpxchg8b,
+  UD_Icomisd,
+  UD_Icomiss,
+  UD_Icpuid,
+  UD_Icvtdq2pd,
+  UD_Icvtdq2ps,
+  UD_Icvtpd2dq,
+  UD_Icvtpd2pi,
+  UD_Icvtpd2ps,
+  UD_Icvtpi2ps,
+  UD_Icvtpi2pd,
+  UD_Icvtps2dq,
+  UD_Icvtps2pi,
+  UD_Icvtps2pd,
+  UD_Icvtsd2si,
+  UD_Icvtsd2ss,
+  UD_Icvtsi2ss,
+  UD_Icvtss2si,
+  UD_Icvtss2sd,
+  UD_Icvttpd2pi,
+  UD_Icvttpd2dq,
+  UD_Icvttps2dq,
+  UD_Icvttps2pi,
+  UD_Icvttsd2si,
+  UD_Icvtsi2sd,
+  UD_Icvttss2si,
+  UD_Icwd,
+  UD_Icdq,
+  UD_Icqo,
+  UD_Idaa,
+  UD_Idas,
+  UD_Idec,
+  UD_Idiv,
+  UD_Idivpd,
+  UD_Idivps,
+  UD_Idivsd,
+  UD_Idivss,
+  UD_Iemms,
+  UD_Ienter,
+  UD_If2xm1,
+  UD_Ifabs,
+  UD_Ifadd,
+  UD_Ifaddp,
+  UD_Ifbld,
+  UD_Ifbstp,
+  UD_Ifchs,
+  UD_Ifclex,
+  UD_Ifcmovb,
+  UD_Ifcmove,
+  UD_Ifcmovbe,
+  UD_Ifcmovu,
+  UD_Ifcmovnb,
+  UD_Ifcmovne,
+  UD_Ifcmovnbe,
+  UD_Ifcmovnu,
+  UD_Ifucomi,
+  UD_Ifcom,
+  UD_Ifcom2,
+  UD_Ifcomp3,
+  UD_Ifcomi,
+  UD_Ifucomip,
+  UD_Ifcomip,
+  UD_Ifcomp,
+  UD_Ifcomp5,
+  UD_Ifcompp,
+  UD_Ifcos,
+  UD_Ifdecstp,
+  UD_Ifdiv,
+  UD_Ifdivp,
+  UD_Ifdivr,
+  UD_Ifdivrp,
+  UD_Ifemms,
+  UD_Iffree,
+  UD_Iffreep,
+  UD_Ificom,
+  UD_Ificomp,
+  UD_Ifild,
+  UD_Ifncstp,
+  UD_Ifninit,
+  UD_Ifiadd,
+  UD_Ifidivr,
+  UD_Ifidiv,
+  UD_Ifisub,
+  UD_Ifisubr,
+  UD_Ifist,
+  UD_Ifistp,
+  UD_Ifisttp,
+  UD_Ifld,
+  UD_Ifld1,
+  UD_Ifldl2t,
+  UD_Ifldl2e,
+  UD_Ifldlpi,
+  UD_Ifldlg2,
+  UD_Ifldln2,
+  UD_Ifldz,
+  UD_Ifldcw,
+  UD_Ifldenv,
+  UD_Ifmul,
+  UD_Ifmulp,
+  UD_Ifimul,
+  UD_Ifnop,
+  UD_Ifpatan,
+  UD_Ifprem,
+  UD_Ifprem1,
+  UD_Ifptan,
+  UD_Ifrndint,
+  UD_Ifrstor,
+  UD_Ifnsave,
+  UD_Ifscale,
+  UD_Ifsin,
+  UD_Ifsincos,
+  UD_Ifsqrt,
+  UD_Ifstp,
+  UD_Ifstp1,
+  UD_Ifstp8,
+  UD_Ifstp9,
+  UD_Ifst,
+  UD_Ifnstcw,
+  UD_Ifnstenv,
+  UD_Ifnstsw,
+  UD_Ifsub,
+  UD_Ifsubp,
+  UD_Ifsubr,
+  UD_Ifsubrp,
+  UD_Iftst,
+  UD_Ifucom,
+  UD_Ifucomp,
+  UD_Ifucompp,
+  UD_Ifxam,
+  UD_Ifxch,
+  UD_Ifxch4,
+  UD_Ifxch7,
+  UD_Ifxrstor,
+  UD_Ifxsave,
+  UD_Ifpxtract,
+  UD_Ifyl2x,
+  UD_Ifyl2xp1,
+  UD_Ihaddpd,
+  UD_Ihaddps,
+  UD_Ihlt,
+  UD_Ihsubpd,
+  UD_Ihsubps,
+  UD_Iidiv,
+  UD_Iin,
+  UD_Iimul,
+  UD_Iinc,
+  UD_Iinsb,
+  UD_Iinsw,
+  UD_Iinsd,
+  UD_Iint1,
+  UD_Iint3,
+  UD_Iint,
+  UD_Iinto,
+  UD_Iinvd,
+  UD_Iinvlpg,
+  UD_Iinvlpga,
+  UD_Iiretw,
+  UD_Iiretd,
+  UD_Iiretq,
+  UD_Ijo,
+  UD_Ijno,
+  UD_Ijb,
+  UD_Ijae,
+  UD_Ijz,
+  UD_Ijnz,
+  UD_Ijbe,
+  UD_Ija,
+  UD_Ijs,
+  UD_Ijns,
+  UD_Ijp,
+  UD_Ijnp,
+  UD_Ijl,
+  UD_Ijge,
+  UD_Ijle,
+  UD_Ijg,
+  UD_Ijcxz,
+  UD_Ijecxz,
+  UD_Ijrcxz,
+  UD_Ijmp,
+  UD_Ilahf,
+  UD_Ilar,
+  UD_Ilddqu,
+  UD_Ildmxcsr,
+  UD_Ilds,
+  UD_Ilea,
+  UD_Iles,
+  UD_Ilfs,
+  UD_Ilgs,
+  UD_Ilidt,
+  UD_Ilss,
+  UD_Ileave,
+  UD_Ilfence,
+  UD_Ilgdt,
+  UD_Illdt,
+  UD_Ilmsw,
+  UD_Ilock,
+  UD_Ilodsb,
+  UD_Ilodsw,
+  UD_Ilodsd,
+  UD_Ilodsq,
+  UD_Iloopnz,
+  UD_Iloope,
+  UD_Iloop,
+  UD_Ilsl,
+  UD_Iltr,
+  UD_Imaskmovq,
+  UD_Imaxpd,
+  UD_Imaxps,
+  UD_Imaxsd,
+  UD_Imaxss,
+  UD_Imfence,
+  UD_Iminpd,
+  UD_Iminps,
+  UD_Iminsd,
+  UD_Iminss,
+  UD_Imonitor,
+  UD_Imov,
+  UD_Imovapd,
+  UD_Imovaps,
+  UD_Imovd,
+  UD_Imovddup,
+  UD_Imovdqa,
+  UD_Imovdqu,
+  UD_Imovdq2q,
+  UD_Imovhpd,
+  UD_Imovhps,
+  UD_Imovlhps,
+  UD_Imovlpd,
+  UD_Imovlps,
+  UD_Imovhlps,
+  UD_Imovmskpd,
+  UD_Imovmskps,
+  UD_Imovntdq,
+  UD_Imovnti,
+  UD_Imovntpd,
+  UD_Imovntps,
+  UD_Imovntq,
+  UD_Imovq,
+  UD_Imovqa,
+  UD_Imovq2dq,
+  UD_Imovsb,
+  UD_Imovsw,
+  UD_Imovsd,
+  UD_Imovsq,
+  UD_Imovsldup,
+  UD_Imovshdup,
+  UD_Imovss,
+  UD_Imovsx,
+  UD_Imovupd,
+  UD_Imovups,
+  UD_Imovzx,
+  UD_Imul,
+  UD_Imulpd,
+  UD_Imulps,
+  UD_Imulsd,
+  UD_Imulss,
+  UD_Imwait,
+  UD_Ineg,
+  UD_Inop,
+  UD_Inot,
+  UD_Ior,
+  UD_Iorpd,
+  UD_Iorps,
+  UD_Iout,
+  UD_Ioutsb,
+  UD_Ioutsw,
+  UD_Ioutsd,
+  UD_Ioutsq,
+  UD_Ipacksswb,
+  UD_Ipackssdw,
+  UD_Ipackuswb,
+  UD_Ipaddb,
+  UD_Ipaddw,
+  UD_Ipaddq,
+  UD_Ipaddsb,
+  UD_Ipaddsw,
+  UD_Ipaddusb,
+  UD_Ipaddusw,
+  UD_Ipand,
+  UD_Ipandn,
+  UD_Ipause,
+  UD_Ipavgb,
+  UD_Ipavgw,
+  UD_Ipcmpeqb,
+  UD_Ipcmpeqw,
+  UD_Ipcmpeqd,
+  UD_Ipcmpgtb,
+  UD_Ipcmpgtw,
+  UD_Ipcmpgtd,
+  UD_Ipextrw,
+  UD_Ipinsrw,
+  UD_Ipmaddwd,
+  UD_Ipmaxsw,
+  UD_Ipmaxub,
+  UD_Ipminsw,
+  UD_Ipminub,
+  UD_Ipmovmskb,
+  UD_Ipmulhuw,
+  UD_Ipmulhw,
+  UD_Ipmullw,
+  UD_Ipmuludq,
+  UD_Ipop,
+  UD_Ipopa,
+  UD_Ipopad,
+  UD_Ipopfw,
+  UD_Ipopfd,
+  UD_Ipopfq,
+  UD_Ipor,
+  UD_Iprefetch,
+  UD_Iprefetchnta,
+  UD_Iprefetcht0,
+  UD_Iprefetcht1,
+  UD_Iprefetcht2,
+  UD_Ipsadbw,
+  UD_Ipshufd,
+  UD_Ipshufhw,
+  UD_Ipshuflw,
+  UD_Ipshufw,
+  UD_Ipslldq,
+  UD_Ipsllw,
+  UD_Ipslld,
+  UD_Ipsllq,
+  UD_Ipsraw,
+  UD_Ipsrad,
+  UD_Ipsrlw,
+  UD_Ipsrld,
+  UD_Ipsrlq,
+  UD_Ipsrldq,
+  UD_Ipsubb,
+  UD_Ipsubw,
+  UD_Ipsubd,
+  UD_Ipsubq,
+  UD_Ipsubsb,
+  UD_Ipsubsw,
+  UD_Ipsubusb,
+  UD_Ipsubusw,
+  UD_Ipunpckhbw,
+  UD_Ipunpckhwd,
+  UD_Ipunpckhdq,
+  UD_Ipunpckhqdq,
+  UD_Ipunpcklbw,
+  UD_Ipunpcklwd,
+  UD_Ipunpckldq,
+  UD_Ipunpcklqdq,
+  UD_Ipi2fw,
+  UD_Ipi2fd,
+  UD_Ipf2iw,
+  UD_Ipf2id,
+  UD_Ipfnacc,
+  UD_Ipfpnacc,
+  UD_Ipfcmpge,
+  UD_Ipfmin,
+  UD_Ipfrcp,
+  UD_Ipfrsqrt,
+  UD_Ipfsub,
+  UD_Ipfadd,
+  UD_Ipfcmpgt,
+  UD_Ipfmax,
+  UD_Ipfrcpit1,
+  UD_Ipfrspit1,
+  UD_Ipfsubr,
+  UD_Ipfacc,
+  UD_Ipfcmpeq,
+  UD_Ipfmul,
+  UD_Ipfrcpit2,
+  UD_Ipmulhrw,
+  UD_Ipswapd,
+  UD_Ipavgusb,
+  UD_Ipush,
+  UD_Ipusha,
+  UD_Ipushad,
+  UD_Ipushfw,
+  UD_Ipushfd,
+  UD_Ipushfq,
+  UD_Ipxor,
+  UD_Ircl,
+  UD_Ircr,
+  UD_Irol,
+  UD_Iror,
+  UD_Ircpps,
+  UD_Ircpss,
+  UD_Irdmsr,
+  UD_Irdpmc,
+  UD_Irdtsc,
+  UD_Irdtscp,
+  UD_Irepne,
+  UD_Irep,
+  UD_Iret,
+  UD_Iretf,
+  UD_Irsm,
+  UD_Irsqrtps,
+  UD_Irsqrtss,
+  UD_Isahf,
+  UD_Isal,
+  UD_Isalc,
+  UD_Isar,
+  UD_Ishl,
+  UD_Ishr,
+  UD_Isbb,
+  UD_Iscasb,
+  UD_Iscasw,
+  UD_Iscasd,
+  UD_Iscasq,
+  UD_Iseto,
+  UD_Isetno,
+  UD_Isetb,
+  UD_Isetnb,
+  UD_Isetz,
+  UD_Isetnz,
+  UD_Isetbe,
+  UD_Iseta,
+  UD_Isets,
+  UD_Isetns,
+  UD_Isetp,
+  UD_Isetnp,
+  UD_Isetl,
+  UD_Isetge,
+  UD_Isetle,
+  UD_Isetg,
+  UD_Isfence,
+  UD_Isgdt,
+  UD_Ishld,
+  UD_Ishrd,
+  UD_Ishufpd,
+  UD_Ishufps,
+  UD_Isidt,
+  UD_Isldt,
+  UD_Ismsw,
+  UD_Isqrtps,
+  UD_Isqrtpd,
+  UD_Isqrtsd,
+  UD_Isqrtss,
+  UD_Istc,
+  UD_Istd,
+  UD_Istgi,
+  UD_Isti,
+  UD_Iskinit,
+  UD_Istmxcsr,
+  UD_Istosb,
+  UD_Istosw,
+  UD_Istosd,
+  UD_Istosq,
+  UD_Istr,
+  UD_Isub,
+  UD_Isubpd,
+  UD_Isubps,
+  UD_Isubsd,
+  UD_Isubss,
+  UD_Iswapgs,
+  UD_Isyscall,
+  UD_Isysenter,
+  UD_Isysexit,
+  UD_Isysret,
+  UD_Itest,
+  UD_Iucomisd,
+  UD_Iucomiss,
+  UD_Iud2,
+  UD_Iunpckhpd,
+  UD_Iunpckhps,
+  UD_Iunpcklps,
+  UD_Iunpcklpd,
+  UD_Iverr,
+  UD_Iverw,
+  UD_Ivmcall,
+  UD_Ivmclear,
+  UD_Ivmxon,
+  UD_Ivmptrld,
+  UD_Ivmptrst,
+  UD_Ivmresume,
+  UD_Ivmxoff,
+  UD_Ivmrun,
+  UD_Ivmmcall,
+  UD_Ivmload,
+  UD_Ivmsave,
+  UD_Iwait,
+  UD_Iwbinvd,
+  UD_Iwrmsr,
+  UD_Ixadd,
+  UD_Ixchg,
+  UD_Ixlatb,
+  UD_Ixor,
+  UD_Ixorpd,
+  UD_Ixorps,
+  UD_Idb,
+  UD_Iinvalid,
+  UD_Id3vil,
+  UD_Ina,
+  UD_Igrp_reg,
+  UD_Igrp_rm,
+  UD_Igrp_vendor,
+  UD_Igrp_x87,
+  UD_Igrp_mode,
+  UD_Igrp_osize,
+  UD_Igrp_asize,
+  UD_Igrp_mod,
+  UD_Inone,
+};
+
+
+
+extern const char* ud_mnemonics_str[];
+extern struct ud_itab_entry* ud_itab_list[];
+
+#endif
diff -r 32034d1914a6 xen/kdb/x86/udis86-1.7/kdb_dis.c
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/xen/kdb/x86/udis86-1.7/kdb_dis.c	Wed Aug 29 14:39:57 2012 -0700
@@ -0,0 +1,204 @@
+/*
+ * Copyright (C) 2009, Mukesh Rathor, Oracle Corp.  All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public
+ * License v2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public
+ * License along with this program; if not, write to the
+ * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
+ * Boston, MA 02111-1307, USA.
+ */
+
+#include <xen/compile.h>                /* for XEN_SUBVERSION */
+#include "../../include/kdbinc.h"
+#include "extern.h"
+
+static void (*dis_syntax)(ud_t*) = UD_SYN_ATT; /* default disassembly syntax */
+
+static struct {                         /* info for kdb_read_byte_for_ud() */
+    kdbva_t kud_instr_addr;
+    domid_t kud_domid;
+} kdb_ud_rd_info;
+
+/* Called via function pointer by udis86 while disassembling.
+ * kdb context is passed via kdb_ud_rd_info{}.
+ */
+static int
+kdb_read_byte_for_ud(struct ud *udp)
+{
+    kdbbyt_t bytebuf;
+    domid_t domid = kdb_ud_rd_info.kud_domid;
+    kdbva_t addr = kdb_ud_rd_info.kud_instr_addr;
+
+    if (kdb_read_mem(addr, &bytebuf, 1, domid) == 1) {
+        kdb_ud_rd_info.kud_instr_addr++;
+        KDBGP1("udrd:addr:%lx domid:%d byt:%x\n", addr, domid, bytebuf);
+        return bytebuf;
+    }
+    KDBGP1("udrd:addr:%lx domid:%d err\n", addr, domid);
+    return UD_EOI;
+}
+
+/*
+ * Given a domid, convert addr to a symbol and print it.
+ * Eg: ffff828c801235e2: idle_loop+52                  jmp  idle_loop+55
+ *     Called twice here for idle_loop: first with nl pointing at an
+ *     empty string, then with nl == "\n".
+ */
+void
+kdb_prnt_addr2sym(domid_t domid, kdbva_t addr, char *nl)
+{
+    unsigned long sz, offs;
+    char buf[KSYM_NAME_LEN+1], pbuf[150], prefix[8];
+    char *p = buf;
+
+    prefix[0]='\0';
+    if (domid != DOMID_IDLE) {
+        snprintf(prefix, 8, "%x:", domid);
+        p = kdb_guest_addr2sym(addr, domid, &offs);
+    } else
+        symbols_lookup(addr, &sz, &offs, buf);
+
+    snprintf(pbuf, 150, "%s%s+%lx", prefix, p, offs);
+    if (*nl != '\n')
+        kdbp("%-30s%s", pbuf, nl);  /* prints more than 30 if needed */
+    else
+        kdbp("%s%s", pbuf, nl);
+
+}
+
+static int
+kdb_jump_instr(enum ud_mnemonic_code mnemonic)
+{
+    return (mnemonic >= UD_Ijo && mnemonic <= UD_Ijmp);
+}
+
+/*
+ * print one instr: function so that we can print offsets of jmp etc.. as
+ *  symbol+offset instead of just address
+ */
+static void
+kdb_print_one_instr(struct ud *udp, domid_t domid)
+{
+    signed long val = 0;
+    ud_type_t type = udp->operand[0].type;
+
+    if ((udp->mnemonic == UD_Icall || kdb_jump_instr(udp->mnemonic)) &&
+        type == UD_OP_JIMM) {
+        
+        int sz = udp->operand[0].size;
+        char *p, ibuf[40], *q = ibuf;
+        kdbva_t addr;
+
+        if (sz == 8) val = udp->operand[0].lval.sbyte;
+        else if (sz == 16) val = udp->operand[0].lval.sword;
+        else if (sz == 32) val = udp->operand[0].lval.sdword;
+        else if (sz == 64) val = udp->operand[0].lval.sqword;
+        else kdbp("kdb_print_one_instr: Inval sz:%d\n", sz);
+
+        addr = udp->pc + val;
+        for(p=ud_insn_asm(udp); (*q=*p) && *p!=' '; p++,q++);
+        *q='\0';
+        kdbp(" %-4s ", ibuf);    /* space before for long func names */
+        kdb_prnt_addr2sym(domid, addr, "\n");
+    } else
+        kdbp(" %-24s\n", ud_insn_asm(udp));
+#if 0
+    kdbp("mnemonic:%d ", udp->mnemonic);
+    if (type == UD_OP_CONST) kdbp("type is const\n");
+    else if (type == UD_OP_JIMM) kdbp("type is JIMM\n");
+    else if (type == UD_OP_IMM) kdbp("type is IMM\n");
+    else if (type == UD_OP_PTR) kdbp("type is PTR\n");
+#endif
+}
+
+static void
+kdb_setup_ud(struct ud *udp, kdbva_t addr, domid_t domid)
+{
+    int bitness = kdb_guest_bitness(domid);
+    uint vendor = (boot_cpu_data.x86_vendor == X86_VENDOR_AMD) ?
+                                           UD_VENDOR_AMD : UD_VENDOR_INTEL;
+
+    KDBGP1("setup_ud:domid:%d bitness:%d addr:%lx\n", domid, bitness, addr);
+    ud_init(udp);
+    ud_set_mode(udp, bitness);
+    ud_set_syntax(udp, dis_syntax); 
+    ud_set_vendor(udp, vendor);           /* HVM: vmx/svm different instrs*/
+    ud_set_pc(udp, addr);                 /* for numbers printed on left */
+    ud_set_input_hook(udp, kdb_read_byte_for_ud);
+    kdb_ud_rd_info.kud_instr_addr = addr;
+    kdb_ud_rd_info.kud_domid = domid;
+}
+
+/*
+ * given an addr, print given number of instructions.
+ * Returns: address of next instruction in the stream
+ */
+kdbva_t
+kdb_print_instr(kdbva_t addr, long num, domid_t domid)
+{
+    struct ud ud_s;
+
+    KDBGP1("print_instr:addr:0x%lx num:%ld domid:%x\n", addr, num, domid);
+
+    kdb_setup_ud(&ud_s, addr, domid);
+    while(num--) {
+        if (ud_disassemble(&ud_s)) {
+            uint64_t pc = ud_insn_off(&ud_s);
+            /* kdbp("%08x: ",(int)pc); */
+            kdbp("%016lx: ", pc);
+            kdb_prnt_addr2sym(domid, pc, "");
+            kdb_print_one_instr(&ud_s, domid);
+        } else
+            kdbp("KDB:Couldn't disassemble PC:0x%lx\n", addr);
+            /* for stack reads, don't always display error */
+    }
+    KDBGP1("print_instr:kudaddr:0x%lx\n", kdb_ud_rd_info.kud_instr_addr);
+    return kdb_ud_rd_info.kud_instr_addr;
+}
+
+void
+kdb_display_pc(struct cpu_user_regs *regs)
+{
+    domid_t domid;
+
+    domid = guest_mode(regs) ? current->domain->domain_id : DOMID_IDLE;
+    kdb_print_instr(regs->KDBIP, 1, domid);
+}
+
+/* Check whether the instr at addr is a call instruction.
+ * RETURNS: size of the instr if it's a call instr, else 0
+ */
+int
+kdb_check_call_instr(domid_t domid, kdbva_t addr)
+{
+    struct ud ud_s;
+    int sz;
+
+    kdb_setup_ud(&ud_s, addr, domid);
+    if ((sz=ud_disassemble(&ud_s)) && ud_s.mnemonic == UD_Icall)
+        return (sz);
+    return 0;
+}
+
+/* toggle ATT and Intel syntaxes */
+void
+kdb_toggle_dis_syntax(void)
+{
+    if (dis_syntax == UD_SYN_INTEL) {
+        dis_syntax = UD_SYN_ATT;
+        kdbp("dis syntax now set to ATT (Gas)\n");
+    } else {
+        dis_syntax = UD_SYN_INTEL;
+        kdbp("dis syntax now set to Intel (NASM)\n");
+    }
+}
diff -r 32034d1914a6 xen/kdb/x86/udis86-1.7/syn-att.c
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/xen/kdb/x86/udis86-1.7/syn-att.c	Wed Aug 29 14:39:57 2012 -0700
@@ -0,0 +1,211 @@
+/* -----------------------------------------------------------------------------
+ * syn-att.c
+ *
+ * Copyright (c) 2004, 2005, 2006 Vivek Mohan <vivek@sig9.com>
+ * All rights reserved. See (LICENSE)
+ * -----------------------------------------------------------------------------
+ */
+
+#include "types.h"
+#include "extern.h"
+#include "decode.h"
+#include "itab.h"
+#include "syn.h"
+
+/* -----------------------------------------------------------------------------
+ * opr_cast() - Prints an operand cast.
+ * -----------------------------------------------------------------------------
+ */
+static void 
+opr_cast(struct ud* u, struct ud_operand* op)
+{
+  switch(op->size) {
+	case 16 : case 32 :
+		mkasm(u, "*");   break;
+	default: break;
+  }
+}
+
+/* -----------------------------------------------------------------------------
+ * gen_operand() - Generates assembly output for each operand.
+ * -----------------------------------------------------------------------------
+ */
+static void 
+gen_operand(struct ud* u, struct ud_operand* op)
+{
+  switch(op->type) {
+	case UD_OP_REG:
+		mkasm(u, "%%%s", ud_reg_tab[op->base - UD_R_AL]);
+		break;
+
+	case UD_OP_MEM:
+		if (u->br_far) opr_cast(u, op);
+		if (u->pfx_seg)
+			mkasm(u, "%%%s:", ud_reg_tab[u->pfx_seg - UD_R_AL]);
+		if (op->offset == 8) {
+			if (op->lval.sbyte < 0)
+				mkasm(u, "-0x%x", (-op->lval.sbyte) & 0xff);
+			else	mkasm(u, "0x%x", op->lval.sbyte);
+		} 
+		else if (op->offset == 16) 
+			mkasm(u, "0x%x", op->lval.uword);
+		else if (op->offset == 32) 
+			mkasm(u, "0x%lx", op->lval.udword);
+		else if (op->offset == 64) 
+			mkasm(u, "0x" FMT64 "x", op->lval.uqword);
+
+		if (op->base)
+			mkasm(u, "(%%%s", ud_reg_tab[op->base - UD_R_AL]);
+		if (op->index) {
+			if (op->base)
+				mkasm(u, ",");
+			else mkasm(u, "(");
+			mkasm(u, "%%%s", ud_reg_tab[op->index - UD_R_AL]);
+		}
+		if (op->scale)
+			mkasm(u, ",%d", op->scale);
+		if (op->base || op->index)
+			mkasm(u, ")");
+		break;
+
+	case UD_OP_IMM:
+		switch (op->size) {
+			case  8: mkasm(u, "$0x%x", op->lval.ubyte);    break;
+			case 16: mkasm(u, "$0x%x", op->lval.uword);    break;
+			case 32: mkasm(u, "$0x%lx", op->lval.udword);  break;
+			case 64: mkasm(u, "$0x" FMT64 "x", op->lval.uqword); break;
+			default: break;
+		}
+		break;
+
+	case UD_OP_JIMM:
+		switch (op->size) {
+			case  8:
+				mkasm(u, "0x" FMT64 "x", u->pc + op->lval.sbyte); 
+				break;
+			case 16:
+				mkasm(u, "0x" FMT64 "x", u->pc + op->lval.sword);
+				break;
+			case 32:
+				mkasm(u, "0x" FMT64 "x", u->pc + op->lval.sdword);
+				break;
+			default:break;
+		}
+		break;
+
+	case UD_OP_PTR:
+		switch (op->size) {
+			case 32:
+				mkasm(u, "$0x%x, $0x%x", op->lval.ptr.seg, 
+					op->lval.ptr.off & 0xFFFF);
+				break;
+			case 48:
+				mkasm(u, "$0x%x, $0x%lx", op->lval.ptr.seg, 
+					op->lval.ptr.off);
+				break;
+		}
+		break;
+			
+	default: return;
+  }
+}
+
+/* =============================================================================
+ * translates to AT&T syntax 
+ * =============================================================================
+ */
+extern void 
+ud_translate_att(struct ud *u)
+{
+  int size = 0;
+
+  /* check if P_OSO prefix is used */
+  if (! P_OSO(u->itab_entry->prefix) && u->pfx_opr) {
+	switch (u->dis_mode) {
+		case 16: 
+			mkasm(u, "o32 ");
+			break;
+		case 32:
+		case 64:
+ 			mkasm(u, "o16 ");
+			break;
+	}
+  }
+
+  /* check if P_ASO prefix was used */
+  if (! P_ASO(u->itab_entry->prefix) && u->pfx_adr) {
+	switch (u->dis_mode) {
+		case 16: 
+			mkasm(u, "a32 ");
+			break;
+		case 32:
+ 			mkasm(u, "a16 ");
+			break;
+		case 64:
+ 			mkasm(u, "a32 ");
+			break;
+	}
+  }
+
+  if (u->pfx_lock)
+  	mkasm(u,  "lock ");
+  if (u->pfx_rep)
+	mkasm(u,  "rep ");
+  if (u->pfx_repne)
+		mkasm(u,  "repne ");
+
+  /* special instructions */
+  switch (u->mnemonic) {
+	case UD_Iretf: 
+		mkasm(u, "lret "); 
+		break;
+	case UD_Idb:
+		mkasm(u, ".byte 0x%x", u->operand[0].lval.ubyte);
+		return;
+	case UD_Ijmp:
+	case UD_Icall:
+		if (u->br_far) mkasm(u,  "l");
+		mkasm(u, "%s", ud_lookup_mnemonic(u->mnemonic));
+		break;
+	case UD_Ibound:
+	case UD_Ienter:
+		if (u->operand[0].type != UD_NONE)
+			gen_operand(u, &u->operand[0]);
+		if (u->operand[1].type != UD_NONE) {
+			mkasm(u, ",");
+			gen_operand(u, &u->operand[1]);
+		}
+		return;
+	default:
+		mkasm(u, "%s", ud_lookup_mnemonic(u->mnemonic));
+  }
+
+  if (u->c1)
+	size = u->operand[0].size;
+  else if (u->c2)
+	size = u->operand[1].size;
+  else if (u->c3)
+	size = u->operand[2].size;
+
+  if (size == 8)
+	mkasm(u, "b");
+  else if (size == 16)
+	mkasm(u, "w");
+  else if (size == 64)
+ 	mkasm(u, "q");
+
+  mkasm(u, " ");
+
+  if (u->operand[2].type != UD_NONE) {
+	gen_operand(u, &u->operand[2]);
+	mkasm(u, ", ");
+  }
+
+  if (u->operand[1].type != UD_NONE) {
+	gen_operand(u, &u->operand[1]);
+	mkasm(u, ", ");
+  }
+
+  if (u->operand[0].type != UD_NONE)
+	gen_operand(u, &u->operand[0]);
+}
diff -r 32034d1914a6 xen/kdb/x86/udis86-1.7/syn-intel.c
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/xen/kdb/x86/udis86-1.7/syn-intel.c	Wed Aug 29 14:39:57 2012 -0700
@@ -0,0 +1,208 @@
+/* -----------------------------------------------------------------------------
+ * syn-intel.c
+ *
+ * Copyright (c) 2002, 2003, 2004 Vivek Mohan <vivek@sig9.com>
+ * All rights reserved. See (LICENSE)
+ * -----------------------------------------------------------------------------
+ */
+
+#include "types.h"
+#include "extern.h"
+#include "decode.h"
+#include "itab.h"
+#include "syn.h"
+
+/* -----------------------------------------------------------------------------
+ * opr_cast() - Prints an operand cast.
+ * -----------------------------------------------------------------------------
+ */
+static void 
+opr_cast(struct ud* u, struct ud_operand* op)
+{
+  switch(op->size) {
+	case  8: mkasm(u, "byte " ); break;
+	case 16: mkasm(u, "word " ); break;
+	case 32: mkasm(u, "dword "); break;
+	case 64: mkasm(u, "qword "); break;
+	case 80: mkasm(u, "tword "); break;
+	default: break;
+  }
+  if (u->br_far)
+	mkasm(u, "far "); 
+  else if (u->br_near)
+	mkasm(u, "near ");
+}
+
+/* -----------------------------------------------------------------------------
+ * gen_operand() - Generates assembly output for each operand.
+ * -----------------------------------------------------------------------------
+ */
+static void gen_operand(struct ud* u, struct ud_operand* op, int syn_cast)
+{
+  switch(op->type) {
+	case UD_OP_REG:
+		mkasm(u, ud_reg_tab[op->base - UD_R_AL]);
+		break;
+
+	case UD_OP_MEM: {
+
+		int op_f = 0;
+
+		if (syn_cast) 
+			opr_cast(u, op);
+
+		mkasm(u, "[");
+
+		if (u->pfx_seg)
+			mkasm(u, "%s:", ud_reg_tab[u->pfx_seg - UD_R_AL]);
+
+		if (op->base) {
+			mkasm(u, "%s", ud_reg_tab[op->base - UD_R_AL]);
+			op_f = 1;
+		}
+
+		if (op->index) {
+			if (op_f)
+				mkasm(u, "+");
+			mkasm(u, "%s", ud_reg_tab[op->index - UD_R_AL]);
+			op_f = 1;
+		}
+
+		if (op->scale)
+			mkasm(u, "*%d", op->scale);
+
+		if (op->offset == 8) {
+			if (op->lval.sbyte < 0)
+				mkasm(u, "-0x%x", -op->lval.sbyte);
+			else	mkasm(u, "%s0x%x", (op_f) ? "+" : "", op->lval.sbyte);
+		}
+		else if (op->offset == 16)
+			mkasm(u, "%s0x%x", (op_f) ? "+" : "", op->lval.uword);
+		else if (op->offset == 32) {
+			if (u->adr_mode == 64) {
+				if (op->lval.sdword < 0)
+					mkasm(u, "-0x%x", -op->lval.sdword);
+				else	mkasm(u, "%s0x%x", (op_f) ? "+" : "", op->lval.sdword);
+			} 
+			else	mkasm(u, "%s0x%lx", (op_f) ? "+" : "", op->lval.udword);
+		}
+		else if (op->offset == 64) 
+			mkasm(u, "%s0x" FMT64 "x", (op_f) ? "+" : "", op->lval.uqword);
+
+		mkasm(u, "]");
+		break;
+	}
+			
+	case UD_OP_IMM:
+		if (syn_cast) opr_cast(u, op);
+		switch (op->size) {
+			case  8: mkasm(u, "0x%x", op->lval.ubyte);    break;
+			case 16: mkasm(u, "0x%x", op->lval.uword);    break;
+			case 32: mkasm(u, "0x%lx", op->lval.udword);  break;
+			case 64: mkasm(u, "0x" FMT64 "x", op->lval.uqword); break;
+			default: break;
+		}
+		break;
+
+	case UD_OP_JIMM:
+		if (syn_cast) opr_cast(u, op);
+		switch (op->size) {
+			case  8:
+				mkasm(u, "0x" FMT64 "x", u->pc + op->lval.sbyte); 
+				break;
+			case 16:
+				mkasm(u, "0x" FMT64 "x", u->pc + op->lval.sword);
+				break;
+			case 32:
+				mkasm(u, "0x" FMT64 "x", u->pc + op->lval.sdword);
+				break;
+			default:break;
+		}
+		break;
+
+	case UD_OP_PTR:
+		switch (op->size) {
+			case 32:
+				mkasm(u, "word 0x%x:0x%x", op->lval.ptr.seg, 
+					op->lval.ptr.off & 0xFFFF);
+				break;
+			case 48:
+				mkasm(u, "dword 0x%x:0x%lx", op->lval.ptr.seg, 
+					op->lval.ptr.off);
+				break;
+		}
+		break;
+
+	case UD_OP_CONST:
+		if (syn_cast) opr_cast(u, op);
+		mkasm(u, "%d", op->lval.udword);
+		break;
+
+	default: return;
+  }
+}
+
+/* =============================================================================
+ * translates to intel syntax 
+ * =============================================================================
+ */
+extern void ud_translate_intel(struct ud* u)
+{
+  /* -- prefixes -- */
+
+  /* check if P_OSO prefix is used */
+  if (! P_OSO(u->itab_entry->prefix) && u->pfx_opr) {
+	switch (u->dis_mode) {
+		case 16: 
+			mkasm(u, "o32 ");
+			break;
+		case 32:
+		case 64:
+ 			mkasm(u, "o16 ");
+			break;
+	}
+  }
+
+  /* check if P_ASO prefix was used */
+  if (! P_ASO(u->itab_entry->prefix) && u->pfx_adr) {
+	switch (u->dis_mode) {
+		case 16: 
+			mkasm(u, "a32 ");
+			break;
+		case 32:
+ 			mkasm(u, "a16 ");
+			break;
+		case 64:
+ 			mkasm(u, "a32 ");
+			break;
+	}
+  }
+
+  if (u->pfx_lock)
+	mkasm(u, "lock ");
+  if (u->pfx_rep)
+	mkasm(u, "rep ");
+  if (u->pfx_repne)
+	mkasm(u, "repne ");
+  if (u->implicit_addr && u->pfx_seg)
+	mkasm(u, "%s ", ud_reg_tab[u->pfx_seg - UD_R_AL]);
+
+  /* print the instruction mnemonic */
+  mkasm(u, "%s ", ud_lookup_mnemonic(u->mnemonic));
+
+  /* operand 1 */
+  if (u->operand[0].type != UD_NONE) {
+	gen_operand(u, &u->operand[0], u->c1);
+  }
+  /* operand 2 */
+  if (u->operand[1].type != UD_NONE) {
+	mkasm(u, ", ");
+	gen_operand(u, &u->operand[1], u->c2);
+  }
+
+  /* operand 3 */
+  if (u->operand[2].type != UD_NONE) {
+	mkasm(u, ", ");
+	gen_operand(u, &u->operand[2], u->c3);
+  }
+}
diff -r 32034d1914a6 xen/kdb/x86/udis86-1.7/syn.c
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/xen/kdb/x86/udis86-1.7/syn.c	Wed Aug 29 14:39:57 2012 -0700
@@ -0,0 +1,61 @@
+/* -----------------------------------------------------------------------------
+ * syn.c
+ *
+ * Copyright (c) 2002, 2003, 2004 Vivek Mohan <vivek@sig9.com>
+ * All rights reserved. See (LICENSE)
+ * -----------------------------------------------------------------------------
+ */
+
+/* -----------------------------------------------------------------------------
+ * Intel Register Table - Order Matters (types.h)!
+ * -----------------------------------------------------------------------------
+ */
+const char* ud_reg_tab[] = 
+{
+  "al",		"cl",		"dl",		"bl",
+  "ah",		"ch",		"dh",		"bh",
+  "spl",	"bpl",		"sil",		"dil",
+  "r8b",	"r9b",		"r10b",		"r11b",
+  "r12b",	"r13b",		"r14b",		"r15b",
+
+  "ax",		"cx",		"dx",		"bx",
+  "sp",		"bp",		"si",		"di",
+  "r8w",	"r9w",		"r10w",		"r11w",
+  "r12w",	"r13w",		"r14w",		"r15w",
+	
+  "eax",	"ecx",		"edx",		"ebx",
+  "esp",	"ebp",		"esi",		"edi",
+  "r8d",	"r9d",		"r10d",		"r11d",
+  "r12d",	"r13d",		"r14d",		"r15d",
+	
+  "rax",	"rcx",		"rdx",		"rbx",
+  "rsp",	"rbp",		"rsi",		"rdi",
+  "r8",		"r9",		"r10",		"r11",
+  "r12",	"r13",		"r14",		"r15",
+
+  "es",		"cs",		"ss",		"ds",
+  "fs",		"gs",	
+
+  "cr0",	"cr1",		"cr2",		"cr3",
+  "cr4",	"cr5",		"cr6",		"cr7",
+  "cr8",	"cr9",		"cr10",		"cr11",
+  "cr12",	"cr13",		"cr14",		"cr15",
+	
+  "dr0",	"dr1",		"dr2",		"dr3",
+  "dr4",	"dr5",		"dr6",		"dr7",
+  "dr8",	"dr9",		"dr10",		"dr11",
+  "dr12",	"dr13",		"dr14",		"dr15",
+
+  "mm0",	"mm1",		"mm2",		"mm3",
+  "mm4",	"mm5",		"mm6",		"mm7",
+
+  "st0",	"st1",		"st2",		"st3",
+  "st4",	"st5",		"st6",		"st7", 
+
+  "xmm0",	"xmm1",		"xmm2",		"xmm3",
+  "xmm4",	"xmm5",		"xmm6",		"xmm7",
+  "xmm8",	"xmm9",		"xmm10",	"xmm11",
+  "xmm12",	"xmm13",	"xmm14",	"xmm15",
+
+  "rip"
+};
diff -r 32034d1914a6 xen/kdb/x86/udis86-1.7/syn.h
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/xen/kdb/x86/udis86-1.7/syn.h	Wed Aug 29 14:39:57 2012 -0700
@@ -0,0 +1,27 @@
+/* -----------------------------------------------------------------------------
+ * syn.h
+ *
+ * Copyright (c) 2006, Vivek Mohan <vivek@sig9.com>
+ * All rights reserved. See LICENSE
+ * -----------------------------------------------------------------------------
+ */
+#ifndef UD_SYN_H
+#define UD_SYN_H
+
+#if 0
+#include <stdio.h>
+#include <stdarg.h>
+#endif
+#include "types.h"
+
+extern const char* ud_reg_tab[];
+
+static void mkasm(struct ud* u, const char* fmt, ...)
+{
+  va_list ap;
+  unsigned avail = sizeof(u->insn_buffer) - u->insn_fill;
+  int n;
+
+  if (avail <= 1)
+	return;          /* insn_buffer already full */
+  va_start(ap, fmt);
+  n = vsnprintf((char*) u->insn_buffer + u->insn_fill, avail, fmt, ap);
+  va_end(ap);
+  if (n > 0)           /* don't count bytes lost to truncation */
+	u->insn_fill += ((unsigned)n < avail) ? (unsigned)n : avail - 1;
+}
+
+#endif
diff -r 32034d1914a6 xen/kdb/x86/udis86-1.7/types.h
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/xen/kdb/x86/udis86-1.7/types.h	Wed Aug 29 14:39:57 2012 -0700
@@ -0,0 +1,186 @@
+/* -----------------------------------------------------------------------------
+ * types.h
+ *
+ * Copyright (c) 2006, Vivek Mohan <vivek@sig9.com>
+ * All rights reserved. See LICENSE
+ * -----------------------------------------------------------------------------
+ */
+#ifndef UD_TYPES_H
+#define UD_TYPES_H
+
+
+#include "../../include/kdbinc.h"
+
+#define FMT64 "%ll"
+#include "itab.h"
+
+/* -----------------------------------------------------------------------------
+ * All possible "types" of objects in udis86. Order is Important!
+ * -----------------------------------------------------------------------------
+ */
+enum ud_type
+{
+  UD_NONE,
+
+  /* 8 bit GPRs */
+  UD_R_AL,	UD_R_CL,	UD_R_DL,	UD_R_BL,
+  UD_R_AH,	UD_R_CH,	UD_R_DH,	UD_R_BH,
+  UD_R_SPL,	UD_R_BPL,	UD_R_SIL,	UD_R_DIL,
+  UD_R_R8B,	UD_R_R9B,	UD_R_R10B,	UD_R_R11B,
+  UD_R_R12B,	UD_R_R13B,	UD_R_R14B,	UD_R_R15B,
+
+  /* 16 bit GPRs */
+  UD_R_AX,	UD_R_CX,	UD_R_DX,	UD_R_BX,
+  UD_R_SP,	UD_R_BP,	UD_R_SI,	UD_R_DI,
+  UD_R_R8W,	UD_R_R9W,	UD_R_R10W,	UD_R_R11W,
+  UD_R_R12W,	UD_R_R13W,	UD_R_R14W,	UD_R_R15W,
+	
+  /* 32 bit GPRs */
+  UD_R_EAX,	UD_R_ECX,	UD_R_EDX,	UD_R_EBX,
+  UD_R_ESP,	UD_R_EBP,	UD_R_ESI,	UD_R_EDI,
+  UD_R_R8D,	UD_R_R9D,	UD_R_R10D,	UD_R_R11D,
+  UD_R_R12D,	UD_R_R13D,	UD_R_R14D,	UD_R_R15D,
+	
+  /* 64 bit GPRs */
+  UD_R_RAX,	UD_R_RCX,	UD_R_RDX,	UD_R_RBX,
+  UD_R_RSP,	UD_R_RBP,	UD_R_RSI,	UD_R_RDI,
+  UD_R_R8,	UD_R_R9,	UD_R_R10,	UD_R_R11,
+  UD_R_R12,	UD_R_R13,	UD_R_R14,	UD_R_R15,
+
+  /* segment registers */
+  UD_R_ES,	UD_R_CS,	UD_R_SS,	UD_R_DS,
+  UD_R_FS,	UD_R_GS,	
+
+  /* control registers*/
+  UD_R_CR0,	UD_R_CR1,	UD_R_CR2,	UD_R_CR3,
+  UD_R_CR4,	UD_R_CR5,	UD_R_CR6,	UD_R_CR7,
+  UD_R_CR8,	UD_R_CR9,	UD_R_CR10,	UD_R_CR11,
+  UD_R_CR12,	UD_R_CR13,	UD_R_CR14,	UD_R_CR15,
+	
+  /* debug registers */
+  UD_R_DR0,	UD_R_DR1,	UD_R_DR2,	UD_R_DR3,
+  UD_R_DR4,	UD_R_DR5,	UD_R_DR6,	UD_R_DR7,
+  UD_R_DR8,	UD_R_DR9,	UD_R_DR10,	UD_R_DR11,
+  UD_R_DR12,	UD_R_DR13,	UD_R_DR14,	UD_R_DR15,
+
+  /* mmx registers */
+  UD_R_MM0,	UD_R_MM1,	UD_R_MM2,	UD_R_MM3,
+  UD_R_MM4,	UD_R_MM5,	UD_R_MM6,	UD_R_MM7,
+
+  /* x87 registers */
+  UD_R_ST0,	UD_R_ST1,	UD_R_ST2,	UD_R_ST3,
+  UD_R_ST4,	UD_R_ST5,	UD_R_ST6,	UD_R_ST7, 
+
+  /* extended multimedia registers */
+  UD_R_XMM0,	UD_R_XMM1,	UD_R_XMM2,	UD_R_XMM3,
+  UD_R_XMM4,	UD_R_XMM5,	UD_R_XMM6,	UD_R_XMM7,
+  UD_R_XMM8,	UD_R_XMM9,	UD_R_XMM10,	UD_R_XMM11,
+  UD_R_XMM12,	UD_R_XMM13,	UD_R_XMM14,	UD_R_XMM15,
+
+  UD_R_RIP,
+
+  /* Operand Types */
+  UD_OP_REG,	UD_OP_MEM,	UD_OP_PTR,	UD_OP_IMM,	
+  UD_OP_JIMM,	UD_OP_CONST
+};
+
+/* -----------------------------------------------------------------------------
+ * struct ud_operand - Disassembled instruction Operand.
+ * -----------------------------------------------------------------------------
+ */
+struct ud_operand 
+{
+  enum ud_type		type;
+  uint8_t		size;
+  union {
+	int8_t		sbyte;
+	uint8_t		ubyte;
+	int16_t		sword;
+	uint16_t	uword;
+	int32_t		sdword;
+	uint32_t	udword;
+	int64_t		sqword;
+	uint64_t	uqword;
+
+	struct {
+		uint16_t seg;
+		uint32_t off;
+	} ptr;
+  } lval;
+
+  enum ud_type		base;
+  enum ud_type		index;
+  uint8_t		offset;
+  uint8_t		scale;	
+};
+
+/* -----------------------------------------------------------------------------
+ * struct ud - The udis86 object.
+ * -----------------------------------------------------------------------------
+ */
+struct ud
+{
+  int 			(*inp_hook) (struct ud*);
+  uint8_t		inp_curr;
+  uint8_t		inp_fill;
+  uint8_t		inp_ctr;
+  uint8_t*		inp_buff;
+  uint8_t*		inp_buff_end;
+  uint8_t		inp_end;
+  void			(*translator)(struct ud*);
+  uint64_t		insn_offset;
+  char			insn_hexcode[32];
+  char			insn_buffer[64];
+  unsigned int		insn_fill;
+  uint8_t		dis_mode;
+  uint64_t		pc;
+  uint8_t		vendor;
+  struct map_entry*	mapen;
+  enum ud_mnemonic_code	mnemonic;
+  struct ud_operand	operand[3];
+  uint8_t		error;
+  uint8_t	 	pfx_rex;
+  uint8_t 		pfx_seg;
+  uint8_t 		pfx_opr;
+  uint8_t 		pfx_adr;
+  uint8_t 		pfx_lock;
+  uint8_t 		pfx_rep;
+  uint8_t 		pfx_repe;
+  uint8_t 		pfx_repne;
+  uint8_t 		pfx_insn;
+  uint8_t		default64;
+  uint8_t		opr_mode;
+  uint8_t		adr_mode;
+  uint8_t		br_far;
+  uint8_t		br_near;
+  uint8_t		implicit_addr;
+  uint8_t		c1;
+  uint8_t		c2;
+  uint8_t		c3;
+  uint8_t 		inp_cache[256];
+  uint8_t		inp_sess[64];
+  struct ud_itab_entry * itab_entry;
+};
+
+/* -----------------------------------------------------------------------------
+ * Type-definitions
+ * -----------------------------------------------------------------------------
+ */
+typedef enum ud_type 		ud_type_t;
+typedef enum ud_mnemonic_code	ud_mnemonic_code_t;
+
+typedef struct ud 		ud_t;
+typedef struct ud_operand 	ud_operand_t;
+
+#define UD_SYN_INTEL		ud_translate_intel
+#define UD_SYN_ATT		ud_translate_att
+#define UD_EOI			-1
+#define UD_INP_CACHE_SZ		32
+#define UD_VENDOR_AMD		0
+#define UD_VENDOR_INTEL		1
+
+#define bail_out(ud,error_code) longjmp( (ud)->bailout, error_code )
+#define try_decode(ud) if ( setjmp( (ud)->bailout ) == 0 )
+#define catch_error() else
+
+#endif
diff -r 32034d1914a6 xen/kdb/x86/udis86-1.7/udis86.c
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/xen/kdb/x86/udis86-1.7/udis86.c	Wed Aug 29 14:39:57 2012 -0700
@@ -0,0 +1,156 @@
+/* -----------------------------------------------------------------------------
+ * udis86.c
+ *
+ * Copyright (c) 2004, 2005, 2006, Vivek Mohan <vivek@sig9.com>
+ * All rights reserved. See LICENSE
+ * -----------------------------------------------------------------------------
+ */
+
+#if 0
+#include <stdlib.h>
+#include <stdio.h>
+#include <string.h>
+#endif
+
+#include "input.h"
+#include "extern.h"
+
+/* =============================================================================
+ * ud_init() - Initializes ud_t object.
+ * =============================================================================
+ */
+extern void 
+ud_init(struct ud* u)
+{
+  memset((void*)u, 0, sizeof(struct ud));
+  ud_set_mode(u, 16);
+  u->mnemonic = UD_Iinvalid;
+  ud_set_pc(u, 0);
+#ifndef __UD_STANDALONE__
+  ud_set_input_file(u, stdin);
+#endif /* __UD_STANDALONE__ */
+}
+
+/* =============================================================================
+ * ud_disassemble() - disassembles one instruction and returns the number of 
+ * bytes disassembled. A zero means end of disassembly.
+ * =============================================================================
+ */
+extern unsigned int
+ud_disassemble(struct ud* u)
+{
+  if (ud_input_end(u))
+	return 0;
+
+
+  u->insn_buffer[0] = u->insn_hexcode[0] = 0;
+
+
+  if (ud_decode(u) == 0)
+	return 0;
+  if (u->translator)
+	u->translator(u);
+  return ud_insn_len(u);
+}
+
+/* =============================================================================
+ * ud_set_mode() - Set Disassembly Mode.
+ * =============================================================================
+ */
+extern void 
+ud_set_mode(struct ud* u, uint8_t m)
+{
+  switch(m) {
+	case 16:
+	case 32:
+	case 64: u->dis_mode = m ; return;
+	default: u->dis_mode = 16; return;
+  }
+}
+
+/* =============================================================================
+ * ud_set_vendor() - Set vendor.
+ * =============================================================================
+ */
+extern void 
+ud_set_vendor(struct ud* u, unsigned v)
+{
+  switch(v) {
+	case UD_VENDOR_INTEL:
+		u->vendor = v;
+		break;
+	default:
+		u->vendor = UD_VENDOR_AMD;
+  }
+}
+
+/* =============================================================================
+ * ud_set_pc() - Sets code origin. 
+ * =============================================================================
+ */
+extern void 
+ud_set_pc(struct ud* u, uint64_t o)
+{
+  u->pc = o;
+}
+
+/* =============================================================================
+ * ud_set_syntax() - Sets the output syntax.
+ * =============================================================================
+ */
+extern void 
+ud_set_syntax(struct ud* u, void (*t)(struct ud*))
+{
+  u->translator = t;
+}
+
+/* =============================================================================
+ * ud_insn_asm() - Returns the disassembled instruction in assembly form.
+ * =============================================================================
+ */
+extern char* 
+ud_insn_asm(struct ud* u) 
+{
+  return u->insn_buffer;
+}
+
+/* =============================================================================
+ * ud_insn_off() - Returns the offset of the disassembled instruction.
+ * =============================================================================
+ */
+extern uint64_t
+ud_insn_off(struct ud* u) 
+{
+  return u->insn_offset;
+}
+
+
+/* =============================================================================
+ * ud_insn_hex() - Returns hex form of disassembled instruction.
+ * =============================================================================
+ */
+extern char* 
+ud_insn_hex(struct ud* u) 
+{
+  return u->insn_hexcode;
+}
+
+/* =============================================================================
+ * ud_insn_ptr() - Returns a pointer to the bytes that were disassembled.
+ * =============================================================================
+ */
+extern uint8_t* 
+ud_insn_ptr(struct ud* u) 
+{
+  return u->inp_sess;
+}
+
+/* =============================================================================
+ * ud_insn_len() - Returns the count of bytes disassembled.
+ * =============================================================================
+ */
+extern unsigned int 
+ud_insn_len(struct ud* u) 
+{
+  return u->inp_ctr;
+}

--MP_/vtJK/xRhlenzx/+qfkRLsPz
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--MP_/vtJK/xRhlenzx/+qfkRLsPz--


From xen-devel-bounces@lists.xen.org Thu Aug 30 06:42:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Aug 2012 06:42:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6ySJ-0003DS-PJ; Thu, 30 Aug 2012 06:42:19 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T6ySI-0003DN-68
	for xen-devel@lists.xensource.com; Thu, 30 Aug 2012 06:42:18 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1346308929!8723136!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA5Mzg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9740 invoked from network); 30 Aug 2012 06:42:09 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Aug 2012 06:42:09 -0000
X-IronPort-AV: E=Sophos;i="4.80,337,1344211200"; d="scan'208";a="14261528"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	30 Aug 2012 06:42:08 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Thu, 30 Aug 2012 07:42:08 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1T6yS8-0003bo-9C;
	Thu, 30 Aug 2012 06:42:08 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1T6yS4-0000vO-Lr;
	Thu, 30 Aug 2012 07:42:08 +0100
To: xen-devel@lists.xensource.com
Message-ID: <osstest-13638-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 30 Aug 2012 07:42:04 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13638: tolerable FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13638 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13638/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13636
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13636
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13636
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13636

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass

version targeted for testing:
 xen                  a0b5f8102a00
baseline version:
 xen                  a0b5f8102a00

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 30 08:00:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Aug 2012 08:00:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6zfv-0004X3-R0; Thu, 30 Aug 2012 08:00:27 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Libing.Chen@uts.edu.au>) id 1T6y3h-00037x-Jp
	for xen-devel@lists.xen.org; Thu, 30 Aug 2012 06:16:53 +0000
Received: from [85.158.143.99:39243] by server-2.bemta-4.messagelabs.com id
	85/0A-21239-4550F305; Thu, 30 Aug 2012 06:16:52 +0000
X-Env-Sender: Libing.Chen@uts.edu.au
X-Msg-Ref: server-15.tower-216.messagelabs.com!1346307410!27514510!1
X-Originating-IP: [138.25.1.132]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM4LjI1LjEuMTMyID0+IDg0Njgy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24918 invoked from network); 30 Aug 2012 06:16:51 -0000
Received: from utsppagent.uts.edu.au (HELO utspp.itd.uts.edu.au) (138.25.1.132)
	by server-15.tower-216.messagelabs.com with SMTP;
	30 Aug 2012 06:16:51 -0000
Received: from fry.itd.uts.edu.au (fry.itd.uts.edu.au [138.25.243.71])
	by utsppagent.uts.edu.au (8.14.4/8.14.4) with ESMTP id q7U6GmZ6013460
	for <xen-devel@lists.xen.org>; Thu, 30 Aug 2012 16:16:49 +1000
MIME-version: 1.0
Received: from postoffice.adsroot.uts.edu.au
	(hts1.adsroot.uts.edu.au [138.25.27.103])
	by postoffice.uts.edu.au (Sun Java(tm) System Messaging Server 6.3-7.01
	(built May 14 2008;
	32bit)) with ESMTPS id <0M9K00H4M1G0BZ60@postoffice.uts.edu.au>
	for xen-devel@lists.xen.org; Thu, 30 Aug 2012 16:16:49 +1000 (EST)
Received: from eng046150.eng.uts.edu.au (138.25.27.5)
	by postoffice.adsroot.uts.edu.au (138.25.27.103) with Microsoft SMTP
	Server (TLS) id 8.2.254.0; Thu, 30 Aug 2012 16:16:47 +1000
Message-id: <503F053D.5010001@uts.edu.au>
Date: Thu, 30 Aug 2012 16:16:29 +1000
From: bing <Libing.Chen@uts.edu.au>
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:14.0) Gecko/20120717
	Thunderbird/14.0
To: xen-devel@lists.xen.org
References: <76048170-772b-4365-87d2-00a3bf4aa5f8@default>
In-reply-to: <76048170-772b-4365-87d2-00a3bf4aa5f8@default>
x-proofpoint-disclaimer: uts
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10432:5.7.7855, 1.0.431,
	0.0.0000
	definitions=2012-08-30_02:2012-08-30, 2012-08-30,
	1970-01-01 signatures=0
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 spamscore=0
	ipscore=0 suspectscore=4
	phishscore=0 bulkscore=0 adultscore=0 classifier=spam adjust=0
	reason=mlx
	scancount=1 engine=6.0.2-1203120001 definitions=main-1208290407
X-Mailman-Approved-At: Thu, 30 Aug 2012 08:00:26 +0000
Subject: Re: [Xen-devel] Xen 4.2.0 won't boot with FC17 dom0 on Dell
 Optiplex 790, boots fine with 4.1.3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I have a similar crash on Xen 4.2.0-rc3 with an HP DC7900: it reboots right
after loading the Dom0 kernel. I found a workaround by passing the xsave=0
option to the Xen command line. It seems to be CPU-related; the same setup
(same Xen and Dom0 kernel versions) works fine on a Dell Precision 3200.


bing

On 08/30/2012 08:22 AM, Dan Magenheimer wrote:
> Anybody else see this: a FC17 dom0 which boots fine for 4.1.3
> crashes early in boot with Xen 4.2.0-rc4?
>
> The hardware is a new Dell Optiplex 790 which I've noticed has a couple
> of idiosyncrasies when booting recent upstream versions of Linux:
> 1) reboot=pci is required or reboot hangs
> 2) reducing visible memory via memmap= on a Linux command line is
>     fairly difficult (requires a very complex sequence of Linux boot
>     parameters, presumably due to a weird e820 RAM map in this hardware)
>
> These idiosyncrasies may or may not be related to the dom0 crash,
> just mentioning them in case they suggest an easy workaround.
>
> The stock FC17 Xen booted, then I built/booted xen-4.1-testing.hg
> and it was fine.  Then I built/booted 4.2.0-rc4 and see the dom0 crash.
> FC17 kernel/dom0 rev is 3.3.4-5.fc17.x86_64.  I can reboot the
> box to 4.1.3 still (but can't do anything due to at least a 4.1/4.2
> libxenlight.so version difference... will have to reinstall 4.1
> tools I guess).
>
> I think there have been some Xen memmap issues fixed in recent
> upstream Linux, so will try building/booting linux-3.5 while
> everyone flies home from Xen Summit ;-).  But this seems
> unpromising since Xen 4.1.3 works fine.
>
> Tombstone attached:
>
> [first, tail end of previous boot with dom0 crash, then full log with dom0 crash]
>
> (XEN) elf_load_binary: phdr 3 at 0xffffffff81aed000 -> 0xffffffff81bd9000
> (XEN) Scrubbing Free RAM: ......................................................
> .....................done.
> (XEN) Initial low memory virq threshold set at 0x4000 pages.
> (XEN) Std. Loglevel: All
> (XEN) Guest Loglevel: All
> (XEN) Xen is relinquishing VGA console.
> (XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen
> )
> (XEN) Freed 260kB init memory.
> mapping kernel into physical memory
> Xen: setup ISA identity maps
> about to get started...
> (XEN) Domain 0 crashed: rebooting machine in 5 seconds.
> (XEN) Resetting with ACPI MEMORY or I/O RESET_REG.
>   __  __            _  _    ____    ___              _  _
>   \ \/ /___ _ __   | || |  |___ \  / _ \    _ __ ___| || |     _ __  _ __ ___
>    \  // _ \ '_ \  | || |_   __) || | | |__| '__/ __| || |_ __| '_ \| '__/ _ \
>    /  \  __/ | | | |__   _| / __/ | |_| |__| | | (__|__   _|__| |_) | | |  __/
>   /_/\_\___|_| |_|    |_|(_)_____(_)___/   |_|  \___|  |_|    | .__/|_|  \___|
>                                                               |_|
> (XEN) Xen version 4.2.0-rc4-pre (root@vpn.oracle.com) (gcc (GCC) 4.7.0 20120507
> (Red Hat 4.7.0-5)) Wed Aug 29 15:27:44 MDT 2012
> (XEN) Latest ChangeSet: Tue Aug 28 22:40:45 2012 +0100 25786:a0b5f8102a00
> (XEN) Bootloader: GRUB 2.00~beta4
> (XEN) Command line: placeholder com1=115200,8n1 dom0_mem=512M console=vga,com1 l
> oglvl=all guest_loglvl=all cpuidle=off
> (XEN) Video information:
> (XEN)  VGA is text mode 80x25, font 8x16
> (XEN)  VBE/DDC methods: V2; EDID transfer time: 1 seconds
> (XEN) Disc information:
> (XEN)  Found 1 MBR signatures
> (XEN)  Found 1 EDD information structures
> (XEN) Xen-e820 RAM map:
> (XEN)  0000000000000000 - 000000000009ac00 (usable)
> (XEN)  000000000009ac00 - 00000000000a0000 (reserved)
> (XEN)  00000000000e0000 - 0000000000100000 (reserved)
> (XEN)  0000000000100000 - 0000000020000000 (usable)
> (XEN)  0000000020000000 - 0000000020200000 (reserved)
> (XEN)  0000000020200000 - 0000000040000000 (usable)
> (XEN)  0000000040000000 - 0000000040200000 (reserved)
> (XEN)  0000000040200000 - 00000000ca9f7000 (usable)
> (XEN)  00000000ca9f7000 - 00000000caa3b000 (reserved)
> (XEN)  00000000caa3b000 - 00000000cadb7000 (usable)
> (XEN)  00000000cadb7000 - 00000000cade7000 (reserved)
> (XEN)  00000000cade7000 - 00000000cafe7000 (ACPI NVS)
> (XEN)  00000000cafe7000 - 00000000cafff000 (ACPI data)
> (XEN)  00000000cafff000 - 00000000cb000000 (usable)
> (XEN)  00000000cb800000 - 00000000cfa00000 (reserved)
> (XEN)  00000000fed1c000 - 00000000fed20000 (reserved)
> (XEN)  00000000ffc00000 - 00000000ffc20000 (reserved)
> (XEN)  0000000100000000 - 000000022e000000 (usable)
> (XEN) ACPI: RSDP 000FE300, 0024 (r2 DELL  )
> (XEN) ACPI: XSDT CAFFDE18, 007C (r1 DELL    CBX3     6222004 MSFT    10013)
> (XEN) ACPI: FACP CAF87D98, 00F4 (r4 DELL    CBX3     6222004 MSFT    10013)
> (XEN) ACPI: DSDT CAF71018, 7341 (r2 INT430 SYSFexxx     1001 INTL 20090903)
> (XEN) ACPI: FACS CAFE4D40, 0040
> (XEN) ACPI: APIC CAFFCF18, 00CC (r2 DELL    CBX3     6222004 MSFT    10013)
> (XEN) ACPI: TCPA CAFE5D18, 0032 (r2                        0             0)
> (XEN) ACPI: SSDT CAF88A98, 02F9 (r1 DELLTP      TPM     3000 INTL 20090903)
> (XEN) ACPI: MCFG CAFE5C98, 003C (r1 DELL   SNDYBRDG  6222004 MSFT       97)
> (XEN) ACPI: HPET CAFE5C18, 0038 (r1 A M I   PCHHPET  6222004 AMI.        3)
> (XEN) ACPI: BOOT CAFE5B98, 0028 (r1 DELL   CBX3      6222004 AMI     10013)
> (XEN) ACPI: SSDT CAF7E818, 07C2 (r1  PmRef  Cpu0Ist     3000 INTL 20090903)
> (XEN) ACPI: SSDT CAF7D018, 0996 (r1  PmRef    CpuPm     3000 INTL 20090903)
> (XEN) ACPI: DMAR CAF87C18, 00E8 (r1 INTEL      SNB         1 INTL        1)
> (XEN) ACPI: SLIC CAF86C18, 0176 (r3 DELL    CBX3     6222004 MSFT    10013)
> (XEN) System RAM: 8073MB (8266808kB)
> (XEN) No NUMA configuration found
> (XEN) Faking a node at 0000000000000000-000000022e000000
> (XEN) Domain heap initialised
> (XEN) found SMP MP-table at 000f1fb0
> (XEN) DMI 2.6 present.
> (XEN) Using APIC driver default
> (XEN) ACPI: PM-Timer IO Port: 0x408
> (XEN) ACPI: ACPI SLEEP INFO: pm1x_cnt[404,0], pm1x_evt[400,0]
> (XEN) ACPI: 32/64X FACS address mismatch in FADT - cafe4e40/00000000cafe4d40, us
> ing 32
> (XEN) ACPI:                  wakeup_vec[cafe4e4c], vec_size[20]
> (XEN) ACPI: Local APIC address 0xfee00000
> (XEN) ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled)
> (XEN) Processor #0 6:10 APIC version 21
> (XEN) ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
> (XEN) Processor #2 6:10 APIC version 21
> (XEN) ACPI: LAPIC (acpi_id[0x03] lapic_id[0x04] enabled)
> (XEN) Processor #4 6:10 APIC version 21
> (XEN) ACPI: LAPIC (acpi_id[0x04] lapic_id[0x06] enabled)
> (XEN) Processor #6 6:10 APIC version 21
> (XEN) ACPI: LAPIC (acpi_id[0x05] lapic_id[0x04] disabled)
> (XEN) ACPI: LAPIC (acpi_id[0x06] lapic_id[0x05] disabled)
> (XEN) ACPI: LAPIC (acpi_id[0x07] lapic_id[0x06] disabled)
> (XEN) ACPI: LAPIC (acpi_id[0x08] lapic_id[0x07] disabled)
> (XEN) ACPI: LAPIC (acpi_id[0x09] lapic_id[0x08] disabled)
> (XEN) ACPI: LAPIC (acpi_id[0x0a] lapic_id[0x09] disabled)
> (XEN) ACPI: LAPIC (acpi_id[0x0b] lapic_id[0x0a] disabled)
> (XEN) ACPI: LAPIC (acpi_id[0x0c] lapic_id[0x0b] disabled)
> (XEN) ACPI: LAPIC (acpi_id[0x0d] lapic_id[0x0c] disabled)
> (XEN) ACPI: LAPIC (acpi_id[0x0e] lapic_id[0x0d] disabled)
> (XEN) ACPI: LAPIC (acpi_id[0x0f] lapic_id[0x0e] disabled)
> (XEN) ACPI: LAPIC (acpi_id[0x10] lapic_id[0x0f] disabled)
> (XEN) ACPI: IOAPIC (id[0x02] address[0xfec00000] gsi_base[0])
> (XEN) IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-23
> (XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
> (XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
> (XEN) ACPI: IRQ0 used by override.
> (XEN) ACPI: IRQ2 used by override.
> (XEN) ACPI: IRQ9 used by override.
> (XEN) Enabling APIC mode:  Flat.  Using 1 I/O APICs
> (XEN) ACPI: HPET id: 0x8086a701 base: 0xfed00000
> (XEN) Table is not found!
> (XEN) Using ACPI (MADT) for SMP configuration information
> (XEN) SMP: Allowing 16 CPUs (12 hotplug CPUs)
> (XEN) IRQ limits: 24 GSI, 760 MSI/MSI-X
> (XEN) Switched to APIC driver x2apic_cluster.
> (XEN) Using scheduler: SMP Credit Scheduler (credit)
> (XEN) Detected 3093.024 MHz processor.
> (XEN) Initing memory sharing.
> (XEN) xstate_init: using cntxt_size: 0x340 and states: 0x7
> (XEN) mce_intel.c:1239: MCA Capability: BCAST 1 SER 0 CMCI 1 firstbank 0 extende
> d MCE MSR 0
> (XEN) Intel machine check reporting enabled
> (XEN) PCI: MCFG configuration 0: base f8000000 segment 0000 buses 00 - 3f
> (XEN) PCI: Not using MCFG for segment 0000 bus 00-3f
> (XEN) Intel VT-d supported page sizes: 4kB.
> (XEN) Intel VT-d supported page sizes: 4kB.
> (XEN) Intel VT-d Snoop Control not enabled.
> (XEN) Intel VT-d Dom0 DMA Passthrough not enabled.
> (XEN) Intel VT-d Queued Invalidation enabled.
> (XEN) Intel VT-d Interrupt Remapping enabled.
> (XEN) Intel VT-d Shared EPT tables not enabled.
> (XEN) I/O virtualisation enabled
> (XEN)  - Dom0 mode: Relaxed
> (XEN) Enabled directed EOI with ioapic_ack_old on!
> (XEN) ENABLING IO-APIC IRQs
> (XEN)  -> Using old ACK method
> (XEN) ..TIMER: vector=0xF0 apic1=0 pin1=2 apic2=-1 pin2=-1
> (XEN) TSC deadline timer enabled
> (XEN) Platform timer is 14.318MHz HPET
> (XEN) Allocated console ring of 32 KiB.
> (XEN) VMX: Supported advanced features:
> (XEN)  - APIC MMIO access virtualisation
> (XEN)  - APIC TPR shadow
> (XEN)  - Extended Page Tables (EPT)
> (XEN)  - Virtual-Processor Identifiers (VPID)
> (XEN)  - Virtual NMI
> (XEN)  - MSR direct-access bitmap
> (XEN)  - Unrestricted Guest
> (XEN) HVM: ASIDs enabled.
> (XEN) HVM: VMX enabled
> (XEN) HVM: Hardware Assisted Paging (HAP) detected
> (XEN) HVM: HAP page sizes: 4kB, 2MB
> (XEN) Brought up 4 CPUs
> (XEN) ACPI sleep modes: S3
> (XEN) mcheck_poll: Machine check polling timer started.
> (XEN) mtrr: your CPUs had inconsistent variable MTRR settings
> (XEN) mtrr: probably your BIOS does not setup all CPUs.
> (XEN) mtrr: corrected configuration.
> (XEN) *** LOADING DOMAIN 0 ***
> (XEN) elf_parse_binary: phdr: paddr=0x1000000 memsz=0x885000
> (XEN) elf_parse_binary: phdr: paddr=0x1a00000 memsz=0xd70e0
> (XEN) elf_parse_binary: phdr: paddr=0x1ad8000 memsz=0x14240
> (XEN) elf_parse_binary: phdr: paddr=0x1aed000 memsz=0x6bf000
> (XEN) elf_parse_binary: memory: 0x1000000 -> 0x21ac000
> (XEN) elf_xen_parse_note: GUEST_OS = "linux"
> (XEN) elf_xen_parse_note: GUEST_VERSION = "2.6"
> (XEN) elf_xen_parse_note: XEN_VERSION = "xen-3.0"
> (XEN) elf_xen_parse_note: VIRT_BASE = 0xffffffff80000000
> (XEN) elf_xen_parse_note: ENTRY = 0xffffffff81aed200
> (XEN) elf_xen_parse_note: HYPERCALL_PAGE = 0xffffffff81001000
> (XEN) elf_xen_parse_note: FEATURES = "!writable_page_tables|pae_pgdir_above_4gb"
> (XEN) elf_xen_parse_note: PAE_MODE = "yes"
> (XEN) elf_xen_parse_note: LOADER = "generic"
> (XEN) elf_xen_parse_note: unknown xen elf note (0xd)
> (XEN) elf_xen_parse_note: SUSPEND_CANCEL = 0x1
> (XEN) elf_xen_parse_note: HV_START_LOW = 0xffff800000000000
> (XEN) elf_xen_parse_note: PADDR_OFFSET = 0x0
> (XEN) elf_xen_addr_calc_check: addresses:
> (XEN)     virt_base        = 0xffffffff80000000
> (XEN)     elf_paddr_offset = 0x0
> (XEN)     virt_offset      = 0xffffffff80000000
> (XEN)     virt_kstart      = 0xffffffff81000000
> (XEN)     virt_kend        = 0xffffffff821ac000
> (XEN)     virt_entry       = 0xffffffff81aed200
> (XEN)     p2m_base         = 0xffffffffffffffff
> (XEN)  Xen  kernel: 64-bit, lsb, compat32
> (XEN)  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x21ac000
> (XEN) PHYSICAL MEMORY ARRANGEMENT:
> (XEN)  Dom0 alloc.:   0000000220000000->0000000224000000 (103435 pages to be allocated)
> (XEN)  Init. ramdisk: 000000022b40b000->000000022dfffa00
> (XEN) VIRTUAL MEMORY ARRANGEMENT:
> (XEN)  Loaded kernel: ffffffff81000000->ffffffff821ac000
> (XEN)  Init. ramdisk: ffffffff821ac000->ffffffff84da0a00
> (XEN)  Phys-Mach map: ffffffff84da1000->ffffffff84ea1000
> (XEN)  Start info:    ffffffff84ea1000->ffffffff84ea14b4
> (XEN)  Page tables:   ffffffff84ea2000->ffffffff84ecd000
> (XEN)  Boot stack:    ffffffff84ecd000->ffffffff84ece000
> (XEN)  TOTAL:         ffffffff80000000->ffffffff85000000
> (XEN)  ENTRY ADDRESS: ffffffff81aed200
> (XEN) Dom0 has maximum 4 VCPUs
> (XEN) elf_load_binary: phdr 0 at 0xffffffff81000000 -> 0xffffffff81885000
> (XEN) elf_load_binary: phdr 1 at 0xffffffff81a00000 -> 0xffffffff81ad70e0
> (XEN) elf_load_binary: phdr 2 at 0xffffffff81ad8000 -> 0xffffffff81aec240
> (XEN) elf_load_binary: phdr 3 at 0xffffffff81aed000 -> 0xffffffff81bd9000
> (XEN) Scrubbing Free RAM: ......................................................
> .....................done.
> (XEN) Initial low memory virq threshold set at 0x4000 pages.
> (XEN) Std. Loglevel: All
> (XEN) Guest Loglevel: All
> (XEN) Xen is relinquishing VGA console.
> (XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen)
> (XEN) Freed 260kB init memory.
> mapping kernel into physical memory
> Xen: setup ISA identity maps
> about to get started...
> (XEN) Domain 0 crashed: rebooting machine in 5 seconds.
> (XEN) Resetting with ACPI MEMORY or I/O RESET_REG.
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

UTS CRICOS Provider Code: 00099F
DISCLAIMER: This email message and any accompanying attachments may contain confidential information.
If you are not the intended recipient, do not read, use, disseminate, distribute or copy this message or
attachments. If you have received this message in error, please notify the sender immediately and delete
this message. Any views expressed in this message are those of the individual sender, except where the
sender expressly, and with authority, states them to be the views of the University of Technology Sydney.
Before opening any attachments, please check them for viruses and defects.

Think. Green. Do.

Please consider the environment before printing this email.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 30 08:00:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Aug 2012 08:00:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6zfv-0004X3-R0; Thu, 30 Aug 2012 08:00:27 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Libing.Chen@uts.edu.au>) id 1T6y3h-00037x-Jp
	for xen-devel@lists.xen.org; Thu, 30 Aug 2012 06:16:53 +0000
Received: from [85.158.143.99:39243] by server-2.bemta-4.messagelabs.com id
	85/0A-21239-4550F305; Thu, 30 Aug 2012 06:16:52 +0000
X-Env-Sender: Libing.Chen@uts.edu.au
X-Msg-Ref: server-15.tower-216.messagelabs.com!1346307410!27514510!1
X-Originating-IP: [138.25.1.132]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM4LjI1LjEuMTMyID0+IDg0Njgy\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24918 invoked from network); 30 Aug 2012 06:16:51 -0000
Received: from utsppagent.uts.edu.au (HELO utspp.itd.uts.edu.au) (138.25.1.132)
	by server-15.tower-216.messagelabs.com with SMTP;
	30 Aug 2012 06:16:51 -0000
Received: from fry.itd.uts.edu.au (fry.itd.uts.edu.au [138.25.243.71])
	by utsppagent.uts.edu.au (8.14.4/8.14.4) with ESMTP id q7U6GmZ6013460
	for <xen-devel@lists.xen.org>; Thu, 30 Aug 2012 16:16:49 +1000
MIME-version: 1.0
Received: from postoffice.adsroot.uts.edu.au
	(hts1.adsroot.uts.edu.au [138.25.27.103])
	by postoffice.uts.edu.au (Sun Java(tm) System Messaging Server 6.3-7.01
	(built May 14 2008;
	32bit)) with ESMTPS id <0M9K00H4M1G0BZ60@postoffice.uts.edu.au>
	for xen-devel@lists.xen.org; Thu, 30 Aug 2012 16:16:49 +1000 (EST)
Received: from eng046150.eng.uts.edu.au (138.25.27.5)
	by postoffice.adsroot.uts.edu.au (138.25.27.103) with Microsoft SMTP
	Server (TLS) id 8.2.254.0; Thu, 30 Aug 2012 16:16:47 +1000
Message-id: <503F053D.5010001@uts.edu.au>
Date: Thu, 30 Aug 2012 16:16:29 +1000
From: bing <Libing.Chen@uts.edu.au>
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:14.0) Gecko/20120717
	Thunderbird/14.0
To: xen-devel@lists.xen.org
References: <76048170-772b-4365-87d2-00a3bf4aa5f8@default>
In-reply-to: <76048170-772b-4365-87d2-00a3bf4aa5f8@default>
x-proofpoint-disclaimer: uts
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10432:5.7.7855, 1.0.431,
	0.0.0000
	definitions=2012-08-30_02:2012-08-30, 2012-08-30,
	1970-01-01 signatures=0
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 spamscore=0
	ipscore=0 suspectscore=4
	phishscore=0 bulkscore=0 adultscore=0 classifier=spam adjust=0
	reason=mlx
	scancount=1 engine=6.0.2-1203120001 definitions=main-1208290407
X-Mailman-Approved-At: Thu, 30 Aug 2012 08:00:26 +0000
Subject: Re: [Xen-devel] Xen 4.2.0 won't boot with FC17 dom0 on Dell
 Optiplex 790, boots fine with 4.1.3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I have a similar crash on Xen 4.2.0-rc3 with an HP DC7900: it reboots
right after loading the Dom0 kernel. I found a workaround by passing the
xsave=0 option on the Xen command line. It seems to be CPU-related; the
same setup (same Xen and Dom0 kernel versions) works fine on another
Dell Precision 3200.


bing

On 08/30/2012 08:22 AM, Dan Magenheimer wrote:
> Anybody else see this: a FC17 dom0 which boots fine for 4.1.3
> crashes early in boot with Xen 4.2.0-rc4?
>
> The hardware is a new Dell Optiplex 790 which I've noticed has a couple
> of idiosyncrasies when booting recent upstream versions of Linux:
> 1) reboot=pci is required or reboot hangs
> 2) reducing visible memory via memmap= on a Linux command line is
>     fairly difficult (requires a very complex sequence of Linux boot
>     parameters, presumably due to a weird e820 RAM map in this hardware)
>
> These idiosyncrasies may or may not be related to the dom0 crash,
> just mentioning them in case they suggest an easy workaround.
>
> The stock FC17 Xen booted, then I built/booted xen-4.1-testing.hg
> and it was fine.  Then I built/booted 4.2.0-rc4 and see the dom0 crash.
> FC17 kernel/dom0 rev is 3.3.4-5.fc17.x86_64.  I can reboot the
> box to 4.1.3 still (but can't do anything due to at least a 4.1/4.2
> libxenlight.so version difference... will have to reinstall 4.1
> tools I guess).
>
> I think there have been some Xen memmap issues fixed in recent
> upstream Linux, so will try building/booting linux-3.5 while
> everyone flies home from Xen Summit ;-).  But this seems
> unpromising since Xen 4.1.3 works fine.
>
> Tombstone attached:
>
> [first, tail end of previous boot with dom0 crash, then full log with dom0 crash]
>
> (XEN) elf_load_binary: phdr 3 at 0xffffffff81aed000 -> 0xffffffff81bd9000
> (XEN) Scrubbing Free RAM: ......................................................
> .....................done.
> (XEN) Initial low memory virq threshold set at 0x4000 pages.
> (XEN) Std. Loglevel: All
> (XEN) Guest Loglevel: All
> (XEN) Xen is relinquishing VGA console.
> (XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen
> )
> (XEN) Freed 260kB init memory.
> mapping kernel into physical memory
> Xen: setup ISA identity maps
> about to get started...
> (XEN) Domain 0 crashed: rebooting machine in 5 seconds.
> (XEN) Resetting with ACPI MEMORY or I/O RESET_REG.
>   __  __            _  _    ____    ___              _  _
>   \ \/ /___ _ __   | || |  |___ \  / _ \    _ __ ___| || |     _ __  _ __ ___
>    \  // _ \ '_ \  | || |_   __) || | | |__| '__/ __| || |_ __| '_ \| '__/ _ \
>    /  \  __/ | | | |__   _| / __/ | |_| |__| | | (__|__   _|__| |_) | | |  __/
>   /_/\_\___|_| |_|    |_|(_)_____(_)___/   |_|  \___|  |_|    | .__/|_|  \___|
>                                                               |_|
> (XEN) Xen version 4.2.0-rc4-pre (root@vpn.oracle.com) (gcc (GCC) 4.7.0 20120507
> (Red Hat 4.7.0-5)) Wed Aug 29 15:27:44 MDT 2012
> (XEN) Latest ChangeSet: Tue Aug 28 22:40:45 2012 +0100 25786:a0b5f8102a00
> (XEN) Bootloader: GRUB 2.00~beta4
> (XEN) Command line: placeholder com1=115200,8n1 dom0_mem=512M console=vga,com1 loglvl=all guest_loglvl=all cpuidle=off
> (XEN) Video information:
> (XEN)  VGA is text mode 80x25, font 8x16
> (XEN)  VBE/DDC methods: V2; EDID transfer time: 1 seconds
> (XEN) Disc information:
> (XEN)  Found 1 MBR signatures
> (XEN)  Found 1 EDD information structures
> (XEN) Xen-e820 RAM map:
> (XEN)  0000000000000000 - 000000000009ac00 (usable)
> (XEN)  000000000009ac00 - 00000000000a0000 (reserved)
> (XEN)  00000000000e0000 - 0000000000100000 (reserved)
> (XEN)  0000000000100000 - 0000000020000000 (usable)
> (XEN)  0000000020000000 - 0000000020200000 (reserved)
> (XEN)  0000000020200000 - 0000000040000000 (usable)
> (XEN)  0000000040000000 - 0000000040200000 (reserved)
> (XEN)  0000000040200000 - 00000000ca9f7000 (usable)
> (XEN)  00000000ca9f7000 - 00000000caa3b000 (reserved)
> (XEN)  00000000caa3b000 - 00000000cadb7000 (usable)
> (XEN)  00000000cadb7000 - 00000000cade7000 (reserved)
> (XEN)  00000000cade7000 - 00000000cafe7000 (ACPI NVS)
> (XEN)  00000000cafe7000 - 00000000cafff000 (ACPI data)
> (XEN)  00000000cafff000 - 00000000cb000000 (usable)
> (XEN)  00000000cb800000 - 00000000cfa00000 (reserved)
> (XEN)  00000000fed1c000 - 00000000fed20000 (reserved)
> (XEN)  00000000ffc00000 - 00000000ffc20000 (reserved)
> (XEN)  0000000100000000 - 000000022e000000 (usable)
> (XEN) ACPI: RSDP 000FE300, 0024 (r2 DELL  )
> (XEN) ACPI: XSDT CAFFDE18, 007C (r1 DELL    CBX3     6222004 MSFT    10013)
> (XEN) ACPI: FACP CAF87D98, 00F4 (r4 DELL    CBX3     6222004 MSFT    10013)
> (XEN) ACPI: DSDT CAF71018, 7341 (r2 INT430 SYSFexxx     1001 INTL 20090903)
> (XEN) ACPI: FACS CAFE4D40, 0040
> (XEN) ACPI: APIC CAFFCF18, 00CC (r2 DELL    CBX3     6222004 MSFT    10013)
> (XEN) ACPI: TCPA CAFE5D18, 0032 (r2                        0             0)
> (XEN) ACPI: SSDT CAF88A98, 02F9 (r1 DELLTP      TPM     3000 INTL 20090903)
> (XEN) ACPI: MCFG CAFE5C98, 003C (r1 DELL   SNDYBRDG  6222004 MSFT       97)
> (XEN) ACPI: HPET CAFE5C18, 0038 (r1 A M I   PCHHPET  6222004 AMI.        3)
> (XEN) ACPI: BOOT CAFE5B98, 0028 (r1 DELL   CBX3      6222004 AMI     10013)
> (XEN) ACPI: SSDT CAF7E818, 07C2 (r1  PmRef  Cpu0Ist     3000 INTL 20090903)
> (XEN) ACPI: SSDT CAF7D018, 0996 (r1  PmRef    CpuPm     3000 INTL 20090903)
> (XEN) ACPI: DMAR CAF87C18, 00E8 (r1 INTEL      SNB         1 INTL        1)
> (XEN) ACPI: SLIC CAF86C18, 0176 (r3 DELL    CBX3     6222004 MSFT    10013)
> (XEN) System RAM: 8073MB (8266808kB)
> (XEN) No NUMA configuration found
> (XEN) Faking a node at 0000000000000000-000000022e000000
> (XEN) Domain heap initialised
> (XEN) found SMP MP-table at 000f1fb0
> (XEN) DMI 2.6 present.
> (XEN) Using APIC driver default
> (XEN) ACPI: PM-Timer IO Port: 0x408
> (XEN) ACPI: ACPI SLEEP INFO: pm1x_cnt[404,0], pm1x_evt[400,0]
> (XEN) ACPI: 32/64X FACS address mismatch in FADT - cafe4e40/00000000cafe4d40, using 32
> (XEN) ACPI:                  wakeup_vec[cafe4e4c], vec_size[20]
> (XEN) ACPI: Local APIC address 0xfee00000
> (XEN) ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled)
> (XEN) Processor #0 6:10 APIC version 21
> (XEN) ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
> (XEN) Processor #2 6:10 APIC version 21
> (XEN) ACPI: LAPIC (acpi_id[0x03] lapic_id[0x04] enabled)
> (XEN) Processor #4 6:10 APIC version 21
> (XEN) ACPI: LAPIC (acpi_id[0x04] lapic_id[0x06] enabled)
> (XEN) Processor #6 6:10 APIC version 21
> (XEN) ACPI: LAPIC (acpi_id[0x05] lapic_id[0x04] disabled)
> (XEN) ACPI: LAPIC (acpi_id[0x06] lapic_id[0x05] disabled)
> (XEN) ACPI: LAPIC (acpi_id[0x07] lapic_id[0x06] disabled)
> (XEN) ACPI: LAPIC (acpi_id[0x08] lapic_id[0x07] disabled)
> (XEN) ACPI: LAPIC (acpi_id[0x09] lapic_id[0x08] disabled)
> (XEN) ACPI: LAPIC (acpi_id[0x0a] lapic_id[0x09] disabled)
> (XEN) ACPI: LAPIC (acpi_id[0x0b] lapic_id[0x0a] disabled)
> (XEN) ACPI: LAPIC (acpi_id[0x0c] lapic_id[0x0b] disabled)
> (XEN) ACPI: LAPIC (acpi_id[0x0d] lapic_id[0x0c] disabled)
> (XEN) ACPI: LAPIC (acpi_id[0x0e] lapic_id[0x0d] disabled)
> (XEN) ACPI: LAPIC (acpi_id[0x0f] lapic_id[0x0e] disabled)
> (XEN) ACPI: LAPIC (acpi_id[0x10] lapic_id[0x0f] disabled)
> (XEN) ACPI: IOAPIC (id[0x02] address[0xfec00000] gsi_base[0])
> (XEN) IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-23
> (XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
> (XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
> (XEN) ACPI: IRQ0 used by override.
> (XEN) ACPI: IRQ2 used by override.
> (XEN) ACPI: IRQ9 used by override.
> (XEN) Enabling APIC mode:  Flat.  Using 1 I/O APICs
> (XEN) ACPI: HPET id: 0x8086a701 base: 0xfed00000
> (XEN) Table is not found!
> (XEN) Using ACPI (MADT) for SMP configuration information
> (XEN) SMP: Allowing 16 CPUs (12 hotplug CPUs)
> (XEN) IRQ limits: 24 GSI, 760 MSI/MSI-X
> (XEN) Switched to APIC driver x2apic_cluster.
> (XEN) Using scheduler: SMP Credit Scheduler (credit)
> (XEN) Detected 3093.024 MHz processor.
> (XEN) Initing memory sharing.
> (XEN) xstate_init: using cntxt_size: 0x340 and states: 0x7
> (XEN) mce_intel.c:1239: MCA Capability: BCAST 1 SER 0 CMCI 1 firstbank 0 extended MCE MSR 0
> (XEN) Intel machine check reporting enabled
> (XEN) PCI: MCFG configuration 0: base f8000000 segment 0000 buses 00 - 3f
> (XEN) PCI: Not using MCFG for segment 0000 bus 00-3f
> (XEN) Intel VT-d supported page sizes: 4kB.
> (XEN) Intel VT-d supported page sizes: 4kB.
> (XEN) Intel VT-d Snoop Control not enabled.
> (XEN) Intel VT-d Dom0 DMA Passthrough not enabled.
> (XEN) Intel VT-d Queued Invalidation enabled.
> (XEN) Intel VT-d Interrupt Remapping enabled.
> (XEN) Intel VT-d Shared EPT tables not enabled.
> (XEN) I/O virtualisation enabled
> (XEN)  - Dom0 mode: Relaxed
> (XEN) Enabled directed EOI with ioapic_ack_old on!
> (XEN) ENABLING IO-APIC IRQs
> (XEN)  -> Using old ACK method
> (XEN) ..TIMER: vector=0xF0 apic1=0 pin1=2 apic2=-1 pin2=-1
> (XEN) TSC deadline timer enabled
> (XEN) Platform timer is 14.318MHz HPET
> (XEN) Allocated console ring of 32 KiB.
> (XEN) VMX: Supported advanced features:
> (XEN)  - APIC MMIO access virtualisation
> (XEN)  - APIC TPR shadow
> (XEN)  - Extended Page Tables (EPT)
> (XEN)  - Virtual-Processor Identifiers (VPID)
> (XEN)  - Virtual NMI
> (XEN)  - MSR direct-access bitmap
> (XEN)  - Unrestricted Guest
> (XEN) HVM: ASIDs enabled.
> (XEN) HVM: VMX enabled
> (XEN) HVM: Hardware Assisted Paging (HAP) detected
> (XEN) HVM: HAP page sizes: 4kB, 2MB
> (XEN) Brought up 4 CPUs
> (XEN) ACPI sleep modes: S3
> (XEN) mcheck_poll: Machine check polling timer started.
> (XEN) mtrr: your CPUs had inconsistent variable MTRR settings
> (XEN) mtrr: probably your BIOS does not setup all CPUs.
> (XEN) mtrr: corrected configuration.
> (XEN) *** LOADING DOMAIN 0 ***
> (XEN) elf_parse_binary: phdr: paddr=0x1000000 memsz=0x885000
> (XEN) elf_parse_binary: phdr: paddr=0x1a00000 memsz=0xd70e0
> (XEN) elf_parse_binary: phdr: paddr=0x1ad8000 memsz=0x14240
> (XEN) elf_parse_binary: phdr: paddr=0x1aed000 memsz=0x6bf000
> (XEN) elf_parse_binary: memory: 0x1000000 -> 0x21ac000
> (XEN) elf_xen_parse_note: GUEST_OS = "linux"
> (XEN) elf_xen_parse_note: GUEST_VERSION = "2.6"
> (XEN) elf_xen_parse_note: XEN_VERSION = "xen-3.0"
> (XEN) elf_xen_parse_note: VIRT_BASE = 0xffffffff80000000
> (XEN) elf_xen_parse_note: ENTRY = 0xffffffff81aed200
> (XEN) elf_xen_parse_note: HYPERCALL_PAGE = 0xffffffff81001000
> (XEN) elf_xen_parse_note: FEATURES = "!writable_page_tables|pae_pgdir_above_4gb"
> (XEN) elf_xen_parse_note: PAE_MODE = "yes"
> (XEN) elf_xen_parse_note: LOADER = "generic"
> (XEN) elf_xen_parse_note: unknown xen elf note (0xd)
> (XEN) elf_xen_parse_note: SUSPEND_CANCEL = 0x1
> (XEN) elf_xen_parse_note: HV_START_LOW = 0xffff800000000000
> (XEN) elf_xen_parse_note: PADDR_OFFSET = 0x0
> (XEN) elf_xen_addr_calc_check: addresses:
> (XEN)     virt_base        = 0xffffffff80000000
> (XEN)     elf_paddr_offset = 0x0
> (XEN)     virt_offset      = 0xffffffff80000000
> (XEN)     virt_kstart      = 0xffffffff81000000
> (XEN)     virt_kend        = 0xffffffff821ac000
> (XEN)     virt_entry       = 0xffffffff81aed200
> (XEN)     p2m_base         = 0xffffffffffffffff
> (XEN)  Xen  kernel: 64-bit, lsb, compat32
> (XEN)  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x21ac000
> (XEN) PHYSICAL MEMORY ARRANGEMENT:
> (XEN)  Dom0 alloc.:   0000000220000000->0000000224000000 (103435 pages to be allocated)
> (XEN)  Init. ramdisk: 000000022b40b000->000000022dfffa00
> (XEN) VIRTUAL MEMORY ARRANGEMENT:
> (XEN)  Loaded kernel: ffffffff81000000->ffffffff821ac000
> (XEN)  Init. ramdisk: ffffffff821ac000->ffffffff84da0a00
> (XEN)  Phys-Mach map: ffffffff84da1000->ffffffff84ea1000
> (XEN)  Start info:    ffffffff84ea1000->ffffffff84ea14b4
> (XEN)  Page tables:   ffffffff84ea2000->ffffffff84ecd000
> (XEN)  Boot stack:    ffffffff84ecd000->ffffffff84ece000
> (XEN)  TOTAL:         ffffffff80000000->ffffffff85000000
> (XEN)  ENTRY ADDRESS: ffffffff81aed200
> (XEN) Dom0 has maximum 4 VCPUs
> (XEN) elf_load_binary: phdr 0 at 0xffffffff81000000 -> 0xffffffff81885000
> (XEN) elf_load_binary: phdr 1 at 0xffffffff81a00000 -> 0xffffffff81ad70e0
> (XEN) elf_load_binary: phdr 2 at 0xffffffff81ad8000 -> 0xffffffff81aec240
> (XEN) elf_load_binary: phdr 3 at 0xffffffff81aed000 -> 0xffffffff81bd9000
> (XEN) Scrubbing Free RAM: ......................................................
> .....................done.
> (XEN) Initial low memory virq threshold set at 0x4000 pages.
> (XEN) Std. Loglevel: All
> (XEN) Guest Loglevel: All
> (XEN) Xen is relinquishing VGA console.
> (XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen)
> (XEN) Freed 260kB init memory.
> mapping kernel into physical memory
> Xen: setup ISA identity maps
> about to get started...
> (XEN) Domain 0 crashed: rebooting machine in 5 seconds.
> (XEN) Resetting with ACPI MEMORY or I/O RESET_REG.
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 30 08:07:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Aug 2012 08:07:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6zma-0004hK-N8; Thu, 30 Aug 2012 08:07:20 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T6zmZ-0004hF-OO
	for xen-devel@lists.xen.org; Thu, 30 Aug 2012 08:07:19 +0000
Received: from [85.158.143.35:46670] by server-2.bemta-4.messagelabs.com id
	49/71-21239-73F1F305; Thu, 30 Aug 2012 08:07:19 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1346314033!11450708!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA5Mzg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9911 invoked from network); 30 Aug 2012 08:07:13 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Aug 2012 08:07:13 -0000
X-IronPort-AV: E=Sophos;i="4.80,338,1344211200"; d="scan'208";a="14263146"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	30 Aug 2012 08:07:13 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1;
	Thu, 30 Aug 2012 09:07:13 +0100
Message-ID: <1346314031.27277.20.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "Palagummi, Siva" <Siva.Palagummi@ca.com>
Date: Thu, 30 Aug 2012 09:07:11 +0100
In-Reply-To: <7D7C26B1462EB14CB0E7246697A18C1312C3D2@INHYMS111A.ca.com>
References: <7D7C26B1462EB14CB0E7246697A18C1312C3D2@INHYMS111A.ca.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH RFC V2] xen/netback: Count ring slots
 properly when larger MTU sizes are used
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-08-29 at 13:21 +0100, Palagummi, Siva wrote:
> This patch contains the modifications that are discussed in thread
> http://lists.xen.org/archives/html/xen-devel/2012-08/msg01730.html

Thanks.

Please can you find a way to include your patches inline rather than as
attachments; it makes replying for review much easier.
Documentation/email-clients.txt has some hints for various clients, or
you can just use the "git send-email" command (perhaps with "git
format-patch").

You should also CC the netdev@vger.kernel.org list.

> Instead of using max_required_rx_slots, I used the count that we
> already have in hand to verify whether we have enough room in the batch
> queue for the next skb. Please let me know if that is not appropriate.
> Things worked fine in my environment. Under heavy load we now seem to
> be consuming most of the slots in the queue, with no BUG_ON :-)


> From: Siva Palagummi <Siva.Palagummi@ca.com>
> 
> The count variable in xen_netbk_rx_action needs to be incremented
> correctly to take into account the extra slots required when skb_headlen is
> greater than PAGE_SIZE, as happens when larger MTU values are used. Without
> this change, BUG_ON(npo.meta_prod > ARRAY_SIZE(netbk->meta)) causes the
> netback thread to exit.
> 
> The fix is to stash the counting already done in xen_netbk_count_skb_slots
> in skb_cb_overlay and use it directly in xen_netbk_rx_action.
> 
> Also improved the checking for filling the batch queue. 
> 
> Also merged a change from a patch created for xen_netbk_count_skb_slots 
> function as per thread 
> http://lists.xen.org/archives/html/xen-devel/2012-05/msg01864.html
> 
> The problem is seen with the Linux 3.2.2 kernel on an Intel 10Gbps network.
> 
> 
> Signed-off-by: Siva Palagummi <Siva.Palagummi@ca.com>
> ---
> diff -uprN a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
> --- a/drivers/net/xen-netback/netback.c	2012-01-25 19:39:32.000000000 -0500
> +++ b/drivers/net/xen-netback/netback.c	2012-08-28 17:31:22.000000000 -0400
> @@ -80,6 +80,11 @@ union page_ext {
>  	void *mapping;
>  };
>  
> +struct skb_cb_overlay {
> +	int meta_slots_used;
> +	int count;
> +};
> +
>  struct xen_netbk {
>  	wait_queue_head_t wq;
>  	struct task_struct *task;
> @@ -324,9 +329,9 @@ unsigned int xen_netbk_count_skb_slots(s
>  {
>  	unsigned int count;
>  	int i, copy_off;
> +	struct skb_cb_overlay *sco;
>  
> -	count = DIV_ROUND_UP(
> -			offset_in_page(skb->data)+skb_headlen(skb), PAGE_SIZE);
> +	count = DIV_ROUND_UP(skb_headlen(skb), PAGE_SIZE);

This hunk appears to be upstream already (see
e26b203ede31fffd52571a5ba607a26c79dc5c0d). Which tree are you working
against? You should either base patches on Linus' branch or on the
net-next branch.

Other than this the patch looks good, thanks.

>  
>  	copy_off = skb_headlen(skb) % PAGE_SIZE;
>  
> @@ -352,6 +357,8 @@ unsigned int xen_netbk_count_skb_slots(s
>  			size -= bytes;
>  		}
>  	}
> +	sco = (struct skb_cb_overlay *)skb->cb;
> +	sco->count = count;
>  	return count;
>  }
>  
> @@ -586,9 +593,6 @@ static void netbk_add_frag_responses(str
>  	}
>  }
>  
> -struct skb_cb_overlay {
> -	int meta_slots_used;
> -};
>  
>  static void xen_netbk_rx_action(struct xen_netbk *netbk)
>  {
> @@ -621,12 +625,16 @@ static void xen_netbk_rx_action(struct x
>  		sco = (struct skb_cb_overlay *)skb->cb;
>  		sco->meta_slots_used = netbk_gop_skb(skb, &npo);
>  
> -		count += nr_frags + 1;
> +		count += sco->count;
>  
>  		__skb_queue_tail(&rxq, skb);
>  
> +		skb = skb_peek(&netbk->rx_queue);
> +		if (skb == NULL)
> +			break;
> +		sco = (struct skb_cb_overlay *)skb->cb;
>  		/* Filled the batch queue? */
> -		if (count + MAX_SKB_FRAGS >= XEN_NETIF_RX_RING_SIZE)
> +		if (count + sco->count >= XEN_NETIF_RX_RING_SIZE)
>  			break;
>  	}
>  



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 30 08:07:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Aug 2012 08:07:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T6zma-0004hK-N8; Thu, 30 Aug 2012 08:07:20 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T6zmZ-0004hF-OO
	for xen-devel@lists.xen.org; Thu, 30 Aug 2012 08:07:19 +0000
Received: from [85.158.143.35:46670] by server-2.bemta-4.messagelabs.com id
	49/71-21239-73F1F305; Thu, 30 Aug 2012 08:07:19 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1346314033!11450708!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTA5Mzg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9911 invoked from network); 30 Aug 2012 08:07:13 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Aug 2012 08:07:13 -0000
X-IronPort-AV: E=Sophos;i="4.80,338,1344211200"; d="scan'208";a="14263146"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	30 Aug 2012 08:07:13 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1;
	Thu, 30 Aug 2012 09:07:13 +0100
Message-ID: <1346314031.27277.20.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "Palagummi, Siva" <Siva.Palagummi@ca.com>
Date: Thu, 30 Aug 2012 09:07:11 +0100
In-Reply-To: <7D7C26B1462EB14CB0E7246697A18C1312C3D2@INHYMS111A.ca.com>
References: <7D7C26B1462EB14CB0E7246697A18C1312C3D2@INHYMS111A.ca.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH RFC V2] xen/netback: Count ring slots
 properly when larger MTU sizes are used
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-08-29 at 13:21 +0100, Palagummi, Siva wrote:
> This patch contains the modifications that are discussed in thread
> http://lists.xen.org/archives/html/xen-devel/2012-08/msg01730.html

Thanks.

Please can you find a way to include your patches inline rather than as
attachments; it makes replying for review much easier.
Documentation/email-clients.txt has some hints for various clients, or
you can just use the "git send-email" command (perhaps with "git
format-patch").

You should also CC the netdev@vger.kernel.org list.

> Instead of using max_required_rx_slots, I used the count that we
> already have in hand to verify if we have enough room in the batch
> queue for next skb. Please let me know if that is not appropriate.
> Things worked fine in my environment. Under heavy load we now seem to
> be consuming most of the slots in the queue with no BUG_ON :-)


> From: Siva Palagummi <Siva.Palagummi@ca.com>
> 
> The count variable in xen_netbk_rx_action needs to be incremented
> correctly to take into account the extra slots required when skb_headlen is
> greater than PAGE_SIZE, as happens when larger MTU values are used. Without
> this change, BUG_ON(npo.meta_prod > ARRAY_SIZE(netbk->meta)) causes the
> netback thread to exit.
> 
> The fix is to stash the counting already done in xen_netbk_count_skb_slots
> in skb_cb_overlay and use it directly in xen_netbk_rx_action.
> 
> Also improved the checking for filling the batch queue. 
> 
> Also merged a change from a patch created for xen_netbk_count_skb_slots 
> function as per thread 
> http://lists.xen.org/archives/html/xen-devel/2012-05/msg01864.html
> 
> The problem is seen with the Linux 3.2.2 kernel on an Intel 10Gbps network.
> 
> 
> Signed-off-by: Siva Palagummi <Siva.Palagummi@ca.com>
> ---
> diff -uprN a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
> --- a/drivers/net/xen-netback/netback.c	2012-01-25 19:39:32.000000000 -0500
> +++ b/drivers/net/xen-netback/netback.c	2012-08-28 17:31:22.000000000 -0400
> @@ -80,6 +80,11 @@ union page_ext {
>  	void *mapping;
>  };
>  
> +struct skb_cb_overlay {
> +	int meta_slots_used;
> +	int count;
> +};
> +
>  struct xen_netbk {
>  	wait_queue_head_t wq;
>  	struct task_struct *task;
> @@ -324,9 +329,9 @@ unsigned int xen_netbk_count_skb_slots(s
>  {
>  	unsigned int count;
>  	int i, copy_off;
> +	struct skb_cb_overlay *sco;
>  
> -	count = DIV_ROUND_UP(
> -			offset_in_page(skb->data)+skb_headlen(skb), PAGE_SIZE);
> +	count = DIV_ROUND_UP(skb_headlen(skb), PAGE_SIZE);

This hunk appears to be upstream already (see
e26b203ede31fffd52571a5ba607a26c79dc5c0d). Which tree are you working
against? You should either base patches on Linus' branch or on the
net-next branch.

Other than this the patch looks good, thanks.

>  
>  	copy_off = skb_headlen(skb) % PAGE_SIZE;
>  
> @@ -352,6 +357,8 @@ unsigned int xen_netbk_count_skb_slots(s
>  			size -= bytes;
>  		}
>  	}
> +	sco = (struct skb_cb_overlay *)skb->cb;
> +	sco->count = count;
>  	return count;
>  }
>  
> @@ -586,9 +593,6 @@ static void netbk_add_frag_responses(str
>  	}
>  }
>  
> -struct skb_cb_overlay {
> -	int meta_slots_used;
> -};
>  
>  static void xen_netbk_rx_action(struct xen_netbk *netbk)
>  {
> @@ -621,12 +625,16 @@ static void xen_netbk_rx_action(struct x
>  		sco = (struct skb_cb_overlay *)skb->cb;
>  		sco->meta_slots_used = netbk_gop_skb(skb, &npo);
>  
> -		count += nr_frags + 1;
> +		count += sco->count;
>  
>  		__skb_queue_tail(&rxq, skb);
>  
> +		skb = skb_peek(&netbk->rx_queue);
> +		if (skb == NULL)
> +			break;
> +		sco = (struct skb_cb_overlay *)skb->cb;
>  		/* Filled the batch queue? */
> -		if (count + MAX_SKB_FRAGS >= XEN_NETIF_RX_RING_SIZE)
> +		if (count + sco->count >= XEN_NETIF_RX_RING_SIZE)
>  			break;
>  	}
>  
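For reference, the slot arithmetic behind the DIV_ROUND_UP(skb_headlen(skb),
PAGE_SIZE) counting discussed above can be sketched in stand-alone C. This is
an illustrative sketch, not kernel code: PAGE_SIZE and DIV_ROUND_UP are
re-defined locally, and head_slots() is a hypothetical stand-in for the
counting done on skb_headlen(skb).

```c
#include <assert.h>

#define PAGE_SIZE 4096U
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* Number of RX ring slots needed for the skb's linear (header) area.
 * With a large MTU, skb_headlen() can exceed PAGE_SIZE, so a single
 * skb may need more than the one slot the old "nr_frags + 1" count
 * assumed for the head. */
static unsigned int head_slots(unsigned int headlen)
{
	return DIV_ROUND_UP(headlen, PAGE_SIZE);
}
```

For example, a standard 1500-byte headlen fits in one slot, while a
9000-byte jumbo-frame headlen spans three pages and so needs three slots,
which is why the count has to be carried through to xen_netbk_rx_action.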



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 30 09:04:34 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Aug 2012 09:04:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T70fW-00053E-Hd; Thu, 30 Aug 2012 09:04:06 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1T70fU-000534-Jr
	for xen-devel@lists.xen.org; Thu, 30 Aug 2012 09:04:04 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-13.tower-27.messagelabs.com!1346317400!8635642!1
X-Originating-IP: [81.29.64.94]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32675 invoked from network); 30 Aug 2012 09:03:22 -0000
Received: from ocelot.phlegethon.org (HELO mail.phlegethon.org) (81.29.64.94)
	by server-13.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 30 Aug 2012 09:03:22 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.67 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1T70ee-000EoQ-UV; Thu, 30 Aug 2012 09:03:12 +0000
Date: Thu, 30 Aug 2012 10:03:12 +0100
From: Tim Deegan <tim@xen.org>
To: "zhenzhong.duan" <zhenzhong.duan@oracle.com>
Message-ID: <20120830090312.GA54290@ocelot.phlegethon.org>
References: <5020F003020000780009322C@nat28.tlf.novell.com>
	<502235E8.9040309@oracle.com>
	<50229B840200007800093A73@nat28.tlf.novell.com>
	<5023860E.7080908@oracle.com>
	<5023AE960200007800093DE8@nat28.tlf.novell.com>
	<502490A7.7020603@oracle.com>
	<502535280200007800094322@nat28.tlf.novell.com>
	<5028B3AB.7060705@oracle.com>
	<5028E53202000078000946B1@nat28.tlf.novell.com>
	<503DAA5F.5030306@oracle.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <503DAA5F.5030306@oracle.com>
User-Agent: Mutt/1.4.2.1i
Cc: Satish Kantheti <satish.kantheti@oracle.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Feng Jin <joe.jin@oracle.com>, xen-devel <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] kernel bootup slow issue on ovm3.1.1
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 13:36 +0800 on 29 Aug (1346247391), zhenzhong.duan wrote:
> 
> 
> On 2012-08-13 17:29, Jan Beulich wrote:
> >>>>On 13.08.12 at 09:58, "zhenzhong.duan"<zhenzhong.duan@oracle.com>  
> >>>>wrote:
> >>On 2012-08-10 22:22, Jan Beulich wrote:
> >>>Going back to your original mail, I wonder however why this
> >>>gets done at all. You said it got there via
> >>>
> >>>mtrr_aps_init()
> >>>   \->   set_mtrr()
> >>>       \->   mtrr_work_handler()
> >>>
> >>>yet this isn't done unconditionally - see the comment before
> >>>checking mtrr_aps_delayed_init. Can you find out where the
> >>>obviously necessary call(s) to set_mtrr_aps_delayed_init()
> >>>come(s) from?
> >>At bootup stage, set_mtrr_aps_delayed_init is called by
> >>native_smp_prepare_cpus.
> >>mtrr_aps_delayed_init is always set to true for Intel processors in
> >>upstream code.
> >Indeed, and that (in one form or another) has been done
> >virtually forever in Linux. I wonder why the problem wasn't
> >noticed (or looked into, if it was noticed) so far.
> >
> >As it's going to be rather difficult to convince the Linux folks
> >to change their code (plus this wouldn't help with existing
> >kernels anyway), we'll need to find a way to improve this in
> >the hypervisor.
> Hi Jan, Tim
> Can this issue be improved from the Xen side?

Probably; we're looking into the best way to address it. 

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 30 09:18:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Aug 2012 09:18:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T70sz-0005Dn-Ti; Thu, 30 Aug 2012 09:18:01 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <poshakmaheshwari@gmail.com>) id 1T70sz-0005Di-6i
	for xen-devel@lists.xen.org; Thu, 30 Aug 2012 09:18:01 +0000
Received: from [85.158.143.99:28113] by server-3.bemta-4.messagelabs.com id
	3A/BB-08232-8CF2F305; Thu, 30 Aug 2012 09:18:00 +0000
X-Env-Sender: poshakmaheshwari@gmail.com
X-Msg-Ref: server-3.tower-216.messagelabs.com!1346318279!27005444!1
X-Originating-IP: [209.85.215.45]
X-SpamReason: No, hits=0.9 required=7.0 tests=HTML_40_50,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25778 invoked from network); 30 Aug 2012 09:18:00 -0000
Received: from mail-lpp01m010-f45.google.com (HELO
	mail-lpp01m010-f45.google.com) (209.85.215.45)
	by server-3.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Aug 2012 09:18:00 -0000
Received: by lagz14 with SMTP id z14so1700018lag.32
	for <xen-devel@lists.xen.org>; Thu, 30 Aug 2012 02:17:59 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:from:date:message-id:subject:to:content-type;
	bh=wD2k6Vo0MN01kIc1UHoMt9VOWIkXpxm6w83wenG8r1Y=;
	b=yWjz07anQNZtw3xmSB5AFMxn0MLaLQZn79sS7TCVDENfnmdeVW0qsCzk0RlhKGF5xE
	30AnJjuz9THnRLMC5IFPN6QEUSMklN+jeWcHLMcyTK4L2JsHMLGHpdRi1HENW3vUfkzs
	wHouxsvXNJYV2HDVm+Ne5plp4VxEOhmCdzhHCCvDwPoT1KHysvhd09qeF/E32T3MkNC0
	F+Glt8VQzh7fXZxJv491GOJgNtq+1ht7A1R5wN//GJVmSJhk0FwmauHJVY2QGNau9zvl
	mM4NwdHdLXfx79PCq2pTY5JiP0eb8ebxNh+ZFUn76scyqt3+kz69a/uuh5ihd60EdXwi
	717A==
Received: by 10.152.148.1 with SMTP id to1mr2625134lab.34.1346318279306; Thu,
	30 Aug 2012 02:17:59 -0700 (PDT)
MIME-Version: 1.0
Received: by 10.112.29.200 with HTTP; Thu, 30 Aug 2012 02:17:38 -0700 (PDT)
From: poshak maheshwari <poshakmaheshwari@gmail.com>
Date: Thu, 30 Aug 2012 14:47:38 +0530
Message-ID: <CAKiSa-ecyumbOSH3+dz97Re9b4oGiZGHm7fm+M3vN28szW0EXA@mail.gmail.com>
To: xen-devel@lists.xen.org
Subject: [Xen-devel] collecting performance data from xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2646973965191970091=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2646973965191970091==
Content-Type: multipart/alternative; boundary=e89a8f22c34fcaed2a04c878262f

--e89a8f22c34fcaed2a04c878262f
Content-Type: text/plain; charset=ISO-8859-1

Hi,
I want to build an app to collect and display performance data from xen.
I did some internet research but could not find anything useful.
I am pretty new to Xen, so please tell me where I can start with
collecting performance data (CPU, disk, etc.) for Xen, its Domain0 and
its DomainUs.

thanks
*--*
*Regards*
* *

--e89a8f22c34fcaed2a04c878262f
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

Hi,<div>I want to build an app to collect and display performance data from=
 xen.</div><div>I did some internet research but could not find anything us=
eful.</div><div>I am pretty new to xen hence please tell me where can I sta=
rt from for collecting performance data (cpu,disk etc.) for xen ,its Domain=
0 and DomainUs=A0</div>

<div><br></div><div>thanks=A0</div><div><div><i><font face=3D"&#39;comic sa=
ns ms&#39;, sans-serif">--</font></i></div><div><i><font face=3D"&#39;comic=
 sans ms&#39;, sans-serif">Regards</font></i></div><div><i><font face=3D"&#=
39;comic sans ms&#39;, sans-serif">=A0</font></i></div>

<br>
</div>

--e89a8f22c34fcaed2a04c878262f--


--===============2646973965191970091==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2646973965191970091==--


From xen-devel-bounces@lists.xen.org Thu Aug 30 10:09:47 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Aug 2012 10:09:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T71ge-0005Ym-Gx; Thu, 30 Aug 2012 10:09:20 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1T71gc-0005Yh-6R
	for xen-devel@lists.xensource.com; Thu, 30 Aug 2012 10:09:18 +0000
Received: from [85.158.139.83:54221] by server-4.bemta-5.messagelabs.com id
	58/9C-23042-DCB3F305; Thu, 30 Aug 2012 10:09:17 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-11.tower-182.messagelabs.com!1346321355!20450846!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzUxOTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15267 invoked from network); 30 Aug 2012 10:09:16 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Aug 2012 10:09:16 -0000
X-IronPort-AV: E=Sophos;i="4.80,338,1344211200"; d="scan'208";a="206641610"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	30 Aug 2012 10:09:15 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPMAILMX01.citrite.net
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1; Thu, 30 Aug 2012
	06:09:14 -0400
Message-ID: <503F3BC9.6020100@citrix.com>
Date: Thu, 30 Aug 2012 11:09:13 +0100
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20120428 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1346246116-29999-1-git-send-email-david.vrabel@citrix.com>	
	<1346257402.20019.9.camel@zakaz.uk.xensource.com>	
	<503E49BE.7080704@citrix.com>
	<1346263520.6655.4.camel@dagon.hellion.org.uk>
In-Reply-To: <1346263520.6655.4.camel@dagon.hellion.org.uk>
Cc: Andres Lagar-Cavilla <andres@gridcentric.ca>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [PATCHv2 0/2] xen/privcmd: support for paged-out
 frames
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 29/08/12 19:05, Ian Campbell wrote:
> On Wed, 2012-08-29 at 17:56 +0100, David Vrabel wrote:
>> On 29/08/12 17:23, Ian Campbell wrote:
>>> On Wed, 2012-08-29 at 14:15 +0100, David Vrabel wrote:
>>>> This series is a straight forward-port of some functionality from
>>>> classic kernels to support Xen hosts that do paging of guests.
>>>>
>>>> This isn't functionality the XenServer makes use of so I've not tested
>>>> these with paging in use.
>>>>
>>>> Changes since v1:
>>>>
>>>> - Don't change PRIVCMD_MMAPBATCH (except to #define a constant for the
>>>>   error).  It's broken and not really fixable sensibly and libxc will
>>>>   use V2 if it is available.
>>>> - Return -ENOENT if all failures were -ENOENT.
>>>
>>> Is this behaviour a requirement from something?
>>
>> It's the behaviour libxc is expecting.  It doesn't retry unless errno ==
>> ENOENT.
> 
> Surely if that is the case you must return -ENOENT if *any* failure was
> -ENOENT? That seems to be what the linux-2.6.18-xen.hg implementation
> does.

Yes.

libxc will subsequently fail the whole map call if any frame errors
with something other than ENOENT, so if someone were to propose a V3 it
may be useful to return a different error code for other errors.

>>> Usually hypercalls of this type return a global error only if something
>>> went wrong with the general mechanics of the hypercall (e.g. faults
>>> reading arguments etc) and leave reporting of the individual failures of
>>> subops to the op specific field, even if all the subops fail in the same
>>> way.
>>
>> I didn't design this interface...
> 
> The interface you described doesn't make any sense...

I don't entirely agree.  There are three types of result: success,
retryable errors, and fatal errors.  It's not nonsensical to return
different error codes for these.

Regardless, it's not what the original implementation does, so I'll
rework the patch.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 30 10:09:47 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Aug 2012 10:09:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T71ge-0005Ym-Gx; Thu, 30 Aug 2012 10:09:20 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1T71gc-0005Yh-6R
	for xen-devel@lists.xensource.com; Thu, 30 Aug 2012 10:09:18 +0000
Received: from [85.158.139.83:54221] by server-4.bemta-5.messagelabs.com id
	58/9C-23042-DCB3F305; Thu, 30 Aug 2012 10:09:17 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-11.tower-182.messagelabs.com!1346321355!20450846!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzUxOTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15267 invoked from network); 30 Aug 2012 10:09:16 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Aug 2012 10:09:16 -0000
X-IronPort-AV: E=Sophos;i="4.80,338,1344211200"; d="scan'208";a="206641610"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	30 Aug 2012 10:09:15 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPMAILMX01.citrite.net
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1; Thu, 30 Aug 2012
	06:09:14 -0400
Message-ID: <503F3BC9.6020100@citrix.com>
Date: Thu, 30 Aug 2012 11:09:13 +0100
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20120428 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1346246116-29999-1-git-send-email-david.vrabel@citrix.com>	
	<1346257402.20019.9.camel@zakaz.uk.xensource.com>	
	<503E49BE.7080704@citrix.com>
	<1346263520.6655.4.camel@dagon.hellion.org.uk>
In-Reply-To: <1346263520.6655.4.camel@dagon.hellion.org.uk>
Cc: Andres Lagar-Cavilla <andres@gridcentric.ca>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [PATCHv2 0/2] xen/privcmd: support for paged-out
 frames
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 29/08/12 19:05, Ian Campbell wrote:
> On Wed, 2012-08-29 at 17:56 +0100, David Vrabel wrote:
>> On 29/08/12 17:23, Ian Campbell wrote:
>>> On Wed, 2012-08-29 at 14:15 +0100, David Vrabel wrote:
>>>> This series is a straight forward-port of some functionality from
>>>> classic kernels to support Xen hosts that do paging of guests.
>>>>
>>>> This isn't functionality the XenServer makes use of so I've not tested
>>>> these with paging in use.
>>>>
>>>> Changes since v1:
>>>>
>>>> - Don't change PRIVCMD_MMAPBATCH (except to #define a constant for the
>>>>   error).  It's broken and not really fixable sensibly and libxc will
>>>>   use V2 if it is available.
>>>> - Return -ENOENT if all failures were -ENOENT.
>>>
>>> Is this behaviour a requirement from something?
>>
>> It's the behaviour libxc is expecting.  It doesn't retry unless errno ==
>> ENOENT.
> 
> Surely if that is the case you must return -ENOENT if *any* failure was
> -ENOENT? That seems to be what the linux-2.6.18-xen.hg implementation
> does.

Yes.

libxc will subsequently fail the whole map call if any frame errors with
something other than ENOENT, so if someone were to propose a V3 it may be
useful to return a different error code for other errors.
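To make the retry semantics concrete, here is a minimal hypothetical sketch
(this is illustration only, not the actual privcmd code; the function name
and the choice of -EINVAL for "other" errors are assumptions): the whole-call
error is -ENOENT if any per-frame sub-op failed with -ENOENT, since that is
the only errno on which libxc retries.

```c
#include <assert.h>
#include <errno.h>

/* Hypothetical aggregation of per-frame results into a single ioctl
 * return value.  -ENOENT (paged-out frame) is retryable and takes
 * priority; any other failure is reported with a generic error code. */
static int aggregate_errors(const int *errs, int n)
{
    int ret = 0;

    for (int i = 0; i < n; i++) {
        if (errs[i] == -ENOENT)
            return -ENOENT;  /* retryable: caller should try again */
        if (errs[i] < 0)
            ret = -EINVAL;   /* fatal: some frame failed for good */
    }
    return ret;
}
```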

>>> Usually hypercalls of this type return a global error only if something
>>> went wrong with the general mechanics of the hypercall (e.g. faults
>>> reading arguments etc) and leave reporting of the individual failures of
>>> subops to the op specific field, even if all the subops fail in the same
>>> way.
>>
>> I didn't design this interface...
> 
> The interface you described doesn't make any sense...

I don't entirely agree.  There are three types of result: success,
retryable errors, and fatal errors.  It's not nonsensical to return
different error codes for these.

Regardless, it's not what the original implementation does, so I'll rework
the patch.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 30 10:24:51 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Aug 2012 10:24:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T71vB-0005j7-Vn; Thu, 30 Aug 2012 10:24:21 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1T71vB-0005j2-9D
	for xen-devel@lists.xen.org; Thu, 30 Aug 2012 10:24:21 +0000
Received: from [85.158.143.35:26453] by server-1.bemta-4.messagelabs.com id
	53/34-12504-45F3F305; Thu, 30 Aug 2012 10:24:20 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1346322258!11478675!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjgwNTA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15442 invoked from network); 30 Aug 2012 10:24:19 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Aug 2012 10:24:19 -0000
X-IronPort-AV: E=Sophos;i="4.80,338,1344211200"; d="scan'208";a="36285234"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	30 Aug 2012 10:24:18 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Thu, 30 Aug 2012 06:24:17 -0400
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1T71v7-00031b-MW	for
	xen-devel@lists.xen.org; Thu, 30 Aug 2012 11:24:17 +0100
Message-ID: <503F3F51.9010501@citrix.com>
Date: Thu, 30 Aug 2012 11:24:17 +0100
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120714 Thunderbird/14.0
MIME-Version: 1.0
To: xen-devel@lists.xen.org
References: <CAFLBxZaGmvSYgL4DcHcSwE_0shqKC0DRbf7ab=uhFrerPCxRig@mail.gmail.com>
	<f9a8835e-95d8-4495-8c9d-4fa769913549@default>
In-Reply-To: <f9a8835e-95d8-4495-8c9d-4fa769913549@default>
X-Enigmail-Version: 1.4.4
Subject: Re: [Xen-devel] Xen 4.3 release planning proposal
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On 29/08/12 21:53, Dan Magenheimer wrote:
>> From: George Dunlap [mailto:George.Dunlap@eu.citrix.com]
>> Subject: [Xen-devel] Xen 4.3 release planning proposal
>>
>> Hello everyone!  With the completion of our first few release candidates
>> for 4.2, it's time to look forward and start planning for the 4.3
>> release.  I've volunteered to step up and help coordinate the release
>> for this cycle.
>>
>> The 4.2 release cycle this time has been nearly a year and a half.
>> One of the problems with having such a long release is that people who
>> get in features early have to wait a long time for that feature to be
>> in a published version; they then have to wait even longer for it to
>> be part of a released distribution.  Historically the cycle has been
>> around 9 months, but this has not been made explicit.  Many people
>> (including myself) think that the 9 month release cycle was a good
>> cadence that we should aim for.
>>
>> So I propose that we move to a time-based release schedule.  Rather
>> than aiming for a release date, I propose that we aim to do a "feature
>> freeze" six months after the 4.2 release -- that would be around March
>> 1, 2013.  That way we'll probably end up releasing in 9 months' time,
>> around June 2013.  This is one of the things we can discuss at the Dev
>> Meeting before the Xen Summit next week.  If you have other opinions,
>> please let us know.
> Hi George --
>
> (Sorry if I missed relevant discussion on this... I'm way behind
> on xen-devel.)
>
> Just a thought...
>
> Maybe it is time to move to match the well-known highly-greased
> Linux kernel release process?  This would include, for example, a short
> window for new functionality and a xen-next for pre-window shaking
> out and merging (of new functionality) and testing.  As has
> been pointed out, xen-unstable is, well, unstable for far too long.
>
> It may not be necessary to aggressively match Linus' 8-9 week release
> cycle or weekly rcN releases, but the core process is known to
> work very well, is reasonably well documented, and will be familiar
> to many in the open source community.
>
> Dan

And in line with this, having a more formal approach/system for
backporting bugfixes would be fantastic.

~Andrew

>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

-- 
Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
T: +44 (0)1223 225 900, http://www.citrix.com


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 30 10:26:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Aug 2012 10:26:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T71xI-0005o6-GK; Thu, 30 Aug 2012 10:26:32 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Siva.Palagummi@ca.com>) id 1T71xH-0005ny-69
	for xen-devel@lists.xen.org; Thu, 30 Aug 2012 10:26:31 +0000
Received: from [85.158.139.83:41149] by server-6.bemta-5.messagelabs.com id
	F1/AC-21336-6DF3F305; Thu, 30 Aug 2012 10:26:30 +0000
X-Env-Sender: Siva.Palagummi@ca.com
X-Msg-Ref: server-11.tower-182.messagelabs.com!1346322386!20455131!1
X-Originating-IP: [74.125.149.143]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9411 invoked from network); 30 Aug 2012 10:26:29 -0000
Received: from na3sys009aog130.obsmtp.com (HELO na3sys009aog130.obsmtp.com)
	(74.125.149.143)
	by server-11.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 30 Aug 2012 10:26:29 -0000
Received: from INHYMS190.ca.com ([155.35.46.47]) (using TLSv1) by
	na3sys009aob130.postini.com ([74.125.148.12]) with SMTP
	ID DSNKUD8/0lZbm2P2f5OSJY5UIfcLxKBZ5yXl@postini.com;
	Thu, 30 Aug 2012 03:26:28 PDT
Received: from INHYMS170.ca.com (155.35.35.44) by INHYMS190.ca.com
	(155.35.46.47) with Microsoft SMTP Server (TLS) id 14.1.355.2;
	Thu, 30 Aug 2012 15:56:23 +0530
Received: from INHYMS111A.ca.com ([169.254.3.18]) by INHYMS170.ca.com
	([155.35.35.44]) with mapi id 14.01.0355.002;
	Thu, 30 Aug 2012 15:56:23 +0530
From: "Palagummi, Siva" <Siva.Palagummi@ca.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Thread-Topic: [PATCH RFC V2] xen/netback: Count ring slots properly when
	larger MTU sizes are used
Thread-Index: Ac2F4HBfCFIrW+OHQRWCwjPvwOEhAgAd+rmAABARjCA=
Date: Thu, 30 Aug 2012 10:26:23 +0000
Message-ID: <7D7C26B1462EB14CB0E7246697A18C1312C70D@INHYMS111A.ca.com>
References: <7D7C26B1462EB14CB0E7246697A18C1312C3D2@INHYMS111A.ca.com>
	<1346314031.27277.20.camel@zakaz.uk.xensource.com>
In-Reply-To: <1346314031.27277.20.camel@zakaz.uk.xensource.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.134.16.218]
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH RFC V2] xen/netback: Count ring slots
 properly when larger MTU sizes are used
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian,

I was using an MS Exchange client, which converts tabs into spaces and so on. That's why I sent the patch as an attachment. I will find a way to send it inline and resubmit. Thanks for reviewing it.

--Siva



> -----Original Message-----
> From: Ian Campbell [mailto:Ian.Campbell@citrix.com]
> Sent: Thursday, August 30, 2012 1:37 PM
> To: Palagummi, Siva
> Cc: xen-devel@lists.xen.org
> Subject: Re: [PATCH RFC V2] xen/netback: Count ring slots properly when
> larger MTU sizes are used
> 
> On Wed, 2012-08-29 at 13:21 +0100, Palagummi, Siva wrote:
> > This patch contains the modifications that are discussed in thread
> > http://lists.xen.org/archives/html/xen-devel/2012-08/msg01730.html
> 
> Thanks.
> 
> Please can you find a way to include your patches inline rather than as
> attachments, it makes reply for review much easier.
> Documentation/email-clients.txt has some hints for various clients or
> you can just use the "git send-email" command (perhaps with "git
> format-patch").
> 
> You should also CC the netdev@vger.kernel.org list.
> 
> > Instead of using max_required_rx_slots, I used the count that we
> > already have in hand to verify if we have enough room in the batch
> > queue for next skb. Please let me know if that is not appropriate.
> > Things worked fine in my environment. Under heavy load now we seems
> to
> > be consuming most of the slots in the queue and no BUG_ON :-)
> 
> 
> > From: Siva Palagummi <Siva.Palagummi@ca.com>
> >
> > count variable in xen_netbk_rx_action need to be incremented
> > correctly to take into account of extra slots required when
> skb_headlen is
> > greater than PAGE_SIZE when larger MTU values are used. Without this
> change
> > BUG_ON(npo.meta_prod > ARRAY_SIZE(netbk->meta)) is causing netback
> thread
> > to exit.
> >
> > The fix is to stash the counting already done in
> xen_netbk_count_skb_slots
> > in skb_cb_overlay and use it directly in xen_netbk_rx_action.
> >
> > Also improved the checking for filling the batch queue.
> >
> > Also merged a change from a patch created for
> xen_netbk_count_skb_slots
> > function as per thread
> > http://lists.xen.org/archives/html/xen-devel/2012-05/msg01864.html
> >
> > The problem is seen with linux 3.2.2 kernel on Intel 10Gbps network
> >
> >
> > Signed-off-by: Siva Palagummi <Siva.Palagummi@ca.com>
> > ---
> > diff -uprN a/drivers/net/xen-netback/netback.c b/drivers/net/xen-
> netback/netback.c
> > --- a/drivers/net/xen-netback/netback.c	2012-01-25 19:39:32.000000000
> -0500
> > +++ b/drivers/net/xen-netback/netback.c	2012-08-28 17:31:22.000000000
> -0400
> > @@ -80,6 +80,11 @@ union page_ext {
> >  	void *mapping;
> >  };
> >
> > +struct skb_cb_overlay {
> > +	int meta_slots_used;
> > +	int count;
> > +};
> > +
> >  struct xen_netbk {
> >  	wait_queue_head_t wq;
> >  	struct task_struct *task;
> > @@ -324,9 +329,9 @@ unsigned int xen_netbk_count_skb_slots(s
> >  {
> >  	unsigned int count;
> >  	int i, copy_off;
> > +	struct skb_cb_overlay *sco;
> >
> > -	count = DIV_ROUND_UP(
> > -			offset_in_page(skb->data)+skb_headlen(skb),
> PAGE_SIZE);
> > +	count = DIV_ROUND_UP(skb_headlen(skb), PAGE_SIZE);
> 
> This hunk appears to be upstream already (see
> e26b203ede31fffd52571a5ba607a26c79dc5c0d). Which tree are you working
> against? You should either base patches on Linus' branch or on the
> net-next branch.
> 
> Other than this the patch looks good, thanks.
> 
> >
> >  	copy_off = skb_headlen(skb) % PAGE_SIZE;
> >
> > @@ -352,6 +357,8 @@ unsigned int xen_netbk_count_skb_slots(s
> >  			size -= bytes;
> >  		}
> >  	}
> > +	sco = (struct skb_cb_overlay *)skb->cb;
> > +	sco->count = count;
> >  	return count;
> >  }
> >
> > @@ -586,9 +593,6 @@ static void netbk_add_frag_responses(str
> >  	}
> >  }
> >
> > -struct skb_cb_overlay {
> > -	int meta_slots_used;
> > -};
> >
> >  static void xen_netbk_rx_action(struct xen_netbk *netbk)
> >  {
> > @@ -621,12 +625,16 @@ static void xen_netbk_rx_action(struct x
> >  		sco = (struct skb_cb_overlay *)skb->cb;
> >  		sco->meta_slots_used = netbk_gop_skb(skb, &npo);
> >
> > -		count += nr_frags + 1;
> > +		count += sco->count;
> >
> >  		__skb_queue_tail(&rxq, skb);
> >
> > +		skb = skb_peek(&netbk->rx_queue);
> > +		if (skb == NULL)
> > +			break;
> > +		sco = (struct skb_cb_overlay *)skb->cb;
> >  		/* Filled the batch queue? */
> > -		if (count + MAX_SKB_FRAGS >= XEN_NETIF_RX_RING_SIZE)
> > +		if (count + sco->count >= XEN_NETIF_RX_RING_SIZE)
> >  			break;
> >  	}
> >
> 
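The core trick in the patch above is reusing the opaque skb->cb scratch area
to carry the slot count from the counting pass to the ring-filling pass. A
minimal user-space sketch of that overlay pattern (hypothetical names;
`fake_skb` stands in for `struct sk_buff`, whose real `cb` field is also 48
bytes):

```c
#include <assert.h>
#include <string.h>

/* Stand-in for struct sk_buff: each buffer carries an opaque
 * control-block scratch area that a driver may reinterpret. */
struct fake_skb {
    char cb[48];
};

/* Private view of the control block, as in the patch. */
struct skb_cb_overlay {
    int meta_slots_used;
    int count;
};

/* Pass 1: stash the computed slot count in the buffer itself. */
static void stash_count(struct fake_skb *skb, int count)
{
    struct skb_cb_overlay *sco = (struct skb_cb_overlay *)skb->cb;
    sco->count = count;
}

/* Pass 2: recover the count without recomputing it. */
static int fetch_count(const struct fake_skb *skb)
{
    const struct skb_cb_overlay *sco =
        (const struct skb_cb_overlay *)skb->cb;
    return sco->count;
}
```

This is why moving the `struct skb_cb_overlay` definition above
`xen_netbk_count_skb_slots` is needed: both passes must agree on the layout
of the overlay.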

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 30 10:44:22 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Aug 2012 10:44:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T72ED-00064J-QK; Thu, 30 Aug 2012 10:44:01 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jmarcet@gmail.com>) id 1T72EC-00064E-J5
	for xen-devel@lists.xen.org; Thu, 30 Aug 2012 10:44:00 +0000
Received: from [85.158.143.35:4656] by server-3.bemta-4.messagelabs.com id
	65/80-08232-FE34F305; Thu, 30 Aug 2012 10:43:59 +0000
X-Env-Sender: jmarcet@gmail.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1346323430!14072939!1
X-Originating-IP: [209.85.214.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	ML_RADAR_SPEW_LINKS_8,RCVD_BY_IP,spamassassin: ,surbl: (ASYNC_NO) 
	c3VyYmxfcmVjaGVja19kZWxheTogNTU0NTQ0MyAoYWJhbmRvbmVkOiBkbC5kcm9wYm94LmNvb
	S91\nLzEyNTc5MTEyL2xvZ3MvaW50ZXJydXB0cy5sb2cp\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31241 invoked from network); 30 Aug 2012 10:43:52 -0000
Received: from mail-ob0-f173.google.com (HELO mail-ob0-f173.google.com)
	(209.85.214.173)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Aug 2012 10:43:52 -0000
Received: by obbta14 with SMTP id ta14so3771379obb.32
	for <xen-devel@lists.xen.org>; Thu, 30 Aug 2012 03:43:50 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:from:date:message-id:subject:to:content-type;
	bh=ipkmO+Uf0d2sAgwle+98TXH0ks1rZlFdPLl8EQGAnOA=;
	b=1E1nji+BIuS6wZLd5R1hs3nbud02dOE9PHXZ0vPyoroY8Lnho1TwNVmUbPkOeYaD4y
	N3VhsR1zIsYvBKtIm3Eh12dJenE+9ZgiwhXgV5Oi5O8xMVjn+Cme1tkes4MqwvMIFe8K
	OL8G40di4/+3/xl1xJGD2kMBrVxevrIUrC598aC6MIBRbiyu8SGE9Cwy8MpChHBdM6ji
	bsvBs57MgRQ8sz7JQNRlE97Q/ea2zpzjZlbyJ8SJSWG7y4CclYO7wpCS9FNHH0XeyTHv
	hcWuufoh1eFMXAnw98+XGXWlk/OiGkZDm2otr9wWMccBKp6HgVMtEi1E4yIN4e10hxzm
	DIIA==
Received: by 10.60.31.102 with SMTP id z6mr3702518oeh.42.1346323430354; Thu,
	30 Aug 2012 03:43:50 -0700 (PDT)
MIME-Version: 1.0
Received: by 10.60.62.4 with HTTP; Thu, 30 Aug 2012 03:43:29 -0700 (PDT)
From: Javier Marcet <jmarcet@gmail.com>
Date: Thu, 30 Aug 2012 12:43:29 +0200
Message-ID: <CAAnFQG8z_ja1Wj2hX+0ZRsz5eLWr+U+7PoSiF9NePsSh2jbX4g@mail.gmail.com>
To: Xen Devel Mailing list <xen-devel@lists.xen.org>
Subject: [Xen-devel] Xen 4.2.0-rc4 bugs with GigaByte H77M-D3H + Core i7 3770
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi,

I've just upgraded a server of mine from a Core i3 2100T to a Core i7
3770, in order to do full virtualization with VT-d.

I'm using kernel 3.5.2 and Xen from git://xenbits.xen.org/xen.git @ commit
37d7ccdc2f50d659f1eb8ec11ee4bf8a8376926d (Fri Aug 24).

Since there are various issues, I'll comment on them all here. I'd
appreciate help deciding which bug reports to file, and where to file
them.

Upon booting under the Xen hypervisor everything works fine, except that
I cannot suspend the machine and I have reception problems on the DVB-T
tuners installed in the system.

Besides that, Xen can't read the CPU capabilities, or so virt-manager
reports when creating a DomU. As a result, no DomU can boot, due to
ACPI errors.

On the same kernel and machine, KVM can read the capabilities with no
problems and guests work reliably.

On the other hand, booting without the Xen hypervisor fixes the suspend
and tuner problems, but there are other issues.

I need to add the parameter intel_iommu=igfx_off to the kernel command line
or I see half a second of these errors at the beginning of each boot:

[    0.358278] DMAR:[DMA Read] Request device [00:02.0] fault addr 9fac7000
[    0.358278] DMAR:[fault reason 06] PTE Read access is not set
[    0.358286] DRHD: handling fault status reg 2
[    0.358288] DMAR:[DMA Read] Request device [00:02.0] fault addr 9fac7000
[    0.358288] DMAR:[fault reason 06] PTE Read access is not set
[    0.358291] DMAR:[DMA Read] Request device [00:02.0] fault addr 9fac7000
[    0.358291] DMAR:[fault reason 06] PTE Read access is not set
[    0.358307] DRHD: handling fault status reg 3
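(For reference, the workaround mentioned above is a kernel command-line
parameter; on a GRUB 2 based distro it would look something like the
following -- the file path and update command are the usual
Debian/Ubuntu ones, adjust for your distro:)

```shell
# /etc/default/grub -- append the workaround to the kernel command line
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=igfx_off"

# regenerate the GRUB config and reboot for it to take effect
sudo update-grub
```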

Furthermore, later on, just after enabling the IOMMU, I get this:

[    0.328564] DMAR: No ATSR found
[    0.328580] IOMMU 1 0xfed91000: using Queued invalidation
[    0.328582] IOMMU: Setting RMRR:
[    0.328589] IOMMU: Setting identity map for device 0000:00:1d.0
[0x9de36000 - 0x9de52fff]
[    0.328606] IOMMU: Setting identity map for device 0000:00:1a.0
[0x9de36000 - 0x9de52fff]
[    0.328617] IOMMU: Setting identity map for device 0000:00:14.0
[0x9de36000 - 0x9de52fff]
[    0.328625] IOMMU: Prepare 0-16MiB unity mapping for LPC
[    0.328630] IOMMU: Setting identity map for device 0000:00:1f.0
[0x0 - 0xffffff]
[    0.328705] PCI-DMA: Intel(R) Virtualization Technology for Directed I/O
[    0.328714] ------------[ cut here ]------------
[    0.328718] WARNING: at
/home/storage/src/ubuntu-precise/drivers/pci/search.c:44
pci_find_upstream_pcie_bridge+0x51/0x68()
[    0.328719] Hardware name: To be filled by O.E.M.
[    0.328720] Modules linked in:
[    0.328722] Pid: 1, comm: swapper/0 Not tainted 3.5.0-12-i3 #12~precise1
[    0.328723] Call Trace:
[    0.328727]  [<ffffffff8106ab0d>] warn_slowpath_common+0x7e/0x96
[    0.328729]  [<ffffffff8106ab3a>] warn_slowpath_null+0x15/0x17
[    0.328731]  [<ffffffff812992d5>] pci_find_upstream_pcie_bridge+0x51/0x68
[    0.328733]  [<ffffffff814bd02e>] intel_iommu_device_group+0x64/0xb7
[    0.328735]  [<ffffffff814b8a2b>] ? bus_set_iommu+0x3f/0x3f
[    0.328738]  [<ffffffff814b86f2>] iommu_device_group+0x24/0x26
[    0.328740]  [<ffffffff814b8a40>] add_iommu_group+0x15/0x33
[    0.328742]  [<ffffffff8137ba61>] bus_for_each_dev+0x54/0x80
[    0.328745]  [<ffffffff81cdaf83>] ? memblock_find_dma_reserve+0x13f/0x13f
[    0.328746]  [<ffffffff814b8a25>] bus_set_iommu+0x39/0x3f
[    0.328749]  [<ffffffff81d0367c>] intel_iommu_init+0x1aa/0x1ce
[    0.328751]  [<ffffffff81cdaf96>] pci_iommu_init+0x13/0x3e
[    0.328754]  [<ffffffff81002094>] do_one_initcall+0x7a/0x132
[    0.328756]  [<ffffffff81cd2bac>] do_basic_setup+0x96/0xb4
[    0.328758]  [<ffffffff81cd2533>] ? obsolete_checksetup+0xab/0xab
[    0.328759]  [<ffffffff81cd2c82>] kernel_init+0xb8/0x12e
[    0.328762]  [<ffffffff81615b24>] kernel_thread_helper+0x4/0x10
[    0.328764]  [<ffffffff81cd2bca>] ? do_basic_setup+0xb4/0xb4
[    0.328766]  [<ffffffff81615b20>] ? gs_change+0x13/0x13
[    0.328768] ---[ end trace 9bacf275b2da9216 ]---

You can see dmesg logs, lspci and dmidecode data here:

http://dl.dropbox.com/u/12579112/logs/dmesg-3.5.0-12-i3-bare.log
http://dl.dropbox.com/u/12579112/logs/dmesg-3.5.0-12-i3-normal.log
http://dl.dropbox.com/u/12579112/logs/dmesg-3.5.0-12-i3-xen.log
http://dl.dropbox.com/u/12579112/logs/dmidecode.log
http://dl.dropbox.com/u/12579112/logs/interrupts.log
http://dl.dropbox.com/u/12579112/logs/lspci.log

I'm willing to help with whatever is needed.


-- 
Javier Marcet <jmarcet@gmail.com>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 30 10:55:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Aug 2012 10:55:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T72P5-0006Fg-Aq; Thu, 30 Aug 2012 10:55:15 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <d.vrabel.98@gmail.com>) id 1T72P3-0006FY-9n
	for xen-devel@lists.xen.org; Thu, 30 Aug 2012 10:55:13 +0000
X-Env-Sender: d.vrabel.98@gmail.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1346324034!8741634!1
X-Originating-IP: [209.85.213.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29848 invoked from network); 30 Aug 2012 10:53:55 -0000
Received: from mail-yx0-f173.google.com (HELO mail-yx0-f173.google.com)
	(209.85.213.173)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Aug 2012 10:53:55 -0000
Received: by yenm4 with SMTP id m4so330986yen.32
	for <xen-devel@lists.xen.org>; Thu, 30 Aug 2012 03:53:54 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:date:from:user-agent:mime-version:to:cc:subject
	:references:in-reply-to:content-type:content-transfer-encoding;
	bh=yN1XFel/uBIXWSVHGXhb0TmORq3FD4SwR8XsNjV9mHY=;
	b=ZiEkdM2eU4nOJTNprc6UPIU00wjNwz0XGx9uqS7Y/nJuH5X31zxMOxP1oERxqG2R5u
	U+WB34DGhzJzOm2qRx9wy9459YnNUIyAgWYVjjGmjAYBN8ft/ghCl4wobWcoMSUdPaih
	0UyiyEFpbT7mU5yZjRoelj6sXUltlE/ajT+jb0mN1X4+vfyrr/SgEUVWpECXv1vv39rY
	yG4urUKsS6eJz6PUB0vgXLuCMClbtbO4siHNXUH3HA484hX+2UasnZ/0H3NR+PETQzm4
	SgQFcae2Rc2HFWyAkUjbJ8vShNAxfuR3cnJpHZtinE3WMG1Wy0qN9TteX2qpIFcjtb99
	NRtg==
Received: by 10.236.136.39 with SMTP id v27mr4299259yhi.96.1346324034283;
	Thu, 30 Aug 2012 03:53:54 -0700 (PDT)
Received: from [10.80.2.76] (firewall.ctxuk.citrix.com. [62.200.22.2])
	by mx.google.com with ESMTPS id l7sm2412566yhk.22.2012.08.30.03.53.52
	(version=SSLv3 cipher=OTHER); Thu, 30 Aug 2012 03:53:53 -0700 (PDT)
Message-ID: <503F463E.90505@cantab.net>
Date: Thu, 30 Aug 2012 11:53:50 +0100
From: David Vrabel <dvrabel@cantab.net>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20120428 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Dan Magenheimer <dan.magenheimer@oracle.com>
References: <CAFLBxZaGmvSYgL4DcHcSwE_0shqKC0DRbf7ab=uhFrerPCxRig@mail.gmail.com>
	<f9a8835e-95d8-4495-8c9d-4fa769913549@default>
In-Reply-To: <f9a8835e-95d8-4495-8c9d-4fa769913549@default>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen 4.3 release planning proposal
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 29/08/12 21:53, Dan Magenheimer wrote:
> 
> Maybe it is time to move to match the well-known highly-greased
> Linux kernel release process?  This would include, for example, a short
> window for new functionality and a xen-next for pre-window shaking
> out and merging (of new functionality) and testing.  As has
> been pointed out, xen-unstable is, well, unstable for far too long.
>
> It may not be necessary to aggressively match Linus' 8-9 week release
> cycle or weekly rcN releases, but the core process is known to
> work very well, is reasonably well documented, and will be familiar
> to many in the open source community.

I think such a system only works if you have a short release cycle.  If
the only time to merge new features is two weeks in every 6/9 months
then that is just far too long and is not very contributor-friendly.

Xen doesn't have the number of contributors or changes that make a Linux
kernel style process necessary.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 30 12:59:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Aug 2012 12:59:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T74KU-0007Eh-UZ; Thu, 30 Aug 2012 12:58:38 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1T74KU-0007Ec-Dy
	for xen-devel@lists.xensource.com; Thu, 30 Aug 2012 12:58:38 +0000
Received: from [85.158.139.83:48165] by server-12.bemta-5.messagelabs.com id
	BC/E2-18300-D736F305; Thu, 30 Aug 2012 12:58:37 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-5.tower-182.messagelabs.com!1346331515!27506053!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjgwNTA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1349 invoked from network); 30 Aug 2012 12:58:37 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Aug 2012 12:58:37 -0000
X-IronPort-AV: E=Sophos;i="4.80,339,1344211200"; d="scan'208";a="36299380"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	30 Aug 2012 12:58:17 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Thu, 30 Aug 2012 08:58:17 -0400
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1T74K9-0005jC-4o;
	Thu, 30 Aug 2012 13:58:17 +0100
From: David Vrabel <david.vrabel@citrix.com>
To: xen-devel@lists.xensource.com
Date: Thu, 30 Aug 2012 13:58:10 +0100
Message-ID: <1346331492-15027-1-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
MIME-Version: 1.0
Cc: Andres Lagar-Cavilla <andres@gridcentric.ca>,
	David Vrabel <david.vrabel@citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCHv3 0/2] xen/privcmd: support for paged-out frames
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This series is a straight forward-port of some functionality from
classic kernels to support Xen hosts that do paging of guests.

This isn't functionality that XenServer makes use of, so I've not tested
these patches with paging in use.

Changes since v2:

- Better docs/comments,
- Return -ENOENT if any frame failed with -ENOENT (even if other
  frames fail for other reasons).

Changes since v1:

- Don't change PRIVCMD_MMAPBATCH (except to #define a constant for the
  error).  It's broken and not really fixable sensibly and libxc will
  use V2 if it is available.
- Return -ENOENT if all failures were -ENOENT.
- Clear arg->err on success (libxc expected this).
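
(The -ENOENT precedence described in the v2 changelog above can be
sketched as a small helper -- purely illustrative, the function name and
layout are mine, not the actual kernel code:)

```c
#include <errno.h>
#include <stddef.h>

/* Aggregate per-frame mapping errors into a single ioctl return value,
 * following the rule above: if any frame failed with -ENOENT, return
 * -ENOENT (even if other frames failed for other reasons), so callers
 * know to retry paged-out frames; otherwise return the first non-zero
 * error, or 0 if every frame mapped successfully. */
static int aggregate_errs(const int *err, size_t num)
{
    int ret = 0;

    for (size_t i = 0; i < num; i++) {
        if (err[i] == -ENOENT)
            return -ENOENT;      /* -ENOENT wins over any other error */
        if (err[i] && !ret)
            ret = err[i];        /* remember the first other error */
    }
    return ret;
}
```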

I think this should probably get a "Tested-by" from Andres or someone
else who uses paging before being applied.

David


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 30 12:59:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Aug 2012 12:59:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T74KX-0007Ew-A8; Thu, 30 Aug 2012 12:58:41 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1T74KV-0007El-PF
	for xen-devel@lists.xensource.com; Thu, 30 Aug 2012 12:58:40 +0000
Received: from [85.158.139.83:48283] by server-9.bemta-5.messagelabs.com id
	30/BC-20529-E736F305; Thu, 30 Aug 2012 12:58:38 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-5.tower-182.messagelabs.com!1346331515!27506053!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjgwNTA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1406 invoked from network); 30 Aug 2012 12:58:37 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Aug 2012 12:58:37 -0000
X-IronPort-AV: E=Sophos;i="4.80,339,1344211200"; d="scan'208";a="36299382"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	30 Aug 2012 12:58:18 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Thu, 30 Aug 2012 08:58:17 -0400
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1T74K9-0005jC-6x;
	Thu, 30 Aug 2012 13:58:17 +0100
From: David Vrabel <david.vrabel@citrix.com>
To: xen-devel@lists.xensource.com
Date: Thu, 30 Aug 2012 13:58:12 +0100
Message-ID: <1346331492-15027-3-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1346331492-15027-1-git-send-email-david.vrabel@citrix.com>
References: <1346331492-15027-1-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
Cc: Andres Lagar-Cavilla <andres@gridcentric.ca>,
	David Vrabel <david.vrabel@citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH 2/2] xen/privcmd: add PRIVCMD_MMAPBATCH_V2 ioctl
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: David Vrabel <david.vrabel@citrix.com>

PRIVCMD_MMAPBATCH_V2 extends PRIVCMD_MMAPBATCH with an additional
field for reporting the error code for every frame that could not be
mapped.  libxc prefers PRIVCMD_MMAPBATCH_V2 over PRIVCMD_MMAPBATCH.
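
As an illustration, the two error-reporting conventions can be modelled
in Python.  This is a sketch of the ABI semantics, not kernel code; the
function names and the errno values are illustrative assumptions:

```python
# Model of the two privcmd mmapbatch error-reporting conventions.
# Function names and errno values here are illustrative, not kernel API.
PRIVCMD_MMAPBATCH_MFN_ERROR = 0xf0000000
ENOENT, EINVAL = 2, 22

def mmapbatch_v1(mfns, remap):
    """V1: a failed frame is flagged by OR-ing a marker into the mfn
    itself, so the actual error code is lost."""
    out, failed = [], 0
    for mfn in mfns:
        if remap(mfn) < 0:
            mfn |= PRIVCMD_MMAPBATCH_MFN_ERROR
            failed += 1
        out.append(mfn)
    return out, failed

def mmapbatch_v2(mfns, remap):
    """V2: per-frame error codes go to a separate err array; the ioctl
    returns -ENOENT if any frame failed with -ENOENT, otherwise the
    last error seen (0 on full success)."""
    errs, ret = [], 0
    for mfn in mfns:
        r = remap(mfn)
        errs.append(r if r < 0 else 0)
        if r < 0:
            ret = r
    if -ENOENT in errs:
        ret = -ENOENT
    return errs, ret
```

The key design point is that V2 preserves the per-frame error codes,
which lets a caller such as libxc retry only the frames that failed
with -ENOENT (paged-out frames).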

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 drivers/xen/privcmd.c |   99 +++++++++++++++++++++++++++++++++++++++---------
 include/xen/privcmd.h |   23 +++++++++++-
 2 files changed, 102 insertions(+), 20 deletions(-)

diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
index ccee0f1..c0e89e7 100644
--- a/drivers/xen/privcmd.c
+++ b/drivers/xen/privcmd.c
@@ -76,7 +76,7 @@ static void free_page_list(struct list_head *pages)
  */
 static int gather_array(struct list_head *pagelist,
 			unsigned nelem, size_t size,
-			void __user *data)
+			const void __user *data)
 {
 	unsigned pageidx;
 	void *pagedata;
@@ -248,18 +248,37 @@ struct mmap_batch_state {
 	struct vm_area_struct *vma;
 	int err;
 
-	xen_pfn_t __user *user;
+	xen_pfn_t __user *user_mfn;
+	int __user *user_err;
 };
 
 static int mmap_batch_fn(void *data, void *state)
 {
 	xen_pfn_t *mfnp = data;
 	struct mmap_batch_state *st = state;
+	int ret;
 
-	if (xen_remap_domain_mfn_range(st->vma, st->va & PAGE_MASK, *mfnp, 1,
-				       st->vma->vm_page_prot, st->domain) < 0) {
-		*mfnp |= 0xf0000000U;
-		st->err++;
+	ret = xen_remap_domain_mfn_range(st->vma, st->va & PAGE_MASK, *mfnp, 1,
+					 st->vma->vm_page_prot, st->domain);
+	if (ret < 0) {
+		/*
+		 * Error reporting is a mess but userspace relies on
+		 * it behaving this way.
+		 *
+		 * V2 needs to a) return the result of each frame's
+		 * remap; and b) return -ENOENT if any frame failed
+		 * with -ENOENT.
+		 *
+		 * In this first pass the error code is saved by
+		 * overwriting the mfn and an error is indicated in
+		 * st->err.
+		 *
+		 * The second pass by mmap_return_errors() will write
+		 * the error codes to user space and get the right
+		 * ioctl return value.
+		 */
+		*(int *)mfnp = ret;
+		st->err = ret;
 	}
 	st->va += PAGE_SIZE;
 
@@ -270,16 +289,33 @@ static int mmap_return_errors(void *data, void *state)
 {
 	xen_pfn_t *mfnp = data;
 	struct mmap_batch_state *st = state;
+	int ret;
+
+	if (st->user_err) {
+		int err = *(int *)mfnp;
+
+		if (err == -ENOENT)
+			st->err = err;
 
-	return put_user(*mfnp, st->user++);
+		return __put_user(err, st->user_err++);
+	} else {
+		xen_pfn_t mfn;
+
+		ret = __get_user(mfn, st->user_mfn);
+		if (ret < 0)
+			return ret;
+
+		mfn |= PRIVCMD_MMAPBATCH_MFN_ERROR;
+		return __put_user(mfn, st->user_mfn++);
+	}
 }
 
 static struct vm_operations_struct privcmd_vm_ops;
 
-static long privcmd_ioctl_mmap_batch(void __user *udata)
+static long privcmd_ioctl_mmap_batch(void __user *udata, int version)
 {
 	int ret;
-	struct privcmd_mmapbatch m;
+	struct privcmd_mmapbatch_v2 m;
 	struct mm_struct *mm = current->mm;
 	struct vm_area_struct *vma;
 	unsigned long nr_pages;
@@ -289,15 +325,31 @@ static long privcmd_ioctl_mmap_batch(void __user *udata)
 	if (!xen_initial_domain())
 		return -EPERM;
 
-	if (copy_from_user(&m, udata, sizeof(m)))
-		return -EFAULT;
+	switch (version) {
+	case 1:
+		if (copy_from_user(&m, udata, sizeof(struct privcmd_mmapbatch)))
+			return -EFAULT;
+		/* Returns per-frame error in m.arr. */
+		m.err = NULL;
+		if (!access_ok(VERIFY_WRITE, m.arr, m.num * sizeof(*m.arr)))
+			return -EFAULT;
+		break;
+	case 2:
+		if (copy_from_user(&m, udata, sizeof(struct privcmd_mmapbatch_v2)))
+			return -EFAULT;
+		/* Returns per-frame error code in m.err. */
+		if (!access_ok(VERIFY_WRITE, m.err, m.num * (sizeof(*m.err))))
+			return -EFAULT;
+		break;
+	default:
+		return -EINVAL;
+	}
 
 	nr_pages = m.num;
 	if ((m.num <= 0) || (nr_pages > (LONG_MAX >> PAGE_SHIFT)))
 		return -EINVAL;
 
-	ret = gather_array(&pagelist, m.num, sizeof(xen_pfn_t),
-			   m.arr);
+	ret = gather_array(&pagelist, m.num, sizeof(xen_pfn_t), m.arr);
 
 	if (ret || list_empty(&pagelist))
 		goto out;
@@ -325,12 +377,17 @@ static long privcmd_ioctl_mmap_batch(void __user *udata)
 
 	up_write(&mm->mmap_sem);
 
-	if (state.err > 0) {
-		state.user = m.arr;
+	if (state.err) {
+		state.err = 0;
+		state.user_mfn = (xen_pfn_t *)m.arr;
+		state.user_err = m.err;
 		ret = traverse_pages(m.num, sizeof(xen_pfn_t),
-			       &pagelist,
-			       mmap_return_errors, &state);
-	}
+				     &pagelist,
+				     mmap_return_errors, &state);
+		if (ret >= 0)
+			ret = state.err;
+	} else if (m.err)
+		__clear_user(m.err, m.num * sizeof(*m.err));
 
 out:
 	free_page_list(&pagelist);
@@ -354,7 +411,11 @@ static long privcmd_ioctl(struct file *file,
 		break;
 
 	case IOCTL_PRIVCMD_MMAPBATCH:
-		ret = privcmd_ioctl_mmap_batch(udata);
+		ret = privcmd_ioctl_mmap_batch(udata, 1);
+		break;
+
+	case IOCTL_PRIVCMD_MMAPBATCH_V2:
+		ret = privcmd_ioctl_mmap_batch(udata, 2);
 		break;
 
 	default:
diff --git a/include/xen/privcmd.h b/include/xen/privcmd.h
index 17857fb..f60d75c 100644
--- a/include/xen/privcmd.h
+++ b/include/xen/privcmd.h
@@ -59,13 +59,32 @@ struct privcmd_mmapbatch {
 	int num;     /* number of pages to populate */
 	domid_t dom; /* target domain */
 	__u64 addr;  /* virtual address */
-	xen_pfn_t __user *arr; /* array of mfns - top nibble set on err */
+	xen_pfn_t __user *arr; /* array of mfns - or'd with
+				  PRIVCMD_MMAPBATCH_MFN_ERROR on err */
+};
+
+#define PRIVCMD_MMAPBATCH_MFN_ERROR 0xf0000000U
+
+struct privcmd_mmapbatch_v2 {
+	unsigned int num; /* number of pages to populate */
+	domid_t dom;      /* target domain */
+	__u64 addr;       /* virtual address */
+	const xen_pfn_t __user *arr; /* array of mfns */
+	int __user *err;  /* array of error codes */
 };
 
 /*
  * @cmd: IOCTL_PRIVCMD_HYPERCALL
  * @arg: &privcmd_hypercall_t
  * Return: Value returned from execution of the specified hypercall.
+ *
+ * @cmd: IOCTL_PRIVCMD_MMAPBATCH_V2
+ * @arg: &struct privcmd_mmapbatch_v2
+ * Return: 0 on success (i.e., arg->err contains valid error codes for
+ * each frame).  On an error other than a failed frame remap, -1 is
+ * returned and errno is set to EINVAL, EFAULT etc.  As an exception,
+ * if the operation was otherwise successful but any frame failed with
+ * -ENOENT, then -1 is returned and errno is set to ENOENT.
  */
 #define IOCTL_PRIVCMD_HYPERCALL					\
 	_IOC(_IOC_NONE, 'P', 0, sizeof(struct privcmd_hypercall))
@@ -73,5 +92,7 @@ struct privcmd_mmapbatch {
 	_IOC(_IOC_NONE, 'P', 2, sizeof(struct privcmd_mmap))
 #define IOCTL_PRIVCMD_MMAPBATCH					\
 	_IOC(_IOC_NONE, 'P', 3, sizeof(struct privcmd_mmapbatch))
+#define IOCTL_PRIVCMD_MMAPBATCH_V2				\
+	_IOC(_IOC_NONE, 'P', 4, sizeof(struct privcmd_mmapbatch_v2))
 
 #endif /* __LINUX_PUBLIC_PRIVCMD_H__ */
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 30 12:59:07 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Aug 2012 12:59:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T74KY-0007FA-N3; Thu, 30 Aug 2012 12:58:42 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1T74KX-0007Et-DN
	for xen-devel@lists.xensource.com; Thu, 30 Aug 2012 12:58:41 +0000
Received: from [85.158.139.83:8598] by server-10.bemta-5.messagelabs.com id
	5F/2E-10969-0836F305; Thu, 30 Aug 2012 12:58:40 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-5.tower-182.messagelabs.com!1346331515!27506053!3
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjgwNTA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1465 invoked from network); 30 Aug 2012 12:58:38 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Aug 2012 12:58:38 -0000
X-IronPort-AV: E=Sophos;i="4.80,339,1344211200"; d="scan'208";a="36299381"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	30 Aug 2012 12:58:17 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1;
	Thu, 30 Aug 2012 08:58:17 -0400
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1T74K9-0005jC-6U;
	Thu, 30 Aug 2012 13:58:17 +0100
From: David Vrabel <david.vrabel@citrix.com>
To: xen-devel@lists.xensource.com
Date: Thu, 30 Aug 2012 13:58:11 +0100
Message-ID: <1346331492-15027-2-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1346331492-15027-1-git-send-email-david.vrabel@citrix.com>
References: <1346331492-15027-1-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
Cc: Andres Lagar-Cavilla <andres@gridcentric.ca>,
	David Vrabel <david.vrabel@citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH 1/2] xen/mm: return more precise error from
	xen_remap_domain_range()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: David Vrabel <david.vrabel@citrix.com>

Callers of xen_remap_domain_range() need to know if the remap failed
because the frame is currently paged out, so they can retry the remap
later on.  Return -ENOENT in this case.

This assumes that the error codes returned by Xen are a subset of
those used by the kernel.  It is unclear if this is defined as part of
the hypercall ABI.
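
The effect of the change can be sketched as follows (a minimal Python
model under the assumption stated above that Xen's error codes share
the kernel's errno numbering; names and values are illustrative):

```python
# Illustrative errno values; the real constants live in <errno.h>.
EFAULT = 14
ENOENT = 2

def remap_err_before(hypercall_ret):
    """Before the patch: any hypercall failure collapses to -EFAULT,
    so the caller cannot tell a paged-out frame from a bad address."""
    return -EFAULT if hypercall_ret < 0 else 0

def remap_err_after(hypercall_ret):
    """After the patch: the hypervisor's negative error code is passed
    through unchanged, so -ENOENT survives to the caller."""
    return hypercall_ret if hypercall_ret < 0 else 0
```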

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 arch/x86/xen/mmu.c |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index b65a761..fb187ea 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -2355,8 +2355,8 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
 		if (err)
 			goto out;
 
-		err = -EFAULT;
-		if (HYPERVISOR_mmu_update(mmu_update, batch, NULL, domid) < 0)
+		err = HYPERVISOR_mmu_update(mmu_update, batch, NULL, domid);
+		if (err < 0)
 			goto out;
 
 		nr -= batch;
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 30 14:29:26 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Aug 2012 14:29:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T75js-00008N-6R; Thu, 30 Aug 2012 14:28:56 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Aurelien.MILLIAT@clarte.asso.fr>) id 1T75jq-00008G-4L
	for xen-devel@lists.xen.org; Thu, 30 Aug 2012 14:28:54 +0000
Received: from [85.158.139.83:62626] by server-5.bemta-5.messagelabs.com id
	06/F0-30514-5A87F305; Thu, 30 Aug 2012 14:28:53 +0000
X-Env-Sender: Aurelien.MILLIAT@clarte.asso.fr
X-Msg-Ref: server-8.tower-182.messagelabs.com!1346336930!16468681!1
X-Originating-IP: [194.3.193.29]
X-SpamReason: No, hits=0.0 required=7.0 tests=HTML_MESSAGE
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9341 invoked from network); 30 Aug 2012 14:28:51 -0000
Received: from ns3.ingenierium.com (HELO Dulac.clarte.asso.fr) (194.3.193.29)
	by server-8.tower-182.messagelabs.com with AES128-SHA encrypted SMTP;
	30 Aug 2012 14:28:51 -0000
Received: from dulac.ingenierium.com ([192.168.59.1]) by dulac
	([192.168.59.1]) with mapi id 14.01.0289.001;
	Thu, 30 Aug 2012 16:28:48 +0200
From: =?iso-8859-1?Q?Aur=E9lien_MILLIAT?= <Aurelien.MILLIAT@clarte.asso.fr>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Thread-Topic: [Xen-devel] Active Stereo with ATI FirePro V8800
Thread-Index: Ac2GrCeDCJes4ZyaQ4GUHwlLR184rgAD4baA
Date: Thu, 30 Aug 2012 14:28:48 +0000
Message-ID: <36774CA35642C143BCDE93BA0C68DC5702C0D0CE@dulac>
Accept-Language: fr-FR, en-US
Content-Language: fr-FR
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [192.168.54.111]
x-avast-antispam: clean, score=0
x-original-content-type: application/ms-tnef
MIME-Version: 1.0
Subject: [Xen-devel]  Active Stereo with ATI FirePro V8800
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8579677192782890504=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8579677192782890504==
Content-Language: fr-FR
Content-Type: multipart/alternative;
	boundary="_000_36774CA35642C143BCDE93BA0C68DC5702C0D0CEdulac_"

--_000_36774CA35642C143BCDE93BA0C68DC5702C0D0CEdulac_
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: 8bit

Hi,

I'm currently working on stereoscopy on Xen unstable and Xen 4.1.3.
With the latest Catalyst driver and quad buffer activated, I'm not able
to get active stereo.
I've run some tests; all the software I've tried runs in stereo modes.
But for active stereo, there is no signal output for synchronization and
there is only one image rendered.
I've tested other stereo modes without issues.

Could it be something I haven't set properly?
Has anyone been able to get active stereo working with Xen 4.1.3?
From a fresh install I've changed my grub.cfg for passthrough and set my
DomU config file (for Xen 4.1.3):
_________________________________
import os, re
arch = os.uname()[4]
if re.search('64', arch):
   arch_libdir = 'lib64'
else:
   arch_libdir = 'lib'

kernel = '/usr/lib/xen/boot/hvmloader'
builder = 'hvm'
memory = 3072
shadow_memory = 8
name = "VM"

vcpus = 3
disk = ['file:/home/xen/domains/VM/first.img,ioemu:hda,w','phy:/dev/cdrom,hdc:cdrom,r']
device_model = '/usr/' + arch_libdir + '/xen/bin/qemu-dm'

usb = 1
keymap = 'fr'
boot = 'dc'
sdl = 0
vnc = 1
vncconsole = 1
vncpasswd = ''

gfx_passthru = 1
pci = ['01:00.0','00:1a.0','00:1a.1','00:1a.2','00:1d.0','00:1d.1','00:1d.2','0f:00.0','0f:00.1']
_________________________________

01:00.0 = network card
0f:00.0 = VGA card
0f:00.1 = HDMI audio device
00:1a.0 to 00:1d.2 = USB

Computer: HP Z800 workstation
GPU: ATI FirePro V8800
CPU: Intel Xeon E5640
Chipset: Intel 5520
OS: Debian 6.0.5

Thanks,
Aurelien



--_000_36774CA35642C143BCDE93BA0C68DC5702C0D0CEdulac_--


--===============8579677192782890504==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8579677192782890504==--


From xen-devel-bounces@lists.xen.org Thu Aug 30 14:51:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Aug 2012 14:51:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T765o-0000j4-6H; Thu, 30 Aug 2012 14:51:36 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T765n-0000iw-BP
	for xen-devel@lists.xensource.com; Thu, 30 Aug 2012 14:51:35 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1346338283!3175667!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEwOTA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23870 invoked from network); 30 Aug 2012 14:51:24 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Aug 2012 14:51:24 -0000
X-IronPort-AV: E=Sophos;i="4.80,341,1344211200"; d="scan'208";a="14272780"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	30 Aug 2012 14:50:37 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Thu, 30 Aug 2012 15:50:37 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1T764r-0000ki-Fq; Thu, 30 Aug 2012 14:50:37 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1T764q-00041X-VF;
	Thu, 30 Aug 2012 15:50:36 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20543.32188.436594.193156@mariner.uk.xensource.com>
Date: Thu, 30 Aug 2012 15:50:36 +0100
To: George Dunlap <george.dunlap@eu.citrix.com>
Newsgroups: chiark.mail.xen.devel
In-Reply-To: <db614e92faf743e20b3f.1337096977@kodo2>
References: <db614e92faf743e20b3f.1337096977@kodo2>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: xen-devel@lists.xensource.com, Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH] xencommons: Attempt to load blktap driver
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

George Dunlap writes ("[Xen-devel] [PATCH] xencommons: Attempt to load blktap driver"):
> Older kernels, such as those found in Debian Squeeze:
> * Have bugs in handling of AIO into foreign pages
> * Have blktap modules, which will cause qemu not to use AIO, but
> which are not loaded on boot.
> 
> Attempt to load blktap in xencommons, to make sure modern qemu's which
> use AIO will work properly on those kernels.
> 
> Signed-off-by: George Dunlap <george.dunlap@eu.citrix.com>

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

For 4.2.

Ian.
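For readers wondering what "attempt to load" amounts to here: the idea is a best-effort module load that must never abort the init path when the module (or modprobe itself) is absent. A minimal sketch of that idea, not the actual xencommons patch, with a made-up helper name:

```python
import shutil
import subprocess

def try_load_blktap():
    # Best-effort: missing modprobe or a missing blktap module must not raise;
    # return True only if the module actually loaded.
    if shutil.which('modprobe') is None:
        return False
    result = subprocess.run(['modprobe', 'blktap'],
                            stdout=subprocess.DEVNULL,
                            stderr=subprocess.DEVNULL)
    return result.returncode == 0
```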

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 30 14:59:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Aug 2012 14:59:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T76DX-0001LJ-IK; Thu, 30 Aug 2012 14:59:35 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T76DV-0001L5-TY
	for xen-devel@lists.xen.org; Thu, 30 Aug 2012 14:59:34 +0000
Received: from [85.158.138.51:9269] by server-12.bemta-3.messagelabs.com id
	92/DC-10384-5DF7F305; Thu, 30 Aug 2012 14:59:33 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-16.tower-174.messagelabs.com!1346338772!27630207!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEwOTA=\n
From xen-devel-bounces@lists.xen.org Thu Aug 30 14:59:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Aug 2012 14:59:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T76DX-0001LJ-IK; Thu, 30 Aug 2012 14:59:35 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T76DV-0001L5-TY
	for xen-devel@lists.xen.org; Thu, 30 Aug 2012 14:59:34 +0000
Received: from [85.158.138.51:9269] by server-12.bemta-3.messagelabs.com id
	92/DC-10384-5DF7F305; Thu, 30 Aug 2012 14:59:33 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-16.tower-174.messagelabs.com!1346338772!27630207!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEwOTA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18794 invoked from network); 30 Aug 2012 14:59:32 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Aug 2012 14:59:32 -0000
X-IronPort-AV: E=Sophos;i="4.80,341,1344211200"; d="scan'208";a="14272999"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	30 Aug 2012 14:59:32 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Thu, 30 Aug 2012 15:59:31 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1T76DT-0000uO-Pt; Thu, 30 Aug 2012 14:59:31 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1T76DT-0006Bm-ME;
	Thu, 30 Aug 2012 15:59:31 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20543.32723.578750.578966@mariner.uk.xensource.com>
Date: Thu, 30 Aug 2012 15:59:31 +0100
To: Ian Campbell <Ian.Campbell@citrix.com>, Matt Wilson <msw@amazon.com>
Newsgroups: chiark.mail.xen.devel
In-Reply-To: <1809175cdc9b3a606d75.1343749000@kaos-source-31003.sea31.amazon.com>,
	<1345211930.10161.33.camel@zakaz.uk.xensource.com>
References: <1809175cdc9b3a606d75.1343749000@kaos-source-31003.sea31.amazon.com>
	<20120807032453.GB4324@US-SEA-R8XVZTX>
	<1345211930.10161.33.camel@zakaz.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Keir Fraser <keir@xen.org>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH DOCDAY v3] docs: improve documentation of
 Xen command line parameters [and 1 more messages]
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Matt Wilson writes ("[Xen-devel] [PATCH DOCDAY v3] docs: improve documentation of Xen command line parameters"):
> This change improves documentation for several Xen command line
> parameters. Some of the Itanium-specific options are now removed. A
> more thorough check should be performed to remove any other remnants.

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 30 15:00:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Aug 2012 15:00:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T76EY-0001S3-0B; Thu, 30 Aug 2012 15:00:38 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T76EV-0001Rm-S6
	for xen-devel@lists.xen.org; Thu, 30 Aug 2012 15:00:36 +0000
Received: from [85.158.138.51:12275] by server-2.bemta-3.messagelabs.com id
	EE/03-04862-3108F305; Thu, 30 Aug 2012 15:00:35 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-16.tower-174.messagelabs.com!1346338833!27630534!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEwOTA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28170 invoked from network); 30 Aug 2012 15:00:33 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Aug 2012 15:00:33 -0000
X-IronPort-AV: E=Sophos;i="4.80,341,1344211200"; d="scan'208";a="14273039"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	30 Aug 2012 15:00:32 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Thu, 30 Aug 2012 16:00:32 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1T76ES-0000up-AT; Thu, 30 Aug 2012 15:00:32 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1T76ES-0006Bw-62;
	Thu, 30 Aug 2012 16:00:32 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20543.32784.60568.562924@mariner.uk.xensource.com>
Date: Thu, 30 Aug 2012 16:00:32 +0100
To: Ian Campbell <Ian.Campbell@citrix.com>
Newsgroups: chiark.mail.xen.devel
In-Reply-To: <1345537876.28762.136.camel@zakaz.uk.xensource.com>
References: <alpine.DEB.2.00.1207241956230.14506@vega-c.dur.ac.uk>
	<20120724193604.GB29124@phenom.dumpdata.com>
	<1343205815.18971.43.camel@zakaz.uk.xensource.com>
	<alpine.DEB.2.00.1208121853070.7313@vega-a.dur.ac.uk>
	<1345209224.10161.21.camel@zakaz.uk.xensource.com>
	<alpine.DEB.2.00.1208202011290.8591@procyon.dur.ac.uk>
	<1345537876.28762.136.camel@zakaz.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	M A Young <m.a.young@durham.ac.uk>
Subject: [Xen-devel] [PATCH] README: Update references to PyXML to lxml
 (Was: Re: [PATCH] Re: remove dependency on PyXML from xen?)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("[Xen-devel] [PATCH] README: Update references to PyXML to lxml (Was: Re: [PATCH] Re: remove dependency on PyXML from xen?)"):
> README: Update references to PyXML to lxml

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 30 15:07:26 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Aug 2012 15:07:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T76Ku-0001ym-GH; Thu, 30 Aug 2012 15:07:12 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andreslc@gridcentric.ca>) id 1T76Ks-0001yQ-QM
	for xen-devel@lists.xensource.com; Thu, 30 Aug 2012 15:07:10 +0000
Received: from [85.158.143.35:57883] by server-3.bemta-4.messagelabs.com id
	CE/AD-08232-E918F305; Thu, 30 Aug 2012 15:07:10 +0000
X-Env-Sender: andreslc@gridcentric.ca
X-Msg-Ref: server-4.tower-21.messagelabs.com!1346339219!5502198!1
X-Originating-IP: [209.85.210.171]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5331 invoked from network); 30 Aug 2012 15:07:01 -0000
Received: from mail-iy0-f171.google.com (HELO mail-iy0-f171.google.com)
	(209.85.210.171)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Aug 2012 15:07:01 -0000
Received: by iabz25 with SMTP id z25so4225531iab.30
	for <xen-devel@lists.xensource.com>;
	Thu, 30 Aug 2012 08:06:59 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=subject:mime-version:content-type:from:in-reply-to:date:cc
	:content-transfer-encoding:message-id:references:to:x-mailer
	:x-gm-message-state;
	bh=Mz4RvB++HlfOERXCylQ0SIONjYqxfEcFfYp5RZsE0Ow=;
	b=Wwh3qXcubjLWZLeIlZCAZUxGrvIydBukLv1DUpN+RBIDdIePpB8iU1WrtaUPYc/9rT
	PqbiaOw4gIcPoc8M0/ya73SGXW93XczKcv2hXCZZzSt5hNIW3aP7xrMMy6TJvikjzJbs
	3UuHzauTCcXqZI0YIkwJ8VIwWx6TSTrBkfjEKU4e7GJgvGUR63/8Zg0N/jOwqHpWkwvS
	8p0KvxT6eRZESUVYBOAHAulil+FTrG8huq1NiCE+5Xb68iI5Z1+NTT8NFZJAmQtfh/p9
	GPDbIKU/uYjmGwPbFfGDGqRwvDUAE3UvIpEPOERslOFfWJZNfegByFDfpjpp4PX5wAHP
	OnlA==
Received: by 10.50.156.196 with SMTP id wg4mr807508igb.54.1346339219427;
	Thu, 30 Aug 2012 08:06:59 -0700 (PDT)
Received: from [192.168.7.210] ([206.223.182.18])
	by mx.google.com with ESMTPS id q1sm1502252igj.15.2012.08.30.08.06.58
	(version=TLSv1/SSLv3 cipher=OTHER);
	Thu, 30 Aug 2012 08:06:58 -0700 (PDT)
Mime-Version: 1.0 (Apple Message framework v1278)
From: Andres Lagar-Cavilla <andreslc@gridcentric.ca>
In-Reply-To: <1346331492-15027-2-git-send-email-david.vrabel@citrix.com>
Date: Thu, 30 Aug 2012 11:07:06 -0400
Message-Id: <79B15458-BC8D-4D8C-80DD-F9BD44A1AE97@gridcentric.ca>
References: <1346331492-15027-1-git-send-email-david.vrabel@citrix.com>
	<1346331492-15027-2-git-send-email-david.vrabel@citrix.com>
To: David Vrabel <david.vrabel@citrix.com>
X-Mailer: Apple Mail (2.1278)
X-Gm-Message-State: ALoCoQnRRc58LZr0ZvNcirUtxAX6JXy6Niow6+P/+w+xB9mD+h6Pd0kqb4V2p8UrxeUjMij8AEDD
Cc: xen-devel@lists.xensource.com,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [PATCH 1/2] xen/mm: return more precise error from
	xen_remap_domain_range()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Aug 30, 2012, at 8:58 AM, David Vrabel wrote:

> From: David Vrabel <david.vrabel@citrix.com>
> 
> Callers of xen_remap_domain_range() need to know if the remap failed
> because the frame is currently paged out, so they can retry the remap
> later on.  Return -ENOENT in this case.
> 
> This assumes that the error codes returned by Xen are a subset of
> those used by the kernel.  It is unclear if this is defined as part of
> the hypercall ABI.
> 
> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Acked-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>

Thanks,
Andres
> ---
> arch/x86/xen/mmu.c |    4 ++--
> 1 files changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
> index b65a761..fb187ea 100644
> --- a/arch/x86/xen/mmu.c
> +++ b/arch/x86/xen/mmu.c
> @@ -2355,8 +2355,8 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
> 		if (err)
> 			goto out;
> 
> -		err = -EFAULT;
> -		if (HYPERVISOR_mmu_update(mmu_update, batch, NULL, domid) < 0)
> +		err = HYPERVISOR_mmu_update(mmu_update, batch, NULL, domid);
> +		if (err < 0)
> 			goto out;
> 
> 		nr -= batch;
> -- 
> 1.7.2.5
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 30 15:12:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Aug 2012 15:12:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T76Pl-0002IG-8T; Thu, 30 Aug 2012 15:12:13 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T76Pj-0002I9-Sq
	for xen-devel@lists.xensource.com; Thu, 30 Aug 2012 15:12:12 +0000
Received: from [85.158.138.51:64178] by server-10.bemta-3.messagelabs.com id
	E8/31-10411-BC28F305; Thu, 30 Aug 2012 15:12:11 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-6.tower-174.messagelabs.com!1346339523!19696725!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEwOTA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11177 invoked from network); 30 Aug 2012 15:12:03 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-6.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Aug 2012 15:12:03 -0000
X-IronPort-AV: E=Sophos;i="4.80,341,1344211200"; d="scan'208";a="14273383"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	30 Aug 2012 15:12:03 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Thu, 30 Aug 2012 16:12:03 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1T76Pb-000117-16; Thu, 30 Aug 2012 15:12:03 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1T76Pa-0006D6-SZ;
	Thu, 30 Aug 2012 16:12:02 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20543.33474.765508.72674@mariner.uk.xensource.com>
Date: Thu, 30 Aug 2012 16:12:02 +0100
To: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Newsgroups: chiark.mail.xen.devel
In-Reply-To: <alpine.DEB.2.02.1206291211320.27860@kaball.uk.xensource.com>
References: <CABs9EjnLJ4ibhFEU0niEeciqu+y9GhqzD7YPkoTg-DijZqaqXw@mail.gmail.com>
	<4FE06671020000780008A946@nat28.tlf.novell.com>
	<CABs9EjnqSSnZ5dY-bGTVVn9kq+w3rnBGiNEKWQizZFDe_AXV1w@mail.gmail.com>
	<4FE21084020000780008AE60@nat28.tlf.novell.com>
	<CABs9Ejk2Fz9VWup26X32JkCYzQTN2pke3NGUiXfjCZczuGC+tw@mail.gmail.com>
	<4FE83444020000780008BA00@nat28.tlf.novell.com>
	<CABs9EjkibDNKkbXz4Gq_hNuPkgYU9NZYzhcvZySa6vX-7Bp2SQ@mail.gmail.com>
	<alpine.DEB.2.02.1206261349200.27860@kaball.uk.xensource.com>
	<CABs9Ej=3JPgRexVaTSFj3A1WX_O9+USAZy4XMnHxRHWJFzia=w@mail.gmail.com>
	<alpine.DEB.2.02.1206271401520.27860@kaball.uk.xensource.com>
	<CABs9Ej=63Coc9Soi4wbbM63d-Cm=AmcDj1AsRg62dYpicoqvvQ@mail.gmail.com>
	<alpine.DEB.2.02.1206291211320.27860@kaball.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: rolu@roce.org, xen-devel@lists.xensource.com,
	Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH] qemu-xen-trad: fix msi_translate with PV
	event	delivery
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Stefano Stabellini writes ("[Xen-devel] [PATCH] qemu-xen-trad: fix msi_translate with PV event delivery"):
> When switching from msitranslate to straight msi we need to make sure
> that we respect PV event delivery for the msi if the guest asked for it:
> 
> - completely disable MSI on the device in pt_disable_msi_translate;
> - then enable MSI again (pt_msi_setup), mapping the correct pirq to it.

I have applied this to 4.2.

Thanks,
Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 30 15:15:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Aug 2012 15:15:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T76SN-0002RC-R4; Thu, 30 Aug 2012 15:14:55 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T76SM-0002Qw-5Q
	for xen-devel@lists.xensource.com; Thu, 30 Aug 2012 15:14:54 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1346339688!3180957!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEwOTA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7538 invoked from network); 30 Aug 2012 15:14:48 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Aug 2012 15:14:48 -0000
X-IronPort-AV: E=Sophos;i="4.80,341,1344211200"; d="scan'208";a="14273447"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	30 Aug 2012 15:14:47 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Thu, 30 Aug 2012 16:14:47 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1T76SF-00013J-Ku; Thu, 30 Aug 2012 15:14:47 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1T76SF-0006E9-GC;
	Thu, 30 Aug 2012 16:14:47 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20543.33639.161709.272449@mariner.uk.xensource.com>
Date: Thu, 30 Aug 2012 16:14:47 +0100
To: Ian Campbell <Ian.Campbell@citrix.com>
Newsgroups: chiark.mail.xen.devel
In-Reply-To: <1345711732.12501.62.camel@zakaz.uk.xensource.com>
References: <1345566265-11618-1-git-send-email-david.vrabel@citrix.com>
	<1345711732.12501.62.camel@zakaz.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	David Vrabel <david.vrabel@citrix.com>
Subject: Re: [Xen-devel] [PATCH] xenconsoled: clean-up after all dead domains
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [Xen-devel] [PATCH] xenconsoled: clean-up after all dead domains"):
> On Tue, 2012-08-21 at 17:24 +0100, David Vrabel wrote:
> > From: David Vrabel <david.vrabel@citrix.com>

Thanks, and to Ian for the review.

> Acked-by: Ian Campbell <ian.campbell@citrix.com>

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

Good for 4.2.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 30 15:18:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Aug 2012 15:18:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T76VB-0002c5-ER; Thu, 30 Aug 2012 15:17:49 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T76V9-0002bv-HW
	for xen-devel@lists.xen.org; Thu, 30 Aug 2012 15:17:47 +0000
Received: from [85.158.138.51:39472] by server-9.bemta-3.messagelabs.com id
	82/C6-15390-A148F305; Thu, 30 Aug 2012 15:17:46 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-8.tower-174.messagelabs.com!1346339866!27687589!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEwOTA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11839 invoked from network); 30 Aug 2012 15:17:46 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Aug 2012 15:17:46 -0000
X-IronPort-AV: E=Sophos;i="4.80,341,1344211200"; d="scan'208";a="14273511"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	30 Aug 2012 15:17:39 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Thu, 30 Aug 2012 16:17:39 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1T76V1-000166-4Q; Thu, 30 Aug 2012 15:17:39 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1T76V0-0006EZ-VG;
	Thu, 30 Aug 2012 16:17:38 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20543.33810.862529.50557@mariner.uk.xensource.com>
Date: Thu, 30 Aug 2012 16:17:38 +0100
To: Ian Campbell <Ian.Campbell@citrix.com>, Paul Durrant
	<paul.durrant@citrix.com>
Newsgroups: chiark.mail.xen.devel
In-Reply-To: <4b1f399193f5e363c2b4.1345564497@cosworth.uk.xensource.com>,
	<1345711250.12501.58.camel@zakaz.uk.xensource.com>
References: <4b1f399193f5e363c2b4.1345564497@cosworth.uk.xensource.com>
	<1345711250.12501.58.camel@zakaz.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: Keir Fraser <keir@xen.org>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] Remove VM genearation ID device and
 incr_generationid from build_info [and 1 more messages]
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Paul Durrant writes ("[Xen-devel] [PATCH] Remove VM genearation ID device and incr_generationid from build_info"):
> Remove VM genearation ID device and incr_generationid from build_info.

Thanks.

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

for 4.2.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 30 15:19:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Aug 2012 15:19:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T76We-0002il-UI; Thu, 30 Aug 2012 15:19:20 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T76Wd-0002iZ-Ez
	for xen-devel@lists.xen.org; Thu, 30 Aug 2012 15:19:19 +0000
Received: from [85.158.143.99:20368] by server-2.bemta-4.messagelabs.com id
	59/E4-21239-6748F305; Thu, 30 Aug 2012 15:19:18 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-4.tower-216.messagelabs.com!1346339957!22384949!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEwOTA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4029 invoked from network); 30 Aug 2012 15:19:17 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-4.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Aug 2012 15:19:17 -0000
X-IronPort-AV: E=Sophos;i="4.80,341,1344211200"; d="scan'208";a="14273544"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	30 Aug 2012 15:19:17 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Thu, 30 Aug 2012 16:19:16 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1T76Wa-00019q-MV; Thu, 30 Aug 2012 15:19:16 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1T76Wa-0006Et-Ip;
	Thu, 30 Aug 2012 16:19:16 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20543.33906.764835.984658@mariner.uk.xensource.com>
Date: Thu, 30 Aug 2012 16:19:14 +0100
To: Roger Pau Monne <roger.pau@citrix.com>
Newsgroups: chiark.mail.xen.devel
In-Reply-To: <1345740826-49830-1-git-send-email-roger.pau@citrix.com>
References: <1345740826-49830-1-git-send-email-roger.pau@citrix.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: Ian Campbell <Ian.Campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 0/3] hotplug/NetBSD: remaining block script
	fixes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Roger Pau Monne writes ("[Xen-devel] [PATCH 0/3] hotplug/NetBSD: remaining block script fixes"):
> Remaining patches from the hotplug script series for NetBSD. Expanded 
> with Ian Campbell recommendations.
> 
> The xenstore_write fix has been moved to a pre-patch, and the error 
> function has been expanded to write the script error to hotplug-error 
> (in a pre-patch also).

All three:

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

for 4.2.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 30 15:32:39 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Aug 2012 15:32:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T76jH-0002zT-BO; Thu, 30 Aug 2012 15:32:23 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T76jF-0002zO-MX
	for xen-devel@lists.xen.org; Thu, 30 Aug 2012 15:32:21 +0000
Received: from [85.158.138.51:61019] by server-3.bemta-3.messagelabs.com id
	32/69-21322-4878F305; Thu, 30 Aug 2012 15:32:20 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-3.tower-174.messagelabs.com!1346340739!19653896!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEwOTA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13317 invoked from network); 30 Aug 2012 15:32:20 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-3.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Aug 2012 15:32:20 -0000
X-IronPort-AV: E=Sophos;i="4.80,341,1344211200"; d="scan'208";a="14273802"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	30 Aug 2012 15:31:53 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Thu, 30 Aug 2012 16:31:53 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1T76in-0001I0-E8; Thu, 30 Aug 2012 15:31:53 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1T76in-0006G3-AT;
	Thu, 30 Aug 2012 16:31:53 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20543.34665.170118.455635@mariner.uk.xensource.com>
Date: Thu, 30 Aug 2012 16:31:53 +0100
To: Ian Campbell <Ian.Campbell@citrix.com>
Newsgroups: chiark.mail.xen.devel
In-Reply-To: <1345710212.12501.55.camel@zakaz.uk.xensource.com>
References: <20120822214709.296550@gmx.net>
	<1345703962.23624.57.camel@dagon.hellion.org.uk>
	<5035E421020000780008A601@nat28.tlf.novell.com>
	<1345707105.12501.38.camel@zakaz.uk.xensource.com>
	<1345708742.12501.48.camel@zakaz.uk.xensource.com>
	<1345710212.12501.55.camel@zakaz.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: "p.d@gmx.de" <p.d@gmx.de>, Jan Beulich <jbeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] do not remove kernels or modules on
 uninstall. (Was: Re: make uninstall can delete xen-kernels)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [Xen-devel] [PATCH] do not remove kernels or modules on uninstall. (Was: Re: make uninstall can delete xen-kernels)"):
> That broader rework is certainly a post 4.2 thing IMHO. I'm in two minds
> about this patch as a 4.2 thing, but given that the regression happened
> due to the switch to autoconf in 4.2 I think it might be good to take,
> even though as a %age of what we install the delta is pretty
> insignificant.

I'm happy with both of these for 4.2.

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> insignificant.

I'm happy with both of these for 4.2.

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 30 15:34:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Aug 2012 15:34:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T76l2-00035j-SC; Thu, 30 Aug 2012 15:34:12 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T76l1-00035Y-6M
	for xen-devel@lists.xen.org; Thu, 30 Aug 2012 15:34:11 +0000
Received: from [85.158.138.51:17971] by server-1.bemta-3.messagelabs.com id
	08/7D-16377-2F78F305; Thu, 30 Aug 2012 15:34:10 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-16.tower-174.messagelabs.com!1346340849!27638027!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEwOTA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31591 invoked from network); 30 Aug 2012 15:34:10 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Aug 2012 15:34:10 -0000
X-IronPort-AV: E=Sophos;i="4.80,341,1344211200"; d="scan'208";a="14273847"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	30 Aug 2012 15:34:09 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Thu, 30 Aug 2012 16:34:09 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1T76kz-0001Iq-Bw; Thu, 30 Aug 2012 15:34:09 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1T76kz-0006GX-8A;
	Thu, 30 Aug 2012 16:34:09 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20543.34801.132186.226088@mariner.uk.xensource.com>
Date: Thu, 30 Aug 2012 16:34:09 +0100
To: Matt Wilson <msw@amazon.com>
Newsgroups: chiark.mail.xen.devel
In-Reply-To: <d7e4efa17fb0b9b69c58.1346193435@u002268147cd4502c336d.ant.amazon.com>
References: <d7e4efa17fb0b9b69c58.1346193435@u002268147cd4502c336d.ant.amazon.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: Ian Campbell <ian.campbell@citrix.com>, Jan Beulich <JBeulich@suse.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] tools: remove vestigial default_lib.m4
 macros and adjust substitutions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Matt Wilson writes ("[Xen-devel] [PATCH] tools: remove vestigial default_lib.m4 macros and adjust substitutions"):
> LIB_PATH is no longer used, so the AX_DEFAULT_LIB macro is no longer
> needed. Additionally lower case make variables are now used as
> autoconf substitutions, which allows for more correct overrides at
> build time.
> 
> I've checked the file layout in dist/install from the build made
> before this change versus after with ./configure values of:
>  1) ./configure (no flags provided)
>  2) ./configure --libdir=/usr/lib/x86_64-linux-gnu (Debian style)
>  3) ./configure --libdir='${exec_prefix}/lib' (late variable expansion)
> 
> Signed-off-by: Matt Wilson <msw@amazon.com>

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

Good for 4.2.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 30 15:39:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Aug 2012 15:39:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T76qH-0003N1-QW; Thu, 30 Aug 2012 15:39:37 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T76qG-0003Mw-BM
	for xen-devel@lists.xen.org; Thu, 30 Aug 2012 15:39:36 +0000
Received: from [85.158.143.99:45368] by server-2.bemta-4.messagelabs.com id
	85/94-21239-7398F305; Thu, 30 Aug 2012 15:39:35 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-9.tower-216.messagelabs.com!1346341174!27689247!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEwOTA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22590 invoked from network); 30 Aug 2012 15:39:34 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-9.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Aug 2012 15:39:34 -0000
X-IronPort-AV: E=Sophos;i="4.80,341,1344211200"; d="scan'208";a="14273959"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	30 Aug 2012 15:39:34 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Thu, 30 Aug 2012 16:39:34 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1T76qE-0001Lr-3s; Thu, 30 Aug 2012 15:39:34 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1T76qE-0006H4-06;
	Thu, 30 Aug 2012 16:39:34 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20543.35124.886240.498538@mariner.uk.xensource.com>
Date: Thu, 30 Aug 2012 16:39:32 +0100
To: Matt Wilson <msw@amazon.com>
Newsgroups: chiark.mail.xen.devel
In-Reply-To: <674b694814c8fb4f3c4b.1346274073@u002268147cd4502c336d.ant.amazon.com>
References: <patchbomb.1346274072@u002268147cd4502c336d.ant.amazon.com>
	<674b694814c8fb4f3c4b.1346274073@u002268147cd4502c336d.ant.amazon.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 1 of 3] tools: check for documentation
 generation tools at configure time
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Matt Wilson writes ("[Xen-devel] [PATCH 1 of 3] tools: check for documentation generation tools at configure time"):
> It is sometimes hard to discover all the optional documentation
> generation tools that should be on a system to build all available Xen
> documentation. By checking for documentation generation tools at
> ./configure time and displaying a warning, Xen packagers will more
> easily learn about new optional build dependencies, like markdown,
> when they are introduced.

The way you've done this seems a bit odd to me:

> diff -r d7e4efa17fb0 -r 674b694814c8 config/Tools.mk.in
> --- a/config/Tools.mk.in	Tue Aug 28 15:35:08 2012 -0700
> +++ b/config/Tools.mk.in	Wed Aug 29 11:07:52 2012 -0700
> @@ -22,6 +22,17 @@
>  LD86                := @LD86@
>  BCC                 := @BCC@
>  IASL                := @IASL@
> +PS2PDF              := @PS2PDF@
> +DVIPS               := @DVIPS@

But this leaves settings of PS2PDF := ps2pdf in docs/Docs.mk AFAICT.
Surely this should be all plumbed through to the same places ?
Or is this part of the fallback mechanism ?  It's not clear.

You do say this:

> diff -r d7e4efa17fb0 -r 674b694814c8 docs/Makefile
> --- a/docs/Makefile	Tue Aug 28 15:35:08 2012 -0700
> +++ b/docs/Makefile	Wed Aug 29 11:07:52 2012 -0700
> @@ -3,6 +3,11 @@
>  XEN_ROOT=$(CURDIR)/..
>  include $(XEN_ROOT)/Config.mk
>  include $(XEN_ROOT)/docs/Docs.mk
> +# The default documentation tools specified in Docs.mk can be
> +# persistently overridden by the user via ./configure, but running
> +# ./configure is not required to build the docs tree. Thus Tools.mk is
> +# optionally included.
> +-include $(XEN_ROOT)/config/Tools.mk

But I think setting the same thing in two places like this does need
to be documented more clearly.  In particular if I'm right about the
purpose of the two settings, each set needs a comment pointing at the
other so that (a) they can be both changed at once when either is
changed (b) people don't get confused when the setting they're
changing has no effect.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 30 15:40:24 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Aug 2012 15:40:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T76qo-0003P7-7r; Thu, 30 Aug 2012 15:40:10 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T76qn-0003Ot-DD
	for xen-devel@lists.xen.org; Thu, 30 Aug 2012 15:40:09 +0000
Received: from [85.158.143.35:5068] by server-1.bemta-4.messagelabs.com id
	3B/EC-12504-8598F305; Thu, 30 Aug 2012 15:40:08 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1346341206!12649528!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEwOTA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13785 invoked from network); 30 Aug 2012 15:40:07 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Aug 2012 15:40:07 -0000
X-IronPort-AV: E=Sophos;i="4.80,341,1344211200"; d="scan'208";a="14273968"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	30 Aug 2012 15:40:05 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Thu, 30 Aug 2012 16:40:05 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1T76qj-0001M8-E0; Thu, 30 Aug 2012 15:40:05 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1T76qj-0006HD-9q;
	Thu, 30 Aug 2012 16:40:05 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20543.35157.189635.449787@mariner.uk.xensource.com>
Date: Thu, 30 Aug 2012 16:40:05 +0100
To: Matt Wilson <msw@amazon.com>
Newsgroups: chiark.mail.xen.devel
In-Reply-To: <f5a57d912d9f57f19472.1346274075@u002268147cd4502c336d.ant.amazon.com>
References: <patchbomb.1346274072@u002268147cd4502c336d.ant.amazon.com>
	<f5a57d912d9f57f19472.1346274075@u002268147cd4502c336d.ant.amazon.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 3 of 3] tools/docs: allow markdown_py to be
	used
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Matt Wilson writes ("[Xen-devel] [PATCH 3 of 3] tools/docs: allow markdown_py to be used"):
> An alternative Python markdown implementation is available on some
> systems as markdown_py, so look for that path as well.

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

Good for 4.2.

Subject of course to the other patches going in.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 30 15:41:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Aug 2012 15:41:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T76rq-0003VM-Mm; Thu, 30 Aug 2012 15:41:14 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T76ro-0003V5-Ml
	for xen-devel@lists.xen.org; Thu, 30 Aug 2012 15:41:12 +0000
Received: from [85.158.138.51:18238] by server-8.bemta-3.messagelabs.com id
	17/6A-24700-8998F305; Thu, 30 Aug 2012 15:41:12 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-11.tower-174.messagelabs.com!1346341269!27733821!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEwOTA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31813 invoked from network); 30 Aug 2012 15:41:09 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-11.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Aug 2012 15:41:09 -0000
X-IronPort-AV: E=Sophos;i="4.80,341,1344211200"; d="scan'208";a="14273986"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	30 Aug 2012 15:41:09 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Thu, 30 Aug 2012 16:41:09 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1T76rk-0001N6-RD; Thu, 30 Aug 2012 15:41:08 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1T76rk-0006HO-MA;
	Thu, 30 Aug 2012 16:41:08 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20543.35220.572436.112964@mariner.uk.xensource.com>
Date: Thu, 30 Aug 2012 16:41:08 +0100
To: Matt Wilson <msw@amazon.com>, Keir Fraser <keir@xen.org>
Newsgroups: chiark.mail.xen.devel
In-Reply-To: <9a308e4fdc19336ce3ca.1346274074@u002268147cd4502c336d.ant.amazon.com>
References: <patchbomb.1346274072@u002268147cd4502c336d.ant.amazon.com>
	<9a308e4fdc19336ce3ca.1346274074@u002268147cd4502c336d.ant.amazon.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 2 of 3] docs: use elinks to format
 markdown-generated html to text
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Matt Wilson writes ("[Xen-devel] [PATCH 2 of 3] docs: use elinks to format markdown-generated html to text"):
> Markdown, while easy to read and write, isn't the most consumable
> format for users reading documentation on a terminal. This patch uses
> elinks to format markdown-produced HTML into text files.
...
>  txt/%.txt: %.markdown
> -	$(INSTALL_DIR) $(@D)
> -	cp $< $@.tmp
> -	$(call move-if-changed,$@.tmp,$@)
> +	@$(INSTALL_DIR) $(@D)
> +	set -e ; \
> +	if which $(MARKDOWN) >/dev/null 2>&1 && \
> +		which $(HTMLDUMP) >/dev/null 2>&1 ; then \
> +		echo "Running markdown to generate $*.txt ... "; \

So now we have two efforts to try to find markdown, one in configure
and one here.
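The duplication is easy to see if the guarded conversion step is written out as plain shell. The sketch below is not the actual patch: the `render_doc` helper name is invented here, and the default tool names are assumptions; it only illustrates the shape of the rule (probe for `$(MARKDOWN)` and `$(HTMLDUMP)`, convert if both exist, otherwise ship the raw markdown), which is the probing configure already does.

```shell
#!/bin/sh
# Sketch only (render_doc is a hypothetical helper, not from the patch).
# MARKDOWN and HTMLDUMP mirror the Makefile variables; if either tool is
# missing, fall back to copying the markdown source as the text output.
render_doc() {
    src=$1 out=$2
    if command -v "${MARKDOWN:-markdown}" >/dev/null 2>&1 &&
       command -v "${HTMLDUMP:-elinks}" >/dev/null 2>&1; then
        "${MARKDOWN:-markdown}" "$src" > "$out.html.tmp"
        "${HTMLDUMP:-elinks}" -dump "$out.html.tmp" > "$out"
        rm -f "$out.html.tmp"
    else
        cp "$src" "$out"    # tools absent: plain-text fallback
    fi
}

# Demo with the fallback path forced, so it runs even without the tools:
printf '# xl command\n' > /tmp/demo.markdown
MARKDOWN=/nonexistent render_doc /tmp/demo.markdown /tmp/demo.txt
```

If configure recorded the tool paths once, the rule could drop the `which` probes entirely and just test whether the variables are set.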

Keir, would it be OK if we simply declared that you must run configure
before you can "make docs"?

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 30 15:42:39 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Aug 2012 15:42:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T76t5-0003eg-5x; Thu, 30 Aug 2012 15:42:31 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <thanos.makatos@citrix.com>) id 1T76t2-0003d4-KY
	for xen-devel@lists.xen.org; Thu, 30 Aug 2012 15:42:29 +0000
X-Env-Sender: thanos.makatos@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1346341339!8832858!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEwOTA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7989 invoked from network); 30 Aug 2012 15:42:20 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Aug 2012 15:42:20 -0000
X-IronPort-AV: E=Sophos;i="4.80,341,1344211200"; d="scan'208";a="14273999"
Received: from lonpmailmx02.citrite.net ([10.30.203.163])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	30 Aug 2012 15:41:38 +0000
Received: from LONPMAILBOX01.citrite.net ([10.30.224.160]) by
	LONPMAILMX02.citrite.net ([10.30.203.163]) with mapi; Thu, 30 Aug 2012
	16:41:38 +0100
From: Thanos Makatos <thanos.makatos@citrix.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Date: Thu, 30 Aug 2012 16:41:37 +0100
Thread-Topic: [Xen-devel] RFC: blktap3
Thread-Index: Ac17yZKo0OP8VBOYRr+gMgscgd+koAK+4b+Q
Message-ID: <4B45B535F7F6BE4CB1C044ED5115CDDE011A6D541036@LONPMAILBOX01.citrite.net>
References: <4B45B535F7F6BE4CB1C044ED5115CDDE0107F6C1D44C@LONPMAILBOX01.citrite.net>
	<1345133376.30865.45.camel@zakaz.uk.xensource.com>
In-Reply-To: <1345133376.30865.45.camel@zakaz.uk.xensource.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] RFC: blktap3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

VGhhbmtzIGZvciB5b3VyIGNvbW1lbnRzLCBJYW4sIEknbGwgYWRkcmVzcyB0aGVtIGJlZm9yZSBy
ZXBvc3RpbmcuDQoNClRvIHN0YXJ0IHdpdGgsIHlvdSBzYXkgdGhhdCBJIHNob3VsZCByZXBsYWNl
IExJQlhMX0RJU0tfQkFDS0VORF9UQVA6DQo+ID4gICAgICAgICAgZGlzay0+YmFja2VuZCA9IExJ
QlhMX0RJU0tfQkFDS0VORF9UQVA7DQo+ID4gICAgICB9IGVsc2UgaWYgKCFzdHJjbXAoYmFja2Vu
ZF90eXBlLCAicWRpc2siKSkgew0KPiA+ICAgICAgICAgIGRpc2stPmJhY2tlbmQgPSBMSUJYTF9E
SVNLX0JBQ0tFTkRfUURJU0s7DQo+ID4gKyAgICB9IGVsc2UgaWYgKCFzdHJjbXAoYmFja2VuZF90
eXBlLCAieGVuaW8iKSkgew0KPiA+ICsgICAgICAgIGRpc2stPmJhY2tlbmQgPSBMSUJYTF9ESVNL
X0JBQ0tFTkRfWEVOSU87DQo+IA0KPiBJIHRoaW5rIHlvdSB3YW50IHRvIHJlcGxhY2UgTElCWExf
RElTS19CQUNLRU5EX1RBUCByYXRoZXIgdGhhbiBhZGQgYQ0KPiBuZXcgb25lLiBZb3UgY291bGQg
YWxzbyBzdGVhbCB0aGUgbmFtZSBpZiB5b3UgbGlrZSBJIHJlY2tvbi4NCkJ1dCBpbiB0b29scy9s
aWJ4bC9saWJ4bC5jOjE4NzYsIGxpYnhsX19ibGt0YXBfZGV2cGF0aCBpcyBjYWxsZWQgd2hpY2gg
c2VlbXMgYmxrdGFwMiBkZXBlbmRhbnQsIHNvIHdlIG5lZWQgYSBuZXcgYmFja2VuZCB0eXBlIHRv
IGJlIGFibGUgdG8gdXNlIGJsa3RhcDIgYWxvbmcgd2l0aCBibGt0YXAzLCBubz8NCg0KPiAtLS0t
LU9yaWdpbmFsIE1lc3NhZ2UtLS0tLQ0KPiBGcm9tOiBJYW4gQ2FtcGJlbGwNCj4gU2VudDogMTYg
QXVndXN0IDIwMTIgMTc6MTANCj4gVG86IFRoYW5vcyBNYWthdG9zDQo+IENjOiB4ZW4tZGV2ZWxA
bGlzdHMueGVuLm9yZw0KPiBTdWJqZWN0OiBSZTogW1hlbi1kZXZlbF0gUkZDOiBibGt0YXAzDQo+
IA0KPiBPbiBUaHUsIDIwMTItMDgtMDkgYXQgMTU6MDMgKzAxMDAsIFRoYW5vcyBNYWthdG9zIHdy
b3RlOg0KPiA+IEnigJlkIGxpa2UgdG8gaW50cm9kdWNlIGJsa3RhcDM6IGVzc2VudGlhbGx5IGJs
a3RhcDIgd2l0aG91dCB0aGUgbmVlZA0KPiBvZg0KPiA+IGJsa2JhY2suIFRoaXMgaGFzIGJlZW4g
ZGV2ZWxvcGVkIGJ5IFNhbnRvc2ggSm9kaCwgYW5kIEnigJlsbCBtYWludGFpbg0KPiA+IGl0Lg0K
PiANCj4gSSB0aGluayB5b3UgYXJlIHdvcmtpbmcgb24gcmVwb3N0aW5nIGluIGEgbW9yZSBtYW5h
Z2VhYmxlIGZvcm0gYnV0DQo+IGhlcmUncyBhIGZldyB0aGluZ3Mgd2hpY2ggSSBub3RpY2VkIG9u
IGEgdG9wIGxldmVsIHNjcm9sbCB0aG91Z2guIChJDQo+IG1pZ2h0IGJlIHJlcGVhdGluZyBteXNl
bGYgb2NjYXNpb25hbGx5IGZyb20gdGhlIHF1aWNrIGNvbW1lbnRzIEkgbWFkZQ0KPiBlYXJsaWVy
LCBzb3JyeSkNCj4gDQo+ID4gZGlmZiAtLWdpdCBhL3Rvb2xzL01ha2VmaWxlIGIvdG9vbHMvTWFr
ZWZpbGUNCj4gPiAtLS0gYS90b29scy9NYWtlZmlsZQ0KPiA+ICsrKyBiL3Rvb2xzL01ha2VmaWxl
DQo+ID4gQEAgLTIwMSwzICsyMDMsMjAgQEANCj4gPg0KPiA+ICBzdWJkaXItZGlzdGNsZWFuLWZp
cm13YXJlOiAucGhvbnkNCj4gPiAgCSQoTUFLRSkgLUMgZmlybXdhcmUgZGlzdGNsZWFuDQo+ID4g
Kw0KPiA+ICtzdWJkaXItYWxsLWJsa3RhcDMgc3ViZGlyLWluc3RhbGwtYmxrdGFwMzogLnBob255
DQo+ID4gKwlzb3VyY2U9LjsgXA0KPiA+ICsJY2QgYmxrdGFwMzsgXA0KPiA+ICsJLi9hdXRvZ2Vu
LnNoOyBcDQo+IA0KPiBJZiBhbnl0aGluZyB0aGlzIHNob3VsZCBiZSBjYWxsZWQgZnJvbSB0aGUg
dG9wLWxldmVsIC4vYXV0b2dlbi5zaCBhbmQNCj4gbm90IGhlcmUuIFdlIHNob3VsZG4ndCBleHBl
Y3QgZW5kIHVzZXJzIHRvIGhhdmUgYXV0b2NvbmYgYXZhaWxhYmxlLg0KPiANCj4gPiArCS4vY29u
ZmlndXJlIFwNCj4gDQo+IEkgdGhpbmsgYXV0b2NvbmYgaGFzIGEgY29uc3RydWN0IHdoaWNoIGNh
biBjYXVzZSBjb25maWd1cmUgdG8gY2FsbA0KPiBvdGhlciBzdWItY29uZmlndXJlcyBpbiBzdWJk
aXJzLiBJZiBJJ20gcmlnaHQgdGhlbiBpdCB3b3VsZCBiZSBiZXR0ZXINCj4gdG8gdXNlIHRoaXMg
aW5zdGVhZCBvZiBjYWxsaW5nIGl0IGhlcmUuDQo+IA0KPiBIb3dldmVyIEkgdGhpbmsgdGhhdCB0
aGUgcmVhbCBjb3JyZWN0IGFuc3dlciBpcyB0aGF0IGJsa3RhcDMgc2hvdWxkbid0DQo+IGhhdmUg
aXQncyBvd24gY29uZmlndXJlIGFueXdheSBidXQgc2hvdWxkIHNpbXBseSBhZGQgdGhlIHRlc3Rz
IHdoaWNoIGl0DQo+IG5lZWRzIHRvIHRoZSBnbG9iYWwgdG9vbHMgbGV2ZWwgb25lIGFuZCB1c2Ug
dGhlIHJlc3VsdCBsaWtlIGV2ZXJ5b25lDQo+IGVsc2UuDQo+IA0KPiA+ICsJQ0ZMQUdTPSItSSQo
WEVOX1JPT1QpL3Rvb2xzL2luY2x1ZGUgXA0KPiA+ICsJCS1JJChYRU5fUk9PVCkvdG9vbHMvbGli
eGMgXA0KPiA+ICsJCS1JJChYRU5fUk9PVCkvdG9vbHMveGVuc3RvcmUiIFwNCj4gPiArCUxERkxB
R1M9Ii1MJChYRU5fUk9PVCkvdG9vbHMveGVuc3RvcmUgXA0KPiA+ICsJCSAtTCQoWEVOX1JPT1Qp
L3Rvb2xzL2xpYnhjIjsgXA0KPiANCj4gWW91ciBNYWtlZmlsZXMgc2hvdWxkIHN0YXJ0IHdpdGgN
Cj4gDQo+ICAgICAgICAgWEVOX1JPT1QgPSAkKENVUkRJUikvLi4vLi4NCj4gICAgICAgICBpbmNs
dWRlICQoWEVOX1JPT1QpL3Rvb2xzL1J1bGVzLm1rDQo+IA0KPiBBbmQgdGhlbiBtYWtlIHVzZSBv
ZiB0aGUgdmFyaWFibGVzIGRlZmluZWQgaW4gUnVsZXMubWsuIGUuZy4NCj4gQ0ZMQUdTX2xpYnhl
bmN0cmwsIExJQlNfbGlieGVuY3RybCBldGMgcmF0aGVyIHRoYW4gZG9pbmcgdGhpcy4NCj4gDQo+
IEkgc3VwcG9zZSBibGt0YXAzIG9uY2UgbGl2ZWQgb3V0c2lkZSBvZiB0aGUgeGVuIHRyZWUgYW5k
IHRoaXMgKGFuZCB0aGUNCj4gY29uZmlndXJleSkgaXMgYSBoYW5nb3ZlciBmcm9tIHRoYXQuIEJ1
dCB3ZSBzaG91bGQgY2xlYW4gaXQgdXAgb24gaXRzDQo+IHdheSBpbnRvIHRoZSB0cmVlDQo+IA0K
PiA+IGRpZmYgLS1naXQgYS90b29scy9ibGt0YXAyL2RyaXZlcnMvTWFrZWZpbGUNCj4gPiBiL3Rv
b2xzL2Jsa3RhcDIvZHJpdmVycy9NYWtlZmlsZQ0KPiA+IC0tLSBhL3Rvb2xzL2Jsa3RhcDIvZHJp
dmVycy9NYWtlZmlsZQ0KPiA+ICsrKyBiL3Rvb2xzL2Jsa3RhcDIvZHJpdmVycy9NYWtlZmlsZQ0K
PiA+IEBAIC00LDkgKzQsOSBAQA0KPiA+DQo+ID4gIExJQlZIRERJUiAgPSAkKEJMS1RBUF9ST09U
KS92aGQvbGliDQo+ID4NCj4gPiAtSUJJTiAgICAgICA9IHRhcGRpc2syIHRkLXV0aWwgdGFwZGlz
ay1jbGllbnQgdGFwZGlzay1zdHJlYW0gdGFwZGlzay0NCj4gZGlmZg0KPiA+IC1RQ09XX1VUSUwg
ID0gaW1nMnFjb3cgcWNvdy1jcmVhdGUgcWNvdzJyYXcgLUxPQ0tfVVRJTCAgPSBsb2NrLXV0aWwN
Cj4gPiArSUJJTiAgICAgICA9IHRhcGRpc2syIHRkLXV0aWwyIHRhcGRpc2stY2xpZW50MiB0YXBk
aXNrLXN0cmVhbTINCj4gdGFwZGlzay1kaWZmMg0KPiA+ICtRQ09XX1VUSUwgID0gaW1nMnFjb3cy
IHFjb3ctY3JlYXRlMiBxY293MnJhdzIgTE9DS19VVElMICA9IGxvY2stDQo+IHV0aWwyDQo+IA0K
PiBUaGlzIHNlcmllcyBzaG91bGRuJ3QgYmUgcmVuYW1pbmcgYml0cyBvZiBibGt0YXAyLiBJbiBm
YWN0IEkgdGhpbmsgYXMgYQ0KPiBnZW5lcmFsIHJ1bGUgaXQgc2hvdWxkIG5vdCBiZSB0b3VjaGlu
ZyB0b29scy9ibGt0YXAyIGF0IGFsbC4gSWYgaXQgZG9lcw0KPiBpdCBzaG91bGQgYmUgaW4gYSBz
ZXBhcmF0ZSBwYXRjaCBJIHRoaW5rLg0KPiANCj4gPiBkaWZmIC0tZ2l0IGEvdG9vbHMvYmxrdGFw
My9NYWtlZmlsZS5hbSBiL3Rvb2xzL2Jsa3RhcDMvTWFrZWZpbGUuYW0NCj4gbmV3DQo+ID4gZmls
ZSBtb2RlIDEwMDY0NA0KPiA+IC0tLSAvZGV2L251bGwNCj4gPiArKysgYi90b29scy9ibGt0YXAz
L01ha2VmaWxlLmFtDQo+IA0KPiBUaGlzIGlzIGFkZGluZyBhIG5ldyBkZXBlbmRlbmN5IG9uIGF1
dG9tYWtlIHdoaWNoIGlzIHNvbWV0aGluZyB3ZSdsbA0KPiBoYXZlIHRvIGRpc2N1c3MuDQo+IA0K
PiBBcyBwYXJ0IG9mIHRoZSBpbml0aWFsIHB1c2ggSSB0aGluayBpdCB3b3VsZCBiZSBsZXNzIGNv
bnRyb3ZlcnNpYWwgdG8NCj4gc2ltcGx5IHVzZSB0aGUgZXhpc3RpbmcgWGVuIHRvb2xzIGJ1aWxk
IGluZnJhc3RydWN0dXJlIChzdWNoIGFzIGl0IGlzKS4NCj4gSSB0aGluayB0aGUgbWFqb3JpdHkg
b2YgdGhpcyBjb3VsZCBiZSBjcmliYmVkIHBldHR5IGRpcmVjdGx5IGZyb20NCj4gYmxrdGFwMiBh
bmQgb3RoZXIgcGFydHMgb2YgdGhlIHRvb2xzIHRyZWUuDQo+IA0KPiA+IGRpZmYgLS1naXQgYS90
b29scy9ibGt0YXAzL1JFQURNRSBiL3Rvb2xzL2Jsa3RhcDMvUkVBRE1FIG5ldyBmaWxlDQo+IG1v
ZGUNCj4gPiAxMDA2NDQNCj4gPiAtLS0gL2Rldi9udWxsDQo+ID4gKysrIGIvdG9vbHMvYmxrdGFw
My9SRUFETUUNCj4gDQo+IEkgdGhpbmsgSSBtZW50aW9uZWQgdGhpcyBiZWZvcmUgYnV0IGl0IGxv
b2tzIGxpa2UgdGhpcyBkb2N1bWVudCBjb3VsZA0KPiBkbyB3aXRoIGEgcHJldHR5IGhlZnR5IHVw
ZGF0ZS4NCj4gDQo+ID4gZGlmZiAtLWdpdCBhL3Rvb2xzL2Jsa3RhcDMvY29udHJvbC90YXAtY3Rs
LWF0dGFjaC5jDQo+ID4gYi90b29scy9ibGt0YXAzL2NvbnRyb2wvdGFwLWN0bC1hdHRhY2guYw0K
PiA+IG5ldyBmaWxlIG1vZGUgMTAwNjQ0DQo+ID4gLS0tIC9kZXYvbnVsbA0KPiA+ICsrKyBiL3Rv
b2xzL2Jsa3RhcDMvY29udHJvbC90YXAtY3RsLWF0dGFjaC5jDQo+ID4gQEAgLTAsMCArMSw2NiBA
QA0KPiA+ICsvKg0KPiA+ICsgKiBDb3B5cmlnaHQgKGMpIDIwMDgsIFhlblNvdXJjZSBJbmMuDQo+
IA0KPiBZb3UgcHJvYmFibHkgd2FudCB0byBkbyBhbiB1cGRhdGUgb2YgYWxsIHRoZXNlIGNvcHly
aWdodCBoZWFkZXJzLg0KPiANCj4gDQo+ID4gKyAqIEFsbCByaWdodHMgcmVzZXJ2ZWQuDQo+ID4g
KyAqDQo+ID4gKyAqIFJlZGlzdHJpYnV0aW9uIGFuZCB1c2UgaW4gc291cmNlIGFuZCBiaW5hcnkg
Zm9ybXMsIHdpdGggb3INCj4gd2l0aG91dA0KPiA+ICsgKiBtb2RpZmljYXRpb24sIGFyZSBwZXJt
aXR0ZWQgcHJvdmlkZWQgdGhhdCB0aGUgZm9sbG93aW5nDQo+IGNvbmRpdGlvbnMgYXJlIG1ldDoN
Cj4gPiArICogICAgICogUmVkaXN0cmlidXRpb25zIG9mIHNvdXJjZSBjb2RlIG11c3QgcmV0YWlu
IHRoZSBhYm92ZQ0KPiBjb3B5cmlnaHQNCj4gPiArICogICAgICAgbm90aWNlLCB0aGlzIGxpc3Qg
b2YgY29uZGl0aW9ucyBhbmQgdGhlIGZvbGxvd2luZw0KPiBkaXNjbGFpbWVyLg0KPiA+ICsgKiAg
ICAgKiBSZWRpc3RyaWJ1dGlvbnMgaW4gYmluYXJ5IGZvcm0gbXVzdCByZXByb2R1Y2UgdGhlIGFi
b3ZlDQo+IGNvcHlyaWdodA0KPiA+ICsgKiAgICAgICBub3RpY2UsIHRoaXMgbGlzdCBvZiBjb25k
aXRpb25zIGFuZCB0aGUgZm9sbG93aW5nDQo+IGRpc2NsYWltZXIgaW4gdGhlDQo+ID4gKyAqICAg
ICAgIGRvY3VtZW50YXRpb24gYW5kL29yIG90aGVyIG1hdGVyaWFscyBwcm92aWRlZCB3aXRoIHRo
ZQ0KPiBkaXN0cmlidXRpb24uDQo+ID4gKyAqICAgICAqIE5laXRoZXIgdGhlIG5hbWUgb2YgWGVu
U291cmNlIEluYy4gbm9yIHRoZSBuYW1lcyBvZiBpdHMNCj4gY29udHJpYnV0b3JzDQo+IA0KPiBB
bmQgSSBzdXBwb3NlIHRoaXMgb3VnaHQgdG8gYmUgdXBkYXRlZCB0b28uDQo+IA0KPiA+ICsgKiAg
ICAgICBtYXkgYmUgdXNlZCB0byBlbmRvcnNlIG9yIHByb21vdGUgcHJvZHVjdHMgZGVyaXZlZCBm
cm9tDQo+IHRoaXMgc29mdHdhcmUNCj4gPiArICogICAgICAgd2l0aG91dCBzcGVjaWZpYyBwcmlv
ciB3cml0dGVuIHBlcm1pc3Npb24uDQo+IA0KPiANCj4gVGhlIGFjdHVhbCB0aHJlZSBjbGF1c2Ug
QlNEIHNheXMgIlRoZSBuYW1lIG9mIHRoZSBhdXRob3IgbWF5IG5vdCBiZQ0KPiB1c2VkIHRvIGVu
ZG9yc2Ugb3IgcHJvbW90ZSBwcm9kdWN0cyBkZXJpdmVkIGZyb20gdGhpcyBzb2Z0d2FyZSB3aXRo
b3V0DQo+IHNwZWNpZmljIHByaW9yIHdyaXR0ZW4gcGVybWlzc2lvbi4NCj4gDQo+IFRoaXMgd2Vp
cmQgdmFyaWFudCBvZiB0aGUgMy1jbGF1c2UgQlNEIGlzIHNvbWV0aGluZyB5b3UgbWlnaHQgd2Fu
dCB0bw0KPiBkaXNjdXNzIHdpdGggeW91ciBtYW5hZ2VtZW50IHRvIHNlZSBpZiBpdCBjYW4ndCBi
ZSByYXRpb25hbGlzZWQuDQo+IA0KPiA+ICsgKg0KPiA+ICsgKiBUSElTIFNPRlRXQVJFIElTIFBS
T1ZJREVEIEJZIFRIRSBDT1BZUklHSFQgSE9MREVSUyBBTkQNCj4gPiArIENPTlRSSUJVVE9SUw0K
PiA+ICsgKiAiQVMgSVMiIEFORCBBTlkgRVhQUkVTUyBPUiBJTVBMSUVEIFdBUlJBTlRJRVMsIElO
Q0xVRElORywgQlVUIE5PVA0KPiA+ICsgKiBMSU1JVEVEIFRPLCBUSEUgSU1QTElFRCBXQVJSQU5U
SUVTIE9GIE1FUkNIQU5UQUJJTElUWSBBTkQgRklUTkVTUw0KPiA+ICsgRk9SDQo+ID4gKyAqIEEg
UEFSVElDVUxBUiBQVVJQT1NFIEFSRSBESVNDTEFJTUVELiBJTiBOTyBFVkVOVCBTSEFMTCBUSEUN
Cj4gPiArIENPUFlSSUdIVCBPV05FUg0KPiA+ICsgKiBPUiBDT05UUklCVVRPUlMgQkUgTElBQkxF
IEZPUiBBTlkgRElSRUNULCBJTkRJUkVDVCwgSU5DSURFTlRBTCwNCj4gPiArIFNQRUNJQUwsDQo+
ID4gKyAqIEVYRU1QTEFSWSwgT1IgQ09OU0VRVUVOVElBTCBEQU1BR0VTIChJTkNMVURJTkcsIEJV
VCBOT1QgTElNSVRFRA0KPiA+ICsgVE8sDQo+ID4gKyAqIFBST0NVUkVNRU5UIE9GIFNVQlNUSVRV
VEUgR09PRFMgT1IgU0VSVklDRVM7IExPU1MgT0YgVVNFLCBEQVRBLA0KPiBPUg0KPiA+ICsgKiBQ
Uk9GSVRTOyBPUiBCVVNJTkVTUyBJTlRFUlJVUFRJT04pIEhPV0VWRVIgQ0FVU0VEIEFORCBPTiBB
TlkNCj4gPiArIFRIRU9SWSBPRg0KPiA+ICsgKiBMSUFCSUxJVFksIFdIRVRIRVIgSU4gQ09OVFJB
Q1QsIFNUUklDVCBMSUFCSUxJVFksIE9SIFRPUlQNCj4gPiArIChJTkNMVURJTkcNCj4gPiArICog
TkVHTElHRU5DRSBPUiBPVEhFUldJU0UpIEFSSVNJTkcgSU4gQU5ZIFdBWSBPVVQgT0YgVEhFIFVT
RSBPRg0KPiBUSElTDQo+ID4gKyAqIFNPRlRXQVJFLCBFVkVOIElGIEFEVklTRUQgT0YgVEhFIFBP
U1NJQklMSVRZIE9GIFNVQ0ggREFNQUdFLg0KPiA+ICsgKi8NCj4gDQo+IEkgdGhpbmsgaXQgd291
bGQgYmUgd29ydGh3aGlsZSB0byBoYXZlIGEgdG9vbHMvYmxrdGFwMy9DT1BZSU5HIGZpbGUgdG8N
Cj4gY2xhcmlmeSB0aGUgbGljZXNpbmcgdGVybXMgb2YgYmxrdGFwMyBhcyBhIHdob2xlLg0KPiAN
Cj4gWy4uLl0gSSBkaWRuJ3QgbG9vayBhdCB0aGUgbWFqb3JpdHkgb2YgdGhlIGFjdHVhbCB0b29s
cy9ibGt0YXAzIGNvZGUuDQo+IFRoZXJlJ3MgcXVpdGUgYSBsb3Qgb2YgaXQuIEkgbWVudGlvbmVk
IGVhcmxpZXIgdGhhdCB5b3UgbWlnaHQgd2FudCB0bw0KPiBjb25zaWRlciBkcm9wcGluZyBzb21l
IG9mIHRoZSBvcHRpb25hbCBjb21wb25lbnRzIGZvciB0aGUgdGltZSBiZWluZyB0bw0KPiBrZWVw
IHRoZSBpbml0aWFsIHVwc3RyZWFtaW5nIG1vcmUgbWFuYWdlYWJsZS4NCj4gDQo+ID4gZGlmZiAt
LWdpdCBhL3Rvb2xzL2Jsa3RhcDMvZHJpdmVycy90ZC1yYXRlZC4xLnR4dA0KPiA+IGIvdG9vbHMv
YmxrdGFwMy9kcml2ZXJzL3RkLXJhdGVkLjEudHh0DQo+ID4gbmV3IGZpbGUgbW9kZSAxMDA2NDQN
Cj4gPiAtLS0gL2Rldi9udWxsDQo+ID4gKysrIGIvdG9vbHMvYmxrdGFwMy9kcml2ZXJzL3RkLXJh
dGVkLjEudHh0DQo+IA0KPiBJcyB0aGlzIGEgZ2VuZXJhdGVkIGZpbGU/IEkgZGlkbid0IHNlZSB0
aGUgc291cmNlIGJ1dCBpdCdkIGJlIG5pY2UgdG8NCj4gaGF2ZSBlLmcuIHRoZSBhY3R1YWwgbWFu
IHBhZ2UgZXRjLg0KPiANCj4gVGhpcyBtYWRlIG1lIGdyZXAgZm9yICJkb2MiLCAibWFuIiBhbmQg
InR4dCIgaW4gdGhlIHBhdGNoLCB3aGljaCBvbmx5DQo+IGZvdW5kIHRoaXMgb25lIGZpbGUuIEhv
cGVmdWxseSBJIGp1c3QgbWlzc2VkIGl0IGFsbCwgb3IgYXQgbGVhc3QgY2FuIHdlDQo+IGV4cGVj
dCB0aGF0IGFkZGl0aW9uYWwgZG9jcyB3aWxsIGJlIGZvcnRoY29taW5nIGluIHRoZSBmdXR1cmU/
DQo+IA0KPiANCj4gPiBkaWZmIC0tZ2l0IGEvdG9vbHMvYmxrdGFwMy9pbmNsdWRlL2Jsa3RhcDIu
aA0KPiA+IGIvdG9vbHMvYmxrdGFwMy9pbmNsdWRlL2Jsa3RhcDIuaCBuZXcgZmlsZSBtb2RlIDEw
MDY0NA0KPiA+IC0tLSAvZGV2L251bGwNCj4gPiArKysgYi90b29scy9ibGt0YXAzL2luY2x1ZGUv
YmxrdGFwMi5oDQo+IA0KPiBzLzIvMy8gT3IgZG9lcyB0aGlzIGZpbGUgYmVsb25nIGF0IGFsbD8g
SXQgc2VlbXMgdG8gbW9zdGx5IHJlbGF0ZSB0bw0KPiB0aGUNCj4gYmxrdGFwMiBrZXJuZWwgZHJp
dmVyIGlvY3RsIGludGVyZmFjZS4gUGxlYXNlIGNhbiB5b3Uga2lsbCBhbGwgdGhpcw0KPiBjcnVm
dCBiZWZvcmUgcmVwb3N0aW5nLg0KPiANCj4gPiBkaWZmIC0tZ2l0IGEvdG9vbHMvYmxrdGFwMy9p
bmNsdWRlL2xpc3QuaA0KPiA+IGIvdG9vbHMvYmxrdGFwMy9pbmNsdWRlL2xpc3QuaCBuZXcgZmls
ZSBtb2RlIDEwMDY0NA0KPiA+IC0tLSAvZGV2L251bGwNCj4gPiArKysgYi90b29scy9ibGt0YXAz
L2luY2x1ZGUvbGlzdC5oDQo+ID4gQEAgLTAsMCArMSwxNDkgQEANCj4gPiArLyoNCj4gPiArICog
bGlzdC5oDQo+ID4gKyAqDQo+ID4gKyAqIFRoaXMgaXMgYSBzdWJzZXQgb2YgbGludXgncyBsaXN0
LmggaW50ZW5kZWQgdG8gYmUgdXNlZCBpbiB1c2VyLQ0KPiBzcGFjZS4NCj4gPiArICoNCj4gPiAr
ICovDQo+IA0KPiBJZiB0aGlzIGNhbWUgZnJvbSBMaW51eCB0aGVuIGl0IGlzIEdQTCBsaWNlbnNl
ZCBhbmQgbXVzdCBoYXZlIGEgR1BMDQo+IGhlYWRlciBvbiBpdC4NCj4gDQo+IFRoZSBpbnRlbnRp
b24gc2VlbXMgdG8gYmUgdGhhdCBibGt0YXAzIGlzIEJTRCBidXQgdGhpcyB3b3VsZCBtYWtlIGl0
DQo+IG92ZXJhbGwgR1BMLiBZb3UgY291bGQgZWl0aGVyIHJlbGljZW5zZSB0aGUgd2hvbGUgdGhp
bmcgYXMgKEwpR1BMIG9yDQo+IHBlcmhhcHMgcmVpbXBsZW1lbnQgdXNpbmcgdGhlIEJTRCBsaWNl
bnNlZCBsaXN0IG1hY3JvcyAoc2VlDQo+IHRvb2xzL2luY2x1ZGUveGVuLWV4dGVybmFsIGZvciB0
aGUgQlNEIG1hY3JvcyB3aGljaCBsaWJ4bCBhbmQgbWluaS1vcw0KPiB1c2UpDQo+IA0KPiANCj4g
PiBkaWZmIC0tZ2l0IGEvdG9vbHMvYmxrdGFwMy94ZW5pby9ibGtpZi5oDQo+IGIvdG9vbHMvYmxr
dGFwMy94ZW5pby9ibGtpZi5oDQo+ID4gbmV3IGZpbGUgbW9kZSAxMDA2NDQNCj4gPiAtLS0gL2Rl
di9udWxsDQo+ID4gKysrIGIvdG9vbHMvYmxrdGFwMy94ZW5pby9ibGtpZi5oDQo+IA0KPiBHaXZl
biB0aGF0IHRoaXMgaXMgaW4tdHJlZSB5b3UgbWlnaHQgcGVyaGFwcyB3YW50IHRvIHVzZSB0aGUg
aW4tdGhyZWUNCj4gaW50ZXJmYWNlIGRlY2xhcmF0aW9ucyBmcm9tIHRvb2xzL2luY2x1ZGUuDQo+
IA0KPiA+IGRpZmYgLS1naXQgYS90b29scy9ibGt0YXAzL3hlbmlvL2xpc3QuaCBiL3Rvb2xzL2Js
a3RhcDMveGVuaW8vbGlzdC5oDQo+ID4gbmV3IGZpbGUgbW9kZSAxMDA2NDQNCj4gPiAtLS0gL2Rl
di9udWxsDQo+ID4gKysrIGIvdG9vbHMvYmxrdGFwMy94ZW5pby9saXN0LmgNCj4gPiBAQCAtMCww
ICsxLDEzNCBAQA0KPiA+ICsvKg0KPiA+ICsgKiBsaXN0LmgNCj4gPiArICoNCj4gPiArICogVGhp
cyBpcyBhIHN1YnNldCBvZiBsaW51eCdzIGxpc3QuaCBpbnRlbmRlZCB0byBiZSB1c2VkIGluIHVz
ZXItDQo+IHNwYWNlLg0KPiA+ICsgKg0KPiA+ICsgKi8NCj4gDQo+IEFub3RoZXIgZHVwbGljYXRl
ZCBjb3B5IG9mIHNvbWUgR1BMIGNvZGUuDQo+IA0KPiBBcGFydCBmcm9tIHRoZSBsaWNlbnNpbmcg
dGhpbmdzIHBlcmhhcHMgeW91IGNvdWxkIHJhdGlvbmFsaXNlIHRoZQ0KPiBudW1iZXIgb2YgY29w
aWVzIG9mIHRoaW5ncyBsaWtlIHRoaXMgd2hpY2ggeW91IGFyZSBpbnRyb2R1Y2luZz8NCj4gDQo+
IA0KPiA+IGRpZmYgLS1naXQgYS90b29scy9saWJ4bC9saWJ4bC5jIGIvdG9vbHMvbGlieGwvbGli
eGwuYw0KPiA+IC0tLSBhL3Rvb2xzL2xpYnhsL2xpYnhsLmMNCj4gPiArKysgYi90b29scy9saWJ4
bC9saWJ4bC5jDQo+ID4gQEAgLTExNzEsNiArMTE3MSw4IEBADQo+IA0KPiBDYW4geW91IGFkZCB0
aGUgZm9sbG93aW5nIHRvIHlvdXIgfi8uaGdyYyBwbGVhc2U6DQo+ICAgICAgICAgW2RpZmZdDQo+
ICAgICAgICAgc2hvd2Z1bmMgPSBUcnVlDQo+IA0KPiBUaGlzIHdpbGwgaW5qZWN0IHRoZSBjdXJy
ZW50IGZ1bmN0aW9uIG5hbWUgaW50byB0aGUgaHVuayBoZWFkZXIgd2hpY2gNCj4gbWFrZXMgcmV2
aWV3IG11Y2ggZWFzaWVyLg0KPiANCj4gPiAgICAgICAgICBkaXNrLT5iYWNrZW5kID0gTElCWExf
RElTS19CQUNLRU5EX1RBUDsNCj4gPiAgICAgIH0gZWxzZSBpZiAoIXN0cmNtcChiYWNrZW5kX3R5
cGUsICJxZGlzayIpKSB7DQo+ID4gICAgICAgICAgZGlzay0+YmFja2VuZCA9IExJQlhMX0RJU0tf
QkFDS0VORF9RRElTSzsNCj4gPiArICAgIH0gZWxzZSBpZiAoIXN0cmNtcChiYWNrZW5kX3R5cGUs
ICJ4ZW5pbyIpKSB7DQo+ID4gKyAgICAgICAgZGlzay0+YmFja2VuZCA9IExJQlhMX0RJU0tfQkFD
S0VORF9YRU5JTzsNCj4gDQo+IEkgdGhpbmsgeW91IHdhbnQgdG8gcmVwbGFjZSBMSUJYTF9ESVNL
X0JBQ0tFTkRfVEFQIHJhdGhlciB0aGFuIGFkZCBhDQo+IG5ldyBvbmUuIFlvdSBjb3VsZCBhbHNv
IHN0ZWFsIHRoZSBuYW1lIGlmIHlvdSBsaWtlIEkgcmVja29uLg0KPiANCj4gPg0KPiA+ICAgICAg
fSBlbHNlIHsNCj4gPiAgICAgICAgICBkaXNrLT5iYWNrZW5kID0gTElCWExfRElTS19CQUNLRU5E
X1VOS05PV047DQo+ID4gICAgICB9DQo+IA0KPiANCj4gPiBAQCAtMTk2MSw2ICsxOTgxLDcgQEAN
Cj4gPiAgfQ0KPiA+DQo+ID4gIHN0YXRpYyB2b2lkIGxpYnhsX19kZXZpY2VfZGlza19mcm9tX3hz
X2JlKGxpYnhsX19nYyAqZ2MsDQo+ID4gKyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgIHhzX3RyYW5zYWN0aW9uX3QgdCwNCj4gPiAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgY29uc3QgY2hhciAqYmVfcGF0aCwNCj4gPiAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgbGlieGxfZGV2aWNlX2Rpc2sgKmRpc2sp
DQo+IHsNCj4gDQo+IFRoaXMgc29ydCBvZiB0aGluZyBzaG91bGQgYmUgZG9uZSBhcyBhIHNlcGFy
YXRlIHByZS1jdXJzb3IgcGF0Y2guDQo+IA0KPiANCj4gPiBkaWZmIC0tZ2l0IGEvdG9vbHMvbGli
eGwvbGlieGxfdGFwZGlzay5jDQo+IGIvdG9vbHMvbGlieGwvbGlieGxfdGFwZGlzay5jDQo+ID4g
bmV3IGZpbGUgbW9kZSAxMDA2NDQNCj4gPiAtLS0gL2Rldi9udWxsDQo+ID4gKysrIGIvdG9vbHMv
bGlieGwvbGlieGxfdGFwZGlzay5jDQo+IA0KPiBJcyB0aGlzIGFjdHVhbGx5IGEgbW92ZSBvZiBv
ZiB0aGUgZXhpc3RpbmcgaWJ4bF9ibGt0YXA/IEkgdGhpbmsgImhnDQo+IGRpZmYgLWciIHdpbGwg
Y2F1c2UgaXQgdG8gdXNlIGdpdCBzdHlsZSBwYXRjaGVzIHdoaWNoIG1ha2UgdGhpcw0KPiBjbGVh
cmVyLg0KPiANCj4gQWx0aG91Z2ggSSBkb24ndCBzZWUgbGlieGxfYmxrdGFwIGdldHRpbmcgcmVt
b3ZlZCwgc28gcGVyaGFwcyBub3Q/IEkNCj4gdGhvdWdodCBJIHNhdyB5b3UgY2hhbmdpbmcgdGhl
IE1ha2VmaWxlIGFzIGlmIHlvdSB3ZXJlIHJlbmFtbmcgYXMgd2VsbC4NCj4gDQo+IFJlbmFtaW5n
IHNob3VsZCBnZW5lcmFsbHkgYmUgZG9uZSBhcyBhIHN0YW5kYWxvbmUgcGF0Y2ggd2l0aCBubyBu
b24tDQo+IHJlbGF0ZWQgY2hhbmdlcyBpbiB0aGVtLCB0byBtYWtlIHRoZW0gZWFpc2VyIHRvIHJl
dmlldy4NCj4gDQo+ID4gQEAgLTAsMCArMSwxNjIgQEANCj4gWy4uLl0NCj4gPiArICAgICAgICBz
dHJ1Y3QgbGlzdF9oZWFkIGxpc3Q7DQo+ID4gKwl0YXBfbGlzdF90ICplbnRyeSwgKm5leHRfdDsN
Cj4gDQo+IFNvbWV0aGluZyBvZGQgd2l0aCB3aGl0ZXNwYWNlIGhlcmUuDQo+IA0KPiA+ICsgICAg
ICAgIGludCByZXQgPSAtRU5PRU5ULCBlcnI7DQo+ID4gKw0KPiA+ICsJZnByaW50ZihzdGRlcnIs
ICJibGt0YXBfZmluZCglczolcylcbiIsIHR5cGUsIHBhdGgpOw0KPiANCj4gUGxlYXNlIGRyb3Ag
dGhpcyBzb3J0IG9mIGRlYnVnLg0KPiANCj4gPiArICAgICAgICBJTklUX0xJU1RfSEVBRCgmbGlz
dCk7DQo+ID4gKyAgICAgICAgZXJyID0gdGFwX2N0bF9saXN0KCZsaXN0KTsNCj4gPiArICAgICAg
ICBpZiAoZXJyIDwgMCkNCj4gPiArICAgICAgICAgICAgICAgIHJldHVybiBlcnI7DQo+ID4gWy4u
Ll0NCj4gPiArLy8gICAgICAgIHRhcF9jdGxfbGlzdF9mcmVlKCZsaXN0KTsNCj4gDQo+IExlYWs/
DQo+IA0KPiANCj4gPiBjaGFyICpsaWJ4bF9fYmxrdGFwX2RldnBhdGgobGlieGxfX2djICpnYywN
Cj4gPiArICAgICAgICAgICAgICAgICAgICAgICAgICAgIGNvbnN0IGNoYXIgKmRpc2ssDQo+ID4g
KyAgICAgICAgICAgICAgICAgICAgICAgICAgICBsaWJ4bF9kaXNrX2Zvcm1hdCBmb3JtYXQpIHsN
Cj4gPiArICAgIGNvbnN0IGNoYXIgKnR5cGU7DQo+ID4gKyAgICBjaGFyICpwYXJhbXMsICpkZXZu
YW1lID0gTlVMTDsNCj4gPiArICAgIHRhcF9saXN0X3QgdGFwOw0KPiA+ICsgICAgaW50IGVycjsN
Cj4gPiArDQo+ID4gKyAgICB0eXBlID0gbGlieGxfX2RldmljZV9kaXNrX3N0cmluZ19vZl9mb3Jt
YXQoZm9ybWF0KTsNCj4gPiArICAgIGZwcmludGYoc3RkZXJyLCAibGlieGxfX2Jsa3RhcF9kZXZw
YXRoKCVzOiVzKVxuIiwgZGlzaywgdHlwZSk7DQo+ID4gKyAgICBlcnIgPSBibGt0YXBfZmluZCh0
eXBlLCBkaXNrLCAmdGFwKTsNCj4gPiArICAgIGlmIChlcnIgPT0gMCkgew0KPiA+ICsgICAgICAg
IGRldm5hbWUgPSBsaWJ4bF9fc3ByaW50ZihnYywgIi9kZXYveGVuL2Jsa3RhcC0yL3RhcGRldiVk
IiwNCj4gPiArIHRhcC5taW5vcik7DQo+IA0KPiBTdXJlbHkgbm90IGFueSBtb3JlPw0KPiANCj4g
PiArICAgICAgICBpZiAoZGV2bmFtZSkNCj4gPiArICAgICAgICAgICAgcmV0dXJuIGRldm5hbWU7
DQo+ID4gKyAgICB9DQo+ID4gKw0KPiA+ICsgICAgcGFyYW1zID0gbGlieGxfX3NwcmludGYoZ2Ms
ICIlczolcyIsIHR5cGUsIGRpc2spOw0KPiA+ICsgICAgZXJyID0gdGFwX2N0bF9jcmVhdGUocGFy
YW1zLCAmZGV2bmFtZSwgMCwgLTEsIE5VTEwpOw0KPiA+ICsgICAgaWYgKCFlcnIpIHsNCj4gPiAr
ICAgICAgICBsaWJ4bF9fcHRyX2FkZChnYywgZGV2bmFtZSk7DQo+ID4gKyAgICAgICAgcmV0dXJu
IGRldm5hbWU7DQo+ID4gKyAgICB9DQo+ID4gKw0KPiA+ICsgICAgcmV0dXJuIE5VTEw7DQo+ID4g
K30NCj4gDQo+IFsuLi5dDQo+ID4gZGlmZiAtLWdpdCBhL3Rvb2xzL2xpYnhsL3hsX2NtZGltcGwu
YyBiL3Rvb2xzL2xpYnhsL3hsX2NtZGltcGwuYw0KPiA+IC0tLSBhL3Rvb2xzL2xpYnhsL3hsX2Nt
ZGltcGwuYw0KPiA+ICsrKyBiL3Rvb2xzL2xpYnhsL3hsX2NtZGltcGwuYw0KPiA+IEBAIC0xODYy
LDcgKzE4NjIsNyBAQA0KPiA+DQo+ID4gICAgICAgICAgY2hpbGQxID0geGxfZm9yayhjaGlsZF93
YWl0ZGFlbW9uKTsNCj4gPiAgICAgICAgICBpZiAoY2hpbGQxKSB7DQo+ID4gLSAgICAgICAgICAg
IHByaW50ZigiRGFlbW9uIHJ1bm5pbmcgd2l0aCBQSUQgJWRcbiIsIGNoaWxkMSk7DQo+ID4gKyAg
ICAgICAgICAgIHByaW50ZigiRGFlbW9uIHJ1bm5pbmcgd2l0aCBQSUQgJWQgZm9yIGRvbWFpbiAl
ZFxuIiwNCj4gPiArIGNoaWxkMSwgZG9taWQpOw0KPiANCj4gVGhpcyBpcyBwcm9iYWJseSBhIHVz
ZWZ1bCBjaGFuZ2UgYnV0IGl0IGhhcyBub3RoaW5nIGF0IGFsbCB0byBkbyB3aXRoDQo+IGJsa3Rh
cDMsIHBsZWFzZSBzZXBhcmF0ZSBhbGwgdGhpcyBzb3J0IG9mIHN0dWZmIG91dC4NCj4gDQo+ID4N
Cj4gPiAgICAgICAgICAgICAgZm9yICg7Oykgew0KPiA+ICAgICAgICAgICAgICAgICAgZ290X2No
aWxkID0geGxfd2FpdHBpZChjaGlsZF93YWl0ZGFlbW9uLCAmc3RhdHVzLA0KPiAwKTsNCg0KX19f
X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX18KWGVuLWRldmVsIG1h
aWxpbmcgbGlzdApYZW4tZGV2ZWxAbGlzdHMueGVuLm9yZwpodHRwOi8vbGlzdHMueGVuLm9yZy94
ZW4tZGV2ZWwK

From xen-devel-bounces@lists.xen.org Thu Aug 30 15:42:39 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Aug 2012 15:42:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T76t5-0003eg-5x; Thu, 30 Aug 2012 15:42:31 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <thanos.makatos@citrix.com>) id 1T76t2-0003d4-KY
	for xen-devel@lists.xen.org; Thu, 30 Aug 2012 15:42:29 +0000
X-Env-Sender: thanos.makatos@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1346341339!8832858!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEwOTA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7989 invoked from network); 30 Aug 2012 15:42:20 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Aug 2012 15:42:20 -0000
X-IronPort-AV: E=Sophos;i="4.80,341,1344211200"; d="scan'208";a="14273999"
Received: from lonpmailmx02.citrite.net ([10.30.203.163])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	30 Aug 2012 15:41:38 +0000
Received: from LONPMAILBOX01.citrite.net ([10.30.224.160]) by
	LONPMAILMX02.citrite.net ([10.30.203.163]) with mapi; Thu, 30 Aug 2012
	16:41:38 +0100
From: Thanos Makatos <thanos.makatos@citrix.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Date: Thu, 30 Aug 2012 16:41:37 +0100
Thread-Topic: [Xen-devel] RFC: blktap3
Thread-Index: Ac17yZKo0OP8VBOYRr+gMgscgd+koAK+4b+Q
Message-ID: <4B45B535F7F6BE4CB1C044ED5115CDDE011A6D541036@LONPMAILBOX01.citrite.net>
References: <4B45B535F7F6BE4CB1C044ED5115CDDE0107F6C1D44C@LONPMAILBOX01.citrite.net>
	<1345133376.30865.45.camel@zakaz.uk.xensource.com>
In-Reply-To: <1345133376.30865.45.camel@zakaz.uk.xensource.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] RFC: blktap3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Thanks for your comments, Ian, I'll address them before reposting.

To start with, you say that I should replace LIBXL_DISK_BACKEND_TAP:
> >          disk->backend = LIBXL_DISK_BACKEND_TAP;
> >      } else if (!strcmp(backend_type, "qdisk")) {
> >          disk->backend = LIBXL_DISK_BACKEND_QDISK;
> > +    } else if (!strcmp(backend_type, "xenio")) {
> > +        disk->backend = LIBXL_DISK_BACKEND_XENIO;
> 
> I think you want to replace LIBXL_DISK_BACKEND_TAP rather than add a
> new one. You could also steal the name if you like I reckon.
But in tools/libxl/libxl.c:1876, libxl__blktap_devpath is called which seems blktap2 dependant, so we need a new backend type to be able to use blktap2 along with blktap3, no?

> -----Original Message-----
> From: Ian Campbell
> Sent: 16 August 2012 17:10
> To: Thanos Makatos
> Cc: xen-devel@lists.xen.org
> Subject: Re: [Xen-devel] RFC: blktap3
> 
> On Thu, 2012-08-09 at 15:03 +0100, Thanos Makatos wrote:
> > I'd like to introduce blktap3: essentially blktap2 without the need
> of
> > blkback. This has been developed by Santosh Jodh, and I'll maintain
> > it.
> 
> I think you are working on reposting in a more manageable form but
> here's a few things which I noticed on a top level scroll though. (I
> might be repeating myself occasionally from the quick comments I made
> earlier, sorry)
> 
> > diff --git a/tools/Makefile b/tools/Makefile
> > --- a/tools/Makefile
> > +++ b/tools/Makefile
> > @@ -201,3 +203,20 @@
> >
> >  subdir-distclean-firmware: .phony
> >  	$(MAKE) -C firmware distclean
> > +
> > +subdir-all-blktap3 subdir-install-blktap3: .phony
> > +	source=.; \
> > +	cd blktap3; \
> > +	./autogen.sh; \
> 
> If anything this should be called from the top-level ./autogen.sh and
> not here. We shouldn't expect end users to have autoconf available.
> 
> > +	./configure \
> 
> I think autoconf has a construct which can cause configure to call
> other sub-configures in subdirs. If I'm right then it would be better
> to use this instead of calling it here.
> 
> However I think that the real correct answer is that blktap3 shouldn't
> have it's own configure anyway but should simply add the tests which it
> needs to the global tools level one and use the result like everyone
> else.
> 
> > +	CFLAGS="-I$(XEN_ROOT)/tools/include \
> > +		-I$(XEN_ROOT)/tools/libxc \
> > +		-I$(XEN_ROOT)/tools/xenstore" \
> > +	LDFLAGS="-L$(XEN_ROOT)/tools/xenstore \
> > +		 -L$(XEN_ROOT)/tools/libxc"; \
> 
> Your Makefiles should start with
> 
>         XEN_ROOT = $(CURDIR)/../..
>         include $(XEN_ROOT)/tools/Rules.mk
> 
> And then make use of the variables defined in Rules.mk. e.g.
> CFLAGS_libxenctrl, LIBS_libxenctrl etc rather than doing this.
> 
> I suppose blktap3 once lived outside of the xen tree and this (and the
> configurey) is a hangover from that. But we should clean it up on its
> way into the tree
> 
> > diff --git a/tools/blktap2/drivers/Makefile
> > b/tools/blktap2/drivers/Makefile
> > --- a/tools/blktap2/drivers/Makefile
> > +++ b/tools/blktap2/drivers/Makefile
> > @@ -4,9 +4,9 @@
> >
> >  LIBVHDDIR  = $(BLKTAP_ROOT)/vhd/lib
> >
> > -IBIN       = tapdisk2 td-util tapdisk-client tapdisk-stream tapdisk-
> diff
> > -QCOW_UTIL  = img2qcow qcow-create qcow2raw -LOCK_UTIL  = lock-util
> > +IBIN       = tapdisk2 td-util2 tapdisk-client2 tapdisk-stream2
> tapdisk-diff2
> > +QCOW_UTIL  = img2qcow2 qcow-create2 qcow2raw2 LOCK_UTIL  = lock-
> util2
> 
> This series shouldn't be renaming bits of blktap2. In fact I think as a
> general rule it should not be touching tools/blktap2 at all. If it does
> it should be in a separate patch I think.
> 
> > diff --git a/tools/blktap3/Makefile.am b/tools/blktap3/Makefile.am
> new
> > file mode 100644
> > --- /dev/null
> > +++ b/tools/blktap3/Makefile.am
> 
> This is adding a new dependency on automake which is something we'll
> have to discuss.
> 
> As part of the initial push I think it would be less controversial to
> simply use the existing Xen tools build infrastructure (such as it is).
> I think the majority of this could be cribbed petty directly from
> blktap2 and other parts of the tools tree.
> 
> > diff --git a/tools/blktap3/README b/tools/blktap3/README new file
> mode
> > 100644
> > --- /dev/null
> > +++ b/tools/blktap3/README
> 
> I think I mentioned this before but it looks like this document could
> do with a pretty hefty update.
> 
> > diff --git a/tools/blktap3/control/tap-ctl-attach.c
> > b/tools/blktap3/control/tap-ctl-attach.c
> > new file mode 100644
> > --- /dev/null
> > +++ b/tools/blktap3/control/tap-ctl-attach.c
> > @@ -0,0 +1,66 @@
> > +/*
> > + * Copyright (c) 2008, XenSource Inc.
> 
> You probably want to do an update of all these copyright headers.
> 
> 
> > + * All rights reserved.
> > + *
> > + * Redistribution and use in source and binary forms, with or
> without
> > + * modification, are permitted provided that the following
> conditions are met:
> > + *     * Redistributions of source code must retain the above
> copyright
> > + *       notice, this list of conditions and the following
> disclaimer.
> > + *     * Redistributions in binary form must reproduce the above
> copyright
> > + *       notice, this list of conditions and the following
> disclaimer in the
> > + *       documentation and/or other materials provided with the
> distribution.
> > + *     * Neither the name of XenSource Inc. nor the names of its
> contributors
> 
> And I suppose this ought to be updated too.
> 
> > + *       may be used to endorse or promote products derived from
> this software
> > + *       without specific prior written permission.
> 
> 
> The actual three clause BSD says "The name of the author may not be
> used to endorse or promote products derived from this software without
> specific prior written permission.
> 
> This weird variant of the 3-clause BSD is something you might want to
> discuss with your management to see if it can't be rationalised.
> 
> > + *
> > + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND
> > + CONTRIBUTORS
> > + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> > + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
> > + FOR
> > + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
> > + COPYRIGHT OWNER
> > + * OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> > + SPECIAL,
> > + * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED
> > + TO,
> > + * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA,
> OR
> > + * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> > + THEORY OF
> > + * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> > + (INCLUDING
> > + * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
> THIS
> > + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> > + */
> 
> I think it would be worthwhile to have a tools/blktap3/COPYING file to
> clarify the licesing terms of blktap3 as a whole.
> 
> [...] I didn't look at the majority of the actual tools/blktap3 code.
> There's quite a lot of it. I mentioned earlier that you might want to
> consider dropping some of the optional components for the time being to
> keep the initial upstreaming more manageable.
> 
> > diff --git a/tools/blktap3/drivers/td-rated.1.txt
> > b/tools/blktap3/drivers/td-rated.1.txt
> > new file mode 100644
> > --- /dev/null
> > +++ b/tools/blktap3/drivers/td-rated.1.txt
> 
> Is this a generated file? I didn't see the source but it'd be nice to
> have e.g. the actual man page etc.
> 
> This made me grep for "doc", "man" and "txt" in the patch, which only
> found this one file. Hopefully I just missed it all, or at least can we
> expect that additional docs will be forthcoming in the future?
> 
> 
> > diff --git a/tools/blktap3/include/blktap2.h
> > b/tools/blktap3/include/blktap2.h new file mode 100644
> > --- /dev/null
> > +++ b/tools/blktap3/include/blktap2.h
> 
> s/2/3/ Or does this file belong at all? It seems to mostly relate to
> the
> blktap2 kernel driver ioctl interface. Please can you kill all this
> cruft before reposting.
> 
> > diff --git a/tools/blktap3/include/list.h
> > b/tools/blktap3/include/list.h new file mode 100644
> > --- /dev/null
> > +++ b/tools/blktap3/include/list.h
> > @@ -0,0 +1,149 @@
> > +/*
> > + * list.h
> > + *
> > + * This is a subset of linux's list.h intended to be used in user-
> space.
> > + *
> > + */
> 
> If this came from Linux then it is GPL licensed and must have a GPL
> header on it.
> 
> The intention seems to be that blktap3 is BSD but this would make it
> overall GPL. You could either relicense the whole thing as (L)GPL or
> perhaps reimplement using the BSD licensed list macros (see
> tools/include/xen-external for the BSD macros which libxl and mini-os
> use)
> 
> 
> > diff --git a/tools/blktap3/xenio/blkif.h
> b/tools/blktap3/xenio/blkif.h
> > new file mode 100644
> > --- /dev/null
> > +++ b/tools/blktap3/xenio/blkif.h
> 
> Given that this is in-tree you might perhaps want to use the in-three
> interface declarations from tools/include.
> 
> > diff --git a/tools/blktap3/xenio/list.h b/tools/blktap3/xenio/list.h
> > new file mode 100644
> > --- /dev/null
> > +++ b/tools/blktap3/xenio/list.h
> > @@ -0,0 +1,134 @@
> > +/*
> > + * list.h
> > + *
> > + * This is a subset of linux's list.h intended to be used in user-
> space.
> > + *
> > + */
> 
> Another duplicated copy of some GPL code.
> 
> Apart from the licensing things perhaps you could rationalise the
> number of copies of things like this which you are introducing?
> 
> 
> > diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
> > --- a/tools/libxl/libxl.c
> > +++ b/tools/libxl/libxl.c
> > @@ -1171,6 +1171,8 @@
> 
> Can you add the following to your ~/.hgrc please:
>         [diff]
>         showfunc = True
> 
> This will inject the current function name into the hunk header which
> makes review much easier.
> 
> >          disk->backend = LIBXL_DISK_BACKEND_TAP;
> >      } else if (!strcmp(backend_type, "qdisk")) {
> >          disk->backend = LIBXL_DISK_BACKEND_QDISK;
> > +    } else if (!strcmp(backend_type, "xenio")) {
> > +        disk->backend = LIBXL_DISK_BACKEND_XENIO;
> 
> I think you want to replace LIBXL_DISK_BACKEND_TAP rather than add a
> new one. You could also steal the name if you like I reckon.
> 
> >
> >      } else {
> >          disk->backend = LIBXL_DISK_BACKEND_UNKNOWN;
> >      }
> 
> 
> > @@ -1961,6 +1981,7 @@
> >  }
> >
> >  static void libxl__device_disk_from_xs_be(libxl__gc *gc,
> > +                                          xs_transaction_t t,
> >                                            const char *be_path,
> >                                            libxl_device_disk *disk)
> {
> 
> This sort of thing should be done as a separate pre-cursor patch.
> 
> 
> > diff --git a/tools/libxl/libxl_tapdisk.c
> b/tools/libxl/libxl_tapdisk.c
> > new file mode 100644
> > --- /dev/null
> > +++ b/tools/libxl/libxl_tapdisk.c
> 
> Is this actually a move of of the existing ibxl_blktap? I think "hg
> diff -g" will cause it to use git style patches which make this
> clearer.
> 
> Although I don't see libxl_blktap getting removed, so perhaps not? I
> thought I saw you changing the Makefile as if you were renamng as well.
> 
> Renaming should generally be done as a standalone patch with no non-
> related changes in them, to make them eaiser to review.
> 
> > @@ -0,0 +1,162 @@
> [...]
> > +        struct list_head list;
> > +	tap_list_t *entry, *next_t;
> 
> Something odd with whitespace here.
> 
> > +        int ret = -ENOENT, err;
> > +
> > +	fprintf(stderr, "blktap_find(%s:%s)\n", type, path);
> 
> Please drop this sort of debug.
> 
> > +        INIT_LIST_HEAD(&list);
> > +        err = tap_ctl_list(&list);
> > +        if (err < 0)
> > +                return err;
> > [...]
> > +//        tap_ctl_list_free(&list);
> 
> Leak?
> 
> 
> > char *libxl__blktap_devpath(libxl__gc *gc,
> > +                            const char *disk,
> > +                            libxl_disk_format format) {
> > +    const char *type;
> > +    char *params, *devname = NULL;
> > +    tap_list_t tap;
> > +    int err;
> > +
> > +    type = libxl__device_disk_string_of_format(format);
> > +    fprintf(stderr, "libxl__blktap_devpath(%s:%s)\n", disk, type);
> > +    err = blktap_find(type, disk, &tap);
> > +    if (err == 0) {
> > +        devname = libxl__sprintf(gc, "/dev/xen/blktap-2/tapdev%d",
> > + tap.minor);
> 
> Surely not any more?
> 
> > +        if (devname)
> > +            return devname;
> > +    }
> > +
> > +    params = libxl__sprintf(gc, "%s:%s", type, disk);
> > +    err = tap_ctl_create(params, &devname, 0, -1, NULL);
> > +    if (!err) {
> > +        libxl__ptr_add(gc, devname);
> > +        return devname;
> > +    }
> > +
> > +    return NULL;
> > +}
> 
> [...]
> > diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
> > --- a/tools/libxl/xl_cmdimpl.c
> > +++ b/tools/libxl/xl_cmdimpl.c
> > @@ -1862,7 +1862,7 @@
> >
> >          child1 = xl_fork(child_waitdaemon);
> >          if (child1) {
> > -            printf("Daemon running with PID %d\n", child1);
> > +            printf("Daemon running with PID %d for domain %d\n",
> > + child1, domid);
> 
> This is probably a useful change but it has nothing at all to do with
> blktap3, please separate all this sort of stuff out.
> 
> >
> >              for (;;) {
> >                  got_child = xl_waitpid(child_waitdaemon, &status,
> 0);

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 30 15:44:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Aug 2012 15:44:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T76v9-0003tb-T0; Thu, 30 Aug 2012 15:44:39 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T76v8-0003t2-AQ
	for xen-devel@lists.xen.org; Thu, 30 Aug 2012 15:44:38 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1346341469!8714551!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEwOTA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15761 invoked from network); 30 Aug 2012 15:44:30 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Aug 2012 15:44:30 -0000
X-IronPort-AV: E=Sophos;i="4.80,341,1344211200"; d="scan'208";a="14274069"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	30 Aug 2012 15:44:29 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Thu, 30 Aug 2012 16:44:29 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1T76uz-0001OQ-Gj; Thu, 30 Aug 2012 15:44:29 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1T76uz-0006Hv-Ci;
	Thu, 30 Aug 2012 16:44:29 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20543.35421.80723.497439@mariner.uk.xensource.com>
Date: Thu, 30 Aug 2012 16:44:29 +0100
To: Ian Campbell <Ian.Campbell@citrix.com>
Newsgroups: chiark.mail.xen.devel
In-Reply-To: <1345048547.5926.245.camel@zakaz.uk.xensource.com>
References: <CAFLBxZaEci0mOcDCgFX9zk=wh3z4Nf1LD5E5Fcy7Y3=ioDAM=g@mail.gmail.com>
	<1345048547.5926.245.camel@zakaz.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: George Dunlap <dunlapg@umich.edu>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [TESTDAY] xl cpupool-create segfaults if given
 invalid configuration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [Xen-devel] [TESTDAY] xl cpupool-create segfaults if given invalid configuration"):
> I think the issue is the parser has:
>         %destructor { xlu__cfg_set_free($$); }  value valuelist values
> which frees the current "setting" but does not remove it from the list
> of settings.

Sorry for this, this is my fault and I have dropped the fix.

> diff -r af7143d97fa2 tools/libxl/libxlu_cfg_y.y
> --- a/tools/libxl/libxlu_cfg_y.y	Tue Aug 14 15:59:38 2012 +0100
> +++ b/tools/libxl/libxlu_cfg_y.y	Wed Aug 15 17:34:25 2012 +0100
> @@ -47,7 +47,7 @@
>  file: /* empty */
>   |     file setting
>  
> -setting: IDENT '=' value      { xlu__cfg_set_store(ctx,$1,$3,@3.first_line); }
> +setting: IDENT '=' value      { xlu__cfg_set_store(ctx,$1,$3,@3.first_line); $3 = NULL; }

I don't think this is correct.  It may happen to work with this
version of bison but I don't think you're allowed to assign to $3.

Looking at the code I think this handling of the XLU_ConfigSettings
and flex is all wrong.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 30 15:48:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Aug 2012 15:48:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T76yP-00046U-Gk; Thu, 30 Aug 2012 15:48:01 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1T76yO-00046N-AF
	for xen-devel@lists.xen.org; Thu, 30 Aug 2012 15:48:00 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1346341659!8807178!1
X-Originating-IP: [209.85.160.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31958 invoked from network); 30 Aug 2012 15:47:41 -0000
Received: from mail-pb0-f45.google.com (HELO mail-pb0-f45.google.com)
	(209.85.160.45)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Aug 2012 15:47:41 -0000
Received: by pbbjt11 with SMTP id jt11so3282969pbb.32
	for <xen-devel@lists.xen.org>; Thu, 30 Aug 2012 08:47:39 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:cc:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=mpzYTpfjERKMttcXFwxJzWpCce3h6LSfGvYiTUBqS9g=;
	b=eOqkGLBGWlPzLDmf9z30IGS6IKPPK6xHnVDeDwzUrGw1w4biPjaIjL4BLE+cxb4j/3
	YckZ7e5Y2XdHwhdiW9bfmYltQZUME+fLH3HYsHxRAExrlwAOw6GDBe+sGgR4pz+19QHz
	IH4dE50JQmZVCSnjSVaPuA4qHlIhiwjIA2H0Xeg0BQh2rZ6rw8CY7tQtNpg2iGZpz6RX
	NpE0jlxUa3ouFFyKMMd4jo1jBrckjHdRi87a2sIYKF5SxJ3iyJTpoCBIQSe8Ngm1bnPR
	6ctsdMYr9/PLsNi2FuAFUCPijc+4tnaK9XlsU9Za+q2xtaAXN/x0R+LJTOAakMu7M2WV
	JmRw==
Received: by 10.66.82.3 with SMTP id e3mr10398371pay.56.1346341659029;
	Thu, 30 Aug 2012 08:47:39 -0700 (PDT)
From xen-devel-bounces@lists.xen.org Thu Aug 30 15:48:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Aug 2012 15:48:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T76yP-00046U-Gk; Thu, 30 Aug 2012 15:48:01 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1T76yO-00046N-AF
	for xen-devel@lists.xen.org; Thu, 30 Aug 2012 15:48:00 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1346341659!8807178!1
X-Originating-IP: [209.85.160.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31958 invoked from network); 30 Aug 2012 15:47:41 -0000
Received: from mail-pb0-f45.google.com (HELO mail-pb0-f45.google.com)
	(209.85.160.45)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Aug 2012 15:47:41 -0000
Received: by pbbjt11 with SMTP id jt11so3282969pbb.32
	for <xen-devel@lists.xen.org>; Thu, 30 Aug 2012 08:47:39 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:cc:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=mpzYTpfjERKMttcXFwxJzWpCce3h6LSfGvYiTUBqS9g=;
	b=eOqkGLBGWlPzLDmf9z30IGS6IKPPK6xHnVDeDwzUrGw1w4biPjaIjL4BLE+cxb4j/3
	YckZ7e5Y2XdHwhdiW9bfmYltQZUME+fLH3HYsHxRAExrlwAOw6GDBe+sGgR4pz+19QHz
	IH4dE50JQmZVCSnjSVaPuA4qHlIhiwjIA2H0Xeg0BQh2rZ6rw8CY7tQtNpg2iGZpz6RX
	NpE0jlxUa3ouFFyKMMd4jo1jBrckjHdRi87a2sIYKF5SxJ3iyJTpoCBIQSe8Ngm1bnPR
	6ctsdMYr9/PLsNi2FuAFUCPijc+4tnaK9XlsU9Za+q2xtaAXN/x0R+LJTOAakMu7M2WV
	JmRw==
Received: by 10.66.82.3 with SMTP id e3mr10398371pay.56.1346341659029;
	Thu, 30 Aug 2012 08:47:39 -0700 (PDT)
Received: from [206.87.99.8] ([206.87.99.8])
	by mx.google.com with ESMTPS id gf3sm1757956pbc.74.2012.08.30.08.47.34
	(version=SSLv3 cipher=OTHER); Thu, 30 Aug 2012 08:47:37 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.32.0.111121
Date: Thu, 30 Aug 2012 16:47:29 +0100
From: Keir Fraser <keir.xen@gmail.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Matt Wilson <msw@amazon.com>
Message-ID: <CC6549A1.3D42A%keir.xen@gmail.com>
Thread-Topic: [Xen-devel] [PATCH 2 of 3] docs: use elinks to format
	markdown-generated html to text
Thread-Index: Ac2GxsKXucl+HdXqDke168ZmujqKyQ==
In-Reply-To: <20543.35220.572436.112964@mariner.uk.xensource.com>
Mime-version: 1.0
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 2 of 3] docs: use elinks to format
 markdown-generated html to text
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 30/08/2012 16:41, "Ian Jackson" <Ian.Jackson@eu.citrix.com> wrote:

> Matt Wilson writes ("[Xen-devel] [PATCH 2 of 3] docs: use elinks to format
> markdown-generated html to text"):
>> Markdown, while easy to read and write, isn't the most consumable
>> format for users reading documentation on a terminal. This patch uses
>> lynx to format markdown produced HTML into text files.
> ...
>>  txt/%.txt: %.markdown
>> - $(INSTALL_DIR) $(@D)
>> - cp $< $@.tmp
>> - $(call move-if-changed,$@.tmp,$@)
>> + @$(INSTALL_DIR) $(@D)
>> + set -e ; \
>> + if which $(MARKDOWN) >/dev/null 2>&1 && \
>> +  which $(HTMLDUMP) >/dev/null 2>&1 ; then \
>> +  echo "Running markdown to generate $*.txt ... "; \
> 
> So now we have two efforts to try to find markdown, one in configure
> and one here.
> 
> Keir, would it be OK if we simply declared that you must run configure
> to "make docs" ?

Yes!

 -- Keir

> Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
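[Editorial note: the `$(call move-if-changed,$@.tmp,$@)` helper in the rule quoted above can be sketched in plain shell. This is a hypothetical reconstruction for illustration, not the actual Xen build macro: it installs the temporary file over the target only when the contents differ, so an unchanged target keeps its timestamp and make skips rebuilding its dependents.]

```shell
# Hypothetical sketch of a move-if-changed helper (not the actual Xen
# build macro): replace $2 with $1 only when the contents differ, so an
# unchanged target keeps its old timestamp and make skips its dependents.
move_if_changed() {
    tmp=$1 target=$2
    if cmp -s "$tmp" "$target"; then
        rm -f "$tmp"            # identical: discard temp, keep timestamp
    else
        mv -f "$tmp" "$target"  # changed (or target absent): install it
    fi
}
```

Used at the end of a recipe (in make syntax, on `$@.tmp` and `$@`), this keeps the generated `.txt` timestamps stable across no-op regenerations.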

From xen-devel-bounces@lists.xen.org Thu Aug 30 15:49:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Aug 2012 15:49:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7705-0004CB-4x; Thu, 30 Aug 2012 15:49:45 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T7704-0004C2-93
	for xen-devel@lists.xen.org; Thu, 30 Aug 2012 15:49:44 +0000
Received: from [85.158.139.83:12591] by server-11.bemta-5.messagelabs.com id
	E7/53-24658-79B8F305; Thu, 30 Aug 2012 15:49:43 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-182.messagelabs.com!1346341782!20529145!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEwOTA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23842 invoked from network); 30 Aug 2012 15:49:43 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-11.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Aug 2012 15:49:43 -0000
X-IronPort-AV: E=Sophos;i="4.80,341,1344211200"; d="scan'208";a="14274125"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	30 Aug 2012 15:46:09 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1;
	Thu, 30 Aug 2012 16:46:08 +0100
Message-ID: <1346341567.27277.76.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Thanos Makatos <thanos.makatos@citrix.com>
Date: Thu, 30 Aug 2012 16:46:07 +0100
In-Reply-To: <4B45B535F7F6BE4CB1C044ED5115CDDE011A6D541036@LONPMAILBOX01.citrite.net>
References: <4B45B535F7F6BE4CB1C044ED5115CDDE0107F6C1D44C@LONPMAILBOX01.citrite.net>
	<1345133376.30865.45.camel@zakaz.uk.xensource.com>
	<4B45B535F7F6BE4CB1C044ED5115CDDE011A6D541036@LONPMAILBOX01.citrite.net>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] RFC: blktap3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-30 at 16:41 +0100, Thanos Makatos wrote:
> Thanks for your comments, Ian, I'll address them before reposting.
> 
> To start with, you say that I should replace LIBXL_DISK_BACKEND_TAP:
> > >          disk->backend = LIBXL_DISK_BACKEND_TAP;
> > >      } else if (!strcmp(backend_type, "qdisk")) {
> > >          disk->backend = LIBXL_DISK_BACKEND_QDISK;
> > > +    } else if (!strcmp(backend_type, "xenio")) {
> > > +        disk->backend = LIBXL_DISK_BACKEND_XENIO;
> >
> > I think you want to replace LIBXL_DISK_BACKEND_TAP rather than add a
> > new one. You could also steal the name if you like I reckon.
> But in tools/libxl/libxl.c:1876, libxl__blktap_devpath is called which
> seems blktap2 dependent, so we need a new backend type to be able to
> use blktap2 along with blktap3, no?

You can remove all the blktap2 support from libxl IMHO. I don't think
there is any need to support both in parallel in (lib)xl, especially
given that blktap2 is basically unmaintained.

I'm curious what other people think though.

We should leave blktap2 in the tree for the time being because xend
uses it. Once we deprecate and remove xend we can clear that up too. In the
meantime you should leave the names of the blktap2 stuff alone etc.

> > -----Original Message-----
[... snip hundreds of lines of unnecessary quoted material, please trim
your quotes ...]

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 30 15:50:24 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Aug 2012 15:50:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T770a-0004Fh-IH; Thu, 30 Aug 2012 15:50:16 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T770Z-0004FR-Dn
	for xen-devel@lists.xen.org; Thu, 30 Aug 2012 15:50:15 +0000
Received: from [85.158.143.99:6573] by server-1.bemta-4.messagelabs.com id
	20/EB-12504-6BB8F305; Thu, 30 Aug 2012 15:50:14 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-216.messagelabs.com!1346341802!26758096!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEwOTA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 373 invoked from network); 30 Aug 2012 15:50:11 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-13.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Aug 2012 15:50:11 -0000
X-IronPort-AV: E=Sophos;i="4.80,341,1344211200"; d="scan'208";a="14274151"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	30 Aug 2012 15:47:35 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1;
	Thu, 30 Aug 2012 16:47:35 +0100
Message-ID: <1346341654.27277.77.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Thu, 30 Aug 2012 16:47:34 +0100
In-Reply-To: <20543.35220.572436.112964@mariner.uk.xensource.com>
References: <patchbomb.1346274072@u002268147cd4502c336d.ant.amazon.com>
	<9a308e4fdc19336ce3ca.1346274074@u002268147cd4502c336d.ant.amazon.com>
	<20543.35220.572436.112964@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "Keir \(Xen.org\)" <keir@xen.org>, Matt Wilson <msw@amazon.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 2 of 3] docs: use elinks to format
 markdown-generated html to text
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-30 at 16:41 +0100, Ian Jackson wrote:
> Matt Wilson writes ("[Xen-devel] [PATCH 2 of 3] docs: use elinks to format markdown-generated html to text"):
> > Markdown, while easy to read and write, isn't the most consumable
> > format for users reading documentation on a terminal. This patch uses
> > lynx to format markdown produced HTML into text files.
> ...
> >  txt/%.txt: %.markdown
> > -	$(INSTALL_DIR) $(@D)
> > -	cp $< $@.tmp
> > -	$(call move-if-changed,$@.tmp,$@)
> > +	@$(INSTALL_DIR) $(@D)
> > +	set -e ; \
> > +	if which $(MARKDOWN) >/dev/null 2>&1 && \
> > +		which $(HTMLDUMP) >/dev/null 2>&1 ; then \
> > +		echo "Running markdown to generate $*.txt ... "; \
> 
> So now we have two efforts to try to find markdown, one in configure
> and one here.

If we are going to have this idea that docs works with or without
running configure then I think it is fine for the without case to just
copy unconditionally and not run markdown.

> Keir, would it be OK if we simply declared that you must run configure
> to "make docs" ?

Also an option.

Ian


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
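[Editorial note: the "copy unconditionally in the without-configure case" suggestion above can be sketched in plain shell. This is a hypothetical illustration, not the patch itself; `MARKDOWN` and `HTMLDUMP` stand in for the values configure would normally substitute, and the exact dump flags differ between lynx and elinks.]

```shell
# Hypothetical sketch (not the actual patch): render markdown to text
# when both tools are present, otherwise fall back to shipping the raw
# markdown. MARKDOWN/HTMLDUMP model configure-substituted tool names.
MARKDOWN=${MARKDOWN:-markdown}
HTMLDUMP=${HTMLDUMP:-elinks}

render_txt() {
    src=$1 dst=$2
    if command -v "$MARKDOWN" >/dev/null 2>&1 &&
       command -v "$HTMLDUMP" >/dev/null 2>&1; then
        # markdown -> HTML -> plain text (dump flag syntax varies by browser)
        "$MARKDOWN" "$src" | "$HTMLDUMP" -dump /dev/stdin >"$dst"
    else
        # Without configure (or the tools), just copy the source through
        cp "$src" "$dst"
    fi
}
```

Note that `command -v` is the POSIX-specified replacement for `which`, which sidesteps one portability gripe with probing for tools inside a recipe.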

From xen-devel-bounces@lists.xen.org Thu Aug 30 15:52:55 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Aug 2012 15:52:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7730-0004TD-4H; Thu, 30 Aug 2012 15:52:46 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T772y-0004Su-8A
	for xen-devel@lists.xen.org; Thu, 30 Aug 2012 15:52:44 +0000
Received: from [85.158.139.83:62042] by server-7.bemta-5.messagelabs.com id
	F0/CA-19703-B4C8F305; Thu, 30 Aug 2012 15:52:43 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-182.messagelabs.com!1346341961!23884079!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEwOTA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20789 invoked from network); 30 Aug 2012 15:52:41 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-6.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Aug 2012 15:52:41 -0000
X-IronPort-AV: E=Sophos;i="4.80,341,1344211200"; d="scan'208";a="14274272"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	30 Aug 2012 15:52:30 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1;
	Thu, 30 Aug 2012 16:52:30 +0100
Message-ID: <1346341948.27277.80.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Thu, 30 Aug 2012 16:52:28 +0100
In-Reply-To: <20543.35421.80723.497439@mariner.uk.xensource.com>
References: <CAFLBxZaEci0mOcDCgFX9zk=wh3z4Nf1LD5E5Fcy7Y3=ioDAM=g@mail.gmail.com>
	<1345048547.5926.245.camel@zakaz.uk.xensource.com>
	<20543.35421.80723.497439@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: George Dunlap <dunlapg@umich.edu>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [TESTDAY] xl cpupool-create segfaults if given
 invalid configuration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-30 at 16:44 +0100, Ian Jackson wrote:
> Ian Campbell writes ("Re: [Xen-devel] [TESTDAY] xl cpupool-create segfaults if given invalid configuration"):
> > I think the issue is the parser has:
> >         %destructor { xlu__cfg_set_free($$); }  value valuelist values
> > which frees the current "setting" but does not remove it from the list
> > of settings.
> 
> Sorry for this, this is my fault and I have dropped the fix.
> 
> > diff -r af7143d97fa2 tools/libxl/libxlu_cfg_y.y
> > --- a/tools/libxl/libxlu_cfg_y.y	Tue Aug 14 15:59:38 2012 +0100
> > +++ b/tools/libxl/libxlu_cfg_y.y	Wed Aug 15 17:34:25 2012 +0100
> > @@ -47,7 +47,7 @@
> >  file: /* empty */
> >   |     file setting
> >  
> > -setting: IDENT '=' value      { xlu__cfg_set_store(ctx,$1,$3,@3.first_line); }
> > +setting: IDENT '=' value      { xlu__cfg_set_store(ctx,$1,$3,@3.first_line); $3 = NULL; }
> 
> I don't think this is correct.  It may happen to work with this
> version of bison but I don't think you're allowed to assign to $3.

I suspected this might not be ok.

> Looking at the code I think this handling of the XLU_ConfigSettings
> and flex is all wrong.

Does this mean you know what the right fix is and/or a patch is in
progress?

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 30 16:13:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Aug 2012 16:13:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T77N6-0005Bn-6H; Thu, 30 Aug 2012 16:13:32 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dan.magenheimer@oracle.com>) id 1T77N4-0005Bi-QZ
	for xen-devel@lists.xen.org; Thu, 30 Aug 2012 16:13:31 +0000
X-Env-Sender: dan.magenheimer@oracle.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1346343203!8361523!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDcyODc2OQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22915 invoked from network); 30 Aug 2012 16:13:24 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-16.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 30 Aug 2012 16:13:24 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7UGCAp4016663
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 30 Aug 2012 16:12:11 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7UGC9qN005410
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 30 Aug 2012 16:12:10 GMT
Received: from abhmt102.oracle.com (abhmt102.oracle.com [141.146.116.54])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7UGC9aE022994; Thu, 30 Aug 2012 11:12:09 -0500
MIME-Version: 1.0
Message-ID: <2ecc5ed5-a95e-40e4-9e00-8d1378ce1eef@default>
Date: Thu, 30 Aug 2012 09:11:31 -0700 (PDT)
From: Dan Magenheimer <dan.magenheimer@oracle.com>
To: David Vrabel <dvrabel@cantab.net>
References: <CAFLBxZaGmvSYgL4DcHcSwE_0shqKC0DRbf7ab=uhFrerPCxRig@mail.gmail.com>
	<f9a8835e-95d8-4495-8c9d-4fa769913549@default>
	<503F463E.90505@cantab.net>
In-Reply-To: <503F463E.90505@cantab.net>
X-Priority: 3
X-Mailer: Oracle Beehive Extensions for Outlook 2.0.1.7  (607090) [OL
	12.0.6661.5003 (x86)]
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: George Dunlap <George.Dunlap@eu.citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen 4.3 release planning proposal
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> From: David Vrabel [mailto:dvrabel@cantab.net]
> Sent: Thursday, August 30, 2012 4:54 AM
> To: Dan Magenheimer
> Cc: George Dunlap; xen-devel@lists.xen.org
> Subject: Re: [Xen-devel] Xen 4.3 release planning proposal
> 
> On 29/08/12 21:53, Dan Magenheimer wrote:
> >
> > Maybe it is time to move to match the well-known highly-greased
> > Linux kernel release process?  This would include, for example, a short
> > window for new functionality and a xen-next for pre-window shaking
> > out and merging (of new functionality) and testing.  As has
> > been pointed out, xen-unstable is, well, unstable for far too long.
> >
> > It may not be necessary to aggressively match Linus' 8-9 week release
> > cycle or weekly rcN releases, but the core process is known to
> > work very well, is reasonably well documented, and will be familiar
> > to many in the open source community.
> 
> I think such a system only works if you have a short release cycle.  If
> the only time to merge new features is two weeks in every 6/9 months
> then that is just far too long and is not very contributor-friendly.

That wasn't my point (though I see I wasn't very clear).

I meant that the part of the release cycle where new functionality
is accepted (the "window") should be a smaller _percentage_ of the
release cycle.  With Linux, it is about 20-25%.  For Xen it is probably
closer to 60-80%.  Once the window closes, the RC's start and new
functionality is put into "xen-next".  At the next window, the
release-Linus (George in this case) decides which functionality
in xen-next is stable enough to be pulled in during the window.

Of course, 18 months is far too long a release cycle for this approach,
and 9 months may be too long as well.  I think a target cycle
of 6 months with a "window" of 6 weeks would be a step in
the right direction.
 
> Xen doesn't have the number of contributors or changes that make a Linux
> kernel style process necessary.

Personally, I think that's a self-fulfilling prophecy.  It is too hard
to use or develop on xen-unstable in part because too much is thrown in
(which, as George pointed out, is a result of developers learning that
if it doesn't go in xen-unstable, it will wait for many months).

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 30 16:20:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Aug 2012 16:20:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T77Ti-0005LI-2n; Thu, 30 Aug 2012 16:20:22 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dan.magenheimer@oracle.com>) id 1T77Tg-0005LD-Uh
	for xen-devel@lists.xen.org; Thu, 30 Aug 2012 16:20:21 +0000
Received: from [85.158.143.35:60166] by server-3.bemta-4.messagelabs.com id
	88/BF-08232-4C29F305; Thu, 30 Aug 2012 16:20:20 +0000
X-Env-Sender: dan.magenheimer@oracle.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1346343615!16013097!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNzIxNDYw\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28661 invoked from network); 30 Aug 2012 16:20:17 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-14.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 30 Aug 2012 16:20:17 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7UGKDFT001192
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK)
	for <xen-devel@lists.xen.org>; Thu, 30 Aug 2012 16:20:14 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7UGKDhi020772
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO)
	for <xen-devel@lists.xen.org>; Thu, 30 Aug 2012 16:20:13 GMT
Received: from abhmt102.oracle.com (abhmt102.oracle.com [141.146.116.54])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7UGKCiI031449
	for <xen-devel@lists.xen.org>; Thu, 30 Aug 2012 11:20:12 -0500
MIME-Version: 1.0
Message-ID: <175f3c87-367d-4b1d-bf0c-517313f89ff6@default>
Date: Thu, 30 Aug 2012 09:19:35 -0700 (PDT)
From: Dan Magenheimer <dan.magenheimer@oracle.com>
To: Konrad Wilk <konrad.wilk@oracle.com>
References: <76048170-772b-4365-87d2-00a3bf4aa5f8@default>
	<20120829224434.GA4650@localhost.localdomain>
In-Reply-To: <20120829224434.GA4650@localhost.localdomain>
X-Priority: 3
X-Mailer: Oracle Beehive Extensions for Outlook 2.0.1.7  (607090) [OL
	12.0.6661.5003 (x86)]
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen 4.2.0 won't boot with FC17 dom0 on Dell
 Optiplex 790, boots fine with 4.1.3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> From: Konrad Rzeszutek Wilk
> 
> On Wed, Aug 29, 2012 at 03:22:40PM -0700, Dan Magenheimer wrote:
> > Anybody else see this: a FC17 dom0 which boots fine for 4.1.3
> > crashes early in boot with Xen 4.2.0-rc4?
> >
> > The hardware is a new Dell Optiplex 790 which I've noticed has a couple
> > of idiosyncrasies when booting recent upstream versions of Linux:
> > 1) reboot=pci is required or reboot hangs
> > 2) reducing visible memory via memmap= on a Linux command line is
> >    fairly difficult (requires a very complex sequence of Linux boot
> >    parameters, presumably due to a weird e820 RAM map in this hardware)
> >
> > These idiosyncrasies may or may not be related to the dom0 crash,
> > just mentioning them in case they suggest an easy workaround.
> 
> What happens if you run with 'console=vga vga=text,keep' on the
> hypervisor line and on the Linux command line: 'console=hvc0
> earlyprintk=xen debug loglevel=8'

Ah, good, thanks for the pointer.  I haven't had to do that in a while.
The dom0 tombstone has an xsave_init call in it... I tried booting
with xsave=0 as a Xen boot parameter and I can now boot 4.2.0-rc4
with FC17 dom0.
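
[Editorial note: for readers hitting the same boot failure, the split between hypervisor and dom0 parameters in a GRUB2 entry looks roughly like this. Paths, version numbers, and the menu title are illustrative; xsave=0 and console=vga go on the Xen (multiboot) line, the console/earlyprintk options on the dom0 kernel (module) line.]

```shell
# Illustrative GRUB2 menu entry: Xen options on the multiboot line,
# dom0 (Linux) options on the first module line.
menuentry 'Xen 4.2.0-rc4 / FC17 dom0 (xsave disabled)' {
    multiboot /boot/xen-4.2.0-rc4.gz console=vga vga=text,keep xsave=0
    module /boot/vmlinuz-3.5 console=hvc0 earlyprintk=xen debug loglevel=8
    module /boot/initramfs-3.5.img
}
```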

Sorry for the noise.

Thanks,
Dan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 30 16:25:22 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Aug 2012 16:25:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T77YI-0005dS-Ek; Thu, 30 Aug 2012 16:25:06 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <davem@davemloft.net>) id 1T77YG-0005ck-I8
	for xen-devel@lists.xensource.com; Thu, 30 Aug 2012 16:25:05 +0000
X-Env-Sender: davem@davemloft.net
X-Msg-Ref: server-16.tower-27.messagelabs.com!1346343898!8363551!1
X-Originating-IP: [149.20.54.216]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32265 invoked from network); 30 Aug 2012 16:24:58 -0000
Received: from shards.monkeyblade.net (HELO shards.monkeyblade.net)
	(149.20.54.216) by server-16.tower-27.messagelabs.com with SMTP;
	30 Aug 2012 16:24:58 -0000
Received: from localhost (nat-pool-rdu.redhat.com [66.187.233.202])
	by shards.monkeyblade.net (Postfix) with ESMTPSA id C441A586C2E;
	Thu, 30 Aug 2012 09:24:57 -0700 (PDT)
Date: Thu, 30 Aug 2012 12:24:53 -0400 (EDT)
Message-Id: <20120830.122453.1449291050128191766.davem@davemloft.net>
To: konrad.wilk@oracle.com
From: David Miller <davem@davemloft.net>
In-Reply-To: <20120823141740.GA30305@phenom.dumpdata.com>
References: <20120808.155046.820543563969484712.davem@davemloft.net>
	<1345631207.6821.140.camel@zakaz.uk.xensource.com>
	<20120823141740.GA30305@phenom.dumpdata.com>
X-Mailer: Mew version 6.5 on Emacs 23.3 / Mule 6.0 (HANACHIRUSATO)
Mime-Version: 1.0
Cc: xen-devel@lists.xensource.com, Ian.Campbell@citrix.com,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, mgorman@suse.de, konrad@darnok.org,
	akpm@linux-foundation.org
Subject: Re: [Xen-devel] [PATCH] netvm: check for page == NULL when
 propogating the skb->pfmemalloc flag
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Thu, 23 Aug 2012 10:17:40 -0400

> On Wed, Aug 22, 2012 at 11:26:47AM +0100, Ian Campbell wrote:
>> On Wed, 2012-08-08 at 23:50 +0100, David Miller wrote:
>> > Just use something like a call to __pskb_pull_tail(skb, len) and all
>> > that other crap around that area can simply be deleted.
>> 
>> I think you mean something like this, which works for me, although I've
>> only lightly tested it.
>> 
> 
> I've tested it heavily and works great.
> 
> Tested-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> and I took a look at it too and:
> 
> Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

Applied, thanks everyone.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 30 16:34:22 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Aug 2012 16:34:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T77gy-0005ym-Ks; Thu, 30 Aug 2012 16:34:04 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ketuzsezr@gmail.com>) id 1T77gx-0005yh-3k
	for xen-devel@lists.xen.org; Thu, 30 Aug 2012 16:34:03 +0000
Received: from [85.158.143.99:57905] by server-2.bemta-4.messagelabs.com id
	3A/0E-21239-AF59F305; Thu, 30 Aug 2012 16:34:02 +0000
X-Env-Sender: ketuzsezr@gmail.com
X-Msg-Ref: server-11.tower-216.messagelabs.com!1346344431!20149221!1
X-Originating-IP: [209.85.210.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_8,
	RCVD_BY_IP,spamassassin: ,surbl: (ASYNC_NO) 
	c3VyYmxfcmVjaGVja19kZWxheTogNjI4ODQ3NyAoYWJhbmRvbmVkOiBkbC5kcm9wYm94LmNvb
	S91\nLzEyNTc5MTEyL2xvZ3MvbHNwY2kubG9nKQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23481 invoked from network); 30 Aug 2012 16:33:53 -0000
Received: from mail-pz0-f45.google.com (HELO mail-pz0-f45.google.com)
	(209.85.210.45)
	by server-11.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Aug 2012 16:33:53 -0000
Received: by dadn15 with SMTP id n15so1260728dad.32
	for <xen-devel@lists.xen.org>; Thu, 30 Aug 2012 09:33:51 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:date:from:to:cc:subject:message-id:references:mime-version
	:content-type:content-disposition:in-reply-to:user-agent;
	bh=3G6Hae5MU599ERYR5nKKexgKJEGu7f3t0tZPrCEdH+Q=;
	b=EdScLSEeVwJ49mak/nnDnp7cOgcdBXRhpXrQySHd2vIN3Zm5WfTBOAaRko2zq2r6a+
From xen-devel-bounces@lists.xen.org Thu Aug 30 16:34:22 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Aug 2012 16:34:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T77gy-0005ym-Ks; Thu, 30 Aug 2012 16:34:04 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ketuzsezr@gmail.com>) id 1T77gx-0005yh-3k
	for xen-devel@lists.xen.org; Thu, 30 Aug 2012 16:34:03 +0000
Received: from [85.158.143.99:57905] by server-2.bemta-4.messagelabs.com id
	3A/0E-21239-AF59F305; Thu, 30 Aug 2012 16:34:02 +0000
X-Env-Sender: ketuzsezr@gmail.com
X-Msg-Ref: server-11.tower-216.messagelabs.com!1346344431!20149221!1
X-Originating-IP: [209.85.210.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_8,
	RCVD_BY_IP,spamassassin: ,surbl: (ASYNC_NO) 
	c3VyYmxfcmVjaGVja19kZWxheTogNjI4ODQ3NyAoYWJhbmRvbmVkOiBkbC5kcm9wYm94LmNvb
	S91\nLzEyNTc5MTEyL2xvZ3MvbHNwY2kubG9nKQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23481 invoked from network); 30 Aug 2012 16:33:53 -0000
Received: from mail-pz0-f45.google.com (HELO mail-pz0-f45.google.com)
	(209.85.210.45)
	by server-11.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Aug 2012 16:33:53 -0000
Received: by dadn15 with SMTP id n15so1260728dad.32
	for <xen-devel@lists.xen.org>; Thu, 30 Aug 2012 09:33:51 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:date:from:to:cc:subject:message-id:references:mime-version
	:content-type:content-disposition:in-reply-to:user-agent;
	bh=3G6Hae5MU599ERYR5nKKexgKJEGu7f3t0tZPrCEdH+Q=;
	b=EdScLSEeVwJ49mak/nnDnp7cOgcdBXRhpXrQySHd2vIN3Zm5WfTBOAaRko2zq2r6a+
	zDxgOEMJpYBNz1BKgTFN1i4cAZNZGdFGNaEYgbzAmyPbHVl1I+4M79pBgm7U5BQ3FxSo
	fJyDPMUbwAk3ilLIOUqo4y9gEOLe93G/NvPy5+MuZUsgwxV5xCS29iQbkCW99xinUYN9
	cbyx/K6t8q/wnYD7C6rkpcGbYTz/GcB/poK2YAYeN4AV5J6GoJx9w8EigloFu6MEOF2e
	IIWPQD0C3pQX0iWoWqIhEsDvKkDX64ktdFguEuQKU/f2ippqEYhuZ13EqELyrlhbioS1
	uvIA==
Received: by 10.68.230.194 with SMTP id ta2mr12392981pbc.30.1346344430948;
	Thu, 30 Aug 2012 09:33:50 -0700 (PDT)
Received: from localhost.localdomain ([38.96.16.75])
	by mx.google.com with ESMTPS id th6sm1842680pbc.0.2012.08.30.09.33.49
	(version=TLSv1/SSLv3 cipher=OTHER);
	Thu, 30 Aug 2012 09:33:49 -0700 (PDT)
Date: Thu, 30 Aug 2012 12:33:42 -0400
From: Konrad Rzeszutek Wilk <konrad@kernel.org>
To: Javier Marcet <jmarcet@gmail.com>
Message-ID: <20120830163340.GA10091@localhost.localdomain>
References: <CAAnFQG8z_ja1Wj2hX+0ZRsz5eLWr+U+7PoSiF9NePsSh2jbX4g@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAAnFQG8z_ja1Wj2hX+0ZRsz5eLWr+U+7PoSiF9NePsSh2jbX4g@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: Xen Devel Mailing list <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.2.0-rc4 bugs with GigaByte H77M-D3H + Core i7
 3770
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Aug 30, 2012 at 12:43:29PM +0200, Javier Marcet wrote:
> Hi,
> 
> I've just upgraded a server of mine from a Core i3 2100T to an i7 3770, in order
> to do full virtualization with VT-d.
> 
> I'm using kernel 3.5.2 and Xen from git://xenbits.xen.org/xen.git @ commit
> 37d7ccdc2f50d659f1eb8ec11ee4bf8a8376926d (Fri Aug 24).
> 
> Since there are various issues, I'm going to comment on them all. I'd appreciate
> it if you could help me decide which bug reports to file, and where to file them.

It's easier if these are separate emails; then we can track them
step-by-step.
> 
> Upon booting under the xen virtualizer everything works fine but I cannot
> suspend the machine and I have reception problems on the DVB-T tuners

Right. The suspend (well, the resume part) is not yet working.
> installed on the system.

That sounds familiar - but without more details it's a bit unclear.
> 
> Besides that, xen can't read the cpu capabilities, or so reports virt-manager
> when creating a DomU. This results in being unable to boot any DomU due
> to ACPI errors.

Can you provide a dmesg or output of what you mean by that?
> 
> On the same kernel and machine, KVM can read the capabilities with no
> problems and guests work reliably.
> 
> On the other hand, booting without the xen virtualizer fixes the suspension
> and tuning problems but there are other issues.
> 
> I need to add the parameter intel_iommu=igfx_off to the kernel command line
> or I see half a second of these errors at the beginning of each boot:

Those .. being where? On the Xen command line, I suppose, as the Linux kernel
should not see the Intel DMAR at all - or you have two OSes trying to
utilize it and both failing.
> 
> [    0.358278] DMAR:[DMA Read] Request device [00:02.0] fault addr 9fac7000
> [    0.358278] DMAR:[fault reason 06] PTE Read access is not set
> [    0.358286] DRHD: handling fault status reg 2
> [    0.358288] DMAR:[DMA Read] Request device [00:02.0] fault addr 9fac7000
> [    0.358288] DMAR:[fault reason 06] PTE Read access is not set
> [    0.358291] DMAR:[DMA Read] Request device [00:02.0] fault addr 9fac7000
> [    0.358291] DMAR:[fault reason 06] PTE Read access is not set
> [    0.358307] DRHD: handling fault status reg 3
> 
> Furthermore, later on, just after enabling the IOMMU, I get this:

How are you enabling the IOMMU? The logs you pointed to did not have any
of this in them. Could you also provide the 'xm dmesg' output, please?

> 
> [    0.328564] DMAR: No ATSR found
> [    0.328580] IOMMU 1 0xfed91000: using Queued invalidation
> [    0.328582] IOMMU: Setting RMRR:
> [    0.328589] IOMMU: Setting identity map for device 0000:00:1d.0
> [0x9de36000 - 0x9de52fff]
> [    0.328606] IOMMU: Setting identity map for device 0000:00:1a.0
> [0x9de36000 - 0x9de52fff]
> [    0.328617] IOMMU: Setting identity map for device 0000:00:14.0
> [0x9de36000 - 0x9de52fff]
> [    0.328625] IOMMU: Prepare 0-16MiB unity mapping for LPC
> [    0.328630] IOMMU: Setting identity map for device 0000:00:1f.0
> [0x0 - 0xffffff]
> [    0.328705] PCI-DMA: Intel(R) Virtualization Technology for Directed I/O
> [    0.328714] ------------[ cut here ]------------
> [    0.328718] WARNING: at
> /home/storage/src/ubuntu-precise/drivers/pci/search.c:44
> pci_find_upstream_pcie_bridge+0x51/0x68()
> [    0.328719] Hardware name: To be filled by O.E.M.
> [    0.328720] Modules linked in:
> [    0.328722] Pid: 1, comm: swapper/0 Not tainted 3.5.0-12-i3 #12~precise1
> [    0.328723] Call Trace:
> [    0.328727]  [<ffffffff8106ab0d>] warn_slowpath_common+0x7e/0x96
> [    0.328729]  [<ffffffff8106ab3a>] warn_slowpath_null+0x15/0x17
> [    0.328731]  [<ffffffff812992d5>] pci_find_upstream_pcie_bridge+0x51/0x68
> [    0.328733]  [<ffffffff814bd02e>] intel_iommu_device_group+0x64/0xb7
> [    0.328735]  [<ffffffff814b8a2b>] ? bus_set_iommu+0x3f/0x3f
> [    0.328738]  [<ffffffff814b86f2>] iommu_device_group+0x24/0x26
> [    0.328740]  [<ffffffff814b8a40>] add_iommu_group+0x15/0x33
> [    0.328742]  [<ffffffff8137ba61>] bus_for_each_dev+0x54/0x80
> [    0.328745]  [<ffffffff81cdaf83>] ? memblock_find_dma_reserve+0x13f/0x13f
> [    0.328746]  [<ffffffff814b8a25>] bus_set_iommu+0x39/0x3f
> [    0.328749]  [<ffffffff81d0367c>] intel_iommu_init+0x1aa/0x1ce
> [    0.328751]  [<ffffffff81cdaf96>] pci_iommu_init+0x13/0x3e
> [    0.328754]  [<ffffffff81002094>] do_one_initcall+0x7a/0x132
> [    0.328756]  [<ffffffff81cd2bac>] do_basic_setup+0x96/0xb4
> [    0.328758]  [<ffffffff81cd2533>] ? obsolete_checksetup+0xab/0xab
> [    0.328759]  [<ffffffff81cd2c82>] kernel_init+0xb8/0x12e
> [    0.328762]  [<ffffffff81615b24>] kernel_thread_helper+0x4/0x10
> [    0.328764]  [<ffffffff81cd2bca>] ? do_basic_setup+0xb4/0xb4
> [    0.328766]  [<ffffffff81615b20>] ? gs_change+0x13/0x13
> [    0.328768] ---[ end trace 9bacf275b2da9216 ]---

> 
> You can see dmesg logs, lspci and dmidecode data here:
> 
> http://dl.dropbox.com/u/12579112/logs/dmesg-3.5.0-12-i3-bare.log
> http://dl.dropbox.com/u/12579112/logs/dmesg-3.5.0-12-i3-normal.log
> http://dl.dropbox.com/u/12579112/logs/dmesg-3.5.0-12-i3-xen.log
> http://dl.dropbox.com/u/12579112/logs/dmidecode.log
> http://dl.dropbox.com/u/12579112/logs/interrupts.log
> http://dl.dropbox.com/u/12579112/logs/lspci.log
> 
> I'm willing to help with whatever is needed.
> 
> 
> -- 
> Javier Marcet <jmarcet@gmail.com>
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 30 16:41:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Aug 2012 16:41:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T77oA-0006LK-5K; Thu, 30 Aug 2012 16:41:30 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andreslc@gridcentric.ca>) id 1T77o8-0006Kn-MR
	for xen-devel@lists.xensource.com; Thu, 30 Aug 2012 16:41:29 +0000
X-Env-Sender: andreslc@gridcentric.ca
X-Msg-Ref: server-16.tower-27.messagelabs.com!1346344880!8366192!1
X-Originating-IP: [209.85.223.171]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2588 invoked from network); 30 Aug 2012 16:41:21 -0000
Received: from mail-ie0-f171.google.com (HELO mail-ie0-f171.google.com)
	(209.85.223.171)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Aug 2012 16:41:21 -0000
Received: by ieje14 with SMTP id e14so1181679iej.30
	for <xen-devel@lists.xensource.com>;
	Thu, 30 Aug 2012 09:41:19 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=subject:mime-version:content-type:from:in-reply-to:date:cc
	:content-transfer-encoding:message-id:references:to:x-mailer
	:x-gm-message-state;
	bh=uY0Avq3LWgUkUtyH/s1bcmpMCtTbEc05flmLZsAVf7U=;
	b=lZqZFlHVC9xGmfvEY8nIl5gluUBpGAgV9eaMxfjA+Vwo1R2ggUKuIAGgU9qos1StPL
	qTpJ/RWWdFetnAhxVWdZZDGdpc0K8GO3FoU21zlbWyeO+JUqhNPHTuWancjExeTwwqey
	AdZLKlCnKTQf0/l80/FTA/MN+uvfmHiXf4Xqk0oTv3qK2uXIs7SsRAhsc5/5QtsMiNfa
	it/xrpiTMmVr6d44pY7uhJil0iBQVOTgSKjT8C0XZ/J/PyDLYHIJM7pfGVGPXGEtZxL5
	+hh5iStC4IDwM3OcQIGc4UpBz6gBGeFBjHyrztwT3jOdcG8f/5MO57qqYYxDrR0Y25uj
	QOAg==
Received: by 10.50.184.133 with SMTP id eu5mr1388249igc.22.1346344879660;
	Thu, 30 Aug 2012 09:41:19 -0700 (PDT)
Received: from [192.168.7.210] ([206.223.182.18])
	by mx.google.com with ESMTPS id bp8sm2014068igb.12.2012.08.30.09.41.17
	(version=TLSv1/SSLv3 cipher=OTHER);
	Thu, 30 Aug 2012 09:41:18 -0700 (PDT)
Mime-Version: 1.0 (Apple Message framework v1278)
From: Andres Lagar-Cavilla <andreslc@gridcentric.ca>
In-Reply-To: <1346331492-15027-3-git-send-email-david.vrabel@citrix.com>
Date: Thu, 30 Aug 2012 12:41:24 -0400
Message-Id: <12E7F3C7-86B7-4B6B-8F53-23CCFCEF80FB@gridcentric.ca>
References: <1346331492-15027-1-git-send-email-david.vrabel@citrix.com>
	<1346331492-15027-3-git-send-email-david.vrabel@citrix.com>
To: David Vrabel <david.vrabel@citrix.com>
X-Mailer: Apple Mail (2.1278)
X-Gm-Message-State: ALoCoQkJDFATEb3kpN8yqs+v6zWE6VR0PuYof+3dwC9/Pml9tt7QkrdiWvdtkPygCX8yJRKO1VXP
Cc: xen-devel@lists.xensource.com,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [PATCH 2/2] xen/privcmd: add PRIVCMD_MMAPBATCH_V2
	ioctl
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

David,
The patch looks functionally ok, but I still have two lingering concerns:
- the hideous casting of mfn into err
- why not signal paged out frames for V1

Rather than keep writing English, I wrote some C :)

I also took the liberty of including your Signed-off-by. David and Konrad, let me know what you think, and once we settle on either version we can move on to unit testing this.

Thanks
Andres

commit 3c0c619f11a26b7bc3f12a1c477cf969c25de231
Author: Andres Lagar-Cavilla <andres@lagarcavilla.org>
Date:   Thu Aug 30 12:23:33 2012 -0400

    xen/privcmd: add PRIVCMD_MMAPBATCH_V2 ioctl
    
    PRIVCMD_MMAPBATCH_V2 extends PRIVCMD_MMAPBATCH with an additional
    field for reporting the error code for every frame that could not be
    mapped.  libxc prefers PRIVCMD_MMAPBATCH_V2 over PRIVCMD_MMAPBATCH.
    
    Also expand PRIVCMD_MMAPBATCH to return appropriate error-encoding top nibble
    in the mfn array.
    
    Signed-off-by: David Vrabel <david.vrabel@citrix.com>
    Signed-off-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>

diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
index 85226cb..6562e29 100644
--- a/drivers/xen/privcmd.c
+++ b/drivers/xen/privcmd.c
@@ -76,7 +76,7 @@ static void free_page_list(struct list_head *pages)
  */
 static int gather_array(struct list_head *pagelist,
 			unsigned nelem, size_t size,
-			void __user *data)
+			const void __user *data)
 {
 	unsigned pageidx;
 	void *pagedata;
@@ -246,20 +246,54 @@ struct mmap_batch_state {
 	domid_t domain;
 	unsigned long va;
 	struct vm_area_struct *vma;
+	/* A tristate: 
+	 *      0 for no errors
+	 *      1 if at least one error has happened (and no
+	 *          -ENOENT errors have happened)
+	 *      -ENOENT if at least 1 -ENOENT has happened.
+	 */
 	int err;
 
-	xen_pfn_t __user *user;
+	xen_pfn_t __user *user_mfn;
+	int __user *user_err;
 };
 
 static int mmap_batch_fn(void *data, void *state)
 {
 	xen_pfn_t *mfnp = data;
 	struct mmap_batch_state *st = state;
+	int ret;
+
+	ret = xen_remap_domain_mfn_range(st->vma, st->va & PAGE_MASK, *mfnp, 1,
+					 st->vma->vm_page_prot, st->domain);
+	if (ret < 0) {
+		/*
+		 * V2 provides a user-space (pre-checked for access) user_err
+		 * pointer, in which we store the individual map error codes.
+		 * 
+		 * V1 encodes the error codes in the 32bit top nibble of the 
+		 * mfn (with its known limitations vis-a-vis 64 bit callers).
+		 * 
+		 * In either case, global state.err is zero unless one or more
+		 * individual maps fail with -ENOENT, in which case it is -ENOENT.
+		 *
+		 */
+		if (st->user_err)
+			BUG_ON(__put_user(ret, st->user_err++));
+		else {
+			xen_pfn_t nibble = (ret == -ENOENT) ?
+					PRIVCMD_MMAPBATCH_PAGED_ERROR :
+					PRIVCMD_MMAPBATCH_MFN_ERROR;
+			*mfnp |= nibble;
+		}
 
-	if (xen_remap_domain_mfn_range(st->vma, st->va & PAGE_MASK, *mfnp, 1,
-				       st->vma->vm_page_prot, st->domain) < 0) {
-		*mfnp |= 0xf0000000U;
-		st->err++;
+		if (ret == -ENOENT)
+			st->err = -ENOENT;
+		else {
+			/* Record that at least one error has happened. */
+			if (st->err == 0)
+				st->err = 1;
+		}
 	}
 	st->va += PAGE_SIZE;
 
@@ -271,15 +305,18 @@ static int mmap_return_errors(void *data, void *state)
 	xen_pfn_t *mfnp = data;
 	struct mmap_batch_state *st = state;
 
-	return put_user(*mfnp, st->user++);
+	if (st->user_err == NULL)
+		return __put_user(*mfnp, st->user_mfn++);
+
+	return 0;
 }
 
 static struct vm_operations_struct privcmd_vm_ops;
 
-static long privcmd_ioctl_mmap_batch(void __user *udata)
+static long privcmd_ioctl_mmap_batch(void __user *udata, int version)
 {
 	int ret;
-	struct privcmd_mmapbatch m;
+	struct privcmd_mmapbatch_v2 m;
 	struct mm_struct *mm = current->mm;
 	struct vm_area_struct *vma;
 	unsigned long nr_pages;
@@ -289,15 +326,31 @@ static long privcmd_ioctl_mmap_batch(void __user *udata)
 	if (!xen_initial_domain())
 		return -EPERM;
 
-	if (copy_from_user(&m, udata, sizeof(m)))
-		return -EFAULT;
+	switch (version) {
+	case 1:
+		if (copy_from_user(&m, udata, sizeof(struct privcmd_mmapbatch)))
+			return -EFAULT;
+		/* Returns per-frame error in m.arr. */
+		m.err = NULL;
+		if (!access_ok(VERIFY_WRITE, m.arr, m.num * sizeof(*m.arr)))
+			return -EFAULT;
+		break;
+	case 2:
+		if (copy_from_user(&m, udata, sizeof(struct privcmd_mmapbatch_v2)))
+			return -EFAULT;
+		/* Returns per-frame error code in m.err. */
+		if (!access_ok(VERIFY_WRITE, m.err, m.num * (sizeof(*m.err))))
+			return -EFAULT;
+		break;
+	default:
+		return -EINVAL;
+	}
 
 	nr_pages = m.num;
 	if ((m.num <= 0) || (nr_pages > (LONG_MAX >> PAGE_SHIFT)))
 		return -EINVAL;
 
-	ret = gather_array(&pagelist, m.num, sizeof(xen_pfn_t),
-			   m.arr);
+	ret = gather_array(&pagelist, m.num, sizeof(xen_pfn_t), m.arr);
 
 	if (ret || list_empty(&pagelist))
 		goto out;
@@ -315,22 +368,34 @@ static long privcmd_ioctl_mmap_batch(void __user *udata)
 		goto out;
 	}
 
-	state.domain = m.dom;
-	state.vma = vma;
-	state.va = m.addr;
-	state.err = 0;
+	state.domain    = m.dom;
+	state.vma       = vma;
+	state.va        = m.addr;
+	state.err       = 0;
+	state.user_err  = m.err;
 
-	ret = traverse_pages(m.num, sizeof(xen_pfn_t),
-			     &pagelist, mmap_batch_fn, &state);
+	/* mmap_batch_fn guarantees ret == 0 */
+	BUG_ON(traverse_pages(m.num, sizeof(xen_pfn_t),
+			     &pagelist, mmap_batch_fn, &state));
 
 	up_write(&mm->mmap_sem);
 
-	if (state.err > 0) {
-		state.user = m.arr;
-		ret = traverse_pages(m.num, sizeof(xen_pfn_t),
-			       &pagelist,
-			       mmap_return_errors, &state);
-	}
+	if (state.err) {
+		if (state.err == -ENOENT)
+			ret = -ENOENT;
+		/* V1 still needs to write back nibbles. */
+		if (m.err == NULL)
+		{
+			int efault;
+			state.user_mfn = (xen_pfn_t *)m.arr;
+			efault = traverse_pages(m.num, sizeof(xen_pfn_t),
+						 &pagelist,
+						 mmap_return_errors, &state);
+			if (efault)
+				ret = efault;
+		}
+	} else if (m.err)
+		__clear_user(m.err, m.num * sizeof(*m.err));
 
 out:
 	free_page_list(&pagelist);
@@ -354,7 +419,11 @@ static long privcmd_ioctl(struct file *file,
 		break;
 
 	case IOCTL_PRIVCMD_MMAPBATCH:
-		ret = privcmd_ioctl_mmap_batch(udata);
+		ret = privcmd_ioctl_mmap_batch(udata, 1);
+		break;
+
+	case IOCTL_PRIVCMD_MMAPBATCH_V2:
+		ret = privcmd_ioctl_mmap_batch(udata, 2);
 		break;
 
 	default:
diff --git a/include/xen/privcmd.h b/include/xen/privcmd.h
index 45c1aa1..a853168 100644
--- a/include/xen/privcmd.h
+++ b/include/xen/privcmd.h
@@ -58,13 +58,33 @@ struct privcmd_mmapbatch {
 	int num;     /* number of pages to populate */
 	domid_t dom; /* target domain */
 	__u64 addr;  /* virtual address */
-	xen_pfn_t __user *arr; /* array of mfns - top nibble set on err */
+	xen_pfn_t __user *arr; /* array of mfns - or'd with
+				  PRIVCMD_MMAPBATCH_*_ERROR on err */
+};
+
+#define PRIVCMD_MMAPBATCH_MFN_ERROR     0xf0000000U
+#define PRIVCMD_MMAPBATCH_PAGED_ERROR   0x80000000U
+
+struct privcmd_mmapbatch_v2 {
+	unsigned int num; /* number of pages to populate */
+	domid_t dom;      /* target domain */
+	__u64 addr;       /* virtual address */
+	const xen_pfn_t __user *arr; /* array of mfns */
+	int __user *err;  /* array of error codes */
 };
 
 /*
  * @cmd: IOCTL_PRIVCMD_HYPERCALL
  * @arg: &privcmd_hypercall_t
  * Return: Value returned from execution of the specified hypercall.
+ *
+ * @cmd: IOCTL_PRIVCMD_MMAPBATCH_V2
+ * @arg: &struct privcmd_mmapbatch_v2
+ * Return: 0 on success (i.e., arg->err contains valid error codes for
+ * each frame).  On an error other than a failed frame remap, -1 is
+ * returned and errno is set to EINVAL, EFAULT etc.  As an exception,
+ * if the operation was otherwise successful but any frame failed with
+ * -ENOENT, then -1 is returned and errno is set to ENOENT.
  */
 #define IOCTL_PRIVCMD_HYPERCALL					\
 	_IOC(_IOC_NONE, 'P', 0, sizeof(struct privcmd_hypercall))
@@ -72,5 +92,7 @@ struct privcmd_mmapbatch {
 	_IOC(_IOC_NONE, 'P', 2, sizeof(struct privcmd_mmap))
 #define IOCTL_PRIVCMD_MMAPBATCH					\
 	_IOC(_IOC_NONE, 'P', 3, sizeof(struct privcmd_mmapbatch))
+#define IOCTL_PRIVCMD_MMAPBATCH_V2				\
+	_IOC(_IOC_NONE, 'P', 4, sizeof(struct privcmd_mmapbatch_v2))
 
 #endif /* __LINUX_PUBLIC_PRIVCMD_H__ */
 
On Aug 30, 2012, at 8:58 AM, David Vrabel wrote:

> From: David Vrabel <david.vrabel@citrix.com>
> 
> PRIVCMD_MMAPBATCH_V2 extends PRIVCMD_MMAPBATCH with an additional
> field for reporting the error code for every frame that could not be
> mapped.  libxc prefers PRIVCMD_MMAPBATCH_V2 over PRIVCMD_MMAPBATCH.
> 
> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
> ---
> drivers/xen/privcmd.c |   99 +++++++++++++++++++++++++++++++++++++++---------
> include/xen/privcmd.h |   23 +++++++++++-
> 2 files changed, 102 insertions(+), 20 deletions(-)
> 
> diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
> index ccee0f1..c0e89e7 100644
> --- a/drivers/xen/privcmd.c
> +++ b/drivers/xen/privcmd.c
> @@ -76,7 +76,7 @@ static void free_page_list(struct list_head *pages)
>  */
> static int gather_array(struct list_head *pagelist,
> 			unsigned nelem, size_t size,
> -			void __user *data)
> +			const void __user *data)
> {
> 	unsigned pageidx;
> 	void *pagedata;
> @@ -248,18 +248,37 @@ struct mmap_batch_state {
> 	struct vm_area_struct *vma;
> 	int err;
> 
> -	xen_pfn_t __user *user;
> +	xen_pfn_t __user *user_mfn;
> +	int __user *user_err;
> };
> 
> static int mmap_batch_fn(void *data, void *state)
> {
> 	xen_pfn_t *mfnp = data;
> 	struct mmap_batch_state *st = state;
> +	int ret;
> 
> -	if (xen_remap_domain_mfn_range(st->vma, st->va & PAGE_MASK, *mfnp, 1,
> -				       st->vma->vm_page_prot, st->domain) < 0) {
> -		*mfnp |= 0xf0000000U;
> -		st->err++;
> +	ret = xen_remap_domain_mfn_range(st->vma, st->va & PAGE_MASK, *mfnp, 1,
> +					 st->vma->vm_page_prot, st->domain);
> +	if (ret < 0) {
> +		/*
> +		 * Error reporting is a mess but userspace relies on
> +		 * it behaving this way.
> +		 *
> +		 * V2 needs to a) return the result of each frame's
> +		 * remap; and b) return -ENOENT if any frame failed
> +		 * with -ENOENT.
> +		 *
> +		 * In this first pass the error code is saved by
> +		 * overwriting the mfn and an error is indicated in
> +		 * st->err.
> +		 *
> +		 * The second pass by mmap_return_errors() will write
> +		 * the error codes to user space and get the right
> +		 * ioctl return value.
> +		 */
> +		*(int *)mfnp = ret;
> +		st->err = ret;
> 	}
> 	st->va += PAGE_SIZE;
> 
> @@ -270,16 +289,33 @@ static int mmap_return_errors(void *data, void *state)
> {
> 	xen_pfn_t *mfnp = data;
> 	struct mmap_batch_state *st = state;
> +	int ret;
> +
> +	if (st->user_err) {
> +		int err = *(int *)mfnp;
> +
> +		if (err == -ENOENT)
> +			st->err = err;
> 
> -	return put_user(*mfnp, st->user++);
> +		return __put_user(err, st->user_err++);
> +	} else {
> +		xen_pfn_t mfn;
> +
> +		ret = __get_user(mfn, st->user_mfn);
> +		if (ret < 0)
> +			return ret;
> +
> +		mfn |= PRIVCMD_MMAPBATCH_MFN_ERROR;
> +		return __put_user(mfn, st->user_mfn++);
> +	}
> }
> 
> static struct vm_operations_struct privcmd_vm_ops;
> 
> -static long privcmd_ioctl_mmap_batch(void __user *udata)
> +static long privcmd_ioctl_mmap_batch(void __user *udata, int version)
> {
> 	int ret;
> -	struct privcmd_mmapbatch m;
> +	struct privcmd_mmapbatch_v2 m;
> 	struct mm_struct *mm = current->mm;
> 	struct vm_area_struct *vma;
> 	unsigned long nr_pages;
> @@ -289,15 +325,31 @@ static long privcmd_ioctl_mmap_batch(void __user *udata)
> 	if (!xen_initial_domain())
> 		return -EPERM;
> 
> -	if (copy_from_user(&m, udata, sizeof(m)))
> -		return -EFAULT;
> +	switch (version) {
> +	case 1:
> +		if (copy_from_user(&m, udata, sizeof(struct privcmd_mmapbatch)))
> +			return -EFAULT;
> +		/* Returns per-frame error in m.arr. */
> +		m.err = NULL;
> +		if (!access_ok(VERIFY_WRITE, m.arr, m.num * sizeof(*m.arr)))
> +			return -EFAULT;
> +		break;
> +	case 2:
> +		if (copy_from_user(&m, udata, sizeof(struct privcmd_mmapbatch_v2)))
> +			return -EFAULT;
> +		/* Returns per-frame error code in m.err. */
> +		if (!access_ok(VERIFY_WRITE, m.err, m.num * (sizeof(*m.err))))
> +			return -EFAULT;
> +		break;
> +	default:
> +		return -EINVAL;
> +	}
> 
> 	nr_pages = m.num;
> 	if ((m.num <= 0) || (nr_pages > (LONG_MAX >> PAGE_SHIFT)))
> 		return -EINVAL;
> 
> -	ret = gather_array(&pagelist, m.num, sizeof(xen_pfn_t),
> -			   m.arr);
> +	ret = gather_array(&pagelist, m.num, sizeof(xen_pfn_t), m.arr);
> 
> 	if (ret || list_empty(&pagelist))
> 		goto out;
> @@ -325,12 +377,17 @@ static long privcmd_ioctl_mmap_batch(void __user *udata)
> 
> 	up_write(&mm->mmap_sem);
> 
> -	if (state.err > 0) {
> -		state.user = m.arr;
> +	if (state.err) {
> +		state.err = 0;
> +		state.user_mfn = (xen_pfn_t *)m.arr;
> +		state.user_err = m.err;
> 		ret = traverse_pages(m.num, sizeof(xen_pfn_t),
> -			       &pagelist,
> -			       mmap_return_errors, &state);
> -	}
> +				     &pagelist,
> +				     mmap_return_errors, &state);
> +		if (ret >= 0)
> +			ret = state.err;
> +	} else if (m.err)
> +		__clear_user(m.err, m.num * sizeof(*m.err));
> 
> out:
> 	free_page_list(&pagelist);
> @@ -354,7 +411,11 @@ static long privcmd_ioctl(struct file *file,
> 		break;
> 
> 	case IOCTL_PRIVCMD_MMAPBATCH:
> -		ret = privcmd_ioctl_mmap_batch(udata);
> +		ret = privcmd_ioctl_mmap_batch(udata, 1);
> +		break;
> +
> +	case IOCTL_PRIVCMD_MMAPBATCH_V2:
> +		ret = privcmd_ioctl_mmap_batch(udata, 2);
> 		break;
> 
> 	default:
> diff --git a/include/xen/privcmd.h b/include/xen/privcmd.h
> index 17857fb..f60d75c 100644
> --- a/include/xen/privcmd.h
> +++ b/include/xen/privcmd.h
> @@ -59,13 +59,32 @@ struct privcmd_mmapbatch {
> 	int num;     /* number of pages to populate */
> 	domid_t dom; /* target domain */
> 	__u64 addr;  /* virtual address */
> -	xen_pfn_t __user *arr; /* array of mfns - top nibble set on err */
> +	xen_pfn_t __user *arr; /* array of mfns - or'd with
> +				  PRIVCMD_MMAPBATCH_MFN_ERROR on err */
> +};
> +
> +#define PRIVCMD_MMAPBATCH_MFN_ERROR 0xf0000000U
> +
> +struct privcmd_mmapbatch_v2 {
> +	unsigned int num; /* number of pages to populate */
> +	domid_t dom;      /* target domain */
> +	__u64 addr;       /* virtual address */
> +	const xen_pfn_t __user *arr; /* array of mfns */
> +	int __user *err;  /* array of error codes */
> };
> 
> /*
>  * @cmd: IOCTL_PRIVCMD_HYPERCALL
>  * @arg: &privcmd_hypercall_t
>  * Return: Value returned from execution of the specified hypercall.
> + *
> + * @cmd: IOCTL_PRIVCMD_MMAPBATCH_V2
> + * @arg: &struct privcmd_mmapbatch_v2
> + * Return: 0 on success (i.e., arg->err contains valid error codes for
> + * each frame).  On an error other than a failed frame remap, -1 is
> + * returned and errno is set to EINVAL, EFAULT etc.  As an exception,
> + * if the operation was otherwise successful but any frame failed with
> + * -ENOENT, then -1 is returned and errno is set to ENOENT.
>  */
> #define IOCTL_PRIVCMD_HYPERCALL					\
> 	_IOC(_IOC_NONE, 'P', 0, sizeof(struct privcmd_hypercall))
> @@ -73,5 +92,7 @@ struct privcmd_mmapbatch {
> 	_IOC(_IOC_NONE, 'P', 2, sizeof(struct privcmd_mmap))
> #define IOCTL_PRIVCMD_MMAPBATCH					\
> 	_IOC(_IOC_NONE, 'P', 3, sizeof(struct privcmd_mmapbatch))
> +#define IOCTL_PRIVCMD_MMAPBATCH_V2				\
> +	_IOC(_IOC_NONE, 'P', 4, sizeof(struct privcmd_mmapbatch_v2))
> 
> #endif /* __LINUX_PUBLIC_PRIVCMD_H__ */
> -- 
> 1.7.2.5
> 
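Editor's note on the V1 error encoding in the patch quoted above: a caller that stays on IOCTL_PRIVCMD_MMAPBATCH must check each returned mfn for the or'd-in error marker. A minimal, self-contained sketch follows; the helper name and the 64-bit xen_pfn_t typedef are assumptions for illustration, while the constant matches the quoted include/xen/privcmd.h hunk:

```c
#include <stdint.h>

/* Constant from the quoted include/xen/privcmd.h hunk. */
#define PRIVCMD_MMAPBATCH_MFN_ERROR 0xf0000000U

typedef uint64_t xen_pfn_t; /* assumption: 64-bit pfn, as on x86-64 */

/* Hypothetical helper: did the V1 ioctl flag this frame as failed?
 * The kernel or's PRIVCMD_MMAPBATCH_MFN_ERROR into the mfn, so the
 * check is a mask-and-compare on bits 28-31. */
static int v1_frame_failed(xen_pfn_t mfn)
{
	return (mfn & PRIVCMD_MMAPBATCH_MFN_ERROR) ==
	       PRIVCMD_MMAPBATCH_MFN_ERROR;
}
```

This is also where the 64-bit caveat bites: a genuine mfn whose bits 28-31 happen to be all set is indistinguishable from an error, which is part of the motivation for V2's separate err array.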


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 30 16:41:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Aug 2012 16:41:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T77oA-0006LK-5K; Thu, 30 Aug 2012 16:41:30 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andreslc@gridcentric.ca>) id 1T77o8-0006Kn-MR
	for xen-devel@lists.xensource.com; Thu, 30 Aug 2012 16:41:29 +0000
X-Env-Sender: andreslc@gridcentric.ca
X-Msg-Ref: server-16.tower-27.messagelabs.com!1346344880!8366192!1
X-Originating-IP: [209.85.223.171]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2588 invoked from network); 30 Aug 2012 16:41:21 -0000
Received: from mail-ie0-f171.google.com (HELO mail-ie0-f171.google.com)
	(209.85.223.171)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Aug 2012 16:41:21 -0000
Received: by ieje14 with SMTP id e14so1181679iej.30
	for <xen-devel@lists.xensource.com>;
	Thu, 30 Aug 2012 09:41:19 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=subject:mime-version:content-type:from:in-reply-to:date:cc
	:content-transfer-encoding:message-id:references:to:x-mailer
	:x-gm-message-state;
	bh=uY0Avq3LWgUkUtyH/s1bcmpMCtTbEc05flmLZsAVf7U=;
	b=lZqZFlHVC9xGmfvEY8nIl5gluUBpGAgV9eaMxfjA+Vwo1R2ggUKuIAGgU9qos1StPL
	qTpJ/RWWdFetnAhxVWdZZDGdpc0K8GO3FoU21zlbWyeO+JUqhNPHTuWancjExeTwwqey
	AdZLKlCnKTQf0/l80/FTA/MN+uvfmHiXf4Xqk0oTv3qK2uXIs7SsRAhsc5/5QtsMiNfa
	it/xrpiTMmVr6d44pY7uhJil0iBQVOTgSKjT8C0XZ/J/PyDLYHIJM7pfGVGPXGEtZxL5
	+hh5iStC4IDwM3OcQIGc4UpBz6gBGeFBjHyrztwT3jOdcG8f/5MO57qqYYxDrR0Y25uj
	QOAg==
Received: by 10.50.184.133 with SMTP id eu5mr1388249igc.22.1346344879660;
	Thu, 30 Aug 2012 09:41:19 -0700 (PDT)
Received: from [192.168.7.210] ([206.223.182.18])
	by mx.google.com with ESMTPS id bp8sm2014068igb.12.2012.08.30.09.41.17
	(version=TLSv1/SSLv3 cipher=OTHER);
	Thu, 30 Aug 2012 09:41:18 -0700 (PDT)
Mime-Version: 1.0 (Apple Message framework v1278)
From: Andres Lagar-Cavilla <andreslc@gridcentric.ca>
In-Reply-To: <1346331492-15027-3-git-send-email-david.vrabel@citrix.com>
Date: Thu, 30 Aug 2012 12:41:24 -0400
Message-Id: <12E7F3C7-86B7-4B6B-8F53-23CCFCEF80FB@gridcentric.ca>
References: <1346331492-15027-1-git-send-email-david.vrabel@citrix.com>
	<1346331492-15027-3-git-send-email-david.vrabel@citrix.com>
To: David Vrabel <david.vrabel@citrix.com>
X-Mailer: Apple Mail (2.1278)
X-Gm-Message-State: ALoCoQkJDFATEb3kpN8yqs+v6zWE6VR0PuYof+3dwC9/Pml9tt7QkrdiWvdtkPygCX8yJRKO1VXP
Cc: xen-devel@lists.xensource.com,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [PATCH 2/2] xen/privcmd: add PRIVCMD_MMAPBATCH_V2
	ioctl
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

David,
The patch looks functionally ok, but I still have two lingering concerns:
- the hideous casting of mfn into err
- why not signal paged out frames for V1

Rather than keep writing English, I wrote some C :)

I took the liberty of including your Signed-off-by. David & Konrad, let me know what you think; once we settle on one version, we can move on to unit testing.


Thanks
Andres

commit 3c0c619f11a26b7bc3f12a1c477cf969c25de231
Author: Andres Lagar-Cavilla <andres@lagarcavilla.org>
Date:   Thu Aug 30 12:23:33 2012 -0400

    xen/privcmd: add PRIVCMD_MMAPBATCH_V2 ioctl
    
    PRIVCMD_MMAPBATCH_V2 extends PRIVCMD_MMAPBATCH with an additional
    field for reporting the error code for every frame that could not be
    mapped.  libxc prefers PRIVCMD_MMAPBATCH_V2 over PRIVCMD_MMAPBATCH.
    
    Also expand PRIVCMD_MMAPBATCH to encode an appropriate error in the
    top nibble of each failed mfn in the array.
    
    Signed-off-by: David Vrabel <david.vrabel@citrix.com>
    Signed-off-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>

diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
index 85226cb..6562e29 100644
--- a/drivers/xen/privcmd.c
+++ b/drivers/xen/privcmd.c
@@ -76,7 +76,7 @@ static void free_page_list(struct list_head *pages)
  */
 static int gather_array(struct list_head *pagelist,
 			unsigned nelem, size_t size,
-			void __user *data)
+			const void __user *data)
 {
 	unsigned pageidx;
 	void *pagedata;
@@ -246,20 +246,54 @@ struct mmap_batch_state {
 	domid_t domain;
 	unsigned long va;
 	struct vm_area_struct *vma;
+	/* A tristate:
+	 *      0 for no errors
+	 *      1 if at least one error has happened (and no
+	 *          -ENOENT errors have happened)
+	 *      -ENOENT if at least one -ENOENT has happened.
+	 */
 	int err;
 
-	xen_pfn_t __user *user;
+	xen_pfn_t __user *user_mfn;
+	int __user *user_err;
 };
 
 static int mmap_batch_fn(void *data, void *state)
 {
 	xen_pfn_t *mfnp = data;
 	struct mmap_batch_state *st = state;
+	int ret;
+
+	ret = xen_remap_domain_mfn_range(st->vma, st->va & PAGE_MASK, *mfnp, 1,
+					 st->vma->vm_page_prot, st->domain);
+	if (ret < 0) {
+		/*
+		 * V2 provides a user-space (pre-checked for access) user_err
+		 * pointer, in which we store the individual map error codes.
+		 *
+		 * V1 encodes the error codes in the top nibble of the 32-bit
+		 * mfn (with its known limitations vis-a-vis 64-bit callers).
+		 *
+		 * In either case, state.err becomes -ENOENT if one or more
+		 * individual maps fail with -ENOENT, and otherwise non-zero
+		 * if any map fails.
+		 */
+		if (st->user_err)
+			BUG_ON(__put_user(ret, st->user_err++));
+		else {
+			xen_pfn_t nibble = (ret == -ENOENT) ?
+					PRIVCMD_MMAPBATCH_PAGED_ERROR :
+					PRIVCMD_MMAPBATCH_MFN_ERROR;
+			*mfnp |= nibble;
+		}
 
-	if (xen_remap_domain_mfn_range(st->vma, st->va & PAGE_MASK, *mfnp, 1,
-				       st->vma->vm_page_prot, st->domain) < 0) {
-		*mfnp |= 0xf0000000U;
-		st->err++;
+		if (ret == -ENOENT)
+			st->err = -ENOENT;
+		else {
+			/* Record that at least one error has happened. */
+			if (st->err == 0)
+				st->err = 1;
+		}
 	}
 	st->va += PAGE_SIZE;
 
@@ -271,15 +305,18 @@ static int mmap_return_errors(void *data, void *state)
 	xen_pfn_t *mfnp = data;
 	struct mmap_batch_state *st = state;
 
-	return put_user(*mfnp, st->user++);
+	if (st->user_err == NULL)
+		return __put_user(*mfnp, st->user_mfn++);
+
+	return 0;
 }
 
 static struct vm_operations_struct privcmd_vm_ops;
 
-static long privcmd_ioctl_mmap_batch(void __user *udata)
+static long privcmd_ioctl_mmap_batch(void __user *udata, int version)
 {
 	int ret;
-	struct privcmd_mmapbatch m;
+	struct privcmd_mmapbatch_v2 m;
 	struct mm_struct *mm = current->mm;
 	struct vm_area_struct *vma;
 	unsigned long nr_pages;
@@ -289,15 +326,31 @@ static long privcmd_ioctl_mmap_batch(void __user *udata)
 	if (!xen_initial_domain())
 		return -EPERM;
 
-	if (copy_from_user(&m, udata, sizeof(m)))
-		return -EFAULT;
+	switch (version) {
+	case 1:
+		if (copy_from_user(&m, udata, sizeof(struct privcmd_mmapbatch)))
+			return -EFAULT;
+		/* Returns per-frame error in m.arr. */
+		m.err = NULL;
+		if (!access_ok(VERIFY_WRITE, m.arr, m.num * sizeof(*m.arr)))
+			return -EFAULT;
+		break;
+	case 2:
+		if (copy_from_user(&m, udata, sizeof(struct privcmd_mmapbatch_v2)))
+			return -EFAULT;
+		/* Returns per-frame error code in m.err. */
+		if (!access_ok(VERIFY_WRITE, m.err, m.num * (sizeof(*m.err))))
+			return -EFAULT;
+		break;
+	default:
+		return -EINVAL;
+	}
 
 	nr_pages = m.num;
 	if ((m.num <= 0) || (nr_pages > (LONG_MAX >> PAGE_SHIFT)))
 		return -EINVAL;
 
-	ret = gather_array(&pagelist, m.num, sizeof(xen_pfn_t),
-			   m.arr);
+	ret = gather_array(&pagelist, m.num, sizeof(xen_pfn_t), m.arr);
 
 	if (ret || list_empty(&pagelist))
 		goto out;
@@ -315,22 +368,34 @@ static long privcmd_ioctl_mmap_batch(void __user *udata)
 		goto out;
 	}
 
-	state.domain = m.dom;
-	state.vma = vma;
-	state.va = m.addr;
-	state.err = 0;
+	state.domain    = m.dom;
+	state.vma       = vma;
+	state.va        = m.addr;
+	state.err       = 0;
+	state.user_err  = m.err;
 
-	ret = traverse_pages(m.num, sizeof(xen_pfn_t),
-			     &pagelist, mmap_batch_fn, &state);
+	/* mmap_batch_fn guarantees ret == 0 */
+	BUG_ON(traverse_pages(m.num, sizeof(xen_pfn_t),
+			     &pagelist, mmap_batch_fn, &state));
 
 	up_write(&mm->mmap_sem);
 
-	if (state.err > 0) {
-		state.user = m.arr;
-		ret = traverse_pages(m.num, sizeof(xen_pfn_t),
-			       &pagelist,
-			       mmap_return_errors, &state);
-	}
+	if (state.err) {
+		if (state.err == -ENOENT)
+			ret = -ENOENT;
+		/* V1 still needs to write back nibbles. */
+		if (m.err == NULL)
+		{
+			int efault;
+			state.user_mfn = (xen_pfn_t *)m.arr;
+			efault = traverse_pages(m.num, sizeof(xen_pfn_t),
+						 &pagelist,
+						 mmap_return_errors, &state);
+			if (efault)
+				ret = efault;
+		}
+	} else if (m.err)
+		__clear_user(m.err, m.num * sizeof(*m.err));
 
 out:
 	free_page_list(&pagelist);
@@ -354,7 +419,11 @@ static long privcmd_ioctl(struct file *file,
 		break;
 
 	case IOCTL_PRIVCMD_MMAPBATCH:
-		ret = privcmd_ioctl_mmap_batch(udata);
+		ret = privcmd_ioctl_mmap_batch(udata, 1);
+		break;
+
+	case IOCTL_PRIVCMD_MMAPBATCH_V2:
+		ret = privcmd_ioctl_mmap_batch(udata, 2);
 		break;
 
 	default:
diff --git a/include/xen/privcmd.h b/include/xen/privcmd.h
index 45c1aa1..a853168 100644
--- a/include/xen/privcmd.h
+++ b/include/xen/privcmd.h
@@ -58,13 +58,33 @@ struct privcmd_mmapbatch {
 	int num;     /* number of pages to populate */
 	domid_t dom; /* target domain */
 	__u64 addr;  /* virtual address */
-	xen_pfn_t __user *arr; /* array of mfns - top nibble set on err */
+	xen_pfn_t __user *arr; /* array of mfns - or'd with
+				  PRIVCMD_MMAPBATCH_*_ERROR on err */
+};
+
+#define PRIVCMD_MMAPBATCH_MFN_ERROR     0xf0000000U
+#define PRIVCMD_MMAPBATCH_PAGED_ERROR   0x80000000U
+
+struct privcmd_mmapbatch_v2 {
+	unsigned int num; /* number of pages to populate */
+	domid_t dom;      /* target domain */
+	__u64 addr;       /* virtual address */
+	const xen_pfn_t __user *arr; /* array of mfns */
+	int __user *err;  /* array of error codes */
 };
 
 /*
  * @cmd: IOCTL_PRIVCMD_HYPERCALL
  * @arg: &privcmd_hypercall_t
  * Return: Value returned from execution of the specified hypercall.
+ *
+ * @cmd: IOCTL_PRIVCMD_MMAPBATCH_V2
+ * @arg: &struct privcmd_mmapbatch_v2
+ * Return: 0 on success (i.e., arg->err contains valid error codes for
+ * each frame).  On an error other than a failed frame remap, -1 is
+ * returned and errno is set to EINVAL, EFAULT etc.  As an exception,
+ * if the operation was otherwise successful but any frame failed with
+ * -ENOENT, then -1 is returned and errno is set to ENOENT.
  */
 #define IOCTL_PRIVCMD_HYPERCALL					\
 	_IOC(_IOC_NONE, 'P', 0, sizeof(struct privcmd_hypercall))
@@ -72,5 +92,7 @@ struct privcmd_mmapbatch {
 	_IOC(_IOC_NONE, 'P', 2, sizeof(struct privcmd_mmap))
 #define IOCTL_PRIVCMD_MMAPBATCH					\
 	_IOC(_IOC_NONE, 'P', 3, sizeof(struct privcmd_mmapbatch))
+#define IOCTL_PRIVCMD_MMAPBATCH_V2				\
+	_IOC(_IOC_NONE, 'P', 4, sizeof(struct privcmd_mmapbatch_v2))
 
 #endif /* __LINUX_PUBLIC_PRIVCMD_H__ */
 
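Editor's note: the tristate bookkeeping and the V1 nibble selection above condense into a pair of pure helpers. These are illustrative only (the helper names are invented and are not part of the patch); the constants and the sticky -ENOENT rule follow the code and comments in mmap_batch_fn:

```c
#include <errno.h>
#include <stdint.h>

/* Constants from the include/xen/privcmd.h hunk above. */
#define PRIVCMD_MMAPBATCH_MFN_ERROR     0xf0000000U
#define PRIVCMD_MMAPBATCH_PAGED_ERROR   0x80000000U

/* Hypothetical helper: which nibble does a V1 caller see for a failed
 * frame?  -ENOENT (paged out) gets its own marker so userspace can
 * retry just those frames; everything else is a generic error. */
static uint32_t v1_error_nibble(int ret)
{
	return (ret == -ENOENT) ? PRIVCMD_MMAPBATCH_PAGED_ERROR
				: PRIVCMD_MMAPBATCH_MFN_ERROR;
}

/* Hypothetical helper mirroring the st->err tristate update:
 * 0 = no errors yet, 1 = at least one non-ENOENT error,
 * -ENOENT = sticky once any frame fails with -ENOENT. */
static int update_batch_err(int err, int ret)
{
	if (ret == 0)
		return err;
	if (ret == -ENOENT || err == -ENOENT)
		return -ENOENT;
	return 1;
}
```

Folding update_batch_err over the per-frame remap results yields exactly the value privcmd_ioctl_mmap_batch uses to pick the ioctl return code.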
On Aug 30, 2012, at 8:58 AM, David Vrabel wrote:

> From: David Vrabel <david.vrabel@citrix.com>
> 
> PRIVCMD_MMAPBATCH_V2 extends PRIVCMD_MMAPBATCH with an additional
> field for reporting the error code for every frame that could not be
> mapped.  libxc prefers PRIVCMD_MMAPBATCH_V2 over PRIVCMD_MMAPBATCH.
> 
> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
> ---
> drivers/xen/privcmd.c |   99 +++++++++++++++++++++++++++++++++++++++---------
> include/xen/privcmd.h |   23 +++++++++++-
> 2 files changed, 102 insertions(+), 20 deletions(-)
> 
> diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
> index ccee0f1..c0e89e7 100644
> --- a/drivers/xen/privcmd.c
> +++ b/drivers/xen/privcmd.c
> @@ -76,7 +76,7 @@ static void free_page_list(struct list_head *pages)
>  */
> static int gather_array(struct list_head *pagelist,
> 			unsigned nelem, size_t size,
> -			void __user *data)
> +			const void __user *data)
> {
> 	unsigned pageidx;
> 	void *pagedata;
> @@ -248,18 +248,37 @@ struct mmap_batch_state {
> 	struct vm_area_struct *vma;
> 	int err;
> 
> -	xen_pfn_t __user *user;
> +	xen_pfn_t __user *user_mfn;
> +	int __user *user_err;
> };
> 
> static int mmap_batch_fn(void *data, void *state)
> {
> 	xen_pfn_t *mfnp = data;
> 	struct mmap_batch_state *st = state;
> +	int ret;
> 
> -	if (xen_remap_domain_mfn_range(st->vma, st->va & PAGE_MASK, *mfnp, 1,
> -				       st->vma->vm_page_prot, st->domain) < 0) {
> -		*mfnp |= 0xf0000000U;
> -		st->err++;
> +	ret = xen_remap_domain_mfn_range(st->vma, st->va & PAGE_MASK, *mfnp, 1,
> +					 st->vma->vm_page_prot, st->domain);
> +	if (ret < 0) {
> +		/*
> +		 * Error reporting is a mess but userspace relies on
> +		 * it behaving this way.
> +		 *
> +		 * V2 needs to a) return the result of each frame's
> +		 * remap; and b) return -ENOENT if any frame failed
> +		 * with -ENOENT.
> +		 *
> +		 * In this first pass the error code is saved by
> +		 * overwriting the mfn and an error is indicated in
> +		 * st->err.
> +		 *
> +		 * The second pass by mmap_return_errors() will write
> +		 * the error codes to user space and get the right
> +		 * ioctl return value.
> +		 */
> +		*(int *)mfnp = ret;
> +		st->err = ret;
> 	}
> 	st->va += PAGE_SIZE;
> 
> @@ -270,16 +289,33 @@ static int mmap_return_errors(void *data, void *state)
> {
> 	xen_pfn_t *mfnp = data;
> 	struct mmap_batch_state *st = state;
> +	int ret;
> +
> +	if (st->user_err) {
> +		int err = *(int *)mfnp;
> +
> +		if (err == -ENOENT)
> +			st->err = err;
> 
> -	return put_user(*mfnp, st->user++);
> +		return __put_user(err, st->user_err++);
> +	} else {
> +		xen_pfn_t mfn;
> +
> +		ret = __get_user(mfn, st->user_mfn);
> +		if (ret < 0)
> +			return ret;
> +
> +		mfn |= PRIVCMD_MMAPBATCH_MFN_ERROR;
> +		return __put_user(mfn, st->user_mfn++);
> +	}
> }
> 
> static struct vm_operations_struct privcmd_vm_ops;
> 
> -static long privcmd_ioctl_mmap_batch(void __user *udata)
> +static long privcmd_ioctl_mmap_batch(void __user *udata, int version)
> {
> 	int ret;
> -	struct privcmd_mmapbatch m;
> +	struct privcmd_mmapbatch_v2 m;
> 	struct mm_struct *mm = current->mm;
> 	struct vm_area_struct *vma;
> 	unsigned long nr_pages;
> @@ -289,15 +325,31 @@ static long privcmd_ioctl_mmap_batch(void __user *udata)
> 	if (!xen_initial_domain())
> 		return -EPERM;
> 
> -	if (copy_from_user(&m, udata, sizeof(m)))
> -		return -EFAULT;
> +	switch (version) {
> +	case 1:
> +		if (copy_from_user(&m, udata, sizeof(struct privcmd_mmapbatch)))
> +			return -EFAULT;
> +		/* Returns per-frame error in m.arr. */
> +		m.err = NULL;
> +		if (!access_ok(VERIFY_WRITE, m.arr, m.num * sizeof(*m.arr)))
> +			return -EFAULT;
> +		break;
> +	case 2:
> +		if (copy_from_user(&m, udata, sizeof(struct privcmd_mmapbatch_v2)))
> +			return -EFAULT;
> +		/* Returns per-frame error code in m.err. */
> +		if (!access_ok(VERIFY_WRITE, m.err, m.num * (sizeof(*m.err))))
> +			return -EFAULT;
> +		break;
> +	default:
> +		return -EINVAL;
> +	}
> 
> 	nr_pages = m.num;
> 	if ((m.num <= 0) || (nr_pages > (LONG_MAX >> PAGE_SHIFT)))
> 		return -EINVAL;
> 
> -	ret = gather_array(&pagelist, m.num, sizeof(xen_pfn_t),
> -			   m.arr);
> +	ret = gather_array(&pagelist, m.num, sizeof(xen_pfn_t), m.arr);
> 
> 	if (ret || list_empty(&pagelist))
> 		goto out;
> @@ -325,12 +377,17 @@ static long privcmd_ioctl_mmap_batch(void __user *udata)
> 
> 	up_write(&mm->mmap_sem);
> 
> -	if (state.err > 0) {
> -		state.user = m.arr;
> +	if (state.err) {
> +		state.err = 0;
> +		state.user_mfn = (xen_pfn_t *)m.arr;
> +		state.user_err = m.err;
> 		ret = traverse_pages(m.num, sizeof(xen_pfn_t),
> -			       &pagelist,
> -			       mmap_return_errors, &state);
> -	}
> +				     &pagelist,
> +				     mmap_return_errors, &state);
> +		if (ret >= 0)
> +			ret = state.err;
> +	} else if (m.err)
> +		__clear_user(m.err, m.num * sizeof(*m.err));
> 
> out:
> 	free_page_list(&pagelist);
> @@ -354,7 +411,11 @@ static long privcmd_ioctl(struct file *file,
> 		break;
> 
> 	case IOCTL_PRIVCMD_MMAPBATCH:
> -		ret = privcmd_ioctl_mmap_batch(udata);
> +		ret = privcmd_ioctl_mmap_batch(udata, 1);
> +		break;
> +
> +	case IOCTL_PRIVCMD_MMAPBATCH_V2:
> +		ret = privcmd_ioctl_mmap_batch(udata, 2);
> 		break;
> 
> 	default:
> diff --git a/include/xen/privcmd.h b/include/xen/privcmd.h
> index 17857fb..f60d75c 100644
> --- a/include/xen/privcmd.h
> +++ b/include/xen/privcmd.h
> @@ -59,13 +59,32 @@ struct privcmd_mmapbatch {
> 	int num;     /* number of pages to populate */
> 	domid_t dom; /* target domain */
> 	__u64 addr;  /* virtual address */
> -	xen_pfn_t __user *arr; /* array of mfns - top nibble set on err */
> +	xen_pfn_t __user *arr; /* array of mfns - or'd with
> +				  PRIVCMD_MMAPBATCH_MFN_ERROR on err */
> +};
> +
> +#define PRIVCMD_MMAPBATCH_MFN_ERROR 0xf0000000U
> +
> +struct privcmd_mmapbatch_v2 {
> +	unsigned int num; /* number of pages to populate */
> +	domid_t dom;      /* target domain */
> +	__u64 addr;       /* virtual address */
> +	const xen_pfn_t __user *arr; /* array of mfns */
> +	int __user *err;  /* array of error codes */
> };
> 
> /*
>  * @cmd: IOCTL_PRIVCMD_HYPERCALL
>  * @arg: &privcmd_hypercall_t
>  * Return: Value returned from execution of the specified hypercall.
> + *
> + * @cmd: IOCTL_PRIVCMD_MMAPBATCH_V2
> + * @arg: &struct privcmd_mmapbatch_v2
> + * Return: 0 on success (i.e., arg->err contains valid error codes for
> + * each frame).  On an error other than a failed frame remap, -1 is
> + * returned and errno is set to EINVAL, EFAULT etc.  As an exception,
> + * if the operation was otherwise successful but any frame failed with
> + * -ENOENT, then -1 is returned and errno is set to ENOENT.
>  */
> #define IOCTL_PRIVCMD_HYPERCALL					\
> 	_IOC(_IOC_NONE, 'P', 0, sizeof(struct privcmd_hypercall))
> @@ -73,5 +92,7 @@ struct privcmd_mmapbatch {
> 	_IOC(_IOC_NONE, 'P', 2, sizeof(struct privcmd_mmap))
> #define IOCTL_PRIVCMD_MMAPBATCH					\
> 	_IOC(_IOC_NONE, 'P', 3, sizeof(struct privcmd_mmapbatch))
> +#define IOCTL_PRIVCMD_MMAPBATCH_V2				\
> +	_IOC(_IOC_NONE, 'P', 4, sizeof(struct privcmd_mmapbatch_v2))
> 
> #endif /* __LINUX_PUBLIC_PRIVCMD_H__ */
> -- 
> 1.7.2.5
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 30 16:43:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Aug 2012 16:43:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T77qD-0006VB-MX; Thu, 30 Aug 2012 16:43:37 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <msw@amazon.com>) id 1T77qC-0006V1-KO
	for xen-devel@lists.xen.org; Thu, 30 Aug 2012 16:43:36 +0000
Received: from [85.158.139.83:48416] by server-5.bemta-5.messagelabs.com id
	19/AF-30514-7389F305; Thu, 30 Aug 2012 16:43:35 +0000
X-Env-Sender: msw@amazon.com
X-Msg-Ref: server-6.tower-182.messagelabs.com!1346345012!23893622!1
X-Originating-IP: [207.171.184.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA3LjE3MS4xODQuMjUgPT4gMzEzODc1\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4064 invoked from network); 30 Aug 2012 16:43:33 -0000
Received: from smtp-fw-9101.amazon.com (HELO smtp-fw-9101.amazon.com)
	(207.171.184.25)
	by server-6.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Aug 2012 16:43:33 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=amazon.com; i=msw@amazon.com; q=dns/txt; s=rte02;
	t=1346345013; x=1377881013;
	h=date:from:to:cc:subject:message-id:references:
	mime-version:in-reply-to;
	bh=DT6EMMt1lwtN8WaI2Ktm7+G9JB1biKjAw2J27/ySL3c=;
	b=Mw/k9nmWG3b127A1jWuL9qe2JUgnsACbLHf42rcLiNCK7NRSR+xPU04J
	goIHz4cj+N8r/HfWwGGWL8+kAFgXCw==;
X-IronPort-AV: E=Sophos;i="4.80,341,1344211200"; d="scan'208";a="1018715626"
Received: from smtp-in-9002.sea19.amazon.com ([10.186.174.20])
	by smtp-border-fw-out-9101.sea19.amazon.com with
	ESMTP/TLS/DHE-RSA-AES256-SHA; 30 Aug 2012 16:42:05 +0000
Received: from ex10-hub-31006.ant.amazon.com (ex10-hub-31006.sea31.amazon.com
	[10.185.176.13])
	by smtp-in-9002.sea19.amazon.com (8.13.8/8.13.8) with ESMTP id
	q7UGg5vp003922
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=FAIL);
	Thu, 30 Aug 2012 16:42:05 GMT
Received: from u002268147cd4502c336d.ant.amazon.com (10.224.80.34) by
	ex10-hub-31006.ant.amazon.com (10.185.176.13) with Microsoft SMTP
	Server id 14.2.247.3; Thu, 30 Aug 2012 09:41:57 -0700
Received: by u002268147cd4502c336d.ant.amazon.com (sSMTP sendmail emulation); 
	Thu, 30 Aug 2012 09:41:59 -0700
Date: Thu, 30 Aug 2012 09:41:59 -0700
From: Matt Wilson <msw@amazon.com>
To: Keir Fraser <keir.xen@gmail.com>
Message-ID: <20120830164158.GA2955@u002268147cd4502c336d.ant.amazon.com>
References: <20543.35220.572436.112964@mariner.uk.xensource.com>
	<CC6549A1.3D42A%keir.xen@gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CC6549A1.3D42A%keir.xen@gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 2 of 3] docs: use elinks to format
 markdown-generated html to text
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Aug 30, 2012 at 04:47:29PM +0100, Keir Fraser wrote:
> On 30/08/2012 16:41, "Ian Jackson" <Ian.Jackson@eu.citrix.com> wrote:
> 
> > Matt Wilson writes ("[Xen-devel] [PATCH 2 of 3] docs: use elinks to format
> > markdown-generated html to text"):
> >> Markdown, while easy to read and write, isn't the most consumable
> >> format for users reading documentation on a terminal. This patch uses
> >> lynx to format markdown produced HTML into text files.
> > ...
> >>  txt/%.txt: %.markdown
> >> - $(INSTALL_DIR) $(@D)
> >> - cp $< $@.tmp
> >> - $(call move-if-changed,$@.tmp,$@)
> >> + @$(INSTALL_DIR) $(@D)
> >> + set -e ; \
> >> + if which $(MARKDOWN) >/dev/null 2>&1 && \
> >> +  which $(HTMLDUMP) >/dev/null 2>&1 ; then \
> >> +  echo "Running markdown to generate $*.txt ... "; \
> > 
> > So now we have two efforts to try to find markdown, one in configure
> > and one here.
> > 
> > Keir, would it be OK if we simply declared that you must run configure
> > to "make docs" ?
> 
> Yes!

Great! The confusing bit of having these tools in two places was only
to retain the ability to run 'make -C docs' without running
./configure. I'll resubmit with this cleaned up.

Matt

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 30 16:49:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Aug 2012 16:49:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T77w0-0006oD-Lr; Thu, 30 Aug 2012 16:49:36 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T77vz-0006o8-32
	for xen-devel@lists.xen.org; Thu, 30 Aug 2012 16:49:35 +0000
Received: from [85.158.143.35:62096] by server-1.bemta-4.messagelabs.com id
	C7/C5-12504-E999F305; Thu, 30 Aug 2012 16:49:34 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1346345373!13422628!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEwOTA=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31749 invoked from network); 30 Aug 2012 16:49:33 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Aug 2012 16:49:33 -0000
X-IronPort-AV: E=Sophos;i="4.80,341,1344211200"; d="scan'208";a="14275210"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	30 Aug 2012 16:49:32 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Thu, 30 Aug 2012 17:49:32 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1T77vw-0002IX-H5; Thu, 30 Aug 2012 16:49:32 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1T77vw-0007II-Db;
	Thu, 30 Aug 2012 17:49:32 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20543.39324.318867.373709@mariner.uk.xensource.com>
Date: Thu, 30 Aug 2012 17:49:32 +0100
To: Ian Campbell <Ian.Campbell@citrix.com>, George Dunlap <dunlapg@umich.edu>, 
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Newsgroups: chiark.mail.xen.devel
In-Reply-To: <20543.35421.80723.497439@mariner.uk.xensource.com>
References: <CAFLBxZaEci0mOcDCgFX9zk=wh3z4Nf1LD5E5Fcy7Y3=ioDAM=g@mail.gmail.com>
	<1345048547.5926.245.camel@zakaz.uk.xensource.com>
	<20543.35421.80723.497439@mariner.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Subject: Re: [Xen-devel] [TESTDAY] xl cpupool-create segfaults if given
 invalid configuration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Jackson writes ("Re: [Xen-devel] [TESTDAY] xl cpupool-create segfaults if given invalid configuration"):
> I don't think this is correct.  It may happen to work with this
> version of bison but I don't think you're allowed to assign to $3.

I think this fixes it.

Ian.

From: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [PATCH] libxl: fix double free on some config parser errors

If libxlu_cfg_y.y encountered a config file error, the code generated
by bison would sometimes _both_ run the %destructor _and_ call
xlu__cfg_set_store for the same XLU_ConfigSetting* semantic value.
The result would be a double free.

This appears to be because of the use of a mid-rule action.  There is
some discussion of the problems with destructors and mid-rule action
error handling in "(bison)Mid-Rule Actions".  This area is complex and
best avoided.

So fix the bug by abolishing the use of a mid-rule action, which was
in any case not necessary here.

Also while we are there rename the nonterminal rule "setting" to
"assignment", to avoid confusion with the token type "setting", which
had an identical name in a different namespace.  This was especially
confusing because the nonterminal "setting" did not have "setting" as
the type of its semantic value!  (In fact the nonterminal, now called
"assignment", does not have a value so it does not have a value type.)

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>

---
 tools/libxl/libxlu_cfg_y.c |  120 +++++++++++++++++++++-----------------------
 tools/libxl/libxlu_cfg_y.y |    6 +-
 2 files changed, 61 insertions(+), 65 deletions(-)

diff --git a/tools/libxl/libxlu_cfg_y.c b/tools/libxl/libxlu_cfg_y.c
index 7d1f6c8..5214386 100644
--- a/tools/libxl/libxlu_cfg_y.c
+++ b/tools/libxl/libxlu_cfg_y.c
@@ -380,11 +380,11 @@ union yyalloc
 /* YYNTOKENS -- Number of terminals.  */
 #define YYNTOKENS  12
 /* YYNNTS -- Number of nonterminals.  */
-#define YYNNTS  10
+#define YYNNTS  9
 /* YYNRULES -- Number of rules.  */
-#define YYNRULES  20
+#define YYNRULES  19
 /* YYNRULES -- Number of states.  */
-#define YYNSTATES  29
+#define YYNSTATES  28
 
 /* YYTRANSLATE(YYLEX) -- Bison symbol number corresponding to YYLEX.  */
 #define YYUNDEFTOK  2
@@ -430,28 +430,26 @@ static const yytype_uint8 yytranslate[] =
    YYRHS.  */
 static const yytype_uint8 yyprhs[] =
 {
-       0,     0,     3,     4,     7,     8,    14,    16,    19,    21,
-      23,    25,    30,    32,    34,    35,    37,    41,    44,    50,
-      51
+       0,     0,     3,     4,     7,    12,    14,    17,    19,    21,
+      23,    28,    30,    32,    33,    35,    39,    42,    48,    49
 };
 
 /* YYRHS -- A `-1'-separated list of the rules' RHS.  */
 static const yytype_int8 yyrhs[] =
 {
-      13,     0,    -1,    -1,    13,    14,    -1,    -1,     3,     7,
-      17,    15,    16,    -1,    16,    -1,     1,     6,    -1,     6,
-      -1,     8,    -1,    18,    -1,     9,    21,    19,    10,    -1,
-       4,    -1,     5,    -1,    -1,    20,    -1,    20,    11,    21,
-      -1,    18,    21,    -1,    20,    11,    21,    18,    21,    -1,
-      -1,    21,     6,    -1
+      13,     0,    -1,    -1,    13,    14,    -1,     3,     7,    16,
+      15,    -1,    15,    -1,     1,     6,    -1,     6,    -1,     8,
+      -1,    17,    -1,     9,    20,    18,    10,    -1,     4,    -1,
+       5,    -1,    -1,    19,    -1,    19,    11,    20,    -1,    17,
+      20,    -1,    19,    11,    20,    17,    20,    -1,    -1,    20,
+       6,    -1
 };
 
 /* YYRLINE[YYN] -- source line where rule number YYN was defined.  */
 static const yytype_uint8 yyrline[] =
 {
-       0,    47,    47,    48,    50,    50,    52,    53,    55,    56,
-      58,    59,    61,    62,    64,    65,    66,    68,    69,    71,
-      73
+       0,    47,    47,    48,    50,    52,    53,    55,    56,    58,
+      59,    61,    62,    64,    65,    66,    68,    69,    71,    73
 };
 #endif
 
@@ -461,7 +459,7 @@ static const yytype_uint8 yyrline[] =
 static const char *const yytname[] =
 {
   "$end", "error", "$undefined", "IDENT", "STRING", "NUMBER", "NEWLINE",
-  "'='", "';'", "'['", "']'", "','", "$accept", "file", "setting", "$@1",
+  "'='", "';'", "'['", "']'", "','", "$accept", "file", "assignment",
   "endstmt", "value", "atom", "valuelist", "values", "nlok", 0
 };
 #endif
@@ -479,17 +477,15 @@ static const yytype_uint16 yytoknum[] =
 /* YYR1[YYN] -- Symbol number of symbol that rule YYN derives.  */
 static const yytype_uint8 yyr1[] =
 {
-       0,    12,    13,    13,    15,    14,    14,    14,    16,    16,
-      17,    17,    18,    18,    19,    19,    19,    20,    20,    21,
-      21
+       0,    12,    13,    13,    14,    14,    14,    15,    15,    16,
+      16,    17,    17,    18,    18,    18,    19,    19,    20,    20
 };
 
 /* YYR2[YYN] -- Number of symbols composing right hand side of rule YYN.  */
 static const yytype_uint8 yyr2[] =
 {
-       0,     2,     0,     2,     0,     5,     1,     2,     1,     1,
-       1,     4,     1,     1,     0,     1,     3,     2,     5,     0,
-       2
+       0,     2,     0,     2,     4,     1,     2,     1,     1,     1,
+       4,     1,     1,     0,     1,     3,     2,     5,     0,     2
 };
 
 /* YYDEFACT[STATE-NAME] -- Default rule to reduce with in state
@@ -497,15 +493,15 @@ static const yytype_uint8 yyr2[] =
    means the default is an error.  */
 static const yytype_uint8 yydefact[] =
 {
-       2,     0,     1,     0,     0,     8,     9,     3,     6,     7,
-       0,    12,    13,    19,     4,    10,    14,     0,    20,    19,
-       0,    15,     5,    17,    11,    19,    16,    19,    18
+       2,     0,     1,     0,     0,     7,     8,     3,     5,     6,
+       0,    11,    12,    18,     0,     9,    13,     4,    19,    18,
+       0,    14,    16,    10,    18,    15,    18,    17
 };
 
 /* YYDEFGOTO[NTERM-NUM].  */
 static const yytype_int8 yydefgoto[] =
 {
-      -1,     1,     7,    17,     8,    14,    15,    20,    21,    16
+      -1,     1,     7,     8,    14,    15,    20,    21,    16
 };
 
 /* YYPACT[STATE-NUM] -- Index in YYTABLE of the portion describing
@@ -513,15 +509,15 @@ static const yytype_int8 yydefgoto[] =
 #define YYPACT_NINF -17
 static const yytype_int8 yypact[] =
 {
-     -17,     1,   -17,    -3,     5,   -17,   -17,   -17,   -17,   -17,
-      10,   -17,   -17,   -17,   -17,   -17,    12,     0,   -17,   -17,
-      11,     9,   -17,    16,   -17,   -17,    12,   -17,    16
+     -17,     2,   -17,    -5,    -3,   -17,   -17,   -17,   -17,   -17,
+      10,   -17,   -17,   -17,    14,   -17,    12,   -17,   -17,   -17,
+      11,    -4,     6,   -17,   -17,    12,   -17,     6
 };
 
 /* YYPGOTO[NTERM-NUM].  */
 static const yytype_int8 yypgoto[] =
 {
-     -17,   -17,   -17,   -17,     6,   -17,   -16,   -17,   -17,   -14
+     -17,   -17,   -17,     9,   -17,   -16,   -17,   -17,   -13
 };
 
 /* YYTABLE[YYPACT[STATE-NUM]].  What to do in state STATE-NUM.  If
@@ -531,25 +527,25 @@ static const yytype_int8 yypgoto[] =
 #define YYTABLE_NINF -1
 static const yytype_uint8 yytable[] =
 {
-      19,     2,     3,     9,     4,    23,     5,     5,     6,     6,
-      27,    26,    10,    28,    11,    12,    11,    12,    18,    13,
-      25,    24,    18,    22
+      19,     9,     2,     3,    10,     4,    22,    24,     5,    26,
+       6,    25,    18,    27,    11,    12,    11,    12,    18,    13,
+       5,    23,     6,    17
 };
 
 static const yytype_uint8 yycheck[] =
 {
-      16,     0,     1,     6,     3,    19,     6,     6,     8,     8,
-      26,    25,     7,    27,     4,     5,     4,     5,     6,     9,
-      11,    10,     6,    17
+      16,     6,     0,     1,     7,     3,    19,    11,     6,    25,
+       8,    24,     6,    26,     4,     5,     4,     5,     6,     9,
+       6,    10,     8,    14
 };
 
 /* YYSTOS[STATE-NUM] -- The (internal number of the) accessing
    symbol of state STATE-NUM.  */
 static const yytype_uint8 yystos[] =
 {
-       0,    13,     0,     1,     3,     6,     8,    14,    16,     6,
-       7,     4,     5,     9,    17,    18,    21,    15,     6,    18,
-      19,    20,    16,    21,    10,    11,    21,    18,    21
+       0,    13,     0,     1,     3,     6,     8,    14,    15,     6,
+       7,     4,     5,     9,    16,    17,    20,    15,     6,    17,
+      18,    19,    20,    10,    11,    20,    17,    20
 };
 
 #define yyerrok		(yyerrstatus = 0)
@@ -1081,7 +1077,7 @@ yydestruct (yymsg, yytype, yyvaluep, yylocationp, ctx)
 	{ free((yyvaluep->string)); };
 
 /* Line 1000 of yacc.c  */
-#line 1085 "libxlu_cfg_y.c"
+#line 1081 "libxlu_cfg_y.c"
 	break;
       case 4: /* "STRING" */
 
@@ -1090,7 +1086,7 @@ yydestruct (yymsg, yytype, yyvaluep, yylocationp, ctx)
 	{ free((yyvaluep->string)); };
 
 /* Line 1000 of yacc.c  */
-#line 1094 "libxlu_cfg_y.c"
+#line 1090 "libxlu_cfg_y.c"
 	break;
       case 5: /* "NUMBER" */
 
@@ -1099,43 +1095,43 @@ yydestruct (yymsg, yytype, yyvaluep, yylocationp, ctx)
 	{ free((yyvaluep->string)); };
 
 /* Line 1000 of yacc.c  */
-#line 1103 "libxlu_cfg_y.c"
+#line 1099 "libxlu_cfg_y.c"
 	break;
-      case 17: /* "value" */
+      case 16: /* "value" */
 
 /* Line 1000 of yacc.c  */
 #line 43 "libxlu_cfg_y.y"
 	{ xlu__cfg_set_free((yyvaluep->setting)); };
 
 /* Line 1000 of yacc.c  */
-#line 1112 "libxlu_cfg_y.c"
+#line 1108 "libxlu_cfg_y.c"
 	break;
-      case 18: /* "atom" */
+      case 17: /* "atom" */
 
 /* Line 1000 of yacc.c  */
 #line 40 "libxlu_cfg_y.y"
 	{ free((yyvaluep->string)); };
 
 /* Line 1000 of yacc.c  */
-#line 1121 "libxlu_cfg_y.c"
+#line 1117 "libxlu_cfg_y.c"
 	break;
-      case 19: /* "valuelist" */
+      case 18: /* "valuelist" */
 
 /* Line 1000 of yacc.c  */
 #line 43 "libxlu_cfg_y.y"
 	{ xlu__cfg_set_free((yyvaluep->setting)); };
 
 /* Line 1000 of yacc.c  */
-#line 1130 "libxlu_cfg_y.c"
+#line 1126 "libxlu_cfg_y.c"
 	break;
-      case 20: /* "values" */
+      case 19: /* "values" */
 
 /* Line 1000 of yacc.c  */
 #line 43 "libxlu_cfg_y.y"
 	{ xlu__cfg_set_free((yyvaluep->setting)); };
 
 /* Line 1000 of yacc.c  */
-#line 1139 "libxlu_cfg_y.c"
+#line 1135 "libxlu_cfg_y.c"
 	break;
 
       default:
@@ -1466,67 +1462,67 @@ yyreduce:
         case 4:
 
 /* Line 1455 of yacc.c  */
-#line 50 "libxlu_cfg_y.y"
-    { xlu__cfg_set_store(ctx,(yyvsp[(1) - (3)].string),(yyvsp[(3) - (3)].setting),(yylsp[(3) - (3)]).first_line); ;}
+#line 51 "libxlu_cfg_y.y"
+    { xlu__cfg_set_store(ctx,(yyvsp[(1) - (4)].string),(yyvsp[(3) - (4)].setting),(yylsp[(3) - (4)]).first_line); ;}
     break;
 
-  case 10:
+  case 9:
 
 /* Line 1455 of yacc.c  */
 #line 58 "libxlu_cfg_y.y"
     { (yyval.setting)= xlu__cfg_set_mk(ctx,1,(yyvsp[(1) - (1)].string)); ;}
     break;
 
-  case 11:
+  case 10:
 
 /* Line 1455 of yacc.c  */
 #line 59 "libxlu_cfg_y.y"
     { (yyval.setting)= (yyvsp[(3) - (4)].setting); ;}
     break;
 
-  case 12:
+  case 11:
 
 /* Line 1455 of yacc.c  */
 #line 61 "libxlu_cfg_y.y"
     { (yyval.string)= (yyvsp[(1) - (1)].string); ;}
     break;
 
-  case 13:
+  case 12:
 
 /* Line 1455 of yacc.c  */
 #line 62 "libxlu_cfg_y.y"
     { (yyval.string)= (yyvsp[(1) - (1)].string); ;}
     break;
 
-  case 14:
+  case 13:
 
 /* Line 1455 of yacc.c  */
 #line 64 "libxlu_cfg_y.y"
     { (yyval.setting)= xlu__cfg_set_mk(ctx,0,0); ;}
     break;
 
-  case 15:
+  case 14:
 
 /* Line 1455 of yacc.c  */
 #line 65 "libxlu_cfg_y.y"
     { (yyval.setting)= (yyvsp[(1) - (1)].setting); ;}
     break;
 
-  case 16:
+  case 15:
 
 /* Line 1455 of yacc.c  */
 #line 66 "libxlu_cfg_y.y"
     { (yyval.setting)= (yyvsp[(1) - (3)].setting); ;}
     break;
 
-  case 17:
+  case 16:
 
 /* Line 1455 of yacc.c  */
 #line 68 "libxlu_cfg_y.y"
     { (yyval.setting)= xlu__cfg_set_mk(ctx,2,(yyvsp[(1) - (2)].string)); ;}
     break;
 
-  case 18:
+  case 17:
 
 /* Line 1455 of yacc.c  */
 #line 69 "libxlu_cfg_y.y"
@@ -1536,7 +1532,7 @@ yyreduce:
 
 
 /* Line 1455 of yacc.c  */
-#line 1540 "libxlu_cfg_y.c"
+#line 1536 "libxlu_cfg_y.c"
       default: break;
     }
   YY_SYMBOL_PRINT ("-> $$ =", yyr1[yyn], &yyval, &yyloc);
diff --git a/tools/libxl/libxlu_cfg_y.y b/tools/libxl/libxlu_cfg_y.y
index f0a0559..29aedca 100644
--- a/tools/libxl/libxlu_cfg_y.y
+++ b/tools/libxl/libxlu_cfg_y.y
@@ -45,10 +45,10 @@
 %%
 
 file: /* empty */
- |     file setting
+ |     file assignment
 
-setting: IDENT '=' value      { xlu__cfg_set_store(ctx,$1,$3,@3.first_line); }
-                     endstmt
+assignment: IDENT '=' value endstmt
+                            { xlu__cfg_set_store(ctx,$1,$3,@3.first_line); }
  |      endstmt
  |      error NEWLINE
 
-- 
tg: (5babc64..) t/xen/xl.cfg.mem-fix (depends on: base)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 30 16:54:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Aug 2012 16:54:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7810-0006zF-DT; Thu, 30 Aug 2012 16:54:46 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <msw@amazon.com>) id 1T780z-0006z3-6j
	for xen-devel@lists.xen.org; Thu, 30 Aug 2012 16:54:45 +0000
X-Env-Sender: msw@amazon.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1346345675!6413535!1
X-Originating-IP: [207.171.178.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA3LjE3MS4xNzguMjUgPT4gNjM2NDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14375 invoked from network); 30 Aug 2012 16:54:37 -0000
Received: from smtp-fw-31001.amazon.com (HELO smtp-fw-31001.amazon.com)
	(207.171.178.25)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Aug 2012 16:54:37 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=amazon.com; i=msw@amazon.com; q=dns/txt; s=rte02;
	t=1346345677; x=1377881677;
	h=date:from:to:cc:subject:message-id:references:
	mime-version:in-reply-to;
	bh=mp+VJ5/DRPhmV4Zok1bt+0RBMqPNi7yvOKYTyMHg300=;
	b=PSJ6tHZM/lGi/qtmwOD/Rwd7Dtx1Lk8xizEfyy6/L2KHy0mwfzpqsmff
	cXbziQIfirzh4JwTSlZFTW6CV9A8WA==;
X-IronPort-AV: E=Sophos;i="4.80,341,1344211200"; d="scan'208";a="286704160"
Received: from smtp-in-1105.vdc.amazon.com ([10.140.9.24])
	by smtp-border-fw-out-31001.sea31.amazon.com with
	ESMTP/TLS/DHE-RSA-AES256-SHA; 30 Aug 2012 16:54:34 +0000
Received: from ex10-hub-9004.ant.amazon.com (ex10-hub-9004.ant.amazon.com
	[10.185.137.182])
	by smtp-in-1105.vdc.amazon.com (8.13.8/8.13.8) with ESMTP id
	q7UGsU8e017473
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=FAIL);
	Thu, 30 Aug 2012 16:54:32 GMT
Received: from u002268147cd4502c336d.ant.amazon.com (10.224.80.41) by
	ex10-hub-9004.ant.amazon.com (10.185.137.182) with Microsoft SMTP
	Server id 14.2.247.3; Thu, 30 Aug 2012 09:54:23 -0700
Received: by u002268147cd4502c336d.ant.amazon.com (sSMTP sendmail emulation); 
	Thu, 30 Aug 2012 09:54:25 -0700
Date: Thu, 30 Aug 2012 09:54:25 -0700
From: Matt Wilson <msw@amazon.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Message-ID: <20120830165421.GA3087@u002268147cd4502c336d.ant.amazon.com>
References: <patchbomb.1346274072@u002268147cd4502c336d.ant.amazon.com>
	<9a308e4fdc19336ce3ca.1346274074@u002268147cd4502c336d.ant.amazon.com>
	<20543.35220.572436.112964@mariner.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20543.35220.572436.112964@mariner.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: Keir Fraser <keir@xen.org>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 2 of 3] docs: use elinks to format
 markdown-generated html to text
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Aug 30, 2012 at 04:41:08PM +0100, Ian Jackson wrote:
> Matt Wilson writes ("[Xen-devel] [PATCH 2 of 3] docs: use elinks to format markdown-generated html to text"):
> > Markdown, while easy to read and write, isn't the most consumable
> > format for users reading documentation on a terminal. This patch uses
> > lynx to format markdown produced HTML into text files.
> ...
> >  txt/%.txt: %.markdown
> > -	$(INSTALL_DIR) $(@D)
> > -	cp $< $@.tmp
> > -	$(call move-if-changed,$@.tmp,$@)
> > +	@$(INSTALL_DIR) $(@D)
> > +	set -e ; \
> > +	if which $(MARKDOWN) >/dev/null 2>&1 && \
> > +		which $(HTMLDUMP) >/dev/null 2>&1 ; then \
> > +		echo "Running markdown to generate $*.txt ... "; \
> 
> So now we have two efforts to try to find markdown, one in configure
> and one here.

For what it's worth, if ./configure is run and a markdown binary is
found in $PATH, $(MARKDOWN) will be the full path detected at
./configure time (e.g. /usr/bin/markdown), and the "which" bit will be
a no-op.
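
To illustrate the no-op: `which` accepts an absolute path and merely
checks that it exists and is executable, so once ./configure has
substituted a full path, the Makefile's test can only fail if the
binary was removed after configuration. A minimal sketch (/bin/sh
stands in for the detected markdown path):

```shell
#!/bin/sh
# Stand-in for the absolute path that ./configure would substitute for
# $(MARKDOWN); /bin/sh is used only so the sketch is self-contained.
MARKDOWN=/bin/sh

# The same test the Makefile rule performs.  Given an absolute path,
# `which` simply confirms the file exists and is executable.
if which "$MARKDOWN" >/dev/null 2>&1; then
    echo "markdown found: $MARKDOWN"
else
    echo "markdown missing"
fi
```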

> Keir, would it be OK if we simply declared that you must run configure
> to "make docs" ?
> 
> Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 30 17:04:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Aug 2012 17:04:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T78AP-0007F0-IX; Thu, 30 Aug 2012 17:04:29 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1T78AO-0007Eu-5B
	for xen-devel@lists.xensource.com; Thu, 30 Aug 2012 17:04:28 +0000
Received: from [85.158.143.35:5567] by server-2.bemta-4.messagelabs.com id
	60/7E-21239-B1D9F305; Thu, 30 Aug 2012 17:04:27 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1346346262!16020068!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzUxOTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14451 invoked from network); 30 Aug 2012 17:04:23 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Aug 2012 17:04:23 -0000
X-IronPort-AV: E=Sophos;i="4.80,341,1344211200"; d="scan'208";a="206693297"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	30 Aug 2012 17:04:22 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPMAILMX01.citrite.net
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1; Thu, 30 Aug 2012
	13:04:21 -0400
Message-ID: <503F9D14.8000600@citrix.com>
Date: Thu, 30 Aug 2012 18:04:20 +0100
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20120428 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Andres Lagar-Cavilla <andreslc@gridcentric.ca>
References: <1346331492-15027-1-git-send-email-david.vrabel@citrix.com>
	<1346331492-15027-3-git-send-email-david.vrabel@citrix.com>
	<12E7F3C7-86B7-4B6B-8F53-23CCFCEF80FB@gridcentric.ca>
In-Reply-To: <12E7F3C7-86B7-4B6B-8F53-23CCFCEF80FB@gridcentric.ca>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [PATCH 2/2] xen/privcmd: add PRIVCMD_MMAPBATCH_V2
	ioctl
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 30/08/12 17:41, Andres Lagar-Cavilla wrote:
> David,
> The patch looks functionally ok, but I still have two lingering concerns:
> - the hideous casting of mfn into err

I considered a couple of other approaches (unions, extending
gather_array() to add gaps for the int return). They were all worse.

I also tried your proposal here but it doesn't work. See below.

> - why not signal paged out frames for V1

Because V1 is broken on 64-bit, and there doesn't seem to be any point in
changing it given that libxc won't call it if V2 is present.

> Rather than keep writing English, I wrote some C :)
> 
> And took the liberty to include your signed-off. David & Konrad, let
> me know what you think, and once we settle on either version we can move
> into unit testing this.
[...]
>  static int mmap_batch_fn(void *data, void *state)
>  {
>         xen_pfn_t *mfnp = data;
>         struct mmap_batch_state *st = state;
> +       int ret;
> +
> +       ret = xen_remap_domain_mfn_range(st->vma, st->va & PAGE_MASK, *mfnp, 1,
> +                                        st->vma->vm_page_prot, st->domain);
> +       if (ret < 0) {
> +               /*
> +                * V2 provides a user-space (pre-checked for access) user_err
> +                * pointer, in which we store the individual map error codes.
> +                *
> +                * V1 encodes the error codes in the 32bit top nibble of the
> +                * mfn (with its known limitations vis-a-vis 64 bit callers).
> +                *
> +                * In either case, global state.err is zero unless one or more
> +                * individual maps fail with -ENOENT, in which case it is -ENOENT.
> +                *
> +                */
> +               if (st->user_err)
> +                       BUG_ON(__put_user(ret, st->user_err++));

You can't access user space pages here while holding
current->mm->mmap_sem.  I tried this and it would sometimes deadlock in
the page fault handler.

access_ok() only checks if the pointer is in the user space virtual
address space - not that a valid mapping exists and is writable.  So
BUG_ON(__put_user()) should not be done.
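
The usual way around this is to record the per-frame errors in kernel
memory while mmap_sem is held, and write them out to user space in one
step after it is released. A hypothetical userspace sketch of that
buffer-then-flush pattern (the names are illustrative, not the actual
privcmd code):

```c
#include <string.h>

/* Illustrative stand-ins, not the real privcmd interfaces. */
#define NFRAMES    4
#define ERR_ENOENT (-2)

/* Stand-in for xen_remap_domain_mfn_range(): frame 2 "fails". */
static int map_one(int i)
{
    return (i == 2) ? ERR_ENOENT : 0;
}

int batch_map(int *user_err, int n)
{
    int kbuf[NFRAMES];
    int i, global = 0;

    /* Phase 1: "mmap_sem held" -- record errors only in kernel-side
     * memory; no user-space accesses here. */
    for (i = 0; i < n; i++) {
        kbuf[i] = map_one(i);
        if (kbuf[i] == ERR_ENOENT)
            global = ERR_ENOENT;
    }

    /* Phase 2: "mmap_sem released" -- now flush the error codes to
     * the caller (in the kernel this would be copy_to_user()). */
    memcpy(user_err, kbuf, n * sizeof(int));
    return global;
}
```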

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 30 17:05:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Aug 2012 17:05:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T78BR-0007JF-2P; Thu, 30 Aug 2012 17:05:33 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dan.magenheimer@oracle.com>) id 1T78BP-0007J2-H1
	for xen-devel@lists.xen.org; Thu, 30 Aug 2012 17:05:31 +0000
Received: from [85.158.138.51:13047] by server-9.bemta-3.messagelabs.com id
	A7/ED-15390-A5D9F305; Thu, 30 Aug 2012 17:05:30 +0000
X-Env-Sender: dan.magenheimer@oracle.com
X-Msg-Ref: server-11.tower-174.messagelabs.com!1346346325!27750215!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDcyODc2OQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29343 invoked from network); 30 Aug 2012 17:05:28 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-11.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 30 Aug 2012 17:05:28 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7UH5N3Q005298
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 30 Aug 2012 17:05:24 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7UH5MC2023258
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 30 Aug 2012 17:05:23 GMT
Received: from abhmt102.oracle.com (abhmt102.oracle.com [141.146.116.54])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7UH5M4Z020490; Thu, 30 Aug 2012 12:05:22 -0500
MIME-Version: 1.0
Message-ID: <aa529a53-a56b-4215-81c2-f1fda4acedbd@default>
Date: Thu, 30 Aug 2012 10:04:42 -0700 (PDT)
From: Dan Magenheimer <dan.magenheimer@oracle.com>
To: bing <Libing.Chen@uts.edu.au>, xen-devel@lists.xen.org
References: <76048170-772b-4365-87d2-00a3bf4aa5f8@default>
	<503F053D.5010001@uts.edu.au>
In-Reply-To: <503F053D.5010001@uts.edu.au>
X-Priority: 3
X-Mailer: Oracle Beehive Extensions for Outlook 2.0.1.7  (607090) [OL
	12.0.6661.5003 (x86)]
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Subject: Re: [Xen-devel] Xen 4.2.0 won't boot with FC17 dom0 on Dell
 Optiplex 790, boots fine with 4.1.3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> From: bing [mailto:Libing.Chen@uts.edu.au]
> Sent: Thursday, August 30, 2012 12:16 AM
> To: xen-devel@lists.xen.org
> Subject: Re: [Xen-devel] Xen 4.2.0 won't boot with FC17 dom0 on Dell Optiplex 790, boots fine with
> 4.1.3
> 
> I have a similar crash on Xen 4.2.0-rc3 with an HP DC7900: it reboots
> right after loading the Dom0 kernel. I found a workaround by passing the
> xsave=0 option to the Xen kernel. It seems to be CPU related; the same
> setup (same Xen and Dom0 kernel versions) works fine on another Dell
> Precision 3200.

Thanks, just saw this as I wasn't cc'ed on your email.  xsave=0
did solve my problem also.
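
For anyone else hitting this: xsave=0 is a Xen hypervisor command-line
option, so it belongs on the xen.gz line, not on the dom0 kernel line.
A hypothetical GRUB legacy stanza (paths and versions are illustrative):

```
title Xen 4.2 (xsave disabled)
    root (hd0,0)
    kernel /boot/xen.gz xsave=0
    module /boot/vmlinuz root=/dev/sda1 ro
    module /boot/initrd.img
```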

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 30 18:22:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Aug 2012 18:22:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T79NG-0008TL-Lw; Thu, 30 Aug 2012 18:21:50 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1T79NF-0008TF-Kh
	for Xen-devel@lists.xensource.com; Thu, 30 Aug 2012 18:21:49 +0000
Received: from [85.158.139.83:14476] by server-6.bemta-5.messagelabs.com id
	B0/C9-21336-C3FAF305; Thu, 30 Aug 2012 18:21:48 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-11.tower-182.messagelabs.com!1346350906!20551431!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDcyODc2OQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14358 invoked from network); 30 Aug 2012 18:21:48 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-11.tower-182.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 30 Aug 2012 18:21:48 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7UILhxm016571
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK)
	for <Xen-devel@lists.xensource.com>; Thu, 30 Aug 2012 18:21:45 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7UILhNe020524
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO)
	for <Xen-devel@lists.xensource.com>; Thu, 30 Aug 2012 18:21:43 GMT
Received: from abhmt101.oracle.com (abhmt101.oracle.com [141.146.116.53])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7UILhq5019449
	for <Xen-devel@lists.xensource.com>; Thu, 30 Aug 2012 13:21:43 -0500
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 30 Aug 2012 11:21:42 -0700
Date: Thu, 30 Aug 2012 11:21:41 -0700
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>
Message-ID: <20120830112141.3e2dec75@mantra.us.oracle.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.7.6 (GTK+ 2.18.9; x86_64-redhat-linux-gnu)
Mime-Version: 1.0
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Subject: [Xen-devel] [RFC PATCH 0/2]: hypervisor debugger
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Keir,

As promised, I am sending two patches after this. The first is the changes to
common code; the other is a tar file of the kdb subdirectory under
xen-unstable.hg/xen.

It seems there is enough interest that it's worth considering for
merging into Xen. The good thing is I've developed it as I debug things,
so it's built entirely from the perspective of a developer who did not
have access to any other tools like JTAG.

BTW, I'd like to rename it from kdb to xdb or hdb in the final
submission.

The diffs are against c/s 25467 btw. 

Thanks,
Mukesh


At present I have the following commands:

info:  Print basic info like version, compile flags, etc.

cur:  print current domain id and vcpu id

f: display current stack. If a vcpu ptr is given, then print stack for
that VCPU by using its IP and SP.

fg: display stack for a guest given domid, SP and IP.

dw: display words of memory. 'num' of bytes is optional, but it is
required when displaying guest memory.

dd: same as above, but display doublewords.

dwm: same as above but the address is machine address instead of
virtual.

ddm: same as above, but display doublewords.

dr: display registers. If 'sp' is specified, then print a few extra
registers.

drg: display guest context saved, ie, guest_cpu_user_regs.

dis: disassemble instructions. If disassembling for guest, then 'num'
must be specified. 'num' is number of instrs to display.

dism: toggle disassembly mode between Intel and ATT/GAS.

mw: modify a word in memory at a given virtual address. 'domid' may be
specified if modifying guest memory. The value is assumed to be hex even
without a leading 0x.

md: same as above but modify doubleword.

mr: modify a register. The value is assumed to be hex.

bc: clear a given breakpoint, or all breakpoints.

bp: display breakpoints or set a breakpoint. A domid may be specified to
set a bp in a guest. kdb functions must not be specified when debugging kdb itself.
    Example:
      xkdb> bp acpi_processor_idle  : will set bp in xen
      xkdb> bp default_idle 0 :   will set bp in domid 0
      xkdb> bp idle_cpu 9 :   will set bp in domid 9

     Conditions may be specified for a bp: lhs == rhs or lhs != rhs,
     where lhs is a register like 'r6', 'rax', etc., or a memory
     location, and rhs is a hex value with or without a leading 0x.
     Thus,
      xkdb> bp acpi_processor_idle rdi == c000 
      xkdb> bp 0xffffffff80062ebc 0 rsi == ffff880021edbc98 : will
     break into kdb at 0xffffffff80062ebc in dom0 when rsi is
     ffff880021edbc98 

btp: breakpoint trace. Upon hitting the bp, print some info and continue
without stopping. Ex: btp idle_cpu 7 rax rbx 0x20ef5a5 r9

   will print: rax, rbx, *(long *)0x20ef5a5, r9 upon hitting idle_cpu()
   and continue.

wp: set a watchpoint at a virtual address which can belong to
hypervisor or any guest. Do not specify wp in kdb path if debugging kdb.

wc: clear given or all watchpoints.

ni: single step, stepping over function calls.

ss: single step. Be careful when in interrupt handlers or context
switches.

ssb: single step to branch. Use with care.

go: leave kdb and continue.

cpu: go back to the original cpu that entered kdb. If a 'cpu number' is
given, then switch to that cpu. If 'all', then show the status of all cpus.

nmi: Only available in hung/crash state. Send NMI to a cpu that may be
hung.

sym: Initialize a symbol table for debugging a guest. Look into the
System.map file of guest for certain symbol values and provide them
here.

mod: Display modules loaded in linux guest: modptr, address loaded at,
and name.

vcpuh: Given vcpu ptr, display hvm_vcpu struct.

vcpu: Display current vcpu struct. If 'vcpu-ptr' given, display that
vcpu.

dom: display current domain. If 'domid' then display that domid. If
'all', then display all domains.

sched: show scheduler info and run queues.

mmu: print basic mmu info

p2m: convert a gpfn to mfn given a domid. input is in hex even without
0x.

m2p: convert mfn to pfn. input in hex even without 0x.

dpage: display struct page given an mfn or a struct page ptr. Since no
info is kept on the page type, we display all possible page types.

dmsr: display an msr value.

dtrq: display timer queues.

cpuid: run cpuid.

wept: walk the EPT table for a given domid and gfn.

dtrq: dump timer queues on all cpus

didt: dump IDT table.

dgt: dump GDT table.

dirq: display IRQ bindings.

dvit: dump the (per-cpu) vector irq table.

dvmc: display all or given dom/vcpu VMCS or VMCB.

mmio: dump mmio related info.

trcon: turn tracing on. Trace hooks must be added in xen and the kdb
function called directly from there.

trcoff: turn tracing off.

trcz: zero trace buffer.

trcp: give hints to print the circular trace buffer, like current
active ptr.

usr1: allows adding an arbitrary command quickly.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 30 18:23:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Aug 2012 18:23:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T79Ou-00008l-CS; Thu, 30 Aug 2012 18:23:32 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1T79Os-00008g-OK
	for Xen-devel@lists.xensource.com; Thu, 30 Aug 2012 18:23:31 +0000
Received: from [85.158.139.83:41234] by server-12.bemta-5.messagelabs.com id
	64/FB-18300-1AFAF305; Thu, 30 Aug 2012 18:23:29 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-7.tower-182.messagelabs.com!1346351007!23827868!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDcyODc2OQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19400 invoked from network); 30 Aug 2012 18:23:29 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-7.tower-182.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 30 Aug 2012 18:23:29 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7UINPsO018285
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK)
	for <Xen-devel@lists.xensource.com>; Thu, 30 Aug 2012 18:23:26 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7UINO66026108
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO)
	for <Xen-devel@lists.xensource.com>; Thu, 30 Aug 2012 18:23:25 GMT
Received: from abhmt116.oracle.com (abhmt116.oracle.com [141.146.116.68])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7UINOmC020529
	for <Xen-devel@lists.xensource.com>; Thu, 30 Aug 2012 13:23:24 -0500
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 30 Aug 2012 11:23:24 -0700
Date: Thu, 30 Aug 2012 11:23:23 -0700
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>
Message-ID: <20120830112323.5086d73c@mantra.us.oracle.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.7.6 (GTK+ 2.18.9; x86_64-redhat-linux-gnu)
Mime-Version: 1.0
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Subject: [Xen-devel] [RFC PATCH 1/2]: hypervisor debugger
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Changes to xen code for the debugger.


diff -r 32034d1914a6 xen/Makefile
--- a/xen/Makefile	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/Makefile	Wed Aug 29 14:39:57 2012 -0700
@@ -61,6 +61,7 @@
 	$(MAKE) -f $(BASEDIR)/Rules.mk -C xsm clean
 	$(MAKE) -f $(BASEDIR)/Rules.mk -C crypto clean
 	$(MAKE) -f $(BASEDIR)/Rules.mk -C arch/$(TARGET_ARCH) clean
+	$(MAKE) -f $(BASEDIR)/Rules.mk -C kdb clean
 	rm -f include/asm *.o $(TARGET) $(TARGET).gz $(TARGET)-syms *~ core
 	rm -f include/asm-*/asm-offsets.h
 	[ -d tools/figlet ] && rm -f .banner*
@@ -129,7 +130,7 @@
 	  echo ""; \
 	  echo "#endif") <$< >$@
 
-SUBDIRS = xsm arch/$(TARGET_ARCH) common drivers
+SUBDIRS = xsm arch/$(TARGET_ARCH) common drivers kdb
 define all_sources
     ( find include/asm-$(TARGET_ARCH) -name '*.h' -print; \
       find include -name 'asm-*' -prune -o -name '*.h' -print; \
diff -r 32034d1914a6 xen/Rules.mk
--- a/xen/Rules.mk	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/Rules.mk	Wed Aug 29 14:39:57 2012 -0700
@@ -10,6 +10,7 @@
 crash_debug   ?= n
 frame_pointer ?= n
 lto           ?= n
+kdb           ?= n
 
 include $(XEN_ROOT)/Config.mk
 
@@ -40,6 +41,7 @@
 ALL_OBJS-y               += $(BASEDIR)/xsm/built_in.o
 ALL_OBJS-y               += $(BASEDIR)/arch/$(TARGET_ARCH)/built_in.o
 ALL_OBJS-$(x86)          += $(BASEDIR)/crypto/built_in.o
+ALL_OBJS-$(kdb)          += $(BASEDIR)/kdb/built_in.o
 
 CFLAGS-y                += -g -D__XEN__ -include $(BASEDIR)/include/xen/config.h
 CFLAGS-$(XSM_ENABLE)    += -DXSM_ENABLE
@@ -53,6 +55,7 @@
 CFLAGS-$(HAS_ACPI)      += -DHAS_ACPI
 CFLAGS-$(HAS_PASSTHROUGH) += -DHAS_PASSTHROUGH
 CFLAGS-$(frame_pointer) += -fno-omit-frame-pointer -DCONFIG_FRAME_POINTER
+CFLAGS-$(kdb)           += -DXEN_KDB_CONFIG
 
 ifneq ($(max_phys_cpus),)
 CFLAGS-y                += -DMAX_PHYS_CPUS=$(max_phys_cpus)
diff -r 32034d1914a6 xen/arch/x86/hvm/svm/entry.S
--- a/xen/arch/x86/hvm/svm/entry.S	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/arch/x86/hvm/svm/entry.S	Wed Aug 29 14:39:57 2012 -0700
@@ -59,12 +59,23 @@
         get_current(bx)
         CLGI
 
+#ifdef XEN_KDB_CONFIG
+#if defined(__x86_64__)
+        testl $1, kdb_session_begun(%rip)
+#else
+        testl $1, kdb_session_begun
+#endif
+        jnz  .Lkdb_skip_softirq
+#endif
         mov  VCPU_processor(r(bx)),%eax
         shl  $IRQSTAT_shift,r(ax)
         lea  addr_of(irq_stat),r(dx)
         testl $~0,(r(dx),r(ax),1)
         jnz  .Lsvm_process_softirqs
 
+#ifdef XEN_KDB_CONFIG
+.Lkdb_skip_softirq:
+#endif
         testb $0, VCPU_nsvm_hap_enabled(r(bx))
 UNLIKELY_START(nz, nsvm_hap)
         mov  VCPU_nhvm_p2m(r(bx)),r(ax)
diff -r 32034d1914a6 xen/arch/x86/hvm/svm/svm.c
--- a/xen/arch/x86/hvm/svm/svm.c	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/arch/x86/hvm/svm/svm.c	Wed Aug 29 14:39:57 2012 -0700
@@ -2170,6 +2170,10 @@
         break;
 
     case VMEXIT_EXCEPTION_DB:
+#ifdef XEN_KDB_CONFIG
+        if (kdb_handle_trap_entry(TRAP_debug, regs))
+	    break;
+#endif
         if ( !v->domain->debugger_attached )
             goto exit_and_crash;
         domain_pause_for_debugger();
@@ -2182,6 +2186,10 @@
         if ( (inst_len = __get_instruction_length(v, INSTR_INT3)) == 0 )
             break;
         __update_guest_eip(regs, inst_len);
+#ifdef XEN_KDB_CONFIG
+        if (kdb_handle_trap_entry(TRAP_int3, regs))
+            break;
+#endif
         current->arch.gdbsx_vcpu_event = TRAP_int3;
         domain_pause_for_debugger();
         break;
diff -r 32034d1914a6 xen/arch/x86/hvm/svm/vmcb.c
--- a/xen/arch/x86/hvm/svm/vmcb.c	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/arch/x86/hvm/svm/vmcb.c	Wed Aug 29 14:39:57 2012 -0700
@@ -315,6 +315,36 @@
     register_keyhandler('v', &vmcb_dump_keyhandler);
 }
 
+#if defined(XEN_KDB_CONFIG)
+/* did == 0 : display for all HVM domains. domid 0 is never HVM.
+ * vid == -1 : display for all HVM VCPUs
+ */
+void kdb_dump_vmcb(domid_t did, int vid)
+{
+    struct domain *dp;
+    struct vcpu *vp;
+
+    rcu_read_lock(&domlist_read_lock);
+    for_each_domain (dp) {
+        if (!is_hvm_or_hyb_domain(dp) || dp->is_dying)
+            continue;
+        if (did != 0 && did != dp->domain_id)
+            continue;
+
+        for_each_vcpu (dp, vp) {
+            if (vid != -1 && vid != vp->vcpu_id)
+                continue;
+
+            kdbp("  VMCB [domid: %d  vcpu:%d]:\n", dp->domain_id, vp->vcpu_id);
+            svm_vmcb_dump("kdb", vp->arch.hvm_svm.vmcb);
+            kdbp("\n");
+        }
+        kdbp("\n");
+    }
+    rcu_read_unlock(&domlist_read_lock);
+}
+#endif
+
 /*
  * Local variables:
  * mode: C
diff -r 32034d1914a6 xen/arch/x86/hvm/vmx/entry.S
--- a/xen/arch/x86/hvm/vmx/entry.S	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/arch/x86/hvm/vmx/entry.S	Wed Aug 29 14:39:57 2012 -0700
@@ -124,12 +124,23 @@
         get_current(bx)
         cli
 
+#ifdef XEN_KDB_CONFIG
+#if defined(__x86_64__)
+        testl $1, kdb_session_begun(%rip)
+#else
+        testl $1, kdb_session_begun
+#endif
+        jnz  .Lkdb_skip_softirq
+#endif
         mov  VCPU_processor(r(bx)),%eax
         shl  $IRQSTAT_shift,r(ax)
         lea  addr_of(irq_stat),r(dx)
         cmpl $0,(r(dx),r(ax),1)
         jnz  .Lvmx_process_softirqs
 
+#ifdef XEN_KDB_CONFIG
+.Lkdb_skip_softirq:
+#endif
         testb $0xff,VCPU_vmx_emulate(r(bx))
         jnz .Lvmx_goto_emulator
         testb $0xff,VCPU_vmx_realmode(r(bx))
diff -r 32034d1914a6 xen/arch/x86/hvm/vmx/vmcs.c
--- a/xen/arch/x86/hvm/vmx/vmcs.c	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/arch/x86/hvm/vmx/vmcs.c	Wed Aug 29 14:39:57 2012 -0700
@@ -1117,6 +1117,13 @@
         hvm_asid_flush_vcpu(v);
     }
 
+#if defined(XEN_KDB_CONFIG)
+    if (kdb_dr7)
+        __vmwrite(GUEST_DR7, kdb_dr7);
+    else
+        __vmwrite(GUEST_DR7, 0);
+#endif
+
     debug_state = v->domain->debugger_attached
                   || v->domain->arch.hvm_domain.params[HVM_PARAM_MEMORY_EVENT_INT3]
                   || v->domain->arch.hvm_domain.params[HVM_PARAM_MEMORY_EVENT_SINGLE_STEP];
@@ -1326,6 +1333,220 @@
     register_keyhandler('v', &vmcs_dump_keyhandler);
 }
 
+#if defined(XEN_KDB_CONFIG)
+#define GUEST_EFER      0x2806   /* see page 23-20 */
+#define GUEST_EFER_HIGH 0x2807   /* see page 23-20 */
+
+/* it's a shame we can't use vmcs_dump_vcpu(), but it does vmx_vmcs_enter which
+ * will IPI other CPUs. also, print a subset relevant to software debugging */
+static void noinline kdb_print_vmcs(struct vcpu *vp)
+{
+    struct cpu_user_regs *regs = &vp->arch.user_regs;
+    unsigned long long x;
+
+    kdbp("*** Guest State ***\n");
+    kdbp("CR0: actual=0x%016llx, shadow=0x%016llx, gh_mask=%016llx\n",
+         (unsigned long long)vmr(GUEST_CR0),
+         (unsigned long long)vmr(CR0_READ_SHADOW), 
+         (unsigned long long)vmr(CR0_GUEST_HOST_MASK));
+    kdbp("CR4: actual=0x%016llx, shadow=0x%016llx, gh_mask=%016llx\n",
+         (unsigned long long)vmr(GUEST_CR4),
+         (unsigned long long)vmr(CR4_READ_SHADOW), 
+         (unsigned long long)vmr(CR4_GUEST_HOST_MASK));
+    kdbp("CR3: actual=0x%016llx, target_count=%d\n",
+         (unsigned long long)vmr(GUEST_CR3),
+         (int)vmr(CR3_TARGET_COUNT));
+    kdbp("     target0=%016llx, target1=%016llx\n",
+         (unsigned long long)vmr(CR3_TARGET_VALUE0),
+         (unsigned long long)vmr(CR3_TARGET_VALUE1));
+    kdbp("     target2=%016llx, target3=%016llx\n",
+         (unsigned long long)vmr(CR3_TARGET_VALUE2),
+         (unsigned long long)vmr(CR3_TARGET_VALUE3));
+    kdbp("RSP = 0x%016llx (0x%016llx)  RIP = 0x%016llx (0x%016llx)\n", 
+         (unsigned long long)vmr(GUEST_RSP),
+         (unsigned long long)regs->esp,
+         (unsigned long long)vmr(GUEST_RIP),
+         (unsigned long long)regs->eip);
+    kdbp("RFLAGS=0x%016llx (0x%016llx)  DR7 = 0x%016llx\n", 
+         (unsigned long long)vmr(GUEST_RFLAGS),
+         (unsigned long long)regs->eflags,
+         (unsigned long long)vmr(GUEST_DR7));
+    kdbp("Sysenter RSP=%016llx CS:RIP=%04x:%016llx\n",
+         (unsigned long long)vmr(GUEST_SYSENTER_ESP),
+         (int)vmr(GUEST_SYSENTER_CS),
+         (unsigned long long)vmr(GUEST_SYSENTER_EIP));
+    vmx_dump_sel("CS", GUEST_CS_SELECTOR);
+    vmx_dump_sel("DS", GUEST_DS_SELECTOR);
+    vmx_dump_sel("SS", GUEST_SS_SELECTOR);
+    vmx_dump_sel("ES", GUEST_ES_SELECTOR);
+    vmx_dump_sel("FS", GUEST_FS_SELECTOR);
+    vmx_dump_sel("GS", GUEST_GS_SELECTOR);
+    vmx_dump_sel2("GDTR", GUEST_GDTR_LIMIT);
+    vmx_dump_sel("LDTR", GUEST_LDTR_SELECTOR);
+    vmx_dump_sel2("IDTR", GUEST_IDTR_LIMIT);
+    vmx_dump_sel("TR", GUEST_TR_SELECTOR);
+    kdbp("Guest EFER = 0x%08x%08x\n",
+           (uint32_t)vmr(GUEST_EFER_HIGH), (uint32_t)vmr(GUEST_EFER));
+    kdbp("Guest PAT = 0x%08x%08x\n",
+           (uint32_t)vmr(GUEST_PAT_HIGH), (uint32_t)vmr(GUEST_PAT));
+    x  = (unsigned long long)vmr(TSC_OFFSET_HIGH) << 32;
+    x |= (uint32_t)vmr(TSC_OFFSET);
+    kdbp("TSC Offset = %016llx\n", x);
+    x  = (unsigned long long)vmr(GUEST_IA32_DEBUGCTL_HIGH) << 32;
+    x |= (uint32_t)vmr(GUEST_IA32_DEBUGCTL);
+    kdbp("DebugCtl=%016llx DebugExceptions=%016llx\n", x,
+           (unsigned long long)vmr(GUEST_PENDING_DBG_EXCEPTIONS));
+    kdbp("Interruptibility=%04x ActivityState=%04x\n",
+           (int)vmr(GUEST_INTERRUPTIBILITY_INFO),
+           (int)vmr(GUEST_ACTIVITY_STATE));
+
+    kdbp("MSRs: entry_load:$%d exit_load:$%d exit_store:$%d\n",
+         vmr(VM_ENTRY_MSR_LOAD_COUNT), vmr(VM_EXIT_MSR_LOAD_COUNT),
+         vmr(VM_EXIT_MSR_STORE_COUNT));
+
+    kdbp("\n*** Host State ***\n");
+    kdbp("RSP = 0x%016llx  RIP = 0x%016llx\n", 
+           (unsigned long long)vmr(HOST_RSP),
+           (unsigned long long)vmr(HOST_RIP));
+    kdbp("CS=%04x DS=%04x ES=%04x FS=%04x GS=%04x SS=%04x TR=%04x\n",
+           (uint16_t)vmr(HOST_CS_SELECTOR),
+           (uint16_t)vmr(HOST_DS_SELECTOR),
+           (uint16_t)vmr(HOST_ES_SELECTOR),
+           (uint16_t)vmr(HOST_FS_SELECTOR),
+           (uint16_t)vmr(HOST_GS_SELECTOR),
+           (uint16_t)vmr(HOST_SS_SELECTOR),
+           (uint16_t)vmr(HOST_TR_SELECTOR));
+    kdbp("FSBase=%016llx GSBase=%016llx TRBase=%016llx\n",
+           (unsigned long long)vmr(HOST_FS_BASE),
+           (unsigned long long)vmr(HOST_GS_BASE),
+           (unsigned long long)vmr(HOST_TR_BASE));
+    kdbp("GDTBase=%016llx IDTBase=%016llx\n",
+           (unsigned long long)vmr(HOST_GDTR_BASE),
+           (unsigned long long)vmr(HOST_IDTR_BASE));
+    kdbp("CR0=%016llx CR3=%016llx CR4=%016llx\n",
+           (unsigned long long)vmr(HOST_CR0),
+           (unsigned long long)vmr(HOST_CR3),
+           (unsigned long long)vmr(HOST_CR4));
+    kdbp("Sysenter RSP=%016llx CS:RIP=%04x:%016llx\n",
+           (unsigned long long)vmr(HOST_SYSENTER_ESP),
+           (int)vmr(HOST_SYSENTER_CS),
+           (unsigned long long)vmr(HOST_SYSENTER_EIP));
+    kdbp("Host PAT = 0x%08x%08x\n",
+           (uint32_t)vmr(HOST_PAT_HIGH), (uint32_t)vmr(HOST_PAT));
+
+    kdbp("\n*** Control State ***\n");
+    kdbp("PinBased=%08x CPUBased=%08x SecondaryExec=%08x\n",
+           (uint32_t)vmr(PIN_BASED_VM_EXEC_CONTROL),
+           (uint32_t)vmr(CPU_BASED_VM_EXEC_CONTROL),
+           (uint32_t)vmr(SECONDARY_VM_EXEC_CONTROL));
+    kdbp("EntryControls=%08x ExitControls=%08x\n",
+           (uint32_t)vmr(VM_ENTRY_CONTROLS),
+           (uint32_t)vmr(VM_EXIT_CONTROLS));
+    kdbp("ExceptionBitmap=%08x\n",
+           (uint32_t)vmr(EXCEPTION_BITMAP));
+    kdbp("PAGE_FAULT_ERROR_CODE  MASK:0x%lx  MATCH:0x%lx\n", 
+         (unsigned long)vmr(PAGE_FAULT_ERROR_CODE_MASK),
+         (unsigned long)vmr(PAGE_FAULT_ERROR_CODE_MATCH));
+    kdbp("VMEntry: intr_info=%08x errcode=%08x ilen=%08x\n",
+           (uint32_t)vmr(VM_ENTRY_INTR_INFO),
+           (uint32_t)vmr(VM_ENTRY_EXCEPTION_ERROR_CODE),
+           (uint32_t)vmr(VM_ENTRY_INSTRUCTION_LEN));
+    kdbp("VMExit: intr_info=%08x errcode=%08x ilen=%08x\n",
+           (uint32_t)vmr(VM_EXIT_INTR_INFO),
+           (uint32_t)vmr(VM_EXIT_INTR_ERROR_CODE),
+           (uint32_t)vmr(VM_ENTRY_INSTRUCTION_LEN));
+    kdbp("        reason=%08x qualification=%08x\n",
+           (uint32_t)vmr(VM_EXIT_REASON),
+           (uint32_t)vmr(EXIT_QUALIFICATION));
+    kdbp("IDTVectoring: info=%08x errcode=%08x\n",
+           (uint32_t)vmr(IDT_VECTORING_INFO),
+           (uint32_t)vmr(IDT_VECTORING_ERROR_CODE));
+    kdbp("TPR Threshold = 0x%02x\n",
+           (uint32_t)vmr(TPR_THRESHOLD));
+    kdbp("EPT pointer = 0x%08x%08x\n",
+           (uint32_t)vmr(EPT_POINTER_HIGH), (uint32_t)vmr(EPT_POINTER));
+    kdbp("Virtual processor ID = 0x%04x\n",
+           (uint32_t)vmr(VIRTUAL_PROCESSOR_ID));
+    kdbp("=================================================================\n");
+}
+
+/* Flush VMCS on this cpu if it needs to: 
+ *   - Upon leaving kdb, the HVM cpu will resume in vmx_vmexit_handler() and 
+ *     do __vmreads. So, the VMCS pointer can't be left cleared.
+ *   - Doing __vmpclear will set the vmx state to 'clear', so to resume a
+ *     vmlaunch must be done and not vmresume. This means, we must clear 
+ *     arch_vmx->launched.
+ */
+void kdb_curr_cpu_flush_vmcs(void)
+{
+    struct domain *dp;
+    struct vcpu *vp;
+    int ccpu = smp_processor_id();
+    struct vmcs_struct *cvp = this_cpu(current_vmcs);
+
+    if (this_cpu(current_vmcs) == NULL)
+        return;             /* no HVM active on this CPU */
+
+    kdbp("KDB:[%d] curvmcs:%lx/%lx\n", ccpu, cvp, virt_to_maddr(cvp));
+
+    /* looks like we got one. unfortunately, current_vmcs points to vmcs 
+     * and not VCPU, so we gotta search the entire list... */
+    for_each_domain (dp) {
+        if ( !(is_hvm_or_hyb_domain(dp)) || dp->is_dying)
+            continue;
+        for_each_vcpu (dp, vp) {
+            if ( vp->arch.hvm_vmx.vmcs == cvp ) {
+                __vmpclear(virt_to_maddr(vp->arch.hvm_vmx.vmcs));
+                __vmptrld(virt_to_maddr(vp->arch.hvm_vmx.vmcs));
+                vp->arch.hvm_vmx.launched = 0;
+                this_cpu(current_vmcs) = NULL;
+                kdbp("KDB:[%d] %d:%d current_vmcs:%lx/%lx flushed\n",
+		     ccpu, dp->domain_id, vp->vcpu_id, cvp, virt_to_maddr(cvp));
+            }
+        }
+    }
+}
+
+/*
+ * domid == 0 : display for all HVM domains  (dom0 is never an HVM domain)
+ * vcpu id == -1 : display all vcpuids
+ * PreCondition: all HVM cpus (including current cpu) have flushed VMCS
+ */
+void kdb_dump_vmcs(domid_t did, int vid)
+{
+    struct domain *dp;
+    struct vcpu *vp;
+    struct vmcs_struct  *vmcsp;
+    u64 addr = -1;
+
+    ASSERT(!local_irq_is_enabled());     /* kdb should always run disabled */
+    __vmptrst(&addr);
+
+    for_each_domain (dp) {
+        if ( !(is_hvm_or_hyb_domain(dp)) || dp->is_dying)
+            continue;
+        if (did != 0 && did != dp->domain_id)
+            continue;
+
+        for_each_vcpu (dp, vp) {
+            if (vid != -1 && vid != vp->vcpu_id)
+                continue;
+
+            vmcsp = vp->arch.hvm_vmx.vmcs;
+            kdbp("VMCS %lx/%lx [domid:%d (%p)  vcpu:%d (%p)]:\n", vmcsp,
+                 virt_to_maddr(vmcsp), dp->domain_id, dp, vp->vcpu_id, vp);
+            __vmptrld(virt_to_maddr(vmcsp));
+            kdb_print_vmcs(vp);
+            __vmpclear(virt_to_maddr(vmcsp));
+            vp->arch.hvm_vmx.launched = 0;
+        }
+        kdbp("\n");
+    }
+    /* restore orig vmcs pointer for __vmreads in vmx_vmexit_handler() */
+    if (addr && addr != (u64)-1)
+        __vmptrld(addr);
+}
+#endif
 
 /*
  * Local variables:
diff -r 32034d1914a6 xen/arch/x86/hvm/vmx/vmx.c
--- a/xen/arch/x86/hvm/vmx/vmx.c	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/arch/x86/hvm/vmx/vmx.c	Wed Aug 29 14:39:57 2012 -0700
@@ -2183,11 +2183,14 @@
         printk("reason not known yet!");
         break;
     }
-
+#if defined(XEN_KDB_CONFIG)
+    kdbp("\n************* VMCS Area **************\n");
+    kdb_dump_vmcs(curr->domain->domain_id, (curr)->vcpu_id);
+#else
     printk("************* VMCS Area **************\n");
     vmcs_dump_vcpu(curr);
     printk("**************************************\n");
-
+#endif
     domain_crash(curr->domain);
 }
 
@@ -2415,6 +2418,12 @@
             write_debugreg(6, exit_qualification | 0xffff0ff0);
             if ( !v->domain->debugger_attached || cpu_has_monitor_trap_flag )
                 goto exit_and_crash;
+
+#if defined(XEN_KDB_CONFIG)
+            /* TRAP_debug: IP points correctly to next instr */
+            if (kdb_handle_trap_entry(vector, regs))
+                break;
+#endif
             domain_pause_for_debugger();
             break;
         case TRAP_int3: 
@@ -2423,6 +2432,13 @@
             if ( v->domain->debugger_attached )
             {
                 update_guest_eip(); /* Safe: INT3 */            
+#if defined(XEN_KDB_CONFIG)
+                /* vmcs.IP points to the bp instruction; kdb expects bp+1.
+                 * Hence this is placed after the update_guest_eip() above,
+                 * which advances to bp+1. This works for gdbsx too.
+                 */
+                if (kdb_handle_trap_entry(vector, regs))
+                    break;
+#endif
                 current->arch.gdbsx_vcpu_event = TRAP_int3;
                 domain_pause_for_debugger();
                 break;
@@ -2707,6 +2723,10 @@
     case EXIT_REASON_MONITOR_TRAP_FLAG:
         v->arch.hvm_vmx.exec_control &= ~CPU_BASED_MONITOR_TRAP_FLAG;
         vmx_update_cpu_exec_control(v);
+#if defined(XEN_KDB_CONFIG)
+        if (kdb_handle_trap_entry(TRAP_debug, regs))
+            break;
+#endif
         if ( v->arch.hvm_vcpu.single_step ) {
           hvm_memory_event_single_step(regs->eip);
           if ( v->domain->debugger_attached )
diff -r 32034d1914a6 xen/arch/x86/irq.c
--- a/xen/arch/x86/irq.c	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/arch/x86/irq.c	Wed Aug 29 14:39:57 2012 -0700
@@ -2305,3 +2305,29 @@
     return is_hvm_domain(d) && pirq &&
            pirq->arch.hvm.emuirq != IRQ_UNBOUND; 
 }
+
+#ifdef XEN_KDB_CONFIG
+void kdb_prnt_guest_mapped_irqs(void)
+{
+    int irq, j;
+    char affstr[NR_CPUS/4+NR_CPUS/32+2];    /* courtesy dump_irqs() */
+
+    kdbp("irq  vec  aff  type  domid:mapped-pirq pairs  (all in decimal)\n");
+    for (irq=0; irq < nr_irqs; irq++) {
+        irq_desc_t  *dp = irq_to_desc(irq);
+        struct arch_irq_desc *archp = &dp->arch;
+        irq_guest_action_t *actp = (irq_guest_action_t *)dp->action;
+
+        if (!dp->handler || dp->handler == &no_irq_type ||
+            !(dp->status & IRQ_GUEST))
+            continue;
+
+        cpumask_scnprintf(affstr, sizeof(affstr), dp->affinity);
+        kdbp("[%3d] %3d %3s %-13s ", irq, archp->vector, affstr,
+             dp->handler->typename);
+        for (j=0; j < actp->nr_guests; j++)
+            kdbp("%03d:%04d ", actp->guest[j]->domain_id,
+                 domain_irq_to_pirq(actp->guest[j], irq));
+        kdbp("\n");
+    }
+}
+#endif
diff -r 32034d1914a6 xen/arch/x86/setup.c
--- a/xen/arch/x86/setup.c	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/arch/x86/setup.c	Wed Aug 29 14:39:57 2012 -0700
@@ -47,6 +47,13 @@
 #include <xen/cpu.h>
 #include <asm/nmi.h>
 
+#ifdef XEN_KDB_CONFIG
+#include <asm/debugger.h>
+
+int opt_earlykdb = 0;
+boolean_param("earlykdb", opt_earlykdb);
+#endif
+
 /* opt_nosmp: If true, secondary processors are ignored. */
 static bool_t __initdata opt_nosmp;
 boolean_param("nosmp", opt_nosmp);
@@ -1242,6 +1249,11 @@
 
     trap_init();
 
+#ifdef XEN_KDB_CONFIG
+    kdb_init();
+    if (opt_earlykdb)
+        kdb_trap_immed(KDB_TRAP_NONFATAL);
+#endif
     rcu_init();
     
     early_time_init();
diff -r 32034d1914a6 xen/arch/x86/smp.c
--- a/xen/arch/x86/smp.c	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/arch/x86/smp.c	Wed Aug 29 14:39:57 2012 -0700
@@ -273,7 +273,7 @@
  * Structure and data for smp_call_function()/on_selected_cpus().
  */
 
-static void __smp_call_function_interrupt(void);
+static void __smp_call_function_interrupt(struct cpu_user_regs *regs);
 static DEFINE_SPINLOCK(call_lock);
 static struct call_data_struct {
     void (*func) (void *info);
@@ -321,7 +321,7 @@
     if ( cpumask_test_cpu(smp_processor_id(), &call_data.selected) )
     {
         local_irq_disable();
-        __smp_call_function_interrupt();
+        __smp_call_function_interrupt(NULL);
         local_irq_enable();
     }
 
@@ -390,7 +390,7 @@
     this_cpu(irq_count)++;
 }
 
-static void __smp_call_function_interrupt(void)
+static void __smp_call_function_interrupt(struct cpu_user_regs *regs)
 {
     void (*func)(void *info) = call_data.func;
     void *info = call_data.info;
@@ -411,6 +411,11 @@
     {
         mb();
         cpumask_clear_cpu(cpu, &call_data.selected);
+#ifdef XEN_KDB_CONFIG
+        if (info && !strcmp(info, "XENKDB")) {           /* called from kdb */
+                (*(void (*)(struct cpu_user_regs *, void *))func)(regs, info);
+        } else
+#endif
         (*func)(info);
     }
 
@@ -421,5 +426,5 @@
 {
     ack_APIC_irq();
     perfc_incr(ipis);
-    __smp_call_function_interrupt();
+    __smp_call_function_interrupt(regs);
 }
diff -r 32034d1914a6 xen/arch/x86/time.c
--- a/xen/arch/x86/time.c	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/arch/x86/time.c	Wed Aug 29 14:39:57 2012 -0700
@@ -2007,6 +2007,46 @@
 }
 __initcall(setup_dump_softtsc);
 
+#ifdef XEN_KDB_CONFIG
+void kdb_time_resume(int update_domains)
+{
+        s_time_t now;
+        int ccpu = smp_processor_id();
+        struct cpu_time *t = &this_cpu(cpu_time);
+
+        if (!plt_src.read_counter)            /* not initialized for earlykdb */
+                return;
+
+        if (update_domains) {
+                plt_stamp = plt_src.read_counter();
+                platform_timer_stamp = plt_stamp64;
+                platform_time_calibration();
+                do_settime(get_cmos_time(), 0, read_platform_stime());
+        }
+        if (local_irq_is_enabled())
+                kdbp("kdb BUG: irqs enabled in kdb_time_resume(). ccpu:%d\n",
+                     ccpu);
+
+        rdtscll(t->local_tsc_stamp);
+        now = read_platform_stime();
+        t->stime_master_stamp = now;
+        t->stime_local_stamp  = now;
+
+        update_vcpu_system_time(current);
+
+        if (update_domains)
+                set_timer(&calibration_timer, NOW() + EPOCH);
+}
+
+void kdb_dump_time_pcpu(void)
+{
+    int cpu;
+    for_each_online_cpu(cpu) {
+        kdbp("[%d]: cpu_time: %016lx\n", cpu, &per_cpu(cpu_time, cpu));
+        kdbp("[%d]: cpu_calibration: %016lx\n", cpu, 
+             &per_cpu(cpu_calibration, cpu));
+    }
+}
+#endif
 /*
  * Local variables:
  * mode: C
diff -r 32034d1914a6 xen/arch/x86/traps.c
--- a/xen/arch/x86/traps.c	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/arch/x86/traps.c	Wed Aug 29 14:39:57 2012 -0700
@@ -225,7 +225,7 @@
 
 #else
 
-static void show_trace(struct cpu_user_regs *regs)
+void show_trace(struct cpu_user_regs *regs)
 {
     unsigned long *frame, next, addr, low, high;
 
@@ -3326,6 +3326,10 @@
     if ( nmi_callback(regs, cpu) )
         return;
 
+#ifdef XEN_KDB_CONFIG
+    if (kdb_enabled && kdb_handle_trap_entry(TRAP_nmi, regs))
+        return;
+#endif
     if ( nmi_watchdog )
         nmi_watchdog_tick(regs);
 
diff -r 32034d1914a6 xen/arch/x86/x86_64/compat/entry.S
--- a/xen/arch/x86/x86_64/compat/entry.S	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/arch/x86/x86_64/compat/entry.S	Wed Aug 29 14:39:57 2012 -0700
@@ -95,6 +95,10 @@
 /* %rbx: struct vcpu */
 ENTRY(compat_test_all_events)
         cli                             # tests must not race interrupts
+#ifdef XEN_KDB_CONFIG
+        testl $1, kdb_session_begun(%rip)
+        jnz   compat_restore_all_guest
+#endif
 /*compat_test_softirqs:*/
         movl  VCPU_processor(%rbx),%eax
         shlq  $IRQSTAT_shift,%rax
diff -r 32034d1914a6 xen/arch/x86/x86_64/entry.S
--- a/xen/arch/x86/x86_64/entry.S	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/arch/x86/x86_64/entry.S	Wed Aug 29 14:39:57 2012 -0700
@@ -184,6 +184,10 @@
 /* %rbx: struct vcpu */
 test_all_events:
         cli                             # tests must not race interrupts
+#ifdef XEN_KDB_CONFIG                   /* 64bit dom0 will resume here */
+        testl $1, kdb_session_begun(%rip)
+        jnz   restore_all_guest
+#endif
 /*test_softirqs:*/  
         movl  VCPU_processor(%rbx),%eax
         shl   $IRQSTAT_shift,%rax
@@ -546,6 +550,13 @@
 
 ENTRY(int3)
         pushq $0
+#ifdef XEN_KDB_CONFIG
+        pushq %rax
+        GET_CPUINFO_FIELD(CPUINFO_processor_id, %rax)
+        movq  (%rax), %rax
+        lock  bts %rax, kdb_cpu_traps(%rip)
+        popq  %rax
+#endif
         movl  $TRAP_int3,4(%rsp)
         jmp   handle_exception
 
diff -r 32034d1914a6 xen/common/domain.c
--- a/xen/common/domain.c	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/common/domain.c	Wed Aug 29 14:39:57 2012 -0700
@@ -530,6 +530,14 @@
 {
     struct vcpu *v;
 
+#ifdef XEN_KDB_CONFIG
+    if (reason == SHUTDOWN_crash) {
+        if ( IS_PRIV(d) )
+            kdb_trap_immed(KDB_TRAP_FATAL);
+        else
+            kdb_trap_immed(KDB_TRAP_NONFATAL);
+    }
+#endif
     spin_lock(&d->shutdown_lock);
 
     if ( d->shutdown_code == -1 )
@@ -624,7 +632,9 @@
     for_each_vcpu ( d, v )
         vcpu_sleep_nosync(v);
 
-    send_global_virq(VIRQ_DEBUGGER);
+    /* send VIRQ_DEBUGGER to guest only if gdbsx_vcpu_event is not active */
+    if (current->arch.gdbsx_vcpu_event == 0)
+        send_global_virq(VIRQ_DEBUGGER);
 }
 
 /* Complete domain destroy after RCU readers are not holding old references. */
diff -r 32034d1914a6 xen/common/sched_credit.c
--- a/xen/common/sched_credit.c	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/common/sched_credit.c	Wed Aug 29 14:39:57 2012 -0700
@@ -1475,6 +1475,33 @@
     printk("\n");
 }
 
+#ifdef XEN_KDB_CONFIG
+static void kdb_csched_dump(int cpu)
+{
+    struct csched_pcpu *pcpup = CSCHED_PCPU(cpu);
+    struct vcpu *scurrvp = (CSCHED_VCPU(current))->vcpu;
+    struct list_head *tmp, *runq = RUNQ(cpu);
+
+    kdbp("    csched_pcpu: %p\n", pcpup);
+    kdbp("    curr csched:%p {vcpu:%p id:%d domid:%d}\n", (current)->sched_priv,
+         scurrvp, scurrvp->vcpu_id, scurrvp->domain->domain_id);
+    kdbp("    runq:\n");
+
+    /* runq_elem.next is the first member, so a list_head pointer can be cast
+     * directly to a struct csched_vcpu pointer. Verify that assumption. */
+    if (offsetof(struct csched_vcpu, runq_elem.next) != 0) {
+        kdbp("runq_elem.next is not first in struct csched_vcpu; fix me\n");
+        return;        /* otherwise the loop below would crash */
+    }
+    for (tmp = runq->next; tmp != runq; tmp = tmp->next) {
+
+        struct csched_vcpu *csp = (struct csched_vcpu *)tmp;
+        struct vcpu *vp = csp->vcpu;
+        kdbp("      csp:%p pri:%02d vcpu: {p:%p id:%d domid:%d}\n", csp,
+             csp->pri, vp, vp->vcpu_id, vp->domain->domain_id);
+    }
+}
+#endif
+
 static void
 csched_dump_pcpu(const struct scheduler *ops, int cpu)
 {
@@ -1484,6 +1511,10 @@
     int loop;
 #define cpustr keyhandler_scratch
 
+#ifdef XEN_KDB_CONFIG
+    kdb_csched_dump(cpu);
+    return;
+#endif
     spc = CSCHED_PCPU(cpu);
     runq = &spc->runq;
 
diff -r 32034d1914a6 xen/common/schedule.c
--- a/xen/common/schedule.c	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/common/schedule.c	Wed Aug 29 14:39:57 2012 -0700
@@ -1454,6 +1454,25 @@
     schedule();
 }
 
+#ifdef XEN_KDB_CONFIG
+void kdb_print_sched_info(void)
+{
+    int cpu;
+
+    kdbp("Scheduler: name:%s opt_name:%s id:%d\n", ops.name, ops.opt_name,
+         ops.sched_id);
+    kdbp("per cpu schedule_data:\n");
+    for_each_online_cpu(cpu) {
+        struct schedule_data *p =  &per_cpu(schedule_data, cpu);
+        kdbp("  cpu:%d  &(per cpu)schedule_data:%p\n", cpu, p);
+        kdbp("         curr:%p sched_priv:%p\n", p->curr, p->sched_priv);
+        kdbp("\n");
+        ops.dump_cpu_state(&ops, cpu);
+        kdbp("\n");
+    }
+}
+#endif
+
 #ifdef CONFIG_COMPAT
 #include "compat/schedule.c"
 #endif
diff -r 32034d1914a6 xen/common/symbols.c
--- a/xen/common/symbols.c	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/common/symbols.c	Wed Aug 29 14:39:57 2012 -0700
@@ -168,3 +168,21 @@
 
     spin_unlock_irqrestore(&lock, flags);
 }
+
+#ifdef XEN_KDB_CONFIG
+/*
+ * Given a symbol, return its address.
+ */
+unsigned long address_lookup(char *symp)
+{
+    int i, off = 0;
+    char namebuf[KSYM_NAME_LEN+1];
+
+    for (i=0; i < symbols_num_syms; i++) {
+        off = symbols_expand_symbol(off, namebuf);
+        if (strcmp(namebuf, symp) == 0)                  /* found it */
+            return symbols_address(i);
+    }
+    return 0;
+}
+#endif
diff -r 32034d1914a6 xen/common/timer.c
--- a/xen/common/timer.c	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/common/timer.c	Wed Aug 29 14:39:57 2012 -0700
@@ -643,6 +643,48 @@
     register_keyhandler('a', &dump_timerq_keyhandler);
 }
 
+#ifdef XEN_KDB_CONFIG
+#include <xen/symbols.h>
+void kdb_dump_timer_queues(void)
+{
+    struct timer  *t;
+    struct timers *ts;
+    unsigned long sz, offs;
+    char buf[KSYM_NAME_LEN+1];
+    int cpu, j;
+    u64 tsc;
+
+    for_each_online_cpu( cpu )
+    {
+        ts = &per_cpu(timers, cpu);
+        kdbp("CPU[%02d]:", cpu);
+
+        if (cpu == smp_processor_id()) {
+            s_time_t now = NOW();
+            rdtscll(tsc);
+            kdbp("NOW:0x%08x%08x TSC:0x%016lx\n", (u32)(now>>32),(u32)now, tsc);
+        } else
+            kdbp("\n");
+
+        /* timers in the heap */
+        for ( j = 1; j <= GET_HEAP_SIZE(ts->heap); j++ ) {
+            t = ts->heap[j];
+            kdbp("  %d: exp=0x%08x%08x fn:%s data:%p\n",
+                 j, (u32)(t->expires>>32), (u32)t->expires,
+                 symbols_lookup((unsigned long)t->function, &sz, &offs, buf),
+                 t->data);
+        }
+        /* timers on the link list */
+        for ( t = ts->list, j = 0; t != NULL; t = t->list_next, j++ ) {
+            kdbp(" L%d: exp=0x%08x%08x fn:%s data:%p\n",
+                 j, (u32)(t->expires>>32), (u32)t->expires,
+                 symbols_lookup((unsigned long)t->function, &sz, &offs, buf),
+                 t->data);
+        }
+    }
+}
+#endif
+
 /*
  * Local variables:
  * mode: C
diff -r 32034d1914a6 xen/drivers/char/console.c
--- a/xen/drivers/char/console.c	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/drivers/char/console.c	Wed Aug 29 14:39:57 2012 -0700
@@ -295,6 +295,21 @@
 {
     static int switch_code_count = 0;
 
+#ifdef XEN_KDB_CONFIG
+    /* if ctrl-\ pressed and kdb handles it, return */
+    if (kdb_enabled && c == 0x1c) {
+        if (!kdb_session_begun) {
+            if (kdb_keyboard(regs))
+                return;
+        } else {
+            kdbp("kdb session already active; please try again later\n");
+            return;
+        }
+    }
+    if (kdb_session_begun)      /* kdb should already be polling */
+        return;                 /* swallow chars so they don't buffer in dom0 */
+#endif
+
     if ( switch_code && (c == switch_code) )
     {
         /* We eat CTRL-<switch_char> in groups of 3 to switch console input. */
@@ -710,6 +725,18 @@
     atomic_dec(&print_everything);
 }
 
+#ifdef XEN_KDB_CONFIG
+void console_putc(char c)
+{
+    serial_putc(sercon_handle, c);
+}
+
+int console_getc(void)
+{
+    return serial_getc(sercon_handle);
+}
+#endif
+
 /*
  * printk rate limiting, lifted from Linux.
  *
diff -r 32034d1914a6 xen/include/asm-x86/debugger.h
--- a/xen/include/asm-x86/debugger.h	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/include/asm-x86/debugger.h	Wed Aug 29 14:39:57 2012 -0700
@@ -39,7 +39,11 @@
 #define DEBUGGER_trap_fatal(_v, _r) \
     if ( debugger_trap_fatal(_v, _r) ) return;
 
-#if defined(CRASH_DEBUG)
+#if defined(XEN_KDB_CONFIG)
+#define debugger_trap_immediate() kdb_trap_immed(KDB_TRAP_NONFATAL)
+#define debugger_trap_fatal(_v, _r) kdb_trap_fatal(_v, _r)
+
+#elif defined(CRASH_DEBUG)
 
 #include <xen/gdbstub.h>
 
@@ -70,6 +74,10 @@
 {
     struct vcpu *v = current;
 
+#ifdef XEN_KDB_CONFIG
+    if (kdb_handle_trap_entry(vector, regs))
+        return 1;
+#endif
     if ( guest_kernel_mode(v, regs) && v->domain->debugger_attached &&
          ((vector == TRAP_int3) || (vector == TRAP_debug)) )
     {
diff -r 32034d1914a6 xen/include/xen/lib.h
--- a/xen/include/xen/lib.h	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/include/xen/lib.h	Wed Aug 29 14:39:57 2012 -0700
@@ -116,4 +116,7 @@
 struct cpu_user_regs;
 void dump_execstate(struct cpu_user_regs *);
 
+#ifdef XEN_KDB_CONFIG
+#include "../../kdb/include/kdb_extern.h"
+#endif
 #endif /* __LIB_H__ */
diff -r 32034d1914a6 xen/include/xen/sched.h
--- a/xen/include/xen/sched.h	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/include/xen/sched.h	Wed Aug 29 14:39:57 2012 -0700
@@ -576,11 +576,14 @@
 unsigned long hypercall_create_continuation(
     unsigned int op, const char *format, ...);
 void hypercall_cancel_continuation(void);
-
+#ifdef XEN_KDB_CONFIG
+#define hypercall_preempt_check() (0)
+#else
 #define hypercall_preempt_check() (unlikely(    \
         softirq_pending(smp_processor_id()) |   \
         local_events_need_delivery()            \
     ))
+#endif
 
 extern struct domain *domain_list;
 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 30 18:23:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Aug 2012 18:23:58 +0000
Date: Thu, 30 Aug 2012 11:23:23 -0700
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>
Message-ID: <20120830112323.5086d73c@mantra.us.oracle.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.7.6 (GTK+ 2.18.9; x86_64-redhat-linux-gnu)
Mime-Version: 1.0
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Subject: [Xen-devel] [RFC PATCH 1/2]: hypervisor debugger
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Changes to xen code for the debugger.
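From the Rules.mk and setup.c hunks below, the build/boot workflow looks
roughly like the following; this invocation is a sketch (the make target name
and command-line details beyond the patch itself are assumptions):

```shell
# Build Xen with the debugger compiled in. Rules.mk adds "kdb ?= n",
# so it must be enabled explicitly; this defines XEN_KDB_CONFIG.
make xen kdb=y

# Optionally break into kdb early during boot via the new boolean
# parameter registered in setup.c: append "earlykdb" to the Xen
# command line. At runtime, Ctrl-\ on the serial console enters the
# debugger (see the console.c hunk).
```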


diff -r 32034d1914a6 xen/Makefile
--- a/xen/Makefile	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/Makefile	Wed Aug 29 14:39:57 2012 -0700
@@ -61,6 +61,7 @@
 	$(MAKE) -f $(BASEDIR)/Rules.mk -C xsm clean
 	$(MAKE) -f $(BASEDIR)/Rules.mk -C crypto clean
 	$(MAKE) -f $(BASEDIR)/Rules.mk -C arch/$(TARGET_ARCH) clean
+	$(MAKE) -f $(BASEDIR)/Rules.mk -C kdb clean
 	rm -f include/asm *.o $(TARGET) $(TARGET).gz $(TARGET)-syms *~ core
 	rm -f include/asm-*/asm-offsets.h
 	[ -d tools/figlet ] && rm -f .banner*
@@ -129,7 +130,7 @@
 	  echo ""; \
 	  echo "#endif") <$< >$@
 
-SUBDIRS = xsm arch/$(TARGET_ARCH) common drivers
+SUBDIRS = xsm arch/$(TARGET_ARCH) common drivers kdb
 define all_sources
     ( find include/asm-$(TARGET_ARCH) -name '*.h' -print; \
       find include -name 'asm-*' -prune -o -name '*.h' -print; \
diff -r 32034d1914a6 xen/Rules.mk
--- a/xen/Rules.mk	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/Rules.mk	Wed Aug 29 14:39:57 2012 -0700
@@ -10,6 +10,7 @@
 crash_debug   ?= n
 frame_pointer ?= n
 lto           ?= n
+kdb           ?= n
 
 include $(XEN_ROOT)/Config.mk
 
@@ -40,6 +41,7 @@
 ALL_OBJS-y               += $(BASEDIR)/xsm/built_in.o
 ALL_OBJS-y               += $(BASEDIR)/arch/$(TARGET_ARCH)/built_in.o
 ALL_OBJS-$(x86)          += $(BASEDIR)/crypto/built_in.o
+ALL_OBJS-$(kdb)          += $(BASEDIR)/kdb/built_in.o
 
 CFLAGS-y                += -g -D__XEN__ -include $(BASEDIR)/include/xen/config.h
 CFLAGS-$(XSM_ENABLE)    += -DXSM_ENABLE
@@ -53,6 +55,7 @@
 CFLAGS-$(HAS_ACPI)      += -DHAS_ACPI
 CFLAGS-$(HAS_PASSTHROUGH) += -DHAS_PASSTHROUGH
 CFLAGS-$(frame_pointer) += -fno-omit-frame-pointer -DCONFIG_FRAME_POINTER
+CFLAGS-$(kdb)           += -DXEN_KDB_CONFIG
 
 ifneq ($(max_phys_cpus),)
 CFLAGS-y                += -DMAX_PHYS_CPUS=$(max_phys_cpus)
diff -r 32034d1914a6 xen/arch/x86/hvm/svm/entry.S
--- a/xen/arch/x86/hvm/svm/entry.S	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/arch/x86/hvm/svm/entry.S	Wed Aug 29 14:39:57 2012 -0700
@@ -59,12 +59,23 @@
         get_current(bx)
         CLGI
 
+#ifdef XEN_KDB_CONFIG
+#if defined(__x86_64__)
+        testl $1, kdb_session_begun(%rip)
+#else
+        testl $1, kdb_session_begun
+#endif
+        jnz  .Lkdb_skip_softirq
+#endif
         mov  VCPU_processor(r(bx)),%eax
         shl  $IRQSTAT_shift,r(ax)
         lea  addr_of(irq_stat),r(dx)
         testl $~0,(r(dx),r(ax),1)
         jnz  .Lsvm_process_softirqs
 
+#ifdef XEN_KDB_CONFIG
+.Lkdb_skip_softirq:
+#endif
         testb $0, VCPU_nsvm_hap_enabled(r(bx))
 UNLIKELY_START(nz, nsvm_hap)
         mov  VCPU_nhvm_p2m(r(bx)),r(ax)
diff -r 32034d1914a6 xen/arch/x86/hvm/svm/svm.c
--- a/xen/arch/x86/hvm/svm/svm.c	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/arch/x86/hvm/svm/svm.c	Wed Aug 29 14:39:57 2012 -0700
@@ -2170,6 +2170,10 @@
         break;
 
     case VMEXIT_EXCEPTION_DB:
+#ifdef XEN_KDB_CONFIG
+        if (kdb_handle_trap_entry(TRAP_debug, regs))
+            break;
+#endif
         if ( !v->domain->debugger_attached )
             goto exit_and_crash;
         domain_pause_for_debugger();
@@ -2182,6 +2186,10 @@
         if ( (inst_len = __get_instruction_length(v, INSTR_INT3)) == 0 )
             break;
         __update_guest_eip(regs, inst_len);
+#ifdef XEN_KDB_CONFIG
+        if (kdb_handle_trap_entry(TRAP_int3, regs))
+            break;
+#endif
         current->arch.gdbsx_vcpu_event = TRAP_int3;
         domain_pause_for_debugger();
         break;
diff -r 32034d1914a6 xen/arch/x86/hvm/svm/vmcb.c
--- a/xen/arch/x86/hvm/svm/vmcb.c	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/arch/x86/hvm/svm/vmcb.c	Wed Aug 29 14:39:57 2012 -0700
@@ -315,6 +315,36 @@
     register_keyhandler('v', &vmcb_dump_keyhandler);
 }
 
+#if defined(XEN_KDB_CONFIG)
+/* did == 0  : display for all HVM domains (domid 0 is never HVM).
+ * vid == -1 : display for all HVM VCPUs.
+ */
+void kdb_dump_vmcb(domid_t did, int vid)
+{
+    struct domain *dp;
+    struct vcpu *vp;
+
+    rcu_read_lock(&domlist_read_lock);
+    for_each_domain (dp) {
+        if (!is_hvm_or_hyb_domain(dp) || dp->is_dying)
+            continue;
+        if (did != 0 && did != dp->domain_id)
+            continue;
+
+        for_each_vcpu (dp, vp) {
+            if (vid != -1 && vid != vp->vcpu_id)
+                continue;
+
+            kdbp("  VMCB [domid: %d  vcpu:%d]:\n", dp->domain_id, vp->vcpu_id);
+            svm_vmcb_dump("kdb", vp->arch.hvm_svm.vmcb);
+            kdbp("\n");
+        }
+        kdbp("\n");
+    }
+    rcu_read_unlock(&domlist_read_lock);
+}
+#endif
+
 /*
  * Local variables:
  * mode: C
diff -r 32034d1914a6 xen/arch/x86/hvm/vmx/entry.S
--- a/xen/arch/x86/hvm/vmx/entry.S	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/arch/x86/hvm/vmx/entry.S	Wed Aug 29 14:39:57 2012 -0700
@@ -124,12 +124,23 @@
         get_current(bx)
         cli
 
+#ifdef XEN_KDB_CONFIG
+#if defined(__x86_64__)
+        testl $1, kdb_session_begun(%rip)
+#else
+        testl $1, kdb_session_begun
+#endif
+        jnz  .Lkdb_skip_softirq
+#endif
         mov  VCPU_processor(r(bx)),%eax
         shl  $IRQSTAT_shift,r(ax)
         lea  addr_of(irq_stat),r(dx)
         cmpl $0,(r(dx),r(ax),1)
         jnz  .Lvmx_process_softirqs
 
+#ifdef XEN_KDB_CONFIG
+.Lkdb_skip_softirq:
+#endif
         testb $0xff,VCPU_vmx_emulate(r(bx))
         jnz .Lvmx_goto_emulator
         testb $0xff,VCPU_vmx_realmode(r(bx))
diff -r 32034d1914a6 xen/arch/x86/hvm/vmx/vmcs.c
--- a/xen/arch/x86/hvm/vmx/vmcs.c	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/arch/x86/hvm/vmx/vmcs.c	Wed Aug 29 14:39:57 2012 -0700
@@ -1117,6 +1117,13 @@
         hvm_asid_flush_vcpu(v);
     }
 
+#if defined(XEN_KDB_CONFIG)
+    if (kdb_dr7)
+        __vmwrite(GUEST_DR7, kdb_dr7);
+    else
+        __vmwrite(GUEST_DR7, 0);
+#endif
+
     debug_state = v->domain->debugger_attached
                   || v->domain->arch.hvm_domain.params[HVM_PARAM_MEMORY_EVENT_INT3]
                   || v->domain->arch.hvm_domain.params[HVM_PARAM_MEMORY_EVENT_SINGLE_STEP];
@@ -1326,6 +1333,220 @@
     register_keyhandler('v', &vmcs_dump_keyhandler);
 }
 
+#if defined(XEN_KDB_CONFIG)
+#define GUEST_EFER      0x2806   /* see page 23-20 */
+#define GUEST_EFER_HIGH 0x2807   /* see page 23-20 */
+
+/* We can't use vmcs_dump_vcpu() here: it calls vmx_vmcs_enter(), which would
+ * IPI other CPUs. Also, print only the subset relevant to software debugging. */
+static void noinline kdb_print_vmcs(struct vcpu *vp)
+{
+    struct cpu_user_regs *regs = &vp->arch.user_regs;
+    unsigned long long x;
+
+    kdbp("*** Guest State ***\n");
+    kdbp("CR0: actual=0x%016llx, shadow=0x%016llx, gh_mask=%016llx\n",
+         (unsigned long long)vmr(GUEST_CR0),
+         (unsigned long long)vmr(CR0_READ_SHADOW), 
+         (unsigned long long)vmr(CR0_GUEST_HOST_MASK));
+    kdbp("CR4: actual=0x%016llx, shadow=0x%016llx, gh_mask=%016llx\n",
+         (unsigned long long)vmr(GUEST_CR4),
+         (unsigned long long)vmr(CR4_READ_SHADOW), 
+         (unsigned long long)vmr(CR4_GUEST_HOST_MASK));
+    kdbp("CR3: actual=0x%016llx, target_count=%d\n",
+         (unsigned long long)vmr(GUEST_CR3),
+         (int)vmr(CR3_TARGET_COUNT));
+    kdbp("     target0=%016llx, target1=%016llx\n",
+         (unsigned long long)vmr(CR3_TARGET_VALUE0),
+         (unsigned long long)vmr(CR3_TARGET_VALUE1));
+    kdbp("     target2=%016llx, target3=%016llx\n",
+         (unsigned long long)vmr(CR3_TARGET_VALUE2),
+         (unsigned long long)vmr(CR3_TARGET_VALUE3));
+    kdbp("RSP = 0x%016llx (0x%016llx)  RIP = 0x%016llx (0x%016llx)\n", 
+         (unsigned long long)vmr(GUEST_RSP),
+         (unsigned long long)regs->esp,
+         (unsigned long long)vmr(GUEST_RIP),
+         (unsigned long long)regs->eip);
+    kdbp("RFLAGS=0x%016llx (0x%016llx)  DR7 = 0x%016llx\n", 
+         (unsigned long long)vmr(GUEST_RFLAGS),
+         (unsigned long long)regs->eflags,
+         (unsigned long long)vmr(GUEST_DR7));
+    kdbp("Sysenter RSP=%016llx CS:RIP=%04x:%016llx\n",
+         (unsigned long long)vmr(GUEST_SYSENTER_ESP),
+         (int)vmr(GUEST_SYSENTER_CS),
+         (unsigned long long)vmr(GUEST_SYSENTER_EIP));
+    vmx_dump_sel("CS", GUEST_CS_SELECTOR);
+    vmx_dump_sel("DS", GUEST_DS_SELECTOR);
+    vmx_dump_sel("SS", GUEST_SS_SELECTOR);
+    vmx_dump_sel("ES", GUEST_ES_SELECTOR);
+    vmx_dump_sel("FS", GUEST_FS_SELECTOR);
+    vmx_dump_sel("GS", GUEST_GS_SELECTOR);
+    vmx_dump_sel2("GDTR", GUEST_GDTR_LIMIT);
+    vmx_dump_sel("LDTR", GUEST_LDTR_SELECTOR);
+    vmx_dump_sel2("IDTR", GUEST_IDTR_LIMIT);
+    vmx_dump_sel("TR", GUEST_TR_SELECTOR);
+    kdbp("Guest EFER = 0x%08x%08x\n",
+           (uint32_t)vmr(GUEST_EFER_HIGH), (uint32_t)vmr(GUEST_EFER));
+    kdbp("Guest PAT = 0x%08x%08x\n",
+           (uint32_t)vmr(GUEST_PAT_HIGH), (uint32_t)vmr(GUEST_PAT));
+    x  = (unsigned long long)vmr(TSC_OFFSET_HIGH) << 32;
+    x |= (uint32_t)vmr(TSC_OFFSET);
+    kdbp("TSC Offset = %016llx\n", x);
+    x  = (unsigned long long)vmr(GUEST_IA32_DEBUGCTL_HIGH) << 32;
+    x |= (uint32_t)vmr(GUEST_IA32_DEBUGCTL);
+    kdbp("DebugCtl=%016llx DebugExceptions=%016llx\n", x,
+           (unsigned long long)vmr(GUEST_PENDING_DBG_EXCEPTIONS));
+    kdbp("Interruptibility=%04x ActivityState=%04x\n",
+           (int)vmr(GUEST_INTERRUPTIBILITY_INFO),
+           (int)vmr(GUEST_ACTIVITY_STATE));
+
+    kdbp("MSRs: entry_load:%ld exit_load:%ld exit_store:%ld\n",
+         vmr(VM_ENTRY_MSR_LOAD_COUNT), vmr(VM_EXIT_MSR_LOAD_COUNT),
+         vmr(VM_EXIT_MSR_STORE_COUNT));
+
+    kdbp("\n*** Host State ***\n");
+    kdbp("RSP = 0x%016llx  RIP = 0x%016llx\n", 
+           (unsigned long long)vmr(HOST_RSP),
+           (unsigned long long)vmr(HOST_RIP));
+    kdbp("CS=%04x DS=%04x ES=%04x FS=%04x GS=%04x SS=%04x TR=%04x\n",
+           (uint16_t)vmr(HOST_CS_SELECTOR),
+           (uint16_t)vmr(HOST_DS_SELECTOR),
+           (uint16_t)vmr(HOST_ES_SELECTOR),
+           (uint16_t)vmr(HOST_FS_SELECTOR),
+           (uint16_t)vmr(HOST_GS_SELECTOR),
+           (uint16_t)vmr(HOST_SS_SELECTOR),
+           (uint16_t)vmr(HOST_TR_SELECTOR));
+    kdbp("FSBase=%016llx GSBase=%016llx TRBase=%016llx\n",
+           (unsigned long long)vmr(HOST_FS_BASE),
+           (unsigned long long)vmr(HOST_GS_BASE),
+           (unsigned long long)vmr(HOST_TR_BASE));
+    kdbp("GDTBase=%016llx IDTBase=%016llx\n",
+           (unsigned long long)vmr(HOST_GDTR_BASE),
+           (unsigned long long)vmr(HOST_IDTR_BASE));
+    kdbp("CR0=%016llx CR3=%016llx CR4=%016llx\n",
+           (unsigned long long)vmr(HOST_CR0),
+           (unsigned long long)vmr(HOST_CR3),
+           (unsigned long long)vmr(HOST_CR4));
+    kdbp("Sysenter RSP=%016llx CS:RIP=%04x:%016llx\n",
+           (unsigned long long)vmr(HOST_SYSENTER_ESP),
+           (int)vmr(HOST_SYSENTER_CS),
+           (unsigned long long)vmr(HOST_SYSENTER_EIP));
+    kdbp("Host PAT = 0x%08x%08x\n",
+           (uint32_t)vmr(HOST_PAT_HIGH), (uint32_t)vmr(HOST_PAT));
+
+    kdbp("\n*** Control State ***\n");
+    kdbp("PinBased=%08x CPUBased=%08x SecondaryExec=%08x\n",
+           (uint32_t)vmr(PIN_BASED_VM_EXEC_CONTROL),
+           (uint32_t)vmr(CPU_BASED_VM_EXEC_CONTROL),
+           (uint32_t)vmr(SECONDARY_VM_EXEC_CONTROL));
+    kdbp("EntryControls=%08x ExitControls=%08x\n",
+           (uint32_t)vmr(VM_ENTRY_CONTROLS),
+           (uint32_t)vmr(VM_EXIT_CONTROLS));
+    kdbp("ExceptionBitmap=%08x\n",
+           (uint32_t)vmr(EXCEPTION_BITMAP));
+    kdbp("PAGE_FAULT_ERROR_CODE  MASK:0x%lx  MATCH:0x%lx\n", 
+         (unsigned long)vmr(PAGE_FAULT_ERROR_CODE_MASK),
+         (unsigned long)vmr(PAGE_FAULT_ERROR_CODE_MATCH));
+    kdbp("VMEntry: intr_info=%08x errcode=%08x ilen=%08x\n",
+           (uint32_t)vmr(VM_ENTRY_INTR_INFO),
+           (uint32_t)vmr(VM_ENTRY_EXCEPTION_ERROR_CODE),
+           (uint32_t)vmr(VM_ENTRY_INSTRUCTION_LEN));
+    kdbp("VMExit: intr_info=%08x errcode=%08x ilen=%08x\n",
+           (uint32_t)vmr(VM_EXIT_INTR_INFO),
+           (uint32_t)vmr(VM_EXIT_INTR_ERROR_CODE),
+           (uint32_t)vmr(VM_EXIT_INSTRUCTION_LEN));
+    kdbp("        reason=%08x qualification=%08x\n",
+           (uint32_t)vmr(VM_EXIT_REASON),
+           (uint32_t)vmr(EXIT_QUALIFICATION));
+    kdbp("IDTVectoring: info=%08x errcode=%08x\n",
+           (uint32_t)vmr(IDT_VECTORING_INFO),
+           (uint32_t)vmr(IDT_VECTORING_ERROR_CODE));
+    kdbp("TPR Threshold = 0x%02x\n",
+           (uint32_t)vmr(TPR_THRESHOLD));
+    kdbp("EPT pointer = 0x%08x%08x\n",
+           (uint32_t)vmr(EPT_POINTER_HIGH), (uint32_t)vmr(EPT_POINTER));
+    kdbp("Virtual processor ID = 0x%04x\n",
+           (uint32_t)vmr(VIRTUAL_PROCESSOR_ID));
+    kdbp("=================================================================\n");
+}
+
+/* Flush the VMCS on this cpu if needed:
+ *   - Upon leaving kdb, the HVM cpu will resume in vmx_vmexit_handler() and
+ *     do __vmreads, so the VMCS pointer can't be left cleared.
+ *   - __vmpclear sets the vmx state to 'clear', so the next resume must be
+ *     a vmlaunch, not a vmresume. Hence we must also clear
+ *     arch_vmx->launched.
+ */
+void kdb_curr_cpu_flush_vmcs(void)
+{
+    struct domain *dp;
+    struct vcpu *vp;
+    int ccpu = smp_processor_id();
+    struct vmcs_struct *cvp = this_cpu(current_vmcs);
+
+    if (this_cpu(current_vmcs) == NULL)
+        return;             /* no HVM active on this CPU */
+
+    kdbp("KDB:[%d] curvmcs:%lx/%lx\n", ccpu, cvp, virt_to_maddr(cvp));
+
+    /* Looks like we got one. Unfortunately, current_vmcs points to the
+     * VMCS and not the vcpu, so we must search the entire domain list. */
+    for_each_domain (dp) {
+        if ( !(is_hvm_or_hyb_domain(dp)) || dp->is_dying)
+            continue;
+        for_each_vcpu (dp, vp) {
+            if ( vp->arch.hvm_vmx.vmcs == cvp ) {
+                __vmpclear(virt_to_maddr(vp->arch.hvm_vmx.vmcs));
+                __vmptrld(virt_to_maddr(vp->arch.hvm_vmx.vmcs));
+                vp->arch.hvm_vmx.launched = 0;
+                this_cpu(current_vmcs) = NULL;
+                kdbp("KDB:[%d] %d:%d current_vmcs:%lx/%lx flushed\n",
+                     ccpu, dp->domain_id, vp->vcpu_id, cvp, virt_to_maddr(cvp));
+            }
+        }
+    }
+}
+
+/*
+ * domid == 0 : display for all HVM domains  (dom0 is never an HVM domain)
+ * vcpu id == -1 : display all vcpuids
+ * PreCondition: all HVM cpus (including current cpu) have flushed VMCS
+ */
+void kdb_dump_vmcs(domid_t did, int vid)
+{
+    struct domain *dp;
+    struct vcpu *vp;
+    struct vmcs_struct  *vmcsp;
+    u64 addr = -1;
+
+    ASSERT(!local_irq_is_enabled());     /* kdb should always run disabled */
+    __vmptrst(&addr);
+
+    for_each_domain (dp) {
+        if ( !(is_hvm_or_hyb_domain(dp)) || dp->is_dying)
+            continue;
+        if (did != 0 && did != dp->domain_id)
+            continue;
+
+        for_each_vcpu (dp, vp) {
+            if (vid != -1 && vid != vp->vcpu_id)
+                continue;
+
+	    vmcsp = vp->arch.hvm_vmx.vmcs;
+            kdbp("VMCS %lx/%lx [domid:%d (%p)  vcpu:%d (%p)]:\n", vmcsp,
+	         virt_to_maddr(vmcsp), dp->domain_id, dp, vp->vcpu_id, vp);
+            __vmptrld(virt_to_maddr(vmcsp));
+            kdb_print_vmcs(vp);
+            __vmpclear(virt_to_maddr(vmcsp));
+            vp->arch.hvm_vmx.launched = 0;
+        }
+        kdbp("\n");
+    }
+    /* restore orig vmcs pointer for __vmreads in vmx_vmexit_handler() */
+    if (addr && addr != (u64)-1)
+        __vmptrld(addr);
+}
+#endif
 
 /*
  * Local variables:
diff -r 32034d1914a6 xen/arch/x86/hvm/vmx/vmx.c
--- a/xen/arch/x86/hvm/vmx/vmx.c	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/arch/x86/hvm/vmx/vmx.c	Wed Aug 29 14:39:57 2012 -0700
@@ -2183,11 +2183,14 @@
         printk("reason not known yet!");
         break;
     }
-
+#if defined(XEN_KDB_CONFIG)
+    kdbp("\n************* VMCS Area **************\n");
+    kdb_dump_vmcs(curr->domain->domain_id, curr->vcpu_id);
+#else
     printk("************* VMCS Area **************\n");
     vmcs_dump_vcpu(curr);
     printk("**************************************\n");
-
+#endif
     domain_crash(curr->domain);
 }
 
@@ -2415,6 +2418,12 @@
             write_debugreg(6, exit_qualification | 0xffff0ff0);
             if ( !v->domain->debugger_attached || cpu_has_monitor_trap_flag )
                 goto exit_and_crash;
+
+#if defined(XEN_KDB_CONFIG)
+            /* TRAP_debug: IP points correctly to next instr */
+            if (kdb_handle_trap_entry(vector, regs))
+                break;
+#endif
             domain_pause_for_debugger();
             break;
         case TRAP_int3: 
@@ -2423,6 +2432,13 @@
             if ( v->domain->debugger_attached )
             {
                 update_guest_eip(); /* Safe: INT3 */            
+#if defined(XEN_KDB_CONFIG)
+                /* vmcs.IP points to the bp, but kdb expects bp+1. The
+                 * update_guest_eip() above advances it to bp+1; this also
+                 * works for gdbsx. */
+                if (kdb_handle_trap_entry(vector, regs))
+                    break;
+#endif
                 current->arch.gdbsx_vcpu_event = TRAP_int3;
                 domain_pause_for_debugger();
                 break;
@@ -2707,6 +2723,10 @@
     case EXIT_REASON_MONITOR_TRAP_FLAG:
         v->arch.hvm_vmx.exec_control &= ~CPU_BASED_MONITOR_TRAP_FLAG;
         vmx_update_cpu_exec_control(v);
+#if defined(XEN_KDB_CONFIG)
+        if (kdb_handle_trap_entry(TRAP_debug, regs))
+            break;
+#endif
         if ( v->arch.hvm_vcpu.single_step ) {
           hvm_memory_event_single_step(regs->eip);
           if ( v->domain->debugger_attached )
diff -r 32034d1914a6 xen/arch/x86/irq.c
--- a/xen/arch/x86/irq.c	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/arch/x86/irq.c	Wed Aug 29 14:39:57 2012 -0700
@@ -2305,3 +2305,29 @@
     return is_hvm_domain(d) && pirq &&
            pirq->arch.hvm.emuirq != IRQ_UNBOUND; 
 }
+
+#ifdef XEN_KDB_CONFIG
+void kdb_prnt_guest_mapped_irqs(void)
+{
+    int irq, j;
+    char affstr[NR_CPUS/4+NR_CPUS/32+2];    /* courtesy dump_irqs() */
+
+    kdbp("irq  vec  aff  type  domid:mapped-pirq pairs  (all in decimal)\n");
+    for (irq=0; irq < nr_irqs; irq++) {
+        irq_desc_t  *dp = irq_to_desc(irq);
+        struct arch_irq_desc *archp = &dp->arch;
+        irq_guest_action_t *actp = (irq_guest_action_t *)dp->action;
+
+        if (!dp->handler || dp->handler == &no_irq_type || !(dp->status & IRQ_GUEST))
+            continue;
+
+        cpumask_scnprintf(affstr, sizeof(affstr), dp->affinity);
+        kdbp("[%3d] %3d %3s %-13s ", irq, archp->vector, affstr,
+             dp->handler->typename);
+        for (j=0; j < actp->nr_guests; j++)
+            kdbp("%03d:%04d ", actp->guest[j]->domain_id,
+                 domain_irq_to_pirq(actp->guest[j], irq));
+        kdbp("\n");
+    }
+}
+#endif
diff -r 32034d1914a6 xen/arch/x86/setup.c
--- a/xen/arch/x86/setup.c	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/arch/x86/setup.c	Wed Aug 29 14:39:57 2012 -0700
@@ -47,6 +47,13 @@
 #include <xen/cpu.h>
 #include <asm/nmi.h>
 
+#ifdef XEN_KDB_CONFIG
+#include <asm/debugger.h>
+
+int opt_earlykdb=0;
+boolean_param("earlykdb", opt_earlykdb);
+#endif
+
 /* opt_nosmp: If true, secondary processors are ignored. */
 static bool_t __initdata opt_nosmp;
 boolean_param("nosmp", opt_nosmp);
@@ -1242,6 +1249,11 @@
 
     trap_init();
 
+#ifdef XEN_KDB_CONFIG
+    kdb_init();
+    if (opt_earlykdb)
+        kdb_trap_immed(KDB_TRAP_NONFATAL);
+#endif
     rcu_init();
     
     early_time_init();
diff -r 32034d1914a6 xen/arch/x86/smp.c
--- a/xen/arch/x86/smp.c	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/arch/x86/smp.c	Wed Aug 29 14:39:57 2012 -0700
@@ -273,7 +273,7 @@
  * Structure and data for smp_call_function()/on_selected_cpus().
  */
 
-static void __smp_call_function_interrupt(void);
+static void __smp_call_function_interrupt(struct cpu_user_regs *regs);
 static DEFINE_SPINLOCK(call_lock);
 static struct call_data_struct {
     void (*func) (void *info);
@@ -321,7 +321,7 @@
     if ( cpumask_test_cpu(smp_processor_id(), &call_data.selected) )
     {
         local_irq_disable();
-        __smp_call_function_interrupt();
+        __smp_call_function_interrupt(NULL);
         local_irq_enable();
     }
 
@@ -390,7 +390,7 @@
     this_cpu(irq_count)++;
 }
 
-static void __smp_call_function_interrupt(void)
+static void __smp_call_function_interrupt(struct cpu_user_regs *regs)
 {
     void (*func)(void *info) = call_data.func;
     void *info = call_data.info;
@@ -411,6 +411,11 @@
     {
         mb();
         cpumask_clear_cpu(cpu, &call_data.selected);
+#ifdef XEN_KDB_CONFIG
+        if (info && !strcmp(info, "XENKDB")) {           /* called from kdb */
+                (*(void (*)(struct cpu_user_regs *, void *))func)(regs, info);
+        } else
+#endif
         (*func)(info);
     }
 
@@ -421,5 +426,5 @@
 {
     ack_APIC_irq();
     perfc_incr(ipis);
-    __smp_call_function_interrupt();
+    __smp_call_function_interrupt(regs);
 }
diff -r 32034d1914a6 xen/arch/x86/time.c
--- a/xen/arch/x86/time.c	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/arch/x86/time.c	Wed Aug 29 14:39:57 2012 -0700
@@ -2007,6 +2007,46 @@
 }
 __initcall(setup_dump_softtsc);
 
+#ifdef XEN_KDB_CONFIG
+void kdb_time_resume(int update_domains)
+{
+        s_time_t now;
+        int ccpu = smp_processor_id();
+        struct cpu_time *t = &this_cpu(cpu_time);
+
+        if (!plt_src.read_counter)            /* not initialized for earlykdb */
+                return;
+
+        if (update_domains) {
+                plt_stamp = plt_src.read_counter();
+                platform_timer_stamp = plt_stamp64;
+                platform_time_calibration();
+                do_settime(get_cmos_time(), 0, read_platform_stime());
+        }
+        if (local_irq_is_enabled())
+                kdbp("kdb BUG: enabled in time_resume(). ccpu:%d\n", ccpu);
+
+        rdtscll(t->local_tsc_stamp);
+        now = read_platform_stime();
+        t->stime_master_stamp = now;
+        t->stime_local_stamp  = now;
+
+        update_vcpu_system_time(current);
+
+        if (update_domains)
+                set_timer(&calibration_timer, NOW() + EPOCH);
+}
+
+void kdb_dump_time_pcpu(void)
+{
+    int cpu;
+    for_each_online_cpu(cpu) {
+        kdbp("[%d]: cpu_time: %016lx\n", cpu, &per_cpu(cpu_time, cpu));
+        kdbp("[%d]: cpu_calibration: %016lx\n", cpu, 
+             &per_cpu(cpu_calibration, cpu));
+    }
+}
+#endif
 /*
  * Local variables:
  * mode: C
diff -r 32034d1914a6 xen/arch/x86/traps.c
--- a/xen/arch/x86/traps.c	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/arch/x86/traps.c	Wed Aug 29 14:39:57 2012 -0700
@@ -225,7 +225,7 @@
 
 #else
 
-static void show_trace(struct cpu_user_regs *regs)
+void show_trace(struct cpu_user_regs *regs)
 {
     unsigned long *frame, next, addr, low, high;
 
@@ -3326,6 +3326,10 @@
     if ( nmi_callback(regs, cpu) )
         return;
 
+#ifdef XEN_KDB_CONFIG
+    if (kdb_enabled && kdb_handle_trap_entry(TRAP_nmi, regs))
+        return;
+#endif
     if ( nmi_watchdog )
         nmi_watchdog_tick(regs);
 
diff -r 32034d1914a6 xen/arch/x86/x86_64/compat/entry.S
--- a/xen/arch/x86/x86_64/compat/entry.S	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/arch/x86/x86_64/compat/entry.S	Wed Aug 29 14:39:57 2012 -0700
@@ -95,6 +95,10 @@
 /* %rbx: struct vcpu */
 ENTRY(compat_test_all_events)
         cli                             # tests must not race interrupts
+#ifdef XEN_KDB_CONFIG
+        testl $1, kdb_session_begun(%rip)
+        jnz   compat_restore_all_guest
+#endif
 /*compat_test_softirqs:*/
         movl  VCPU_processor(%rbx),%eax
         shlq  $IRQSTAT_shift,%rax
diff -r 32034d1914a6 xen/arch/x86/x86_64/entry.S
--- a/xen/arch/x86/x86_64/entry.S	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/arch/x86/x86_64/entry.S	Wed Aug 29 14:39:57 2012 -0700
@@ -184,6 +184,10 @@
 /* %rbx: struct vcpu */
 test_all_events:
         cli                             # tests must not race interrupts
+#ifdef XEN_KDB_CONFIG                   /* 64bit dom0 will resume here */
+        testl $1, kdb_session_begun(%rip)
+        jnz   restore_all_guest
+#endif
 /*test_softirqs:*/  
         movl  VCPU_processor(%rbx),%eax
         shl   $IRQSTAT_shift,%rax
@@ -546,6 +550,13 @@
 
 ENTRY(int3)
         pushq $0
+#ifdef XEN_KDB_CONFIG
+        pushq %rax
+        GET_CPUINFO_FIELD(CPUINFO_processor_id, %rax)
+        movq  (%rax), %rax
+        lock  bts %rax, kdb_cpu_traps(%rip)
+        popq  %rax
+#endif
         movl  $TRAP_int3,4(%rsp)
         jmp   handle_exception
 
diff -r 32034d1914a6 xen/common/domain.c
--- a/xen/common/domain.c	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/common/domain.c	Wed Aug 29 14:39:57 2012 -0700
@@ -530,6 +530,14 @@
 {
     struct vcpu *v;
 
+#ifdef XEN_KDB_CONFIG
+    if (reason == SHUTDOWN_crash) {
+        if ( IS_PRIV(d) )
+            kdb_trap_immed(KDB_TRAP_FATAL);
+        else
+            kdb_trap_immed(KDB_TRAP_NONFATAL);
+    }
+#endif
     spin_lock(&d->shutdown_lock);
 
     if ( d->shutdown_code == -1 )
@@ -624,7 +632,9 @@
     for_each_vcpu ( d, v )
         vcpu_sleep_nosync(v);
 
-    send_global_virq(VIRQ_DEBUGGER);
+    /* send VIRQ_DEBUGGER to guest only if gdbsx_vcpu_event is not active */
+    if (current->arch.gdbsx_vcpu_event == 0)
+        send_global_virq(VIRQ_DEBUGGER);
 }
 
 /* Complete domain destroy after RCU readers are not holding old references. */
diff -r 32034d1914a6 xen/common/sched_credit.c
--- a/xen/common/sched_credit.c	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/common/sched_credit.c	Wed Aug 29 14:39:57 2012 -0700
@@ -1475,6 +1475,33 @@
     printk("\n");
 }
 
+#ifdef XEN_KDB_CONFIG
+static void kdb_csched_dump(int cpu)
+{
+    struct csched_pcpu *pcpup = CSCHED_PCPU(cpu);
+    struct vcpu *scurrvp = (CSCHED_VCPU(current))->vcpu;
+    struct list_head *tmp, *runq = RUNQ(cpu);
+
+    kdbp("    csched_pcpu: %p\n", pcpup);
+    kdbp("    curr csched:%p {vcpu:%p id:%d domid:%d}\n", (current)->sched_priv,
+         scurrvp, scurrvp->vcpu_id, scurrvp->domain->domain_id);
+    kdbp("    runq:\n");
+
+    /* runq_elem.next is first in the struct, so cast list ptrs directly */
+    if (offsetof(struct csched_vcpu, runq_elem.next) != 0) {
+        kdbp("next is not first in struct csched_vcpu. please fix me\n");
+        return;        /* otherwise the loop below would crash */
+    }
+    for (tmp = runq->next; tmp != runq; tmp = tmp->next) {
+
+        struct csched_vcpu *csp = (struct csched_vcpu *)tmp;
+        struct vcpu *vp = csp->vcpu;
+        kdbp("      csp:%p pri:%02d vcpu: {p:%p id:%d domid:%d}\n", csp,
+             csp->pri, vp, vp->vcpu_id, vp->domain->domain_id);
+    }
+}
+#endif
+
 static void
 csched_dump_pcpu(const struct scheduler *ops, int cpu)
 {
@@ -1484,6 +1511,10 @@
     int loop;
 #define cpustr keyhandler_scratch
 
+#ifdef XEN_KDB_CONFIG
+    kdb_csched_dump(cpu);
+    return;
+#endif
     spc = CSCHED_PCPU(cpu);
     runq = &spc->runq;
 
diff -r 32034d1914a6 xen/common/schedule.c
--- a/xen/common/schedule.c	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/common/schedule.c	Wed Aug 29 14:39:57 2012 -0700
@@ -1454,6 +1454,25 @@
     schedule();
 }
 
+#ifdef XEN_KDB_CONFIG
+void kdb_print_sched_info(void)
+{
+    int cpu;
+
+    kdbp("Scheduler: name:%s opt_name:%s id:%d\n", ops.name, ops.opt_name,
+         ops.sched_id);
+    kdbp("per cpu schedule_data:\n");
+    for_each_online_cpu(cpu) {
+        struct schedule_data *p =  &per_cpu(schedule_data, cpu);
+        kdbp("  cpu:%d  &(per cpu)schedule_data:%p\n", cpu, p);
+        kdbp("         curr:%p sched_priv:%p\n", p->curr, p->sched_priv);
+        kdbp("\n");
+        ops.dump_cpu_state(&ops, cpu);
+        kdbp("\n");
+    }
+}
+#endif
+
 #ifdef CONFIG_COMPAT
 #include "compat/schedule.c"
 #endif
diff -r 32034d1914a6 xen/common/symbols.c
--- a/xen/common/symbols.c	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/common/symbols.c	Wed Aug 29 14:39:57 2012 -0700
@@ -168,3 +168,21 @@
 
     spin_unlock_irqrestore(&lock, flags);
 }
+
+#ifdef XEN_KDB_CONFIG
+/*
+ * Given a symbol, return its address.
+ */
+unsigned long address_lookup(char *symp)
+{
+    int i, off = 0;
+    char namebuf[KSYM_NAME_LEN+1];
+
+    for (i=0; i < symbols_num_syms; i++) {
+        off = symbols_expand_symbol(off, namebuf);
+        if (strcmp(namebuf, symp) == 0)                  /* found it */
+            return symbols_address(i);
+    }
+    return 0;
+}
+#endif
diff -r 32034d1914a6 xen/common/timer.c
--- a/xen/common/timer.c	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/common/timer.c	Wed Aug 29 14:39:57 2012 -0700
@@ -643,6 +643,48 @@
     register_keyhandler('a', &dump_timerq_keyhandler);
 }
 
+#ifdef XEN_KDB_CONFIG
+#include <xen/symbols.h>
+void kdb_dump_timer_queues(void)
+{
+    struct timer  *t;
+    struct timers *ts;
+    unsigned long sz, offs;
+    char buf[KSYM_NAME_LEN+1];
+    int cpu, j;
+    u64 tsc;
+
+    for_each_online_cpu( cpu )
+    {
+        ts = &per_cpu(timers, cpu);
+        kdbp("CPU[%02d]:", cpu);
+
+        if (cpu == smp_processor_id()) {
+            s_time_t now = NOW();
+            rdtscll(tsc);
+            kdbp("NOW:0x%08x%08x TSC:0x%016lx\n", (u32)(now>>32),(u32)now, tsc);
+        } else
+            kdbp("\n");
+
+        /* timers in the heap */
+        for ( j = 1; j <= GET_HEAP_SIZE(ts->heap); j++ ) {
+            t = ts->heap[j];
+            kdbp("  %d: exp=0x%08x%08x fn:%s data:%p\n",
+                 j, (u32)(t->expires>>32), (u32)t->expires,
+                 symbols_lookup((unsigned long)t->function, &sz, &offs, buf),
+                 t->data);
+        }
+        /* timers on the link list */
+        for ( t = ts->list, j = 0; t != NULL; t = t->list_next, j++ ) {
+            kdbp(" L%d: exp=0x%08x%08x fn:%s data:%p\n",
+                 j, (u32)(t->expires>>32), (u32)t->expires,
+                 symbols_lookup((unsigned long)t->function, &sz, &offs, buf),
+                 t->data);
+        }
+    }
+}
+#endif
+
 /*
  * Local variables:
  * mode: C
diff -r 32034d1914a6 xen/drivers/char/console.c
--- a/xen/drivers/char/console.c	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/drivers/char/console.c	Wed Aug 29 14:39:57 2012 -0700
@@ -295,6 +295,21 @@
 {
     static int switch_code_count = 0;
 
+#ifdef XEN_KDB_CONFIG
+    /* if ctrl-\ pressed and kdb handles it, return */
+    if (kdb_enabled && c == 0x1c) {
+        if (!kdb_session_begun) {
+            if (kdb_keyboard(regs))
+                return;
+        } else {
+            kdbp("Sorry, a kdb session is already active. Please try again.\n");
+            return;
+        }
+    }
+    if (kdb_session_begun)      /* kdb should already be polling */
+        return;                 /* swallow chars so they don't buffer in dom0 */
+#endif
+
     if ( switch_code && (c == switch_code) )
     {
         /* We eat CTRL-<switch_char> in groups of 3 to switch console input. */
@@ -710,6 +725,18 @@
     atomic_dec(&print_everything);
 }
 
+#ifdef XEN_KDB_CONFIG
+void console_putc(char c)
+{
+    serial_putc(sercon_handle, c);
+}
+
+int console_getc(void)
+{
+    return serial_getc(sercon_handle);
+}
+#endif
+
 /*
  * printk rate limiting, lifted from Linux.
  *
diff -r 32034d1914a6 xen/include/asm-x86/debugger.h
--- a/xen/include/asm-x86/debugger.h	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/include/asm-x86/debugger.h	Wed Aug 29 14:39:57 2012 -0700
@@ -39,7 +39,11 @@
 #define DEBUGGER_trap_fatal(_v, _r) \
     if ( debugger_trap_fatal(_v, _r) ) return;
 
-#if defined(CRASH_DEBUG)
+#if defined(XEN_KDB_CONFIG)
+#define debugger_trap_immediate() kdb_trap_immed(KDB_TRAP_NONFATAL)
+#define debugger_trap_fatal(_v, _r) kdb_trap_fatal(_v, _r)
+
+#elif defined(CRASH_DEBUG)
 
 #include <xen/gdbstub.h>
 
@@ -70,6 +74,10 @@
 {
     struct vcpu *v = current;
 
+#ifdef XEN_KDB_CONFIG
+    if (kdb_handle_trap_entry(vector, regs))
+        return 1;
+#endif
     if ( guest_kernel_mode(v, regs) && v->domain->debugger_attached &&
          ((vector == TRAP_int3) || (vector == TRAP_debug)) )
     {
diff -r 32034d1914a6 xen/include/xen/lib.h
--- a/xen/include/xen/lib.h	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/include/xen/lib.h	Wed Aug 29 14:39:57 2012 -0700
@@ -116,4 +116,7 @@
 struct cpu_user_regs;
 void dump_execstate(struct cpu_user_regs *);
 
+#ifdef XEN_KDB_CONFIG
+#include "../../kdb/include/kdb_extern.h"
+#endif
 #endif /* __LIB_H__ */
diff -r 32034d1914a6 xen/include/xen/sched.h
--- a/xen/include/xen/sched.h	Thu Jun 07 19:46:57 2012 +0100
+++ b/xen/include/xen/sched.h	Wed Aug 29 14:39:57 2012 -0700
@@ -576,11 +576,14 @@
 unsigned long hypercall_create_continuation(
     unsigned int op, const char *format, ...);
 void hypercall_cancel_continuation(void);
-
+#ifdef XEN_KDB_CONFIG
+#define hypercall_preempt_check() (0)
+#else
 #define hypercall_preempt_check() (unlikely(    \
         softirq_pending(smp_processor_id()) |   \
         local_events_need_delivery()            \
     ))
+#endif
 
 extern struct domain *domain_list;
 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 30 18:29:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Aug 2012 18:29:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T79UO-0000OZ-U6; Thu, 30 Aug 2012 18:29:12 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andreslc@gridcentric.ca>) id 1T79UN-0000OT-Qv
	for xen-devel@lists.xensource.com; Thu, 30 Aug 2012 18:29:12 +0000
Received: from [85.158.143.99:38842] by server-2.bemta-4.messagelabs.com id
	12/06-21239-7F0BF305; Thu, 30 Aug 2012 18:29:11 +0000
X-Env-Sender: andreslc@gridcentric.ca
X-Msg-Ref: server-6.tower-216.messagelabs.com!1346351347!20658986!1
X-Originating-IP: [209.85.210.171]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11814 invoked from network); 30 Aug 2012 18:29:08 -0000
Received: from mail-iy0-f171.google.com (HELO mail-iy0-f171.google.com)
	(209.85.210.171)
	by server-6.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Aug 2012 18:29:08 -0000
Received: by iabz25 with SMTP id z25so4637386iab.30
	for <xen-devel@lists.xensource.com>;
	Thu, 30 Aug 2012 11:29:07 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=subject:mime-version:content-type:from:in-reply-to:date:cc
	:content-transfer-encoding:message-id:references:to:x-mailer
	:x-gm-message-state;
	bh=pXN+T93fsehErRqa2BDr0e019wt2TqVN7F3eo9hjMfE=;
	b=IR3AiLcUdBA13J8KfGy6FDzlPkeUvucK1bXihdRvV7MdyV8GEAZ9I9yHplby9SzuvB
	7dNhh9R7HC95FBfg0Kw3DHMMW3MmhbUiCn74ul3BAtyGVCMZ3Ihyz12S3fcQcoQM4eub
	MtKa1jUaHELWsp93FVs0LaEXsdF2rqgOB5G5TFP8kEMBpejenOYoprH55iX9A06lJaa0
	0vgpvr+HmCksSIRwl9b5tkbpviWIKOzkHQVECZj8xsCd2Lc59jegDScr2RoDX+ScMOE+
	MQleLa5Ii7goMuOHPX5GQ6HJrxYyOfUEwvvRh+YK6/VW2PTbPlAZJI5Zyo/D2ddB+7t5
	eYPw==
Received: by 10.50.173.2 with SMTP id bg2mr1939504igc.1.1346351347378;
	Thu, 30 Aug 2012 11:29:07 -0700 (PDT)
Received: from [192.168.7.210] ([206.223.182.18])
	by mx.google.com with ESMTPS id ch4sm1503586igb.2.2012.08.30.11.29.05
	(version=TLSv1/SSLv3 cipher=OTHER);
	Thu, 30 Aug 2012 11:29:06 -0700 (PDT)
Mime-Version: 1.0 (Apple Message framework v1278)
From: Andres Lagar-Cavilla <andreslc@gridcentric.ca>
In-Reply-To: <503F9D14.8000600@citrix.com>
Date: Thu, 30 Aug 2012 14:29:13 -0400
Message-Id: <165680AD-2970-4A8D-BEC0-B02830F8A97F@gridcentric.ca>
References: <1346331492-15027-1-git-send-email-david.vrabel@citrix.com>
	<1346331492-15027-3-git-send-email-david.vrabel@citrix.com>
	<12E7F3C7-86B7-4B6B-8F53-23CCFCEF80FB@gridcentric.ca>
	<503F9D14.8000600@citrix.com>
To: David Vrabel <david.vrabel@citrix.com>
X-Mailer: Apple Mail (2.1278)
X-Gm-Message-State: ALoCoQkw3DkL8zlMrngUP5KRn1yWUGb//FwgKgAdno4v7t8OCeAzLYZmR7CYDfJHP0JwnqpwE89U
Cc: Andres Lagar-Cavilla <andreslc@gridcentric.ca>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [PATCH 2/2] xen/privcmd: add PRIVCMD_MMAPBATCH_V2
	ioctl
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On Aug 30, 2012, at 1:04 PM, David Vrabel wrote:

> On 30/08/12 17:41, Andres Lagar-Cavilla wrote:
>> David,
>> The patch looks functionally ok, but I still have two lingering concerns:
>> - the hideous casting of mfn into err
> 
> I considered a couple of other approaches (unions, extending
> gather_array() to add gaps for the int return). They were all worse.
> 
> I also tried your proposal here but it doesn't work. See below.
> 
>> - why not signal paged out frames for V1
> 
> Because V1 is broken on 64bit and there doesn't seem to be any point in
> changing it given that libxc won't call it if V2 is present.
> 
>> Rather than keep writing English, I wrote some C :)
>> 
>> And took the liberty to include your signed-off. David & Konrad, let
>> me know what you think, and once we settle on either version we can move
>> into unit testing this.
> [...]
>> static int mmap_batch_fn(void *data, void *state)
>> {
>>        xen_pfn_t *mfnp = data;
>>        struct mmap_batch_state *st = state;
>> +       int ret;
>> +
>> +       ret = xen_remap_domain_mfn_range(st->vma, st->va & PAGE_MASK, *mfnp, 1,
>> +                                        st->vma->vm_page_prot, st->domain);
>> +       if (ret < 0) {
>> +               /*
>> +                * V2 provides a user-space (pre-checked for access) user_err
>> +                * pointer, in which we store the individual map error codes.
>> +                *
>> +                * V1 encodes the error codes in the 32bit top nibble of the
>> +                * mfn (with its known limitations vis-a-vis 64 bit callers).
>> +                *
>> +                * In either case, global state.err is zero unless one or more
>> +                * individual maps fail with -ENOENT, in which case it is -ENOENT.
>> +                *
>> +                */
>> +               if (st->user_err)
>> +                       BUG_ON(__put_user(ret, st->user_err++));
> 
> You can't access user space pages here while holding
> current->mm->mmap_sem.  I tried this and it would sometimes deadlock in
> the page fault handler.
> 
> access_ok() only checks if the pointer is in the user space virtual
> address space - not that a valid mapping exists and is writable.  So
> BUG_ON(__put_user()) should not be done.

Very true. Thanks for the pointer. Clearly the reason for the gather_array/traverse_pages structure.
Re-posting my version
Andres
> 
> David


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 30 18:29:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Aug 2012 18:29:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T79UO-0000OZ-U6; Thu, 30 Aug 2012 18:29:12 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andreslc@gridcentric.ca>) id 1T79UN-0000OT-Qv
	for xen-devel@lists.xensource.com; Thu, 30 Aug 2012 18:29:12 +0000
Received: from [85.158.143.99:38842] by server-2.bemta-4.messagelabs.com id
	12/06-21239-7F0BF305; Thu, 30 Aug 2012 18:29:11 +0000
X-Env-Sender: andreslc@gridcentric.ca
X-Msg-Ref: server-6.tower-216.messagelabs.com!1346351347!20658986!1
X-Originating-IP: [209.85.210.171]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11814 invoked from network); 30 Aug 2012 18:29:08 -0000
Received: from mail-iy0-f171.google.com (HELO mail-iy0-f171.google.com)
	(209.85.210.171)
	by server-6.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Aug 2012 18:29:08 -0000
Received: by iabz25 with SMTP id z25so4637386iab.30
	for <xen-devel@lists.xensource.com>;
	Thu, 30 Aug 2012 11:29:07 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=subject:mime-version:content-type:from:in-reply-to:date:cc
	:content-transfer-encoding:message-id:references:to:x-mailer
	:x-gm-message-state;
	bh=pXN+T93fsehErRqa2BDr0e019wt2TqVN7F3eo9hjMfE=;
	b=IR3AiLcUdBA13J8KfGy6FDzlPkeUvucK1bXihdRvV7MdyV8GEAZ9I9yHplby9SzuvB
	7dNhh9R7HC95FBfg0Kw3DHMMW3MmhbUiCn74ul3BAtyGVCMZ3Ihyz12S3fcQcoQM4eub
	MtKa1jUaHELWsp93FVs0LaEXsdF2rqgOB5G5TFP8kEMBpejenOYoprH55iX9A06lJaa0
	0vgpvr+HmCksSIRwl9b5tkbpviWIKOzkHQVECZj8xsCd2Lc59jegDScr2RoDX+ScMOE+
	MQleLa5Ii7goMuOHPX5GQ6HJrxYyOfUEwvvRh+YK6/VW2PTbPlAZJI5Zyo/D2ddB+7t5
	eYPw==
Received: by 10.50.173.2 with SMTP id bg2mr1939504igc.1.1346351347378;
	Thu, 30 Aug 2012 11:29:07 -0700 (PDT)
Received: from [192.168.7.210] ([206.223.182.18])
	by mx.google.com with ESMTPS id ch4sm1503586igb.2.2012.08.30.11.29.05
	(version=TLSv1/SSLv3 cipher=OTHER);
	Thu, 30 Aug 2012 11:29:06 -0700 (PDT)
Mime-Version: 1.0 (Apple Message framework v1278)
From: Andres Lagar-Cavilla <andreslc@gridcentric.ca>
In-Reply-To: <503F9D14.8000600@citrix.com>
Date: Thu, 30 Aug 2012 14:29:13 -0400
Message-Id: <165680AD-2970-4A8D-BEC0-B02830F8A97F@gridcentric.ca>
References: <1346331492-15027-1-git-send-email-david.vrabel@citrix.com>
	<1346331492-15027-3-git-send-email-david.vrabel@citrix.com>
	<12E7F3C7-86B7-4B6B-8F53-23CCFCEF80FB@gridcentric.ca>
	<503F9D14.8000600@citrix.com>
To: David Vrabel <david.vrabel@citrix.com>
X-Mailer: Apple Mail (2.1278)
X-Gm-Message-State: ALoCoQkw3DkL8zlMrngUP5KRn1yWUGb//FwgKgAdno4v7t8OCeAzLYZmR7CYDfJHP0JwnqpwE89U
Cc: Andres Lagar-Cavilla <andreslc@gridcentric.ca>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [PATCH 2/2] xen/privcmd: add PRIVCMD_MMAPBATCH_V2
	ioctl
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On Aug 30, 2012, at 1:04 PM, David Vrabel wrote:

> On 30/08/12 17:41, Andres Lagar-Cavilla wrote:
>> David,
>> The patch looks functionally ok, but I still have two lingering concerns:
>> - the hideous casting of mfn into err
> 
> I considered a couple of other approaches (unions, extending
> gather_array() to add gaps for the int return). They were all worse.
> 
> I also tried your proposal here but it doesn't work. See below.
> 
>> - why not signal paged out frames for V1
> 
> Because V1 is broken on 64-bit, and there doesn't seem to be any point in
> changing it given that libxc won't call it if V2 is present.
> 
>> Rather than keep writing English, I wrote some C :)
>> 
>> I also took the liberty of including your signed-off-by. David & Konrad, let
>> me know what you think, and once we settle on either version we can move
>> on to unit testing this.
> [...]
>> static int mmap_batch_fn(void *data, void *state)
>> {
>>        xen_pfn_t *mfnp = data;
>>        struct mmap_batch_state *st = state;
>> +       int ret;
>> +
>> +       ret = xen_remap_domain_mfn_range(st->vma, st->va & PAGE_MASK, *mfnp, 1,
>> +                                        st->vma->vm_page_prot, st->domain);
>> +       if (ret < 0) {
>> +               /*
>> +                * V2 provides a user-space (pre-checked for access) user_err
>> +                * pointer, in which we store the individual map error codes.
>> +                *
>> +                * V1 encodes the error codes in the 32bit top nibble of the
>> +                * mfn (with its known limitations vis-a-vis 64 bit callers).
>> +                *
>> +                * In either case, global state.err is zero unless one or more
>> +                * individual maps fail with -ENOENT, in which case it is -ENOENT.
>> +                *
>> +                */
>> +               if (st->user_err)
>> +                       BUG_ON(__put_user(ret, st->user_err++));
> 
> You can't access user space pages here while holding
> current->mm->mmap_sem.  I tried this and it would sometimes deadlock in
> the page fault handler.
> 
> access_ok() only checks if the pointer is in the user space virtual
> address space - not that a valid mapping exists and is writable.  So
> BUG_ON(__put_user()) should not be done.

Very true. Thanks for the pointer. That is clearly the reason for the gather_array()/traverse_pages() structure.
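For clarity, here is a minimal userspace sketch of that two-pass shape (the function names are illustrative, not the kernel code): the first pass, which would run while mmap_sem is held, records each frame's result in a kernel-side scratch array and tracks the global tristate; the second pass, run only after the lock is dropped, copies those results out to the caller-visible buffer.

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* First pass: record per-frame results in a scratch array while "locked".
 * Mirrors the global_error tristate: 0 if all frames mapped, -ENOENT if
 * any frame was paged out, 1 if there were only other failures. */
static int map_frames(const int *frame_status, int *err_array, size_t n)
{
	int global_error = 0;

	for (size_t i = 0; i < n; i++) {
		err_array[i] = frame_status[i];  /* stand-in for the remap result */
		if (frame_status[i] == -ENOENT)
			global_error = -ENOENT;
		else if (frame_status[i] < 0 && global_error == 0)
			global_error = 1;
	}
	return global_error;
}

/* Second pass: only now, with the lock dropped, write the errors to the
 * caller-visible buffer (the kernel would use __put_user() here). */
static void return_errors(const int *err_array, int *user_err, size_t n)
{
	for (size_t i = 0; i < n; i++)
		user_err[i] = err_array[i];
}
```

The point is that no user-space page is touched between the two passes' boundary, so a fault taken while writing errors back can never deadlock against mmap_sem.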
Re-posting my version
Andres
> 
> David


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 30 18:32:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Aug 2012 18:32:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T79X6-0000Ur-Fy; Thu, 30 Aug 2012 18:32:00 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andreslc@gridcentric.ca>) id 1T79X5-0000Uh-6Z
	for xen-devel@lists.xensource.com; Thu, 30 Aug 2012 18:31:59 +0000
Received: from [85.158.138.51:28515] by server-7.bemta-3.messagelabs.com id
	9B/50-32000-E91BF305; Thu, 30 Aug 2012 18:31:58 +0000
X-Env-Sender: andreslc@gridcentric.ca
X-Msg-Ref: server-8.tower-174.messagelabs.com!1346351515!27720088!1
X-Originating-IP: [209.85.210.171]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14381 invoked from network); 30 Aug 2012 18:31:56 -0000
Received: from mail-iy0-f171.google.com (HELO mail-iy0-f171.google.com)
	(209.85.210.171)
	by server-8.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Aug 2012 18:31:56 -0000
Received: by iabz25 with SMTP id z25so4642355iab.30
	for <xen-devel@lists.xensource.com>;
	Thu, 30 Aug 2012 11:31:55 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=subject:mime-version:content-type:from:in-reply-to:date:cc
	:content-transfer-encoding:message-id:references:to:x-mailer
	:x-gm-message-state;
	bh=jziFlUI8XzKhkOWAuWkrbRImV54D4WYYn1I/QOEnj9o=;
	b=kGEjLjPFW7hN2WH282/GKnIDhiRGLoLuJershb4KbrP3+JhZa8a6xgkzRl8m2cmxWw
	k7Oos5N9ajHRzaRAAsj5U4rQbDa5Os/nT2E8kf893Q59FKMvdXs+oY8brExkuUygnDan
	AAu9fIhQ8sF5Okr/97euXjvOjoW54Bt2E829YDYeb+pDQ97YLkGnNXz9Phty/sd0p990
	AcSDO0k3Zpvr9kX5Xj+db/JwfbI8lEBqBdQiljlP19ltEkcFLV5UR6EayKPfx2sYH1FC
	THvji4iBbppUQ7xaKtXGWjjZM7ikOZkWA4ph7uLXcuCnviReRSAFWipXBPdOogSPeE9z
	HZpw==
Received: by 10.50.212.10 with SMTP id ng10mr1787904igc.35.1346351514741;
	Thu, 30 Aug 2012 11:31:54 -0700 (PDT)
Received: from [192.168.7.210] ([206.223.182.18])
	by mx.google.com with ESMTPS id ud8sm2609696igb.4.2012.08.30.11.31.53
	(version=TLSv1/SSLv3 cipher=OTHER);
	Thu, 30 Aug 2012 11:31:53 -0700 (PDT)
Mime-Version: 1.0 (Apple Message framework v1278)
From: Andres Lagar-Cavilla <andreslc@gridcentric.ca>
In-Reply-To: <1346331492-15027-3-git-send-email-david.vrabel@citrix.com>
Date: Thu, 30 Aug 2012 14:32:00 -0400
Message-Id: <DDCE284F-9506-4EF4-88BB-CF6A04D98A2F@gridcentric.ca>
References: <1346331492-15027-1-git-send-email-david.vrabel@citrix.com>
	<1346331492-15027-3-git-send-email-david.vrabel@citrix.com>
To: David Vrabel <david.vrabel@citrix.com>
X-Mailer: Apple Mail (2.1278)
X-Gm-Message-State: ALoCoQlYD6R9KcxZ7vPeZ8yzO5xaTa23rbfytEsP0BkjmMoRCNJ2f3PsBDahVfhSUPZB32UERiyb
Cc: xen-devel@lists.xensource.com,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [PATCH 2/2] xen/privcmd: add PRIVCMD_MMAPBATCH_V2
	ioctl
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Second repost of my version, heavily based on David's. 

Complementary to this patch, I intend to add PRIVCMD_MMAPBATCH_*_ERROR to the libxc header files in the xen tree, and to remove XEN_DOMCTL_PFINFO_PAGEDTAB from domctl.h.

Please review. Thanks
Andres

commit 3f40e8d79b7e032527ee207a97499ddbc81ca12b
Author: Andres Lagar-Cavilla <andres@lagarcavilla.org>
Date:   Thu Aug 30 12:23:33 2012 -0400

    xen/privcmd: add PRIVCMD_MMAPBATCH_V2 ioctl
    
    PRIVCMD_MMAPBATCH_V2 extends PRIVCMD_MMAPBATCH with an additional
    field for reporting the error code for every frame that could not be
    mapped.  libxc prefers PRIVCMD_MMAPBATCH_V2 over PRIVCMD_MMAPBATCH.
    
    Also expand PRIVCMD_MMAPBATCH to return appropriate error-encoding top nibble
    in the mfn array.
    
    Signed-off-by: David Vrabel <david.vrabel@citrix.com>
    Signed-off-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>

diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
index 85226cb..5a03dc1 100644
--- a/drivers/xen/privcmd.c
+++ b/drivers/xen/privcmd.c
@@ -76,7 +76,7 @@ static void free_page_list(struct list_head *pages)
  */
 static int gather_array(struct list_head *pagelist,
 			unsigned nelem, size_t size,
-			void __user *data)
+			const void __user *data)
 {
 	unsigned pageidx;
 	void *pagedata;
@@ -246,20 +246,42 @@ struct mmap_batch_state {
 	domid_t domain;
 	unsigned long va;
 	struct vm_area_struct *vma;
-	int err;
-
-	xen_pfn_t __user *user;
+	/* A tristate: 
+	 *      0 for no errors
+	 *      1 if at least one error has happened (and no
+	 *          -ENOENT errors have happened)
+	 *      -ENOENT if at least 1 -ENOENT has happened.
+	 */
+	int global_error;
+	/* An array for individual errors */
+	int *err;
+
+	/* User-space pointers to store errors in the second pass. */
+	xen_pfn_t __user *user_mfn;
+	int __user *user_err;
 };
 
 static int mmap_batch_fn(void *data, void *state)
 {
 	xen_pfn_t *mfnp = data;
 	struct mmap_batch_state *st = state;
+	int ret;
 
-	if (xen_remap_domain_mfn_range(st->vma, st->va & PAGE_MASK, *mfnp, 1,
-				       st->vma->vm_page_prot, st->domain) < 0) {
-		*mfnp |= 0xf0000000U;
-		st->err++;
+	ret = xen_remap_domain_mfn_range(st->vma, st->va & PAGE_MASK, *mfnp, 1,
+					 st->vma->vm_page_prot, st->domain);
+
+	/* Store error code for second pass. */
+	*(st->err++) = ret;
+
+	/* And see if it affects the global global_error. */
+	if (ret < 0) {
+		if (ret == -ENOENT)
+			st->global_error = -ENOENT;
+		else {
+			/* Record that at least one error has happened. */
+			if (st->global_error == 0)
+				st->global_error = 1;
+		}
 	}
 	st->va += PAGE_SIZE;
 
@@ -270,37 +292,76 @@ static int mmap_return_errors(void *data, void *state)
 {
 	xen_pfn_t *mfnp = data;
 	struct mmap_batch_state *st = state;
-
-	return put_user(*mfnp, st->user++);
+	int err = *(st->err++);
+
+	/*
+	 * V2 provides a user-space user_err pointer, in which we store the
+	 * individual map error codes.
+	 */
+	if (st->user_err)
+		return __put_user(err, st->user_err++);
+
+	/*
+	 * V1 encodes the error codes in the 32bit top nibble of the 
+	 * mfn (with its known limitations vis-a-vis 64 bit callers).
+	 */
+	*mfnp |= (err == -ENOENT) ?
+				PRIVCMD_MMAPBATCH_PAGED_ERROR :
+				PRIVCMD_MMAPBATCH_MFN_ERROR;
+	return __put_user(*mfnp, st->user_mfn++);
 }
 
 static struct vm_operations_struct privcmd_vm_ops;
 
-static long privcmd_ioctl_mmap_batch(void __user *udata)
+static long privcmd_ioctl_mmap_batch(void __user *udata, int version)
 {
 	int ret;
-	struct privcmd_mmapbatch m;
+	struct privcmd_mmapbatch_v2 m;
 	struct mm_struct *mm = current->mm;
 	struct vm_area_struct *vma;
 	unsigned long nr_pages;
 	LIST_HEAD(pagelist);
+	int *err_array;
 	struct mmap_batch_state state;
 
 	if (!xen_initial_domain())
 		return -EPERM;
 
-	if (copy_from_user(&m, udata, sizeof(m)))
-		return -EFAULT;
+	switch (version) {
+	case 1:
+		if (copy_from_user(&m, udata, sizeof(struct privcmd_mmapbatch)))
+			return -EFAULT;
+		/* Returns per-frame error in m.arr. */
+		m.err = NULL;
+		if (!access_ok(VERIFY_WRITE, m.arr, m.num * sizeof(*m.arr)))
+			return -EFAULT;
+		break;
+	case 2:
+		if (copy_from_user(&m, udata, sizeof(struct privcmd_mmapbatch_v2)))
+			return -EFAULT;
+		/* Returns per-frame error code in m.err. */
+		if (!access_ok(VERIFY_WRITE, m.err, m.num * (sizeof(*m.err))))
+			return -EFAULT;
+		break;
+	default:
+		return -EINVAL;
+	}
 
 	nr_pages = m.num;
 	if ((m.num <= 0) || (nr_pages > (LONG_MAX >> PAGE_SHIFT)))
 		return -EINVAL;
 
-	ret = gather_array(&pagelist, m.num, sizeof(xen_pfn_t),
-			   m.arr);
+	ret = gather_array(&pagelist, m.num, sizeof(xen_pfn_t), m.arr);
 
 	if (ret || list_empty(&pagelist))
-		goto out;
+		goto out_no_err_array;
+
+	err_array = kmalloc_array(m.num, sizeof(int), GFP_KERNEL);
+	if (err_array == NULL)
+	{
+		ret = -ENOMEM;
+		goto out_no_err_array;
+	}
 
 	down_write(&mm->mmap_sem);
 
@@ -315,24 +376,38 @@ static long privcmd_ioctl_mmap_batch(void __user *udata)
 		goto out;
 	}
 
-	state.domain = m.dom;
-	state.vma = vma;
-	state.va = m.addr;
-	state.err = 0;
+	state.domain        = m.dom;
+	state.vma           = vma;
+	state.va            = m.addr;
+	state.global_error  = 0;
+	state.err           = err_array;
 
-	ret = traverse_pages(m.num, sizeof(xen_pfn_t),
-			     &pagelist, mmap_batch_fn, &state);
+	/* mmap_batch_fn guarantees ret == 0 */
+	BUG_ON(traverse_pages(m.num, sizeof(xen_pfn_t),
+			     &pagelist, mmap_batch_fn, &state));
 
 	up_write(&mm->mmap_sem);
 
-	if (state.err > 0) {
-		state.user = m.arr;
-		ret = traverse_pages(m.num, sizeof(xen_pfn_t),
-			       &pagelist,
-			       mmap_return_errors, &state);
-	}
+	if (state.global_error) {
+		int efault;
+
+		if (state.global_error == -ENOENT)
+			ret = -ENOENT;
+
+		/* Write back errors in second pass. */
+		state.user_mfn = (xen_pfn_t *)m.arr;
+		state.user_err = m.err;
+		state.err      = err_array;
+		efault = traverse_pages(m.num, sizeof(xen_pfn_t),
+					 &pagelist, mmap_return_errors, &state);
+		if (efault)
+			ret = efault;
+	} else if (m.err)
+		ret = __clear_user(m.err, m.num * sizeof(*m.err));
 
 out:
+	kfree(err_array);
+out_no_err_array:
 	free_page_list(&pagelist);
 
 	return ret;
@@ -354,7 +429,11 @@ static long privcmd_ioctl(struct file *file,
 		break;
 
 	case IOCTL_PRIVCMD_MMAPBATCH:
-		ret = privcmd_ioctl_mmap_batch(udata);
+		ret = privcmd_ioctl_mmap_batch(udata, 1);
+		break;
+
+	case IOCTL_PRIVCMD_MMAPBATCH_V2:
+		ret = privcmd_ioctl_mmap_batch(udata, 2);
 		break;
 
 	default:
diff --git a/include/xen/privcmd.h b/include/xen/privcmd.h
index 45c1aa1..a853168 100644
--- a/include/xen/privcmd.h
+++ b/include/xen/privcmd.h
@@ -58,13 +58,33 @@ struct privcmd_mmapbatch {
 	int num;     /* number of pages to populate */
 	domid_t dom; /* target domain */
 	__u64 addr;  /* virtual address */
-	xen_pfn_t __user *arr; /* array of mfns - top nibble set on err */
+	xen_pfn_t __user *arr; /* array of mfns - or'd with
+				  PRIVCMD_MMAPBATCH_*_ERROR on err */
+};
+
+#define PRIVCMD_MMAPBATCH_MFN_ERROR     0xf0000000U
+#define PRIVCMD_MMAPBATCH_PAGED_ERROR   0x80000000U
+
+struct privcmd_mmapbatch_v2 {
+	unsigned int num; /* number of pages to populate */
+	domid_t dom;      /* target domain */
+	__u64 addr;       /* virtual address */
+	const xen_pfn_t __user *arr; /* array of mfns */
+	int __user *err;  /* array of error codes */
 };
 
 /*
  * @cmd: IOCTL_PRIVCMD_HYPERCALL
  * @arg: &privcmd_hypercall_t
  * Return: Value returned from execution of the specified hypercall.
+ *
+ * @cmd: IOCTL_PRIVCMD_MMAPBATCH_V2
+ * @arg: &struct privcmd_mmapbatch_v2
+ * Return: 0 on success (i.e., arg->err contains valid error codes for
+ * each frame).  On an error other than a failed frame remap, -1 is
+ * returned and errno is set to EINVAL, EFAULT etc.  As an exception,
+ * if the operation was otherwise successful but any frame failed with
+ * -ENOENT, then -1 is returned and errno is set to ENOENT.
  */
 #define IOCTL_PRIVCMD_HYPERCALL					\
 	_IOC(_IOC_NONE, 'P', 0, sizeof(struct privcmd_hypercall))
@@ -72,5 +92,7 @@ struct privcmd_mmapbatch {
 	_IOC(_IOC_NONE, 'P', 2, sizeof(struct privcmd_mmap))
 #define IOCTL_PRIVCMD_MMAPBATCH					\
 	_IOC(_IOC_NONE, 'P', 3, sizeof(struct privcmd_mmapbatch))
+#define IOCTL_PRIVCMD_MMAPBATCH_V2				\
+	_IOC(_IOC_NONE, 'P', 4, sizeof(struct privcmd_mmapbatch_v2))
 
 #endif /* __LINUX_PUBLIC_PRIVCMD_H__ */

On Aug 30, 2012, at 8:58 AM, David Vrabel wrote:

> From: David Vrabel <david.vrabel@citrix.com>
> 
> PRIVCMD_MMAPBATCH_V2 extends PRIVCMD_MMAPBATCH with an additional
> field for reporting the error code for every frame that could not be
> mapped.  libxc prefers PRIVCMD_MMAPBATCH_V2 over PRIVCMD_MMAPBATCH.
> 
> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
> ---
> drivers/xen/privcmd.c |   99 +++++++++++++++++++++++++++++++++++++++---------
> include/xen/privcmd.h |   23 +++++++++++-
> 2 files changed, 102 insertions(+), 20 deletions(-)
> 
> diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
> index ccee0f1..c0e89e7 100644
> --- a/drivers/xen/privcmd.c
> +++ b/drivers/xen/privcmd.c
> @@ -76,7 +76,7 @@ static void free_page_list(struct list_head *pages)
>  */
> static int gather_array(struct list_head *pagelist,
> 			unsigned nelem, size_t size,
> -			void __user *data)
> +			const void __user *data)
> {
> 	unsigned pageidx;
> 	void *pagedata;
> @@ -248,18 +248,37 @@ struct mmap_batch_state {
> 	struct vm_area_struct *vma;
> 	int err;
> 
> -	xen_pfn_t __user *user;
> +	xen_pfn_t __user *user_mfn;
> +	int __user *user_err;
> };
> 
> static int mmap_batch_fn(void *data, void *state)
> {
> 	xen_pfn_t *mfnp = data;
> 	struct mmap_batch_state *st = state;
> +	int ret;
> 
> -	if (xen_remap_domain_mfn_range(st->vma, st->va & PAGE_MASK, *mfnp, 1,
> -				       st->vma->vm_page_prot, st->domain) < 0) {
> -		*mfnp |= 0xf0000000U;
> -		st->err++;
> +	ret = xen_remap_domain_mfn_range(st->vma, st->va & PAGE_MASK, *mfnp, 1,
> +					 st->vma->vm_page_prot, st->domain);
> +	if (ret < 0) {
> +		/*
> +		 * Error reporting is a mess but userspace relies on
> +		 * it behaving this way.
> +		 *
> +		 * V2 needs to a) return the result of each frame's
> +		 * remap; and b) return -ENOENT if any frame failed
> +		 * with -ENOENT.
> +		 *
> +		 * In this first pass the error code is saved by
> +		 * overwriting the mfn and an error is indicated in
> +		 * st->err.
> +		 *
> +		 * The second pass by mmap_return_errors() will write
> +		 * the error codes to user space and get the right
> +		 * ioctl return value.
> +		 */
> +		*(int *)mfnp = ret;
> +		st->err = ret;
> 	}
> 	st->va += PAGE_SIZE;
> 
> @@ -270,16 +289,33 @@ static int mmap_return_errors(void *data, void *state)
> {
> 	xen_pfn_t *mfnp = data;
> 	struct mmap_batch_state *st = state;
> +	int ret;
> +
> +	if (st->user_err) {
> +		int err = *(int *)mfnp;
> +
> +		if (err == -ENOENT)
> +			st->err = err;
> 
> -	return put_user(*mfnp, st->user++);
> +		return __put_user(err, st->user_err++);
> +	} else {
> +		xen_pfn_t mfn;
> +
> +		ret = __get_user(mfn, st->user_mfn);
> +		if (ret < 0)
> +			return ret;
> +
> +		mfn |= PRIVCMD_MMAPBATCH_MFN_ERROR;
> +		return __put_user(mfn, st->user_mfn++);
> +	}
> }
> 
> static struct vm_operations_struct privcmd_vm_ops;
> 
> -static long privcmd_ioctl_mmap_batch(void __user *udata)
> +static long privcmd_ioctl_mmap_batch(void __user *udata, int version)
> {
> 	int ret;
> -	struct privcmd_mmapbatch m;
> +	struct privcmd_mmapbatch_v2 m;
> 	struct mm_struct *mm = current->mm;
> 	struct vm_area_struct *vma;
> 	unsigned long nr_pages;
> @@ -289,15 +325,31 @@ static long privcmd_ioctl_mmap_batch(void __user *udata)
> 	if (!xen_initial_domain())
> 		return -EPERM;
> 
> -	if (copy_from_user(&m, udata, sizeof(m)))
> -		return -EFAULT;
> +	switch (version) {
> +	case 1:
> +		if (copy_from_user(&m, udata, sizeof(struct privcmd_mmapbatch)))
> +			return -EFAULT;
> +		/* Returns per-frame error in m.arr. */
> +		m.err = NULL;
> +		if (!access_ok(VERIFY_WRITE, m.arr, m.num * sizeof(*m.arr)))
> +			return -EFAULT;
> +		break;
> +	case 2:
> +		if (copy_from_user(&m, udata, sizeof(struct privcmd_mmapbatch_v2)))
> +			return -EFAULT;
> +		/* Returns per-frame error code in m.err. */
> +		if (!access_ok(VERIFY_WRITE, m.err, m.num * (sizeof(*m.err))))
> +			return -EFAULT;
> +		break;
> +	default:
> +		return -EINVAL;
> +	}
> 
> 	nr_pages = m.num;
> 	if ((m.num <= 0) || (nr_pages > (LONG_MAX >> PAGE_SHIFT)))
> 		return -EINVAL;
> 
> -	ret = gather_array(&pagelist, m.num, sizeof(xen_pfn_t),
> -			   m.arr);
> +	ret = gather_array(&pagelist, m.num, sizeof(xen_pfn_t), m.arr);
> 
> 	if (ret || list_empty(&pagelist))
> 		goto out;
> @@ -325,12 +377,17 @@ static long privcmd_ioctl_mmap_batch(void __user *udata)
> 
> 	up_write(&mm->mmap_sem);
> 
> -	if (state.err > 0) {
> -		state.user = m.arr;
> +	if (state.err) {
> +		state.err = 0;
> +		state.user_mfn = (xen_pfn_t *)m.arr;
> +		state.user_err = m.err;
> 		ret = traverse_pages(m.num, sizeof(xen_pfn_t),
> -			       &pagelist,
> -			       mmap_return_errors, &state);
> -	}
> +				     &pagelist,
> +				     mmap_return_errors, &state);
> +		if (ret >= 0)
> +			ret = state.err;
> +	} else if (m.err)
> +		__clear_user(m.err, m.num * sizeof(*m.err));
> 
> out:
> 	free_page_list(&pagelist);
> @@ -354,7 +411,11 @@ static long privcmd_ioctl(struct file *file,
> 		break;
> 
> 	case IOCTL_PRIVCMD_MMAPBATCH:
> -		ret = privcmd_ioctl_mmap_batch(udata);
> +		ret = privcmd_ioctl_mmap_batch(udata, 1);
> +		break;
> +
> +	case IOCTL_PRIVCMD_MMAPBATCH_V2:
> +		ret = privcmd_ioctl_mmap_batch(udata, 2);
> 		break;
> 
> 	default:
> diff --git a/include/xen/privcmd.h b/include/xen/privcmd.h
> index 17857fb..f60d75c 100644
> --- a/include/xen/privcmd.h
> +++ b/include/xen/privcmd.h
> @@ -59,13 +59,32 @@ struct privcmd_mmapbatch {
> 	int num;     /* number of pages to populate */
> 	domid_t dom; /* target domain */
> 	__u64 addr;  /* virtual address */
> -	xen_pfn_t __user *arr; /* array of mfns - top nibble set on err */
> +	xen_pfn_t __user *arr; /* array of mfns - or'd with
> +				  PRIVCMD_MMAPBATCH_MFN_ERROR on err */
> +};
> +
> +#define PRIVCMD_MMAPBATCH_MFN_ERROR 0xf0000000U
> +
> +struct privcmd_mmapbatch_v2 {
> +	unsigned int num; /* number of pages to populate */
> +	domid_t dom;      /* target domain */
> +	__u64 addr;       /* virtual address */
> +	const xen_pfn_t __user *arr; /* array of mfns */
> +	int __user *err;  /* array of error codes */
> };
> 
> /*
>  * @cmd: IOCTL_PRIVCMD_HYPERCALL
>  * @arg: &privcmd_hypercall_t
>  * Return: Value returned from execution of the specified hypercall.
> + *
> + * @cmd: IOCTL_PRIVCMD_MMAPBATCH_V2
> + * @arg: &struct privcmd_mmapbatch_v2
> + * Return: 0 on success (i.e., arg->err contains valid error codes for
> + * each frame).  On an error other than a failed frame remap, -1 is
> + * returned and errno is set to EINVAL, EFAULT etc.  As an exception,
> + * if the operation was otherwise successful but any frame failed with
> + * -ENOENT, then -1 is returned and errno is set to ENOENT.
>  */
> #define IOCTL_PRIVCMD_HYPERCALL					\
> 	_IOC(_IOC_NONE, 'P', 0, sizeof(struct privcmd_hypercall))
> @@ -73,5 +92,7 @@ struct privcmd_mmapbatch {
> 	_IOC(_IOC_NONE, 'P', 2, sizeof(struct privcmd_mmap))
> #define IOCTL_PRIVCMD_MMAPBATCH					\
> 	_IOC(_IOC_NONE, 'P', 3, sizeof(struct privcmd_mmapbatch))
> +#define IOCTL_PRIVCMD_MMAPBATCH_V2				\
> +	_IOC(_IOC_NONE, 'P', 4, sizeof(struct privcmd_mmapbatch_v2))
> 
> #endif /* __LINUX_PUBLIC_PRIVCMD_H__ */
> -- 
> 1.7.2.5
> 
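From the caller's side, the V2 return contract documented above (0 on success, -ENOENT if any frame was paged out, with per-frame codes in `err`) can be consumed roughly as follows. This is a hedged sketch: the ioctl invocation itself is elided since it needs /dev/xen/privcmd, and `classify_v2_errors` is a hypothetical helper, not part of the patch.

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Walk the per-frame err array a V2 call filled in and tally outcomes.
 * Returns 0 if every frame mapped, -ENOENT if any frame was paged out,
 * and 1 if there were only other per-frame failures (illustrative). */
static int classify_v2_errors(const int *err, size_t n,
			      size_t *mapped, size_t *paged, size_t *failed)
{
	int rc = 0;

	*mapped = *paged = *failed = 0;
	for (size_t i = 0; i < n; i++) {
		if (err[i] == 0) {
			(*mapped)++;
		} else if (err[i] == -ENOENT) {
			(*paged)++;
			rc = -ENOENT;   /* paged-out frames dominate the result */
		} else {
			(*failed)++;
			if (rc == 0)
				rc = 1;
		}
	}
	return rc;
}
```

A caller that sees the ioctl fail with errno == ENOENT would typically retry the paged-out frames after asking the pager to populate them, using exactly this kind of per-frame classification.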


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

+
+struct privcmd_mmapbatch_v2 {
+	unsigned int num; /* number of pages to populate */
+	domid_t dom;      /* target domain */
+	__u64 addr;       /* virtual address */
+	const xen_pfn_t __user *arr; /* array of mfns */
+	int __user *err;  /* array of error codes */
 };
 
 /*
  * @cmd: IOCTL_PRIVCMD_HYPERCALL
  * @arg: &privcmd_hypercall_t
  * Return: Value returned from execution of the specified hypercall.
+ *
+ * @cmd: IOCTL_PRIVCMD_MMAPBATCH_V2
+ * @arg: &struct privcmd_mmapbatch_v2
+ * Return: 0 on success (i.e., arg->err contains valid error codes for
+ * each frame).  On an error other than a failed frame remap, -1 is
+ * returned and errno is set to EINVAL, EFAULT etc.  As an exception,
+ * if the operation was otherwise successful but any frame failed with
+ * -ENOENT, then -1 is returned and errno is set to ENOENT.
  */
 #define IOCTL_PRIVCMD_HYPERCALL					\
 	_IOC(_IOC_NONE, 'P', 0, sizeof(struct privcmd_hypercall))
@@ -72,5 +92,7 @@ struct privcmd_mmapbatch {
 	_IOC(_IOC_NONE, 'P', 2, sizeof(struct privcmd_mmap))
 #define IOCTL_PRIVCMD_MMAPBATCH					\
 	_IOC(_IOC_NONE, 'P', 3, sizeof(struct privcmd_mmapbatch))
+#define IOCTL_PRIVCMD_MMAPBATCH_V2				\
+	_IOC(_IOC_NONE, 'P', 4, sizeof(struct privcmd_mmapbatch_v2))
 
 #endif /* __LINUX_PUBLIC_PRIVCMD_H__ */

On Aug 30, 2012, at 8:58 AM, David Vrabel wrote:

> From: David Vrabel <david.vrabel@citrix.com>
> 
> PRIVCMD_MMAPBATCH_V2 extends PRIVCMD_MMAPBATCH with an additional
> field for reporting the error code for every frame that could not be
> mapped.  libxc prefers PRIVCMD_MMAPBATCH_V2 over PRIVCMD_MMAPBATCH.
> 
> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
> ---
> drivers/xen/privcmd.c |   99 +++++++++++++++++++++++++++++++++++++++---------
> include/xen/privcmd.h |   23 +++++++++++-
> 2 files changed, 102 insertions(+), 20 deletions(-)
> 
> diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
> index ccee0f1..c0e89e7 100644
> --- a/drivers/xen/privcmd.c
> +++ b/drivers/xen/privcmd.c
> @@ -76,7 +76,7 @@ static void free_page_list(struct list_head *pages)
>  */
> static int gather_array(struct list_head *pagelist,
> 			unsigned nelem, size_t size,
> -			void __user *data)
> +			const void __user *data)
> {
> 	unsigned pageidx;
> 	void *pagedata;
> @@ -248,18 +248,37 @@ struct mmap_batch_state {
> 	struct vm_area_struct *vma;
> 	int err;
> 
> -	xen_pfn_t __user *user;
> +	xen_pfn_t __user *user_mfn;
> +	int __user *user_err;
> };
> 
> static int mmap_batch_fn(void *data, void *state)
> {
> 	xen_pfn_t *mfnp = data;
> 	struct mmap_batch_state *st = state;
> +	int ret;
> 
> -	if (xen_remap_domain_mfn_range(st->vma, st->va & PAGE_MASK, *mfnp, 1,
> -				       st->vma->vm_page_prot, st->domain) < 0) {
> -		*mfnp |= 0xf0000000U;
> -		st->err++;
> +	ret = xen_remap_domain_mfn_range(st->vma, st->va & PAGE_MASK, *mfnp, 1,
> +					 st->vma->vm_page_prot, st->domain);
> +	if (ret < 0) {
> +		/*
> +		 * Error reporting is a mess but userspace relies on
> +		 * it behaving this way.
> +		 *
> +		 * V2 needs to a) return the result of each frame's
> +		 * remap; and b) return -ENOENT if any frame failed
> +		 * with -ENOENT.
> +		 *
> +		 * In this first pass the error code is saved by
> +		 * overwriting the mfn and an error is indicated in
> +		 * st->err.
> +		 *
> +		 * The second pass by mmap_return_errors() will write
> +		 * the error codes to user space and get the right
> +		 * ioctl return value.
> +		 */
> +		*(int *)mfnp = ret;
> +		st->err = ret;
> 	}
> 	st->va += PAGE_SIZE;
> 
> @@ -270,16 +289,33 @@ static int mmap_return_errors(void *data, void *state)
> {
> 	xen_pfn_t *mfnp = data;
> 	struct mmap_batch_state *st = state;
> +	int ret;
> +
> +	if (st->user_err) {
> +		int err = *(int *)mfnp;
> +
> +		if (err == -ENOENT)
> +			st->err = err;
> 
> -	return put_user(*mfnp, st->user++);
> +		return __put_user(err, st->user_err++);
> +	} else {
> +		xen_pfn_t mfn;
> +
> +		ret = __get_user(mfn, st->user_mfn);
> +		if (ret < 0)
> +			return ret;
> +
> +		mfn |= PRIVCMD_MMAPBATCH_MFN_ERROR;
> +		return __put_user(mfn, st->user_mfn++);
> +	}
> }
> 
> static struct vm_operations_struct privcmd_vm_ops;
> 
> -static long privcmd_ioctl_mmap_batch(void __user *udata)
> +static long privcmd_ioctl_mmap_batch(void __user *udata, int version)
> {
> 	int ret;
> -	struct privcmd_mmapbatch m;
> +	struct privcmd_mmapbatch_v2 m;
> 	struct mm_struct *mm = current->mm;
> 	struct vm_area_struct *vma;
> 	unsigned long nr_pages;
> @@ -289,15 +325,31 @@ static long privcmd_ioctl_mmap_batch(void __user *udata)
> 	if (!xen_initial_domain())
> 		return -EPERM;
> 
> -	if (copy_from_user(&m, udata, sizeof(m)))
> -		return -EFAULT;
> +	switch (version) {
> +	case 1:
> +		if (copy_from_user(&m, udata, sizeof(struct privcmd_mmapbatch)))
> +			return -EFAULT;
> +		/* Returns per-frame error in m.arr. */
> +		m.err = NULL;
> +		if (!access_ok(VERIFY_WRITE, m.arr, m.num * sizeof(*m.arr)))
> +			return -EFAULT;
> +		break;
> +	case 2:
> +		if (copy_from_user(&m, udata, sizeof(struct privcmd_mmapbatch_v2)))
> +			return -EFAULT;
> +		/* Returns per-frame error code in m.err. */
> +		if (!access_ok(VERIFY_WRITE, m.err, m.num * (sizeof(*m.err))))
> +			return -EFAULT;
> +		break;
> +	default:
> +		return -EINVAL;
> +	}
> 
> 	nr_pages = m.num;
> 	if ((m.num <= 0) || (nr_pages > (LONG_MAX >> PAGE_SHIFT)))
> 		return -EINVAL;
> 
> -	ret = gather_array(&pagelist, m.num, sizeof(xen_pfn_t),
> -			   m.arr);
> +	ret = gather_array(&pagelist, m.num, sizeof(xen_pfn_t), m.arr);
> 
> 	if (ret || list_empty(&pagelist))
> 		goto out;
> @@ -325,12 +377,17 @@ static long privcmd_ioctl_mmap_batch(void __user *udata)
> 
> 	up_write(&mm->mmap_sem);
> 
> -	if (state.err > 0) {
> -		state.user = m.arr;
> +	if (state.err) {
> +		state.err = 0;
> +		state.user_mfn = (xen_pfn_t *)m.arr;
> +		state.user_err = m.err;
> 		ret = traverse_pages(m.num, sizeof(xen_pfn_t),
> -			       &pagelist,
> -			       mmap_return_errors, &state);
> -	}
> +				     &pagelist,
> +				     mmap_return_errors, &state);
> +		if (ret >= 0)
> +			ret = state.err;
> +	} else if (m.err)
> +		__clear_user(m.err, m.num * sizeof(*m.err));
> 
> out:
> 	free_page_list(&pagelist);
> @@ -354,7 +411,11 @@ static long privcmd_ioctl(struct file *file,
> 		break;
> 
> 	case IOCTL_PRIVCMD_MMAPBATCH:
> -		ret = privcmd_ioctl_mmap_batch(udata);
> +		ret = privcmd_ioctl_mmap_batch(udata, 1);
> +		break;
> +
> +	case IOCTL_PRIVCMD_MMAPBATCH_V2:
> +		ret = privcmd_ioctl_mmap_batch(udata, 2);
> 		break;
> 
> 	default:
> diff --git a/include/xen/privcmd.h b/include/xen/privcmd.h
> index 17857fb..f60d75c 100644
> --- a/include/xen/privcmd.h
> +++ b/include/xen/privcmd.h
> @@ -59,13 +59,32 @@ struct privcmd_mmapbatch {
> 	int num;     /* number of pages to populate */
> 	domid_t dom; /* target domain */
> 	__u64 addr;  /* virtual address */
> -	xen_pfn_t __user *arr; /* array of mfns - top nibble set on err */
> +	xen_pfn_t __user *arr; /* array of mfns - or'd with
> +				  PRIVCMD_MMAPBATCH_MFN_ERROR on err */
> +};
> +
> +#define PRIVCMD_MMAPBATCH_MFN_ERROR 0xf0000000U
> +
> +struct privcmd_mmapbatch_v2 {
> +	unsigned int num; /* number of pages to populate */
> +	domid_t dom;      /* target domain */
> +	__u64 addr;       /* virtual address */
> +	const xen_pfn_t __user *arr; /* array of mfns */
> +	int __user *err;  /* array of error codes */
> };
> 
> /*
>  * @cmd: IOCTL_PRIVCMD_HYPERCALL
>  * @arg: &privcmd_hypercall_t
>  * Return: Value returned from execution of the specified hypercall.
> + *
> + * @cmd: IOCTL_PRIVCMD_MMAPBATCH_V2
> + * @arg: &struct privcmd_mmapbatch_v2
> + * Return: 0 on success (i.e., arg->err contains valid error codes for
> + * each frame).  On an error other than a failed frame remap, -1 is
> + * returned and errno is set to EINVAL, EFAULT etc.  As an exception,
> + * if the operation was otherwise successful but any frame failed with
> + * -ENOENT, then -1 is returned and errno is set to ENOENT.
>  */
> #define IOCTL_PRIVCMD_HYPERCALL					\
> 	_IOC(_IOC_NONE, 'P', 0, sizeof(struct privcmd_hypercall))
> @@ -73,5 +92,7 @@ struct privcmd_mmapbatch {
> 	_IOC(_IOC_NONE, 'P', 2, sizeof(struct privcmd_mmap))
> #define IOCTL_PRIVCMD_MMAPBATCH					\
> 	_IOC(_IOC_NONE, 'P', 3, sizeof(struct privcmd_mmapbatch))
> +#define IOCTL_PRIVCMD_MMAPBATCH_V2				\
> +	_IOC(_IOC_NONE, 'P', 4, sizeof(struct privcmd_mmapbatch_v2))
> 
> #endif /* __LINUX_PUBLIC_PRIVCMD_H__ */
> -- 
> 1.7.2.5
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 30 18:42:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Aug 2012 18:42:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T79hE-0000kH-Jp; Thu, 30 Aug 2012 18:42:28 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <artnapor@yahoo.com>) id 1T77Km-0005BA-IM
	for xen-devel@lists.xen.org; Thu, 30 Aug 2012 16:11:08 +0000
Received: from [85.158.138.51:8479] by server-3.bemta-3.messagelabs.com id
	D1/3B-21322-B909F305; Thu, 30 Aug 2012 16:11:07 +0000
X-Env-Sender: artnapor@yahoo.com
X-Msg-Ref: server-5.tower-174.messagelabs.com!1346343066!27815691!1
X-Originating-IP: [98.138.91.61]
X-SpamReason: No, hits=0.0 required=7.0 tests=HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_12,ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30068 invoked from network); 30 Aug 2012 16:11:06 -0000
Received: from nm26-vm1.bullet.mail.ne1.yahoo.com (HELO
	nm26-vm1.bullet.mail.ne1.yahoo.com) (98.138.91.61)
	by server-5.tower-174.messagelabs.com with SMTP;
	30 Aug 2012 16:11:06 -0000
Received: from [98.138.90.54] by nm26.bullet.mail.ne1.yahoo.com with NNFMP;
	30 Aug 2012 16:11:05 -0000
Received: from [98.138.86.157] by tm7.bullet.mail.ne1.yahoo.com with NNFMP;
	30 Aug 2012 16:11:05 -0000
Received: from [127.0.0.1] by omp1015.mail.ne1.yahoo.com with NNFMP;
	30 Aug 2012 16:11:05 -0000
X-Yahoo-Newman-Property: ymail-3
X-Yahoo-Newman-Id: 51524.81935.bm@omp1015.mail.ne1.yahoo.com
Received: (qmail 95649 invoked by uid 60001); 30 Aug 2012 16:11:05 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024;
	t=1346343064; bh=C9PZCg4Km/uYRbeIza0/O+oEbGxBHOJCf/MnB8ycWms=;
	h=X-YMail-OSG:Received:X-Mailer:Message-ID:Date:From:Reply-To:Subject:To:MIME-Version:Content-Type;
	b=RUCpnUsqFbXttQmOzMLCgZ4QpTcsRgDJv+9WOGI27zkmXywcSu6MJ4uzpLx6uNGo0Tdj6WIeI0KpryPsYlIsCd5YBtN8jxwuKLFLQRodLdcsbaozKWl5muZ32Y+7k/EYHLVXfwM+iLee378OjG+W4bDq78tNni13rrZTYUB5+Es=
DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=s1024; d=yahoo.com;
	h=X-YMail-OSG:Received:X-Mailer:Message-ID:Date:From:Reply-To:Subject:To:MIME-Version:Content-Type;
	b=3oRvVmKyoBd+XaALSzQf3PuvdNskfMtseBJ1feWtvEWcrObLyNpOXFngWtGrWyrXSUrtTcAT3VgiVd4Fq6AST0/MFtD19Y7hZY1rlos5vUlpj9Sp4wu+VxG1MPXUqw3C5GQ3qS0+X87SdWz6icpZ8KBHUBkyyKkmitl1Wza1R6A=;
X-YMail-OSG: 5uw30koVM1kjc1tUaW22264IPkil3bjhAe34ZOENSdgdbIJ
	BmrRNL44ge11_Ipvu7oHKFG1TqLLfqOiyX7kndgfIT8i2h3kyPcSkvT.QCha
	vVjDSf6yCuCfSCsoTYYYxI.4mQIC_zGTEOIvWncfKJhuu.kjnrkGtfQNpTxm
	pL995uw49mWF_m3qctILr7fuPeybGQSBnPYCDr1ZM0I0w_d.jIH1Y720LXHl
	xCgC1ayXC4hBJZtUJCHe0IOboMnQgCbNs2NjogeKTU2yd6ydlIrB6CmGmkQe
	hPS_PwG3sLp.m9ucxsYDheLC4KBd9jPUqRjbBSWp4dUDFRPUs86JWS5S6uVK
	pJ7LVbd8xsPgzvxgu17ap9Yfvtan2bbqwGwOgPTKC1na.3mS7n.aTUuqUKaa
	nMZ2mphJEsu.e0rpb6n8zIadiFJjZZbnq5qbiIFSjZ1yhihSsZqJFiUasORo
	Nv8EIrNX1UeNFFZmM5eCsP3AY99doMvF8eLBfhv9H2Hz1hhsviH3xR4RbPMY
	aZE.WKVI0oZkEiiHppenhayttCqXsDpeh8nOrZUGkdSC4
Received: from [50.58.96.2] by web121001.mail.ne1.yahoo.com via HTTP;
	Thu, 30 Aug 2012 09:11:04 PDT
X-Mailer: YahooMailWebService/0.8.121.416
Message-ID: <1346343064.91089.YahooMailNeo@web121001.mail.ne1.yahoo.com>
Date: Thu, 30 Aug 2012 09:11:04 -0700 (PDT)
From: Art Napor <artnapor@yahoo.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
MIME-Version: 1.0
X-Mailman-Approved-At: Thu, 30 Aug 2012 18:42:27 +0000
Subject: [Xen-devel]  [PATCH v3 01/04] HVM firmware passthrough HVM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: Art Napor <artnapor@yahoo.com>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4104853267472564175=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4104853267472564175==
Content-Type: multipart/alternative; boundary="344044665-1260793075-1346343064=:91089"

--344044665-1260793075-1346343064=:91089
Content-Type: text/plain; charset=us-ascii

Is there a patch series that applies to Xen 4.1.2 for this feature? 



http://markmail.org/message/ipmyqtuaepe7d7iy
--344044665-1260793075-1346343064=:91089--


--===============4104853267472564175==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4104853267472564175==--


From xen-devel-bounces@lists.xen.org Thu Aug 30 19:06:47 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Aug 2012 19:06:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7A4S-00010L-Oi; Thu, 30 Aug 2012 19:06:28 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <msw@amazon.com>) id 1T7A4R-00010G-2q
	for xen-devel@lists.xen.org; Thu, 30 Aug 2012 19:06:27 +0000
Received: from [85.158.139.83:28420] by server-3.bemta-5.messagelabs.com id
	D2/8F-21836-2B9BF305; Thu, 30 Aug 2012 19:06:26 +0000
X-Env-Sender: msw@amazon.com
X-Msg-Ref: server-10.tower-182.messagelabs.com!1346353584!28059844!1
X-Originating-IP: [72.21.196.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNzIuMjEuMTk2LjI1ID0+IDExOTY4MQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28928 invoked from network); 30 Aug 2012 19:06:25 -0000
Received: from smtp-fw-2101.amazon.com (HELO smtp-fw-2101.amazon.com)
	(72.21.196.25)
	by server-10.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Aug 2012 19:06:25 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=amazon.com; i=msw@amazon.com; q=dns/txt; s=rte02;
	t=1346353584; x=1377889584;
	h=mime-version:content-transfer-encoding:subject:
	message-id:in-reply-to:references:date:from:to:cc;
	bh=ER5tX1r7Aqd/J/UV3PXR235/Sy9M+NCNbnKGaNSlVvA=;
	b=eOli8sb//s/pcOUbyjG6uJcuGjvYZRqKVEoXL6IWXx7co04zLKJfYIDk
	1/c1GFRxaj744sGFQYZLnNZdkoidAA==;
X-IronPort-AV: E=Sophos;i="4.80,342,1344211200"; d="scan'208";a="432024783"
Received: from smtp-in-6002.iad6.amazon.com ([10.195.76.108])
	by smtp-border-fw-out-2101.iad2.amazon.com with
	ESMTP/TLS/DHE-RSA-AES256-SHA; 30 Aug 2012 19:06:22 +0000
Received: from ex10-hub-9002.ant.amazon.com (ex10-hub-9002.ant.amazon.com
	[10.185.137.130])
	by smtp-in-6002.iad6.amazon.com (8.13.8/8.13.8) with ESMTP id
	q7UJ6LV6007806
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=FAIL);
	Thu, 30 Aug 2012 19:06:22 GMT
Received: from u002268147cd4502c336d.ant.amazon.com (10.224.80.41) by
	ex10-hub-9002.ant.amazon.com (10.185.137.130) with Microsoft SMTP
	Server id 14.2.247.3; Thu, 30 Aug 2012 12:06:16 -0700
MIME-Version: 1.0
X-Mercurial-Node: 512b4e0c49f331e252ae767f6cf9de39522c9c45
Message-ID: <512b4e0c49f331e252ae.1346353199@u002268147cd4502c336d.ant.amazon.com>
In-Reply-To: <patchbomb.1346353198@u002268147cd4502c336d.ant.amazon.com>
References: <patchbomb.1346353198@u002268147cd4502c336d.ant.amazon.com>
User-Agent: Mercurial-patchbomb/2.3
Date: Thu, 30 Aug 2012 11:59:59 -0700
From: Matt Wilson <msw@amazon.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>, Ian Campbell
	<Ian.Campbell@citrix.com>
Cc: xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH 1 of 2 v2] tools: check for documentation
 generation tools at configure time
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

It is sometimes hard to discover all the optional tools needed to
build the full set of Xen documentation. By having ./configure check
for documentation generation tools and print a warning when they are
missing, Xen packagers will more easily learn about new optional build
dependencies, such as markdown, when they are introduced.

Changes since v1:
 * require that ./configure be run before building docs
 * remove Docs.mk and make Tools.mk the canonical location where
   docs tools are defined (via ./configure)
 * fold in checking for markdown_py

Signed-off-by: Matt Wilson <msw@amazon.com>

diff -r d7e4efa17fb0 -r 512b4e0c49f3 README
--- a/README	Tue Aug 28 15:35:08 2012 -0700
+++ b/README	Thu Aug 30 10:51:00 2012 -0700
@@ -28,8 +28,9 @@
 your system. For full documentation, see the Xen User Manual. If this
 is a pre-built release then you can find the manual at:
  dist/install/usr/share/doc/xen/pdf/user.pdf
-If you have a source release, then 'make -C docs' will build the
-manual at docs/pdf/user.pdf.
+If you have a source release and the required documentation generation
+tools, then './configure; make -C docs' will build the manual at
+docs/pdf/user.pdf.
 
 Quick-Start Guide
 =================
@@ -59,7 +60,6 @@
     * GNU gettext
     * 16-bit x86 assembler, loader and compiler (dev86 rpm or bin86 & bcc debs)
     * ACPI ASL compiler (iasl)
-    * markdown
 
 In addition to the above there are a number of optional build
 prerequisites. Omitting these will cause the related features to be
@@ -67,6 +67,7 @@
     * Development install of Ocaml (e.g. ocaml-nox and
       ocaml-findlib). Required to build ocaml components which
       includes the alternative ocaml xenstored.
+    * markdown
 
 Second, you need to acquire a suitable kernel for use in domain 0. If
 possible you should use a kernel provided by your OS distributor. If
diff -r d7e4efa17fb0 -r 512b4e0c49f3 config/Tools.mk.in
--- a/config/Tools.mk.in	Tue Aug 28 15:35:08 2012 -0700
+++ b/config/Tools.mk.in	Thu Aug 30 10:51:00 2012 -0700
@@ -22,6 +22,18 @@
 LD86                := @LD86@
 BCC                 := @BCC@
 IASL                := @IASL@
+PS2PDF              := @PS2PDF@
+DVIPS               := @DVIPS@
+LATEX               := @LATEX@
+FIG2DEV             := @FIG2DEV@
+LATEX2HTML          := @LATEX2HTML@
+DOXYGEN             := @DOXYGEN@
+POD2MAN             := @POD2MAN@
+POD2HTML            := @POD2HTML@
+POD2TEXT            := @POD2TEXT@
+DOT                 := @DOT@
+NEATO               := @NEATO@
+MARKDOWN            := @MARKDOWN@
 
 # Extra folder for libs/includes
 PREPEND_INCLUDES    := @PREPEND_INCLUDES@
diff -r d7e4efa17fb0 -r 512b4e0c49f3 docs/Docs.mk
--- a/docs/Docs.mk	Tue Aug 28 15:35:08 2012 -0700
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,12 +0,0 @@
-PS2PDF		:= ps2pdf
-DVIPS		:= dvips
-LATEX		:= latex
-FIG2DEV		:= fig2dev
-LATEX2HTML	:= latex2html
-DOXYGEN		:= doxygen
-POD2MAN		:= pod2man
-POD2HTML	:= pod2html
-POD2TEXT	:= pod2text
-DOT		:= dot
-NEATO		:= neato
-MARKDOWN	:= markdown
diff -r d7e4efa17fb0 -r 512b4e0c49f3 docs/Makefile
--- a/docs/Makefile	Tue Aug 28 15:35:08 2012 -0700
+++ b/docs/Makefile	Thu Aug 30 10:51:00 2012 -0700
@@ -2,7 +2,7 @@
 
 XEN_ROOT=$(CURDIR)/..
 include $(XEN_ROOT)/Config.mk
-include $(XEN_ROOT)/docs/Docs.mk
+-include $(XEN_ROOT)/config/Tools.mk
 
 VERSION		= xen-unstable
 
@@ -26,10 +26,12 @@
 
 .PHONY: build
 build: html txt man-pages
From xen-devel-bounces@lists.xen.org Thu Aug 30 19:06:47 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Aug 2012 19:06:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7A4S-00010L-Oi; Thu, 30 Aug 2012 19:06:28 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <msw@amazon.com>) id 1T7A4R-00010G-2q
	for xen-devel@lists.xen.org; Thu, 30 Aug 2012 19:06:27 +0000
Received: from [85.158.139.83:28420] by server-3.bemta-5.messagelabs.com id
	D2/8F-21836-2B9BF305; Thu, 30 Aug 2012 19:06:26 +0000
X-Env-Sender: msw@amazon.com
X-Msg-Ref: server-10.tower-182.messagelabs.com!1346353584!28059844!1
X-Originating-IP: [72.21.196.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNzIuMjEuMTk2LjI1ID0+IDExOTY4MQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28928 invoked from network); 30 Aug 2012 19:06:25 -0000
Received: from smtp-fw-2101.amazon.com (HELO smtp-fw-2101.amazon.com)
	(72.21.196.25)
	by server-10.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Aug 2012 19:06:25 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=amazon.com; i=msw@amazon.com; q=dns/txt; s=rte02;
	t=1346353584; x=1377889584;
	h=mime-version:content-transfer-encoding:subject:
	message-id:in-reply-to:references:date:from:to:cc;
	bh=ER5tX1r7Aqd/J/UV3PXR235/Sy9M+NCNbnKGaNSlVvA=;
	b=eOli8sb//s/pcOUbyjG6uJcuGjvYZRqKVEoXL6IWXx7co04zLKJfYIDk
	1/c1GFRxaj744sGFQYZLnNZdkoidAA==;
X-IronPort-AV: E=Sophos;i="4.80,342,1344211200"; d="scan'208";a="432024783"
Received: from smtp-in-6002.iad6.amazon.com ([10.195.76.108])
	by smtp-border-fw-out-2101.iad2.amazon.com with
	ESMTP/TLS/DHE-RSA-AES256-SHA; 30 Aug 2012 19:06:22 +0000
Received: from ex10-hub-9002.ant.amazon.com (ex10-hub-9002.ant.amazon.com
	[10.185.137.130])
	by smtp-in-6002.iad6.amazon.com (8.13.8/8.13.8) with ESMTP id
	q7UJ6LV6007806
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=FAIL);
	Thu, 30 Aug 2012 19:06:22 GMT
Received: from u002268147cd4502c336d.ant.amazon.com (10.224.80.41) by
	ex10-hub-9002.ant.amazon.com (10.185.137.130) with Microsoft SMTP
	Server id 14.2.247.3; Thu, 30 Aug 2012 12:06:16 -0700
MIME-Version: 1.0
X-Mercurial-Node: 512b4e0c49f331e252ae767f6cf9de39522c9c45
Message-ID: <512b4e0c49f331e252ae.1346353199@u002268147cd4502c336d.ant.amazon.com>
In-Reply-To: <patchbomb.1346353198@u002268147cd4502c336d.ant.amazon.com>
References: <patchbomb.1346353198@u002268147cd4502c336d.ant.amazon.com>
User-Agent: Mercurial-patchbomb/2.3
Date: Thu, 30 Aug 2012 11:59:59 -0700
From: Matt Wilson <msw@amazon.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>, Ian Campbell
	<Ian.Campbell@citrix.com>
Cc: xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH 1 of 2 v2] tools: check for documentation
 generation tools at configure time
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

It is sometimes hard to discover all the optional tools that should be
on a system to build all available Xen documentation. By checking for
documentation generation tools at ./configure time and displaying a
warning, Xen packagers will more easily learn about new optional build
dependencies, like markdown, when they are introduced.

Changes since v1:
 * require that ./configure be run before building docs
 * remove Docs.mk and make Tools.mk the canonical location where
   docs tools are defined (via ./configure)
 * fold in checking for markdown_py

Signed-off-by: Matt Wilson <msw@amazon.com>

diff -r d7e4efa17fb0 -r 512b4e0c49f3 README
--- a/README	Tue Aug 28 15:35:08 2012 -0700
+++ b/README	Thu Aug 30 10:51:00 2012 -0700
@@ -28,8 +28,9 @@
 your system. For full documentation, see the Xen User Manual. If this
 is a pre-built release then you can find the manual at:
  dist/install/usr/share/doc/xen/pdf/user.pdf
-If you have a source release, then 'make -C docs' will build the
-manual at docs/pdf/user.pdf.
+If you have a source release and the required documentation generation
+tools, then './configure; make -C docs' will build the manual at
+docs/pdf/user.pdf.
 
 Quick-Start Guide
 =================
@@ -59,7 +60,6 @@
     * GNU gettext
     * 16-bit x86 assembler, loader and compiler (dev86 rpm or bin86 & bcc debs)
     * ACPI ASL compiler (iasl)
-    * markdown
 
 In addition to the above there are a number of optional build
 prerequisites. Omitting these will cause the related features to be
@@ -67,6 +67,7 @@
     * Development install of Ocaml (e.g. ocaml-nox and
       ocaml-findlib). Required to build ocaml components which
       includes the alternative ocaml xenstored.
+    * markdown
 
 Second, you need to acquire a suitable kernel for use in domain 0. If
 possible you should use a kernel provided by your OS distributor. If
diff -r d7e4efa17fb0 -r 512b4e0c49f3 config/Tools.mk.in
--- a/config/Tools.mk.in	Tue Aug 28 15:35:08 2012 -0700
+++ b/config/Tools.mk.in	Thu Aug 30 10:51:00 2012 -0700
@@ -22,6 +22,18 @@
 LD86                := @LD86@
 BCC                 := @BCC@
 IASL                := @IASL@
+PS2PDF              := @PS2PDF@
+DVIPS               := @DVIPS@
+LATEX               := @LATEX@
+FIG2DEV             := @FIG2DEV@
+LATEX2HTML          := @LATEX2HTML@
+DOXYGEN             := @DOXYGEN@
+POD2MAN             := @POD2MAN@
+POD2HTML            := @POD2HTML@
+POD2TEXT            := @POD2TEXT@
+DOT                 := @DOT@
+NEATO               := @NEATO@
+MARKDOWN            := @MARKDOWN@
 
 # Extra folder for libs/includes
 PREPEND_INCLUDES    := @PREPEND_INCLUDES@
diff -r d7e4efa17fb0 -r 512b4e0c49f3 docs/Docs.mk
--- a/docs/Docs.mk	Tue Aug 28 15:35:08 2012 -0700
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,12 +0,0 @@
-PS2PDF		:= ps2pdf
-DVIPS		:= dvips
-LATEX		:= latex
-FIG2DEV		:= fig2dev
-LATEX2HTML	:= latex2html
-DOXYGEN		:= doxygen
-POD2MAN		:= pod2man
-POD2HTML	:= pod2html
-POD2TEXT	:= pod2text
-DOT		:= dot
-NEATO		:= neato
-MARKDOWN	:= markdown
diff -r d7e4efa17fb0 -r 512b4e0c49f3 docs/Makefile
--- a/docs/Makefile	Tue Aug 28 15:35:08 2012 -0700
+++ b/docs/Makefile	Thu Aug 30 10:51:00 2012 -0700
@@ -2,7 +2,7 @@
 
 XEN_ROOT=$(CURDIR)/..
 include $(XEN_ROOT)/Config.mk
-include $(XEN_ROOT)/docs/Docs.mk
+-include $(XEN_ROOT)/config/Tools.mk
 
 VERSION		= xen-unstable
 
@@ -26,10 +26,12 @@
 
 .PHONY: build
 build: html txt man-pages
-	@if which $(DOT) 1>/dev/null 2>/dev/null ; then              \
-	$(MAKE) -C xen-api build ; else                              \
-        echo "Graphviz (dot) not installed; skipping xen-api." ; fi
+ifdef DOT
+	$(MAKE) -C xen-api build
 	rm -f *.aux *.dvi *.bbl *.blg *.glo *.idx *.ilg *.log *.ind *.toc
+else
+	@echo "Graphviz (dot) not installed; skipping xen-api."
+endif
 
 .PHONY: dev-docs
 dev-docs: python-dev-docs
@@ -42,18 +44,21 @@
 
 .PHONY: python-dev-docs
 python-dev-docs:
-	@mkdir -v -p api/tools/python
-	@set -e ; if which $(DOXYGEN) 1>/dev/null 2>/dev/null; then \
-        echo "Running doxygen to generate Python tools APIs ... "; \
-	$(DOXYGEN) Doxyfile;                                       \
-	$(MAKE) -C api/tools/python/latex ; else                   \
-        echo "Doxygen not installed; skipping python-dev-docs."; fi
+ifdef DOXYGEN
+	@echo "Running doxygen to generate Python tools APIs ... "
+	mkdir -v -p api/tools/python
+	$(DOXYGEN) Doxyfile && $(MAKE) -C api/tools/python/latex
+else
+	@echo "Doxygen not installed; skipping python-dev-docs."
+endif
 
 .PHONY: man-pages
 man-pages:
-	@if which $(POD2MAN) 1>/dev/null 2>/dev/null; then \
-	$(MAKE) $(DOC_MAN1) $(DOC_MAN5); else              \
-	echo "pod2man not installed; skipping man-pages."; fi
+ifdef POD2MAN
+	$(MAKE) $(DOC_MAN1) $(DOC_MAN5)
+else
+	@echo "pod2man not installed; skipping man-pages."
+endif
 
 man1/%.1: man/%.pod.1 Makefile
 	$(INSTALL_DIR) $(@D)
@@ -94,30 +99,40 @@
 	perl -w -- ./gen-html-index -i INDEX html $(DOC_HTML)
 
 html/%.html: %.markdown
-	@$(INSTALL_DIR) $(@D)
-	@set -e ; if which $(MARKDOWN) 1>/dev/null 2>/dev/null; then \
-	echo "Running markdown to generate $*.html ... "; \
+	$(INSTALL_DIR) $(@D)
+ifdef MARKDOWN
+	@echo "Running markdown to generate $*.html ... "
 	$(MARKDOWN) $< > $@.tmp ; \
-	$(call move-if-changed,$@.tmp,$@) ; else \
-	echo "markdown not installed; skipping $*.html."; fi
+	$(call move-if-changed,$@.tmp,$@)
+else
+	@echo "markdown not installed; skipping $*.html."
+endif
 
 html/%.txt: %.txt
-	@$(INSTALL_DIR) $(@D)
+	$(INSTALL_DIR) $(@D)
 	cp $< $@
 
 html/man/%.1.html: man/%.pod.1 Makefile
 	$(INSTALL_DIR) $(@D)
+ifdef POD2HTML
 	$(POD2HTML) --infile=$< --outfile=$@.tmp
 	$(call move-if-changed,$@.tmp,$@)
+else
+	@echo "pod2html not installed; skipping $<."
+endif
 
 html/man/%.5.html: man/%.pod.5 Makefile
 	$(INSTALL_DIR) $(@D)
+ifdef POD2HTML
 	$(POD2HTML) --infile=$< --outfile=$@.tmp
 	$(call move-if-changed,$@.tmp,$@)
+else
+	@echo "pod2html not installed; skipping $<."
+endif
 
 html/hypercall/index.html: ./xen-headers
 	rm -rf $(@D)
-	@$(INSTALL_DIR) $(@D)
+	$(INSTALL_DIR) $(@D)
 	./xen-headers -O $(@D) \
 		-T 'arch-x86_64 - Xen public headers' \
 		-X arch-ia64 -X arch-x86_32 -X xen-x86_32 -X arch-arm \
@@ -137,11 +152,24 @@
 
 txt/man/%.1.txt: man/%.pod.1 Makefile
 	$(INSTALL_DIR) $(@D)
+ifdef POD2TEXT
 	$(POD2TEXT) $< $@.tmp
 	$(call move-if-changed,$@.tmp,$@)
+else
+	@echo "pod2text not installed; skipping $<."
+endif
 
 txt/man/%.5.txt: man/%.pod.5 Makefile
 	$(INSTALL_DIR) $(@D)
+ifdef POD2TEXT
 	$(POD2TEXT) $< $@.tmp
 	$(call move-if-changed,$@.tmp,$@)
+else
+	@echo "pod2text not installed; skipping $<."
+endif
 
+ifeq (,$(findstring clean,$(MAKECMDGOALS)))
+$(XEN_ROOT)/config/Tools.mk:
+	$(error You have to run ./configure before building docs)
+endif
+
diff -r d7e4efa17fb0 -r 512b4e0c49f3 docs/xen-api/Makefile
--- a/docs/xen-api/Makefile	Tue Aug 28 15:35:08 2012 -0700
+++ b/docs/xen-api/Makefile	Thu Aug 30 10:51:00 2012 -0700
@@ -2,7 +2,7 @@
 
 XEN_ROOT=$(CURDIR)/../..
 include $(XEN_ROOT)/Config.mk
-include $(XEN_ROOT)/docs/Docs.mk
+-include $(XEN_ROOT)/config/Tools.mk
 
 
 TEX := $(wildcard *.tex)
@@ -42,3 +42,8 @@
 .PHONY: clean
 clean:
 	rm -f *.pdf *.ps *.dvi *.aux *.log *.out $(EPSDOT)
+
+ifeq (,$(findstring clean,$(MAKECMDGOALS)))
+$(XEN_ROOT)/config/Tools.mk:
+	$(error You have to run ./configure before building docs)
+endif
diff -r d7e4efa17fb0 -r 512b4e0c49f3 tools/configure.ac
--- a/tools/configure.ac	Tue Aug 28 15:35:08 2012 -0700
+++ b/tools/configure.ac	Thu Aug 30 10:51:00 2012 -0700
@@ -34,6 +34,7 @@
 m4_include([m4/curses.m4])
 m4_include([m4/pthread.m4])
 m4_include([m4/ptyfuncs.m4])
+m4_include([m4/docs_tool.m4])
 
 # Enable/disable options
 AX_ARG_DEFAULT_DISABLE([githttp], [Download GIT repositories via HTTP])
@@ -80,6 +81,18 @@
 AC_PATH_PROG([BISON], [bison])
 AC_PATH_PROG([FLEX], [flex])
 AX_PATH_PROG_OR_FAIL([PERL], [perl])
+AX_DOCS_TOOL_PROG([PS2PDF], [ps2pdf])
+AX_DOCS_TOOL_PROG([DVIPS], [dvips])
+AX_DOCS_TOOL_PROG([LATEX], [latex])
+AX_DOCS_TOOL_PROG([FIG2DEV], [fig2dev])
+AX_DOCS_TOOL_PROG([LATEX2HTML], [latex2html])
+AX_DOCS_TOOL_PROG([DOXYGEN], [doxygen])
+AX_DOCS_TOOL_PROG([POD2MAN], [pod2man])
+AX_DOCS_TOOL_PROG([POD2HTML], [pod2html])
+AX_DOCS_TOOL_PROG([POD2TEXT], [pod2text])
+AX_DOCS_TOOL_PROG([DOT], [dot])
+AX_DOCS_TOOL_PROG([NEATO], [neato])
+AX_DOCS_TOOL_PROGS([MARKDOWN], [markdown markdown_py])
 AS_IF([test "x$xapi" = "xy"], [
     AX_PATH_PROG_OR_FAIL([CURL], [curl-config])
     AX_PATH_PROG_OR_FAIL([XML], [xml2-config])
diff -r d7e4efa17fb0 -r 512b4e0c49f3 tools/m4/docs_tool.m4
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/tools/m4/docs_tool.m4	Thu Aug 30 10:51:00 2012 -0700
@@ -0,0 +1,17 @@
+AC_DEFUN([AX_DOCS_TOOL_PROG], [
+dnl
+    AC_ARG_VAR([$1], [Path to $2 tool])
+    AC_PATH_PROG([$1], [$2])
+    AS_IF([! test -x "$ac_cv_path_$1"], [
+        AC_MSG_WARN([$2 is not available so some documentation won't be built])
+    ])
+])
+
+AC_DEFUN([AX_DOCS_TOOL_PROGS], [
+dnl
+    AC_ARG_VAR([$1], [Path to $2 tool])
+    AC_PATH_PROGS([$1], [$2])
+    AS_IF([! test -x "$ac_cv_path_$1"], [
+        AC_MSG_WARN([$2 is not available so some documentation won't be built])
+    ])
+])

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 30 19:06:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Aug 2012 19:06:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7A4a-00011B-BX; Thu, 30 Aug 2012 19:06:36 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <msw@amazon.com>) id 1T7A4Y-00010x-Er
	for xen-devel@lists.xen.org; Thu, 30 Aug 2012 19:06:34 +0000
Received: from [85.158.138.51:27448] by server-10.bemta-3.messagelabs.com id
	50/52-10411-9B9BF305; Thu, 30 Aug 2012 19:06:33 +0000
X-Env-Sender: msw@amazon.com
X-Msg-Ref: server-10.tower-174.messagelabs.com!1346353591!23722499!1
X-Originating-IP: [207.171.178.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA3LjE3MS4xNzguMjUgPT4gNjM2NDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27188 invoked from network); 30 Aug 2012 19:06:32 -0000
Received: from smtp-fw-31001.amazon.com (HELO smtp-fw-31001.amazon.com)
	(207.171.178.25)
	by server-10.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Aug 2012 19:06:32 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=amazon.com; i=msw@amazon.com; q=dns/txt; s=rte02;
	t=1346353592; x=1377889592;
	h=mime-version:content-transfer-encoding:subject:
	message-id:date:from:to:cc;
	bh=OTQloVAwx79aZH2EZcWohhAtaT0tnIkFD7+ZtVEahhc=;
	b=fYYk/i9y12ltGnc5qobMB0CSOIItvlQwG1UvcRUJBtPjYbLE2+oMvuX+
	jKTw1yTfRNJHy1PhV6hn9os/49fCrw==;
X-IronPort-AV: E=Sophos;i="4.80,342,1344211200"; d="scan'208";a="286756519"
Received: from smtp-in-9002.sea19.amazon.com ([10.186.174.20])
	by smtp-border-fw-out-31001.sea31.amazon.com with
	ESMTP/TLS/DHE-RSA-AES256-SHA; 30 Aug 2012 19:06:16 +0000
Received: from ex10-hub-9002.ant.amazon.com (ex10-hub-9002.ant.amazon.com
	[10.185.137.130])
	by smtp-in-9002.sea19.amazon.com (8.13.8/8.13.8) with ESMTP id
	q7UJ6F5L011940
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=FAIL);
	Thu, 30 Aug 2012 19:06:16 GMT
Received: from u002268147cd4502c336d.ant.amazon.com (10.224.80.41) by
	ex10-hub-9002.ant.amazon.com (10.185.137.130) with Microsoft SMTP
	Server id 14.2.247.3; Thu, 30 Aug 2012 12:06:10 -0700
MIME-Version: 1.0
Message-ID: <patchbomb.1346353198@u002268147cd4502c336d.ant.amazon.com>
User-Agent: Mercurial-patchbomb/2.3
Date: Thu, 30 Aug 2012 11:59:58 -0700
From: Matt Wilson <msw@amazon.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>, Ian Campbell
	<Ian.Campbell@citrix.com>
Cc: xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH 0 of 2 v2] improve checking for documentation
 tools and formatting
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This version makes running ./configure a prerequisite for building
documentation. If a tool is not detected by ./configure and is not
specified by the user, the make variable will be empty. So I've
switched to using make conditionals instead of which(1).
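
The make-conditional approach can be sketched as follows (a minimal
illustration based on the man-pages rule; it assumes config/Tools.mk,
generated by ./configure, either sets POD2MAN to the detected path or
leaves it empty):

```make
# Tools.mk is generated by ./configure; a missing tool leaves its
# variable empty, and ifdef is false for an empty value, so the
# recipe degrades to a warning instead of failing mid-build.
-include config/Tools.mk

.PHONY: man-pages
man-pages:
ifdef POD2MAN
	$(MAKE) $(DOC_MAN1) $(DOC_MAN5)
else
	@echo "pod2man not installed; skipping man-pages."
endif
```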

Matt

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 30 19:07:18 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Aug 2012 19:07:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7A52-00014h-O1; Thu, 30 Aug 2012 19:07:04 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <msw@amazon.com>) id 1T7A51-00014R-M0
	for xen-devel@lists.xen.org; Thu, 30 Aug 2012 19:07:03 +0000
Received: from [85.158.143.99:50968] by server-1.bemta-4.messagelabs.com id
	E8/C4-12504-6D9BF305; Thu, 30 Aug 2012 19:07:02 +0000
X-Env-Sender: msw@amazon.com
X-Msg-Ref: server-11.tower-216.messagelabs.com!1346353619!20169155!1
X-Originating-IP: [207.171.178.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA3LjE3MS4xNzguMjUgPT4gNjM2NDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11060 invoked from network); 30 Aug 2012 19:07:02 -0000
Received: from smtp-fw-31001.amazon.com (HELO smtp-fw-31001.amazon.com)
	(207.171.178.25)
	by server-11.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Aug 2012 19:07:02 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=amazon.com; i=msw@amazon.com; q=dns/txt; s=rte02;
	t=1346353621; x=1377889621;
	h=mime-version:content-transfer-encoding:subject:
	message-id:in-reply-to:references:date:from:to:cc;
	bh=y/oBBxhvWNzYiFmljEbj18XSHVCtZgXGjEic10JDW0A=;
	b=JgT4qOuhgQ2rjMPIEL+ZpEqnzEGeBtqB9+9EAjffUaQZtCUtMQHFSJxA
	DigQHgKlvxrof09z+jZMETrSJ2TBVg==;
X-IronPort-AV: E=Sophos;i="4.80,342,1344211200"; d="scan'208";a="286756689"
Received: from smtp-in-1104.vdc.amazon.com ([10.140.10.25])
	by smtp-border-fw-out-31001.sea31.amazon.com with
	ESMTP/TLS/DHE-RSA-AES256-SHA; 30 Aug 2012 19:06:33 +0000
Received: from ex10-hub-9002.ant.amazon.com (ex10-hub-9002.ant.amazon.com
	[10.185.137.130])
	by smtp-in-1104.vdc.amazon.com (8.13.8/8.13.8) with ESMTP id
	q7UJ6Vu7028785
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=FAIL);
	Thu, 30 Aug 2012 19:06:32 GMT
Received: from u002268147cd4502c336d.ant.amazon.com (10.224.80.41) by
	ex10-hub-9002.ant.amazon.com (10.185.137.130) with Microsoft SMTP
	Server id 14.2.247.3; Thu, 30 Aug 2012 12:06:23 -0700
MIME-Version: 1.0
X-Mercurial-Node: 651347cccff7c5619ab0d812e93a04b3ada7b0cd
Message-ID: <651347cccff7c5619ab0.1346353200@u002268147cd4502c336d.ant.amazon.com>
In-Reply-To: <patchbomb.1346353198@u002268147cd4502c336d.ant.amazon.com>
References: <patchbomb.1346353198@u002268147cd4502c336d.ant.amazon.com>
User-Agent: Mercurial-patchbomb/2.3
Date: Thu, 30 Aug 2012 12:00:00 -0700
From: Matt Wilson <msw@amazon.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>, Ian Campbell
	<Ian.Campbell@citrix.com>
Cc: xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH 2 of 2 v2] docs: use elinks to format
 markdown-generated html to text
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Markdown, while easy to read and write, isn't the most consumable
format for users reading documentation on a terminal. This patch uses
elinks to format markdown-generated HTML into text files.

Changes since v1:
 * check for html to text dump tool in ./configure
 * switch to using elinks
 * allow command line flags to dump tool to be specified
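
The resulting rule can be read as this standalone sketch (illustrative
only; MARKDOWN, HTMLDUMP and HTMLDUMPFLAGS are the configure-substituted
variables from config/Tools.mk, e.g. HTMLDUMP=elinks and
HTMLDUMPFLAGS=-dump):

```make
-include config/Tools.mk

txt/%.txt: %.markdown
	mkdir -p $(@D)
ifdef MARKDOWN
ifdef HTMLDUMP
	# render markdown to HTML, then dump the HTML as plain text
	$(MARKDOWN) $< | $(HTMLDUMP) $(HTMLDUMPFLAGS) > $@
else
	# no html-to-text tool; fall back to shipping the raw markdown
	cp $< $@
endif
else
	cp $< $@
endif
```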

Signed-off-by: Matt Wilson <msw@amazon.com>

diff -r 512b4e0c49f3 -r 651347cccff7 config/Tools.mk.in
--- a/config/Tools.mk.in	Thu Aug 30 10:51:00 2012 -0700
+++ b/config/Tools.mk.in	Thu Aug 30 11:56:01 2012 -0700
@@ -34,6 +34,8 @@
 DOT                 := @DOT@
 NEATO               := @NEATO@
 MARKDOWN            := @MARKDOWN@
+HTMLDUMP            := @HTMLDUMP@
+HTMLDUMPFLAGS       := @HTMLDUMPFLAGS@
 
 # Extra folder for libs/includes
 PREPEND_INCLUDES    := @PREPEND_INCLUDES@
diff -r 512b4e0c49f3 -r 651347cccff7 docs/Makefile
--- a/docs/Makefile	Thu Aug 30 10:51:00 2012 -0700
+++ b/docs/Makefile	Thu Aug 30 11:56:01 2012 -0700
@@ -146,9 +146,20 @@
 	$(call move-if-changed,$@.tmp,$@)
 
 txt/%.txt: %.markdown
-	$(INSTALL_DIR) $(@D)
-	cp $< $@.tmp
+	@$(INSTALL_DIR) $(@D)
+ifdef MARKDOWN
+ifdef HTMLDUMP
+	@echo "Running markdown to generate $*.txt ... "; \
+	$(MARKDOWN) $< | $(HTMLDUMP) $(HTMLDUMPFLAGS) > $@.tmp
 	$(call move-if-changed,$@.tmp,$@)
+else
+	@echo "html dump tool (like elinks) not installed; just copying $<."
+	cp $< $@
+endif
+else
+	@echo "markdown not installed; just copying $<."
+	cp $< $@
+endif
 
 txt/man/%.1.txt: man/%.pod.1 Makefile
 	$(INSTALL_DIR) $(@D)
diff -r 512b4e0c49f3 -r 651347cccff7 tools/configure.ac
--- a/tools/configure.ac	Thu Aug 30 10:51:00 2012 -0700
+++ b/tools/configure.ac	Thu Aug 30 11:56:01 2012 -0700
@@ -93,6 +93,16 @@
 AX_DOCS_TOOL_PROG([DOT], [dot])
 AX_DOCS_TOOL_PROG([NEATO], [neato])
 AX_DOCS_TOOL_PROGS([MARKDOWN], [markdown markdown_py])
+AC_ARG_VAR([HTMLDUMP],
+           [Path to html-to-text generation tool (default: elinks)])
+AC_PATH_PROG([HTMLDUMP], [elinks])
+AS_IF([! test -x "$ac_cv_path_HTMLDUMP"], [
+    AC_MSG_WARN([elinks is not available so text documentation will be unformatted markdown])
+])
+AC_SUBST([HTMLDUMPFLAGS], ["-dump"])
+AC_ARG_VAR([HTMLDUMPFLAGS], [Flags passed to html to text translation tool])
+
+
 AS_IF([test "x$xapi" = "xy"], [
     AX_PATH_PROG_OR_FAIL([CURL], [curl-config])
     AX_PATH_PROG_OR_FAIL([XML], [xml2-config])

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Markdown, while easy to read and write, isn't the most consumable
format for users reading documentation on a terminal. This patch uses
elinks to format markdown-produced HTML into text files.
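
The new txt/%.txt rule amounts to rendering the markdown to HTML and
dumping that HTML as text, with a plain copy as the fallback. A minimal
shell sketch of that behaviour (xl.markdown is an illustrative stand-in
file, not one from the tree):

```shell
# Sketch of what the txt/%.txt rule does; the input file is a stand-in.
echo '# xl manual' > xl.markdown
mkdir -p txt
if command -v markdown >/dev/null 2>&1 && command -v elinks >/dev/null 2>&1; then
    # render markdown to HTML, then dump that HTML as formatted text
    markdown xl.markdown | elinks -dump > txt/xl.txt
else
    # renderer or dump tool missing: ship the raw markdown unformatted
    cp xl.markdown txt/xl.txt
fi
```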

Changes since v1:
 * check for html to text dump tool in ./configure
 * switch to using elinks
 * allow command line flags to dump tool to be specified

Signed-off-by: Matt Wilson <msw@amazon.com>

diff -r 512b4e0c49f3 -r 651347cccff7 config/Tools.mk.in
--- a/config/Tools.mk.in	Thu Aug 30 10:51:00 2012 -0700
+++ b/config/Tools.mk.in	Thu Aug 30 11:56:01 2012 -0700
@@ -34,6 +34,8 @@
 DOT                 := @DOT@
 NEATO               := @NEATO@
 MARKDOWN            := @MARKDOWN@
+HTMLDUMP            := @HTMLDUMP@
+HTMLDUMPFLAGS       := @HTMLDUMPFLAGS@
 
 # Extra folder for libs/includes
 PREPEND_INCLUDES    := @PREPEND_INCLUDES@
diff -r 512b4e0c49f3 -r 651347cccff7 docs/Makefile
--- a/docs/Makefile	Thu Aug 30 10:51:00 2012 -0700
+++ b/docs/Makefile	Thu Aug 30 11:56:01 2012 -0700
@@ -146,9 +146,20 @@
 	$(call move-if-changed,$@.tmp,$@)
 
 txt/%.txt: %.markdown
-	$(INSTALL_DIR) $(@D)
-	cp $< $@.tmp
+	@$(INSTALL_DIR) $(@D)
+ifdef MARKDOWN
+ifdef HTMLDUMP
+	@echo "Running markdown to generate $*.txt ... "; \
+	$(MARKDOWN) $< | $(HTMLDUMP) $(HTMLDUMPFLAGS) > $@.tmp
 	$(call move-if-changed,$@.tmp,$@)
+else
+	@echo "html dump tool (like elinks) not installed; just copying $<."
+	cp $< $@;
+endif
+else
+	@echo "markdown not installed; just copying $<."
+	cp $< $@;
+endif
 
 txt/man/%.1.txt: man/%.pod.1 Makefile
 	$(INSTALL_DIR) $(@D)
diff -r 512b4e0c49f3 -r 651347cccff7 tools/configure.ac
--- a/tools/configure.ac	Thu Aug 30 10:51:00 2012 -0700
+++ b/tools/configure.ac	Thu Aug 30 11:56:01 2012 -0700
@@ -93,6 +93,16 @@
 AX_DOCS_TOOL_PROG([DOT], [dot])
 AX_DOCS_TOOL_PROG([NEATO], [neato])
 AX_DOCS_TOOL_PROGS([MARKDOWN], [markdown markdown_py])
+AC_ARG_VAR([HTMLDUMP],
+           [Path to html-to-text generation tool (default: elinks)])
+AC_PATH_PROG([HTMLDUMP], [elinks])
+AS_IF([! test -x "$ac_cv_path_HTMLDUMP"], [
+    AC_MSG_WARN([$ac_cv_path_HTMLDUMP is not available so text documentation will be unformatted markdown])
+])
+AC_SUBST([HTMLDUMPFLAGS], ["-dump"])
+AC_ARG_VAR([HTMLDUMPFLAGS], [Flags passed to html to text translation tool])
+
+
 AS_IF([test "x$xapi" = "xy"], [
     AX_PATH_PROG_OR_FAIL([CURL], [curl-config])
     AX_PATH_PROG_OR_FAIL([XML], [xml2-config])

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 30 20:05:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Aug 2012 20:05:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7AzY-0001hj-E5; Thu, 30 Aug 2012 20:05:28 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ketuzsezr@gmail.com>) id 1T7AzX-0001he-Hw
	for xen-devel@lists.xensource.com; Thu, 30 Aug 2012 20:05:27 +0000
X-Env-Sender: ketuzsezr@gmail.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1346357119!8908433!1
X-Originating-IP: [209.85.160.65]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7468 invoked from network); 30 Aug 2012 20:05:20 -0000
Received: from mail-pb0-f65.google.com (HELO mail-pb0-f65.google.com)
	(209.85.160.65)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Aug 2012 20:05:20 -0000
Received: by pbbrr13 with SMTP id rr13so1495699pbb.0
	for <xen-devel@lists.xensource.com>;
	Thu, 30 Aug 2012 13:05:18 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:date:from:to:cc:subject:message-id:references:mime-version
	:content-type:content-disposition:in-reply-to:user-agent;
	bh=FvwVD49Yf6dpmBRfWCJPO8JMS1NFR1sNjUljA2P5UcQ=;
	b=O8gISf5m9gMw55MlKbvtjT30tpa0hhzXdIflHWxJNGyu9hkExqnwleeiMe1mrFrdzx
	+tKy+FRMc0pY5AsNyjgvar0gW5492L7FQqOmddm8JW+bX/g3KLbGh3rYR7d/KO918Eda
	eURFXByeux5++ksUWehnssXXQxHpF0Ary80YTmQPX6FMuffGadiOx+OdoJTFNT9ud8ba
	h5DwSDm25u6mGxIfy7a3nZ1hGsHxbMWQaQrEQXA1kZm1KYE3frEZ0mvjHUsz2ER8RZNa
	psiXF0CPK0boJl9Ckl5+eywfaxwXrBHw2RqjvqZufht0X4ewPUzFLp0T3QQVsFr5AidB
	1bJg==
Received: by 10.68.224.70 with SMTP id ra6mr13130793pbc.11.1346357118700;
	Thu, 30 Aug 2012 13:05:18 -0700 (PDT)
Received: from localhost.localdomain ([38.96.16.75])
	by mx.google.com with ESMTPS id ox5sm2077605pbc.75.2012.08.30.13.05.17
	(version=TLSv1/SSLv3 cipher=OTHER);
	Thu, 30 Aug 2012 13:05:17 -0700 (PDT)
Date: Thu, 30 Aug 2012 16:05:10 -0400
From: Konrad Rzeszutek Wilk <konrad@kernel.org>
To: David Vrabel <david.vrabel@citrix.com>
Message-ID: <20120830200509.GA12955@localhost.localdomain>
References: <1346331492-15027-1-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1346331492-15027-1-git-send-email-david.vrabel@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: Andres Lagar-Cavilla <andres@gridcentric.ca>, xen-devel@lists.xensource.com,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [PATCHv3 0/2] xen/privcmd: support for paged-out
 frames
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> I think this should probably get a "Tested-by" Andres or someone else
> who uses paging before being applied.

How do I test it? Is there an easy explanation/tutorial Andres?

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 30 20:12:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Aug 2012 20:12:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7B5v-0001qs-9G; Thu, 30 Aug 2012 20:12:03 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andreslc@gridcentric.ca>) id 1T7B5u-0001qn-5s
	for xen-devel@lists.xensource.com; Thu, 30 Aug 2012 20:12:02 +0000
Received: from [85.158.143.99:55438] by server-2.bemta-4.messagelabs.com id
	2E/C5-21239-119CF305; Thu, 30 Aug 2012 20:12:01 +0000
X-Env-Sender: andreslc@gridcentric.ca
X-Msg-Ref: server-16.tower-216.messagelabs.com!1346357517!16501145!1
X-Originating-IP: [209.85.223.171]
X-SpamReason: No, hits=0.4 required=7.0 tests=MIME_QP_LONG_LINE,
  RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22518 invoked from network); 30 Aug 2012 20:11:59 -0000
Received: from mail-ie0-f171.google.com (HELO mail-ie0-f171.google.com)
	(209.85.223.171)
	by server-16.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Aug 2012 20:11:59 -0000
Received: by ieje14 with SMTP id e14so1372721iej.30
	for <xen-devel@lists.xensource.com>;
	Thu, 30 Aug 2012 13:11:57 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=subject:mime-version:content-type:from:in-reply-to:date:cc
	:content-transfer-encoding:message-id:references:to:x-mailer
	:x-gm-message-state;
	bh=cCIjGVayCl+VQgKk4MdfrlWevmiD5YTh+AI5p5koWnM=;
	b=aYpE/D+0QjWrUfN9bTzyXpJJow3Kpc9v87MvqbwO4/O7HMpO1iGq9+i777Tct/Cg5x
	6mRwfGai/MJzjwNMdKVclQ2G7W16GZetfhKAkBDG2mGxk9wavmG5V9XOQfhzDlFUL8SA
	L1YzH0yGhvg0JY+ii4R/4/uz5y/Kv8HKO0YFCqdTFQJsI/z8X3MpqahzyXiZlpm/XM4g
	9Se49y6oKnSXpkxhOBX7hry8oUxe3nVzpMadgOpxtHf38RIBTQJu0PemB6aAmQscP1Wf
	u4rHn0Jd8a7GYgMhnPLuPggaYDD7rxufCoN/Qz87cBff7cUeTo3KUmIZfSsJ/p3+oPII
	kfCg==
Received: by 10.50.94.133 with SMTP id dc5mr2289864igb.16.1346357517514;
	Thu, 30 Aug 2012 13:11:57 -0700 (PDT)
Received: from [192.168.7.210] ([206.223.182.18])
	by mx.google.com with ESMTPS id wm7sm3117191igb.6.2012.08.30.13.11.56
	(version=TLSv1/SSLv3 cipher=OTHER);
	Thu, 30 Aug 2012 13:11:56 -0700 (PDT)
Mime-Version: 1.0 (Apple Message framework v1278)
From: Andres Lagar-Cavilla <andreslc@gridcentric.ca>
In-Reply-To: <20120830200509.GA12955@localhost.localdomain>
Date: Thu, 30 Aug 2012 16:12:03 -0400
Message-Id: <EE725C18-21B4-4F03-834E-F516178E0EAA@gridcentric.ca>
References: <1346331492-15027-1-git-send-email-david.vrabel@citrix.com>
	<20120830200509.GA12955@localhost.localdomain>
To: Konrad Rzeszutek Wilk <konrad@kernel.org>
X-Mailer: Apple Mail (2.1278)
X-Gm-Message-State: ALoCoQlzP1MSPgpyrMCejo/HS1qnUHm9X8T6bMCW4BZ9CAVev5LP/rlKUOKXOf0Vmhv8bPIE09C3
Cc: xen-devel@lists.xensource.com, David Vrabel <david.vrabel@citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [PATCHv3 0/2] xen/privcmd: support for paged-out
	frames
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Aug 30, 2012, at 4:05 PM, Konrad Rzeszutek Wilk wrote:

>> I think this should probably get a "Tested-by" Andres or someone else
>> who uses paging before being applied.
> 
> How do I test it? Is there an easy explanation/tutorial Andres?

I have a unit test lying somewhere but I'll need a day to dig it out.

Or you can start an hvm, pause it, fire up xenpaging with debug output so it tells you what it paged out, and then trace libxc as you xc_map_foreign_bulk a known batch of evicted pfns. On the first try it should get rc == -1, errno == ENOENT, error field for the pfn == ENOENT. On the second try it should all be dandy.

Thanks
Andres
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 30 20:33:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Aug 2012 20:33:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7BQl-00024D-8I; Thu, 30 Aug 2012 20:33:35 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ross.Philipson@citrix.com>) id 1T7BQj-000248-CT
	for xen-devel@lists.xen.org; Thu, 30 Aug 2012 20:33:33 +0000
Received: from [85.158.138.51:38329] by server-5.bemta-3.messagelabs.com id
	FF/49-13133-C1ECF305; Thu, 30 Aug 2012 20:33:32 +0000
X-Env-Sender: Ross.Philipson@citrix.com
X-Msg-Ref: server-14.tower-174.messagelabs.com!1346358808!21430233!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzUxOTI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7400 invoked from network); 30 Aug 2012 20:33:30 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Aug 2012 20:33:30 -0000
X-IronPort-AV: E=Sophos;i="4.80,342,1344211200"; d="scan'208";a="206718396"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	30 Aug 2012 20:32:57 +0000
Received: from FTLPMAILBOX02.citrite.net ([10.13.98.210]) by
	FTLPMAILMX02.citrite.net ([10.13.107.66]) with mapi; Thu, 30 Aug 2012
	16:32:57 -0400
From: Ross Philipson <Ross.Philipson@citrix.com>
To: Art Napor <artnapor@yahoo.com>, "xen-devel@lists.xen.org"
	<xen-devel@lists.xen.org>
Date: Thu, 30 Aug 2012 16:32:49 -0400
Thread-Topic: [Xen-devel]  [PATCH v3 01/04] HVM firmware passthrough HVM
Thread-Index: Ac2G32+VSwVSxZpkQNKb3CjBq26iPwADxlzA
Message-ID: <831D55AF5A11D64C9B4B43F59EEBF720A31BCC42AB@FTLPMAILBOX02.citrite.net>
References: <1346343064.91089.YahooMailNeo@web121001.mail.ne1.yahoo.com>
In-Reply-To: <1346343064.91089.YahooMailNeo@web121001.mail.ne1.yahoo.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
MIME-Version: 1.0
Subject: Re: [Xen-devel] [PATCH v3 01/04] HVM firmware passthrough HVM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> Is there a patch series that applies to Xen 4.1.2 for this feature? 
> 
> 
> http://markmail.org/message/ipmyqtuaepe7d7iy

I missed the cut for 4.2 so the series is targeted at 4.3. There are no patches to support any earlier versions.

Thanks,
Ross

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 30 20:37:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Aug 2012 20:37:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7BTx-0002DI-U6; Thu, 30 Aug 2012 20:36:53 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1T79Ry-0000KO-Lu
	for Xen-devel@lists.xensource.com; Thu, 30 Aug 2012 18:26:44 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1346351192!3210193!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=Mail larger than max spam size
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5425 invoked from network); 30 Aug 2012 18:26:34 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-6.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 30 Aug 2012 18:26:34 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7UIPGRL020074
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK)
	for <Xen-devel@lists.xensource.com>; Thu, 30 Aug 2012 18:25:17 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7UIPE5r028759
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO)
	for <Xen-devel@lists.xensource.com>; Thu, 30 Aug 2012 18:25:16 GMT
Received: from abhmt119.oracle.com (abhmt119.oracle.com [141.146.116.71])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7UIPDdq021626
	for <Xen-devel@lists.xensource.com>; Thu, 30 Aug 2012 13:25:13 -0500
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 30 Aug 2012 11:25:12 -0700
Date: Thu, 30 Aug 2012 11:25:11 -0700
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>
Message-ID: <20120830112511.19fb0a49@mantra.us.oracle.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.7.6 (GTK+ 2.18.9; x86_64-redhat-linux-gnu)
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="MP_/hjZb_3K/+AywVJErRJV.V7."
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
X-Mailman-Approved-At: Thu, 30 Aug 2012 20:36:52 +0000
Subject: [Xen-devel] [RFC PATCH 2/2]: hypervisor debugger
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--MP_/hjZb_3K/+AywVJErRJV.V7.
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

A tar file of the kdb subdirectory is attached.


--MP_/hjZb_3K/+AywVJErRJV.V7.
Content-Type: application/x-tar
Content-Transfer-Encoding: base64
Content-Disposition: attachment; filename=kdb.tar

a2RiLwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADAwMDA3NzUAMDAwMjc1
NgAwMDAyNzU2ADAwMDAwMDAwMDAwADEyMDE3NzI0NjI0ADAxMTU1NQAgNQAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAB1c3RhciAgAG1yYXRob3IAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAbXJhdGhvcgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABr
ZGIva2RiX2lvLmMAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAMDAwMDY2NAAwMDAyNzU2
ADAwMDI3NTYAMDAwMDAwMTEzMjUAMTE3NjU0NjU1NTYAMDEzMTcxACAwAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHVzdGFyICAAbXJhdGhvcgAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAABtcmF0aG9yAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAC8q
CiAqIENvcHlyaWdodCAoQykgMjAwOSwgTXVrZXNoIFJhdGhvciwgT3JhY2xlIENvcnAuICBBbGwg
cmlnaHRzIHJlc2VydmVkLgogKgogKiBUaGlzIHByb2dyYW0gaXMgZnJlZSBzb2Z0d2FyZTsgeW91
IGNhbiByZWRpc3RyaWJ1dGUgaXQgYW5kL29yCiAqIG1vZGlmeSBpdCB1bmRlciB0aGUgdGVybXMg
b2YgdGhlIEdOVSBHZW5lcmFsIFB1YmxpYwogKiBMaWNlbnNlIHYyIGFzIHB1Ymxpc2hlZCBieSB0
aGUgRnJlZSBTb2Z0d2FyZSBGb3VuZGF0aW9uLgogKgogKiBUaGlzIHByb2dyYW0gaXMgZGlzdHJp
YnV0ZWQgaW4gdGhlIGhvcGUgdGhhdCBpdCB3aWxsIGJlIHVzZWZ1bCwKICogYnV0IFdJVEhPVVQg
QU5ZIFdBUlJBTlRZOyB3aXRob3V0IGV2ZW4gdGhlIGltcGxpZWQgd2FycmFudHkgb2YKICogTUVS
Q0hBTlRBQklMSVRZIG9yIEZJVE5FU1MgRk9SIEEgUEFSVElDVUxBUiBQVVJQT1NFLiAgU2VlIHRo
ZSBHTlUKICogR2VuZXJhbCBQdWJsaWMgTGljZW5zZSBmb3IgbW9yZSBkZXRhaWxzLgogKgogKiBZ
b3Ugc2hvdWxkIGhhdmUgcmVjZWl2ZWQgYSBjb3B5IG9mIHRoZSBHTlUgR2VuZXJhbCBQdWJsaWMK
ICogTGljZW5zZSBhbG9uZyB3aXRoIHRoaXMgcHJvZ3JhbTsgaWYgbm90LCB3cml0ZSB0byB0aGUK
ICogRnJlZSBTb2Z0d2FyZSBGb3VuZGF0aW9uLCBJbmMuLCA1OSBUZW1wbGUgUGxhY2UgLSBTdWl0
ZSAzMzAsCiAqIEJvc3RvbiwgTUEgMDIxMTEwLTEzMDcsIFVTQS4KICovCiNpbmNsdWRlICJpbmNs
dWRlL2tkYmluYy5oIgoKI2RlZmluZSBLX0JBQ0tTUEFDRSAgMHg4ICAgICAgICAgICAgICAgICAg
IC8qIGN0cmwtSCAqLwojZGVmaW5lIEtfQkFDS1NQQUNFMSAweDdmICAgICAgICAgICAgICAgICAg
LyogY3RybC0/ICovCiNkZWZpbmUgS19VTkRFUlNDT1JFIDB4NWYKI2RlZmluZSBLX0NNRF9CVUZT
WiAgMTYwCiNkZWZpbmUgS19DTURfTUFYSSAgIChLX0NNRF9CVUZTWiAtIDEpICAgICAvKiBtYXgg
aW5kZXggaW4gYnVmZmVyICovCgojaWYgMCAgICAgICAgLyogbWFrZSBhIGhpc3RvcnkgYXJyYXkg
c29tZSBkYXkgKi8KI2RlZmluZSBLX1VQX0FSUk9XICAgICAgICAgICAgICAgICAgICAgICAgIC8q
IHNlcXVlbmNlIDogMWIgNWIgNDEgaWUsICdcZVtBJyAqLwojZGVmaW5lIEtfRE5fQVJST1cgICAg
ICAgICAgICAgICAgICAgICAgICAgLyogc2VxdWVuY2UgOiAxYiA1YiA0MiBpZSwgJ1xlW0InICov
CiNkZWZpbmUgS19OVU1fSElTVCAgIDMyCnN0YXRpYyBpbnQgY3Vyc29yOwpzdGF0aWMgY2hhciBj
bWRzX2FbTlVNX0hJU1RdW0tfQ01EX0JVRlNaXTsKI2VuZGlmCgpzdGF0aWMgY2hhciBjbWRzX2Fb
S19DTURfQlVGU1pdOwoKCnN0YXRpYyBpbnQKa2RiX2tleV92YWxpZChpbnQga2V5KQp7CiAgICAv
KiBub3RlOiBpc3NwYWNlKCkgaXMgbW9yZSB0aGFuICcgJywgaGVuY2Ugd2UgZG9uJ3QgdXNlIGl0
IGhlcmUgKi8KICAgIGlmIChpc2FsbnVtKGtleSkgfHwga2V5ID09ICcgJyB8fCBrZXkgPT0gS19C
QUNLU1BBQ0UgfHwga2V5ID09ICdcbicgfHwKICAgICAgICBrZXkgPT0gJz8nIHx8IGtleSA9PSBL
X1VOREVSU0NPUkUgfHwga2V5ID09ICc9JyB8fCBrZXkgPT0gJyEnKQogICAgICAgICAgICByZXR1
cm4gMTsKICAgIHJldHVybiAwOwp9CgovKiBkaXNwbGF5IGtkYiBwcm9tcHQgYW5kIHJlYWQgY29t
bWFuZCBmcm9tIHRoZSBjb25zb2xlIAogKiBSRVRVUk5TOiBhICdcbicgdGVybWluYXRlZCBjb21t
YW5kIGJ1ZmZlciAqLwpjaGFyICoKa2RiX2dldF9jbWRsaW5lKGNoYXIgKnByb21wdCkKewogICAg
I2RlZmluZSBLX0JFTEwgICAgIDB4NwogICAgI2RlZmluZSBLX0NUUkxfQyAgIDB4MwoKICAgIGlu
dCBrZXksIGk9MDsKCiAgICBrZGJwKHByb21wdCk7CiAgICBtZW1zZXQoY21kc19hLCAwLCBLX0NN
RF9CVUZTWik7CiAgICBjbWRzX2FbS19DTURfQlVGU1otMV0gPSAnXG4nOwoKICAgIGRvIHsKICAg
ICAgICBrZXkgPSBjb25zb2xlX2dldGMoKTsKICAgICAgICBpZiAoa2V5ID09ICdccicpIAogICAg
ICAgICAgICBrZXkgPSAnXG4nOwogICAgICAgIGlmIChrZXkgPT0gS19CQUNLU1BBQ0UxKSAKICAg
ICAgICAgICAga2V5ID0gS19CQUNLU1BBQ0U7CgogICAgICAgIGlmIChrZXkgPT0gS19DVFJMX0Mg
fHwgKGk9PUtfQ01EX01BWEkgJiYga2V5ICE9ICdcbicpKSB7CiAgICAgICAgICAgIGNvbnNvbGVf
cHV0YygnXG4nKTsKICAgICAgICAgICAgaWYgKGkgPj0gS19DTURfTUFYSSkgewogICAgICAgICAg
ICAgICAga2RicCgiS0RCOiBjbWQgYnVmZmVyIG92ZXJmbG93XG4iKTsKICAgICAgICAgICAgICAg
IGNvbnNvbGVfcHV0YyhLX0JFTEwpOwogICAgICAgICAgICB9CiAgICAgICAgICAgIG1lbXNldChj
bWRzX2EsIDAsIEtfQ01EX0JVRlNaKTsKICAgICAgICAgICAgaSA9IDA7CiAgICAgICAgICAgIGtk
YnAocHJvbXB0KTsKICAgICAgICAgICAgY29udGludWU7CiAgICAgICAgfQogICAgICAgIGlmICgh
a2RiX2tleV92YWxpZChrZXkpKSB7CiAgICAgICAgICAgIGNvbnNvbGVfcHV0YyhLX0JFTEwpOwog
ICAgICAgICAgICBjb250aW51ZTsKICAgICAgICB9CiAgICAgICAgaWYgKGtleSA9PSBLX0JBQ0tT
UEFDRSkgewogICAgICAgICAgICBpZiAoaT09MCkgewogICAgICAgICAgICAgICAgY29uc29sZV9w
dXRjKEtfQkVMTCk7CiAgICAgICAgICAgICAgICBjb250aW51ZTsKICAgICAgICAgICAgfSBlbHNl
IAogICAgICAgICAgICAgICAgY21kc19hWy0taV0gPSAnXDAnOwogICAgICAgICAgICAgICAgY29u
c29sZV9wdXRjKEtfQkFDS1NQQUNFKTsKICAgICAgICAgICAgICAgIGNvbnNvbGVfcHV0YygnICcp
OyAgICAgICAgLyogZXJhc2UgY2hhcmFjdGVyICovCiAgICAgICAgfSBlbHNlCiAgICAgICAgICAg
IGNtZHNfYVtpKytdID0ga2V5OwoKICAgICAgICBjb25zb2xlX3B1dGMoa2V5KTsKCiAgICB9IHdo
aWxlIChrZXkgIT0gJ1xuJyk7CgogICAgcmV0dXJuIGNtZHNfYTsKfQoKLyoKICogcHJpbnRrIHRh
a2VzIGEgbG9jaywgYW4gTk1JIGNvdWxkIGNvbWUgaW4gYWZ0ZXIgdGhhdCwgYW5kIGFub3RoZXIg
Y3B1IG1heSAKICogc3Bpbi4gYWxzbywgdGhlIGNvbnNvbGUgbG9jayBpcyBmb3JjZWQgdW5sb2Nr
LCBzbyBwYW5pYyBpcyBiZWVuIHNlZW4gb24gCiAqIDggd2F5LiBoZW5jZSwgbm8gcHJpbnRrKCkg
Y2FsbHMuCiAqLwpzdGF0aWMgdm9sYXRpbGUgaW50IGtkYnBfZ2F0ZSA9IDA7CnZvaWQKa2RicChj
b25zdCBjaGFyICpmbXQsIC4uLikKewogICAgc3RhdGljIGNoYXIgYnVmWzEwMjRdOwogICAgdmFf
bGlzdCBhcmdzOwogICAgY2hhciAqcDsKICAgIGludCBpPTA7CgogICAgd2hpbGUgKChfX2NtcHhj
aGcoJmtkYnBfZ2F0ZSwgMCwxLCBzaXplb2Yoa2RicF9nYXRlKSkgIT0gMCkgJiYgaSsrPDEwMDAp
CiAgICAgICAgbWRlbGF5KDEwKTsKCiAgICB2YV9zdGFydChhcmdzLCBmbXQpOwogICAgKHZvaWQp
dnNucHJpbnRmKGJ1Ziwgc2l6ZW9mKGJ1ZiksIGZtdCwgYXJncyk7CiAgICB2YV9lbmQoYXJncyk7
CgogICAgZm9yIChwPWJ1ZjsgKnAgIT0gJ1wwJzsgcCsrKQogICAgICAgIGNvbnNvbGVfcHV0Yygq
cCk7CiAgICBrZGJwX2dhdGUgPSAwOwp9CgoKLyoKICogY29weS9yZWFkIG1hY2hpbmUgbWVtb3J5
LiAKICogUkVUVVJOUzogbnVtYmVyIG9mIGJ5dGVzIGNvcGllZCAKICovCmludAprZGJfcmVhZF9t
bWVtKGtkYm1hX3QgbWFkZHIsIGtkYmJ5dF90ICpkYnVmLCBpbnQgbGVuKQp7CiAgICB1bG9uZyBy
ZW1haW4sIG9yaWc9bGVuOwoKICAgIHdoaWxlIChsZW4gPiAwKSB7CiAgICAgICAgdWxvbmcgcGFn
ZWNudCA9IG1pbl90KGxvbmcsIFBBR0VfU0laRS0obWFkZHImflBBR0VfTUFTSyksIGxlbik7CiAg
ICAgICAgY2hhciAqdmEgPSBtYXBfZG9tYWluX3BhZ2UobWFkZHIgPj4gUEFHRV9TSElGVCk7Cgog
ICAgICAgIHZhID0gdmEgKyAobWFkZHIgJiAoUEFHRV9TSVpFLTEpKTsgICAgICAgIC8qIGFkZCBw
YWdlIG9mZnNldCAqLwogICAgICAgIHJlbWFpbiA9IF9fY29weV9mcm9tX3VzZXIoZGJ1ZiwgKHZv
aWQgKil2YSwgcGFnZWNudCk7CiAgICAgICAgS0RCR1AxKCJtYWRkcjoleCB2YTolcCBsZW46JXgg
cGFnZWNudDoleCByZW06JXhcbiIsIAogICAgICAgICAgICAgICBtYWRkciwgdmEsIGxlbiwgcGFn
ZWNudCwgcmVtYWluKTsKICAgICAgICB1bm1hcF9kb21haW5fcGFnZSh2YSk7CiAgICAgICAgbGVu
ID0gbGVuICAtIChwYWdlY250IC0gcmVtYWluKTsKICAgICAgICBpZiAocmVtYWluICE9IDApCiAg
ICAgICAgICAgIGJyZWFrOwogICAgICAgIG1hZGRyICs9IHBhZ2VjbnQ7CiAgICAgICAgZGJ1ZiAr
PSBwYWdlY250OwogICAgfQogICAgcmV0dXJuIG9yaWcgLSBsZW47Cn0KCgovKgogKiBjb3B5L3Jl
YWQgZ3Vlc3Qgb3IgaHlwZXJ2aXNvciBtZW1vcnkuIChkb21pZCA9PSBET01JRF9JRExFKSA9PiBo
eXAKICogUkVUVVJOUzogbnVtYmVyIG9mIGJ5dGVzIGNvcGllZCAKICovCmludAprZGJfcmVhZF9t
ZW0oa2RidmFfdCBzYWRkciwga2RiYnl0X3QgKmRidWYsIGludCBsZW4sIGRvbWlkX3QgZG9taWQp
CnsKICAgIHJldHVybiAobGVuIC0gZGJnX3J3X21lbShzYWRkciwgZGJ1ZiwgbGVuLCBkb21pZCwg
MCwgMCkpOwp9CgovKgogKiB3cml0ZSBndWVzdCBvciBoeXBlcnZpc29yIG1lbW9yeS4gKGRvbWlk
ID09IERPTUlEX0lETEUpID0+IGh5cAogKiBSRVRVUk5TOiBudW1iZXIgb2YgYnl0ZXMgd3JpdHRl
bgogKi8KaW50CmtkYl93cml0ZV9tZW0oa2RidmFfdCBkYWRkciwga2RiYnl0X3QgKnNidWYsIGlu
dCBsZW4sIGRvbWlkX3QgZG9taWQpCnsKICAgIHJldHVybiAobGVuIC0gZGJnX3J3X21lbShkYWRk
ciwgc2J1ZiwgbGVuLCBkb21pZCwgMSwgMCkpOwp9CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAa2RiL2luY2x1ZGUv
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADAwMDA3NzUAMDAwMjc1NgAwMDAyNzU2ADAw
MDAwMDAwMDAwADEyMDE3NTA0MDIxADAxMzE2MwAgNQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAB1c3RhciAgAG1yYXRob3IAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
bXJhdGhvcgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABrZGIvaW5jbHVkZS9r
ZGJkZWZzLmgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAMDAwMDY2NAAwMDAyNzU2ADAwMDI3NTYAMDAw
MDAwMDYzNzUAMTE3NjU0NjU1NTYAMDE1MDA1ACAwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAHVzdGFyICAAbXJhdGhvcgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABt
cmF0aG9yAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAC8qCiAqIENvcHlyaWdo
dCAoQykgMjAwOSwgTXVrZXNoIFJhdGhvciwgT3JhY2xlIENvcnAuICBBbGwgcmlnaHRzIHJlc2Vy
dmVkLgogKgogKiBUaGlzIHByb2dyYW0gaXMgZnJlZSBzb2Z0d2FyZTsgeW91IGNhbiByZWRpc3Ry
aWJ1dGUgaXQgYW5kL29yCiAqIG1vZGlmeSBpdCB1bmRlciB0aGUgdGVybXMgb2YgdGhlIEdOVSBH
ZW5lcmFsIFB1YmxpYwogKiBMaWNlbnNlIHYyIGFzIHB1Ymxpc2hlZCBieSB0aGUgRnJlZSBTb2Z0
d2FyZSBGb3VuZGF0aW9uLgogKgogKiBUaGlzIHByb2dyYW0gaXMgZGlzdHJpYnV0ZWQgaW4gdGhl
IGhvcGUgdGhhdCBpdCB3aWxsIGJlIHVzZWZ1bCwKICogYnV0IFdJVEhPVVQgQU5ZIFdBUlJBTlRZ
OyB3aXRob3V0IGV2ZW4gdGhlIGltcGxpZWQgd2FycmFudHkgb2YKICogTUVSQ0hBTlRBQklMSVRZ
IG9yIEZJVE5FU1MgRk9SIEEgUEFSVElDVUxBUiBQVVJQT1NFLiAgU2VlIHRoZSBHTlUKICogR2Vu
ZXJhbCBQdWJsaWMgTGljZW5zZSBmb3IgbW9yZSBkZXRhaWxzLgogKgogKiBZb3Ugc2hvdWxkIGhh
dmUgcmVjZWl2ZWQgYSBjb3B5IG9mIHRoZSBHTlUgR2VuZXJhbCBQdWJsaWMKICogTGljZW5zZSBh
bG9uZyB3aXRoIHRoaXMgcHJvZ3JhbTsgaWYgbm90LCB3cml0ZSB0byB0aGUKICogRnJlZSBTb2Z0
d2FyZSBGb3VuZGF0aW9uLCBJbmMuLCA1OSBUZW1wbGUgUGxhY2UgLSBTdWl0ZSAzMzAsCiAqIEJv
c3RvbiwgTUEgMDIxMTEwLTEzMDcsIFVTQS4KICovCgojaWZuZGVmIF9LREJERUZTX0gKI2RlZmlu
ZSBfS0RCREVGU19ICgovKiByZWFzb24gd2UgYXJlIGVudGVyaW5nIGtkYm1haW4gKGJwID09IGJy
ZWFrcG9pbnQpICovCnR5cGVkZWYgZW51bSB7CiAgICBLREJfUkVBU09OX0tFWUJPQVJEPTEsICAv
KiBLZXlib2FyZCBlbnRyeSAtIGFsd2F5cyAxICovCiAgICBLREJfUkVBU09OX0JQRVhDUCwgICAg
ICAvKiAjQlAgZXhjcDogc3cgYnAgKElOVDMpICovCiAgICBLREJfUkVBU09OX0RCRVhDUCwgICAg
ICAvKiAjREIgZXhjcDogVEYgZmxhZyBvciBIVyBicCAqLwogICAgS0RCX1JFQVNPTl9QQVVTRV9J
UEksICAgLyogcmVjZWl2ZWQgcGF1c2UgSVBJIGZyb20gYW5vdGhlciBDUFUgKi8KfSBrZGJfcmVh
c29uX3Q7CgoKLyogY3B1IHN0YXRlOiBwYXN0LCBwcmVzZW50LCBhbmQgZnV0dXJlICovCnR5cGVk
ZWYgZW51bSB7CiAgICBLREJfQ1BVX0lOVkFMPTAsICAgICAvKiBpbnZhbGlkIHZhbHVlLiBub3Qg
aW4gb3IgbGVhdmluZyBrZGIgKi8KICAgIEtEQl9DUFVfUVVJVCwgICAgICAgIC8qIG1haW4gY3B1
IGRvZXMgR08uIGFsbCBvdGhlcnMgZG8gUVVJVCAqLwogICAgS0RCX0NQVV9QQVVTRSwgICAgICAg
LyogY3B1IGlzIHBhdXNlZCAqLwogICAgS0RCX0NQVV9ESVNBQkxFLCAgICAgLyogZGlzYWJsZSBp
bnRlcnJ1cHRzICovCiAgICBLREJfQ1BVX1NIT1dQQywgICAgICAvKiBhbGwgY3B1cyBtdXN0IGRp
c3BsYXkgdGhlaXIgcGMgKi8KICAgIEtEQl9DUFVfRE9fVk1FWElULCAgIC8qIGFsbCBjcHVzIG11
c3QgZG8gdm1jcyB2bWV4aXQuIGludGVsIG9ubHkgKi8KICAgIEtEQl9DUFVfTUFJTl9LREIsICAg
IC8qIGNwdSBpbiBrZGIgbWFpbiBjb21tYW5kIGxvb3AgKi8KICAgIEtEQl9DUFVfR08sICAgICAg
ICAgIC8qIHVzZXIgZW50ZXJlZCBnbyBmb3IgdGhpcyBjcHUgKi8KICAgIEtEQl9DUFVfU1MsICAg
ICAgICAgIC8qIHNpbmdsZSBzdGVwIGZvciB0aGlzIGNwdSAqLwogICAgS0RCX0NQVV9OSSwgICAg
ICAgICAgLyogZ28gdG8gbmV4dCBpbnN0ciBhZnRlciB0aGUgY2FsbCBpbnN0ciAqLwogICAgS0RC
X0NQVV9JTlNUQUxMX0JQLCAgLyogZGVsYXllZCBpbnN0YWxsIG9mIHN3IGJwKHMpIGJ5IHRoaXMg
Y3B1ICovCn0ga2RiX2NwdV9jbWRfdDsKCi8qID09PT09PT09PT09PT0ga2RiIGNvbW1hbmRzID09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PSAqLwoKdHlwZWRlZiBr
ZGJfY3B1X2NtZF90ICgqa2RiX2Z1bmNfdCkoaW50LCBjb25zdCBjaGFyICoqLCBzdHJ1Y3QgY3B1
X3VzZXJfcmVncyAqKTsKdHlwZWRlZiBrZGJfY3B1X2NtZF90ICgqa2RiX3VzZ2ZfdCkodm9pZCk7
Cgp0eXBlZGVmIGVudW0gewogICAgS0RCX1JFUEVBVF9OT05FID0gMCwgICAgLyogRG8gbm90IHJl
cGVhdCB0aGlzIGNvbW1hbmQgKi8KICAgIEtEQl9SRVBFQVRfTk9fQVJHUywgICAgIC8qIFJlcGVh
dCB0aGUgY29tbWFuZCB3aXRob3V0IGFyZ3VtZW50cyAqLwogICAgS0RCX1JFUEVBVF9XSVRIX0FS
R1MsICAgLyogUmVwZWF0IHRoZSBjb21tYW5kIGluY2x1ZGluZyBpdHMgYXJndW1lbnRzICovCn0g
a2RiX3JlcGVhdF90OwoKdHlwZWRlZiBzdHJ1Y3QgX2tkYnRhYiB7CiAgICBjaGFyICAgICAgICAq
a2RiX2NtZF9uYW1lOyAgICAgICAgLyogQ29tbWFuZCBuYW1lICovCiAgICBrZGJfZnVuY190ICAg
a2RiX2NtZF9mdW5jOyAgICAgICAgLyogcHRyIHRvIGZ1bmN0aW9uIHRvIGV4ZWN1dGUgY29tbWFu
ZCAqLwogICAga2RiX3VzZ2ZfdCAgIGtkYl9jbWRfdXNnZjsgICAgICAgIC8qIHVzYWdlIGZ1bmN0
aW9uIHB0ciAqLwogICAgaW50ICAgICAgICAgIGtkYl9jbWRfY3Jhc2hfYXZhaWw7IC8qIGF2YWls
YWJsZSBpbiBzeXMgZmF0YWwvY3Jhc2ggc3RhdGUgKi8KICAgIGtkYl9yZXBlYXRfdCBrZGJfY21k
X3JlcGVhdDsgICAgICAvKiBEb2VzIGNvbW1hbmQgYXV0byByZXBlYXQgb24gZW50ZXI/ICovCn0g
a2RidGFiX3Q7CgoKLyogPT09PT09PT09PT09PSB0eXBlcyBhbmQgc3R1ZmYgPT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT0gKi8KI2RlZmluZSBCRkRfSU5WQUwgKH4wVUwp
ICAgICAgICAgICAgLyogaW52YWxpZCBiZmRfdm1hICovCgojaWYgZGVmaW5lZChfX3g4Nl82NF9f
KQogICNkZWZpbmUgS0RCSVAgcmlwCiAgI2RlZmluZSBLREJTUCByc3AKI2Vsc2UKICAjZGVmaW5l
IEtEQklQIGVpcAogICNkZWZpbmUgS0RCU1AgZXNwCiNlbmRpZgoKLyogPT09PT09PT09PT09PSBt
YWNyb3MgPT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT0g
Ki8KZXh0ZXJuIHZvbGF0aWxlIGludCBrZGJkYmc7CiNkZWZpbmUgS0RCR1AoLi4uKSB7KGtkYmRi
ZykgPyBrZGJwKF9fVkFfQVJHU19fKTowO30KI2RlZmluZSBLREJHUDEoLi4uKSB7KGtkYmRiZz4x
KSA/IGtkYnAoX19WQV9BUkdTX18pOjA7fQojZGVmaW5lIEtEQkdQMiguLi4pIHsoa2RiZGJnPjIp
ID8ga2RicChfX1ZBX0FSR1NfXyk6MDt9CiNkZWZpbmUgS0RCR1AzKC4uLikgezA7fTsKCiNkZWZp
bmUgS0RCTUlOKHgseSkgKCgoeCk8KHkpKT8oeCk6KHkpKQoKI2VuZGlmICAvKiAhX0tEQkRFRlNf
SCAqLwoAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAa2RiL2luY2x1ZGUva2RiaW5jLmgA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAADAwMDA2NjQAMDAwMjc1NgAwMDAyNzU2ADAwMDAwMDAzNDM3
ADExNzY1NDY1NTU2ADAxNDYzMQAgMAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAB1c3RhciAgAG1yYXRob3IAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAbXJhdGhvcgAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAvKgogKiBDb3B5cmlnaHQgKEMpIDIw
MDksIE11a2VzaCBSYXRob3IsIE9yYWNsZSBDb3JwLiAgQWxsIHJpZ2h0cyByZXNlcnZlZC4KICoK
ICogVGhpcyBwcm9ncmFtIGlzIGZyZWUgc29mdHdhcmU7IHlvdSBjYW4gcmVkaXN0cmlidXRlIGl0
IGFuZC9vcgogKiBtb2RpZnkgaXQgdW5kZXIgdGhlIHRlcm1zIG9mIHRoZSBHTlUgR2VuZXJhbCBQ
dWJsaWMKICogTGljZW5zZSB2MiBhcyBwdWJsaXNoZWQgYnkgdGhlIEZyZWUgU29mdHdhcmUgRm91
bmRhdGlvbi4KICoKICogVGhpcyBwcm9ncmFtIGlzIGRpc3RyaWJ1dGVkIGluIHRoZSBob3BlIHRo
YXQgaXQgd2lsbCBiZSB1c2VmdWwsCiAqIGJ1dCBXSVRIT1VUIEFOWSBXQVJSQU5UWTsgd2l0aG91
dCBldmVuIHRoZSBpbXBsaWVkIHdhcnJhbnR5IG9mCiAqIE1FUkNIQU5UQUJJTElUWSBvciBGSVRO
RVNTIEZPUiBBIFBBUlRJQ1VMQVIgUFVSUE9TRS4gIFNlZSB0aGUgR05VCiAqIEdlbmVyYWwgUHVi
bGljIExpY2Vuc2UgZm9yIG1vcmUgZGV0YWlscy4KICoKICogWW91IHNob3VsZCBoYXZlIHJlY2Vp
dmVkIGEgY29weSBvZiB0aGUgR05VIEdlbmVyYWwgUHVibGljCiAqIExpY2Vuc2UgYWxvbmcgd2l0
aCB0aGlzIHByb2dyYW07IGlmIG5vdCwgd3JpdGUgdG8gdGhlCiAqIEZyZWUgU29mdHdhcmUgRm91
bmRhdGlvbiwgSW5jLiwgNTkgVGVtcGxlIFBsYWNlIC0gU3VpdGUgMzMwLAogKiBCb3N0b24sIE1B
IDAyMTExMC0xMzA3LCBVU0EuCiAqLwoKI2lmbmRlZiBfS0RCSU5DX0gKI2RlZmluZSBfS0RCSU5D
X0gKCiNpbmNsdWRlIDx4ZW4vY29tcGlsZS5oPgojaW5jbHVkZSA8eGVuL2NvbmZpZy5oPgojaW5j
bHVkZSA8eGVuL3ZlcnNpb24uaD4KI2luY2x1ZGUgPHhlbi9jb21wYXQuaD4KI2luY2x1ZGUgPHhl
bi9pbml0Lmg+CiNpbmNsdWRlIDx4ZW4vbGliLmg+CiNpbmNsdWRlIDx4ZW4vZXJybm8uaD4KI2lu
Y2x1ZGUgPHhlbi9zY2hlZC5oPgojaW5jbHVkZSA8eGVuL2RvbWFpbi5oPgojaW5jbHVkZSA8eGVu
L21tLmg+CiNpbmNsdWRlIDx4ZW4vZXZlbnQuaD4KI2luY2x1ZGUgPHhlbi90aW1lLmg+CiNpbmNs
dWRlIDx4ZW4vY29uc29sZS5oPgojaW5jbHVkZSA8eGVuL3NvZnRpcnEuaD4KI2luY2x1ZGUgPHhl
bi9kb21haW5fcGFnZS5oPgojaW5jbHVkZSA8eGVuL3Jhbmdlc2V0Lmg+CiNpbmNsdWRlIDx4ZW4v
Z3Vlc3RfYWNjZXNzLmg+CiNpbmNsdWRlIDx4ZW4vaHlwZXJjYWxsLmg+CiNpbmNsdWRlIDx4ZW4v
ZGVsYXkuaD4KI2luY2x1ZGUgPHhlbi9zaHV0ZG93bi5oPgojaW5jbHVkZSA8eGVuL3BlcmNwdS5o
PgojaW5jbHVkZSA8eGVuL211bHRpY2FsbC5oPgojaW5jbHVkZSA8eGVuL3JjdXBkYXRlLmg+CiNp
bmNsdWRlIDx4ZW4vY3R5cGUuaD4KI2luY2x1ZGUgPHhlbi9zeW1ib2xzLmg+CiNpbmNsdWRlIDx4
ZW4vc2h1dGRvd24uaD4KI2luY2x1ZGUgPHhlbi9zZXJpYWwuaD4KI2luY2x1ZGUgPHhlbi9ncmFu
dF90YWJsZS5oPgojaW5jbHVkZSA8YXNtL2RlYnVnZ2VyLmg+CiNpbmNsdWRlIDxhc20vc2hhcmVk
Lmg+CiNpbmNsdWRlIDxhc20vYXBpY2RlZi5oPgoKI2luY2x1ZGUgPGFzbS9ubWkuaD4KI2luY2x1
ZGUgPGFzbS9wMm0uaD4KI2luY2x1ZGUgPGFzbS9kZWJ1Z3JlZy5oPgojaW5jbHVkZSA8cHVibGlj
L3NjaGVkLmg+CiNpbmNsdWRlIDxwdWJsaWMvdmNwdS5oPgojaWZkZWYgX1hFTl9MQVRFU1QKI2lu
Y2x1ZGUgPHhzbS94c20uaD4KI2VuZGlmCgojaW5jbHVkZSA8YXNtL2h2bS92bXgvdm14Lmg+Cgoj
aW5jbHVkZSAia2RiX2V4dGVybi5oIgojaW5jbHVkZSAia2RiZGVmcy5oIgojaW5jbHVkZSAia2Ri
cHJvdG8uaCIKCiNlbmRpZiAvKiAhX0tEQklOQ19IICovCgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAGtkYi9pbmNsdWRlL2tkYl9leHRlcm4uaAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAwMDAwNjY0ADAwMDI3NTYAMDAwMjc1NgAwMDAwMDAwNDM2MAAxMjAx
NzUwNDAyMQAwMTU0NjQAIDAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAdXN0YXIgIABtcmF0aG9yAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAG1yYXRob3IAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAALyoKICogQ29weXJpZ2h0IChDKSAyMDA5LCBN
dWtlc2ggUmF0aG9yLCBPcmFjbGUgQ29ycC4gIEFsbCByaWdodHMgcmVzZXJ2ZWQuCiAqCiAqIFRo
aXMgcHJvZ3JhbSBpcyBmcmVlIHNvZnR3YXJlOyB5b3UgY2FuIHJlZGlzdHJpYnV0ZSBpdCBhbmQv
b3IKICogbW9kaWZ5IGl0IHVuZGVyIHRoZSB0ZXJtcyBvZiB0aGUgR05VIEdlbmVyYWwgUHVibGlj
CiAqIExpY2Vuc2UgdjIgYXMgcHVibGlzaGVkIGJ5IHRoZSBGcmVlIFNvZnR3YXJlIEZvdW5kYXRp
b24uCiAqCiAqIFRoaXMgcHJvZ3JhbSBpcyBkaXN0cmlidXRlZCBpbiB0aGUgaG9wZSB0aGF0IGl0
IHdpbGwgYmUgdXNlZnVsLAogKiBidXQgV0lUSE9VVCBBTlkgV0FSUkFOVFk7IHdpdGhvdXQgZXZl
biB0aGUgaW1wbGllZCB3YXJyYW50eSBvZgogKiBNRVJDSEFOVEFCSUxJVFkgb3IgRklUTkVTUyBG
T1IgQSBQQVJUSUNVTEFSIFBVUlBPU0UuICBTZWUgdGhlIEdOVQogKiBHZW5lcmFsIFB1YmxpYyBM
aWNlbnNlIGZvciBtb3JlIGRldGFpbHMuCiAqCiAqIFlvdSBzaG91bGQgaGF2ZSByZWNlaXZlZCBh
IGNvcHkgb2YgdGhlIEdOVSBHZW5lcmFsIFB1YmxpYwogKiBMaWNlbnNlIGFsb25nIHdpdGggdGhp
cyBwcm9ncmFtOyBpZiBub3QsIHdyaXRlIHRvIHRoZQogKiBGcmVlIFNvZnR3YXJlIEZvdW5kYXRp
b24sIEluYy4sIDU5IFRlbXBsZSBQbGFjZSAtIFN1aXRlIDMzMCwKICogQm9zdG9uLCBNQSAwMjEx
MTAtMTMwNywgVVNBLgogKi8KCiNpZm5kZWYgX0tEQl9FWFRFUk5fSAojZGVmaW5lIF9LREJfRVhU
RVJOX0gKCiNkZWZpbmUgS0RCX1RSQVBfRkFUQUwgICAgIDEgICAgLyogdHJhcCBpcyBmYXRhbC4g
Y2FuJ3QgcmVzdW1lIGZyb20ga2RiICovCiNkZWZpbmUgS0RCX1RSQVBfTk9ORkFUQUwgIDIgICAg
LyogY2FuIHJlc3VtZSBmcm9tIGtkYiAqLwojZGVmaW5lIEtEQl9UUkFQX0tEQlNUQUNLICAzICAg
IC8qIHRvIGRlYnVnIGtkYiBpdHNlbGYuIGR1bXAga2RiIHN0YWNrICovCgovKiBmb2xsb3dpbmcg
Y2FuIGJlIGNhbGxlZCBmcm9tIGFueXdoZXJlIGluIHhlbiB0byBkZWJ1ZyAqLwpleHRlcm4gdm9p
ZCBrZGJfdHJhcF9pbW1lZChpbnQpOwpleHRlcm4gdm9pZCBrZGJ0cmModW5zaWduZWQgaW50LCB1
bnNpZ25lZCBpbnQsIHVpbnQ2NF90LCB1aW50NjRfdCwgdWludDY0X3QpOwpleHRlcm4gdm9pZCBr
ZGJwKGNvbnN0IGNoYXIgKmZtdCwgLi4uKTsKCnR5cGVkZWYgdW5zaWduZWQgbG9uZyBrZGJ2YV90
Owp0eXBlZGVmIHVuc2lnbmVkIGNoYXIga2RiYnl0X3Q7CnR5cGVkZWYgdW5zaWduZWQgbG9uZyBr
ZGJtYV90OwoKZXh0ZXJuIHVuc2lnbmVkIGxvbmcga2RiX2RyNzsKCgpleHRlcm4gdm9sYXRpbGUg
aW50IGtkYl9zZXNzaW9uX2JlZ3VuOwpleHRlcm4gdm9sYXRpbGUgaW50IGtkYl9lbmFibGVkOwpl
eHRlcm4gdm9pZCBrZGJfaW5pdCh2b2lkKTsKZXh0ZXJuIGludCBrZGJfa2V5Ym9hcmQoc3RydWN0
IGNwdV91c2VyX3JlZ3MgKik7CmV4dGVybiB2b2lkIGtkYl9zc25pX3JlZW50ZXIoc3RydWN0IGNw
dV91c2VyX3JlZ3MgKik7CmV4dGVybiBpbnQga2RiX2hhbmRsZV90cmFwX2VudHJ5KGludCwgc3Ry
dWN0IGNwdV91c2VyX3JlZ3MgKik7CmV4dGVybiBpbnQga2RiX3RyYXBfZmF0YWwoaW50LCBzdHJ1
Y3QgY3B1X3VzZXJfcmVncyAqKTsgIC8qIGZhdGFsIHdpdGggcmVncyAqLwpleHRlcm4gdm9pZCBr
ZGJfZHVtcF92bWNzKHVpbnQxNl90IGRpZCwgaW50IHZpZCk7CnZvaWQga2RiX2R1bXBfdm1jYih1
aW50MTZfdCBkaWQsIGludCB2aWQpOwpleHRlcm4gdm9pZCBrZGJfZHVtcF90aW1lX3BjcHUodm9p
ZCk7CgoKI2RlZmluZSBWTVBUUlNUX09QQ09ERSAgIi5ieXRlIDB4MGYsMHhjN1xuIiAgICAgLyog
cmVnL29wY29kZTogLzcgKi8KI2RlZmluZSBNT0RSTV9FQVhfMDcgICAgIi5ieXRlIDB4MzhcbiIg
ICAgICAgICAgLyogW0VBWF0sIHdpdGggcmVnL29wY29kZTogLzcgKi8Kc3RhdGljIGlubGluZSB2
b2lkIF9fdm1wdHJzdCh1NjQgKmFkZHIpCnsKICAgIGFzbSB2b2xhdGlsZSAoIFZNUFRSU1RfT1BD
T0RFCiAgICAgICAgICAgICAgICAgICBNT0RSTV9FQVhfMDcKICAgICAgICAgICAgICAgICAgIDoK
ICAgICAgICAgICAgICAgICAgIDogImEiIChhZGRyKQogICAgICAgICAgICAgICAgICAgOiAibWVt
b3J5Iik7Cn0KCiNkZWZpbmUgaXNfaHZtX29yX2h5Yl9kb21haW4gaXNfaHZtX2RvbWFpbgojZGVm
aW5lIGlzX2h2bV9vcl9oeWJfdmNwdSBpc19odm1fdmNwdQojZGVmaW5lIGlzX2h5YnJpZF92Y3B1
KHgpICgwKQoKCiNlbmRpZiAgLyogX0tEQl9FWFRFUk5fSCAqLwoAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAGtkYi9pbmNsdWRlL2tkYnByb3RvLmgAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAwMDAwNjY0ADAwMDI3NTYAMDAwMjc1NgAwMDAwMDAwNTUwMwAxMTc2NTQ2NTU1
NgAwMTUyMTcAIDAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAdXN0
YXIgIABtcmF0aG9yAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAG1yYXRob3IAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAALyoKICogQ29weXJpZ2h0IChDKSAyMDA5LCBNdWtlc2gg
UmF0aG9yLCBPcmFjbGUgQ29ycC4gIEFsbCByaWdodHMgcmVzZXJ2ZWQuCiAqCiAqIFRoaXMgcHJv
Z3JhbSBpcyBmcmVlIHNvZnR3YXJlOyB5b3UgY2FuIHJlZGlzdHJpYnV0ZSBpdCBhbmQvb3IKICog
bW9kaWZ5IGl0IHVuZGVyIHRoZSB0ZXJtcyBvZiB0aGUgR05VIEdlbmVyYWwgUHVibGljCiAqIExp
Y2Vuc2UgdjIgYXMgcHVibGlzaGVkIGJ5IHRoZSBGcmVlIFNvZnR3YXJlIEZvdW5kYXRpb24uCiAq
CiAqIFRoaXMgcHJvZ3JhbSBpcyBkaXN0cmlidXRlZCBpbiB0aGUgaG9wZSB0aGF0IGl0IHdpbGwg
YmUgdXNlZnVsLAogKiBidXQgV0lUSE9VVCBBTlkgV0FSUkFOVFk7IHdpdGhvdXQgZXZlbiB0aGUg
aW1wbGllZCB3YXJyYW50eSBvZgogKiBNRVJDSEFOVEFCSUxJVFkgb3IgRklUTkVTUyBGT1IgQSBQ
QVJUSUNVTEFSIFBVUlBPU0UuICBTZWUgdGhlIEdOVQogKiBHZW5lcmFsIFB1YmxpYyBMaWNlbnNl
IGZvciBtb3JlIGRldGFpbHMuCiAqCiAqIFlvdSBzaG91bGQgaGF2ZSByZWNlaXZlZCBhIGNvcHkg
b2YgdGhlIEdOVSBHZW5lcmFsIFB1YmxpYwogKiBMaWNlbnNlIGFsb25nIHdpdGggdGhpcyBwcm9n
cmFtOyBpZiBub3QsIHdyaXRlIHRvIHRoZQogKiBGcmVlIFNvZnR3YXJlIEZvdW5kYXRpb24sIElu
Yy4sIDU5IFRlbXBsZSBQbGFjZSAtIFN1aXRlIDMzMCwKICogQm9zdG9uLCBNQSAwMjExMTAtMTMw
NywgVVNBLgogKi8KCiNpZm5kZWYgX0tEQlBST1RPX0gKI2RlZmluZSBfS0RCUFJPVE9fSAoKLyog
aHlwZXJ2aXNvciBpbnRlcmZhY2VzIHVzZSBieSBrZGIgb3Iga2RiIGludGVyZmFjZXMgaW4geGVu
IGZpbGVzICovCmV4dGVybiB2b2lkIGNvbnNvbGVfcHV0YyhjaGFyKTsKZXh0ZXJuIGludCBjb25z
b2xlX2dldGModm9pZCk7CmV4dGVybiB2b2lkIHNob3dfdHJhY2Uoc3RydWN0IGNwdV91c2VyX3Jl
Z3MgKik7CmV4dGVybiB2b2lkIGtkYl9kdW1wX3RpbWVyX3F1ZXVlcyh2b2lkKTsKZXh0ZXJuIHZv
aWQga2RiX3RpbWVfcmVzdW1lKGludCk7CmV4dGVybiB2b2lkIGtkYl9wcmludF9zY2hlZF9pbmZv
KHZvaWQpOwpleHRlcm4gdm9pZCBrZGJfY3Vycl9jcHVfZmx1c2hfdm1jcyh2b2lkKTsKZXh0ZXJu
IHVuc2lnbmVkIGxvbmcgYWRkcmVzc19sb29rdXAoY2hhciAqKTsKZXh0ZXJuIHZvaWQga2RiX3By
bnRfZ3Vlc3RfbWFwcGVkX2lycXModm9pZCk7CgovKiBrZGIgZ2xvYmFscyAqLwpleHRlcm4ga2Ri
dGFiX3QgKmtkYl9jbWRfdGJsOwpleHRlcm4gY2hhciBrZGJfcHJvbXB0WzMyXTsKZXh0ZXJuIHZv
bGF0aWxlIGludCBrZGJfc3lzX2NyYXNoOwpleHRlcm4gdm9sYXRpbGUga2RiX2NwdV9jbWRfdCBr
ZGJfY3B1X2NtZFtOUl9DUFVTXTsKZXh0ZXJuIHZvbGF0aWxlIGludCBrZGJfdHJjb247CgovKiBr
ZGIgaW50ZXJmYWNlcyAqLwpleHRlcm4gdm9pZCBfX2luaXQga2RiX2lvX2luaXQodm9pZCk7CmV4
dGVybiB2b2lkIGtkYl9pbml0X2NtZHRhYih2b2lkKTsKZXh0ZXJuIHZvaWQga2RiX2RvX2NtZHMo
c3RydWN0IGNwdV91c2VyX3JlZ3MgKik7CmV4dGVybiBpbnQga2RiX2NoZWNrX3N3X2JrcHRzKHN0
cnVjdCBjcHVfdXNlcl9yZWdzICopOwpleHRlcm4gaW50IGtkYl9jaGVja193YXRjaHBvaW50cyhz
dHJ1Y3QgY3B1X3VzZXJfcmVncyAqKTsKZXh0ZXJuIHZvaWQga2RiX2RvX3dhdGNocG9pbnRzKGtk
YnZhX3QsIGludCwgaW50KTsKZXh0ZXJuIHZvaWQga2RiX2luc3RhbGxfd2F0Y2hwb2ludHModm9p
ZCk7CmV4dGVybiB2b2lkIGtkYl9jbGVhcl93cHMoaW50KTsKZXh0ZXJuIGtkYm1hX3Qga2RiX3Jk
X2RiZ3JlZyhpbnQpOwoKCgpleHRlcm4gY2hhciAqa2RiX2dldF9jbWRsaW5lKGNoYXIgKik7CmV4
dGVybiB2b2lkIGtkYl9jbGVhcl9wcmV2X2NtZCh2b2lkKTsKZXh0ZXJuIHZvaWQga2RiX3RvZ2ds
ZV9kaXNfc3ludGF4KHZvaWQpOwpleHRlcm4gaW50IGtkYl9jaGVja19jYWxsX2luc3RyKGRvbWlk
X3QsIGtkYnZhX3QpOwpleHRlcm4gdm9pZCBrZGJfZGlzcGxheV9wYyhzdHJ1Y3QgY3B1X3VzZXJf
cmVncyAqKTsKZXh0ZXJuIGtkYnZhX3Qga2RiX3ByaW50X2luc3RyKGtkYnZhX3QsIGxvbmcsIGRv
bWlkX3QpOwpleHRlcm4gaW50IGtkYl9yZWFkX21tZW0oa2RidmFfdCwga2RiYnl0X3QgKiwgaW50
KTsKZXh0ZXJuIGludCBrZGJfcmVhZF9tZW0oa2RidmFfdCwga2RiYnl0X3QgKiwgaW50LCBkb21p
ZF90KTsKZXh0ZXJuIGludCBrZGJfd3JpdGVfbWVtKGtkYnZhX3QsIGtkYmJ5dF90ICosIGludCwg
ZG9taWRfdCk7CgpleHRlcm4gdm9pZCBrZGJfaW5zdGFsbF9hbGxfc3dicCh2b2lkKTsKZXh0ZXJu
IHZvaWQga2RiX3VuaW5zdGFsbF9hbGxfc3dicCh2b2lkKTsKZXh0ZXJuIGludCBrZGJfc3dicF9l
eGlzdHModm9pZCk7CmV4dGVybiB2b2lkIGtkYl9mbHVzaF9zd2JwX3RhYmxlKHZvaWQpOwpleHRl
cm4gaW50IGtkYl9pc19hZGRyX2d1ZXN0X3RleHQoa2RidmFfdCwgaW50KTsKZXh0ZXJuIGtkYnZh
X3Qga2RiX2d1ZXN0X3N5bTJhZGRyKGNoYXIgKiwgZG9taWRfdCk7CmV4dGVybiBjaGFyICprZGJf
Z3Vlc3RfYWRkcjJzeW0odW5zaWduZWQgbG9uZywgZG9taWRfdCwgdWxvbmcgKik7CmV4dGVybiB2
b2lkIGtkYl9wcm50X2FkZHIyc3ltKGRvbWlkX3QsIGtkYnZhX3QsIGNoYXIgKik7CmV4dGVybiB2
b2lkIGtkYl9zYXZfZG9tX3N5bWluZm8oZG9taWRfdCwgbG9uZywgbG9uZywgbG9uZywgbG9uZywg
bG9uZyk7CmV4dGVybiBpbnQga2RiX2d1ZXN0X2JpdG5lc3MoZG9taWRfdCk7CmV4dGVybiB2b2lk
IGtkYl9ubWlfcGF1c2VfY3B1cyhjcHVtYXNrX3QpOwoKZXh0ZXJuIHZvaWQga2RiX3RyY3plcm8o
dm9pZCk7CnZvaWQga2RiX3RyY3Aodm9pZCk7CgoKCiNlbmRpZiAvKiAhX0tEQlBST1RPX0ggKi8K
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAa2RiL1JFQURNRQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAADAwMDA2NjQAMDAwMjc1NgAwMDAyNzU2ADAwMDAwMDIwNDQ1ADExNzY1NDY1NTU2ADAxMjQ2
MQAgMAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAB1c3RhciAgAG1y
YXRob3IAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAbXJhdGhvcgAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAKV2VsY29tZSB0byBrZGIgZm9yIHhlbiwgYSBoeXBlcnZpc29yIGJ1
aWx0IGluIGRlYnVnZ2VyLgoKRkVBVFVSRVM6CiAgIC0gc2V0IGJyZWFrcG9pbnRzIGluIGh5cGVy
dmlzb3IKICAgLSBleGFtaW5lIHZpcnQvbWFjaGluZSBtZW1vcnksIHJlZ2lzdGVycywgZG9tYWlu
cywgdmNwdXMsIGV0Yy4uLgogICAtIHNpbmdsZSBzdGVwLCBzaW5nbGUgc3RlcCB0aWxsIGp1bXAv
Y2FsbCwgc3RlcCBvdmVyIGNhbGwgdG8gbmV4dAogICAgIGluc3RydWN0aW9uIGFmdGVyIHRoZSBj
YWxsLgogICAtIGV4YW1pbmUgbWVtb3J5IG9mIGEgUFYvSFZNIGd1ZXN0LiAKICAgLSBzZXQgYnJl
YWtwb2ludHMsIHNpbmdsZSBzdGVwLCBldGMuLi4gZm9yIGEgUFYgZ3Vlc3QuCiAgIC0gYnJlYWtp
bmcgaW50byB0aGUgZGVidWdnZXIgd2lsbCBmcmVlemUgdGhlIHN5c3RlbSwgYWxsIENQVXMgd2ls
bCBwYXVzZSwKICAgICBubyBpbnRlcnJ1cHRzIGFyZSBhY2tub3dsZWRnZWQgaW4gdGhlIGRlYnVn
Z2VyLiAoSGVuY2UsIHRoZSB3YWxsIGNsb2NrCiAgICAgd2lsbCBkcmlmdCkKICAgLSBzaW5nbGUg
c3RlcCB3aWxsIHN0ZXAgb25seSB0aGF0IGNwdS4KICAgLSBlYXJseWtkYjogYnJlYWsgaW50byBr
ZGIgdmVyeSBlYXJseSBkdXJpbmcgYm9vdC4gUHV0ICJlYXJseWtkYiIgb24gdGhlCiAgICAgICAg
ICAgICAgIHhlbiBjb21tYW5kIGxpbmUgaW4gZ3J1Yi5jb25mLgogICAtIGdlbmVyaWMgdHJhY2lu
ZyBmdW5jdGlvbnMgKHNlZSBiZWxvdykgZm9yIHF1aWNrIHRyYWNpbmcgdG8gZGVidWcgdGltaW5n
CiAgICAgcmVsYXRlZCBwcm9ibGVtcy4gVG8gdXNlOgogICAgICAgIG8gc2V0IEtEQlRSQ01BWCB0
byBtYXggbnVtIG9mIHJlY3MgaW4gY2lyY3VsYXIgdHJjIGJ1ZmZlciBpbiBrZGJtYWluLmMKCW8g
Y2FsbCBrZGJfdHJjKCkgZnJvbSBhbnl3aGVyZSBpbiB4ZW4KCW8gdHVybiB0cmFjaW5nIG9uIGJ5
IHNldHRpbmcga2RiX3RyY29uIGluIGtkYm1haW4uYyBvciB0cmNvbiBjb21tYW5kLgoJbyB0cmNw
IGluIGtkYiB3aWxsIGdpdmUgaGludHMgdG8gZHVtcCB0cmFjZSByZWNzLiBVc2UgZGQgdG8gc2Vl
IGJ1ZmZlcgoJbyB0cmN6IHdpbGwgemVybyBvdXQgdGhlIGVudGlyZSBidWZmZXIgaWYgbmVlZGVk
LgoKTk9URToKICAgLSBzaW5jZSBhbG1vc3QgYWxsIG51bWJlcnMgYXJlIGluIGhleCwgMHggaXMg
bm90IHByZWZpeGVkLiBJbnN0ZWFkLCBkZWNpbWFsCiAgICAgbnVtYmVycyBhcmUgcHJlY2VkZWQg
YnkgJCwgYXMgaW4gJDE3IChzb3JyeSwgb25lIGdldHMgdXNlZCB0byBpdCkuIE5vdGUsCiAgICAg
dmNwdSBudW0sIGNwdSBudW0sIGRvbWlkIGFyZSBhbHdheXMgZGlzcGxheWVkIGluIGRlY2ltYWws
IHdpdGhvdXQgJC4KICAgLSB3YXRjaGRvZyBtdXN0IGJlIGRpc2FibGVkIHRvIHVzZSBrZGIKCklT
U1VFUzoKICAgLSBDdXJyZW50bHksIGRlYnVnIGh5cGVydmlzb3IgaXMgbm90IHN1cHBvcnRlZC4g
TWFrZSBzdXJlIE5ERUJVRyBpcyBkZWZpbmVkCiAgICAgb3IgY29tcGlsZSB3aXRoIGRlYnVnPW4K
ICAgLSAidGltZXIgd2VudCBiYWNrd2FyZHMiIG1lc3NhZ2VzIG9uIGRvbTAsIGJ1dCBrZGIvaHlw
IHNob3VsZCBiZSBmaW5lLgogICAgIEkgdXN1YWxseSBkbyAiZWNobyAyID4gL3Byb2Mvc3lzL2tl
cm5lbC9wcmludGsiIHdoZW4gdXNpbmcga2RiLgogICAtIDMyYml0IGh5cGVydmlzb3IgbWF5IGhh
bmcuIFRlc3RlZCBvbiA2NGJpdCBoeXBlcnZpc29yIG9ubHkuCiAgICAKClRPIEJVSUxEOgogLSBk
byA+bWFrZSBrZGI9eQoKSE9XIFRPIFVTRToKICAxLiBBIHNlcmlhbCBsaW5lIGlzIG5lZWRlZCB0
byB1c2UgdGhlIGRlYnVnZ2VyLiBTZXQgdXAgYSBzZXJpYWwgbGluZQogICAgIGZyb20gdGhlIHNv
dXJjZSBtYWNoaW5lIHRvIHRhcmdldCB2aWN0aW0uIE1ha2Ugc3VyZSB0aGUgc2VyaWFsIGxpbmUK
ICAgICBpcyB3b3JraW5nIHByb3Blcmx5IGJ5IGRpc3BsYXlpbmcgbG9naW4gcHJvbXB0IGFuZCBs
b2dpbmcgaW4gZXRjLi4uLgoKICAyLiBBZGQgZm9sbG93aW5nIHRvIGdydWIuY29uZjoKICAgICAg
ICBrZXJuZWwgL3hlbi5rZGIgY29uc29sZT1jb20xLHZnYSBjb20xPTU3NjAwLDhuMSBkb20wX21l
bT01NDJNCgogICAgICAgICg1NzYwMCBvciB3aGF0ZXZlciB1c2VkIGluIHN0ZXAgMSBhYm92ZSkK
CiAgMy4gQm9vdCB0aGUgaHlwZXJ2aXNvciBidWlsdCB3aXRoIHRoZSBkZWJ1Z2dlci4gCgogIDQu
IGN0cmwtXCAoY3RybCBhbmQgYmFja3NsYXNoKSB3aWxsIGJyZWFrIGludG8gdGhlIGRlYnVnZ2Vy
LiBJZiB0aGUgc3lzdGVtIGlzCiAgICAgYmFkbHkgaHVuZywgcHJlc3NpbmcgTk1JIHdvdWxkIGFs
c28gYnJlYWsgaW50byBpdC4gSG93ZXZlciwgb25jZSBrZGIgaXMKICAgICBlbnRlcmVkIHZpYSBO
TUksIG5vcm1hbCBleGVjdXRpb24gY2FuJ3QgY29udGludWUuCgogIDUuIHR5cGUgJ2gnIGZvciBs
aXN0IG9mIGNvbW1hbmRzLgoKICA2LiBDb21tYW5kIGxpbmUgZWRpdGluZyBpcyBsaW1pdGVkIHRv
IGJhY2tzcGFjZS4gY3RybC1jIHRvIHN0YXJ0IGEgbmV3IGNtZC4KCgoKR1VFU1QgZGVidWc6CiAg
LSB0eXBlIHN5bSBpbiB0aGUgZGVidWdnZXIKICAtIGZvciBSRUw0LCBncmVwIGthbGxzeW1zX25h
bWVzLCBrYWxsc3ltc19hZGRyZXNzZXMsIGFuZCBrYWxsc3ltc19udW1fc3ltcwogICAgaW4gdGhl
IGd1ZXN0IFN5c3RlbS5tYXAqIGZpbGUuIFJ1biBzeW0gYWdhaW4gd2l0aCBkb21pZCBhbmQgdGhl
IHRocmVlCiAgICB2YWx1ZXMgb24gdGhlIGNvbW1hbmQgbGluZS4KICAtIE5vdyBiYXNpYyBzeW1i
b2xzIGNhbiBiZSB1c2VkIGZvciBndWVzdCBkZWJ1Zy4gTm90ZSwgaWYgdGhlIGJpbmFyeSBpcyBu
b3QKICAgIGJ1aWx0IHdpdGggc3ltYm9scywgb25seSBmdW5jdGlvbiBuYW1lcyBhcmUgYXZhaWxh
YmxlLCBidXQgbm90IGdsb2JhbCB2YXJzLgoKICAgIEVnOiBzeW0gMCBjMDY5NjA4NCBjMDY4YTU5
MCBjMDY5NjA4MCBjMDZiNDNlOCBjMDZiNDc0MAogICAgICAgIHdpbGwgc2V0IHN5bWJvbHMgZm9y
IGRvbSAwLiBUaGVuIDoKCiAgICAgICAgWzRdeGtkYj4gYnAgc29tZV9mdW5jdGlvbiAwCgoJd2ls
bHMgc2V0IGJwIGF0IHNvbWVfZnVuY3Rpb24gaW4gZG9tIDAKCglbM114a2RiPiBkdyBjMDY4YTU5
MCAzMiAwIDogZGlzcGxheSAzMiBieXRlcyBvZiBkb20wIG1lbW9yeQoKClRpcHM6CiAgLSBJbiAi
WzBdeGtkYj4iICA6IDAgaXMgdGhlIGNwdSBudW1iZXIgaW4gZGVjaW1hbAogIC0gSW4KICAgICAg
MDAwMDAwMDBjMDQyNjQ1YzogMDpkb190aW1lcisxNyAgICAgICAgICAgICAgICAgIHB1c2ggJWVi
cAogICAgMDpkb190aW1lciA6IDAgaXMgdGhlIGRvbWlkIGluIGhleAogICAgb2Zmc2V0ICsxNyBp
cyBpbiBoZXguCgogICAgYWJzZW5zZSBvZiAwOiB3b3VsZCBpbmRpY2F0ZSBpdCdzIGEgaHlwZXJ2
aXNvciBmdW5jdGlvbgoKICAtIGNvbW1hbmRzIHN0YXJ0aW5nIHdpdGgga2RiIChrZGIqKSBhcmUg
Zm9yIGtkYiBkZWJ1ZyBvbmx5LgoKCkZpbmFsbHksCiAtIHRoaW5rIGhleC4KIC0gYnVnL3Byb2Js
ZW06IGVudGVyIGtkYmRiZywgcmVwcm9kdWNlLCBhbmQgc2VuZCBtZSB0aGUgb3V0cHV0LgogICBJ
ZiB0aGUgb3V0cHV0IGlzIG5vdCBlbm91Z2gsIEkgbWF5IGFzayB0byBydW4ga2RiZGJnIHR3aWNl
LCB0aGVuIGNvbGxlY3QKICAgb3V0cHV0LgoKClRoYW5rcywKTXVrZXNoIFJhdGhvcgpPcmFjbGUg
Q29ycG9yYXRpbiwgClJlZHdvb2QgU2hvcmVzLCBDQSA5NDA2NQoKLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0KQ09NTUFORCBERVNDUklQVElPTjoKCmluZm86ICBQcmludCBiYXNpYyBpbmZvIGxpa2Ug
dmVyc2lvbiwgY29tcGlsZSBmbGFncywgZXRjLi4KCmN1cjogIHByaW50IGN1cnJlbnQgZG9tYWlu
IGlkIGFuZCB2Y3B1IGlkCgpmOiBkaXNwbGF5IGN1cnJlbnQgc3RhY2suIElmIGEgdmNwdSBwdHIg
aXMgZ2l2ZW4sIHRoZW4gcHJpbnQgc3RhY2sgZm9yIHRoYXQKICAgVkNQVSBieSB1c2luZyBpdHMg
SVAgYW5kIFNQLgoKZmc6IGRpc3BsYXkgc3RhY2sgZm9yIGEgZ3Vlc3QgZ2l2ZW4gZG9taWQsIFNQ
IGFuZCBJUC4KCmR3OiBkaXNwbGF5IHdvcmRzIG9mIG1lbW9yeS4gJ251bScgb2YgYnl0ZXMgaXMg
b3B0aW9uYWwsIGJ1dCBpZiBkaXNwbGF5aW5nIGd1ZXN0CiAgICBtZW1vcnksIHRoZW4gaXMgcmVx
dWlyZWQuCgpkZDogc2FtZSBhcyBhYm92ZSwgYnV0IGRpc3BsYXkgZG91Ymxld29yZHMuCgpkd206
IHNhbWUgYXMgYWJvdmUgYnV0IHRoZSBhZGRyZXNzIGlzIG1hY2hpbmUgYWRkcmVzcyBpbnN0ZWFk
IG9mIHZpcnR1YWwuCgpkZG06IHNhbWUgYXMgYWJvdmUsIGJ1dCBkaXNwbGF5IGRvdWJsZXdvcmRz
LgoKZHI6IGRpc3BsYXkgcmVnaXN0ZXJzLiBpZiAnc3AnIGlzIHNwZWNpZmllZCB0aGVuIHByaW50
IGZldyBleHRyYSByZWdpc3RlcnMuCgpkcmc6IGRpc3BsYXkgZ3Vlc3QgY29udGV4dCBzYXZlZCBv
biBzdGFjayBib3R0b20uCgpkaXM6IGRpc2Fzc2VtYmxlIGluc3RydWN0aW9ucy4gSWYgZGlzYXNz
ZW1ibGluZyBmb3IgZ3Vlc3QsIHRoZW4gJ251bScgbXVzdAogICAgIGJlIHNwZWNpZmllZC4gJ251
bScgaXMgbnVtYmVyIG9mIGluc3RycyB0byBkaXNwbGF5LgoKZGlzbTogdG9nZ2xlIGRpc2Fzc2Vt
Ymx5IG1vZGUgYmV0d2VlbiBJbnRlbCBhbmQgQVRUL0dBUy4KCm13OiBtb2RpZnkgd29yZCBpbiBt
ZW1vcnkgZ2l2ZW4gdmlydHVhbCBhZGRyZXNzLiAnZG9taWQnIG1heSBiZSBzcGVjaWZpZWQgaWYK
ICAgIG1vZGlmeWluZyBndWVzdCBtZW1vcnkuIHZhbHVlIGlzIGFzc3VtZWQgaW4gaGV4IGV2ZW4g
d2l0aG91dCAweC4KCm1kOiBzYW1lIGFzIGFib3ZlIGJ1dCBtb2RpZnkgZG91Ymxld29yZC4KCm1y
OiBtb2RpZnkgcmVnaXN0ZXIuIHZhbHVlIGlzIGFzc3VtZCBoZXguCgpiYzogY2xlYXIgZ2l2ZW4g
b3IgYWxsIGJyZWFrcG9pbnRzCgpicDogZGlzcGxheSBicmVha3BvaW50cyBvciBzZXQgYSBicmVh
a3BvaW50LiBEb21pZCBtYXkgYmUgc3BlY2lmaWVkIHRvIHNldCBhIGJwCiAgICBpbiBndWVzdC4g
a2RiIGZ1bmN0aW9ucyBtYXkgbm90IGJlIHNwZWNpZmllZCBpZiBkZWJ1Z2dpbmcga2RiLgogICAg
RXhhbXBsZToKICAgICAgeGtkYj4gYnAgYWNwaV9wcm9jZXNzb3JfaWRsZSAgOiB3aWxsIHNldCBi
cCBpbiB4ZW4KICAgICAgeGtkYj4gYnAgZGVmYXVsdF9pZGxlIDAgOiAgIHdpbGwgc2V0IGJwIGlu
IGRvbWlkIDAKICAgICAgeGtkYj4gYnAgaWRsZV9jcHUgOSA6ICAgd2lsbCBzZXQgYnAgaW4gZG9t
aWQgOQoKICAgICBDb25kaXRpb25zIG1heSBiZSBzcGVjaWZpZWQgZm9yIGEgYnA6IGxocyA9PSBy
aHMgb3IgbGhzICE9IHJocwogICAgIHdoZXJlIDogbGhzIGlzIHJlZ2lzdGVyIGxpa2UgJ3I2Jywg
J3JheCcsIGV0Yy4uLiAgb3IgbWVtb3J5IGxvY2F0aW9uCiAgICAgICAgICAgICByaHMgaXMgaGV4
IHZhbHVlIHdpdGggb3Igd2l0aG91dCBsZWFkaW5nIDB4LgogICAgIFRodXMsCiAgICAgIHhrZGI+
IGJwIGFjcGlfcHJvY2Vzc29yX2lkbGUgcmRpID09IGMwMDAgCiAgICAgIHhrZGI+IGJwIDB4ZmZm
ZmZmZmY4MDA2MmViYyAwIHJzaSA9PSBmZmZmODgwMDIxZWRiYzk4IDogd2lsbCBicmVhayBpbnRv
CiAgICAgICAgICAgIGtkYiBhdCAweGZmZmZmZmZmODAwNjJlYmMgaW4gZG9tMCB3aGVuIHJzaSBp
cyBmZmZmODgwMDIxZWRiYzk4IAoKYnRwOiBicmVhayBwb2ludCB0cmFjZS4gVXBvbiBicCwgcHJp
bnQgc29tZSBpbmZvIGFuZCBjb250aW51ZSB3aXRob3V0IHN0b3BwaW5nLgogICBFeDogYnRwIGlk
bGVfY3B1IDcgcmF4IHJieCAweDIwZWY1YTUgcjkKCiAgIHdpbGwgcHJpbnQ6IHJheCwgcmJ4LCAq
KGxvbmcgKikweDIwZWY1YTUsIHI5IHVwb24gaGl0dGluZyBpZGxlX2NwdSgpIGFuZCAKICAgICAg
ICAgICAgICAgY29udGludWUuCgp3cDogc2V0IGEgd2F0Y2hwb2ludCBhdCBhIHZpcnR1YWwgYWRk
cmVzcyB3aGljaCBjYW4gYmVsb25nIHRvIGh5cGVydmlzb3Igb3IKICAgIGFueSBndWVzdC4gRG8g
bm90IHNwZWNpZnkgd3AgaW4ga2RiIHBhdGggaWYgZGVidWdnaW5nIGtkYi4KCndjOiBjbGVhciBn
aXZlbiBvciBhbGwgd2F0Y2hwb2ludHMuCgpuaTogc2luZ2xlIHN0ZXAsIHN0ZXBwaW5nIG92ZXIg
ZnVuY3Rpb24gY2FsbHMuCgpzczogc2luZ2xlIHN0ZXAuIEJlIGNhcmVmdWxsIHdoZW4gaW4gaW50
ZXJydXB0IGhhbmRsZXJzIG9yIGNvbnRleHQgc3dpdGNoZXMuCiAgICAKc3NiOiBzaW5nbGUgc3Rl
cCB0byBicmFuY2guIFVzZSB3aXRoIGNhcmUuCgpnbzogbGVhdmUga2RiIGFuZCBjb250aW51ZS4K
CmNwdTogZ28gYmFjayB0byBvcmlnIGNwdSB3aGVuIGVudGVyaW5nIGtkYi4gSWYgJ2NwdSBudW1i
ZXInIGdpdmVuLCB0aGVuIHN3aXRjaCAKICAgICB0byB0aGF0IGNwdS4gSWYgJ2FsbCcgdGhlbiBz
aG93IHN0YXR1cyBvZiBhbGwgY3B1cy4KCm5taTogT25seSBhdmFpbGFibGUgaW4gaHVuZy9jcmFz
aCBzdGF0ZS4gU2VuZCBOTUkgdG8gYSBjcHUgdGhhdCBtYXkgYmUgaHVuZy4KCnN5bTogSW5pdGlh
bGl6ZSBhIHN5bWJvbCB0YWJsZSBmb3IgZGVidWdnaW5nIGEgZ3Vlc3QuIExvb2sgaW50byB0aGUg
U3lzdGVtLm1hcAogICAgIGZpbGUgb2YgZ3Vlc3QgZm9yIGNlcnRhaW4gc3ltYm9sIHZhbHVlcyBh
bmQgcHJvdmlkZSB0aGVtIGhlcmUuCgp2Y3B1aDogR2l2ZW4gdmNwdSBwdHIsIGRpc3BsYXkgaHZt
X3ZjcHUgc3RydWN0LgoKdmNwdTogRGlzcGxheSBjdXJyZW50IHZjcHUgc3RydWN0LiBJZiAndmNw
dS1wdHInIGdpdmVuLCBkaXNwbGF5IHRoYXQgdmNwdS4KCmRvbTogZGlzcGxheSBjdXJyZW50IGRv
bWFpbi4gSWYgJ2RvbWlkJyB0aGVuIGRpc3BsYXkgdGhhdCBkb21pZC4gSWYgJ2FsbCcsIHRoZW4K
ICAgICBkaXNwbGF5IGFsbCBkb21haW5zLgoKc2NoZWQ6IHNob3cgc2NoZWR1bGFyIGluZm8gYW5k
IHJ1biBxdWV1ZXMuCgptbXU6IHByaW50IGJhc2ljIG1tdSBpbmZvCgpwMm06IGNvbnZlcnQgYSBn
cGZuIHRvIG1mbiBnaXZlbiBhIGRvbWlkLiB2YWx1ZSBpbiBoZXggZXZlbiB3aXRob3V0IDB4LgoK
bTJwOiBjb252ZXJ0IG1mbiB0byBwZm4uIHZhbHVlIGluIGhleCBldmVuIHdpdGhvdXQgMHguCgpk
cGFnZTogZGlzcGxheSBzdHJ1Y3QgcGFnZSBnaXZlbiBhIG1mbiBvciBzdHJ1Y3QgcGFnZSBwdHIu
IFNpbmNlLCBubyBpbmZvIGlzIAogICAgICAga2VwdCBvbiBwYWdlIHR5cGUsIHdlIGRpc3BsYXkg
YWxsIHBvc3NpYmxlIHBhZ2UgdHlwZXMuCgpkdHJxOiBkaXNwbGF5IHRpbWVyIHF1ZXVlcy4KCmRp
ZHQ6IGR1bXAgSURUIHRhYmxlLgoKZGd0OiBkdW1wIEdEVCB0YWJsZS4KCmRpcnE6IGRpc3BsYXkg
SVJRIGJpbmRpbmdzLgoKZHZtYzogZGlzcGxheSBhbGwgb3IgZ2l2ZW4gZG9tL3ZjcHUgVk1DUyBv
ciBWTUNCLgoKdHJjb246IHR1cm4gdHJhY2luZyBvbi4gVHJhY2UgaG9va3MgbXVzdCBiZSBhZGRl
ZCBpbiB4ZW4gYW5kIGtkYiBmdW5jdGlvbgogICAgICAgY2FsbGVkIGRpcmVjdGx5IGZyb20gdGhl
cmUuCgp0cmNvZmY6IHR1cm4gdHJhY2luZyBvZmYuCgp0cmN6OiB6ZXJvIHRyYWNlIGJ1ZmZlci4K
CnRyY3A6IGdpdmUgaGludHMgdG8gcHJpbnQgdGhlIGNpcmN1bGFyIHRyYWNlIGJ1ZmZlciwgbGlr
ZSBjdXJyZW50IGFjdGl2ZSBwdHIuCgp1c3IxOiBhbGxvd3MgdG8gYWRkIGFueSBhcmJpdHJhdHkg
Y29tbWFuZCBxdWlja2x5LgoKLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0KLyoKICogQ29weXJpZ2h0
IChDKSAyMDA4IE9yYWNsZS4gIEFsbCByaWdodHMgcmVzZXJ2ZWQuCiAqCiAqIFRoaXMgcHJvZ3Jh
bSBpcyBmcmVlIHNvZnR3YXJlOyB5b3UgY2FuIHJlZGlzdHJpYnV0ZSBpdCBhbmQvb3IKICogbW9k
aWZ5IGl0IHVuZGVyIHRoZSB0ZXJtcyBvZiB0aGUgR05VIEdlbmVyYWwgUHVibGljCiAqIExpY2Vu
c2UgdjIgYXMgcHVibGlzaGVkIGJ5IHRoZSBGcmVlIFNvZnR3YXJlIEZvdW5kYXRpb24uCiAqCiAq
IFRoaXMgcHJvZ3JhbSBpcyBkaXN0cmlidXRlZCBpbiB0aGUgaG9wZSB0aGF0IGl0IHdpbGwgYmUg
dXNlZnVsLAogKiBidXQgV0lUSE9VVCBBTlkgV0FSUkFOVFk7IHdpdGhvdXQgZXZlbiB0aGUgaW1w
bGllZCB3YXJyYW50eSBvZgogKiBNRVJDSEFOVEFCSUxJVFkgb3IgRklUTkVTUyBGT1IgQSBQQVJU
SUNVTEFSIFBVUlBPU0UuICBTZWUgdGhlIEdOVQogKiBHZW5lcmFsIFB1YmxpYyBMaWNlbnNlIGZv
ciBtb3JlIGRldGFpbHMuCiAqCiAqIFlvdSBzaG91bGQgaGF2ZSByZWNlaXZlZCBhIGNvcHkgb2Yg
dGhlIEdOVSBHZW5lcmFsIFB1YmxpYwogKiBMaWNlbnNlIGFsb25nIHdpdGggdGhpcyBwcm9ncmFt
OyBpZiBub3QsIHdyaXRlIHRvIHRoZQogKiBGcmVlIFNvZnR3YXJlIEZvdW5kYXRpb24sIEluYy4s
IDU5IFRlbXBsZSBQbGFjZSAtIFN1aXRlIDMzMCwKICogQm9zdG9uLCBNQSAwMjExMTAtMTMwNywg
VVNBLgogKi8KAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
a2RiL2tkYm1haW4uYwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADAwMDA2NjQAMDAwMjc1
NgAwMDAyNzU2ADAwMDAwMDYyNjcxADEyMDE3NTAyNjEzADAxMzMzMgAgMAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAB1c3RhciAgAG1yYXRob3IAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAbXJhdGhvcgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAv
KgogKiBDb3B5cmlnaHQgKEMpIDIwMDksIE11a2VzaCBSYXRob3IsIE9yYWNsZSBDb3JwLiAgQWxs
IHJpZ2h0cyByZXNlcnZlZC4KICoKICogVGhpcyBwcm9ncmFtIGlzIGZyZWUgc29mdHdhcmU7IHlv
dSBjYW4gcmVkaXN0cmlidXRlIGl0IGFuZC9vcgogKiBtb2RpZnkgaXQgdW5kZXIgdGhlIHRlcm1z
IG9mIHRoZSBHTlUgR2VuZXJhbCBQdWJsaWMKICogTGljZW5zZSB2MiBhcyBwdWJsaXNoZWQgYnkg
dGhlIEZyZWUgU29mdHdhcmUgRm91bmRhdGlvbi4KICoKICogVGhpcyBwcm9ncmFtIGlzIGRpc3Ry
aWJ1dGVkIGluIHRoZSBob3BlIHRoYXQgaXQgd2lsbCBiZSB1c2VmdWwsCiAqIGJ1dCBXSVRIT1VU
IEFOWSBXQVJSQU5UWTsgd2l0aG91dCBldmVuIHRoZSBpbXBsaWVkIHdhcnJhbnR5IG9mCiAqIE1F
UkNIQU5UQUJJTElUWSBvciBGSVRORVNTIEZPUiBBIFBBUlRJQ1VMQVIgUFVSUE9TRS4gIFNlZSB0
aGUgR05VCiAqIEdlbmVyYWwgUHVibGljIExpY2Vuc2UgZm9yIG1vcmUgZGV0YWlscy4KICoKICog
WW91IHNob3VsZCBoYXZlIHJlY2VpdmVkIGEgY29weSBvZiB0aGUgR05VIEdlbmVyYWwgUHVibGlj
CiAqIExpY2Vuc2UgYWxvbmcgd2l0aCB0aGlzIHByb2dyYW07IGlmIG5vdCwgd3JpdGUgdG8gdGhl
CiAqIEZyZWUgU29mdHdhcmUgRm91bmRhdGlvbiwgSW5jLiwgNTkgVGVtcGxlIFBsYWNlIC0gU3Vp
dGUgMzMwLAogKiBCb3N0b24sIE1BIDAyMTExMC0xMzA3LCBVU0EuCiAqLwoKI2luY2x1ZGUgImlu
Y2x1ZGUva2RiaW5jLmgiCgpzdGF0aWMgaW50IGtkYm1haW4oa2RiX3JlYXNvbl90LCBzdHJ1Y3Qg
Y3B1X3VzZXJfcmVncyAqKTsKc3RhdGljIGludCBrZGJtYWluX2ZhdGFsKHN0cnVjdCBjcHVfdXNl
cl9yZWdzICosIGludCk7CnN0YXRpYyBjb25zdCBjaGFyICprZGJfZ2V0dHJhcG5hbWUoaW50KTsK
Ci8qID09PT09PT09PT09PT09PT09PT09PT09PSBHTE9CQUwgVkFSSUFCTEVTID09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT0gKi8KLyogQWxsIGdsb2JhbCB2YXJpYWJsZXMgdXNlZCBieSBL
REIgbXVzdCBiZSBkZWZpbmVkIGhlcmUgb25seS4gTW9kdWxlIHNwZWNpZmljCiAqIHN0YXRpYyB2
YXJpYWJsZXMgbXVzdCBiZSBkZWNsYXJlZCBpbiByZXNwZWN0aXZlIG1vZHVsZXMuCiAqLwprZGJ0
YWJfdCAqa2RiX2NtZF90Ymw7CmNoYXIga2RiX3Byb21wdFszMl07Cgp2b2xhdGlsZSBrZGJfY3B1
X2NtZF90IGtkYl9jcHVfY21kW05SX0NQVVNdOwpjcHVtYXNrX3Qga2RiX2NwdV90cmFwczsgICAg
ICAgICAgIC8qIGJpdCBwZXIgY3B1IHRvIHRlbGwgd2hpY2ggY3B1cyBoaXQgaW50MyAqLwoKI2lm
bmRlZiBOREVCVUcKICAgICNlcnJvciBLREIgaXMgbm90IHN1cHBvcnRlZCBvbiBkZWJ1ZyB4ZW4u
IFR1cm4gZGVidWcgb2ZmCiNlbmRpZgoKdm9sYXRpbGUgaW50IGtkYl9pbml0X2NwdSA9IC0xOyAg
ICAgICAgICAgLyogaW5pdGlhbCBrZGIgY3B1ICovCnZvbGF0aWxlIGludCBrZGJfc2Vzc2lvbl9i
ZWd1biA9IDA7ICAgICAgIC8qIGFjdGl2ZSBrZGIgc2Vzc2lvbj8gKi8Kdm9sYXRpbGUgaW50IGtk
Yl9lbmFibGVkID0gMTsgICAgICAgICAgICAgLyoga2RiIGVuYWJsZWQgY3VycmVudGx5PyAqLwp2
b2xhdGlsZSBpbnQga2RiX3N5c19jcmFzaCA9IDA7ICAgICAgICAgICAvKiBhcmUgd2UgaW4gY3Jh
c2hlZCBzdGF0ZT8gKi8Kdm9sYXRpbGUgaW50IGtkYmRiZyA9IDA7ICAgICAgICAgICAgICAgICAg
LyogdG8gZGVidWcga2RiIGl0c2VsZiAqLwoKc3RhdGljIHZvbGF0aWxlIGludCBrZGJfdHJhcF9p
bW1lZF9yZWFzb24gPSAwOyAgIC8qIHJlYXNvbiBmb3IgaW1tZWQgdHJhcCAqLwoKc3RhdGljIGNw
dW1hc2tfdCBrZGJfZmF0YWxfY3B1bWFzazsgICAgICAgLyogd2hpY2ggY3B1cyBpbiBmYXRhbCBw
YXRoICovCgovKiByZXR1cm4gaW5kZXggb2YgZmlyc3QgYml0IHNldCBpbiB2YWwuIGlmIHZhbCBp
cyAwLCByZXR2YWwgaXMgdW5kZWZpbmVkICovCnN0YXRpYyBpbmxpbmUgdW5zaWduZWQgaW50IGtk
Yl9maXJzdGJpdCh1bnNpZ25lZCBsb25nIHZhbCkKewogICAgX19hc21fXyAoICJic2YgJTEsJTAi
IDogIj1yIiAodmFsKSA6ICJyIiAodmFsKSwgIjAiIChCSVRTX1BFUl9MT05HKSApOwogICAgcmV0
dXJuICh1bnNpZ25lZCBpbnQpdmFsOwp9CgpzdGF0aWMgdm9pZCAKa2RiX2RiZ19wcm50X2N0cnBz
KGNoYXIgKmxhYmVsLCBpbnQgY2NwdSkKewogICAgaW50IGk7CiAgICBpZiAoIWtkYmRiZykKICAg
ICAgICByZXR1cm47CgogICAgaWYgKGxhYmVsIHx8ICpsYWJlbCkKICAgICAgICBrZGJwKCIlcyAi
LCBsYWJlbCk7CiAgICBpZiAoY2NwdSAhPSAtMSkKICAgICAgICBrZGJwKCJjY3B1OiVkICIsIGNj
cHUpOwogICAga2RicCgiY3B1dHJwczoiKTsKICAgIGZvciAoaT1zaXplb2Yoa2RiX2NwdV90cmFw
cykvc2l6ZW9mKGtkYl9jcHVfdHJhcHMuYml0c1swXSkgLSAxOyBpID49MDsgaS0tKQogICAgICAg
IGtkYnAoIiAlbHgiLCBrZGJfY3B1X3RyYXBzLmJpdHNbaV0pOwogICAga2RicCgiXG4iKTsKfQoK
LyogCiAqIEhvbGQgdGhpcyBjcHUuIERvbid0IGRpc2FibGUgdW50aWwgYWxsIENQVXMgaW4ga2Ri
IHRvIGF2b2lkIElQSSBkZWFkbG9jayAKICovCnN0YXRpYyB2b2lkCmtkYl9ob2xkX3RoaXNfY3B1
KGludCBjY3B1LCBzdHJ1Y3QgY3B1X3VzZXJfcmVncyAqcmVncykKewogICAgS0RCR1AoImNjcHU6
JWQgaG9sZC4gY21kOiV4XG4iLCBrZGJfY3B1X2NtZFtjY3B1XSk7CiAgICBkbyB7CiAgICAgICAg
Zm9yKDsga2RiX2NwdV9jbWRbY2NwdV0gPT0gS0RCX0NQVV9QQVVTRTsgY3B1X3JlbGF4KCkpOwog
ICAgICAgIEtEQkdQKCJjY3B1OiVkIGhvbGQuIGNtZDoleFxuIiwga2RiX2NwdV9jbWRbY2NwdV0p
OwoKICAgICAgICBpZiAoa2RiX2NwdV9jbWRbY2NwdV0gPT0gS0RCX0NQVV9ESVNBQkxFKSB7CiAg
ICAgICAgICAgIGxvY2FsX2lycV9kaXNhYmxlKCk7CiAgICAgICAgICAgIGtkYl9jcHVfY21kW2Nj
cHVdID0gS0RCX0NQVV9QQVVTRTsKICAgICAgICB9CiAgICAgICAgaWYgKGtkYl9jcHVfY21kW2Nj
cHVdID09IEtEQl9DUFVfRE9fVk1FWElUKSB7CiAgICAgICAgICAgIGtkYl9jdXJyX2NwdV9mbHVz
aF92bWNzKCk7CiAgICAgICAgICAgIGtkYl9jcHVfY21kW2NjcHVdID0gS0RCX0NQVV9QQVVTRTsK
ICAgICAgICB9CiAgICAgICAgaWYgKGtkYl9jcHVfY21kW2NjcHVdID09IEtEQl9DUFVfU0hPV1BD
KSB7CiAgICAgICAgICAgIGtkYnAoIlslZF0iLCBjY3B1KTsKICAgICAgICAgICAga2RiX2Rpc3Bs
YXlfcGMocmVncyk7CiAgICAgICAgICAgIGtkYl9jcHVfY21kW2NjcHVdID0gS0RCX0NQVV9QQVVT
RTsKICAgICAgICB9CiAgICB9IHdoaWxlIChrZGJfY3B1X2NtZFtjY3B1XSA9PSBLREJfQ1BVX1BB
VVNFKTsgICAgIC8qIE5vIGdvdG8sIGVoISAqLwogICAgS0RCR1AxKCJ1biBob2xkOiBjY3B1OiVk
IGNtZDolZFxuIiwgY2NwdSwga2RiX2NwdV9jbWRbY2NwdV0pOwp9CgovKgogKiBQYXVzZSB0aGlz
IGNwdSB3aGlsZSBvbmUgQ1BVIGRvZXMgbWFpbiBrZGIgcHJvY2Vzc2luZy4gSWYgdGhhdCBDUFUg
ZG9lcwogKiBhICJjcHUgc3dpdGNoIiB0byB0aGlzIGNwdSwgdGhpcyBjcHUgd2lsbCBiZWNvbWUg
dGhlIG1haW4ga2RiIGNwdS4gSWYgdGhlCiAqIHVzZXIgbmV4dCBkb2VzIHNpbmdsZSBzdGVwIG9m
IHNvbWUgc29ydCwgdGhpcyBmdW5jdGlvbiB3aWxsIGJlIGV4aXRlZCwKICogYW5kIHRoaXMgY3B1
IHdpbGwgY29tZSBiYWNrIGludG8ga2RiIHZpYSBrZGJfaGFuZGxlX3RyYXBfZW50cnkgZnVuY3Rp
b24uCiAqLwpzdGF0aWMgdm9pZCAKa2RiX3BhdXNlX3RoaXNfY3B1KHN0cnVjdCBjcHVfdXNlcl9y
ZWdzICpyZWdzLCB2b2lkICp1bnVzZWQpCnsKICAgIGtkYm1haW4oS0RCX1JFQVNPTl9QQVVTRV9J
UEksIHJlZ3MpOwp9CgovKiBwYXVzZSBvdGhlciBjcHVzIHZpYSBhbiBJUEkuIE5vdGUsIGRpc2Fi
bGVkIENQVXMgY2FuJ3QgcmVjZWl2ZSBJUElzIHVudGlsCiAqIGVuYWJsZWQgKi8Kc3RhdGljIHZv
aWQKa2RiX3NtcF9wYXVzZV9jcHVzKHZvaWQpCnsKICAgIGludCBjcHUsIHdhaXRfY291bnQgPSAw
OwogICAgaW50IGNjcHUgPSBzbXBfcHJvY2Vzc29yX2lkKCk7ICAgICAgLyogY3VycmVudCBjcHUg
Ki8KICAgIGNwdW1hc2tfdCBjcHVtYXNrID0gY3B1X29ubGluZV9tYXA7CgogICAgY3B1bWFza19j
bGVhcl9jcHUoc21wX3Byb2Nlc3Nvcl9pZCgpLCAmY3B1bWFzayk7CiAgICBmb3JfZWFjaF9jcHUo
Y3B1LCAmY3B1bWFzaykKICAgICAgICBpZiAoa2RiX2NwdV9jbWRbY3B1XSAhPSBLREJfQ1BVX0lO
VkFMKSB7CiAgICAgICAgICAgIGtkYnAoIktEQjogd29uJ3QgcGF1c2UgY3B1OiVkLCBjbWRbY3B1
XT0lZFxuIixjcHUsa2RiX2NwdV9jbWRbY3B1XSk7CiAgICAgICAgICAgIGNwdW1hc2tfY2xlYXJf
Y3B1KGNwdSwgJmNwdW1hc2spOwogICAgICAgIH0KICAgIEtEQkdQKCJjY3B1OiVkIHdpbGwgcGF1
c2UgY3B1cy4gbWFzazoweCVseFxuIiwgY2NwdSwgY3B1bWFzay5iaXRzWzBdKTsKI2lmIFhFTl9T
VUJWRVJTSU9OID4gNCB8fCBYRU5fVkVSU0lPTiA9PSA0ICAgICAgICAgICAgICAvKiB4ZW4gMy41
Lnggb3IgYWJvdmUgKi8KICAgIG9uX3NlbGVjdGVkX2NwdXMoJmNwdW1hc2ssICh2b2lkICgqKSh2
b2lkICopKWtkYl9wYXVzZV90aGlzX2NwdSwgCiAgICAgICAgICAgICAgICAgICAgICJYRU5LREIi
LCAwKTsKI2Vsc2UKICAgIG9uX3NlbGVjdGVkX2NwdXMoY3B1bWFzaywgKHZvaWQgKCopKHZvaWQg
Kikpa2RiX3BhdXNlX3RoaXNfY3B1LCAKICAgICAgICAgICAgICAgICAgICAgIlhFTktEQiIsIDAs
IDApOwojZW5kaWYKICAgIG1kZWxheSgzMDApOyAgICAgICAgICAgICAgICAgICAgIC8qIHdhaXQg
YSBiaXQgZm9yIG90aGVyIENQVXMgdG8gc3RvcCAqLwogICAgd2hpbGUod2FpdF9jb3VudCsrIDwg
MTApIHsKICAgICAgICBpbnQgYnVtbWVyID0gMDsKICAgICAgICBmb3JfZWFjaF9jcHUoY3B1LCAm
Y3B1bWFzaykKICAgICAgICAgICAgaWYgKGtkYl9jcHVfY21kW2NwdV0gIT0gS0RCX0NQVV9QQVVT
RSkKICAgICAgICAgICAgICAgIGJ1bW1lciA9IDE7CiAgICAgICAgaWYgKCFidW1tZXIpCiAgICAg
ICAgICAgIGJyZWFrOwogICAgICAgIGtkYnAoImNjcHU6JWQgdHJ5aW5nIHRvIHN0b3Agb3RoZXIg
Y3B1cy4uLlxuIiwgY2NwdSk7CiAgICAgICAgbWRlbGF5KDEwMCk7ICAvKiB3YWl0IDEwMCBtcyAq
LwogICAgfTsKICAgIGZvcl9lYWNoX2NwdShjcHUsICZjcHVtYXNrKSAgICAgICAgICAvKiBub3cg
Y2hlY2sgd2hvIGlzIHdpdGggdXMgKi8KICAgICAgICBpZiAoa2RiX2NwdV9jbWRbY3B1XSAhPSBL
REJfQ1BVX1BBVVNFKQogICAgICAgICAgICBrZGJwKCJCdW1tZXIgY3B1ICVkIG5vdCBwYXVzZWQu
IGNjcHU6JWRcbiIsIGNwdSxjY3B1KTsKICAgICAgICBlbHNlIHsKICAgICAgICAgICAga2RiX2Nw
dV9jbWRbY3B1XSA9IEtEQl9DUFVfRElTQUJMRTsgIC8qIHRlbGwgaXQgdG8gZGlzYWJsZSBpbnRz
ICovCiAgICAgICAgICAgIHdoaWxlIChrZGJfY3B1X2NtZFtjcHVdICE9IEtEQl9DUFVfUEFVU0Up
OwogICAgICAgIH0KfQoKLyogCiAqIERvIG9uY2UgcGVyIGtkYiBzZXNzaW9uOiAgQSBrZGIgc2Vz
c2lvbiBsYXN0cyBmcm9tIAogKiAgICBrZXlib3JkL0hXQlAvU1dCUCB0aWxsIEtEQl9DUFVfSU5T
VEFMTF9CUCBpcyBkb25lLiBXaXRoaW4gYSBzZXNzaW9uLAogKiAgICB1c2VyIG1heSBkbyBzZXZl
cmFsIGNwdSBzd2l0Y2hlcywgc2luZ2xlIHN0ZXAsIG5leHQgaW5zdHIsICBldGMuLgogKgogKiBE
TzogMS4gcGF1c2Ugb3RoZXIgY3B1cyBpZiB0aGV5IGFyZSBub3QgYWxyZWFkeS4gdGhleSB3b3Vs
ZCBhbHJlYWR5IGJlIAogKiAgICAgICAgaWYgd2UgYXJlIGluIHNpbmdsZSBzdGVwIG1vZGUKICog
ICAgIDIuIHdhdGNoZG9nX2Rpc2FibGUoKSAKICogICAgIDMuIHVuaW5zdGFsbCBhbGwgc3cgYnJl
YWtwb2ludHMgc28gdGhhdCB1c2VyIGRvZXNuJ3Qgc2VlIHRoZW0KICovCnN0YXRpYyB2b2lkCmtk
Yl9iZWdpbl9zZXNzaW9uKHZvaWQpCnsKICAgIGlmICgha2RiX3Nlc3Npb25fYmVndW4pIHsKICAg
ICAgICBrZGJfc2Vzc2lvbl9iZWd1biA9IDE7CiAgICAgICAga2RiX3NtcF9wYXVzZV9jcHVzKCk7
CiAgICAgICAgbG9jYWxfaXJxX2Rpc2FibGUoKTsKICAgICAgICB3YXRjaGRvZ19kaXNhYmxlKCk7
CiAgICAgICAga2RiX3VuaW5zdGFsbF9hbGxfc3dicCgpOwogICAgfQp9CgpzdGF0aWMgdm9pZApr
ZGJfc21wX3VucGF1c2VfY3B1cyhpbnQgY2NwdSkKewogICAgaW50IGNwdTsKCiAgICBpbnQgd2Fp
dF9jb3VudCA9IDA7CiAgICBjcHVtYXNrX3QgY3B1bWFzayA9IGNwdV9vbmxpbmVfbWFwOwoKICAg
IGNwdW1hc2tfY2xlYXJfY3B1KHNtcF9wcm9jZXNzb3JfaWQoKSwgJmNwdW1hc2spOwoKICAgIEtE
QkdQKCJrZGJfc21wX3VucGF1c2Vfb3RoZXJfY3B1cygpLiBjY3B1OiVkXG4iLCBjY3B1KTsKICAg
IGZvcl9lYWNoX2NwdShjcHUsICZjcHVtYXNrKQogICAgICAgIGtkYl9jcHVfY21kW2NwdV0gPSBL
REJfQ1BVX1FVSVQ7CgogICAgd2hpbGUod2FpdF9jb3VudCsrIDwgMTApIHsKICAgICAgICBpbnQg
YnVtbWVyID0gMDsKICAgICAgICBmb3JfZWFjaF9jcHUoY3B1LCAmY3B1bWFzaykKICAgICAgICAg
ICAgaWYgKGtkYl9jcHVfY21kW2NwdV0gIT0gS0RCX0NQVV9JTlZBTCkKICAgICAgICAgICAgICAg
IGJ1bW1lciA9IDE7CiAgICAgICAgICAgIGlmICghYnVtbWVyKQogICAgICAgICAgICAgICAgYnJl
YWs7CiAgICAgICAgICAgIG1kZWxheSg5MCk7ICAvKiB3YWl0IDkwIG1zLCA1MCB0b28gc2hvcnQg
b24gbGFyZ2Ugc3lzdGVtcyAqLwogICAgfTsKICAgIC8qIG5vdyBtYWtlIHN1cmUgdGhleSBhcmUg
YWxsIGluIHRoZXJlICovCiAgICBmb3JfZWFjaF9jcHUoY3B1LCAmY3B1bWFzaykKICAgICAgICBp
ZiAoa2RiX2NwdV9jbWRbY3B1XSAhPSBLREJfQ1BVX0lOVkFMKQogICAgICAgICAgICBrZGJwKCJL
REI6IGNwdSAlZCBzdGlsbCBwYXVzZWQgKGNtZD09JWQpLiBjY3B1OiVkXG4iLAogICAgICAgICAg
ICAgICAgIGNwdSwga2RiX2NwdV9jbWRbY3B1XSwgY2NwdSk7Cn0KCi8qCiAqIEVuZCBvZiBLREIg
c2Vzc2lvbi4gCiAqICAgVGhpcyBpcyBjYWxsZWQgYXQgdGhlIHZlcnkgZW5kLiBJbiBjYXNlIG9m
IG11bHRpcGxlIGNwdXMgaGl0dGluZyBCUHMKICogICBhbmQgc2l0dGluZyBvbiBhIHRyYXAgaGFu
ZGxlcnMsIHRoZSBsYXN0IGNwdSB0byBleGl0IHdpbGwgY2FsbCB0aGlzLgogKiAgIC0gaXNuc3Rh
bGwgYWxsIHN3IGJyZWFrcG9pbnRzLCBhbmQgcHVyZ2UgZGVsZXRlZCBvbmVzIGZyb20gdGFibGUu
CiAqICAgLSBjbGVhciBURiBoZXJlIGFsc28gaW4gY2FzZSBnbyBpcyBlbnRlcmVkIG9uIGEgZGlm
ZmVyZW50IGNwdSBhZnRlciBzd2l0Y2gKICovCnN0YXRpYyB2b2lkCmtkYl9lbmRfc2Vzc2lvbihp
bnQgY2NwdSwgc3RydWN0IGNwdV91c2VyX3JlZ3MgKnJlZ3MpCnsKICAgIEFTU0VSVCghY3B1bWFz
a19lbXB0eSgma2RiX2NwdV90cmFwcykpOwogICAgQVNTRVJUKGtkYl9zZXNzaW9uX2JlZ3VuKTsK
ICAgIGtkYl9pbnN0YWxsX2FsbF9zd2JwKCk7CiAgICBrZGJfZmx1c2hfc3dicF90YWJsZSgpOwog
ICAga2RiX2luc3RhbGxfd2F0Y2hwb2ludHMoKTsKCiAgICByZWdzLT5lZmxhZ3MgJj0gflg4Nl9F
RkxBR1NfVEY7CiAgICBrZGJfY3B1X2NtZFtjY3B1XSA9IEtEQl9DUFVfSU5WQUw7CiAgICBrZGJf
dGltZV9yZXN1bWUoMSk7CiAgICBrZGJfc2Vzc2lvbl9iZWd1biA9IDA7ICAgICAgLyogYmVmb3Jl
IHVucGF1c2UgZm9yIGtkYl9pbnN0YWxsX3dhdGNocG9pbnRzICovCiAgICBrZGJfc21wX3VucGF1
c2VfY3B1cyhjY3B1KTsKICAgIHdhdGNoZG9nX2VuYWJsZSgpOwogICAgS0RCR1AoImVuZF9zZXNz
aW9uOmNjcHU6JWRcbiIsIGNjcHUpOwp9CgovKiAKICogY2hlY2sgaWYgd2UgZW50ZXJlZCBrZGIg
YmVjYXVzZSBvZiBEQiB0cmFwLiBJZiB5ZXMsIHRoZW4gY2hlY2sgaWYKICogd2UgY2F1c2VkIGl0
IG9yIHNvbWVvbmUgZWxzZS4KICogUkVUVVJOUzogMCA6IG5vdCBvbmUgb2Ygb3Vycy4gaHlwZXJ2
aXNvciBtdXN0IGhhbmRsZSBpdC4gCiAqICAgICAgICAgIDEgOiAjREIgZm9yIGRlbGF5ZWQgc3cg
YnAgaW5zdGFsbC4gCiAqICAgICAgICAgIDIgOiB0aGlzIGNwdSBtdXN0IHN0YXkgaW4ga2RiLgog
Ki8Kc3RhdGljIG5vaW5saW5lIGludAprZGJfY2hlY2tfZGJ0cmFwKGtkYl9yZWFzb25fdCAqcmVh
c3AsIGludCBzc19tb2RlLCBzdHJ1Y3QgY3B1X3VzZXJfcmVncyAqcmVncykgCnsKICAgIGludCBy
YyA9IDI7CiAgICBpbnQgY2NwdSA9IHNtcF9wcm9jZXNzb3JfaWQoKTsKCiAgICAvKiBEQiBleGNw
IGNhdXNlZCBieSBodyBicmVha3BvaW50IG9yIHRoZSBURiBmbGFnLiBUaGUgVEYgZmxhZyBpcyBz
ZXQKICAgICAqIGJ5IHVzIGZvciBzcyBtb2RlIG9yIHRvIGluc3RhbGwgYnJlYWtwb2ludHMuIElu
IHNzIG1vZGUsIG5vbmUgb2YgdGhlCiAgICAgKiBicmVha3BvaW50cyBhcmUgaW5zdGFsbGVkLiBD
aGVjayB0byBtYWtlIHN1cmUgd2UgaW50ZW5kZWQgQlAgSU5TVEFMTAogICAgICogc28gd2UgZG9u
J3QgZG8gaXQgb24gYSBzcHVyaW91cyBEQiB0cmFwLgogICAgICogY2hlY2sgZm9yIGtkYl9jcHVf
dHJhcHMgaGVyZSBhbHNvLCBiZWNhdXNlIGVhY2ggY3B1IHNpdHRpbmcgb24gYSB0cmFwCiAgICAg
KiBtdXN0IGV4ZWN1dGUgdGhlIGluc3RydWN0aW9uIHdpdGhvdXQgdGhlIEJQIGJlZm9yZSBwYXNz
aW5nIGNvbnRyb2wKICAgICAqIHRvIG5leHQgY3B1IGluIGtkYl9jcHVfdHJhcHMuCiAgICAgKi8K
ICAgIGlmICgqcmVhc3AgPT0gS0RCX1JFQVNPTl9EQkVYQ1AgJiYgIXNzX21vZGUpIHsKICAgICAg
ICBpZiAoa2RiX2NwdV9jbWRbY2NwdV0gPT0gS0RCX0NQVV9JTlNUQUxMX0JQKSB7CiAgICAgICAg
ICAgIGlmICghY3B1bWFza19lbXB0eSgma2RiX2NwdV90cmFwcykpIHsKICAgICAgICAgICAgICAg
IGludCBhX3RyYXBfY3B1ID0gY3B1bWFza19maXJzdCgma2RiX2NwdV90cmFwcyk7CiAgICAgICAg
ICAgICAgICBLREJHUCgiY2NwdTolZCB0cmFwY3B1OiVkXG4iLCBjY3B1LCBhX3RyYXBfY3B1KTsK
ICAgICAgICAgICAgICAgIGtkYl9jcHVfY21kW2FfdHJhcF9jcHVdID0gS0RCX0NQVV9RVUlUOwog
ICAgICAgICAgICAgICAgKnJlYXNwID0gS0RCX1JFQVNPTl9QQVVTRV9JUEk7CiAgICAgICAgICAg
ICAgICByZWdzLT5lZmxhZ3MgJj0gflg4Nl9FRkxBR1NfVEY7ICAvKiBodm06IGV4aXQgaGFuZGxl
ciBzcyA9IDAgKi8KICAgICAgICAgICAgICAgIGtkYl9pbml0X2NwdSA9IC0xOwogICAgICAgICAg
ICB9IGVsc2UgewogICAgICAgICAgICAgICAga2RiX2VuZF9zZXNzaW9uKGNjcHUsIHJlZ3MpOwog
ICAgICAgICAgICAgICAgcmMgPSAxOwogICAgICAgICAgICB9CiAgICAgICAgfSBlbHNlIGlmICgh
IGtkYl9jaGVja193YXRjaHBvaW50cyhyZWdzKSkgewogICAgICAgICAgICByYyA9IDA7ICAgICAg
ICAgICAgICAgICAgICAgICAgLyogaHlwIG11c3QgaGFuZGxlIGl0ICovCiAgICAgICAgfQogICAg
fQogICAgcmV0dXJuIHJjOwp9CgovKiAKICogTWlzYyBwcm9jZXNzaW5nIG9uIGtkYiBlbnRyeSBs
aWtlIGRpc3BsYXlpbmcgUEMsIGFkanVzdCBJUCBmb3Igc3cgYnAuLi4uIAogKi8Kc3RhdGljIHZv
aWQKa2RiX21haW5fZW50cnlfbWlzYyhrZGJfcmVhc29uX3QgcmVhc29uLCBzdHJ1Y3QgY3B1X3Vz
ZXJfcmVncyAqcmVncywgCiAgICAgICAgICAgICAgICAgICAgaW50IGNjcHUsIGludCBzc19tb2Rl
LCBpbnQgZW5hYmxlZCkKewogICAgaWYgKHJlYXNvbiA9PSBLREJfUkVBU09OX0tFWUJPQVJEKQog
ICAgICAgIGtkYnAoIlxuRW50ZXIga2RiIChjcHU6JWQgcmVhc29uOiVkIHZjcHU9JWQgZG9taWQ6
JWQiCiAgICAgICAgICAgICAiIGVmbGc6MHglbHggaXJxczolZClcbiIsIGNjcHUsIHJlYXNvbiwg
Y3VycmVudC0+dmNwdV9pZCwgCiAgICAgICAgICAgICBjdXJyZW50LT5kb21haW4tPmRvbWFpbl9p
ZCwgcmVncy0+ZWZsYWdzLCBlbmFibGVkKTsKICAgIGVsc2UgaWYgKHNzX21vZGUpCiAgICAgICAg
S0RCR1AxKCJLREJHOiBLREIgc2luZ2xlIHN0ZXAgbW9kZS4gY2NwdTolZFxuIiwgY2NwdSk7Cgog
ICAgaWYgKHJlYXNvbiA9PSBLREJfUkVBU09OX0JQRVhDUCAmJiAhc3NfbW9kZSkgCiAgICAgICAg
a2RicCgiQnJlYWtwb2ludCBvbiBjcHUgJWQgYXQgMHglbHhcbiIsIGNjcHUsIHJlZ3MtPktEQklQ
KTsKCiAgICAvKiBkaXNwbGF5IHRoZSBjdXJyZW50IFBDIGFuZCBpbnN0cnVjdGlvbiBhdCBpdCAq
LwogICAgaWYgKHJlYXNvbiAhPSBLREJfUkVBU09OX1BBVVNFX0lQSSkKICAgICAgICBrZGJfZGlz
cGxheV9wYyhyZWdzKTsKICAgIGNvbnNvbGVfc3RhcnRfc3luYygpOwp9CgovKiAKICogVGhlIE1B
SU4ga2RiIGZ1bmN0aW9uLiBBbGwgY3B1cyBnbyB0aHJ1IHRoaXMuIElSUSBpcyBlbmFibGVkIG9u
IGVudHJ5IGJlY2F1c2UKICogYSBjcHUgY291bGQgaGl0IGEgYnAgc2V0IGluIGRpc2FibGVkIGNv
ZGUuCiAqIElQSTogRXZlbiB0aGUgbWFpbiBjcHUgbXVzdCBlbmFibGUgaW4gY2FzZSBhbm90aGVy
IENQVSBpcyB0cnlpbmcgdG8gSVBJIHVzLgogKiAgICAgIFRoYXQgd2F5LCBpdCB3b3VsZCBJUEkg
dXMsIHRoZW4gZ2V0IG91dCBhbmQgYmUgcmVhZHkgZm9yIG91ciBwYXVzZSBJUEkuCiAqIElSUXM6
IFRoZSByZWFzb24gaXJxcyBlbmFibGUvZGlzYWJsZSBpcyBzY2F0dGVyZWQgaXMgYmVjYXVzZSBv
biBhIHR5cGljYWwKICogICAgICAgc3lzdGVtIElQSXMgYXJlIGNvbnN0YW50bHkgZ29pbmcgb24g
YW1vbmdzIENQVXMgaW4gYSBzZXQgb2YgYW55IHNpemUuIAogKiAgICAgICBBcyBhIHJlc3VsdCwg
IHRvIGF2b2lkIGRlYWRsb2NrLCBjcHVzIGhhdmUgdG8gbG9vcCBlbmFibGVkLCB1bnRpbCBhIAog
KiAgICAgICBxdW9ydW0gaXMgZXN0YWJsaXNoZWQgYW5kIHRoZSBzZXNzaW9uIGhhcyBiZWd1bi4K
ICogU3RlcDogSW50ZWwgVm9sM0IgMTguMy4xLjQgOiBBbiBleHRlcm5hbCBpbnRlcnJ1cHQgbWF5
IGJlIHNlcnZpY2VkIHVwb24KICogICAgICAgc2luZ2xlIHN0ZXAuIFNpbmNlLCB0aGUgbGlrZWx5
IGV4dCB0aW1lcl9pbnRlcnJ1cHQgYW5kIAogKiAgICAgICBhcGljX3RpbWVyX2ludGVycnVwdCBk
b250JyBtZXNzIHdpdGggdGltZSBkYXRhIHN0cnVjdHMsIHdlIGFyZSBwcm9iIE9LCiAqICAgICAg
IGxlYXZpbmcgZW5hYmxlZC4KICogVGltZTogVmVyeSBtZXNzeS4gTW9zdCBwbGF0Zm9ybSB0aW1l
cnMgYXJlIHJlYWRvbmx5LCBzbyB3ZSBjYW4ndCBzdG9wIHRpbWUKICogICAgICAgaW4gdGhlIGRl
YnVnZ2VyLiBXZSB0YWtlIHRoZSBvbmx5IHJlc29ydCwgbGV0IHRoZSBUU0MgYW5kIHBsdCBydW4g
YXMKICogICAgICAgbm9ybWFsLCB1cG9uIGxlYXZpbmcsICJhdHRlbXB0IiB0byBicmluZyBldmVy
eWJvZHkgdG8gY3VycmVudCB0aW1lLgogKiBrZGJjcHV0cmFwczogYml0IHBlciBjcHUuIGVhY2gg
Y3B1IHNldHMgaXQgYml0IGluIGVudHJ5LlMuIFRoZSBiaXQgaXMgCiAqICAgICAgICAgICAgICBy
ZWxpYWJsZSBiZWNhdXNlIHVwb24gdHJhcHMsIEludHMgYXJlIGRpc2FibGVkLiB0aGUgYml0IGlz
IHNldAogKiAgICAgICAgICAgICAgYmVmb3JlIEludHMgYXJlIGVuYWJsZWQuCiAqCiAqIFJFVFVS
TlM6IDAgOiBrZGIgd2FzIGNhbGxlZCBmb3IgZXZlbnQgaXQgd2FzIG5vdCByZXNwb25zaWJsZQog
KiAgICAgICAgICAxIDogZXZlbnQgb3duZWQgYW5kIGhhbmRsZWQgYnkga2RiIAogKi8Kc3RhdGlj
IGludAprZGJtYWluKGtkYl9yZWFzb25fdCByZWFzb24sIHN0cnVjdCBjcHVfdXNlcl9yZWdzICpy
ZWdzKQp7CiAgICBpbnQgY2NwdSA9IHNtcF9wcm9jZXNzb3JfaWQoKTsgICAgICAgICAgICAgICAg
LyogY3VycmVudCBjcHUgKi8KICAgIGludCByYyA9IDEsIGNtZCA9IGtkYl9jcHVfY21kW2NjcHVd
OwogICAgaW50IHNzX21vZGUgPSAoY21kID09IEtEQl9DUFVfU1MgfHwgY21kID09IEtEQl9DUFVf
TkkpOwogICAgaW50IGRlbGF5ZWRfaW5zdGFsbCA9IChrZGJfY3B1X2NtZFtjY3B1XSA9PSBLREJf
Q1BVX0lOU1RBTExfQlApOwogICAgaW50IGVuYWJsZWQgPSBsb2NhbF9pcnFfaXNfZW5hYmxlZCgp
OwoKICAgIEtEQkdQKCJrZGJtYWluOmNjcHU6JWQgcnNuOiVkIGVmbGdzOjB4JWx4IGNtZDolZCBp
bml0YzolZCBpcnFzOiVkICIKICAgICAgICAgICJyZWdzOiVseCBJUDolbHggIiwgY2NwdSwgcmVh
c29uLCByZWdzLT5lZmxhZ3MsIGNtZCwgCiAgICAgICAgICBrZGJfaW5pdF9jcHUsIGVuYWJsZWQs
IHJlZ3MsIHJlZ3MtPktEQklQKTsKICAgIGtkYl9kYmdfcHJudF9jdHJwcygiIiwgLTEpOwoKICAg
IGlmICghc3NfbW9kZSAmJiAhZGVsYXllZF9pbnN0YWxsKSAgICAvKiBpbml0aWFsIGtkYiBlbnRl
ciAqLwogICAgICAgIGxvY2FsX2lycV9lbmFibGUoKTsgICAgICAgICAgICAgIC8qIHNvIHdlIGNh
biByZWNlaXZlIElQSSAqLwoKICAgIGlmICghc3NfbW9kZSAmJiBjY3B1ICE9IGtkYl9pbml0X2Nw
dSAmJiByZWFzb24gIT0gS0RCX1JFQVNPTl9QQVVTRV9JUEkpewogICAgICAgIGludCBzeiA9IHNp
emVvZihrZGJfaW5pdF9jcHUpOwogICAgICAgIHdoaWxlIChfX2NtcHhjaGcoJmtkYl9pbml0X2Nw
dSwgLTEsIGNjcHUsIHN6KSAhPSAtMSkKICAgICAgICAgICAgZm9yKDsga2RiX2luaXRfY3B1ICE9
IC0xOyBjcHVfcmVsYXgoKSk7CiAgICB9CiAgICBpZiAoa2RiX3Nlc3Npb25fYmVndW4pCiAgICAg
ICAgbG9jYWxfaXJxX2Rpc2FibGUoKTsgICAgICAgICAgICAgLyoga2RiIGFsd2F5cyBydW5zIGRp
c2FibGVkICovCgogICAgaWYgKHJlYXNvbiA9PSBLREJfUkVBU09OX0JQRVhDUCkgeyAgICAgICAg
ICAgICAvKiBJTlQgMyAqLwogICAgICAgIGNwdW1hc2tfY2xlYXJfY3B1KGNjcHUsICZrZGJfY3B1
X3RyYXBzKTsgICAvKiByZW1vdmUgb3Vyc2VsZiAqLwogICAgICAgIHJjID0ga2RiX2NoZWNrX3N3
X2JrcHRzKHJlZ3MpOwogICAgICAgIGlmIChyYyA9PSAwKSB7ICAgICAgICAgICAgICAgLyogbm90
IG9uZSBvZiBvdXJzLiBsZWF2ZSBrZGIgKi8KICAgICAgICAgICAga2RiX2luaXRfY3B1ID0gLTE7
CiAgICAgICAgICAgIGdvdG8gb3V0OwogICAgICAgIH0gZWxzZSBpZiAocmMgPT0gMSkgeyAgICAg
ICAgLyogb25lIG9mIG91cnMgYnV0IGRlbGV0ZWQgKi8KICAgICAgICAgICAgaWYgKGNwdW1hc2tf
ZW1wdHkoJmtkYl9jcHVfdHJhcHMpKSB7CiAgICAgICAgICAgICAgICBrZGJfZW5kX3Nlc3Npb24o
Y2NwdSxyZWdzKTsgICAgIAogICAgICAgICAgICAgICAga2RiX2luaXRfY3B1ID0gLTE7CiAgICAg
ICAgICAgICAgICBnb3RvIG91dDsKICAgICAgICAgICAgfSBlbHNlIHsgICAgICAgICAgICAgICAg
IAogICAgICAgICAgICAgICAgLyogcmVsZWFzZSBhbm90aGVyIHRyYXAgY3B1LCBhbmQgcHV0IG91
cnNlbGYgaW4gYSBwYXVzZSBtb2RlICovCiAgICAgICAgICAgICAgICBpbnQgYV90cmFwX2NwdSA9
IGNwdW1hc2tfZmlyc3QoJmtkYl9jcHVfdHJhcHMpOwogICAgICAgICAgICAgICAgS0RCR1AoImNj
cHU6JWQgY21kOiVkIHJzbjolZCBhdHJwY3B1OiVkIGluaXRjcHU6JWRcbiIsIGNjcHUsIAogICAg
ICAgICAgICAgICAgICAgICAga2RiX2NwdV9jbWRbY2NwdV0sIHJlYXNvbiwgYV90cmFwX2NwdSwg
a2RiX2luaXRfY3B1KTsKICAgICAgICAgICAgICAgIGtkYl9jcHVfY21kW2FfdHJhcF9jcHVdID0g
S0RCX0NQVV9RVUlUOwogICAgICAgICAgICAgICAgcmVhc29uID0gS0RCX1JFQVNPTl9QQVVTRV9J
UEk7CiAgICAgICAgICAgICAgICBrZGJfaW5pdF9jcHUgPSAtMTsKICAgICAgICAgICAgfQogICAg
ICAgIH0gZWxzZSBpZiAocmMgPT0gMikgeyAgICAgICAgLyogb25lIG9mIG91cnMgYnV0IGNvbmRp
dGlvbiBub3QgbWV0ICovCiAgICAgICAgICAgICAgICBrZGJfYmVnaW5fc2Vzc2lvbigpOwogICAg
ICAgICAgICAgICAgaWYgKGd1ZXN0X21vZGUocmVncykgJiYgaXNfaHZtX29yX2h5Yl92Y3B1KGN1
cnJlbnQpKQogICAgICAgICAgICAgICAgICAgIGN1cnJlbnQtPmFyY2guaHZtX3ZjcHUuc2luZ2xl
X3N0ZXAgPSAxOwogICAgICAgICAgICAgICAgZWxzZQogICAgICAgICAgICAgICAgICAgIHJlZ3Mt
PmVmbGFncyB8PSBYODZfRUZMQUdTX1RGOyAgCiAgICAgICAgICAgICAgICBrZGJfY3B1X2NtZFtj
Y3B1XSA9IEtEQl9DUFVfSU5TVEFMTF9CUDsKICAgICAgICAgICAgICAgIGdvdG8gb3V0OwogICAg
ICAgIH0KICAgIH0KCiAgICAvKiBmb2xsb3dpbmcgd2lsbCB0YWtlIGNhcmUgb2YgS0RCX0NQVV9J
TlNUQUxMX0JQLCBhbmQgYWxzbyByZWxlYXNlCiAgICAgKiBrZGJfaW5pdF9jcHUuIGl0IHNob3Vs
ZCBub3QgYmUgZG9uZSB0d2ljZSAqLwogICAgaWYgKChyYz1rZGJfY2hlY2tfZGJ0cmFwKCZyZWFz
b24sIHNzX21vZGUsIHJlZ3MpKSA9PSAwIHx8IHJjID09IDEpIHsKICAgICAgICBrZGJfaW5pdF9j
cHUgPSAtMTsgICAgICAgLyogbGVhdmluZyBrZGIgKi8KICAgICAgICBnb3RvIG91dDsgICAgICAg
ICAgICAgICAgLyogcmMgcHJvcGVybHkgc2V0IHRvIDAgb3IgMSAqLwogICAgfQogICAgaWYgKHJl
YXNvbiAhPSBLREJfUkVBU09OX1BBVVNFX0lQSSkgewogICAgICAgIGtkYl9jcHVfY21kW2NjcHVd
ID0gS0RCX0NQVV9NQUlOX0tEQjsKICAgIH0gZWxzZQogICAgICAgIGtkYl9jcHVfY21kW2NjcHVd
ID0gS0RCX0NQVV9QQVVTRTsKCiAgICBpZiAoa2RiX2NwdV9jbWRbY2NwdV0gPT0gS0RCX0NQVV9N
QUlOX0tEQiAmJiAhc3NfbW9kZSkKICAgICAgICBrZGJfYmVnaW5fc2Vzc2lvbigpOyAKCiAgICBr
ZGJfbWFpbl9lbnRyeV9taXNjKHJlYXNvbiwgcmVncywgY2NwdSwgc3NfbW9kZSwgZW5hYmxlZCk7
CiAgICAvKiBub3RlLCBvbmUgb3IgbW9yZSBjcHUgc3dpdGNoZXMgbWF5IG9jY3VyIGluIGJldHdl
ZW4gKi8KICAgIHdoaWxlICgxKSB7CiAgICAgICAgaWYgKGtkYl9jcHVfY21kW2NjcHVdID09IEtE
Ql9DUFVfUEFVU0UpCiAgICAgICAgICAgIGtkYl9ob2xkX3RoaXNfY3B1KGNjcHUsIHJlZ3MpOwog
ICAgICAgIGlmIChrZGJfY3B1X2NtZFtjY3B1XSA9PSBLREJfQ1BVX01BSU5fS0RCKQogICAgICAg
ICAgICBrZGJfZG9fY21kcyhyZWdzKTsKCiAgICAgICAgaWYgKGtkYl9jcHVfY21kW2NjcHVdID09
IEtEQl9DUFVfR08pIHsKICAgICAgICAgICAgaWYgKGNjcHUgIT0ga2RiX2luaXRfY3B1KSB7CiAg
ICAgICAgICAgICAgICBrZGJfY3B1X2NtZFtrZGJfaW5pdF9jcHVdID0gS0RCX0NQVV9HTzsKICAg
ICAgICAgICAgICAgIGtkYl9jcHVfY21kW2NjcHVdID0gS0RCX0NQVV9QQVVTRTsKICAgICAgICAg
ICAgICAgIGNvbnRpbnVlOyAgICAgICAgICAgICAgIC8qIGZvciB0aGUgcGF1c2UgZ3V5ICovCiAg
ICAgICAgICAgIH0KICAgICAgICAgICAgaWYgKCFjcHVtYXNrX2VtcHR5KCZrZGJfY3B1X3RyYXBz
KSkgewogICAgICAgICAgICAgICAgLyogZXhlY3V0ZSBjdXJyZW50IGluc3RydWN0aW9uIHdpdGhv
dXQgMHhjYyAqLwogICAgICAgICAgICAgICAga2RiX2RiZ19wcm50X2N0cnBzKCJuZW1wdHk6Iiwg
Y2NwdSk7CiAgICAgICAgICAgICAgICBpZiAoZ3Vlc3RfbW9kZShyZWdzKSAmJiBpc19odm1fb3Jf
aHliX3ZjcHUoY3VycmVudCkpCiAgICAgICAgICAgICAgICAgICAgY3VycmVudC0+YXJjaC5odm1f
dmNwdS5zaW5nbGVfc3RlcCA9IDE7CiAgICAgICAgICAgICAgICBlbHNlCiAgICAgICAgICAgICAg
ICAgICAgcmVncy0+ZWZsYWdzIHw9IFg4Nl9FRkxBR1NfVEY7ICAKICAgICAgICAgICAgICAgIGtk
Yl9jcHVfY21kW2NjcHVdID0gS0RCX0NQVV9JTlNUQUxMX0JQOwogICAgICAgICAgICAgICAgZ290
byBvdXQ7CiAgICAgICAgICAgIH0KICAgICAgICB9CiAgICAgICAgaWYgKGtkYl9jcHVfY21kW2Nj
cHVdICE9IEtEQl9DUFVfUEFVU0UgICYmIAogICAgICAgICAgICBrZGJfY3B1X2NtZFtjY3B1XSAh
PSBLREJfQ1BVX01BSU5fS0RCKQogICAgICAgICAgICAgICAgYnJlYWs7CiAgICB9CiAgICBpZiAo
a2RiX2NwdV9jbWRbY2NwdV0gPT0gS0RCX0NQVV9HTykgewogICAgICAgIEFTU0VSVChjcHVtYXNr
X2VtcHR5KCZrZGJfY3B1X3RyYXBzKSk7CiAgICAgICAgaWYgKGtkYl9zd2JwX2V4aXN0cygpKSB7
CiAgICAgICAgICAgIGlmIChyZWFzb24gPT0gS0RCX1JFQVNPTl9CUEVYQ1ApIHsKICAgICAgICAg
ICAgICAgIC8qIGRvIGRlbGF5ZWQgaW5zdGFsbCAqLwogICAgICAgICAgICAgICAgaWYgKGd1ZXN0
X21vZGUocmVncykgJiYgaXNfaHZtX29yX2h5Yl92Y3B1KGN1cnJlbnQpKQogICAgICAgICAgICAg
ICAgICAgIGN1cnJlbnQtPmFyY2guaHZtX3ZjcHUuc2luZ2xlX3N0ZXAgPSAxOwogICAgICAgICAg
ICAgICAgZWxzZQogICAgICAgICAgICAgICAgICAgIHJlZ3MtPmVmbGFncyB8PSBYODZfRUZMQUdT
X1RGOyAgCiAgICAgICAgICAgICAgICBrZGJfY3B1X2NtZFtjY3B1XSA9IEtEQl9DUFVfSU5TVEFM
TF9CUDsKICAgICAgICAgICAgICAgIGdvdG8gb3V0OwogICAgICAgICAgICB9IAogICAgICAgIH0K
ICAgICAgICBrZGJfZW5kX3Nlc3Npb24oY2NwdSwgcmVncyk7CiAgICAgICAga2RiX2luaXRfY3B1
ID0gLTE7CiAgICB9Cm91dDoKICAgIGlmIChrZGJfY3B1X2NtZFtjY3B1XSA9PSBLREJfQ1BVX1FV
SVQpIHsKICAgICAgICBLREJHUCgiY2NwdTolZCBfcXVpdCBJUDogJWx4XG4iLCBjY3B1LCByZWdz
LT5LREJJUCk7CiAgICAgICAgaWYgKCEga2RiX3Nlc3Npb25fYmVndW4pCiAgICAgICAgICAgIGtk
Yl9pbnN0YWxsX3dhdGNocG9pbnRzKCk7CiAgICAgICAga2RiX3RpbWVfcmVzdW1lKDApOwogICAg
ICAgIGtkYl9jcHVfY21kW2NjcHVdID0gS0RCX0NQVV9JTlZBTDsKICAgIH0KCiAgICAvKiBmb3Ig
c3MgYW5kIGRlbGF5ZWQgaW5zdGFsbCwgVEYgaXMgc2V0LiBub3QgbXVjaCBpbiBFWFQgSU5UIGhh
bmRsZXJzKi8KICAgIGlmIChrZGJfY3B1X2NtZFtjY3B1XSA9PSBLREJfQ1BVX05JKQogICAgICAg
IGtkYl90aW1lX3Jlc3VtZSgxKTsKICAgIGlmIChlbmFibGVkKQogICAgICAgIGxvY2FsX2lycV9l
bmFibGUoKTsKCiAgICBLREJHUCgia2RibWFpbjpYOmNjcHU6JWQgcmM6JWQgY21kOiVkIGVmbGc6
MHglbHggaW5pdGM6JWQgc2VzbjolZCAiIAogICAgICAgICAgImNzOiV4IGlycXM6JWQgIiwgY2Nw
dSwgcmMsIGtkYl9jcHVfY21kW2NjcHVdLCByZWdzLT5lZmxhZ3MsIAogICAgICAgICAga2RiX2lu
aXRfY3B1LCBrZGJfc2Vzc2lvbl9iZWd1biwgcmVncy0+Y3MsIGxvY2FsX2lycV9pc19lbmFibGVk
KCkpOwogICAga2RiX2RiZ19wcm50X2N0cnBzKCIiLCAtMSk7CiAgICByZXR1cm4gKHJjID8gMSA6
IDApOwp9CgovKiAKICoga2RiIGVudHJ5IGZ1bmN0aW9uIHdoZW4gY29taW5nIGluIHZpYSBhIGtl
eWJvYXJkCiAqIFJFVFVSTlM6IDAgOiBrZGIgd2FzIGNhbGxlZCBmb3IgZXZlbnQgaXQgd2FzIG5v
dCByZXNwb25zaWJsZQogKiAgICAgICAgICAxIDogZXZlbnQgb3duZWQgYW5kIGhhbmRsZWQgYnkg
a2RiIAogKi8KaW50CmtkYl9rZXlib2FyZChzdHJ1Y3QgY3B1X3VzZXJfcmVncyAqcmVncykKewog
ICAgcmV0dXJuIGtkYm1haW4oS0RCX1JFQVNPTl9LRVlCT0FSRCwgcmVncyk7Cn0KCiNpZiAwCi8q
CiAqIHRoaXMgZnVuY3Rpb24gY2FsbGVkIHdoZW4ga2RiIHNlc3Npb24gYWN0aXZlIGFuZCB1c2Vy
IHByZXNzZXMgY3RybFwgYWdhaW4uCiAqIHRoZSBhc3N1bXB0aW9uIGlzIHRoYXQgdGhlIHVzZXIg
dHlwZWQgbmkvc3MgY21kLCBhbmQgaXQgbmV2ZXIgZ290IGJhY2sgaW50bwogKiBrZGIsIG9yIHRo
ZSB1c2VyIGlzIGltcGF0aWVudC4gRWl0aGVyIGNhc2UsIHdlIGp1c3QgZmFrZSBpdCB0aGF0IHRo
ZSBTUyBkaWQKICogZmluaXNoLiBTaW5jZSwgYWxsIG90aGVyIGtkYiBjcHVzIG11c3QgYmUgaG9s
ZGluZyBkaXNhYmxlZCwgdGhlIGludGVycnVwdAogKiB3b3VsZCBiZSBvbiB0aGUgQ1BVIHRoYXQg
ZGlkIHRoZSBzcy9uaSBjbWQKICovCnZvaWQKa2RiX3NzbmlfcmVlbnRlcihzdHJ1Y3QgY3B1X3Vz
ZXJfcmVncyAqcmVncykKewogICAgaW50IGNjcHUgPSBzbXBfcHJvY2Vzc29yX2lkKCk7CiAgICBp
bnQgY2NtZCA9IGtkYl9jcHVfY21kW2NjcHVdOwoKICAgIGlmKGNjbWQgPT0gS0RCX0NQVV9TUyB8
fCBjY21kID09IEtEQl9DUFVfSU5TVEFMTF9CUCkKICAgICAgICBrZGJtYWluKEtEQl9SRUFTT05f
REJFWENQLCByZWdzKTsgCiAgICBlbHNlIAogICAgICAgIGtkYm1haW4oS0RCX1JFQVNPTl9LRVlC
T0FSRCwgcmVncyk7Cn0KI2VuZGlmCgovKiAKICogQWxsIHRyYXBzIGFyZSByb3V0ZWQgdGhydSBo
ZXJlLiBXZSBjYXJlIGFib3V0IEJQICgjMykgdHJhcCAoSU5UIDMpIGFuZAogKiB0aGUgREIgdHJh
cCgjMSkgb25seS4gCiAqIHJldHVybnM6IDAga2RiIGhhcyBub3RoaW5nIGRvIHdpdGggdGhpcyB0
cmFwCiAqICAgICAgICAgIDEga2RiIGhhbmRsZWQgdGhpcyB0cmFwIAogKi8KaW50CmtkYl9oYW5k
bGVfdHJhcF9lbnRyeShpbnQgdmVjdG9yLCBzdHJ1Y3QgY3B1X3VzZXJfcmVncyAqcmVncykKewog
ICAgaW50IHJjID0gMDsKICAgIGludCBjY3B1ID0gc21wX3Byb2Nlc3Nvcl9pZCgpOwoKICAgIGlm
ICh2ZWN0b3IgPT0gVFJBUF9pbnQzKSB7CiAgICAgICAgcmMgPSBrZGJtYWluKEtEQl9SRUFTT05f
QlBFWENQLCByZWdzKTsKCiAgICB9IGVsc2UgaWYgKHZlY3RvciA9PSBUUkFQX2RlYnVnKSB7CiAg
ICAgICAgS0RCR1AoImNjcHU6JWQgdHJhcGRiZyByZWFzOiVkXG4iLCBjY3B1LCBrZGJfdHJhcF9p
bW1lZF9yZWFzb24pOwoKICAgICAgICBpZiAoa2RiX3RyYXBfaW1tZWRfcmVhc29uID09IEtEQl9U
UkFQX0ZBVEFMKSB7IAogICAgICAgICAgICBLREJHUCgia2RidHJwOmZhdGFsIGNjcHU6JWQgdmVj
OiVkXG4iLCBjY3B1LCB2ZWN0b3IpOwogICAgICAgICAgICByYyA9IGtkYm1haW5fZmF0YWwocmVn
cywgdmVjdG9yKTsKICAgICAgICAgICAgQlVHKCk7ICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAvKiBubyByZXR1cm4gKi8KCiAgICAgICAgfSBlbHNlIGlmIChrZGJfdHJhcF9pbW1lZF9yZWFz
b24gPT0gS0RCX1RSQVBfS0RCU1RBQ0spIHsKICAgICAgICAgICAga2RiX3RyYXBfaW1tZWRfcmVh
c29uID0gMDsgICAgICAgICAvKiBzaG93IGtkYiBzdGFjayAqLwogICAgICAgICAgICBzaG93X3Jl
Z2lzdGVycyhyZWdzKTsKICAgICAgICAgICAgc2hvd19zdGFjayhyZWdzKTsKICAgICAgICAgICAg
cmVncy0+ZWZsYWdzICY9IH5YODZfRUZMQUdTX1RGOwogICAgICAgICAgICByYyA9IDE7CgogICAg
ICAgIH0gZWxzZSBpZiAoa2RiX3RyYXBfaW1tZWRfcmVhc29uID09IEtEQl9UUkFQX05PTkZBVEFM
KSB7CiAgICAgICAgICAgIGtkYl90cmFwX2ltbWVkX3JlYXNvbiA9IDA7CiAgICAgICAgICAgIHJj
ID0ga2RiX2tleWJvYXJkKHJlZ3MpOwogICAgICAgIH0gZWxzZSB7ICAgICAgICAgICAgICAgICAg
ICAgICAgIC8qIHNzL25pL2RlbGF5ZWQgaW5zdGFsbC4uLiAqLwogICAgICAgICAgICBpZiAoZ3Vl
c3RfbW9kZShyZWdzKSAmJiBpc19odm1fb3JfaHliX3ZjcHUoY3VycmVudCkpCiAgICAgICAgICAg
ICAgICBjdXJyZW50LT5hcmNoLmh2bV92Y3B1LnNpbmdsZV9zdGVwID0gMDsKICAgICAgICAgICAg
cmMgPSBrZGJtYWluKEtEQl9SRUFTT05fREJFWENQLCByZWdzKTsgCiAgICAgICAgfQoKICAgIH0g
ZWxzZSBpZiAodmVjdG9yID09IFRSQVBfbm1pKSB7ICAgICAgICAgICAgICAgICAgIC8qIGV4dGVy
bmFsIG5taSAqLwogICAgICAgIC8qIHdoZW4gbm1pIGlzIHByZXNzZWQsIGl0IGNvdWxkIGdvIHRv
IG9uZSBvciBtb3JlIG9yIGFsbCBjcHVzCiAgICAgICAgICogZGVwZW5kaW5nIG9uIHRoZSBoYXJk
d2FyZS4gQWxzbywgZm9yIG5vdyBhc3N1bWUgaXQncyBmYXRhbCAqLwogICAgICAgIEtEQkdQKCJr
ZGJ0cnA6Y2NwdTolZCB2ZWM6JWRcbiIsIGNjcHUsIHZlY3Rvcik7CiAgICAgICAgcmMgPSBrZGJt
YWluX2ZhdGFsKHJlZ3MsIFRSQVBfbm1pKTsKICAgIH0gCiAgICByZXR1cm4gcmM7Cn0KCmludApr
ZGJfdHJhcF9mYXRhbChpbnQgdmVjdG9yLCBzdHJ1Y3QgY3B1X3VzZXJfcmVncyAqcmVncykKewog
ICAga2RibWFpbl9mYXRhbChyZWdzLCB2ZWN0b3IpOwogICAgcmV0dXJuIDA7Cn0KCi8qIEZyb20g
c21wX3NlbmRfbm1pX2FsbGJ1dHNlbGYoKSBpbiBjcmFzaC5jIHdoaWNoIGlzIHN0YXRpYyAqLwp2
b2lkCmtkYl9ubWlfcGF1c2VfY3B1cyhjcHVtYXNrX3QgY3B1bWFzaykKewogICAgaW50IGNjcHUg
PSBzbXBfcHJvY2Vzc29yX2lkKCk7CiAgICBtZGVsYXkoMjAwKTsKICAgIGNwdW1hc2tfY29tcGxl
bWVudCgmY3B1bWFzaywgJmNwdW1hc2spOyAgICAgICAgICAgICAgLyogZmxpcCBiaXQgbWFwICov
CiAgICBjcHVtYXNrX2FuZCgmY3B1bWFzaywgJmNwdW1hc2ssICZjcHVfb25saW5lX21hcCk7ICAg
IC8qIHJlbW92ZSBleHRyYSBiaXRzICovCiAgICBjcHVtYXNrX2NsZWFyX2NwdShjY3B1LCAmY3B1
bWFzayk7LyogYWJzb2x1dGVseSBtYWtlIHN1cmUgd2UncmUgbm90IG9uIGl0ICovCgogICAgS0RC
R1AoImNjcHU6JWQgbm1pIHBhdXNlLiBtYXNrOjB4JWx4XG4iLCBjY3B1LCBjcHVtYXNrLmJpdHNb
MF0pOwogICAgaWYgKCAhY3B1bWFza19lbXB0eSgmY3B1bWFzaykgKQojaWYgWEVOX1NVQlZFUlNJ
T04gPiA0IHx8IFhFTl9WRVJTSU9OID09IDQgICAgICAgICAgICAgIC8qIHhlbiAzLjUueCBvciBh
Ym92ZSAqLwogICAgICAgIHNlbmRfSVBJX21hc2soJmNwdW1hc2ssIEFQSUNfRE1fTk1JKTsKI2Vs
c2UKICAgICAgICBzZW5kX0lQSV9tYXNrKGNwdW1hc2ssIEFQSUNfRE1fTk1JKTsKI2VuZGlmCiAg
ICBtZGVsYXkoMjAwKTsKICAgIEtEQkdQKCJjY3B1OiVkIG5taSBwYXVzZSBkb25lLi4uXG4iLCBj
Y3B1KTsKfQoKLyogCiAqIFNlcGFyYXRlIGZ1bmN0aW9uIGZyb20ga2RibWFpbiB0byBrZWVwIGJv
dGggd2l0aGluIHNhbml0eSBsZXZlbHMuCiAqLwpERUZJTkVfU1BJTkxPQ0soa2RiX2ZhdGFsX2xr
KTsKc3RhdGljIGludAprZGJtYWluX2ZhdGFsKHN0cnVjdCBjcHVfdXNlcl9yZWdzICpyZWdzLCBp
bnQgdmVjdG9yKQp7CiAgICBpbnQgY2NwdSA9IHNtcF9wcm9jZXNzb3JfaWQoKTsKCiAgICBjb25z
b2xlX3N0YXJ0X3N5bmMoKTsKCiAgICBLREJHUCgibWFpbmY6Y2NwdTolZCB2ZWM6JWQgaXJxOiVk
XG4iLCBjY3B1LCB2ZWN0b3IsbG9jYWxfaXJxX2lzX2VuYWJsZWQoKSk7CiAgICBjcHVtYXNrX3Nl
dF9jcHUoY2NwdSwgJmtkYl9mYXRhbF9jcHVtYXNrKTsgICAgICAgIC8qIHVzZXMgTE9DS19QUkVG
SVggKi8KCiAgICBpZiAoc3Bpbl90cnlsb2NrKCZrZGJfZmF0YWxfbGspKSB7CgogICAgICAgIGtk
YnAoIioqKiBrZGIgKEZhdGFsIEVycm9yIG9uIGNwdTolZCB2ZWM6JWQgJXMpOlxuIiwgY2NwdSwK
ICAgICAgICAgICAgIHZlY3Rvciwga2RiX2dldHRyYXBuYW1lKHZlY3RvcikpOwogICAgICAgIGtk
Yl9jcHVfY21kW2NjcHVdID0gS0RCX0NQVV9NQUlOX0tEQjsKICAgICAgICBrZGJfZGlzcGxheV9w
YyhyZWdzKTsKCiAgICAgICAgd2F0Y2hkb2dfZGlzYWJsZSgpOyAgICAgLyogaW1wb3J0YW50ICov
CiAgICAgICAga2RiX3N5c19jcmFzaCA9IDE7CiAgICAgICAga2RiX3Nlc3Npb25fYmVndW4gPSAw
OyAgLyogaW5jYXNlIHNlc3Npb24gYWxyZWFkeSBhY3RpdmUgKi8KICAgICAgICBsb2NhbF9pcnFf
ZW5hYmxlKCk7CiAgICAgICAga2RiX25taV9wYXVzZV9jcHVzKGtkYl9mYXRhbF9jcHVtYXNrKTsK
CiAgICAgICAga2RiX2NsZWFyX3ByZXZfY21kKCk7ICAgLyogYnVmZmVyZWQgQ1JzIHdpbGwgcmVw
ZWF0IHByZXYgY21kICovCiAgICAgICAga2RiX3Nlc3Npb25fYmVndW4gPSAxOyAgLyogZm9yIGtk
Yl9ob2xkX3RoaXNfY3B1KCkgKi8KICAgICAgICBsb2NhbF9pcnFfZGlzYWJsZSgpOwogICAgfSBl
bHNlIHsKICAgICAgICBrZGJfY3B1X2NtZFtjY3B1XSA9IEtEQl9DUFVfUEFVU0U7CiAgICB9CiAg
ICB3aGlsZSAoMSkgewogICAgICAgIGlmIChrZGJfY3B1X2NtZFtjY3B1XSA9PSBLREJfQ1BVX1BB
VVNFKQogICAgICAgICAgICBrZGJfaG9sZF90aGlzX2NwdShjY3B1LCByZWdzKTsKICAgICAgICBp
ZiAoa2RiX2NwdV9jbWRbY2NwdV0gPT0gS0RCX0NQVV9NQUlOX0tEQikKICAgICAgICAgICAga2Ri
X2RvX2NtZHMocmVncyk7CiNpZiAwCiAgICAgICAgLyogZHVtcCBpcyB0aGUgb25seSB3YXkgdG8g
ZXhpdCBpbiBjcmFzaGVkIHN0YXRlICovCiAgICAgICAgaWYgKGtkYl9jcHVfY21kW2NjcHVdID09
IEtEQl9DUFVfRFVNUCkKICAgICAgICAgICAga2RiX2RvX2R1bXAocmVncyk7CiNlbmRpZgogICAg
fQogICAgcmV0dXJuIDA7Cn0KCi8qIE1vc3RseSBjYWxsZWQgaW4gZmF0YWwgY2FzZXMuIGVhcmx5
a2RiIGNhbGxzIG5vbi1mYXRhbC4KICoga2RiX3RyYXBfaW1tZWRfcmVhc29uIGlzIGdsb2JhbCwg
c28gYWxsb3cgb25seSBvbmUgY3B1IGF0IGEgdGltZS4gQWxzbywKICogbXVsdGlwbGUgY3B1IG1h
eSBiZSBjcmFzaGluZyBhdCB0aGUgc2FtZSB0aW1lLiBXZSBlbmFibGUgYmVjYXVzZSBpZiB0aGVy
ZQogKiBpcyBhIGJhZCBoYW5nLCBhdCBsZWFzdCBjdHJsLVwgd2lsbCBicmVhayBpbnRvIGtkYi4g
QWxzbywgd2UgZG9uJ3QgY2FsbAogKiBjYWxsIGtkYl9rZXlib2FyZCBkaXJlY3RseSBiZWNhdWUg
d2UgZG9uJ3QgaGF2ZSB0aGUgcmVnaXN0ZXIgY29udGV4dC4KICovCkRFRklORV9TUElOTE9DSyhr
ZGJfaW1tZWRfbGspOwp2b2lkCmtkYl90cmFwX2ltbWVkKGludCByZWFzb24pICAgICAgICAgICAg
LyogZmF0YWwsIG5vbi1mYXRhbCwga2RiIHN0YWNrIGV0Yy4uLiAqLwp7CiAgICBpbnQgY2NwdSA9
IHNtcF9wcm9jZXNzb3JfaWQoKTsKICAgIGludCBkaXNhYmxlZCA9ICFsb2NhbF9pcnFfaXNfZW5h
YmxlZCgpOwoKICAgIEtEQkdQKCJ0cmFwaW1tOmNjcHU6JWQgcmVhczolZFxuIiwgY2NwdSwgcmVh
c29uKTsKICAgIGxvY2FsX2lycV9lbmFibGUoKTsKICAgIHNwaW5fbG9jaygma2RiX2ltbWVkX2xr
KTsKICAgIGtkYl90cmFwX2ltbWVkX3JlYXNvbiA9IHJlYXNvbjsKICAgIGJhcnJpZXIoKTsKICAg
IF9fYXNtX18gX192b2xhdGlsZV9fICggImludCAkMSIgKTsKICAgIGtkYl90cmFwX2ltbWVkX3Jl
YXNvbiA9IDA7CgogICAgc3Bpbl91bmxvY2soJmtkYl9pbW1lZF9sayk7CiAgICBpZiAoZGlzYWJs
ZWQpCiAgICAgICAgbG9jYWxfaXJxX2Rpc2FibGUoKTsKfQoKLyogY2FsbGVkIHZlcnkgZWFybHkg
ZHVyaW5nIGluaXQsIGV2ZW4gYmVmb3JlIGFsbCBDUFVzIGFyZSBicm91Z2h0IG9ubGluZSAqLwp2
b2lkIAprZGJfaW5pdCh2b2lkKQp7CiAgICAgICAga2RiX2luaXRfY21kdGFiKCk7ICAgICAgLyog
SW5pdGlhbGl6ZSBDb21tYW5kIFRhYmxlICovCn0KCnN0YXRpYyBjb25zdCBjaGFyICoKa2RiX2dl
dHRyYXBuYW1lKGludCB0cmFwbm8pCnsKICAgIGNoYXIgKnJldDsKICAgIHN3aXRjaCAodHJhcG5v
KSB7CiAgICAgICAgY2FzZSAgMDogIHJldCA9ICJEaXZpZGUgRXJyb3IiOyBicmVhazsKICAgICAg
ICBjYXNlICAyOiAgcmV0ID0gIk5NSSBJbnRlcnJ1cHQiOyBicmVhazsKICAgICAgICBjYXNlICAz
OiAgcmV0ID0gIkludCAzIFRyYXAiOyBicmVhazsKICAgICAgICBjYXNlICA0OiAgcmV0ID0gIk92
ZXJmbG93IEVycm9yIjsgYnJlYWs7CiAgICAgICAgY2FzZSAgNjogIHJldCA9ICJJbnZhbGlkIE9w
Y29kZSI7IGJyZWFrOwogICAgICAgIGNhc2UgIDg6ICByZXQgPSAiRG91YmxlIEZhdWx0IjsgYnJl
YWs7CiAgICAgICAgY2FzZSAxMDogIHJldCA9ICJJbnZhbGlkIFRTUyI7IGJyZWFrOwogICAgICAg
IGNhc2UgMTE6ICByZXQgPSAiU2VnbWVudCBOb3QgUHJlc2VudCI7IGJyZWFrOwogICAgICAgIGNh
c2UgMTI6ICByZXQgPSAiU3RhY2stU2VnbWVudCBGYXVsdCI7IGJyZWFrOwogICAgICAgIGNhc2Ug
MTM6ICByZXQgPSAiR2VuZXJhbCBQcm90ZWN0aW9uIjsgYnJlYWs7CiAgICAgICAgY2FzZSAxNDog
IHJldCA9ICJQYWdlIEZhdWx0IjsgYnJlYWs7CiAgICAgICAgY2FzZSAxNzogIHJldCA9ICJBbGln
bm1lbnQgQ2hlY2siOyBicmVhazsKICAgICAgICBkZWZhdWx0OiByZXQgPSAiID8/Pz8/ICI7CiAg
ICB9CiAgICByZXR1cm4gcmV0Owp9CgoKLyogPT09PT09PT09PT09PT09PT09PT09PSBHZW5lcmlj
IHRyYWNpbmcgc3Vic3lzdGVtID09PT09PT09PT09PT09PT09PT09PT09PSAqLwoKI2RlZmluZSBL
REJUUkNNQVggMSAgICAgICAvKiBzZXQgdGhpcyB0byBtYXggbnVtYmVyIG9mIHJlY3MgdG8gdHJh
Y2UuIGVhY2ggcmVjIAogICAgICAgICAgICAgICAgICAgICAgICAgICAqIGlzIDMyIGJ5dGVzICov
CnZvbGF0aWxlIGludCBrZGJfdHJjb249MTsgLyogdHVybiB0cmFjaW5nIE9OOiBzZXQgaGVyZSBv
ciB2aWEgdGhlIHRyY29uIGNtZCAqLwoKdHlwZWRlZiBzdHJ1Y3QgewogICAgdW5pb24gewogICAg
ICAgIHN0cnVjdCB7IHVpbnQgZDA7IHVpbnQgY3B1X3RyY2lkOyB9IHMwOwogICAgICAgIHVpbnQ2
NF90IGwwOwogICAgfXU7CiAgICB1aW50NjRfdCBsMSwgbDIsIGwzOyAKfSB0cmNfcmVjX3Q7Cgpz
dGF0aWMgdm9sYXRpbGUgdW5zaWduZWQgaW50IHRyY2lkeDsgICAgLyogcG9pbnRzIHRvIHdoZXJl
IG5ldyBlbnRyeSB3aWxsIGdvICovCnN0YXRpYyB0cmNfcmVjX3QgdHJjYVtLREJUUkNNQVhdOyAg
ICAgICAvKiB0cmFjZSBhcnJheSAqLwoKLyogYXRvbWljYWxseTogYWRkIGkgdG8gKnAsIHJldHVy
biBwcmV2IHZhbHVlIG9mICpwIChpZSwgdmFsIGJlZm9yZSBhZGQpICovCnN0YXRpYyBpbnQKa2Ri
X2ZldGNoX2FuZF9hZGQoaW50IGksIHVpbnQgKnApCnsKICAgIGFzbSB2b2xhdGlsZSgibG9jayB4
YWRkbCAlMCwgJTE7IiA6ICI9ciIoaSkgOiAibSIoKnApLCAiMCIoaSkpOwogICAgcmV0dXJuIGk7
Cn0KCi8qIHplcm8gb3V0IHRoZSBlbnRpcmUgYnVmZmVyICovCnZvaWQgCmtkYl90cmN6ZXJvKHZv
aWQpCnsKICAgIGZvciAodHJjaWR4ID0gS0RCVFJDTUFYLTE7IHRyY2lkeDsgdHJjaWR4LS0pIHsK
ICAgICAgICBtZW1zZXQoJnRyY2FbdHJjaWR4XSwgMCwgc2l6ZW9mKHRyY19yZWNfdCkpOwogICAg
fQogICAgbWVtc2V0KCZ0cmNhW3RyY2lkeF0sIDAsIHNpemVvZih0cmNfcmVjX3QpKTsKICAgIGtk
YnAoImtkYiB0cmFjZSBidWZmZXIgaGFzIGJlZW4gemVyb2VkXG4iKTsKfQoKLyogYWRkIHRyYWNl
IGVudHJ5OiBlZy46IGtkYnRyYygweGUwZjA5OSwgaW50ZGF0YSwgdmNwdSwgZG9tYWluLCAwKQog
KiAgICB3aGVyZTogIDB4ZTBmMDk5IDogMjRiaXRzIG1heCB0cmNpZCwgbG93ZXIgOCBiaXRzIGFy
ZSBzZXQgdG8gY3B1aWQgKi8Kdm9pZAprZGJ0cmModWludCB0cmNpZCwgdWludCBpbnRfZDAsIHVp
bnQ2NF90IGQxXzY0LCB1aW50NjRfdCBkMl82NCwgdWludDY0X3QgZDNfNjQpCnsKICAgIHVpbnQg
aWR4OwoKICAgIGlmICgha2RiX3RyY29uKQogICAgICAgIHJldHVybjsKCiAgICBpZHggPSBrZGJf
ZmV0Y2hfYW5kX2FkZCgxLCAodWludCopJnRyY2lkeCk7CiAgICBpZHggPSBpZHggJSBLREJUUkNN
QVg7CgojaWYgMAogICAgdHJjYVtpZHhdLnUuczAuY3B1X3RyY2lkID0gKHNtcF9wcm9jZXNzb3Jf
aWQoKTw8MjQpIHwgdHJjaWQ7CiNlbmRpZgogICAgdHJjYVtpZHhdLnUuczAuY3B1X3RyY2lkID0g
KHRyY2lkPDw4KSB8IHNtcF9wcm9jZXNzb3JfaWQoKTsKICAgIHRyY2FbaWR4XS51LnMwLmQwID0g
aW50X2QwOwogICAgdHJjYVtpZHhdLmwxID0gZDFfNjQ7CiAgICB0cmNhW2lkeF0ubDIgPSBkMl82
NDsKICAgIHRyY2FbaWR4XS5sMyA9IGQzXzY0Owp9CgovKiBnaXZlIGhpbnRzIHNvIHVzZXIgY2Fu
IHByaW50IHRyYyBidWZmZXIgdmlhIHRoZSBkZCBjb21tYW5kLiBsYXN0IGhhcyB0aGUKICogbW9z
dCByZWNlbnQgZW50cnkgKi8Kdm9pZAprZGJfdHJjcCh2b2lkKQp7CiAgICBpbnQgaSA9IHRyY2lk
eCAlIEtEQlRSQ01BWDsKCiAgICBpID0gKGk9PTApID8gS0RCVFJDTUFYLTEgOiBpLTE7CiAgICBr
ZGJwKCJ0cmNidWY6ICAgIFswXTogJTAxNmx4IFtNQVgtMV06ICUwMTZseFxuIiwgJnRyY2FbMF0s
CiAgICAgICAgICZ0cmNhW0tEQlRSQ01BWC0xXSk7CiAgICBrZGJwKCIgW21vc3QgcmVjZW50XTog
JTAxNmx4ICAgdHJjaWR4OiAweCV4XG4iLCAmdHJjYVtpXSwgdHJjaWR4KTsKfQoKAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAABrZGIveDg2LwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAMDAwMDc3NQAw
MDAyNzU2ADAwMDI3NTYAMDAwMDAwMDAwMDAAMTIwMTc3MjQ2MjQAMDEyMjAyACA1AAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHVzdGFyICAAbXJhdGhvcgAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAABtcmF0aG9yAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAGtkYi94ODYvdWRpczg2LTEuNy8AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAwMDAwNzc1ADAw
MDI3NTYAMDAwMjc1NgAwMDAwMDAwMDAwMAAxMjAxNzcyNDYyNAAwMTM2MjcAIDUAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAdXN0YXIgIABtcmF0aG9yAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAG1yYXRob3IAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAa2RiL3g4Ni91ZGlzODYtMS43L2lucHV0LmgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADAwMDA2NjQAMDAw
Mjc1NgAwMDAyNzU2ADAwMDAwMDAyNDAwADExNzY1NDY1NTU2ADAxNTE1MgAgMAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAB1c3RhciAgAG1yYXRob3IAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAbXJhdGhvcgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAvKiAtLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLQogKiBpbnB1dC5oCiAqCiAqIENvcHlyaWdodCAoYykg
MjAwNiwgVml2ZWsgTW9oYW4gPHZpdmVrQHNpZzkuY29tPgogKiBBbGwgcmlnaHRzIHJlc2VydmVk
LiBTZWUgTElDRU5TRQogKiAtLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLQogKi8KI2lmbmRlZiBVRF9JTlBV
VF9ICiNkZWZpbmUgVURfSU5QVVRfSAoKI2luY2x1ZGUgInR5cGVzLmgiCgp1aW50OF90IGlucF9u
ZXh0KHN0cnVjdCB1ZCopOwp1aW50OF90IGlucF9wZWVrKHN0cnVjdCB1ZCopOwp1aW50OF90IGlu
cF91aW50OChzdHJ1Y3QgdWQqKTsKdWludDE2X3QgaW5wX3VpbnQxNihzdHJ1Y3QgdWQqKTsKdWlu
dDMyX3QgaW5wX3VpbnQzMihzdHJ1Y3QgdWQqKTsKdWludDY0X3QgaW5wX3VpbnQ2NChzdHJ1Y3Qg
dWQqKTsKdm9pZCBpbnBfbW92ZShzdHJ1Y3QgdWQqLCBzaXplX3QpOwp2b2lkIGlucF9iYWNrKHN0
cnVjdCB1ZCopOwoKLyogaW5wX2luaXQoKSAtIEluaXRpYWxpemVzIHRoZSBpbnB1dCBzeXN0ZW0u
ICovCiNkZWZpbmUgaW5wX2luaXQodSkgXApkbyB7IFwKICB1LT5pbnBfY3VyciA9IDA7IFwKICB1
LT5pbnBfZmlsbCA9IDA7IFwKICB1LT5pbnBfY3RyICA9IDA7IFwKICB1LT5pbnBfZW5kICA9IDA7
IFwKfSB3aGlsZSAoMCkKCi8qIGlucF9zdGFydCgpIC0gU2hvdWxkIGJlIGNhbGxlZCBiZWZvcmUg
ZWFjaCBkZS1jb2RlIG9wZXJhdGlvbi4gKi8KI2RlZmluZSBpbnBfc3RhcnQodSkgdS0+aW5wX2N0
ciA9IDAKCi8qIGlucF9iYWNrKCkgLSBSZXNldHMgdGhlIGN1cnJlbnQgcG9pbnRlciB0byBpdHMg
cG9zaXRpb24gYmVmb3JlIHRoZSBjdXJyZW50CiAqIGluc3RydWN0aW9uIGRpc2Fzc2VtYmx5IHdh
cyBzdGFydGVkLgogKi8KI2RlZmluZSBpbnBfcmVzZXQodSkgXApkbyB7IFwKICB1LT5pbnBfY3Vy
ciAtPSB1LT5pbnBfY3RyOyBcCiAgdS0+aW5wX2N0ciA9IDA7IFwKfSB3aGlsZSAoMCkKCi8qIGlu
cF9zZXNzKCkgLSBSZXR1cm5zIHRoZSBwb2ludGVyIHRvIGN1cnJlbnQgc2Vzc2lvbi4gKi8KI2Rl
ZmluZSBpbnBfc2Vzcyh1KSAodS0+aW5wX3Nlc3MpCgovKiBpbnBfY3VyKCkgLSBSZXR1cm5zIHRo
ZSBjdXJyZW50IGlucHV0IGJ5dGUuICovCiNkZWZpbmUgaW5wX2N1cnIodSkgKCh1KS0+aW5wX2Nh
Y2hlWyh1KS0+aW5wX2N1cnJdKQoKI2VuZGlmCgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABr
ZGIveDg2L3VkaXM4Ni0xLjcvc3luLWludGVsLmMAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAMDAwMDY2NAAwMDAyNzU2
ADAwMDI3NTYAMDAwMDAwMTEzNzYAMTE3NjU0NjU1NTYAMDE1NzQ0ACAwAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHVzdGFyICAAbXJhdGhvcgAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAABtcmF0aG9yAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAC8q
IC0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tCiAqIHN5bi1pbnRlbC5jCiAqCiAqIENvcHlyaWdodCAoYykg
MjAwMiwgMjAwMywgMjAwNCBWaXZlayBNb2hhbiA8dml2ZWtAc2lnOS5jb20+CiAqIEFsbCByaWdo
dHMgcmVzZXJ2ZWQuIFNlZSAoTElDRU5TRSkKICogLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0KICovCgoj
aW5jbHVkZSAidHlwZXMuaCIKI2luY2x1ZGUgImV4dGVybi5oIgojaW5jbHVkZSAiZGVjb2RlLmgi
CiNpbmNsdWRlICJpdGFiLmgiCiNpbmNsdWRlICJzeW4uaCIKCi8qIC0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tCiAqIG9wcl9jYXN0KCkgLSBQcmludHMgYW4gb3BlcmFuZCBjYXN0LgogKiAtLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLQogKi8Kc3RhdGljIHZvaWQgCm9wcl9jYXN0KHN0cnVjdCB1ZCogdSwgc3RydWN0
IHVkX29wZXJhbmQqIG9wKQp7CiAgc3dpdGNoKG9wLT5zaXplKSB7CgljYXNlICA4OiBta2FzbSh1
LCAiYnl0ZSAiICk7IGJyZWFrOwoJY2FzZSAxNjogbWthc20odSwgIndvcmQgIiApOyBicmVhazsK
CWNhc2UgMzI6IG1rYXNtKHUsICJkd29yZCAiKTsgYnJlYWs7CgljYXNlIDY0OiBta2FzbSh1LCAi
cXdvcmQgIik7IGJyZWFrOwoJY2FzZSA4MDogbWthc20odSwgInR3b3JkICIpOyBicmVhazsKCWRl
ZmF1bHQ6IGJyZWFrOwogIH0KICBpZiAodS0+YnJfZmFyKQoJbWthc20odSwgImZhciAiKTsgCiAg
ZWxzZSBpZiAodS0+YnJfbmVhcikKCW1rYXNtKHUsICJuZWFyICIpOwp9CgovKiAtLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLQogKiBnZW5fb3BlcmFuZCgpIC0gR2VuZXJhdGVzIGFzc2VtYmx5IG91dHB1dCBm
b3IgZWFjaCBvcGVyYW5kLgogKiAtLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLQogKi8Kc3RhdGljIHZvaWQg
Z2VuX29wZXJhbmQoc3RydWN0IHVkKiB1LCBzdHJ1Y3QgdWRfb3BlcmFuZCogb3AsIGludCBzeW5f
Y2FzdCkKewogIHN3aXRjaChvcC0+dHlwZSkgewoJY2FzZSBVRF9PUF9SRUc6CgkJbWthc20odSwg
dWRfcmVnX3RhYltvcC0+YmFzZSAtIFVEX1JfQUxdKTsKCQlicmVhazsKCgljYXNlIFVEX09QX01F
TTogewoKCQlpbnQgb3BfZiA9IDA7CgoJCWlmIChzeW5fY2FzdCkgCgkJCW9wcl9jYXN0KHUsIG9w
KTsKCgkJbWthc20odSwgIlsiKTsKCgkJaWYgKHUtPnBmeF9zZWcpCgkJCW1rYXNtKHUsICIlczoi
LCB1ZF9yZWdfdGFiW3UtPnBmeF9zZWcgLSBVRF9SX0FMXSk7CgoJCWlmIChvcC0+YmFzZSkgewoJ
CQlta2FzbSh1LCAiJXMiLCB1ZF9yZWdfdGFiW29wLT5iYXNlIC0gVURfUl9BTF0pOwoJCQlvcF9m
ID0gMTsKCQl9CgoJCWlmIChvcC0+aW5kZXgpIHsKCQkJaWYgKG9wX2YpCgkJCQlta2FzbSh1LCAi
KyIpOwoJCQlta2FzbSh1LCAiJXMiLCB1ZF9yZWdfdGFiW29wLT5pbmRleCAtIFVEX1JfQUxdKTsK
CQkJb3BfZiA9IDE7CgkJfQoKCQlpZiAob3AtPnNjYWxlKQoJCQlta2FzbSh1LCAiKiVkIiwgb3At
PnNjYWxlKTsKCgkJaWYgKG9wLT5vZmZzZXQgPT0gOCkgewoJCQlpZiAob3AtPmx2YWwuc2J5dGUg
PCAwKQoJCQkJbWthc20odSwgIi0weCV4IiwgLW9wLT5sdmFsLnNieXRlKTsKCQkJZWxzZQlta2Fz
bSh1LCAiJXMweCV4IiwgKG9wX2YpID8gIisiIDogIiIsIG9wLT5sdmFsLnNieXRlKTsKCQl9CgkJ
ZWxzZSBpZiAob3AtPm9mZnNldCA9PSAxNikKCQkJbWthc20odSwgIiVzMHgleCIsIChvcF9mKSA/
ICIrIiA6ICIiLCBvcC0+bHZhbC51d29yZCk7CgkJZWxzZSBpZiAob3AtPm9mZnNldCA9PSAzMikg
ewoJCQlpZiAodS0+YWRyX21vZGUgPT0gNjQpIHsKCQkJCWlmIChvcC0+bHZhbC5zZHdvcmQgPCAw
KQoJCQkJCW1rYXNtKHUsICItMHgleCIsIC1vcC0+bHZhbC5zZHdvcmQpOwoJCQkJZWxzZQlta2Fz
bSh1LCAiJXMweCV4IiwgKG9wX2YpID8gIisiIDogIiIsIG9wLT5sdmFsLnNkd29yZCk7CgkJCX0g
CgkJCWVsc2UJbWthc20odSwgIiVzMHglbHgiLCAob3BfZikgPyAiKyIgOiAiIiwgb3AtPmx2YWwu
dWR3b3JkKTsKCQl9CgkJZWxzZSBpZiAob3AtPm9mZnNldCA9PSA2NCkgCgkJCW1rYXNtKHUsICIl
czB4IiBGTVQ2NCAieCIsIChvcF9mKSA/ICIrIiA6ICIiLCBvcC0+bHZhbC51cXdvcmQpOwoKCQlt
a2FzbSh1LCAiXSIpOwoJCWJyZWFrOwoJfQoJCQkKCWNhc2UgVURfT1BfSU1NOgoJCWlmIChzeW5f
Y2FzdCkgb3ByX2Nhc3QodSwgb3ApOwoJCXN3aXRjaCAob3AtPnNpemUpIHsKCQkJY2FzZSAgODog
bWthc20odSwgIjB4JXgiLCBvcC0+bHZhbC51Ynl0ZSk7ICAgIGJyZWFrOwoJCQljYXNlIDE2OiBt
a2FzbSh1LCAiMHgleCIsIG9wLT5sdmFsLnV3b3JkKTsgICAgYnJlYWs7CgkJCWNhc2UgMzI6IG1r
YXNtKHUsICIweCVseCIsIG9wLT5sdmFsLnVkd29yZCk7ICBicmVhazsKCQkJY2FzZSA2NDogbWth
c20odSwgIjB4IiBGTVQ2NCAieCIsIG9wLT5sdmFsLnVxd29yZCk7IGJyZWFrOwoJCQlkZWZhdWx0
OiBicmVhazsKCQl9CgkJYnJlYWs7CgoJY2FzZSBVRF9PUF9KSU1NOgoJCWlmIChzeW5fY2FzdCkg
b3ByX2Nhc3QodSwgb3ApOwoJCXN3aXRjaCAob3AtPnNpemUpIHsKCQkJY2FzZSAgODoKCQkJCW1r
YXNtKHUsICIweCIgRk1UNjQgIngiLCB1LT5wYyArIG9wLT5sdmFsLnNieXRlKTsgCgkJCQlicmVh
azsKCQkJY2FzZSAxNjoKCQkJCW1rYXNtKHUsICIweCIgRk1UNjQgIngiLCB1LT5wYyArIG9wLT5s
dmFsLnN3b3JkKTsKCQkJCWJyZWFrOwoJCQljYXNlIDMyOgoJCQkJbWthc20odSwgIjB4IiBGTVQ2
NCAieCIsIHUtPnBjICsgb3AtPmx2YWwuc2R3b3JkKTsKCQkJCWJyZWFrOwoJCQlkZWZhdWx0OmJy
ZWFrOwoJCX0KCQlicmVhazsKCgljYXNlIFVEX09QX1BUUjoKCQlzd2l0Y2ggKG9wLT5zaXplKSB7
CgkJCWNhc2UgMzI6CgkJCQlta2FzbSh1LCAid29yZCAweCV4OjB4JXgiLCBvcC0+bHZhbC5wdHIu
c2VnLCAKCQkJCQlvcC0+bHZhbC5wdHIub2ZmICYgMHhGRkZGKTsKCQkJCWJyZWFrOwoJCQljYXNl
IDQ4OgoJCQkJbWthc20odSwgImR3b3JkIDB4JXg6MHglbHgiLCBvcC0+bHZhbC5wdHIuc2VnLCAK
CQkJCQlvcC0+bHZhbC5wdHIub2ZmKTsKCQkJCWJyZWFrOwoJCX0KCQlicmVhazsKCgljYXNlIFVE
X09QX0NPTlNUOgoJCWlmIChzeW5fY2FzdCkgb3ByX2Nhc3QodSwgb3ApOwoJCW1rYXNtKHUsICIl
ZCIsIG9wLT5sdmFsLnVkd29yZCk7CgkJYnJlYWs7CgoJZGVmYXVsdDogcmV0dXJuOwogIH0KfQoK
LyogPT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT0KICogdHJhbnNsYXRlcyB0byBpbnRlbCBzeW50YXggCiAq
ID09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09CiAqLwpleHRlcm4gdm9pZCB1ZF90cmFuc2xhdGVfaW50ZWwo
c3RydWN0IHVkKiB1KQp7CiAgLyogLS0gcHJlZml4ZXMgLS0gKi8KCiAgLyogY2hlY2sgaWYgUF9P
U08gcHJlZml4IGlzIHVzZWQgKi8KICBpZiAoISBQX09TTyh1LT5pdGFiX2VudHJ5LT5wcmVmaXgp
ICYmIHUtPnBmeF9vcHIpIHsKCXN3aXRjaCAodS0+ZGlzX21vZGUpIHsKCQljYXNlIDE2OiAKCQkJ
bWthc20odSwgIm8zMiAiKTsKCQkJYnJlYWs7CgkJY2FzZSAzMjoKCQljYXNlIDY0OgogCQkJbWth
c20odSwgIm8xNiAiKTsKCQkJYnJlYWs7Cgl9CiAgfQoKICAvKiBjaGVjayBpZiBQX0FTTyBwcmVm
aXggd2FzIHVzZWQgKi8KICBpZiAoISBQX0FTTyh1LT5pdGFiX2VudHJ5LT5wcmVmaXgpICYmIHUt
PnBmeF9hZHIpIHsKCXN3aXRjaCAodS0+ZGlzX21vZGUpIHsKCQljYXNlIDE2OiAKCQkJbWthc20o
dSwgImEzMiAiKTsKCQkJYnJlYWs7CgkJY2FzZSAzMjoKIAkJCW1rYXNtKHUsICJhMTYgIik7CgkJ
CWJyZWFrOwoJCWNhc2UgNjQ6CiAJCQlta2FzbSh1LCAiYTMyICIpOwoJCQlicmVhazsKCX0KICB9
CgogIGlmICh1LT5wZnhfbG9jaykKCW1rYXNtKHUsICJsb2NrICIpOwogIGlmICh1LT5wZnhfcmVw
KQoJbWthc20odSwgInJlcCAiKTsKICBpZiAodS0+cGZ4X3JlcG5lKQoJbWthc20odSwgInJlcG5l
ICIpOwogIGlmICh1LT5pbXBsaWNpdF9hZGRyICYmIHUtPnBmeF9zZWcpCglta2FzbSh1LCAiJXMg
IiwgdWRfcmVnX3RhYlt1LT5wZnhfc2VnIC0gVURfUl9BTF0pOwoKICAvKiBwcmludCB0aGUgaW5z
dHJ1Y3Rpb24gbW5lbW9uaWMgKi8KICBta2FzbSh1LCAiJXMgIiwgdWRfbG9va3VwX21uZW1vbmlj
KHUtPm1uZW1vbmljKSk7CgogIC8qIG9wZXJhbmQgMSAqLwogIGlmICh1LT5vcGVyYW5kWzBdLnR5
cGUgIT0gVURfTk9ORSkgewoJZ2VuX29wZXJhbmQodSwgJnUtPm9wZXJhbmRbMF0sIHUtPmMxKTsK
ICB9CiAgLyogb3BlcmFuZCAyICovCiAgaWYgKHUtPm9wZXJhbmRbMV0udHlwZSAhPSBVRF9OT05F
KSB7Cglta2FzbSh1LCAiLCAiKTsKCWdlbl9vcGVyYW5kKHUsICZ1LT5vcGVyYW5kWzFdLCB1LT5j
Mik7CiAgfQoKICAvKiBvcGVyYW5kIDMgKi8KICBpZiAodS0+b3BlcmFuZFsyXS50eXBlICE9IFVE
X05PTkUpIHsKCW1rYXNtKHUsICIsICIpOwoJZ2VuX29wZXJhbmQodSwgJnUtPm9wZXJhbmRbMl0s
IHUtPmMzKTsKICB9Cn0KAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAa2RiL3g4Ni91ZGlz
ODYtMS43L2l0YWIuYwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADAwMDA2NjQAMDAwMjc1NgAwMDAyNzU2ADAw
MDAwNjQ3MzU2ADExNzY1NDY1NTU2ADAxNDc1NgAgMAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAB1c3RhciAgAG1yYXRob3IAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
bXJhdGhvcgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAKLyogaXRhYi5jIC0t
IGF1dG8gZ2VuZXJhdGVkIGJ5IG9wZ2VuLnB5LCBkbyBub3QgZWRpdC4gKi8KCiNpbmNsdWRlICJ0
eXBlcy5oIgojaW5jbHVkZSAiZGVjb2RlLmgiCiNpbmNsdWRlICJpdGFiLmgiCgpjb25zdCBjaGFy
ICogdWRfbW5lbW9uaWNzX3N0cltdID0gewogICIzZG5vdyIsCiAgImFhYSIsCiAgImFhZCIsCiAg
ImFhbSIsCiAgImFhcyIsCiAgImFkYyIsCiAgImFkZCIsCiAgImFkZHBkIiwKICAiYWRkcHMiLAog
ICJhZGRzZCIsCiAgImFkZHNzIiwKICAiYWRkc3VicGQiLAogICJhZGRzdWJwcyIsCiAgImFuZCIs
CiAgImFuZHBkIiwKICAiYW5kcHMiLAogICJhbmRucGQiLAogICJhbmRucHMiLAogICJhcnBsIiwK
ICAibW92c3hkIiwKICAiYm91bmQiLAogICJic2YiLAogICJic3IiLAogICJic3dhcCIsCiAgImJ0
IiwKICAiYnRjIiwKICAiYnRyIiwKICAiYnRzIiwKICAiY2FsbCIsCiAgImNidyIsCiAgImN3ZGUi
LAogICJjZHFlIiwKICAiY2xjIiwKICAiY2xkIiwKICAiY2xmbHVzaCIsCiAgImNsZ2kiLAogICJj
bGkiLAogICJjbHRzIiwKICAiY21jIiwKICAiY21vdm8iLAogICJjbW92bm8iLAogICJjbW92YiIs
CiAgImNtb3ZhZSIsCiAgImNtb3Z6IiwKICAiY21vdm56IiwKICAiY21vdmJlIiwKICAiY21vdmEi
LAogICJjbW92cyIsCiAgImNtb3ZucyIsCiAgImNtb3ZwIiwKICAiY21vdm5wIiwKICAiY21vdmwi
LAogICJjbW92Z2UiLAogICJjbW92bGUiLAogICJjbW92ZyIsCiAgImNtcCIsCiAgImNtcHBkIiwK
ICAiY21wcHMiLAogICJjbXBzYiIsCiAgImNtcHN3IiwKICAiY21wc2QiLAogICJjbXBzcSIsCiAg
ImNtcHNzIiwKICAiY21weGNoZyIsCiAgImNtcHhjaGc4YiIsCiAgImNvbWlzZCIsCiAgImNvbWlz
cyIsCiAgImNwdWlkIiwKICAiY3Z0ZHEycGQiLAogICJjdnRkcTJwcyIsCiAgImN2dHBkMmRxIiwK
ICAiY3Z0cGQycGkiLAogICJjdnRwZDJwcyIsCiAgImN2dHBpMnBzIiwKICAiY3Z0cGkycGQiLAog
ICJjdnRwczJkcSIsCiAgImN2dHBzMnBpIiwKICAiY3Z0cHMycGQiLAogICJjdnRzZDJzaSIsCiAg
ImN2dHNkMnNzIiwKICAiY3Z0c2kyc3MiLAogICJjdnRzczJzaSIsCiAgImN2dHNzMnNkIiwKICAi
Y3Z0dHBkMnBpIiwKICAiY3Z0dHBkMmRxIiwKICAiY3Z0dHBzMmRxIiwKICAiY3Z0dHBzMnBpIiwK
ICAiY3Z0dHNkMnNpIiwKICAiY3Z0c2kyc2QiLAogICJjdnR0c3Myc2kiLAogICJjd2QiLAogICJj
ZHEiLAogICJjcW8iLAogICJkYWEiLAogICJkYXMiLAogICJkZWMiLAogICJkaXYiLAogICJkaXZw
ZCIsCiAgImRpdnBzIiwKICAiZGl2c2QiLAogICJkaXZzcyIsCiAgImVtbXMiLAogICJlbnRlciIs
CiAgImYyeG0xIiwKICAiZmFicyIsCiAgImZhZGQiLAogICJmYWRkcCIsCiAgImZibGQiLAogICJm
YnN0cCIsCiAgImZjaHMiLAogICJmY2xleCIsCiAgImZjbW92YiIsCiAgImZjbW92ZSIsCiAgImZj
bW92YmUiLAogICJmY21vdnUiLAogICJmY21vdm5iIiwKICAiZmNtb3ZuZSIsCiAgImZjbW92bmJl
IiwKICAiZmNtb3ZudSIsCiAgImZ1Y29taSIsCiAgImZjb20iLAogICJmY29tMiIsCiAgImZjb21w
MyIsCiAgImZjb21pIiwKICAiZnVjb21pcCIsCiAgImZjb21pcCIsCiAgImZjb21wIiwKICAiZmNv
bXA1IiwKICAiZmNvbXBwIiwKICAiZmNvcyIsCiAgImZkZWNzdHAiLAogICJmZGl2IiwKICAiZmRp
dnAiLAogICJmZGl2ciIsCiAgImZkaXZycCIsCiAgImZlbW1zIiwKICAiZmZyZWUiLAogICJmZnJl
ZXAiLAogICJmaWNvbSIsCiAgImZpY29tcCIsCiAgImZpbGQiLAogICJmbmNzdHAiLAogICJmbmlu
aXQiLAogICJmaWFkZCIsCiAgImZpZGl2ciIsCiAgImZpZGl2IiwKICAiZmlzdWIiLAogICJmaXN1
YnIiLAogICJmaXN0IiwKICAiZmlzdHAiLAogICJmaXN0dHAiLAogICJmbGQiLAogICJmbGQxIiwK
ICAiZmxkbDJ0IiwKICAiZmxkbDJlIiwKICAiZmxkbHBpIiwKICAiZmxkbGcyIiwKICAiZmxkbG4y
IiwKICAiZmxkeiIsCiAgImZsZGN3IiwKICAiZmxkZW52IiwKICAiZm11bCIsCiAgImZtdWxwIiwK
ICAiZmltdWwiLAogICJmbm9wIiwKICAiZnBhdGFuIiwKICAiZnByZW0iLAogICJmcHJlbTEiLAog
ICJmcHRhbiIsCiAgImZybmRpbnQiLAogICJmcnN0b3IiLAogICJmbnNhdmUiLAogICJmc2NhbGUi
LAogICJmc2luIiwKICAiZnNpbmNvcyIsCiAgImZzcXJ0IiwKICAiZnN0cCIsCiAgImZzdHAxIiwK
ICAiZnN0cDgiLAogICJmc3RwOSIsCiAgImZzdCIsCiAgImZuc3RjdyIsCiAgImZuc3RlbnYiLAog
ICJmbnN0c3ciLAogICJmc3ViIiwKICAiZnN1YnAiLAogICJmc3ViciIsCiAgImZzdWJycCIsCiAg
ImZ0c3QiLAogICJmdWNvbSIsCiAgImZ1Y29tcCIsCiAgImZ1Y29tcHAiLAogICJmeGFtIiwKICAi
ZnhjaCIsCiAgImZ4Y2g0IiwKICAiZnhjaDciLAogICJmeHJzdG9yIiwKICAiZnhzYXZlIiwKICAi
ZnB4dHJhY3QiLAogICJmeWwyeCIsCiAgImZ5bDJ4cDEiLAogICJoYWRkcGQiLAogICJoYWRkcHMi
LAogICJobHQiLAogICJoc3VicGQiLAogICJoc3VicHMiLAogICJpZGl2IiwKICAiaW4iLAogICJp
bXVsIiwKICAiaW5jIiwKICAiaW5zYiIsCiAgImluc3ciLAogICJpbnNkIiwKICAiaW50MSIsCiAg
ImludDMiLAogICJpbnQiLAogICJpbnRvIiwKICAiaW52ZCIsCiAgImludmxwZyIsCiAgImludmxw
Z2EiLAogICJpcmV0dyIsCiAgImlyZXRkIiwKICAiaXJldHEiLAogICJqbyIsCiAgImpubyIsCiAg
ImpiIiwKICAiamFlIiwKICAianoiLAogICJqbnoiLAogICJqYmUiLAogICJqYSIsCiAgImpzIiwK
ICAiam5zIiwKICAianAiLAogICJqbnAiLAogICJqbCIsCiAgImpnZSIsCiAgImpsZSIsCiAgImpn
IiwKICAiamN4eiIsCiAgImplY3h6IiwKICAianJjeHoiLAogICJqbXAiLAogICJsYWhmIiwKICAi
bGFyIiwKICAibGRkcXUiLAogICJsZG14Y3NyIiwKICAibGRzIiwKICAibGVhIiwKICAibGVzIiwK
ICAibGZzIiwKICAibGdzIiwKICAibGlkdCIsCiAgImxzcyIsCiAgImxlYXZlIiwKICAibGZlbmNl
IiwKICAibGdkdCIsCiAgImxsZHQiLAogICJsbXN3IiwKICAibG9jayIsCiAgImxvZHNiIiwKICAi
bG9kc3ciLAogICJsb2RzZCIsCiAgImxvZHNxIiwKICAibG9vcG56IiwKICAibG9vcGUiLAogICJs
b29wIiwKICAibHNsIiwKICAibHRyIiwKICAibWFza21vdnEiLAogICJtYXhwZCIsCiAgIm1heHBz
IiwKICAibWF4c2QiLAogICJtYXhzcyIsCiAgIm1mZW5jZSIsCiAgIm1pbnBkIiwKICAibWlucHMi
LAogICJtaW5zZCIsCiAgIm1pbnNzIiwKICAibW9uaXRvciIsCiAgIm1vdiIsCiAgIm1vdmFwZCIs
CiAgIm1vdmFwcyIsCiAgIm1vdmQiLAogICJtb3ZkZHVwIiwKICAibW92ZHFhIiwKICAibW92ZHF1
IiwKICAibW92ZHEycSIsCiAgIm1vdmhwZCIsCiAgIm1vdmhwcyIsCiAgIm1vdmxocHMiLAogICJt
b3ZscGQiLAogICJtb3ZscHMiLAogICJtb3ZobHBzIiwKICAibW92bXNrcGQiLAogICJtb3Ztc2tw
cyIsCiAgIm1vdm50ZHEiLAogICJtb3ZudGkiLAogICJtb3ZudHBkIiwKICAibW92bnRwcyIsCiAg
Im1vdm50cSIsCiAgIm1vdnEiLAogICJtb3ZxYSIsCiAgIm1vdnEyZHEiLAogICJtb3ZzYiIsCiAg
Im1vdnN3IiwKICAibW92c2QiLAogICJtb3ZzcSIsCiAgIm1vdnNsZHVwIiwKICAibW92c2hkdXAi
LAogICJtb3ZzcyIsCiAgIm1vdnN4IiwKICAibW92dXBkIiwKICAibW92dXBzIiwKICAibW92engi
LAogICJtdWwiLAogICJtdWxwZCIsCiAgIm11bHBzIiwKICAibXVsc2QiLAogICJtdWxzcyIsCiAg
Im13YWl0IiwKICAibmVnIiwKICAibm9wIiwKICAibm90IiwKICAib3IiLAogICJvcnBkIiwKICAi
b3JwcyIsCiAgIm91dCIsCiAgIm91dHNiIiwKICAib3V0c3ciLAogICJvdXRzZCIsCiAgIm91dHNx
IiwKICAicGFja3Nzd2IiLAogICJwYWNrc3NkdyIsCiAgInBhY2t1c3diIiwKICAicGFkZGIiLAog
ICJwYWRkdyIsCiAgInBhZGRxIiwKICAicGFkZHNiIiwKICAicGFkZHN3IiwKICAicGFkZHVzYiIs
CiAgInBhZGR1c3ciLAogICJwYW5kIiwKICAicGFuZG4iLAogICJwYXVzZSIsCiAgInBhdmdiIiwK
ICAicGF2Z3ciLAogICJwY21wZXFiIiwKICAicGNtcGVxdyIsCiAgInBjbXBlcWQiLAogICJwY21w
Z3RiIiwKICAicGNtcGd0dyIsCiAgInBjbXBndGQiLAogICJwZXh0cnciLAogICJwaW5zcnciLAog
ICJwbWFkZHdkIiwKICAicG1heHN3IiwKICAicG1heHViIiwKICAicG1pbnN3IiwKICAicG1pbnVi
IiwKICAicG1vdm1za2IiLAogICJwbXVsaHV3IiwKICAicG11bGh3IiwKICAicG11bGx3IiwKICAi
cG11bHVkcSIsCiAgInBvcCIsCiAgInBvcGEiLAogICJwb3BhZCIsCiAgInBvcGZ3IiwKICAicG9w
ZmQiLAogICJwb3BmcSIsCiAgInBvciIsCiAgInByZWZldGNoIiwKICAicHJlZmV0Y2hudGEiLAog
ICJwcmVmZXRjaHQwIiwKICAicHJlZmV0Y2h0MSIsCiAgInByZWZldGNodDIiLAogICJwc2FkYnci
LAogICJwc2h1ZmQiLAogICJwc2h1Zmh3IiwKICAicHNodWZsdyIsCiAgInBzaHVmdyIsCiAgInBz
bGxkcSIsCiAgInBzbGx3IiwKICAicHNsbGQiLAogICJwc2xscSIsCiAgInBzcmF3IiwKICAicHNy
YWQiLAogICJwc3JsdyIsCiAgInBzcmxkIiwKICAicHNybHEiLAogICJwc3JsZHEiLAogICJwc3Vi
YiIsCiAgInBzdWJ3IiwKICAicHN1YmQiLAogICJwc3VicSIsCiAgInBzdWJzYiIsCiAgInBzdWJz
dyIsCiAgInBzdWJ1c2IiLAogICJwc3VidXN3IiwKICAicHVucGNraGJ3IiwKICAicHVucGNraHdk
IiwKICAicHVucGNraGRxIiwKICAicHVucGNraHFkcSIsCiAgInB1bnBja2xidyIsCiAgInB1bnBj
a2x3ZCIsCiAgInB1bnBja2xkcSIsCiAgInB1bnBja2xxZHEiLAogICJwaTJmdyIsCiAgInBpMmZk
IiwKICAicGYyaXciLAogICJwZjJpZCIsCiAgInBmbmFjYyIsCiAgInBmcG5hY2MiLAogICJwZmNt
cGdlIiwKICAicGZtaW4iLAogICJwZnJjcCIsCiAgInBmcnNxcnQiLAogICJwZnN1YiIsCiAgInBm
YWRkIiwKICAicGZjbXBndCIsCiAgInBmbWF4IiwKICAicGZyY3BpdDEiLAogICJwZnJzcGl0MSIs
CiAgInBmc3ViciIsCiAgInBmYWNjIiwKICAicGZjbXBlcSIsCiAgInBmbXVsIiwKICAicGZyY3Bp
dDIiLAogICJwbXVsaHJ3IiwKICAicHN3YXBkIiwKICAicGF2Z3VzYiIsCiAgInB1c2giLAogICJw
dXNoYSIsCiAgInB1c2hhZCIsCiAgInB1c2hmdyIsCiAgInB1c2hmZCIsCiAgInB1c2hmcSIsCiAg
InB4b3IiLAogICJyY2wiLAogICJyY3IiLAogICJyb2wiLAogICJyb3IiLAogICJyY3BwcyIsCiAg
InJjcHNzIiwKICAicmRtc3IiLAogICJyZHBtYyIsCiAgInJkdHNjIiwKICAicmR0c2NwIiwKICAi
cmVwbmUiLAogICJyZXAiLAogICJyZXQiLAogICJyZXRmIiwKICAicnNtIiwKICAicnNxcnRwcyIs
CiAgInJzcXJ0c3MiLAogICJzYWhmIiwKICAic2FsIiwKICAic2FsYyIsCiAgInNhciIsCiAgInNo
bCIsCiAgInNociIsCiAgInNiYiIsCiAgInNjYXNiIiwKICAic2Nhc3ciLAogICJzY2FzZCIsCiAg
InNjYXNxIiwKICAic2V0byIsCiAgInNldG5vIiwKICAic2V0YiIsCiAgInNldG5iIiwKICAic2V0
eiIsCiAgInNldG56IiwKICAic2V0YmUiLAogICJzZXRhIiwKICAic2V0cyIsCiAgInNldG5zIiwK
ICAic2V0cCIsCiAgInNldG5wIiwKICAic2V0bCIsCiAgInNldGdlIiwKICAic2V0bGUiLAogICJz
ZXRnIiwKICAic2ZlbmNlIiwKICAic2dkdCIsCiAgInNobGQiLAogICJzaHJkIiwKICAic2h1ZnBk
IiwKICAic2h1ZnBzIiwKICAic2lkdCIsCiAgInNsZHQiLAogICJzbXN3IiwKICAic3FydHBzIiwK
ICAic3FydHBkIiwKICAic3FydHNkIiwKICAic3FydHNzIiwKICAic3RjIiwKICAic3RkIiwKICAi
c3RnaSIsCiAgInN0aSIsCiAgInNraW5pdCIsCiAgInN0bXhjc3IiLAogICJzdG9zYiIsCiAgInN0
b3N3IiwKICAic3Rvc2QiLAogICJzdG9zcSIsCiAgInN0ciIsCiAgInN1YiIsCiAgInN1YnBkIiwK
ICAic3VicHMiLAogICJzdWJzZCIsCiAgInN1YnNzIiwKICAic3dhcGdzIiwKICAic3lzY2FsbCIs
CiAgInN5c2VudGVyIiwKICAic3lzZXhpdCIsCiAgInN5c3JldCIsCiAgInRlc3QiLAogICJ1Y29t
aXNkIiwKICAidWNvbWlzcyIsCiAgInVkMiIsCiAgInVucGNraHBkIiwKICAidW5wY2tocHMiLAog
ICJ1bnBja2xwcyIsCiAgInVucGNrbHBkIiwKICAidmVyciIsCiAgInZlcnciLAogICJ2bWNhbGwi
LAogICJ2bWNsZWFyIiwKICAidm14b24iLAogICJ2bXB0cmxkIiwKICAidm1wdHJzdCIsCiAgInZt
cmVzdW1lIiwKICAidm14b2ZmIiwKICAidm1ydW4iLAogICJ2bW1jYWxsIiwKICAidm1sb2FkIiwK
ICAidm1zYXZlIiwKICAid2FpdCIsCiAgIndiaW52ZCIsCiAgIndybXNyIiwKICAieGFkZCIsCiAg
InhjaGciLAogICJ4bGF0YiIsCiAgInhvciIsCiAgInhvcnBkIiwKICAieG9ycHMiLAogICJkYiIs
CiAgImludmFsaWQiLAp9OwoKCgpzdGF0aWMgc3RydWN0IHVkX2l0YWJfZW50cnkgaXRhYl9fMGZb
MjU2XSA9IHsKICAvKiAwMCAqLyAgeyBVRF9JZ3JwX3JlZywgICAgIE9fTk9ORSwgT19OT05FLCBP
X05PTkUsICAgIElUQUJfXzBGX19PUF8wMF9fUkVHIH0sCiAgLyogMDEgKi8gIHsgVURfSWdycF9y
ZWcsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBJVEFCX18wRl9fT1BfMDFfX1JFRyB9
LAogIC8qIDAyICovICB7IFVEX0lsYXIsICAgICAgICAgT19HdiwgICAgT19FdywgICAgT19OT05F
LCAgUF9hc298UF9vc298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDMgKi8g
IHsgVURfSWxzbCwgICAgICAgICBPX0d2LCAgICBPX0V3LCAgICBPX05PTkUsICBQX2Fzb3xQX29z
b3xQX3JleHd8UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAwNCAqLyAgeyBVRF9JaW52YWxp
ZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDA1ICovICB7
IFVEX0lzeXNjYWxsLCAgICAgT19OT05FLCAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAg
LyogMDYgKi8gIHsgVURfSWNsdHMsICAgICAgICBPX05PTkUsICBPX05PTkUsICBPX05PTkUsICBQ
X25vbmUgfSwKICAvKiAwNyAqLyAgeyBVRF9Jc3lzcmV0LCAgICAgIE9fTk9ORSwgIE9fTk9ORSwg
IE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDA4ICovICB7IFVEX0lpbnZkLCAgICAgICAgT19OT05F
LCAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMDkgKi8gIHsgVURfSXdiaW52ZCwg
ICAgICBPX05PTkUsICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAwQSAqLyAgeyBV
RF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8q
IDBCICovICB7IFVEX0l1ZDIsICAgICAgICAgT19OT05FLCAgT19OT05FLCAgT19OT05FLCAgUF9u
b25lIH0sCiAgLyogMEMgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19O
T05FLCAgICBQX25vbmUgfSwKICAvKiAwRCAqLyAgeyBVRF9JZ3JwX3JlZywgICAgIE9fTk9ORSwg
T19OT05FLCBPX05PTkUsICAgIElUQUJfXzBGX19PUF8wRF9fUkVHIH0sCiAgLyogMEUgKi8gIHsg
VURfSWZlbW1zLCAgICAgICBPX05PTkUsICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAv
KiAwRiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBf
bm9uZSB9LAogIC8qIDEwICovICB7IFVEX0ltb3Z1cHMsICAgICAgT19WLCAgICAgT19XLCAgICAg
T19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAxMSAqLyAgeyBVRF9J
bW92dXBzLCAgICAgIE9fVywgICAgIE9fViwgICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3Jl
eHh8UF9yZXhiIH0sCiAgLyogMTIgKi8gIHsgVURfSW1vdmxwcywgICAgICBPX1YsICAgICBPX1cs
ICAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDEzICovICB7
IFVEX0ltb3ZscHMsICAgICAgT19NLCAgICAgT19WLCAgICAgT19OT05FLCAgUF9hc298UF9yZXhy
fFBfcmV4eHxQX3JleGIgfSwKICAvKiAxNCAqLyAgeyBVRF9JdW5wY2tscHMsICAgIE9fViwgICAg
IE9fVywgICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMTUg
Ki8gIHsgVURfSXVucGNraHBzLCAgICBPX1YsICAgICBPX1csICAgICBPX05PTkUsICBQX2Fzb3xQ
X3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDE2ICovICB7IFVEX0ltb3ZocHMsICAgICAgT19W
LCAgICAgT19XLCAgICAgT19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAv
KiAxNyAqLyAgeyBVRF9JbW92aHBzLCAgICAgIE9fTSwgICAgIE9fViwgICAgIE9fTk9ORSwgIFBf
YXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMTggKi8gIHsgVURfSWdycF9yZWcsICAg
ICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBJVEFCX18wRl9fT1BfMThfX1JFRyB9LAogIC8q
IDE5ICovICB7IFVEX0lub3AsICAgICAgICAgT19NLCAgICAgT19OT05FLCAgT19OT05FLCAgUF9h
c298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAxQSAqLyAgeyBVRF9Jbm9wLCAgICAgICAg
IE9fTSwgICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0s
CiAgLyogMUIgKi8gIHsgVURfSW5vcCwgICAgICAgICBPX00sICAgICBPX05PTkUsICBPX05PTkUs
ICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDFDICovICB7IFVEX0lub3AsICAg
ICAgICAgT19NLCAgICAgT19OT05FLCAgT19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3Jl
eGIgfSwKICAvKiAxRCAqLyAgeyBVRF9Jbm9wLCAgICAgICAgIE9fTSwgICAgIE9fTk9ORSwgIE9f
Tk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMUUgKi8gIHsgVURfSW5v
cCwgICAgICAgICBPX00sICAgICBPX05PTkUsICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4
fFBfcmV4YiB9LAogIC8qIDFGICovICB7IFVEX0lub3AsICAgICAgICAgT19NLCAgICAgT19OT05F
LCAgT19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAyMCAqLyAgeyBV
RF9JbW92LCAgICAgICAgIE9fUiwgICAgIE9fQywgICAgIE9fTk9ORSwgIFBfcmV4ciB9LAogIC8q
IDIxICovICB7IFVEX0ltb3YsICAgICAgICAgT19SLCAgICAgT19ELCAgICAgT19OT05FLCAgUF9y
ZXhyIH0sCiAgLyogMjIgKi8gIHsgVURfSW1vdiwgICAgICAgICBPX0MsICAgICBPX1IsICAgICBP
X05PTkUsICBQX3JleHIgfSwKICAvKiAyMyAqLyAgeyBVRF9JbW92LCAgICAgICAgIE9fRCwgICAg
IE9fUiwgICAgIE9fTk9ORSwgIFBfcmV4ciB9LAogIC8qIDI0ICovICB7IFVEX0lpbnZhbGlkLCAg
ICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMjUgKi8gIHsgVURf
SWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAy
NiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9u
ZSB9LAogIC8qIDI3ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9O
RSwgICAgUF9ub25lIH0sCiAgLyogMjggKi8gIHsgVURfSW1vdmFwcywgICAgICBPX1YsICAgICBP
X1csICAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDI5ICov
ICB7IFVEX0ltb3ZhcHMsICAgICAgT19XLCAgICAgT19WLCAgICAgT19OT05FLCAgUF9hc298UF9y
ZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAyQSAqLyAgeyBVRF9JY3Z0cGkycHMsICAgIE9fViwg
ICAgIE9fUSwgICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyog
MkIgKi8gIHsgVURfSW1vdm50cHMsICAgICBPX00sICAgICBPX1YsICAgICBPX05PTkUsICBQX2Fz
b3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDJDICovICB7IFVEX0ljdnR0cHMycGksICAg
T19QLCAgICAgT19XLCAgICAgT19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwK
ICAvKiAyRCAqLyAgeyBVRF9JY3Z0cHMycGksICAgIE9fUCwgICAgIE9fVywgICAgIE9fTk9ORSwg
IFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMkUgKi8gIHsgVURfSXVjb21pc3Ms
ICAgICBPX1YsICAgICBPX1csICAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4
YiB9LAogIC8qIDJGICovICB7IFVEX0ljb21pc3MsICAgICAgT19WLCAgICAgT19XLCAgICAgT19O
T05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAzMCAqLyAgeyBVRF9Jd3Jt
c3IsICAgICAgIE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDMxICov
ICB7IFVEX0lyZHRzYywgICAgICAgT19OT05FLCAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0s
CiAgLyogMzIgKi8gIHsgVURfSXJkbXNyLCAgICAgICBPX05PTkUsICBPX05PTkUsICBPX05PTkUs
ICBQX25vbmUgfSwKICAvKiAzMyAqLyAgeyBVRF9JcmRwbWMsICAgICAgIE9fTk9ORSwgIE9fTk9O
RSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDM0ICovICB7IFVEX0lzeXNlbnRlciwgICAgT19O
T05FLCAgT19OT05FLCAgT19OT05FLCAgUF9pbnY2NHxQX25vbmUgfSwKICAvKiAzNSAqLyAgeyBV
RF9Jc3lzZXhpdCwgICAgIE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8q
IDM2ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9u
b25lIH0sCiAgLyogMzcgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19O
T05FLCAgICBQX25vbmUgfSwKICAvKiAzOCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwg
T19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDM5ICovICB7IFVEX0lpbnZhbGlkLCAg
ICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogM0EgKi8gIHsgVURf
SWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAz
QiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9u
ZSB9LAogIC8qIDNDICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9O
RSwgICAgUF9ub25lIH0sCiAgLyogM0QgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9f
Tk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAzRSAqLyAgeyBVRF9JaW52YWxpZCwgICAg
IE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDNGICovICB7IFVEX0lp
bnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogNDAg
Ki8gIHsgVURfSWNtb3ZvLCAgICAgICBPX0d2LCAgICBPX0V2LCAgICBPX05PTkUsICBQX2Fzb3xQ
X29zb3xQX3JleHd8UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiA0MSAqLyAgeyBVRF9JY21v
dm5vLCAgICAgIE9fR3YsICAgIE9fRXYsICAgIE9fTk9ORSwgIFBfYXNvfFBfb3NvfFBfcmV4d3xQ
X3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDQyICovICB7IFVEX0ljbW92YiwgICAgICAgT19H
diwgICAgT19FdiwgICAgT19OT05FLCAgUF9hc298UF9vc298UF9yZXh3fFBfcmV4cnxQX3JleHh8
UF9yZXhiIH0sCiAgLyogNDMgKi8gIHsgVURfSWNtb3ZhZSwgICAgICBPX0d2LCAgICBPX0V2LCAg
ICBPX05PTkUsICBQX2Fzb3xQX29zb3xQX3JleHd8UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAv
KiA0NCAqLyAgeyBVRF9JY21vdnosICAgICAgIE9fR3YsICAgIE9fRXYsICAgIE9fTk9ORSwgIFBf
YXNvfFBfb3NvfFBfcmV4d3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDQ1ICovICB7IFVE
X0ljbW92bnosICAgICAgT19HdiwgICAgT19FdiwgICAgT19OT05FLCAgUF9hc298UF9vc298UF9y
ZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogNDYgKi8gIHsgVURfSWNtb3ZiZSwgICAg
ICBPX0d2LCAgICBPX0V2LCAgICBPX05PTkUsICBQX2Fzb3xQX29zb3xQX3JleHd8UF9yZXhyfFBf
cmV4eHxQX3JleGIgfSwKICAvKiA0NyAqLyAgeyBVRF9JY21vdmEsICAgICAgIE9fR3YsICAgIE9f
RXYsICAgIE9fTk9ORSwgIFBfYXNvfFBfb3NvfFBfcmV4d3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9
LAogIC8qIDQ4ICovICB7IFVEX0ljbW92cywgICAgICAgT19HdiwgICAgT19FdiwgICAgT19OT05F
LCAgUF9hc298UF9vc298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogNDkgKi8g
IHsgVURfSWNtb3ZucywgICAgICBPX0d2LCAgICBPX0V2LCAgICBPX05PTkUsICBQX2Fzb3xQX29z
b3xQX3JleHd8UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiA0QSAqLyAgeyBVRF9JY21vdnAs
ICAgICAgIE9fR3YsICAgIE9fRXYsICAgIE9fTk9ORSwgIFBfYXNvfFBfb3NvfFBfcmV4d3xQX3Jl
eHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDRCICovICB7IFVEX0ljbW92bnAsICAgICAgT19Hdiwg
ICAgT19FdiwgICAgT19OT05FLCAgUF9hc298UF9vc298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9y
ZXhiIH0sCiAgLyogNEMgKi8gIHsgVURfSWNtb3ZsLCAgICAgICBPX0d2LCAgICBPX0V2LCAgICBP
X05PTkUsICBQX2Fzb3xQX29zb3xQX3JleHd8UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiA0
RCAqLyAgeyBVRF9JY21vdmdlLCAgICAgIE9fR3YsICAgIE9fRXYsICAgIE9fTk9ORSwgIFBfYXNv
fFBfb3NvfFBfcmV4d3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDRFICovICB7IFVEX0lj
bW92bGUsICAgICAgT19HdiwgICAgT19FdiwgICAgT19OT05FLCAgUF9hc298UF9vc298UF9yZXh3
fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogNEYgKi8gIHsgVURfSWNtb3ZnLCAgICAgICBP
X0d2LCAgICBPX0V2LCAgICBPX05PTkUsICBQX2Fzb3xQX29zb3xQX3JleHd8UF9yZXhyfFBfcmV4
eHxQX3JleGIgfSwKICAvKiA1MCAqLyAgeyBVRF9JbW92bXNrcHMsICAgIE9fR2QsICAgIE9fVlIs
ICAgIE9fTk9ORSwgIFBfb3NvfFBfcmV4cnxQX3JleGIgfSwKICAvKiA1MSAqLyAgeyBVRF9Jc3Fy
dHBzLCAgICAgIE9fViwgICAgIE9fVywgICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8
UF9yZXhiIH0sCiAgLyogNTIgKi8gIHsgVURfSXJzcXJ0cHMsICAgICBPX1YsICAgICBPX1csICAg
ICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDUzICovICB7IFVE
X0lyY3BwcywgICAgICAgT19WLCAgICAgT19XLCAgICAgT19OT05FLCAgUF9hc298UF9yZXhyfFBf
cmV4eHxQX3JleGIgfSwKICAvKiA1NCAqLyAgeyBVRF9JYW5kcHMsICAgICAgIE9fViwgICAgIE9f
VywgICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogNTUgKi8g
IHsgVURfSWFuZG5wcywgICAgICBPX1YsICAgICBPX1csICAgICBPX05PTkUsICBQX2Fzb3xQX3Jl
eHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDU2ICovICB7IFVEX0lvcnBzLCAgICAgICAgT19WLCAg
ICAgT19XLCAgICAgT19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiA1
NyAqLyAgeyBVRF9JeG9ycHMsICAgICAgIE9fViwgICAgIE9fVywgICAgIE9fTk9ORSwgIFBfYXNv
fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogNTggKi8gIHsgVURfSWFkZHBzLCAgICAgICBP
X1YsICAgICBPX1csICAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAog
IC8qIDU5ICovICB7IFVEX0ltdWxwcywgICAgICAgT19WLCAgICAgT19XLCAgICAgT19OT05FLCAg
UF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiA1QSAqLyAgeyBVRF9JY3Z0cHMycGQs
ICAgIE9fViwgICAgIE9fVywgICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhi
IH0sCiAgLyogNUIgKi8gIHsgVURfSWN2dGRxMnBzLCAgICBPX1YsICAgICBPX1csICAgICBPX05P
TkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDVDICovICB7IFVEX0lzdWJw
cywgICAgICAgT19WLCAgICAgT19XLCAgICAgT19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQ
X3JleGIgfSwKICAvKiA1RCAqLyAgeyBVRF9JbWlucHMsICAgICAgIE9fViwgICAgIE9fVywgICAg
IE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogNUUgKi8gIHsgVURf
SWRpdnBzLCAgICAgICBPX1YsICAgICBPX1csICAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9y
ZXh4fFBfcmV4YiB9LAogIC8qIDVGICovICB7IFVEX0ltYXhwcywgICAgICAgT19WLCAgICAgT19X
LCAgICAgT19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiA2MCAqLyAg
eyBVRF9JcHVucGNrbGJ3LCAgIE9fUCwgICAgIE9fUSwgICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4
cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogNjEgKi8gIHsgVURfSXB1bnBja2x3ZCwgICBPX1AsICAg
ICBPX1EsICAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDYy
ICovICB7IFVEX0lwdW5wY2tsZHEsICAgT19QLCAgICAgT19RLCAgICAgT19OT05FLCAgUF9hc298
UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiA2MyAqLyAgeyBVRF9JcGFja3Nzd2IsICAgIE9f
UCwgICAgIE9fUSwgICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAg
LyogNjQgKi8gIHsgVURfSXBjbXBndGIsICAgICBPX1AsICAgICBPX1EsICAgICBPX05PTkUsICBQ
X2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDY1ICovICB7IFVEX0lwY21wZ3R3LCAg
ICAgT19QLCAgICAgT19RLCAgICAgT19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIg
fSwKICAvKiA2NiAqLyAgeyBVRF9JcGNtcGd0ZCwgICAgIE9fUCwgICAgIE9fUSwgICAgIE9fTk9O
RSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogNjcgKi8gIHsgVURfSXBhY2t1
c3diLCAgICBPX1AsICAgICBPX1EsICAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBf
cmV4YiB9LAogIC8qIDY4ICovICB7IFVEX0lwdW5wY2toYncsICAgT19QLCAgICAgT19RLCAgICAg
T19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiA2OSAqLyAgeyBVRF9J
cHVucGNraHdkLCAgIE9fUCwgICAgIE9fUSwgICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3Jl
eHh8UF9yZXhiIH0sCiAgLyogNkEgKi8gIHsgVURfSXB1bnBja2hkcSwgICBPX1AsICAgICBPX1Es
ICAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDZCICovICB7
IFVEX0lwYWNrc3NkdywgICAgT19QLCAgICAgT19RLCAgICAgT19OT05FLCAgUF9hc298UF9yZXhy
fFBfcmV4eHxQX3JleGIgfSwKICAvKiA2QyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwg
T19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDZEICovICB7IFVEX0lpbnZhbGlkLCAg
ICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogNkUgKi8gIHsgVURf
SW1vdmQsICAgICAgICBPX1AsICAgICBPX0V4LCAgICBPX05PTkUsICBQX2MyfFBfYXNvfFBfcmV4
cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogNkYgKi8gIHsgVURfSW1vdnEsICAgICAgICBPX1AsICAg
ICBPX1EsICAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDcw
ICovICB7IFVEX0lwc2h1ZncsICAgICAgT19QLCAgICAgT19RLCAgICAgT19JYiwgICAgUF9hc298
UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiA3MSAqLyAgeyBVRF9JZ3JwX3JlZywgICAgIE9f
Tk9ORSwgT19OT05FLCBPX05PTkUsICAgIElUQUJfXzBGX19PUF83MV9fUkVHIH0sCiAgLyogNzIg
Ki8gIHsgVURfSWdycF9yZWcsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBJVEFCX18w
Rl9fT1BfNzJfX1JFRyB9LAogIC8qIDczICovICB7IFVEX0lncnBfcmVnLCAgICAgT19OT05FLCBP
X05PTkUsIE9fTk9ORSwgICAgSVRBQl9fMEZfX09QXzczX19SRUcgfSwKICAvKiA3NCAqLyAgeyBV
RF9JcGNtcGVxYiwgICAgIE9fUCwgICAgIE9fUSwgICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQ
X3JleHh8UF9yZXhiIH0sCiAgLyogNzUgKi8gIHsgVURfSXBjbXBlcXcsICAgICBPX1AsICAgICBP
X1EsICAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDc2ICov
ICB7IFVEX0lwY21wZXFkLCAgICAgT19QLCAgICAgT19RLCAgICAgT19OT05FLCAgUF9hc298UF9y
ZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiA3NyAqLyAgeyBVRF9JZW1tcywgICAgICAgIE9fTk9O
RSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDc4ICovICB7IFVEX0lpbnZhbGlk
LCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogNzkgKi8gIHsg
VURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAv
KiA3QSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBf
bm9uZSB9LAogIC8qIDdCICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9f
Tk9ORSwgICAgUF9ub25lIH0sCiAgLyogN0MgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUs
IE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA3RCAqLyAgeyBVRF9JaW52YWxpZCwg
ICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDdFICovICB7IFVE
X0ltb3ZkLCAgICAgICAgT19FeCwgICAgT19QLCAgICAgT19OT05FLCAgUF9jMXxQX2Fzb3xQX3Jl
eHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDdGICovICB7IFVEX0ltb3ZxLCAgICAgICAgT19RLCAg
ICAgT19QLCAgICAgT19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiA4
MCAqLyAgeyBVRF9Jam8sICAgICAgICAgIE9fSnosICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfYzF8
UF9kZWY2NHxQX2RlcE18UF9vc28gfSwKICAvKiA4MSAqLyAgeyBVRF9Jam5vLCAgICAgICAgIE9f
SnosICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfYzF8UF9kZWY2NHxQX2RlcE18UF9vc28gfSwKICAv
KiA4MiAqLyAgeyBVRF9JamIsICAgICAgICAgIE9fSnosICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBf
YzF8UF9kZWY2NHxQX2RlcE18UF9vc28gfSwKICAvKiA4MyAqLyAgeyBVRF9JamFlLCAgICAgICAg
IE9fSnosICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfYzF8UF9kZWY2NHxQX2RlcE18UF9vc28gfSwK
ICAvKiA4NCAqLyAgeyBVRF9JanosICAgICAgICAgIE9fSnosICAgIE9fTk9ORSwgIE9fTk9ORSwg
IFBfYzF8UF9kZWY2NHxQX2RlcE18UF9vc28gfSwKICAvKiA4NSAqLyAgeyBVRF9Jam56LCAgICAg
ICAgIE9fSnosICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfYzF8UF9kZWY2NHxQX2RlcE18UF9vc28g
fSwKICAvKiA4NiAqLyAgeyBVRF9JamJlLCAgICAgICAgIE9fSnosICAgIE9fTk9ORSwgIE9fTk9O
RSwgIFBfYzF8UF9kZWY2NHxQX2RlcE18UF9vc28gfSwKICAvKiA4NyAqLyAgeyBVRF9JamEsICAg
ICAgICAgIE9fSnosICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfYzF8UF9kZWY2NHxQX2RlcE18UF9v
c28gfSwKICAvKiA4OCAqLyAgeyBVRF9JanMsICAgICAgICAgIE9fSnosICAgIE9fTk9ORSwgIE9f
Tk9ORSwgIFBfYzF8UF9kZWY2NHxQX2RlcE18UF9vc28gfSwKICAvKiA4OSAqLyAgeyBVRF9Jam5z
LCAgICAgICAgIE9fSnosICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfYzF8UF9kZWY2NHxQX2RlcE18
UF9vc28gfSwKICAvKiA4QSAqLyAgeyBVRF9JanAsICAgICAgICAgIE9fSnosICAgIE9fTk9ORSwg
IE9fTk9ORSwgIFBfYzF8UF9kZWY2NHxQX2RlcE18UF9vc28gfSwKICAvKiA4QiAqLyAgeyBVRF9J
am5wLCAgICAgICAgIE9fSnosICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfYzF8UF9kZWY2NHxQX2Rl
cE18UF9vc28gfSwKICAvKiA4QyAqLyAgeyBVRF9JamwsICAgICAgICAgIE9fSnosICAgIE9fTk9O
RSwgIE9fTk9ORSwgIFBfYzF8UF9kZWY2NHxQX2RlcE18UF9vc28gfSwKICAvKiA4RCAqLyAgeyBV
RF9JamdlLCAgICAgICAgIE9fSnosICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfYzF8UF9kZWY2NHxQ
X2RlcE18UF9vc28gfSwKICAvKiA4RSAqLyAgeyBVRF9JamxlLCAgICAgICAgIE9fSnosICAgIE9f
Tk9ORSwgIE9fTk9ORSwgIFBfYzF8UF9kZWY2NHxQX2RlcE18UF9vc28gfSwKICAvKiA4RiAqLyAg
eyBVRF9JamcsICAgICAgICAgIE9fSnosICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfYzF8UF9kZWY2
NHxQX2RlcE18UF9vc28gfSwKICAvKiA5MCAqLyAgeyBVRF9Jc2V0bywgICAgICAgIE9fRWIsICAg
IE9fTk9ORSwgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogOTEg
Ki8gIHsgVURfSXNldG5vLCAgICAgICBPX0ViLCAgICBPX05PTkUsICBPX05PTkUsICBQX2Fzb3xQ
X3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDkyICovICB7IFVEX0lzZXRiLCAgICAgICAgT19F
YiwgICAgT19OT05FLCAgT19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAv
KiA5MyAqLyAgeyBVRF9Jc2V0bmIsICAgICAgIE9fRWIsICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBf
YXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogOTQgKi8gIHsgVURfSXNldHosICAgICAg
ICBPX0ViLCAgICBPX05PTkUsICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9
LAogIC8qIDk1ICovICB7IFVEX0lzZXRueiwgICAgICAgT19FYiwgICAgT19OT05FLCAgT19OT05F
LCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiA5NiAqLyAgeyBVRF9Jc2V0YmUs
ICAgICAgIE9fRWIsICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9y
ZXhiIH0sCiAgLyogOTcgKi8gIHsgVURfSXNldGEsICAgICAgICBPX0ViLCAgICBPX05PTkUsICBP
X05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDk4ICovICB7IFVEX0lz
ZXRzLCAgICAgICAgT19FYiwgICAgT19OT05FLCAgT19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4
eHxQX3JleGIgfSwKICAvKiA5OSAqLyAgeyBVRF9Jc2V0bnMsICAgICAgIE9fRWIsICAgIE9fTk9O
RSwgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogOUEgKi8gIHsg
VURfSXNldHAsICAgICAgICBPX0ViLCAgICBPX05PTkUsICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8
UF9yZXh4fFBfcmV4YiB9LAogIC8qIDlCICovICB7IFVEX0lzZXRucCwgICAgICAgT19FYiwgICAg
T19OT05FLCAgT19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiA5QyAq
LyAgeyBVRF9Jc2V0bCwgICAgICAgIE9fRWIsICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfYXNvfFBf
cmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogOUQgKi8gIHsgVURfSXNldGdlLCAgICAgICBPX0Vi
LCAgICBPX05PTkUsICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8q
IDlFICovICB7IFVEX0lzZXRsZSwgICAgICAgT19FYiwgICAgT19OT05FLCAgT19OT05FLCAgUF9h
c298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiA5RiAqLyAgeyBVRF9Jc2V0ZywgICAgICAg
IE9fRWIsICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0s
CiAgLyogQTAgKi8gIHsgVURfSXB1c2gsICAgICAgICBPX0ZTLCAgICBPX05PTkUsICBPX05PTkUs
ICBQX25vbmUgfSwKICAvKiBBMSAqLyAgeyBVRF9JcG9wLCAgICAgICAgIE9fRlMsICAgIE9fTk9O
RSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIEEyICovICB7IFVEX0ljcHVpZCwgICAgICAgT19O
T05FLCAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogQTMgKi8gIHsgVURfSWJ0LCAg
ICAgICAgICBPX0V2LCAgICBPX0d2LCAgICBPX05PTkUsICBQX2Fzb3xQX29zb3xQX3JleHd8UF9y
ZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiBBNCAqLyAgeyBVRF9Jc2hsZCwgICAgICAgIE9fRXYs
ICAgIE9fR3YsICAgIE9fSWIsICAgIFBfYXNvfFBfb3NvfFBfcmV4d3xQX3JleHJ8UF9yZXh4fFBf
cmV4YiB9LAogIC8qIEE1ICovICB7IFVEX0lzaGxkLCAgICAgICAgT19FdiwgICAgT19HdiwgICAg
T19DTCwgICAgUF9hc298UF9vc298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyog
QTYgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25v
bmUgfSwKICAvKiBBNyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05P
TkUsICAgIFBfbm9uZSB9LAogIC8qIEE4ICovICB7IFVEX0lwdXNoLCAgICAgICAgT19HUywgICAg
T19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogQTkgKi8gIHsgVURfSXBvcCwgICAgICAg
ICBPX0dTLCAgICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiBBQSAqLyAgeyBVRF9J
cnNtLCAgICAgICAgIE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIEFC
ICovICB7IFVEX0lidHMsICAgICAgICAgT19FdiwgICAgT19HdiwgICAgT19OT05FLCAgUF9hc298
UF9vc298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogQUMgKi8gIHsgVURfSXNo
cmQsICAgICAgICBPX0V2LCAgICBPX0d2LCAgICBPX0liLCAgICBQX2Fzb3xQX29zb3xQX3JleHd8
UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiBBRCAqLyAgeyBVRF9Jc2hyZCwgICAgICAgIE9f
RXYsICAgIE9fR3YsICAgIE9fQ0wsICAgIFBfYXNvfFBfb3NvfFBfcmV4d3xQX3JleHJ8UF9yZXh4
fFBfcmV4YiB9LAogIC8qIEFFICovICB7IFVEX0lncnBfcmVnLCAgICAgT19OT05FLCBPX05PTkUs
IE9fTk9ORSwgICAgSVRBQl9fMEZfX09QX0FFX19SRUcgfSwKICAvKiBBRiAqLyAgeyBVRF9JaW11
bCwgICAgICAgIE9fR3YsICAgIE9fRXYsICAgIE9fTk9ORSwgIFBfYXNvfFBfb3NvfFBfcmV4d3xQ
X3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIEIwICovICB7IFVEX0ljbXB4Y2hnLCAgICAgT19F
YiwgICAgT19HYiwgICAgT19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAv
KiBCMSAqLyAgeyBVRF9JY21weGNoZywgICAgIE9fRXYsICAgIE9fR3YsICAgIE9fTk9ORSwgIFBf
YXNvfFBfb3NvfFBfcmV4d3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIEIyICovICB7IFVE
X0lsc3MsICAgICAgICAgT19HeiwgICAgT19NLCAgICAgT19OT05FLCAgUF9hc298UF9vc298UF9y
ZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogQjMgKi8gIHsgVURfSWJ0ciwgICAgICAg
ICBPX0V2LCAgICBPX0d2LCAgICBPX05PTkUsICBQX2Fzb3xQX29zb3xQX3JleHd8UF9yZXhyfFBf
cmV4eHxQX3JleGIgfSwKICAvKiBCNCAqLyAgeyBVRF9JbGZzLCAgICAgICAgIE9fR3osICAgIE9f
TSwgICAgIE9fTk9ORSwgIFBfYXNvfFBfb3NvfFBfcmV4d3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9
LAogIC8qIEI1ICovICB7IFVEX0lsZ3MsICAgICAgICAgT19HeiwgICAgT19NLCAgICAgT19OT05F
LCAgUF9hc298UF9vc298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogQjYgKi8g
IHsgVURfSW1vdnp4LCAgICAgICBPX0d2LCAgICBPX0ViLCAgICBPX05PTkUsICBQX2MyfFBfYXNv
fFBfb3NvfFBfcmV4d3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIEI3ICovICB7IFVEX0lt
b3Z6eCwgICAgICAgT19HdiwgICAgT19FdywgICAgT19OT05FLCAgUF9jMnxQX2Fzb3xQX29zb3xQ
X3JleHd8UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiBCOCAqLyAgeyBVRF9JaW52YWxpZCwg
ICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEI5ICovICB7IFVE
X0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyog
QkEgKi8gIHsgVURfSWdycF9yZWcsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBJVEFC
X18wRl9fT1BfQkFfX1JFRyB9LAogIC8qIEJCICovICB7IFVEX0lidGMsICAgICAgICAgT19Fdiwg
ICAgT19HdiwgICAgT19OT05FLCAgUF9hc298UF9vc298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9y
ZXhiIH0sCiAgLyogQkMgKi8gIHsgVURfSWJzZiwgICAgICAgICBPX0d2LCAgICBPX0V2LCAgICBP
X05PTkUsICBQX2Fzb3xQX29zb3xQX3JleHd8UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiBC
RCAqLyAgeyBVRF9JYnNyLCAgICAgICAgIE9fR3YsICAgIE9fRXYsICAgIE9fTk9ORSwgIFBfYXNv
fFBfb3NvfFBfcmV4d3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIEJFICovICB7IFVEX0lt
b3ZzeCwgICAgICAgT19HdiwgICAgT19FYiwgICAgT19OT05FLCAgUF9jMnxQX2Fzb3xQX29zb3xQ
X3JleHd8UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiBCRiAqLyAgeyBVRF9JbW92c3gsICAg
ICAgIE9fR3YsICAgIE9fRXcsICAgIE9fTk9ORSwgIFBfYzJ8UF9hc298UF9vc298UF9yZXh3fFBf
cmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogQzAgKi8gIHsgVURfSXhhZGQsICAgICAgICBPX0Vi
LCAgICBPX0diLCAgICBPX05PTkUsICBQX2Fzb3xQX29zb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9
LAogIC8qIEMxICovICB7IFVEX0l4YWRkLCAgICAgICAgT19FdiwgICAgT19HdiwgICAgT19OT05F
LCAgUF9hc298UF9vc298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogQzIgKi8g
IHsgVURfSWNtcHBzLCAgICAgICBPX1YsICAgICBPX1csICAgICBPX0liLCAgICBQX2Fzb3xQX3Jl
eHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIEMzICovICB7IFVEX0ltb3ZudGksICAgICAgT19NLCAg
ICAgT19HdncsICAgT19OT05FLCAgUF9hc298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0s
CiAgLyogQzQgKi8gIHsgVURfSXBpbnNydywgICAgICBPX1AsICAgICBPX0V3LCAgICBPX0liLCAg
ICBQX2Fzb3xQX29zb3xQX3JleHd8UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiBDNSAqLyAg
eyBVRF9JcGV4dHJ3LCAgICAgIE9fR2QsICAgIE9fUFIsICAgIE9fSWIsICAgIFBfYXNvfFBfb3Nv
fFBfcmV4d3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIEM2ICovICB7IFVEX0lzaHVmcHMs
ICAgICAgT19WLCAgICAgT19XLCAgICAgT19JYiwgICAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3Jl
eGIgfSwKICAvKiBDNyAqLyAgeyBVRF9JZ3JwX3JlZywgICAgIE9fTk9ORSwgT19OT05FLCBPX05P
TkUsICAgIElUQUJfXzBGX19PUF9DN19fUkVHIH0sCiAgLyogQzggKi8gIHsgVURfSWJzd2FwLCAg
ICAgICBPX3JBWHI4LCBPX05PTkUsICBPX05PTkUsICBQX29zb3xQX3JleHd8UF9yZXhiIH0sCiAg
LyogQzkgKi8gIHsgVURfSWJzd2FwLCAgICAgICBPX3JDWHI5LCBPX05PTkUsICBPX05PTkUsICBQ
X29zb3xQX3JleHd8UF9yZXhiIH0sCiAgLyogQ0EgKi8gIHsgVURfSWJzd2FwLCAgICAgICBPX3JE
WHIxMCwgT19OT05FLCAgT19OT05FLCBQX29zb3xQX3JleHd8UF9yZXhiIH0sCiAgLyogQ0IgKi8g
IHsgVURfSWJzd2FwLCAgICAgICBPX3JCWHIxMSwgT19OT05FLCAgT19OT05FLCBQX29zb3xQX3Jl
eHd8UF9yZXhiIH0sCiAgLyogQ0MgKi8gIHsgVURfSWJzd2FwLCAgICAgICBPX3JTUHIxMiwgT19O
T05FLCAgT19OT05FLCBQX29zb3xQX3JleHd8UF9yZXhiIH0sCiAgLyogQ0QgKi8gIHsgVURfSWJz
d2FwLCAgICAgICBPX3JCUHIxMywgT19OT05FLCAgT19OT05FLCBQX29zb3xQX3JleHd8UF9yZXhi
IH0sCiAgLyogQ0UgKi8gIHsgVURfSWJzd2FwLCAgICAgICBPX3JTSXIxNCwgT19OT05FLCAgT19O
T05FLCBQX29zb3xQX3JleHd8UF9yZXhiIH0sCiAgLyogQ0YgKi8gIHsgVURfSWJzd2FwLCAgICAg
ICBPX3JESXIxNSwgT19OT05FLCAgT19OT05FLCBQX29zb3xQX3JleHd8UF9yZXhiIH0sCiAgLyog
RDAgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25v
bmUgfSwKICAvKiBEMSAqLyAgeyBVRF9JcHNybHcsICAgICAgIE9fUCwgICAgIE9fUSwgICAgIE9f
Tk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogRDIgKi8gIHsgVURfSXBz
cmxkLCAgICAgICBPX1AsICAgICBPX1EsICAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4
fFBfcmV4YiB9LAogIC8qIEQzICovICB7IFVEX0lwc3JscSwgICAgICAgT19QLCAgICAgT19RLCAg
ICAgT19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiBENCAqLyAgeyBV
RF9JcGFkZHEsICAgICAgIE9fUCwgICAgIE9fUSwgICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQ
X3JleHh8UF9yZXhiIH0sCiAgLyogRDUgKi8gIHsgVURfSXBtdWxsdywgICAgICBPX1AsICAgICBP
X1EsICAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIEQ2ICov
ICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0s
CiAgLyogRDcgKi8gIHsgVURfSXBtb3Ztc2tiLCAgICBPX0dkLCAgICBPX1BSLCAgICBPX05PTkUs
ICBQX29zb3xQX3JleHJ8UF9yZXhiIH0sCiAgLyogRDggKi8gIHsgVURfSXBzdWJ1c2IsICAgICBP
X1AsICAgICBPX1EsICAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAog
IC8qIEQ5ICovICB7IFVEX0lwc3VidXN3LCAgICAgT19QLCAgICAgT19RLCAgICAgT19OT05FLCAg
UF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiBEQSAqLyAgeyBVRF9JcG1pbnViLCAg
ICAgIE9fUCwgICAgIE9fUSwgICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhi
IH0sCiAgLyogREIgKi8gIHsgVURfSXBhbmQsICAgICAgICBPX1AsICAgICBPX1EsICAgICBPX05P
TkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIERDICovICB7IFVEX0lwYWRk
dXNiLCAgICAgT19QLCAgICAgT19RLCAgICAgT19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQ
X3JleGIgfSwKICAvKiBERCAqLyAgeyBVRF9JcGFkZHVzdywgICAgIE9fUCwgICAgIE9fUSwgICAg
IE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogREUgKi8gIHsgVURf
SXBtYXh1YiwgICAgICBPX1AsICAgICBPX1EsICAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9y
ZXh4fFBfcmV4YiB9LAogIC8qIERGICovICB7IFVEX0lwYW5kbiwgICAgICAgT19QLCAgICAgT19R
LCAgICAgT19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiBFMCAqLyAg
eyBVRF9JcGF2Z2IsICAgICAgIE9fUCwgICAgIE9fUSwgICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4
cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogRTEgKi8gIHsgVURfSXBzcmF3LCAgICAgICBPX1AsICAg
ICBPX1EsICAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIEUy
ICovICB7IFVEX0lwc3JhZCwgICAgICAgT19QLCAgICAgT19RLCAgICAgT19OT05FLCAgUF9hc298
UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiBFMyAqLyAgeyBVRF9JcGF2Z3csICAgICAgIE9f
UCwgICAgIE9fUSwgICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAg
LyogRTQgKi8gIHsgVURfSXBtdWxodXcsICAgICBPX1AsICAgICBPX1EsICAgICBPX05PTkUsICBQ
X2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIEU1ICovICB7IFVEX0lwbXVsaHcsICAg
ICAgT19QLCAgICAgT19RLCAgICAgT19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIg
fSwKICAvKiBFNiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUs
ICAgIFBfbm9uZSB9LAogIC8qIEU3ICovICB7IFVEX0ltb3ZudHEsICAgICAgT19NLCAgICAgT19Q
LCAgICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogRTggKi8gIHsgVURfSXBzdWJzYiwgICAgICBP
X1AsICAgICBPX1EsICAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAog
IC8qIEU5ICovICB7IFVEX0lwc3Vic3csICAgICAgT19QLCAgICAgT19RLCAgICAgT19OT05FLCAg
UF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiBFQSAqLyAgeyBVRF9JcG1pbnN3LCAg
ICAgIE9fUCwgICAgIE9fUSwgICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhi
IH0sCiAgLyogRUIgKi8gIHsgVURfSXBvciwgICAgICAgICBPX1AsICAgICBPX1EsICAgICBPX05P
TkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIEVDICovICB7IFVEX0lwYWRk
c2IsICAgICAgT19QLCAgICAgT19RLCAgICAgT19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQ
X3JleGIgfSwKICAvKiBFRCAqLyAgeyBVRF9JcGFkZHN3LCAgICAgIE9fUCwgICAgIE9fUSwgICAg
IE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogRUUgKi8gIHsgVURf
SXBtYXhzdywgICAgICBPX1AsICAgICBPX1EsICAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9y
ZXh4fFBfcmV4YiB9LAogIC8qIEVGICovICB7IFVEX0lweG9yLCAgICAgICAgT19QLCAgICAgT19R
LCAgICAgT19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiBGMCAqLyAg
eyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAog
IC8qIEYxICovICB7IFVEX0lwc2xsdywgICAgICAgT19QLCAgICAgT19RLCAgICAgT19OT05FLCAg
UF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiBGMiAqLyAgeyBVRF9JcHNsbGQsICAg
ICAgIE9fUCwgICAgIE9fUSwgICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhi
IH0sCiAgLyogRjMgKi8gIHsgVURfSXBzbGxxLCAgICAgICBPX1AsICAgICBPX1EsICAgICBPX05P
TkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIEY0ICovICB7IFVEX0lwbXVs
dWRxLCAgICAgT19QLCAgICAgT19RLCAgICAgT19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQ
X3JleGIgfSwKICAvKiBGNSAqLyAgeyBVRF9JcG1hZGR3ZCwgICAgIE9fUCwgICAgIE9fUSwgICAg
IE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogRjYgKi8gIHsgVURf
SXBzYWRidywgICAgICBPX1AsICAgICBPX1EsICAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9y
ZXh4fFBfcmV4YiB9LAogIC8qIEY3ICovICB7IFVEX0ltYXNrbW92cSwgICAgT19QLCAgICAgT19R
LCAgICAgT19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiBGOCAqLyAg
eyBVRF9JcHN1YmIsICAgICAgIE9fUCwgICAgIE9fUSwgICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4
cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogRjkgKi8gIHsgVURfSXBzdWJ3LCAgICAgICBPX1AsICAg
ICBPX1EsICAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIEZB
ICovICB7IFVEX0lwc3ViZCwgICAgICAgT19QLCAgICAgT19RLCAgICAgT19OT05FLCAgUF9hc298
UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiBGQiAqLyAgeyBVRF9JcHN1YnEsICAgICAgIE9f
UCwgICAgIE9fUSwgICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAg
/* FC */  { UD_Ipaddb,       O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* FD */  { UD_Ipaddw,       O_P,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* FE */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* FF */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
};

static struct ud_itab_entry itab__0f__op_00__reg[8] = {
  /* 00 */  { UD_Isldt,        O_Ev,    O_NONE,  O_NONE,  P_aso|P_oso|P_rexr|P_rexx|P_rexb },
  /* 01 */  { UD_Istr,         O_Ev,    O_NONE,  O_NONE,  P_aso|P_oso|P_rexr|P_rexx|P_rexb },
  /* 02 */  { UD_Illdt,        O_Ew,    O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 03 */  { UD_Iltr,         O_Ew,    O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 04 */  { UD_Iverr,        O_Ew,    O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 05 */  { UD_Iverw,        O_Ew,    O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 06 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 07 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
};

static struct ud_itab_entry itab__0f__op_01__reg[8] = {
  /* 00 */  { UD_Igrp_mod,     O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_01__REG__OP_00__MOD },
  /* 01 */  { UD_Igrp_mod,     O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_01__REG__OP_01__MOD },
  /* 02 */  { UD_Igrp_mod,     O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_01__REG__OP_02__MOD },
  /* 03 */  { UD_Igrp_mod,     O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_01__REG__OP_03__MOD },
  /* 04 */  { UD_Igrp_mod,     O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_01__REG__OP_04__MOD },
  /* 05 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 06 */  { UD_Igrp_mod,     O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_01__REG__OP_06__MOD },
  /* 07 */  { UD_Igrp_mod,     O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_01__REG__OP_07__MOD },
};

static struct ud_itab_entry itab__0f__op_01__reg__op_00__mod[2] = {
  /* 00 */  { UD_Isgdt,        O_M,     O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 01 */  { UD_Igrp_rm,      O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_01__REG__OP_00__MOD__OP_01__RM },
};

static struct ud_itab_entry itab__0f__op_01__reg__op_00__mod__op_01__rm[8] = {
  /* 00 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 01 */  { UD_Igrp_vendor,  O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_01__REG__OP_00__MOD__OP_01__RM__OP_01__VENDOR },
  /* 02 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 03 */  { UD_Igrp_vendor,  O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_01__REG__OP_00__MOD__OP_01__RM__OP_03__VENDOR },
  /* 04 */  { UD_Igrp_vendor,  O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_01__REG__OP_00__MOD__OP_01__RM__OP_04__VENDOR },
  /* 05 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 06 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 07 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
};

static struct ud_itab_entry itab__0f__op_01__reg__op_00__mod__op_01__rm__op_01__vendor[2] = {
  /* 00 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 01 */  { UD_Ivmcall,      O_NONE,  O_NONE,  O_NONE,  P_none },
};

static struct ud_itab_entry itab__0f__op_01__reg__op_00__mod__op_01__rm__op_03__vendor[2] = {
  /* 00 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 01 */  { UD_Ivmresume,    O_NONE,  O_NONE,  O_NONE,  P_none },
};

static struct ud_itab_entry itab__0f__op_01__reg__op_00__mod__op_01__rm__op_04__vendor[2] = {
  /* 00 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 01 */  { UD_Ivmxoff,      O_NONE,  O_NONE,  O_NONE,  P_none },
};

static struct ud_itab_entry itab__0f__op_01__reg__op_01__mod[2] = {
  /* 00 */  { UD_Isidt,        O_M,     O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 01 */  { UD_Igrp_rm,      O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_01__REG__OP_01__MOD__OP_01__RM },
};

static struct ud_itab_entry itab__0f__op_01__reg__op_01__mod__op_01__rm[8] = {
  /* 00 */  { UD_Imonitor,     O_NONE,  O_NONE,  O_NONE,  P_none },
  /* 01 */  { UD_Imwait,       O_NONE,  O_NONE,  O_NONE,  P_none },
  /* 02 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 03 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 04 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 05 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 06 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 07 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
};

static struct ud_itab_entry itab__0f__op_01__reg__op_02__mod[2] = {
  /* 00 */  { UD_Ilgdt,        O_M,     O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
};

static struct ud_itab_entry itab__0f__op_01__reg__op_03__mod[2] = {
  /* 00 */  { UD_Ilidt,        O_M,     O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 01 */  { UD_Igrp_rm,      O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_01__REG__OP_03__MOD__OP_01__RM },
};

static struct ud_itab_entry itab__0f__op_01__reg__op_03__mod__op_01__rm[8] = {
  /* 00 */  { UD_Igrp_vendor,  O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_01__REG__OP_03__MOD__OP_01__RM__OP_00__VENDOR },
  /* 01 */  { UD_Igrp_vendor,  O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_01__REG__OP_03__MOD__OP_01__RM__OP_01__VENDOR },
  /* 02 */  { UD_Igrp_vendor,  O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_01__REG__OP_03__MOD__OP_01__RM__OP_02__VENDOR },
  /* 03 */  { UD_Igrp_vendor,  O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_01__REG__OP_03__MOD__OP_01__RM__OP_03__VENDOR },
  /* 04 */  { UD_Igrp_vendor,  O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_01__REG__OP_03__MOD__OP_01__RM__OP_04__VENDOR },
  /* 05 */  { UD_Igrp_vendor,  O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_01__REG__OP_03__MOD__OP_01__RM__OP_05__VENDOR },
  /* 06 */  { UD_Igrp_vendor,  O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_01__REG__OP_03__MOD__OP_01__RM__OP_06__VENDOR },
  /* 07 */  { UD_Igrp_vendor,  O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_01__REG__OP_03__MOD__OP_01__RM__OP_07__VENDOR },
};

static struct ud_itab_entry itab__0f__op_01__reg__op_03__mod__op_01__rm__op_00__vendor[2] = {
  /* 00 */  { UD_Ivmrun,       O_NONE,  O_NONE,  O_NONE,  P_none },
  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
};

static struct ud_itab_entry itab__0f__op_01__reg__op_03__mod__op_01__rm__op_01__vendor[2] = {
  /* 00 */  { UD_Ivmmcall,     O_NONE,  O_NONE,  O_NONE,  P_none },
  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
};

static struct ud_itab_entry itab__0f__op_01__reg__op_03__mod__op_01__rm__op_02__vendor[2] = {
  /* 00 */  { UD_Ivmload,      O_NONE,  O_NONE,  O_NONE,  P_none },
  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
};

static struct ud_itab_entry itab__0f__op_01__reg__op_03__mod__op_01__rm__op_03__vendor[2] = {
  /* 00 */  { UD_Ivmsave,      O_NONE,  O_NONE,  O_NONE,  P_none },
  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
};

static struct ud_itab_entry itab__0f__op_01__reg__op_03__mod__op_01__rm__op_04__vendor[2] = {
  /* 00 */  { UD_Istgi,        O_NONE,  O_NONE,  O_NONE,  P_none },
  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
};

static struct ud_itab_entry itab__0f__op_01__reg__op_03__mod__op_01__rm__op_05__vendor[2] = {
  /* 00 */  { UD_Iclgi,        O_NONE,  O_NONE,  O_NONE,  P_none },
  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
};

static struct ud_itab_entry itab__0f__op_01__reg__op_03__mod__op_01__rm__op_06__vendor[2] = {
  /* 00 */  { UD_Iskinit,      O_NONE,  O_NONE,  O_NONE,  P_none },
  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
};

static struct ud_itab_entry itab__0f__op_01__reg__op_03__mod__op_01__rm__op_07__vendor[2] = {
  /* 00 */  { UD_Iinvlpga,     O_NONE,  O_NONE,  O_NONE,  P_none },
  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
};

static struct ud_itab_entry itab__0f__op_01__reg__op_04__mod[2] = {
  /* 00 */  { UD_Ismsw,        O_M,     O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
};

static struct ud_itab_entry itab__0f__op_01__reg__op_06__mod[2] = {
  /* 00 */  { UD_Ilmsw,        O_Ew,    O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
};

static struct ud_itab_entry itab__0f__op_01__reg__op_07__mod[2] = {
  /* 00 */  { UD_Iinvlpg,      O_M,     O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 01 */  { UD_Igrp_rm,      O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_01__REG__OP_07__MOD__OP_01__RM },
};

static struct ud_itab_entry itab__0f__op_01__reg__op_07__mod__op_01__rm[8] = {
  /* 00 */  { UD_Iswapgs,      O_NONE,  O_NONE,  O_NONE,  P_none },
  /* 01 */  { UD_Igrp_vendor,  O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_01__REG__OP_07__MOD__OP_01__RM__OP_01__VENDOR },
  /* 02 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 03 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 04 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 05 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 06 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 07 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
};

static struct ud_itab_entry itab__0f__op_01__reg__op_07__mod__op_01__rm__op_01__vendor[2] = {
  /* 00 */  { UD_Irdtscp,      O_NONE,  O_NONE,  O_NONE,  P_none },
  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
};

static struct ud_itab_entry itab__0f__op_0d__reg[8] = {
  /* 00 */  { UD_Iprefetch,    O_M,     O_NONE,  O_NONE,  P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
  /* 01 */  { UD_Iprefetch,    O_M,     O_NONE,  O_NONE,  P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
  /* 02 */  { UD_Iprefetch,    O_M,     O_NONE,  O_NONE,  P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
  /* 03 */  { UD_Iprefetch,    O_M,     O_NONE,  O_NONE,  P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
  /* 04 */  { UD_Iprefetch,    O_M,     O_NONE,  O_NONE,  P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
  /* 05 */  { UD_Iprefetch,    O_M,     O_NONE,  O_NONE,  P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
  /* 06 */  { UD_Iprefetch,    O_M,     O_NONE,  O_NONE,  P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
  /* 07 */  { UD_Iprefetch,    O_M,     O_NONE,  O_NONE,  P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
};

static struct ud_itab_entry itab__0f__op_18__reg[8] = {
  /* 00 */  { UD_Iprefetchnta, O_M,     O_NONE,  O_NONE,  P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
  /* 01 */  { UD_Iprefetcht0,  O_M,     O_NONE,  O_NONE,  P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
  /* 02 */  { UD_Iprefetcht1,  O_M,     O_NONE,  O_NONE,  P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
  /* 03 */  { UD_Iprefetcht2,  O_M,     O_NONE,  O_NONE,  P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
  /* 04 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 05 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 06 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 07 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
};

static struct ud_itab_entry itab__0f__op_71__reg[8] = {
  /* 00 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 02 */  { UD_Ipsrlw,       O_PR,    O_Ib,    O_NONE,  P_none },
  /* 03 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 04 */  { UD_Ipsraw,       O_PR,    O_Ib,    O_NONE,  P_none },
  /* 05 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 06 */  { UD_Ipsllw,       O_PR,    O_Ib,    O_NONE,  P_none },
  /* 07 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
};

static struct ud_itab_entry itab__0f__op_72__reg[8] = {
  /* 00 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 02 */  { UD_Ipsrld,       O_PR,    O_Ib,    O_NONE,  P_none },
  /* 03 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 04 */  { UD_Ipsrad,       O_PR,    O_Ib,    O_NONE,  P_none },
  /* 05 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 06 */  { UD_Ipslld,       O_PR,    O_Ib,    O_NONE,  P_none },
  /* 07 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
};

static struct ud_itab_entry itab__0f__op_73__reg[8] = {
  /* 00 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 02 */  { UD_Ipsrlq,       O_PR,    O_Ib,    O_NONE,  P_none },
  /* 03 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 04 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 05 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 06 */  { UD_Ipsllq,       O_PR,    O_Ib,    O_NONE,  P_none },
  /* 07 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
};

static struct ud_itab_entry itab__0f__op_ae__reg[8] = {
  /* 00 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 02 */  { UD_Ildmxcsr,     O_Md,    O_NONE,  O_NONE,  P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
  /* 03 */  { UD_Istmxcsr,     O_Md,    O_NONE,  O_NONE,  P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
  /* 04 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 05 */  { UD_Igrp_mod,     O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_AE__REG__OP_05__MOD },
  /* 06 */  { UD_Igrp_mod,     O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_AE__REG__OP_06__MOD },
  /* 07 */  { UD_Igrp_mod,     O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_AE__REG__OP_07__MOD },
};

static struct ud_itab_entry itab__0f__op_ae__reg__op_05__mod[2] = {
  /* 00 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 01 */  { UD_Igrp_rm,      O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_AE__REG__OP_05__MOD__OP_01__RM },
};

static struct ud_itab_entry itab__0f__op_ae__reg__op_05__mod__op_01__rm[8] = {
  /* 00 */  { UD_Ilfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
  /* 01 */  { UD_Ilfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
  /* 02 */  { UD_Ilfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
  /* 03 */  { UD_Ilfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
  /* 04 */  { UD_Ilfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
  /* 05 */  { UD_Ilfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
  /* 06 */  { UD_Ilfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
  /* 07 */  { UD_Ilfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
};

static struct ud_itab_entry itab__0f__op_ae__reg__op_06__mod[2] = {
  /* 00 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 01 */  { UD_Igrp_rm,      O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_AE__REG__OP_06__MOD__OP_01__RM },
};

static struct ud_itab_entry itab__0f__op_ae__reg__op_06__mod__op_01__rm[8] = {
  /* 00 */  { UD_Imfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
  /* 01 */  { UD_Imfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
  /* 02 */  { UD_Imfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
  /* 03 */  { UD_Imfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
  /* 04 */  { UD_Imfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
  /* 05 */  { UD_Imfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
  /* 06 */  { UD_Imfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
  /* 07 */  { UD_Imfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
};

static struct ud_itab_entry itab__0f__op_ae__reg__op_07__mod[2] = {
  /* 00 */  { UD_Iclflush,     O_M,     O_NONE,  O_NONE,  P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
  /* 01 */  { UD_Igrp_rm,      O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_AE__REG__OP_07__MOD__OP_01__RM },
};

static struct ud_itab_entry itab__0f__op_ae__reg__op_07__mod__op_01__rm[8] = {
  /* 00 */  { UD_Isfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
  /* 01 */  { UD_Isfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
  /* 02 */  { UD_Isfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
  /* 03 */  { UD_Isfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
  /* 04 */  { UD_Isfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
  /* 05 */  { UD_Isfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
  /* 06 */  { UD_Isfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
  /* 07 */  { UD_Isfence,      O_NONE,  O_NONE,  O_NONE,  P_none },
};

static struct ud_itab_entry itab__0f__op_ba__reg[8] = {
  /* 00 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 02 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 03 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 04 */  { UD_Ibt,          O_Ev,    O_Ib,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
  /* 05 */  { UD_Ibts,         O_Ev,    O_Ib,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
  /* 06 */  { UD_Ibtr,         O_Ev,    O_Ib,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
  /* 07 */  { UD_Ibtc,         O_Ev,    O_Ib,    O_NONE,  P_c1|P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
};

static struct ud_itab_entry itab__0f__op_c7__reg[8] = {
  /* 00 */  { UD_Igrp_vendor,  O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_C7__REG__OP_00__VENDOR },
  /* 01 */  { UD_Icmpxchg8b,   O_M,     O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 02 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 03 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 04 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 05 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 06 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 07 */  { UD_Igrp_vendor,  O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_C7__REG__OP_07__VENDOR },
};

static struct ud_itab_entry itab__0f__op_c7__reg__op_00__vendor[2] = {
  /* 00 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 01 */  { UD_Ivmptrld,     O_Mq,    O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
};

static struct ud_itab_entry itab__0f__op_c7__reg__op_07__vendor[2] = {
  /* 00 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 01 */  { UD_Ivmptrst,     O_Mq,    O_NONE,  O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
};

static struct ud_itab_entry itab__0f__op_d9__mod[2] = {
  /* 00 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 01 */  { UD_Igrp_x87,     O_NONE, O_NONE, O_NONE,    ITAB__0F__OP_D9__MOD__OP_01__X87 },
};

static struct ud_itab_entry itab__0f__op_d9__mod__op_01__x87[64] = {
  /* 00 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 02 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 03 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 04 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 05 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 06 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 07 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 08 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 09 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 0A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 0B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 0C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 0D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 0E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 0F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 10 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 11 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 12 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 13 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 14 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 15 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 16 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 17 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 18 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 19 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 1A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 1B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 1C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 1D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 1E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 1F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 20 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 21 */  { UD_Ifabs,        O_NONE,  O_NONE,  O_NONE,  P_none },
  /* 22 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 23 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 24 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 25 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 26 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 27 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 28 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 29 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 2A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 2B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 2C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 2D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 2E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 2F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 30 */  { UD_If2xm1,       O_NONE,  O_NONE,  O_NONE,  P_none },
  /* 31 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 32 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 33 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 34 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 35 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 36 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 37 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 38 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 39 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 3A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 3B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 3C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 3D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 3E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 3F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
};

static struct ud_itab_entry itab__1byte[256] = {
  /* 00 */  { UD_Iadd,         O_Eb,    O_Gb,    O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 01 */  { UD_Iadd,         O_Ev,    O_Gv,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
  /* 02 */  { UD_Iadd,         O_Gb,    O_Eb,    O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 03 */  { UD_Iadd,         O_Gv,    O_Ev,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
  /* 04 */  { UD_Iadd,         O_AL,    O_Ib,    O_NONE,  P_none },
  /* 05 */  { UD_Iadd,         O_rAX,   O_Iz,    O_NONE,  P_oso|P_rexw },
  /* 06 */  { UD_Ipush,        O_ES,    O_NONE,  O_NONE,  P_inv64|P_none },
  /* 07 */  { UD_Ipop,         O_ES,    O_NONE,  O_NONE,  P_inv64|P_none },
  /* 08 */  { UD_Ior,          O_Eb,    O_Gb,    O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 09 */  { UD_Ior,          O_Ev,    O_Gv,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
  /* 0A */  { UD_Ior,          O_Gb,    O_Eb,    O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 0B */  { UD_Ior,          O_Gv,    O_Ev,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
  /* 0C */  { UD_Ior,          O_AL,    O_Ib,    O_NONE,  P_none },
  /* 0D */  { UD_Ior,          O_rAX,   O_Iz,    O_NONE,  P_oso|P_rexw },
  /* 0E */  { UD_Ipush,        O_CS,    O_NONE,  O_NONE,  P_inv64|P_none },
  /* 0F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 10 */  { UD_Iadc,         O_Eb,    O_Gb,    O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 11 */  { UD_Iadc,         O_Ev,    O_Gv,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
  /* 12 */  { UD_Iadc,         O_Gb,    O_Eb,    O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 13 */  { UD_Iadc,         O_Gv,    O_Ev,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
  /* 14 */  { UD_Iadc,         O_AL,    O_Ib,    O_NONE,  P_none },
  /* 15 */  { UD_Iadc,         O_rAX,   O_Iz,    O_NONE,  P_oso|P_rexw },
  /* 16 */  { UD_Ipush,        O_SS,    O_NONE,  O_NONE,  P_inv64|P_none },
  /* 17 */  { UD_Ipop,         O_SS,    O_NONE,  O_NONE,  P_inv64|P_none },
  /* 18 */  { UD_Isbb,         O_Eb,    O_Gb,    O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 19 */  { UD_Isbb,         O_Ev,    O_Gv,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
  /* 1A */  { UD_Isbb,         O_Gb,    O_Eb,    O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 1B */  { UD_Isbb,         O_Gv,    O_Ev,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
  /* 1C */  { UD_Isbb,         O_AL,    O_Ib,    O_NONE,  P_none },
  /* 1D */  { UD_Isbb,         O_rAX,   O_Iz,    O_NONE,  P_oso|P_rexw },
  /* 1E */  { UD_Ipush,        O_DS,    O_NONE,  O_NONE,  P_inv64|P_none },
  /* 1F */  { UD_Ipop,         O_DS,    O_NONE,  O_NONE,  P_inv64|P_none },
  /* 20 */  { UD_Iand,         O_Eb,    O_Gb,    O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 21 */  { UD_Iand,         O_Ev,    O_Gv,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
  /* 22 */  { UD_Iand,         O_Gb,    O_Eb,    O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 23 */  { UD_Iand,         O_Gv,    O_Ev,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
  /* 24 */  { UD_Iand,         O_AL,    O_Ib,    O_NONE,  P_none },
  /* 25 */  { UD_Iand,         O_rAX,   O_Iz,    O_NONE,  P_oso|P_rexw },
  /* 26 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 27 */  { UD_Idaa,         O_NONE,  O_NONE,  O_NONE,  P_inv64|P_none },
  /* 28 */  { UD_Isub,         O_Eb,    O_Gb,    O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 29 */  { UD_Isub,         O_Ev,    O_Gv,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
  /* 2A */  { UD_Isub,         O_Gb,    O_Eb,    O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 2B */  { UD_Isub,         O_Gv,    O_Ev,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
  /* 2C */  { UD_Isub,         O_AL,    O_Ib,    O_NONE,  P_none },
  /* 2D */  { UD_Isub,         O_rAX,   O_Iz,
ICBPX05PTkUsICBQX29zb3xQX3JleHcgfSwKICAvKiAyRSAqLyAgeyBVRF9JaW52YWxpZCwgICAg
IE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDJGICovICB7IFVEX0lk
YXMsICAgICAgICAgT19OT05FLCAgT19OT05FLCAgT19OT05FLCAgUF9pbnY2NHxQX25vbmUgfSwK
ICAvKiAzMCAqLyAgeyBVRF9JeG9yLCAgICAgICAgIE9fRWIsICAgIE9fR2IsICAgIE9fTk9ORSwg
IFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMzEgKi8gIHsgVURfSXhvciwgICAg
ICAgICBPX0V2LCAgICBPX0d2LCAgICBPX05PTkUsICBQX2Fzb3xQX29zb3xQX3JleHd8UF9yZXhy
fFBfcmV4eHxQX3JleGIgfSwKICAvKiAzMiAqLyAgeyBVRF9JeG9yLCAgICAgICAgIE9fR2IsICAg
IE9fRWIsICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMzMg
Ki8gIHsgVURfSXhvciwgICAgICAgICBPX0d2LCAgICBPX0V2LCAgICBPX05PTkUsICBQX2Fzb3xQ
X29zb3xQX3JleHd8UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAzNCAqLyAgeyBVRF9JeG9y
LCAgICAgICAgIE9fQUwsICAgIE9fSWIsICAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDM1ICov
ICB7IFVEX0l4b3IsICAgICAgICAgT19yQVgsICAgT19JeiwgICAgT19OT05FLCAgUF9vc298UF9y
ZXh3IH0sCiAgLyogMzYgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19O
T05FLCAgICBQX25vbmUgfSwKICAvKiAzNyAqLyAgeyBVRF9JYWFhLCAgICAgICAgIE9fTk9ORSwg
IE9fTk9ORSwgIE9fTk9ORSwgIFBfaW52NjR8UF9ub25lIH0sCiAgLyogMzggKi8gIHsgVURfSWNt
cCwgICAgICAgICBPX0ViLCAgICBPX0diLCAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4
fFBfcmV4YiB9LAogIC8qIDM5ICovICB7IFVEX0ljbXAsICAgICAgICAgT19FdiwgICAgT19Hdiwg
ICAgT19OT05FLCAgUF9hc298UF9vc298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAg
LyogM0EgKi8gIHsgVURfSWNtcCwgICAgICAgICBPX0diLCAgICBPX0ViLCAgICBPX05PTkUsICBQ
X2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDNCICovICB7IFVEX0ljbXAsICAgICAg
ICAgT19HdiwgICAgT19FdiwgICAgT19OT05FLCAgUF9hc298UF9vc298UF9yZXh3fFBfcmV4cnxQ
X3JleHh8UF9yZXhiIH0sCiAgLyogM0MgKi8gIHsgVURfSWNtcCwgICAgICAgICBPX0FMLCAgICBP
X0liLCAgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAzRCAqLyAgeyBVRF9JY21wLCAgICAgICAg
IE9fckFYLCAgIE9fSXosICAgIE9fTk9ORSwgIFBfb3NvfFBfcmV4dyB9LAogIC8qIDNFICovICB7
IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAg
LyogM0YgKi8gIHsgVURfSWFhcywgICAgICAgICBPX05PTkUsICBPX05PTkUsICBPX05PTkUsICBQ
X2ludjY0fFBfbm9uZSB9LAogIC8qIDQwICovICB7IFVEX0lpbmMsICAgICAgICAgT19lQVgsICAg
T19OT05FLCAgT19OT05FLCAgUF9vc28gfSwKICAvKiA0MSAqLyAgeyBVRF9JaW5jLCAgICAgICAg
IE9fZUNYLCAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfb3NvIH0sCiAgLyogNDIgKi8gIHsgVURfSWlu
YywgICAgICAgICBPX2VEWCwgICBPX05PTkUsICBPX05PTkUsICBQX29zbyB9LAogIC8qIDQzICov
ICB7IFVEX0lpbmMsICAgICAgICAgT19lQlgsICAgT19OT05FLCAgT19OT05FLCAgUF9vc28gfSwK
ICAvKiA0NCAqLyAgeyBVRF9JaW5jLCAgICAgICAgIE9fZVNQLCAgIE9fTk9ORSwgIE9fTk9ORSwg
IFBfb3NvIH0sCiAgLyogNDUgKi8gIHsgVURfSWluYywgICAgICAgICBPX2VCUCwgICBPX05PTkUs
ICBPX05PTkUsICBQX29zbyB9LAogIC8qIDQ2ICovICB7IFVEX0lpbmMsICAgICAgICAgT19lU0ks
ICAgT19OT05FLCAgT19OT05FLCAgUF9vc28gfSwKICAvKiA0NyAqLyAgeyBVRF9JaW5jLCAgICAg
ICAgIE9fZURJLCAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfb3NvIH0sCiAgLyogNDggKi8gIHsgVURf
SWRlYywgICAgICAgICBPX2VBWCwgICBPX05PTkUsICBPX05PTkUsICBQX29zbyB9LAogIC8qIDQ5
ICovICB7IFVEX0lkZWMsICAgICAgICAgT19lQ1gsICAgT19OT05FLCAgT19OT05FLCAgUF9vc28g
fSwKICAvKiA0QSAqLyAgeyBVRF9JZGVjLCAgICAgICAgIE9fZURYLCAgIE9fTk9ORSwgIE9fTk9O
RSwgIFBfb3NvIH0sCiAgLyogNEIgKi8gIHsgVURfSWRlYywgICAgICAgICBPX2VCWCwgICBPX05P
TkUsICBPX05PTkUsICBQX29zbyB9LAogIC8qIDRDICovICB7IFVEX0lkZWMsICAgICAgICAgT19l
U1AsICAgT19OT05FLCAgT19OT05FLCAgUF9vc28gfSwKICAvKiA0RCAqLyAgeyBVRF9JZGVjLCAg
ICAgICAgIE9fZUJQLCAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfb3NvIH0sCiAgLyogNEUgKi8gIHsg
VURfSWRlYywgICAgICAgICBPX2VTSSwgICBPX05PTkUsICBPX05PTkUsICBQX29zbyB9LAogIC8q
IDRGICovICB7IFVEX0lkZWMsICAgICAgICAgT19lREksICAgT19OT05FLCAgT19OT05FLCAgUF9v
c28gfSwKICAvKiA1MCAqLyAgeyBVRF9JcHVzaCwgICAgICAgIE9fckFYcjgsIE9fTk9ORSwgIE9f
Tk9ORSwgIFBfZGVmNjR8UF9kZXBNfFBfb3NvfFBfcmV4YiB9LAogIC8qIDUxICovICB7IFVEX0lw
dXNoLCAgICAgICAgT19yQ1hyOSwgT19OT05FLCAgT19OT05FLCAgUF9kZWY2NHxQX2RlcE18UF9v
c298UF9yZXhiIH0sCiAgLyogNTIgKi8gIHsgVURfSXB1c2gsICAgICAgICBPX3JEWHIxMCwgT19O
T05FLCAgT19OT05FLCBQX2RlZjY0fFBfZGVwTXxQX29zb3xQX3JleGIgfSwKICAvKiA1MyAqLyAg
eyBVRF9JcHVzaCwgICAgICAgIE9fckJYcjExLCBPX05PTkUsICBPX05PTkUsIFBfZGVmNjR8UF9k
ZXBNfFBfb3NvfFBfcmV4YiB9LAogIC8qIDU0ICovICB7IFVEX0lwdXNoLCAgICAgICAgT19yU1By
MTIsIE9fTk9ORSwgIE9fTk9ORSwgUF9kZWY2NHxQX2RlcE18UF9vc298UF9yZXhiIH0sCiAgLyog
NTUgKi8gIHsgVURfSXB1c2gsICAgICAgICBPX3JCUHIxMywgT19OT05FLCAgT19OT05FLCBQX2Rl
ZjY0fFBfZGVwTXxQX29zb3xQX3JleGIgfSwKICAvKiA1NiAqLyAgeyBVRF9JcHVzaCwgICAgICAg
IE9fclNJcjE0LCBPX05PTkUsICBPX05PTkUsIFBfZGVmNjR8UF9kZXBNfFBfb3NvfFBfcmV4YiB9
LAogIC8qIDU3ICovICB7IFVEX0lwdXNoLCAgICAgICAgT19yRElyMTUsIE9fTk9ORSwgIE9fTk9O
RSwgUF9kZWY2NHxQX2RlcE18UF9vc298UF9yZXhiIH0sCiAgLyogNTggKi8gIHsgVURfSXBvcCwg
ICAgICAgICBPX3JBWHI4LCBPX05PTkUsICBPX05PTkUsICBQX2RlZjY0fFBfZGVwTXxQX29zb3xQ
X3JleGIgfSwKICAvKiA1OSAqLyAgeyBVRF9JcG9wLCAgICAgICAgIE9fckNYcjksIE9fTk9ORSwg
IE9fTk9ORSwgIFBfZGVmNjR8UF9kZXBNfFBfb3NvfFBfcmV4YiB9LAogIC8qIDVBICovICB7IFVE
X0lwb3AsICAgICAgICAgT19yRFhyMTAsIE9fTk9ORSwgIE9fTk9ORSwgUF9kZWY2NHxQX2RlcE18
UF9vc298UF9yZXhiIH0sCiAgLyogNUIgKi8gIHsgVURfSXBvcCwgICAgICAgICBPX3JCWHIxMSwg
T19OT05FLCAgT19OT05FLCBQX2RlZjY0fFBfZGVwTXxQX29zb3xQX3JleGIgfSwKICAvKiA1QyAq
LyAgeyBVRF9JcG9wLCAgICAgICAgIE9fclNQcjEyLCBPX05PTkUsICBPX05PTkUsIFBfZGVmNjR8
UF9kZXBNfFBfb3NvfFBfcmV4YiB9LAogIC8qIDVEICovICB7IFVEX0lwb3AsICAgICAgICAgT19y
QlByMTMsIE9fTk9ORSwgIE9fTk9ORSwgUF9kZWY2NHxQX2RlcE18UF9vc298UF9yZXhiIH0sCiAg
LyogNUUgKi8gIHsgVURfSXBvcCwgICAgICAgICBPX3JTSXIxNCwgT19OT05FLCAgT19OT05FLCBQ
X2RlZjY0fFBfZGVwTXxQX29zb3xQX3JleGIgfSwKICAvKiA1RiAqLyAgeyBVRF9JcG9wLCAgICAg
ICAgIE9fckRJcjE1LCBPX05PTkUsICBPX05PTkUsIFBfZGVmNjR8UF9kZXBNfFBfb3NvfFBfcmV4
YiB9LAogIC8qIDYwICovICB7IFVEX0lncnBfb3NpemUsICAgT19OT05FLCBPX05PTkUsIE9fTk9O
RSwgICAgSVRBQl9fMUJZVEVfX09QXzYwX19PU0laRSB9LAogIC8qIDYxICovICB7IFVEX0lncnBf
b3NpemUsICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgSVRBQl9fMUJZVEVfX09QXzYxX19P
U0laRSB9LAogIC8qIDYyICovICB7IFVEX0lib3VuZCwgICAgICAgT19HdiwgICAgT19NLCAgICAg
T19OT05FLCAgUF9pbnY2NHxQX2Fzb3xQX29zbyB9LAogIC8qIDYzICovICB7IFVEX0lncnBfbW9k
ZSwgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgSVRBQl9fMUJZVEVfX09QXzYzX19NT0RF
IH0sCiAgLyogNjQgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05F
LCAgICBQX25vbmUgfSwKICAvKiA2NSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19O
T05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDY2ICovICB7IFVEX0lpbnZhbGlkLCAgICAg
T19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogNjcgKi8gIHsgVURfSWlu
dmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA2OCAq
LyAgeyBVRF9JcHVzaCwgICAgICAgIE9fSXosICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfYzF8UF9v
c28gfSwKICAvKiA2OSAqLyAgeyBVRF9JaW11bCwgICAgICAgIE9fR3YsICAgIE9fRXYsICAgIE9f
SXosICAgIFBfYXNvfFBfb3NvfFBfcmV4d3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDZB
ICovICB7IFVEX0lwdXNoLCAgICAgICAgT19JYiwgICAgT19OT05FLCAgT19OT05FLCAgUF9ub25l
IH0sCiAgLyogNkIgKi8gIHsgVURfSWltdWwsICAgICAgICBPX0d2LCAgICBPX0V2LCAgICBPX0li
LCAgICBQX2Fzb3xQX29zb3xQX3JleHd8UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiA2QyAq
LyAgeyBVRF9JaW5zYiwgICAgICAgIE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9
LAogIC8qIDZEICovICB7IFVEX0lncnBfb3NpemUsICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwg
ICAgSVRBQl9fMUJZVEVfX09QXzZEX19PU0laRSB9LAogIC8qIDZFICovICB7IFVEX0lvdXRzYiwg
ICAgICAgT19OT05FLCAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogNkYgKi8gIHsg
VURfSWdycF9vc2l6ZSwgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBJVEFCX18xQllURV9f
T1BfNkZfX09TSVpFIH0sCiAgLyogNzAgKi8gIHsgVURfSWpvLCAgICAgICAgICBPX0piLCAgICBP
X05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiA3MSAqLyAgeyBVRF9Jam5vLCAgICAgICAg
IE9fSmIsICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDcyICovICB7IFVEX0lq
YiwgICAgICAgICAgT19KYiwgICAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogNzMg
Ki8gIHsgVURfSWphZSwgICAgICAgICBPX0piLCAgICBPX05PTkUsICBPX05PTkUsICBQX25vbmUg
fSwKICAvKiA3NCAqLyAgeyBVRF9JanosICAgICAgICAgIE9fSmIsICAgIE9fTk9ORSwgIE9fTk9O
RSwgIFBfbm9uZSB9LAogIC8qIDc1ICovICB7IFVEX0lqbnosICAgICAgICAgT19KYiwgICAgT19O
T05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogNzYgKi8gIHsgVURfSWpiZSwgICAgICAgICBP
X0piLCAgICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiA3NyAqLyAgeyBVRF9JamEs
ICAgICAgICAgIE9fSmIsICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDc4ICov
ICB7IFVEX0lqcywgICAgICAgICAgT19KYiwgICAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0s
CiAgLyogNzkgKi8gIHsgVURfSWpucywgICAgICAgICBPX0piLCAgICBPX05PTkUsICBPX05PTkUs
ICBQX25vbmUgfSwKICAvKiA3QSAqLyAgeyBVRF9JanAsICAgICAgICAgIE9fSmIsICAgIE9fTk9O
RSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDdCICovICB7IFVEX0lqbnAsICAgICAgICAgT19K
YiwgICAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogN0MgKi8gIHsgVURfSWpsLCAg
ICAgICAgICBPX0piLCAgICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiA3RCAqLyAg
eyBVRF9JamdlLCAgICAgICAgIE9fSmIsICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAog
IC8qIDdFICovICB7IFVEX0lqbGUsICAgICAgICAgT19KYiwgICAgT19OT05FLCAgT19OT05FLCAg
UF9ub25lIH0sCiAgLyogN0YgKi8gIHsgVURfSWpnLCAgICAgICAgICBPX0piLCAgICBPX05PTkUs
ICBPX05PTkUsICBQX25vbmUgfSwKICAvKiA4MCAqLyAgeyBVRF9JZ3JwX3JlZywgICAgIE9fTk9O
RSwgT19OT05FLCBPX05PTkUsICAgIElUQUJfXzFCWVRFX19PUF84MF9fUkVHIH0sCiAgLyogODEg
Ki8gIHsgVURfSWdycF9yZWcsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBJVEFCX18x
QllURV9fT1BfODFfX1JFRyB9LAogIC8qIDgyICovICB7IFVEX0lncnBfcmVnLCAgICAgT19OT05F
LCBPX05PTkUsIE9fTk9ORSwgICAgSVRBQl9fMUJZVEVfX09QXzgyX19SRUcgfSwKICAvKiA4MyAq
LyAgeyBVRF9JZ3JwX3JlZywgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIElUQUJfXzFC
WVRFX19PUF84M19fUkVHIH0sCiAgLyogODQgKi8gIHsgVURfSXRlc3QsICAgICAgICBPX0ViLCAg
ICBPX0diLCAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDg1
ICovICB7IFVEX0l0ZXN0LCAgICAgICAgT19FdiwgICAgT19HdiwgICAgT19OT05FLCAgUF9hc298
UF9vc298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogODYgKi8gIHsgVURfSXhj
aGcsICAgICAgICBPX0ViLCAgICBPX0diLCAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4
fFBfcmV4YiB9LAogIC8qIDg3ICovICB7IFVEX0l4Y2hnLCAgICAgICAgT19FdiwgICAgT19Hdiwg
ICAgT19OT05FLCAgUF9hc298UF9vc298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAg
LyogODggKi8gIHsgVURfSW1vdiwgICAgICAgICBPX0ViLCAgICBPX0diLCAgICBPX05PTkUsICBQ
X2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDg5ICovICB7IFVEX0ltb3YsICAgICAg
ICAgT19FdiwgICAgT19HdiwgICAgT19OT05FLCAgUF9hc298UF9vc298UF9yZXh3fFBfcmV4cnxQ
X3JleHh8UF9yZXhiIH0sCiAgLyogOEEgKi8gIHsgVURfSW1vdiwgICAgICAgICBPX0diLCAgICBP
X0ViLCAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDhCICov
ICB7IFVEX0ltb3YsICAgICAgICAgT19HdiwgICAgT19FdiwgICAgT19OT05FLCAgUF9hc298UF9v
c298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogOEMgKi8gIHsgVURfSW1vdiwg
ICAgICAgICBPX0V2LCAgICBPX1MsICAgICBPX05PTkUsICBQX2Fzb3xQX29zb3xQX3JleHJ8UF9y
ZXh4fFBfcmV4YiB9LAogIC8qIDhEICovICB7IFVEX0lsZWEsICAgICAgICAgT19HdiwgICAgT19N
LCAgICAgT19OT05FLCAgUF9hc298UF9vc298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0s
CiAgLyogOEUgKi8gIHsgVURfSW1vdiwgICAgICAgICBPX1MsICAgICBPX0V2LCAgICBPX05PTkUs
ICBQX2Fzb3xQX29zb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDhGICovICB7IFVEX0ln
cnBfcmVnLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgSVRBQl9fMUJZVEVfX09QXzhG
X19SRUcgfSwKICAvKiA5MCAqLyAgeyBVRF9JeGNoZywgICAgICAgIE9fckFYcjgsIE9fckFYLCAg
IE9fTk9ORSwgIFBfb3NvfFBfcmV4d3xQX3JleGIgfSwKICAvKiA5MSAqLyAgeyBVRF9JeGNoZywg
ICAgICAgIE9fckNYcjksIE9fckFYLCAgIE9fTk9ORSwgIFBfb3NvfFBfcmV4d3xQX3JleGIgfSwK
ICAvKiA5MiAqLyAgeyBVRF9JeGNoZywgICAgICAgIE9fckRYcjEwLCBPX3JBWCwgICBPX05PTkUs
IFBfb3NvfFBfcmV4d3xQX3JleGIgfSwKICAvKiA5MyAqLyAgeyBVRF9JeGNoZywgICAgICAgIE9f
ckJYcjExLCBPX3JBWCwgICBPX05PTkUsIFBfb3NvfFBfcmV4d3xQX3JleGIgfSwKICAvKiA5NCAq
LyAgeyBVRF9JeGNoZywgICAgICAgIE9fclNQcjEyLCBPX3JBWCwgICBPX05PTkUsIFBfb3NvfFBf
cmV4d3xQX3JleGIgfSwKICAvKiA5NSAqLyAgeyBVRF9JeGNoZywgICAgICAgIE9fckJQcjEzLCBP
X3JBWCwgICBPX05PTkUsIFBfb3NvfFBfcmV4d3xQX3JleGIgfSwKICAvKiA5NiAqLyAgeyBVRF9J
eGNoZywgICAgICAgIE9fclNJcjE0LCBPX3JBWCwgICBPX05PTkUsIFBfb3NvfFBfcmV4d3xQX3Jl
eGIgfSwKICAvKiA5NyAqLyAgeyBVRF9JeGNoZywgICAgICAgIE9fckRJcjE1LCBPX3JBWCwgICBP
X05PTkUsIFBfb3NvfFBfcmV4d3xQX3JleGIgfSwKICAvKiA5OCAqLyAgeyBVRF9JZ3JwX29zaXpl
LCAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIElUQUJfXzFCWVRFX19PUF85OF9fT1NJWkUg
fSwKICAvKiA5OSAqLyAgeyBVRF9JZ3JwX29zaXplLCAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUs
ICAgIElUQUJfXzFCWVRFX19PUF85OV9fT1NJWkUgfSwKICAvKiA5QSAqLyAgeyBVRF9JY2FsbCwg
ICAgICAgIE9fQXAsICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfaW52NjR8UF9vc28gfSwKICAvKiA5
QiAqLyAgeyBVRF9Jd2FpdCwgICAgICAgIE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9u
ZSB9LAogIC8qIDlDICovICB7IFVEX0lncnBfbW9kZSwgICAgT19OT05FLCBPX05PTkUsIE9fTk9O
RSwgICAgSVRBQl9fMUJZVEVfX09QXzlDX19NT0RFIH0sCiAgLyogOUQgKi8gIHsgVURfSWdycF9t
b2RlLCAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBJVEFCX18xQllURV9fT1BfOURfX01P
REUgfSwKICAvKiA5RSAqLyAgeyBVRF9Jc2FoZiwgICAgICAgIE9fTk9ORSwgIE9fTk9ORSwgIE9f
Tk9ORSwgIFBfbm9uZSB9LAogIC8qIDlGICovICB7IFVEX0lsYWhmLCAgICAgICAgT19OT05FLCAg
T19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogQTAgKi8gIHsgVURfSW1vdiwgICAgICAg
ICBPX0FMLCAgICBPX09iLCAgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiBBMSAqLyAgeyBVRF9J
bW92LCAgICAgICAgIE9fckFYLCAgIE9fT3YsICAgIE9fTk9ORSwgIFBfYXNvfFBfb3NvfFBfcmV4
dyB9LAogIC8qIEEyICovICB7IFVEX0ltb3YsICAgICAgICAgT19PYiwgICAgT19BTCwgICAgT19O
T05FLCAgUF9ub25lIH0sCiAgLyogQTMgKi8gIHsgVURfSW1vdiwgICAgICAgICBPX092LCAgICBP
X3JBWCwgICBPX05PTkUsICBQX2Fzb3xQX29zb3xQX3JleHcgfSwKICAvKiBBNCAqLyAgeyBVRF9J
bW92c2IsICAgICAgIE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBfSW1wQWRkcnxQX25vbmUg
fSwKICAvKiBBNSAqLyAgeyBVRF9JZ3JwX29zaXplLCAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUs
ICAgIElUQUJfXzFCWVRFX19PUF9BNV9fT1NJWkUgfSwKICAvKiBBNiAqLyAgeyBVRF9JY21wc2Is
ICAgICAgIE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIEE3ICovICB7
IFVEX0lncnBfb3NpemUsICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgSVRBQl9fMUJZVEVf
X09QX0E3X19PU0laRSB9LAogIC8qIEE4ICovICB7IFVEX0l0ZXN0LCAgICAgICAgT19BTCwgICAg
T19JYiwgICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogQTkgKi8gIHsgVURfSXRlc3QsICAgICAg
ICBPX3JBWCwgICBPX0l6LCAgICBPX05PTkUsICBQX29zb3xQX3JleHcgfSwKICAvKiBBQSAqLyAg
eyBVRF9Jc3Rvc2IsICAgICAgIE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBfSW1wQWRkcnxQ
X25vbmUgfSwKICAvKiBBQiAqLyAgeyBVRF9JZ3JwX29zaXplLCAgIE9fTk9ORSwgT19OT05FLCBP
X05PTkUsICAgIElUQUJfXzFCWVRFX19PUF9BQl9fT1NJWkUgfSwKICAvKiBBQyAqLyAgeyBVRF9J
bG9kc2IsICAgICAgIE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBfSW1wQWRkcnxQX25vbmUg
fSwKICAvKiBBRCAqLyAgeyBVRF9JZ3JwX29zaXplLCAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUs
ICAgIElUQUJfXzFCWVRFX19PUF9BRF9fT1NJWkUgfSwKICAvKiBBRSAqLyAgeyBVRF9Jc2Nhc2Is
ICAgICAgIE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIEFGICovICB7
IFVEX0lncnBfb3NpemUsICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgSVRBQl9fMUJZVEVf
X09QX0FGX19PU0laRSB9LAogIC8qIEIwICovICB7IFVEX0ltb3YsICAgICAgICAgT19BTHI4Yiwg
T19JYiwgICAgT19OT05FLCAgUF9yZXhiIH0sCiAgLyogQjEgKi8gIHsgVURfSW1vdiwgICAgICAg
ICBPX0NMcjliLCBPX0liLCAgICBPX05PTkUsICBQX3JleGIgfSwKICAvKiBCMiAqLyAgeyBVRF9J
bW92LCAgICAgICAgIE9fRExyMTBiLCBPX0liLCAgICBPX05PTkUsIFBfcmV4YiB9LAogIC8qIEIz
ICovICB7IFVEX0ltb3YsICAgICAgICAgT19CTHIxMWIsIE9fSWIsICAgIE9fTk9ORSwgUF9yZXhi
IH0sCiAgLyogQjQgKi8gIHsgVURfSW1vdiwgICAgICAgICBPX0FIcjEyYiwgT19JYiwgICAgT19O
T05FLCBQX3JleGIgfSwKICAvKiBCNSAqLyAgeyBVRF9JbW92LCAgICAgICAgIE9fQ0hyMTNiLCBP
X0liLCAgICBPX05PTkUsIFBfcmV4YiB9LAogIC8qIEI2ICovICB7IFVEX0ltb3YsICAgICAgICAg
T19ESHIxNGIsIE9fSWIsICAgIE9fTk9ORSwgUF9yZXhiIH0sCiAgLyogQjcgKi8gIHsgVURfSW1v
diwgICAgICAgICBPX0JIcjE1YiwgT19JYiwgICAgT19OT05FLCBQX3JleGIgfSwKICAvKiBCOCAq
LyAgeyBVRF9JbW92LCAgICAgICAgIE9fckFYcjgsIE9fSXYsICAgIE9fTk9ORSwgIFBfb3NvfFBf
cmV4d3xQX3JleGIgfSwKICAvKiBCOSAqLyAgeyBVRF9JbW92LCAgICAgICAgIE9fckNYcjksIE9f
SXYsICAgIE9fTk9ORSwgIFBfb3NvfFBfcmV4d3xQX3JleGIgfSwKICAvKiBCQSAqLyAgeyBVRF9J
bW92LCAgICAgICAgIE9fckRYcjEwLCBPX0l2LCAgICBPX05PTkUsIFBfb3NvfFBfcmV4d3xQX3Jl
eGIgfSwKICAvKiBCQiAqLyAgeyBVRF9JbW92LCAgICAgICAgIE9fckJYcjExLCBPX0l2LCAgICBP
X05PTkUsIFBfb3NvfFBfcmV4d3xQX3JleGIgfSwKICAvKiBCQyAqLyAgeyBVRF9JbW92LCAgICAg
ICAgIE9fclNQcjEyLCBPX0l2LCAgICBPX05PTkUsIFBfb3NvfFBfcmV4d3xQX3JleGIgfSwKICAv
KiBCRCAqLyAgeyBVRF9JbW92LCAgICAgICAgIE9fckJQcjEzLCBPX0l2LCAgICBPX05PTkUsIFBf
b3NvfFBfcmV4d3xQX3JleGIgfSwKICAvKiBCRSAqLyAgeyBVRF9JbW92LCAgICAgICAgIE9fclNJ
cjE0LCBPX0l2LCAgICBPX05PTkUsIFBfb3NvfFBfcmV4d3xQX3JleGIgfSwKICAvKiBCRiAqLyAg
eyBVRF9JbW92LCAgICAgICAgIE9fckRJcjE1LCBPX0l2LCAgICBPX05PTkUsIFBfb3NvfFBfcmV4
d3xQX3JleGIgfSwKICAvKiBDMCAqLyAgeyBVRF9JZ3JwX3JlZywgICAgIE9fTk9ORSwgT19OT05F
LCBPX05PTkUsICAgIElUQUJfXzFCWVRFX19PUF9DMF9fUkVHIH0sCiAgLyogQzEgKi8gIHsgVURf
SWdycF9yZWcsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBJVEFCX18xQllURV9fT1Bf
QzFfX1JFRyB9LAogIC8qIEMyICovICB7IFVEX0lyZXQsICAgICAgICAgT19JdywgICAgT19OT05F
LCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogQzMgKi8gIHsgVURfSXJldCwgICAgICAgICBPX05P
TkUsICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiBDNCAqLyAgeyBVRF9JbGVzLCAg
ICAgICAgIE9fR3YsICAgIE9fTSwgICAgIE9fTk9ORSwgIFBfaW52NjR8UF9hc298UF9vc28gfSwK
ICAvKiBDNSAqLyAgeyBVRF9JbGRzLCAgICAgICAgIE9fR3YsICAgIE9fTSwgICAgIE9fTk9ORSwg
IFBfaW52NjR8UF9hc298UF9vc28gfSwKICAvKiBDNiAqLyAgeyBVRF9JZ3JwX3JlZywgICAgIE9f
Tk9ORSwgT19OT05FLCBPX05PTkUsICAgIElUQUJfXzFCWVRFX19PUF9DNl9fUkVHIH0sCiAgLyog
QzcgKi8gIHsgVURfSWdycF9yZWcsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBJVEFC
X18xQllURV9fT1BfQzdfX1JFRyB9LAogIC8qIEM4ICovICB7IFVEX0llbnRlciwgICAgICAgT19J
dywgICAgT19JYiwgICAgT19OT05FLCAgUF9kZWY2NHxQX2RlcE18UF9ub25lIH0sCiAgLyogQzkg
Ki8gIHsgVURfSWxlYXZlLCAgICAgICBPX05PTkUsICBPX05PTkUsICBPX05PTkUsICBQX25vbmUg
fSwKICAvKiBDQSAqLyAgeyBVRF9JcmV0ZiwgICAgICAgIE9fSXcsICAgIE9fTk9ORSwgIE9fTk9O
RSwgIFBfbm9uZSB9LAogIC8qIENCICovICB7IFVEX0lyZXRmLCAgICAgICAgT19OT05FLCAgT19O
T05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogQ0MgKi8gIHsgVURfSWludDMsICAgICAgICBP
X05PTkUsICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiBDRCAqLyAgeyBVRF9JaW50
LCAgICAgICAgIE9fSWIsICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIENFICov
ICB7IFVEX0lpbnRvLCAgICAgICAgT19OT05FLCAgT19OT05FLCAgT19OT05FLCAgUF9pbnY2NHxQ
X25vbmUgfSwKICAvKiBDRiAqLyAgeyBVRF9JZ3JwX29zaXplLCAgIE9fTk9ORSwgT19OT05FLCBP
X05PTkUsICAgIElUQUJfXzFCWVRFX19PUF9DRl9fT1NJWkUgfSwKICAvKiBEMCAqLyAgeyBVRF9J
Z3JwX3JlZywgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIElUQUJfXzFCWVRFX19PUF9E
MF9fUkVHIH0sCiAgLyogRDEgKi8gIHsgVURfSWdycF9yZWcsICAgICBPX05PTkUsIE9fTk9ORSwg
T19OT05FLCAgICBJVEFCX18xQllURV9fT1BfRDFfX1JFRyB9LAogIC8qIEQyICovICB7IFVEX0ln
cnBfcmVnLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgSVRBQl9fMUJZVEVfX09QX0Qy
X19SRUcgfSwKICAvKiBEMyAqLyAgeyBVRF9JZ3JwX3JlZywgICAgIE9fTk9ORSwgT19OT05FLCBP
X05PTkUsICAgIElUQUJfXzFCWVRFX19PUF9EM19fUkVHIH0sCiAgLyogRDQgKi8gIHsgVURfSWFh
bSwgICAgICAgICBPX0liLCAgICBPX05PTkUsICBPX05PTkUsICBQX2ludjY0fFBfbm9uZSB9LAog
IC8qIEQ1ICovICB7IFVEX0lhYWQsICAgICAgICAgT19JYiwgICAgT19OT05FLCAgT19OT05FLCAg
UF9pbnY2NHxQX25vbmUgfSwKICAvKiBENiAqLyAgeyBVRF9Jc2FsYywgICAgICAgIE9fTk9ORSwg
IE9fTk9ORSwgIE9fTk9ORSwgIFBfaW52NjR8UF9ub25lIH0sCiAgLyogRDcgKi8gIHsgVURfSXhs
YXRiLCAgICAgICBPX05PTkUsICBPX05PTkUsICBPX05PTkUsICBQX3JleHcgfSwKICAvKiBEOCAq
LyAgeyBVRF9JZ3JwX21vZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIElUQUJfXzFC
WVRFX19PUF9EOF9fTU9EIH0sCiAgLyogRDkgKi8gIHsgVURfSWdycF9tb2QsICAgICBPX05PTkUs
IE9fTk9ORSwgT19OT05FLCAgICBJVEFCX18xQllURV9fT1BfRDlfX01PRCB9LAogIC8qIERBICov
ICB7IFVEX0lncnBfbW9kLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgSVRBQl9fMUJZ
VEVfX09QX0RBX19NT0QgfSwKICAvKiBEQiAqLyAgeyBVRF9JZ3JwX21vZCwgICAgIE9fTk9ORSwg
T19OT05FLCBPX05PTkUsICAgIElUQUJfXzFCWVRFX19PUF9EQl9fTU9EIH0sCiAgLyogREMgKi8g
IHsgVURfSWdycF9tb2QsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBJVEFCX18xQllU
RV9fT1BfRENfX01PRCB9LAogIC8qIEREICovICB7IFVEX0lncnBfbW9kLCAgICAgT19OT05FLCBP
X05PTkUsIE9fTk9ORSwgICAgSVRBQl9fMUJZVEVfX09QX0REX19NT0QgfSwKICAvKiBERSAqLyAg
eyBVRF9JZ3JwX21vZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIElUQUJfXzFCWVRF
X19PUF9ERV9fTU9EIH0sCiAgLyogREYgKi8gIHsgVURfSWdycF9tb2QsICAgICBPX05PTkUsIE9f
Tk9ORSwgT19OT05FLCAgICBJVEFCX18xQllURV9fT1BfREZfX01PRCB9LAogIC8qIEUwICovICB7
IFVEX0lsb29wbnosICAgICAgT19KYiwgICAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAg
LyogRTEgKi8gIHsgVURfSWxvb3BlLCAgICAgICBPX0piLCAgICBPX05PTkUsICBPX05PTkUsICBQ
X25vbmUgfSwKICAvKiBFMiAqLyAgeyBVRF9JbG9vcCwgICAgICAgIE9fSmIsICAgIE9fTk9ORSwg
IE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIEUzICovICB7IFVEX0lncnBfYXNpemUsICAgT19OT05F
LCBPX05PTkUsIE9fTk9ORSwgICAgSVRBQl9fMUJZVEVfX09QX0UzX19BU0laRSB9LAogIC8qIEU0
ICovICB7IFVEX0lpbiwgICAgICAgICAgT19BTCwgICAgT19JYiwgICAgT19OT05FLCAgUF9ub25l
IH0sCiAgLyogRTUgKi8gIHsgVURfSWluLCAgICAgICAgICBPX2VBWCwgICBPX0liLCAgICBPX05P
TkUsICBQX29zbyB9LAogIC8qIEU2ICovICB7IFVEX0lvdXQsICAgICAgICAgT19JYiwgICAgT19B
TCwgICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogRTcgKi8gIHsgVURfSW91dCwgICAgICAgICBP
X0liLCAgICBPX2VBWCwgICBPX05PTkUsICBQX29zbyB9LAogIC8qIEU4ICovICB7IFVEX0ljYWxs
LCAgICAgICAgT19KeiwgICAgT19OT05FLCAgT19OT05FLCAgUF9kZWY2NHxQX29zbyB9LAogIC8q
IEU5ICovICB7IFVEX0lqbXAsICAgICAgICAgT19KeiwgICAgT19OT05FLCAgT19OT05FLCAgUF9k
ZWY2NHxQX2RlcE18UF9vc28gfSwKICAvKiBFQSAqLyAgeyBVRF9Jam1wLCAgICAgICAgIE9fQXAs
ICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfaW52NjR8UF9ub25lIH0sCiAgLyogRUIgKi8gIHsgVURf
SWptcCwgICAgICAgICBPX0piLCAgICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiBF
QyAqLyAgeyBVRF9JaW4sICAgICAgICAgIE9fQUwsICAgIE9fRFgsICAgIE9fTk9ORSwgIFBfbm9u
ZSB9LAogIC8qIEVEICovICB7IFVEX0lpbiwgICAgICAgICAgT19lQVgsICAgT19EWCwgICAgT19O
T05FLCAgUF9vc28gfSwKICAvKiBFRSAqLyAgeyBVRF9Jb3V0LCAgICAgICAgIE9fRFgsICAgIE9f
QUwsICAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIEVGICovICB7IFVEX0lvdXQsICAgICAgICAg
T19EWCwgICAgT19lQVgsICAgT19OT05FLCAgUF9vc28gfSwKICAvKiBGMCAqLyAgeyBVRF9JbG9j
aywgICAgICAgIE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIEYxICov
ICB7IFVEX0lpbnQxLCAgICAgICAgT19OT05FLCAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0s
CiAgLyogRjIgKi8gIHsgVURfSXJlcG5lLCAgICAgICBPX05PTkUsICBPX05PTkUsICBPX05PTkUs
ICBQX25vbmUgfSwKICAvKiBGMyAqLyAgeyBVRF9JcmVwLCAgICAgICAgIE9fTk9ORSwgIE9fTk9O
RSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIEY0ICovICB7IFVEX0lobHQsICAgICAgICAgT19O
T05FLCAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogRjUgKi8gIHsgVURfSWNtYywg
ICAgICAgICBPX05PTkUsICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiBGNiAqLyAg
eyBVRF9JZ3JwX3JlZywgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIElUQUJfXzFCWVRF
X19PUF9GNl9fUkVHIH0sCiAgLyogRjcgKi8gIHsgVURfSWdycF9yZWcsICAgICBPX05PTkUsIE9f
Tk9ORSwgT19OT05FLCAgICBJVEFCX18xQllURV9fT1BfRjdfX1JFRyB9LAogIC8qIEY4ICovICB7
IFVEX0ljbGMsICAgICAgICAgT19OT05FLCAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAg
LyogRjkgKi8gIHsgVURfSXN0YywgICAgICAgICBPX05PTkUsICBPX05PTkUsICBPX05PTkUsICBQ
X25vbmUgfSwKICAvKiBGQSAqLyAgeyBVRF9JY2xpLCAgICAgICAgIE9fTk9ORSwgIE9fTk9ORSwg
IE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIEZCICovICB7IFVEX0lzdGksICAgICAgICAgT19OT05F
LCAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogRkMgKi8gIHsgVURfSWNsZCwgICAg
ICAgICBPX05PTkUsICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiBGRCAqLyAgeyBV
RF9Jc3RkLCAgICAgICAgIE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8q
IEZFICovICB7IFVEX0lncnBfcmVnLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgSVRB
Ql9fMUJZVEVfX09QX0ZFX19SRUcgfSwKICAvKiBGRiAqLyAgeyBVRF9JZ3JwX3JlZywgICAgIE9f
Tk9ORSwgT19OT05FLCBPX05PTkUsICAgIElUQUJfXzFCWVRFX19PUF9GRl9fUkVHIH0sCn07Cgpz
dGF0aWMgc3RydWN0IHVkX2l0YWJfZW50cnkgaXRhYl9fMWJ5dGVfX29wXzYwX19vc2l6ZVszXSA9
IHsKICAvKiAwMCAqLyAgeyBVRF9JcHVzaGEsICAgICAgIE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9O
RSwgIFBfaW52NjR8UF9vc28gfSwKICAvKiAwMSAqLyAgeyBVRF9JcHVzaGFkLCAgICAgIE9fTk9O
RSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBfaW52NjR8UF9vc28gfSwKICAvKiAwMiAqLyAgeyBVRF9J
aW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAp9OwoKc3Rh
dGljIHN0cnVjdCB1ZF9pdGFiX2VudHJ5IGl0YWJfXzFieXRlX19vcF82MV9fb3NpemVbM10gPSB7
CiAgLyogMDAgKi8gIHsgVURfSXBvcGEsICAgICAgICBPX05PTkUsICBPX05PTkUsICBPX05PTkUs
ICBQX2ludjY0fFBfb3NvIH0sCiAgLyogMDEgKi8gIHsgVURfSXBvcGFkLCAgICAgICBPX05PTkUs
ICBPX05PTkUsICBPX05PTkUsICBQX2ludjY0fFBfb3NvIH0sCiAgLyogMDIgKi8gIHsgVURfSWlu
dmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKfTsKCnN0YXRp
YyBzdHJ1Y3QgdWRfaXRhYl9lbnRyeSBpdGFiX18xYnl0ZV9fb3BfNjNfX21vZGVbM10gPSB7CiAg
LyogMDAgKi8gIHsgVURfSWFycGwsICAgICAgICBPX0V3LCAgICBPX0d3LCAgICBPX05PTkUsICBQ
X2ludjY0fFBfYXNvIH0sCiAgLyogMDEgKi8gIHsgVURfSWFycGwsICAgICAgICBPX0V3LCAgICBP
X0d3LCAgICBPX05PTkUsICBQX2ludjY0fFBfYXNvIH0sCiAgLyogMDIgKi8gIHsgVURfSW1vdnN4
ZCwgICAgICBPX0d2LCAgICBPX0VkLCAgICBPX05PTkUsICBQX2MyfFBfYXNvfFBfb3NvfFBfcmV4
d3xQX3JleHh8UF9yZXhyfFBfcmV4YiB9LAp9OwoKc3RhdGljIHN0cnVjdCB1ZF9pdGFiX2VudHJ5
IGl0YWJfXzFieXRlX19vcF82ZF9fb3NpemVbM10gPSB7CiAgLyogMDAgKi8gIHsgVURfSWluc3cs
ICAgICAgICBPX05PTkUsICBPX05PTkUsICBPX05PTkUsICBQX29zbyB9LAogIC8qIDAxICovICB7
IFVEX0lpbnNkLCAgICAgICAgT19OT05FLCAgT19OT05FLCAgT19OT05FLCAgUF9vc28gfSwKICAv
KiAwMiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBf
bm9uZSB9LAp9OwoKc3RhdGljIHN0cnVjdCB1ZF9pdGFiX2VudHJ5IGl0YWJfXzFieXRlX19vcF82
Zl9fb3NpemVbM10gPSB7CiAgLyogMDAgKi8gIHsgVURfSW91dHN3LCAgICAgICBPX05PTkUsICBP
X05PTkUsICBPX05PTkUsICBQX29zbyB9LAogIC8qIDAxICovICB7IFVEX0lvdXRzZCwgICAgICAg
T19OT05FLCAgT19OT05FLCAgT19OT05FLCAgUF9vc28gfSwKICAvKiAwMiAqLyAgeyBVRF9Jb3V0
c3EsICAgICAgIE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBfb3NvIH0sCn07CgpzdGF0aWMg
c3RydWN0IHVkX2l0YWJfZW50cnkgaXRhYl9fMWJ5dGVfX29wXzgwX19yZWdbOF0gPSB7CiAgLyog
MDAgKi8gIHsgVURfSWFkZCwgICAgICAgICBPX0ViLCAgICBPX0liLCAgICBPX05PTkUsICBQX2Mx
fFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDEgKi8gIHsgVURfSW9yLCAgICAg
ICAgICBPX0ViLCAgICBPX0liLCAgICBPX05PTkUsICBQX2MxfFBfYXNvfFBfcmV4cnxQX3JleHh8
UF9yZXhiIH0sCiAgLyogMDIgKi8gIHsgVURfSWFkYywgICAgICAgICBPX0ViLCAgICBPX0liLCAg
ICBPX05PTkUsICBQX2MxfFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDMgKi8g
IHsgVURfSXNiYiwgICAgICAgICBPX0ViLCAgICBPX0liLCAgICBPX05PTkUsICBQX2MxfFBfYXNv
fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDQgKi8gIHsgVURfSWFuZCwgICAgICAgICBP
X0ViLCAgICBPX0liLCAgICBPX05PTkUsICBQX2MxfFBfYXNvfFBfcmV4d3xQX3JleHJ8UF9yZXh4
fFBfcmV4YiB9LAogIC8qIDA1ICovICB7IFVEX0lzdWIsICAgICAgICAgT19FYiwgICAgT19JYiwg
ICAgT19OT05FLCAgUF9jMXxQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDA2ICov
ICB7IFVEX0l4b3IsICAgICAgICAgT19FYiwgICAgT19JYiwgICAgT19OT05FLCAgUF9jMXxQX2Fz
b3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDA3ICovICB7IFVEX0ljbXAsICAgICAgICAg
T19FYiwgICAgT19JYiwgICAgT19OT05FLCAgUF9jMXxQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4
YiB9LAp9OwoKc3RhdGljIHN0cnVjdCB1ZF9pdGFiX2VudHJ5IGl0YWJfXzFieXRlX19vcF84MV9f
cmVnWzhdID0gewogIC8qIDAwICovICB7IFVEX0lhZGQsICAgICAgICAgT19FdiwgICAgT19Jeiwg
ICAgT19OT05FLCAgUF9jMXxQX2Fzb3xQX29zb3xQX3JleHd8UF9yZXhyfFBfcmV4eHxQX3JleGIg
fSwKICAvKiAwMSAqLyAgeyBVRF9Jb3IsICAgICAgICAgIE9fRXYsICAgIE9fSXosICAgIE9fTk9O
RSwgIFBfYzF8UF9hc298UF9vc298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyog
MDIgKi8gIHsgVURfSWFkYywgICAgICAgICBPX0V2LCAgICBPX0l6LCAgICBPX05PTkUsICBQX2Mx
fFBfYXNvfFBfb3NvfFBfcmV4d3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDAzICovICB7
IFVEX0lzYmIsICAgICAgICAgT19FdiwgICAgT19JeiwgICAgT19OT05FLCAgUF9jMXxQX2Fzb3xQ
X29zb3xQX3JleHd8UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAwNCAqLyAgeyBVRF9JYW5k
LCAgICAgICAgIE9fRXYsICAgIE9fSXosICAgIE9fTk9ORSwgIFBfYzF8UF9hc298UF9vc298UF9y
ZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDUgKi8gIHsgVURfSXN1YiwgICAgICAg
ICBPX0V2LCAgICBPX0l6LCAgICBPX05PTkUsICBQX2MxfFBfYXNvfFBfb3NvfFBfcmV4d3xQX3Jl
eHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDA2ICovICB7IFVEX0l4b3IsICAgICAgICAgT19Fdiwg
ICAgT19JeiwgICAgT19OT05FLCAgUF9jMXxQX2Fzb3xQX29zb3xQX3JleHd8UF9yZXhyfFBfcmV4
eHxQX3JleGIgfSwKICAvKiAwNyAqLyAgeyBVRF9JY21wLCAgICAgICAgIE9fRXYsICAgIE9fSXos
ICAgIE9fTk9ORSwgIFBfYzF8UF9hc298UF9vc298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhi
IH0sCn07CgpzdGF0aWMgc3RydWN0IHVkX2l0YWJfZW50cnkgaXRhYl9fMWJ5dGVfX29wXzgyX19y
ZWdbOF0gPSB7CiAgLyogMDAgKi8gIHsgVURfSWFkZCwgICAgICAgICBPX0ViLCAgICBPX0liLCAg
ICBPX05PTkUsICBQX2MxfFBfaW52NjR8UF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAv
KiAwMSAqLyAgeyBVRF9Jb3IsICAgICAgICAgIE9fRWIsICAgIE9fSWIsICAgIE9fTk9ORSwgIFBf
YzF8UF9pbnY2NHxQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDAyICovICB7IFVE
X0lhZGMsICAgICAgICAgT19FYiwgICAgT19JYiwgICAgT19OT05FLCAgUF9jMXxQX2ludjY0fFBf
YXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDMgKi8gIHsgVURfSXNiYiwgICAgICAg
ICBPX0ViLCAgICBPX0liLCAgICBPX05PTkUsICBQX2MxfFBfaW52NjR8UF9hc298UF9yZXhyfFBf
cmV4eHxQX3JleGIgfSwKICAvKiAwNCAqLyAgeyBVRF9JYW5kLCAgICAgICAgIE9fRWIsICAgIE9f
SWIsICAgIE9fTk9ORSwgIFBfYzF8UF9pbnY2NHxQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9
LAogIC8qIDA1ICovICB7IFVEX0lzdWIsICAgICAgICAgT19FYiwgICAgT19JYiwgICAgT19OT05F
LCAgUF9jMXxQX2ludjY0fFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDYgKi8g
IHsgVURfSXhvciwgICAgICAgICBPX0ViLCAgICBPX0liLCAgICBPX05PTkUsICBQX2MxfFBfaW52
NjR8UF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAwNyAqLyAgeyBVRF9JY21wLCAg
ICAgICAgIE9fRWIsICAgIE9fSWIsICAgIE9fTk9ORSwgIFBfYzF8UF9pbnY2NHxQX2Fzb3xQX3Jl
eHJ8UF9yZXh4fFBfcmV4YiB9LAp9OwoKc3RhdGljIHN0cnVjdCB1ZF9pdGFiX2VudHJ5IGl0YWJf
XzFieXRlX19vcF84M19fcmVnWzhdID0gewogIC8qIDAwICovICB7IFVEX0lhZGQsICAgICAgICAg
T19FdiwgICAgT19JYiwgICAgT19OT05FLCAgUF9jMXxQX2Fzb3xQX29zb3xQX3JleHd8UF9yZXhy
fFBfcmV4eHxQX3JleGIgfSwKICAvKiAwMSAqLyAgeyBVRF9Jb3IsICAgICAgICAgIE9fRXYsICAg
IE9fSWIsICAgIE9fTk9ORSwgIFBfYzF8UF9hc298UF9vc298UF9yZXh3fFBfcmV4cnxQX3JleHh8
UF9yZXhiIH0sCiAgLyogMDIgKi8gIHsgVURfSWFkYywgICAgICAgICBPX0V2LCAgICBPX0liLCAg
ICBPX05PTkUsICBQX2MxfFBfYXNvfFBfb3NvfFBfcmV4d3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9
LAogIC8qIDAzICovICB7IFVEX0lzYmIsICAgICAgICAgT19FdiwgICAgT19JYiwgICAgT19OT05F
LCAgUF9jMXxQX2Fzb3xQX29zb3xQX3JleHd8UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAw
NCAqLyAgeyBVRF9JYW5kLCAgICAgICAgIE9fRXYsICAgIE9fSWIsICAgIE9fTk9ORSwgIFBfYzF8
UF9hc298UF9vc298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDUgKi8gIHsg
VURfSXN1YiwgICAgICAgICBPX0V2LCAgICBPX0liLCAgICBPX05PTkUsICBQX2MxfFBfYXNvfFBf
b3NvfFBfcmV4d3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDA2ICovICB7IFVEX0l4b3Is
ICAgICAgICAgT19FdiwgICAgT19JYiwgICAgT19OT05FLCAgUF9jMXxQX2Fzb3xQX29zb3xQX3Jl
eHd8UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAwNyAqLyAgeyBVRF9JY21wLCAgICAgICAg
IE9fRXYsICAgIE9fSWIsICAgIE9fTk9ORSwgIFBfYzF8UF9hc298UF9vc298UF9yZXh3fFBfcmV4
cnxQX3JleHh8UF9yZXhiIH0sCn07CgpzdGF0aWMgc3RydWN0IHVkX2l0YWJfZW50cnkgaXRhYl9f
MWJ5dGVfX29wXzhmX19yZWdbOF0gPSB7CiAgLyogMDAgKi8gIHsgVURfSXBvcCwgICAgICAgICBP
X0V2LCAgICBPX05PTkUsICBPX05PTkUsICBQX2MxfFBfZGVmNjR8UF9kZXBNfFBfYXNvfFBfb3Nv
fFBfcmV4d3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDAxICovICB7IFVEX0lpbnZhbGlk
LCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMDIgKi8gIHsg
VURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAv
KiAwMyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBf
bm9uZSB9LAogIC8qIDA0ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9f
Tk9ORSwgICAgUF9ub25lIH0sCiAgLyogMDUgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUs
IE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAwNiAqLyAgeyBVRF9JaW52YWxpZCwg
ICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDA3ICovICB7IFVE
X0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCn07Cgpz
dGF0aWMgc3RydWN0IHVkX2l0YWJfZW50cnkgaXRhYl9fMWJ5dGVfX29wXzk4X19vc2l6ZVszXSA9
IHsKICAvKiAwMCAqLyAgeyBVRF9JY2J3LCAgICAgICAgIE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9O
RSwgIFBfb3NvfFBfcmV4dyB9LAogIC8qIDAxICovICB7IFVEX0ljd2RlLCAgICAgICAgT19OT05F
LCAgT19OT05FLCAgT19OT05FLCAgUF9vc298UF9yZXh3IH0sCiAgLyogMDIgKi8gIHsgVURfSWNk
cWUsICAgICAgICBPX05PTkUsICBPX05PTkUsICBPX05PTkUsICBQX29zb3xQX3JleHcgfSwKfTsK
CnN0YXRpYyBzdHJ1Y3QgdWRfaXRhYl9lbnRyeSBpdGFiX18xYnl0ZV9fb3BfOTlfX29zaXplWzNd
ID0gewogIC8qIDAwICovICB7IFVEX0ljd2QsICAgICAgICAgT19OT05FLCAgT19OT05FLCAgT19O
T05FLCAgUF9vc298UF9yZXh3IH0sCiAgLyogMDEgKi8gIHsgVURfSWNkcSwgICAgICAgICBPX05P
TkUsICBPX05PTkUsICBPX05PTkUsICBQX29zb3xQX3JleHcgfSwKICAvKiAwMiAqLyAgeyBVRF9J
Y3FvLCAgICAgICAgIE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBfb3NvfFBfcmV4dyB9LAp9
OwoKc3RhdGljIHN0cnVjdCB1ZF9pdGFiX2VudHJ5IGl0YWJfXzFieXRlX19vcF85Y19fbW9kZVsz
XSA9IHsKICAvKiAwMCAqLyAgeyBVRF9JZ3JwX29zaXplLCAgIE9fTk9ORSwgT19OT05FLCBPX05P
TkUsICAgIElUQUJfXzFCWVRFX19PUF85Q19fTU9ERV9fT1BfMDBfX09TSVpFIH0sCiAgLyogMDEg
Ki8gIHsgVURfSWdycF9vc2l6ZSwgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBJVEFCX18x
QllURV9fT1BfOUNfX01PREVfX09QXzAxX19PU0laRSB9LAogIC8qIDAyICovICB7IFVEX0lwdXNo
ZnEsICAgICAgT19OT05FLCAgT19OT05FLCAgT19OT05FLCAgUF9kZWY2NHxQX29zb3xQX3JleHcg
fSwKfTsKCnN0YXRpYyBzdHJ1Y3QgdWRfaXRhYl9lbnRyeSBpdGFiX18xYnl0ZV9fb3BfOWNfX21v
ZGVfX29wXzAwX19vc2l6ZVszXSA9IHsKICAvKiAwMCAqLyAgeyBVRF9JcHVzaGZ3LCAgICAgIE9f
Tk9ORSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBfZGVmNjR8UF9vc28gfSwKICAvKiAwMSAqLyAgeyBV
RF9JcHVzaGZkLCAgICAgIE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBfZGVmNjR8UF9vc28g
fSwKICAvKiAwMiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUs
ICAgIFBfbm9uZSB9LAp9OwoKc3RhdGljIHN0cnVjdCB1ZF9pdGFiX2VudHJ5IGl0YWJfXzFieXRl
X19vcF85Y19fbW9kZV9fb3BfMDFfX29zaXplWzNdID0gewogIC8qIDAwICovICB7IFVEX0lwdXNo
ZncsICAgICAgT19OT05FLCAgT19OT05FLCAgT19OT05FLCAgUF9kZWY2NHxQX29zbyB9LAogIC8q
IDAxICovICB7IFVEX0lwdXNoZmQsICAgICAgT19OT05FLCAgT19OT05FLCAgT19OT05FLCAgUF9k
ZWY2NHxQX29zbyB9LAogIC8qIDAyICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05P
TkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCn07CgpzdGF0aWMgc3RydWN0IHVkX2l0YWJfZW50cnkg
aXRhYl9fMWJ5dGVfX29wXzlkX19tb2RlWzNdID0gewogIC8qIDAwICovICB7IFVEX0lncnBfb3Np
emUsICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgSVRBQl9fMUJZVEVfX09QXzlEX19NT0RF
X19PUF8wMF9fT1NJWkUgfSwKICAvKiAwMSAqLyAgeyBVRF9JZ3JwX29zaXplLCAgIE9fTk9ORSwg
T19OT05FLCBPX05PTkUsICAgIElUQUJfXzFCWVRFX19PUF85RF9fTU9ERV9fT1BfMDFfX09TSVpF
IH0sCiAgLyogMDIgKi8gIHsgVURfSXBvcGZxLCAgICAgICBPX05PTkUsICBPX05PTkUsICBPX05P
TkUsICBQX2RlZjY0fFBfZGVwTXxQX29zbyB9LAp9OwoKc3RhdGljIHN0cnVjdCB1ZF9pdGFiX2Vu
dHJ5IGl0YWJfXzFieXRlX19vcF85ZF9fbW9kZV9fb3BfMDBfX29zaXplWzNdID0gewogIC8qIDAw
ICovICB7IFVEX0lwb3BmdywgICAgICAgT19OT05FLCAgT19OT05FLCAgT19OT05FLCAgUF9kZWY2
NHxQX2RlcE18UF9vc28gfSwKICAvKiAwMSAqLyAgeyBVRF9JcG9wZmQsICAgICAgIE9fTk9ORSwg
IE9fTk9ORSwgIE9fTk9ORSwgIFBfZGVmNjR8UF9kZXBNfFBfb3NvIH0sCiAgLyogMDIgKi8gIHsg
VURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKfTsK
CnN0YXRpYyBzdHJ1Y3QgdWRfaXRhYl9lbnRyeSBpdGFiX18xYnl0ZV9fb3BfOWRfX21vZGVfX29w
XzAxX19vc2l6ZVszXSA9IHsKICAvKiAwMCAqLyAgeyBVRF9JcG9wZncsICAgICAgIE9fTk9ORSwg
IE9fTk9ORSwgIE9fTk9ORSwgIFBfZGVmNjR8UF9kZXBNfFBfb3NvIH0sCiAgLyogMDEgKi8gIHsg
VURfSXBvcGZkLCAgICAgICBPX05PTkUsICBPX05PTkUsICBPX05PTkUsICBQX2RlZjY0fFBfZGVw
TXxQX29zbyB9LAogIC8qIDAyICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUs
IE9fTk9ORSwgICAgUF9ub25lIH0sCn07CgpzdGF0aWMgc3RydWN0IHVkX2l0YWJfZW50cnkgaXRh
Yl9fMWJ5dGVfX29wX2E1X19vc2l6ZVszXSA9IHsKICAvKiAwMCAqLyAgeyBVRF9JbW92c3csICAg
ICAgIE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBfSW1wQWRkcnxQX29zb3xQX3JleHcgfSwK
ICAvKiAwMSAqLyAgeyBVRF9JbW92c2QsICAgICAgIE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9ORSwg
IFBfSW1wQWRkcnxQX29zb3xQX3JleHcgfSwKICAvKiAwMiAqLyAgeyBVRF9JbW92c3EsICAgICAg
IE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBfSW1wQWRkcnxQX29zb3xQX3JleHcgfSwKfTsK
CnN0YXRpYyBzdHJ1Y3QgdWRfaXRhYl9lbnRyeSBpdGFiX18xYnl0ZV9fb3BfYTdfX29zaXplWzNd
ID0gewogIC8qIDAwICovICB7IFVEX0ljbXBzdywgICAgICAgT19OT05FLCAgT19OT05FLCAgT19O
T05FLCAgUF9vc298UF9yZXh3IH0sCiAgLyogMDEgKi8gIHsgVURfSWNtcHNkLCAgICAgICBPX05P
TkUsICBPX05PTkUsICBPX05PTkUsICBQX29zb3xQX3JleHcgfSwKICAvKiAwMiAqLyAgeyBVRF9J
Y21wc3EsICAgICAgIE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBfb3NvfFBfcmV4dyB9LAp9
OwoKc3RhdGljIHN0cnVjdCB1ZF9pdGFiX2VudHJ5IGl0YWJfXzFieXRlX19vcF9hYl9fb3NpemVb
M10gPSB7CiAgLyogMDAgKi8gIHsgVURfSXN0b3N3LCAgICAgICBPX05PTkUsICBPX05PTkUsICBP
X05PTkUsICBQX0ltcEFkZHJ8UF9vc298UF9yZXh3IH0sCiAgLyogMDEgKi8gIHsgVURfSXN0b3Nk
LCAgICAgICBPX05PTkUsICBPX05PTkUsICBPX05PTkUsICBQX0ltcEFkZHJ8UF9vc298UF9yZXh3
IH0sCiAgLyogMDIgKi8gIHsgVURfSXN0b3NxLCAgICAgICBPX05PTkUsICBPX05PTkUsICBPX05P
TkUsICBQX0ltcEFkZHJ8UF9vc298UF9yZXh3IH0sCn07CgpzdGF0aWMgc3RydWN0IHVkX2l0YWJf
ZW50cnkgaXRhYl9fMWJ5dGVfX29wX2FkX19vc2l6ZVszXSA9IHsKICAvKiAwMCAqLyAgeyBVRF9J
bG9kc3csICAgICAgIE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBfSW1wQWRkcnxQX29zb3xQ
X3JleHcgfSwKICAvKiAwMSAqLyAgeyBVRF9JbG9kc2QsICAgICAgIE9fTk9ORSwgIE9fTk9ORSwg
IE9fTk9ORSwgIFBfSW1wQWRkcnxQX29zb3xQX3JleHcgfSwKICAvKiAwMiAqLyAgeyBVRF9JbG9k
c3EsICAgICAgIE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBfSW1wQWRkcnxQX29zb3xQX3Jl
eHcgfSwKfTsKCnN0YXRpYyBzdHJ1Y3QgdWRfaXRhYl9lbnRyeSBpdGFiX18xYnl0ZV9fb3BfYWVf
X21vZFsyXSA9IHsKICAvKiAwMCAqLyAgeyBVRF9JZ3JwX3JlZywgICAgIE9fTk9ORSwgT19OT05F
LCBPX05PTkUsICAgIElUQUJfXzFCWVRFX19PUF9BRV9fTU9EX19PUF8wMF9fUkVHIH0sCiAgLyog
MDEgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25v
bmUgfSwKfTsKCnN0YXRpYyBzdHJ1Y3QgdWRfaXRhYl9lbnRyeSBpdGFiX18xYnl0ZV9fb3BfYWVf
X21vZF9fb3BfMDBfX3JlZ1s4XSA9IHsKICAvKiAwMCAqLyAgeyBVRF9JZnhzYXZlLCAgICAgIE9f
TSwgICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfYXNvfFBfcmV4d3xQX3JleHJ8UF9yZXh4fFBfcmV4
YiB9LAogIC8qIDAxICovICB7IFVEX0lmeHJzdG9yLCAgICAgT19NLCAgICAgT19OT05FLCAgT19O
T05FLCAgUF9hc298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDIgKi8gIHsg
VURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAv
KiAwMyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBf
bm9uZSB9LAogIC8qIDA0ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9f
Tk9ORSwgICAgUF9ub25lIH0sCiAgLyogMDUgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUs
IE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAwNiAqLyAgeyBVRF9JaW52YWxpZCwg
ICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDA3ICovICB7IFVE
X0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCn07Cgpz
dGF0aWMgc3RydWN0IHVkX2l0YWJfZW50cnkgaXRhYl9fMWJ5dGVfX29wX2FmX19vc2l6ZVszXSA9
IHsKICAvKiAwMCAqLyAgeyBVRF9Jc2Nhc3csICAgICAgIE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9O
RSwgIFBfb3NvfFBfcmV4dyB9LAogIC8qIDAxICovICB7IFVEX0lzY2FzZCwgICAgICAgT19OT05F
LCAgT19OT05FLCAgT19OT05FLCAgUF9vc298UF9yZXh3IH0sCiAgLyogMDIgKi8gIHsgVURfSXNj
YXNxLCAgICAgICBPX05PTkUsICBPX05PTkUsICBPX05PTkUsICBQX29zb3xQX3JleHcgfSwKfTsK
CnN0YXRpYyBzdHJ1Y3QgdWRfaXRhYl9lbnRyeSBpdGFiX18xYnl0ZV9fb3BfYzBfX3JlZ1s4XSA9
IHsKICAvKiAwMCAqLyAgeyBVRF9Jcm9sLCAgICAgICAgIE9fRWIsICAgIE9fSWIsICAgIE9fTk9O
RSwgIFBfYzF8UF9hc298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDEgKi8g
IHsgVURfSXJvciwgICAgICAgICBPX0ViLCAgICBPX0liLCAgICBPX05PTkUsICBQX2MxfFBfYXNv
fFBfcmV4d3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDAyICovICB7IFVEX0lyY2wsICAg
ICAgICAgT19FYiwgICAgT19JYiwgICAgT19OT05FLCAgUF9jMXxQX2Fzb3xQX3JleHd8UF9yZXhy
fFBfcmV4eHxQX3JleGIgfSwKICAvKiAwMyAqLyAgeyBVRF9JcmNyLCAgICAgICAgIE9fRWIsICAg
IE9fSWIsICAgIE9fTk9ORSwgIFBfYzF8UF9hc298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhi
IH0sCiAgLyogMDQgKi8gIHsgVURfSXNobCwgICAgICAgICBPX0ViLCAgICBPX0liLCAgICBPX05P
TkUsICBQX2MxfFBfYXNvfFBfcmV4d3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDA1ICov
ICB7IFVEX0lzaHIsICAgICAgICAgT19FYiwgICAgT19JYiwgICAgT19OT05FLCAgUF9jMXxQX2Fz
b3xQX3JleHd8UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAwNiAqLyAgeyBVRF9Jc2hsLCAg
ICAgICAgIE9fRWIsICAgIE9fSWIsICAgIE9fTk9ORSwgIFBfYzF8UF9hc298UF9yZXh3fFBfcmV4
cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDcgKi8gIHsgVURfSXNhciwgICAgICAgICBPX0ViLCAg
ICBPX0liLCAgICBPX05PTkUsICBQX2MxfFBfYXNvfFBfcmV4d3xQX3JleHJ8UF9yZXh4fFBfcmV4
YiB9LAp9OwoKc3RhdGljIHN0cnVjdCB1ZF9pdGFiX2VudHJ5IGl0YWJfXzFieXRlX19vcF9jMV9f
cmVnWzhdID0gewogIC8qIDAwICovICB7IFVEX0lyb2wsICAgICAgICAgT19FdiwgICAgT19JYiwg
ICAgT19OT05FLCAgUF9jMXxQX2Fzb3xQX29zb3xQX3JleHd8UF9yZXhyfFBfcmV4eHxQX3JleGIg
fSwKICAvKiAwMSAqLyAgeyBVRF9Jcm9yLCAgICAgICAgIE9fRXYsICAgIE9fSWIsICAgIE9fTk9O
RSwgIFBfYzF8UF9hc298UF9vc298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyog
MDIgKi8gIHsgVURfSXJjbCwgICAgICAgICBPX0V2LCAgICBPX0liLCAgICBPX05PTkUsICBQX2Mx
fFBfYXNvfFBfb3NvfFBfcmV4d3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDAzICovICB7
IFVEX0lyY3IsICAgICAgICAgT19FdiwgICAgT19JYiwgICAgT19OT05FLCAgUF9jMXxQX2Fzb3xQ
X29zb3xQX3JleHd8UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAwNCAqLyAgeyBVRF9Jc2hs
LCAgICAgICAgIE9fRXYsICAgIE9fSWIsICAgIE9fTk9ORSwgIFBfYzF8UF9hc298UF9vc298UF9y
ZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDUgKi8gIHsgVURfSXNociwgICAgICAg
ICBPX0V2LCAgICBPX0liLCAgICBPX05PTkUsICBQX2MxfFBfYXNvfFBfb3NvfFBfcmV4d3xQX3Jl
eHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDA2ICovICB7IFVEX0lzaGwsICAgICAgICAgT19Fdiwg
ICAgT19JYiwgICAgT19OT05FLCAgUF9jMXxQX2Fzb3xQX29zb3xQX3JleHd8UF9yZXhyfFBfcmV4
eHxQX3JleGIgfSwKICAvKiAwNyAqLyAgeyBVRF9Jc2FyLCAgICAgICAgIE9fRXYsICAgIE9fSWIs
ICAgIE9fTk9ORSwgIFBfYzF8UF9hc298UF9vc298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhi
IH0sCn07CgpzdGF0aWMgc3RydWN0IHVkX2l0YWJfZW50cnkgaXRhYl9fMWJ5dGVfX29wX2M2X19y
ZWdbOF0gPSB7CiAgLyogMDAgKi8gIHsgVURfSW1vdiwgICAgICAgICBPX0ViLCAgICBPX0liLCAg
ICBPX05PTkUsICBQX2MxfFBfYXNvfFBfcmV4d3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8q
IDAxICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9u
b25lIH0sCiAgLyogMDIgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19O
T05FLCAgICBQX25vbmUgfSwKICAvKiAwMyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwg
T19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDA0ICovICB7IFVEX0lpbnZhbGlkLCAg
ICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMDUgKi8gIHsgVURf
SWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAw
NiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9u
ZSB9LAogIC8qIDA3ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9O
RSwgICAgUF9ub25lIH0sCn07CgpzdGF0aWMgc3RydWN0IHVkX2l0YWJfZW50cnkgaXRhYl9fMWJ5
dGVfX29wX2M3X19yZWdbOF0gPSB7CiAgLyogMDAgKi8gIHsgVURfSW1vdiwgICAgICAgICBPX0V2
LCAgICBPX0l6LCAgICBPX05PTkUsICBQX2MxfFBfYXNvfFBfb3NvfFBfcmV4d3xQX3JleHJ8UF9y
ZXh4fFBfcmV4YiB9LAogIC8qIDAxICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05P
TkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMDIgKi8gIHsgVURfSWludmFsaWQsICAgICBP
X05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAwMyAqLyAgeyBVRF9JaW52
YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDA0ICov
ICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0s
CiAgLyogMDUgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAg
ICBQX25vbmUgfSwKICAvKiAwNiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05F
LCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDA3ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19O
T05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCn07CgpzdGF0aWMgc3RydWN0IHVkX2l0
YWJfZW50cnkgaXRhYl9fMWJ5dGVfX29wX2NmX19vc2l6ZVszXSA9IHsKICAvKiAwMCAqLyAgeyBV
RF9JaXJldHcsICAgICAgIE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBfb3NvfFBfcmV4dyB9
LAogIC8qIDAxICovICB7IFVEX0lpcmV0ZCwgICAgICAgT19OT05FLCAgT19OT05FLCAgT19OT05F
LCAgUF9vc298UF9yZXh3IH0sCiAgLyogMDIgKi8gIHsgVURfSWlyZXRxLCAgICAgICBPX05PTkUs
ICBPX05PTkUsICBPX05PTkUsICBQX29zb3xQX3JleHcgfSwKfTsKCnN0YXRpYyBzdHJ1Y3QgdWRf
aXRhYl9lbnRyeSBpdGFiX18xYnl0ZV9fb3BfZDBfX3JlZ1s4XSA9IHsKICAvKiAwMCAqLyAgeyBV
RF9Jcm9sLCAgICAgICAgIE9fRWIsICAgIE9fSTEsICAgIE9fTk9ORSwgIFBfYzF8UF9hc298UF9y
ZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDEgKi8gIHsgVURfSXJvciwgICAgICAg
ICBPX0ViLCAgICBPX0kxLCAgICBPX05PTkUsICBQX2MxfFBfYXNvfFBfcmV4d3xQX3JleHJ8UF9y
ZXh4fFBfcmV4YiB9LAogIC8qIDAyICovICB7IFVEX0lyY2wsICAgICAgICAgT19FYiwgICAgT19J
MSwgICAgT19OT05FLCAgUF9jMXxQX2Fzb3xQX3JleHd8UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwK
ICAvKiAwMyAqLyAgeyBVRF9JcmNyLCAgICAgICAgIE9fRWIsICAgIE9fSTEsICAgIE9fTk9ORSwg
IFBfYzF8UF9hc298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDQgKi8gIHsg
VURfSXNobCwgICAgICAgICBPX0ViLCAgICBPX0kxLCAgICBPX05PTkUsICBQX2MxfFBfYXNvfFBf
cmV4d3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDA1ICovICB7IFVEX0lzaHIsICAgICAg
ICAgT19FYiwgICAgT19JMSwgICAgT19OT05FLCAgUF9jMXxQX2Fzb3xQX3JleHd8UF9yZXhyfFBf
cmV4eHxQX3JleGIgfSwKICAvKiAwNiAqLyAgeyBVRF9Jc2hsLCAgICAgICAgIE9fRWIsICAgIE9f
STEsICAgIE9fTk9ORSwgIFBfYzF8UF9hc298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0s
CiAgLyogMDcgKi8gIHsgVURfSXNhciwgICAgICAgICBPX0ViLCAgICBPX0kxLCAgICBPX05PTkUs
ICBQX2MxfFBfYXNvfFBfcmV4d3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAp9OwoKc3RhdGljIHN0
cnVjdCB1ZF9pdGFiX2VudHJ5IGl0YWJfXzFieXRlX19vcF9kMV9fcmVnWzhdID0gewogIC8qIDAw
ICovICB7IFVEX0lyb2wsICAgICAgICAgT19FdiwgICAgT19JMSwgICAgT19OT05FLCAgUF9jMXxQ
X2Fzb3xQX29zb3xQX3JleHd8UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAwMSAqLyAgeyBV
RF9Jcm9yLCAgICAgICAgIE9fRXYsICAgIE9fSTEsICAgIE9fTk9ORSwgIFBfYzF8UF9hc298UF9v
c298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDIgKi8gIHsgVURfSXJjbCwg
ICAgICAgICBPX0V2LCAgICBPX0kxLCAgICBPX05PTkUsICBQX2MxfFBfYXNvfFBfb3NvfFBfcmV4
d3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDAzICovICB7IFVEX0lyY3IsICAgICAgICAg
T19FdiwgICAgT19JMSwgICAgT19OT05FLCAgUF9jMXxQX2Fzb3xQX29zb3xQX3JleHd8UF9yZXhy
fFBfcmV4eHxQX3JleGIgfSwKICAvKiAwNCAqLyAgeyBVRF9Jc2hsLCAgICAgICAgIE9fRXYsICAg
IE9fSTEsICAgIE9fTk9ORSwgIFBfYzF8UF9hc298UF9vc298UF9yZXh3fFBfcmV4cnxQX3JleHh8
UF9yZXhiIH0sCiAgLyogMDUgKi8gIHsgVURfSXNociwgICAgICAgICBPX0V2LCAgICBPX0kxLCAg
ICBPX05PTkUsICBQX2MxfFBfYXNvfFBfb3NvfFBfcmV4d3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9
LAogIC8qIDA2ICovICB7IFVEX0lzaGwsICAgICAgICAgT19FdiwgICAgT19JMSwgICAgT19OT05F
LCAgUF9jMXxQX2Fzb3xQX29zb3xQX3JleHd8UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAw
NyAqLyAgeyBVRF9Jc2FyLCAgICAgICAgIE9fRXYsICAgIE9fSTEsICAgIE9fTk9ORSwgIFBfYzF8
UF9hc298UF9vc298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCn07CgpzdGF0aWMgc3Ry
dWN0IHVkX2l0YWJfZW50cnkgaXRhYl9fMWJ5dGVfX29wX2QyX19yZWdbOF0gPSB7CiAgLyogMDAg
Ki8gIHsgVURfSXJvbCwgICAgICAgICBPX0ViLCAgICBPX0NMLCAgICBPX05PTkUsICBQX2MxfFBf
YXNvfFBfcmV4d3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDAxICovICB7IFVEX0lyb3Is
ICAgICAgICAgT19FYiwgICAgT19DTCwgICAgT19OT05FLCAgUF9jMXxQX2Fzb3xQX3JleHd8UF9y
ZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAwMiAqLyAgeyBVRF9JcmNsLCAgICAgICAgIE9fRWIs
ICAgIE9fQ0wsICAgIE9fTk9ORSwgIFBfYzF8UF9hc298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9y
ZXhiIH0sCiAgLyogMDMgKi8gIHsgVURfSXJjciwgICAgICAgICBPX0ViLCAgICBPX0NMLCAgICBP
X05PTkUsICBQX2MxfFBfYXNvfFBfcmV4d3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDA0
ICovICB7IFVEX0lzaGwsICAgICAgICAgT19FYiwgICAgT19DTCwgICAgT19OT05FLCAgUF9hc298
UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAwNSAqLyAgeyBVRF9Jc2hyLCAgICAgICAgIE9f
RWIsICAgIE9fQ0wsICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4d3xQX3JleHJ8UF9yZXh4fFBfcmV4
YiB9LAogIC8qIDA2ICovICB7IFVEX0lzaGwsICAgICAgICAgT19FYiwgICAgT19DTCwgICAgT19O
T05FLCAgUF9hc298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDcgKi8gIHsg
VURfSXNhciwgICAgICAgICBPX0ViLCAgICBPX0NMLCAgICBPX05PTkUsICBQX2Fzb3xQX3JleHd8
UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKfTsKCnN0YXRpYyBzdHJ1Y3QgdWRfaXRhYl9lbnRyeSBp
dGFiX18xYnl0ZV9fb3BfZDNfX3JlZ1s4XSA9IHsKICAvKiAwMCAqLyAgeyBVRF9Jcm9sLCAgICAg
ICAgIE9fRXYsICAgIE9fQ0wsICAgIE9fTk9ORSwgIFBfYzF8UF9hc298UF9vc298UF9yZXh3fFBf
cmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDEgKi8gIHsgVURfSXJvciwgICAgICAgICBPX0V2
LCAgICBPX0NMLCAgICBPX05PTkUsICBQX2MxfFBfYXNvfFBfb3NvfFBfcmV4d3xQX3JleHJ8UF9y
ZXh4fFBfcmV4YiB9LAogIC8qIDAyICovICB7IFVEX0lyY2wsICAgICAgICAgT19FdiwgICAgT19D
TCwgICAgT19OT05FLCAgUF9jMXxQX2Fzb3xQX29zb3xQX3JleHd8UF9yZXhyfFBfcmV4eHxQX3Jl
eGIgfSwKICAvKiAwMyAqLyAgeyBVRF9JcmNyLCAgICAgICAgIE9fRXYsICAgIE9fQ0wsICAgIE9f
Tk9ORSwgIFBfYzF8UF9hc298UF9vc298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAg
LyogMDQgKi8gIHsgVURfSXNobCwgICAgICAgICBPX0V2LCAgICBPX0NMLCAgICBPX05PTkUsICBQ
X2MxfFBfYXNvfFBfb3NvfFBfcmV4d3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDA1ICov
ICB7IFVEX0lzaHIsICAgICAgICAgT19FdiwgICAgT19DTCwgICAgT19OT05FLCAgUF9jMXxQX2Fz
b3xQX29zb3xQX3JleHd8UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAwNiAqLyAgeyBVRF9J
c2hsLCAgICAgICAgIE9fRXYsICAgIE9fQ0wsICAgIE9fTk9ORSwgIFBfYzF8UF9hc298UF9vc298
UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDcgKi8gIHsgVURfSXNhciwgICAg
ICAgICBPX0V2LCAgICBPX0NMLCAgICBPX05PTkUsICBQX2MxfFBfYXNvfFBfb3NvfFBfcmV4d3xQ
X3JleHJ8UF9yZXh4fFBfcmV4YiB9LAp9OwoKc3RhdGljIHN0cnVjdCB1ZF9pdGFiX2VudHJ5IGl0
YWJfXzFieXRlX19vcF9kOF9fbW9kWzJdID0gewogIC8qIDAwICovICB7IFVEX0lncnBfcmVnLCAg
ICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgSVRBQl9fMUJZVEVfX09QX0Q4X19NT0RfX09Q
XzAwX19SRUcgfSwKICAvKiAwMSAqLyAgeyBVRF9JZ3JwX3g4NywgICAgIE9fTk9ORSwgT19OT05F
LCBPX05PTkUsICAgIElUQUJfXzFCWVRFX19PUF9EOF9fTU9EX19PUF8wMV9fWDg3IH0sCn07Cgpz
dGF0aWMgc3RydWN0IHVkX2l0YWJfZW50cnkgaXRhYl9fMWJ5dGVfX29wX2Q4X19tb2RfX29wXzAw
X19yZWdbOF0gPSB7CiAgLyogMDAgKi8gIHsgVURfSWZhZGQsICAgICAgICBPX01kLCAgICBPX05P
TkUsICBPX05PTkUsICBQX2MxfFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDEg
Ki8gIHsgVURfSWZtdWwsICAgICAgICBPX01kLCAgICBPX05PTkUsICBPX05PTkUsICBQX2MxfFBf
YXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDIgKi8gIHsgVURfSWZjb20sICAgICAg
ICBPX01kLCAgICBPX05PTkUsICBPX05PTkUsICBQX2MxfFBfYXNvfFBfcmV4cnxQX3JleHh8UF9y
ZXhiIH0sCiAgLyogMDMgKi8gIHsgVURfSWZjb21wLCAgICAgICBPX01kLCAgICBPX05PTkUsICBP
X05PTkUsICBQX2MxfFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDQgKi8gIHsg
VURfSWZzdWIsICAgICAgICBPX01kLCAgICBPX05PTkUsICBPX05PTkUsICBQX2MxfFBfYXNvfFBf
cmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDUgKi8gIHsgVURfSWZzdWJyLCAgICAgICBPX01k
LCAgICBPX05PTkUsICBPX05PTkUsICBQX2MxfFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0s
CiAgLyogMDYgKi8gIHsgVURfSWZkaXYsICAgICAgICBPX01kLCAgICBPX05PTkUsICBPX05PTkUs
ICBQX2MxfFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDcgKi8gIHsgVURfSWZk
aXZyLCAgICAgICBPX01kLCAgICBPX05PTkUsICBPX05PTkUsICBQX2MxfFBfYXNvfFBfcmV4cnxQ
X3JleHh8UF9yZXhiIH0sCn07CgpzdGF0aWMgc3RydWN0IHVkX2l0YWJfZW50cnkgaXRhYl9fMWJ5
dGVfX29wX2Q4X19tb2RfX29wXzAxX194ODdbNjRdID0gewogIC8qIDAwICovICB7IFVEX0lmYWRk
LCAgICAgICAgT19TVDAsICAgT19TVDAsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMDEgKi8g
IHsgVURfSWZhZGQsICAgICAgICBPX1NUMCwgICBPX1NUMSwgICBPX05PTkUsICBQX25vbmUgfSwK
ICAvKiAwMiAqLyAgeyBVRF9JZmFkZCwgICAgICAgIE9fU1QwLCAgIE9fU1QyLCAgIE9fTk9ORSwg
IFBfbm9uZSB9LAogIC8qIDAzICovICB7IFVEX0lmYWRkLCAgICAgICAgT19TVDAsICAgT19TVDMs
ICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMDQgKi8gIHsgVURfSWZhZGQsICAgICAgICBPX1NU
MCwgICBPX1NUNCwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAwNSAqLyAgeyBVRF9JZmFkZCwg
ICAgICAgIE9fU1QwLCAgIE9fU1Q1LCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDA2ICovICB7
IFVEX0lmYWRkLCAgICAgICAgT19TVDAsICAgT19TVDYsICAgT19OT05FLCAgUF9ub25lIH0sCiAg
LyogMDcgKi8gIHsgVURfSWZhZGQsICAgICAgICBPX1NUMCwgICBPX1NUNywgICBPX05PTkUsICBQ
X25vbmUgfSwKICAvKiAwOCAqLyAgeyBVRF9JZm11bCwgICAgICAgIE9fU1QwLCAgIE9fU1QwLCAg
IE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDA5ICovICB7IFVEX0lmbXVsLCAgICAgICAgT19TVDAs
ICAgT19TVDEsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMEEgKi8gIHsgVURfSWZtdWwsICAg
ICAgICBPX1NUMCwgICBPX1NUMiwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAwQiAqLyAgeyBV
RF9JZm11bCwgICAgICAgIE9fU1QwLCAgIE9fU1QzLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8q
IDBDICovICB7IFVEX0lmbXVsLCAgICAgICAgT19TVDAsICAgT19TVDQsICAgT19OT05FLCAgUF9u
b25lIH0sCiAgLyogMEQgKi8gIHsgVURfSWZtdWwsICAgICAgICBPX1NUMCwgICBPX1NUNSwgICBP
X05PTkUsICBQX25vbmUgfSwKICAvKiAwRSAqLyAgeyBVRF9JZm11bCwgICAgICAgIE9fU1QwLCAg
IE9fU1Q2LCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDBGICovICB7IFVEX0lmbXVsLCAgICAg
ICAgT19TVDAsICAgT19TVDcsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMTAgKi8gIHsgVURf
SWZjb20sICAgICAgICBPX1NUMCwgICBPX1NUMCwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAx
MSAqLyAgeyBVRF9JZmNvbSwgICAgICAgIE9fU1QwLCAgIE9fU1QxLCAgIE9fTk9ORSwgIFBfbm9u
ZSB9LAogIC8qIDEyICovICB7IFVEX0lmY29tLCAgICAgICAgT19TVDAsICAgT19TVDIsICAgT19O
T05FLCAgUF9ub25lIH0sCiAgLyogMTMgKi8gIHsgVURfSWZjb20sICAgICAgICBPX1NUMCwgICBP
X1NUMywgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAxNCAqLyAgeyBVRF9JZmNvbSwgICAgICAg
IE9fU1QwLCAgIE9fU1Q0LCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDE1ICovICB7IFVEX0lm
Y29tLCAgICAgICAgT19TVDAsICAgT19TVDUsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMTYg
Ki8gIHsgVURfSWZjb20sICAgICAgICBPX1NUMCwgICBPX1NUNiwgICBPX05PTkUsICBQX25vbmUg
fSwKICAvKiAxNyAqLyAgeyBVRF9JZmNvbSwgICAgICAgIE9fU1QwLCAgIE9fU1Q3LCAgIE9fTk9O
RSwgIFBfbm9uZSB9LAogIC8qIDE4ICovICB7IFVEX0lmY29tcCwgICAgICAgT19TVDAsICAgT19T
VDAsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMTkgKi8gIHsgVURfSWZjb21wLCAgICAgICBP
X1NUMCwgICBPX1NUMSwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAxQSAqLyAgeyBVRF9JZmNv
bXAsICAgICAgIE9fU1QwLCAgIE9fU1QyLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDFCICov
ICB7IFVEX0lmY29tcCwgICAgICAgT19TVDAsICAgT19TVDMsICAgT19OT05FLCAgUF9ub25lIH0s
CiAgLyogMUMgKi8gIHsgVURfSWZjb21wLCAgICAgICBPX1NUMCwgICBPX1NUNCwgICBPX05PTkUs
ICBQX25vbmUgfSwKICAvKiAxRCAqLyAgeyBVRF9JZmNvbXAsICAgICAgIE9fU1QwLCAgIE9fU1Q1
LCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDFFICovICB7IFVEX0lmY29tcCwgICAgICAgT19T
VDAsICAgT19TVDYsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMUYgKi8gIHsgVURfSWZjb21w
LCAgICAgICBPX1NUMCwgICBPX1NUNywgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAyMCAqLyAg
eyBVRF9JZnN1YiwgICAgICAgIE9fU1QwLCAgIE9fU1QwLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAog
IC8qIDIxICovICB7IFVEX0lmc3ViLCAgICAgICAgT19TVDAsICAgT19TVDEsICAgT19OT05FLCAg
UF9ub25lIH0sCiAgLyogMjIgKi8gIHsgVURfSWZzdWIsICAgICAgICBPX1NUMCwgICBPX1NUMiwg
ICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAyMyAqLyAgeyBVRF9JZnN1YiwgICAgICAgIE9fU1Qw
LCAgIE9fU1QzLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDI0ICovICB7IFVEX0lmc3ViLCAg
ICAgICAgT19TVDAsICAgT19TVDQsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMjUgKi8gIHsg
VURfSWZzdWIsICAgICAgICBPX1NUMCwgICBPX1NUNSwgICBPX05PTkUsICBQX25vbmUgfSwKICAv
KiAyNiAqLyAgeyBVRF9JZnN1YiwgICAgICAgIE9fU1QwLCAgIE9fU1Q2LCAgIE9fTk9ORSwgIFBf
bm9uZSB9LAogIC8qIDI3ICovICB7IFVEX0lmc3ViLCAgICAgICAgT19TVDAsICAgT19TVDcsICAg
T19OT05FLCAgUF9ub25lIH0sCiAgLyogMjggKi8gIHsgVURfSWZzdWJyLCAgICAgICBPX1NUMCwg
ICBPX1NUMCwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAyOSAqLyAgeyBVRF9JZnN1YnIsICAg
ICAgIE9fU1QwLCAgIE9fU1QxLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDJBICovICB7IFVE
X0lmc3ViciwgICAgICAgT19TVDAsICAgT19TVDIsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyog
MkIgKi8gIHsgVURfSWZzdWJyLCAgICAgICBPX1NUMCwgICBPX1NUMywgICBPX05PTkUsICBQX25v
bmUgfSwKICAvKiAyQyAqLyAgeyBVRF9JZnN1YnIsICAgICAgIE9fU1QwLCAgIE9fU1Q0LCAgIE9f
Tk9ORSwgIFBfbm9uZSB9LAogIC8qIDJEICovICB7IFVEX0lmc3ViciwgICAgICAgT19TVDAsICAg
T19TVDUsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMkUgKi8gIHsgVURfSWZzdWJyLCAgICAg
ICBPX1NUMCwgICBPX1NUNiwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAyRiAqLyAgeyBVRF9J
ZnN1YnIsICAgICAgIE9fU1QwLCAgIE9fU1Q3LCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDMw
ICovICB7IFVEX0lmZGl2LCAgICAgICAgT19TVDAsICAgT19TVDAsICAgT19OT05FLCAgUF9ub25l
IH0sCiAgLyogMzEgKi8gIHsgVURfSWZkaXYsICAgICAgICBPX1NUMCwgICBPX1NUMSwgICBPX05P
TkUsICBQX25vbmUgfSwKICAvKiAzMiAqLyAgeyBVRF9JZmRpdiwgICAgICAgIE9fU1QwLCAgIE9f
U1QyLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDMzICovICB7IFVEX0lmZGl2LCAgICAgICAg
T19TVDAsICAgT19TVDMsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMzQgKi8gIHsgVURfSWZk
aXYsICAgICAgICBPX1NUMCwgICBPX1NUNCwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAzNSAq
LyAgeyBVRF9JZmRpdiwgICAgICAgIE9fU1QwLCAgIE9fU1Q1LCAgIE9fTk9ORSwgIFBfbm9uZSB9
LAogIC8qIDM2ICovICB7IFVEX0lmZGl2LCAgICAgICAgT19TVDAsICAgT19TVDYsICAgT19OT05F
LCAgUF9ub25lIH0sCiAgLyogMzcgKi8gIHsgVURfSWZkaXYsICAgICAgICBPX1NUMCwgICBPX1NU
NywgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAzOCAqLyAgeyBVRF9JZmRpdnIsICAgICAgIE9f
U1QwLCAgIE9fU1QwLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDM5ICovICB7IFVEX0lmZGl2
ciwgICAgICAgT19TVDAsICAgT19TVDEsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogM0EgKi8g
IHsgVURfSWZkaXZyLCAgICAgICBPX1NUMCwgICBPX1NUMiwgICBPX05PTkUsICBQX25vbmUgfSwK
ICAvKiAzQiAqLyAgeyBVRF9JZmRpdnIsICAgICAgIE9fU1QwLCAgIE9fU1QzLCAgIE9fTk9ORSwg
IFBfbm9uZSB9LAogIC8qIDNDICovICB7IFVEX0lmZGl2ciwgICAgICAgT19TVDAsICAgT19TVDQs
ICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogM0QgKi8gIHsgVURfSWZkaXZyLCAgICAgICBPX1NU
MCwgICBPX1NUNSwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAzRSAqLyAgeyBVRF9JZmRpdnIs
ICAgICAgIE9fU1QwLCAgIE9fU1Q2LCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDNGICovICB7
IFVEX0lmZGl2ciwgICAgICAgT19TVDAsICAgT19TVDcsICAgT19OT05FLCAgUF9ub25lIH0sCn07
CgpzdGF0aWMgc3RydWN0IHVkX2l0YWJfZW50cnkgaXRhYl9fMWJ5dGVfX29wX2Q5X19tb2RbMl0g
PSB7CiAgLyogMDAgKi8gIHsgVURfSWdycF9yZWcsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05F
LCAgICBJVEFCX18xQllURV9fT1BfRDlfX01PRF9fT1BfMDBfX1JFRyB9LAogIC8qIDAxICovICB7
IFVEX0lncnBfeDg3LCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgSVRBQl9fMUJZVEVf
X09QX0Q5X19NT0RfX09QXzAxX19YODcgfSwKfTsKCnN0YXRpYyBzdHJ1Y3QgdWRfaXRhYl9lbnRy
eSBpdGFiX18xYnl0ZV9fb3BfZDlfX21vZF9fb3BfMDBfX3JlZ1s4XSA9IHsKICAvKiAwMCAqLyAg
eyBVRF9JZmxkLCAgICAgICAgIE9fTWQsICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfYzF8UF9hc298
UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAwMSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9f
Tk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDAyICovICB7IFVEX0lmc3Qs
ICAgICAgICAgT19NZCwgICAgT19OT05FLCAgT19OT05FLCAgUF9jMXxQX2Fzb3xQX3JleHJ8UF9y
ZXh4fFBfcmV4YiB9LAogIC8qIDAzICovICB7IFVEX0lmc3RwLCAgICAgICAgT19NZCwgICAgT19O
T05FLCAgT19OT05FLCAgUF9jMXxQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDA0
ICovICB7IFVEX0lmbGRlbnYsICAgICAgT19NLCAgICAgT19OT05FLCAgT19OT05FLCAgUF9hc298
UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAwNSAqLyAgeyBVRF9JZmxkY3csICAgICAgIE9f
TXcsICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfYzF8UF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIg
fSwKICAvKiAwNiAqLyAgeyBVRF9JZm5zdGVudiwgICAgIE9fTSwgICAgIE9fTk9ORSwgIE9fTk9O
RSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDcgKi8gIHsgVURfSWZuc3Rj
dywgICAgICBPX013LCAgICBPX05PTkUsICBPX05PTkUsICBQX2MxfFBfYXNvfFBfcmV4cnxQX3Jl
eHh8UF9yZXhiIH0sCn07CgpzdGF0aWMgc3RydWN0IHVkX2l0YWJfZW50cnkgaXRhYl9fMWJ5dGVf
X29wX2Q5X19tb2RfX29wXzAxX194ODdbNjRdID0gewogIC8qIDAwICovICB7IFVEX0lmbGQsICAg
ICAgICAgT19TVDAsICAgT19TVDAsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMDEgKi8gIHsg
VURfSWZsZCwgICAgICAgICBPX1NUMCwgICBPX1NUMSwgICBPX05PTkUsICBQX25vbmUgfSwKICAv
KiAwMiAqLyAgeyBVRF9JZmxkLCAgICAgICAgIE9fU1QwLCAgIE9fU1QyLCAgIE9fTk9ORSwgIFBf
bm9uZSB9LAogIC8qIDAzICovICB7IFVEX0lmbGQsICAgICAgICAgT19TVDAsICAgT19TVDMsICAg
T19OT05FLCAgUF9ub25lIH0sCiAgLyogMDQgKi8gIHsgVURfSWZsZCwgICAgICAgICBPX1NUMCwg
ICBPX1NUNCwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAwNSAqLyAgeyBVRF9JZmxkLCAgICAg
ICAgIE9fU1QwLCAgIE9fU1Q1LCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDA2ICovICB7IFVE
X0lmbGQsICAgICAgICAgT19TVDAsICAgT19TVDYsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyog
MDcgKi8gIHsgVURfSWZsZCwgICAgICAgICBPX1NUMCwgICBPX1NUNywgICBPX05PTkUsICBQX25v
bmUgfSwKICAvKiAwOCAqLyAgeyBVRF9JZnhjaCwgICAgICAgIE9fU1QwLCAgIE9fU1QwLCAgIE9f
Tk9ORSwgIFBfbm9uZSB9LAogIC8qIDA5ICovICB7IFVEX0lmeGNoLCAgICAgICAgT19TVDAsICAg
T19TVDEsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMEEgKi8gIHsgVURfSWZ4Y2gsICAgICAg
ICBPX1NUMCwgICBPX1NUMiwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAwQiAqLyAgeyBVRF9J
ZnhjaCwgICAgICAgIE9fU1QwLCAgIE9fU1QzLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDBD
ICovICB7IFVEX0lmeGNoLCAgICAgICAgT19TVDAsICAgT19TVDQsICAgT19OT05FLCAgUF9ub25l
IH0sCiAgLyogMEQgKi8gIHsgVURfSWZ4Y2gsICAgICAgICBPX1NUMCwgICBPX1NUNSwgICBPX05P
TkUsICBQX25vbmUgfSwKICAvKiAwRSAqLyAgeyBVRF9JZnhjaCwgICAgICAgIE9fU1QwLCAgIE9f
U1Q2LCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDBGICovICB7IFVEX0lmeGNoLCAgICAgICAg
T19TVDAsICAgT19TVDcsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMTAgKi8gIHsgVURfSWZu
b3AsICAgICAgICBPX05PTkUsICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAxMSAq
LyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9
LAogIC8qIDEyICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwg
ICAgUF9ub25lIH0sCiAgLyogMTMgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9O
RSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAxNCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9f
Tk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDE1ICovICB7IFVEX0lpbnZh
bGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMTYgKi8g
IHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwK
ICAvKiAxNyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAg
IFBfbm9uZSB9LAogIC8qIDE4ICovICB7IFVEX0lmc3RwMSwgICAgICAgT19TVDAsICAgT19OT05F
LCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMTkgKi8gIHsgVURfSWZzdHAxLCAgICAgICBPX1NU
MSwgICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAxQSAqLyAgeyBVRF9JZnN0cDEs
ICAgICAgIE9fU1QyLCAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDFCICovICB7
IFVEX0lmc3RwMSwgICAgICAgT19TVDMsICAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAg
LyogMUMgKi8gIHsgVURfSWZzdHAxLCAgICAgICBPX1NUNCwgICBPX05PTkUsICBPX05PTkUsICBQ
X25vbmUgfSwKICAvKiAxRCAqLyAgeyBVRF9JZnN0cDEsICAgICAgIE9fU1Q1LCAgIE9fTk9ORSwg
IE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDFFICovICB7IFVEX0lmc3RwMSwgICAgICAgT19TVDYs
ICAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMUYgKi8gIHsgVURfSWZzdHAxLCAg
ICAgICBPX1NUNywgICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAyMCAqLyAgeyBV
RF9JZmNocywgICAgICAgIE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8q
IDIxICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9u
b25lIH0sCiAgLyogMjIgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19O
T05FLCAgICBQX25vbmUgfSwKICAvKiAyMyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwg
T19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDI0ICovICB7IFVEX0lmdHN0LCAgICAg
ICAgT19OT05FLCAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMjUgKi8gIHsgVURf
SWZ4YW0sICAgICAgICBPX05PTkUsICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAy
NiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9u
ZSB9LAogIC8qIDI3ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9O
RSwgICAgUF9ub25lIH0sCiAgLyogMjggKi8gIHsgVURfSWZsZDEsICAgICAgICBPX05PTkUsICBP
X05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAyOSAqLyAgeyBVRF9JZmxkbDJ0LCAgICAg
IE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDJBICovICB7IFVEX0lm
bGRsMmUsICAgICAgT19OT05FLCAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMkIg
Ki8gIHsgVURfSWZsZGxwaSwgICAgICBPX05PTkUsICBPX05PTkUsICBPX05PTkUsICBQX25vbmUg
fSwKICAvKiAyQyAqLyAgeyBVRF9JZmxkbGcyLCAgICAgIE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9O
RSwgIFBfbm9uZSB9LAogIC8qIDJEICovICB7IFVEX0lmbGRsbjIsICAgICAgT19OT05FLCAgT19O
T05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMkUgKi8gIHsgVURfSWZsZHosICAgICAgICBP
X05PTkUsICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAyRiAqLyAgeyBVRF9JaW52
YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDMwICov
ICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0s
CiAgLyogMzEgKi8gIHsgVURfSWZ5bDJ4LCAgICAgICBPX05PTkUsICBPX05PTkUsICBPX05PTkUs
ICBQX25vbmUgfSwKICAvKiAzMiAqLyAgeyBVRF9JZnB0YW4sICAgICAgIE9fTk9ORSwgIE9fTk9O
RSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDMzICovICB7IFVEX0lmcGF0YW4sICAgICAgT19O
T05FLCAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMzQgKi8gIHsgVURfSWZweHRy
YWN0LCAgICBPX05PTkUsICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAzNSAqLyAg
eyBVRF9JZnByZW0xLCAgICAgIE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAog
IC8qIDM2ICovICB7IFVEX0lmZGVjc3RwLCAgICAgT19OT05FLCAgT19OT05FLCAgT19OT05FLCAg
UF9ub25lIH0sCiAgLyogMzcgKi8gIHsgVURfSWZuY3N0cCwgICAgICBPX05PTkUsICBPX05PTkUs
ICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAzOCAqLyAgeyBVRF9JZnByZW0sICAgICAgIE9fTk9O
RSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDM5ICovICB7IFVEX0lmeWwyeHAx
LCAgICAgT19OT05FLCAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogM0EgKi8gIHsg
VURfSWZzcXJ0LCAgICAgICBPX05PTkUsICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAv
KiAzQiAqLyAgeyBVRF9JZnNpbmNvcywgICAgIE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBf
bm9uZSB9LAogIC8qIDNDICovICB7IFVEX0lmcm5kaW50LCAgICAgT19OT05FLCAgT19OT05FLCAg
T19OT05FLCAgUF9ub25lIH0sCiAgLyogM0QgKi8gIHsgVURfSWZzY2FsZSwgICAgICBPX05PTkUs
ICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAzRSAqLyAgeyBVRF9JZnNpbiwgICAg
ICAgIE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDNGICovICB7IFVE
X0lmY29zLCAgICAgICAgT19OT05FLCAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCn07Cgpz
dGF0aWMgc3RydWN0IHVkX2l0YWJfZW50cnkgaXRhYl9fMWJ5dGVfX29wX2RhX19tb2RbMl0gPSB7
CiAgLyogMDAgKi8gIHsgVURfSWdycF9yZWcsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAg
ICBJVEFCX18xQllURV9fT1BfREFfX01PRF9fT1BfMDBfX1JFRyB9LAogIC8qIDAxICovICB7IFVE
X0lncnBfeDg3LCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgSVRBQl9fMUJZVEVfX09Q
X0RBX19NT0RfX09QXzAxX19YODcgfSwKfTsKCnN0YXRpYyBzdHJ1Y3QgdWRfaXRhYl9lbnRyeSBp
dGFiX18xYnl0ZV9fb3BfZGFfX21vZF9fb3BfMDBfX3JlZ1s4XSA9IHsKICAvKiAwMCAqLyAgeyBV
RF9JZmlhZGQsICAgICAgIE9fTWQsICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfYzF8UF9hc298UF9y
ZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAwMSAqLyAgeyBVRF9JZmltdWwsICAgICAgIE9fTWQs
ICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfYzF8UF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwK
ICAvKiAwMiAqLyAgeyBVRF9JZmljb20sICAgICAgIE9fTWQsICAgIE9fTk9ORSwgIE9fTk9ORSwg
IFBfYzF8UF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAwMyAqLyAgeyBVRF9JZmlj
b21wLCAgICAgIE9fTWQsICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfYzF8UF9hc298UF9yZXhyfFBf
cmV4eHxQX3JleGIgfSwKICAvKiAwNCAqLyAgeyBVRF9JZmlzdWIsICAgICAgIE9fTWQsICAgIE9f
Tk9ORSwgIE9fTk9ORSwgIFBfYzF8UF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAw
NSAqLyAgeyBVRF9JZmlzdWJyLCAgICAgIE9fTWQsICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfYzF8
UF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAwNiAqLyAgeyBVRF9JZmlkaXYsICAg
ICAgIE9fTWQsICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfYzF8UF9hc298UF9yZXhyfFBfcmV4eHxQ
X3JleGIgfSwKICAvKiAwNyAqLyAgeyBVRF9JZmlkaXZyLCAgICAgIE9fTWQsICAgIE9fTk9ORSwg
IE9fTk9ORSwgIFBfYzF8UF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKfTsKCnN0YXRpYyBz
dHJ1Y3QgdWRfaXRhYl9lbnRyeSBpdGFiX18xYnl0ZV9fb3BfZGFfX21vZF9fb3BfMDFfX3g4N1s2
NF0gPSB7CiAgLyogMDAgKi8gIHsgVURfSWZjbW92YiwgICAgICBPX1NUMCwgICBPX1NUMCwgICBP
X05PTkUsICBQX25vbmUgfSwKICAvKiAwMSAqLyAgeyBVRF9JZmNtb3ZiLCAgICAgIE9fU1QwLCAg
IE9fU1QxLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDAyICovICB7IFVEX0lmY21vdmIsICAg
ICAgT19TVDAsICAgT19TVDIsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMDMgKi8gIHsgVURf
SWZjbW92YiwgICAgICBPX1NUMCwgICBPX1NUMywgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAw
NCAqLyAgeyBVRF9JZmNtb3ZiLCAgICAgIE9fU1QwLCAgIE9fU1Q0LCAgIE9fTk9ORSwgIFBfbm9u
ZSB9LAogIC8qIDA1ICovICB7IFVEX0lmY21vdmIsICAgICAgT19TVDAsICAgT19TVDUsICAgT19O
T05FLCAgUF9ub25lIH0sCiAgLyogMDYgKi8gIHsgVURfSWZjbW92YiwgICAgICBPX1NUMCwgICBP
X1NUNiwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAwNyAqLyAgeyBVRF9JZmNtb3ZiLCAgICAg
IE9fU1QwLCAgIE9fU1Q3LCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDA4ICovICB7IFVEX0lm
Y21vdmUsICAgICAgT19TVDAsICAgT19TVDAsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMDkg
Ki8gIHsgVURfSWZjbW92ZSwgICAgICBPX1NUMCwgICBPX1NUMSwgICBPX05PTkUsICBQX25vbmUg
fSwKICAvKiAwQSAqLyAgeyBVRF9JZmNtb3ZlLCAgICAgIE9fU1QwLCAgIE9fU1QyLCAgIE9fTk9O
RSwgIFBfbm9uZSB9LAogIC8qIDBCICovICB7IFVEX0lmY21vdmUsICAgICAgT19TVDAsICAgT19T
VDMsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMEMgKi8gIHsgVURfSWZjbW92ZSwgICAgICBP
X1NUMCwgICBPX1NUNCwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAwRCAqLyAgeyBVRF9JZmNt
b3ZlLCAgICAgIE9fU1QwLCAgIE9fU1Q1LCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDBFICov
ICB7IFVEX0lmY21vdmUsICAgICAgT19TVDAsICAgT19TVDYsICAgT19OT05FLCAgUF9ub25lIH0s
CiAgLyogMEYgKi8gIHsgVURfSWZjbW92ZSwgICAgICBPX1NUMCwgICBPX1NUNywgICBPX05PTkUs
ICBQX25vbmUgfSwKICAvKiAxMCAqLyAgeyBVRF9JZmNtb3ZiZSwgICAgIE9fU1QwLCAgIE9fU1Qw
LCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDExICovICB7IFVEX0lmY21vdmJlLCAgICAgT19T
VDAsICAgT19TVDEsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMTIgKi8gIHsgVURfSWZjbW92
YmUsICAgICBPX1NUMCwgICBPX1NUMiwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAxMyAqLyAg
eyBVRF9JZmNtb3ZiZSwgICAgIE9fU1QwLCAgIE9fU1QzLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAog
IC8qIDE0ICovICB7IFVEX0lmY21vdmJlLCAgICAgT19TVDAsICAgT19TVDQsICAgT19OT05FLCAg
UF9ub25lIH0sCiAgLyogMTUgKi8gIHsgVURfSWZjbW92YmUsICAgICBPX1NUMCwgICBPX1NUNSwg
ICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAxNiAqLyAgeyBVRF9JZmNtb3ZiZSwgICAgIE9fU1Qw
LCAgIE9fU1Q2LCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDE3ICovICB7IFVEX0lmY21vdmJl
LCAgICAgT19TVDAsICAgT19TVDcsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMTggKi8gIHsg
VURfSWZjbW92dSwgICAgICBPX1NUMCwgICBPX1NUMCwgICBPX05PTkUsICBQX25vbmUgfSwKICAv
KiAxOSAqLyAgeyBVRF9JZmNtb3Z1LCAgICAgIE9fU1QwLCAgIE9fU1QxLCAgIE9fTk9ORSwgIFBf
bm9uZSB9LAogIC8qIDFBICovICB7IFVEX0lmY21vdnUsICAgICAgT19TVDAsICAgT19TVDIsICAg
T19OT05FLCAgUF9ub25lIH0sCiAgLyogMUIgKi8gIHsgVURfSWZjbW92dSwgICAgICBPX1NUMCwg
ICBPX1NUMywgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAxQyAqLyAgeyBVRF9JZmNtb3Z1LCAg
ICAgIE9fU1QwLCAgIE9fU1Q0LCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDFEICovICB7IFVE
X0lmY21vdnUsICAgICAgT19TVDAsICAgT19TVDUsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyog
MUUgKi8gIHsgVURfSWZjbW92dSwgICAgICBPX1NUMCwgICBPX1NUNiwgICBPX05PTkUsICBQX25v
bmUgfSwKICAvKiAxRiAqLyAgeyBVRF9JZmNtb3Z1LCAgICAgIE9fU1QwLCAgIE9fU1Q3LCAgIE9f
Tk9ORSwgIFBfbm9uZSB9LAogIC8qIDIwICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBP
X05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMjEgKi8gIHsgVURfSWludmFsaWQsICAg
ICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAyMiAqLyAgeyBVRF9J
aW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDIz
ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25l
IH0sCiAgLyogMjQgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05F
LCAgICBQX25vbmUgfSwKICAvKiAyNSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19O
T05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDI2ICovICB7IFVEX0lpbnZhbGlkLCAgICAg
T19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMjcgKi8gIHsgVURfSWlu
dmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAyOCAq
LyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9
LAogIC8qIDI5ICovICB7IFVEX0lmdWNvbXBwLCAgICAgT19OT05FLCAgT19OT05FLCAgT19OT05F
LCAgUF9ub25lIH0sCiAgLyogMkEgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9O
RSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAyQiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9f
Tk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDJDICovICB7IFVEX0lpbnZh
bGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMkQgKi8g
IHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwK
ICAvKiAyRSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAg
IFBfbm9uZSB9LAogIC8qIDJGICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUs
IE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMzAgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05P
TkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAzMSAqLyAgeyBVRF9JaW52YWxp
ZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDMyICovICB7
IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAg
LyogMzMgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQ
X25vbmUgfSwKICAvKiAzNCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBP
X05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDM1ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05F
LCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMzYgKi8gIHsgVURfSWludmFsaWQs
ICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAzNyAqLyAgeyBV
RF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8q
IDM4ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9u
b25lIH0sCiAgLyogMzkgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19O
T05FLCAgICBQX25vbmUgfSwKICAvKiAzQSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwg
T19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDNCICovICB7IFVEX0lpbnZhbGlkLCAg
ICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogM0MgKi8gIHsgVURf
SWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAz
RCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9u
ZSB9LAogIC8qIDNFICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9O
RSwgICAgUF9ub25lIH0sCiAgLyogM0YgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9f
Tk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKfTsKCnN0YXRpYyBzdHJ1Y3QgdWRfaXRhYl9lbnRy
eSBpdGFiX18xYnl0ZV9fb3BfZGJfX21vZFsyXSA9IHsKICAvKiAwMCAqLyAgeyBVRF9JZ3JwX3Jl
ZywgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIElUQUJfXzFCWVRFX19PUF9EQl9fTU9E
X19PUF8wMF9fUkVHIH0sCiAgLyogMDEgKi8gIHsgVURfSWdycF94ODcsICAgICBPX05PTkUsIE9f
Tk9ORSwgT19OT05FLCAgICBJVEFCX18xQllURV9fT1BfREJfX01PRF9fT1BfMDFfX1g4NyB9LAp9
OwoKc3RhdGljIHN0cnVjdCB1ZF9pdGFiX2VudHJ5IGl0YWJfXzFieXRlX19vcF9kYl9fbW9kX19v
cF8wMF9fcmVnWzhdID0gewogIC8qIDAwICovICB7IFVEX0lmaWxkLCAgICAgICAgT19NZCwgICAg
T19OT05FLCAgT19OT05FLCAgUF9jMXxQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8q
IDAxICovICB7IFVEX0lmaXN0dHAsICAgICAgT19NZCwgICAgT19OT05FLCAgT19OT05FLCAgUF9j
MXxQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDAyICovICB7IFVEX0lmaXN0LCAg
ICAgICAgT19NZCwgICAgT19OT05FLCAgT19OT05FLCAgUF9jMXxQX2Fzb3xQX3JleHJ8UF9yZXh4
fFBfcmV4YiB9LAogIC8qIDAzICovICB7IFVEX0lmaXN0cCwgICAgICAgT19NZCwgICAgT19OT05F
LCAgT19OT05FLCAgUF9jMXxQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDA0ICov
ICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0s
CiAgLyogMDUgKi8gIHsgVURfSWZsZCwgICAgICAgICBPX010LCAgICBPX05PTkUsICBPX05PTkUs
ICBQX2MxfFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDYgKi8gIHsgVURfSWlu
dmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAwNyAq
LyAgeyBVRF9JZnN0cCwgICAgICAgIE9fTXQsICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfYzF8UF9h
c298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKfTsKCnN0YXRpYyBzdHJ1Y3QgdWRfaXRhYl9lbnRy
eSBpdGFiX18xYnl0ZV9fb3BfZGJfX21vZF9fb3BfMDFfX3g4N1s2NF0gPSB7CiAgLyogMDAgKi8g
IHsgVURfSWZjbW92bmIsICAgICBPX1NUMCwgICBPX1NUMCwgICBPX05PTkUsICBQX25vbmUgfSwK
ICAvKiAwMSAqLyAgeyBVRF9JZmNtb3ZuYiwgICAgIE9fU1QwLCAgIE9fU1QxLCAgIE9fTk9ORSwg
IFBfbm9uZSB9LAogIC8qIDAyICovICB7IFVEX0lmY21vdm5iLCAgICAgT19TVDAsICAgT19TVDIs
ICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMDMgKi8gIHsgVURfSWZjbW92bmIsICAgICBPX1NU
MCwgICBPX1NUMywgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAwNCAqLyAgeyBVRF9JZmNtb3Zu
YiwgICAgIE9fU1QwLCAgIE9fU1Q0LCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDA1ICovICB7
IFVEX0lmY21vdm5iLCAgICAgT19TVDAsICAgT19TVDUsICAgT19OT05FLCAgUF9ub25lIH0sCiAg
LyogMDYgKi8gIHsgVURfSWZjbW92bmIsICAgICBPX1NUMCwgICBPX1NUNiwgICBPX05PTkUsICBQ
X25vbmUgfSwKICAvKiAwNyAqLyAgeyBVRF9JZmNtb3ZuYiwgICAgIE9fU1QwLCAgIE9fU1Q3LCAg
IE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDA4ICovICB7IFVEX0lmY21vdm5lLCAgICAgT19TVDAs
ICAgT19TVDAsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMDkgKi8gIHsgVURfSWZjbW92bmUs
ICAgICBPX1NUMCwgICBPX1NUMSwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAwQSAqLyAgeyBV
RF9JZmNtb3ZuZSwgICAgIE9fU1QwLCAgIE9fU1QyLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8q
IDBCICovICB7IFVEX0lmY21vdm5lLCAgICAgT19TVDAsICAgT19TVDMsICAgT19OT05FLCAgUF9u
b25lIH0sCiAgLyogMEMgKi8gIHsgVURfSWZjbW92bmUsICAgICBPX1NUMCwgICBPX1NUNCwgICBP
X05PTkUsICBQX25vbmUgfSwKICAvKiAwRCAqLyAgeyBVRF9JZmNtb3ZuZSwgICAgIE9fU1QwLCAg
IE9fU1Q1LCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDBFICovICB7IFVEX0lmY21vdm5lLCAg
ICAgT19TVDAsICAgT19TVDYsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMEYgKi8gIHsgVURf
SWZjbW92bmUsICAgICBPX1NUMCwgICBPX1NUNywgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAx
MCAqLyAgeyBVRF9JZmNtb3ZuYmUsICAgIE9fU1QwLCAgIE9fU1QwLCAgIE9fTk9ORSwgIFBfbm9u
ZSB9LAogIC8qIDExICovICB7IFVEX0lmY21vdm5iZSwgICAgT19TVDAsICAgT19TVDEsICAgT19O
T05FLCAgUF9ub25lIH0sCiAgLyogMTIgKi8gIHsgVURfSWZjbW92bmJlLCAgICBPX1NUMCwgICBP
X1NUMiwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAxMyAqLyAgeyBVRF9JZmNtb3ZuYmUsICAg
IE9fU1QwLCAgIE9fU1QzLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDE0ICovICB7IFVEX0lm
Y21vdm5iZSwgICAgT19TVDAsICAgT19TVDQsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMTUg
Ki8gIHsgVURfSWZjbW92bmJlLCAgICBPX1NUMCwgICBPX1NUNSwgICBPX05PTkUsICBQX25vbmUg
fSwKICAvKiAxNiAqLyAgeyBVRF9JZmNtb3ZuYmUsICAgIE9fU1QwLCAgIE9fU1Q2LCAgIE9fTk9O
RSwgIFBfbm9uZSB9LAogIC8qIDE3ICovICB7IFVEX0lmY21vdm5iZSwgICAgT19TVDAsICAgT19T
VDcsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMTggKi8gIHsgVURfSWZjbW92bnUsICAgICBP
X1NUMCwgICBPX1NUMCwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAxOSAqLyAgeyBVRF9JZmNt
b3ZudSwgICAgIE9fU1QwLCAgIE9fU1QxLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDFBICov
ICB7IFVEX0lmY21vdm51LCAgICAgT19TVDAsICAgT19TVDIsICAgT19OT05FLCAgUF9ub25lIH0s
CiAgLyogMUIgKi8gIHsgVURfSWZjbW92bnUsICAgICBPX1NUMCwgICBPX1NUMywgICBPX05PTkUs
ICBQX25vbmUgfSwKICAvKiAxQyAqLyAgeyBVRF9JZmNtb3ZudSwgICAgIE9fU1QwLCAgIE9fU1Q0
LCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDFEICovICB7IFVEX0lmY21vdm51LCAgICAgT19T
VDAsICAgT19TVDUsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMUUgKi8gIHsgVURfSWZjbW92
bnUsICAgICBPX1NUMCwgICBPX1NUNiwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAxRiAqLyAg
eyBVRF9JZmNtb3ZudSwgICAgIE9fU1QwLCAgIE9fU1Q3LCAgIE9fTk9ORSwgIFBfbm9uZSB9LAog
IC8qIDIwICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAg
UF9ub25lIH0sCiAgLyogMjEgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwg
T19OT05FLCAgICBQX25vbmUgfSwKICAvKiAyMiAqLyAgeyBVRF9JZmNsZXgsICAgICAgIE9fTk9O
RSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDIzICovICB7IFVEX0lmbmluaXQs
ICAgICAgT19OT05FLCAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMjQgKi8gIHsg
VURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAv
KiAyNSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBf
bm9uZSB9LAogIC8qIDI2ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9f
Tk9ORSwgICAgUF9ub25lIH0sCiAgLyogMjcgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUs
IE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAyOCAqLyAgeyBVRF9JZnVjb21pLCAg
ICAgIE9fU1QwLCAgIE9fU1QwLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDI5ICovICB7IFVE
X0lmdWNvbWksICAgICAgT19TVDAsICAgT19TVDEsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyog
MkEgKi8gIHsgVURfSWZ1Y29taSwgICAgICBPX1NUMCwgICBPX1NUMiwgICBPX05PTkUsICBQX25v
bmUgfSwKICAvKiAyQiAqLyAgeyBVRF9JZnVjb21pLCAgICAgIE9fU1QwLCAgIE9fU1QzLCAgIE9f
Tk9ORSwgIFBfbm9uZSB9LAogIC8qIDJDICovICB7IFVEX0lmdWNvbWksICAgICAgT19TVDAsICAg
T19TVDQsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMkQgKi8gIHsgVURfSWZ1Y29taSwgICAg
ICBPX1NUMCwgICBPX1NUNSwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAyRSAqLyAgeyBVRF9J
ZnVjb21pLCAgICAgIE9fU1QwLCAgIE9fU1Q2LCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDJG
ICovICB7IFVEX0lmdWNvbWksICAgICAgT19TVDAsICAgT19TVDcsICAgT19OT05FLCAgUF9ub25l
IH0sCiAgLyogMzAgKi8gIHsgVURfSWZjb21pLCAgICAgICBPX1NUMCwgICBPX1NUMCwgICBPX05P
TkUsICBQX25vbmUgfSwKICAvKiAzMSAqLyAgeyBVRF9JZmNvbWksICAgICAgIE9fU1QwLCAgIE9f
U1QxLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDMyICovICB7IFVEX0lmY29taSwgICAgICAg
T19TVDAsICAgT19TVDIsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMzMgKi8gIHsgVURfSWZj
b21pLCAgICAgICBPX1NUMCwgICBPX1NUMywgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAzNCAq
LyAgeyBVRF9JZmNvbWksICAgICAgIE9fU1QwLCAgIE9fU1Q0LCAgIE9fTk9ORSwgIFBfbm9uZSB9
LAogIC8qIDM1ICovICB7IFVEX0lmY29taSwgICAgICAgT19TVDAsICAgT19TVDUsICAgT19OT05F
LCAgUF9ub25lIH0sCiAgLyogMzYgKi8gIHsgVURfSWZjb21pLCAgICAgICBPX1NUMCwgICBPX1NU
NiwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAzNyAqLyAgeyBVRF9JZmNvbWksICAgICAgIE9f
U1QwLCAgIE9fU1Q3LCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDM4ICovICB7IFVEX0lpbnZh
bGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMzkgKi8g
IHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwK
ICAvKiAzQSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAg
IFBfbm9uZSB9LAogIC8qIDNCICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUs
IE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogM0MgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05P
TkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAzRCAqLyAgeyBVRF9JaW52YWxp
ZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDNFICovICB7
IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAg
LyogM0YgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQ
X25vbmUgfSwKfTsKCnN0YXRpYyBzdHJ1Y3QgdWRfaXRhYl9lbnRyeSBpdGFiX18xYnl0ZV9fb3Bf
ZGNfX21vZFsyXSA9IHsKICAvKiAwMCAqLyAgeyBVRF9JZ3JwX3JlZywgICAgIE9fTk9ORSwgT19O
T05FLCBPX05PTkUsICAgIElUQUJfXzFCWVRFX19PUF9EQ19fTU9EX19PUF8wMF9fUkVHIH0sCiAg
LyogMDEgKi8gIHsgVURfSWdycF94ODcsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBJ
VEFCX18xQllURV9fT1BfRENfX01PRF9fT1BfMDFfX1g4NyB9LAp9OwoKc3RhdGljIHN0cnVjdCB1
ZF9pdGFiX2VudHJ5IGl0YWJfXzFieXRlX19vcF9kY19fbW9kX19vcF8wMF9fcmVnWzhdID0gewog
IC8qIDAwICovICB7IFVEX0lmYWRkLCAgICAgICAgT19NcSwgICAgT19OT05FLCAgT19OT05FLCAg
UF9jMXxQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDAxICovICB7IFVEX0lmbXVs
LCAgICAgICAgT19NcSwgICAgT19OT05FLCAgT19OT05FLCAgUF9jMXxQX2Fzb3xQX3JleHJ8UF9y
ZXh4fFBfcmV4YiB9LAogIC8qIDAyICovICB7IFVEX0lmY29tLCAgICAgICAgT19NcSwgICAgT19O
T05FLCAgT19OT05FLCAgUF9jMXxQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDAz
ICovICB7IFVEX0lmY29tcCwgICAgICAgT19NcSwgICAgT19OT05FLCAgT19OT05FLCAgUF9jMXxQ
X2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDA0ICovICB7IFVEX0lmc3ViLCAgICAg
ICAgT19NcSwgICAgT19OT05FLCAgT19OT05FLCAgUF9jMXxQX2Fzb3xQX3JleHJ8UF9yZXh4fFBf
cmV4YiB9LAogIC8qIDA1ICovICB7IFVEX0lmc3ViciwgICAgICAgT19NcSwgICAgT19OT05FLCAg
T19OT05FLCAgUF9jMXxQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDA2ICovICB7
IFVEX0lmZGl2LCAgICAgICAgT19NcSwgICAgT19OT05FLCAgT19OT05FLCAgUF9jMXxQX2Fzb3xQ
X3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDA3ICovICB7IFVEX0lmZGl2ciwgICAgICAgT19N
cSwgICAgT19OT05FLCAgT19OT05FLCAgUF9jMXxQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9
LAp9OwoKc3RhdGljIHN0cnVjdCB1ZF9pdGFiX2VudHJ5IGl0YWJfXzFieXRlX19vcF9kY19fbW9k
X19vcF8wMV9feDg3WzY0XSA9IHsKICAvKiAwMCAqLyAgeyBVRF9JZmFkZCwgICAgICAgIE9fU1Qw
LCAgIE9fU1QwLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDAxICovICB7IFVEX0lmYWRkLCAg
ICAgICAgT19TVDEsICAgT19TVDAsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMDIgKi8gIHsg
VURfSWZhZGQsICAgICAgICBPX1NUMiwgICBPX1NUMCwgICBPX05PTkUsICBQX25vbmUgfSwKICAv
KiAwMyAqLyAgeyBVRF9JZmFkZCwgICAgICAgIE9fU1QzLCAgIE9fU1QwLCAgIE9fTk9ORSwgIFBf
bm9uZSB9LAogIC8qIDA0ICovICB7IFVEX0lmYWRkLCAgICAgICAgT19TVDQsICAgT19TVDAsICAg
T19OT05FLCAgUF9ub25lIH0sCiAgLyogMDUgKi8gIHsgVURfSWZhZGQsICAgICAgICBPX1NUNSwg
ICBPX1NUMCwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAwNiAqLyAgeyBVRF9JZmFkZCwgICAg
ICAgIE9fU1Q2LCAgIE9fU1QwLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDA3ICovICB7IFVE
X0lmYWRkLCAgICAgICAgT19TVDcsICAgT19TVDAsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyog
MDggKi8gIHsgVURfSWZtdWwsICAgICAgICBPX1NUMCwgICBPX1NUMCwgICBPX05PTkUsICBQX25v
bmUgfSwKICAvKiAwOSAqLyAgeyBVRF9JZm11bCwgICAgICAgIE9fU1QxLCAgIE9fU1QwLCAgIE9f
Tk9ORSwgIFBfbm9uZSB9LAogIC8qIDBBICovICB7IFVEX0lmbXVsLCAgICAgICAgT19TVDIsICAg
T19TVDAsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMEIgKi8gIHsgVURfSWZtdWwsICAgICAg
ICBPX1NUMywgICBPX1NUMCwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAwQyAqLyAgeyBVRF9J
Zm11bCwgICAgICAgIE9fU1Q0LCAgIE9fU1QwLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDBE
ICovICB7IFVEX0lmbXVsLCAgICAgICAgT19TVDUsICAgT19TVDAsICAgT19OT05FLCAgUF9ub25l
IH0sCiAgLyogMEUgKi8gIHsgVURfSWZtdWwsICAgICAgICBPX1NUNiwgICBPX1NUMCwgICBPX05P
TkUsICBQX25vbmUgfSwKICAvKiAwRiAqLyAgeyBVRF9JZm11bCwgICAgICAgIE9fU1Q3LCAgIE9f
U1QwLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDEwICovICB7IFVEX0lmY29tMiwgICAgICAg
T19TVDAsICAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMTEgKi8gIHsgVURfSWZj
b20yLCAgICAgICBPX1NUMSwgICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAxMiAq
LyAgeyBVRF9JZmNvbTIsICAgICAgIE9fU1QyLCAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9
LAogIC8qIDEzICovICB7IFVEX0lmY29tMiwgICAgICAgT19TVDMsICAgT19OT05FLCAgT19OT05F
LCAgUF9ub25lIH0sCiAgLyogMTQgKi8gIHsgVURfSWZjb20yLCAgICAgICBPX1NUNCwgICBPX05P
TkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAxNSAqLyAgeyBVRF9JZmNvbTIsICAgICAgIE9f
U1Q1LCAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDE2ICovICB7IFVEX0lmY29t
MiwgICAgICAgT19TVDYsICAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMTcgKi8g
IHsgVURfSWZjb20yLCAgICAgICBPX1NUNywgICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwK
ICAvKiAxOCAqLyAgeyBVRF9JZmNvbXAzLCAgICAgIE9fU1QwLCAgIE9fTk9ORSwgIE9fTk9ORSwg
IFBfbm9uZSB9LAogIC8qIDE5ICovICB7IFVEX0lmY29tcDMsICAgICAgT19TVDEsICAgT19OT05F
LCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMUEgKi8gIHsgVURfSWZjb21wMywgICAgICBPX1NU
MiwgICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAxQiAqLyAgeyBVRF9JZmNvbXAz
LCAgICAgIE9fU1QzLCAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDFDICovICB7
IFVEX0lmY29tcDMsICAgICAgT19TVDQsICAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAg
LyogMUQgKi8gIHsgVURfSWZjb21wMywgICAgICBPX1NUNSwgICBPX05PTkUsICBPX05PTkUsICBQ
X25vbmUgfSwKICAvKiAxRSAqLyAgeyBVRF9JZmNvbXAzLCAgICAgIE9fU1Q2LCAgIE9fTk9ORSwg
IE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDFGICovICB7IFVEX0lmY29tcDMsICAgICAgT19TVDcs
ICAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMjAgKi8gIHsgVURfSWZzdWJyLCAg
ICAgICBPX1NUMCwgICBPX1NUMCwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAyMSAqLyAgeyBV
RF9JZnN1YnIsICAgICAgIE9fU1QxLCAgIE9fU1QwLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8q
IDIyICovICB7IFVEX0lmc3ViciwgICAgICAgT19TVDIsICAgT19TVDAsICAgT19OT05FLCAgUF9u
b25lIH0sCiAgLyogMjMgKi8gIHsgVURfSWZzdWJyLCAgICAgICBPX1NUMywgICBPX1NUMCwgICBP
X05PTkUsICBQX25vbmUgfSwKICAvKiAyNCAqLyAgeyBVRF9JZnN1YnIsICAgICAgIE9fU1Q0LCAg
IE9fU1QwLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDI1ICovICB7IFVEX0lmc3ViciwgICAg
ICAgT19TVDUsICAgT19TVDAsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMjYgKi8gIHsgVURf
SWZzdWJyLCAgICAgICBPX1NUNiwgICBPX1NUMCwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAy
NyAqLyAgeyBVRF9JZnN1YnIsICAgICAgIE9fU1Q3LCAgIE9fU1QwLCAgIE9fTk9ORSwgIFBfbm9u
ZSB9LAogIC8qIDI4ICovICB7IFVEX0lmc3ViLCAgICAgICAgT19TVDAsICAgT19TVDAsICAgT19O
T05FLCAgUF9ub25lIH0sCiAgLyogMjkgKi8gIHsgVURfSWZzdWIsICAgICAgICBPX1NUMSwgICBP
X1NUMCwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAyQSAqLyAgeyBVRF9JZnN1YiwgICAgICAg
IE9fU1QyLCAgIE9fU1QwLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDJCICovICB7IFVEX0lm
c3ViLCAgICAgICAgT19TVDMsICAgT19TVDAsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMkMg
Ki8gIHsgVURfSWZzdWIsICAgICAgICBPX1NUNCwgICBPX1NUMCwgICBPX05PTkUsICBQX25vbmUg
fSwKICAvKiAyRCAqLyAgeyBVRF9JZnN1YiwgICAgICAgIE9fU1Q1LCAgIE9fU1QwLCAgIE9fTk9O
RSwgIFBfbm9uZSB9LAogIC8qIDJFICovICB7IFVEX0lmc3ViLCAgICAgICAgT19TVDYsICAgT19T
VDAsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMkYgKi8gIHsgVURfSWZzdWIsICAgICAgICBP
X1NUNywgICBPX1NUMCwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAzMCAqLyAgeyBVRF9JZmRp
dnIsICAgICAgIE9fU1QwLCAgIE9fU1QwLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDMxICov
ICB7IFVEX0lmZGl2ciwgICAgICAgT19TVDEsICAgT19TVDAsICAgT19OT05FLCAgUF9ub25lIH0s
CiAgLyogMzIgKi8gIHsgVURfSWZkaXZyLCAgICAgICBPX1NUMiwgICBPX1NUMCwgICBPX05PTkUs
ICBQX25vbmUgfSwKICAvKiAzMyAqLyAgeyBVRF9JZmRpdnIsICAgICAgIE9fU1QzLCAgIE9fU1Qw
LCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDM0ICovICB7IFVEX0lmZGl2ciwgICAgICAgT19T
VDQsICAgT19TVDAsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMzUgKi8gIHsgVURfSWZkaXZy
LCAgICAgICBPX1NUNSwgICBPX1NUMCwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAzNiAqLyAg
eyBVRF9JZmRpdnIsICAgICAgIE9fU1Q2LCAgIE9fU1QwLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAog
IC8qIDM3ICovICB7IFVEX0lmZGl2ciwgICAgICAgT19TVDcsICAgT19TVDAsICAgT19OT05FLCAg
UF9ub25lIH0sCiAgLyogMzggKi8gIHsgVURfSWZkaXYsICAgICAgICBPX1NUMCwgICBPX1NUMCwg
ICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAzOSAqLyAgeyBVRF9JZmRpdiwgICAgICAgIE9fU1Qx
LCAgIE9fU1QwLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDNBICovICB7IFVEX0lmZGl2LCAg
ICAgICAgT19TVDIsICAgT19TVDAsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogM0IgKi8gIHsg
VURfSWZkaXYsICAgICAgICBPX1NUMywgICBPX1NUMCwgICBPX05PTkUsICBQX25vbmUgfSwKICAv
KiAzQyAqLyAgeyBVRF9JZmRpdiwgICAgICAgIE9fU1Q0LCAgIE9fU1QwLCAgIE9fTk9ORSwgIFBf
bm9uZSB9LAogIC8qIDNEICovICB7IFVEX0lmZGl2LCAgICAgICAgT19TVDUsICAgT19TVDAsICAg
T19OT05FLCAgUF9ub25lIH0sCiAgLyogM0UgKi8gIHsgVURfSWZkaXYsICAgICAgICBPX1NUNiwg
ICBPX1NUMCwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAzRiAqLyAgeyBVRF9JZmRpdiwgICAg
ICAgIE9fU1Q3LCAgIE9fU1QwLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAp9OwoKc3RhdGljIHN0cnVj
dCB1ZF9pdGFiX2VudHJ5IGl0YWJfXzFieXRlX19vcF9kZF9fbW9kWzJdID0gewogIC8qIDAwICov
ICB7IFVEX0lncnBfcmVnLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgSVRBQl9fMUJZ
VEVfX09QX0REX19NT0RfX09QXzAwX19SRUcgfSwKICAvKiAwMSAqLyAgeyBVRF9JZ3JwX3g4Nywg
ICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIElUQUJfXzFCWVRFX19PUF9ERF9fTU9EX19P
UF8wMV9fWDg3IH0sCn07CgpzdGF0aWMgc3RydWN0IHVkX2l0YWJfZW50cnkgaXRhYl9fMWJ5dGVf
X29wX2RkX19tb2RfX29wXzAwX19yZWdbOF0gPSB7CiAgLyogMDAgKi8gIHsgVURfSWZsZCwgICAg
ICAgICBPX01xLCAgICBPX05PTkUsICBPX05PTkUsICBQX2MxfFBfYXNvfFBfcmV4cnxQX3JleHh8
UF9yZXhiIH0sCiAgLyogMDEgKi8gIHsgVURfSWZpc3R0cCwgICAgICBPX01xLCAgICBPX05PTkUs
ICBPX05PTkUsICBQX2MxfFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDIgKi8g
IHsgVURfSWZzdCwgICAgICAgICBPX01xLCAgICBPX05PTkUsICBPX05PTkUsICBQX2MxfFBfYXNv
fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDMgKi8gIHsgVURfSWZzdHAsICAgICAgICBP
X01xLCAgICBPX05PTkUsICBPX05PTkUsICBQX2MxfFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhi
IH0sCiAgLyogMDQgKi8gIHsgVURfSWZyc3RvciwgICAgICBPX00sICAgICBPX05PTkUsICBPX05P
TkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDA1ICovICB7IFVEX0lpbnZh
bGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMDYgKi8g
IHsgVURfSWZuc2F2ZSwgICAgICBPX00sICAgICBPX05PTkUsICBPX05PTkUsICBQX2Fzb3xQX3Jl
eHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDA3ICovICB7IFVEX0lmbnN0c3csICAgICAgT19Ndywg
ICAgT19OT05FLCAgT19OT05FLCAgUF9jMXxQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAp9
OwoKc3RhdGljIHN0cnVjdCB1ZF9pdGFiX2VudHJ5IGl0YWJfXzFieXRlX19vcF9kZF9fbW9kX19v
cF8wMV9feDg3WzY0XSA9IHsKICAvKiAwMCAqLyAgeyBVRF9JZmZyZWUsICAgICAgIE9fU1QwLCAg
IE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDAxICovICB7IFVEX0lmZnJlZSwgICAg
ICAgT19TVDEsICAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMDIgKi8gIHsgVURf
SWZmcmVlLCAgICAgICBPX1NUMiwgICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAw
MyAqLyAgeyBVRF9JZmZyZWUsICAgICAgIE9fU1QzLCAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9u
ZSB9LAogIC8qIDA0ICovICB7IFVEX0lmZnJlZSwgICAgICAgT19TVDQsICAgT19OT05FLCAgT19O
T05FLCAgUF9ub25lIH0sCiAgLyogMDUgKi8gIHsgVURfSWZmcmVlLCAgICAgICBPX1NUNSwgICBP
X05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAwNiAqLyAgeyBVRF9JZmZyZWUsICAgICAg
IE9fU1Q2LCAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDA3ICovICB7IFVEX0lm
ZnJlZSwgICAgICAgT19TVDcsICAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMDgg
Ki8gIHsgVURfSWZ4Y2g0LCAgICAgICBPX1NUMCwgICBPX05PTkUsICBPX05PTkUsICBQX25vbmUg
fSwKICAvKiAwOSAqLyAgeyBVRF9JZnhjaDQsICAgICAgIE9fU1QxLCAgIE9fTk9ORSwgIE9fTk9O
RSwgIFBfbm9uZSB9LAogIC8qIDBBICovICB7IFVEX0lmeGNoNCwgICAgICAgT19TVDIsICAgT19O
T05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMEIgKi8gIHsgVURfSWZ4Y2g0LCAgICAgICBP
X1NUMywgICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAwQyAqLyAgeyBVRF9JZnhj
aDQsICAgICAgIE9fU1Q0LCAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDBEICov
ICB7IFVEX0lmeGNoNCwgICAgICAgT19TVDUsICAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0s
CiAgLyogMEUgKi8gIHsgVURfSWZ4Y2g0LCAgICAgICBPX1NUNiwgICBPX05PTkUsICBPX05PTkUs
ICBQX25vbmUgfSwKICAvKiAwRiAqLyAgeyBVRF9JZnhjaDQsICAgICAgIE9fU1Q3LCAgIE9fTk9O
RSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDEwICovICB7IFVEX0lmc3QsICAgICAgICAgT19T
VDAsICAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMTEgKi8gIHsgVURfSWZzdCwg
ICAgICAgICBPX1NUMSwgICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAxMiAqLyAg
eyBVRF9JZnN0LCAgICAgICAgIE9fU1QyLCAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAog
IC8qIDEzICovICB7IFVEX0lmc3QsICAgICAgICAgT19TVDMsICAgT19OT05FLCAgT19OT05FLCAg
UF9ub25lIH0sCiAgLyogMTQgKi8gIHsgVURfSWZzdCwgICAgICAgICBPX1NUNCwgICBPX05PTkUs
ICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAxNSAqLyAgeyBVRF9JZnN0LCAgICAgICAgIE9fU1Q1
LCAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDE2ICovICB7IFVEX0lmc3QsICAg
ICAgICAgT19TVDYsICAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMTcgKi8gIHsg
VURfSWZzdCwgICAgICAgICBPX1NUNywgICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAv
KiAxOCAqLyAgeyBVRF9JZnN0cCwgICAgICAgIE9fU1QwLCAgIE9fTk9ORSwgIE9fTk9ORSwgIFBf
bm9uZSB9LAogIC8qIDE5ICovICB7IFVEX0lmc3RwLCAgICAgICAgT19TVDEsICAgT19OT05FLCAg
T19OT05FLCAgUF9ub25lIH0sCiAgLyogMUEgKi8gIHsgVURfSWZzdHAsICAgICAgICBPX1NUMiwg
ICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAxQiAqLyAgeyBVRF9JZnN0cCwgICAg
ICAgIE9fU1QzLCAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDFDICovICB7IFVE
X0lmc3RwLCAgICAgICAgT19TVDQsICAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyog
MUQgKi8gIHsgVURfSWZzdHAsICAgICAgICBPX1NUNSwgICBPX05PTkUsICBPX05PTkUsICBQX25v
bmUgfSwKICAvKiAxRSAqLyAgeyBVRF9JZnN0cCwgICAgICAgIE9fU1Q2LCAgIE9fTk9ORSwgIE9f
Tk9ORSwgIFBfbm9uZSB9LAogIC8qIDFGICovICB7IFVEX0lmc3RwLCAgICAgICAgT19TVDcsICAg
T19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMjAgKi8gIHsgVURfSWZ1Y29tLCAgICAg
ICBPX1NUMCwgICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAyMSAqLyAgeyBVRF9J
ZnVjb20sICAgICAgIE9fU1QxLCAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDIy
ICovICB7IFVEX0lmdWNvbSwgICAgICAgT19TVDIsICAgT19OT05FLCAgT19OT05FLCAgUF9ub25l
IH0sCiAgLyogMjMgKi8gIHsgVURfSWZ1Y29tLCAgICAgICBPX1NUMywgICBPX05PTkUsICBPX05P
TkUsICBQX25vbmUgfSwKICAvKiAyNCAqLyAgeyBVRF9JZnVjb20sICAgICAgIE9fU1Q0LCAgIE9f
Tk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDI1ICovICB7IFVEX0lmdWNvbSwgICAgICAg
T19TVDUsICAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMjYgKi8gIHsgVURfSWZ1
Y29tLCAgICAgICBPX1NUNiwgICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAyNyAq
LyAgeyBVRF9JZnVjb20sICAgICAgIE9fU1Q3LCAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9
LAogIC8qIDI4ICovICB7IFVEX0lmdWNvbXAsICAgICAgT19TVDAsICAgT19OT05FLCAgT19OT05F
LCAgUF9ub25lIH0sCiAgLyogMjkgKi8gIHsgVURfSWZ1Y29tcCwgICAgICBPX1NUMSwgICBPX05P
TkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAyQSAqLyAgeyBVRF9JZnVjb21wLCAgICAgIE9f
U1QyLCAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDJCICovICB7IFVEX0lmdWNv
bXAsICAgICAgT19TVDMsICAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMkMgKi8g
IHsgVURfSWZ1Y29tcCwgICAgICBPX1NUNCwgICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwK
ICAvKiAyRCAqLyAgeyBVRF9JZnVjb21wLCAgICAgIE9fU1Q1LCAgIE9fTk9ORSwgIE9fTk9ORSwg
IFBfbm9uZSB9LAogIC8qIDJFICovICB7IFVEX0lmdWNvbXAsICAgICAgT19TVDYsICAgT19OT05F
LCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMkYgKi8gIHsgVURfSWZ1Y29tcCwgICAgICBPX1NU
NywgICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAzMCAqLyAgeyBVRF9JaW52YWxp
ZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDMxICovICB7
IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAg
LyogMzIgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQ
X25vbmUgfSwKICAvKiAzMyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBP
X05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDM0ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05F
LCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMzUgKi8gIHsgVURfSWludmFsaWQs
ICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAzNiAqLyAgeyBV
RF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8q
IDM3ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9u
b25lIH0sCiAgLyogMzggKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19O
T05FLCAgICBQX25vbmUgfSwKICAvKiAzOSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwg
T19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDNBICovICB7IFVEX0lpbnZhbGlkLCAg
ICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogM0IgKi8gIHsgVURf
SWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAz
QyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9u
ZSB9LAogIC8qIDNEICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9O
RSwgICAgUF9ub25lIH0sCiAgLyogM0UgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9f
Tk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAzRiAqLyAgeyBVRF9JaW52YWxpZCwgICAg
IE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAp9OwoKc3RhdGljIHN0cnVjdCB1
ZF9pdGFiX2VudHJ5IGl0YWJfXzFieXRlX19vcF9kZV9fbW9kWzJdID0gewogIC8qIDAwICovICB7
IFVEX0lncnBfcmVnLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgSVRBQl9fMUJZVEVf
X09QX0RFX19NT0RfX09QXzAwX19SRUcgfSwKICAvKiAwMSAqLyAgeyBVRF9JZ3JwX3g4NywgICAg
IE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIElUQUJfXzFCWVRFX19PUF9ERV9fTU9EX19PUF8w
MV9fWDg3IH0sCn07CgpzdGF0aWMgc3RydWN0IHVkX2l0YWJfZW50cnkgaXRhYl9fMWJ5dGVfX29w
X2RlX19tb2RfX29wXzAwX19yZWdbOF0gPSB7CiAgLyogMDAgKi8gIHsgVURfSWZpYWRkLCAgICAg
ICBPX013LCAgICBPX05PTkUsICBPX05PTkUsICBQX2MxfFBfYXNvfFBfcmV4cnxQX3JleHh8UF9y
ZXhiIH0sCiAgLyogMDEgKi8gIHsgVURfSWZpbXVsLCAgICAgICBPX013LCAgICBPX05PTkUsICBP
X05PTkUsICBQX2MxfFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDIgKi8gIHsg
VURfSWZpY29tLCAgICAgICBPX013LCAgICBPX05PTkUsICBPX05PTkUsICBQX2MxfFBfYXNvfFBf
cmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDMgKi8gIHsgVURfSWZpY29tcCwgICAgICBPX013
LCAgICBPX05PTkUsICBPX05PTkUsICBQX2MxfFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0s
CiAgLyogMDQgKi8gIHsgVURfSWZpc3ViLCAgICAgICBPX013LCAgICBPX05PTkUsICBPX05PTkUs
ICBQX2MxfFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDUgKi8gIHsgVURfSWZp
c3ViciwgICAgICBPX013LCAgICBPX05PTkUsICBPX05PTkUsICBQX2MxfFBfYXNvfFBfcmV4cnxQ
X3JleHh8UF9yZXhiIH0sCiAgLyogMDYgKi8gIHsgVURfSWZpZGl2LCAgICAgICBPX013LCAgICBP
X05PTkUsICBPX05PTkUsICBQX2MxfFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyog
MDcgKi8gIHsgVURfSWZpZGl2ciwgICAgICBPX013LCAgICBPX05PTkUsICBPX05PTkUsICBQX2Mx
fFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCn07CgpzdGF0aWMgc3RydWN0IHVkX2l0YWJf
ZW50cnkgaXRhYl9fMWJ5dGVfX29wX2RlX19tb2RfX29wXzAxX194ODdbNjRdID0gewogIC8qIDAw
ICovICB7IFVEX0lmYWRkcCwgICAgICAgT19TVDAsICAgT19TVDAsICAgT19OT05FLCAgUF9ub25l
IH0sCiAgLyogMDEgKi8gIHsgVURfSWZhZGRwLCAgICAgICBPX1NUMSwgICBPX1NUMCwgICBPX05P
TkUsICBQX25vbmUgfSwKICAvKiAwMiAqLyAgeyBVRF9JZmFkZHAsICAgICAgIE9fU1QyLCAgIE9f
U1QwLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDAzICovICB7IFVEX0lmYWRkcCwgICAgICAg
T19TVDMsICAgT19TVDAsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMDQgKi8gIHsgVURfSWZh
ZGRwLCAgICAgICBPX1NUNCwgICBPX1NUMCwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAwNSAq
LyAgeyBVRF9JZmFkZHAsICAgICAgIE9fU1Q1LCAgIE9fU1QwLCAgIE9fTk9ORSwgIFBfbm9uZSB9
LAogIC8qIDA2ICovICB7IFVEX0lmYWRkcCwgICAgICAgT19TVDYsICAgT19TVDAsICAgT19OT05F
LCAgUF9ub25lIH0sCiAgLyogMDcgKi8gIHsgVURfSWZhZGRwLCAgICAgICBPX1NUNywgICBPX1NU
MCwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAwOCAqLyAgeyBVRF9JZm11bHAsICAgICAgIE9f
U1QwLCAgIE9fU1QwLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDA5ICovICB7IFVEX0lmbXVs
cCwgICAgICAgT19TVDEsICAgT19TVDAsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMEEgKi8g
IHsgVURfSWZtdWxwLCAgICAgICBPX1NUMiwgICBPX1NUMCwgICBPX05PTkUsICBQX25vbmUgfSwK
ICAvKiAwQiAqLyAgeyBVRF9JZm11bHAsICAgICAgIE9fU1QzLCAgIE9fU1QwLCAgIE9fTk9ORSwg
IFBfbm9uZSB9LAogIC8qIDBDICovICB7IFVEX0lmbXVscCwgICAgICAgT19TVDQsICAgT19TVDAs
ICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMEQgKi8gIHsgVURfSWZtdWxwLCAgICAgICBPX1NU
NSwgICBPX1NUMCwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAwRSAqLyAgeyBVRF9JZm11bHAs
ICAgICAgIE9fU1Q2LCAgIE9fU1QwLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDBGICovICB7
IFVEX0lmbXVscCwgICAgICAgT19TVDcsICAgT19TVDAsICAgT19OT05FLCAgUF9ub25lIH0sCiAg
LyogMTAgKi8gIHsgVURfSWZjb21wNSwgICAgICBPX1NUMCwgICBPX05PTkUsICBPX05PTkUsICBQ
X25vbmUgfSwKICAvKiAxMSAqLyAgeyBVRF9JZmNvbXA1LCAgICAgIE9fU1QxLCAgIE9fTk9ORSwg
IE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDEyICovICB7IFVEX0lmY29tcDUsICAgICAgT19TVDIs
ICAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMTMgKi8gIHsgVURfSWZjb21wNSwg
ICAgICBPX1NUMywgICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAxNCAqLyAgeyBV
RF9JZmNvbXA1LCAgICAgIE9fU1Q0LCAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8q
IDE1ICovICB7IFVEX0lmY29tcDUsICAgICAgT19TVDUsICAgT19OT05FLCAgT19OT05FLCAgUF9u
b25lIH0sCiAgLyogMTYgKi8gIHsgVURfSWZjb21wNSwgICAgICBPX1NUNiwgICBPX05PTkUsICBP
X05PTkUsICBQX25vbmUgfSwKICAvKiAxNyAqLyAgeyBVRF9JZmNvbXA1LCAgICAgIE9fU1Q3LCAg
IE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDE4ICovICB7IFVEX0lpbnZhbGlkLCAg
ICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMTkgKi8gIHsgVURf
SWZjb21wcCwgICAgICBPX05PTkUsICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAx
QSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9u
ZSB9LAogIC8qIDFCICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9O
RSwgICAgUF9ub25lIH0sCiAgLyogMUMgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9f
Tk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAxRCAqLyAgeyBVRF9JaW52YWxpZCwgICAg
IE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDFFICovICB7IFVEX0lp
bnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMUYg
Ki8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUg
fSwKICAvKiAyMCAqLyAgeyBVRF9JZnN1YnJwLCAgICAgIE9fU1QwLCAgIE9fU1QwLCAgIE9fTk9O
RSwgIFBfbm9uZSB9LAogIC8qIDIxICovICB7IFVEX0lmc3VicnAsICAgICAgT19TVDEsICAgT19T
VDAsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMjIgKi8gIHsgVURfSWZzdWJycCwgICAgICBP
X1NUMiwgICBPX1NUMCwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAyMyAqLyAgeyBVRF9JZnN1
YnJwLCAgICAgIE9fU1QzLCAgIE9fU1QwLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDI0ICov
ICB7IFVEX0lmc3VicnAsICAgICAgT19TVDQsICAgT19TVDAsICAgT19OT05FLCAgUF9ub25lIH0s
CiAgLyogMjUgKi8gIHsgVURfSWZzdWJycCwgICAgICBPX1NUNSwgICBPX1NUMCwgICBPX05PTkUs
ICBQX25vbmUgfSwKICAvKiAyNiAqLyAgeyBVRF9JZnN1YnJwLCAgICAgIE9fU1Q2LCAgIE9fU1Qw
LCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDI3ICovICB7IFVEX0lmc3VicnAsICAgICAgT19T
VDcsICAgT19TVDAsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMjggKi8gIHsgVURfSWZzdWJw
LCAgICAgICBPX1NUMCwgICBPX1NUMCwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAyOSAqLyAg
eyBVRF9JZnN1YnAsICAgICAgIE9fU1QxLCAgIE9fU1QwLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAog
IC8qIDJBICovICB7IFVEX0lmc3VicCwgICAgICAgT19TVDIsICAgT19TVDAsICAgT19OT05FLCAg
UF9ub25lIH0sCiAgLyogMkIgKi8gIHsgVURfSWZzdWJwLCAgICAgICBPX1NUMywgICBPX1NUMCwg
ICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAyQyAqLyAgeyBVRF9JZnN1YnAsICAgICAgIE9fU1Q0
LCAgIE9fU1QwLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDJEICovICB7IFVEX0lmc3VicCwg
ICAgICAgT19TVDUsICAgT19TVDAsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMkUgKi8gIHsg
VURfSWZzdWJwLCAgICAgICBPX1NUNiwgICBPX1NUMCwgICBPX05PTkUsICBQX25vbmUgfSwKICAv
KiAyRiAqLyAgeyBVRF9JZnN1YnAsICAgICAgIE9fU1Q3LCAgIE9fU1QwLCAgIE9fTk9ORSwgIFBf
bm9uZSB9LAogIC8qIDMwICovICB7IFVEX0lmZGl2cnAsICAgICAgT19TVDAsICAgT19TVDAsICAg
T19OT05FLCAgUF9ub25lIH0sCiAgLyogMzEgKi8gIHsgVURfSWZkaXZycCwgICAgICBPX1NUMSwg
ICBPX1NUMCwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAzMiAqLyAgeyBVRF9JZmRpdnJwLCAg
ICAgIE9fU1QyLCAgIE9fU1QwLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDMzICovICB7IFVE
X0lmZGl2cnAsICAgICAgT19TVDMsICAgT19TVDAsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyog
MzQgKi8gIHsgVURfSWZkaXZycCwgICAgICBPX1NUNCwgICBPX1NUMCwgICBPX05PTkUsICBQX25v
bmUgfSwKICAvKiAzNSAqLyAgeyBVRF9JZmRpdnJwLCAgICAgIE9fU1Q1LCAgIE9fU1QwLCAgIE9f
Tk9ORSwgIFBfbm9uZSB9LAogIC8qIDM2ICovICB7IFVEX0lmZGl2cnAsICAgICAgT19TVDYsICAg
T19TVDAsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMzcgKi8gIHsgVURfSWZkaXZycCwgICAg
ICBPX1NUNywgICBPX1NUMCwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAzOCAqLyAgeyBVRF9J
ZmRpdnAsICAgICAgIE9fU1QwLCAgIE9fU1QwLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDM5
ICovICB7IFVEX0lmZGl2cCwgICAgICAgT19TVDEsICAgT19TVDAsICAgT19OT05FLCAgUF9ub25l
IH0sCiAgLyogM0EgKi8gIHsgVURfSWZkaXZwLCAgICAgICBPX1NUMiwgICBPX1NUMCwgICBPX05P
TkUsICBQX25vbmUgfSwKICAvKiAzQiAqLyAgeyBVRF9JZmRpdnAsICAgICAgIE9fU1QzLCAgIE9f
U1QwLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDNDICovICB7IFVEX0lmZGl2cCwgICAgICAg
T19TVDQsICAgT19TVDAsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogM0QgKi8gIHsgVURfSWZk
aXZwLCAgICAgICBPX1NUNSwgICBPX1NUMCwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAzRSAq
LyAgeyBVRF9JZmRpdnAsICAgICAgIE9fU1Q2LCAgIE9fU1QwLCAgIE9fTk9ORSwgIFBfbm9uZSB9
LAogIC8qIDNGICovICB7IFVEX0lmZGl2cCwgICAgICAgT19TVDcsICAgT19TVDAsICAgT19OT05F
LCAgUF9ub25lIH0sCn07CgpzdGF0aWMgc3RydWN0IHVkX2l0YWJfZW50cnkgaXRhYl9fMWJ5dGVf
X29wX2RmX19tb2RbMl0gPSB7CiAgLyogMDAgKi8gIHsgVURfSWdycF9yZWcsICAgICBPX05PTkUs
IE9fTk9ORSwgT19OT05FLCAgICBJVEFCX18xQllURV9fT1BfREZfX01PRF9fT1BfMDBfX1JFRyB9
LAogIC8qIDAxICovICB7IFVEX0lncnBfeDg3LCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwg
ICAgSVRBQl9fMUJZVEVfX09QX0RGX19NT0RfX09QXzAxX19YODcgfSwKfTsKCnN0YXRpYyBzdHJ1
Y3QgdWRfaXRhYl9lbnRyeSBpdGFiX18xYnl0ZV9fb3BfZGZfX21vZF9fb3BfMDBfX3JlZ1s4XSA9
IHsKICAvKiAwMCAqLyAgeyBVRF9JZmlsZCwgICAgICAgIE9fTXcsICAgIE9fTk9ORSwgIE9fTk9O
RSwgIFBfYzF8UF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAwMSAqLyAgeyBVRF9J
ZmlzdHRwLCAgICAgIE9fTXcsICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfYzF8UF9hc298UF9yZXhy
fFBfcmV4eHxQX3JleGIgfSwKICAvKiAwMiAqLyAgeyBVRF9JZmlzdCwgICAgICAgIE9fTXcsICAg
IE9fTk9ORSwgIE9fTk9ORSwgIFBfYzF8UF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAv
KiAwMyAqLyAgeyBVRF9JZmlzdHAsICAgICAgIE9fTXcsICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBf
YzF8UF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAwNCAqLyAgeyBVRF9JZmJsZCwg
ICAgICAgIE9fTXQsICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9y
ZXhiIH0sCiAgLyogMDUgKi8gIHsgVURfSWZpbGQsICAgICAgICBPX01xLCAgICBPX05PTkUsICBP
X05PTkUsICBQX2MxfFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDYgKi8gIHsg
VURfSWZic3RwLCAgICAgICBPX010LCAgICBPX05PTkUsICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8
UF9yZXh4fFBfcmV4YiB9LAogIC8qIDA3ICovICB7IFVEX0lmaXN0cCwgICAgICAgT19NcSwgICAg
T19OT05FLCAgT19OT05FLCAgUF9jMXxQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAp9OwoK
c3RhdGljIHN0cnVjdCB1ZF9pdGFiX2VudHJ5IGl0YWJfXzFieXRlX19vcF9kZl9fbW9kX19vcF8w
MV9feDg3WzY0XSA9IHsKICAvKiAwMCAqLyAgeyBVRF9JZmZyZWVwLCAgICAgIE9fU1QwLCAgIE9f
Tk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDAxICovICB7IFVEX0lmZnJlZXAsICAgICAg
T19TVDEsICAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMDIgKi8gIHsgVURfSWZm
cmVlcCwgICAgICBPX1NUMiwgICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAwMyAq
LyAgeyBVRF9JZmZyZWVwLCAgICAgIE9fU1QzLCAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9
LAogIC8qIDA0ICovICB7IFVEX0lmZnJlZXAsICAgICAgT19TVDQsICAgT19OT05FLCAgT19OT05F
LCAgUF9ub25lIH0sCiAgLyogMDUgKi8gIHsgVURfSWZmcmVlcCwgICAgICBPX1NUNSwgICBPX05P
TkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAwNiAqLyAgeyBVRF9JZmZyZWVwLCAgICAgIE9f
U1Q2LCAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDA3ICovICB7IFVEX0lmZnJl
ZXAsICAgICAgT19TVDcsICAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMDggKi8g
IHsgVURfSWZ4Y2g3LCAgICAgICBPX1NUMCwgICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwK
ICAvKiAwOSAqLyAgeyBVRF9JZnhjaDcsICAgICAgIE9fU1QxLCAgIE9fTk9ORSwgIE9fTk9ORSwg
IFBfbm9uZSB9LAogIC8qIDBBICovICB7IFVEX0lmeGNoNywgICAgICAgT19TVDIsICAgT19OT05F
LCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMEIgKi8gIHsgVURfSWZ4Y2g3LCAgICAgICBPX1NU
MywgICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAwQyAqLyAgeyBVRF9JZnhjaDcs
ICAgICAgIE9fU1Q0LCAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDBEICovICB7
IFVEX0lmeGNoNywgICAgICAgT19TVDUsICAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAg
LyogMEUgKi8gIHsgVURfSWZ4Y2g3LCAgICAgICBPX1NUNiwgICBPX05PTkUsICBPX05PTkUsICBQ
X25vbmUgfSwKICAvKiAwRiAqLyAgeyBVRF9JZnhjaDcsICAgICAgIE9fU1Q3LCAgIE9fTk9ORSwg
IE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDEwICovICB7IFVEX0lmc3RwOCwgICAgICAgT19TVDAs
ICAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMTEgKi8gIHsgVURfSWZzdHA4LCAg
ICAgICBPX1NUMSwgICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAxMiAqLyAgeyBV
RF9JZnN0cDgsICAgICAgIE9fU1QyLCAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8q
IDEzICovICB7IFVEX0lmc3RwOCwgICAgICAgT19TVDMsICAgT19OT05FLCAgT19OT05FLCAgUF9u
b25lIH0sCiAgLyogMTQgKi8gIHsgVURfSWZzdHA4LCAgICAgICBPX1NUNCwgICBPX05PTkUsICBP
X05PTkUsICBQX25vbmUgfSwKICAvKiAxNSAqLyAgeyBVRF9JZnN0cDgsICAgICAgIE9fU1Q1LCAg
IE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDE2ICovICB7IFVEX0lmc3RwOCwgICAg
ICAgT19TVDYsICAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMTcgKi8gIHsgVURf
SWZzdHA4LCAgICAgICBPX1NUNywgICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAx
OCAqLyAgeyBVRF9JZnN0cDksICAgICAgIE9fU1QwLCAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9u
ZSB9LAogIC8qIDE5ICovICB7IFVEX0lmc3RwOSwgICAgICAgT19TVDEsICAgT19OT05FLCAgT19O
T05FLCAgUF9ub25lIH0sCiAgLyogMUEgKi8gIHsgVURfSWZzdHA5LCAgICAgICBPX1NUMiwgICBP
X05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAxQiAqLyAgeyBVRF9JZnN0cDksICAgICAg
IE9fU1QzLCAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDFDICovICB7IFVEX0lm
c3RwOSwgICAgICAgT19TVDQsICAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMUQg
Ki8gIHsgVURfSWZzdHA5LCAgICAgICBPX1NUNSwgICBPX05PTkUsICBPX05PTkUsICBQX25vbmUg
fSwKICAvKiAxRSAqLyAgeyBVRF9JZnN0cDksICAgICAgIE9fU1Q2LCAgIE9fTk9ORSwgIE9fTk9O
RSwgIFBfbm9uZSB9LAogIC8qIDFGICovICB7IFVEX0lmc3RwOSwgICAgICAgT19TVDcsICAgT19O
T05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMjAgKi8gIHsgVURfSWZuc3RzdywgICAgICBP
X0FYLCAgICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAyMSAqLyAgeyBVRF9JaW52
YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDIyICov
ICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0s
CiAgLyogMjMgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAg
ICBQX25vbmUgfSwKICAvKiAyNCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05F
LCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDI1ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19O
T05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMjYgKi8gIHsgVURfSWludmFs
aWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAyNyAqLyAg
eyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAog
IC8qIDI4ICovICB7IFVEX0lmdWNvbWlwLCAgICAgT19TVDAsICAgT19TVDAsICAgT19OT05FLCAg
UF9ub25lIH0sCiAgLyogMjkgKi8gIHsgVURfSWZ1Y29taXAsICAgICBPX1NUMCwgICBPX1NUMSwg
ICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAyQSAqLyAgeyBVRF9JZnVjb21pcCwgICAgIE9fU1Qw
LCAgIE9fU1QyLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDJCICovICB7IFVEX0lmdWNvbWlw
LCAgICAgT19TVDAsICAgT19TVDMsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMkMgKi8gIHsg
VURfSWZ1Y29taXAsICAgICBPX1NUMCwgICBPX1NUNCwgICBPX05PTkUsICBQX25vbmUgfSwKICAv
KiAyRCAqLyAgeyBVRF9JZnVjb21pcCwgICAgIE9fU1QwLCAgIE9fU1Q1LCAgIE9fTk9ORSwgIFBf
bm9uZSB9LAogIC8qIDJFICovICB7IFVEX0lmdWNvbWlwLCAgICAgT19TVDAsICAgT19TVDYsICAg
T19OT05FLCAgUF9ub25lIH0sCiAgLyogMkYgKi8gIHsgVURfSWZ1Y29taXAsICAgICBPX1NUMCwg
ICBPX1NUNywgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAzMCAqLyAgeyBVRF9JZmNvbWlwLCAg
ICAgIE9fU1QwLCAgIE9fU1QwLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDMxICovICB7IFVE
X0lmY29taXAsICAgICAgT19TVDAsICAgT19TVDEsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyog
MzIgKi8gIHsgVURfSWZjb21pcCwgICAgICBPX1NUMCwgICBPX1NUMiwgICBPX05PTkUsICBQX25v
bmUgfSwKICAvKiAzMyAqLyAgeyBVRF9JZmNvbWlwLCAgICAgIE9fU1QwLCAgIE9fU1QzLCAgIE9f
Tk9ORSwgIFBfbm9uZSB9LAogIC8qIDM0ICovICB7IFVEX0lmY29taXAsICAgICAgT19TVDAsICAg
T19TVDQsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMzUgKi8gIHsgVURfSWZjb21pcCwgICAg
ICBPX1NUMCwgICBPX1NUNSwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAzNiAqLyAgeyBVRF9J
ZmNvbWlwLCAgICAgIE9fU1QwLCAgIE9fU1Q2LCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDM3
ICovICB7IFVEX0lmY29taXAsICAgICAgT19TVDAsICAgT19TVDcsICAgT19OT05FLCAgUF9ub25l
IH0sCiAgLyogMzggKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05F
LCAgICBQX25vbmUgfSwKICAvKiAzOSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19O
T05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDNBICovICB7IFVEX0lpbnZhbGlkLCAgICAg
T19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogM0IgKi8gIHsgVURfSWlu
dmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAzQyAq
LyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9
LAogIC8qIDNEICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwg
ICAgUF9ub25lIH0sCiAgLyogM0UgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9O
RSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAzRiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9f
Tk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAp9OwoKc3RhdGljIHN0cnVjdCB1ZF9p
dGFiX2VudHJ5IGl0YWJfXzFieXRlX19vcF9lM19fYXNpemVbM10gPSB7CiAgLyogMDAgKi8gIHsg
VURfSWpjeHosICAgICAgICBPX0piLCAgICBPX05PTkUsICBPX05PTkUsICBQX2FzbyB9LAogIC8q
IDAxICovICB7IFVEX0lqZWN4eiwgICAgICAgT19KYiwgICAgT19OT05FLCAgT19OT05FLCAgUF9h
c28gfSwKICAvKiAwMiAqLyAgeyBVRF9JanJjeHosICAgICAgIE9fSmIsICAgIE9fTk9ORSwgIE9f
Tk9ORSwgIFBfYXNvIH0sCn07CgpzdGF0aWMgc3RydWN0IHVkX2l0YWJfZW50cnkgaXRhYl9fMWJ5
dGVfX29wX2Y2X19yZWdbOF0gPSB7CiAgLyogMDAgKi8gIHsgVURfSXRlc3QsICAgICAgICBPX0Vi
LCAgICBPX0liLCAgICBPX05PTkUsICBQX2MxfFBfYXNvfFBfcmV4d3xQX3JleHJ8UF9yZXh4fFBf
cmV4YiB9LAogIC8qIDAxICovICB7IFVEX0l0ZXN0LCAgICAgICAgT19FYiwgICAgT19JYiwgICAg
T19OT05FLCAgUF9jMXxQX2Fzb3xQX3JleHd8UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAw
MiAqLyAgeyBVRF9Jbm90LCAgICAgICAgIE9fRWIsICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfYzF8
UF9hc298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDMgKi8gIHsgVURfSW5l
ZywgICAgICAgICBPX0ViLCAgICBPX05PTkUsICBPX05PTkUsICBQX2MxfFBfYXNvfFBfcmV4d3xQ
X3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDA0ICovICB7IFVEX0ltdWwsICAgICAgICAgT19F
YiwgICAgT19OT05FLCAgT19OT05FLCAgUF9jMXxQX2Fzb3xQX3JleHd8UF9yZXhyfFBfcmV4eHxQ
X3JleGIgfSwKICAvKiAwNSAqLyAgeyBVRF9JaW11bCwgICAgICAgIE9fRWIsICAgIE9fTk9ORSwg
IE9fTk9ORSwgIFBfYzF8UF9hc298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyog
MDYgKi8gIHsgVURfSWRpdiwgICAgICAgICBPX0ViLCAgICBPX05PTkUsICBPX05PTkUsICBQX2Mx
fFBfYXNvfFBfcmV4d3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDA3ICovICB7IFVEX0lp
ZGl2LCAgICAgICAgT19FYiwgICAgT19OT05FLCAgT19OT05FLCAgUF9jMXxQX2Fzb3xQX3JleHd8
UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKfTsKCnN0YXRpYyBzdHJ1Y3QgdWRfaXRhYl9lbnRyeSBp
dGFiX18xYnl0ZV9fb3BfZjdfX3JlZ1s4XSA9IHsKICAvKiAwMCAqLyAgeyBVRF9JdGVzdCwgICAg
ICAgIE9fRXYsICAgIE9fSXosICAgIE9fTk9ORSwgIFBfYzF8UF9hc298UF9vc298UF9yZXh3fFBf
cmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDEgKi8gIHsgVURfSXRlc3QsICAgICAgICBPX0V2
LCAgICBPX0l6LCAgICBPX05PTkUsICBQX2MxfFBfYXNvfFBfb3NvfFBfcmV4d3xQX3JleHJ8UF9y
ZXh4fFBfcmV4YiB9LAogIC8qIDAyICovICB7IFVEX0lub3QsICAgICAgICAgT19FdiwgICAgT19O
T05FLCAgT19OT05FLCAgUF9jMXxQX2Fzb3xQX29zb3xQX3JleHd8UF9yZXhyfFBfcmV4eHxQX3Jl
eGIgfSwKICAvKiAwMyAqLyAgeyBVRF9JbmVnLCAgICAgICAgIE9fRXYsICAgIE9fTk9ORSwgIE9f
Tk9ORSwgIFBfYzF8UF9hc298UF9vc298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAg
LyogMDQgKi8gIHsgVURfSW11bCwgICAgICAgICBPX0V2LCAgICBPX05PTkUsICBPX05PTkUsICBQ
X2MxfFBfYXNvfFBfb3NvfFBfcmV4d3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDA1ICov
ICB7IFVEX0lpbXVsLCAgICAgICAgT19FdiwgICAgT19OT05FLCAgT19OT05FLCAgUF9jMXxQX2Fz
b3xQX29zb3xQX3JleHd8UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAwNiAqLyAgeyBVRF9J
ZGl2LCAgICAgICAgIE9fRXYsICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfYzF8UF9hc298UF9vc298
UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDcgKi8gIHsgVURfSWlkaXYsICAg
ICAgICBPX0V2LCAgICBPX05PTkUsICBPX05PTkUsICBQX2MxfFBfYXNvfFBfb3NvfFBfcmV4d3xQ
X3JleHJ8UF9yZXh4fFBfcmV4YiB9LAp9OwoKc3RhdGljIHN0cnVjdCB1ZF9pdGFiX2VudHJ5IGl0
YWJfXzFieXRlX19vcF9mZV9fcmVnWzhdID0gewogIC8qIDAwICovICB7IFVEX0lpbmMsICAgICAg
ICAgT19FYiwgICAgT19OT05FLCAgT19OT05FLCAgUF9jMXxQX2Fzb3xQX3JleHd8UF9yZXhyfFBf
cmV4eHxQX3JleGIgfSwKICAvKiAwMSAqLyAgeyBVRF9JZGVjLCAgICAgICAgIE9fRWIsICAgIE9f
Tk9ORSwgIE9fTk9ORSwgIFBfYzF8UF9hc298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0s
CiAgLyogMDIgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAg
ICBQX25vbmUgfSwKICAvKiAwMyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05F
LCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDA0ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19O
T05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMDUgKi8gIHsgVURfSWludmFs
aWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAwNiAqLyAg
eyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAog
IC8qIDA3ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAg
UF9ub25lIH0sCn07CgpzdGF0aWMgc3RydWN0IHVkX2l0YWJfZW50cnkgaXRhYl9fMWJ5dGVfX29w
X2ZmX19yZWdbOF0gPSB7CiAgLyogMDAgKi8gIHsgVURfSWluYywgICAgICAgICBPX0V2LCAgICBP
X05PTkUsICBPX05PTkUsICBQX2MxfFBfYXNvfFBfb3NvfFBfcmV4d3xQX3JleHJ8UF9yZXh4fFBf
cmV4YiB9LAogIC8qIDAxICovICB7IFVEX0lkZWMsICAgICAgICAgT19FdiwgICAgT19OT05FLCAg
T19OT05FLCAgUF9jMXxQX2Fzb3xQX29zb3xQX3JleHd8UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwK
ICAvKiAwMiAqLyAgeyBVRF9JY2FsbCwgICAgICAgIE9fRXYsICAgIE9fTk9ORSwgIE9fTk9ORSwg
IFBfYzF8UF9kZWY2NHxQX2Fzb3xQX29zb3xQX3JleHd8UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwK
ICAvKiAwMyAqLyAgeyBVRF9JY2FsbCwgICAgICAgIE9fRXAsICAgIE9fTk9ORSwgIE9fTk9ORSwg
IFBfYzF8UF9hc298UF9vc298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDQg
Ki8gIHsgVURfSWptcCwgICAgICAgICBPX0V2LCAgICBPX05PTkUsICBPX05PTkUsICBQX2MxfFBf
ZGVmNjR8UF9kZXBNfFBfYXNvfFBfb3NvfFBfcmV4d3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAog
IC8qIDA1ICovICB7IFVEX0lqbXAsICAgICAgICAgT19FcCwgICAgT19OT05FLCAgT19OT05FLCAg
UF9jMXxQX2Fzb3xQX29zb3xQX3JleHd8UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAwNiAq
LyAgeyBVRF9JcHVzaCwgICAgICAgIE9fRXYsICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfYzF8UF9k
ZWY2NHxQX2Fzb3xQX29zb3xQX3JleHd8UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAwNyAq
LyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9
LAp9OwoKc3RhdGljIHN0cnVjdCB1ZF9pdGFiX2VudHJ5IGl0YWJfXzNkbm93WzI1Nl0gPSB7CiAg
LyogMDAgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQ
X25vbmUgfSwKICAvKiAwMSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBP
X05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDAyICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05F
LCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMDMgKi8gIHsgVURfSWludmFsaWQs
ICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAwNCAqLyAgeyBV
RF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8q
IDA1ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9u
b25lIH0sCiAgLyogMDYgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19O
T05FLCAgICBQX25vbmUgfSwKICAvKiAwNyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwg
T19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDA4ICovICB7IFVEX0lpbnZhbGlkLCAg
ICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMDkgKi8gIHsgVURf
SWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAw
QSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9u
ZSB9LAogIC8qIDBCICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9O
RSwgICAgUF9ub25lIH0sCiAgLyogMEMgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9f
Tk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAwRCAqLyAgeyBVRF9JaW52YWxpZCwgICAg
IE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDBFICovICB7IFVEX0lp
bnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMEYg
Ki8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUg
fSwKICAvKiAxMCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUs
ICAgIFBfbm9uZSB9LAogIC8qIDExICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05P
TkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMTIgKi8gIHsgVURfSWludmFsaWQsICAgICBP
X05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAxMyAqLyAgeyBVRF9JaW52
YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDE0ICov
ICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0s
CiAgLyogMTUgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAg
ICBQX25vbmUgfSwKICAvKiAxNiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05F
LCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDE3ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19O
T05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMTggKi8gIHsgVURfSWludmFs
aWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAxOSAqLyAg
eyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAog
IC8qIDFBICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAg
UF9ub25lIH0sCiAgLyogMUIgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwg
T19OT05FLCAgICBQX25vbmUgfSwKICAvKiAxQyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9O
RSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDFEICovICB7IFVEX0lpbnZhbGlk
LCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMUUgKi8gIHsg
VURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAv
KiAxRiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBf
bm9uZSB9LAogIC8qIDIwICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9f
Tk9ORSwgICAgUF9ub25lIH0sCiAgLyogMjEgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUs
IE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAyMiAqLyAgeyBVRF9JaW52YWxpZCwg
ICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDIzICovICB7IFVE
X0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyog
MjQgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25v
bmUgfSwKICAvKiAyNSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05P
TkUsICAgIFBfbm9uZSB9LAogIC8qIDI2ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBP
X05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMjcgKi8gIHsgVURfSWludmFsaWQsICAg
ICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAyOCAqLyAgeyBVRF9J
aW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDI5
ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25l
IH0sCiAgLyogMkEgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05F
LCAgICBQX25vbmUgfSwKICAvKiAyQiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19O
T05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDJDICovICB7IFVEX0lpbnZhbGlkLCAgICAg
T19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMkQgKi8gIHsgVURfSWlu
dmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAyRSAq
LyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9
LAogIC8qIDJGICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwg
ICAgUF9ub25lIH0sCiAgLyogMzAgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9O
RSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAzMSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9f
Tk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDMyICovICB7IFVEX0lpbnZh
bGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMzMgKi8g
IHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwK
ICAvKiAzNCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAg
IFBfbm9uZSB9LAogIC8qIDM1ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUs
IE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMzYgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05P
TkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAzNyAqLyAgeyBVRF9JaW52YWxp
ZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDM4ICovICB7
IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAg
LyogMzkgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQ
X25vbmUgfSwKICAvKiAzQSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBP
X05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDNCICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05F
LCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogM0MgKi8gIHsgVURfSWludmFsaWQs
ICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAzRCAqLyAgeyBV
RF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8q
IDNFICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9u
b25lIH0sCiAgLyogM0YgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19O
T05FLCAgICBQX25vbmUgfSwKICAvKiA0MCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwg
T19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDQxICovICB7IFVEX0lpbnZhbGlkLCAg
ICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogNDIgKi8gIHsgVURf
SWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA0
MyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9u
ZSB9LAogIC8qIDQ0ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9O
RSwgICAgUF9ub25lIH0sCiAgLyogNDUgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9f
Tk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA0NiAqLyAgeyBVRF9JaW52YWxpZCwgICAg
IE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDQ3ICovICB7IFVEX0lp
bnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogNDgg
Ki8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUg
fSwKICAvKiA0OSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUs
ICAgIFBfbm9uZSB9LAogIC8qIDRBICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05P
TkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogNEIgKi8gIHsgVURfSWludmFsaWQsICAgICBP
X05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA0QyAqLyAgeyBVRF9JaW52
YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDREICov
ICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0s
CiAgLyogNEUgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAg
ICBQX25vbmUgfSwKICAvKiA0RiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05F
LCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDUwICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19O
T05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogNTEgKi8gIHsgVURfSWludmFs
aWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA1MiAqLyAg
eyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAog
IC8qIDUzICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAg
UF9ub25lIH0sCiAgLyogNTQgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwg
T19OT05FLCAgICBQX25vbmUgfSwKICAvKiA1NSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9O
RSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDU2ICovICB7IFVEX0lpbnZhbGlk
LCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogNTcgKi8gIHsg
VURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAv
KiA1OCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBf
bm9uZSB9LAogIC8qIDU5ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9f
Tk9ORSwgICAgUF9ub25lIH0sCiAgLyogNUEgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUs
IE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA1QiAqLyAgeyBVRF9JaW52YWxpZCwg
ICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDVDICovICB7IFVE
X0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyog
NUQgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25v
bmUgfSwKICAvKiA1RSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05P
TkUsICAgIFBfbm9uZSB9LAogIC8qIDVGICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBP
X05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogNjAgKi8gIHsgVURfSWludmFsaWQsICAg
ICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA2MSAqLyAgeyBVRF9J
aW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDYy
ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25l
IH0sCiAgLyogNjMgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05F
LCAgICBQX25vbmUgfSwKICAvKiA2NCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19O
T05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDY1ICovICB7IFVEX0lpbnZhbGlkLCAgICAg
T19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogNjYgKi8gIHsgVURfSWlu
dmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA2NyAq
LyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9
LAogIC8qIDY4ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwg
ICAgUF9ub25lIH0sCiAgLyogNjkgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9O
RSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA2QSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9f
Tk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDZCICovICB7IFVEX0lpbnZh
bGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogNkMgKi8g
IHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwK
ICAvKiA2RCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAg
IFBfbm9uZSB9LAogIC8qIDZFICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUs
IE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogNkYgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05P
TkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA3MCAqLyAgeyBVRF9JaW52YWxp
ZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDcxICovICB7
IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAg
LyogNzIgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQ
X25vbmUgfSwKICAvKiA3MyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBP
X05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDc0ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05F
LCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogNzUgKi8gIHsgVURfSWludmFsaWQs
ICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA3NiAqLyAgeyBV
RF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8q
IDc3ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9u
b25lIH0sCiAgLyogNzggKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19O
T05FLCAgICBQX25vbmUgfSwKICAvKiA3OSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwg
T19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDdBICovICB7IFVEX0lpbnZhbGlkLCAg
ICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogN0IgKi8gIHsgVURf
SWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA3
QyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9u
ZSB9LAogIC8qIDdEICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9O
RSwgICAgUF9ub25lIH0sCiAgLyogN0UgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9f
Tk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA3RiAqLyAgeyBVRF9JaW52YWxpZCwgICAg
IE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDgwICovICB7IFVEX0lp
bnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogODEg
Ki8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUg
fSwKICAvKiA4MiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUs
ICAgIFBfbm9uZSB9LAogIC8qIDgzICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05P
TkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogODQgKi8gIHsgVURfSWludmFsaWQsICAgICBP
X05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA4NSAqLyAgeyBVRF9JaW52
YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDg2ICov
ICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0s
CiAgLyogODcgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAg
ICBQX25vbmUgfSwKICAvKiA4OCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05F
LCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDg5ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19O
T05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogOEEgKi8gIHsgVURfSWludmFs
aWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA4QiAqLyAg
eyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAog
IC8qIDhDICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAg
UF9ub25lIH0sCiAgLyogOEQgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwg
T19OT05FLCAgICBQX25vbmUgfSwKICAvKiA4RSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9O
RSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDhGICovICB7IFVEX0lpbnZhbGlk
LCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogOTAgKi8gIHsg
VURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAv
KiA5MSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBf
bm9uZSB9LAogIC8qIDkyICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9f
Tk9ORSwgICAgUF9ub25lIH0sCiAgLyogOTMgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUs
IE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA5NCAqLyAgeyBVRF9JaW52YWxpZCwg
ICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDk1ICovICB7IFVE
X0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyog
OTYgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25v
bmUgfSwKICAvKiA5NyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05P
TkUsICAgIFBfbm9uZSB9LAogIC8qIDk4ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBP
X05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogOTkgKi8gIHsgVURfSWludmFsaWQsICAg
ICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA5QSAqLyAgeyBVRF9J
aW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDlC
ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25l
 },
  /* 9C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 9D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 9E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 9F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* A0 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* A1 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* A2 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* A3 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* A4 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* A5 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* A6 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* A7 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* A8 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* A9 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* AA */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* AB */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* AC */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* AD */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* AE */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* AF */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* B0 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* B1 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* B2 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* B3 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* B4 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* B5 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* B6 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* B7 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* B8 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* B9 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* BA */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* BB */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* BC */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* BD */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* BE */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* BF */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* C0 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* C1 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* C2 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* C3 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* C4 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* C5 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* C6 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* C7 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* C8 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* C9 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* CA */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* CB */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* CC */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* CD */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* CE */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* CF */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* D0 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* D1 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* D2 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* D3 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* D4 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* D5 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* D6 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* D7 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* D8 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* D9 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* DA */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* DB */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* DC */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* DD */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* DE */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* DF */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* E0 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* E1 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* E2 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* E3 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* E4 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* E5 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* E6 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* E7 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* E8 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* E9 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* EA */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* EB */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* EC */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* ED */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* EE */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* EF */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* F0 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* F1 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* F2 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* F3 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* F4 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* F5 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* F6 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* F7 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* F8 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* F9 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* FA */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* FB */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* FC */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* FD */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* FE */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* FF */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
};

static struct ud_itab_entry itab__pfx_sse66__0f[256] = {
  /* 00 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 02 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 03 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 04 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 05 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 06 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 07 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 08 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 09 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 0A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 0B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 0C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 0D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 0E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 0F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 10 */  { UD_Imovupd,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 11 */  { UD_Imovupd,      O_W,     O_V,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 12 */  { UD_Imovlpd,      O_V,     O_M,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 13 */  { UD_Imovlpd,      O_M,     O_V,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 14 */  { UD_Iunpcklpd,    O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 15 */  { UD_Iunpckhpd,    O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 16 */  { UD_Imovhpd,      O_V,     O_M,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 17 */  { UD_Imovhpd,      O_M,     O_V,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 18 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 19 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 1A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 1B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 1C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 1D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 1E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 1F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 20 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 21 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 22 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 23 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 24 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 25 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 26 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 27 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 28 */  { UD_Imovapd,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 29 */  { UD_Imovapd,      O_W,     O_V,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 2A */  { UD_Icvtpi2pd,    O_V,     O_Q,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 2B */  { UD_Imovntpd,     O_M,     O_V,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 2C */  { UD_Icvttpd2pi,   O_P,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 2D */  { UD_Icvtpd2pi,    O_P,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 2E */  { UD_Iucomisd,     O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 2F */  { UD_Icomisd,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 30 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 31 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 32 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 33 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 34 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 35 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 36 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 37 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 38 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 39 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 3A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 3B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 3C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 3D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 3E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 3F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 40 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 41 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 42 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 43 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 44 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 45 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 46 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 47 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 48 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 49 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 4A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 4B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 4C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 4D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 4E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 4F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 50 */  { UD_Imovmskpd,    O_Gd,    O_VR,    O_NONE,  P_oso|P_rexr|P_rexb },
  /* 51 */  { UD_Isqrtpd,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 52 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 53 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 54 */  { UD_Iandpd,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 55 */  { UD_Iandnpd,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 56 */  { UD_Iorpd,        O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 57 */  { UD_Ixorpd,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 58 */  { UD_Iaddpd,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 59 */  { UD_Imulpd,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 5A */  { UD_Icvtpd2ps,    O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 5B */  { UD_Icvtps2dq,    O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 5C */  { UD_Isubpd,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 5D */  { UD_Iminpd,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 5E */  { UD_Idivpd,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 5F */  { UD_Imaxpd,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 60 */  { UD_Ipunpcklbw,   O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 61 */  { UD_Ipunpcklwd,   O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 62 */  { UD_Ipunpckldq,   O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 63 */  { UD_Ipacksswb,    O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 64 */  { UD_Ipcmpgtb,     O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 65 */  { UD_Ipcmpgtw,     O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 66 */  { UD_Ipcmpgtd,     O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 67 */  { UD_Ipackuswb,    O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 68 */  { UD_Ipunpckhbw,   O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 69 */  { UD_Ipunpckhwd,   O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 6A */  { UD_Ipunpckhdq,   O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 6B */  { UD_Ipackssdw,    O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 6C */  { UD_Ipunpcklqdq,  O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 6D */  { UD_Ipunpckhqdq,  O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 6E */  { UD_Imovd,        O_V,     O_Ex,    O_NONE,  P_c2|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
  /* 6F */  { UD_Imovqa,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 70 */  { UD_Ipshufd,      O_V,     O_W,     O_Ib,    P_aso|P_rexr|P_rexx|P_rexb },
  /* 71 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__PFX_SSE66__0F__OP_71__REG },
  /* 72 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__PFX_SSE66__0F__OP_72__REG },
  /* 73 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__PFX_SSE66__0F__OP_73__REG },
  /* 74 */  { UD_Ipcmpeqb,     O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 75 */  { UD_Ipcmpeqw,     O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 76 */  { UD_Ipcmpeqd,     O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 77 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 78 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 79 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 7A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 7B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 7C */  { UD_Ihaddpd,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 7D */  { UD_Ihsubpd,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 7E */  { UD_Imovd,        O_Ex,    O_V,     O_NONE,  P_c1|P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
  /* 7F */  { UD_Imovdqa,      O_W,     O_V,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* 80 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 81 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 82 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 83 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 84 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 85 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 86 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 87 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 88 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 89 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 8A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 8B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 8C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 8D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 8E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 8F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 90 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 91 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 92 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 93 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 94 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 95 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 96 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 97 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 98 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 99 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 9A */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 9B */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 9C */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 9D */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 9E */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 9F */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* A0 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* A1 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* A2 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* A3 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* A4 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* A5 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* A6 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* A7 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* A8 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* A9 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* AA */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* AB */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* AC */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* AD */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* AE */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* AF */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* B0 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* B1 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* B2 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* B3 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* B4 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* B5 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* B6 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* B7 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* B8 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* B9 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* BA */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* BB */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* BC */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* BD */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* BE */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* BF */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* C0 */  { UD_Ixadd,        O_Eb,    O_Gb,    O_NONE,  P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
  /* C1 */  { UD_Ixadd,        O_Ev,    O_Gv,    O_NONE,  P_aso|P_oso|P_rexw|P_rexr|P_rexx|P_rexb },
  /* C2 */  { UD_Icmppd,       O_V,     O_W,     O_Ib,    P_aso|P_rexr|P_rexx|P_rexb },
  /* C3 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* C4 */  { UD_Ipinsrw,      O_V,     O_Ew,    O_Ib,    P_aso|P_rexw|P_rexr|P_rexx|P_rexb },
  /* C5 */  { UD_Ipextrw,      O_Gd,    O_VR,    O_Ib,    P_aso|P_rexr|P_rexb },
  /* C6 */  { UD_Ishufpd,      O_V,     O_W,     O_Ib,    P_aso|P_rexr|P_rexx|P_rexb },
  /* C7 */  { UD_Igrp_reg,     O_NONE, O_NONE, O_NONE,    ITAB__PFX_SSE66__0F__OP_C7__REG },
  /* C8 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* C9 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* CA */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* CB */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* CC */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* CD */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* CE */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* CF */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* D0 */  { UD_Iaddsubpd,    O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* D1 */  { UD_Ipsrlw,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* D2 */  { UD_Ipsrld,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* D3 */  { UD_Ipsrlq,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* D4 */  { UD_Ipaddq,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* D5 */  { UD_Ipmullw,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* D6 */  { UD_Imovq,        O_W,     O_V,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* D7 */  { UD_Ipmovmskb,    O_Gd,    O_VR,    O_NONE,  P_rexr|P_rexb },
  /* D8 */  { UD_Ipsubusb,     O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* D9 */  { UD_Ipsubusw,     O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* DA */  { UD_Ipminub,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* DB */  { UD_Ipand,        O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* DC */  { UD_Ipsubusb,     O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* DD */  { UD_Ipunpckhbw,   O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* DE */  { UD_Ipmaxub,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* DF */  { UD_Ipandn,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* E0 */  { UD_Ipavgb,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* E1 */  { UD_Ipsraw,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* E2 */  { UD_Ipsrad,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* E3 */  { UD_Ipavgw,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* E4 */  { UD_Ipmulhuw,     O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* E5 */  { UD_Ipmulhw,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* E6 */  { UD_Icvttpd2dq,   O_V,     O_W,     O_NONE,  P_none },
  /* E7 */  { UD_Imovntdq,     O_M,     O_V,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* E8 */  { UD_Ipsubsb,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* E9 */  { UD_Ipsubsw,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* EA */  { UD_Ipminsw,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* EB */  { UD_Ipor,         O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* EC */  { UD_Ipaddsb,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* ED */  { UD_Ipaddsw,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* EE */  { UD_Ipmaxsw,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* EF */  { UD_Ipxor,        O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* F0 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* F1 */  { UD_Ipsllw,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* F2 */  { UD_Ipslld,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* F3 */  { UD_Ipsllq,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* F4 */  { UD_Ipmuludq,     O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* F5 */  { UD_Ipmaddwd,     O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* F6 */  { UD_Ipsadbw,      O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* F7 */  { UD_Imaskmovq,    O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* F8 */  { UD_Ipsubb,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* F9 */  { UD_Ipsubw,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* FA */  { UD_Ipsubd,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* FB */  { UD_Ipsubq,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* FC */  { UD_Ipaddb,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* FD */  { UD_Ipaddw,       O_V,     O_W,     O_NONE,  P_aso|P_rexr|P_rexx|P_rexb },
  /* FE */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* FF */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
};

static struct ud_itab_entry itab__pfx_sse66__0f__op_71__reg[8] = {
  /* 00 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_none },
  /* 01 */  { UD_Iinvalid,     O_NONE, O_NONE, O_NONE,    P_
bm9uZSB9LAogIC8qIDAyICovICB7IFVEX0lwc3JsdywgICAgICAgT19WUiwgICAgT19JYiwgICAg
T19OT05FLCAgUF9yZXhiIH0sCiAgLyogMDMgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUs
IE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAwNCAqLyAgeyBVRF9JcHNyYXcsICAg
ICAgIE9fVlIsICAgIE9fSWIsICAgIE9fTk9ORSwgIFBfcmV4YiB9LAogIC8qIDA1ICovICB7IFVE
X0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyog
MDYgKi8gIHsgVURfSXBzbGx3LCAgICAgICBPX1ZSLCAgICBPX0liLCAgICBPX05PTkUsICBQX3Jl
eGIgfSwKICAvKiAwNyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05P
TkUsICAgIFBfbm9uZSB9LAp9OwoKc3RhdGljIHN0cnVjdCB1ZF9pdGFiX2VudHJ5IGl0YWJfX3Bm
eF9zc2U2Nl9fMGZfX29wXzcyX19yZWdbOF0gPSB7CiAgLyogMDAgKi8gIHsgVURfSWludmFsaWQs
ICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAwMSAqLyAgeyBV
RF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8q
IDAyICovICB7IFVEX0lwc3JsZCwgICAgICAgT19WUiwgICAgT19JYiwgICAgT19OT05FLCAgUF9y
ZXhiIH0sCiAgLyogMDMgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19O
T05FLCAgICBQX25vbmUgfSwKICAvKiAwNCAqLyAgeyBVRF9JcHNyYWQsICAgICAgIE9fVlIsICAg
IE9fSWIsICAgIE9fTk9ORSwgIFBfcmV4YiB9LAogIC8qIDA1ICovICB7IFVEX0lpbnZhbGlkLCAg
ICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMDYgKi8gIHsgVURf
SXBzbGxkLCAgICAgICBPX1ZSLCAgICBPX0liLCAgICBPX05PTkUsICBQX3JleGIgfSwKICAvKiAw
NyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9u
ZSB9LAp9OwoKc3RhdGljIHN0cnVjdCB1ZF9pdGFiX2VudHJ5IGl0YWJfX3BmeF9zc2U2Nl9fMGZf
X29wXzczX19yZWdbOF0gPSB7CiAgLyogMDAgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUs
IE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAwMSAqLyAgeyBVRF9JaW52YWxpZCwg
ICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDAyICovICB7IFVE
X0lwc3JscSwgICAgICAgT19WUiwgICAgT19JYiwgICAgT19OT05FLCAgUF9yZXhiIH0sCiAgLyog
MDMgKi8gIHsgVURfSXBzcmxkcSwgICAgICBPX1ZSLCAgICBPX0liLCAgICBPX05PTkUsICBQX3Jl
eGIgfSwKICAvKiAwNCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05P
TkUsICAgIFBfbm9uZSB9LAogIC8qIDA1ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBP
X05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMDYgKi8gIHsgVURfSXBzbGxxLCAgICAg
ICBPX1ZSLCAgICBPX0liLCAgICBPX05PTkUsICBQX3JleGIgfSwKICAvKiAwNyAqLyAgeyBVRF9J
cHNsbGRxLCAgICAgIE9fVlIsICAgIE9fSWIsICAgIE9fTk9ORSwgIFBfcmV4YiB9LAp9OwoKc3Rh
dGljIHN0cnVjdCB1ZF9pdGFiX2VudHJ5IGl0YWJfX3BmeF9zc2U2Nl9fMGZfX29wX2M3X19yZWdb
OF0gPSB7CiAgLyogMDAgKi8gIHsgVURfSWdycF92ZW5kb3IsICBPX05PTkUsIE9fTk9ORSwgT19O
T05FLCAgICBJVEFCX19QRlhfU1NFNjZfXzBGX19PUF9DN19fUkVHX19PUF8wMF9fVkVORE9SIH0s
CiAgLyogMDEgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAg
ICBQX25vbmUgfSwKICAvKiAwMiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05F
LCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDAzICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19O
T05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMDQgKi8gIHsgVURfSWludmFs
aWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAwNSAqLyAg
eyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAog
IC8qIDA2ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAg
UF9ub25lIH0sCiAgLyogMDcgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwg
T19OT05FLCAgICBQX25vbmUgfSwKfTsKCnN0YXRpYyBzdHJ1Y3QgdWRfaXRhYl9lbnRyeSBpdGFi
X19wZnhfc3NlNjZfXzBmX19vcF9jN19fcmVnX19vcF8wMF9fdmVuZG9yWzJdID0gewogIC8qIDAw
ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25l
IH0sCiAgLyogMDEgKi8gIHsgVURfSXZtY2xlYXIsICAgICBPX01xLCAgICBPX05PTkUsICBPX05P
TkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAp9OwoKc3RhdGljIHN0cnVjdCB1ZF9p
dGFiX2VudHJ5IGl0YWJfX3BmeF9zc2VmMl9fMGZbMjU2XSA9IHsKICAvKiAwMCAqLyAgeyBVRF9J
aW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDAx
ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25l
IH0sCiAgLyogMDIgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05F
LCAgICBQX25vbmUgfSwKICAvKiAwMyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19O
T05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDA0ICovICB7IFVEX0lpbnZhbGlkLCAgICAg
T19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMDUgKi8gIHsgVURfSWlu
dmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAwNiAq
LyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9
LAogIC8qIDA3ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwg
ICAgUF9ub25lIH0sCiAgLyogMDggKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9O
RSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAwOSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9f
Tk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDBBICovICB7IFVEX0lpbnZh
bGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMEIgKi8g
IHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwK
ICAvKiAwQyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAg
IFBfbm9uZSB9LAogIC8qIDBEICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUs
IE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMEUgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05P
TkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAwRiAqLyAgeyBVRF9JaW52YWxp
ZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDEwICovICB7
IFVEX0ltb3ZzZCwgICAgICAgT19WLCAgICAgT19XLCAgICAgT19OT05FLCAgUF9hc298UF9yZXhy
fFBfcmV4eHxQX3JleGIgfSwKICAvKiAxMSAqLyAgeyBVRF9JbW92c2QsICAgICAgIE9fVywgICAg
IE9fViwgICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMTIg
Ki8gIHsgVURfSW1vdmRkdXAsICAgICBPX1YsICAgICBPX1csICAgICBPX05PTkUsICBQX2Fzb3xQ
X3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDEzICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19O
T05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMTQgKi8gIHsgVURfSWludmFs
aWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAxNSAqLyAg
eyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAog
IC8qIDE2ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAg
UF9ub25lIH0sCiAgLyogMTcgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwg
T19OT05FLCAgICBQX25vbmUgfSwKICAvKiAxOCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9O
RSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDE5ICovICB7IFVEX0lpbnZhbGlk
LCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMUEgKi8gIHsg
VURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAv
KiAxQiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBf
bm9uZSB9LAogIC8qIDFDICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9f
Tk9ORSwgICAgUF9ub25lIH0sCiAgLyogMUQgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUs
IE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAxRSAqLyAgeyBVRF9JaW52YWxpZCwg
ICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDFGICovICB7IFVE
X0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyog
MjAgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25v
bmUgfSwKICAvKiAyMSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05P
TkUsICAgIFBfbm9uZSB9LAogIC8qIDIyICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBP
X05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMjMgKi8gIHsgVURfSWludmFsaWQsICAg
ICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAyNCAqLyAgeyBVRF9J
aW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDI1
ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25l
IH0sCiAgLyogMjYgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05F
LCAgICBQX25vbmUgfSwKICAvKiAyNyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19O
T05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDI4ICovICB7IFVEX0lpbnZhbGlkLCAgICAg
T19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMjkgKi8gIHsgVURfSWlu
dmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAyQSAq
LyAgeyBVRF9JY3Z0c2kyc2QsICAgIE9fViwgICAgIE9fRXgsICAgIE9fTk9ORSwgIFBfYzJ8UF9h
c298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMkIgKi8gIHsgVURfSWludmFs
aWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAyQyAqLyAg
eyBVRF9JY3Z0dHNkMnNpLCAgIE9fR3Z3LCAgIE9fVywgICAgIE9fTk9ORSwgIFBfYzF8UF9hc298
UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAyRCAqLyAgeyBVRF9JY3Z0c2Qyc2ksICAgIE9f
R3Z3LCAgIE9fVywgICAgIE9fTk9ORSwgIFBfYzF8UF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIg
fSwKICAvKiAyRSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUs
ICAgIFBfbm9uZSB9LAogIC8qIDJGICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05P
TkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMzAgKi8gIHsgVURfSWludmFsaWQsICAgICBP
X05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAzMSAqLyAgeyBVRF9JaW52
YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDMyICov
ICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0s
CiAgLyogMzMgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAg
ICBQX25vbmUgfSwKICAvKiAzNCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05F
LCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDM1ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19O
T05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMzYgKi8gIHsgVURfSWludmFs
aWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAzNyAqLyAg
eyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAog
IC8qIDM4ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAg
UF9ub25lIH0sCiAgLyogMzkgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwg
T19OT05FLCAgICBQX25vbmUgfSwKICAvKiAzQSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9O
RSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDNCICovICB7IFVEX0lpbnZhbGlk
LCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogM0MgKi8gIHsg
VURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAv
KiAzRCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBf
bm9uZSB9LAogIC8qIDNFICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9f
Tk9ORSwgICAgUF9ub25lIH0sCiAgLyogM0YgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUs
IE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA0MCAqLyAgeyBVRF9JaW52YWxpZCwg
ICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDQxICovICB7IFVE
X0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyog
NDIgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25v
bmUgfSwKICAvKiA0MyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05P
TkUsICAgIFBfbm9uZSB9LAogIC8qIDQ0ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBP
X05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogNDUgKi8gIHsgVURfSWludmFsaWQsICAg
ICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA0NiAqLyAgeyBVRF9J
aW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDQ3
ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25l
IH0sCiAgLyogNDggKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05F
LCAgICBQX25vbmUgfSwKICAvKiA0OSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19O
T05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDRBICovICB7IFVEX0lpbnZhbGlkLCAgICAg
T19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogNEIgKi8gIHsgVURfSWlu
dmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA0QyAq
LyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9
LAogIC8qIDREICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwg
ICAgUF9ub25lIH0sCiAgLyogNEUgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9O
RSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA0RiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9f
Tk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDUwICovICB7IFVEX0lpbnZh
bGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogNTEgKi8g
IHsgVURfSXNxcnRzZCwgICAgICBPX1YsICAgICBPX1csICAgICBPX05PTkUsICBQX2Fzb3xQX3Jl
eHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDUyICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05F
LCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogNTMgKi8gIHsgVURfSWludmFsaWQs
ICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA1NCAqLyAgeyBV
RF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8q
IDU1ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9u
b25lIH0sCiAgLyogNTYgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19O
T05FLCAgICBQX25vbmUgfSwKICAvKiA1NyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwg
T19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDU4ICovICB7IFVEX0lhZGRzZCwgICAg
ICAgT19WLCAgICAgT19XLCAgICAgT19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIg
fSwKICAvKiA1OSAqLyAgeyBVRF9JbXVsc2QsICAgICAgIE9fViwgICAgIE9fVywgICAgIE9fTk9O
RSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogNUEgKi8gIHsgVURfSWN2dHNk
MnNzLCAgICBPX1YsICAgICBPX1csICAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBf
cmV4YiB9LAogIC8qIDVCICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9f
Tk9ORSwgICAgUF9ub25lIH0sCiAgLyogNUMgKi8gIHsgVURfSXN1YnNkLCAgICAgICBPX1YsICAg
ICBPX1csICAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDVE
ICovICB7IFVEX0ltaW5zZCwgICAgICAgT19WLCAgICAgT19XLCAgICAgT19OT05FLCAgUF9hc298
UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiA1RSAqLyAgeyBVRF9JZGl2c2QsICAgICAgIE9f
ViwgICAgIE9fVywgICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAg
LyogNUYgKi8gIHsgVURfSW1heHNkLCAgICAgICBPX1YsICAgICBPX1csICAgICBPX05PTkUsICBQ
X2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDYwICovICB7IFVEX0lpbnZhbGlkLCAg
ICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogNjEgKi8gIHsgVURf
SWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA2
MiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9u
ZSB9LAogIC8qIDYzICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9O
RSwgICAgUF9ub25lIH0sCiAgLyogNjQgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9f
Tk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA2NSAqLyAgeyBVRF9JaW52YWxpZCwgICAg
IE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDY2ICovICB7IFVEX0lp
bnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogNjcg
Ki8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUg
fSwKICAvKiA2OCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUs
ICAgIFBfbm9uZSB9LAogIC8qIDY5ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05P
TkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogNkEgKi8gIHsgVURfSWludmFsaWQsICAgICBP
X05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA2QiAqLyAgeyBVRF9JaW52
YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDZDICov
ICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0s
CiAgLyogNkQgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAg
ICBQX25vbmUgfSwKICAvKiA2RSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05F
LCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDZGICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19O
T05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogNzAgKi8gIHsgVURfSXBzaHVm
bHcsICAgICBPX1YsICAgICBPX1csICAgICBPX0liLCAgICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBf
cmV4YiB9LAogIC8qIDcxICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9f
Tk9ORSwgICAgUF9ub25lIH0sCiAgLyogNzIgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUs
IE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA3MyAqLyAgeyBVRF9JaW52YWxpZCwg
ICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDc0ICovICB7IFVE
X0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyog
NzUgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25v
bmUgfSwKICAvKiA3NiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05P
TkUsICAgIFBfbm9uZSB9LAogIC8qIDc3ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBP
X05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogNzggKi8gIHsgVURfSWludmFsaWQsICAg
ICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA3OSAqLyAgeyBVRF9J
aW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDdB
ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25l
IH0sCiAgLyogN0IgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05F
LCAgICBQX25vbmUgfSwKICAvKiA3QyAqLyAgeyBVRF9JaGFkZHBzLCAgICAgIE9fViwgICAgIE9f
VywgICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogN0QgKi8g
IHsgVURfSWhzdWJwcywgICAgICBPX1YsICAgICBPX1csICAgICBPX05PTkUsICBQX2Fzb3xQX3Jl
eHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDdFICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05F
LCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogN0YgKi8gIHsgVURfSWludmFsaWQs
ICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA4MCAqLyAgeyBV
RF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8q
IDgxICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9u
b25lIH0sCiAgLyogODIgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19O
T05FLCAgICBQX25vbmUgfSwKICAvKiA4MyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwg
T19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDg0ICovICB7IFVEX0lpbnZhbGlkLCAg
ICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogODUgKi8gIHsgVURf
SWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA4
NiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9u
ZSB9LAogIC8qIDg3ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9O
RSwgICAgUF9ub25lIH0sCiAgLyogODggKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9f
Tk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA4OSAqLyAgeyBVRF9JaW52YWxpZCwgICAg
IE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDhBICovICB7IFVEX0lp
bnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogOEIg
Ki8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUg
fSwKICAvKiA4QyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUs
ICAgIFBfbm9uZSB9LAogIC8qIDhEICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05P
TkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogOEUgKi8gIHsgVURfSWludmFsaWQsICAgICBP
X05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA4RiAqLyAgeyBVRF9JaW52
YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDkwICov
ICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0s
CiAgLyogOTEgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAg
ICBQX25vbmUgfSwKICAvKiA5MiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05F
LCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDkzICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19O
T05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogOTQgKi8gIHsgVURfSWludmFs
aWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA5NSAqLyAg
eyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAog
IC8qIDk2ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAg
UF9ub25lIH0sCiAgLyogOTcgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwg
T19OT05FLCAgICBQX25vbmUgfSwKICAvKiA5OCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9O
RSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDk5ICovICB7IFVEX0lpbnZhbGlk
LCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogOUEgKi8gIHsg
VURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAv
KiA5QiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBf
bm9uZSB9LAogIC8qIDlDICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9f
Tk9ORSwgICAgUF9ub25lIH0sCiAgLyogOUQgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUs
IE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA5RSAqLyAgeyBVRF9JaW52YWxpZCwg
ICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDlGICovICB7IFVE
X0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyog
QTAgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25v
bmUgfSwKICAvKiBBMSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05P
TkUsICAgIFBfbm9uZSB9LAogIC8qIEEyICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBP
X05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogQTMgKi8gIHsgVURfSWludmFsaWQsICAg
ICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBBNCAqLyAgeyBVRF9J
aW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEE1
ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25l
IH0sCiAgLyogQTYgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05F
LCAgICBQX25vbmUgfSwKICAvKiBBNyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19O
T05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEE4ICovICB7IFVEX0lpbnZhbGlkLCAgICAg
T19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogQTkgKi8gIHsgVURfSWlu
dmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBBQSAq
LyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9
LAogIC8qIEFCICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwg
ICAgUF9ub25lIH0sCiAgLyogQUMgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9O
RSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBBRCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9f
Tk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEFFICovICB7IFVEX0lpbnZh
bGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogQUYgKi8g
IHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwK
ICAvKiBCMCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAg
IFBfbm9uZSB9LAogIC8qIEIxICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUs
IE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogQjIgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05P
TkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBCMyAqLyAgeyBVRF9JaW52YWxp
ZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEI0ICovICB7
IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAg
LyogQjUgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQ
X25vbmUgfSwKICAvKiBCNiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBP
X05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEI3ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05F
LCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogQjggKi8gIHsgVURfSWludmFsaWQs
ICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBCOSAqLyAgeyBV
RF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8q
IEJBICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9u
b25lIH0sCiAgLyogQkIgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19O
T05FLCAgICBQX25vbmUgfSwKICAvKiBCQyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwg
T19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEJEICovICB7IFVEX0lpbnZhbGlkLCAg
ICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogQkUgKi8gIHsgVURf
SWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBC
RiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9u
ZSB9LAogIC8qIEMwICovICB7IFVEX0l4YWRkLCAgICAgICAgT19FYiwgICAgT19HYiwgICAgT19O
T05FLCAgUF9hc298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogQzEgKi8gIHsg
VURfSXhhZGQsICAgICAgICBPX0V2LCAgICBPX0d2LCAgICBPX05PTkUsICBQX2Fzb3xQX29zb3xQ
X3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIEMyICovICB7IFVEX0ljbXBzZCwgICAgICAgT19W
LCAgICAgT19XLCAgICAgT19JYiwgICAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAv
KiBDMyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBf
bm9uZSB9LAogIC8qIEM0ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9f
Tk9ORSwgICAgUF9ub25lIH0sCiAgLyogQzUgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUs
IE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBDNiAqLyAgeyBVRF9JaW52YWxpZCwg
ICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEM3ICovICB7IFVE
X0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyog
QzggKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25v
bmUgfSwKICAvKiBDOSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05P
TkUsICAgIFBfbm9uZSB9LAogIC8qIENBICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBP
X05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogQ0IgKi8gIHsgVURfSWludmFsaWQsICAg
ICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBDQyAqLyAgeyBVRF9J
aW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIENE
ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25l
IH0sCiAgLyogQ0UgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05F
LCAgICBQX25vbmUgfSwKICAvKiBDRiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19O
T05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEQwICovICB7IFVEX0lhZGRzdWJwcywgICAg
T19WLCAgICAgT19XLCAgICAgT19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwK
ICAvKiBEMSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAg
IFBfbm9uZSB9LAogIC8qIEQyICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUs
IE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogRDMgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05P
TkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBENCAqLyAgeyBVRF9JaW52YWxp
ZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEQ1ICovICB7
IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAg
LyogRDYgKi8gIHsgVURfSW1vdmRxMnEsICAgICBPX1AsICAgICBPX1ZSLCAgICBPX05PTkUsICBQ
X2Fzb3xQX3JleGIgfSwKICAvKiBENyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19O
T05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEQ4ICovICB7IFVEX0lpbnZhbGlkLCAgICAg
T19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogRDkgKi8gIHsgVURfSWlu
dmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBEQSAq
LyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9
LAogIC8qIERCICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwg
ICAgUF9ub25lIH0sCiAgLyogREMgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9O
RSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBERCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9f
Tk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIERFICovICB7IFVEX0lpbnZh
bGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogREYgKi8g
IHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwK
ICAvKiBFMCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAg
IFBfbm9uZSB9LAogIC8qIEUxICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUs
IE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogRTIgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05P
TkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBFMyAqLyAgeyBVRF9JaW52YWxp
ZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEU0ICovICB7
IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAg
LyogRTUgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQ
X25vbmUgfSwKICAvKiBFNiAqLyAgeyBVRF9JY3Z0cGQyZHEsICAgIE9fViwgICAgIE9fVywgICAg
IE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogRTcgKi8gIHsgVURf
SWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBF
OCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9u
ZSB9LAogIC8qIEU5ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9O
RSwgICAgUF9ub25lIH0sCiAgLyogRUEgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9f
Tk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBFQiAqLyAgeyBVRF9JaW52YWxpZCwgICAg
IE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEVDICovICB7IFVEX0lp
bnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogRUQg
Ki8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUg
fSwKICAvKiBFRSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUs
ICAgIFBfbm9uZSB9LAogIC8qIEVGICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05P
TkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogRjAgKi8gIHsgVURfSWxkZHF1LCAgICAgICBP
X1YsICAgICBPX00sICAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAog
IC8qIEYxICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAg
UF9ub25lIH0sCiAgLyogRjIgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwg
T19OT05FLCAgICBQX25vbmUgfSwKICAvKiBGMyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9O
RSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEY0ICovICB7IFVEX0lpbnZhbGlk
LCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogRjUgKi8gIHsg
VURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAv
KiBGNiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBf
bm9uZSB9LAogIC8qIEY3ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9f
Tk9ORSwgICAgUF9ub25lIH0sCiAgLyogRjggKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUs
IE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBGOSAqLyAgeyBVRF9JaW52YWxpZCwg
ICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEZBICovICB7IFVE
X0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyog
RkIgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25v
bmUgfSwKICAvKiBGQyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05P
TkUsICAgIFBfbm9uZSB9LAogIC8qIEZEICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBP
X05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogRkUgKi8gIHsgVURfSWludmFsaWQsICAg
ICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBGRiAqLyAgeyBVRF9J
aW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAp9OwoKc3Rh
dGljIHN0cnVjdCB1ZF9pdGFiX2VudHJ5IGl0YWJfX3BmeF9zc2VmM19fMGZbMjU2XSA9IHsKICAv
KiAwMCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBf
bm9uZSB9LAogIC8qIDAxICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9f
Tk9ORSwgICAgUF9ub25lIH0sCiAgLyogMDIgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUs
IE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAwMyAqLyAgeyBVRF9JaW52YWxpZCwg
ICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDA0ICovICB7IFVE
X0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyog
MDUgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25v
bmUgfSwKICAvKiAwNiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05P
TkUsICAgIFBfbm9uZSB9LAogIC8qIDA3ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBP
X05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMDggKi8gIHsgVURfSWludmFsaWQsICAg
ICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAwOSAqLyAgeyBVRF9J
aW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDBB
ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25l
IH0sCiAgLyogMEIgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05F
LCAgICBQX25vbmUgfSwKICAvKiAwQyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19O
T05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDBEICovICB7IFVEX0lpbnZhbGlkLCAgICAg
T19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMEUgKi8gIHsgVURfSWlu
dmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAwRiAq
LyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9
LAogIC8qIDEwICovICB7IFVEX0ltb3ZzcywgICAgICAgT19WLCAgICAgT19XLCAgICAgT19OT05F
LCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAxMSAqLyAgeyBVRF9JbW92c3Ms
ICAgICAgIE9fVywgICAgIE9fViwgICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9y
ZXhiIH0sCiAgLyogMTIgKi8gIHsgVURfSW1vdnNsZHVwLCAgICBPX1YsICAgICBPX1csICAgICBP
X05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDEzICovICB7IFVEX0lp
bnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMTQg
Ki8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUg
fSwKICAvKiAxNSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUs
ICAgIFBfbm9uZSB9LAogIC8qIDE2ICovICB7IFVEX0ltb3ZzaGR1cCwgICAgT19WLCAgICAgT19X
LCAgICAgT19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAxNyAqLyAg
eyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAog
IC8qIDE4ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAg
UF9ub25lIH0sCiAgLyogMTkgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwg
T19OT05FLCAgICBQX25vbmUgfSwKICAvKiAxQSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9O
RSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDFCICovICB7IFVEX0lpbnZhbGlk
LCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMUMgKi8gIHsg
VURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAv
KiAxRCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBf
bm9uZSB9LAogIC8qIDFFICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9f
Tk9ORSwgICAgUF9ub25lIH0sCiAgLyogMUYgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUs
IE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAyMCAqLyAgeyBVRF9JaW52YWxpZCwg
ICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDIxICovICB7IFVE
X0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyog
MjIgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25v
bmUgfSwKICAvKiAyMyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05P
TkUsICAgIFBfbm9uZSB9LAogIC8qIDI0ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBP
X05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMjUgKi8gIHsgVURfSWludmFsaWQsICAg
ICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAyNiAqLyAgeyBVRF9J
aW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDI3
ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25l
IH0sCiAgLyogMjggKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05F
LCAgICBQX25vbmUgfSwKICAvKiAyOSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19O
T05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDJBICovICB7IFVEX0ljdnRzaTJzcywgICAg
T19WLCAgICAgT19FeCwgICAgT19OT05FLCAgUF9jMnxQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4
YiB9LAogIC8qIDJCICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9O
RSwgICAgUF9ub25lIH0sCiAgLyogMkMgKi8gIHsgVURfSWN2dHRzczJzaSwgICBPX0d2dywgICBP
X1csICAgICBPX05PTkUsICBQX2MxfFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyog
MkQgKi8gIHsgVURfSWN2dHNzMnNpLCAgICBPX0d2dywgICBPX1csICAgICBPX05PTkUsICBQX2Mx
fFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMkUgKi8gIHsgVURfSWludmFsaWQs
ICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAyRiAqLyAgeyBV
RF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8q
IDMwICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9u
b25lIH0sCiAgLyogMzEgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19O
T05FLCAgICBQX25vbmUgfSwKICAvKiAzMiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwg
T19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDMzICovICB7IFVEX0lpbnZhbGlkLCAg
ICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMzQgKi8gIHsgVURf
SWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAz
NSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9u
ZSB9LAogIC8qIDM2ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9O
RSwgICAgUF9ub25lIH0sCiAgLyogMzcgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9f
Tk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAzOCAqLyAgeyBVRF9JaW52YWxpZCwgICAg
IE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDM5ICovICB7IFVEX0lp
bnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogM0Eg
Ki8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUg
fSwKICAvKiAzQiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUs
ICAgIFBfbm9uZSB9LAogIC8qIDNDICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05P
TkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogM0QgKi8gIHsgVURfSWludmFsaWQsICAgICBP
X05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAzRSAqLyAgeyBVRF9JaW52
YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDNGICov
ICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0s
CiAgLyogNDAgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAg
ICBQX25vbmUgfSwKICAvKiA0MSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05F
LCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDQyICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19O
T05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogNDMgKi8gIHsgVURfSWludmFs
aWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA0NCAqLyAg
eyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAog
IC8qIDQ1ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAg
UF9ub25lIH0sCiAgLyogNDYgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwg
T19OT05FLCAgICBQX25vbmUgfSwKICAvKiA0NyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9O
RSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDQ4ICovICB7IFVEX0lpbnZhbGlk
LCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogNDkgKi8gIHsg
VURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAv
KiA0QSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBf
bm9uZSB9LAogIC8qIDRCICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9f
Tk9ORSwgICAgUF9ub25lIH0sCiAgLyogNEMgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUs
IE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA0RCAqLyAgeyBVRF9JaW52YWxpZCwg
ICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDRFICovICB7IFVE
X0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyog
NEYgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25v
bmUgfSwKICAvKiA1MCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05P
TkUsICAgIFBfbm9uZSB9LAogIC8qIDUxICovICB7IFVEX0lzcXJ0c3MsICAgICAgT19WLCAgICAg
T19XLCAgICAgT19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiA1MiAq
LyAgeyBVRF9JcnNxcnRzcywgICAgIE9fViwgICAgIE9fVywgICAgIE9fTk9ORSwgIFBfYXNvfFBf
cmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogNTMgKi8gIHsgVURfSXJjcHNzLCAgICAgICBPX1Ys
ICAgICBPX1csICAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8q
IDU0ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9u
b25lIH0sCiAgLyogNTUgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19O
T05FLCAgICBQX25vbmUgfSwKICAvKiA1NiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwg
T19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDU3ICovICB7IFVEX0lpbnZhbGlkLCAg
ICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogNTggKi8gIHsgVURf
SWFkZHNzLCAgICAgICBPX1YsICAgICBPX1csICAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9y
ZXh4fFBfcmV4YiB9LAogIC8qIDU5ICovICB7IFVEX0ltdWxzcywgICAgICAgT19WLCAgICAgT19X
LCAgICAgT19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiA1QSAqLyAg
eyBVRF9JY3Z0c3Myc2QsICAgIE9fViwgICAgIE9fVywgICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4
cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogNUIgKi8gIHsgVURfSWN2dHRwczJkcSwgICBPX1YsICAg
ICBPX1csICAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDVD
ICovICB7IFVEX0lzdWJzcywgICAgICAgT19WLCAgICAgT19XLCAgICAgT19OT05FLCAgUF9hc298
UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiA1RCAqLyAgeyBVRF9JbWluc3MsICAgICAgIE9f
ViwgICAgIE9fVywgICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAg
LyogNUUgKi8gIHsgVURfSWRpdnNzLCAgICAgICBPX1YsICAgICBPX1csICAgICBPX05PTkUsICBQ
X2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDVGICovICB7IFVEX0ltYXhzcywgICAg
ICAgT19WLCAgICAgT19XLCAgICAgT19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIg
fSwKICAvKiA2MCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUs
ICAgIFBfbm9uZSB9LAogIC8qIDYxICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05P
TkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogNjIgKi8gIHsgVURfSWludmFsaWQsICAgICBP
X05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA2MyAqLyAgeyBVRF9JaW52
YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDY0ICov
ICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0s
CiAgLyogNjUgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAg
ICBQX25vbmUgfSwKICAvKiA2NiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05F
LCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDY3ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19O
T05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogNjggKi8gIHsgVURfSWludmFs
aWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA2OSAqLyAg
eyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAog
IC8qIDZBICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAg
UF9ub25lIH0sCiAgLyogNkIgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwg
T19OT05FLCAgICBQX25vbmUgfSwKICAvKiA2QyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9O
RSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDZEICovICB7IFVEX0lpbnZhbGlk
LCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogNkUgKi8gIHsg
VURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAv
KiA2RiAqLyAgeyBVRF9JbW92ZHF1LCAgICAgIE9fViwgICAgIE9fVywgICAgIE9fTk9ORSwgIFBf
YXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogNzAgKi8gIHsgVURfSXBzaHVmaHcsICAg
ICBPX1YsICAgICBPX1csICAgICBPX0liLCAgICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9
LAogIC8qIDcxICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwg
ICAgUF9ub25lIH0sCiAgLyogNzIgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9O
RSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA3MyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9f
Tk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDc0ICovICB7IFVEX0lpbnZh
bGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogNzUgKi8g
IHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwK
ICAvKiA3NiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAg
IFBfbm9uZSB9LAogIC8qIDc3ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUs
IE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogNzggKi8gIHsgVURfSWludmFsaWQsICAgICBPX05P
TkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA3OSAqLyAgeyBVRF9JaW52YWxp
ZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDdBICovICB7
IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAg
LyogN0IgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQ
X25vbmUgfSwKICAvKiA3QyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBP
X05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDdEICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05F
LCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogN0UgKi8gIHsgVURfSW1vdnEsICAg
ICAgICBPX1YsICAgICBPX1csICAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4
YiB9LAogIC8qIDdGICovICB7IFVEX0ltb3ZkcXUsICAgICAgT19XLCAgICAgT19WLCAgICAgT19O
T05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiA4MCAqLyAgeyBVRF9JaW52
YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDgxICov
ICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0s
CiAgLyogODIgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAg
ICBQX25vbmUgfSwKICAvKiA4MyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05F
LCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDg0ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19O
T05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogODUgKi8gIHsgVURfSWludmFs
aWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA4NiAqLyAg
eyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAog
IC8qIDg3ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAg
UF9ub25lIH0sCiAgLyogODggKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwg
T19OT05FLCAgICBQX25vbmUgfSwKICAvKiA4OSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9O
RSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDhBICovICB7IFVEX0lpbnZhbGlk
LCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogOEIgKi8gIHsg
VURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAv
KiA4QyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBf
bm9uZSB9LAogIC8qIDhEICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9f
Tk9ORSwgICAgUF9ub25lIH0sCiAgLyogOEUgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUs
IE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA4RiAqLyAgeyBVRF9JaW52YWxpZCwg
ICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDkwICovICB7IFVE
X0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyog
OTEgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25v
bmUgfSwKICAvKiA5MiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05P
TkUsICAgIFBfbm9uZSB9LAogIC8qIDkzICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBP
X05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogOTQgKi8gIHsgVURfSWludmFsaWQsICAg
ICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA5NSAqLyAgeyBVRF9J
aW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDk2
ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25l
IH0sCiAgLyogOTcgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05F
LCAgICBQX25vbmUgfSwKICAvKiA5OCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19O
T05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDk5ICovICB7IFVEX0lpbnZhbGlkLCAgICAg
T19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogOUEgKi8gIHsgVURfSWlu
dmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA5QiAq
LyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9
LAogIC8qIDlDICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwg
ICAgUF9ub25lIH0sCiAgLyogOUQgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9O
RSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA5RSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9f
Tk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDlGICovICB7IFVEX0lpbnZh
bGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogQTAgKi8g
IHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwK
ICAvKiBBMSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAg
IFBfbm9uZSB9LAogIC8qIEEyICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUs
IE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogQTMgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05P
TkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBBNCAqLyAgeyBVRF9JaW52YWxp
ZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEE1ICovICB7
IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAg
LyogQTYgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQ
X25vbmUgfSwKICAvKiBBNyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBP
X05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEE4ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05F
LCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogQTkgKi8gIHsgVURfSWludmFsaWQs
ICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBBQSAqLyAgeyBV
RF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8q
IEFCICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9u
b25lIH0sCiAgLyogQUMgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19O
T05FLCAgICBQX25vbmUgfSwKICAvKiBBRCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwg
T19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEFFICovICB7IFVEX0lpbnZhbGlkLCAg
ICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogQUYgKi8gIHsgVURf
SWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBC
MCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9u
ZSB9LAogIC8qIEIxICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9O
RSwgICAgUF9ub25lIH0sCiAgLyogQjIgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9f
Tk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBCMyAqLyAgeyBVRF9JaW52YWxpZCwgICAg
IE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEI0ICovICB7IFVEX0lp
bnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogQjUg
Ki8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUg
fSwKICAvKiBCNiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUs
ICAgIFBfbm9uZSB9LAogIC8qIEI3ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05P
TkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogQjggKi8gIHsgVURfSWludmFsaWQsICAgICBP
X05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBCOSAqLyAgeyBVRF9JaW52
YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEJBICov
ICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0s
CiAgLyogQkIgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAg
ICBQX25vbmUgfSwKICAvKiBCQyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05F
LCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEJEICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19O
T05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogQkUgKi8gIHsgVURfSWludmFs
aWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBCRiAqLyAg
eyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAog
IC8qIEMwICovICB7IFVEX0l4YWRkLCAgICAgICAgT19FYiwgICAgT19HYiwgICAgT19OT05FLCAg
UF9hc298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogQzEgKi8gIHsgVURfSXhh
ZGQsICAgICAgICBPX0V2LCAgICBPX0d2LCAgICBPX05PTkUsICBQX2Fzb3xQX3JleHd8UF9yZXhy
fFBfcmV4eHxQX3JleGIgfSwKICAvKiBDMiAqLyAgeyBVRF9JY21wc3MsICAgICAgIE9fViwgICAg
IE9fVywgICAgIE9fSWIsICAgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogQzMg
Ki8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUg
fSwKICAvKiBDNCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUs
ICAgIFBfbm9uZSB9LAogIC8qIEM1ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05P
TkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogQzYgKi8gIHsgVURfSWludmFsaWQsICAgICBP
X05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBDNyAqLyAgeyBVRF9JZ3Jw
X3JlZywgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIElUQUJfX1BGWF9TU0VGM19fMEZf
X09QX0M3X19SRUcgfSwKICAvKiBDOCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19O
T05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEM5ICovICB7IFVEX0lpbnZhbGlkLCAgICAg
T19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogQ0EgKi8gIHsgVURfSWlu
dmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBDQiAq
LyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9
LAogIC8qIENDICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwg
ICAgUF9ub25lIH0sCiAgLyogQ0QgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9O
RSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBDRSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9f
Tk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIENGICovICB7IFVEX0lpbnZh
bGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogRDAgKi8g
IHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwK
ICAvKiBEMSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAg
IFBfbm9uZSB9LAogIC8qIEQyICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUs
IE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogRDMgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05P
TkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBENCAqLyAgeyBVRF9JaW52YWxp
ZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEQ1ICovICB7
IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAg
LyogRDYgKi8gIHsgVURfSW1vdnEyZHEsICAgICBPX1YsICAgICBPX1BSLCAgICBPX05PTkUsICBQ
X2FzbyB9LAogIC8qIEQ3ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9f
Tk9ORSwgICAgUF9ub25lIH0sCiAgLyogRDggKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUs
IE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBEOSAqLyAgeyBVRF9JaW52YWxpZCwg
ICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIERBICovICB7IFVE
X0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyog
REIgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25v
bmUgfSwKICAvKiBEQyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05P
TkUsICAgIFBfbm9uZSB9LAogIC8qIEREICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBP
X05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogREUgKi8gIHsgVURfSWludmFsaWQsICAg
ICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBERiAqLyAgeyBVRF9J
aW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEUw
ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25l
IH0sCiAgLyogRTEgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05F
LCAgICBQX25vbmUgfSwKICAvKiBFMiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19O
T05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEUzICovICB7IFVEX0lpbnZhbGlkLCAgICAg
T19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogRTQgKi8gIHsgVURfSWlu
dmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBFNSAq
LyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9
LAogIC8qIEU2ICovICB7IFVEX0ljdnRkcTJwZCwgICAgT19WLCAgICAgT19XLCAgICAgT19OT05F
LCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiBFNyAqLyAgeyBVRF9JaW52YWxp
ZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEU4ICovICB7
IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAg
LyogRTkgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQ
X25vbmUgfSwKICAvKiBFQSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBP
X05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEVCICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05F
LCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogRUMgKi8gIHsgVURfSWludmFsaWQs
ICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBFRCAqLyAgeyBV
RF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8q
IEVFICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9u
b25lIH0sCiAgLyogRUYgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19O
T05FLCAgICBQX25vbmUgfSwKICAvKiBGMCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwg
T19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEYxICovICB7IFVEX0lpbnZhbGlkLCAg
ICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogRjIgKi8gIHsgVURf
SWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBG
MyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9u
ZSB9LAogIC8qIEY0ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9O
RSwgICAgUF9ub25lIH0sCiAgLyogRjUgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9f
Tk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBGNiAqLyAgeyBVRF9JaW52YWxpZCwgICAg
IE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEY3ICovICB7IFVEX0lp
bnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogRjgg
Ki8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUg
fSwKICAvKiBGOSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUs
ICAgIFBfbm9uZSB9LAogIC8qIEZBICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05P
TkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogRkIgKi8gIHsgVURfSWludmFsaWQsICAgICBP
X05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBGQyAqLyAgeyBVRF9JaW52
YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEZEICov
ICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0s
CiAgLyogRkUgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAg
ICBQX25vbmUgfSwKICAvKiBGRiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05F
LCBPX05PTkUsICAgIFBfbm9uZSB9LAp9OwoKc3RhdGljIHN0cnVjdCB1ZF9pdGFiX2VudHJ5IGl0
YWJfX3BmeF9zc2VmM19fMGZfX29wX2M3X19yZWdbOF0gPSB7CiAgLyogMDAgKi8gIHsgVURfSWlu
dmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAwMSAq
LyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9
LAogIC8qIDAyICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwg
ICAgUF9ub25lIH0sCiAgLyogMDMgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9O
RSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAwNCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9f
Tk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDA1ICovICB7IFVEX0lpbnZh
bGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMDYgKi8g
IHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwK
ICAvKiAwNyAqLyAgeyBVRF9JZ3JwX3ZlbmRvciwgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAg
IElUQUJfX1BGWF9TU0VGM19fMEZfX09QX0M3X19SRUdfX09QXzA3X19WRU5ET1IgfSwKfTsKCnN0
YXRpYyBzdHJ1Y3QgdWRfaXRhYl9lbnRyeSBpdGFiX19wZnhfc3NlZjNfXzBmX19vcF9jN19fcmVn
X19vcF8wN19fdmVuZG9yWzJdID0gewogIC8qIDAwICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19O
T05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMDEgKi8gIHsgVURfSXZteG9u
LCAgICAgICBPX01xLCAgICBPX05PTkUsICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBf
cmV4YiB9LAp9OwoKLyogdGhlIG9yZGVyIG9mIHRoaXMgdGFibGUgbWF0Y2hlcyBlbnVtIHVkX2l0
YWJfaW5kZXggKi8Kc3RydWN0IHVkX2l0YWJfZW50cnkgKiB1ZF9pdGFiX2xpc3RbXSA9IHsKICBp
dGFiX18wZiwKICBpdGFiX18wZl9fb3BfMDBfX3JlZywKICBpdGFiX18wZl9fb3BfMDFfX3JlZywK
ICBpdGFiX18wZl9fb3BfMDFfX3JlZ19fb3BfMDBfX21vZCwKICBpdGFiX18wZl9fb3BfMDFfX3Jl
Z19fb3BfMDBfX21vZF9fb3BfMDFfX3JtLAogIGl0YWJfXzBmX19vcF8wMV9fcmVnX19vcF8wMF9f
bW9kX19vcF8wMV9fcm1fX29wXzAxX192ZW5kb3IsCiAgaXRhYl9fMGZfX29wXzAxX19yZWdfX29w
XzAwX19tb2RfX29wXzAxX19ybV9fb3BfMDNfX3ZlbmRvciwKICBpdGFiX18wZl9fb3BfMDFfX3Jl
Z19fb3BfMDBfX21vZF9fb3BfMDFfX3JtX19vcF8wNF9fdmVuZG9yLAogIGl0YWJfXzBmX19vcF8w
MV9fcmVnX19vcF8wMV9fbW9kLAogIGl0YWJfXzBmX19vcF8wMV9fcmVnX19vcF8wMV9fbW9kX19v
cF8wMV9fcm0sCiAgaXRhYl9fMGZfX29wXzAxX19yZWdfX29wXzAyX19tb2QsCiAgaXRhYl9fMGZf
X29wXzAxX19yZWdfX29wXzAzX19tb2QsCiAgaXRhYl9fMGZfX29wXzAxX19yZWdfX29wXzAzX19t
b2RfX29wXzAxX19ybSwKICBpdGFiX18wZl9fb3BfMDFfX3JlZ19fb3BfMDNfX21vZF9fb3BfMDFf
X3JtX19vcF8wMF9fdmVuZG9yLAogIGl0YWJfXzBmX19vcF8wMV9fcmVnX19vcF8wM19fbW9kX19v
cF8wMV9fcm1fX29wXzAxX192ZW5kb3IsCiAgaXRhYl9fMGZfX29wXzAxX19yZWdfX29wXzAzX19t
b2RfX29wXzAxX19ybV9fb3BfMDJfX3ZlbmRvciwKICBpdGFiX18wZl9fb3BfMDFfX3JlZ19fb3Bf
MDNfX21vZF9fb3BfMDFfX3JtX19vcF8wM19fdmVuZG9yLAogIGl0YWJfXzBmX19vcF8wMV9fcmVn
X19vcF8wM19fbW9kX19vcF8wMV9fcm1fX29wXzA0X192ZW5kb3IsCiAgaXRhYl9fMGZfX29wXzAx
X19yZWdfX29wXzAzX19tb2RfX29wXzAxX19ybV9fb3BfMDVfX3ZlbmRvciwKICBpdGFiX18wZl9f
b3BfMDFfX3JlZ19fb3BfMDNfX21vZF9fb3BfMDFfX3JtX19vcF8wNl9fdmVuZG9yLAogIGl0YWJf
XzBmX19vcF8wMV9fcmVnX19vcF8wM19fbW9kX19vcF8wMV9fcm1fX29wXzA3X192ZW5kb3IsCiAg
aXRhYl9fMGZfX29wXzAxX19yZWdfX29wXzA0X19tb2QsCiAgaXRhYl9fMGZfX29wXzAxX19yZWdf
X29wXzA2X19tb2QsCiAgaXRhYl9fMGZfX29wXzAxX19yZWdfX29wXzA3X19tb2QsCiAgaXRhYl9f
MGZfX29wXzAxX19yZWdfX29wXzA3X19tb2RfX29wXzAxX19ybSwKICBpdGFiX18wZl9fb3BfMDFf
X3JlZ19fb3BfMDdfX21vZF9fb3BfMDFfX3JtX19vcF8wMV9fdmVuZG9yLAogIGl0YWJfXzBmX19v
cF8wZF9fcmVnLAogIGl0YWJfXzBmX19vcF8xOF9fcmVnLAogIGl0YWJfXzBmX19vcF83MV9fcmVn
LAogIGl0YWJfXzBmX19vcF83Ml9fcmVnLAogIGl0YWJfXzBmX19vcF83M19fcmVnLAogIGl0YWJf
XzBmX19vcF9hZV9fcmVnLAogIGl0YWJfXzBmX19vcF9hZV9fcmVnX19vcF8wNV9fbW9kLAogIGl0
YWJfXzBmX19vcF9hZV9fcmVnX19vcF8wNV9fbW9kX19vcF8wMV9fcm0sCiAgaXRhYl9fMGZfX29w
X2FlX19yZWdfX29wXzA2X19tb2QsCiAgaXRhYl9fMGZfX29wX2FlX19yZWdfX29wXzA2X19tb2Rf
X29wXzAxX19ybSwKICBpdGFiX18wZl9fb3BfYWVfX3JlZ19fb3BfMDdfX21vZCwKICBpdGFiX18w
Zl9fb3BfYWVfX3JlZ19fb3BfMDdfX21vZF9fb3BfMDFfX3JtLAogIGl0YWJfXzBmX19vcF9iYV9f
cmVnLAogIGl0YWJfXzBmX19vcF9jN19fcmVnLAogIGl0YWJfXzBmX19vcF9jN19fcmVnX19vcF8w
MF9fdmVuZG9yLAogIGl0YWJfXzBmX19vcF9jN19fcmVnX19vcF8wN19fdmVuZG9yLAogIGl0YWJf
XzBmX19vcF9kOV9fbW9kLAogIGl0YWJfXzBmX19vcF9kOV9fbW9kX19vcF8wMV9feDg3LAogIGl0
YWJfXzFieXRlLAogIGl0YWJfXzFieXRlX19vcF82MF9fb3NpemUsCiAgaXRhYl9fMWJ5dGVfX29w
XzYxX19vc2l6ZSwKICBpdGFiX18xYnl0ZV9fb3BfNjNfX21vZGUsCiAgaXRhYl9fMWJ5dGVfX29w
XzZkX19vc2l6ZSwKICBpdGFiX18xYnl0ZV9fb3BfNmZfX29zaXplLAogIGl0YWJfXzFieXRlX19v
cF84MF9fcmVnLAogIGl0YWJfXzFieXRlX19vcF84MV9fcmVnLAogIGl0YWJfXzFieXRlX19vcF84
Ml9fcmVnLAogIGl0YWJfXzFieXRlX19vcF84M19fcmVnLAogIGl0YWJfXzFieXRlX19vcF84Zl9f
cmVnLAogIGl0YWJfXzFieXRlX19vcF85OF9fb3NpemUsCiAgaXRhYl9fMWJ5dGVfX29wXzk5X19v
c2l6ZSwKICBpdGFiX18xYnl0ZV9fb3BfOWNfX21vZGUsCiAgaXRhYl9fMWJ5dGVfX29wXzljX19t
b2RlX19vcF8wMF9fb3NpemUsCiAgaXRhYl9fMWJ5dGVfX29wXzljX19tb2RlX19vcF8wMV9fb3Np
emUsCiAgaXRhYl9fMWJ5dGVfX29wXzlkX19tb2RlLAogIGl0YWJfXzFieXRlX19vcF85ZF9fbW9k
ZV9fb3BfMDBfX29zaXplLAogIGl0YWJfXzFieXRlX19vcF85ZF9fbW9kZV9fb3BfMDFfX29zaXpl
LAogIGl0YWJfXzFieXRlX19vcF9hNV9fb3NpemUsCiAgaXRhYl9fMWJ5dGVfX29wX2E3X19vc2l6
ZSwKICBpdGFiX18xYnl0ZV9fb3BfYWJfX29zaXplLAogIGl0YWJfXzFieXRlX19vcF9hZF9fb3Np
emUsCiAgaXRhYl9fMWJ5dGVfX29wX2FlX19tb2QsCiAgaXRhYl9fMWJ5dGVfX29wX2FlX19tb2Rf
X29wXzAwX19yZWcsCiAgaXRhYl9fMWJ5dGVfX29wX2FmX19vc2l6ZSwKICBpdGFiX18xYnl0ZV9f
b3BfYzBfX3JlZywKICBpdGFiX18xYnl0ZV9fb3BfYzFfX3JlZywKICBpdGFiX18xYnl0ZV9fb3Bf
YzZfX3JlZywKICBpdGFiX18xYnl0ZV9fb3BfYzdfX3JlZywKICBpdGFiX18xYnl0ZV9fb3BfY2Zf
X29zaXplLAogIGl0YWJfXzFieXRlX19vcF9kMF9fcmVnLAogIGl0YWJfXzFieXRlX19vcF9kMV9f
cmVnLAogIGl0YWJfXzFieXRlX19vcF9kMl9fcmVnLAogIGl0YWJfXzFieXRlX19vcF9kM19fcmVn
LAogIGl0YWJfXzFieXRlX19vcF9kOF9fbW9kLAogIGl0YWJfXzFieXRlX19vcF9kOF9fbW9kX19v
cF8wMF9fcmVnLAogIGl0YWJfXzFieXRlX19vcF9kOF9fbW9kX19vcF8wMV9feDg3LAogIGl0YWJf
XzFieXRlX19vcF9kOV9fbW9kLAogIGl0YWJfXzFieXRlX19vcF9kOV9fbW9kX19vcF8wMF9fcmVn
LAogIGl0YWJfXzFieXRlX19vcF9kOV9fbW9kX19vcF8wMV9feDg3LAogIGl0YWJfXzFieXRlX19v
cF9kYV9fbW9kLAogIGl0YWJfXzFieXRlX19vcF9kYV9fbW9kX19vcF8wMF9fcmVnLAogIGl0YWJf
XzFieXRlX19vcF9kYV9fbW9kX19vcF8wMV9feDg3LAogIGl0YWJfXzFieXRlX19vcF9kYl9fbW9k
LAogIGl0YWJfXzFieXRlX19vcF9kYl9fbW9kX19vcF8wMF9fcmVnLAogIGl0YWJfXzFieXRlX19v
cF9kYl9fbW9kX19vcF8wMV9feDg3LAogIGl0YWJfXzFieXRlX19vcF9kY19fbW9kLAogIGl0YWJf
XzFieXRlX19vcF9kY19fbW9kX19vcF8wMF9fcmVnLAogIGl0YWJfXzFieXRlX19vcF9kY19fbW9k
X19vcF8wMV9feDg3LAogIGl0YWJfXzFieXRlX19vcF9kZF9fbW9kLAogIGl0YWJfXzFieXRlX19v
cF9kZF9fbW9kX19vcF8wMF9fcmVnLAogIGl0YWJfXzFieXRlX19vcF9kZF9fbW9kX19vcF8wMV9f
eDg3LAogIGl0YWJfXzFieXRlX19vcF9kZV9fbW9kLAogIGl0YWJfXzFieXRlX19vcF9kZV9fbW9k
X19vcF8wMF9fcmVnLAogIGl0YWJfXzFieXRlX19vcF9kZV9fbW9kX19vcF8wMV9feDg3LAogIGl0
YWJfXzFieXRlX19vcF9kZl9fbW9kLAogIGl0YWJfXzFieXRlX19vcF9kZl9fbW9kX19vcF8wMF9f
cmVnLAogIGl0YWJfXzFieXRlX19vcF9kZl9fbW9kX19vcF8wMV9feDg3LAogIGl0YWJfXzFieXRl
X19vcF9lM19fYXNpemUsCiAgaXRhYl9fMWJ5dGVfX29wX2Y2X19yZWcsCiAgaXRhYl9fMWJ5dGVf
X29wX2Y3X19yZWcsCiAgaXRhYl9fMWJ5dGVfX29wX2ZlX19yZWcsCiAgaXRhYl9fMWJ5dGVfX29w
X2ZmX19yZWcsCiAgaXRhYl9fM2Rub3csCiAgaXRhYl9fcGZ4X3NzZTY2X18wZiwKICBpdGFiX19w
Znhfc3NlNjZfXzBmX19vcF83MV9fcmVnLAogIGl0YWJfX3BmeF9zc2U2Nl9fMGZfX29wXzcyX19y
ZWcsCiAgaXRhYl9fcGZ4X3NzZTY2X18wZl9fb3BfNzNfX3JlZywKICBpdGFiX19wZnhfc3NlNjZf
XzBmX19vcF9jN19fcmVnLAogIGl0YWJfX3BmeF9zc2U2Nl9fMGZfX29wX2M3X19yZWdfX29wXzAw
X192ZW5kb3IsCiAgaXRhYl9fcGZ4X3NzZWYyX18wZiwKICBpdGFiX19wZnhfc3NlZjNfXzBmLAog
IGl0YWJfX3BmeF9zc2VmM19fMGZfX29wX2M3X19yZWcsCiAgaXRhYl9fcGZ4X3NzZWYzX18wZl9f
b3BfYzdfX3JlZ19fb3BfMDdfX3ZlbmRvciwKfTsKAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAGtkYi94ODYvdWRpczg2LTEuNy9SRUFETUUAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAwMDAwNjY0ADAwMDI3NTYAMDAwMjc1NgAwMDAwMDAwMDIwMQAxMTc2NTQ2NTU1NgAwMTQ1
MTcAIDAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAdXN0YXIgIABt
cmF0aG9yAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAG1yYXRob3IAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAACmh0dHA6Ly91ZGlzODYuc291cmNlZm9yZ2UubmV0Lwp1ZGlzODYt
MS42IDogCiAgLSBjZCBsaWJ1ZGlzODYKICAtIGNwICpjIHRvIGhlcmUKICAtIGNwICpoIHRvIGhl
cmUKICAgCk11a2VzaCBSYXRob3IKMDQvMzAvMjAwOAoKAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAABrZGIveDg2L3VkaXM4Ni0xLjcvaXRhYi5oAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAMDAwMDY2NAAwMDAyNzU2ADAwMDI3NTYAMDAwMDAwMjc2MDIAMTE3NjU0NjU1NTYAMDE0NzQ1
ACAwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHVzdGFyICAAbXJh
dGhvcgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABtcmF0aG9yAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAovKiBpdGFiLmggLS0gYXV0byBnZW5lcmF0ZWQgYnkgb3BnZW4ucHks
IGRvIG5vdCBlZGl0LiAqLwoKI2lmbmRlZiBVRF9JVEFCX0gKI2RlZmluZSBVRF9JVEFCX0gKCgoK
ZW51bSB1ZF9pdGFiX3ZlbmRvcl9pbmRleCB7CiAgSVRBQl9fVkVORE9SX0lORFhfX0FNRCwKICBJ
VEFCX19WRU5ET1JfSU5EWF9fSU5URUwsCn07CgoKZW51bSB1ZF9pdGFiX21vZGVfaW5kZXggewog
IElUQUJfX01PREVfSU5EWF9fMTYsCiAgSVRBQl9fTU9ERV9JTkRYX18zMiwKICBJVEFCX19NT0RF
X0lORFhfXzY0Cn07CgoKZW51bSB1ZF9pdGFiX21vZF9pbmRleCB7CiAgSVRBQl9fTU9EX0lORFhf
X05PVF8xMSwKICBJVEFCX19NT0RfSU5EWF9fMTEKfTsKCgplbnVtIHVkX2l0YWJfaW5kZXggewog
IElUQUJfXzBGLAogIElUQUJfXzBGX19PUF8wMF9fUkVHLAogIElUQUJfXzBGX19PUF8wMV9fUkVH
LAogIElUQUJfXzBGX19PUF8wMV9fUkVHX19PUF8wMF9fTU9ELAogIElUQUJfXzBGX19PUF8wMV9f
UkVHX19PUF8wMF9fTU9EX19PUF8wMV9fUk0sCiAgSVRBQl9fMEZfX09QXzAxX19SRUdfX09QXzAw
X19NT0RfX09QXzAxX19STV9fT1BfMDFfX1ZFTkRPUiwKICBJVEFCX18wRl9fT1BfMDFfX1JFR19f
T1BfMDBfX01PRF9fT1BfMDFfX1JNX19PUF8wM19fVkVORE9SLAogIElUQUJfXzBGX19PUF8wMV9f
UkVHX19PUF8wMF9fTU9EX19PUF8wMV9fUk1fX09QXzA0X19WRU5ET1IsCiAgSVRBQl9fMEZfX09Q
XzAxX19SRUdfX09QXzAxX19NT0QsCiAgSVRBQl9fMEZfX09QXzAxX19SRUdfX09QXzAxX19NT0Rf
X09QXzAxX19STSwKICBJVEFCX18wRl9fT1BfMDFfX1JFR19fT1BfMDJfX01PRCwKICBJVEFCX18w
Rl9fT1BfMDFfX1JFR19fT1BfMDNfX01PRCwKICBJVEFCX18wRl9fT1BfMDFfX1JFR19fT1BfMDNf
X01PRF9fT1BfMDFfX1JNLAogIElUQUJfXzBGX19PUF8wMV9fUkVHX19PUF8wM19fTU9EX19PUF8w
MV9fUk1fX09QXzAwX19WRU5ET1IsCiAgSVRBQl9fMEZfX09QXzAxX19SRUdfX09QXzAzX19NT0Rf
X09QXzAxX19STV9fT1BfMDFfX1ZFTkRPUiwKICBJVEFCX18wRl9fT1BfMDFfX1JFR19fT1BfMDNf
X01PRF9fT1BfMDFfX1JNX19PUF8wMl9fVkVORE9SLAogIElUQUJfXzBGX19PUF8wMV9fUkVHX19P
UF8wM19fTU9EX19PUF8wMV9fUk1fX09QXzAzX19WRU5ET1IsCiAgSVRBQl9fMEZfX09QXzAxX19S
RUdfX09QXzAzX19NT0RfX09QXzAxX19STV9fT1BfMDRfX1ZFTkRPUiwKICBJVEFCX18wRl9fT1Bf
MDFfX1JFR19fT1BfMDNfX01PRF9fT1BfMDFfX1JNX19PUF8wNV9fVkVORE9SLAogIElUQUJfXzBG
X19PUF8wMV9fUkVHX19PUF8wM19fTU9EX19PUF8wMV9fUk1fX09QXzA2X19WRU5ET1IsCiAgSVRB
Ql9fMEZfX09QXzAxX19SRUdfX09QXzAzX19NT0RfX09QXzAxX19STV9fT1BfMDdfX1ZFTkRPUiwK
ICBJVEFCX18wRl9fT1BfMDFfX1JFR19fT1BfMDRfX01PRCwKICBJVEFCX18wRl9fT1BfMDFfX1JF
R19fT1BfMDZfX01PRCwKICBJVEFCX18wRl9fT1BfMDFfX1JFR19fT1BfMDdfX01PRCwKICBJVEFC
X18wRl9fT1BfMDFfX1JFR19fT1BfMDdfX01PRF9fT1BfMDFfX1JNLAogIElUQUJfXzBGX19PUF8w
MV9fUkVHX19PUF8wN19fTU9EX19PUF8wMV9fUk1fX09QXzAxX19WRU5ET1IsCiAgSVRBQl9fMEZf
X09QXzBEX19SRUcsCiAgSVRBQl9fMEZfX09QXzE4X19SRUcsCiAgSVRBQl9fMEZfX09QXzcxX19S
RUcsCiAgSVRBQl9fMEZfX09QXzcyX19SRUcsCiAgSVRBQl9fMEZfX09QXzczX19SRUcsCiAgSVRB
Ql9fMEZfX09QX0FFX19SRUcsCiAgSVRBQl9fMEZfX09QX0FFX19SRUdfX09QXzA1X19NT0QsCiAg
SVRBQl9fMEZfX09QX0FFX19SRUdfX09QXzA1X19NT0RfX09QXzAxX19STSwKICBJVEFCX18wRl9f
T1BfQUVfX1JFR19fT1BfMDZfX01PRCwKICBJVEFCX18wRl9fT1BfQUVfX1JFR19fT1BfMDZfX01P
RF9fT1BfMDFfX1JNLAogIElUQUJfXzBGX19PUF9BRV9fUkVHX19PUF8wN19fTU9ELAogIElUQUJf
XzBGX19PUF9BRV9fUkVHX19PUF8wN19fTU9EX19PUF8wMV9fUk0sCiAgSVRBQl9fMEZfX09QX0JB
X19SRUcsCiAgSVRBQl9fMEZfX09QX0M3X19SRUcsCiAgSVRBQl9fMEZfX09QX0M3X19SRUdfX09Q
XzAwX19WRU5ET1IsCiAgSVRBQl9fMEZfX09QX0M3X19SRUdfX09QXzA3X19WRU5ET1IsCiAgSVRB
Ql9fMEZfX09QX0Q5X19NT0QsCiAgSVRBQl9fMEZfX09QX0Q5X19NT0RfX09QXzAxX19YODcsCiAg
SVRBQl9fMUJZVEUsCiAgSVRBQl9fMUJZVEVfX09QXzYwX19PU0laRSwKICBJVEFCX18xQllURV9f
T1BfNjFfX09TSVpFLAogIElUQUJfXzFCWVRFX19PUF82M19fTU9ERSwKICBJVEFCX18xQllURV9f
T1BfNkRfX09TSVpFLAogIElUQUJfXzFCWVRFX19PUF82Rl9fT1NJWkUsCiAgSVRBQl9fMUJZVEVf
X09QXzgwX19SRUcsCiAgSVRBQl9fMUJZVEVfX09QXzgxX19SRUcsCiAgSVRBQl9fMUJZVEVfX09Q
XzgyX19SRUcsCiAgSVRBQl9fMUJZVEVfX09QXzgzX19SRUcsCiAgSVRBQl9fMUJZVEVfX09QXzhG
X19SRUcsCiAgSVRBQl9fMUJZVEVfX09QXzk4X19PU0laRSwKICBJVEFCX18xQllURV9fT1BfOTlf
X09TSVpFLAogIElUQUJfXzFCWVRFX19PUF85Q19fTU9ERSwKICBJVEFCX18xQllURV9fT1BfOUNf
X01PREVfX09QXzAwX19PU0laRSwKICBJVEFCX18xQllURV9fT1BfOUNfX01PREVfX09QXzAxX19P
U0laRSwKICBJVEFCX18xQllURV9fT1BfOURfX01PREUsCiAgSVRBQl9fMUJZVEVfX09QXzlEX19N
T0RFX19PUF8wMF9fT1NJWkUsCiAgSVRBQl9fMUJZVEVfX09QXzlEX19NT0RFX19PUF8wMV9fT1NJ
WkUsCiAgSVRBQl9fMUJZVEVfX09QX0E1X19PU0laRSwKICBJVEFCX18xQllURV9fT1BfQTdfX09T
SVpFLAogIElUQUJfXzFCWVRFX19PUF9BQl9fT1NJWkUsCiAgSVRBQl9fMUJZVEVfX09QX0FEX19P
U0laRSwKICBJVEFCX18xQllURV9fT1BfQUVfX01PRCwKICBJVEFCX18xQllURV9fT1BfQUVfX01P
RF9fT1BfMDBfX1JFRywKICBJVEFCX18xQllURV9fT1BfQUZfX09TSVpFLAogIElUQUJfXzFCWVRF
X19PUF9DMF9fUkVHLAogIElUQUJfXzFCWVRFX19PUF9DMV9fUkVHLAogIElUQUJfXzFCWVRFX19P
UF9DNl9fUkVHLAogIElUQUJfXzFCWVRFX19PUF9DN19fUkVHLAogIElUQUJfXzFCWVRFX19PUF9D
Rl9fT1NJWkUsCiAgSVRBQl9fMUJZVEVfX09QX0QwX19SRUcsCiAgSVRBQl9fMUJZVEVfX09QX0Qx
X19SRUcsCiAgSVRBQl9fMUJZVEVfX09QX0QyX19SRUcsCiAgSVRBQl9fMUJZVEVfX09QX0QzX19S
RUcsCiAgSVRBQl9fMUJZVEVfX09QX0Q4X19NT0QsCiAgSVRBQl9fMUJZVEVfX09QX0Q4X19NT0Rf
X09QXzAwX19SRUcsCiAgSVRBQl9fMUJZVEVfX09QX0Q4X19NT0RfX09QXzAxX19YODcsCiAgSVRB
Ql9fMUJZVEVfX09QX0Q5X19NT0QsCiAgSVRBQl9fMUJZVEVfX09QX0Q5X19NT0RfX09QXzAwX19S
RUcsCiAgSVRBQl9fMUJZVEVfX09QX0Q5X19NT0RfX09QXzAxX19YODcsCiAgSVRBQl9fMUJZVEVf
X09QX0RBX19NT0QsCiAgSVRBQl9fMUJZVEVfX09QX0RBX19NT0RfX09QXzAwX19SRUcsCiAgSVRB
Ql9fMUJZVEVfX09QX0RBX19NT0RfX09QXzAxX19YODcsCiAgSVRBQl9fMUJZVEVfX09QX0RCX19N
T0QsCiAgSVRBQl9fMUJZVEVfX09QX0RCX19NT0RfX09QXzAwX19SRUcsCiAgSVRBQl9fMUJZVEVf
X09QX0RCX19NT0RfX09QXzAxX19YODcsCiAgSVRBQl9fMUJZVEVfX09QX0RDX19NT0QsCiAgSVRB
Ql9fMUJZVEVfX09QX0RDX19NT0RfX09QXzAwX19SRUcsCiAgSVRBQl9fMUJZVEVfX09QX0RDX19N
T0RfX09QXzAxX19YODcsCiAgSVRBQl9fMUJZVEVfX09QX0REX19NT0QsCiAgSVRBQl9fMUJZVEVf
X09QX0REX19NT0RfX09QXzAwX19SRUcsCiAgSVRBQl9fMUJZVEVfX09QX0REX19NT0RfX09QXzAx
X19YODcsCiAgSVRBQl9fMUJZVEVfX09QX0RFX19NT0QsCiAgSVRBQl9fMUJZVEVfX09QX0RFX19N
T0RfX09QXzAwX19SRUcsCiAgSVRBQl9fMUJZVEVfX09QX0RFX19NT0RfX09QXzAxX19YODcsCiAg
SVRBQl9fMUJZVEVfX09QX0RGX19NT0QsCiAgSVRBQl9fMUJZVEVfX09QX0RGX19NT0RfX09QXzAw
X19SRUcsCiAgSVRBQl9fMUJZVEVfX09QX0RGX19NT0RfX09QXzAxX19YODcsCiAgSVRBQl9fMUJZ
VEVfX09QX0UzX19BU0laRSwKICBJVEFCX18xQllURV9fT1BfRjZfX1JFRywKICBJVEFCX18xQllU
RV9fT1BfRjdfX1JFRywKICBJVEFCX18xQllURV9fT1BfRkVfX1JFRywKICBJVEFCX18xQllURV9f
T1BfRkZfX1JFRywKICBJVEFCX18zRE5PVywKICBJVEFCX19QRlhfU1NFNjZfXzBGLAogIElUQUJf
X1BGWF9TU0U2Nl9fMEZfX09QXzcxX19SRUcsCiAgSVRBQl9fUEZYX1NTRTY2X18wRl9fT1BfNzJf
X1JFRywKICBJVEFCX19QRlhfU1NFNjZfXzBGX19PUF83M19fUkVHLAogIElUQUJfX1BGWF9TU0U2
Nl9fMEZfX09QX0M3X19SRUcsCiAgSVRBQl9fUEZYX1NTRTY2X18wRl9fT1BfQzdfX1JFR19fT1Bf
MDBfX1ZFTkRPUiwKICBJVEFCX19QRlhfU1NFRjJfXzBGLAogIElUQUJfX1BGWF9TU0VGM19fMEYs
CiAgSVRBQl9fUEZYX1NTRUYzX18wRl9fT1BfQzdfX1JFRywKICBJVEFCX19QRlhfU1NFRjNfXzBG
X19PUF9DN19fUkVHX19PUF8wN19fVkVORE9SLAp9OwoKCmVudW0gdWRfbW5lbW9uaWNfY29kZSB7
CiAgVURfSTNkbm93LAogIFVEX0lhYWEsCiAgVURfSWFhZCwKICBVRF9JYWFtLAogIFVEX0lhYXMs
CiAgVURfSWFkYywKICBVRF9JYWRkLAogIFVEX0lhZGRwZCwKICBVRF9JYWRkcHMsCiAgVURfSWFk
ZHNkLAogIFVEX0lhZGRzcywKICBVRF9JYWRkc3VicGQsCiAgVURfSWFkZHN1YnBzLAogIFVEX0lh
bmQsCiAgVURfSWFuZHBkLAogIFVEX0lhbmRwcywKICBVRF9JYW5kbnBkLAogIFVEX0lhbmRucHMs
CiAgVURfSWFycGwsCiAgVURfSW1vdnN4ZCwKICBVRF9JYm91bmQsCiAgVURfSWJzZiwKICBVRF9J
YnNyLAogIFVEX0lic3dhcCwKICBVRF9JYnQsCiAgVURfSWJ0YywKICBVRF9JYnRyLAogIFVEX0li
dHMsCiAgVURfSWNhbGwsCiAgVURfSWNidywKICBVRF9JY3dkZSwKICBVRF9JY2RxZSwKICBVRF9J
Y2xjLAogIFVEX0ljbGQsCiAgVURfSWNsZmx1c2gsCiAgVURfSWNsZ2ksCiAgVURfSWNsaSwKICBV
RF9JY2x0cywKICBVRF9JY21jLAogIFVEX0ljbW92bywKICBVRF9JY21vdm5vLAogIFVEX0ljbW92
YiwKICBVRF9JY21vdmFlLAogIFVEX0ljbW92eiwKICBVRF9JY21vdm56LAogIFVEX0ljbW92YmUs
CiAgVURfSWNtb3ZhLAogIFVEX0ljbW92cywKICBVRF9JY21vdm5zLAogIFVEX0ljbW92cCwKICBV
RF9JY21vdm5wLAogIFVEX0ljbW92bCwKICBVRF9JY21vdmdlLAogIFVEX0ljbW92bGUsCiAgVURf
SWNtb3ZnLAogIFVEX0ljbXAsCiAgVURfSWNtcHBkLAogIFVEX0ljbXBwcywKICBVRF9JY21wc2Is
CiAgVURfSWNtcHN3LAogIFVEX0ljbXBzZCwKICBVRF9JY21wc3EsCiAgVURfSWNtcHNzLAogIFVE
X0ljbXB4Y2hnLAogIFVEX0ljbXB4Y2hnOGIsCiAgVURfSWNvbWlzZCwKICBVRF9JY29taXNzLAog
IFVEX0ljcHVpZCwKICBVRF9JY3Z0ZHEycGQsCiAgVURfSWN2dGRxMnBzLAogIFVEX0ljdnRwZDJk
cSwKICBVRF9JY3Z0cGQycGksCiAgVURfSWN2dHBkMnBzLAogIFVEX0ljdnRwaTJwcywKICBVRF9J
Y3Z0cGkycGQsCiAgVURfSWN2dHBzMmRxLAogIFVEX0ljdnRwczJwaSwKICBVRF9JY3Z0cHMycGQs
CiAgVURfSWN2dHNkMnNpLAogIFVEX0ljdnRzZDJzcywKICBVRF9JY3Z0c2kyc3MsCiAgVURfSWN2
dHNzMnNpLAogIFVEX0ljdnRzczJzZCwKICBVRF9JY3Z0dHBkMnBpLAogIFVEX0ljdnR0cGQyZHEs
CiAgVURfSWN2dHRwczJkcSwKICBVRF9JY3Z0dHBzMnBpLAogIFVEX0ljdnR0c2Qyc2ksCiAgVURf
SWN2dHNpMnNkLAogIFVEX0ljdnR0c3Myc2ksCiAgVURfSWN3ZCwKICBVRF9JY2RxLAogIFVEX0lj
cW8sCiAgVURfSWRhYSwKICBVRF9JZGFzLAogIFVEX0lkZWMsCiAgVURfSWRpdiwKICBVRF9JZGl2
cGQsCiAgVURfSWRpdnBzLAogIFVEX0lkaXZzZCwKICBVRF9JZGl2c3MsCiAgVURfSWVtbXMsCiAg
VURfSWVudGVyLAogIFVEX0lmMnhtMSwKICBVRF9JZmFicywKICBVRF9JZmFkZCwKICBVRF9JZmFk
ZHAsCiAgVURfSWZibGQsCiAgVURfSWZic3RwLAogIFVEX0lmY2hzLAogIFVEX0lmY2xleCwKICBV
RF9JZmNtb3ZiLAogIFVEX0lmY21vdmUsCiAgVURfSWZjbW92YmUsCiAgVURfSWZjbW92dSwKICBV
RF9JZmNtb3ZuYiwKICBVRF9JZmNtb3ZuZSwKICBVRF9JZmNtb3ZuYmUsCiAgVURfSWZjbW92bnUs
CiAgVURfSWZ1Y29taSwKICBVRF9JZmNvbSwKICBVRF9JZmNvbTIsCiAgVURfSWZjb21wMywKICBV
RF9JZmNvbWksCiAgVURfSWZ1Y29taXAsCiAgVURfSWZjb21pcCwKICBVRF9JZmNvbXAsCiAgVURf
SWZjb21wNSwKICBVRF9JZmNvbXBwLAogIFVEX0lmY29zLAogIFVEX0lmZGVjc3RwLAogIFVEX0lm
ZGl2LAogIFVEX0lmZGl2cCwKICBVRF9JZmRpdnIsCiAgVURfSWZkaXZycCwKICBVRF9JZmVtbXMs
CiAgVURfSWZmcmVlLAogIFVEX0lmZnJlZXAsCiAgVURfSWZpY29tLAogIFVEX0lmaWNvbXAsCiAg
VURfSWZpbGQsCiAgVURfSWZuY3N0cCwKICBVRF9JZm5pbml0LAogIFVEX0lmaWFkZCwKICBVRF9J
ZmlkaXZyLAogIFVEX0lmaWRpdiwKICBVRF9JZmlzdWIsCiAgVURfSWZpc3ViciwKICBVRF9JZmlz
dCwKICBVRF9JZmlzdHAsCiAgVURfSWZpc3R0cCwKICBVRF9JZmxkLAogIFVEX0lmbGQxLAogIFVE
X0lmbGRsMnQsCiAgVURfSWZsZGwyZSwKICBVRF9JZmxkbHBpLAogIFVEX0lmbGRsZzIsCiAgVURf
SWZsZGxuMiwKICBVRF9JZmxkeiwKICBVRF9JZmxkY3csCiAgVURfSWZsZGVudiwKICBVRF9JZm11
bCwKICBVRF9JZm11bHAsCiAgVURfSWZpbXVsLAogIFVEX0lmbm9wLAogIFVEX0lmcGF0YW4sCiAg
VURfSWZwcmVtLAogIFVEX0lmcHJlbTEsCiAgVURfSWZwdGFuLAogIFVEX0lmcm5kaW50LAogIFVE
X0lmcnN0b3IsCiAgVURfSWZuc2F2ZSwKICBVRF9JZnNjYWxlLAogIFVEX0lmc2luLAogIFVEX0lm
c2luY29zLAogIFVEX0lmc3FydCwKICBVRF9JZnN0cCwKICBVRF9JZnN0cDEsCiAgVURfSWZzdHA4
LAogIFVEX0lmc3RwOSwKICBVRF9JZnN0LAogIFVEX0lmbnN0Y3csCiAgVURfSWZuc3RlbnYsCiAg
VURfSWZuc3RzdywKICBVRF9JZnN1YiwKICBVRF9JZnN1YnAsCiAgVURfSWZzdWJyLAogIFVEX0lm
c3VicnAsCiAgVURfSWZ0c3QsCiAgVURfSWZ1Y29tLAogIFVEX0lmdWNvbXAsCiAgVURfSWZ1Y29t
cHAsCiAgVURfSWZ4YW0sCiAgVURfSWZ4Y2gsCiAgVURfSWZ4Y2g0LAogIFVEX0lmeGNoNywKICBV
RF9JZnhyc3RvciwKICBVRF9JZnhzYXZlLAogIFVEX0lmcHh0cmFjdCwKICBVRF9JZnlsMngsCiAg
VURfSWZ5bDJ4cDEsCiAgVURfSWhhZGRwZCwKICBVRF9JaGFkZHBzLAogIFVEX0lobHQsCiAgVURf
SWhzdWJwZCwKICBVRF9JaHN1YnBzLAogIFVEX0lpZGl2LAogIFVEX0lpbiwKICBVRF9JaW11bCwK
ICBVRF9JaW5jLAogIFVEX0lpbnNiLAogIFVEX0lpbnN3LAogIFVEX0lpbnNkLAogIFVEX0lpbnQx
LAogIFVEX0lpbnQzLAogIFVEX0lpbnQsCiAgVURfSWludG8sCiAgVURfSWludmQsCiAgVURfSWlu
dmxwZywKICBVRF9JaW52bHBnYSwKICBVRF9JaXJldHcsCiAgVURfSWlyZXRkLAogIFVEX0lpcmV0
cSwKICBVRF9Jam8sCiAgVURfSWpubywKICBVRF9JamIsCiAgVURfSWphZSwKICBVRF9JanosCiAg
VURfSWpueiwKICBVRF9JamJlLAogIFVEX0lqYSwKICBVRF9JanMsCiAgVURfSWpucywKICBVRF9J
anAsCiAgVURfSWpucCwKICBVRF9JamwsCiAgVURfSWpnZSwKICBVRF9JamxlLAogIFVEX0lqZywK
ICBVRF9JamN4eiwKICBVRF9JamVjeHosCiAgVURfSWpyY3h6LAogIFVEX0lqbXAsCiAgVURfSWxh
aGYsCiAgVURfSWxhciwKICBVRF9JbGRkcXUsCiAgVURfSWxkbXhjc3IsCiAgVURfSWxkcywKICBV
RF9JbGVhLAogIFVEX0lsZXMsCiAgVURfSWxmcywKICBVRF9JbGdzLAogIFVEX0lsaWR0LAogIFVE
X0lsc3MsCiAgVURfSWxlYXZlLAogIFVEX0lsZmVuY2UsCiAgVURfSWxnZHQsCiAgVURfSWxsZHQs
CiAgVURfSWxtc3csCiAgVURfSWxvY2ssCiAgVURfSWxvZHNiLAogIFVEX0lsb2RzdywKICBVRF9J
bG9kc2QsCiAgVURfSWxvZHNxLAogIFVEX0lsb29wbnosCiAgVURfSWxvb3BlLAogIFVEX0lsb29w
LAogIFVEX0lsc2wsCiAgVURfSWx0ciwKICBVRF9JbWFza21vdnEsCiAgVURfSW1heHBkLAogIFVE
X0ltYXhwcywKICBVRF9JbWF4c2QsCiAgVURfSW1heHNzLAogIFVEX0ltZmVuY2UsCiAgVURfSW1p
bnBkLAogIFVEX0ltaW5wcywKICBVRF9JbWluc2QsCiAgVURfSW1pbnNzLAogIFVEX0ltb25pdG9y
LAogIFVEX0ltb3YsCiAgVURfSW1vdmFwZCwKICBVRF9JbW92YXBzLAogIFVEX0ltb3ZkLAogIFVE
X0ltb3ZkZHVwLAogIFVEX0ltb3ZkcWEsCiAgVURfSW1vdmRxdSwKICBVRF9JbW92ZHEycSwKICBV
RF9JbW92aHBkLAogIFVEX0ltb3ZocHMsCiAgVURfSW1vdmxocHMsCiAgVURfSW1vdmxwZCwKICBV
RF9JbW92bHBzLAogIFVEX0ltb3ZobHBzLAogIFVEX0ltb3Ztc2twZCwKICBVRF9JbW92bXNrcHMs
CiAgVURfSW1vdm50ZHEsCiAgVURfSW1vdm50aSwKICBVRF9JbW92bnRwZCwKICBVRF9JbW92bnRw
cywKICBVRF9JbW92bnRxLAogIFVEX0ltb3ZxLAogIFVEX0ltb3ZxYSwKICBVRF9JbW92cTJkcSwK
ICBVRF9JbW92c2IsCiAgVURfSW1vdnN3LAogIFVEX0ltb3ZzZCwKICBVRF9JbW92c3EsCiAgVURf
SW1vdnNsZHVwLAogIFVEX0ltb3ZzaGR1cCwKICBVRF9JbW92c3MsCiAgVURfSW1vdnN4LAogIFVE
X0ltb3Z1cGQsCiAgVURfSW1vdnVwcywKICBVRF9JbW92engsCiAgVURfSW11bCwKICBVRF9JbXVs
cGQsCiAgVURfSW11bHBzLAogIFVEX0ltdWxzZCwKICBVRF9JbXVsc3MsCiAgVURfSW13YWl0LAog
IFVEX0luZWcsCiAgVURfSW5vcCwKICBVRF9Jbm90LAogIFVEX0lvciwKICBVRF9Jb3JwZCwKICBV
RF9Jb3JwcywKICBVRF9Jb3V0LAogIFVEX0lvdXRzYiwKICBVRF9Jb3V0c3csCiAgVURfSW91dHNk
LAogIFVEX0lvdXRzcSwKICBVRF9JcGFja3Nzd2IsCiAgVURfSXBhY2tzc2R3LAogIFVEX0lwYWNr
dXN3YiwKICBVRF9JcGFkZGIsCiAgVURfSXBhZGR3LAogIFVEX0lwYWRkcSwKICBVRF9JcGFkZHNi
LAogIFVEX0lwYWRkc3csCiAgVURfSXBhZGR1c2IsCiAgVURfSXBhZGR1c3csCiAgVURfSXBhbmQs
CiAgVURfSXBhbmRuLAogIFVEX0lwYXVzZSwKICBVRF9JcGF2Z2IsCiAgVURfSXBhdmd3LAogIFVE
X0lwY21wZXFiLAogIFVEX0lwY21wZXF3LAogIFVEX0lwY21wZXFkLAogIFVEX0lwY21wZ3RiLAog
IFVEX0lwY21wZ3R3LAogIFVEX0lwY21wZ3RkLAogIFVEX0lwZXh0cncsCiAgVURfSXBpbnNydywK
ICBVRF9JcG1hZGR3ZCwKICBVRF9JcG1heHN3LAogIFVEX0lwbWF4dWIsCiAgVURfSXBtaW5zdywK
ICBVRF9JcG1pbnViLAogIFVEX0lwbW92bXNrYiwKICBVRF9JcG11bGh1dywKICBVRF9JcG11bGh3
LAogIFVEX0lwbXVsbHcsCiAgVURfSXBtdWx1ZHEsCiAgVURfSXBvcCwKICBVRF9JcG9wYSwKICBV
RF9JcG9wYWQsCiAgVURfSXBvcGZ3LAogIFVEX0lwb3BmZCwKICBVRF9JcG9wZnEsCiAgVURfSXBv
ciwKICBVRF9JcHJlZmV0Y2gsCiAgVURfSXByZWZldGNobnRhLAogIFVEX0lwcmVmZXRjaHQwLAog
IFVEX0lwcmVmZXRjaHQxLAogIFVEX0lwcmVmZXRjaHQyLAogIFVEX0lwc2FkYncsCiAgVURfSXBz
aHVmZCwKICBVRF9JcHNodWZodywKICBVRF9JcHNodWZsdywKICBVRF9JcHNodWZ3LAogIFVEX0lw
c2xsZHEsCiAgVURfSXBzbGx3LAogIFVEX0lwc2xsZCwKICBVRF9JcHNsbHEsCiAgVURfSXBzcmF3
LAogIFVEX0lwc3JhZCwKICBVRF9JcHNybHcsCiAgVURfSXBzcmxkLAogIFVEX0lwc3JscSwKICBV
RF9JcHNybGRxLAogIFVEX0lwc3ViYiwKICBVRF9JcHN1YncsCiAgVURfSXBzdWJkLAogIFVEX0lw
c3VicSwKICBVRF9JcHN1YnNiLAogIFVEX0lwc3Vic3csCiAgVURfSXBzdWJ1c2IsCiAgVURfSXBz
dWJ1c3csCiAgVURfSXB1bnBja2hidywKICBVRF9JcHVucGNraHdkLAogIFVEX0lwdW5wY2toZHEs
CiAgVURfSXB1bnBja2hxZHEsCiAgVURfSXB1bnBja2xidywKICBVRF9JcHVucGNrbHdkLAogIFVE
X0lwdW5wY2tsZHEsCiAgVURfSXB1bnBja2xxZHEsCiAgVURfSXBpMmZ3LAogIFVEX0lwaTJmZCwK
ICBVRF9JcGYyaXcsCiAgVURfSXBmMmlkLAogIFVEX0lwZm5hY2MsCiAgVURfSXBmcG5hY2MsCiAg
VURfSXBmY21wZ2UsCiAgVURfSXBmbWluLAogIFVEX0lwZnJjcCwKICBVRF9JcGZyc3FydCwKICBV
RF9JcGZzdWIsCiAgVURfSXBmYWRkLAogIFVEX0lwZmNtcGd0LAogIFVEX0lwZm1heCwKICBVRF9J
cGZyY3BpdDEsCiAgVURfSXBmcnNwaXQxLAogIFVEX0lwZnN1YnIsCiAgVURfSXBmYWNjLAogIFVE
X0lwZmNtcGVxLAogIFVEX0lwZm11bCwKICBVRF9JcGZyY3BpdDIsCiAgVURfSXBtdWxocncsCiAg
VURfSXBzd2FwZCwKICBVRF9JcGF2Z3VzYiwKICBVRF9JcHVzaCwKICBVRF9JcHVzaGEsCiAgVURf
SXB1c2hhZCwKICBVRF9JcHVzaGZ3LAogIFVEX0lwdXNoZmQsCiAgVURfSXB1c2hmcSwKICBVRF9J
cHhvciwKICBVRF9JcmNsLAogIFVEX0lyY3IsCiAgVURfSXJvbCwKICBVRF9Jcm9yLAogIFVEX0ly
Y3BwcywKICBVRF9JcmNwc3MsCiAgVURfSXJkbXNyLAogIFVEX0lyZHBtYywKICBVRF9JcmR0c2Ms
CiAgVURfSXJkdHNjcCwKICBVRF9JcmVwbmUsCiAgVURfSXJlcCwKICBVRF9JcmV0LAogIFVEX0ly
ZXRmLAogIFVEX0lyc20sCiAgVURfSXJzcXJ0cHMsCiAgVURfSXJzcXJ0c3MsCiAgVURfSXNhaGYs
CiAgVURfSXNhbCwKICBVRF9Jc2FsYywKICBVRF9Jc2FyLAogIFVEX0lzaGwsCiAgVURfSXNociwK
ICBVRF9Jc2JiLAogIFVEX0lzY2FzYiwKICBVRF9Jc2Nhc3csCiAgVURfSXNjYXNkLAogIFVEX0lz
Y2FzcSwKICBVRF9Jc2V0bywKICBVRF9Jc2V0bm8sCiAgVURfSXNldGIsCiAgVURfSXNldG5iLAog
IFVEX0lzZXR6LAogIFVEX0lzZXRueiwKICBVRF9Jc2V0YmUsCiAgVURfSXNldGEsCiAgVURfSXNl
dHMsCiAgVURfSXNldG5zLAogIFVEX0lzZXRwLAogIFVEX0lzZXRucCwKICBVRF9Jc2V0bCwKICBV
RF9Jc2V0Z2UsCiAgVURfSXNldGxlLAogIFVEX0lzZXRnLAogIFVEX0lzZmVuY2UsCiAgVURfSXNn
ZHQsCiAgVURfSXNobGQsCiAgVURfSXNocmQsCiAgVURfSXNodWZwZCwKICBVRF9Jc2h1ZnBzLAog
IFVEX0lzaWR0LAogIFVEX0lzbGR0LAogIFVEX0lzbXN3LAogIFVEX0lzcXJ0cHMsCiAgVURfSXNx
cnRwZCwKICBVRF9Jc3FydHNkLAogIFVEX0lzcXJ0c3MsCiAgVURfSXN0YywKICBVRF9Jc3RkLAog
IFVEX0lzdGdpLAogIFVEX0lzdGksCiAgVURfSXNraW5pdCwKICBVRF9Jc3RteGNzciwKICBVRF9J
c3Rvc2IsCiAgVURfSXN0b3N3LAogIFVEX0lzdG9zZCwKICBVRF9Jc3Rvc3EsCiAgVURfSXN0ciwK
ICBVRF9Jc3ViLAogIFVEX0lzdWJwZCwKICBVRF9Jc3VicHMsCiAgVURfSXN1YnNkLAogIFVEX0lz
dWJzcywKICBVRF9Jc3dhcGdzLAogIFVEX0lzeXNjYWxsLAogIFVEX0lzeXNlbnRlciwKICBVRF9J
c3lzZXhpdCwKICBVRF9Jc3lzcmV0LAogIFVEX0l0ZXN0LAogIFVEX0l1Y29taXNkLAogIFVEX0l1
Y29taXNzLAogIFVEX0l1ZDIsCiAgVURfSXVucGNraHBkLAogIFVEX0l1bnBja2hwcywKICBVRF9J
dW5wY2tscHMsCiAgVURfSXVucGNrbHBkLAogIFVEX0l2ZXJyLAogIFVEX0l2ZXJ3LAogIFVEX0l2
bWNhbGwsCiAgVURfSXZtY2xlYXIsCiAgVURfSXZteG9uLAogIFVEX0l2bXB0cmxkLAogIFVEX0l2
bXB0cnN0LAogIFVEX0l2bXJlc3VtZSwKICBVRF9Jdm14b2ZmLAogIFVEX0l2bXJ1biwKICBVRF9J
dm1tY2FsbCwKICBVRF9Jdm1sb2FkLAogIFVEX0l2bXNhdmUsCiAgVURfSXdhaXQsCiAgVURfSXdi
aW52ZCwKICBVRF9Jd3Jtc3IsCiAgVURfSXhhZGQsCiAgVURfSXhjaGcsCiAgVURfSXhsYXRiLAog
IFVEX0l4b3IsCiAgVURfSXhvcnBkLAogIFVEX0l4b3JwcywKICBVRF9JZGIsCiAgVURfSWludmFs
aWQsCiAgVURfSWQzdmlsLAogIFVEX0luYSwKICBVRF9JZ3JwX3JlZywKICBVRF9JZ3JwX3JtLAog
IFVEX0lncnBfdmVuZG9yLAogIFVEX0lncnBfeDg3LAogIFVEX0lncnBfbW9kZSwKICBVRF9JZ3Jw
X29zaXplLAogIFVEX0lncnBfYXNpemUsCiAgVURfSWdycF9tb2QsCiAgVURfSW5vbmUsCn07CgoK
CmV4dGVybiBjb25zdCBjaGFyKiB1ZF9tbmVtb25pY3Nfc3RyW107OwpleHRlcm4gc3RydWN0IHVk
X2l0YWJfZW50cnkqIHVkX2l0YWJfbGlzdFtdOwoKI2VuZGlmCgAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAGtkYi94ODYv
dWRpczg2LTEuNy90eXBlcy5oAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAwMDAwNjY0ADAwMDI3NTYAMDAwMjc1
NgAwMDAwMDAxMTQzMwAxMTc2NTQ2NTU1NgAwMTUxNjUAIDAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAdXN0YXIgIABtcmF0aG9yAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAG1yYXRob3IAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAALyogLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0KICogdHlwZXMuaAogKgogKiBDb3B5cmlnaHQgKGMpIDIwMDYsIFZpdmVr
IE1vaGFuIDx2aXZla0BzaWc5LmNvbT4KICogQWxsIHJpZ2h0cyByZXNlcnZlZC4gU2VlIExJQ0VO
U0UKICogLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0KICovCiNpZm5kZWYgVURfVFlQRVNfSAojZGVmaW5l
IFVEX1RZUEVTX0gKCgojaW5jbHVkZSAiLi4vLi4vaW5jbHVkZS9rZGJpbmMuaCIKCiNkZWZpbmUg
Rk1UNjQgIiVsbCIKI2luY2x1ZGUgIml0YWIuaCIKCi8qIC0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tCiAq
IEFsbCBwb3NzaWJsZSAidHlwZXMiIG9mIG9iamVjdHMgaW4gdWRpczg2LiBPcmRlciBpcyBJbXBv
cnRhbnQhCiAqIC0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tCiAqLwplbnVtIHVkX3R5cGUKewogIFVEX05P
TkUsCgogIC8qIDggYml0IEdQUnMgKi8KICBVRF9SX0FMLAlVRF9SX0NMLAlVRF9SX0RMLAlVRF9S
X0JMLAogIFVEX1JfQUgsCVVEX1JfQ0gsCVVEX1JfREgsCVVEX1JfQkgsCiAgVURfUl9TUEwsCVVE
X1JfQlBMLAlVRF9SX1NJTCwJVURfUl9ESUwsCiAgVURfUl9SOEIsCVVEX1JfUjlCLAlVRF9SX1Ix
MEIsCVVEX1JfUjExQiwKICBVRF9SX1IxMkIsCVVEX1JfUjEzQiwJVURfUl9SMTRCLAlVRF9SX1Ix
NUIsCgogIC8qIDE2IGJpdCBHUFJzICovCiAgVURfUl9BWCwJVURfUl9DWCwJVURfUl9EWCwJVURf
Ul9CWCwKICBVRF9SX1NQLAlVRF9SX0JQLAlVRF9SX1NJLAlVRF9SX0RJLAogIFVEX1JfUjhXLAlV
RF9SX1I5VywJVURfUl9SMTBXLAlVRF9SX1IxMVcsCiAgVURfUl9SMTJXLAlVRF9SX1IxM1csCVVE
X1JfUjE0VywJVURfUl9SMTVXLAoJCiAgLyogMzIgYml0IEdQUnMgKi8KICBVRF9SX0VBWCwJVURf
Ul9FQ1gsCVVEX1JfRURYLAlVRF9SX0VCWCwKICBVRF9SX0VTUCwJVURfUl9FQlAsCVVEX1JfRVNJ
LAlVRF9SX0VESSwKICBVRF9SX1I4RCwJVURfUl9SOUQsCVVEX1JfUjEwRCwJVURfUl9SMTFELAog
IFVEX1JfUjEyRCwJVURfUl9SMTNELAlVRF9SX1IxNEQsCVVEX1JfUjE1RCwKCQogIC8qIDY0IGJp
dCBHUFJzICovCiAgVURfUl9SQVgsCVVEX1JfUkNYLAlVRF9SX1JEWCwJVURfUl9SQlgsCiAgVURf
Ul9SU1AsCVVEX1JfUkJQLAlVRF9SX1JTSSwJVURfUl9SREksCiAgVURfUl9SOCwJVURfUl9SOSwJ
VURfUl9SMTAsCVVEX1JfUjExLAogIFVEX1JfUjEyLAlVRF9SX1IxMywJVURfUl9SMTQsCVVEX1Jf
UjE1LAoKICAvKiBzZWdtZW50IHJlZ2lzdGVycyAqLwogIFVEX1JfRVMsCVVEX1JfQ1MsCVVEX1Jf
U1MsCVVEX1JfRFMsCiAgVURfUl9GUywJVURfUl9HUywJCgogIC8qIGNvbnRyb2wgcmVnaXN0ZXJz
Ki8KICBVRF9SX0NSMCwJVURfUl9DUjEsCVVEX1JfQ1IyLAlVRF9SX0NSMywKICBVRF9SX0NSNCwJ
VURfUl9DUjUsCVVEX1JfQ1I2LAlVRF9SX0NSNywKICBVRF9SX0NSOCwJVURfUl9DUjksCVVEX1Jf
Q1IxMCwJVURfUl9DUjExLAogIFVEX1JfQ1IxMiwJVURfUl9DUjEzLAlVRF9SX0NSMTQsCVVEX1Jf
Q1IxNSwKCQogIC8qIGRlYnVnIHJlZ2lzdGVycyAqLwogIFVEX1JfRFIwLAlVRF9SX0RSMSwJVURf
Ul9EUjIsCVVEX1JfRFIzLAogIFVEX1JfRFI0LAlVRF9SX0RSNSwJVURfUl9EUjYsCVVEX1JfRFI3
LAogIFVEX1JfRFI4LAlVRF9SX0RSOSwJVURfUl9EUjEwLAlVRF9SX0RSMTEsCiAgVURfUl9EUjEy
LAlVRF9SX0RSMTMsCVVEX1JfRFIxNCwJVURfUl9EUjE1LAoKICAvKiBtbXggcmVnaXN0ZXJzICov
CiAgVURfUl9NTTAsCVVEX1JfTU0xLAlVRF9SX01NMiwJVURfUl9NTTMsCiAgVURfUl9NTTQsCVVE
X1JfTU01LAlVRF9SX01NNiwJVURfUl9NTTcsCgogIC8qIHg4NyByZWdpc3RlcnMgKi8KICBVRF9S
X1NUMCwJVURfUl9TVDEsCVVEX1JfU1QyLAlVRF9SX1NUMywKICBVRF9SX1NUNCwJVURfUl9TVDUs
CVVEX1JfU1Q2LAlVRF9SX1NUNywgCgogIC8qIGV4dGVuZGVkIG11bHRpbWVkaWEgcmVnaXN0ZXJz
ICovCiAgVURfUl9YTU0wLAlVRF9SX1hNTTEsCVVEX1JfWE1NMiwJVURfUl9YTU0zLAogIFVEX1Jf
WE1NNCwJVURfUl9YTU01LAlVRF9SX1hNTTYsCVVEX1JfWE1NNywKICBVRF9SX1hNTTgsCVVEX1Jf
WE1NOSwJVURfUl9YTU0xMCwJVURfUl9YTU0xMSwKICBVRF9SX1hNTTEyLAlVRF9SX1hNTTEzLAlV
RF9SX1hNTTE0LAlVRF9SX1hNTTE1LAoKICBVRF9SX1JJUCwKCiAgLyogT3BlcmFuZCBUeXBlcyAq
LwogIFVEX09QX1JFRywJVURfT1BfTUVNLAlVRF9PUF9QVFIsCVVEX09QX0lNTSwJCiAgVURfT1Bf
SklNTSwJVURfT1BfQ09OU1QKfTsKCi8qIC0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tCiAqIHN0cnVjdCB1
ZF9vcGVyYW5kIC0gRGlzYXNzZW1ibGVkIGluc3RydWN0aW9uIE9wZXJhbmQuCiAqIC0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tCiAqLwpzdHJ1Y3QgdWRfb3BlcmFuZCAKewogIGVudW0gdWRfdHlwZQkJdHlw
ZTsKICB1aW50OF90CQlzaXplOwogIHVuaW9uIHsKCWludDhfdAkJc2J5dGU7Cgl1aW50OF90CQl1
Ynl0ZTsKCWludDE2X3QJCXN3b3JkOwoJdWludDE2X3QJdXdvcmQ7CglpbnQzMl90CQlzZHdvcmQ7
Cgl1aW50MzJfdAl1ZHdvcmQ7CglpbnQ2NF90CQlzcXdvcmQ7Cgl1aW50NjRfdAl1cXdvcmQ7CgoJ
c3RydWN0IHsKCQl1aW50MTZfdCBzZWc7CgkJdWludDMyX3Qgb2ZmOwoJfSBwdHI7CiAgfSBsdmFs
OwoKICBlbnVtIHVkX3R5cGUJCWJhc2U7CiAgZW51bSB1ZF90eXBlCQlpbmRleDsKICB1aW50OF90
CQlvZmZzZXQ7CiAgdWludDhfdAkJc2NhbGU7CQp9OwoKLyogLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0K
ICogc3RydWN0IHVkIC0gVGhlIHVkaXM4NiBvYmplY3QuCiAqIC0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
CiAqLwpzdHJ1Y3QgdWQKewogIGludCAJCQkoKmlucF9ob29rKSAoc3RydWN0IHVkKik7CiAgdWlu
dDhfdAkJaW5wX2N1cnI7CiAgdWludDhfdAkJaW5wX2ZpbGw7CiAgdWludDhfdAkJaW5wX2N0cjsK
ICB1aW50OF90KgkJaW5wX2J1ZmY7CiAgdWludDhfdCoJCWlucF9idWZmX2VuZDsKICB1aW50OF90
CQlpbnBfZW5kOwogIHZvaWQJCQkoKnRyYW5zbGF0b3IpKHN0cnVjdCB1ZCopOwogIHVpbnQ2NF90
CQlpbnNuX29mZnNldDsKICBjaGFyCQkJaW5zbl9oZXhjb2RlWzMyXTsKICBjaGFyCQkJaW5zbl9i
dWZmZXJbNjRdOwogIHVuc2lnbmVkIGludAkJaW5zbl9maWxsOwogIHVpbnQ4X3QJCWRpc19tb2Rl
OwogIHVpbnQ2NF90CQlwYzsKICB1aW50OF90CQl2ZW5kb3I7CiAgc3RydWN0IG1hcF9lbnRyeSoJ
bWFwZW47CiAgZW51bSB1ZF9tbmVtb25pY19jb2RlCW1uZW1vbmljOwogIHN0cnVjdCB1ZF9vcGVy
YW5kCW9wZXJhbmRbM107CiAgdWludDhfdAkJZXJyb3I7CiAgdWludDhfdAkgCXBmeF9yZXg7CiAg
dWludDhfdCAJCXBmeF9zZWc7CiAgdWludDhfdCAJCXBmeF9vcHI7CiAgdWludDhfdCAJCXBmeF9h
ZHI7CiAgdWludDhfdCAJCXBmeF9sb2NrOwogIHVpbnQ4X3QgCQlwZnhfcmVwOwogIHVpbnQ4X3Qg
CQlwZnhfcmVwZTsKICB1aW50OF90IAkJcGZ4X3JlcG5lOwogIHVpbnQ4X3QgCQlwZnhfaW5zbjsK
ICB1aW50OF90CQlkZWZhdWx0NjQ7CiAgdWludDhfdAkJb3ByX21vZGU7CiAgdWludDhfdAkJYWRy
X21vZGU7CiAgdWludDhfdAkJYnJfZmFyOwogIHVpbnQ4X3QJCWJyX25lYXI7CiAgdWludDhfdAkJ
aW1wbGljaXRfYWRkcjsKICB1aW50OF90CQljMTsKICB1aW50OF90CQljMjsKICB1aW50OF90CQlj
MzsKICB1aW50OF90IAkJaW5wX2NhY2hlWzI1Nl07CiAgdWludDhfdAkJaW5wX3Nlc3NbNjRdOwog
IHN0cnVjdCB1ZF9pdGFiX2VudHJ5ICogaXRhYl9lbnRyeTsKfTsKCi8qIC0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tCiAqIFR5cGUtZGVmaW5pdGlvbnMKICogLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0KICovCnR5
cGVkZWYgZW51bSB1ZF90eXBlIAkJdWRfdHlwZV90Owp0eXBlZGVmIGVudW0gdWRfbW5lbW9uaWNf
Y29kZQl1ZF9tbmVtb25pY19jb2RlX3Q7Cgp0eXBlZGVmIHN0cnVjdCB1ZCAJCXVkX3Q7CnR5cGVk
ZWYgc3RydWN0IHVkX29wZXJhbmQgCXVkX29wZXJhbmRfdDsKCiNkZWZpbmUgVURfU1lOX0lOVEVM
CQl1ZF90cmFuc2xhdGVfaW50ZWwKI2RlZmluZSBVRF9TWU5fQVRUCQl1ZF90cmFuc2xhdGVfYXR0
CiNkZWZpbmUgVURfRU9JCQkJLTEKI2RlZmluZSBVRF9JTlBfQ0FDSEVfU1oJCTMyCiNkZWZpbmUg
VURfVkVORE9SX0FNRAkJMAojZGVmaW5lIFVEX1ZFTkRPUl9JTlRFTAkJMQoKI2RlZmluZSBiYWls
X291dCh1ZCxlcnJvcl9jb2RlKSBsb25nam1wKCAodWQpLT5iYWlsb3V0LCBlcnJvcl9jb2RlICkK
I2RlZmluZSB0cnlfZGVjb2RlKHVkKSBpZiAoIHNldGptcCggKHVkKS0+YmFpbG91dCApID09IDAg
KQojZGVmaW5lIGNhdGNoX2Vycm9yKCkgZWxzZQoKI2VuZGlmCgAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABrZGIveDg2L3VkaXM4Ni0xLjcv
TElDRU5TRQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAMDAwMDY2NAAwMDAyNzU2ADAwMDI3NTYAMDAwMDAwMDI1
MTEAMTE3NjU0NjU1NTYAMDE0NjUyACAwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAHVzdGFyICAAbXJhdGhvcgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABtcmF0aG9y
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAENvcHlyaWdodCAoYykgMjAwMiwg
MjAwMywgMjAwNCwgMjAwNSwgMjAwNiA8dml2ZWtAc2lnOS5jb20+CkFsbCByaWdodHMgcmVzZXJ2
ZWQuCgpSZWRpc3RyaWJ1dGlvbiBhbmQgdXNlIGluIHNvdXJjZSBhbmQgYmluYXJ5IGZvcm1zLCB3
aXRoIG9yIHdpdGhvdXQgbW9kaWZpY2F0aW9uLCAKYXJlIHBlcm1pdHRlZCBwcm92aWRlZCB0aGF0
IHRoZSBmb2xsb3dpbmcgY29uZGl0aW9ucyBhcmUgbWV0OgoKICAgICogUmVkaXN0cmlidXRpb25z
IG9mIHNvdXJjZSBjb2RlIG11c3QgcmV0YWluIHRoZSBhYm92ZSBjb3B5cmlnaHQgbm90aWNlLCAK
ICAgICAgdGhpcyBsaXN0IG9mIGNvbmRpdGlvbnMgYW5kIHRoZSBmb2xsb3dpbmcgZGlzY2xhaW1l
ci4KICAgICogUmVkaXN0cmlidXRpb25zIGluIGJpbmFyeSBmb3JtIG11c3QgcmVwcm9kdWNlIHRo
ZSBhYm92ZSBjb3B5cmlnaHQgbm90aWNlLCAKICAgICAgdGhpcyBsaXN0IG9mIGNvbmRpdGlvbnMg
YW5kIHRoZSBmb2xsb3dpbmcgZGlzY2xhaW1lciBpbiB0aGUgZG9jdW1lbnRhdGlvbiAKICAgICAg
YW5kL29yIG90aGVyIG1hdGVyaWFscyBwcm92aWRlZCB3aXRoIHRoZSBkaXN0cmlidXRpb24uCgpU
SElTIFNPRlRXQVJFIElTIFBST1ZJREVEIEJZIFRIRSBDT1BZUklHSFQgSE9MREVSUyBBTkQgQ09O
VFJJQlVUT1JTICJBUyBJUyIgQU5EIApBTlkgRVhQUkVTUyBPUiBJTVBMSUVEIFdBUlJBTlRJRVMs
IElOQ0xVRElORywgQlVUIE5PVCBMSU1JVEVEIFRPLCBUSEUgSU1QTElFRCAKV0FSUkFOVElFUyBP
RiBNRVJDSEFOVEFCSUxJVFkgQU5EIEZJVE5FU1MgRk9SIEEgUEFSVElDVUxBUiBQVVJQT1NFIEFS
RSAKRElTQ0xBSU1FRC4gSU4gTk8gRVZFTlQgU0hBTEwgVEhFIENPUFlSSUdIVCBPV05FUiBPUiBD
T05UUklCVVRPUlMgQkUgTElBQkxFIEZPUiAKQU5ZIERJUkVDVCwgSU5ESVJFQ1QsIElOQ0lERU5U
QUwsIFNQRUNJQUwsIEVYRU1QTEFSWSwgT1IgQ09OU0VRVUVOVElBTCBEQU1BR0VTIAooSU5DTFVE
SU5HLCBCVVQgTk9UIExJTUlURUQgVE8sIFBST0NVUkVNRU5UIE9GIFNVQlNUSVRVVEUgR09PRFMg
T1IgU0VSVklDRVM7IApMT1NTIE9GIFVTRSwgREFUQSwgT1IgUFJPRklUUzsgT1IgQlVTSU5FU1Mg
SU5URVJSVVBUSU9OKSBIT1dFVkVSIENBVVNFRCBBTkQgT04gCkFOWSBUSEVPUlkgT0YgTElBQklM
SVRZLCBXSEVUSEVSIElOIENPTlRSQUNULCBTVFJJQ1QgTElBQklMSVRZLCBPUiBUT1JUIAooSU5D
TFVESU5HIE5FR0xJR0VOQ0UgT1IgT1RIRVJXSVNFKSBBUklTSU5HIElOIEFOWSBXQVkgT1VUIE9G
IFRIRSBVU0UgT0YgVEhJUyAKU09GVFdBUkUsIEVWRU4gSUYgQURWSVNFRCBPRiBUSEUgUE9TU0lC
SUxJVFkgT0YgU1VDSCBEQU1BR0UuCgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAGtkYi94ODYvdWRpczg2LTEuNy9pbnB1
dC5jAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAwMDAwNjY0ADAwMDI3NTYAMDAwMjc1NgAwMDAwMDAxMzc0MgAx
MTc2NTQ2NTU1NgAwMTUxNjAAIDAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAdXN0YXIgIABtcmF0aG9yAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAG1yYXRob3IAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAALyogLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0K
ICogaW5wdXQuYwogKgogKiBDb3B5cmlnaHQgKGMpIDIwMDQsIDIwMDUsIDIwMDYsIFZpdmVrIE1v
aGFuIDx2aXZla0BzaWc5LmNvbT4KICogQWxsIHJpZ2h0cyByZXNlcnZlZC4gU2VlIExJQ0VOU0UK
ICogLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0KICovCiNpbmNsdWRlICJleHRlcm4uaCIKI2luY2x1ZGUg
InR5cGVzLmgiCiNpbmNsdWRlICJpbnB1dC5oIgoKLyogLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0KICog
aW5wX2J1ZmZfaG9vaygpIC0gSG9vayBmb3IgYnVmZmVyZWQgaW5wdXRzLgogKiAtLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLQogKi8Kc3RhdGljIGludCAKaW5wX2J1ZmZfaG9vayhzdHJ1Y3QgdWQqIHUpCnsK
ICBpZiAodS0+aW5wX2J1ZmYgPCB1LT5pbnBfYnVmZl9lbmQpCglyZXR1cm4gKnUtPmlucF9idWZm
Kys7CiAgZWxzZQlyZXR1cm4gLTE7Cn0KCiNpZm5kZWYgX19VRF9TVEFOREFMT05FX18KLyogLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0KICogaW5wX2ZpbGVfaG9vaygpIC0gSG9vayBmb3IgRklMRSBpbnB1
dHMuCiAqIC0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tCiAqLwpzdGF0aWMgaW50IAppbnBfZmlsZV9ob29r
KHN0cnVjdCB1ZCogdSkKewogIHJldHVybiBmZ2V0Yyh1LT5pbnBfZmlsZSk7Cn0KI2VuZGlmIC8q
IF9fVURfU1RBTkRBTE9ORV9fKi8KCi8qID09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09CiAqIHVkX2lucF9z
ZXRfaG9vaygpIC0gU2V0cyBpbnB1dCBob29rLgogKiA9PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PQogKi8K
ZXh0ZXJuIHZvaWQgCnVkX3NldF9pbnB1dF9ob29rKHJlZ2lzdGVyIHN0cnVjdCB1ZCogdSwgaW50
ICgqaG9vaykoc3RydWN0IHVkKikpCnsKICB1LT5pbnBfaG9vayA9IGhvb2s7CiAgaW5wX2luaXQo
dSk7Cn0KCi8qID09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09CiAqIHVkX2lucF9zZXRfYnVmZmVyKCkgLSBT
ZXQgYnVmZmVyIGFzIGlucHV0LgogKiA9PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PQogKi8KZXh0ZXJuIHZv
aWQgCnVkX3NldF9pbnB1dF9idWZmZXIocmVnaXN0ZXIgc3RydWN0IHVkKiB1LCB1aW50OF90KiBi
dWYsIHNpemVfdCBsZW4pCnsKICB1LT5pbnBfaG9vayA9IGlucF9idWZmX2hvb2s7CiAgdS0+aW5w
X2J1ZmYgPSBidWY7CiAgdS0+aW5wX2J1ZmZfZW5kID0gYnVmICsgbGVuOwogIGlucF9pbml0KHUp
Owp9CgojaWZuZGVmIF9fVURfU1RBTkRBTE9ORV9fCi8qID09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09CiAq
IHVkX2lucHV0X3NldF9maWxlKCkgLSBTZXQgYnVmZmVyIGFzIGlucHV0LgogKiA9PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PQogKi8KZXh0ZXJuIHZvaWQgCnVkX3NldF9pbnB1dF9maWxlKHJlZ2lzdGVyIHN0
cnVjdCB1ZCogdSwgRklMRSogZikKewogIHUtPmlucF9ob29rID0gaW5wX2ZpbGVfaG9vazsKICB1
LT5pbnBfZmlsZSA9IGY7CiAgaW5wX2luaXQodSk7Cn0KI2VuZGlmIC8qIF9fVURfU1RBTkRBTE9O
RV9fICovCgovKiA9PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PQogKiB1ZF9pbnB1dF9za2lwKCkgLSBTa2lw
IG4gaW5wdXQgYnl0ZXMuCiAqID09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09CiAqLwpleHRlcm4gdm9pZCAK
dWRfaW5wdXRfc2tpcChzdHJ1Y3QgdWQqIHUsIHNpemVfdCBuKQp7CiAgd2hpbGUgKG4tLSkgewoJ
dS0+aW5wX2hvb2sodSk7CiAgfQp9CgovKiA9PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PQogKiB1ZF9pbnB1
dF9lbmQoKSAtIFRlc3QgZm9yIGVuZCBvZiBpbnB1dC4KICogPT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT0K
ICovCmV4dGVybiBpbnQgCnVkX2lucHV0X2VuZChzdHJ1Y3QgdWQqIHUpCnsKICByZXR1cm4gKHUt
PmlucF9jdXJyID09IHUtPmlucF9maWxsKSAmJiB1LT5pbnBfZW5kOwp9CgovKiAtLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLQogKiBpbnBfbmV4dCgpIC0gTG9hZHMgYW5kIHJldHVybnMgdGhlIG5leHQgYnl0
ZSBmcm9tIGlucHV0LgogKgogKiBpbnBfY3VyciBhbmQgaW5wX2ZpbGwgYXJlIHBvaW50ZXJzIHRv
IHRoZSBjYWNoZS4gVGhlIHByb2dyYW0gaXMgd3JpdHRlbiBiYXNlZAogKiBvbiB0aGUgcHJvcGVy
dHkgdGhhdCB0aGV5IGFyZSA4LWJpdHMgaW4gc2l6ZSwgYW5kIHdpbGwgZXZlbnR1YWxseSB3cmFw
IGFyb3VuZAogKiBmb3JtaW5nIGEgY2lyY3VsYXIgYnVmZmVyLiBTbywgdGhlIHNpemUgb2YgdGhl
IGNhY2hlIGlzIDI1NiBpbiBzaXplLCBraW5kIG9mCiAqIHVubmVjZXNzYXJ5IHlldCBvcHRpbWl6
ZWQuCiAqCiAqIEEgYnVmZmVyIGlucF9zZXNzIHN0b3JlcyB0aGUgYnl0ZXMgZGlzYXNzZW1ibGVk
IGZvciBhIHNpbmdsZSBzZXNzaW9uLgogKiAtLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLQogKi8KZXh0ZXJu
IHVpbnQ4X3QgaW5wX25leHQoc3RydWN0IHVkKiB1KSAKewogIGludCBjID0gLTE7CiAgLyogaWYg
Y3VycmVudCBwb2ludGVyIGlzIG5vdCB1cHRvIHRoZSBmaWxsIHBvaW50IGluIHRoZSAKICAgKiBp
bnB1dCBjYWNoZS4KICAgKi8KICBpZiAoIHUtPmlucF9jdXJyICE9IHUtPmlucF9maWxsICkgewoJ
YyA9IHUtPmlucF9jYWNoZVsgKyt1LT5pbnBfY3VyciBdOwogIC8qIGlmICFlbmQtb2YtaW5wdXQs
IGNhbGwgdGhlIGlucHV0IGhvb2sgYW5kIGdldCBhIGJ5dGUgKi8KICB9IGVsc2UgaWYgKCB1LT5p
bnBfZW5kIHx8ICggYyA9IHUtPmlucF9ob29rKCB1ICkgKSA9PSAtMSApIHsKCS8qIGVuZC1vZi1p
bnB1dCwgbWFyayBpdCBhcyBhbiBlcnJvciwgc2luY2UgdGhlIGRlY29kZXIsCgkgKiBleHBlY3Rl
ZCBhIGJ5dGUgbW9yZS4KCSAqLwoJdS0+ZXJyb3IgPSAxOwoJLyogZmxhZyBlbmQgb2YgaW5wdXQg
Ki8KCXUtPmlucF9lbmQgPSAxOwoJcmV0dXJuIDA7CiAgfSBlbHNlIHsKCS8qIGluY3JlbWVudCBw
b2ludGVycywgd2UgaGF2ZSBhIG5ldyBieXRlLiAgKi8KCXUtPmlucF9jdXJyID0gKyt1LT5pbnBf
ZmlsbDsKCS8qIGFkZCB0aGUgYnl0ZSB0byB0aGUgY2FjaGUgKi8KCXUtPmlucF9jYWNoZVsgdS0+
aW5wX2ZpbGwgXSA9IGM7CiAgfQogIC8qIHJlY29yZCBieXRlcyBpbnB1dCBwZXIgZGVjb2RlLXNl
c3Npb24uICovCiAgdS0+aW5wX3Nlc3NbIHUtPmlucF9jdHIrKyBdID0gYzsKICAvKiByZXR1cm4g
Ynl0ZSAqLwogIHJldHVybiAoIHVpbnQ4X3QgKSBjOwp9CgovKiAtLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LQogKiBpbnBfYmFjaygpIC0gTW92ZSBiYWNrIGEgc2luZ2xlIGJ5dGUgaW4gdGhlIHN0cmVhbS4K
ICogLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0KICovCmV4dGVybiB2b2lkCmlucF9iYWNrKHN0cnVjdCB1
ZCogdSkgCnsKICBpZiAoIHUtPmlucF9jdHIgPiAwICkgewoJLS11LT5pbnBfY3VycjsKCS0tdS0+
aW5wX2N0cjsKICB9Cn0KCi8qIC0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tCiAqIGlucF9wZWVrKCkgLSBQ
ZWVrIGludG8gdGhlIG5leHQgYnl0ZSBpbiBzb3VyY2UuIAogKiAtLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LQogKi8KZXh0ZXJuIHVpbnQ4X3QKaW5wX3BlZWsoc3RydWN0IHVkKiB1KSAKewogIHVpbnQ4X3Qg
ciA9IGlucF9uZXh0KHUpOwogIGlmICggIXUtPmVycm9yICkgaW5wX2JhY2sodSk7IC8qIERvbid0
IGJhY2t1cCBpZiB0aGVyZSB3YXMgYW4gZXJyb3IgKi8KICByZXR1cm4gcjsKfQoKLyogLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0KICogaW5wX21vdmUoKSAtIE1vdmUgYWhlYWQgbiBpbnB1dCBieXRlcy4K
ICogLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0KICovCmV4dGVybiB2b2lkCmlucF9tb3ZlKHN0cnVjdCB1
ZCogdSwgc2l6ZV90IG4pIAp7CiAgd2hpbGUgKG4tLSkKCWlucF9uZXh0KHUpOwp9CgovKi0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLQogKiAgaW5wX3VpbnROKCkgLSByZXR1cm4gdWludE4gZnJvbSBzb3Vy
Y2UuCiAqLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tCiAqLwpleHRlcm4gdWludDhfdCAKaW5wX3VpbnQ4
KHN0cnVjdCB1ZCogdSkKewogIHJldHVybiBpbnBfbmV4dCh1KTsKfQoKZXh0ZXJuIHVpbnQxNl90
IAppbnBfdWludDE2KHN0cnVjdCB1ZCogdSkKewogIHVpbnQxNl90IHIsIHJldDsKCiAgcmV0ID0g
aW5wX25leHQodSk7CiAgciA9IGlucF9uZXh0KHUpOwogIHJldHVybiByZXQgfCAociA8PCA4KTsK
fQoKZXh0ZXJuIHVpbnQzMl90IAppbnBfdWludDMyKHN0cnVjdCB1ZCogdSkKewogIHVpbnQzMl90
IHIsIHJldDsKCiAgcmV0ID0gaW5wX25leHQodSk7CiAgciA9IGlucF9uZXh0KHUpOwogIHJldCA9
IHJldCB8IChyIDw8IDgpOwogIHIgPSBpbnBfbmV4dCh1KTsKICByZXQgPSByZXQgfCAociA8PCAx
Nik7CiAgciA9IGlucF9uZXh0KHUpOwogIHJldHVybiByZXQgfCAociA8PCAyNCk7Cn0KCmV4dGVy
biB1aW50NjRfdCAKaW5wX3VpbnQ2NChzdHJ1Y3QgdWQqIHUpCnsKICB1aW50NjRfdCByLCByZXQ7
CgogIHJldCA9IGlucF9uZXh0KHUpOwogIHIgPSBpbnBfbmV4dCh1KTsKICByZXQgPSByZXQgfCAo
ciA8PCA4KTsKICByID0gaW5wX25leHQodSk7CiAgcmV0ID0gcmV0IHwgKHIgPDwgMTYpOwogIHIg
PSBpbnBfbmV4dCh1KTsKICByZXQgPSByZXQgfCAociA8PCAyNCk7CiAgciA9IGlucF9uZXh0KHUp
OwogIHJldCA9IHJldCB8IChyIDw8IDMyKTsKICByID0gaW5wX25leHQodSk7CiAgcmV0ID0gcmV0
IHwgKHIgPDwgNDApOwogIHIgPSBpbnBfbmV4dCh1KTsKICByZXQgPSByZXQgfCAociA8PCA0OCk7
CiAgciA9IGlucF9uZXh0KHUpOwogIHJldHVybiByZXQgfCAociA8PCA1Nik7Cn0KAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAa2RiL3g4Ni91ZGlzODYtMS43L2RlY29kZS5oAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAADAwMDA2NjQAMDAwMjc1NgAwMDAyNzU2ADAwMDAwMDIxNTI2ADExNzY1NDY1NTU2ADAx
NTI1MAAgMAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAB1c3RhciAg
AG1yYXRob3IAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAbXJhdGhvcgAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAjaWZuZGVmIFVEX0RFQ09ERV9ICiNkZWZpbmUgVURfREVDT0RF
X0gKCiNkZWZpbmUgTUFYX0lOU05fTEVOR1RIIDE1CgovKiByZWdpc3RlciBjbGFzc2VzICovCiNk
ZWZpbmUgVF9OT05FICAwCiNkZWZpbmUgVF9HUFIgICAxCiNkZWZpbmUgVF9NTVggICAyCiNkZWZp
bmUgVF9DUkcgICAzCiNkZWZpbmUgVF9EQkcgICA0CiNkZWZpbmUgVF9TRUcgICA1CiNkZWZpbmUg
VF9YTU0gICA2CgovKiBpdGFiIHByZWZpeCBiaXRzICovCiNkZWZpbmUgUF9ub25lICAgICAgICAg
ICggMCApCiNkZWZpbmUgUF9jMSAgICAgICAgICAgICggMSA8PCAwICkKI2RlZmluZSBQX0MxKG4p
ICAgICAgICAgKCAoIG4gPj4gMCApICYgMSApCiNkZWZpbmUgUF9yZXhiICAgICAgICAgICggMSA8
PCAxICkKI2RlZmluZSBQX1JFWEIobikgICAgICAgKCAoIG4gPj4gMSApICYgMSApCiNkZWZpbmUg
UF9kZXBNICAgICAgICAgICggMSA8PCAyICkKI2RlZmluZSBQX0RFUE0obikgICAgICAgKCAoIG4g
Pj4gMiApICYgMSApCiNkZWZpbmUgUF9jMyAgICAgICAgICAgICggMSA8PCAzICkKI2RlZmluZSBQ
X0MzKG4pICAgICAgICAgKCAoIG4gPj4gMyApICYgMSApCiNkZWZpbmUgUF9pbnY2NCAgICAgICAg
ICggMSA8PCA0ICkKI2RlZmluZSBQX0lOVjY0KG4pICAgICAgKCAoIG4gPj4gNCApICYgMSApCiNk
ZWZpbmUgUF9yZXh3ICAgICAgICAgICggMSA8PCA1ICkKI2RlZmluZSBQX1JFWFcobikgICAgICAg
KCAoIG4gPj4gNSApICYgMSApCiNkZWZpbmUgUF9jMiAgICAgICAgICAgICggMSA8PCA2ICkKI2Rl
ZmluZSBQX0MyKG4pICAgICAgICAgKCAoIG4gPj4gNiApICYgMSApCiNkZWZpbmUgUF9kZWY2NCAg
ICAgICAgICggMSA8PCA3ICkKI2RlZmluZSBQX0RFRjY0KG4pICAgICAgKCAoIG4gPj4gNyApICYg
MSApCiNkZWZpbmUgUF9yZXhyICAgICAgICAgICggMSA8PCA4ICkKI2RlZmluZSBQX1JFWFIobikg
ICAgICAgKCAoIG4gPj4gOCApICYgMSApCiNkZWZpbmUgUF9vc28gICAgICAgICAgICggMSA8PCA5
ICkKI2RlZmluZSBQX09TTyhuKSAgICAgICAgKCAoIG4gPj4gOSApICYgMSApCiNkZWZpbmUgUF9h
c28gICAgICAgICAgICggMSA8PCAxMCApCiNkZWZpbmUgUF9BU08obikgICAgICAgICggKCBuID4+
IDEwICkgJiAxICkKI2RlZmluZSBQX3JleHggICAgICAgICAgKCAxIDw8IDExICkKI2RlZmluZSBQ
X1JFWFgobikgICAgICAgKCAoIG4gPj4gMTEgKSAmIDEgKQojZGVmaW5lIFBfSW1wQWRkciAgICAg
ICAoIDEgPDwgMTIgKQojZGVmaW5lIFBfSU1QQUREUihuKSAgICAoICggbiA+PiAxMiApICYgMSAp
CgovKiByZXggcHJlZml4IGJpdHMgKi8KI2RlZmluZSBSRVhfVyhyKSAgICAgICAgKCAoIDB4RiAm
ICggciApICkgID4+IDMgKQojZGVmaW5lIFJFWF9SKHIpICAgICAgICAoICggMHg3ICYgKCByICkg
KSAgPj4gMiApCiNkZWZpbmUgUkVYX1gocikgICAgICAgICggKCAweDMgJiAoIHIgKSApICA+PiAx
ICkKI2RlZmluZSBSRVhfQihyKSAgICAgICAgKCAoIDB4MSAmICggciApICkgID4+IDAgKQojZGVm
aW5lIFJFWF9QRlhfTUFTSyhuKSAoICggUF9SRVhXKG4pIDw8IDMgKSB8IFwKICAgICAgICAgICAg
ICAgICAgICAgICAgICAoIFBfUkVYUihuKSA8PCAyICkgfCBcCiAgICAgICAgICAgICAgICAgICAg
ICAgICAgKCBQX1JFWFgobikgPDwgMSApIHwgXAogICAgICAgICAgICAgICAgICAgICAgICAgICgg
UF9SRVhCKG4pIDw8IDAgKSApCgovKiBzY2FibGUtaW5kZXgtYmFzZSBiaXRzICovCiNkZWZpbmUg
U0lCX1MoYikgICAgICAgICggKCBiICkgPj4gNiApCiNkZWZpbmUgU0lCX0koYikgICAgICAgICgg
KCAoIGIgKSA+PiAzICkgJiA3ICkKI2RlZmluZSBTSUJfQihiKSAgICAgICAgKCAoIGIgKSAmIDcg
KQoKLyogbW9kcm0gYml0cyAqLwojZGVmaW5lIE1PRFJNX1JFRyhiKSAgICAoICggKCBiICkgPj4g
MyApICYgNyApCiNkZWZpbmUgTU9EUk1fTk5OKGIpICAgICggKCAoIGIgKSA+PiAzICkgJiA3ICkK
I2RlZmluZSBNT0RSTV9NT0QoYikgICAgKCAoICggYiApID4+IDYgKSAmIDMgKQojZGVmaW5lIE1P
RFJNX1JNKGIpICAgICAoICggYiApICYgNyApCgovKiBvcGVyYW5kIHR5cGUgY29uc3RhbnRzIC0t
IG9yZGVyIGlzIGltcG9ydGFudCEgKi8KCmVudW0gdWRfb3BlcmFuZF9jb2RlIHsKICAgIE9QX05P
TkUsCgogICAgT1BfQSwgICAgICBPUF9FLCAgICAgIE9QX00sICAgICAgIE9QX0csICAgICAgIAog
ICAgT1BfSSwKCiAgICBPUF9BTCwgICAgIE9QX0NMLCAgICAgT1BfREwsICAgICAgT1BfQkwsCiAg
ICBPUF9BSCwgICAgIE9QX0NILCAgICAgT1BfREgsICAgICAgT1BfQkgsCgogICAgT1BfQUxyOGIs
ICBPUF9DTHI5YiwgIE9QX0RMcjEwYiwgIE9QX0JMcjExYiwKICAgIE9QX0FIcjEyYiwgT1BfQ0hy
MTNiLCBPUF9ESHIxNGIsICBPUF9CSHIxNWIsCgogICAgT1BfQVgsICAgICBPUF9DWCwgICAgIE9Q
X0RYLCAgICAgIE9QX0JYLAogICAgT1BfU0ksICAgICBPUF9ESSwgICAgIE9QX1NQLCAgICAgIE9Q
X0JQLAoKICAgIE9QX3JBWCwgICAgT1BfckNYLCAgICBPUF9yRFgsICAgICBPUF9yQlgsICAKICAg
IE9QX3JTUCwgICAgT1BfckJQLCAgICBPUF9yU0ksICAgICBPUF9yREksCgogICAgT1BfckFYcjgs
ICBPUF9yQ1hyOSwgIE9QX3JEWHIxMCwgIE9QX3JCWHIxMSwgIAogICAgT1BfclNQcjEyLCBPUF9y
QlByMTMsIE9QX3JTSXIxNCwgIE9QX3JESXIxNSwKCiAgICBPUF9lQVgsICAgIE9QX2VDWCwgICAg
T1BfZURYLCAgICAgT1BfZUJYLAogICAgT1BfZVNQLCAgICBPUF9lQlAsICAgIE9QX2VTSSwgICAg
IE9QX2VESSwKCiAgICBPUF9FUywgICAgIE9QX0NTLCAgICAgT1BfU1MsICAgICAgT1BfRFMsICAK
ICAgIE9QX0ZTLCAgICAgT1BfR1MsCgogICAgT1BfU1QwLCAgICBPUF9TVDEsICAgIE9QX1NUMiwg
ICAgIE9QX1NUMywKICAgIE9QX1NUNCwgICAgT1BfU1Q1LCAgICBPUF9TVDYsICAgICBPUF9TVDcs
CgogICAgT1BfSiwgICAgICBPUF9TLCAgICAgIE9QX08sICAgICAgICAgIAogICAgT1BfSTEsICAg
ICBPUF9JMywgCgogICAgT1BfViwgICAgICBPUF9XLCAgICAgIE9QX1EsICAgICAgIE9QX1AsIAoK
ICAgIE9QX1IsICAgICAgT1BfQywgIE9QX0QsICAgICAgIE9QX1ZSLCAgT1BfUFIKfTsKCgovKiBv
cGVyYW5kIHNpemUgY29uc3RhbnRzICovCgplbnVtIHVkX29wZXJhbmRfc2l6ZSB7CiAgICBTWl9O
QSAgPSAwLAogICAgU1pfWiAgID0gMSwKICAgIFNaX1YgICA9IDIsCiAgICBTWl9QICAgPSAzLAog
ICAgU1pfV1AgID0gNCwKICAgIFNaX0RQICA9IDUsCiAgICBTWl9NRFEgPSA2LAogICAgU1pfUkRR
ID0gNywKCiAgICAvKiB0aGUgZm9sbG93aW5nIHZhbHVlcyBhcmUgdXNlZCBhcyBpcywKICAgICAq
IGFuZCB0aHVzIGhhcmQtY29kZWQuIGNoYW5naW5nIHRoZW0gCiAgICAgKiB3aWxsIGJyZWFrIGlu
dGVybmFscyAKICAgICAqLwogICAgU1pfQiAgID0gOCwKICAgIFNaX1cgICA9IDE2LAogICAgU1pf
RCAgID0gMzIsCiAgICBTWl9RICAgPSA2NCwKICAgIFNaX1QgICA9IDgwLAp9OwoKLyogaXRhYiBl
bnRyeSBvcGVyYW5kIGRlZmluaXRpb25zICovCgojZGVmaW5lIE9fclNQcjEyICB7IE9QX3JTUHIx
MiwgICBTWl9OQSAgICB9CiNkZWZpbmUgT19CTCAgICAgIHsgT1BfQkwsICAgICAgIFNaX05BICAg
IH0KI2RlZmluZSBPX0JIICAgICAgeyBPUF9CSCwgICAgICAgU1pfTkEgICAgfQojZGVmaW5lIE9f
QlAgICAgICB7IE9QX0JQLCAgICAgICBTWl9OQSAgICB9CiNkZWZpbmUgT19BSHIxMmIgIHsgT1Bf
QUhyMTJiLCAgIFNaX05BICAgIH0KI2RlZmluZSBPX0JYICAgICAgeyBPUF9CWCwgICAgICAgU1pf
TkEgICAgfQojZGVmaW5lIE9fSnogICAgICB7IE9QX0osICAgICAgICBTWl9aICAgICB9CiNkZWZp
bmUgT19KdiAgICAgIHsgT1BfSiwgICAgICAgIFNaX1YgICAgIH0KI2RlZmluZSBPX0piICAgICAg
eyBPUF9KLCAgICAgICAgU1pfQiAgICAgfQojZGVmaW5lIE9fclNJcjE0ICB7IE9QX3JTSXIxNCwg
ICBTWl9OQSAgICB9CiNkZWZpbmUgT19HUyAgICAgIHsgT1BfR1MsICAgICAgIFNaX05BICAgIH0K
I2RlZmluZSBPX0QgICAgICAgeyBPUF9ELCAgICAgICAgU1pfTkEgICAgfQojZGVmaW5lIE9fckJQ
cjEzICB7IE9QX3JCUHIxMywgICBTWl9OQSAgICB9CiNkZWZpbmUgT19PYiAgICAgIHsgT1BfTywg
ICAgICAgIFNaX0IgICAgIH0KI2RlZmluZSBPX1AgICAgICAgeyBPUF9QLCAgICAgICAgU1pfTkEg
ICAgfQojZGVmaW5lIE9fT3cgICAgICB7IE9QX08sICAgICAgICBTWl9XICAgICB9CiNkZWZpbmUg
T19PdiAgICAgIHsgT1BfTywgICAgICAgIFNaX1YgICAgIH0KI2RlZmluZSBPX0d3ICAgICAgeyBP
UF9HLCAgICAgICAgU1pfVyAgICAgfQojZGVmaW5lIE9fR3YgICAgICB7IE9QX0csICAgICAgICBT
Wl9WICAgICB9CiNkZWZpbmUgT19yRFggICAgIHsgT1BfckRYLCAgICAgIFNaX05BICAgIH0KI2Rl
ZmluZSBPX0d4ICAgICAgeyBPUF9HLCAgICAgICAgU1pfTURRICAgfQojZGVmaW5lIE9fR2QgICAg
ICB7IE9QX0csICAgICAgICBTWl9EICAgICB9CiNkZWZpbmUgT19HYiAgICAgIHsgT1BfRywgICAg
ICAgIFNaX0IgICAgIH0KI2RlZmluZSBPX3JCWHIxMSAgeyBPUF9yQlhyMTEsICAgU1pfTkEgICAg
fQojZGVmaW5lIE9fckRJICAgICB7IE9QX3JESSwgICAgICBTWl9OQSAgICB9CiNkZWZpbmUgT19y
U0kgICAgIHsgT1BfclNJLCAgICAgIFNaX05BICAgIH0KI2RlZmluZSBPX0FMcjhiICAgeyBPUF9B
THI4YiwgICAgU1pfTkEgICAgfQojZGVmaW5lIE9fZURJICAgICB7IE9QX2VESSwgICAgICBTWl9O
QSAgICB9CiNkZWZpbmUgT19HeiAgICAgIHsgT1BfRywgICAgICAgIFNaX1ogICAgIH0KI2RlZmlu
ZSBPX2VEWCAgICAgeyBPUF9lRFgsICAgICAgU1pfTkEgICAgfQojZGVmaW5lIE9fREhyMTRiICB7
IE9QX0RIcjE0YiwgICBTWl9OQSAgICB9CiNkZWZpbmUgT19yU1AgICAgIHsgT1BfclNQLCAgICAg
IFNaX05BICAgIH0KI2RlZmluZSBPX1BSICAgICAgeyBPUF9QUiwgICAgICAgU1pfTkEgICAgfQoj
ZGVmaW5lIE9fTk9ORSAgICB7IE9QX05PTkUsICAgICBTWl9OQSAgICB9CiNkZWZpbmUgT19yQ1gg
ICAgIHsgT1BfckNYLCAgICAgIFNaX05BICAgIH0KI2RlZmluZSBPX2pXUCAgICAgeyBPUF9KLCAg
ICAgICAgU1pfV1AgICAgfQojZGVmaW5lIE9fckRYcjEwICB7IE9QX3JEWHIxMCwgICBTWl9OQSAg
ICB9CiNkZWZpbmUgT19NZCAgICAgIHsgT1BfTSwgICAgICAgIFNaX0QgICAgIH0KI2RlZmluZSBP
X0MgICAgICAgeyBPUF9DLCAgICAgICAgU1pfTkEgICAgfQojZGVmaW5lIE9fRyAgICAgICB7IE9Q
X0csICAgICAgICBTWl9OQSAgICB9CiNkZWZpbmUgT19NYiAgICAgIHsgT1BfTSwgICAgICAgIFNa
X0IgICAgIH0KI2RlZmluZSBPX010ICAgICAgeyBPUF9NLCAgICAgICAgU1pfVCAgICAgfQojZGVm
aW5lIE9fUyAgICAgICB7IE9QX1MsICAgICAgICBTWl9OQSAgICB9CiNkZWZpbmUgT19NcSAgICAg
IHsgT1BfTSwgICAgICAgIFNaX1EgICAgIH0KI2RlZmluZSBPX1cgICAgICAgeyBPUF9XLCAgICAg
ICAgU1pfTkEgICAgfQojZGVmaW5lIE9fRVMgICAgICB7IE9QX0VTLCAgICAgICBTWl9OQSAgICB9
CiNkZWZpbmUgT19yQlggICAgIHsgT1BfckJYLCAgICAgIFNaX05BICAgIH0KI2RlZmluZSBPX0Vk
ICAgICAgeyBPUF9FLCAgICAgICAgU1pfRCAgICAgfQojZGVmaW5lIE9fRExyMTBiICB7IE9QX0RM
cjEwYiwgICBTWl9OQSAgICB9CiNkZWZpbmUgT19NdyAgICAgIHsgT1BfTSwgICAgICAgIFNaX1cg
ICAgIH0KI2RlZmluZSBPX0ViICAgICAgeyBPUF9FLCAgICAgICAgU1pfQiAgICAgfQojZGVmaW5l
IE9fRXggICAgICB7IE9QX0UsICAgICAgICBTWl9NRFEgICB9CiNkZWZpbmUgT19FeiAgICAgIHsg
T1BfRSwgICAgICAgIFNaX1ogICAgIH0KI2RlZmluZSBPX0V3ICAgICAgeyBPUF9FLCAgICAgICAg
U1pfVyAgICAgfQojZGVmaW5lIE9fRXYgICAgICB7IE9QX0UsICAgICAgICBTWl9WICAgICB9CiNk
ZWZpbmUgT19FcCAgICAgIHsgT1BfRSwgICAgICAgIFNaX1AgICAgIH0KI2RlZmluZSBPX0ZTICAg
ICAgeyBPUF9GUywgICAgICAgU1pfTkEgICAgfQojZGVmaW5lIE9fTXMgICAgICB7IE9QX00sICAg
ICAgICBTWl9XICAgICB9CiNkZWZpbmUgT19yQVhyOCAgIHsgT1BfckFYcjgsICAgIFNaX05BICAg
IH0KI2RlZmluZSBPX2VCUCAgICAgeyBPUF9lQlAsICAgICAgU1pfTkEgICAgfQojZGVmaW5lIE9f
SXNiICAgICB7IE9QX0ksICAgICAgICBTWl9TQiAgICB9CiNkZWZpbmUgT19lQlggICAgIHsgT1Bf
ZUJYLCAgICAgIFNaX05BICAgIH0KI2RlZmluZSBPX3JDWHI5ICAgeyBPUF9yQ1hyOSwgICAgU1pf
TkEgICAgfQojZGVmaW5lIE9fakRQICAgICB7IE9QX0osICAgICAgICBTWl9EUCAgICB9CiNkZWZp
bmUgT19DSCAgICAgIHsgT1BfQ0gsICAgICAgIFNaX05BICAgIH0KI2RlZmluZSBPX0NMICAgICAg
eyBPUF9DTCwgICAgICAgU1pfTkEgICAgfQojZGVmaW5lIE9fUiAgICAgICB7IE9QX1IsICAgICAg
ICBTWl9SRFEgICB9CiNkZWZpbmUgT19WICAgICAgIHsgT1BfViwgICAgICAgIFNaX05BICAgIH0K
I2RlZmluZSBPX0NTICAgICAgeyBPUF9DUywgICAgICAgU1pfTkEgICAgfQojZGVmaW5lIE9fQ0hy
MTNiICB7IE9QX0NIcjEzYiwgICBTWl9OQSAgICB9CiNkZWZpbmUgT19lQ1ggICAgIHsgT1BfZUNY
LCAgICAgIFNaX05BICAgIH0KI2RlZmluZSBPX2VTUCAgICAgeyBPUF9lU1AsICAgICAgU1pfTkEg
ICAgfQojZGVmaW5lIE9fU1MgICAgICB7IE9QX1NTLCAgICAgICBTWl9OQSAgICB9CiNkZWZpbmUg
T19TUCAgICAgIHsgT1BfU1AsICAgICAgIFNaX05BICAgIH0KI2RlZmluZSBPX0JMcjExYiAgeyBP
UF9CTHIxMWIsICAgU1pfTkEgICAgfQojZGVmaW5lIE9fU0kgICAgICB7IE9QX1NJLCAgICAgICBT
Wl9OQSAgICB9CiNkZWZpbmUgT19lU0kgICAgIHsgT1BfZVNJLCAgICAgIFNaX05BICAgIH0KI2Rl
ZmluZSBPX0RMICAgICAgeyBPUF9ETCwgICAgICAgU1pfTkEgICAgfQojZGVmaW5lIE9fREggICAg
ICB7IE9QX0RILCAgICAgICBTWl9OQSAgICB9CiNkZWZpbmUgT19ESSAgICAgIHsgT1BfREksICAg
ICAgIFNaX05BICAgIH0KI2RlZmluZSBPX0RYICAgICAgeyBPUF9EWCwgICAgICAgU1pfTkEgICAg
fQojZGVmaW5lIE9fckJQICAgICB7IE9QX3JCUCwgICAgICBTWl9OQSAgICB9CiNkZWZpbmUgT19H
dncgICAgIHsgT1BfRywgICAgICAgIFNaX01EUSAgIH0KI2RlZmluZSBPX0kxICAgICAgeyBPUF9J
MSwgICAgICAgU1pfTkEgICAgfQojZGVmaW5lIE9fSTMgICAgICB7IE9QX0kzLCAgICAgICBTWl9O
QSAgICB9CiNkZWZpbmUgT19EUyAgICAgIHsgT1BfRFMsICAgICAgIFNaX05BICAgIH0KI2RlZmlu
ZSBPX1NUNCAgICAgeyBPUF9TVDQsICAgICAgU1pfTkEgICAgfQojZGVmaW5lIE9fU1Q1ICAgICB7
IE9QX1NUNSwgICAgICBTWl9OQSAgICB9CiNkZWZpbmUgT19TVDYgICAgIHsgT1BfU1Q2LCAgICAg
IFNaX05BICAgIH0KI2RlZmluZSBPX1NUNyAgICAgeyBPUF9TVDcsICAgICAgU1pfTkEgICAgfQoj
ZGVmaW5lIE9fU1QwICAgICB7IE9QX1NUMCwgICAgICBTWl9OQSAgICB9CiNkZWZpbmUgT19TVDEg
ICAgIHsgT1BfU1QxLCAgICAgIFNaX05BICAgIH0KI2RlZmluZSBPX1NUMiAgICAgeyBPUF9TVDIs
ICAgICAgU1pfTkEgICAgfQojZGVmaW5lIE9fU1QzICAgICB7IE9QX1NUMywgICAgICBTWl9OQSAg
ICB9CiNkZWZpbmUgT19FICAgICAgIHsgT1BfRSwgICAgICAgIFNaX05BICAgIH0KI2RlZmluZSBP
X0FIICAgICAgeyBPUF9BSCwgICAgICAgU1pfTkEgICAgfQojZGVmaW5lIE9fTSAgICAgICB7IE9Q
X00sICAgICAgICBTWl9OQSAgICB9CiNkZWZpbmUgT19BTCAgICAgIHsgT1BfQUwsICAgICAgIFNa
X05BICAgIH0KI2RlZmluZSBPX0NMcjliICAgeyBPUF9DTHI5YiwgICAgU1pfTkEgICAgfQojZGVm
aW5lIE9fUSAgICAgICB7IE9QX1EsICAgICAgICBTWl9OQSAgICB9CiNkZWZpbmUgT19lQVggICAg
IHsgT1BfZUFYLCAgICAgIFNaX05BICAgIH0KI2RlZmluZSBPX1ZSICAgICAgeyBPUF9WUiwgICAg
ICAgU1pfTkEgICAgfQojZGVmaW5lIE9fQVggICAgICB7IE9QX0FYLCAgICAgICBTWl9OQSAgICB9
CiNkZWZpbmUgT19yQVggICAgIHsgT1BfckFYLCAgICAgIFNaX05BICAgIH0KI2RlZmluZSBPX0l6
ICAgICAgeyBPUF9JLCAgICAgICAgU1pfWiAgICAgfQojZGVmaW5lIE9fckRJcjE1ICB7IE9QX3JE
SXIxNSwgICBTWl9OQSAgICB9CiNkZWZpbmUgT19JdyAgICAgIHsgT1BfSSwgICAgICAgIFNaX1cg
ICAgIH0KI2RlZmluZSBPX0l2ICAgICAgeyBPUF9JLCAgICAgICAgU1pfViAgICAgfQojZGVmaW5l
IE9fQXAgICAgICB7IE9QX0EsICAgICAgICBTWl9QICAgICB9CiNkZWZpbmUgT19DWCAgICAgIHsg
T1BfQ1gsICAgICAgIFNaX05BICAgIH0KI2RlZmluZSBPX0liICAgICAgeyBPUF9JLCAgICAgICAg
U1pfQiAgICAgfQojZGVmaW5lIE9fQkhyMTViICB7IE9QX0JIcjE1YiwgICBTWl9OQSAgICB9CgoK
LyogQSBzaW5nbGUgb3BlcmFuZCBvZiBhbiBlbnRyeSBpbiB0aGUgaW5zdHJ1Y3Rpb24gdGFibGUu
IAogKiAoaW50ZXJuYWwgdXNlIG9ubHkpCiAqLwpzdHJ1Y3QgdWRfaXRhYl9lbnRyeV9vcGVyYW5k
IAp7CiAgZW51bSB1ZF9vcGVyYW5kX2NvZGUgdHlwZTsKICBlbnVtIHVkX29wZXJhbmRfc2l6ZSBz
aXplOwp9OwoKCi8qIEEgc2luZ2xlIGVudHJ5IGluIGFuIGluc3RydWN0aW9uIHRhYmxlLiAKICoo
aW50ZXJuYWwgdXNlIG9ubHkpCiAqLwpzdHJ1Y3QgdWRfaXRhYl9lbnRyeSAKewogIGVudW0gdWRf
bW5lbW9uaWNfY29kZSAgICAgICAgIG1uZW1vbmljOwogIHN0cnVjdCB1ZF9pdGFiX2VudHJ5X29w
ZXJhbmQgIG9wZXJhbmQxOwogIHN0cnVjdCB1ZF9pdGFiX2VudHJ5X29wZXJhbmQgIG9wZXJhbmQy
OwogIHN0cnVjdCB1ZF9pdGFiX2VudHJ5X29wZXJhbmQgIG9wZXJhbmQzOwogIHVpbnQzMl90ICAg
ICAgICAgICAgICAgICAgICAgIHByZWZpeDsKfTsKCiNlbmRpZiAvKiBVRF9ERUNPREVfSCAqLwoK
LyogdmltOmNpbmRlbnQKICogdmltOmV4cGFuZHRhYgogKiB2aW06dHM9NAogKiB2aW06c3c9NAog
Ki8KAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AABrZGIveDg2L3VkaXM4Ni0xLjcvZXh0ZXJuLmgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAMDAwMDY2NAAwMDAy
NzU2ADAwMDI3NTYAMDAwMDAwMDMxMzQAMTE3NjU0NjU1NTYAMDE1MzI1ACAwAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHVzdGFyICAAbXJhdGhvcgAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAABtcmF0aG9yAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AC8qIC0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tCiAqIGV4dGVybi5oCiAqCiAqIENvcHlyaWdodCAoYykg
MjAwNCwgMjAwNSwgMjAwNiwgVml2ZWsgTW9oYW4gPHZpdmVrQHNpZzkuY29tPgogKiBBbGwgcmln
aHRzIHJlc2VydmVkLiBTZWUgTElDRU5TRQogKiAtLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLQogKi8KI2lm
bmRlZiBVRF9FWFRFUk5fSAojZGVmaW5lIFVEX0VYVEVSTl9ICgojaWZkZWYgX19jcGx1c3BsdXMK
ZXh0ZXJuICJDIiB7CiNlbmRpZgoKLyogI2luY2x1ZGUgPHN0ZGlvLmg+ICovCiNpbmNsdWRlICJ0
eXBlcy5oIgoKLyogPT09PT09PT09PT09PT09PT09PT09PT09PT09PT0gUFVCTElDIEFQSSA9PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT0gKi8KCmV4dGVybiB2b2lkIHVkX2luaXQoc3Ry
dWN0IHVkKik7CgpleHRlcm4gdm9pZCB1ZF9zZXRfbW9kZShzdHJ1Y3QgdWQqLCB1aW50OF90KTsK
CmV4dGVybiB2b2lkIHVkX3NldF9wYyhzdHJ1Y3QgdWQqLCB1aW50NjRfdCk7CgpleHRlcm4gdm9p
ZCB1ZF9zZXRfaW5wdXRfaG9vayhzdHJ1Y3QgdWQqLCBpbnQgKCopKHN0cnVjdCB1ZCopKTsKCmV4
dGVybiB2b2lkIHVkX3NldF9pbnB1dF9idWZmZXIoc3RydWN0IHVkKiwgdWludDhfdCosIHNpemVf
dCk7CgojaWZuZGVmIF9fVURfU1RBTkRBTE9ORV9fCmV4dGVybiB2b2lkIHVkX3NldF9pbnB1dF9m
aWxlKHN0cnVjdCB1ZCosIEZJTEUqKTsKI2VuZGlmIC8qIF9fVURfU1RBTkRBTE9ORV9fICovCgpl
eHRlcm4gdm9pZCB1ZF9zZXRfdmVuZG9yKHN0cnVjdCB1ZCosIHVuc2lnbmVkKTsKCmV4dGVybiB2
b2lkIHVkX3NldF9zeW50YXgoc3RydWN0IHVkKiwgdm9pZCAoKikoc3RydWN0IHVkKikpOwoKZXh0
ZXJuIHZvaWQgdWRfaW5wdXRfc2tpcChzdHJ1Y3QgdWQqLCBzaXplX3QpOwoKZXh0ZXJuIGludCB1
ZF9pbnB1dF9lbmQoc3RydWN0IHVkKik7CgpleHRlcm4gdW5zaWduZWQgaW50IHVkX2RlY29kZShz
dHJ1Y3QgdWQqKTsKCmV4dGVybiB1bnNpZ25lZCBpbnQgdWRfZGlzYXNzZW1ibGUoc3RydWN0IHVk
Kik7CgpleHRlcm4gdm9pZCB1ZF90cmFuc2xhdGVfaW50ZWwoc3RydWN0IHVkKik7CgpleHRlcm4g
dm9pZCB1ZF90cmFuc2xhdGVfYXR0KHN0cnVjdCB1ZCopOwoKZXh0ZXJuIGNoYXIqIHVkX2luc25f
YXNtKHN0cnVjdCB1ZCogdSk7CgpleHRlcm4gdWludDhfdCogdWRfaW5zbl9wdHIoc3RydWN0IHVk
KiB1KTsKCmV4dGVybiB1aW50NjRfdCB1ZF9pbnNuX29mZihzdHJ1Y3QgdWQqKTsKCmV4dGVybiBj
aGFyKiB1ZF9pbnNuX2hleChzdHJ1Y3QgdWQqKTsKCmV4dGVybiB1bnNpZ25lZCBpbnQgdWRfaW5z
bl9sZW4oc3RydWN0IHVkKiB1KTsKCmV4dGVybiBjb25zdCBjaGFyKiB1ZF9sb29rdXBfbW5lbW9u
aWMoZW51bSB1ZF9tbmVtb25pY19jb2RlIGMpOwoKLyogPT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT0gKi8KCiNp
ZmRlZiBfX2NwbHVzcGx1cwp9CiNlbmRpZgojZW5kaWYKAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAa2Ri
L3g4Ni91ZGlzODYtMS43L3N5bi5jAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADAwMDA2NjQAMDAwMjc1NgAw
MDAyNzU2ADAwMDAwMDAzMjMxADExNzY1NDY1NTU2ADAxNDYyMgAgMAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAB1c3RhciAgAG1yYXRob3IAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAbXJhdGhvcgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAvKiAt
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLQogKiBzeW4uYwogKgogKiBDb3B5cmlnaHQgKGMpIDIwMDIsIDIw
MDMsIDIwMDQgVml2ZWsgTW9oYW4gPHZpdmVrQHNpZzkuY29tPgogKiBBbGwgcmlnaHRzIHJlc2Vy
dmVkLiBTZWUgKExJQ0VOU0UpCiAqIC0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tCiAqLwoKLyogLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0KICogSW50ZWwgUmVnaXN0ZXIgVGFibGUgLSBPcmRlciBNYXR0ZXJzICh0
eXBlcy5oKSEKICogLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0KICovCmNvbnN0IGNoYXIqIHVkX3JlZ190
YWJbXSA9IAp7CiAgImFsIiwJCSJjbCIsCQkiZGwiLAkJImJsIiwKICAiYWgiLAkJImNoIiwJCSJk
aCIsCQkiYmgiLAogICJzcGwiLAkiYnBsIiwJCSJzaWwiLAkJImRpbCIsCiAgInI4YiIsCSJyOWIi
LAkJInIxMGIiLAkJInIxMWIiLAogICJyMTJiIiwJInIxM2IiLAkJInIxNGIiLAkJInIxNWIiLAoK
ICAiYXgiLAkJImN4IiwJCSJkeCIsCQkiYngiLAogICJzcCIsCQkiYnAiLAkJInNpIiwJCSJkaSIs
CiAgInI4dyIsCSJyOXciLAkJInIxMHciLAkJInIxMXciLAogICJyMTJ3IiwJInIxM1ciCSwJInIx
NHciLAkJInIxNXciLAoJCiAgImVheCIsCSJlY3giLAkJImVkeCIsCQkiZWJ4IiwKICAiZXNwIiwJ
ImVicCIsCQkiZXNpIiwJCSJlZGkiLAogICJyOGQiLAkicjlkIiwJCSJyMTBkIiwJCSJyMTFkIiwK
ICAicjEyZCIsCSJyMTNkIiwJCSJyMTRkIiwJCSJyMTVkIiwKCQogICJyYXgiLAkicmN4IiwJCSJy
ZHgiLAkJInJieCIsCiAgInJzcCIsCSJyYnAiLAkJInJzaSIsCQkicmRpIiwKICAicjgiLAkJInI5
IiwJCSJyMTAiLAkJInIxMSIsCiAgInIxMiIsCSJyMTMiLAkJInIxNCIsCQkicjE1IiwKCiAgImVz
IiwJCSJjcyIsCQkic3MiLAkJImRzIiwKICAiZnMiLAkJImdzIiwJCgogICJjcjAiLAkiY3IxIiwJ
CSJjcjIiLAkJImNyMyIsCiAgImNyNCIsCSJjcjUiLAkJImNyNiIsCQkiY3I3IiwKICAiY3I4IiwJ
ImNyOSIsCQkiY3IxMCIsCQkiY3IxMSIsCiAgImNyMTIiLAkiY3IxMyIsCQkiY3IxNCIsCQkiY3Ix
NSIsCgkKICAiZHIwIiwJImRyMSIsCQkiZHIyIiwJCSJkcjMiLAogICJkcjQiLAkiZHI1IiwJCSJk
cjYiLAkJImRyNyIsCiAgImRyOCIsCSJkcjkiLAkJImRyMTAiLAkJImRyMTEiLAogICJkcjEyIiwJ
ImRyMTMiLAkJImRyMTQiLAkJImRyMTUiLAoKICAibW0wIiwJIm1tMSIsCQkibW0yIiwJCSJtbTMi
LAogICJtbTQiLAkibW01IiwJCSJtbTYiLAkJIm1tNyIsCgogICJzdDAiLAkic3QxIiwJCSJzdDIi
LAkJInN0MyIsCiAgInN0NCIsCSJzdDUiLAkJInN0NiIsCQkic3Q3IiwgCgogICJ4bW0wIiwJInht
bTEiLAkJInhtbTIiLAkJInhtbTMiLAogICJ4bW00IiwJInhtbTUiLAkJInhtbTYiLAkJInhtbTci
LAogICJ4bW04IiwJInhtbTkiLAkJInhtbTEwIiwJInhtbTExIiwKICAieG1tMTIiLAkieG1tMTMi
LAkieG1tMTQiLAkieG1tMTUiLAoKICAicmlwIgp9OwoAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAGtkYi94ODYv
dWRpczg2LTEuNy9zeW4uaAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAwMDAwNjY0ADAwMDI3NTYAMDAwMjc1
NgAwMDAwMDAwMTEzMwAxMTc2NTQ2NTU1NgAwMTQ2MjYAIDAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAdXN0YXIgIABtcmF0aG9yAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAG1yYXRob3IAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAALyogLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0KICogc3luLmgKICoKICogQ29weXJpZ2h0IChjKSAyMDA2LCBWaXZlayBN
b2hhbiA8dml2ZWtAc2lnOS5jb20+CiAqIEFsbCByaWdodHMgcmVzZXJ2ZWQuIFNlZSBMSUNFTlNF
CiAqIC0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tCiAqLwojaWZuZGVmIFVEX1NZTl9ICiNkZWZpbmUgVURf
U1lOX0gKCiNpZiAwCiNpbmNsdWRlIDxzdGRpby5oPgojaW5jbHVkZSA8c3RkYXJnLmg+CiNlbmRp
ZgojaW5jbHVkZSAidHlwZXMuaCIKCmV4dGVybiBjb25zdCBjaGFyKiB1ZF9yZWdfdGFiW107Cgpz
dGF0aWMgdm9pZCBta2FzbShzdHJ1Y3QgdWQqIHUsIGNvbnN0IGNoYXIqIGZtdCwgLi4uKQp7CiAg
dmFfbGlzdCBhcDsKICB2YV9zdGFydChhcCwgZm10KTsKICB1LT5pbnNuX2ZpbGwgKz0gdnNucHJp
bnRmKChjaGFyKikgdS0+aW5zbl9idWZmZXIgKyB1LT5pbnNuX2ZpbGwsIDY0LCBmbXQsIGFwKTsK
ICB2YV9lbmQoYXApOwp9CgojZW5kaWYKAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAGtkYi94ODYvdWRp
czg2LTEuNy91ZGlzODYuYwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAwMDAwNjY0ADAwMDI3NTYAMDAwMjc1NgAw
MDAwMDAxMDA1NgAxMTc2NTQ2NTU1NgAwMTUxMzYAIDAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAdXN0YXIgIABtcmF0aG9yAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AG1yYXRob3IAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAALyogLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0KICogdWRpczg2LmMKICoKICogQ29weXJpZ2h0IChjKSAyMDA0LCAyMDA1LCAy
MDA2LCBWaXZlayBNb2hhbiA8dml2ZWtAc2lnOS5jb20+CiAqIEFsbCByaWdodHMgcmVzZXJ2ZWQu
IFNlZSBMSUNFTlNFCiAqIC0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tCiAqLwoKI2lmIDAKI2luY2x1ZGUg
PHN0ZGxpYi5oPgojaW5jbHVkZSA8c3RkaW8uaD4KI2luY2x1ZGUgPHN0cmluZy5oPgojZW5kaWYK
CiNpbmNsdWRlICJpbnB1dC5oIgojaW5jbHVkZSAiZXh0ZXJuLmgiCgovKiA9PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PQogKiB1ZF9pbml0KCkgLSBJbml0aWFsaXplcyB1ZF90IG9iamVjdC4KICogPT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT0KICovCmV4dGVybiB2b2lkIAp1ZF9pbml0KHN0cnVjdCB1ZCogdSkKewog
IG1lbXNldCgodm9pZCopdSwgMCwgc2l6ZW9mKHN0cnVjdCB1ZCkpOwogIHVkX3NldF9tb2RlKHUs
IDE2KTsKICB1LT5tbmVtb25pYyA9IFVEX0lpbnZhbGlkOwogIHVkX3NldF9wYyh1LCAwKTsKI2lm
bmRlZiBfX1VEX1NUQU5EQUxPTkVfXwogIHVkX3NldF9pbnB1dF9maWxlKHUsIHN0ZGluKTsKI2Vu
ZGlmIC8qIF9fVURfU1RBTkRBTE9ORV9fICovCn0KCi8qID09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09CiAq
IHVkX2Rpc2Fzc2VtYmxlKCkgLSBkaXNhc3NlbWJsZXMgb25lIGluc3RydWN0aW9uIGFuZCByZXR1
cm5zIHRoZSBudW1iZXIgb2YgCiAqIGJ5dGVzIGRpc2Fzc2VtYmxlZC4gQSB6ZXJvIG1lYW5zIGVu
ZCBvZiBkaXNhc3NlbWJseS4KICogPT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT0KICovCmV4dGVybiB1bnNp
Z25lZCBpbnQKdWRfZGlzYXNzZW1ibGUoc3RydWN0IHVkKiB1KQp7CiAgaWYgKHVkX2lucHV0X2Vu
ZCh1KSkKCXJldHVybiAwOwoKIAogIHUtPmluc25fYnVmZmVyWzBdID0gdS0+aW5zbl9oZXhjb2Rl
WzBdID0gMDsKCiAKICBpZiAodWRfZGVjb2RlKHUpID09IDApCglyZXR1cm4gMDsKICBpZiAodS0+
dHJhbnNsYXRvcikKCXUtPnRyYW5zbGF0b3IodSk7CiAgcmV0dXJuIHVkX2luc25fbGVuKHUpOwp9
CgovKiA9PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PQogKiB1ZF9zZXRfbW9kZSgpIC0gU2V0IERpc2Fzc2Vt
bHkgTW9kZS4KICogPT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT0KICovCmV4dGVybiB2b2lkIAp1ZF9zZXRf
bW9kZShzdHJ1Y3QgdWQqIHUsIHVpbnQ4X3QgbSkKewogIHN3aXRjaChtKSB7CgljYXNlIDE2OgoJ
Y2FzZSAzMjoKCWNhc2UgNjQ6IHUtPmRpc19tb2RlID0gbSA7IHJldHVybjsKCWRlZmF1bHQ6IHUt
PmRpc19tb2RlID0gMTY7IHJldHVybjsKICB9Cn0KCi8qID09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09CiAq
IHVkX3NldF92ZW5kb3IoKSAtIFNldCB2ZW5kb3IuCiAqID09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09CiAq
LwpleHRlcm4gdm9pZCAKdWRfc2V0X3ZlbmRvcihzdHJ1Y3QgdWQqIHUsIHVuc2lnbmVkIHYpCnsK
ICBzd2l0Y2godikgewoJY2FzZSBVRF9WRU5ET1JfSU5URUw6CgkJdS0+dmVuZG9yID0gdjsKCQli
cmVhazsKCWRlZmF1bHQ6CgkJdS0+dmVuZG9yID0gVURfVkVORE9SX0FNRDsKICB9Cn0KCi8qID09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09CiAqIHVkX3NldF9wYygpIC0gU2V0cyBjb2RlIG9yaWdpbi4gCiAq
ID09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09CiAqLwpleHRlcm4gdm9pZCAKdWRfc2V0X3BjKHN0cnVjdCB1
ZCogdSwgdWludDY0X3QgbykKewogIHUtPnBjID0gbzsKfQoKLyogPT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT0KICogdWRfc2V0X3N5bnRheCgpIC0gU2V0cyB0aGUgb3V0cHV0IHN5bnRheC4KICogPT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT0KICovCmV4dGVybiB2b2lkIAp1ZF9zZXRfc3ludGF4KHN0cnVjdCB1ZCog
dSwgdm9pZCAoKnQpKHN0cnVjdCB1ZCopKQp7CiAgdS0+dHJhbnNsYXRvciA9IHQ7Cn0KCi8qID09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09CiAqIHVkX2luc24oKSAtIHJldHVybnMgdGhlIGRpc2Fzc2VtYmxl
ZCBpbnN0cnVjdGlvbgogKiA9PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PQogKi8KZXh0ZXJuIGNoYXIqIAp1
ZF9pbnNuX2FzbShzdHJ1Y3QgdWQqIHUpIAp7CiAgcmV0dXJuIHUtPmluc25fYnVmZmVyOwp9Cgov
KiA9PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PQogKiB1ZF9pbnNuX29mZnNldCgpIC0gUmV0dXJucyB0aGUg
b2Zmc2V0LgogKiA9PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PQogKi8KZXh0ZXJuIHVpbnQ2NF90CnVkX2lu
c25fb2ZmKHN0cnVjdCB1ZCogdSkgCnsKICByZXR1cm4gdS0+aW5zbl9vZmZzZXQ7Cn0KCgovKiA9
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PQogKiB1ZF9pbnNuX2hleCgpIC0gUmV0dXJucyBoZXggZm9ybSBv
ZiBkaXNhc3NlbWJsZWQgaW5zdHJ1Y3Rpb24uCiAqID09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09CiAqLwpl
eHRlcm4gY2hhciogCnVkX2luc25faGV4KHN0cnVjdCB1ZCogdSkgCnsKICByZXR1cm4gdS0+aW5z
bl9oZXhjb2RlOwp9CgovKiA9PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PQogKiB1ZF9pbnNuX3B0cigpIC0g
UmV0dXJucyBjb2RlIGRpc2Fzc2VtYmxlZC4KICogPT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT0KICovCmV4
dGVybiB1aW50OF90KiAKdWRfaW5zbl9wdHIoc3RydWN0IHVkKiB1KSAKewogIHJldHVybiB1LT5p
bnBfc2VzczsKfQoKLyogPT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT0KICogdWRfaW5zbl9sZW4oKSAtIFJl
dHVybnMgdGhlIGNvdW50IG9mIGJ5dGVzIGRpc2Fzc2VtYmxlZC4KICogPT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT0KICovCmV4dGVybiB1bnNpZ25lZCBpbnQgCnVkX2luc25fbGVuKHN0cnVjdCB1ZCogdSkg
CnsKICByZXR1cm4gdS0+aW5wX2N0cjsKfQoAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAa2RiL3g4Ni91ZGlzODYtMS43L01h
a2VmaWxlAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAADAwMDA2NjQAMDAwMjc1NgAwMDAyNzU2ADAwMDAwMDAwMjA3
ADExNzY1NDY1NTU2ADAxNTMwNQAgMAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAB1c3RhciAgAG1yYXRob3IAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAbXJhdGhvcgAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAKQ0ZMQUdTCQkrPSAtRF9fVURfU1RB
TkRBTE9ORV9fCm9iai15CQk6PSBkZWNvZGUubyBpbnB1dC5vIGl0YWIubyBrZGJfZGlzLm8gc3lu
LWF0dC5vIHN5bi5vIFwKICAgICAgICAgICAgICAgICAgIHN5bi1pbnRlbC5vIHVkaXM4Ni5vCgoA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAGtkYi94ODYvdWRpczg2LTEuNy9kZWNv
ZGUuYwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAwMDAwNjY0ADAwMDI3NTYAMDAwMjc1NgAwMDAwMDEwNDM0MgAx
MTc2NTQ2NTU1NgAwMTUyNDEAIDAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAdXN0YXIgIABtcmF0aG9yAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAG1yYXRob3IAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAALyogLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0K
ICogZGVjb2RlLmMKICoKICogQ29weXJpZ2h0IChjKSAyMDA1LCAyMDA2LCBWaXZlayBNb2hhbiA8
dml2ZWtAc2lnOS5jb20+CiAqIEFsbCByaWdodHMgcmVzZXJ2ZWQuIFNlZSBMSUNFTlNFCiAqIC0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tCiAqLwoKI2lmIDAKI2luY2x1ZGUgPGFzc2VydC5oPgojaW5jbHVk
ZSA8c3RyaW5nLmg+CiNlbmRpZgoKI2luY2x1ZGUgInR5cGVzLmgiCiNpbmNsdWRlICJpdGFiLmgi
CiNpbmNsdWRlICJpbnB1dC5oIgojaW5jbHVkZSAiZGVjb2RlLmgiCgovKiBUaGUgbWF4IG51bWJl
ciBvZiBwcmVmaXhlcyB0byBhbiBpbnN0cnVjdGlvbiAqLwojZGVmaW5lIE1BWF9QUkVGSVhFUyAg
ICAxNQoKc3RhdGljIHN0cnVjdCB1ZF9pdGFiX2VudHJ5IGllX2ludmFsaWQgPSB7IFVEX0lpbnZh
bGlkLCBPX05PTkUsIE9fTk9ORSwgT19OT05FLCBQX25vbmUgfTsKc3RhdGljIHN0cnVjdCB1ZF9p
dGFiX2VudHJ5IGllX3BhdXNlICAgPSB7IFVEX0lwYXVzZSwgICBPX05PTkUsIE9fTk9ORSwgT19O
T05FLCBQX25vbmUgfTsKc3RhdGljIHN0cnVjdCB1ZF9pdGFiX2VudHJ5IGllX25vcCAgICAgPSB7
IFVEX0lub3AsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCBQX25vbmUgfTsKCgovKiBMb29r
cyB1cCBtbmVtb25pYyBjb2RlIGluIHRoZSBtbmVtb25pYyBzdHJpbmcgdGFibGUKICogUmV0dXJu
cyBOVUxMIGlmIHRoZSBtbmVtb25pYyBjb2RlIGlzIGludmFsaWQKICovCmNvbnN0IGNoYXIgKiB1
ZF9sb29rdXBfbW5lbW9uaWMoIGVudW0gdWRfbW5lbW9uaWNfY29kZSBjICkKewogICAgaWYgKCBj
IDwgVURfSWQzdmlsICkKICAgICAgICByZXR1cm4gdWRfbW5lbW9uaWNzX3N0clsgYyBdOwogICAg
cmV0dXJuIE5VTEw7Cn0KCgovKiBFeHRyYWN0cyBpbnN0cnVjdGlvbiBwcmVmaXhlcy4KICovCnN0
YXRpYyBpbnQgZ2V0X3ByZWZpeGVzKCBzdHJ1Y3QgdWQqIHUgKQp7CiAgICB1bnNpZ25lZCBpbnQg
aGF2ZV9wZnggPSAxOwogICAgdW5zaWduZWQgaW50IGk7CiAgICB1aW50OF90IGN1cnI7CgogICAg
LyogaWYgaW4gZXJyb3Igc3RhdGUsIGJhaWwgb3V0ICovCiAgICBpZiAoIHUtPmVycm9yICkgCiAg
ICAgICAgcmV0dXJuIC0xOyAKCiAgICAvKiBrZWVwIGdvaW5nIGFzIGxvbmcgYXMgdGhlcmUgYXJl
IHByZWZpeGVzIGF2YWlsYWJsZSAqLwogICAgZm9yICggaSA9IDA7IGhhdmVfcGZ4IDsgKytpICkg
ewoKICAgICAgICAvKiBHZXQgbmV4dCBieXRlLiAqLwogICAgICAgIGlucF9uZXh0KHUpOyAKICAg
ICAgICBpZiAoIHUtPmVycm9yICkgCiAgICAgICAgICAgIHJldHVybiAtMTsKICAgICAgICBjdXJy
ID0gaW5wX2N1cnIoIHUgKTsKCiAgICAgICAgLyogcmV4IHByZWZpeGVzIGluIDY0Yml0IG1vZGUg
Ki8KICAgICAgICBpZiAoIHUtPmRpc19tb2RlID09IDY0ICYmICggY3VyciAmIDB4RjAgKSA9PSAw
eDQwICkgewogICAgICAgICAgICB1LT5wZnhfcmV4ID0gY3VycjsgIAogICAgICAgIH0gZWxzZSB7
CiAgICAgICAgICAgIHN3aXRjaCAoIGN1cnIgKSAgCiAgICAgICAgICAgIHsKICAgICAgICAgICAg
Y2FzZSAweDJFIDogCiAgICAgICAgICAgICAgICB1LT5wZnhfc2VnID0gVURfUl9DUzsgCiAgICAg
ICAgICAgICAgICB1LT5wZnhfcmV4ID0gMDsKICAgICAgICAgICAgICAgIGJyZWFrOwogICAgICAg
ICAgICBjYXNlIDB4MzYgOiAgICAgCiAgICAgICAgICAgICAgICB1LT5wZnhfc2VnID0gVURfUl9T
UzsgCiAgICAgICAgICAgICAgICB1LT5wZnhfcmV4ID0gMDsKICAgICAgICAgICAgICAgIGJyZWFr
OwogICAgICAgICAgICBjYXNlIDB4M0UgOiAKICAgICAgICAgICAgICAgIHUtPnBmeF9zZWcgPSBV
RF9SX0RTOyAKICAgICAgICAgICAgICAgIHUtPnBmeF9yZXggPSAwOwogICAgICAgICAgICAgICAg
YnJlYWs7CiAgICAgICAgICAgIGNhc2UgMHgyNiA6IAogICAgICAgICAgICAgICAgdS0+cGZ4X3Nl
ZyA9IFVEX1JfRVM7IAogICAgICAgICAgICAgICAgdS0+cGZ4X3JleCA9IDA7CiAgICAgICAgICAg
ICAgICBicmVhazsKICAgICAgICAgICAgY2FzZSAweDY0IDogCiAgICAgICAgICAgICAgICB1LT5w
Znhfc2VnID0gVURfUl9GUzsgCiAgICAgICAgICAgICAgICB1LT5wZnhfcmV4ID0gMDsKICAgICAg
ICAgICAgICAgIGJyZWFrOwogICAgICAgICAgICBjYXNlIDB4NjUgOiAKICAgICAgICAgICAgICAg
IHUtPnBmeF9zZWcgPSBVRF9SX0dTOyAKICAgICAgICAgICAgICAgIHUtPnBmeF9yZXggPSAwOwog
ICAgICAgICAgICAgICAgYnJlYWs7CiAgICAgICAgICAgIGNhc2UgMHg2NyA6IC8qIGFkcmVzcy1z
aXplIG92ZXJyaWRlIHByZWZpeCAqLyAKICAgICAgICAgICAgICAgIHUtPnBmeF9hZHIgPSAweDY3
OwogICAgICAgICAgICAgICAgdS0+cGZ4X3JleCA9IDA7CiAgICAgICAgICAgICAgICBicmVhazsK
ICAgICAgICAgICAgY2FzZSAweEYwIDogCiAgICAgICAgICAgICAgICB1LT5wZnhfbG9jayA9IDB4
RjA7CiAgICAgICAgICAgICAgICB1LT5wZnhfcmV4ICA9IDA7CiAgICAgICAgICAgICAgICBicmVh
azsKICAgICAgICAgICAgY2FzZSAweDY2OiAKICAgICAgICAgICAgICAgIC8qIHRoZSAweDY2IHNz
ZSBwcmVmaXggaXMgb25seSBlZmZlY3RpdmUgaWYgbm8gb3RoZXIgc3NlIHByZWZpeAogICAgICAg
ICAgICAgICAgICogaGFzIGFscmVhZHkgYmVlbiBzcGVjaWZpZWQuCiAgICAgICAgICAgICAgICAg
Ki8KICAgICAgICAgICAgICAgIGlmICggIXUtPnBmeF9pbnNuICkgdS0+cGZ4X2luc24gPSAweDY2
OwogICAgICAgICAgICAgICAgdS0+cGZ4X29wciA9IDB4NjY7ICAgICAgICAgICAKICAgICAgICAg
ICAgICAgIHUtPnBmeF9yZXggPSAwOwogICAgICAgICAgICAgICAgYnJlYWs7CiAgICAgICAgICAg
IGNhc2UgMHhGMjoKICAgICAgICAgICAgICAgIHUtPnBmeF9pbnNuICA9IDB4RjI7CiAgICAgICAg
ICAgICAgICB1LT5wZnhfcmVwbmUgPSAweEYyOyAKICAgICAgICAgICAgICAgIHUtPnBmeF9yZXgg
ICA9IDA7CiAgICAgICAgICAgICAgICBicmVhazsKICAgICAgICAgICAgY2FzZSAweEYzOgogICAg
ICAgICAgICAgICAgdS0+cGZ4X2luc24gPSAweEYzOwogICAgICAgICAgICAgICAgdS0+cGZ4X3Jl
cCAgPSAweEYzOyAKICAgICAgICAgICAgICAgIHUtPnBmeF9yZXBlID0gMHhGMzsgCiAgICAgICAg
ICAgICAgICB1LT5wZnhfcmV4ICA9IDA7CiAgICAgICAgICAgICAgICBicmVhazsKICAgICAgICAg
ICAgZGVmYXVsdCA6IAogICAgICAgICAgICAgICAgLyogTm8gbW9yZSBwcmVmaXhlcyAqLwogICAg
ICAgICAgICAgICAgaGF2ZV9wZnggPSAwOwogICAgICAgICAgICAgICAgYnJlYWs7CiAgICAgICAg
ICAgIH0KICAgICAgICB9CgogICAgICAgIC8qIGNoZWNrIGlmIHdlIHJlYWNoZWQgbWF4IGluc3Ry
dWN0aW9uIGxlbmd0aCAqLwogICAgICAgIGlmICggaSArIDEgPT0gTUFYX0lOU05fTEVOR1RIICkg
ewogICAgICAgICAgICB1LT5lcnJvciA9IDE7CiAgICAgICAgICAgIGJyZWFrOwogICAgICAgIH0K
ICAgIH0KCiAgICAvKiByZXR1cm4gc3RhdHVzICovCiAgICBpZiAoIHUtPmVycm9yICkgCiAgICAg
ICAgcmV0dXJuIC0xOyAKCiAgICAvKiByZXdpbmQgYmFjayBvbmUgYnl0ZSBpbiBzdHJlYW0sIHNp
bmNlIHRoZSBhYm92ZSBsb29wIAogICAgICogc3RvcHMgd2l0aCBhIG5vbi1wcmVmaXggYnl0ZS4g
CiAgICAgKi8KICAgIGlucF9iYWNrKHUpOwoKICAgIC8qIHNwZWN1bGF0aXZlbHkgZGV0ZXJtaW5l
IHRoZSBlZmZlY3RpdmUgb3BlcmFuZCBtb2RlLAogICAgICogYmFzZWQgb24gdGhlIHByZWZpeGVz
IGFuZCB0aGUgY3VycmVudCBkaXNhc3NlbWJseQogICAgICogbW9kZS4gVGhpcyBtYXkgYmUgaW5h
Y2N1cmF0ZSwgYnV0IHVzZWZ1bCBmb3IgbW9kZQogICAgICogZGVwZW5kZW50IGRlY29kaW5nLgog
ICAgICovCiAgICBpZiAoIHUtPmRpc19tb2RlID09IDY0ICkgewogICAgICAgIHUtPm9wcl9tb2Rl
ID0gUkVYX1coIHUtPnBmeF9yZXggKSA/IDY0IDogKCAoIHUtPnBmeF9vcHIgKSA/IDE2IDogMzIg
KSA7CiAgICAgICAgdS0+YWRyX21vZGUgPSAoIHUtPnBmeF9hZHIgKSA/IDMyIDogNjQ7CiAgICB9
IGVsc2UgaWYgKCB1LT5kaXNfbW9kZSA9PSAzMiApIHsKICAgICAgICB1LT5vcHJfbW9kZSA9ICgg
dS0+cGZ4X29wciApID8gMTYgOiAzMjsKICAgICAgICB1LT5hZHJfbW9kZSA9ICggdS0+cGZ4X2Fk
ciApID8gMTYgOiAzMjsKICAgIH0gZWxzZSBpZiAoIHUtPmRpc19tb2RlID09IDE2ICkgewogICAg
ICAgIHUtPm9wcl9tb2RlID0gKCB1LT5wZnhfb3ByICkgPyAzMiA6IDE2OwogICAgICAgIHUtPmFk
cl9tb2RlID0gKCB1LT5wZnhfYWRyICkgPyAzMiA6IDE2OwogICAgfQoKICAgIHJldHVybiAwOwp9
CgoKLyogU2VhcmNoZXMgdGhlIGluc3RydWN0aW9uIHRhYmxlcyBmb3IgdGhlIHJpZ2h0IGVudHJ5
LgogKi8Kc3RhdGljIGludCBzZWFyY2hfaXRhYiggc3RydWN0IHVkICogdSApCnsKICAgIHN0cnVj
dCB1ZF9pdGFiX2VudHJ5ICogZSA9IE5VTEw7CiAgICBlbnVtIHVkX2l0YWJfaW5kZXggdGFibGU7
CiAgICB1aW50OF90IHBlZWs7CiAgICB1aW50OF90IGRpZF9wZWVrID0gMDsKICAgIHVpbnQ4X3Qg
Y3VycjsgCiAgICB1aW50OF90IGluZGV4OwoKICAgIC8qIGlmIGluIHN0YXRlIG9mIGVycm9yLCBy
ZXR1cm4gKi8KICAgIGlmICggdS0+ZXJyb3IgKSAKICAgICAgICByZXR1cm4gLTE7CgogICAgLyog
Z2V0IGZpcnN0IGJ5dGUgb2Ygb3Bjb2RlLiAqLwogICAgaW5wX25leHQodSk7IAogICAgaWYgKCB1
LT5lcnJvciApIAogICAgICAgIHJldHVybiAtMTsKICAgIGN1cnIgPSBpbnBfY3Vycih1KTsgCgog
ICAgLyogcmVzb2x2ZSB4Y2hnLCBub3AsIHBhdXNlIGNyYXp5bmVzcyAqLwogICAgaWYgKCAweDkw
ID09IGN1cnIgKSB7CiAgICAgICAgaWYgKCAhKCB1LT5kaXNfbW9kZSA9PSA2NCAmJiBSRVhfQigg
dS0+cGZ4X3JleCApICkgKSB7CiAgICAgICAgICAgIGlmICggdS0+cGZ4X3JlcCApIHsKICAgICAg
ICAgICAgICAgIHUtPnBmeF9yZXAgPSAwOwogICAgICAgICAgICAgICAgZSA9ICYgaWVfcGF1c2U7
CiAgICAgICAgICAgIH0gZWxzZSB7CiAgICAgICAgICAgICAgICBlID0gJiBpZV9ub3A7CiAgICAg
ICAgICAgIH0KICAgICAgICAgICAgZ290byBmb3VuZF9lbnRyeTsKICAgICAgICB9CiAgICB9Cgog
ICAgLyogZ2V0IHRvcC1sZXZlbCB0YWJsZSAqLwogICAgaWYgKCAweDBGID09IGN1cnIgKSB7CiAg
ICAgICAgdGFibGUgPSBJVEFCX18wRjsKICAgICAgICBjdXJyICA9IGlucF9uZXh0KHUpOwogICAg
ICAgIGlmICggdS0+ZXJyb3IgKQogICAgICAgICAgICByZXR1cm4gLTE7CgogICAgICAgIC8qIDJi
eXRlIG9wY29kZXMgY2FuIGJlIG1vZGlmaWVkIGJ5IDB4NjYsIEYzLCBhbmQgRjIgcHJlZml4ZXMg
Ki8KICAgICAgICBpZiAoIDB4NjYgPT0gdS0+cGZ4X2luc24gKSB7CiAgICAgICAgICAgIGlmICgg
dWRfaXRhYl9saXN0WyBJVEFCX19QRlhfU1NFNjZfXzBGIF1bIGN1cnIgXS5tbmVtb25pYyAhPSBV
RF9JaW52YWxpZCApIHsKICAgICAgICAgICAgICAgIHRhYmxlID0gSVRBQl9fUEZYX1NTRTY2X18w
RjsKICAgICAgICAgICAgICAgIHUtPnBmeF9vcHIgPSAwOwogICAgICAgICAgICB9CiAgICAgICAg
fSBlbHNlIGlmICggMHhGMiA9PSB1LT5wZnhfaW5zbiApIHsKICAgICAgICAgICAgaWYgKCB1ZF9p
dGFiX2xpc3RbIElUQUJfX1BGWF9TU0VGMl9fMEYgXVsgY3VyciBdLm1uZW1vbmljICE9IFVEX0lp
bnZhbGlkICkgewogICAgICAgICAgICAgICAgdGFibGUgPSBJVEFCX19QRlhfU1NFRjJfXzBGOyAK
ICAgICAgICAgICAgICAgIHUtPnBmeF9yZXBuZSA9IDA7CiAgICAgICAgICAgIH0KICAgICAgICB9
IGVsc2UgaWYgKCAweEYzID09IHUtPnBmeF9pbnNuICkgewogICAgICAgICAgICBpZiAoIHVkX2l0
YWJfbGlzdFsgSVRBQl9fUEZYX1NTRUYzX18wRiBdWyBjdXJyIF0ubW5lbW9uaWMgIT0gVURfSWlu
dmFsaWQgKSB7CiAgICAgICAgICAgICAgICB0YWJsZSA9IElUQUJfX1BGWF9TU0VGM19fMEY7CiAg
ICAgICAgICAgICAgICB1LT5wZnhfcmVwZSA9IDA7CiAgICAgICAgICAgICAgICB1LT5wZnhfcmVw
ICA9IDA7CiAgICAgICAgICAgIH0KICAgICAgICB9CiAgICAvKiBwaWNrIGFuIGluc3RydWN0aW9u
IGZyb20gdGhlIDFieXRlIHRhYmxlICovCiAgICB9IGVsc2UgewogICAgICAgIHRhYmxlID0gSVRB
Ql9fMUJZVEU7IAogICAgfQoKICAgIGluZGV4ID0gY3VycjsKCnNlYXJjaDoKCiAgICBlID0gJiB1
ZF9pdGFiX2xpc3RbIHRhYmxlIF1bIGluZGV4IF07CgogICAgLyogaWYgbW5lbW9uaWMgY29uc3Rh
bnQgaXMgYSBzdGFuZGFyZCBpbnN0cnVjdGlvbiBjb25zdGFudAogICAgICogb3VyIHNlYXJjaCBp
cyBvdmVyLgogICAgICovCiAgICAKICAgIGlmICggZS0+bW5lbW9uaWMgPCBVRF9JZDN2aWwgKSB7
CiAgICAgICAgaWYgKCBlLT5tbmVtb25pYyA9PSBVRF9JaW52YWxpZCApIHsKICAgICAgICAgICAg
aWYgKCBkaWRfcGVlayApIHsKICAgICAgICAgICAgICAgIGlucF9uZXh0KCB1ICk7IGlmICggdS0+
ZXJyb3IgKSByZXR1cm4gLTE7CiAgICAgICAgICAgIH0KICAgICAgICAgICAgZ290byBmb3VuZF9l
bnRyeTsKICAgICAgICB9CiAgICAgICAgZ290byBmb3VuZF9lbnRyeTsKICAgIH0KCiAgICB0YWJs
ZSA9IGUtPnByZWZpeDsKCiAgICBzd2l0Y2ggKCBlLT5tbmVtb25pYyApCiAgICB7CiAgICBjYXNl
IFVEX0lncnBfcmVnOgogICAgICAgIHBlZWsgICAgID0gaW5wX3BlZWsoIHUgKTsKICAgICAgICBk
aWRfcGVlayA9IDE7CiAgICAgICAgaW5kZXggICAgPSBNT0RSTV9SRUcoIHBlZWsgKTsKICAgICAg
ICBicmVhazsKCiAgICBjYXNlIFVEX0lncnBfbW9kOgogICAgICAgIHBlZWsgICAgID0gaW5wX3Bl
ZWsoIHUgKTsKICAgICAgICBkaWRfcGVlayA9IDE7CiAgICAgICAgaW5kZXggICAgPSBNT0RSTV9N
T0QoIHBlZWsgKTsKICAgICAgICBpZiAoIGluZGV4ID09IDMgKQogICAgICAgICAgIGluZGV4ID0g
SVRBQl9fTU9EX0lORFhfXzExOwogICAgICAgIGVsc2UgCiAgICAgICAgICAgaW5kZXggPSBJVEFC
X19NT0RfSU5EWF9fTk9UXzExOyAKICAgICAgICBicmVhazsKCiAgICBjYXNlIFVEX0lncnBfcm06
CiAgICAgICAgY3VyciAgICAgPSBpbnBfbmV4dCggdSApOwogICAgICAgIGRpZF9wZWVrID0gMDsK
ICAgICAgICBpZiAoIHUtPmVycm9yICkKICAgICAgICAgICAgcmV0dXJuIC0xOwogICAgICAgIGlu
ZGV4ICAgID0gTU9EUk1fUk0oIGN1cnIgKTsKICAgICAgICBicmVhazsKCiAgICBjYXNlIFVEX0ln
cnBfeDg3OgogICAgICAgIGN1cnIgICAgID0gaW5wX25leHQoIHUgKTsKICAgICAgICBkaWRfcGVl
ayA9IDA7CiAgICAgICAgaWYgKCB1LT5lcnJvciApCiAgICAgICAgICAgIHJldHVybiAtMTsKICAg
ICAgICBpbmRleCAgICA9IGN1cnIgLSAweEMwOwogICAgICAgIGJyZWFrOwoKICAgIGNhc2UgVURf
SWdycF9vc2l6ZToKICAgICAgICBpZiAoIHUtPm9wcl9tb2RlID09IDY0ICkgCiAgICAgICAgICAg
IGluZGV4ID0gSVRBQl9fTU9ERV9JTkRYX182NDsKICAgICAgICBlbHNlIGlmICggdS0+b3ByX21v
ZGUgPT0gMzIgKSAKICAgICAgICAgICAgaW5kZXggPSBJVEFCX19NT0RFX0lORFhfXzMyOwogICAg
ICAgIGVsc2UKICAgICAgICAgICAgaW5kZXggPSBJVEFCX19NT0RFX0lORFhfXzE2OwogICAgICAg
IGJyZWFrOwogCiAgICBjYXNlIFVEX0lncnBfYXNpemU6CiAgICAgICAgaWYgKCB1LT5hZHJfbW9k
ZSA9PSA2NCApIAogICAgICAgICAgICBpbmRleCA9IElUQUJfX01PREVfSU5EWF9fNjQ7CiAgICAg
ICAgZWxzZSBpZiAoIHUtPmFkcl9tb2RlID09IDMyICkgCiAgICAgICAgICAgIGluZGV4ID0gSVRB
Ql9fTU9ERV9JTkRYX18zMjsKICAgICAgICBlbHNlCiAgICAgICAgICAgIGluZGV4ID0gSVRBQl9f
TU9ERV9JTkRYX18xNjsKICAgICAgICBicmVhazsgICAgICAgICAgICAgICAKCiAgICBjYXNlIFVE
X0lncnBfbW9kZToKICAgICAgICBpZiAoIHUtPmRpc19tb2RlID09IDY0ICkgCiAgICAgICAgICAg
IGluZGV4ID0gSVRBQl9fTU9ERV9JTkRYX182NDsKICAgICAgICBlbHNlIGlmICggdS0+ZGlzX21v
ZGUgPT0gMzIgKSAKICAgICAgICAgICAgaW5kZXggPSBJVEFCX19NT0RFX0lORFhfXzMyOwogICAg
ICAgIGVsc2UKICAgICAgICAgICAgaW5kZXggPSBJVEFCX19NT0RFX0lORFhfXzE2OwogICAgICAg
IGJyZWFrOwoKICAgIGNhc2UgVURfSWdycF92ZW5kb3I6CiAgICAgICAgaWYgKCB1LT52ZW5kb3Ig
PT0gVURfVkVORE9SX0lOVEVMICkgCiAgICAgICAgICAgIGluZGV4ID0gSVRBQl9fVkVORE9SX0lO
RFhfX0lOVEVMOyAKICAgICAgICBlbHNlIGlmICggdS0+dmVuZG9yID09IFVEX1ZFTkRPUl9BTUQg
KQogICAgICAgICAgICBpbmRleCA9IElUQUJfX1ZFTkRPUl9JTkRYX19BTUQ7CiAgICAgICAgZWxz
ZSB7CiAgICAgICAgICAgIGtkYnAoIktEQjpzZWFyY2hfaXRhYigpOiB1bnJlY29nbml6ZWQgdmVu
ZG9yIGlkXG4iKTsKICAgICAgICAgICAgcmV0dXJuIC0xOwogICAgICAgIH0KICAgICAgICBicmVh
azsKCiAgICBjYXNlIFVEX0lkM3ZpbDoKICAgICAgICBrZGJwKCJLREI6c2VhcmNoX2l0YWIoKTog
aW52YWxpZCBpbnN0ciBtbmVtb25pYyBjb25zdGFudCBJZDN2aWxcbiIpOwogICAgICAgIHJldHVy
biAtMTsKCiAgICBkZWZhdWx0OgogICAgICAgIGtkYnAoIktEQjpzZWFyY2hfaXRhYigpOiBpbnZh
bGlkIGluc3RydWN0aW9uIG1uZW1vbmljIGNvbnN0YW50XG4iKTsKICAgICAgICByZXR1cm4gLTE7
CiAgICB9CgogICAgZ290byBzZWFyY2g7Cgpmb3VuZF9lbnRyeToKCiAgICB1LT5pdGFiX2VudHJ5
ID0gZTsKICAgIHUtPm1uZW1vbmljID0gdS0+aXRhYl9lbnRyeS0+bW5lbW9uaWM7CgogICAgcmV0
dXJuIDA7Cn0KCgpzdGF0aWMgdW5zaWduZWQgaW50IHJlc29sdmVfb3BlcmFuZF9zaXplKCBjb25z
dCBzdHJ1Y3QgdWQgKiB1LCB1bnNpZ25lZCBpbnQgcyApCnsKICAgIHN3aXRjaCAoIHMgKSAKICAg
IHsKICAgIGNhc2UgU1pfVjoKICAgICAgICByZXR1cm4gKCB1LT5vcHJfbW9kZSApOwogICAgY2Fz
ZSBTWl9aOiAgCiAgICAgICAgcmV0dXJuICggdS0+b3ByX21vZGUgPT0gMTYgKSA/IDE2IDogMzI7
CiAgICBjYXNlIFNaX1A6ICAKICAgICAgICByZXR1cm4gKCB1LT5vcHJfbW9kZSA9PSAxNiApID8g
U1pfV1AgOiBTWl9EUDsKICAgIGNhc2UgU1pfTURROgogICAgICAgIHJldHVybiAoIHUtPm9wcl9t
b2RlID09IDE2ICkgPyAzMiA6IHUtPm9wcl9tb2RlOwogICAgY2FzZSBTWl9SRFE6CiAgICAgICAg
cmV0dXJuICggdS0+ZGlzX21vZGUgPT0gNjQgKSA/IDY0IDogMzI7CiAgICBkZWZhdWx0OgogICAg
ICAgIHJldHVybiBzOwogICAgfQp9CgoKc3RhdGljIGludCByZXNvbHZlX21uZW1vbmljKCBzdHJ1
Y3QgdWQqIHUgKQp7CiAgLyogZmFyL25lYXIgZmxhZ3MgKi8KICB1LT5icl9mYXIgPSAwOwogIHUt
PmJyX25lYXIgPSAwOwogIC8qIHJlYWRqdXN0IG9wZXJhbmQgc2l6ZXMgZm9yIGNhbGwvam1wIGlu
c3RyY3V0aW9ucyAqLwogIGlmICggdS0+bW5lbW9uaWMgPT0gVURfSWNhbGwgfHwgdS0+bW5lbW9u
aWMgPT0gVURfSWptcCApIHsKICAgIC8qIFdQOiAxNmJpdCBwb2ludGVyICovCiAgICBpZiAoIHUt
Pm9wZXJhbmRbIDAgXS5zaXplID09IFNaX1dQICkgewogICAgICAgIHUtPm9wZXJhbmRbIDAgXS5z
aXplID0gMTY7CiAgICAgICAgdS0+YnJfZmFyID0gMTsKICAgICAgICB1LT5icl9uZWFyPSAwOwog
ICAgLyogRFA6IDMyYml0IHBvaW50ZXIgKi8KICAgIH0gZWxzZSBpZiAoIHUtPm9wZXJhbmRbIDAg
XS5zaXplID09IFNaX0RQICkgewogICAgICAgIHUtPm9wZXJhbmRbIDAgXS5zaXplID0gMzI7CiAg
ICAgICAgdS0+YnJfZmFyID0gMTsKICAgICAgICB1LT5icl9uZWFyPSAwOwogICAgfSBlbHNlIHsK
ICAgICAgICB1LT5icl9mYXIgPSAwOwogICAgICAgIHUtPmJyX25lYXI9IDE7CiAgICB9CiAgLyog
cmVzb2x2ZSAzZG5vdyB3ZWlyZG5lc3MuICovCiAgfSBlbHNlIGlmICggdS0+bW5lbW9uaWMgPT0g
VURfSTNkbm93ICkgewogICAgdS0+bW5lbW9uaWMgPSB1ZF9pdGFiX2xpc3RbIElUQUJfXzNETk9X
IF1bIGlucF9jdXJyKCB1ICkgIF0ubW5lbW9uaWM7CiAgfQogIC8qIFNXQVBHUyBpcyBvbmx5IHZh
bGlkIGluIDY0Yml0cyBtb2RlICovCiAgaWYgKCB1LT5tbmVtb25pYyA9PSBVRF9Jc3dhcGdzICYm
IHUtPmRpc19tb2RlICE9IDY0ICkgewogICAgdS0+ZXJyb3IgPSAxOwogICAgcmV0dXJuIC0xOwog
IH0KCiAgcmV0dXJuIDA7Cn0KCgovKiAtLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLQogKiBkZWNvZGVfYSgp
LSBEZWNvZGVzIG9wZXJhbmRzIG9mIHRoZSB0eXBlIHNlZzpvZmZzZXQKICogLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0KICovCnN0YXRpYyB2b2lkIApkZWNvZGVfYShzdHJ1Y3QgdWQqIHUsIHN0cnVjdCB1
ZF9vcGVyYW5kICpvcCkKewogIGlmICh1LT5vcHJfbW9kZSA9PSAxNikgeyAgCiAgICAvKiBzZWcx
NjpvZmYxNiAqLwogICAgb3AtPnR5cGUgPSBVRF9PUF9QVFI7CiAgICBvcC0+c2l6ZSA9IDMyOwog
ICAgb3AtPmx2YWwucHRyLm9mZiA9IGlucF91aW50MTYodSk7CiAgICBvcC0+bHZhbC5wdHIuc2Vn
ID0gaW5wX3VpbnQxNih1KTsKICB9IGVsc2UgewogICAgLyogc2VnMTY6b2ZmMzIgKi8KICAgIG9w
LT50eXBlID0gVURfT1BfUFRSOwogICAgb3AtPnNpemUgPSA0ODsKICAgIG9wLT5sdmFsLnB0ci5v
ZmYgPSBpbnBfdWludDMyKHUpOwogICAgb3AtPmx2YWwucHRyLnNlZyA9IGlucF91aW50MTYodSk7
CiAgfQp9CgovKiAtLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLQogKiBkZWNvZGVfZ3ByKCkgLSBSZXR1cm5z
IGRlY29kZWQgR2VuZXJhbCBQdXJwb3NlIFJlZ2lzdGVyIAogKiAtLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LQogKi8Kc3RhdGljIGVudW0gdWRfdHlwZSAKZGVjb2RlX2dwcihyZWdpc3RlciBzdHJ1Y3QgdWQq
IHUsIHVuc2lnbmVkIGludCBzLCB1bnNpZ25lZCBjaGFyIHJtKQp7CiAgcyA9IHJlc29sdmVfb3Bl
cmFuZF9zaXplKHUsIHMpOwogICAgICAgIAogIHN3aXRjaCAocykgewogICAgY2FzZSA2NDoKICAg
ICAgICByZXR1cm4gVURfUl9SQVggKyBybTsKICAgIGNhc2UgU1pfRFA6CiAgICBjYXNlIDMyOgog
ICAgICAgIHJldHVybiBVRF9SX0VBWCArIHJtOwogICAgY2FzZSBTWl9XUDoKICAgIGNhc2UgMTY6
CiAgICAgICAgcmV0dXJuIFVEX1JfQVggICsgcm07CiAgICBjYXNlICA4OgogICAgICAgIGlmICh1
LT5kaXNfbW9kZSA9PSA2NCAmJiB1LT5wZnhfcmV4KSB7CiAgICAgICAgICAgIGlmIChybSA+PSA0
KQogICAgICAgICAgICAgICAgcmV0dXJuIFVEX1JfU1BMICsgKHJtLTQpOwogICAgICAgICAgICBy
ZXR1cm4gVURfUl9BTCArIHJtOwogICAgICAgIH0gZWxzZSByZXR1cm4gVURfUl9BTCArIHJtOwog
ICAgZGVmYXVsdDoKICAgICAgICByZXR1cm4gMDsKICB9Cn0KCi8qIC0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tCiAqIHJlc29sdmVfZ3ByNjQoKSAtIDY0Yml0IEdlbmVyYWwgUHVycG9zZSBSZWdpc3Rlci1T
ZWxlY3Rpb24uIAogKiAtLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLQogKi8Kc3RhdGljIGVudW0gdWRfdHlw
ZSAKcmVzb2x2ZV9ncHI2NChzdHJ1Y3QgdWQqIHUsIGVudW0gdWRfb3BlcmFuZF9jb2RlIGdwcl9v
cCkKewogIGlmIChncHJfb3AgPj0gT1BfckFYcjggJiYgZ3ByX29wIDw9IE9QX3JESXIxNSkKICAg
IGdwcl9vcCA9IChncHJfb3AgLSBPUF9yQVhyOCkgfCAoUkVYX0IodS0+cGZ4X3JleCkgPDwgMyk7
ICAgICAgICAgIAogIGVsc2UgIGdwcl9vcCA9IChncHJfb3AgLSBPUF9yQVgpOwoKICBpZiAodS0+
b3ByX21vZGUgPT0gMTYpCiAgICByZXR1cm4gZ3ByX29wICsgVURfUl9BWDsKICBpZiAodS0+ZGlz
X21vZGUgPT0gMzIgfHwgCiAgICAodS0+b3ByX21vZGUgPT0gMzIgJiYgISAoUkVYX1codS0+cGZ4
X3JleCkgfHwgdS0+ZGVmYXVsdDY0KSkpIHsKICAgIHJldHVybiBncHJfb3AgKyBVRF9SX0VBWDsK
ICB9CgogIHJldHVybiBncHJfb3AgKyBVRF9SX1JBWDsKfQoKLyogLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0KICogcmVzb2x2ZV9ncHIzMiAoKSAtIDMyYml0IEdlbmVyYWwgUHVycG9zZSBSZWdpc3Rlci1T
ZWxlY3Rpb24uIAogKiAtLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLQogKi8Kc3RhdGljIGVudW0gdWRfdHlw
ZSAKcmVzb2x2ZV9ncHIzMihzdHJ1Y3QgdWQqIHUsIGVudW0gdWRfb3BlcmFuZF9jb2RlIGdwcl9v
cCkKewogIGdwcl9vcCA9IGdwcl9vcCAtIE9QX2VBWDsKCiAgaWYgKHUtPm9wcl9tb2RlID09IDE2
KSAKICAgIHJldHVybiBncHJfb3AgKyBVRF9SX0FYOwoKICByZXR1cm4gZ3ByX29wICsgIFVEX1Jf
RUFYOwp9CgovKiAtLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLQogKiByZXNvbHZlX3JlZygpIC0gUmVzb2x2
ZXMgdGhlIHJlZ2lzdGVyIHR5cGUgCiAqIC0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tCiAqLwpzdGF0aWMg
ZW51bSB1ZF90eXBlIApyZXNvbHZlX3JlZyhzdHJ1Y3QgdWQqIHUsIHVuc2lnbmVkIGludCB0eXBl
LCB1bnNpZ25lZCBjaGFyIGkpCnsKICBzd2l0Y2ggKHR5cGUpIHsKICAgIGNhc2UgVF9NTVggOiAg
ICByZXR1cm4gVURfUl9NTTAgICsgKGkgJiA3KTsKICAgIGNhc2UgVF9YTU0gOiAgICByZXR1cm4g
VURfUl9YTU0wICsgaTsKICAgIGNhc2UgVF9DUkcgOiAgICByZXR1cm4gVURfUl9DUjAgICsgaTsK
ICAgIGNhc2UgVF9EQkcgOiAgICByZXR1cm4gVURfUl9EUjAgICsgaTsKICAgIGNhc2UgVF9TRUcg
OiAgICByZXR1cm4gVURfUl9FUyAgICsgKGkgJiA3KTsKICAgIGNhc2UgVF9OT05FOgogICAgZGVm
YXVsdDogICAgcmV0dXJuIFVEX05PTkU7CiAgfQp9CgovKiAtLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLQog
KiBkZWNvZGVfaW1tKCkgLSBEZWNvZGVzIEltbWVkaWF0ZSB2YWx1ZXMuCiAqIC0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tCiAqLwpzdGF0aWMgdm9pZCAKZGVjb2RlX2ltbShzdHJ1Y3QgdWQqIHUsIHVuc2ln
bmVkIGludCBzLCBzdHJ1Y3QgdWRfb3BlcmFuZCAqb3ApCnsKICBvcC0+c2l6ZSA9IHJlc29sdmVf
b3BlcmFuZF9zaXplKHUsIHMpOwogIG9wLT50eXBlID0gVURfT1BfSU1NOwoKICBzd2l0Y2ggKG9w
LT5zaXplKSB7CiAgICBjYXNlICA4OiBvcC0+bHZhbC5zYnl0ZSA9IGlucF91aW50OCh1KTsgICBi
cmVhazsKICAgIGNhc2UgMTY6IG9wLT5sdmFsLnV3b3JkID0gaW5wX3VpbnQxNih1KTsgIGJyZWFr
OwogICAgY2FzZSAzMjogb3AtPmx2YWwudWR3b3JkID0gaW5wX3VpbnQzMih1KTsgYnJlYWs7CiAg
ICBjYXNlIDY0OiBvcC0+bHZhbC51cXdvcmQgPSBpbnBfdWludDY0KHUpOyBicmVhazsKICAgIGRl
ZmF1bHQ6IHJldHVybjsKICB9Cn0KCi8qIC0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tCiAqIGRlY29kZV9t
b2RybSgpIC0gRGVjb2RlcyBNb2RSTSBCeXRlCiAqIC0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tCiAqLwpz
dGF0aWMgdm9pZCAKZGVjb2RlX21vZHJtKHN0cnVjdCB1ZCogdSwgc3RydWN0IHVkX29wZXJhbmQg
Km9wLCB1bnNpZ25lZCBpbnQgcywgCiAgICAgICAgIHVuc2lnbmVkIGNoYXIgcm1fdHlwZSwgc3Ry
dWN0IHVkX29wZXJhbmQgKm9wcmVnLCAKICAgICAgICAgdW5zaWduZWQgY2hhciByZWdfc2l6ZSwg
dW5zaWduZWQgY2hhciByZWdfdHlwZSkKewogIHVuc2lnbmVkIGNoYXIgbW9kLCBybSwgcmVnOwoK
ICBpbnBfbmV4dCh1KTsKCiAgLyogZ2V0IG1vZCwgci9tIGFuZCByZWcgZmllbGRzICovCiAgbW9k
ID0gTU9EUk1fTU9EKGlucF9jdXJyKHUpKTsKICBybSAgPSAoUkVYX0IodS0+cGZ4X3JleCkgPDwg
MykgfCBNT0RSTV9STShpbnBfY3Vycih1KSk7CiAgcmVnID0gKFJFWF9SKHUtPnBmeF9yZXgpIDw8
IDMpIHwgTU9EUk1fUkVHKGlucF9jdXJyKHUpKTsKCiAgb3AtPnNpemUgPSByZXNvbHZlX29wZXJh
bmRfc2l6ZSh1LCBzKTsKCiAgLyogaWYgbW9kIGlzIDExYiwgdGhlbiB0aGUgVURfUl9tIHNwZWNp
ZmllcyBhIGdwci9tbXgvc3NlL2NvbnRyb2wvZGVidWcgKi8KICBpZiAobW9kID09IDMpIHsKICAg
IG9wLT50eXBlID0gVURfT1BfUkVHOwogICAgaWYgKHJtX3R5cGUgPT0gIFRfR1BSKQogICAgICAg
IG9wLT5iYXNlID0gZGVjb2RlX2dwcih1LCBvcC0+c2l6ZSwgcm0pOwogICAgZWxzZSAgICBvcC0+
YmFzZSA9IHJlc29sdmVfcmVnKHUsIHJtX3R5cGUsIChSRVhfQih1LT5wZnhfcmV4KSA8PCAzKSB8
IChybSY3KSk7CiAgfSAKICAvKiBlbHNlIGl0cyBtZW1vcnkgYWRkcmVzc2luZyAqLyAgCiAgZWxz
ZSB7CiAgICBvcC0+dHlwZSA9IFVEX09QX01FTTsKCiAgICAvKiA2NGJpdCBhZGRyZXNzaW5nICov
CiAgICBpZiAodS0+YWRyX21vZGUgPT0gNjQpIHsKCiAgICAgICAgb3AtPmJhc2UgPSBVRF9SX1JB
WCArIHJtOwoKICAgICAgICAvKiBnZXQgb2Zmc2V0IHR5cGUgKi8KICAgICAgICBpZiAobW9kID09
IDEpCiAgICAgICAgICAgIG9wLT5vZmZzZXQgPSA4OwogICAgICAgIGVsc2UgaWYgKG1vZCA9PSAy
KQogICAgICAgICAgICBvcC0+b2Zmc2V0ID0gMzI7CiAgICAgICAgZWxzZSBpZiAobW9kID09IDAg
JiYgKHJtICYgNykgPT0gNSkgeyAgICAgICAgICAgCiAgICAgICAgICAgIG9wLT5iYXNlID0gVURf
Ul9SSVA7CiAgICAgICAgICAgIG9wLT5vZmZzZXQgPSAzMjsKICAgICAgICB9IGVsc2UgIG9wLT5v
ZmZzZXQgPSAwOwoKICAgICAgICAvKiBTY2FsZS1JbmRleC1CYXNlIChTSUIpICovCiAgICAgICAg
aWYgKChybSAmIDcpID09IDQpIHsKICAgICAgICAgICAgaW5wX25leHQodSk7CiAgICAgICAgICAg
IAogICAgICAgICAgICBvcC0+c2NhbGUgPSAoMSA8PCBTSUJfUyhpbnBfY3Vycih1KSkpICYgfjE7
CiAgICAgICAgICAgIG9wLT5pbmRleCA9IFVEX1JfUkFYICsgKFNJQl9JKGlucF9jdXJyKHUpKSB8
IChSRVhfWCh1LT5wZnhfcmV4KSA8PCAzKSk7CiAgICAgICAgICAgIG9wLT5iYXNlICA9IFVEX1Jf
UkFYICsgKFNJQl9CKGlucF9jdXJyKHUpKSB8IChSRVhfQih1LT5wZnhfcmV4KSA8PCAzKSk7Cgog
ICAgICAgICAgICAvKiBzcGVjaWFsIGNvbmRpdGlvbnMgZm9yIGJhc2UgcmVmZXJlbmNlICovCiAg
ICAgICAgICAgIGlmIChvcC0+aW5kZXggPT0gVURfUl9SU1ApIHsKICAgICAgICAgICAgICAgIG9w
LT5pbmRleCA9IFVEX05PTkU7CiAgICAgICAgICAgICAgICBvcC0+c2NhbGUgPSBVRF9OT05FOwog
ICAgICAgICAgICB9CgogICAgICAgICAgICBpZiAob3AtPmJhc2UgPT0gVURfUl9SQlAgfHwgb3At
PmJhc2UgPT0gVURfUl9SMTMpIHsKICAgICAgICAgICAgICAgIGlmIChtb2QgPT0gMCkgCiAgICAg
ICAgICAgICAgICAgICAgb3AtPmJhc2UgPSBVRF9OT05FOwogICAgICAgICAgICAgICAgaWYgKG1v
ZCA9PSAxKQogICAgICAgICAgICAgICAgICAgIG9wLT5vZmZzZXQgPSA4OwogICAgICAgICAgICAg
ICAgZWxzZSBvcC0+b2Zmc2V0ID0gMzI7CiAgICAgICAgICAgIH0KICAgICAgICB9CiAgICB9IAoK
ICAgIC8qIDMyLUJpdCBhZGRyZXNzaW5nIG1vZGUgKi8KICAgIGVsc2UgaWYgKHUtPmFkcl9tb2Rl
ID09IDMyKSB7CgogICAgICAgIC8qIGdldCBiYXNlICovCiAgICAgICAgb3AtPmJhc2UgPSBVRF9S
X0VBWCArIHJtOwoKICAgICAgICAvKiBnZXQgb2Zmc2V0IHR5cGUgKi8KICAgICAgICBpZiAobW9k
ID09IDEpCiAgICAgICAgICAgIG9wLT5vZmZzZXQgPSA4OwogICAgICAgIGVsc2UgaWYgKG1vZCA9
PSAyKQogICAgICAgICAgICBvcC0+b2Zmc2V0ID0gMzI7CiAgICAgICAgZWxzZSBpZiAobW9kID09
IDAgJiYgcm0gPT0gNSkgewogICAgICAgICAgICBvcC0+YmFzZSA9IFVEX05PTkU7CiAgICAgICAg
ICAgIG9wLT5vZmZzZXQgPSAzMjsKICAgICAgICB9IGVsc2UgIG9wLT5vZmZzZXQgPSAwOwoKICAg
ICAgICAvKiBTY2FsZS1JbmRleC1CYXNlIChTSUIpICovCiAgICAgICAgaWYgKChybSAmIDcpID09
IDQpIHsKICAgICAgICAgICAgaW5wX25leHQodSk7CgogICAgICAgICAgICBvcC0+c2NhbGUgPSAo
MSA8PCBTSUJfUyhpbnBfY3Vycih1KSkpICYgfjE7CiAgICAgICAgICAgIG9wLT5pbmRleCA9IFVE
X1JfRUFYICsgKFNJQl9JKGlucF9jdXJyKHUpKSB8IChSRVhfWCh1LT5wZnhfcmV4KSA8PCAzKSk7
CiAgICAgICAgICAgIG9wLT5iYXNlICA9IFVEX1JfRUFYICsgKFNJQl9CKGlucF9jdXJyKHUpKSB8
IChSRVhfQih1LT5wZnhfcmV4KSA8PCAzKSk7CgogICAgICAgICAgICBpZiAob3AtPmluZGV4ID09
IFVEX1JfRVNQKSB7CiAgICAgICAgICAgICAgICBvcC0+aW5kZXggPSBVRF9OT05FOwogICAgICAg
ICAgICAgICAgb3AtPnNjYWxlID0gVURfTk9ORTsKICAgICAgICAgICAgfQoKICAgICAgICAgICAg
Lyogc3BlY2lhbCBjb25kaXRpb24gZm9yIGJhc2UgcmVmZXJlbmNlICovCiAgICAgICAgICAgIGlm
IChvcC0+YmFzZSA9PSBVRF9SX0VCUCkgewogICAgICAgICAgICAgICAgaWYgKG1vZCA9PSAwKQog
ICAgICAgICAgICAgICAgICAgIG9wLT5iYXNlID0gVURfTk9ORTsKICAgICAgICAgICAgICAgIGlm
IChtb2QgPT0gMSkKICAgICAgICAgICAgICAgICAgICBvcC0+b2Zmc2V0ID0gODsKICAgICAgICAg
ICAgICAgIGVsc2Ugb3AtPm9mZnNldCA9IDMyOwogICAgICAgICAgICB9CiAgICAgICAgfQogICAg
fSAKCiAgICAvKiAxNmJpdCBhZGRyZXNzaW5nIG1vZGUgKi8KICAgIGVsc2UgIHsKICAgICAgICBz
d2l0Y2ggKHJtKSB7CiAgICAgICAgICAgIGNhc2UgMDogb3AtPmJhc2UgPSBVRF9SX0JYOyBvcC0+
aW5kZXggPSBVRF9SX1NJOyBicmVhazsKICAgICAgICAgICAgY2FzZSAxOiBvcC0+YmFzZSA9IFVE
X1JfQlg7IG9wLT5pbmRleCA9IFVEX1JfREk7IGJyZWFrOwogICAgICAgICAgICBjYXNlIDI6IG9w
LT5iYXNlID0gVURfUl9CUDsgb3AtPmluZGV4ID0gVURfUl9TSTsgYnJlYWs7CiAgICAgICAgICAg
IGNhc2UgMzogb3AtPmJhc2UgPSBVRF9SX0JQOyBvcC0+aW5kZXggPSBVRF9SX0RJOyBicmVhazsK
ICAgICAgICAgICAgY2FzZSA0OiBvcC0+YmFzZSA9IFVEX1JfU0k7IGJyZWFrOwogICAgICAgICAg
ICBjYXNlIDU6IG9wLT5iYXNlID0gVURfUl9ESTsgYnJlYWs7CiAgICAgICAgICAgIGNhc2UgNjog
b3AtPmJhc2UgPSBVRF9SX0JQOyBicmVhazsKICAgICAgICAgICAgY2FzZSA3OiBvcC0+YmFzZSA9
IFVEX1JfQlg7IGJyZWFrOwogICAgICAgIH0KCiAgICAgICAgaWYgKG1vZCA9PSAwICYmIHJtID09
IDYpIHsKICAgICAgICAgICAgb3AtPm9mZnNldD0gMTY7CiAgICAgICAgICAgIG9wLT5iYXNlID0g
VURfTk9ORTsKICAgICAgICB9CiAgICAgICAgZWxzZSBpZiAobW9kID09IDEpCiAgICAgICAgICAg
IG9wLT5vZmZzZXQgPSA4OwogICAgICAgIGVsc2UgaWYgKG1vZCA9PSAyKSAKICAgICAgICAgICAg
b3AtPm9mZnNldCA9IDE2OwogICAgfQogIH0gIAoKICAvKiBleHRyYWN0IG9mZnNldCwgaWYgYW55
ICovCiAgc3dpdGNoKG9wLT5vZmZzZXQpIHsKICAgIGNhc2UgOCA6IG9wLT5sdmFsLnVieXRlICA9
IGlucF91aW50OCh1KTsgIGJyZWFrOwogICAgY2FzZSAxNjogb3AtPmx2YWwudXdvcmQgID0gaW5w
X3VpbnQxNih1KTsgIGJyZWFrOwogICAgY2FzZSAzMjogb3AtPmx2YWwudWR3b3JkID0gaW5wX3Vp
bnQzMih1KTsgYnJlYWs7CiAgICBjYXNlIDY0OiBvcC0+bHZhbC51cXdvcmQgPSBpbnBfdWludDY0
KHUpOyBicmVhazsKICAgIGRlZmF1bHQ6IGJyZWFrOwogIH0KCiAgLyogcmVzb2x2ZSByZWdpc3Rl
ciBlbmNvZGVkIGluIHJlZyBmaWVsZCAqLwogIGlmIChvcHJlZykgewogICAgb3ByZWctPnR5cGUg
PSBVRF9PUF9SRUc7CiAgICBvcHJlZy0+c2l6ZSA9IHJlc29sdmVfb3BlcmFuZF9zaXplKHUsIHJl
Z19zaXplKTsKICAgIGlmIChyZWdfdHlwZSA9PSBUX0dQUikgCiAgICAgICAgb3ByZWctPmJhc2Ug
PSBkZWNvZGVfZ3ByKHUsIG9wcmVnLT5zaXplLCByZWcpOwogICAgZWxzZSBvcHJlZy0+YmFzZSA9
IHJlc29sdmVfcmVnKHUsIHJlZ190eXBlLCByZWcpOwogIH0KfQoKLyogLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0KICogZGVjb2RlX28oKSAtIERlY29kZXMgb2Zmc2V0CiAqIC0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tCiAqLwpzdGF0aWMgdm9pZCAKZGVjb2RlX28oc3RydWN0IHVkKiB1LCB1bnNpZ25lZCBpbnQg
cywgc3RydWN0IHVkX29wZXJhbmQgKm9wKQp7CiAgc3dpdGNoICh1LT5hZHJfbW9kZSkgewogICAg
Y2FzZSA2NDoKICAgICAgICBvcC0+b2Zmc2V0ID0gNjQ7IAogICAgICAgIG9wLT5sdmFsLnVxd29y
ZCA9IGlucF91aW50NjQodSk7IAogICAgICAgIGJyZWFrOwogICAgY2FzZSAzMjoKICAgICAgICBv
cC0+b2Zmc2V0ID0gMzI7IAogICAgICAgIG9wLT5sdmFsLnVkd29yZCA9IGlucF91aW50MzIodSk7
IAogICAgICAgIGJyZWFrOwogICAgY2FzZSAxNjoKICAgICAgICBvcC0+b2Zmc2V0ID0gMTY7IAog
ICAgICAgIG9wLT5sdmFsLnV3b3JkICA9IGlucF91aW50MTYodSk7IAogICAgICAgIGJyZWFrOwog
ICAgZGVmYXVsdDoKICAgICAgICByZXR1cm47CiAgfQogIG9wLT50eXBlID0gVURfT1BfTUVNOwog
IG9wLT5zaXplID0gcmVzb2x2ZV9vcGVyYW5kX3NpemUodSwgcyk7Cn0KCi8qIC0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tCiAqIGRpc2FzbV9vcGVyYW5kcygpIC0gRGlzYXNzZW1ibGVzIE9wZXJhbmRzLgog
KiAtLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLQogKi8Kc3RhdGljIGludCBkaXNhc21fb3BlcmFuZHMocmVn
aXN0ZXIgc3RydWN0IHVkKiB1KQp7CgoKICAvKiBtb3BYdCA9IG1hcCBlbnRyeSwgb3BlcmFuZCBY
LCB0eXBlOyAqLwogIGVudW0gdWRfb3BlcmFuZF9jb2RlIG1vcDF0ID0gdS0+aXRhYl9lbnRyeS0+
b3BlcmFuZDEudHlwZTsKICBlbnVtIHVkX29wZXJhbmRfY29kZSBtb3AydCA9IHUtPml0YWJfZW50
cnktPm9wZXJhbmQyLnR5cGU7CiAgZW51bSB1ZF9vcGVyYW5kX2NvZGUgbW9wM3QgPSB1LT5pdGFi
X2VudHJ5LT5vcGVyYW5kMy50eXBlOwoKICAvKiBtb3BYcyA9IG1hcCBlbnRyeSwgb3BlcmFuZCBY
LCBzaXplICovCiAgdW5zaWduZWQgaW50IG1vcDFzID0gdS0+aXRhYl9lbnRyeS0+b3BlcmFuZDEu
c2l6ZTsKICB1bnNpZ25lZCBpbnQgbW9wMnMgPSB1LT5pdGFiX2VudHJ5LT5vcGVyYW5kMi5zaXpl
OwogIHVuc2lnbmVkIGludCBtb3AzcyA9IHUtPml0YWJfZW50cnktPm9wZXJhbmQzLnNpemU7Cgog
IC8qIGlvcCA9IGluc3RydWN0aW9uIG9wZXJhbmQgKi8KICByZWdpc3RlciBzdHJ1Y3QgdWRfb3Bl
cmFuZCogaW9wID0gdS0+b3BlcmFuZDsKICAgIAogIHN3aXRjaChtb3AxdCkgewogICAgCiAgICBj
YXNlIE9QX0EgOgogICAgICAgIGRlY29kZV9hKHUsICYoaW9wWzBdKSk7CiAgICAgICAgYnJlYWs7
CiAgICAKICAgIC8qIE1bYl0gLi4uICovCiAgICBjYXNlIE9QX00gOgogICAgICAgIGlmIChNT0RS
TV9NT0QoaW5wX3BlZWsodSkpID09IDMpCiAgICAgICAgICAgIHUtPmVycm9yPSAxOwogICAgLyog
RSwgRy9QL1YvSS9DTC8xL1MgKi8KICAgIGNhc2UgT1BfRSA6CiAgICAgICAgaWYgKG1vcDJ0ID09
IE9QX0cpIHsKICAgICAgICAgICAgZGVjb2RlX21vZHJtKHUsICYoaW9wWzBdKSwgbW9wMXMsIFRf
R1BSLCAmKGlvcFsxXSksIG1vcDJzLCBUX0dQUik7CiAgICAgICAgICAgIGlmIChtb3AzdCA9PSBP
UF9JKQogICAgICAgICAgICAgICAgZGVjb2RlX2ltbSh1LCBtb3AzcywgJihpb3BbMl0pKTsKICAg
ICAgICAgICAgZWxzZSBpZiAobW9wM3QgPT0gT1BfQ0wpIHsKICAgICAgICAgICAgICAgIGlvcFsy
XS50eXBlID0gVURfT1BfUkVHOwogICAgICAgICAgICAgICAgaW9wWzJdLmJhc2UgPSBVRF9SX0NM
OwogICAgICAgICAgICAgICAgaW9wWzJdLnNpemUgPSA4OwogICAgICAgICAgICB9CiAgICAgICAg
fQogICAgICAgIGVsc2UgaWYgKG1vcDJ0ID09IE9QX1ApCiAgICAgICAgICAgIGRlY29kZV9tb2Ry
bSh1LCAmKGlvcFswXSksIG1vcDFzLCBUX0dQUiwgJihpb3BbMV0pLCBtb3AycywgVF9NTVgpOwog
ICAgICAgIGVsc2UgaWYgKG1vcDJ0ID09IE9QX1YpCiAgICAgICAgICAgIGRlY29kZV9tb2RybSh1
LCAmKGlvcFswXSksIG1vcDFzLCBUX0dQUiwgJihpb3BbMV0pLCBtb3AycywgVF9YTU0pOwogICAg
ICAgIGVsc2UgaWYgKG1vcDJ0ID09IE9QX1MpCiAgICAgICAgICAgIGRlY29kZV9tb2RybSh1LCAm
KGlvcFswXSksIG1vcDFzLCBUX0dQUiwgJihpb3BbMV0pLCBtb3AycywgVF9TRUcpOwogICAgICAg
IGVsc2UgewogICAgICAgICAgICBkZWNvZGVfbW9kcm0odSwgJihpb3BbMF0pLCBtb3AxcywgVF9H
UFIsIE5VTEwsIDAsIFRfTk9ORSk7CiAgICAgICAgICAgIGlmIChtb3AydCA9PSBPUF9DTCkgewog
ICAgICAgICAgICAgICAgaW9wWzFdLnR5cGUgPSBVRF9PUF9SRUc7CiAgICAgICAgICAgICAgICBp
b3BbMV0uYmFzZSA9IFVEX1JfQ0w7CiAgICAgICAgICAgICAgICBpb3BbMV0uc2l6ZSA9IDg7CiAg
ICAgICAgICAgIH0gZWxzZSBpZiAobW9wMnQgPT0gT1BfSTEpIHsKICAgICAgICAgICAgICAgIGlv
cFsxXS50eXBlID0gVURfT1BfQ09OU1Q7CiAgICAgICAgICAgICAgICB1LT5vcGVyYW5kWzFdLmx2
YWwudWR3b3JkID0gMTsKICAgICAgICAgICAgfSBlbHNlIGlmIChtb3AydCA9PSBPUF9JKSB7CiAg
ICAgICAgICAgICAgICBkZWNvZGVfaW1tKHUsIG1vcDJzLCAmKGlvcFsxXSkpOwogICAgICAgICAg
ICB9CiAgICAgICAgfQogICAgICAgIGJyZWFrOwoKICAgIC8qIEcsIEUvUFJbLEldL1ZSICovCiAg
ICBjYXNlIE9QX0cgOgogICAgICAgIGlmIChtb3AydCA9PSBPUF9NKSB7CiAgICAgICAgICAgIGlm
IChNT0RSTV9NT0QoaW5wX3BlZWsodSkpID09IDMpCiAgICAgICAgICAgICAgICB1LT5lcnJvcj0g
MTsKICAgICAgICAgICAgZGVjb2RlX21vZHJtKHUsICYoaW9wWzFdKSwgbW9wMnMsIFRfR1BSLCAm
KGlvcFswXSksIG1vcDFzLCBUX0dQUik7CiAgICAgICAgfSBlbHNlIGlmIChtb3AydCA9PSBPUF9F
KSB7CiAgICAgICAgICAgIGRlY29kZV9tb2RybSh1LCAmKGlvcFsxXSksIG1vcDJzLCBUX0dQUiwg
Jihpb3BbMF0pLCBtb3AxcywgVF9HUFIpOwogICAgICAgICAgICBpZiAobW9wM3QgPT0gT1BfSSkK
ICAgICAgICAgICAgICAgIGRlY29kZV9pbW0odSwgbW9wM3MsICYoaW9wWzJdKSk7CiAgICAgICAg
fSBlbHNlIGlmIChtb3AydCA9PSBPUF9QUikgewogICAgICAgICAgICBkZWNvZGVfbW9kcm0odSwg
Jihpb3BbMV0pLCBtb3AycywgVF9NTVgsICYoaW9wWzBdKSwgbW9wMXMsIFRfR1BSKTsKICAgICAg
ICAgICAgaWYgKG1vcDN0ID09IE9QX0kpCiAgICAgICAgICAgICAgICBkZWNvZGVfaW1tKHUsIG1v
cDNzLCAmKGlvcFsyXSkpOwogICAgICAgIH0gZWxzZSBpZiAobW9wMnQgPT0gT1BfVlIpIHsKICAg
ICAgICAgICAgaWYgKE1PRFJNX01PRChpbnBfcGVlayh1KSkgIT0gMykKICAgICAgICAgICAgICAg
IHUtPmVycm9yID0gMTsKICAgICAgICAgICAgZGVjb2RlX21vZHJtKHUsICYoaW9wWzFdKSwgbW9w
MnMsIFRfWE1NLCAmKGlvcFswXSksIG1vcDFzLCBUX0dQUik7CiAgICAgICAgfSBlbHNlIGlmICht
b3AydCA9PSBPUF9XKQogICAgICAgICAgICBkZWNvZGVfbW9kcm0odSwgJihpb3BbMV0pLCBtb3Ay
cywgVF9YTU0sICYoaW9wWzBdKSwgbW9wMXMsIFRfR1BSKTsKICAgICAgICBicmVhazsKCiAgICAv
KiBBTC4uQkgsIEkvTy9EWCAqLwogICAgY2FzZSBPUF9BTCA6IGNhc2UgT1BfQ0wgOiBjYXNlIE9Q
X0RMIDogY2FzZSBPUF9CTCA6CiAgICBjYXNlIE9QX0FIIDogY2FzZSBPUF9DSCA6IGNhc2UgT1Bf
REggOiBjYXNlIE9QX0JIIDoKCiAgICAgICAgaW9wWzBdLnR5cGUgPSBVRF9PUF9SRUc7CiAgICAg
ICAgaW9wWzBdLmJhc2UgPSBVRF9SX0FMICsgKG1vcDF0IC0gT1BfQUwpOwogICAgICAgIGlvcFsw
XS5zaXplID0gODsKCiAgICAgICAgaWYgKG1vcDJ0ID09IE9QX0kpCiAgICAgICAgICAgIGRlY29k
ZV9pbW0odSwgbW9wMnMsICYoaW9wWzFdKSk7CiAgICAgICAgZWxzZSBpZiAobW9wMnQgPT0gT1Bf
RFgpIHsKICAgICAgICAgICAgaW9wWzFdLnR5cGUgPSBVRF9PUF9SRUc7CiAgICAgICAgICAgIGlv
cFsxXS5iYXNlID0gVURfUl9EWDsKICAgICAgICAgICAgaW9wWzFdLnNpemUgPSAxNjsKICAgICAg
ICB9CiAgICAgICAgZWxzZSBpZiAobW9wMnQgPT0gT1BfTykKICAgICAgICAgICAgZGVjb2RlX28o
dSwgbW9wMnMsICYoaW9wWzFdKSk7CiAgICAgICAgYnJlYWs7CgogICAgLyogckFYW3I4XS4uckRJ
W3IxNV0sIEkvckFYLi5yREkvTyAqLwogICAgY2FzZSBPUF9yQVhyOCA6IGNhc2UgT1BfckNYcjkg
OiBjYXNlIE9QX3JEWHIxMCA6IGNhc2UgT1BfckJYcjExIDoKICAgIGNhc2UgT1BfclNQcjEyOiBj
YXNlIE9QX3JCUHIxMzogY2FzZSBPUF9yU0lyMTQgOiBjYXNlIE9QX3JESXIxNSA6CiAgICBjYXNl
IE9QX3JBWCA6IGNhc2UgT1BfckNYIDogY2FzZSBPUF9yRFggOiBjYXNlIE9QX3JCWCA6CiAgICBj
YXNlIE9QX3JTUCA6IGNhc2UgT1BfckJQIDogY2FzZSBPUF9yU0kgOiBjYXNlIE9QX3JESSA6Cgog
ICAgICAgIGlvcFswXS50eXBlID0gVURfT1BfUkVHOwogICAgICAgIGlvcFswXS5iYXNlID0gcmVz
b2x2ZV9ncHI2NCh1LCBtb3AxdCk7CgogICAgICAgIGlmIChtb3AydCA9PSBPUF9JKQogICAgICAg
ICAgICBkZWNvZGVfaW1tKHUsIG1vcDJzLCAmKGlvcFsxXSkpOwogICAgICAgIGVsc2UgaWYgKG1v
cDJ0ID49IE9QX3JBWCAmJiBtb3AydCA8PSBPUF9yREkpIHsKICAgICAgICAgICAgaW9wWzFdLnR5
cGUgPSBVRF9PUF9SRUc7CiAgICAgICAgICAgIGlvcFsxXS5iYXNlID0gcmVzb2x2ZV9ncHI2NCh1
LCBtb3AydCk7CiAgICAgICAgfQogICAgICAgIGVsc2UgaWYgKG1vcDJ0ID09IE9QX08pIHsKICAg
ICAgICAgICAgZGVjb2RlX28odSwgbW9wMnMsICYoaW9wWzFdKSk7ICAKICAgICAgICAgICAgaW9w
WzBdLnNpemUgPSByZXNvbHZlX29wZXJhbmRfc2l6ZSh1LCBtb3Aycyk7CiAgICAgICAgfQogICAg
ICAgIGJyZWFrOwoKICAgIC8qIEFMW3I4Yl0uLkJIW3IxNWJdLCBJICovCiAgICBjYXNlIE9QX0FM
cjhiIDogY2FzZSBPUF9DTHI5YiA6IGNhc2UgT1BfRExyMTBiIDogY2FzZSBPUF9CTHIxMWIgOgog
ICAgY2FzZSBPUF9BSHIxMmI6IGNhc2UgT1BfQ0hyMTNiOiBjYXNlIE9QX0RIcjE0YiA6IGNhc2Ug
T1BfQkhyMTViIDoKICAgIHsKICAgICAgICB1ZF90eXBlX3QgZ3ByID0gKG1vcDF0IC0gT1BfQUxy
OGIpICsgVURfUl9BTCArIAogICAgICAgICAgICAgICAgICAgICAgICAoUkVYX0IodS0+cGZ4X3Jl
eCkgPDwgMyk7CiAgICAgICAgaWYgKFVEX1JfQUggPD0gZ3ByICYmIHUtPnBmeF9yZXgpCiAgICAg
ICAgICAgIGdwciA9IGdwciArIDQ7CiAgICAgICAgaW9wWzBdLnR5cGUgPSBVRF9PUF9SRUc7CiAg
ICAgICAgaW9wWzBdLmJhc2UgPSBncHI7CiAgICAgICAgaWYgKG1vcDJ0ID09IE9QX0kpCiAgICAg
ICAgICAgIGRlY29kZV9pbW0odSwgbW9wMnMsICYoaW9wWzFdKSk7CiAgICAgICAgYnJlYWs7CiAg
ICB9CgogICAgLyogZUFYLi5lRFgsIERYL0kgKi8KICAgIGNhc2UgT1BfZUFYIDogY2FzZSBPUF9l
Q1ggOiBjYXNlIE9QX2VEWCA6IGNhc2UgT1BfZUJYIDoKICAgIGNhc2UgT1BfZVNQIDogY2FzZSBP
UF9lQlAgOiBjYXNlIE9QX2VTSSA6IGNhc2UgT1BfZURJIDoKICAgICAgICBpb3BbMF0udHlwZSA9
IFVEX09QX1JFRzsKICAgICAgICBpb3BbMF0uYmFzZSA9IHJlc29sdmVfZ3ByMzIodSwgbW9wMXQp
OwogICAgICAgIGlmIChtb3AydCA9PSBPUF9EWCkgewogICAgICAgICAgICBpb3BbMV0udHlwZSA9
IFVEX09QX1JFRzsKICAgICAgICAgICAgaW9wWzFdLmJhc2UgPSBVRF9SX0RYOwogICAgICAgICAg
ICBpb3BbMV0uc2l6ZSA9IDE2OwogICAgICAgIH0gZWxzZSBpZiAobW9wMnQgPT0gT1BfSSkKICAg
ICAgICAgICAgZGVjb2RlX2ltbSh1LCBtb3AycywgJihpb3BbMV0pKTsKICAgICAgICBicmVhazsK
CiAgICAvKiBFUy4uR1MgKi8KICAgIGNhc2UgT1BfRVMgOiBjYXNlIE9QX0NTIDogY2FzZSBPUF9E
UyA6CiAgICBjYXNlIE9QX1NTIDogY2FzZSBPUF9GUyA6IGNhc2UgT1BfR1MgOgoKICAgICAgICAv
KiBpbiA2NGJpdHMgbW9kZSwgb25seSBmcyBhbmQgZ3MgYXJlIGFsbG93ZWQgKi8KICAgICAgICBp
ZiAodS0+ZGlzX21vZGUgPT0gNjQpCiAgICAgICAgICAgIGlmIChtb3AxdCAhPSBPUF9GUyAmJiBt
b3AxdCAhPSBPUF9HUykKICAgICAgICAgICAgICAgIHUtPmVycm9yPSAxOwogICAgICAgIGlvcFsw
XS50eXBlID0gVURfT1BfUkVHOwogICAgICAgIGlvcFswXS5iYXNlID0gKG1vcDF0IC0gT1BfRVMp
ICsgVURfUl9FUzsKICAgICAgICBpb3BbMF0uc2l6ZSA9IDE2OwoKICAgICAgICBicmVhazsKCiAg
ICAvKiBKICovCiAgICBjYXNlIE9QX0ogOgogICAgICAgIGRlY29kZV9pbW0odSwgbW9wMXMsICYo
aW9wWzBdKSk7ICAgICAgICAKICAgICAgICBpb3BbMF0udHlwZSA9IFVEX09QX0pJTU07CiAgICAg
ICAgYnJlYWsgOwoKICAgIC8qIFBSLCBJICovCiAgICBjYXNlIE9QX1BSOgogICAgICAgIGlmIChN
T0RSTV9NT0QoaW5wX3BlZWsodSkpICE9IDMpCiAgICAgICAgICAgIHUtPmVycm9yID0gMTsKICAg
ICAgICBkZWNvZGVfbW9kcm0odSwgJihpb3BbMF0pLCBtb3AxcywgVF9NTVgsIE5VTEwsIDAsIFRf
Tk9ORSk7CiAgICAgICAgaWYgKG1vcDJ0ID09IE9QX0kpCiAgICAgICAgICAgIGRlY29kZV9pbW0o
dSwgbW9wMnMsICYoaW9wWzFdKSk7CiAgICAgICAgYnJlYWs7IAoKICAgIC8qIFZSLCBJICovCiAg
ICBjYXNlIE9QX1ZSOgogICAgICAgIGlmIChNT0RSTV9NT0QoaW5wX3BlZWsodSkpICE9IDMpCiAg
ICAgICAgICAgIHUtPmVycm9yID0gMTsKICAgICAgICBkZWNvZGVfbW9kcm0odSwgJihpb3BbMF0p
LCBtb3AxcywgVF9YTU0sIE5VTEwsIDAsIFRfTk9ORSk7CiAgICAgICAgaWYgKG1vcDJ0ID09IE9Q
X0kpCiAgICAgICAgICAgIGRlY29kZV9pbW0odSwgbW9wMnMsICYoaW9wWzFdKSk7CiAgICAgICAg
YnJlYWs7IAoKICAgIC8qIFAsIFFbLEldL1cvRVssSV0sVlIgKi8KICAgIGNhc2UgT1BfUCA6CiAg
ICAgICAgaWYgKG1vcDJ0ID09IE9QX1EpIHsKICAgICAgICAgICAgZGVjb2RlX21vZHJtKHUsICYo
aW9wWzFdKSwgbW9wMnMsIFRfTU1YLCAmKGlvcFswXSksIG1vcDFzLCBUX01NWCk7CiAgICAgICAg
ICAgIGlmIChtb3AzdCA9PSBPUF9JKQogICAgICAgICAgICAgICAgZGVjb2RlX2ltbSh1LCBtb3Az
cywgJihpb3BbMl0pKTsKICAgICAgICB9IGVsc2UgaWYgKG1vcDJ0ID09IE9QX1cpIHsKICAgICAg
ICAgICAgZGVjb2RlX21vZHJtKHUsICYoaW9wWzFdKSwgbW9wMnMsIFRfWE1NLCAmKGlvcFswXSks
IG1vcDFzLCBUX01NWCk7CiAgICAgICAgfSBlbHNlIGlmIChtb3AydCA9PSBPUF9WUikgewogICAg
ICAgICAgICBpZiAoTU9EUk1fTU9EKGlucF9wZWVrKHUpKSAhPSAzKQogICAgICAgICAgICAgICAg
dS0+ZXJyb3IgPSAxOwogICAgICAgICAgICBkZWNvZGVfbW9kcm0odSwgJihpb3BbMV0pLCBtb3Ay
cywgVF9YTU0sICYoaW9wWzBdKSwgbW9wMXMsIFRfTU1YKTsKICAgICAgICB9IGVsc2UgaWYgKG1v
cDJ0ID09IE9QX0UpIHsKICAgICAgICAgICAgZGVjb2RlX21vZHJtKHUsICYoaW9wWzFdKSwgbW9w
MnMsIFRfR1BSLCAmKGlvcFswXSksIG1vcDFzLCBUX01NWCk7CiAgICAgICAgICAgIGlmIChtb3Az
dCA9PSBPUF9JKQogICAgICAgICAgICAgICAgZGVjb2RlX2ltbSh1LCBtb3AzcywgJihpb3BbMl0p
KTsKICAgICAgICB9CiAgICAgICAgYnJlYWs7CgogICAgLyogUiwgQy9EICovCiAgICBjYXNlIE9Q
X1IgOgogICAgICAgIGlmIChtb3AydCA9PSBPUF9DKQogICAgICAgICAgICBkZWNvZGVfbW9kcm0o
dSwgJihpb3BbMF0pLCBtb3AxcywgVF9HUFIsICYoaW9wWzFdKSwgbW9wMnMsIFRfQ1JHKTsKICAg
ICAgICBlbHNlIGlmIChtb3AydCA9PSBPUF9EKQogICAgICAgICAgICBkZWNvZGVfbW9kcm0odSwg
Jihpb3BbMF0pLCBtb3AxcywgVF9HUFIsICYoaW9wWzFdKSwgbW9wMnMsIFRfREJHKTsKICAgICAg
ICBicmVhazsKCiAgICAvKiBDLCBSICovCiAgICBjYXNlIE9QX0MgOgogICAgICAgIGRlY29kZV9t
b2RybSh1LCAmKGlvcFsxXSksIG1vcDJzLCBUX0dQUiwgJihpb3BbMF0pLCBtb3AxcywgVF9DUkcp
OwogICAgICAgIGJyZWFrOwoKICAgIC8qIEQsIFIgKi8KICAgIGNhc2UgT1BfRCA6CiAgICAgICAg
ZGVjb2RlX21vZHJtKHUsICYoaW9wWzFdKSwgbW9wMnMsIFRfR1BSLCAmKGlvcFswXSksIG1vcDFz
LCBUX0RCRyk7CiAgICAgICAgYnJlYWs7CgogICAgLyogUSwgUCAqLwogICAgY2FzZSBPUF9RIDoK
ICAgICAgICBkZWNvZGVfbW9kcm0odSwgJihpb3BbMF0pLCBtb3AxcywgVF9NTVgsICYoaW9wWzFd
KSwgbW9wMnMsIFRfTU1YKTsKICAgICAgICBicmVhazsKCiAgICAvKiBTLCBFICovCiAgICBjYXNl
IE9QX1MgOgogICAgICAgIGRlY29kZV9tb2RybSh1LCAmKGlvcFsxXSksIG1vcDJzLCBUX0dQUiwg
Jihpb3BbMF0pLCBtb3AxcywgVF9TRUcpOwogICAgICAgIGJyZWFrOwoKICAgIC8qIFcsIFYgKi8K
ICAgIGNhc2UgT1BfVyA6CiAgICAgICAgZGVjb2RlX21vZHJtKHUsICYoaW9wWzBdKSwgbW9wMXMs
IFRfWE1NLCAmKGlvcFsxXSksIG1vcDJzLCBUX1hNTSk7CiAgICAgICAgYnJlYWs7CgogICAgLyog
ViwgV1ssSV0vUS9NL0UgKi8KICAgIGNhc2UgT1BfViA6CiAgICAgICAgaWYgKG1vcDJ0ID09IE9Q
X1cpIHsKICAgICAgICAgICAgLyogc3BlY2lhbCBjYXNlcyBmb3IgbW92bHBzIGFuZCBtb3ZocHMg
Ki8KICAgICAgICAgICAgaWYgKE1PRFJNX01PRChpbnBfcGVlayh1KSkgPT0gMykgewogICAgICAg
ICAgICAgICAgaWYgKHUtPm1uZW1vbmljID09IFVEX0ltb3ZscHMpCiAgICAgICAgICAgICAgICAg
ICAgdS0+bW5lbW9uaWMgPSBVRF9JbW92aGxwczsKICAgICAgICAgICAgICAgIGVsc2UKICAgICAg
ICAgICAgICAgIGlmICh1LT5tbmVtb25pYyA9PSBVRF9JbW92aHBzKQogICAgICAgICAgICAgICAg
ICAgIHUtPm1uZW1vbmljID0gVURfSW1vdmxocHM7CiAgICAgICAgICAgIH0KICAgICAgICAgICAg
ZGVjb2RlX21vZHJtKHUsICYoaW9wWzFdKSwgbW9wMnMsIFRfWE1NLCAmKGlvcFswXSksIG1vcDFz
LCBUX1hNTSk7CiAgICAgICAgICAgIGlmIChtb3AzdCA9PSBPUF9JKQogICAgICAgICAgICAgICAg
ZGVjb2RlX2ltbSh1LCBtb3AzcywgJihpb3BbMl0pKTsKICAgICAgICB9IGVsc2UgaWYgKG1vcDJ0
ID09IE9QX1EpCiAgICAgICAgICAgIGRlY29kZV9tb2RybSh1LCAmKGlvcFsxXSksIG1vcDJzLCBU
X01NWCwgJihpb3BbMF0pLCBtb3AxcywgVF9YTU0pOwogICAgICAgIGVsc2UgaWYgKG1vcDJ0ID09
IE9QX00pIHsKICAgICAgICAgICAgaWYgKE1PRFJNX01PRChpbnBfcGVlayh1KSkgPT0gMykKICAg
ICAgICAgICAgICAgIHUtPmVycm9yPSAxOwogICAgICAgICAgICBkZWNvZGVfbW9kcm0odSwgJihp
b3BbMV0pLCBtb3AycywgVF9HUFIsICYoaW9wWzBdKSwgbW9wMXMsIFRfWE1NKTsKICAgICAgICB9
IGVsc2UgaWYgKG1vcDJ0ID09IE9QX0UpIHsKICAgICAgICAgICAgZGVjb2RlX21vZHJtKHUsICYo
aW9wWzFdKSwgbW9wMnMsIFRfR1BSLCAmKGlvcFswXSksIG1vcDFzLCBUX1hNTSk7CiAgICAgICAg
fSBlbHNlIGlmIChtb3AydCA9PSBPUF9QUikgewogICAgICAgICAgICBkZWNvZGVfbW9kcm0odSwg
Jihpb3BbMV0pLCBtb3AycywgVF9NTVgsICYoaW9wWzBdKSwgbW9wMXMsIFRfWE1NKTsKICAgICAg
ICB9CiAgICAgICAgYnJlYWs7CgogICAgLyogRFgsIGVBWC9BTCAqLwogICAgY2FzZSBPUF9EWCA6
CiAgICAgICAgaW9wWzBdLnR5cGUgPSBVRF9PUF9SRUc7CiAgICAgICAgaW9wWzBdLmJhc2UgPSBV
RF9SX0RYOwogICAgICAgIGlvcFswXS5zaXplID0gMTY7CgogICAgICAgIGlmIChtb3AydCA9PSBP
UF9lQVgpIHsKICAgICAgICAgICAgaW9wWzFdLnR5cGUgPSBVRF9PUF9SRUc7ICAgIAogICAgICAg
ICAgICBpb3BbMV0uYmFzZSA9IHJlc29sdmVfZ3ByMzIodSwgbW9wMnQpOwogICAgICAgIH0gZWxz
ZSBpZiAobW9wMnQgPT0gT1BfQUwpIHsKICAgICAgICAgICAgaW9wWzFdLnR5cGUgPSBVRF9PUF9S
RUc7CiAgICAgICAgICAgIGlvcFsxXS5iYXNlID0gVURfUl9BTDsKICAgICAgICAgICAgaW9wWzFd
LnNpemUgPSA4OwogICAgICAgIH0KCiAgICAgICAgYnJlYWs7CgogICAgLyogSSwgSS9BTC9lQVgg
Ki8KICAgIGNhc2UgT1BfSSA6CiAgICAgICAgZGVjb2RlX2ltbSh1LCBtb3AxcywgJihpb3BbMF0p
KTsKICAgICAgICBpZiAobW9wMnQgPT0gT1BfSSkKICAgICAgICAgICAgZGVjb2RlX2ltbSh1LCBt
b3AycywgJihpb3BbMV0pKTsKICAgICAgICBlbHNlIGlmIChtb3AydCA9PSBPUF9BTCkgewogICAg
ICAgICAgICBpb3BbMV0udHlwZSA9IFVEX09QX1JFRzsKICAgICAgICAgICAgaW9wWzFdLmJhc2Ug
PSBVRF9SX0FMOwogICAgICAgICAgICBpb3BbMV0uc2l6ZSA9IDE2OwogICAgICAgIH0gZWxzZSBp
ZiAobW9wMnQgPT0gT1BfZUFYKSB7CiAgICAgICAgICAgIGlvcFsxXS50eXBlID0gVURfT1BfUkVH
OyAgICAKICAgICAgICAgICAgaW9wWzFdLmJhc2UgPSByZXNvbHZlX2dwcjMyKHUsIG1vcDJ0KTsK
ICAgICAgICB9CiAgICAgICAgYnJlYWs7CgogICAgLyogTywgQUwvZUFYICovCiAgICBjYXNlIE9Q
X08gOgogICAgICAgIGRlY29kZV9vKHUsIG1vcDFzLCAmKGlvcFswXSkpOwogICAgICAgIGlvcFsx
XS50eXBlID0gVURfT1BfUkVHOwogICAgICAgIGlvcFsxXS5zaXplID0gcmVzb2x2ZV9vcGVyYW5k
X3NpemUodSwgbW9wMXMpOwogICAgICAgIGlmIChtb3AydCA9PSBPUF9BTCkKICAgICAgICAgICAg
aW9wWzFdLmJhc2UgPSBVRF9SX0FMOwogICAgICAgIGVsc2UgaWYgKG1vcDJ0ID09IE9QX2VBWCkK
ICAgICAgICAgICAgaW9wWzFdLmJhc2UgPSByZXNvbHZlX2dwcjMyKHUsIG1vcDJ0KTsKICAgICAg
ICBlbHNlIGlmIChtb3AydCA9PSBPUF9yQVgpCiAgICAgICAgICAgIGlvcFsxXS5iYXNlID0gcmVz
b2x2ZV9ncHI2NCh1LCBtb3AydCk7ICAgICAgCiAgICAgICAgYnJlYWs7CgogICAgLyogMyAqLwog
ICAgY2FzZSBPUF9JMyA6CiAgICAgICAgaW9wWzBdLnR5cGUgPSBVRF9PUF9DT05TVDsKICAgICAg
ICBpb3BbMF0ubHZhbC5zYnl0ZSA9IDM7CiAgICAgICAgYnJlYWs7CgogICAgLyogU1QobiksIFNU
KG4pICovCiAgICBjYXNlIE9QX1NUMCA6IGNhc2UgT1BfU1QxIDogY2FzZSBPUF9TVDIgOiBjYXNl
IE9QX1NUMyA6CiAgICBjYXNlIE9QX1NUNCA6IGNhc2UgT1BfU1Q1IDogY2FzZSBPUF9TVDYgOiBj
YXNlIE9QX1NUNyA6CgogICAgICAgIGlvcFswXS50eXBlID0gVURfT1BfUkVHOwogICAgICAgIGlv
cFswXS5iYXNlID0gKG1vcDF0LU9QX1NUMCkgKyBVRF9SX1NUMDsKICAgICAgICBpb3BbMF0uc2l6
ZSA9IDA7CgogICAgICAgIGlmIChtb3AydCA+PSBPUF9TVDAgJiYgbW9wMnQgPD0gT1BfU1Q3KSB7
CiAgICAgICAgICAgIGlvcFsxXS50eXBlID0gVURfT1BfUkVHOwogICAgICAgICAgICBpb3BbMV0u
YmFzZSA9IChtb3AydC1PUF9TVDApICsgVURfUl9TVDA7CiAgICAgICAgICAgIGlvcFsxXS5zaXpl
ID0gMDsKICAgICAgICB9CiAgICAgICAgYnJlYWs7CgogICAgLyogQVggKi8KICAgIGNhc2UgT1Bf
QVg6CiAgICAgICAgaW9wWzBdLnR5cGUgPSBVRF9PUF9SRUc7CiAgICAgICAgaW9wWzBdLmJhc2Ug
PSBVRF9SX0FYOwogICAgICAgIGlvcFswXS5zaXplID0gMTY7CiAgICAgICAgYnJlYWs7CgogICAg
Lyogbm9uZSAqLwogICAgZGVmYXVsdCA6CiAgICAgICAgaW9wWzBdLnR5cGUgPSBpb3BbMV0udHlw
ZSA9IGlvcFsyXS50eXBlID0gVURfTk9ORTsKICB9CgogIHJldHVybiAwOwp9CgovKiAtLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLQogKiBjbGVhcl9pbnNuKCkgLSBjbGVhciBpbnN0cnVjdGlvbiBwb2ludGVy
IAogKiAtLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLQogKi8Kc3RhdGljIGludCBjbGVhcl9pbnNuKHJlZ2lz
dGVyIHN0cnVjdCB1ZCogdSkKewogIHUtPmVycm9yICAgICA9IDA7CiAgdS0+cGZ4X3NlZyAgID0g
MDsKICB1LT5wZnhfb3ByICAgPSAwOwogIHUtPnBmeF9hZHIgICA9IDA7CiAgdS0+cGZ4X2xvY2sg
ID0gMDsKICB1LT5wZnhfcmVwbmUgPSAwOwogIHUtPnBmeF9yZXAgICA9IDA7CiAgdS0+cGZ4X3Jl
cGUgID0gMDsKICB1LT5wZnhfc2VnICAgPSAwOwogIHUtPnBmeF9yZXggICA9IDA7CiAgdS0+cGZ4
X2luc24gID0gMDsKICB1LT5tbmVtb25pYyAgPSBVRF9Jbm9uZTsKICB1LT5pdGFiX2VudHJ5ID0g
TlVMTDsKCiAgbWVtc2V0KCAmdS0+b3BlcmFuZFsgMCBdLCAwLCBzaXplb2YoIHN0cnVjdCB1ZF9v
cGVyYW5kICkgKTsKICBtZW1zZXQoICZ1LT5vcGVyYW5kWyAxIF0sIDAsIHNpemVvZiggc3RydWN0
IHVkX29wZXJhbmQgKSApOwogIG1lbXNldCggJnUtPm9wZXJhbmRbIDIgXSwgMCwgc2l6ZW9mKCBz
dHJ1Y3QgdWRfb3BlcmFuZCApICk7CiAKICByZXR1cm4gMDsKfQoKc3RhdGljIGludCBkb19tb2Rl
KCBzdHJ1Y3QgdWQqIHUgKQp7CiAgLyogaWYgaW4gZXJyb3Igc3RhdGUsIGJhaWwgb3V0ICovCiAg
aWYgKCB1LT5lcnJvciApIHJldHVybiAtMTsgCgogIC8qIHByb3BhZ2F0ZSBwZXJmaXggZWZmZWN0
cyAqLwogIGlmICggdS0+ZGlzX21vZGUgPT0gNjQgKSB7ICAvKiBzZXQgNjRiaXQtbW9kZSBmbGFn
cyAqLwoKICAgIC8qIENoZWNrIHZhbGlkaXR5IG9mICBpbnN0cnVjdGlvbiBtNjQgKi8KICAgIGlm
ICggUF9JTlY2NCggdS0+aXRhYl9lbnRyeS0+cHJlZml4ICkgKSB7CiAgICAgICAgdS0+ZXJyb3Ig
PSAxOwogICAgICAgIHJldHVybiAtMTsKICAgIH0KCiAgICAvKiBlZmZlY3RpdmUgcmV4IHByZWZp
eCBpcyB0aGUgIGVmZmVjdGl2ZSBtYXNrIGZvciB0aGUgCiAgICAgKiBpbnN0cnVjdGlvbiBoYXJk
LWNvZGVkIGluIHRoZSBvcGNvZGUgbWFwLgogICAgICovCiAgICB1LT5wZnhfcmV4ID0gKCB1LT5w
ZnhfcmV4ICYgMHg0MCApIHwgCiAgICAgICAgICAgICAgICAgKCB1LT5wZnhfcmV4ICYgUkVYX1BG
WF9NQVNLKCB1LT5pdGFiX2VudHJ5LT5wcmVmaXggKSApOyAKCiAgICAvKiB3aGV0aGVyIHRoaXMg
aW5zdHJ1Y3Rpb24gaGFzIGEgZGVmYXVsdCBvcGVyYW5kIHNpemUgb2YgCiAgICAgKiA2NGJpdCwg
YWxzbyBoYXJkY29kZWQgaW50byB0aGUgb3Bjb2RlIG1hcC4KICAgICAqLwogICAgdS0+ZGVmYXVs
dDY0ID0gUF9ERUY2NCggdS0+aXRhYl9lbnRyeS0+cHJlZml4ICk7IAogICAgLyogY2FsY3VsYXRl
IGVmZmVjdGl2ZSBvcGVyYW5kIHNpemUgKi8KICAgIGlmICggUkVYX1coIHUtPnBmeF9yZXggKSAp
IHsKICAgICAgICB1LT5vcHJfbW9kZSA9IDY0OwogICAgfSBlbHNlIGlmICggdS0+cGZ4X29wciAp
IHsKICAgICAgICB1LT5vcHJfbW9kZSA9IDE2OwogICAgfSBlbHNlIHsKICAgICAgICAvKiB1bmxl
c3MgdGhlIGRlZmF1bHQgb3ByIHNpemUgb2YgaW5zdHJ1Y3Rpb24gaXMgNjQsCiAgICAgICAgICog
dGhlIGVmZmVjdGl2ZSBvcGVyYW5kIHNpemUgaW4gdGhlIGFic2VuY2Ugb2YgcmV4LncKICAgICAg
ICAgKiBwcmVmaXggaXMgMzIuCiAgICAgICAgICovCiAgICAgICAgdS0+b3ByX21vZGUgPSAoIHUt
PmRlZmF1bHQ2NCApID8gNjQgOiAzMjsKICAgIH0KCiAgICAvKiBjYWxjdWxhdGUgZWZmZWN0aXZl
IGFkZHJlc3Mgc2l6ZSAqLwogICAgdS0+YWRyX21vZGUgPSAodS0+cGZ4X2FkcikgPyAzMiA6IDY0
OwogIH0gZWxzZSBpZiAoIHUtPmRpc19tb2RlID09IDMyICkgeyAvKiBzZXQgMzJiaXQtbW9kZSBm
bGFncyAqLwogICAgdS0+b3ByX21vZGUgPSAoIHUtPnBmeF9vcHIgKSA/IDE2IDogMzI7CiAgICB1
LT5hZHJfbW9kZSA9ICggdS0+cGZ4X2FkciApID8gMTYgOiAzMjsKICB9IGVsc2UgaWYgKCB1LT5k
aXNfbW9kZSA9PSAxNiApIHsgLyogc2V0IDE2Yml0LW1vZGUgZmxhZ3MgKi8KICAgIHUtPm9wcl9t
b2RlID0gKCB1LT5wZnhfb3ByICkgPyAzMiA6IDE2OwogICAgdS0+YWRyX21vZGUgPSAoIHUtPnBm
eF9hZHIgKSA/IDMyIDogMTY7CiAgfQoKICAvKiBUaGVzZSBmbGFncyBkZXRlcm1pbmUgd2hpY2gg
b3BlcmFuZCB0byBhcHBseSB0aGUgb3BlcmFuZCBzaXplCiAgICogY2FzdCB0by4KICAgKi8KICB1
LT5jMSA9ICggUF9DMSggdS0+aXRhYl9lbnRyeS0+cHJlZml4ICkgKSA/IDEgOiAwOwogIHUtPmMy
ID0gKCBQX0MyKCB1LT5pdGFiX2VudHJ5LT5wcmVmaXggKSApID8gMSA6IDA7CiAgdS0+YzMgPSAo
IFBfQzMoIHUtPml0YWJfZW50cnktPnByZWZpeCApICkgPyAxIDogMDsKCiAgLyogc2V0IGZsYWdz
IGZvciBpbXBsaWNpdCBhZGRyZXNzaW5nICovCiAgdS0+aW1wbGljaXRfYWRkciA9IFBfSU1QQURE
UiggdS0+aXRhYl9lbnRyeS0+cHJlZml4ICk7CgogIHJldHVybiAwOwp9CgpzdGF0aWMgaW50IGdl
bl9oZXgoIHN0cnVjdCB1ZCAqdSApCnsKICB1bnNpZ25lZCBpbnQgaTsKICB1bnNpZ25lZCBjaGFy
ICpzcmNfcHRyID0gaW5wX3Nlc3MoIHUgKTsKICBjaGFyKiBzcmNfaGV4OwoKICAvKiBiYWlsIG91
dCBpZiBpbiBlcnJvciBzdGF0LiAqLwogIGlmICggdS0+ZXJyb3IgKSByZXR1cm4gLTE7IAogIC8q
IG91dHB1dCBidWZmZXIgcG9pbnRlICovCiAgc3JjX2hleCA9ICggY2hhciogKSB1LT5pbnNuX2hl
eGNvZGU7CiAgLyogZm9yIGVhY2ggYnl0ZSB1c2VkIHRvIGRlY29kZSBpbnN0cnVjdGlvbiAqLwog
IGZvciAoIGkgPSAwOyBpIDwgdS0+aW5wX2N0cjsgKytpLCArK3NyY19wdHIpIHsKICAgIHNucHJp
bnRmKCBzcmNfaGV4LCAyLCAiJTAyeCIsICpzcmNfcHRyICYgMHhGRiApOwogICAgc3JjX2hleCAr
PSAyOwogIH0KICByZXR1cm4gMDsKfQoKLyogPT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT0KICogdWRfZGVj
b2RlKCkgLSBJbnN0cnVjdGlvbiBkZWNvZGVyLiBSZXR1cm5zIHRoZSBudW1iZXIgb2YgYnl0ZXMg
ZGVjb2RlZC4KICogPT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT0KICovCnVuc2lnbmVkIGludCB1ZF9kZWNv
ZGUoIHN0cnVjdCB1ZCogdSApCnsKICBpbnBfc3RhcnQodSk7CgogIGlmICggY2xlYXJfaW5zbigg
dSApICkgewogICAgOyAvKiBlcnJvciAqLwogIH0gZWxzZSBpZiAoIGdldF9wcmVmaXhlcyggdSAp
ICE9IDAgKSB7CiAgICA7IC8qIGVycm9yICovCiAgfSBlbHNlIGlmICggc2VhcmNoX2l0YWIoIHUg
KSAhPSAwICkgewogICAgOyAvKiBlcnJvciAqLwogIH0gZWxzZSBpZiAoIGRvX21vZGUoIHUgKSAh
PSAwICkgewogICAgOyAvKiBlcnJvciAqLwogIH0gZWxzZSBpZiAoIGRpc2FzbV9vcGVyYW5kcygg
dSApICE9IDAgKSB7CiAgICA7IC8qIGVycm9yICovCiAgfSBlbHNlIGlmICggcmVzb2x2ZV9tbmVt
b25pYyggdSApICE9IDAgKSB7CiAgICA7IC8qIGVycm9yICovCiAgfQoKICAvKiBIYW5kbGUgZGVj
b2RlIGVycm9yLiAqLwogIGlmICggdS0+ZXJyb3IgKSB7CiAgICAvKiBjbGVhciBvdXQgdGhlIGRl
Y29kZSBkYXRhLiAqLwogICAgY2xlYXJfaW5zbiggdSApOwogICAgLyogbWFyayB0aGUgc2VxdWVu
Y2Ugb2YgYnl0ZXMgYXMgaW52YWxpZC4gKi8KICAgIHUtPml0YWJfZW50cnkgPSAmIGllX2ludmFs
aWQ7CiAgICB1LT5tbmVtb25pYyA9IHUtPml0YWJfZW50cnktPm1uZW1vbmljOwogIH0gCgogIHUt
Pmluc25fb2Zmc2V0ID0gdS0+cGM7IC8qIHNldCBvZmZzZXQgb2YgaW5zdHJ1Y3Rpb24gKi8KICB1
LT5pbnNuX2ZpbGwgPSAwOyAgIC8qIHNldCB0cmFuc2xhdGlvbiBidWZmZXIgaW5kZXggdG8gMCAq
LwogIHUtPnBjICs9IHUtPmlucF9jdHI7ICAgIC8qIG1vdmUgcHJvZ3JhbSBjb3VudGVyIGJ5IGJ5
dGVzIGRlY29kZWQgKi8KICBnZW5faGV4KCB1ICk7ICAgICAgIC8qIGdlbmVyYXRlIGhleCBjb2Rl
ICovCgogIC8qIHJldHVybiBudW1iZXIgb2YgYnl0ZXMgZGlzYXNzZW1ibGVkLiAqLwogIHJldHVy
biB1LT5pbnBfY3RyOwp9CgovKiB2aW06Y2luZGVudAogKiB2aW06dHM9NAogKiB2aW06c3c9NAog
KiB2aW06ZXhwYW5kdGFiCiAqLwoAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAa2RiL3g4Ni91ZGlzODYtMS43L3N5bi1hdHQuYwAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAADAwMDA2NjQAMDAwMjc1NgAwMDAyNzU2ADAwMDAwMDExMjU3ADExNzY1NDY1NTU2ADAx
NTQxNwAgMAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAB1c3RhciAg
AG1yYXRob3IAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAbXJhdGhvcgAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAvKiAtLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLQogKiBzeW4tYXR0LmMK
ICoKICogQ29weXJpZ2h0IChjKSAyMDA0LCAyMDA1LCAyMDA2IFZpdmVrIE1vaGFuIDx2aXZla0Bz
aWc5LmNvbT4KICogQWxsIHJpZ2h0cyByZXNlcnZlZC4gU2VlIChMSUNFTlNFKQogKiAtLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLQogKi8KCiNpbmNsdWRlICJ0eXBlcy5oIgojaW5jbHVkZSAiZXh0ZXJuLmgi
CiNpbmNsdWRlICJkZWNvZGUuaCIKI2luY2x1ZGUgIml0YWIuaCIKI2luY2x1ZGUgInN5bi5oIgoK
LyogLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0KICogb3ByX2Nhc3QoKSAtIFByaW50cyBhbiBvcGVyYW5k
IGNhc3QuCiAqIC0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tCiAqLwpzdGF0aWMgdm9pZCAKb3ByX2Nhc3Qo
c3RydWN0IHVkKiB1LCBzdHJ1Y3QgdWRfb3BlcmFuZCogb3ApCnsKICBzd2l0Y2gob3AtPnNpemUp
IHsKCWNhc2UgMTYgOiBjYXNlIDMyIDoKCQlta2FzbSh1LCAiKiIpOyAgIGJyZWFrOwoJZGVmYXVs
dDogYnJlYWs7CiAgfQp9CgovKiAtLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLQogKiBnZW5fb3BlcmFuZCgp
IC0gR2VuZXJhdGVzIGFzc2VtYmx5IG91dHB1dCBmb3IgZWFjaCBvcGVyYW5kLgogKiAtLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLQogKi8Kc3RhdGljIHZvaWQgCmdlbl9vcGVyYW5kKHN0cnVjdCB1ZCogdSwg
c3RydWN0IHVkX29wZXJhbmQqIG9wKQp7CiAgc3dpdGNoKG9wLT50eXBlKSB7CgljYXNlIFVEX09Q
X1JFRzoKCQlta2FzbSh1LCAiJSUlcyIsIHVkX3JlZ190YWJbb3AtPmJhc2UgLSBVRF9SX0FMXSk7
CgkJYnJlYWs7CgoJY2FzZSBVRF9PUF9NRU06CgkJaWYgKHUtPmJyX2Zhcikgb3ByX2Nhc3QodSwg
b3ApOwoJCWlmICh1LT5wZnhfc2VnKQoJCQlta2FzbSh1LCAiJSUlczoiLCB1ZF9yZWdfdGFiW3Ut
PnBmeF9zZWcgLSBVRF9SX0FMXSk7CgkJaWYgKG9wLT5vZmZzZXQgPT0gOCkgewoJCQlpZiAob3At
Pmx2YWwuc2J5dGUgPCAwKQoJCQkJbWthc20odSwgIi0weCV4IiwgKC1vcC0+bHZhbC5zYnl0ZSkg
JiAweGZmKTsKCQkJZWxzZQlta2FzbSh1LCAiMHgleCIsIG9wLT5sdmFsLnNieXRlKTsKCQl9IAoJ
CWVsc2UgaWYgKG9wLT5vZmZzZXQgPT0gMTYpIAoJCQlta2FzbSh1LCAiMHgleCIsIG9wLT5sdmFs
LnV3b3JkKTsKCQllbHNlIGlmIChvcC0+b2Zmc2V0ID09IDMyKSAKCQkJbWthc20odSwgIjB4JWx4
Iiwgb3AtPmx2YWwudWR3b3JkKTsKCQllbHNlIGlmIChvcC0+b2Zmc2V0ID09IDY0KSAKCQkJbWth
c20odSwgIjB4IiBGTVQ2NCAieCIsIG9wLT5sdmFsLnVxd29yZCk7CgoJCWlmIChvcC0+YmFzZSkK
CQkJbWthc20odSwgIiglJSVzIiwgdWRfcmVnX3RhYltvcC0+YmFzZSAtIFVEX1JfQUxdKTsKCQlp
ZiAob3AtPmluZGV4KSB7CgkJCWlmIChvcC0+YmFzZSkKCQkJCW1rYXNtKHUsICIsIik7CgkJCWVs
c2UgbWthc20odSwgIigiKTsKCQkJbWthc20odSwgIiUlJXMiLCB1ZF9yZWdfdGFiW29wLT5pbmRl
eCAtIFVEX1JfQUxdKTsKCQl9CgkJaWYgKG9wLT5zY2FsZSkKCQkJbWthc20odSwgIiwlZCIsIG9w
LT5zY2FsZSk7CgkJaWYgKG9wLT5iYXNlIHx8IG9wLT5pbmRleCkKCQkJbWthc20odSwgIikiKTsK
CQlicmVhazsKCgljYXNlIFVEX09QX0lNTToKCQlzd2l0Y2ggKG9wLT5zaXplKSB7CgkJCWNhc2Ug
IDg6IG1rYXNtKHUsICIkMHgleCIsIG9wLT5sdmFsLnVieXRlKTsgICAgYnJlYWs7CgkJCWNhc2Ug
MTY6IG1rYXNtKHUsICIkMHgleCIsIG9wLT5sdmFsLnV3b3JkKTsgICAgYnJlYWs7CgkJCWNhc2Ug
MzI6IG1rYXNtKHUsICIkMHglbHgiLCBvcC0+bHZhbC51ZHdvcmQpOyAgYnJlYWs7CgkJCWNhc2Ug
NjQ6IG1rYXNtKHUsICIkMHgiIEZNVDY0ICJ4Iiwgb3AtPmx2YWwudXF3b3JkKTsgYnJlYWs7CgkJ
CWRlZmF1bHQ6IGJyZWFrOwoJCX0KCQlicmVhazsKCgljYXNlIFVEX09QX0pJTU06CgkJc3dpdGNo
IChvcC0+c2l6ZSkgewoJCQljYXNlICA4OgoJCQkJbWthc20odSwgIjB4IiBGTVQ2NCAieCIsIHUt
PnBjICsgb3AtPmx2YWwuc2J5dGUpOyAKCQkJCWJyZWFrOwoJCQljYXNlIDE2OgoJCQkJbWthc20o
dSwgIjB4IiBGTVQ2NCAieCIsIHUtPnBjICsgb3AtPmx2YWwuc3dvcmQpOwoJCQkJYnJlYWs7CgkJ
CWNhc2UgMzI6CgkJCQlta2FzbSh1LCAiMHgiIEZNVDY0ICJ4IiwgdS0+cGMgKyBvcC0+bHZhbC5z
ZHdvcmQpOwoJCQkJYnJlYWs7CgkJCWRlZmF1bHQ6YnJlYWs7CgkJfQoJCWJyZWFrOwoKCWNhc2Ug
VURfT1BfUFRSOgoJCXN3aXRjaCAob3AtPnNpemUpIHsKCQkJY2FzZSAzMjoKCQkJCW1rYXNtKHUs
ICIkMHgleCwgJDB4JXgiLCBvcC0+bHZhbC5wdHIuc2VnLCAKCQkJCQlvcC0+bHZhbC5wdHIub2Zm
ICYgMHhGRkZGKTsKCQkJCWJyZWFrOwoJCQljYXNlIDQ4OgoJCQkJbWthc20odSwgIiQweCV4LCAk
MHglbHgiLCBvcC0+bHZhbC5wdHIuc2VnLCAKCQkJCQlvcC0+bHZhbC5wdHIub2ZmKTsKCQkJCWJy
ZWFrOwoJCX0KCQlicmVhazsKCQkJCglkZWZhdWx0OiByZXR1cm47CiAgfQp9CgovKiA9PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PQogKiB0cmFuc2xhdGVzIHRvIEFUJlQgc3ludGF4IAogKiA9PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PQogKi8KZXh0ZXJuIHZvaWQgCnVkX3RyYW5zbGF0ZV9hdHQoc3RydWN0IHVkICp1
KQp7CiAgaW50IHNpemUgPSAwOwoKICAvKiBjaGVjayBpZiBQX09TTyBwcmVmaXggaXMgdXNlZCAq
LwogIGlmICghIFBfT1NPKHUtPml0YWJfZW50cnktPnByZWZpeCkgJiYgdS0+cGZ4X29wcikgewoJ
c3dpdGNoICh1LT5kaXNfbW9kZSkgewoJCWNhc2UgMTY6IAoJCQlta2FzbSh1LCAibzMyICIpOwoJ
CQlicmVhazsKCQljYXNlIDMyOgoJCWNhc2UgNjQ6CiAJCQlta2FzbSh1LCAibzE2ICIpOwoJCQli
cmVhazsKCX0KICB9CgogIC8qIGNoZWNrIGlmIFBfQVNPIHByZWZpeCB3YXMgdXNlZCAqLwogIGlm
ICghIFBfQVNPKHUtPml0YWJfZW50cnktPnByZWZpeCkgJiYgdS0+cGZ4X2FkcikgewoJc3dpdGNo
ICh1LT5kaXNfbW9kZSkgewoJCWNhc2UgMTY6IAoJCQlta2FzbSh1LCAiYTMyICIpOwoJCQlicmVh
azsKCQljYXNlIDMyOgogCQkJbWthc20odSwgImExNiAiKTsKCQkJYnJlYWs7CgkJY2FzZSA2NDoK
IAkJCW1rYXNtKHUsICJhMzIgIik7CgkJCWJyZWFrOwoJfQogIH0KCiAgaWYgKHUtPnBmeF9sb2Nr
KQogIAlta2FzbSh1LCAgImxvY2sgIik7CiAgaWYgKHUtPnBmeF9yZXApCglta2FzbSh1LCAgInJl
cCAiKTsKICBpZiAodS0+cGZ4X3JlcG5lKQoJCW1rYXNtKHUsICAicmVwbmUgIik7CgogIC8qIHNw
ZWNpYWwgaW5zdHJ1Y3Rpb25zICovCiAgc3dpdGNoICh1LT5tbmVtb25pYykgewoJY2FzZSBVRF9J
cmV0ZjogCgkJbWthc20odSwgImxyZXQgIik7IAoJCWJyZWFrOwoJY2FzZSBVRF9JZGI6CgkJbWth
c20odSwgIi5ieXRlIDB4JXgiLCB1LT5vcGVyYW5kWzBdLmx2YWwudWJ5dGUpOwoJCXJldHVybjsK
CWNhc2UgVURfSWptcDoKCWNhc2UgVURfSWNhbGw6CgkJaWYgKHUtPmJyX2ZhcikgbWthc20odSwg
ICJsIik7CgkJbWthc20odSwgIiVzIiwgdWRfbG9va3VwX21uZW1vbmljKHUtPm1uZW1vbmljKSk7
CgkJYnJlYWs7CgljYXNlIFVEX0lib3VuZDoKCWNhc2UgVURfSWVudGVyOgoJCWlmICh1LT5vcGVy
YW5kWzBdLnR5cGUgIT0gVURfTk9ORSkKCQkJZ2VuX29wZXJhbmQodSwgJnUtPm9wZXJhbmRbMF0p
OwoJCWlmICh1LT5vcGVyYW5kWzFdLnR5cGUgIT0gVURfTk9ORSkgewoJCQlta2FzbSh1LCAiLCIp
OwoJCQlnZW5fb3BlcmFuZCh1LCAmdS0+b3BlcmFuZFsxXSk7CgkJfQoJCXJldHVybjsKCWRlZmF1
bHQ6CgkJbWthc20odSwgIiVzIiwgdWRfbG9va3VwX21uZW1vbmljKHUtPm1uZW1vbmljKSk7CiAg
fQoKICBpZiAodS0+YzEpCglzaXplID0gdS0+b3BlcmFuZFswXS5zaXplOwogIGVsc2UgaWYgKHUt
PmMyKQoJc2l6ZSA9IHUtPm9wZXJhbmRbMV0uc2l6ZTsKICBlbHNlIGlmICh1LT5jMykKCXNpemUg
PSB1LT5vcGVyYW5kWzJdLnNpemU7CgogIGlmIChzaXplID09IDgpCglta2FzbSh1LCAiYiIpOwog
IGVsc2UgaWYgKHNpemUgPT0gMTYpCglta2FzbSh1LCAidyIpOwogIGVsc2UgaWYgKHNpemUgPT0g
NjQpCiAJbWthc20odSwgInEiKTsKCiAgbWthc20odSwgIiAiKTsKCiAgaWYgKHUtPm9wZXJhbmRb
Ml0udHlwZSAhPSBVRF9OT05FKSB7CglnZW5fb3BlcmFuZCh1LCAmdS0+b3BlcmFuZFsyXSk7Cglt
a2FzbSh1LCAiLCAiKTsKICB9CgogIGlmICh1LT5vcGVyYW5kWzFdLnR5cGUgIT0gVURfTk9ORSkg
ewoJZ2VuX29wZXJhbmQodSwgJnUtPm9wZXJhbmRbMV0pOwoJbWthc20odSwgIiwgIik7CiAgfQoK
ICBpZiAodS0+b3BlcmFuZFswXS50eXBlICE9IFVEX05PTkUpCglnZW5fb3BlcmFuZCh1LCAmdS0+
b3BlcmFuZFswXSk7Cn0KAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAGtkYi94ODYvdWRpczg2LTEuNy9rZGJfZGlzLmMAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAwMDAw
NjY0ADAwMDI3NTYAMDAwMjc1NgAwMDAwMDAxNDMzNgAxMTc2NTQ2NTU1NgAwMTU0MjAAIDAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAdXN0YXIgIABtcmF0aG9yAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAG1yYXRob3IAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAALyoKICogQ29weXJpZ2h0IChDKSAyMDA5LCBNdWtlc2ggUmF0aG9yLCBPcmFjbGUg
Q29ycC4gIEFsbCByaWdodHMgcmVzZXJ2ZWQuCiAqCiAqIFRoaXMgcHJvZ3JhbSBpcyBmcmVlIHNv
ZnR3YXJlOyB5b3UgY2FuIHJlZGlzdHJpYnV0ZSBpdCBhbmQvb3IKICogbW9kaWZ5IGl0IHVuZGVy
IHRoZSB0ZXJtcyBvZiB0aGUgR05VIEdlbmVyYWwgUHVibGljCiAqIExpY2Vuc2UgdjIgYXMgcHVi
bGlzaGVkIGJ5IHRoZSBGcmVlIFNvZnR3YXJlIEZvdW5kYXRpb24uCiAqCiAqIFRoaXMgcHJvZ3Jh
bSBpcyBkaXN0cmlidXRlZCBpbiB0aGUgaG9wZSB0aGF0IGl0IHdpbGwgYmUgdXNlZnVsLAogKiBi
dXQgV0lUSE9VVCBBTlkgV0FSUkFOVFk7IHdpdGhvdXQgZXZlbiB0aGUgaW1wbGllZCB3YXJyYW50
eSBvZgogKiBNRVJDSEFOVEFCSUxJVFkgb3IgRklUTkVTUyBGT1IgQSBQQVJUSUNVTEFSIFBVUlBP
U0UuICBTZWUgdGhlIEdOVQogKiBHZW5lcmFsIFB1YmxpYyBMaWNlbnNlIGZvciBtb3JlIGRldGFp
bHMuCiAqCiAqIFlvdSBzaG91bGQgaGF2ZSByZWNlaXZlZCBhIGNvcHkgb2YgdGhlIEdOVSBHZW5l
cmFsIFB1YmxpYwogKiBMaWNlbnNlIGFsb25nIHdpdGggdGhpcyBwcm9ncmFtOyBpZiBub3QsIHdy
aXRlIHRvIHRoZQogKiBGcmVlIFNvZnR3YXJlIEZvdW5kYXRpb24sIEluYy4sIDU5IFRlbXBsZSBQ
bGFjZSAtIFN1aXRlIDMzMCwKICogQm9zdG9uLCBNQSAwMjExMTAtMTMwNywgVVNBLgogKi8KCiNp
bmNsdWRlIDx4ZW4vY29tcGlsZS5oPiAgICAgICAgICAgICAgICAvKiBmb3IgWEVOX1NVQlZFUlNJ
T04gKi8KI2luY2x1ZGUgIi4uLy4uL2luY2x1ZGUva2RiaW5jLmgiCiNpbmNsdWRlICJleHRlcm4u
aCIKCnN0YXRpYyB2b2lkICgqZGlzX3N5bnRheCkodWRfdCopID0gVURfU1lOX0FUVDsgLyogZGVm
YXVsdCBkaXMtYXNzZW1ibHkgc3ludGF4ICovCgpzdGF0aWMgc3RydWN0IHsgICAgICAgICAgICAg
ICAgICAgICAgICAgLyogaW5mbyBmb3Iga2RiX3JlYWRfYnl0ZV9mb3JfdWQoKSAqLwogICAga2Ri
dmFfdCBrdWRfaW5zdHJfYWRkcjsKICAgIGRvbWlkX3Qga3VkX2RvbWlkOwp9IGtkYl91ZF9yZF9p
bmZvOwoKLyogY2FsbGVkIHZpYSBmdW5jdGlvbiBwdHIgYnkgdWQgd2hlbiBkaXNhc3NlbWJsaW5n
LiAKICoga2RiIGluZm8gcGFzc2VkIHZpYSBrZGJfdWRfcmRfaW5mb3t9IAogKi8Kc3RhdGljIGlu
dAprZGJfcmVhZF9ieXRlX2Zvcl91ZChzdHJ1Y3QgdWQgKnVkcCkKewogICAga2RiYnl0X3QgYnl0
ZWJ1ZjsKICAgIGRvbWlkX3QgZG9taWQgPSBrZGJfdWRfcmRfaW5mby5rdWRfZG9taWQ7CiAgICBr
ZGJ2YV90IGFkZHIgPSBrZGJfdWRfcmRfaW5mby5rdWRfaW5zdHJfYWRkcjsKCiAgICBpZiAoa2Ri
X3JlYWRfbWVtKGFkZHIsICZieXRlYnVmLCAxLCBkb21pZCkgPT0gMSkgewogICAgICAgIGtkYl91
ZF9yZF9pbmZvLmt1ZF9pbnN0cl9hZGRyKys7CiAgICAgICAgS0RCR1AxKCJ1ZHJkOmFkZHI6JWx4
IGRvbWlkOiVkIGJ5dDoleFxuIiwgYWRkciwgZG9taWQsIGJ5dGVidWYpOwogICAgICAgIHJldHVy
biBieXRlYnVmOwogICAgfQogICAgS0RCR1AxKCJ1ZHJkOmFkZHI6JWx4IGRvbWlkOiVkIGVyclxu
IiwgYWRkciwgZG9taWQpOwogICAgcmV0dXJuIFVEX0VPSTsKfQoKLyogCiAqIGdpdmVuIGEgZG9t
aWQsIGNvbnZlcnQgYWRkciB0byBzeW1ib2wgYW5kIHByaW50IGl0IAogKiBFZzogZmZmZjgyOGM4
MDEyMzVlMjogaWRsZV9sb29wKzUyICAgICAgICAgICAgICAgICAgam1wICBpZGxlX2xvb3ArNTUK
ICogICAgQ2FsbGVkIHR3aWNlIGhlcmUgZm9yIGlkbGVfbG9vcC4gSW4gZmlyc3QgY2FzZSwgbmwg
aXMgbnVsbCwgCiAqICAgIGluIHRoZSBzZWNvbmQgY2FzZSBubCA9PSAnXG4nCiAqLwp2b2lkCmtk
Yl9wcm50X2FkZHIyc3ltKGRvbWlkX3QgZG9taWQsIGtkYnZhX3QgYWRkciwgY2hhciAqbmwpCnsK
ICAgIHVuc2lnbmVkIGxvbmcgc3osIG9mZnM7CiAgICBjaGFyIGJ1ZltLU1lNX05BTUVfTEVOKzFd
LCBwYnVmWzE1MF0sIHByZWZpeFs4XTsKICAgIGNoYXIgKnAgPSBidWY7CgogICAgcHJlZml4WzBd
PSdcMCc7CiAgICBpZiAoZG9taWQgIT0gRE9NSURfSURMRSkgewogICAgICAgIHNucHJpbnRmKHBy
ZWZpeCwgOCwgIiV4OiIsIGRvbWlkKTsKICAgICAgICBwID0ga2RiX2d1ZXN0X2FkZHIyc3ltKGFk
ZHIsIGRvbWlkLCAmb2Zmcyk7CiAgICB9IGVsc2UKICAgICAgICBzeW1ib2xzX2xvb2t1cChhZGRy
LCAmc3osICZvZmZzLCBidWYpOwoKICAgIHNucHJpbnRmKHBidWYsIDE1MCwgIiVzJXMrJWx4Iiwg
cHJlZml4LCBwLCBvZmZzKTsKICAgIGlmICgqbmwgIT0gJ1xuJykKICAgICAgICBrZGJwKCIlLTMw
cyVzIiwgcGJ1ZiwgbmwpOyAgLyogcHJpbnRzIG1vcmUgdGhhbiAzMCBpZiBuZWVkZWQgKi8KICAg
IGVsc2UKICAgICAgICBrZGJwKCIlcyVzIiwgcGJ1ZiwgbmwpOwoKfQoKc3RhdGljIGludAprZGJf
anVtcF9pbnN0cihlbnVtIHVkX21uZW1vbmljX2NvZGUgbW5lbW9uaWMpCnsKICAgIHJldHVybiAo
bW5lbW9uaWMgPj0gVURfSWpvICYmIG1uZW1vbmljIDw9IFVEX0lqbXApOwp9CgovKgogKiBwcmlu
dCBvbmUgaW5zdHI6IGZ1bmN0aW9uIHNvIHRoYXQgd2UgY2FuIHByaW50IG9mZnNldHMgb2Ygam1w
IGV0Yy4uIGFzCiAqICBzeW1ib2wrb2Zmc2V0IGluc3RlYWQgb2YganVzdCBhZGRyZXNzCiAqLwpz
dGF0aWMgdm9pZAprZGJfcHJpbnRfb25lX2luc3RyKHN0cnVjdCB1ZCAqdWRwLCBkb21pZF90IGRv
bWlkKQp7CiAgICBzaWduZWQgbG9uZyB2YWwgPSAwOwogICAgdWRfdHlwZV90IHR5cGUgPSB1ZHAt
Pm9wZXJhbmRbMF0udHlwZTsKCiAgICBpZiAoKHVkcC0+bW5lbW9uaWMgPT0gVURfSWNhbGwgfHwg
a2RiX2p1bXBfaW5zdHIodWRwLT5tbmVtb25pYykpICYmCiAgICAgICAgdHlwZSA9PSBVRF9PUF9K
SU1NKSB7CiAgICAgICAgCiAgICAgICAgaW50IHN6ID0gdWRwLT5vcGVyYW5kWzBdLnNpemU7CiAg
ICAgICAgY2hhciAqcCwgaWJ1Zls0MF0sICpxID0gaWJ1ZjsKICAgICAgICBrZGJ2YV90IGFkZHI7
CgogICAgICAgIGlmIChzeiA9PSA4KSB2YWwgPSB1ZHAtPm9wZXJhbmRbMF0ubHZhbC5zYnl0ZTsK
ICAgICAgICBlbHNlIGlmIChzeiA9PSAxNikgdmFsID0gdWRwLT5vcGVyYW5kWzBdLmx2YWwuc3dv
cmQ7CiAgICAgICAgZWxzZSBpZiAoc3ogPT0gMzIpIHZhbCA9IHVkcC0+b3BlcmFuZFswXS5sdmFs
LnNkd29yZDsKICAgICAgICBlbHNlIGlmIChzeiA9PSA2NCkgdmFsID0gdWRwLT5vcGVyYW5kWzBd
Lmx2YWwuc3F3b3JkOwogICAgICAgIGVsc2Uga2RicCgia2RiX3ByaW50X29uZV9pbnN0cjogSW52
YWwgc3o6eiVkXG4iLCBzeik7CgogICAgICAgIGFkZHIgPSB1ZHAtPnBjICsgdmFsOwogICAgICAg
IGZvcihwPXVkX2luc25fYXNtKHVkcCk7ICgqcT0qcCkgJiYgKnAhPScgJzsgcCsrLHErKyk7CiAg
ICAgICAgKnE9J1wwJzsKICAgICAgICBrZGJwKCIgJS00cyAiLCBpYnVmKTsgICAgLyogc3BhY2Ug
YmVmb3JlIGZvciBsb25nIGZ1bmMgbmFtZXMgKi8KICAgICAgICBrZGJfcHJudF9hZGRyMnN5bShk
b21pZCwgYWRkciwgIlxuIik7CiAgICB9IGVsc2UKICAgICAgICBrZGJwKCIgJS0yNHNcbiIsIHVk
X2luc25fYXNtKHVkcCkpOwojaWYgMAogICAga2RicCgibW5lbW9uaWM6eiVkICIsIHVkcC0+bW5l
bW9uaWMpOwogICAgaWYgKHR5cGUgPT0gVURfT1BfQ09OU1QpIGtkYnAoInR5cGUgaXMgY29uc3Rc
biIpOwogICAgZWxzZSBpZiAodHlwZSA9PSBVRF9PUF9KSU1NKSBrZGJwKCJ0eXBlIGlzIEpJTU1c
biIpOwogICAgZWxzZSBpZiAodHlwZSA9PSBVRF9PUF9JTU0pIGtkYnAoInR5cGUgaXMgSU1NXG4i
KTsKICAgIGVsc2UgaWYgKHR5cGUgPT0gVURfT1BfUFRSKSBrZGJwKCJ0eXBlIGlzIFBUUlxuIik7
CiNlbmRpZgp9CgpzdGF0aWMgdm9pZAprZGJfc2V0dXBfdWQoc3RydWN0IHVkICp1ZHAsIGtkYnZh
X3QgYWRkciwgZG9taWRfdCBkb21pZCkKewogICAgaW50IGJpdG5lc3MgPSBrZGJfZ3Vlc3RfYml0
bmVzcyhkb21pZCk7CiAgICB1aW50IHZlbmRvciA9IChib290X2NwdV9kYXRhLng4Nl92ZW5kb3Ig
PT0gWDg2X1ZFTkRPUl9BTUQpID8KICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgIFVEX1ZFTkRPUl9BTUQgOiBVRF9WRU5ET1JfSU5URUw7CgogICAgS0RCR1AxKCJzZXR1
cF91ZDpkb21pZDolZCBiaXRuZXNzOiVkIGFkZHI6JWx4XG4iLCBkb21pZCwgYml0bmVzcywgYWRk
cik7CiAgICB1ZF9pbml0KHVkcCk7CiAgICB1ZF9zZXRfbW9kZSh1ZHAsIGtkYl9ndWVzdF9iaXRu
ZXNzKGRvbWlkKSk7CiAgICB1ZF9zZXRfc3ludGF4KHVkcCwgZGlzX3N5bnRheCk7IAogICAgdWRf
c2V0X3ZlbmRvcih1ZHAsIHZlbmRvcik7ICAgICAgICAgICAvKiBIVk06IHZteC9zdm0gZGlmZmVy
ZW50IGluc3RycyovCiAgICB1ZF9zZXRfcGModWRwLCBhZGRyKTsgICAgICAgICAgICAgICAgIC8q
IGZvciBudW1iZXJzIHByaW50ZWQgb24gbGVmdCAqLwogICAgdWRfc2V0X2lucHV0X2hvb2sodWRw
LCBrZGJfcmVhZF9ieXRlX2Zvcl91ZCk7CiAgICBrZGJfdWRfcmRfaW5mby5rdWRfaW5zdHJfYWRk
ciA9IGFkZHI7CiAgICBrZGJfdWRfcmRfaW5mby5rdWRfZG9taWQgPSBkb21pZDsKfQoKLyoKICog
Z2l2ZW4gYW4gYWRkciwgcHJpbnQgZ2l2ZW4gbnVtYmVyIG9mIGluc3RydWN0aW9ucy4KICogUmV0
dXJuczogYWRkcmVzcyBvZiBuZXh0IGluc3RydWN0aW9uIGluIHRoZSBzdHJlYW0KICovCmtkYnZh
X3QKa2RiX3ByaW50X2luc3RyKGtkYnZhX3QgYWRkciwgbG9uZyBudW0sIGRvbWlkX3QgZG9taWQp
CnsKICAgIHN0cnVjdCB1ZCB1ZF9zOwoKICAgIEtEQkdQMSgicHJpbnRfaW5zdHI6YWRkcjoweCVs
eCBudW06JWxkIGRvbWlkOiV4XG4iLCBhZGRyLCBudW0sIGRvbWlkKTsKCiAgICBrZGJfc2V0dXBf
dWQoJnVkX3MsIGFkZHIsIGRvbWlkKTsKICAgIHdoaWxlKG51bS0tKSB7CiAgICAgICAgaWYgKHVk
X2Rpc2Fzc2VtYmxlKCZ1ZF9zKSkgewogICAgICAgICAgICB1aW50NjRfdCBwYyA9IHVkX2luc25f
b2ZmKCZ1ZF9zKTsKICAgICAgICAgICAgLyoga2RicCgiJTA4eDogIiwoaW50KXBjKTsgKi8KICAg
ICAgICAgICAga2RicCgiJTAxNmx4OiAiLCBwYyk7CiAgICAgICAgICAgIGtkYl9wcm50X2FkZHIy
c3ltKGRvbWlkLCBwYywgIiIpOwogICAgICAgICAgICBrZGJfcHJpbnRfb25lX2luc3RyKCZ1ZF9z
LCBkb21pZCk7CiAgICAgICAgfSBlbHNlCiAgICAgICAgICAgIGtkYnAoIktEQjpDb3VsZG4ndCBk
aXNhc3NlbWJsZSBQQzoweCVseFxuIiwgYWRkcik7CiAgICAgICAgICAgIC8qIGZvciBzdGFjayBy
ZWFkcywgZG9uJ3QgYWx3YXlzIGRpc3BsYXkgZXJyb3IgKi8KICAgIH0KICAgIEtEQkdQMSgicHJp
bnRfaW5zdHI6a3VkYWRkcjoweCVseFxuIiwga2RiX3VkX3JkX2luZm8ua3VkX2luc3RyX2FkZHIp
OwogICAgcmV0dXJuIGtkYl91ZF9yZF9pbmZvLmt1ZF9pbnN0cl9hZGRyOwp9Cgp2b2lkCmtkYl9k
aXNwbGF5X3BjKHN0cnVjdCBjcHVfdXNlcl9yZWdzICpyZWdzKQp7ICAgCiAgICBkb21pZF90IGRv
bWlkOwogICAgc3RydWN0IGNwdV91c2VyX3JlZ3MgcmVnczEgPSAqcmVnczsKICAgIGRvbWlkID0g
Z3Vlc3RfbW9kZShyZWdzKSA/IGN1cnJlbnQtPmRvbWFpbi0+ZG9tYWluX2lkIDogRE9NSURfSURM
RTsKCiAgICByZWdzMS5LREJJUCA9IHJlZ3MtPktEQklQOwogICAga2RiX3ByaW50X2luc3RyKHJl
Z3MxLktEQklQLCAxLCBkb21pZCk7Cn0KCi8qIGNoZWNrIGlmIHRoZSBpbnN0ciBhdCB0aGUgYWRk
ciBpcyBjYWxsIGluc3RydWN0aW9uCiAqIFJFVFVSTlM6IHNpemUgb2YgdGhlIGluc3RyIGlmIGl0
J3MgYSBjYWxsIGluc3RyLCBlbHNlIDAKICovCmludAprZGJfY2hlY2tfY2FsbF9pbnN0cihkb21p
ZF90IGRvbWlkLCBrZGJ2YV90IGFkZHIpCnsKICAgIHN0cnVjdCB1ZCB1ZF9zOwogICAgaW50IHN6
OwoKICAgIGtkYl9zZXR1cF91ZCgmdWRfcywgYWRkciwgZG9taWQpOwogICAgaWYgKChzej11ZF9k
aXNhc3NlbWJsZSgmdWRfcykpICYmIHVkX3MubW5lbW9uaWMgPT0gVURfSWNhbGwpCiAgICAgICAg
cmV0dXJuIChzeik7CiAgICByZXR1cm4gMDsKfQoKLyogdG9nZ2xlIEFUVCBhbmQgSW50ZWwgc3lu
dGF4ZXMgKi8Kdm9pZAprZGJfdG9nZ2xlX2Rpc19zeW50YXgodm9pZCkKewogICAgaWYgKGRpc19z
eW50YXggPT0gVURfU1lOX0lOVEVMKSB7CiAgICAgICAgZGlzX3N5bnRheCA9IFVEX1NZTl9BVFQ7
CiAgICAgICAga2RicCgiZGlzIHN5bnRheCBub3cgc2V0IHRvIEFUVCAoR2FzKVxuIik7CiAgICB9
IGVsc2UgewogICAgICAgIGRpc19zeW50YXggPSBVRF9TWU5fSU5URUw7CiAgICAgICAga2RicCgi
ZGlzIHN5bnRheCBub3cgc2V0IHRvIEludGVsIChOQVNNKVxuIik7CiAgICB9Cn0KAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABrZGIv
eDg2L2tkYl93cC5jAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAMDAwMDY2NAAwMDAyNzU2ADAw
MDI3NTYAMDAwMDAwMjA1MzYAMTE3NjU0NjU1NTYAMDEzNjQxACAwAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHVzdGFyICAAbXJhdGhvcgAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAABtcmF0aG9yAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAC8qCiAq
IENvcHlyaWdodCAoQykgMjAwOSwgTXVrZXNoIFJhdGhvciwgT3JhY2xlIENvcnAuICBBbGwgcmln
aHRzIHJlc2VydmVkLgogKgogKiBUaGlzIHByb2dyYW0gaXMgZnJlZSBzb2Z0d2FyZTsgeW91IGNh
biByZWRpc3RyaWJ1dGUgaXQgYW5kL29yCiAqIG1vZGlmeSBpdCB1bmRlciB0aGUgdGVybXMgb2Yg
dGhlIEdOVSBHZW5lcmFsIFB1YmxpYwogKiBMaWNlbnNlIHYyIGFzIHB1Ymxpc2hlZCBieSB0aGUg
RnJlZSBTb2Z0d2FyZSBGb3VuZGF0aW9uLgogKgogKiBUaGlzIHByb2dyYW0gaXMgZGlzdHJpYnV0
ZWQgaW4gdGhlIGhvcGUgdGhhdCBpdCB3aWxsIGJlIHVzZWZ1bCwKICogYnV0IFdJVEhPVVQgQU5Z
IFdBUlJBTlRZOyB3aXRob3V0IGV2ZW4gdGhlIGltcGxpZWQgd2FycmFudHkgb2YKICogTUVSQ0hB
TlRBQklMSVRZIG9yIEZJVE5FU1MgRk9SIEEgUEFSVElDVUxBUiBQVVJQT1NFLiAgU2VlIHRoZSBH
TlUKICogR2VuZXJhbCBQdWJsaWMgTGljZW5zZSBmb3IgbW9yZSBkZXRhaWxzLgogKgogKiBZb3Ug
c2hvdWxkIGhhdmUgcmVjZWl2ZWQgYSBjb3B5IG9mIHRoZSBHTlUgR2VuZXJhbCBQdWJsaWMKICog
TGljZW5zZSBhbG9uZyB3aXRoIHRoaXMgcHJvZ3JhbTsgaWYgbm90LCB3cml0ZSB0byB0aGUKICog
RnJlZSBTb2Z0d2FyZSBGb3VuZGF0aW9uLCBJbmMuLCA1OSBUZW1wbGUgUGxhY2UgLSBTdWl0ZSAz
MzAsCiAqIEJvc3RvbiwgTUEgMDIxMTEwLTEzMDcsIFVTQS4KICovCgojaW5jbHVkZSAiLi4vaW5j
bHVkZS9rZGJpbmMuaCIKCiNpZiAwCiNkZWZpbmUgRFI2X0JUICAweDAwMDA4MDAwCiNkZWZpbmUg
RFI2X0JTICAweDAwMDA0MDAwCiNkZWZpbmUgRFI2X0JEICAweDAwMDAyMDAwCiNlbmRpZgojZGVm
aW5lIERSNl9CMyAgMHgwMDAwMDAwOAojZGVmaW5lIERSNl9CMiAgMHgwMDAwMDAwNAojZGVmaW5l
IERSNl9CMSAgMHgwMDAwMDAwMgojZGVmaW5lIERSNl9CMCAgMHgwMDAwMDAwMQoKI2RlZmluZSBL
REJfTUFYV1AgNCAgICAgICAgICAgICAgICAgICAgICAgICAgLyogRFIwIHRocnUgRFIzICovCgpz
dHJ1Y3Qga2RiX3dwIHsKICAgIGtkYm1hX3QgIHdwX2FkZHI7CiAgICBpbnQgICAgICB3cF9yd2Zs
YWc7CiAgICBpbnQgICAgICB3cF9sZW47CiAgICBpbnQgICAgICB3cF9kZWxldGVkOyAgICAgICAg
ICAgICAgICAgICAgIC8qIHBlbmRpbmcgZGVsZXRlICovCn07CnN0YXRpYyBzdHJ1Y3Qga2RiX3dw
IGtkYl93cGFbS0RCX01BWFdQXTsKCi8qIGZvbGxvd2luZyBiZWNhdXNlIHZtY3MgaGFzIGl0J3Mg
b3duIGRyNy4gd2hlbiB2bWNzIHJ1bnMsIGl0IG1lc3NlcyB1cCB0aGUKICogbmF0aXZlIGRyNyBz
byB3ZSBuZWVkIHRvIHNhdmUvcmVzdG9yZSBpdCAqLwp1bnNpZ25lZCBsb25nIGtkYl9kcjc7CgoK
LyogU2V0IEcwLUczIGJpdHMgaW4gRFI3LiB0aGlzIGRvZXMgZ2xvYmFsIGVuYWJsZSBvZiB0aGUg
Y29ycmVzcG9uZGluZyB3cCAqLwpzdGF0aWMgdm9pZAprZGJfc2V0X2d4X2luX2RyNyhpbnQgcmVn
bm8sIGtkYm1hX3QgKmRyN3ApCnsKICAgIGlmIChyZWdubyA9PSAwKQogICAgICAgICpkcjdwID0g
KmRyN3AgfCAweDI7CiAgICBlbHNlIGlmIChyZWdubyA9PSAxKQogICAgICAgICpkcjdwID0gKmRy
N3AgfCAweDg7CiAgICBlbHNlIGlmIChyZWdubyA9PSAyKQogICAgICAgICpkcjdwID0gKmRyN3Ag
fCAweDIwOwogICAgZWxzZSBpZiAocmVnbm8gPT0gMykKICAgICAgICAqZHI3cCA9ICpkcjdwIHwg
MHg4MDsKfQoKLyogU2V0IExFTjAgLSBMRU4zIHBhaXIgYml0cyBpbiBEUjcgKGxlbiBzaG91bGQg
YmUgMSAyIDQgb3IgOCkgKi8Kc3RhdGljIHZvaWQKa2RiX3NldF9sZW5faW5fZHI3KGludCByZWdu
bywga2RibWFfdCAqZHI3cCwgaW50IGxlbikKewogICAgaW50IGxlbmJpdHMgPSAobGVuID09IDgp
ID8gMiA6IGxlbi0xOwoKICAgICpkcjdwICY9IH4oMHgzIDw8ICgxOCArIDQqcmVnbm8pKTsKICAg
ICpkcjdwIHw9ICgodWxvbmcpKGxlbmJpdHMgJiAweDMpIDw8ICgxOCArIDQqcmVnbm8pKTsKfQoK
c3RhdGljIHZvaWQKa2RiX3NldF9kcjdfcncoaW50IHJlZ25vLCBrZGJtYV90ICpkcjdwLCBpbnQg
cncpCnsKICAgICpkcjdwICY9IH4oMHgzIDw8ICgxNiArIDQqcmVnbm8pKTsKICAgICpkcjdwIHw9
ICgodWxvbmcpKHJ3ICYgMHgzKSkgPDwgKDE2ICsgNCpyZWdubyk7Cn0KCi8qIGdldCB2YWx1ZSBv
ZiBhIGRlYnVnIHJlZ2lzdGVyOiBEUjAtRFIzIERSNiBEUjcuIG90aGVyIHZhbHVlcyByZXR1cm4g
MCAqLwprZGJtYV90CmtkYl9yZF9kYmdyZWcoaW50IHJlZ251bSkKewogICAga2RibWFfdCBjb250
ZW50cyA9IDA7CgogICAgaWYgKHJlZ251bSA9PSAwKQogICAgICAgIF9fYXNtX18gKCJtb3ZxICUl
ZGIwLCUwXG5cdCI6Ij1yIihjb250ZW50cykpOwogICAgZWxzZSBpZiAocmVnbnVtID09IDEpCiAg
ICAgICAgX19hc21fXyAoIm1vdnEgJSVkYjEsJTBcblx0IjoiPXIiKGNvbnRlbnRzKSk7CiAgICBl
bHNlIGlmIChyZWdudW0gPT0gMikKICAgICAgICBfX2FzbV9fICgibW92cSAlJWRiMiwlMFxuXHQi
OiI9ciIoY29udGVudHMpKTsKICAgIGVsc2UgaWYgKHJlZ251bSA9PSAzKQogICAgICAgIF9fYXNt
X18gKCJtb3ZxICUlZGIzLCUwXG5cdCI6Ij1yIihjb250ZW50cykpOwogICAgZWxzZSBpZiAocmVn
bnVtID09IDYpCiAgICAgICAgX19hc21fXyAoIm1vdnEgJSVkYjYsJTBcblx0IjoiPXIiKGNvbnRl
bnRzKSk7CiAgICBlbHNlIGlmIChyZWdudW0gPT0gNykKICAgICAgICBfX2FzbV9fICgibW92cSAl
JWRiNywlMFxuXHQiOiI9ciIoY29udGVudHMpKTsKCiAgICByZXR1cm4gY29udGVudHM7Cn0KCnN0
YXRpYyB2b2lkCmtkYl93cl9kYmdyZWcoaW50IHJlZ251bSwga2RibWFfdCBjb250ZW50cykKewog
ICAgaWYgKHJlZ251bSA9PSAwKQogICAgICAgIF9fYXNtX18gKCJtb3ZxICUwLCUlZGIwXG5cdCI6
OiJyIihjb250ZW50cykpOwogICAgZWxzZSBpZiAocmVnbnVtID09IDEpCiAgICAgICAgX19hc21f
XyAoIm1vdnEgJTAsJSVkYjFcblx0Ijo6InIiKGNvbnRlbnRzKSk7CiAgICBlbHNlIGlmIChyZWdu
dW0gPT0gMikKICAgICAgICBfX2FzbV9fICgibW92cSAlMCwlJWRiMlxuXHQiOjoiciIoY29udGVu
dHMpKTsKICAgIGVsc2UgaWYgKHJlZ251bSA9PSAzKQogICAgICAgIF9fYXNtX18gKCJtb3ZxICUw
LCUlZGIzXG5cdCI6OiJyIihjb250ZW50cykpOwogICAgZWxzZSBpZiAocmVnbnVtID09IDYpCiAg
ICAgICAgX19hc21fXyAoIm1vdnEgJTAsJSVkYjZcblx0Ijo6InIiKGNvbnRlbnRzKSk7CiAgICBl
bHNlIGlmIChyZWdudW0gPT0gNykKICAgICAgICBfX2FzbV9fICgibW92cSAlMCwlJWRiN1xuXHQi
OjoiciIoY29udGVudHMpKTsKfQoKc3RhdGljIHZvaWQKa2RiX3ByaW50X3dwX2luZm8oY2hhciAq
c3RycCwgaW50IGlkeCkKewogICAga2RicCgiJXNbJWRdOiUwMTZseCBsZW46JWQgIiwgc3RycCwg
aWR4LCBrZGJfd3BhW2lkeF0ud3BfYWRkciwKICAgICAgICAga2RiX3dwYVtpZHhdLndwX2xlbik7
CiAgICBpZiAoa2RiX3dwYVtpZHhdLndwX3J3ZmxhZyA9PSAxKQogICAgICAgIGtkYnAoIm9uIGRh
dGEgd3JpdGUgb25seVxuIik7CiAgICBlbHNlIGlmIChrZGJfd3BhW2lkeF0ud3BfcndmbGFnID09
IDIpCiAgICAgICAga2RicCgib24gSU8gcmVhZC93cml0ZVxuIik7CiAgICBlbHNlIAogICAgICAg
IGtkYnAoIm9uIGRhdGEgcmVhZC93cml0ZVxuIik7Cn0KCi8qCiAqIFJldHVybnMgOiAwIGlmIG5v
dCBvbmUgb2Ygb3VycwogKiAgICAgICAgICAgMSBpZiBvbmUgb2Ygb3VycwogKi8KaW50CmtkYl9j
aGVja193YXRjaHBvaW50cyhzdHJ1Y3QgY3B1X3VzZXJfcmVncyAqcmVncykKewogICAgaW50IHdw
bnVtOwogICAga2RibWFfdCBkcjYgPSBrZGJfcmRfZGJncmVnKDYpOwoKICAgIEtEQkdQMSgiY2hl
Y2tfd3A6IElQOiVseCBFRkxBR1M6JWx4XG4iLCByZWdzLT5yaXAsIHJlZ3MtPnJmbGFncyk7CiAg
ICBpZiAoZHI2ICYgRFI2X0IwKQogICAgICAgIHdwbnVtID0gMDsKICAgIGVsc2UgaWYgKGRyNiAm
IERSNl9CMSkKICAgICAgICB3cG51bSA9IDE7CiAgICBlbHNlIGlmIChkcjYgJiBEUjZfQjIpCiAg
ICAgICAgd3BudW0gPSAyOwogICAgZWxzZSBpZiAoZHI2ICYgRFI2X0IzKQogICAgICAgIHdwbnVt
ID0gMzsKICAgIGVsc2UKICAgICAgICByZXR1cm4gMDsKCiAgICBrZGJfcHJpbnRfd3BfaW5mbygi
V2F0Y2hwb2ludCAiLCB3cG51bSk7CiAgICByZXR1cm4gMTsKfQoKLyogc2V0IGEgd2F0Y2hwb2lu
dCBhdCBhIGdpdmVuIGFkZHJlc3MgCiAqIFByZUNvbmRpdGlvbjogYWRkciAhPSAwICovCnN0YXRp
YyB2b2lkCmtkYl9zZXRfd3Aoa2RidmFfdCBhZGRyLCBpbnQgcndmbGFnLCBpbnQgbGVuKQp7CiAg
ICBpbnQgcmVnbm87CgogICAgZm9yIChyZWdubz0wOyByZWdubyA8IEtEQl9NQVhXUDsgcmVnbm8r
KykgewogICAgICAgIGlmIChrZGJfd3BhW3JlZ25vXS53cF9hZGRyID09IGFkZHIgJiYgIWtkYl93
cGFbcmVnbm9dLndwX2RlbGV0ZWQpIHsKICAgICAgICAgICAga2RicCgiV2F0Y2hwb2ludCBhbHJl
YWR5IHNldFxuIik7CiAgICAgICAgICAgIHJldHVybjsKICAgICAgICB9CiAgICAgICAgaWYgKGtk
Yl93cGFbcmVnbm9dLndwX2RlbGV0ZWQpCiAgICAgICAgICAgIG1lbXNldCgma2RiX3dwYVtyZWdu
b10sIDAsIHNpemVvZihrZGJfd3BhW3JlZ25vXSkpOwogICAgfQogICAgZm9yIChyZWdubz0wOyBy
ZWdubyA8IEtEQl9NQVhXUCAmJiBrZGJfd3BhW3JlZ25vXS53cF9hZGRyOyByZWdubysrKTsKICAg
IGlmIChyZWdubyA+PSBLREJfTUFYV1ApIHsKICAgICAgICBrZGJwKCJ3YXRjaHBvaW50IHRhYmxl
IGZ1bGwuIGxpbWl0OiVkXG4iLCBLREJfTUFYV1ApOwogICAgICAgIHJldHVybjsKICAgIH0KICAg
IGtkYl93cGFbcmVnbm9dLndwX2FkZHIgPSBhZGRyOwogICAga2RiX3dwYVtyZWdub10ud3Bfcndm
bGFnID0gcndmbGFnOwogICAga2RiX3dwYVtyZWdub10ud3BfbGVuID0gbGVuOwogICAga2RiX3By
aW50X3dwX2luZm8oIldhdGNocG9pbnQgc2V0ICIsIHJlZ25vKTsKfQoKLyogd3JpdGUgcmVnIERS
MC0zIHdpdGggYWRkcmVzcy4gVXBkYXRlIGNvcnJlc3BvbmRpbmcgYml0cyBpbiBEUjcgKi8Kc3Rh
dGljIHZvaWQKa2RiX2luc3RhbGxfd2F0Y2hwb2ludChpbnQgcmVnbm8sIGtkYm1hX3QgKmRyN3Ap
CnsKICAgIGtkYl9zZXRfZ3hfaW5fZHI3KHJlZ25vLCBkcjdwKTsKICAgIGtkYl9zZXRfbGVuX2lu
X2RyNyhyZWdubywgZHI3cCwga2RiX3dwYVtyZWdub10ud3BfbGVuKTsgCiAgICBrZGJfc2V0X2Ry
N19ydyhyZWdubywgZHI3cCwga2RiX3dwYVtyZWdub10ud3BfcndmbGFnKTsKICAgIGtkYl93cl9k
YmdyZWcocmVnbm8sIGtkYl93cGFbcmVnbm9dLndwX2FkZHIpOwoKICAgIEtEQkdQMSgiY2NwdTol
ZCBpbnN0YWxsZWQgd3AuIGFkZHI6JWx4IHJ3OiV4IGxlbjoleCBkcjc6JTAxNmx4XG4iLAogICAg
ICAgICAgIHNtcF9wcm9jZXNzb3JfaWQoKSwga2RiX3dwYVtyZWdub10ud3BfYWRkciwgCiAgICAg
ICAgICAga2RiX3dwYVtyZWdub10ud3BfcndmbGFnLCBrZGJfd3BhW3JlZ25vXS53cF9sZW4sICpk
cjdwKTsKfQoKLyogY2xlYXIgRzAtRzMgYml0cyBpbiBEUjcgZm9yIGdpdmVuIERSMC0zICovCnN0
YXRpYyB2b2lkCmtkYl9jbGVhcl9kcjdfZ3goaW50IHJlZ25vLCBrZGJtYV90ICpkcjdwKQp7CiAg
ICBpZiAocmVnbm8gPT0gMCkKICAgICAgICAqZHI3cCA9ICpkcjdwICYgfjB4MjsKICAgIGVsc2Ug
aWYgKHJlZ25vID09IDEpCiAgICAgICAgKmRyN3AgPSAqZHI3cCAmIH4weDg7CiAgICBlbHNlIGlm
IChyZWdubyA9PSAyKQogICAgICAgICpkcjdwID0gKmRyN3AgJiB+MHgyMDsKICAgIGVsc2UgaWYg
KHJlZ25vID09IDMpCiAgICAgICAgKmRyN3AgPSAqZHI3cCAmIH4weDgwOwp9CgovKiB1cGRhdGUg
ZHI3IG9uY2UsIGFzIGl0J3Mgc2xvdyB0byB1cGRhdGUgZGVidWcgcmVncyBhbmQgY3B1J3Mgd2ls
bCBzdGlsbCBiZSAKICogcGF1c2VkIHdoZW4gbGVhdmluZyBrZGIuCiAqCiAqIEp1c3QgbGVhdmUg
RFIwLTMgY2xvYmJlcmVkIGJ1dCByZW1vdmUgYml0cyBmcm9tIERSNyB0byBkaXNhYmxlIHdwIAog
Ki8Kdm9pZAprZGJfaW5zdGFsbF93YXRjaHBvaW50cyh2b2lkKQp7CiAgICBpbnQgcmVnbm87CiAg
ICBrZGJtYV90IGRyNyA9IGtkYl9yZF9kYmdyZWcoNyk7CgogICAgZm9yIChyZWdubz0wOyByZWdu
byA8IEtEQl9NQVhXUDsgcmVnbm8rKykgewogICAgICAgIC8qIGRvIG5vdCBjbGVhciB3cF9kZWxl
dGVkIGhlcmUgYXMgYWxsIGNwdXMgbXVzdCBjbGVhciB3cHMgKi8KICAgICAgICBpZiAoa2RiX3dw
YVtyZWdub10ud3BfZGVsZXRlZCkgewogICAgICAgICAgICBrZGJfY2xlYXJfZHI3X2d4KHJlZ25v
LCAmZHI3KTsKICAgICAgICAgICAgY29udGludWU7CiAgICAgICAgfQogICAgICAgIGlmIChrZGJf
d3BhW3JlZ25vXS53cF9hZGRyKQogICAgICAgICAgICBrZGJfaW5zdGFsbF93YXRjaHBvaW50KHJl
Z25vLCAmZHI3KTsKICAgIH0KICAgIC8qIGFsd2F5cyBjbGVhciBEUjYgd2hlbiBsZWF2aW5nICov
CiAgICBrZGJfd3JfZGJncmVnKDYsIDApOwogICAga2RiX3dyX2RiZ3JlZyg3LCBkcjcpOwoKICAg
IGlmIChkcjcgJiBEUjdfQUNUSVZFX01BU0spCiAgICAgICAga2RiX2RyNyA9IGRyNzsKICAgIGVs
c2UKICAgICAgICBrZGJfZHI3ID0gMDsKI2lmIDAKICAgIGZvcihkcD1kb21haW5fbGlzdDsgZHA7
IGRwPWRwLT5uZXh0X2luX2xpc3QpIHsKICAgICAgICBzdHJ1Y3QgdmNwdSAqdnA7CiAgICAgICAg
Zm9yX2VhY2hfdmNwdShkcCwgdnApIHsKICAgICAgICAgICAgZm9yIChyZWdubz0wOyByZWdubyA8
IEtEQl9NQVhXUDsgcmVnbm8rKykKICAgICAgICAgICAgICAgIHZwLT5hcmNoLmd1ZXN0X2NvbnRl
eHQuZGVidWdyZWdbcmVnbm9dID0ga2RiX3dwYVtyZWdub10ud3BfYWRkcjsKCiAgICAgICAgICAg
IHZwLT5hcmNoLmd1ZXN0X2NvbnRleHQuZGVidWdyZWdbNl0gPSAwOwogICAgICAgICAgICB2cC0+
YXJjaC5ndWVzdF9jb250ZXh0LmRlYnVncmVnWzddID0gZHI3OwogICAgICAgICAgICBLREJHUCgi
a2RiX2luc3RhbGxfd2F0Y2hwb2ludHMoKTogdjolcCBkcjc6JWx4XG4iLCB2cCwgZHI3KTsKICAg
ICAgICAgICAgLyogaHZtX3NldF9pbmZvX2d1ZXN0KHZwKTs6IENhbid0IGJlY2F1c2UgY2FuJ3Qg
dm1jc19lbnRlciBpbiBrZGIgKi8KICAgICAgICB9CiAgICB9CiNlbmRpZgp9CgovKiBjbGVhciB3
YXRjaHBvaW50L3MuIHdwbnVtID09IC0xIHRvIGNsZWFyIGFsbCB3YXRjaHBvaW50cyAqLwp2b2lk
CmtkYl9jbGVhcl93cHMoaW50IHdwbnVtKQp7CiAgICBpbnQgaTsKCiAgICBpZiAod3BudW0gPj0g
S0RCX01BWFdQKSB7CiAgICAgICAga2RicCgiSW52YWxpZCB3cG51bSAlZFxuIiwgd3BudW0pOwog
ICAgICAgIHJldHVybjsKICAgIH0KICAgIGlmICh3cG51bSA+PTApIHsKICAgICAgICBpZiAoa2Ri
X3dwYVt3cG51bV0ud3BfYWRkcikgewogICAgICAgICAgICBrZGJfd3BhW3dwbnVtXS53cF9kZWxl
dGVkID0gMTsKICAgICAgICAgICAga2RiX3ByaW50X3dwX2luZm8oIkRlbGV0ZWQgd2F0Y2hwb2lu
dCIsIHdwbnVtKTsKICAgICAgICB9IGVsc2UKICAgICAgICAgICAga2RicCgid2F0Y2hwb2ludCAl
ZCBub3Qgc2V0XG4iLCB3cG51bSk7CiAgICAgICAgcmV0dXJuOwogICAgfQogICAgZm9yIChpPTA7
IGkgPCBLREJfTUFYV1A7IGkrKykgewogICAgICAgIGlmIChrZGJfd3BhW2ldLndwX2FkZHIpIHsK
ICAgICAgICAgICAga2RiX3dwYVtpXS53cF9kZWxldGVkID0gMTsKICAgICAgICAgICAga2RiX3By
aW50X3dwX2luZm8oIkRlbGV0ZWQgd2F0Y2hwb2ludCIsIGkpOwogICAgICAgIH0KICAgIH0KfQoK
LyogZGlzcGxheSBhbnkgd2F0Y2hwb2ludHMgdGhhdCBhcmUgc2V0ICovCnN0YXRpYyB2b2lkCmtk
Yl9kaXNwbGF5X3dwcyh2b2lkKQp7CiAgICBpbnQgaTsKICAgIGZvciAoaT0wOyBpIDwgS0RCX01B
WFdQOyBpKyspCiAgICAgICAgaWYgKGtkYl93cGFbaV0ud3BfYWRkciAmJiAha2RiX3dwYVtpXS53
cF9kZWxldGVkKSAKICAgICAgICAgICAga2RiX3ByaW50X3dwX2luZm8oIiIsIGkpOwp9CgovKiAK
ICogRGlzcGxheSBvciBTZXQgaGFyZHdhcmUgYnJlYWtwb2ludHMsIGllLCB3YXRjaHBvaW50czoK
ICogICAtIFVwdG8gNCBhcmUgYWxsb3dlZAogKiAgIAogKiAgcndfZmxhZyBzaG91bGQgYmUgb25l
IG9mOiAKICogICAgIDAxID09IGJyZWFrIG9uIGRhdGEgd3JpdGUgb25seQogKiAgICAgMTAgPT0g
YnJlYWsgb24gSU8gcmVhZC93cml0ZQogKiAgICAgMTEgPT0gQnJlYWsgb24gZGF0YSByZWFkcyBv
ciB3cml0ZXMKICoKICogIGxlbiBzaG91bGQgYmUgb25lIG9mIDogMSAyIDQgOCAKICovCnZvaWQK
a2RiX2RvX3dhdGNocG9pbnRzKGtkYnZhX3QgYWRkciwgaW50IHJ3X2ZsYWcsIGludCBsZW4pCnsK
ICAgIGlmIChhZGRyID09IDApIHsKICAgICAgICBrZGJfZGlzcGxheV93cHMoKTsgICAgICAgIC8q
IGRpc3BsYXkgc2V0IHdhdGNocG9pbnRzICovCiAgICAgICAgcmV0dXJuOwogICAgfQogICAga2Ri
X3NldF93cChhZGRyLCByd19mbGFnLCBsZW4pOwogICAgcmV0dXJuOwp9CgoAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABrZGIveDg2L01ha2VmaWxlAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAMDAwMDY2NAAwMDAyNzU2ADAwMDI3NTYAMDAwMDAwMDAwNTUA
MTE3NjU0NjU1NTYAMDEzNjYxACAwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAHVzdGFyICAAbXJhdGhvcgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABtcmF0aG9yAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAApvYmoteSAgICA6PSBrZGJfd3Aubwpz
dWJkaXIteSArPSB1ZGlzODYtMS43CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAa2RiL2tkYl9jbWRzLmMAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAADAwMDA2NjQAMDAwMjc1NgAwMDAyNzU2ADAwMDAwMzU1NTcyADEy
MDE3NTAzNzExADAxMzUwMAAgMAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAB1c3RhciAgAG1yYXRob3IAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAbXJhdGhvcgAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAvKgogKiBDb3B5cmlnaHQgKEMpIDIwMDks
IE11a2VzaCBSYXRob3IsIE9yYWNsZSBDb3JwLiAgQWxsIHJpZ2h0cyByZXNlcnZlZC4KICoKICog
VGhpcyBwcm9ncmFtIGlzIGZyZWUgc29mdHdhcmU7IHlvdSBjYW4gcmVkaXN0cmlidXRlIGl0IGFu
ZC9vcgogKiBtb2RpZnkgaXQgdW5kZXIgdGhlIHRlcm1zIG9mIHRoZSBHTlUgR2VuZXJhbCBQdWJs
aWMKICogTGljZW5zZSB2MiBhcyBwdWJsaXNoZWQgYnkgdGhlIEZyZWUgU29mdHdhcmUgRm91bmRh
dGlvbi4KICoKICogVGhpcyBwcm9ncmFtIGlzIGRpc3RyaWJ1dGVkIGluIHRoZSBob3BlIHRoYXQg
aXQgd2lsbCBiZSB1c2VmdWwsCiAqIGJ1dCBXSVRIT1VUIEFOWSBXQVJSQU5UWTsgd2l0aG91dCBl
dmVuIHRoZSBpbXBsaWVkIHdhcnJhbnR5IG9mCiAqIE1FUkNIQU5UQUJJTElUWSBvciBGSVRORVNT
IEZPUiBBIFBBUlRJQ1VMQVIgUFVSUE9TRS4gIFNlZSB0aGUgR05VCiAqIEdlbmVyYWwgUHVibGlj
IExpY2Vuc2UgZm9yIG1vcmUgZGV0YWlscy4KICoKICogWW91IHNob3VsZCBoYXZlIHJlY2VpdmVk
IGEgY29weSBvZiB0aGUgR05VIEdlbmVyYWwgUHVibGljCiAqIExpY2Vuc2UgYWxvbmcgd2l0aCB0
aGlzIHByb2dyYW07IGlmIG5vdCwgd3JpdGUgdG8gdGhlCiAqIEZyZWUgU29mdHdhcmUgRm91bmRh
dGlvbiwgSW5jLiwgNTkgVGVtcGxlIFBsYWNlIC0gU3VpdGUgMzMwLAogKiBCb3N0b24sIE1BIDAy
MTExMC0xMzA3LCBVU0EuCiAqLwoKI2luY2x1ZGUgImluY2x1ZGUva2RiaW5jLmgiCgojaWYgZGVm
aW5lZChfX3g4Nl82NF9fKQogICAgI2RlZmluZSBLREJGNjQgIiVseCIKICAgICNkZWZpbmUgS0RC
RkwgIiUwMTZseCIgICAgICAgICAvKiBwcmludCBsb25nIGFsbCBkaWdpdHMgKi8KI2Vsc2UKICAg
ICNkZWZpbmUgS0RCRjY0ICIlbGx4IgogICAgI2RlZmluZSBLREJGTCAiJTA4bHgiCiNlbmRpZgoK
I2lmIFhFTl9TVUJWRVJTSU9OID4gNCB8fCBYRU5fVkVSU0lPTiA9PSA0ICAgICAgICAgICAgICAv
KiB4ZW4gMy41Lnggb3IgYWJvdmUgKi8KICAgICNkZWZpbmUgS0RCX0xLREVGKGwpICgobCkucmF3
LmxvY2spCiAgICAjZGVmaW5lIEtEQl9QR0xMRSh0KSAoKHQpLnRhaWwpICAgIC8qIHBhZ2UgbGlz
dCBsYXN0IGVsZW1lbnQgXiUkI0AgKi8KI2Vsc2UKICAgICNkZWZpbmUgS0RCX0xLREVGKGwpICgo
bCkubG9jaykKICAgICNkZWZpbmUgS0RCX1BHTExFKHQpICgodCkucHJldikgICAgLyogcGFnZSBs
aXN0IGxhc3QgZWxlbWVudCBeJSQjQCAqLwojZW5kaWYKCiNkZWZpbmUgS0RCX0NNRF9ISVNUT1JZ
X0NPVU5UICAgMzIKI2RlZmluZSBDTURfQlVGTEVOICAgICAgICAgICAgICAyMDAgICAgIC8qIGtk
Yl9wcmludGY6IG1heCBwcmludGxpbmUgPT0gMjU2ICovCgojZGVmaW5lIEtEQk1BWFNCUCAxNiAg
ICAgICAgICAgICAgICAgICAgLyogbWF4IG51bWJlciBvZiBzb2Z0d2FyZSBicmVha3BvaW50cyAq
LwojZGVmaW5lIEtEQl9NQVhBUkdDIDE2ICAgICAgICAgICAgICAgICAgLyogbWF4IGFyZ3MgaW4g
YSBrZGIgY29tbWFuZCAqLwojZGVmaW5lIEtEQl9NQVhCVFAgIDggICAgICAgICAgICAgICAgICAg
LyogbWF4IGRpc3BsYXkgYXJncyBpbiBidHAgKi8KCi8qIGNvbmRpdGlvbiBpczogJ3I2ID09IDB4
MTIzZicgb3IgJzB4ZmZmZmZmZmY4MjgwMDAwMCAhPSBkZWFkYmVlZicgICovCnN0cnVjdCBrZGJf
YnBjb25kIHsKICAgIGtkYmJ5dF90IGJwX2NvbmRfc3RhdHVzOyAgICAgICAvKiAwID09IG9mZiwg
MSA9PSByZWdpc3RlciwgMiA9PSBtZW1vcnkgKi8KICAgIGtkYmJ5dF90IGJwX2NvbmRfdHlwZTsg
ICAgICAgICAvKiAwID09IGJhZCwgMSA9PSBlcXVhbCwgMiA9PSBub3QgZXF1YWwgKi8KICAgIHVs
b25nICAgIGJwX2NvbmRfbGhzOyAgICAgICAgICAvKiBsaHMgb2YgY29uZGl0aW9uOiByZWcgb2Zm
c2V0IG9yIG1lbSBsb2MgKi8KICAgIHVsb25nICAgIGJwX2NvbmRfcmhzOyAgICAgICAgICAvKiBy
aWdodCBoYW5kIHNpZGUgb2YgY29uZGl0aW9uICovCn07CgovKiBzb2Z0d2FyZSBicmVha3BvaW50
IHN0cnVjdHVyZSAqLwpzdHJ1Y3Qga2RiX3NicmtwdCB7CiAgICBrZGJ2YV90ICBicF9hZGRyOyAg
ICAgICAgICAgICAgLyogYWRkcmVzcyB0aGUgYnAgaXMgc2V0IGF0ICovCiAgICBkb21pZF90ICBi
cF9kb21pZDsgICAgICAgICAgICAgLyogd2hpY2ggZG9tYWluIHRoZSBicCBiZWxvbmdzIHRvICov
CiAgICBrZGJieXRfdCBicF9vcmlnaW5zdDsgICAgICAgICAgLyogc2F2ZSBvcmlnIGluc3RyL3Mg
aGVyZSAqLwogICAga2RiYnl0X3QgYnBfZGVsZXRlZDsgICAgICAgICAgIC8qIGRlbGV0ZSBwZW5k
aW5nIG9uIHRoaXMgYnAgKi8KICAgIGtkYmJ5dF90IGJwX25pOyAgICAgICAgICAgICAgICAvKiBz
ZXQgZm9yIEtEQl9DUFVfTkkgKi8KICAgIGtkYmJ5dF90IGJwX2p1c3RfYWRkZWQ7ICAgICAgICAv
KiBhZGRlZCBpbiB0aGUgY3VycmVudCBrZGIgc2Vzc2lvbiAqLwogICAga2RiYnl0X3QgYnBfdHlw
ZTsgICAgICAgICAgICAgIC8qIDAgPSBub3JtYWwsIDEgPT0gY29uZCwgIDIgPT0gYnRwICovCiAg
ICB1bmlvbiB7CiAgICAgICAgc3RydWN0IGtkYl9icGNvbmQgYnBfY29uZDsKICAgICAgICB1bG9u
ZyAqYnBfYnRwOwogICAgfSB1Owp9OwoKLyogZG9uJ3QgdXNlIGttYWxsb2MgaW4ga2RiIHdoaWNo
IGhpamFja3MgYWxsIGNwdXMgKi8Kc3RhdGljIHVsb25nIGtkYl9idHBfYXJnc2FbS0RCTUFYU0JQ
XVtLREJfTUFYQlRQXTsKc3RhdGljIHVsb25nICprZGJfYnRwX2FwW0tEQk1BWFNCUF07CgpzdGF0
aWMgc3RydWN0IGtkYl9yZWdfbm1vZnMgewogICAgY2hhciAqcmVnX25tOwogICAgaW50IHJlZ19v
ZmZzOwp9IGtkYl9yZWdfbm1fb2Zmc1tdID0gIHsKICAgICAgIHsgInJheCIsIG9mZnNldG9mKHN0
cnVjdCBjcHVfdXNlcl9yZWdzLCByYXgpIH0sCiAgICAgICB7ICJyYngiLCBvZmZzZXRvZihzdHJ1
Y3QgY3B1X3VzZXJfcmVncywgcmJ4KSB9LAogICAgICAgeyAicmN4Iiwgb2Zmc2V0b2Yoc3RydWN0
IGNwdV91c2VyX3JlZ3MsIHJjeCkgfSwKICAgICAgIHsgInJkeCIsIG9mZnNldG9mKHN0cnVjdCBj
cHVfdXNlcl9yZWdzLCByZHgpIH0sCiAgICAgICB7ICJyc2kiLCBvZmZzZXRvZihzdHJ1Y3QgY3B1
X3VzZXJfcmVncywgcnNpKSB9LAogICAgICAgeyAicmRpIiwgb2Zmc2V0b2Yoc3RydWN0IGNwdV91
c2VyX3JlZ3MsIHJkaSkgfSwKICAgICAgIHsgInJicCIsIG9mZnNldG9mKHN0cnVjdCBjcHVfdXNl
cl9yZWdzLCByYnApIH0sCiAgICAgICB7ICJyc3AiLCBvZmZzZXRvZihzdHJ1Y3QgY3B1X3VzZXJf
cmVncywgcnNwKSB9LAogICAgICAgeyAicjgiLCAgb2Zmc2V0b2Yoc3RydWN0IGNwdV91c2VyX3Jl
Z3MsIHI4KSB9LAogICAgICAgeyAicjkiLCAgb2Zmc2V0b2Yoc3RydWN0IGNwdV91c2VyX3JlZ3Ms
IHI5KSB9LAogICAgICAgeyAicjEwIiwgb2Zmc2V0b2Yoc3RydWN0IGNwdV91c2VyX3JlZ3MsIHIx
MCkgfSwKICAgICAgIHsgInIxMSIsIG9mZnNldG9mKHN0cnVjdCBjcHVfdXNlcl9yZWdzLCByMTEp
IH0sCiAgICAgICB7ICJyMTIiLCBvZmZzZXRvZihzdHJ1Y3QgY3B1X3VzZXJfcmVncywgcjEyKSB9
LAogICAgICAgeyAicjEzIiwgb2Zmc2V0b2Yoc3RydWN0IGNwdV91c2VyX3JlZ3MsIHIxMykgfSwK
ICAgICAgIHsgInIxNCIsIG9mZnNldG9mKHN0cnVjdCBjcHVfdXNlcl9yZWdzLCByMTQpIH0sCiAg
ICAgICB7ICJyMTUiLCBvZmZzZXRvZihzdHJ1Y3QgY3B1X3VzZXJfcmVncywgcjE1KSB9LAogICAg
ICAgeyAicmZsYWdzIiwgb2Zmc2V0b2Yoc3RydWN0IGNwdV91c2VyX3JlZ3MsIHJmbGFncykgfSB9
OwoKc3RhdGljIGNvbnN0IGludCBLREJCUFNaPTE7ICAgICAgICAgICAgICAgICAgIC8qIHNpemUg
b2YgS0RCX0JQSU5TVCBpcyAxIGJ5dGUqLwpzdGF0aWMga2RiYnl0X3Qga2RiX2JwaW5zdCA9IDB4
Y2M7ICAgICAgICAgICAgLyogYnJlYWtwb2ludCBpbnN0cjogSU5UMyAqLwpzdGF0aWMgc3RydWN0
IGtkYl9zYnJrcHQga2RiX3NicGFbS0RCTUFYU0JQXTsgLyogc29mdCBicmtwdCBhcnJheS90YWJs
ZSAqLwpzdGF0aWMga2RidGFiX3QgKnRicDsKCnN0YXRpYyBpbnQga2RiX3NldF9icChkb21pZF90
LCBrZGJ2YV90LCBpbnQsIHVsb25nICosIGNoYXIqLCBjaGFyKiwgY2hhciopOwpzdGF0aWMgdm9p
ZCBrZGJfcHJpbnRfdXJlZ3Moc3RydWN0IGNwdV91c2VyX3JlZ3MgKik7CgoKLyogPT09PT09PT09
PT09PT09PT09PT09IGNtZGxpbmUgZnVuY3Rpb25zICA9PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PSAqLwoKLyogbHAgcG9pbnRzIHRvIGEgc3RyaW5nIG9mIG9ubHkgYWxwaGEgbnVtZXJp
YyBjaGFycyB0ZXJtaW5hdGVkIGJ5ICdcbicuCiAqIFBhcnNlIHRoZSBzdHJpbmcgaW50byBhcmd2
IHBvaW50ZXJzLCBhbmQgUkVUVVJOIGFyZ2MKICogRWc6ICBpZiBscCAtLT4gImRyICBzcFxuIiA6
ICBhcmd2WzBdPT0iZHJcMCIgIGFyZ3ZbMV09PSJzcFwwIiAgYXJnYz09MgogKi8Kc3RhdGljIGlu
dAprZGJfcGFyc2VfY21kbGluZShjaGFyICpscCwgY29uc3QgY2hhciAqKmFyZ3YpCnsKICAgIGlu
dCBpPTA7CgogICAgZm9yICg7ICpscCA9PSAnICc7IGxwKyspOyAgICAgIC8qIG5vdGU6IGlzc3Bh
Y2UoKSBza2lwcyAnXG4nIGFsc28gKi8KICAgIHdoaWxlICggKmxwICE9ICdcbicgKSB7CiAgICAg
ICAgaWYgKGkgPT0gS0RCX01BWEFSR0MpIHsKICAgICAgICAgICAgcHJpbnRrKCJrZGI6IG1heCBh
cmdzIGV4Y2VlZGVkXG4iKTsKICAgICAgICAgICAgYnJlYWs7CiAgICAgICAgfQogICAgICAgIGFy
Z3ZbaSsrXSA9IGxwOwogICAgICAgIGZvciAoOyAqbHAgIT0gJyAnICYmICpscCAhPSAnXG4nOyBs
cCsrKTsKICAgICAgICBpZiAoKmxwICE9ICdcbicpCiAgICAgICAgICAgICpscCsrID0gJ1wwJzsK
ICAgICAgICBmb3IgKDsgKmxwID09ICcgJzsgbHArKyk7CiAgICB9CiAgICAqbHAgPSAnXDAnOwog
ICAgcmV0dXJuIGk7Cn0KCnZvaWQKa2RiX2NsZWFyX3ByZXZfY21kKCkgICAgICAgICAgICAgLyog
c28gcHJldmlvdXMgY29tbWFuZCBpcyBub3QgcmVwZWF0ZWQgKi8KewogICAgdGJwID0gTlVMTDsK
fQoKdm9pZAprZGJfZG9fY21kcyhzdHJ1Y3QgY3B1X3VzZXJfcmVncyAqcmVncykKewogICAgY2hh
ciAqY21kbGluZXA7CiAgICBjb25zdCBjaGFyICphcmd2W0tEQl9NQVhBUkdDXTsKICAgIGludCBh
cmdjID0gMCwgY3VyY3B1ID0gc21wX3Byb2Nlc3Nvcl9pZCgpOwogICAga2RiX2NwdV9jbWRfdCBy
ZXN1bHQgPSBLREJfQ1BVX01BSU5fS0RCOwoKICAgIHNucHJpbnRmKGtkYl9wcm9tcHQsIHNpemVv
ZihrZGJfcHJvbXB0KSwgIlslZF14a2RiPiAiLCBjdXJjcHUpOwoKICAgIHdoaWxlIChyZXN1bHQg
PT0gS0RCX0NQVV9NQUlOX0tEQikgewogICAgICAgIGNtZGxpbmVwID0ga2RiX2dldF9jbWRsaW5l
KGtkYl9wcm9tcHQpOwogICAgICAgIGlmICgqY21kbGluZXAgPT0gJ1xuJykgewogICAgICAgICAg
ICBpZiAodGJwPT1OVUxMIHx8IHRicC0+a2RiX2NtZF9mdW5jPT1OVUxMKQogICAgICAgICAgICAg
ICAgY29udGludWU7CiAgICAgICAgICAgIGVsc2UKICAgICAgICAgICAgICAgIGFyZ2MgPSAtMTsg
ICAgLyogcmVwZWF0IHByZXYgY29tbWFuZCAqLwogICAgICAgIH0gZWxzZSB7CiAgICAgICAgICAg
IGFyZ2MgPSBrZGJfcGFyc2VfY21kbGluZShjbWRsaW5lcCwgYXJndik7CiAgICAgICAgICAgIGZv
cih0YnA9a2RiX2NtZF90Ymw7IHRicC0+a2RiX2NtZF9mdW5jOyB0YnArKykgIHsKICAgICAgICAg
ICAgICAgIGlmIChzdHJjbXAoYXJndlswXSwgdGJwLT5rZGJfY21kX25hbWUpPT0wKSAKICAgICAg
ICAgICAgICAgICAgICBicmVhazsKICAgICAgICAgICAgfQogICAgICAgIH0KICAgICAgICBpZiAo
a2RiX3N5c19jcmFzaCAmJiB0YnAtPmtkYl9jbWRfZnVuYyAmJiAhdGJwLT5rZGJfY21kX2NyYXNo
X2F2YWlsKSB7CiAgICAgICAgICAgIGtkYnAoImNtZCBub3QgYXZhaWxhYmxlIGluIGZhdGFsL2Ny
YXNoZWQgc3RhdGUuLi4uXG4iKTsKICAgICAgICAgICAgY29udGludWU7CiAgICAgICAgfQogICAg
ICAgIGlmICh0YnAtPmtkYl9jbWRfZnVuYykgewogICAgICAgICAgICByZXN1bHQgPSAoKnRicC0+
a2RiX2NtZF9mdW5jKShhcmdjLCBhcmd2LCByZWdzKTsKICAgICAgICAgICAgaWYgKHRicC0+a2Ri
X2NtZF9yZXBlYXQgPT0gS0RCX1JFUEVBVF9OT05FKQogICAgICAgICAgICAgICAgdGJwID0gTlVM
TDsKICAgICAgICB9IGVsc2UKICAgICAgICAgICAga2RicCgia2RiOiBVbmtub3duIGNtZDogJXNc
biIsIGNtZGxpbmVwKTsKICAgIH0KICAgIGtkYl9jcHVfY21kW2N1cmNwdV0gPSByZXN1bHQ7CiAg
ICByZXR1cm47Cn0KCi8qID09PT09PT09PT09PT09PT09PT09PSBVdGlsIGZ1bmN0aW9ucyAgPT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09ICovCgppbnQKa2RiX3ZjcHVfdmFsaWQo
c3RydWN0IHZjcHUgKmluX3ZwKQp7CiAgICBzdHJ1Y3QgZG9tYWluICpkcDsKICAgIHN0cnVjdCB2
Y3B1ICp2cDsKCiAgICBmb3IoZHA9ZG9tYWluX2xpc3Q7IGluX3ZwICYmIGRwOyBkcD1kcC0+bmV4
dF9pbl9saXN0KQogICAgICAgIGZvcl9lYWNoX3ZjcHUoZHAsIHZwKQogICAgICAgICAgICBpZiAo
aW5fdnAgPT0gdnApCiAgICAgICAgICAgICAgICByZXR1cm4gMTsKICAgIHJldHVybiAwOyAgICAg
Lyogbm90IGZvdW5kICovCn0KCi8qCiAqIEdpdmVuIGEgc3ltYm9sLCBmaW5kIGl0J3MgYWRkcmVz
cwogKi8Kc3RhdGljIGtkYnZhX3QKa2RiX3N5bTJhZGRyKGNvbnN0IGNoYXIgKnAsIGRvbWlkX3Qg
ZG9taWQpCnsKICAgIGtkYnZhX3QgYWRkcjsKCiAgICBLREJHUDEoInN5bTJhZGRyOiBwOiVzIGRv
bWlkOiVkXG4iLCBwLCBkb21pZCk7CiAgICBpZiAoZG9taWQgPT0gRE9NSURfSURMRSkKICAgICAg
ICBhZGRyID0gYWRkcmVzc19sb29rdXAoKGNoYXIgKilwKTsKICAgIGVsc2UKICAgICAgICBhZGRy
ID0gKGtkYnZhX3Qpa2RiX2d1ZXN0X3N5bTJhZGRyKChjaGFyICopcCwgZG9taWQpOwogICAgS0RC
R1AxKCJzeW0yYWRkcjogZXhpdDogYWRkciByZXR1cm5lZDoweCVseFxuIiwgYWRkcik7CiAgICBy
ZXR1cm4gYWRkcjsKfQoKLyoKICogY29udmVydCBhc2NpaSB0byBpbnQgZGVjaW1hbCAoYmFzZSAx
MCkuIAogKiBSZXR1cm46IDAgOiBmYWlsZWQgdG8gY29udmVydCwgb3RoZXJ3aXNlIDEgCiAqLwpz
dGF0aWMgaW50CmtkYl9zdHIyZGVjaShjb25zdCBjaGFyICpzdHJwLCBpbnQgKmludHApCnsKICAg
IGNvbnN0IGNoYXIgKmVuZHA7CgogICAgS0RCR1AyKCJzdHIyZGVjaTogc3RyOiVzXG4iLCBzdHJw
KTsKICAgIGlmICghaXNkaWdpdCgqc3RycCkpCiAgICAgICAgcmV0dXJuIDA7CiAgICAqaW50cCA9
IChpbnQpc2ltcGxlX3N0cnRvdWwoc3RycCwgJmVuZHAsIDEwKTsKICAgIGlmIChlbmRwICE9IHN0
cnArc3RybGVuKHN0cnApKQogICAgICAgIHJldHVybiAwOwogICAgS0RCR1AyKCJzdHIyZGVjaTog
aW50dmFsOiQlZFxuIiwgKmludHApOwogICAgcmV0dXJuIDE7Cn0KLyoKICogY29udmVydCBhc2Np
aSB0byBsb25nLiBOT1RFOiBiYXNlIGlzIDE2CiAqIFJldHVybjogMCA6IGZhaWxlZCB0byBjb252
ZXJ0LCBvdGhlcndpc2UgMSAKICovCnN0YXRpYyBpbnQKa2RiX3N0cjJ1bG9uZyhjb25zdCBjaGFy
ICpzdHJwLCB1bG9uZyAqbG9uZ3ApCnsKICAgIHVsb25nIHZhbDsKICAgIGNvbnN0IGNoYXIgKmVu
ZHA7CgogICAgS0RCR1AyKCJzdHIybG9uZzogc3RyOiVzXG4iLCBzdHJwKTsKICAgIGlmICghaXN4
ZGlnaXQoKnN0cnApKQogICAgICAgIHJldHVybiAwOwogICAgdmFsID0gKGxvbmcpc2ltcGxlX3N0
cnRvdWwoc3RycCwgJmVuZHAsIDE2KTsgICAvKiBoYW5kbGVzIGxlYWRpbmcgMHggKi8KICAgIGlm
IChlbmRwICE9IHN0cnArc3RybGVuKHN0cnApKQogICAgICAgIHJldHVybiAwOwogICAgaWYgKGxv
bmdwKQogICAgICAgICpsb25ncCA9IHZhbDsKICAgIEtEQkdQMigic3RyMmxvbmc6IHZhbDoweCVs
eFxuIiwgdmFsKTsKICAgIHJldHVybiAxOwp9Ci8qCiAqIGNvbnZlcnQgYSBzeW1ib2wgb3IgYXNj
aWkgYWRkcmVzcyB0byBoZXggYWRkcmVzcwogKiBSZXR1cm46IDAgOiBmYWlsZWQgdG8gY29udmVy
dCwgb3RoZXJ3aXNlIDEgCiAqLwpzdGF0aWMgaW50CmtkYl9zdHIyYWRkcihjb25zdCBjaGFyICpz
dHJwLCBrZGJ2YV90ICphZGRycCwgZG9taWRfdCBpZCkKewogICAga2RidmFfdCBhZGRyOwogICAg
Y29uc3QgY2hhciAqZW5kcDsKCiAgICAvKiBhc3N1bWUgaXQncyBhbiBhZGRyZXNzICovCiAgICBL
REJHUDIoInN0cjJhZGRyOiBzdHI6JXMgaWQ6JWRcbiIsIHN0cnAsIGlkKTsKICAgIGFkZHIgPSAo
a2RidmFfdClzaW1wbGVfc3RydG91bChzdHJwLCAmZW5kcCwgMTYpOyAvKmhhbmRsZXMgbGVhZGlu
ZyAweCAqLwogICAgaWYgKGVuZHAgIT0gc3RycCtzdHJsZW4oc3RycCkpCiAgICAgICAgaWYgKCAh
KGFkZHI9a2RiX3N5bTJhZGRyKHN0cnAsIGlkKSkgKQogICAgICAgICAgICByZXR1cm4gMDsKICAg
ICphZGRycCA9IGFkZHI7CiAgICBLREJHUDIoInN0cjJhZGRyOiBhZGRyOjB4JWx4XG4iLCBhZGRy
KTsKICAgIHJldHVybiAxOwp9CgovKiBHaXZlbiBkb21pZCwgcmV0dXJuIHB0ciB0byBzdHJ1Y3Qg
ZG9tYWluIAogKiBJRiBkb21pZCA9PSBET01JRF9JRExFIHJldHVybiBwdHIgdG8gaWRsZV9kb21h
aW4gCiAqIElGIGRvbWlkID09IHZhbGlkIGRvbWFpbiwgcmV0dXJuIHB0ciB0byBkb21haW4gc3Ry
dWN0CiAqIGVsc2UgZG9taWQgaXMgYmFkIGFuZCByZXR1cm4gTlVMTAogKi8Kc3RhdGljIHN0cnVj
dCBkb21haW4gKgprZGJfZG9taWQycHRyKGRvbWlkX3QgZG9taWQpCnsKICAgIHN0cnVjdCBkb21h
aW4gKmRwOwoKICAgIC8qIGdldF9kb21haW5fYnlfaWQoKSByZXQgTlVMTCBmb3IgYm90aCBET01J
RF9JRExFIGFuZCBiYWQgZG9taWRzICovCiAgICBpZiAoZG9taWQgPT0gRE9NSURfSURMRSkKICAg
ICAgICBkcCA9IGlkbGVfdmNwdVtzbXBfcHJvY2Vzc29yX2lkKCldLT5kb21haW47CiAgICBlbHNl
IAogICAgICAgIGRwID0gZ2V0X2RvbWFpbl9ieV9pZChkb21pZCk7ICAgLyogTlVMTCBub3cgbWVh
bnMgYmFkIGRvbWlkICovCiAgICByZXR1cm4gZHA7Cn0KCi8qCiAqIFJldHVybnM6ICAwOiBmYWls
ZWQuIGludmFsaWQgZG9taWQgb3Igc3RyaW5nLCAqaWRwIG5vdCBjaGFuZ2VkLgogKi8Kc3RhdGlj
IGludAprZGJfc3RyMmRvbWlkKGNvbnN0IGNoYXIgKmRvbXN0ciwgZG9taWRfdCAqaWRwLCBpbnQg
cGVycikKewogICAgaW50IGlkOwogICAgaWYgKCFrZGJfc3RyMmRlY2koZG9tc3RyLCAmaWQpIHx8
ICFrZGJfZG9taWQycHRyKChkb21pZF90KWlkKSkgewogICAgICAgIGlmIChwZXJyKQogICAgICAg
ICAgICBrZGJwKCJJbnZhbGlkIGRvbWlkOiVzXG4iLCBkb21zdHIpOwogICAgICAgIHJldHVybiAw
OwogICAgfQogICAgKmlkcCA9IChkb21pZF90KWlkOwogICAgcmV0dXJuIDE7Cn0KCnN0YXRpYyBz
dHJ1Y3QgZG9tYWluICoKa2RiX3N0cmRvbWlkMnB0cihjb25zdCBjaGFyICpkb21zdHIsIGludCBw
ZXJyb3IpCnsKICAgIGRvbWlkX3QgZG9taWQ7CiAgICBpZiAoa2RiX3N0cjJkb21pZChkb21zdHIs
ICZkb21pZCwgcGVycm9yKSkgewogICAgICAgIHJldHVybihrZGJfZG9taWQycHRyKGRvbWlkKSk7
CiAgICB9CiAgICByZXR1cm4gTlVMTDsKfQoKLyogcmV0dXJuIGEgZ3Vlc3QgYml0bmVzczogMzIg
b3IgNjQgKi8KaW50CmtkYl9ndWVzdF9iaXRuZXNzKGRvbWlkX3QgZG9taWQpCnsKICAgIGNvbnN0
IGludCBIWVBTWiA9IHNpemVvZihsb25nKSAqIDg7CiAgICBzdHJ1Y3QgZG9tYWluICpkcCA9IGtk
Yl9kb21pZDJwdHIoZG9taWQpOwogICAgaW50IHJldHZhbDsgCgogICAgaWYgKGlzX2lkbGVfZG9t
YWluKGRwKSkKICAgICAgICByZXR2YWwgPSBIWVBTWjsKICAgIGVsc2UgaWYgKGlzX2h2bV9vcl9o
eWJfZG9tYWluKGRwKSkKICAgICAgICByZXR2YWwgPSAoaHZtX2xvbmdfbW9kZV9lbmFibGVkKGRw
LT52Y3B1WzBdKSkgPyBIWVBTWiA6IDMyOwogICAgZWxzZSAKICAgICAgICByZXR2YWwgPSBpc19w
dl8zMmJpdF9kb21haW4oZHApID8gMzIgOiBIWVBTWjsKICAgIEtEQkdQMSgiZ2JpdG5lc3M6IGRv
bWlkOiVkIGRwOiVwIGJpdG5lc3M6JWRcbiIsIGRvbWlkLCBkcCwgcmV0dmFsKTsKICAgIHJldHVy
biByZXR2YWw7Cn0KCi8qIGtkYl9wcmludF9zcGluX2xvY2soJnh5el9sb2NrLCAieHl6X2xvY2s6
IiwgIlxuIik7ICovCnN0YXRpYyB2b2lkCmtkYl9wcmludF9zcGluX2xvY2soY2hhciAqc3RycCwg
c3BpbmxvY2tfdCAqbGtwLCBjaGFyICpubHApCnsKICAgIGtkYnAoIiVzICUwNGh4ICVkICVkJXMi
LCBzdHJwLCBLREJfTEtERUYoKmxrcCksIGxrcC0+cmVjdXJzZV9jcHUsCiAgICAgICAgIGxrcC0+
cmVjdXJzZV9jbnQsIG5scCk7Cn0KCi8qIGNoZWNrIGlmIHJlZ2lzdGVyIHN0cmluZyBpcyB2YWxp
ZC4gaWYgeWVzLCByZXR1cm4gb2Zmc2V0IHRvIHRoZSByZWdpc3RlcgogKiBpbiBjcHVfdXNlcl9y
ZWdzLCBlbHNlIHJldHVybiAtMSAqLwpzdGF0aWMgaW50CmtkYl92YWxpZF9yZWcoY29uc3QgY2hh
ciAqbm1wKSAKewogICAgaW50IGk7CiAgICBmb3IgKGk9MDsgaSA8IHNpemVvZihrZGJfcmVnX25t
X29mZnMpL3NpemVvZihrZGJfcmVnX25tX29mZnNbMF0pOyBpKyspCiAgICAgICAgaWYgKHN0cmNt
cChrZGJfcmVnX25tX29mZnNbaV0ucmVnX25tLCBubXApID09IDApCiAgICAgICAgICAgIHJldHVy
biBrZGJfcmVnX25tX29mZnNbaV0ucmVnX29mZnM7CiAgICByZXR1cm4gLTE7Cn0KCi8qIGdpdmVu
IG9mZnNldCBvZiByZWdpc3RlciwgcmV0dXJuIHJlZ2lzdGVyIG5hbWUgc3RyaW5nLiBpZiBvZmZz
ZXQgaXMgaW52YWxpZAogKiByZXR1cm4gTlVMTCAqLwpzdGF0aWMgY2hhciAqa2RiX3JlZ29mZnNf
dG9fbmFtZShpbnQgb2ZmcykKewogICAgaW50IGk7CiAgICBmb3IgKGk9MDsgaSA8IHNpemVvZihr
ZGJfcmVnX25tX29mZnMpL3NpemVvZihrZGJfcmVnX25tX29mZnNbMF0pOyBpKyspCiAgICAgICAg
aWYgKGtkYl9yZWdfbm1fb2Zmc1tpXS5yZWdfb2ZmcyA9PSBvZmZzKQogICAgICAgICAgICByZXR1
cm4ga2RiX3JlZ19ubV9vZmZzW2ldLnJlZ19ubTsKICAgIHJldHVybiBOVUxMOwp9CgovKiA9PT09
PT09PT09PT09PT09PT09PT0gdXRpbCBzdHJ1Y3QgZnVuY3MgPT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09ICovCnN0YXRpYyB2b2lkCmtkYl9wcm50X3RpbWVyKHN0cnVjdCB0aW1lciAq
dHApCnsKI2lmIFhFTl9TVUJWRVJTSU9OID09IDAgCiAgICBrZGJwKCIgZXhwaXJlczolMDE2bHgg
ZXhwaXJlc19lbmQ6JTAxNmx4IGNwdTolZCBzdGF0dXM6JXhcbiIsIHRwLT5leHBpcmVzLCAKICAg
ICAgICAgdHAtPmV4cGlyZXNfZW5kLCB0cC0+Y3B1LCB0cC0+c3RhdHVzKTsKI2Vsc2UKICAgIGtk
YnAoIiBleHBpcmVzOiUwMTZseCBjcHU6JWQgc3RhdHVzOiV4XG4iLCB0cC0+ZXhwaXJlcywgdHAt
PmNwdSx0cC0+c3RhdHVzKTsKI2VuZGlmCiAgICBrZGJwKCIgZnVuY3Rpb24gZGF0YTolcCBwdHI6
JXAgIiwgdHAtPmRhdGEsIHRwLT5mdW5jdGlvbik7CiAgICBrZGJfcHJudF9hZGRyMnN5bShET01J
RF9JRExFLCAoa2RidmFfdCl0cC0+ZnVuY3Rpb24sICJcbiIpOwp9CgpzdGF0aWMgdm9pZCAKa2Ri
X3BybnRfcGVyaW9kaWNfdGltZShzdHJ1Y3QgcGVyaW9kaWNfdGltZSAqcHRwKQp7CiAgICBrZGJw
KCIgbmV4dDolcCBwcmV2OiVwXG4iLCBwdHAtPmxpc3QubmV4dCwgcHRwLT5saXN0LnByZXYpOwog
ICAga2RicCgiIG9uX2xpc3Q6JWQgb25lX3Nob3Q6JWQgZG9udF9mcmVlemU6JWQgaXJxX2lzc3Vl
ZDolZCBzcmM6JXggaXJxOiV4XG4iLAogICAgICAgICBwdHAtPm9uX2xpc3QsIHB0cC0+b25lX3No
b3QsIHB0cC0+ZG9fbm90X2ZyZWV6ZSwgcHRwLT5pcnFfaXNzdWVkLAogICAgICAgICBwdHAtPnNv
dXJjZSwgcHRwLT5pcnEpOwogICAga2RicCgiIHZjcHU6JXAgcGVuZGluZ19pbnRyX25yOiUwOHgg
cGVyaW9kOiUwMTZseFxuIiwgcHRwLT52Y3B1LAogICAgICAgICBwdHAtPnBlbmRpbmdfaW50cl9u
ciwgcHRwLT5wZXJpb2QpOwogICAga2RicCgiIHNjaGVkdWxlZDolMDE2bHggbGFzdF9wbHRfZ3Rp
bWU6JTAxNmx4XG4iLCBwdHAtPnNjaGVkdWxlZCwKICAgICAgICAgcHRwLT5sYXN0X3BsdF9ndGlt
ZSk7CiAgICBrZGJwKCIgXG4gICAgICAgICAgdGltZXIgaW5mbzpcbiIpOwogICAga2RiX3BybnRf
dGltZXIoJnB0cC0+dGltZXIpOwogICAga2RicCgiXG4iKTsKfQoKLyogPT09PT09PT09PT09PT09
PT09PT09IGNtZCBmdW5jdGlvbnMgID09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PSAqLwoKLyoKICogRlVOQ1RJT046IERpc2Fzc2VtYmxlIGluc3RydWN0aW9ucwogKi8Kc3RhdGlj
IGtkYl9jcHVfY21kX3QKa2RiX3VzZ2ZfZGlzKHZvaWQpCnsKICAgIGtkYnAoImRpcyBbYWRkcnxz
eW1dW251bV1bZG9taWRdIDogRGlzYXNzZW1ibGUgaW5zdHJzXG4iKTsKICAgIHJldHVybiBLREJf
Q1BVX01BSU5fS0RCOwp9CnN0YXRpYyBrZGJfY3B1X2NtZF90IAprZGJfY21kZl9kaXMoaW50IGFy
Z2MsIGNvbnN0IGNoYXIgKiphcmd2LCBzdHJ1Y3QgY3B1X3VzZXJfcmVncyAqcmVncykKewogICAg
aW50IG51bSA9IDg7ICAgICAgICAgICAgICAgICAgICAgICAgICAgLyogZGlzcGxheSA4IGluc3Ry
IGJ5IGRlZmF1bHQgKi8KICAgIHN0YXRpYyBrZGJ2YV90IGFkZHIgPSBCRkRfSU5WQUw7CiAgICBz
dGF0aWMgZG9taWRfdCBkb21pZDsKCiAgICBpZiAoYXJnYyA+IDEgJiYgKmFyZ3ZbMV0gPT0gJz8n
KQogICAgICAgIHJldHVybiBrZGJfdXNnZl9kaXMoKTsKCiAgICBpZiAoYXJnYyAhPSAtMSkgICAg
ICAvKiBub3QgYSBjb21tYW5kIHJlcGVhdCAqLwogICAgICAgIGRvbWlkID0gZ3Vlc3RfbW9kZShy
ZWdzKSA/ICBjdXJyZW50LT5kb21haW4tPmRvbWFpbl9pZCA6IERPTUlEX0lETEU7CgogICAgaWYg
KGFyZ2MgPj0gNCAmJiAha2RiX3N0cjJkb21pZChhcmd2WzNdLCAmZG9taWQsIDEpKSB7IAogICAg
ICAgIHJldHVybiBLREJfQ1BVX01BSU5fS0RCOwogICAgfSAKICAgIGlmIChhcmdjID49IDMgJiYg
IWtkYl9zdHIyZGVjaShhcmd2WzJdLCAmbnVtKSkgewogICAgICAgIGtkYnAoImtkYjpJbnZhbGlk
IG51bVxuIik7CiAgICAgICAgcmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7CiAgICB9IAogICAgaWYg
KGFyZ2MgPiAxICYmICFrZGJfc3RyMmFkZHIoYXJndlsxXSwgJmFkZHIsIGRvbWlkKSkgewogICAg
ICAgIGtkYnAoImtkYjpJbnZhbGlkIGFkZHIvc3ltXG4iKTsKICAgICAgICBrZGJwKCIobnVtIGhh
cyB0byBiZSBzcGVjaWZpZWQgaWYgcHJvdmlkaW5nIGRvbWlkKVxuIik7CiAgICAgICAgcmV0dXJu
IEtEQl9DUFVfTUFJTl9LREI7CiAgICB9IAogICAgaWYgKGFyZ2MgPT0gMSkgICAgICAgICAgICAg
ICAgICAgIC8qIG5vdCBjb21tYW5kIHJlcGVhdCAqLwogICAgICAgIGFkZHIgPSByZWdzLT5LREJJ
UDsgICAgICAgICAgIC8qIFBDIGlzIHRoZSBkZWZhdWx0ICovCiAgICBlbHNlIGlmIChhZGRyID09
IEJGRF9JTlZBTCkgewogICAgICAgIGtkYnAoImtkYjpJbnZhbGlkIGFkZHIvc3ltXG4iKTsKICAg
ICAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKICAgIH0KICAgIGFkZHIgPSBrZGJfcHJpbnRf
aW5zdHIoYWRkciwgbnVtLCBkb21pZCk7CiAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKfQoK
LyogRlVOQ1RJT046IGtkYl9jbWRmX2Rpc20oKSBUb2dnbGUgZGlzYXNzZW1ibHkgc3ludGF4IGZy
b20gSW50ZWwgdG8gQVRUL0dBUyAqLwpzdGF0aWMga2RiX2NwdV9jbWRfdAprZGJfdXNnZl9kaXNt
KHZvaWQpCnsKICAgIGtkYnAoImRpc206IHRvZ2dsZSBkaXNhc3NlbWJseSBtb2RlIGJldHdlZW4g
QVRUL0dBUyBhbmQgSU5URUxcbiIpOwogICAgcmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7Cn0Kc3Rh
dGljIGtkYl9jcHVfY21kX3QgCmtkYl9jbWRmX2Rpc20oaW50IGFyZ2MsIGNvbnN0IGNoYXIgKiph
cmd2LCBzdHJ1Y3QgY3B1X3VzZXJfcmVncyAqcmVncykKewogICAgaWYgKGFyZ2MgPiAxICYmICph
cmd2WzFdID09ICc/JykKICAgICAgICByZXR1cm4ga2RiX3VzZ2ZfZGlzbSgpOwoKICAgIGtkYl90
b2dnbGVfZGlzX3N5bnRheCgpOwogICAgcmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7Cn0KCnN0YXRp
YyB2b2lkCl9rZGJfc2hvd19ndWVzdF9zdGFjayhkb21pZF90IGRvbWlkLCBrZGJ2YV90IGlwYWRk
ciwga2RidmFfdCBzcGFkZHIpCnsKICAgIGtkYnZhX3QgdmFsOwogICAgaW50IG51bT0wLCBtYXg9
MCwgcmQgPSBrZGJfZ3Vlc3RfYml0bmVzcyhkb21pZCkvODsKCiAgICBrZGJfcHJpbnRfaW5zdHIo
aXBhZGRyLCAxLCBkb21pZCk7CiAgICBLREJHUCgiX2d1ZXN0X3N0YWNrOnNwOiVseCBkb21pZDol
ZCByZDokJWRcbiIsIHNwYWRkciwgZG9taWQsIHJkKTsKICAgIHZhbCA9IDA7ICAgICAgICAgICAg
ICAgICAgICAgICAgICAvKiBtdXN0IHplcm8sIGluIGNhc2UgZ3Vlc3QgaXMgMzJiaXQgKi8KICAg
IHdoaWxlKChrZGJfcmVhZF9tZW0oc3BhZGRyLChrZGJieXRfdCAqKSZ2YWwscmQsZG9taWQpPT1y
ZCkgJiYgbnVtIDwgMTYpewogICAgICAgIEtEQkdQMSgiZ3N0azphZGRyOiVseCB2YWw6JWx4XG4i
LCBzcGFkZHIsIHZhbCk7CiAgICAgICAgaWYgKGtkYl9pc19hZGRyX2d1ZXN0X3RleHQodmFsLCBk
b21pZCkpIHsKICAgICAgICAgICAga2RiX3ByaW50X2luc3RyKHZhbCwgMSwgZG9taWQpOwogICAg
ICAgICAgICBudW0rKzsKICAgICAgICB9CiAgICAgICAgaWYgKG1heCsrID4gMTAwMDApICAgICAg
ICAgICAgLyogZG9uJ3Qgd2FsayBkb3duIHRoZSBzdGFjayBmb3JldmVyICovCiAgICAgICAgICAg
IGJyZWFrOyAgICAgICAgICAgICAgICAgICAgLyogMTBrIGlzIGNob3NlbiByYW5kb21seSAqLwog
ICAgICAgIHNwYWRkciArPSByZDsKICAgIH0KfQoKLyogUmVhZCBndWVzdCBtZW1vcnkgYW5kIGRp
c3BsYXkgYWRkcmVzcyB0aGF0IGxvb2tzIGxpa2UgdGV4dC4gKi8Kc3RhdGljIHZvaWQKa2RiX3No
b3dfZ3Vlc3Rfc3RhY2soc3RydWN0IGNwdV91c2VyX3JlZ3MgKnJlZ3MsIHN0cnVjdCB2Y3B1ICp2
Y3B1cCkKewogICAga2RidmFfdCBpcGFkZHI9cmVncy0+S0RCSVAsIHNwYWRkciA9IHJlZ3MtPktE
QlNQOwogICAgZG9taWRfdCBkb21pZCA9IHZjcHVwLT5kb21haW4tPmRvbWFpbl9pZDsKCiAgICBB
U1NFUlQoZG9taWQgIT0gRE9NSURfSURMRSk7CiAgICBfa2RiX3Nob3dfZ3Vlc3Rfc3RhY2soZG9t
aWQsIGlwYWRkciwgc3BhZGRyKTsKfQoKLyogZGlzcGxheSBzdGFjay4gaWYgdmNwdSBwdHIgZ2l2
ZW4sIHRoZW4gZGlzcGxheSBzdGFjayBmb3IgdGhhdC4gT3RoZXJ3aXNlLAogKiB1c2UgY3VycmVu
dCByZWdzICovCnN0YXRpYyBrZGJfY3B1X2NtZF90CmtkYl91c2dmX2Yodm9pZCkKewogICAga2Ri
cCgiZiBbdmNwdS1wdHJdOiBkdW1wIGN1cnJlbnQvdmNwdSBzdGFja1xuIik7CiAgICByZXR1cm4g
S0RCX0NQVV9NQUlOX0tEQjsKfQpzdGF0aWMga2RiX2NwdV9jbWRfdCAKa2RiX2NtZGZfZihpbnQg
YXJnYywgY29uc3QgY2hhciAqKmFyZ3YsIHN0cnVjdCBjcHVfdXNlcl9yZWdzICpyZWdzKQp7CiAg
ICBpZiAoYXJnYyA+IDEgJiYgKmFyZ3ZbMV0gPT0gJz8nKQogICAgICAgIHJldHVybiBrZGJfdXNn
Zl9mKCk7CgogICAgaWYgKGFyZ2MgPiAxICkgewogICAgICAgIHN0cnVjdCB2Y3B1ICp2cDsKICAg
ICAgICBpZiAoIWtkYl9zdHIydWxvbmcoYXJndlsxXSwgKHVsb25nICopJnZwKSB8fCAha2RiX3Zj
cHVfdmFsaWQodnApKSB7CiAgICAgICAgICAgIGtkYnAoImtkYjogQmFkIFZDUFUgcHRyOiVzXG4i
LCBhcmd2WzFdKTsKICAgICAgICAgICAgcmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7CiAgICAgICAg
fQogICAgICAgIGtkYl9zaG93X2d1ZXN0X3N0YWNrKCZ2cC0+YXJjaC51c2VyX3JlZ3MsIHZwKTsK
ICAgICAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKICAgIH0KICAgIGlmIChndWVzdF9tb2Rl
KHJlZ3MpKQogICAgICAgIGtkYl9zaG93X2d1ZXN0X3N0YWNrKHJlZ3MsIGN1cnJlbnQpOwogICAg
ZWxzZQogICAgICAgIHNob3dfdHJhY2UocmVncyk7CiAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tE
QjsKfQoKLyogZ2l2ZW4gYW4gc3BhZGRyIGFuZCBkb21pZCBmb3IgZ3Vlc3QsIGR1bXAgc3RhY2sg
Ki8Kc3RhdGljIGtkYl9jcHVfY21kX3QKa2RiX3VzZ2ZfZmcodm9pZCkKewogICAga2RicCgiZmcg
ZG9taWQgUklQIEVTUDogZHVtcCBndWVzdCBzdGFjayBnaXZlbiBkb21pZCwgUklQLCBhbmQgRVNQ
XG4iKTsKICAgIHJldHVybiBLREJfQ1BVX01BSU5fS0RCOwp9CnN0YXRpYyBrZGJfY3B1X2NtZF90
IAprZGJfY21kZl9mZyhpbnQgYXJnYywgY29uc3QgY2hhciAqKmFyZ3YsIHN0cnVjdCBjcHVfdXNl
cl9yZWdzICpyZWdzKQp7CiAgICBkb21pZF90IGRvbWlkOwogICAga2RidmFfdCBpcGFkZHIsIHNw
YWRkcjsKCiAgICBpZiAoYXJnYyAhPSA0KSAKICAgICAgICByZXR1cm4ga2RiX3VzZ2ZfZmcoKTsK
CiAgICBpZiAoa2RiX3N0cjJkb21pZChhcmd2WzFdLCAmZG9taWQsIDEpPT0wKSB7CiAgICAgICAg
cmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7CiAgICB9CiAgICBpZiAoa2RiX3N0cjJ1bG9uZyhhcmd2
WzJdLCAmaXBhZGRyKT09MCkgewogICAgICAgIGtkYnAoIkJhZCBpcGFkZHI6JXNcbiIsIGFyZ3Zb
Ml0pOwogICAgICAgIHJldHVybiBLREJfQ1BVX01BSU5fS0RCOwogICAgfQogICAgaWYgKGtkYl9z
dHIydWxvbmcoYXJndlszXSwgJnNwYWRkcik9PTApIHsKICAgICAgICBrZGJwKCJCYWQgc3BhZGRy
OiVzXG4iLCBhcmd2WzNdKTsKICAgICAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKICAgIH0K
ICAgIF9rZGJfc2hvd19ndWVzdF9zdGFjayhkb21pZCwgaXBhZGRyLCBzcGFkZHIpOwogICAgcmV0
dXJuIEtEQl9DUFVfTUFJTl9LREI7Cn0KCi8qIERpc3BsYXkga2RiIHN0YWNrLiBmb3IgZGVidWdn
aW5nIGtkYiBpdHNlbGYgKi8Kc3RhdGljIGtkYl9jcHVfY21kX3QKa2RiX3VzZ2Zfa2RiZih2b2lk
KQp7CiAgICBrZGJwKCJrZGJmOiBkaXNwbGF5IGtkYiBzdGFjay4gZm9yIGRlYnVnZ2luZyBrZGIg
b25seVxuIik7CiAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKfQpzdGF0aWMga2RiX2NwdV9j
bWRfdCAKa2RiX2NtZGZfa2RiZihpbnQgYXJnYywgY29uc3QgY2hhciAqKmFyZ3YsIHN0cnVjdCBj
cHVfdXNlcl9yZWdzICpyZWdzKQp7CiAgICBpZiAoYXJnYyA+IDEgJiYgKmFyZ3ZbMV0gPT0gJz8n
KQogICAgICAgIHJldHVybiBrZGJfdXNnZl9rZGJmKCk7CgogICAga2RiX3RyYXBfaW1tZWQoS0RC
X1RSQVBfS0RCU1RBQ0spOwogICAgcmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7Cn0KCi8qIHdvcmtl
ciBmdW5jdGlvbiB0byBkaXNwbGF5IG1lbW9yeS4gUmVxdWVzdCBjb3VsZCBiZSBmb3IgYW55IGd1
ZXN0LCBkb21pZC4KICogQWxzbyBhZGRyZXNzIGNvdWxkIGJlIG1hY2hpbmUgb3IgdmlydHVhbCAq
LwpzdGF0aWMgdm9pZApfa2RiX2Rpc3BsYXlfbWVtKGtkYnZhX3QgKmFkZHJwLCBpbnQgKmxlbnAs
IGludCB3b3Jkc3osIGludCBkb21pZCwgaW50IGlzX21hZGRyKQp7CiAgICAjZGVmaW5lIEREQlVG
U1ogNDA5NgoKICAgIGtkYmJ5dF90IGJ1ZltEREJVRlNaXSwgKmJwOwogICAgaW50IG51bXJkLCBi
eXRlczsKICAgIGludCBsZW4gPSAqbGVucDsKICAgIGtkYnZhX3QgYWRkciA9ICphZGRycDsKCiAg
ICAvKiByb3VuZCBsZW4gZG93biB0byB3b3Jkc3ogYm91bmRyeSBiZWNhdXNlIG9uIGludGVsIGVu
ZGlhbiwgcHJpbnRpbmcKICAgICAqIGNoYXJhY3RlcnMgaXMgbm90IHBydWRlbnQsIChsb25nIGFu
ZCBpbnRzIGNhbid0IGJlIGludGVycHJldGVkIAogICAgICogZWFzaWx5KSAqLwogICAgbGVuICY9
IH4od29yZHN6LTEpOwogICAgbGVuID0gS0RCTUlOKEREQlVGU1osIGxlbik7CiAgICBsZW4gPSBs
ZW4gPyBsZW4gOiB3b3Jkc3o7CgogICAgS0RCR1AoImRtZW06YWRkcjolbHggYnVmOiVwIGxlbjok
JWQgZG9taWQ6JWQgc3o6JCVkIG1hZGRyOiVkXG4iLCBhZGRyLAogICAgICAgICAgYnVmLCBsZW4s
IGRvbWlkLCB3b3Jkc3osIGlzX21hZGRyKTsKICAgIGlmIChpc19tYWRkcikKICAgICAgICBudW1y
ZD1rZGJfcmVhZF9tbWVtKChrZGJtYV90KWFkZHIsIGJ1ZiwgbGVuKTsKICAgIGVsc2UKICAgICAg
ICBudW1yZD1rZGJfcmVhZF9tZW0oYWRkciwgYnVmLCBsZW4sIGRvbWlkKTsKICAgIGlmIChudW1y
ZCAhPSBsZW4pCiAgICAgICAga2RicCgiTWVtb3J5IHJlYWQgZXJyb3IuIEJ5dGVzIHJlYWQ6JCVk
XG4iLCBudW1yZCk7CgogICAgZm9yIChicCA9IGJ1ZjsgbnVtcmQgPiAwOykgewogICAgICAgIGtk
YnAoIiUwMTZseDogIiwgYWRkcik7IAoKICAgICAgICAvKiBkaXNwbGF5IDE2IGJ5dGVzIHBlciBs
aW5lICovCiAgICAgICAgZm9yIChieXRlcz0wOyBieXRlcyA8IDE2ICYmIG51bXJkID4gMDsgYnl0
ZXMgKz0gd29yZHN6KSB7CiAgICAgICAgICAgIGlmIChudW1yZCA+PSB3b3Jkc3opIHsKICAgICAg
ICAgICAgICAgIGlmICh3b3Jkc3ogPT0gOCkKICAgICAgICAgICAgICAgICAgICBrZGJwKCIgJTAx
Nmx4IiwgKihsb25nICopYnApOwogICAgICAgICAgICAgICAgZWxzZQogICAgICAgICAgICAgICAg
ICAgIGtkYnAoIiAlMDh4IiwgKihpbnQgKilicCk7CiAgICAgICAgICAgICAgICBicCArPSB3b3Jk
c3o7CiAgICAgICAgICAgICAgICBudW1yZCAtPSB3b3Jkc3o7CiAgICAgICAgICAgICAgICBhZGRy
ICs9IHdvcmRzejsKICAgICAgICAgICAgfQogICAgICAgIH0KICAgICAgICBrZGJwKCJcbiIpOwog
ICAgICAgIGNvbnRpbnVlOwogICAgfQogICAgKmxlbnAgPSBsZW47CiAgICAqYWRkcnAgPSBhZGRy
Owp9CgovKiBkaXNwbGF5IG1hY2hpbmUgbWVtLCBpZSwgdGhlIGdpdmVuIGFkZHJlc3MgaXMgbWFj
aGluZSBhZGRyZXNzICovCnN0YXRpYyBrZGJfY3B1X2NtZF90IAprZGJfZGlzcGxheV9tbWVtKGlu
dCBhcmdjLCBjb25zdCBjaGFyICoqYXJndiwgaW50IHdvcmRzeiwga2RiX3VzZ2ZfdCB1c2dfZnAp
CnsKICAgIHN0YXRpYyBrZGJtYV90IG1hZGRyOwogICAgc3RhdGljIGludCBsZW47CiAgICBzdGF0
aWMgZG9taWRfdCBpZCA9IERPTUlEX0lETEU7CgogICAgaWYgKGFyZ2MgPT0gLTEpIHsKICAgICAg
ICBfa2RiX2Rpc3BsYXlfbWVtKCZtYWRkciwgJmxlbiwgd29yZHN6LCBpZCwgMSk7ICAvKiBjbWQg
cmVwZWF0ICovCiAgICAgICAgcmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7CiAgICB9CiAgICBpZiAo
YXJnYyA8PSAxIHx8ICphcmd2WzFdID09ICc/JykKICAgICAgICByZXR1cm4gKCp1c2dfZnApKCk7
CgogICAgLyogY2hlY2sgaWYgbnVtIG9mIGJ5dGVzIHRvIGRpc3BsYXkgaXMgZ2l2ZW4gYnkgdXNl
ciAqLwogICAgaWYgKGFyZ2MgPj0gMykgewogICAgICAgIGlmICgha2RiX3N0cjJkZWNpKGFyZ3Zb
Ml0sICZsZW4pKSB7CiAgICAgICAgICAgIGtkYnAoIkludmFsaWQgbGVuZ3RoOiVzXG4iLCBhcmd2
WzJdKTsKICAgICAgICAgICAgcmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7CiAgICAgICAgfSAKICAg
IH0gZWxzZQogICAgICAgIGxlbiA9IDMyOyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAvKiBkZWZhdWx0IHJlYWQgbGVuICovCgogICAgaWYgKCFrZGJfc3RyMnVsb25nKGFyZ3Zb
MV0sICZtYWRkcikpIHsKICAgICAgICBrZGJwKCJJbnZhbGlkIGFyZ3VtZW50OiVzXG4iLCBhcmd2
WzFdKTsKICAgICAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKICAgIH0KICAgIF9rZGJfZGlz
cGxheV9tZW0oJm1hZGRyLCAmbGVuLCB3b3Jkc3osIDAsIDEpOwogICAgcmV0dXJuIEtEQl9DUFVf
TUFJTl9LREI7Cn0KCi8qIAogKiBGVU5DVElPTjogRGlzcGFseSBtYWNoaW5lIE1lbW9yeSBXb3Jk
CiAqLwpzdGF0aWMga2RiX2NwdV9jbWRfdAprZGJfdXNnZl9kd20odm9pZCkKewogICAga2RicCgi
ZHdtOiAgbWFkZHJ8c3ltIFtudW1dIDogZHVtcCBtZW1vcnkgd29yZCBnaXZlbiBtYWNoaW5lIGFk
ZHJcbiIpOwogICAgcmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7Cn0Kc3RhdGljIGtkYl9jcHVfY21k
X3QgCmtkYl9jbWRmX2R3bShpbnQgYXJnYywgY29uc3QgY2hhciAqKmFyZ3YsIHN0cnVjdCBjcHVf
dXNlcl9yZWdzICpyZWdzKQp7CiAgICByZXR1cm4ga2RiX2Rpc3BsYXlfbW1lbShhcmdjLCBhcmd2
LCA0LCBrZGJfdXNnZl9kd20pOwp9CgovKiAKICogRlVOQ1RJT046IERpc3BhbHkgbWFjaGluZSBN
ZW1vcnkgRG91YmxlV29yZCAKICovCnN0YXRpYyBrZGJfY3B1X2NtZF90CmtkYl91c2dmX2RkbSh2
b2lkKQp7CiAgICBrZGJwKCJkZG06ICBtYWRkcnxzeW0gW251bV0gOiBkdW1wIGRvdWJsZSB3b3Jk
IGdpdmVuIG1hY2hpbmUgYWRkclxuIik7CiAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKfQpz
dGF0aWMga2RiX2NwdV9jbWRfdCAKa2RiX2NtZGZfZGRtKGludCBhcmdjLCBjb25zdCBjaGFyICoq
YXJndiwgc3RydWN0IGNwdV91c2VyX3JlZ3MgKnJlZ3MpCnsKICAgIHJldHVybiBrZGJfZGlzcGxh
eV9tbWVtKGFyZ2MsIGFyZ3YsIDgsIGtkYl91c2dmX2RkbSk7Cn0KCi8qIAogKiBGVU5DVElPTjog
RGlzcGFseSBNZW1vcnkgOiB3b3JkIG9yIGRvdWJsZXdvcmQKICogICAgICAgICAgIHdvcmRzeiA6
IGJ5dGVzIGluIHdvcmQuIDQgb3IgOAogKgogKiAgICAgICAgICAgV2UgZGlzcGxheSB1cHRvIEJV
RlNaIGJ5dGVzLiBVc2VyIGNhbiBqdXN0IHByZXNzIGVudGVyIGZvciBtb3JlLgogKiAgICAgICAg
ICAgYWRkciBpcyBhbHdheXMgaW4gaGV4IHdpdGggb3Igd2l0aG91dCBsZWFkaW5nIDB4CiAqLwpz
dGF0aWMga2RiX2NwdV9jbWRfdCAKa2RiX2Rpc3BsYXlfbWVtKGludCBhcmdjLCBjb25zdCBjaGFy
ICoqYXJndiwgaW50IHdvcmRzeiwga2RiX3VzZ2ZfdCB1c2dfZnApCnsKICAgIHN0YXRpYyBrZGJ2
YV90IGFkZHI7CiAgICBzdGF0aWMgaW50IGxlbjsKICAgIHN0YXRpYyBkb21pZF90IGlkID0gRE9N
SURfSURMRTsKCiAgICBpZiAoYXJnYyA9PSAtMSkgewogICAgICAgIF9rZGJfZGlzcGxheV9tZW0o
JmFkZHIsICZsZW4sIHdvcmRzeiwgaWQsIDApOyAgLyogY21kIHJlcGVhdCAqLwogICAgICAgIHJl
dHVybiBLREJfQ1BVX01BSU5fS0RCOwogICAgfQogICAgaWYgKGFyZ2MgPD0gMSB8fCAqYXJndlsx
XSA9PSAnPycpCiAgICAgICAgcmV0dXJuICgqdXNnX2ZwKSgpOwoKICAgIGlkID0gRE9NSURfSURM
RTsgICAgICAgICAgICAgICAgLyogbm90IGEgY29tbWFuZCByZXBlYXQsIHJlc2V0IGRvbSBpZCAq
LwogICAgaWYgKGFyZ2MgPj0gNCkgeyAKICAgICAgICBpZiAoIWtkYl9zdHIyZG9taWQoYXJndlsz
XSwgJmlkLCAxKSkgCiAgICAgICAgICAgIHJldHVybiBLREJfQ1BVX01BSU5fS0RCOwogICAgfQog
ICAgLyogY2hlY2sgaWYgbnVtIG9mIGJ5dGVzIHRvIGRpc3BsYXkgaXMgZ2l2ZW4gYnkgdXNlciAq
LwogICAgaWYgKGFyZ2MgPj0gMykgewogICAgICAgIGlmICgha2RiX3N0cjJkZWNpKGFyZ3ZbMl0s
ICZsZW4pKSB7CiAgICAgICAgICAgIGtkYnAoIkludmFsaWQgbGVuZ3RoOiVzXG4iLCBhcmd2WzJd
KTsKICAgICAgICAgICAgcmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7CiAgICAgICAgfSAKICAgIH0g
ZWxzZQogICAgICAgIGxlbiA9IDMyOyAgICAgICAgICAgICAgICAgICAgICAgLyogZGVmYXVsdCBy
ZWFkIGxlbiAqLwogICAgaWYgKCFrZGJfc3RyMmFkZHIoYXJndlsxXSwgJmFkZHIsIGlkKSkgewog
ICAgICAgIGtkYnAoIkludmFsaWQgYXJndW1lbnQ6JXNcbiIsIGFyZ3ZbMV0pOwogICAgICAgIHJl
dHVybiBLREJfQ1BVX01BSU5fS0RCOwogICAgfQoKICAgIF9rZGJfZGlzcGxheV9tZW0oJmFkZHIs
ICZsZW4sIHdvcmRzeiwgaWQsIDApOwogICAgcmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7Cn0KCi8q
IAogKiBGVU5DVElPTjogRGlzcGFseSBNZW1vcnkgV29yZAogKi8Kc3RhdGljIGtkYl9jcHVfY21k
X3QKa2RiX3VzZ2ZfZHcodm9pZCkKewogICAga2RicCgiZHcgdmFkZHJ8c3ltIFtudW1dW2RvbWlk
XSA6IGR1bXAgbWVtIHdvcmQuIG51bSByZXF1aXJlZCBmb3IgZG9taWRcbiIpOwogICAgcmV0dXJu
IEtEQl9DUFVfTUFJTl9LREI7Cn0Kc3RhdGljIGtkYl9jcHVfY21kX3QgCmtkYl9jbWRmX2R3KGlu
dCBhcmdjLCBjb25zdCBjaGFyICoqYXJndiwgc3RydWN0IGNwdV91c2VyX3JlZ3MgKnJlZ3MpCnsK
ICAgIHJldHVybiBrZGJfZGlzcGxheV9tZW0oYXJnYywgYXJndiwgNCwga2RiX3VzZ2ZfZHcpOwp9
CgovKiAKICogRlVOQ1RJT046IERpc3BhbHkgTWVtb3J5IERvdWJsZVdvcmQgCiAqLwpzdGF0aWMg
a2RiX2NwdV9jbWRfdAprZGJfdXNnZl9kZCh2b2lkKQp7CiAgICBrZGJwKCJkZCB2YWRkcnxzeW0g
W251bV1bZG9taWRdIDogZHVtcCBkd29yZC4gbnVtIHJlcXVpcmVkIGZvciBkb21pZFxuIik7CiAg
ICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKfQpzdGF0aWMga2RiX2NwdV9jbWRfdCAKa2RiX2Nt
ZGZfZGQoaW50IGFyZ2MsIGNvbnN0IGNoYXIgKiphcmd2LCBzdHJ1Y3QgY3B1X3VzZXJfcmVncyAq
cmVncykKewogICAgcmV0dXJuIGtkYl9kaXNwbGF5X21lbShhcmdjLCBhcmd2LCA4LCBrZGJfdXNn
Zl9kZCk7Cn0KCi8qIAogKiBGVU5DVElPTjogTW9kaWZ5IE1lbW9yeSBXb3JkIAogKi8Kc3RhdGlj
IGtkYl9jcHVfY21kX3QKa2RiX3VzZ2ZfbXcodm9pZCkKewogICAga2RicCgibXcgdmFkZHJ8c3lt
IHZhbCBbZG9taWRdIDogbW9kaWZ5IG1lbW9yeSB3b3JkIGluIHZhZGRyXG4iKTsKICAgIHJldHVy
biBLREJfQ1BVX01BSU5fS0RCOwp9CnN0YXRpYyBrZGJfY3B1X2NtZF90IAprZGJfY21kZl9tdyhp
bnQgYXJnYywgY29uc3QgY2hhciAqKmFyZ3YsIHN0cnVjdCBjcHVfdXNlcl9yZWdzICpyZWdzKQp7
CiAgICB1bG9uZyB2YWw7CiAgICBrZGJ2YV90IGFkZHI7CiAgICBkb21pZF90IGlkID0gRE9NSURf
SURMRTsKCiAgICBpZiAoYXJnYyA8IDMpIHsKICAgICAgICByZXR1cm4ga2RiX3VzZ2ZfbXcoKTsK
ICAgIH0KICAgIGlmIChhcmdjID49NCkgewogICAgICAgIGlmICgha2RiX3N0cjJkb21pZChhcmd2
WzNdLCAmaWQsIDEpKSAKICAgICAgICAgICAgcmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7CiAgICB9
CiAgICBpZiAoIWtkYl9zdHIydWxvbmcoYXJndlsyXSwgJnZhbCkpIHsKICAgICAgICBrZGJwKCJJ
bnZhbGlkIHZhbDogJXNcbiIsIGFyZ3ZbMl0pOwogICAgICAgIHJldHVybiBLREJfQ1BVX01BSU5f
S0RCOwogICAgfQogICAgaWYgKCFrZGJfc3RyMmFkZHIoYXJndlsxXSwgJmFkZHIsIGlkKSkgewog
ICAgICAgIGtkYnAoIkludmFsaWQgYWRkci9zeW06ICVzXG4iLCBhcmd2WzFdKTsKICAgICAgICBy
ZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKICAgIH0KICAgIGlmIChrZGJfd3JpdGVfbWVtKGFkZHIs
IChrZGJieXRfdCAqKSZ2YWwsIDQsIGlkKSAhPSA0KQogICAgICAgIGtkYnAoIlVuYWJsZSB0byBz
ZXQgMHglbHggdG8gMHglbHhcbiIsIGFkZHIsIHZhbCk7CiAgICByZXR1cm4gS0RCX0NQVV9NQUlO
X0tEQjsKfQoKLyogCiAqIEZVTkNUSU9OOiBNb2RpZnkgTWVtb3J5IERvdWJsZVdvcmQgCiAqLwpz
dGF0aWMga2RiX2NwdV9jbWRfdAprZGJfdXNnZl9tZCh2b2lkKQp7CiAgICBrZGJwKCJtZCB2YWRk
cnxzeW0gdmFsIFtkb21pZF0gOiBtb2RpZnkgbWVtb3J5IGR3b3JkIGluIHZhZGRyXG4iKTsKICAg
IHJldHVybiBLREJfQ1BVX01BSU5fS0RCOwp9CnN0YXRpYyBrZGJfY3B1X2NtZF90IAprZGJfY21k
Zl9tZChpbnQgYXJnYywgY29uc3QgY2hhciAqKmFyZ3YsIHN0cnVjdCBjcHVfdXNlcl9yZWdzICpy
ZWdzKQp7CiAgICB1bG9uZyB2YWw7CiAgICBrZGJ2YV90IGFkZHI7CiAgICBkb21pZF90IGlkID0g
RE9NSURfSURMRTsKCiAgICBpZiAoYXJnYyA8IDMpIHsKICAgICAgICByZXR1cm4ga2RiX3VzZ2Zf
bWQoKTsKICAgIH0KICAgIGlmIChhcmdjID49NCkgewogICAgICAgIGlmICgha2RiX3N0cjJkb21p
ZChhcmd2WzNdLCAmaWQsIDEpKSB7CiAgICAgICAgICAgIHJldHVybiBLREJfQ1BVX01BSU5fS0RC
OwogICAgICAgIH0KICAgIH0KICAgIGlmICgha2RiX3N0cjJ1bG9uZyhhcmd2WzJdLCAmdmFsKSkg
ewogICAgICAgIGtkYnAoIkludmFsaWQgdmFsOiAlc1xuIiwgYXJndlsyXSk7CiAgICAgICAgcmV0
dXJuIEtEQl9DUFVfTUFJTl9LREI7CiAgICB9CiAgICBpZiAoIWtkYl9zdHIyYWRkcihhcmd2WzFd
LCAmYWRkciwgaWQpKSB7CiAgICAgICAga2RicCgiSW52YWxpZCBhZGRyL3N5bTogJXNcbiIsIGFy
Z3ZbMV0pOwogICAgICAgIHJldHVybiBLREJfQ1BVX01BSU5fS0RCOwogICAgfQogICAgaWYgKGtk
Yl93cml0ZV9tZW0oYWRkciwgKGtkYmJ5dF90ICopJnZhbCxzaXplb2YodmFsKSxpZCkgIT0gc2l6
ZW9mKHZhbCkpCiAgICAgICAga2RicCgiVW5hYmxlIHRvIHNldCAweCVseCB0byAweCVseFxuIiwg
YWRkciwgdmFsKTsKCiAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKfQoKc3RydWN0ICBYZ3Rf
ZGVzY19zdHJ1Y3QgewogICAgdW5zaWduZWQgc2hvcnQgc2l6ZTsKICAgIHVuc2lnbmVkIGxvbmcg
YWRkcmVzcyBfX2F0dHJpYnV0ZV9fKChwYWNrZWQpKTsKfTsKCnZvaWQKa2RiX3Nob3dfc3BlY2lh
bF9yZWdzKHN0cnVjdCBjcHVfdXNlcl9yZWdzICpyZWdzKQp7CiAgICBzdHJ1Y3QgWGd0X2Rlc2Nf
c3RydWN0IGRlc2M7CiAgICB1bnNpZ25lZCBzaG9ydCB0cjsgICAgICAgICAgICAgICAgIC8qIFRh
c2sgUmVnaXN0ZXIgc2VnbWVudCBzZWxlY3RvciAqLwogICAgX191NjQgZWZlcjsKCiAgICBrZGJw
KCJcblNwZWNpYWwgUmVnaXN0ZXJzOlxuIik7CiAgICBfX2FzbV9fIF9fdm9sYXRpbGVfXyAoInNp
ZHQgICglMCkgXG4iIDo6ICJhIigmZGVzYykgOiAibWVtb3J5Iik7CiAgICBrZGJwKCJJRFRSOiBh
ZGRyOiAlMDE2bHggbGltaXQ6ICUwNHhcbiIsIGRlc2MuYWRkcmVzcywgZGVzYy5zaXplKTsKICAg
IF9fYXNtX18gX192b2xhdGlsZV9fICgic2dkdCAgKCUwKSBcbiIgOjogImEiKCZkZXNjKSA6ICJt
ZW1vcnkiKTsKICAgIGtkYnAoIkdEVFI6IGFkZHI6ICUwMTZseCBsaW1pdDogJTA0eFxuIiwgZGVz
Yy5hZGRyZXNzLCBkZXNjLnNpemUpOwoKICAgIGtkYnAoImNyMDogJTAxNmx4ICBjcjI6ICUwMTZs
eFxuIiwgcmVhZF9jcjAoKSwgcmVhZF9jcjIoKSk7CiAgICBrZGJwKCJjcjM6ICUwMTZseCAgY3I0
OiAlMDE2bHhcbiIsIHJlYWRfY3IzKCksIHJlYWRfY3I0KCkpOwogICAgX19hc21fXyBfX3ZvbGF0
aWxlX18gKCJzdHIgKCUwKSBcbiI6OiAiYSIoJnRyKSA6ICJtZW1vcnkiKTsKICAgIGtkYnAoIlRS
OiAleFxuIiwgdHIpOwoKICAgIHJkbXNybChNU1JfRUZFUiwgZWZlcik7ICAgIC8qIElBMzJfRUZF
UiAqLwogICAga2RicCgiZWZlcjoiS0RCRjY0IiBMTUEoSUEtMzJlIG1vZGUpOiVkIFNDRShzeXNj
YWxsL3N5c3JldCk6JWRcbiIsCiAgICAgICAgIGVmZXIsICgoZWZlciZFRkVSX0xNQSkgIT0gMCks
ICgoZWZlciZFRkVSX1NDRSkgIT0gMCkpOwoKICAgIGtkYnAoIkRSMDogJTAxNmx4ICBEUjE6JTAx
Nmx4ICBEUjI6JTAxNmx4XG4iLCBrZGJfcmRfZGJncmVnKDApLAogICAgICAgICBrZGJfcmRfZGJn
cmVnKDEpLCBrZGJfcmRfZGJncmVnKDIpKTsgCiAgICBrZGJwKCJEUjM6ICUwMTZseCAgRFI2OiUw
MTZseCAgRFI3OiUwMTZseFxuIiwga2RiX3JkX2RiZ3JlZygzKSwKICAgICAgICAga2RiX3JkX2Ri
Z3JlZyg2KSwga2RiX3JkX2RiZ3JlZyg3KSk7IAp9CgovKiAKICogRlVOQ1RJT046IERpc3BhbHkg
UmVnaXN0ZXJzLiBJZiAic3AiIGFyZ3VtZW50LCB0aGVuIGRpc3BsYXkgYWRkaXRpb25hbCByZWdz
CiAqLwpzdGF0aWMga2RiX2NwdV9jbWRfdAprZGJfdXNnZl9kcih2b2lkKQp7CiAgICBrZGJwKCJk
ciBbc3BdOiBkaXNwbGF5IHJlZ2lzdGVycy4gc3AgdG8gZGlzcGxheSBzcGVjaWFsIHJlZ3MgYWxz
b1xuIik7CiAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKfQpzdGF0aWMga2RiX2NwdV9jbWRf
dCAKa2RiX2NtZGZfZHIoaW50IGFyZ2MsIGNvbnN0IGNoYXIgKiphcmd2LCBzdHJ1Y3QgY3B1X3Vz
ZXJfcmVncyAqcmVncykKewogICAgaWYgKGFyZ2MgPiAxICYmICphcmd2WzFdID09ICc/JykKICAg
ICAgICByZXR1cm4ga2RiX3VzZ2ZfZHIoKTsKCiAgICBLREJHUDEoInJlZ3M6JXAgLnJzcDolbHgg
LnJpcDolbHhcbiIsIHJlZ3MsIHJlZ3MtPnJzcCwgcmVncy0+cmlwKTsKICAgIHNob3dfcmVnaXN0
ZXJzKHJlZ3MpOwogICAgaWYgKGFyZ2MgPiAxICYmICFzdHJjbXAoYXJndlsxXSwgInNwIikpIAog
ICAgICAgIGtkYl9zaG93X3NwZWNpYWxfcmVncyhyZWdzKTsKICAgIHJldHVybiBLREJfQ1BVX01B
SU5fS0RCOwp9CgovKiBzaG93IHJlZ2lzdGVycyBvbiBzdGFjayBib3R0b20gd2hlcmUgZ3Vlc3Qg
Y29udGV4dCBpcy4gc2FtZSBhcyBkciBpZgogKiBub3QgcnVubmluZyBpbiBndWVzdCBtb2RlICov
CnN0YXRpYyBrZGJfY3B1X2NtZF90CmtkYl91c2dmX2RyZyh2b2lkKQp7CiAgICBrZGJwKCJkcmc6
IGRpc3BsYXkgYWN0aXZlIGd1ZXN0IHJlZ2lzdGVycyBhdCBzdGFjayBib3R0b21cbiIpOwogICAg
cmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7Cn0Kc3RhdGljIGtkYl9jcHVfY21kX3QgCmtkYl9jbWRm
X2RyZyhpbnQgYXJnYywgY29uc3QgY2hhciAqKmFyZ3YsIHN0cnVjdCBjcHVfdXNlcl9yZWdzICpy
ZWdzKQp7CiAgICBpZiAoYXJnYyA+IDEgJiYgKmFyZ3ZbMV0gPT0gJz8nKQogICAgICAgIHJldHVy
biBrZGJfdXNnZl9kcmcoKTsKCiAgICBrZGJwKCJcdE5vdGU6IGRzL2VzL2ZzL2dzIGV0Yy4uIGFy
ZSBub3Qgc2F2ZWQgZnJvbSB0aGUgY3B1XG4iKTsKICAgIGtkYl9wcmludF91cmVncyhndWVzdF9j
cHVfdXNlcl9yZWdzKCkpOwogICAgcmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7Cn0KCi8qIAogKiBG
VU5DVElPTjogTW9kaWZ5IFJlZ2lzdGVyCiAqLwpzdGF0aWMga2RiX2NwdV9jbWRfdAprZGJfdXNn
Zl9tcih2b2lkKQp7CiAgICBrZGJwKCJtciByZWcgdmFsIDogTW9kaWZ5IFJlZ2lzdGVyLiB2YWwg
YXNzdW1lZCBpbiBoZXhcbiIpOwogICAgcmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7Cn0Kc3RhdGlj
IGtkYl9jcHVfY21kX3QgCmtkYl9jbWRmX21yKGludCBhcmdjLCBjb25zdCBjaGFyICoqYXJndiwg
c3RydWN0IGNwdV91c2VyX3JlZ3MgKnJlZ3MpCnsKICAgIGNvbnN0IGNoYXIgKmFyZ3A7CiAgICBp
bnQgcmVnb2ZmczsKICAgIHVsb25nIHZhbDsKCiAgICBpZiAoYXJnYyAhPSAzIHx8ICFrZGJfc3Ry
MnVsb25nKGFyZ3ZbMl0sICZ2YWwpKSB7CiAgICAgICAgcmV0dXJuIGtkYl91c2dmX21yKCk7CiAg
ICB9CiAgICBhcmdwID0gYXJndlsxXTsKCiNpZiBkZWZpbmVkKF9feDg2XzY0X18pCiAgICBpZiAo
KHJlZ29mZnM9a2RiX3ZhbGlkX3JlZyhhcmdwKSkgIT0gLTEpCiAgICAgICAgKigodWludDY0X3Qg
KikoKGNoYXIgKilyZWdzK3JlZ29mZnMpKSA9IHZhbDsKI2Vsc2UKICAgIGlmICghc3RyY21wKGFy
Z3AsICJlYXgiKSkKICAgICAgICByZWdzLT5lYXggPSB2YWw7CiAgICBlbHNlIGlmICghc3RyY21w
KGFyZ3AsICJlYngiKSkKICAgICAgICByZWdzLT5lYnggPSB2YWw7CiAgICBlbHNlIGlmICghc3Ry
Y21wKGFyZ3AsICJlY3giKSkKICAgICAgICByZWdzLT5lY3ggPSB2YWw7CiAgICBlbHNlIGlmICgh
c3RyY21wKGFyZ3AsICJlZHgiKSkKICAgICAgICByZWdzLT5lZHggPSB2YWw7CiAgICBlbHNlIGlm
ICghc3RyY21wKGFyZ3AsICJlc2kiKSkKICAgICAgICByZWdzLT5lc2kgPSB2YWw7CiAgICBlbHNl
IGlmICghc3RyY21wKGFyZ3AsICJlZGkiKSkKICAgICAgICByZWdzLT5lZGkgPSB2YWw7CiAgICBl
bHNlIGlmICghc3RyY21wKGFyZ3AsICJlYnAiKSkKICAgICAgICByZWdzLT5lYnAgPSB2YWw7CiAg
ICBlbHNlIGlmICghc3RyY21wKGFyZ3AsICJlc3AiKSkKICAgICAgICByZWdzLT5lc3AgPSB2YWw7
CiAgICBlbHNlIGlmICghc3RyY21wKGFyZ3AsICJlZmxhZ3MiKSB8fCAhc3RyY21wKGFyZ3AsICJy
ZmxhZ3MiKSkKICAgICAgICByZWdzLT5lZmxhZ3MgPSB2YWw7CiNlbmRpZgogICAgZWxzZQogICAg
ICAgIGtkYnAoIkVycm9yLiBCYWQgcmVnaXN0ZXIgOiAlc1xuIiwgYXJncCk7CgogICAgcmV0dXJu
IEtEQl9DUFVfTUFJTl9LREI7Cn0KCi8qIAogKiBGVU5DVElPTjogU2luZ2xlIFN0ZXAKICovCnN0
YXRpYyBrZGJfY3B1X2NtZF90CmtkYl91c2dmX3NzKHZvaWQpCnsKICAgIGtkYnAoInNzOiBzaW5n
bGUgc3RlcFxuIik7CiAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKfQpzdGF0aWMga2RiX2Nw
dV9jbWRfdCAKa2RiX2NtZGZfc3MoaW50IGFyZ2MsIGNvbnN0IGNoYXIgKiphcmd2LCBzdHJ1Y3Qg
Y3B1X3VzZXJfcmVncyAqcmVncykKewogICAgI2RlZmluZSBLREJfSEFMVF9JTlNUUiAweGY0Cgog
ICAga2RiYnl0X3QgYnl0ZTsKICAgIHN0cnVjdCBkb21haW4gKmRwID0gY3VycmVudC0+ZG9tYWlu
OwogICAgZG9taWRfdCBpZCA9IGd1ZXN0X21vZGUocmVncykgPyBkcC0+ZG9tYWluX2lkIDogRE9N
SURfSURMRTsKCiAgICBpZiAoYXJnYyA+IDEgJiYgKmFyZ3ZbMV0gPT0gJz8nKQogICAgICAgIHJl
dHVybiBrZGJfdXNnZl9zcygpOwoKICAgIEtEQkdQKCJlbnRlciBrZGJfY21kZl9zcyBcbiIpOwog
ICAgaWYgKCFyZWdzKSB7CiAgICAgICAga2RicCgiJXM6IHJlZ3Mgbm90IGF2YWlsYWJsZVxuIiwg
X19GVU5DVElPTl9fKTsKICAgICAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKICAgIH0KICAg
IGlmIChrZGJfcmVhZF9tZW0ocmVncy0+S0RCSVAsICZieXRlLCAxLCBpZCkgPT0gMSkgewogICAg
ICAgIGlmIChieXRlID09IEtEQl9IQUxUX0lOU1RSKSB7CiAgICAgICAgICAgIGtkYnAoImtkYjog
anVtcGluZyBvdmVyIGhhbHQgaW5zdHJ1Y3Rpb25cbiIpOwogICAgICAgICAgICByZWdzLT5LREJJ
UCsrOwogICAgICAgIH0KICAgIH0gZWxzZSB7CiAgICAgICAga2RicCgia2RiOiBGYWlsZWQgdG8g
cmVhZCBieXRlIGF0OiAlbHhcbiIsIHJlZ3MtPktEQklQKTsKICAgICAgICByZXR1cm4gS0RCX0NQ
VV9NQUlOX0tEQjsKICAgIH0KICAgIGlmIChndWVzdF9tb2RlKHJlZ3MpICYmIGlzX2h2bV9vcl9o
eWJfdmNwdShjdXJyZW50KSkgewogICAgICAgIGRwLT5kZWJ1Z2dlcl9hdHRhY2hlZCA9IDE7ICAv
KiBzZWUgc3ZtX2RvX3Jlc3VtZS92bXhfZG9fICovCiAgICAgICAgY3VycmVudC0+YXJjaC5odm1f
dmNwdS5zaW5nbGVfc3RlcCA9IDE7CiAgICB9IGVsc2UKICAgICAgICByZWdzLT5lZmxhZ3MgfD0g
WDg2X0VGTEFHU19URjsKCiAgICByZXR1cm4gS0RCX0NQVV9TUzsKfQoKLyogCiAqIEZVTkNUSU9O
OiBOZXh0IEluc3RydWN0aW9uLCBzdGVwIG92ZXIgdGhlIGNhbGwgaW5zdHIgdG8gdGhlIG5leHQg
aW5zdHIKICovCnN0YXRpYyBrZGJfY3B1X2NtZF90CmtkYl91c2dmX25pKHZvaWQpCnsKICAgIGtk
YnAoIm5pOiBzaW5nbGUgc3RlcCwgc3RlcHBpbmcgb3ZlciBmdW5jdGlvbiBjYWxsc1xuIik7CiAg
ICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKfQpzdGF0aWMga2RiX2NwdV9jbWRfdCAKa2RiX2Nt
ZGZfbmkoaW50IGFyZ2MsIGNvbnN0IGNoYXIgKiphcmd2LCBzdHJ1Y3QgY3B1X3VzZXJfcmVncyAq
cmVncykKewogICAgaW50IHN6LCBpOwogICAgZG9taWRfdCBpZD1ndWVzdF9tb2RlKHJlZ3MpID8g
Y3VycmVudC0+ZG9tYWluLT5kb21haW5faWQ6RE9NSURfSURMRTsKCiAgICBpZiAoYXJnYyA+IDEg
JiYgKmFyZ3ZbMV0gPT0gJz8nKQogICAgICAgIHJldHVybiBrZGJfdXNnZl9uaSgpOwoKICAgIEtE
QkdQKCJlbnRlciBrZGJfY21kZl9uaSBcbiIpOwogICAgaWYgKCFyZWdzKSB7CiAgICAgICAga2Ri
cCgiJXM6IHJlZ3Mgbm90IGF2YWlsYWJsZVxuIiwgX19GVU5DVElPTl9fKTsKICAgICAgICByZXR1
cm4gS0RCX0NQVV9NQUlOX0tEQjsKICAgIH0KICAgIGlmICgoc3o9a2RiX2NoZWNrX2NhbGxfaW5z
dHIoaWQsIHJlZ3MtPktEQklQKSkgPT0gMCkgIC8qICFjYWxsIGluc3RyICovCiAgICAgICAgcmV0
dXJuIGtkYl9jbWRmX3NzKGFyZ2MsIGFyZ3YsIHJlZ3MpOyAgICAgICAgIC8qIGp1c3QgZG8gc3Mg
Ki8KCiAgICBpZiAoKGk9a2RiX3NldF9icChpZCwgcmVncy0+S0RCSVArc3osIDEsMCwwLDAsMCkp
ID49IEtEQk1BWFNCUCkgLyogZmFpbGVkICovCiAgICAgICAgcmV0dXJuIEtEQl9DUFVfTUFJTl9L
REI7CgogICAga2RiX3NicGFbaV0uYnBfbmkgPSAxOwogICAgaWYgKGd1ZXN0X21vZGUocmVncykg
JiYgaXNfaHZtX29yX2h5Yl92Y3B1KGN1cnJlbnQpKQogICAgICAgIGN1cnJlbnQtPmFyY2guaHZt
X3ZjcHUuc2luZ2xlX3N0ZXAgPSAwOwogICAgZWxzZQogICAgICAgIHJlZ3MtPmVmbGFncyAmPSB+
WDg2X0VGTEFHU19URjsKCiAgICByZXR1cm4gS0RCX0NQVV9OSTsKfQoKc3RhdGljIHZvaWQKa2Ri
X2J0Zl9lbmFibGUodm9pZCkKewogICAgdTY0IGRlYnVnY3RsOwogICAgcmRtc3JsKE1TUl9JQTMy
X0RFQlVHQ1RMTVNSLCBkZWJ1Z2N0bCk7CiAgICB3cm1zcmwoTVNSX0lBMzJfREVCVUdDVExNU1Is
IGRlYnVnY3RsIHwgMHgyKTsKfQoKLyogCiAqIEZVTkNUSU9OOiBTaW5nbGUgU3RlcCB0byBicmFu
Y2guIERvZXNuJ3Qgc2VlbSB0byB3b3JrIHZlcnkgd2VsbC4KICovCnN0YXRpYyBrZGJfY3B1X2Nt
ZF90CmtkYl91c2dmX3NzYih2b2lkKQp7CiAgICBrZGJwKCJzc2I6IHNpbmdlIHN0ZXAgdG8gYnJh
bmNoXG4iKTsKICAgIHJldHVybiBLREJfQ1BVX01BSU5fS0RCOwp9CnN0YXRpYyBrZGJfY3B1X2Nt
ZF90IAprZGJfY21kZl9zc2IoaW50IGFyZ2MsIGNvbnN0IGNoYXIgKiphcmd2LCBzdHJ1Y3QgY3B1
X3VzZXJfcmVncyAqcmVncykKewogICAgaWYgKGFyZ2MgPiAxICYmICphcmd2WzFdID09ICc/JykK
ICAgICAgICByZXR1cm4ga2RiX3VzZ2Zfc3NiKCk7CgogICAgS0RCR1AoIk1VSzogZW50ZXIga2Ri
X2NtZGZfc3NiXG4iKTsKICAgIGlmICghcmVncykgewogICAgICAgIGtkYnAoIiVzOiByZWdzIG5v
dCBhdmFpbGFibGVcbiIsIF9fRlVOQ1RJT05fXyk7CiAgICAgICAgcmV0dXJuIEtEQl9DUFVfTUFJ
Tl9LREI7CiAgICB9CiAgICBpZiAoaXNfaHZtX29yX2h5Yl92Y3B1KGN1cnJlbnQpKSAKICAgICAg
ICBjdXJyZW50LT5kb21haW4tPmRlYnVnZ2VyX2F0dGFjaGVkID0gMTsgICAgICAgIC8qIHZteC9z
dm1fZG9fcmVzdW1lKCkqLwoKICAgIHJlZ3MtPmVmbGFncyB8PSBYODZfRUZMQUdTX1RGOwogICAg
a2RiX2J0Zl9lbmFibGUoKTsKICAgIHJldHVybiBLREJfQ1BVX1NTOwp9CgovKiAKICogRlVOQ1RJ
T046IENvbnRpbnVlIEV4ZWN1dGlvbi4gVEYgbXVzdCBiZSBjbGVhcmVkIGhlcmUgYXMgdGhpcyBj
b3VsZCBydW4gb24gCiAqICAgICAgICAgICBhbnkgY3B1LiBIZW5jZSBub3QgT0sgdG8gZG8gaXQg
ZnJvbSBrZGJfZW5kX3Nlc3Npb24uCiAqLwpzdGF0aWMga2RiX2NwdV9jbWRfdAprZGJfdXNnZl9n
byh2b2lkKQp7CiAgICBrZGJwKCJnbzogbGVhdmUga2RiIGFuZCBjb250aW51ZSBleGVjdXRpb25c
biIpOwogICAgcmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7Cn0Kc3RhdGljIGtkYl9jcHVfY21kX3Qg
CmtkYl9jbWRmX2dvKGludCBhcmdjLCBjb25zdCBjaGFyICoqYXJndiwgc3RydWN0IGNwdV91c2Vy
X3JlZ3MgKnJlZ3MpCnsKICAgIGlmIChhcmdjID4gMSAmJiAqYXJndlsxXSA9PSAnPycpCiAgICAg
ICAgcmV0dXJuIGtkYl91c2dmX2dvKCk7CgogICAgcmVncy0+ZWZsYWdzICY9IH5YODZfRUZMQUdT
X1RGOwogICAgcmV0dXJuIEtEQl9DUFVfR087Cn0KCi8qIEFsbCBjcHVzIG11c3QgZGlzcGxheSB0
aGVpciBjdXJyZW50IGNvbnRleHQgKi8Kc3RhdGljIGtkYl9jcHVfY21kX3QgCmtkYl9jcHVfc3Rh
dHVzX2FsbChpbnQgY2NwdSwgc3RydWN0IGNwdV91c2VyX3JlZ3MgKnJlZ3MpCnsKICAgIGludCBj
cHU7CiAgICBmb3JfZWFjaF9vbmxpbmVfY3B1KGNwdSkgewogICAgICAgIGlmIChjcHUgPT0gY2Nw
dSkgewogICAgICAgICAgICBrZGJwKCJbJWRdIiwgY2NwdSk7CiAgICAgICAgICAgIGtkYl9kaXNw
bGF5X3BjKHJlZ3MpOwogICAgICAgIH0gZWxzZSB7CiAgICAgICAgICAgIGlmIChrZGJfY3B1X2Nt
ZFtjcHVdICE9IEtEQl9DUFVfUEFVU0UpICAgLyogaHVuZyBjcHUgKi8KICAgICAgICAgICAgICAg
IGNvbnRpbnVlOwogICAgICAgICAgICBrZGJfY3B1X2NtZFtjcHVdID0gS0RCX0NQVV9TSE9XUEM7
CiAgICAgICAgICAgIHdoaWxlIChrZGJfY3B1X2NtZFtjcHVdPT1LREJfQ1BVX1NIT1dQQyk7CiAg
ICAgICAgfQogICAgfQogICAgcmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7Cn0KCi8qIAogKiBkaXNw
bGF5L3N3aXRjaCBDUFUuIAogKiAgQXJndW1lbnQ6CiAqICAgICBub25lOiAgIGp1c3QgZ28gYmFj
ayB0byBpbml0aWFsIGNwdQogKiAgICAgY3B1bnVtOiBzd2l0Y2ggdG8gZ2l2ZW4gdnB1CiAqICAg
ICAiYWxsIjogIHNob3cgb25lIGxpbmUgc3RhdHVzIG9mIGFsbCBjcHVzCiAqLwpleHRlcm4gdm9s
YXRpbGUgaW50IGtkYl9pbml0X2NwdTsKc3RhdGljIGtkYl9jcHVfY21kX3QKa2RiX3VzZ2ZfY3B1
KHZvaWQpCnsKICAgIGtkYnAoImNwdSBbYWxsfG51bV06IG5vbmUgd2lsbCBzd2l0Y2ggYmFjayB0
byBpbml0aWFsIGNwdVxuIik7CiAgICBrZGJwKCIgICAgICAgICAgICAgICBjcHVudW0gdG8gc3dp
dGNoIHRvIHRoZSB2Y3B1LiBhbGwgdG8gc2hvdyBzdGF0dXNcbiIpOwogICAgcmV0dXJuIEtEQl9D
UFVfTUFJTl9LREI7Cn0Kc3RhdGljIGtkYl9jcHVfY21kX3QgCmtkYl9jbWRmX2NwdShpbnQgYXJn
YywgY29uc3QgY2hhciAqKmFyZ3YsIHN0cnVjdCBjcHVfdXNlcl9yZWdzICpyZWdzKQp7CiAgICBp
bnQgY3B1OwogICAgaW50IGNjcHUgPSBzbXBfcHJvY2Vzc29yX2lkKCk7CgogICAgaWYgKGFyZ2Mg
PiAxICYmICphcmd2WzFdID09ICc/JykKICAgICAgICByZXR1cm4ga2RiX3VzZ2ZfY3B1KCk7Cgog
ICAgaWYgKGFyZ2MgPiAxKSB7CiAgICAgICAgaWYgKCFzdHJjbXAoYXJndlsxXSwgImFsbCIpKQog
ICAgICAgICAgICByZXR1cm4ga2RiX2NwdV9zdGF0dXNfYWxsKGNjcHUsIHJlZ3MpOwoKICAgICAg
ICAgICAgY3B1ID0gKGludClzaW1wbGVfc3RydG91bChhcmd2WzFdLCBOVUxMLCAwKTsgLyogaGFu
ZGxlcyAweCAqLwogICAgICAgICAgICBpZiAoY3B1ID49IDAgJiYgY3B1IDwgTlJfQ1BVUyAmJiBj
cHUgIT0gY2NwdSAmJiAKICAgICAgICAgICAgICAgIGNwdV9vbmxpbmUoY3B1KSAmJiBrZGJfY3B1
X2NtZFtjcHVdID09IEtEQl9DUFVfUEFVU0UpCiAgICAgICAgICAgIHsKICAgICAgICAgICAgICAg
IGtkYnAoIlN3aXRjaGluZyB0byBjcHU6JWRcbiIsIGNwdSk7CiAgICAgICAgICAgICAgICBrZGJf
Y3B1X2NtZFtjcHVdID0gS0RCX0NQVV9NQUlOX0tEQjsKCiAgICAgICAgICAgICAgICAvKiBjbGVh
ciBhbnkgc2luZ2xlIHN0ZXAgb24gdGhlIGN1cnJlbnQgY3B1ICovCiAgICAgICAgICAgICAgICBy
ZWdzLT5lZmxhZ3MgJj0gflg4Nl9FRkxBR1NfVEY7CiAgICAgICAgICAgICAgICByZXR1cm4gS0RC
X0NQVV9QQVVTRTsKICAgICAgICAgICAgfSBlbHNlIHsKICAgICAgICAgICAgICAgIGlmIChjcHUg
IT0gY2NwdSkKICAgICAgICAgICAgICAgICAgICBrZGJwKCJVbmFibGUgdG8gc3dpdGNoIHRvIGNw
dTolZFxuIiwgY3B1KTsKICAgICAgICAgICAgICAgIGVsc2UgewogICAgICAgICAgICAgICAgICAg
IGtkYl9kaXNwbGF5X3BjKHJlZ3MpOwogICAgICAgICAgICAgICAgfQogICAgICAgICAgICAgICAg
cmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7CiAgICAgICAgICAgIH0KICAgIH0KICAgIC8qIG5vIGFy
ZyBtZWFucyBiYWNrIHRvIGluaXRpYWwgY3B1ICovCiAgICBpZiAoIWtkYl9zeXNfY3Jhc2ggJiYg
Y2NwdSAhPSBrZGJfaW5pdF9jcHUpIHsKICAgICAgICBpZiAoa2RiX2NwdV9jbWRba2RiX2luaXRf
Y3B1XSA9PSBLREJfQ1BVX1BBVVNFKSB7CiAgICAgICAgICAgIHJlZ3MtPmVmbGFncyAmPSB+WDg2
X0VGTEFHU19URjsKICAgICAgICAgICAga2RiX2NwdV9jbWRba2RiX2luaXRfY3B1XSA9IEtEQl9D
UFVfTUFJTl9LREI7CiAgICAgICAgICAgIHJldHVybiBLREJfQ1BVX1BBVVNFOwogICAgICAgIH0g
ZWxzZQogICAgICAgICAgICBrZGJwKCJVbmFibGUgdG8gc3dpdGNoIHRvOiAlZFxuIiwga2RiX2lu
aXRfY3B1KTsKICAgIH0KICAgIHJldHVybiBLREJfQ1BVX01BSU5fS0RCOwp9CgovKiBzZW5kIE5N
SSB0byBhbGwgb3IgZ2l2ZW4gQ1BVLiBNdXN0IGJlIGNyYXNoZWQvZmF0YWwgc3RhdGUgKi8Kc3Rh
dGljIGtkYl9jcHVfY21kX3QKa2RiX3VzZ2Zfbm1pKHZvaWQpCnsKICAgIGtkYnAoIm5taSBjcHUj
fGFsbDogc2VuZCBubWkgY3B1L3MuIG11c3QgcmVib290IHdoZW4gZG9uZSB3aXRoIGtkYlxuIik7
CiAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKfQpzdGF0aWMga2RiX2NwdV9jbWRfdCAKa2Ri
X2NtZGZfbm1pKGludCBhcmdjLCBjb25zdCBjaGFyICoqYXJndiwgc3RydWN0IGNwdV91c2VyX3Jl
Z3MgKnJlZ3MpCnsKICAgIGNwdW1hc2tfdCBjcHVtYXNrOwogICAgaW50IGNjcHUgPSBzbXBfcHJv
Y2Vzc29yX2lkKCk7CgogICAgaWYgKGFyZ2MgPD0gMSB8fCAoYXJnYyA+IDEgJiYgKmFyZ3ZbMV0g
PT0gJz8nKSkKICAgICAgICByZXR1cm4ga2RiX3VzZ2Zfbm1pKCk7CgogICAgaWYgKCFrZGJfc3lz
X2NyYXNoKSB7CiAgICAgICAga2RicCgia2RiOiBubWkgY21kIGF2YWlsYWJsZSBpbiBjcmFzaGVk
IHN0YXRlIG9ubHlcbiIpOwogICAgICAgIHJldHVybiBLREJfQ1BVX01BSU5fS0RCOwogICAgfQog
ICAgaWYgKCFzdHJjbXAoYXJndlsxXSwgImFsbCIpKQogICAgICAgIGNwdW1hc2sgPSBjcHVfb25s
aW5lX21hcDsKICAgIGVsc2UgewogICAgICAgIGludCBjcHUgPSAoaW50KXNpbXBsZV9zdHJ0b3Vs
KGFyZ3ZbMV0sIE5VTEwsIDApOwogICAgICAgIGlmIChjcHUgPj0gMCAmJiBjcHUgPCBOUl9DUFVT
ICYmIGNwdSAhPSBjY3B1ICYmIGNwdV9vbmxpbmUoY3B1KSkKICAgICAgICAgICAgY3B1bWFzayA9
ICpjcHVtYXNrX29mKGNwdSk7CiAgICAgICAgZWxzZSB7CiAgICAgICAgICAgIGtkYnAoIktEQiBu
bWk6IGludmFsaWQgY3B1ICVzXG4iLCBhcmd2WzFdKTsKICAgICAgICAgICAgcmV0dXJuIEtEQl9D
UFVfTUFJTl9LREI7CiAgICAgICAgfQogICAgfQogICAga2RiX25taV9wYXVzZV9jcHVzKGNwdW1h
c2spOwogICAgcmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7Cn0KCnN0YXRpYyBrZGJfY3B1X2NtZF90
CmtkYl91c2dmX3BlcmNwdSh2b2lkKQp7CiAgICBrZGJwKCJwZXJjcHU6IGRpc3BsYXkgcGVyIGNw
dSBwb2ludGVyc1xuIik7CiAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKfQpzdGF0aWMga2Ri
X2NwdV9jbWRfdCAKa2RiX2NtZGZfcGVyY3B1KGludCBhcmdjLCBjb25zdCBjaGFyICoqYXJndiwg
c3RydWN0IGNwdV91c2VyX3JlZ3MgKnJlZ3MpCnsKICAgIGlmIChhcmdjID4gMSAmJiAqYXJndlsx
XSA9PSAnPycpCiAgICAgICAgcmV0dXJuIGtkYl91c2dmX3BlcmNwdSgpOwogICAga2RiX2R1bXBf
dGltZV9wY3B1KCk7CiAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKfQoKLyogPT09PT09PT09
PT09PT09PT09PT09PT09PSBCcmVha3BvaW50cyA9PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT0gKi8KCnN0YXRpYyB2b2lkCmtkYl9wcm50X2JwX2NvbmQoaW50IGJwbnVtKQp7CiAg
ICBzdHJ1Y3Qga2RiX2JwY29uZCAqYnBjcCA9ICZrZGJfc2JwYVticG51bV0udS5icF9jb25kOwoK
ICAgIGlmIChicGNwLT5icF9jb25kX3N0YXR1cyA9PSAxKSB7CiAgICAgICAga2RicCgiICAgICAo
ICVzICVjJWMgJWx4IClcbiIsIAogICAgICAgICAgICAga2RiX3JlZ29mZnNfdG9fbmFtZShicGNw
LT5icF9jb25kX2xocyksCiAgICAgICAgICAgICBicGNwLT5icF9jb25kX3R5cGUgPT0gMSA/ICc9
JyA6ICchJywgJz0nLCBicGNwLT5icF9jb25kX3Jocyk7CiAgICB9IGVsc2UgewogICAgICAgIGtk
YnAoIiAgICAgKCAlbHggJWMlYyAlbHggKVxuIiwgYnBjcC0+YnBfY29uZF9saHMsCiAgICAgICAg
ICAgICBicGNwLT5icF9jb25kX3R5cGUgPT0gMSA/ICc9JyA6ICchJywgJz0nLCBicGNwLT5icF9j
b25kX3Jocyk7CiAgICB9Cn0KCnN0YXRpYyB2b2lkCmtkYl9wcm50X2JwX2V4dHJhKGludCBicG51
bSkKewogICAgaWYgKGtkYl9zYnBhW2JwbnVtXS5icF90eXBlID09IDIpIHsKICAgICAgICB1bG9u
ZyBpLCBhcmcsICpidHAgPSBrZGJfc2JwYVticG51bV0udS5icF9idHA7CiAgICAgICAgCiAgICAg
ICAga2RicCgiICAgd2lsbCB0cmFjZSAiKTsKICAgICAgICBmb3IgKGk9MDsgaSA8IEtEQl9NQVhC
VFAgJiYgYnRwW2ldOyBpKyspCiAgICAgICAgICAgIGlmICgoYXJnPWJ0cFtpXSkgPCBzaXplb2Yg
KHN0cnVjdCBjcHVfdXNlcl9yZWdzKSkgewogICAgICAgICAgICAgICAga2RicCgiICVzICIsIGtk
Yl9yZWdvZmZzX3RvX25hbWUoYXJnKSk7CiAgICAgICAgICAgIH0gZWxzZSB7CiAgICAgICAgICAg
ICAgICBrZGJwKCIgJWx4ICIsIGFyZyk7CiAgICAgICAgICAgIH0KICAgICAgICBrZGJwKCJcbiIp
OwoKICAgIH0gZWxzZSBpZiAoa2RiX3NicGFbYnBudW1dLmJwX3R5cGUgPT0gMSkKICAgICAgICBr
ZGJfcHJudF9icF9jb25kKGJwbnVtKTsKfQoKLyoKICogTGlzdCBzb2Z0d2FyZSBicmVha3BvaW50
cwogKi8Kc3RhdGljIGtkYl9jcHVfY21kX3QKa2RiX2Rpc3BsYXlfc2JrcHRzKHZvaWQpCnsKICAg
IGludCBpOwogICAgZm9yKGk9MDsgaSA8IEtEQk1BWFNCUDsgaSsrKQogICAgICAgIGlmIChrZGJf
c2JwYVtpXS5icF9hZGRyICYmICFrZGJfc2JwYVtpXS5icF9kZWxldGVkKSB7CiAgICAgICAgICAg
IHN0cnVjdCBkb21haW4gKmRwID0ga2RiX2RvbWlkMnB0cihrZGJfc2JwYVtpXS5icF9kb21pZCk7
CgogICAgICAgICAgICBpZiAoZHAgPT0gTlVMTCB8fCBkcC0+aXNfZHlpbmcpIHsKICAgICAgICAg
ICAgICAgIG1lbXNldCgma2RiX3NicGFbaV0sIDAsIHNpemVvZihrZGJfc2JwYVtpXSkpOwogICAg
ICAgICAgICAgICAgY29udGludWU7CiAgICAgICAgICAgIH0KICAgICAgICAgICAga2RicCgiWyVk
XTogZG9taWQ6JWQgMHglbHggICAiLCBpLCAKICAgICAgICAgICAgICAgICBrZGJfc2JwYVtpXS5i
cF9kb21pZCwga2RiX3NicGFbaV0uYnBfYWRkcik7CiAgICAgICAgICAgIGtkYl9wcm50X2FkZHIy
c3ltKGtkYl9zYnBhW2ldLmJwX2RvbWlkLCBrZGJfc2JwYVtpXS5icF9hZGRyLCJcbiIpOwogICAg
ICAgICAgICBrZGJfcHJudF9icF9leHRyYShpKTsKICAgICAgICB9CiAgICByZXR1cm4gS0RCX0NQ
VV9NQUlOX0tEQjsKfQoKLyoKICogQ2hlY2sgaWYgYW55IGJyZWFrcG9pbnRzIHRoYXQgd2UgbmVl
ZCB0byBpbnN0YWxsIChkZWxheWVkIGluc3RhbGwpCiAqIFJldHVybnM6IDEgaWYgeWVzLCAwIGlm
IG5vbmUuCiAqLwppbnQKa2RiX3N3YnBfZXhpc3RzKHZvaWQpCnsKICAgIGludCBpOwogICAgZm9y
IChpPTA7IGkgPCBLREJNQVhTQlA7IGkrKykKICAgICAgICBpZiAoa2RiX3NicGFbaV0uYnBfYWRk
ciAmJiAha2RiX3NicGFbaV0uYnBfZGVsZXRlZCkKICAgICAgICAgICAgcmV0dXJuIDE7CiAgICBy
ZXR1cm4gMDsKfQovKgogKiBDaGVjayBpZiBhbnkgYnJlYWtwb2ludHMgd2VyZSBkZWxldGVkIHRo
aXMga2RiIHNlc3Npb24KICogUmV0dXJuczogMCBpZiBub25lLCAxIGlmIHllcwogKi8Kc3RhdGlj
IGludAprZGJfc3dicF9kZWxldGVkKHZvaWQpCnsKICAgIGludCBpOwogICAgZm9yIChpPTA7IGkg
PCBLREJNQVhTQlA7IGkrKykKICAgICAgICBpZiAoa2RiX3NicGFbaV0uYnBfYWRkciAmJiBrZGJf
c2JwYVtpXS5icF9kZWxldGVkKQogICAgICAgICAgICByZXR1cm4gMTsKICAgIHJldHVybiAwOwp9
CgovKgogKiBGbHVzaCBkZWxldGVkIHN3IGJyZWFrcG9pbnRzCiAqLwp2b2lkCmtkYl9mbHVzaF9z
d2JwX3RhYmxlKHZvaWQpCnsKICAgIGludCBpOwogICAgS0RCR1AoImNjcHU6JWQgZmx1c2hfc3di
cF90YWJsZTogZGVsZXRlZDoleFxuIiwgc21wX3Byb2Nlc3Nvcl9pZCgpLCAKICAgICAgICAgIGtk
Yl9zd2JwX2RlbGV0ZWQoKSk7CiAgICBmb3IoaT0wOyBpIDwgS0RCTUFYU0JQOyBpKyspCiAgICAg
ICAgaWYgKGtkYl9zYnBhW2ldLmJwX2FkZHIgJiYga2RiX3NicGFbaV0uYnBfZGVsZXRlZCkgewog
ICAgICAgICAgICBLREJHUCgiZmx1c2g6WyV4XSBhZGRyOjB4JWx4XG4iLGksa2RiX3NicGFbaV0u
YnBfYWRkcik7CiAgICAgICAgICAgIG1lbXNldCgma2RiX3NicGFbaV0sIDAsIHNpemVvZihrZGJf
c2JwYVtpXSkpOwogICAgICAgIH0KfQoKLyoKICogRGVsZXRlL0NsZWFyIGEgc3cgYnJlYWtwb2lu
dAogKi8Kc3RhdGljIGtkYl9jcHVfY21kX3QKa2RiX3VzZ2ZfYmModm9pZCkKewogICAga2RicCgi
YmMgJG51bXxhbGwgOiBjbGVhciBnaXZlbiBvciBhbGwgYnJlYWtwb2ludHNcbiIpOwogICAgcmV0
dXJuIEtEQl9DUFVfTUFJTl9LREI7Cn0Kc3RhdGljIGtkYl9jcHVfY21kX3QgCmtkYl9jbWRmX2Jj
KGludCBhcmdjLCBjb25zdCBjaGFyICoqYXJndiwgc3RydWN0IGNwdV91c2VyX3JlZ3MgKnJlZ3Mp
CnsKICAgIGludCBpLCBicG51bSA9IC0xLCBkZWxhbGwgPSAwOwogICAgY29uc3QgY2hhciAqYXJn
cDsKCiAgICBpZiAoYXJnYyAhPSAyIHx8ICphcmd2WzFdID09ICc/JykKICAgICAgICByZXR1cm4g
a2RiX3VzZ2ZfYmMoKTsKCiAgICBpZiAoIWtkYl9zd2JwX2V4aXN0cygpKSB7CiAgICAgICAga2Ri
cCgiTm8gYnJlYWtwb2ludHMgYXJlIHNldFxuIik7CiAgICAgICAgcmV0dXJuIEtEQl9DUFVfTUFJ
Tl9LREI7CiAgICB9CiAgICBhcmdwID0gYXJndlsxXTsKCiAgICBpZiAoIXN0cmNtcChhcmdwLCAi
YWxsIikpCiAgICAgICAgZGVsYWxsID0gMTsKICAgIGVsc2UgaWYgKCFrZGJfc3RyMmRlY2koYXJn
cCwgJmJwbnVtKSB8fCBicG51bSA8IDAgfHwgYnBudW0gPiBLREJNQVhTQlApIHsKICAgICAgICBr
ZGJwKCJJbnZhbGlkIGJwbnVtOiAlc1xuIiwgYXJncCk7CiAgICAgICAgcmV0dXJuIEtEQl9DUFVf
TUFJTl9LREI7CiAgICB9CiAgICBmb3IgKGk9MDsgaSA8IEtEQk1BWFNCUDsgaSsrKSB7CiAgICAg
ICAgaWYgKGRlbGFsbCAmJiBrZGJfc2JwYVtpXS5icF9hZGRyKSB7CiAgICAgICAgICAgIGtkYnAo
IkRlbGV0ZWQgYnJlYWtwb2ludCBbJXhdIGFkZHI6MHglbHggZG9taWQ6JWRcbiIsIAogICAgICAg
ICAgICAgICAgIChpbnQpaSwga2RiX3NicGFbaV0uYnBfYWRkciwga2RiX3NicGFbaV0uYnBfZG9t
aWQpOwogICAgICAgICAgICBpZiAoa2RiX3NicGFbaV0uYnBfanVzdF9hZGRlZCkKICAgICAgICAg
ICAgICAgIG1lbXNldCgma2RiX3NicGFbaV0sIDAsIHNpemVvZihrZGJfc2JwYVtpXSkpOwogICAg
ICAgICAgICBlbHNlCiAgICAgICAgICAgICAgICBrZGJfc2JwYVtpXS5icF9kZWxldGVkID0gMTsK
ICAgICAgICAgICAgY29udGludWU7CiAgICAgICAgfQogICAgICAgIGlmIChicG51bSAhPSAtMSAm
JiBicG51bSA9PSBpKSB7CiAgICAgICAgICAgIGtkYnAoIkRlbGV0ZWQgYnJlYWtwb2ludCBbJXhd
IGF0IDB4JWx4IGRvbWlkOiVkXG4iLCAKICAgICAgICAgICAgICAgICAoaW50KWksIGtkYl9zYnBh
W2ldLmJwX2FkZHIsIGtkYl9zYnBhW2ldLmJwX2RvbWlkKTsKICAgICAgICAgICAgaWYgKGtkYl9z
YnBhW2ldLmJwX2p1c3RfYWRkZWQpCiAgICAgICAgICAgICAgICBtZW1zZXQoJmtkYl9zYnBhW2ld
LCAwLCBzaXplb2Yoa2RiX3NicGFbaV0pKTsKICAgICAgICAgICAgZWxzZQogICAgICAgICAgICAg
ICAga2RiX3NicGFbaV0uYnBfZGVsZXRlZCA9IDE7CiAgICAgICAgICAgIGJyZWFrOwogICAgICAg
IH0KICAgIH0KICAgIGlmIChpID49IEtEQk1BWFNCUCAmJiAhZGVsYWxsKQogICAgICAgIGtkYnAo
IlVuYWJsZSB0byBkZWxldGUgYnJlYWtwb2ludDogJXNcbiIsIGFyZ3ApOwoKICAgIHJldHVybiBL
REJfQ1BVX01BSU5fS0RCOwp9CgovKgogKiBJbnN0YWxsIGEgYnJlYWtwb2ludCBpbiB0aGUgZ2l2
ZW4gYXJyYXkgZW50cnkKICogUmV0dXJuczogMCA6IGZhaWxlZCB0byBpbnN0YWxsCiAqICAgICAg
ICAgIDEgOiBpbnN0YWxsZWQgc3VjY2Vzc2Z1bGx5CiAqLwpzdGF0aWMgaW50CmtkYl9pbnN0YWxs
X3N3YnAoaW50IGlkeCkgICAgICAgICAgICAgICAgICAgLyogd2hpY2ggZW50cnkgaW4gdGhlIGJw
IGFycmF5ICovCnsKICAgIGtkYnZhX3QgYWRkciA9IGtkYl9zYnBhW2lkeF0uYnBfYWRkcjsKICAg
IGRvbWlkX3QgZG9taWQgPSBrZGJfc2JwYVtpZHhdLmJwX2RvbWlkOwogICAga2RiYnl0X3QgKnAg
PSAma2RiX3NicGFbaWR4XS5icF9vcmlnaW5zdDsKICAgIHN0cnVjdCBkb21haW4gKmRwID0ga2Ri
X2RvbWlkMnB0cihkb21pZCk7CgogICAgaWYgKGRwID09IE5VTEwgfHwgZHAtPmlzX2R5aW5nKSB7
CiAgICAgICAgbWVtc2V0KCZrZGJfc2JwYVtpZHhdLCAwLCBzaXplb2Yoa2RiX3NicGFbaWR4XSkp
OwogICAgICAgIGtkYnAoIlJlbW92ZWQgYnAgJWQgYWRkcjolcCBkb21pZDolZFxuIiwgaWR4LCBh
ZGRyLCBkb21pZCk7CiAgICAgICAgcmV0dXJuIDA7CiAgICB9CgogICAgaWYgKGtkYl9yZWFkX21l
bShhZGRyLCBwLCBLREJCUFNaLCBkb21pZCkgIT0gS0RCQlBTWil7CiAgICAgICAga2RicCgiRmFp
bGVkKFIpIHRvIGluc3RhbGwgYnA6JXggYXQ6MHglbHggZG9taWQ6JWRcbiIsCiAgICAgICAgICAg
ICBpZHgsIGtkYl9zYnBhW2lkeF0uYnBfYWRkciwgZG9taWQpOwogICAgICAgIHJldHVybiAwOwog
ICAgfQogICAgaWYgKGtkYl93cml0ZV9tZW0oYWRkciwgJmtkYl9icGluc3QsIEtEQkJQU1osIGRv
bWlkKSAhPSBLREJCUFNaKSB7CiAgICAgICAga2RicCgiRmFpbGVkKFcpIHRvIGluc3RhbGwgYnA6
JXggYXQ6MHglbHggZG9taWQ6JWRcbiIsCiAgICAgICAgICAgICBpZHgsIGtkYl9zYnBhW2lkeF0u
YnBfYWRkciwgZG9taWQpOwogICAgICAgIHJldHVybiAwOwogICAgfQogICAgS0RCR1AoImluc3Rh
bGxfc3dicDogaW5zdGFsbGVkIGJwOiV4IGF0OjB4JWx4IGNjcHU6JXggZG9taWQ6JWRcbiIsCiAg
ICAgICAgICBpZHgsIGtkYl9zYnBhW2lkeF0uYnBfYWRkciwgc21wX3Byb2Nlc3Nvcl9pZCgpLCBk
b21pZCk7CiAgICByZXR1cm4gMTsKfQoKLyoKICogSW5zdGFsbCBhbGwgdGhlIHNvZnR3YXJlIGJy
ZWFrcG9pbnRzCiAqLwp2b2lkCmtkYl9pbnN0YWxsX2FsbF9zd2JwKHZvaWQpCnsKICAgIGludCBp
OwogICAgZm9yKGk9MDsgaSA8IEtEQk1BWFNCUDsgaSsrKQogICAgICAgIGlmICgha2RiX3NicGFb
aV0uYnBfZGVsZXRlZCAmJiBrZGJfc2JwYVtpXS5icF9hZGRyKQogICAgICAgICAgICBrZGJfaW5z
dGFsbF9zd2JwKGkpOwp9CgpzdGF0aWMgdm9pZAprZGJfdW5pbnN0YWxsX2Ffc3dicChpbnQgaSkK
ewogICAga2RidmFfdCBhZGRyID0ga2RiX3NicGFbaV0uYnBfYWRkcjsKICAgIGtkYmJ5dF90IG9y
aWdpbnN0ID0ga2RiX3NicGFbaV0uYnBfb3JpZ2luc3Q7CiAgICBkb21pZF90IGlkID0ga2RiX3Ni
cGFbaV0uYnBfZG9taWQ7CgogICAga2RiX3NicGFbaV0uYnBfanVzdF9hZGRlZCA9IDA7CiAgICBp
ZiAoIWFkZHIpCiAgICAgICAgcmV0dXJuOwogICAgaWYgKGtkYl93cml0ZV9tZW0oYWRkciwgJm9y
aWdpbnN0LCBLREJCUFNaLCBpZCkgIT0gS0RCQlBTWikgewogICAgICAgIGtkYnAoIkZhaWxlZCB0
byB1bmluc3RhbGwgYnJlYWtwb2ludCAleCBhdDoweCVseCBkb21pZDolZFxuIiwKICAgICAgICAg
ICAgIGksIGtkYl9zYnBhW2ldLmJwX2FkZHIsIGlkKTsKICAgIH0KfQoKLyoKICogVW5pbnN0YWxs
IGFsbCB0aGUgc29mdHdhcmUgYnJlYWtwb2ludHMgYXQgYmVnaW5uaW5nIG9mIGtkYiBzZXNzaW9u
CiAqLwp2b2lkCmtkYl91bmluc3RhbGxfYWxsX3N3YnAodm9pZCkKewogICAgaW50IGk7CiAgICBm
b3IoaT0wOyBpIDwgS0RCTUFYU0JQOyBpKyspIAogICAgICAgIGtkYl91bmluc3RhbGxfYV9zd2Jw
KGkpOwogICAgS0RCR1AoImNjcHU6JWQgdW5pbnN0YWxsZWQgYWxsIGJwc1xuIiwgc21wX3Byb2Nl
c3Nvcl9pZCgpKTsKfQoKLyogUkVUVVJOUzogcmMgPT0gMjogY29uZGl0aW9uIHdhcyBub3QgbWV0
LCAgcmMgPT0gMzogY29uZGl0aW9uIHdhcyBtZXQgKi8Kc3RhdGljIGludAprZGJfY2hlY2tfYnBf
Y29uZGl0aW9uKGludCBicG51bSwgc3RydWN0IGNwdV91c2VyX3JlZ3MgKnJlZ3MsIGRvbWlkX3Qg
ZG9taWQpCnsKICAgIHVsb25nIHJlcyA9IDAsIGxoc3ZhbD0wOwogICAgc3RydWN0IGtkYl9icGNv
bmQgKmJwY3AgPSAma2RiX3NicGFbYnBudW1dLnUuYnBfY29uZDsKCiAgICBpZiAoYnBjcC0+YnBf
Y29uZF9zdGF0dXMgPT0gMSkgeyAgICAgICAgICAgICAvKiByZWdpc3RlciBjb25kaXRpb24gKi8K
ICAgICAgICB1aW50NjRfdCAqcnAgPSAodWludDY0X3QgKikoKGNoYXIgKilyZWdzICsgYnBjcC0+
YnBfY29uZF9saHMpOwogICAgICAgIGxoc3ZhbCA9ICpycDsKICAgIH0gZWxzZSBpZiAoYnBjcC0+
YnBfY29uZF9zdGF0dXMgPT0gMikgeyAgICAgIC8qIG1lbWFkZHIgY29uZGl0aW9uICovCiAgICAg
ICAgdWxvbmcgYWRkciA9IGJwY3AtPmJwX2NvbmRfbGhzOwogICAgICAgIGludCBudW0gPSBzaXpl
b2YobGhzdmFsKTsKCiAgICAgICAgaWYgKGtkYl9yZWFkX21lbShhZGRyLCAoa2RiYnl0X3QgKikm
bGhzdmFsLCBudW0sIGRvbWlkKSAhPSBudW0pIHsKICAgICAgICAgICAga2RicCgia2RiOiB1bmFi
bGUgdG8gcmVhZCAlZCBieXRlcyBhdCAlbHhcbiIsIG51bSwgYWRkcik7CiAgICAgICAgICAgIHJl
dHVybiAzOwogICAgICAgIH0KICAgIH0KICAgIGlmIChicGNwLT5icF9jb25kX3R5cGUgPT0gMSkg
ICAgICAgICAgICAgICAgIC8qIGxocyA9PSByaHMgKi8KICAgICAgICByZXMgPSAobGhzdmFsID09
IGJwY3AtPmJwX2NvbmRfcmhzKTsKICAgIGVsc2UgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgIC8qIGxocyAhPSByaHMgKi8KICAgICAgICByZXMgPSAobGhzdmFsICE9IGJw
Y3AtPmJwX2NvbmRfcmhzKTsKCiAgICBpZiAoIXJlcykKICAgICAgICBrZGJwKCJLREI6IFslZF1J
Z25vcmluZyBicDolZCBjb25kaXRpb24gbm90IG1ldC4gdmFsOiVseFxuIiwgCiAgICAgICAgICAg
ICAgc21wX3Byb2Nlc3Nvcl9pZCgpLCBicG51bSwgbGhzdmFsKTsgCgogICAgS0RCR1AxKCJicG51
bTolZCBkb21pZDolZCBjb25kOiAlZCAlZCAlbHggJWx4IHJlczolZFxuIiwgYnBudW0sIGRvbWlk
LCAKICAgICAgICAgICBicGNwLT5icF9jb25kX3N0YXR1cywgYnBjcC0+YnBfY29uZF90eXBlLCBi
cGNwLT5icF9jb25kX2xocywgCiAgICAgICAgICAgYnBjcC0+YnBfY29uZF9yaHMsIHJlcyk7Cgog
ICAgcmV0dXJuIChyZXMgPyAzIDogMik7Cn0KCnN0YXRpYyB2b2lkCmtkYl9wcm50X2J0cF9pbmZv
KGludCBicG51bSwgc3RydWN0IGNwdV91c2VyX3JlZ3MgKnJlZ3MsIGRvbWlkX3QgZG9taWQpCnsK
ICAgIHVsb25nIGksIGFyZywgdmFsLCBudW0sICpidHAgPSBrZGJfc2JwYVticG51bV0udS5icF9i
dHA7CgogICAga2RiX3BybnRfYWRkcjJzeW0oZG9taWQsIHJlZ3MtPktEQklQLCAiXG4iKTsKICAg
IG51bSA9IGtkYl9ndWVzdF9iaXRuZXNzKGRvbWlkKS84OwogICAgZm9yIChpPTA7IGkgPCBLREJf
TUFYQlRQICYmIChhcmc9YnRwW2ldKTsgaSsrKSB7CiAgICAgICAgaWYgKGFyZyA8IHNpemVvZiAo
c3RydWN0IGNwdV91c2VyX3JlZ3MpKSB7CiAgICAgICAgICAgIHVpbnQ2NF90ICpycCA9ICh1aW50
NjRfdCAqKSgoY2hhciAqKXJlZ3MgKyBhcmcpOwogICAgICAgICAgICBrZGJwKCIgJXM6ICUwMTZs
eCAiLCBrZGJfcmVnb2Zmc190b19uYW1lKGFyZyksICpycCk7CiAgICAgICAgfSBlbHNlIHsKICAg
ICAgICAgICAgaWYgKGtkYl9yZWFkX21lbShhcmcsIChrZGJieXRfdCAqKSZ2YWwsIG51bSwgZG9t
aWQpICE9IG51bSkKICAgICAgICAgICAgICAgIGtkYnAoImtkYjogdW5hYmxlIHRvIHJlYWQgJWQg
Ynl0ZXMgYXQgJWx4XG4iLCBudW0sIGFyZyk7CiAgICAgICAgICAgIGlmIChudW0gPT0gOCkKICAg
ICAgICAgICAgICAgIGtkYnAoIiAlMDE2bHg6JTAxNmx4ICIsIGFyZywgdmFsKTsKICAgICAgICAg
ICAgZWxzZQogICAgICAgICAgICAgICAga2RicCgiICUwOGx4OiUwOGx4ICIsIGFyZywgdmFsKTsK
ICAgICAgICB9CiAgICB9CiAgICBrZGJwKCJcbiIpOwogICAgS0RCR1AxKCJicG51bTolZCBkb21p
ZDolZCBidHA6JXAgbnVtOiVkXG4iLCBicG51bSwgZG9taWQsIGJ0cCwgbnVtKTsKfQoKLyoKICog
Q2hlY2sgaWYgdGhlIEJQIHRyYXAgYmVsb25ncyB0byB1cy4gCiAqIFJldHVybjogMCA6IG5vdCBv
bmUgb2Ygb3Vycy4gSVAgbm90IGNoYW5nZWQuIChsZWF2ZSBrZGIpCiAqICAgICAgICAgMSA6IG9u
ZSBvZiBvdXJzIGJ1dCBkZWxldGVkLiBJUCBkZWNyZW1lbnRlZC4gKGxlYXZlIGtkYikKICogICAg
ICAgICAyIDogb25lIG9mIG91cnMgYnV0IGNvbmRpdGlvbiBub3QgbWV0LCBvciBidHAuIElQIGRl
Y3JlbWVudGVkLihsZWF2ZSkKICogICAgICAgICAzIDogb25lIG9mIG91cnMgYW5kIGFjdGl2ZS4g
SVAgZGVjcmVtZW50ZWQuIChzdGF5IGluIGtkYikKICovCmludCAKa2RiX2NoZWNrX3N3X2JrcHRz
KHN0cnVjdCBjcHVfdXNlcl9yZWdzICpyZWdzKQp7CiAgICBpbnQgaSwgcmM9MDsKICAgIGRvbWlk
X3QgY3VyaWQ7CgogICAgY3VyaWQgPSBndWVzdF9tb2RlKHJlZ3MpID8gY3VycmVudC0+ZG9tYWlu
LT5kb21haW5faWQgOiBET01JRF9JRExFOwogICAgZm9yKGk9MDsgaSA8IEtEQk1BWFNCUDsgaSsr
KSB7CiAgICAgICAgaWYgKGtkYl9zYnBhW2ldLmJwX2RvbWlkID09IGN1cmlkICAmJiAKICAgICAg
ICAgICAga2RiX3NicGFbaV0uYnBfYWRkciA9PSAocmVncy0+S0RCSVAtIEtEQkJQU1opKSB7Cgog
ICAgICAgICAgICByZWdzLT5LREJJUCAtPSBLREJCUFNaOwogICAgICAgICAgICByYyA9IDM7Cgog
ICAgICAgICAgICBpZiAoa2RiX3NicGFbaV0uYnBfbmkpIHsKICAgICAgICAgICAgICAgIGtkYl91
bmluc3RhbGxfYV9zd2JwKGkpOwogICAgICAgICAgICAgICAgbWVtc2V0KCZrZGJfc2JwYVtpXSwg
MCwgc2l6ZW9mKGtkYl9zYnBhW2ldKSk7CiAgICAgICAgICAgIH0gZWxzZSBpZiAoa2RiX3NicGFb
aV0uYnBfZGVsZXRlZCkgewogICAgICAgICAgICAgICAgcmMgPSAxOwogICAgICAgICAgICB9IGVs
c2UgaWYgKGtkYl9zYnBhW2ldLmJwX3R5cGUgPT0gMSkgewogICAgICAgICAgICAgICAgcmMgPSBr
ZGJfY2hlY2tfYnBfY29uZGl0aW9uKGksIHJlZ3MsIGN1cmlkKTsKICAgICAgICAgICAgfSBlbHNl
IGlmIChrZGJfc2JwYVtpXS5icF90eXBlID09IDIpIHsKICAgICAgICAgICAgICAgIGtkYl9wcm50
X2J0cF9pbmZvKGksIHJlZ3MsIGN1cmlkKTsKICAgICAgICAgICAgICAgIHJjID0gMjsKICAgICAg
ICAgICAgfQogICAgICAgICAgICBLREJHUDEoImNjcHU6JWQgcmM6JWQgY3VyaWQ6JWQgZG9taWQ6
JWQgYWRkcjolbHhcbiIsIAogICAgICAgICAgICAgICAgICAgc21wX3Byb2Nlc3Nvcl9pZCgpLCBy
YywgY3VyaWQsIGtkYl9zYnBhW2ldLmJwX2RvbWlkLCAKICAgICAgICAgICAgICAgICAgIGtkYl9z
YnBhW2ldLmJwX2FkZHIpOwogICAgICAgICAgICBicmVhazsKICAgICAgICB9CiAgICB9CiAgICBy
ZXR1cm4gKHJjKTsKfQoKLyogRWc6IHI2ID09IDB4MTIzRURGICBvciAweEZGRkYyMDM0ICE9IDB4
REVBREJFRUYKICogcmVnb2ZmczogLTEgbWVhbnMgbGhzIGlzIG5vdCByZWcuIGVsc2Ugb2Zmc2V0
IG9mIHJlZyBpbiBjcHVfdXNlcl9yZWdzCiAqIGFkZHI6IG1lbW9yeSBsb2NhdGlvbiBpZiBsaHMg
aXMgbm90IHJlZ2lzdGVyLCBlZywgMHhGRkZGMjAzNAogKiBjb25kcCA6IHBvaW50cyB0byAhPSBv
ciA9PQogKiByaHN2YWwgOiByaWdodCBoYW5kIHNpZGUgdmFsdWUKICovCnN0YXRpYyB2b2lkCmtk
Yl9zZXRfYnBfY29uZChpbnQgYnBudW0sIGludCByZWdvZmZzLCB1bG9uZyBhZGRyLCBjaGFyICpj
b25kcCwgdWxvbmcgcmhzdmFsKQp7CiAgICBpZiAoYnBudW0gPj0gS0RCTUFYU0JQKSB7CiAgICAg
ICAga2RicCgiQlVHOiAlcyBnb3QgaW52YWxpZCBicG51bVxuIiwgX19GVU5DVElPTl9fKTsKICAg
ICAgICByZXR1cm47CiAgICB9CiAgICBpZiAocmVnb2ZmcyAhPSAtMSkgewogICAgICAgIGtkYl9z
YnBhW2JwbnVtXS51LmJwX2NvbmQuYnBfY29uZF9zdGF0dXMgPSAxOwogICAgICAgIGtkYl9zYnBh
W2JwbnVtXS51LmJwX2NvbmQuYnBfY29uZF9saHMgPSByZWdvZmZzOwogICAgfSBlbHNlIGlmIChh
ZGRyICE9IDApIHsKICAgICAgICBrZGJfc2JwYVticG51bV0udS5icF9jb25kLmJwX2NvbmRfc3Rh
dHVzID0gMjsKICAgICAgICBrZGJfc2JwYVticG51bV0udS5icF9jb25kLmJwX2NvbmRfbGhzID0g
YWRkcjsKICAgIH0gZWxzZSB7CiAgICAgICAga2RicCgiZXJyb3I6IGludmFsaWQgY2FsbCB0byBr
ZGJfc2V0X2JwX2NvbmRcbiIpOwogICAgICAgIHJldHVybjsKICAgIH0KICAgIGtkYl9zYnBhW2Jw
bnVtXS51LmJwX2NvbmQuYnBfY29uZF9yaHMgPSByaHN2YWw7CgogICAgaWYgKCpjb25kcCA9PSAn
IScpCiAgICAgICAga2RiX3NicGFbYnBudW1dLnUuYnBfY29uZC5icF9jb25kX3R5cGUgPSAyOwog
ICAgZWxzZQogICAgICAgIGtkYl9zYnBhW2JwbnVtXS51LmJwX2NvbmQuYnBfY29uZF90eXBlID0g
MTsKfQoKLyogaW5zdGFsbCBicmVha3B0IGF0IGdpdmVuIGFkZHIuIAogKiBuaTogYnAgZm9yIG5l
eHQgaW5zdHIgCiAqIGJ0cGE6IHB0ciB0byBhcmdzIGZvciBidHAgZm9yIHByaW50aW5nIHdoZW4g
YnAgaXMgaGl0CiAqIGxoc3AvY29uZHAvcmhzcDogcG9pbnQgdG8gc3RyaW5ncyBvZiBjb25kaXRp
b24KICoKICogUkVUVVJOUzogdGhlIGluZGV4IGluIGFycmF5IHdoZXJlIGluc3RhbGxlZC4gS0RC
TUFYU0JQIGlmIGVycm9yIAogKi8Kc3RhdGljIGludAprZGJfc2V0X2JwKGRvbWlkX3QgZG9taWQs
IGtkYnZhX3QgYWRkciwgaW50IG5pLCB1bG9uZyAqYnRwYSwgY2hhciAqbGhzcCwgCiAgICAgICAg
ICAgY2hhciAqY29uZHAsIGNoYXIgKnJoc3ApCnsKICAgIGludCBpLCBwcmVfZXhpc3RpbmcgPSAw
LCByZWdvZmZzID0gLTE7CiAgICB1bG9uZyBtZW1sb2M9MCwgcmhzdmFsPTAsIHRtcHVsOwoKICAg
IGlmIChidHBhICYmIChsaHNwIHx8IHJoc3AgfHwgY29uZHApKSB7CiAgICAgICAga2RicCgiaW50
ZXJuYWwgZXJyb3IuIGJ0cGEgYW5kIChsaHNwIHx8IHJoc3AgfHwgY29uZHApIHNldFxuIik7CiAg
ICAgICAgcmV0dXJuIEtEQk1BWFNCUDsKICAgIH0KICAgIGlmIChsaHNwICYmICgocmVnb2Zmcz1r
ZGJfdmFsaWRfcmVnKGxoc3ApKSA9PSAtMSkgICYmCiAgICAgICAga2RiX3N0cjJ1bG9uZyhsaHNw
LCAmbWVtbG9jKSAmJgogICAgICAgIGtkYl9yZWFkX21lbShtZW1sb2MsIChrZGJieXRfdCAqKSZ0
bXB1bCwgc2l6ZW9mKHRtcHVsKSwgZG9taWQpPT0wKSB7CgogICAgICAgIGtkYnAoImVycm9yOiBp
bnZhbGlkIGFyZ3VtZW50OiAlc1xuIiwgbGhzcCk7CiAgICAgICAgcmV0dXJuIEtEQk1BWFNCUDsK
ICAgIH0KICAgIGlmIChyaHNwICYmICEga2RiX3N0cjJ1bG9uZyhyaHNwLCAmcmhzdmFsKSkgewog
ICAgICAgIGtkYnAoImVycm9yOiBpbnZhbGlkIGFyZ3VtZW50OiAlc1xuIiwgcmhzcCk7CiAgICAg
ICAgcmV0dXJuIEtEQk1BWFNCUDsKICAgIH0KCiAgICAvKiBzZWUgaWYgYnAgYWxyZWFkeSBzZXQg
Ki8KICAgIGZvciAoaT0wOyBpIDwgS0RCTUFYU0JQOyBpKyspIHsKICAgICAgICBpZiAoa2RiX3Ni
cGFbaV0uYnBfYWRkcj09YWRkciAmJiBrZGJfc2JwYVtpXS5icF9kb21pZD09ZG9taWQpIHsKCiAg
ICAgICAgICAgIGlmIChrZGJfc2JwYVtpXS5icF9kZWxldGVkKSB7CiAgICAgICAgICAgICAgICAv
KiBqdXN0IHJlLXNldCB0aGlzIGJwIGFnYWluICovCiAgICAgICAgICAgICAgICBtZW1zZXQoJmtk
Yl9zYnBhW2ldLCAwLCBzaXplb2Yoa2RiX3NicGFbaV0pKTsKICAgICAgICAgICAgICAgIHByZV9l
eGlzdGluZyA9IDE7CiAgICAgICAgICAgIH0gZWxzZSB7CiAgICAgICAgICAgICAgICBrZGJwKCJC
cmVha3BvaW50IGFscmVhZHkgc2V0IFxuIik7CiAgICAgICAgICAgICAgICByZXR1cm4gS0RCTUFY
U0JQOwogICAgICAgICAgICB9CiAgICAgICAgfQogICAgfQogICAgLyogc2VlIGlmIGFueSByb29t
IGxlZnQgZm9yIGFub3RoZXIgYnJlYWtwb2ludCAqLwogICAgZm9yIChpPTA7IGkgPCBLREJNQVhT
QlA7IGkrKykKICAgICAgICBpZiAoIWtkYl9zYnBhW2ldLmJwX2FkZHIpCiAgICAgICAgICAgIGJy
ZWFrOwogICAgaWYgKGkgPj0gS0RCTUFYU0JQKSB7CiAgICAgICAga2RicCgiRVJST1I6IEJyZWFr
cG9pbnQgdGFibGUgZnVsbC4uLi5cbiIpOwogICAgICAgIHJldHVybiBpOwogICAgfQogICAga2Ri
X3NicGFbaV0uYnBfYWRkciA9IGFkZHI7CiAgICBrZGJfc2JwYVtpXS5icF9kb21pZCA9IGRvbWlk
OwogICAgaWYgKGJ0cGEpIHsKICAgICAgICBrZGJfc2JwYVtpXS5icF90eXBlID0gMjsKICAgICAg
ICBrZGJfc2JwYVtpXS51LmJwX2J0cCA9IGJ0cGE7CiAgICB9IGVsc2UgaWYgKHJlZ29mZnMgIT0g
LTEgfHwgbWVtbG9jKSB7CiAgICAgICAga2RiX3NicGFbaV0uYnBfdHlwZSA9IDE7CiAgICAgICAg
a2RiX3NldF9icF9jb25kKGksIHJlZ29mZnMsIG1lbWxvYywgY29uZHAsIHJoc3ZhbCk7CiAgICB9
IGVsc2UKICAgICAgICBrZGJfc2JwYVtpXS5icF90eXBlID0gMDsKCiAgICBpZiAoa2RiX2luc3Rh
bGxfc3dicChpKSkgeyAgICAgICAgICAgICAgICAgIC8qIG1ha2Ugc3VyZSBpdCBjYW4gYmUgZG9u
ZSAqLwogICAgICAgIGlmIChuaSkKICAgICAgICAgICAgcmV0dXJuIGk7CgogICAgICAgIGtkYl91
bmluc3RhbGxfYV9zd2JwKGkpOyAgICAgICAgICAgICAgICAvKiBkb250JyBzaG93IHVzZXIgSU5U
MyAqLwogICAgICAgIGlmICghcHJlX2V4aXN0aW5nKSAgICAgICAgICAgICAgIC8qIG1ha2Ugc3Vy
ZSBubyBpcyBjcHUgc2l0dGluZyBvbiBpdCAqLwogICAgICAgICAgICBrZGJfc2JwYVtpXS5icF9q
dXN0X2FkZGVkID0gMTsKCiAgICAgICAga2RicCgiYnAgJWQgc2V0IGZvciBkb21pZDolZCBhdDog
MHglbHggIiwgaSwga2RiX3NicGFbaV0uYnBfZG9taWQsIAogICAgICAgICAgICAga2RiX3NicGFb
aV0uYnBfYWRkcik7CiAgICAgICAga2RiX3BybnRfYWRkcjJzeW0oZG9taWQsIGFkZHIsICJcbiIp
OwogICAgICAgIGtkYl9wcm50X2JwX2V4dHJhKGkpOwogICAgfSBlbHNlIHsKICAgICAgICBrZGJw
KCJFUlJPUjpDYW4ndCBpbnN0YWxsIGJwOiAweCVseCBkb21pZDolZFxuIiwgYWRkciwgZG9taWQp
OwogICAgICAgIGlmIChwcmVfZXhpc3RpbmcpICAgICAvKiBpbiBjYXNlIGEgY3B1IGlzIHNpdHRp
bmcgb24gdGhpcyBicCBpbiB0cmFwcyAqLwogICAgICAgICAgICBrZGJfc2JwYVtpXS5icF9kZWxl
dGVkID0gMTsKICAgICAgICBlbHNlCiAgICAgICAgICAgIG1lbXNldCgma2RiX3NicGFbaV0sIDAs
IHNpemVvZihrZGJfc2JwYVtpXSkpOwogICAgICAgIHJldHVybiBLREJNQVhTQlA7CiAgICB9CiAg
ICAvKiBtYWtlIHN1cmUgc3dicCByZXBvcnRpbmcgaXMgZW5hYmxlZCBpbiB0aGUgdm1jYi92bWNz
ICovCiAgICBpZiAoaXNfaHZtX29yX2h5Yl9kb21haW4oa2RiX2RvbWlkMnB0cihkb21pZCkpKSB7
CiAgICAgICAgc3RydWN0IGRvbWFpbiAqZHAgPSBrZGJfZG9taWQycHRyKGRvbWlkKTsKICAgICAg
ICBkcC0+ZGVidWdnZXJfYXR0YWNoZWQgPSAxOyAgICAgICAgICAgICAgLyogc2VlIHN2bV9kb19y
ZXN1bWUvdm14X2RvXyAqLwogICAgICAgIEtEQkdQKCJkZWJ1Z2dlcl9hdHRhY2hlZCBzZXQuIGRv
bWlkOiVkXG4iLCBkb21pZCk7CiAgICB9CiAgICByZXR1cm4gaTsKfQoKLyogCiAqIFNldC9MaXN0
IFNvZnR3YXJlIEJyZWFrcG9pbnQvcwogKi8Kc3RhdGljIGtkYl9jcHVfY21kX3QKa2RiX3VzZ2Zf
YnAodm9pZCkKewogICAga2RicCgiYnAgW2FkZHJ8c3ltXVtkb21pZF1bY29uZGl0aW9uXTogZGlz
cGxheSBvciBzZXQgYSBicmVha3BvaW50XG4iKTsKICAgIGtkYnAoIiAgd2hlcmUgY29uZCBpcyBs
aWtlOiByNiA9PSAweDEyM0Ygb3IgcmF4ICE9IERFQURCRUVGIG9yIFxuIik7CiAgICBrZGJwKCIg
ICAgICAgZmZmZjgyYzQ4MDM4ZmU1OCA9PSAzMjFFIG9yIDB4ZmZmZjgyYzQ4MDM4ZmU1OCAhPSAw
XG4iKTsKICAgIGtkYnAoIiAgcmVnczogcmF4IHJieCByY3ggcmR4IHJzaSByZGkgcmJwIHJzcCBy
OCByOSIpOwogICAga2RicCgiIHIxMCByMTEgcjEyIHIxMyByMTQgcjE1IHJmbGFnc1xuIik7CiAg
ICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKfQpzdGF0aWMga2RiX2NwdV9jbWRfdCAKa2RiX2Nt
ZGZfYnAoaW50IGFyZ2MsIGNvbnN0IGNoYXIgKiphcmd2LCBzdHJ1Y3QgY3B1X3VzZXJfcmVncyAq
cmVncykKewogICAga2RidmFfdCBhZGRyOwogICAgaW50IGlkeCA9IC0xOwogICAgZG9taWRfdCBk
b21pZCA9IERPTUlEX0lETEU7CiAgICBjaGFyICpkb21pZHN0cnAsICpsaHNwPU5VTEwsICpjb25k
cD1OVUxMLCAqcmhzcD1OVUxMOwoKICAgIGlmICgoYXJnYyA+IDEgJiYgKmFyZ3ZbMV0gPT0gJz8n
KSB8fCBhcmdjID09IDQgfHwgYXJnYyA+IDYpCiAgICAgICAgcmV0dXJuIGtkYl91c2dmX2JwKCk7
CgogICAgaWYgKGFyZ2MgPCAyIHx8IGtkYl9zeXNfY3Jhc2gpICAgICAgICAgLyogbGlzdCBhbGwg
c2V0IGJyZWFrcG9pbnRzICovCiAgICAgICAgcmV0dXJuIGtkYl9kaXNwbGF5X3Nia3B0cygpOwoK
ICAgIC8qIHZhbGlkIGFyZ2MgZWl0aGVyOiAyIDMgNSBvciA2IAogICAgICogJ2JwIGlkbGVfbG9v
cCByNiA9PSAweGMwMDAnIE9SICdicCBpZGxlX2xvb3AgMyByOSAhPSAweGRlYWRiZWVmJyAqLwog
ICAgaWR4ID0gKGFyZ2MgPT0gNSkgPyAyIDogKChhcmdjID09IDYpID8gMyA6IGlkeCk7CiAgICBp
ZiAoYXJnYyA+PSA1ICkgewogICAgICAgIGxoc3AgPSAoY2hhciAqKWFyZ3ZbaWR4XTsKICAgICAg
ICBjb25kcCA9IChjaGFyICopYXJndltpZHgrMV07CiAgICAgICAgcmhzcCA9IChjaGFyICopYXJn
dltpZHgrMl07CgogICAgICAgIGlmICgha2RiX3N0cjJ1bG9uZyhyaHNwLCBOVUxMKSB8fCAqKGNv
bmRwKzEpICE9ICc9JyB8fCAKICAgICAgICAgICAgKCpjb25kcCAhPSAnPScgJiYgKmNvbmRwICE9
ICchJykpIHsKCiAgICAgICAgICAgIHJldHVybiBrZGJfdXNnZl9icCgpOwogICAgICAgIH0KICAg
IH0KICAgIGRvbWlkc3RycCA9IChhcmdjID09IDMgfHwgYXJnYyA9PSA2ICkgPyAoY2hhciAqKWFy
Z3ZbMl0gOiBOVUxMOwogICAgaWYgKGRvbWlkc3RycCAmJiAha2RiX3N0cjJkb21pZChkb21pZHN0
cnAsICZkb21pZCwgMSkpIHsKICAgICAgICByZXR1cm4ga2RiX3VzZ2ZfYnAoKTsKICAgIH0KICAg
IGlmIChhcmdjID4gMyAmJiBpc19odm1fb3JfaHliX2RvbWFpbihrZGJfZG9taWQycHRyKGRvbWlk
KSkpIHsKICAgICAgICBrZGJwKCJIVk0gZG9tYWluIG5vdCBzdXBwb3J0ZWQgeWV0IGZvciBjb25k
aXRpb25hbCBicFxuIik7CiAgICAgICAgcmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7CiAgICB9Cgog
ICAgaWYgKCFrZGJfc3RyMmFkZHIoYXJndlsxXSwgJmFkZHIsIGRvbWlkKSB8fCBhZGRyID09IDAp
IHsKICAgICAgICBrZGJwKCJJbnZhbGlkIGFyZ3VtZW50OiVzXG4iLCBhcmd2WzFdKTsKICAgICAg
ICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKICAgIH0KCiAgICAvKiBtYWtlIHN1cmUgeGVuIGFk
ZHIgaXMgaW4geGVuIHRleHQsIG90aGVyd2lzZSBicCBzZXQgaW4gNjRiaXQgZG9tMC9VICovCiAg
ICBpZiAoZG9taWQgPT0gRE9NSURfSURMRSAmJiAKICAgICAgICAoYWRkciA8IFhFTl9WSVJUX1NU
QVJUIHx8IGFkZHIgPiBYRU5fVklSVF9FTkQpKQogICAgewogICAgICAgIGtkYnAoImFkZHI6JWx4
IG5vdCBpbiAgeGVuIHRleHRcbiIsIGFkZHIpOwogICAgICAgIHJldHVybiBLREJfQ1BVX01BSU5f
S0RCOwogICAgfQogICAga2RiX3NldF9icChkb21pZCwgYWRkciwgMCwgTlVMTCwgbGhzcCwgY29u
ZHAsIHJoc3ApOyAgICAgLyogMCBpcyBuaSBmbGFnICovCiAgICByZXR1cm4gS0RCX0NQVV9NQUlO
X0tEQjsKfQoKCi8qIHRyYWNlIGJyZWFrcG9pbnQsIG1lYW5pbmcsIHVwb24gYnAgdHJhY2UvcHJp
bnQgc29tZSBpbmZvIGFuZCBjb250aW51ZSAqLwoKc3RhdGljIGtkYl9jcHVfY21kX3QKa2RiX3Vz
Z2ZfYnRwKHZvaWQpCnsKICAgIGtkYnAoImJ0cCBhZGRyfHN5bSBbZG9taWRdIHJlZ3xkb21pZC1t
ZW0tYWRkci4uLiA6IGJyZWFrcG9pbnQgdHJhY2VcbiIpOwogICAga2RicCgiICByZWdzOiByYXgg
cmJ4IHJjeCByZHggcnNpIHJkaSByYnAgcnNwIHI4IHI5ICIpOwogICAga2RicCgicjEwIHIxMSBy
MTIgcjEzIHIxNCByMTUgcmZsYWdzXG4iKTsKICAgIGtkYnAoIiAgRWcuIGJ0cCBpZGxlX2NwdSA3
IHJheCByYnggMHgyMGVmNWE1IHI5XG4iKTsKICAgIGtkYnAoIiAgICAgIHdpbGwgcHJpbnQgcmF4
LCByYngsICoobG9uZyAqKTB4MjBlZjVhNSwgcjkgYW5kIGNvbnRpbnVlXG4iKTsKICAgIHJldHVy
biBLREJfQ1BVX01BSU5fS0RCOwp9CnN0YXRpYyBrZGJfY3B1X2NtZF90IAprZGJfY21kZl9idHAo
aW50IGFyZ2MsIGNvbnN0IGNoYXIgKiphcmd2LCBzdHJ1Y3QgY3B1X3VzZXJfcmVncyAqcmVncykK
ewogICAgaW50IGksIGJ0cGlkeCwgbnVtcmQsIGFyZ3NpZHgsIHJlZ29mZnMgPSAtMTsKICAgIGtk
YnZhX3QgYWRkciwgbWVtbG9jPTA7CiAgICBkb21pZF90IGRvbWlkID0gRE9NSURfSURMRTsKICAg
IHVsb25nICpidHBhLCB0bXB1bDsKCiAgICBpZiAoKGFyZ2MgPiAxICYmICphcmd2WzFdID09ICc/
JykgfHwgYXJnYyA8IDMpCiAgICAgICAgcmV0dXJuIGtkYl91c2dmX2J0cCgpOwoKICAgIGFyZ3Np
ZHggPSAyOyAgICAgICAgICAgICAgICAgICAvKiBhc3N1bWUgM3JkIGFyZyBpcyBub3QgZG9taWQg
Ki8KICAgIGlmIChhcmdjID4gMyAmJiBrZGJfc3RyMmRvbWlkKGFyZ3ZbMl0sICZkb21pZCwgMCkp
IHsKCiAgICAgICAgaWYgKGlzX2h2bV9vcl9oeWJfZG9tYWluKGtkYl9kb21pZDJwdHIoZG9taWQp
KSkgewogICAgICAgICAgICBrZGJwKCJIVk0gZG9tYWlucyBhcmUgbm90IGN1cnJlbnRseSBzdXBw
cnRlZFxuIik7CiAgICAgICAgICAgIHJldHVybiBLREJfQ1BVX01BSU5fS0RCOwogICAgICAgIH0g
ZWxzZQogICAgICAgICAgICBhcmdzaWR4ID0gMzsgICAgICAgICAgICAgICAvKiAzcmQgYXJnIGlz
IGEgZG9taWQgKi8KICAgIH0KICAgIGlmICgha2RiX3N0cjJhZGRyKGFyZ3ZbMV0sICZhZGRyLCBk
b21pZCkgfHwgYWRkciA9PSAwKSB7CiAgICAgICAga2RicCgiSW52YWxpZCBhcmd1bWVudDolc1xu
IiwgYXJndlsxXSk7CiAgICAgICAgcmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7CiAgICB9CiAgICAv
KiBtYWtlIHN1cmUgeGVuIGFkZHIgaXMgaW4geGVuIHRleHQsIG90aGVyd2lzZSB3aWxsIHRyYWNl
IDY0Yml0IGRvbTAvVSAqLwogICAgaWYgKGRvbWlkID09IERPTUlEX0lETEUgJiYgCiAgICAgICAg
KGFkZHIgPCBYRU5fVklSVF9TVEFSVCB8fCBhZGRyID4gWEVOX1ZJUlRfRU5EKSkKICAgIHsKICAg
ICAgICBrZGJwKCJhZGRyOiVseCBub3QgaW4gIHhlbiB0ZXh0XG4iLCBhZGRyKTsKICAgICAgICBy
ZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKICAgIH0KCiAgICBudW1yZCA9IGtkYl9ndWVzdF9iaXRu
ZXNzKGRvbWlkKS84OwogICAgaWYgKGtkYl9yZWFkX21lbShhZGRyLCAoa2RiYnl0X3QgKikmdG1w
dWwsIG51bXJkLCBkb21pZCkgIT0gbnVtcmQpIHsKICAgICAgICBrZGJwKCJVbmFibGUgdG8gcmVh
ZCBtZW0gZnJvbSAlcyAoJWx4KVxuIiwgYXJndlsxXSwgYWRkcik7CiAgICAgICAgcmV0dXJuIEtE
Ql9DUFVfTUFJTl9LREI7CiAgICB9CgogICAgZm9yIChidHBpZHg9MDsgYnRwaWR4IDwgS0RCTUFY
U0JQICYmIGtkYl9idHBfYXBbYnRwaWR4XTsgYnRwaWR4KyspOwogICAgaWYgKGJ0cGlkeCA+PSBL
REJNQVhTQlApIHsKICAgICAgICBrZGJwKCJlcnJvcjogdGFibGUgZnVsbC4gZGVsZXRlIGZldyBi
cmVha3BvaW50c1xuIik7CiAgICAgICAgcmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7CiAgICB9CiAg
ICBidHBhID0ga2RiX2J0cF9hcmdzYVtidHBpZHhdOwogICAgbWVtc2V0KGJ0cGEsIDAsIHNpemVv
ZihrZGJfYnRwX2FyZ3NhWzBdKSk7CgogICAgZm9yIChpPTA7IGFyZ3ZbYXJnc2lkeF07IGkrKywg
YXJnc2lkeCsrKSB7CgogICAgICAgIGlmICgoKHJlZ29mZnM9a2RiX3ZhbGlkX3JlZyhhcmd2W2Fy
Z3NpZHhdKSkgPT0gLTEpICAmJgogICAgICAgICAgICBrZGJfc3RyMnVsb25nKGFyZ3ZbYXJnc2lk
eF0sICZtZW1sb2MpICYmCiAgICAgICAgICAgIChtZW1sb2MgPCBzaXplb2YgKHN0cnVjdCBjcHVf
dXNlcl9yZWdzKSB8fAogICAgICAgICAgICBrZGJfcmVhZF9tZW0obWVtbG9jLCAoa2RiYnl0X3Qg
KikmdG1wdWwsIHNpemVvZih0bXB1bCksIGRvbWlkKT09MCkpewoKICAgICAgICAgICAga2RicCgi
ZXJyb3I6IGludmFsaWQgYXJndW1lbnQ6ICVzXG4iLCBhcmd2W2FyZ3NpZHhdKTsKICAgICAgICAg
ICAgcmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7CiAgICAgICAgfQogICAgICAgIGlmIChpID49IEtE
Ql9NQVhCVFApIHsKICAgICAgICAgICAga2RicCgiZXJyb3I6IGNhbm5vdCBzcGVjaWZ5IG1vcmUg
dGhhbiAlZCBhcmdzXG4iLCBLREJfTUFYQlRQKTsKICAgICAgICAgICAgcmV0dXJuIEtEQl9DUFVf
TUFJTl9LREI7CiAgICAgICAgfQogICAgICAgIGJ0cGFbaV0gPSAocmVnb2ZmcyA9PSAtMSkgPyBt
ZW1sb2MgOiByZWdvZmZzOwogICAgfQoKICAgIGkgPSBrZGJfc2V0X2JwKGRvbWlkLCBhZGRyLCAw
LCBidHBhLCAwLCAwLCAwKTsgICAgIC8qIDAgaXMgbmkgZmxhZyAqLwogICAgaWYgKGkgPCBLREJN
QVhTQlApCiAgICAgICAga2RiX2J0cF9hcFtidHBpZHhdID0ga2RiX2J0cF9hcmdzYVtidHBpZHhd
OwoKICAgIHJldHVybiBLREJfQ1BVX01BSU5fS0RCOwp9CgovKiAKICogU2V0L0xpc3Qgd2F0Y2hw
b2ludHMsIGllLCBoYXJkd2FyZSBicmVha3BvaW50L3MsIGluIGh5cGVydmlzb3IKICogICBVc2Fn
ZTogd3AgW3N5bXxhZGRyXSBbd3xpXSAgIHcgPT0gd3JpdGUgb25seSBkYXRhIHdhdGNocG9pbnQK
ICogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGkgPT0gSU8gd2F0Y2hwb2ludCAocmVh
ZC93cml0ZSkKICoKICogICBFZzogIHdwICAgICAgICA6IGxpc3QgYWxsIHdhdGNocG9pbnRzIHNl
dAogKiAgICAgICAgd3AgYWRkciAgIDogc2V0IGEgcmVhZC93cml0ZSB3cCBhdCBnaXZlbiBhZGRy
CiAqICAgICAgICB3cCBhZGRyIHcgOiBzZXQgYSB3cml0ZSBvbmx5IHdwIGF0IGdpdmVuIGFkZHIK
ICogICAgICAgIHdwIGFkZHIgaSA6IHNldCBhbiBJTyB3cCBhdCBnaXZlbiBhZGRyICgxNmJpdHMg
cG9ydCAjKQogKgogKiAgVEJEOiBhbGxvdyB0byBiZSBzZXQgb24gcGFydGljdWxhciBjcHUKICov
CnN0YXRpYyBrZGJfY3B1X2NtZF90CmtkYl91c2dmX3dwKHZvaWQpCnsKICAgIGtkYnAoIndwIFth
ZGRyfHN5bV1bd3xpXTogZGlzcGxheSBvciBzZXQgd2F0Y2hwb2ludC4gd3JpdGVvbmx5IG9yIElP
XG4iKTsKICAgIGtkYnAoIlx0bm90ZTogd2F0Y2hwb2ludCBpcyB0cmlnZ2VyZWQgYWZ0ZXIgdGhl
IGluc3RydWN0aW9uIGV4ZWN1dGVzXG4iKTsKICAgIHJldHVybiBLREJfQ1BVX01BSU5fS0RCOwp9
CnN0YXRpYyBrZGJfY3B1X2NtZF90IAprZGJfY21kZl93cChpbnQgYXJnYywgY29uc3QgY2hhciAq
KmFyZ3YsIHN0cnVjdCBjcHVfdXNlcl9yZWdzICpyZWdzKQp7CiAgICBrZGJ2YV90IGFkZHI7CiAg
ICBkb21pZF90IGRvbWlkID0gRE9NSURfSURMRTsKICAgIGludCBydyA9IDMsIGxlbiA9IDQ7ICAg
ICAgIC8qIGZvciBub3cganVzdCBkZWZhdWx0IHRvIDQgYnl0ZXMgbGVuICovCgogICAgaWYgKGFy
Z2MgPiAxICYmICphcmd2WzFdID09ICc/JykKICAgICAgICByZXR1cm4ga2RiX3VzZ2Zfd3AoKTsK
CiAgICBpZiAoYXJnYyA8PSAxIHx8IGtkYl9zeXNfY3Jhc2gpIHsgICAgICAgLyogbGlzdCBhbGwg
c2V0IHdhdGNocG9pbnRzICovCiAgICAgICAga2RiX2RvX3dhdGNocG9pbnRzKDAsIDAsIDApOwog
ICAgICAgIHJldHVybiBLREJfQ1BVX01BSU5fS0RCOwogICAgfQogICAgaWYgKCFrZGJfc3RyMmFk
ZHIoYXJndlsxXSwgJmFkZHIsIGRvbWlkKSB8fCBhZGRyID09IDApIHsKICAgICAgICBrZGJwKCJJ
bnZhbGlkIGFyZ3VtZW50OiVzXG4iLCBhcmd2WzFdKTsKICAgICAgICByZXR1cm4gS0RCX0NQVV9N
QUlOX0tEQjsKICAgIH0KICAgIGlmIChhcmdjID4gMikgewogICAgICAgIGlmICghc3RyY21wKGFy
Z3ZbMl0sICJ3IikpCiAgICAgICAgICAgIHJ3ID0gMTsKICAgICAgICBlbHNlIGlmICghc3RyY21w
KGFyZ3ZbMl0sICJpIikpCiAgICAgICAgICAgIHJ3ID0gMjsKICAgICAgICBlbHNlIHsKICAgICAg
ICAgICAgcmV0dXJuIGtkYl91c2dmX3dwKCk7CiAgICAgICAgfQogICAgfQogICAga2RiX2RvX3dh
dGNocG9pbnRzKGFkZHIsIHJ3LCBsZW4pOwogICAgcmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7Cn0K
CnN0YXRpYyBrZGJfY3B1X2NtZF90CmtkYl91c2dmX3djKHZvaWQpCnsKICAgIGtkYnAoIndjICRu
dW18YWxsIDogY2xlYXIgZ2l2ZW4gb3IgYWxsIHdhdGNocG9pbnRzXG4iKTsKICAgIHJldHVybiBL
REJfQ1BVX01BSU5fS0RCOwp9CnN0YXRpYyBrZGJfY3B1X2NtZF90IAprZGJfY21kZl93YyhpbnQg
YXJnYywgY29uc3QgY2hhciAqKmFyZ3YsIHN0cnVjdCBjcHVfdXNlcl9yZWdzICpyZWdzKQp7CiAg
ICBjb25zdCBjaGFyICphcmdwOwogICAgaW50IHdwbnVtOyAgICAgICAgICAgICAgLyogd3AgbnVt
IHRvIGRlbGV0ZS4gLTEgZm9yIGFsbCAqLwoKICAgIGlmIChhcmdjICE9IDIgfHwgKmFyZ3ZbMV0g
PT0gJz8nKSAKICAgICAgICByZXR1cm4ga2RiX3VzZ2Zfd2MoKTsKCiAgICBhcmdwID0gYXJndlsx
XTsKCiAgICBpZiAoIXN0cmNtcChhcmdwLCAiYWxsIikpCiAgICAgICAgd3BudW0gPSAtMTsKICAg
IGVsc2UgaWYgKCFrZGJfc3RyMmRlY2koYXJncCwgJndwbnVtKSkgewogICAgICAgIGtkYnAoIklu
dmFsaWQgd3BudW06ICVzXG4iLCBhcmdwKTsKICAgICAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tE
QjsKICAgIH0KICAgIGtkYl9jbGVhcl93cHMod3BudW0pOwogICAgcmV0dXJuIEtEQl9DUFVfTUFJ
Tl9LREI7Cn0KCnN0YXRpYyB2b2lkCmtkYl9kaXNwbGF5X2h2bV92Y3B1KHN0cnVjdCB2Y3B1ICp2
cCkKewogICAgc3RydWN0IGh2bV92Y3B1ICpodnA7CiAgICBzdHJ1Y3QgdmxhcGljICp2bHA7CiAg
ICBzdHJ1Y3QgaHZtX2lvX29wICppb29wOwoKICAgIGh2cCA9ICZ2cC0+YXJjaC5odm1fdmNwdTsK
ICAgIHZscCA9ICZodnAtPnZsYXBpYzsKICAgIGtkYnAoInZjcHU6JWx4IGlkOiVkIGRvbWlkOiVk
XG4iLCB2cCwgdnAtPnZjcHVfaWQsIHZwLT5kb21haW4tPmRvbWFpbl9pZCk7CgojaWYgMAogICAg
aWYgKGlzX2h5YnJpZF92Y3B1KHZwKSkgewogICAgICAgIHN0cnVjdCBoeWJyaWRfZXh0ICpocCA9
ICZodnAtPmh2X2h5YnJpZDsKICAgICAgICBrZGJwKCIgICAgJmh5YnJpZF9leHQ6JXAgbGltaXQ6
JXggaW9wbDoleCB2Y3B1X2luZm9fbWZuOiVseFxuIiwKICAgICAgICAgICAgIGhwLCBocC0+aHli
X2lvYm1wX2xpbWl0LCBocC0+aHliX2lvcGwsIGhwLT5oeWJfdmNwdV9pbmZvX21mbik7CiAgICB9
CiNlbmRpZgoKICAgIGlvb3AgPSBOVUxMOyAgIC8qIGNvbXBpbGVyIHdhcm5pbmcgKi8KICAgIGtk
YnAoIiAgICAmaHZtX3ZjcHU6JWx4ICBndWVzdF9lZmVyOiJLREJGTCJcbiIsIGh2cCwgaHZwLT5n
dWVzdF9lZmVyKTsKICAgIGtkYnAoIiAgICAgIGd1ZXN0X2NyOiBbMF06IktEQkZMIiBbMV06IktE
QkZMIiBbMl06IktEQkZMIlxuIiwgCiAgICAgICAgIGh2cC0+Z3Vlc3RfY3JbMF0sIGh2cC0+Z3Vl
c3RfY3JbMV0saHZwLT5ndWVzdF9jclsyXSk7CiAgICBrZGJwKCIgICAgICAgICAgICAgICAgWzNd
OiJLREJGTCIgWzRdOiJLREJGTCJcbiIsIGh2cC0+Z3Vlc3RfY3JbM10sCiAgICAgICAgIGh2cC0+
Z3Vlc3RfY3JbNF0pOwogICAga2RicCgiICAgICAgaHdfY3I6IFswXToiS0RCRkwiIFsxXToiS0RC
RkwiIFsyXToiS0RCRkwiXG4iLCBodnAtPmh3X2NyWzBdLAogICAgICAgICBodnAtPmh3X2NyWzFd
LCBodnAtPmh3X2NyWzJdKTsKICAgIGtkYnAoIiAgICAgICAgICAgICAgWzNdOiJLREJGTCIgWzRd
OiJLREJGTCJcbiIsIGh2cC0+aHdfY3JbM10sIAogICAgICAgICBodnAtPmh3X2NyWzRdKTsKCiAg
ICBrZGJwKCIgICAgICBWTEFQSUM6IGJhc2UgbXNyOiJLREJGNjQiIGRpczoleCB0bXJkaXY6JXhc
biIsIAogICAgICAgICB2bHAtPmh3LmFwaWNfYmFzZV9tc3IsIHZscC0+aHcuZGlzYWJsZWQsIHZs
cC0+aHcudGltZXJfZGl2aXNvcik7CiAgICBrZGJwKCIgICAgICAgICAgcmVnczolcCByZWdzX3Bh
Z2U6JXBcbiIsIHZscC0+cmVncywgdmxwLT5yZWdzX3BhZ2UpOwogICAga2RicCgiICAgICAgICAg
IHBlcmlvZGljIHRpbWU6XG4iKTsgCiAgICBrZGJfcHJudF9wZXJpb2RpY190aW1lKCZ2bHAtPnB0
KTsKCiAgICBrZGJwKCIgICAgICB4ZW5fcG9ydDoleCBmbGFnX2RyX2RpcnR5OiV4IGRiZ19zdF9s
YXRjaDoleFxuIiwgaHZwLT54ZW5fcG9ydCwKICAgICAgICAgaHZwLT5mbGFnX2RyX2RpcnR5LCBo
dnAtPmRlYnVnX3N0YXRlX2xhdGNoKTsKCiAgICBpZiAoYm9vdF9jcHVfZGF0YS54ODZfdmVuZG9y
ID09IFg4Nl9WRU5ET1JfSU5URUwpIHsKCiAgICAgICAgc3RydWN0IGFyY2hfdm14X3N0cnVjdCAq
dnhwID0gJmh2cC0+dS52bXg7CiAgICAgICAga2RicCgiICAgICAgJnZteDogJXAgdm1jczolbHgg
YWN0aXZlX2NwdToleCBsYXVuY2hlZDoleFxuIiwgdnhwLCAKICAgICAgICAgICAgIHZ4cC0+dm1j
cywgdnhwLT5hY3RpdmVfY3B1LCB2eHAtPmxhdW5jaGVkKTsKI2lmIFhFTl9WRVJTSU9OICE9IDQg
ICAgICAgICAgICAgICAvKiB4ZW4gMy54LnggKi8KICAgICAgICBrZGJwKCIgICAgICAgIGV4ZWNf
Y3RybDoleCB2cGlkOiQlZFxuIiwgdnhwLT5leGVjX2NvbnRyb2wsIHZ4cC0+dnBpZCk7CiNlbmRp
ZgogICAgICAgIGtkYnAoIiAgICAgICAgaG9zdF9jcjA6ICJLREJGTCIgdm14OiB7cmVhbG06JXgg
ZW11bGF0ZToleH1cbiIsCiAgICAgICAgICAgICB2eHAtPmhvc3RfY3IwLCB2eHAtPnZteF9yZWFs
bW9kZSwgdnhwLT52bXhfZW11bGF0ZSk7CgojaWZkZWYgX194ODZfNjRfXwogICAgICAgIGtkYnAo
IiAgICAgICAgJm1zcl9zdGF0ZTolcCBleGNlcHRpb25fYml0bWFwOiVseFxuIiwgJnZ4cC0+bXNy
X3N0YXRlLAogICAgICAgICAgICAgdnhwLT5leGNlcHRpb25fYml0bWFwKTsKI2VuZGlmCiAgICB9
IGVsc2UgaWYgKGJvb3RfY3B1X2RhdGEueDg2X3ZlbmRvciA9PSBYODZfVkVORE9SX0FNRCkgewog
ICAgICAgIHN0cnVjdCBhcmNoX3N2bV9zdHJ1Y3QgKnN2cCA9ICZodnAtPnUuc3ZtOwojaWYgWEVO
X1ZFUlNJT04gIT0gNCAgICAgICAgICAgICAgIC8qIHhlbiAzLngueCAqLwogICAgICAgIGtkYnAo
IiAgJnN2bTogdm1jYjolbHggcGE6IktEQkY2NCIgYXNpZDoiS0RCRjY0IlxuIiwgc3ZwLCBzdnAt
PnZtY2IsCiAgICAgICAgICAgICBzdnAtPnZtY2JfcGEsIHN2cC0+YXNpZF9nZW5lcmF0aW9uKTsK
I2VuZGlmCiAgICAgICAga2RicCgiICAgIG1zcnBtOiVwIGxuY2hfY29yZToleCB2bWNiX3N5bmM6
JXhcbiIsIHN2cC0+bXNycG0sIAogICAgICAgICAgICAgc3ZwLT5sYXVuY2hfY29yZSwgc3ZwLT52
bWNiX2luX3N5bmMpOwogICAgfQogICAga2RicCgiICAgICAgY2FjaGVtb2RlOiV4IGlvOiB7c3Rh
dGU6ICV4IGRhdGE6ICJLREJGTCJ9XG4iLCBodnAtPmNhY2hlX21vZGUsCiAgICAgICAgIGh2cC0+
aHZtX2lvLmlvX3N0YXRlLCBodnAtPmh2bV9pby5pb19kYXRhKTsKICAgIGtkYnAoIiAgICAgIG1t
aW86IHtndmE6ICJLREJGTCIgZ3BmbjogIktEQkZMIn1cbiIsIGh2cC0+aHZtX2lvLm1taW9fZ3Zh
LAogICAgICAgICBodnAtPmh2bV9pby5tbWlvX2dwZm4pOwp9CgovKiBkaXNwbGF5IHN0cnVjdCBo
dm1fdmNwdXt9IGluIHN0cnVjdCB2Y3B1LmFyY2h7fSAqLwpzdGF0aWMga2RiX2NwdV9jbWRfdApr
ZGJfdXNnZl92Y3B1aCh2b2lkKQp7CiAgICBrZGJwKCJ2Y3B1aCB2Y3B1LXB0ciA6IGRpc3BsYXkg
aHZtX3ZjcHUgc3RydWN0XG4iKTsKICAgIHJldHVybiBLREJfQ1BVX01BSU5fS0RCOwp9CnN0YXRp
YyBrZGJfY3B1X2NtZF90IAprZGJfY21kZl92Y3B1aChpbnQgYXJnYywgY29uc3QgY2hhciAqKmFy
Z3YsIHN0cnVjdCBjcHVfdXNlcl9yZWdzICpyZWdzKQp7CiAgICBzdHJ1Y3QgdmNwdSAqdnA7Cgog
ICAgaWYgKGFyZ2MgPCAyIHx8ICphcmd2WzFdID09ICc/JykgCiAgICAgICAgcmV0dXJuIGtkYl91
c2dmX3ZjcHVoKCk7CgogICAgaWYgKCFrZGJfc3RyMnVsb25nKGFyZ3ZbMV0sICh1bG9uZyAqKSZ2
cCkgfHwgIWtkYl92Y3B1X3ZhbGlkKHZwKSB8fAogICAgICAgICFpc19odm1fb3JfaHliX3ZjcHUo
dnApKSB7CgogICAgICAgIGtkYnAoImtkYjogQmFkIFZDUFU6ICVzXG4iLCBhcmd2WzFdKTsKICAg
ICAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKICAgIH0KICAgIGtkYl9kaXNwbGF5X2h2bV92
Y3B1KHZwKTsKICAgIHJldHVybiBLREJfQ1BVX01BSU5fS0RCOwp9CgovKiBhbHNvIGxvb2sgaW50
byBhcmNoX2dldF9pbmZvX2d1ZXN0KCkgdG8gZ2V0IGNvbnRleHQgKi8Kc3RhdGljIHZvaWQKa2Ri
X3ByaW50X3VyZWdzKHN0cnVjdCBjcHVfdXNlcl9yZWdzICpyZWdzKQp7CiNpZmRlZiBfX3g4Nl82
NF9fCiAgICBrZGJwKCIgICAgICByZmxhZ3M6ICUwMTZseCAgIHJpcDogJTAxNmx4XG4iLCByZWdz
LT5yZmxhZ3MsIHJlZ3MtPnJpcCk7CiAgICBrZGJwKCIgICAgICAgICByYXg6ICUwMTZseCAgIHJi
eDogJTAxNmx4ICAgcmN4OiAlMDE2bHhcbiIsCiAgICAgICAgIHJlZ3MtPnJheCwgcmVncy0+cmJ4
LCByZWdzLT5yY3gpOwogICAga2RicCgiICAgICAgICAgcmR4OiAlMDE2bHggICByc2k6ICUwMTZs
eCAgIHJkaTogJTAxNmx4XG4iLAogICAgICAgICByZWdzLT5yZHgsIHJlZ3MtPnJzaSwgcmVncy0+
cmRpKTsKICAgIGtkYnAoIiAgICAgICAgIHJicDogJTAxNmx4ICAgcnNwOiAlMDE2bHggICAgcjg6
ICUwMTZseFxuIiwKICAgICAgICAgcmVncy0+cmJwLCByZWdzLT5yc3AsIHJlZ3MtPnI4KTsKICAg
IGtkYnAoIiAgICAgICAgICByOTogICUwMTZseCAgcjEwOiAlMDE2bHggICByMTE6ICUwMTZseFxu
IiwKICAgICAgICAgcmVncy0+cjksICByZWdzLT5yMTAsIHJlZ3MtPnIxMSk7CiAgICBrZGJwKCIg
ICAgICAgICByMTI6ICUwMTZseCAgIHIxMzogJTAxNmx4ICAgcjE0OiAlMDE2bHhcbiIsCiAgICAg
ICAgIHJlZ3MtPnIxMiwgcmVncy0+cjEzLCByZWdzLT5yMTQpOwogICAga2RicCgiICAgICAgICAg
cjE1OiAlMDE2bHhcbiIsIHJlZ3MtPnIxNSk7CiAgICBrZGJwKCIgICAgICBkczogJTA0eCAgIGVz
OiAlMDR4ICAgZnM6ICUwNHggICBnczogJTA0eCAgICIKICAgICAgICAgIiAgICAgIHNzOiAlMDR4
ICAgY3M6ICUwNHhcbiIsIHJlZ3MtPmRzLCByZWdzLT5lcywgcmVncy0+ZnMsCiAgICAgICAgIHJl
Z3MtPmdzLCByZWdzLT5zcywgcmVncy0+Y3MpOwogICAga2RicCgiICAgICAgZXJyY29kZTolMDhs
eCBlbnRyeXZlYzolMDhseCB1cGNhbGxfbWFzazolbHhcbiIsCiAgICAgICAgIHJlZ3MtPmVycm9y
X2NvZGUsIHJlZ3MtPmVudHJ5X3ZlY3RvciwgcmVncy0+c2F2ZWRfdXBjYWxsX21hc2spOwojZWxz
ZQogICAga2RicCgiICAgICAgZWZsYWdzOiAlMDE2bHggZWlwOiAwMTZseFxuIiwgcmVncy0+ZWZs
YWdzLCByZWdzLT5laXApOwogICAga2RicCgiICAgICAgZWF4OiAlMDh4ICAgZWJ4OiAlMDh4ICAg
ZWN4OiAlMDh4ICAgZWR4OiAlMDh4XG4iLAogICAgICAgICByZWdzLT5lYXgsIHJlZ3MtPmVieCwg
cmVncy0+ZWN4LCByZWdzLT5lZHgpOwogICAga2RicCgiICAgICAgZXNpOiAlMDh4ICAgZWRpOiAl
MDh4ICAgZWJwOiAlMDh4ICAgZXNwOiAlMDh4XG4iLAogICAgICAgICByZWdzLT5lc2ksIHJlZ3Mt
PmVkaSwgcmVncy0+ZWJwLCByZWdzLT5lc3ApOwogICAga2RicCgiICAgICAgZHM6ICUwNHggICBl
czogJTA0eCAgIGZzOiAlMDR4ICAgZ3M6ICUwNHggICAiCiAgICAgIiAgICAgIHNzOiAlMDR4ICAg
Y3M6ICUwNHhcbiIsIHJlZ3MtPmRzLCByZWdzLT5lcywgcmVncy0+ZnMsCiAgICAgICAgIHJlZ3Mt
PmdzLCByZWdzLT5zcywgcmVncy0+Y3MpOwogICAga2RicCgiICAgICAgZXJyY29kZTolMDRseCBl
bnRyeXZlYzolMDRseCB1cGNhbGxfbWFzazolbHhcbiIsIAogICAgICAgICByZWdzLT5lcnJvcl9j
b2RlLCByZWdzLT5lbnRyeV92ZWN0b3IsIHJlZ3MtPnNhdmVkX3VwY2FsbF9tYXNrKTsKI2VuZGlm
Cn0KCiNpZiBYRU5fU1VCVkVSU0lPTiA8IDMgICAgICAgICAgICAgLyogeGVuIDMuMS54IG9yIHhl
biAzLjIueCAqLwojaWZkZWYgQ09ORklHX0NPTVBBVAogICAgI3VuZGVmIHZjcHVfaW5mbwogICAg
I2RlZmluZSB2Y3B1X2luZm8odiwgZmllbGQpICAgICAgICAgICAgIFwKICAgICgqKCFoYXNfMzJi
aXRfc2hpbmZvKCh2KS0+ZG9tYWluKSA/ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgXAogICAgICAgKHR5cGVvZigmKHYpLT52Y3B1X2luZm8tPmNvbXBhdC5maWVsZCkpJih2
KS0+dmNwdV9pbmZvLT5uYXRpdmUuZmllbGQgOiBcCiAgICAgICAodHlwZW9mKCYodiktPnZjcHVf
aW5mby0+Y29tcGF0LmZpZWxkKSkmKHYpLT52Y3B1X2luZm8tPmNvbXBhdC5maWVsZCkpCgogICAg
I3VuZGVmIF9fc2hhcmVkX2luZm8KICAgICNkZWZpbmUgX19zaGFyZWRfaW5mbyhkLCBzLCBmaWVs
ZCkgICAgICAgICAgICAgICAgICAgICAgXAogICAgKCooIWhhc18zMmJpdF9zaGluZm8oZCkgPyAg
ICAgICAgICAgICAgICAgICAgICAgICAgIFwKICAgICAgICh0eXBlb2YoJihzKS0+Y29tcGF0LmZp
ZWxkKSkmKHMpLT5uYXRpdmUuZmllbGQgOiBcCiAgICAgICAodHlwZW9mKCYocyktPmNvbXBhdC5m
aWVsZCkpJihzKS0+Y29tcGF0LmZpZWxkKSkKI2VuZGlmCiNlbmRpZgoKc3RhdGljIHZvaWQga2Ri
X2Rpc3BsYXlfcHZfdmNwdShzdHJ1Y3QgdmNwdSAqdnApCnsKICAgIGludCBpOwogICAgc3RydWN0
IHB2X3ZjcHUgKmdwID0gJnZwLT5hcmNoLnB2X3ZjcHU7CgogICAga2RicCgiICAgICAgR0RUX1ZJ
UlRfU1RBUlQodmNwdSk6ICVseFxuIiwgR0RUX1ZJUlRfU1RBUlQodnApKTsKICAgIGtkYnAoIiAg
ICAgIEdEVDogZW50cmllczoweCVseCAgZnJhbWVzOlxuIiwgZ3AtPmdkdF9lbnRzKTsKICAgIGZv
ciAoaT0wOyBpIDwgMTY7IGk9aSs0KSAKICAgICAgICBrZGJwKCIgICAgICAgICAgJTAxNmx4ICUw
MTZseCAlMDE2bHggJTAxNmx4XG4iLCBncC0+Z2R0X2ZyYW1lc1tpXSwgCiAgICAgICAgICAgICBn
cC0+Z2R0X2ZyYW1lc1tpKzFdLCBncC0+Z2R0X2ZyYW1lc1tpKzJdLGdwLT5nZHRfZnJhbWVzW2kr
M10pOwogICAgCiAgICBrZGJwKCIgICAgICB0cmFwX2N0eHQ6JWx4IGtlcm5lbF9zczolbHgga2Vy
bmVsX3NwOiVseFxuIiwgZ3AtPnRyYXBfY3R4dCwKICAgICAgICAgZ3AtPmtlcm5lbF9zcywgZ3At
Pmtlcm5lbF9zcCk7CiAgICBrZGJwKCIgICAgICBjdHJscmVnczpcbiIpOwogICAgZm9yIChpPTA7
IGkgPCA4OyBpPWkrNCkKICAgICAgICBrZGJwKCIgICAgICAgICAgJTAxNmx4ICUwMTZseCAlMDE2
bHggJTAxNmx4XG4iLCBncC0+Y3RybHJlZ1tpXSwgCiAgICAgICAgICAgICBncC0+Y3RybHJlZ1tp
KzFdLCBncC0+Y3RybHJlZ1tpKzJdLCBncC0+Y3RybHJlZ1tpKzNdKTsKI2lmZGVmIF9feDg2XzY0
X18KICAgIGtkYnAoIiAgICAgIGNhbGxiYWNrOiAgIGV2ZW50OiAlMDE2bHggICBmYWlsc2FmZTog
JTAxNmx4XG4iLCAKICAgICAgICAgZ3AtPmV2ZW50X2NhbGxiYWNrX2VpcCwgZ3AtPmZhaWxzYWZl
X2NhbGxiYWNrX2VpcCk7CiAgICBrZGJwKCIgICAgICBiYXNlOiBmczoweCVseCBnc2tlcm46MHgl
bHggZ3N1c2VyOjB4JWx4XG4iLCAKICAgICAgICAgZ3AtPmZzX2Jhc2UsIGdwLT5nc19iYXNlX2tl
cm5lbCwgZ3AtPmdzX2Jhc2VfdXNlcik7CiNlbHNlCiAgICBrZGJwKCIgICAgICBjYWxsYmFjazog
ICBldmVudDogJTA4bHg6JTA4bHggICBmYWlsc2FmZTogJTA4bHg6JTA4bHhcbiIsIAogICAgICAg
ICBncC0+ZXZlbnRfY2FsbGJhY2tfY3MsIGdwLT5ldmVudF9jYWxsYmFja19laXAsIAogICAgICAg
ICBncC0+ZmFpbHNhZmVfY2FsbGJhY2tfY3MsIGdwLT5mYWlsc2FmZV9jYWxsYmFja19laXApOwoj
ZW5kaWYKICAgIGtkYnAoIiAgICB2Y3B1X2luZm9fbWZuOiAlbHggIGlvcGw6ICV4XG4iLCBncC0+
dmNwdV9pbmZvX21mbiwgZ3AtPmlvcGwpOwogICAga2RicCgiXG4iKTsKfQoKLyogRGlzcGxheSBv
bmUgVkNQVSBpbmZvICovCnN0YXRpYyB2b2lkCmtkYl9kaXNwbGF5X3ZjcHUoc3RydWN0IHZjcHUg
KnZwKQp7CiAgICBpbnQgaTsKICAgIHN0cnVjdCBhcmNoX3ZjcHUgKmF2cCA9ICZ2cC0+YXJjaDsK
ICAgIHN0cnVjdCBwYWdpbmdfdmNwdSAqcHZwID0gJnZwLT5hcmNoLnBhZ2luZzsKICAgIGludCBk
b21pZCA9IHZwLT5kb21haW4tPmRvbWFpbl9pZDsKCiAgICBrZGJwKCJcblZDUFU6ICB2Y3B1LWlk
OiVkICB2Y3B1LXB0cjolcCAiLCB2cC0+dmNwdV9pZCwgdnApOwogICAga2RicCgiICBwcm9jZXNz
b3I6JWQgZG9taWQ6JWQgIGRvbXA6JXBcbiIsIHZwLT5wcm9jZXNzb3IsIGRvbWlkLHZwLT5kb21h
aW4pOwoKICAgIGlmIChkb21pZCA9PSBET01JRF9JRExFKSB7CiAgICAgICAga2RicCgiICAgIElE
TEUgdmNwdS5cbiIpOwogICAgICAgIHJldHVybjsKICAgIH0KICAgIGtkYnAoIiAgcGF1c2U6IGZs
YWdzOjB4JTAxNmx4IGNvdW50OiV4XG4iLCB2cC0+cGF1c2VfZmxhZ3MsIAogICAgICAgICB2cC0+
cGF1c2VfY291bnQuY291bnRlcik7CiAgICBrZGJwKCIgIHZjcHU6IGluaXRkb25lOiVkIHJ1bm5p
bmc6JWRcbiIsIAogICAgICAgICB2cC0+aXNfaW5pdGlhbGlzZWQsIHZwLT5pc19ydW5uaW5nKTsK
ICAgIGtkYnAoIiAgbWNlcGVuZDolZCBubWlwZW5kOiVkIHNodXQ6IGRlZjolZCBwYXVzZWQ6JWRc
biIsIAogICAgICAgICB2cC0+bWNlX3BlbmRpbmcsICB2cC0+bm1pX3BlbmRpbmcsIHZwLT5kZWZl
cl9zaHV0ZG93biwgCiAgICAgICAgIHZwLT5wYXVzZWRfZm9yX3NodXRkb3duKTsKICAgIGtkYnAo
IiAgJnZjcHVfaW5mbzolcCA6IGV2dGNobl91cGNfcGVuZDoleCBfbWFzazoleFxuIiwKICAgICAg
ICAgdnAtPnZjcHVfaW5mbywgdmNwdV9pbmZvKHZwLCBldnRjaG5fdXBjYWxsX3BlbmRpbmcpLAog
ICAgICAgICB2Y3B1X2luZm8odnAsIGV2dGNobl91cGNhbGxfbWFzaykpOwogICAga2RicCgiICBl
dnRfcGVuZF9zZWw6JWx4IHBvbGxfZXZ0Y2huOiV4ICIsIAogICAgICAgICAqKHVuc2lnbmVkIGxv
bmcgKikmdmNwdV9pbmZvKHZwLCBldnRjaG5fcGVuZGluZ19zZWwpLCB2cC0+cG9sbF9ldnRjaG4p
OwogICAga2RiX3ByaW50X3NwaW5fbG9jaygidmlycV9sb2NrOiIsICZ2cC0+dmlycV9sb2NrLCAi
XG4iKTsKICAgIGZvciAoaT0wOyBpIDwgTlJfVklSUVM7IGkrKykKICAgICAgICBpZiAodnAtPnZp
cnFfdG9fZXZ0Y2huW2ldICE9IDApCiAgICAgICAgICAgIGtkYnAoIiAgICAgIHZpcnE6JCVkIHBv
cnQ6JCVkXG4iLCBpLCB2cC0+dmlycV90b19ldnRjaG5baV0pOwoKICAgIGtkYnAoIiAgbmV4dDol
cCBwZXJpb2RpYzogcGVyaW9kOjB4JWx4IGxhc3RfZXZlbnQ6MHglbHhcbiIsIAogICAgICAgICB2
cC0+bmV4dF9pbl9saXN0LCB2cC0+cGVyaW9kaWNfcGVyaW9kLCB2cC0+cGVyaW9kaWNfbGFzdF9l
dmVudCk7CiAgICBrZGJwKCIgIGNwdV9hZmZpbml0eToweCVseCB2Y3B1X2RpcnR5X2NwdW1hc2s6
JXAgc2NoZWRfcHJpdjoweCVwXG4iLAogICAgICAgICB2cC0+Y3B1X2FmZmluaXR5LCB2cC0+dmNw
dV9kaXJ0eV9jcHVtYXNrLCB2cC0+c2NoZWRfcHJpdik7CiAgICBrZGJwKCIgICZydW5zdGF0ZTog
JXAgc3RhdGU6ICV4IChlZy4gUlVOU1RBVEVfcnVubmluZykgZ3Vlc3RwdHI6JXBcbiIsIAogICAg
ICAgICAmdnAtPnJ1bnN0YXRlLCB2cC0+cnVuc3RhdGUuc3RhdGUsIHJ1bnN0YXRlX2d1ZXN0KHZw
KSk7CiAgICBrZGJwKCJcbiIpOwogICAga2RicCgiICBhcmNoIGluZm86ICglcClcbiIsICZ2cC0+
YXJjaCk7CiAgICBrZGJwKCIgICAgZ3Vlc3RfY29udGV4dDogVkdDRl8gZmxhZ3M6JWx4IiwgCiAg
ICAgICAgIHZwLT5hcmNoLnZnY19mbGFncyk7IC8qIFZHQ0ZfaW5fa2VybmVsICovCiAgICBpZiAo
aXNfaHZtX29yX2h5Yl92Y3B1KHZwKSkKICAgICAgICBrZGJwKCIgICAgKEhWTSBndWVzdDogSVAs
IFNQLCBFRkxBR1MgbWF5IGJlIHN0YWxlKSIpOwogICAga2RicCgiXG4iKTsKICAgIGtkYl9wcmlu
dF91cmVncygmdnAtPmFyY2gudXNlcl9yZWdzKTsKICAgIGtkYnAoIiAgICAgIGRlYnVncmVnczpc
biIpOwogICAgZm9yIChpPTA7IGkgPCA4OyBpPWkrNCkKICAgICAgICBrZGJwKCIgICAgICAgICAg
JTAxNmx4ICUwMTZseCAlMDE2bHggJTAxNmx4XG4iLCBhdnAtPmRlYnVncmVnW2ldLCAKICAgICAg
ICAgICAgIGF2cC0+ZGVidWdyZWdbaSsxXSwgYXZwLT5kZWJ1Z3JlZ1tpKzJdLCBhdnAtPmRlYnVn
cmVnW2krM10pOwoKICAgIGlmIChpc19odm1fb3JfaHliX3ZjcHUodnApKQogICAgICAgIGtkYl9k
aXNwbGF5X2h2bV92Y3B1KHZwKTsKICAgIGVsc2UKICAgICAgICBrZGJfZGlzcGxheV9wdl92Y3B1
KHZwKTsKCiAgICBrZGJwKCIgICAgVEZfZmxhZ3M6ICUwMTZseCAgZ3Vlc3RfdGFibGU6ICUwMTZs
eCBjcjM6JTAxNmx4XG4iLCAKICAgICAgICAgdnAtPmFyY2guZmxhZ3MsIHZwLT5hcmNoLmd1ZXN0
X3RhYmxlLnBmbiwgYXZwLT5jcjMpOyAKICAgIGtkYnAoIiAgICBwYWdpbmc6IFxuIik7CiAgICBr
ZGJwKCIgICAgICB2dGxiOiVwXG4iLCAmcHZwLT52dGxiKTsKICAgIGtkYnAoIiAgICAgICZwZ19t
b2RlOiVwIGdzdGxldmVsczolZCAmc2hhZG93OiVwIHNobGV2ZWxzOiVkXG4iLAogICAgICAgICBw
dnAtPm1vZGUsIHB2cC0+bW9kZS0+Z3Vlc3RfbGV2ZWxzLCAmcHZwLT5tb2RlLT5zaGFkb3csCiAg
ICAgICAgIHB2cC0+bW9kZS0+c2hhZG93LnNoYWRvd19sZXZlbHMpOwogICAga2RicCgiICAgICAg
c2hhZG93X3ZjcHU6XG4iKTsKICAgIGtkYnAoIiAgICAgICAgZ3Vlc3RfdnRhYmxlOiVwIGxhc3Qg
ZW1fbWZuOiJLREJGTCJcbiIsCiAgICAgICAgIHB2cC0+c2hhZG93Lmd1ZXN0X3Z0YWJsZSwgcHZw
LT5zaGFkb3cubGFzdF9lbXVsYXRlZF9tZm4pOwojaWYgQ09ORklHX1BBR0lOR19MRVZFTFMgPj0g
MwogICAga2RicCgiICAgICAgICAgbDN0Ymw6IDM6IktEQkZMIiAyOiJLREJGTCJcbiIKICAgICAg
ICAgIiAgICAgICAgICAgICAgICAxOiJLREJGTCIgMDoiS0RCRkwiXG4iLAogICAgIHB2cC0+c2hh
ZG93LmwzdGFibGVbM10ubDMsIHB2cC0+c2hhZG93LmwzdGFibGVbMl0ubDMsIAogICAgIHB2cC0+
c2hhZG93LmwzdGFibGVbMV0ubDMsIHB2cC0+c2hhZG93LmwzdGFibGVbMF0ubDMpOwogICAga2Ri
cCgiICAgICAgICBnbDN0Ymw6IDM6IktEQkZMIiAyOiJLREJGTCJcbiIKICAgICAgICAgIiAgICAg
ICAgICAgICAgICAxOiJLREJGTCIgMDoiS0RCRkwiXG4iLAogICAgIHB2cC0+c2hhZG93LmdsM2Vb
M10ubDMsIHB2cC0+c2hhZG93LmdsM2VbMl0ubDMsIAogICAgIHB2cC0+c2hhZG93LmdsM2VbMV0u
bDMsIHB2cC0+c2hhZG93LmdsM2VbMF0ubDMpOwojZW5kaWYKICAgIGtkYnAoIiAgZ2Ric3hfdmNw
dV9ldmVudDoleFxuIiwgdnAtPmFyY2guZ2Ric3hfdmNwdV9ldmVudCk7Cn0KCi8qIAogKiBGVU5D
VElPTjogRGlzcGFseSAoY3VycmVudCkgVkNQVS9zCiAqLwpzdGF0aWMga2RiX2NwdV9jbWRfdApr
ZGJfdXNnZl92Y3B1KHZvaWQpCnsKICAgIGtkYnAoInZjcHUgW3ZjcHUtcHRyXSA6IGRpc3BsYXkg
Y3VycmVudC92Y3B1LXB0ciB2Y3B1IGluZm9cbiIpOwogICAgcmV0dXJuIEtEQl9DUFVfTUFJTl9L
REI7Cn0Kc3RhdGljIGtkYl9jcHVfY21kX3QgCmtkYl9jbWRmX3ZjcHUoaW50IGFyZ2MsIGNvbnN0
IGNoYXIgKiphcmd2LCBzdHJ1Y3QgY3B1X3VzZXJfcmVncyAqcmVncykKewogICAgc3RydWN0IHZj
cHUgKnYgPSBjdXJyZW50OwoKICAgIGlmIChhcmdjID4gMiB8fCAoYXJnYyA+IDEgJiYgKmFyZ3Zb
MV0gPT0gJz8nKSkKICAgICAgICBrZGJfdXNnZl92Y3B1KCk7CiAgICBlbHNlIGlmIChhcmdjIDw9
IDEpCiAgICAgICAga2RiX2Rpc3BsYXlfdmNwdSh2KTsKICAgIGVsc2UgaWYgKGtkYl9zdHIydWxv
bmcoYXJndlsxXSwgKHVsb25nICopJnYpICYmIGtkYl92Y3B1X3ZhbGlkKHYpKQogICAgICAgIGtk
Yl9kaXNwbGF5X3ZjcHUodik7CiAgICBlbHNlIAogICAgICAgIGtkYnAoIkludmFsaWQgdXNhZ2Uv
YXJndW1lbnQ6JXMgdjolbHhcbiIsIGFyZ3ZbMV0sIChsb25nKXYpOwogICAgcmV0dXJuIEtEQl9D
UFVfTUFJTl9LREI7Cn0KCi8qIGZyb20gcGFnaW5nX2R1bXBfZG9tYWluX2luZm8oKSAqLwpzdGF0
aWMgdm9pZCBrZGJfcHJfZG9tX3BnX21vZGVzKHN0cnVjdCBkb21haW4gKmQpCnsKICAgIGlmIChw
YWdpbmdfbW9kZV9lbmFibGVkKGQpKSB7CiAgICAgICAga2RicCgiIHBhZ2luZyBtb2RlIGVuYWJs
ZWQiKTsKICAgICAgICBpZiAoIHBhZ2luZ19tb2RlX3NoYWRvdyhkKSApCiAgICAgICAgICAgIGtk
YnAoIiBzaGFkb3coUEdfU0hfZW5hYmxlKSIpOwogICAgICAgIGlmICggcGFnaW5nX21vZGVfaGFw
KGQpICkKICAgICAgICAgICAga2RicCgiIGhhcChQR19IQVBfZW5hYmxlKSAiKTsKICAgICAgICBp
ZiAoIHBhZ2luZ19tb2RlX3JlZmNvdW50cyhkKSApCiAgICAgICAgICAgIGtkYnAoIiByZWZjb3Vu
dHMoUEdfcmVmY291bnRzKSAiKTsKICAgICAgICBpZiAoIHBhZ2luZ19tb2RlX2xvZ19kaXJ0eShk
KSApCiAgICAgICAgICAgIGtkYnAoIiBsb2dfZGlydHkoUEdfbG9nX2RpcnR5KSAiKTsKICAgICAg
ICBpZiAoIHBhZ2luZ19tb2RlX3RyYW5zbGF0ZShkKSApCiAgICAgICAgICAgIGtkYnAoIiB0cmFu
c2xhdGUoUEdfdHJhbnNsYXRlKSAiKTsKICAgICAgICBpZiAoIHBhZ2luZ19tb2RlX2V4dGVybmFs
KGQpICkKICAgICAgICAgICAga2RicCgiIGV4dGVybmFsKFBHX2V4dGVybmFsKSAiKTsKICAgIH0g
ZWxzZQogICAgICAgIGtkYnAoIiBkaXNhYmxlZCIpOwogICAga2RicCgiXG4iKTsKfQoKLyogcHJp
bnQgZXZlbnQgY2hhbm5lbHMgaW5mbyBmb3IgYSBnaXZlbiBkb21haW4gCiAqIE5PVEU6IHZlcnkg
Y29uZnVzaW5nLCBwb3J0IGFuZCBldmVudCBjaGFubmVsIHJlZmVyIHRvIHRoZSBzYW1lIHRoaW5n
LiBldnRjaG4KICogaXMgYXJyeSBvZiBwb2ludGVycyB0byBhIGJ1Y2tldCBvZiBwb2ludGVycyB0
byAxMjggc3RydWN0IGV2dGNobnt9LiB3aGlsZQogKiA2NGJpdCB4ZW4gY2FuIGhhbmRsZSA0MDk2
IG1heCBjaGFubmVscywgYSAzMmJpdCBndWVzdCBpcyBsaW1pdGVkIHRvIDEwMjQgKi8Kc3RhdGlj
IHZvaWQgbm9pbmxpbmUga2RiX3ByaW50X2RvbV9ldmVudGluZm8oc3RydWN0IGRvbWFpbiAqZHAp
CnsKICAgIHVpbnQgY2huOwoKICAgIGtkYnAoIlxuIik7CiAgICBrZGJwKCIgIEV2dDogTUFYX0VW
VENITlM6JCVkIHB0cjolcCBwb2xsbXNrOiUwOGx4ICIsCiAgICAgICAgIE1BWF9FVlRDSE5TKGRw
KSwgZHAtPmV2dGNobiwgZHAtPnBvbGxfbWFza1swXSk7CiAgICBrZGJfcHJpbnRfc3Bpbl9sb2Nr
KCJsazoiLCAmZHAtPmV2ZW50X2xvY2ssICJcbiIpOwogICAga2RicCgiICAgICZldnRjaG5fcGVu
ZGluZzolcCAmZXZ0Y2huX21hc2s6JXBcbiIsIAogICAgICAgICBzaGFyZWRfaW5mbyhkcCwgZXZ0
Y2huX3BlbmRpbmcpLCBzaGFyZWRfaW5mbyhkcCwgZXZ0Y2huX21hc2spKTsKCiAgICBrZGJwKCIg
ICBDaGFubmVscyBpbmZvOiAoZXZlcnl0aGluZyBpcyBpbiBkZWNpbWFsKTpcbiIpOwogICAgZm9y
IChjaG49MDsgY2huIDwgTUFYX0VWVENITlMoZHApOyBjaG4rKyApIHsKICAgICAgICBzdHJ1Y3Qg
ZXZ0Y2huICpia3RwID0gZHAtPmV2dGNobltjaG4vRVZUQ0hOU19QRVJfQlVDS0VUXTsKICAgICAg
ICBzdHJ1Y3QgZXZ0Y2huICpjaG5wID0gJmJrdHBbY2huICYgKEVWVENITlNfUEVSX0JVQ0tFVC0x
KV07CiAgICAgICAgY2hhciBwYml0ID0gdGVzdF9iaXQoY2huLCAmc2hhcmVkX2luZm8oZHAsIGV2
dGNobl9wZW5kaW5nKSkgPyAnWScgOiAnTic7CiAgICAgICAgY2hhciBtYml0ID0gdGVzdF9iaXQo
Y2huLCAmc2hhcmVkX2luZm8oZHAsIGV2dGNobl9tYXNrKSkgPyAnWScgOiAnTic7CgogICAgICAg
IGlmIChia3RwPT1OVUxMIHx8IGNobnAtPnN0YXRlPT1FQ1NfRlJFRSkKICAgICAgICAgICAgY29u
dGludWU7CgogICAgICAgIGtkYnAoIiAgICBjaG46JTR1IHN0OiVkIF94ZW49JWQgX3ZjcHVfaWQ6
JTJkICIsIGNobiwgY2hucC0+c3RhdGUsCiAgICAgICAgICAgICBjaG5wLT54ZW5fY29uc3VtZXIs
IGNobnAtPm5vdGlmeV92Y3B1X2lkKTsKICAgICAgICBpZiAoY2hucC0+c3RhdGUgPT0gRUNTX1VO
Qk9VTkQpCiAgICAgICAgICAgIGtkYnAoIiByZW0tZG9taWQ6JWQiLCBjaG5wLT51LnVuYm91bmQu
cmVtb3RlX2RvbWlkKTsKICAgICAgICBlbHNlIGlmIChjaG5wLT5zdGF0ZSA9PSBFQ1NfSU5URVJE
T01BSU4pCiAgICAgICAgICAgIGtkYnAoIiByZW0tcG9ydDolZCByZW0tZG9tOiVkIiwgY2hucC0+
dS5pbnRlcmRvbWFpbi5yZW1vdGVfcG9ydCwKICAgICAgICAgICAgICAgICBjaG5wLT51LmludGVy
ZG9tYWluLnJlbW90ZV9kb20tPmRvbWFpbl9pZCk7CiAgICAgICAgZWxzZSBpZiAoY2hucC0+c3Rh
dGUgPT0gRUNTX1BJUlEpCiAgICAgICAgICAgIGtkYnAoIiBwaXJxOiVkIiwgY2hucC0+dS5waXJx
KTsKICAgICAgICBlbHNlIGlmIChjaG5wLT5zdGF0ZSA9PSBFQ1NfVklSUSkKICAgICAgICAgICAg
a2RicCgiIHZpcnE6JWQiLCBjaG5wLT51LnZpcnEpOwoKICAgICAgICBrZGJwKCIgIHBlbmQ6JWMg
bWFzazolY1xuIiwgcGJpdCwgbWJpdCk7CiAgICB9CiNpZiAwCiAgICBrZGJwKCJwaXJxIHRvIGV2
dGNobiBtYXBwaW5nIChwaXJxOmV2dGNobikgKGFsbCBkZWNpbWFsKTpcbiIpOwogICAgZm9yIChp
PTA7IGkgPCBkcC0+bnJfcGlycXM7IGkgKyspCiAgICAgICAgaWYgKGRwLT5waXJxX3RvX2V2dGNo
bltpXSkKICAgICAgICAgICAga2RicCgiKCVkOiVkKSAiLCBpLCBkcC0+cGlycV90b19ldnRjaG5b
aV0pOwogICAga2RicCgiXG4iKTsKI2VuZGlmCn0KCnN0YXRpYyB2b2lkIGtkYl9wcm50X2h2bV9k
b21faW5mbyhzdHJ1Y3QgZG9tYWluICpkcCkKewogICAgc3RydWN0IGh2bV9kb21haW4gKmh2cCA9
ICZkcC0+YXJjaC5odm1fZG9tYWluOwoKICAgIGtkYnAoIiAgICBIVk0gaW5mbzogSGFwIGlzJXMg
ZW5hYmxlZFxuIiwgCiAgICAgICAgIGRwLT5hcmNoLmh2bV9kb21haW4uaGFwX2VuYWJsZWQgPyAi
IiA6ICIgbm90Iik7CgogICAgaWYgKGJvb3RfY3B1X2RhdGEueDg2X3ZlbmRvciA9PSBYODZfVkVO
RE9SX0lOVEVMKSB7CiAgICAgICAgc3RydWN0IHZteF9kb21haW4gKnZkcCA9ICZkcC0+YXJjaC5o
dm1fZG9tYWluLnZteDsKICAgICAgICBrZGJwKCIgICAgRVBUOiBlcHRfbXQ6JXggZXB0X3dsOiV4
IGFzcjolMDEzbHhcbiIsIAogICAgICAgICAgICAgdmRwLT5lcHRfY29udHJvbC5lcHRfbXQsIHZk
cC0+ZXB0X2NvbnRyb2wuZXB0X3dsLCAKICAgICAgICAgICAgIHZkcC0+ZXB0X2NvbnRyb2wuYXNy
KTsKICAgIH0KICAgIGlmIChodnAgPT0gTlVMTCkKICAgICAgICByZXR1cm47CgogICAgaWYgKGh2
cC0+aXJxLmNhbGxiYWNrX3ZpYV90eXBlID09IEhWTUlSUV9jYWxsYmFja192ZWN0b3IpCiAgICAg
ICAga2RicCgiICAgIEhWTUlSUV9jYWxsYmFja192ZWN0b3I6ICV4XG4iLCBodnAtPmlycS5jYWxs
YmFja192aWEudmVjdG9yKTsKCiAgICBpZiAoIWlzX2h2bV9kb21haW4oZHApKQogICAgICAgIHJl
dHVybjsKCiAgICBrZGJwKCIgICAgSFZNIFBBUkFNUyAoYWxsIGluIGhleCk6XG4iKTsKICAgIGtk
YnAoIlx0aW9yZXEucGFnZTolbHggaW9yZXEudmE6JWx4XG4iLCBodnAtPmlvcmVxLnBhZ2UsIGh2
cC0+aW9yZXEudmEpOwogICAga2RicCgiXHRidWZfaW9yZXEucGFnZTolbHggaW9yZXEudmE6JWx4
XG4iLCBodnAtPmJ1Zl9pb3JlcS5wYWdlLCAKICAgICAgICAgaHZwLT5idWZfaW9yZXEudmEpOwog
ICAga2RicCgiXHRIVk1fUEFSQU1fQ0FMTEJBQ0tfSVJROiAleFxuIiwgaHZwLT5wYXJhbXNbSFZN
X1BBUkFNX0NBTExCQUNLX0lSUV0pOwogICAga2RicCgiXHRIVk1fUEFSQU1fU1RPUkVfUEZOOiAl
eFxuIiwgaHZwLT5wYXJhbXNbSFZNX1BBUkFNX1NUT1JFX1BGTl0pOwogICAga2RicCgiXHRIVk1f
UEFSQU1fU1RPUkVfRVZUQ0hOOiAleFxuIiwgaHZwLT5wYXJhbXNbSFZNX1BBUkFNX1NUT1JFX0VW
VENITl0pOwogICAga2RicCgiXHRIVk1fUEFSQU1fUEFFX0VOQUJMRUQ6ICV4XG4iLCBodnAtPnBh
cmFtc1tIVk1fUEFSQU1fUEFFX0VOQUJMRURdKTsKICAgIGtkYnAoIlx0SFZNX1BBUkFNX0lPUkVR
X1BGTjogJXhcbiIsIGh2cC0+cGFyYW1zW0hWTV9QQVJBTV9JT1JFUV9QRk5dKTsKICAgIGtkYnAo
Ilx0SFZNX1BBUkFNX0JVRklPUkVRX1BGTjogJXhcbiIsIGh2cC0+cGFyYW1zW0hWTV9QQVJBTV9C
VUZJT1JFUV9QRk5dKTsKICAgIGtkYnAoIlx0SFZNX1BBUkFNX1ZJUklESUFOOiAleFxuIiwgaHZw
LT5wYXJhbXNbSFZNX1BBUkFNX1ZJUklESUFOXSk7CiAgICBrZGJwKCJcdEhWTV9QQVJBTV9USU1F
Ul9NT0RFOiAleFxuIiwgaHZwLT5wYXJhbXNbSFZNX1BBUkFNX1RJTUVSX01PREVdKTsKICAgIGtk
YnAoIlx0SFZNX1BBUkFNX0hQRVRfRU5BQkxFRDogJXhcbiIsIGh2cC0+cGFyYW1zW0hWTV9QQVJB
TV9IUEVUX0VOQUJMRURdKTsKICAgIGtkYnAoIlx0SFZNX1BBUkFNX0lERU5UX1BUOiAleFxuIiwg
aHZwLT5wYXJhbXNbSFZNX1BBUkFNX0lERU5UX1BUXSk7CiAgICBrZGJwKCJcdEhWTV9QQVJBTV9E
TV9ET01BSU46ICV4XG4iLCBodnAtPnBhcmFtc1tIVk1fUEFSQU1fRE1fRE9NQUlOXSk7CiAgICBr
ZGJwKCJcdEhWTV9QQVJBTV9BQ1BJX1NfU1RBVEU6ICV4XG4iLCBodnAtPnBhcmFtc1tIVk1fUEFS
QU1fQUNQSV9TX1NUQVRFXSk7CiAgICBrZGJwKCJcdEhWTV9QQVJBTV9WTTg2X1RTUzogJXhcbiIs
IGh2cC0+cGFyYW1zW0hWTV9QQVJBTV9WTTg2X1RTU10pOwogICAga2RicCgiXHRIVk1fUEFSQU1f
VlBUX0FMSUdOOiAleFxuIiwgaHZwLT5wYXJhbXNbSFZNX1BBUkFNX1ZQVF9BTElHTl0pOwogICAg
a2RicCgiXHRIVk1fUEFSQU1fQ09OU09MRV9QRk46ICV4XG4iLCBodnAtPnBhcmFtc1tIVk1fUEFS
QU1fQ09OU09MRV9QRk5dKTsKICAgIGtkYnAoIlx0SFZNX1BBUkFNX0NPTlNPTEVfRVZUQ0hOOiAl
eFxuIiwgCiAgICAgICAgIGh2cC0+cGFyYW1zW0hWTV9QQVJBTV9DT05TT0xFX0VWVENITl0pOwog
ICAga2RicCgiXHRIVk1fUEFSQU1fQUNQSV9JT1BPUlRTX0xPQ0FUSU9OOiAleFxuIiwgCiAgICAg
ICAgIGh2cC0+cGFyYW1zW0hWTV9QQVJBTV9BQ1BJX0lPUE9SVFNfTE9DQVRJT05dKTsKICAgIGtk
YnAoIlx0SFZNX1BBUkFNX01FTU9SWV9FVkVOVF9TSU5HTEVfU1RFUDogJXhcbiIsIAogICAgICAg
ICBodnAtPnBhcmFtc1tIVk1fUEFSQU1fTUVNT1JZX0VWRU5UX1NJTkdMRV9TVEVQXSk7Cn0Kc3Rh
dGljIHZvaWQga2RiX3ByaW50X3Jhbmdlc2V0cyhzdHJ1Y3QgZG9tYWluICpkcCkKewogICAgaW50
IGxvY2tlZCA9IHNwaW5faXNfbG9ja2VkKCZkcC0+cmFuZ2VzZXRzX2xvY2spOwoKICAgIGlmIChs
b2NrZWQpCiAgICAgICAgc3Bpbl91bmxvY2soJmRwLT5yYW5nZXNldHNfbG9jayk7CiAgICByYW5n
ZXNldF9kb21haW5fcHJpbnRrKGRwKTsKICAgIGlmIChsb2NrZWQpCiAgICAgICAgc3Bpbl9sb2Nr
KCZkcC0+cmFuZ2VzZXRzX2xvY2spOwp9CgpzdGF0aWMgdm9pZCBrZGJfcHJfdnRzY19pbmZvKHN0
cnVjdCBhcmNoX2RvbWFpbiAqYXApCnsKICAgIGtkYnAoIiAgICBWVFNDIGluZm86IHRzY19tb2Rl
OiV4ICB2dHNjOiV4ICB2dHNjX2xhc3Q6JTAxNmx4XG4iLCAKICAgICAgICAgYXAtPnRzY19tb2Rl
LCBhcC0+dnRzYywgYXAtPnZ0c2NfbGFzdCk7CiAgICBrZGJwKCIgICAgICAgIHZ0c2Nfb2Zmc2V0
OiUwMTZseCB0c2Nfa2h6OiUwOGx4IGluY2FybmF0aW9uOiV4XG4iLCAKICAgICAgICAgYXAtPnZ0
c2Nfb2Zmc2V0LCBhcC0+dnRzY19vZmZzZXQsIGFwLT5pbmNhcm5hdGlvbik7CiAgICBrZGJwKCIg
ICAgICAgIHZ0c2Nfa2VybmNvdW50OiUwMTZseCBfdXNlcmNvdW50OiUwMTZseFxuIiwKICAgICAg
ICAgYXAtPnZ0c2Nfa2VybmNvdW50LCBhcC0+dnRzY191c2VyY291bnQpOwp9CgovKiBkaXNwbGF5
IG9uZSBkb21haW4gaW5mbyAqLwpzdGF0aWMgdm9pZAprZGJfZGlzcGxheV9kb20oc3RydWN0IGRv
bWFpbiAqZHApCnsKICAgIHN0cnVjdCB2Y3B1ICp2cDsKICAgIGludCBwcmludGVkID0gMDsKICAg
IHN0cnVjdCBncmFudF90YWJsZSAqZ3AgPSBkcC0+Z3JhbnRfdGFibGU7CiAgICBzdHJ1Y3QgYXJj
aF9kb21haW4gKmFwID0gJmRwLT5hcmNoOwoKICAgIGtkYnAoIlxuRE9NQUlOIDogICAgZG9taWQ6
MHglMDR4IHB0cjoweCVwXG4iLCBkcC0+ZG9tYWluX2lkLCBkcCk7CiAgICBpZiAoZHAtPmRvbWFp
bl9pZCA9PSBET01JRF9JRExFKSB7CiAgICAgICAga2RicCgiICAgIElETEUgZG9tYWluLlxuIik7
CiAgICAgICAgcmV0dXJuOwogICAgfQogICAgaWYgKGRwLT5pc19keWluZykgewogICAgICAgIGtk
YnAoIiAgICBkb21haW4gaXMgRFlJTkcuXG4iKTsKICAgICAgICByZXR1cm47CiAgICB9CiNpZiAw
CiAgICBrZGJfcHJpbnRfc3Bpbl9sb2NrKCIgIHBnYWxrOiIsICZkcC0+cGFnZV9hbGxvY19sb2Nr
LCAiXG4iKTsKICAgIGtkYnAoIiAgcGdsaXN0OiAgMHglcCAweCVwXG4iLCBkcC0+cGFnZV9saXN0
Lm5leHQsS0RCX1BHTExFKGRwLT5wYWdlX2xpc3QpKTsKICAgIGtkYnAoIiAgeHBnbGlzdDogMHgl
cCAweCVwXG4iLCBkcC0+eGVucGFnZV9saXN0Lm5leHQsIAogICAgICAgICBLREJfUEdMTEUoZHAt
PnhlbnBhZ2VfbGlzdCkpOwogICAga2RicCgiICBuZXh0OjB4JXAgaGFzaG5leHQ6MHglcFxuIiwg
CiAgICAgICAgIGRwLT5uZXh0X2luX2xpc3QsIGRwLT5uZXh0X2luX2hhc2hidWNrZXQpOwojZW5k
aWYKICAgIGtkYnAoIiAgUEFHRVM6IHRvdDoweCUwOHggbWF4OjB4JTA4eCB4ZW5oZWFwOjB4JTA4
eFxuIiwgCiAgICAgICAgIGRwLT50b3RfcGFnZXMsIGRwLT5tYXhfcGFnZXMsIGRwLT54ZW5oZWFw
X3BhZ2VzKTsKCiAgICBrZGJfcHJpbnRfcmFuZ2VzZXRzKGRwKTsKICAgIGtkYl9wcmludF9kb21f
ZXZlbnRpbmZvKGRwKTsKICAgIGtkYnAoIlxuIik7CiAgICBrZGJwKCIgIEdyYW50IHRhYmxlOiBn
cDoweCVwXG4iLCBncCk7CiAgICBpZiAoZ3ApIHsKICAgICAgICBrZGJwKCIgICAgbnJfZnJhbWVz
OjB4JTA4eCBzaHBwOjB4JXAgYWN0aXZlOjB4JXBcbiIsCiAgICAgICAgICAgICBncC0+bnJfZ3Jh
bnRfZnJhbWVzLCBncC0+c2hhcmVkX3JhdywgZ3AtPmFjdGl2ZSk7CiAgICAgICAga2RicCgiICAg
IG1hcHRyazoweCVwIG1hcGhkOjB4JTA4eCBtYXBsbXQ6MHglMDh4XG4iLCAKICAgICAgICAgICAg
IGdwLT5tYXB0cmFjaywgZ3AtPm1hcHRyYWNrX2hlYWQsIGdwLT5tYXB0cmFja19saW1pdCk7CiAg
ICAgICAga2RicCgiICAgIG1hcGNudDoiKTsKICAgICAgICBrZGJfcHJpbnRfc3Bpbl9sb2NrKCJt
YXBjbnQ6IGxrOiIsICZncC0+bG9jaywgIlxuIik7CiAgICB9CiAgICBrZGJwKCIgIGh2bTolZCBw
cml2OiVkIG5lZWRfaW9tbXU6JWQgZGJnOiVkIGR5aW5nOiVkIHBhdXNlZDolZFxuIiwKICAgICAg
ICAgZHAtPmlzX2h2bSwgZHAtPmlzX3ByaXZpbGVnZWQsIGRwLT5uZWVkX2lvbW11LAogICAgICAg
ICBkcC0+ZGVidWdnZXJfYXR0YWNoZWQsIGRwLT5pc19keWluZywgZHAtPmlzX3BhdXNlZF9ieV9j
b250cm9sbGVyKTsKICAgIGtkYl9wcmludF9zcGluX2xvY2soIiAgc2h1dGRvd246IGxrOiIsICZk
cC0+c2h1dGRvd25fbG9jaywgIlxuIik7CiAgICBrZGJwKCIgIHNodXRuOiVkIHNodXQ6JWQgY29k
ZTolZCBcbiIsIGRwLT5pc19zaHV0dGluZ19kb3duLAogICAgICAgICBkcC0+aXNfc2h1dF9kb3du
LCBkcC0+c2h1dGRvd25fY29kZSk7CiAgICBrZGJwKCIgIHBhdXNlY250OjB4JTA4eCB2bV9hc3Np
c3Q6MHgiS0RCRkwiIHJlZmNudDoweCUwOHhcbiIsCiAgICAgICAgIGRwLT5wYXVzZV9jb3VudC5j
b3VudGVyLCBkcC0+dm1fYXNzaXN0LCBkcC0+cmVmY250LmNvdW50ZXIpOwogICAga2RicCgiICAm
ZG9tYWluX2RpcnR5X2NwdW1hc2s6JXBcbiIsICZkcC0+ZG9tYWluX2RpcnR5X2NwdW1hc2spOyAK
CiAgICBrZGJwKCIgIHNoYXJlZCA9PSB2Y3B1X2luZm9bXTogJXBcbiIsICBkcC0+c2hhcmVkX2lu
Zm8pOyAKICAgIGtkYnAoIiAgICBhcmNoX3NoYXJlZDogbWF4cGZuOiAlbHggcGZuLW1mbi1mcmFt
ZS1sbCBtZm46ICVseFxuIiwgCiAgICAgICAgIGFyY2hfZ2V0X21heF9wZm4oZHApLCBhcmNoX2dl
dF9wZm5fdG9fbWZuX2ZyYW1lX2xpc3RfbGlzdChkcCkpOwogICAga2RicCgiXG4iKTsKICAgIGtk
YnAoIiAgYXJjaF9kb21haW4gYXQgOiAlcFxuIiwgYXApOwoKI2lmZGVmIENPTkZJR19YODZfNjQK
ICAgIGtkYnAoIiAgICBwdF9wYWdlczoweCVwICIsIGFwLT5tbV9wZXJkb21haW5fcHRfcGFnZXMp
OwogICAga2RicCgiICAgIGwyOjB4JXAgbDM6MHglcFxuIiwgYXAtPm1tX3BlcmRvbWFpbl9sMiwg
YXAtPm1tX3BlcmRvbWFpbl9sMyk7CiNlbHNlCiAgICBrZGJwKCIgICAgcHQ6MHglcCAiLCBhcC0+
bW1fcGVyZG9tYWluX3B0KTsKI2VuZGlmCiNpZmRlZiBDT05GSUdfWDg2XzMyCiAgICBrZGJwKCIg
ICAgJm1hcGNoYWNoZToweCV4cFxuIiwgJmFwLT5tYXBjYWNoZSk7CiNlbmRpZgogICAga2RicCgi
ICAgIGlvcG9ydDoweCVwICZodm1fZG9tOjB4JXBcbiIsIGFwLT5pb3BvcnRfY2FwcywgJmFwLT5o
dm1fZG9tYWluKTsKICAgIGlmIChpc19odm1fb3JfaHliX2RvbWFpbihkcCkpCiAgICAgICAga2Ri
X3BybnRfaHZtX2RvbV9pbmZvKGRwKTsKCiAgICBrZGJwKCIgICAgJnBnaW5nX2RvbTolcCBtb2Rl
OiAlbHgiLCAmYXAtPnBhZ2luZywgYXAtPnBhZ2luZy5tb2RlKTsgCiAgICBrZGJfcHJfZG9tX3Bn
X21vZGVzKGRwKTsKICAgIGtkYnAoIiAgICBwMm0gcHRyOiVwICBwYWdlczp7JXAsICVwfVxuIiwg
YXAtPnAybSwgYXAtPnAybS0+cGFnZXMubmV4dCwKICAgICAgICAgS0RCX1BHTExFKGFwLT5wMm0t
PnBhZ2VzKSk7CiAgICBrZGJwKCIgICAgICAgbWF4X21hcHBlZF9wZm46IktEQkZMLCBhcC0+cDJt
LT5tYXhfbWFwcGVkX3Bmbik7CiNpZiBYRU5fU1VCVkVSU0lPTiA+IDAgJiYgWEVOX1ZFUlNJT04g
PT0gNCAgICAgICAgICAgICAgLyogeGVuIDQuMSBhbmQgYWJvdmUgKi8KICAgIGtkYnAoIiAgcGh5
c190YWJsZTolcFxuIiwgYXAtPnAybS0+cGh5c190YWJsZS5wZm4pOwojZWxzZQogICAga2RicCgi
ICBwaHlzX3RhYmxlLnBmbjoiS0RCRkwiXG4iLCBhcC0+cGh5c190YWJsZS5wZm4pOwojZW5kaWYK
ICAgIGtkYnAoIiAgICBwaHlzYWRkcl9iaXRzejolZCAzMmJpdF9wdjolZCBoYXNfMzJiaXRfc2hp
bmZvOiVkXG4iLCAKICAgICAgICAgYXAtPnBoeXNhZGRyX2JpdHNpemUsIGFwLT5pc18zMmJpdF9w
diwgYXAtPmhhc18zMmJpdF9zaGluZm8pOwogICAga2RiX3ByX3Z0c2NfaW5mbyhhcCk7CiAgICBr
ZGJwKCIgIHNjaGVkOjB4JXAgICZoYW5kbGU6MHglcFxuIiwgZHAtPnNjaGVkX3ByaXYsICZkcC0+
aGFuZGxlKTsKICAgIGtkYnAoIiAgdmNwdSBwdHJzOlxuICAgIik7CiAgICBmb3JfZWFjaF92Y3B1
KGRwLCB2cCkgewogICAgICAgIGtkYnAoIiAlZDolcCIsIHZwLT52Y3B1X2lkLCB2cCk7CiAgICAg
ICAgaWYgKCsrcHJpbnRlZCAlIDQgPT0gMCkga2RicCgiXG4gICAiKTsKICAgIH0KICAgIGtkYnAo
IlxuIik7Cn0KCi8qIAogKiBGVU5DVElPTjogRGlzcGFseSAoY3VycmVudCkgZG9tYWluL3MKICov
CnN0YXRpYyBrZGJfY3B1X2NtZF90CmtkYl91c2dmX2RvbSh2b2lkKQp7CiAgICBrZGJwKCJkb20g
W2FsbHxkb21pZF06IERpc3BsYXkgY3VycmVudC9hbGwvZ2l2ZW4gZG9tYWluL3NcbiIpOwogICAg
cmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7Cn0Kc3RhdGljIGtkYl9jcHVfY21kX3QgCmtkYl9jbWRm
X2RvbShpbnQgYXJnYywgY29uc3QgY2hhciAqKmFyZ3YsIHN0cnVjdCBjcHVfdXNlcl9yZWdzICpy
ZWdzKQp7CiAgICBpbnQgaWQ7CiAgICBzdHJ1Y3QgZG9tYWluICpkcCA9IGN1cnJlbnQtPmRvbWFp
bjsKCiAgICBpZiAoYXJnYyA+IDEgJiYgKmFyZ3ZbMV0gPT0gJz8nKQogICAgICAgIHJldHVybiBr
ZGJfdXNnZl9kb20oKTsKCiAgICBpZiAoYXJnYyA+IDEpIHsKICAgICAgICBmb3IoZHA9ZG9tYWlu
X2xpc3Q7IGRwOyBkcD1kcC0+bmV4dF9pbl9saXN0KQogICAgICAgICAgICBpZiAoa2RiX3N0cjJk
ZWNpKGFyZ3ZbMV0sICZpZCkgJiYgZHAtPmRvbWFpbl9pZD09aWQpCiAgICAgICAgICAgICAgICBr
ZGJfZGlzcGxheV9kb20oZHApOwogICAgICAgICAgICBlbHNlIGlmICghc3RyY21wKGFyZ3ZbMV0s
ICJhbGwiKSkgCiAgICAgICAgICAgICAgICBrZGJfZGlzcGxheV9kb20oZHApOwogICAgfSBlbHNl
IHsKICAgICAgICBrZGJwKCJEaXNwbGF5aW5nIGN1cnJlbnQgZG9tYWluIDpcbiIpOwogICAgICAg
IGtkYl9kaXNwbGF5X2RvbShkcCk7CiAgICB9CiAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsK
fQoKLyogRHVtcCBpcnEgZGVzYyB0YWJsZSAqLwpzdGF0aWMga2RiX2NwdV9jbWRfdAprZGJfdXNn
Zl9kaXJxKHZvaWQpCnsKICAgIGtkYnAoImRpcnEgOiBkdW1wIGlycSBiaW5kaW5nc1xuIik7CiAg
ICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKfQpzdGF0aWMga2RiX2NwdV9jbWRfdAprZGJfY21k
Zl9kaXJxKGludCBhcmdjLCBjb25zdCBjaGFyICoqYXJndiwgc3RydWN0IGNwdV91c2VyX3JlZ3Mg
KnJlZ3MpCnsKICAgIHVuc2lnbmVkIGxvbmcgaXJxLCBzeiwgb2ZmcywgYWRkcjsKICAgIGNoYXIg
YnVmW0tTWU1fTkFNRV9MRU4rMV07CiAgICBjaGFyIGFmZnN0cltOUl9DUFVTLzQrTlJfQ1BVUy8z
MisyXTsgICAgLyogY291cnRlc3kgZHVtcF9pcnFzKCkgKi8KCiAgICBpZiAoYXJnYyA+IDEgJiYg
KmFyZ3ZbMV0gPT0gJz8nKQogICAgICAgIHJldHVybiBrZGJfdXNnZl9kaXJxKCk7CgojaWYgWEVO
X1ZFUlNJT04gPCA0ICYmIFhFTl9TVUJWRVJTSU9OIDwgNSAgICAgICAgICAgLyogeGVuIDMuNC54
IG9yIGJlbG93ICovCiAgICBrZGJwKCJpZHgvaXJxIy9zdGF0dXM6IGFsbCBhcmUgaW4gZGVjaW1h
bFxuIik7CiAgICBrZGJwKCJpZHggIGlycSMgIHN0YXR1cyAgIGFjdGlvbihoYW5kbGVyIG5hbWUg
ZGV2aWQpXG4iKTsKICAgIGZvciAoaXJxPTA7IGlycSA8IE5SX1ZFQ1RPUlM7IGlycSsrKSB7CiAg
ICAgICAgaXJxX2Rlc2NfdCAgKmRwID0gJmlycV9kZXNjW2lycV07CiAgICAgICAgaWYgKCFkcC0+
YWN0aW9uKQogICAgICAgICAgICBjb250aW51ZTsKICAgICAgICBhZGRyID0gKHVuc2lnbmVkIGxv
bmcpZHAtPmFjdGlvbi0+aGFuZGxlcjsKICAgICAgICBrZGJwKCJbJTNsZF06aXJxOiUzZCBzdDol
M2QgZjolcyBkZXZubTolcyBkZXZpZDoweCVwXG4iLAogICAgICAgICAgICAgaSwgdmVjdG9yX3Rv
X2lycShpcnEpLCBkcC0+c3RhdHVzLCAoZHAtPnN0YXR1cyAmIElSUV9HVUVTVCkgPyAKICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICJHVUVTVCBJUlEiIDogc3ltYm9sc19sb29rdXAoYWRkciwg
JnN6LCAmb2ZmcywgYnVmKSwKICAgICAgICAgICAgIGRwLT5hY3Rpb24tPm5hbWUsIGRwLT5hY3Rp
b24tPmRldl9pZCk7CiAgICB9CiNlbHNlCiAgICBrZGJwKCJpcnFfZGVzY1tdOiVwIG5yX2lycXM6
ICQlZCBucl9pcnFzX2dzaTogJCVkXG4iLCBpcnFfZGVzYywgbnJfaXJxcywgCiAgICAgICAgICBu
cl9pcnFzX2dzaSk7CiAgICBrZGJwKCJpcnEvdmVjIy9zdGF0dXM6IGluIGRlY2ltYWwuIGFmZmlu
aXR5IGluIGhleCwgbm90IGJpdG1hcFxuIik7CiAgICBrZGJwKCJpcnEtLSB2ZWMgc3RhIGZ1bmN0
aW9uLS0tLS0tLS0tLS0gbmFtZS0tLS0gdHlwZS0tLS0tLS0tLSAiKTsKICAgIGtkYnAoImFmZiBk
ZXZpZC0tLS0tLS0tLS0tLVxuIik7CiAgICBmb3IgKGlycT0wOyBpcnEgPCBucl9pcnFzOyBpcnEr
KykgewogICAgICAgIHZvaWQgKmRldmlkcDsKICAgICAgICBjb25zdCBjaGFyICpzeW1wLCAqbm1w
OwogICAgICAgIGlycV9kZXNjX3QgICpkcCA9IGlycV90b19kZXNjKGlycSk7CiAgICAgICAgc3Ry
dWN0IGFyY2hfaXJxX2Rlc2MgKmFyY2hwID0gJmRwLT5hcmNoOwoKICAgICAgICBpZiAoIWRwLT5o
YW5kbGVyIHx8IGRwLT5oYW5kbGVyPT0mbm9faXJxX3R5cGUgfHwgZHAtPnN0YXR1cyAmIElSUV9H
VUVTVCkKICAgICAgICAgICAgY29udGludWU7CgogICAgICAgIGFkZHIgPSBkcC0+YWN0aW9uID8g
KHVuc2lnbmVkIGxvbmcpZHAtPmFjdGlvbi0+aGFuZGxlciA6IDA7CiAgICAgICAgc3ltcCA9IGFk
ZHIgPyBzeW1ib2xzX2xvb2t1cChhZGRyLCAmc3osICZvZmZzLCBidWYpIDogIm4vYSAiOwogICAg
ICAgIG5tcCA9IGFkZHIgPyBkcC0+YWN0aW9uLT5uYW1lIDogIm4vYSAiOwogICAgICAgIGRldmlk
cCA9IGFkZHIgPyBkcC0+YWN0aW9uLT5kZXZfaWQgOiBOVUxMOwogICAgICAgIGNwdW1hc2tfc2Nu
cHJpbnRmKGFmZnN0ciwgc2l6ZW9mKGFmZnN0ciksIGRwLT5hZmZpbml0eSk7CiAgICAgICAga2Ri
cCgiWyUzbGRdICUwM2QgJTAzZCAlLTE5cyAlLThzICUtMTNzICUzcyAweCVwXG4iLCBpcnEsIGFy
Y2hwLT52ZWN0b3IsCiAgICAgICAgICAgICBkcC0+c3RhdHVzLCBzeW1wLCBubXAsIGRwLT5oYW5k
bGVyLT50eXBlbmFtZSwgYWZmc3RyLCBkZXZpZHApOwogICAgfQogICAga2RiX3BybnRfZ3Vlc3Rf
bWFwcGVkX2lycXMoKTsKI2VuZGlmCiAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKfQoKc3Rh
dGljIHZvaWQKa2RiX3BybnRfdmVjX2lycV90YWJsZShpbnQgY3B1KQp7CiAgICBpbnQgaSxqLCAq
dGJsID0gcGVyX2NwdSh2ZWN0b3JfaXJxLCBjcHUpOwoKICAgIGtkYnAoIkNQVSAlZCA6ICIsIGNw
dSk7CiAgICBmb3IgKGk9MCwgaj0wOyBpIDwgTlJfVkVDVE9SUzsgaSsrKQogICAgICAgIGlmICh0
YmxbaV0gIT0gLTEpIHsKICAgICAgICAgICAga2RicCgiKCUzZDolM2QpICIsIGksIHRibFtpXSk7
CiAgICAgICAgICAgIGlmICghKCsraiAlIDUpKQogICAgICAgICAgICAgICAga2RicCgiXG4gICAg
ICAgICIpOwogICAgICAgIH0KICAgIGtkYnAoIlxuIik7Cn0KCi8qIER1bXAgaXJxIGRlc2MgdGFi
bGUgKi8Kc3RhdGljIGtkYl9jcHVfY21kX3QKa2RiX3VzZ2ZfZHZpdCh2b2lkKQp7CiAgICBrZGJw
KCJkdml0IFtjcHV8YWxsXTogZHVtcCAocGVyIGNwdSl2ZWN0b3IgaXJxIHRhYmxlXG4iKTsKICAg
IHJldHVybiBLREJfQ1BVX01BSU5fS0RCOwp9CnN0YXRpYyBrZGJfY3B1X2NtZF90CmtkYl9jbWRm
X2R2aXQoaW50IGFyZ2MsIGNvbnN0IGNoYXIgKiphcmd2LCBzdHJ1Y3QgY3B1X3VzZXJfcmVncyAq
cmVncykKewogICAgaW50IGNwdSwgY2NwdSA9IHNtcF9wcm9jZXNzb3JfaWQoKTsKCiAgICBpZiAo
YXJnYyA+IDEgJiYgKmFyZ3ZbMV0gPT0gJz8nKQogICAgICAgIHJldHVybiBrZGJfdXNnZl9kdml0
KCk7CiAgICAKICAgIGlmIChhcmdjID4gMSkgewogICAgICAgIGlmICghc3RyY21wKGFyZ3ZbMV0s
ICJhbGwiKSkgCiAgICAgICAgICAgIGNwdSA9IC0xOwogICAgICAgIGVsc2UgaWYgKCFrZGJfc3Ry
MmRlY2koYXJndlsxXSwgJmNwdSkpIHsKICAgICAgICAgICAga2RicCgiSW52YWxpZCBjcHU6JWRc
biIsIGNwdSk7CiAgICAgICAgICAgIHJldHVybiBrZGJfdXNnZl9kdml0KCk7CiAgICAgICAgfQog
ICAgfSBlbHNlCiAgICAgICAgY3B1ID0gY2NwdTsKCiAgICBrZGJwKCJQZXIgQ1BVIHZlY3RvciBp
cnEgdGFibGUgcGFpcnMgKHZlY3RvcjppcnEpIChhbGwgZGVjaW1hbHMpOlxuIik7CiAgICBpZiAo
Y3B1ICE9IC0xKSAKICAgICAgICBrZGJfcHJudF92ZWNfaXJxX3RhYmxlKGNwdSk7CiAgICBlbHNl
CiAgICAgICAgZm9yX2VhY2hfb25saW5lX2NwdShjcHUpIAogICAgICAgICAgICBrZGJfcHJudF92
ZWNfaXJxX3RhYmxlKGNwdSk7CgogICAgcmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7Cn0KCi8qIGRv
IHZtZXhpdCBvbiBhbGwgY3B1J3Mgc28gaW50ZWwgVk1DUyBjYW4gYmUgZHVtcGVkICovCnN0YXRp
YyBrZGJfY3B1X2NtZF90IAprZGJfYWxsX2NwdV9mbHVzaF92bWNzKHZvaWQpCnsKICAgIGludCBj
cHUsIGNjcHUgPSBzbXBfcHJvY2Vzc29yX2lkKCk7CiAgICBmb3JfZWFjaF9vbmxpbmVfY3B1KGNw
dSkgewogICAgICAgIGlmIChjcHUgPT0gY2NwdSkgewogICAgICAgICAgICBrZGJfY3Vycl9jcHVf
Zmx1c2hfdm1jcygpOwogICAgICAgIH0gZWxzZSB7CiAgICAgICAgICAgIGlmIChrZGJfY3B1X2Nt
ZFtjcHVdICE9IEtEQl9DUFVfUEFVU0UpeyAgLyogaHVuZyBjcHUgKi8KICAgICAgICAgICAgICAg
IGtkYnAoIlNraXBwaW5nIChodW5nPykgY3B1ICVkXG4iLCBjcHUpOwogICAgICAgICAgICAgICAg
Y29udGludWU7CiAgICAgICAgICAgIH0KICAgICAgICAgICAga2RiX2NwdV9jbWRbY3B1XSA9IEtE
Ql9DUFVfRE9fVk1FWElUOwogICAgICAgICAgICB3aGlsZSAoa2RiX2NwdV9jbWRbY3B1XT09S0RC
X0NQVV9ET19WTUVYSVQpOwogICAgICAgIH0KICAgIH0KICAgIHJldHVybiBLREJfQ1BVX01BSU5f
S0RCOwp9CgovKiBEaXNwbGF5IFZNQ1Mgb3IgVk1DQiAqLwpzdGF0aWMga2RiX2NwdV9jbWRfdApr
ZGJfdXNnZl9kdm1jKHZvaWQpCnsKICAgIGtkYnAoImR2bWMgW2RvbWlkXVt2Y3B1aWRdIDogRHVt
cCB2bWNzL3ZtY2JcbiIpOwogICAgcmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7Cn0Kc3RhdGljIGtk
Yl9jcHVfY21kX3QKa2RiX2NtZGZfZHZtYyhpbnQgYXJnYywgY29uc3QgY2hhciAqKmFyZ3YsIHN0
cnVjdCBjcHVfdXNlcl9yZWdzICpyZWdzKQp7CiAgICBkb21pZF90IGRvbWlkID0gMDsgIC8qIHVu
c2lnbmVkIHR5cGUgZG9uJ3QgbGlrZSAtMSAqLwogICAgaW50IHZjcHVpZCA9IC0xOwoKICAgIGlm
IChhcmdjID4gMSAmJiAqYXJndlsxXSA9PSAnPycpCiAgICAgICAgcmV0dXJuIGtkYl91c2dmX2R2
bWMoKTsKCiAgICBpZiAoYXJnYyA+IDEpIHsgCiAgICAgICAgaWYgKCFrZGJfc3RyMmRvbWlkKGFy
Z3ZbMV0sICZkb21pZCwgMSkpCiAgICAgICAgICAgIHJldHVybiBLREJfQ1BVX01BSU5fS0RCOwog
ICAgfQogICAgaWYgKGFyZ2MgPiAyICYmICFrZGJfc3RyMmRlY2koYXJndlsyXSwgJnZjcHVpZCkp
IHsKICAgICAgICBrZGJwKCJCYWQgdmNwdWlkOiAweCV4XG4iLCB2Y3B1aWQpOwogICAgICAgIHJl
dHVybiBLREJfQ1BVX01BSU5fS0RCOwogICAgfQogICAgaWYgKGJvb3RfY3B1X2RhdGEueDg2X3Zl
bmRvciA9PSBYODZfVkVORE9SX0lOVEVMKSB7CiAgICAgICAga2RiX2FsbF9jcHVfZmx1c2hfdm1j
cygpOwogICAgICAgIGtkYl9kdW1wX3ZtY3MoZG9taWQsIChpbnQpdmNwdWlkKTsKICAgIH0gZWxz
ZSB7CiAgICAgICAga2RiX2R1bXBfdm1jYihkb21pZCwgKGludCl2Y3B1aWQpOwogICAgfQogICAg
cmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7Cn0KCnN0YXRpYyBrZGJfY3B1X2NtZF90CmtkYl91c2dm
X21taW8odm9pZCkKewogICAga2RicCgibW1pbzogZHVtcCBtbWlvIHJlbGF0ZWQgaW5mb1xuIik7
CiAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKfQpzdGF0aWMga2RiX2NwdV9jbWRfdAprZGJf
Y21kZl9tbWlvKGludCBhcmdjLCBjb25zdCBjaGFyICoqYXJndiwgc3RydWN0IGNwdV91c2VyX3Jl
Z3MgKnJlZ3MpCnsKICAgIGlmIChhcmdjID4gMSAmJiAqYXJndlsxXSA9PSAnPycpCiAgICAgICAg
cmV0dXJuIGtkYl91c2dmX21taW8oKTsKCiAgICBrZGJwKCJyL28gbW1pbyByYW5nZXM6XG4iKTsK
ICAgIHJhbmdlc2V0X3ByaW50ayhtbWlvX3JvX3Jhbmdlcyk7CiAgICBrZGJwKCJcbiIpOwogICAg
cmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7Cn0KCi8qIER1bXAgdGltZXIvdGltZXJzIHF1ZXVlcyAq
LwpzdGF0aWMga2RiX2NwdV9jbWRfdAprZGJfdXNnZl9kdHJxKHZvaWQpCnsKICAgIGtkYnAoImR0
cnE6IGR1bXAgdGltZXIgcXVldWVzIG9uIGFsbCBjcHVzXG4iKTsKICAgIHJldHVybiBLREJfQ1BV
X01BSU5fS0RCOwp9CnN0YXRpYyBrZGJfY3B1X2NtZF90CmtkYl9jbWRmX2R0cnEoaW50IGFyZ2Ms
IGNvbnN0IGNoYXIgKiphcmd2LCBzdHJ1Y3QgY3B1X3VzZXJfcmVncyAqcmVncykKewogICAgaWYg
KGFyZ2MgPiAxICYmICphcmd2WzFdID09ICc/JykKICAgICAgICByZXR1cm4ga2RiX3VzZ2ZfZHRy
cSgpOwoKICAgIGtkYl9kdW1wX3RpbWVyX3F1ZXVlcygpOwogICAgcmV0dXJuIEtEQl9DUFVfTUFJ
Tl9LREI7Cn0KCnN0cnVjdCBpZHRlIHsKICAgIHVpbnQxNl90IG9mZnMwXzE1OwogICAgdWludDE2
X3Qgc2VsZWN0b3I7CiAgICB1aW50MTZfdCBtZXRhOwogICAgdWludDE2X3Qgb2ZmczE2XzMxOwog
ICAgdWludDMyX3Qgb2ZmczMyXzYzOwogICAgdWludDMyX3QgcmVzdmQ7Cn07CgojaWZkZWYgX194
ODZfNjRfXwpzdGF0aWMgdm9pZAprZGJfcHJpbnRfaWR0ZShpbnQgbnVtLCBzdHJ1Y3QgaWR0ZSAq
aWR0cCkgCnsKICAgIHVpbnQxNl90IG10YSA9IGlkdHAtPm1ldGE7CiAgICBjaGFyIGRwbCA9ICgo
bXRhICYgMHg2MDAwKSA+PiAxMyk7CiAgICBjaGFyIHByZXNlbnQgPSAoKG10YSAmMHg4MDAwKSA+
PiAxNSk7CiAgICBpbnQgdHZhbCA9ICgobXRhICYweDMwMCkgPj4gOCk7CiAgICBjaGFyICp0eXBl
ID0gKHR2YWwgPT0gMSkgPyAiVGFzayIgOiAoKHR2YWw9PSAyKSA/ICJJbnRyIiA6ICJUcmFwIik7
CiAgICBkb21pZF90IGRvbWlkID0gaWR0cC0+c2VsZWN0b3I9PV9fSFlQRVJWSVNPUl9DUzY0ID8g
RE9NSURfSURMRSA6CiAgICAgICAgICAgICAgICAgICAgY3VycmVudC0+ZG9tYWluLT5kb21haW5f
aWQ7CiAgICB1aW50NjRfdCBhZGRyID0gaWR0cC0+b2ZmczBfMTUgfCAoKHVpbnQ2NF90KWlkdHAt
Pm9mZnMxNl8zMSA8PCAxNikgfCAKICAgICAgICAgICAgICAgICAgICAoKHVpbnQ2NF90KWlkdHAt
Pm9mZnMzMl82MyA8PCAzMik7CgogICAga2RicCgiWyUwM2RdOiAlcyAleCAgJXggJTA0eDolMDE2
bHggIiwgbnVtLCB0eXBlLCBkcGwsIHByZXNlbnQsCiAgICAgICAgIGlkdHAtPnNlbGVjdG9yLCBh
ZGRyKTsgCiAgICBrZGJfcHJudF9hZGRyMnN5bShkb21pZCwgYWRkciwgIlxuIik7Cn0KCi8qIER1
bXAgNjRiaXQgaWR0IHRhYmxlIGN1cnJlbnRseSBvbiB0aGlzIGNwdS4gSW50ZWwgVm9sIDMgc2Vj
dGlvbiA1LjE0LjEgKi8Kc3RhdGljIGtkYl9jcHVfY21kX3QKa2RiX3VzZ2ZfZGlkdCh2b2lkKQp7
CiAgICBrZGJwKCJkaWR0IDogZHVtcCBJRFQgdGFibGUgb24gdGhlIGN1cnJlbnQgY3B1XG4iKTsK
ICAgIHJldHVybiBLREJfQ1BVX01BSU5fS0RCOwp9CnN0YXRpYyBrZGJfY3B1X2NtZF90CmtkYl9j
bWRmX2RpZHQoaW50IGFyZ2MsIGNvbnN0IGNoYXIgKiphcmd2LCBzdHJ1Y3QgY3B1X3VzZXJfcmVn
cyAqcmVncykKewogICAgaW50IGk7CiAgICBzdHJ1Y3QgaWR0ZSAqaWR0cCA9IChzdHJ1Y3QgaWR0
ZSAqKWlkdF90YWJsZXNbc21wX3Byb2Nlc3Nvcl9pZCgpXTsKCiAgICBpZiAoYXJnYyA+IDEgJiYg
KmFyZ3ZbMV0gPT0gJz8nKQogICAgICAgIHJldHVybiBrZGJfdXNnZl9kaWR0KCk7CgogICAga2Ri
cCgiSURUIGF0OiVwXG4iLCBpZHRwKTsKICAgIGtkYnAoImlkdCMgIFR5cGUgRFBMIFAgYWRkciAo
YWxsIGhleCBleGNlcHQgaWR0IylcbiIsIGlkdHApOwogICAgZm9yIChpPTA7IGkgPCAyNTY7IGkr
KywgaWR0cCsrKSAKICAgICAgICBrZGJfcHJpbnRfaWR0ZShpLCBpZHRwKTsKICAgIHJldHVybiBL
REJfQ1BVX01BSU5fS0RCOwp9CiNlbHNlCnN0YXRpYyBrZGJfY3B1X2NtZF90CmtkYl9jbWRmX2Rp
ZHQoaW50IGFyZ2MsIGNvbnN0IGNoYXIgKiphcmd2LCBzdHJ1Y3QgY3B1X3VzZXJfcmVncyAqcmVn
cykKewogICAga2RicCgia2RiOiBQbGVhc2UgaW1wbGVtZW50IG1lIGluIDMyYml0IGh5cGVydmlz
b3JcbiIpOwogICAgcmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7Cn0KI2VuZGlmCgpzdHJ1Y3QgZ2R0
ZSB7ICAgICAgICAgICAgIC8qIHNhbWUgZm9yIFRTUyBhbmQgTERUICovCiAgICB1bG9uZyBsaW1p
dDA6MTY7CiAgICB1bG9uZyBiYXNlMDoyNDsgICAgICAgLyogbGluZWFyIGFkZHJlc3MgYmFzZSwg
bm90IHBhICovCiAgICB1bG9uZyBhY2N0eXBlOjQ7ICAgICAgLyogVHlwZTogYWNjZXNzIHJpZ2h0
cyAqLwogICAgdWxvbmcgUzoxOyAgICAgICAgICAgIC8qIFM6IDAgPSBzeXN0ZW0sIDEgPSBjb2Rl
L2RhdGEgKi8KICAgIHVsb25nIERQTDoyOyAgICAgICAgICAvKiBEUEwgKi8KICAgIHVsb25nIFA6
MTsgICAgICAgICAgICAvKiBQOiBTZWdtZW50IFByZXNlbnQgKi8KICAgIHVsb25nIGxpbWl0MTo0
OwogICAgdWxvbmcgQVZMOjE7ICAgICAgICAgIC8qIEFWTDogYXZhaWwgZm9yIHVzZSBieSBzeXN0
ZW0gc29mdHdhcmUgKi8KICAgIHVsb25nIEw6MTsgICAgICAgICAgICAvKiBMOiA2NGJpdCBjb2Rl
IHNlZ21lbnQgKi8KICAgIHVsb25nIERCOjE7ICAgICAgICAgICAvKiBEL0IgKi8KICAgIHVsb25n
IEc6MTsgICAgICAgICAgICAvKiBHOiBncmFudWxhcml0eSAqLwogICAgdWxvbmcgYmFzZTE6ODsg
ICAgICAgIC8qIGxpbmVhciBhZGRyZXNzIGJhc2UsIG5vdCBwYSAqLwp9OwoKdW5pb24gZ2R0ZV91
IHsKICAgIHN0cnVjdCBnZHRlIGdkdGU7CiAgICB1NjQgZ3ZhbDsKfTsKCnN0cnVjdCBjYWxsX2dk
dGUgewogICAgdW5zaWduZWQgc2hvcnQgb2ZmczA6MTY7CiAgICB1bnNpZ25lZCBzaG9ydCBzZWw6
MTY7CiAgICB1bnNpZ25lZCBzaG9ydCBtaXNjMDoxNjsKICAgIHVuc2lnbmVkIHNob3J0IG9mZnMx
OjE2Owp9OwoKc3RydWN0IGlkdF9nZHRlIHsKICAgIHVuc2lnbmVkIGxvbmcgb2ZmczA6MTY7CiAg
ICB1bnNpZ25lZCBsb25nIHNlbDoxNjsKICAgIHVuc2lnbmVkIGxvbmcgaXN0OjM7CiAgICB1bnNp
Z25lZCBsb25nIHVudXNlZDA6MTM7CiAgICB1bnNpZ25lZCBsb25nIG9mZnMxOjE2Owp9Owp1bmlv
biBzZ2R0ZV91IHsKICAgIHN0cnVjdCBjYWxsX2dkdGUgY2dkdGU7CiAgICBzdHJ1Y3QgaWR0X2dk
dGUgaWdkdGU7CiAgICB1NjQgc2d2YWw7Cn07CgovKiByZXR1cm4gYmluYXJ5IGZvcm0gb2YgYSBo
ZXggaW4gc3RyaW5nIDogbWF4IDQgY2hhcnMgMDAwMCB0byAxMTExICovCnN0YXRpYyBjaGFyICpr
ZGJfcmV0X2FjY3R5cGUodWludCBhY2N0eXBlKQp7CiAgICBzdGF0aWMgY2hhciBidWZbMTZdOwog
ICAgY2hhciAqcCA9IGJ1ZjsKICAgIGludCBpOwoKICAgIGlmIChhY2N0eXBlID4gMHhmKSB7CiAg
ICAgICAgYnVmWzBdID0gYnVmWzFdID0gYnVmWzJdID0gYnVmWzNdID0gJz8nOwogICAgICAgIGJ1
Zls1XSA9ICdcbic7CiAgICAgICAgcmV0dXJuIGJ1ZjsKICAgIH0KICAgIGZvciAoaT0wOyBpIDwg
NDsgaSsrLCBwKyssIGFjY3R5cGU9YWNjdHlwZT4+MSkKICAgICAgICAqcCA9IChhY2N0eXBlICYg
MHgxKSA/ICcxJyA6ICcwJzsKCiAgICByZXR1cm4gYnVmOwp9CgovKiBEaXNwbGF5IEdEVCB0YWJs
ZS4gSUEtMzJlIG1vZGUgaXMgYXNzdW1kZWQuICovCi8qIGZpcnN0IGRpc3BsYXkgbm9uIHN5c3Rl
bSBkZXNjcmlwdG9ycyB0aGVuIGRpc3BsYXkgc3lzdGVtIGRlc2NyaXB0b3JzICovCnN0YXRpYyBr
ZGJfY3B1X2NtZF90CmtkYl91c2dmX2RnZHQodm9pZCkKewogICAga2RicCgiZGdkdCBbZ2R0LXB0
ciBkZWNpbWFsLWJ5dGUtc2l6ZV0gZHVtcCBHRFQgdGFibGUgb24gY3VycmVudCBjcHUgb3IgZm9y
IgogICAgICAgICAiZ2l2ZW4gdmNwdVxuIik7CiAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsK
fQpzdGF0aWMga2RiX2NwdV9jbWRfdAprZGJfY21kZl9kZ2R0KGludCBhcmdjLCBjb25zdCBjaGFy
ICoqYXJndiwgc3RydWN0IGNwdV91c2VyX3JlZ3MgKnJlZ3MpCnsKICAgIHN0cnVjdCBYZ3RfZGVz
Y19zdHJ1Y3QgZGVzYzsKICAgIHVuaW9uIGdkdGVfdSB1MTsKICAgIHVsb25nIHN0YXJ0X2FkZHIs
IGVuZF9hZGRyLCB0YWRkcj0wOwogICAgZG9taWRfdCBkb21pZCA9IERPTUlEX0lETEU7CiAgICBp
bnQgaWR4OwoKICAgIGlmIChhcmdjID4gMSAmJiAqYXJndlsxXSA9PSAnPycpCiAgICAgICAgcmV0
dXJuIGtkYl91c2dmX2RnZHQoKTsKCiAgICBpZiAoYXJnYyA+IDEpIHsKICAgICAgICBpZiAoYXJn
YyAhPSAzKQogICAgICAgICAgICByZXR1cm4ga2RiX3VzZ2ZfZGdkdCgpOwoKICAgICAgICBpZiAo
a2RiX3N0cjJ1bG9uZyhhcmd2WzFdLCAodWxvbmcgKikmc3RhcnRfYWRkcikgJiYgCiAgICAgICAg
ICAgIGtkYl9zdHIyZGVjaShhcmd2WzJdLCAoaW50ICopJnRhZGRyKSkgewogICAgICAgICAgICBl
bmRfYWRkciA9IHN0YXJ0X2FkZHIgKyB0YWRkcjsKICAgICAgICB9IGVsc2UgewogICAgICAgICAg
ICBrZGJwKCJkZ2R0OiBCYWQgYXJnOiVzIG9yICVzXG4iLCBhcmd2WzFdLCBhcmd2WzJdKTsKICAg
ICAgICAgICAgcmV0dXJuIGtkYl91c2dmX2RnZHQoKTsKICAgICAgICB9CiAgICB9IGVsc2Ugewog
ICAgICAgIF9fYXNtX18gX192b2xhdGlsZV9fICgic2dkdCAgKCUwKSBcbiIgOjogImEiKCZkZXNj
KSA6ICJtZW1vcnkiKTsKICAgICAgICBzdGFydF9hZGRyID0gKHVsb25nKWRlc2MuYWRkcmVzczsg
CiAgICAgICAgZW5kX2FkZHIgPSAodWxvbmcpZGVzYy5hZGRyZXNzICsgZGVzYy5zaXplOwogICAg
fQogICAga2RicCgiR0RUOiBXaWxsIHNraXAgbnVsbCBkZXNjIGF0IDAsIHN0YXJ0OiVseCBlbmQ6
JWx4XG4iLCBzdGFydF9hZGRyLCAKICAgICAgICAgZW5kX2FkZHIpOwogICAga2RicCgiW2lkeF0g
ICBzZWwgLS0tIHZhbCAtLS0tLS0tLSAgQWNjcyBEUEwgUCBBVkwgTCBEQiBHICIKICAgICAgICAg
Ii0tQmFzZSBBZGRyIC0tLS0gIExpbWl0XG4iKTsKICAgIGtkYnAoIiAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgIFR5cGVcbiIpOwoKICAgIC8qIHNraXAgZmlyc3QgOCBudWxsIGJ5dGVzICov
CiAgICAvKiB0aGUgY3B1IG11bHRpcGxpZXMgdGhlIGluZGV4IGJ5IDggYW5kIGFkZHMgdG8gR0RU
LmJhc2UgKi8KICAgIGZvciAodGFkZHIgPSBzdGFydF9hZGRyKzg7IHRhZGRyIDwgZW5kX2FkZHI7
ICB0YWRkciArPSBzaXplb2YodWxvbmcpKSB7CgogICAgICAgIC8qIG5vdCBhbGwgZW50cmllcyBh
cmUgbWFwcGVkLiBkbyB0aGlzIHRvIGF2b2lkIEdQIGV2ZW4gaWYgaHlwICovCiAgICAgICAgaWYg
KCFrZGJfcmVhZF9tZW0odGFkZHIsIChrZGJieXRfdCAqKSZ1MSwgc2l6ZW9mKHUxKSxkb21pZCkg
fHwgIXUxLmd2YWwpCiAgICAgICAgICAgIGNvbnRpbnVlOwoKICAgICAgICBpZiAodTEuZ3ZhbCA9
PSAweGZmZmZmZmZmZmZmZmZmZmYgfHwgdTEuZ3ZhbCA9PSAweDU1NTU1NTU1NTU1NTU1NTUpCiAg
ICAgICAgICAgIGNvbnRpbnVlOyAgICAgICAgICAgICAgIC8qIHdoYXQgYW4gZWZmaW4geDg2IG1l
c3MgKi8KCiAgICAgICAgaWR4ID0gKHRhZGRyIC0gc3RhcnRfYWRkcikgLyA4OwogICAgICAgIGlm
ICh1MS5nZHRlLlMgPT0gMCkgeyAgICAgICAvKiBTeXN0ZW0gRGVzYyBhcmUgMTYgYnl0ZXMgaW4g
NjRiaXQgbW9kZSAqLwogICAgICAgICAgICB0YWRkciArPSBzaXplb2YodWxvbmcpOwogICAgICAg
ICAgICBjb250aW51ZTsKICAgICAgICB9CiAgICAgICAga2RicCgiWyUwNHhdICUwNHggJTAxNmx4
ICAlNHMgICV4ICAlZCAgJWQgICVkICAlZCAlZCAlMDE2bHggICUwNXhcbiIsCiAgICAgICAgICAg
ICBpZHgsIChpZHg8PDMpLCB1MS5ndmFsLCBrZGJfcmV0X2FjY3R5cGUodTEuZ2R0ZS5hY2N0eXBl
KSwgCiAgICAgICAgICAgICB1MS5nZHRlLkRQTCwgCiAgICAgICAgICAgICB1MS5nZHRlLlAsIHUx
LmdkdGUuQVZMLCB1MS5nZHRlLkwsIHUxLmdkdGUuREIsIHUxLmdkdGUuRywgIAogICAgICAgICAg
ICAgKHU2NCkoKHU2NCl1MS5nZHRlLmJhc2UwIHwgKHU2NCkoKHU2NCl1MS5nZHRlLmJhc2UxPDwy
NCkpLCAKICAgICAgICAgICAgIHUxLmdkdGUubGltaXQwIHwgKHUxLmdkdGUubGltaXQxPDwxNikp
OwogICAgfQoKICAgIGtkYnAoIlxuU3lzdGVtIGRlc2NyaXB0b3JzIChTPTApIDogKHNraXBwaW5n
IDB0aCBlbnRyeSlcbiIpOwogICAgZm9yICh0YWRkcj1zdGFydF9hZGRyKzg7ICB0YWRkciA8IGVu
ZF9hZGRyOyAgdGFkZHIgKz0gc2l6ZW9mKHVsb25nKSkgewogICAgICAgIHVpbnQgYWNjdHlwZTsK
ICAgICAgICB1NjQgdXBwZXIsIGFkZHI2ND0wOwoKICAgICAgICAvKiBub3QgYWxsIGVudHJpZXMg
YXJlIG1hcHBlZC4gZG8gdGhpcyB0byBhdm9pZCBHUCBldmVuIGlmIGh5cCAqLwogICAgICAgIGlm
IChrZGJfcmVhZF9tZW0odGFkZHIsIChrZGJieXRfdCAqKSZ1MSwgc2l6ZW9mKHUxKSwgZG9taWQp
PT0wIHx8IAogICAgICAgICAgICB1MS5ndmFsID09IDAgfHwgdTEuZ2R0ZS5TID09IDEpIHsKICAg
ICAgICAgICAgY29udGludWU7CiAgICAgICAgfQogICAgICAgIGlkeCA9ICh0YWRkciAtIHN0YXJ0
X2FkZHIpIC8gODsKICAgICAgICB0YWRkciArPSBzaXplb2YodWxvbmcpOwogICAgICAgIGlmIChr
ZGJfcmVhZF9tZW0odGFkZHIsIChrZGJieXRfdCAqKSZ1cHBlciwgOCwgZG9taWQpID09IDApIHsK
ICAgICAgICAgICAga2RicCgiQ291bGQgbm90IHJlYWQgdXBwZXIgOCBieXRlcyBvZiBzeXN0ZW0g
ZGVzY1xuIik7CiAgICAgICAgICAgIHVwcGVyID0gMDsKICAgICAgICB9CiAgICAgICAgYWNjdHlw
ZSA9IHUxLmdkdGUuYWNjdHlwZTsKICAgICAgICBpZiAoYWNjdHlwZSAhPSAyICYmIGFjY3R5cGUg
IT0gOSAmJiBhY2N0eXBlICE9IDExICYmIGFjY3R5cGUgIT0xMiAmJgogICAgICAgICAgICBhY2N0
eXBlICE9IDE0ICYmIGFjY3R5cGUgIT0gMTUpCiAgICAgICAgICAgIGNvbnRpbnVlOwoKICAgICAg
ICBrZGJwKCJbJTA0eF0gJTA0eCB2YWw6JTAxNmx4IERQTDoleCBQOiVkIHR5cGU6JXggIiwKICAg
ICAgICAgICAgIGlkeCwgKGlkeDw8MyksIHUxLmd2YWwsIHUxLmdkdGUuRFBMLCB1MS5nZHRlLlAs
IGFjY3R5cGUpOyAKCiAgICAgICAgdXBwZXIgPSAodTY0KSgodTY0KSh1cHBlciAmIDB4RkZGRkZG
RkYpIDw8IDMyKTsKCiAgICAgICAgLyogVm9sIDNBOiB0YWJsZTogMy0yICBwYWdlOiAzLTE5ICov
CiAgICAgICAgaWYgKGFjY3R5cGUgPT0gMikgewogICAgICAgICAgICBrZGJwKCJMRFQgZ2F0ZSAo
MDAxMClcbiIpOwogICAgICAgIH0KICAgICAgICBlbHNlIGlmIChhY2N0eXBlID09IDkpIHsKICAg
ICAgICAgICAga2RicCgiVFNTIGF2YWlsIGdhdGUoMTAwMSlcbiIpOwogICAgICAgIH0KICAgICAg
ICBlbHNlIGlmIChhY2N0eXBlID09IDExKSB7CiAgICAgICAgICAgIGtkYnAoIlRTUyBidXN5IGdh
dGUoMTAxMSlcbiIpOwogICAgICAgIH0KICAgICAgICBlbHNlIGlmIChhY2N0eXBlID09IDEyKSB7
CiAgICAgICAgICAgIGtkYnAoIkNBTEwgZ2F0ZSAoMTEwMClcbiIpOwogICAgICAgIH0KICAgICAg
ICBlbHNlIGlmIChhY2N0eXBlID09IDE0KSB7CiAgICAgICAgICAgIGtkYnAoIklEVCBnYXRlICgx
MTEwKVxuIik7CiAgICAgICAgfQogICAgICAgIGVsc2UgaWYgKGFjY3R5cGUgPT0gMTUpIHsKICAg
ICAgICAgICAga2RicCgiVHJhcCBnYXRlICgxMTExKVxuIik7IAogICAgICAgIH0KCiAgICAgICAg
aWYgKGFjY3R5cGUgPT0gMiB8fCBhY2N0eXBlID09IDkgfHwgYWNjdHlwZSA9PSAxMSkgewogICAg
ICAgICAgICBrZGJwKCIgICAgICAgIEFWTDolZCBHOiVkIEJhc2UgQWRkcjolMDE2bHggTGltaXQ6
JXhcbiIsCiAgICAgICAgICAgICAgICAgdTEuZ2R0ZS5BVkwsIHUxLmdkdGUuRywgIAogICAgICAg
ICAgICAgICAgICh1NjQpKCh1NjQpdTEuZ2R0ZS5iYXNlMCB8ICgodTY0KXUxLmdkdGUuYmFzZTE8
PDI0KXwgdXBwZXIpLAogICAgICAgICAgICAgICAgICh1MzIpdTEuZ2R0ZS5saW1pdDAgfCAodTMy
KSgodTMyKXUxLmdkdGUubGltaXQxPDwxNikpOwoKICAgICAgICB9IGVsc2UgaWYgKGFjY3R5cGUg
PT0gMTIpIHsKICAgICAgICAgICAgdW5pb24gc2dkdGVfdSB1MjsKICAgICAgICAgICAgdTIuc2d2
YWwgPSB1MS5ndmFsOwoKICAgICAgICAgICAgYWRkcjY0ID0gKHU2NCkoKHU2NCl1Mi5jZ2R0ZS5v
ZmZzMCB8IAogICAgICAgICAgICAgICAgICAgICAgICAgICAodTY0KSgodTY0KXUyLmNnZHRlLm9m
ZnMxPDwxNikgfCB1cHBlcik7CiAgICAgICAgICAgIGtkYnAoIiAgICAgICAgRW50cnk6ICUwNHg6
JTAxNmx4XG4iLCB1Mi5jZ2R0ZS5zZWwsIGFkZHI2NCk7CiAgICAgICAgfSBlbHNlIGlmIChhY2N0
eXBlID09IDE0IHx8IGFjY3R5cGUgPT0gMTUpIHsKICAgICAgICAgICAgdW5pb24gc2dkdGVfdSB1
MjsKICAgICAgICAgICAgdTIuc2d2YWwgPSB1MS5ndmFsOwoKICAgICAgICAgICAgYWRkcjY0ID0g
KHU2NCkoKHU2NCl1Mi5pZ2R0ZS5vZmZzMCB8IAogICAgICAgICAgICAgICAgICAgICAgICAgICAo
dTY0KSgodTY0KXUyLmlnZHRlLm9mZnMxPDwxNikgfCB1cHBlcik7CiAgICAgICAgICAgIGtkYnAo
IiAgICAgICAgRW50cnk6ICUwNHg6JTAxNmx4IGlzdDolMDN4XG4iLCB1Mi5pZ2R0ZS5zZWwsIGFk
ZHI2NCwKICAgICAgICAgICAgICAgICB1Mi5pZ2R0ZS5pc3QpOwogICAgICAgIH0gZWxzZSAKICAg
ICAgICAgICAga2RicCgiIEVycm9yOiBVbnJlY29uZ2l6ZWQgdHlwZTolbHhcbiIsIGFjY3R5cGUp
OwogICAgfQogICAgcmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7Cn0KCi8qIERpc3BsYXkgc2NoZWR1
bGVyIGJhc2ljIGFuZCBleHRlbmRlZCBpbmZvICovCnN0YXRpYyBrZGJfY3B1X2NtZF90CmtkYl91
c2dmX3NjaGVkKHZvaWQpCnsKICAgIGtkYnAoInNjaGVkOiBzaG93IHNjaGVkdWxhciBpbmZvIGFu
ZCBydW4gcXVldWVzXG4iKTsKICAgIHJldHVybiBLREJfQ1BVX01BSU5fS0RCOwp9CnN0YXRpYyBr
ZGJfY3B1X2NtZF90CmtkYl9jbWRmX3NjaGVkKGludCBhcmdjLCBjb25zdCBjaGFyICoqYXJndiwg
c3RydWN0IGNwdV91c2VyX3JlZ3MgKnJlZ3MpCnsKICAgIGlmIChhcmdjID4gMSAmJiAqYXJndlsx
XSA9PSAnPycpCiAgICAgICAgcmV0dXJuIGtkYl91c2dmX3NjaGVkKCk7CgogICAga2RiX3ByaW50
X3NjaGVkX2luZm8oKTsKICAgIHJldHVybiBLREJfQ1BVX01BSU5fS0RCOwp9CgovKiBEaXNwbGF5
IE1NVSBiYXNpYyBhbmQgZXh0ZW5kZWQgaW5mbyAqLwpzdGF0aWMga2RiX2NwdV9jbWRfdAprZGJf
dXNnZl9tbXUodm9pZCkKewogICAga2RicCgibW11OiBwcmludCBiYXNpYyBNTVUgaW5mb1xuIik7
CiAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKfQpzdGF0aWMga2RiX2NwdV9jbWRfdAprZGJf
Y21kZl9tbXUoaW50IGFyZ2MsIGNvbnN0IGNoYXIgKiphcmd2LCBzdHJ1Y3QgY3B1X3VzZXJfcmVn
cyAqcmVncykKewogICAgaW50IGNwdTsKCiAgICBpZiAoYXJnYyA+IDEgJiYgKmFyZ3ZbMV0gPT0g
Jz8nKQogICAgICAgIHJldHVybiBrZGJfdXNnZl9tbXUoKTsKCiAgICBrZGJwKCJNTVUgSW5mbzpc
biIpOwogICAga2RicCgidG90YWwgIHBhZ2VzOiAlbHhcbiIsIHRvdGFsX3BhZ2VzKTsKICAgIGtk
YnAoIm1heCBwYWdlL21mbjogJWx4XG4iLCBtYXhfcGFnZSk7CiAgICBrZGJwKCJmcmFtZV90YWJs
ZTogICVwXG4iLCBmcmFtZV90YWJsZSk7CiAgICBrZGJwKCJESVJFQ1RNQVBfVklSVF9TVEFSVDog
ICVseFxuIiwgRElSRUNUTUFQX1ZJUlRfU1RBUlQpOwogICAga2RicCgiSFlQRVJWSVNPUl9WSVJU
X1NUQVJUOiAlbHhcbiIsIEhZUEVSVklTT1JfVklSVF9TVEFSVCk7CiAgICBrZGJwKCJIWVBFUlZJ
U09SX1ZJUlRfRU5EOiAgICVseFxuIiwgSFlQRVJWSVNPUl9WSVJUX0VORCk7CiAgICBrZGJwKCJS
T19NUFRfVklSVF9TVEFSVDogICAgICVseFxuIiwgUk9fTVBUX1ZJUlRfU1RBUlQpOwogICAga2Ri
cCgiUEVSRE9NQUlOX1ZJUlRfU1RBUlQ6ICAlbHhcbiIsIFBFUkRPTUFJTl9WSVJUX1NUQVJUKTsK
ICAgIGtkYnAoIkNPTkZJR19QQUdJTkdfTEVWRUxTOiVkXG4iLCBDT05GSUdfUEFHSU5HX0xFVkVM
Uyk7CiAgICBrZGJwKCJfX0hZUEVSVklTT1JfQ09NUEFUX1ZJUlRfU1RBUlQ6ICVseFxuIiwgCiAg
ICAgICAgICh1bG9uZylfX0hZUEVSVklTT1JfQ09NUEFUX1ZJUlRfU1RBUlQpOwogICAga2RicCgi
Jk1QVFswXSA9PSAlMDE2bHhcbiIsICZtYWNoaW5lX3RvX3BoeXNfbWFwcGluZ1swXSk7CgogICAg
a2RicCgiXG5GSVJTVF9SRVNFUlZFRF9HRFRfUEFHRTogJXhcbiIsIEZJUlNUX1JFU0VSVkVEX0dE
VF9QQUdFKTsKICAgIGtkYnAoIkZJUlNUX1JFU0VSVkVEX0dEVF9FTlRSWTogJWx4XG4iLCAodWxv
bmcpRklSU1RfUkVTRVJWRURfR0RUX0VOVFJZKTsKICAgIGtkYnAoIkxBU1RfUkVTRVJWRURfR0RU
X0VOVFJZOiAlbHhcbiIsICh1bG9uZylMQVNUX1JFU0VSVkVEX0dEVF9FTlRSWSk7CiAgICBrZGJw
KCIgIFBlciBjcHUgbm9uLWNvbXBhdCBnZHRfdGFibGU6XG4iKTsKICAgIGZvcl9lYWNoX29ubGlu
ZV9jcHUoY3B1KSB7CiAgICAgICAga2RicCgiXHRjcHU6JWQgIGdkdF90YWJsZTolcFxuIiwgY3B1
LCBwZXJfY3B1KGdkdF90YWJsZSwgY3B1KSk7CiAgICB9CiAgICBrZGJwKCIgIFBlciBjcHUgY29t
cGF0IGdkdF90YWJsZTpcbiIpOwogICAgZm9yX2VhY2hfb25saW5lX2NwdShjcHUpIHsKICAgICAg
ICBrZGJwKCJcdGNwdTolZCAgZ2R0X3RhYmxlOiVwXG4iLCBjcHUsIHBlcl9jcHUoY29tcGF0X2dk
dF90YWJsZSwgY3B1KSk7CiAgICB9CiAgICBrZGJwKCJcbiIpOwogICAga2RicCgiICBQZXIgY3B1
IHRzczpcbiIpOwogICAgZm9yX2VhY2hfb25saW5lX2NwdShjcHUpIHsKICAgICAgICBzdHJ1Y3Qg
dHNzX3N0cnVjdCAqdHNzcCA9ICZwZXJfY3B1KGluaXRfdHNzLCBjcHUpOwogICAgICAgIGtkYnAo
Ilx0Y3B1OiVkICB0c3M6JXAgKHJzcDA6JTAxNmx4KVxuIiwgY3B1LCB0c3NwLCB0c3NwLT5yc3Aw
KTsKICAgIH0KI2lmZGVmIFVTRVJfTUFQUElOR1NfQVJFX0dMT0JBTAogICAga2RicCgiVVNFUl9N
QVBQSU5HU19BUkVfR0xPQkFMIGlzIGRlZmluZWRcbiIpOwojZWxzZQogICAga2RicCgiVVNFUl9N
QVBQSU5HU19BUkVfR0xPQkFMIGlzIE5PVCBkZWZpbmVkXG4iKTsKI2VuZGlmCiAgICBrZGJwKCJc
biIpOwogICAgcmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7Cn0KCi8qIGZvciBIVk0vSFlCIGd1ZXN0
cywgZ28gdGhydSBFUFQuIEZvciBQViBndWVzdCB3ZSBuZWVkIHRvIGdvIHRvIHRoZSBidHJlZS4g
CiAqIGJ0cmVlOiBwZm5fdG9fbWZuX2ZyYW1lX2xpc3RfbGlzdCBpcyByb290IHRoYXQgcG9pbnRz
IChoYXMgbWZucyBvZikgdXB0byAxNgogKiBwYWdlcyAoY2FsbCAnZW0gbDIgbm9kZXMpIHRoYXQg
Y29udGFpbiBtZm5zIG9mIGd1ZXN0IHAybSB0YWJsZSBwYWdlcyAKICogTk9URTogbnVtIG9mIGVu
dHJpZXMgaW4gYSBwMm0gcGFnZSBpcyBzYW1lIGFzIG51bSBvZiBlbnRyaWVzIGluIGwyIG5vZGUg
Ki8Kc3RhdGljIG5vaW5saW5lIHVsb25nCmtkYl9ncGZuMm1mbihzdHJ1Y3QgZG9tYWluICpkcCwg
dWxvbmcgZ3BmbiwgcDJtX3R5cGVfdCAqdHlwZXApIAp7CiAgICBpbnQgaWR4OwoKICAgIGlmICgg
IXBhZ2luZ19tb2RlX3RyYW5zbGF0ZShkcCkgKSB7CiAgICAgICAgbWZuX3QgKm1mbl92YSwgbWZu
ID0gYXJjaF9nZXRfcGZuX3RvX21mbl9mcmFtZV9saXN0X2xpc3QoZHApOwogICAgICAgIGludCBn
X2xvbmdzeiA9IGtkYl9ndWVzdF9iaXRuZXNzKGRwLT5kb21haW5faWQpLzg7CiAgICAgICAgaW50
IGVudHJpZXNfcGVyX3BnID0gUEFHRV9TSVpFL2dfbG9uZ3N6OwogICAgICAgIGNvbnN0IGludCBz
aGlmdCA9IGdldF9jb3VudF9vcmRlcihlbnRyaWVzX3Blcl9wZyk7CgoJaWYgKCAhbWZuX3ZhbGlk
KG1mbikgKSB7CgkgICAga2RicCgiSW52YWxpZCBmcmFtZV9saXN0X2xpc3QgbWZuOiVseCBmb3Ig
bm9uLXhsYXRlIGd1ZXN0XG4iLCBtZm4pOwoJICAgIHJldHVybiBJTlZBTElEX01GTjsKCX0KCiAg
ICAgICAgbWZuX3ZhID0gbWFwX2RvbWFpbl9wYWdlKG1mbik7CiAgICAgICAgaWR4ID0gZ3BmbiA+
PiAyKnNoaWZ0OyAgICAgLyogaW5kZXggaW4gcm9vdCBwYWdlL25vZGUgKi8KICAgICAgICBpZiAo
aWR4ID4gMTUpIHsKICAgICAgICAgICAga2RicCgiZ3BmbjolbHggaWR4OiV4IG5vdCBpbiBmcmFt
ZSBsaXN0IGxpbWl0IG9mIHoxNlxuIiwgZ3BmbiwgaWR4KTsKICAgICAgICAgICAgdW5tYXBfZG9t
YWluX3BhZ2UobWZuX3ZhKTsKICAgICAgICAgICAgcmV0dXJuIElOVkFMSURfTUZOOwogICAgICAg
IH0KICAgICAgICBtZm4gPSAoZ19sb25nc3ogPT0gNCkgPyAoKGludCAqKW1mbl92YSlbaWR4XSA6
IG1mbl92YVtpZHhdOwogICAgICAgIGlmIChtZm49PTApIHsKICAgICAgICAgICAga2RicCgiTm8g
bWZuIGZvciBpZHg6JWQgZm9yIGdwZm46JWx4IGluIHJvb3QgcGdcbiIsIGlkeCwgZ3Bmbik7CiAg
ICAgICAgICAgIHVubWFwX2RvbWFpbl9wYWdlKG1mbl92YSk7CiAgICAgICAgICAgIHJldHVybiBJ
TlZBTElEX01GTjsKICAgICAgICB9CiAgICAgICAgbWZuX3ZhID0gbWFwX2RvbWFpbl9wYWdlKG1m
bik7CiAgICAgICAgS0RCR1AxKCJwMm06IGlkeDoleCBmbGw6JWx4IG1mbiBvZiAybmQgbHZsIHBh
Z2U6JWx4XG4iLCBpZHgsCiAgICAgICAgICAgICAgIGFyY2hfZ2V0X3Bmbl90b19tZm5fZnJhbWVf
bGlzdF9saXN0KGRwKSwgbWZuKTsKCiAgICAgICAgaWR4ID0gKGdwZm4+PnNoaWZ0KSAmICgoMTw8
c2hpZnQpLTEpOyAgICAgLyogaWR4IGluIGwyIG5vZGUgKi8KICAgICAgICBtZm4gPSAoZ19sb25n
c3ogPT0gNCkgPyAoKGludCAqKW1mbl92YSlbaWR4XSA6IG1mbl92YVtpZHhdOwogICAgICAgIHVu
bWFwX2RvbWFpbl9wYWdlKG1mbl92YSk7CiAgICAgICAgaWYgKG1mbiA9PSAwKSB7CiAgICAgICAg
ICAgIGtkYnAoIk5vIG1mbiBlbnRyeSBhdDoleCBpbiAybmQgbHZsIHBnIGZvciBncGZuOiVseFxu
IiwgaWR4LCBncGZuKTsKICAgICAgICAgICAgcmV0dXJuIElOVkFMSURfTUZOOwogICAgICAgIH0K
ICAgICAgICBLREJHUDEoInAybTogaWR4OiV4ICBtZm4gb2YgcDJtIHBhZ2U6JWx4XG4iLCBpZHgs
IG1mbik7IAogICAgICAgIG1mbl92YSA9IG1hcF9kb21haW5fcGFnZShtZm4pOwogICAgICAgIGlk
eCA9IGdwZm4gJiAoKDE8PHNoaWZ0KS0xKTsKICAgICAgICBtZm4gPSAoZ19sb25nc3ogPT0gNCkg
PyAoKGludCAqKW1mbl92YSlbaWR4XSA6IG1mbl92YVtpZHhdOwogICAgICAgIHVubWFwX2RvbWFp
bl9wYWdlKG1mbl92YSk7CgoJKnR5cGVwID0gLTE7CiAgICAgICAgcmV0dXJuIG1mbjsKICAgIH0g
ZWxzZQogICAgICAgIHJldHVybiBtZm5feChnZXRfZ2ZuX3F1ZXJ5X3VubG9ja2VkKGRwLCBncGZu
LCB0eXBlcCkpOwoKICAgIHJldHVybiBJTlZBTElEX01GTjsKfQoKLyogZ2l2ZW4gYSBwZm4sIGZp
bmQgaXQncyBtZm4gKi8Kc3RhdGljIGtkYl9jcHVfY21kX3QKa2RiX3VzZ2ZfcDJtKHZvaWQpCnsK
ICAgIGtkYnAoInAybSBkb21pZCAweGdwZm4gOiBncGZuIHRvIG1mblxuIik7CiAgICByZXR1cm4g
S0RCX0NQVV9NQUlOX0tEQjsKfQpzdGF0aWMga2RiX2NwdV9jbWRfdAprZGJfY21kZl9wMm0oaW50
IGFyZ2MsIGNvbnN0IGNoYXIgKiphcmd2LCBzdHJ1Y3QgY3B1X3VzZXJfcmVncyAqcmVncykKewog
ICAgc3RydWN0IGRvbWFpbiAqZHA7CiAgICB1bG9uZyBncGZuLCBtZm49MHhkZWFkYmVlZjsKICAg
IHAybV90eXBlX3QgcDJtdHlwZSA9IC0xOwoKICAgIGlmIChhcmdjIDwgMyAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgfHwKICAgICAgICAoZHA9a2RiX3N0cmRvbWlkMnB0cihhcmd2
WzFdLCAxKSkgPT0gTlVMTCAgfHwKICAgICAgICAha2RiX3N0cjJ1bG9uZyhhcmd2WzJdLCAmZ3Bm
bikpIHsKCiAgICAgICAgcmV0dXJuIGtkYl91c2dmX3AybSgpOwogICAgfQogICAgbWZuID0ga2Ri
X2dwZm4ybWZuKGRwLCBncGZuLCAmcDJtdHlwZSk7CiAgICBpZiAoIHBhZ2luZ19tb2RlX3RyYW5z
bGF0ZShkcCkgKQogICAgICAgIGtkYnAoInAybVslbHhdID09ICVseCB0eXBlOiVkLzB4JXhcbiIs
IGdwZm4sIG1mbiwgcDJtdHlwZSwgcDJtdHlwZSk7CiAgICBlbHNlIAogICAgICAgIGtkYnAoInAy
bVslbHhdID09ICVseCB0eXBlOk4vQShQVilcbiIsIGdwZm4sIG1mbik7CgogICAgcmV0dXJuIEtE
Ql9DUFVfTUFJTl9LREI7Cn0KCi8qIGdpdmVuIGFuIG1mbiwgbG9va3VwIHBmbiBpbiB0aGUgTVBU
ICovCnN0YXRpYyBrZGJfY3B1X2NtZF90CmtkYl91c2dmX20ycCh2b2lkKQp7CiAgICBrZGJwKCJt
MnAgMHhtZm46IG1mbiB0byBwZm5cbiIpOwogICAgcmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7Cn0K
c3RhdGljIGtkYl9jcHVfY21kX3QKa2RiX2NtZGZfbTJwKGludCBhcmdjLCBjb25zdCBjaGFyICoq
YXJndiwgc3RydWN0IGNwdV91c2VyX3JlZ3MgKnJlZ3MpCnsKICAgIG1mbl90IG1mbjsKICAgIGlm
IChhcmdjID4gMSAmJiBrZGJfc3RyMnVsb25nKGFyZ3ZbMV0sICZtZm4pKQogICAgICAgIGlmICht
Zm5fdmFsaWQobWZuKSkKICAgICAgICAgICAga2RicCgibXB0WyV4XSA9PSAlbHhcbiIsIG1mbiwg
bWFjaGluZV90b19waHlzX21hcHBpbmdbbWZuXSk7CiAgICAgICAgZWxzZQogICAgICAgICAgICBr
ZGJwKCJJbnZhbGlkIG1mbjolbHhcbiIsIG1mbik7CiAgICBlbHNlCiAgICAgICAga2RiX3VzZ2Zf
bTJwKCk7CiAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKfQoKc3RhdGljIHZvaWQgCmtkYl9w
cl9wZ19wZ3RfZmxkcyh1bnNpZ25lZCBsb25nIHR5cGVfaW5mbykKewogICAgc3dpdGNoICh0eXBl
X2luZm8gJiBQR1RfdHlwZV9tYXNrKSB7CiAgICAgICAgY2FzZSAoUEdUX2wxX3BhZ2VfdGFibGUp
OgogICAgICAgICAgICBrZGJwKCIgICAgcGFnZSBpcyBQR1RfbDFfcGFnZV90YWJsZVxuIik7CiAg
ICAgICAgICAgIGJyZWFrOwogICAgICAgIGNhc2UgUEdUX2wyX3BhZ2VfdGFibGU6CiAgICAgICAg
ICAgIGtkYnAoIiAgICBwYWdlIGlzIFBHVF9sMl9wYWdlX3RhYmxlXG4iKTsKICAgICAgICAgICAg
YnJlYWs7CiAgICAgICAgY2FzZSBQR1RfbDNfcGFnZV90YWJsZToKICAgICAgICAgICAga2RicCgi
ICAgIHBhZ2UgaXMgUEdUX2wzX3BhZ2VfdGFibGVcbiIpOwogICAgICAgICAgICBicmVhazsKICAg
ICAgICBjYXNlIFBHVF9sNF9wYWdlX3RhYmxlOgogICAgICAgICAgICBrZGJwKCIgICAgcGFnZSBp
cyBQR1RfbDRfcGFnZV90YWJsZVxuIik7CiAgICAgICAgICAgIGJyZWFrOwogICAgICAgIGNhc2Ug
UEdUX3NlZ19kZXNjX3BhZ2U6CiAgICAgICAgICAgIGtkYnAoIiAgICBwYWdlIGlzIHNlZyBkZXNj
IHBhZ2VcbiIpOwogICAgICAgICAgICBicmVhazsKICAgICAgICBjYXNlIFBHVF93cml0YWJsZV9w
YWdlOgogICAgICAgICAgICBrZGJwKCIgICAgcGFnZSBpcyB3cml0YWJsZSBwYWdlXG4iKTsKICAg
ICAgICAgICAgYnJlYWs7CiAgICAgICAgY2FzZSBQR1Rfc2hhcmVkX3BhZ2U6CiAgICAgICAgICAg
IGtkYnAoIiAgICBwYWdlIGlzIHNoYXJlZCBwYWdlXG4iKTsKICAgICAgICAgICAgYnJlYWs7CiAg
ICB9CiAgICBpZiAodHlwZV9pbmZvICYgUEdUX3Bpbm5lZCkKICAgICAgICBrZGJwKCIgICAgcGFn
ZSBpcyBwaW5uZWRcbiIpOwogICAgaWYgKHR5cGVfaW5mbyAmIFBHVF92YWxpZGF0ZWQpCiAgICAg
ICAga2RicCgiICAgIHBhZ2UgaXMgdmFsaWRhdGVkXG4iKTsKICAgIGlmICh0eXBlX2luZm8gJiBQ
R1RfcGFlX3hlbl9sMikKICAgICAgICBrZGJwKCIgICAgcGFnZSBpcyBQR1RfcGFlX3hlbl9sMlxu
Iik7CiAgICBpZiAodHlwZV9pbmZvICYgUEdUX3BhcnRpYWwpCiAgICAgICAga2RicCgiICAgIHBh
Z2UgaXMgUEdUX3BhcnRpYWxcbiIpOwogICAgaWYgKHR5cGVfaW5mbyAmIFBHVF9sb2NrZWQpCiAg
ICAgICAga2RicCgiICAgIHBhZ2UgaXMgUEdUX2xvY2tlZFxuIik7Cn0KCnN0YXRpYyB2b2lkCmtk
Yl9wcl9wZ19wZ2NfZmxkcyh1bnNpZ25lZCBsb25nIGNvdW50X2luZm8pCnsKICAgIGlmIChjb3Vu
dF9pbmZvICYgUEdDX2FsbG9jYXRlZCkKICAgICAgICBrZGJwKCIgIFBHQ19hbGxvY2F0ZWQiKTsK
ICAgIGlmIChjb3VudF9pbmZvICYgUEdDX3hlbl9oZWFwKQogICAgICAgIGtkYnAoIiAgUEdDX3hl
bl9oZWFwIik7CiAgICBpZiAoY291bnRfaW5mbyAmIFBHQ19wYWdlX3RhYmxlKQogICAgICAgIGtk
YnAoIiAgUEdDX3BhZ2VfdGFibGUiKTsKICAgIGlmIChjb3VudF9pbmZvICYgUEdDX2Jyb2tlbikK
ICAgICAgICBrZGJwKCIgIFBHQ19icm9rZW4iKTsKI2lmIFhFTl9WRVJTSU9OIDwgNCAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgIC8qIHhlbiAzLngueCAqLwogICAgaWYgKGNvdW50X2lu
Zm8gJiBQR0Nfb2ZmbGluaW5nKQogICAgICAgIGtkYnAoIiAgUEdDX29mZmxpbmluZyIpOwogICAg
aWYgKGNvdW50X2luZm8gJiBQR0Nfb2ZmbGluZWQpCiAgICAgICAga2RicCgiICBQR0Nfb2ZmbGlu
ZWQiKTsKI2Vsc2UKICAgIGlmIChjb3VudF9pbmZvICYgUEdDX3N0YXRlX2ludXNlKQogICAgICAg
IGtkYnAoIiAgUEdDX2ludXNlIik7CiAgICBpZiAoY291bnRfaW5mbyAmIFBHQ19zdGF0ZV9vZmZs
aW5pbmcpCiAgICAgICAga2RicCgiICBQR0Nfc3RhdGVfb2ZmbGluaW5nIik7CiAgICBpZiAoY291
bnRfaW5mbyAmIFBHQ19zdGF0ZV9vZmZsaW5lZCkKICAgICAgICBrZGJwKCIgIFBHQ19zdGF0ZV9v
ZmZsaW5lZCIpOwogICAgaWYgKGNvdW50X2luZm8gJiBQR0Nfc3RhdGVfZnJlZSkKICAgICAgICBr
ZGJwKCIgIFBHQ19zdGF0ZV9mcmVlIik7CiNlbmRpZgogICAga2RicCgiXG4iKTsKfQoKLyogcHJp
bnQgc3RydWN0IHBhZ2VfaW5mb3t9IGdpdmVuIHB0ciB0byBpdCBvciBhbiBtZm4KICogTk9URTog
dGhhdCBnaXZlbiBhbiBtZm4gdGhlcmUgc2VlbXMgbm8gd2F5IG9mIGtub3dpbmcgaG93IGl0J3Mg
dXNlZCwgc28KICogICAgICAgaGVyZSB3ZSBqdXN0IHByaW50IGFsbCBpbmZvIGFuZCBsZXQgdXNl
ciBkZWNpZGUgd2hhdCdzIGFwcGxpY2FibGUgKi8Kc3RhdGljIGtkYl9jcHVfY21kX3QKa2RiX3Vz
Z2ZfZHBhZ2Uodm9pZCkKewogICAga2RicCgiZHBhZ2UgbWZufHBhZ2UtcHRyIDogRGlzcGxheSBz
dHJ1Y3QgcGFnZVxuIik7CiAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKfQpzdGF0aWMga2Ri
X2NwdV9jbWRfdAprZGJfY21kZl9kcGFnZShpbnQgYXJnYywgY29uc3QgY2hhciAqKmFyZ3YsIHN0
cnVjdCBjcHVfdXNlcl9yZWdzICpyZWdzKQp7CiAgICB1bnNpZ25lZCBsb25nIHZhbDsKICAgIHN0
cnVjdCBwYWdlX2luZm8gKnBncDsKICAgIHN0cnVjdCBkb21haW4gKmRwOwoKICAgIGlmIChhcmdj
IDw9IDEgfHwgKmFyZ3ZbMV0gPT0gJz8nKSAKICAgICAgICByZXR1cm4ga2RiX3VzZ2ZfZHBhZ2Uo
KTsKCiAgICBpZiAoKGtkYl9zdHIydWxvbmcoYXJndlsxXSwgJnZhbCkgPT0gMCkgICAgICB8fAog
ICAgICAgICh2YWwgPCAgKHVsb25nKWZyYW1lX3RhYmxlICYmICFtZm5fdmFsaWQodmFsKSkpIHsK
CiAgICAgICAga2RicCgiSW52YWxpZCBhcmc6JXNcbiIsIGFyZ3ZbMV0pOwogICAgICAgIHJldHVy
biBLREJfQ1BVX01BSU5fS0RCOwogICAgfQogICAga2RicCgiUGFnZSBJbmZvOlxuIik7CiAgICBp
ZiAodmFsIDw9ICh1bG9uZylmcmFtZV90YWJsZSkgeyAgICAgICAvKiBhcmcgaXMgbWZuICovCiAg
ICAgICAgcGdwID0gbWZuX3RvX3BhZ2UodmFsKTsKICAgICAgICBrZGJwKCIgIG1mbjogJWx4IHBh
Z2VfaW5mbzolcFxuIiwgdmFsLCBwZ3ApOwogICAgfSBlbHNlIHsKICAgICAgICBwZ3AgPSAoc3Ry
dWN0IHBhZ2VfaW5mbyAqKXZhbDsgLyogYXJnIGlzIHN0cnVjdCBwYWdle30gKi8KICAgICAgICBp
ZiAocGdwIDwgZnJhbWVfdGFibGUgfHwgcGdwID49IGZyYW1lX3RhYmxlK21heF9wYWdlKSB7CiAg
ICAgICAgICAgIGtkYnAoIkludmFsaWQgcGFnZSBwdHIuIGJlbG93L2JleW9uZCBtYXhfcGFnZVxu
Iik7CiAgICAgICAgICAgIHJldHVybiBLREJfQ1BVX01BSU5fS0RCOwogICAgICAgIH0KICAgICAg
ICBrZGJwKCIgIG1mbjogJWx4IHBhZ2VfaW5mbzolcFxuIiwgcGFnZV90b19tZm4ocGdwKSwgcGdw
KTsKICAgIH0gCiAgICBrZGJwKCIgIGNvdW50X2luZm86ICUwMTZseCAgKHJlZmNudDogJXgpXG4i
LCBwZ3AtPmNvdW50X2luZm8sCiAgICAgICAgIHBncC0+Y291bnRfaW5mbyAmIFBHQ19jb3VudF9t
YXNrKTsKI2lmIFhFTl9WRVJTSU9OID4gMyB8fCBYRU5fU1VCVkVSU0lPTiA+IDMgICAgICAgICAg
ICAgLyogeGVuIDMuNC54IG9yIGxhdGVyICovCiAgICBrZGJfcHJfcGdfcGdjX2ZsZHMocGdwLT5j
b3VudF9pbmZvKTsKCiAgICBrZGJwKCJJbiB1c2UgaW5mbzpcbiIpOwogICAga2RicCgiICB0eXBl
X2luZm86JTAxNmx4XG4iLCBwZ3AtPnUuaW51c2UudHlwZV9pbmZvKTsKICAgIGtkYl9wcl9wZ19w
Z3RfZmxkcyhwZ3AtPnUuaW51c2UudHlwZV9pbmZvKTsKICAgIGRwID0gcGFnZV9nZXRfb3duZXIo
cGdwKTsKICAgIGtkYnAoIiAgZG9taWQ6JWQgKHBpY2tsZWQ6JWx4KVxuIiwgZHAgPyBkcC0+ZG9t
YWluX2lkIDogLTEsIAogICAgICAgICBwZ3AtPnYuaW51c2UuX2RvbWFpbik7CgogICAga2RicCgi
U2hhZG93IEluZm86XG4iKTsKICAgIGtkYnAoIiAgdHlwZToleCBwaW5uZWQ6JXggY291bnQ6JXhc
biIsIHBncC0+dS5zaC50eXBlLCBwZ3AtPnUuc2gucGlubmVkLAogICAgICAgICBwZ3AtPnUuc2gu
Y291bnQpOwogICAga2RicCgiICBiYWNrOiVseCAgc2hhZG93X2ZsYWdzOiV4ICBuZXh0X3NoYWRv
dzolbHhcbiIsIHBncC0+di5zaC5iYWNrLAogICAgICAgICBwZ3AtPnNoYWRvd19mbGFncywgcGdw
LT5uZXh0X3NoYWRvdyk7CgogICAga2RicCgiRnJlZSBJbmZvXG4iKTsKICAgIGtkYnAoIiAgbmVl
ZF90bGJmbHVzaDolZCBvcmRlcjolZCB0bGJmbHVzaF90aW1lc3RhbXA6JXhcbiIsCiAgICAgICAg
IHBncC0+dS5mcmVlLm5lZWRfdGxiZmx1c2gsIHBncC0+di5mcmVlLm9yZGVyLCAKICAgICAgICAg
cGdwLT50bGJmbHVzaF90aW1lc3RhbXApOwojZWxzZQogICAgaWYgKHBncC0+Y291bnRfaW5mbyAm
IFBHQ19hbGxvY2F0ZWQpICAgICAgICAgICAgLyogcGFnZSBhbGxvY2F0ZWQgKi8KICAgICAgICBr
ZGJwKCIgIFBHQ19hbGxvY2F0ZWQiKTsKICAgIGlmIChwZ3AtPmNvdW50X2luZm8gJiBQR0NfcGFn
ZV90YWJsZSkgICAgICAgICAgIC8qIHBhZ2UgdGFibGUgcGFnZSAqLwogICAgICAgIGtkYnAoIiAg
UEdDX3BhZ2VfdGFibGUiKTsKICAgIGtkYnAoIlxuIik7CiAgICBrZGJwKCIgIHBhZ2UgaXMgJXMg
eGVuIGhlYXAgcGFnZVxuIiwgaXNfeGVuX2hlYXBfcGFnZShwZ3ApID8gImEiOiJOT1QiKTsKICAg
IGtkYnAoIiAgY2FjaGVhdHRyOiV4XG4iLCAocGdwLT5jb3VudF9pbmZvPj5QR0NfY2FjaGVhdHRy
X2Jhc2UpICYgNyk7CiAgICBpZiAocGdwLT5jb3VudF9pbmZvICYgUEdDX2NvdW50X21hc2spIHsg
ICAgICAgICAvKiBwYWdlIGluIHVzZSAqLwogICAgICAgIGRwID0gcGdwLT51LmludXNlLl9kb21h
aW47ICAgICAgICAgLyogcGlja2xlZCBkb21haW4gKi8KICAgICAgICBrZGJwKCIgIHBhZ2UgaXMg
aW4gdXNlXG4iKTsKICAgICAgICBrZGJwKCIgICAgZG9taWQ6ICVkICAocGlja2xlZCBkb206JXgp
XG4iLCAKICAgICAgICAgICAgIGRwID8gKHVucGlja2xlX2RvbXB0cihkcCkpLT5kb21haW5faWQg
OiAtMSwgZHApOwogICAgICAgIGtkYnAoIiAgICB0eXBlX2luZm86ICVseFxuIiwgcGdwLT51Lmlu
dXNlLnR5cGVfaW5mbyk7CiAgICAgICAga2RiX3BydF9wZ190eXBlKHBncC0+dS5pbnVzZS50eXBl
X2luZm8pOwogICAgfSBlbHNlIHsgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgIC8qIHBhZ2UgaXMgZnJlZSAqLwogICAgICAgIGtkYnAoIiAgcGFnZSBpcyBmcmVlXG4iKTsK
ICAgICAgICBrZGJwKCIgICAgb3JkZXI6ICV4XG4iLCBwZ3AtPnUuZnJlZS5vcmRlcik7CiAgICAg
ICAga2RicCgiICAgIGNwdW1hc2s6ICVseFxuIiwgcGdwLT51LmZyZWUuY3B1bWFzay5iaXRzKTsK
ICAgIH0KICAgIGtkYnAoIiAgdGxiZmx1c2gvc2hhZG93X2ZsYWdzOiAlbHhcbiIsIHBncC0+c2hh
ZG93X2ZsYWdzKTsKI2VuZGlmCiAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKfQoKLyogZGlz
cGxheSBhc2tlZCBtc3IgdmFsdWUgKi8Kc3RhdGljIGtkYl9jcHVfY21kX3QKa2RiX3VzZ2ZfZG1z
cih2b2lkKQp7CiAgICBrZGJwKCJkbXNyIGFkZHJlc3MgOiBEaXNwbGF5IG1zciB2YWx1ZVxuIik7
CiAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKfQpzdGF0aWMga2RiX2NwdV9jbWRfdAprZGJf
Y21kZl9kbXNyKGludCBhcmdjLCBjb25zdCBjaGFyICoqYXJndiwgc3RydWN0IGNwdV91c2VyX3Jl
Z3MgKnJlZ3MpCnsKICAgIHVuc2lnbmVkIGxvbmcgYWRkciwgdmFsOwoKICAgIGlmIChhcmdjIDw9
IDEgfHwgKmFyZ3ZbMV0gPT0gJz8nKSAKICAgICAgICByZXR1cm4ga2RiX3VzZ2ZfZG1zcigpOwoK
ICAgIGlmICgoa2RiX3N0cjJ1bG9uZyhhcmd2WzFdLCAmYWRkcikgPT0gMCkpIHsKICAgICAgICBr
ZGJwKCJJbnZhbGlkIGFyZzolc1xuIiwgYXJndlsxXSk7CiAgICAgICAgcmV0dXJuIEtEQl9DUFVf
TUFJTl9LREI7CiAgICB9CiAgICByZG1zcmwoYWRkciwgdmFsKTsKICAgIGtkYnAoIm1zcjogJWx4
ICB2YWw6JWx4XG4iLCBhZGRyLCB2YWwpOwoKICAgIHJldHVybiBLREJfQ1BVX01BSU5fS0RCOwp9
CgovKiBleGVjdXRlIGNwdWlkIGZvciBnaXZlbiB2YWx1ZSAqLwpzdGF0aWMga2RiX2NwdV9jbWRf
dAprZGJfdXNnZl9jcHVpZCh2b2lkKQp7CiAgICBrZGJwKCJjcHVpZCBlYXggOiBEaXNwbGF5IGNw
dWlkIHZhbHVlIHJldHVybmVkIGluIHJheFxuIik7CiAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tE
QjsKfQpzdGF0aWMga2RiX2NwdV9jbWRfdAprZGJfY21kZl9jcHVpZChpbnQgYXJnYywgY29uc3Qg
Y2hhciAqKmFyZ3YsIHN0cnVjdCBjcHVfdXNlcl9yZWdzICpyZWdzKQp7CiAgICB1bnNpZ25lZCBs
b25nIHJheD0wLCByYng9MCwgcmN4PTAsIHJkeD0wOwoKICAgIGlmIChhcmdjIDw9IDEgfHwgKmFy
Z3ZbMV0gPT0gJz8nKSAKICAgICAgICByZXR1cm4ga2RiX3VzZ2ZfY3B1aWQoKTsKCiAgICBpZiAo
KGtkYl9zdHIydWxvbmcoYXJndlsxXSwgJnJheCkgPT0gMCkpIHsKICAgICAgICBrZGJwKCJJbnZh
bGlkIGFyZzolc1xuIiwgYXJndlsxXSk7CiAgICAgICAgcmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7
CiAgICB9CiNpZiAwCiAgICBfX2FzbV9fIF9fdm9sYXRpbGVfXyAoCiAgICAgICAgICAgIC8qICJw
dXNobCAlJXJheCAgXG4iICovCgogICAgICAgICAgICAibW92bCAlMCwgJSVyYXggIFxuIgogICAg
ICAgICAgICAiY3B1aWQgICAgICAgICAgIFxuIiAKICAgICAgICAgICAgOiAiPSZhIiAocmF4KSwg
Ij1iIiAocmJ4KSwgIj1jIiAocmN4KSwgIj1kIiAocmR4KQogICAgICAgICAgICA6ICIwIiAocmF4
KQogICAgICAgICAgICA6ICJyYXgiLCAicmJ4IiwgInJjeCIsICJyZHgiLCAibWVtb3J5Iik7CiNl
bmRpZgogICAgY3B1aWQocmF4LCAmcmF4LCAmcmJ4LCAmcmN4LCAmcmR4KTsKICAgIGtkYnAoInJh
eDogJTAxNmx4ICByYng6JTAxNmx4IHJjeDolMDE2bHggcmR4OiUwMTZseFxuIiwgcmF4LCByYngs
CiAgICAgICAgIHJjeCwgcmR4KTsKICAgIHJldHVybiBLREJfQ1BVX01BSU5fS0RCOwp9CgovKiBl
eGVjdXRlIGNwdWlkIGZvciBnaXZlbiB2YWx1ZSAqLwpzdGF0aWMga2RiX2NwdV9jbWRfdAprZGJf
dXNnZl93ZXB0KHZvaWQpCnsKICAgIGtkYnAoIndlcHQgZG9taWQgZ2ZuOiB3YWxrIGVwdCB0YWJs
ZSBmb3IgZ2l2ZW4gZG9taWQgYW5kIGdmblxuIik7CiAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tE
QjsKfQpzdGF0aWMga2RiX2NwdV9jbWRfdAprZGJfY21kZl93ZXB0KGludCBhcmdjLCBjb25zdCBj
aGFyICoqYXJndiwgc3RydWN0IGNwdV91c2VyX3JlZ3MgKnJlZ3MpCnsKICAgIHN0cnVjdCBkb21h
aW4gKmRwOwogICAgdWxvbmcgZ2ZuOwoKICAgIGlmICgoYXJnYyA+IDEgJiYgKmFyZ3ZbMV0gPT0g
Jz8nKSB8fCBhcmdjICE9IDMpCiAgICAgICAgcmV0dXJuIGtkYl91c2dmX3dlcHQoKTsKICAgIGlm
ICgoZHA9a2RiX3N0cmRvbWlkMnB0cihhcmd2WzFdLCAxKSkgJiYga2RiX3N0cjJ1bG9uZyhhcmd2
WzJdLCAmZ2ZuKSkKICAgICAgICBlcHRfd2Fsa190YWJsZShkcCwgZ2ZuKTsKICAgIGVsc2UKICAg
ICAgICBrZGJfdXNnZl93ZXB0KCk7CgogICAgcmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7Cn0KCi8q
CiAqIFNhdmUgc3ltYm9scyBpbmZvIGZvciBhIGd1ZXN0LCBkb20wIG9yIG90aGVyLi4uCiAqLwpz
dGF0aWMga2RiX2NwdV9jbWRfdAprZGJfdXNnZl9zeW0odm9pZCkKewogICBrZGJwKCJzeW0gZG9t
aWQgJmthbGxzeW1zX25hbWVzICZrYWxsc3ltc19hZGRyZXNzZXMgJmthbGxzeW1zX251bV9zeW1z
XG4iKTsKICAga2RicCgiXHQgWyZrYWxsc3ltc190b2tlbl90YWJsZV0gWyZrYWxsc3ltc190b2tl
bl9pbmRleF1cbiIpOwogICBrZGJwKCJcdHRva2VuIF90YWJsZSBhbmQgX2luZGV4IE1VU1QgYmUg
c3BlY2lmaWVkIGZvciBlbDVcbiIpOwogICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKfQpzdGF0
aWMga2RiX2NwdV9jbWRfdAprZGJfY21kZl9zeW0oaW50IGFyZ2MsIGNvbnN0IGNoYXIgKiphcmd2
LCBzdHJ1Y3QgY3B1X3VzZXJfcmVncyAqcmVncykKewogICAgdWxvbmcgbmFtZXNwLCBhZGRyYXAs
IG51bXAsIHRva3RibHAsIHRva2lkeHA7CiAgICBkb21pZF90IGRvbWlkOwoKICAgIGlmIChhcmdj
IDwgNSkgewogICAgICAgIHJldHVybiBrZGJfdXNnZl9zeW0oKTsKICAgIH0KICAgIHRva3RibHAg
PSB0b2tpZHhwID0gMDsgICAgIC8qIG9wdGlvbmFsIHBhcmFtZXRlcnMgKi8KICAgIGlmIChrZGJf
c3RyMmRvbWlkKGFyZ3ZbMV0sICZkb21pZCwgMSkgJiYKICAgICAgICBrZGJfc3RyMnVsb25nKGFy
Z3ZbMl0sICZuYW1lc3ApICAgJiYKICAgICAgICBrZGJfc3RyMnVsb25nKGFyZ3ZbM10sICZhZGRy
YXApICAgJiYKICAgICAgICBrZGJfc3RyMnVsb25nKGFyZ3ZbNF0sICZudW1wKSAgICAgJiYgCiAg
ICAgICAgKGFyZ2M9PTUgfHwgKGFyZ2M9PTcgJiYga2RiX3N0cjJ1bG9uZyhhcmd2WzVdLCAmdG9r
dGJscCkgJiYKICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBrZGJfc3RyMnVsb25nKGFy
Z3ZbNl0sICZ0b2tpZHhwKSkpKSB7CgogICAgICAgIGtkYl9zYXZfZG9tX3N5bWluZm8oZG9taWQs
IG5hbWVzcCwgYWRkcmFwLG51bXAsdG9rdGJscCx0b2tpZHhwKTsKICAgIH0gZWxzZQogICAgICAg
IGtkYl91c2dmX3N5bSgpOwogICAgcmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7Cn0KCgovKiBtb2Rz
IGlzIHRoZSBkdW1iIGFzcyAmbW9kdWxlcy4gbW9kdWxlcyBpcyBzdHJ1Y3Qge254dCwgcHJldn0s
IGFuZCBub3QgcHRyICovCnN0YXRpYyB2b2lkCmtkYl9kdW1wX2xpbnV4X21vZHVsZXMoZG9taWRf
dCBkb21pZCwgdWxvbmcgbW9kcywgdWludCBueHRvZmZzLCB1aW50IG5tb2ZmcywgCiAgICAgICAg
ICAgICAgICAgICAgICAgdWludCBjb3Jlb2ZmcykKewogICAgY29uc3QgaW50IGJ1ZnN6ID0gNTY7
CiAgICBjaGFyIGJ1ZltidWZzel07CiAgICB1aW50NjRfdCBhZGRyLCBhZGRydmFsLCAqbnh0cHRy
LCAqbW9kcHRyOwogICAgdWludCBpLCBudW0gPSA4OwoKICAgIGlmIChrZGJfZ3Vlc3RfYml0bmVz
cyhkb21pZCkgPT0gMzIpCiAgICAgICAgbnVtID0gNDsKCiAgICAvKiBmaXJzdCByZWFkIG1vZHVs
ZXN7fS5uZXh0IHB0ciAqLwogICAgaWYgKGtkYl9yZWFkX21lbShtb2RzLCAoa2RiYnl0X3QgKikm
bnh0cHRyLCBudW0sIGRvbWlkKSAhPSBudW0pIHsKICAgICAgICBrZGJwKCJFUlJPUjogQ291bGQg
bm90IHJlYWQgbmV4dCBhdCBtb2Q6JXBcbiIsICh2b2lkICopbW9kcyk7CiAgICAgICAgcmV0dXJu
OwogICAgfQoKICAgIEtEQkdQKCJtb2RzOiVwIG54dHB0cjolcCBubW9mZnM6JXggY29yZW9mZnM6
JXhcbiIsICh2b2lkICopbW9kcywgbnh0cHRyLAogICAgICAgICAgbm1vZmZzLCBjb3Jlb2Zmcyk7
CgogICAgd2hpbGUgKCh1aW50NjRfdClueHRwdHIgIT0gbW9kcykgewoKICAgICAgICBtb2RwdHIg
PSAodWludDY0X3QgKikgKCh1bG9uZylueHRwdHIgLSBueHRvZmZzKTsKCiAgICAgICAgYWRkciA9
ICh1bG9uZyltb2RwdHIgKyBjb3Jlb2ZmczsKICAgICAgICBpZiAoa2RiX3JlYWRfbWVtKGFkZHIs
IChrZGJieXRfdCAqKSZhZGRydmFsLCBudW0sIGRvbWlkKSAhPSBudW0pIHsKICAgICAgICAgICAg
a2RicCgiRVJST1I6IENvdWxkIG5vdCByZWFkIG1vZCBhZGRyIGF0IDolcFxuIiwgKHZvaWQgKilh
ZGRyKTsKICAgICAgICAgICAgcmV0dXJuOwogICAgICAgIH0KCiAgICAgICAgS0RCR1AoIm1vZHB0
cjolcCBhZGRyOiVwXG4iLCBtb2RwdHIsICh2b2lkICopYWRkcik7CiAgICAgICAgYWRkciA9ICh1
bG9uZyltb2RwdHIgKyBubW9mZnM7CiAgICAgICAgaT0wOwogICAgICAgIGRvIHsKICAgICAgICAg
ICAgaWYgKGtkYl9yZWFkX21lbShhZGRyLCAoa2RiYnl0X3QgKikmYnVmW2ldLCAxLCBkb21pZCkg
IT0gMSkgewogICAgICAgICAgICAgICAga2RicCgiRVJST1I6Q291bGQgbm90IHJlYWQgbmFtZSBj
aCBhdCBhZGRyOiVwXG4iLCAodm9pZCAqKWFkZHIpOwogICAgICAgICAgICAgICAgcmV0dXJuOwog
ICAgICAgICAgICB9CiAgICAgICAgICAgIGFkZHIrKzsKICAgICAgICB9IHdoaWxlIChidWZbaV0g
JiYgaSsrIDwgYnVmc3opOwogICAgICAgIGJ1ZltidWZzei0xXSA9ICdcMCc7CgogICAgICAgIGtk
YnAoIiUwMTZseCAlMDE2bHggJXNcbiIsIG1vZHB0ciwgYWRkcnZhbCwgYnVmKTsKCiAgICAgICAg
aWYgKGtkYl9yZWFkX21lbSgodWxvbmcpbnh0cHRyLCAoa2RiYnl0X3QgKikmbnh0cHRyLCBudW0s
IGRvbWlkKSE9bnVtKSB7CiAgICAgICAgICAgIGtkYnAoIkVSUk9SOiBDb3VsZCBub3QgcmVhZCBu
ZXh0IGF0IG1vZDolcFxuIiwgKHZvaWQgKiltb2RzKTsKICAgICAgICAgICAgcmV0dXJuOwogICAg
ICAgIH0KICAgICAgICBLREJHUCgibnh0cHRyOiVwIGFkZHI6JXBcbiIsIG54dHB0ciwgKHZvaWQg
KilhZGRyKTsKICAgIH0gCn0KCi8qIERpc3BsYXkgbW9kdWxlcyBsb2FkZWQgaW4gbGludXggZ3Vl
c3QgKi8Kc3RhdGljIGtkYl9jcHVfY21kX3QKa2RiX3VzZ2ZfbW9kKHZvaWQpCnsKICAga2RicCgi
bW9kIGRvbWlkICZtb2R1bGVzIG5leHQtb2ZmcyBuYW1lLW9mZnMgbW9kdWxlX2NvcmUtb2Zmc1xu
Iik7CiAgIGtkYnAoIlx0d2hlcmUgbmV4dC1vZmZzOiAmKChzdHJ1Y3QgbW9kdWxlICopMCktPmxp
c3QubmV4dFxuIik7CiAgIGtkYnAoIlx0bmFtZS1vZmZzOiAmKChzdHJ1Y3QgbW9kdWxlICopMCkt
Pm5hbWUgZXRjLi5cbiIpOwogICBrZGJwKCJcdERpc3BsYXlzIGFsbCBsb2FkZWQgbW9kdWxlcyBp
biB0aGUgbGludXggZ3Vlc3RcbiIpOwogICBrZGJwKCJcdEVnOiBtb2QgMCBmZmZmZmZmZjgwMzAy
NzgwIDggMHgxOCAweDE3OFxuIik7CgogICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKfQoKc3Rh
dGljIGtkYl9jcHVfY21kX3QKa2RiX2NtZGZfbW9kKGludCBhcmdjLCBjb25zdCBjaGFyICoqYXJn
diwgc3RydWN0IGNwdV91c2VyX3JlZ3MgKnJlZ3MpCnsKICAgIHVsb25nIG1vZHMsIG54dG9mZnMs
IG5tb2ZmcywgY29yZW9mZnM7CiAgICBkb21pZF90IGRvbWlkOwoKICAgIGlmIChhcmdjIDwgNikg
ewogICAgICAgIHJldHVybiBrZGJfdXNnZl9tb2QoKTsKICAgIH0KICAgIGlmIChrZGJfc3RyMmRv
bWlkKGFyZ3ZbMV0sICZkb21pZCwgMSkgJiYKICAgICAgICBrZGJfc3RyMnVsb25nKGFyZ3ZbMl0s
ICZtb2RzKSAgICAgJiYKICAgICAgICBrZGJfc3RyMnVsb25nKGFyZ3ZbM10sICZueHRvZmZzKSAg
JiYKICAgICAgICBrZGJfc3RyMnVsb25nKGFyZ3ZbNF0sICZubW9mZnMpICAgJiYKICAgICAgICBr
ZGJfc3RyMnVsb25nKGFyZ3ZbNV0sICZjb3Jlb2ZmcykpIHsKCiAgICAgICAga2RicCgibW9kcHRy
IGFkZHJlc3MgbmFtZVxuIik7CiAgICAgICAga2RiX2R1bXBfbGludXhfbW9kdWxlcyhkb21pZCwg
bW9kcywgbnh0b2Zmcywgbm1vZmZzLCBjb3Jlb2Zmcyk7CiAgICB9IGVsc2UKICAgICAgICBrZGJf
dXNnZl9tb2QoKTsKICAgIHJldHVybiBLREJfQ1BVX01BSU5fS0RCOwp9CgovKiB0b2dnbGUga2Ri
IGRlYnVnIHRyYWNlIGxldmVsICovCnN0YXRpYyBrZGJfY3B1X2NtZF90CmtkYl91c2dmX2tkYmRi
Zyh2b2lkKQp7CiAgICBrZGJwKCJrZGJkYmcgOiB0cmFjZSBpbmZvIHRvIGRlYnVnIGtkYlxuIik7
CiAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKfQpzdGF0aWMga2RiX2NwdV9jbWRfdAprZGJf
Y21kZl9rZGJkYmcoaW50IGFyZ2MsIGNvbnN0IGNoYXIgKiphcmd2LCBzdHJ1Y3QgY3B1X3VzZXJf
cmVncyAqcmVncykKewogICAgaWYgKGFyZ2MgPiAxICYmICphcmd2WzFdID09ICc/JykKICAgICAg
ICByZXR1cm4ga2RiX3VzZ2Zfa2RiZGJnKCk7CgogICAga2RiZGJnID0gKGtkYmRiZz09MykgPyAw
IDogKGtkYmRiZysxKTsKICAgIGtkYnAoImtkYmRiZyBzZXQgdG86JWRcbiIsIGtkYmRiZyk7CiAg
ICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKfQoKc3RhdGljIGtkYl9jcHVfY21kX3QKa2RiX3Vz
Z2ZfcmVib290KHZvaWQpCnsKICAgIGtkYnAoInJlYm9vdDogcmVib290IHN5c3RlbVxuIik7CiAg
ICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKfQpzdGF0aWMga2RiX2NwdV9jbWRfdAprZGJfY21k
Zl9yZWJvb3QoaW50IGFyZ2MsIGNvbnN0IGNoYXIgKiphcmd2LCBzdHJ1Y3QgY3B1X3VzZXJfcmVn
cyAqcmVncykKewogICAgaWYgKGFyZ2MgPiAxICYmICphcmd2WzFdID09ICc/JykKICAgICAgICBy
ZXR1cm4ga2RiX3VzZ2ZfcmVib290KCk7CgogICAgbWFjaGluZV9yZXN0YXJ0KDUwMCk7CiAgICBy
ZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsgICAgICAgICAgICAgIC8qIG5vdCByZWFjaGVkICovCn0K
CgpzdGF0aWMga2RiX2NwdV9jbWRfdAprZGJfdXNnZl90cmNvbih2b2lkKQp7CiAgICBrZGJwKCJ0
cmNvbjogdHVybiB1c2VyIGFkZGVkIGtkYiB0cmFjaW5nIG9uXG4iKTsKICAgIHJldHVybiBLREJf
Q1BVX01BSU5fS0RCOwp9CnN0YXRpYyBrZGJfY3B1X2NtZF90CmtkYl9jbWRmX3RyY29uKGludCBh
cmdjLCBjb25zdCBjaGFyICoqYXJndiwgc3RydWN0IGNwdV91c2VyX3JlZ3MgKnJlZ3MpCnsKICAg
IGlmIChhcmdjID4gMSAmJiAqYXJndlsxXSA9PSAnPycpCiAgICAgICAgcmV0dXJuIGtkYl91c2dm
X3RyY29uKCk7CgogICAga2RiX3RyY29uID0gMTsKICAgIGtkYnAoImtkYiB0cmFjaW5nIGlzIG5v
dyBvblxuIik7CiAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKfQoKc3RhdGljIGtkYl9jcHVf
Y21kX3QKa2RiX3VzZ2ZfdHJjb2ZmKHZvaWQpCnsKICAgIGtkYnAoInRyY29mZjogdHVybiB1c2Vy
IGFkZGVkIGtkYiB0cmFjaW5nIG9mZlxuIik7CiAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsK
fQpzdGF0aWMga2RiX2NwdV9jbWRfdAprZGJfY21kZl90cmNvZmYoaW50IGFyZ2MsIGNvbnN0IGNo
YXIgKiphcmd2LCBzdHJ1Y3QgY3B1X3VzZXJfcmVncyAqcmVncykKewogICAgaWYgKGFyZ2MgPiAx
ICYmICphcmd2WzFdID09ICc/JykKICAgICAgICByZXR1cm4ga2RiX3VzZ2ZfdHJjb2ZmKCk7Cgog
ICAga2RiX3RyY29uID0gMDsKICAgIGtkYnAoImtkYiB0cmFjaW5nIGlzIG5vdyBvZmZcbiIpOwog
ICAgcmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7Cn0KCnN0YXRpYyBrZGJfY3B1X2NtZF90CmtkYl91
c2dmX3RyY3oodm9pZCkKewogICAga2RicCgidHJjeiA6IHplcm8gZW50aXJlIHRyYWNlIGJ1ZmZl
clxuIik7CiAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKfQpzdGF0aWMga2RiX2NwdV9jbWRf
dAprZGJfY21kZl90cmN6KGludCBhcmdjLCBjb25zdCBjaGFyICoqYXJndiwgc3RydWN0IGNwdV91
c2VyX3JlZ3MgKnJlZ3MpCnsKICAgIGlmIChhcmdjID4gMSAmJiAqYXJndlsxXSA9PSAnPycpCiAg
ICAgICAgcmV0dXJuIGtkYl91c2dmX3RyY3ooKTsKCiAgICBrZGJfdHJjemVybygpOwogICAgcmV0
dXJuIEtEQl9DUFVfTUFJTl9LREI7Cn0KCnN0YXRpYyBrZGJfY3B1X2NtZF90CmtkYl91c2dmX3Ry
Y3Aodm9pZCkKewogICAga2RicCgidHJjcCA6IGdpdmUgaGludHMgdG8gZHVtcCB0cmFjZSBidWZm
ZXIgdmlhIGR3L2RkIGNvbW1hbmRcbiIpOwogICAgcmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7Cn0K
c3RhdGljIGtkYl9jcHVfY21kX3QKa2RiX2NtZGZfdHJjcChpbnQgYXJnYywgY29uc3QgY2hhciAq
KmFyZ3YsIHN0cnVjdCBjcHVfdXNlcl9yZWdzICpyZWdzKQp7CiAgICBpZiAoYXJnYyA+IDEgJiYg
KmFyZ3ZbMV0gPT0gJz8nKQogICAgICAgIHJldHVybiBrZGJfdXNnZl90cmNwKCk7CgogICAga2Ri
X3RyY3AoKTsKICAgIHJldHVybiBLREJfQ1BVX01BSU5fS0RCOwp9CgovKiBwcmludCBzb21lIGJh
c2ljIGluZm8sIGNvbnN0YW50cywgZXRjLi4gKi8Kc3RhdGljIGtkYl9jcHVfY21kX3QKa2RiX3Vz
Z2ZfaW5mbyh2b2lkKQp7CiAgICBrZGJwKCJpbmZvIDogZGlzcGxheSBiYXNpYyBpbmZvLCBjb25z
dGFudHMsIGV0Yy4uXG4iKTsKICAgIHJldHVybiBLREJfQ1BVX01BSU5fS0RCOwp9CnN0YXRpYyBr
ZGJfY3B1X2NtZF90CmtkYl9jbWRmX2luZm8oaW50IGFyZ2MsIGNvbnN0IGNoYXIgKiphcmd2LCBz
dHJ1Y3QgY3B1X3VzZXJfcmVncyAqcmVncykKewogICAgc3RydWN0IGRvbWFpbiAqZHA7CiAgICBz
dHJ1Y3QgY3B1aW5mb194ODYgKmJjZHA7CgogICAgaWYgKGFyZ2MgPiAxICYmICphcmd2WzFdID09
ICc/JykKICAgICAgICByZXR1cm4ga2RiX3VzZ2ZfaW5mbygpOwoKICAgIGtkYnAoIlZlcnNpb246
ICVkLiVkLiVzICglc0AlcykgJXNcbiIsIHhlbl9tYWpvcl92ZXJzaW9uKCksIAogICAgICAgICB4
ZW5fbWlub3JfdmVyc2lvbigpLCB4ZW5fZXh0cmFfdmVyc2lvbigpLCB4ZW5fY29tcGlsZV9ieSgp
LCAKICAgICAgICAgeGVuX2NvbXBpbGVfZG9tYWluKCksIHhlbl9jb21waWxlX2RhdGUoKSk7CiAg
ICBrZGJwKCJfX1hFTl9MQVRFU1RfSU5URVJGQUNFX1ZFUlNJT05fXyA6IDB4JXhcbiIsIAogICAg
ICAgICBfX1hFTl9MQVRFU1RfSU5URVJGQUNFX1ZFUlNJT05fXyk7CiAgICBrZGJwKCJfX1hFTl9J
TlRFUkZBQ0VfVkVSU0lPTl9fOiAweCV4XG4iLCBfX1hFTl9JTlRFUkZBQ0VfVkVSU0lPTl9fKTsK
CiAgICBiY2RwID0gJmJvb3RfY3B1X2RhdGE7CiAgICBrZGJwKCJDUFU6IChhbGwgZGVjaW1hbCki
KTsKICAgICAgICBpZiAoYmNkcC0+eDg2X3ZlbmRvciA9PSBYODZfVkVORE9SX0FNRCkKICAgICAg
ICAgICAga2RicCgiIEFNRCIpOwogICAgICAgIGVsc2UKICAgICAgICAgICAga2RicCgiIElOVEVM
Iik7CiAgICAgICAga2RicCgiIGZhbWlseTolZCBtb2RlbDolZFxuIiwgYmNkcC0+eDg2LCBiY2Rw
LT54ODZfbW9kZWwpOwogICAgICAgIGtkYnAoIiAgICAgdmVuZG9yX2lkOiUxNnMgbW9kZWxfaWQ6
JTY0c1xuIiwgYmNkcC0+eDg2X3ZlbmRvcl9pZCwKICAgICAgICAgICAgIGJjZHAtPng4Nl9tb2Rl
bF9pZCk7CiAgICAgICAga2RicCgiICAgICBjcHVpZGx2bDolZCBjYWNoZTpzejolZCBhbGlnbjol
ZFxuIiwgYmNkcC0+Y3B1aWRfbGV2ZWwsCiAgICAgICAgICAgICBiY2RwLT54ODZfY2FjaGVfc2l6
ZSwgYmNkcC0+eDg2X2NhY2hlX2FsaWdubWVudCk7CiAgICAgICAga2RicCgiICAgICBwb3dlcjol
ZCBjb3JlczogbWF4OiVkIGJvb3RlZDolZCBzaWJsaW5nczolZCBhcGljaWQ6JWRcbiIsCiAgICAg
ICAgICAgICBiY2RwLT54ODZfcG93ZXIsIGJjZHAtPng4Nl9tYXhfY29yZXMsIGJjZHAtPmJvb3Rl
ZF9jb3JlcywKICAgICAgICAgICAgIGJjZHAtPng4Nl9udW1fc2libGluZ3MsIGJjZHAtPmFwaWNp
ZCk7CiAgICAgICAga2RicCgiICAgICAiKTsKICAgICAgICBpZiAoY3B1X2hhc19hcGljKQogICAg
ICAgICAgICBrZGJwKCJfYXBpYyIpOwogICAgICAgIGlmIChjcHVfaGFzX3NlcCkKICAgICAgICAg
ICAga2RicCgifF9zZXAiKTsKICAgICAgICBpZiAoY3B1X2hhc194bW0zKQogICAgICAgICAgICBr
ZGJwKCJ8X3htbTMiKTsKICAgICAgICBpZiAoY3B1X2hhc19odCkKICAgICAgICAgICAga2RicCgi
fF9odCIpOwogICAgICAgIGlmIChjcHVfaGFzX254KQogICAgICAgICAgICBrZGJwKCJ8X254Iik7
CiAgICAgICAgaWYgKGNwdV9oYXNfY2xmbHVzaCkKICAgICAgICAgICAga2RicCgifF9jbGZsdXNo
Iik7CiAgICAgICAgaWYgKGNwdV9oYXNfcGFnZTFnYikKICAgICAgICAgICAga2RicCgifF9wYWdl
MWdiIik7CiAgICAgICAgaWYgKGNwdV9oYXNfZmZ4c3IpCiAgICAgICAgICAgIGtkYnAoInxfZmZ4
c3IiKTsKICAgICAgICBpZiAoY3B1X2hhc194MmFwaWMpCiAgICAgICAgICAgIGtkYnAoInxfeDJh
cGljIik7CiAgICBrZGJwKCJcblxuIik7CiAgICBrZGJwKCJDQzoiKTsKI2lmIGRlZmluZWQoQ09O
RklHX1g4Nl82NCkKICAgICAgICBrZGJwKCIgQ09ORklHX1g4Nl82NCIpOwojZW5kaWYKI2lmIGRl
ZmluZWQoQ09ORklHX0NPTVBBVCkKICAgICAgICBrZGJwKCIgQ09ORklHX0NPTVBBVCIpOwojZW5k
aWYKI2lmIGRlZmluZWQoQ09ORklHX1BBR0lOR19BU1NJU1RBTkNFKQogICAgICAgIGtkYnAoIiBD
T05GSUdfUEFHSU5HX0FTU0lTVEFOQ0UiKTsKI2VuZGlmCiAgICBrZGJwKCJcbiIpOwogICAga2Ri
cCgiY3B1IGhhcyBmb2xsb3dpbmcgZmVhdHVyZXM6XG4iKTsKICAgIGtkYnAoIiAgJXNcbiIsIGJv
b3RfY3B1X2hhcyhYODZfRkVBVFVSRV9UU0NfUkVMSUFCTEUpID8gCiAgICAgICAgICJYODZfRkVB
VFVSRV9UU0NfUkVMSUFCTEUiIDogIiIpOwogICAga2RicCgiICAlc1xuIiwgCiAgICAgICAgIGJv
b3RfY3B1X2hhcyhYODZfRkVBVFVSRV9DT05TVEFOVF9UU0MpPyAiWDg2X0ZFQVRVUkVfQ09OU1RB
TlRfVFNDIjoiIik7CiAgICBrZGJwKCIgICVzXG4iLCAKICAgICAgICAgYm9vdF9jcHVfaGFzKFg4
Nl9GRUFUVVJFX05PTlNUT1BfVFNDKSA/ICJYODZfRkVBVFVSRV9OT05TVE9QX1RTQyIgOiIiKTsK
ICAgIGtkYnAoIiAgJXNcbiIsIAogICAgICAgICBib290X2NwdV9oYXMoWDg2X0ZFQVRVUkVfUkRU
U0NQKSA/ICAiWDg2X0ZFQVRVUkVfUkRUU0NQIiA6ICIiKTsKICAgIGtkYnAoIiAgJXNcbiIsIGJv
b3RfY3B1X2hhcyhYODZfRkVBVFVSRV9GWFNSKSA/ICAiWDg2X0ZFQVRVUkVfRlhTUiIgOiAiIik7
CiAgICBrZGJwKCIgICVzXG4iLCBib290X2NwdV9oYXMoWDg2X0ZFQVRVUkVfQ1BVSURfRkFVTFRJ
TkcpID8gIAogICAgICAgICAiWDg2X0ZFQVRVUkVfQ1BVSURfRkFVTFRJTkciIDogIiIpOwogICAg
a2RicCgiICAlc1xuIiwgCiAgICAgICAgIGJvb3RfY3B1X2hhcyhYODZfRkVBVFVSRV9QQUdFMUdC
KSA/ICAiWDg2X0ZFQVRVUkVfUEFHRTFHQiIgOiAiIik7CiAgICBrZGJwKCIgICVzXG4iLCBib290
X2NwdV9oYXMoWDg2X0ZFQVRVUkVfTVdBSVQpID8gICJYODZfRkVBVFVSRV9NV0FJVCIgOiAiIik7
CiAgICBrZGJwKCIgICVzXG4iLCBib290X2NwdV9oYXMoWDg2X0ZFQVRVUkVfWDJBUElDKSA/ICAi
WDg2X0ZFQVRVUkVfWDJBUElDIjoiIik7CiAgICBrZGJwKCIgICVzXG4iLCBib290X2NwdV9oYXMo
WDg2X0ZFQVRVUkVfWFNBVkUpID8gICJYODZfRkVBVFVSRV9YU0FWRSI6IiIpOwogICAga2RicCgi
XG4iKTsKCiAgICBrZGJwKCJNQVhfVklSVF9DUFVTOiQlZCAgTUFYX0hWTV9WQ1BVUzokJWRcbiIs
IE1BWF9WSVJUX0NQVVMsTUFYX0hWTV9WQ1BVUyk7CiAgICBrZGJwKCJOUl9FVkVOVF9DSEFOTkVM
UzogJCVkXG4iLCBOUl9FVkVOVF9DSEFOTkVMUyk7CiAgICBrZGJwKCJOUl9FVlRDSE5fQlVDS0VU
UzogJCVkXG4iLCBOUl9FVlRDSE5fQlVDS0VUUyk7CgogICAga2RicCgiXG5Eb21haW5zIGFuZCB0
aGVpciB2Y3B1czpcbiIpOwogICAgZm9yX2VhY2hfZG9tYWluKGRwKSB7CiAgICAgICAgc3RydWN0
IHZjcHUgKnZwOwogICAgICAgIGludCBwcmludGVkPTA7CiAgICAgICAga2RicCgiICBEb21haW46
IHtpZDolZCAweCV4ICAgcHRyOiVwJXN9ICBWQ1BVczpcbiIsIAogICAgICAgICAgICAgZHAtPmRv
bWFpbl9pZCwgZHAtPmRvbWFpbl9pZCwgZHAsIGRwLT5pc19keWluZyA/ICIgRFlJTkciOiIiKTsK
ICAgICAgICBmb3IodnA9ZHAtPnZjcHVbMF07IHZwOyB2cCA9IHZwLT5uZXh0X2luX2xpc3QpIHsK
ICAgICAgICAgICAga2RicCgiICB7aWQ6JWQgcDolcCBydW5zdGF0ZTolZH0iLCB2cC0+dmNwdV9p
ZCwgdnAsIAogICAgICAgICAgICAgICAgIHZwLT5ydW5zdGF0ZS5zdGF0ZSk7CiAgICAgICAgICAg
IGlmICgrK3ByaW50ZWQgJSAyID09IDApIGtkYnAoIlxuIik7CiAgICAgICAgfQogICAgICAgIGtk
YnAoIlxuIik7CiAgICB9CiAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKfQoKc3RhdGljIGtk
Yl9jcHVfY21kX3QKa2RiX3VzZ2ZfY3VyKHZvaWQpCnsKICAgIGtkYnAoImN1ciA6IGRpc3BsYXkg
Y3VycmVudCBkb21pZCBhbmQgdmNwdVxuIik7CiAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsK
fQoKLyogQ2hlY2tpbmcgZm9yIGd1ZXN0X21vZGUoKSBub3QgZmVhc2libGUgaGVyZS4gaWYgZG9t
MC0+aGNhbGwtPmJwIGluIHhlbiwgCiAqIHRoZW4gZ19tKCkgd2lsbCBzaG93IHhlbiwgYnV0IHZj
cHUgaXMgc3RpbGwgZG9tMC4gaGVuY2UganVzdCBsb29rIGF0IAogKiBjdXJyZW50IG9ubHkgKi8K
c3RhdGljIGtkYl9jcHVfY21kX3QKa2RiX2NtZGZfY3VyKGludCBhcmdjLCBjb25zdCBjaGFyICoq
YXJndiwgc3RydWN0IGNwdV91c2VyX3JlZ3MgKnJlZ3MpCnsKICAgIGRvbWlkX3QgaWQgPSBjdXJy
ZW50LT5kb21haW4tPmRvbWFpbl9pZDsKCiAgICBpZiAoYXJnYyA+IDEgJiYgKmFyZ3ZbMV0gPT0g
Jz8nKQogICAgICAgIHJldHVybiBrZGJfdXNnZl9jdXIoKTsKCiAgICBrZGJwKCJkb21pZDogJWR7
JXB9ICVzIHZjcHU6JWQgeyVwfSAiLCBpZCwgY3VycmVudC0+ZG9tYWluLAogICAgICAgICAoaWQ9
PURPTUlEX0lETEUpID8gIihJRExFKSIgOiAiIiwgY3VycmVudC0+dmNwdV9pZCwgY3VycmVudCk7
CgogICAgLyogaWYgKGlkICE9IERPTUlEX0lETEUpIHsgKi8KICAgICAgICBpZiAoYm9vdF9jcHVf
ZGF0YS54ODZfdmVuZG9yID09IFg4Nl9WRU5ET1JfSU5URUwpIHsKICAgICAgICAgICAgdTY0IGFk
ZHIgPSAtMTsKICAgICAgICAgICAgX192bXB0cnN0KCZhZGRyKTsKICAgICAgICAgICAga2RicCgi
IFZNQ1M6IktEQkZMLCBhZGRyKTsKICAgICAgICB9CiAgICAvKiB9ICovCiAgICBrZGJwKCJcbiIp
OwogICAgcmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7Cn0KCi8qIHN0dWIgdG8gcXVpY2tseSBhbmQg
ZWFzaWx5IGFkZCBhIG5ldyBjb21tYW5kICovCnN0YXRpYyBrZGJfY3B1X2NtZF90CmtkYl91c2dm
X3VzcjEodm9pZCkKewogICAga2RicCgidXNyMTogYWRkIGFueSBhcmJpdHJhcnkgY21kIHVzaW5n
IHRoaXMgaW4ga2RiX2NtZHMuY1xuIik7CiAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKfQpz
dGF0aWMga2RiX2NwdV9jbWRfdAprZGJfY21kZl91c3IxKGludCBhcmdjLCBjb25zdCBjaGFyICoq
YXJndiwgc3RydWN0IGNwdV91c2VyX3JlZ3MgKnJlZ3MpCnsKICAgIHJldHVybiBLREJfQ1BVX01B
SU5fS0RCOwp9CgpzdGF0aWMga2RiX2NwdV9jbWRfdAprZGJfdXNnZl9oKHZvaWQpCnsKICAgIGtk
YnAoImg6IGRpc3BsYXkgYWxsIGNvbW1hbmRzLiBTZWUga2RiL1JFQURNRSBmb3IgbW9yZSBpbmZv
XG4iKTsKICAgIHJldHVybiBLREJfQ1BVX01BSU5fS0RCOwp9CnN0YXRpYyBrZGJfY3B1X2NtZF90
CmtkYl9jbWRmX2goaW50IGFyZ2MsIGNvbnN0IGNoYXIgKiphcmd2LCBzdHJ1Y3QgY3B1X3VzZXJf
cmVncyAqcmVncykKewogICAga2RidGFiX3QgKnRicDsKCiAgICBrZGJwKCIgLSBjY3B1IGlzIGN1
cnJlbnQgY3B1IFxuIik7CiAgICBrZGJwKCIgLSBmb2xsb3dpbmcgYXJlIGFsd2F5cyBpbiBkZWNp
bWFsOlxuIik7CiAgICBrZGJwKCIgICAgIHZjcHUgbnVtLCBjcHUgbnVtLCBkb21pZFxuIik7CiAg
ICBrZGJwKCIgLSBvdGhlcndpc2UsIGFsbW9zdCBhbGwgbnVtYmVycyBhcmUgaW4gaGV4ICgweCBu
b3QgbmVlZGVkKVxuIik7CiAgICBrZGJwKCIgLSBvdXRwdXQ6ICQxNyBtZWFucyBkZWNpbWFsIDE3
XG4iKTsKICAgIGtkYnAoIiAtIGRvbWlkIDdmZmYoJDMyNzY3KSByZWZlcnMgdG8gaHlwZXJ2aXNv
clxuIik7CiAgICBrZGJwKCIgLSBpZiBubyBkb21pZCBiZWZvcmUgZnVuY3Rpb24gbmFtZSwgdGhl
biBpdCdzIGh5cGVydmlzb3JcbiIpOwogICAga2RicCgiIC0gZWFybHlrZGIgaW4geGVuIGdydWIg
bGluZSB0byBicmVhayBpbnRvIGtkYiBkdXJpbmcgYm9vdFxuIik7CiAgICBrZGJwKCIgLSBjb21t
YW5kID8gd2lsbCBzaG93IHRoZSBjb21tYW5kIHVzYWdlXG4iKTsKICAgIGtkYnAoIlxuIik7Cgog
ICAgZm9yKHRicD1rZGJfY21kX3RibDsgdGJwLT5rZGJfY21kX3VzZ2Y7IHRicCsrKQogICAgICAg
ICgqdGJwLT5rZGJfY21kX3VzZ2YpKCk7CiAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKfQoK
LyogPT09PT09PT09PT09PT09PT09PT09IGNtZCB0YWJsZSBpbml0aWFsaXphdGlvbiA9PT09PT09
PT09PT09PT09PT09PT09PT09PSAqLwp2b2lkIF9faW5pdAprZGJfaW5pdF9jbWR0YWIodm9pZCkK
ewogIHN0YXRpYyBrZGJ0YWJfdCBfa2RiX2NtZF90YWJsZVtdID0gewoKICAgIHsiaW5mbyIsIGtk
Yl9jbWRmX2luZm8sIGtkYl91c2dmX2luZm8sIDEsIEtEQl9SRVBFQVRfTk9ORX0sCiAgICB7ImN1
ciIsICBrZGJfY21kZl9jdXIsIGtkYl91c2dmX2N1ciwgMSwgS0RCX1JFUEVBVF9OT05FfSwKCiAg
ICB7ImYiLCAga2RiX2NtZGZfZiwgIGtkYl91c2dmX2YsICAxLCBLREJfUkVQRUFUX05PTkV9LAog
ICAgeyJmZyIsIGtkYl9jbWRmX2ZnLCBrZGJfdXNnZl9mZywgMSwgS0RCX1JFUEVBVF9OT05FfSwK
CiAgICB7ImR3IiwgIGtkYl9jbWRmX2R3LCAga2RiX3VzZ2ZfZHcsICAxLCBLREJfUkVQRUFUX05P
X0FSR1N9LAogICAgeyJkZCIsICBrZGJfY21kZl9kZCwgIGtkYl91c2dmX2RkLCAgMSwgS0RCX1JF
UEVBVF9OT19BUkdTfSwKICAgIHsiZHdtIiwga2RiX2NtZGZfZHdtLCBrZGJfdXNnZl9kd20sIDEs
IEtEQl9SRVBFQVRfTk9fQVJHU30sCiAgICB7ImRkbSIsIGtkYl9jbWRmX2RkbSwga2RiX3VzZ2Zf
ZGRtLCAxLCBLREJfUkVQRUFUX05PX0FSR1N9LAogICAgeyJkciIsICBrZGJfY21kZl9kciwgIGtk
Yl91c2dmX2RyLCAgMSwgS0RCX1JFUEVBVF9OT05FfSwKICAgIHsiZHJnIiwga2RiX2NtZGZfZHJn
LCBrZGJfdXNnZl9kcmcsIDEsIEtEQl9SRVBFQVRfTk9ORX0sCgogICAgeyJkaXMiLCBrZGJfY21k
Zl9kaXMsICBrZGJfdXNnZl9kaXMsICAxLCBLREJfUkVQRUFUX05PX0FSR1N9LAogICAgeyJkaXNt
IixrZGJfY21kZl9kaXNtLCBrZGJfdXNnZl9kaXNtLCAxLCBLREJfUkVQRUFUX05PX0FSR1N9LAoK
ICAgIHsibXciLCBrZGJfY21kZl9tdywga2RiX3VzZ2ZfbXcsIDEsIEtEQl9SRVBFQVRfTk9ORX0s
CiAgICB7Im1kIiwga2RiX2NtZGZfbWQsIGtkYl91c2dmX21kLCAxLCBLREJfUkVQRUFUX05PTkV9
LAogICAgeyJtciIsIGtkYl9jbWRmX21yLCBrZGJfdXNnZl9tciwgMSwgS0RCX1JFUEVBVF9OT05F
fSwKCiAgICB7ImJjIiwga2RiX2NtZGZfYmMsIGtkYl91c2dmX2JjLCAwLCBLREJfUkVQRUFUX05P
TkV9LAogICAgeyJicCIsIGtkYl9jbWRmX2JwLCBrZGJfdXNnZl9icCwgMSwgS0RCX1JFUEVBVF9O
T05FfSwKICAgIHsiYnRwIiwga2RiX2NtZGZfYnRwLCBrZGJfdXNnZl9idHAsIDEsIEtEQl9SRVBF
QVRfTk9ORX0sCgogICAgeyJ3cCIsIGtkYl9jbWRmX3dwLCBrZGJfdXNnZl93cCwgMSwgS0RCX1JF
UEVBVF9OT05FfSwKICAgIHsid2MiLCBrZGJfY21kZl93Yywga2RiX3VzZ2Zfd2MsIDAsIEtEQl9S
RVBFQVRfTk9ORX0sCgogICAgeyJuaSIsIGtkYl9jbWRmX25pLCBrZGJfdXNnZl9uaSwgMCwgS0RC
X1JFUEVBVF9OT19BUkdTfSwKICAgIHsic3MiLCBrZGJfY21kZl9zcywga2RiX3VzZ2Zfc3MsIDEs
IEtEQl9SRVBFQVRfTk9fQVJHU30sCiAgICB7InNzYiIsa2RiX2NtZGZfc3NiLGtkYl91c2dmX3Nz
YiwwLCBLREJfUkVQRUFUX05PX0FSR1N9LAogICAgeyJnbyIsIGtkYl9jbWRmX2dvLCBrZGJfdXNn
Zl9nbywgMCwgS0RCX1JFUEVBVF9OT05FfSwKCiAgICB7ImNwdSIsa2RiX2NtZGZfY3B1LCBrZGJf
dXNnZl9jcHUsIDEsIEtEQl9SRVBFQVRfTk9ORX0sCiAgICB7Im5taSIsa2RiX2NtZGZfbm1pLCBr
ZGJfdXNnZl9ubWksIDEsIEtEQl9SRVBFQVRfTk9ORX0sCiAgICB7InBlcmNwdSIsa2RiX2NtZGZf
cGVyY3B1LCBrZGJfdXNnZl9wZXJjcHUsIDEsIEtEQl9SRVBFQVRfTk9ORX0sCgogICAgeyJzeW0i
LCAga2RiX2NtZGZfc3ltLCAgIGtkYl91c2dmX3N5bSwgICAxLCBLREJfUkVQRUFUX05PTkV9LAog
ICAgeyJtb2QiLCAga2RiX2NtZGZfbW9kLCAgIGtkYl91c2dmX21vZCwgICAxLCBLREJfUkVQRUFU
X05PTkV9LAoKICAgIHsidmNwdWgiLGtkYl9jbWRmX3ZjcHVoLCBrZGJfdXNnZl92Y3B1aCwgMSwg
S0RCX1JFUEVBVF9OT05FfSwKICAgIHsidmNwdSIsIGtkYl9jbWRmX3ZjcHUsICBrZGJfdXNnZl92
Y3B1LCAgMSwgS0RCX1JFUEVBVF9OT05FfSwKICAgIHsiZG9tIiwgIGtkYl9jbWRmX2RvbSwgICBr
ZGJfdXNnZl9kb20sICAgMSwgS0RCX1JFUEVBVF9OT05FfSwKCiAgICB7InNjaGVkIiwga2RiX2Nt
ZGZfc2NoZWQsIGtkYl91c2dmX3NjaGVkLCAxLCBLREJfUkVQRUFUX05PTkV9LAogICAgeyJtbXUi
LCAgIGtkYl9jbWRmX21tdSwgICBrZGJfdXNnZl9tbXUsICAgMSwgS0RCX1JFUEVBVF9OT05FfSwK
ICAgIHsicDJtIiwgICBrZGJfY21kZl9wMm0sICAga2RiX3VzZ2ZfcDJtLCAgIDEsIEtEQl9SRVBF
QVRfTk9ORX0sCiAgICB7Im0ycCIsICAga2RiX2NtZGZfbTJwLCAgIGtkYl91c2dmX20ycCwgICAx
LCBLREJfUkVQRUFUX05PTkV9LAogICAgeyJkcGFnZSIsIGtkYl9jbWRmX2RwYWdlLCBrZGJfdXNn
Zl9kcGFnZSwgMSwgS0RCX1JFUEVBVF9OT05FfSwKICAgIHsiZG1zciIsICBrZGJfY21kZl9kbXNy
LCAga2RiX3VzZ2ZfZG1zciwgMSwgS0RCX1JFUEVBVF9OT05FfSwKICAgIHsiY3B1aWQiLCAga2Ri
X2NtZGZfY3B1aWQsICBrZGJfdXNnZl9jcHVpZCwgMSwgS0RCX1JFUEVBVF9OT05FfSwKICAgIHsi
d2VwdCIsICBrZGJfY21kZl93ZXB0LCAga2RiX3VzZ2Zfd2VwdCwgMSwgS0RCX1JFUEVBVF9OT05F
fSwKCiAgICB7ImR0cnEiLCBrZGJfY21kZl9kdHJxLCAga2RiX3VzZ2ZfZHRycSwgMSwgS0RCX1JF
UEVBVF9OT05FfSwKICAgIHsiZGlkdCIsIGtkYl9jbWRmX2RpZHQsICBrZGJfdXNnZl9kaWR0LCAx
LCBLREJfUkVQRUFUX05PTkV9LAogICAgeyJkZ2R0Iiwga2RiX2NtZGZfZGdkdCwgIGtkYl91c2dm
X2RnZHQsIDEsIEtEQl9SRVBFQVRfTk9ORX0sCiAgICB7ImRpcnEiLCBrZGJfY21kZl9kaXJxLCAg
a2RiX3VzZ2ZfZGlycSwgMSwgS0RCX1JFUEVBVF9OT05FfSwKICAgIHsiZHZpdCIsIGtkYl9jbWRm
X2R2aXQsICBrZGJfdXNnZl9kdml0LCAxLCBLREJfUkVQRUFUX05PTkV9LAogICAgeyJkdm1jIiwg
a2RiX2NtZGZfZHZtYywgIGtkYl91c2dmX2R2bWMsIDEsIEtEQl9SRVBFQVRfTk9ORX0sCiAgICB7
Im1taW8iLCBrZGJfY21kZl9tbWlvLCAga2RiX3VzZ2ZfbW1pbywgMSwgS0RCX1JFUEVBVF9OT05F
fSwKCiAgICAvKiB0cmFjaW5nIHJlbGF0ZWQgY29tbWFuZHMgKi8KICAgIHsidHJjb24iLCBrZGJf
Y21kZl90cmNvbiwgIGtkYl91c2dmX3RyY29uLCAgMCwgS0RCX1JFUEVBVF9OT05FfSwKICAgIHsi
dHJjb2ZmIixrZGJfY21kZl90cmNvZmYsIGtkYl91c2dmX3RyY29mZiwgMCwgS0RCX1JFUEVBVF9O
T05FfSwKICAgIHsidHJjeiIsICBrZGJfY21kZl90cmN6LCAgIGtkYl91c2dmX3RyY3osICAgMCwg
S0RCX1JFUEVBVF9OT05FfSwKICAgIHsidHJjcCIsICBrZGJfY21kZl90cmNwLCAgIGtkYl91c2dm
X3RyY3AsICAgMSwgS0RCX1JFUEVBVF9OT05FfSwKCiAgICB7InVzcjEiLCAga2RiX2NtZGZfdXNy
MSwgICBrZGJfdXNnZl91c3IxLCAgIDEsIEtEQl9SRVBFQVRfTk9ORX0sCiAgICB7ImtkYmYiLCAg
a2RiX2NtZGZfa2RiZiwgICBrZGJfdXNnZl9rZGJmLCAgIDEsIEtEQl9SRVBFQVRfTk9ORX0sCiAg
ICB7ImtkYmRiZyIsa2RiX2NtZGZfa2RiZGJnLCBrZGJfdXNnZl9rZGJkYmcsIDEsIEtEQl9SRVBF
QVRfTk9ORX0sCiAgICB7InJlYm9vdCIsa2RiX2NtZGZfcmVib290LCBrZGJfdXNnZl9yZWJvb3Qs
IDEsIEtEQl9SRVBFQVRfTk9ORX0sCiAgICB7ImgiLCAgICAga2RiX2NtZGZfaCwgICAgICBrZGJf
dXNnZl9oLCAgICAgIDEsIEtEQl9SRVBFQVRfTk9ORX0sCgogICAgeyIiLCBOVUxMLCBOVUxMLCAw
LCAwfSwKICB9OwogICAga2RiX2NtZF90YmwgPSBfa2RiX2NtZF90YWJsZTsKICAgIHJldHVybjsK
fQoAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAGtkYi9ndWVzdC8AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAwMDAwNzc1ADAwMDI3NTYAMDAwMjc1NgAwMDAwMDAwMDAwMAAxMjAxNzcyNDYyNAAw
MTI3MDQAIDUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAdXN0YXIg
IABtcmF0aG9yAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAG1yYXRob3IAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAa2RiL2d1ZXN0L01ha2VmaWxlAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAADAwMDA2NjQAMDAwMjc1NgAwMDAyNzU2ADAwMDAwMDAwMDQxADExNzY1NDY1NTU2ADAx
NDM1NgAgMAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAB1c3RhciAg
AG1yYXRob3IAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAbXJhdGhvcgAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAKb2JqLXkgICAgICAgICAgIDo9IGtkYl9ndWVzdC5vCgoAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAGtkYi9ndWVzdC9rZGJfZ3Vlc3QuYwAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAwMDAwNjY0ADAwMDI3NTYAMDAwMjc1NgAwMDAwMDAyNTAwNgAxMTc2NTQ2NTU1NgAwMTUw
NDEAIDAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAdXN0YXIgIABt
cmF0aG9yAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAG1yYXRob3IAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAALyoKICogQ29weXJpZ2h0IChDKSAyMDA5LCBNdWtlc2ggUmF0aG9y
LCBPcmFjbGUgQ29ycC4gIEFsbCByaWdodHMgcmVzZXJ2ZWQuCiAqCiAqIFRoaXMgcHJvZ3JhbSBp
cyBmcmVlIHNvZnR3YXJlOyB5b3UgY2FuIHJlZGlzdHJpYnV0ZSBpdCBhbmQvb3IKICogbW9kaWZ5
IGl0IHVuZGVyIHRoZSB0ZXJtcyBvZiB0aGUgR05VIEdlbmVyYWwgUHVibGljCiAqIExpY2Vuc2Ug
djIgYXMgcHVibGlzaGVkIGJ5IHRoZSBGcmVlIFNvZnR3YXJlIEZvdW5kYXRpb24uCiAqCiAqIFRo
aXMgcHJvZ3JhbSBpcyBkaXN0cmlidXRlZCBpbiB0aGUgaG9wZSB0aGF0IGl0IHdpbGwgYmUgdXNl
ZnVsLAogKiBidXQgV0lUSE9VVCBBTlkgV0FSUkFOVFk7IHdpdGhvdXQgZXZlbiB0aGUgaW1wbGll
ZCB3YXJyYW50eSBvZgogKiBNRVJDSEFOVEFCSUxJVFkgb3IgRklUTkVTUyBGT1IgQSBQQVJUSUNV
TEFSIFBVUlBPU0UuICBTZWUgdGhlIEdOVQogKiBHZW5lcmFsIFB1YmxpYyBMaWNlbnNlIGZvciBt
b3JlIGRldGFpbHMuCiAqCiAqIFlvdSBzaG91bGQgaGF2ZSByZWNlaXZlZCBhIGNvcHkgb2YgdGhl
IEdOVSBHZW5lcmFsIFB1YmxpYwogKiBMaWNlbnNlIGFsb25nIHdpdGggdGhpcyBwcm9ncmFtOyBp
ZiBub3QsIHdyaXRlIHRvIHRoZQogKiBGcmVlIFNvZnR3YXJlIEZvdW5kYXRpb24sIEluYy4sIDU5
IFRlbXBsZSBQbGFjZSAtIFN1aXRlIDMzMCwKICogQm9zdG9uLCBNQSAwMjExMTAtMTMwNywgVVNB
LgogKi8KCiNpbmNsdWRlICIuLi9pbmNsdWRlL2tkYmluYy5oIgoKLyogaW5mb3JtYXRpb24gZm9y
IHN5bWJvbHMgZm9yIGEgZ3Vlc3QgKGluY2x1ZGVpbmcgZG9tIDAgKSBpcyBzYXZlZCBoZXJlICov
CnN0cnVjdCBnc3Rfc3ltaW5mbyB7ICAgICAgICAgICAvKiBndWVzdCBzeW1ib2xzIGluZm8gKi8K
ICAgIGludCAgIGRvbWlkOyAgICAgICAgICAgICAgIC8qIHdoaWNoIGRvbWFpbiAqLwogICAgaW50
ICAgYml0bmVzczsgICAgICAgICAgICAgLyogMzIgb3IgNjQgKi8KICAgIHZvaWQgKmFkZHJ0Ymxw
OyAgICAgICAgICAgIC8qIHB0ciB0byAoMzIvNjQpYWRkcmVzc2VzIHRibCAqLwogICAgdTggICAq
dG9rdGJsOyAgICAgICAgICAgICAgLyogcHRyIHRvIGthbGxzeW1zX3Rva2VuX3RhYmxlICovCiAg
ICB1MTYgICp0b2tpZHh0Ymw7ICAgICAgICAgICAvKiBwdHIgdG8ga2FsbHN5bXNfdG9rZW5faW5k
ZXggKi8KICAgIHU4ICAgKmthbGxzeW1zX25hbWVzOyAgICAgIC8qIHB0ciB0byBrYWxsc3ltc19u
YW1lcyAqLwogICAgbG9uZyAga2FsbHN5bXNfbnVtX3N5bXM7ICAgLyogcHRyIHRvIGthbGxzeW1z
X251bV9zeW1zICovCiAgICBrZGJ2YV90ICBzdGV4dDsgICAgICAgICAgICAvKiB2YWx1ZSBvZiBf
c3RleHQgaW4gZ3Vlc3QgKi8KICAgIGtkYnZhX3QgIGV0ZXh0OyAgICAgICAgICAgIC8qIHZhbHVl
IG9mIF9ldGV4dCBpbiBndWVzdCAqLwogICAga2RidmFfdCAgc2luaXR0ZXh0OyAgICAgICAgLyog
dmFsdWUgb2YgX3Npbml0dGV4dCBpbiBndWVzdCAqLwogICAga2RidmFfdCAgZWluaXR0ZXh0OyAg
ICAgICAgLyogdmFsdWUgb2YgX2Vpbml0dGV4dCBpbiBndWVzdCAqLwp9OwoKI2RlZmluZSBNQVhf
Q0FDSEUgMTYgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAvKiBjYWNoZSB1cHRvIDE2IGd1
ZXN0cyAqLwpzdHJ1Y3QgZ3N0X3N5bWluZm8gZ3N0X3N5bWluZm9hW01BWF9DQUNIRV07ICAgICAg
IC8qIGd1ZXN0IHN5bWJvbCBpbmZvIGFycmF5ICovCgpzdGF0aWMgc3RydWN0IGdzdF9zeW1pbmZv
ICoKa2RiX2dldF9zeW1pbmZvX3Nsb3Qodm9pZCkKewogICAgaW50IGk7CiAgICBmb3IgKGk9MDsg
aSA8IE1BWF9DQUNIRTsgaSsrKQogICAgICAgIGlmIChnc3Rfc3ltaW5mb2FbaV0uYWRkcnRibHAg
PT0gTlVMTCkKICAgICAgICAgICAgcmV0dXJuICgmZ3N0X3N5bWluZm9hW2ldKTsgICAgICAKCiAg
ICByZXR1cm4gTlVMTDsKfQoKc3RhdGljIHN0cnVjdCBnc3Rfc3ltaW5mbyAqCmtkYl9kb21pZDJz
eW1pbmZvcChkb21pZF90IGRvbWlkKQp7CiAgICBpbnQgaTsKICAgIGZvciAoaT0wOyBpIDwgTUFY
X0NBQ0hFOyBpKyspCiAgICAgICAgaWYgKGdzdF9zeW1pbmZvYVtpXS5kb21pZCA9PSBkb21pZCkK
ICAgICAgICAgICAgcmV0dXJuICgmZ3N0X3N5bWluZm9hW2ldKTsgICAgICAKCiAgICByZXR1cm4g
TlVMTDsKfQoKLyogY2hlY2sgaWYgYW4gYWRkcmVzcyBsb29rcyBsaWtlIHRleHQgYWRkcmVzcyBp
biBndWVzdCAqLwppbnQKa2RiX2lzX2FkZHJfZ3Vlc3RfdGV4dChrZGJ2YV90IGFkZHIsIGludCBk
b21pZCkKewogICAgc3RydWN0IGdzdF9zeW1pbmZvICpncCA9IGtkYl9kb21pZDJzeW1pbmZvcChk
b21pZCk7CgogICAgaWYgKCFncCB8fCAhZ3AtPnN0ZXh0IHx8ICFncC0+ZXRleHQpCiAgICAgICAg
cmV0dXJuIDA7CiAgICBLREJHUDEoImd1ZXN0YWRkcjogYWRkcjolbHggZG9taWQ6JWRcbiIsIGFk
ZHIsIGRvbWlkKTsKCiAgICByZXR1cm4gKCAoYWRkciA+PSBncC0+c3RleHQgJiYgYWRkciA8PSBn
cC0+ZXRleHQpIHx8CiAgICAgICAgICAgICAoYWRkciA+PSBncC0+c2luaXR0ZXh0ICYmIGFkZHIg
PD0gZ3AtPmVpbml0dGV4dCkgKTsKfQoKLyoKICogcmV0dXJuczogdmFsdWUgb2Yga2FsbHN5bXNf
YWRkcmVzc2VzW2lkeF07CiAqLwpzdGF0aWMga2RidmFfdAprZGJfcmRfZ3Vlc3RfYWRkcnRibChz
dHJ1Y3QgZ3N0X3N5bWluZm8gKmdwLCBpbnQgaWR4KQp7CiAgICBrZGJ2YV90IGFkZHIsIHJldGFk
ZHI9MDsKICAgIGludCBudW0gPSBncC0+Yml0bmVzcy84OyAgICAgICAvKiB3aGV0aGVyIDQgYnl0
ZSBvciA4IGJ5dGUgcHRycyAqLwogICAgZG9taWRfdCBpZCA9IGdwLT5kb21pZDsKCiAgICBhZGRy
ID0gKGtkYnZhX3QpKCgoY2hhciAqKWdwLT5hZGRydGJscCkgKyBpZHggKiBudW0pOwogICAgS0RC
R1AxKCJyZGd1ZXN0YWRkcnRibDphZGRyOiVseCBpZHg6JWRcbiIsIGFkZHIsIGlkeCk7CgogICAg
aWYgKGtkYl9yZWFkX21lbShhZGRyLCAoa2RiYnl0X3QgKikmcmV0YWRkcixudW0saWQpICE9IG51
bSkgewogICAgICAgIGtkYnAoIkNhbid0IHJlYWQgYWRkcnRibCBkb21pZDolZCBhdDolbHhcbiIs
IGlkLCBhZGRyKTsKICAgICAgICByZXR1cm4gMDsKICAgIH0KICAgIEtEQkdQMSgicmRndWVzdGFk
ZHJ0Ymw6ZXhpdDpyZXRhZGRyOiVseFxuIiwgcmV0YWRkcik7CiAgICByZXR1cm4gcmV0YWRkcjsK
fQoKLyogQmFzZWQgb24gZWw1IGthbGxzeW1zLmMgZmlsZS4gKi8Kc3RhdGljIHVuc2lnbmVkIGlu
dCAKa2RiX2V4cGFuZF9lbDVfc3ltKHN0cnVjdCBnc3Rfc3ltaW5mbyAqZ3AsIHVuc2lnbmVkIGlu
dCBvZmYsIGNoYXIgKnJlc3VsdCkKeyAgIAogICAgaW50IGxlbiwgc2tpcHBlZF9maXJzdCA9IDA7
CiAgICB1OCB1OGlkeCwgKnRwdHIsICpkYXRhcDsKICAgIGRvbWlkX3QgZG9taWQgPSBncC0+ZG9t
aWQ7CgogICAgKnJlc3VsdCA9ICdcMCc7CgogICAgLyogZ2V0IHRoZSBjb21wcmVzc2VkIHN5bWJv
bCBsZW5ndGggZnJvbSB0aGUgZmlyc3Qgc3ltYm9sIGJ5dGUgKi8KICAgIGRhdGFwID0gZ3AtPmth
bGxzeW1zX25hbWVzICsgb2ZmOwogICAgbGVuID0gMDsKICAgIGlmICgoa2RiX3JlYWRfbWVtKChr
ZGJ2YV90KWRhdGFwLCAoa2RiYnl0X3QgKikmbGVuLCAxLCBkb21pZCkpICE9IDEpIHsKICAgICAg
ICBLREJHUCgiZmFpbGVkIHRvIHJlYWQgZ3Vlc3QgbWVtb3J5XG4iKTsKICAgICAgICByZXR1cm4g
MDsKICAgIH0KICAgIGRhdGFwKys7CgogICAgLyogdXBkYXRlIHRoZSBvZmZzZXQgdG8gcmV0dXJu
IHRoZSBvZmZzZXQgZm9yIHRoZSBuZXh0IHN5bWJvbCBvbgogICAgICogdGhlIGNvbXByZXNzZWQg
c3RyZWFtICovCiAgICBvZmYgKz0gbGVuICsgMTsKCiAgICAvKiBmb3IgZXZlcnkgYnl0ZSBvbiB0
aGUgY29tcHJlc3NlZCBzeW1ib2wgZGF0YSwgY29weSB0aGUgdGFibGUKICAgICAqIGVudHJ5IGZv
ciB0aGF0IGJ5dGUgKi8KICAgIHdoaWxlKGxlbikgewogICAgICAgIHUxNiB1MTZpZHgsICp1MTZw
OwogICAgICAgIGlmIChrZGJfcmVhZF9tZW0oKGtkYnZhX3QpZGF0YXAsKGtkYmJ5dF90ICopJnU4
aWR4LDEsZG9taWQpIT0xKXsKICAgICAgICAgICAga2RicCgibWVtb3J5ICh1OGlkeCkgcmVhZCBl
cnJvcjolcFxuIixncC0+dG9raWR4dGJsKTsKICAgICAgICAgICAgcmV0dXJuIDA7CiAgICAgICAg
fQogICAgICAgIHUxNnAgPSB1OGlkeCArIGdwLT50b2tpZHh0Ymw7CiAgICAgICAgaWYgKGtkYl9y
ZWFkX21lbSgoa2RidmFfdCl1MTZwLChrZGJieXRfdCAqKSZ1MTZpZHgsMixkb21pZCkhPTIpewog
ICAgICAgICAgICBrZGJwKCJ0b2tpZHh0YmwgcmVhZCBlcnJvcjolcFxuIiwgdTE2cCk7CiAgICAg
ICAgICAgIHJldHVybiAwOwogICAgICAgIH0KICAgICAgICB0cHRyID0gZ3AtPnRva3RibCArIHUx
NmlkeDsKICAgICAgICBkYXRhcCsrOwogICAgICAgIGxlbi0tOwoKICAgICAgICB3aGlsZSAoKGtk
Yl9yZWFkX21lbSgoa2RidmFfdCl0cHRyLCAoa2RiYnl0X3QgKikmdThpZHgsIDEsIGRvbWlkKT09
MSkgJiYKICAgICAgICAgICAgICAgdThpZHgpIHsKCiAgICAgICAgICAgIGlmKHNraXBwZWRfZmly
c3QpIHsKICAgICAgICAgICAgICAgICpyZXN1bHQgPSB1OGlkeDsKICAgICAgICAgICAgICAgIHJl
c3VsdCsrOwogICAgICAgICAgICB9IGVsc2UKICAgICAgICAgICAgICAgIHNraXBwZWRfZmlyc3Qg
PSAxOwogICAgICAgICAgICB0cHRyKys7CiAgICAgICAgfQogICAgfQogICAgKnJlc3VsdCA9ICdc
MCc7CiAgICByZXR1cm4gb2ZmOyAgICAgICAgICAvKiByZXR1cm4gdG8gb2Zmc2V0IHRvIHRoZSBu
ZXh0IHN5bWJvbCAqLwp9CgojZGVmaW5lIEVMNF9OTUxFTiAxMjcKLyogc28gbXVjaCBwYWluLCBz
byBub3Qgc3VyZSBvZiBpdCdzIHdvcnRoIC4uIDopLi4gKi8Kc3RhdGljIGtkYnZhX3QKa2RiX2V4
cGFuZF9lbDRfc3ltKHN0cnVjdCBnc3Rfc3ltaW5mbyAqZ3AsIGludCBsb3csIGNoYXIgKnJlc3Vs
dCwgY2hhciAqc3ltcCkKeyAgIAogICAgaW50IGksIGo7CiAgICB1OCAqbm1wID0gZ3AtPmthbGxz
eW1zX25hbWVzOyAgICAgICAvKiBndWVzdCBhZGRyZXNzIHNwYWNlICovCiAgICBrZGJieXRfdCBi
eXRlLCBwcmVmaXg7CiAgICBkb21pZF90IGlkID0gZ3AtPmRvbWlkOwogICAga2RidmFfdCBhZGRy
OwoKICAgIEtEQkdQMSgiRWVsNHN5bTpubXA6JXAgbWF4aWR4OiQlZCBzeW06JXNcbiIsIG5tcCwg
bG93LCBzeW1wKTsKICAgIGZvciAoaT0wOyBpIDw9IGxvdzsgaSsrKSB7CiAgICAgICAgLyogdW5z
aWduZWQgcHJlZml4ID0gKm5hbWUrKzsgKi8KICAgICAgICBpZiAoa2RiX3JlYWRfbWVtKChrZGJ2
YV90KW5tcCwgJnByZWZpeCwgMSwgaWQpICE9IDEpIHsKICAgICAgICAgICAga2RicCgiZmFpbGVk
IHRvIHJlYWQ6JXAgZG9taWQ6JXhcbiIsIG5tcCwgaWQpOwogICAgICAgICAgICByZXR1cm4gMDsK
ICAgICAgICB9CiAgICAgICAgS0RCR1AyKCJlbDQ6aTolZCBwcmVmaXg6JXhcbiIsIGksIHByZWZp
eCk7CiAgICAgICAgbm1wKys7CiAgICAgICAgLyogc3RybmNweShuYW1lYnVmICsgcHJlZml4LCBu
YW1lLCBLU1lNX05BTUVfTEVOIC0gcHJlZml4KTsgKi8KICAgICAgICBhZGRyID0gKGxvbmcpcmVz
dWx0ICsgcHJlZml4OwogICAgICAgIGZvciAoaj0wOyBqIDwgRUw0X05NTEVOLXByZWZpeDsgaisr
KSB7CiAgICAgICAgICAgIGlmIChrZGJfcmVhZF9tZW0oKGtkYnZhX3Qpbm1wLCAmYnl0ZSwgMSwg
aWQpICE9IDEpIHsKICAgICAgICAgICAgICAgIGtkYnAoImZhaWxlZCByZWFkOiVwIGRvbWlkOiV4
XG4iLCBubXAsIGlkKTsKICAgICAgICAgICAgICAgIHJldHVybiAwOwogICAgICAgICAgICB9CiAg
ICAgICAgICAgIEtEQkdQMigiZWw0Omo6JWQgYnl0ZToleFxuIiwgaiwgYnl0ZSk7CiAgICAgICAg
ICAgICooa2RiYnl0X3QgKilhZGRyID0gYnl0ZTsKICAgICAgICAgICAgYWRkcisrOyBubXArKzsK
ICAgICAgICAgICAgaWYgKGJ5dGUgPT0gJ1wwJykKICAgICAgICAgICAgICAgIGJyZWFrOwogICAg
ICAgIH0KICAgICAgICBLREJHUDIoImVsNHN5bTppOiVkIHJlczolc1xuIiwgaSwgcmVzdWx0KTsK
ICAgICAgICBpZiAoc3ltcCAmJiBzdHJjbXAocmVzdWx0LCBzeW1wKSA9PSAwKQogICAgICAgICAg
ICByZXR1cm4oa2RiX3JkX2d1ZXN0X2FkZHJ0YmwoZ3AsIGkpKTsKCiAgICAgICAgLyoga2FsbHN5
bXMuYzogbmFtZSArPSBzdHJsZW4obmFtZSkgKyAxOyAqLwogICAgICAgIGlmIChqID09IEVMNF9O
TUxFTi1wcmVmaXggJiYgYnl0ZSAhPSAnXDAnKQogICAgICAgICAgICB3aGlsZSAoa2RiX3JlYWRf
bWVtKChrZGJ2YV90KW5tcCwgJmJ5dGUsIDEsIGlkKSAmJiBieXRlICE9ICdcMCcpCiAgICAgICAg
ICAgICAgICBubXArKzsKICAgIH0KICAgIEtEQkdQMSgiWGVsNHN5bTogbmEtZ2EtZGFcbiIpOwog
ICAgcmV0dXJuIDA7Cn0KCnN0YXRpYyB1bnNpZ25lZCBpbnQKa2RiX2dldF9lbDVfc3ltb2Zmc2V0
KHN0cnVjdCBnc3Rfc3ltaW5mbyAqZ3AsIGxvbmcgcG9zKQp7CiAgICBpbnQgaTsKICAgIHU4IGRh
dGEsICpuYW1lcDsKICAgIGRvbWlkX3QgZG9taWQgPSBncC0+ZG9taWQ7CgogICAgbmFtZXAgPSBn
cC0+a2FsbHN5bXNfbmFtZXM7CiAgICBmb3IgKGk9MDsgaSA8IHBvczsgaSsrKSB7CiAgICAgICAg
aWYgKGtkYl9yZWFkX21lbSgoa2RidmFfdCluYW1lcCwgJmRhdGEsIDEsIGRvbWlkKSAhPSAxKSB7
CiAgICAgICAgICAgIGtkYnAoIkNhbid0IHJlYWQgaWQ6JCVkIG1lbTolcFxuIiwgZG9taWQsIG5h
bWVwKTsKICAgICAgICAgICAgcmV0dXJuIDA7CiAgICAgICAgfQogICAgICAgIG5hbWVwID0gbmFt
ZXAgKyBkYXRhICsgMTsKICAgIH0KICAgIHJldHVybiBuYW1lcCAtIGdwLT5rYWxsc3ltc19uYW1l
czsKfQoKLyoKICogZm9yIGEgZ2l2ZW4gZ3Vlc3QgZG9taWQgKGRvbWlkID49IDAgJiYgPCBLREJf
SFlQRE9NSUQpLCBjb252ZXJ0IGFkZHIgdG8KICogc3ltYm9sLiBvZmZzZXQgaXMgc2V0IHRvICBh
ZGRyIC0gc3ltYm9sc3RhcnQKICovCmNoYXIgKgprZGJfZ3Vlc3RfYWRkcjJzeW0odW5zaWduZWQg
bG9uZyBhZGRyLCBkb21pZF90IGRvbWlkLCB1bG9uZyAqb2Zmc3ApCnsKICAgIHN0YXRpYyBjaGFy
IG5hbWVidWZbS1NZTV9OQU1FX0xFTisxXTsKICAgIHVuc2lnbmVkIGxvbmcgbG93LCBoaWdoLCBt
aWQ7CiAgICBzdHJ1Y3QgZ3N0X3N5bWluZm8gKmdwID0ga2RiX2RvbWlkMnN5bWluZm9wKGRvbWlk
KTsKCiAgICAqb2Zmc3AgPSAwOwogICAgaWYoIWdwIHx8IGdwLT5rYWxsc3ltc19udW1fc3ltcyA9
PSAwKQogICAgICAgIHJldHVybiAiID8/PyAiOwoKICAgIG5hbWVidWZbMF0gPSBuYW1lYnVmW0tT
WU1fTkFNRV9MRU5dID0gJ1wwJzsKICAgIGlmICgxKSB7CiAgICAgICAgLyogZG8gYSBiaW5hcnkg
c2VhcmNoIG9uIHRoZSBzb3J0ZWQga2FsbHN5bXNfYWRkcmVzc2VzIGFycmF5ICovCiAgICAgICAg
bG93ID0gMDsKICAgICAgICBoaWdoID0gZ3AtPmthbGxzeW1zX251bV9zeW1zOwoKICAgICAgICB3
aGlsZSAoaGlnaC1sb3cgPiAxKSB7CiAgICAgICAgICAgIG1pZCA9IChsb3cgKyBoaWdoKSAvIDI7
CiAgICAgICAgICAgIGlmIChrZGJfcmRfZ3Vlc3RfYWRkcnRibChncCwgbWlkKSA8PSBhZGRyKSAK
ICAgICAgICAgICAgICAgIGxvdyA9IG1pZDsKICAgICAgICAgICAgZWxzZSAKICAgICAgICAgICAg
ICAgIGhpZ2ggPSBtaWQ7CiAgICAgICAgfQogICAgICAgIC8qIEdyYWIgbmFtZSAqLwogICAgICAg
IGlmIChncC0+dG9rdGJsKSB7CiAgICAgICAgICAgIGludCBzeW1vZmYgPSBrZGJfZ2V0X2VsNV9z
eW1vZmZzZXQoZ3AsbG93KTsKICAgICAgICAgICAga2RiX2V4cGFuZF9lbDVfc3ltKGdwLCBzeW1v
ZmYsIG5hbWVidWYpOwogICAgICAgIH0gZWxzZQogICAgICAgICAgICBrZGJfZXhwYW5kX2VsNF9z
eW0oZ3AsIGxvdywgbmFtZWJ1ZiwgTlVMTCk7CiAgICAgICAgKm9mZnNwID0gYWRkciAtIGtkYl9y
ZF9ndWVzdF9hZGRydGJsKGdwLCBsb3cpOwogICAgICAgIHJldHVybiBuYW1lYnVmOwogICAgfQog
ICAgcmV0dXJuICIgPz8/PyAiOwp9CgoKLyogCiAqIHNhdmUgZ3Vlc3QgKGRvbTAgYW5kIG90aGVy
cykgc3ltYm9scyBpbmZvIDogZG9taWQgYW5kIGZvbGxvd2luZyBhZGRyZXNzZXM6CiAqICAgICAm
a2FsbHN5bXNfbmFtZXMgJmthbGxzeW1zX2FkZHJlc3NlcyAma2FsbHN5bXNfbnVtX3N5bXMgXAog
KiAgICAgJmthbGxzeW1zX3Rva2VuX3RhYmxlICZrYWxsc3ltc190b2tlbl9pbmRleAogKi8Kdm9p
ZAprZGJfc2F2X2RvbV9zeW1pbmZvKGRvbWlkX3QgZG9taWQsIGxvbmcgbmFtZXNwLCBsb25nIGFk
ZHJhcCwgbG9uZyBudW1wLAogICAgICAgICAgICAgICAgICAgIGxvbmcgdG9rdGJscCwgbG9uZyB0
b2tpZHhwKQp7CiAgICBpbnQgYnl0ZXM7CiAgICBsb25nIHZhbCA9IDA7ICAgIC8qIG11c3QgYmUg
c2V0IHRvIHplcm8gZm9yIDMyIG9uIDY0IGNhc2VzICovCiAgICBzdHJ1Y3QgZ3N0X3N5bWluZm8g
KmdwID0ga2RiX2dldF9zeW1pbmZvX3Nsb3QoKTsKCiAgICBpZiAoZ3AgPT0gTlVMTCkgewogICAg
ICAgIGtkYnAoImtkYjprZGJfc2F2X2RvbV9zeW1pbmZvKCk6VGFibGUgZnVsbC4uIHN5bWJvbHMg
bm90IHNhdmVkXG4iKTsKICAgICAgICByZXR1cm47CiAgICB9CiAgICBtZW1zZXQoZ3AsIDAsIHNp
emVvZigqZ3ApKTsKCiAgICBncC0+ZG9taWQgPSBkb21pZDsKICAgIGdwLT5iaXRuZXNzID0ga2Ri
X2d1ZXN0X2JpdG5lc3MoZG9taWQpOwogICAgZ3AtPmFkZHJ0YmxwID0gKHZvaWQgKilhZGRyYXA7
CiAgICBncC0+a2FsbHN5bXNfbmFtZXMgPSAodTggKiluYW1lc3A7CiAgICBncC0+dG9rdGJsID0g
KHU4ICopdG9rdGJscDsKICAgIGdwLT50b2tpZHh0YmwgPSAodTE2ICopdG9raWR4cDsKCiAgICBL
REJHUCgiZG9taWQ6JXggYml0bmVzczokJWQgbnVtc3ltczokJWxkIGFycmF5cDolcFxuIiwgZG9t
aWQsCiAgICAgICAgICBncC0+Yml0bmVzcywgZ3AtPmthbGxzeW1zX251bV9zeW1zLCBncC0+YWRk
cnRibHApOwoKICAgIGJ5dGVzID0gZ3AtPmJpdG5lc3MvODsKICAgIGlmIChrZGJfcmVhZF9tZW0o
bnVtcCwgKGtkYmJ5dF90ICopJnZhbCwgYnl0ZXMsIGRvbWlkKSAhPSBieXRlcykgewoKICAgICAg
ICBrZGJwKCJVbmFibGUgdG8gcmVhZCBudW1iZXIgb2Ygc3ltYm9scyBmcm9tOiVseFxuIiwgbnVt
cCk7CiAgICAgICAgbWVtc2V0KGdwLCAwLCBzaXplb2YoKmdwKSk7CiAgICAgICAgcmV0dXJuOwog
ICAgfSBlbHNlCiAgICAgICAga2RicCgiTnVtYmVyIG9mIHN5bWJvbHM6JCVsZFxuIiwgdmFsKTsK
CiAgICBncC0+a2FsbHN5bXNfbnVtX3N5bXMgPSB2YWw7CgogICAgYnl0ZXMgPSAoZ3AtPmJpdG5l
c3MvOCkgKiBncC0+a2FsbHN5bXNfbnVtX3N5bXM7CiAgICBncC0+c3RleHQgPSBrZGJfZ3Vlc3Rf
c3ltMmFkZHIoIl9zdGV4dCIsIGRvbWlkKTsKICAgIGdwLT5ldGV4dCA9IGtkYl9ndWVzdF9zeW0y
YWRkcigiX2V0ZXh0IiwgZG9taWQpOwogICAgaWYgKCFncC0+c3RleHQgfHwgIWdwLT5ldGV4dCkK
ICAgICAgICBrZGJwKCJXYXJuOiBDYW4ndCBmaW5kIHN0ZXh0L2V0ZXh0XG4iKTsKCiAgICBpZiAo
Z3AtPnRva3RibCAmJiBncC0+dG9raWR4dGJsKSB7CiAgICAgICAgZ3AtPnNpbml0dGV4dCA9IGtk
Yl9ndWVzdF9zeW0yYWRkcigiX3Npbml0dGV4dCIsIGRvbWlkKTsKICAgICAgICBncC0+ZWluaXR0
ZXh0ID0ga2RiX2d1ZXN0X3N5bTJhZGRyKCJfZWluaXR0ZXh0IiwgZG9taWQpOwogICAgICAgIGlm
ICghZ3AtPnNpbml0dGV4dCB8fCAhZ3AtPmVpbml0dGV4dCkgewogICAgICAgICAgICBrZGJwKCJX
YXJuOiBDYW4ndCBmaW5kIHNpbml0dGV4dC9laW5pdHRleHRcbiIpOwogICAgfSAKICAgIH0KICAg
IEtEQkdQMSgic3R4dDolbHggZXR4dDolbHggc2l0eHQ6JWx4IGVpdHh0OiVseFxuIiwgZ3AtPnN0
ZXh0LCBncC0+ZXRleHQsCiAgICAgICAgICAgZ3AtPnNpbml0dGV4dCwgZ3AtPmVpbml0dGV4dCk7
CiAgICBrZGJwKCJTdWNjZXNmdWxseSBzYXZlZCBzeW1ib2wgaW5mb1xuIik7Cn0KCi8qCiAqIGdp
dmVuIGEgc3ltYm9sIHN0cmluZyBmb3IgYSBndWVzdC9kb21pZCwgcmV0dXJuIGl0cyBhZGRyZXNz
CiAqLwprZGJ2YV90CmtkYl9ndWVzdF9zeW0yYWRkcihjaGFyICpzeW1wLCBkb21pZF90IGRvbWlk
KQp7CiAgICBjaGFyIG5hbWVidWZbS1NZTV9OQU1FX0xFTisxXTsKICAgIGludCBpLCBvZmY9MDsK
ICAgIHN0cnVjdCBnc3Rfc3ltaW5mbyAqZ3AgPSBrZGJfZG9taWQyc3ltaW5mb3AoZG9taWQpOwoK
ICAgIEtEQkdQKCJzeW0yYTogc3ltOiVzIGRvbWlkOiV4IG51bXN5bXM6JWxkXG4iLCBzeW1wLCBk
b21pZCwKICAgICAgICAgIGdwID8gZ3AtPmthbGxzeW1zX251bV9zeW1zOiAtMSk7CgogICAgaWYg
KCFncCkKICAgICAgICByZXR1cm4gMDsKCiAgICBpZiAoZ3AtPnRva3RibCA9PSAwIHx8IGdwLT50
b2tpZHh0YmwgPT0gMCkKICAgICAgICByZXR1cm4oa2RiX2V4cGFuZF9lbDRfc3ltKGdwLCBncC0+
a2FsbHN5bXNfbnVtX3N5bXMsIG5hbWVidWYsIHN5bXApKTsKCiAgICBmb3IgKGk9MDsgaSA8IGdw
LT5rYWxsc3ltc19udW1fc3ltczsgaSsrKSB7CiAgICAgICAgb2ZmID0ga2RiX2V4cGFuZF9lbDVf
c3ltKGdwLCBvZmYsIG5hbWVidWYpOwogICAgICAgIEtEQkdQMSgiaTolZCBuYW1lYnVmOiVzXG4i
LCBpLCBuYW1lYnVmKTsKICAgICAgICBpZiAoc3RyY21wKG5hbWVidWYsIHN5bXApID09IDApIHsK
ICAgICAgICAgICAgcmV0dXJuKGtkYl9yZF9ndWVzdF9hZGRydGJsKGdwLCBpKSk7CiAgICAgICAg
fQogICAgfQogICAgS0RCR1AoInN5bTJhOmV4aXQ6bmEtZ2EtZGFcbiIpOwogICAgcmV0dXJuIDA7
Cn0KAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABrZGIv
TWFrZWZpbGUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAMDAwMDY2NAAwMDAyNzU2ADAw
MDI3NTYAMDAwMDAwMDAxMDIAMTE3NjU0NjU1NTYAMDEzMjI1ACAwAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHVzdGFyICAAbXJhdGhvcgAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAABtcmF0aG9yAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAApvYmot
eQkJKz0ga2RibWFpbi5vIGtkYl9jbWRzLm8ga2RiX2lvLm8gCgpzdWJkaXIteSArPSB4ODYgZ3Vl
c3QKCgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==

--MP_/hjZb_3K/+AywVJErRJV.V7.
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--MP_/hjZb_3K/+AywVJErRJV.V7.--


From xen-devel-bounces@lists.xen.org Thu Aug 30 20:37:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Aug 2012 20:37:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7BTx-0002DI-U6; Thu, 30 Aug 2012 20:36:53 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1T79Ry-0000KO-Lu
	for Xen-devel@lists.xensource.com; Thu, 30 Aug 2012 18:26:44 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1346351192!3210193!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=Mail larger than max spam size
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5425 invoked from network); 30 Aug 2012 18:26:34 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-6.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 30 Aug 2012 18:26:34 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7UIPGRL020074
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK)
	for <Xen-devel@lists.xensource.com>; Thu, 30 Aug 2012 18:25:17 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7UIPE5r028759
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO)
	for <Xen-devel@lists.xensource.com>; Thu, 30 Aug 2012 18:25:16 GMT
Received: from abhmt119.oracle.com (abhmt119.oracle.com [141.146.116.71])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7UIPDdq021626
	for <Xen-devel@lists.xensource.com>; Thu, 30 Aug 2012 13:25:13 -0500
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 30 Aug 2012 11:25:12 -0700
Date: Thu, 30 Aug 2012 11:25:11 -0700
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>
Message-ID: <20120830112511.19fb0a49@mantra.us.oracle.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.7.6 (GTK+ 2.18.9; x86_64-redhat-linux-gnu)
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="MP_/hjZb_3K/+AywVJErRJV.V7."
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
X-Mailman-Approved-At: Thu, 30 Aug 2012 20:36:52 +0000
Subject: [Xen-devel] [RFC PATCH 2/2]: hypervisor debugger
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--MP_/hjZb_3K/+AywVJErRJV.V7.
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

Tar file of kdb subdirectory attached.


--MP_/hjZb_3K/+AywVJErRJV.V7.
Content-Type: application/x-tar
Content-Transfer-Encoding: base64
Content-Disposition: attachment; filename=kdb.tar

a2RiLwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADAwMDA3NzUAMDAwMjc1
NgAwMDAyNzU2ADAwMDAwMDAwMDAwADEyMDE3NzI0NjI0ADAxMTU1NQAgNQAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAB1c3RhciAgAG1yYXRob3IAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAbXJhdGhvcgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABr
ZGIva2RiX2lvLmMAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAMDAwMDY2NAAwMDAyNzU2
ADAwMDI3NTYAMDAwMDAwMTEzMjUAMTE3NjU0NjU1NTYAMDEzMTcxACAwAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHVzdGFyICAAbXJhdGhvcgAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAABtcmF0aG9yAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAC8q
CiAqIENvcHlyaWdodCAoQykgMjAwOSwgTXVrZXNoIFJhdGhvciwgT3JhY2xlIENvcnAuICBBbGwg
cmlnaHRzIHJlc2VydmVkLgogKgogKiBUaGlzIHByb2dyYW0gaXMgZnJlZSBzb2Z0d2FyZTsgeW91
IGNhbiByZWRpc3RyaWJ1dGUgaXQgYW5kL29yCiAqIG1vZGlmeSBpdCB1bmRlciB0aGUgdGVybXMg
b2YgdGhlIEdOVSBHZW5lcmFsIFB1YmxpYwogKiBMaWNlbnNlIHYyIGFzIHB1Ymxpc2hlZCBieSB0
aGUgRnJlZSBTb2Z0d2FyZSBGb3VuZGF0aW9uLgogKgogKiBUaGlzIHByb2dyYW0gaXMgZGlzdHJp
YnV0ZWQgaW4gdGhlIGhvcGUgdGhhdCBpdCB3aWxsIGJlIHVzZWZ1bCwKICogYnV0IFdJVEhPVVQg
QU5ZIFdBUlJBTlRZOyB3aXRob3V0IGV2ZW4gdGhlIGltcGxpZWQgd2FycmFudHkgb2YKICogTUVS
Q0hBTlRBQklMSVRZIG9yIEZJVE5FU1MgRk9SIEEgUEFSVElDVUxBUiBQVVJQT1NFLiAgU2VlIHRo
ZSBHTlUKICogR2VuZXJhbCBQdWJsaWMgTGljZW5zZSBmb3IgbW9yZSBkZXRhaWxzLgogKgogKiBZ
b3Ugc2hvdWxkIGhhdmUgcmVjZWl2ZWQgYSBjb3B5IG9mIHRoZSBHTlUgR2VuZXJhbCBQdWJsaWMK
ICogTGljZW5zZSBhbG9uZyB3aXRoIHRoaXMgcHJvZ3JhbTsgaWYgbm90LCB3cml0ZSB0byB0aGUK
ICogRnJlZSBTb2Z0d2FyZSBGb3VuZGF0aW9uLCBJbmMuLCA1OSBUZW1wbGUgUGxhY2UgLSBTdWl0
ZSAzMzAsCiAqIEJvc3RvbiwgTUEgMDIxMTEwLTEzMDcsIFVTQS4KICovCiNpbmNsdWRlICJpbmNs
dWRlL2tkYmluYy5oIgoKI2RlZmluZSBLX0JBQ0tTUEFDRSAgMHg4ICAgICAgICAgICAgICAgICAg
IC8qIGN0cmwtSCAqLwojZGVmaW5lIEtfQkFDS1NQQUNFMSAweDdmICAgICAgICAgICAgICAgICAg
LyogY3RybC0/ICovCiNkZWZpbmUgS19VTkRFUlNDT1JFIDB4NWYKI2RlZmluZSBLX0NNRF9CVUZT
WiAgMTYwCiNkZWZpbmUgS19DTURfTUFYSSAgIChLX0NNRF9CVUZTWiAtIDEpICAgICAvKiBtYXgg
aW5kZXggaW4gYnVmZmVyICovCgojaWYgMCAgICAgICAgLyogbWFrZSBhIGhpc3RvcnkgYXJyYXkg
c29tZSBkYXkgKi8KI2RlZmluZSBLX1VQX0FSUk9XICAgICAgICAgICAgICAgICAgICAgICAgIC8q
IHNlcXVlbmNlIDogMWIgNWIgNDEgaWUsICdcZVtBJyAqLwojZGVmaW5lIEtfRE5fQVJST1cgICAg
ICAgICAgICAgICAgICAgICAgICAgLyogc2VxdWVuY2UgOiAxYiA1YiA0MiBpZSwgJ1xlW0InICov
CiNkZWZpbmUgS19OVU1fSElTVCAgIDMyCnN0YXRpYyBpbnQgY3Vyc29yOwpzdGF0aWMgY2hhciBj
bWRzX2FbTlVNX0hJU1RdW0tfQ01EX0JVRlNaXTsKI2VuZGlmCgpzdGF0aWMgY2hhciBjbWRzX2Fb
S19DTURfQlVGU1pdOwoKCnN0YXRpYyBpbnQKa2RiX2tleV92YWxpZChpbnQga2V5KQp7CiAgICAv
KiBub3RlOiBpc3NwYWNlKCkgaXMgbW9yZSB0aGFuICcgJywgaGVuY2Ugd2UgZG9uJ3QgdXNlIGl0
IGhlcmUgKi8KICAgIGlmIChpc2FsbnVtKGtleSkgfHwga2V5ID09ICcgJyB8fCBrZXkgPT0gS19C
QUNLU1BBQ0UgfHwga2V5ID09ICdcbicgfHwKICAgICAgICBrZXkgPT0gJz8nIHx8IGtleSA9PSBL
X1VOREVSU0NPUkUgfHwga2V5ID09ICc9JyB8fCBrZXkgPT0gJyEnKQogICAgICAgICAgICByZXR1
cm4gMTsKICAgIHJldHVybiAwOwp9CgovKiBkaXNwbGF5IGtkYiBwcm9tcHQgYW5kIHJlYWQgY29t
bWFuZCBmcm9tIHRoZSBjb25zb2xlIAogKiBSRVRVUk5TOiBhICdcbicgdGVybWluYXRlZCBjb21t
YW5kIGJ1ZmZlciAqLwpjaGFyICoKa2RiX2dldF9jbWRsaW5lKGNoYXIgKnByb21wdCkKewogICAg
I2RlZmluZSBLX0JFTEwgICAgIDB4NwogICAgI2RlZmluZSBLX0NUUkxfQyAgIDB4MwoKICAgIGlu
dCBrZXksIGk9MDsKCiAgICBrZGJwKHByb21wdCk7CiAgICBtZW1zZXQoY21kc19hLCAwLCBLX0NN
RF9CVUZTWik7CiAgICBjbWRzX2FbS19DTURfQlVGU1otMV0gPSAnXG4nOwoKICAgIGRvIHsKICAg
ICAgICBrZXkgPSBjb25zb2xlX2dldGMoKTsKICAgICAgICBpZiAoa2V5ID09ICdccicpIAogICAg
ICAgICAgICBrZXkgPSAnXG4nOwogICAgICAgIGlmIChrZXkgPT0gS19CQUNLU1BBQ0UxKSAKICAg
ICAgICAgICAga2V5ID0gS19CQUNLU1BBQ0U7CgogICAgICAgIGlmIChrZXkgPT0gS19DVFJMX0Mg
fHwgKGk9PUtfQ01EX01BWEkgJiYga2V5ICE9ICdcbicpKSB7CiAgICAgICAgICAgIGNvbnNvbGVf
cHV0YygnXG4nKTsKICAgICAgICAgICAgaWYgKGkgPj0gS19DTURfTUFYSSkgewogICAgICAgICAg
ICAgICAga2RicCgiS0RCOiBjbWQgYnVmZmVyIG92ZXJmbG93XG4iKTsKICAgICAgICAgICAgICAg
IGNvbnNvbGVfcHV0YyhLX0JFTEwpOwogICAgICAgICAgICB9CiAgICAgICAgICAgIG1lbXNldChj
bWRzX2EsIDAsIEtfQ01EX0JVRlNaKTsKICAgICAgICAgICAgaSA9IDA7CiAgICAgICAgICAgIGtk
YnAocHJvbXB0KTsKICAgICAgICAgICAgY29udGludWU7CiAgICAgICAgfQogICAgICAgIGlmICgh
a2RiX2tleV92YWxpZChrZXkpKSB7CiAgICAgICAgICAgIGNvbnNvbGVfcHV0YyhLX0JFTEwpOwog
ICAgICAgICAgICBjb250aW51ZTsKICAgICAgICB9CiAgICAgICAgaWYgKGtleSA9PSBLX0JBQ0tT
UEFDRSkgewogICAgICAgICAgICBpZiAoaT09MCkgewogICAgICAgICAgICAgICAgY29uc29sZV9w
dXRjKEtfQkVMTCk7CiAgICAgICAgICAgICAgICBjb250aW51ZTsKICAgICAgICAgICAgfSBlbHNl
IAogICAgICAgICAgICAgICAgY21kc19hWy0taV0gPSAnXDAnOwogICAgICAgICAgICAgICAgY29u
c29sZV9wdXRjKEtfQkFDS1NQQUNFKTsKICAgICAgICAgICAgICAgIGNvbnNvbGVfcHV0YygnICcp
OyAgICAgICAgLyogZXJhc2UgY2hhcmFjdGVyICovCiAgICAgICAgfSBlbHNlCiAgICAgICAgICAg
IGNtZHNfYVtpKytdID0ga2V5OwoKICAgICAgICBjb25zb2xlX3B1dGMoa2V5KTsKCiAgICB9IHdo
aWxlIChrZXkgIT0gJ1xuJyk7CgogICAgcmV0dXJuIGNtZHNfYTsKfQoKLyoKICogcHJpbnRrIHRh
a2VzIGEgbG9jaywgYW4gTk1JIGNvdWxkIGNvbWUgaW4gYWZ0ZXIgdGhhdCwgYW5kIGFub3RoZXIg
Y3B1IG1heSAKICogc3Bpbi4gYWxzbywgdGhlIGNvbnNvbGUgbG9jayBpcyBmb3JjZWQgdW5sb2Nr
LCBzbyBwYW5pYyBpcyBiZWVuIHNlZW4gb24gCiAqIDggd2F5LiBoZW5jZSwgbm8gcHJpbnRrKCkg
Y2FsbHMuCiAqLwpzdGF0aWMgdm9sYXRpbGUgaW50IGtkYnBfZ2F0ZSA9IDA7CnZvaWQKa2RicChj
b25zdCBjaGFyICpmbXQsIC4uLikKewogICAgc3RhdGljIGNoYXIgYnVmWzEwMjRdOwogICAgdmFf
bGlzdCBhcmdzOwogICAgY2hhciAqcDsKICAgIGludCBpPTA7CgogICAgd2hpbGUgKChfX2NtcHhj
aGcoJmtkYnBfZ2F0ZSwgMCwxLCBzaXplb2Yoa2RicF9nYXRlKSkgIT0gMCkgJiYgaSsrPDEwMDAp
CiAgICAgICAgbWRlbGF5KDEwKTsKCiAgICB2YV9zdGFydChhcmdzLCBmbXQpOwogICAgKHZvaWQp
dnNucHJpbnRmKGJ1Ziwgc2l6ZW9mKGJ1ZiksIGZtdCwgYXJncyk7CiAgICB2YV9lbmQoYXJncyk7
CgogICAgZm9yIChwPWJ1ZjsgKnAgIT0gJ1wwJzsgcCsrKQogICAgICAgIGNvbnNvbGVfcHV0Yygq
cCk7CiAgICBrZGJwX2dhdGUgPSAwOwp9CgoKLyoKICogY29weS9yZWFkIG1hY2hpbmUgbWVtb3J5
LiAKICogUkVUVVJOUzogbnVtYmVyIG9mIGJ5dGVzIGNvcGllZCAKICovCmludAprZGJfcmVhZF9t
bWVtKGtkYm1hX3QgbWFkZHIsIGtkYmJ5dF90ICpkYnVmLCBpbnQgbGVuKQp7CiAgICB1bG9uZyBy
ZW1haW4sIG9yaWc9bGVuOwoKICAgIHdoaWxlIChsZW4gPiAwKSB7CiAgICAgICAgdWxvbmcgcGFn
ZWNudCA9IG1pbl90KGxvbmcsIFBBR0VfU0laRS0obWFkZHImflBBR0VfTUFTSyksIGxlbik7CiAg
ICAgICAgY2hhciAqdmEgPSBtYXBfZG9tYWluX3BhZ2UobWFkZHIgPj4gUEFHRV9TSElGVCk7Cgog
ICAgICAgIHZhID0gdmEgKyAobWFkZHIgJiAoUEFHRV9TSVpFLTEpKTsgICAgICAgIC8qIGFkZCBw
YWdlIG9mZnNldCAqLwogICAgICAgIHJlbWFpbiA9IF9fY29weV9mcm9tX3VzZXIoZGJ1ZiwgKHZv
aWQgKil2YSwgcGFnZWNudCk7CiAgICAgICAgS0RCR1AxKCJtYWRkcjoleCB2YTolcCBsZW46JXgg
cGFnZWNudDoleCByZW06JXhcbiIsIAogICAgICAgICAgICAgICBtYWRkciwgdmEsIGxlbiwgcGFn
ZWNudCwgcmVtYWluKTsKICAgICAgICB1bm1hcF9kb21haW5fcGFnZSh2YSk7CiAgICAgICAgbGVu
ID0gbGVuICAtIChwYWdlY250IC0gcmVtYWluKTsKICAgICAgICBpZiAocmVtYWluICE9IDApCiAg
ICAgICAgICAgIGJyZWFrOwogICAgICAgIG1hZGRyICs9IHBhZ2VjbnQ7CiAgICAgICAgZGJ1ZiAr
PSBwYWdlY250OwogICAgfQogICAgcmV0dXJuIG9yaWcgLSBsZW47Cn0KCgovKgogKiBjb3B5L3Jl
YWQgZ3Vlc3Qgb3IgaHlwZXJ2aXNvciBtZW1vcnkuIChkb21pZCA9PSBET01JRF9JRExFKSA9PiBo
eXAKICogUkVUVVJOUzogbnVtYmVyIG9mIGJ5dGVzIGNvcGllZCAKICovCmludAprZGJfcmVhZF9t
ZW0oa2RidmFfdCBzYWRkciwga2RiYnl0X3QgKmRidWYsIGludCBsZW4sIGRvbWlkX3QgZG9taWQp
CnsKICAgIHJldHVybiAobGVuIC0gZGJnX3J3X21lbShzYWRkciwgZGJ1ZiwgbGVuLCBkb21pZCwg
MCwgMCkpOwp9CgovKgogKiB3cml0ZSBndWVzdCBvciBoeXBlcnZpc29yIG1lbW9yeS4gKGRvbWlk
ID09IERPTUlEX0lETEUpID0+IGh5cAogKiBSRVRVUk5TOiBudW1iZXIgb2YgYnl0ZXMgd3JpdHRl
bgogKi8KaW50CmtkYl93cml0ZV9tZW0oa2RidmFfdCBkYWRkciwga2RiYnl0X3QgKnNidWYsIGlu
dCBsZW4sIGRvbWlkX3QgZG9taWQpCnsKICAgIHJldHVybiAobGVuIC0gZGJnX3J3X21lbShkYWRk
ciwgc2J1ZiwgbGVuLCBkb21pZCwgMSwgMCkpOwp9CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAa2RiL2luY2x1ZGUv
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADAwMDA3NzUAMDAwMjc1NgAwMDAyNzU2ADAw
MDAwMDAwMDAwADEyMDE3NTA0MDIxADAxMzE2MwAgNQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAB1c3RhciAgAG1yYXRob3IAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
bXJhdGhvcgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABrZGIvaW5jbHVkZS9r
ZGJkZWZzLmgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAMDAwMDY2NAAwMDAyNzU2ADAwMDI3NTYAMDAw
MDAwMDYzNzUAMTE3NjU0NjU1NTYAMDE1MDA1ACAwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAHVzdGFyICAAbXJhdGhvcgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABt
cmF0aG9yAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAC8qCiAqIENvcHlyaWdo
dCAoQykgMjAwOSwgTXVrZXNoIFJhdGhvciwgT3JhY2xlIENvcnAuICBBbGwgcmlnaHRzIHJlc2Vy
dmVkLgogKgogKiBUaGlzIHByb2dyYW0gaXMgZnJlZSBzb2Z0d2FyZTsgeW91IGNhbiByZWRpc3Ry
aWJ1dGUgaXQgYW5kL29yCiAqIG1vZGlmeSBpdCB1bmRlciB0aGUgdGVybXMgb2YgdGhlIEdOVSBH
ZW5lcmFsIFB1YmxpYwogKiBMaWNlbnNlIHYyIGFzIHB1Ymxpc2hlZCBieSB0aGUgRnJlZSBTb2Z0
d2FyZSBGb3VuZGF0aW9uLgogKgogKiBUaGlzIHByb2dyYW0gaXMgZGlzdHJpYnV0ZWQgaW4gdGhl
IGhvcGUgdGhhdCBpdCB3aWxsIGJlIHVzZWZ1bCwKICogYnV0IFdJVEhPVVQgQU5ZIFdBUlJBTlRZ
OyB3aXRob3V0IGV2ZW4gdGhlIGltcGxpZWQgd2FycmFudHkgb2YKICogTUVSQ0hBTlRBQklMSVRZ
IG9yIEZJVE5FU1MgRk9SIEEgUEFSVElDVUxBUiBQVVJQT1NFLiAgU2VlIHRoZSBHTlUKICogR2Vu
ZXJhbCBQdWJsaWMgTGljZW5zZSBmb3IgbW9yZSBkZXRhaWxzLgogKgogKiBZb3Ugc2hvdWxkIGhh
dmUgcmVjZWl2ZWQgYSBjb3B5IG9mIHRoZSBHTlUgR2VuZXJhbCBQdWJsaWMKICogTGljZW5zZSBh
bG9uZyB3aXRoIHRoaXMgcHJvZ3JhbTsgaWYgbm90LCB3cml0ZSB0byB0aGUKICogRnJlZSBTb2Z0
d2FyZSBGb3VuZGF0aW9uLCBJbmMuLCA1OSBUZW1wbGUgUGxhY2UgLSBTdWl0ZSAzMzAsCiAqIEJv
c3RvbiwgTUEgMDIxMTEwLTEzMDcsIFVTQS4KICovCgojaWZuZGVmIF9LREJERUZTX0gKI2RlZmlu
ZSBfS0RCREVGU19ICgovKiByZWFzb24gd2UgYXJlIGVudGVyaW5nIGtkYm1haW4gKGJwID09IGJy
eakpoint) */
typedef enum {
    KDB_REASON_KEYBOARD=1,  /* Keyboard entry - always 1 */
    KDB_REASON_BPEXCP,      /* #BP excp: sw bp (INT3) */
    KDB_REASON_DBEXCP,      /* #DB excp: TF flag or HW bp */
    KDB_REASON_PAUSE_IPI,   /* received pause IPI from another CPU */
} kdb_reason_t;


/* cpu state: past, present, and future */
typedef enum {
    KDB_CPU_INVAL=0,     /* invalid value. not in or leaving kdb */
    KDB_CPU_QUIT,        /* main cpu does GO. all others do QUIT */
    KDB_CPU_PAUSE,       /* cpu is paused */
    KDB_CPU_DISABLE,     /* disable interrupts */
    KDB_CPU_SHOWPC,      /* all cpus must display their pc */
    KDB_CPU_DO_VMEXIT,   /* all cpus must do vmcs vmexit. intel only */
    KDB_CPU_MAIN_KDB,    /* cpu in kdb main command loop */
    KDB_CPU_GO,          /* user entered go for this cpu */
    KDB_CPU_SS,          /* single step for this cpu */
    KDB_CPU_NI,          /* go to next instr after the call instr */
    KDB_CPU_INSTALL_BP,  /* delayed install of sw bp(s) by this cpu */
} kdb_cpu_cmd_t;

/* ============= kdb commands ============================================== */

typedef kdb_cpu_cmd_t (*kdb_func_t)(int, const char **, struct cpu_user_regs *);
typedef kdb_cpu_cmd_t (*kdb_usgf_t)(void);

typedef enum {
    KDB_REPEAT_NONE = 0,    /* Do not repeat this command */
    KDB_REPEAT_NO_ARGS,     /* Repeat the command without arguments */
    KDB_REPEAT_WITH_ARGS,   /* Repeat the command including its arguments */
} kdb_repeat_t;

typedef struct _kdbtab {
    char        *kdb_cmd_name;        /* Command name */
    kdb_func_t   kdb_cmd_func;        /* ptr to function to execute command */
    kdb_usgf_t   kdb_cmd_usgf;        /* usage function ptr */
    int          kdb_cmd_crash_avail; /* available in sys fatal/crash state */
    kdb_repeat_t kdb_cmd_repeat;      /* Does command auto repeat on enter? */
} kdbtab_t;


/* ============= types and stuff ========================================== */
#define BFD_INVAL (~0UL)            /* invalid bfd_vma */

#if defined(__x86_64__)
  #define KDBIP rip
  #define KDBSP rsp
#else
  #define KDBIP eip
  #define KDBSP esp
#endif

/* ============= macros =================================================== */
extern volatile int kdbdbg;
#define KDBGP(...) {(kdbdbg) ? kdbp(__VA_ARGS__):0;}
#define KDBGP1(...) {(kdbdbg>1) ? kdbp(__VA_ARGS__):0;}
#define KDBGP2(...) {(kdbdbg>2) ? kdbp(__VA_ARGS__):0;}
#define KDBGP3(...) {0;};

#define KDBMIN(x,y) (((x)<(y))?(x):(y))

#endif  /* !_KDBDEFS_H */
==== kdb/include/kdbinc.h ====
/*
 * Copyright (C) 2009, Mukesh Rathor, Oracle Corp.  All rights reserved.
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public
 * License v2 as published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
 * General Public License for more details.
 *
 * You should have received a copy of the GNU General Public
 * License along with this program; if not, write to the
 * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
 * Boston, MA 021110-1307, USA.
 */

#ifndef _KDBINC_H
#define _KDBINC_H

#include <xen/compile.h>
#include <xen/config.h>
#include <xen/version.h>
#include <xen/compat.h>
#include <xen/init.h>
#include <xen/lib.h>
#include <xen/errno.h>
#include <xen/sched.h>
#include <xen/domain.h>
#include <xen/mm.h>
#include <xen/event.h>
#include <xen/time.h>
#include <xen/console.h>
#include <xen/softirq.h>
#include <xen/domain_page.h>
#include <xen/rangeset.h>
#include <xen/guest_access.h>
#include <xen/hypercall.h>
#include <xen/delay.h>
#include <xen/shutdown.h>
#include <xen/percpu.h>
#include <xen/multicall.h>
#include <xen/rcupdate.h>
#include <xen/ctype.h>
#include <xen/symbols.h>
#include <xen/shutdown.h>
#include <xen/serial.h>
#include <xen/grant_table.h>
#include <asm/debugger.h>
#include <asm/shared.h>
#include <asm/apicdef.h>

#include <asm/nmi.h>
#include <asm/p2m.h>
#include <asm/debugreg.h>
#include <public/sched.h>
#include <public/vcpu.h>
#ifdef _XEN_LATEST
#include <xsm/xsm.h>
#endif

#include <asm/hvm/vmx/vmx.h>

#include "kdb_extern.h"
#include "kdbdefs.h"
#include "kdbproto.h"

#endif /* !_KDBINC_H */
==== kdb/include/kdb_extern.h ====
/*
 * Copyright (C) 2009, Mukesh Rathor, Oracle Corp.  All rights reserved.
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public
 * License v2 as published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
 * General Public License for more details.
 *
 * You should have received a copy of the GNU General Public
 * License along with this program; if not, write to the
 * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
 * Boston, MA 021110-1307, USA.
 */

#ifndef _KDB_EXTERN_H
#define _KDB_EXTERN_H

#define KDB_TRAP_FATAL     1    /* trap is fatal. can't resume from kdb */
#define KDB_TRAP_NONFATAL  2    /* can resume from kdb */
#define KDB_TRAP_KDBSTACK  3    /* to debug kdb itself. dump kdb stack */

/* following can be called from anywhere in xen to debug */
extern void kdb_trap_immed(int);
extern void kdbtrc(unsigned int, unsigned int, uint64_t, uint64_t, uint64_t);
extern void kdbp(const char *fmt, ...);

typedef unsigned long kdbva_t;
typedef unsigned char kdbbyt_t;
typedef unsigned long kdbma_t;

extern unsigned long kdb_dr7;


extern volatile int kdb_session_begun;
extern volatile int kdb_enabled;
extern void kdb_init(void);
extern int kdb_keyboard(struct cpu_user_regs *);
extern void kdb_ssni_reenter(struct cpu_user_regs *);
extern int kdb_handle_trap_entry(int, struct cpu_user_regs *);
extern int kdb_trap_fatal(int, struct cpu_user_regs *);  /* fatal with regs */
extern void kdb_dump_vmcs(uint16_t did, int vid);
void kdb_dump_vmcb(uint16_t did, int vid);
extern void kdb_dump_time_pcpu(void);


#define VMPTRST_OPCODE  ".byte 0x0f,0xc7\n"     /* reg/opcode: /7 */
#define MODRM_EAX_07    ".byte 0x38\n"          /* [EAX], with reg/opcode: /7 */
static inline void __vmptrst(u64 *addr)
{
    asm volatile ( VMPTRST_OPCODE
                   MODRM_EAX_07
                   :
                   : "a" (addr)
                   : "memory");
}

#define is_hvm_or_hyb_domain is_hvm_domain
#define is_hvm_or_hyb_vcpu is_hvm_vcpu
#define is_hybrid_vcpu(x) (0)


#endif  /* _KDB_EXTERN_H */
==== kdb/include/kdbproto.h ====
/*
 * Copyright (C) 2009, Mukesh Rathor, Oracle Corp.  All rights reserved.
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public
 * License v2 as published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
 * General Public License for more details.
 *
 * You should have received a copy of the GNU General Public
 * License along with this program; if not, write to the
 * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
 * Boston, MA 021110-1307, USA.
 */

#ifndef _KDBPROTO_H
#define _KDBPROTO_H

/* hypervisor interfaces use by kdb or kdb interfaces in xen files */
extern void console_putc(char);
extern int console_getc(void);
extern void show_trace(struct cpu_user_regs *);
extern void kdb_dump_timer_queues(void);
extern void kdb_time_resume(int);
extern void kdb_print_sched_info(void);
extern void kdb_curr_cpu_flush_vmcs(void);
extern unsigned long address_lookup(char *);
extern void kdb_prnt_guest_mapped_irqs(void);

/* kdb globals */
extern kdbtab_t *kdb_cmd_tbl;
extern char kdb_prompt[32];
extern volatile int kdb_sys_crash;
extern volatile kdb_cpu_cmd_t kdb_cpu_cmd[NR_CPUS];
extern volatile int kdb_trcon;

/* kdb interfaces */
extern void __init kdb_io_init(void);
extern void kdb_init_cmdtab(void);
extern void kdb_do_cmds(struct cpu_user_regs *);
extern int kdb_check_sw_bkpts(struct cpu_user_regs *);
extern int kdb_check_watchpoints(struct cpu_user_regs *);
extern void kdb_do_watchpoints(kdbva_t, int, int);
extern void kdb_install_watchpoints(void);
extern void kdb_clear_wps(int);
extern kdbma_t kdb_rd_dbgreg(int);



extern char *kdb_get_cmdline(char *);
extern void kdb_clear_prev_cmd(void);
extern void kdb_toggle_dis_syntax(void);
extern int kdb_check_call_instr(domid_t, kdbva_t);
extern void kdb_display_pc(struct cpu_user_regs *);
extern kdbva_t kdb_print_instr(kdbva_t, long, domid_t);
extern int kdb_read_mmem(kdbva_t, kdbbyt_t *, int);
extern int kdb_read_mem(kdbva_t, kdbbyt_t *, int, domid_t);
extern int kdb_write_mem(kdbva_t, kdbbyt_t *, int, domid_t);

extern void kdb_install_all_swbp(void);
extern void kdb_uninstall_all_swbp(void);
extern int kdb_swbp_exists(void);
extern void kdb_flush_swbp_table(void);
extern int kdb_is_addr_guest_text(kdbva_t, int);
extern kdbva_t kdb_guest_sym2addr(char *, domid_t);
extern char *kdb_guest_addr2sym(unsigned long, domid_t, ulong *);
extern void kdb_prnt_addr2sym(domid_t, kdbva_t, char *);
extern void kdb_sav_dom_syminfo(domid_t, long, long, long, long, long);
extern int kdb_guest_bitness(domid_t);
extern void kdb_nmi_pause_cpus(cpumask_t);

extern void kdb_trczero(void);
void kdb_trcp(void);



#endif /* !_KDBPROTO_H */
==== kdb/README ====
Welcome to kdb for xen, a hypervisor built in debugger.

FEATURES:
   - set breakpoints in hypervisor
   - examine virt/machine memory, registers, domains, vcpus, etc...
   - single step, single step till jump/call, step over call to next
     instruction after the call.
   - examine memory of a PV/HVM guest.
   - set breakpoints, single step, etc... for a PV guest.
   - breaking into the debugger will freeze the system, all CPUs will pause,
     no interrupts are acknowledged in the debugger. (Hence, the wall clock
     will drift)
   - single step will step only that cpu.
   - earlykdb: break into kdb very early during boot. Put "earlykdb" on the
               xen command line in grub.conf.
   - generic tracing functions (see below) for quick tracing to debug timing
     related problems. To use:
        o set KDBTRCMAX to max num of recs in circular trc buffer in kdbmain.c
        o call kdb_trc() from anywhere in xen
        o turn tracing on by setting kdb_trcon in kdbmain.c or trcon command.
        o trcp in kdb will give hints to dump trace recs. Use dd to see buffer
        o trcz will zero out the entire buffer if needed.

NOTE:
   - since almost all numbers are in hex, 0x is not prefixed. Instead, decimal
     numbers are preceded by $, as in $17 (sorry, one gets used to it). Note,
     vcpu num, cpu num, domid are always displayed in decimal, without $.
   - watchdog must be disabled to use kdb

ISSUES:
   - Currently, debug hypervisor is not supported. Make sure NDEBUG is defined
     or compile with debug=n
   - "timer went backwards" messages on dom0, but kdb/hyp should be fine.
     I usually do "echo 2 > /proc/sys/kernel/printk" when using kdb.
   - 32bit hypervisor may hang. Tested on 64bit hypervisor only.


TO BUILD:
 - do >make kdb=y

HOW TO USE:
  1. A serial line is needed to use the debugger. Set up a serial line
     from the source machine to target victim. Make sure the serial line
     is working properly by displaying login prompt and loging in etc....

  2. Add following to grub.conf:
        kernel /xen.kdb console=com1,vga com1=57600,8n1 dom0_mem=542M

        (57600 or whatever used in step 1 above)

  3. Boot the hypervisor built with the debugger.

  4. ctrl-\ (ctrl and backslash) will break into the debugger. If the system is
     badly hung, pressing NMI would also break into it. However, once kdb is
     entered via NMI, normal execution can't continue.

  5. type 'h' for list of commands.

  6. Command line editing is limited to backspace. ctrl-c to start a new cmd.



GUEST debug:
  - type sym in the debugger
  - for REL4, grep kallsyms_names, kallsyms_addresses, and kallsyms_num_syms
    in the guest System.map* file. Run sym again with domid and the three
    values on the command line.
  - Now basic symbols can be used for guest debug. Note, if the binary is not
    built with symbols, only function names are available, but not global vars.

    Eg: sym 0 c0696084 c068a590 c0696080 c06b43e8 c06b4740
        will set symbols for dom 0. Then :

        [4]xkdb> bp some_function 0

        wills set bp at some_function in dom 0

        [3]xkdb> dw c068a590 32 0 : display 32 bytes of dom0 memory


Tips:
  - In "[0]xkdb>"  : 0 is the cpu number in decimal
  - In
      00000000c042645c: 0:do_timer+17                  push %ebp
    0:do_timer : 0 is the domid in hex
    offset +17 is in hex.

    absense of 0: would indicate it's a hypervisor function

  - commands starting with kdb (kdb*) are for kdb debug only.


Finally,
 - think hex.
 - bug/problem: enter kdbdbg, reproduce, and send me the output.
   If the output is not enough, I may ask to run kdbdbg twice, then collect
   output.


Thanks,
Mukesh Rathor
Oracle Corporatin,
Redwood Shores, CA 94065

----------------------------------------------------------------------------
COMMAND DESCRIPTION:

info:  Print basic info like version, compile flags, etc..

cur:  print current domain id and vcpu id

f: display current stack. If a vcpu ptr is given, then print stack for that
   VCPU by using its IP and SP.

fg: display stack for a guest given domid, SP and IP.

dw: display words of memory. 'num' of bytes is optional, but if displaying guest
    memory, then is required.

dd: same as above, but display doublewords.

dwm: same as above but the address is machine address instead of virtual.

ddm: same as above, but display doublewords.

dr: display registers. if 'sp' is specified then print few extra registers.

drg: display guest context saved on stack bottom.

dis: disassemble instructions. If disassembling for guest, then 'num' must
     be specified. 'num' is number of instrs to display.

dism: toggle disassembly mode between Intel and ATT/GAS.

mw: modify word in memory given virtual address. 'domid' may be specified if
    modifying guest memory. value is assumed in hex even without 0x.

md: same as above but modify doubleword.

mr: modify register. value is assumd hex.

bc: clear given or all breakpoints

bp: display breakpoints or set a breakpoint. Domid may be specified to set a bp
    in guest. kdb functions may not be specified if debugging kdb.
    Example:
      xkdb> bp acpi_processor_idle  : will set bp in xen
      xkdb> bp default_idle 0 :   will set bp in domid 0
      xkdb> bp idle_cpu 9 :   will set bp in domid 9

     Conditions may be specified for a bp: lhs == rhs or lhs != rhs
     where : lhs is register like 'r6', 'rax', etc...  or memory location
             rhs is hex value with or without leading 0x.
     Thus,
      xkdb> bp acpi_processor_idle rdi == c000
      xkdb> bp 0xffffffff80062ebc 0 rsi == ffff880021edbc98 : will break into
            kdb at 0xffffffff80062ebc in dom0 when rsi is ffff880021edbc98

btp: break point trace. Upon bp, print some info and continue without stopping.
   Ex: btp idle_cpu 7 rax rbx 0x20ef5a5 r9

   will print: rax, rbx, *(long *)0x20ef5a5, r9 upon hitting idle_cpu() and
               continue.

wp: set a watchpoint at a virtual address which can belong to hypervisor or
    any guest. Do not specify wp in kdb path if debugging kdb.

wc: clear given or all watchpoints.

ni: single step, stepping over function calls.

ss: single step. Be carefull when in interrupt handlers or context switches.

ssb: single step to branch. Use with care.

go: leave kdb and continue.

cpu: go back to orig cpu when entering kdb. If 'cpu number' given, then switch
     to that cpu. If 'all' then show status of all cpus.

nmi: Only available in hung/crash state. Send NMI to a cpu that may be hung.

sym: Initialize a symbol table for debugging a guest. Look into the System.map
     file of guest for certain symbol values and provide them here.

vcpuh: Given vcpu ptr, display hvm_vcpu struct.

vcpu: Display current vcpu struct. If 'vcpu-ptr' given, display that vcpu.

dom: display current domain. If 'domid' then display that domid. If 'all', then
     display all domains.

sched: show schedular info and run queues.

mmu: print basic mmu info

p2m: convert a gpfn to mfn given a domid. value in hex even without 0x.

m2p: convert mfn to pfn. value in hex even without 0x.

dpage: display struct page given a mfn or struct page ptr. Since, no info is
       kept on page type, we display all possible page types.

dtrq: display timer queues.

didt: dump IDT table.

dgt: dump GDT table.

dirq: display IRQ bindings.

dvmc: display all or given dom/vcpu VMCS or VMCB.

trcon: turn tracing on. Trace hooks must be added in xen and kdb function
       called directly from there.

trcoff: turn tracing off.

trcz: zero trace buffer.

trcp: give hints to print the circular trace buffer, like current active ptr.

usr1: allows to add any arbitraty command quickly.

----------------------------------------------------------------------------
/*
 * Copyright (C) 2008 Oracle.  All rights reserved.
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public
 * License v2 as published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
 * General Public License for more details.
 *
 * You should have received a copy of the GNU General Public
 * License along with this program; if not, write to the
 * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
 * Boston, MA 021110-1307, USA.
 */
==== kdb/kdbmain.c ====
/*
 * Copyright (C) 2009, Mukesh Rathor, Oracle Corp.  All rights reserved.
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public
 * License v2 as published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
 * General Public License for more details.
 *
 * You should have received a copy of the GNU General Public
 * License along with this program; if not, write to the
 * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
 * Boston, MA 021110-1307, USA.
 */

#include "include/kdbinc.h"

static int kdbmain(kdb_reason_t, struct cpu_user_regs *);
static int kdbmain_fatal(struct cpu_user_regs *, int);
static const char *kdb_gettrapname(int);

/* ======================== GLOBAL VARIABLES =============================== */
/* All global variables used by KDB must be defined here only. Module specific
 * static variables must be declared in respective modules.
 */
kdbtab_t *kdb_cmd_tbl;
char kdb_prompt[32];

volatile kdb_cpu_cmd_t kdb_cpu_cmd[NR_CPUS];
cpumask_t kdb_cpu_traps;           /* bit per cpu to tell which cpus hit int3 */

#ifndef NDEBUG
    #error KDB is not supported on debug xen. Turn debug off
#endif

volatile int kdb_init_cpu = -1;           /* initial kdb cpu */
volatile int kdb_session_begun = 0;       /* active kdb session? */
volatile int kdb_enabled = 1;             /* kdb enabled currently? */
volatile int kdb_sys_crash = 0;           /* are we in crashed state? */
volatile int kdbdbg = 0;                  /* to debug kdb itself */

static volatile int kdb_trap_immed_reason = 0;   /* reason for immed trap */

static cpumask_t kdb_fatal_cpumask;       /* which cpus in fatal path */

/* return index of first bit set in val. if val is 0, retval is undefined */
static inline unsigned int kdb_firstbit(unsigned long val)
{
    __asm__ ( "bsf %1,%0" : "=r" (val) : "r" (val), "0" (BITS_PER_LONG) );
    return (unsigned int)val;
}

static void 
kdb_dbg_prnt_ctrps(char *label, int ccpu)
{
    int i;
    if (!kdbdbg)
        return;

    if (label || *label)
        kdbp("%s ", label);
    if (ccpu != -1)
        kdbp("ccpu:%d ", ccpu);
    kdbp("cputrps:");
    for (i=sizeof(kdb_cpu_traps)/sizeof(kdb_cpu_traps.bits[0]) - 1; i >=0; i--)
        kdbp(" %lx", kdb_cpu_traps.bits[i]);
    kdbp("\n");
}

/* 
 * Hold this cpu. Don't disable until all CPUs in kdb to avoid IPI deadlock 
 */
static void
kdb_hold_this_cpu(int ccpu, struct cpu_user_regs *regs)
{
    KDBGP("ccpu:%d hold. cmd:%x\n", kdb_cpu_cmd[ccpu]);
    do {
        for(; kdb_cpu_cmd[ccpu] == KDB_CPU_PAUSE; cpu_relax());
        KDBGP("ccpu:%d hold. cmd:%x\n", kdb_cpu_cmd[ccpu]);

        if (kdb_cpu_cmd[ccpu] == KDB_CPU_DISABLE) {
            local_irq_disable();
            kdb_cpu_cmd[ccpu] = KDB_CPU_PAUSE;
        }
        if (kdb_cpu_cmd[ccpu] == KDB_CPU_DO_VMEXIT) {
            kdb_curr_cpu_flush_vmcs();
            kdb_cpu_cmd[ccpu] = KDB_CPU_PAUSE;
        }
        if (kdb_cpu_cmd[ccpu] == KDB_CPU_SHOWPC) {
            kdbp("[%d]", ccpu);
            kdb_display_pc(regs);
            kdb_cpu_cmd[ccpu] = KDB_CPU_PAUSE;
        }
    } while (kdb_cpu_cmd[ccpu] == KDB_CPU_PAUSE);     /* No goto, eh! */
    KDBGP1("un hold: ccpu:%d cmd:%d\n", ccpu, kdb_cpu_cmd[ccpu]);
}

/*
 * Pause this cpu while one CPU does main kdb processing. If that CPU does
 * a "cpu switch" to this cpu, this cpu will become the main kdb cpu. If the
 * user next does single step of some sort, this function will be exited,
 * and this cpu will come back into kdb via kdb_handle_trap_entry function.
 */
static void 
kdb_pause_this_cpu(struct cpu_user_regs *regs, void *unused)
{
    kdbmain(KDB_REASON_PAUSE_IPI, regs);
}

/* pause other cpus via an IPI. Note, disabled CPUs can't receive IPIs until
 * enabled */
static void
kdb_smp_pause_cpus(void)
{
    int cpu, wait_count = 0;
    int ccpu = smp_processor_id();      /* current cpu */
    cpumask_t cpumask = cpu_online_map;

    cpumask_clear_cpu(smp_processor_id(), &cpumask);
    for_each_cpu(cpu, &cpumask)
        if (kdb_cpu_cmd[cpu] != KDB_CPU_INVAL) {
            kdbp("KDB: won't pause cpu:%d, cmd[cpu]=%d\n",cpu,kdb_cpu_cmd[cpu]);
            cpumask_clear_cpu(cpu, &cpumask);
        }
    KDBGP("ccpu:%d will pause cpus. mask:0x%lx\n", ccpu, cpumask.bits[0]);
#if XEN_SUBVERSION > 4 || XEN_VERSION == 4              /* xen 3.5.x or above */
    on_selected_cpus(&cpumask, (void (*)(void *))kdb_pause_this_cpu, 
                     "XENKDB", 0);
#else
    on_selected_cpus(cpumask, (void (*)(void *))kdb_pause_this_cpu, 
                     "XENKDB", 0, 0);
#endif
    mdelay(300);                     /* wait a bit for other CPUs to stop */
    while(wait_count++ < 10) {
        int bummer = 0;
        for_each_cpu(cpu, &
Y3B1bWFzaykKICAgICAgICAgICAgaWYgKGtkYl9jcHVfY21kW2NwdV0gIT0gS0RCX0NQVV9QQVVT
RSkKICAgICAgICAgICAgICAgIGJ1bW1lciA9IDE7CiAgICAgICAgaWYgKCFidW1tZXIpCiAgICAg
ICAgICAgIGJyZWFrOwogICAgICAgIGtkYnAoImNjcHU6JWQgdHJ5aW5nIHRvIHN0b3Agb3RoZXIg
Y3B1cy4uLlxuIiwgY2NwdSk7CiAgICAgICAgbWRlbGF5KDEwMCk7ICAvKiB3YWl0IDEwMCBtcyAq
LwogICAgfTsKICAgIGZvcl9lYWNoX2NwdShjcHUsICZjcHVtYXNrKSAgICAgICAgICAvKiBub3cg
Y2hlY2sgd2hvIGlzIHdpdGggdXMgKi8KICAgICAgICBpZiAoa2RiX2NwdV9jbWRbY3B1XSAhPSBL
REJfQ1BVX1BBVVNFKQogICAgICAgICAgICBrZGJwKCJCdW1tZXIgY3B1ICVkIG5vdCBwYXVzZWQu
IGNjcHU6JWRcbiIsIGNwdSxjY3B1KTsKICAgICAgICBlbHNlIHsKICAgICAgICAgICAga2RiX2Nw
dV9jbWRbY3B1XSA9IEtEQl9DUFVfRElTQUJMRTsgIC8qIHRlbGwgaXQgdG8gZGlzYWJsZSBpbnRz
ICovCiAgICAgICAgICAgIHdoaWxlIChrZGJfY3B1X2NtZFtjcHVdICE9IEtEQl9DUFVfUEFVU0Up
OwogICAgICAgIH0KfQoKLyogCiAqIERvIG9uY2UgcGVyIGtkYiBzZXNzaW9uOiAgQSBrZGIgc2Vz
c2lvbiBsYXN0cyBmcm9tIAogKiAgICBrZXlib3JkL0hXQlAvU1dCUCB0aWxsIEtEQl9DUFVfSU5T
VEFMTF9CUCBpcyBkb25lLiBXaXRoaW4gYSBzZXNzaW9uLAogKiAgICB1c2VyIG1heSBkbyBzZXZl
cmFsIGNwdSBzd2l0Y2hlcywgc2luZ2xlIHN0ZXAsIG5leHQgaW5zdHIsICBldGMuLgogKgogKiBE
TzogMS4gcGF1c2Ugb3RoZXIgY3B1cyBpZiB0aGV5IGFyZSBub3QgYWxyZWFkeS4gdGhleSB3b3Vs
ZCBhbHJlYWR5IGJlIAogKiAgICAgICAgaWYgd2UgYXJlIGluIHNpbmdsZSBzdGVwIG1vZGUKICog
ICAgIDIuIHdhdGNoZG9nX2Rpc2FibGUoKSAKICogICAgIDMuIHVuaW5zdGFsbCBhbGwgc3cgYnJl
YWtwb2ludHMgc28gdGhhdCB1c2VyIGRvZXNuJ3Qgc2VlIHRoZW0KICovCnN0YXRpYyB2b2lkCmtk
Yl9iZWdpbl9zZXNzaW9uKHZvaWQpCnsKICAgIGlmICgha2RiX3Nlc3Npb25fYmVndW4pIHsKICAg
ICAgICBrZGJfc2Vzc2lvbl9iZWd1biA9IDE7CiAgICAgICAga2RiX3NtcF9wYXVzZV9jcHVzKCk7
CiAgICAgICAgbG9jYWxfaXJxX2Rpc2FibGUoKTsKICAgICAgICB3YXRjaGRvZ19kaXNhYmxlKCk7
CiAgICAgICAga2RiX3VuaW5zdGFsbF9hbGxfc3dicCgpOwogICAgfQp9CgpzdGF0aWMgdm9pZApr
ZGJfc21wX3VucGF1c2VfY3B1cyhpbnQgY2NwdSkKewogICAgaW50IGNwdTsKCiAgICBpbnQgd2Fp
dF9jb3VudCA9IDA7CiAgICBjcHVtYXNrX3QgY3B1bWFzayA9IGNwdV9vbmxpbmVfbWFwOwoKICAg
IGNwdW1hc2tfY2xlYXJfY3B1KHNtcF9wcm9jZXNzb3JfaWQoKSwgJmNwdW1hc2spOwoKICAgIEtE
QkdQKCJrZGJfc21wX3VucGF1c2Vfb3RoZXJfY3B1cygpLiBjY3B1OiVkXG4iLCBjY3B1KTsKICAg
IGZvcl9lYWNoX2NwdShjcHUsICZjcHVtYXNrKQogICAgICAgIGtkYl9jcHVfY21kW2NwdV0gPSBL
REJfQ1BVX1FVSVQ7CgogICAgd2hpbGUod2FpdF9jb3VudCsrIDwgMTApIHsKICAgICAgICBpbnQg
YnVtbWVyID0gMDsKICAgICAgICBmb3JfZWFjaF9jcHUoY3B1LCAmY3B1bWFzaykKICAgICAgICAg
ICAgaWYgKGtkYl9jcHVfY21kW2NwdV0gIT0gS0RCX0NQVV9JTlZBTCkKICAgICAgICAgICAgICAg
IGJ1bW1lciA9IDE7CiAgICAgICAgICAgIGlmICghYnVtbWVyKQogICAgICAgICAgICAgICAgYnJl
YWs7CiAgICAgICAgICAgIG1kZWxheSg5MCk7ICAvKiB3YWl0IDkwIG1zLCA1MCB0b28gc2hvcnQg
b24gbGFyZ2Ugc3lzdGVtcyAqLwogICAgfTsKICAgIC8qIG5vdyBtYWtlIHN1cmUgdGhleSBhcmUg
YWxsIGluIHRoZXJlICovCiAgICBmb3JfZWFjaF9jcHUoY3B1LCAmY3B1bWFzaykKICAgICAgICBp
ZiAoa2RiX2NwdV9jbWRbY3B1XSAhPSBLREJfQ1BVX0lOVkFMKQogICAgICAgICAgICBrZGJwKCJL
REI6IGNwdSAlZCBzdGlsbCBwYXVzZWQgKGNtZD09JWQpLiBjY3B1OiVkXG4iLAogICAgICAgICAg
ICAgICAgIGNwdSwga2RiX2NwdV9jbWRbY3B1XSwgY2NwdSk7Cn0KCi8qCiAqIEVuZCBvZiBLREIg
c2Vzc2lvbi4gCiAqICAgVGhpcyBpcyBjYWxsZWQgYXQgdGhlIHZlcnkgZW5kLiBJbiBjYXNlIG9m
IG11bHRpcGxlIGNwdXMgaGl0dGluZyBCUHMKICogICBhbmQgc2l0dGluZyBvbiBhIHRyYXAgaGFu
ZGxlcnMsIHRoZSBsYXN0IGNwdSB0byBleGl0IHdpbGwgY2FsbCB0aGlzLgogKiAgIC0gaXNuc3Rh
bGwgYWxsIHN3IGJyZWFrcG9pbnRzLCBhbmQgcHVyZ2UgZGVsZXRlZCBvbmVzIGZyb20gdGFibGUu
CiAqICAgLSBjbGVhciBURiBoZXJlIGFsc28gaW4gY2FzZSBnbyBpcyBlbnRlcmVkIG9uIGEgZGlm
ZmVyZW50IGNwdSBhZnRlciBzd2l0Y2gKICovCnN0YXRpYyB2b2lkCmtkYl9lbmRfc2Vzc2lvbihp
bnQgY2NwdSwgc3RydWN0IGNwdV91c2VyX3JlZ3MgKnJlZ3MpCnsKICAgIEFTU0VSVCghY3B1bWFz
a19lbXB0eSgma2RiX2NwdV90cmFwcykpOwogICAgQVNTRVJUKGtkYl9zZXNzaW9uX2JlZ3VuKTsK
ICAgIGtkYl9pbnN0YWxsX2FsbF9zd2JwKCk7CiAgICBrZGJfZmx1c2hfc3dicF90YWJsZSgpOwog
ICAga2RiX2luc3RhbGxfd2F0Y2hwb2ludHMoKTsKCiAgICByZWdzLT5lZmxhZ3MgJj0gflg4Nl9F
RkxBR1NfVEY7CiAgICBrZGJfY3B1X2NtZFtjY3B1XSA9IEtEQl9DUFVfSU5WQUw7CiAgICBrZGJf
dGltZV9yZXN1bWUoMSk7CiAgICBrZGJfc2Vzc2lvbl9iZWd1biA9IDA7ICAgICAgLyogYmVmb3Jl
IHVucGF1c2UgZm9yIGtkYl9pbnN0YWxsX3dhdGNocG9pbnRzICovCiAgICBrZGJfc21wX3VucGF1
c2VfY3B1cyhjY3B1KTsKICAgIHdhdGNoZG9nX2VuYWJsZSgpOwogICAgS0RCR1AoImVuZF9zZXNz
aW9uOmNjcHU6JWRcbiIsIGNjcHUpOwp9CgovKiAKICogY2hlY2sgaWYgd2UgZW50ZXJlZCBrZGIg
YmVjYXVzZSBvZiBEQiB0cmFwLiBJZiB5ZXMsIHRoZW4gY2hlY2sgaWYKICogd2UgY2F1c2VkIGl0
IG9yIHNvbWVvbmUgZWxzZS4KICogUkVUVVJOUzogMCA6IG5vdCBvbmUgb2Ygb3Vycy4gaHlwZXJ2
aXNvciBtdXN0IGhhbmRsZSBpdC4gCiAqICAgICAgICAgIDEgOiAjREIgZm9yIGRlbGF5ZWQgc3cg
YnAgaW5zdGFsbC4gCiAqICAgICAgICAgIDIgOiB0aGlzIGNwdSBtdXN0IHN0YXkgaW4ga2RiLgog
Ki8Kc3RhdGljIG5vaW5saW5lIGludAprZGJfY2hlY2tfZGJ0cmFwKGtkYl9yZWFzb25fdCAqcmVh
c3AsIGludCBzc19tb2RlLCBzdHJ1Y3QgY3B1X3VzZXJfcmVncyAqcmVncykgCnsKICAgIGludCBy
YyA9IDI7CiAgICBpbnQgY2NwdSA9IHNtcF9wcm9jZXNzb3JfaWQoKTsKCiAgICAvKiBEQiBleGNw
IGNhdXNlZCBieSBodyBicmVha3BvaW50IG9yIHRoZSBURiBmbGFnLiBUaGUgVEYgZmxhZyBpcyBz
ZXQKICAgICAqIGJ5IHVzIGZvciBzcyBtb2RlIG9yIHRvIGluc3RhbGwgYnJlYWtwb2ludHMuIElu
IHNzIG1vZGUsIG5vbmUgb2YgdGhlCiAgICAgKiBicmVha3BvaW50cyBhcmUgaW5zdGFsbGVkLiBD
aGVjayB0byBtYWtlIHN1cmUgd2UgaW50ZW5kZWQgQlAgSU5TVEFMTAogICAgICogc28gd2UgZG9u
J3QgZG8gaXQgb24gYSBzcHVyaW91cyBEQiB0cmFwLgogICAgICogY2hlY2sgZm9yIGtkYl9jcHVf
dHJhcHMgaGVyZSBhbHNvLCBiZWNhdXNlIGVhY2ggY3B1IHNpdHRpbmcgb24gYSB0cmFwCiAgICAg
KiBtdXN0IGV4ZWN1dGUgdGhlIGluc3RydWN0aW9uIHdpdGhvdXQgdGhlIEJQIGJlZm9yZSBwYXNz
aW5nIGNvbnRyb2wKICAgICAqIHRvIG5leHQgY3B1IGluIGtkYl9jcHVfdHJhcHMuCiAgICAgKi8K
ICAgIGlmICgqcmVhc3AgPT0gS0RCX1JFQVNPTl9EQkVYQ1AgJiYgIXNzX21vZGUpIHsKICAgICAg
ICBpZiAoa2RiX2NwdV9jbWRbY2NwdV0gPT0gS0RCX0NQVV9JTlNUQUxMX0JQKSB7CiAgICAgICAg
ICAgIGlmICghY3B1bWFza19lbXB0eSgma2RiX2NwdV90cmFwcykpIHsKICAgICAgICAgICAgICAg
IGludCBhX3RyYXBfY3B1ID0gY3B1bWFza19maXJzdCgma2RiX2NwdV90cmFwcyk7CiAgICAgICAg
ICAgICAgICBLREJHUCgiY2NwdTolZCB0cmFwY3B1OiVkXG4iLCBjY3B1LCBhX3RyYXBfY3B1KTsK
ICAgICAgICAgICAgICAgIGtkYl9jcHVfY21kW2FfdHJhcF9jcHVdID0gS0RCX0NQVV9RVUlUOwog
ICAgICAgICAgICAgICAgKnJlYXNwID0gS0RCX1JFQVNPTl9QQVVTRV9JUEk7CiAgICAgICAgICAg
ICAgICByZWdzLT5lZmxhZ3MgJj0gflg4Nl9FRkxBR1NfVEY7ICAvKiBodm06IGV4aXQgaGFuZGxl
ciBzcyA9IDAgKi8KICAgICAgICAgICAgICAgIGtkYl9pbml0X2NwdSA9IC0xOwogICAgICAgICAg
ICB9IGVsc2UgewogICAgICAgICAgICAgICAga2RiX2VuZF9zZXNzaW9uKGNjcHUsIHJlZ3MpOwog
ICAgICAgICAgICAgICAgcmMgPSAxOwogICAgICAgICAgICB9CiAgICAgICAgfSBlbHNlIGlmICgh
IGtkYl9jaGVja193YXRjaHBvaW50cyhyZWdzKSkgewogICAgICAgICAgICByYyA9IDA7ICAgICAg
ICAgICAgICAgICAgICAgICAgLyogaHlwIG11c3QgaGFuZGxlIGl0ICovCiAgICAgICAgfQogICAg
fQogICAgcmV0dXJuIHJjOwp9CgovKiAKICogTWlzYyBwcm9jZXNzaW5nIG9uIGtkYiBlbnRyeSBs
aWtlIGRpc3BsYXlpbmcgUEMsIGFkanVzdCBJUCBmb3Igc3cgYnAuLi4uIAogKi8Kc3RhdGljIHZv
aWQKa2RiX21haW5fZW50cnlfbWlzYyhrZGJfcmVhc29uX3QgcmVhc29uLCBzdHJ1Y3QgY3B1X3Vz
ZXJfcmVncyAqcmVncywgCiAgICAgICAgICAgICAgICAgICAgaW50IGNjcHUsIGludCBzc19tb2Rl
LCBpbnQgZW5hYmxlZCkKewogICAgaWYgKHJlYXNvbiA9PSBLREJfUkVBU09OX0tFWUJPQVJEKQog
ICAgICAgIGtkYnAoIlxuRW50ZXIga2RiIChjcHU6JWQgcmVhc29uOiVkIHZjcHU9JWQgZG9taWQ6
JWQiCiAgICAgICAgICAgICAiIGVmbGc6MHglbHggaXJxczolZClcbiIsIGNjcHUsIHJlYXNvbiwg
Y3VycmVudC0+dmNwdV9pZCwgCiAgICAgICAgICAgICBjdXJyZW50LT5kb21haW4tPmRvbWFpbl9p
ZCwgcmVncy0+ZWZsYWdzLCBlbmFibGVkKTsKICAgIGVsc2UgaWYgKHNzX21vZGUpCiAgICAgICAg
S0RCR1AxKCJLREJHOiBLREIgc2luZ2xlIHN0ZXAgbW9kZS4gY2NwdTolZFxuIiwgY2NwdSk7Cgog
ICAgaWYgKHJlYXNvbiA9PSBLREJfUkVBU09OX0JQRVhDUCAmJiAhc3NfbW9kZSkgCiAgICAgICAg
a2RicCgiQnJlYWtwb2ludCBvbiBjcHUgJWQgYXQgMHglbHhcbiIsIGNjcHUsIHJlZ3MtPktEQklQ
KTsKCiAgICAvKiBkaXNwbGF5IHRoZSBjdXJyZW50IFBDIGFuZCBpbnN0cnVjdGlvbiBhdCBpdCAq
LwogICAgaWYgKHJlYXNvbiAhPSBLREJfUkVBU09OX1BBVVNFX0lQSSkKICAgICAgICBrZGJfZGlz
cGxheV9wYyhyZWdzKTsKICAgIGNvbnNvbGVfc3RhcnRfc3luYygpOwp9CgovKiAKICogVGhlIE1B
SU4ga2RiIGZ1bmN0aW9uLiBBbGwgY3B1cyBnbyB0aHJ1IHRoaXMuIElSUSBpcyBlbmFibGVkIG9u
IGVudHJ5IGJlY2F1c2UKICogYSBjcHUgY291bGQgaGl0IGEgYnAgc2V0IGluIGRpc2FibGVkIGNv
ZGUuCiAqIElQSTogRXZlbiB0aGUgbWFpbiBjcHUgbXVzdCBlbmFibGUgaW4gY2FzZSBhbm90aGVy
IENQVSBpcyB0cnlpbmcgdG8gSVBJIHVzLgogKiAgICAgIFRoYXQgd2F5LCBpdCB3b3VsZCBJUEkg
dXMsIHRoZW4gZ2V0IG91dCBhbmQgYmUgcmVhZHkgZm9yIG91ciBwYXVzZSBJUEkuCiAqIElSUXM6
IFRoZSByZWFzb24gaXJxcyBlbmFibGUvZGlzYWJsZSBpcyBzY2F0dGVyZWQgaXMgYmVjYXVzZSBv
biBhIHR5cGljYWwKICogICAgICAgc3lzdGVtIElQSXMgYXJlIGNvbnN0YW50bHkgZ29pbmcgb24g
YW1vbmdzIENQVXMgaW4gYSBzZXQgb2YgYW55IHNpemUuIAogKiAgICAgICBBcyBhIHJlc3VsdCwg
IHRvIGF2b2lkIGRlYWRsb2NrLCBjcHVzIGhhdmUgdG8gbG9vcCBlbmFibGVkLCB1bnRpbCBhIAog
KiAgICAgICBxdW9ydW0gaXMgZXN0YWJsaXNoZWQgYW5kIHRoZSBzZXNzaW9uIGhhcyBiZWd1bi4K
ICogU3RlcDogSW50ZWwgVm9sM0IgMTguMy4xLjQgOiBBbiBleHRlcm5hbCBpbnRlcnJ1cHQgbWF5
IGJlIHNlcnZpY2VkIHVwb24KICogICAgICAgc2luZ2xlIHN0ZXAuIFNpbmNlLCB0aGUgbGlrZWx5
IGV4dCB0aW1lcl9pbnRlcnJ1cHQgYW5kIAogKiAgICAgICBhcGljX3RpbWVyX2ludGVycnVwdCBk
b250JyBtZXNzIHdpdGggdGltZSBkYXRhIHN0cnVjdHMsIHdlIGFyZSBwcm9iIE9LCiAqICAgICAg
IGxlYXZpbmcgZW5hYmxlZC4KICogVGltZTogVmVyeSBtZXNzeS4gTW9zdCBwbGF0Zm9ybSB0aW1l
cnMgYXJlIHJlYWRvbmx5LCBzbyB3ZSBjYW4ndCBzdG9wIHRpbWUKICogICAgICAgaW4gdGhlIGRl
YnVnZ2VyLiBXZSB0YWtlIHRoZSBvbmx5IHJlc29ydCwgbGV0IHRoZSBUU0MgYW5kIHBsdCBydW4g
YXMKICogICAgICAgbm9ybWFsLCB1cG9uIGxlYXZpbmcsICJhdHRlbXB0IiB0byBicmluZyBldmVy
eWJvZHkgdG8gY3VycmVudCB0aW1lLgogKiBrZGJjcHV0cmFwczogYml0IHBlciBjcHUuIGVhY2gg
Y3B1IHNldHMgaXQgYml0IGluIGVudHJ5LlMuIFRoZSBiaXQgaXMgCiAqICAgICAgICAgICAgICBy
ZWxpYWJsZSBiZWNhdXNlIHVwb24gdHJhcHMsIEludHMgYXJlIGRpc2FibGVkLiB0aGUgYml0IGlz
IHNldAogKiAgICAgICAgICAgICAgYmVmb3JlIEludHMgYXJlIGVuYWJsZWQuCiAqCiAqIFJFVFVS
TlM6IDAgOiBrZGIgd2FzIGNhbGxlZCBmb3IgZXZlbnQgaXQgd2FzIG5vdCByZXNwb25zaWJsZQog
KiAgICAgICAgICAxIDogZXZlbnQgb3duZWQgYW5kIGhhbmRsZWQgYnkga2RiIAogKi8Kc3RhdGlj
IGludAprZGJtYWluKGtkYl9yZWFzb25fdCByZWFzb24sIHN0cnVjdCBjcHVfdXNlcl9yZWdzICpy
ZWdzKQp7CiAgICBpbnQgY2NwdSA9IHNtcF9wcm9jZXNzb3JfaWQoKTsgICAgICAgICAgICAgICAg
LyogY3VycmVudCBjcHUgKi8KICAgIGludCByYyA9IDEsIGNtZCA9IGtkYl9jcHVfY21kW2NjcHVd
OwogICAgaW50IHNzX21vZGUgPSAoY21kID09IEtEQl9DUFVfU1MgfHwgY21kID09IEtEQl9DUFVf
TkkpOwogICAgaW50IGRlbGF5ZWRfaW5zdGFsbCA9IChrZGJfY3B1X2NtZFtjY3B1XSA9PSBLREJf
Q1BVX0lOU1RBTExfQlApOwogICAgaW50IGVuYWJsZWQgPSBsb2NhbF9pcnFfaXNfZW5hYmxlZCgp
OwoKICAgIEtEQkdQKCJrZGJtYWluOmNjcHU6JWQgcnNuOiVkIGVmbGdzOjB4JWx4IGNtZDolZCBp
bml0YzolZCBpcnFzOiVkICIKICAgICAgICAgICJyZWdzOiVseCBJUDolbHggIiwgY2NwdSwgcmVh
c29uLCByZWdzLT5lZmxhZ3MsIGNtZCwgCiAgICAgICAgICBrZGJfaW5pdF9jcHUsIGVuYWJsZWQs
IHJlZ3MsIHJlZ3MtPktEQklQKTsKICAgIGtkYl9kYmdfcHJudF9jdHJwcygiIiwgLTEpOwoKICAg
IGlmICghc3NfbW9kZSAmJiAhZGVsYXllZF9pbnN0YWxsKSAgICAvKiBpbml0aWFsIGtkYiBlbnRl
ciAqLwogICAgICAgIGxvY2FsX2lycV9lbmFibGUoKTsgICAgICAgICAgICAgIC8qIHNvIHdlIGNh
biByZWNlaXZlIElQSSAqLwoKICAgIGlmICghc3NfbW9kZSAmJiBjY3B1ICE9IGtkYl9pbml0X2Nw
dSAmJiByZWFzb24gIT0gS0RCX1JFQVNPTl9QQVVTRV9JUEkpewogICAgICAgIGludCBzeiA9IHNp
emVvZihrZGJfaW5pdF9jcHUpOwogICAgICAgIHdoaWxlIChfX2NtcHhjaGcoJmtkYl9pbml0X2Nw
dSwgLTEsIGNjcHUsIHN6KSAhPSAtMSkKICAgICAgICAgICAgZm9yKDsga2RiX2luaXRfY3B1ICE9
IC0xOyBjcHVfcmVsYXgoKSk7CiAgICB9CiAgICBpZiAoa2RiX3Nlc3Npb25fYmVndW4pCiAgICAg
ICAgbG9jYWxfaXJxX2Rpc2FibGUoKTsgICAgICAgICAgICAgLyoga2RiIGFsd2F5cyBydW5zIGRp
c2FibGVkICovCgogICAgaWYgKHJlYXNvbiA9PSBLREJfUkVBU09OX0JQRVhDUCkgeyAgICAgICAg
ICAgICAvKiBJTlQgMyAqLwogICAgICAgIGNwdW1hc2tfY2xlYXJfY3B1KGNjcHUsICZrZGJfY3B1
X3RyYXBzKTsgICAvKiByZW1vdmUgb3Vyc2VsZiAqLwogICAgICAgIHJjID0ga2RiX2NoZWNrX3N3
X2JrcHRzKHJlZ3MpOwogICAgICAgIGlmIChyYyA9PSAwKSB7ICAgICAgICAgICAgICAgLyogbm90
IG9uZSBvZiBvdXJzLiBsZWF2ZSBrZGIgKi8KICAgICAgICAgICAga2RiX2luaXRfY3B1ID0gLTE7
CiAgICAgICAgICAgIGdvdG8gb3V0OwogICAgICAgIH0gZWxzZSBpZiAocmMgPT0gMSkgeyAgICAg
ICAgLyogb25lIG9mIG91cnMgYnV0IGRlbGV0ZWQgKi8KICAgICAgICAgICAgaWYgKGNwdW1hc2tf
ZW1wdHkoJmtkYl9jcHVfdHJhcHMpKSB7CiAgICAgICAgICAgICAgICBrZGJfZW5kX3Nlc3Npb24o
Y2NwdSxyZWdzKTsgICAgIAogICAgICAgICAgICAgICAga2RiX2luaXRfY3B1ID0gLTE7CiAgICAg
ICAgICAgICAgICBnb3RvIG91dDsKICAgICAgICAgICAgfSBlbHNlIHsgICAgICAgICAgICAgICAg
IAogICAgICAgICAgICAgICAgLyogcmVsZWFzZSBhbm90aGVyIHRyYXAgY3B1LCBhbmQgcHV0IG91
cnNlbGYgaW4gYSBwYXVzZSBtb2RlICovCiAgICAgICAgICAgICAgICBpbnQgYV90cmFwX2NwdSA9
IGNwdW1hc2tfZmlyc3QoJmtkYl9jcHVfdHJhcHMpOwogICAgICAgICAgICAgICAgS0RCR1AoImNj
cHU6JWQgY21kOiVkIHJzbjolZCBhdHJwY3B1OiVkIGluaXRjcHU6JWRcbiIsIGNjcHUsIAogICAg
ICAgICAgICAgICAgICAgICAga2RiX2NwdV9jbWRbY2NwdV0sIHJlYXNvbiwgYV90cmFwX2NwdSwg
a2RiX2luaXRfY3B1KTsKICAgICAgICAgICAgICAgIGtkYl9jcHVfY21kW2FfdHJhcF9jcHVdID0g
S0RCX0NQVV9RVUlUOwogICAgICAgICAgICAgICAgcmVhc29uID0gS0RCX1JFQVNPTl9QQVVTRV9J
UEk7CiAgICAgICAgICAgICAgICBrZGJfaW5pdF9jcHUgPSAtMTsKICAgICAgICAgICAgfQogICAg
ICAgIH0gZWxzZSBpZiAocmMgPT0gMikgeyAgICAgICAgLyogb25lIG9mIG91cnMgYnV0IGNvbmRp
dGlvbiBub3QgbWV0ICovCiAgICAgICAgICAgICAgICBrZGJfYmVnaW5fc2Vzc2lvbigpOwogICAg
ICAgICAgICAgICAgaWYgKGd1ZXN0X21vZGUocmVncykgJiYgaXNfaHZtX29yX2h5Yl92Y3B1KGN1
cnJlbnQpKQogICAgICAgICAgICAgICAgICAgIGN1cnJlbnQtPmFyY2guaHZtX3ZjcHUuc2luZ2xl
X3N0ZXAgPSAxOwogICAgICAgICAgICAgICAgZWxzZQogICAgICAgICAgICAgICAgICAgIHJlZ3Mt
PmVmbGFncyB8PSBYODZfRUZMQUdTX1RGOyAgCiAgICAgICAgICAgICAgICBrZGJfY3B1X2NtZFtj
Y3B1XSA9IEtEQl9DUFVfSU5TVEFMTF9CUDsKICAgICAgICAgICAgICAgIGdvdG8gb3V0OwogICAg
ICAgIH0KICAgIH0KCiAgICAvKiBmb2xsb3dpbmcgd2lsbCB0YWtlIGNhcmUgb2YgS0RCX0NQVV9J
TlNUQUxMX0JQLCBhbmQgYWxzbyByZWxlYXNlCiAgICAgKiBrZGJfaW5pdF9jcHUuIGl0IHNob3Vs
ZCBub3QgYmUgZG9uZSB0d2ljZSAqLwogICAgaWYgKChyYz1rZGJfY2hlY2tfZGJ0cmFwKCZyZWFz
b24sIHNzX21vZGUsIHJlZ3MpKSA9PSAwIHx8IHJjID09IDEpIHsKICAgICAgICBrZGJfaW5pdF9j
cHUgPSAtMTsgICAgICAgLyogbGVhdmluZyBrZGIgKi8KICAgICAgICBnb3RvIG91dDsgICAgICAg
ICAgICAgICAgLyogcmMgcHJvcGVybHkgc2V0IHRvIDAgb3IgMSAqLwogICAgfQogICAgaWYgKHJl
YXNvbiAhPSBLREJfUkVBU09OX1BBVVNFX0lQSSkgewogICAgICAgIGtkYl9jcHVfY21kW2NjcHVd
ID0gS0RCX0NQVV9NQUlOX0tEQjsKICAgIH0gZWxzZQogICAgICAgIGtkYl9jcHVfY21kW2NjcHVd
ID0gS0RCX0NQVV9QQVVTRTsKCiAgICBpZiAoa2RiX2NwdV9jbWRbY2NwdV0gPT0gS0RCX0NQVV9N
QUlOX0tEQiAmJiAhc3NfbW9kZSkKICAgICAgICBrZGJfYmVnaW5fc2Vzc2lvbigpOyAKCiAgICBr
ZGJfbWFpbl9lbnRyeV9taXNjKHJlYXNvbiwgcmVncywgY2NwdSwgc3NfbW9kZSwgZW5hYmxlZCk7
CiAgICAvKiBub3RlLCBvbmUgb3IgbW9yZSBjcHUgc3dpdGNoZXMgbWF5IG9jY3VyIGluIGJldHdl
ZW4gKi8KICAgIHdoaWxlICgxKSB7CiAgICAgICAgaWYgKGtkYl9jcHVfY21kW2NjcHVdID09IEtE
Ql9DUFVfUEFVU0UpCiAgICAgICAgICAgIGtkYl9ob2xkX3RoaXNfY3B1KGNjcHUsIHJlZ3MpOwog
ICAgICAgIGlmIChrZGJfY3B1X2NtZFtjY3B1XSA9PSBLREJfQ1BVX01BSU5fS0RCKQogICAgICAg
ICAgICBrZGJfZG9fY21kcyhyZWdzKTsKCiAgICAgICAgaWYgKGtkYl9jcHVfY21kW2NjcHVdID09
IEtEQl9DUFVfR08pIHsKICAgICAgICAgICAgaWYgKGNjcHUgIT0ga2RiX2luaXRfY3B1KSB7CiAg
ICAgICAgICAgICAgICBrZGJfY3B1X2NtZFtrZGJfaW5pdF9jcHVdID0gS0RCX0NQVV9HTzsKICAg
ICAgICAgICAgICAgIGtkYl9jcHVfY21kW2NjcHVdID0gS0RCX0NQVV9QQVVTRTsKICAgICAgICAg
ICAgICAgIGNvbnRpbnVlOyAgICAgICAgICAgICAgIC8qIGZvciB0aGUgcGF1c2UgZ3V5ICovCiAg
ICAgICAgICAgIH0KICAgICAgICAgICAgaWYgKCFjcHVtYXNrX2VtcHR5KCZrZGJfY3B1X3RyYXBz
KSkgewogICAgICAgICAgICAgICAgLyogZXhlY3V0ZSBjdXJyZW50IGluc3RydWN0aW9uIHdpdGhv
dXQgMHhjYyAqLwogICAgICAgICAgICAgICAga2RiX2RiZ19wcm50X2N0cnBzKCJuZW1wdHk6Iiwg
Y2NwdSk7CiAgICAgICAgICAgICAgICBpZiAoZ3Vlc3RfbW9kZShyZWdzKSAmJiBpc19odm1fb3Jf
aHliX3ZjcHUoY3VycmVudCkpCiAgICAgICAgICAgICAgICAgICAgY3VycmVudC0+YXJjaC5odm1f
dmNwdS5zaW5nbGVfc3RlcCA9IDE7CiAgICAgICAgICAgICAgICBlbHNlCiAgICAgICAgICAgICAg
ICAgICAgcmVncy0+ZWZsYWdzIHw9IFg4Nl9FRkxBR1NfVEY7ICAKICAgICAgICAgICAgICAgIGtk
Yl9jcHVfY21kW2NjcHVdID0gS0RCX0NQVV9JTlNUQUxMX0JQOwogICAgICAgICAgICAgICAgZ290
byBvdXQ7CiAgICAgICAgICAgIH0KICAgICAgICB9CiAgICAgICAgaWYgKGtkYl9jcHVfY21kW2Nj
cHVdICE9IEtEQl9DUFVfUEFVU0UgICYmIAogICAgICAgICAgICBrZGJfY3B1X2NtZFtjY3B1XSAh
PSBLREJfQ1BVX01BSU5fS0RCKQogICAgICAgICAgICAgICAgYnJlYWs7CiAgICB9CiAgICBpZiAo
a2RiX2NwdV9jbWRbY2NwdV0gPT0gS0RCX0NQVV9HTykgewogICAgICAgIEFTU0VSVChjcHVtYXNr
X2VtcHR5KCZrZGJfY3B1X3RyYXBzKSk7CiAgICAgICAgaWYgKGtkYl9zd2JwX2V4aXN0cygpKSB7
CiAgICAgICAgICAgIGlmIChyZWFzb24gPT0gS0RCX1JFQVNPTl9CUEVYQ1ApIHsKICAgICAgICAg
ICAgICAgIC8qIGRvIGRlbGF5ZWQgaW5zdGFsbCAqLwogICAgICAgICAgICAgICAgaWYgKGd1ZXN0
X21vZGUocmVncykgJiYgaXNfaHZtX29yX2h5Yl92Y3B1KGN1cnJlbnQpKQogICAgICAgICAgICAg
ICAgICAgIGN1cnJlbnQtPmFyY2guaHZtX3ZjcHUuc2luZ2xlX3N0ZXAgPSAxOwogICAgICAgICAg
ICAgICAgZWxzZQogICAgICAgICAgICAgICAgICAgIHJlZ3MtPmVmbGFncyB8PSBYODZfRUZMQUdT
X1RGOyAgCiAgICAgICAgICAgICAgICBrZGJfY3B1X2NtZFtjY3B1XSA9IEtEQl9DUFVfSU5TVEFM
TF9CUDsKICAgICAgICAgICAgICAgIGdvdG8gb3V0OwogICAgICAgICAgICB9IAogICAgICAgIH0K
ICAgICAgICBrZGJfZW5kX3Nlc3Npb24oY2NwdSwgcmVncyk7CiAgICAgICAga2RiX2luaXRfY3B1
ID0gLTE7CiAgICB9Cm91dDoKICAgIGlmIChrZGJfY3B1X2NtZFtjY3B1XSA9PSBLREJfQ1BVX1FV
SVQpIHsKICAgICAgICBLREJHUCgiY2NwdTolZCBfcXVpdCBJUDogJWx4XG4iLCBjY3B1LCByZWdz
LT5LREJJUCk7CiAgICAgICAgaWYgKCEga2RiX3Nlc3Npb25fYmVndW4pCiAgICAgICAgICAgIGtk
Yl9pbnN0YWxsX3dhdGNocG9pbnRzKCk7CiAgICAgICAga2RiX3RpbWVfcmVzdW1lKDApOwogICAg
ICAgIGtkYl9jcHVfY21kW2NjcHVdID0gS0RCX0NQVV9JTlZBTDsKICAgIH0KCiAgICAvKiBmb3Ig
c3MgYW5kIGRlbGF5ZWQgaW5zdGFsbCwgVEYgaXMgc2V0LiBub3QgbXVjaCBpbiBFWFQgSU5UIGhh
bmRsZXJzKi8KICAgIGlmIChrZGJfY3B1X2NtZFtjY3B1XSA9PSBLREJfQ1BVX05JKQogICAgICAg
IGtkYl90aW1lX3Jlc3VtZSgxKTsKICAgIGlmIChlbmFibGVkKQogICAgICAgIGxvY2FsX2lycV9l
bmFibGUoKTsKCiAgICBLREJHUCgia2RibWFpbjpYOmNjcHU6JWQgcmM6JWQgY21kOiVkIGVmbGc6
MHglbHggaW5pdGM6JWQgc2VzbjolZCAiIAogICAgICAgICAgImNzOiV4IGlycXM6JWQgIiwgY2Nw
dSwgcmMsIGtkYl9jcHVfY21kW2NjcHVdLCByZWdzLT5lZmxhZ3MsIAogICAgICAgICAga2RiX2lu
aXRfY3B1LCBrZGJfc2Vzc2lvbl9iZWd1biwgcmVncy0+Y3MsIGxvY2FsX2lycV9pc19lbmFibGVk
KCkpOwogICAga2RiX2RiZ19wcm50X2N0cnBzKCIiLCAtMSk7CiAgICByZXR1cm4gKHJjID8gMSA6
IDApOwp9CgovKiAKICoga2RiIGVudHJ5IGZ1bmN0aW9uIHdoZW4gY29taW5nIGluIHZpYSBhIGtl
eWJvYXJkCiAqIFJFVFVSTlM6IDAgOiBrZGIgd2FzIGNhbGxlZCBmb3IgZXZlbnQgaXQgd2FzIG5v
dCByZXNwb25zaWJsZQogKiAgICAgICAgICAxIDogZXZlbnQgb3duZWQgYW5kIGhhbmRsZWQgYnkg
a2RiIAogKi8KaW50CmtkYl9rZXlib2FyZChzdHJ1Y3QgY3B1X3VzZXJfcmVncyAqcmVncykKewog
ICAgcmV0dXJuIGtkYm1haW4oS0RCX1JFQVNPTl9LRVlCT0FSRCwgcmVncyk7Cn0KCiNpZiAwCi8q
CiAqIHRoaXMgZnVuY3Rpb24gY2FsbGVkIHdoZW4ga2RiIHNlc3Npb24gYWN0aXZlIGFuZCB1c2Vy
IHByZXNzZXMgY3RybFwgYWdhaW4uCiAqIHRoZSBhc3N1bXB0aW9uIGlzIHRoYXQgdGhlIHVzZXIg
dHlwZWQgbmkvc3MgY21kLCBhbmQgaXQgbmV2ZXIgZ290IGJhY2sgaW50bwogKiBrZGIsIG9yIHRo
ZSB1c2VyIGlzIGltcGF0aWVudC4gRWl0aGVyIGNhc2UsIHdlIGp1c3QgZmFrZSBpdCB0aGF0IHRo
ZSBTUyBkaWQKICogZmluaXNoLiBTaW5jZSwgYWxsIG90aGVyIGtkYiBjcHVzIG11c3QgYmUgaG9s
ZGluZyBkaXNhYmxlZCwgdGhlIGludGVycnVwdAogKiB3b3VsZCBiZSBvbiB0aGUgQ1BVIHRoYXQg
ZGlkIHRoZSBzcy9uaSBjbWQKICovCnZvaWQKa2RiX3NzbmlfcmVlbnRlcihzdHJ1Y3QgY3B1X3Vz
ZXJfcmVncyAqcmVncykKewogICAgaW50IGNjcHUgPSBzbXBfcHJvY2Vzc29yX2lkKCk7CiAgICBp
bnQgY2NtZCA9IGtkYl9jcHVfY21kW2NjcHVdOwoKICAgIGlmKGNjbWQgPT0gS0RCX0NQVV9TUyB8
fCBjY21kID09IEtEQl9DUFVfSU5TVEFMTF9CUCkKICAgICAgICBrZGJtYWluKEtEQl9SRUFTT05f
REJFWENQLCByZWdzKTsgCiAgICBlbHNlIAogICAgICAgIGtkYm1haW4oS0RCX1JFQVNPTl9LRVlC
T0FSRCwgcmVncyk7Cn0KI2VuZGlmCgovKiAKICogQWxsIHRyYXBzIGFyZSByb3V0ZWQgdGhydSBo
ZXJlLiBXZSBjYXJlIGFib3V0IEJQICgjMykgdHJhcCAoSU5UIDMpIGFuZAogKiB0aGUgREIgdHJh
cCgjMSkgb25seS4gCiAqIHJldHVybnM6IDAga2RiIGhhcyBub3RoaW5nIGRvIHdpdGggdGhpcyB0
cmFwCiAqICAgICAgICAgIDEga2RiIGhhbmRsZWQgdGhpcyB0cmFwIAogKi8KaW50CmtkYl9oYW5k
bGVfdHJhcF9lbnRyeShpbnQgdmVjdG9yLCBzdHJ1Y3QgY3B1X3VzZXJfcmVncyAqcmVncykKewog
ICAgaW50IHJjID0gMDsKICAgIGludCBjY3B1ID0gc21wX3Byb2Nlc3Nvcl9pZCgpOwoKICAgIGlm
ICh2ZWN0b3IgPT0gVFJBUF9pbnQzKSB7CiAgICAgICAgcmMgPSBrZGJtYWluKEtEQl9SRUFTT05f
QlBFWENQLCByZWdzKTsKCiAgICB9IGVsc2UgaWYgKHZlY3RvciA9PSBUUkFQX2RlYnVnKSB7CiAg
ICAgICAgS0RCR1AoImNjcHU6JWQgdHJhcGRiZyByZWFzOiVkXG4iLCBjY3B1LCBrZGJfdHJhcF9p
bW1lZF9yZWFzb24pOwoKICAgICAgICBpZiAoa2RiX3RyYXBfaW1tZWRfcmVhc29uID09IEtEQl9U
UkFQX0ZBVEFMKSB7IAogICAgICAgICAgICBLREJHUCgia2RidHJwOmZhdGFsIGNjcHU6JWQgdmVj
OiVkXG4iLCBjY3B1LCB2ZWN0b3IpOwogICAgICAgICAgICByYyA9IGtkYm1haW5fZmF0YWwocmVn
cywgdmVjdG9yKTsKICAgICAgICAgICAgQlVHKCk7ICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAvKiBubyByZXR1cm4gKi8KCiAgICAgICAgfSBlbHNlIGlmIChrZGJfdHJhcF9pbW1lZF9yZWFz
b24gPT0gS0RCX1RSQVBfS0RCU1RBQ0spIHsKICAgICAgICAgICAga2RiX3RyYXBfaW1tZWRfcmVh
c29uID0gMDsgICAgICAgICAvKiBzaG93IGtkYiBzdGFjayAqLwogICAgICAgICAgICBzaG93X3Jl
Z2lzdGVycyhyZWdzKTsKICAgICAgICAgICAgc2hvd19zdGFjayhyZWdzKTsKICAgICAgICAgICAg
cmVncy0+ZWZsYWdzICY9IH5YODZfRUZMQUdTX1RGOwogICAgICAgICAgICByYyA9IDE7CgogICAg
ICAgIH0gZWxzZSBpZiAoa2RiX3RyYXBfaW1tZWRfcmVhc29uID09IEtEQl9UUkFQX05PTkZBVEFM
KSB7CiAgICAgICAgICAgIGtkYl90cmFwX2ltbWVkX3JlYXNvbiA9IDA7CiAgICAgICAgICAgIHJj
ID0ga2RiX2tleWJvYXJkKHJlZ3MpOwogICAgICAgIH0gZWxzZSB7ICAgICAgICAgICAgICAgICAg
ICAgICAgIC8qIHNzL25pL2RlbGF5ZWQgaW5zdGFsbC4uLiAqLwogICAgICAgICAgICBpZiAoZ3Vl
c3RfbW9kZShyZWdzKSAmJiBpc19odm1fb3JfaHliX3ZjcHUoY3VycmVudCkpCiAgICAgICAgICAg
ICAgICBjdXJyZW50LT5hcmNoLmh2bV92Y3B1LnNpbmdsZV9zdGVwID0gMDsKICAgICAgICAgICAg
cmMgPSBrZGJtYWluKEtEQl9SRUFTT05fREJFWENQLCByZWdzKTsgCiAgICAgICAgfQoKICAgIH0g
ZWxzZSBpZiAodmVjdG9yID09IFRSQVBfbm1pKSB7ICAgICAgICAgICAgICAgICAgIC8qIGV4dGVy
bmFsIG5taSAqLwogICAgICAgIC8qIHdoZW4gbm1pIGlzIHByZXNzZWQsIGl0IGNvdWxkIGdvIHRv
IG9uZSBvciBtb3JlIG9yIGFsbCBjcHVzCiAgICAgICAgICogZGVwZW5kaW5nIG9uIHRoZSBoYXJk
d2FyZS4gQWxzbywgZm9yIG5vdyBhc3N1bWUgaXQncyBmYXRhbCAqLwogICAgICAgIEtEQkdQKCJr
ZGJ0cnA6Y2NwdTolZCB2ZWM6JWRcbiIsIGNjcHUsIHZlY3Rvcik7CiAgICAgICAgcmMgPSBrZGJt
YWluX2ZhdGFsKHJlZ3MsIFRSQVBfbm1pKTsKICAgIH0gCiAgICByZXR1cm4gcmM7Cn0KCmludApr
ZGJfdHJhcF9mYXRhbChpbnQgdmVjdG9yLCBzdHJ1Y3QgY3B1X3VzZXJfcmVncyAqcmVncykKewog
ICAga2RibWFpbl9mYXRhbChyZWdzLCB2ZWN0b3IpOwogICAgcmV0dXJuIDA7Cn0KCi8qIEZyb20g
c21wX3NlbmRfbm1pX2FsbGJ1dHNlbGYoKSBpbiBjcmFzaC5jIHdoaWNoIGlzIHN0YXRpYyAqLwp2
b2lkCmtkYl9ubWlfcGF1c2VfY3B1cyhjcHVtYXNrX3QgY3B1bWFzaykKewogICAgaW50IGNjcHUg
PSBzbXBfcHJvY2Vzc29yX2lkKCk7CiAgICBtZGVsYXkoMjAwKTsKICAgIGNwdW1hc2tfY29tcGxl
bWVudCgmY3B1bWFzaywgJmNwdW1hc2spOyAgICAgICAgICAgICAgLyogZmxpcCBiaXQgbWFwICov
CiAgICBjcHVtYXNrX2FuZCgmY3B1bWFzaywgJmNwdW1hc2ssICZjcHVfb25saW5lX21hcCk7ICAg
IC8qIHJlbW92ZSBleHRyYSBiaXRzICovCiAgICBjcHVtYXNrX2NsZWFyX2NwdShjY3B1LCAmY3B1
bWFzayk7LyogYWJzb2x1dGVseSBtYWtlIHN1cmUgd2UncmUgbm90IG9uIGl0ICovCgogICAgS0RC
R1AoImNjcHU6JWQgbm1pIHBhdXNlLiBtYXNrOjB4JWx4XG4iLCBjY3B1LCBjcHVtYXNrLmJpdHNb
MF0pOwogICAgaWYgKCAhY3B1bWFza19lbXB0eSgmY3B1bWFzaykgKQojaWYgWEVOX1NVQlZFUlNJ
T04gPiA0IHx8IFhFTl9WRVJTSU9OID09IDQgICAgICAgICAgICAgIC8qIHhlbiAzLjUueCBvciBh
Ym92ZSAqLwogICAgICAgIHNlbmRfSVBJX21hc2soJmNwdW1hc2ssIEFQSUNfRE1fTk1JKTsKI2Vs
c2UKICAgICAgICBzZW5kX0lQSV9tYXNrKGNwdW1hc2ssIEFQSUNfRE1fTk1JKTsKI2VuZGlmCiAg
ICBtZGVsYXkoMjAwKTsKICAgIEtEQkdQKCJjY3B1OiVkIG5taSBwYXVzZSBkb25lLi4uXG4iLCBj
Y3B1KTsKfQoKLyogCiAqIFNlcGFyYXRlIGZ1bmN0aW9uIGZyb20ga2RibWFpbiB0byBrZWVwIGJv
dGggd2l0aGluIHNhbml0eSBsZXZlbHMuCiAqLwpERUZJTkVfU1BJTkxPQ0soa2RiX2ZhdGFsX2xr
KTsKc3RhdGljIGludAprZGJtYWluX2ZhdGFsKHN0cnVjdCBjcHVfdXNlcl9yZWdzICpyZWdzLCBp
bnQgdmVjdG9yKQp7CiAgICBpbnQgY2NwdSA9IHNtcF9wcm9jZXNzb3JfaWQoKTsKCiAgICBjb25z
b2xlX3N0YXJ0X3N5bmMoKTsKCiAgICBLREJHUCgibWFpbmY6Y2NwdTolZCB2ZWM6JWQgaXJxOiVk
XG4iLCBjY3B1LCB2ZWN0b3IsbG9jYWxfaXJxX2lzX2VuYWJsZWQoKSk7CiAgICBjcHVtYXNrX3Nl
dF9jcHUoY2NwdSwgJmtkYl9mYXRhbF9jcHVtYXNrKTsgICAgICAgIC8qIHVzZXMgTE9DS19QUkVG
SVggKi8KCiAgICBpZiAoc3Bpbl90cnlsb2NrKCZrZGJfZmF0YWxfbGspKSB7CgogICAgICAgIGtk
YnAoIioqKiBrZGIgKEZhdGFsIEVycm9yIG9uIGNwdTolZCB2ZWM6JWQgJXMpOlxuIiwgY2NwdSwK
ICAgICAgICAgICAgIHZlY3Rvciwga2RiX2dldHRyYXBuYW1lKHZlY3RvcikpOwogICAgICAgIGtk
Yl9jcHVfY21kW2NjcHVdID0gS0RCX0NQVV9NQUlOX0tEQjsKICAgICAgICBrZGJfZGlzcGxheV9w
YyhyZWdzKTsKCiAgICAgICAgd2F0Y2hkb2dfZGlzYWJsZSgpOyAgICAgLyogaW1wb3J0YW50ICov
CiAgICAgICAga2RiX3N5c19jcmFzaCA9IDE7CiAgICAgICAga2RiX3Nlc3Npb25fYmVndW4gPSAw
OyAgLyogaW5jYXNlIHNlc3Npb24gYWxyZWFkeSBhY3RpdmUgKi8KICAgICAgICBsb2NhbF9pcnFf
ZW5hYmxlKCk7CiAgICAgICAga2RiX25taV9wYXVzZV9jcHVzKGtkYl9mYXRhbF9jcHVtYXNrKTsK
CiAgICAgICAga2RiX2NsZWFyX3ByZXZfY21kKCk7ICAgLyogYnVmZmVyZWQgQ1JzIHdpbGwgcmVw
ZWF0IHByZXYgY21kICovCiAgICAgICAga2RiX3Nlc3Npb25fYmVndW4gPSAxOyAgLyogZm9yIGtk
Yl9ob2xkX3RoaXNfY3B1KCkgKi8KICAgICAgICBsb2NhbF9pcnFfZGlzYWJsZSgpOwogICAgfSBl
bHNlIHsKICAgICAgICBrZGJfY3B1X2NtZFtjY3B1XSA9IEtEQl9DUFVfUEFVU0U7CiAgICB9CiAg
ICB3aGlsZSAoMSkgewogICAgICAgIGlmIChrZGJfY3B1X2NtZFtjY3B1XSA9PSBLREJfQ1BVX1BB
VVNFKQogICAgICAgICAgICBrZGJfaG9sZF90aGlzX2NwdShjY3B1LCByZWdzKTsKICAgICAgICBp
ZiAoa2RiX2NwdV9jbWRbY2NwdV0gPT0gS0RCX0NQVV9NQUlOX0tEQikKICAgICAgICAgICAga2Ri
X2RvX2NtZHMocmVncyk7CiNpZiAwCiAgICAgICAgLyogZHVtcCBpcyB0aGUgb25seSB3YXkgdG8g
ZXhpdCBpbiBjcmFzaGVkIHN0YXRlICovCiAgICAgICAgaWYgKGtkYl9jcHVfY21kW2NjcHVdID09
IEtEQl9DUFVfRFVNUCkKICAgICAgICAgICAga2RiX2RvX2R1bXAocmVncyk7CiNlbmRpZgogICAg
fQogICAgcmV0dXJuIDA7Cn0KCi8qIE1vc3RseSBjYWxsZWQgaW4gZmF0YWwgY2FzZXMuIGVhcmx5
a2RiIGNhbGxzIG5vbi1mYXRhbC4KICoga2RiX3RyYXBfaW1tZWRfcmVhc29uIGlzIGdsb2JhbCwg
c28gYWxsb3cgb25seSBvbmUgY3B1IGF0IGEgdGltZS4gQWxzbywKICogbXVsdGlwbGUgY3B1IG1h
eSBiZSBjcmFzaGluZyBhdCB0aGUgc2FtZSB0aW1lLiBXZSBlbmFibGUgYmVjYXVzZSBpZiB0aGVy
ZQogKiBpcyBhIGJhZCBoYW5nLCBhdCBsZWFzdCBjdHJsLVwgd2lsbCBicmVhayBpbnRvIGtkYi4g
QWxzbywgd2UgZG9uJ3QgY2FsbAogKiBjYWxsIGtkYl9rZXlib2FyZCBkaXJlY3RseSBiZWNhdWUg
d2UgZG9uJ3QgaGF2ZSB0aGUgcmVnaXN0ZXIgY29udGV4dC4KICovCkRFRklORV9TUElOTE9DSyhr
ZGJfaW1tZWRfbGspOwp2b2lkCmtkYl90cmFwX2ltbWVkKGludCByZWFzb24pICAgICAgICAgICAg
LyogZmF0YWwsIG5vbi1mYXRhbCwga2RiIHN0YWNrIGV0Yy4uLiAqLwp7CiAgICBpbnQgY2NwdSA9
IHNtcF9wcm9jZXNzb3JfaWQoKTsKICAgIGludCBkaXNhYmxlZCA9ICFsb2NhbF9pcnFfaXNfZW5h
YmxlZCgpOwoKICAgIEtEQkdQKCJ0cmFwaW1tOmNjcHU6JWQgcmVhczolZFxuIiwgY2NwdSwgcmVh
c29uKTsKICAgIGxvY2FsX2lycV9lbmFibGUoKTsKICAgIHNwaW5fbG9jaygma2RiX2ltbWVkX2xr
KTsKICAgIGtkYl90cmFwX2ltbWVkX3JlYXNvbiA9IHJlYXNvbjsKICAgIGJhcnJpZXIoKTsKICAg
IF9fYXNtX18gX192b2xhdGlsZV9fICggImludCAkMSIgKTsKICAgIGtkYl90cmFwX2ltbWVkX3Jl
YXNvbiA9IDA7CgogICAgc3Bpbl91bmxvY2soJmtkYl9pbW1lZF9sayk7CiAgICBpZiAoZGlzYWJs
ZWQpCiAgICAgICAgbG9jYWxfaXJxX2Rpc2FibGUoKTsKfQoKLyogY2FsbGVkIHZlcnkgZWFybHkg
ZHVyaW5nIGluaXQsIGV2ZW4gYmVmb3JlIGFsbCBDUFVzIGFyZSBicm91Z2h0IG9ubGluZSAqLwp2
b2lkIAprZGJfaW5pdCh2b2lkKQp7CiAgICAgICAga2RiX2luaXRfY21kdGFiKCk7ICAgICAgLyog
SW5pdGlhbGl6ZSBDb21tYW5kIFRhYmxlICovCn0KCnN0YXRpYyBjb25zdCBjaGFyICoKa2RiX2dl
dHRyYXBuYW1lKGludCB0cmFwbm8pCnsKICAgIGNoYXIgKnJldDsKICAgIHN3aXRjaCAodHJhcG5v
KSB7CiAgICAgICAgY2FzZSAgMDogIHJldCA9ICJEaXZpZGUgRXJyb3IiOyBicmVhazsKICAgICAg
ICBjYXNlICAyOiAgcmV0ID0gIk5NSSBJbnRlcnJ1cHQiOyBicmVhazsKICAgICAgICBjYXNlICAz
OiAgcmV0ID0gIkludCAzIFRyYXAiOyBicmVhazsKICAgICAgICBjYXNlICA0OiAgcmV0ID0gIk92
ZXJmbG93IEVycm9yIjsgYnJlYWs7CiAgICAgICAgY2FzZSAgNjogIHJldCA9ICJJbnZhbGlkIE9w
Y29kZSI7IGJyZWFrOwogICAgICAgIGNhc2UgIDg6ICByZXQgPSAiRG91YmxlIEZhdWx0IjsgYnJl
YWs7CiAgICAgICAgY2FzZSAxMDogIHJldCA9ICJJbnZhbGlkIFRTUyI7IGJyZWFrOwogICAgICAg
IGNhc2UgMTE6ICByZXQgPSAiU2VnbWVudCBOb3QgUHJlc2VudCI7IGJyZWFrOwogICAgICAgIGNh
c2UgMTI6ICByZXQgPSAiU3RhY2stU2VnbWVudCBGYXVsdCI7IGJyZWFrOwogICAgICAgIGNhc2Ug
MTM6ICByZXQgPSAiR2VuZXJhbCBQcm90ZWN0aW9uIjsgYnJlYWs7CiAgICAgICAgY2FzZSAxNDog
IHJldCA9ICJQYWdlIEZhdWx0IjsgYnJlYWs7CiAgICAgICAgY2FzZSAxNzogIHJldCA9ICJBbGln
bm1lbnQgQ2hlY2siOyBicmVhazsKICAgICAgICBkZWZhdWx0OiByZXQgPSAiID8/Pz8/ICI7CiAg
ICB9CiAgICByZXR1cm4gcmV0Owp9CgoKLyogPT09PT09PT09PT09PT09PT09PT09PSBHZW5lcmlj
IHRyYWNpbmcgc3Vic3lzdGVtID09PT09PT09PT09PT09PT09PT09PT09PSAqLwoKI2RlZmluZSBL
REJUUkNNQVggMSAgICAgICAvKiBzZXQgdGhpcyB0byBtYXggbnVtYmVyIG9mIHJlY3MgdG8gdHJh
Y2UuIGVhY2ggcmVjIAogICAgICAgICAgICAgICAgICAgICAgICAgICAqIGlzIDMyIGJ5dGVzICov
CnZvbGF0aWxlIGludCBrZGJfdHJjb249MTsgLyogdHVybiB0cmFjaW5nIE9OOiBzZXQgaGVyZSBv
ciB2aWEgdGhlIHRyY29uIGNtZCAqLwoKdHlwZWRlZiBzdHJ1Y3QgewogICAgdW5pb24gewogICAg
ICAgIHN0cnVjdCB7IHVpbnQgZDA7IHVpbnQgY3B1X3RyY2lkOyB9IHMwOwogICAgICAgIHVpbnQ2
NF90IGwwOwogICAgfXU7CiAgICB1aW50NjRfdCBsMSwgbDIsIGwzOyAKfSB0cmNfcmVjX3Q7Cgpz
dGF0aWMgdm9sYXRpbGUgdW5zaWduZWQgaW50IHRyY2lkeDsgICAgLyogcG9pbnRzIHRvIHdoZXJl
IG5ldyBlbnRyeSB3aWxsIGdvICovCnN0YXRpYyB0cmNfcmVjX3QgdHJjYVtLREJUUkNNQVhdOyAg
ICAgICAvKiB0cmFjZSBhcnJheSAqLwoKLyogYXRvbWljYWxseTogYWRkIGkgdG8gKnAsIHJldHVy
biBwcmV2IHZhbHVlIG9mICpwIChpZSwgdmFsIGJlZm9yZSBhZGQpICovCnN0YXRpYyBpbnQKa2Ri
X2ZldGNoX2FuZF9hZGQoaW50IGksIHVpbnQgKnApCnsKICAgIGFzbSB2b2xhdGlsZSgibG9jayB4
YWRkbCAlMCwgJTE7IiA6ICI9ciIoaSkgOiAibSIoKnApLCAiMCIoaSkpOwogICAgcmV0dXJuIGk7
Cn0KCi8qIHplcm8gb3V0IHRoZSBlbnRpcmUgYnVmZmVyICovCnZvaWQgCmtkYl90cmN6ZXJvKHZv
aWQpCnsKICAgIGZvciAodHJjaWR4ID0gS0RCVFJDTUFYLTE7IHRyY2lkeDsgdHJjaWR4LS0pIHsK
ICAgICAgICBtZW1zZXQoJnRyY2FbdHJjaWR4XSwgMCwgc2l6ZW9mKHRyY19yZWNfdCkpOwogICAg
fQogICAgbWVtc2V0KCZ0cmNhW3RyY2lkeF0sIDAsIHNpemVvZih0cmNfcmVjX3QpKTsKICAgIGtk
YnAoImtkYiB0cmFjZSBidWZmZXIgaGFzIGJlZW4gemVyb2VkXG4iKTsKfQoKLyogYWRkIHRyYWNl
IGVudHJ5OiBlZy46IGtkYnRyYygweGUwZjA5OSwgaW50ZGF0YSwgdmNwdSwgZG9tYWluLCAwKQog
KiAgICB3aGVyZTogIDB4ZTBmMDk5IDogMjRiaXRzIG1heCB0cmNpZCwgbG93ZXIgOCBiaXRzIGFy
ZSBzZXQgdG8gY3B1aWQgKi8Kdm9pZAprZGJ0cmModWludCB0cmNpZCwgdWludCBpbnRfZDAsIHVp
bnQ2NF90IGQxXzY0LCB1aW50NjRfdCBkMl82NCwgdWludDY0X3QgZDNfNjQpCnsKICAgIHVpbnQg
aWR4OwoKICAgIGlmICgha2RiX3RyY29uKQogICAgICAgIHJldHVybjsKCiAgICBpZHggPSBrZGJf
ZmV0Y2hfYW5kX2FkZCgxLCAodWludCopJnRyY2lkeCk7CiAgICBpZHggPSBpZHggJSBLREJUUkNN
QVg7CgojaWYgMAogICAgdHJjYVtpZHhdLnUuczAuY3B1X3RyY2lkID0gKHNtcF9wcm9jZXNzb3Jf
aWQoKTw8MjQpIHwgdHJjaWQ7CiNlbmRpZgogICAgdHJjYVtpZHhdLnUuczAuY3B1X3RyY2lkID0g
KHRyY2lkPDw4KSB8IHNtcF9wcm9jZXNzb3JfaWQoKTsKICAgIHRyY2FbaWR4XS51LnMwLmQwID0g
aW50X2QwOwogICAgdHJjYVtpZHhdLmwxID0gZDFfNjQ7CiAgICB0cmNhW2lkeF0ubDIgPSBkMl82
NDsKICAgIHRyY2FbaWR4XS5sMyA9IGQzXzY0Owp9CgovKiBnaXZlIGhpbnRzIHNvIHVzZXIgY2Fu
IHByaW50IHRyYyBidWZmZXIgdmlhIHRoZSBkZCBjb21tYW5kLiBsYXN0IGhhcyB0aGUKICogbW9z
dCByZWNlbnQgZW50cnkgKi8Kdm9pZAprZGJfdHJjcCh2b2lkKQp7CiAgICBpbnQgaSA9IHRyY2lk
eCAlIEtEQlRSQ01BWDsKCiAgICBpID0gKGk9PTApID8gS0RCVFJDTUFYLTEgOiBpLTE7CiAgICBr
ZGJwKCJ0cmNidWY6ICAgIFswXTogJTAxNmx4IFtNQVgtMV06ICUwMTZseFxuIiwgJnRyY2FbMF0s
CiAgICAgICAgICZ0cmNhW0tEQlRSQ01BWC0xXSk7CiAgICBrZGJwKCIgW21vc3QgcmVjZW50XTog
JTAxNmx4ICAgdHJjaWR4OiAweCV4XG4iLCAmdHJjYVtpXSwgdHJjaWR4KTsKfQoKAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAABrZGIveDg2LwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAMDAwMDc3NQAw
MDAyNzU2ADAwMDI3NTYAMDAwMDAwMDAwMDAAMTIwMTc3MjQ2MjQAMDEyMjAyACA1AAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHVzdGFyICAAbXJhdGhvcgAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAABtcmF0aG9yAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAGtkYi94ODYvdWRpczg2LTEuNy8AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAwMDAwNzc1ADAw
MDI3NTYAMDAwMjc1NgAwMDAwMDAwMDAwMAAxMjAxNzcyNDYyNAAwMTM2MjcAIDUAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAdXN0YXIgIABtcmF0aG9yAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAG1yYXRob3IAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAa2RiL3g4Ni91ZGlzODYtMS43L2lucHV0LmgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADAwMDA2NjQAMDAw
Mjc1NgAwMDAyNzU2ADAwMDAwMDAyNDAwADExNzY1NDY1NTU2ADAxNTE1MgAgMAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAB1c3RhciAgAG1yYXRob3IAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAbXJhdGhvcgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAvKiAtLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLQogKiBpbnB1dC5oCiAqCiAqIENvcHlyaWdodCAoYykg
MjAwNiwgVml2ZWsgTW9oYW4gPHZpdmVrQHNpZzkuY29tPgogKiBBbGwgcmlnaHRzIHJlc2VydmVk
LiBTZWUgTElDRU5TRQogKiAtLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLQogKi8KI2lmbmRlZiBVRF9JTlBV
VF9ICiNkZWZpbmUgVURfSU5QVVRfSAoKI2luY2x1ZGUgInR5cGVzLmgiCgp1aW50OF90IGlucF9u
ZXh0KHN0cnVjdCB1ZCopOwp1aW50OF90IGlucF9wZWVrKHN0cnVjdCB1ZCopOwp1aW50OF90IGlu
cF91aW50OChzdHJ1Y3QgdWQqKTsKdWludDE2X3QgaW5wX3VpbnQxNihzdHJ1Y3QgdWQqKTsKdWlu
dDMyX3QgaW5wX3VpbnQzMihzdHJ1Y3QgdWQqKTsKdWludDY0X3QgaW5wX3VpbnQ2NChzdHJ1Y3Qg
dWQqKTsKdm9pZCBpbnBfbW92ZShzdHJ1Y3QgdWQqLCBzaXplX3QpOwp2b2lkIGlucF9iYWNrKHN0
cnVjdCB1ZCopOwoKLyogaW5wX2luaXQoKSAtIEluaXRpYWxpemVzIHRoZSBpbnB1dCBzeXN0ZW0u
ICovCiNkZWZpbmUgaW5wX2luaXQodSkgXApkbyB7IFwKICB1LT5pbnBfY3VyciA9IDA7IFwKICB1
LT5pbnBfZmlsbCA9IDA7IFwKICB1LT5pbnBfY3RyICA9IDA7IFwKICB1LT5pbnBfZW5kICA9IDA7
IFwKfSB3aGlsZSAoMCkKCi8qIGlucF9zdGFydCgpIC0gU2hvdWxkIGJlIGNhbGxlZCBiZWZvcmUg
ZWFjaCBkZS1jb2RlIG9wZXJhdGlvbi4gKi8KI2RlZmluZSBpbnBfc3RhcnQodSkgdS0+aW5wX2N0
ciA9IDAKCi8qIGlucF9iYWNrKCkgLSBSZXNldHMgdGhlIGN1cnJlbnQgcG9pbnRlciB0byBpdHMg
cG9zaXRpb24gYmVmb3JlIHRoZSBjdXJyZW50CiAqIGluc3RydWN0aW9uIGRpc2Fzc2VtYmx5IHdh
cyBzdGFydGVkLgogKi8KI2RlZmluZSBpbnBfcmVzZXQodSkgXApkbyB7IFwKICB1LT5pbnBfY3Vy
ciAtPSB1LT5pbnBfY3RyOyBcCiAgdS0+aW5wX2N0ciA9IDA7IFwKfSB3aGlsZSAoMCkKCi8qIGlu
cF9zZXNzKCkgLSBSZXR1cm5zIHRoZSBwb2ludGVyIHRvIGN1cnJlbnQgc2Vzc2lvbi4gKi8KI2Rl
ZmluZSBpbnBfc2Vzcyh1KSAodS0+aW5wX3Nlc3MpCgovKiBpbnBfY3VyKCkgLSBSZXR1cm5zIHRo
ZSBjdXJyZW50IGlucHV0IGJ5dGUuICovCiNkZWZpbmUgaW5wX2N1cnIodSkgKCh1KS0+aW5wX2Nh
Y2hlWyh1KS0+aW5wX2N1cnJdKQoKI2VuZGlmCgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABr
ZGIveDg2L3VkaXM4Ni0xLjcvc3luLWludGVsLmMAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAMDAwMDY2NAAwMDAyNzU2
ADAwMDI3NTYAMDAwMDAwMTEzNzYAMTE3NjU0NjU1NTYAMDE1NzQ0ACAwAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHVzdGFyICAAbXJhdGhvcgAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAABtcmF0aG9yAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAC8q
IC0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tCiAqIHN5bi1pbnRlbC5jCiAqCiAqIENvcHlyaWdodCAoYykg
MjAwMiwgMjAwMywgMjAwNCBWaXZlayBNb2hhbiA8dml2ZWtAc2lnOS5jb20+CiAqIEFsbCByaWdo
dHMgcmVzZXJ2ZWQuIFNlZSAoTElDRU5TRSkKICogLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0KICovCgoj
aW5jbHVkZSAidHlwZXMuaCIKI2luY2x1ZGUgImV4dGVybi5oIgojaW5jbHVkZSAiZGVjb2RlLmgi
CiNpbmNsdWRlICJpdGFiLmgiCiNpbmNsdWRlICJzeW4uaCIKCi8qIC0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tCiAqIG9wcl9jYXN0KCkgLSBQcmludHMgYW4gb3BlcmFuZCBjYXN0LgogKiAtLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLQogKi8Kc3RhdGljIHZvaWQgCm9wcl9jYXN0KHN0cnVjdCB1ZCogdSwgc3RydWN0
IHVkX29wZXJhbmQqIG9wKQp7CiAgc3dpdGNoKG9wLT5zaXplKSB7CgljYXNlICA4OiBta2FzbSh1
LCAiYnl0ZSAiICk7IGJyZWFrOwoJY2FzZSAxNjogbWthc20odSwgIndvcmQgIiApOyBicmVhazsK
CWNhc2UgMzI6IG1rYXNtKHUsICJkd29yZCAiKTsgYnJlYWs7CgljYXNlIDY0OiBta2FzbSh1LCAi
cXdvcmQgIik7IGJyZWFrOwoJY2FzZSA4MDogbWthc20odSwgInR3b3JkICIpOyBicmVhazsKCWRl
ZmF1bHQ6IGJyZWFrOwogIH0KICBpZiAodS0+YnJfZmFyKQoJbWthc20odSwgImZhciAiKTsgCiAg
ZWxzZSBpZiAodS0+YnJfbmVhcikKCW1rYXNtKHUsICJuZWFyICIpOwp9CgovKiAtLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLQogKiBnZW5fb3BlcmFuZCgpIC0gR2VuZXJhdGVzIGFzc2VtYmx5IG91dHB1dCBm
b3IgZWFjaCBvcGVyYW5kLgogKiAtLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLQogKi8Kc3RhdGljIHZvaWQg
Z2VuX29wZXJhbmQoc3RydWN0IHVkKiB1LCBzdHJ1Y3QgdWRfb3BlcmFuZCogb3AsIGludCBzeW5f
Y2FzdCkKewogIHN3aXRjaChvcC0+dHlwZSkgewoJY2FzZSBVRF9PUF9SRUc6CgkJbWthc20odSwg
dWRfcmVnX3RhYltvcC0+YmFzZSAtIFVEX1JfQUxdKTsKCQlicmVhazsKCgljYXNlIFVEX09QX01F
TTogewoKCQlpbnQgb3BfZiA9IDA7CgoJCWlmIChzeW5fY2FzdCkgCgkJCW9wcl9jYXN0KHUsIG9w
KTsKCgkJbWthc20odSwgIlsiKTsKCgkJaWYgKHUtPnBmeF9zZWcpCgkJCW1rYXNtKHUsICIlczoi
LCB1ZF9yZWdfdGFiW3UtPnBmeF9zZWcgLSBVRF9SX0FMXSk7CgoJCWlmIChvcC0+YmFzZSkgewoJ
CQlta2FzbSh1LCAiJXMiLCB1ZF9yZWdfdGFiW29wLT5iYXNlIC0gVURfUl9BTF0pOwoJCQlvcF9m
ID0gMTsKCQl9CgoJCWlmIChvcC0+aW5kZXgpIHsKCQkJaWYgKG9wX2YpCgkJCQlta2FzbSh1LCAi
KyIpOwoJCQlta2FzbSh1LCAiJXMiLCB1ZF9yZWdfdGFiW29wLT5pbmRleCAtIFVEX1JfQUxdKTsK
CQkJb3BfZiA9IDE7CgkJfQoKCQlpZiAob3AtPnNjYWxlKQoJCQlta2FzbSh1LCAiKiVkIiwgb3At
PnNjYWxlKTsKCgkJaWYgKG9wLT5vZmZzZXQgPT0gOCkgewoJCQlpZiAob3AtPmx2YWwuc2J5dGUg
PCAwKQoJCQkJbWthc20odSwgIi0weCV4IiwgLW9wLT5sdmFsLnNieXRlKTsKCQkJZWxzZQlta2Fz
bSh1LCAiJXMweCV4IiwgKG9wX2YpID8gIisiIDogIiIsIG9wLT5sdmFsLnNieXRlKTsKCQl9CgkJ
ZWxzZSBpZiAob3AtPm9mZnNldCA9PSAxNikKCQkJbWthc20odSwgIiVzMHgleCIsIChvcF9mKSA/
ICIrIiA6ICIiLCBvcC0+bHZhbC51d29yZCk7CgkJZWxzZSBpZiAob3AtPm9mZnNldCA9PSAzMikg
ewoJCQlpZiAodS0+YWRyX21vZGUgPT0gNjQpIHsKCQkJCWlmIChvcC0+bHZhbC5zZHdvcmQgPCAw
KQoJCQkJCW1rYXNtKHUsICItMHgleCIsIC1vcC0+bHZhbC5zZHdvcmQpOwoJCQkJZWxzZQlta2Fz
bSh1LCAiJXMweCV4IiwgKG9wX2YpID8gIisiIDogIiIsIG9wLT5sdmFsLnNkd29yZCk7CgkJCX0g
CgkJCWVsc2UJbWthc20odSwgIiVzMHglbHgiLCAob3BfZikgPyAiKyIgOiAiIiwgb3AtPmx2YWwu
dWR3b3JkKTsKCQl9CgkJZWxzZSBpZiAob3AtPm9mZnNldCA9PSA2NCkgCgkJCW1rYXNtKHUsICIl
czB4IiBGTVQ2NCAieCIsIChvcF9mKSA/ICIrIiA6ICIiLCBvcC0+bHZhbC51cXdvcmQpOwoKCQlt
a2FzbSh1LCAiXSIpOwoJCWJyZWFrOwoJfQoJCQkKCWNhc2UgVURfT1BfSU1NOgoJCWlmIChzeW5f
Y2FzdCkgb3ByX2Nhc3QodSwgb3ApOwoJCXN3aXRjaCAob3AtPnNpemUpIHsKCQkJY2FzZSAgODog
bWthc20odSwgIjB4JXgiLCBvcC0+bHZhbC51Ynl0ZSk7ICAgIGJyZWFrOwoJCQljYXNlIDE2OiBt
a2FzbSh1LCAiMHgleCIsIG9wLT5sdmFsLnV3b3JkKTsgICAgYnJlYWs7CgkJCWNhc2UgMzI6IG1r
YXNtKHUsICIweCVseCIsIG9wLT5sdmFsLnVkd29yZCk7ICBicmVhazsKCQkJY2FzZSA2NDogbWth
c20odSwgIjB4IiBGTVQ2NCAieCIsIG9wLT5sdmFsLnVxd29yZCk7IGJyZWFrOwoJCQlkZWZhdWx0
OiBicmVhazsKCQl9CgkJYnJlYWs7CgoJY2FzZSBVRF9PUF9KSU1NOgoJCWlmIChzeW5fY2FzdCkg
b3ByX2Nhc3QodSwgb3ApOwoJCXN3aXRjaCAob3AtPnNpemUpIHsKCQkJY2FzZSAgODoKCQkJCW1r
YXNtKHUsICIweCIgRk1UNjQgIngiLCB1LT5wYyArIG9wLT5sdmFsLnNieXRlKTsgCgkJCQlicmVh
azsKCQkJY2FzZSAxNjoKCQkJCW1rYXNtKHUsICIweCIgRk1UNjQgIngiLCB1LT5wYyArIG9wLT5s
dmFsLnN3b3JkKTsKCQkJCWJyZWFrOwoJCQljYXNlIDMyOgoJCQkJbWthc20odSwgIjB4IiBGTVQ2
NCAieCIsIHUtPnBjICsgb3AtPmx2YWwuc2R3b3JkKTsKCQkJCWJyZWFrOwoJCQlkZWZhdWx0OmJy
ZWFrOwoJCX0KCQlicmVhazsKCgljYXNlIFVEX09QX1BUUjoKCQlzd2l0Y2ggKG9wLT5zaXplKSB7
CgkJCWNhc2UgMzI6CgkJCQlta2FzbSh1LCAid29yZCAweCV4OjB4JXgiLCBvcC0+bHZhbC5wdHIu
c2VnLCAKCQkJCQlvcC0+bHZhbC5wdHIub2ZmICYgMHhGRkZGKTsKCQkJCWJyZWFrOwoJCQljYXNl
IDQ4OgoJCQkJbWthc20odSwgImR3b3JkIDB4JXg6MHglbHgiLCBvcC0+bHZhbC5wdHIuc2VnLCAK
CQkJCQlvcC0+bHZhbC5wdHIub2ZmKTsKCQkJCWJyZWFrOwoJCX0KCQlicmVhazsKCgljYXNlIFVE
X09QX0NPTlNUOgoJCWlmIChzeW5fY2FzdCkgb3ByX2Nhc3QodSwgb3ApOwoJCW1rYXNtKHUsICIl
ZCIsIG9wLT5sdmFsLnVkd29yZCk7CgkJYnJlYWs7CgoJZGVmYXVsdDogcmV0dXJuOwogIH0KfQoK
LyogPT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT0KICogdHJhbnNsYXRlcyB0byBpbnRlbCBzeW50YXggCiAq
ID09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09CiAqLwpleHRlcm4gdm9pZCB1ZF90cmFuc2xhdGVfaW50ZWwo
c3RydWN0IHVkKiB1KQp7CiAgLyogLS0gcHJlZml4ZXMgLS0gKi8KCiAgLyogY2hlY2sgaWYgUF9P
U08gcHJlZml4IGlzIHVzZWQgKi8KICBpZiAoISBQX09TTyh1LT5pdGFiX2VudHJ5LT5wcmVmaXgp
ICYmIHUtPnBmeF9vcHIpIHsKCXN3aXRjaCAodS0+ZGlzX21vZGUpIHsKCQljYXNlIDE2OiAKCQkJ
bWthc20odSwgIm8zMiAiKTsKCQkJYnJlYWs7CgkJY2FzZSAzMjoKCQljYXNlIDY0OgogCQkJbWth
c20odSwgIm8xNiAiKTsKCQkJYnJlYWs7Cgl9CiAgfQoKICAvKiBjaGVjayBpZiBQX0FTTyBwcmVm
aXggd2FzIHVzZWQgKi8KICBpZiAoISBQX0FTTyh1LT5pdGFiX2VudHJ5LT5wcmVmaXgpICYmIHUt
PnBmeF9hZHIpIHsKCXN3aXRjaCAodS0+ZGlzX21vZGUpIHsKCQljYXNlIDE2OiAKCQkJbWthc20o
dSwgImEzMiAiKTsKCQkJYnJlYWs7CgkJY2FzZSAzMjoKIAkJCW1rYXNtKHUsICJhMTYgIik7CgkJ
CWJyZWFrOwoJCWNhc2UgNjQ6CiAJCQlta2FzbSh1LCAiYTMyICIpOwoJCQlicmVhazsKCX0KICB9
CgogIGlmICh1LT5wZnhfbG9jaykKCW1rYXNtKHUsICJsb2NrICIpOwogIGlmICh1LT5wZnhfcmVw
KQoJbWthc20odSwgInJlcCAiKTsKICBpZiAodS0+cGZ4X3JlcG5lKQoJbWthc20odSwgInJlcG5l
ICIpOwogIGlmICh1LT5pbXBsaWNpdF9hZGRyICYmIHUtPnBmeF9zZWcpCglta2FzbSh1LCAiJXMg
IiwgdWRfcmVnX3RhYlt1LT5wZnhfc2VnIC0gVURfUl9BTF0pOwoKICAvKiBwcmludCB0aGUgaW5z
dHJ1Y3Rpb24gbW5lbW9uaWMgKi8KICBta2FzbSh1LCAiJXMgIiwgdWRfbG9va3VwX21uZW1vbmlj
KHUtPm1uZW1vbmljKSk7CgogIC8qIG9wZXJhbmQgMSAqLwogIGlmICh1LT5vcGVyYW5kWzBdLnR5
cGUgIT0gVURfTk9ORSkgewoJZ2VuX29wZXJhbmQodSwgJnUtPm9wZXJhbmRbMF0sIHUtPmMxKTsK
ICB9CiAgLyogb3BlcmFuZCAyICovCiAgaWYgKHUtPm9wZXJhbmRbMV0udHlwZSAhPSBVRF9OT05F
KSB7Cglta2FzbSh1LCAiLCAiKTsKCWdlbl9vcGVyYW5kKHUsICZ1LT5vcGVyYW5kWzFdLCB1LT5j
Mik7CiAgfQoKICAvKiBvcGVyYW5kIDMgKi8KICBpZiAodS0+b3BlcmFuZFsyXS50eXBlICE9IFVE
X05PTkUpIHsKCW1rYXNtKHUsICIsICIpOwoJZ2VuX29wZXJhbmQodSwgJnUtPm9wZXJhbmRbMl0s
IHUtPmMzKTsKICB9Cn0KAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAa2RiL3g4Ni91ZGlz
ODYtMS43L2l0YWIuYwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADAwMDA2NjQAMDAwMjc1NgAwMDAyNzU2ADAw
MDAwNjQ3MzU2ADExNzY1NDY1NTU2ADAxNDc1NgAgMAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAB1c3RhciAgAG1yYXRob3IAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
bXJhdGhvcgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAKLyogaXRhYi5jIC0t
IGF1dG8gZ2VuZXJhdGVkIGJ5IG9wZ2VuLnB5LCBkbyBub3QgZWRpdC4gKi8KCiNpbmNsdWRlICJ0
eXBlcy5oIgojaW5jbHVkZSAiZGVjb2RlLmgiCiNpbmNsdWRlICJpdGFiLmgiCgpjb25zdCBjaGFy
ICogdWRfbW5lbW9uaWNzX3N0cltdID0gewogICIzZG5vdyIsCiAgImFhYSIsCiAgImFhZCIsCiAg
ImFhbSIsCiAgImFhcyIsCiAgImFkYyIsCiAgImFkZCIsCiAgImFkZHBkIiwKICAiYWRkcHMiLAog
ICJhZGRzZCIsCiAgImFkZHNzIiwKICAiYWRkc3VicGQiLAogICJhZGRzdWJwcyIsCiAgImFuZCIs
CiAgImFuZHBkIiwKICAiYW5kcHMiLAogICJhbmRucGQiLAogICJhbmRucHMiLAogICJhcnBsIiwK
ICAibW92c3hkIiwKICAiYm91bmQiLAogICJic2YiLAogICJic3IiLAogICJic3dhcCIsCiAgImJ0
IiwKICAiYnRjIiwKICAiYnRyIiwKICAiYnRzIiwKICAiY2FsbCIsCiAgImNidyIsCiAgImN3ZGUi
LAogICJjZHFlIiwKICAiY2xjIiwKICAiY2xkIiwKICAiY2xmbHVzaCIsCiAgImNsZ2kiLAogICJj
bGkiLAogICJjbHRzIiwKICAiY21jIiwKICAiY21vdm8iLAogICJjbW92bm8iLAogICJjbW92YiIs
CiAgImNtb3ZhZSIsCiAgImNtb3Z6IiwKICAiY21vdm56IiwKICAiY21vdmJlIiwKICAiY21vdmEi
LAogICJjbW92cyIsCiAgImNtb3ZucyIsCiAgImNtb3ZwIiwKICAiY21vdm5wIiwKICAiY21vdmwi
LAogICJjbW92Z2UiLAogICJjbW92bGUiLAogICJjbW92ZyIsCiAgImNtcCIsCiAgImNtcHBkIiwK
ICAiY21wcHMiLAogICJjbXBzYiIsCiAgImNtcHN3IiwKICAiY21wc2QiLAogICJjbXBzcSIsCiAg
ImNtcHNzIiwKICAiY21weGNoZyIsCiAgImNtcHhjaGc4YiIsCiAgImNvbWlzZCIsCiAgImNvbWlz
cyIsCiAgImNwdWlkIiwKICAiY3Z0ZHEycGQiLAogICJjdnRkcTJwcyIsCiAgImN2dHBkMmRxIiwK
ICAiY3Z0cGQycGkiLAogICJjdnRwZDJwcyIsCiAgImN2dHBpMnBzIiwKICAiY3Z0cGkycGQiLAog
ICJjdnRwczJkcSIsCiAgImN2dHBzMnBpIiwKICAiY3Z0cHMycGQiLAogICJjdnRzZDJzaSIsCiAg
ImN2dHNkMnNzIiwKICAiY3Z0c2kyc3MiLAogICJjdnRzczJzaSIsCiAgImN2dHNzMnNkIiwKICAi
Y3Z0dHBkMnBpIiwKICAiY3Z0dHBkMmRxIiwKICAiY3Z0dHBzMmRxIiwKICAiY3Z0dHBzMnBpIiwK
ICAiY3Z0dHNkMnNpIiwKICAiY3Z0c2kyc2QiLAogICJjdnR0c3Myc2kiLAogICJjd2QiLAogICJj
ZHEiLAogICJjcW8iLAogICJkYWEiLAogICJkYXMiLAogICJkZWMiLAogICJkaXYiLAogICJkaXZw
ZCIsCiAgImRpdnBzIiwKICAiZGl2c2QiLAogICJkaXZzcyIsCiAgImVtbXMiLAogICJlbnRlciIs
CiAgImYyeG0xIiwKICAiZmFicyIsCiAgImZhZGQiLAogICJmYWRkcCIsCiAgImZibGQiLAogICJm
YnN0cCIsCiAgImZjaHMiLAogICJmY2xleCIsCiAgImZjbW92YiIsCiAgImZjbW92ZSIsCiAgImZj
bW92YmUiLAogICJmY21vdnUiLAogICJmY21vdm5iIiwKICAiZmNtb3ZuZSIsCiAgImZjbW92bmJl
IiwKICAiZmNtb3ZudSIsCiAgImZ1Y29taSIsCiAgImZjb20iLAogICJmY29tMiIsCiAgImZjb21w
MyIsCiAgImZjb21pIiwKICAiZnVjb21pcCIsCiAgImZjb21pcCIsCiAgImZjb21wIiwKICAiZmNv
bXA1IiwKICAiZmNvbXBwIiwKICAiZmNvcyIsCiAgImZkZWNzdHAiLAogICJmZGl2IiwKICAiZmRp
dnAiLAogICJmZGl2ciIsCiAgImZkaXZycCIsCiAgImZlbW1zIiwKICAiZmZyZWUiLAogICJmZnJl
ZXAiLAogICJmaWNvbSIsCiAgImZpY29tcCIsCiAgImZpbGQiLAogICJmbmNzdHAiLAogICJmbmlu
aXQiLAogICJmaWFkZCIsCiAgImZpZGl2ciIsCiAgImZpZGl2IiwKICAiZmlzdWIiLAogICJmaXN1
YnIiLAogICJmaXN0IiwKICAiZmlzdHAiLAogICJmaXN0dHAiLAogICJmbGQiLAogICJmbGQxIiwK
ICAiZmxkbDJ0IiwKICAiZmxkbDJlIiwKICAiZmxkbHBpIiwKICAiZmxkbGcyIiwKICAiZmxkbG4y
IiwKICAiZmxkeiIsCiAgImZsZGN3IiwKICAiZmxkZW52IiwKICAiZm11bCIsCiAgImZtdWxwIiwK
ICAiZmltdWwiLAogICJmbm9wIiwKICAiZnBhdGFuIiwKICAiZnByZW0iLAogICJmcHJlbTEiLAog
ICJmcHRhbiIsCiAgImZybmRpbnQiLAogICJmcnN0b3IiLAogICJmbnNhdmUiLAogICJmc2NhbGUi
LAogICJmc2luIiwKICAiZnNpbmNvcyIsCiAgImZzcXJ0IiwKICAiZnN0cCIsCiAgImZzdHAxIiwK
ICAiZnN0cDgiLAogICJmc3RwOSIsCiAgImZzdCIsCiAgImZuc3RjdyIsCiAgImZuc3RlbnYiLAog
ICJmbnN0c3ciLAogICJmc3ViIiwKICAiZnN1YnAiLAogICJmc3ViciIsCiAgImZzdWJycCIsCiAg
ImZ0c3QiLAogICJmdWNvbSIsCiAgImZ1Y29tcCIsCiAgImZ1Y29tcHAiLAogICJmeGFtIiwKICAi
ZnhjaCIsCiAgImZ4Y2g0IiwKICAiZnhjaDciLAogICJmeHJzdG9yIiwKICAiZnhzYXZlIiwKICAi
ZnB4dHJhY3QiLAogICJmeWwyeCIsCiAgImZ5bDJ4cDEiLAogICJoYWRkcGQiLAogICJoYWRkcHMi
LAogICJobHQiLAogICJoc3VicGQiLAogICJoc3VicHMiLAogICJpZGl2IiwKICAiaW4iLAogICJp
bXVsIiwKICAiaW5jIiwKICAiaW5zYiIsCiAgImluc3ciLAogICJpbnNkIiwKICAiaW50MSIsCiAg
ImludDMiLAogICJpbnQiLAogICJpbnRvIiwKICAiaW52ZCIsCiAgImludmxwZyIsCiAgImludmxw
Z2EiLAogICJpcmV0dyIsCiAgImlyZXRkIiwKICAiaXJldHEiLAogICJqbyIsCiAgImpubyIsCiAg
ImpiIiwKICAiamFlIiwKICAianoiLAogICJqbnoiLAogICJqYmUiLAogICJqYSIsCiAgImpzIiwK
ICAiam5zIiwKICAianAiLAogICJqbnAiLAogICJqbCIsCiAgImpnZSIsCiAgImpsZSIsCiAgImpn
IiwKICAiamN4eiIsCiAgImplY3h6IiwKICAianJjeHoiLAogICJqbXAiLAogICJsYWhmIiwKICAi
bGFyIiwKICAibGRkcXUiLAogICJsZG14Y3NyIiwKICAibGRzIiwKICAibGVhIiwKICAibGVzIiwK
ICAibGZzIiwKICAibGdzIiwKICAibGlkdCIsCiAgImxzcyIsCiAgImxlYXZlIiwKICAibGZlbmNl
IiwKICAibGdkdCIsCiAgImxsZHQiLAogICJsbXN3IiwKICAibG9jayIsCiAgImxvZHNiIiwKICAi
bG9kc3ciLAogICJsb2RzZCIsCiAgImxvZHNxIiwKICAibG9vcG56IiwKICAibG9vcGUiLAogICJs
b29wIiwKICAibHNsIiwKICAibHRyIiwKICAibWFza21vdnEiLAogICJtYXhwZCIsCiAgIm1heHBz
IiwKICAibWF4c2QiLAogICJtYXhzcyIsCiAgIm1mZW5jZSIsCiAgIm1pbnBkIiwKICAibWlucHMi
LAogICJtaW5zZCIsCiAgIm1pbnNzIiwKICAibW9uaXRvciIsCiAgIm1vdiIsCiAgIm1vdmFwZCIs
CiAgIm1vdmFwcyIsCiAgIm1vdmQiLAogICJtb3ZkZHVwIiwKICAibW92ZHFhIiwKICAibW92ZHF1
IiwKICAibW92ZHEycSIsCiAgIm1vdmhwZCIsCiAgIm1vdmhwcyIsCiAgIm1vdmxocHMiLAogICJt
b3ZscGQiLAogICJtb3ZscHMiLAogICJtb3ZobHBzIiwKICAibW92bXNrcGQiLAogICJtb3Ztc2tw
cyIsCiAgIm1vdm50ZHEiLAogICJtb3ZudGkiLAogICJtb3ZudHBkIiwKICAibW92bnRwcyIsCiAg
Im1vdm50cSIsCiAgIm1vdnEiLAogICJtb3ZxYSIsCiAgIm1vdnEyZHEiLAogICJtb3ZzYiIsCiAg
Im1vdnN3IiwKICAibW92c2QiLAogICJtb3ZzcSIsCiAgIm1vdnNsZHVwIiwKICAibW92c2hkdXAi
LAogICJtb3ZzcyIsCiAgIm1vdnN4IiwKICAibW92dXBkIiwKICAibW92dXBzIiwKICAibW92engi
LAogICJtdWwiLAogICJtdWxwZCIsCiAgIm11bHBzIiwKICAibXVsc2QiLAogICJtdWxzcyIsCiAg
Im13YWl0IiwKICAibmVnIiwKICAibm9wIiwKICAibm90IiwKICAib3IiLAogICJvcnBkIiwKICAi
b3JwcyIsCiAgIm91dCIsCiAgIm91dHNiIiwKICAib3V0c3ciLAogICJvdXRzZCIsCiAgIm91dHNx
IiwKICAicGFja3Nzd2IiLAogICJwYWNrc3NkdyIsCiAgInBhY2t1c3diIiwKICAicGFkZGIiLAog
ICJwYWRkdyIsCiAgInBhZGRxIiwKICAicGFkZHNiIiwKICAicGFkZHN3IiwKICAicGFkZHVzYiIs
CiAgInBhZGR1c3ciLAogICJwYW5kIiwKICAicGFuZG4iLAogICJwYXVzZSIsCiAgInBhdmdiIiwK
ICAicGF2Z3ciLAogICJwY21wZXFiIiwKICAicGNtcGVxdyIsCiAgInBjbXBlcWQiLAogICJwY21w
Z3RiIiwKICAicGNtcGd0dyIsCiAgInBjbXBndGQiLAogICJwZXh0cnciLAogICJwaW5zcnciLAog
ICJwbWFkZHdkIiwKICAicG1heHN3IiwKICAicG1heHViIiwKICAicG1pbnN3IiwKICAicG1pbnVi
IiwKICAicG1vdm1za2IiLAogICJwbXVsaHV3IiwKICAicG11bGh3IiwKICAicG11bGx3IiwKICAi
cG11bHVkcSIsCiAgInBvcCIsCiAgInBvcGEiLAogICJwb3BhZCIsCiAgInBvcGZ3IiwKICAicG9w
ZmQiLAogICJwb3BmcSIsCiAgInBvciIsCiAgInByZWZldGNoIiwKICAicHJlZmV0Y2hudGEiLAog
ICJwcmVmZXRjaHQwIiwKICAicHJlZmV0Y2h0MSIsCiAgInByZWZldGNodDIiLAogICJwc2FkYnci
LAogICJwc2h1ZmQiLAogICJwc2h1Zmh3IiwKICAicHNodWZsdyIsCiAgInBzaHVmdyIsCiAgInBz
bGxkcSIsCiAgInBzbGx3IiwKICAicHNsbGQiLAogICJwc2xscSIsCiAgInBzcmF3IiwKICAicHNy
YWQiLAogICJwc3JsdyIsCiAgInBzcmxkIiwKICAicHNybHEiLAogICJwc3JsZHEiLAogICJwc3Vi
YiIsCiAgInBzdWJ3IiwKICAicHN1YmQiLAogICJwc3VicSIsCiAgInBzdWJzYiIsCiAgInBzdWJz
dyIsCiAgInBzdWJ1c2IiLAogICJwc3VidXN3IiwKICAicHVucGNraGJ3IiwKICAicHVucGNraHdk
IiwKICAicHVucGNraGRxIiwKICAicHVucGNraHFkcSIsCiAgInB1bnBja2xidyIsCiAgInB1bnBj
a2x3ZCIsCiAgInB1bnBja2xkcSIsCiAgInB1bnBja2xxZHEiLAogICJwaTJmdyIsCiAgInBpMmZk
IiwKICAicGYyaXciLAogICJwZjJpZCIsCiAgInBmbmFjYyIsCiAgInBmcG5hY2MiLAogICJwZmNt
cGdlIiwKICAicGZtaW4iLAogICJwZnJjcCIsCiAgInBmcnNxcnQiLAogICJwZnN1YiIsCiAgInBm
YWRkIiwKICAicGZjbXBndCIsCiAgInBmbWF4IiwKICAicGZyY3BpdDEiLAogICJwZnJzcGl0MSIs
CiAgInBmc3ViciIsCiAgInBmYWNjIiwKICAicGZjbXBlcSIsCiAgInBmbXVsIiwKICAicGZyY3Bp
dDIiLAogICJwbXVsaHJ3IiwKICAicHN3YXBkIiwKICAicGF2Z3VzYiIsCiAgInB1c2giLAogICJw
dXNoYSIsCiAgInB1c2hhZCIsCiAgInB1c2hmdyIsCiAgInB1c2hmZCIsCiAgInB1c2hmcSIsCiAg
InB4b3IiLAogICJyY2wiLAogICJyY3IiLAogICJyb2wiLAogICJyb3IiLAogICJyY3BwcyIsCiAg
InJjcHNzIiwKICAicmRtc3IiLAogICJyZHBtYyIsCiAgInJkdHNjIiwKICAicmR0c2NwIiwKICAi
cmVwbmUiLAogICJyZXAiLAogICJyZXQiLAogICJyZXRmIiwKICAicnNtIiwKICAicnNxcnRwcyIs
CiAgInJzcXJ0c3MiLAogICJzYWhmIiwKICAic2FsIiwKICAic2FsYyIsCiAgInNhciIsCiAgInNo
bCIsCiAgInNociIsCiAgInNiYiIsCiAgInNjYXNiIiwKICAic2Nhc3ciLAogICJzY2FzZCIsCiAg
InNjYXNxIiwKICAic2V0byIsCiAgInNldG5vIiwKICAic2V0YiIsCiAgInNldG5iIiwKICAic2V0
eiIsCiAgInNldG56IiwKICAic2V0YmUiLAogICJzZXRhIiwKICAic2V0cyIsCiAgInNldG5zIiwK
ICAic2V0cCIsCiAgInNldG5wIiwKICAic2V0bCIsCiAgInNldGdlIiwKICAic2V0bGUiLAogICJz
ZXRnIiwKICAic2ZlbmNlIiwKICAic2dkdCIsCiAgInNobGQiLAogICJzaHJkIiwKICAic2h1ZnBk
IiwKICAic2h1ZnBzIiwKICAic2lkdCIsCiAgInNsZHQiLAogICJzbXN3IiwKICAic3FydHBzIiwK
ICAic3FydHBkIiwKICAic3FydHNkIiwKICAic3FydHNzIiwKICAic3RjIiwKICAic3RkIiwKICAi
c3RnaSIsCiAgInN0aSIsCiAgInNraW5pdCIsCiAgInN0bXhjc3IiLAogICJzdG9zYiIsCiAgInN0
b3N3IiwKICAic3Rvc2QiLAogICJzdG9zcSIsCiAgInN0ciIsCiAgInN1YiIsCiAgInN1YnBkIiwK
ICAic3VicHMiLAogICJzdWJzZCIsCiAgInN1YnNzIiwKICAic3dhcGdzIiwKICAic3lzY2FsbCIs
CiAgInN5c2VudGVyIiwKICAic3lzZXhpdCIsCiAgInN5c3JldCIsCiAgInRlc3QiLAogICJ1Y29t
aXNkIiwKICAidWNvbWlzcyIsCiAgInVkMiIsCiAgInVucGNraHBkIiwKICAidW5wY2tocHMiLAog
ICJ1bnBja2xwcyIsCiAgInVucGNrbHBkIiwKICAidmVyciIsCiAgInZlcnciLAogICJ2bWNhbGwi
LAogICJ2bWNsZWFyIiwKICAidm14b24iLAogICJ2bXB0cmxkIiwKICAidm1wdHJzdCIsCiAgInZt
cmVzdW1lIiwKICAidm14b2ZmIiwKICAidm1ydW4iLAogICJ2bW1jYWxsIiwKICAidm1sb2FkIiwK
ICAidm1zYXZlIiwKICAid2FpdCIsCiAgIndiaW52ZCIsCiAgIndybXNyIiwKICAieGFkZCIsCiAg
InhjaGciLAogICJ4bGF0YiIsCiAgInhvciIsCiAgInhvcnBkIiwKICAieG9ycHMiLAogICJkYiIs
CiAgImludmFsaWQiLAp9OwoKCgpzdGF0aWMgc3RydWN0IHVkX2l0YWJfZW50cnkgaXRhYl9fMGZb
MjU2XSA9IHsKICAvKiAwMCAqLyAgeyBVRF9JZ3JwX3JlZywgICAgIE9fTk9ORSwgT19OT05FLCBP
X05PTkUsICAgIElUQUJfXzBGX19PUF8wMF9fUkVHIH0sCiAgLyogMDEgKi8gIHsgVURfSWdycF9y
ZWcsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBJVEFCX18wRl9fT1BfMDFfX1JFRyB9
LAogIC8qIDAyICovICB7IFVEX0lsYXIsICAgICAgICAgT19HdiwgICAgT19FdywgICAgT19OT05F
LCAgUF9hc298UF9vc298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDMgKi8g
IHsgVURfSWxzbCwgICAgICAgICBPX0d2LCAgICBPX0V3LCAgICBPX05PTkUsICBQX2Fzb3xQX29z
b3xQX3JleHd8UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAwNCAqLyAgeyBVRF9JaW52YWxp
ZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDA1ICovICB7
IFVEX0lzeXNjYWxsLCAgICAgT19OT05FLCAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAg
LyogMDYgKi8gIHsgVURfSWNsdHMsICAgICAgICBPX05PTkUsICBPX05PTkUsICBPX05PTkUsICBQ
X25vbmUgfSwKICAvKiAwNyAqLyAgeyBVRF9Jc3lzcmV0LCAgICAgIE9fTk9ORSwgIE9fTk9ORSwg
IE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDA4ICovICB7IFVEX0lpbnZkLCAgICAgICAgT19OT05F
LCAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMDkgKi8gIHsgVURfSXdiaW52ZCwg
ICAgICBPX05PTkUsICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAwQSAqLyAgeyBV
RF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8q
IDBCICovICB7IFVEX0l1ZDIsICAgICAgICAgT19OT05FLCAgT19OT05FLCAgT19OT05FLCAgUF9u
b25lIH0sCiAgLyogMEMgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19O
T05FLCAgICBQX25vbmUgfSwKICAvKiAwRCAqLyAgeyBVRF9JZ3JwX3JlZywgICAgIE9fTk9ORSwg
T19OT05FLCBPX05PTkUsICAgIElUQUJfXzBGX19PUF8wRF9fUkVHIH0sCiAgLyogMEUgKi8gIHsg
VURfSWZlbW1zLCAgICAgICBPX05PTkUsICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAv
KiAwRiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBf
bm9uZSB9LAogIC8qIDEwICovICB7IFVEX0ltb3Z1cHMsICAgICAgT19WLCAgICAgT19XLCAgICAg
T19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAxMSAqLyAgeyBVRF9J
bW92dXBzLCAgICAgIE9fVywgICAgIE9fViwgICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3Jl
eHh8UF9yZXhiIH0sCiAgLyogMTIgKi8gIHsgVURfSW1vdmxwcywgICAgICBPX1YsICAgICBPX1cs
ICAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDEzICovICB7
IFVEX0ltb3ZscHMsICAgICAgT19NLCAgICAgT19WLCAgICAgT19OT05FLCAgUF9hc298UF9yZXhy
fFBfcmV4eHxQX3JleGIgfSwKICAvKiAxNCAqLyAgeyBVRF9JdW5wY2tscHMsICAgIE9fViwgICAg
IE9fVywgICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMTUg
Ki8gIHsgVURfSXVucGNraHBzLCAgICBPX1YsICAgICBPX1csICAgICBPX05PTkUsICBQX2Fzb3xQ
X3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDE2ICovICB7IFVEX0ltb3ZocHMsICAgICAgT19W
LCAgICAgT19XLCAgICAgT19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAv
KiAxNyAqLyAgeyBVRF9JbW92aHBzLCAgICAgIE9fTSwgICAgIE9fViwgICAgIE9fTk9ORSwgIFBf
YXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMTggKi8gIHsgVURfSWdycF9yZWcsICAg
ICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBJVEFCX18wRl9fT1BfMThfX1JFRyB9LAogIC8q
IDE5ICovICB7IFVEX0lub3AsICAgICAgICAgT19NLCAgICAgT19OT05FLCAgT19OT05FLCAgUF9h
c298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAxQSAqLyAgeyBVRF9Jbm9wLCAgICAgICAg
IE9fTSwgICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0s
CiAgLyogMUIgKi8gIHsgVURfSW5vcCwgICAgICAgICBPX00sICAgICBPX05PTkUsICBPX05PTkUs
ICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDFDICovICB7IFVEX0lub3AsICAg
ICAgICAgT19NLCAgICAgT19OT05FLCAgT19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3Jl
eGIgfSwKICAvKiAxRCAqLyAgeyBVRF9Jbm9wLCAgICAgICAgIE9fTSwgICAgIE9fTk9ORSwgIE9f
Tk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMUUgKi8gIHsgVURfSW5v
cCwgICAgICAgICBPX00sICAgICBPX05PTkUsICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4
fFBfcmV4YiB9LAogIC8qIDFGICovICB7IFVEX0lub3AsICAgICAgICAgT19NLCAgICAgT19OT05F
LCAgT19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAyMCAqLyAgeyBV
RF9JbW92LCAgICAgICAgIE9fUiwgICAgIE9fQywgICAgIE9fTk9ORSwgIFBfcmV4ciB9LAogIC8q
IDIxICovICB7IFVEX0ltb3YsICAgICAgICAgT19SLCAgICAgT19ELCAgICAgT19OT05FLCAgUF9y
ZXhyIH0sCiAgLyogMjIgKi8gIHsgVURfSW1vdiwgICAgICAgICBPX0MsICAgICBPX1IsICAgICBP
X05PTkUsICBQX3JleHIgfSwKICAvKiAyMyAqLyAgeyBVRF9JbW92LCAgICAgICAgIE9fRCwgICAg
IE9fUiwgICAgIE9fTk9ORSwgIFBfcmV4ciB9LAogIC8qIDI0ICovICB7IFVEX0lpbnZhbGlkLCAg
ICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMjUgKi8gIHsgVURf
SWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAy
NiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9u
ZSB9LAogIC8qIDI3ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9O
RSwgICAgUF9ub25lIH0sCiAgLyogMjggKi8gIHsgVURfSW1vdmFwcywgICAgICBPX1YsICAgICBP
X1csICAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDI5ICov
ICB7IFVEX0ltb3ZhcHMsICAgICAgT19XLCAgICAgT19WLCAgICAgT19OT05FLCAgUF9hc298UF9y
ZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAyQSAqLyAgeyBVRF9JY3Z0cGkycHMsICAgIE9fViwg
ICAgIE9fUSwgICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyog
MkIgKi8gIHsgVURfSW1vdm50cHMsICAgICBPX00sICAgICBPX1YsICAgICBPX05PTkUsICBQX2Fz
b3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDJDICovICB7IFVEX0ljdnR0cHMycGksICAg
T19QLCAgICAgT19XLCAgICAgT19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwK
ICAvKiAyRCAqLyAgeyBVRF9JY3Z0cHMycGksICAgIE9fUCwgICAgIE9fVywgICAgIE9fTk9ORSwg
IFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMkUgKi8gIHsgVURfSXVjb21pc3Ms
ICAgICBPX1YsICAgICBPX1csICAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4
YiB9LAogIC8qIDJGICovICB7IFVEX0ljb21pc3MsICAgICAgT19WLCAgICAgT19XLCAgICAgT19O
T05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAzMCAqLyAgeyBVRF9Jd3Jt
c3IsICAgICAgIE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDMxICov
ICB7IFVEX0lyZHRzYywgICAgICAgT19OT05FLCAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0s
CiAgLyogMzIgKi8gIHsgVURfSXJkbXNyLCAgICAgICBPX05PTkUsICBPX05PTkUsICBPX05PTkUs
ICBQX25vbmUgfSwKICAvKiAzMyAqLyAgeyBVRF9JcmRwbWMsICAgICAgIE9fTk9ORSwgIE9fTk9O
RSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDM0ICovICB7IFVEX0lzeXNlbnRlciwgICAgT19O
T05FLCAgT19OT05FLCAgT19OT05FLCAgUF9pbnY2NHxQX25vbmUgfSwKICAvKiAzNSAqLyAgeyBV
RF9Jc3lzZXhpdCwgICAgIE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8q
IDM2ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9u
b25lIH0sCiAgLyogMzcgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19O
T05FLCAgICBQX25vbmUgfSwKICAvKiAzOCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwg
T19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDM5ICovICB7IFVEX0lpbnZhbGlkLCAg
ICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogM0EgKi8gIHsgVURf
SWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAz
QiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9u
ZSB9LAogIC8qIDNDICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9O
RSwgICAgUF9ub25lIH0sCiAgLyogM0QgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9f
Tk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAzRSAqLyAgeyBVRF9JaW52YWxpZCwgICAg
IE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDNGICovICB7IFVEX0lp
bnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogNDAg
Ki8gIHsgVURfSWNtb3ZvLCAgICAgICBPX0d2LCAgICBPX0V2LCAgICBPX05PTkUsICBQX2Fzb3xQ
X29zb3xQX3JleHd8UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiA0MSAqLyAgeyBVRF9JY21v
dm5vLCAgICAgIE9fR3YsICAgIE9fRXYsICAgIE9fTk9ORSwgIFBfYXNvfFBfb3NvfFBfcmV4d3xQ
X3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDQyICovICB7IFVEX0ljbW92YiwgICAgICAgT19H
diwgICAgT19FdiwgICAgT19OT05FLCAgUF9hc298UF9vc298UF9yZXh3fFBfcmV4cnxQX3JleHh8
UF9yZXhiIH0sCiAgLyogNDMgKi8gIHsgVURfSWNtb3ZhZSwgICAgICBPX0d2LCAgICBPX0V2LCAg
ICBPX05PTkUsICBQX2Fzb3xQX29zb3xQX3JleHd8UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAv
KiA0NCAqLyAgeyBVRF9JY21vdnosICAgICAgIE9fR3YsICAgIE9fRXYsICAgIE9fTk9ORSwgIFBf
YXNvfFBfb3NvfFBfcmV4d3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDQ1ICovICB7IFVE
X0ljbW92bnosICAgICAgT19HdiwgICAgT19FdiwgICAgT19OT05FLCAgUF9hc298UF9vc298UF9y
ZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogNDYgKi8gIHsgVURfSWNtb3ZiZSwgICAg
ICBPX0d2LCAgICBPX0V2LCAgICBPX05PTkUsICBQX2Fzb3xQX29zb3xQX3JleHd8UF9yZXhyfFBf
cmV4eHxQX3JleGIgfSwKICAvKiA0NyAqLyAgeyBVRF9JY21vdmEsICAgICAgIE9fR3YsICAgIE9f
RXYsICAgIE9fTk9ORSwgIFBfYXNvfFBfb3NvfFBfcmV4d3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9
LAogIC8qIDQ4ICovICB7IFVEX0ljbW92cywgICAgICAgT19HdiwgICAgT19FdiwgICAgT19OT05F
LCAgUF9hc298UF9vc298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogNDkgKi8g
IHsgVURfSWNtb3ZucywgICAgICBPX0d2LCAgICBPX0V2LCAgICBPX05PTkUsICBQX2Fzb3xQX29z
b3xQX3JleHd8UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiA0QSAqLyAgeyBVRF9JY21vdnAs
ICAgICAgIE9fR3YsICAgIE9fRXYsICAgIE9fTk9ORSwgIFBfYXNvfFBfb3NvfFBfcmV4d3xQX3Jl
eHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDRCICovICB7IFVEX0ljbW92bnAsICAgICAgT19Hdiwg
ICAgT19FdiwgICAgT19OT05FLCAgUF9hc298UF9vc298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9y
ZXhiIH0sCiAgLyogNEMgKi8gIHsgVURfSWNtb3ZsLCAgICAgICBPX0d2LCAgICBPX0V2LCAgICBP
X05PTkUsICBQX2Fzb3xQX29zb3xQX3JleHd8UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiA0
RCAqLyAgeyBVRF9JY21vdmdlLCAgICAgIE9fR3YsICAgIE9fRXYsICAgIE9fTk9ORSwgIFBfYXNv
fFBfb3NvfFBfcmV4d3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDRFICovICB7IFVEX0lj
bW92bGUsICAgICAgT19HdiwgICAgT19FdiwgICAgT19OT05FLCAgUF9hc298UF9vc298UF9yZXh3
fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogNEYgKi8gIHsgVURfSWNtb3ZnLCAgICAgICBP
X0d2LCAgICBPX0V2LCAgICBPX05PTkUsICBQX2Fzb3xQX29zb3xQX3JleHd8UF9yZXhyfFBfcmV4
eHxQX3JleGIgfSwKICAvKiA1MCAqLyAgeyBVRF9JbW92bXNrcHMsICAgIE9fR2QsICAgIE9fVlIs
ICAgIE9fTk9ORSwgIFBfb3NvfFBfcmV4cnxQX3JleGIgfSwKICAvKiA1MSAqLyAgeyBVRF9Jc3Fy
dHBzLCAgICAgIE9fViwgICAgIE9fVywgICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8
UF9yZXhiIH0sCiAgLyogNTIgKi8gIHsgVURfSXJzcXJ0cHMsICAgICBPX1YsICAgICBPX1csICAg
ICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDUzICovICB7IFVE
X0lyY3BwcywgICAgICAgT19WLCAgICAgT19XLCAgICAgT19OT05FLCAgUF9hc298UF9yZXhyfFBf
cmV4eHxQX3JleGIgfSwKICAvKiA1NCAqLyAgeyBVRF9JYW5kcHMsICAgICAgIE9fViwgICAgIE9f
VywgICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogNTUgKi8g
IHsgVURfSWFuZG5wcywgICAgICBPX1YsICAgICBPX1csICAgICBPX05PTkUsICBQX2Fzb3xQX3Jl
eHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDU2ICovICB7IFVEX0lvcnBzLCAgICAgICAgT19WLCAg
ICAgT19XLCAgICAgT19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiA1
NyAqLyAgeyBVRF9JeG9ycHMsICAgICAgIE9fViwgICAgIE9fVywgICAgIE9fTk9ORSwgIFBfYXNv
fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogNTggKi8gIHsgVURfSWFkZHBzLCAgICAgICBP
X1YsICAgICBPX1csICAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAog
IC8qIDU5ICovICB7IFVEX0ltdWxwcywgICAgICAgT19WLCAgICAgT19XLCAgICAgT19OT05FLCAg
UF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiA1QSAqLyAgeyBVRF9JY3Z0cHMycGQs
ICAgIE9fViwgICAgIE9fVywgICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhi
IH0sCiAgLyogNUIgKi8gIHsgVURfSWN2dGRxMnBzLCAgICBPX1YsICAgICBPX1csICAgICBPX05P
TkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDVDICovICB7IFVEX0lzdWJw
cywgICAgICAgT19WLCAgICAgT19XLCAgICAgT19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQ
X3JleGIgfSwKICAvKiA1RCAqLyAgeyBVRF9JbWlucHMsICAgICAgIE9fViwgICAgIE9fVywgICAg
IE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogNUUgKi8gIHsgVURf
SWRpdnBzLCAgICAgICBPX1YsICAgICBPX1csICAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9y
ZXh4fFBfcmV4YiB9LAogIC8qIDVGICovICB7IFVEX0ltYXhwcywgICAgICAgT19WLCAgICAgT19X
LCAgICAgT19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiA2MCAqLyAg
eyBVRF9JcHVucGNrbGJ3LCAgIE9fUCwgICAgIE9fUSwgICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4
cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogNjEgKi8gIHsgVURfSXB1bnBja2x3ZCwgICBPX1AsICAg
ICBPX1EsICAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDYy
ICovICB7IFVEX0lwdW5wY2tsZHEsICAgT19QLCAgICAgT19RLCAgICAgT19OT05FLCAgUF9hc298
UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiA2MyAqLyAgeyBVRF9JcGFja3Nzd2IsICAgIE9f
UCwgICAgIE9fUSwgICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAg
LyogNjQgKi8gIHsgVURfSXBjbXBndGIsICAgICBPX1AsICAgICBPX1EsICAgICBPX05PTkUsICBQ
X2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDY1ICovICB7IFVEX0lwY21wZ3R3LCAg
ICAgT19QLCAgICAgT19RLCAgICAgT19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIg
fSwKICAvKiA2NiAqLyAgeyBVRF9JcGNtcGd0ZCwgICAgIE9fUCwgICAgIE9fUSwgICAgIE9fTk9O
RSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogNjcgKi8gIHsgVURfSXBhY2t1
c3diLCAgICBPX1AsICAgICBPX1EsICAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBf
cmV4YiB9LAogIC8qIDY4ICovICB7IFVEX0lwdW5wY2toYncsICAgT19QLCAgICAgT19RLCAgICAg
T19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiA2OSAqLyAgeyBVRF9J
cHVucGNraHdkLCAgIE9fUCwgICAgIE9fUSwgICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3Jl
eHh8UF9yZXhiIH0sCiAgLyogNkEgKi8gIHsgVURfSXB1bnBja2hkcSwgICBPX1AsICAgICBPX1Es
ICAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDZCICovICB7
IFVEX0lwYWNrc3NkdywgICAgT19QLCAgICAgT19RLCAgICAgT19OT05FLCAgUF9hc298UF9yZXhy
fFBfcmV4eHxQX3JleGIgfSwKICAvKiA2QyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwg
T19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDZEICovICB7IFVEX0lpbnZhbGlkLCAg
ICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogNkUgKi8gIHsgVURf
SW1vdmQsICAgICAgICBPX1AsICAgICBPX0V4LCAgICBPX05PTkUsICBQX2MyfFBfYXNvfFBfcmV4
cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogNkYgKi8gIHsgVURfSW1vdnEsICAgICAgICBPX1AsICAg
ICBPX1EsICAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDcw
ICovICB7IFVEX0lwc2h1ZncsICAgICAgT19QLCAgICAgT19RLCAgICAgT19JYiwgICAgUF9hc298
UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiA3MSAqLyAgeyBVRF9JZ3JwX3JlZywgICAgIE9f
Tk9ORSwgT19OT05FLCBPX05PTkUsICAgIElUQUJfXzBGX19PUF83MV9fUkVHIH0sCiAgLyogNzIg
Ki8gIHsgVURfSWdycF9yZWcsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBJVEFCX18w
Rl9fT1BfNzJfX1JFRyB9LAogIC8qIDczICovICB7IFVEX0lncnBfcmVnLCAgICAgT19OT05FLCBP
X05PTkUsIE9fTk9ORSwgICAgSVRBQl9fMEZfX09QXzczX19SRUcgfSwKICAvKiA3NCAqLyAgeyBV
RF9JcGNtcGVxYiwgICAgIE9fUCwgICAgIE9fUSwgICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQ
X3JleHh8UF9yZXhiIH0sCiAgLyogNzUgKi8gIHsgVURfSXBjbXBlcXcsICAgICBPX1AsICAgICBP
X1EsICAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDc2ICov
ICB7IFVEX0lwY21wZXFkLCAgICAgT19QLCAgICAgT19RLCAgICAgT19OT05FLCAgUF9hc298UF9y
ZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiA3NyAqLyAgeyBVRF9JZW1tcywgICAgICAgIE9fTk9O
RSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDc4ICovICB7IFVEX0lpbnZhbGlk
LCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogNzkgKi8gIHsg
VURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAv
KiA3QSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBf
bm9uZSB9LAogIC8qIDdCICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9f
Tk9ORSwgICAgUF9ub25lIH0sCiAgLyogN0MgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUs
IE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA3RCAqLyAgeyBVRF9JaW52YWxpZCwg
ICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDdFICovICB7IFVE
X0ltb3ZkLCAgICAgICAgT19FeCwgICAgT19QLCAgICAgT19OT05FLCAgUF9jMXxQX2Fzb3xQX3Jl
eHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDdGICovICB7IFVEX0ltb3ZxLCAgICAgICAgT19RLCAg
ICAgT19QLCAgICAgT19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiA4
MCAqLyAgeyBVRF9Jam8sICAgICAgICAgIE9fSnosICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfYzF8
UF9kZWY2NHxQX2RlcE18UF9vc28gfSwKICAvKiA4MSAqLyAgeyBVRF9Jam5vLCAgICAgICAgIE9f
SnosICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfYzF8UF9kZWY2NHxQX2RlcE18UF9vc28gfSwKICAv
KiA4MiAqLyAgeyBVRF9JamIsICAgICAgICAgIE9fSnosICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBf
YzF8UF9kZWY2NHxQX2RlcE18UF9vc28gfSwKICAvKiA4MyAqLyAgeyBVRF9JamFlLCAgICAgICAg
IE9fSnosICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfYzF8UF9kZWY2NHxQX2RlcE18UF9vc28gfSwK
ICAvKiA4NCAqLyAgeyBVRF9JanosICAgICAgICAgIE9fSnosICAgIE9fTk9ORSwgIE9fTk9ORSwg
IFBfYzF8UF9kZWY2NHxQX2RlcE18UF9vc28gfSwKICAvKiA4NSAqLyAgeyBVRF9Jam56LCAgICAg
ICAgIE9fSnosICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfYzF8UF9kZWY2NHxQX2RlcE18UF9vc28g
fSwKICAvKiA4NiAqLyAgeyBVRF9JamJlLCAgICAgICAgIE9fSnosICAgIE9fTk9ORSwgIE9fTk9O
RSwgIFBfYzF8UF9kZWY2NHxQX2RlcE18UF9vc28gfSwKICAvKiA4NyAqLyAgeyBVRF9JamEsICAg
ICAgICAgIE9fSnosICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfYzF8UF9kZWY2NHxQX2RlcE18UF9v
c28gfSwKICAvKiA4OCAqLyAgeyBVRF9JanMsICAgICAgICAgIE9fSnosICAgIE9fTk9ORSwgIE9f
Tk9ORSwgIFBfYzF8UF9kZWY2NHxQX2RlcE18UF9vc28gfSwKICAvKiA4OSAqLyAgeyBVRF9Jam5z
LCAgICAgICAgIE9fSnosICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfYzF8UF9kZWY2NHxQX2RlcE18
UF9vc28gfSwKICAvKiA4QSAqLyAgeyBVRF9JanAsICAgICAgICAgIE9fSnosICAgIE9fTk9ORSwg
IE9fTk9ORSwgIFBfYzF8UF9kZWY2NHxQX2RlcE18UF9vc28gfSwKICAvKiA4QiAqLyAgeyBVRF9J
am5wLCAgICAgICAgIE9fSnosICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfYzF8UF9kZWY2NHxQX2Rl
cE18UF9vc28gfSwKICAvKiA4QyAqLyAgeyBVRF9JamwsICAgICAgICAgIE9fSnosICAgIE9fTk9O
RSwgIE9fTk9ORSwgIFBfYzF8UF9kZWY2NHxQX2RlcE18UF9vc28gfSwKICAvKiA4RCAqLyAgeyBV
RF9JamdlLCAgICAgICAgIE9fSnosICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfYzF8UF9kZWY2NHxQ
X2RlcE18UF9vc28gfSwKICAvKiA4RSAqLyAgeyBVRF9JamxlLCAgICAgICAgIE9fSnosICAgIE9f
Tk9ORSwgIE9fTk9ORSwgIFBfYzF8UF9kZWY2NHxQX2RlcE18UF9vc28gfSwKICAvKiA4RiAqLyAg
eyBVRF9JamcsICAgICAgICAgIE9fSnosICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfYzF8UF9kZWY2
NHxQX2RlcE18UF9vc28gfSwKICAvKiA5MCAqLyAgeyBVRF9Jc2V0bywgICAgICAgIE9fRWIsICAg
IE9fTk9ORSwgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogOTEg
Ki8gIHsgVURfSXNldG5vLCAgICAgICBPX0ViLCAgICBPX05PTkUsICBPX05PTkUsICBQX2Fzb3xQ
X3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDkyICovICB7IFVEX0lzZXRiLCAgICAgICAgT19F
YiwgICAgT19OT05FLCAgT19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAv
KiA5MyAqLyAgeyBVRF9Jc2V0bmIsICAgICAgIE9fRWIsICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBf
YXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogOTQgKi8gIHsgVURfSXNldHosICAgICAg
ICBPX0ViLCAgICBPX05PTkUsICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9
LAogIC8qIDk1ICovICB7IFVEX0lzZXRueiwgICAgICAgT19FYiwgICAgT19OT05FLCAgT19OT05F
LCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiA5NiAqLyAgeyBVRF9Jc2V0YmUs
ICAgICAgIE9fRWIsICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9y
ZXhiIH0sCiAgLyogOTcgKi8gIHsgVURfSXNldGEsICAgICAgICBPX0ViLCAgICBPX05PTkUsICBP
X05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDk4ICovICB7IFVEX0lz
ZXRzLCAgICAgICAgT19FYiwgICAgT19OT05FLCAgT19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4
eHxQX3JleGIgfSwKICAvKiA5OSAqLyAgeyBVRF9Jc2V0bnMsICAgICAgIE9fRWIsICAgIE9fTk9O
RSwgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogOUEgKi8gIHsg
VURfSXNldHAsICAgICAgICBPX0ViLCAgICBPX05PTkUsICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8
UF9yZXh4fFBfcmV4YiB9LAogIC8qIDlCICovICB7IFVEX0lzZXRucCwgICAgICAgT19FYiwgICAg
T19OT05FLCAgT19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiA5QyAq
LyAgeyBVRF9Jc2V0bCwgICAgICAgIE9fRWIsICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfYXNvfFBf
cmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogOUQgKi8gIHsgVURfSXNldGdlLCAgICAgICBPX0Vi
LCAgICBPX05PTkUsICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8q
IDlFICovICB7IFVEX0lzZXRsZSwgICAgICAgT19FYiwgICAgT19OT05FLCAgT19OT05FLCAgUF9h
c298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiA5RiAqLyAgeyBVRF9Jc2V0ZywgICAgICAg
IE9fRWIsICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0s
CiAgLyogQTAgKi8gIHsgVURfSXB1c2gsICAgICAgICBPX0ZTLCAgICBPX05PTkUsICBPX05PTkUs
ICBQX25vbmUgfSwKICAvKiBBMSAqLyAgeyBVRF9JcG9wLCAgICAgICAgIE9fRlMsICAgIE9fTk9O
RSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIEEyICovICB7IFVEX0ljcHVpZCwgICAgICAgT19O
T05FLCAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogQTMgKi8gIHsgVURfSWJ0LCAg
ICAgICAgICBPX0V2LCAgICBPX0d2LCAgICBPX05PTkUsICBQX2Fzb3xQX29zb3xQX3JleHd8UF9y
ZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiBBNCAqLyAgeyBVRF9Jc2hsZCwgICAgICAgIE9fRXYs
ICAgIE9fR3YsICAgIE9fSWIsICAgIFBfYXNvfFBfb3NvfFBfcmV4d3xQX3JleHJ8UF9yZXh4fFBf
cmV4YiB9LAogIC8qIEE1ICovICB7IFVEX0lzaGxkLCAgICAgICAgT19FdiwgICAgT19HdiwgICAg
T19DTCwgICAgUF9hc298UF9vc298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyog
QTYgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25v
bmUgfSwKICAvKiBBNyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05P
TkUsICAgIFBfbm9uZSB9LAogIC8qIEE4ICovICB7IFVEX0lwdXNoLCAgICAgICAgT19HUywgICAg
T19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogQTkgKi8gIHsgVURfSXBvcCwgICAgICAg
ICBPX0dTLCAgICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiBBQSAqLyAgeyBVRF9J
cnNtLCAgICAgICAgIE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIEFC
ICovICB7IFVEX0lidHMsICAgICAgICAgT19FdiwgICAgT19HdiwgICAgT19OT05FLCAgUF9hc298
UF9vc298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogQUMgKi8gIHsgVURfSXNo
cmQsICAgICAgICBPX0V2LCAgICBPX0d2LCAgICBPX0liLCAgICBQX2Fzb3xQX29zb3xQX3JleHd8
UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiBBRCAqLyAgeyBVRF9Jc2hyZCwgICAgICAgIE9f
RXYsICAgIE9fR3YsICAgIE9fQ0wsICAgIFBfYXNvfFBfb3NvfFBfcmV4d3xQX3JleHJ8UF9yZXh4
fFBfcmV4YiB9LAogIC8qIEFFICovICB7IFVEX0lncnBfcmVnLCAgICAgT19OT05FLCBPX05PTkUs
IE9fTk9ORSwgICAgSVRBQl9fMEZfX09QX0FFX19SRUcgfSwKICAvKiBBRiAqLyAgeyBVRF9JaW11
bCwgICAgICAgIE9fR3YsICAgIE9fRXYsICAgIE9fTk9ORSwgIFBfYXNvfFBfb3NvfFBfcmV4d3xQ
X3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIEIwICovICB7IFVEX0ljbXB4Y2hnLCAgICAgT19F
YiwgICAgT19HYiwgICAgT19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAv
KiBCMSAqLyAgeyBVRF9JY21weGNoZywgICAgIE9fRXYsICAgIE9fR3YsICAgIE9fTk9ORSwgIFBf
YXNvfFBfb3NvfFBfcmV4d3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIEIyICovICB7IFVE
X0lsc3MsICAgICAgICAgT19HeiwgICAgT19NLCAgICAgT19OT05FLCAgUF9hc298UF9vc298UF9y
ZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogQjMgKi8gIHsgVURfSWJ0ciwgICAgICAg
ICBPX0V2LCAgICBPX0d2LCAgICBPX05PTkUsICBQX2Fzb3xQX29zb3xQX3JleHd8UF9yZXhyfFBf
cmV4eHxQX3JleGIgfSwKICAvKiBCNCAqLyAgeyBVRF9JbGZzLCAgICAgICAgIE9fR3osICAgIE9f
TSwgICAgIE9fTk9ORSwgIFBfYXNvfFBfb3NvfFBfcmV4d3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9
LAogIC8qIEI1ICovICB7IFVEX0lsZ3MsICAgICAgICAgT19HeiwgICAgT19NLCAgICAgT19OT05F
LCAgUF9hc298UF9vc298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogQjYgKi8g
IHsgVURfSW1vdnp4LCAgICAgICBPX0d2LCAgICBPX0ViLCAgICBPX05PTkUsICBQX2MyfFBfYXNv
fFBfb3NvfFBfcmV4d3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIEI3ICovICB7IFVEX0lt
b3Z6eCwgICAgICAgT19HdiwgICAgT19FdywgICAgT19OT05FLCAgUF9jMnxQX2Fzb3xQX29zb3xQ
X3JleHd8UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiBCOCAqLyAgeyBVRF9JaW52YWxpZCwg
ICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEI5ICovICB7IFVE
X0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyog
QkEgKi8gIHsgVURfSWdycF9yZWcsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBJVEFC
X18wRl9fT1BfQkFfX1JFRyB9LAogIC8qIEJCICovICB7IFVEX0lidGMsICAgICAgICAgT19Fdiwg
ICAgT19HdiwgICAgT19OT05FLCAgUF9hc298UF9vc298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9y
ZXhiIH0sCiAgLyogQkMgKi8gIHsgVURfSWJzZiwgICAgICAgICBPX0d2LCAgICBPX0V2LCAgICBP
X05PTkUsICBQX2Fzb3xQX29zb3xQX3JleHd8UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiBC
RCAqLyAgeyBVRF9JYnNyLCAgICAgICAgIE9fR3YsICAgIE9fRXYsICAgIE9fTk9ORSwgIFBfYXNv
fFBfb3NvfFBfcmV4d3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIEJFICovICB7IFVEX0lt
b3ZzeCwgICAgICAgT19HdiwgICAgT19FYiwgICAgT19OT05FLCAgUF9jMnxQX2Fzb3xQX29zb3xQ
X3JleHd8UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiBCRiAqLyAgeyBVRF9JbW92c3gsICAg
ICAgIE9fR3YsICAgIE9fRXcsICAgIE9fTk9ORSwgIFBfYzJ8UF9hc298UF9vc298UF9yZXh3fFBf
cmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogQzAgKi8gIHsgVURfSXhhZGQsICAgICAgICBPX0Vi
LCAgICBPX0diLCAgICBPX05PTkUsICBQX2Fzb3xQX29zb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9
LAogIC8qIEMxICovICB7IFVEX0l4YWRkLCAgICAgICAgT19FdiwgICAgT19HdiwgICAgT19OT05F
LCAgUF9hc298UF9vc298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogQzIgKi8g
IHsgVURfSWNtcHBzLCAgICAgICBPX1YsICAgICBPX1csICAgICBPX0liLCAgICBQX2Fzb3xQX3Jl
eHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIEMzICovICB7IFVEX0ltb3ZudGksICAgICAgT19NLCAg
ICAgT19HdncsICAgT19OT05FLCAgUF9hc298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0s
CiAgLyogQzQgKi8gIHsgVURfSXBpbnNydywgICAgICBPX1AsICAgICBPX0V3LCAgICBPX0liLCAg
ICBQX2Fzb3xQX29zb3xQX3JleHd8UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiBDNSAqLyAg
eyBVRF9JcGV4dHJ3LCAgICAgIE9fR2QsICAgIE9fUFIsICAgIE9fSWIsICAgIFBfYXNvfFBfb3Nv
fFBfcmV4d3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIEM2ICovICB7IFVEX0lzaHVmcHMs
ICAgICAgT19WLCAgICAgT19XLCAgICAgT19JYiwgICAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3Jl
eGIgfSwKICAvKiBDNyAqLyAgeyBVRF9JZ3JwX3JlZywgICAgIE9fTk9ORSwgT19OT05FLCBPX05P
TkUsICAgIElUQUJfXzBGX19PUF9DN19fUkVHIH0sCiAgLyogQzggKi8gIHsgVURfSWJzd2FwLCAg
ICAgICBPX3JBWHI4LCBPX05PTkUsICBPX05PTkUsICBQX29zb3xQX3JleHd8UF9yZXhiIH0sCiAg
LyogQzkgKi8gIHsgVURfSWJzd2FwLCAgICAgICBPX3JDWHI5LCBPX05PTkUsICBPX05PTkUsICBQ
X29zb3xQX3JleHd8UF9yZXhiIH0sCiAgLyogQ0EgKi8gIHsgVURfSWJzd2FwLCAgICAgICBPX3JE
WHIxMCwgT19OT05FLCAgT19OT05FLCBQX29zb3xQX3JleHd8UF9yZXhiIH0sCiAgLyogQ0IgKi8g
IHsgVURfSWJzd2FwLCAgICAgICBPX3JCWHIxMSwgT19OT05FLCAgT19OT05FLCBQX29zb3xQX3Jl
eHd8UF9yZXhiIH0sCiAgLyogQ0MgKi8gIHsgVURfSWJzd2FwLCAgICAgICBPX3JTUHIxMiwgT19O
T05FLCAgT19OT05FLCBQX29zb3xQX3JleHd8UF9yZXhiIH0sCiAgLyogQ0QgKi8gIHsgVURfSWJz
d2FwLCAgICAgICBPX3JCUHIxMywgT19OT05FLCAgT19OT05FLCBQX29zb3xQX3JleHd8UF9yZXhi
IH0sCiAgLyogQ0UgKi8gIHsgVURfSWJzd2FwLCAgICAgICBPX3JTSXIxNCwgT19OT05FLCAgT19O
T05FLCBQX29zb3xQX3JleHd8UF9yZXhiIH0sCiAgLyogQ0YgKi8gIHsgVURfSWJzd2FwLCAgICAg
ICBPX3JESXIxNSwgT19OT05FLCAgT19OT05FLCBQX29zb3xQX3JleHd8UF9yZXhiIH0sCiAgLyog
RDAgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25v
bmUgfSwKICAvKiBEMSAqLyAgeyBVRF9JcHNybHcsICAgICAgIE9fUCwgICAgIE9fUSwgICAgIE9f
Tk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogRDIgKi8gIHsgVURfSXBz
cmxkLCAgICAgICBPX1AsICAgICBPX1EsICAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4
fFBfcmV4YiB9LAogIC8qIEQzICovICB7IFVEX0lwc3JscSwgICAgICAgT19QLCAgICAgT19RLCAg
ICAgT19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiBENCAqLyAgeyBV
RF9JcGFkZHEsICAgICAgIE9fUCwgICAgIE9fUSwgICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQ
X3JleHh8UF9yZXhiIH0sCiAgLyogRDUgKi8gIHsgVURfSXBtdWxsdywgICAgICBPX1AsICAgICBP
X1EsICAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIEQ2ICov
ICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0s
CiAgLyogRDcgKi8gIHsgVURfSXBtb3Ztc2tiLCAgICBPX0dkLCAgICBPX1BSLCAgICBPX05PTkUs
ICBQX29zb3xQX3JleHJ8UF9yZXhiIH0sCiAgLyogRDggKi8gIHsgVURfSXBzdWJ1c2IsICAgICBP
X1AsICAgICBPX1EsICAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAog
IC8qIEQ5ICovICB7IFVEX0lwc3VidXN3LCAgICAgT19QLCAgICAgT19RLCAgICAgT19OT05FLCAg
UF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiBEQSAqLyAgeyBVRF9JcG1pbnViLCAg
ICAgIE9fUCwgICAgIE9fUSwgICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhi
IH0sCiAgLyogREIgKi8gIHsgVURfSXBhbmQsICAgICAgICBPX1AsICAgICBPX1EsICAgICBPX05P
TkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIERDICovICB7IFVEX0lwYWRk
dXNiLCAgICAgT19QLCAgICAgT19RLCAgICAgT19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQ
X3JleGIgfSwKICAvKiBERCAqLyAgeyBVRF9JcGFkZHVzdywgICAgIE9fUCwgICAgIE9fUSwgICAg
IE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogREUgKi8gIHsgVURf
SXBtYXh1YiwgICAgICBPX1AsICAgICBPX1EsICAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9y
ZXh4fFBfcmV4YiB9LAogIC8qIERGICovICB7IFVEX0lwYW5kbiwgICAgICAgT19QLCAgICAgT19R
LCAgICAgT19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiBFMCAqLyAg
eyBVRF9JcGF2Z2IsICAgICAgIE9fUCwgICAgIE9fUSwgICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4
cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogRTEgKi8gIHsgVURfSXBzcmF3LCAgICAgICBPX1AsICAg
ICBPX1EsICAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIEUy
ICovICB7IFVEX0lwc3JhZCwgICAgICAgT19QLCAgICAgT19RLCAgICAgT19OT05FLCAgUF9hc298
UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiBFMyAqLyAgeyBVRF9JcGF2Z3csICAgICAgIE9f
UCwgICAgIE9fUSwgICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAg
LyogRTQgKi8gIHsgVURfSXBtdWxodXcsICAgICBPX1AsICAgICBPX1EsICAgICBPX05PTkUsICBQ
X2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIEU1ICovICB7IFVEX0lwbXVsaHcsICAg
ICAgT19QLCAgICAgT19RLCAgICAgT19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIg
fSwKICAvKiBFNiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUs
ICAgIFBfbm9uZSB9LAogIC8qIEU3ICovICB7IFVEX0ltb3ZudHEsICAgICAgT19NLCAgICAgT19Q
LCAgICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogRTggKi8gIHsgVURfSXBzdWJzYiwgICAgICBP
X1AsICAgICBPX1EsICAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAog
IC8qIEU5ICovICB7IFVEX0lwc3Vic3csICAgICAgT19QLCAgICAgT19RLCAgICAgT19OT05FLCAg
UF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiBFQSAqLyAgeyBVRF9JcG1pbnN3LCAg
ICAgIE9fUCwgICAgIE9fUSwgICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhi
IH0sCiAgLyogRUIgKi8gIHsgVURfSXBvciwgICAgICAgICBPX1AsICAgICBPX1EsICAgICBPX05P
TkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIEVDICovICB7IFVEX0lwYWRk
c2IsICAgICAgT19QLCAgICAgT19RLCAgICAgT19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQ
X3JleGIgfSwKICAvKiBFRCAqLyAgeyBVRF9JcGFkZHN3LCAgICAgIE9fUCwgICAgIE9fUSwgICAg
IE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogRUUgKi8gIHsgVURf
SXBtYXhzdywgICAgICBPX1AsICAgICBPX1EsICAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9y
ZXh4fFBfcmV4YiB9LAogIC8qIEVGICovICB7IFVEX0lweG9yLCAgICAgICAgT19QLCAgICAgT19R
LCAgICAgT19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiBGMCAqLyAg
eyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAog
IC8qIEYxICovICB7IFVEX0lwc2xsdywgICAgICAgT19QLCAgICAgT19RLCAgICAgT19OT05FLCAg
UF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiBGMiAqLyAgeyBVRF9JcHNsbGQsICAg
ICAgIE9fUCwgICAgIE9fUSwgICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhi
IH0sCiAgLyogRjMgKi8gIHsgVURfSXBzbGxxLCAgICAgICBPX1AsICAgICBPX1EsICAgICBPX05P
TkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIEY0ICovICB7IFVEX0lwbXVs
dWRxLCAgICAgT19QLCAgICAgT19RLCAgICAgT19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQ
X3JleGIgfSwKICAvKiBGNSAqLyAgeyBVRF9JcG1hZGR3ZCwgICAgIE9fUCwgICAgIE9fUSwgICAg
IE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogRjYgKi8gIHsgVURf
SXBzYWRidywgICAgICBPX1AsICAgICBPX1EsICAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9y
ZXh4fFBfcmV4YiB9LAogIC8qIEY3ICovICB7IFVEX0ltYXNrbW92cSwgICAgT19QLCAgICAgT19R
LCAgICAgT19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiBGOCAqLyAg
eyBVRF9JcHN1YmIsICAgICAgIE9fUCwgICAgIE9fUSwgICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4
cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogRjkgKi8gIHsgVURfSXBzdWJ3LCAgICAgICBPX1AsICAg
ICBPX1EsICAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIEZB
ICovICB7IFVEX0lwc3ViZCwgICAgICAgT19QLCAgICAgT19RLCAgICAgT19OT05FLCAgUF9hc298
UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiBGQiAqLyAgeyBVRF9JcHN1YnEsICAgICAgIE9f
UCwgICAgIE9fUSwgICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAg
LyogRkMgKi8gIHsgVURfSXBhZGRiLCAgICAgICBPX1AsICAgICBPX1EsICAgICBPX05PTkUsICBQ
X2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIEZEICovICB7IFVEX0lwYWRkdywgICAg
ICAgT19QLCAgICAgT19RLCAgICAgT19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIg
fSwKICAvKiBGRSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUs
ICAgIFBfbm9uZSB9LAogIC8qIEZGICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05P
TkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCn07CgpzdGF0aWMgc3RydWN0IHVkX2l0YWJfZW50cnkg
aXRhYl9fMGZfX29wXzAwX19yZWdbOF0gPSB7CiAgLyogMDAgKi8gIHsgVURfSXNsZHQsICAgICAg
ICBPX0V2LCAgICBPX05PTkUsICBPX05PTkUsICBQX2Fzb3xQX29zb3xQX3JleHJ8UF9yZXh4fFBf
cmV4YiB9LAogIC8qIDAxICovICB7IFVEX0lzdHIsICAgICAgICAgT19FdiwgICAgT19OT05FLCAg
T19OT05FLCAgUF9hc298UF9vc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAwMiAqLyAg
eyBVRF9JbGxkdCwgICAgICAgIE9fRXcsICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfYXNvfFBfcmV4
cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDMgKi8gIHsgVURfSWx0ciwgICAgICAgICBPX0V3LCAg
ICBPX05PTkUsICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDA0
ICovICB7IFVEX0l2ZXJyLCAgICAgICAgT19FdywgICAgT19OT05FLCAgT19OT05FLCAgUF9hc298
UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAwNSAqLyAgeyBVRF9JdmVydywgICAgICAgIE9f
RXcsICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAg
LyogMDYgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQ
X25vbmUgfSwKICAvKiAwNyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBP
X05PTkUsICAgIFBfbm9uZSB9LAp9OwoKc3RhdGljIHN0cnVjdCB1ZF9pdGFiX2VudHJ5IGl0YWJf
XzBmX19vcF8wMV9fcmVnWzhdID0gewogIC8qIDAwICovICB7IFVEX0lncnBfbW9kLCAgICAgT19O
T05FLCBPX05PTkUsIE9fTk9ORSwgICAgSVRBQl9fMEZfX09QXzAxX19SRUdfX09QXzAwX19NT0Qg
fSwKICAvKiAwMSAqLyAgeyBVRF9JZ3JwX21vZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUs
ICAgIElUQUJfXzBGX19PUF8wMV9fUkVHX19PUF8wMV9fTU9EIH0sCiAgLyogMDIgKi8gIHsgVURf
SWdycF9tb2QsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBJVEFCX18wRl9fT1BfMDFf
X1JFR19fT1BfMDJfX01PRCB9LAogIC8qIDAzICovICB7IFVEX0lncnBfbW9kLCAgICAgT19OT05F
LCBPX05PTkUsIE9fTk9ORSwgICAgSVRBQl9fMEZfX09QXzAxX19SRUdfX09QXzAzX19NT0QgfSwK
ICAvKiAwNCAqLyAgeyBVRF9JZ3JwX21vZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAg
IElUQUJfXzBGX19PUF8wMV9fUkVHX19PUF8wNF9fTU9EIH0sCiAgLyogMDUgKi8gIHsgVURfSWlu
dmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAwNiAq
LyAgeyBVRF9JZ3JwX21vZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIElUQUJfXzBG
X19PUF8wMV9fUkVHX19PUF8wNl9fTU9EIH0sCiAgLyogMDcgKi8gIHsgVURfSWdycF9tb2QsICAg
ICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBJVEFCX18wRl9fT1BfMDFfX1JFR19fT1BfMDdf
X01PRCB9LAp9OwoKc3RhdGljIHN0cnVjdCB1ZF9pdGFiX2VudHJ5IGl0YWJfXzBmX19vcF8wMV9f
cmVnX19vcF8wMF9fbW9kWzJdID0gewogIC8qIDAwICovICB7IFVEX0lzZ2R0LCAgICAgICAgT19N
LCAgICAgT19OT05FLCAgT19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAv
KiAwMSAqLyAgeyBVRF9JZ3JwX3JtLCAgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIElU
QUJfXzBGX19PUF8wMV9fUkVHX19PUF8wMF9fTU9EX19PUF8wMV9fUk0gfSwKfTsKCnN0YXRpYyBz
dHJ1Y3QgdWRfaXRhYl9lbnRyeSBpdGFiX18wZl9fb3BfMDFfX3JlZ19fb3BfMDBfX21vZF9fb3Bf
MDFfX3JtWzhdID0gewogIC8qIDAwICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05P
TkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMDEgKi8gIHsgVURfSWdycF92ZW5kb3IsICBP
X05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBJVEFCX18wRl9fT1BfMDFfX1JFR19fT1BfMDBfX01P
RF9fT1BfMDFfX1JNX19PUF8wMV9fVkVORE9SIH0sCiAgLyogMDIgKi8gIHsgVURfSWludmFsaWQs
ICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAwMyAqLyAgeyBV
RF9JZ3JwX3ZlbmRvciwgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIElUQUJfXzBGX19PUF8w
MV9fUkVHX19PUF8wMF9fTU9EX19PUF8wMV9fUk1fX09QXzAzX19WRU5ET1IgfSwKICAvKiAwNCAq
LyAgeyBVRF9JZ3JwX3ZlbmRvciwgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIElUQUJfXzBG
X19PUF8wMV9fUkVHX19PUF8wMF9fTU9EX19PUF8wMV9fUk1fX09QXzA0X19WRU5ET1IgfSwKICAv
KiAwNSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBf
bm9uZSB9LAogIC8qIDA2ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9f
Tk9ORSwgICAgUF9ub25lIH0sCiAgLyogMDcgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUs
IE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKfTsKCnN0YXRpYyBzdHJ1Y3QgdWRfaXRhYl9l
bnRyeSBpdGFiX18wZl9fb3BfMDFfX3JlZ19fb3BfMDBfX21vZF9fb3BfMDFfX3JtX19vcF8wMV9f
dmVuZG9yWzJdID0gewogIC8qIDAwICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05P
TkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMDEgKi8gIHsgVURfSXZtY2FsbCwgICAgICBP
X05PTkUsICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKfTsKCnN0YXRpYyBzdHJ1Y3QgdWRf
aXRhYl9lbnRyeSBpdGFiX18wZl9fb3BfMDFfX3JlZ19fb3BfMDBfX21vZF9fb3BfMDFfX3JtX19v
cF8wM19fdmVuZG9yWzJdID0gewogIC8qIDAwICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05F
LCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMDEgKi8gIHsgVURfSXZtcmVzdW1l
LCAgICBPX05PTkUsICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKfTsKCnN0YXRpYyBzdHJ1
Y3QgdWRfaXRhYl9lbnRyeSBpdGFiX18wZl9fb3BfMDFfX3JlZ19fb3BfMDBfX21vZF9fb3BfMDFf
X3JtX19vcF8wNF9fdmVuZG9yWzJdID0gewogIC8qIDAwICovICB7IFVEX0lpbnZhbGlkLCAgICAg
T19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMDEgKi8gIHsgVURfSXZt
eG9mZiwgICAgICBPX05PTkUsICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKfTsKCnN0YXRp
YyBzdHJ1Y3QgdWRfaXRhYl9lbnRyeSBpdGFiX18wZl9fb3BfMDFfX3JlZ19fb3BfMDFfX21vZFsy
XSA9IHsKICAvKiAwMCAqLyAgeyBVRF9Jc2lkdCwgICAgICAgIE9fTSwgICAgIE9fTk9ORSwgIE9f
Tk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDEgKi8gIHsgVURfSWdy
cF9ybSwgICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBJVEFCX18wRl9fT1BfMDFfX1JF
R19fT1BfMDFfX01PRF9fT1BfMDFfX1JNIH0sCn07CgpzdGF0aWMgc3RydWN0IHVkX2l0YWJfZW50
cnkgaXRhYl9fMGZfX29wXzAxX19yZWdfX29wXzAxX19tb2RfX29wXzAxX19ybVs4XSA9IHsKICAv
KiAwMCAqLyAgeyBVRF9JbW9uaXRvciwgICAgIE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBf
bm9uZSB9LAogIC8qIDAxICovICB7IFVEX0ltd2FpdCwgICAgICAgT19OT05FLCAgT19OT05FLCAg
T19OT05FLCAgUF9ub25lIH0sCiAgLyogMDIgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUs
IE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAwMyAqLyAgeyBVRF9JaW52YWxpZCwg
ICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDA0ICovICB7IFVE
X0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyog
MDUgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25v
bmUgfSwKICAvKiAwNiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05P
TkUsICAgIFBfbm9uZSB9LAogIC8qIDA3ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBP
X05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCn07CgpzdGF0aWMgc3RydWN0IHVkX2l0YWJfZW50
cnkgaXRhYl9fMGZfX29wXzAxX19yZWdfX29wXzAyX19tb2RbMl0gPSB7CiAgLyogMDAgKi8gIHsg
VURfSWxnZHQsICAgICAgICBPX00sICAgICBPX05PTkUsICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8
UF9yZXh4fFBfcmV4YiB9LAogIC8qIDAxICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBP
X05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCn07CgpzdGF0aWMgc3RydWN0IHVkX2l0YWJfZW50
cnkgaXRhYl9fMGZfX29wXzAxX19yZWdfX29wXzAzX19tb2RbMl0gPSB7CiAgLyogMDAgKi8gIHsg
VURfSWxpZHQsICAgICAgICBPX00sICAgICBPX05PTkUsICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8
UF9yZXh4fFBfcmV4YiB9LAogIC8qIDAxICovICB7IFVEX0lncnBfcm0sICAgICAgT19OT05FLCBP
X05PTkUsIE9fTk9ORSwgICAgSVRBQl9fMEZfX09QXzAxX19SRUdfX09QXzAzX19NT0RfX09QXzAx
X19STSB9LAp9OwoKc3RhdGljIHN0cnVjdCB1ZF9pdGFiX2VudHJ5IGl0YWJfXzBmX19vcF8wMV9f
cmVnX19vcF8wM19fbW9kX19vcF8wMV9fcm1bOF0gPSB7CiAgLyogMDAgKi8gIHsgVURfSWdycF92
ZW5kb3IsICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBJVEFCX18wRl9fT1BfMDFfX1JFR19f
T1BfMDNfX01PRF9fT1BfMDFfX1JNX19PUF8wMF9fVkVORE9SIH0sCiAgLyogMDEgKi8gIHsgVURf
SWdycF92ZW5kb3IsICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBJVEFCX18wRl9fT1BfMDFf
X1JFR19fT1BfMDNfX01PRF9fT1BfMDFfX1JNX19PUF8wMV9fVkVORE9SIH0sCiAgLyogMDIgKi8g
IHsgVURfSWdycF92ZW5kb3IsICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBJVEFCX18wRl9f
T1BfMDFfX1JFR19fT1BfMDNfX01PRF9fT1BfMDFfX1JNX19PUF8wMl9fVkVORE9SIH0sCiAgLyog
MDMgKi8gIHsgVURfSWdycF92ZW5kb3IsICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBJVEFC
X18wRl9fT1BfMDFfX1JFR19fT1BfMDNfX01PRF9fT1BfMDFfX1JNX19PUF8wM19fVkVORE9SIH0s
CiAgLyogMDQgKi8gIHsgVURfSWdycF92ZW5kb3IsICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAg
ICBJVEFCX18wRl9fT1BfMDFfX1JFR19fT1BfMDNfX01PRF9fT1BfMDFfX1JNX19PUF8wNF9fVkVO
RE9SIH0sCiAgLyogMDUgKi8gIHsgVURfSWdycF92ZW5kb3IsICBPX05PTkUsIE9fTk9ORSwgT19O
T05FLCAgICBJVEFCX18wRl9fT1BfMDFfX1JFR19fT1BfMDNfX01PRF9fT1BfMDFfX1JNX19PUF8w
NV9fVkVORE9SIH0sCiAgLyogMDYgKi8gIHsgVURfSWdycF92ZW5kb3IsICBPX05PTkUsIE9fTk9O
RSwgT19OT05FLCAgICBJVEFCX18wRl9fT1BfMDFfX1JFR19fT1BfMDNfX01PRF9fT1BfMDFfX1JN
X19PUF8wNl9fVkVORE9SIH0sCiAgLyogMDcgKi8gIHsgVURfSWdycF92ZW5kb3IsICBPX05PTkUs
IE9fTk9ORSwgT19OT05FLCAgICBJVEFCX18wRl9fT1BfMDFfX1JFR19fT1BfMDNfX01PRF9fT1Bf
MDFfX1JNX19PUF8wN19fVkVORE9SIH0sCn07CgpzdGF0aWMgc3RydWN0IHVkX2l0YWJfZW50cnkg
aXRhYl9fMGZfX29wXzAxX19yZWdfX29wXzAzX19tb2RfX29wXzAxX19ybV9fb3BfMDBfX3ZlbmRv
clsyXSA9IHsKICAvKiAwMCAqLyAgeyBVRF9Jdm1ydW4sICAgICAgIE9fTk9ORSwgIE9fTk9ORSwg
IE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDAxICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05F
LCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCn07CgpzdGF0aWMgc3RydWN0IHVkX2l0YWJf
ZW50cnkgaXRhYl9fMGZfX29wXzAxX19yZWdfX29wXzAzX19tb2RfX29wXzAxX19ybV9fb3BfMDFf
X3ZlbmRvclsyXSA9IHsKICAvKiAwMCAqLyAgeyBVRF9Jdm1tY2FsbCwgICAgIE9fTk9ORSwgIE9f
Tk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDAxICovICB7IFVEX0lpbnZhbGlkLCAgICAg
T19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCn07CgpzdGF0aWMgc3RydWN0IHVk
X2l0YWJfZW50cnkgaXRhYl9fMGZfX29wXzAxX19yZWdfX29wXzAzX19tb2RfX29wXzAxX19ybV9f
b3BfMDJfX3ZlbmRvclsyXSA9IHsKICAvKiAwMCAqLyAgeyBVRF9Jdm1sb2FkLCAgICAgIE9fTk9O
RSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDAxICovICB7IFVEX0lpbnZhbGlk
LCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCn07CgpzdGF0aWMgc3Ry
dWN0IHVkX2l0YWJfZW50cnkgaXRhYl9fMGZfX29wXzAxX19yZWdfX29wXzAzX19tb2RfX29wXzAx
X19ybV9fb3BfMDNfX3ZlbmRvclsyXSA9IHsKICAvKiAwMCAqLyAgeyBVRF9Jdm1zYXZlLCAgICAg
IE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDAxICovICB7IFVEX0lp
bnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCn07CgpzdGF0
aWMgc3RydWN0IHVkX2l0YWJfZW50cnkgaXRhYl9fMGZfX29wXzAxX19yZWdfX29wXzAzX19tb2Rf
X29wXzAxX19ybV9fb3BfMDRfX3ZlbmRvclsyXSA9IHsKICAvKiAwMCAqLyAgeyBVRF9Jc3RnaSwg
ICAgICAgIE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDAxICovICB7
IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCn07
CgpzdGF0aWMgc3RydWN0IHVkX2l0YWJfZW50cnkgaXRhYl9fMGZfX29wXzAxX19yZWdfX29wXzAz
X19tb2RfX29wXzAxX19ybV9fb3BfMDVfX3ZlbmRvclsyXSA9IHsKICAvKiAwMCAqLyAgeyBVRF9J
Y2xnaSwgICAgICAgIE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDAx
ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25l
IH0sCn07CgpzdGF0aWMgc3RydWN0IHVkX2l0YWJfZW50cnkgaXRhYl9fMGZfX29wXzAxX19yZWdf
X29wXzAzX19tb2RfX29wXzAxX19ybV9fb3BfMDZfX3ZlbmRvclsyXSA9IHsKICAvKiAwMCAqLyAg
eyBVRF9Jc2tpbml0LCAgICAgIE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAog
IC8qIDAxICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAg
UF9ub25lIH0sCn07CgpzdGF0aWMgc3RydWN0IHVkX2l0YWJfZW50cnkgaXRhYl9fMGZfX29wXzAx
X19yZWdfX29wXzAzX19tb2RfX29wXzAxX19ybV9fb3BfMDdfX3ZlbmRvclsyXSA9IHsKICAvKiAw
MCAqLyAgeyBVRF9JaW52bHBnYSwgICAgIE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9u
ZSB9LAogIC8qIDAxICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9O
RSwgICAgUF9ub25lIH0sCn07CgpzdGF0aWMgc3RydWN0IHVkX2l0YWJfZW50cnkgaXRhYl9fMGZf
X29wXzAxX19yZWdfX29wXzA0X19tb2RbMl0gPSB7CiAgLyogMDAgKi8gIHsgVURfSXNtc3csICAg
ICAgICBPX00sICAgICBPX05PTkUsICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4
YiB9LAogIC8qIDAxICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9O
RSwgICAgUF9ub25lIH0sCn07CgpzdGF0aWMgc3RydWN0IHVkX2l0YWJfZW50cnkgaXRhYl9fMGZf
X29wXzAxX19yZWdfX29wXzA2X19tb2RbMl0gPSB7CiAgLyogMDAgKi8gIHsgVURfSWxtc3csICAg
ICAgICBPX0V3LCAgICBPX05PTkUsICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4
YiB9LAogIC8qIDAxICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9O
RSwgICAgUF9ub25lIH0sCn07CgpzdGF0aWMgc3RydWN0IHVkX2l0YWJfZW50cnkgaXRhYl9fMGZf
X29wXzAxX19yZWdfX29wXzA3X19tb2RbMl0gPSB7CiAgLyogMDAgKi8gIHsgVURfSWludmxwZywg
ICAgICBPX00sICAgICBPX05PTkUsICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4
YiB9LAogIC8qIDAxICovICB7IFVEX0lncnBfcm0sICAgICAgT19OT05FLCBPX05PTkUsIE9fTk9O
RSwgICAgSVRBQl9fMEZfX09QXzAxX19SRUdfX09QXzA3X19NT0RfX09QXzAxX19STSB9LAp9OwoK
c3RhdGljIHN0cnVjdCB1ZF9pdGFiX2VudHJ5IGl0YWJfXzBmX19vcF8wMV9fcmVnX19vcF8wN19f
bW9kX19vcF8wMV9fcm1bOF0gPSB7CiAgLyogMDAgKi8gIHsgVURfSXN3YXBncywgICAgICBPX05P
TkUsICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAwMSAqLyAgeyBVRF9JZ3JwX3Zl
bmRvciwgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIElUQUJfXzBGX19PUF8wMV9fUkVHX19P
UF8wN19fTU9EX19PUF8wMV9fUk1fX09QXzAxX19WRU5ET1IgfSwKICAvKiAwMiAqLyAgeyBVRF9J
aW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDAz
ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25l
IH0sCiAgLyogMDQgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05F
LCAgICBQX25vbmUgfSwKICAvKiAwNSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19O
T05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDA2ICovICB7IFVEX0lpbnZhbGlkLCAgICAg
T19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMDcgKi8gIHsgVURfSWlu
dmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKfTsKCnN0YXRp
YyBzdHJ1Y3QgdWRfaXRhYl9lbnRyeSBpdGFiX18wZl9fb3BfMDFfX3JlZ19fb3BfMDdfX21vZF9f
b3BfMDFfX3JtX19vcF8wMV9fdmVuZG9yWzJdID0gewogIC8qIDAwICovICB7IFVEX0lyZHRzY3As
ICAgICAgT19OT05FLCAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMDEgKi8gIHsg
VURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKfTsK
CnN0YXRpYyBzdHJ1Y3QgdWRfaXRhYl9lbnRyeSBpdGFiX18wZl9fb3BfMGRfX3JlZ1s4XSA9IHsK
ICAvKiAwMCAqLyAgeyBVRF9JcHJlZmV0Y2gsICAgIE9fTSwgICAgIE9fTk9ORSwgIE9fTk9ORSwg
IFBfYXNvfFBfcmV4d3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDAxICovICB7IFVEX0lw
cmVmZXRjaCwgICAgT19NLCAgICAgT19OT05FLCAgT19OT05FLCAgUF9hc298UF9yZXh3fFBfcmV4
cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDIgKi8gIHsgVURfSXByZWZldGNoLCAgICBPX00sICAg
ICBPX05PTkUsICBPX05PTkUsICBQX2Fzb3xQX3JleHd8UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwK
ICAvKiAwMyAqLyAgeyBVRF9JcHJlZmV0Y2gsICAgIE9fTSwgICAgIE9fTk9ORSwgIE9fTk9ORSwg
IFBfYXNvfFBfcmV4d3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDA0ICovICB7IFVEX0lw
cmVmZXRjaCwgICAgT19NLCAgICAgT19OT05FLCAgT19OT05FLCAgUF9hc298UF9yZXh3fFBfcmV4
cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDUgKi8gIHsgVURfSXByZWZldGNoLCAgICBPX00sICAg
ICBPX05PTkUsICBPX05PTkUsICBQX2Fzb3xQX3JleHd8UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwK
ICAvKiAwNiAqLyAgeyBVRF9JcHJlZmV0Y2gsICAgIE9fTSwgICAgIE9fTk9ORSwgIE9fTk9ORSwg
IFBfYXNvfFBfcmV4d3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDA3ICovICB7IFVEX0lw
cmVmZXRjaCwgICAgT19NLCAgICAgT19OT05FLCAgT19OT05FLCAgUF9hc298UF9yZXh3fFBfcmV4
cnxQX3JleHh8UF9yZXhiIH0sCn07CgpzdGF0aWMgc3RydWN0IHVkX2l0YWJfZW50cnkgaXRhYl9f
MGZfX29wXzE4X19yZWdbOF0gPSB7CiAgLyogMDAgKi8gIHsgVURfSXByZWZldGNobnRhLCBPX00s
ICAgICBPX05PTkUsICBPX05PTkUsICBQX2Fzb3xQX3JleHd8UF9yZXhyfFBfcmV4eHxQX3JleGIg
fSwKICAvKiAwMSAqLyAgeyBVRF9JcHJlZmV0Y2h0MCwgIE9fTSwgICAgIE9fTk9ORSwgIE9fTk9O
RSwgIFBfYXNvfFBfcmV4d3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDAyICovICB7IFVE
X0lwcmVmZXRjaHQxLCAgT19NLCAgICAgT19OT05FLCAgT19OT05FLCAgUF9hc298UF9yZXh3fFBf
cmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDMgKi8gIHsgVURfSXByZWZldGNodDIsICBPX00s
ICAgICBPX05PTkUsICBPX05PTkUsICBQX2Fzb3xQX3JleHd8UF9yZXhyfFBfcmV4eHxQX3JleGIg
fSwKICAvKiAwNCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUs
ICAgIFBfbm9uZSB9LAogIC8qIDA1ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05P
TkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMDYgKi8gIHsgVURfSWludmFsaWQsICAgICBP
X05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAwNyAqLyAgeyBVRF9JaW52
YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAp9OwoKc3RhdGlj
IHN0cnVjdCB1ZF9pdGFiX2VudHJ5IGl0YWJfXzBmX19vcF83MV9fcmVnWzhdID0gewogIC8qIDAw
ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25l
IH0sCiAgLyogMDEgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05F
LCAgICBQX25vbmUgfSwKICAvKiAwMiAqLyAgeyBVRF9JcHNybHcsICAgICAgIE9fUFIsICAgIE9f
SWIsICAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDAzICovICB7IFVEX0lpbnZhbGlkLCAgICAg
T19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMDQgKi8gIHsgVURfSXBz
cmF3LCAgICAgICBPX1BSLCAgICBPX0liLCAgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAwNSAq
LyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9
LAogIC8qIDA2ICovICB7IFVEX0lwc2xsdywgICAgICAgT19QUiwgICAgT19JYiwgICAgT19OT05F
LCAgUF9ub25lIH0sCiAgLyogMDcgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9O
RSwgT19OT05FLCAgICBQX25vbmUgfSwKfTsKCnN0YXRpYyBzdHJ1Y3QgdWRfaXRhYl9lbnRyeSBp
dGFiX18wZl9fb3BfNzJfX3JlZ1s4XSA9IHsKICAvKiAwMCAqLyAgeyBVRF9JaW52YWxpZCwgICAg
IE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDAxICovICB7IFVEX0lp
bnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMDIg
Ki8gIHsgVURfSXBzcmxkLCAgICAgICBPX1BSLCAgICBPX0liLCAgICBPX05PTkUsICBQX25vbmUg
fSwKICAvKiAwMyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUs
ICAgIFBfbm9uZSB9LAogIC8qIDA0ICovICB7IFVEX0lwc3JhZCwgICAgICAgT19QUiwgICAgT19J
YiwgICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMDUgKi8gIHsgVURfSWludmFsaWQsICAgICBP
X05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAwNiAqLyAgeyBVRF9JcHNs
bGQsICAgICAgIE9fUFIsICAgIE9fSWIsICAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDA3ICov
ICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0s
Cn07CgpzdGF0aWMgc3RydWN0IHVkX2l0YWJfZW50cnkgaXRhYl9fMGZfX29wXzczX19yZWdbOF0g
PSB7CiAgLyogMDAgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05F
LCAgICBQX25vbmUgfSwKICAvKiAwMSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19O
T05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDAyICovICB7IFVEX0lwc3JscSwgICAgICAg
T19QUiwgICAgT19JYiwgICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMDMgKi8gIHsgVURfSWlu
dmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAwNCAq
LyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9
LAogIC8qIDA1ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwg
ICAgUF9ub25lIH0sCiAgLyogMDYgKi8gIHsgVURfSXBzbGxxLCAgICAgICBPX1BSLCAgICBPX0li
LCAgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAwNyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9f
Tk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAp9OwoKc3RhdGljIHN0cnVjdCB1ZF9p
dGFiX2VudHJ5IGl0YWJfXzBmX19vcF9hZV9fcmVnWzhdID0gewogIC8qIDAwICovICB7IFVEX0lp
bnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMDEg
Ki8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUg
fSwKICAvKiAwMiAqLyAgeyBVRF9JbGRteGNzciwgICAgIE9fTWQsICAgIE9fTk9ORSwgIE9fTk9O
RSwgIFBfYXNvfFBfcmV4d3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDAzICovICB7IFVE
X0lzdG14Y3NyLCAgICAgT19NZCwgICAgT19OT05FLCAgT19OT05FLCAgUF9hc298UF9yZXh3fFBf
cmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDQgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05P
TkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAwNSAqLyAgeyBVRF9JZ3JwX21v
ZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIElUQUJfXzBGX19PUF9BRV9fUkVHX19P
UF8wNV9fTU9EIH0sCiAgLyogMDYgKi8gIHsgVURfSWdycF9tb2QsICAgICBPX05PTkUsIE9fTk9O
RSwgT19OT05FLCAgICBJVEFCX18wRl9fT1BfQUVfX1JFR19fT1BfMDZfX01PRCB9LAogIC8qIDA3
ICovICB7IFVEX0lncnBfbW9kLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgSVRBQl9f
MEZfX09QX0FFX19SRUdfX09QXzA3X19NT0QgfSwKfTsKCnN0YXRpYyBzdHJ1Y3QgdWRfaXRhYl9l
bnRyeSBpdGFiX18wZl9fb3BfYWVfX3JlZ19fb3BfMDVfX21vZFsyXSA9IHsKICAvKiAwMCAqLyAg
eyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAog
IC8qIDAxICovICB7IFVEX0lncnBfcm0sICAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAg
SVRBQl9fMEZfX09QX0FFX19SRUdfX09QXzA1X19NT0RfX09QXzAxX19STSB9LAp9OwoKc3RhdGlj
IHN0cnVjdCB1ZF9pdGFiX2VudHJ5IGl0YWJfXzBmX19vcF9hZV9fcmVnX19vcF8wNV9fbW9kX19v
cF8wMV9fcm1bOF0gPSB7CiAgLyogMDAgKi8gIHsgVURfSWxmZW5jZSwgICAgICBPX05PTkUsICBP
X05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAwMSAqLyAgeyBVRF9JbGZlbmNlLCAgICAg
IE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDAyICovICB7IFVEX0ls
ZmVuY2UsICAgICAgT19OT05FLCAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMDMg
Ki8gIHsgVURfSWxmZW5jZSwgICAgICBPX05PTkUsICBPX05PTkUsICBPX05PTkUsICBQX25vbmUg
fSwKICAvKiAwNCAqLyAgeyBVRF9JbGZlbmNlLCAgICAgIE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9O
RSwgIFBfbm9uZSB9LAogIC8qIDA1ICovICB7IFVEX0lsZmVuY2UsICAgICAgT19OT05FLCAgT19O
T05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMDYgKi8gIHsgVURfSWxmZW5jZSwgICAgICBP
X05PTkUsICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAwNyAqLyAgeyBVRF9JbGZl
bmNlLCAgICAgIE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAp9OwoKc3RhdGlj
IHN0cnVjdCB1ZF9pdGFiX2VudHJ5IGl0YWJfXzBmX19vcF9hZV9fcmVnX19vcF8wNl9fbW9kWzJd
ID0gewogIC8qIDAwICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9O
RSwgICAgUF9ub25lIH0sCiAgLyogMDEgKi8gIHsgVURfSWdycF9ybSwgICAgICBPX05PTkUsIE9f
Tk9ORSwgT19OT05FLCAgICBJVEFCX18wRl9fT1BfQUVfX1JFR19fT1BfMDZfX01PRF9fT1BfMDFf
X1JNIH0sCn07CgpzdGF0aWMgc3RydWN0IHVkX2l0YWJfZW50cnkgaXRhYl9fMGZfX29wX2FlX19y
ZWdfX29wXzA2X19tb2RfX29wXzAxX19ybVs4XSA9IHsKICAvKiAwMCAqLyAgeyBVRF9JbWZlbmNl
LCAgICAgIE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDAxICovICB7
IFVEX0ltZmVuY2UsICAgICAgT19OT05FLCAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAg
LyogMDIgKi8gIHsgVURfSW1mZW5jZSwgICAgICBPX05PTkUsICBPX05PTkUsICBPX05PTkUsICBQ
X25vbmUgfSwKICAvKiAwMyAqLyAgeyBVRF9JbWZlbmNlLCAgICAgIE9fTk9ORSwgIE9fTk9ORSwg
IE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDA0ICovICB7IFVEX0ltZmVuY2UsICAgICAgT19OT05F
LCAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMDUgKi8gIHsgVURfSW1mZW5jZSwg
ICAgICBPX05PTkUsICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAwNiAqLyAgeyBV
RF9JbWZlbmNlLCAgICAgIE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8q
IDA3ICovICB7IFVEX0ltZmVuY2UsICAgICAgT19OT05FLCAgT19OT05FLCAgT19OT05FLCAgUF9u
b25lIH0sCn07CgpzdGF0aWMgc3RydWN0IHVkX2l0YWJfZW50cnkgaXRhYl9fMGZfX29wX2FlX19y
ZWdfX29wXzA3X19tb2RbMl0gPSB7CiAgLyogMDAgKi8gIHsgVURfSWNsZmx1c2gsICAgICBPX00s
ICAgICBPX05PTkUsICBPX05PTkUsICBQX2Fzb3xQX3JleHd8UF9yZXhyfFBfcmV4eHxQX3JleGIg
fSwKICAvKiAwMSAqLyAgeyBVRF9JZ3JwX3JtLCAgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUs
ICAgIElUQUJfXzBGX19PUF9BRV9fUkVHX19PUF8wN19fTU9EX19PUF8wMV9fUk0gfSwKfTsKCnN0
YXRpYyBzdHJ1Y3QgdWRfaXRhYl9lbnRyeSBpdGFiX18wZl9fb3BfYWVfX3JlZ19fb3BfMDdfX21v
ZF9fb3BfMDFfX3JtWzhdID0gewogIC8qIDAwICovICB7IFVEX0lzZmVuY2UsICAgICAgT19OT05F
LCAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMDEgKi8gIHsgVURfSXNmZW5jZSwg
ICAgICBPX05PTkUsICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAwMiAqLyAgeyBV
RF9Jc2ZlbmNlLCAgICAgIE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8q
IDAzICovICB7IFVEX0lzZmVuY2UsICAgICAgT19OT05FLCAgT19OT05FLCAgT19OT05FLCAgUF9u
b25lIH0sCiAgLyogMDQgKi8gIHsgVURfSXNmZW5jZSwgICAgICBPX05PTkUsICBPX05PTkUsICBP
X05PTkUsICBQX25vbmUgfSwKICAvKiAwNSAqLyAgeyBVRF9Jc2ZlbmNlLCAgICAgIE9fTk9ORSwg
IE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDA2ICovICB7IFVEX0lzZmVuY2UsICAg
ICAgT19OT05FLCAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMDcgKi8gIHsgVURf
SXNmZW5jZSwgICAgICBPX05PTkUsICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKfTsKCnN0
YXRpYyBzdHJ1Y3QgdWRfaXRhYl9lbnRyeSBpdGFiX18wZl9fb3BfYmFfX3JlZ1s4XSA9IHsKICAv
KiAwMCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBf
bm9uZSB9LAogIC8qIDAxICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9f
Tk9ORSwgICAgUF9ub25lIH0sCiAgLyogMDIgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUs
IE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAwMyAqLyAgeyBVRF9JaW52YWxpZCwg
ICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDA0ICovICB7IFVE
X0lidCwgICAgICAgICAgT19FdiwgICAgT19JYiwgICAgT19OT05FLCAgUF9jMXxQX2Fzb3xQX29z
b3xQX3JleHd8UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAwNSAqLyAgeyBVRF9JYnRzLCAg
ICAgICAgIE9fRXYsICAgIE9fSWIsICAgIE9fTk9ORSwgIFBfYzF8UF9hc298UF9vc298UF9yZXh3
fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDYgKi8gIHsgVURfSWJ0ciwgICAgICAgICBP
X0V2LCAgICBPX0liLCAgICBPX05PTkUsICBQX2MxfFBfYXNvfFBfb3NvfFBfcmV4d3xQX3JleHJ8
UF9yZXh4fFBfcmV4YiB9LAogIC8qIDA3ICovICB7IFVEX0lidGMsICAgICAgICAgT19FdiwgICAg
T19JYiwgICAgT19OT05FLCAgUF9jMXxQX2Fzb3xQX29zb3xQX3JleHd8UF9yZXhyfFBfcmV4eHxQ
X3JleGIgfSwKfTsKCnN0YXRpYyBzdHJ1Y3QgdWRfaXRhYl9lbnRyeSBpdGFiX18wZl9fb3BfYzdf
X3JlZ1s4XSA9IHsKICAvKiAwMCAqLyAgeyBVRF9JZ3JwX3ZlbmRvciwgIE9fTk9ORSwgT19OT05F
LCBPX05PTkUsICAgIElUQUJfXzBGX19PUF9DN19fUkVHX19PUF8wMF9fVkVORE9SIH0sCiAgLyog
MDEgKi8gIHsgVURfSWNtcHhjaGc4YiwgICBPX00sICAgICBPX05PTkUsICBPX05PTkUsICBQX2Fz
b3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDAyICovICB7IFVEX0lpbnZhbGlkLCAgICAg
T19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMDMgKi8gIHsgVURfSWlu
dmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAwNCAq
LyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9
LAogIC8qIDA1ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwg
ICAgUF9ub25lIH0sCiAgLyogMDYgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9O
RSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAwNyAqLyAgeyBVRF9JZ3JwX3ZlbmRvciwgIE9f
Tk9ORSwgT19OT05FLCBPX05PTkUsICAgIElUQUJfXzBGX19PUF9DN19fUkVHX19PUF8wN19fVkVO
RE9SIH0sCn07CgpzdGF0aWMgc3RydWN0IHVkX2l0YWJfZW50cnkgaXRhYl9fMGZfX29wX2M3X19y
ZWdfX29wXzAwX192ZW5kb3JbMl0gPSB7CiAgLyogMDAgKi8gIHsgVURfSWludmFsaWQsICAgICBP
X05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAwMSAqLyAgeyBVRF9Jdm1w
dHJsZCwgICAgIE9fTXEsICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8
UF9yZXhiIH0sCn07CgpzdGF0aWMgc3RydWN0IHVkX2l0YWJfZW50cnkgaXRhYl9fMGZfX29wX2M3
X19yZWdfX29wXzA3X192ZW5kb3JbMl0gPSB7CiAgLyogMDAgKi8gIHsgVURfSWludmFsaWQsICAg
ICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAwMSAqLyAgeyBVRF9J
dm1wdHJzdCwgICAgIE9fTXEsICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3Jl
eHh8UF9yZXhiIH0sCn07CgpzdGF0aWMgc3RydWN0IHVkX2l0YWJfZW50cnkgaXRhYl9fMGZfX29w
X2Q5X19tb2RbMl0gPSB7CiAgLyogMDAgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9f
Tk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAwMSAqLyAgeyBVRF9JZ3JwX3g4NywgICAg
IE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIElUQUJfXzBGX19PUF9EOV9fTU9EX19PUF8wMV9f
WDg3IH0sCn07CgpzdGF0aWMgc3RydWN0IHVkX2l0YWJfZW50cnkgaXRhYl9fMGZfX29wX2Q5X19t
b2RfX29wXzAxX194ODdbNjRdID0gewogIC8qIDAwICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19O
T05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMDEgKi8gIHsgVURfSWludmFs
aWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAwMiAqLyAg
eyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAog
IC8qIDAzICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAg
UF9ub25lIH0sCiAgLyogMDQgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwg
T19OT05FLCAgICBQX25vbmUgfSwKICAvKiAwNSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9O
RSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDA2ICovICB7IFVEX0lpbnZhbGlk
LCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMDcgKi8gIHsg
VURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAv
KiAwOCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBf
bm9uZSB9LAogIC8qIDA5ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9f
Tk9ORSwgICAgUF9ub25lIH0sCiAgLyogMEEgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUs
IE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAwQiAqLyAgeyBVRF9JaW52YWxpZCwg
ICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDBDICovICB7IFVE
X0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyog
MEQgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25v
bmUgfSwKICAvKiAwRSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05P
TkUsICAgIFBfbm9uZSB9LAogIC8qIDBGICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBP
X05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMTAgKi8gIHsgVURfSWludmFsaWQsICAg
ICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAxMSAqLyAgeyBVRF9J
aW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDEy
ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25l
IH0sCiAgLyogMTMgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05F
LCAgICBQX25vbmUgfSwKICAvKiAxNCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19O
T05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDE1ICovICB7IFVEX0lpbnZhbGlkLCAgICAg
T19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMTYgKi8gIHsgVURfSWlu
dmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAxNyAq
LyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9
LAogIC8qIDE4ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwg
ICAgUF9ub25lIH0sCiAgLyogMTkgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9O
RSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAxQSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9f
Tk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDFCICovICB7IFVEX0lpbnZh
bGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMUMgKi8g
IHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwK
ICAvKiAxRCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAg
IFBfbm9uZSB9LAogIC8qIDFFICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUs
IE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMUYgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05P
TkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAyMCAqLyAgeyBVRF9JaW52YWxp
ZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDIxICovICB7
IFVEX0lmYWJzLCAgICAgICAgT19OT05FLCAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAg
LyogMjIgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQ
X25vbmUgfSwKICAvKiAyMyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBP
X05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDI0ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05F
LCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMjUgKi8gIHsgVURfSWludmFsaWQs
ICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAyNiAqLyAgeyBV
RF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8q
IDI3ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9u
b25lIH0sCiAgLyogMjggKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19O
T05FLCAgICBQX25vbmUgfSwKICAvKiAyOSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwg
T19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDJBICovICB7IFVEX0lpbnZhbGlkLCAg
ICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMkIgKi8gIHsgVURf
SWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAy
QyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9u
ZSB9LAogIC8qIDJEICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9O
RSwgICAgUF9ub25lIH0sCiAgLyogMkUgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9f
Tk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAyRiAqLyAgeyBVRF9JaW52YWxpZCwgICAg
IE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDMwICovICB7IFVEX0lm
MnhtMSwgICAgICAgT19OT05FLCAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMzEg
Ki8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUg
fSwKICAvKiAzMiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUs
ICAgIFBfbm9uZSB9LAogIC8qIDMzICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05P
TkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMzQgKi8gIHsgVURfSWludmFsaWQsICAgICBP
X05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAzNSAqLyAgeyBVRF9JaW52
YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDM2ICov
ICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0s
CiAgLyogMzcgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAg
ICBQX25vbmUgfSwKICAvKiAzOCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05F
LCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDM5ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19O
T05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogM0EgKi8gIHsgVURfSWludmFs
aWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAzQiAqLyAg
eyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAog
IC8qIDNDICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAg
UF9ub25lIH0sCiAgLyogM0QgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwg
T19OT05FLCAgICBQX25vbmUgfSwKICAvKiAzRSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9O
RSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDNGICovICB7IFVEX0lpbnZhbGlk
LCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCn07CgpzdGF0aWMgc3Ry
dWN0IHVkX2l0YWJfZW50cnkgaXRhYl9fMWJ5dGVbMjU2XSA9IHsKICAvKiAwMCAqLyAgeyBVRF9J
YWRkLCAgICAgICAgIE9fRWIsICAgIE9fR2IsICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3Jl
eHh8UF9yZXhiIH0sCiAgLyogMDEgKi8gIHsgVURfSWFkZCwgICAgICAgICBPX0V2LCAgICBPX0d2
LCAgICBPX05PTkUsICBQX2Fzb3xQX29zb3xQX3JleHd8UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwK
ICAvKiAwMiAqLyAgeyBVRF9JYWRkLCAgICAgICAgIE9fR2IsICAgIE9fRWIsICAgIE9fTk9ORSwg
IFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDMgKi8gIHsgVURfSWFkZCwgICAg
ICAgICBPX0d2LCAgICBPX0V2LCAgICBPX05PTkUsICBQX2Fzb3xQX29zb3xQX3JleHd8UF9yZXhy
fFBfcmV4eHxQX3JleGIgfSwKICAvKiAwNCAqLyAgeyBVRF9JYWRkLCAgICAgICAgIE9fQUwsICAg
IE9fSWIsICAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDA1ICovICB7IFVEX0lhZGQsICAgICAg
ICAgT19yQVgsICAgT19JeiwgICAgT19OT05FLCAgUF9vc298UF9yZXh3IH0sCiAgLyogMDYgKi8g
IHsgVURfSXB1c2gsICAgICAgICBPX0VTLCAgICBPX05PTkUsICBPX05PTkUsICBQX2ludjY0fFBf
bm9uZSB9LAogIC8qIDA3ICovICB7IFVEX0lwb3AsICAgICAgICAgT19FUywgICAgT19OT05FLCAg
T19OT05FLCAgUF9pbnY2NHxQX25vbmUgfSwKICAvKiAwOCAqLyAgeyBVRF9Jb3IsICAgICAgICAg
IE9fRWIsICAgIE9fR2IsICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0s
CiAgLyogMDkgKi8gIHsgVURfSW9yLCAgICAgICAgICBPX0V2LCAgICBPX0d2LCAgICBPX05PTkUs
ICBQX2Fzb3xQX29zb3xQX3JleHd8UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAwQSAqLyAg
eyBVRF9Jb3IsICAgICAgICAgIE9fR2IsICAgIE9fRWIsICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4
cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMEIgKi8gIHsgVURfSW9yLCAgICAgICAgICBPX0d2LCAg
ICBPX0V2LCAgICBPX05PTkUsICBQX2Fzb3xQX29zb3xQX3JleHd8UF9yZXhyfFBfcmV4eHxQX3Jl
eGIgfSwKICAvKiAwQyAqLyAgeyBVRF9Jb3IsICAgICAgICAgIE9fQUwsICAgIE9fSWIsICAgIE9f
Tk9ORSwgIFBfbm9uZSB9LAogIC8qIDBEICovICB7IFVEX0lvciwgICAgICAgICAgT19yQVgsICAg
T19JeiwgICAgT19OT05FLCAgUF9vc298UF9yZXh3IH0sCiAgLyogMEUgKi8gIHsgVURfSXB1c2gs
ICAgICAgICBPX0NTLCAgICBPX05PTkUsICBPX05PTkUsICBQX2ludjY0fFBfbm9uZSB9LAogIC8q
IDBGICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9u
b25lIH0sCiAgLyogMTAgKi8gIHsgVURfSWFkYywgICAgICAgICBPX0ViLCAgICBPX0diLCAgICBP
X05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDExICovICB7IFVEX0lh
ZGMsICAgICAgICAgT19FdiwgICAgT19HdiwgICAgT19OT05FLCAgUF9hc298UF9vc298UF9yZXh3
fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMTIgKi8gIHsgVURfSWFkYywgICAgICAgICBP
X0diLCAgICBPX0ViLCAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAog
IC8qIDEzICovICB7IFVEX0lhZGMsICAgICAgICAgT19HdiwgICAgT19FdiwgICAgT19OT05FLCAg
UF9hc298UF9vc298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMTQgKi8gIHsg
VURfSWFkYywgICAgICAgICBPX0FMLCAgICBPX0liLCAgICBPX05PTkUsICBQX25vbmUgfSwKICAv
KiAxNSAqLyAgeyBVRF9JYWRjLCAgICAgICAgIE9fckFYLCAgIE9fSXosICAgIE9fTk9ORSwgIFBf
b3NvfFBfcmV4dyB9LAogIC8qIDE2ICovICB7IFVEX0lwdXNoLCAgICAgICAgT19TUywgICAgT19O
T05FLCAgT19OT05FLCAgUF9pbnY2NHxQX25vbmUgfSwKICAvKiAxNyAqLyAgeyBVRF9JcG9wLCAg
ICAgICAgIE9fU1MsICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfaW52NjR8UF9ub25lIH0sCiAgLyog
MTggKi8gIHsgVURfSXNiYiwgICAgICAgICBPX0ViLCAgICBPX0diLCAgICBPX05PTkUsICBQX2Fz
b3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDE5ICovICB7IFVEX0lzYmIsICAgICAgICAg
T19FdiwgICAgT19HdiwgICAgT19OT05FLCAgUF9hc298UF9vc298UF9yZXh3fFBfcmV4cnxQX3Jl
eHh8UF9yZXhiIH0sCiAgLyogMUEgKi8gIHsgVURfSXNiYiwgICAgICAgICBPX0diLCAgICBPX0Vi
LCAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDFCICovICB7
IFVEX0lzYmIsICAgICAgICAgT19HdiwgICAgT19FdiwgICAgT19OT05FLCAgUF9hc298UF9vc298
UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMUMgKi8gIHsgVURfSXNiYiwgICAg
ICAgICBPX0FMLCAgICBPX0liLCAgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAxRCAqLyAgeyBV
RF9Jc2JiLCAgICAgICAgIE9fckFYLCAgIE9fSXosICAgIE9fTk9ORSwgIFBfb3NvfFBfcmV4dyB9
LAogIC8qIDFFICovICB7IFVEX0lwdXNoLCAgICAgICAgT19EUywgICAgT19OT05FLCAgT19OT05F
LCAgUF9pbnY2NHxQX25vbmUgfSwKICAvKiAxRiAqLyAgeyBVRF9JcG9wLCAgICAgICAgIE9fRFMs
ICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfaW52NjR8UF9ub25lIH0sCiAgLyogMjAgKi8gIHsgVURf
SWFuZCwgICAgICAgICBPX0ViLCAgICBPX0diLCAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9y
ZXh4fFBfcmV4YiB9LAogIC8qIDIxICovICB7IFVEX0lhbmQsICAgICAgICAgT19FdiwgICAgT19H
diwgICAgT19OT05FLCAgUF9hc298UF9vc298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0s
CiAgLyogMjIgKi8gIHsgVURfSWFuZCwgICAgICAgICBPX0diLCAgICBPX0ViLCAgICBPX05PTkUs
ICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDIzICovICB7IFVEX0lhbmQsICAg
ICAgICAgT19HdiwgICAgT19FdiwgICAgT19OT05FLCAgUF9hc298UF9vc298UF9yZXh3fFBfcmV4
cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMjQgKi8gIHsgVURfSWFuZCwgICAgICAgICBPX0FMLCAg
ICBPX0liLCAgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAyNSAqLyAgeyBVRF9JYW5kLCAgICAg
ICAgIE9fckFYLCAgIE9fSXosICAgIE9fTk9ORSwgIFBfb3NvfFBfcmV4dyB9LAogIC8qIDI2ICov
ICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0s
CiAgLyogMjcgKi8gIHsgVURfSWRhYSwgICAgICAgICBPX05PTkUsICBPX05PTkUsICBPX05PTkUs
ICBQX2ludjY0fFBfbm9uZSB9LAogIC8qIDI4ICovICB7IFVEX0lzdWIsICAgICAgICAgT19FYiwg
ICAgT19HYiwgICAgT19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAy
OSAqLyAgeyBVRF9Jc3ViLCAgICAgICAgIE9fRXYsICAgIE9fR3YsICAgIE9fTk9ORSwgIFBfYXNv
fFBfb3NvfFBfcmV4d3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDJBICovICB7IFVEX0lz
dWIsICAgICAgICAgT19HYiwgICAgT19FYiwgICAgT19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4
eHxQX3JleGIgfSwKICAvKiAyQiAqLyAgeyBVRF9Jc3ViLCAgICAgICAgIE9fR3YsICAgIE9fRXYs
ICAgIE9fTk9ORSwgIFBfYXNvfFBfb3NvfFBfcmV4d3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAog
IC8qIDJDICovICB7IFVEX0lzdWIsICAgICAgICAgT19BTCwgICAgT19JYiwgICAgT19OT05FLCAg
UF9ub25lIH0sCiAgLyogMkQgKi8gIHsgVURfSXN1YiwgICAgICAgICBPX3JBWCwgICBPX0l6LCAg
ICBPX05PTkUsICBQX29zb3xQX3JleHcgfSwKICAvKiAyRSAqLyAgeyBVRF9JaW52YWxpZCwgICAg
IE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDJGICovICB7IFVEX0lk
YXMsICAgICAgICAgT19OT05FLCAgT19OT05FLCAgT19OT05FLCAgUF9pbnY2NHxQX25vbmUgfSwK
ICAvKiAzMCAqLyAgeyBVRF9JeG9yLCAgICAgICAgIE9fRWIsICAgIE9fR2IsICAgIE9fTk9ORSwg
IFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMzEgKi8gIHsgVURfSXhvciwgICAg
ICAgICBPX0V2LCAgICBPX0d2LCAgICBPX05PTkUsICBQX2Fzb3xQX29zb3xQX3JleHd8UF9yZXhy
fFBfcmV4eHxQX3JleGIgfSwKICAvKiAzMiAqLyAgeyBVRF9JeG9yLCAgICAgICAgIE9fR2IsICAg
IE9fRWIsICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMzMg
Ki8gIHsgVURfSXhvciwgICAgICAgICBPX0d2LCAgICBPX0V2LCAgICBPX05PTkUsICBQX2Fzb3xQ
X29zb3xQX3JleHd8UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAzNCAqLyAgeyBVRF9JeG9y
LCAgICAgICAgIE9fQUwsICAgIE9fSWIsICAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDM1ICov
ICB7IFVEX0l4b3IsICAgICAgICAgT19yQVgsICAgT19JeiwgICAgT19OT05FLCAgUF9vc298UF9y
ZXh3IH0sCiAgLyogMzYgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19O
T05FLCAgICBQX25vbmUgfSwKICAvKiAzNyAqLyAgeyBVRF9JYWFhLCAgICAgICAgIE9fTk9ORSwg
IE9fTk9ORSwgIE9fTk9ORSwgIFBfaW52NjR8UF9ub25lIH0sCiAgLyogMzggKi8gIHsgVURfSWNt
cCwgICAgICAgICBPX0ViLCAgICBPX0diLCAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4
fFBfcmV4YiB9LAogIC8qIDM5ICovICB7IFVEX0ljbXAsICAgICAgICAgT19FdiwgICAgT19Hdiwg
ICAgT19OT05FLCAgUF9hc298UF9vc298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAg
LyogM0EgKi8gIHsgVURfSWNtcCwgICAgICAgICBPX0diLCAgICBPX0ViLCAgICBPX05PTkUsICBQ
X2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDNCICovICB7IFVEX0ljbXAsICAgICAg
ICAgT19HdiwgICAgT19FdiwgICAgT19OT05FLCAgUF9hc298UF9vc298UF9yZXh3fFBfcmV4cnxQ
X3JleHh8UF9yZXhiIH0sCiAgLyogM0MgKi8gIHsgVURfSWNtcCwgICAgICAgICBPX0FMLCAgICBP
X0liLCAgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAzRCAqLyAgeyBVRF9JY21wLCAgICAgICAg
IE9fckFYLCAgIE9fSXosICAgIE9fTk9ORSwgIFBfb3NvfFBfcmV4dyB9LAogIC8qIDNFICovICB7
IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAg
LyogM0YgKi8gIHsgVURfSWFhcywgICAgICAgICBPX05PTkUsICBPX05PTkUsICBPX05PTkUsICBQ
X2ludjY0fFBfbm9uZSB9LAogIC8qIDQwICovICB7IFVEX0lpbmMsICAgICAgICAgT19lQVgsICAg
T19OT05FLCAgT19OT05FLCAgUF9vc28gfSwKICAvKiA0MSAqLyAgeyBVRF9JaW5jLCAgICAgICAg
IE9fZUNYLCAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfb3NvIH0sCiAgLyogNDIgKi8gIHsgVURfSWlu
YywgICAgICAgICBPX2VEWCwgICBPX05PTkUsICBPX05PTkUsICBQX29zbyB9LAogIC8qIDQzICov
ICB7IFVEX0lpbmMsICAgICAgICAgT19lQlgsICAgT19OT05FLCAgT19OT05FLCAgUF9vc28gfSwK
ICAvKiA0NCAqLyAgeyBVRF9JaW5jLCAgICAgICAgIE9fZVNQLCAgIE9fTk9ORSwgIE9fTk9ORSwg
IFBfb3NvIH0sCiAgLyogNDUgKi8gIHsgVURfSWluYywgICAgICAgICBPX2VCUCwgICBPX05PTkUs
ICBPX05PTkUsICBQX29zbyB9LAogIC8qIDQ2ICovICB7IFVEX0lpbmMsICAgICAgICAgT19lU0ks
ICAgT19OT05FLCAgT19OT05FLCAgUF9vc28gfSwKICAvKiA0NyAqLyAgeyBVRF9JaW5jLCAgICAg
ICAgIE9fZURJLCAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfb3NvIH0sCiAgLyogNDggKi8gIHsgVURf
SWRlYywgICAgICAgICBPX2VBWCwgICBPX05PTkUsICBPX05PTkUsICBQX29zbyB9LAogIC8qIDQ5
ICovICB7IFVEX0lkZWMsICAgICAgICAgT19lQ1gsICAgT19OT05FLCAgT19OT05FLCAgUF9vc28g
fSwKICAvKiA0QSAqLyAgeyBVRF9JZGVjLCAgICAgICAgIE9fZURYLCAgIE9fTk9ORSwgIE9fTk9O
RSwgIFBfb3NvIH0sCiAgLyogNEIgKi8gIHsgVURfSWRlYywgICAgICAgICBPX2VCWCwgICBPX05P
TkUsICBPX05PTkUsICBQX29zbyB9LAogIC8qIDRDICovICB7IFVEX0lkZWMsICAgICAgICAgT19l
U1AsICAgT19OT05FLCAgT19OT05FLCAgUF9vc28gfSwKICAvKiA0RCAqLyAgeyBVRF9JZGVjLCAg
ICAgICAgIE9fZUJQLCAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfb3NvIH0sCiAgLyogNEUgKi8gIHsg
VURfSWRlYywgICAgICAgICBPX2VTSSwgICBPX05PTkUsICBPX05PTkUsICBQX29zbyB9LAogIC8q
IDRGICovICB7IFVEX0lkZWMsICAgICAgICAgT19lREksICAgT19OT05FLCAgT19OT05FLCAgUF9v
c28gfSwKICAvKiA1MCAqLyAgeyBVRF9JcHVzaCwgICAgICAgIE9fckFYcjgsIE9fTk9ORSwgIE9f
Tk9ORSwgIFBfZGVmNjR8UF9kZXBNfFBfb3NvfFBfcmV4YiB9LAogIC8qIDUxICovICB7IFVEX0lw
dXNoLCAgICAgICAgT19yQ1hyOSwgT19OT05FLCAgT19OT05FLCAgUF9kZWY2NHxQX2RlcE18UF9v
c298UF9yZXhiIH0sCiAgLyogNTIgKi8gIHsgVURfSXB1c2gsICAgICAgICBPX3JEWHIxMCwgT19O
T05FLCAgT19OT05FLCBQX2RlZjY0fFBfZGVwTXxQX29zb3xQX3JleGIgfSwKICAvKiA1MyAqLyAg
eyBVRF9JcHVzaCwgICAgICAgIE9fckJYcjExLCBPX05PTkUsICBPX05PTkUsIFBfZGVmNjR8UF9k
ZXBNfFBfb3NvfFBfcmV4YiB9LAogIC8qIDU0ICovICB7IFVEX0lwdXNoLCAgICAgICAgT19yU1By
MTIsIE9fTk9ORSwgIE9fTk9ORSwgUF9kZWY2NHxQX2RlcE18UF9vc298UF9yZXhiIH0sCiAgLyog
NTUgKi8gIHsgVURfSXB1c2gsICAgICAgICBPX3JCUHIxMywgT19OT05FLCAgT19OT05FLCBQX2Rl
ZjY0fFBfZGVwTXxQX29zb3xQX3JleGIgfSwKICAvKiA1NiAqLyAgeyBVRF9JcHVzaCwgICAgICAg
IE9fclNJcjE0LCBPX05PTkUsICBPX05PTkUsIFBfZGVmNjR8UF9kZXBNfFBfb3NvfFBfcmV4YiB9
LAogIC8qIDU3ICovICB7IFVEX0lwdXNoLCAgICAgICAgT19yRElyMTUsIE9fTk9ORSwgIE9fTk9O
RSwgUF9kZWY2NHxQX2RlcE18UF9vc298UF9yZXhiIH0sCiAgLyogNTggKi8gIHsgVURfSXBvcCwg
ICAgICAgICBPX3JBWHI4LCBPX05PTkUsICBPX05PTkUsICBQX2RlZjY0fFBfZGVwTXxQX29zb3xQ
X3JleGIgfSwKICAvKiA1OSAqLyAgeyBVRF9JcG9wLCAgICAgICAgIE9fckNYcjksIE9fTk9ORSwg
IE9fTk9ORSwgIFBfZGVmNjR8UF9kZXBNfFBfb3NvfFBfcmV4YiB9LAogIC8qIDVBICovICB7IFVE
X0lwb3AsICAgICAgICAgT19yRFhyMTAsIE9fTk9ORSwgIE9fTk9ORSwgUF9kZWY2NHxQX2RlcE18
UF9vc298UF9yZXhiIH0sCiAgLyogNUIgKi8gIHsgVURfSXBvcCwgICAgICAgICBPX3JCWHIxMSwg
T19OT05FLCAgT19OT05FLCBQX2RlZjY0fFBfZGVwTXxQX29zb3xQX3JleGIgfSwKICAvKiA1QyAq
LyAgeyBVRF9JcG9wLCAgICAgICAgIE9fclNQcjEyLCBPX05PTkUsICBPX05PTkUsIFBfZGVmNjR8
UF9kZXBNfFBfb3NvfFBfcmV4YiB9LAogIC8qIDVEICovICB7IFVEX0lwb3AsICAgICAgICAgT19y
QlByMTMsIE9fTk9ORSwgIE9fTk9ORSwgUF9kZWY2NHxQX2RlcE18UF9vc298UF9yZXhiIH0sCiAg
LyogNUUgKi8gIHsgVURfSXBvcCwgICAgICAgICBPX3JTSXIxNCwgT19OT05FLCAgT19OT05FLCBQ
X2RlZjY0fFBfZGVwTXxQX29zb3xQX3JleGIgfSwKICAvKiA1RiAqLyAgeyBVRF9JcG9wLCAgICAg
ICAgIE9fckRJcjE1LCBPX05PTkUsICBPX05PTkUsIFBfZGVmNjR8UF9kZXBNfFBfb3NvfFBfcmV4
YiB9LAogIC8qIDYwICovICB7IFVEX0lncnBfb3NpemUsICAgT19OT05FLCBPX05PTkUsIE9fTk9O
RSwgICAgSVRBQl9fMUJZVEVfX09QXzYwX19PU0laRSB9LAogIC8qIDYxICovICB7IFVEX0lncnBf
b3NpemUsICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgSVRBQl9fMUJZVEVfX09QXzYxX19P
U0laRSB9LAogIC8qIDYyICovICB7IFVEX0lib3VuZCwgICAgICAgT19HdiwgICAgT19NLCAgICAg
T19OT05FLCAgUF9pbnY2NHxQX2Fzb3xQX29zbyB9LAogIC8qIDYzICovICB7IFVEX0lncnBfbW9k
ZSwgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgSVRBQl9fMUJZVEVfX09QXzYzX19NT0RF
IH0sCiAgLyogNjQgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05F
LCAgICBQX25vbmUgfSwKICAvKiA2NSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19O
T05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDY2ICovICB7IFVEX0lpbnZhbGlkLCAgICAg
T19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogNjcgKi8gIHsgVURfSWlu
dmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA2OCAq
LyAgeyBVRF9JcHVzaCwgICAgICAgIE9fSXosICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfYzF8UF9v
c28gfSwKICAvKiA2OSAqLyAgeyBVRF9JaW11bCwgICAgICAgIE9fR3YsICAgIE9fRXYsICAgIE9f
SXosICAgIFBfYXNvfFBfb3NvfFBfcmV4d3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDZB
ICovICB7IFVEX0lwdXNoLCAgICAgICAgT19JYiwgICAgT19OT05FLCAgT19OT05FLCAgUF9ub25l
IH0sCiAgLyogNkIgKi8gIHsgVURfSWltdWwsICAgICAgICBPX0d2LCAgICBPX0V2LCAgICBPX0li
LCAgICBQX2Fzb3xQX29zb3xQX3JleHd8UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiA2QyAq
LyAgeyBVRF9JaW5zYiwgICAgICAgIE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9
LAogIC8qIDZEICovICB7IFVEX0lncnBfb3NpemUsICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwg
ICAgSVRBQl9fMUJZVEVfX09QXzZEX19PU0laRSB9LAogIC8qIDZFICovICB7IFVEX0lvdXRzYiwg
ICAgICAgT19OT05FLCAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogNkYgKi8gIHsg
VURfSWdycF9vc2l6ZSwgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBJVEFCX18xQllURV9f
T1BfNkZfX09TSVpFIH0sCiAgLyogNzAgKi8gIHsgVURfSWpvLCAgICAgICAgICBPX0piLCAgICBP
X05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiA3MSAqLyAgeyBVRF9Jam5vLCAgICAgICAg
IE9fSmIsICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDcyICovICB7IFVEX0lq
YiwgICAgICAgICAgT19KYiwgICAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogNzMg
Ki8gIHsgVURfSWphZSwgICAgICAgICBPX0piLCAgICBPX05PTkUsICBPX05PTkUsICBQX25vbmUg
fSwKICAvKiA3NCAqLyAgeyBVRF9JanosICAgICAgICAgIE9fSmIsICAgIE9fTk9ORSwgIE9fTk9O
RSwgIFBfbm9uZSB9LAogIC8qIDc1ICovICB7IFVEX0lqbnosICAgICAgICAgT19KYiwgICAgT19O
T05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogNzYgKi8gIHsgVURfSWpiZSwgICAgICAgICBP
X0piLCAgICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiA3NyAqLyAgeyBVRF9JamEs
ICAgICAgICAgIE9fSmIsICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDc4ICov
ICB7IFVEX0lqcywgICAgICAgICAgT19KYiwgICAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0s
CiAgLyogNzkgKi8gIHsgVURfSWpucywgICAgICAgICBPX0piLCAgICBPX05PTkUsICBPX05PTkUs
ICBQX25vbmUgfSwKICAvKiA3QSAqLyAgeyBVRF9JanAsICAgICAgICAgIE9fSmIsICAgIE9fTk9O
RSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDdCICovICB7IFVEX0lqbnAsICAgICAgICAgT19K
YiwgICAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogN0MgKi8gIHsgVURfSWpsLCAg
ICAgICAgICBPX0piLCAgICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiA3RCAqLyAg
eyBVRF9JamdlLCAgICAgICAgIE9fSmIsICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAog
IC8qIDdFICovICB7IFVEX0lqbGUsICAgICAgICAgT19KYiwgICAgT19OT05FLCAgT19OT05FLCAg
UF9ub25lIH0sCiAgLyogN0YgKi8gIHsgVURfSWpnLCAgICAgICAgICBPX0piLCAgICBPX05PTkUs
ICBPX05PTkUsICBQX25vbmUgfSwKICAvKiA4MCAqLyAgeyBVRF9JZ3JwX3JlZywgICAgIE9fTk9O
RSwgT19OT05FLCBPX05PTkUsICAgIElUQUJfXzFCWVRFX19PUF84MF9fUkVHIH0sCiAgLyogODEg
Ki8gIHsgVURfSWdycF9yZWcsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBJVEFCX18x
QllURV9fT1BfODFfX1JFRyB9LAogIC8qIDgyICovICB7IFVEX0lncnBfcmVnLCAgICAgT19OT05F
LCBPX05PTkUsIE9fTk9ORSwgICAgSVRBQl9fMUJZVEVfX09QXzgyX19SRUcgfSwKICAvKiA4MyAq
LyAgeyBVRF9JZ3JwX3JlZywgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIElUQUJfXzFC
WVRFX19PUF84M19fUkVHIH0sCiAgLyogODQgKi8gIHsgVURfSXRlc3QsICAgICAgICBPX0ViLCAg
ICBPX0diLCAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDg1
ICovICB7IFVEX0l0ZXN0LCAgICAgICAgT19FdiwgICAgT19HdiwgICAgT19OT05FLCAgUF9hc298
UF9vc298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogODYgKi8gIHsgVURfSXhj
aGcsICAgICAgICBPX0ViLCAgICBPX0diLCAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4
fFBfcmV4YiB9LAogIC8qIDg3ICovICB7IFVEX0l4Y2hnLCAgICAgICAgT19FdiwgICAgT19Hdiwg
ICAgT19OT05FLCAgUF9hc298UF9vc298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAg
LyogODggKi8gIHsgVURfSW1vdiwgICAgICAgICBPX0ViLCAgICBPX0diLCAgICBPX05PTkUsICBQ
X2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDg5ICovICB7IFVEX0ltb3YsICAgICAg
ICAgT19FdiwgICAgT19HdiwgICAgT19OT05FLCAgUF9hc298UF9vc298UF9yZXh3fFBfcmV4cnxQ
X3JleHh8UF9yZXhiIH0sCiAgLyogOEEgKi8gIHsgVURfSW1vdiwgICAgICAgICBPX0diLCAgICBP
X0ViLCAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDhCICov
ICB7IFVEX0ltb3YsICAgICAgICAgT19HdiwgICAgT19FdiwgICAgT19OT05FLCAgUF9hc298UF9v
c298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogOEMgKi8gIHsgVURfSW1vdiwg
ICAgICAgICBPX0V2LCAgICBPX1MsICAgICBPX05PTkUsICBQX2Fzb3xQX29zb3xQX3JleHJ8UF9y
ZXh4fFBfcmV4YiB9LAogIC8qIDhEICovICB7IFVEX0lsZWEsICAgICAgICAgT19HdiwgICAgT19N
LCAgICAgT19OT05FLCAgUF9hc298UF9vc298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0s
CiAgLyogOEUgKi8gIHsgVURfSW1vdiwgICAgICAgICBPX1MsICAgICBPX0V2LCAgICBPX05PTkUs
ICBQX2Fzb3xQX29zb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDhGICovICB7IFVEX0ln
cnBfcmVnLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgSVRBQl9fMUJZVEVfX09QXzhG
X19SRUcgfSwKICAvKiA5MCAqLyAgeyBVRF9JeGNoZywgICAgICAgIE9fckFYcjgsIE9fckFYLCAg
IE9fTk9ORSwgIFBfb3NvfFBfcmV4d3xQX3JleGIgfSwKICAvKiA5MSAqLyAgeyBVRF9JeGNoZywg
ICAgICAgIE9fckNYcjksIE9fckFYLCAgIE9fTk9ORSwgIFBfb3NvfFBfcmV4d3xQX3JleGIgfSwK
ICAvKiA5MiAqLyAgeyBVRF9JeGNoZywgICAgICAgIE9fckRYcjEwLCBPX3JBWCwgICBPX05PTkUs
IFBfb3NvfFBfcmV4d3xQX3JleGIgfSwKICAvKiA5MyAqLyAgeyBVRF9JeGNoZywgICAgICAgIE9f
ckJYcjExLCBPX3JBWCwgICBPX05PTkUsIFBfb3NvfFBfcmV4d3xQX3JleGIgfSwKICAvKiA5NCAq
LyAgeyBVRF9JeGNoZywgICAgICAgIE9fclNQcjEyLCBPX3JBWCwgICBPX05PTkUsIFBfb3NvfFBf
cmV4d3xQX3JleGIgfSwKICAvKiA5NSAqLyAgeyBVRF9JeGNoZywgICAgICAgIE9fckJQcjEzLCBP
X3JBWCwgICBPX05PTkUsIFBfb3NvfFBfcmV4d3xQX3JleGIgfSwKICAvKiA5NiAqLyAgeyBVRF9J
eGNoZywgICAgICAgIE9fclNJcjE0LCBPX3JBWCwgICBPX05PTkUsIFBfb3NvfFBfcmV4d3xQX3Jl
eGIgfSwKICAvKiA5NyAqLyAgeyBVRF9JeGNoZywgICAgICAgIE9fckRJcjE1LCBPX3JBWCwgICBP
X05PTkUsIFBfb3NvfFBfcmV4d3xQX3JleGIgfSwKICAvKiA5OCAqLyAgeyBVRF9JZ3JwX29zaXpl
LCAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIElUQUJfXzFCWVRFX19PUF85OF9fT1NJWkUg
fSwKICAvKiA5OSAqLyAgeyBVRF9JZ3JwX29zaXplLCAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUs
ICAgIElUQUJfXzFCWVRFX19PUF85OV9fT1NJWkUgfSwKICAvKiA5QSAqLyAgeyBVRF9JY2FsbCwg
ICAgICAgIE9fQXAsICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfaW52NjR8UF9vc28gfSwKICAvKiA5
QiAqLyAgeyBVRF9Jd2FpdCwgICAgICAgIE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9u
ZSB9LAogIC8qIDlDICovICB7IFVEX0lncnBfbW9kZSwgICAgT19OT05FLCBPX05PTkUsIE9fTk9O
RSwgICAgSVRBQl9fMUJZVEVfX09QXzlDX19NT0RFIH0sCiAgLyogOUQgKi8gIHsgVURfSWdycF9t
b2RlLCAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBJVEFCX18xQllURV9fT1BfOURfX01P
REUgfSwKICAvKiA5RSAqLyAgeyBVRF9Jc2FoZiwgICAgICAgIE9fTk9ORSwgIE9fTk9ORSwgIE9f
Tk9ORSwgIFBfbm9uZSB9LAogIC8qIDlGICovICB7IFVEX0lsYWhmLCAgICAgICAgT19OT05FLCAg
T19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogQTAgKi8gIHsgVURfSW1vdiwgICAgICAg
ICBPX0FMLCAgICBPX09iLCAgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiBBMSAqLyAgeyBVRF9J
bW92LCAgICAgICAgIE9fckFYLCAgIE9fT3YsICAgIE9fTk9ORSwgIFBfYXNvfFBfb3NvfFBfcmV4
dyB9LAogIC8qIEEyICovICB7IFVEX0ltb3YsICAgICAgICAgT19PYiwgICAgT19BTCwgICAgT19O
T05FLCAgUF9ub25lIH0sCiAgLyogQTMgKi8gIHsgVURfSW1vdiwgICAgICAgICBPX092LCAgICBP
X3JBWCwgICBPX05PTkUsICBQX2Fzb3xQX29zb3xQX3JleHcgfSwKICAvKiBBNCAqLyAgeyBVRF9J
bW92c2IsICAgICAgIE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBfSW1wQWRkcnxQX25vbmUg
fSwKICAvKiBBNSAqLyAgeyBVRF9JZ3JwX29zaXplLCAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUs
ICAgIElUQUJfXzFCWVRFX19PUF9BNV9fT1NJWkUgfSwKICAvKiBBNiAqLyAgeyBVRF9JY21wc2Is
ICAgICAgIE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIEE3ICovICB7
IFVEX0lncnBfb3NpemUsICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgSVRBQl9fMUJZVEVf
X09QX0E3X19PU0laRSB9LAogIC8qIEE4ICovICB7IFVEX0l0ZXN0LCAgICAgICAgT19BTCwgICAg
T19JYiwgICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogQTkgKi8gIHsgVURfSXRlc3QsICAgICAg
ICBPX3JBWCwgICBPX0l6LCAgICBPX05PTkUsICBQX29zb3xQX3JleHcgfSwKICAvKiBBQSAqLyAg
eyBVRF9Jc3Rvc2IsICAgICAgIE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBfSW1wQWRkcnxQ
X25vbmUgfSwKICAvKiBBQiAqLyAgeyBVRF9JZ3JwX29zaXplLCAgIE9fTk9ORSwgT19OT05FLCBP
X05PTkUsICAgIElUQUJfXzFCWVRFX19PUF9BQl9fT1NJWkUgfSwKICAvKiBBQyAqLyAgeyBVRF9J
bG9kc2IsICAgICAgIE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBfSW1wQWRkcnxQX25vbmUg
fSwKICAvKiBBRCAqLyAgeyBVRF9JZ3JwX29zaXplLCAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUs
ICAgIElUQUJfXzFCWVRFX19PUF9BRF9fT1NJWkUgfSwKICAvKiBBRSAqLyAgeyBVRF9Jc2Nhc2Is
ICAgICAgIE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIEFGICovICB7
IFVEX0lncnBfb3NpemUsICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgSVRBQl9fMUJZVEVf
X09QX0FGX19PU0laRSB9LAogIC8qIEIwICovICB7IFVEX0ltb3YsICAgICAgICAgT19BTHI4Yiwg
T19JYiwgICAgT19OT05FLCAgUF9yZXhiIH0sCiAgLyogQjEgKi8gIHsgVURfSW1vdiwgICAgICAg
ICBPX0NMcjliLCBPX0liLCAgICBPX05PTkUsICBQX3JleGIgfSwKICAvKiBCMiAqLyAgeyBVRF9J
bW92LCAgICAgICAgIE9fRExyMTBiLCBPX0liLCAgICBPX05PTkUsIFBfcmV4YiB9LAogIC8qIEIz
ICovICB7IFVEX0ltb3YsICAgICAgICAgT19CTHIxMWIsIE9fSWIsICAgIE9fTk9ORSwgUF9yZXhi
IH0sCiAgLyogQjQgKi8gIHsgVURfSW1vdiwgICAgICAgICBPX0FIcjEyYiwgT19JYiwgICAgT19O
T05FLCBQX3JleGIgfSwKICAvKiBCNSAqLyAgeyBVRF9JbW92LCAgICAgICAgIE9fQ0hyMTNiLCBP
X0liLCAgICBPX05PTkUsIFBfcmV4YiB9LAogIC8qIEI2ICovICB7IFVEX0ltb3YsICAgICAgICAg
T19ESHIxNGIsIE9fSWIsICAgIE9fTk9ORSwgUF9yZXhiIH0sCiAgLyogQjcgKi8gIHsgVURfSW1v
diwgICAgICAgICBPX0JIcjE1YiwgT19JYiwgICAgT19OT05FLCBQX3JleGIgfSwKICAvKiBCOCAq
LyAgeyBVRF9JbW92LCAgICAgICAgIE9fckFYcjgsIE9fSXYsICAgIE9fTk9ORSwgIFBfb3NvfFBf
cmV4d3xQX3JleGIgfSwKICAvKiBCOSAqLyAgeyBVRF9JbW92LCAgICAgICAgIE9fckNYcjksIE9f
SXYsICAgIE9fTk9ORSwgIFBfb3NvfFBfcmV4d3xQX3JleGIgfSwKICAvKiBCQSAqLyAgeyBVRF9J
bW92LCAgICAgICAgIE9fckRYcjEwLCBPX0l2LCAgICBPX05PTkUsIFBfb3NvfFBfcmV4d3xQX3Jl
eGIgfSwKICAvKiBCQiAqLyAgeyBVRF9JbW92LCAgICAgICAgIE9fckJYcjExLCBPX0l2LCAgICBP
X05PTkUsIFBfb3NvfFBfcmV4d3xQX3JleGIgfSwKICAvKiBCQyAqLyAgeyBVRF9JbW92LCAgICAg
ICAgIE9fclNQcjEyLCBPX0l2LCAgICBPX05PTkUsIFBfb3NvfFBfcmV4d3xQX3JleGIgfSwKICAv
KiBCRCAqLyAgeyBVRF9JbW92LCAgICAgICAgIE9fckJQcjEzLCBPX0l2LCAgICBPX05PTkUsIFBf
b3NvfFBfcmV4d3xQX3JleGIgfSwKICAvKiBCRSAqLyAgeyBVRF9JbW92LCAgICAgICAgIE9fclNJ
cjE0LCBPX0l2LCAgICBPX05PTkUsIFBfb3NvfFBfcmV4d3xQX3JleGIgfSwKICAvKiBCRiAqLyAg
eyBVRF9JbW92LCAgICAgICAgIE9fckRJcjE1LCBPX0l2LCAgICBPX05PTkUsIFBfb3NvfFBfcmV4
d3xQX3JleGIgfSwKICAvKiBDMCAqLyAgeyBVRF9JZ3JwX3JlZywgICAgIE9fTk9ORSwgT19OT05F
LCBPX05PTkUsICAgIElUQUJfXzFCWVRFX19PUF9DMF9fUkVHIH0sCiAgLyogQzEgKi8gIHsgVURf
SWdycF9yZWcsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBJVEFCX18xQllURV9fT1Bf
QzFfX1JFRyB9LAogIC8qIEMyICovICB7IFVEX0lyZXQsICAgICAgICAgT19JdywgICAgT19OT05F
LCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogQzMgKi8gIHsgVURfSXJldCwgICAgICAgICBPX05P
TkUsICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiBDNCAqLyAgeyBVRF9JbGVzLCAg
ICAgICAgIE9fR3YsICAgIE9fTSwgICAgIE9fTk9ORSwgIFBfaW52NjR8UF9hc298UF9vc28gfSwK
ICAvKiBDNSAqLyAgeyBVRF9JbGRzLCAgICAgICAgIE9fR3YsICAgIE9fTSwgICAgIE9fTk9ORSwg
IFBfaW52NjR8UF9hc298UF9vc28gfSwKICAvKiBDNiAqLyAgeyBVRF9JZ3JwX3JlZywgICAgIE9f
Tk9ORSwgT19OT05FLCBPX05PTkUsICAgIElUQUJfXzFCWVRFX19PUF9DNl9fUkVHIH0sCiAgLyog
QzcgKi8gIHsgVURfSWdycF9yZWcsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBJVEFC
X18xQllURV9fT1BfQzdfX1JFRyB9LAogIC8qIEM4ICovICB7IFVEX0llbnRlciwgICAgICAgT19J
dywgICAgT19JYiwgICAgT19OT05FLCAgUF9kZWY2NHxQX2RlcE18UF9ub25lIH0sCiAgLyogQzkg
Ki8gIHsgVURfSWxlYXZlLCAgICAgICBPX05PTkUsICBPX05PTkUsICBPX05PTkUsICBQX25vbmUg
fSwKICAvKiBDQSAqLyAgeyBVRF9JcmV0ZiwgICAgICAgIE9fSXcsICAgIE9fTk9ORSwgIE9fTk9O
RSwgIFBfbm9uZSB9LAogIC8qIENCICovICB7IFVEX0lyZXRmLCAgICAgICAgT19OT05FLCAgT19O
T05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogQ0MgKi8gIHsgVURfSWludDMsICAgICAgICBP
X05PTkUsICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiBDRCAqLyAgeyBVRF9JaW50
LCAgICAgICAgIE9fSWIsICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIENFICov
ICB7IFVEX0lpbnRvLCAgICAgICAgT19OT05FLCAgT19OT05FLCAgT19OT05FLCAgUF9pbnY2NHxQ
X25vbmUgfSwKICAvKiBDRiAqLyAgeyBVRF9JZ3JwX29zaXplLCAgIE9fTk9ORSwgT19OT05FLCBP
X05PTkUsICAgIElUQUJfXzFCWVRFX19PUF9DRl9fT1NJWkUgfSwKICAvKiBEMCAqLyAgeyBVRF9J
Z3JwX3JlZywgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIElUQUJfXzFCWVRFX19PUF9E
MF9fUkVHIH0sCiAgLyogRDEgKi8gIHsgVURfSWdycF9yZWcsICAgICBPX05PTkUsIE9fTk9ORSwg
T19OT05FLCAgICBJVEFCX18xQllURV9fT1BfRDFfX1JFRyB9LAogIC8qIEQyICovICB7IFVEX0ln
cnBfcmVnLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgSVRBQl9fMUJZVEVfX09QX0Qy
X19SRUcgfSwKICAvKiBEMyAqLyAgeyBVRF9JZ3JwX3JlZywgICAgIE9fTk9ORSwgT19OT05FLCBP
X05PTkUsICAgIElUQUJfXzFCWVRFX19PUF9EM19fUkVHIH0sCiAgLyogRDQgKi8gIHsgVURfSWFh
bSwgICAgICAgICBPX0liLCAgICBPX05PTkUsICBPX05PTkUsICBQX2ludjY0fFBfbm9uZSB9LAog
IC8qIEQ1ICovICB7IFVEX0lhYWQsICAgICAgICAgT19JYiwgICAgT19OT05FLCAgT19OT05FLCAg
UF9pbnY2NHxQX25vbmUgfSwKICAvKiBENiAqLyAgeyBVRF9Jc2FsYywgICAgICAgIE9fTk9ORSwg
IE9fTk9ORSwgIE9fTk9ORSwgIFBfaW52NjR8UF9ub25lIH0sCiAgLyogRDcgKi8gIHsgVURfSXhs
YXRiLCAgICAgICBPX05PTkUsICBPX05PTkUsICBPX05PTkUsICBQX3JleHcgfSwKICAvKiBEOCAq
LyAgeyBVRF9JZ3JwX21vZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIElUQUJfXzFC
WVRFX19PUF9EOF9fTU9EIH0sCiAgLyogRDkgKi8gIHsgVURfSWdycF9tb2QsICAgICBPX05PTkUs
IE9fTk9ORSwgT19OT05FLCAgICBJVEFCX18xQllURV9fT1BfRDlfX01PRCB9LAogIC8qIERBICov
ICB7IFVEX0lncnBfbW9kLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgSVRBQl9fMUJZ
VEVfX09QX0RBX19NT0QgfSwKICAvKiBEQiAqLyAgeyBVRF9JZ3JwX21vZCwgICAgIE9fTk9ORSwg
T19OT05FLCBPX05PTkUsICAgIElUQUJfXzFCWVRFX19PUF9EQl9fTU9EIH0sCiAgLyogREMgKi8g
IHsgVURfSWdycF9tb2QsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBJVEFCX18xQllU
RV9fT1BfRENfX01PRCB9LAogIC8qIEREICovICB7IFVEX0lncnBfbW9kLCAgICAgT19OT05FLCBP
X05PTkUsIE9fTk9ORSwgICAgSVRBQl9fMUJZVEVfX09QX0REX19NT0QgfSwKICAvKiBERSAqLyAg
eyBVRF9JZ3JwX21vZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIElUQUJfXzFCWVRF
X19PUF9ERV9fTU9EIH0sCiAgLyogREYgKi8gIHsgVURfSWdycF9tb2QsICAgICBPX05PTkUsIE9f
Tk9ORSwgT19OT05FLCAgICBJVEFCX18xQllURV9fT1BfREZfX01PRCB9LAogIC8qIEUwICovICB7
IFVEX0lsb29wbnosICAgICAgT19KYiwgICAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAg
LyogRTEgKi8gIHsgVURfSWxvb3BlLCAgICAgICBPX0piLCAgICBPX05PTkUsICBPX05PTkUsICBQ
X25vbmUgfSwKICAvKiBFMiAqLyAgeyBVRF9JbG9vcCwgICAgICAgIE9fSmIsICAgIE9fTk9ORSwg
IE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIEUzICovICB7IFVEX0lncnBfYXNpemUsICAgT19OT05F
LCBPX05PTkUsIE9fTk9ORSwgICAgSVRBQl9fMUJZVEVfX09QX0UzX19BU0laRSB9LAogIC8qIEU0
ICovICB7IFVEX0lpbiwgICAgICAgICAgT19BTCwgICAgT19JYiwgICAgT19OT05FLCAgUF9ub25l
IH0sCiAgLyogRTUgKi8gIHsgVURfSWluLCAgICAgICAgICBPX2VBWCwgICBPX0liLCAgICBPX05P
TkUsICBQX29zbyB9LAogIC8qIEU2ICovICB7IFVEX0lvdXQsICAgICAgICAgT19JYiwgICAgT19B
TCwgICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogRTcgKi8gIHsgVURfSW91dCwgICAgICAgICBP
X0liLCAgICBPX2VBWCwgICBPX05PTkUsICBQX29zbyB9LAogIC8qIEU4ICovICB7IFVEX0ljYWxs
LCAgICAgICAgT19KeiwgICAgT19OT05FLCAgT19OT05FLCAgUF9kZWY2NHxQX29zbyB9LAogIC8q
IEU5ICovICB7IFVEX0lqbXAsICAgICAgICAgT19KeiwgICAgT19OT05FLCAgT19OT05FLCAgUF9k
ZWY2NHxQX2RlcE18UF9vc28gfSwKICAvKiBFQSAqLyAgeyBVRF9Jam1wLCAgICAgICAgIE9fQXAs
ICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfaW52NjR8UF9ub25lIH0sCiAgLyogRUIgKi8gIHsgVURf
SWptcCwgICAgICAgICBPX0piLCAgICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiBF
QyAqLyAgeyBVRF9JaW4sICAgICAgICAgIE9fQUwsICAgIE9fRFgsICAgIE9fTk9ORSwgIFBfbm9u
ZSB9LAogIC8qIEVEICovICB7IFVEX0lpbiwgICAgICAgICAgT19lQVgsICAgT19EWCwgICAgT19O
T05FLCAgUF9vc28gfSwKICAvKiBFRSAqLyAgeyBVRF9Jb3V0LCAgICAgICAgIE9fRFgsICAgIE9f
QUwsICAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIEVGICovICB7IFVEX0lvdXQsICAgICAgICAg
T19EWCwgICAgT19lQVgsICAgT19OT05FLCAgUF9vc28gfSwKICAvKiBGMCAqLyAgeyBVRF9JbG9j
aywgICAgICAgIE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIEYxICov
ICB7IFVEX0lpbnQxLCAgICAgICAgT19OT05FLCAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0s
CiAgLyogRjIgKi8gIHsgVURfSXJlcG5lLCAgICAgICBPX05PTkUsICBPX05PTkUsICBPX05PTkUs
ICBQX25vbmUgfSwKICAvKiBGMyAqLyAgeyBVRF9JcmVwLCAgICAgICAgIE9fTk9ORSwgIE9fTk9O
RSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIEY0ICovICB7IFVEX0lobHQsICAgICAgICAgT19O
T05FLCAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogRjUgKi8gIHsgVURfSWNtYywg
ICAgICAgICBPX05PTkUsICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiBGNiAqLyAg
eyBVRF9JZ3JwX3JlZywgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIElUQUJfXzFCWVRF
X19PUF9GNl9fUkVHIH0sCiAgLyogRjcgKi8gIHsgVURfSWdycF9yZWcsICAgICBPX05PTkUsIE9f
Tk9ORSwgT19OT05FLCAgICBJVEFCX18xQllURV9fT1BfRjdfX1JFRyB9LAogIC8qIEY4ICovICB7
IFVEX0ljbGMsICAgICAgICAgT19OT05FLCAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAg
LyogRjkgKi8gIHsgVURfSXN0YywgICAgICAgICBPX05PTkUsICBPX05PTkUsICBPX05PTkUsICBQ
X25vbmUgfSwKICAvKiBGQSAqLyAgeyBVRF9JY2xpLCAgICAgICAgIE9fTk9ORSwgIE9fTk9ORSwg
IE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIEZCICovICB7IFVEX0lzdGksICAgICAgICAgT19OT05F
LCAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogRkMgKi8gIHsgVURfSWNsZCwgICAg
ICAgICBPX05PTkUsICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiBGRCAqLyAgeyBV
RF9Jc3RkLCAgICAgICAgIE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8q
IEZFICovICB7IFVEX0lncnBfcmVnLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgSVRB
Ql9fMUJZVEVfX09QX0ZFX19SRUcgfSwKICAvKiBGRiAqLyAgeyBVRF9JZ3JwX3JlZywgICAgIE9f
Tk9ORSwgT19OT05FLCBPX05PTkUsICAgIElUQUJfXzFCWVRFX19PUF9GRl9fUkVHIH0sCn07Cgpz
dGF0aWMgc3RydWN0IHVkX2l0YWJfZW50cnkgaXRhYl9fMWJ5dGVfX29wXzYwX19vc2l6ZVszXSA9
IHsKICAvKiAwMCAqLyAgeyBVRF9JcHVzaGEsICAgICAgIE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9O
RSwgIFBfaW52NjR8UF9vc28gfSwKICAvKiAwMSAqLyAgeyBVRF9JcHVzaGFkLCAgICAgIE9fTk9O
RSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBfaW52NjR8UF9vc28gfSwKICAvKiAwMiAqLyAgeyBVRF9J
aW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAp9OwoKc3Rh
dGljIHN0cnVjdCB1ZF9pdGFiX2VudHJ5IGl0YWJfXzFieXRlX19vcF82MV9fb3NpemVbM10gPSB7
CiAgLyogMDAgKi8gIHsgVURfSXBvcGEsICAgICAgICBPX05PTkUsICBPX05PTkUsICBPX05PTkUs
ICBQX2ludjY0fFBfb3NvIH0sCiAgLyogMDEgKi8gIHsgVURfSXBvcGFkLCAgICAgICBPX05PTkUs
ICBPX05PTkUsICBPX05PTkUsICBQX2ludjY0fFBfb3NvIH0sCiAgLyogMDIgKi8gIHsgVURfSWlu
dmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKfTsKCnN0YXRp
YyBzdHJ1Y3QgdWRfaXRhYl9lbnRyeSBpdGFiX18xYnl0ZV9fb3BfNjNfX21vZGVbM10gPSB7CiAg
LyogMDAgKi8gIHsgVURfSWFycGwsICAgICAgICBPX0V3LCAgICBPX0d3LCAgICBPX05PTkUsICBQ
X2ludjY0fFBfYXNvIH0sCiAgLyogMDEgKi8gIHsgVURfSWFycGwsICAgICAgICBPX0V3LCAgICBP
X0d3LCAgICBPX05PTkUsICBQX2ludjY0fFBfYXNvIH0sCiAgLyogMDIgKi8gIHsgVURfSW1vdnN4
ZCwgICAgICBPX0d2LCAgICBPX0VkLCAgICBPX05PTkUsICBQX2MyfFBfYXNvfFBfb3NvfFBfcmV4
d3xQX3JleHh8UF9yZXhyfFBfcmV4YiB9LAp9OwoKc3RhdGljIHN0cnVjdCB1ZF9pdGFiX2VudHJ5
IGl0YWJfXzFieXRlX19vcF82ZF9fb3NpemVbM10gPSB7CiAgLyogMDAgKi8gIHsgVURfSWluc3cs
ICAgICAgICBPX05PTkUsICBPX05PTkUsICBPX05PTkUsICBQX29zbyB9LAogIC8qIDAxICovICB7
IFVEX0lpbnNkLCAgICAgICAgT19OT05FLCAgT19OT05FLCAgT19OT05FLCAgUF9vc28gfSwKICAv
KiAwMiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBf
bm9uZSB9LAp9OwoKc3RhdGljIHN0cnVjdCB1ZF9pdGFiX2VudHJ5IGl0YWJfXzFieXRlX19vcF82
Zl9fb3NpemVbM10gPSB7CiAgLyogMDAgKi8gIHsgVURfSW91dHN3LCAgICAgICBPX05PTkUsICBP
X05PTkUsICBPX05PTkUsICBQX29zbyB9LAogIC8qIDAxICovICB7IFVEX0lvdXRzZCwgICAgICAg
T19OT05FLCAgT19OT05FLCAgT19OT05FLCAgUF9vc28gfSwKICAvKiAwMiAqLyAgeyBVRF9Jb3V0
c3EsICAgICAgIE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBfb3NvIH0sCn07CgpzdGF0aWMg
c3RydWN0IHVkX2l0YWJfZW50cnkgaXRhYl9fMWJ5dGVfX29wXzgwX19yZWdbOF0gPSB7CiAgLyog
MDAgKi8gIHsgVURfSWFkZCwgICAgICAgICBPX0ViLCAgICBPX0liLCAgICBPX05PTkUsICBQX2Mx
fFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDEgKi8gIHsgVURfSW9yLCAgICAg
ICAgICBPX0ViLCAgICBPX0liLCAgICBPX05PTkUsICBQX2MxfFBfYXNvfFBfcmV4cnxQX3JleHh8
UF9yZXhiIH0sCiAgLyogMDIgKi8gIHsgVURfSWFkYywgICAgICAgICBPX0ViLCAgICBPX0liLCAg
ICBPX05PTkUsICBQX2MxfFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDMgKi8g
IHsgVURfSXNiYiwgICAgICAgICBPX0ViLCAgICBPX0liLCAgICBPX05PTkUsICBQX2MxfFBfYXNv
fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDQgKi8gIHsgVURfSWFuZCwgICAgICAgICBP
X0ViLCAgICBPX0liLCAgICBPX05PTkUsICBQX2MxfFBfYXNvfFBfcmV4d3xQX3JleHJ8UF9yZXh4
fFBfcmV4YiB9LAogIC8qIDA1ICovICB7IFVEX0lzdWIsICAgICAgICAgT19FYiwgICAgT19JYiwg
ICAgT19OT05FLCAgUF9jMXxQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDA2ICov
ICB7IFVEX0l4b3IsICAgICAgICAgT19FYiwgICAgT19JYiwgICAgT19OT05FLCAgUF9jMXxQX2Fz
b3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDA3ICovICB7IFVEX0ljbXAsICAgICAgICAg
T19FYiwgICAgT19JYiwgICAgT19OT05FLCAgUF9jMXxQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4
YiB9LAp9OwoKc3RhdGljIHN0cnVjdCB1ZF9pdGFiX2VudHJ5IGl0YWJfXzFieXRlX19vcF84MV9f
cmVnWzhdID0gewogIC8qIDAwICovICB7IFVEX0lhZGQsICAgICAgICAgT19FdiwgICAgT19Jeiwg
ICAgT19OT05FLCAgUF9jMXxQX2Fzb3xQX29zb3xQX3JleHd8UF9yZXhyfFBfcmV4eHxQX3JleGIg
fSwKICAvKiAwMSAqLyAgeyBVRF9Jb3IsICAgICAgICAgIE9fRXYsICAgIE9fSXosICAgIE9fTk9O
RSwgIFBfYzF8UF9hc298UF9vc298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyog
MDIgKi8gIHsgVURfSWFkYywgICAgICAgICBPX0V2LCAgICBPX0l6LCAgICBPX05PTkUsICBQX2Mx
fFBfYXNvfFBfb3NvfFBfcmV4d3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDAzICovICB7
IFVEX0lzYmIsICAgICAgICAgT19FdiwgICAgT19JeiwgICAgT19OT05FLCAgUF9jMXxQX2Fzb3xQ
X29zb3xQX3JleHd8UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAwNCAqLyAgeyBVRF9JYW5k
LCAgICAgICAgIE9fRXYsICAgIE9fSXosICAgIE9fTk9ORSwgIFBfYzF8UF9hc298UF9vc298UF9y
ZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDUgKi8gIHsgVURfSXN1YiwgICAgICAg
ICBPX0V2LCAgICBPX0l6LCAgICBPX05PTkUsICBQX2MxfFBfYXNvfFBfb3NvfFBfcmV4d3xQX3Jl
eHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDA2ICovICB7IFVEX0l4b3IsICAgICAgICAgT19Fdiwg
ICAgT19JeiwgICAgT19OT05FLCAgUF9jMXxQX2Fzb3xQX29zb3xQX3JleHd8UF9yZXhyfFBfcmV4
eHxQX3JleGIgfSwKICAvKiAwNyAqLyAgeyBVRF9JY21wLCAgICAgICAgIE9fRXYsICAgIE9fSXos
ICAgIE9fTk9ORSwgIFBfYzF8UF9hc298UF9vc298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhi
IH0sCn07CgpzdGF0aWMgc3RydWN0IHVkX2l0YWJfZW50cnkgaXRhYl9fMWJ5dGVfX29wXzgyX19y
ZWdbOF0gPSB7CiAgLyogMDAgKi8gIHsgVURfSWFkZCwgICAgICAgICBPX0ViLCAgICBPX0liLCAg
ICBPX05PTkUsICBQX2MxfFBfaW52NjR8UF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAv
KiAwMSAqLyAgeyBVRF9Jb3IsICAgICAgICAgIE9fRWIsICAgIE9fSWIsICAgIE9fTk9ORSwgIFBf
YzF8UF9pbnY2NHxQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDAyICovICB7IFVE
X0lhZGMsICAgICAgICAgT19FYiwgICAgT19JYiwgICAgT19OT05FLCAgUF9jMXxQX2ludjY0fFBf
YXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDMgKi8gIHsgVURfSXNiYiwgICAgICAg
ICBPX0ViLCAgICBPX0liLCAgICBPX05PTkUsICBQX2MxfFBfaW52NjR8UF9hc298UF9yZXhyfFBf
cmV4eHxQX3JleGIgfSwKICAvKiAwNCAqLyAgeyBVRF9JYW5kLCAgICAgICAgIE9fRWIsICAgIE9f
SWIsICAgIE9fTk9ORSwgIFBfYzF8UF9pbnY2NHxQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9
LAogIC8qIDA1ICovICB7IFVEX0lzdWIsICAgICAgICAgT19FYiwgICAgT19JYiwgICAgT19OT05F
LCAgUF9jMXxQX2ludjY0fFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDYgKi8g
IHsgVURfSXhvciwgICAgICAgICBPX0ViLCAgICBPX0liLCAgICBPX05PTkUsICBQX2MxfFBfaW52
NjR8UF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAwNyAqLyAgeyBVRF9JY21wLCAg
ICAgICAgIE9fRWIsICAgIE9fSWIsICAgIE9fTk9ORSwgIFBfYzF8UF9pbnY2NHxQX2Fzb3xQX3Jl
eHJ8UF9yZXh4fFBfcmV4YiB9LAp9OwoKc3RhdGljIHN0cnVjdCB1ZF9pdGFiX2VudHJ5IGl0YWJf
XzFieXRlX19vcF84M19fcmVnWzhdID0gewogIC8qIDAwICovICB7IFVEX0lhZGQsICAgICAgICAg
T19FdiwgICAgT19JYiwgICAgT19OT05FLCAgUF9jMXxQX2Fzb3xQX29zb3xQX3JleHd8UF9yZXhy
fFBfcmV4eHxQX3JleGIgfSwKICAvKiAwMSAqLyAgeyBVRF9Jb3IsICAgICAgICAgIE9fRXYsICAg
IE9fSWIsICAgIE9fTk9ORSwgIFBfYzF8UF9hc298UF9vc298UF9yZXh3fFBfcmV4cnxQX3JleHh8
UF9yZXhiIH0sCiAgLyogMDIgKi8gIHsgVURfSWFkYywgICAgICAgICBPX0V2LCAgICBPX0liLCAg
ICBPX05PTkUsICBQX2MxfFBfYXNvfFBfb3NvfFBfcmV4d3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9
LAogIC8qIDAzICovICB7IFVEX0lzYmIsICAgICAgICAgT19FdiwgICAgT19JYiwgICAgT19OT05F
LCAgUF9jMXxQX2Fzb3xQX29zb3xQX3JleHd8UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAw
NCAqLyAgeyBVRF9JYW5kLCAgICAgICAgIE9fRXYsICAgIE9fSWIsICAgIE9fTk9ORSwgIFBfYzF8
UF9hc298UF9vc298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDUgKi8gIHsg
VURfSXN1YiwgICAgICAgICBPX0V2LCAgICBPX0liLCAgICBPX05PTkUsICBQX2MxfFBfYXNvfFBf
b3NvfFBfcmV4d3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDA2ICovICB7IFVEX0l4b3Is
ICAgICAgICAgT19FdiwgICAgT19JYiwgICAgT19OT05FLCAgUF9jMXxQX2Fzb3xQX29zb3xQX3Jl
eHd8UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAwNyAqLyAgeyBVRF9JY21wLCAgICAgICAg
IE9fRXYsICAgIE9fSWIsICAgIE9fTk9ORSwgIFBfYzF8UF9hc298UF9vc298UF9yZXh3fFBfcmV4
cnxQX3JleHh8UF9yZXhiIH0sCn07CgpzdGF0aWMgc3RydWN0IHVkX2l0YWJfZW50cnkgaXRhYl9f
MWJ5dGVfX29wXzhmX19yZWdbOF0gPSB7CiAgLyogMDAgKi8gIHsgVURfSXBvcCwgICAgICAgICBP
X0V2LCAgICBPX05PTkUsICBPX05PTkUsICBQX2MxfFBfZGVmNjR8UF9kZXBNfFBfYXNvfFBfb3Nv
fFBfcmV4d3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDAxICovICB7IFVEX0lpbnZhbGlk
LCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMDIgKi8gIHsg
VURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAv
KiAwMyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBf
bm9uZSB9LAogIC8qIDA0ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9f
Tk9ORSwgICAgUF9ub25lIH0sCiAgLyogMDUgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUs
IE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAwNiAqLyAgeyBVRF9JaW52YWxpZCwg
ICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDA3ICovICB7IFVE
X0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCn07Cgpz
dGF0aWMgc3RydWN0IHVkX2l0YWJfZW50cnkgaXRhYl9fMWJ5dGVfX29wXzk4X19vc2l6ZVszXSA9
IHsKICAvKiAwMCAqLyAgeyBVRF9JY2J3LCAgICAgICAgIE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9O
RSwgIFBfb3NvfFBfcmV4dyB9LAogIC8qIDAxICovICB7IFVEX0ljd2RlLCAgICAgICAgT19OT05F
LCAgT19OT05FLCAgT19OT05FLCAgUF9vc298UF9yZXh3IH0sCiAgLyogMDIgKi8gIHsgVURfSWNk
cWUsICAgICAgICBPX05PTkUsICBPX05PTkUsICBPX05PTkUsICBQX29zb3xQX3JleHcgfSwKfTsK
CnN0YXRpYyBzdHJ1Y3QgdWRfaXRhYl9lbnRyeSBpdGFiX18xYnl0ZV9fb3BfOTlfX29zaXplWzNd
ID0gewogIC8qIDAwICovICB7IFVEX0ljd2QsICAgICAgICAgT19OT05FLCAgT19OT05FLCAgT19O
T05FLCAgUF9vc298UF9yZXh3IH0sCiAgLyogMDEgKi8gIHsgVURfSWNkcSwgICAgICAgICBPX05P
TkUsICBPX05PTkUsICBPX05PTkUsICBQX29zb3xQX3JleHcgfSwKICAvKiAwMiAqLyAgeyBVRF9J
Y3FvLCAgICAgICAgIE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBfb3NvfFBfcmV4dyB9LAp9
OwoKc3RhdGljIHN0cnVjdCB1ZF9pdGFiX2VudHJ5IGl0YWJfXzFieXRlX19vcF85Y19fbW9kZVsz
XSA9IHsKICAvKiAwMCAqLyAgeyBVRF9JZ3JwX29zaXplLCAgIE9fTk9ORSwgT19OT05FLCBPX05P
TkUsICAgIElUQUJfXzFCWVRFX19PUF85Q19fTU9ERV9fT1BfMDBfX09TSVpFIH0sCiAgLyogMDEg
Ki8gIHsgVURfSWdycF9vc2l6ZSwgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBJVEFCX18x
QllURV9fT1BfOUNfX01PREVfX09QXzAxX19PU0laRSB9LAogIC8qIDAyICovICB7IFVEX0lwdXNo
ZnEsICAgICAgT19OT05FLCAgT19OT05FLCAgT19OT05FLCAgUF9kZWY2NHxQX29zb3xQX3JleHcg
fSwKfTsKCnN0YXRpYyBzdHJ1Y3QgdWRfaXRhYl9lbnRyeSBpdGFiX18xYnl0ZV9fb3BfOWNfX21v
ZGVfX29wXzAwX19vc2l6ZVszXSA9IHsKICAvKiAwMCAqLyAgeyBVRF9JcHVzaGZ3LCAgICAgIE9f
Tk9ORSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBfZGVmNjR8UF9vc28gfSwKICAvKiAwMSAqLyAgeyBV
RF9JcHVzaGZkLCAgICAgIE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBfZGVmNjR8UF9vc28g
fSwKICAvKiAwMiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUs
ICAgIFBfbm9uZSB9LAp9OwoKc3RhdGljIHN0cnVjdCB1ZF9pdGFiX2VudHJ5IGl0YWJfXzFieXRl
X19vcF85Y19fbW9kZV9fb3BfMDFfX29zaXplWzNdID0gewogIC8qIDAwICovICB7IFVEX0lwdXNo
ZncsICAgICAgT19OT05FLCAgT19OT05FLCAgT19OT05FLCAgUF9kZWY2NHxQX29zbyB9LAogIC8q
IDAxICovICB7IFVEX0lwdXNoZmQsICAgICAgT19OT05FLCAgT19OT05FLCAgT19OT05FLCAgUF9k
ZWY2NHxQX29zbyB9LAogIC8qIDAyICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05P
TkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCn07CgpzdGF0aWMgc3RydWN0IHVkX2l0YWJfZW50cnkg
aXRhYl9fMWJ5dGVfX29wXzlkX19tb2RlWzNdID0gewogIC8qIDAwICovICB7IFVEX0lncnBfb3Np
emUsICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgSVRBQl9fMUJZVEVfX09QXzlEX19NT0RF
X19PUF8wMF9fT1NJWkUgfSwKICAvKiAwMSAqLyAgeyBVRF9JZ3JwX29zaXplLCAgIE9fTk9ORSwg
T19OT05FLCBPX05PTkUsICAgIElUQUJfXzFCWVRFX19PUF85RF9fTU9ERV9fT1BfMDFfX09TSVpF
IH0sCiAgLyogMDIgKi8gIHsgVURfSXBvcGZxLCAgICAgICBPX05PTkUsICBPX05PTkUsICBPX05P
TkUsICBQX2RlZjY0fFBfZGVwTXxQX29zbyB9LAp9OwoKc3RhdGljIHN0cnVjdCB1ZF9pdGFiX2Vu
dHJ5IGl0YWJfXzFieXRlX19vcF85ZF9fbW9kZV9fb3BfMDBfX29zaXplWzNdID0gewogIC8qIDAw
ICovICB7IFVEX0lwb3BmdywgICAgICAgT19OT05FLCAgT19OT05FLCAgT19OT05FLCAgUF9kZWY2
NHxQX2RlcE18UF9vc28gfSwKICAvKiAwMSAqLyAgeyBVRF9JcG9wZmQsICAgICAgIE9fTk9ORSwg
IE9fTk9ORSwgIE9fTk9ORSwgIFBfZGVmNjR8UF9kZXBNfFBfb3NvIH0sCiAgLyogMDIgKi8gIHsg
VURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKfTsK
CnN0YXRpYyBzdHJ1Y3QgdWRfaXRhYl9lbnRyeSBpdGFiX18xYnl0ZV9fb3BfOWRfX21vZGVfX29w
XzAxX19vc2l6ZVszXSA9IHsKICAvKiAwMCAqLyAgeyBVRF9JcG9wZncsICAgICAgIE9fTk9ORSwg
IE9fTk9ORSwgIE9fTk9ORSwgIFBfZGVmNjR8UF9kZXBNfFBfb3NvIH0sCiAgLyogMDEgKi8gIHsg
VURfSXBvcGZkLCAgICAgICBPX05PTkUsICBPX05PTkUsICBPX05PTkUsICBQX2RlZjY0fFBfZGVw
TXxQX29zbyB9LAogIC8qIDAyICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUs
IE9fTk9ORSwgICAgUF9ub25lIH0sCn07CgpzdGF0aWMgc3RydWN0IHVkX2l0YWJfZW50cnkgaXRh
Yl9fMWJ5dGVfX29wX2E1X19vc2l6ZVszXSA9IHsKICAvKiAwMCAqLyAgeyBVRF9JbW92c3csICAg
ICAgIE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBfSW1wQWRkcnxQX29zb3xQX3JleHcgfSwK
ICAvKiAwMSAqLyAgeyBVRF9JbW92c2QsICAgICAgIE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9ORSwg
IFBfSW1wQWRkcnxQX29zb3xQX3JleHcgfSwKICAvKiAwMiAqLyAgeyBVRF9JbW92c3EsICAgICAg
IE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBfSW1wQWRkcnxQX29zb3xQX3JleHcgfSwKfTsK
CnN0YXRpYyBzdHJ1Y3QgdWRfaXRhYl9lbnRyeSBpdGFiX18xYnl0ZV9fb3BfYTdfX29zaXplWzNd
ID0gewogIC8qIDAwICovICB7IFVEX0ljbXBzdywgICAgICAgT19OT05FLCAgT19OT05FLCAgT19O
T05FLCAgUF9vc298UF9yZXh3IH0sCiAgLyogMDEgKi8gIHsgVURfSWNtcHNkLCAgICAgICBPX05P
TkUsICBPX05PTkUsICBPX05PTkUsICBQX29zb3xQX3JleHcgfSwKICAvKiAwMiAqLyAgeyBVRF9J
Y21wc3EsICAgICAgIE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBfb3NvfFBfcmV4dyB9LAp9
OwoKc3RhdGljIHN0cnVjdCB1ZF9pdGFiX2VudHJ5IGl0YWJfXzFieXRlX19vcF9hYl9fb3NpemVb
M10gPSB7CiAgLyogMDAgKi8gIHsgVURfSXN0b3N3LCAgICAgICBPX05PTkUsICBPX05PTkUsICBP
X05PTkUsICBQX0ltcEFkZHJ8UF9vc298UF9yZXh3IH0sCiAgLyogMDEgKi8gIHsgVURfSXN0b3Nk
LCAgICAgICBPX05PTkUsICBPX05PTkUsICBPX05PTkUsICBQX0ltcEFkZHJ8UF9vc298UF9yZXh3
IH0sCiAgLyogMDIgKi8gIHsgVURfSXN0b3NxLCAgICAgICBPX05PTkUsICBPX05PTkUsICBPX05P
TkUsICBQX0ltcEFkZHJ8UF9vc298UF9yZXh3IH0sCn07CgpzdGF0aWMgc3RydWN0IHVkX2l0YWJf
ZW50cnkgaXRhYl9fMWJ5dGVfX29wX2FkX19vc2l6ZVszXSA9IHsKICAvKiAwMCAqLyAgeyBVRF9J
bG9kc3csICAgICAgIE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBfSW1wQWRkcnxQX29zb3xQ
X3JleHcgfSwKICAvKiAwMSAqLyAgeyBVRF9JbG9kc2QsICAgICAgIE9fTk9ORSwgIE9fTk9ORSwg
IE9fTk9ORSwgIFBfSW1wQWRkcnxQX29zb3xQX3JleHcgfSwKICAvKiAwMiAqLyAgeyBVRF9JbG9k
c3EsICAgICAgIE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBfSW1wQWRkcnxQX29zb3xQX3Jl
eHcgfSwKfTsKCnN0YXRpYyBzdHJ1Y3QgdWRfaXRhYl9lbnRyeSBpdGFiX18xYnl0ZV9fb3BfYWVf
X21vZFsyXSA9IHsKICAvKiAwMCAqLyAgeyBVRF9JZ3JwX3JlZywgICAgIE9fTk9ORSwgT19OT05F
LCBPX05PTkUsICAgIElUQUJfXzFCWVRFX19PUF9BRV9fTU9EX19PUF8wMF9fUkVHIH0sCiAgLyog
MDEgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25v
bmUgfSwKfTsKCnN0YXRpYyBzdHJ1Y3QgdWRfaXRhYl9lbnRyeSBpdGFiX18xYnl0ZV9fb3BfYWVf
X21vZF9fb3BfMDBfX3JlZ1s4XSA9IHsKICAvKiAwMCAqLyAgeyBVRF9JZnhzYXZlLCAgICAgIE9f
TSwgICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfYXNvfFBfcmV4d3xQX3JleHJ8UF9yZXh4fFBfcmV4
YiB9LAogIC8qIDAxICovICB7IFVEX0lmeHJzdG9yLCAgICAgT19NLCAgICAgT19OT05FLCAgT19O
T05FLCAgUF9hc298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDIgKi8gIHsg
VURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAv
KiAwMyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBf
bm9uZSB9LAogIC8qIDA0ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9f
Tk9ORSwgICAgUF9ub25lIH0sCiAgLyogMDUgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUs
IE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAwNiAqLyAgeyBVRF9JaW52YWxpZCwg
ICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDA3ICovICB7IFVE
X0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCn07Cgpz
dGF0aWMgc3RydWN0IHVkX2l0YWJfZW50cnkgaXRhYl9fMWJ5dGVfX29wX2FmX19vc2l6ZVszXSA9
IHsKICAvKiAwMCAqLyAgeyBVRF9Jc2Nhc3csICAgICAgIE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9O
RSwgIFBfb3NvfFBfcmV4dyB9LAogIC8qIDAxICovICB7IFVEX0lzY2FzZCwgICAgICAgT19OT05F
LCAgT19OT05FLCAgT19OT05FLCAgUF9vc298UF9yZXh3IH0sCiAgLyogMDIgKi8gIHsgVURfSXNj
YXNxLCAgICAgICBPX05PTkUsICBPX05PTkUsICBPX05PTkUsICBQX29zb3xQX3JleHcgfSwKfTsK
CnN0YXRpYyBzdHJ1Y3QgdWRfaXRhYl9lbnRyeSBpdGFiX18xYnl0ZV9fb3BfYzBfX3JlZ1s4XSA9
IHsKICAvKiAwMCAqLyAgeyBVRF9Jcm9sLCAgICAgICAgIE9fRWIsICAgIE9fSWIsICAgIE9fTk9O
RSwgIFBfYzF8UF9hc298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDEgKi8g
IHsgVURfSXJvciwgICAgICAgICBPX0ViLCAgICBPX0liLCAgICBPX05PTkUsICBQX2MxfFBfYXNv
fFBfcmV4d3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDAyICovICB7IFVEX0lyY2wsICAg
ICAgICAgT19FYiwgICAgT19JYiwgICAgT19OT05FLCAgUF9jMXxQX2Fzb3xQX3JleHd8UF9yZXhy
fFBfcmV4eHxQX3JleGIgfSwKICAvKiAwMyAqLyAgeyBVRF9JcmNyLCAgICAgICAgIE9fRWIsICAg
IE9fSWIsICAgIE9fTk9ORSwgIFBfYzF8UF9hc298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhi
IH0sCiAgLyogMDQgKi8gIHsgVURfSXNobCwgICAgICAgICBPX0ViLCAgICBPX0liLCAgICBPX05P
TkUsICBQX2MxfFBfYXNvfFBfcmV4d3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDA1ICov
ICB7IFVEX0lzaHIsICAgICAgICAgT19FYiwgICAgT19JYiwgICAgT19OT05FLCAgUF9jMXxQX2Fz
b3xQX3JleHd8UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAwNiAqLyAgeyBVRF9Jc2hsLCAg
ICAgICAgIE9fRWIsICAgIE9fSWIsICAgIE9fTk9ORSwgIFBfYzF8UF9hc298UF9yZXh3fFBfcmV4
cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDcgKi8gIHsgVURfSXNhciwgICAgICAgICBPX0ViLCAg
ICBPX0liLCAgICBPX05PTkUsICBQX2MxfFBfYXNvfFBfcmV4d3xQX3JleHJ8UF9yZXh4fFBfcmV4
YiB9LAp9OwoKc3RhdGljIHN0cnVjdCB1ZF9pdGFiX2VudHJ5IGl0YWJfXzFieXRlX19vcF9jMV9f
cmVnWzhdID0gewogIC8qIDAwICovICB7IFVEX0lyb2wsICAgICAgICAgT19FdiwgICAgT19JYiwg
ICAgT19OT05FLCAgUF9jMXxQX2Fzb3xQX29zb3xQX3JleHd8UF9yZXhyfFBfcmV4eHxQX3JleGIg
fSwKICAvKiAwMSAqLyAgeyBVRF9Jcm9yLCAgICAgICAgIE9fRXYsICAgIE9fSWIsICAgIE9fTk9O
RSwgIFBfYzF8UF9hc298UF9vc298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyog
MDIgKi8gIHsgVURfSXJjbCwgICAgICAgICBPX0V2LCAgICBPX0liLCAgICBPX05PTkUsICBQX2Mx
fFBfYXNvfFBfb3NvfFBfcmV4d3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDAzICovICB7
IFVEX0lyY3IsICAgICAgICAgT19FdiwgICAgT19JYiwgICAgT19OT05FLCAgUF9jMXxQX2Fzb3xQ
X29zb3xQX3JleHd8UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAwNCAqLyAgeyBVRF9Jc2hs
LCAgICAgICAgIE9fRXYsICAgIE9fSWIsICAgIE9fTk9ORSwgIFBfYzF8UF9hc298UF9vc298UF9y
ZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDUgKi8gIHsgVURfSXNociwgICAgICAg
ICBPX0V2LCAgICBPX0liLCAgICBPX05PTkUsICBQX2MxfFBfYXNvfFBfb3NvfFBfcmV4d3xQX3Jl
eHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDA2ICovICB7IFVEX0lzaGwsICAgICAgICAgT19Fdiwg
ICAgT19JYiwgICAgT19OT05FLCAgUF9jMXxQX2Fzb3xQX29zb3xQX3JleHd8UF9yZXhyfFBfcmV4
eHxQX3JleGIgfSwKICAvKiAwNyAqLyAgeyBVRF9Jc2FyLCAgICAgICAgIE9fRXYsICAgIE9fSWIs
ICAgIE9fTk9ORSwgIFBfYzF8UF9hc298UF9vc298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhi
IH0sCn07CgpzdGF0aWMgc3RydWN0IHVkX2l0YWJfZW50cnkgaXRhYl9fMWJ5dGVfX29wX2M2X19y
ZWdbOF0gPSB7CiAgLyogMDAgKi8gIHsgVURfSW1vdiwgICAgICAgICBPX0ViLCAgICBPX0liLCAg
ICBPX05PTkUsICBQX2MxfFBfYXNvfFBfcmV4d3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8q
IDAxICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9u
b25lIH0sCiAgLyogMDIgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19O
T05FLCAgICBQX25vbmUgfSwKICAvKiAwMyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwg
T19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDA0ICovICB7IFVEX0lpbnZhbGlkLCAg
ICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMDUgKi8gIHsgVURf
SWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAw
NiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9u
ZSB9LAogIC8qIDA3ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9O
RSwgICAgUF9ub25lIH0sCn07CgpzdGF0aWMgc3RydWN0IHVkX2l0YWJfZW50cnkgaXRhYl9fMWJ5
dGVfX29wX2M3X19yZWdbOF0gPSB7CiAgLyogMDAgKi8gIHsgVURfSW1vdiwgICAgICAgICBPX0V2
LCAgICBPX0l6LCAgICBPX05PTkUsICBQX2MxfFBfYXNvfFBfb3NvfFBfcmV4d3xQX3JleHJ8UF9y
ZXh4fFBfcmV4YiB9LAogIC8qIDAxICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05P
TkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMDIgKi8gIHsgVURfSWludmFsaWQsICAgICBP
X05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAwMyAqLyAgeyBVRF9JaW52
YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDA0ICov
ICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0s
CiAgLyogMDUgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAg
ICBQX25vbmUgfSwKICAvKiAwNiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05F
LCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDA3ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19O
T05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCn07CgpzdGF0aWMgc3RydWN0IHVkX2l0
YWJfZW50cnkgaXRhYl9fMWJ5dGVfX29wX2NmX19vc2l6ZVszXSA9IHsKICAvKiAwMCAqLyAgeyBV
RF9JaXJldHcsICAgICAgIE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBfb3NvfFBfcmV4dyB9
LAogIC8qIDAxICovICB7IFVEX0lpcmV0ZCwgICAgICAgT19OT05FLCAgT19OT05FLCAgT19OT05F
LCAgUF9vc298UF9yZXh3IH0sCiAgLyogMDIgKi8gIHsgVURfSWlyZXRxLCAgICAgICBPX05PTkUs
ICBPX05PTkUsICBPX05PTkUsICBQX29zb3xQX3JleHcgfSwKfTsKCnN0YXRpYyBzdHJ1Y3QgdWRf
aXRhYl9lbnRyeSBpdGFiX18xYnl0ZV9fb3BfZDBfX3JlZ1s4XSA9IHsKICAvKiAwMCAqLyAgeyBV
RF9Jcm9sLCAgICAgICAgIE9fRWIsICAgIE9fSTEsICAgIE9fTk9ORSwgIFBfYzF8UF9hc298UF9y
ZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDEgKi8gIHsgVURfSXJvciwgICAgICAg
ICBPX0ViLCAgICBPX0kxLCAgICBPX05PTkUsICBQX2MxfFBfYXNvfFBfcmV4d3xQX3JleHJ8UF9y
ZXh4fFBfcmV4YiB9LAogIC8qIDAyICovICB7IFVEX0lyY2wsICAgICAgICAgT19FYiwgICAgT19J
MSwgICAgT19OT05FLCAgUF9jMXxQX2Fzb3xQX3JleHd8UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwK
ICAvKiAwMyAqLyAgeyBVRF9JcmNyLCAgICAgICAgIE9fRWIsICAgIE9fSTEsICAgIE9fTk9ORSwg
IFBfYzF8UF9hc298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDQgKi8gIHsg
VURfSXNobCwgICAgICAgICBPX0ViLCAgICBPX0kxLCAgICBPX05PTkUsICBQX2MxfFBfYXNvfFBf
cmV4d3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDA1ICovICB7IFVEX0lzaHIsICAgICAg
ICAgT19FYiwgICAgT19JMSwgICAgT19OT05FLCAgUF9jMXxQX2Fzb3xQX3JleHd8UF9yZXhyfFBf
cmV4eHxQX3JleGIgfSwKICAvKiAwNiAqLyAgeyBVRF9Jc2hsLCAgICAgICAgIE9fRWIsICAgIE9f
STEsICAgIE9fTk9ORSwgIFBfYzF8UF9hc298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0s
CiAgLyogMDcgKi8gIHsgVURfSXNhciwgICAgICAgICBPX0ViLCAgICBPX0kxLCAgICBPX05PTkUs
ICBQX2MxfFBfYXNvfFBfcmV4d3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAp9OwoKc3RhdGljIHN0
cnVjdCB1ZF9pdGFiX2VudHJ5IGl0YWJfXzFieXRlX19vcF9kMV9fcmVnWzhdID0gewogIC8qIDAw
ICovICB7IFVEX0lyb2wsICAgICAgICAgT19FdiwgICAgT19JMSwgICAgT19OT05FLCAgUF9jMXxQ
X2Fzb3xQX29zb3xQX3JleHd8UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAwMSAqLyAgeyBV
RF9Jcm9yLCAgICAgICAgIE9fRXYsICAgIE9fSTEsICAgIE9fTk9ORSwgIFBfYzF8UF9hc298UF9v
c298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDIgKi8gIHsgVURfSXJjbCwg
ICAgICAgICBPX0V2LCAgICBPX0kxLCAgICBPX05PTkUsICBQX2MxfFBfYXNvfFBfb3NvfFBfcmV4
d3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDAzICovICB7IFVEX0lyY3IsICAgICAgICAg
T19FdiwgICAgT19JMSwgICAgT19OT05FLCAgUF9jMXxQX2Fzb3xQX29zb3xQX3JleHd8UF9yZXhy
fFBfcmV4eHxQX3JleGIgfSwKICAvKiAwNCAqLyAgeyBVRF9Jc2hsLCAgICAgICAgIE9fRXYsICAg
IE9fSTEsICAgIE9fTk9ORSwgIFBfYzF8UF9hc298UF9vc298UF9yZXh3fFBfcmV4cnxQX3JleHh8
UF9yZXhiIH0sCiAgLyogMDUgKi8gIHsgVURfSXNociwgICAgICAgICBPX0V2LCAgICBPX0kxLCAg
ICBPX05PTkUsICBQX2MxfFBfYXNvfFBfb3NvfFBfcmV4d3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9
LAogIC8qIDA2ICovICB7IFVEX0lzaGwsICAgICAgICAgT19FdiwgICAgT19JMSwgICAgT19OT05F
LCAgUF9jMXxQX2Fzb3xQX29zb3xQX3JleHd8UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAw
NyAqLyAgeyBVRF9Jc2FyLCAgICAgICAgIE9fRXYsICAgIE9fSTEsICAgIE9fTk9ORSwgIFBfYzF8
UF9hc298UF9vc298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCn07CgpzdGF0aWMgc3Ry
dWN0IHVkX2l0YWJfZW50cnkgaXRhYl9fMWJ5dGVfX29wX2QyX19yZWdbOF0gPSB7CiAgLyogMDAg
Ki8gIHsgVURfSXJvbCwgICAgICAgICBPX0ViLCAgICBPX0NMLCAgICBPX05PTkUsICBQX2MxfFBf
YXNvfFBfcmV4d3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDAxICovICB7IFVEX0lyb3Is
ICAgICAgICAgT19FYiwgICAgT19DTCwgICAgT19OT05FLCAgUF9jMXxQX2Fzb3xQX3JleHd8UF9y
ZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAwMiAqLyAgeyBVRF9JcmNsLCAgICAgICAgIE9fRWIs
ICAgIE9fQ0wsICAgIE9fTk9ORSwgIFBfYzF8UF9hc298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9y
ZXhiIH0sCiAgLyogMDMgKi8gIHsgVURfSXJjciwgICAgICAgICBPX0ViLCAgICBPX0NMLCAgICBP
X05PTkUsICBQX2MxfFBfYXNvfFBfcmV4d3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDA0
ICovICB7IFVEX0lzaGwsICAgICAgICAgT19FYiwgICAgT19DTCwgICAgT19OT05FLCAgUF9hc298
UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAwNSAqLyAgeyBVRF9Jc2hyLCAgICAgICAgIE9f
RWIsICAgIE9fQ0wsICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4d3xQX3JleHJ8UF9yZXh4fFBfcmV4
YiB9LAogIC8qIDA2ICovICB7IFVEX0lzaGwsICAgICAgICAgT19FYiwgICAgT19DTCwgICAgT19O
T05FLCAgUF9hc298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDcgKi8gIHsg
VURfSXNhciwgICAgICAgICBPX0ViLCAgICBPX0NMLCAgICBPX05PTkUsICBQX2Fzb3xQX3JleHd8
UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKfTsKCnN0YXRpYyBzdHJ1Y3QgdWRfaXRhYl9lbnRyeSBp
dGFiX18xYnl0ZV9fb3BfZDNfX3JlZ1s4XSA9IHsKICAvKiAwMCAqLyAgeyBVRF9Jcm9sLCAgICAg
ICAgIE9fRXYsICAgIE9fQ0wsICAgIE9fTk9ORSwgIFBfYzF8UF9hc298UF9vc298UF9yZXh3fFBf
cmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDEgKi8gIHsgVURfSXJvciwgICAgICAgICBPX0V2
LCAgICBPX0NMLCAgICBPX05PTkUsICBQX2MxfFBfYXNvfFBfb3NvfFBfcmV4d3xQX3JleHJ8UF9y
ZXh4fFBfcmV4YiB9LAogIC8qIDAyICovICB7IFVEX0lyY2wsICAgICAgICAgT19FdiwgICAgT19D
TCwgICAgT19OT05FLCAgUF9jMXxQX2Fzb3xQX29zb3xQX3JleHd8UF9yZXhyfFBfcmV4eHxQX3Jl
eGIgfSwKICAvKiAwMyAqLyAgeyBVRF9JcmNyLCAgICAgICAgIE9fRXYsICAgIE9fQ0wsICAgIE9f
Tk9ORSwgIFBfYzF8UF9hc298UF9vc298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAg
LyogMDQgKi8gIHsgVURfSXNobCwgICAgICAgICBPX0V2LCAgICBPX0NMLCAgICBPX05PTkUsICBQ
X2MxfFBfYXNvfFBfb3NvfFBfcmV4d3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDA1ICov
ICB7IFVEX0lzaHIsICAgICAgICAgT19FdiwgICAgT19DTCwgICAgT19OT05FLCAgUF9jMXxQX2Fz
b3xQX29zb3xQX3JleHd8UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAwNiAqLyAgeyBVRF9J
c2hsLCAgICAgICAgIE9fRXYsICAgIE9fQ0wsICAgIE9fTk9ORSwgIFBfYzF8UF9hc298UF9vc298
UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDcgKi8gIHsgVURfSXNhciwgICAg
ICAgICBPX0V2LCAgICBPX0NMLCAgICBPX05PTkUsICBQX2MxfFBfYXNvfFBfb3NvfFBfcmV4d3xQ
X3JleHJ8UF9yZXh4fFBfcmV4YiB9LAp9OwoKc3RhdGljIHN0cnVjdCB1ZF9pdGFiX2VudHJ5IGl0
YWJfXzFieXRlX19vcF9kOF9fbW9kWzJdID0gewogIC8qIDAwICovICB7IFVEX0lncnBfcmVnLCAg
ICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgSVRBQl9fMUJZVEVfX09QX0Q4X19NT0RfX09Q
XzAwX19SRUcgfSwKICAvKiAwMSAqLyAgeyBVRF9JZ3JwX3g4NywgICAgIE9fTk9ORSwgT19OT05F
LCBPX05PTkUsICAgIElUQUJfXzFCWVRFX19PUF9EOF9fTU9EX19PUF8wMV9fWDg3IH0sCn07Cgpz
dGF0aWMgc3RydWN0IHVkX2l0YWJfZW50cnkgaXRhYl9fMWJ5dGVfX29wX2Q4X19tb2RfX29wXzAw
X19yZWdbOF0gPSB7CiAgLyogMDAgKi8gIHsgVURfSWZhZGQsICAgICAgICBPX01kLCAgICBPX05P
TkUsICBPX05PTkUsICBQX2MxfFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDEg
Ki8gIHsgVURfSWZtdWwsICAgICAgICBPX01kLCAgICBPX05PTkUsICBPX05PTkUsICBQX2MxfFBf
YXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDIgKi8gIHsgVURfSWZjb20sICAgICAg
ICBPX01kLCAgICBPX05PTkUsICBPX05PTkUsICBQX2MxfFBfYXNvfFBfcmV4cnxQX3JleHh8UF9y
ZXhiIH0sCiAgLyogMDMgKi8gIHsgVURfSWZjb21wLCAgICAgICBPX01kLCAgICBPX05PTkUsICBP
X05PTkUsICBQX2MxfFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDQgKi8gIHsg
VURfSWZzdWIsICAgICAgICBPX01kLCAgICBPX05PTkUsICBPX05PTkUsICBQX2MxfFBfYXNvfFBf
cmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDUgKi8gIHsgVURfSWZzdWJyLCAgICAgICBPX01k
LCAgICBPX05PTkUsICBPX05PTkUsICBQX2MxfFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0s
CiAgLyogMDYgKi8gIHsgVURfSWZkaXYsICAgICAgICBPX01kLCAgICBPX05PTkUsICBPX05PTkUs
ICBQX2MxfFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDcgKi8gIHsgVURfSWZk
aXZyLCAgICAgICBPX01kLCAgICBPX05PTkUsICBPX05PTkUsICBQX2MxfFBfYXNvfFBfcmV4cnxQ
X3JleHh8UF9yZXhiIH0sCn07CgpzdGF0aWMgc3RydWN0IHVkX2l0YWJfZW50cnkgaXRhYl9fMWJ5
dGVfX29wX2Q4X19tb2RfX29wXzAxX194ODdbNjRdID0gewogIC8qIDAwICovICB7IFVEX0lmYWRk
LCAgICAgICAgT19TVDAsICAgT19TVDAsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMDEgKi8g
IHsgVURfSWZhZGQsICAgICAgICBPX1NUMCwgICBPX1NUMSwgICBPX05PTkUsICBQX25vbmUgfSwK
ICAvKiAwMiAqLyAgeyBVRF9JZmFkZCwgICAgICAgIE9fU1QwLCAgIE9fU1QyLCAgIE9fTk9ORSwg
IFBfbm9uZSB9LAogIC8qIDAzICovICB7IFVEX0lmYWRkLCAgICAgICAgT19TVDAsICAgT19TVDMs
ICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMDQgKi8gIHsgVURfSWZhZGQsICAgICAgICBPX1NU
MCwgICBPX1NUNCwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAwNSAqLyAgeyBVRF9JZmFkZCwg
ICAgICAgIE9fU1QwLCAgIE9fU1Q1LCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDA2ICovICB7
IFVEX0lmYWRkLCAgICAgICAgT19TVDAsICAgT19TVDYsICAgT19OT05FLCAgUF9ub25lIH0sCiAg
LyogMDcgKi8gIHsgVURfSWZhZGQsICAgICAgICBPX1NUMCwgICBPX1NUNywgICBPX05PTkUsICBQ
X25vbmUgfSwKICAvKiAwOCAqLyAgeyBVRF9JZm11bCwgICAgICAgIE9fU1QwLCAgIE9fU1QwLCAg
IE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDA5ICovICB7IFVEX0lmbXVsLCAgICAgICAgT19TVDAs
ICAgT19TVDEsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMEEgKi8gIHsgVURfSWZtdWwsICAg
ICAgICBPX1NUMCwgICBPX1NUMiwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAwQiAqLyAgeyBV
RF9JZm11bCwgICAgICAgIE9fU1QwLCAgIE9fU1QzLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8q
IDBDICovICB7IFVEX0lmbXVsLCAgICAgICAgT19TVDAsICAgT19TVDQsICAgT19OT05FLCAgUF9u
b25lIH0sCiAgLyogMEQgKi8gIHsgVURfSWZtdWwsICAgICAgICBPX1NUMCwgICBPX1NUNSwgICBP
X05PTkUsICBQX25vbmUgfSwKICAvKiAwRSAqLyAgeyBVRF9JZm11bCwgICAgICAgIE9fU1QwLCAg
IE9fU1Q2LCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDBGICovICB7IFVEX0lmbXVsLCAgICAg
ICAgT19TVDAsICAgT19TVDcsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMTAgKi8gIHsgVURf
SWZjb20sICAgICAgICBPX1NUMCwgICBPX1NUMCwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAx
MSAqLyAgeyBVRF9JZmNvbSwgICAgICAgIE9fU1QwLCAgIE9fU1QxLCAgIE9fTk9ORSwgIFBfbm9u
ZSB9LAogIC8qIDEyICovICB7IFVEX0lmY29tLCAgICAgICAgT19TVDAsICAgT19TVDIsICAgT19O
T05FLCAgUF9ub25lIH0sCiAgLyogMTMgKi8gIHsgVURfSWZjb20sICAgICAgICBPX1NUMCwgICBP
X1NUMywgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAxNCAqLyAgeyBVRF9JZmNvbSwgICAgICAg
IE9fU1QwLCAgIE9fU1Q0LCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDE1ICovICB7IFVEX0lm
Y29tLCAgICAgICAgT19TVDAsICAgT19TVDUsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMTYg
Ki8gIHsgVURfSWZjb20sICAgICAgICBPX1NUMCwgICBPX1NUNiwgICBPX05PTkUsICBQX25vbmUg
fSwKICAvKiAxNyAqLyAgeyBVRF9JZmNvbSwgICAgICAgIE9fU1QwLCAgIE9fU1Q3LCAgIE9fTk9O
RSwgIFBfbm9uZSB9LAogIC8qIDE4ICovICB7IFVEX0lmY29tcCwgICAgICAgT19TVDAsICAgT19T
VDAsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMTkgKi8gIHsgVURfSWZjb21wLCAgICAgICBP
X1NUMCwgICBPX1NUMSwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAxQSAqLyAgeyBVRF9JZmNv
bXAsICAgICAgIE9fU1QwLCAgIE9fU1QyLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDFCICov
ICB7IFVEX0lmY29tcCwgICAgICAgT19TVDAsICAgT19TVDMsICAgT19OT05FLCAgUF9ub25lIH0s
CiAgLyogMUMgKi8gIHsgVURfSWZjb21wLCAgICAgICBPX1NUMCwgICBPX1NUNCwgICBPX05PTkUs
ICBQX25vbmUgfSwKICAvKiAxRCAqLyAgeyBVRF9JZmNvbXAsICAgICAgIE9fU1QwLCAgIE9fU1Q1
LCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDFFICovICB7IFVEX0lmY29tcCwgICAgICAgT19T
VDAsICAgT19TVDYsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMUYgKi8gIHsgVURfSWZjb21w
LCAgICAgICBPX1NUMCwgICBPX1NUNywgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAyMCAqLyAg
eyBVRF9JZnN1YiwgICAgICAgIE9fU1QwLCAgIE9fU1QwLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAog
IC8qIDIxICovICB7IFVEX0lmc3ViLCAgICAgICAgT19TVDAsICAgT19TVDEsICAgT19OT05FLCAg
UF9ub25lIH0sCiAgLyogMjIgKi8gIHsgVURfSWZzdWIsICAgICAgICBPX1NUMCwgICBPX1NUMiwg
ICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAyMyAqLyAgeyBVRF9JZnN1YiwgICAgICAgIE9fU1Qw
LCAgIE9fU1QzLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDI0ICovICB7IFVEX0lmc3ViLCAg
ICAgICAgT19TVDAsICAgT19TVDQsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMjUgKi8gIHsg
VURfSWZzdWIsICAgICAgICBPX1NUMCwgICBPX1NUNSwgICBPX05PTkUsICBQX25vbmUgfSwKICAv
KiAyNiAqLyAgeyBVRF9JZnN1YiwgICAgICAgIE9fU1QwLCAgIE9fU1Q2LCAgIE9fTk9ORSwgIFBf
bm9uZSB9LAogIC8qIDI3ICovICB7IFVEX0lmc3ViLCAgICAgICAgT19TVDAsICAgT19TVDcsICAg
T19OT05FLCAgUF9ub25lIH0sCiAgLyogMjggKi8gIHsgVURfSWZzdWJyLCAgICAgICBPX1NUMCwg
ICBPX1NUMCwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAyOSAqLyAgeyBVRF9JZnN1YnIsICAg
ICAgIE9fU1QwLCAgIE9fU1QxLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDJBICovICB7IFVE
X0lmc3ViciwgICAgICAgT19TVDAsICAgT19TVDIsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyog
MkIgKi8gIHsgVURfSWZzdWJyLCAgICAgICBPX1NUMCwgICBPX1NUMywgICBPX05PTkUsICBQX25v
bmUgfSwKICAvKiAyQyAqLyAgeyBVRF9JZnN1YnIsICAgICAgIE9fU1QwLCAgIE9fU1Q0LCAgIE9f
Tk9ORSwgIFBfbm9uZSB9LAogIC8qIDJEICovICB7IFVEX0lmc3ViciwgICAgICAgT19TVDAsICAg
T19TVDUsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMkUgKi8gIHsgVURfSWZzdWJyLCAgICAg
ICBPX1NUMCwgICBPX1NUNiwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAyRiAqLyAgeyBVRF9J
ZnN1YnIsICAgICAgIE9fU1QwLCAgIE9fU1Q3LCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDMw
ICovICB7IFVEX0lmZGl2LCAgICAgICAgT19TVDAsICAgT19TVDAsICAgT19OT05FLCAgUF9ub25l
IH0sCiAgLyogMzEgKi8gIHsgVURfSWZkaXYsICAgICAgICBPX1NUMCwgICBPX1NUMSwgICBPX05P
TkUsICBQX25vbmUgfSwKICAvKiAzMiAqLyAgeyBVRF9JZmRpdiwgICAgICAgIE9fU1QwLCAgIE9f
U1QyLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDMzICovICB7IFVEX0lmZGl2LCAgICAgICAg
T19TVDAsICAgT19TVDMsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMzQgKi8gIHsgVURfSWZk
aXYsICAgICAgICBPX1NUMCwgICBPX1NUNCwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAzNSAq
LyAgeyBVRF9JZmRpdiwgICAgICAgIE9fU1QwLCAgIE9fU1Q1LCAgIE9fTk9ORSwgIFBfbm9uZSB9
LAogIC8qIDM2ICovICB7IFVEX0lmZGl2LCAgICAgICAgT19TVDAsICAgT19TVDYsICAgT19OT05F
LCAgUF9ub25lIH0sCiAgLyogMzcgKi8gIHsgVURfSWZkaXYsICAgICAgICBPX1NUMCwgICBPX1NU
NywgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAzOCAqLyAgeyBVRF9JZmRpdnIsICAgICAgIE9f
U1QwLCAgIE9fU1QwLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDM5ICovICB7IFVEX0lmZGl2
ciwgICAgICAgT19TVDAsICAgT19TVDEsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogM0EgKi8g
IHsgVURfSWZkaXZyLCAgICAgICBPX1NUMCwgICBPX1NUMiwgICBPX05PTkUsICBQX25vbmUgfSwK
ICAvKiAzQiAqLyAgeyBVRF9JZmRpdnIsICAgICAgIE9fU1QwLCAgIE9fU1QzLCAgIE9fTk9ORSwg
IFBfbm9uZSB9LAogIC8qIDNDICovICB7IFVEX0lmZGl2ciwgICAgICAgT19TVDAsICAgT19TVDQs
ICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogM0QgKi8gIHsgVURfSWZkaXZyLCAgICAgICBPX1NU
MCwgICBPX1NUNSwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAzRSAqLyAgeyBVRF9JZmRpdnIs
ICAgICAgIE9fU1QwLCAgIE9fU1Q2LCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDNGICovICB7
IFVEX0lmZGl2ciwgICAgICAgT19TVDAsICAgT19TVDcsICAgT19OT05FLCAgUF9ub25lIH0sCn07
CgpzdGF0aWMgc3RydWN0IHVkX2l0YWJfZW50cnkgaXRhYl9fMWJ5dGVfX29wX2Q5X19tb2RbMl0g
PSB7CiAgLyogMDAgKi8gIHsgVURfSWdycF9yZWcsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05F
LCAgICBJVEFCX18xQllURV9fT1BfRDlfX01PRF9fT1BfMDBfX1JFRyB9LAogIC8qIDAxICovICB7
IFVEX0lncnBfeDg3LCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgSVRBQl9fMUJZVEVf
X09QX0Q5X19NT0RfX09QXzAxX19YODcgfSwKfTsKCnN0YXRpYyBzdHJ1Y3QgdWRfaXRhYl9lbnRy
eSBpdGFiX18xYnl0ZV9fb3BfZDlfX21vZF9fb3BfMDBfX3JlZ1s4XSA9IHsKICAvKiAwMCAqLyAg
eyBVRF9JZmxkLCAgICAgICAgIE9fTWQsICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfYzF8UF9hc298
UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAwMSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9f
Tk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDAyICovICB7IFVEX0lmc3Qs
ICAgICAgICAgT19NZCwgICAgT19OT05FLCAgT19OT05FLCAgUF9jMXxQX2Fzb3xQX3JleHJ8UF9y
ZXh4fFBfcmV4YiB9LAogIC8qIDAzICovICB7IFVEX0lmc3RwLCAgICAgICAgT19NZCwgICAgT19O
T05FLCAgT19OT05FLCAgUF9jMXxQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDA0
ICovICB7IFVEX0lmbGRlbnYsICAgICAgT19NLCAgICAgT19OT05FLCAgT19OT05FLCAgUF9hc298
UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAwNSAqLyAgeyBVRF9JZmxkY3csICAgICAgIE9f
TXcsICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfYzF8UF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIg
fSwKICAvKiAwNiAqLyAgeyBVRF9JZm5zdGVudiwgICAgIE9fTSwgICAgIE9fTk9ORSwgIE9fTk9O
RSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDcgKi8gIHsgVURfSWZuc3Rj
dywgICAgICBPX013LCAgICBPX05PTkUsICBPX05PTkUsICBQX2MxfFBfYXNvfFBfcmV4cnxQX3Jl
eHh8UF9yZXhiIH0sCn07CgpzdGF0aWMgc3RydWN0IHVkX2l0YWJfZW50cnkgaXRhYl9fMWJ5dGVf
X29wX2Q5X19tb2RfX29wXzAxX194ODdbNjRdID0gewogIC8qIDAwICovICB7IFVEX0lmbGQsICAg
ICAgICAgT19TVDAsICAgT19TVDAsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMDEgKi8gIHsg
VURfSWZsZCwgICAgICAgICBPX1NUMCwgICBPX1NUMSwgICBPX05PTkUsICBQX25vbmUgfSwKICAv
KiAwMiAqLyAgeyBVRF9JZmxkLCAgICAgICAgIE9fU1QwLCAgIE9fU1QyLCAgIE9fTk9ORSwgIFBf
bm9uZSB9LAogIC8qIDAzICovICB7IFVEX0lmbGQsICAgICAgICAgT19TVDAsICAgT19TVDMsICAg
T19OT05FLCAgUF9ub25lIH0sCiAgLyogMDQgKi8gIHsgVURfSWZsZCwgICAgICAgICBPX1NUMCwg
ICBPX1NUNCwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAwNSAqLyAgeyBVRF9JZmxkLCAgICAg
ICAgIE9fU1QwLCAgIE9fU1Q1LCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDA2ICovICB7IFVE
X0lmbGQsICAgICAgICAgT19TVDAsICAgT19TVDYsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyog
MDcgKi8gIHsgVURfSWZsZCwgICAgICAgICBPX1NUMCwgICBPX1NUNywgICBPX05PTkUsICBQX25v
bmUgfSwKICAvKiAwOCAqLyAgeyBVRF9JZnhjaCwgICAgICAgIE9fU1QwLCAgIE9fU1QwLCAgIE9f
Tk9ORSwgIFBfbm9uZSB9LAogIC8qIDA5ICovICB7IFVEX0lmeGNoLCAgICAgICAgT19TVDAsICAg
T19TVDEsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMEEgKi8gIHsgVURfSWZ4Y2gsICAgICAg
ICBPX1NUMCwgICBPX1NUMiwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAwQiAqLyAgeyBVRF9J
ZnhjaCwgICAgICAgIE9fU1QwLCAgIE9fU1QzLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDBD
ICovICB7IFVEX0lmeGNoLCAgICAgICAgT19TVDAsICAgT19TVDQsICAgT19OT05FLCAgUF9ub25l
IH0sCiAgLyogMEQgKi8gIHsgVURfSWZ4Y2gsICAgICAgICBPX1NUMCwgICBPX1NUNSwgICBPX05P
TkUsICBQX25vbmUgfSwKICAvKiAwRSAqLyAgeyBVRF9JZnhjaCwgICAgICAgIE9fU1QwLCAgIE9f
U1Q2LCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDBGICovICB7IFVEX0lmeGNoLCAgICAgICAg
T19TVDAsICAgT19TVDcsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMTAgKi8gIHsgVURfSWZu
b3AsICAgICAgICBPX05PTkUsICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAxMSAq
LyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9
LAogIC8qIDEyICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwg
ICAgUF9ub25lIH0sCiAgLyogMTMgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9O
RSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAxNCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9f
Tk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDE1ICovICB7IFVEX0lpbnZh
bGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMTYgKi8g
IHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwK
ICAvKiAxNyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAg
IFBfbm9uZSB9LAogIC8qIDE4ICovICB7IFVEX0lmc3RwMSwgICAgICAgT19TVDAsICAgT19OT05F
LCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMTkgKi8gIHsgVURfSWZzdHAxLCAgICAgICBPX1NU
MSwgICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAxQSAqLyAgeyBVRF9JZnN0cDEs
ICAgICAgIE9fU1QyLCAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDFCICovICB7
IFVEX0lmc3RwMSwgICAgICAgT19TVDMsICAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAg
LyogMUMgKi8gIHsgVURfSWZzdHAxLCAgICAgICBPX1NUNCwgICBPX05PTkUsICBPX05PTkUsICBQ
X25vbmUgfSwKICAvKiAxRCAqLyAgeyBVRF9JZnN0cDEsICAgICAgIE9fU1Q1LCAgIE9fTk9ORSwg
IE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDFFICovICB7IFVEX0lmc3RwMSwgICAgICAgT19TVDYs
ICAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMUYgKi8gIHsgVURfSWZzdHAxLCAg
ICAgICBPX1NUNywgICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAyMCAqLyAgeyBV
RF9JZmNocywgICAgICAgIE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8q
IDIxICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9u
b25lIH0sCiAgLyogMjIgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19O
T05FLCAgICBQX25vbmUgfSwKICAvKiAyMyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwg
T19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDI0ICovICB7IFVEX0lmdHN0LCAgICAg
ICAgT19OT05FLCAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMjUgKi8gIHsgVURf
SWZ4YW0sICAgICAgICBPX05PTkUsICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAy
NiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9u
ZSB9LAogIC8qIDI3ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9O
RSwgICAgUF9ub25lIH0sCiAgLyogMjggKi8gIHsgVURfSWZsZDEsICAgICAgICBPX05PTkUsICBP
X05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAyOSAqLyAgeyBVRF9JZmxkbDJ0LCAgICAg
IE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDJBICovICB7IFVEX0lm
bGRsMmUsICAgICAgT19OT05FLCAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMkIg
Ki8gIHsgVURfSWZsZGxwaSwgICAgICBPX05PTkUsICBPX05PTkUsICBPX05PTkUsICBQX25vbmUg
fSwKICAvKiAyQyAqLyAgeyBVRF9JZmxkbGcyLCAgICAgIE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9O
RSwgIFBfbm9uZSB9LAogIC8qIDJEICovICB7IFVEX0lmbGRsbjIsICAgICAgT19OT05FLCAgT19O
T05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMkUgKi8gIHsgVURfSWZsZHosICAgICAgICBP
X05PTkUsICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAyRiAqLyAgeyBVRF9JaW52
YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDMwICov
ICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0s
CiAgLyogMzEgKi8gIHsgVURfSWZ5bDJ4LCAgICAgICBPX05PTkUsICBPX05PTkUsICBPX05PTkUs
ICBQX25vbmUgfSwKICAvKiAzMiAqLyAgeyBVRF9JZnB0YW4sICAgICAgIE9fTk9ORSwgIE9fTk9O
RSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDMzICovICB7IFVEX0lmcGF0YW4sICAgICAgT19O
T05FLCAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMzQgKi8gIHsgVURfSWZweHRy
YWN0LCAgICBPX05PTkUsICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAzNSAqLyAg
eyBVRF9JZnByZW0xLCAgICAgIE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAog
IC8qIDM2ICovICB7IFVEX0lmZGVjc3RwLCAgICAgT19OT05FLCAgT19OT05FLCAgT19OT05FLCAg
UF9ub25lIH0sCiAgLyogMzcgKi8gIHsgVURfSWZuY3N0cCwgICAgICBPX05PTkUsICBPX05PTkUs
ICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAzOCAqLyAgeyBVRF9JZnByZW0sICAgICAgIE9fTk9O
RSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDM5ICovICB7IFVEX0lmeWwyeHAx
LCAgICAgT19OT05FLCAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogM0EgKi8gIHsg
VURfSWZzcXJ0LCAgICAgICBPX05PTkUsICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAv
KiAzQiAqLyAgeyBVRF9JZnNpbmNvcywgICAgIE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBf
bm9uZSB9LAogIC8qIDNDICovICB7IFVEX0lmcm5kaW50LCAgICAgT19OT05FLCAgT19OT05FLCAg
T19OT05FLCAgUF9ub25lIH0sCiAgLyogM0QgKi8gIHsgVURfSWZzY2FsZSwgICAgICBPX05PTkUs
ICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAzRSAqLyAgeyBVRF9JZnNpbiwgICAg
ICAgIE9fTk9ORSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDNGICovICB7IFVE
X0lmY29zLCAgICAgICAgT19OT05FLCAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCn07Cgpz
dGF0aWMgc3RydWN0IHVkX2l0YWJfZW50cnkgaXRhYl9fMWJ5dGVfX29wX2RhX19tb2RbMl0gPSB7
CiAgLyogMDAgKi8gIHsgVURfSWdycF9yZWcsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAg
ICBJVEFCX18xQllURV9fT1BfREFfX01PRF9fT1BfMDBfX1JFRyB9LAogIC8qIDAxICovICB7IFVE
X0lncnBfeDg3LCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgSVRBQl9fMUJZVEVfX09Q
X0RBX19NT0RfX09QXzAxX19YODcgfSwKfTsKCnN0YXRpYyBzdHJ1Y3QgdWRfaXRhYl9lbnRyeSBp
dGFiX18xYnl0ZV9fb3BfZGFfX21vZF9fb3BfMDBfX3JlZ1s4XSA9IHsKICAvKiAwMCAqLyAgeyBV
RF9JZmlhZGQsICAgICAgIE9fTWQsICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfYzF8UF9hc298UF9y
ZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAwMSAqLyAgeyBVRF9JZmltdWwsICAgICAgIE9fTWQs
ICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfYzF8UF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwK
ICAvKiAwMiAqLyAgeyBVRF9JZmljb20sICAgICAgIE9fTWQsICAgIE9fTk9ORSwgIE9fTk9ORSwg
IFBfYzF8UF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAwMyAqLyAgeyBVRF9JZmlj
b21wLCAgICAgIE9fTWQsICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfYzF8UF9hc298UF9yZXhyfFBf
cmV4eHxQX3JleGIgfSwKICAvKiAwNCAqLyAgeyBVRF9JZmlzdWIsICAgICAgIE9fTWQsICAgIE9f
Tk9ORSwgIE9fTk9ORSwgIFBfYzF8UF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAw
NSAqLyAgeyBVRF9JZmlzdWJyLCAgICAgIE9fTWQsICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfYzF8
UF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAwNiAqLyAgeyBVRF9JZmlkaXYsICAg
ICAgIE9fTWQsICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfYzF8UF9hc298UF9yZXhyfFBfcmV4eHxQ
X3JleGIgfSwKICAvKiAwNyAqLyAgeyBVRF9JZmlkaXZyLCAgICAgIE9fTWQsICAgIE9fTk9ORSwg
IE9fTk9ORSwgIFBfYzF8UF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKfTsKCnN0YXRpYyBz
dHJ1Y3QgdWRfaXRhYl9lbnRyeSBpdGFiX18xYnl0ZV9fb3BfZGFfX21vZF9fb3BfMDFfX3g4N1s2
NF0gPSB7CiAgLyogMDAgKi8gIHsgVURfSWZjbW92YiwgICAgICBPX1NUMCwgICBPX1NUMCwgICBP
X05PTkUsICBQX25vbmUgfSwKICAvKiAwMSAqLyAgeyBVRF9JZmNtb3ZiLCAgICAgIE9fU1QwLCAg
IE9fU1QxLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDAyICovICB7IFVEX0lmY21vdmIsICAg
ICAgT19TVDAsICAgT19TVDIsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMDMgKi8gIHsgVURf
SWZjbW92YiwgICAgICBPX1NUMCwgICBPX1NUMywgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAw
NCAqLyAgeyBVRF9JZmNtb3ZiLCAgICAgIE9fU1QwLCAgIE9fU1Q0LCAgIE9fTk9ORSwgIFBfbm9u
ZSB9LAogIC8qIDA1ICovICB7IFVEX0lmY21vdmIsICAgICAgT19TVDAsICAgT19TVDUsICAgT19O
T05FLCAgUF9ub25lIH0sCiAgLyogMDYgKi8gIHsgVURfSWZjbW92YiwgICAgICBPX1NUMCwgICBP
X1NUNiwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAwNyAqLyAgeyBVRF9JZmNtb3ZiLCAgICAg
IE9fU1QwLCAgIE9fU1Q3LCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDA4ICovICB7IFVEX0lm
Y21vdmUsICAgICAgT19TVDAsICAgT19TVDAsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMDkg
Ki8gIHsgVURfSWZjbW92ZSwgICAgICBPX1NUMCwgICBPX1NUMSwgICBPX05PTkUsICBQX25vbmUg
fSwKICAvKiAwQSAqLyAgeyBVRF9JZmNtb3ZlLCAgICAgIE9fU1QwLCAgIE9fU1QyLCAgIE9fTk9O
RSwgIFBfbm9uZSB9LAogIC8qIDBCICovICB7IFVEX0lmY21vdmUsICAgICAgT19TVDAsICAgT19T
VDMsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMEMgKi8gIHsgVURfSWZjbW92ZSwgICAgICBP
X1NUMCwgICBPX1NUNCwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAwRCAqLyAgeyBVRF9JZmNt
b3ZlLCAgICAgIE9fU1QwLCAgIE9fU1Q1LCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDBFICov
ICB7IFVEX0lmY21vdmUsICAgICAgT19TVDAsICAgT19TVDYsICAgT19OT05FLCAgUF9ub25lIH0s
CiAgLyogMEYgKi8gIHsgVURfSWZjbW92ZSwgICAgICBPX1NUMCwgICBPX1NUNywgICBPX05PTkUs
ICBQX25vbmUgfSwKICAvKiAxMCAqLyAgeyBVRF9JZmNtb3ZiZSwgICAgIE9fU1QwLCAgIE9fU1Qw
LCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDExICovICB7IFVEX0lmY21vdmJlLCAgICAgT19T
VDAsICAgT19TVDEsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMTIgKi8gIHsgVURfSWZjbW92
YmUsICAgICBPX1NUMCwgICBPX1NUMiwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAxMyAqLyAg
eyBVRF9JZmNtb3ZiZSwgICAgIE9fU1QwLCAgIE9fU1QzLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAog
IC8qIDE0ICovICB7IFVEX0lmY21vdmJlLCAgICAgT19TVDAsICAgT19TVDQsICAgT19OT05FLCAg
UF9ub25lIH0sCiAgLyogMTUgKi8gIHsgVURfSWZjbW92YmUsICAgICBPX1NUMCwgICBPX1NUNSwg
ICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAxNiAqLyAgeyBVRF9JZmNtb3ZiZSwgICAgIE9fU1Qw
LCAgIE9fU1Q2LCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDE3ICovICB7IFVEX0lmY21vdmJl
LCAgICAgT19TVDAsICAgT19TVDcsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMTggKi8gIHsg
VURfSWZjbW92dSwgICAgICBPX1NUMCwgICBPX1NUMCwgICBPX05PTkUsICBQX25vbmUgfSwKICAv
KiAxOSAqLyAgeyBVRF9JZmNtb3Z1LCAgICAgIE9fU1QwLCAgIE9fU1QxLCAgIE9fTk9ORSwgIFBf
bm9uZSB9LAogIC8qIDFBICovICB7IFVEX0lmY21vdnUsICAgICAgT19TVDAsICAgT19TVDIsICAg
T19OT05FLCAgUF9ub25lIH0sCiAgLyogMUIgKi8gIHsgVURfSWZjbW92dSwgICAgICBPX1NUMCwg
ICBPX1NUMywgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAxQyAqLyAgeyBVRF9JZmNtb3Z1LCAg
ICAgIE9fU1QwLCAgIE9fU1Q0LCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDFEICovICB7IFVE
X0lmY21vdnUsICAgICAgT19TVDAsICAgT19TVDUsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyog
MUUgKi8gIHsgVURfSWZjbW92dSwgICAgICBPX1NUMCwgICBPX1NUNiwgICBPX05PTkUsICBQX25v
bmUgfSwKICAvKiAxRiAqLyAgeyBVRF9JZmNtb3Z1LCAgICAgIE9fU1QwLCAgIE9fU1Q3LCAgIE9f
Tk9ORSwgIFBfbm9uZSB9LAogIC8qIDIwICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBP
X05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMjEgKi8gIHsgVURfSWludmFsaWQsICAg
ICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAyMiAqLyAgeyBVRF9J
aW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDIz
ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25l
IH0sCiAgLyogMjQgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05F
LCAgICBQX25vbmUgfSwKICAvKiAyNSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19O
T05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDI2ICovICB7IFVEX0lpbnZhbGlkLCAgICAg
T19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMjcgKi8gIHsgVURfSWlu
dmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAyOCAq
LyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9
LAogIC8qIDI5ICovICB7IFVEX0lmdWNvbXBwLCAgICAgT19OT05FLCAgT19OT05FLCAgT19OT05F
LCAgUF9ub25lIH0sCiAgLyogMkEgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9O
RSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAyQiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9f
Tk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDJDICovICB7IFVEX0lpbnZh
bGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMkQgKi8g
IHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwK
ICAvKiAyRSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAg
IFBfbm9uZSB9LAogIC8qIDJGICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUs
IE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMzAgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05P
TkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAzMSAqLyAgeyBVRF9JaW52YWxp
ZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDMyICovICB7
IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAg
LyogMzMgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQ
X25vbmUgfSwKICAvKiAzNCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBP
X05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDM1ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05F
LCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMzYgKi8gIHsgVURfSWludmFsaWQs
ICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAzNyAqLyAgeyBV
RF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8q
IDM4ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9u
b25lIH0sCiAgLyogMzkgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19O
T05FLCAgICBQX25vbmUgfSwKICAvKiAzQSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwg
T19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDNCICovICB7IFVEX0lpbnZhbGlkLCAg
ICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogM0MgKi8gIHsgVURf
SWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAz
RCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9u
ZSB9LAogIC8qIDNFICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9O
RSwgICAgUF9ub25lIH0sCiAgLyogM0YgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9f
Tk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKfTsKCnN0YXRpYyBzdHJ1Y3QgdWRfaXRhYl9lbnRy
eSBpdGFiX18xYnl0ZV9fb3BfZGJfX21vZFsyXSA9IHsKICAvKiAwMCAqLyAgeyBVRF9JZ3JwX3Jl
ZywgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIElUQUJfXzFCWVRFX19PUF9EQl9fTU9E
X19PUF8wMF9fUkVHIH0sCiAgLyogMDEgKi8gIHsgVURfSWdycF94ODcsICAgICBPX05PTkUsIE9f
Tk9ORSwgT19OT05FLCAgICBJVEFCX18xQllURV9fT1BfREJfX01PRF9fT1BfMDFfX1g4NyB9LAp9
OwoKc3RhdGljIHN0cnVjdCB1ZF9pdGFiX2VudHJ5IGl0YWJfXzFieXRlX19vcF9kYl9fbW9kX19v
cF8wMF9fcmVnWzhdID0gewogIC8qIDAwICovICB7IFVEX0lmaWxkLCAgICAgICAgT19NZCwgICAg
T19OT05FLCAgT19OT05FLCAgUF9jMXxQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8q
IDAxICovICB7IFVEX0lmaXN0dHAsICAgICAgT19NZCwgICAgT19OT05FLCAgT19OT05FLCAgUF9j
MXxQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDAyICovICB7IFVEX0lmaXN0LCAg
ICAgICAgT19NZCwgICAgT19OT05FLCAgT19OT05FLCAgUF9jMXxQX2Fzb3xQX3JleHJ8UF9yZXh4
fFBfcmV4YiB9LAogIC8qIDAzICovICB7IFVEX0lmaXN0cCwgICAgICAgT19NZCwgICAgT19OT05F
LCAgT19OT05FLCAgUF9jMXxQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDA0ICov
ICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0s
CiAgLyogMDUgKi8gIHsgVURfSWZsZCwgICAgICAgICBPX010LCAgICBPX05PTkUsICBPX05PTkUs
ICBQX2MxfFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDYgKi8gIHsgVURfSWlu
dmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAwNyAq
LyAgeyBVRF9JZnN0cCwgICAgICAgIE9fTXQsICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfYzF8UF9h
c298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKfTsKCnN0YXRpYyBzdHJ1Y3QgdWRfaXRhYl9lbnRy
eSBpdGFiX18xYnl0ZV9fb3BfZGJfX21vZF9fb3BfMDFfX3g4N1s2NF0gPSB7CiAgLyogMDAgKi8g
IHsgVURfSWZjbW92bmIsICAgICBPX1NUMCwgICBPX1NUMCwgICBPX05PTkUsICBQX25vbmUgfSwK
ICAvKiAwMSAqLyAgeyBVRF9JZmNtb3ZuYiwgICAgIE9fU1QwLCAgIE9fU1QxLCAgIE9fTk9ORSwg
IFBfbm9uZSB9LAogIC8qIDAyICovICB7IFVEX0lmY21vdm5iLCAgICAgT19TVDAsICAgT19TVDIs
ICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMDMgKi8gIHsgVURfSWZjbW92bmIsICAgICBPX1NU
MCwgICBPX1NUMywgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAwNCAqLyAgeyBVRF9JZmNtb3Zu
YiwgICAgIE9fU1QwLCAgIE9fU1Q0LCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDA1ICovICB7
IFVEX0lmY21vdm5iLCAgICAgT19TVDAsICAgT19TVDUsICAgT19OT05FLCAgUF9ub25lIH0sCiAg
LyogMDYgKi8gIHsgVURfSWZjbW92bmIsICAgICBPX1NUMCwgICBPX1NUNiwgICBPX05PTkUsICBQ
X25vbmUgfSwKICAvKiAwNyAqLyAgeyBVRF9JZmNtb3ZuYiwgICAgIE9fU1QwLCAgIE9fU1Q3LCAg
IE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDA4ICovICB7IFVEX0lmY21vdm5lLCAgICAgT19TVDAs
ICAgT19TVDAsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMDkgKi8gIHsgVURfSWZjbW92bmUs
ICAgICBPX1NUMCwgICBPX1NUMSwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAwQSAqLyAgeyBV
RF9JZmNtb3ZuZSwgICAgIE9fU1QwLCAgIE9fU1QyLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8q
IDBCICovICB7IFVEX0lmY21vdm5lLCAgICAgT19TVDAsICAgT19TVDMsICAgT19OT05FLCAgUF9u
b25lIH0sCiAgLyogMEMgKi8gIHsgVURfSWZjbW92bmUsICAgICBPX1NUMCwgICBPX1NUNCwgICBP
X05PTkUsICBQX25vbmUgfSwKICAvKiAwRCAqLyAgeyBVRF9JZmNtb3ZuZSwgICAgIE9fU1QwLCAg
IE9fU1Q1LCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDBFICovICB7IFVEX0lmY21vdm5lLCAg
ICAgT19TVDAsICAgT19TVDYsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMEYgKi8gIHsgVURf
SWZjbW92bmUsICAgICBPX1NUMCwgICBPX1NUNywgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAx
MCAqLyAgeyBVRF9JZmNtb3ZuYmUsICAgIE9fU1QwLCAgIE9fU1QwLCAgIE9fTk9ORSwgIFBfbm9u
ZSB9LAogIC8qIDExICovICB7IFVEX0lmY21vdm5iZSwgICAgT19TVDAsICAgT19TVDEsICAgT19O
T05FLCAgUF9ub25lIH0sCiAgLyogMTIgKi8gIHsgVURfSWZjbW92bmJlLCAgICBPX1NUMCwgICBP
X1NUMiwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAxMyAqLyAgeyBVRF9JZmNtb3ZuYmUsICAg
IE9fU1QwLCAgIE9fU1QzLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDE0ICovICB7IFVEX0lm
Y21vdm5iZSwgICAgT19TVDAsICAgT19TVDQsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMTUg
Ki8gIHsgVURfSWZjbW92bmJlLCAgICBPX1NUMCwgICBPX1NUNSwgICBPX05PTkUsICBQX25vbmUg
fSwKICAvKiAxNiAqLyAgeyBVRF9JZmNtb3ZuYmUsICAgIE9fU1QwLCAgIE9fU1Q2LCAgIE9fTk9O
RSwgIFBfbm9uZSB9LAogIC8qIDE3ICovICB7IFVEX0lmY21vdm5iZSwgICAgT19TVDAsICAgT19T
VDcsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMTggKi8gIHsgVURfSWZjbW92bnUsICAgICBP
X1NUMCwgICBPX1NUMCwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAxOSAqLyAgeyBVRF9JZmNt
b3ZudSwgICAgIE9fU1QwLCAgIE9fU1QxLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDFBICov
ICB7IFVEX0lmY21vdm51LCAgICAgT19TVDAsICAgT19TVDIsICAgT19OT05FLCAgUF9ub25lIH0s
CiAgLyogMUIgKi8gIHsgVURfSWZjbW92bnUsICAgICBPX1NUMCwgICBPX1NUMywgICBPX05PTkUs
ICBQX25vbmUgfSwKICAvKiAxQyAqLyAgeyBVRF9JZmNtb3ZudSwgICAgIE9fU1QwLCAgIE9fU1Q0
LCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDFEICovICB7IFVEX0lmY21vdm51LCAgICAgT19T
VDAsICAgT19TVDUsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMUUgKi8gIHsgVURfSWZjbW92
bnUsICAgICBPX1NUMCwgICBPX1NUNiwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAxRiAqLyAg
eyBVRF9JZmNtb3ZudSwgICAgIE9fU1QwLCAgIE9fU1Q3LCAgIE9fTk9ORSwgIFBfbm9uZSB9LAog
IC8qIDIwICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAg
UF9ub25lIH0sCiAgLyogMjEgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwg
T19OT05FLCAgICBQX25vbmUgfSwKICAvKiAyMiAqLyAgeyBVRF9JZmNsZXgsICAgICAgIE9fTk9O
RSwgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDIzICovICB7IFVEX0lmbmluaXQs
ICAgICAgT19OT05FLCAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMjQgKi8gIHsg
VURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAv
KiAyNSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBf
bm9uZSB9LAogIC8qIDI2ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9f
Tk9ORSwgICAgUF9ub25lIH0sCiAgLyogMjcgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUs
IE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAyOCAqLyAgeyBVRF9JZnVjb21pLCAg
ICAgIE9fU1QwLCAgIE9fU1QwLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDI5ICovICB7IFVE
X0lmdWNvbWksICAgICAgT19TVDAsICAgT19TVDEsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyog
MkEgKi8gIHsgVURfSWZ1Y29taSwgICAgICBPX1NUMCwgICBPX1NUMiwgICBPX05PTkUsICBQX25v
bmUgfSwKICAvKiAyQiAqLyAgeyBVRF9JZnVjb21pLCAgICAgIE9fU1QwLCAgIE9fU1QzLCAgIE9f
Tk9ORSwgIFBfbm9uZSB9LAogIC8qIDJDICovICB7IFVEX0lmdWNvbWksICAgICAgT19TVDAsICAg
T19TVDQsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMkQgKi8gIHsgVURfSWZ1Y29taSwgICAg
ICBPX1NUMCwgICBPX1NUNSwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAyRSAqLyAgeyBVRF9J
ZnVjb21pLCAgICAgIE9fU1QwLCAgIE9fU1Q2LCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDJG
ICovICB7IFVEX0lmdWNvbWksICAgICAgT19TVDAsICAgT19TVDcsICAgT19OT05FLCAgUF9ub25l
IH0sCiAgLyogMzAgKi8gIHsgVURfSWZjb21pLCAgICAgICBPX1NUMCwgICBPX1NUMCwgICBPX05P
TkUsICBQX25vbmUgfSwKICAvKiAzMSAqLyAgeyBVRF9JZmNvbWksICAgICAgIE9fU1QwLCAgIE9f
U1QxLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDMyICovICB7IFVEX0lmY29taSwgICAgICAg
T19TVDAsICAgT19TVDIsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMzMgKi8gIHsgVURfSWZj
b21pLCAgICAgICBPX1NUMCwgICBPX1NUMywgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAzNCAq
LyAgeyBVRF9JZmNvbWksICAgICAgIE9fU1QwLCAgIE9fU1Q0LCAgIE9fTk9ORSwgIFBfbm9uZSB9
LAogIC8qIDM1ICovICB7IFVEX0lmY29taSwgICAgICAgT19TVDAsICAgT19TVDUsICAgT19OT05F
LCAgUF9ub25lIH0sCiAgLyogMzYgKi8gIHsgVURfSWZjb21pLCAgICAgICBPX1NUMCwgICBPX1NU
NiwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAzNyAqLyAgeyBVRF9JZmNvbWksICAgICAgIE9f
U1QwLCAgIE9fU1Q3LCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDM4ICovICB7IFVEX0lpbnZh
bGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMzkgKi8g
IHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwK
ICAvKiAzQSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAg
IFBfbm9uZSB9LAogIC8qIDNCICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUs
IE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogM0MgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05P
TkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAzRCAqLyAgeyBVRF9JaW52YWxp
ZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDNFICovICB7
IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAg
LyogM0YgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQ
X25vbmUgfSwKfTsKCnN0YXRpYyBzdHJ1Y3QgdWRfaXRhYl9lbnRyeSBpdGFiX18xYnl0ZV9fb3Bf
ZGNfX21vZFsyXSA9IHsKICAvKiAwMCAqLyAgeyBVRF9JZ3JwX3JlZywgICAgIE9fTk9ORSwgT19O
T05FLCBPX05PTkUsICAgIElUQUJfXzFCWVRFX19PUF9EQ19fTU9EX19PUF8wMF9fUkVHIH0sCiAg
LyogMDEgKi8gIHsgVURfSWdycF94ODcsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBJ
VEFCX18xQllURV9fT1BfRENfX01PRF9fT1BfMDFfX1g4NyB9LAp9OwoKc3RhdGljIHN0cnVjdCB1
ZF9pdGFiX2VudHJ5IGl0YWJfXzFieXRlX19vcF9kY19fbW9kX19vcF8wMF9fcmVnWzhdID0gewog
IC8qIDAwICovICB7IFVEX0lmYWRkLCAgICAgICAgT19NcSwgICAgT19OT05FLCAgT19OT05FLCAg
UF9jMXxQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDAxICovICB7IFVEX0lmbXVs
LCAgICAgICAgT19NcSwgICAgT19OT05FLCAgT19OT05FLCAgUF9jMXxQX2Fzb3xQX3JleHJ8UF9y
ZXh4fFBfcmV4YiB9LAogIC8qIDAyICovICB7IFVEX0lmY29tLCAgICAgICAgT19NcSwgICAgT19O
T05FLCAgT19OT05FLCAgUF9jMXxQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDAz
ICovICB7IFVEX0lmY29tcCwgICAgICAgT19NcSwgICAgT19OT05FLCAgT19OT05FLCAgUF9jMXxQ
X2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDA0ICovICB7IFVEX0lmc3ViLCAgICAg
ICAgT19NcSwgICAgT19OT05FLCAgT19OT05FLCAgUF9jMXxQX2Fzb3xQX3JleHJ8UF9yZXh4fFBf
cmV4YiB9LAogIC8qIDA1ICovICB7IFVEX0lmc3ViciwgICAgICAgT19NcSwgICAgT19OT05FLCAg
T19OT05FLCAgUF9jMXxQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDA2ICovICB7
IFVEX0lmZGl2LCAgICAgICAgT19NcSwgICAgT19OT05FLCAgT19OT05FLCAgUF9jMXxQX2Fzb3xQ
X3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDA3ICovICB7IFVEX0lmZGl2ciwgICAgICAgT19N
cSwgICAgT19OT05FLCAgT19OT05FLCAgUF9jMXxQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9
LAp9OwoKc3RhdGljIHN0cnVjdCB1ZF9pdGFiX2VudHJ5IGl0YWJfXzFieXRlX19vcF9kY19fbW9k
X19vcF8wMV9feDg3WzY0XSA9IHsKICAvKiAwMCAqLyAgeyBVRF9JZmFkZCwgICAgICAgIE9fU1Qw
LCAgIE9fU1QwLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDAxICovICB7IFVEX0lmYWRkLCAg
ICAgICAgT19TVDEsICAgT19TVDAsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMDIgKi8gIHsg
VURfSWZhZGQsICAgICAgICBPX1NUMiwgICBPX1NUMCwgICBPX05PTkUsICBQX25vbmUgfSwKICAv
KiAwMyAqLyAgeyBVRF9JZmFkZCwgICAgICAgIE9fU1QzLCAgIE9fU1QwLCAgIE9fTk9ORSwgIFBf
bm9uZSB9LAogIC8qIDA0ICovICB7IFVEX0lmYWRkLCAgICAgICAgT19TVDQsICAgT19TVDAsICAg
T19OT05FLCAgUF9ub25lIH0sCiAgLyogMDUgKi8gIHsgVURfSWZhZGQsICAgICAgICBPX1NUNSwg
ICBPX1NUMCwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAwNiAqLyAgeyBVRF9JZmFkZCwgICAg
ICAgIE9fU1Q2LCAgIE9fU1QwLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDA3ICovICB7IFVE
X0lmYWRkLCAgICAgICAgT19TVDcsICAgT19TVDAsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyog
MDggKi8gIHsgVURfSWZtdWwsICAgICAgICBPX1NUMCwgICBPX1NUMCwgICBPX05PTkUsICBQX25v
bmUgfSwKICAvKiAwOSAqLyAgeyBVRF9JZm11bCwgICAgICAgIE9fU1QxLCAgIE9fU1QwLCAgIE9f
Tk9ORSwgIFBfbm9uZSB9LAogIC8qIDBBICovICB7IFVEX0lmbXVsLCAgICAgICAgT19TVDIsICAg
T19TVDAsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMEIgKi8gIHsgVURfSWZtdWwsICAgICAg
ICBPX1NUMywgICBPX1NUMCwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAwQyAqLyAgeyBVRF9J
Zm11bCwgICAgICAgIE9fU1Q0LCAgIE9fU1QwLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDBE
ICovICB7IFVEX0lmbXVsLCAgICAgICAgT19TVDUsICAgT19TVDAsICAgT19OT05FLCAgUF9ub25l
IH0sCiAgLyogMEUgKi8gIHsgVURfSWZtdWwsICAgICAgICBPX1NUNiwgICBPX1NUMCwgICBPX05P
TkUsICBQX25vbmUgfSwKICAvKiAwRiAqLyAgeyBVRF9JZm11bCwgICAgICAgIE9fU1Q3LCAgIE9f
U1QwLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDEwICovICB7IFVEX0lmY29tMiwgICAgICAg
T19TVDAsICAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMTEgKi8gIHsgVURfSWZj
b20yLCAgICAgICBPX1NUMSwgICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAxMiAq
LyAgeyBVRF9JZmNvbTIsICAgICAgIE9fU1QyLCAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9
LAogIC8qIDEzICovICB7IFVEX0lmY29tMiwgICAgICAgT19TVDMsICAgT19OT05FLCAgT19OT05F
LCAgUF9ub25lIH0sCiAgLyogMTQgKi8gIHsgVURfSWZjb20yLCAgICAgICBPX1NUNCwgICBPX05P
TkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAxNSAqLyAgeyBVRF9JZmNvbTIsICAgICAgIE9f
U1Q1LCAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDE2ICovICB7IFVEX0lmY29t
MiwgICAgICAgT19TVDYsICAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMTcgKi8g
IHsgVURfSWZjb20yLCAgICAgICBPX1NUNywgICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwK
ICAvKiAxOCAqLyAgeyBVRF9JZmNvbXAzLCAgICAgIE9fU1QwLCAgIE9fTk9ORSwgIE9fTk9ORSwg
IFBfbm9uZSB9LAogIC8qIDE5ICovICB7IFVEX0lmY29tcDMsICAgICAgT19TVDEsICAgT19OT05F
LCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMUEgKi8gIHsgVURfSWZjb21wMywgICAgICBPX1NU
MiwgICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAxQiAqLyAgeyBVRF9JZmNvbXAz
LCAgICAgIE9fU1QzLCAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDFDICovICB7
IFVEX0lmY29tcDMsICAgICAgT19TVDQsICAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAg
LyogMUQgKi8gIHsgVURfSWZjb21wMywgICAgICBPX1NUNSwgICBPX05PTkUsICBPX05PTkUsICBQ
X25vbmUgfSwKICAvKiAxRSAqLyAgeyBVRF9JZmNvbXAzLCAgICAgIE9fU1Q2LCAgIE9fTk9ORSwg
IE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDFGICovICB7IFVEX0lmY29tcDMsICAgICAgT19TVDcs
ICAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMjAgKi8gIHsgVURfSWZzdWJyLCAg
ICAgICBPX1NUMCwgICBPX1NUMCwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAyMSAqLyAgeyBV
RF9JZnN1YnIsICAgICAgIE9fU1QxLCAgIE9fU1QwLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8q
IDIyICovICB7IFVEX0lmc3ViciwgICAgICAgT19TVDIsICAgT19TVDAsICAgT19OT05FLCAgUF9u
b25lIH0sCiAgLyogMjMgKi8gIHsgVURfSWZzdWJyLCAgICAgICBPX1NUMywgICBPX1NUMCwgICBP
X05PTkUsICBQX25vbmUgfSwKICAvKiAyNCAqLyAgeyBVRF9JZnN1YnIsICAgICAgIE9fU1Q0LCAg
IE9fU1QwLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDI1ICovICB7IFVEX0lmc3ViciwgICAg
ICAgT19TVDUsICAgT19TVDAsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMjYgKi8gIHsgVURf
SWZzdWJyLCAgICAgICBPX1NUNiwgICBPX1NUMCwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAy
NyAqLyAgeyBVRF9JZnN1YnIsICAgICAgIE9fU1Q3LCAgIE9fU1QwLCAgIE9fTk9ORSwgIFBfbm9u
ZSB9LAogIC8qIDI4ICovICB7IFVEX0lmc3ViLCAgICAgICAgT19TVDAsICAgT19TVDAsICAgT19O
T05FLCAgUF9ub25lIH0sCiAgLyogMjkgKi8gIHsgVURfSWZzdWIsICAgICAgICBPX1NUMSwgICBP
X1NUMCwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAyQSAqLyAgeyBVRF9JZnN1YiwgICAgICAg
IE9fU1QyLCAgIE9fU1QwLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDJCICovICB7IFVEX0lm
c3ViLCAgICAgICAgT19TVDMsICAgT19TVDAsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMkMg
Ki8gIHsgVURfSWZzdWIsICAgICAgICBPX1NUNCwgICBPX1NUMCwgICBPX05PTkUsICBQX25vbmUg
fSwKICAvKiAyRCAqLyAgeyBVRF9JZnN1YiwgICAgICAgIE9fU1Q1LCAgIE9fU1QwLCAgIE9fTk9O
RSwgIFBfbm9uZSB9LAogIC8qIDJFICovICB7IFVEX0lmc3ViLCAgICAgICAgT19TVDYsICAgT19T
VDAsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMkYgKi8gIHsgVURfSWZzdWIsICAgICAgICBP
X1NUNywgICBPX1NUMCwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAzMCAqLyAgeyBVRF9JZmRp
dnIsICAgICAgIE9fU1QwLCAgIE9fU1QwLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDMxICov
ICB7IFVEX0lmZGl2ciwgICAgICAgT19TVDEsICAgT19TVDAsICAgT19OT05FLCAgUF9ub25lIH0s
CiAgLyogMzIgKi8gIHsgVURfSWZkaXZyLCAgICAgICBPX1NUMiwgICBPX1NUMCwgICBPX05PTkUs
ICBQX25vbmUgfSwKICAvKiAzMyAqLyAgeyBVRF9JZmRpdnIsICAgICAgIE9fU1QzLCAgIE9fU1Qw
LCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDM0ICovICB7IFVEX0lmZGl2ciwgICAgICAgT19T
VDQsICAgT19TVDAsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMzUgKi8gIHsgVURfSWZkaXZy
LCAgICAgICBPX1NUNSwgICBPX1NUMCwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAzNiAqLyAg
eyBVRF9JZmRpdnIsICAgICAgIE9fU1Q2LCAgIE9fU1QwLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAog
IC8qIDM3ICovICB7IFVEX0lmZGl2ciwgICAgICAgT19TVDcsICAgT19TVDAsICAgT19OT05FLCAg
UF9ub25lIH0sCiAgLyogMzggKi8gIHsgVURfSWZkaXYsICAgICAgICBPX1NUMCwgICBPX1NUMCwg
ICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAzOSAqLyAgeyBVRF9JZmRpdiwgICAgICAgIE9fU1Qx
LCAgIE9fU1QwLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDNBICovICB7IFVEX0lmZGl2LCAg
ICAgICAgT19TVDIsICAgT19TVDAsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogM0IgKi8gIHsg
VURfSWZkaXYsICAgICAgICBPX1NUMywgICBPX1NUMCwgICBPX05PTkUsICBQX25vbmUgfSwKICAv
KiAzQyAqLyAgeyBVRF9JZmRpdiwgICAgICAgIE9fU1Q0LCAgIE9fU1QwLCAgIE9fTk9ORSwgIFBf
bm9uZSB9LAogIC8qIDNEICovICB7IFVEX0lmZGl2LCAgICAgICAgT19TVDUsICAgT19TVDAsICAg
T19OT05FLCAgUF9ub25lIH0sCiAgLyogM0UgKi8gIHsgVURfSWZkaXYsICAgICAgICBPX1NUNiwg
ICBPX1NUMCwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAzRiAqLyAgeyBVRF9JZmRpdiwgICAg
ICAgIE9fU1Q3LCAgIE9fU1QwLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAp9OwoKc3RhdGljIHN0cnVj
dCB1ZF9pdGFiX2VudHJ5IGl0YWJfXzFieXRlX19vcF9kZF9fbW9kWzJdID0gewogIC8qIDAwICov
ICB7IFVEX0lncnBfcmVnLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgSVRBQl9fMUJZ
VEVfX09QX0REX19NT0RfX09QXzAwX19SRUcgfSwKICAvKiAwMSAqLyAgeyBVRF9JZ3JwX3g4Nywg
ICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIElUQUJfXzFCWVRFX19PUF9ERF9fTU9EX19P
UF8wMV9fWDg3IH0sCn07CgpzdGF0aWMgc3RydWN0IHVkX2l0YWJfZW50cnkgaXRhYl9fMWJ5dGVf
X29wX2RkX19tb2RfX29wXzAwX19yZWdbOF0gPSB7CiAgLyogMDAgKi8gIHsgVURfSWZsZCwgICAg
ICAgICBPX01xLCAgICBPX05PTkUsICBPX05PTkUsICBQX2MxfFBfYXNvfFBfcmV4cnxQX3JleHh8
UF9yZXhiIH0sCiAgLyogMDEgKi8gIHsgVURfSWZpc3R0cCwgICAgICBPX01xLCAgICBPX05PTkUs
ICBPX05PTkUsICBQX2MxfFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDIgKi8g
IHsgVURfSWZzdCwgICAgICAgICBPX01xLCAgICBPX05PTkUsICBPX05PTkUsICBQX2MxfFBfYXNv
fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDMgKi8gIHsgVURfSWZzdHAsICAgICAgICBP
X01xLCAgICBPX05PTkUsICBPX05PTkUsICBQX2MxfFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhi
IH0sCiAgLyogMDQgKi8gIHsgVURfSWZyc3RvciwgICAgICBPX00sICAgICBPX05PTkUsICBPX05P
TkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDA1ICovICB7IFVEX0lpbnZh
bGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMDYgKi8g
IHsgVURfSWZuc2F2ZSwgICAgICBPX00sICAgICBPX05PTkUsICBPX05PTkUsICBQX2Fzb3xQX3Jl
eHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDA3ICovICB7IFVEX0lmbnN0c3csICAgICAgT19Ndywg
ICAgT19OT05FLCAgT19OT05FLCAgUF9jMXxQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAp9
OwoKc3RhdGljIHN0cnVjdCB1ZF9pdGFiX2VudHJ5IGl0YWJfXzFieXRlX19vcF9kZF9fbW9kX19v
cF8wMV9feDg3WzY0XSA9IHsKICAvKiAwMCAqLyAgeyBVRF9JZmZyZWUsICAgICAgIE9fU1QwLCAg
IE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDAxICovICB7IFVEX0lmZnJlZSwgICAg
ICAgT19TVDEsICAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMDIgKi8gIHsgVURf
SWZmcmVlLCAgICAgICBPX1NUMiwgICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAw
MyAqLyAgeyBVRF9JZmZyZWUsICAgICAgIE9fU1QzLCAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9u
ZSB9LAogIC8qIDA0ICovICB7IFVEX0lmZnJlZSwgICAgICAgT19TVDQsICAgT19OT05FLCAgT19O
T05FLCAgUF9ub25lIH0sCiAgLyogMDUgKi8gIHsgVURfSWZmcmVlLCAgICAgICBPX1NUNSwgICBP
X05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAwNiAqLyAgeyBVRF9JZmZyZWUsICAgICAg
IE9fU1Q2LCAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDA3ICovICB7IFVEX0lm
ZnJlZSwgICAgICAgT19TVDcsICAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMDgg
Ki8gIHsgVURfSWZ4Y2g0LCAgICAgICBPX1NUMCwgICBPX05PTkUsICBPX05PTkUsICBQX25vbmUg
fSwKICAvKiAwOSAqLyAgeyBVRF9JZnhjaDQsICAgICAgIE9fU1QxLCAgIE9fTk9ORSwgIE9fTk9O
RSwgIFBfbm9uZSB9LAogIC8qIDBBICovICB7IFVEX0lmeGNoNCwgICAgICAgT19TVDIsICAgT19O
T05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMEIgKi8gIHsgVURfSWZ4Y2g0LCAgICAgICBP
X1NUMywgICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAwQyAqLyAgeyBVRF9JZnhj
aDQsICAgICAgIE9fU1Q0LCAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDBEICov
ICB7IFVEX0lmeGNoNCwgICAgICAgT19TVDUsICAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0s
CiAgLyogMEUgKi8gIHsgVURfSWZ4Y2g0LCAgICAgICBPX1NUNiwgICBPX05PTkUsICBPX05PTkUs
ICBQX25vbmUgfSwKICAvKiAwRiAqLyAgeyBVRF9JZnhjaDQsICAgICAgIE9fU1Q3LCAgIE9fTk9O
RSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDEwICovICB7IFVEX0lmc3QsICAgICAgICAgT19T
VDAsICAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMTEgKi8gIHsgVURfSWZzdCwg
ICAgICAgICBPX1NUMSwgICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAxMiAqLyAg
eyBVRF9JZnN0LCAgICAgICAgIE9fU1QyLCAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAog
IC8qIDEzICovICB7IFVEX0lmc3QsICAgICAgICAgT19TVDMsICAgT19OT05FLCAgT19OT05FLCAg
UF9ub25lIH0sCiAgLyogMTQgKi8gIHsgVURfSWZzdCwgICAgICAgICBPX1NUNCwgICBPX05PTkUs
ICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAxNSAqLyAgeyBVRF9JZnN0LCAgICAgICAgIE9fU1Q1
LCAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDE2ICovICB7IFVEX0lmc3QsICAg
ICAgICAgT19TVDYsICAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMTcgKi8gIHsg
VURfSWZzdCwgICAgICAgICBPX1NUNywgICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAv
KiAxOCAqLyAgeyBVRF9JZnN0cCwgICAgICAgIE9fU1QwLCAgIE9fTk9ORSwgIE9fTk9ORSwgIFBf
bm9uZSB9LAogIC8qIDE5ICovICB7IFVEX0lmc3RwLCAgICAgICAgT19TVDEsICAgT19OT05FLCAg
T19OT05FLCAgUF9ub25lIH0sCiAgLyogMUEgKi8gIHsgVURfSWZzdHAsICAgICAgICBPX1NUMiwg
ICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAxQiAqLyAgeyBVRF9JZnN0cCwgICAg
ICAgIE9fU1QzLCAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDFDICovICB7IFVE
X0lmc3RwLCAgICAgICAgT19TVDQsICAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyog
MUQgKi8gIHsgVURfSWZzdHAsICAgICAgICBPX1NUNSwgICBPX05PTkUsICBPX05PTkUsICBQX25v
bmUgfSwKICAvKiAxRSAqLyAgeyBVRF9JZnN0cCwgICAgICAgIE9fU1Q2LCAgIE9fTk9ORSwgIE9f
Tk9ORSwgIFBfbm9uZSB9LAogIC8qIDFGICovICB7IFVEX0lmc3RwLCAgICAgICAgT19TVDcsICAg
T19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMjAgKi8gIHsgVURfSWZ1Y29tLCAgICAg
ICBPX1NUMCwgICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAyMSAqLyAgeyBVRF9J
ZnVjb20sICAgICAgIE9fU1QxLCAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDIy
ICovICB7IFVEX0lmdWNvbSwgICAgICAgT19TVDIsICAgT19OT05FLCAgT19OT05FLCAgUF9ub25l
IH0sCiAgLyogMjMgKi8gIHsgVURfSWZ1Y29tLCAgICAgICBPX1NUMywgICBPX05PTkUsICBPX05P
TkUsICBQX25vbmUgfSwKICAvKiAyNCAqLyAgeyBVRF9JZnVjb20sICAgICAgIE9fU1Q0LCAgIE9f
Tk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDI1ICovICB7IFVEX0lmdWNvbSwgICAgICAg
T19TVDUsICAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMjYgKi8gIHsgVURfSWZ1
Y29tLCAgICAgICBPX1NUNiwgICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAyNyAq
LyAgeyBVRF9JZnVjb20sICAgICAgIE9fU1Q3LCAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9
LAogIC8qIDI4ICovICB7IFVEX0lmdWNvbXAsICAgICAgT19TVDAsICAgT19OT05FLCAgT19OT05F
LCAgUF9ub25lIH0sCiAgLyogMjkgKi8gIHsgVURfSWZ1Y29tcCwgICAgICBPX1NUMSwgICBPX05P
TkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAyQSAqLyAgeyBVRF9JZnVjb21wLCAgICAgIE9f
U1QyLCAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDJCICovICB7IFVEX0lmdWNv
bXAsICAgICAgT19TVDMsICAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMkMgKi8g
IHsgVURfSWZ1Y29tcCwgICAgICBPX1NUNCwgICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwK
ICAvKiAyRCAqLyAgeyBVRF9JZnVjb21wLCAgICAgIE9fU1Q1LCAgIE9fTk9ORSwgIE9fTk9ORSwg
IFBfbm9uZSB9LAogIC8qIDJFICovICB7IFVEX0lmdWNvbXAsICAgICAgT19TVDYsICAgT19OT05F
LCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMkYgKi8gIHsgVURfSWZ1Y29tcCwgICAgICBPX1NU
NywgICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAzMCAqLyAgeyBVRF9JaW52YWxp
ZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDMxICovICB7
IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAg
LyogMzIgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQ
X25vbmUgfSwKICAvKiAzMyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBP
X05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDM0ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05F
LCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMzUgKi8gIHsgVURfSWludmFsaWQs
ICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAzNiAqLyAgeyBV
RF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8q
IDM3ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9u
b25lIH0sCiAgLyogMzggKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19O
T05FLCAgICBQX25vbmUgfSwKICAvKiAzOSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwg
T19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDNBICovICB7IFVEX0lpbnZhbGlkLCAg
ICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogM0IgKi8gIHsgVURf
SWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAz
QyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9u
ZSB9LAogIC8qIDNEICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9O
RSwgICAgUF9ub25lIH0sCiAgLyogM0UgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9f
Tk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAzRiAqLyAgeyBVRF9JaW52YWxpZCwgICAg
IE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAp9OwoKc3RhdGljIHN0cnVjdCB1
ZF9pdGFiX2VudHJ5IGl0YWJfXzFieXRlX19vcF9kZV9fbW9kWzJdID0gewogIC8qIDAwICovICB7
IFVEX0lncnBfcmVnLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgSVRBQl9fMUJZVEVf
X09QX0RFX19NT0RfX09QXzAwX19SRUcgfSwKICAvKiAwMSAqLyAgeyBVRF9JZ3JwX3g4NywgICAg
IE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIElUQUJfXzFCWVRFX19PUF9ERV9fTU9EX19PUF8w
MV9fWDg3IH0sCn07CgpzdGF0aWMgc3RydWN0IHVkX2l0YWJfZW50cnkgaXRhYl9fMWJ5dGVfX29w
X2RlX19tb2RfX29wXzAwX19yZWdbOF0gPSB7CiAgLyogMDAgKi8gIHsgVURfSWZpYWRkLCAgICAg
ICBPX013LCAgICBPX05PTkUsICBPX05PTkUsICBQX2MxfFBfYXNvfFBfcmV4cnxQX3JleHh8UF9y
ZXhiIH0sCiAgLyogMDEgKi8gIHsgVURfSWZpbXVsLCAgICAgICBPX013LCAgICBPX05PTkUsICBP
X05PTkUsICBQX2MxfFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDIgKi8gIHsg
VURfSWZpY29tLCAgICAgICBPX013LCAgICBPX05PTkUsICBPX05PTkUsICBQX2MxfFBfYXNvfFBf
cmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDMgKi8gIHsgVURfSWZpY29tcCwgICAgICBPX013
LCAgICBPX05PTkUsICBPX05PTkUsICBQX2MxfFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0s
CiAgLyogMDQgKi8gIHsgVURfSWZpc3ViLCAgICAgICBPX013LCAgICBPX05PTkUsICBPX05PTkUs
ICBQX2MxfFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDUgKi8gIHsgVURfSWZp
c3ViciwgICAgICBPX013LCAgICBPX05PTkUsICBPX05PTkUsICBQX2MxfFBfYXNvfFBfcmV4cnxQ
X3JleHh8UF9yZXhiIH0sCiAgLyogMDYgKi8gIHsgVURfSWZpZGl2LCAgICAgICBPX013LCAgICBP
X05PTkUsICBPX05PTkUsICBQX2MxfFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyog
MDcgKi8gIHsgVURfSWZpZGl2ciwgICAgICBPX013LCAgICBPX05PTkUsICBPX05PTkUsICBQX2Mx
fFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCn07CgpzdGF0aWMgc3RydWN0IHVkX2l0YWJf
ZW50cnkgaXRhYl9fMWJ5dGVfX29wX2RlX19tb2RfX29wXzAxX194ODdbNjRdID0gewogIC8qIDAw
ICovICB7IFVEX0lmYWRkcCwgICAgICAgT19TVDAsICAgT19TVDAsICAgT19OT05FLCAgUF9ub25l
IH0sCiAgLyogMDEgKi8gIHsgVURfSWZhZGRwLCAgICAgICBPX1NUMSwgICBPX1NUMCwgICBPX05P
TkUsICBQX25vbmUgfSwKICAvKiAwMiAqLyAgeyBVRF9JZmFkZHAsICAgICAgIE9fU1QyLCAgIE9f
U1QwLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDAzICovICB7IFVEX0lmYWRkcCwgICAgICAg
T19TVDMsICAgT19TVDAsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMDQgKi8gIHsgVURfSWZh
ZGRwLCAgICAgICBPX1NUNCwgICBPX1NUMCwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAwNSAq
LyAgeyBVRF9JZmFkZHAsICAgICAgIE9fU1Q1LCAgIE9fU1QwLCAgIE9fTk9ORSwgIFBfbm9uZSB9
LAogIC8qIDA2ICovICB7IFVEX0lmYWRkcCwgICAgICAgT19TVDYsICAgT19TVDAsICAgT19OT05F
LCAgUF9ub25lIH0sCiAgLyogMDcgKi8gIHsgVURfSWZhZGRwLCAgICAgICBPX1NUNywgICBPX1NU
MCwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAwOCAqLyAgeyBVRF9JZm11bHAsICAgICAgIE9f
U1QwLCAgIE9fU1QwLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDA5ICovICB7IFVEX0lmbXVs
cCwgICAgICAgT19TVDEsICAgT19TVDAsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMEEgKi8g
IHsgVURfSWZtdWxwLCAgICAgICBPX1NUMiwgICBPX1NUMCwgICBPX05PTkUsICBQX25vbmUgfSwK
ICAvKiAwQiAqLyAgeyBVRF9JZm11bHAsICAgICAgIE9fU1QzLCAgIE9fU1QwLCAgIE9fTk9ORSwg
IFBfbm9uZSB9LAogIC8qIDBDICovICB7IFVEX0lmbXVscCwgICAgICAgT19TVDQsICAgT19TVDAs
ICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMEQgKi8gIHsgVURfSWZtdWxwLCAgICAgICBPX1NU
NSwgICBPX1NUMCwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAwRSAqLyAgeyBVRF9JZm11bHAs
ICAgICAgIE9fU1Q2LCAgIE9fU1QwLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDBGICovICB7
IFVEX0lmbXVscCwgICAgICAgT19TVDcsICAgT19TVDAsICAgT19OT05FLCAgUF9ub25lIH0sCiAg
LyogMTAgKi8gIHsgVURfSWZjb21wNSwgICAgICBPX1NUMCwgICBPX05PTkUsICBPX05PTkUsICBQ
X25vbmUgfSwKICAvKiAxMSAqLyAgeyBVRF9JZmNvbXA1LCAgICAgIE9fU1QxLCAgIE9fTk9ORSwg
IE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDEyICovICB7IFVEX0lmY29tcDUsICAgICAgT19TVDIs
ICAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMTMgKi8gIHsgVURfSWZjb21wNSwg
ICAgICBPX1NUMywgICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAxNCAqLyAgeyBV
RF9JZmNvbXA1LCAgICAgIE9fU1Q0LCAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8q
IDE1ICovICB7IFVEX0lmY29tcDUsICAgICAgT19TVDUsICAgT19OT05FLCAgT19OT05FLCAgUF9u
b25lIH0sCiAgLyogMTYgKi8gIHsgVURfSWZjb21wNSwgICAgICBPX1NUNiwgICBPX05PTkUsICBP
X05PTkUsICBQX25vbmUgfSwKICAvKiAxNyAqLyAgeyBVRF9JZmNvbXA1LCAgICAgIE9fU1Q3LCAg
IE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDE4ICovICB7IFVEX0lpbnZhbGlkLCAg
ICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMTkgKi8gIHsgVURf
SWZjb21wcCwgICAgICBPX05PTkUsICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAx
QSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9u
ZSB9LAogIC8qIDFCICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9O
RSwgICAgUF9ub25lIH0sCiAgLyogMUMgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9f
Tk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAxRCAqLyAgeyBVRF9JaW52YWxpZCwgICAg
IE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDFFICovICB7IFVEX0lp
bnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMUYg
Ki8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUg
fSwKICAvKiAyMCAqLyAgeyBVRF9JZnN1YnJwLCAgICAgIE9fU1QwLCAgIE9fU1QwLCAgIE9fTk9O
RSwgIFBfbm9uZSB9LAogIC8qIDIxICovICB7IFVEX0lmc3VicnAsICAgICAgT19TVDEsICAgT19T
VDAsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMjIgKi8gIHsgVURfSWZzdWJycCwgICAgICBP
X1NUMiwgICBPX1NUMCwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAyMyAqLyAgeyBVRF9JZnN1
YnJwLCAgICAgIE9fU1QzLCAgIE9fU1QwLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDI0ICov
ICB7IFVEX0lmc3VicnAsICAgICAgT19TVDQsICAgT19TVDAsICAgT19OT05FLCAgUF9ub25lIH0s
CiAgLyogMjUgKi8gIHsgVURfSWZzdWJycCwgICAgICBPX1NUNSwgICBPX1NUMCwgICBPX05PTkUs
ICBQX25vbmUgfSwKICAvKiAyNiAqLyAgeyBVRF9JZnN1YnJwLCAgICAgIE9fU1Q2LCAgIE9fU1Qw
LCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDI3ICovICB7IFVEX0lmc3VicnAsICAgICAgT19T
VDcsICAgT19TVDAsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMjggKi8gIHsgVURfSWZzdWJw
LCAgICAgICBPX1NUMCwgICBPX1NUMCwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAyOSAqLyAg
eyBVRF9JZnN1YnAsICAgICAgIE9fU1QxLCAgIE9fU1QwLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAog
IC8qIDJBICovICB7IFVEX0lmc3VicCwgICAgICAgT19TVDIsICAgT19TVDAsICAgT19OT05FLCAg
UF9ub25lIH0sCiAgLyogMkIgKi8gIHsgVURfSWZzdWJwLCAgICAgICBPX1NUMywgICBPX1NUMCwg
ICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAyQyAqLyAgeyBVRF9JZnN1YnAsICAgICAgIE9fU1Q0
LCAgIE9fU1QwLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDJEICovICB7IFVEX0lmc3VicCwg
ICAgICAgT19TVDUsICAgT19TVDAsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMkUgKi8gIHsg
VURfSWZzdWJwLCAgICAgICBPX1NUNiwgICBPX1NUMCwgICBPX05PTkUsICBQX25vbmUgfSwKICAv
KiAyRiAqLyAgeyBVRF9JZnN1YnAsICAgICAgIE9fU1Q3LCAgIE9fU1QwLCAgIE9fTk9ORSwgIFBf
bm9uZSB9LAogIC8qIDMwICovICB7IFVEX0lmZGl2cnAsICAgICAgT19TVDAsICAgT19TVDAsICAg
T19OT05FLCAgUF9ub25lIH0sCiAgLyogMzEgKi8gIHsgVURfSWZkaXZycCwgICAgICBPX1NUMSwg
ICBPX1NUMCwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAzMiAqLyAgeyBVRF9JZmRpdnJwLCAg
ICAgIE9fU1QyLCAgIE9fU1QwLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDMzICovICB7IFVE
X0lmZGl2cnAsICAgICAgT19TVDMsICAgT19TVDAsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyog
MzQgKi8gIHsgVURfSWZkaXZycCwgICAgICBPX1NUNCwgICBPX1NUMCwgICBPX05PTkUsICBQX25v
bmUgfSwKICAvKiAzNSAqLyAgeyBVRF9JZmRpdnJwLCAgICAgIE9fU1Q1LCAgIE9fU1QwLCAgIE9f
Tk9ORSwgIFBfbm9uZSB9LAogIC8qIDM2ICovICB7IFVEX0lmZGl2cnAsICAgICAgT19TVDYsICAg
T19TVDAsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMzcgKi8gIHsgVURfSWZkaXZycCwgICAg
ICBPX1NUNywgICBPX1NUMCwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAzOCAqLyAgeyBVRF9J
ZmRpdnAsICAgICAgIE9fU1QwLCAgIE9fU1QwLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDM5
ICovICB7IFVEX0lmZGl2cCwgICAgICAgT19TVDEsICAgT19TVDAsICAgT19OT05FLCAgUF9ub25l
IH0sCiAgLyogM0EgKi8gIHsgVURfSWZkaXZwLCAgICAgICBPX1NUMiwgICBPX1NUMCwgICBPX05P
TkUsICBQX25vbmUgfSwKICAvKiAzQiAqLyAgeyBVRF9JZmRpdnAsICAgICAgIE9fU1QzLCAgIE9f
U1QwLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDNDICovICB7IFVEX0lmZGl2cCwgICAgICAg
T19TVDQsICAgT19TVDAsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogM0QgKi8gIHsgVURfSWZk
aXZwLCAgICAgICBPX1NUNSwgICBPX1NUMCwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAzRSAq
LyAgeyBVRF9JZmRpdnAsICAgICAgIE9fU1Q2LCAgIE9fU1QwLCAgIE9fTk9ORSwgIFBfbm9uZSB9
LAogIC8qIDNGICovICB7IFVEX0lmZGl2cCwgICAgICAgT19TVDcsICAgT19TVDAsICAgT19OT05F
LCAgUF9ub25lIH0sCn07CgpzdGF0aWMgc3RydWN0IHVkX2l0YWJfZW50cnkgaXRhYl9fMWJ5dGVf
X29wX2RmX19tb2RbMl0gPSB7CiAgLyogMDAgKi8gIHsgVURfSWdycF9yZWcsICAgICBPX05PTkUs
IE9fTk9ORSwgT19OT05FLCAgICBJVEFCX18xQllURV9fT1BfREZfX01PRF9fT1BfMDBfX1JFRyB9
LAogIC8qIDAxICovICB7IFVEX0lncnBfeDg3LCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwg
ICAgSVRBQl9fMUJZVEVfX09QX0RGX19NT0RfX09QXzAxX19YODcgfSwKfTsKCnN0YXRpYyBzdHJ1
Y3QgdWRfaXRhYl9lbnRyeSBpdGFiX18xYnl0ZV9fb3BfZGZfX21vZF9fb3BfMDBfX3JlZ1s4XSA9
IHsKICAvKiAwMCAqLyAgeyBVRF9JZmlsZCwgICAgICAgIE9fTXcsICAgIE9fTk9ORSwgIE9fTk9O
RSwgIFBfYzF8UF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAwMSAqLyAgeyBVRF9J
ZmlzdHRwLCAgICAgIE9fTXcsICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfYzF8UF9hc298UF9yZXhy
fFBfcmV4eHxQX3JleGIgfSwKICAvKiAwMiAqLyAgeyBVRF9JZmlzdCwgICAgICAgIE9fTXcsICAg
IE9fTk9ORSwgIE9fTk9ORSwgIFBfYzF8UF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAv
KiAwMyAqLyAgeyBVRF9JZmlzdHAsICAgICAgIE9fTXcsICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBf
YzF8UF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAwNCAqLyAgeyBVRF9JZmJsZCwg
ICAgICAgIE9fTXQsICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9y
ZXhiIH0sCiAgLyogMDUgKi8gIHsgVURfSWZpbGQsICAgICAgICBPX01xLCAgICBPX05PTkUsICBP
X05PTkUsICBQX2MxfFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDYgKi8gIHsg
VURfSWZic3RwLCAgICAgICBPX010LCAgICBPX05PTkUsICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8
UF9yZXh4fFBfcmV4YiB9LAogIC8qIDA3ICovICB7IFVEX0lmaXN0cCwgICAgICAgT19NcSwgICAg
T19OT05FLCAgT19OT05FLCAgUF9jMXxQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAp9OwoK
c3RhdGljIHN0cnVjdCB1ZF9pdGFiX2VudHJ5IGl0YWJfXzFieXRlX19vcF9kZl9fbW9kX19vcF8w
MV9feDg3WzY0XSA9IHsKICAvKiAwMCAqLyAgeyBVRF9JZmZyZWVwLCAgICAgIE9fU1QwLCAgIE9f
Tk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDAxICovICB7IFVEX0lmZnJlZXAsICAgICAg
T19TVDEsICAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMDIgKi8gIHsgVURfSWZm
cmVlcCwgICAgICBPX1NUMiwgICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAwMyAq
LyAgeyBVRF9JZmZyZWVwLCAgICAgIE9fU1QzLCAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9
LAogIC8qIDA0ICovICB7IFVEX0lmZnJlZXAsICAgICAgT19TVDQsICAgT19OT05FLCAgT19OT05F
LCAgUF9ub25lIH0sCiAgLyogMDUgKi8gIHsgVURfSWZmcmVlcCwgICAgICBPX1NUNSwgICBPX05P
TkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAwNiAqLyAgeyBVRF9JZmZyZWVwLCAgICAgIE9f
U1Q2LCAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDA3ICovICB7IFVEX0lmZnJl
ZXAsICAgICAgT19TVDcsICAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMDggKi8g
IHsgVURfSWZ4Y2g3LCAgICAgICBPX1NUMCwgICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwK
ICAvKiAwOSAqLyAgeyBVRF9JZnhjaDcsICAgICAgIE9fU1QxLCAgIE9fTk9ORSwgIE9fTk9ORSwg
IFBfbm9uZSB9LAogIC8qIDBBICovICB7IFVEX0lmeGNoNywgICAgICAgT19TVDIsICAgT19OT05F
LCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMEIgKi8gIHsgVURfSWZ4Y2g3LCAgICAgICBPX1NU
MywgICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAwQyAqLyAgeyBVRF9JZnhjaDcs
ICAgICAgIE9fU1Q0LCAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDBEICovICB7
IFVEX0lmeGNoNywgICAgICAgT19TVDUsICAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAg
LyogMEUgKi8gIHsgVURfSWZ4Y2g3LCAgICAgICBPX1NUNiwgICBPX05PTkUsICBPX05PTkUsICBQ
X25vbmUgfSwKICAvKiAwRiAqLyAgeyBVRF9JZnhjaDcsICAgICAgIE9fU1Q3LCAgIE9fTk9ORSwg
IE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDEwICovICB7IFVEX0lmc3RwOCwgICAgICAgT19TVDAs
ICAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMTEgKi8gIHsgVURfSWZzdHA4LCAg
ICAgICBPX1NUMSwgICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAxMiAqLyAgeyBV
RF9JZnN0cDgsICAgICAgIE9fU1QyLCAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8q
IDEzICovICB7IFVEX0lmc3RwOCwgICAgICAgT19TVDMsICAgT19OT05FLCAgT19OT05FLCAgUF9u
b25lIH0sCiAgLyogMTQgKi8gIHsgVURfSWZzdHA4LCAgICAgICBPX1NUNCwgICBPX05PTkUsICBP
X05PTkUsICBQX25vbmUgfSwKICAvKiAxNSAqLyAgeyBVRF9JZnN0cDgsICAgICAgIE9fU1Q1LCAg
IE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDE2ICovICB7IFVEX0lmc3RwOCwgICAg
ICAgT19TVDYsICAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMTcgKi8gIHsgVURf
SWZzdHA4LCAgICAgICBPX1NUNywgICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAx
OCAqLyAgeyBVRF9JZnN0cDksICAgICAgIE9fU1QwLCAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9u
ZSB9LAogIC8qIDE5ICovICB7IFVEX0lmc3RwOSwgICAgICAgT19TVDEsICAgT19OT05FLCAgT19O
T05FLCAgUF9ub25lIH0sCiAgLyogMUEgKi8gIHsgVURfSWZzdHA5LCAgICAgICBPX1NUMiwgICBP
X05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAxQiAqLyAgeyBVRF9JZnN0cDksICAgICAg
IE9fU1QzLCAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDFDICovICB7IFVEX0lm
c3RwOSwgICAgICAgT19TVDQsICAgT19OT05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMUQg
Ki8gIHsgVURfSWZzdHA5LCAgICAgICBPX1NUNSwgICBPX05PTkUsICBPX05PTkUsICBQX25vbmUg
fSwKICAvKiAxRSAqLyAgeyBVRF9JZnN0cDksICAgICAgIE9fU1Q2LCAgIE9fTk9ORSwgIE9fTk9O
RSwgIFBfbm9uZSB9LAogIC8qIDFGICovICB7IFVEX0lmc3RwOSwgICAgICAgT19TVDcsICAgT19O
T05FLCAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMjAgKi8gIHsgVURfSWZuc3RzdywgICAgICBP
X0FYLCAgICBPX05PTkUsICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAyMSAqLyAgeyBVRF9JaW52
YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDIyICov
ICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0s
CiAgLyogMjMgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAg
ICBQX25vbmUgfSwKICAvKiAyNCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05F
LCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDI1ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19O
T05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMjYgKi8gIHsgVURfSWludmFs
aWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAyNyAqLyAg
eyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAog
IC8qIDI4ICovICB7IFVEX0lmdWNvbWlwLCAgICAgT19TVDAsICAgT19TVDAsICAgT19OT05FLCAg
UF9ub25lIH0sCiAgLyogMjkgKi8gIHsgVURfSWZ1Y29taXAsICAgICBPX1NUMCwgICBPX1NUMSwg
ICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAyQSAqLyAgeyBVRF9JZnVjb21pcCwgICAgIE9fU1Qw
LCAgIE9fU1QyLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDJCICovICB7IFVEX0lmdWNvbWlw
LCAgICAgT19TVDAsICAgT19TVDMsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMkMgKi8gIHsg
VURfSWZ1Y29taXAsICAgICBPX1NUMCwgICBPX1NUNCwgICBPX05PTkUsICBQX25vbmUgfSwKICAv
KiAyRCAqLyAgeyBVRF9JZnVjb21pcCwgICAgIE9fU1QwLCAgIE9fU1Q1LCAgIE9fTk9ORSwgIFBf
bm9uZSB9LAogIC8qIDJFICovICB7IFVEX0lmdWNvbWlwLCAgICAgT19TVDAsICAgT19TVDYsICAg
T19OT05FLCAgUF9ub25lIH0sCiAgLyogMkYgKi8gIHsgVURfSWZ1Y29taXAsICAgICBPX1NUMCwg
ICBPX1NUNywgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAzMCAqLyAgeyBVRF9JZmNvbWlwLCAg
ICAgIE9fU1QwLCAgIE9fU1QwLCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDMxICovICB7IFVE
X0lmY29taXAsICAgICAgT19TVDAsICAgT19TVDEsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyog
MzIgKi8gIHsgVURfSWZjb21pcCwgICAgICBPX1NUMCwgICBPX1NUMiwgICBPX05PTkUsICBQX25v
bmUgfSwKICAvKiAzMyAqLyAgeyBVRF9JZmNvbWlwLCAgICAgIE9fU1QwLCAgIE9fU1QzLCAgIE9f
Tk9ORSwgIFBfbm9uZSB9LAogIC8qIDM0ICovICB7IFVEX0lmY29taXAsICAgICAgT19TVDAsICAg
T19TVDQsICAgT19OT05FLCAgUF9ub25lIH0sCiAgLyogMzUgKi8gIHsgVURfSWZjb21pcCwgICAg
ICBPX1NUMCwgICBPX1NUNSwgICBPX05PTkUsICBQX25vbmUgfSwKICAvKiAzNiAqLyAgeyBVRF9J
ZmNvbWlwLCAgICAgIE9fU1QwLCAgIE9fU1Q2LCAgIE9fTk9ORSwgIFBfbm9uZSB9LAogIC8qIDM3
ICovICB7IFVEX0lmY29taXAsICAgICAgT19TVDAsICAgT19TVDcsICAgT19OT05FLCAgUF9ub25l
IH0sCiAgLyogMzggKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05F
LCAgICBQX25vbmUgfSwKICAvKiAzOSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19O
T05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDNBICovICB7IFVEX0lpbnZhbGlkLCAgICAg
T19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogM0IgKi8gIHsgVURfSWlu
dmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAzQyAq
LyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9
LAogIC8qIDNEICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwg
ICAgUF9ub25lIH0sCiAgLyogM0UgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9O
RSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAzRiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9f
Tk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAp9OwoKc3RhdGljIHN0cnVjdCB1ZF9p
dGFiX2VudHJ5IGl0YWJfXzFieXRlX19vcF9lM19fYXNpemVbM10gPSB7CiAgLyogMDAgKi8gIHsg
VURfSWpjeHosICAgICAgICBPX0piLCAgICBPX05PTkUsICBPX05PTkUsICBQX2FzbyB9LAogIC8q
IDAxICovICB7IFVEX0lqZWN4eiwgICAgICAgT19KYiwgICAgT19OT05FLCAgT19OT05FLCAgUF9h
c28gfSwKICAvKiAwMiAqLyAgeyBVRF9JanJjeHosICAgICAgIE9fSmIsICAgIE9fTk9ORSwgIE9f
Tk9ORSwgIFBfYXNvIH0sCn07CgpzdGF0aWMgc3RydWN0IHVkX2l0YWJfZW50cnkgaXRhYl9fMWJ5
dGVfX29wX2Y2X19yZWdbOF0gPSB7CiAgLyogMDAgKi8gIHsgVURfSXRlc3QsICAgICAgICBPX0Vi
LCAgICBPX0liLCAgICBPX05PTkUsICBQX2MxfFBfYXNvfFBfcmV4d3xQX3JleHJ8UF9yZXh4fFBf
cmV4YiB9LAogIC8qIDAxICovICB7IFVEX0l0ZXN0LCAgICAgICAgT19FYiwgICAgT19JYiwgICAg
T19OT05FLCAgUF9jMXxQX2Fzb3xQX3JleHd8UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAw
MiAqLyAgeyBVRF9Jbm90LCAgICAgICAgIE9fRWIsICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfYzF8
UF9hc298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDMgKi8gIHsgVURfSW5l
ZywgICAgICAgICBPX0ViLCAgICBPX05PTkUsICBPX05PTkUsICBQX2MxfFBfYXNvfFBfcmV4d3xQ
X3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDA0ICovICB7IFVEX0ltdWwsICAgICAgICAgT19F
YiwgICAgT19OT05FLCAgT19OT05FLCAgUF9jMXxQX2Fzb3xQX3JleHd8UF9yZXhyfFBfcmV4eHxQ
X3JleGIgfSwKICAvKiAwNSAqLyAgeyBVRF9JaW11bCwgICAgICAgIE9fRWIsICAgIE9fTk9ORSwg
IE9fTk9ORSwgIFBfYzF8UF9hc298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyog
MDYgKi8gIHsgVURfSWRpdiwgICAgICAgICBPX0ViLCAgICBPX05PTkUsICBPX05PTkUsICBQX2Mx
fFBfYXNvfFBfcmV4d3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDA3ICovICB7IFVEX0lp
ZGl2LCAgICAgICAgT19FYiwgICAgT19OT05FLCAgT19OT05FLCAgUF9jMXxQX2Fzb3xQX3JleHd8
UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKfTsKCnN0YXRpYyBzdHJ1Y3QgdWRfaXRhYl9lbnRyeSBp
dGFiX18xYnl0ZV9fb3BfZjdfX3JlZ1s4XSA9IHsKICAvKiAwMCAqLyAgeyBVRF9JdGVzdCwgICAg
ICAgIE9fRXYsICAgIE9fSXosICAgIE9fTk9ORSwgIFBfYzF8UF9hc298UF9vc298UF9yZXh3fFBf
cmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDEgKi8gIHsgVURfSXRlc3QsICAgICAgICBPX0V2
LCAgICBPX0l6LCAgICBPX05PTkUsICBQX2MxfFBfYXNvfFBfb3NvfFBfcmV4d3xQX3JleHJ8UF9y
ZXh4fFBfcmV4YiB9LAogIC8qIDAyICovICB7IFVEX0lub3QsICAgICAgICAgT19FdiwgICAgT19O
T05FLCAgT19OT05FLCAgUF9jMXxQX2Fzb3xQX29zb3xQX3JleHd8UF9yZXhyfFBfcmV4eHxQX3Jl
eGIgfSwKICAvKiAwMyAqLyAgeyBVRF9JbmVnLCAgICAgICAgIE9fRXYsICAgIE9fTk9ORSwgIE9f
Tk9ORSwgIFBfYzF8UF9hc298UF9vc298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAg
LyogMDQgKi8gIHsgVURfSW11bCwgICAgICAgICBPX0V2LCAgICBPX05PTkUsICBPX05PTkUsICBQ
X2MxfFBfYXNvfFBfb3NvfFBfcmV4d3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDA1ICov
ICB7IFVEX0lpbXVsLCAgICAgICAgT19FdiwgICAgT19OT05FLCAgT19OT05FLCAgUF9jMXxQX2Fz
b3xQX29zb3xQX3JleHd8UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAwNiAqLyAgeyBVRF9J
ZGl2LCAgICAgICAgIE9fRXYsICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfYzF8UF9hc298UF9vc298
UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDcgKi8gIHsgVURfSWlkaXYsICAg
ICAgICBPX0V2LCAgICBPX05PTkUsICBPX05PTkUsICBQX2MxfFBfYXNvfFBfb3NvfFBfcmV4d3xQ
X3JleHJ8UF9yZXh4fFBfcmV4YiB9LAp9OwoKc3RhdGljIHN0cnVjdCB1ZF9pdGFiX2VudHJ5IGl0
YWJfXzFieXRlX19vcF9mZV9fcmVnWzhdID0gewogIC8qIDAwICovICB7IFVEX0lpbmMsICAgICAg
ICAgT19FYiwgICAgT19OT05FLCAgT19OT05FLCAgUF9jMXxQX2Fzb3xQX3JleHd8UF9yZXhyfFBf
cmV4eHxQX3JleGIgfSwKICAvKiAwMSAqLyAgeyBVRF9JZGVjLCAgICAgICAgIE9fRWIsICAgIE9f
Tk9ORSwgIE9fTk9ORSwgIFBfYzF8UF9hc298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0s
CiAgLyogMDIgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAg
ICBQX25vbmUgfSwKICAvKiAwMyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05F
LCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDA0ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19O
T05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMDUgKi8gIHsgVURfSWludmFs
aWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAwNiAqLyAg
eyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAog
IC8qIDA3ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAg
UF9ub25lIH0sCn07CgpzdGF0aWMgc3RydWN0IHVkX2l0YWJfZW50cnkgaXRhYl9fMWJ5dGVfX29w
X2ZmX19yZWdbOF0gPSB7CiAgLyogMDAgKi8gIHsgVURfSWluYywgICAgICAgICBPX0V2LCAgICBP
X05PTkUsICBPX05PTkUsICBQX2MxfFBfYXNvfFBfb3NvfFBfcmV4d3xQX3JleHJ8UF9yZXh4fFBf
cmV4YiB9LAogIC8qIDAxICovICB7IFVEX0lkZWMsICAgICAgICAgT19FdiwgICAgT19OT05FLCAg
T19OT05FLCAgUF9jMXxQX2Fzb3xQX29zb3xQX3JleHd8UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwK
ICAvKiAwMiAqLyAgeyBVRF9JY2FsbCwgICAgICAgIE9fRXYsICAgIE9fTk9ORSwgIE9fTk9ORSwg
IFBfYzF8UF9kZWY2NHxQX2Fzb3xQX29zb3xQX3JleHd8UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwK
ICAvKiAwMyAqLyAgeyBVRF9JY2FsbCwgICAgICAgIE9fRXAsICAgIE9fTk9ORSwgIE9fTk9ORSwg
IFBfYzF8UF9hc298UF9vc298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMDQg
Ki8gIHsgVURfSWptcCwgICAgICAgICBPX0V2LCAgICBPX05PTkUsICBPX05PTkUsICBQX2MxfFBf
ZGVmNjR8UF9kZXBNfFBfYXNvfFBfb3NvfFBfcmV4d3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAog
IC8qIDA1ICovICB7IFVEX0lqbXAsICAgICAgICAgT19FcCwgICAgT19OT05FLCAgT19OT05FLCAg
UF9jMXxQX2Fzb3xQX29zb3xQX3JleHd8UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAwNiAq
LyAgeyBVRF9JcHVzaCwgICAgICAgIE9fRXYsICAgIE9fTk9ORSwgIE9fTk9ORSwgIFBfYzF8UF9k
ZWY2NHxQX2Fzb3xQX29zb3xQX3JleHd8UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAwNyAq
LyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9
LAp9OwoKc3RhdGljIHN0cnVjdCB1ZF9pdGFiX2VudHJ5IGl0YWJfXzNkbm93WzI1Nl0gPSB7CiAg
LyogMDAgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQ
X25vbmUgfSwKICAvKiAwMSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBP
X05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDAyICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05F
LCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMDMgKi8gIHsgVURfSWludmFsaWQs
ICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAwNCAqLyAgeyBV
RF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8q
IDA1ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9u
b25lIH0sCiAgLyogMDYgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19O
T05FLCAgICBQX25vbmUgfSwKICAvKiAwNyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwg
T19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDA4ICovICB7IFVEX0lpbnZhbGlkLCAg
ICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMDkgKi8gIHsgVURf
SWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAw
QSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9u
ZSB9LAogIC8qIDBCICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9O
RSwgICAgUF9ub25lIH0sCiAgLyogMEMgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9f
Tk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAwRCAqLyAgeyBVRF9JaW52YWxpZCwgICAg
IE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDBFICovICB7IFVEX0lp
bnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMEYg
Ki8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUg
fSwKICAvKiAxMCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUs
ICAgIFBfbm9uZSB9LAogIC8qIDExICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05P
TkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMTIgKi8gIHsgVURfSWludmFsaWQsICAgICBP
X05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAxMyAqLyAgeyBVRF9JaW52
YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDE0ICov
ICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0s
CiAgLyogMTUgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAg
ICBQX25vbmUgfSwKICAvKiAxNiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05F
LCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDE3ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19O
T05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMTggKi8gIHsgVURfSWludmFs
aWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAxOSAqLyAg
eyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAog
IC8qIDFBICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAg
UF9ub25lIH0sCiAgLyogMUIgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwg
T19OT05FLCAgICBQX25vbmUgfSwKICAvKiAxQyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9O
RSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDFEICovICB7IFVEX0lpbnZhbGlk
LCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMUUgKi8gIHsg
VURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAv
KiAxRiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBf
bm9uZSB9LAogIC8qIDIwICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9f
Tk9ORSwgICAgUF9ub25lIH0sCiAgLyogMjEgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUs
IE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAyMiAqLyAgeyBVRF9JaW52YWxpZCwg
ICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDIzICovICB7IFVE
X0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyog
MjQgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25v
bmUgfSwKICAvKiAyNSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05P
TkUsICAgIFBfbm9uZSB9LAogIC8qIDI2ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBP
X05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMjcgKi8gIHsgVURfSWludmFsaWQsICAg
ICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAyOCAqLyAgeyBVRF9J
aW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDI5
ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25l
IH0sCiAgLyogMkEgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05F
LCAgICBQX25vbmUgfSwKICAvKiAyQiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19O
T05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDJDICovICB7IFVEX0lpbnZhbGlkLCAgICAg
T19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMkQgKi8gIHsgVURfSWlu
dmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAyRSAq
LyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9
LAogIC8qIDJGICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwg
ICAgUF9ub25lIH0sCiAgLyogMzAgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9O
RSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAzMSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9f
Tk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDMyICovICB7IFVEX0lpbnZh
bGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMzMgKi8g
IHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwK
ICAvKiAzNCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAg
IFBfbm9uZSB9LAogIC8qIDM1ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUs
IE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMzYgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05P
TkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAzNyAqLyAgeyBVRF9JaW52YWxp
ZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDM4ICovICB7
IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAg
LyogMzkgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQ
X25vbmUgfSwKICAvKiAzQSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBP
X05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDNCICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05F
LCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogM0MgKi8gIHsgVURfSWludmFsaWQs
ICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAzRCAqLyAgeyBV
RF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8q
IDNFICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9u
b25lIH0sCiAgLyogM0YgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19O
T05FLCAgICBQX25vbmUgfSwKICAvKiA0MCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwg
T19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDQxICovICB7IFVEX0lpbnZhbGlkLCAg
ICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogNDIgKi8gIHsgVURf
SWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA0
MyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9u
ZSB9LAogIC8qIDQ0ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9O
RSwgICAgUF9ub25lIH0sCiAgLyogNDUgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9f
Tk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA0NiAqLyAgeyBVRF9JaW52YWxpZCwgICAg
IE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDQ3ICovICB7IFVEX0lp
bnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogNDgg
Ki8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUg
fSwKICAvKiA0OSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUs
ICAgIFBfbm9uZSB9LAogIC8qIDRBICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05P
TkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogNEIgKi8gIHsgVURfSWludmFsaWQsICAgICBP
X05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA0QyAqLyAgeyBVRF9JaW52
YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDREICov
ICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0s
CiAgLyogNEUgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAg
ICBQX25vbmUgfSwKICAvKiA0RiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05F
LCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDUwICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19O
T05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogNTEgKi8gIHsgVURfSWludmFs
aWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA1MiAqLyAg
eyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAog
IC8qIDUzICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAg
UF9ub25lIH0sCiAgLyogNTQgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwg
T19OT05FLCAgICBQX25vbmUgfSwKICAvKiA1NSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9O
RSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDU2ICovICB7IFVEX0lpbnZhbGlk
LCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogNTcgKi8gIHsg
VURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAv
KiA1OCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBf
bm9uZSB9LAogIC8qIDU5ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9f
Tk9ORSwgICAgUF9ub25lIH0sCiAgLyogNUEgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUs
IE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA1QiAqLyAgeyBVRF9JaW52YWxpZCwg
ICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDVDICovICB7IFVE
X0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyog
NUQgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25v
bmUgfSwKICAvKiA1RSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05P
TkUsICAgIFBfbm9uZSB9LAogIC8qIDVGICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBP
X05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogNjAgKi8gIHsgVURfSWludmFsaWQsICAg
ICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA2MSAqLyAgeyBVRF9J
aW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDYy
ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25l
IH0sCiAgLyogNjMgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05F
LCAgICBQX25vbmUgfSwKICAvKiA2NCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19O
T05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDY1ICovICB7IFVEX0lpbnZhbGlkLCAgICAg
T19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogNjYgKi8gIHsgVURfSWlu
dmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA2NyAq
LyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9
LAogIC8qIDY4ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwg
ICAgUF9ub25lIH0sCiAgLyogNjkgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9O
RSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA2QSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9f
Tk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDZCICovICB7IFVEX0lpbnZh
bGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogNkMgKi8g
IHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwK
ICAvKiA2RCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAg
IFBfbm9uZSB9LAogIC8qIDZFICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUs
IE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogNkYgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05P
TkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA3MCAqLyAgeyBVRF9JaW52YWxp
ZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDcxICovICB7
IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAg
LyogNzIgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQ
X25vbmUgfSwKICAvKiA3MyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBP
X05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDc0ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05F
LCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogNzUgKi8gIHsgVURfSWludmFsaWQs
ICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA3NiAqLyAgeyBV
RF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8q
IDc3ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9u
b25lIH0sCiAgLyogNzggKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19O
T05FLCAgICBQX25vbmUgfSwKICAvKiA3OSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwg
T19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDdBICovICB7IFVEX0lpbnZhbGlkLCAg
ICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogN0IgKi8gIHsgVURf
SWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA3
QyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9u
ZSB9LAogIC8qIDdEICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9O
RSwgICAgUF9ub25lIH0sCiAgLyogN0UgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9f
Tk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA3RiAqLyAgeyBVRF9JaW52YWxpZCwgICAg
IE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDgwICovICB7IFVEX0lp
bnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogODEg
Ki8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUg
fSwKICAvKiA4MiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUs
ICAgIFBfbm9uZSB9LAogIC8qIDgzICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05P
TkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogODQgKi8gIHsgVURfSWludmFsaWQsICAgICBP
X05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA4NSAqLyAgeyBVRF9JaW52
YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDg2ICov
ICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0s
CiAgLyogODcgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAg
ICBQX25vbmUgfSwKICAvKiA4OCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05F
LCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDg5ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19O
T05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogOEEgKi8gIHsgVURfSWludmFs
aWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA4QiAqLyAg
eyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAog
IC8qIDhDICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAg
UF9ub25lIH0sCiAgLyogOEQgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwg
T19OT05FLCAgICBQX25vbmUgfSwKICAvKiA4RSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9O
RSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDhGICovICB7IFVEX0lpbnZhbGlk
LCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogOTAgKi8gIHsg
VURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAv
KiA5MSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBf
bm9uZSB9LAogIC8qIDkyICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9f
Tk9ORSwgICAgUF9ub25lIH0sCiAgLyogOTMgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUs
IE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA5NCAqLyAgeyBVRF9JaW52YWxpZCwg
ICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDk1ICovICB7IFVE
X0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyog
OTYgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25v
bmUgfSwKICAvKiA5NyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05P
TkUsICAgIFBfbm9uZSB9LAogIC8qIDk4ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBP
X05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogOTkgKi8gIHsgVURfSWludmFsaWQsICAg
ICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA5QSAqLyAgeyBVRF9J
aW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDlC
ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25l
IH0sCiAgLyogOUMgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05F
LCAgICBQX25vbmUgfSwKICAvKiA5RCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19O
T05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDlFICovICB7IFVEX0lpbnZhbGlkLCAgICAg
T19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogOUYgKi8gIHsgVURfSWlu
dmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBBMCAq
LyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9
LAogIC8qIEExICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwg
ICAgUF9ub25lIH0sCiAgLyogQTIgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9O
RSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBBMyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9f
Tk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEE0ICovICB7IFVEX0lpbnZh
bGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogQTUgKi8g
IHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwK
ICAvKiBBNiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAg
IFBfbm9uZSB9LAogIC8qIEE3ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUs
IE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogQTggKi8gIHsgVURfSWludmFsaWQsICAgICBPX05P
TkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBBOSAqLyAgeyBVRF9JaW52YWxp
ZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEFBICovICB7
IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAg
LyogQUIgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQ
X25vbmUgfSwKICAvKiBBQyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBP
X05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEFEICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05F
LCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogQUUgKi8gIHsgVURfSWludmFsaWQs
ICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBBRiAqLyAgeyBV
RF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8q
IEIwICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9u
b25lIH0sCiAgLyogQjEgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19O
T05FLCAgICBQX25vbmUgfSwKICAvKiBCMiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwg
T19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEIzICovICB7IFVEX0lpbnZhbGlkLCAg
ICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogQjQgKi8gIHsgVURf
SWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBC
NSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9u
ZSB9LAogIC8qIEI2ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9O
RSwgICAgUF9ub25lIH0sCiAgLyogQjcgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9f
Tk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBCOCAqLyAgeyBVRF9JaW52YWxpZCwgICAg
IE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEI5ICovICB7IFVEX0lp
bnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogQkEg
Ki8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUg
fSwKICAvKiBCQiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUs
ICAgIFBfbm9uZSB9LAogIC8qIEJDICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05P
TkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogQkQgKi8gIHsgVURfSWludmFsaWQsICAgICBP
X05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBCRSAqLyAgeyBVRF9JaW52
YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEJGICov
ICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0s
CiAgLyogQzAgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAg
ICBQX25vbmUgfSwKICAvKiBDMSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05F
LCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEMyICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19O
T05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogQzMgKi8gIHsgVURfSWludmFs
aWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBDNCAqLyAg
eyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAog
IC8qIEM1ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAg
UF9ub25lIH0sCiAgLyogQzYgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwg
T19OT05FLCAgICBQX25vbmUgfSwKICAvKiBDNyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9O
RSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEM4ICovICB7IFVEX0lpbnZhbGlk
LCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogQzkgKi8gIHsg
VURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAv
KiBDQSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBf
bm9uZSB9LAogIC8qIENCICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9f
Tk9ORSwgICAgUF9ub25lIH0sCiAgLyogQ0MgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUs
IE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBDRCAqLyAgeyBVRF9JaW52YWxpZCwg
ICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIENFICovICB7IFVE
X0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyog
Q0YgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25v
bmUgfSwKICAvKiBEMCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05P
TkUsICAgIFBfbm9uZSB9LAogIC8qIEQxICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBP
X05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogRDIgKi8gIHsgVURfSWludmFsaWQsICAg
ICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBEMyAqLyAgeyBVRF9J
aW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEQ0
ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25l
IH0sCiAgLyogRDUgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05F
LCAgICBQX25vbmUgfSwKICAvKiBENiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19O
T05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEQ3ICovICB7IFVEX0lpbnZhbGlkLCAgICAg
T19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogRDggKi8gIHsgVURfSWlu
dmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBEOSAq
LyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9
LAogIC8qIERBICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwg
ICAgUF9ub25lIH0sCiAgLyogREIgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9O
RSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBEQyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9f
Tk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEREICovICB7IFVEX0lpbnZh
bGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogREUgKi8g
IHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwK
ICAvKiBERiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAg
IFBfbm9uZSB9LAogIC8qIEUwICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUs
IE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogRTEgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05P
TkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBFMiAqLyAgeyBVRF9JaW52YWxp
ZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEUzICovICB7
IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAg
LyogRTQgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQ
X25vbmUgfSwKICAvKiBFNSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBP
X05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEU2ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05F
LCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogRTcgKi8gIHsgVURfSWludmFsaWQs
ICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBFOCAqLyAgeyBV
RF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8q
IEU5ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9u
b25lIH0sCiAgLyogRUEgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19O
T05FLCAgICBQX25vbmUgfSwKICAvKiBFQiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwg
T19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEVDICovICB7IFVEX0lpbnZhbGlkLCAg
ICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogRUQgKi8gIHsgVURf
SWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBF
RSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9u
ZSB9LAogIC8qIEVGICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9O
RSwgICAgUF9ub25lIH0sCiAgLyogRjAgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9f
Tk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBGMSAqLyAgeyBVRF9JaW52YWxpZCwgICAg
IE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEYyICovICB7IFVEX0lp
bnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogRjMg
Ki8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUg
fSwKICAvKiBGNCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUs
ICAgIFBfbm9uZSB9LAogIC8qIEY1ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05P
TkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogRjYgKi8gIHsgVURfSWludmFsaWQsICAgICBP
X05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBGNyAqLyAgeyBVRF9JaW52
YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEY4ICov
ICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0s
CiAgLyogRjkgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAg
ICBQX25vbmUgfSwKICAvKiBGQSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05F
LCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEZCICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19O
T05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogRkMgKi8gIHsgVURfSWludmFs
aWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBGRCAqLyAg
eyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAog
IC8qIEZFICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAg
UF9ub25lIH0sCiAgLyogRkYgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwg
T19OT05FLCAgICBQX25vbmUgfSwKfTsKCnN0YXRpYyBzdHJ1Y3QgdWRfaXRhYl9lbnRyeSBpdGFi
X19wZnhfc3NlNjZfXzBmWzI1Nl0gPSB7CiAgLyogMDAgKi8gIHsgVURfSWludmFsaWQsICAgICBP
X05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAwMSAqLyAgeyBVRF9JaW52
YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDAyICov
ICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0s
CiAgLyogMDMgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAg
ICBQX25vbmUgfSwKICAvKiAwNCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05F
LCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDA1ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19O
T05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMDYgKi8gIHsgVURfSWludmFs
aWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAwNyAqLyAg
eyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAog
IC8qIDA4ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAg
UF9ub25lIH0sCiAgLyogMDkgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwg
T19OT05FLCAgICBQX25vbmUgfSwKICAvKiAwQSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9O
RSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDBCICovICB7IFVEX0lpbnZhbGlk
LCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMEMgKi8gIHsg
VURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAv
KiAwRCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBf
bm9uZSB9LAogIC8qIDBFICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9f
Tk9ORSwgICAgUF9ub25lIH0sCiAgLyogMEYgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUs
IE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAxMCAqLyAgeyBVRF9JbW92dXBkLCAg
ICAgIE9fViwgICAgIE9fVywgICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhi
IH0sCiAgLyogMTEgKi8gIHsgVURfSW1vdnVwZCwgICAgICBPX1csICAgICBPX1YsICAgICBPX05P
TkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDEyICovICB7IFVEX0ltb3Zs
cGQsICAgICAgT19WLCAgICAgT19NLCAgICAgT19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQ
X3JleGIgfSwKICAvKiAxMyAqLyAgeyBVRF9JbW92bHBkLCAgICAgIE9fTSwgICAgIE9fViwgICAg
IE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMTQgKi8gIHsgVURf
SXVucGNrbHBkLCAgICBPX1YsICAgICBPX1csICAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9y
ZXh4fFBfcmV4YiB9LAogIC8qIDE1ICovICB7IFVEX0l1bnBja2hwZCwgICAgT19WLCAgICAgT19X
LCAgICAgT19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAxNiAqLyAg
eyBVRF9JbW92aHBkLCAgICAgIE9fViwgICAgIE9fTSwgICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4
cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMTcgKi8gIHsgVURfSW1vdmhwZCwgICAgICBPX00sICAg
ICBPX1YsICAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDE4
ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25l
IH0sCiAgLyogMTkgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05F
LCAgICBQX25vbmUgfSwKICAvKiAxQSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19O
T05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDFCICovICB7IFVEX0lpbnZhbGlkLCAgICAg
T19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMUMgKi8gIHsgVURfSWlu
dmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAxRCAq
LyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9
LAogIC8qIDFFICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwg
ICAgUF9ub25lIH0sCiAgLyogMUYgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9O
RSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAyMCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9f
Tk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDIxICovICB7IFVEX0lpbnZh
bGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMjIgKi8g
IHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwK
ICAvKiAyMyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAg
IFBfbm9uZSB9LAogIC8qIDI0ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUs
IE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMjUgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05P
TkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAyNiAqLyAgeyBVRF9JaW52YWxp
ZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDI3ICovICB7
IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAg
LyogMjggKi8gIHsgVURfSW1vdmFwZCwgICAgICBPX1YsICAgICBPX1csICAgICBPX05PTkUsICBQ
X2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDI5ICovICB7IFVEX0ltb3ZhcGQsICAg
ICAgT19XLCAgICAgT19WLCAgICAgT19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIg
fSwKICAvKiAyQSAqLyAgeyBVRF9JY3Z0cGkycGQsICAgIE9fViwgICAgIE9fUSwgICAgIE9fTk9O
RSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMkIgKi8gIHsgVURfSW1vdm50
cGQsICAgICBPX00sICAgICBPX1YsICAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBf
cmV4YiB9LAogIC8qIDJDICovICB7IFVEX0ljdnR0cGQycGksICAgT19QLCAgICAgT19XLCAgICAg
T19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAyRCAqLyAgeyBVRF9J
Y3Z0cGQycGksICAgIE9fUCwgICAgIE9fVywgICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3Jl
eHh8UF9yZXhiIH0sCiAgLyogMkUgKi8gIHsgVURfSXVjb21pc2QsICAgICBPX1YsICAgICBPX1cs
ICAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDJGICovICB7
IFVEX0ljb21pc2QsICAgICAgT19WLCAgICAgT19XLCAgICAgT19OT05FLCAgUF9hc298UF9yZXhy
fFBfcmV4eHxQX3JleGIgfSwKICAvKiAzMCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwg
T19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDMxICovICB7IFVEX0lpbnZhbGlkLCAg
ICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMzIgKi8gIHsgVURf
SWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAz
MyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9u
ZSB9LAogIC8qIDM0ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9O
RSwgICAgUF9ub25lIH0sCiAgLyogMzUgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9f
Tk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAzNiAqLyAgeyBVRF9JaW52YWxpZCwgICAg
IE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDM3ICovICB7IFVEX0lp
bnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMzgg
Ki8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUg
fSwKICAvKiAzOSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUs
ICAgIFBfbm9uZSB9LAogIC8qIDNBICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05P
TkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogM0IgKi8gIHsgVURfSWludmFsaWQsICAgICBP
X05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAzQyAqLyAgeyBVRF9JaW52
YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDNEICov
ICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0s
CiAgLyogM0UgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAg
ICBQX25vbmUgfSwKICAvKiAzRiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05F
LCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDQwICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19O
T05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogNDEgKi8gIHsgVURfSWludmFs
aWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA0MiAqLyAg
eyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAog
IC8qIDQzICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAg
UF9ub25lIH0sCiAgLyogNDQgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwg
T19OT05FLCAgICBQX25vbmUgfSwKICAvKiA0NSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9O
RSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDQ2ICovICB7IFVEX0lpbnZhbGlk
LCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogNDcgKi8gIHsg
VURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAv
KiA0OCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBf
bm9uZSB9LAogIC8qIDQ5ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9f
Tk9ORSwgICAgUF9ub25lIH0sCiAgLyogNEEgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUs
IE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA0QiAqLyAgeyBVRF9JaW52YWxpZCwg
ICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDRDICovICB7IFVE
X0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyog
NEQgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25v
bmUgfSwKICAvKiA0RSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05P
TkUsICAgIFBfbm9uZSB9LAogIC8qIDRGICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBP
X05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogNTAgKi8gIHsgVURfSW1vdm1za3BkLCAg
ICBPX0dkLCAgICBPX1ZSLCAgICBPX05PTkUsICBQX29zb3xQX3JleHJ8UF9yZXhiIH0sCiAgLyog
NTEgKi8gIHsgVURfSXNxcnRwZCwgICAgICBPX1YsICAgICBPX1csICAgICBPX05PTkUsICBQX2Fz
b3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDUyICovICB7IFVEX0lpbnZhbGlkLCAgICAg
T19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogNTMgKi8gIHsgVURfSWlu
dmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA1NCAq
LyAgeyBVRF9JYW5kcGQsICAgICAgIE9fViwgICAgIE9fVywgICAgIE9fTk9ORSwgIFBfYXNvfFBf
cmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogNTUgKi8gIHsgVURfSWFuZG5wZCwgICAgICBPX1Ys
ICAgICBPX1csICAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8q
IDU2ICovICB7IFVEX0lvcnBkLCAgICAgICAgT19WLCAgICAgT19XLCAgICAgT19OT05FLCAgUF9h
c298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiA1NyAqLyAgeyBVRF9JeG9ycGQsICAgICAg
IE9fViwgICAgIE9fVywgICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0s
CiAgLyogNTggKi8gIHsgVURfSWFkZHBkLCAgICAgICBPX1YsICAgICBPX1csICAgICBPX05PTkUs
ICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDU5ICovICB7IFVEX0ltdWxwZCwg
ICAgICAgT19WLCAgICAgT19XLCAgICAgT19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3Jl
eGIgfSwKICAvKiA1QSAqLyAgeyBVRF9JY3Z0cGQycHMsICAgIE9fViwgICAgIE9fVywgICAgIE9f
Tk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogNUIgKi8gIHsgVURfSWN2
dHBzMmRxLCAgICBPX1YsICAgICBPX1csICAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4
fFBfcmV4YiB9LAogIC8qIDVDICovICB7IFVEX0lzdWJwZCwgICAgICAgT19WLCAgICAgT19XLCAg
ICAgT19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiA1RCAqLyAgeyBV
RF9JbWlucGQsICAgICAgIE9fViwgICAgIE9fVywgICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQ
X3JleHh8UF9yZXhiIH0sCiAgLyogNUUgKi8gIHsgVURfSWRpdnBkLCAgICAgICBPX1YsICAgICBP
X1csICAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDVGICov
ICB7IFVEX0ltYXhwZCwgICAgICAgT19WLCAgICAgT19XLCAgICAgT19OT05FLCAgUF9hc298UF9y
ZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiA2MCAqLyAgeyBVRF9JcHVucGNrbGJ3LCAgIE9fViwg
ICAgIE9fVywgICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyog
NjEgKi8gIHsgVURfSXB1bnBja2x3ZCwgICBPX1YsICAgICBPX1csICAgICBPX05PTkUsICBQX2Fz
b3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDYyICovICB7IFVEX0lwdW5wY2tsZHEsICAg
T19WLCAgICAgT19XLCAgICAgT19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwK
ICAvKiA2MyAqLyAgeyBVRF9JcGFja3Nzd2IsICAgIE9fViwgICAgIE9fVywgICAgIE9fTk9ORSwg
IFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogNjQgKi8gIHsgVURfSXBjbXBndGIs
ICAgICBPX1YsICAgICBPX1csICAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4
YiB9LAogIC8qIDY1ICovICB7IFVEX0lwY21wZ3R3LCAgICAgT19WLCAgICAgT19XLCAgICAgT19O
T05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiA2NiAqLyAgeyBVRF9JcGNt
cGd0ZCwgICAgIE9fViwgICAgIE9fVywgICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8
UF9yZXhiIH0sCiAgLyogNjcgKi8gIHsgVURfSXBhY2t1c3diLCAgICBPX1YsICAgICBPX1csICAg
ICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDY4ICovICB7IFVE
X0lwdW5wY2toYncsICAgT19WLCAgICAgT19XLCAgICAgT19OT05FLCAgUF9hc298UF9yZXhyfFBf
cmV4eHxQX3JleGIgfSwKICAvKiA2OSAqLyAgeyBVRF9JcHVucGNraHdkLCAgIE9fViwgICAgIE9f
VywgICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogNkEgKi8g
IHsgVURfSXB1bnBja2hkcSwgICBPX1YsICAgICBPX1csICAgICBPX05PTkUsICBQX2Fzb3xQX3Jl
eHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDZCICovICB7IFVEX0lwYWNrc3NkdywgICAgT19WLCAg
ICAgT19XLCAgICAgT19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiA2
QyAqLyAgeyBVRF9JcHVucGNrbHFkcSwgIE9fViwgICAgIE9fVywgICAgIE9fTk9ORSwgIFBfYXNv
fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogNkQgKi8gIHsgVURfSXB1bnBja2hxZHEsICBP
X1YsICAgICBPX1csICAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAog
IC8qIDZFICovICB7IFVEX0ltb3ZkLCAgICAgICAgT19WLCAgICAgT19FeCwgICAgT19OT05FLCAg
UF9jMnxQX2Fzb3xQX3JleHd8UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiA2RiAqLyAgeyBV
RF9JbW92cWEsICAgICAgIE9fViwgICAgIE9fVywgICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQ
X3JleHh8UF9yZXhiIH0sCiAgLyogNzAgKi8gIHsgVURfSXBzaHVmZCwgICAgICBPX1YsICAgICBP
X1csICAgICBPX0liLCAgICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDcxICov
ICB7IFVEX0lncnBfcmVnLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgSVRBQl9fUEZY
X1NTRTY2X18wRl9fT1BfNzFfX1JFRyB9LAogIC8qIDcyICovICB7IFVEX0lncnBfcmVnLCAgICAg
T19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgSVRBQl9fUEZYX1NTRTY2X18wRl9fT1BfNzJfX1JF
RyB9LAogIC8qIDczICovICB7IFVEX0lncnBfcmVnLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9O
RSwgICAgSVRBQl9fUEZYX1NTRTY2X18wRl9fT1BfNzNfX1JFRyB9LAogIC8qIDc0ICovICB7IFVE
X0lwY21wZXFiLCAgICAgT19WLCAgICAgT19XLCAgICAgT19OT05FLCAgUF9hc298UF9yZXhyfFBf
cmV4eHxQX3JleGIgfSwKICAvKiA3NSAqLyAgeyBVRF9JcGNtcGVxdywgICAgIE9fViwgICAgIE9f
VywgICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogNzYgKi8g
IHsgVURfSXBjbXBlcWQsICAgICBPX1YsICAgICBPX1csICAgICBPX05PTkUsICBQX2Fzb3xQX3Jl
eHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDc3ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05F
LCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogNzggKi8gIHsgVURfSWludmFsaWQs
ICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA3OSAqLyAgeyBV
RF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8q
IDdBICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9u
b25lIH0sCiAgLyogN0IgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19O
T05FLCAgICBQX25vbmUgfSwKICAvKiA3QyAqLyAgeyBVRF9JaGFkZHBkLCAgICAgIE9fViwgICAg
IE9fVywgICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogN0Qg
Ki8gIHsgVURfSWhzdWJwZCwgICAgICBPX1YsICAgICBPX1csICAgICBPX05PTkUsICBQX2Fzb3xQ
X3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDdFICovICB7IFVEX0ltb3ZkLCAgICAgICAgT19F
eCwgICAgT19WLCAgICAgT19OT05FLCAgUF9jMXxQX2Fzb3xQX3JleHd8UF9yZXhyfFBfcmV4eHxQ
X3JleGIgfSwKICAvKiA3RiAqLyAgeyBVRF9JbW92ZHFhLCAgICAgIE9fVywgICAgIE9fViwgICAg
IE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogODAgKi8gIHsgVURf
SWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA4
MSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9u
ZSB9LAogIC8qIDgyICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9O
RSwgICAgUF9ub25lIH0sCiAgLyogODMgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9f
Tk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA4NCAqLyAgeyBVRF9JaW52YWxpZCwgICAg
IE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDg1ICovICB7IFVEX0lp
bnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogODYg
Ki8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUg
fSwKICAvKiA4NyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUs
ICAgIFBfbm9uZSB9LAogIC8qIDg4ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05P
TkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogODkgKi8gIHsgVURfSWludmFsaWQsICAgICBP
X05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA4QSAqLyAgeyBVRF9JaW52
YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDhCICov
ICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0s
CiAgLyogOEMgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAg
ICBQX25vbmUgfSwKICAvKiA4RCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05F
LCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDhFICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19O
T05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogOEYgKi8gIHsgVURfSWludmFs
aWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA5MCAqLyAg
eyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAog
IC8qIDkxICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAg
UF9ub25lIH0sCiAgLyogOTIgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwg
T19OT05FLCAgICBQX25vbmUgfSwKICAvKiA5MyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9O
RSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDk0ICovICB7IFVEX0lpbnZhbGlk
LCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogOTUgKi8gIHsg
VURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAv
KiA5NiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBf
bm9uZSB9LAogIC8qIDk3ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9f
Tk9ORSwgICAgUF9ub25lIH0sCiAgLyogOTggKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUs
IE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA5OSAqLyAgeyBVRF9JaW52YWxpZCwg
ICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDlBICovICB7IFVE
X0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyog
OUIgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25v
bmUgfSwKICAvKiA5QyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05P
TkUsICAgIFBfbm9uZSB9LAogIC8qIDlEICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBP
X05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogOUUgKi8gIHsgVURfSWludmFsaWQsICAg
ICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA5RiAqLyAgeyBVRF9J
aW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEEw
ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25l
IH0sCiAgLyogQTEgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05F
LCAgICBQX25vbmUgfSwKICAvKiBBMiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19O
T05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEEzICovICB7IFVEX0lpbnZhbGlkLCAgICAg
T19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogQTQgKi8gIHsgVURfSWlu
dmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBBNSAq
LyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9
LAogIC8qIEE2ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwg
ICAgUF9ub25lIH0sCiAgLyogQTcgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9O
RSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBBOCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9f
Tk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEE5ICovICB7IFVEX0lpbnZh
bGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogQUEgKi8g
IHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwK
ICAvKiBBQiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAg
IFBfbm9uZSB9LAogIC8qIEFDICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUs
IE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogQUQgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05P
TkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBBRSAqLyAgeyBVRF9JaW52YWxp
ZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEFGICovICB7
IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAg
LyogQjAgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQ
X25vbmUgfSwKICAvKiBCMSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBP
X05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEIyICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05F
LCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogQjMgKi8gIHsgVURfSWludmFsaWQs
ICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBCNCAqLyAgeyBV
RF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8q
IEI1ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9u
b25lIH0sCiAgLyogQjYgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19O
T05FLCAgICBQX25vbmUgfSwKICAvKiBCNyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwg
T19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEI4ICovICB7IFVEX0lpbnZhbGlkLCAg
ICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogQjkgKi8gIHsgVURf
SWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBC
QSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9u
ZSB9LAogIC8qIEJCICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9O
RSwgICAgUF9ub25lIH0sCiAgLyogQkMgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9f
Tk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBCRCAqLyAgeyBVRF9JaW52YWxpZCwgICAg
IE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEJFICovICB7IFVEX0lp
bnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogQkYg
Ki8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUg
fSwKICAvKiBDMCAqLyAgeyBVRF9JeGFkZCwgICAgICAgIE9fRWIsICAgIE9fR2IsICAgIE9fTk9O
RSwgIFBfYXNvfFBfcmV4d3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIEMxICovICB7IFVE
X0l4YWRkLCAgICAgICAgT19FdiwgICAgT19HdiwgICAgT19OT05FLCAgUF9hc298UF9vc298UF9y
ZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogQzIgKi8gIHsgVURfSWNtcHBkLCAgICAg
ICBPX1YsICAgICBPX1csICAgICBPX0liLCAgICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9
LAogIC8qIEMzICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwg
ICAgUF9ub25lIH0sCiAgLyogQzQgKi8gIHsgVURfSXBpbnNydywgICAgICBPX1YsICAgICBPX0V3
LCAgICBPX0liLCAgICBQX2Fzb3xQX3JleHd8UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiBD
NSAqLyAgeyBVRF9JcGV4dHJ3LCAgICAgIE9fR2QsICAgIE9fVlIsICAgIE9fSWIsICAgIFBfYXNv
fFBfcmV4cnxQX3JleGIgfSwKICAvKiBDNiAqLyAgeyBVRF9Jc2h1ZnBkLCAgICAgIE9fViwgICAg
IE9fVywgICAgIE9fSWIsICAgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogQzcg
Ki8gIHsgVURfSWdycF9yZWcsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBJVEFCX19Q
RlhfU1NFNjZfXzBGX19PUF9DN19fUkVHIH0sCiAgLyogQzggKi8gIHsgVURfSWludmFsaWQsICAg
ICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBDOSAqLyAgeyBVRF9J
aW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIENB
ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25l
IH0sCiAgLyogQ0IgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05F
LCAgICBQX25vbmUgfSwKICAvKiBDQyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19O
T05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIENEICovICB7IFVEX0lpbnZhbGlkLCAgICAg
T19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogQ0UgKi8gIHsgVURfSWlu
dmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBDRiAq
LyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9
LAogIC8qIEQwICovICB7IFVEX0lhZGRzdWJwZCwgICAgT19WLCAgICAgT19XLCAgICAgT19OT05F
LCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiBEMSAqLyAgeyBVRF9JcHNybHcs
ICAgICAgIE9fViwgICAgIE9fVywgICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9y
ZXhiIH0sCiAgLyogRDIgKi8gIHsgVURfSXBzcmxkLCAgICAgICBPX1YsICAgICBPX1csICAgICBP
X05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIEQzICovICB7IFVEX0lw
c3JscSwgICAgICAgT19WLCAgICAgT19XLCAgICAgT19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4
eHxQX3JleGIgfSwKICAvKiBENCAqLyAgeyBVRF9JcGFkZHEsICAgICAgIE9fViwgICAgIE9fVywg
ICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogRDUgKi8gIHsg
VURfSXBtdWxsdywgICAgICBPX1YsICAgICBPX1csICAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8
UF9yZXh4fFBfcmV4YiB9LAogIC8qIEQ2ICovICB7IFVEX0ltb3ZxLCAgICAgICAgT19XLCAgICAg
T19WLCAgICAgT19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiBENyAq
LyAgeyBVRF9JcG1vdm1za2IsICAgIE9fR2QsICAgIE9fVlIsICAgIE9fTk9ORSwgIFBfcmV4cnxQ
X3JleGIgfSwKICAvKiBEOCAqLyAgeyBVRF9JcHN1YnVzYiwgICAgIE9fViwgICAgIE9fVywgICAg
IE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogRDkgKi8gIHsgVURf
SXBzdWJ1c3csICAgICBPX1YsICAgICBPX1csICAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9y
ZXh4fFBfcmV4YiB9LAogIC8qIERBICovICB7IFVEX0lwbWludWIsICAgICAgT19WLCAgICAgT19X
LCAgICAgT19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiBEQiAqLyAg
eyBVRF9JcGFuZCwgICAgICAgIE9fViwgICAgIE9fVywgICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4
cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogREMgKi8gIHsgVURfSXBzdWJ1c2IsICAgICBPX1YsICAg
ICBPX1csICAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIERE
ICovICB7IFVEX0lwdW5wY2toYncsICAgT19WLCAgICAgT19XLCAgICAgT19OT05FLCAgUF9hc298
UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiBERSAqLyAgeyBVRF9JcG1heHViLCAgICAgIE9f
ViwgICAgIE9fVywgICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAg
LyogREYgKi8gIHsgVURfSXBhbmRuLCAgICAgICBPX1YsICAgICBPX1csICAgICBPX05PTkUsICBQ
X2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIEUwICovICB7IFVEX0lwYXZnYiwgICAg
ICAgT19WLCAgICAgT19XLCAgICAgT19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIg
fSwKICAvKiBFMSAqLyAgeyBVRF9JcHNyYXcsICAgICAgIE9fViwgICAgIE9fVywgICAgIE9fTk9O
RSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogRTIgKi8gIHsgVURfSXBzcmFk
LCAgICAgICBPX1YsICAgICBPX1csICAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBf
cmV4YiB9LAogIC8qIEUzICovICB7IFVEX0lwYXZndywgICAgICAgT19WLCAgICAgT19XLCAgICAg
T19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiBFNCAqLyAgeyBVRF9J
cG11bGh1dywgICAgIE9fViwgICAgIE9fVywgICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3Jl
eHh8UF9yZXhiIH0sCiAgLyogRTUgKi8gIHsgVURfSXBtdWxodywgICAgICBPX1YsICAgICBPX1cs
ICAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIEU2ICovICB7
IFVEX0ljdnR0cGQyZHEsICAgT19WLCAgICAgT19XLCAgICAgT19OT05FLCAgUF9ub25lIH0sCiAg
LyogRTcgKi8gIHsgVURfSW1vdm50ZHEsICAgICBPX00sICAgICBPX1YsICAgICBPX05PTkUsICBQ
X2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIEU4ICovICB7IFVEX0lwc3Vic2IsICAg
ICAgT19WLCAgICAgT19XLCAgICAgT19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIg
fSwKICAvKiBFOSAqLyAgeyBVRF9JcHN1YnN3LCAgICAgIE9fViwgICAgIE9fVywgICAgIE9fTk9O
RSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogRUEgKi8gIHsgVURfSXBtaW5z
dywgICAgICBPX1YsICAgICBPX1csICAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBf
cmV4YiB9LAogIC8qIEVCICovICB7IFVEX0lwb3IsICAgICAgICAgT19WLCAgICAgT19XLCAgICAg
T19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiBFQyAqLyAgeyBVRF9J
cGFkZHNiLCAgICAgIE9fViwgICAgIE9fVywgICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3Jl
eHh8UF9yZXhiIH0sCiAgLyogRUQgKi8gIHsgVURfSXBhZGRzdywgICAgICBPX1YsICAgICBPX1cs
ICAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIEVFICovICB7
IFVEX0lwbWF4c3csICAgICAgT19WLCAgICAgT19XLCAgICAgT19OT05FLCAgUF9hc298UF9yZXhy
fFBfcmV4eHxQX3JleGIgfSwKICAvKiBFRiAqLyAgeyBVRF9JcHhvciwgICAgICAgIE9fViwgICAg
IE9fVywgICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogRjAg
Ki8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUg
fSwKICAvKiBGMSAqLyAgeyBVRF9JcHNsbHcsICAgICAgIE9fViwgICAgIE9fVywgICAgIE9fTk9O
RSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogRjIgKi8gIHsgVURfSXBzbGxk
LCAgICAgICBPX1YsICAgICBPX1csICAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBf
cmV4YiB9LAogIC8qIEYzICovICB7IFVEX0lwc2xscSwgICAgICAgT19WLCAgICAgT19XLCAgICAg
T19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiBGNCAqLyAgeyBVRF9J
cG11bHVkcSwgICAgIE9fViwgICAgIE9fVywgICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3Jl
eHh8UF9yZXhiIH0sCiAgLyogRjUgKi8gIHsgVURfSXBtYWRkd2QsICAgICBPX1YsICAgICBPX1cs
ICAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIEY2ICovICB7
IFVEX0lwc2FkYncsICAgICAgT19WLCAgICAgT19XLCAgICAgT19OT05FLCAgUF9hc298UF9yZXhy
fFBfcmV4eHxQX3JleGIgfSwKICAvKiBGNyAqLyAgeyBVRF9JbWFza21vdnEsICAgIE9fViwgICAg
IE9fVywgICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogRjgg
Ki8gIHsgVURfSXBzdWJiLCAgICAgICBPX1YsICAgICBPX1csICAgICBPX05PTkUsICBQX2Fzb3xQ
X3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIEY5ICovICB7IFVEX0lwc3VidywgICAgICAgT19W
LCAgICAgT19XLCAgICAgT19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAv
KiBGQSAqLyAgeyBVRF9JcHN1YmQsICAgICAgIE9fViwgICAgIE9fVywgICAgIE9fTk9ORSwgIFBf
YXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogRkIgKi8gIHsgVURfSXBzdWJxLCAgICAg
ICBPX1YsICAgICBPX1csICAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9
LAogIC8qIEZDICovICB7IFVEX0lwYWRkYiwgICAgICAgT19WLCAgICAgT19XLCAgICAgT19OT05F
LCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiBGRCAqLyAgeyBVRF9JcGFkZHcs
ICAgICAgIE9fViwgICAgIE9fVywgICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9y
ZXhiIH0sCiAgLyogRkUgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19O
T05FLCAgICBQX25vbmUgfSwKICAvKiBGRiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwg
T19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAp9OwoKc3RhdGljIHN0cnVjdCB1ZF9pdGFiX2Vu
dHJ5IGl0YWJfX3BmeF9zc2U2Nl9fMGZfX29wXzcxX19yZWdbOF0gPSB7CiAgLyogMDAgKi8gIHsg
VURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAv
KiAwMSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBf
bm9uZSB9LAogIC8qIDAyICovICB7IFVEX0lwc3JsdywgICAgICAgT19WUiwgICAgT19JYiwgICAg
T19OT05FLCAgUF9yZXhiIH0sCiAgLyogMDMgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUs
IE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAwNCAqLyAgeyBVRF9JcHNyYXcsICAg
ICAgIE9fVlIsICAgIE9fSWIsICAgIE9fTk9ORSwgIFBfcmV4YiB9LAogIC8qIDA1ICovICB7IFVE
X0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyog
MDYgKi8gIHsgVURfSXBzbGx3LCAgICAgICBPX1ZSLCAgICBPX0liLCAgICBPX05PTkUsICBQX3Jl
eGIgfSwKICAvKiAwNyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05P
TkUsICAgIFBfbm9uZSB9LAp9OwoKc3RhdGljIHN0cnVjdCB1ZF9pdGFiX2VudHJ5IGl0YWJfX3Bm
eF9zc2U2Nl9fMGZfX29wXzcyX19yZWdbOF0gPSB7CiAgLyogMDAgKi8gIHsgVURfSWludmFsaWQs
ICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAwMSAqLyAgeyBV
RF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8q
IDAyICovICB7IFVEX0lwc3JsZCwgICAgICAgT19WUiwgICAgT19JYiwgICAgT19OT05FLCAgUF9y
ZXhiIH0sCiAgLyogMDMgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19O
T05FLCAgICBQX25vbmUgfSwKICAvKiAwNCAqLyAgeyBVRF9JcHNyYWQsICAgICAgIE9fVlIsICAg
IE9fSWIsICAgIE9fTk9ORSwgIFBfcmV4YiB9LAogIC8qIDA1ICovICB7IFVEX0lpbnZhbGlkLCAg
ICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMDYgKi8gIHsgVURf
SXBzbGxkLCAgICAgICBPX1ZSLCAgICBPX0liLCAgICBPX05PTkUsICBQX3JleGIgfSwKICAvKiAw
NyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9u
ZSB9LAp9OwoKc3RhdGljIHN0cnVjdCB1ZF9pdGFiX2VudHJ5IGl0YWJfX3BmeF9zc2U2Nl9fMGZf
X29wXzczX19yZWdbOF0gPSB7CiAgLyogMDAgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUs
IE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAwMSAqLyAgeyBVRF9JaW52YWxpZCwg
ICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDAyICovICB7IFVE
X0lwc3JscSwgICAgICAgT19WUiwgICAgT19JYiwgICAgT19OT05FLCAgUF9yZXhiIH0sCiAgLyog
MDMgKi8gIHsgVURfSXBzcmxkcSwgICAgICBPX1ZSLCAgICBPX0liLCAgICBPX05PTkUsICBQX3Jl
eGIgfSwKICAvKiAwNCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05P
TkUsICAgIFBfbm9uZSB9LAogIC8qIDA1ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBP
X05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMDYgKi8gIHsgVURfSXBzbGxxLCAgICAg
ICBPX1ZSLCAgICBPX0liLCAgICBPX05PTkUsICBQX3JleGIgfSwKICAvKiAwNyAqLyAgeyBVRF9J
cHNsbGRxLCAgICAgIE9fVlIsICAgIE9fSWIsICAgIE9fTk9ORSwgIFBfcmV4YiB9LAp9OwoKc3Rh
dGljIHN0cnVjdCB1ZF9pdGFiX2VudHJ5IGl0YWJfX3BmeF9zc2U2Nl9fMGZfX29wX2M3X19yZWdb
OF0gPSB7CiAgLyogMDAgKi8gIHsgVURfSWdycF92ZW5kb3IsICBPX05PTkUsIE9fTk9ORSwgT19O
T05FLCAgICBJVEFCX19QRlhfU1NFNjZfXzBGX19PUF9DN19fUkVHX19PUF8wMF9fVkVORE9SIH0s
CiAgLyogMDEgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAg
ICBQX25vbmUgfSwKICAvKiAwMiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05F
LCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDAzICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19O
T05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMDQgKi8gIHsgVURfSWludmFs
aWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAwNSAqLyAg
eyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAog
IC8qIDA2ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAg
UF9ub25lIH0sCiAgLyogMDcgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwg
T19OT05FLCAgICBQX25vbmUgfSwKfTsKCnN0YXRpYyBzdHJ1Y3QgdWRfaXRhYl9lbnRyeSBpdGFi
X19wZnhfc3NlNjZfXzBmX19vcF9jN19fcmVnX19vcF8wMF9fdmVuZG9yWzJdID0gewogIC8qIDAw
ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25l
IH0sCiAgLyogMDEgKi8gIHsgVURfSXZtY2xlYXIsICAgICBPX01xLCAgICBPX05PTkUsICBPX05P
TkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAp9OwoKc3RhdGljIHN0cnVjdCB1ZF9p
dGFiX2VudHJ5IGl0YWJfX3BmeF9zc2VmMl9fMGZbMjU2XSA9IHsKICAvKiAwMCAqLyAgeyBVRF9J
aW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDAx
ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25l
IH0sCiAgLyogMDIgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05F
LCAgICBQX25vbmUgfSwKICAvKiAwMyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19O
T05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDA0ICovICB7IFVEX0lpbnZhbGlkLCAgICAg
T19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMDUgKi8gIHsgVURfSWlu
dmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAwNiAq
LyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9
LAogIC8qIDA3ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwg
ICAgUF9ub25lIH0sCiAgLyogMDggKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9O
RSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAwOSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9f
Tk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDBBICovICB7IFVEX0lpbnZh
bGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMEIgKi8g
IHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwK
ICAvKiAwQyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAg
IFBfbm9uZSB9LAogIC8qIDBEICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUs
IE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMEUgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05P
TkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAwRiAqLyAgeyBVRF9JaW52YWxp
ZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDEwICovICB7
IFVEX0ltb3ZzZCwgICAgICAgT19WLCAgICAgT19XLCAgICAgT19OT05FLCAgUF9hc298UF9yZXhy
fFBfcmV4eHxQX3JleGIgfSwKICAvKiAxMSAqLyAgeyBVRF9JbW92c2QsICAgICAgIE9fVywgICAg
IE9fViwgICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMTIg
Ki8gIHsgVURfSW1vdmRkdXAsICAgICBPX1YsICAgICBPX1csICAgICBPX05PTkUsICBQX2Fzb3xQ
X3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDEzICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19O
T05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMTQgKi8gIHsgVURfSWludmFs
aWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAxNSAqLyAg
eyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAog
IC8qIDE2ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAg
UF9ub25lIH0sCiAgLyogMTcgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwg
T19OT05FLCAgICBQX25vbmUgfSwKICAvKiAxOCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9O
RSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDE5ICovICB7IFVEX0lpbnZhbGlk
LCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMUEgKi8gIHsg
VURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAv
KiAxQiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBf
bm9uZSB9LAogIC8qIDFDICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9f
Tk9ORSwgICAgUF9ub25lIH0sCiAgLyogMUQgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUs
IE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAxRSAqLyAgeyBVRF9JaW52YWxpZCwg
ICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDFGICovICB7IFVE
X0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyog
MjAgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25v
bmUgfSwKICAvKiAyMSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05P
TkUsICAgIFBfbm9uZSB9LAogIC8qIDIyICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBP
X05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMjMgKi8gIHsgVURfSWludmFsaWQsICAg
ICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAyNCAqLyAgeyBVRF9J
aW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDI1
ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25l
IH0sCiAgLyogMjYgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05F
LCAgICBQX25vbmUgfSwKICAvKiAyNyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19O
T05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDI4ICovICB7IFVEX0lpbnZhbGlkLCAgICAg
T19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMjkgKi8gIHsgVURfSWlu
dmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAyQSAq
LyAgeyBVRF9JY3Z0c2kyc2QsICAgIE9fViwgICAgIE9fRXgsICAgIE9fTk9ORSwgIFBfYzJ8UF9h
c298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMkIgKi8gIHsgVURfSWludmFs
aWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAyQyAqLyAg
eyBVRF9JY3Z0dHNkMnNpLCAgIE9fR3Z3LCAgIE9fVywgICAgIE9fTk9ORSwgIFBfYzF8UF9hc298
UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAyRCAqLyAgeyBVRF9JY3Z0c2Qyc2ksICAgIE9f
R3Z3LCAgIE9fVywgICAgIE9fTk9ORSwgIFBfYzF8UF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIg
fSwKICAvKiAyRSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUs
ICAgIFBfbm9uZSB9LAogIC8qIDJGICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05P
TkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMzAgKi8gIHsgVURfSWludmFsaWQsICAgICBP
X05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAzMSAqLyAgeyBVRF9JaW52
YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDMyICov
ICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0s
CiAgLyogMzMgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAg
ICBQX25vbmUgfSwKICAvKiAzNCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05F
LCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDM1ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19O
T05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMzYgKi8gIHsgVURfSWludmFs
aWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAzNyAqLyAg
eyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAog
IC8qIDM4ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAg
UF9ub25lIH0sCiAgLyogMzkgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwg
T19OT05FLCAgICBQX25vbmUgfSwKICAvKiAzQSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9O
RSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDNCICovICB7IFVEX0lpbnZhbGlk
LCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogM0MgKi8gIHsg
VURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAv
KiAzRCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBf
bm9uZSB9LAogIC8qIDNFICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9f
Tk9ORSwgICAgUF9ub25lIH0sCiAgLyogM0YgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUs
IE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA0MCAqLyAgeyBVRF9JaW52YWxpZCwg
ICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDQxICovICB7IFVE
X0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyog
NDIgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25v
bmUgfSwKICAvKiA0MyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05P
TkUsICAgIFBfbm9uZSB9LAogIC8qIDQ0ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBP
X05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogNDUgKi8gIHsgVURfSWludmFsaWQsICAg
ICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA0NiAqLyAgeyBVRF9J
aW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDQ3
ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25l
IH0sCiAgLyogNDggKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05F
LCAgICBQX25vbmUgfSwKICAvKiA0OSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19O
T05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDRBICovICB7IFVEX0lpbnZhbGlkLCAgICAg
T19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogNEIgKi8gIHsgVURfSWlu
dmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA0QyAq
LyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9
LAogIC8qIDREICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwg
ICAgUF9ub25lIH0sCiAgLyogNEUgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9O
RSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA0RiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9f
Tk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDUwICovICB7IFVEX0lpbnZh
bGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogNTEgKi8g
IHsgVURfSXNxcnRzZCwgICAgICBPX1YsICAgICBPX1csICAgICBPX05PTkUsICBQX2Fzb3xQX3Jl
eHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDUyICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05F
LCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogNTMgKi8gIHsgVURfSWludmFsaWQs
ICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA1NCAqLyAgeyBV
RF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8q
IDU1ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9u
b25lIH0sCiAgLyogNTYgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19O
T05FLCAgICBQX25vbmUgfSwKICAvKiA1NyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwg
T19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDU4ICovICB7IFVEX0lhZGRzZCwgICAg
ICAgT19WLCAgICAgT19XLCAgICAgT19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIg
fSwKICAvKiA1OSAqLyAgeyBVRF9JbXVsc2QsICAgICAgIE9fViwgICAgIE9fVywgICAgIE9fTk9O
RSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogNUEgKi8gIHsgVURfSWN2dHNk
MnNzLCAgICBPX1YsICAgICBPX1csICAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBf
cmV4YiB9LAogIC8qIDVCICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9f
Tk9ORSwgICAgUF9ub25lIH0sCiAgLyogNUMgKi8gIHsgVURfSXN1YnNkLCAgICAgICBPX1YsICAg
ICBPX1csICAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDVE
ICovICB7IFVEX0ltaW5zZCwgICAgICAgT19WLCAgICAgT19XLCAgICAgT19OT05FLCAgUF9hc298
UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiA1RSAqLyAgeyBVRF9JZGl2c2QsICAgICAgIE9f
ViwgICAgIE9fVywgICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAg
LyogNUYgKi8gIHsgVURfSW1heHNkLCAgICAgICBPX1YsICAgICBPX1csICAgICBPX05PTkUsICBQ
X2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDYwICovICB7IFVEX0lpbnZhbGlkLCAg
ICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogNjEgKi8gIHsgVURf
SWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA2
MiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9u
ZSB9LAogIC8qIDYzICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9O
RSwgICAgUF9ub25lIH0sCiAgLyogNjQgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9f
Tk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA2NSAqLyAgeyBVRF9JaW52YWxpZCwgICAg
IE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDY2ICovICB7IFVEX0lp
bnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogNjcg
Ki8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUg
fSwKICAvKiA2OCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUs
ICAgIFBfbm9uZSB9LAogIC8qIDY5ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05P
TkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogNkEgKi8gIHsgVURfSWludmFsaWQsICAgICBP
X05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA2QiAqLyAgeyBVRF9JaW52
YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDZDICov
ICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0s
CiAgLyogNkQgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAg
ICBQX25vbmUgfSwKICAvKiA2RSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05F
LCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDZGICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19O
T05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogNzAgKi8gIHsgVURfSXBzaHVm
bHcsICAgICBPX1YsICAgICBPX1csICAgICBPX0liLCAgICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBf
cmV4YiB9LAogIC8qIDcxICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9f
Tk9ORSwgICAgUF9ub25lIH0sCiAgLyogNzIgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUs
IE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA3MyAqLyAgeyBVRF9JaW52YWxpZCwg
ICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDc0ICovICB7IFVE
X0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyog
NzUgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25v
bmUgfSwKICAvKiA3NiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05P
TkUsICAgIFBfbm9uZSB9LAogIC8qIDc3ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBP
X05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogNzggKi8gIHsgVURfSWludmFsaWQsICAg
ICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA3OSAqLyAgeyBVRF9J
aW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDdB
ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25l
IH0sCiAgLyogN0IgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05F
LCAgICBQX25vbmUgfSwKICAvKiA3QyAqLyAgeyBVRF9JaGFkZHBzLCAgICAgIE9fViwgICAgIE9f
VywgICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogN0QgKi8g
IHsgVURfSWhzdWJwcywgICAgICBPX1YsICAgICBPX1csICAgICBPX05PTkUsICBQX2Fzb3xQX3Jl
eHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDdFICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05F
LCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogN0YgKi8gIHsgVURfSWludmFsaWQs
ICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA4MCAqLyAgeyBV
RF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8q
IDgxICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9u
b25lIH0sCiAgLyogODIgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19O
T05FLCAgICBQX25vbmUgfSwKICAvKiA4MyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwg
T19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDg0ICovICB7IFVEX0lpbnZhbGlkLCAg
ICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogODUgKi8gIHsgVURf
SWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA4
NiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9u
ZSB9LAogIC8qIDg3ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9O
RSwgICAgUF9ub25lIH0sCiAgLyogODggKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9f
Tk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA4OSAqLyAgeyBVRF9JaW52YWxpZCwgICAg
IE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDhBICovICB7IFVEX0lp
bnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogOEIg
Ki8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUg
fSwKICAvKiA4QyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUs
ICAgIFBfbm9uZSB9LAogIC8qIDhEICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05P
TkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogOEUgKi8gIHsgVURfSWludmFsaWQsICAgICBP
X05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA4RiAqLyAgeyBVRF9JaW52
YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDkwICov
ICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0s
CiAgLyogOTEgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAg
ICBQX25vbmUgfSwKICAvKiA5MiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05F
LCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDkzICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19O
T05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogOTQgKi8gIHsgVURfSWludmFs
aWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA5NSAqLyAg
eyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAog
IC8qIDk2ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAg
UF9ub25lIH0sCiAgLyogOTcgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwg
T19OT05FLCAgICBQX25vbmUgfSwKICAvKiA5OCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9O
RSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDk5ICovICB7IFVEX0lpbnZhbGlk
LCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogOUEgKi8gIHsg
VURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAv
KiA5QiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBf
bm9uZSB9LAogIC8qIDlDICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9f
Tk9ORSwgICAgUF9ub25lIH0sCiAgLyogOUQgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUs
IE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA5RSAqLyAgeyBVRF9JaW52YWxpZCwg
ICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDlGICovICB7IFVE
X0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyog
QTAgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25v
bmUgfSwKICAvKiBBMSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05P
TkUsICAgIFBfbm9uZSB9LAogIC8qIEEyICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBP
X05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogQTMgKi8gIHsgVURfSWludmFsaWQsICAg
ICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBBNCAqLyAgeyBVRF9J
aW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEE1
ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25l
IH0sCiAgLyogQTYgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05F
LCAgICBQX25vbmUgfSwKICAvKiBBNyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19O
T05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEE4ICovICB7IFVEX0lpbnZhbGlkLCAgICAg
T19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogQTkgKi8gIHsgVURfSWlu
dmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBBQSAq
LyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9
LAogIC8qIEFCICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwg
ICAgUF9ub25lIH0sCiAgLyogQUMgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9O
RSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBBRCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9f
Tk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEFFICovICB7IFVEX0lpbnZh
bGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogQUYgKi8g
IHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwK
ICAvKiBCMCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAg
IFBfbm9uZSB9LAogIC8qIEIxICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUs
IE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogQjIgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05P
TkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBCMyAqLyAgeyBVRF9JaW52YWxp
ZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEI0ICovICB7
IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAg
LyogQjUgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQ
X25vbmUgfSwKICAvKiBCNiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBP
X05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEI3ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05F
LCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogQjggKi8gIHsgVURfSWludmFsaWQs
ICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBCOSAqLyAgeyBV
RF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8q
IEJBICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9u
b25lIH0sCiAgLyogQkIgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19O
T05FLCAgICBQX25vbmUgfSwKICAvKiBCQyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwg
T19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEJEICovICB7IFVEX0lpbnZhbGlkLCAg
ICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogQkUgKi8gIHsgVURf
SWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBC
RiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9u
ZSB9LAogIC8qIEMwICovICB7IFVEX0l4YWRkLCAgICAgICAgT19FYiwgICAgT19HYiwgICAgT19O
T05FLCAgUF9hc298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogQzEgKi8gIHsg
VURfSXhhZGQsICAgICAgICBPX0V2LCAgICBPX0d2LCAgICBPX05PTkUsICBQX2Fzb3xQX29zb3xQ
X3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIEMyICovICB7IFVEX0ljbXBzZCwgICAgICAgT19W
LCAgICAgT19XLCAgICAgT19JYiwgICAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAv
KiBDMyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBf
bm9uZSB9LAogIC8qIEM0ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9f
Tk9ORSwgICAgUF9ub25lIH0sCiAgLyogQzUgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUs
IE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBDNiAqLyAgeyBVRF9JaW52YWxpZCwg
ICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEM3ICovICB7IFVE
X0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyog
QzggKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25v
bmUgfSwKICAvKiBDOSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05P
TkUsICAgIFBfbm9uZSB9LAogIC8qIENBICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBP
X05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogQ0IgKi8gIHsgVURfSWludmFsaWQsICAg
ICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBDQyAqLyAgeyBVRF9J
aW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIENE
ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25l
IH0sCiAgLyogQ0UgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05F
LCAgICBQX25vbmUgfSwKICAvKiBDRiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19O
T05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEQwICovICB7IFVEX0lhZGRzdWJwcywgICAg
T19WLCAgICAgT19XLCAgICAgT19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwK
ICAvKiBEMSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAg
IFBfbm9uZSB9LAogIC8qIEQyICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUs
IE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogRDMgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05P
TkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBENCAqLyAgeyBVRF9JaW52YWxp
ZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEQ1ICovICB7
IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAg
LyogRDYgKi8gIHsgVURfSW1vdmRxMnEsICAgICBPX1AsICAgICBPX1ZSLCAgICBPX05PTkUsICBQ
X2Fzb3xQX3JleGIgfSwKICAvKiBENyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19O
T05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEQ4ICovICB7IFVEX0lpbnZhbGlkLCAgICAg
T19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogRDkgKi8gIHsgVURfSWlu
dmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBEQSAq
LyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9
LAogIC8qIERCICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwg
ICAgUF9ub25lIH0sCiAgLyogREMgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9O
RSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBERCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9f
Tk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIERFICovICB7IFVEX0lpbnZh
bGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogREYgKi8g
IHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwK
ICAvKiBFMCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAg
IFBfbm9uZSB9LAogIC8qIEUxICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUs
IE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogRTIgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05P
TkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBFMyAqLyAgeyBVRF9JaW52YWxp
ZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEU0ICovICB7
IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAg
LyogRTUgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQ
X25vbmUgfSwKICAvKiBFNiAqLyAgeyBVRF9JY3Z0cGQyZHEsICAgIE9fViwgICAgIE9fVywgICAg
IE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogRTcgKi8gIHsgVURf
SWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBF
OCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9u
ZSB9LAogIC8qIEU5ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9O
RSwgICAgUF9ub25lIH0sCiAgLyogRUEgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9f
Tk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBFQiAqLyAgeyBVRF9JaW52YWxpZCwgICAg
IE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEVDICovICB7IFVEX0lp
bnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogRUQg
Ki8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUg
fSwKICAvKiBFRSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUs
ICAgIFBfbm9uZSB9LAogIC8qIEVGICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05P
TkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogRjAgKi8gIHsgVURfSWxkZHF1LCAgICAgICBP
X1YsICAgICBPX00sICAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAog
IC8qIEYxICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAg
UF9ub25lIH0sCiAgLyogRjIgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwg
T19OT05FLCAgICBQX25vbmUgfSwKICAvKiBGMyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9O
RSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEY0ICovICB7IFVEX0lpbnZhbGlk
LCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogRjUgKi8gIHsg
VURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAv
KiBGNiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBf
bm9uZSB9LAogIC8qIEY3ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9f
Tk9ORSwgICAgUF9ub25lIH0sCiAgLyogRjggKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUs
IE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBGOSAqLyAgeyBVRF9JaW52YWxpZCwg
ICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEZBICovICB7IFVE
X0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyog
RkIgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25v
bmUgfSwKICAvKiBGQyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05P
TkUsICAgIFBfbm9uZSB9LAogIC8qIEZEICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBP
X05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogRkUgKi8gIHsgVURfSWludmFsaWQsICAg
ICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBGRiAqLyAgeyBVRF9J
aW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAp9OwoKc3Rh
dGljIHN0cnVjdCB1ZF9pdGFiX2VudHJ5IGl0YWJfX3BmeF9zc2VmM19fMGZbMjU2XSA9IHsKICAv
KiAwMCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBf
bm9uZSB9LAogIC8qIDAxICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9f
Tk9ORSwgICAgUF9ub25lIH0sCiAgLyogMDIgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUs
IE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAwMyAqLyAgeyBVRF9JaW52YWxpZCwg
ICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDA0ICovICB7IFVE
X0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyog
MDUgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25v
bmUgfSwKICAvKiAwNiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05P
TkUsICAgIFBfbm9uZSB9LAogIC8qIDA3ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBP
X05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMDggKi8gIHsgVURfSWludmFsaWQsICAg
ICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAwOSAqLyAgeyBVRF9J
aW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDBB
ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25l
IH0sCiAgLyogMEIgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05F
LCAgICBQX25vbmUgfSwKICAvKiAwQyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19O
T05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDBEICovICB7IFVEX0lpbnZhbGlkLCAgICAg
T19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMEUgKi8gIHsgVURfSWlu
dmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAwRiAq
LyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9
LAogIC8qIDEwICovICB7IFVEX0ltb3ZzcywgICAgICAgT19WLCAgICAgT19XLCAgICAgT19OT05F
LCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAxMSAqLyAgeyBVRF9JbW92c3Ms
ICAgICAgIE9fVywgICAgIE9fViwgICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9y
ZXhiIH0sCiAgLyogMTIgKi8gIHsgVURfSW1vdnNsZHVwLCAgICBPX1YsICAgICBPX1csICAgICBP
X05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDEzICovICB7IFVEX0lp
bnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMTQg
Ki8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUg
fSwKICAvKiAxNSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUs
ICAgIFBfbm9uZSB9LAogIC8qIDE2ICovICB7IFVEX0ltb3ZzaGR1cCwgICAgT19WLCAgICAgT19X
LCAgICAgT19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiAxNyAqLyAg
eyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAog
IC8qIDE4ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAg
UF9ub25lIH0sCiAgLyogMTkgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwg
T19OT05FLCAgICBQX25vbmUgfSwKICAvKiAxQSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9O
RSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDFCICovICB7IFVEX0lpbnZhbGlk
LCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMUMgKi8gIHsg
VURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAv
KiAxRCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBf
bm9uZSB9LAogIC8qIDFFICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9f
Tk9ORSwgICAgUF9ub25lIH0sCiAgLyogMUYgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUs
IE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAyMCAqLyAgeyBVRF9JaW52YWxpZCwg
ICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDIxICovICB7IFVE
X0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyog
MjIgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25v
bmUgfSwKICAvKiAyMyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05P
TkUsICAgIFBfbm9uZSB9LAogIC8qIDI0ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBP
X05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMjUgKi8gIHsgVURfSWludmFsaWQsICAg
ICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAyNiAqLyAgeyBVRF9J
aW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDI3
ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25l
IH0sCiAgLyogMjggKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05F
LCAgICBQX25vbmUgfSwKICAvKiAyOSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19O
T05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDJBICovICB7IFVEX0ljdnRzaTJzcywgICAg
T19WLCAgICAgT19FeCwgICAgT19OT05FLCAgUF9jMnxQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4
YiB9LAogIC8qIDJCICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9O
RSwgICAgUF9ub25lIH0sCiAgLyogMkMgKi8gIHsgVURfSWN2dHRzczJzaSwgICBPX0d2dywgICBP
X1csICAgICBPX05PTkUsICBQX2MxfFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyog
MkQgKi8gIHsgVURfSWN2dHNzMnNpLCAgICBPX0d2dywgICBPX1csICAgICBPX05PTkUsICBQX2Mx
fFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogMkUgKi8gIHsgVURfSWludmFsaWQs
ICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAyRiAqLyAgeyBV
RF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8q
IDMwICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9u
b25lIH0sCiAgLyogMzEgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19O
T05FLCAgICBQX25vbmUgfSwKICAvKiAzMiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwg
T19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDMzICovICB7IFVEX0lpbnZhbGlkLCAg
ICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMzQgKi8gIHsgVURf
SWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAz
NSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9u
ZSB9LAogIC8qIDM2ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9O
RSwgICAgUF9ub25lIH0sCiAgLyogMzcgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9f
Tk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAzOCAqLyAgeyBVRF9JaW52YWxpZCwgICAg
IE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDM5ICovICB7IFVEX0lp
bnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogM0Eg
Ki8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUg
fSwKICAvKiAzQiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUs
ICAgIFBfbm9uZSB9LAogIC8qIDNDICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05P
TkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogM0QgKi8gIHsgVURfSWludmFsaWQsICAgICBP
X05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAzRSAqLyAgeyBVRF9JaW52
YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDNGICov
ICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0s
CiAgLyogNDAgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAg
ICBQX25vbmUgfSwKICAvKiA0MSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05F
LCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDQyICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19O
T05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogNDMgKi8gIHsgVURfSWludmFs
aWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA0NCAqLyAg
eyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAog
IC8qIDQ1ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAg
UF9ub25lIH0sCiAgLyogNDYgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwg
T19OT05FLCAgICBQX25vbmUgfSwKICAvKiA0NyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9O
RSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDQ4ICovICB7IFVEX0lpbnZhbGlk
LCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogNDkgKi8gIHsg
VURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAv
KiA0QSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBf
bm9uZSB9LAogIC8qIDRCICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9f
Tk9ORSwgICAgUF9ub25lIH0sCiAgLyogNEMgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUs
IE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA0RCAqLyAgeyBVRF9JaW52YWxpZCwg
ICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDRFICovICB7IFVE
X0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyog
NEYgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25v
bmUgfSwKICAvKiA1MCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05P
TkUsICAgIFBfbm9uZSB9LAogIC8qIDUxICovICB7IFVEX0lzcXJ0c3MsICAgICAgT19WLCAgICAg
T19XLCAgICAgT19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiA1MiAq
LyAgeyBVRF9JcnNxcnRzcywgICAgIE9fViwgICAgIE9fVywgICAgIE9fTk9ORSwgIFBfYXNvfFBf
cmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogNTMgKi8gIHsgVURfSXJjcHNzLCAgICAgICBPX1Ys
ICAgICBPX1csICAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8q
IDU0ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9u
b25lIH0sCiAgLyogNTUgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19O
T05FLCAgICBQX25vbmUgfSwKICAvKiA1NiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwg
T19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDU3ICovICB7IFVEX0lpbnZhbGlkLCAg
ICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogNTggKi8gIHsgVURf
SWFkZHNzLCAgICAgICBPX1YsICAgICBPX1csICAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9y
ZXh4fFBfcmV4YiB9LAogIC8qIDU5ICovICB7IFVEX0ltdWxzcywgICAgICAgT19WLCAgICAgT19X
LCAgICAgT19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiA1QSAqLyAg
eyBVRF9JY3Z0c3Myc2QsICAgIE9fViwgICAgIE9fVywgICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4
cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogNUIgKi8gIHsgVURfSWN2dHRwczJkcSwgICBPX1YsICAg
ICBPX1csICAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDVD
ICovICB7IFVEX0lzdWJzcywgICAgICAgT19WLCAgICAgT19XLCAgICAgT19OT05FLCAgUF9hc298
UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiA1RCAqLyAgeyBVRF9JbWluc3MsICAgICAgIE9f
ViwgICAgIE9fVywgICAgIE9fTk9ORSwgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAg
LyogNUUgKi8gIHsgVURfSWRpdnNzLCAgICAgICBPX1YsICAgICBPX1csICAgICBPX05PTkUsICBQ
X2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9LAogIC8qIDVGICovICB7IFVEX0ltYXhzcywgICAg
ICAgT19WLCAgICAgT19XLCAgICAgT19OT05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIg
fSwKICAvKiA2MCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUs
ICAgIFBfbm9uZSB9LAogIC8qIDYxICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05P
TkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogNjIgKi8gIHsgVURfSWludmFsaWQsICAgICBP
X05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA2MyAqLyAgeyBVRF9JaW52
YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDY0ICov
ICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0s
CiAgLyogNjUgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAg
ICBQX25vbmUgfSwKICAvKiA2NiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05F
LCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDY3ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19O
T05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogNjggKi8gIHsgVURfSWludmFs
aWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA2OSAqLyAg
eyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAog
IC8qIDZBICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAg
UF9ub25lIH0sCiAgLyogNkIgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwg
T19OT05FLCAgICBQX25vbmUgfSwKICAvKiA2QyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9O
RSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDZEICovICB7IFVEX0lpbnZhbGlk
LCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogNkUgKi8gIHsg
VURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAv
KiA2RiAqLyAgeyBVRF9JbW92ZHF1LCAgICAgIE9fViwgICAgIE9fVywgICAgIE9fTk9ORSwgIFBf
YXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogNzAgKi8gIHsgVURfSXBzaHVmaHcsICAg
ICBPX1YsICAgICBPX1csICAgICBPX0liLCAgICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4YiB9
LAogIC8qIDcxICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwg
ICAgUF9ub25lIH0sCiAgLyogNzIgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9O
RSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA3MyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9f
Tk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDc0ICovICB7IFVEX0lpbnZh
bGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogNzUgKi8g
IHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwK
ICAvKiA3NiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAg
IFBfbm9uZSB9LAogIC8qIDc3ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUs
IE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogNzggKi8gIHsgVURfSWludmFsaWQsICAgICBPX05P
TkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA3OSAqLyAgeyBVRF9JaW52YWxp
ZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDdBICovICB7
IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAg
LyogN0IgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQ
X25vbmUgfSwKICAvKiA3QyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBP
X05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDdEICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05F
LCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogN0UgKi8gIHsgVURfSW1vdnEsICAg
ICAgICBPX1YsICAgICBPX1csICAgICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBfcmV4
YiB9LAogIC8qIDdGICovICB7IFVEX0ltb3ZkcXUsICAgICAgT19XLCAgICAgT19WLCAgICAgT19O
T05FLCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiA4MCAqLyAgeyBVRF9JaW52
YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDgxICov
ICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0s
CiAgLyogODIgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAg
ICBQX25vbmUgfSwKICAvKiA4MyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05F
LCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDg0ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19O
T05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogODUgKi8gIHsgVURfSWludmFs
aWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA4NiAqLyAg
eyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAog
IC8qIDg3ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAg
UF9ub25lIH0sCiAgLyogODggKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwg
T19OT05FLCAgICBQX25vbmUgfSwKICAvKiA4OSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9O
RSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDhBICovICB7IFVEX0lpbnZhbGlk
LCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogOEIgKi8gIHsg
VURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAv
KiA4QyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBf
bm9uZSB9LAogIC8qIDhEICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9f
Tk9ORSwgICAgUF9ub25lIH0sCiAgLyogOEUgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUs
IE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA4RiAqLyAgeyBVRF9JaW52YWxpZCwg
ICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDkwICovICB7IFVE
X0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyog
OTEgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25v
bmUgfSwKICAvKiA5MiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05P
TkUsICAgIFBfbm9uZSB9LAogIC8qIDkzICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBP
X05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogOTQgKi8gIHsgVURfSWludmFsaWQsICAg
ICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA5NSAqLyAgeyBVRF9J
aW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDk2
ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25l
IH0sCiAgLyogOTcgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05F
LCAgICBQX25vbmUgfSwKICAvKiA5OCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19O
T05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDk5ICovICB7IFVEX0lpbnZhbGlkLCAgICAg
T19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogOUEgKi8gIHsgVURfSWlu
dmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA5QiAq
LyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9
LAogIC8qIDlDICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwg
ICAgUF9ub25lIH0sCiAgLyogOUQgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9O
RSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiA5RSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9f
Tk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDlGICovICB7IFVEX0lpbnZh
bGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogQTAgKi8g
IHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwK
ICAvKiBBMSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAg
IFBfbm9uZSB9LAogIC8qIEEyICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUs
IE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogQTMgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05P
TkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBBNCAqLyAgeyBVRF9JaW52YWxp
ZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEE1ICovICB7
IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAg
LyogQTYgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQ
X25vbmUgfSwKICAvKiBBNyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBP
X05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEE4ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05F
LCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogQTkgKi8gIHsgVURfSWludmFsaWQs
ICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBBQSAqLyAgeyBV
RF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8q
IEFCICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9u
b25lIH0sCiAgLyogQUMgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19O
T05FLCAgICBQX25vbmUgfSwKICAvKiBBRCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwg
T19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEFFICovICB7IFVEX0lpbnZhbGlkLCAg
ICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogQUYgKi8gIHsgVURf
SWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBC
MCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9u
ZSB9LAogIC8qIEIxICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9O
RSwgICAgUF9ub25lIH0sCiAgLyogQjIgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9f
Tk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBCMyAqLyAgeyBVRF9JaW52YWxpZCwgICAg
IE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEI0ICovICB7IFVEX0lp
bnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogQjUg
Ki8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUg
fSwKICAvKiBCNiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUs
ICAgIFBfbm9uZSB9LAogIC8qIEI3ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05P
TkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogQjggKi8gIHsgVURfSWludmFsaWQsICAgICBP
X05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBCOSAqLyAgeyBVRF9JaW52
YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEJBICov
ICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0s
CiAgLyogQkIgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAg
ICBQX25vbmUgfSwKICAvKiBCQyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05F
LCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEJEICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19O
T05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogQkUgKi8gIHsgVURfSWludmFs
aWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBCRiAqLyAg
eyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAog
IC8qIEMwICovICB7IFVEX0l4YWRkLCAgICAgICAgT19FYiwgICAgT19HYiwgICAgT19OT05FLCAg
UF9hc298UF9yZXh3fFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogQzEgKi8gIHsgVURfSXhh
ZGQsICAgICAgICBPX0V2LCAgICBPX0d2LCAgICBPX05PTkUsICBQX2Fzb3xQX3JleHd8UF9yZXhy
fFBfcmV4eHxQX3JleGIgfSwKICAvKiBDMiAqLyAgeyBVRF9JY21wc3MsICAgICAgIE9fViwgICAg
IE9fVywgICAgIE9fSWIsICAgIFBfYXNvfFBfcmV4cnxQX3JleHh8UF9yZXhiIH0sCiAgLyogQzMg
Ki8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUg
fSwKICAvKiBDNCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUs
ICAgIFBfbm9uZSB9LAogIC8qIEM1ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05P
TkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogQzYgKi8gIHsgVURfSWludmFsaWQsICAgICBP
X05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBDNyAqLyAgeyBVRF9JZ3Jw
X3JlZywgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIElUQUJfX1BGWF9TU0VGM19fMEZf
X09QX0M3X19SRUcgfSwKICAvKiBDOCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19O
T05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEM5ICovICB7IFVEX0lpbnZhbGlkLCAgICAg
T19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogQ0EgKi8gIHsgVURfSWlu
dmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBDQiAq
LyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9
LAogIC8qIENDICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwg
ICAgUF9ub25lIH0sCiAgLyogQ0QgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9O
RSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBDRSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9f
Tk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIENGICovICB7IFVEX0lpbnZh
bGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogRDAgKi8g
IHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwK
ICAvKiBEMSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAg
IFBfbm9uZSB9LAogIC8qIEQyICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUs
IE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogRDMgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05P
TkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBENCAqLyAgeyBVRF9JaW52YWxp
ZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEQ1ICovICB7
IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAg
LyogRDYgKi8gIHsgVURfSW1vdnEyZHEsICAgICBPX1YsICAgICBPX1BSLCAgICBPX05PTkUsICBQ
X2FzbyB9LAogIC8qIEQ3ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9f
Tk9ORSwgICAgUF9ub25lIH0sCiAgLyogRDggKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUs
IE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBEOSAqLyAgeyBVRF9JaW52YWxpZCwg
ICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIERBICovICB7IFVE
X0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyog
REIgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25v
bmUgfSwKICAvKiBEQyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05P
TkUsICAgIFBfbm9uZSB9LAogIC8qIEREICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBP
X05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogREUgKi8gIHsgVURfSWludmFsaWQsICAg
ICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBERiAqLyAgeyBVRF9J
aW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEUw
ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25l
IH0sCiAgLyogRTEgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05F
LCAgICBQX25vbmUgfSwKICAvKiBFMiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19O
T05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEUzICovICB7IFVEX0lpbnZhbGlkLCAgICAg
T19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogRTQgKi8gIHsgVURfSWlu
dmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBFNSAq
LyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9
LAogIC8qIEU2ICovICB7IFVEX0ljdnRkcTJwZCwgICAgT19WLCAgICAgT19XLCAgICAgT19OT05F
LCAgUF9hc298UF9yZXhyfFBfcmV4eHxQX3JleGIgfSwKICAvKiBFNyAqLyAgeyBVRF9JaW52YWxp
ZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEU4ICovICB7
IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAg
LyogRTkgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQ
X25vbmUgfSwKICAvKiBFQSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBP
X05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEVCICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05F
LCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogRUMgKi8gIHsgVURfSWludmFsaWQs
ICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBFRCAqLyAgeyBV
RF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8q
IEVFICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9u
b25lIH0sCiAgLyogRUYgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19O
T05FLCAgICBQX25vbmUgfSwKICAvKiBGMCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwg
T19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEYxICovICB7IFVEX0lpbnZhbGlkLCAg
ICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogRjIgKi8gIHsgVURf
SWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBG
MyAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9u
ZSB9LAogIC8qIEY0ICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9O
RSwgICAgUF9ub25lIH0sCiAgLyogRjUgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9f
Tk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBGNiAqLyAgeyBVRF9JaW52YWxpZCwgICAg
IE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEY3ICovICB7IFVEX0lp
bnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogRjgg
Ki8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUg
fSwKICAvKiBGOSAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUs
ICAgIFBfbm9uZSB9LAogIC8qIEZBICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05P
TkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogRkIgKi8gIHsgVURfSWludmFsaWQsICAgICBP
X05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiBGQyAqLyAgeyBVRF9JaW52
YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIEZEICov
ICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0s
CiAgLyogRkUgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAg
ICBQX25vbmUgfSwKICAvKiBGRiAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05F
LCBPX05PTkUsICAgIFBfbm9uZSB9LAp9OwoKc3RhdGljIHN0cnVjdCB1ZF9pdGFiX2VudHJ5IGl0
YWJfX3BmeF9zc2VmM19fMGZfX29wX2M3X19yZWdbOF0gPSB7CiAgLyogMDAgKi8gIHsgVURfSWlu
dmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAwMSAq
LyAgeyBVRF9JaW52YWxpZCwgICAgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9
LAogIC8qIDAyICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwg
ICAgUF9ub25lIH0sCiAgLyogMDMgKi8gIHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9O
RSwgT19OT05FLCAgICBQX25vbmUgfSwKICAvKiAwNCAqLyAgeyBVRF9JaW52YWxpZCwgICAgIE9f
Tk9ORSwgT19OT05FLCBPX05PTkUsICAgIFBfbm9uZSB9LAogIC8qIDA1ICovICB7IFVEX0lpbnZh
bGlkLCAgICAgT19OT05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMDYgKi8g
IHsgVURfSWludmFsaWQsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCAgICBQX25vbmUgfSwK
ICAvKiAwNyAqLyAgeyBVRF9JZ3JwX3ZlbmRvciwgIE9fTk9ORSwgT19OT05FLCBPX05PTkUsICAg
IElUQUJfX1BGWF9TU0VGM19fMEZfX09QX0M3X19SRUdfX09QXzA3X19WRU5ET1IgfSwKfTsKCnN0
YXRpYyBzdHJ1Y3QgdWRfaXRhYl9lbnRyeSBpdGFiX19wZnhfc3NlZjNfXzBmX19vcF9jN19fcmVn
X19vcF8wN19fdmVuZG9yWzJdID0gewogIC8qIDAwICovICB7IFVEX0lpbnZhbGlkLCAgICAgT19O
T05FLCBPX05PTkUsIE9fTk9ORSwgICAgUF9ub25lIH0sCiAgLyogMDEgKi8gIHsgVURfSXZteG9u
LCAgICAgICBPX01xLCAgICBPX05PTkUsICBPX05PTkUsICBQX2Fzb3xQX3JleHJ8UF9yZXh4fFBf
cmV4YiB9LAp9OwoKLyogdGhlIG9yZGVyIG9mIHRoaXMgdGFibGUgbWF0Y2hlcyBlbnVtIHVkX2l0
YWJfaW5kZXggKi8Kc3RydWN0IHVkX2l0YWJfZW50cnkgKiB1ZF9pdGFiX2xpc3RbXSA9IHsKICBp
dGFiX18wZiwKICBpdGFiX18wZl9fb3BfMDBfX3JlZywKICBpdGFiX18wZl9fb3BfMDFfX3JlZywK
ICBpdGFiX18wZl9fb3BfMDFfX3JlZ19fb3BfMDBfX21vZCwKICBpdGFiX18wZl9fb3BfMDFfX3Jl
Z19fb3BfMDBfX21vZF9fb3BfMDFfX3JtLAogIGl0YWJfXzBmX19vcF8wMV9fcmVnX19vcF8wMF9f
bW9kX19vcF8wMV9fcm1fX29wXzAxX192ZW5kb3IsCiAgaXRhYl9fMGZfX29wXzAxX19yZWdfX29w
XzAwX19tb2RfX29wXzAxX19ybV9fb3BfMDNfX3ZlbmRvciwKICBpdGFiX18wZl9fb3BfMDFfX3Jl
Z19fb3BfMDBfX21vZF9fb3BfMDFfX3JtX19vcF8wNF9fdmVuZG9yLAogIGl0YWJfXzBmX19vcF8w
MV9fcmVnX19vcF8wMV9fbW9kLAogIGl0YWJfXzBmX19vcF8wMV9fcmVnX19vcF8wMV9fbW9kX19v
cF8wMV9fcm0sCiAgaXRhYl9fMGZfX29wXzAxX19yZWdfX29wXzAyX19tb2QsCiAgaXRhYl9fMGZf
X29wXzAxX19yZWdfX29wXzAzX19tb2QsCiAgaXRhYl9fMGZfX29wXzAxX19yZWdfX29wXzAzX19t
b2RfX29wXzAxX19ybSwKICBpdGFiX18wZl9fb3BfMDFfX3JlZ19fb3BfMDNfX21vZF9fb3BfMDFf
X3JtX19vcF8wMF9fdmVuZG9yLAogIGl0YWJfXzBmX19vcF8wMV9fcmVnX19vcF8wM19fbW9kX19v
cF8wMV9fcm1fX29wXzAxX192ZW5kb3IsCiAgaXRhYl9fMGZfX29wXzAxX19yZWdfX29wXzAzX19t
b2RfX29wXzAxX19ybV9fb3BfMDJfX3ZlbmRvciwKICBpdGFiX18wZl9fb3BfMDFfX3JlZ19fb3Bf
MDNfX21vZF9fb3BfMDFfX3JtX19vcF8wM19fdmVuZG9yLAogIGl0YWJfXzBmX19vcF8wMV9fcmVn
X19vcF8wM19fbW9kX19vcF8wMV9fcm1fX29wXzA0X192ZW5kb3IsCiAgaXRhYl9fMGZfX29wXzAx
X19yZWdfX29wXzAzX19tb2RfX29wXzAxX19ybV9fb3BfMDVfX3ZlbmRvciwKICBpdGFiX18wZl9f
b3BfMDFfX3JlZ19fb3BfMDNfX21vZF9fb3BfMDFfX3JtX19vcF8wNl9fdmVuZG9yLAogIGl0YWJf
XzBmX19vcF8wMV9fcmVnX19vcF8wM19fbW9kX19vcF8wMV9fcm1fX29wXzA3X192ZW5kb3IsCiAg
aXRhYl9fMGZfX29wXzAxX19yZWdfX29wXzA0X19tb2QsCiAgaXRhYl9fMGZfX29wXzAxX19yZWdf
X29wXzA2X19tb2QsCiAgaXRhYl9fMGZfX29wXzAxX19yZWdfX29wXzA3X19tb2QsCiAgaXRhYl9f
MGZfX29wXzAxX19yZWdfX29wXzA3X19tb2RfX29wXzAxX19ybSwKICBpdGFiX18wZl9fb3BfMDFf
X3JlZ19fb3BfMDdfX21vZF9fb3BfMDFfX3JtX19vcF8wMV9fdmVuZG9yLAogIGl0YWJfXzBmX19v
cF8wZF9fcmVnLAogIGl0YWJfXzBmX19vcF8xOF9fcmVnLAogIGl0YWJfXzBmX19vcF83MV9fcmVn
LAogIGl0YWJfXzBmX19vcF83Ml9fcmVnLAogIGl0YWJfXzBmX19vcF83M19fcmVnLAogIGl0YWJf
XzBmX19vcF9hZV9fcmVnLAogIGl0YWJfXzBmX19vcF9hZV9fcmVnX19vcF8wNV9fbW9kLAogIGl0
YWJfXzBmX19vcF9hZV9fcmVnX19vcF8wNV9fbW9kX19vcF8wMV9fcm0sCiAgaXRhYl9fMGZfX29w
X2FlX19yZWdfX29wXzA2X19tb2QsCiAgaXRhYl9fMGZfX29wX2FlX19yZWdfX29wXzA2X19tb2Rf
X29wXzAxX19ybSwKICBpdGFiX18wZl9fb3BfYWVfX3JlZ19fb3BfMDdfX21vZCwKICBpdGFiX18w
Zl9fb3BfYWVfX3JlZ19fb3BfMDdfX21vZF9fb3BfMDFfX3JtLAogIGl0YWJfXzBmX19vcF9iYV9f
cmVnLAogIGl0YWJfXzBmX19vcF9jN19fcmVnLAogIGl0YWJfXzBmX19vcF9jN19fcmVnX19vcF8w
MF9fdmVuZG9yLAogIGl0YWJfXzBmX19vcF9jN19fcmVnX19vcF8wN19fdmVuZG9yLAogIGl0YWJf
XzBmX19vcF9kOV9fbW9kLAogIGl0YWJfXzBmX19vcF9kOV9fbW9kX19vcF8wMV9feDg3LAogIGl0
YWJfXzFieXRlLAogIGl0YWJfXzFieXRlX19vcF82MF9fb3NpemUsCiAgaXRhYl9fMWJ5dGVfX29w
XzYxX19vc2l6ZSwKICBpdGFiX18xYnl0ZV9fb3BfNjNfX21vZGUsCiAgaXRhYl9fMWJ5dGVfX29w
XzZkX19vc2l6ZSwKICBpdGFiX18xYnl0ZV9fb3BfNmZfX29zaXplLAogIGl0YWJfXzFieXRlX19v
cF84MF9fcmVnLAogIGl0YWJfXzFieXRlX19vcF84MV9fcmVnLAogIGl0YWJfXzFieXRlX19vcF84
Ml9fcmVnLAogIGl0YWJfXzFieXRlX19vcF84M19fcmVnLAogIGl0YWJfXzFieXRlX19vcF84Zl9f
cmVnLAogIGl0YWJfXzFieXRlX19vcF85OF9fb3NpemUsCiAgaXRhYl9fMWJ5dGVfX29wXzk5X19v
c2l6ZSwKICBpdGFiX18xYnl0ZV9fb3BfOWNfX21vZGUsCiAgaXRhYl9fMWJ5dGVfX29wXzljX19t
b2RlX19vcF8wMF9fb3NpemUsCiAgaXRhYl9fMWJ5dGVfX29wXzljX19tb2RlX19vcF8wMV9fb3Np
emUsCiAgaXRhYl9fMWJ5dGVfX29wXzlkX19tb2RlLAogIGl0YWJfXzFieXRlX19vcF85ZF9fbW9k
ZV9fb3BfMDBfX29zaXplLAogIGl0YWJfXzFieXRlX19vcF85ZF9fbW9kZV9fb3BfMDFfX29zaXpl
LAogIGl0YWJfXzFieXRlX19vcF9hNV9fb3NpemUsCiAgaXRhYl9fMWJ5dGVfX29wX2E3X19vc2l6
ZSwKICBpdGFiX18xYnl0ZV9fb3BfYWJfX29zaXplLAogIGl0YWJfXzFieXRlX19vcF9hZF9fb3Np
emUsCiAgaXRhYl9fMWJ5dGVfX29wX2FlX19tb2QsCiAgaXRhYl9fMWJ5dGVfX29wX2FlX19tb2Rf
X29wXzAwX19yZWcsCiAgaXRhYl9fMWJ5dGVfX29wX2FmX19vc2l6ZSwKICBpdGFiX18xYnl0ZV9f
b3BfYzBfX3JlZywKICBpdGFiX18xYnl0ZV9fb3BfYzFfX3JlZywKICBpdGFiX18xYnl0ZV9fb3Bf
YzZfX3JlZywKICBpdGFiX18xYnl0ZV9fb3BfYzdfX3JlZywKICBpdGFiX18xYnl0ZV9fb3BfY2Zf
X29zaXplLAogIGl0YWJfXzFieXRlX19vcF9kMF9fcmVnLAogIGl0YWJfXzFieXRlX19vcF9kMV9f
cmVnLAogIGl0YWJfXzFieXRlX19vcF9kMl9fcmVnLAogIGl0YWJfXzFieXRlX19vcF9kM19fcmVn
LAogIGl0YWJfXzFieXRlX19vcF9kOF9fbW9kLAogIGl0YWJfXzFieXRlX19vcF9kOF9fbW9kX19v
cF8wMF9fcmVnLAogIGl0YWJfXzFieXRlX19vcF9kOF9fbW9kX19vcF8wMV9feDg3LAogIGl0YWJf
XzFieXRlX19vcF9kOV9fbW9kLAogIGl0YWJfXzFieXRlX19vcF9kOV9fbW9kX19vcF8wMF9fcmVn
LAogIGl0YWJfXzFieXRlX19vcF9kOV9fbW9kX19vcF8wMV9feDg3LAogIGl0YWJfXzFieXRlX19v
cF9kYV9fbW9kLAogIGl0YWJfXzFieXRlX19vcF9kYV9fbW9kX19vcF8wMF9fcmVnLAogIGl0YWJf
XzFieXRlX19vcF9kYV9fbW9kX19vcF8wMV9feDg3LAogIGl0YWJfXzFieXRlX19vcF9kYl9fbW9k
LAogIGl0YWJfXzFieXRlX19vcF9kYl9fbW9kX19vcF8wMF9fcmVnLAogIGl0YWJfXzFieXRlX19v
cF9kYl9fbW9kX19vcF8wMV9feDg3LAogIGl0YWJfXzFieXRlX19vcF9kY19fbW9kLAogIGl0YWJf
XzFieXRlX19vcF9kY19fbW9kX19vcF8wMF9fcmVnLAogIGl0YWJfXzFieXRlX19vcF9kY19fbW9k
X19vcF8wMV9feDg3LAogIGl0YWJfXzFieXRlX19vcF9kZF9fbW9kLAogIGl0YWJfXzFieXRlX19v
cF9kZF9fbW9kX19vcF8wMF9fcmVnLAogIGl0YWJfXzFieXRlX19vcF9kZF9fbW9kX19vcF8wMV9f
eDg3LAogIGl0YWJfXzFieXRlX19vcF9kZV9fbW9kLAogIGl0YWJfXzFieXRlX19vcF9kZV9fbW9k
X19vcF8wMF9fcmVnLAogIGl0YWJfXzFieXRlX19vcF9kZV9fbW9kX19vcF8wMV9feDg3LAogIGl0
YWJfXzFieXRlX19vcF9kZl9fbW9kLAogIGl0YWJfXzFieXRlX19vcF9kZl9fbW9kX19vcF8wMF9f
cmVnLAogIGl0YWJfXzFieXRlX19vcF9kZl9fbW9kX19vcF8wMV9feDg3LAogIGl0YWJfXzFieXRl
X19vcF9lM19fYXNpemUsCiAgaXRhYl9fMWJ5dGVfX29wX2Y2X19yZWcsCiAgaXRhYl9fMWJ5dGVf
X29wX2Y3X19yZWcsCiAgaXRhYl9fMWJ5dGVfX29wX2ZlX19yZWcsCiAgaXRhYl9fMWJ5dGVfX29w
X2ZmX19yZWcsCiAgaXRhYl9fM2Rub3csCiAgaXRhYl9fcGZ4X3NzZTY2X18wZiwKICBpdGFiX19w
Znhfc3NlNjZfXzBmX19vcF83MV9fcmVnLAogIGl0YWJfX3BmeF9zc2U2Nl9fMGZfX29wXzcyX19y
ZWcsCiAgaXRhYl9fcGZ4X3NzZTY2X18wZl9fb3BfNzNfX3JlZywKICBpdGFiX19wZnhfc3NlNjZf
XzBmX19vcF9jN19fcmVnLAogIGl0YWJfX3BmeF9zc2U2Nl9fMGZfX29wX2M3X19yZWdfX29wXzAw
X192ZW5kb3IsCiAgaXRhYl9fcGZ4X3NzZWYyX18wZiwKICBpdGFiX19wZnhfc3NlZjNfXzBmLAog
IGl0YWJfX3BmeF9zc2VmM19fMGZfX29wX2M3X19yZWcsCiAgaXRhYl9fcGZ4X3NzZWYzX18wZl9f
b3BfYzdfX3JlZ19fb3BfMDdfX3ZlbmRvciwKfTsKAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAGtkYi94ODYvdWRpczg2LTEuNy9SRUFETUUAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAwMDAwNjY0ADAwMDI3NTYAMDAwMjc1NgAwMDAwMDAwMDIwMQAxMTc2NTQ2NTU1NgAwMTQ1
MTcAIDAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAdXN0YXIgIABt
cmF0aG9yAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAG1yYXRob3IAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAACmh0dHA6Ly91ZGlzODYuc291cmNlZm9yZ2UubmV0Lwp1ZGlzODYt
MS42IDogCiAgLSBjZCBsaWJ1ZGlzODYKICAtIGNwICpjIHRvIGhlcmUKICAtIGNwICpoIHRvIGhl
cmUKICAgCk11a2VzaCBSYXRob3IKMDQvMzAvMjAwOAoKAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAABrZGIveDg2L3VkaXM4Ni0xLjcvaXRhYi5oAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAMDAwMDY2NAAwMDAyNzU2ADAwMDI3NTYAMDAwMDAwMjc2MDIAMTE3NjU0NjU1NTYAMDE0NzQ1
ACAwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHVzdGFyICAAbXJh
dGhvcgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABtcmF0aG9yAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAovKiBpdGFiLmggLS0gYXV0byBnZW5lcmF0ZWQgYnkgb3BnZW4ucHks
IGRvIG5vdCBlZGl0LiAqLwoKI2lmbmRlZiBVRF9JVEFCX0gKI2RlZmluZSBVRF9JVEFCX0gKCgoK
ZW51bSB1ZF9pdGFiX3ZlbmRvcl9pbmRleCB7CiAgSVRBQl9fVkVORE9SX0lORFhfX0FNRCwKICBJ
VEFCX19WRU5ET1JfSU5EWF9fSU5URUwsCn07CgoKZW51bSB1ZF9pdGFiX21vZGVfaW5kZXggewog
IElUQUJfX01PREVfSU5EWF9fMTYsCiAgSVRBQl9fTU9ERV9JTkRYX18zMiwKICBJVEFCX19NT0RF
X0lORFhfXzY0Cn07CgoKZW51bSB1ZF9pdGFiX21vZF9pbmRleCB7CiAgSVRBQl9fTU9EX0lORFhf
X05PVF8xMSwKICBJVEFCX19NT0RfSU5EWF9fMTEKfTsKCgplbnVtIHVkX2l0YWJfaW5kZXggewog
IElUQUJfXzBGLAogIElUQUJfXzBGX19PUF8wMF9fUkVHLAogIElUQUJfXzBGX19PUF8wMV9fUkVH
LAogIElUQUJfXzBGX19PUF8wMV9fUkVHX19PUF8wMF9fTU9ELAogIElUQUJfXzBGX19PUF8wMV9f
UkVHX19PUF8wMF9fTU9EX19PUF8wMV9fUk0sCiAgSVRBQl9fMEZfX09QXzAxX19SRUdfX09QXzAw
X19NT0RfX09QXzAxX19STV9fT1BfMDFfX1ZFTkRPUiwKICBJVEFCX18wRl9fT1BfMDFfX1JFR19f
T1BfMDBfX01PRF9fT1BfMDFfX1JNX19PUF8wM19fVkVORE9SLAogIElUQUJfXzBGX19PUF8wMV9f
UkVHX19PUF8wMF9fTU9EX19PUF8wMV9fUk1fX09QXzA0X19WRU5ET1IsCiAgSVRBQl9fMEZfX09Q
XzAxX19SRUdfX09QXzAxX19NT0QsCiAgSVRBQl9fMEZfX09QXzAxX19SRUdfX09QXzAxX19NT0Rf
X09QXzAxX19STSwKICBJVEFCX18wRl9fT1BfMDFfX1JFR19fT1BfMDJfX01PRCwKICBJVEFCX18w
Rl9fT1BfMDFfX1JFR19fT1BfMDNfX01PRCwKICBJVEFCX18wRl9fT1BfMDFfX1JFR19fT1BfMDNf
X01PRF9fT1BfMDFfX1JNLAogIElUQUJfXzBGX19PUF8wMV9fUkVHX19PUF8wM19fTU9EX19PUF8w
MV9fUk1fX09QXzAwX19WRU5ET1IsCiAgSVRBQl9fMEZfX09QXzAxX19SRUdfX09QXzAzX19NT0Rf
X09QXzAxX19STV9fT1BfMDFfX1ZFTkRPUiwKICBJVEFCX18wRl9fT1BfMDFfX1JFR19fT1BfMDNf
X01PRF9fT1BfMDFfX1JNX19PUF8wMl9fVkVORE9SLAogIElUQUJfXzBGX19PUF8wMV9fUkVHX19P
UF8wM19fTU9EX19PUF8wMV9fUk1fX09QXzAzX19WRU5ET1IsCiAgSVRBQl9fMEZfX09QXzAxX19S
RUdfX09QXzAzX19NT0RfX09QXzAxX19STV9fT1BfMDRfX1ZFTkRPUiwKICBJVEFCX18wRl9fT1Bf
MDFfX1JFR19fT1BfMDNfX01PRF9fT1BfMDFfX1JNX19PUF8wNV9fVkVORE9SLAogIElUQUJfXzBG
X19PUF8wMV9fUkVHX19PUF8wM19fTU9EX19PUF8wMV9fUk1fX09QXzA2X19WRU5ET1IsCiAgSVRB
Ql9fMEZfX09QXzAxX19SRUdfX09QXzAzX19NT0RfX09QXzAxX19STV9fT1BfMDdfX1ZFTkRPUiwK
ICBJVEFCX18wRl9fT1BfMDFfX1JFR19fT1BfMDRfX01PRCwKICBJVEFCX18wRl9fT1BfMDFfX1JF
R19fT1BfMDZfX01PRCwKICBJVEFCX18wRl9fT1BfMDFfX1JFR19fT1BfMDdfX01PRCwKICBJVEFC
X18wRl9fT1BfMDFfX1JFR19fT1BfMDdfX01PRF9fT1BfMDFfX1JNLAogIElUQUJfXzBGX19PUF8w
MV9fUkVHX19PUF8wN19fTU9EX19PUF8wMV9fUk1fX09QXzAxX19WRU5ET1IsCiAgSVRBQl9fMEZf
X09QXzBEX19SRUcsCiAgSVRBQl9fMEZfX09QXzE4X19SRUcsCiAgSVRBQl9fMEZfX09QXzcxX19S
RUcsCiAgSVRBQl9fMEZfX09QXzcyX19SRUcsCiAgSVRBQl9fMEZfX09QXzczX19SRUcsCiAgSVRB
Ql9fMEZfX09QX0FFX19SRUcsCiAgSVRBQl9fMEZfX09QX0FFX19SRUdfX09QXzA1X19NT0QsCiAg
SVRBQl9fMEZfX09QX0FFX19SRUdfX09QXzA1X19NT0RfX09QXzAxX19STSwKICBJVEFCX18wRl9f
T1BfQUVfX1JFR19fT1BfMDZfX01PRCwKICBJVEFCX18wRl9fT1BfQUVfX1JFR19fT1BfMDZfX01P
RF9fT1BfMDFfX1JNLAogIElUQUJfXzBGX19PUF9BRV9fUkVHX19PUF8wN19fTU9ELAogIElUQUJf
XzBGX19PUF9BRV9fUkVHX19PUF8wN19fTU9EX19PUF8wMV9fUk0sCiAgSVRBQl9fMEZfX09QX0JB
X19SRUcsCiAgSVRBQl9fMEZfX09QX0M3X19SRUcsCiAgSVRBQl9fMEZfX09QX0M3X19SRUdfX09Q
XzAwX19WRU5ET1IsCiAgSVRBQl9fMEZfX09QX0M3X19SRUdfX09QXzA3X19WRU5ET1IsCiAgSVRB
Ql9fMEZfX09QX0Q5X19NT0QsCiAgSVRBQl9fMEZfX09QX0Q5X19NT0RfX09QXzAxX19YODcsCiAg
SVRBQl9fMUJZVEUsCiAgSVRBQl9fMUJZVEVfX09QXzYwX19PU0laRSwKICBJVEFCX18xQllURV9f
T1BfNjFfX09TSVpFLAogIElUQUJfXzFCWVRFX19PUF82M19fTU9ERSwKICBJVEFCX18xQllURV9f
T1BfNkRfX09TSVpFLAogIElUQUJfXzFCWVRFX19PUF82Rl9fT1NJWkUsCiAgSVRBQl9fMUJZVEVf
X09QXzgwX19SRUcsCiAgSVRBQl9fMUJZVEVfX09QXzgxX19SRUcsCiAgSVRBQl9fMUJZVEVfX09Q
XzgyX19SRUcsCiAgSVRBQl9fMUJZVEVfX09QXzgzX19SRUcsCiAgSVRBQl9fMUJZVEVfX09QXzhG
X19SRUcsCiAgSVRBQl9fMUJZVEVfX09QXzk4X19PU0laRSwKICBJVEFCX18xQllURV9fT1BfOTlf
X09TSVpFLAogIElUQUJfXzFCWVRFX19PUF85Q19fTU9ERSwKICBJVEFCX18xQllURV9fT1BfOUNf
X01PREVfX09QXzAwX19PU0laRSwKICBJVEFCX18xQllURV9fT1BfOUNfX01PREVfX09QXzAxX19P
U0laRSwKICBJVEFCX18xQllURV9fT1BfOURfX01PREUsCiAgSVRBQl9fMUJZVEVfX09QXzlEX19N
T0RFX19PUF8wMF9fT1NJWkUsCiAgSVRBQl9fMUJZVEVfX09QXzlEX19NT0RFX19PUF8wMV9fT1NJ
WkUsCiAgSVRBQl9fMUJZVEVfX09QX0E1X19PU0laRSwKICBJVEFCX18xQllURV9fT1BfQTdfX09T
SVpFLAogIElUQUJfXzFCWVRFX19PUF9BQl9fT1NJWkUsCiAgSVRBQl9fMUJZVEVfX09QX0FEX19P
U0laRSwKICBJVEFCX18xQllURV9fT1BfQUVfX01PRCwKICBJVEFCX18xQllURV9fT1BfQUVfX01P
RF9fT1BfMDBfX1JFRywKICBJVEFCX18xQllURV9fT1BfQUZfX09TSVpFLAogIElUQUJfXzFCWVRF
X19PUF9DMF9fUkVHLAogIElUQUJfXzFCWVRFX19PUF9DMV9fUkVHLAogIElUQUJfXzFCWVRFX19P
UF9DNl9fUkVHLAogIElUQUJfXzFCWVRFX19PUF9DN19fUkVHLAogIElUQUJfXzFCWVRFX19PUF9D
Rl9fT1NJWkUsCiAgSVRBQl9fMUJZVEVfX09QX0QwX19SRUcsCiAgSVRBQl9fMUJZVEVfX09QX0Qx
X19SRUcsCiAgSVRBQl9fMUJZVEVfX09QX0QyX19SRUcsCiAgSVRBQl9fMUJZVEVfX09QX0QzX19S
RUcsCiAgSVRBQl9fMUJZVEVfX09QX0Q4X19NT0QsCiAgSVRBQl9fMUJZVEVfX09QX0Q4X19NT0Rf
X09QXzAwX19SRUcsCiAgSVRBQl9fMUJZVEVfX09QX0Q4X19NT0RfX09QXzAxX19YODcsCiAgSVRB
Ql9fMUJZVEVfX09QX0Q5X19NT0QsCiAgSVRBQl9fMUJZVEVfX09QX0Q5X19NT0RfX09QXzAwX19S
RUcsCiAgSVRBQl9fMUJZVEVfX09QX0Q5X19NT0RfX09QXzAxX19YODcsCiAgSVRBQl9fMUJZVEVf
X09QX0RBX19NT0QsCiAgSVRBQl9fMUJZVEVfX09QX0RBX19NT0RfX09QXzAwX19SRUcsCiAgSVRB
Ql9fMUJZVEVfX09QX0RBX19NT0RfX09QXzAxX19YODcsCiAgSVRBQl9fMUJZVEVfX09QX0RCX19N
T0QsCiAgSVRBQl9fMUJZVEVfX09QX0RCX19NT0RfX09QXzAwX19SRUcsCiAgSVRBQl9fMUJZVEVf
X09QX0RCX19NT0RfX09QXzAxX19YODcsCiAgSVRBQl9fMUJZVEVfX09QX0RDX19NT0QsCiAgSVRB
Ql9fMUJZVEVfX09QX0RDX19NT0RfX09QXzAwX19SRUcsCiAgSVRBQl9fMUJZVEVfX09QX0RDX19N
T0RfX09QXzAxX19YODcsCiAgSVRBQl9fMUJZVEVfX09QX0REX19NT0QsCiAgSVRBQl9fMUJZVEVf
X09QX0REX19NT0RfX09QXzAwX19SRUcsCiAgSVRBQl9fMUJZVEVfX09QX0REX19NT0RfX09QXzAx
X19YODcsCiAgSVRBQl9fMUJZVEVfX09QX0RFX19NT0QsCiAgSVRBQl9fMUJZVEVfX09QX0RFX19N
T0RfX09QXzAwX19SRUcsCiAgSVRBQl9fMUJZVEVfX09QX0RFX19NT0RfX09QXzAxX19YODcsCiAg
SVRBQl9fMUJZVEVfX09QX0RGX19NT0QsCiAgSVRBQl9fMUJZVEVfX09QX0RGX19NT0RfX09QXzAw
X19SRUcsCiAgSVRBQl9fMUJZVEVfX09QX0RGX19NT0RfX09QXzAxX19YODcsCiAgSVRBQl9fMUJZ
VEVfX09QX0UzX19BU0laRSwKICBJVEFCX18xQllURV9fT1BfRjZfX1JFRywKICBJVEFCX18xQllU
RV9fT1BfRjdfX1JFRywKICBJVEFCX18xQllURV9fT1BfRkVfX1JFRywKICBJVEFCX18xQllURV9f
T1BfRkZfX1JFRywKICBJVEFCX18zRE5PVywKICBJVEFCX19QRlhfU1NFNjZfXzBGLAogIElUQUJf
X1BGWF9TU0U2Nl9fMEZfX09QXzcxX19SRUcsCiAgSVRBQl9fUEZYX1NTRTY2X18wRl9fT1BfNzJf
X1JFRywKICBJVEFCX19QRlhfU1NFNjZfXzBGX19PUF83M19fUkVHLAogIElUQUJfX1BGWF9TU0U2
Nl9fMEZfX09QX0M3X19SRUcsCiAgSVRBQl9fUEZYX1NTRTY2X18wRl9fT1BfQzdfX1JFR19fT1Bf
MDBfX1ZFTkRPUiwKICBJVEFCX19QRlhfU1NFRjJfXzBGLAogIElUQUJfX1BGWF9TU0VGM19fMEYs
CiAgSVRBQl9fUEZYX1NTRUYzX18wRl9fT1BfQzdfX1JFRywKICBJVEFCX19QRlhfU1NFRjNfXzBG
X19PUF9DN19fUkVHX19PUF8wN19fVkVORE9SLAp9OwoKCmVudW0gdWRfbW5lbW9uaWNfY29kZSB7
CiAgVURfSTNkbm93LAogIFVEX0lhYWEsCiAgVURfSWFhZCwKICBVRF9JYWFtLAogIFVEX0lhYXMs
CiAgVURfSWFkYywKICBVRF9JYWRkLAogIFVEX0lhZGRwZCwKICBVRF9JYWRkcHMsCiAgVURfSWFk
ZHNkLAogIFVEX0lhZGRzcywKICBVRF9JYWRkc3VicGQsCiAgVURfSWFkZHN1YnBzLAogIFVEX0lh
bmQsCiAgVURfSWFuZHBkLAogIFVEX0lhbmRwcywKICBVRF9JYW5kbnBkLAogIFVEX0lhbmRucHMs
CiAgVURfSWFycGwsCiAgVURfSW1vdnN4ZCwKICBVRF9JYm91bmQsCiAgVURfSWJzZiwKICBVRF9J
YnNyLAogIFVEX0lic3dhcCwKICBVRF9JYnQsCiAgVURfSWJ0YywKICBVRF9JYnRyLAogIFVEX0li
dHMsCiAgVURfSWNhbGwsCiAgVURfSWNidywKICBVRF9JY3dkZSwKICBVRF9JY2RxZSwKICBVRF9J
Y2xjLAogIFVEX0ljbGQsCiAgVURfSWNsZmx1c2gsCiAgVURfSWNsZ2ksCiAgVURfSWNsaSwKICBV
RF9JY2x0cywKICBVRF9JY21jLAogIFVEX0ljbW92bywKICBVRF9JY21vdm5vLAogIFVEX0ljbW92
YiwKICBVRF9JY21vdmFlLAogIFVEX0ljbW92eiwKICBVRF9JY21vdm56LAogIFVEX0ljbW92YmUs
CiAgVURfSWNtb3ZhLAogIFVEX0ljbW92cywKICBVRF9JY21vdm5zLAogIFVEX0ljbW92cCwKICBV
RF9JY21vdm5wLAogIFVEX0ljbW92bCwKICBVRF9JY21vdmdlLAogIFVEX0ljbW92bGUsCiAgVURf
SWNtb3ZnLAogIFVEX0ljbXAsCiAgVURfSWNtcHBkLAogIFVEX0ljbXBwcywKICBVRF9JY21wc2Is
CiAgVURfSWNtcHN3LAogIFVEX0ljbXBzZCwKICBVRF9JY21wc3EsCiAgVURfSWNtcHNzLAogIFVE
X0ljbXB4Y2hnLAogIFVEX0ljbXB4Y2hnOGIsCiAgVURfSWNvbWlzZCwKICBVRF9JY29taXNzLAog
IFVEX0ljcHVpZCwKICBVRF9JY3Z0ZHEycGQsCiAgVURfSWN2dGRxMnBzLAogIFVEX0ljdnRwZDJk
cSwKICBVRF9JY3Z0cGQycGksCiAgVURfSWN2dHBkMnBzLAogIFVEX0ljdnRwaTJwcywKICBVRF9J
Y3Z0cGkycGQsCiAgVURfSWN2dHBzMmRxLAogIFVEX0ljdnRwczJwaSwKICBVRF9JY3Z0cHMycGQs
CiAgVURfSWN2dHNkMnNpLAogIFVEX0ljdnRzZDJzcywKICBVRF9JY3Z0c2kyc3MsCiAgVURfSWN2
dHNzMnNpLAogIFVEX0ljdnRzczJzZCwKICBVRF9JY3Z0dHBkMnBpLAogIFVEX0ljdnR0cGQyZHEs
CiAgVURfSWN2dHRwczJkcSwKICBVRF9JY3Z0dHBzMnBpLAogIFVEX0ljdnR0c2Qyc2ksCiAgVURf
SWN2dHNpMnNkLAogIFVEX0ljdnR0c3Myc2ksCiAgVURfSWN3ZCwKICBVRF9JY2RxLAogIFVEX0lj
cW8sCiAgVURfSWRhYSwKICBVRF9JZGFzLAogIFVEX0lkZWMsCiAgVURfSWRpdiwKICBVRF9JZGl2
cGQsCiAgVURfSWRpdnBzLAogIFVEX0lkaXZzZCwKICBVRF9JZGl2c3MsCiAgVURfSWVtbXMsCiAg
VURfSWVudGVyLAogIFVEX0lmMnhtMSwKICBVRF9JZmFicywKICBVRF9JZmFkZCwKICBVRF9JZmFk
ZHAsCiAgVURfSWZibGQsCiAgVURfSWZic3RwLAogIFVEX0lmY2hzLAogIFVEX0lmY2xleCwKICBV
RF9JZmNtb3ZiLAogIFVEX0lmY21vdmUsCiAgVURfSWZjbW92YmUsCiAgVURfSWZjbW92dSwKICBV
RF9JZmNtb3ZuYiwKICBVRF9JZmNtb3ZuZSwKICBVRF9JZmNtb3ZuYmUsCiAgVURfSWZjbW92bnUs
CiAgVURfSWZ1Y29taSwKICBVRF9JZmNvbSwKICBVRF9JZmNvbTIsCiAgVURfSWZjb21wMywKICBV
RF9JZmNvbWksCiAgVURfSWZ1Y29taXAsCiAgVURfSWZjb21pcCwKICBVRF9JZmNvbXAsCiAgVURf
SWZjb21wNSwKICBVRF9JZmNvbXBwLAogIFVEX0lmY29zLAogIFVEX0lmZGVjc3RwLAogIFVEX0lm
ZGl2LAogIFVEX0lmZGl2cCwKICBVRF9JZmRpdnIsCiAgVURfSWZkaXZycCwKICBVRF9JZmVtbXMs
CiAgVURfSWZmcmVlLAogIFVEX0lmZnJlZXAsCiAgVURfSWZpY29tLAogIFVEX0lmaWNvbXAsCiAg
VURfSWZpbGQsCiAgVURfSWZuY3N0cCwKICBVRF9JZm5pbml0LAogIFVEX0lmaWFkZCwKICBVRF9J
ZmlkaXZyLAogIFVEX0lmaWRpdiwKICBVRF9JZmlzdWIsCiAgVURfSWZpc3ViciwKICBVRF9JZmlz
dCwKICBVRF9JZmlzdHAsCiAgVURfSWZpc3R0cCwKICBVRF9JZmxkLAogIFVEX0lmbGQxLAogIFVE
X0lmbGRsMnQsCiAgVURfSWZsZGwyZSwKICBVRF9JZmxkbHBpLAogIFVEX0lmbGRsZzIsCiAgVURf
SWZsZGxuMiwKICBVRF9JZmxkeiwKICBVRF9JZmxkY3csCiAgVURfSWZsZGVudiwKICBVRF9JZm11
bCwKICBVRF9JZm11bHAsCiAgVURfSWZpbXVsLAogIFVEX0lmbm9wLAogIFVEX0lmcGF0YW4sCiAg
VURfSWZwcmVtLAogIFVEX0lmcHJlbTEsCiAgVURfSWZwdGFuLAogIFVEX0lmcm5kaW50LAogIFVE
X0lmcnN0b3IsCiAgVURfSWZuc2F2ZSwKICBVRF9JZnNjYWxlLAogIFVEX0lmc2luLAogIFVEX0lm
c2luY29zLAogIFVEX0lmc3FydCwKICBVRF9JZnN0cCwKICBVRF9JZnN0cDEsCiAgVURfSWZzdHA4
LAogIFVEX0lmc3RwOSwKICBVRF9JZnN0LAogIFVEX0lmbnN0Y3csCiAgVURfSWZuc3RlbnYsCiAg
VURfSWZuc3RzdywKICBVRF9JZnN1YiwKICBVRF9JZnN1YnAsCiAgVURfSWZzdWJyLAogIFVEX0lm
c3VicnAsCiAgVURfSWZ0c3QsCiAgVURfSWZ1Y29tLAogIFVEX0lmdWNvbXAsCiAgVURfSWZ1Y29t
cHAsCiAgVURfSWZ4YW0sCiAgVURfSWZ4Y2gsCiAgVURfSWZ4Y2g0LAogIFVEX0lmeGNoNywKICBV
RF9JZnhyc3RvciwKICBVRF9JZnhzYXZlLAogIFVEX0lmcHh0cmFjdCwKICBVRF9JZnlsMngsCiAg
VURfSWZ5bDJ4cDEsCiAgVURfSWhhZGRwZCwKICBVRF9JaGFkZHBzLAogIFVEX0lobHQsCiAgVURf
SWhzdWJwZCwKICBVRF9JaHN1YnBzLAogIFVEX0lpZGl2LAogIFVEX0lpbiwKICBVRF9JaW11bCwK
ICBVRF9JaW5jLAogIFVEX0lpbnNiLAogIFVEX0lpbnN3LAogIFVEX0lpbnNkLAogIFVEX0lpbnQx
LAogIFVEX0lpbnQzLAogIFVEX0lpbnQsCiAgVURfSWludG8sCiAgVURfSWludmQsCiAgVURfSWlu
dmxwZywKICBVRF9JaW52bHBnYSwKICBVRF9JaXJldHcsCiAgVURfSWlyZXRkLAogIFVEX0lpcmV0
cSwKICBVRF9Jam8sCiAgVURfSWpubywKICBVRF9JamIsCiAgVURfSWphZSwKICBVRF9JanosCiAg
VURfSWpueiwKICBVRF9JamJlLAogIFVEX0lqYSwKICBVRF9JanMsCiAgVURfSWpucywKICBVRF9J
anAsCiAgVURfSWpucCwKICBVRF9JamwsCiAgVURfSWpnZSwKICBVRF9JamxlLAogIFVEX0lqZywK
ICBVRF9JamN4eiwKICBVRF9JamVjeHosCiAgVURfSWpyY3h6LAogIFVEX0lqbXAsCiAgVURfSWxh
aGYsCiAgVURfSWxhciwKICBVRF9JbGRkcXUsCiAgVURfSWxkbXhjc3IsCiAgVURfSWxkcywKICBV
RF9JbGVhLAogIFVEX0lsZXMsCiAgVURfSWxmcywKICBVRF9JbGdzLAogIFVEX0lsaWR0LAogIFVE
X0lsc3MsCiAgVURfSWxlYXZlLAogIFVEX0lsZmVuY2UsCiAgVURfSWxnZHQsCiAgVURfSWxsZHQs
CiAgVURfSWxtc3csCiAgVURfSWxvY2ssCiAgVURfSWxvZHNiLAogIFVEX0lsb2RzdywKICBVRF9J
bG9kc2QsCiAgVURfSWxvZHNxLAogIFVEX0lsb29wbnosCiAgVURfSWxvb3BlLAogIFVEX0lsb29w
LAogIFVEX0lsc2wsCiAgVURfSWx0ciwKICBVRF9JbWFza21vdnEsCiAgVURfSW1heHBkLAogIFVE
X0ltYXhwcywKICBVRF9JbWF4c2QsCiAgVURfSW1heHNzLAogIFVEX0ltZmVuY2UsCiAgVURfSW1p
bnBkLAogIFVEX0ltaW5wcywKICBVRF9JbWluc2QsCiAgVURfSW1pbnNzLAogIFVEX0ltb25pdG9y
LAogIFVEX0ltb3YsCiAgVURfSW1vdmFwZCwKICBVRF9JbW92YXBzLAogIFVEX0ltb3ZkLAogIFVE
X0ltb3ZkZHVwLAogIFVEX0ltb3ZkcWEsCiAgVURfSW1vdmRxdSwKICBVRF9JbW92ZHEycSwKICBV
RF9JbW92aHBkLAogIFVEX0ltb3ZocHMsCiAgVURfSW1vdmxocHMsCiAgVURfSW1vdmxwZCwKICBV
RF9JbW92bHBzLAogIFVEX0ltb3ZobHBzLAogIFVEX0ltb3Ztc2twZCwKICBVRF9JbW92bXNrcHMs
CiAgVURfSW1vdm50ZHEsCiAgVURfSW1vdm50aSwKICBVRF9JbW92bnRwZCwKICBVRF9JbW92bnRw
cywKICBVRF9JbW92bnRxLAogIFVEX0ltb3ZxLAogIFVEX0ltb3ZxYSwKICBVRF9JbW92cTJkcSwK
ICBVRF9JbW92c2IsCiAgVURfSW1vdnN3LAogIFVEX0ltb3ZzZCwKICBVRF9JbW92c3EsCiAgVURf
SW1vdnNsZHVwLAogIFVEX0ltb3ZzaGR1cCwKICBVRF9JbW92c3MsCiAgVURfSW1vdnN4LAogIFVE
X0ltb3Z1cGQsCiAgVURfSW1vdnVwcywKICBVRF9JbW92engsCiAgVURfSW11bCwKICBVRF9JbXVs
cGQsCiAgVURfSW11bHBzLAogIFVEX0ltdWxzZCwKICBVRF9JbXVsc3MsCiAgVURfSW13YWl0LAog
IFVEX0luZWcsCiAgVURfSW5vcCwKICBVRF9Jbm90LAogIFVEX0lvciwKICBVRF9Jb3JwZCwKICBV
RF9Jb3JwcywKICBVRF9Jb3V0LAogIFVEX0lvdXRzYiwKICBVRF9Jb3V0c3csCiAgVURfSW91dHNk
LAogIFVEX0lvdXRzcSwKICBVRF9JcGFja3Nzd2IsCiAgVURfSXBhY2tzc2R3LAogIFVEX0lwYWNr
dXN3YiwKICBVRF9JcGFkZGIsCiAgVURfSXBhZGR3LAogIFVEX0lwYWRkcSwKICBVRF9JcGFkZHNi
LAogIFVEX0lwYWRkc3csCiAgVURfSXBhZGR1c2IsCiAgVURfSXBhZGR1c3csCiAgVURfSXBhbmQs
CiAgVURfSXBhbmRuLAogIFVEX0lwYXVzZSwKICBVRF9JcGF2Z2IsCiAgVURfSXBhdmd3LAogIFVE
X0lwY21wZXFiLAogIFVEX0lwY21wZXF3LAogIFVEX0lwY21wZXFkLAogIFVEX0lwY21wZ3RiLAog
IFVEX0lwY21wZ3R3LAogIFVEX0lwY21wZ3RkLAogIFVEX0lwZXh0cncsCiAgVURfSXBpbnNydywK
ICBVRF9JcG1hZGR3ZCwKICBVRF9JcG1heHN3LAogIFVEX0lwbWF4dWIsCiAgVURfSXBtaW5zdywK
ICBVRF9JcG1pbnViLAogIFVEX0lwbW92bXNrYiwKICBVRF9JcG11bGh1dywKICBVRF9JcG11bGh3
LAogIFVEX0lwbXVsbHcsCiAgVURfSXBtdWx1ZHEsCiAgVURfSXBvcCwKICBVRF9JcG9wYSwKICBV
RF9JcG9wYWQsCiAgVURfSXBvcGZ3LAogIFVEX0lwb3BmZCwKICBVRF9JcG9wZnEsCiAgVURfSXBv
ciwKICBVRF9JcHJlZmV0Y2gsCiAgVURfSXByZWZldGNobnRhLAogIFVEX0lwcmVmZXRjaHQwLAog
IFVEX0lwcmVmZXRjaHQxLAogIFVEX0lwcmVmZXRjaHQyLAogIFVEX0lwc2FkYncsCiAgVURfSXBz
aHVmZCwKICBVRF9JcHNodWZodywKICBVRF9JcHNodWZsdywKICBVRF9JcHNodWZ3LAogIFVEX0lw
c2xsZHEsCiAgVURfSXBzbGx3LAogIFVEX0lwc2xsZCwKICBVRF9JcHNsbHEsCiAgVURfSXBzcmF3
LAogIFVEX0lwc3JhZCwKICBVRF9JcHNybHcsCiAgVURfSXBzcmxkLAogIFVEX0lwc3JscSwKICBV
RF9JcHNybGRxLAogIFVEX0lwc3ViYiwKICBVRF9JcHN1YncsCiAgVURfSXBzdWJkLAogIFVEX0lw
c3VicSwKICBVRF9JcHN1YnNiLAogIFVEX0lwc3Vic3csCiAgVURfSXBzdWJ1c2IsCiAgVURfSXBz
dWJ1c3csCiAgVURfSXB1bnBja2hidywKICBVRF9JcHVucGNraHdkLAogIFVEX0lwdW5wY2toZHEs
CiAgVURfSXB1bnBja2hxZHEsCiAgVURfSXB1bnBja2xidywKICBVRF9JcHVucGNrbHdkLAogIFVE
X0lwdW5wY2tsZHEsCiAgVURfSXB1bnBja2xxZHEsCiAgVURfSXBpMmZ3LAogIFVEX0lwaTJmZCwK
ICBVRF9JcGYyaXcsCiAgVURfSXBmMmlkLAogIFVEX0lwZm5hY2MsCiAgVURfSXBmcG5hY2MsCiAg
VURfSXBmY21wZ2UsCiAgVURfSXBmbWluLAogIFVEX0lwZnJjcCwKICBVRF9JcGZyc3FydCwKICBV
RF9JcGZzdWIsCiAgVURfSXBmYWRkLAogIFVEX0lwZmNtcGd0LAogIFVEX0lwZm1heCwKICBVRF9J
cGZyY3BpdDEsCiAgVURfSXBmcnNwaXQxLAogIFVEX0lwZnN1YnIsCiAgVURfSXBmYWNjLAogIFVE
X0lwZmNtcGVxLAogIFVEX0lwZm11bCwKICBVRF9JcGZyY3BpdDIsCiAgVURfSXBtdWxocncsCiAg
VURfSXBzd2FwZCwKICBVRF9JcGF2Z3VzYiwKICBVRF9JcHVzaCwKICBVRF9JcHVzaGEsCiAgVURf
SXB1c2hhZCwKICBVRF9JcHVzaGZ3LAogIFVEX0lwdXNoZmQsCiAgVURfSXB1c2hmcSwKICBVRF9J
cHhvciwKICBVRF9JcmNsLAogIFVEX0lyY3IsCiAgVURfSXJvbCwKICBVRF9Jcm9yLAogIFVEX0ly
Y3BwcywKICBVRF9JcmNwc3MsCiAgVURfSXJkbXNyLAogIFVEX0lyZHBtYywKICBVRF9JcmR0c2Ms
CiAgVURfSXJkdHNjcCwKICBVRF9JcmVwbmUsCiAgVURfSXJlcCwKICBVRF9JcmV0LAogIFVEX0ly
ZXRmLAogIFVEX0lyc20sCiAgVURfSXJzcXJ0cHMsCiAgVURfSXJzcXJ0c3MsCiAgVURfSXNhaGYs
CiAgVURfSXNhbCwKICBVRF9Jc2FsYywKICBVRF9Jc2FyLAogIFVEX0lzaGwsCiAgVURfSXNociwK
ICBVRF9Jc2JiLAogIFVEX0lzY2FzYiwKICBVRF9Jc2Nhc3csCiAgVURfSXNjYXNkLAogIFVEX0lz
Y2FzcSwKICBVRF9Jc2V0bywKICBVRF9Jc2V0bm8sCiAgVURfSXNldGIsCiAgVURfSXNldG5iLAog
IFVEX0lzZXR6LAogIFVEX0lzZXRueiwKICBVRF9Jc2V0YmUsCiAgVURfSXNldGEsCiAgVURfSXNl
dHMsCiAgVURfSXNldG5zLAogIFVEX0lzZXRwLAogIFVEX0lzZXRucCwKICBVRF9Jc2V0bCwKICBV
RF9Jc2V0Z2UsCiAgVURfSXNldGxlLAogIFVEX0lzZXRnLAogIFVEX0lzZmVuY2UsCiAgVURfSXNn
ZHQsCiAgVURfSXNobGQsCiAgVURfSXNocmQsCiAgVURfSXNodWZwZCwKICBVRF9Jc2h1ZnBzLAog
IFVEX0lzaWR0LAogIFVEX0lzbGR0LAogIFVEX0lzbXN3LAogIFVEX0lzcXJ0cHMsCiAgVURfSXNx
cnRwZCwKICBVRF9Jc3FydHNkLAogIFVEX0lzcXJ0c3MsCiAgVURfSXN0YywKICBVRF9Jc3RkLAog
IFVEX0lzdGdpLAogIFVEX0lzdGksCiAgVURfSXNraW5pdCwKICBVRF9Jc3RteGNzciwKICBVRF9J
c3Rvc2IsCiAgVURfSXN0b3N3LAogIFVEX0lzdG9zZCwKICBVRF9Jc3Rvc3EsCiAgVURfSXN0ciwK
ICBVRF9Jc3ViLAogIFVEX0lzdWJwZCwKICBVRF9Jc3VicHMsCiAgVURfSXN1YnNkLAogIFVEX0lz
dWJzcywKICBVRF9Jc3dhcGdzLAogIFVEX0lzeXNjYWxsLAogIFVEX0lzeXNlbnRlciwKICBVRF9J
c3lzZXhpdCwKICBVRF9Jc3lzcmV0LAogIFVEX0l0ZXN0LAogIFVEX0l1Y29taXNkLAogIFVEX0l1
Y29taXNzLAogIFVEX0l1ZDIsCiAgVURfSXVucGNraHBkLAogIFVEX0l1bnBja2hwcywKICBVRF9J
dW5wY2tscHMsCiAgVURfSXVucGNrbHBkLAogIFVEX0l2ZXJyLAogIFVEX0l2ZXJ3LAogIFVEX0l2
bWNhbGwsCiAgVURfSXZtY2xlYXIsCiAgVURfSXZteG9uLAogIFVEX0l2bXB0cmxkLAogIFVEX0l2
bXB0cnN0LAogIFVEX0l2bXJlc3VtZSwKICBVRF9Jdm14b2ZmLAogIFVEX0l2bXJ1biwKICBVRF9J
dm1tY2FsbCwKICBVRF9Jdm1sb2FkLAogIFVEX0l2bXNhdmUsCiAgVURfSXdhaXQsCiAgVURfSXdi
aW52ZCwKICBVRF9Jd3Jtc3IsCiAgVURfSXhhZGQsCiAgVURfSXhjaGcsCiAgVURfSXhsYXRiLAog
IFVEX0l4b3IsCiAgVURfSXhvcnBkLAogIFVEX0l4b3JwcywKICBVRF9JZGIsCiAgVURfSWludmFs
aWQsCiAgVURfSWQzdmlsLAogIFVEX0luYSwKICBVRF9JZ3JwX3JlZywKICBVRF9JZ3JwX3JtLAog
IFVEX0lncnBfdmVuZG9yLAogIFVEX0lncnBfeDg3LAogIFVEX0lncnBfbW9kZSwKICBVRF9JZ3Jw
X29zaXplLAogIFVEX0lncnBfYXNpemUsCiAgVURfSWdycF9tb2QsCiAgVURfSW5vbmUsCn07CgoK
CmV4dGVybiBjb25zdCBjaGFyKiB1ZF9tbmVtb25pY3Nfc3RyW107OwpleHRlcm4gc3RydWN0IHVk
X2l0YWJfZW50cnkqIHVkX2l0YWJfbGlzdFtdOwoKI2VuZGlmCgAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAGtkYi94ODYv
dWRpczg2LTEuNy90eXBlcy5oAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAwMDAwNjY0ADAwMDI3NTYAMDAwMjc1
NgAwMDAwMDAxMTQzMwAxMTc2NTQ2NTU1NgAwMTUxNjUAIDAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAdXN0YXIgIABtcmF0aG9yAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAG1yYXRob3IAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAALyogLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0KICogdHlwZXMuaAogKgogKiBDb3B5cmlnaHQgKGMpIDIwMDYsIFZpdmVr
IE1vaGFuIDx2aXZla0BzaWc5LmNvbT4KICogQWxsIHJpZ2h0cyByZXNlcnZlZC4gU2VlIExJQ0VO
U0UKICogLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0KICovCiNpZm5kZWYgVURfVFlQRVNfSAojZGVmaW5l
IFVEX1RZUEVTX0gKCgojaW5jbHVkZSAiLi4vLi4vaW5jbHVkZS9rZGJpbmMuaCIKCiNkZWZpbmUg
Rk1UNjQgIiVsbCIKI2luY2x1ZGUgIml0YWIuaCIKCi8qIC0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tCiAq
IEFsbCBwb3NzaWJsZSAidHlwZXMiIG9mIG9iamVjdHMgaW4gdWRpczg2LiBPcmRlciBpcyBJbXBv
cnRhbnQhCiAqIC0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tCiAqLwplbnVtIHVkX3R5cGUKewogIFVEX05P
TkUsCgogIC8qIDggYml0IEdQUnMgKi8KICBVRF9SX0FMLAlVRF9SX0NMLAlVRF9SX0RMLAlVRF9S
X0JMLAogIFVEX1JfQUgsCVVEX1JfQ0gsCVVEX1JfREgsCVVEX1JfQkgsCiAgVURfUl9TUEwsCVVE
X1JfQlBMLAlVRF9SX1NJTCwJVURfUl9ESUwsCiAgVURfUl9SOEIsCVVEX1JfUjlCLAlVRF9SX1Ix
MEIsCVVEX1JfUjExQiwKICBVRF9SX1IxMkIsCVVEX1JfUjEzQiwJVURfUl9SMTRCLAlVRF9SX1Ix
NUIsCgogIC8qIDE2IGJpdCBHUFJzICovCiAgVURfUl9BWCwJVURfUl9DWCwJVURfUl9EWCwJVURf
Ul9CWCwKICBVRF9SX1NQLAlVRF9SX0JQLAlVRF9SX1NJLAlVRF9SX0RJLAogIFVEX1JfUjhXLAlV
RF9SX1I5VywJVURfUl9SMTBXLAlVRF9SX1IxMVcsCiAgVURfUl9SMTJXLAlVRF9SX1IxM1csCVVE
X1JfUjE0VywJVURfUl9SMTVXLAoJCiAgLyogMzIgYml0IEdQUnMgKi8KICBVRF9SX0VBWCwJVURf
Ul9FQ1gsCVVEX1JfRURYLAlVRF9SX0VCWCwKICBVRF9SX0VTUCwJVURfUl9FQlAsCVVEX1JfRVNJ
LAlVRF9SX0VESSwKICBVRF9SX1I4RCwJVURfUl9SOUQsCVVEX1JfUjEwRCwJVURfUl9SMTFELAog
IFVEX1JfUjEyRCwJVURfUl9SMTNELAlVRF9SX1IxNEQsCVVEX1JfUjE1RCwKCQogIC8qIDY0IGJp
dCBHUFJzICovCiAgVURfUl9SQVgsCVVEX1JfUkNYLAlVRF9SX1JEWCwJVURfUl9SQlgsCiAgVURf
Ul9SU1AsCVVEX1JfUkJQLAlVRF9SX1JTSSwJVURfUl9SREksCiAgVURfUl9SOCwJVURfUl9SOSwJ
VURfUl9SMTAsCVVEX1JfUjExLAogIFVEX1JfUjEyLAlVRF9SX1IxMywJVURfUl9SMTQsCVVEX1Jf
UjE1LAoKICAvKiBzZWdtZW50IHJlZ2lzdGVycyAqLwogIFVEX1JfRVMsCVVEX1JfQ1MsCVVEX1Jf
U1MsCVVEX1JfRFMsCiAgVURfUl9GUywJVURfUl9HUywJCgogIC8qIGNvbnRyb2wgcmVnaXN0ZXJz
Ki8KICBVRF9SX0NSMCwJVURfUl9DUjEsCVVEX1JfQ1IyLAlVRF9SX0NSMywKICBVRF9SX0NSNCwJ
VURfUl9DUjUsCVVEX1JfQ1I2LAlVRF9SX0NSNywKICBVRF9SX0NSOCwJVURfUl9DUjksCVVEX1Jf
Q1IxMCwJVURfUl9DUjExLAogIFVEX1JfQ1IxMiwJVURfUl9DUjEzLAlVRF9SX0NSMTQsCVVEX1Jf
Q1IxNSwKCQogIC8qIGRlYnVnIHJlZ2lzdGVycyAqLwogIFVEX1JfRFIwLAlVRF9SX0RSMSwJVURf
Ul9EUjIsCVVEX1JfRFIzLAogIFVEX1JfRFI0LAlVRF9SX0RSNSwJVURfUl9EUjYsCVVEX1JfRFI3
LAogIFVEX1JfRFI4LAlVRF9SX0RSOSwJVURfUl9EUjEwLAlVRF9SX0RSMTEsCiAgVURfUl9EUjEy
LAlVRF9SX0RSMTMsCVVEX1JfRFIxNCwJVURfUl9EUjE1LAoKICAvKiBtbXggcmVnaXN0ZXJzICov
CiAgVURfUl9NTTAsCVVEX1JfTU0xLAlVRF9SX01NMiwJVURfUl9NTTMsCiAgVURfUl9NTTQsCVVE
X1JfTU01LAlVRF9SX01NNiwJVURfUl9NTTcsCgogIC8qIHg4NyByZWdpc3RlcnMgKi8KICBVRF9S
X1NUMCwJVURfUl9TVDEsCVVEX1JfU1QyLAlVRF9SX1NUMywKICBVRF9SX1NUNCwJVURfUl9TVDUs
CVVEX1JfU1Q2LAlVRF9SX1NUNywgCgogIC8qIGV4dGVuZGVkIG11bHRpbWVkaWEgcmVnaXN0ZXJz
ICovCiAgVURfUl9YTU0wLAlVRF9SX1hNTTEsCVVEX1JfWE1NMiwJVURfUl9YTU0zLAogIFVEX1Jf
WE1NNCwJVURfUl9YTU01LAlVRF9SX1hNTTYsCVVEX1JfWE1NNywKICBVRF9SX1hNTTgsCVVEX1Jf
WE1NOSwJVURfUl9YTU0xMCwJVURfUl9YTU0xMSwKICBVRF9SX1hNTTEyLAlVRF9SX1hNTTEzLAlV
RF9SX1hNTTE0LAlVRF9SX1hNTTE1LAoKICBVRF9SX1JJUCwKCiAgLyogT3BlcmFuZCBUeXBlcyAq
LwogIFVEX09QX1JFRywJVURfT1BfTUVNLAlVRF9PUF9QVFIsCVVEX09QX0lNTSwJCiAgVURfT1Bf
SklNTSwJVURfT1BfQ09OU1QKfTsKCi8qIC0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tCiAqIHN0cnVjdCB1
ZF9vcGVyYW5kIC0gRGlzYXNzZW1ibGVkIGluc3RydWN0aW9uIE9wZXJhbmQuCiAqIC0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tCiAqLwpzdHJ1Y3QgdWRfb3BlcmFuZCAKewogIGVudW0gdWRfdHlwZQkJdHlw
ZTsKICB1aW50OF90CQlzaXplOwogIHVuaW9uIHsKCWludDhfdAkJc2J5dGU7Cgl1aW50OF90CQl1
Ynl0ZTsKCWludDE2X3QJCXN3b3JkOwoJdWludDE2X3QJdXdvcmQ7CglpbnQzMl90CQlzZHdvcmQ7
Cgl1aW50MzJfdAl1ZHdvcmQ7CglpbnQ2NF90CQlzcXdvcmQ7Cgl1aW50NjRfdAl1cXdvcmQ7CgoJ
c3RydWN0IHsKCQl1aW50MTZfdCBzZWc7CgkJdWludDMyX3Qgb2ZmOwoJfSBwdHI7CiAgfSBsdmFs
OwoKICBlbnVtIHVkX3R5cGUJCWJhc2U7CiAgZW51bSB1ZF90eXBlCQlpbmRleDsKICB1aW50OF90
CQlvZmZzZXQ7CiAgdWludDhfdAkJc2NhbGU7CQp9OwoKLyogLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0K
ICogc3RydWN0IHVkIC0gVGhlIHVkaXM4NiBvYmplY3QuCiAqIC0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
CiAqLwpzdHJ1Y3QgdWQKewogIGludCAJCQkoKmlucF9ob29rKSAoc3RydWN0IHVkKik7CiAgdWlu
dDhfdAkJaW5wX2N1cnI7CiAgdWludDhfdAkJaW5wX2ZpbGw7CiAgdWludDhfdAkJaW5wX2N0cjsK
ICB1aW50OF90KgkJaW5wX2J1ZmY7CiAgdWludDhfdCoJCWlucF9idWZmX2VuZDsKICB1aW50OF90
CQlpbnBfZW5kOwogIHZvaWQJCQkoKnRyYW5zbGF0b3IpKHN0cnVjdCB1ZCopOwogIHVpbnQ2NF90
CQlpbnNuX29mZnNldDsKICBjaGFyCQkJaW5zbl9oZXhjb2RlWzMyXTsKICBjaGFyCQkJaW5zbl9i
dWZmZXJbNjRdOwogIHVuc2lnbmVkIGludAkJaW5zbl9maWxsOwogIHVpbnQ4X3QJCWRpc19tb2Rl
OwogIHVpbnQ2NF90CQlwYzsKICB1aW50OF90CQl2ZW5kb3I7CiAgc3RydWN0IG1hcF9lbnRyeSoJ
bWFwZW47CiAgZW51bSB1ZF9tbmVtb25pY19jb2RlCW1uZW1vbmljOwogIHN0cnVjdCB1ZF9vcGVy
YW5kCW9wZXJhbmRbM107CiAgdWludDhfdAkJZXJyb3I7CiAgdWludDhfdAkgCXBmeF9yZXg7CiAg
dWludDhfdCAJCXBmeF9zZWc7CiAgdWludDhfdCAJCXBmeF9vcHI7CiAgdWludDhfdCAJCXBmeF9h
ZHI7CiAgdWludDhfdCAJCXBmeF9sb2NrOwogIHVpbnQ4X3QgCQlwZnhfcmVwOwogIHVpbnQ4X3Qg
CQlwZnhfcmVwZTsKICB1aW50OF90IAkJcGZ4X3JlcG5lOwogIHVpbnQ4X3QgCQlwZnhfaW5zbjsK
ICB1aW50OF90CQlkZWZhdWx0NjQ7CiAgdWludDhfdAkJb3ByX21vZGU7CiAgdWludDhfdAkJYWRy
X21vZGU7CiAgdWludDhfdAkJYnJfZmFyOwogIHVpbnQ4X3QJCWJyX25lYXI7CiAgdWludDhfdAkJ
aW1wbGljaXRfYWRkcjsKICB1aW50OF90CQljMTsKICB1aW50OF90CQljMjsKICB1aW50OF90CQlj
MzsKICB1aW50OF90IAkJaW5wX2NhY2hlWzI1Nl07CiAgdWludDhfdAkJaW5wX3Nlc3NbNjRdOwog
IHN0cnVjdCB1ZF9pdGFiX2VudHJ5ICogaXRhYl9lbnRyeTsKfTsKCi8qIC0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tCiAqIFR5cGUtZGVmaW5pdGlvbnMKICogLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0KICovCnR5
cGVkZWYgZW51bSB1ZF90eXBlIAkJdWRfdHlwZV90Owp0eXBlZGVmIGVudW0gdWRfbW5lbW9uaWNf
Y29kZQl1ZF9tbmVtb25pY19jb2RlX3Q7Cgp0eXBlZGVmIHN0cnVjdCB1ZCAJCXVkX3Q7CnR5cGVk
ZWYgc3RydWN0IHVkX29wZXJhbmQgCXVkX29wZXJhbmRfdDsKCiNkZWZpbmUgVURfU1lOX0lOVEVM
CQl1ZF90cmFuc2xhdGVfaW50ZWwKI2RlZmluZSBVRF9TWU5fQVRUCQl1ZF90cmFuc2xhdGVfYXR0
CiNkZWZpbmUgVURfRU9JCQkJLTEKI2RlZmluZSBVRF9JTlBfQ0FDSEVfU1oJCTMyCiNkZWZpbmUg
VURfVkVORE9SX0FNRAkJMAojZGVmaW5lIFVEX1ZFTkRPUl9JTlRFTAkJMQoKI2RlZmluZSBiYWls
X291dCh1ZCxlcnJvcl9jb2RlKSBsb25nam1wKCAodWQpLT5iYWlsb3V0LCBlcnJvcl9jb2RlICkK
I2RlZmluZSB0cnlfZGVjb2RlKHVkKSBpZiAoIHNldGptcCggKHVkKS0+YmFpbG91dCApID09IDAg
KQojZGVmaW5lIGNhdGNoX2Vycm9yKCkgZWxzZQoKI2VuZGlmCgAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABrZGIveDg2L3VkaXM4Ni0xLjcv
TElDRU5TRQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAMDAwMDY2NAAwMDAyNzU2ADAwMDI3NTYAMDAwMDAwMDI1
MTEAMTE3NjU0NjU1NTYAMDE0NjUyACAwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAHVzdGFyICAAbXJhdGhvcgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABtcmF0aG9y
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAENvcHlyaWdodCAoYykgMjAwMiwg
MjAwMywgMjAwNCwgMjAwNSwgMjAwNiA8dml2ZWtAc2lnOS5jb20+CkFsbCByaWdodHMgcmVzZXJ2
ZWQuCgpSZWRpc3RyaWJ1dGlvbiBhbmQgdXNlIGluIHNvdXJjZSBhbmQgYmluYXJ5IGZvcm1zLCB3
aXRoIG9yIHdpdGhvdXQgbW9kaWZpY2F0aW9uLCAKYXJlIHBlcm1pdHRlZCBwcm92aWRlZCB0aGF0
IHRoZSBmb2xsb3dpbmcgY29uZGl0aW9ucyBhcmUgbWV0OgoKICAgICogUmVkaXN0cmlidXRpb25z
IG9mIHNvdXJjZSBjb2RlIG11c3QgcmV0YWluIHRoZSBhYm92ZSBjb3B5cmlnaHQgbm90aWNlLCAK
ICAgICAgdGhpcyBsaXN0IG9mIGNvbmRpdGlvbnMgYW5kIHRoZSBmb2xsb3dpbmcgZGlzY2xhaW1l
ci4KICAgICogUmVkaXN0cmlidXRpb25zIGluIGJpbmFyeSBmb3JtIG11c3QgcmVwcm9kdWNlIHRo
ZSBhYm92ZSBjb3B5cmlnaHQgbm90aWNlLCAKICAgICAgdGhpcyBsaXN0IG9mIGNvbmRpdGlvbnMg
YW5kIHRoZSBmb2xsb3dpbmcgZGlzY2xhaW1lciBpbiB0aGUgZG9jdW1lbnRhdGlvbiAKICAgICAg
YW5kL29yIG90aGVyIG1hdGVyaWFscyBwcm92aWRlZCB3aXRoIHRoZSBkaXN0cmlidXRpb24uCgpU
SElTIFNPRlRXQVJFIElTIFBST1ZJREVEIEJZIFRIRSBDT1BZUklHSFQgSE9MREVSUyBBTkQgQ09O
VFJJQlVUT1JTICJBUyBJUyIgQU5EIApBTlkgRVhQUkVTUyBPUiBJTVBMSUVEIFdBUlJBTlRJRVMs
IElOQ0xVRElORywgQlVUIE5PVCBMSU1JVEVEIFRPLCBUSEUgSU1QTElFRCAKV0FSUkFOVElFUyBP
RiBNRVJDSEFOVEFCSUxJVFkgQU5EIEZJVE5FU1MgRk9SIEEgUEFSVElDVUxBUiBQVVJQT1NFIEFS
RSAKRElTQ0xBSU1FRC4gSU4gTk8gRVZFTlQgU0hBTEwgVEhFIENPUFlSSUdIVCBPV05FUiBPUiBD
T05UUklCVVRPUlMgQkUgTElBQkxFIEZPUiAKQU5ZIERJUkVDVCwgSU5ESVJFQ1QsIElOQ0lERU5U
QUwsIFNQRUNJQUwsIEVYRU1QTEFSWSwgT1IgQ09OU0VRVUVOVElBTCBEQU1BR0VTIAooSU5DTFVE
SU5HLCBCVVQgTk9UIExJTUlURUQgVE8sIFBST0NVUkVNRU5UIE9GIFNVQlNUSVRVVEUgR09PRFMg
T1IgU0VSVklDRVM7IApMT1NTIE9GIFVTRSwgREFUQSwgT1IgUFJPRklUUzsgT1IgQlVTSU5FU1Mg
SU5URVJSVVBUSU9OKSBIT1dFVkVSIENBVVNFRCBBTkQgT04gCkFOWSBUSEVPUlkgT0YgTElBQklM
SVRZLCBXSEVUSEVSIElOIENPTlRSQUNULCBTVFJJQ1QgTElBQklMSVRZLCBPUiBUT1JUIAooSU5D
TFVESU5HIE5FR0xJR0VOQ0UgT1IgT1RIRVJXSVNFKSBBUklTSU5HIElOIEFOWSBXQVkgT1VUIE9G
IFRIRSBVU0UgT0YgVEhJUyAKU09GVFdBUkUsIEVWRU4gSUYgQURWSVNFRCBPRiBUSEUgUE9TU0lC
SUxJVFkgT0YgU1VDSCBEQU1BR0UuCgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAGtkYi94ODYvdWRpczg2LTEuNy9pbnB1
dC5jAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAwMDAwNjY0ADAwMDI3NTYAMDAwMjc1NgAwMDAwMDAxMzc0MgAx
MTc2NTQ2NTU1NgAwMTUxNjAAIDAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAdXN0YXIgIABtcmF0aG9yAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAG1yYXRob3IAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAALyogLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0K
ICogaW5wdXQuYwogKgogKiBDb3B5cmlnaHQgKGMpIDIwMDQsIDIwMDUsIDIwMDYsIFZpdmVrIE1v
aGFuIDx2aXZla0BzaWc5LmNvbT4KICogQWxsIHJpZ2h0cyByZXNlcnZlZC4gU2VlIExJQ0VOU0UK
ICogLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0KICovCiNpbmNsdWRlICJleHRlcm4uaCIKI2luY2x1ZGUg
InR5cGVzLmgiCiNpbmNsdWRlICJpbnB1dC5oIgoKLyogLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0KICog
aW5wX2J1ZmZfaG9vaygpIC0gSG9vayBmb3IgYnVmZmVyZWQgaW5wdXRzLgogKiAtLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLQogKi8Kc3RhdGljIGludCAKaW5wX2J1ZmZfaG9vayhzdHJ1Y3QgdWQqIHUpCnsK
ICBpZiAodS0+aW5wX2J1ZmYgPCB1LT5pbnBfYnVmZl9lbmQpCglyZXR1cm4gKnUtPmlucF9idWZm
Kys7CiAgZWxzZQlyZXR1cm4gLTE7Cn0KCiNpZm5kZWYgX19VRF9TVEFOREFMT05FX18KLyogLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0KICogaW5wX2ZpbGVfaG9vaygpIC0gSG9vayBmb3IgRklMRSBpbnB1
dHMuCiAqIC0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tCiAqLwpzdGF0aWMgaW50IAppbnBfZmlsZV9ob29r
KHN0cnVjdCB1ZCogdSkKewogIHJldHVybiBmZ2V0Yyh1LT5pbnBfZmlsZSk7Cn0KI2VuZGlmIC8q
IF9fVURfU1RBTkRBTE9ORV9fKi8KCi8qID09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09CiAqIHVkX2lucF9z
ZXRfaG9vaygpIC0gU2V0cyBpbnB1dCBob29rLgogKiA9PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PQogKi8K
ZXh0ZXJuIHZvaWQgCnVkX3NldF9pbnB1dF9ob29rKHJlZ2lzdGVyIHN0cnVjdCB1ZCogdSwgaW50
ICgqaG9vaykoc3RydWN0IHVkKikpCnsKICB1LT5pbnBfaG9vayA9IGhvb2s7CiAgaW5wX2luaXQo
dSk7Cn0KCi8qID09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09CiAqIHVkX2lucF9zZXRfYnVmZmVyKCkgLSBT
ZXQgYnVmZmVyIGFzIGlucHV0LgogKiA9PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PQogKi8KZXh0ZXJuIHZv
aWQgCnVkX3NldF9pbnB1dF9idWZmZXIocmVnaXN0ZXIgc3RydWN0IHVkKiB1LCB1aW50OF90KiBi
dWYsIHNpemVfdCBsZW4pCnsKICB1LT5pbnBfaG9vayA9IGlucF9idWZmX2hvb2s7CiAgdS0+aW5w
X2J1ZmYgPSBidWY7CiAgdS0+aW5wX2J1ZmZfZW5kID0gYnVmICsgbGVuOwogIGlucF9pbml0KHUp
Owp9CgojaWZuZGVmIF9fVURfU1RBTkRBTE9ORV9fCi8qID09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09CiAq
IHVkX2lucHV0X3NldF9maWxlKCkgLSBTZXQgYnVmZmVyIGFzIGlucHV0LgogKiA9PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PQogKi8KZXh0ZXJuIHZvaWQgCnVkX3NldF9pbnB1dF9maWxlKHJlZ2lzdGVyIHN0
cnVjdCB1ZCogdSwgRklMRSogZikKewogIHUtPmlucF9ob29rID0gaW5wX2ZpbGVfaG9vazsKICB1
LT5pbnBfZmlsZSA9IGY7CiAgaW5wX2luaXQodSk7Cn0KI2VuZGlmIC8qIF9fVURfU1RBTkRBTE9O
RV9fICovCgovKiA9PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PQogKiB1ZF9pbnB1dF9za2lwKCkgLSBTa2lw
IG4gaW5wdXQgYnl0ZXMuCiAqID09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09CiAqLwpleHRlcm4gdm9pZCAK
dWRfaW5wdXRfc2tpcChzdHJ1Y3QgdWQqIHUsIHNpemVfdCBuKQp7CiAgd2hpbGUgKG4tLSkgewoJ
dS0+aW5wX2hvb2sodSk7CiAgfQp9CgovKiA9PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PQogKiB1ZF9pbnB1
dF9lbmQoKSAtIFRlc3QgZm9yIGVuZCBvZiBpbnB1dC4KICogPT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT0K
ICovCmV4dGVybiBpbnQgCnVkX2lucHV0X2VuZChzdHJ1Y3QgdWQqIHUpCnsKICByZXR1cm4gKHUt
PmlucF9jdXJyID09IHUtPmlucF9maWxsKSAmJiB1LT5pbnBfZW5kOwp9CgovKiAtLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLQogKiBpbnBfbmV4dCgpIC0gTG9hZHMgYW5kIHJldHVybnMgdGhlIG5leHQgYnl0
ZSBmcm9tIGlucHV0LgogKgogKiBpbnBfY3VyciBhbmQgaW5wX2ZpbGwgYXJlIHBvaW50ZXJzIHRv
IHRoZSBjYWNoZS4gVGhlIHByb2dyYW0gaXMgd3JpdHRlbiBiYXNlZAogKiBvbiB0aGUgcHJvcGVy
dHkgdGhhdCB0aGV5IGFyZSA4LWJpdHMgaW4gc2l6ZSwgYW5kIHdpbGwgZXZlbnR1YWxseSB3cmFw
IGFyb3VuZAogKiBmb3JtaW5nIGEgY2lyY3VsYXIgYnVmZmVyLiBTbywgdGhlIHNpemUgb2YgdGhl
IGNhY2hlIGlzIDI1NiBpbiBzaXplLCBraW5kIG9mCiAqIHVubmVjZXNzYXJ5IHlldCBvcHRpbWl6
ZWQuCiAqCiAqIEEgYnVmZmVyIGlucF9zZXNzIHN0b3JlcyB0aGUgYnl0ZXMgZGlzYXNzZW1ibGVk
IGZvciBhIHNpbmdsZSBzZXNzaW9uLgogKiAtLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLQogKi8KZXh0ZXJu
IHVpbnQ4X3QgaW5wX25leHQoc3RydWN0IHVkKiB1KSAKewogIGludCBjID0gLTE7CiAgLyogaWYg
Y3VycmVudCBwb2ludGVyIGlzIG5vdCB1cHRvIHRoZSBmaWxsIHBvaW50IGluIHRoZSAKICAgKiBp
bnB1dCBjYWNoZS4KICAgKi8KICBpZiAoIHUtPmlucF9jdXJyICE9IHUtPmlucF9maWxsICkgewoJ
YyA9IHUtPmlucF9jYWNoZVsgKyt1LT5pbnBfY3VyciBdOwogIC8qIGlmICFlbmQtb2YtaW5wdXQs
IGNhbGwgdGhlIGlucHV0IGhvb2sgYW5kIGdldCBhIGJ5dGUgKi8KICB9IGVsc2UgaWYgKCB1LT5p
bnBfZW5kIHx8ICggYyA9IHUtPmlucF9ob29rKCB1ICkgKSA9PSAtMSApIHsKCS8qIGVuZC1vZi1p
bnB1dCwgbWFyayBpdCBhcyBhbiBlcnJvciwgc2luY2UgdGhlIGRlY29kZXIsCgkgKiBleHBlY3Rl
ZCBhIGJ5dGUgbW9yZS4KCSAqLwoJdS0+ZXJyb3IgPSAxOwoJLyogZmxhZyBlbmQgb2YgaW5wdXQg
Ki8KCXUtPmlucF9lbmQgPSAxOwoJcmV0dXJuIDA7CiAgfSBlbHNlIHsKCS8qIGluY3JlbWVudCBw
b2ludGVycywgd2UgaGF2ZSBhIG5ldyBieXRlLiAgKi8KCXUtPmlucF9jdXJyID0gKyt1LT5pbnBf
ZmlsbDsKCS8qIGFkZCB0aGUgYnl0ZSB0byB0aGUgY2FjaGUgKi8KCXUtPmlucF9jYWNoZVsgdS0+
aW5wX2ZpbGwgXSA9IGM7CiAgfQogIC8qIHJlY29yZCBieXRlcyBpbnB1dCBwZXIgZGVjb2RlLXNl
c3Npb24uICovCiAgdS0+aW5wX3Nlc3NbIHUtPmlucF9jdHIrKyBdID0gYzsKICAvKiByZXR1cm4g
Ynl0ZSAqLwogIHJldHVybiAoIHVpbnQ4X3QgKSBjOwp9CgovKiAtLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LQogKiBpbnBfYmFjaygpIC0gTW92ZSBiYWNrIGEgc2luZ2xlIGJ5dGUgaW4gdGhlIHN0cmVhbS4K
ICogLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0KICovCmV4dGVybiB2b2lkCmlucF9iYWNrKHN0cnVjdCB1
ZCogdSkgCnsKICBpZiAoIHUtPmlucF9jdHIgPiAwICkgewoJLS11LT5pbnBfY3VycjsKCS0tdS0+
aW5wX2N0cjsKICB9Cn0KCi8qIC0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tCiAqIGlucF9wZWVrKCkgLSBQ
ZWVrIGludG8gdGhlIG5leHQgYnl0ZSBpbiBzb3VyY2UuIAogKiAtLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LQogKi8KZXh0ZXJuIHVpbnQ4X3QKaW5wX3BlZWsoc3RydWN0IHVkKiB1KSAKewogIHVpbnQ4X3Qg
ciA9IGlucF9uZXh0KHUpOwogIGlmICggIXUtPmVycm9yICkgaW5wX2JhY2sodSk7IC8qIERvbid0
IGJhY2t1cCBpZiB0aGVyZSB3YXMgYW4gZXJyb3IgKi8KICByZXR1cm4gcjsKfQoKLyogLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0KICogaW5wX21vdmUoKSAtIE1vdmUgYWhlYWQgbiBpbnB1dCBieXRlcy4K
ICogLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0KICovCmV4dGVybiB2b2lkCmlucF9tb3ZlKHN0cnVjdCB1
ZCogdSwgc2l6ZV90IG4pIAp7CiAgd2hpbGUgKG4tLSkKCWlucF9uZXh0KHUpOwp9CgovKi0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLQogKiAgaW5wX3VpbnROKCkgLSByZXR1cm4gdWludE4gZnJvbSBzb3Vy
Y2UuCiAqLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tCiAqLwpleHRlcm4gdWludDhfdCAKaW5wX3VpbnQ4
KHN0cnVjdCB1ZCogdSkKewogIHJldHVybiBpbnBfbmV4dCh1KTsKfQoKZXh0ZXJuIHVpbnQxNl90
IAppbnBfdWludDE2KHN0cnVjdCB1ZCogdSkKewogIHVpbnQxNl90IHIsIHJldDsKCiAgcmV0ID0g
aW5wX25leHQodSk7CiAgciA9IGlucF9uZXh0KHUpOwogIHJldHVybiByZXQgfCAociA8PCA4KTsK
fQoKZXh0ZXJuIHVpbnQzMl90IAppbnBfdWludDMyKHN0cnVjdCB1ZCogdSkKewogIHVpbnQzMl90
IHIsIHJldDsKCiAgcmV0ID0gaW5wX25leHQodSk7CiAgciA9IGlucF9uZXh0KHUpOwogIHJldCA9
IHJldCB8IChyIDw8IDgpOwogIHIgPSBpbnBfbmV4dCh1KTsKICByZXQgPSByZXQgfCAociA8PCAx
Nik7CiAgciA9IGlucF9uZXh0KHUpOwogIHJldHVybiByZXQgfCAociA8PCAyNCk7Cn0KCmV4dGVy
biB1aW50NjRfdCAKaW5wX3VpbnQ2NChzdHJ1Y3QgdWQqIHUpCnsKICB1aW50NjRfdCByLCByZXQ7
CgogIHJldCA9IGlucF9uZXh0KHUpOwogIHIgPSBpbnBfbmV4dCh1KTsKICByZXQgPSByZXQgfCAo
ciA8PCA4KTsKICByID0gaW5wX25leHQodSk7CiAgcmV0ID0gcmV0IHwgKHIgPDwgMTYpOwogIHIg
PSBpbnBfbmV4dCh1KTsKICByZXQgPSByZXQgfCAociA8PCAyNCk7CiAgciA9IGlucF9uZXh0KHUp
OwogIHJldCA9IHJldCB8IChyIDw8IDMyKTsKICByID0gaW5wX25leHQodSk7CiAgcmV0ID0gcmV0
IHwgKHIgPDwgNDApOwogIHIgPSBpbnBfbmV4dCh1KTsKICByZXQgPSByZXQgfCAociA8PCA0OCk7
CiAgciA9IGlucF9uZXh0KHUpOwogIHJldHVybiByZXQgfCAociA8PCA1Nik7Cn0KAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAa2RiL3g4Ni91ZGlzODYtMS43L2RlY29kZS5oAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAADAwMDA2NjQAMDAwMjc1NgAwMDAyNzU2ADAwMDAwMDIxNTI2ADExNzY1NDY1NTU2ADAx
NTI1MAAgMAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAB1c3RhciAg
AG1yYXRob3IAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAbXJhdGhvcgAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAjaWZuZGVmIFVEX0RFQ09ERV9ICiNkZWZpbmUgVURfREVDT0RF
X0gKCiNkZWZpbmUgTUFYX0lOU05fTEVOR1RIIDE1CgovKiByZWdpc3RlciBjbGFzc2VzICovCiNk
ZWZpbmUgVF9OT05FICAwCiNkZWZpbmUgVF9HUFIgICAxCiNkZWZpbmUgVF9NTVggICAyCiNkZWZp
bmUgVF9DUkcgICAzCiNkZWZpbmUgVF9EQkcgICA0CiNkZWZpbmUgVF9TRUcgICA1CiNkZWZpbmUg
VF9YTU0gICA2CgovKiBpdGFiIHByZWZpeCBiaXRzICovCiNkZWZpbmUgUF9ub25lICAgICAgICAg
ICggMCApCiNkZWZpbmUgUF9jMSAgICAgICAgICAgICggMSA8PCAwICkKI2RlZmluZSBQX0MxKG4p
ICAgICAgICAgKCAoIG4gPj4gMCApICYgMSApCiNkZWZpbmUgUF9yZXhiICAgICAgICAgICggMSA8
PCAxICkKI2RlZmluZSBQX1JFWEIobikgICAgICAgKCAoIG4gPj4gMSApICYgMSApCiNkZWZpbmUg
UF9kZXBNICAgICAgICAgICggMSA8PCAyICkKI2RlZmluZSBQX0RFUE0obikgICAgICAgKCAoIG4g
Pj4gMiApICYgMSApCiNkZWZpbmUgUF9jMyAgICAgICAgICAgICggMSA8PCAzICkKI2RlZmluZSBQ
X0MzKG4pICAgICAgICAgKCAoIG4gPj4gMyApICYgMSApCiNkZWZpbmUgUF9pbnY2NCAgICAgICAg
ICggMSA8PCA0ICkKI2RlZmluZSBQX0lOVjY0KG4pICAgICAgKCAoIG4gPj4gNCApICYgMSApCiNk
ZWZpbmUgUF9yZXh3ICAgICAgICAgICggMSA8PCA1ICkKI2RlZmluZSBQX1JFWFcobikgICAgICAg
KCAoIG4gPj4gNSApICYgMSApCiNkZWZpbmUgUF9jMiAgICAgICAgICAgICggMSA8PCA2ICkKI2Rl
ZmluZSBQX0MyKG4pICAgICAgICAgKCAoIG4gPj4gNiApICYgMSApCiNkZWZpbmUgUF9kZWY2NCAg
ICAgICAgICggMSA8PCA3ICkKI2RlZmluZSBQX0RFRjY0KG4pICAgICAgKCAoIG4gPj4gNyApICYg
MSApCiNkZWZpbmUgUF9yZXhyICAgICAgICAgICggMSA8PCA4ICkKI2RlZmluZSBQX1JFWFIobikg
ICAgICAgKCAoIG4gPj4gOCApICYgMSApCiNkZWZpbmUgUF9vc28gICAgICAgICAgICggMSA8PCA5
ICkKI2RlZmluZSBQX09TTyhuKSAgICAgICAgKCAoIG4gPj4gOSApICYgMSApCiNkZWZpbmUgUF9h
c28gICAgICAgICAgICggMSA8PCAxMCApCiNkZWZpbmUgUF9BU08obikgICAgICAgICggKCBuID4+
IDEwICkgJiAxICkKI2RlZmluZSBQX3JleHggICAgICAgICAgKCAxIDw8IDExICkKI2RlZmluZSBQ
X1JFWFgobikgICAgICAgKCAoIG4gPj4gMTEgKSAmIDEgKQojZGVmaW5lIFBfSW1wQWRkciAgICAg
ICAoIDEgPDwgMTIgKQojZGVmaW5lIFBfSU1QQUREUihuKSAgICAoICggbiA+PiAxMiApICYgMSAp
CgovKiByZXggcHJlZml4IGJpdHMgKi8KI2RlZmluZSBSRVhfVyhyKSAgICAgICAgKCAoIDB4RiAm
ICggciApICkgID4+IDMgKQojZGVmaW5lIFJFWF9SKHIpICAgICAgICAoICggMHg3ICYgKCByICkg
KSAgPj4gMiApCiNkZWZpbmUgUkVYX1gocikgICAgICAgICggKCAweDMgJiAoIHIgKSApICA+PiAx
ICkKI2RlZmluZSBSRVhfQihyKSAgICAgICAgKCAoIDB4MSAmICggciApICkgID4+IDAgKQojZGVm
aW5lIFJFWF9QRlhfTUFTSyhuKSAoICggUF9SRVhXKG4pIDw8IDMgKSB8IFwKICAgICAgICAgICAg
ICAgICAgICAgICAgICAoIFBfUkVYUihuKSA8PCAyICkgfCBcCiAgICAgICAgICAgICAgICAgICAg
ICAgICAgKCBQX1JFWFgobikgPDwgMSApIHwgXAogICAgICAgICAgICAgICAgICAgICAgICAgICgg
UF9SRVhCKG4pIDw8IDAgKSApCgovKiBzY2FibGUtaW5kZXgtYmFzZSBiaXRzICovCiNkZWZpbmUg
U0lCX1MoYikgICAgICAgICggKCBiICkgPj4gNiApCiNkZWZpbmUgU0lCX0koYikgICAgICAgICgg
KCAoIGIgKSA+PiAzICkgJiA3ICkKI2RlZmluZSBTSUJfQihiKSAgICAgICAgKCAoIGIgKSAmIDcg
KQoKLyogbW9kcm0gYml0cyAqLwojZGVmaW5lIE1PRFJNX1JFRyhiKSAgICAoICggKCBiICkgPj4g
MyApICYgNyApCiNkZWZpbmUgTU9EUk1fTk5OKGIpICAgICggKCAoIGIgKSA+PiAzICkgJiA3ICkK
I2RlZmluZSBNT0RSTV9NT0QoYikgICAgKCAoICggYiApID4+IDYgKSAmIDMgKQojZGVmaW5lIE1P
RFJNX1JNKGIpICAgICAoICggYiApICYgNyApCgovKiBvcGVyYW5kIHR5cGUgY29uc3RhbnRzIC0t
IG9yZGVyIGlzIGltcG9ydGFudCEgKi8KCmVudW0gdWRfb3BlcmFuZF9jb2RlIHsKICAgIE9QX05P
TkUsCgogICAgT1BfQSwgICAgICBPUF9FLCAgICAgIE9QX00sICAgICAgIE9QX0csICAgICAgIAog
ICAgT1BfSSwKCiAgICBPUF9BTCwgICAgIE9QX0NMLCAgICAgT1BfREwsICAgICAgT1BfQkwsCiAg
ICBPUF9BSCwgICAgIE9QX0NILCAgICAgT1BfREgsICAgICAgT1BfQkgsCgogICAgT1BfQUxyOGIs
ICBPUF9DTHI5YiwgIE9QX0RMcjEwYiwgIE9QX0JMcjExYiwKICAgIE9QX0FIcjEyYiwgT1BfQ0hy
MTNiLCBPUF9ESHIxNGIsICBPUF9CSHIxNWIsCgogICAgT1BfQVgsICAgICBPUF9DWCwgICAgIE9Q
X0RYLCAgICAgIE9QX0JYLAogICAgT1BfU0ksICAgICBPUF9ESSwgICAgIE9QX1NQLCAgICAgIE9Q
X0JQLAoKICAgIE9QX3JBWCwgICAgT1BfckNYLCAgICBPUF9yRFgsICAgICBPUF9yQlgsICAKICAg
IE9QX3JTUCwgICAgT1BfckJQLCAgICBPUF9yU0ksICAgICBPUF9yREksCgogICAgT1BfckFYcjgs
ICBPUF9yQ1hyOSwgIE9QX3JEWHIxMCwgIE9QX3JCWHIxMSwgIAogICAgT1BfclNQcjEyLCBPUF9y
QlByMTMsIE9QX3JTSXIxNCwgIE9QX3JESXIxNSwKCiAgICBPUF9lQVgsICAgIE9QX2VDWCwgICAg
T1BfZURYLCAgICAgT1BfZUJYLAogICAgT1BfZVNQLCAgICBPUF9lQlAsICAgIE9QX2VTSSwgICAg
IE9QX2VESSwKCiAgICBPUF9FUywgICAgIE9QX0NTLCAgICAgT1BfU1MsICAgICAgT1BfRFMsICAK
ICAgIE9QX0ZTLCAgICAgT1BfR1MsCgogICAgT1BfU1QwLCAgICBPUF9TVDEsICAgIE9QX1NUMiwg
ICAgIE9QX1NUMywKICAgIE9QX1NUNCwgICAgT1BfU1Q1LCAgICBPUF9TVDYsICAgICBPUF9TVDcs
CgogICAgT1BfSiwgICAgICBPUF9TLCAgICAgIE9QX08sICAgICAgICAgIAogICAgT1BfSTEsICAg
ICBPUF9JMywgCgogICAgT1BfViwgICAgICBPUF9XLCAgICAgIE9QX1EsICAgICAgIE9QX1AsIAoK
ICAgIE9QX1IsICAgICAgT1BfQywgIE9QX0QsICAgICAgIE9QX1ZSLCAgT1BfUFIKfTsKCgovKiBv
cGVyYW5kIHNpemUgY29uc3RhbnRzICovCgplbnVtIHVkX29wZXJhbmRfc2l6ZSB7CiAgICBTWl9O
QSAgPSAwLAogICAgU1pfWiAgID0gMSwKICAgIFNaX1YgICA9IDIsCiAgICBTWl9QICAgPSAzLAog
ICAgU1pfV1AgID0gNCwKICAgIFNaX0RQICA9IDUsCiAgICBTWl9NRFEgPSA2LAogICAgU1pfUkRR
ID0gNywKCiAgICAvKiB0aGUgZm9sbG93aW5nIHZhbHVlcyBhcmUgdXNlZCBhcyBpcywKICAgICAq
IGFuZCB0aHVzIGhhcmQtY29kZWQuIGNoYW5naW5nIHRoZW0gCiAgICAgKiB3aWxsIGJyZWFrIGlu
dGVybmFscyAKICAgICAqLwogICAgU1pfQiAgID0gOCwKICAgIFNaX1cgICA9IDE2LAogICAgU1pf
RCAgID0gMzIsCiAgICBTWl9RICAgPSA2NCwKICAgIFNaX1QgICA9IDgwLAp9OwoKLyogaXRhYiBl
bnRyeSBvcGVyYW5kIGRlZmluaXRpb25zICovCgojZGVmaW5lIE9fclNQcjEyICB7IE9QX3JTUHIx
MiwgICBTWl9OQSAgICB9CiNkZWZpbmUgT19CTCAgICAgIHsgT1BfQkwsICAgICAgIFNaX05BICAg
IH0KI2RlZmluZSBPX0JIICAgICAgeyBPUF9CSCwgICAgICAgU1pfTkEgICAgfQojZGVmaW5lIE9f
QlAgICAgICB7IE9QX0JQLCAgICAgICBTWl9OQSAgICB9CiNkZWZpbmUgT19BSHIxMmIgIHsgT1Bf
QUhyMTJiLCAgIFNaX05BICAgIH0KI2RlZmluZSBPX0JYICAgICAgeyBPUF9CWCwgICAgICAgU1pf
TkEgICAgfQojZGVmaW5lIE9fSnogICAgICB7IE9QX0osICAgICAgICBTWl9aICAgICB9CiNkZWZp
bmUgT19KdiAgICAgIHsgT1BfSiwgICAgICAgIFNaX1YgICAgIH0KI2RlZmluZSBPX0piICAgICAg
eyBPUF9KLCAgICAgICAgU1pfQiAgICAgfQojZGVmaW5lIE9fclNJcjE0ICB7IE9QX3JTSXIxNCwg
ICBTWl9OQSAgICB9CiNkZWZpbmUgT19HUyAgICAgIHsgT1BfR1MsICAgICAgIFNaX05BICAgIH0K
I2RlZmluZSBPX0QgICAgICAgeyBPUF9ELCAgICAgICAgU1pfTkEgICAgfQojZGVmaW5lIE9fckJQ
cjEzICB7IE9QX3JCUHIxMywgICBTWl9OQSAgICB9CiNkZWZpbmUgT19PYiAgICAgIHsgT1BfTywg
ICAgICAgIFNaX0IgICAgIH0KI2RlZmluZSBPX1AgICAgICAgeyBPUF9QLCAgICAgICAgU1pfTkEg
ICAgfQojZGVmaW5lIE9fT3cgICAgICB7IE9QX08sICAgICAgICBTWl9XICAgICB9CiNkZWZpbmUg
T19PdiAgICAgIHsgT1BfTywgICAgICAgIFNaX1YgICAgIH0KI2RlZmluZSBPX0d3ICAgICAgeyBP
UF9HLCAgICAgICAgU1pfVyAgICAgfQojZGVmaW5lIE9fR3YgICAgICB7IE9QX0csICAgICAgICBT
Wl9WICAgICB9CiNkZWZpbmUgT19yRFggICAgIHsgT1BfckRYLCAgICAgIFNaX05BICAgIH0KI2Rl
ZmluZSBPX0d4ICAgICAgeyBPUF9HLCAgICAgICAgU1pfTURRICAgfQojZGVmaW5lIE9fR2QgICAg
ICB7IE9QX0csICAgICAgICBTWl9EICAgICB9CiNkZWZpbmUgT19HYiAgICAgIHsgT1BfRywgICAg
ICAgIFNaX0IgICAgIH0KI2RlZmluZSBPX3JCWHIxMSAgeyBPUF9yQlhyMTEsICAgU1pfTkEgICAg
fQojZGVmaW5lIE9fckRJICAgICB7IE9QX3JESSwgICAgICBTWl9OQSAgICB9CiNkZWZpbmUgT19y
U0kgICAgIHsgT1BfclNJLCAgICAgIFNaX05BICAgIH0KI2RlZmluZSBPX0FMcjhiICAgeyBPUF9B
THI4YiwgICAgU1pfTkEgICAgfQojZGVmaW5lIE9fZURJICAgICB7IE9QX2VESSwgICAgICBTWl9O
QSAgICB9CiNkZWZpbmUgT19HeiAgICAgIHsgT1BfRywgICAgICAgIFNaX1ogICAgIH0KI2RlZmlu
ZSBPX2VEWCAgICAgeyBPUF9lRFgsICAgICAgU1pfTkEgICAgfQojZGVmaW5lIE9fREhyMTRiICB7
IE9QX0RIcjE0YiwgICBTWl9OQSAgICB9CiNkZWZpbmUgT19yU1AgICAgIHsgT1BfclNQLCAgICAg
IFNaX05BICAgIH0KI2RlZmluZSBPX1BSICAgICAgeyBPUF9QUiwgICAgICAgU1pfTkEgICAgfQoj
ZGVmaW5lIE9fTk9ORSAgICB7IE9QX05PTkUsICAgICBTWl9OQSAgICB9CiNkZWZpbmUgT19yQ1gg
ICAgIHsgT1BfckNYLCAgICAgIFNaX05BICAgIH0KI2RlZmluZSBPX2pXUCAgICAgeyBPUF9KLCAg
ICAgICAgU1pfV1AgICAgfQojZGVmaW5lIE9fckRYcjEwICB7IE9QX3JEWHIxMCwgICBTWl9OQSAg
ICB9CiNkZWZpbmUgT19NZCAgICAgIHsgT1BfTSwgICAgICAgIFNaX0QgICAgIH0KI2RlZmluZSBP
X0MgICAgICAgeyBPUF9DLCAgICAgICAgU1pfTkEgICAgfQojZGVmaW5lIE9fRyAgICAgICB7IE9Q
X0csICAgICAgICBTWl9OQSAgICB9CiNkZWZpbmUgT19NYiAgICAgIHsgT1BfTSwgICAgICAgIFNa
X0IgICAgIH0KI2RlZmluZSBPX010ICAgICAgeyBPUF9NLCAgICAgICAgU1pfVCAgICAgfQojZGVm
aW5lIE9fUyAgICAgICB7IE9QX1MsICAgICAgICBTWl9OQSAgICB9CiNkZWZpbmUgT19NcSAgICAg
IHsgT1BfTSwgICAgICAgIFNaX1EgICAgIH0KI2RlZmluZSBPX1cgICAgICAgeyBPUF9XLCAgICAg
ICAgU1pfTkEgICAgfQojZGVmaW5lIE9fRVMgICAgICB7IE9QX0VTLCAgICAgICBTWl9OQSAgICB9
CiNkZWZpbmUgT19yQlggICAgIHsgT1BfckJYLCAgICAgIFNaX05BICAgIH0KI2RlZmluZSBPX0Vk
ICAgICAgeyBPUF9FLCAgICAgICAgU1pfRCAgICAgfQojZGVmaW5lIE9fRExyMTBiICB7IE9QX0RM
cjEwYiwgICBTWl9OQSAgICB9CiNkZWZpbmUgT19NdyAgICAgIHsgT1BfTSwgICAgICAgIFNaX1cg
ICAgIH0KI2RlZmluZSBPX0ViICAgICAgeyBPUF9FLCAgICAgICAgU1pfQiAgICAgfQojZGVmaW5l
IE9fRXggICAgICB7IE9QX0UsICAgICAgICBTWl9NRFEgICB9CiNkZWZpbmUgT19FeiAgICAgIHsg
T1BfRSwgICAgICAgIFNaX1ogICAgIH0KI2RlZmluZSBPX0V3ICAgICAgeyBPUF9FLCAgICAgICAg
U1pfVyAgICAgfQojZGVmaW5lIE9fRXYgICAgICB7IE9QX0UsICAgICAgICBTWl9WICAgICB9CiNk
ZWZpbmUgT19FcCAgICAgIHsgT1BfRSwgICAgICAgIFNaX1AgICAgIH0KI2RlZmluZSBPX0ZTICAg
ICAgeyBPUF9GUywgICAgICAgU1pfTkEgICAgfQojZGVmaW5lIE9fTXMgICAgICB7IE9QX00sICAg
ICAgICBTWl9XICAgICB9CiNkZWZpbmUgT19yQVhyOCAgIHsgT1BfckFYcjgsICAgIFNaX05BICAg
IH0KI2RlZmluZSBPX2VCUCAgICAgeyBPUF9lQlAsICAgICAgU1pfTkEgICAgfQojZGVmaW5lIE9f
SXNiICAgICB7IE9QX0ksICAgICAgICBTWl9TQiAgICB9CiNkZWZpbmUgT19lQlggICAgIHsgT1Bf
ZUJYLCAgICAgIFNaX05BICAgIH0KI2RlZmluZSBPX3JDWHI5ICAgeyBPUF9yQ1hyOSwgICAgU1pf
TkEgICAgfQojZGVmaW5lIE9fakRQICAgICB7IE9QX0osICAgICAgICBTWl9EUCAgICB9CiNkZWZp
bmUgT19DSCAgICAgIHsgT1BfQ0gsICAgICAgIFNaX05BICAgIH0KI2RlZmluZSBPX0NMICAgICAg
eyBPUF9DTCwgICAgICAgU1pfTkEgICAgfQojZGVmaW5lIE9fUiAgICAgICB7IE9QX1IsICAgICAg
ICBTWl9SRFEgICB9CiNkZWZpbmUgT19WICAgICAgIHsgT1BfViwgICAgICAgIFNaX05BICAgIH0K
I2RlZmluZSBPX0NTICAgICAgeyBPUF9DUywgICAgICAgU1pfTkEgICAgfQojZGVmaW5lIE9fQ0hy
MTNiICB7IE9QX0NIcjEzYiwgICBTWl9OQSAgICB9CiNkZWZpbmUgT19lQ1ggICAgIHsgT1BfZUNY
LCAgICAgIFNaX05BICAgIH0KI2RlZmluZSBPX2VTUCAgICAgeyBPUF9lU1AsICAgICAgU1pfTkEg
ICAgfQojZGVmaW5lIE9fU1MgICAgICB7IE9QX1NTLCAgICAgICBTWl9OQSAgICB9CiNkZWZpbmUg
T19TUCAgICAgIHsgT1BfU1AsICAgICAgIFNaX05BICAgIH0KI2RlZmluZSBPX0JMcjExYiAgeyBP
UF9CTHIxMWIsICAgU1pfTkEgICAgfQojZGVmaW5lIE9fU0kgICAgICB7IE9QX1NJLCAgICAgICBT
Wl9OQSAgICB9CiNkZWZpbmUgT19lU0kgICAgIHsgT1BfZVNJLCAgICAgIFNaX05BICAgIH0KI2Rl
ZmluZSBPX0RMICAgICAgeyBPUF9ETCwgICAgICAgU1pfTkEgICAgfQojZGVmaW5lIE9fREggICAg
ICB7IE9QX0RILCAgICAgICBTWl9OQSAgICB9CiNkZWZpbmUgT19ESSAgICAgIHsgT1BfREksICAg
ICAgIFNaX05BICAgIH0KI2RlZmluZSBPX0RYICAgICAgeyBPUF9EWCwgICAgICAgU1pfTkEgICAg
fQojZGVmaW5lIE9fckJQICAgICB7IE9QX3JCUCwgICAgICBTWl9OQSAgICB9CiNkZWZpbmUgT19H
dncgICAgIHsgT1BfRywgICAgICAgIFNaX01EUSAgIH0KI2RlZmluZSBPX0kxICAgICAgeyBPUF9J
MSwgICAgICAgU1pfTkEgICAgfQojZGVmaW5lIE9fSTMgICAgICB7IE9QX0kzLCAgICAgICBTWl9O
QSAgICB9CiNkZWZpbmUgT19EUyAgICAgIHsgT1BfRFMsICAgICAgIFNaX05BICAgIH0KI2RlZmlu
ZSBPX1NUNCAgICAgeyBPUF9TVDQsICAgICAgU1pfTkEgICAgfQojZGVmaW5lIE9fU1Q1ICAgICB7
IE9QX1NUNSwgICAgICBTWl9OQSAgICB9CiNkZWZpbmUgT19TVDYgICAgIHsgT1BfU1Q2LCAgICAg
IFNaX05BICAgIH0KI2RlZmluZSBPX1NUNyAgICAgeyBPUF9TVDcsICAgICAgU1pfTkEgICAgfQoj
ZGVmaW5lIE9fU1QwICAgICB7IE9QX1NUMCwgICAgICBTWl9OQSAgICB9CiNkZWZpbmUgT19TVDEg
ICAgIHsgT1BfU1QxLCAgICAgIFNaX05BICAgIH0KI2RlZmluZSBPX1NUMiAgICAgeyBPUF9TVDIs
ICAgICAgU1pfTkEgICAgfQojZGVmaW5lIE9fU1QzICAgICB7IE9QX1NUMywgICAgICBTWl9OQSAg
ICB9CiNkZWZpbmUgT19FICAgICAgIHsgT1BfRSwgICAgICAgIFNaX05BICAgIH0KI2RlZmluZSBP
X0FIICAgICAgeyBPUF9BSCwgICAgICAgU1pfTkEgICAgfQojZGVmaW5lIE9fTSAgICAgICB7IE9Q
X00sICAgICAgICBTWl9OQSAgICB9CiNkZWZpbmUgT19BTCAgICAgIHsgT1BfQUwsICAgICAgIFNa
X05BICAgIH0KI2RlZmluZSBPX0NMcjliICAgeyBPUF9DTHI5YiwgICAgU1pfTkEgICAgfQojZGVm
aW5lIE9fUSAgICAgICB7IE9QX1EsICAgICAgICBTWl9OQSAgICB9CiNkZWZpbmUgT19lQVggICAg
IHsgT1BfZUFYLCAgICAgIFNaX05BICAgIH0KI2RlZmluZSBPX1ZSICAgICAgeyBPUF9WUiwgICAg
ICAgU1pfTkEgICAgfQojZGVmaW5lIE9fQVggICAgICB7IE9QX0FYLCAgICAgICBTWl9OQSAgICB9
CiNkZWZpbmUgT19yQVggICAgIHsgT1BfckFYLCAgICAgIFNaX05BICAgIH0KI2RlZmluZSBPX0l6
ICAgICAgeyBPUF9JLCAgICAgICAgU1pfWiAgICAgfQojZGVmaW5lIE9fckRJcjE1ICB7IE9QX3JE
SXIxNSwgICBTWl9OQSAgICB9CiNkZWZpbmUgT19JdyAgICAgIHsgT1BfSSwgICAgICAgIFNaX1cg
ICAgIH0KI2RlZmluZSBPX0l2ICAgICAgeyBPUF9JLCAgICAgICAgU1pfViAgICAgfQojZGVmaW5l
IE9fQXAgICAgICB7IE9QX0EsICAgICAgICBTWl9QICAgICB9CiNkZWZpbmUgT19DWCAgICAgIHsg
T1BfQ1gsICAgICAgIFNaX05BICAgIH0KI2RlZmluZSBPX0liICAgICAgeyBPUF9JLCAgICAgICAg
U1pfQiAgICAgfQojZGVmaW5lIE9fQkhyMTViICB7IE9QX0JIcjE1YiwgICBTWl9OQSAgICB9CgoK
LyogQSBzaW5nbGUgb3BlcmFuZCBvZiBhbiBlbnRyeSBpbiB0aGUgaW5zdHJ1Y3Rpb24gdGFibGUu
IAogKiAoaW50ZXJuYWwgdXNlIG9ubHkpCiAqLwpzdHJ1Y3QgdWRfaXRhYl9lbnRyeV9vcGVyYW5k
IAp7CiAgZW51bSB1ZF9vcGVyYW5kX2NvZGUgdHlwZTsKICBlbnVtIHVkX29wZXJhbmRfc2l6ZSBz
aXplOwp9OwoKCi8qIEEgc2luZ2xlIGVudHJ5IGluIGFuIGluc3RydWN0aW9uIHRhYmxlLiAKICoo
aW50ZXJuYWwgdXNlIG9ubHkpCiAqLwpzdHJ1Y3QgdWRfaXRhYl9lbnRyeSAKewogIGVudW0gdWRf
bW5lbW9uaWNfY29kZSAgICAgICAgIG1uZW1vbmljOwogIHN0cnVjdCB1ZF9pdGFiX2VudHJ5X29w
ZXJhbmQgIG9wZXJhbmQxOwogIHN0cnVjdCB1ZF9pdGFiX2VudHJ5X29wZXJhbmQgIG9wZXJhbmQy
OwogIHN0cnVjdCB1ZF9pdGFiX2VudHJ5X29wZXJhbmQgIG9wZXJhbmQzOwogIHVpbnQzMl90ICAg
ICAgICAgICAgICAgICAgICAgIHByZWZpeDsKfTsKCiNlbmRpZiAvKiBVRF9ERUNPREVfSCAqLwoK
LyogdmltOmNpbmRlbnQKICogdmltOmV4cGFuZHRhYgogKiB2aW06dHM9NAogKiB2aW06c3c9NAog
Ki8KAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AABrZGIveDg2L3VkaXM4Ni0xLjcvZXh0ZXJuLmgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAMDAwMDY2NAAwMDAy
NzU2ADAwMDI3NTYAMDAwMDAwMDMxMzQAMTE3NjU0NjU1NTYAMDE1MzI1ACAwAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHVzdGFyICAAbXJhdGhvcgAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAABtcmF0aG9yAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AC8qIC0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tCiAqIGV4dGVybi5oCiAqCiAqIENvcHlyaWdodCAoYykg
MjAwNCwgMjAwNSwgMjAwNiwgVml2ZWsgTW9oYW4gPHZpdmVrQHNpZzkuY29tPgogKiBBbGwgcmln
aHRzIHJlc2VydmVkLiBTZWUgTElDRU5TRQogKiAtLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLQogKi8KI2lm
bmRlZiBVRF9FWFRFUk5fSAojZGVmaW5lIFVEX0VYVEVSTl9ICgojaWZkZWYgX19jcGx1c3BsdXMK
ZXh0ZXJuICJDIiB7CiNlbmRpZgoKLyogI2luY2x1ZGUgPHN0ZGlvLmg+ICovCiNpbmNsdWRlICJ0
eXBlcy5oIgoKLyogPT09PT09PT09PT09PT09PT09PT09PT09PT09PT0gUFVCTElDIEFQSSA9PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT0gKi8KCmV4dGVybiB2b2lkIHVkX2luaXQoc3Ry
dWN0IHVkKik7CgpleHRlcm4gdm9pZCB1ZF9zZXRfbW9kZShzdHJ1Y3QgdWQqLCB1aW50OF90KTsK
CmV4dGVybiB2b2lkIHVkX3NldF9wYyhzdHJ1Y3QgdWQqLCB1aW50NjRfdCk7CgpleHRlcm4gdm9p
ZCB1ZF9zZXRfaW5wdXRfaG9vayhzdHJ1Y3QgdWQqLCBpbnQgKCopKHN0cnVjdCB1ZCopKTsKCmV4
dGVybiB2b2lkIHVkX3NldF9pbnB1dF9idWZmZXIoc3RydWN0IHVkKiwgdWludDhfdCosIHNpemVf
dCk7CgojaWZuZGVmIF9fVURfU1RBTkRBTE9ORV9fCmV4dGVybiB2b2lkIHVkX3NldF9pbnB1dF9m
aWxlKHN0cnVjdCB1ZCosIEZJTEUqKTsKI2VuZGlmIC8qIF9fVURfU1RBTkRBTE9ORV9fICovCgpl
eHRlcm4gdm9pZCB1ZF9zZXRfdmVuZG9yKHN0cnVjdCB1ZCosIHVuc2lnbmVkKTsKCmV4dGVybiB2
b2lkIHVkX3NldF9zeW50YXgoc3RydWN0IHVkKiwgdm9pZCAoKikoc3RydWN0IHVkKikpOwoKZXh0
ZXJuIHZvaWQgdWRfaW5wdXRfc2tpcChzdHJ1Y3QgdWQqLCBzaXplX3QpOwoKZXh0ZXJuIGludCB1
ZF9pbnB1dF9lbmQoc3RydWN0IHVkKik7CgpleHRlcm4gdW5zaWduZWQgaW50IHVkX2RlY29kZShz
dHJ1Y3QgdWQqKTsKCmV4dGVybiB1bnNpZ25lZCBpbnQgdWRfZGlzYXNzZW1ibGUoc3RydWN0IHVk
Kik7CgpleHRlcm4gdm9pZCB1ZF90cmFuc2xhdGVfaW50ZWwoc3RydWN0IHVkKik7CgpleHRlcm4g
dm9pZCB1ZF90cmFuc2xhdGVfYXR0KHN0cnVjdCB1ZCopOwoKZXh0ZXJuIGNoYXIqIHVkX2luc25f
YXNtKHN0cnVjdCB1ZCogdSk7CgpleHRlcm4gdWludDhfdCogdWRfaW5zbl9wdHIoc3RydWN0IHVk
KiB1KTsKCmV4dGVybiB1aW50NjRfdCB1ZF9pbnNuX29mZihzdHJ1Y3QgdWQqKTsKCmV4dGVybiBj
aGFyKiB1ZF9pbnNuX2hleChzdHJ1Y3QgdWQqKTsKCmV4dGVybiB1bnNpZ25lZCBpbnQgdWRfaW5z
bl9sZW4oc3RydWN0IHVkKiB1KTsKCmV4dGVybiBjb25zdCBjaGFyKiB1ZF9sb29rdXBfbW5lbW9u
aWMoZW51bSB1ZF9tbmVtb25pY19jb2RlIGMpOwoKLyogPT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT0gKi8KCiNp
ZmRlZiBfX2NwbHVzcGx1cwp9CiNlbmRpZgojZW5kaWYKAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAa2Ri
L3g4Ni91ZGlzODYtMS43L3N5bi5jAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADAwMDA2NjQAMDAwMjc1NgAw
MDAyNzU2ADAwMDAwMDAzMjMxADExNzY1NDY1NTU2ADAxNDYyMgAgMAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAB1c3RhciAgAG1yYXRob3IAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAbXJhdGhvcgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAvKiAt
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLQogKiBzeW4uYwogKgogKiBDb3B5cmlnaHQgKGMpIDIwMDIsIDIw
MDMsIDIwMDQgVml2ZWsgTW9oYW4gPHZpdmVrQHNpZzkuY29tPgogKiBBbGwgcmlnaHRzIHJlc2Vy
dmVkLiBTZWUgKExJQ0VOU0UpCiAqIC0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tCiAqLwoKLyogLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0KICogSW50ZWwgUmVnaXN0ZXIgVGFibGUgLSBPcmRlciBNYXR0ZXJzICh0
eXBlcy5oKSEKICogLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0KICovCmNvbnN0IGNoYXIqIHVkX3JlZ190
YWJbXSA9IAp7CiAgImFsIiwJCSJjbCIsCQkiZGwiLAkJImJsIiwKICAiYWgiLAkJImNoIiwJCSJk
aCIsCQkiYmgiLAogICJzcGwiLAkiYnBsIiwJCSJzaWwiLAkJImRpbCIsCiAgInI4YiIsCSJyOWIi
LAkJInIxMGIiLAkJInIxMWIiLAogICJyMTJiIiwJInIxM2IiLAkJInIxNGIiLAkJInIxNWIiLAoK
ICAiYXgiLAkJImN4IiwJCSJkeCIsCQkiYngiLAogICJzcCIsCQkiYnAiLAkJInNpIiwJCSJkaSIs
CiAgInI4dyIsCSJyOXciLAkJInIxMHciLAkJInIxMXciLAogICJyMTJ3IiwJInIxM1ciCSwJInIx
NHciLAkJInIxNXciLAoJCiAgImVheCIsCSJlY3giLAkJImVkeCIsCQkiZWJ4IiwKICAiZXNwIiwJ
ImVicCIsCQkiZXNpIiwJCSJlZGkiLAogICJyOGQiLAkicjlkIiwJCSJyMTBkIiwJCSJyMTFkIiwK
ICAicjEyZCIsCSJyMTNkIiwJCSJyMTRkIiwJCSJyMTVkIiwKCQogICJyYXgiLAkicmN4IiwJCSJy
ZHgiLAkJInJieCIsCiAgInJzcCIsCSJyYnAiLAkJInJzaSIsCQkicmRpIiwKICAicjgiLAkJInI5
IiwJCSJyMTAiLAkJInIxMSIsCiAgInIxMiIsCSJyMTMiLAkJInIxNCIsCQkicjE1IiwKCiAgImVz
IiwJCSJjcyIsCQkic3MiLAkJImRzIiwKICAiZnMiLAkJImdzIiwJCgogICJjcjAiLAkiY3IxIiwJ
CSJjcjIiLAkJImNyMyIsCiAgImNyNCIsCSJjcjUiLAkJImNyNiIsCQkiY3I3IiwKICAiY3I4IiwJ
ImNyOSIsCQkiY3IxMCIsCQkiY3IxMSIsCiAgImNyMTIiLAkiY3IxMyIsCQkiY3IxNCIsCQkiY3Ix
NSIsCgkKICAiZHIwIiwJImRyMSIsCQkiZHIyIiwJCSJkcjMiLAogICJkcjQiLAkiZHI1IiwJCSJk
cjYiLAkJImRyNyIsCiAgImRyOCIsCSJkcjkiLAkJImRyMTAiLAkJImRyMTEiLAogICJkcjEyIiwJ
ImRyMTMiLAkJImRyMTQiLAkJImRyMTUiLAoKICAibW0wIiwJIm1tMSIsCQkibW0yIiwJCSJtbTMi
LAogICJtbTQiLAkibW01IiwJCSJtbTYiLAkJIm1tNyIsCgogICJzdDAiLAkic3QxIiwJCSJzdDIi
LAkJInN0MyIsCiAgInN0NCIsCSJzdDUiLAkJInN0NiIsCQkic3Q3IiwgCgogICJ4bW0wIiwJInht
bTEiLAkJInhtbTIiLAkJInhtbTMiLAogICJ4bW00IiwJInhtbTUiLAkJInhtbTYiLAkJInhtbTci
LAogICJ4bW04IiwJInhtbTkiLAkJInhtbTEwIiwJInhtbTExIiwKICAieG1tMTIiLAkieG1tMTMi
LAkieG1tMTQiLAkieG1tMTUiLAoKICAicmlwIgp9OwoAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAGtkYi94ODYv
dWRpczg2LTEuNy9zeW4uaAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAwMDAwNjY0ADAwMDI3NTYAMDAwMjc1
NgAwMDAwMDAwMTEzMwAxMTc2NTQ2NTU1NgAwMTQ2MjYAIDAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAdXN0YXIgIABtcmF0aG9yAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAG1yYXRob3IAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAALyogLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0KICogc3luLmgKICoKICogQ29weXJpZ2h0IChjKSAyMDA2LCBWaXZlayBN
b2hhbiA8dml2ZWtAc2lnOS5jb20+CiAqIEFsbCByaWdodHMgcmVzZXJ2ZWQuIFNlZSBMSUNFTlNF
CiAqIC0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tCiAqLwojaWZuZGVmIFVEX1NZTl9ICiNkZWZpbmUgVURf
U1lOX0gKCiNpZiAwCiNpbmNsdWRlIDxzdGRpby5oPgojaW5jbHVkZSA8c3RkYXJnLmg+CiNlbmRp
ZgojaW5jbHVkZSAidHlwZXMuaCIKCmV4dGVybiBjb25zdCBjaGFyKiB1ZF9yZWdfdGFiW107Cgpz
dGF0aWMgdm9pZCBta2FzbShzdHJ1Y3QgdWQqIHUsIGNvbnN0IGNoYXIqIGZtdCwgLi4uKQp7CiAg
dmFfbGlzdCBhcDsKICB2YV9zdGFydChhcCwgZm10KTsKICB1LT5pbnNuX2ZpbGwgKz0gdnNucHJp
bnRmKChjaGFyKikgdS0+aW5zbl9idWZmZXIgKyB1LT5pbnNuX2ZpbGwsIDY0LCBmbXQsIGFwKTsK
ICB2YV9lbmQoYXApOwp9CgojZW5kaWYKAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAGtkYi94ODYvdWRp
czg2LTEuNy91ZGlzODYuYwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAwMDAwNjY0ADAwMDI3NTYAMDAwMjc1NgAw
MDAwMDAxMDA1NgAxMTc2NTQ2NTU1NgAwMTUxMzYAIDAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAdXN0YXIgIABtcmF0aG9yAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AG1yYXRob3IAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAALyogLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0KICogdWRpczg2LmMKICoKICogQ29weXJpZ2h0IChjKSAyMDA0LCAyMDA1LCAy
MDA2LCBWaXZlayBNb2hhbiA8dml2ZWtAc2lnOS5jb20+CiAqIEFsbCByaWdodHMgcmVzZXJ2ZWQu
IFNlZSBMSUNFTlNFCiAqIC0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tCiAqLwoKI2lmIDAKI2luY2x1ZGUg
PHN0ZGxpYi5oPgojaW5jbHVkZSA8c3RkaW8uaD4KI2luY2x1ZGUgPHN0cmluZy5oPgojZW5kaWYK
CiNpbmNsdWRlICJpbnB1dC5oIgojaW5jbHVkZSAiZXh0ZXJuLmgiCgovKiA9PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PQogKiB1ZF9pbml0KCkgLSBJbml0aWFsaXplcyB1ZF90IG9iamVjdC4KICogPT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT0KICovCmV4dGVybiB2b2lkIAp1ZF9pbml0KHN0cnVjdCB1ZCogdSkKewog
IG1lbXNldCgodm9pZCopdSwgMCwgc2l6ZW9mKHN0cnVjdCB1ZCkpOwogIHVkX3NldF9tb2RlKHUs
IDE2KTsKICB1LT5tbmVtb25pYyA9IFVEX0lpbnZhbGlkOwogIHVkX3NldF9wYyh1LCAwKTsKI2lm
bmRlZiBfX1VEX1NUQU5EQUxPTkVfXwogIHVkX3NldF9pbnB1dF9maWxlKHUsIHN0ZGluKTsKI2Vu
ZGlmIC8qIF9fVURfU1RBTkRBTE9ORV9fICovCn0KCi8qID09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09CiAq
IHVkX2Rpc2Fzc2VtYmxlKCkgLSBkaXNhc3NlbWJsZXMgb25lIGluc3RydWN0aW9uIGFuZCByZXR1
cm5zIHRoZSBudW1iZXIgb2YgCiAqIGJ5dGVzIGRpc2Fzc2VtYmxlZC4gQSB6ZXJvIG1lYW5zIGVu
ZCBvZiBkaXNhc3NlbWJseS4KICogPT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT0KICovCmV4dGVybiB1bnNp
Z25lZCBpbnQKdWRfZGlzYXNzZW1ibGUoc3RydWN0IHVkKiB1KQp7CiAgaWYgKHVkX2lucHV0X2Vu
ZCh1KSkKCXJldHVybiAwOwoKIAogIHUtPmluc25fYnVmZmVyWzBdID0gdS0+aW5zbl9oZXhjb2Rl
WzBdID0gMDsKCiAKICBpZiAodWRfZGVjb2RlKHUpID09IDApCglyZXR1cm4gMDsKICBpZiAodS0+
dHJhbnNsYXRvcikKCXUtPnRyYW5zbGF0b3IodSk7CiAgcmV0dXJuIHVkX2luc25fbGVuKHUpOwp9
CgovKiA9PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PQogKiB1ZF9zZXRfbW9kZSgpIC0gU2V0IERpc2Fzc2Vt
bHkgTW9kZS4KICogPT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT0KICovCmV4dGVybiB2b2lkIAp1ZF9zZXRf
bW9kZShzdHJ1Y3QgdWQqIHUsIHVpbnQ4X3QgbSkKewogIHN3aXRjaChtKSB7CgljYXNlIDE2OgoJ
Y2FzZSAzMjoKCWNhc2UgNjQ6IHUtPmRpc19tb2RlID0gbSA7IHJldHVybjsKCWRlZmF1bHQ6IHUt
PmRpc19tb2RlID0gMTY7IHJldHVybjsKICB9Cn0KCi8qID09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09CiAq
IHVkX3NldF92ZW5kb3IoKSAtIFNldCB2ZW5kb3IuCiAqID09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09CiAq
LwpleHRlcm4gdm9pZCAKdWRfc2V0X3ZlbmRvcihzdHJ1Y3QgdWQqIHUsIHVuc2lnbmVkIHYpCnsK
ICBzd2l0Y2godikgewoJY2FzZSBVRF9WRU5ET1JfSU5URUw6CgkJdS0+dmVuZG9yID0gdjsKCQli
cmVhazsKCWRlZmF1bHQ6CgkJdS0+dmVuZG9yID0gVURfVkVORE9SX0FNRDsKICB9Cn0KCi8qID09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09CiAqIHVkX3NldF9wYygpIC0gU2V0cyBjb2RlIG9yaWdpbi4gCiAq
ID09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09CiAqLwpleHRlcm4gdm9pZCAKdWRfc2V0X3BjKHN0cnVjdCB1
ZCogdSwgdWludDY0X3QgbykKewogIHUtPnBjID0gbzsKfQoKLyogPT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT0KICogdWRfc2V0X3N5bnRheCgpIC0gU2V0cyB0aGUgb3V0cHV0IHN5bnRheC4KICogPT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT0KICovCmV4dGVybiB2b2lkIAp1ZF9zZXRfc3ludGF4KHN0cnVjdCB1ZCog
dSwgdm9pZCAoKnQpKHN0cnVjdCB1ZCopKQp7CiAgdS0+dHJhbnNsYXRvciA9IHQ7Cn0KCi8qID09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09CiAqIHVkX2luc24oKSAtIHJldHVybnMgdGhlIGRpc2Fzc2VtYmxl
ZCBpbnN0cnVjdGlvbgogKiA9PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PQogKi8KZXh0ZXJuIGNoYXIqIAp1
ZF9pbnNuX2FzbShzdHJ1Y3QgdWQqIHUpIAp7CiAgcmV0dXJuIHUtPmluc25fYnVmZmVyOwp9Cgov
KiA9PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PQogKiB1ZF9pbnNuX29mZnNldCgpIC0gUmV0dXJucyB0aGUg
b2Zmc2V0LgogKiA9PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PQogKi8KZXh0ZXJuIHVpbnQ2NF90CnVkX2lu
c25fb2ZmKHN0cnVjdCB1ZCogdSkgCnsKICByZXR1cm4gdS0+aW5zbl9vZmZzZXQ7Cn0KCgovKiA9
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PQogKiB1ZF9pbnNuX2hleCgpIC0gUmV0dXJucyBoZXggZm9ybSBv
ZiBkaXNhc3NlbWJsZWQgaW5zdHJ1Y3Rpb24uCiAqID09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09CiAqLwpl
eHRlcm4gY2hhciogCnVkX2luc25faGV4KHN0cnVjdCB1ZCogdSkgCnsKICByZXR1cm4gdS0+aW5z
bl9oZXhjb2RlOwp9CgovKiA9PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PQogKiB1ZF9pbnNuX3B0cigpIC0g
UmV0dXJucyBjb2RlIGRpc2Fzc2VtYmxlZC4KICogPT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT0KICovCmV4
dGVybiB1aW50OF90KiAKdWRfaW5zbl9wdHIoc3RydWN0IHVkKiB1KSAKewogIHJldHVybiB1LT5p
bnBfc2VzczsKfQoKLyogPT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT0KICogdWRfaW5zbl9sZW4oKSAtIFJl
dHVybnMgdGhlIGNvdW50IG9mIGJ5dGVzIGRpc2Fzc2VtYmxlZC4KICogPT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT0KICovCmV4dGVybiB1bnNpZ25lZCBpbnQgCnVkX2luc25fbGVuKHN0cnVjdCB1ZCogdSkg
CnsKICByZXR1cm4gdS0+aW5wX2N0cjsKfQoAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAa2RiL3g4Ni91ZGlzODYtMS43L01h
a2VmaWxlAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAADAwMDA2NjQAMDAwMjc1NgAwMDAyNzU2ADAwMDAwMDAwMjA3
ADExNzY1NDY1NTU2ADAxNTMwNQAgMAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAB1c3RhciAgAG1yYXRob3IAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAbXJhdGhvcgAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAKQ0ZMQUdTCQkrPSAtRF9fVURfU1RB
TkRBTE9ORV9fCm9iai15CQk6PSBkZWNvZGUubyBpbnB1dC5vIGl0YWIubyBrZGJfZGlzLm8gc3lu
LWF0dC5vIHN5bi5vIFwKICAgICAgICAgICAgICAgICAgIHN5bi1pbnRlbC5vIHVkaXM4Ni5vCgoA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAGtkYi94ODYvdWRpczg2LTEuNy9kZWNv
ZGUuYwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAwMDAwNjY0ADAwMDI3NTYAMDAwMjc1NgAwMDAwMDEwNDM0MgAx
MTc2NTQ2NTU1NgAwMTUyNDEAIDAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAdXN0YXIgIABtcmF0aG9yAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAG1yYXRob3IAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAALyogLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0K
ICogZGVjb2RlLmMKICoKICogQ29weXJpZ2h0IChjKSAyMDA1LCAyMDA2LCBWaXZlayBNb2hhbiA8
dml2ZWtAc2lnOS5jb20+CiAqIEFsbCByaWdodHMgcmVzZXJ2ZWQuIFNlZSBMSUNFTlNFCiAqIC0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tCiAqLwoKI2lmIDAKI2luY2x1ZGUgPGFzc2VydC5oPgojaW5jbHVk
ZSA8c3RyaW5nLmg+CiNlbmRpZgoKI2luY2x1ZGUgInR5cGVzLmgiCiNpbmNsdWRlICJpdGFiLmgi
CiNpbmNsdWRlICJpbnB1dC5oIgojaW5jbHVkZSAiZGVjb2RlLmgiCgovKiBUaGUgbWF4IG51bWJl
ciBvZiBwcmVmaXhlcyB0byBhbiBpbnN0cnVjdGlvbiAqLwojZGVmaW5lIE1BWF9QUkVGSVhFUyAg
ICAxNQoKc3RhdGljIHN0cnVjdCB1ZF9pdGFiX2VudHJ5IGllX2ludmFsaWQgPSB7IFVEX0lpbnZh
bGlkLCBPX05PTkUsIE9fTk9ORSwgT19OT05FLCBQX25vbmUgfTsKc3RhdGljIHN0cnVjdCB1ZF9p
dGFiX2VudHJ5IGllX3BhdXNlICAgPSB7IFVEX0lwYXVzZSwgICBPX05PTkUsIE9fTk9ORSwgT19O
T05FLCBQX25vbmUgfTsKc3RhdGljIHN0cnVjdCB1ZF9pdGFiX2VudHJ5IGllX25vcCAgICAgPSB7
IFVEX0lub3AsICAgICBPX05PTkUsIE9fTk9ORSwgT19OT05FLCBQX25vbmUgfTsKCgovKiBMb29r
cyB1cCBtbmVtb25pYyBjb2RlIGluIHRoZSBtbmVtb25pYyBzdHJpbmcgdGFibGUKICogUmV0dXJu
cyBOVUxMIGlmIHRoZSBtbmVtb25pYyBjb2RlIGlzIGludmFsaWQKICovCmNvbnN0IGNoYXIgKiB1
ZF9sb29rdXBfbW5lbW9uaWMoIGVudW0gdWRfbW5lbW9uaWNfY29kZSBjICkKewogICAgaWYgKCBj
IDwgVURfSWQzdmlsICkKICAgICAgICByZXR1cm4gdWRfbW5lbW9uaWNzX3N0clsgYyBdOwogICAg
cmV0dXJuIE5VTEw7Cn0KCgovKiBFeHRyYWN0cyBpbnN0cnVjdGlvbiBwcmVmaXhlcy4KICovCnN0
YXRpYyBpbnQgZ2V0X3ByZWZpeGVzKCBzdHJ1Y3QgdWQqIHUgKQp7CiAgICB1bnNpZ25lZCBpbnQg
aGF2ZV9wZnggPSAxOwogICAgdW5zaWduZWQgaW50IGk7CiAgICB1aW50OF90IGN1cnI7CgogICAg
LyogaWYgaW4gZXJyb3Igc3RhdGUsIGJhaWwgb3V0ICovCiAgICBpZiAoIHUtPmVycm9yICkgCiAg
ICAgICAgcmV0dXJuIC0xOyAKCiAgICAvKiBrZWVwIGdvaW5nIGFzIGxvbmcgYXMgdGhlcmUgYXJl
IHByZWZpeGVzIGF2YWlsYWJsZSAqLwogICAgZm9yICggaSA9IDA7IGhhdmVfcGZ4IDsgKytpICkg
ewoKICAgICAgICAvKiBHZXQgbmV4dCBieXRlLiAqLwogICAgICAgIGlucF9uZXh0KHUpOyAKICAg
ICAgICBpZiAoIHUtPmVycm9yICkgCiAgICAgICAgICAgIHJldHVybiAtMTsKICAgICAgICBjdXJy
ID0gaW5wX2N1cnIoIHUgKTsKCiAgICAgICAgLyogcmV4IHByZWZpeGVzIGluIDY0Yml0IG1vZGUg
Ki8KICAgICAgICBpZiAoIHUtPmRpc19tb2RlID09IDY0ICYmICggY3VyciAmIDB4RjAgKSA9PSAw
eDQwICkgewogICAgICAgICAgICB1LT5wZnhfcmV4ID0gY3VycjsgIAogICAgICAgIH0gZWxzZSB7
CiAgICAgICAgICAgIHN3aXRjaCAoIGN1cnIgKSAgCiAgICAgICAgICAgIHsKICAgICAgICAgICAg
Y2FzZSAweDJFIDogCiAgICAgICAgICAgICAgICB1LT5wZnhfc2VnID0gVURfUl9DUzsgCiAgICAg
ICAgICAgICAgICB1LT5wZnhfcmV4ID0gMDsKICAgICAgICAgICAgICAgIGJyZWFrOwogICAgICAg
ICAgICBjYXNlIDB4MzYgOiAgICAgCiAgICAgICAgICAgICAgICB1LT5wZnhfc2VnID0gVURfUl9T
UzsgCiAgICAgICAgICAgICAgICB1LT5wZnhfcmV4ID0gMDsKICAgICAgICAgICAgICAgIGJyZWFr
OwogICAgICAgICAgICBjYXNlIDB4M0UgOiAKICAgICAgICAgICAgICAgIHUtPnBmeF9zZWcgPSBV
RF9SX0RTOyAKICAgICAgICAgICAgICAgIHUtPnBmeF9yZXggPSAwOwogICAgICAgICAgICAgICAg
YnJlYWs7CiAgICAgICAgICAgIGNhc2UgMHgyNiA6IAogICAgICAgICAgICAgICAgdS0+cGZ4X3Nl
ZyA9IFVEX1JfRVM7IAogICAgICAgICAgICAgICAgdS0+cGZ4X3JleCA9IDA7CiAgICAgICAgICAg
ICAgICBicmVhazsKICAgICAgICAgICAgY2FzZSAweDY0IDogCiAgICAgICAgICAgICAgICB1LT5w
Znhfc2VnID0gVURfUl9GUzsgCiAgICAgICAgICAgICAgICB1LT5wZnhfcmV4ID0gMDsKICAgICAg
ICAgICAgICAgIGJyZWFrOwogICAgICAgICAgICBjYXNlIDB4NjUgOiAKICAgICAgICAgICAgICAg
IHUtPnBmeF9zZWcgPSBVRF9SX0dTOyAKICAgICAgICAgICAgICAgIHUtPnBmeF9yZXggPSAwOwog
ICAgICAgICAgICAgICAgYnJlYWs7CiAgICAgICAgICAgIGNhc2UgMHg2NyA6IC8qIGFkcmVzcy1z
aXplIG92ZXJyaWRlIHByZWZpeCAqLyAKICAgICAgICAgICAgICAgIHUtPnBmeF9hZHIgPSAweDY3
OwogICAgICAgICAgICAgICAgdS0+cGZ4X3JleCA9IDA7CiAgICAgICAgICAgICAgICBicmVhazsK
ICAgICAgICAgICAgY2FzZSAweEYwIDogCiAgICAgICAgICAgICAgICB1LT5wZnhfbG9jayA9IDB4
RjA7CiAgICAgICAgICAgICAgICB1LT5wZnhfcmV4ICA9IDA7CiAgICAgICAgICAgICAgICBicmVh
azsKICAgICAgICAgICAgY2FzZSAweDY2OiAKICAgICAgICAgICAgICAgIC8qIHRoZSAweDY2IHNz
ZSBwcmVmaXggaXMgb25seSBlZmZlY3RpdmUgaWYgbm8gb3RoZXIgc3NlIHByZWZpeAogICAgICAg
ICAgICAgICAgICogaGFzIGFscmVhZHkgYmVlbiBzcGVjaWZpZWQuCiAgICAgICAgICAgICAgICAg
Ki8KICAgICAgICAgICAgICAgIGlmICggIXUtPnBmeF9pbnNuICkgdS0+cGZ4X2luc24gPSAweDY2
OwogICAgICAgICAgICAgICAgdS0+cGZ4X29wciA9IDB4NjY7ICAgICAgICAgICAKICAgICAgICAg
ICAgICAgIHUtPnBmeF9yZXggPSAwOwogICAgICAgICAgICAgICAgYnJlYWs7CiAgICAgICAgICAg
IGNhc2UgMHhGMjoKICAgICAgICAgICAgICAgIHUtPnBmeF9pbnNuICA9IDB4RjI7CiAgICAgICAg
ICAgICAgICB1LT5wZnhfcmVwbmUgPSAweEYyOyAKICAgICAgICAgICAgICAgIHUtPnBmeF9yZXgg
ICA9IDA7CiAgICAgICAgICAgICAgICBicmVhazsKICAgICAgICAgICAgY2FzZSAweEYzOgogICAg
ICAgICAgICAgICAgdS0+cGZ4X2luc24gPSAweEYzOwogICAgICAgICAgICAgICAgdS0+cGZ4X3Jl
cCAgPSAweEYzOyAKICAgICAgICAgICAgICAgIHUtPnBmeF9yZXBlID0gMHhGMzsgCiAgICAgICAg
ICAgICAgICB1LT5wZnhfcmV4ICA9IDA7CiAgICAgICAgICAgICAgICBicmVhazsKICAgICAgICAg
ICAgZGVmYXVsdCA6IAogICAgICAgICAgICAgICAgLyogTm8gbW9yZSBwcmVmaXhlcyAqLwogICAg
ICAgICAgICAgICAgaGF2ZV9wZnggPSAwOwogICAgICAgICAgICAgICAgYnJlYWs7CiAgICAgICAg
ICAgIH0KICAgICAgICB9CgogICAgICAgIC8qIGNoZWNrIGlmIHdlIHJlYWNoZWQgbWF4IGluc3Ry
dWN0aW9uIGxlbmd0aCAqLwogICAgICAgIGlmICggaSArIDEgPT0gTUFYX0lOU05fTEVOR1RIICkg
ewogICAgICAgICAgICB1LT5lcnJvciA9IDE7CiAgICAgICAgICAgIGJyZWFrOwogICAgICAgIH0K
ICAgIH0KCiAgICAvKiByZXR1cm4gc3RhdHVzICovCiAgICBpZiAoIHUtPmVycm9yICkgCiAgICAg
ICAgcmV0dXJuIC0xOyAKCiAgICAvKiByZXdpbmQgYmFjayBvbmUgYnl0ZSBpbiBzdHJlYW0sIHNp
bmNlIHRoZSBhYm92ZSBsb29wIAogICAgICogc3RvcHMgd2l0aCBhIG5vbi1wcmVmaXggYnl0ZS4g
CiAgICAgKi8KICAgIGlucF9iYWNrKHUpOwoKICAgIC8qIHNwZWN1bGF0aXZlbHkgZGV0ZXJtaW5l
IHRoZSBlZmZlY3RpdmUgb3BlcmFuZCBtb2RlLAogICAgICogYmFzZWQgb24gdGhlIHByZWZpeGVz
IGFuZCB0aGUgY3VycmVudCBkaXNhc3NlbWJseQogICAgICogbW9kZS4gVGhpcyBtYXkgYmUgaW5h
Y2N1cmF0ZSwgYnV0IHVzZWZ1bCBmb3IgbW9kZQogICAgICogZGVwZW5kZW50IGRlY29kaW5nLgog
ICAgICovCiAgICBpZiAoIHUtPmRpc19tb2RlID09IDY0ICkgewogICAgICAgIHUtPm9wcl9tb2Rl
ID0gUkVYX1coIHUtPnBmeF9yZXggKSA/IDY0IDogKCAoIHUtPnBmeF9vcHIgKSA/IDE2IDogMzIg
KSA7CiAgICAgICAgdS0+YWRyX21vZGUgPSAoIHUtPnBmeF9hZHIgKSA/IDMyIDogNjQ7CiAgICB9
IGVsc2UgaWYgKCB1LT5kaXNfbW9kZSA9PSAzMiApIHsKICAgICAgICB1LT5vcHJfbW9kZSA9ICgg
dS0+cGZ4X29wciApID8gMTYgOiAzMjsKICAgICAgICB1LT5hZHJfbW9kZSA9ICggdS0+cGZ4X2Fk
ciApID8gMTYgOiAzMjsKICAgIH0gZWxzZSBpZiAoIHUtPmRpc19tb2RlID09IDE2ICkgewogICAg
ICAgIHUtPm9wcl9tb2RlID0gKCB1LT5wZnhfb3ByICkgPyAzMiA6IDE2OwogICAgICAgIHUtPmFk
cl9tb2RlID0gKCB1LT5wZnhfYWRyICkgPyAzMiA6IDE2OwogICAgfQoKICAgIHJldHVybiAwOwp9
CgoKLyogU2VhcmNoZXMgdGhlIGluc3RydWN0aW9uIHRhYmxlcyBmb3IgdGhlIHJpZ2h0IGVudHJ5
LgogKi8Kc3RhdGljIGludCBzZWFyY2hfaXRhYiggc3RydWN0IHVkICogdSApCnsKICAgIHN0cnVj
dCB1ZF9pdGFiX2VudHJ5ICogZSA9IE5VTEw7CiAgICBlbnVtIHVkX2l0YWJfaW5kZXggdGFibGU7
CiAgICB1aW50OF90IHBlZWs7CiAgICB1aW50OF90IGRpZF9wZWVrID0gMDsKICAgIHVpbnQ4X3Qg
Y3VycjsgCiAgICB1aW50OF90IGluZGV4OwoKICAgIC8qIGlmIGluIHN0YXRlIG9mIGVycm9yLCBy
ZXR1cm4gKi8KICAgIGlmICggdS0+ZXJyb3IgKSAKICAgICAgICByZXR1cm4gLTE7CgogICAgLyog
Z2V0IGZpcnN0IGJ5dGUgb2Ygb3Bjb2RlLiAqLwogICAgaW5wX25leHQodSk7IAogICAgaWYgKCB1
LT5lcnJvciApIAogICAgICAgIHJldHVybiAtMTsKICAgIGN1cnIgPSBpbnBfY3Vycih1KTsgCgog
ICAgLyogcmVzb2x2ZSB4Y2hnLCBub3AsIHBhdXNlIGNyYXp5bmVzcyAqLwogICAgaWYgKCAweDkw
ID09IGN1cnIgKSB7CiAgICAgICAgaWYgKCAhKCB1LT5kaXNfbW9kZSA9PSA2NCAmJiBSRVhfQigg
dS0+cGZ4X3JleCApICkgKSB7CiAgICAgICAgICAgIGlmICggdS0+cGZ4X3JlcCApIHsKICAgICAg
ICAgICAgICAgIHUtPnBmeF9yZXAgPSAwOwogICAgICAgICAgICAgICAgZSA9ICYgaWVfcGF1c2U7
CiAgICAgICAgICAgIH0gZWxzZSB7CiAgICAgICAgICAgICAgICBlID0gJiBpZV9ub3A7CiAgICAg
ICAgICAgIH0KICAgICAgICAgICAgZ290byBmb3VuZF9lbnRyeTsKICAgICAgICB9CiAgICB9Cgog
ICAgLyogZ2V0IHRvcC1sZXZlbCB0YWJsZSAqLwogICAgaWYgKCAweDBGID09IGN1cnIgKSB7CiAg
ICAgICAgdGFibGUgPSBJVEFCX18wRjsKICAgICAgICBjdXJyICA9IGlucF9uZXh0KHUpOwogICAg
ICAgIGlmICggdS0+ZXJyb3IgKQogICAgICAgICAgICByZXR1cm4gLTE7CgogICAgICAgIC8qIDJi
eXRlIG9wY29kZXMgY2FuIGJlIG1vZGlmaWVkIGJ5IDB4NjYsIEYzLCBhbmQgRjIgcHJlZml4ZXMg
Ki8KICAgICAgICBpZiAoIDB4NjYgPT0gdS0+cGZ4X2luc24gKSB7CiAgICAgICAgICAgIGlmICgg
dWRfaXRhYl9saXN0WyBJVEFCX19QRlhfU1NFNjZfXzBGIF1bIGN1cnIgXS5tbmVtb25pYyAhPSBV
RF9JaW52YWxpZCApIHsKICAgICAgICAgICAgICAgIHRhYmxlID0gSVRBQl9fUEZYX1NTRTY2X18w
RjsKICAgICAgICAgICAgICAgIHUtPnBmeF9vcHIgPSAwOwogICAgICAgICAgICB9CiAgICAgICAg
fSBlbHNlIGlmICggMHhGMiA9PSB1LT5wZnhfaW5zbiApIHsKICAgICAgICAgICAgaWYgKCB1ZF9p
dGFiX2xpc3RbIElUQUJfX1BGWF9TU0VGMl9fMEYgXVsgY3VyciBdLm1uZW1vbmljICE9IFVEX0lp
bnZhbGlkICkgewogICAgICAgICAgICAgICAgdGFibGUgPSBJVEFCX19QRlhfU1NFRjJfXzBGOyAK
ICAgICAgICAgICAgICAgIHUtPnBmeF9yZXBuZSA9IDA7CiAgICAgICAgICAgIH0KICAgICAgICB9
IGVsc2UgaWYgKCAweEYzID09IHUtPnBmeF9pbnNuICkgewogICAgICAgICAgICBpZiAoIHVkX2l0
YWJfbGlzdFsgSVRBQl9fUEZYX1NTRUYzX18wRiBdWyBjdXJyIF0ubW5lbW9uaWMgIT0gVURfSWlu
dmFsaWQgKSB7CiAgICAgICAgICAgICAgICB0YWJsZSA9IElUQUJfX1BGWF9TU0VGM19fMEY7CiAg
ICAgICAgICAgICAgICB1LT5wZnhfcmVwZSA9IDA7CiAgICAgICAgICAgICAgICB1LT5wZnhfcmVw
ICA9IDA7CiAgICAgICAgICAgIH0KICAgICAgICB9CiAgICAvKiBwaWNrIGFuIGluc3RydWN0aW9u
IGZyb20gdGhlIDFieXRlIHRhYmxlICovCiAgICB9IGVsc2UgewogICAgICAgIHRhYmxlID0gSVRB
Ql9fMUJZVEU7IAogICAgfQoKICAgIGluZGV4ID0gY3VycjsKCnNlYXJjaDoKCiAgICBlID0gJiB1
ZF9pdGFiX2xpc3RbIHRhYmxlIF1bIGluZGV4IF07CgogICAgLyogaWYgbW5lbW9uaWMgY29uc3Rh
bnQgaXMgYSBzdGFuZGFyZCBpbnN0cnVjdGlvbiBjb25zdGFudAogICAgICogb3VyIHNlYXJjaCBp
cyBvdmVyLgogICAgICovCiAgICAKICAgIGlmICggZS0+bW5lbW9uaWMgPCBVRF9JZDN2aWwgKSB7
CiAgICAgICAgaWYgKCBlLT5tbmVtb25pYyA9PSBVRF9JaW52YWxpZCApIHsKICAgICAgICAgICAg
aWYgKCBkaWRfcGVlayApIHsKICAgICAgICAgICAgICAgIGlucF9uZXh0KCB1ICk7IGlmICggdS0+
ZXJyb3IgKSByZXR1cm4gLTE7CiAgICAgICAgICAgIH0KICAgICAgICAgICAgZ290byBmb3VuZF9l
bnRyeTsKICAgICAgICB9CiAgICAgICAgZ290byBmb3VuZF9lbnRyeTsKICAgIH0KCiAgICB0YWJs
ZSA9IGUtPnByZWZpeDsKCiAgICBzd2l0Y2ggKCBlLT5tbmVtb25pYyApCiAgICB7CiAgICBjYXNl
IFVEX0lncnBfcmVnOgogICAgICAgIHBlZWsgICAgID0gaW5wX3BlZWsoIHUgKTsKICAgICAgICBk
aWRfcGVlayA9IDE7CiAgICAgICAgaW5kZXggICAgPSBNT0RSTV9SRUcoIHBlZWsgKTsKICAgICAg
ICBicmVhazsKCiAgICBjYXNlIFVEX0lncnBfbW9kOgogICAgICAgIHBlZWsgICAgID0gaW5wX3Bl
ZWsoIHUgKTsKICAgICAgICBkaWRfcGVlayA9IDE7CiAgICAgICAgaW5kZXggICAgPSBNT0RSTV9N
T0QoIHBlZWsgKTsKICAgICAgICBpZiAoIGluZGV4ID09IDMgKQogICAgICAgICAgIGluZGV4ID0g
SVRBQl9fTU9EX0lORFhfXzExOwogICAgICAgIGVsc2UgCiAgICAgICAgICAgaW5kZXggPSBJVEFC
X19NT0RfSU5EWF9fTk9UXzExOyAKICAgICAgICBicmVhazsKCiAgICBjYXNlIFVEX0lncnBfcm06
CiAgICAgICAgY3VyciAgICAgPSBpbnBfbmV4dCggdSApOwogICAgICAgIGRpZF9wZWVrID0gMDsK
ICAgICAgICBpZiAoIHUtPmVycm9yICkKICAgICAgICAgICAgcmV0dXJuIC0xOwogICAgICAgIGlu
ZGV4ICAgID0gTU9EUk1fUk0oIGN1cnIgKTsKICAgICAgICBicmVhazsKCiAgICBjYXNlIFVEX0ln
cnBfeDg3OgogICAgICAgIGN1cnIgICAgID0gaW5wX25leHQoIHUgKTsKICAgICAgICBkaWRfcGVl
ayA9IDA7CiAgICAgICAgaWYgKCB1LT5lcnJvciApCiAgICAgICAgICAgIHJldHVybiAtMTsKICAg
ICAgICBpbmRleCAgICA9IGN1cnIgLSAweEMwOwogICAgICAgIGJyZWFrOwoKICAgIGNhc2UgVURf
SWdycF9vc2l6ZToKICAgICAgICBpZiAoIHUtPm9wcl9tb2RlID09IDY0ICkgCiAgICAgICAgICAg
IGluZGV4ID0gSVRBQl9fTU9ERV9JTkRYX182NDsKICAgICAgICBlbHNlIGlmICggdS0+b3ByX21v
ZGUgPT0gMzIgKSAKICAgICAgICAgICAgaW5kZXggPSBJVEFCX19NT0RFX0lORFhfXzMyOwogICAg
ICAgIGVsc2UKICAgICAgICAgICAgaW5kZXggPSBJVEFCX19NT0RFX0lORFhfXzE2OwogICAgICAg
IGJyZWFrOwogCiAgICBjYXNlIFVEX0lncnBfYXNpemU6CiAgICAgICAgaWYgKCB1LT5hZHJfbW9k
ZSA9PSA2NCApIAogICAgICAgICAgICBpbmRleCA9IElUQUJfX01PREVfSU5EWF9fNjQ7CiAgICAg
ICAgZWxzZSBpZiAoIHUtPmFkcl9tb2RlID09IDMyICkgCiAgICAgICAgICAgIGluZGV4ID0gSVRB
Ql9fTU9ERV9JTkRYX18zMjsKICAgICAgICBlbHNlCiAgICAgICAgICAgIGluZGV4ID0gSVRBQl9f
TU9ERV9JTkRYX18xNjsKICAgICAgICBicmVhazsgICAgICAgICAgICAgICAKCiAgICBjYXNlIFVE
X0lncnBfbW9kZToKICAgICAgICBpZiAoIHUtPmRpc19tb2RlID09IDY0ICkgCiAgICAgICAgICAg
IGluZGV4ID0gSVRBQl9fTU9ERV9JTkRYX182NDsKICAgICAgICBlbHNlIGlmICggdS0+ZGlzX21v
ZGUgPT0gMzIgKSAKICAgICAgICAgICAgaW5kZXggPSBJVEFCX19NT0RFX0lORFhfXzMyOwogICAg
ICAgIGVsc2UKICAgICAgICAgICAgaW5kZXggPSBJVEFCX19NT0RFX0lORFhfXzE2OwogICAgICAg
IGJyZWFrOwoKICAgIGNhc2UgVURfSWdycF92ZW5kb3I6CiAgICAgICAgaWYgKCB1LT52ZW5kb3Ig
PT0gVURfVkVORE9SX0lOVEVMICkgCiAgICAgICAgICAgIGluZGV4ID0gSVRBQl9fVkVORE9SX0lO
RFhfX0lOVEVMOyAKICAgICAgICBlbHNlIGlmICggdS0+dmVuZG9yID09IFVEX1ZFTkRPUl9BTUQg
KQogICAgICAgICAgICBpbmRleCA9IElUQUJfX1ZFTkRPUl9JTkRYX19BTUQ7CiAgICAgICAgZWxz
ZSB7CiAgICAgICAgICAgIGtkYnAoIktEQjpzZWFyY2hfaXRhYigpOiB1bnJlY29nbml6ZWQgdmVu
ZG9yIGlkXG4iKTsKICAgICAgICAgICAgcmV0dXJuIC0xOwogICAgICAgIH0KICAgICAgICBicmVh
azsKCiAgICBjYXNlIFVEX0lkM3ZpbDoKICAgICAgICBrZGJwKCJLREI6c2VhcmNoX2l0YWIoKTog
aW52YWxpZCBpbnN0ciBtbmVtb25pYyBjb25zdGFudCBJZDN2aWxcbiIpOwogICAgICAgIHJldHVy
biAtMTsKCiAgICBkZWZhdWx0OgogICAgICAgIGtkYnAoIktEQjpzZWFyY2hfaXRhYigpOiBpbnZh
bGlkIGluc3RydWN0aW9uIG1uZW1vbmljIGNvbnN0YW50XG4iKTsKICAgICAgICByZXR1cm4gLTE7
CiAgICB9CgogICAgZ290byBzZWFyY2g7Cgpmb3VuZF9lbnRyeToKCiAgICB1LT5pdGFiX2VudHJ5
ID0gZTsKICAgIHUtPm1uZW1vbmljID0gdS0+aXRhYl9lbnRyeS0+bW5lbW9uaWM7CgogICAgcmV0
dXJuIDA7Cn0KCgpzdGF0aWMgdW5zaWduZWQgaW50IHJlc29sdmVfb3BlcmFuZF9zaXplKCBjb25z
dCBzdHJ1Y3QgdWQgKiB1LCB1bnNpZ25lZCBpbnQgcyApCnsKICAgIHN3aXRjaCAoIHMgKSAKICAg
IHsKICAgIGNhc2UgU1pfVjoKICAgICAgICByZXR1cm4gKCB1LT5vcHJfbW9kZSApOwogICAgY2Fz
ZSBTWl9aOiAgCiAgICAgICAgcmV0dXJuICggdS0+b3ByX21vZGUgPT0gMTYgKSA/IDE2IDogMzI7
CiAgICBjYXNlIFNaX1A6ICAKICAgICAgICByZXR1cm4gKCB1LT5vcHJfbW9kZSA9PSAxNiApID8g
U1pfV1AgOiBTWl9EUDsKICAgIGNhc2UgU1pfTURROgogICAgICAgIHJldHVybiAoIHUtPm9wcl9t
b2RlID09IDE2ICkgPyAzMiA6IHUtPm9wcl9tb2RlOwogICAgY2FzZSBTWl9SRFE6CiAgICAgICAg
cmV0dXJuICggdS0+ZGlzX21vZGUgPT0gNjQgKSA/IDY0IDogMzI7CiAgICBkZWZhdWx0OgogICAg
ICAgIHJldHVybiBzOwogICAgfQp9CgoKc3RhdGljIGludCByZXNvbHZlX21uZW1vbmljKCBzdHJ1
Y3QgdWQqIHUgKQp7CiAgLyogZmFyL25lYXIgZmxhZ3MgKi8KICB1LT5icl9mYXIgPSAwOwogIHUt
PmJyX25lYXIgPSAwOwogIC8qIHJlYWRqdXN0IG9wZXJhbmQgc2l6ZXMgZm9yIGNhbGwvam1wIGlu
c3RyY3V0aW9ucyAqLwogIGlmICggdS0+bW5lbW9uaWMgPT0gVURfSWNhbGwgfHwgdS0+bW5lbW9u
aWMgPT0gVURfSWptcCApIHsKICAgIC8qIFdQOiAxNmJpdCBwb2ludGVyICovCiAgICBpZiAoIHUt
Pm9wZXJhbmRbIDAgXS5zaXplID09IFNaX1dQICkgewogICAgICAgIHUtPm9wZXJhbmRbIDAgXS5z
aXplID0gMTY7CiAgICAgICAgdS0+YnJfZmFyID0gMTsKICAgICAgICB1LT5icl9uZWFyPSAwOwog
ICAgLyogRFA6IDMyYml0IHBvaW50ZXIgKi8KICAgIH0gZWxzZSBpZiAoIHUtPm9wZXJhbmRbIDAg
XS5zaXplID09IFNaX0RQICkgewogICAgICAgIHUtPm9wZXJhbmRbIDAgXS5zaXplID0gMzI7CiAg
ICAgICAgdS0+YnJfZmFyID0gMTsKICAgICAgICB1LT5icl9uZWFyPSAwOwogICAgfSBlbHNlIHsK
ICAgICAgICB1LT5icl9mYXIgPSAwOwogICAgICAgIHUtPmJyX25lYXI9IDE7CiAgICB9CiAgLyog
cmVzb2x2ZSAzZG5vdyB3ZWlyZG5lc3MuICovCiAgfSBlbHNlIGlmICggdS0+bW5lbW9uaWMgPT0g
VURfSTNkbm93ICkgewogICAgdS0+bW5lbW9uaWMgPSB1ZF9pdGFiX2xpc3RbIElUQUJfXzNETk9X
IF1bIGlucF9jdXJyKCB1ICkgIF0ubW5lbW9uaWM7CiAgfQogIC8qIFNXQVBHUyBpcyBvbmx5IHZh
bGlkIGluIDY0Yml0cyBtb2RlICovCiAgaWYgKCB1LT5tbmVtb25pYyA9PSBVRF9Jc3dhcGdzICYm
IHUtPmRpc19tb2RlICE9IDY0ICkgewogICAgdS0+ZXJyb3IgPSAxOwogICAgcmV0dXJuIC0xOwog
IH0KCiAgcmV0dXJuIDA7Cn0KCgovKiAtLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLQogKiBkZWNvZGVfYSgp
LSBEZWNvZGVzIG9wZXJhbmRzIG9mIHRoZSB0eXBlIHNlZzpvZmZzZXQKICogLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0KICovCnN0YXRpYyB2b2lkIApkZWNvZGVfYShzdHJ1Y3QgdWQqIHUsIHN0cnVjdCB1
ZF9vcGVyYW5kICpvcCkKewogIGlmICh1LT5vcHJfbW9kZSA9PSAxNikgeyAgCiAgICAvKiBzZWcx
NjpvZmYxNiAqLwogICAgb3AtPnR5cGUgPSBVRF9PUF9QVFI7CiAgICBvcC0+c2l6ZSA9IDMyOwog
ICAgb3AtPmx2YWwucHRyLm9mZiA9IGlucF91aW50MTYodSk7CiAgICBvcC0+bHZhbC5wdHIuc2Vn
ID0gaW5wX3VpbnQxNih1KTsKICB9IGVsc2UgewogICAgLyogc2VnMTY6b2ZmMzIgKi8KICAgIG9w
LT50eXBlID0gVURfT1BfUFRSOwogICAgb3AtPnNpemUgPSA0ODsKICAgIG9wLT5sdmFsLnB0ci5v
ZmYgPSBpbnBfdWludDMyKHUpOwogICAgb3AtPmx2YWwucHRyLnNlZyA9IGlucF91aW50MTYodSk7
CiAgfQp9CgovKiAtLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLQogKiBkZWNvZGVfZ3ByKCkgLSBSZXR1cm5z
IGRlY29kZWQgR2VuZXJhbCBQdXJwb3NlIFJlZ2lzdGVyIAogKiAtLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LQogKi8Kc3RhdGljIGVudW0gdWRfdHlwZSAKZGVjb2RlX2dwcihyZWdpc3RlciBzdHJ1Y3QgdWQq
IHUsIHVuc2lnbmVkIGludCBzLCB1bnNpZ25lZCBjaGFyIHJtKQp7CiAgcyA9IHJlc29sdmVfb3Bl
cmFuZF9zaXplKHUsIHMpOwogICAgICAgIAogIHN3aXRjaCAocykgewogICAgY2FzZSA2NDoKICAg
ICAgICByZXR1cm4gVURfUl9SQVggKyBybTsKICAgIGNhc2UgU1pfRFA6CiAgICBjYXNlIDMyOgog
ICAgICAgIHJldHVybiBVRF9SX0VBWCArIHJtOwogICAgY2FzZSBTWl9XUDoKICAgIGNhc2UgMTY6
CiAgICAgICAgcmV0dXJuIFVEX1JfQVggICsgcm07CiAgICBjYXNlICA4OgogICAgICAgIGlmICh1
LT5kaXNfbW9kZSA9PSA2NCAmJiB1LT5wZnhfcmV4KSB7CiAgICAgICAgICAgIGlmIChybSA+PSA0
KQogICAgICAgICAgICAgICAgcmV0dXJuIFVEX1JfU1BMICsgKHJtLTQpOwogICAgICAgICAgICBy
ZXR1cm4gVURfUl9BTCArIHJtOwogICAgICAgIH0gZWxzZSByZXR1cm4gVURfUl9BTCArIHJtOwog
ICAgZGVmYXVsdDoKICAgICAgICByZXR1cm4gMDsKICB9Cn0KCi8qIC0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tCiAqIHJlc29sdmVfZ3ByNjQoKSAtIDY0Yml0IEdlbmVyYWwgUHVycG9zZSBSZWdpc3Rlci1T
ZWxlY3Rpb24uIAogKiAtLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLQogKi8Kc3RhdGljIGVudW0gdWRfdHlw
ZSAKcmVzb2x2ZV9ncHI2NChzdHJ1Y3QgdWQqIHUsIGVudW0gdWRfb3BlcmFuZF9jb2RlIGdwcl9v
cCkKewogIGlmIChncHJfb3AgPj0gT1BfckFYcjggJiYgZ3ByX29wIDw9IE9QX3JESXIxNSkKICAg
IGdwcl9vcCA9IChncHJfb3AgLSBPUF9yQVhyOCkgfCAoUkVYX0IodS0+cGZ4X3JleCkgPDwgMyk7
ICAgICAgICAgIAogIGVsc2UgIGdwcl9vcCA9IChncHJfb3AgLSBPUF9yQVgpOwoKICBpZiAodS0+
b3ByX21vZGUgPT0gMTYpCiAgICByZXR1cm4gZ3ByX29wICsgVURfUl9BWDsKICBpZiAodS0+ZGlz
X21vZGUgPT0gMzIgfHwgCiAgICAodS0+b3ByX21vZGUgPT0gMzIgJiYgISAoUkVYX1codS0+cGZ4
X3JleCkgfHwgdS0+ZGVmYXVsdDY0KSkpIHsKICAgIHJldHVybiBncHJfb3AgKyBVRF9SX0VBWDsK
ICB9CgogIHJldHVybiBncHJfb3AgKyBVRF9SX1JBWDsKfQoKLyogLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0KICogcmVzb2x2ZV9ncHIzMiAoKSAtIDMyYml0IEdlbmVyYWwgUHVycG9zZSBSZWdpc3Rlci1T
ZWxlY3Rpb24uIAogKiAtLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLQogKi8Kc3RhdGljIGVudW0gdWRfdHlw
ZSAKcmVzb2x2ZV9ncHIzMihzdHJ1Y3QgdWQqIHUsIGVudW0gdWRfb3BlcmFuZF9jb2RlIGdwcl9v
cCkKewogIGdwcl9vcCA9IGdwcl9vcCAtIE9QX2VBWDsKCiAgaWYgKHUtPm9wcl9tb2RlID09IDE2
KSAKICAgIHJldHVybiBncHJfb3AgKyBVRF9SX0FYOwoKICByZXR1cm4gZ3ByX29wICsgIFVEX1Jf
RUFYOwp9CgovKiAtLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLQogKiByZXNvbHZlX3JlZygpIC0gUmVzb2x2
ZXMgdGhlIHJlZ2lzdGVyIHR5cGUgCiAqIC0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tCiAqLwpzdGF0aWMg
ZW51bSB1ZF90eXBlIApyZXNvbHZlX3JlZyhzdHJ1Y3QgdWQqIHUsIHVuc2lnbmVkIGludCB0eXBl
LCB1bnNpZ25lZCBjaGFyIGkpCnsKICBzd2l0Y2ggKHR5cGUpIHsKICAgIGNhc2UgVF9NTVggOiAg
ICByZXR1cm4gVURfUl9NTTAgICsgKGkgJiA3KTsKICAgIGNhc2UgVF9YTU0gOiAgICByZXR1cm4g
VURfUl9YTU0wICsgaTsKICAgIGNhc2UgVF9DUkcgOiAgICByZXR1cm4gVURfUl9DUjAgICsgaTsK
ICAgIGNhc2UgVF9EQkcgOiAgICByZXR1cm4gVURfUl9EUjAgICsgaTsKICAgIGNhc2UgVF9TRUcg
OiAgICByZXR1cm4gVURfUl9FUyAgICsgKGkgJiA3KTsKICAgIGNhc2UgVF9OT05FOgogICAgZGVm
YXVsdDogICAgcmV0dXJuIFVEX05PTkU7CiAgfQp9CgovKiAtLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLQog
KiBkZWNvZGVfaW1tKCkgLSBEZWNvZGVzIEltbWVkaWF0ZSB2YWx1ZXMuCiAqIC0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tCiAqLwpzdGF0aWMgdm9pZCAKZGVjb2RlX2ltbShzdHJ1Y3QgdWQqIHUsIHVuc2ln
bmVkIGludCBzLCBzdHJ1Y3QgdWRfb3BlcmFuZCAqb3ApCnsKICBvcC0+c2l6ZSA9IHJlc29sdmVf
b3BlcmFuZF9zaXplKHUsIHMpOwogIG9wLT50eXBlID0gVURfT1BfSU1NOwoKICBzd2l0Y2ggKG9w
LT5zaXplKSB7CiAgICBjYXNlICA4OiBvcC0+bHZhbC5zYnl0ZSA9IGlucF91aW50OCh1KTsgICBi
cmVhazsKICAgIGNhc2UgMTY6IG9wLT5sdmFsLnV3b3JkID0gaW5wX3VpbnQxNih1KTsgIGJyZWFr
OwogICAgY2FzZSAzMjogb3AtPmx2YWwudWR3b3JkID0gaW5wX3VpbnQzMih1KTsgYnJlYWs7CiAg
ICBjYXNlIDY0OiBvcC0+bHZhbC51cXdvcmQgPSBpbnBfdWludDY0KHUpOyBicmVhazsKICAgIGRl
ZmF1bHQ6IHJldHVybjsKICB9Cn0KCi8qIC0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tCiAqIGRlY29kZV9t
b2RybSgpIC0gRGVjb2RlcyBNb2RSTSBCeXRlCiAqIC0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tCiAqLwpz
dGF0aWMgdm9pZCAKZGVjb2RlX21vZHJtKHN0cnVjdCB1ZCogdSwgc3RydWN0IHVkX29wZXJhbmQg
Km9wLCB1bnNpZ25lZCBpbnQgcywgCiAgICAgICAgIHVuc2lnbmVkIGNoYXIgcm1fdHlwZSwgc3Ry
dWN0IHVkX29wZXJhbmQgKm9wcmVnLCAKICAgICAgICAgdW5zaWduZWQgY2hhciByZWdfc2l6ZSwg
dW5zaWduZWQgY2hhciByZWdfdHlwZSkKewogIHVuc2lnbmVkIGNoYXIgbW9kLCBybSwgcmVnOwoK
ICBpbnBfbmV4dCh1KTsKCiAgLyogZ2V0IG1vZCwgci9tIGFuZCByZWcgZmllbGRzICovCiAgbW9k
ID0gTU9EUk1fTU9EKGlucF9jdXJyKHUpKTsKICBybSAgPSAoUkVYX0IodS0+cGZ4X3JleCkgPDwg
MykgfCBNT0RSTV9STShpbnBfY3Vycih1KSk7CiAgcmVnID0gKFJFWF9SKHUtPnBmeF9yZXgpIDw8
IDMpIHwgTU9EUk1fUkVHKGlucF9jdXJyKHUpKTsKCiAgb3AtPnNpemUgPSByZXNvbHZlX29wZXJh
bmRfc2l6ZSh1LCBzKTsKCiAgLyogaWYgbW9kIGlzIDExYiwgdGhlbiB0aGUgVURfUl9tIHNwZWNp
ZmllcyBhIGdwci9tbXgvc3NlL2NvbnRyb2wvZGVidWcgKi8KICBpZiAobW9kID09IDMpIHsKICAg
IG9wLT50eXBlID0gVURfT1BfUkVHOwogICAgaWYgKHJtX3R5cGUgPT0gIFRfR1BSKQogICAgICAg
IG9wLT5iYXNlID0gZGVjb2RlX2dwcih1LCBvcC0+c2l6ZSwgcm0pOwogICAgZWxzZSAgICBvcC0+
YmFzZSA9IHJlc29sdmVfcmVnKHUsIHJtX3R5cGUsIChSRVhfQih1LT5wZnhfcmV4KSA8PCAzKSB8
IChybSY3KSk7CiAgfSAKICAvKiBlbHNlIGl0cyBtZW1vcnkgYWRkcmVzc2luZyAqLyAgCiAgZWxz
ZSB7CiAgICBvcC0+dHlwZSA9IFVEX09QX01FTTsKCiAgICAvKiA2NGJpdCBhZGRyZXNzaW5nICov
CiAgICBpZiAodS0+YWRyX21vZGUgPT0gNjQpIHsKCiAgICAgICAgb3AtPmJhc2UgPSBVRF9SX1JB
WCArIHJtOwoKICAgICAgICAvKiBnZXQgb2Zmc2V0IHR5cGUgKi8KICAgICAgICBpZiAobW9kID09
IDEpCiAgICAgICAgICAgIG9wLT5vZmZzZXQgPSA4OwogICAgICAgIGVsc2UgaWYgKG1vZCA9PSAy
KQogICAgICAgICAgICBvcC0+b2Zmc2V0ID0gMzI7CiAgICAgICAgZWxzZSBpZiAobW9kID09IDAg
JiYgKHJtICYgNykgPT0gNSkgeyAgICAgICAgICAgCiAgICAgICAgICAgIG9wLT5iYXNlID0gVURf
Ul9SSVA7CiAgICAgICAgICAgIG9wLT5vZmZzZXQgPSAzMjsKICAgICAgICB9IGVsc2UgIG9wLT5v
ZmZzZXQgPSAwOwoKICAgICAgICAvKiBTY2FsZS1JbmRleC1CYXNlIChTSUIpICovCiAgICAgICAg
aWYgKChybSAmIDcpID09IDQpIHsKICAgICAgICAgICAgaW5wX25leHQodSk7CiAgICAgICAgICAg
IAogICAgICAgICAgICBvcC0+c2NhbGUgPSAoMSA8PCBTSUJfUyhpbnBfY3Vycih1KSkpICYgfjE7
CiAgICAgICAgICAgIG9wLT5pbmRleCA9IFVEX1JfUkFYICsgKFNJQl9JKGlucF9jdXJyKHUpKSB8
IChSRVhfWCh1LT5wZnhfcmV4KSA8PCAzKSk7CiAgICAgICAgICAgIG9wLT5iYXNlICA9IFVEX1Jf
UkFYICsgKFNJQl9CKGlucF9jdXJyKHUpKSB8IChSRVhfQih1LT5wZnhfcmV4KSA8PCAzKSk7Cgog
ICAgICAgICAgICAvKiBzcGVjaWFsIGNvbmRpdGlvbnMgZm9yIGJhc2UgcmVmZXJlbmNlICovCiAg
ICAgICAgICAgIGlmIChvcC0+aW5kZXggPT0gVURfUl9SU1ApIHsKICAgICAgICAgICAgICAgIG9w
LT5pbmRleCA9IFVEX05PTkU7CiAgICAgICAgICAgICAgICBvcC0+c2NhbGUgPSBVRF9OT05FOwog
ICAgICAgICAgICB9CgogICAgICAgICAgICBpZiAob3AtPmJhc2UgPT0gVURfUl9SQlAgfHwgb3At
PmJhc2UgPT0gVURfUl9SMTMpIHsKICAgICAgICAgICAgICAgIGlmIChtb2QgPT0gMCkgCiAgICAg
ICAgICAgICAgICAgICAgb3AtPmJhc2UgPSBVRF9OT05FOwogICAgICAgICAgICAgICAgaWYgKG1v
ZCA9PSAxKQogICAgICAgICAgICAgICAgICAgIG9wLT5vZmZzZXQgPSA4OwogICAgICAgICAgICAg
ICAgZWxzZSBvcC0+b2Zmc2V0ID0gMzI7CiAgICAgICAgICAgIH0KICAgICAgICB9CiAgICB9IAoK
ICAgIC8qIDMyLUJpdCBhZGRyZXNzaW5nIG1vZGUgKi8KICAgIGVsc2UgaWYgKHUtPmFkcl9tb2Rl
ID09IDMyKSB7CgogICAgICAgIC8qIGdldCBiYXNlICovCiAgICAgICAgb3AtPmJhc2UgPSBVRF9S
X0VBWCArIHJtOwoKICAgICAgICAvKiBnZXQgb2Zmc2V0IHR5cGUgKi8KICAgICAgICBpZiAobW9k
ID09IDEpCiAgICAgICAgICAgIG9wLT5vZmZzZXQgPSA4OwogICAgICAgIGVsc2UgaWYgKG1vZCA9
PSAyKQogICAgICAgICAgICBvcC0+b2Zmc2V0ID0gMzI7CiAgICAgICAgZWxzZSBpZiAobW9kID09
IDAgJiYgcm0gPT0gNSkgewogICAgICAgICAgICBvcC0+YmFzZSA9IFVEX05PTkU7CiAgICAgICAg
ICAgIG9wLT5vZmZzZXQgPSAzMjsKICAgICAgICB9IGVsc2UgIG9wLT5vZmZzZXQgPSAwOwoKICAg
ICAgICAvKiBTY2FsZS1JbmRleC1CYXNlIChTSUIpICovCiAgICAgICAgaWYgKChybSAmIDcpID09
IDQpIHsKICAgICAgICAgICAgaW5wX25leHQodSk7CgogICAgICAgICAgICBvcC0+c2NhbGUgPSAo
MSA8PCBTSUJfUyhpbnBfY3Vycih1KSkpICYgfjE7CiAgICAgICAgICAgIG9wLT5pbmRleCA9IFVE
X1JfRUFYICsgKFNJQl9JKGlucF9jdXJyKHUpKSB8IChSRVhfWCh1LT5wZnhfcmV4KSA8PCAzKSk7
CiAgICAgICAgICAgIG9wLT5iYXNlICA9IFVEX1JfRUFYICsgKFNJQl9CKGlucF9jdXJyKHUpKSB8
IChSRVhfQih1LT5wZnhfcmV4KSA8PCAzKSk7CgogICAgICAgICAgICBpZiAob3AtPmluZGV4ID09
IFVEX1JfRVNQKSB7CiAgICAgICAgICAgICAgICBvcC0+aW5kZXggPSBVRF9OT05FOwogICAgICAg
ICAgICAgICAgb3AtPnNjYWxlID0gVURfTk9ORTsKICAgICAgICAgICAgfQoKICAgICAgICAgICAg
Lyogc3BlY2lhbCBjb25kaXRpb24gZm9yIGJhc2UgcmVmZXJlbmNlICovCiAgICAgICAgICAgIGlm
IChvcC0+YmFzZSA9PSBVRF9SX0VCUCkgewogICAgICAgICAgICAgICAgaWYgKG1vZCA9PSAwKQog
ICAgICAgICAgICAgICAgICAgIG9wLT5iYXNlID0gVURfTk9ORTsKICAgICAgICAgICAgICAgIGlm
IChtb2QgPT0gMSkKICAgICAgICAgICAgICAgICAgICBvcC0+b2Zmc2V0ID0gODsKICAgICAgICAg
ICAgICAgIGVsc2Ugb3AtPm9mZnNldCA9IDMyOwogICAgICAgICAgICB9CiAgICAgICAgfQogICAg
fSAKCiAgICAvKiAxNmJpdCBhZGRyZXNzaW5nIG1vZGUgKi8KICAgIGVsc2UgIHsKICAgICAgICBz
d2l0Y2ggKHJtKSB7CiAgICAgICAgICAgIGNhc2UgMDogb3AtPmJhc2UgPSBVRF9SX0JYOyBvcC0+
aW5kZXggPSBVRF9SX1NJOyBicmVhazsKICAgICAgICAgICAgY2FzZSAxOiBvcC0+YmFzZSA9IFVE
X1JfQlg7IG9wLT5pbmRleCA9IFVEX1JfREk7IGJyZWFrOwogICAgICAgICAgICBjYXNlIDI6IG9w
LT5iYXNlID0gVURfUl9CUDsgb3AtPmluZGV4ID0gVURfUl9TSTsgYnJlYWs7CiAgICAgICAgICAg
IGNhc2UgMzogb3AtPmJhc2UgPSBVRF9SX0JQOyBvcC0+aW5kZXggPSBVRF9SX0RJOyBicmVhazsK
ICAgICAgICAgICAgY2FzZSA0OiBvcC0+YmFzZSA9IFVEX1JfU0k7IGJyZWFrOwogICAgICAgICAg
ICBjYXNlIDU6IG9wLT5iYXNlID0gVURfUl9ESTsgYnJlYWs7CiAgICAgICAgICAgIGNhc2UgNjog
b3AtPmJhc2UgPSBVRF9SX0JQOyBicmVhazsKICAgICAgICAgICAgY2FzZSA3OiBvcC0+YmFzZSA9
IFVEX1JfQlg7IGJyZWFrOwogICAgICAgIH0KCiAgICAgICAgaWYgKG1vZCA9PSAwICYmIHJtID09
IDYpIHsKICAgICAgICAgICAgb3AtPm9mZnNldD0gMTY7CiAgICAgICAgICAgIG9wLT5iYXNlID0g
VURfTk9ORTsKICAgICAgICB9CiAgICAgICAgZWxzZSBpZiAobW9kID09IDEpCiAgICAgICAgICAg
IG9wLT5vZmZzZXQgPSA4OwogICAgICAgIGVsc2UgaWYgKG1vZCA9PSAyKSAKICAgICAgICAgICAg
b3AtPm9mZnNldCA9IDE2OwogICAgfQogIH0gIAoKICAvKiBleHRyYWN0IG9mZnNldCwgaWYgYW55
ICovCiAgc3dpdGNoKG9wLT5vZmZzZXQpIHsKICAgIGNhc2UgOCA6IG9wLT5sdmFsLnVieXRlICA9
IGlucF91aW50OCh1KTsgIGJyZWFrOwogICAgY2FzZSAxNjogb3AtPmx2YWwudXdvcmQgID0gaW5w
X3VpbnQxNih1KTsgIGJyZWFrOwogICAgY2FzZSAzMjogb3AtPmx2YWwudWR3b3JkID0gaW5wX3Vp
bnQzMih1KTsgYnJlYWs7CiAgICBjYXNlIDY0OiBvcC0+bHZhbC51cXdvcmQgPSBpbnBfdWludDY0
KHUpOyBicmVhazsKICAgIGRlZmF1bHQ6IGJyZWFrOwogIH0KCiAgLyogcmVzb2x2ZSByZWdpc3Rl
ciBlbmNvZGVkIGluIHJlZyBmaWVsZCAqLwogIGlmIChvcHJlZykgewogICAgb3ByZWctPnR5cGUg
PSBVRF9PUF9SRUc7CiAgICBvcHJlZy0+c2l6ZSA9IHJlc29sdmVfb3BlcmFuZF9zaXplKHUsIHJl
Z19zaXplKTsKICAgIGlmIChyZWdfdHlwZSA9PSBUX0dQUikgCiAgICAgICAgb3ByZWctPmJhc2Ug
PSBkZWNvZGVfZ3ByKHUsIG9wcmVnLT5zaXplLCByZWcpOwogICAgZWxzZSBvcHJlZy0+YmFzZSA9
IHJlc29sdmVfcmVnKHUsIHJlZ190eXBlLCByZWcpOwogIH0KfQoKLyogLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0KICogZGVjb2RlX28oKSAtIERlY29kZXMgb2Zmc2V0CiAqIC0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tCiAqLwpzdGF0aWMgdm9pZCAKZGVjb2RlX28oc3RydWN0IHVkKiB1LCB1bnNpZ25lZCBpbnQg
cywgc3RydWN0IHVkX29wZXJhbmQgKm9wKQp7CiAgc3dpdGNoICh1LT5hZHJfbW9kZSkgewogICAg
Y2FzZSA2NDoKICAgICAgICBvcC0+b2Zmc2V0ID0gNjQ7IAogICAgICAgIG9wLT5sdmFsLnVxd29y
ZCA9IGlucF91aW50NjQodSk7IAogICAgICAgIGJyZWFrOwogICAgY2FzZSAzMjoKICAgICAgICBv
cC0+b2Zmc2V0ID0gMzI7IAogICAgICAgIG9wLT5sdmFsLnVkd29yZCA9IGlucF91aW50MzIodSk7
IAogICAgICAgIGJyZWFrOwogICAgY2FzZSAxNjoKICAgICAgICBvcC0+b2Zmc2V0ID0gMTY7IAog
ICAgICAgIG9wLT5sdmFsLnV3b3JkICA9IGlucF91aW50MTYodSk7IAogICAgICAgIGJyZWFrOwog
ICAgZGVmYXVsdDoKICAgICAgICByZXR1cm47CiAgfQogIG9wLT50eXBlID0gVURfT1BfTUVNOwog
IG9wLT5zaXplID0gcmVzb2x2ZV9vcGVyYW5kX3NpemUodSwgcyk7Cn0KCi8qIC0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tCiAqIGRpc2FzbV9vcGVyYW5kcygpIC0gRGlzYXNzZW1ibGVzIE9wZXJhbmRzLgog
KiAtLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLQogKi8Kc3RhdGljIGludCBkaXNhc21fb3BlcmFuZHMocmVn
aXN0ZXIgc3RydWN0IHVkKiB1KQp7CgoKICAvKiBtb3BYdCA9IG1hcCBlbnRyeSwgb3BlcmFuZCBY
LCB0eXBlOyAqLwogIGVudW0gdWRfb3BlcmFuZF9jb2RlIG1vcDF0ID0gdS0+aXRhYl9lbnRyeS0+
b3BlcmFuZDEudHlwZTsKICBlbnVtIHVkX29wZXJhbmRfY29kZSBtb3AydCA9IHUtPml0YWJfZW50
cnktPm9wZXJhbmQyLnR5cGU7CiAgZW51bSB1ZF9vcGVyYW5kX2NvZGUgbW9wM3QgPSB1LT5pdGFi
X2VudHJ5LT5vcGVyYW5kMy50eXBlOwoKICAvKiBtb3BYcyA9IG1hcCBlbnRyeSwgb3BlcmFuZCBY
LCBzaXplICovCiAgdW5zaWduZWQgaW50IG1vcDFzID0gdS0+aXRhYl9lbnRyeS0+b3BlcmFuZDEu
c2l6ZTsKICB1bnNpZ25lZCBpbnQgbW9wMnMgPSB1LT5pdGFiX2VudHJ5LT5vcGVyYW5kMi5zaXpl
OwogIHVuc2lnbmVkIGludCBtb3AzcyA9IHUtPml0YWJfZW50cnktPm9wZXJhbmQzLnNpemU7Cgog
IC8qIGlvcCA9IGluc3RydWN0aW9uIG9wZXJhbmQgKi8KICByZWdpc3RlciBzdHJ1Y3QgdWRfb3Bl
cmFuZCogaW9wID0gdS0+b3BlcmFuZDsKICAgIAogIHN3aXRjaChtb3AxdCkgewogICAgCiAgICBj
YXNlIE9QX0EgOgogICAgICAgIGRlY29kZV9hKHUsICYoaW9wWzBdKSk7CiAgICAgICAgYnJlYWs7
CiAgICAKICAgIC8qIE1bYl0gLi4uICovCiAgICBjYXNlIE9QX00gOgogICAgICAgIGlmIChNT0RS
TV9NT0QoaW5wX3BlZWsodSkpID09IDMpCiAgICAgICAgICAgIHUtPmVycm9yPSAxOwogICAgLyog
RSwgRy9QL1YvSS9DTC8xL1MgKi8KICAgIGNhc2UgT1BfRSA6CiAgICAgICAgaWYgKG1vcDJ0ID09
IE9QX0cpIHsKICAgICAgICAgICAgZGVjb2RlX21vZHJtKHUsICYoaW9wWzBdKSwgbW9wMXMsIFRf
R1BSLCAmKGlvcFsxXSksIG1vcDJzLCBUX0dQUik7CiAgICAgICAgICAgIGlmIChtb3AzdCA9PSBP
UF9JKQogICAgICAgICAgICAgICAgZGVjb2RlX2ltbSh1LCBtb3AzcywgJihpb3BbMl0pKTsKICAg
ICAgICAgICAgZWxzZSBpZiAobW9wM3QgPT0gT1BfQ0wpIHsKICAgICAgICAgICAgICAgIGlvcFsy
XS50eXBlID0gVURfT1BfUkVHOwogICAgICAgICAgICAgICAgaW9wWzJdLmJhc2UgPSBVRF9SX0NM
OwogICAgICAgICAgICAgICAgaW9wWzJdLnNpemUgPSA4OwogICAgICAgICAgICB9CiAgICAgICAg
fQogICAgICAgIGVsc2UgaWYgKG1vcDJ0ID09IE9QX1ApCiAgICAgICAgICAgIGRlY29kZV9tb2Ry
bSh1LCAmKGlvcFswXSksIG1vcDFzLCBUX0dQUiwgJihpb3BbMV0pLCBtb3AycywgVF9NTVgpOwog
ICAgICAgIGVsc2UgaWYgKG1vcDJ0ID09IE9QX1YpCiAgICAgICAgICAgIGRlY29kZV9tb2RybSh1
LCAmKGlvcFswXSksIG1vcDFzLCBUX0dQUiwgJihpb3BbMV0pLCBtb3AycywgVF9YTU0pOwogICAg
ICAgIGVsc2UgaWYgKG1vcDJ0ID09IE9QX1MpCiAgICAgICAgICAgIGRlY29kZV9tb2RybSh1LCAm
KGlvcFswXSksIG1vcDFzLCBUX0dQUiwgJihpb3BbMV0pLCBtb3AycywgVF9TRUcpOwogICAgICAg
IGVsc2UgewogICAgICAgICAgICBkZWNvZGVfbW9kcm0odSwgJihpb3BbMF0pLCBtb3AxcywgVF9H
UFIsIE5VTEwsIDAsIFRfTk9ORSk7CiAgICAgICAgICAgIGlmIChtb3AydCA9PSBPUF9DTCkgewog
ICAgICAgICAgICAgICAgaW9wWzFdLnR5cGUgPSBVRF9PUF9SRUc7CiAgICAgICAgICAgICAgICBp
b3BbMV0uYmFzZSA9IFVEX1JfQ0w7CiAgICAgICAgICAgICAgICBpb3BbMV0uc2l6ZSA9IDg7CiAg
ICAgICAgICAgIH0gZWxzZSBpZiAobW9wMnQgPT0gT1BfSTEpIHsKICAgICAgICAgICAgICAgIGlv
cFsxXS50eXBlID0gVURfT1BfQ09OU1Q7CiAgICAgICAgICAgICAgICB1LT5vcGVyYW5kWzFdLmx2
YWwudWR3b3JkID0gMTsKICAgICAgICAgICAgfSBlbHNlIGlmIChtb3AydCA9PSBPUF9JKSB7CiAg
ICAgICAgICAgICAgICBkZWNvZGVfaW1tKHUsIG1vcDJzLCAmKGlvcFsxXSkpOwogICAgICAgICAg
ICB9CiAgICAgICAgfQogICAgICAgIGJyZWFrOwoKICAgIC8qIEcsIEUvUFJbLEldL1ZSICovCiAg
ICBjYXNlIE9QX0cgOgogICAgICAgIGlmIChtb3AydCA9PSBPUF9NKSB7CiAgICAgICAgICAgIGlm
IChNT0RSTV9NT0QoaW5wX3BlZWsodSkpID09IDMpCiAgICAgICAgICAgICAgICB1LT5lcnJvcj0g
MTsKICAgICAgICAgICAgZGVjb2RlX21vZHJtKHUsICYoaW9wWzFdKSwgbW9wMnMsIFRfR1BSLCAm
KGlvcFswXSksIG1vcDFzLCBUX0dQUik7CiAgICAgICAgfSBlbHNlIGlmIChtb3AydCA9PSBPUF9F
KSB7CiAgICAgICAgICAgIGRlY29kZV9tb2RybSh1LCAmKGlvcFsxXSksIG1vcDJzLCBUX0dQUiwg
Jihpb3BbMF0pLCBtb3AxcywgVF9HUFIpOwogICAgICAgICAgICBpZiAobW9wM3QgPT0gT1BfSSkK
ICAgICAgICAgICAgICAgIGRlY29kZV9pbW0odSwgbW9wM3MsICYoaW9wWzJdKSk7CiAgICAgICAg
fSBlbHNlIGlmIChtb3AydCA9PSBPUF9QUikgewogICAgICAgICAgICBkZWNvZGVfbW9kcm0odSwg
Jihpb3BbMV0pLCBtb3AycywgVF9NTVgsICYoaW9wWzBdKSwgbW9wMXMsIFRfR1BSKTsKICAgICAg
ICAgICAgaWYgKG1vcDN0ID09IE9QX0kpCiAgICAgICAgICAgICAgICBkZWNvZGVfaW1tKHUsIG1v
cDNzLCAmKGlvcFsyXSkpOwogICAgICAgIH0gZWxzZSBpZiAobW9wMnQgPT0gT1BfVlIpIHsKICAg
ICAgICAgICAgaWYgKE1PRFJNX01PRChpbnBfcGVlayh1KSkgIT0gMykKICAgICAgICAgICAgICAg
IHUtPmVycm9yID0gMTsKICAgICAgICAgICAgZGVjb2RlX21vZHJtKHUsICYoaW9wWzFdKSwgbW9w
MnMsIFRfWE1NLCAmKGlvcFswXSksIG1vcDFzLCBUX0dQUik7CiAgICAgICAgfSBlbHNlIGlmICht
b3AydCA9PSBPUF9XKQogICAgICAgICAgICBkZWNvZGVfbW9kcm0odSwgJihpb3BbMV0pLCBtb3Ay
cywgVF9YTU0sICYoaW9wWzBdKSwgbW9wMXMsIFRfR1BSKTsKICAgICAgICBicmVhazsKCiAgICAv
KiBBTC4uQkgsIEkvTy9EWCAqLwogICAgY2FzZSBPUF9BTCA6IGNhc2UgT1BfQ0wgOiBjYXNlIE9Q
X0RMIDogY2FzZSBPUF9CTCA6CiAgICBjYXNlIE9QX0FIIDogY2FzZSBPUF9DSCA6IGNhc2UgT1Bf
REggOiBjYXNlIE9QX0JIIDoKCiAgICAgICAgaW9wWzBdLnR5cGUgPSBVRF9PUF9SRUc7CiAgICAg
ICAgaW9wWzBdLmJhc2UgPSBVRF9SX0FMICsgKG1vcDF0IC0gT1BfQUwpOwogICAgICAgIGlvcFsw
XS5zaXplID0gODsKCiAgICAgICAgaWYgKG1vcDJ0ID09IE9QX0kpCiAgICAgICAgICAgIGRlY29k
ZV9pbW0odSwgbW9wMnMsICYoaW9wWzFdKSk7CiAgICAgICAgZWxzZSBpZiAobW9wMnQgPT0gT1Bf
RFgpIHsKICAgICAgICAgICAgaW9wWzFdLnR5cGUgPSBVRF9PUF9SRUc7CiAgICAgICAgICAgIGlv
cFsxXS5iYXNlID0gVURfUl9EWDsKICAgICAgICAgICAgaW9wWzFdLnNpemUgPSAxNjsKICAgICAg
ICB9CiAgICAgICAgZWxzZSBpZiAobW9wMnQgPT0gT1BfTykKICAgICAgICAgICAgZGVjb2RlX28o
dSwgbW9wMnMsICYoaW9wWzFdKSk7CiAgICAgICAgYnJlYWs7CgogICAgLyogckFYW3I4XS4uckRJ
W3IxNV0sIEkvckFYLi5yREkvTyAqLwogICAgY2FzZSBPUF9yQVhyOCA6IGNhc2UgT1BfckNYcjkg
OiBjYXNlIE9QX3JEWHIxMCA6IGNhc2UgT1BfckJYcjExIDoKICAgIGNhc2UgT1BfclNQcjEyOiBj
YXNlIE9QX3JCUHIxMzogY2FzZSBPUF9yU0lyMTQgOiBjYXNlIE9QX3JESXIxNSA6CiAgICBjYXNl
IE9QX3JBWCA6IGNhc2UgT1BfckNYIDogY2FzZSBPUF9yRFggOiBjYXNlIE9QX3JCWCA6CiAgICBj
YXNlIE9QX3JTUCA6IGNhc2UgT1BfckJQIDogY2FzZSBPUF9yU0kgOiBjYXNlIE9QX3JESSA6Cgog
ICAgICAgIGlvcFswXS50eXBlID0gVURfT1BfUkVHOwogICAgICAgIGlvcFswXS5iYXNlID0gcmVz
b2x2ZV9ncHI2NCh1LCBtb3AxdCk7CgogICAgICAgIGlmIChtb3AydCA9PSBPUF9JKQogICAgICAg
ICAgICBkZWNvZGVfaW1tKHUsIG1vcDJzLCAmKGlvcFsxXSkpOwogICAgICAgIGVsc2UgaWYgKG1v
cDJ0ID49IE9QX3JBWCAmJiBtb3AydCA8PSBPUF9yREkpIHsKICAgICAgICAgICAgaW9wWzFdLnR5
cGUgPSBVRF9PUF9SRUc7CiAgICAgICAgICAgIGlvcFsxXS5iYXNlID0gcmVzb2x2ZV9ncHI2NCh1
LCBtb3AydCk7CiAgICAgICAgfQogICAgICAgIGVsc2UgaWYgKG1vcDJ0ID09IE9QX08pIHsKICAg
ICAgICAgICAgZGVjb2RlX28odSwgbW9wMnMsICYoaW9wWzFdKSk7ICAKICAgICAgICAgICAgaW9w
WzBdLnNpemUgPSByZXNvbHZlX29wZXJhbmRfc2l6ZSh1LCBtb3Aycyk7CiAgICAgICAgfQogICAg
ICAgIGJyZWFrOwoKICAgIC8qIEFMW3I4Yl0uLkJIW3IxNWJdLCBJICovCiAgICBjYXNlIE9QX0FM
cjhiIDogY2FzZSBPUF9DTHI5YiA6IGNhc2UgT1BfRExyMTBiIDogY2FzZSBPUF9CTHIxMWIgOgog
ICAgY2FzZSBPUF9BSHIxMmI6IGNhc2UgT1BfQ0hyMTNiOiBjYXNlIE9QX0RIcjE0YiA6IGNhc2Ug
T1BfQkhyMTViIDoKICAgIHsKICAgICAgICB1ZF90eXBlX3QgZ3ByID0gKG1vcDF0IC0gT1BfQUxy
OGIpICsgVURfUl9BTCArIAogICAgICAgICAgICAgICAgICAgICAgICAoUkVYX0IodS0+cGZ4X3Jl
eCkgPDwgMyk7CiAgICAgICAgaWYgKFVEX1JfQUggPD0gZ3ByICYmIHUtPnBmeF9yZXgpCiAgICAg
ICAgICAgIGdwciA9IGdwciArIDQ7CiAgICAgICAgaW9wWzBdLnR5cGUgPSBVRF9PUF9SRUc7CiAg
ICAgICAgaW9wWzBdLmJhc2UgPSBncHI7CiAgICAgICAgaWYgKG1vcDJ0ID09IE9QX0kpCiAgICAg
ICAgICAgIGRlY29kZV9pbW0odSwgbW9wMnMsICYoaW9wWzFdKSk7CiAgICAgICAgYnJlYWs7CiAg
ICB9CgogICAgLyogZUFYLi5lRFgsIERYL0kgKi8KICAgIGNhc2UgT1BfZUFYIDogY2FzZSBPUF9l
Q1ggOiBjYXNlIE9QX2VEWCA6IGNhc2UgT1BfZUJYIDoKICAgIGNhc2UgT1BfZVNQIDogY2FzZSBP
UF9lQlAgOiBjYXNlIE9QX2VTSSA6IGNhc2UgT1BfZURJIDoKICAgICAgICBpb3BbMF0udHlwZSA9
IFVEX09QX1JFRzsKICAgICAgICBpb3BbMF0uYmFzZSA9IHJlc29sdmVfZ3ByMzIodSwgbW9wMXQp
OwogICAgICAgIGlmIChtb3AydCA9PSBPUF9EWCkgewogICAgICAgICAgICBpb3BbMV0udHlwZSA9
IFVEX09QX1JFRzsKICAgICAgICAgICAgaW9wWzFdLmJhc2UgPSBVRF9SX0RYOwogICAgICAgICAg
ICBpb3BbMV0uc2l6ZSA9IDE2OwogICAgICAgIH0gZWxzZSBpZiAobW9wMnQgPT0gT1BfSSkKICAg
ICAgICAgICAgZGVjb2RlX2ltbSh1LCBtb3AycywgJihpb3BbMV0pKTsKICAgICAgICBicmVhazsK
CiAgICAvKiBFUy4uR1MgKi8KICAgIGNhc2UgT1BfRVMgOiBjYXNlIE9QX0NTIDogY2FzZSBPUF9E
UyA6CiAgICBjYXNlIE9QX1NTIDogY2FzZSBPUF9GUyA6IGNhc2UgT1BfR1MgOgoKICAgICAgICAv
KiBpbiA2NGJpdHMgbW9kZSwgb25seSBmcyBhbmQgZ3MgYXJlIGFsbG93ZWQgKi8KICAgICAgICBp
ZiAodS0+ZGlzX21vZGUgPT0gNjQpCiAgICAgICAgICAgIGlmIChtb3AxdCAhPSBPUF9GUyAmJiBt
b3AxdCAhPSBPUF9HUykKICAgICAgICAgICAgICAgIHUtPmVycm9yPSAxOwogICAgICAgIGlvcFsw
XS50eXBlID0gVURfT1BfUkVHOwogICAgICAgIGlvcFswXS5iYXNlID0gKG1vcDF0IC0gT1BfRVMp
ICsgVURfUl9FUzsKICAgICAgICBpb3BbMF0uc2l6ZSA9IDE2OwoKICAgICAgICBicmVhazsKCiAg
ICAvKiBKICovCiAgICBjYXNlIE9QX0ogOgogICAgICAgIGRlY29kZV9pbW0odSwgbW9wMXMsICYo
aW9wWzBdKSk7ICAgICAgICAKICAgICAgICBpb3BbMF0udHlwZSA9IFVEX09QX0pJTU07CiAgICAg
ICAgYnJlYWsgOwoKICAgIC8qIFBSLCBJICovCiAgICBjYXNlIE9QX1BSOgogICAgICAgIGlmIChN
T0RSTV9NT0QoaW5wX3BlZWsodSkpICE9IDMpCiAgICAgICAgICAgIHUtPmVycm9yID0gMTsKICAg
ICAgICBkZWNvZGVfbW9kcm0odSwgJihpb3BbMF0pLCBtb3AxcywgVF9NTVgsIE5VTEwsIDAsIFRf
Tk9ORSk7CiAgICAgICAgaWYgKG1vcDJ0ID09IE9QX0kpCiAgICAgICAgICAgIGRlY29kZV9pbW0o
dSwgbW9wMnMsICYoaW9wWzFdKSk7CiAgICAgICAgYnJlYWs7IAoKICAgIC8qIFZSLCBJICovCiAg
ICBjYXNlIE9QX1ZSOgogICAgICAgIGlmIChNT0RSTV9NT0QoaW5wX3BlZWsodSkpICE9IDMpCiAg
ICAgICAgICAgIHUtPmVycm9yID0gMTsKICAgICAgICBkZWNvZGVfbW9kcm0odSwgJihpb3BbMF0p
LCBtb3AxcywgVF9YTU0sIE5VTEwsIDAsIFRfTk9ORSk7CiAgICAgICAgaWYgKG1vcDJ0ID09IE9Q
X0kpCiAgICAgICAgICAgIGRlY29kZV9pbW0odSwgbW9wMnMsICYoaW9wWzFdKSk7CiAgICAgICAg
YnJlYWs7IAoKICAgIC8qIFAsIFFbLEldL1cvRVssSV0sVlIgKi8KICAgIGNhc2UgT1BfUCA6CiAg
ICAgICAgaWYgKG1vcDJ0ID09IE9QX1EpIHsKICAgICAgICAgICAgZGVjb2RlX21vZHJtKHUsICYo
aW9wWzFdKSwgbW9wMnMsIFRfTU1YLCAmKGlvcFswXSksIG1vcDFzLCBUX01NWCk7CiAgICAgICAg
ICAgIGlmIChtb3AzdCA9PSBPUF9JKQogICAgICAgICAgICAgICAgZGVjb2RlX2ltbSh1LCBtb3Az
cywgJihpb3BbMl0pKTsKICAgICAgICB9IGVsc2UgaWYgKG1vcDJ0ID09IE9QX1cpIHsKICAgICAg
ICAgICAgZGVjb2RlX21vZHJtKHUsICYoaW9wWzFdKSwgbW9wMnMsIFRfWE1NLCAmKGlvcFswXSks
IG1vcDFzLCBUX01NWCk7CiAgICAgICAgfSBlbHNlIGlmIChtb3AydCA9PSBPUF9WUikgewogICAg
ICAgICAgICBpZiAoTU9EUk1fTU9EKGlucF9wZWVrKHUpKSAhPSAzKQogICAgICAgICAgICAgICAg
dS0+ZXJyb3IgPSAxOwogICAgICAgICAgICBkZWNvZGVfbW9kcm0odSwgJihpb3BbMV0pLCBtb3Ay
cywgVF9YTU0sICYoaW9wWzBdKSwgbW9wMXMsIFRfTU1YKTsKICAgICAgICB9IGVsc2UgaWYgKG1v
cDJ0ID09IE9QX0UpIHsKICAgICAgICAgICAgZGVjb2RlX21vZHJtKHUsICYoaW9wWzFdKSwgbW9w
MnMsIFRfR1BSLCAmKGlvcFswXSksIG1vcDFzLCBUX01NWCk7CiAgICAgICAgICAgIGlmIChtb3Az
dCA9PSBPUF9JKQogICAgICAgICAgICAgICAgZGVjb2RlX2ltbSh1LCBtb3AzcywgJihpb3BbMl0p
KTsKICAgICAgICB9CiAgICAgICAgYnJlYWs7CgogICAgLyogUiwgQy9EICovCiAgICBjYXNlIE9Q
X1IgOgogICAgICAgIGlmIChtb3AydCA9PSBPUF9DKQogICAgICAgICAgICBkZWNvZGVfbW9kcm0o
dSwgJihpb3BbMF0pLCBtb3AxcywgVF9HUFIsICYoaW9wWzFdKSwgbW9wMnMsIFRfQ1JHKTsKICAg
ICAgICBlbHNlIGlmIChtb3AydCA9PSBPUF9EKQogICAgICAgICAgICBkZWNvZGVfbW9kcm0odSwg
Jihpb3BbMF0pLCBtb3AxcywgVF9HUFIsICYoaW9wWzFdKSwgbW9wMnMsIFRfREJHKTsKICAgICAg
ICBicmVhazsKCiAgICAvKiBDLCBSICovCiAgICBjYXNlIE9QX0MgOgogICAgICAgIGRlY29kZV9t
b2RybSh1LCAmKGlvcFsxXSksIG1vcDJzLCBUX0dQUiwgJihpb3BbMF0pLCBtb3AxcywgVF9DUkcp
OwogICAgICAgIGJyZWFrOwoKICAgIC8qIEQsIFIgKi8KICAgIGNhc2UgT1BfRCA6CiAgICAgICAg
ZGVjb2RlX21vZHJtKHUsICYoaW9wWzFdKSwgbW9wMnMsIFRfR1BSLCAmKGlvcFswXSksIG1vcDFz
LCBUX0RCRyk7CiAgICAgICAgYnJlYWs7CgogICAgLyogUSwgUCAqLwogICAgY2FzZSBPUF9RIDoK
ICAgICAgICBkZWNvZGVfbW9kcm0odSwgJihpb3BbMF0pLCBtb3AxcywgVF9NTVgsICYoaW9wWzFd
KSwgbW9wMnMsIFRfTU1YKTsKICAgICAgICBicmVhazsKCiAgICAvKiBTLCBFICovCiAgICBjYXNl
IE9QX1MgOgogICAgICAgIGRlY29kZV9tb2RybSh1LCAmKGlvcFsxXSksIG1vcDJzLCBUX0dQUiwg
Jihpb3BbMF0pLCBtb3AxcywgVF9TRUcpOwogICAgICAgIGJyZWFrOwoKICAgIC8qIFcsIFYgKi8K
ICAgIGNhc2UgT1BfVyA6CiAgICAgICAgZGVjb2RlX21vZHJtKHUsICYoaW9wWzBdKSwgbW9wMXMs
IFRfWE1NLCAmKGlvcFsxXSksIG1vcDJzLCBUX1hNTSk7CiAgICAgICAgYnJlYWs7CgogICAgLyog
ViwgV1ssSV0vUS9NL0UgKi8KICAgIGNhc2UgT1BfViA6CiAgICAgICAgaWYgKG1vcDJ0ID09IE9Q
X1cpIHsKICAgICAgICAgICAgLyogc3BlY2lhbCBjYXNlcyBmb3IgbW92bHBzIGFuZCBtb3ZocHMg
Ki8KICAgICAgICAgICAgaWYgKE1PRFJNX01PRChpbnBfcGVlayh1KSkgPT0gMykgewogICAgICAg
ICAgICAgICAgaWYgKHUtPm1uZW1vbmljID09IFVEX0ltb3ZscHMpCiAgICAgICAgICAgICAgICAg
ICAgdS0+bW5lbW9uaWMgPSBVRF9JbW92aGxwczsKICAgICAgICAgICAgICAgIGVsc2UKICAgICAg
ICAgICAgICAgIGlmICh1LT5tbmVtb25pYyA9PSBVRF9JbW92aHBzKQogICAgICAgICAgICAgICAg
ICAgIHUtPm1uZW1vbmljID0gVURfSW1vdmxocHM7CiAgICAgICAgICAgIH0KICAgICAgICAgICAg
ZGVjb2RlX21vZHJtKHUsICYoaW9wWzFdKSwgbW9wMnMsIFRfWE1NLCAmKGlvcFswXSksIG1vcDFz
LCBUX1hNTSk7CiAgICAgICAgICAgIGlmIChtb3AzdCA9PSBPUF9JKQogICAgICAgICAgICAgICAg
ZGVjb2RlX2ltbSh1LCBtb3AzcywgJihpb3BbMl0pKTsKICAgICAgICB9IGVsc2UgaWYgKG1vcDJ0
ID09IE9QX1EpCiAgICAgICAgICAgIGRlY29kZV9tb2RybSh1LCAmKGlvcFsxXSksIG1vcDJzLCBU
X01NWCwgJihpb3BbMF0pLCBtb3AxcywgVF9YTU0pOwogICAgICAgIGVsc2UgaWYgKG1vcDJ0ID09
IE9QX00pIHsKICAgICAgICAgICAgaWYgKE1PRFJNX01PRChpbnBfcGVlayh1KSkgPT0gMykKICAg
ICAgICAgICAgICAgIHUtPmVycm9yPSAxOwogICAgICAgICAgICBkZWNvZGVfbW9kcm0odSwgJihp
b3BbMV0pLCBtb3AycywgVF9HUFIsICYoaW9wWzBdKSwgbW9wMXMsIFRfWE1NKTsKICAgICAgICB9
IGVsc2UgaWYgKG1vcDJ0ID09IE9QX0UpIHsKICAgICAgICAgICAgZGVjb2RlX21vZHJtKHUsICYo
aW9wWzFdKSwgbW9wMnMsIFRfR1BSLCAmKGlvcFswXSksIG1vcDFzLCBUX1hNTSk7CiAgICAgICAg
fSBlbHNlIGlmIChtb3AydCA9PSBPUF9QUikgewogICAgICAgICAgICBkZWNvZGVfbW9kcm0odSwg
Jihpb3BbMV0pLCBtb3AycywgVF9NTVgsICYoaW9wWzBdKSwgbW9wMXMsIFRfWE1NKTsKICAgICAg
ICB9CiAgICAgICAgYnJlYWs7CgogICAgLyogRFgsIGVBWC9BTCAqLwogICAgY2FzZSBPUF9EWCA6
CiAgICAgICAgaW9wWzBdLnR5cGUgPSBVRF9PUF9SRUc7CiAgICAgICAgaW9wWzBdLmJhc2UgPSBV
RF9SX0RYOwogICAgICAgIGlvcFswXS5zaXplID0gMTY7CgogICAgICAgIGlmIChtb3AydCA9PSBP
UF9lQVgpIHsKICAgICAgICAgICAgaW9wWzFdLnR5cGUgPSBVRF9PUF9SRUc7ICAgIAogICAgICAg
ICAgICBpb3BbMV0uYmFzZSA9IHJlc29sdmVfZ3ByMzIodSwgbW9wMnQpOwogICAgICAgIH0gZWxz
ZSBpZiAobW9wMnQgPT0gT1BfQUwpIHsKICAgICAgICAgICAgaW9wWzFdLnR5cGUgPSBVRF9PUF9S
RUc7CiAgICAgICAgICAgIGlvcFsxXS5iYXNlID0gVURfUl9BTDsKICAgICAgICAgICAgaW9wWzFd
LnNpemUgPSA4OwogICAgICAgIH0KCiAgICAgICAgYnJlYWs7CgogICAgLyogSSwgSS9BTC9lQVgg
Ki8KICAgIGNhc2UgT1BfSSA6CiAgICAgICAgZGVjb2RlX2ltbSh1LCBtb3AxcywgJihpb3BbMF0p
KTsKICAgICAgICBpZiAobW9wMnQgPT0gT1BfSSkKICAgICAgICAgICAgZGVjb2RlX2ltbSh1LCBt
b3AycywgJihpb3BbMV0pKTsKICAgICAgICBlbHNlIGlmIChtb3AydCA9PSBPUF9BTCkgewogICAg
ICAgICAgICBpb3BbMV0udHlwZSA9IFVEX09QX1JFRzsKICAgICAgICAgICAgaW9wWzFdLmJhc2Ug
PSBVRF9SX0FMOwogICAgICAgICAgICBpb3BbMV0uc2l6ZSA9IDE2OwogICAgICAgIH0gZWxzZSBp
ZiAobW9wMnQgPT0gT1BfZUFYKSB7CiAgICAgICAgICAgIGlvcFsxXS50eXBlID0gVURfT1BfUkVH
OyAgICAKICAgICAgICAgICAgaW9wWzFdLmJhc2UgPSByZXNvbHZlX2dwcjMyKHUsIG1vcDJ0KTsK
ICAgICAgICB9CiAgICAgICAgYnJlYWs7CgogICAgLyogTywgQUwvZUFYICovCiAgICBjYXNlIE9Q
X08gOgogICAgICAgIGRlY29kZV9vKHUsIG1vcDFzLCAmKGlvcFswXSkpOwogICAgICAgIGlvcFsx
XS50eXBlID0gVURfT1BfUkVHOwogICAgICAgIGlvcFsxXS5zaXplID0gcmVzb2x2ZV9vcGVyYW5k
X3NpemUodSwgbW9wMXMpOwogICAgICAgIGlmIChtb3AydCA9PSBPUF9BTCkKICAgICAgICAgICAg
aW9wWzFdLmJhc2UgPSBVRF9SX0FMOwogICAgICAgIGVsc2UgaWYgKG1vcDJ0ID09IE9QX2VBWCkK
ICAgICAgICAgICAgaW9wWzFdLmJhc2UgPSByZXNvbHZlX2dwcjMyKHUsIG1vcDJ0KTsKICAgICAg
ICBlbHNlIGlmIChtb3AydCA9PSBPUF9yQVgpCiAgICAgICAgICAgIGlvcFsxXS5iYXNlID0gcmVz
b2x2ZV9ncHI2NCh1LCBtb3AydCk7ICAgICAgCiAgICAgICAgYnJlYWs7CgogICAgLyogMyAqLwog
ICAgY2FzZSBPUF9JMyA6CiAgICAgICAgaW9wWzBdLnR5cGUgPSBVRF9PUF9DT05TVDsKICAgICAg
ICBpb3BbMF0ubHZhbC5zYnl0ZSA9IDM7CiAgICAgICAgYnJlYWs7CgogICAgLyogU1QobiksIFNU
KG4pICovCiAgICBjYXNlIE9QX1NUMCA6IGNhc2UgT1BfU1QxIDogY2FzZSBPUF9TVDIgOiBjYXNl
IE9QX1NUMyA6CiAgICBjYXNlIE9QX1NUNCA6IGNhc2UgT1BfU1Q1IDogY2FzZSBPUF9TVDYgOiBj
YXNlIE9QX1NUNyA6CgogICAgICAgIGlvcFswXS50eXBlID0gVURfT1BfUkVHOwogICAgICAgIGlv
cFswXS5iYXNlID0gKG1vcDF0LU9QX1NUMCkgKyBVRF9SX1NUMDsKICAgICAgICBpb3BbMF0uc2l6
ZSA9IDA7CgogICAgICAgIGlmIChtb3AydCA+PSBPUF9TVDAgJiYgbW9wMnQgPD0gT1BfU1Q3KSB7
CiAgICAgICAgICAgIGlvcFsxXS50eXBlID0gVURfT1BfUkVHOwogICAgICAgICAgICBpb3BbMV0u
YmFzZSA9IChtb3AydC1PUF9TVDApICsgVURfUl9TVDA7CiAgICAgICAgICAgIGlvcFsxXS5zaXpl
ID0gMDsKICAgICAgICB9CiAgICAgICAgYnJlYWs7CgogICAgLyogQVggKi8KICAgIGNhc2UgT1Bf
QVg6CiAgICAgICAgaW9wWzBdLnR5cGUgPSBVRF9PUF9SRUc7CiAgICAgICAgaW9wWzBdLmJhc2Ug
PSBVRF9SX0FYOwogICAgICAgIGlvcFswXS5zaXplID0gMTY7CiAgICAgICAgYnJlYWs7CgogICAg
Lyogbm9uZSAqLwogICAgZGVmYXVsdCA6CiAgICAgICAgaW9wWzBdLnR5cGUgPSBpb3BbMV0udHlw
ZSA9IGlvcFsyXS50eXBlID0gVURfTk9ORTsKICB9CgogIHJldHVybiAwOwp9CgovKiAtLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLQogKiBjbGVhcl9pbnNuKCkgLSBjbGVhciBpbnN0cnVjdGlvbiBwb2ludGVy
IAogKiAtLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLQogKi8Kc3RhdGljIGludCBjbGVhcl9pbnNuKHJlZ2lz
dGVyIHN0cnVjdCB1ZCogdSkKewogIHUtPmVycm9yICAgICA9IDA7CiAgdS0+cGZ4X3NlZyAgID0g
MDsKICB1LT5wZnhfb3ByICAgPSAwOwogIHUtPnBmeF9hZHIgICA9IDA7CiAgdS0+cGZ4X2xvY2sg
ID0gMDsKICB1LT5wZnhfcmVwbmUgPSAwOwogIHUtPnBmeF9yZXAgICA9IDA7CiAgdS0+cGZ4X3Jl
cGUgID0gMDsKICB1LT5wZnhfc2VnICAgPSAwOwogIHUtPnBmeF9yZXggICA9IDA7CiAgdS0+cGZ4
X2luc24gID0gMDsKICB1LT5tbmVtb25pYyAgPSBVRF9Jbm9uZTsKICB1LT5pdGFiX2VudHJ5ID0g
TlVMTDsKCiAgbWVtc2V0KCAmdS0+b3BlcmFuZFsgMCBdLCAwLCBzaXplb2YoIHN0cnVjdCB1ZF9v
cGVyYW5kICkgKTsKICBtZW1zZXQoICZ1LT5vcGVyYW5kWyAxIF0sIDAsIHNpemVvZiggc3RydWN0
IHVkX29wZXJhbmQgKSApOwogIG1lbXNldCggJnUtPm9wZXJhbmRbIDIgXSwgMCwgc2l6ZW9mKCBz
dHJ1Y3QgdWRfb3BlcmFuZCApICk7CiAKICByZXR1cm4gMDsKfQoKc3RhdGljIGludCBkb19tb2Rl
KCBzdHJ1Y3QgdWQqIHUgKQp7CiAgLyogaWYgaW4gZXJyb3Igc3RhdGUsIGJhaWwgb3V0ICovCiAg
aWYgKCB1LT5lcnJvciApIHJldHVybiAtMTsgCgogIC8qIHByb3BhZ2F0ZSBwZXJmaXggZWZmZWN0
cyAqLwogIGlmICggdS0+ZGlzX21vZGUgPT0gNjQgKSB7ICAvKiBzZXQgNjRiaXQtbW9kZSBmbGFn
cyAqLwoKICAgIC8qIENoZWNrIHZhbGlkaXR5IG9mICBpbnN0cnVjdGlvbiBtNjQgKi8KICAgIGlm
ICggUF9JTlY2NCggdS0+aXRhYl9lbnRyeS0+cHJlZml4ICkgKSB7CiAgICAgICAgdS0+ZXJyb3Ig
PSAxOwogICAgICAgIHJldHVybiAtMTsKICAgIH0KCiAgICAvKiBlZmZlY3RpdmUgcmV4IHByZWZp
eCBpcyB0aGUgIGVmZmVjdGl2ZSBtYXNrIGZvciB0aGUgCiAgICAgKiBpbnN0cnVjdGlvbiBoYXJk
LWNvZGVkIGluIHRoZSBvcGNvZGUgbWFwLgogICAgICovCiAgICB1LT5wZnhfcmV4ID0gKCB1LT5w
ZnhfcmV4ICYgMHg0MCApIHwgCiAgICAgICAgICAgICAgICAgKCB1LT5wZnhfcmV4ICYgUkVYX1BG
WF9NQVNLKCB1LT5pdGFiX2VudHJ5LT5wcmVmaXggKSApOyAKCiAgICAvKiB3aGV0aGVyIHRoaXMg
aW5zdHJ1Y3Rpb24gaGFzIGEgZGVmYXVsdCBvcGVyYW5kIHNpemUgb2YgCiAgICAgKiA2NGJpdCwg
YWxzbyBoYXJkY29kZWQgaW50byB0aGUgb3Bjb2RlIG1hcC4KICAgICAqLwogICAgdS0+ZGVmYXVs
dDY0ID0gUF9ERUY2NCggdS0+aXRhYl9lbnRyeS0+cHJlZml4ICk7IAogICAgLyogY2FsY3VsYXRl
IGVmZmVjdGl2ZSBvcGVyYW5kIHNpemUgKi8KICAgIGlmICggUkVYX1coIHUtPnBmeF9yZXggKSAp
IHsKICAgICAgICB1LT5vcHJfbW9kZSA9IDY0OwogICAgfSBlbHNlIGlmICggdS0+cGZ4X29wciAp
IHsKICAgICAgICB1LT5vcHJfbW9kZSA9IDE2OwogICAgfSBlbHNlIHsKICAgICAgICAvKiB1bmxl
c3MgdGhlIGRlZmF1bHQgb3ByIHNpemUgb2YgaW5zdHJ1Y3Rpb24gaXMgNjQsCiAgICAgICAgICog
dGhlIGVmZmVjdGl2ZSBvcGVyYW5kIHNpemUgaW4gdGhlIGFic2VuY2Ugb2YgcmV4LncKICAgICAg
ICAgKiBwcmVmaXggaXMgMzIuCiAgICAgICAgICovCiAgICAgICAgdS0+b3ByX21vZGUgPSAoIHUt
PmRlZmF1bHQ2NCApID8gNjQgOiAzMjsKICAgIH0KCiAgICAvKiBjYWxjdWxhdGUgZWZmZWN0aXZl
IGFkZHJlc3Mgc2l6ZSAqLwogICAgdS0+YWRyX21vZGUgPSAodS0+cGZ4X2FkcikgPyAzMiA6IDY0
OwogIH0gZWxzZSBpZiAoIHUtPmRpc19tb2RlID09IDMyICkgeyAvKiBzZXQgMzJiaXQtbW9kZSBm
bGFncyAqLwogICAgdS0+b3ByX21vZGUgPSAoIHUtPnBmeF9vcHIgKSA/IDE2IDogMzI7CiAgICB1
LT5hZHJfbW9kZSA9ICggdS0+cGZ4X2FkciApID8gMTYgOiAzMjsKICB9IGVsc2UgaWYgKCB1LT5k
aXNfbW9kZSA9PSAxNiApIHsgLyogc2V0IDE2Yml0LW1vZGUgZmxhZ3MgKi8KICAgIHUtPm9wcl9t
b2RlID0gKCB1LT5wZnhfb3ByICkgPyAzMiA6IDE2OwogICAgdS0+YWRyX21vZGUgPSAoIHUtPnBm
eF9hZHIgKSA/IDMyIDogMTY7CiAgfQoKICAvKiBUaGVzZSBmbGFncyBkZXRlcm1pbmUgd2hpY2gg
b3BlcmFuZCB0byBhcHBseSB0aGUgb3BlcmFuZCBzaXplCiAgICogY2FzdCB0by4KICAgKi8KICB1
LT5jMSA9ICggUF9DMSggdS0+aXRhYl9lbnRyeS0+cHJlZml4ICkgKSA/IDEgOiAwOwogIHUtPmMy
ID0gKCBQX0MyKCB1LT5pdGFiX2VudHJ5LT5wcmVmaXggKSApID8gMSA6IDA7CiAgdS0+YzMgPSAo
IFBfQzMoIHUtPml0YWJfZW50cnktPnByZWZpeCApICkgPyAxIDogMDsKCiAgLyogc2V0IGZsYWdz
IGZvciBpbXBsaWNpdCBhZGRyZXNzaW5nICovCiAgdS0+aW1wbGljaXRfYWRkciA9IFBfSU1QQURE
UiggdS0+aXRhYl9lbnRyeS0+cHJlZml4ICk7CgogIHJldHVybiAwOwp9CgpzdGF0aWMgaW50IGdl
bl9oZXgoIHN0cnVjdCB1ZCAqdSApCnsKICB1bnNpZ25lZCBpbnQgaTsKICB1bnNpZ25lZCBjaGFy
ICpzcmNfcHRyID0gaW5wX3Nlc3MoIHUgKTsKICBjaGFyKiBzcmNfaGV4OwoKICAvKiBiYWlsIG91
dCBpZiBpbiBlcnJvciBzdGF0LiAqLwogIGlmICggdS0+ZXJyb3IgKSByZXR1cm4gLTE7IAogIC8q
IG91dHB1dCBidWZmZXIgcG9pbnRlICovCiAgc3JjX2hleCA9ICggY2hhciogKSB1LT5pbnNuX2hl
eGNvZGU7CiAgLyogZm9yIGVhY2ggYnl0ZSB1c2VkIHRvIGRlY29kZSBpbnN0cnVjdGlvbiAqLwog
IGZvciAoIGkgPSAwOyBpIDwgdS0+aW5wX2N0cjsgKytpLCArK3NyY19wdHIpIHsKICAgIHNucHJp
bnRmKCBzcmNfaGV4LCAyLCAiJTAyeCIsICpzcmNfcHRyICYgMHhGRiApOwogICAgc3JjX2hleCAr
PSAyOwogIH0KICByZXR1cm4gMDsKfQoKLyogPT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT0KICogdWRfZGVj
b2RlKCkgLSBJbnN0cnVjdGlvbiBkZWNvZGVyLiBSZXR1cm5zIHRoZSBudW1iZXIgb2YgYnl0ZXMg
ZGVjb2RlZC4KICogPT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT0KICovCnVuc2lnbmVkIGludCB1ZF9kZWNv
ZGUoIHN0cnVjdCB1ZCogdSApCnsKICBpbnBfc3RhcnQodSk7CgogIGlmICggY2xlYXJfaW5zbigg
dSApICkgewogICAgOyAvKiBlcnJvciAqLwogIH0gZWxzZSBpZiAoIGdldF9wcmVmaXhlcyggdSAp
ICE9IDAgKSB7CiAgICA7IC8qIGVycm9yICovCiAgfSBlbHNlIGlmICggc2VhcmNoX2l0YWIoIHUg
KSAhPSAwICkgewogICAgOyAvKiBlcnJvciAqLwogIH0gZWxzZSBpZiAoIGRvX21vZGUoIHUgKSAh
PSAwICkgewogICAgOyAvKiBlcnJvciAqLwogIH0gZWxzZSBpZiAoIGRpc2FzbV9vcGVyYW5kcygg
dSApICE9IDAgKSB7CiAgICA7IC8qIGVycm9yICovCiAgfSBlbHNlIGlmICggcmVzb2x2ZV9tbmVt
b25pYyggdSApICE9IDAgKSB7CiAgICA7IC8qIGVycm9yICovCiAgfQoKICAvKiBIYW5kbGUgZGVj
b2RlIGVycm9yLiAqLwogIGlmICggdS0+ZXJyb3IgKSB7CiAgICAvKiBjbGVhciBvdXQgdGhlIGRl
Y29kZSBkYXRhLiAqLwogICAgY2xlYXJfaW5zbiggdSApOwogICAgLyogbWFyayB0aGUgc2VxdWVu
Y2Ugb2YgYnl0ZXMgYXMgaW52YWxpZC4gKi8KICAgIHUtPml0YWJfZW50cnkgPSAmIGllX2ludmFs
aWQ7CiAgICB1LT5tbmVtb25pYyA9IHUtPml0YWJfZW50cnktPm1uZW1vbmljOwogIH0gCgogIHUt
Pmluc25fb2Zmc2V0ID0gdS0+cGM7IC8qIHNldCBvZmZzZXQgb2YgaW5zdHJ1Y3Rpb24gKi8KICB1
LT5pbnNuX2ZpbGwgPSAwOyAgIC8qIHNldCB0cmFuc2xhdGlvbiBidWZmZXIgaW5kZXggdG8gMCAq
LwogIHUtPnBjICs9IHUtPmlucF9jdHI7ICAgIC8qIG1vdmUgcHJvZ3JhbSBjb3VudGVyIGJ5IGJ5
dGVzIGRlY29kZWQgKi8KICBnZW5faGV4KCB1ICk7ICAgICAgIC8qIGdlbmVyYXRlIGhleCBjb2Rl
ICovCgogIC8qIHJldHVybiBudW1iZXIgb2YgYnl0ZXMgZGlzYXNzZW1ibGVkLiAqLwogIHJldHVy
biB1LT5pbnBfY3RyOwp9CgovKiB2aW06Y2luZGVudAogKiB2aW06dHM9NAogKiB2aW06c3c9NAog
KiB2aW06ZXhwYW5kdGFiCiAqLwoAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAa2RiL3g4Ni91ZGlzODYtMS43L3N5bi1hdHQuYwAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAADAwMDA2NjQAMDAwMjc1NgAwMDAyNzU2ADAwMDAwMDExMjU3ADExNzY1NDY1NTU2ADAx
NTQxNwAgMAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAB1c3RhciAg
AG1yYXRob3IAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAbXJhdGhvcgAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAvKiAtLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLQogKiBzeW4tYXR0LmMK
ICoKICogQ29weXJpZ2h0IChjKSAyMDA0LCAyMDA1LCAyMDA2IFZpdmVrIE1vaGFuIDx2aXZla0Bz
aWc5LmNvbT4KICogQWxsIHJpZ2h0cyByZXNlcnZlZC4gU2VlIChMSUNFTlNFKQogKiAtLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLQogKi8KCiNpbmNsdWRlICJ0eXBlcy5oIgojaW5jbHVkZSAiZXh0ZXJuLmgi
CiNpbmNsdWRlICJkZWNvZGUuaCIKI2luY2x1ZGUgIml0YWIuaCIKI2luY2x1ZGUgInN5bi5oIgoK
LyogLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0KICogb3ByX2Nhc3QoKSAtIFByaW50cyBhbiBvcGVyYW5k
IGNhc3QuCiAqIC0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tCiAqLwpzdGF0aWMgdm9pZCAKb3ByX2Nhc3Qo
c3RydWN0IHVkKiB1LCBzdHJ1Y3QgdWRfb3BlcmFuZCogb3ApCnsKICBzd2l0Y2gob3AtPnNpemUp
IHsKCWNhc2UgMTYgOiBjYXNlIDMyIDoKCQlta2FzbSh1LCAiKiIpOyAgIGJyZWFrOwoJZGVmYXVs
dDogYnJlYWs7CiAgfQp9CgovKiAtLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLQogKiBnZW5fb3BlcmFuZCgp
IC0gR2VuZXJhdGVzIGFzc2VtYmx5IG91dHB1dCBmb3IgZWFjaCBvcGVyYW5kLgogKiAtLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLQogKi8Kc3RhdGljIHZvaWQgCmdlbl9vcGVyYW5kKHN0cnVjdCB1ZCogdSwg
c3RydWN0IHVkX29wZXJhbmQqIG9wKQp7CiAgc3dpdGNoKG9wLT50eXBlKSB7CgljYXNlIFVEX09Q
X1JFRzoKCQlta2FzbSh1LCAiJSUlcyIsIHVkX3JlZ190YWJbb3AtPmJhc2UgLSBVRF9SX0FMXSk7
CgkJYnJlYWs7CgoJY2FzZSBVRF9PUF9NRU06CgkJaWYgKHUtPmJyX2Zhcikgb3ByX2Nhc3QodSwg
b3ApOwoJCWlmICh1LT5wZnhfc2VnKQoJCQlta2FzbSh1LCAiJSUlczoiLCB1ZF9yZWdfdGFiW3Ut
PnBmeF9zZWcgLSBVRF9SX0FMXSk7CgkJaWYgKG9wLT5vZmZzZXQgPT0gOCkgewoJCQlpZiAob3At
Pmx2YWwuc2J5dGUgPCAwKQoJCQkJbWthc20odSwgIi0weCV4IiwgKC1vcC0+bHZhbC5zYnl0ZSkg
JiAweGZmKTsKCQkJZWxzZQlta2FzbSh1LCAiMHgleCIsIG9wLT5sdmFsLnNieXRlKTsKCQl9IAoJ
CWVsc2UgaWYgKG9wLT5vZmZzZXQgPT0gMTYpIAoJCQlta2FzbSh1LCAiMHgleCIsIG9wLT5sdmFs
LnV3b3JkKTsKCQllbHNlIGlmIChvcC0+b2Zmc2V0ID09IDMyKSAKCQkJbWthc20odSwgIjB4JWx4
Iiwgb3AtPmx2YWwudWR3b3JkKTsKCQllbHNlIGlmIChvcC0+b2Zmc2V0ID09IDY0KSAKCQkJbWth
c20odSwgIjB4IiBGTVQ2NCAieCIsIG9wLT5sdmFsLnVxd29yZCk7CgoJCWlmIChvcC0+YmFzZSkK
CQkJbWthc20odSwgIiglJSVzIiwgdWRfcmVnX3RhYltvcC0+YmFzZSAtIFVEX1JfQUxdKTsKCQlp
ZiAob3AtPmluZGV4KSB7CgkJCWlmIChvcC0+YmFzZSkKCQkJCW1rYXNtKHUsICIsIik7CgkJCWVs
c2UgbWthc20odSwgIigiKTsKCQkJbWthc20odSwgIiUlJXMiLCB1ZF9yZWdfdGFiW29wLT5pbmRl
eCAtIFVEX1JfQUxdKTsKCQl9CgkJaWYgKG9wLT5zY2FsZSkKCQkJbWthc20odSwgIiwlZCIsIG9w
LT5zY2FsZSk7CgkJaWYgKG9wLT5iYXNlIHx8IG9wLT5pbmRleCkKCQkJbWthc20odSwgIikiKTsK
CQlicmVhazsKCgljYXNlIFVEX09QX0lNTToKCQlzd2l0Y2ggKG9wLT5zaXplKSB7CgkJCWNhc2Ug
IDg6IG1rYXNtKHUsICIkMHgleCIsIG9wLT5sdmFsLnVieXRlKTsgICAgYnJlYWs7CgkJCWNhc2Ug
MTY6IG1rYXNtKHUsICIkMHgleCIsIG9wLT5sdmFsLnV3b3JkKTsgICAgYnJlYWs7CgkJCWNhc2Ug
MzI6IG1rYXNtKHUsICIkMHglbHgiLCBvcC0+bHZhbC51ZHdvcmQpOyAgYnJlYWs7CgkJCWNhc2Ug
NjQ6IG1rYXNtKHUsICIkMHgiIEZNVDY0ICJ4Iiwgb3AtPmx2YWwudXF3b3JkKTsgYnJlYWs7CgkJ
CWRlZmF1bHQ6IGJyZWFrOwoJCX0KCQlicmVhazsKCgljYXNlIFVEX09QX0pJTU06CgkJc3dpdGNo
IChvcC0+c2l6ZSkgewoJCQljYXNlICA4OgoJCQkJbWthc20odSwgIjB4IiBGTVQ2NCAieCIsIHUt
PnBjICsgb3AtPmx2YWwuc2J5dGUpOyAKCQkJCWJyZWFrOwoJCQljYXNlIDE2OgoJCQkJbWthc20o
dSwgIjB4IiBGTVQ2NCAieCIsIHUtPnBjICsgb3AtPmx2YWwuc3dvcmQpOwoJCQkJYnJlYWs7CgkJ
CWNhc2UgMzI6CgkJCQlta2FzbSh1LCAiMHgiIEZNVDY0ICJ4IiwgdS0+cGMgKyBvcC0+bHZhbC5z
ZHdvcmQpOwoJCQkJYnJlYWs7CgkJCWRlZmF1bHQ6YnJlYWs7CgkJfQoJCWJyZWFrOwoKCWNhc2Ug
VURfT1BfUFRSOgoJCXN3aXRjaCAob3AtPnNpemUpIHsKCQkJY2FzZSAzMjoKCQkJCW1rYXNtKHUs
ICIkMHgleCwgJDB4JXgiLCBvcC0+bHZhbC5wdHIuc2VnLCAKCQkJCQlvcC0+bHZhbC5wdHIub2Zm
ICYgMHhGRkZGKTsKCQkJCWJyZWFrOwoJCQljYXNlIDQ4OgoJCQkJbWthc20odSwgIiQweCV4LCAk
MHglbHgiLCBvcC0+bHZhbC5wdHIuc2VnLCAKCQkJCQlvcC0+bHZhbC5wdHIub2ZmKTsKCQkJCWJy
ZWFrOwoJCX0KCQlicmVhazsKCQkJCglkZWZhdWx0OiByZXR1cm47CiAgfQp9CgovKiA9PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PQogKiB0cmFuc2xhdGVzIHRvIEFUJlQgc3ludGF4IAogKiA9PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PQogKi8KZXh0ZXJuIHZvaWQgCnVkX3RyYW5zbGF0ZV9hdHQoc3RydWN0IHVkICp1
KQp7CiAgaW50IHNpemUgPSAwOwoKICAvKiBjaGVjayBpZiBQX09TTyBwcmVmaXggaXMgdXNlZCAq
LwogIGlmICghIFBfT1NPKHUtPml0YWJfZW50cnktPnByZWZpeCkgJiYgdS0+cGZ4X29wcikgewoJ
c3dpdGNoICh1LT5kaXNfbW9kZSkgewoJCWNhc2UgMTY6IAoJCQlta2FzbSh1LCAibzMyICIpOwoJ
CQlicmVhazsKCQljYXNlIDMyOgoJCWNhc2UgNjQ6CiAJCQlta2FzbSh1LCAibzE2ICIpOwoJCQli
cmVhazsKCX0KICB9CgogIC8qIGNoZWNrIGlmIFBfQVNPIHByZWZpeCB3YXMgdXNlZCAqLwogIGlm
ICghIFBfQVNPKHUtPml0YWJfZW50cnktPnByZWZpeCkgJiYgdS0+cGZ4X2FkcikgewoJc3dpdGNo
ICh1LT5kaXNfbW9kZSkgewoJCWNhc2UgMTY6IAoJCQlta2FzbSh1LCAiYTMyICIpOwoJCQlicmVh
azsKCQljYXNlIDMyOgogCQkJbWthc20odSwgImExNiAiKTsKCQkJYnJlYWs7CgkJY2FzZSA2NDoK
IAkJCW1rYXNtKHUsICJhMzIgIik7CgkJCWJyZWFrOwoJfQogIH0KCiAgaWYgKHUtPnBmeF9sb2Nr
KQogIAlta2FzbSh1LCAgImxvY2sgIik7CiAgaWYgKHUtPnBmeF9yZXApCglta2FzbSh1LCAgInJl
cCAiKTsKICBpZiAodS0+cGZ4X3JlcG5lKQoJCW1rYXNtKHUsICAicmVwbmUgIik7CgogIC8qIHNw
ZWNpYWwgaW5zdHJ1Y3Rpb25zICovCiAgc3dpdGNoICh1LT5tbmVtb25pYykgewoJY2FzZSBVRF9J
cmV0ZjogCgkJbWthc20odSwgImxyZXQgIik7IAoJCWJyZWFrOwoJY2FzZSBVRF9JZGI6CgkJbWth
c20odSwgIi5ieXRlIDB4JXgiLCB1LT5vcGVyYW5kWzBdLmx2YWwudWJ5dGUpOwoJCXJldHVybjsK
CWNhc2UgVURfSWptcDoKCWNhc2UgVURfSWNhbGw6CgkJaWYgKHUtPmJyX2ZhcikgbWthc20odSwg
ICJsIik7CgkJbWthc20odSwgIiVzIiwgdWRfbG9va3VwX21uZW1vbmljKHUtPm1uZW1vbmljKSk7
CgkJYnJlYWs7CgljYXNlIFVEX0lib3VuZDoKCWNhc2UgVURfSWVudGVyOgoJCWlmICh1LT5vcGVy
YW5kWzBdLnR5cGUgIT0gVURfTk9ORSkKCQkJZ2VuX29wZXJhbmQodSwgJnUtPm9wZXJhbmRbMF0p
OwoJCWlmICh1LT5vcGVyYW5kWzFdLnR5cGUgIT0gVURfTk9ORSkgewoJCQlta2FzbSh1LCAiLCIp
OwoJCQlnZW5fb3BlcmFuZCh1LCAmdS0+b3BlcmFuZFsxXSk7CgkJfQoJCXJldHVybjsKCWRlZmF1
bHQ6CgkJbWthc20odSwgIiVzIiwgdWRfbG9va3VwX21uZW1vbmljKHUtPm1uZW1vbmljKSk7CiAg
fQoKICBpZiAodS0+YzEpCglzaXplID0gdS0+b3BlcmFuZFswXS5zaXplOwogIGVsc2UgaWYgKHUt
PmMyKQoJc2l6ZSA9IHUtPm9wZXJhbmRbMV0uc2l6ZTsKICBlbHNlIGlmICh1LT5jMykKCXNpemUg
PSB1LT5vcGVyYW5kWzJdLnNpemU7CgogIGlmIChzaXplID09IDgpCglta2FzbSh1LCAiYiIpOwog
IGVsc2UgaWYgKHNpemUgPT0gMTYpCglta2FzbSh1LCAidyIpOwogIGVsc2UgaWYgKHNpemUgPT0g
NjQpCiAJbWthc20odSwgInEiKTsKCiAgbWthc20odSwgIiAiKTsKCiAgaWYgKHUtPm9wZXJhbmRb
Ml0udHlwZSAhPSBVRF9OT05FKSB7CglnZW5fb3BlcmFuZCh1LCAmdS0+b3BlcmFuZFsyXSk7Cglt
a2FzbSh1LCAiLCAiKTsKICB9CgogIGlmICh1LT5vcGVyYW5kWzFdLnR5cGUgIT0gVURfTk9ORSkg
ewoJZ2VuX29wZXJhbmQodSwgJnUtPm9wZXJhbmRbMV0pOwoJbWthc20odSwgIiwgIik7CiAgfQoK
ICBpZiAodS0+b3BlcmFuZFswXS50eXBlICE9IFVEX05PTkUpCglnZW5fb3BlcmFuZCh1LCAmdS0+
b3BlcmFuZFswXSk7Cn0KAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAGtkYi94ODYvdWRpczg2LTEuNy9rZGJfZGlzLmMAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAwMDAw
NjY0ADAwMDI3NTYAMDAwMjc1NgAwMDAwMDAxNDMzNgAxMTc2NTQ2NTU1NgAwMTU0MjAAIDAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAdXN0YXIgIABtcmF0aG9yAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAG1yYXRob3IAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAALyoKICogQ29weXJpZ2h0IChDKSAyMDA5LCBNdWtlc2ggUmF0aG9yLCBPcmFjbGUg
Q29ycC4gIEFsbCByaWdodHMgcmVzZXJ2ZWQuCiAqCiAqIFRoaXMgcHJvZ3JhbSBpcyBmcmVlIHNv
ZnR3YXJlOyB5b3UgY2FuIHJlZGlzdHJpYnV0ZSBpdCBhbmQvb3IKICogbW9kaWZ5IGl0IHVuZGVy
IHRoZSB0ZXJtcyBvZiB0aGUgR05VIEdlbmVyYWwgUHVibGljCiAqIExpY2Vuc2UgdjIgYXMgcHVi
bGlzaGVkIGJ5IHRoZSBGcmVlIFNvZnR3YXJlIEZvdW5kYXRpb24uCiAqCiAqIFRoaXMgcHJvZ3Jh
bSBpcyBkaXN0cmlidXRlZCBpbiB0aGUgaG9wZSB0aGF0IGl0IHdpbGwgYmUgdXNlZnVsLAogKiBi
dXQgV0lUSE9VVCBBTlkgV0FSUkFOVFk7IHdpdGhvdXQgZXZlbiB0aGUgaW1wbGllZCB3YXJyYW50
eSBvZgogKiBNRVJDSEFOVEFCSUxJVFkgb3IgRklUTkVTUyBGT1IgQSBQQVJUSUNVTEFSIFBVUlBP
U0UuICBTZWUgdGhlIEdOVQogKiBHZW5lcmFsIFB1YmxpYyBMaWNlbnNlIGZvciBtb3JlIGRldGFp
bHMuCiAqCiAqIFlvdSBzaG91bGQgaGF2ZSByZWNlaXZlZCBhIGNvcHkgb2YgdGhlIEdOVSBHZW5l
cmFsIFB1YmxpYwogKiBMaWNlbnNlIGFsb25nIHdpdGggdGhpcyBwcm9ncmFtOyBpZiBub3QsIHdy
aXRlIHRvIHRoZQogKiBGcmVlIFNvZnR3YXJlIEZvdW5kYXRpb24sIEluYy4sIDU5IFRlbXBsZSBQ
bGFjZSAtIFN1aXRlIDMzMCwKICogQm9zdG9uLCBNQSAwMjExMTAtMTMwNywgVVNBLgogKi8KCiNp
bmNsdWRlIDx4ZW4vY29tcGlsZS5oPiAgICAgICAgICAgICAgICAvKiBmb3IgWEVOX1NVQlZFUlNJ
T04gKi8KI2luY2x1ZGUgIi4uLy4uL2luY2x1ZGUva2RiaW5jLmgiCiNpbmNsdWRlICJleHRlcm4u
aCIKCnN0YXRpYyB2b2lkICgqZGlzX3N5bnRheCkodWRfdCopID0gVURfU1lOX0FUVDsgLyogZGVm
YXVsdCBkaXMtYXNzZW1ibHkgc3ludGF4ICovCgpzdGF0aWMgc3RydWN0IHsgICAgICAgICAgICAg
ICAgICAgICAgICAgLyogaW5mbyBmb3Iga2RiX3JlYWRfYnl0ZV9mb3JfdWQoKSAqLwogICAga2Ri
dmFfdCBrdWRfaW5zdHJfYWRkcjsKICAgIGRvbWlkX3Qga3VkX2RvbWlkOwp9IGtkYl91ZF9yZF9p
bmZvOwoKLyogY2FsbGVkIHZpYSBmdW5jdGlvbiBwdHIgYnkgdWQgd2hlbiBkaXNhc3NlbWJsaW5n
LiAKICoga2RiIGluZm8gcGFzc2VkIHZpYSBrZGJfdWRfcmRfaW5mb3t9IAogKi8Kc3RhdGljIGlu
dAprZGJfcmVhZF9ieXRlX2Zvcl91ZChzdHJ1Y3QgdWQgKnVkcCkKewogICAga2RiYnl0X3QgYnl0
ZWJ1ZjsKICAgIGRvbWlkX3QgZG9taWQgPSBrZGJfdWRfcmRfaW5mby5rdWRfZG9taWQ7CiAgICBr
ZGJ2YV90IGFkZHIgPSBrZGJfdWRfcmRfaW5mby5rdWRfaW5zdHJfYWRkcjsKCiAgICBpZiAoa2Ri
X3JlYWRfbWVtKGFkZHIsICZieXRlYnVmLCAxLCBkb21pZCkgPT0gMSkgewogICAgICAgIGtkYl91
ZF9yZF9pbmZvLmt1ZF9pbnN0cl9hZGRyKys7CiAgICAgICAgS0RCR1AxKCJ1ZHJkOmFkZHI6JWx4
IGRvbWlkOiVkIGJ5dDoleFxuIiwgYWRkciwgZG9taWQsIGJ5dGVidWYpOwogICAgICAgIHJldHVy
biBieXRlYnVmOwogICAgfQogICAgS0RCR1AxKCJ1ZHJkOmFkZHI6JWx4IGRvbWlkOiVkIGVyclxu
IiwgYWRkciwgZG9taWQpOwogICAgcmV0dXJuIFVEX0VPSTsKfQoKLyogCiAqIGdpdmVuIGEgZG9t
aWQsIGNvbnZlcnQgYWRkciB0byBzeW1ib2wgYW5kIHByaW50IGl0IAogKiBFZzogZmZmZjgyOGM4
MDEyMzVlMjogaWRsZV9sb29wKzUyICAgICAgICAgICAgICAgICAgam1wICBpZGxlX2xvb3ArNTUK
ICogICAgQ2FsbGVkIHR3aWNlIGhlcmUgZm9yIGlkbGVfbG9vcC4gSW4gZmlyc3QgY2FzZSwgbmwg
aXMgbnVsbCwgCiAqICAgIGluIHRoZSBzZWNvbmQgY2FzZSBubCA9PSAnXG4nCiAqLwp2b2lkCmtk
Yl9wcm50X2FkZHIyc3ltKGRvbWlkX3QgZG9taWQsIGtkYnZhX3QgYWRkciwgY2hhciAqbmwpCnsK
ICAgIHVuc2lnbmVkIGxvbmcgc3osIG9mZnM7CiAgICBjaGFyIGJ1ZltLU1lNX05BTUVfTEVOKzFd
LCBwYnVmWzE1MF0sIHByZWZpeFs4XTsKICAgIGNoYXIgKnAgPSBidWY7CgogICAgcHJlZml4WzBd
PSdcMCc7CiAgICBpZiAoZG9taWQgIT0gRE9NSURfSURMRSkgewogICAgICAgIHNucHJpbnRmKHBy
ZWZpeCwgOCwgIiV4OiIsIGRvbWlkKTsKICAgICAgICBwID0ga2RiX2d1ZXN0X2FkZHIyc3ltKGFk
ZHIsIGRvbWlkLCAmb2Zmcyk7CiAgICB9IGVsc2UKICAgICAgICBzeW1ib2xzX2xvb2t1cChhZGRy
LCAmc3osICZvZmZzLCBidWYpOwoKICAgIHNucHJpbnRmKHBidWYsIDE1MCwgIiVzJXMrJWx4Iiwg
cHJlZml4LCBwLCBvZmZzKTsKICAgIGlmICgqbmwgIT0gJ1xuJykKICAgICAgICBrZGJwKCIlLTMw
cyVzIiwgcGJ1ZiwgbmwpOyAgLyogcHJpbnRzIG1vcmUgdGhhbiAzMCBpZiBuZWVkZWQgKi8KICAg
IGVsc2UKICAgICAgICBrZGJwKCIlcyVzIiwgcGJ1ZiwgbmwpOwoKfQoKc3RhdGljIGludAprZGJf
anVtcF9pbnN0cihlbnVtIHVkX21uZW1vbmljX2NvZGUgbW5lbW9uaWMpCnsKICAgIHJldHVybiAo
bW5lbW9uaWMgPj0gVURfSWpvICYmIG1uZW1vbmljIDw9IFVEX0lqbXApOwp9CgovKgogKiBwcmlu
dCBvbmUgaW5zdHI6IGZ1bmN0aW9uIHNvIHRoYXQgd2UgY2FuIHByaW50IG9mZnNldHMgb2Ygam1w
IGV0Yy4uIGFzCiAqICBzeW1ib2wrb2Zmc2V0IGluc3RlYWQgb2YganVzdCBhZGRyZXNzCiAqLwpz
dGF0aWMgdm9pZAprZGJfcHJpbnRfb25lX2luc3RyKHN0cnVjdCB1ZCAqdWRwLCBkb21pZF90IGRv
bWlkKQp7CiAgICBzaWduZWQgbG9uZyB2YWwgPSAwOwogICAgdWRfdHlwZV90IHR5cGUgPSB1ZHAt
Pm9wZXJhbmRbMF0udHlwZTsKCiAgICBpZiAoKHVkcC0+bW5lbW9uaWMgPT0gVURfSWNhbGwgfHwg
a2RiX2p1bXBfaW5zdHIodWRwLT5tbmVtb25pYykpICYmCiAgICAgICAgdHlwZSA9PSBVRF9PUF9K
SU1NKSB7CiAgICAgICAgCiAgICAgICAgaW50IHN6ID0gdWRwLT5vcGVyYW5kWzBdLnNpemU7CiAg
ICAgICAgY2hhciAqcCwgaWJ1Zls0MF0sICpxID0gaWJ1ZjsKICAgICAgICBrZGJ2YV90IGFkZHI7
CgogICAgICAgIGlmIChzeiA9PSA4KSB2YWwgPSB1ZHAtPm9wZXJhbmRbMF0ubHZhbC5zYnl0ZTsK
ICAgICAgICBlbHNlIGlmIChzeiA9PSAxNikgdmFsID0gdWRwLT5vcGVyYW5kWzBdLmx2YWwuc3dv
cmQ7CiAgICAgICAgZWxzZSBpZiAoc3ogPT0gMzIpIHZhbCA9IHVkcC0+b3BlcmFuZFswXS5sdmFs
LnNkd29yZDsKICAgICAgICBlbHNlIGlmIChzeiA9PSA2NCkgdmFsID0gdWRwLT5vcGVyYW5kWzBd
Lmx2YWwuc3F3b3JkOwogICAgICAgIGVsc2Uga2RicCgia2RiX3ByaW50X29uZV9pbnN0cjogSW52
YWwgc3o6eiVkXG4iLCBzeik7CgogICAgICAgIGFkZHIgPSB1ZHAtPnBjICsgdmFsOwogICAgICAg
IGZvcihwPXVkX2luc25fYXNtKHVkcCk7ICgqcT0qcCkgJiYgKnAhPScgJzsgcCsrLHErKyk7CiAg
ICAgICAgKnE9J1wwJzsKICAgICAgICBrZGJwKCIgJS00cyAiLCBpYnVmKTsgICAgLyogc3BhY2Ug
YmVmb3JlIGZvciBsb25nIGZ1bmMgbmFtZXMgKi8KICAgICAgICBrZGJfcHJudF9hZGRyMnN5bShk
b21pZCwgYWRkciwgIlxuIik7CiAgICB9IGVsc2UKICAgICAgICBrZGJwKCIgJS0yNHNcbiIsIHVk
X2luc25fYXNtKHVkcCkpOwojaWYgMAogICAga2RicCgibW5lbW9uaWM6eiVkICIsIHVkcC0+bW5l
bW9uaWMpOwogICAgaWYgKHR5cGUgPT0gVURfT1BfQ09OU1QpIGtkYnAoInR5cGUgaXMgY29uc3Rc
biIpOwogICAgZWxzZSBpZiAodHlwZSA9PSBVRF9PUF9KSU1NKSBrZGJwKCJ0eXBlIGlzIEpJTU1c
biIpOwogICAgZWxzZSBpZiAodHlwZSA9PSBVRF9PUF9JTU0pIGtkYnAoInR5cGUgaXMgSU1NXG4i
KTsKICAgIGVsc2UgaWYgKHR5cGUgPT0gVURfT1BfUFRSKSBrZGJwKCJ0eXBlIGlzIFBUUlxuIik7
CiNlbmRpZgp9CgpzdGF0aWMgdm9pZAprZGJfc2V0dXBfdWQoc3RydWN0IHVkICp1ZHAsIGtkYnZh
X3QgYWRkciwgZG9taWRfdCBkb21pZCkKewogICAgaW50IGJpdG5lc3MgPSBrZGJfZ3Vlc3RfYml0
bmVzcyhkb21pZCk7CiAgICB1aW50IHZlbmRvciA9IChib290X2NwdV9kYXRhLng4Nl92ZW5kb3Ig
PT0gWDg2X1ZFTkRPUl9BTUQpID8KICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgIFVEX1ZFTkRPUl9BTUQgOiBVRF9WRU5ET1JfSU5URUw7CgogICAgS0RCR1AxKCJzZXR1
cF91ZDpkb21pZDolZCBiaXRuZXNzOiVkIGFkZHI6JWx4XG4iLCBkb21pZCwgYml0bmVzcywgYWRk
cik7CiAgICB1ZF9pbml0KHVkcCk7CiAgICB1ZF9zZXRfbW9kZSh1ZHAsIGtkYl9ndWVzdF9iaXRu
ZXNzKGRvbWlkKSk7CiAgICB1ZF9zZXRfc3ludGF4KHVkcCwgZGlzX3N5bnRheCk7IAogICAgdWRf
c2V0X3ZlbmRvcih1ZHAsIHZlbmRvcik7ICAgICAgICAgICAvKiBIVk06IHZteC9zdm0gZGlmZmVy
ZW50IGluc3RycyovCiAgICB1ZF9zZXRfcGModWRwLCBhZGRyKTsgICAgICAgICAgICAgICAgIC8q
IGZvciBudW1iZXJzIHByaW50ZWQgb24gbGVmdCAqLwogICAgdWRfc2V0X2lucHV0X2hvb2sodWRw
LCBrZGJfcmVhZF9ieXRlX2Zvcl91ZCk7CiAgICBrZGJfdWRfcmRfaW5mby5rdWRfaW5zdHJfYWRk
ciA9IGFkZHI7CiAgICBrZGJfdWRfcmRfaW5mby5rdWRfZG9taWQgPSBkb21pZDsKfQoKLyoKICog
Z2l2ZW4gYW4gYWRkciwgcHJpbnQgZ2l2ZW4gbnVtYmVyIG9mIGluc3RydWN0aW9ucy4KICogUmV0
dXJuczogYWRkcmVzcyBvZiBuZXh0IGluc3RydWN0aW9uIGluIHRoZSBzdHJlYW0KICovCmtkYnZh
X3QKa2RiX3ByaW50X2luc3RyKGtkYnZhX3QgYWRkciwgbG9uZyBudW0sIGRvbWlkX3QgZG9taWQp
CnsKICAgIHN0cnVjdCB1ZCB1ZF9zOwoKICAgIEtEQkdQMSgicHJpbnRfaW5zdHI6YWRkcjoweCVs
eCBudW06JWxkIGRvbWlkOiV4XG4iLCBhZGRyLCBudW0sIGRvbWlkKTsKCiAgICBrZGJfc2V0dXBf
dWQoJnVkX3MsIGFkZHIsIGRvbWlkKTsKICAgIHdoaWxlKG51bS0tKSB7CiAgICAgICAgaWYgKHVk
X2Rpc2Fzc2VtYmxlKCZ1ZF9zKSkgewogICAgICAgICAgICB1aW50NjRfdCBwYyA9IHVkX2luc25f
b2ZmKCZ1ZF9zKTsKICAgICAgICAgICAgLyoga2RicCgiJTA4eDogIiwoaW50KXBjKTsgKi8KICAg
ICAgICAgICAga2RicCgiJTAxNmx4OiAiLCBwYyk7CiAgICAgICAgICAgIGtkYl9wcm50X2FkZHIy
c3ltKGRvbWlkLCBwYywgIiIpOwogICAgICAgICAgICBrZGJfcHJpbnRfb25lX2luc3RyKCZ1ZF9z
LCBkb21pZCk7CiAgICAgICAgfSBlbHNlCiAgICAgICAgICAgIGtkYnAoIktEQjpDb3VsZG4ndCBk
aXNhc3NlbWJsZSBQQzoweCVseFxuIiwgYWRkcik7CiAgICAgICAgICAgIC8qIGZvciBzdGFjayBy
ZWFkcywgZG9uJ3QgYWx3YXlzIGRpc3BsYXkgZXJyb3IgKi8KICAgIH0KICAgIEtEQkdQMSgicHJp
bnRfaW5zdHI6a3VkYWRkcjoweCVseFxuIiwga2RiX3VkX3JkX2luZm8ua3VkX2luc3RyX2FkZHIp
OwogICAgcmV0dXJuIGtkYl91ZF9yZF9pbmZvLmt1ZF9pbnN0cl9hZGRyOwp9Cgp2b2lkCmtkYl9k
aXNwbGF5X3BjKHN0cnVjdCBjcHVfdXNlcl9yZWdzICpyZWdzKQp7ICAgCiAgICBkb21pZF90IGRv
bWlkOwogICAgc3RydWN0IGNwdV91c2VyX3JlZ3MgcmVnczEgPSAqcmVnczsKICAgIGRvbWlkID0g
Z3Vlc3RfbW9kZShyZWdzKSA/IGN1cnJlbnQtPmRvbWFpbi0+ZG9tYWluX2lkIDogRE9NSURfSURM
RTsKCiAgICByZWdzMS5LREJJUCA9IHJlZ3MtPktEQklQOwogICAga2RiX3ByaW50X2luc3RyKHJl
Z3MxLktEQklQLCAxLCBkb21pZCk7Cn0KCi8qIGNoZWNrIGlmIHRoZSBpbnN0ciBhdCB0aGUgYWRk
ciBpcyBjYWxsIGluc3RydWN0aW9uCiAqIFJFVFVSTlM6IHNpemUgb2YgdGhlIGluc3RyIGlmIGl0
J3MgYSBjYWxsIGluc3RyLCBlbHNlIDAKICovCmludAprZGJfY2hlY2tfY2FsbF9pbnN0cihkb21p
ZF90IGRvbWlkLCBrZGJ2YV90IGFkZHIpCnsKICAgIHN0cnVjdCB1ZCB1ZF9zOwogICAgaW50IHN6
OwoKICAgIGtkYl9zZXR1cF91ZCgmdWRfcywgYWRkciwgZG9taWQpOwogICAgaWYgKChzej11ZF9k
aXNhc3NlbWJsZSgmdWRfcykpICYmIHVkX3MubW5lbW9uaWMgPT0gVURfSWNhbGwpCiAgICAgICAg
cmV0dXJuIChzeik7CiAgICByZXR1cm4gMDsKfQoKLyogdG9nZ2xlIEFUVCBhbmQgSW50ZWwgc3lu
dGF4ZXMgKi8Kdm9pZAprZGJfdG9nZ2xlX2Rpc19zeW50YXgodm9pZCkKewogICAgaWYgKGRpc19z
eW50YXggPT0gVURfU1lOX0lOVEVMKSB7CiAgICAgICAgZGlzX3N5bnRheCA9IFVEX1NZTl9BVFQ7
CiAgICAgICAga2RicCgiZGlzIHN5bnRheCBub3cgc2V0IHRvIEFUVCAoR2FzKVxuIik7CiAgICB9
IGVsc2UgewogICAgICAgIGRpc19zeW50YXggPSBVRF9TWU5fSU5URUw7CiAgICAgICAga2RicCgi
ZGlzIHN5bnRheCBub3cgc2V0IHRvIEludGVsIChOQVNNKVxuIik7CiAgICB9Cn0KAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABrZGIv
eDg2L2tkYl93cC5jAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAMDAwMDY2NAAwMDAyNzU2ADAw
MDI3NTYAMDAwMDAwMjA1MzYAMTE3NjU0NjU1NTYAMDEzNjQxACAwAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHVzdGFyICAAbXJhdGhvcgAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAABtcmF0aG9yAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAC8qCiAq
IENvcHlyaWdodCAoQykgMjAwOSwgTXVrZXNoIFJhdGhvciwgT3JhY2xlIENvcnAuICBBbGwgcmln
aHRzIHJlc2VydmVkLgogKgogKiBUaGlzIHByb2dyYW0gaXMgZnJlZSBzb2Z0d2FyZTsgeW91IGNh
biByZWRpc3RyaWJ1dGUgaXQgYW5kL29yCiAqIG1vZGlmeSBpdCB1bmRlciB0aGUgdGVybXMgb2Yg
dGhlIEdOVSBHZW5lcmFsIFB1YmxpYwogKiBMaWNlbnNlIHYyIGFzIHB1Ymxpc2hlZCBieSB0aGUg
RnJlZSBTb2Z0d2FyZSBGb3VuZGF0aW9uLgogKgogKiBUaGlzIHByb2dyYW0gaXMgZGlzdHJpYnV0
ZWQgaW4gdGhlIGhvcGUgdGhhdCBpdCB3aWxsIGJlIHVzZWZ1bCwKICogYnV0IFdJVEhPVVQgQU5Z
IFdBUlJBTlRZOyB3aXRob3V0IGV2ZW4gdGhlIGltcGxpZWQgd2FycmFudHkgb2YKICogTUVSQ0hB
TlRBQklMSVRZIG9yIEZJVE5FU1MgRk9SIEEgUEFSVElDVUxBUiBQVVJQT1NFLiAgU2VlIHRoZSBH
TlUKICogR2VuZXJhbCBQdWJsaWMgTGljZW5zZSBmb3IgbW9yZSBkZXRhaWxzLgogKgogKiBZb3Ug
c2hvdWxkIGhhdmUgcmVjZWl2ZWQgYSBjb3B5IG9mIHRoZSBHTlUgR2VuZXJhbCBQdWJsaWMKICog
TGljZW5zZSBhbG9uZyB3aXRoIHRoaXMgcHJvZ3JhbTsgaWYgbm90LCB3cml0ZSB0byB0aGUKICog
RnJlZSBTb2Z0d2FyZSBGb3VuZGF0aW9uLCBJbmMuLCA1OSBUZW1wbGUgUGxhY2UgLSBTdWl0ZSAz
MzAsCiAqIEJvc3RvbiwgTUEgMDIxMTEwLTEzMDcsIFVTQS4KICovCgojaW5jbHVkZSAiLi4vaW5j
bHVkZS9rZGJpbmMuaCIKCiNpZiAwCiNkZWZpbmUgRFI2X0JUICAweDAwMDA4MDAwCiNkZWZpbmUg
RFI2X0JTICAweDAwMDA0MDAwCiNkZWZpbmUgRFI2X0JEICAweDAwMDAyMDAwCiNlbmRpZgojZGVm
aW5lIERSNl9CMyAgMHgwMDAwMDAwOAojZGVmaW5lIERSNl9CMiAgMHgwMDAwMDAwNAojZGVmaW5l
IERSNl9CMSAgMHgwMDAwMDAwMgojZGVmaW5lIERSNl9CMCAgMHgwMDAwMDAwMQoKI2RlZmluZSBL
REJfTUFYV1AgNCAgICAgICAgICAgICAgICAgICAgICAgICAgLyogRFIwIHRocnUgRFIzICovCgpz
dHJ1Y3Qga2RiX3dwIHsKICAgIGtkYm1hX3QgIHdwX2FkZHI7CiAgICBpbnQgICAgICB3cF9yd2Zs
YWc7CiAgICBpbnQgICAgICB3cF9sZW47CiAgICBpbnQgICAgICB3cF9kZWxldGVkOyAgICAgICAg
ICAgICAgICAgICAgIC8qIHBlbmRpbmcgZGVsZXRlICovCn07CnN0YXRpYyBzdHJ1Y3Qga2RiX3dw
IGtkYl93cGFbS0RCX01BWFdQXTsKCi8qIGZvbGxvd2luZyBiZWNhdXNlIHZtY3MgaGFzIGl0J3Mg
b3duIGRyNy4gd2hlbiB2bWNzIHJ1bnMsIGl0IG1lc3NlcyB1cCB0aGUKICogbmF0aXZlIGRyNyBz
byB3ZSBuZWVkIHRvIHNhdmUvcmVzdG9yZSBpdCAqLwp1bnNpZ25lZCBsb25nIGtkYl9kcjc7CgoK
LyogU2V0IEcwLUczIGJpdHMgaW4gRFI3LiB0aGlzIGRvZXMgZ2xvYmFsIGVuYWJsZSBvZiB0aGUg
Y29ycmVzcG9uZGluZyB3cCAqLwpzdGF0aWMgdm9pZAprZGJfc2V0X2d4X2luX2RyNyhpbnQgcmVn
bm8sIGtkYm1hX3QgKmRyN3ApCnsKICAgIGlmIChyZWdubyA9PSAwKQogICAgICAgICpkcjdwID0g
KmRyN3AgfCAweDI7CiAgICBlbHNlIGlmIChyZWdubyA9PSAxKQogICAgICAgICpkcjdwID0gKmRy
N3AgfCAweDg7CiAgICBlbHNlIGlmIChyZWdubyA9PSAyKQogICAgICAgICpkcjdwID0gKmRyN3Ag
fCAweDIwOwogICAgZWxzZSBpZiAocmVnbm8gPT0gMykKICAgICAgICAqZHI3cCA9ICpkcjdwIHwg
MHg4MDsKfQoKLyogU2V0IExFTjAgLSBMRU4zIHBhaXIgYml0cyBpbiBEUjcgKGxlbiBzaG91bGQg
YmUgMSAyIDQgb3IgOCkgKi8Kc3RhdGljIHZvaWQKa2RiX3NldF9sZW5faW5fZHI3KGludCByZWdu
bywga2RibWFfdCAqZHI3cCwgaW50IGxlbikKewogICAgaW50IGxlbmJpdHMgPSAobGVuID09IDgp
ID8gMiA6IGxlbi0xOwoKICAgICpkcjdwICY9IH4oMHgzIDw8ICgxOCArIDQqcmVnbm8pKTsKICAg
ICpkcjdwIHw9ICgodWxvbmcpKGxlbmJpdHMgJiAweDMpIDw8ICgxOCArIDQqcmVnbm8pKTsKfQoK
c3RhdGljIHZvaWQKa2RiX3NldF9kcjdfcncoaW50IHJlZ25vLCBrZGJtYV90ICpkcjdwLCBpbnQg
cncpCnsKICAgICpkcjdwICY9IH4oMHgzIDw8ICgxNiArIDQqcmVnbm8pKTsKICAgICpkcjdwIHw9
ICgodWxvbmcpKHJ3ICYgMHgzKSkgPDwgKDE2ICsgNCpyZWdubyk7Cn0KCi8qIGdldCB2YWx1ZSBv
ZiBhIGRlYnVnIHJlZ2lzdGVyOiBEUjAtRFIzIERSNiBEUjcuIG90aGVyIHZhbHVlcyByZXR1cm4g
MCAqLwprZGJtYV90CmtkYl9yZF9kYmdyZWcoaW50IHJlZ251bSkKewogICAga2RibWFfdCBjb250
ZW50cyA9IDA7CgogICAgaWYgKHJlZ251bSA9PSAwKQogICAgICAgIF9fYXNtX18gKCJtb3ZxICUl
ZGIwLCUwXG5cdCI6Ij1yIihjb250ZW50cykpOwogICAgZWxzZSBpZiAocmVnbnVtID09IDEpCiAg
ICAgICAgX19hc21fXyAoIm1vdnEgJSVkYjEsJTBcblx0IjoiPXIiKGNvbnRlbnRzKSk7CiAgICBl
bHNlIGlmIChyZWdudW0gPT0gMikKICAgICAgICBfX2FzbV9fICgibW92cSAlJWRiMiwlMFxuXHQi
OiI9ciIoY29udGVudHMpKTsKICAgIGVsc2UgaWYgKHJlZ251bSA9PSAzKQogICAgICAgIF9fYXNt
X18gKCJtb3ZxICUlZGIzLCUwXG5cdCI6Ij1yIihjb250ZW50cykpOwogICAgZWxzZSBpZiAocmVn
bnVtID09IDYpCiAgICAgICAgX19hc21fXyAoIm1vdnEgJSVkYjYsJTBcblx0IjoiPXIiKGNvbnRl
bnRzKSk7CiAgICBlbHNlIGlmIChyZWdudW0gPT0gNykKICAgICAgICBfX2FzbV9fICgibW92cSAl
JWRiNywlMFxuXHQiOiI9ciIoY29udGVudHMpKTsKCiAgICByZXR1cm4gY29udGVudHM7Cn0KCnN0
YXRpYyB2b2lkCmtkYl93cl9kYmdyZWcoaW50IHJlZ251bSwga2RibWFfdCBjb250ZW50cykKewog
ICAgaWYgKHJlZ251bSA9PSAwKQogICAgICAgIF9fYXNtX18gKCJtb3ZxICUwLCUlZGIwXG5cdCI6
OiJyIihjb250ZW50cykpOwogICAgZWxzZSBpZiAocmVnbnVtID09IDEpCiAgICAgICAgX19hc21f
XyAoIm1vdnEgJTAsJSVkYjFcblx0Ijo6InIiKGNvbnRlbnRzKSk7CiAgICBlbHNlIGlmIChyZWdu
dW0gPT0gMikKICAgICAgICBfX2FzbV9fICgibW92cSAlMCwlJWRiMlxuXHQiOjoiciIoY29udGVu
dHMpKTsKICAgIGVsc2UgaWYgKHJlZ251bSA9PSAzKQogICAgICAgIF9fYXNtX18gKCJtb3ZxICUw
LCUlZGIzXG5cdCI6OiJyIihjb250ZW50cykpOwogICAgZWxzZSBpZiAocmVnbnVtID09IDYpCiAg
ICAgICAgX19hc21fXyAoIm1vdnEgJTAsJSVkYjZcblx0Ijo6InIiKGNvbnRlbnRzKSk7CiAgICBl
bHNlIGlmIChyZWdudW0gPT0gNykKICAgICAgICBfX2FzbV9fICgibW92cSAlMCwlJWRiN1xuXHQi
OjoiciIoY29udGVudHMpKTsKfQoKc3RhdGljIHZvaWQKa2RiX3ByaW50X3dwX2luZm8oY2hhciAq
c3RycCwgaW50IGlkeCkKewogICAga2RicCgiJXNbJWRdOiUwMTZseCBsZW46JWQgIiwgc3RycCwg
aWR4LCBrZGJfd3BhW2lkeF0ud3BfYWRkciwKICAgICAgICAga2RiX3dwYVtpZHhdLndwX2xlbik7
CiAgICBpZiAoa2RiX3dwYVtpZHhdLndwX3J3ZmxhZyA9PSAxKQogICAgICAgIGtkYnAoIm9uIGRh
dGEgd3JpdGUgb25seVxuIik7CiAgICBlbHNlIGlmIChrZGJfd3BhW2lkeF0ud3BfcndmbGFnID09
IDIpCiAgICAgICAga2RicCgib24gSU8gcmVhZC93cml0ZVxuIik7CiAgICBlbHNlIAogICAgICAg
IGtkYnAoIm9uIGRhdGEgcmVhZC93cml0ZVxuIik7Cn0KCi8qCiAqIFJldHVybnMgOiAwIGlmIG5v
dCBvbmUgb2Ygb3VycwogKiAgICAgICAgICAgMSBpZiBvbmUgb2Ygb3VycwogKi8KaW50CmtkYl9j
aGVja193YXRjaHBvaW50cyhzdHJ1Y3QgY3B1X3VzZXJfcmVncyAqcmVncykKewogICAgaW50IHdw
bnVtOwogICAga2RibWFfdCBkcjYgPSBrZGJfcmRfZGJncmVnKDYpOwoKICAgIEtEQkdQMSgiY2hl
Y2tfd3A6IElQOiVseCBFRkxBR1M6JWx4XG4iLCByZWdzLT5yaXAsIHJlZ3MtPnJmbGFncyk7CiAg
ICBpZiAoZHI2ICYgRFI2X0IwKQogICAgICAgIHdwbnVtID0gMDsKICAgIGVsc2UgaWYgKGRyNiAm
IERSNl9CMSkKICAgICAgICB3cG51bSA9IDE7CiAgICBlbHNlIGlmIChkcjYgJiBEUjZfQjIpCiAg
ICAgICAgd3BudW0gPSAyOwogICAgZWxzZSBpZiAoZHI2ICYgRFI2X0IzKQogICAgICAgIHdwbnVt
ID0gMzsKICAgIGVsc2UKICAgICAgICByZXR1cm4gMDsKCiAgICBrZGJfcHJpbnRfd3BfaW5mbygi
V2F0Y2hwb2ludCAiLCB3cG51bSk7CiAgICByZXR1cm4gMTsKfQoKLyogc2V0IGEgd2F0Y2hwb2lu
dCBhdCBhIGdpdmVuIGFkZHJlc3MgCiAqIFByZUNvbmRpdGlvbjogYWRkciAhPSAwICovCnN0YXRp
YyB2b2lkCmtkYl9zZXRfd3Aoa2RidmFfdCBhZGRyLCBpbnQgcndmbGFnLCBpbnQgbGVuKQp7CiAg
ICBpbnQgcmVnbm87CgogICAgZm9yIChyZWdubz0wOyByZWdubyA8IEtEQl9NQVhXUDsgcmVnbm8r
KykgewogICAgICAgIGlmIChrZGJfd3BhW3JlZ25vXS53cF9hZGRyID09IGFkZHIgJiYgIWtkYl93
cGFbcmVnbm9dLndwX2RlbGV0ZWQpIHsKICAgICAgICAgICAga2RicCgiV2F0Y2hwb2ludCBhbHJl
YWR5IHNldFxuIik7CiAgICAgICAgICAgIHJldHVybjsKICAgICAgICB9CiAgICAgICAgaWYgKGtk
Yl93cGFbcmVnbm9dLndwX2RlbGV0ZWQpCiAgICAgICAgICAgIG1lbXNldCgma2RiX3dwYVtyZWdu
b10sIDAsIHNpemVvZihrZGJfd3BhW3JlZ25vXSkpOwogICAgfQogICAgZm9yIChyZWdubz0wOyBy
ZWdubyA8IEtEQl9NQVhXUCAmJiBrZGJfd3BhW3JlZ25vXS53cF9hZGRyOyByZWdubysrKTsKICAg
IGlmIChyZWdubyA+PSBLREJfTUFYV1ApIHsKICAgICAgICBrZGJwKCJ3YXRjaHBvaW50IHRhYmxl
IGZ1bGwuIGxpbWl0OiVkXG4iLCBLREJfTUFYV1ApOwogICAgICAgIHJldHVybjsKICAgIH0KICAg
IGtkYl93cGFbcmVnbm9dLndwX2FkZHIgPSBhZGRyOwogICAga2RiX3dwYVtyZWdub10ud3Bfcndm
bGFnID0gcndmbGFnOwogICAga2RiX3dwYVtyZWdub10ud3BfbGVuID0gbGVuOwogICAga2RiX3By
aW50X3dwX2luZm8oIldhdGNocG9pbnQgc2V0ICIsIHJlZ25vKTsKfQoKLyogd3JpdGUgcmVnIERS
MC0zIHdpdGggYWRkcmVzcy4gVXBkYXRlIGNvcnJlc3BvbmRpbmcgYml0cyBpbiBEUjcgKi8Kc3Rh
dGljIHZvaWQKa2RiX2luc3RhbGxfd2F0Y2hwb2ludChpbnQgcmVnbm8sIGtkYm1hX3QgKmRyN3Ap
CnsKICAgIGtkYl9zZXRfZ3hfaW5fZHI3KHJlZ25vLCBkcjdwKTsKICAgIGtkYl9zZXRfbGVuX2lu
X2RyNyhyZWdubywgZHI3cCwga2RiX3dwYVtyZWdub10ud3BfbGVuKTsgCiAgICBrZGJfc2V0X2Ry
N19ydyhyZWdubywgZHI3cCwga2RiX3dwYVtyZWdub10ud3BfcndmbGFnKTsKICAgIGtkYl93cl9k
YmdyZWcocmVnbm8sIGtkYl93cGFbcmVnbm9dLndwX2FkZHIpOwoKICAgIEtEQkdQMSgiY2NwdTol
ZCBpbnN0YWxsZWQgd3AuIGFkZHI6JWx4IHJ3OiV4IGxlbjoleCBkcjc6JTAxNmx4XG4iLAogICAg
ICAgICAgIHNtcF9wcm9jZXNzb3JfaWQoKSwga2RiX3dwYVtyZWdub10ud3BfYWRkciwgCiAgICAg
ICAgICAga2RiX3dwYVtyZWdub10ud3BfcndmbGFnLCBrZGJfd3BhW3JlZ25vXS53cF9sZW4sICpk
cjdwKTsKfQoKLyogY2xlYXIgRzAtRzMgYml0cyBpbiBEUjcgZm9yIGdpdmVuIERSMC0zICovCnN0
YXRpYyB2b2lkCmtkYl9jbGVhcl9kcjdfZ3goaW50IHJlZ25vLCBrZGJtYV90ICpkcjdwKQp7CiAg
ICBpZiAocmVnbm8gPT0gMCkKICAgICAgICAqZHI3cCA9ICpkcjdwICYgfjB4MjsKICAgIGVsc2Ug
aWYgKHJlZ25vID09IDEpCiAgICAgICAgKmRyN3AgPSAqZHI3cCAmIH4weDg7CiAgICBlbHNlIGlm
IChyZWdubyA9PSAyKQogICAgICAgICpkcjdwID0gKmRyN3AgJiB+MHgyMDsKICAgIGVsc2UgaWYg
KHJlZ25vID09IDMpCiAgICAgICAgKmRyN3AgPSAqZHI3cCAmIH4weDgwOwp9CgovKiB1cGRhdGUg
ZHI3IG9uY2UsIGFzIGl0J3Mgc2xvdyB0byB1cGRhdGUgZGVidWcgcmVncyBhbmQgY3B1J3Mgd2ls
bCBzdGlsbCBiZSAKICogcGF1c2VkIHdoZW4gbGVhdmluZyBrZGIuCiAqCiAqIEp1c3QgbGVhdmUg
RFIwLTMgY2xvYmJlcmVkIGJ1dCByZW1vdmUgYml0cyBmcm9tIERSNyB0byBkaXNhYmxlIHdwIAog
Ki8Kdm9pZAprZGJfaW5zdGFsbF93YXRjaHBvaW50cyh2b2lkKQp7CiAgICBpbnQgcmVnbm87CiAg
ICBrZGJtYV90IGRyNyA9IGtkYl9yZF9kYmdyZWcoNyk7CgogICAgZm9yIChyZWdubz0wOyByZWdu
byA8IEtEQl9NQVhXUDsgcmVnbm8rKykgewogICAgICAgIC8qIGRvIG5vdCBjbGVhciB3cF9kZWxl
dGVkIGhlcmUgYXMgYWxsIGNwdXMgbXVzdCBjbGVhciB3cHMgKi8KICAgICAgICBpZiAoa2RiX3dw
YVtyZWdub10ud3BfZGVsZXRlZCkgewogICAgICAgICAgICBrZGJfY2xlYXJfZHI3X2d4KHJlZ25v
LCAmZHI3KTsKICAgICAgICAgICAgY29udGludWU7CiAgICAgICAgfQogICAgICAgIGlmIChrZGJf
d3BhW3JlZ25vXS53cF9hZGRyKQogICAgICAgICAgICBrZGJfaW5zdGFsbF93YXRjaHBvaW50KHJl
Z25vLCAmZHI3KTsKICAgIH0KICAgIC8qIGFsd2F5cyBjbGVhciBEUjYgd2hlbiBsZWF2aW5nICov
CiAgICBrZGJfd3JfZGJncmVnKDYsIDApOwogICAga2RiX3dyX2RiZ3JlZyg3LCBkcjcpOwoKICAg
IGlmIChkcjcgJiBEUjdfQUNUSVZFX01BU0spCiAgICAgICAga2RiX2RyNyA9IGRyNzsKICAgIGVs
c2UKICAgICAgICBrZGJfZHI3ID0gMDsKI2lmIDAKICAgIGZvcihkcD1kb21haW5fbGlzdDsgZHA7
IGRwPWRwLT5uZXh0X2luX2xpc3QpIHsKICAgICAgICBzdHJ1Y3QgdmNwdSAqdnA7CiAgICAgICAg
Zm9yX2VhY2hfdmNwdShkcCwgdnApIHsKICAgICAgICAgICAgZm9yIChyZWdubz0wOyByZWdubyA8
IEtEQl9NQVhXUDsgcmVnbm8rKykKICAgICAgICAgICAgICAgIHZwLT5hcmNoLmd1ZXN0X2NvbnRl
eHQuZGVidWdyZWdbcmVnbm9dID0ga2RiX3dwYVtyZWdub10ud3BfYWRkcjsKCiAgICAgICAgICAg
IHZwLT5hcmNoLmd1ZXN0X2NvbnRleHQuZGVidWdyZWdbNl0gPSAwOwogICAgICAgICAgICB2cC0+
YXJjaC5ndWVzdF9jb250ZXh0LmRlYnVncmVnWzddID0gZHI3OwogICAgICAgICAgICBLREJHUCgi
a2RiX2luc3RhbGxfd2F0Y2hwb2ludHMoKTogdjolcCBkcjc6JWx4XG4iLCB2cCwgZHI3KTsKICAg
ICAgICAgICAgLyogaHZtX3NldF9pbmZvX2d1ZXN0KHZwKTs6IENhbid0IGJlY2F1c2UgY2FuJ3Qg
dm1jc19lbnRlciBpbiBrZGIgKi8KICAgICAgICB9CiAgICB9CiNlbmRpZgp9CgovKiBjbGVhciB3
YXRjaHBvaW50L3MuIHdwbnVtID09IC0xIHRvIGNsZWFyIGFsbCB3YXRjaHBvaW50cyAqLwp2b2lk
CmtkYl9jbGVhcl93cHMoaW50IHdwbnVtKQp7CiAgICBpbnQgaTsKCiAgICBpZiAod3BudW0gPj0g
S0RCX01BWFdQKSB7CiAgICAgICAga2RicCgiSW52YWxpZCB3cG51bSAlZFxuIiwgd3BudW0pOwog
ICAgICAgIHJldHVybjsKICAgIH0KICAgIGlmICh3cG51bSA+PTApIHsKICAgICAgICBpZiAoa2Ri
X3dwYVt3cG51bV0ud3BfYWRkcikgewogICAgICAgICAgICBrZGJfd3BhW3dwbnVtXS53cF9kZWxl
dGVkID0gMTsKICAgICAgICAgICAga2RiX3ByaW50X3dwX2luZm8oIkRlbGV0ZWQgd2F0Y2hwb2lu
dCIsIHdwbnVtKTsKICAgICAgICB9IGVsc2UKICAgICAgICAgICAga2RicCgid2F0Y2hwb2ludCAl
ZCBub3Qgc2V0XG4iLCB3cG51bSk7CiAgICAgICAgcmV0dXJuOwogICAgfQogICAgZm9yIChpPTA7
IGkgPCBLREJfTUFYV1A7IGkrKykgewogICAgICAgIGlmIChrZGJfd3BhW2ldLndwX2FkZHIpIHsK
ICAgICAgICAgICAga2RiX3dwYVtpXS53cF9kZWxldGVkID0gMTsKICAgICAgICAgICAga2RiX3By
aW50X3dwX2luZm8oIkRlbGV0ZWQgd2F0Y2hwb2ludCIsIGkpOwogICAgICAgIH0KICAgIH0KfQoK
LyogZGlzcGxheSBhbnkgd2F0Y2hwb2ludHMgdGhhdCBhcmUgc2V0ICovCnN0YXRpYyB2b2lkCmtk
Yl9kaXNwbGF5X3dwcyh2b2lkKQp7CiAgICBpbnQgaTsKICAgIGZvciAoaT0wOyBpIDwgS0RCX01B
WFdQOyBpKyspCiAgICAgICAgaWYgKGtkYl93cGFbaV0ud3BfYWRkciAmJiAha2RiX3dwYVtpXS53
cF9kZWxldGVkKSAKICAgICAgICAgICAga2RiX3ByaW50X3dwX2luZm8oIiIsIGkpOwp9CgovKiAK
ICogRGlzcGxheSBvciBTZXQgaGFyZHdhcmUgYnJlYWtwb2ludHMsIGllLCB3YXRjaHBvaW50czoK
ICogICAtIFVwdG8gNCBhcmUgYWxsb3dlZAogKiAgIAogKiAgcndfZmxhZyBzaG91bGQgYmUgb25l
IG9mOiAKICogICAgIDAxID09IGJyZWFrIG9uIGRhdGEgd3JpdGUgb25seQogKiAgICAgMTAgPT0g
YnJlYWsgb24gSU8gcmVhZC93cml0ZQogKiAgICAgMTEgPT0gQnJlYWsgb24gZGF0YSByZWFkcyBv
ciB3cml0ZXMKICoKICogIGxlbiBzaG91bGQgYmUgb25lIG9mIDogMSAyIDQgOCAKICovCnZvaWQK
a2RiX2RvX3dhdGNocG9pbnRzKGtkYnZhX3QgYWRkciwgaW50IHJ3X2ZsYWcsIGludCBsZW4pCnsK
ICAgIGlmIChhZGRyID09IDApIHsKICAgICAgICBrZGJfZGlzcGxheV93cHMoKTsgICAgICAgIC8q
IGRpc3BsYXkgc2V0IHdhdGNocG9pbnRzICovCiAgICAgICAgcmV0dXJuOwogICAgfQogICAga2Ri
X3NldF93cChhZGRyLCByd19mbGFnLCBsZW4pOwogICAgcmV0dXJuOwp9CgoAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABrZGIveDg2L01ha2VmaWxlAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAMDAwMDY2NAAwMDAyNzU2ADAwMDI3NTYAMDAwMDAwMDAwNTUA
MTE3NjU0NjU1NTYAMDEzNjYxACAwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAHVzdGFyICAAbXJhdGhvcgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABtcmF0aG9yAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAApvYmoteSAgICA6PSBrZGJfd3Aubwpz
dWJkaXIteSArPSB1ZGlzODYtMS43CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAa2RiL2tkYl9jbWRzLmMAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAADAwMDA2NjQAMDAwMjc1NgAwMDAyNzU2ADAwMDAwMzU1NTcyADEy
MDE3NTAzNzExADAxMzUwMAAgMAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAB1c3RhciAgAG1yYXRob3IAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAbXJhdGhvcgAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAvKgogKiBDb3B5cmlnaHQgKEMpIDIwMDks
IE11a2VzaCBSYXRob3IsIE9yYWNsZSBDb3JwLiAgQWxsIHJpZ2h0cyByZXNlcnZlZC4KICoKICog
VGhpcyBwcm9ncmFtIGlzIGZyZWUgc29mdHdhcmU7IHlvdSBjYW4gcmVkaXN0cmlidXRlIGl0IGFu
ZC9vcgogKiBtb2RpZnkgaXQgdW5kZXIgdGhlIHRlcm1zIG9mIHRoZSBHTlUgR2VuZXJhbCBQdWJs
aWMKICogTGljZW5zZSB2MiBhcyBwdWJsaXNoZWQgYnkgdGhlIEZyZWUgU29mdHdhcmUgRm91bmRh
dGlvbi4KICoKICogVGhpcyBwcm9ncmFtIGlzIGRpc3RyaWJ1dGVkIGluIHRoZSBob3BlIHRoYXQg
aXQgd2lsbCBiZSB1c2VmdWwsCiAqIGJ1dCBXSVRIT1VUIEFOWSBXQVJSQU5UWTsgd2l0aG91dCBl
dmVuIHRoZSBpbXBsaWVkIHdhcnJhbnR5IG9mCiAqIE1FUkNIQU5UQUJJTElUWSBvciBGSVRORVNT
IEZPUiBBIFBBUlRJQ1VMQVIgUFVSUE9TRS4gIFNlZSB0aGUgR05VCiAqIEdlbmVyYWwgUHVibGlj
IExpY2Vuc2UgZm9yIG1vcmUgZGV0YWlscy4KICoKICogWW91IHNob3VsZCBoYXZlIHJlY2VpdmVk
IGEgY29weSBvZiB0aGUgR05VIEdlbmVyYWwgUHVibGljCiAqIExpY2Vuc2UgYWxvbmcgd2l0aCB0
aGlzIHByb2dyYW07IGlmIG5vdCwgd3JpdGUgdG8gdGhlCiAqIEZyZWUgU29mdHdhcmUgRm91bmRh
dGlvbiwgSW5jLiwgNTkgVGVtcGxlIFBsYWNlIC0gU3VpdGUgMzMwLAogKiBCb3N0b24sIE1BIDAy
MTExMC0xMzA3LCBVU0EuCiAqLwoKI2luY2x1ZGUgImluY2x1ZGUva2RiaW5jLmgiCgojaWYgZGVm
aW5lZChfX3g4Nl82NF9fKQogICAgI2RlZmluZSBLREJGNjQgIiVseCIKICAgICNkZWZpbmUgS0RC
RkwgIiUwMTZseCIgICAgICAgICAvKiBwcmludCBsb25nIGFsbCBkaWdpdHMgKi8KI2Vsc2UKICAg
ICNkZWZpbmUgS0RCRjY0ICIlbGx4IgogICAgI2RlZmluZSBLREJGTCAiJTA4bHgiCiNlbmRpZgoK
I2lmIFhFTl9TVUJWRVJTSU9OID4gNCB8fCBYRU5fVkVSU0lPTiA9PSA0ICAgICAgICAgICAgICAv
KiB4ZW4gMy41Lnggb3IgYWJvdmUgKi8KICAgICNkZWZpbmUgS0RCX0xLREVGKGwpICgobCkucmF3
LmxvY2spCiAgICAjZGVmaW5lIEtEQl9QR0xMRSh0KSAoKHQpLnRhaWwpICAgIC8qIHBhZ2UgbGlz
dCBsYXN0IGVsZW1lbnQgXiUkI0AgKi8KI2Vsc2UKICAgICNkZWZpbmUgS0RCX0xLREVGKGwpICgo
bCkubG9jaykKICAgICNkZWZpbmUgS0RCX1BHTExFKHQpICgodCkucHJldikgICAgLyogcGFnZSBs
aXN0IGxhc3QgZWxlbWVudCBeJSQjQCAqLwojZW5kaWYKCiNkZWZpbmUgS0RCX0NNRF9ISVNUT1JZ
X0NPVU5UICAgMzIKI2RlZmluZSBDTURfQlVGTEVOICAgICAgICAgICAgICAyMDAgICAgIC8qIGtk
Yl9wcmludGY6IG1heCBwcmludGxpbmUgPT0gMjU2ICovCgojZGVmaW5lIEtEQk1BWFNCUCAxNiAg
ICAgICAgICAgICAgICAgICAgLyogbWF4IG51bWJlciBvZiBzb2Z0d2FyZSBicmVha3BvaW50cyAq
LwojZGVmaW5lIEtEQl9NQVhBUkdDIDE2ICAgICAgICAgICAgICAgICAgLyogbWF4IGFyZ3MgaW4g
YSBrZGIgY29tbWFuZCAqLwojZGVmaW5lIEtEQl9NQVhCVFAgIDggICAgICAgICAgICAgICAgICAg
LyogbWF4IGRpc3BsYXkgYXJncyBpbiBidHAgKi8KCi8qIGNvbmRpdGlvbiBpczogJ3I2ID09IDB4
MTIzZicgb3IgJzB4ZmZmZmZmZmY4MjgwMDAwMCAhPSBkZWFkYmVlZicgICovCnN0cnVjdCBrZGJf
YnBjb25kIHsKICAgIGtkYmJ5dF90IGJwX2NvbmRfc3RhdHVzOyAgICAgICAvKiAwID09IG9mZiwg
MSA9PSByZWdpc3RlciwgMiA9PSBtZW1vcnkgKi8KICAgIGtkYmJ5dF90IGJwX2NvbmRfdHlwZTsg
ICAgICAgICAvKiAwID09IGJhZCwgMSA9PSBlcXVhbCwgMiA9PSBub3QgZXF1YWwgKi8KICAgIHVs
b25nICAgIGJwX2NvbmRfbGhzOyAgICAgICAgICAvKiBsaHMgb2YgY29uZGl0aW9uOiByZWcgb2Zm
c2V0IG9yIG1lbSBsb2MgKi8KICAgIHVsb25nICAgIGJwX2NvbmRfcmhzOyAgICAgICAgICAvKiBy
aWdodCBoYW5kIHNpZGUgb2YgY29uZGl0aW9uICovCn07CgovKiBzb2Z0d2FyZSBicmVha3BvaW50
IHN0cnVjdHVyZSAqLwpzdHJ1Y3Qga2RiX3NicmtwdCB7CiAgICBrZGJ2YV90ICBicF9hZGRyOyAg
ICAgICAgICAgICAgLyogYWRkcmVzcyB0aGUgYnAgaXMgc2V0IGF0ICovCiAgICBkb21pZF90ICBi
cF9kb21pZDsgICAgICAgICAgICAgLyogd2hpY2ggZG9tYWluIHRoZSBicCBiZWxvbmdzIHRvICov
CiAgICBrZGJieXRfdCBicF9vcmlnaW5zdDsgICAgICAgICAgLyogc2F2ZSBvcmlnIGluc3RyL3Mg
aGVyZSAqLwogICAga2RiYnl0X3QgYnBfZGVsZXRlZDsgICAgICAgICAgIC8qIGRlbGV0ZSBwZW5k
aW5nIG9uIHRoaXMgYnAgKi8KICAgIGtkYmJ5dF90IGJwX25pOyAgICAgICAgICAgICAgICAvKiBz
ZXQgZm9yIEtEQl9DUFVfTkkgKi8KICAgIGtkYmJ5dF90IGJwX2p1c3RfYWRkZWQ7ICAgICAgICAv
KiBhZGRlZCBpbiB0aGUgY3VycmVudCBrZGIgc2Vzc2lvbiAqLwogICAga2RiYnl0X3QgYnBfdHlw
ZTsgICAgICAgICAgICAgIC8qIDAgPSBub3JtYWwsIDEgPT0gY29uZCwgIDIgPT0gYnRwICovCiAg
ICB1bmlvbiB7CiAgICAgICAgc3RydWN0IGtkYl9icGNvbmQgYnBfY29uZDsKICAgICAgICB1bG9u
ZyAqYnBfYnRwOwogICAgfSB1Owp9OwoKLyogZG9uJ3QgdXNlIGttYWxsb2MgaW4ga2RiIHdoaWNo
IGhpamFja3MgYWxsIGNwdXMgKi8Kc3RhdGljIHVsb25nIGtkYl9idHBfYXJnc2FbS0RCTUFYU0JQ
XVtLREJfTUFYQlRQXTsKc3RhdGljIHVsb25nICprZGJfYnRwX2FwW0tEQk1BWFNCUF07CgpzdGF0
aWMgc3RydWN0IGtkYl9yZWdfbm1vZnMgewogICAgY2hhciAqcmVnX25tOwogICAgaW50IHJlZ19v
ZmZzOwp9IGtkYl9yZWdfbm1fb2Zmc1tdID0gIHsKICAgICAgIHsgInJheCIsIG9mZnNldG9mKHN0
cnVjdCBjcHVfdXNlcl9yZWdzLCByYXgpIH0sCiAgICAgICB7ICJyYngiLCBvZmZzZXRvZihzdHJ1
Y3QgY3B1X3VzZXJfcmVncywgcmJ4KSB9LAogICAgICAgeyAicmN4Iiwgb2Zmc2V0b2Yoc3RydWN0
IGNwdV91c2VyX3JlZ3MsIHJjeCkgfSwKICAgICAgIHsgInJkeCIsIG9mZnNldG9mKHN0cnVjdCBj
cHVfdXNlcl9yZWdzLCByZHgpIH0sCiAgICAgICB7ICJyc2kiLCBvZmZzZXRvZihzdHJ1Y3QgY3B1
X3VzZXJfcmVncywgcnNpKSB9LAogICAgICAgeyAicmRpIiwgb2Zmc2V0b2Yoc3RydWN0IGNwdV91
c2VyX3JlZ3MsIHJkaSkgfSwKICAgICAgIHsgInJicCIsIG9mZnNldG9mKHN0cnVjdCBjcHVfdXNl
cl9yZWdzLCByYnApIH0sCiAgICAgICB7ICJyc3AiLCBvZmZzZXRvZihzdHJ1Y3QgY3B1X3VzZXJf
cmVncywgcnNwKSB9LAogICAgICAgeyAicjgiLCAgb2Zmc2V0b2Yoc3RydWN0IGNwdV91c2VyX3Jl
Z3MsIHI4KSB9LAogICAgICAgeyAicjkiLCAgb2Zmc2V0b2Yoc3RydWN0IGNwdV91c2VyX3JlZ3Ms
IHI5KSB9LAogICAgICAgeyAicjEwIiwgb2Zmc2V0b2Yoc3RydWN0IGNwdV91c2VyX3JlZ3MsIHIx
MCkgfSwKICAgICAgIHsgInIxMSIsIG9mZnNldG9mKHN0cnVjdCBjcHVfdXNlcl9yZWdzLCByMTEp
IH0sCiAgICAgICB7ICJyMTIiLCBvZmZzZXRvZihzdHJ1Y3QgY3B1X3VzZXJfcmVncywgcjEyKSB9
LAogICAgICAgeyAicjEzIiwgb2Zmc2V0b2Yoc3RydWN0IGNwdV91c2VyX3JlZ3MsIHIxMykgfSwK
ICAgICAgIHsgInIxNCIsIG9mZnNldG9mKHN0cnVjdCBjcHVfdXNlcl9yZWdzLCByMTQpIH0sCiAg
ICAgICB7ICJyMTUiLCBvZmZzZXRvZihzdHJ1Y3QgY3B1X3VzZXJfcmVncywgcjE1KSB9LAogICAg
ICAgeyAicmZsYWdzIiwgb2Zmc2V0b2Yoc3RydWN0IGNwdV91c2VyX3JlZ3MsIHJmbGFncykgfSB9
OwoKc3RhdGljIGNvbnN0IGludCBLREJCUFNaPTE7ICAgICAgICAgICAgICAgICAgIC8qIHNpemUg
b2YgS0RCX0JQSU5TVCBpcyAxIGJ5dGUqLwpzdGF0aWMga2RiYnl0X3Qga2RiX2JwaW5zdCA9IDB4
Y2M7ICAgICAgICAgICAgLyogYnJlYWtwb2ludCBpbnN0cjogSU5UMyAqLwpzdGF0aWMgc3RydWN0
IGtkYl9zYnJrcHQga2RiX3NicGFbS0RCTUFYU0JQXTsgLyogc29mdCBicmtwdCBhcnJheS90YWJs
ZSAqLwpzdGF0aWMga2RidGFiX3QgKnRicDsKCnN0YXRpYyBpbnQga2RiX3NldF9icChkb21pZF90
LCBrZGJ2YV90LCBpbnQsIHVsb25nICosIGNoYXIqLCBjaGFyKiwgY2hhciopOwpzdGF0aWMgdm9p
ZCBrZGJfcHJpbnRfdXJlZ3Moc3RydWN0IGNwdV91c2VyX3JlZ3MgKik7CgoKLyogPT09PT09PT09
PT09PT09PT09PT09IGNtZGxpbmUgZnVuY3Rpb25zICA9PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PSAqLwoKLyogbHAgcG9pbnRzIHRvIGEgc3RyaW5nIG9mIG9ubHkgYWxwaGEgbnVtZXJp
YyBjaGFycyB0ZXJtaW5hdGVkIGJ5ICdcbicuCiAqIFBhcnNlIHRoZSBzdHJpbmcgaW50byBhcmd2
IHBvaW50ZXJzLCBhbmQgUkVUVVJOIGFyZ2MKICogRWc6ICBpZiBscCAtLT4gImRyICBzcFxuIiA6
ICBhcmd2WzBdPT0iZHJcMCIgIGFyZ3ZbMV09PSJzcFwwIiAgYXJnYz09MgogKi8Kc3RhdGljIGlu
dAprZGJfcGFyc2VfY21kbGluZShjaGFyICpscCwgY29uc3QgY2hhciAqKmFyZ3YpCnsKICAgIGlu
dCBpPTA7CgogICAgZm9yICg7ICpscCA9PSAnICc7IGxwKyspOyAgICAgIC8qIG5vdGU6IGlzc3Bh
Y2UoKSBza2lwcyAnXG4nIGFsc28gKi8KICAgIHdoaWxlICggKmxwICE9ICdcbicgKSB7CiAgICAg
ICAgaWYgKGkgPT0gS0RCX01BWEFSR0MpIHsKICAgICAgICAgICAgcHJpbnRrKCJrZGI6IG1heCBh
cmdzIGV4Y2VlZGVkXG4iKTsKICAgICAgICAgICAgYnJlYWs7CiAgICAgICAgfQogICAgICAgIGFy
Z3ZbaSsrXSA9IGxwOwogICAgICAgIGZvciAoOyAqbHAgIT0gJyAnICYmICpscCAhPSAnXG4nOyBs
cCsrKTsKICAgICAgICBpZiAoKmxwICE9ICdcbicpCiAgICAgICAgICAgICpscCsrID0gJ1wwJzsK
ICAgICAgICBmb3IgKDsgKmxwID09ICcgJzsgbHArKyk7CiAgICB9CiAgICAqbHAgPSAnXDAnOwog
ICAgcmV0dXJuIGk7Cn0KCnZvaWQKa2RiX2NsZWFyX3ByZXZfY21kKCkgICAgICAgICAgICAgLyog
c28gcHJldmlvdXMgY29tbWFuZCBpcyBub3QgcmVwZWF0ZWQgKi8KewogICAgdGJwID0gTlVMTDsK
fQoKdm9pZAprZGJfZG9fY21kcyhzdHJ1Y3QgY3B1X3VzZXJfcmVncyAqcmVncykKewogICAgY2hh
ciAqY21kbGluZXA7CiAgICBjb25zdCBjaGFyICphcmd2W0tEQl9NQVhBUkdDXTsKICAgIGludCBh
cmdjID0gMCwgY3VyY3B1ID0gc21wX3Byb2Nlc3Nvcl9pZCgpOwogICAga2RiX2NwdV9jbWRfdCBy
ZXN1bHQgPSBLREJfQ1BVX01BSU5fS0RCOwoKICAgIHNucHJpbnRmKGtkYl9wcm9tcHQsIHNpemVv
ZihrZGJfcHJvbXB0KSwgIlslZF14a2RiPiAiLCBjdXJjcHUpOwoKICAgIHdoaWxlIChyZXN1bHQg
PT0gS0RCX0NQVV9NQUlOX0tEQikgewogICAgICAgIGNtZGxpbmVwID0ga2RiX2dldF9jbWRsaW5l
KGtkYl9wcm9tcHQpOwogICAgICAgIGlmICgqY21kbGluZXAgPT0gJ1xuJykgewogICAgICAgICAg
ICBpZiAodGJwPT1OVUxMIHx8IHRicC0+a2RiX2NtZF9mdW5jPT1OVUxMKQogICAgICAgICAgICAg
ICAgY29udGludWU7CiAgICAgICAgICAgIGVsc2UKICAgICAgICAgICAgICAgIGFyZ2MgPSAtMTsg
ICAgLyogcmVwZWF0IHByZXYgY29tbWFuZCAqLwogICAgICAgIH0gZWxzZSB7CiAgICAgICAgICAg
IGFyZ2MgPSBrZGJfcGFyc2VfY21kbGluZShjbWRsaW5lcCwgYXJndik7CiAgICAgICAgICAgIGZv
cih0YnA9a2RiX2NtZF90Ymw7IHRicC0+a2RiX2NtZF9mdW5jOyB0YnArKykgIHsKICAgICAgICAg
ICAgICAgIGlmIChzdHJjbXAoYXJndlswXSwgdGJwLT5rZGJfY21kX25hbWUpPT0wKSAKICAgICAg
ICAgICAgICAgICAgICBicmVhazsKICAgICAgICAgICAgfQogICAgICAgIH0KICAgICAgICBpZiAo
a2RiX3N5c19jcmFzaCAmJiB0YnAtPmtkYl9jbWRfZnVuYyAmJiAhdGJwLT5rZGJfY21kX2NyYXNo
X2F2YWlsKSB7CiAgICAgICAgICAgIGtkYnAoImNtZCBub3QgYXZhaWxhYmxlIGluIGZhdGFsL2Ny
YXNoZWQgc3RhdGUuLi4uXG4iKTsKICAgICAgICAgICAgY29udGludWU7CiAgICAgICAgfQogICAg
ICAgIGlmICh0YnAtPmtkYl9jbWRfZnVuYykgewogICAgICAgICAgICByZXN1bHQgPSAoKnRicC0+
a2RiX2NtZF9mdW5jKShhcmdjLCBhcmd2LCByZWdzKTsKICAgICAgICAgICAgaWYgKHRicC0+a2Ri
X2NtZF9yZXBlYXQgPT0gS0RCX1JFUEVBVF9OT05FKQogICAgICAgICAgICAgICAgdGJwID0gTlVM
TDsKICAgICAgICB9IGVsc2UKICAgICAgICAgICAga2RicCgia2RiOiBVbmtub3duIGNtZDogJXNc
biIsIGNtZGxpbmVwKTsKICAgIH0KICAgIGtkYl9jcHVfY21kW2N1cmNwdV0gPSByZXN1bHQ7CiAg
ICByZXR1cm47Cn0KCi8qID09PT09PT09PT09PT09PT09PT09PSBVdGlsIGZ1bmN0aW9ucyAgPT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09ICovCgppbnQKa2RiX3ZjcHVfdmFsaWQo
c3RydWN0IHZjcHUgKmluX3ZwKQp7CiAgICBzdHJ1Y3QgZG9tYWluICpkcDsKICAgIHN0cnVjdCB2
Y3B1ICp2cDsKCiAgICBmb3IoZHA9ZG9tYWluX2xpc3Q7IGluX3ZwICYmIGRwOyBkcD1kcC0+bmV4
dF9pbl9saXN0KQogICAgICAgIGZvcl9lYWNoX3ZjcHUoZHAsIHZwKQogICAgICAgICAgICBpZiAo
aW5fdnAgPT0gdnApCiAgICAgICAgICAgICAgICByZXR1cm4gMTsKICAgIHJldHVybiAwOyAgICAg
Lyogbm90IGZvdW5kICovCn0KCi8qCiAqIEdpdmVuIGEgc3ltYm9sLCBmaW5kIGl0J3MgYWRkcmVz
cwogKi8Kc3RhdGljIGtkYnZhX3QKa2RiX3N5bTJhZGRyKGNvbnN0IGNoYXIgKnAsIGRvbWlkX3Qg
ZG9taWQpCnsKICAgIGtkYnZhX3QgYWRkcjsKCiAgICBLREJHUDEoInN5bTJhZGRyOiBwOiVzIGRv
bWlkOiVkXG4iLCBwLCBkb21pZCk7CiAgICBpZiAoZG9taWQgPT0gRE9NSURfSURMRSkKICAgICAg
ICBhZGRyID0gYWRkcmVzc19sb29rdXAoKGNoYXIgKilwKTsKICAgIGVsc2UKICAgICAgICBhZGRy
ID0gKGtkYnZhX3Qpa2RiX2d1ZXN0X3N5bTJhZGRyKChjaGFyICopcCwgZG9taWQpOwogICAgS0RC
R1AxKCJzeW0yYWRkcjogZXhpdDogYWRkciByZXR1cm5lZDoweCVseFxuIiwgYWRkcik7CiAgICBy
ZXR1cm4gYWRkcjsKfQoKLyoKICogY29udmVydCBhc2NpaSB0byBpbnQgZGVjaW1hbCAoYmFzZSAx
MCkuIAogKiBSZXR1cm46IDAgOiBmYWlsZWQgdG8gY29udmVydCwgb3RoZXJ3aXNlIDEgCiAqLwpz
dGF0aWMgaW50CmtkYl9zdHIyZGVjaShjb25zdCBjaGFyICpzdHJwLCBpbnQgKmludHApCnsKICAg
IGNvbnN0IGNoYXIgKmVuZHA7CgogICAgS0RCR1AyKCJzdHIyZGVjaTogc3RyOiVzXG4iLCBzdHJw
KTsKICAgIGlmICghaXNkaWdpdCgqc3RycCkpCiAgICAgICAgcmV0dXJuIDA7CiAgICAqaW50cCA9
IChpbnQpc2ltcGxlX3N0cnRvdWwoc3RycCwgJmVuZHAsIDEwKTsKICAgIGlmIChlbmRwICE9IHN0
cnArc3RybGVuKHN0cnApKQogICAgICAgIHJldHVybiAwOwogICAgS0RCR1AyKCJzdHIyZGVjaTog
aW50dmFsOiQlZFxuIiwgKmludHApOwogICAgcmV0dXJuIDE7Cn0KLyoKICogY29udmVydCBhc2Np
aSB0byBsb25nLiBOT1RFOiBiYXNlIGlzIDE2CiAqIFJldHVybjogMCA6IGZhaWxlZCB0byBjb252
ZXJ0LCBvdGhlcndpc2UgMSAKICovCnN0YXRpYyBpbnQKa2RiX3N0cjJ1bG9uZyhjb25zdCBjaGFy
ICpzdHJwLCB1bG9uZyAqbG9uZ3ApCnsKICAgIHVsb25nIHZhbDsKICAgIGNvbnN0IGNoYXIgKmVu
ZHA7CgogICAgS0RCR1AyKCJzdHIybG9uZzogc3RyOiVzXG4iLCBzdHJwKTsKICAgIGlmICghaXN4
ZGlnaXQoKnN0cnApKQogICAgICAgIHJldHVybiAwOwogICAgdmFsID0gKGxvbmcpc2ltcGxlX3N0
cnRvdWwoc3RycCwgJmVuZHAsIDE2KTsgICAvKiBoYW5kbGVzIGxlYWRpbmcgMHggKi8KICAgIGlm
IChlbmRwICE9IHN0cnArc3RybGVuKHN0cnApKQogICAgICAgIHJldHVybiAwOwogICAgaWYgKGxv
bmdwKQogICAgICAgICpsb25ncCA9IHZhbDsKICAgIEtEQkdQMigic3RyMmxvbmc6IHZhbDoweCVs
eFxuIiwgdmFsKTsKICAgIHJldHVybiAxOwp9Ci8qCiAqIGNvbnZlcnQgYSBzeW1ib2wgb3IgYXNj
aWkgYWRkcmVzcyB0byBoZXggYWRkcmVzcwogKiBSZXR1cm46IDAgOiBmYWlsZWQgdG8gY29udmVy
dCwgb3RoZXJ3aXNlIDEgCiAqLwpzdGF0aWMgaW50CmtkYl9zdHIyYWRkcihjb25zdCBjaGFyICpz
dHJwLCBrZGJ2YV90ICphZGRycCwgZG9taWRfdCBpZCkKewogICAga2RidmFfdCBhZGRyOwogICAg
Y29uc3QgY2hhciAqZW5kcDsKCiAgICAvKiBhc3N1bWUgaXQncyBhbiBhZGRyZXNzICovCiAgICBL
REJHUDIoInN0cjJhZGRyOiBzdHI6JXMgaWQ6JWRcbiIsIHN0cnAsIGlkKTsKICAgIGFkZHIgPSAo
a2RidmFfdClzaW1wbGVfc3RydG91bChzdHJwLCAmZW5kcCwgMTYpOyAvKmhhbmRsZXMgbGVhZGlu
ZyAweCAqLwogICAgaWYgKGVuZHAgIT0gc3RycCtzdHJsZW4oc3RycCkpCiAgICAgICAgaWYgKCAh
KGFkZHI9a2RiX3N5bTJhZGRyKHN0cnAsIGlkKSkgKQogICAgICAgICAgICByZXR1cm4gMDsKICAg
ICphZGRycCA9IGFkZHI7CiAgICBLREJHUDIoInN0cjJhZGRyOiBhZGRyOjB4JWx4XG4iLCBhZGRy
KTsKICAgIHJldHVybiAxOwp9CgovKiBHaXZlbiBkb21pZCwgcmV0dXJuIHB0ciB0byBzdHJ1Y3Qg
ZG9tYWluIAogKiBJRiBkb21pZCA9PSBET01JRF9JRExFIHJldHVybiBwdHIgdG8gaWRsZV9kb21h
aW4gCiAqIElGIGRvbWlkID09IHZhbGlkIGRvbWFpbiwgcmV0dXJuIHB0ciB0byBkb21haW4gc3Ry
dWN0CiAqIGVsc2UgZG9taWQgaXMgYmFkIGFuZCByZXR1cm4gTlVMTAogKi8Kc3RhdGljIHN0cnVj
dCBkb21haW4gKgprZGJfZG9taWQycHRyKGRvbWlkX3QgZG9taWQpCnsKICAgIHN0cnVjdCBkb21h
aW4gKmRwOwoKICAgIC8qIGdldF9kb21haW5fYnlfaWQoKSByZXQgTlVMTCBmb3IgYm90aCBET01J
RF9JRExFIGFuZCBiYWQgZG9taWRzICovCiAgICBpZiAoZG9taWQgPT0gRE9NSURfSURMRSkKICAg
ICAgICBkcCA9IGlkbGVfdmNwdVtzbXBfcHJvY2Vzc29yX2lkKCldLT5kb21haW47CiAgICBlbHNl
IAogICAgICAgIGRwID0gZ2V0X2RvbWFpbl9ieV9pZChkb21pZCk7ICAgLyogTlVMTCBub3cgbWVh
bnMgYmFkIGRvbWlkICovCiAgICByZXR1cm4gZHA7Cn0KCi8qCiAqIFJldHVybnM6ICAwOiBmYWls
ZWQuIGludmFsaWQgZG9taWQgb3Igc3RyaW5nLCAqaWRwIG5vdCBjaGFuZ2VkLgogKi8Kc3RhdGlj
IGludAprZGJfc3RyMmRvbWlkKGNvbnN0IGNoYXIgKmRvbXN0ciwgZG9taWRfdCAqaWRwLCBpbnQg
cGVycikKewogICAgaW50IGlkOwogICAgaWYgKCFrZGJfc3RyMmRlY2koZG9tc3RyLCAmaWQpIHx8
ICFrZGJfZG9taWQycHRyKChkb21pZF90KWlkKSkgewogICAgICAgIGlmIChwZXJyKQogICAgICAg
ICAgICBrZGJwKCJJbnZhbGlkIGRvbWlkOiVzXG4iLCBkb21zdHIpOwogICAgICAgIHJldHVybiAw
OwogICAgfQogICAgKmlkcCA9IChkb21pZF90KWlkOwogICAgcmV0dXJuIDE7Cn0KCnN0YXRpYyBz
dHJ1Y3QgZG9tYWluICoKa2RiX3N0cmRvbWlkMnB0cihjb25zdCBjaGFyICpkb21zdHIsIGludCBw
ZXJyb3IpCnsKICAgIGRvbWlkX3QgZG9taWQ7CiAgICBpZiAoa2RiX3N0cjJkb21pZChkb21zdHIs
ICZkb21pZCwgcGVycm9yKSkgewogICAgICAgIHJldHVybihrZGJfZG9taWQycHRyKGRvbWlkKSk7
CiAgICB9CiAgICByZXR1cm4gTlVMTDsKfQoKLyogcmV0dXJuIGEgZ3Vlc3QgYml0bmVzczogMzIg
b3IgNjQgKi8KaW50CmtkYl9ndWVzdF9iaXRuZXNzKGRvbWlkX3QgZG9taWQpCnsKICAgIGNvbnN0
IGludCBIWVBTWiA9IHNpemVvZihsb25nKSAqIDg7CiAgICBzdHJ1Y3QgZG9tYWluICpkcCA9IGtk
Yl9kb21pZDJwdHIoZG9taWQpOwogICAgaW50IHJldHZhbDsgCgogICAgaWYgKGlzX2lkbGVfZG9t
YWluKGRwKSkKICAgICAgICByZXR2YWwgPSBIWVBTWjsKICAgIGVsc2UgaWYgKGlzX2h2bV9vcl9o
eWJfZG9tYWluKGRwKSkKICAgICAgICByZXR2YWwgPSAoaHZtX2xvbmdfbW9kZV9lbmFibGVkKGRw
LT52Y3B1WzBdKSkgPyBIWVBTWiA6IDMyOwogICAgZWxzZSAKICAgICAgICByZXR2YWwgPSBpc19w
dl8zMmJpdF9kb21haW4oZHApID8gMzIgOiBIWVBTWjsKICAgIEtEQkdQMSgiZ2JpdG5lc3M6IGRv
bWlkOiVkIGRwOiVwIGJpdG5lc3M6JWRcbiIsIGRvbWlkLCBkcCwgcmV0dmFsKTsKICAgIHJldHVy
biByZXR2YWw7Cn0KCi8qIGtkYl9wcmludF9zcGluX2xvY2soJnh5el9sb2NrLCAieHl6X2xvY2s6
IiwgIlxuIik7ICovCnN0YXRpYyB2b2lkCmtkYl9wcmludF9zcGluX2xvY2soY2hhciAqc3RycCwg
c3BpbmxvY2tfdCAqbGtwLCBjaGFyICpubHApCnsKICAgIGtkYnAoIiVzICUwNGh4ICVkICVkJXMi
LCBzdHJwLCBLREJfTEtERUYoKmxrcCksIGxrcC0+cmVjdXJzZV9jcHUsCiAgICAgICAgIGxrcC0+
cmVjdXJzZV9jbnQsIG5scCk7Cn0KCi8qIGNoZWNrIGlmIHJlZ2lzdGVyIHN0cmluZyBpcyB2YWxp
ZC4gaWYgeWVzLCByZXR1cm4gb2Zmc2V0IHRvIHRoZSByZWdpc3RlcgogKiBpbiBjcHVfdXNlcl9y
ZWdzLCBlbHNlIHJldHVybiAtMSAqLwpzdGF0aWMgaW50CmtkYl92YWxpZF9yZWcoY29uc3QgY2hh
ciAqbm1wKSAKewogICAgaW50IGk7CiAgICBmb3IgKGk9MDsgaSA8IHNpemVvZihrZGJfcmVnX25t
X29mZnMpL3NpemVvZihrZGJfcmVnX25tX29mZnNbMF0pOyBpKyspCiAgICAgICAgaWYgKHN0cmNt
cChrZGJfcmVnX25tX29mZnNbaV0ucmVnX25tLCBubXApID09IDApCiAgICAgICAgICAgIHJldHVy
biBrZGJfcmVnX25tX29mZnNbaV0ucmVnX29mZnM7CiAgICByZXR1cm4gLTE7Cn0KCi8qIGdpdmVu
IG9mZnNldCBvZiByZWdpc3RlciwgcmV0dXJuIHJlZ2lzdGVyIG5hbWUgc3RyaW5nLiBpZiBvZmZz
ZXQgaXMgaW52YWxpZAogKiByZXR1cm4gTlVMTCAqLwpzdGF0aWMgY2hhciAqa2RiX3JlZ29mZnNf
dG9fbmFtZShpbnQgb2ZmcykKewogICAgaW50IGk7CiAgICBmb3IgKGk9MDsgaSA8IHNpemVvZihr
ZGJfcmVnX25tX29mZnMpL3NpemVvZihrZGJfcmVnX25tX29mZnNbMF0pOyBpKyspCiAgICAgICAg
aWYgKGtkYl9yZWdfbm1fb2Zmc1tpXS5yZWdfb2ZmcyA9PSBvZmZzKQogICAgICAgICAgICByZXR1
cm4ga2RiX3JlZ19ubV9vZmZzW2ldLnJlZ19ubTsKICAgIHJldHVybiBOVUxMOwp9CgovKiA9PT09
PT09PT09PT09PT09PT09PT0gdXRpbCBzdHJ1Y3QgZnVuY3MgPT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09ICovCnN0YXRpYyB2b2lkCmtkYl9wcm50X3RpbWVyKHN0cnVjdCB0aW1lciAq
dHApCnsKI2lmIFhFTl9TVUJWRVJTSU9OID09IDAgCiAgICBrZGJwKCIgZXhwaXJlczolMDE2bHgg
ZXhwaXJlc19lbmQ6JTAxNmx4IGNwdTolZCBzdGF0dXM6JXhcbiIsIHRwLT5leHBpcmVzLCAKICAg
ICAgICAgdHAtPmV4cGlyZXNfZW5kLCB0cC0+Y3B1LCB0cC0+c3RhdHVzKTsKI2Vsc2UKICAgIGtk
YnAoIiBleHBpcmVzOiUwMTZseCBjcHU6JWQgc3RhdHVzOiV4XG4iLCB0cC0+ZXhwaXJlcywgdHAt
PmNwdSx0cC0+c3RhdHVzKTsKI2VuZGlmCiAgICBrZGJwKCIgZnVuY3Rpb24gZGF0YTolcCBwdHI6
JXAgIiwgdHAtPmRhdGEsIHRwLT5mdW5jdGlvbik7CiAgICBrZGJfcHJudF9hZGRyMnN5bShET01J
RF9JRExFLCAoa2RidmFfdCl0cC0+ZnVuY3Rpb24sICJcbiIpOwp9CgpzdGF0aWMgdm9pZCAKa2Ri
X3BybnRfcGVyaW9kaWNfdGltZShzdHJ1Y3QgcGVyaW9kaWNfdGltZSAqcHRwKQp7CiAgICBrZGJw
KCIgbmV4dDolcCBwcmV2OiVwXG4iLCBwdHAtPmxpc3QubmV4dCwgcHRwLT5saXN0LnByZXYpOwog
ICAga2RicCgiIG9uX2xpc3Q6JWQgb25lX3Nob3Q6JWQgZG9udF9mcmVlemU6JWQgaXJxX2lzc3Vl
ZDolZCBzcmM6JXggaXJxOiV4XG4iLAogICAgICAgICBwdHAtPm9uX2xpc3QsIHB0cC0+b25lX3No
b3QsIHB0cC0+ZG9fbm90X2ZyZWV6ZSwgcHRwLT5pcnFfaXNzdWVkLAogICAgICAgICBwdHAtPnNv
dXJjZSwgcHRwLT5pcnEpOwogICAga2RicCgiIHZjcHU6JXAgcGVuZGluZ19pbnRyX25yOiUwOHgg
cGVyaW9kOiUwMTZseFxuIiwgcHRwLT52Y3B1LAogICAgICAgICBwdHAtPnBlbmRpbmdfaW50cl9u
ciwgcHRwLT5wZXJpb2QpOwogICAga2RicCgiIHNjaGVkdWxlZDolMDE2bHggbGFzdF9wbHRfZ3Rp
bWU6JTAxNmx4XG4iLCBwdHAtPnNjaGVkdWxlZCwKICAgICAgICAgcHRwLT5sYXN0X3BsdF9ndGlt
ZSk7CiAgICBrZGJwKCIgXG4gICAgICAgICAgdGltZXIgaW5mbzpcbiIpOwogICAga2RiX3BybnRf
dGltZXIoJnB0cC0+dGltZXIpOwogICAga2RicCgiXG4iKTsKfQoKLyogPT09PT09PT09PT09PT09
PT09PT09IGNtZCBmdW5jdGlvbnMgID09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PSAqLwoKLyoKICogRlVOQ1RJT046IERpc2Fzc2VtYmxlIGluc3RydWN0aW9ucwogKi8Kc3RhdGlj
IGtkYl9jcHVfY21kX3QKa2RiX3VzZ2ZfZGlzKHZvaWQpCnsKICAgIGtkYnAoImRpcyBbYWRkcnxz
eW1dW251bV1bZG9taWRdIDogRGlzYXNzZW1ibGUgaW5zdHJzXG4iKTsKICAgIHJldHVybiBLREJf
Q1BVX01BSU5fS0RCOwp9CnN0YXRpYyBrZGJfY3B1X2NtZF90IAprZGJfY21kZl9kaXMoaW50IGFy
Z2MsIGNvbnN0IGNoYXIgKiphcmd2LCBzdHJ1Y3QgY3B1X3VzZXJfcmVncyAqcmVncykKewogICAg
aW50IG51bSA9IDg7ICAgICAgICAgICAgICAgICAgICAgICAgICAgLyogZGlzcGxheSA4IGluc3Ry
IGJ5IGRlZmF1bHQgKi8KICAgIHN0YXRpYyBrZGJ2YV90IGFkZHIgPSBCRkRfSU5WQUw7CiAgICBz
dGF0aWMgZG9taWRfdCBkb21pZDsKCiAgICBpZiAoYXJnYyA+IDEgJiYgKmFyZ3ZbMV0gPT0gJz8n
KQogICAgICAgIHJldHVybiBrZGJfdXNnZl9kaXMoKTsKCiAgICBpZiAoYXJnYyAhPSAtMSkgICAg
ICAvKiBub3QgYSBjb21tYW5kIHJlcGVhdCAqLwogICAgICAgIGRvbWlkID0gZ3Vlc3RfbW9kZShy
ZWdzKSA/ICBjdXJyZW50LT5kb21haW4tPmRvbWFpbl9pZCA6IERPTUlEX0lETEU7CgogICAgaWYg
KGFyZ2MgPj0gNCAmJiAha2RiX3N0cjJkb21pZChhcmd2WzNdLCAmZG9taWQsIDEpKSB7IAogICAg
ICAgIHJldHVybiBLREJfQ1BVX01BSU5fS0RCOwogICAgfSAKICAgIGlmIChhcmdjID49IDMgJiYg
IWtkYl9zdHIyZGVjaShhcmd2WzJdLCAmbnVtKSkgewogICAgICAgIGtkYnAoImtkYjpJbnZhbGlk
IG51bVxuIik7CiAgICAgICAgcmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7CiAgICB9IAogICAgaWYg
KGFyZ2MgPiAxICYmICFrZGJfc3RyMmFkZHIoYXJndlsxXSwgJmFkZHIsIGRvbWlkKSkgewogICAg
ICAgIGtkYnAoImtkYjpJbnZhbGlkIGFkZHIvc3ltXG4iKTsKICAgICAgICBrZGJwKCIobnVtIGhh
cyB0byBiZSBzcGVjaWZpZWQgaWYgcHJvdmlkaW5nIGRvbWlkKVxuIik7CiAgICAgICAgcmV0dXJu
IEtEQl9DUFVfTUFJTl9LREI7CiAgICB9IAogICAgaWYgKGFyZ2MgPT0gMSkgICAgICAgICAgICAg
ICAgICAgIC8qIG5vdCBjb21tYW5kIHJlcGVhdCAqLwogICAgICAgIGFkZHIgPSByZWdzLT5LREJJ
UDsgICAgICAgICAgIC8qIFBDIGlzIHRoZSBkZWZhdWx0ICovCiAgICBlbHNlIGlmIChhZGRyID09
IEJGRF9JTlZBTCkgewogICAgICAgIGtkYnAoImtkYjpJbnZhbGlkIGFkZHIvc3ltXG4iKTsKICAg
ICAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKICAgIH0KICAgIGFkZHIgPSBrZGJfcHJpbnRf
aW5zdHIoYWRkciwgbnVtLCBkb21pZCk7CiAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKfQoK
LyogRlVOQ1RJT046IGtkYl9jbWRmX2Rpc20oKSBUb2dnbGUgZGlzYXNzZW1ibHkgc3ludGF4IGZy
b20gSW50ZWwgdG8gQVRUL0dBUyAqLwpzdGF0aWMga2RiX2NwdV9jbWRfdAprZGJfdXNnZl9kaXNt
KHZvaWQpCnsKICAgIGtkYnAoImRpc206IHRvZ2dsZSBkaXNhc3NlbWJseSBtb2RlIGJldHdlZW4g
QVRUL0dBUyBhbmQgSU5URUxcbiIpOwogICAgcmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7Cn0Kc3Rh
dGljIGtkYl9jcHVfY21kX3QgCmtkYl9jbWRmX2Rpc20oaW50IGFyZ2MsIGNvbnN0IGNoYXIgKiph
cmd2LCBzdHJ1Y3QgY3B1X3VzZXJfcmVncyAqcmVncykKewogICAgaWYgKGFyZ2MgPiAxICYmICph
cmd2WzFdID09ICc/JykKICAgICAgICByZXR1cm4ga2RiX3VzZ2ZfZGlzbSgpOwoKICAgIGtkYl90
b2dnbGVfZGlzX3N5bnRheCgpOwogICAgcmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7Cn0KCnN0YXRp
YyB2b2lkCl9rZGJfc2hvd19ndWVzdF9zdGFjayhkb21pZF90IGRvbWlkLCBrZGJ2YV90IGlwYWRk
ciwga2RidmFfdCBzcGFkZHIpCnsKICAgIGtkYnZhX3QgdmFsOwogICAgaW50IG51bT0wLCBtYXg9
MCwgcmQgPSBrZGJfZ3Vlc3RfYml0bmVzcyhkb21pZCkvODsKCiAgICBrZGJfcHJpbnRfaW5zdHIo
aXBhZGRyLCAxLCBkb21pZCk7CiAgICBLREJHUCgiX2d1ZXN0X3N0YWNrOnNwOiVseCBkb21pZDol
ZCByZDokJWRcbiIsIHNwYWRkciwgZG9taWQsIHJkKTsKICAgIHZhbCA9IDA7ICAgICAgICAgICAg
ICAgICAgICAgICAgICAvKiBtdXN0IHplcm8sIGluIGNhc2UgZ3Vlc3QgaXMgMzJiaXQgKi8KICAg
IHdoaWxlKChrZGJfcmVhZF9tZW0oc3BhZGRyLChrZGJieXRfdCAqKSZ2YWwscmQsZG9taWQpPT1y
ZCkgJiYgbnVtIDwgMTYpewogICAgICAgIEtEQkdQMSgiZ3N0azphZGRyOiVseCB2YWw6JWx4XG4i
LCBzcGFkZHIsIHZhbCk7CiAgICAgICAgaWYgKGtkYl9pc19hZGRyX2d1ZXN0X3RleHQodmFsLCBk
b21pZCkpIHsKICAgICAgICAgICAga2RiX3ByaW50X2luc3RyKHZhbCwgMSwgZG9taWQpOwogICAg
ICAgICAgICBudW0rKzsKICAgICAgICB9CiAgICAgICAgaWYgKG1heCsrID4gMTAwMDApICAgICAg
ICAgICAgLyogZG9uJ3Qgd2FsayBkb3duIHRoZSBzdGFjayBmb3JldmVyICovCiAgICAgICAgICAg
IGJyZWFrOyAgICAgICAgICAgICAgICAgICAgLyogMTBrIGlzIGNob3NlbiByYW5kb21seSAqLwog
ICAgICAgIHNwYWRkciArPSByZDsKICAgIH0KfQoKLyogUmVhZCBndWVzdCBtZW1vcnkgYW5kIGRp
c3BsYXkgYWRkcmVzcyB0aGF0IGxvb2tzIGxpa2UgdGV4dC4gKi8Kc3RhdGljIHZvaWQKa2RiX3No
b3dfZ3Vlc3Rfc3RhY2soc3RydWN0IGNwdV91c2VyX3JlZ3MgKnJlZ3MsIHN0cnVjdCB2Y3B1ICp2
Y3B1cCkKewogICAga2RidmFfdCBpcGFkZHI9cmVncy0+S0RCSVAsIHNwYWRkciA9IHJlZ3MtPktE
QlNQOwogICAgZG9taWRfdCBkb21pZCA9IHZjcHVwLT5kb21haW4tPmRvbWFpbl9pZDsKCiAgICBB
U1NFUlQoZG9taWQgIT0gRE9NSURfSURMRSk7CiAgICBfa2RiX3Nob3dfZ3Vlc3Rfc3RhY2soZG9t
aWQsIGlwYWRkciwgc3BhZGRyKTsKfQoKLyogZGlzcGxheSBzdGFjay4gaWYgdmNwdSBwdHIgZ2l2
ZW4sIHRoZW4gZGlzcGxheSBzdGFjayBmb3IgdGhhdC4gT3RoZXJ3aXNlLAogKiB1c2UgY3VycmVu
dCByZWdzICovCnN0YXRpYyBrZGJfY3B1X2NtZF90CmtkYl91c2dmX2Yodm9pZCkKewogICAga2Ri
cCgiZiBbdmNwdS1wdHJdOiBkdW1wIGN1cnJlbnQvdmNwdSBzdGFja1xuIik7CiAgICByZXR1cm4g
S0RCX0NQVV9NQUlOX0tEQjsKfQpzdGF0aWMga2RiX2NwdV9jbWRfdCAKa2RiX2NtZGZfZihpbnQg
YXJnYywgY29uc3QgY2hhciAqKmFyZ3YsIHN0cnVjdCBjcHVfdXNlcl9yZWdzICpyZWdzKQp7CiAg
ICBpZiAoYXJnYyA+IDEgJiYgKmFyZ3ZbMV0gPT0gJz8nKQogICAgICAgIHJldHVybiBrZGJfdXNn
Zl9mKCk7CgogICAgaWYgKGFyZ2MgPiAxICkgewogICAgICAgIHN0cnVjdCB2Y3B1ICp2cDsKICAg
ICAgICBpZiAoIWtkYl9zdHIydWxvbmcoYXJndlsxXSwgKHVsb25nICopJnZwKSB8fCAha2RiX3Zj
cHVfdmFsaWQodnApKSB7CiAgICAgICAgICAgIGtkYnAoImtkYjogQmFkIFZDUFUgcHRyOiVzXG4i
LCBhcmd2WzFdKTsKICAgICAgICAgICAgcmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7CiAgICAgICAg
fQogICAgICAgIGtkYl9zaG93X2d1ZXN0X3N0YWNrKCZ2cC0+YXJjaC51c2VyX3JlZ3MsIHZwKTsK
ICAgICAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKICAgIH0KICAgIGlmIChndWVzdF9tb2Rl
KHJlZ3MpKQogICAgICAgIGtkYl9zaG93X2d1ZXN0X3N0YWNrKHJlZ3MsIGN1cnJlbnQpOwogICAg
ZWxzZQogICAgICAgIHNob3dfdHJhY2UocmVncyk7CiAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tE
QjsKfQoKLyogZ2l2ZW4gYW4gc3BhZGRyIGFuZCBkb21pZCBmb3IgZ3Vlc3QsIGR1bXAgc3RhY2sg
Ki8Kc3RhdGljIGtkYl9jcHVfY21kX3QKa2RiX3VzZ2ZfZmcodm9pZCkKewogICAga2RicCgiZmcg
ZG9taWQgUklQIEVTUDogZHVtcCBndWVzdCBzdGFjayBnaXZlbiBkb21pZCwgUklQLCBhbmQgRVNQ
XG4iKTsKICAgIHJldHVybiBLREJfQ1BVX01BSU5fS0RCOwp9CnN0YXRpYyBrZGJfY3B1X2NtZF90
IAprZGJfY21kZl9mZyhpbnQgYXJnYywgY29uc3QgY2hhciAqKmFyZ3YsIHN0cnVjdCBjcHVfdXNl
cl9yZWdzICpyZWdzKQp7CiAgICBkb21pZF90IGRvbWlkOwogICAga2RidmFfdCBpcGFkZHIsIHNw
YWRkcjsKCiAgICBpZiAoYXJnYyAhPSA0KSAKICAgICAgICByZXR1cm4ga2RiX3VzZ2ZfZmcoKTsK
CiAgICBpZiAoa2RiX3N0cjJkb21pZChhcmd2WzFdLCAmZG9taWQsIDEpPT0wKSB7CiAgICAgICAg
cmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7CiAgICB9CiAgICBpZiAoa2RiX3N0cjJ1bG9uZyhhcmd2
WzJdLCAmaXBhZGRyKT09MCkgewogICAgICAgIGtkYnAoIkJhZCBpcGFkZHI6JXNcbiIsIGFyZ3Zb
Ml0pOwogICAgICAgIHJldHVybiBLREJfQ1BVX01BSU5fS0RCOwogICAgfQogICAgaWYgKGtkYl9z
dHIydWxvbmcoYXJndlszXSwgJnNwYWRkcik9PTApIHsKICAgICAgICBrZGJwKCJCYWQgc3BhZGRy
OiVzXG4iLCBhcmd2WzNdKTsKICAgICAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKICAgIH0K
ICAgIF9rZGJfc2hvd19ndWVzdF9zdGFjayhkb21pZCwgaXBhZGRyLCBzcGFkZHIpOwogICAgcmV0
dXJuIEtEQl9DUFVfTUFJTl9LREI7Cn0KCi8qIERpc3BsYXkga2RiIHN0YWNrLiBmb3IgZGVidWdn
aW5nIGtkYiBpdHNlbGYgKi8Kc3RhdGljIGtkYl9jcHVfY21kX3QKa2RiX3VzZ2Zfa2RiZih2b2lk
KQp7CiAgICBrZGJwKCJrZGJmOiBkaXNwbGF5IGtkYiBzdGFjay4gZm9yIGRlYnVnZ2luZyBrZGIg
b25seVxuIik7CiAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKfQpzdGF0aWMga2RiX2NwdV9j
bWRfdCAKa2RiX2NtZGZfa2RiZihpbnQgYXJnYywgY29uc3QgY2hhciAqKmFyZ3YsIHN0cnVjdCBj
cHVfdXNlcl9yZWdzICpyZWdzKQp7CiAgICBpZiAoYXJnYyA+IDEgJiYgKmFyZ3ZbMV0gPT0gJz8n
KQogICAgICAgIHJldHVybiBrZGJfdXNnZl9rZGJmKCk7CgogICAga2RiX3RyYXBfaW1tZWQoS0RC
X1RSQVBfS0RCU1RBQ0spOwogICAgcmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7Cn0KCi8qIHdvcmtl
ciBmdW5jdGlvbiB0byBkaXNwbGF5IG1lbW9yeS4gUmVxdWVzdCBjb3VsZCBiZSBmb3IgYW55IGd1
ZXN0LCBkb21pZC4KICogQWxzbyBhZGRyZXNzIGNvdWxkIGJlIG1hY2hpbmUgb3IgdmlydHVhbCAq
LwpzdGF0aWMgdm9pZApfa2RiX2Rpc3BsYXlfbWVtKGtkYnZhX3QgKmFkZHJwLCBpbnQgKmxlbnAs
IGludCB3b3Jkc3osIGludCBkb21pZCwgaW50IGlzX21hZGRyKQp7CiAgICAjZGVmaW5lIEREQlVG
U1ogNDA5NgoKICAgIGtkYmJ5dF90IGJ1ZltEREJVRlNaXSwgKmJwOwogICAgaW50IG51bXJkLCBi
eXRlczsKICAgIGludCBsZW4gPSAqbGVucDsKICAgIGtkYnZhX3QgYWRkciA9ICphZGRycDsKCiAg
ICAvKiByb3VuZCBsZW4gZG93biB0byB3b3Jkc3ogYm91bmRyeSBiZWNhdXNlIG9uIGludGVsIGVu
ZGlhbiwgcHJpbnRpbmcKICAgICAqIGNoYXJhY3RlcnMgaXMgbm90IHBydWRlbnQsIChsb25nIGFu
ZCBpbnRzIGNhbid0IGJlIGludGVycHJldGVkIAogICAgICogZWFzaWx5KSAqLwogICAgbGVuICY9
IH4od29yZHN6LTEpOwogICAgbGVuID0gS0RCTUlOKEREQlVGU1osIGxlbik7CiAgICBsZW4gPSBs
ZW4gPyBsZW4gOiB3b3Jkc3o7CgogICAgS0RCR1AoImRtZW06YWRkcjolbHggYnVmOiVwIGxlbjok
JWQgZG9taWQ6JWQgc3o6JCVkIG1hZGRyOiVkXG4iLCBhZGRyLAogICAgICAgICAgYnVmLCBsZW4s
IGRvbWlkLCB3b3Jkc3osIGlzX21hZGRyKTsKICAgIGlmIChpc19tYWRkcikKICAgICAgICBudW1y
ZD1rZGJfcmVhZF9tbWVtKChrZGJtYV90KWFkZHIsIGJ1ZiwgbGVuKTsKICAgIGVsc2UKICAgICAg
ICBudW1yZD1rZGJfcmVhZF9tZW0oYWRkciwgYnVmLCBsZW4sIGRvbWlkKTsKICAgIGlmIChudW1y
ZCAhPSBsZW4pCiAgICAgICAga2RicCgiTWVtb3J5IHJlYWQgZXJyb3IuIEJ5dGVzIHJlYWQ6JCVk
XG4iLCBudW1yZCk7CgogICAgZm9yIChicCA9IGJ1ZjsgbnVtcmQgPiAwOykgewogICAgICAgIGtk
YnAoIiUwMTZseDogIiwgYWRkcik7IAoKICAgICAgICAvKiBkaXNwbGF5IDE2IGJ5dGVzIHBlciBs
aW5lICovCiAgICAgICAgZm9yIChieXRlcz0wOyBieXRlcyA8IDE2ICYmIG51bXJkID4gMDsgYnl0
ZXMgKz0gd29yZHN6KSB7CiAgICAgICAgICAgIGlmIChudW1yZCA+PSB3b3Jkc3opIHsKICAgICAg
ICAgICAgICAgIGlmICh3b3Jkc3ogPT0gOCkKICAgICAgICAgICAgICAgICAgICBrZGJwKCIgJTAx
Nmx4IiwgKihsb25nICopYnApOwogICAgICAgICAgICAgICAgZWxzZQogICAgICAgICAgICAgICAg
ICAgIGtkYnAoIiAlMDh4IiwgKihpbnQgKilicCk7CiAgICAgICAgICAgICAgICBicCArPSB3b3Jk
c3o7CiAgICAgICAgICAgICAgICBudW1yZCAtPSB3b3Jkc3o7CiAgICAgICAgICAgICAgICBhZGRy
ICs9IHdvcmRzejsKICAgICAgICAgICAgfQogICAgICAgIH0KICAgICAgICBrZGJwKCJcbiIpOwog
ICAgICAgIGNvbnRpbnVlOwogICAgfQogICAgKmxlbnAgPSBsZW47CiAgICAqYWRkcnAgPSBhZGRy
Owp9CgovKiBkaXNwbGF5IG1hY2hpbmUgbWVtLCBpZSwgdGhlIGdpdmVuIGFkZHJlc3MgaXMgbWFj
aGluZSBhZGRyZXNzICovCnN0YXRpYyBrZGJfY3B1X2NtZF90IAprZGJfZGlzcGxheV9tbWVtKGlu
dCBhcmdjLCBjb25zdCBjaGFyICoqYXJndiwgaW50IHdvcmRzeiwga2RiX3VzZ2ZfdCB1c2dfZnAp
CnsKICAgIHN0YXRpYyBrZGJtYV90IG1hZGRyOwogICAgc3RhdGljIGludCBsZW47CiAgICBzdGF0
aWMgZG9taWRfdCBpZCA9IERPTUlEX0lETEU7CgogICAgaWYgKGFyZ2MgPT0gLTEpIHsKICAgICAg
ICBfa2RiX2Rpc3BsYXlfbWVtKCZtYWRkciwgJmxlbiwgd29yZHN6LCBpZCwgMSk7ICAvKiBjbWQg
cmVwZWF0ICovCiAgICAgICAgcmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7CiAgICB9CiAgICBpZiAo
YXJnYyA8PSAxIHx8ICphcmd2WzFdID09ICc/JykKICAgICAgICByZXR1cm4gKCp1c2dfZnApKCk7
CgogICAgLyogY2hlY2sgaWYgbnVtIG9mIGJ5dGVzIHRvIGRpc3BsYXkgaXMgZ2l2ZW4gYnkgdXNl
ciAqLwogICAgaWYgKGFyZ2MgPj0gMykgewogICAgICAgIGlmICgha2RiX3N0cjJkZWNpKGFyZ3Zb
Ml0sICZsZW4pKSB7CiAgICAgICAgICAgIGtkYnAoIkludmFsaWQgbGVuZ3RoOiVzXG4iLCBhcmd2
WzJdKTsKICAgICAgICAgICAgcmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7CiAgICAgICAgfSAKICAg
IH0gZWxzZQogICAgICAgIGxlbiA9IDMyOyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAvKiBkZWZhdWx0IHJlYWQgbGVuICovCgogICAgaWYgKCFrZGJfc3RyMnVsb25nKGFyZ3Zb
MV0sICZtYWRkcikpIHsKICAgICAgICBrZGJwKCJJbnZhbGlkIGFyZ3VtZW50OiVzXG4iLCBhcmd2
WzFdKTsKICAgICAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKICAgIH0KICAgIF9rZGJfZGlz
cGxheV9tZW0oJm1hZGRyLCAmbGVuLCB3b3Jkc3osIDAsIDEpOwogICAgcmV0dXJuIEtEQl9DUFVf
TUFJTl9LREI7Cn0KCi8qIAogKiBGVU5DVElPTjogRGlzcGFseSBtYWNoaW5lIE1lbW9yeSBXb3Jk
CiAqLwpzdGF0aWMga2RiX2NwdV9jbWRfdAprZGJfdXNnZl9kd20odm9pZCkKewogICAga2RicCgi
ZHdtOiAgbWFkZHJ8c3ltIFtudW1dIDogZHVtcCBtZW1vcnkgd29yZCBnaXZlbiBtYWNoaW5lIGFk
ZHJcbiIpOwogICAgcmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7Cn0Kc3RhdGljIGtkYl9jcHVfY21k
X3QgCmtkYl9jbWRmX2R3bShpbnQgYXJnYywgY29uc3QgY2hhciAqKmFyZ3YsIHN0cnVjdCBjcHVf
dXNlcl9yZWdzICpyZWdzKQp7CiAgICByZXR1cm4ga2RiX2Rpc3BsYXlfbW1lbShhcmdjLCBhcmd2
LCA0LCBrZGJfdXNnZl9kd20pOwp9CgovKiAKICogRlVOQ1RJT046IERpc3BhbHkgbWFjaGluZSBN
ZW1vcnkgRG91YmxlV29yZCAKICovCnN0YXRpYyBrZGJfY3B1X2NtZF90CmtkYl91c2dmX2RkbSh2
b2lkKQp7CiAgICBrZGJwKCJkZG06ICBtYWRkcnxzeW0gW251bV0gOiBkdW1wIGRvdWJsZSB3b3Jk
IGdpdmVuIG1hY2hpbmUgYWRkclxuIik7CiAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKfQpz
dGF0aWMga2RiX2NwdV9jbWRfdCAKa2RiX2NtZGZfZGRtKGludCBhcmdjLCBjb25zdCBjaGFyICoq
YXJndiwgc3RydWN0IGNwdV91c2VyX3JlZ3MgKnJlZ3MpCnsKICAgIHJldHVybiBrZGJfZGlzcGxh
eV9tbWVtKGFyZ2MsIGFyZ3YsIDgsIGtkYl91c2dmX2RkbSk7Cn0KCi8qIAogKiBGVU5DVElPTjog
RGlzcGFseSBNZW1vcnkgOiB3b3JkIG9yIGRvdWJsZXdvcmQKICogICAgICAgICAgIHdvcmRzeiA6
IGJ5dGVzIGluIHdvcmQuIDQgb3IgOAogKgogKiAgICAgICAgICAgV2UgZGlzcGxheSB1cHRvIEJV
RlNaIGJ5dGVzLiBVc2VyIGNhbiBqdXN0IHByZXNzIGVudGVyIGZvciBtb3JlLgogKiAgICAgICAg
ICAgYWRkciBpcyBhbHdheXMgaW4gaGV4IHdpdGggb3Igd2l0aG91dCBsZWFkaW5nIDB4CiAqLwpz
dGF0aWMga2RiX2NwdV9jbWRfdCAKa2RiX2Rpc3BsYXlfbWVtKGludCBhcmdjLCBjb25zdCBjaGFy
ICoqYXJndiwgaW50IHdvcmRzeiwga2RiX3VzZ2ZfdCB1c2dfZnApCnsKICAgIHN0YXRpYyBrZGJ2
YV90IGFkZHI7CiAgICBzdGF0aWMgaW50IGxlbjsKICAgIHN0YXRpYyBkb21pZF90IGlkID0gRE9N
SURfSURMRTsKCiAgICBpZiAoYXJnYyA9PSAtMSkgewogICAgICAgIF9rZGJfZGlzcGxheV9tZW0o
JmFkZHIsICZsZW4sIHdvcmRzeiwgaWQsIDApOyAgLyogY21kIHJlcGVhdCAqLwogICAgICAgIHJl
dHVybiBLREJfQ1BVX01BSU5fS0RCOwogICAgfQogICAgaWYgKGFyZ2MgPD0gMSB8fCAqYXJndlsx
XSA9PSAnPycpCiAgICAgICAgcmV0dXJuICgqdXNnX2ZwKSgpOwoKICAgIGlkID0gRE9NSURfSURM
RTsgICAgICAgICAgICAgICAgLyogbm90IGEgY29tbWFuZCByZXBlYXQsIHJlc2V0IGRvbSBpZCAq
LwogICAgaWYgKGFyZ2MgPj0gNCkgeyAKICAgICAgICBpZiAoIWtkYl9zdHIyZG9taWQoYXJndlsz
XSwgJmlkLCAxKSkgCiAgICAgICAgICAgIHJldHVybiBLREJfQ1BVX01BSU5fS0RCOwogICAgfQog
ICAgLyogY2hlY2sgaWYgbnVtIG9mIGJ5dGVzIHRvIGRpc3BsYXkgaXMgZ2l2ZW4gYnkgdXNlciAq
LwogICAgaWYgKGFyZ2MgPj0gMykgewogICAgICAgIGlmICgha2RiX3N0cjJkZWNpKGFyZ3ZbMl0s
ICZsZW4pKSB7CiAgICAgICAgICAgIGtkYnAoIkludmFsaWQgbGVuZ3RoOiVzXG4iLCBhcmd2WzJd
KTsKICAgICAgICAgICAgcmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7CiAgICAgICAgfSAKICAgIH0g
ZWxzZQogICAgICAgIGxlbiA9IDMyOyAgICAgICAgICAgICAgICAgICAgICAgLyogZGVmYXVsdCBy
ZWFkIGxlbiAqLwogICAgaWYgKCFrZGJfc3RyMmFkZHIoYXJndlsxXSwgJmFkZHIsIGlkKSkgewog
ICAgICAgIGtkYnAoIkludmFsaWQgYXJndW1lbnQ6JXNcbiIsIGFyZ3ZbMV0pOwogICAgICAgIHJl
dHVybiBLREJfQ1BVX01BSU5fS0RCOwogICAgfQoKICAgIF9rZGJfZGlzcGxheV9tZW0oJmFkZHIs
ICZsZW4sIHdvcmRzeiwgaWQsIDApOwogICAgcmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7Cn0KCi8q
IAogKiBGVU5DVElPTjogRGlzcGFseSBNZW1vcnkgV29yZAogKi8Kc3RhdGljIGtkYl9jcHVfY21k
X3QKa2RiX3VzZ2ZfZHcodm9pZCkKewogICAga2RicCgiZHcgdmFkZHJ8c3ltIFtudW1dW2RvbWlk
XSA6IGR1bXAgbWVtIHdvcmQuIG51bSByZXF1aXJlZCBmb3IgZG9taWRcbiIpOwogICAgcmV0dXJu
IEtEQl9DUFVfTUFJTl9LREI7Cn0Kc3RhdGljIGtkYl9jcHVfY21kX3QgCmtkYl9jbWRmX2R3KGlu
dCBhcmdjLCBjb25zdCBjaGFyICoqYXJndiwgc3RydWN0IGNwdV91c2VyX3JlZ3MgKnJlZ3MpCnsK
ICAgIHJldHVybiBrZGJfZGlzcGxheV9tZW0oYXJnYywgYXJndiwgNCwga2RiX3VzZ2ZfZHcpOwp9
CgovKiAKICogRlVOQ1RJT046IERpc3BhbHkgTWVtb3J5IERvdWJsZVdvcmQgCiAqLwpzdGF0aWMg
a2RiX2NwdV9jbWRfdAprZGJfdXNnZl9kZCh2b2lkKQp7CiAgICBrZGJwKCJkZCB2YWRkcnxzeW0g
W251bV1bZG9taWRdIDogZHVtcCBkd29yZC4gbnVtIHJlcXVpcmVkIGZvciBkb21pZFxuIik7CiAg
ICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKfQpzdGF0aWMga2RiX2NwdV9jbWRfdCAKa2RiX2Nt
ZGZfZGQoaW50IGFyZ2MsIGNvbnN0IGNoYXIgKiphcmd2LCBzdHJ1Y3QgY3B1X3VzZXJfcmVncyAq
cmVncykKewogICAgcmV0dXJuIGtkYl9kaXNwbGF5X21lbShhcmdjLCBhcmd2LCA4LCBrZGJfdXNn
Zl9kZCk7Cn0KCi8qIAogKiBGVU5DVElPTjogTW9kaWZ5IE1lbW9yeSBXb3JkIAogKi8Kc3RhdGlj
IGtkYl9jcHVfY21kX3QKa2RiX3VzZ2ZfbXcodm9pZCkKewogICAga2RicCgibXcgdmFkZHJ8c3lt
IHZhbCBbZG9taWRdIDogbW9kaWZ5IG1lbW9yeSB3b3JkIGluIHZhZGRyXG4iKTsKICAgIHJldHVy
biBLREJfQ1BVX01BSU5fS0RCOwp9CnN0YXRpYyBrZGJfY3B1X2NtZF90IAprZGJfY21kZl9tdyhp
bnQgYXJnYywgY29uc3QgY2hhciAqKmFyZ3YsIHN0cnVjdCBjcHVfdXNlcl9yZWdzICpyZWdzKQp7
CiAgICB1bG9uZyB2YWw7CiAgICBrZGJ2YV90IGFkZHI7CiAgICBkb21pZF90IGlkID0gRE9NSURf
SURMRTsKCiAgICBpZiAoYXJnYyA8IDMpIHsKICAgICAgICByZXR1cm4ga2RiX3VzZ2ZfbXcoKTsK
ICAgIH0KICAgIGlmIChhcmdjID49NCkgewogICAgICAgIGlmICgha2RiX3N0cjJkb21pZChhcmd2
WzNdLCAmaWQsIDEpKSAKICAgICAgICAgICAgcmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7CiAgICB9
CiAgICBpZiAoIWtkYl9zdHIydWxvbmcoYXJndlsyXSwgJnZhbCkpIHsKICAgICAgICBrZGJwKCJJ
bnZhbGlkIHZhbDogJXNcbiIsIGFyZ3ZbMl0pOwogICAgICAgIHJldHVybiBLREJfQ1BVX01BSU5f
S0RCOwogICAgfQogICAgaWYgKCFrZGJfc3RyMmFkZHIoYXJndlsxXSwgJmFkZHIsIGlkKSkgewog
ICAgICAgIGtkYnAoIkludmFsaWQgYWRkci9zeW06ICVzXG4iLCBhcmd2WzFdKTsKICAgICAgICBy
ZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKICAgIH0KICAgIGlmIChrZGJfd3JpdGVfbWVtKGFkZHIs
IChrZGJieXRfdCAqKSZ2YWwsIDQsIGlkKSAhPSA0KQogICAgICAgIGtkYnAoIlVuYWJsZSB0byBz
ZXQgMHglbHggdG8gMHglbHhcbiIsIGFkZHIsIHZhbCk7CiAgICByZXR1cm4gS0RCX0NQVV9NQUlO
X0tEQjsKfQoKLyogCiAqIEZVTkNUSU9OOiBNb2RpZnkgTWVtb3J5IERvdWJsZVdvcmQgCiAqLwpz
dGF0aWMga2RiX2NwdV9jbWRfdAprZGJfdXNnZl9tZCh2b2lkKQp7CiAgICBrZGJwKCJtZCB2YWRk
cnxzeW0gdmFsIFtkb21pZF0gOiBtb2RpZnkgbWVtb3J5IGR3b3JkIGluIHZhZGRyXG4iKTsKICAg
IHJldHVybiBLREJfQ1BVX01BSU5fS0RCOwp9CnN0YXRpYyBrZGJfY3B1X2NtZF90IAprZGJfY21k
Zl9tZChpbnQgYXJnYywgY29uc3QgY2hhciAqKmFyZ3YsIHN0cnVjdCBjcHVfdXNlcl9yZWdzICpy
ZWdzKQp7CiAgICB1bG9uZyB2YWw7CiAgICBrZGJ2YV90IGFkZHI7CiAgICBkb21pZF90IGlkID0g
RE9NSURfSURMRTsKCiAgICBpZiAoYXJnYyA8IDMpIHsKICAgICAgICByZXR1cm4ga2RiX3VzZ2Zf
bWQoKTsKICAgIH0KICAgIGlmIChhcmdjID49NCkgewogICAgICAgIGlmICgha2RiX3N0cjJkb21p
ZChhcmd2WzNdLCAmaWQsIDEpKSB7CiAgICAgICAgICAgIHJldHVybiBLREJfQ1BVX01BSU5fS0RC
OwogICAgICAgIH0KICAgIH0KICAgIGlmICgha2RiX3N0cjJ1bG9uZyhhcmd2WzJdLCAmdmFsKSkg
ewogICAgICAgIGtkYnAoIkludmFsaWQgdmFsOiAlc1xuIiwgYXJndlsyXSk7CiAgICAgICAgcmV0
dXJuIEtEQl9DUFVfTUFJTl9LREI7CiAgICB9CiAgICBpZiAoIWtkYl9zdHIyYWRkcihhcmd2WzFd
LCAmYWRkciwgaWQpKSB7CiAgICAgICAga2RicCgiSW52YWxpZCBhZGRyL3N5bTogJXNcbiIsIGFy
Z3ZbMV0pOwogICAgICAgIHJldHVybiBLREJfQ1BVX01BSU5fS0RCOwogICAgfQogICAgaWYgKGtk
Yl93cml0ZV9tZW0oYWRkciwgKGtkYmJ5dF90ICopJnZhbCxzaXplb2YodmFsKSxpZCkgIT0gc2l6
ZW9mKHZhbCkpCiAgICAgICAga2RicCgiVW5hYmxlIHRvIHNldCAweCVseCB0byAweCVseFxuIiwg
YWRkciwgdmFsKTsKCiAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKfQoKc3RydWN0ICBYZ3Rf
ZGVzY19zdHJ1Y3QgewogICAgdW5zaWduZWQgc2hvcnQgc2l6ZTsKICAgIHVuc2lnbmVkIGxvbmcg
YWRkcmVzcyBfX2F0dHJpYnV0ZV9fKChwYWNrZWQpKTsKfTsKCnZvaWQKa2RiX3Nob3dfc3BlY2lh
bF9yZWdzKHN0cnVjdCBjcHVfdXNlcl9yZWdzICpyZWdzKQp7CiAgICBzdHJ1Y3QgWGd0X2Rlc2Nf
c3RydWN0IGRlc2M7CiAgICB1bnNpZ25lZCBzaG9ydCB0cjsgICAgICAgICAgICAgICAgIC8qIFRh
c2sgUmVnaXN0ZXIgc2VnbWVudCBzZWxlY3RvciAqLwogICAgX191NjQgZWZlcjsKCiAgICBrZGJw
KCJcblNwZWNpYWwgUmVnaXN0ZXJzOlxuIik7CiAgICBfX2FzbV9fIF9fdm9sYXRpbGVfXyAoInNp
ZHQgICglMCkgXG4iIDo6ICJhIigmZGVzYykgOiAibWVtb3J5Iik7CiAgICBrZGJwKCJJRFRSOiBh
ZGRyOiAlMDE2bHggbGltaXQ6ICUwNHhcbiIsIGRlc2MuYWRkcmVzcywgZGVzYy5zaXplKTsKICAg
IF9fYXNtX18gX192b2xhdGlsZV9fICgic2dkdCAgKCUwKSBcbiIgOjogImEiKCZkZXNjKSA6ICJt
ZW1vcnkiKTsKICAgIGtkYnAoIkdEVFI6IGFkZHI6ICUwMTZseCBsaW1pdDogJTA0eFxuIiwgZGVz
Yy5hZGRyZXNzLCBkZXNjLnNpemUpOwoKICAgIGtkYnAoImNyMDogJTAxNmx4ICBjcjI6ICUwMTZs
eFxuIiwgcmVhZF9jcjAoKSwgcmVhZF9jcjIoKSk7CiAgICBrZGJwKCJjcjM6ICUwMTZseCAgY3I0
OiAlMDE2bHhcbiIsIHJlYWRfY3IzKCksIHJlYWRfY3I0KCkpOwogICAgX19hc21fXyBfX3ZvbGF0
aWxlX18gKCJzdHIgKCUwKSBcbiI6OiAiYSIoJnRyKSA6ICJtZW1vcnkiKTsKICAgIGtkYnAoIlRS
OiAleFxuIiwgdHIpOwoKICAgIHJkbXNybChNU1JfRUZFUiwgZWZlcik7ICAgIC8qIElBMzJfRUZF
UiAqLwogICAga2RicCgiZWZlcjoiS0RCRjY0IiBMTUEoSUEtMzJlIG1vZGUpOiVkIFNDRShzeXNj
YWxsL3N5c3JldCk6JWRcbiIsCiAgICAgICAgIGVmZXIsICgoZWZlciZFRkVSX0xNQSkgIT0gMCks
ICgoZWZlciZFRkVSX1NDRSkgIT0gMCkpOwoKICAgIGtkYnAoIkRSMDogJTAxNmx4ICBEUjE6JTAx
Nmx4ICBEUjI6JTAxNmx4XG4iLCBrZGJfcmRfZGJncmVnKDApLAogICAgICAgICBrZGJfcmRfZGJn
cmVnKDEpLCBrZGJfcmRfZGJncmVnKDIpKTsgCiAgICBrZGJwKCJEUjM6ICUwMTZseCAgRFI2OiUw
MTZseCAgRFI3OiUwMTZseFxuIiwga2RiX3JkX2RiZ3JlZygzKSwKICAgICAgICAga2RiX3JkX2Ri
Z3JlZyg2KSwga2RiX3JkX2RiZ3JlZyg3KSk7IAp9CgovKiAKICogRlVOQ1RJT046IERpc3BhbHkg
UmVnaXN0ZXJzLiBJZiAic3AiIGFyZ3VtZW50LCB0aGVuIGRpc3BsYXkgYWRkaXRpb25hbCByZWdz
CiAqLwpzdGF0aWMga2RiX2NwdV9jbWRfdAprZGJfdXNnZl9kcih2b2lkKQp7CiAgICBrZGJwKCJk
ciBbc3BdOiBkaXNwbGF5IHJlZ2lzdGVycy4gc3AgdG8gZGlzcGxheSBzcGVjaWFsIHJlZ3MgYWxz
b1xuIik7CiAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKfQpzdGF0aWMga2RiX2NwdV9jbWRf
dCAKa2RiX2NtZGZfZHIoaW50IGFyZ2MsIGNvbnN0IGNoYXIgKiphcmd2LCBzdHJ1Y3QgY3B1X3Vz
ZXJfcmVncyAqcmVncykKewogICAgaWYgKGFyZ2MgPiAxICYmICphcmd2WzFdID09ICc/JykKICAg
ICAgICByZXR1cm4ga2RiX3VzZ2ZfZHIoKTsKCiAgICBLREJHUDEoInJlZ3M6JXAgLnJzcDolbHgg
LnJpcDolbHhcbiIsIHJlZ3MsIHJlZ3MtPnJzcCwgcmVncy0+cmlwKTsKICAgIHNob3dfcmVnaXN0
ZXJzKHJlZ3MpOwogICAgaWYgKGFyZ2MgPiAxICYmICFzdHJjbXAoYXJndlsxXSwgInNwIikpIAog
ICAgICAgIGtkYl9zaG93X3NwZWNpYWxfcmVncyhyZWdzKTsKICAgIHJldHVybiBLREJfQ1BVX01B
SU5fS0RCOwp9CgovKiBzaG93IHJlZ2lzdGVycyBvbiBzdGFjayBib3R0b20gd2hlcmUgZ3Vlc3Qg
Y29udGV4dCBpcy4gc2FtZSBhcyBkciBpZgogKiBub3QgcnVubmluZyBpbiBndWVzdCBtb2RlICov
CnN0YXRpYyBrZGJfY3B1X2NtZF90CmtkYl91c2dmX2RyZyh2b2lkKQp7CiAgICBrZGJwKCJkcmc6
IGRpc3BsYXkgYWN0aXZlIGd1ZXN0IHJlZ2lzdGVycyBhdCBzdGFjayBib3R0b21cbiIpOwogICAg
cmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7Cn0Kc3RhdGljIGtkYl9jcHVfY21kX3QgCmtkYl9jbWRm
X2RyZyhpbnQgYXJnYywgY29uc3QgY2hhciAqKmFyZ3YsIHN0cnVjdCBjcHVfdXNlcl9yZWdzICpy
ZWdzKQp7CiAgICBpZiAoYXJnYyA+IDEgJiYgKmFyZ3ZbMV0gPT0gJz8nKQogICAgICAgIHJldHVy
biBrZGJfdXNnZl9kcmcoKTsKCiAgICBrZGJwKCJcdE5vdGU6IGRzL2VzL2ZzL2dzIGV0Yy4uIGFy
ZSBub3Qgc2F2ZWQgZnJvbSB0aGUgY3B1XG4iKTsKICAgIGtkYl9wcmludF91cmVncyhndWVzdF9j
cHVfdXNlcl9yZWdzKCkpOwogICAgcmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7Cn0KCi8qIAogKiBG
VU5DVElPTjogTW9kaWZ5IFJlZ2lzdGVyCiAqLwpzdGF0aWMga2RiX2NwdV9jbWRfdAprZGJfdXNn
Zl9tcih2b2lkKQp7CiAgICBrZGJwKCJtciByZWcgdmFsIDogTW9kaWZ5IFJlZ2lzdGVyLiB2YWwg
YXNzdW1lZCBpbiBoZXhcbiIpOwogICAgcmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7Cn0Kc3RhdGlj
IGtkYl9jcHVfY21kX3QgCmtkYl9jbWRmX21yKGludCBhcmdjLCBjb25zdCBjaGFyICoqYXJndiwg
c3RydWN0IGNwdV91c2VyX3JlZ3MgKnJlZ3MpCnsKICAgIGNvbnN0IGNoYXIgKmFyZ3A7CiAgICBp
bnQgcmVnb2ZmczsKICAgIHVsb25nIHZhbDsKCiAgICBpZiAoYXJnYyAhPSAzIHx8ICFrZGJfc3Ry
MnVsb25nKGFyZ3ZbMl0sICZ2YWwpKSB7CiAgICAgICAgcmV0dXJuIGtkYl91c2dmX21yKCk7CiAg
ICB9CiAgICBhcmdwID0gYXJndlsxXTsKCiNpZiBkZWZpbmVkKF9feDg2XzY0X18pCiAgICBpZiAo
KHJlZ29mZnM9a2RiX3ZhbGlkX3JlZyhhcmdwKSkgIT0gLTEpCiAgICAgICAgKigodWludDY0X3Qg
KikoKGNoYXIgKilyZWdzK3JlZ29mZnMpKSA9IHZhbDsKI2Vsc2UKICAgIGlmICghc3RyY21wKGFy
Z3AsICJlYXgiKSkKICAgICAgICByZWdzLT5lYXggPSB2YWw7CiAgICBlbHNlIGlmICghc3RyY21w
KGFyZ3AsICJlYngiKSkKICAgICAgICByZWdzLT5lYnggPSB2YWw7CiAgICBlbHNlIGlmICghc3Ry
Y21wKGFyZ3AsICJlY3giKSkKICAgICAgICByZWdzLT5lY3ggPSB2YWw7CiAgICBlbHNlIGlmICgh
c3RyY21wKGFyZ3AsICJlZHgiKSkKICAgICAgICByZWdzLT5lZHggPSB2YWw7CiAgICBlbHNlIGlm
ICghc3RyY21wKGFyZ3AsICJlc2kiKSkKICAgICAgICByZWdzLT5lc2kgPSB2YWw7CiAgICBlbHNl
IGlmICghc3RyY21wKGFyZ3AsICJlZGkiKSkKICAgICAgICByZWdzLT5lZGkgPSB2YWw7CiAgICBl
bHNlIGlmICghc3RyY21wKGFyZ3AsICJlYnAiKSkKICAgICAgICByZWdzLT5lYnAgPSB2YWw7CiAg
ICBlbHNlIGlmICghc3RyY21wKGFyZ3AsICJlc3AiKSkKICAgICAgICByZWdzLT5lc3AgPSB2YWw7
CiAgICBlbHNlIGlmICghc3RyY21wKGFyZ3AsICJlZmxhZ3MiKSB8fCAhc3RyY21wKGFyZ3AsICJy
ZmxhZ3MiKSkKICAgICAgICByZWdzLT5lZmxhZ3MgPSB2YWw7CiNlbmRpZgogICAgZWxzZQogICAg
ICAgIGtkYnAoIkVycm9yLiBCYWQgcmVnaXN0ZXIgOiAlc1xuIiwgYXJncCk7CgogICAgcmV0dXJu
IEtEQl9DUFVfTUFJTl9LREI7Cn0KCi8qIAogKiBGVU5DVElPTjogU2luZ2xlIFN0ZXAKICovCnN0
YXRpYyBrZGJfY3B1X2NtZF90CmtkYl91c2dmX3NzKHZvaWQpCnsKICAgIGtkYnAoInNzOiBzaW5n
bGUgc3RlcFxuIik7CiAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKfQpzdGF0aWMga2RiX2Nw
dV9jbWRfdCAKa2RiX2NtZGZfc3MoaW50IGFyZ2MsIGNvbnN0IGNoYXIgKiphcmd2LCBzdHJ1Y3Qg
Y3B1X3VzZXJfcmVncyAqcmVncykKewogICAgI2RlZmluZSBLREJfSEFMVF9JTlNUUiAweGY0Cgog
ICAga2RiYnl0X3QgYnl0ZTsKICAgIHN0cnVjdCBkb21haW4gKmRwID0gY3VycmVudC0+ZG9tYWlu
OwogICAgZG9taWRfdCBpZCA9IGd1ZXN0X21vZGUocmVncykgPyBkcC0+ZG9tYWluX2lkIDogRE9N
SURfSURMRTsKCiAgICBpZiAoYXJnYyA+IDEgJiYgKmFyZ3ZbMV0gPT0gJz8nKQogICAgICAgIHJl
dHVybiBrZGJfdXNnZl9zcygpOwoKICAgIEtEQkdQKCJlbnRlciBrZGJfY21kZl9zcyBcbiIpOwog
ICAgaWYgKCFyZWdzKSB7CiAgICAgICAga2RicCgiJXM6IHJlZ3Mgbm90IGF2YWlsYWJsZVxuIiwg
X19GVU5DVElPTl9fKTsKICAgICAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKICAgIH0KICAg
IGlmIChrZGJfcmVhZF9tZW0ocmVncy0+S0RCSVAsICZieXRlLCAxLCBpZCkgPT0gMSkgewogICAg
ICAgIGlmIChieXRlID09IEtEQl9IQUxUX0lOU1RSKSB7CiAgICAgICAgICAgIGtkYnAoImtkYjog
anVtcGluZyBvdmVyIGhhbHQgaW5zdHJ1Y3Rpb25cbiIpOwogICAgICAgICAgICByZWdzLT5LREJJ
UCsrOwogICAgICAgIH0KICAgIH0gZWxzZSB7CiAgICAgICAga2RicCgia2RiOiBGYWlsZWQgdG8g
cmVhZCBieXRlIGF0OiAlbHhcbiIsIHJlZ3MtPktEQklQKTsKICAgICAgICByZXR1cm4gS0RCX0NQ
VV9NQUlOX0tEQjsKICAgIH0KICAgIGlmIChndWVzdF9tb2RlKHJlZ3MpICYmIGlzX2h2bV9vcl9o
eWJfdmNwdShjdXJyZW50KSkgewogICAgICAgIGRwLT5kZWJ1Z2dlcl9hdHRhY2hlZCA9IDE7ICAv
KiBzZWUgc3ZtX2RvX3Jlc3VtZS92bXhfZG9fICovCiAgICAgICAgY3VycmVudC0+YXJjaC5odm1f
dmNwdS5zaW5nbGVfc3RlcCA9IDE7CiAgICB9IGVsc2UKICAgICAgICByZWdzLT5lZmxhZ3MgfD0g
WDg2X0VGTEFHU19URjsKCiAgICByZXR1cm4gS0RCX0NQVV9TUzsKfQoKLyogCiAqIEZVTkNUSU9O
OiBOZXh0IEluc3RydWN0aW9uLCBzdGVwIG92ZXIgdGhlIGNhbGwgaW5zdHIgdG8gdGhlIG5leHQg
aW5zdHIKICovCnN0YXRpYyBrZGJfY3B1X2NtZF90CmtkYl91c2dmX25pKHZvaWQpCnsKICAgIGtk
YnAoIm5pOiBzaW5nbGUgc3RlcCwgc3RlcHBpbmcgb3ZlciBmdW5jdGlvbiBjYWxsc1xuIik7CiAg
ICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKfQpzdGF0aWMga2RiX2NwdV9jbWRfdCAKa2RiX2Nt
ZGZfbmkoaW50IGFyZ2MsIGNvbnN0IGNoYXIgKiphcmd2LCBzdHJ1Y3QgY3B1X3VzZXJfcmVncyAq
cmVncykKewogICAgaW50IHN6LCBpOwogICAgZG9taWRfdCBpZD1ndWVzdF9tb2RlKHJlZ3MpID8g
Y3VycmVudC0+ZG9tYWluLT5kb21haW5faWQ6RE9NSURfSURMRTsKCiAgICBpZiAoYXJnYyA+IDEg
JiYgKmFyZ3ZbMV0gPT0gJz8nKQogICAgICAgIHJldHVybiBrZGJfdXNnZl9uaSgpOwoKICAgIEtE
QkdQKCJlbnRlciBrZGJfY21kZl9uaSBcbiIpOwogICAgaWYgKCFyZWdzKSB7CiAgICAgICAga2Ri
cCgiJXM6IHJlZ3Mgbm90IGF2YWlsYWJsZVxuIiwgX19GVU5DVElPTl9fKTsKICAgICAgICByZXR1
cm4gS0RCX0NQVV9NQUlOX0tEQjsKICAgIH0KICAgIGlmICgoc3o9a2RiX2NoZWNrX2NhbGxfaW5z
dHIoaWQsIHJlZ3MtPktEQklQKSkgPT0gMCkgIC8qICFjYWxsIGluc3RyICovCiAgICAgICAgcmV0
dXJuIGtkYl9jbWRmX3NzKGFyZ2MsIGFyZ3YsIHJlZ3MpOyAgICAgICAgIC8qIGp1c3QgZG8gc3Mg
Ki8KCiAgICBpZiAoKGk9a2RiX3NldF9icChpZCwgcmVncy0+S0RCSVArc3osIDEsMCwwLDAsMCkp
ID49IEtEQk1BWFNCUCkgLyogZmFpbGVkICovCiAgICAgICAgcmV0dXJuIEtEQl9DUFVfTUFJTl9L
REI7CgogICAga2RiX3NicGFbaV0uYnBfbmkgPSAxOwogICAgaWYgKGd1ZXN0X21vZGUocmVncykg
JiYgaXNfaHZtX29yX2h5Yl92Y3B1KGN1cnJlbnQpKQogICAgICAgIGN1cnJlbnQtPmFyY2guaHZt
X3ZjcHUuc2luZ2xlX3N0ZXAgPSAwOwogICAgZWxzZQogICAgICAgIHJlZ3MtPmVmbGFncyAmPSB+
WDg2X0VGTEFHU19URjsKCiAgICByZXR1cm4gS0RCX0NQVV9OSTsKfQoKc3RhdGljIHZvaWQKa2Ri
X2J0Zl9lbmFibGUodm9pZCkKewogICAgdTY0IGRlYnVnY3RsOwogICAgcmRtc3JsKE1TUl9JQTMy
X0RFQlVHQ1RMTVNSLCBkZWJ1Z2N0bCk7CiAgICB3cm1zcmwoTVNSX0lBMzJfREVCVUdDVExNU1Is
IGRlYnVnY3RsIHwgMHgyKTsKfQoKLyogCiAqIEZVTkNUSU9OOiBTaW5nbGUgU3RlcCB0byBicmFu
Y2guIERvZXNuJ3Qgc2VlbSB0byB3b3JrIHZlcnkgd2VsbC4KICovCnN0YXRpYyBrZGJfY3B1X2Nt
ZF90CmtkYl91c2dmX3NzYih2b2lkKQp7CiAgICBrZGJwKCJzc2I6IHNpbmdlIHN0ZXAgdG8gYnJh
bmNoXG4iKTsKICAgIHJldHVybiBLREJfQ1BVX01BSU5fS0RCOwp9CnN0YXRpYyBrZGJfY3B1X2Nt
ZF90IAprZGJfY21kZl9zc2IoaW50IGFyZ2MsIGNvbnN0IGNoYXIgKiphcmd2LCBzdHJ1Y3QgY3B1
X3VzZXJfcmVncyAqcmVncykKewogICAgaWYgKGFyZ2MgPiAxICYmICphcmd2WzFdID09ICc/JykK
ICAgICAgICByZXR1cm4ga2RiX3VzZ2Zfc3NiKCk7CgogICAgS0RCR1AoIk1VSzogZW50ZXIga2Ri
X2NtZGZfc3NiXG4iKTsKICAgIGlmICghcmVncykgewogICAgICAgIGtkYnAoIiVzOiByZWdzIG5v
dCBhdmFpbGFibGVcbiIsIF9fRlVOQ1RJT05fXyk7CiAgICAgICAgcmV0dXJuIEtEQl9DUFVfTUFJ
Tl9LREI7CiAgICB9CiAgICBpZiAoaXNfaHZtX29yX2h5Yl92Y3B1KGN1cnJlbnQpKSAKICAgICAg
ICBjdXJyZW50LT5kb21haW4tPmRlYnVnZ2VyX2F0dGFjaGVkID0gMTsgICAgICAgIC8qIHZteC9z
dm1fZG9fcmVzdW1lKCkqLwoKICAgIHJlZ3MtPmVmbGFncyB8PSBYODZfRUZMQUdTX1RGOwogICAg
a2RiX2J0Zl9lbmFibGUoKTsKICAgIHJldHVybiBLREJfQ1BVX1NTOwp9CgovKiAKICogRlVOQ1RJ
T046IENvbnRpbnVlIEV4ZWN1dGlvbi4gVEYgbXVzdCBiZSBjbGVhcmVkIGhlcmUgYXMgdGhpcyBj
b3VsZCBydW4gb24gCiAqICAgICAgICAgICBhbnkgY3B1LiBIZW5jZSBub3QgT0sgdG8gZG8gaXQg
ZnJvbSBrZGJfZW5kX3Nlc3Npb24uCiAqLwpzdGF0aWMga2RiX2NwdV9jbWRfdAprZGJfdXNnZl9n
byh2b2lkKQp7CiAgICBrZGJwKCJnbzogbGVhdmUga2RiIGFuZCBjb250aW51ZSBleGVjdXRpb25c
biIpOwogICAgcmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7Cn0Kc3RhdGljIGtkYl9jcHVfY21kX3Qg
CmtkYl9jbWRmX2dvKGludCBhcmdjLCBjb25zdCBjaGFyICoqYXJndiwgc3RydWN0IGNwdV91c2Vy
X3JlZ3MgKnJlZ3MpCnsKICAgIGlmIChhcmdjID4gMSAmJiAqYXJndlsxXSA9PSAnPycpCiAgICAg
ICAgcmV0dXJuIGtkYl91c2dmX2dvKCk7CgogICAgcmVncy0+ZWZsYWdzICY9IH5YODZfRUZMQUdT
X1RGOwogICAgcmV0dXJuIEtEQl9DUFVfR087Cn0KCi8qIEFsbCBjcHVzIG11c3QgZGlzcGxheSB0
aGVpciBjdXJyZW50IGNvbnRleHQgKi8Kc3RhdGljIGtkYl9jcHVfY21kX3QgCmtkYl9jcHVfc3Rh
dHVzX2FsbChpbnQgY2NwdSwgc3RydWN0IGNwdV91c2VyX3JlZ3MgKnJlZ3MpCnsKICAgIGludCBj
cHU7CiAgICBmb3JfZWFjaF9vbmxpbmVfY3B1KGNwdSkgewogICAgICAgIGlmIChjcHUgPT0gY2Nw
dSkgewogICAgICAgICAgICBrZGJwKCJbJWRdIiwgY2NwdSk7CiAgICAgICAgICAgIGtkYl9kaXNw
bGF5X3BjKHJlZ3MpOwogICAgICAgIH0gZWxzZSB7CiAgICAgICAgICAgIGlmIChrZGJfY3B1X2Nt
ZFtjcHVdICE9IEtEQl9DUFVfUEFVU0UpICAgLyogaHVuZyBjcHUgKi8KICAgICAgICAgICAgICAg
IGNvbnRpbnVlOwogICAgICAgICAgICBrZGJfY3B1X2NtZFtjcHVdID0gS0RCX0NQVV9TSE9XUEM7
CiAgICAgICAgICAgIHdoaWxlIChrZGJfY3B1X2NtZFtjcHVdPT1LREJfQ1BVX1NIT1dQQyk7CiAg
ICAgICAgfQogICAgfQogICAgcmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7Cn0KCi8qIAogKiBkaXNw
bGF5L3N3aXRjaCBDUFUuIAogKiAgQXJndW1lbnQ6CiAqICAgICBub25lOiAgIGp1c3QgZ28gYmFj
ayB0byBpbml0aWFsIGNwdQogKiAgICAgY3B1bnVtOiBzd2l0Y2ggdG8gZ2l2ZW4gdnB1CiAqICAg
ICAiYWxsIjogIHNob3cgb25lIGxpbmUgc3RhdHVzIG9mIGFsbCBjcHVzCiAqLwpleHRlcm4gdm9s
YXRpbGUgaW50IGtkYl9pbml0X2NwdTsKc3RhdGljIGtkYl9jcHVfY21kX3QKa2RiX3VzZ2ZfY3B1
KHZvaWQpCnsKICAgIGtkYnAoImNwdSBbYWxsfG51bV06IG5vbmUgd2lsbCBzd2l0Y2ggYmFjayB0
byBpbml0aWFsIGNwdVxuIik7CiAgICBrZGJwKCIgICAgICAgICAgICAgICBjcHVudW0gdG8gc3dp
dGNoIHRvIHRoZSB2Y3B1LiBhbGwgdG8gc2hvdyBzdGF0dXNcbiIpOwogICAgcmV0dXJuIEtEQl9D
UFVfTUFJTl9LREI7Cn0Kc3RhdGljIGtkYl9jcHVfY21kX3QgCmtkYl9jbWRmX2NwdShpbnQgYXJn
YywgY29uc3QgY2hhciAqKmFyZ3YsIHN0cnVjdCBjcHVfdXNlcl9yZWdzICpyZWdzKQp7CiAgICBp
bnQgY3B1OwogICAgaW50IGNjcHUgPSBzbXBfcHJvY2Vzc29yX2lkKCk7CgogICAgaWYgKGFyZ2Mg
PiAxICYmICphcmd2WzFdID09ICc/JykKICAgICAgICByZXR1cm4ga2RiX3VzZ2ZfY3B1KCk7Cgog
ICAgaWYgKGFyZ2MgPiAxKSB7CiAgICAgICAgaWYgKCFzdHJjbXAoYXJndlsxXSwgImFsbCIpKQog
ICAgICAgICAgICByZXR1cm4ga2RiX2NwdV9zdGF0dXNfYWxsKGNjcHUsIHJlZ3MpOwoKICAgICAg
ICAgICAgY3B1ID0gKGludClzaW1wbGVfc3RydG91bChhcmd2WzFdLCBOVUxMLCAwKTsgLyogaGFu
ZGxlcyAweCAqLwogICAgICAgICAgICBpZiAoY3B1ID49IDAgJiYgY3B1IDwgTlJfQ1BVUyAmJiBj
cHUgIT0gY2NwdSAmJiAKICAgICAgICAgICAgICAgIGNwdV9vbmxpbmUoY3B1KSAmJiBrZGJfY3B1
X2NtZFtjcHVdID09IEtEQl9DUFVfUEFVU0UpCiAgICAgICAgICAgIHsKICAgICAgICAgICAgICAg
IGtkYnAoIlN3aXRjaGluZyB0byBjcHU6JWRcbiIsIGNwdSk7CiAgICAgICAgICAgICAgICBrZGJf
Y3B1X2NtZFtjcHVdID0gS0RCX0NQVV9NQUlOX0tEQjsKCiAgICAgICAgICAgICAgICAvKiBjbGVh
ciBhbnkgc2luZ2xlIHN0ZXAgb24gdGhlIGN1cnJlbnQgY3B1ICovCiAgICAgICAgICAgICAgICBy
ZWdzLT5lZmxhZ3MgJj0gflg4Nl9FRkxBR1NfVEY7CiAgICAgICAgICAgICAgICByZXR1cm4gS0RC
X0NQVV9QQVVTRTsKICAgICAgICAgICAgfSBlbHNlIHsKICAgICAgICAgICAgICAgIGlmIChjcHUg
IT0gY2NwdSkKICAgICAgICAgICAgICAgICAgICBrZGJwKCJVbmFibGUgdG8gc3dpdGNoIHRvIGNw
dTolZFxuIiwgY3B1KTsKICAgICAgICAgICAgICAgIGVsc2UgewogICAgICAgICAgICAgICAgICAg
IGtkYl9kaXNwbGF5X3BjKHJlZ3MpOwogICAgICAgICAgICAgICAgfQogICAgICAgICAgICAgICAg
cmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7CiAgICAgICAgICAgIH0KICAgIH0KICAgIC8qIG5vIGFy
ZyBtZWFucyBiYWNrIHRvIGluaXRpYWwgY3B1ICovCiAgICBpZiAoIWtkYl9zeXNfY3Jhc2ggJiYg
Y2NwdSAhPSBrZGJfaW5pdF9jcHUpIHsKICAgICAgICBpZiAoa2RiX2NwdV9jbWRba2RiX2luaXRf
Y3B1XSA9PSBLREJfQ1BVX1BBVVNFKSB7CiAgICAgICAgICAgIHJlZ3MtPmVmbGFncyAmPSB+WDg2
X0VGTEFHU19URjsKICAgICAgICAgICAga2RiX2NwdV9jbWRba2RiX2luaXRfY3B1XSA9IEtEQl9D
UFVfTUFJTl9LREI7CiAgICAgICAgICAgIHJldHVybiBLREJfQ1BVX1BBVVNFOwogICAgICAgIH0g
ZWxzZQogICAgICAgICAgICBrZGJwKCJVbmFibGUgdG8gc3dpdGNoIHRvOiAlZFxuIiwga2RiX2lu
aXRfY3B1KTsKICAgIH0KICAgIHJldHVybiBLREJfQ1BVX01BSU5fS0RCOwp9CgovKiBzZW5kIE5N
SSB0byBhbGwgb3IgZ2l2ZW4gQ1BVLiBNdXN0IGJlIGNyYXNoZWQvZmF0YWwgc3RhdGUgKi8Kc3Rh
dGljIGtkYl9jcHVfY21kX3QKa2RiX3VzZ2Zfbm1pKHZvaWQpCnsKICAgIGtkYnAoIm5taSBjcHUj
fGFsbDogc2VuZCBubWkgY3B1L3MuIG11c3QgcmVib290IHdoZW4gZG9uZSB3aXRoIGtkYlxuIik7
CiAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKfQpzdGF0aWMga2RiX2NwdV9jbWRfdCAKa2Ri
X2NtZGZfbm1pKGludCBhcmdjLCBjb25zdCBjaGFyICoqYXJndiwgc3RydWN0IGNwdV91c2VyX3Jl
Z3MgKnJlZ3MpCnsKICAgIGNwdW1hc2tfdCBjcHVtYXNrOwogICAgaW50IGNjcHUgPSBzbXBfcHJv
Y2Vzc29yX2lkKCk7CgogICAgaWYgKGFyZ2MgPD0gMSB8fCAoYXJnYyA+IDEgJiYgKmFyZ3ZbMV0g
PT0gJz8nKSkKICAgICAgICByZXR1cm4ga2RiX3VzZ2Zfbm1pKCk7CgogICAgaWYgKCFrZGJfc3lz
X2NyYXNoKSB7CiAgICAgICAga2RicCgia2RiOiBubWkgY21kIGF2YWlsYWJsZSBpbiBjcmFzaGVk
IHN0YXRlIG9ubHlcbiIpOwogICAgICAgIHJldHVybiBLREJfQ1BVX01BSU5fS0RCOwogICAgfQog
ICAgaWYgKCFzdHJjbXAoYXJndlsxXSwgImFsbCIpKQogICAgICAgIGNwdW1hc2sgPSBjcHVfb25s
aW5lX21hcDsKICAgIGVsc2UgewogICAgICAgIGludCBjcHUgPSAoaW50KXNpbXBsZV9zdHJ0b3Vs
KGFyZ3ZbMV0sIE5VTEwsIDApOwogICAgICAgIGlmIChjcHUgPj0gMCAmJiBjcHUgPCBOUl9DUFVT
ICYmIGNwdSAhPSBjY3B1ICYmIGNwdV9vbmxpbmUoY3B1KSkKICAgICAgICAgICAgY3B1bWFzayA9
ICpjcHVtYXNrX29mKGNwdSk7CiAgICAgICAgZWxzZSB7CiAgICAgICAgICAgIGtkYnAoIktEQiBu
bWk6IGludmFsaWQgY3B1ICVzXG4iLCBhcmd2WzFdKTsKICAgICAgICAgICAgcmV0dXJuIEtEQl9D
UFVfTUFJTl9LREI7CiAgICAgICAgfQogICAgfQogICAga2RiX25taV9wYXVzZV9jcHVzKGNwdW1h
c2spOwogICAgcmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7Cn0KCnN0YXRpYyBrZGJfY3B1X2NtZF90
CmtkYl91c2dmX3BlcmNwdSh2b2lkKQp7CiAgICBrZGJwKCJwZXJjcHU6IGRpc3BsYXkgcGVyIGNw
dSBwb2ludGVyc1xuIik7CiAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKfQpzdGF0aWMga2Ri
X2NwdV9jbWRfdCAKa2RiX2NtZGZfcGVyY3B1KGludCBhcmdjLCBjb25zdCBjaGFyICoqYXJndiwg
c3RydWN0IGNwdV91c2VyX3JlZ3MgKnJlZ3MpCnsKICAgIGlmIChhcmdjID4gMSAmJiAqYXJndlsx
XSA9PSAnPycpCiAgICAgICAgcmV0dXJuIGtkYl91c2dmX3BlcmNwdSgpOwogICAga2RiX2R1bXBf
dGltZV9wY3B1KCk7CiAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKfQoKLyogPT09PT09PT09
PT09PT09PT09PT09PT09PSBCcmVha3BvaW50cyA9PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT0gKi8KCnN0YXRpYyB2b2lkCmtkYl9wcm50X2JwX2NvbmQoaW50IGJwbnVtKQp7CiAg
ICBzdHJ1Y3Qga2RiX2JwY29uZCAqYnBjcCA9ICZrZGJfc2JwYVticG51bV0udS5icF9jb25kOwoK
ICAgIGlmIChicGNwLT5icF9jb25kX3N0YXR1cyA9PSAxKSB7CiAgICAgICAga2RicCgiICAgICAo
ICVzICVjJWMgJWx4IClcbiIsIAogICAgICAgICAgICAga2RiX3JlZ29mZnNfdG9fbmFtZShicGNw
LT5icF9jb25kX2xocyksCiAgICAgICAgICAgICBicGNwLT5icF9jb25kX3R5cGUgPT0gMSA/ICc9
JyA6ICchJywgJz0nLCBicGNwLT5icF9jb25kX3Jocyk7CiAgICB9IGVsc2UgewogICAgICAgIGtk
YnAoIiAgICAgKCAlbHggJWMlYyAlbHggKVxuIiwgYnBjcC0+YnBfY29uZF9saHMsCiAgICAgICAg
ICAgICBicGNwLT5icF9jb25kX3R5cGUgPT0gMSA/ICc9JyA6ICchJywgJz0nLCBicGNwLT5icF9j
b25kX3Jocyk7CiAgICB9Cn0KCnN0YXRpYyB2b2lkCmtkYl9wcm50X2JwX2V4dHJhKGludCBicG51
bSkKewogICAgaWYgKGtkYl9zYnBhW2JwbnVtXS5icF90eXBlID09IDIpIHsKICAgICAgICB1bG9u
ZyBpLCBhcmcsICpidHAgPSBrZGJfc2JwYVticG51bV0udS5icF9idHA7CiAgICAgICAgCiAgICAg
ICAga2RicCgiICAgd2lsbCB0cmFjZSAiKTsKICAgICAgICBmb3IgKGk9MDsgaSA8IEtEQl9NQVhC
VFAgJiYgYnRwW2ldOyBpKyspCiAgICAgICAgICAgIGlmICgoYXJnPWJ0cFtpXSkgPCBzaXplb2Yg
KHN0cnVjdCBjcHVfdXNlcl9yZWdzKSkgewogICAgICAgICAgICAgICAga2RicCgiICVzICIsIGtk
Yl9yZWdvZmZzX3RvX25hbWUoYXJnKSk7CiAgICAgICAgICAgIH0gZWxzZSB7CiAgICAgICAgICAg
ICAgICBrZGJwKCIgJWx4ICIsIGFyZyk7CiAgICAgICAgICAgIH0KICAgICAgICBrZGJwKCJcbiIp
OwoKICAgIH0gZWxzZSBpZiAoa2RiX3NicGFbYnBudW1dLmJwX3R5cGUgPT0gMSkKICAgICAgICBr
ZGJfcHJudF9icF9jb25kKGJwbnVtKTsKfQoKLyoKICogTGlzdCBzb2Z0d2FyZSBicmVha3BvaW50
cwogKi8Kc3RhdGljIGtkYl9jcHVfY21kX3QKa2RiX2Rpc3BsYXlfc2JrcHRzKHZvaWQpCnsKICAg
IGludCBpOwogICAgZm9yKGk9MDsgaSA8IEtEQk1BWFNCUDsgaSsrKQogICAgICAgIGlmIChrZGJf
c2JwYVtpXS5icF9hZGRyICYmICFrZGJfc2JwYVtpXS5icF9kZWxldGVkKSB7CiAgICAgICAgICAg
IHN0cnVjdCBkb21haW4gKmRwID0ga2RiX2RvbWlkMnB0cihrZGJfc2JwYVtpXS5icF9kb21pZCk7
CgogICAgICAgICAgICBpZiAoZHAgPT0gTlVMTCB8fCBkcC0+aXNfZHlpbmcpIHsKICAgICAgICAg
ICAgICAgIG1lbXNldCgma2RiX3NicGFbaV0sIDAsIHNpemVvZihrZGJfc2JwYVtpXSkpOwogICAg
ICAgICAgICAgICAgY29udGludWU7CiAgICAgICAgICAgIH0KICAgICAgICAgICAga2RicCgiWyVk
XTogZG9taWQ6JWQgMHglbHggICAiLCBpLCAKICAgICAgICAgICAgICAgICBrZGJfc2JwYVtpXS5i
cF9kb21pZCwga2RiX3NicGFbaV0uYnBfYWRkcik7CiAgICAgICAgICAgIGtkYl9wcm50X2FkZHIy
c3ltKGtkYl9zYnBhW2ldLmJwX2RvbWlkLCBrZGJfc2JwYVtpXS5icF9hZGRyLCJcbiIpOwogICAg
ICAgICAgICBrZGJfcHJudF9icF9leHRyYShpKTsKICAgICAgICB9CiAgICByZXR1cm4gS0RCX0NQ
VV9NQUlOX0tEQjsKfQoKLyoKICogQ2hlY2sgaWYgYW55IGJyZWFrcG9pbnRzIHRoYXQgd2UgbmVl
ZCB0byBpbnN0YWxsIChkZWxheWVkIGluc3RhbGwpCiAqIFJldHVybnM6IDEgaWYgeWVzLCAwIGlm
IG5vbmUuCiAqLwppbnQKa2RiX3N3YnBfZXhpc3RzKHZvaWQpCnsKICAgIGludCBpOwogICAgZm9y
IChpPTA7IGkgPCBLREJNQVhTQlA7IGkrKykKICAgICAgICBpZiAoa2RiX3NicGFbaV0uYnBfYWRk
ciAmJiAha2RiX3NicGFbaV0uYnBfZGVsZXRlZCkKICAgICAgICAgICAgcmV0dXJuIDE7CiAgICBy
ZXR1cm4gMDsKfQovKgogKiBDaGVjayBpZiBhbnkgYnJlYWtwb2ludHMgd2VyZSBkZWxldGVkIHRo
aXMga2RiIHNlc3Npb24KICogUmV0dXJuczogMCBpZiBub25lLCAxIGlmIHllcwogKi8Kc3RhdGlj
IGludAprZGJfc3dicF9kZWxldGVkKHZvaWQpCnsKICAgIGludCBpOwogICAgZm9yIChpPTA7IGkg
PCBLREJNQVhTQlA7IGkrKykKICAgICAgICBpZiAoa2RiX3NicGFbaV0uYnBfYWRkciAmJiBrZGJf
c2JwYVtpXS5icF9kZWxldGVkKQogICAgICAgICAgICByZXR1cm4gMTsKICAgIHJldHVybiAwOwp9
CgovKgogKiBGbHVzaCBkZWxldGVkIHN3IGJyZWFrcG9pbnRzCiAqLwp2b2lkCmtkYl9mbHVzaF9z
d2JwX3RhYmxlKHZvaWQpCnsKICAgIGludCBpOwogICAgS0RCR1AoImNjcHU6JWQgZmx1c2hfc3di
cF90YWJsZTogZGVsZXRlZDoleFxuIiwgc21wX3Byb2Nlc3Nvcl9pZCgpLCAKICAgICAgICAgIGtk
Yl9zd2JwX2RlbGV0ZWQoKSk7CiAgICBmb3IoaT0wOyBpIDwgS0RCTUFYU0JQOyBpKyspCiAgICAg
ICAgaWYgKGtkYl9zYnBhW2ldLmJwX2FkZHIgJiYga2RiX3NicGFbaV0uYnBfZGVsZXRlZCkgewog
ICAgICAgICAgICBLREJHUCgiZmx1c2g6WyV4XSBhZGRyOjB4JWx4XG4iLGksa2RiX3NicGFbaV0u
YnBfYWRkcik7CiAgICAgICAgICAgIG1lbXNldCgma2RiX3NicGFbaV0sIDAsIHNpemVvZihrZGJf
c2JwYVtpXSkpOwogICAgICAgIH0KfQoKLyoKICogRGVsZXRlL0NsZWFyIGEgc3cgYnJlYWtwb2lu
dAogKi8Kc3RhdGljIGtkYl9jcHVfY21kX3QKa2RiX3VzZ2ZfYmModm9pZCkKewogICAga2RicCgi
YmMgJG51bXxhbGwgOiBjbGVhciBnaXZlbiBvciBhbGwgYnJlYWtwb2ludHNcbiIpOwogICAgcmV0
dXJuIEtEQl9DUFVfTUFJTl9LREI7Cn0Kc3RhdGljIGtkYl9jcHVfY21kX3QgCmtkYl9jbWRmX2Jj
KGludCBhcmdjLCBjb25zdCBjaGFyICoqYXJndiwgc3RydWN0IGNwdV91c2VyX3JlZ3MgKnJlZ3Mp
CnsKICAgIGludCBpLCBicG51bSA9IC0xLCBkZWxhbGwgPSAwOwogICAgY29uc3QgY2hhciAqYXJn
cDsKCiAgICBpZiAoYXJnYyAhPSAyIHx8ICphcmd2WzFdID09ICc/JykKICAgICAgICByZXR1cm4g
a2RiX3VzZ2ZfYmMoKTsKCiAgICBpZiAoIWtkYl9zd2JwX2V4aXN0cygpKSB7CiAgICAgICAga2Ri
cCgiTm8gYnJlYWtwb2ludHMgYXJlIHNldFxuIik7CiAgICAgICAgcmV0dXJuIEtEQl9DUFVfTUFJ
Tl9LREI7CiAgICB9CiAgICBhcmdwID0gYXJndlsxXTsKCiAgICBpZiAoIXN0cmNtcChhcmdwLCAi
YWxsIikpCiAgICAgICAgZGVsYWxsID0gMTsKICAgIGVsc2UgaWYgKCFrZGJfc3RyMmRlY2koYXJn
cCwgJmJwbnVtKSB8fCBicG51bSA8IDAgfHwgYnBudW0gPiBLREJNQVhTQlApIHsKICAgICAgICBr
ZGJwKCJJbnZhbGlkIGJwbnVtOiAlc1xuIiwgYXJncCk7CiAgICAgICAgcmV0dXJuIEtEQl9DUFVf
TUFJTl9LREI7CiAgICB9CiAgICBmb3IgKGk9MDsgaSA8IEtEQk1BWFNCUDsgaSsrKSB7CiAgICAg
ICAgaWYgKGRlbGFsbCAmJiBrZGJfc2JwYVtpXS5icF9hZGRyKSB7CiAgICAgICAgICAgIGtkYnAo
IkRlbGV0ZWQgYnJlYWtwb2ludCBbJXhdIGFkZHI6MHglbHggZG9taWQ6JWRcbiIsIAogICAgICAg
ICAgICAgICAgIChpbnQpaSwga2RiX3NicGFbaV0uYnBfYWRkciwga2RiX3NicGFbaV0uYnBfZG9t
aWQpOwogICAgICAgICAgICBpZiAoa2RiX3NicGFbaV0uYnBfanVzdF9hZGRlZCkKICAgICAgICAg
ICAgICAgIG1lbXNldCgma2RiX3NicGFbaV0sIDAsIHNpemVvZihrZGJfc2JwYVtpXSkpOwogICAg
ICAgICAgICBlbHNlCiAgICAgICAgICAgICAgICBrZGJfc2JwYVtpXS5icF9kZWxldGVkID0gMTsK
ICAgICAgICAgICAgY29udGludWU7CiAgICAgICAgfQogICAgICAgIGlmIChicG51bSAhPSAtMSAm
JiBicG51bSA9PSBpKSB7CiAgICAgICAgICAgIGtkYnAoIkRlbGV0ZWQgYnJlYWtwb2ludCBbJXhd
IGF0IDB4JWx4IGRvbWlkOiVkXG4iLCAKICAgICAgICAgICAgICAgICAoaW50KWksIGtkYl9zYnBh
W2ldLmJwX2FkZHIsIGtkYl9zYnBhW2ldLmJwX2RvbWlkKTsKICAgICAgICAgICAgaWYgKGtkYl9z
YnBhW2ldLmJwX2p1c3RfYWRkZWQpCiAgICAgICAgICAgICAgICBtZW1zZXQoJmtkYl9zYnBhW2ld
LCAwLCBzaXplb2Yoa2RiX3NicGFbaV0pKTsKICAgICAgICAgICAgZWxzZQogICAgICAgICAgICAg
ICAga2RiX3NicGFbaV0uYnBfZGVsZXRlZCA9IDE7CiAgICAgICAgICAgIGJyZWFrOwogICAgICAg
IH0KICAgIH0KICAgIGlmIChpID49IEtEQk1BWFNCUCAmJiAhZGVsYWxsKQogICAgICAgIGtkYnAo
IlVuYWJsZSB0byBkZWxldGUgYnJlYWtwb2ludDogJXNcbiIsIGFyZ3ApOwoKICAgIHJldHVybiBL
REJfQ1BVX01BSU5fS0RCOwp9CgovKgogKiBJbnN0YWxsIGEgYnJlYWtwb2ludCBpbiB0aGUgZ2l2
ZW4gYXJyYXkgZW50cnkKICogUmV0dXJuczogMCA6IGZhaWxlZCB0byBpbnN0YWxsCiAqICAgICAg
ICAgIDEgOiBpbnN0YWxsZWQgc3VjY2Vzc2Z1bGx5CiAqLwpzdGF0aWMgaW50CmtkYl9pbnN0YWxs
X3N3YnAoaW50IGlkeCkgICAgICAgICAgICAgICAgICAgLyogd2hpY2ggZW50cnkgaW4gdGhlIGJw
IGFycmF5ICovCnsKICAgIGtkYnZhX3QgYWRkciA9IGtkYl9zYnBhW2lkeF0uYnBfYWRkcjsKICAg
IGRvbWlkX3QgZG9taWQgPSBrZGJfc2JwYVtpZHhdLmJwX2RvbWlkOwogICAga2RiYnl0X3QgKnAg
PSAma2RiX3NicGFbaWR4XS5icF9vcmlnaW5zdDsKICAgIHN0cnVjdCBkb21haW4gKmRwID0ga2Ri
X2RvbWlkMnB0cihkb21pZCk7CgogICAgaWYgKGRwID09IE5VTEwgfHwgZHAtPmlzX2R5aW5nKSB7
CiAgICAgICAgbWVtc2V0KCZrZGJfc2JwYVtpZHhdLCAwLCBzaXplb2Yoa2RiX3NicGFbaWR4XSkp
OwogICAgICAgIGtkYnAoIlJlbW92ZWQgYnAgJWQgYWRkcjolcCBkb21pZDolZFxuIiwgaWR4LCBh
ZGRyLCBkb21pZCk7CiAgICAgICAgcmV0dXJuIDA7CiAgICB9CgogICAgaWYgKGtkYl9yZWFkX21l
bShhZGRyLCBwLCBLREJCUFNaLCBkb21pZCkgIT0gS0RCQlBTWil7CiAgICAgICAga2RicCgiRmFp
bGVkKFIpIHRvIGluc3RhbGwgYnA6JXggYXQ6MHglbHggZG9taWQ6JWRcbiIsCiAgICAgICAgICAg
ICBpZHgsIGtkYl9zYnBhW2lkeF0uYnBfYWRkciwgZG9taWQpOwogICAgICAgIHJldHVybiAwOwog
ICAgfQogICAgaWYgKGtkYl93cml0ZV9tZW0oYWRkciwgJmtkYl9icGluc3QsIEtEQkJQU1osIGRv
bWlkKSAhPSBLREJCUFNaKSB7CiAgICAgICAga2RicCgiRmFpbGVkKFcpIHRvIGluc3RhbGwgYnA6
JXggYXQ6MHglbHggZG9taWQ6JWRcbiIsCiAgICAgICAgICAgICBpZHgsIGtkYl9zYnBhW2lkeF0u
YnBfYWRkciwgZG9taWQpOwogICAgICAgIHJldHVybiAwOwogICAgfQogICAgS0RCR1AoImluc3Rh
bGxfc3dicDogaW5zdGFsbGVkIGJwOiV4IGF0OjB4JWx4IGNjcHU6JXggZG9taWQ6JWRcbiIsCiAg
ICAgICAgICBpZHgsIGtkYl9zYnBhW2lkeF0uYnBfYWRkciwgc21wX3Byb2Nlc3Nvcl9pZCgpLCBk
b21pZCk7CiAgICByZXR1cm4gMTsKfQoKLyoKICogSW5zdGFsbCBhbGwgdGhlIHNvZnR3YXJlIGJy
ZWFrcG9pbnRzCiAqLwp2b2lkCmtkYl9pbnN0YWxsX2FsbF9zd2JwKHZvaWQpCnsKICAgIGludCBp
OwogICAgZm9yKGk9MDsgaSA8IEtEQk1BWFNCUDsgaSsrKQogICAgICAgIGlmICgha2RiX3NicGFb
aV0uYnBfZGVsZXRlZCAmJiBrZGJfc2JwYVtpXS5icF9hZGRyKQogICAgICAgICAgICBrZGJfaW5z
dGFsbF9zd2JwKGkpOwp9CgpzdGF0aWMgdm9pZAprZGJfdW5pbnN0YWxsX2Ffc3dicChpbnQgaSkK
ewogICAga2RidmFfdCBhZGRyID0ga2RiX3NicGFbaV0uYnBfYWRkcjsKICAgIGtkYmJ5dF90IG9y
aWdpbnN0ID0ga2RiX3NicGFbaV0uYnBfb3JpZ2luc3Q7CiAgICBkb21pZF90IGlkID0ga2RiX3Ni
cGFbaV0uYnBfZG9taWQ7CgogICAga2RiX3NicGFbaV0uYnBfanVzdF9hZGRlZCA9IDA7CiAgICBp
ZiAoIWFkZHIpCiAgICAgICAgcmV0dXJuOwogICAgaWYgKGtkYl93cml0ZV9tZW0oYWRkciwgJm9y
aWdpbnN0LCBLREJCUFNaLCBpZCkgIT0gS0RCQlBTWikgewogICAgICAgIGtkYnAoIkZhaWxlZCB0
byB1bmluc3RhbGwgYnJlYWtwb2ludCAleCBhdDoweCVseCBkb21pZDolZFxuIiwKICAgICAgICAg
ICAgIGksIGtkYl9zYnBhW2ldLmJwX2FkZHIsIGlkKTsKICAgIH0KfQoKLyoKICogVW5pbnN0YWxs
IGFsbCB0aGUgc29mdHdhcmUgYnJlYWtwb2ludHMgYXQgYmVnaW5uaW5nIG9mIGtkYiBzZXNzaW9u
CiAqLwp2b2lkCmtkYl91bmluc3RhbGxfYWxsX3N3YnAodm9pZCkKewogICAgaW50IGk7CiAgICBm
b3IoaT0wOyBpIDwgS0RCTUFYU0JQOyBpKyspIAogICAgICAgIGtkYl91bmluc3RhbGxfYV9zd2Jw
KGkpOwogICAgS0RCR1AoImNjcHU6JWQgdW5pbnN0YWxsZWQgYWxsIGJwc1xuIiwgc21wX3Byb2Nl
c3Nvcl9pZCgpKTsKfQoKLyogUkVUVVJOUzogcmMgPT0gMjogY29uZGl0aW9uIHdhcyBub3QgbWV0
LCAgcmMgPT0gMzogY29uZGl0aW9uIHdhcyBtZXQgKi8Kc3RhdGljIGludAprZGJfY2hlY2tfYnBf
Y29uZGl0aW9uKGludCBicG51bSwgc3RydWN0IGNwdV91c2VyX3JlZ3MgKnJlZ3MsIGRvbWlkX3Qg
ZG9taWQpCnsKICAgIHVsb25nIHJlcyA9IDAsIGxoc3ZhbD0wOwogICAgc3RydWN0IGtkYl9icGNv
bmQgKmJwY3AgPSAma2RiX3NicGFbYnBudW1dLnUuYnBfY29uZDsKCiAgICBpZiAoYnBjcC0+YnBf
Y29uZF9zdGF0dXMgPT0gMSkgeyAgICAgICAgICAgICAvKiByZWdpc3RlciBjb25kaXRpb24gKi8K
ICAgICAgICB1aW50NjRfdCAqcnAgPSAodWludDY0X3QgKikoKGNoYXIgKilyZWdzICsgYnBjcC0+
YnBfY29uZF9saHMpOwogICAgICAgIGxoc3ZhbCA9ICpycDsKICAgIH0gZWxzZSBpZiAoYnBjcC0+
YnBfY29uZF9zdGF0dXMgPT0gMikgeyAgICAgIC8qIG1lbWFkZHIgY29uZGl0aW9uICovCiAgICAg
ICAgdWxvbmcgYWRkciA9IGJwY3AtPmJwX2NvbmRfbGhzOwogICAgICAgIGludCBudW0gPSBzaXpl
b2YobGhzdmFsKTsKCiAgICAgICAgaWYgKGtkYl9yZWFkX21lbShhZGRyLCAoa2RiYnl0X3QgKikm
bGhzdmFsLCBudW0sIGRvbWlkKSAhPSBudW0pIHsKICAgICAgICAgICAga2RicCgia2RiOiB1bmFi
bGUgdG8gcmVhZCAlZCBieXRlcyBhdCAlbHhcbiIsIG51bSwgYWRkcik7CiAgICAgICAgICAgIHJl
dHVybiAzOwogICAgICAgIH0KICAgIH0KICAgIGlmIChicGNwLT5icF9jb25kX3R5cGUgPT0gMSkg
ICAgICAgICAgICAgICAgIC8qIGxocyA9PSByaHMgKi8KICAgICAgICByZXMgPSAobGhzdmFsID09
IGJwY3AtPmJwX2NvbmRfcmhzKTsKICAgIGVsc2UgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgIC8qIGxocyAhPSByaHMgKi8KICAgICAgICByZXMgPSAobGhzdmFsICE9IGJw
Y3AtPmJwX2NvbmRfcmhzKTsKCiAgICBpZiAoIXJlcykKICAgICAgICBrZGJwKCJLREI6IFslZF1J
Z25vcmluZyBicDolZCBjb25kaXRpb24gbm90IG1ldC4gdmFsOiVseFxuIiwgCiAgICAgICAgICAg
ICAgc21wX3Byb2Nlc3Nvcl9pZCgpLCBicG51bSwgbGhzdmFsKTsgCgogICAgS0RCR1AxKCJicG51
bTolZCBkb21pZDolZCBjb25kOiAlZCAlZCAlbHggJWx4IHJlczolZFxuIiwgYnBudW0sIGRvbWlk
LCAKICAgICAgICAgICBicGNwLT5icF9jb25kX3N0YXR1cywgYnBjcC0+YnBfY29uZF90eXBlLCBi
cGNwLT5icF9jb25kX2xocywgCiAgICAgICAgICAgYnBjcC0+YnBfY29uZF9yaHMsIHJlcyk7Cgog
ICAgcmV0dXJuIChyZXMgPyAzIDogMik7Cn0KCnN0YXRpYyB2b2lkCmtkYl9wcm50X2J0cF9pbmZv
KGludCBicG51bSwgc3RydWN0IGNwdV91c2VyX3JlZ3MgKnJlZ3MsIGRvbWlkX3QgZG9taWQpCnsK
ICAgIHVsb25nIGksIGFyZywgdmFsLCBudW0sICpidHAgPSBrZGJfc2JwYVticG51bV0udS5icF9i
dHA7CgogICAga2RiX3BybnRfYWRkcjJzeW0oZG9taWQsIHJlZ3MtPktEQklQLCAiXG4iKTsKICAg
IG51bSA9IGtkYl9ndWVzdF9iaXRuZXNzKGRvbWlkKS84OwogICAgZm9yIChpPTA7IGkgPCBLREJf
TUFYQlRQICYmIChhcmc9YnRwW2ldKTsgaSsrKSB7CiAgICAgICAgaWYgKGFyZyA8IHNpemVvZiAo
c3RydWN0IGNwdV91c2VyX3JlZ3MpKSB7CiAgICAgICAgICAgIHVpbnQ2NF90ICpycCA9ICh1aW50
NjRfdCAqKSgoY2hhciAqKXJlZ3MgKyBhcmcpOwogICAgICAgICAgICBrZGJwKCIgJXM6ICUwMTZs
eCAiLCBrZGJfcmVnb2Zmc190b19uYW1lKGFyZyksICpycCk7CiAgICAgICAgfSBlbHNlIHsKICAg
ICAgICAgICAgaWYgKGtkYl9yZWFkX21lbShhcmcsIChrZGJieXRfdCAqKSZ2YWwsIG51bSwgZG9t
aWQpICE9IG51bSkKICAgICAgICAgICAgICAgIGtkYnAoImtkYjogdW5hYmxlIHRvIHJlYWQgJWQg
Ynl0ZXMgYXQgJWx4XG4iLCBudW0sIGFyZyk7CiAgICAgICAgICAgIGlmIChudW0gPT0gOCkKICAg
ICAgICAgICAgICAgIGtkYnAoIiAlMDE2bHg6JTAxNmx4ICIsIGFyZywgdmFsKTsKICAgICAgICAg
ICAgZWxzZQogICAgICAgICAgICAgICAga2RicCgiICUwOGx4OiUwOGx4ICIsIGFyZywgdmFsKTsK
ICAgICAgICB9CiAgICB9CiAgICBrZGJwKCJcbiIpOwogICAgS0RCR1AxKCJicG51bTolZCBkb21p
ZDolZCBidHA6JXAgbnVtOiVkXG4iLCBicG51bSwgZG9taWQsIGJ0cCwgbnVtKTsKfQoKLyoKICog
Q2hlY2sgaWYgdGhlIEJQIHRyYXAgYmVsb25ncyB0byB1cy4gCiAqIFJldHVybjogMCA6IG5vdCBv
bmUgb2Ygb3Vycy4gSVAgbm90IGNoYW5nZWQuIChsZWF2ZSBrZGIpCiAqICAgICAgICAgMSA6IG9u
ZSBvZiBvdXJzIGJ1dCBkZWxldGVkLiBJUCBkZWNyZW1lbnRlZC4gKGxlYXZlIGtkYikKICogICAg
ICAgICAyIDogb25lIG9mIG91cnMgYnV0IGNvbmRpdGlvbiBub3QgbWV0LCBvciBidHAuIElQIGRl
Y3JlbWVudGVkLihsZWF2ZSkKICogICAgICAgICAzIDogb25lIG9mIG91cnMgYW5kIGFjdGl2ZS4g
SVAgZGVjcmVtZW50ZWQuIChzdGF5IGluIGtkYikKICovCmludCAKa2RiX2NoZWNrX3N3X2JrcHRz
KHN0cnVjdCBjcHVfdXNlcl9yZWdzICpyZWdzKQp7CiAgICBpbnQgaSwgcmM9MDsKICAgIGRvbWlk
X3QgY3VyaWQ7CgogICAgY3VyaWQgPSBndWVzdF9tb2RlKHJlZ3MpID8gY3VycmVudC0+ZG9tYWlu
LT5kb21haW5faWQgOiBET01JRF9JRExFOwogICAgZm9yKGk9MDsgaSA8IEtEQk1BWFNCUDsgaSsr
KSB7CiAgICAgICAgaWYgKGtkYl9zYnBhW2ldLmJwX2RvbWlkID09IGN1cmlkICAmJiAKICAgICAg
ICAgICAga2RiX3NicGFbaV0uYnBfYWRkciA9PSAocmVncy0+S0RCSVAtIEtEQkJQU1opKSB7Cgog
ICAgICAgICAgICByZWdzLT5LREJJUCAtPSBLREJCUFNaOwogICAgICAgICAgICByYyA9IDM7Cgog
ICAgICAgICAgICBpZiAoa2RiX3NicGFbaV0uYnBfbmkpIHsKICAgICAgICAgICAgICAgIGtkYl91
bmluc3RhbGxfYV9zd2JwKGkpOwogICAgICAgICAgICAgICAgbWVtc2V0KCZrZGJfc2JwYVtpXSwg
MCwgc2l6ZW9mKGtkYl9zYnBhW2ldKSk7CiAgICAgICAgICAgIH0gZWxzZSBpZiAoa2RiX3NicGFb
aV0uYnBfZGVsZXRlZCkgewogICAgICAgICAgICAgICAgcmMgPSAxOwogICAgICAgICAgICB9IGVs
c2UgaWYgKGtkYl9zYnBhW2ldLmJwX3R5cGUgPT0gMSkgewogICAgICAgICAgICAgICAgcmMgPSBr
ZGJfY2hlY2tfYnBfY29uZGl0aW9uKGksIHJlZ3MsIGN1cmlkKTsKICAgICAgICAgICAgfSBlbHNl
IGlmIChrZGJfc2JwYVtpXS5icF90eXBlID09IDIpIHsKICAgICAgICAgICAgICAgIGtkYl9wcm50
X2J0cF9pbmZvKGksIHJlZ3MsIGN1cmlkKTsKICAgICAgICAgICAgICAgIHJjID0gMjsKICAgICAg
ICAgICAgfQogICAgICAgICAgICBLREJHUDEoImNjcHU6JWQgcmM6JWQgY3VyaWQ6JWQgZG9taWQ6
JWQgYWRkcjolbHhcbiIsIAogICAgICAgICAgICAgICAgICAgc21wX3Byb2Nlc3Nvcl9pZCgpLCBy
YywgY3VyaWQsIGtkYl9zYnBhW2ldLmJwX2RvbWlkLCAKICAgICAgICAgICAgICAgICAgIGtkYl9z
YnBhW2ldLmJwX2FkZHIpOwogICAgICAgICAgICBicmVhazsKICAgICAgICB9CiAgICB9CiAgICBy
ZXR1cm4gKHJjKTsKfQoKLyogRWc6IHI2ID09IDB4MTIzRURGICBvciAweEZGRkYyMDM0ICE9IDB4
REVBREJFRUYKICogcmVnb2ZmczogLTEgbWVhbnMgbGhzIGlzIG5vdCByZWcuIGVsc2Ugb2Zmc2V0
IG9mIHJlZyBpbiBjcHVfdXNlcl9yZWdzCiAqIGFkZHI6IG1lbW9yeSBsb2NhdGlvbiBpZiBsaHMg
aXMgbm90IHJlZ2lzdGVyLCBlZywgMHhGRkZGMjAzNAogKiBjb25kcCA6IHBvaW50cyB0byAhPSBv
ciA9PQogKiByaHN2YWwgOiByaWdodCBoYW5kIHNpZGUgdmFsdWUKICovCnN0YXRpYyB2b2lkCmtk
Yl9zZXRfYnBfY29uZChpbnQgYnBudW0sIGludCByZWdvZmZzLCB1bG9uZyBhZGRyLCBjaGFyICpj
b25kcCwgdWxvbmcgcmhzdmFsKQp7CiAgICBpZiAoYnBudW0gPj0gS0RCTUFYU0JQKSB7CiAgICAg
ICAga2RicCgiQlVHOiAlcyBnb3QgaW52YWxpZCBicG51bVxuIiwgX19GVU5DVElPTl9fKTsKICAg
ICAgICByZXR1cm47CiAgICB9CiAgICBpZiAocmVnb2ZmcyAhPSAtMSkgewogICAgICAgIGtkYl9z
YnBhW2JwbnVtXS51LmJwX2NvbmQuYnBfY29uZF9zdGF0dXMgPSAxOwogICAgICAgIGtkYl9zYnBh
W2JwbnVtXS51LmJwX2NvbmQuYnBfY29uZF9saHMgPSByZWdvZmZzOwogICAgfSBlbHNlIGlmIChh
ZGRyICE9IDApIHsKICAgICAgICBrZGJfc2JwYVticG51bV0udS5icF9jb25kLmJwX2NvbmRfc3Rh
dHVzID0gMjsKICAgICAgICBrZGJfc2JwYVticG51bV0udS5icF9jb25kLmJwX2NvbmRfbGhzID0g
YWRkcjsKICAgIH0gZWxzZSB7CiAgICAgICAga2RicCgiZXJyb3I6IGludmFsaWQgY2FsbCB0byBr
ZGJfc2V0X2JwX2NvbmRcbiIpOwogICAgICAgIHJldHVybjsKICAgIH0KICAgIGtkYl9zYnBhW2Jw
bnVtXS51LmJwX2NvbmQuYnBfY29uZF9yaHMgPSByaHN2YWw7CgogICAgaWYgKCpjb25kcCA9PSAn
IScpCiAgICAgICAga2RiX3NicGFbYnBudW1dLnUuYnBfY29uZC5icF9jb25kX3R5cGUgPSAyOwog
ICAgZWxzZQogICAgICAgIGtkYl9zYnBhW2JwbnVtXS51LmJwX2NvbmQuYnBfY29uZF90eXBlID0g
MTsKfQoKLyogaW5zdGFsbCBicmVha3B0IGF0IGdpdmVuIGFkZHIuIAogKiBuaTogYnAgZm9yIG5l
eHQgaW5zdHIgCiAqIGJ0cGE6IHB0ciB0byBhcmdzIGZvciBidHAgZm9yIHByaW50aW5nIHdoZW4g
YnAgaXMgaGl0CiAqIGxoc3AvY29uZHAvcmhzcDogcG9pbnQgdG8gc3RyaW5ncyBvZiBjb25kaXRp
b24KICoKICogUkVUVVJOUzogdGhlIGluZGV4IGluIGFycmF5IHdoZXJlIGluc3RhbGxlZC4gS0RC
TUFYU0JQIGlmIGVycm9yIAogKi8Kc3RhdGljIGludAprZGJfc2V0X2JwKGRvbWlkX3QgZG9taWQs
IGtkYnZhX3QgYWRkciwgaW50IG5pLCB1bG9uZyAqYnRwYSwgY2hhciAqbGhzcCwgCiAgICAgICAg
ICAgY2hhciAqY29uZHAsIGNoYXIgKnJoc3ApCnsKICAgIGludCBpLCBwcmVfZXhpc3RpbmcgPSAw
LCByZWdvZmZzID0gLTE7CiAgICB1bG9uZyBtZW1sb2M9MCwgcmhzdmFsPTAsIHRtcHVsOwoKICAg
IGlmIChidHBhICYmIChsaHNwIHx8IHJoc3AgfHwgY29uZHApKSB7CiAgICAgICAga2RicCgiaW50
ZXJuYWwgZXJyb3IuIGJ0cGEgYW5kIChsaHNwIHx8IHJoc3AgfHwgY29uZHApIHNldFxuIik7CiAg
ICAgICAgcmV0dXJuIEtEQk1BWFNCUDsKICAgIH0KICAgIGlmIChsaHNwICYmICgocmVnb2Zmcz1r
ZGJfdmFsaWRfcmVnKGxoc3ApKSA9PSAtMSkgICYmCiAgICAgICAga2RiX3N0cjJ1bG9uZyhsaHNw
LCAmbWVtbG9jKSAmJgogICAgICAgIGtkYl9yZWFkX21lbShtZW1sb2MsIChrZGJieXRfdCAqKSZ0
bXB1bCwgc2l6ZW9mKHRtcHVsKSwgZG9taWQpPT0wKSB7CgogICAgICAgIGtkYnAoImVycm9yOiBp
bnZhbGlkIGFyZ3VtZW50OiAlc1xuIiwgbGhzcCk7CiAgICAgICAgcmV0dXJuIEtEQk1BWFNCUDsK
ICAgIH0KICAgIGlmIChyaHNwICYmICEga2RiX3N0cjJ1bG9uZyhyaHNwLCAmcmhzdmFsKSkgewog
ICAgICAgIGtkYnAoImVycm9yOiBpbnZhbGlkIGFyZ3VtZW50OiAlc1xuIiwgcmhzcCk7CiAgICAg
ICAgcmV0dXJuIEtEQk1BWFNCUDsKICAgIH0KCiAgICAvKiBzZWUgaWYgYnAgYWxyZWFkeSBzZXQg
Ki8KICAgIGZvciAoaT0wOyBpIDwgS0RCTUFYU0JQOyBpKyspIHsKICAgICAgICBpZiAoa2RiX3Ni
cGFbaV0uYnBfYWRkcj09YWRkciAmJiBrZGJfc2JwYVtpXS5icF9kb21pZD09ZG9taWQpIHsKCiAg
ICAgICAgICAgIGlmIChrZGJfc2JwYVtpXS5icF9kZWxldGVkKSB7CiAgICAgICAgICAgICAgICAv
KiBqdXN0IHJlLXNldCB0aGlzIGJwIGFnYWluICovCiAgICAgICAgICAgICAgICBtZW1zZXQoJmtk
Yl9zYnBhW2ldLCAwLCBzaXplb2Yoa2RiX3NicGFbaV0pKTsKICAgICAgICAgICAgICAgIHByZV9l
eGlzdGluZyA9IDE7CiAgICAgICAgICAgIH0gZWxzZSB7CiAgICAgICAgICAgICAgICBrZGJwKCJC
cmVha3BvaW50IGFscmVhZHkgc2V0IFxuIik7CiAgICAgICAgICAgICAgICByZXR1cm4gS0RCTUFY
U0JQOwogICAgICAgICAgICB9CiAgICAgICAgfQogICAgfQogICAgLyogc2VlIGlmIGFueSByb29t
IGxlZnQgZm9yIGFub3RoZXIgYnJlYWtwb2ludCAqLwogICAgZm9yIChpPTA7IGkgPCBLREJNQVhT
QlA7IGkrKykKICAgICAgICBpZiAoIWtkYl9zYnBhW2ldLmJwX2FkZHIpCiAgICAgICAgICAgIGJy
ZWFrOwogICAgaWYgKGkgPj0gS0RCTUFYU0JQKSB7CiAgICAgICAga2RicCgiRVJST1I6IEJyZWFr
cG9pbnQgdGFibGUgZnVsbC4uLi5cbiIpOwogICAgICAgIHJldHVybiBpOwogICAgfQogICAga2Ri
X3NicGFbaV0uYnBfYWRkciA9IGFkZHI7CiAgICBrZGJfc2JwYVtpXS5icF9kb21pZCA9IGRvbWlk
OwogICAgaWYgKGJ0cGEpIHsKICAgICAgICBrZGJfc2JwYVtpXS5icF90eXBlID0gMjsKICAgICAg
ICBrZGJfc2JwYVtpXS51LmJwX2J0cCA9IGJ0cGE7CiAgICB9IGVsc2UgaWYgKHJlZ29mZnMgIT0g
LTEgfHwgbWVtbG9jKSB7CiAgICAgICAga2RiX3NicGFbaV0uYnBfdHlwZSA9IDE7CiAgICAgICAg
a2RiX3NldF9icF9jb25kKGksIHJlZ29mZnMsIG1lbWxvYywgY29uZHAsIHJoc3ZhbCk7CiAgICB9
IGVsc2UKICAgICAgICBrZGJfc2JwYVtpXS5icF90eXBlID0gMDsKCiAgICBpZiAoa2RiX2luc3Rh
bGxfc3dicChpKSkgeyAgICAgICAgICAgICAgICAgIC8qIG1ha2Ugc3VyZSBpdCBjYW4gYmUgZG9u
ZSAqLwogICAgICAgIGlmIChuaSkKICAgICAgICAgICAgcmV0dXJuIGk7CgogICAgICAgIGtkYl91
bmluc3RhbGxfYV9zd2JwKGkpOyAgICAgICAgICAgICAgICAvKiBkb250JyBzaG93IHVzZXIgSU5U
MyAqLwogICAgICAgIGlmICghcHJlX2V4aXN0aW5nKSAgICAgICAgICAgICAgIC8qIG1ha2Ugc3Vy
ZSBubyBpcyBjcHUgc2l0dGluZyBvbiBpdCAqLwogICAgICAgICAgICBrZGJfc2JwYVtpXS5icF9q
dXN0X2FkZGVkID0gMTsKCiAgICAgICAga2RicCgiYnAgJWQgc2V0IGZvciBkb21pZDolZCBhdDog
MHglbHggIiwgaSwga2RiX3NicGFbaV0uYnBfZG9taWQsIAogICAgICAgICAgICAga2RiX3NicGFb
aV0uYnBfYWRkcik7CiAgICAgICAga2RiX3BybnRfYWRkcjJzeW0oZG9taWQsIGFkZHIsICJcbiIp
OwogICAgICAgIGtkYl9wcm50X2JwX2V4dHJhKGkpOwogICAgfSBlbHNlIHsKICAgICAgICBrZGJw
KCJFUlJPUjpDYW4ndCBpbnN0YWxsIGJwOiAweCVseCBkb21pZDolZFxuIiwgYWRkciwgZG9taWQp
OwogICAgICAgIGlmIChwcmVfZXhpc3RpbmcpICAgICAvKiBpbiBjYXNlIGEgY3B1IGlzIHNpdHRp
bmcgb24gdGhpcyBicCBpbiB0cmFwcyAqLwogICAgICAgICAgICBrZGJfc2JwYVtpXS5icF9kZWxl
dGVkID0gMTsKICAgICAgICBlbHNlCiAgICAgICAgICAgIG1lbXNldCgma2RiX3NicGFbaV0sIDAs
IHNpemVvZihrZGJfc2JwYVtpXSkpOwogICAgICAgIHJldHVybiBLREJNQVhTQlA7CiAgICB9CiAg
ICAvKiBtYWtlIHN1cmUgc3dicCByZXBvcnRpbmcgaXMgZW5hYmxlZCBpbiB0aGUgdm1jYi92bWNz
ICovCiAgICBpZiAoaXNfaHZtX29yX2h5Yl9kb21haW4oa2RiX2RvbWlkMnB0cihkb21pZCkpKSB7
CiAgICAgICAgc3RydWN0IGRvbWFpbiAqZHAgPSBrZGJfZG9taWQycHRyKGRvbWlkKTsKICAgICAg
ICBkcC0+ZGVidWdnZXJfYXR0YWNoZWQgPSAxOyAgICAgICAgICAgICAgLyogc2VlIHN2bV9kb19y
ZXN1bWUvdm14X2RvXyAqLwogICAgICAgIEtEQkdQKCJkZWJ1Z2dlcl9hdHRhY2hlZCBzZXQuIGRv
bWlkOiVkXG4iLCBkb21pZCk7CiAgICB9CiAgICByZXR1cm4gaTsKfQoKLyogCiAqIFNldC9MaXN0
IFNvZnR3YXJlIEJyZWFrcG9pbnQvcwogKi8Kc3RhdGljIGtkYl9jcHVfY21kX3QKa2RiX3VzZ2Zf
YnAodm9pZCkKewogICAga2RicCgiYnAgW2FkZHJ8c3ltXVtkb21pZF1bY29uZGl0aW9uXTogZGlz
cGxheSBvciBzZXQgYSBicmVha3BvaW50XG4iKTsKICAgIGtkYnAoIiAgd2hlcmUgY29uZCBpcyBs
aWtlOiByNiA9PSAweDEyM0Ygb3IgcmF4ICE9IERFQURCRUVGIG9yIFxuIik7CiAgICBrZGJwKCIg
ICAgICAgZmZmZjgyYzQ4MDM4ZmU1OCA9PSAzMjFFIG9yIDB4ZmZmZjgyYzQ4MDM4ZmU1OCAhPSAw
XG4iKTsKICAgIGtkYnAoIiAgcmVnczogcmF4IHJieCByY3ggcmR4IHJzaSByZGkgcmJwIHJzcCBy
OCByOSIpOwogICAga2RicCgiIHIxMCByMTEgcjEyIHIxMyByMTQgcjE1IHJmbGFnc1xuIik7CiAg
ICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKfQpzdGF0aWMga2RiX2NwdV9jbWRfdCAKa2RiX2Nt
ZGZfYnAoaW50IGFyZ2MsIGNvbnN0IGNoYXIgKiphcmd2LCBzdHJ1Y3QgY3B1X3VzZXJfcmVncyAq
cmVncykKewogICAga2RidmFfdCBhZGRyOwogICAgaW50IGlkeCA9IC0xOwogICAgZG9taWRfdCBk
b21pZCA9IERPTUlEX0lETEU7CiAgICBjaGFyICpkb21pZHN0cnAsICpsaHNwPU5VTEwsICpjb25k
cD1OVUxMLCAqcmhzcD1OVUxMOwoKICAgIGlmICgoYXJnYyA+IDEgJiYgKmFyZ3ZbMV0gPT0gJz8n
KSB8fCBhcmdjID09IDQgfHwgYXJnYyA+IDYpCiAgICAgICAgcmV0dXJuIGtkYl91c2dmX2JwKCk7
CgogICAgaWYgKGFyZ2MgPCAyIHx8IGtkYl9zeXNfY3Jhc2gpICAgICAgICAgLyogbGlzdCBhbGwg
c2V0IGJyZWFrcG9pbnRzICovCiAgICAgICAgcmV0dXJuIGtkYl9kaXNwbGF5X3Nia3B0cygpOwoK
ICAgIC8qIHZhbGlkIGFyZ2MgZWl0aGVyOiAyIDMgNSBvciA2IAogICAgICogJ2JwIGlkbGVfbG9v
cCByNiA9PSAweGMwMDAnIE9SICdicCBpZGxlX2xvb3AgMyByOSAhPSAweGRlYWRiZWVmJyAqLwog
ICAgaWR4ID0gKGFyZ2MgPT0gNSkgPyAyIDogKChhcmdjID09IDYpID8gMyA6IGlkeCk7CiAgICBp
ZiAoYXJnYyA+PSA1ICkgewogICAgICAgIGxoc3AgPSAoY2hhciAqKWFyZ3ZbaWR4XTsKICAgICAg
ICBjb25kcCA9IChjaGFyICopYXJndltpZHgrMV07CiAgICAgICAgcmhzcCA9IChjaGFyICopYXJn
dltpZHgrMl07CgogICAgICAgIGlmICgha2RiX3N0cjJ1bG9uZyhyaHNwLCBOVUxMKSB8fCAqKGNv
bmRwKzEpICE9ICc9JyB8fCAKICAgICAgICAgICAgKCpjb25kcCAhPSAnPScgJiYgKmNvbmRwICE9
ICchJykpIHsKCiAgICAgICAgICAgIHJldHVybiBrZGJfdXNnZl9icCgpOwogICAgICAgIH0KICAg
IH0KICAgIGRvbWlkc3RycCA9IChhcmdjID09IDMgfHwgYXJnYyA9PSA2ICkgPyAoY2hhciAqKWFy
Z3ZbMl0gOiBOVUxMOwogICAgaWYgKGRvbWlkc3RycCAmJiAha2RiX3N0cjJkb21pZChkb21pZHN0
cnAsICZkb21pZCwgMSkpIHsKICAgICAgICByZXR1cm4ga2RiX3VzZ2ZfYnAoKTsKICAgIH0KICAg
IGlmIChhcmdjID4gMyAmJiBpc19odm1fb3JfaHliX2RvbWFpbihrZGJfZG9taWQycHRyKGRvbWlk
KSkpIHsKICAgICAgICBrZGJwKCJIVk0gZG9tYWluIG5vdCBzdXBwb3J0ZWQgeWV0IGZvciBjb25k
aXRpb25hbCBicFxuIik7CiAgICAgICAgcmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7CiAgICB9Cgog
ICAgaWYgKCFrZGJfc3RyMmFkZHIoYXJndlsxXSwgJmFkZHIsIGRvbWlkKSB8fCBhZGRyID09IDAp
IHsKICAgICAgICBrZGJwKCJJbnZhbGlkIGFyZ3VtZW50OiVzXG4iLCBhcmd2WzFdKTsKICAgICAg
ICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKICAgIH0KCiAgICAvKiBtYWtlIHN1cmUgeGVuIGFk
ZHIgaXMgaW4geGVuIHRleHQsIG90aGVyd2lzZSBicCBzZXQgaW4gNjRiaXQgZG9tMC9VICovCiAg
ICBpZiAoZG9taWQgPT0gRE9NSURfSURMRSAmJiAKICAgICAgICAoYWRkciA8IFhFTl9WSVJUX1NU
QVJUIHx8IGFkZHIgPiBYRU5fVklSVF9FTkQpKQogICAgewogICAgICAgIGtkYnAoImFkZHI6JWx4
IG5vdCBpbiAgeGVuIHRleHRcbiIsIGFkZHIpOwogICAgICAgIHJldHVybiBLREJfQ1BVX01BSU5f
S0RCOwogICAgfQogICAga2RiX3NldF9icChkb21pZCwgYWRkciwgMCwgTlVMTCwgbGhzcCwgY29u
ZHAsIHJoc3ApOyAgICAgLyogMCBpcyBuaSBmbGFnICovCiAgICByZXR1cm4gS0RCX0NQVV9NQUlO
X0tEQjsKfQoKCi8qIHRyYWNlIGJyZWFrcG9pbnQsIG1lYW5pbmcsIHVwb24gYnAgdHJhY2UvcHJp
bnQgc29tZSBpbmZvIGFuZCBjb250aW51ZSAqLwoKc3RhdGljIGtkYl9jcHVfY21kX3QKa2RiX3Vz
Z2ZfYnRwKHZvaWQpCnsKICAgIGtkYnAoImJ0cCBhZGRyfHN5bSBbZG9taWRdIHJlZ3xkb21pZC1t
ZW0tYWRkci4uLiA6IGJyZWFrcG9pbnQgdHJhY2VcbiIpOwogICAga2RicCgiICByZWdzOiByYXgg
cmJ4IHJjeCByZHggcnNpIHJkaSByYnAgcnNwIHI4IHI5ICIpOwogICAga2RicCgicjEwIHIxMSBy
MTIgcjEzIHIxNCByMTUgcmZsYWdzXG4iKTsKICAgIGtkYnAoIiAgRWcuIGJ0cCBpZGxlX2NwdSA3
IHJheCByYnggMHgyMGVmNWE1IHI5XG4iKTsKICAgIGtkYnAoIiAgICAgIHdpbGwgcHJpbnQgcmF4
LCByYngsICoobG9uZyAqKTB4MjBlZjVhNSwgcjkgYW5kIGNvbnRpbnVlXG4iKTsKICAgIHJldHVy
biBLREJfQ1BVX01BSU5fS0RCOwp9CnN0YXRpYyBrZGJfY3B1X2NtZF90IAprZGJfY21kZl9idHAo
aW50IGFyZ2MsIGNvbnN0IGNoYXIgKiphcmd2LCBzdHJ1Y3QgY3B1X3VzZXJfcmVncyAqcmVncykK
ewogICAgaW50IGksIGJ0cGlkeCwgbnVtcmQsIGFyZ3NpZHgsIHJlZ29mZnMgPSAtMTsKICAgIGtk
YnZhX3QgYWRkciwgbWVtbG9jPTA7CiAgICBkb21pZF90IGRvbWlkID0gRE9NSURfSURMRTsKICAg
IHVsb25nICpidHBhLCB0bXB1bDsKCiAgICBpZiAoKGFyZ2MgPiAxICYmICphcmd2WzFdID09ICc/
JykgfHwgYXJnYyA8IDMpCiAgICAgICAgcmV0dXJuIGtkYl91c2dmX2J0cCgpOwoKICAgIGFyZ3Np
ZHggPSAyOyAgICAgICAgICAgICAgICAgICAvKiBhc3N1bWUgM3JkIGFyZyBpcyBub3QgZG9taWQg
Ki8KICAgIGlmIChhcmdjID4gMyAmJiBrZGJfc3RyMmRvbWlkKGFyZ3ZbMl0sICZkb21pZCwgMCkp
IHsKCiAgICAgICAgaWYgKGlzX2h2bV9vcl9oeWJfZG9tYWluKGtkYl9kb21pZDJwdHIoZG9taWQp
KSkgewogICAgICAgICAgICBrZGJwKCJIVk0gZG9tYWlucyBhcmUgbm90IGN1cnJlbnRseSBzdXBw
cnRlZFxuIik7CiAgICAgICAgICAgIHJldHVybiBLREJfQ1BVX01BSU5fS0RCOwogICAgICAgIH0g
ZWxzZQogICAgICAgICAgICBhcmdzaWR4ID0gMzsgICAgICAgICAgICAgICAvKiAzcmQgYXJnIGlz
IGEgZG9taWQgKi8KICAgIH0KICAgIGlmICgha2RiX3N0cjJhZGRyKGFyZ3ZbMV0sICZhZGRyLCBk
b21pZCkgfHwgYWRkciA9PSAwKSB7CiAgICAgICAga2RicCgiSW52YWxpZCBhcmd1bWVudDolc1xu
IiwgYXJndlsxXSk7CiAgICAgICAgcmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7CiAgICB9CiAgICAv
KiBtYWtlIHN1cmUgeGVuIGFkZHIgaXMgaW4geGVuIHRleHQsIG90aGVyd2lzZSB3aWxsIHRyYWNl
IDY0Yml0IGRvbTAvVSAqLwogICAgaWYgKGRvbWlkID09IERPTUlEX0lETEUgJiYgCiAgICAgICAg
KGFkZHIgPCBYRU5fVklSVF9TVEFSVCB8fCBhZGRyID4gWEVOX1ZJUlRfRU5EKSkKICAgIHsKICAg
ICAgICBrZGJwKCJhZGRyOiVseCBub3QgaW4gIHhlbiB0ZXh0XG4iLCBhZGRyKTsKICAgICAgICBy
ZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKICAgIH0KCiAgICBudW1yZCA9IGtkYl9ndWVzdF9iaXRu
ZXNzKGRvbWlkKS84OwogICAgaWYgKGtkYl9yZWFkX21lbShhZGRyLCAoa2RiYnl0X3QgKikmdG1w
dWwsIG51bXJkLCBkb21pZCkgIT0gbnVtcmQpIHsKICAgICAgICBrZGJwKCJVbmFibGUgdG8gcmVh
ZCBtZW0gZnJvbSAlcyAoJWx4KVxuIiwgYXJndlsxXSwgYWRkcik7CiAgICAgICAgcmV0dXJuIEtE
Ql9DUFVfTUFJTl9LREI7CiAgICB9CgogICAgZm9yIChidHBpZHg9MDsgYnRwaWR4IDwgS0RCTUFY
U0JQICYmIGtkYl9idHBfYXBbYnRwaWR4XTsgYnRwaWR4KyspOwogICAgaWYgKGJ0cGlkeCA+PSBL
REJNQVhTQlApIHsKICAgICAgICBrZGJwKCJlcnJvcjogdGFibGUgZnVsbC4gZGVsZXRlIGZldyBi
cmVha3BvaW50c1xuIik7CiAgICAgICAgcmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7CiAgICB9CiAg
ICBidHBhID0ga2RiX2J0cF9hcmdzYVtidHBpZHhdOwogICAgbWVtc2V0KGJ0cGEsIDAsIHNpemVv
ZihrZGJfYnRwX2FyZ3NhWzBdKSk7CgogICAgZm9yIChpPTA7IGFyZ3ZbYXJnc2lkeF07IGkrKywg
YXJnc2lkeCsrKSB7CgogICAgICAgIGlmICgoKHJlZ29mZnM9a2RiX3ZhbGlkX3JlZyhhcmd2W2Fy
Z3NpZHhdKSkgPT0gLTEpICAmJgogICAgICAgICAgICBrZGJfc3RyMnVsb25nKGFyZ3ZbYXJnc2lk
eF0sICZtZW1sb2MpICYmCiAgICAgICAgICAgIChtZW1sb2MgPCBzaXplb2YgKHN0cnVjdCBjcHVf
dXNlcl9yZWdzKSB8fAogICAgICAgICAgICBrZGJfcmVhZF9tZW0obWVtbG9jLCAoa2RiYnl0X3Qg
KikmdG1wdWwsIHNpemVvZih0bXB1bCksIGRvbWlkKT09MCkpewoKICAgICAgICAgICAga2RicCgi
ZXJyb3I6IGludmFsaWQgYXJndW1lbnQ6ICVzXG4iLCBhcmd2W2FyZ3NpZHhdKTsKICAgICAgICAg
ICAgcmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7CiAgICAgICAgfQogICAgICAgIGlmIChpID49IEtE
Ql9NQVhCVFApIHsKICAgICAgICAgICAga2RicCgiZXJyb3I6IGNhbm5vdCBzcGVjaWZ5IG1vcmUg
dGhhbiAlZCBhcmdzXG4iLCBLREJfTUFYQlRQKTsKICAgICAgICAgICAgcmV0dXJuIEtEQl9DUFVf
TUFJTl9LREI7CiAgICAgICAgfQogICAgICAgIGJ0cGFbaV0gPSAocmVnb2ZmcyA9PSAtMSkgPyBt
ZW1sb2MgOiByZWdvZmZzOwogICAgfQoKICAgIGkgPSBrZGJfc2V0X2JwKGRvbWlkLCBhZGRyLCAw
LCBidHBhLCAwLCAwLCAwKTsgICAgIC8qIDAgaXMgbmkgZmxhZyAqLwogICAgaWYgKGkgPCBLREJN
QVhTQlApCiAgICAgICAga2RiX2J0cF9hcFtidHBpZHhdID0ga2RiX2J0cF9hcmdzYVtidHBpZHhd
OwoKICAgIHJldHVybiBLREJfQ1BVX01BSU5fS0RCOwp9CgovKiAKICogU2V0L0xpc3Qgd2F0Y2hw
b2ludHMsIGllLCBoYXJkd2FyZSBicmVha3BvaW50L3MsIGluIGh5cGVydmlzb3IKICogICBVc2Fn
ZTogd3AgW3N5bXxhZGRyXSBbd3xpXSAgIHcgPT0gd3JpdGUgb25seSBkYXRhIHdhdGNocG9pbnQK
ICogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGkgPT0gSU8gd2F0Y2hwb2ludCAocmVh
ZC93cml0ZSkKICoKICogICBFZzogIHdwICAgICAgICA6IGxpc3QgYWxsIHdhdGNocG9pbnRzIHNl
dAogKiAgICAgICAgd3AgYWRkciAgIDogc2V0IGEgcmVhZC93cml0ZSB3cCBhdCBnaXZlbiBhZGRy
CiAqICAgICAgICB3cCBhZGRyIHcgOiBzZXQgYSB3cml0ZSBvbmx5IHdwIGF0IGdpdmVuIGFkZHIK
ICogICAgICAgIHdwIGFkZHIgaSA6IHNldCBhbiBJTyB3cCBhdCBnaXZlbiBhZGRyICgxNmJpdHMg
cG9ydCAjKQogKgogKiAgVEJEOiBhbGxvdyB0byBiZSBzZXQgb24gcGFydGljdWxhciBjcHUKICov
CnN0YXRpYyBrZGJfY3B1X2NtZF90CmtkYl91c2dmX3dwKHZvaWQpCnsKICAgIGtkYnAoIndwIFth
ZGRyfHN5bV1bd3xpXTogZGlzcGxheSBvciBzZXQgd2F0Y2hwb2ludC4gd3JpdGVvbmx5IG9yIElP
XG4iKTsKICAgIGtkYnAoIlx0bm90ZTogd2F0Y2hwb2ludCBpcyB0cmlnZ2VyZWQgYWZ0ZXIgdGhl
IGluc3RydWN0aW9uIGV4ZWN1dGVzXG4iKTsKICAgIHJldHVybiBLREJfQ1BVX01BSU5fS0RCOwp9
CnN0YXRpYyBrZGJfY3B1X2NtZF90IAprZGJfY21kZl93cChpbnQgYXJnYywgY29uc3QgY2hhciAq
KmFyZ3YsIHN0cnVjdCBjcHVfdXNlcl9yZWdzICpyZWdzKQp7CiAgICBrZGJ2YV90IGFkZHI7CiAg
ICBkb21pZF90IGRvbWlkID0gRE9NSURfSURMRTsKICAgIGludCBydyA9IDMsIGxlbiA9IDQ7ICAg
ICAgIC8qIGZvciBub3cganVzdCBkZWZhdWx0IHRvIDQgYnl0ZXMgbGVuICovCgogICAgaWYgKGFy
Z2MgPiAxICYmICphcmd2WzFdID09ICc/JykKICAgICAgICByZXR1cm4ga2RiX3VzZ2Zfd3AoKTsK
CiAgICBpZiAoYXJnYyA8PSAxIHx8IGtkYl9zeXNfY3Jhc2gpIHsgICAgICAgLyogbGlzdCBhbGwg
c2V0IHdhdGNocG9pbnRzICovCiAgICAgICAga2RiX2RvX3dhdGNocG9pbnRzKDAsIDAsIDApOwog
ICAgICAgIHJldHVybiBLREJfQ1BVX01BSU5fS0RCOwogICAgfQogICAgaWYgKCFrZGJfc3RyMmFk
ZHIoYXJndlsxXSwgJmFkZHIsIGRvbWlkKSB8fCBhZGRyID09IDApIHsKICAgICAgICBrZGJwKCJJ
bnZhbGlkIGFyZ3VtZW50OiVzXG4iLCBhcmd2WzFdKTsKICAgICAgICByZXR1cm4gS0RCX0NQVV9N
QUlOX0tEQjsKICAgIH0KICAgIGlmIChhcmdjID4gMikgewogICAgICAgIGlmICghc3RyY21wKGFy
Z3ZbMl0sICJ3IikpCiAgICAgICAgICAgIHJ3ID0gMTsKICAgICAgICBlbHNlIGlmICghc3RyY21w
KGFyZ3ZbMl0sICJpIikpCiAgICAgICAgICAgIHJ3ID0gMjsKICAgICAgICBlbHNlIHsKICAgICAg
ICAgICAgcmV0dXJuIGtkYl91c2dmX3dwKCk7CiAgICAgICAgfQogICAgfQogICAga2RiX2RvX3dh
dGNocG9pbnRzKGFkZHIsIHJ3LCBsZW4pOwogICAgcmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7Cn0K
CnN0YXRpYyBrZGJfY3B1X2NtZF90CmtkYl91c2dmX3djKHZvaWQpCnsKICAgIGtkYnAoIndjICRu
dW18YWxsIDogY2xlYXIgZ2l2ZW4gb3IgYWxsIHdhdGNocG9pbnRzXG4iKTsKICAgIHJldHVybiBL
REJfQ1BVX01BSU5fS0RCOwp9CnN0YXRpYyBrZGJfY3B1X2NtZF90IAprZGJfY21kZl93YyhpbnQg
YXJnYywgY29uc3QgY2hhciAqKmFyZ3YsIHN0cnVjdCBjcHVfdXNlcl9yZWdzICpyZWdzKQp7CiAg
ICBjb25zdCBjaGFyICphcmdwOwogICAgaW50IHdwbnVtOyAgICAgICAgICAgICAgLyogd3AgbnVt
IHRvIGRlbGV0ZS4gLTEgZm9yIGFsbCAqLwoKICAgIGlmIChhcmdjICE9IDIgfHwgKmFyZ3ZbMV0g
PT0gJz8nKSAKICAgICAgICByZXR1cm4ga2RiX3VzZ2Zfd2MoKTsKCiAgICBhcmdwID0gYXJndlsx
XTsKCiAgICBpZiAoIXN0cmNtcChhcmdwLCAiYWxsIikpCiAgICAgICAgd3BudW0gPSAtMTsKICAg
IGVsc2UgaWYgKCFrZGJfc3RyMmRlY2koYXJncCwgJndwbnVtKSkgewogICAgICAgIGtkYnAoIklu
dmFsaWQgd3BudW06ICVzXG4iLCBhcmdwKTsKICAgICAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tE
QjsKICAgIH0KICAgIGtkYl9jbGVhcl93cHMod3BudW0pOwogICAgcmV0dXJuIEtEQl9DUFVfTUFJ
Tl9LREI7Cn0KCnN0YXRpYyB2b2lkCmtkYl9kaXNwbGF5X2h2bV92Y3B1KHN0cnVjdCB2Y3B1ICp2
cCkKewogICAgc3RydWN0IGh2bV92Y3B1ICpodnA7CiAgICBzdHJ1Y3QgdmxhcGljICp2bHA7CiAg
ICBzdHJ1Y3QgaHZtX2lvX29wICppb29wOwoKICAgIGh2cCA9ICZ2cC0+YXJjaC5odm1fdmNwdTsK
ICAgIHZscCA9ICZodnAtPnZsYXBpYzsKICAgIGtkYnAoInZjcHU6JWx4IGlkOiVkIGRvbWlkOiVk
XG4iLCB2cCwgdnAtPnZjcHVfaWQsIHZwLT5kb21haW4tPmRvbWFpbl9pZCk7CgojaWYgMAogICAg
aWYgKGlzX2h5YnJpZF92Y3B1KHZwKSkgewogICAgICAgIHN0cnVjdCBoeWJyaWRfZXh0ICpocCA9
ICZodnAtPmh2X2h5YnJpZDsKICAgICAgICBrZGJwKCIgICAgJmh5YnJpZF9leHQ6JXAgbGltaXQ6
JXggaW9wbDoleCB2Y3B1X2luZm9fbWZuOiVseFxuIiwKICAgICAgICAgICAgIGhwLCBocC0+aHli
X2lvYm1wX2xpbWl0LCBocC0+aHliX2lvcGwsIGhwLT5oeWJfdmNwdV9pbmZvX21mbik7CiAgICB9
CiNlbmRpZgoKICAgIGlvb3AgPSBOVUxMOyAgIC8qIGNvbXBpbGVyIHdhcm5pbmcgKi8KICAgIGtk
YnAoIiAgICAmaHZtX3ZjcHU6JWx4ICBndWVzdF9lZmVyOiJLREJGTCJcbiIsIGh2cCwgaHZwLT5n
dWVzdF9lZmVyKTsKICAgIGtkYnAoIiAgICAgIGd1ZXN0X2NyOiBbMF06IktEQkZMIiBbMV06IktE
QkZMIiBbMl06IktEQkZMIlxuIiwgCiAgICAgICAgIGh2cC0+Z3Vlc3RfY3JbMF0sIGh2cC0+Z3Vl
c3RfY3JbMV0saHZwLT5ndWVzdF9jclsyXSk7CiAgICBrZGJwKCIgICAgICAgICAgICAgICAgWzNd
OiJLREJGTCIgWzRdOiJLREJGTCJcbiIsIGh2cC0+Z3Vlc3RfY3JbM10sCiAgICAgICAgIGh2cC0+
Z3Vlc3RfY3JbNF0pOwogICAga2RicCgiICAgICAgaHdfY3I6IFswXToiS0RCRkwiIFsxXToiS0RC
RkwiIFsyXToiS0RCRkwiXG4iLCBodnAtPmh3X2NyWzBdLAogICAgICAgICBodnAtPmh3X2NyWzFd
LCBodnAtPmh3X2NyWzJdKTsKICAgIGtkYnAoIiAgICAgICAgICAgICAgWzNdOiJLREJGTCIgWzRd
OiJLREJGTCJcbiIsIGh2cC0+aHdfY3JbM10sIAogICAgICAgICBodnAtPmh3X2NyWzRdKTsKCiAg
ICBrZGJwKCIgICAgICBWTEFQSUM6IGJhc2UgbXNyOiJLREJGNjQiIGRpczoleCB0bXJkaXY6JXhc
biIsIAogICAgICAgICB2bHAtPmh3LmFwaWNfYmFzZV9tc3IsIHZscC0+aHcuZGlzYWJsZWQsIHZs
cC0+aHcudGltZXJfZGl2aXNvcik7CiAgICBrZGJwKCIgICAgICAgICAgcmVnczolcCByZWdzX3Bh
Z2U6JXBcbiIsIHZscC0+cmVncywgdmxwLT5yZWdzX3BhZ2UpOwogICAga2RicCgiICAgICAgICAg
IHBlcmlvZGljIHRpbWU6XG4iKTsgCiAgICBrZGJfcHJudF9wZXJpb2RpY190aW1lKCZ2bHAtPnB0
KTsKCiAgICBrZGJwKCIgICAgICB4ZW5fcG9ydDoleCBmbGFnX2RyX2RpcnR5OiV4IGRiZ19zdF9s
YXRjaDoleFxuIiwgaHZwLT54ZW5fcG9ydCwKICAgICAgICAgaHZwLT5mbGFnX2RyX2RpcnR5LCBo
dnAtPmRlYnVnX3N0YXRlX2xhdGNoKTsKCiAgICBpZiAoYm9vdF9jcHVfZGF0YS54ODZfdmVuZG9y
ID09IFg4Nl9WRU5ET1JfSU5URUwpIHsKCiAgICAgICAgc3RydWN0IGFyY2hfdm14X3N0cnVjdCAq
dnhwID0gJmh2cC0+dS52bXg7CiAgICAgICAga2RicCgiICAgICAgJnZteDogJXAgdm1jczolbHgg
YWN0aXZlX2NwdToleCBsYXVuY2hlZDoleFxuIiwgdnhwLCAKICAgICAgICAgICAgIHZ4cC0+dm1j
cywgdnhwLT5hY3RpdmVfY3B1LCB2eHAtPmxhdW5jaGVkKTsKI2lmIFhFTl9WRVJTSU9OICE9IDQg
ICAgICAgICAgICAgICAvKiB4ZW4gMy54LnggKi8KICAgICAgICBrZGJwKCIgICAgICAgIGV4ZWNf
Y3RybDoleCB2cGlkOiQlZFxuIiwgdnhwLT5leGVjX2NvbnRyb2wsIHZ4cC0+dnBpZCk7CiNlbmRp
ZgogICAgICAgIGtkYnAoIiAgICAgICAgaG9zdF9jcjA6ICJLREJGTCIgdm14OiB7cmVhbG06JXgg
ZW11bGF0ZToleH1cbiIsCiAgICAgICAgICAgICB2eHAtPmhvc3RfY3IwLCB2eHAtPnZteF9yZWFs
bW9kZSwgdnhwLT52bXhfZW11bGF0ZSk7CgojaWZkZWYgX194ODZfNjRfXwogICAgICAgIGtkYnAo
IiAgICAgICAgJm1zcl9zdGF0ZTolcCBleGNlcHRpb25fYml0bWFwOiVseFxuIiwgJnZ4cC0+bXNy
X3N0YXRlLAogICAgICAgICAgICAgdnhwLT5leGNlcHRpb25fYml0bWFwKTsKI2VuZGlmCiAgICB9
IGVsc2UgaWYgKGJvb3RfY3B1X2RhdGEueDg2X3ZlbmRvciA9PSBYODZfVkVORE9SX0FNRCkgewog
ICAgICAgIHN0cnVjdCBhcmNoX3N2bV9zdHJ1Y3QgKnN2cCA9ICZodnAtPnUuc3ZtOwojaWYgWEVO
X1ZFUlNJT04gIT0gNCAgICAgICAgICAgICAgIC8qIHhlbiAzLngueCAqLwogICAgICAgIGtkYnAo
IiAgJnN2bTogdm1jYjolbHggcGE6IktEQkY2NCIgYXNpZDoiS0RCRjY0IlxuIiwgc3ZwLCBzdnAt
PnZtY2IsCiAgICAgICAgICAgICBzdnAtPnZtY2JfcGEsIHN2cC0+YXNpZF9nZW5lcmF0aW9uKTsK
I2VuZGlmCiAgICAgICAga2RicCgiICAgIG1zcnBtOiVwIGxuY2hfY29yZToleCB2bWNiX3N5bmM6
JXhcbiIsIHN2cC0+bXNycG0sIAogICAgICAgICAgICAgc3ZwLT5sYXVuY2hfY29yZSwgc3ZwLT52
bWNiX2luX3N5bmMpOwogICAgfQogICAga2RicCgiICAgICAgY2FjaGVtb2RlOiV4IGlvOiB7c3Rh
dGU6ICV4IGRhdGE6ICJLREJGTCJ9XG4iLCBodnAtPmNhY2hlX21vZGUsCiAgICAgICAgIGh2cC0+
aHZtX2lvLmlvX3N0YXRlLCBodnAtPmh2bV9pby5pb19kYXRhKTsKICAgIGtkYnAoIiAgICAgIG1t
aW86IHtndmE6ICJLREJGTCIgZ3BmbjogIktEQkZMIn1cbiIsIGh2cC0+aHZtX2lvLm1taW9fZ3Zh
LAogICAgICAgICBodnAtPmh2bV9pby5tbWlvX2dwZm4pOwp9CgovKiBkaXNwbGF5IHN0cnVjdCBo
dm1fdmNwdXt9IGluIHN0cnVjdCB2Y3B1LmFyY2h7fSAqLwpzdGF0aWMga2RiX2NwdV9jbWRfdApr
ZGJfdXNnZl92Y3B1aCh2b2lkKQp7CiAgICBrZGJwKCJ2Y3B1aCB2Y3B1LXB0ciA6IGRpc3BsYXkg
aHZtX3ZjcHUgc3RydWN0XG4iKTsKICAgIHJldHVybiBLREJfQ1BVX01BSU5fS0RCOwp9CnN0YXRp
YyBrZGJfY3B1X2NtZF90IAprZGJfY21kZl92Y3B1aChpbnQgYXJnYywgY29uc3QgY2hhciAqKmFy
Z3YsIHN0cnVjdCBjcHVfdXNlcl9yZWdzICpyZWdzKQp7CiAgICBzdHJ1Y3QgdmNwdSAqdnA7Cgog
ICAgaWYgKGFyZ2MgPCAyIHx8ICphcmd2WzFdID09ICc/JykgCiAgICAgICAgcmV0dXJuIGtkYl91
c2dmX3ZjcHVoKCk7CgogICAgaWYgKCFrZGJfc3RyMnVsb25nKGFyZ3ZbMV0sICh1bG9uZyAqKSZ2
cCkgfHwgIWtkYl92Y3B1X3ZhbGlkKHZwKSB8fAogICAgICAgICFpc19odm1fb3JfaHliX3ZjcHUo
dnApKSB7CgogICAgICAgIGtkYnAoImtkYjogQmFkIFZDUFU6ICVzXG4iLCBhcmd2WzFdKTsKICAg
ICAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKICAgIH0KICAgIGtkYl9kaXNwbGF5X2h2bV92
Y3B1KHZwKTsKICAgIHJldHVybiBLREJfQ1BVX01BSU5fS0RCOwp9CgovKiBhbHNvIGxvb2sgaW50
byBhcmNoX2dldF9pbmZvX2d1ZXN0KCkgdG8gZ2V0IGNvbnRleHQgKi8Kc3RhdGljIHZvaWQKa2Ri
X3ByaW50X3VyZWdzKHN0cnVjdCBjcHVfdXNlcl9yZWdzICpyZWdzKQp7CiNpZmRlZiBfX3g4Nl82
NF9fCiAgICBrZGJwKCIgICAgICByZmxhZ3M6ICUwMTZseCAgIHJpcDogJTAxNmx4XG4iLCByZWdz
LT5yZmxhZ3MsIHJlZ3MtPnJpcCk7CiAgICBrZGJwKCIgICAgICAgICByYXg6ICUwMTZseCAgIHJi
eDogJTAxNmx4ICAgcmN4OiAlMDE2bHhcbiIsCiAgICAgICAgIHJlZ3MtPnJheCwgcmVncy0+cmJ4
LCByZWdzLT5yY3gpOwogICAga2RicCgiICAgICAgICAgcmR4OiAlMDE2bHggICByc2k6ICUwMTZs
eCAgIHJkaTogJTAxNmx4XG4iLAogICAgICAgICByZWdzLT5yZHgsIHJlZ3MtPnJzaSwgcmVncy0+
cmRpKTsKICAgIGtkYnAoIiAgICAgICAgIHJicDogJTAxNmx4ICAgcnNwOiAlMDE2bHggICAgcjg6
ICUwMTZseFxuIiwKICAgICAgICAgcmVncy0+cmJwLCByZWdzLT5yc3AsIHJlZ3MtPnI4KTsKICAg
IGtkYnAoIiAgICAgICAgICByOTogICUwMTZseCAgcjEwOiAlMDE2bHggICByMTE6ICUwMTZseFxu
IiwKICAgICAgICAgcmVncy0+cjksICByZWdzLT5yMTAsIHJlZ3MtPnIxMSk7CiAgICBrZGJwKCIg
ICAgICAgICByMTI6ICUwMTZseCAgIHIxMzogJTAxNmx4ICAgcjE0OiAlMDE2bHhcbiIsCiAgICAg
ICAgIHJlZ3MtPnIxMiwgcmVncy0+cjEzLCByZWdzLT5yMTQpOwogICAga2RicCgiICAgICAgICAg
cjE1OiAlMDE2bHhcbiIsIHJlZ3MtPnIxNSk7CiAgICBrZGJwKCIgICAgICBkczogJTA0eCAgIGVz
OiAlMDR4ICAgZnM6ICUwNHggICBnczogJTA0eCAgICIKICAgICAgICAgIiAgICAgIHNzOiAlMDR4
ICAgY3M6ICUwNHhcbiIsIHJlZ3MtPmRzLCByZWdzLT5lcywgcmVncy0+ZnMsCiAgICAgICAgIHJl
Z3MtPmdzLCByZWdzLT5zcywgcmVncy0+Y3MpOwogICAga2RicCgiICAgICAgZXJyY29kZTolMDhs
eCBlbnRyeXZlYzolMDhseCB1cGNhbGxfbWFzazolbHhcbiIsCiAgICAgICAgIHJlZ3MtPmVycm9y
X2NvZGUsIHJlZ3MtPmVudHJ5X3ZlY3RvciwgcmVncy0+c2F2ZWRfdXBjYWxsX21hc2spOwojZWxz
ZQogICAga2RicCgiICAgICAgZWZsYWdzOiAlMDE2bHggZWlwOiAwMTZseFxuIiwgcmVncy0+ZWZs
YWdzLCByZWdzLT5laXApOwogICAga2RicCgiICAgICAgZWF4OiAlMDh4ICAgZWJ4OiAlMDh4ICAg
ZWN4OiAlMDh4ICAgZWR4OiAlMDh4XG4iLAogICAgICAgICByZWdzLT5lYXgsIHJlZ3MtPmVieCwg
cmVncy0+ZWN4LCByZWdzLT5lZHgpOwogICAga2RicCgiICAgICAgZXNpOiAlMDh4ICAgZWRpOiAl
MDh4ICAgZWJwOiAlMDh4ICAgZXNwOiAlMDh4XG4iLAogICAgICAgICByZWdzLT5lc2ksIHJlZ3Mt
PmVkaSwgcmVncy0+ZWJwLCByZWdzLT5lc3ApOwogICAga2RicCgiICAgICAgZHM6ICUwNHggICBl
czogJTA0eCAgIGZzOiAlMDR4ICAgZ3M6ICUwNHggICAiCiAgICAgIiAgICAgIHNzOiAlMDR4ICAg
Y3M6ICUwNHhcbiIsIHJlZ3MtPmRzLCByZWdzLT5lcywgcmVncy0+ZnMsCiAgICAgICAgIHJlZ3Mt
PmdzLCByZWdzLT5zcywgcmVncy0+Y3MpOwogICAga2RicCgiICAgICAgZXJyY29kZTolMDRseCBl
bnRyeXZlYzolMDRseCB1cGNhbGxfbWFzazolbHhcbiIsIAogICAgICAgICByZWdzLT5lcnJvcl9j
b2RlLCByZWdzLT5lbnRyeV92ZWN0b3IsIHJlZ3MtPnNhdmVkX3VwY2FsbF9tYXNrKTsKI2VuZGlm
Cn0KCiNpZiBYRU5fU1VCVkVSU0lPTiA8IDMgICAgICAgICAgICAgLyogeGVuIDMuMS54IG9yIHhl
biAzLjIueCAqLwojaWZkZWYgQ09ORklHX0NPTVBBVAogICAgI3VuZGVmIHZjcHVfaW5mbwogICAg
I2RlZmluZSB2Y3B1X2luZm8odiwgZmllbGQpICAgICAgICAgICAgIFwKICAgICgqKCFoYXNfMzJi
aXRfc2hpbmZvKCh2KS0+ZG9tYWluKSA/ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgXAogICAgICAgKHR5cGVvZigmKHYpLT52Y3B1X2luZm8tPmNvbXBhdC5maWVsZCkpJih2
KS0+dmNwdV9pbmZvLT5uYXRpdmUuZmllbGQgOiBcCiAgICAgICAodHlwZW9mKCYodiktPnZjcHVf
aW5mby0+Y29tcGF0LmZpZWxkKSkmKHYpLT52Y3B1X2luZm8tPmNvbXBhdC5maWVsZCkpCgogICAg
I3VuZGVmIF9fc2hhcmVkX2luZm8KICAgICNkZWZpbmUgX19zaGFyZWRfaW5mbyhkLCBzLCBmaWVs
ZCkgICAgICAgICAgICAgICAgICAgICAgXAogICAgKCooIWhhc18zMmJpdF9zaGluZm8oZCkgPyAg
ICAgICAgICAgICAgICAgICAgICAgICAgIFwKICAgICAgICh0eXBlb2YoJihzKS0+Y29tcGF0LmZp
ZWxkKSkmKHMpLT5uYXRpdmUuZmllbGQgOiBcCiAgICAgICAodHlwZW9mKCYocyktPmNvbXBhdC5m
aWVsZCkpJihzKS0+Y29tcGF0LmZpZWxkKSkKI2VuZGlmCiNlbmRpZgoKc3RhdGljIHZvaWQga2Ri
X2Rpc3BsYXlfcHZfdmNwdShzdHJ1Y3QgdmNwdSAqdnApCnsKICAgIGludCBpOwogICAgc3RydWN0
IHB2X3ZjcHUgKmdwID0gJnZwLT5hcmNoLnB2X3ZjcHU7CgogICAga2RicCgiICAgICAgR0RUX1ZJ
UlRfU1RBUlQodmNwdSk6ICVseFxuIiwgR0RUX1ZJUlRfU1RBUlQodnApKTsKICAgIGtkYnAoIiAg
ICAgIEdEVDogZW50cmllczoweCVseCAgZnJhbWVzOlxuIiwgZ3AtPmdkdF9lbnRzKTsKICAgIGZv
ciAoaT0wOyBpIDwgMTY7IGk9aSs0KSAKICAgICAgICBrZGJwKCIgICAgICAgICAgJTAxNmx4ICUw
MTZseCAlMDE2bHggJTAxNmx4XG4iLCBncC0+Z2R0X2ZyYW1lc1tpXSwgCiAgICAgICAgICAgICBn
cC0+Z2R0X2ZyYW1lc1tpKzFdLCBncC0+Z2R0X2ZyYW1lc1tpKzJdLGdwLT5nZHRfZnJhbWVzW2kr
M10pOwogICAgCiAgICBrZGJwKCIgICAgICB0cmFwX2N0eHQ6JWx4IGtlcm5lbF9zczolbHgga2Vy
bmVsX3NwOiVseFxuIiwgZ3AtPnRyYXBfY3R4dCwKICAgICAgICAgZ3AtPmtlcm5lbF9zcywgZ3At
Pmtlcm5lbF9zcCk7CiAgICBrZGJwKCIgICAgICBjdHJscmVnczpcbiIpOwogICAgZm9yIChpPTA7
IGkgPCA4OyBpPWkrNCkKICAgICAgICBrZGJwKCIgICAgICAgICAgJTAxNmx4ICUwMTZseCAlMDE2
bHggJTAxNmx4XG4iLCBncC0+Y3RybHJlZ1tpXSwgCiAgICAgICAgICAgICBncC0+Y3RybHJlZ1tp
KzFdLCBncC0+Y3RybHJlZ1tpKzJdLCBncC0+Y3RybHJlZ1tpKzNdKTsKI2lmZGVmIF9feDg2XzY0
X18KICAgIGtkYnAoIiAgICAgIGNhbGxiYWNrOiAgIGV2ZW50OiAlMDE2bHggICBmYWlsc2FmZTog
JTAxNmx4XG4iLCAKICAgICAgICAgZ3AtPmV2ZW50X2NhbGxiYWNrX2VpcCwgZ3AtPmZhaWxzYWZl
X2NhbGxiYWNrX2VpcCk7CiAgICBrZGJwKCIgICAgICBiYXNlOiBmczoweCVseCBnc2tlcm46MHgl
bHggZ3N1c2VyOjB4JWx4XG4iLCAKICAgICAgICAgZ3AtPmZzX2Jhc2UsIGdwLT5nc19iYXNlX2tl
cm5lbCwgZ3AtPmdzX2Jhc2VfdXNlcik7CiNlbHNlCiAgICBrZGJwKCIgICAgICBjYWxsYmFjazog
ICBldmVudDogJTA4bHg6JTA4bHggICBmYWlsc2FmZTogJTA4bHg6JTA4bHhcbiIsIAogICAgICAg
ICBncC0+ZXZlbnRfY2FsbGJhY2tfY3MsIGdwLT5ldmVudF9jYWxsYmFja19laXAsIAogICAgICAg
ICBncC0+ZmFpbHNhZmVfY2FsbGJhY2tfY3MsIGdwLT5mYWlsc2FmZV9jYWxsYmFja19laXApOwoj
ZW5kaWYKICAgIGtkYnAoIiAgICB2Y3B1X2luZm9fbWZuOiAlbHggIGlvcGw6ICV4XG4iLCBncC0+
dmNwdV9pbmZvX21mbiwgZ3AtPmlvcGwpOwogICAga2RicCgiXG4iKTsKfQoKLyogRGlzcGxheSBv
bmUgVkNQVSBpbmZvICovCnN0YXRpYyB2b2lkCmtkYl9kaXNwbGF5X3ZjcHUoc3RydWN0IHZjcHUg
KnZwKQp7CiAgICBpbnQgaTsKICAgIHN0cnVjdCBhcmNoX3ZjcHUgKmF2cCA9ICZ2cC0+YXJjaDsK
ICAgIHN0cnVjdCBwYWdpbmdfdmNwdSAqcHZwID0gJnZwLT5hcmNoLnBhZ2luZzsKICAgIGludCBk
b21pZCA9IHZwLT5kb21haW4tPmRvbWFpbl9pZDsKCiAgICBrZGJwKCJcblZDUFU6ICB2Y3B1LWlk
OiVkICB2Y3B1LXB0cjolcCAiLCB2cC0+dmNwdV9pZCwgdnApOwogICAga2RicCgiICBwcm9jZXNz
b3I6JWQgZG9taWQ6JWQgIGRvbXA6JXBcbiIsIHZwLT5wcm9jZXNzb3IsIGRvbWlkLHZwLT5kb21h
aW4pOwoKICAgIGlmIChkb21pZCA9PSBET01JRF9JRExFKSB7CiAgICAgICAga2RicCgiICAgIElE
TEUgdmNwdS5cbiIpOwogICAgICAgIHJldHVybjsKICAgIH0KICAgIGtkYnAoIiAgcGF1c2U6IGZs
YWdzOjB4JTAxNmx4IGNvdW50OiV4XG4iLCB2cC0+cGF1c2VfZmxhZ3MsIAogICAgICAgICB2cC0+
cGF1c2VfY291bnQuY291bnRlcik7CiAgICBrZGJwKCIgIHZjcHU6IGluaXRkb25lOiVkIHJ1bm5p
bmc6JWRcbiIsIAogICAgICAgICB2cC0+aXNfaW5pdGlhbGlzZWQsIHZwLT5pc19ydW5uaW5nKTsK
ICAgIGtkYnAoIiAgbWNlcGVuZDolZCBubWlwZW5kOiVkIHNodXQ6IGRlZjolZCBwYXVzZWQ6JWRc
biIsIAogICAgICAgICB2cC0+bWNlX3BlbmRpbmcsICB2cC0+bm1pX3BlbmRpbmcsIHZwLT5kZWZl
cl9zaHV0ZG93biwgCiAgICAgICAgIHZwLT5wYXVzZWRfZm9yX3NodXRkb3duKTsKICAgIGtkYnAo
IiAgJnZjcHVfaW5mbzolcCA6IGV2dGNobl91cGNfcGVuZDoleCBfbWFzazoleFxuIiwKICAgICAg
ICAgdnAtPnZjcHVfaW5mbywgdmNwdV9pbmZvKHZwLCBldnRjaG5fdXBjYWxsX3BlbmRpbmcpLAog
ICAgICAgICB2Y3B1X2luZm8odnAsIGV2dGNobl91cGNhbGxfbWFzaykpOwogICAga2RicCgiICBl
dnRfcGVuZF9zZWw6JWx4IHBvbGxfZXZ0Y2huOiV4ICIsIAogICAgICAgICAqKHVuc2lnbmVkIGxv
bmcgKikmdmNwdV9pbmZvKHZwLCBldnRjaG5fcGVuZGluZ19zZWwpLCB2cC0+cG9sbF9ldnRjaG4p
OwogICAga2RiX3ByaW50X3NwaW5fbG9jaygidmlycV9sb2NrOiIsICZ2cC0+dmlycV9sb2NrLCAi
XG4iKTsKICAgIGZvciAoaT0wOyBpIDwgTlJfVklSUVM7IGkrKykKICAgICAgICBpZiAodnAtPnZp
cnFfdG9fZXZ0Y2huW2ldICE9IDApCiAgICAgICAgICAgIGtkYnAoIiAgICAgIHZpcnE6JCVkIHBv
cnQ6JCVkXG4iLCBpLCB2cC0+dmlycV90b19ldnRjaG5baV0pOwoKICAgIGtkYnAoIiAgbmV4dDol
cCBwZXJpb2RpYzogcGVyaW9kOjB4JWx4IGxhc3RfZXZlbnQ6MHglbHhcbiIsIAogICAgICAgICB2
cC0+bmV4dF9pbl9saXN0LCB2cC0+cGVyaW9kaWNfcGVyaW9kLCB2cC0+cGVyaW9kaWNfbGFzdF9l
dmVudCk7CiAgICBrZGJwKCIgIGNwdV9hZmZpbml0eToweCVseCB2Y3B1X2RpcnR5X2NwdW1hc2s6
JXAgc2NoZWRfcHJpdjoweCVwXG4iLAogICAgICAgICB2cC0+Y3B1X2FmZmluaXR5LCB2cC0+dmNw
dV9kaXJ0eV9jcHVtYXNrLCB2cC0+c2NoZWRfcHJpdik7CiAgICBrZGJwKCIgICZydW5zdGF0ZTog
JXAgc3RhdGU6ICV4IChlZy4gUlVOU1RBVEVfcnVubmluZykgZ3Vlc3RwdHI6JXBcbiIsIAogICAg
ICAgICAmdnAtPnJ1bnN0YXRlLCB2cC0+cnVuc3RhdGUuc3RhdGUsIHJ1bnN0YXRlX2d1ZXN0KHZw
KSk7CiAgICBrZGJwKCJcbiIpOwogICAga2RicCgiICBhcmNoIGluZm86ICglcClcbiIsICZ2cC0+
YXJjaCk7CiAgICBrZGJwKCIgICAgZ3Vlc3RfY29udGV4dDogVkdDRl8gZmxhZ3M6JWx4IiwgCiAg
ICAgICAgIHZwLT5hcmNoLnZnY19mbGFncyk7IC8qIFZHQ0ZfaW5fa2VybmVsICovCiAgICBpZiAo
aXNfaHZtX29yX2h5Yl92Y3B1KHZwKSkKICAgICAgICBrZGJwKCIgICAgKEhWTSBndWVzdDogSVAs
IFNQLCBFRkxBR1MgbWF5IGJlIHN0YWxlKSIpOwogICAga2RicCgiXG4iKTsKICAgIGtkYl9wcmlu
dF91cmVncygmdnAtPmFyY2gudXNlcl9yZWdzKTsKICAgIGtkYnAoIiAgICAgIGRlYnVncmVnczpc
biIpOwogICAgZm9yIChpPTA7IGkgPCA4OyBpPWkrNCkKICAgICAgICBrZGJwKCIgICAgICAgICAg
JTAxNmx4ICUwMTZseCAlMDE2bHggJTAxNmx4XG4iLCBhdnAtPmRlYnVncmVnW2ldLCAKICAgICAg
ICAgICAgIGF2cC0+ZGVidWdyZWdbaSsxXSwgYXZwLT5kZWJ1Z3JlZ1tpKzJdLCBhdnAtPmRlYnVn
cmVnW2krM10pOwoKICAgIGlmIChpc19odm1fb3JfaHliX3ZjcHUodnApKQogICAgICAgIGtkYl9k
aXNwbGF5X2h2bV92Y3B1KHZwKTsKICAgIGVsc2UKICAgICAgICBrZGJfZGlzcGxheV9wdl92Y3B1
KHZwKTsKCiAgICBrZGJwKCIgICAgVEZfZmxhZ3M6ICUwMTZseCAgZ3Vlc3RfdGFibGU6ICUwMTZs
eCBjcjM6JTAxNmx4XG4iLCAKICAgICAgICAgdnAtPmFyY2guZmxhZ3MsIHZwLT5hcmNoLmd1ZXN0
X3RhYmxlLnBmbiwgYXZwLT5jcjMpOyAKICAgIGtkYnAoIiAgICBwYWdpbmc6IFxuIik7CiAgICBr
ZGJwKCIgICAgICB2dGxiOiVwXG4iLCAmcHZwLT52dGxiKTsKICAgIGtkYnAoIiAgICAgICZwZ19t
b2RlOiVwIGdzdGxldmVsczolZCAmc2hhZG93OiVwIHNobGV2ZWxzOiVkXG4iLAogICAgICAgICBw
dnAtPm1vZGUsIHB2cC0+bW9kZS0+Z3Vlc3RfbGV2ZWxzLCAmcHZwLT5tb2RlLT5zaGFkb3csCiAg
ICAgICAgIHB2cC0+bW9kZS0+c2hhZG93LnNoYWRvd19sZXZlbHMpOwogICAga2RicCgiICAgICAg
c2hhZG93X3ZjcHU6XG4iKTsKICAgIGtkYnAoIiAgICAgICAgZ3Vlc3RfdnRhYmxlOiVwIGxhc3Qg
ZW1fbWZuOiJLREJGTCJcbiIsCiAgICAgICAgIHB2cC0+c2hhZG93Lmd1ZXN0X3Z0YWJsZSwgcHZw
LT5zaGFkb3cubGFzdF9lbXVsYXRlZF9tZm4pOwojaWYgQ09ORklHX1BBR0lOR19MRVZFTFMgPj0g
MwogICAga2RicCgiICAgICAgICAgbDN0Ymw6IDM6IktEQkZMIiAyOiJLREJGTCJcbiIKICAgICAg
ICAgIiAgICAgICAgICAgICAgICAxOiJLREJGTCIgMDoiS0RCRkwiXG4iLAogICAgIHB2cC0+c2hh
ZG93LmwzdGFibGVbM10ubDMsIHB2cC0+c2hhZG93LmwzdGFibGVbMl0ubDMsIAogICAgIHB2cC0+
c2hhZG93LmwzdGFibGVbMV0ubDMsIHB2cC0+c2hhZG93LmwzdGFibGVbMF0ubDMpOwogICAga2Ri
cCgiICAgICAgICBnbDN0Ymw6IDM6IktEQkZMIiAyOiJLREJGTCJcbiIKICAgICAgICAgIiAgICAg
ICAgICAgICAgICAxOiJLREJGTCIgMDoiS0RCRkwiXG4iLAogICAgIHB2cC0+c2hhZG93LmdsM2Vb
M10ubDMsIHB2cC0+c2hhZG93LmdsM2VbMl0ubDMsIAogICAgIHB2cC0+c2hhZG93LmdsM2VbMV0u
bDMsIHB2cC0+c2hhZG93LmdsM2VbMF0ubDMpOwojZW5kaWYKICAgIGtkYnAoIiAgZ2Ric3hfdmNw
dV9ldmVudDoleFxuIiwgdnAtPmFyY2guZ2Ric3hfdmNwdV9ldmVudCk7Cn0KCi8qIAogKiBGVU5D
VElPTjogRGlzcGFseSAoY3VycmVudCkgVkNQVS9zCiAqLwpzdGF0aWMga2RiX2NwdV9jbWRfdApr
ZGJfdXNnZl92Y3B1KHZvaWQpCnsKICAgIGtkYnAoInZjcHUgW3ZjcHUtcHRyXSA6IGRpc3BsYXkg
Y3VycmVudC92Y3B1LXB0ciB2Y3B1IGluZm9cbiIpOwogICAgcmV0dXJuIEtEQl9DUFVfTUFJTl9L
REI7Cn0Kc3RhdGljIGtkYl9jcHVfY21kX3QgCmtkYl9jbWRmX3ZjcHUoaW50IGFyZ2MsIGNvbnN0
IGNoYXIgKiphcmd2LCBzdHJ1Y3QgY3B1X3VzZXJfcmVncyAqcmVncykKewogICAgc3RydWN0IHZj
cHUgKnYgPSBjdXJyZW50OwoKICAgIGlmIChhcmdjID4gMiB8fCAoYXJnYyA+IDEgJiYgKmFyZ3Zb
MV0gPT0gJz8nKSkKICAgICAgICBrZGJfdXNnZl92Y3B1KCk7CiAgICBlbHNlIGlmIChhcmdjIDw9
IDEpCiAgICAgICAga2RiX2Rpc3BsYXlfdmNwdSh2KTsKICAgIGVsc2UgaWYgKGtkYl9zdHIydWxv
bmcoYXJndlsxXSwgKHVsb25nICopJnYpICYmIGtkYl92Y3B1X3ZhbGlkKHYpKQogICAgICAgIGtk
Yl9kaXNwbGF5X3ZjcHUodik7CiAgICBlbHNlIAogICAgICAgIGtkYnAoIkludmFsaWQgdXNhZ2Uv
YXJndW1lbnQ6JXMgdjolbHhcbiIsIGFyZ3ZbMV0sIChsb25nKXYpOwogICAgcmV0dXJuIEtEQl9D
UFVfTUFJTl9LREI7Cn0KCi8qIGZyb20gcGFnaW5nX2R1bXBfZG9tYWluX2luZm8oKSAqLwpzdGF0
aWMgdm9pZCBrZGJfcHJfZG9tX3BnX21vZGVzKHN0cnVjdCBkb21haW4gKmQpCnsKICAgIGlmIChw
YWdpbmdfbW9kZV9lbmFibGVkKGQpKSB7CiAgICAgICAga2RicCgiIHBhZ2luZyBtb2RlIGVuYWJs
ZWQiKTsKICAgICAgICBpZiAoIHBhZ2luZ19tb2RlX3NoYWRvdyhkKSApCiAgICAgICAgICAgIGtk
YnAoIiBzaGFkb3coUEdfU0hfZW5hYmxlKSIpOwogICAgICAgIGlmICggcGFnaW5nX21vZGVfaGFw
KGQpICkKICAgICAgICAgICAga2RicCgiIGhhcChQR19IQVBfZW5hYmxlKSAiKTsKICAgICAgICBp
ZiAoIHBhZ2luZ19tb2RlX3JlZmNvdW50cyhkKSApCiAgICAgICAgICAgIGtkYnAoIiByZWZjb3Vu
dHMoUEdfcmVmY291bnRzKSAiKTsKICAgICAgICBpZiAoIHBhZ2luZ19tb2RlX2xvZ19kaXJ0eShk
KSApCiAgICAgICAgICAgIGtkYnAoIiBsb2dfZGlydHkoUEdfbG9nX2RpcnR5KSAiKTsKICAgICAg
ICBpZiAoIHBhZ2luZ19tb2RlX3RyYW5zbGF0ZShkKSApCiAgICAgICAgICAgIGtkYnAoIiB0cmFu
c2xhdGUoUEdfdHJhbnNsYXRlKSAiKTsKICAgICAgICBpZiAoIHBhZ2luZ19tb2RlX2V4dGVybmFs
KGQpICkKICAgICAgICAgICAga2RicCgiIGV4dGVybmFsKFBHX2V4dGVybmFsKSAiKTsKICAgIH0g
ZWxzZQogICAgICAgIGtkYnAoIiBkaXNhYmxlZCIpOwogICAga2RicCgiXG4iKTsKfQoKLyogcHJp
bnQgZXZlbnQgY2hhbm5lbHMgaW5mbyBmb3IgYSBnaXZlbiBkb21haW4gCiAqIE5PVEU6IHZlcnkg
Y29uZnVzaW5nLCBwb3J0IGFuZCBldmVudCBjaGFubmVsIHJlZmVyIHRvIHRoZSBzYW1lIHRoaW5n
LiBldnRjaG4KICogaXMgYXJyeSBvZiBwb2ludGVycyB0byBhIGJ1Y2tldCBvZiBwb2ludGVycyB0
byAxMjggc3RydWN0IGV2dGNobnt9LiB3aGlsZQogKiA2NGJpdCB4ZW4gY2FuIGhhbmRsZSA0MDk2
IG1heCBjaGFubmVscywgYSAzMmJpdCBndWVzdCBpcyBsaW1pdGVkIHRvIDEwMjQgKi8Kc3RhdGlj
IHZvaWQgbm9pbmxpbmUga2RiX3ByaW50X2RvbV9ldmVudGluZm8oc3RydWN0IGRvbWFpbiAqZHAp
CnsKICAgIHVpbnQgY2huOwoKICAgIGtkYnAoIlxuIik7CiAgICBrZGJwKCIgIEV2dDogTUFYX0VW
VENITlM6JCVkIHB0cjolcCBwb2xsbXNrOiUwOGx4ICIsCiAgICAgICAgIE1BWF9FVlRDSE5TKGRw
KSwgZHAtPmV2dGNobiwgZHAtPnBvbGxfbWFza1swXSk7CiAgICBrZGJfcHJpbnRfc3Bpbl9sb2Nr
KCJsazoiLCAmZHAtPmV2ZW50X2xvY2ssICJcbiIpOwogICAga2RicCgiICAgICZldnRjaG5fcGVu
ZGluZzolcCAmZXZ0Y2huX21hc2s6JXBcbiIsIAogICAgICAgICBzaGFyZWRfaW5mbyhkcCwgZXZ0
Y2huX3BlbmRpbmcpLCBzaGFyZWRfaW5mbyhkcCwgZXZ0Y2huX21hc2spKTsKCiAgICBrZGJwKCIg
ICBDaGFubmVscyBpbmZvOiAoZXZlcnl0aGluZyBpcyBpbiBkZWNpbWFsKTpcbiIpOwogICAgZm9y
IChjaG49MDsgY2huIDwgTUFYX0VWVENITlMoZHApOyBjaG4rKyApIHsKICAgICAgICBzdHJ1Y3Qg
ZXZ0Y2huICpia3RwID0gZHAtPmV2dGNobltjaG4vRVZUQ0hOU19QRVJfQlVDS0VUXTsKICAgICAg
ICBzdHJ1Y3QgZXZ0Y2huICpjaG5wID0gJmJrdHBbY2huICYgKEVWVENITlNfUEVSX0JVQ0tFVC0x
KV07CiAgICAgICAgY2hhciBwYml0ID0gdGVzdF9iaXQoY2huLCAmc2hhcmVkX2luZm8oZHAsIGV2
dGNobl9wZW5kaW5nKSkgPyAnWScgOiAnTic7CiAgICAgICAgY2hhciBtYml0ID0gdGVzdF9iaXQo
Y2huLCAmc2hhcmVkX2luZm8oZHAsIGV2dGNobl9tYXNrKSkgPyAnWScgOiAnTic7CgogICAgICAg
IGlmIChia3RwPT1OVUxMIHx8IGNobnAtPnN0YXRlPT1FQ1NfRlJFRSkKICAgICAgICAgICAgY29u
dGludWU7CgogICAgICAgIGtkYnAoIiAgICBjaG46JTR1IHN0OiVkIF94ZW49JWQgX3ZjcHVfaWQ6
JTJkICIsIGNobiwgY2hucC0+c3RhdGUsCiAgICAgICAgICAgICBjaG5wLT54ZW5fY29uc3VtZXIs
IGNobnAtPm5vdGlmeV92Y3B1X2lkKTsKICAgICAgICBpZiAoY2hucC0+c3RhdGUgPT0gRUNTX1VO
Qk9VTkQpCiAgICAgICAgICAgIGtkYnAoIiByZW0tZG9taWQ6JWQiLCBjaG5wLT51LnVuYm91bmQu
cmVtb3RlX2RvbWlkKTsKICAgICAgICBlbHNlIGlmIChjaG5wLT5zdGF0ZSA9PSBFQ1NfSU5URVJE
T01BSU4pCiAgICAgICAgICAgIGtkYnAoIiByZW0tcG9ydDolZCByZW0tZG9tOiVkIiwgY2hucC0+
dS5pbnRlcmRvbWFpbi5yZW1vdGVfcG9ydCwKICAgICAgICAgICAgICAgICBjaG5wLT51LmludGVy
ZG9tYWluLnJlbW90ZV9kb20tPmRvbWFpbl9pZCk7CiAgICAgICAgZWxzZSBpZiAoY2hucC0+c3Rh
dGUgPT0gRUNTX1BJUlEpCiAgICAgICAgICAgIGtkYnAoIiBwaXJxOiVkIiwgY2hucC0+dS5waXJx
KTsKICAgICAgICBlbHNlIGlmIChjaG5wLT5zdGF0ZSA9PSBFQ1NfVklSUSkKICAgICAgICAgICAg
a2RicCgiIHZpcnE6JWQiLCBjaG5wLT51LnZpcnEpOwoKICAgICAgICBrZGJwKCIgIHBlbmQ6JWMg
bWFzazolY1xuIiwgcGJpdCwgbWJpdCk7CiAgICB9CiNpZiAwCiAgICBrZGJwKCJwaXJxIHRvIGV2
dGNobiBtYXBwaW5nIChwaXJxOmV2dGNobikgKGFsbCBkZWNpbWFsKTpcbiIpOwogICAgZm9yIChp
PTA7IGkgPCBkcC0+bnJfcGlycXM7IGkgKyspCiAgICAgICAgaWYgKGRwLT5waXJxX3RvX2V2dGNo
bltpXSkKICAgICAgICAgICAga2RicCgiKCVkOiVkKSAiLCBpLCBkcC0+cGlycV90b19ldnRjaG5b
aV0pOwogICAga2RicCgiXG4iKTsKI2VuZGlmCn0KCnN0YXRpYyB2b2lkIGtkYl9wcm50X2h2bV9k
b21faW5mbyhzdHJ1Y3QgZG9tYWluICpkcCkKewogICAgc3RydWN0IGh2bV9kb21haW4gKmh2cCA9
ICZkcC0+YXJjaC5odm1fZG9tYWluOwoKICAgIGtkYnAoIiAgICBIVk0gaW5mbzogSGFwIGlzJXMg
ZW5hYmxlZFxuIiwgCiAgICAgICAgIGRwLT5hcmNoLmh2bV9kb21haW4uaGFwX2VuYWJsZWQgPyAi
IiA6ICIgbm90Iik7CgogICAgaWYgKGJvb3RfY3B1X2RhdGEueDg2X3ZlbmRvciA9PSBYODZfVkVO
RE9SX0lOVEVMKSB7CiAgICAgICAgc3RydWN0IHZteF9kb21haW4gKnZkcCA9ICZkcC0+YXJjaC5o
dm1fZG9tYWluLnZteDsKICAgICAgICBrZGJwKCIgICAgRVBUOiBlcHRfbXQ6JXggZXB0X3dsOiV4
IGFzcjolMDEzbHhcbiIsIAogICAgICAgICAgICAgdmRwLT5lcHRfY29udHJvbC5lcHRfbXQsIHZk
cC0+ZXB0X2NvbnRyb2wuZXB0X3dsLCAKICAgICAgICAgICAgIHZkcC0+ZXB0X2NvbnRyb2wuYXNy
KTsKICAgIH0KICAgIGlmIChodnAgPT0gTlVMTCkKICAgICAgICByZXR1cm47CgogICAgaWYgKGh2
cC0+aXJxLmNhbGxiYWNrX3ZpYV90eXBlID09IEhWTUlSUV9jYWxsYmFja192ZWN0b3IpCiAgICAg
ICAga2RicCgiICAgIEhWTUlSUV9jYWxsYmFja192ZWN0b3I6ICV4XG4iLCBodnAtPmlycS5jYWxs
YmFja192aWEudmVjdG9yKTsKCiAgICBpZiAoIWlzX2h2bV9kb21haW4oZHApKQogICAgICAgIHJl
dHVybjsKCiAgICBrZGJwKCIgICAgSFZNIFBBUkFNUyAoYWxsIGluIGhleCk6XG4iKTsKICAgIGtk
YnAoIlx0aW9yZXEucGFnZTolbHggaW9yZXEudmE6JWx4XG4iLCBodnAtPmlvcmVxLnBhZ2UsIGh2
cC0+aW9yZXEudmEpOwogICAga2RicCgiXHRidWZfaW9yZXEucGFnZTolbHggaW9yZXEudmE6JWx4
XG4iLCBodnAtPmJ1Zl9pb3JlcS5wYWdlLCAKICAgICAgICAgaHZwLT5idWZfaW9yZXEudmEpOwog
ICAga2RicCgiXHRIVk1fUEFSQU1fQ0FMTEJBQ0tfSVJROiAleFxuIiwgaHZwLT5wYXJhbXNbSFZN
X1BBUkFNX0NBTExCQUNLX0lSUV0pOwogICAga2RicCgiXHRIVk1fUEFSQU1fU1RPUkVfUEZOOiAl
eFxuIiwgaHZwLT5wYXJhbXNbSFZNX1BBUkFNX1NUT1JFX1BGTl0pOwogICAga2RicCgiXHRIVk1f
UEFSQU1fU1RPUkVfRVZUQ0hOOiAleFxuIiwgaHZwLT5wYXJhbXNbSFZNX1BBUkFNX1NUT1JFX0VW
VENITl0pOwogICAga2RicCgiXHRIVk1fUEFSQU1fUEFFX0VOQUJMRUQ6ICV4XG4iLCBodnAtPnBh
cmFtc1tIVk1fUEFSQU1fUEFFX0VOQUJMRURdKTsKICAgIGtkYnAoIlx0SFZNX1BBUkFNX0lPUkVR
X1BGTjogJXhcbiIsIGh2cC0+cGFyYW1zW0hWTV9QQVJBTV9JT1JFUV9QRk5dKTsKICAgIGtkYnAo
Ilx0SFZNX1BBUkFNX0JVRklPUkVRX1BGTjogJXhcbiIsIGh2cC0+cGFyYW1zW0hWTV9QQVJBTV9C
VUZJT1JFUV9QRk5dKTsKICAgIGtkYnAoIlx0SFZNX1BBUkFNX1ZJUklESUFOOiAleFxuIiwgaHZw
LT5wYXJhbXNbSFZNX1BBUkFNX1ZJUklESUFOXSk7CiAgICBrZGJwKCJcdEhWTV9QQVJBTV9USU1F
Ul9NT0RFOiAleFxuIiwgaHZwLT5wYXJhbXNbSFZNX1BBUkFNX1RJTUVSX01PREVdKTsKICAgIGtk
YnAoIlx0SFZNX1BBUkFNX0hQRVRfRU5BQkxFRDogJXhcbiIsIGh2cC0+cGFyYW1zW0hWTV9QQVJB
TV9IUEVUX0VOQUJMRURdKTsKICAgIGtkYnAoIlx0SFZNX1BBUkFNX0lERU5UX1BUOiAleFxuIiwg
aHZwLT5wYXJhbXNbSFZNX1BBUkFNX0lERU5UX1BUXSk7CiAgICBrZGJwKCJcdEhWTV9QQVJBTV9E
TV9ET01BSU46ICV4XG4iLCBodnAtPnBhcmFtc1tIVk1fUEFSQU1fRE1fRE9NQUlOXSk7CiAgICBr
ZGJwKCJcdEhWTV9QQVJBTV9BQ1BJX1NfU1RBVEU6ICV4XG4iLCBodnAtPnBhcmFtc1tIVk1fUEFS
QU1fQUNQSV9TX1NUQVRFXSk7CiAgICBrZGJwKCJcdEhWTV9QQVJBTV9WTTg2X1RTUzogJXhcbiIs
IGh2cC0+cGFyYW1zW0hWTV9QQVJBTV9WTTg2X1RTU10pOwogICAga2RicCgiXHRIVk1fUEFSQU1f
VlBUX0FMSUdOOiAleFxuIiwgaHZwLT5wYXJhbXNbSFZNX1BBUkFNX1ZQVF9BTElHTl0pOwogICAg
a2RicCgiXHRIVk1fUEFSQU1fQ09OU09MRV9QRk46ICV4XG4iLCBodnAtPnBhcmFtc1tIVk1fUEFS
QU1fQ09OU09MRV9QRk5dKTsKICAgIGtkYnAoIlx0SFZNX1BBUkFNX0NPTlNPTEVfRVZUQ0hOOiAl
eFxuIiwgCiAgICAgICAgIGh2cC0+cGFyYW1zW0hWTV9QQVJBTV9DT05TT0xFX0VWVENITl0pOwog
ICAga2RicCgiXHRIVk1fUEFSQU1fQUNQSV9JT1BPUlRTX0xPQ0FUSU9OOiAleFxuIiwgCiAgICAg
ICAgIGh2cC0+cGFyYW1zW0hWTV9QQVJBTV9BQ1BJX0lPUE9SVFNfTE9DQVRJT05dKTsKICAgIGtk
YnAoIlx0SFZNX1BBUkFNX01FTU9SWV9FVkVOVF9TSU5HTEVfU1RFUDogJXhcbiIsIAogICAgICAg
ICBodnAtPnBhcmFtc1tIVk1fUEFSQU1fTUVNT1JZX0VWRU5UX1NJTkdMRV9TVEVQXSk7Cn0Kc3Rh
dGljIHZvaWQga2RiX3ByaW50X3Jhbmdlc2V0cyhzdHJ1Y3QgZG9tYWluICpkcCkKewogICAgaW50
IGxvY2tlZCA9IHNwaW5faXNfbG9ja2VkKCZkcC0+cmFuZ2VzZXRzX2xvY2spOwoKICAgIGlmIChs
b2NrZWQpCiAgICAgICAgc3Bpbl91bmxvY2soJmRwLT5yYW5nZXNldHNfbG9jayk7CiAgICByYW5n
ZXNldF9kb21haW5fcHJpbnRrKGRwKTsKICAgIGlmIChsb2NrZWQpCiAgICAgICAgc3Bpbl9sb2Nr
KCZkcC0+cmFuZ2VzZXRzX2xvY2spOwp9CgpzdGF0aWMgdm9pZCBrZGJfcHJfdnRzY19pbmZvKHN0
cnVjdCBhcmNoX2RvbWFpbiAqYXApCnsKICAgIGtkYnAoIiAgICBWVFNDIGluZm86IHRzY19tb2Rl
OiV4ICB2dHNjOiV4ICB2dHNjX2xhc3Q6JTAxNmx4XG4iLCAKICAgICAgICAgYXAtPnRzY19tb2Rl
LCBhcC0+dnRzYywgYXAtPnZ0c2NfbGFzdCk7CiAgICBrZGJwKCIgICAgICAgIHZ0c2Nfb2Zmc2V0
OiUwMTZseCB0c2Nfa2h6OiUwOGx4IGluY2FybmF0aW9uOiV4XG4iLCAKICAgICAgICAgYXAtPnZ0
c2Nfb2Zmc2V0LCBhcC0+dnRzY19vZmZzZXQsIGFwLT5pbmNhcm5hdGlvbik7CiAgICBrZGJwKCIg
ICAgICAgIHZ0c2Nfa2VybmNvdW50OiUwMTZseCBfdXNlcmNvdW50OiUwMTZseFxuIiwKICAgICAg
ICAgYXAtPnZ0c2Nfa2VybmNvdW50LCBhcC0+dnRzY191c2VyY291bnQpOwp9CgovKiBkaXNwbGF5
IG9uZSBkb21haW4gaW5mbyAqLwpzdGF0aWMgdm9pZAprZGJfZGlzcGxheV9kb20oc3RydWN0IGRv
bWFpbiAqZHApCnsKICAgIHN0cnVjdCB2Y3B1ICp2cDsKICAgIGludCBwcmludGVkID0gMDsKICAg
IHN0cnVjdCBncmFudF90YWJsZSAqZ3AgPSBkcC0+Z3JhbnRfdGFibGU7CiAgICBzdHJ1Y3QgYXJj
aF9kb21haW4gKmFwID0gJmRwLT5hcmNoOwoKICAgIGtkYnAoIlxuRE9NQUlOIDogICAgZG9taWQ6
MHglMDR4IHB0cjoweCVwXG4iLCBkcC0+ZG9tYWluX2lkLCBkcCk7CiAgICBpZiAoZHAtPmRvbWFp
bl9pZCA9PSBET01JRF9JRExFKSB7CiAgICAgICAga2RicCgiICAgIElETEUgZG9tYWluLlxuIik7
CiAgICAgICAgcmV0dXJuOwogICAgfQogICAgaWYgKGRwLT5pc19keWluZykgewogICAgICAgIGtk
YnAoIiAgICBkb21haW4gaXMgRFlJTkcuXG4iKTsKICAgICAgICByZXR1cm47CiAgICB9CiNpZiAw
CiAgICBrZGJfcHJpbnRfc3Bpbl9sb2NrKCIgIHBnYWxrOiIsICZkcC0+cGFnZV9hbGxvY19sb2Nr
LCAiXG4iKTsKICAgIGtkYnAoIiAgcGdsaXN0OiAgMHglcCAweCVwXG4iLCBkcC0+cGFnZV9saXN0
Lm5leHQsS0RCX1BHTExFKGRwLT5wYWdlX2xpc3QpKTsKICAgIGtkYnAoIiAgeHBnbGlzdDogMHgl
cCAweCVwXG4iLCBkcC0+eGVucGFnZV9saXN0Lm5leHQsIAogICAgICAgICBLREJfUEdMTEUoZHAt
PnhlbnBhZ2VfbGlzdCkpOwogICAga2RicCgiICBuZXh0OjB4JXAgaGFzaG5leHQ6MHglcFxuIiwg
CiAgICAgICAgIGRwLT5uZXh0X2luX2xpc3QsIGRwLT5uZXh0X2luX2hhc2hidWNrZXQpOwojZW5k
aWYKICAgIGtkYnAoIiAgUEFHRVM6IHRvdDoweCUwOHggbWF4OjB4JTA4eCB4ZW5oZWFwOjB4JTA4
eFxuIiwgCiAgICAgICAgIGRwLT50b3RfcGFnZXMsIGRwLT5tYXhfcGFnZXMsIGRwLT54ZW5oZWFw
X3BhZ2VzKTsKCiAgICBrZGJfcHJpbnRfcmFuZ2VzZXRzKGRwKTsKICAgIGtkYl9wcmludF9kb21f
ZXZlbnRpbmZvKGRwKTsKICAgIGtkYnAoIlxuIik7CiAgICBrZGJwKCIgIEdyYW50IHRhYmxlOiBn
cDoweCVwXG4iLCBncCk7CiAgICBpZiAoZ3ApIHsKICAgICAgICBrZGJwKCIgICAgbnJfZnJhbWVz
OjB4JTA4eCBzaHBwOjB4JXAgYWN0aXZlOjB4JXBcbiIsCiAgICAgICAgICAgICBncC0+bnJfZ3Jh
bnRfZnJhbWVzLCBncC0+c2hhcmVkX3JhdywgZ3AtPmFjdGl2ZSk7CiAgICAgICAga2RicCgiICAg
IG1hcHRyazoweCVwIG1hcGhkOjB4JTA4eCBtYXBsbXQ6MHglMDh4XG4iLCAKICAgICAgICAgICAg
IGdwLT5tYXB0cmFjaywgZ3AtPm1hcHRyYWNrX2hlYWQsIGdwLT5tYXB0cmFja19saW1pdCk7CiAg
ICAgICAga2RicCgiICAgIG1hcGNudDoiKTsKICAgICAgICBrZGJfcHJpbnRfc3Bpbl9sb2NrKCJt
YXBjbnQ6IGxrOiIsICZncC0+bG9jaywgIlxuIik7CiAgICB9CiAgICBrZGJwKCIgIGh2bTolZCBw
cml2OiVkIG5lZWRfaW9tbXU6JWQgZGJnOiVkIGR5aW5nOiVkIHBhdXNlZDolZFxuIiwKICAgICAg
ICAgZHAtPmlzX2h2bSwgZHAtPmlzX3ByaXZpbGVnZWQsIGRwLT5uZWVkX2lvbW11LAogICAgICAg
ICBkcC0+ZGVidWdnZXJfYXR0YWNoZWQsIGRwLT5pc19keWluZywgZHAtPmlzX3BhdXNlZF9ieV9j
b250cm9sbGVyKTsKICAgIGtkYl9wcmludF9zcGluX2xvY2soIiAgc2h1dGRvd246IGxrOiIsICZk
cC0+c2h1dGRvd25fbG9jaywgIlxuIik7CiAgICBrZGJwKCIgIHNodXRuOiVkIHNodXQ6JWQgY29k
ZTolZCBcbiIsIGRwLT5pc19zaHV0dGluZ19kb3duLAogICAgICAgICBkcC0+aXNfc2h1dF9kb3du
LCBkcC0+c2h1dGRvd25fY29kZSk7CiAgICBrZGJwKCIgIHBhdXNlY250OjB4JTA4eCB2bV9hc3Np
c3Q6MHgiS0RCRkwiIHJlZmNudDoweCUwOHhcbiIsCiAgICAgICAgIGRwLT5wYXVzZV9jb3VudC5j
b3VudGVyLCBkcC0+dm1fYXNzaXN0LCBkcC0+cmVmY250LmNvdW50ZXIpOwogICAga2RicCgiICAm
ZG9tYWluX2RpcnR5X2NwdW1hc2s6JXBcbiIsICZkcC0+ZG9tYWluX2RpcnR5X2NwdW1hc2spOyAK
CiAgICBrZGJwKCIgIHNoYXJlZCA9PSB2Y3B1X2luZm9bXTogJXBcbiIsICBkcC0+c2hhcmVkX2lu
Zm8pOyAKICAgIGtkYnAoIiAgICBhcmNoX3NoYXJlZDogbWF4cGZuOiAlbHggcGZuLW1mbi1mcmFt
ZS1sbCBtZm46ICVseFxuIiwgCiAgICAgICAgIGFyY2hfZ2V0X21heF9wZm4oZHApLCBhcmNoX2dl
dF9wZm5fdG9fbWZuX2ZyYW1lX2xpc3RfbGlzdChkcCkpOwogICAga2RicCgiXG4iKTsKICAgIGtk
YnAoIiAgYXJjaF9kb21haW4gYXQgOiAlcFxuIiwgYXApOwoKI2lmZGVmIENPTkZJR19YODZfNjQK
ICAgIGtkYnAoIiAgICBwdF9wYWdlczoweCVwICIsIGFwLT5tbV9wZXJkb21haW5fcHRfcGFnZXMp
OwogICAga2RicCgiICAgIGwyOjB4JXAgbDM6MHglcFxuIiwgYXAtPm1tX3BlcmRvbWFpbl9sMiwg
YXAtPm1tX3BlcmRvbWFpbl9sMyk7CiNlbHNlCiAgICBrZGJwKCIgICAgcHQ6MHglcCAiLCBhcC0+
bW1fcGVyZG9tYWluX3B0KTsKI2VuZGlmCiNpZmRlZiBDT05GSUdfWDg2XzMyCiAgICBrZGJwKCIg
ICAgJm1hcGNoYWNoZToweCV4cFxuIiwgJmFwLT5tYXBjYWNoZSk7CiNlbmRpZgogICAga2RicCgi
ICAgIGlvcG9ydDoweCVwICZodm1fZG9tOjB4JXBcbiIsIGFwLT5pb3BvcnRfY2FwcywgJmFwLT5o
dm1fZG9tYWluKTsKICAgIGlmIChpc19odm1fb3JfaHliX2RvbWFpbihkcCkpCiAgICAgICAga2Ri
X3BybnRfaHZtX2RvbV9pbmZvKGRwKTsKCiAgICBrZGJwKCIgICAgJnBnaW5nX2RvbTolcCBtb2Rl
OiAlbHgiLCAmYXAtPnBhZ2luZywgYXAtPnBhZ2luZy5tb2RlKTsgCiAgICBrZGJfcHJfZG9tX3Bn
X21vZGVzKGRwKTsKICAgIGtkYnAoIiAgICBwMm0gcHRyOiVwICBwYWdlczp7JXAsICVwfVxuIiwg
YXAtPnAybSwgYXAtPnAybS0+cGFnZXMubmV4dCwKICAgICAgICAgS0RCX1BHTExFKGFwLT5wMm0t
PnBhZ2VzKSk7CiAgICBrZGJwKCIgICAgICAgbWF4X21hcHBlZF9wZm46IktEQkZMLCBhcC0+cDJt
LT5tYXhfbWFwcGVkX3Bmbik7CiNpZiBYRU5fU1VCVkVSU0lPTiA+IDAgJiYgWEVOX1ZFUlNJT04g
PT0gNCAgICAgICAgICAgICAgLyogeGVuIDQuMSBhbmQgYWJvdmUgKi8KICAgIGtkYnAoIiAgcGh5
c190YWJsZTolcFxuIiwgYXAtPnAybS0+cGh5c190YWJsZS5wZm4pOwojZWxzZQogICAga2RicCgi
ICBwaHlzX3RhYmxlLnBmbjoiS0RCRkwiXG4iLCBhcC0+cGh5c190YWJsZS5wZm4pOwojZW5kaWYK
ICAgIGtkYnAoIiAgICBwaHlzYWRkcl9iaXRzejolZCAzMmJpdF9wdjolZCBoYXNfMzJiaXRfc2hp
bmZvOiVkXG4iLCAKICAgICAgICAgYXAtPnBoeXNhZGRyX2JpdHNpemUsIGFwLT5pc18zMmJpdF9w
diwgYXAtPmhhc18zMmJpdF9zaGluZm8pOwogICAga2RiX3ByX3Z0c2NfaW5mbyhhcCk7CiAgICBr
ZGJwKCIgIHNjaGVkOjB4JXAgICZoYW5kbGU6MHglcFxuIiwgZHAtPnNjaGVkX3ByaXYsICZkcC0+
aGFuZGxlKTsKICAgIGtkYnAoIiAgdmNwdSBwdHJzOlxuICAgIik7CiAgICBmb3JfZWFjaF92Y3B1
KGRwLCB2cCkgewogICAgICAgIGtkYnAoIiAlZDolcCIsIHZwLT52Y3B1X2lkLCB2cCk7CiAgICAg
ICAgaWYgKCsrcHJpbnRlZCAlIDQgPT0gMCkga2RicCgiXG4gICAiKTsKICAgIH0KICAgIGtkYnAo
IlxuIik7Cn0KCi8qIAogKiBGVU5DVElPTjogRGlzcGFseSAoY3VycmVudCkgZG9tYWluL3MKICov
CnN0YXRpYyBrZGJfY3B1X2NtZF90CmtkYl91c2dmX2RvbSh2b2lkKQp7CiAgICBrZGJwKCJkb20g
W2FsbHxkb21pZF06IERpc3BsYXkgY3VycmVudC9hbGwvZ2l2ZW4gZG9tYWluL3NcbiIpOwogICAg
cmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7Cn0Kc3RhdGljIGtkYl9jcHVfY21kX3QgCmtkYl9jbWRm
X2RvbShpbnQgYXJnYywgY29uc3QgY2hhciAqKmFyZ3YsIHN0cnVjdCBjcHVfdXNlcl9yZWdzICpy
ZWdzKQp7CiAgICBpbnQgaWQ7CiAgICBzdHJ1Y3QgZG9tYWluICpkcCA9IGN1cnJlbnQtPmRvbWFp
bjsKCiAgICBpZiAoYXJnYyA+IDEgJiYgKmFyZ3ZbMV0gPT0gJz8nKQogICAgICAgIHJldHVybiBr
ZGJfdXNnZl9kb20oKTsKCiAgICBpZiAoYXJnYyA+IDEpIHsKICAgICAgICBmb3IoZHA9ZG9tYWlu
X2xpc3Q7IGRwOyBkcD1kcC0+bmV4dF9pbl9saXN0KQogICAgICAgICAgICBpZiAoa2RiX3N0cjJk
ZWNpKGFyZ3ZbMV0sICZpZCkgJiYgZHAtPmRvbWFpbl9pZD09aWQpCiAgICAgICAgICAgICAgICBr
ZGJfZGlzcGxheV9kb20oZHApOwogICAgICAgICAgICBlbHNlIGlmICghc3RyY21wKGFyZ3ZbMV0s
ICJhbGwiKSkgCiAgICAgICAgICAgICAgICBrZGJfZGlzcGxheV9kb20oZHApOwogICAgfSBlbHNl
IHsKICAgICAgICBrZGJwKCJEaXNwbGF5aW5nIGN1cnJlbnQgZG9tYWluIDpcbiIpOwogICAgICAg
IGtkYl9kaXNwbGF5X2RvbShkcCk7CiAgICB9CiAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsK
fQoKLyogRHVtcCBpcnEgZGVzYyB0YWJsZSAqLwpzdGF0aWMga2RiX2NwdV9jbWRfdAprZGJfdXNn
Zl9kaXJxKHZvaWQpCnsKICAgIGtkYnAoImRpcnEgOiBkdW1wIGlycSBiaW5kaW5nc1xuIik7CiAg
ICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKfQpzdGF0aWMga2RiX2NwdV9jbWRfdAprZGJfY21k
Zl9kaXJxKGludCBhcmdjLCBjb25zdCBjaGFyICoqYXJndiwgc3RydWN0IGNwdV91c2VyX3JlZ3Mg
KnJlZ3MpCnsKICAgIHVuc2lnbmVkIGxvbmcgaXJxLCBzeiwgb2ZmcywgYWRkcjsKICAgIGNoYXIg
YnVmW0tTWU1fTkFNRV9MRU4rMV07CiAgICBjaGFyIGFmZnN0cltOUl9DUFVTLzQrTlJfQ1BVUy8z
MisyXTsgICAgLyogY291cnRlc3kgZHVtcF9pcnFzKCkgKi8KCiAgICBpZiAoYXJnYyA+IDEgJiYg
KmFyZ3ZbMV0gPT0gJz8nKQogICAgICAgIHJldHVybiBrZGJfdXNnZl9kaXJxKCk7CgojaWYgWEVO
X1ZFUlNJT04gPCA0ICYmIFhFTl9TVUJWRVJTSU9OIDwgNSAgICAgICAgICAgLyogeGVuIDMuNC54
IG9yIGJlbG93ICovCiAgICBrZGJwKCJpZHgvaXJxIy9zdGF0dXM6IGFsbCBhcmUgaW4gZGVjaW1h
bFxuIik7CiAgICBrZGJwKCJpZHggIGlycSMgIHN0YXR1cyAgIGFjdGlvbihoYW5kbGVyIG5hbWUg
ZGV2aWQpXG4iKTsKICAgIGZvciAoaXJxPTA7IGlycSA8IE5SX1ZFQ1RPUlM7IGlycSsrKSB7CiAg
ICAgICAgaXJxX2Rlc2NfdCAgKmRwID0gJmlycV9kZXNjW2lycV07CiAgICAgICAgaWYgKCFkcC0+
YWN0aW9uKQogICAgICAgICAgICBjb250aW51ZTsKICAgICAgICBhZGRyID0gKHVuc2lnbmVkIGxv
bmcpZHAtPmFjdGlvbi0+aGFuZGxlcjsKICAgICAgICBrZGJwKCJbJTNsZF06aXJxOiUzZCBzdDol
M2QgZjolcyBkZXZubTolcyBkZXZpZDoweCVwXG4iLAogICAgICAgICAgICAgaSwgdmVjdG9yX3Rv
X2lycShpcnEpLCBkcC0+c3RhdHVzLCAoZHAtPnN0YXR1cyAmIElSUV9HVUVTVCkgPyAKICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICJHVUVTVCBJUlEiIDogc3ltYm9sc19sb29rdXAoYWRkciwg
JnN6LCAmb2ZmcywgYnVmKSwKICAgICAgICAgICAgIGRwLT5hY3Rpb24tPm5hbWUsIGRwLT5hY3Rp
b24tPmRldl9pZCk7CiAgICB9CiNlbHNlCiAgICBrZGJwKCJpcnFfZGVzY1tdOiVwIG5yX2lycXM6
ICQlZCBucl9pcnFzX2dzaTogJCVkXG4iLCBpcnFfZGVzYywgbnJfaXJxcywgCiAgICAgICAgICBu
cl9pcnFzX2dzaSk7CiAgICBrZGJwKCJpcnEvdmVjIy9zdGF0dXM6IGluIGRlY2ltYWwuIGFmZmlu
aXR5IGluIGhleCwgbm90IGJpdG1hcFxuIik7CiAgICBrZGJwKCJpcnEtLSB2ZWMgc3RhIGZ1bmN0
aW9uLS0tLS0tLS0tLS0gbmFtZS0tLS0gdHlwZS0tLS0tLS0tLSAiKTsKICAgIGtkYnAoImFmZiBk
ZXZpZC0tLS0tLS0tLS0tLVxuIik7CiAgICBmb3IgKGlycT0wOyBpcnEgPCBucl9pcnFzOyBpcnEr
KykgewogICAgICAgIHZvaWQgKmRldmlkcDsKICAgICAgICBjb25zdCBjaGFyICpzeW1wLCAqbm1w
OwogICAgICAgIGlycV9kZXNjX3QgICpkcCA9IGlycV90b19kZXNjKGlycSk7CiAgICAgICAgc3Ry
dWN0IGFyY2hfaXJxX2Rlc2MgKmFyY2hwID0gJmRwLT5hcmNoOwoKICAgICAgICBpZiAoIWRwLT5o
YW5kbGVyIHx8IGRwLT5oYW5kbGVyPT0mbm9faXJxX3R5cGUgfHwgZHAtPnN0YXR1cyAmIElSUV9H
VUVTVCkKICAgICAgICAgICAgY29udGludWU7CgogICAgICAgIGFkZHIgPSBkcC0+YWN0aW9uID8g
KHVuc2lnbmVkIGxvbmcpZHAtPmFjdGlvbi0+aGFuZGxlciA6IDA7CiAgICAgICAgc3ltcCA9IGFk
ZHIgPyBzeW1ib2xzX2xvb2t1cChhZGRyLCAmc3osICZvZmZzLCBidWYpIDogIm4vYSAiOwogICAg
ICAgIG5tcCA9IGFkZHIgPyBkcC0+YWN0aW9uLT5uYW1lIDogIm4vYSAiOwogICAgICAgIGRldmlk
cCA9IGFkZHIgPyBkcC0+YWN0aW9uLT5kZXZfaWQgOiBOVUxMOwogICAgICAgIGNwdW1hc2tfc2Nu
cHJpbnRmKGFmZnN0ciwgc2l6ZW9mKGFmZnN0ciksIGRwLT5hZmZpbml0eSk7CiAgICAgICAga2Ri
cCgiWyUzbGRdICUwM2QgJTAzZCAlLTE5cyAlLThzICUtMTNzICUzcyAweCVwXG4iLCBpcnEsIGFy
Y2hwLT52ZWN0b3IsCiAgICAgICAgICAgICBkcC0+c3RhdHVzLCBzeW1wLCBubXAsIGRwLT5oYW5k
bGVyLT50eXBlbmFtZSwgYWZmc3RyLCBkZXZpZHApOwogICAgfQogICAga2RiX3BybnRfZ3Vlc3Rf
bWFwcGVkX2lycXMoKTsKI2VuZGlmCiAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKfQoKc3Rh
dGljIHZvaWQKa2RiX3BybnRfdmVjX2lycV90YWJsZShpbnQgY3B1KQp7CiAgICBpbnQgaSxqLCAq
dGJsID0gcGVyX2NwdSh2ZWN0b3JfaXJxLCBjcHUpOwoKICAgIGtkYnAoIkNQVSAlZCA6ICIsIGNw
dSk7CiAgICBmb3IgKGk9MCwgaj0wOyBpIDwgTlJfVkVDVE9SUzsgaSsrKQogICAgICAgIGlmICh0
YmxbaV0gIT0gLTEpIHsKICAgICAgICAgICAga2RicCgiKCUzZDolM2QpICIsIGksIHRibFtpXSk7
CiAgICAgICAgICAgIGlmICghKCsraiAlIDUpKQogICAgICAgICAgICAgICAga2RicCgiXG4gICAg
ICAgICIpOwogICAgICAgIH0KICAgIGtkYnAoIlxuIik7Cn0KCi8qIER1bXAgaXJxIGRlc2MgdGFi
bGUgKi8Kc3RhdGljIGtkYl9jcHVfY21kX3QKa2RiX3VzZ2ZfZHZpdCh2b2lkKQp7CiAgICBrZGJw
KCJkdml0IFtjcHV8YWxsXTogZHVtcCAocGVyIGNwdSl2ZWN0b3IgaXJxIHRhYmxlXG4iKTsKICAg
IHJldHVybiBLREJfQ1BVX01BSU5fS0RCOwp9CnN0YXRpYyBrZGJfY3B1X2NtZF90CmtkYl9jbWRm
X2R2aXQoaW50IGFyZ2MsIGNvbnN0IGNoYXIgKiphcmd2LCBzdHJ1Y3QgY3B1X3VzZXJfcmVncyAq
cmVncykKewogICAgaW50IGNwdSwgY2NwdSA9IHNtcF9wcm9jZXNzb3JfaWQoKTsKCiAgICBpZiAo
YXJnYyA+IDEgJiYgKmFyZ3ZbMV0gPT0gJz8nKQogICAgICAgIHJldHVybiBrZGJfdXNnZl9kdml0
KCk7CiAgICAKICAgIGlmIChhcmdjID4gMSkgewogICAgICAgIGlmICghc3RyY21wKGFyZ3ZbMV0s
ICJhbGwiKSkgCiAgICAgICAgICAgIGNwdSA9IC0xOwogICAgICAgIGVsc2UgaWYgKCFrZGJfc3Ry
MmRlY2koYXJndlsxXSwgJmNwdSkpIHsKICAgICAgICAgICAga2RicCgiSW52YWxpZCBjcHU6JWRc
biIsIGNwdSk7CiAgICAgICAgICAgIHJldHVybiBrZGJfdXNnZl9kdml0KCk7CiAgICAgICAgfQog
ICAgfSBlbHNlCiAgICAgICAgY3B1ID0gY2NwdTsKCiAgICBrZGJwKCJQZXIgQ1BVIHZlY3RvciBp
cnEgdGFibGUgcGFpcnMgKHZlY3RvcjppcnEpIChhbGwgZGVjaW1hbHMpOlxuIik7CiAgICBpZiAo
Y3B1ICE9IC0xKSAKICAgICAgICBrZGJfcHJudF92ZWNfaXJxX3RhYmxlKGNwdSk7CiAgICBlbHNl
CiAgICAgICAgZm9yX2VhY2hfb25saW5lX2NwdShjcHUpIAogICAgICAgICAgICBrZGJfcHJudF92
ZWNfaXJxX3RhYmxlKGNwdSk7CgogICAgcmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7Cn0KCi8qIGRv
IHZtZXhpdCBvbiBhbGwgY3B1J3Mgc28gaW50ZWwgVk1DUyBjYW4gYmUgZHVtcGVkICovCnN0YXRp
YyBrZGJfY3B1X2NtZF90IAprZGJfYWxsX2NwdV9mbHVzaF92bWNzKHZvaWQpCnsKICAgIGludCBj
cHUsIGNjcHUgPSBzbXBfcHJvY2Vzc29yX2lkKCk7CiAgICBmb3JfZWFjaF9vbmxpbmVfY3B1KGNw
dSkgewogICAgICAgIGlmIChjcHUgPT0gY2NwdSkgewogICAgICAgICAgICBrZGJfY3Vycl9jcHVf
Zmx1c2hfdm1jcygpOwogICAgICAgIH0gZWxzZSB7CiAgICAgICAgICAgIGlmIChrZGJfY3B1X2Nt
ZFtjcHVdICE9IEtEQl9DUFVfUEFVU0UpeyAgLyogaHVuZyBjcHUgKi8KICAgICAgICAgICAgICAg
IGtkYnAoIlNraXBwaW5nIChodW5nPykgY3B1ICVkXG4iLCBjcHUpOwogICAgICAgICAgICAgICAg
Y29udGludWU7CiAgICAgICAgICAgIH0KICAgICAgICAgICAga2RiX2NwdV9jbWRbY3B1XSA9IEtE
Ql9DUFVfRE9fVk1FWElUOwogICAgICAgICAgICB3aGlsZSAoa2RiX2NwdV9jbWRbY3B1XT09S0RC
X0NQVV9ET19WTUVYSVQpOwogICAgICAgIH0KICAgIH0KICAgIHJldHVybiBLREJfQ1BVX01BSU5f
S0RCOwp9CgovKiBEaXNwbGF5IFZNQ1Mgb3IgVk1DQiAqLwpzdGF0aWMga2RiX2NwdV9jbWRfdApr
ZGJfdXNnZl9kdm1jKHZvaWQpCnsKICAgIGtkYnAoImR2bWMgW2RvbWlkXVt2Y3B1aWRdIDogRHVt
cCB2bWNzL3ZtY2JcbiIpOwogICAgcmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7Cn0Kc3RhdGljIGtk
Yl9jcHVfY21kX3QKa2RiX2NtZGZfZHZtYyhpbnQgYXJnYywgY29uc3QgY2hhciAqKmFyZ3YsIHN0
cnVjdCBjcHVfdXNlcl9yZWdzICpyZWdzKQp7CiAgICBkb21pZF90IGRvbWlkID0gMDsgIC8qIHVu
c2lnbmVkIHR5cGUgZG9uJ3QgbGlrZSAtMSAqLwogICAgaW50IHZjcHVpZCA9IC0xOwoKICAgIGlm
IChhcmdjID4gMSAmJiAqYXJndlsxXSA9PSAnPycpCiAgICAgICAgcmV0dXJuIGtkYl91c2dmX2R2
bWMoKTsKCiAgICBpZiAoYXJnYyA+IDEpIHsgCiAgICAgICAgaWYgKCFrZGJfc3RyMmRvbWlkKGFy
Z3ZbMV0sICZkb21pZCwgMSkpCiAgICAgICAgICAgIHJldHVybiBLREJfQ1BVX01BSU5fS0RCOwog
ICAgfQogICAgaWYgKGFyZ2MgPiAyICYmICFrZGJfc3RyMmRlY2koYXJndlsyXSwgJnZjcHVpZCkp
IHsKICAgICAgICBrZGJwKCJCYWQgdmNwdWlkOiAweCV4XG4iLCB2Y3B1aWQpOwogICAgICAgIHJl
dHVybiBLREJfQ1BVX01BSU5fS0RCOwogICAgfQogICAgaWYgKGJvb3RfY3B1X2RhdGEueDg2X3Zl
bmRvciA9PSBYODZfVkVORE9SX0lOVEVMKSB7CiAgICAgICAga2RiX2FsbF9jcHVfZmx1c2hfdm1j
cygpOwogICAgICAgIGtkYl9kdW1wX3ZtY3MoZG9taWQsIChpbnQpdmNwdWlkKTsKICAgIH0gZWxz
ZSB7CiAgICAgICAga2RiX2R1bXBfdm1jYihkb21pZCwgKGludCl2Y3B1aWQpOwogICAgfQogICAg
cmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7Cn0KCnN0YXRpYyBrZGJfY3B1X2NtZF90CmtkYl91c2dm
X21taW8odm9pZCkKewogICAga2RicCgibW1pbzogZHVtcCBtbWlvIHJlbGF0ZWQgaW5mb1xuIik7
CiAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKfQpzdGF0aWMga2RiX2NwdV9jbWRfdAprZGJf
Y21kZl9tbWlvKGludCBhcmdjLCBjb25zdCBjaGFyICoqYXJndiwgc3RydWN0IGNwdV91c2VyX3Jl
Z3MgKnJlZ3MpCnsKICAgIGlmIChhcmdjID4gMSAmJiAqYXJndlsxXSA9PSAnPycpCiAgICAgICAg
cmV0dXJuIGtkYl91c2dmX21taW8oKTsKCiAgICBrZGJwKCJyL28gbW1pbyByYW5nZXM6XG4iKTsK
ICAgIHJhbmdlc2V0X3ByaW50ayhtbWlvX3JvX3Jhbmdlcyk7CiAgICBrZGJwKCJcbiIpOwogICAg
cmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7Cn0KCi8qIER1bXAgdGltZXIvdGltZXJzIHF1ZXVlcyAq
LwpzdGF0aWMga2RiX2NwdV9jbWRfdAprZGJfdXNnZl9kdHJxKHZvaWQpCnsKICAgIGtkYnAoImR0
cnE6IGR1bXAgdGltZXIgcXVldWVzIG9uIGFsbCBjcHVzXG4iKTsKICAgIHJldHVybiBLREJfQ1BV
X01BSU5fS0RCOwp9CnN0YXRpYyBrZGJfY3B1X2NtZF90CmtkYl9jbWRmX2R0cnEoaW50IGFyZ2Ms
IGNvbnN0IGNoYXIgKiphcmd2LCBzdHJ1Y3QgY3B1X3VzZXJfcmVncyAqcmVncykKewogICAgaWYg
KGFyZ2MgPiAxICYmICphcmd2WzFdID09ICc/JykKICAgICAgICByZXR1cm4ga2RiX3VzZ2ZfZHRy
cSgpOwoKICAgIGtkYl9kdW1wX3RpbWVyX3F1ZXVlcygpOwogICAgcmV0dXJuIEtEQl9DUFVfTUFJ
Tl9LREI7Cn0KCnN0cnVjdCBpZHRlIHsKICAgIHVpbnQxNl90IG9mZnMwXzE1OwogICAgdWludDE2
X3Qgc2VsZWN0b3I7CiAgICB1aW50MTZfdCBtZXRhOwogICAgdWludDE2X3Qgb2ZmczE2XzMxOwog
ICAgdWludDMyX3Qgb2ZmczMyXzYzOwogICAgdWludDMyX3QgcmVzdmQ7Cn07CgojaWZkZWYgX194
ODZfNjRfXwpzdGF0aWMgdm9pZAprZGJfcHJpbnRfaWR0ZShpbnQgbnVtLCBzdHJ1Y3QgaWR0ZSAq
aWR0cCkgCnsKICAgIHVpbnQxNl90IG10YSA9IGlkdHAtPm1ldGE7CiAgICBjaGFyIGRwbCA9ICgo
bXRhICYgMHg2MDAwKSA+PiAxMyk7CiAgICBjaGFyIHByZXNlbnQgPSAoKG10YSAmMHg4MDAwKSA+
PiAxNSk7CiAgICBpbnQgdHZhbCA9ICgobXRhICYweDMwMCkgPj4gOCk7CiAgICBjaGFyICp0eXBl
ID0gKHR2YWwgPT0gMSkgPyAiVGFzayIgOiAoKHR2YWw9PSAyKSA/ICJJbnRyIiA6ICJUcmFwIik7
CiAgICBkb21pZF90IGRvbWlkID0gaWR0cC0+c2VsZWN0b3I9PV9fSFlQRVJWSVNPUl9DUzY0ID8g
RE9NSURfSURMRSA6CiAgICAgICAgICAgICAgICAgICAgY3VycmVudC0+ZG9tYWluLT5kb21haW5f
aWQ7CiAgICB1aW50NjRfdCBhZGRyID0gaWR0cC0+b2ZmczBfMTUgfCAoKHVpbnQ2NF90KWlkdHAt
Pm9mZnMxNl8zMSA8PCAxNikgfCAKICAgICAgICAgICAgICAgICAgICAoKHVpbnQ2NF90KWlkdHAt
Pm9mZnMzMl82MyA8PCAzMik7CgogICAga2RicCgiWyUwM2RdOiAlcyAleCAgJXggJTA0eDolMDE2
bHggIiwgbnVtLCB0eXBlLCBkcGwsIHByZXNlbnQsCiAgICAgICAgIGlkdHAtPnNlbGVjdG9yLCBh
ZGRyKTsgCiAgICBrZGJfcHJudF9hZGRyMnN5bShkb21pZCwgYWRkciwgIlxuIik7Cn0KCi8qIER1
bXAgNjRiaXQgaWR0IHRhYmxlIGN1cnJlbnRseSBvbiB0aGlzIGNwdS4gSW50ZWwgVm9sIDMgc2Vj
dGlvbiA1LjE0LjEgKi8Kc3RhdGljIGtkYl9jcHVfY21kX3QKa2RiX3VzZ2ZfZGlkdCh2b2lkKQp7
CiAgICBrZGJwKCJkaWR0IDogZHVtcCBJRFQgdGFibGUgb24gdGhlIGN1cnJlbnQgY3B1XG4iKTsK
ICAgIHJldHVybiBLREJfQ1BVX01BSU5fS0RCOwp9CnN0YXRpYyBrZGJfY3B1X2NtZF90CmtkYl9j
bWRmX2RpZHQoaW50IGFyZ2MsIGNvbnN0IGNoYXIgKiphcmd2LCBzdHJ1Y3QgY3B1X3VzZXJfcmVn
cyAqcmVncykKewogICAgaW50IGk7CiAgICBzdHJ1Y3QgaWR0ZSAqaWR0cCA9IChzdHJ1Y3QgaWR0
ZSAqKWlkdF90YWJsZXNbc21wX3Byb2Nlc3Nvcl9pZCgpXTsKCiAgICBpZiAoYXJnYyA+IDEgJiYg
KmFyZ3ZbMV0gPT0gJz8nKQogICAgICAgIHJldHVybiBrZGJfdXNnZl9kaWR0KCk7CgogICAga2Ri
cCgiSURUIGF0OiVwXG4iLCBpZHRwKTsKICAgIGtkYnAoImlkdCMgIFR5cGUgRFBMIFAgYWRkciAo
YWxsIGhleCBleGNlcHQgaWR0IylcbiIsIGlkdHApOwogICAgZm9yIChpPTA7IGkgPCAyNTY7IGkr
KywgaWR0cCsrKSAKICAgICAgICBrZGJfcHJpbnRfaWR0ZShpLCBpZHRwKTsKICAgIHJldHVybiBL
REJfQ1BVX01BSU5fS0RCOwp9CiNlbHNlCnN0YXRpYyBrZGJfY3B1X2NtZF90CmtkYl9jbWRmX2Rp
ZHQoaW50IGFyZ2MsIGNvbnN0IGNoYXIgKiphcmd2LCBzdHJ1Y3QgY3B1X3VzZXJfcmVncyAqcmVn
cykKewogICAga2RicCgia2RiOiBQbGVhc2UgaW1wbGVtZW50IG1lIGluIDMyYml0IGh5cGVydmlz
b3JcbiIpOwogICAgcmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7Cn0KI2VuZGlmCgpzdHJ1Y3QgZ2R0
ZSB7ICAgICAgICAgICAgIC8qIHNhbWUgZm9yIFRTUyBhbmQgTERUICovCiAgICB1bG9uZyBsaW1p
dDA6MTY7CiAgICB1bG9uZyBiYXNlMDoyNDsgICAgICAgLyogbGluZWFyIGFkZHJlc3MgYmFzZSwg
bm90IHBhICovCiAgICB1bG9uZyBhY2N0eXBlOjQ7ICAgICAgLyogVHlwZTogYWNjZXNzIHJpZ2h0
cyAqLwogICAgdWxvbmcgUzoxOyAgICAgICAgICAgIC8qIFM6IDAgPSBzeXN0ZW0sIDEgPSBjb2Rl
L2RhdGEgKi8KICAgIHVsb25nIERQTDoyOyAgICAgICAgICAvKiBEUEwgKi8KICAgIHVsb25nIFA6
MTsgICAgICAgICAgICAvKiBQOiBTZWdtZW50IFByZXNlbnQgKi8KICAgIHVsb25nIGxpbWl0MTo0
OwogICAgdWxvbmcgQVZMOjE7ICAgICAgICAgIC8qIEFWTDogYXZhaWwgZm9yIHVzZSBieSBzeXN0
ZW0gc29mdHdhcmUgKi8KICAgIHVsb25nIEw6MTsgICAgICAgICAgICAvKiBMOiA2NGJpdCBjb2Rl
IHNlZ21lbnQgKi8KICAgIHVsb25nIERCOjE7ICAgICAgICAgICAvKiBEL0IgKi8KICAgIHVsb25n
IEc6MTsgICAgICAgICAgICAvKiBHOiBncmFudWxhcml0eSAqLwogICAgdWxvbmcgYmFzZTE6ODsg
ICAgICAgIC8qIGxpbmVhciBhZGRyZXNzIGJhc2UsIG5vdCBwYSAqLwp9OwoKdW5pb24gZ2R0ZV91
IHsKICAgIHN0cnVjdCBnZHRlIGdkdGU7CiAgICB1NjQgZ3ZhbDsKfTsKCnN0cnVjdCBjYWxsX2dk
dGUgewogICAgdW5zaWduZWQgc2hvcnQgb2ZmczA6MTY7CiAgICB1bnNpZ25lZCBzaG9ydCBzZWw6
MTY7CiAgICB1bnNpZ25lZCBzaG9ydCBtaXNjMDoxNjsKICAgIHVuc2lnbmVkIHNob3J0IG9mZnMx
OjE2Owp9OwoKc3RydWN0IGlkdF9nZHRlIHsKICAgIHVuc2lnbmVkIGxvbmcgb2ZmczA6MTY7CiAg
ICB1bnNpZ25lZCBsb25nIHNlbDoxNjsKICAgIHVuc2lnbmVkIGxvbmcgaXN0OjM7CiAgICB1bnNp
Z25lZCBsb25nIHVudXNlZDA6MTM7CiAgICB1bnNpZ25lZCBsb25nIG9mZnMxOjE2Owp9Owp1bmlv
biBzZ2R0ZV91IHsKICAgIHN0cnVjdCBjYWxsX2dkdGUgY2dkdGU7CiAgICBzdHJ1Y3QgaWR0X2dk
dGUgaWdkdGU7CiAgICB1NjQgc2d2YWw7Cn07CgovKiByZXR1cm4gYmluYXJ5IGZvcm0gb2YgYSBo
ZXggaW4gc3RyaW5nIDogbWF4IDQgY2hhcnMgMDAwMCB0byAxMTExICovCnN0YXRpYyBjaGFyICpr
ZGJfcmV0X2FjY3R5cGUodWludCBhY2N0eXBlKQp7CiAgICBzdGF0aWMgY2hhciBidWZbMTZdOwog
ICAgY2hhciAqcCA9IGJ1ZjsKICAgIGludCBpOwoKICAgIGlmIChhY2N0eXBlID4gMHhmKSB7CiAg
ICAgICAgYnVmWzBdID0gYnVmWzFdID0gYnVmWzJdID0gYnVmWzNdID0gJz8nOwogICAgICAgIGJ1
Zls1XSA9ICdcbic7CiAgICAgICAgcmV0dXJuIGJ1ZjsKICAgIH0KICAgIGZvciAoaT0wOyBpIDwg
NDsgaSsrLCBwKyssIGFjY3R5cGU9YWNjdHlwZT4+MSkKICAgICAgICAqcCA9IChhY2N0eXBlICYg
MHgxKSA/ICcxJyA6ICcwJzsKCiAgICByZXR1cm4gYnVmOwp9CgovKiBEaXNwbGF5IEdEVCB0YWJs
ZS4gSUEtMzJlIG1vZGUgaXMgYXNzdW1kZWQuICovCi8qIGZpcnN0IGRpc3BsYXkgbm9uIHN5c3Rl
bSBkZXNjcmlwdG9ycyB0aGVuIGRpc3BsYXkgc3lzdGVtIGRlc2NyaXB0b3JzICovCnN0YXRpYyBr
ZGJfY3B1X2NtZF90CmtkYl91c2dmX2RnZHQodm9pZCkKewogICAga2RicCgiZGdkdCBbZ2R0LXB0
ciBkZWNpbWFsLWJ5dGUtc2l6ZV0gZHVtcCBHRFQgdGFibGUgb24gY3VycmVudCBjcHUgb3IgZm9y
IgogICAgICAgICAiZ2l2ZW4gdmNwdVxuIik7CiAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsK
fQpzdGF0aWMga2RiX2NwdV9jbWRfdAprZGJfY21kZl9kZ2R0KGludCBhcmdjLCBjb25zdCBjaGFy
ICoqYXJndiwgc3RydWN0IGNwdV91c2VyX3JlZ3MgKnJlZ3MpCnsKICAgIHN0cnVjdCBYZ3RfZGVz
Y19zdHJ1Y3QgZGVzYzsKICAgIHVuaW9uIGdkdGVfdSB1MTsKICAgIHVsb25nIHN0YXJ0X2FkZHIs
IGVuZF9hZGRyLCB0YWRkcj0wOwogICAgZG9taWRfdCBkb21pZCA9IERPTUlEX0lETEU7CiAgICBp
bnQgaWR4OwoKICAgIGlmIChhcmdjID4gMSAmJiAqYXJndlsxXSA9PSAnPycpCiAgICAgICAgcmV0
dXJuIGtkYl91c2dmX2RnZHQoKTsKCiAgICBpZiAoYXJnYyA+IDEpIHsKICAgICAgICBpZiAoYXJn
YyAhPSAzKQogICAgICAgICAgICByZXR1cm4ga2RiX3VzZ2ZfZGdkdCgpOwoKICAgICAgICBpZiAo
a2RiX3N0cjJ1bG9uZyhhcmd2WzFdLCAodWxvbmcgKikmc3RhcnRfYWRkcikgJiYgCiAgICAgICAg
ICAgIGtkYl9zdHIyZGVjaShhcmd2WzJdLCAoaW50ICopJnRhZGRyKSkgewogICAgICAgICAgICBl
bmRfYWRkciA9IHN0YXJ0X2FkZHIgKyB0YWRkcjsKICAgICAgICB9IGVsc2UgewogICAgICAgICAg
ICBrZGJwKCJkZ2R0OiBCYWQgYXJnOiVzIG9yICVzXG4iLCBhcmd2WzFdLCBhcmd2WzJdKTsKICAg
ICAgICAgICAgcmV0dXJuIGtkYl91c2dmX2RnZHQoKTsKICAgICAgICB9CiAgICB9IGVsc2Ugewog
ICAgICAgIF9fYXNtX18gX192b2xhdGlsZV9fICgic2dkdCAgKCUwKSBcbiIgOjogImEiKCZkZXNj
KSA6ICJtZW1vcnkiKTsKICAgICAgICBzdGFydF9hZGRyID0gKHVsb25nKWRlc2MuYWRkcmVzczsg
CiAgICAgICAgZW5kX2FkZHIgPSAodWxvbmcpZGVzYy5hZGRyZXNzICsgZGVzYy5zaXplOwogICAg
fQogICAga2RicCgiR0RUOiBXaWxsIHNraXAgbnVsbCBkZXNjIGF0IDAsIHN0YXJ0OiVseCBlbmQ6
JWx4XG4iLCBzdGFydF9hZGRyLCAKICAgICAgICAgZW5kX2FkZHIpOwogICAga2RicCgiW2lkeF0g
ICBzZWwgLS0tIHZhbCAtLS0tLS0tLSAgQWNjcyBEUEwgUCBBVkwgTCBEQiBHICIKICAgICAgICAg
Ii0tQmFzZSBBZGRyIC0tLS0gIExpbWl0XG4iKTsKICAgIGtkYnAoIiAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgIFR5cGVcbiIpOwoKICAgIC8qIHNraXAgZmlyc3QgOCBudWxsIGJ5dGVzICov
CiAgICAvKiB0aGUgY3B1IG11bHRpcGxpZXMgdGhlIGluZGV4IGJ5IDggYW5kIGFkZHMgdG8gR0RU
LmJhc2UgKi8KICAgIGZvciAodGFkZHIgPSBzdGFydF9hZGRyKzg7IHRhZGRyIDwgZW5kX2FkZHI7
ICB0YWRkciArPSBzaXplb2YodWxvbmcpKSB7CgogICAgICAgIC8qIG5vdCBhbGwgZW50cmllcyBh
cmUgbWFwcGVkLiBkbyB0aGlzIHRvIGF2b2lkIEdQIGV2ZW4gaWYgaHlwICovCiAgICAgICAgaWYg
KCFrZGJfcmVhZF9tZW0odGFkZHIsIChrZGJieXRfdCAqKSZ1MSwgc2l6ZW9mKHUxKSxkb21pZCkg
fHwgIXUxLmd2YWwpCiAgICAgICAgICAgIGNvbnRpbnVlOwoKICAgICAgICBpZiAodTEuZ3ZhbCA9
PSAweGZmZmZmZmZmZmZmZmZmZmYgfHwgdTEuZ3ZhbCA9PSAweDU1NTU1NTU1NTU1NTU1NTUpCiAg
ICAgICAgICAgIGNvbnRpbnVlOyAgICAgICAgICAgICAgIC8qIHdoYXQgYW4gZWZmaW4geDg2IG1l
c3MgKi8KCiAgICAgICAgaWR4ID0gKHRhZGRyIC0gc3RhcnRfYWRkcikgLyA4OwogICAgICAgIGlm
ICh1MS5nZHRlLlMgPT0gMCkgeyAgICAgICAvKiBTeXN0ZW0gRGVzYyBhcmUgMTYgYnl0ZXMgaW4g
NjRiaXQgbW9kZSAqLwogICAgICAgICAgICB0YWRkciArPSBzaXplb2YodWxvbmcpOwogICAgICAg
ICAgICBjb250aW51ZTsKICAgICAgICB9CiAgICAgICAga2RicCgiWyUwNHhdICUwNHggJTAxNmx4
ICAlNHMgICV4ICAlZCAgJWQgICVkICAlZCAlZCAlMDE2bHggICUwNXhcbiIsCiAgICAgICAgICAg
ICBpZHgsIChpZHg8PDMpLCB1MS5ndmFsLCBrZGJfcmV0X2FjY3R5cGUodTEuZ2R0ZS5hY2N0eXBl
KSwgCiAgICAgICAgICAgICB1MS5nZHRlLkRQTCwgCiAgICAgICAgICAgICB1MS5nZHRlLlAsIHUx
LmdkdGUuQVZMLCB1MS5nZHRlLkwsIHUxLmdkdGUuREIsIHUxLmdkdGUuRywgIAogICAgICAgICAg
ICAgKHU2NCkoKHU2NCl1MS5nZHRlLmJhc2UwIHwgKHU2NCkoKHU2NCl1MS5nZHRlLmJhc2UxPDwy
NCkpLCAKICAgICAgICAgICAgIHUxLmdkdGUubGltaXQwIHwgKHUxLmdkdGUubGltaXQxPDwxNikp
OwogICAgfQoKICAgIGtkYnAoIlxuU3lzdGVtIGRlc2NyaXB0b3JzIChTPTApIDogKHNraXBwaW5n
IDB0aCBlbnRyeSlcbiIpOwogICAgZm9yICh0YWRkcj1zdGFydF9hZGRyKzg7ICB0YWRkciA8IGVu
ZF9hZGRyOyAgdGFkZHIgKz0gc2l6ZW9mKHVsb25nKSkgewogICAgICAgIHVpbnQgYWNjdHlwZTsK
ICAgICAgICB1NjQgdXBwZXIsIGFkZHI2ND0wOwoKICAgICAgICAvKiBub3QgYWxsIGVudHJpZXMg
YXJlIG1hcHBlZC4gZG8gdGhpcyB0byBhdm9pZCBHUCBldmVuIGlmIGh5cCAqLwogICAgICAgIGlm
IChrZGJfcmVhZF9tZW0odGFkZHIsIChrZGJieXRfdCAqKSZ1MSwgc2l6ZW9mKHUxKSwgZG9taWQp
PT0wIHx8IAogICAgICAgICAgICB1MS5ndmFsID09IDAgfHwgdTEuZ2R0ZS5TID09IDEpIHsKICAg
ICAgICAgICAgY29udGludWU7CiAgICAgICAgfQogICAgICAgIGlkeCA9ICh0YWRkciAtIHN0YXJ0
X2FkZHIpIC8gODsKICAgICAgICB0YWRkciArPSBzaXplb2YodWxvbmcpOwogICAgICAgIGlmIChr
ZGJfcmVhZF9tZW0odGFkZHIsIChrZGJieXRfdCAqKSZ1cHBlciwgOCwgZG9taWQpID09IDApIHsK
ICAgICAgICAgICAga2RicCgiQ291bGQgbm90IHJlYWQgdXBwZXIgOCBieXRlcyBvZiBzeXN0ZW0g
ZGVzY1xuIik7CiAgICAgICAgICAgIHVwcGVyID0gMDsKICAgICAgICB9CiAgICAgICAgYWNjdHlw
ZSA9IHUxLmdkdGUuYWNjdHlwZTsKICAgICAgICBpZiAoYWNjdHlwZSAhPSAyICYmIGFjY3R5cGUg
IT0gOSAmJiBhY2N0eXBlICE9IDExICYmIGFjY3R5cGUgIT0xMiAmJgogICAgICAgICAgICBhY2N0
eXBlICE9IDE0ICYmIGFjY3R5cGUgIT0gMTUpCiAgICAgICAgICAgIGNvbnRpbnVlOwoKICAgICAg
ICBrZGJwKCJbJTA0eF0gJTA0eCB2YWw6JTAxNmx4IERQTDoleCBQOiVkIHR5cGU6JXggIiwKICAg
ICAgICAgICAgIGlkeCwgKGlkeDw8MyksIHUxLmd2YWwsIHUxLmdkdGUuRFBMLCB1MS5nZHRlLlAs
IGFjY3R5cGUpOyAKCiAgICAgICAgdXBwZXIgPSAodTY0KSgodTY0KSh1cHBlciAmIDB4RkZGRkZG
RkYpIDw8IDMyKTsKCiAgICAgICAgLyogVm9sIDNBOiB0YWJsZTogMy0yICBwYWdlOiAzLTE5ICov
CiAgICAgICAgaWYgKGFjY3R5cGUgPT0gMikgewogICAgICAgICAgICBrZGJwKCJMRFQgZ2F0ZSAo
MDAxMClcbiIpOwogICAgICAgIH0KICAgICAgICBlbHNlIGlmIChhY2N0eXBlID09IDkpIHsKICAg
ICAgICAgICAga2RicCgiVFNTIGF2YWlsIGdhdGUoMTAwMSlcbiIpOwogICAgICAgIH0KICAgICAg
ICBlbHNlIGlmIChhY2N0eXBlID09IDExKSB7CiAgICAgICAgICAgIGtkYnAoIlRTUyBidXN5IGdh
dGUoMTAxMSlcbiIpOwogICAgICAgIH0KICAgICAgICBlbHNlIGlmIChhY2N0eXBlID09IDEyKSB7
CiAgICAgICAgICAgIGtkYnAoIkNBTEwgZ2F0ZSAoMTEwMClcbiIpOwogICAgICAgIH0KICAgICAg
ICBlbHNlIGlmIChhY2N0eXBlID09IDE0KSB7CiAgICAgICAgICAgIGtkYnAoIklEVCBnYXRlICgx
MTEwKVxuIik7CiAgICAgICAgfQogICAgICAgIGVsc2UgaWYgKGFjY3R5cGUgPT0gMTUpIHsKICAg
ICAgICAgICAga2RicCgiVHJhcCBnYXRlICgxMTExKVxuIik7IAogICAgICAgIH0KCiAgICAgICAg
aWYgKGFjY3R5cGUgPT0gMiB8fCBhY2N0eXBlID09IDkgfHwgYWNjdHlwZSA9PSAxMSkgewogICAg
ICAgICAgICBrZGJwKCIgICAgICAgIEFWTDolZCBHOiVkIEJhc2UgQWRkcjolMDE2bHggTGltaXQ6
JXhcbiIsCiAgICAgICAgICAgICAgICAgdTEuZ2R0ZS5BVkwsIHUxLmdkdGUuRywgIAogICAgICAg
ICAgICAgICAgICh1NjQpKCh1NjQpdTEuZ2R0ZS5iYXNlMCB8ICgodTY0KXUxLmdkdGUuYmFzZTE8
PDI0KXwgdXBwZXIpLAogICAgICAgICAgICAgICAgICh1MzIpdTEuZ2R0ZS5saW1pdDAgfCAodTMy
KSgodTMyKXUxLmdkdGUubGltaXQxPDwxNikpOwoKICAgICAgICB9IGVsc2UgaWYgKGFjY3R5cGUg
PT0gMTIpIHsKICAgICAgICAgICAgdW5pb24gc2dkdGVfdSB1MjsKICAgICAgICAgICAgdTIuc2d2
YWwgPSB1MS5ndmFsOwoKICAgICAgICAgICAgYWRkcjY0ID0gKHU2NCkoKHU2NCl1Mi5jZ2R0ZS5v
ZmZzMCB8IAogICAgICAgICAgICAgICAgICAgICAgICAgICAodTY0KSgodTY0KXUyLmNnZHRlLm9m
ZnMxPDwxNikgfCB1cHBlcik7CiAgICAgICAgICAgIGtkYnAoIiAgICAgICAgRW50cnk6ICUwNHg6
JTAxNmx4XG4iLCB1Mi5jZ2R0ZS5zZWwsIGFkZHI2NCk7CiAgICAgICAgfSBlbHNlIGlmIChhY2N0
eXBlID09IDE0IHx8IGFjY3R5cGUgPT0gMTUpIHsKICAgICAgICAgICAgdW5pb24gc2dkdGVfdSB1
MjsKICAgICAgICAgICAgdTIuc2d2YWwgPSB1MS5ndmFsOwoKICAgICAgICAgICAgYWRkcjY0ID0g
KHU2NCkoKHU2NCl1Mi5pZ2R0ZS5vZmZzMCB8IAogICAgICAgICAgICAgICAgICAgICAgICAgICAo
dTY0KSgodTY0KXUyLmlnZHRlLm9mZnMxPDwxNikgfCB1cHBlcik7CiAgICAgICAgICAgIGtkYnAo
IiAgICAgICAgRW50cnk6ICUwNHg6JTAxNmx4IGlzdDolMDN4XG4iLCB1Mi5pZ2R0ZS5zZWwsIGFk
ZHI2NCwKICAgICAgICAgICAgICAgICB1Mi5pZ2R0ZS5pc3QpOwogICAgICAgIH0gZWxzZSAKICAg
ICAgICAgICAga2RicCgiIEVycm9yOiBVbnJlY29uZ2l6ZWQgdHlwZTolbHhcbiIsIGFjY3R5cGUp
OwogICAgfQogICAgcmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7Cn0KCi8qIERpc3BsYXkgc2NoZWR1
bGVyIGJhc2ljIGFuZCBleHRlbmRlZCBpbmZvICovCnN0YXRpYyBrZGJfY3B1X2NtZF90CmtkYl91
c2dmX3NjaGVkKHZvaWQpCnsKICAgIGtkYnAoInNjaGVkOiBzaG93IHNjaGVkdWxhciBpbmZvIGFu
ZCBydW4gcXVldWVzXG4iKTsKICAgIHJldHVybiBLREJfQ1BVX01BSU5fS0RCOwp9CnN0YXRpYyBr
ZGJfY3B1X2NtZF90CmtkYl9jbWRmX3NjaGVkKGludCBhcmdjLCBjb25zdCBjaGFyICoqYXJndiwg
c3RydWN0IGNwdV91c2VyX3JlZ3MgKnJlZ3MpCnsKICAgIGlmIChhcmdjID4gMSAmJiAqYXJndlsx
XSA9PSAnPycpCiAgICAgICAgcmV0dXJuIGtkYl91c2dmX3NjaGVkKCk7CgogICAga2RiX3ByaW50
X3NjaGVkX2luZm8oKTsKICAgIHJldHVybiBLREJfQ1BVX01BSU5fS0RCOwp9CgovKiBEaXNwbGF5
IE1NVSBiYXNpYyBhbmQgZXh0ZW5kZWQgaW5mbyAqLwpzdGF0aWMga2RiX2NwdV9jbWRfdAprZGJf
dXNnZl9tbXUodm9pZCkKewogICAga2RicCgibW11OiBwcmludCBiYXNpYyBNTVUgaW5mb1xuIik7
CiAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKfQpzdGF0aWMga2RiX2NwdV9jbWRfdAprZGJf
Y21kZl9tbXUoaW50IGFyZ2MsIGNvbnN0IGNoYXIgKiphcmd2LCBzdHJ1Y3QgY3B1X3VzZXJfcmVn
cyAqcmVncykKewogICAgaW50IGNwdTsKCiAgICBpZiAoYXJnYyA+IDEgJiYgKmFyZ3ZbMV0gPT0g
Jz8nKQogICAgICAgIHJldHVybiBrZGJfdXNnZl9tbXUoKTsKCiAgICBrZGJwKCJNTVUgSW5mbzpc
biIpOwogICAga2RicCgidG90YWwgIHBhZ2VzOiAlbHhcbiIsIHRvdGFsX3BhZ2VzKTsKICAgIGtk
YnAoIm1heCBwYWdlL21mbjogJWx4XG4iLCBtYXhfcGFnZSk7CiAgICBrZGJwKCJmcmFtZV90YWJs
ZTogICVwXG4iLCBmcmFtZV90YWJsZSk7CiAgICBrZGJwKCJESVJFQ1RNQVBfVklSVF9TVEFSVDog
ICVseFxuIiwgRElSRUNUTUFQX1ZJUlRfU1RBUlQpOwogICAga2RicCgiSFlQRVJWSVNPUl9WSVJU
X1NUQVJUOiAlbHhcbiIsIEhZUEVSVklTT1JfVklSVF9TVEFSVCk7CiAgICBrZGJwKCJIWVBFUlZJ
U09SX1ZJUlRfRU5EOiAgICVseFxuIiwgSFlQRVJWSVNPUl9WSVJUX0VORCk7CiAgICBrZGJwKCJS
T19NUFRfVklSVF9TVEFSVDogICAgICVseFxuIiwgUk9fTVBUX1ZJUlRfU1RBUlQpOwogICAga2Ri
cCgiUEVSRE9NQUlOX1ZJUlRfU1RBUlQ6ICAlbHhcbiIsIFBFUkRPTUFJTl9WSVJUX1NUQVJUKTsK
ICAgIGtkYnAoIkNPTkZJR19QQUdJTkdfTEVWRUxTOiVkXG4iLCBDT05GSUdfUEFHSU5HX0xFVkVM
Uyk7CiAgICBrZGJwKCJfX0hZUEVSVklTT1JfQ09NUEFUX1ZJUlRfU1RBUlQ6ICVseFxuIiwgCiAg
ICAgICAgICh1bG9uZylfX0hZUEVSVklTT1JfQ09NUEFUX1ZJUlRfU1RBUlQpOwogICAga2RicCgi
Jk1QVFswXSA9PSAlMDE2bHhcbiIsICZtYWNoaW5lX3RvX3BoeXNfbWFwcGluZ1swXSk7CgogICAg
a2RicCgiXG5GSVJTVF9SRVNFUlZFRF9HRFRfUEFHRTogJXhcbiIsIEZJUlNUX1JFU0VSVkVEX0dE
VF9QQUdFKTsKICAgIGtkYnAoIkZJUlNUX1JFU0VSVkVEX0dEVF9FTlRSWTogJWx4XG4iLCAodWxv
bmcpRklSU1RfUkVTRVJWRURfR0RUX0VOVFJZKTsKICAgIGtkYnAoIkxBU1RfUkVTRVJWRURfR0RU
X0VOVFJZOiAlbHhcbiIsICh1bG9uZylMQVNUX1JFU0VSVkVEX0dEVF9FTlRSWSk7CiAgICBrZGJw
KCIgIFBlciBjcHUgbm9uLWNvbXBhdCBnZHRfdGFibGU6XG4iKTsKICAgIGZvcl9lYWNoX29ubGlu
ZV9jcHUoY3B1KSB7CiAgICAgICAga2RicCgiXHRjcHU6JWQgIGdkdF90YWJsZTolcFxuIiwgY3B1
LCBwZXJfY3B1KGdkdF90YWJsZSwgY3B1KSk7CiAgICB9CiAgICBrZGJwKCIgIFBlciBjcHUgY29t
cGF0IGdkdF90YWJsZTpcbiIpOwogICAgZm9yX2VhY2hfb25saW5lX2NwdShjcHUpIHsKICAgICAg
ICBrZGJwKCJcdGNwdTolZCAgZ2R0X3RhYmxlOiVwXG4iLCBjcHUsIHBlcl9jcHUoY29tcGF0X2dk
dF90YWJsZSwgY3B1KSk7CiAgICB9CiAgICBrZGJwKCJcbiIpOwogICAga2RicCgiICBQZXIgY3B1
IHRzczpcbiIpOwogICAgZm9yX2VhY2hfb25saW5lX2NwdShjcHUpIHsKICAgICAgICBzdHJ1Y3Qg
dHNzX3N0cnVjdCAqdHNzcCA9ICZwZXJfY3B1KGluaXRfdHNzLCBjcHUpOwogICAgICAgIGtkYnAo
Ilx0Y3B1OiVkICB0c3M6JXAgKHJzcDA6JTAxNmx4KVxuIiwgY3B1LCB0c3NwLCB0c3NwLT5yc3Aw
KTsKICAgIH0KI2lmZGVmIFVTRVJfTUFQUElOR1NfQVJFX0dMT0JBTAogICAga2RicCgiVVNFUl9N
QVBQSU5HU19BUkVfR0xPQkFMIGlzIGRlZmluZWRcbiIpOwojZWxzZQogICAga2RicCgiVVNFUl9N
QVBQSU5HU19BUkVfR0xPQkFMIGlzIE5PVCBkZWZpbmVkXG4iKTsKI2VuZGlmCiAgICBrZGJwKCJc
biIpOwogICAgcmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7Cn0KCi8qIGZvciBIVk0vSFlCIGd1ZXN0
cywgZ28gdGhydSBFUFQuIEZvciBQViBndWVzdCB3ZSBuZWVkIHRvIGdvIHRvIHRoZSBidHJlZS4g
CiAqIGJ0cmVlOiBwZm5fdG9fbWZuX2ZyYW1lX2xpc3RfbGlzdCBpcyByb290IHRoYXQgcG9pbnRz
IChoYXMgbWZucyBvZikgdXB0byAxNgogKiBwYWdlcyAoY2FsbCAnZW0gbDIgbm9kZXMpIHRoYXQg
Y29udGFpbiBtZm5zIG9mIGd1ZXN0IHAybSB0YWJsZSBwYWdlcyAKICogTk9URTogbnVtIG9mIGVu
dHJpZXMgaW4gYSBwMm0gcGFnZSBpcyBzYW1lIGFzIG51bSBvZiBlbnRyaWVzIGluIGwyIG5vZGUg
Ki8Kc3RhdGljIG5vaW5saW5lIHVsb25nCmtkYl9ncGZuMm1mbihzdHJ1Y3QgZG9tYWluICpkcCwg
dWxvbmcgZ3BmbiwgcDJtX3R5cGVfdCAqdHlwZXApIAp7CiAgICBpbnQgaWR4OwoKICAgIGlmICgg
IXBhZ2luZ19tb2RlX3RyYW5zbGF0ZShkcCkgKSB7CiAgICAgICAgbWZuX3QgKm1mbl92YSwgbWZu
ID0gYXJjaF9nZXRfcGZuX3RvX21mbl9mcmFtZV9saXN0X2xpc3QoZHApOwogICAgICAgIGludCBn
X2xvbmdzeiA9IGtkYl9ndWVzdF9iaXRuZXNzKGRwLT5kb21haW5faWQpLzg7CiAgICAgICAgaW50
IGVudHJpZXNfcGVyX3BnID0gUEFHRV9TSVpFL2dfbG9uZ3N6OwogICAgICAgIGNvbnN0IGludCBz
aGlmdCA9IGdldF9jb3VudF9vcmRlcihlbnRyaWVzX3Blcl9wZyk7CgoJaWYgKCAhbWZuX3ZhbGlk
KG1mbikgKSB7CgkgICAga2RicCgiSW52YWxpZCBmcmFtZV9saXN0X2xpc3QgbWZuOiVseCBmb3Ig
bm9uLXhsYXRlIGd1ZXN0XG4iLCBtZm4pOwoJICAgIHJldHVybiBJTlZBTElEX01GTjsKCX0KCiAg
ICAgICAgbWZuX3ZhID0gbWFwX2RvbWFpbl9wYWdlKG1mbik7CiAgICAgICAgaWR4ID0gZ3BmbiA+
PiAyKnNoaWZ0OyAgICAgLyogaW5kZXggaW4gcm9vdCBwYWdlL25vZGUgKi8KICAgICAgICBpZiAo
aWR4ID4gMTUpIHsKICAgICAgICAgICAga2RicCgiZ3BmbjolbHggaWR4OiV4IG5vdCBpbiBmcmFt
ZSBsaXN0IGxpbWl0IG9mIHoxNlxuIiwgZ3BmbiwgaWR4KTsKICAgICAgICAgICAgdW5tYXBfZG9t
YWluX3BhZ2UobWZuX3ZhKTsKICAgICAgICAgICAgcmV0dXJuIElOVkFMSURfTUZOOwogICAgICAg
IH0KICAgICAgICBtZm4gPSAoZ19sb25nc3ogPT0gNCkgPyAoKGludCAqKW1mbl92YSlbaWR4XSA6
IG1mbl92YVtpZHhdOwogICAgICAgIGlmIChtZm49PTApIHsKICAgICAgICAgICAga2RicCgiTm8g
bWZuIGZvciBpZHg6JWQgZm9yIGdwZm46JWx4IGluIHJvb3QgcGdcbiIsIGlkeCwgZ3Bmbik7CiAg
ICAgICAgICAgIHVubWFwX2RvbWFpbl9wYWdlKG1mbl92YSk7CiAgICAgICAgICAgIHJldHVybiBJ
TlZBTElEX01GTjsKICAgICAgICB9CiAgICAgICAgbWZuX3ZhID0gbWFwX2RvbWFpbl9wYWdlKG1m
bik7CiAgICAgICAgS0RCR1AxKCJwMm06IGlkeDoleCBmbGw6JWx4IG1mbiBvZiAybmQgbHZsIHBh
Z2U6JWx4XG4iLCBpZHgsCiAgICAgICAgICAgICAgIGFyY2hfZ2V0X3Bmbl90b19tZm5fZnJhbWVf
bGlzdF9saXN0KGRwKSwgbWZuKTsKCiAgICAgICAgaWR4ID0gKGdwZm4+PnNoaWZ0KSAmICgoMTw8
c2hpZnQpLTEpOyAgICAgLyogaWR4IGluIGwyIG5vZGUgKi8KICAgICAgICBtZm4gPSAoZ19sb25n
c3ogPT0gNCkgPyAoKGludCAqKW1mbl92YSlbaWR4XSA6IG1mbl92YVtpZHhdOwogICAgICAgIHVu
bWFwX2RvbWFpbl9wYWdlKG1mbl92YSk7CiAgICAgICAgaWYgKG1mbiA9PSAwKSB7CiAgICAgICAg
ICAgIGtkYnAoIk5vIG1mbiBlbnRyeSBhdDoleCBpbiAybmQgbHZsIHBnIGZvciBncGZuOiVseFxu
IiwgaWR4LCBncGZuKTsKICAgICAgICAgICAgcmV0dXJuIElOVkFMSURfTUZOOwogICAgICAgIH0K
ICAgICAgICBLREJHUDEoInAybTogaWR4OiV4ICBtZm4gb2YgcDJtIHBhZ2U6JWx4XG4iLCBpZHgs
IG1mbik7IAogICAgICAgIG1mbl92YSA9IG1hcF9kb21haW5fcGFnZShtZm4pOwogICAgICAgIGlk
eCA9IGdwZm4gJiAoKDE8PHNoaWZ0KS0xKTsKICAgICAgICBtZm4gPSAoZ19sb25nc3ogPT0gNCkg
PyAoKGludCAqKW1mbl92YSlbaWR4XSA6IG1mbl92YVtpZHhdOwogICAgICAgIHVubWFwX2RvbWFp
bl9wYWdlKG1mbl92YSk7CgoJKnR5cGVwID0gLTE7CiAgICAgICAgcmV0dXJuIG1mbjsKICAgIH0g
ZWxzZQogICAgICAgIHJldHVybiBtZm5feChnZXRfZ2ZuX3F1ZXJ5X3VubG9ja2VkKGRwLCBncGZu
LCB0eXBlcCkpOwoKICAgIHJldHVybiBJTlZBTElEX01GTjsKfQoKLyogZ2l2ZW4gYSBwZm4sIGZp
bmQgaXQncyBtZm4gKi8Kc3RhdGljIGtkYl9jcHVfY21kX3QKa2RiX3VzZ2ZfcDJtKHZvaWQpCnsK
ICAgIGtkYnAoInAybSBkb21pZCAweGdwZm4gOiBncGZuIHRvIG1mblxuIik7CiAgICByZXR1cm4g
S0RCX0NQVV9NQUlOX0tEQjsKfQpzdGF0aWMga2RiX2NwdV9jbWRfdAprZGJfY21kZl9wMm0oaW50
IGFyZ2MsIGNvbnN0IGNoYXIgKiphcmd2LCBzdHJ1Y3QgY3B1X3VzZXJfcmVncyAqcmVncykKewog
ICAgc3RydWN0IGRvbWFpbiAqZHA7CiAgICB1bG9uZyBncGZuLCBtZm49MHhkZWFkYmVlZjsKICAg
IHAybV90eXBlX3QgcDJtdHlwZSA9IC0xOwoKICAgIGlmIChhcmdjIDwgMyAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgfHwKICAgICAgICAoZHA9a2RiX3N0cmRvbWlkMnB0cihhcmd2
WzFdLCAxKSkgPT0gTlVMTCAgfHwKICAgICAgICAha2RiX3N0cjJ1bG9uZyhhcmd2WzJdLCAmZ3Bm
bikpIHsKCiAgICAgICAgcmV0dXJuIGtkYl91c2dmX3AybSgpOwogICAgfQogICAgbWZuID0ga2Ri
X2dwZm4ybWZuKGRwLCBncGZuLCAmcDJtdHlwZSk7CiAgICBpZiAoIHBhZ2luZ19tb2RlX3RyYW5z
bGF0ZShkcCkgKQogICAgICAgIGtkYnAoInAybVslbHhdID09ICVseCB0eXBlOiVkLzB4JXhcbiIs
IGdwZm4sIG1mbiwgcDJtdHlwZSwgcDJtdHlwZSk7CiAgICBlbHNlIAogICAgICAgIGtkYnAoInAy
bVslbHhdID09ICVseCB0eXBlOk4vQShQVilcbiIsIGdwZm4sIG1mbik7CgogICAgcmV0dXJuIEtE
Ql9DUFVfTUFJTl9LREI7Cn0KCi8qIGdpdmVuIGFuIG1mbiwgbG9va3VwIHBmbiBpbiB0aGUgTVBU
ICovCnN0YXRpYyBrZGJfY3B1X2NtZF90CmtkYl91c2dmX20ycCh2b2lkKQp7CiAgICBrZGJwKCJt
MnAgMHhtZm46IG1mbiB0byBwZm5cbiIpOwogICAgcmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7Cn0K
c3RhdGljIGtkYl9jcHVfY21kX3QKa2RiX2NtZGZfbTJwKGludCBhcmdjLCBjb25zdCBjaGFyICoq
YXJndiwgc3RydWN0IGNwdV91c2VyX3JlZ3MgKnJlZ3MpCnsKICAgIG1mbl90IG1mbjsKICAgIGlm
IChhcmdjID4gMSAmJiBrZGJfc3RyMnVsb25nKGFyZ3ZbMV0sICZtZm4pKQogICAgICAgIGlmICht
Zm5fdmFsaWQobWZuKSkKICAgICAgICAgICAga2RicCgibXB0WyV4XSA9PSAlbHhcbiIsIG1mbiwg
bWFjaGluZV90b19waHlzX21hcHBpbmdbbWZuXSk7CiAgICAgICAgZWxzZQogICAgICAgICAgICBr
ZGJwKCJJbnZhbGlkIG1mbjolbHhcbiIsIG1mbik7CiAgICBlbHNlCiAgICAgICAga2RiX3VzZ2Zf
bTJwKCk7CiAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKfQoKc3RhdGljIHZvaWQgCmtkYl9w
cl9wZ19wZ3RfZmxkcyh1bnNpZ25lZCBsb25nIHR5cGVfaW5mbykKewogICAgc3dpdGNoICh0eXBl
X2luZm8gJiBQR1RfdHlwZV9tYXNrKSB7CiAgICAgICAgY2FzZSAoUEdUX2wxX3BhZ2VfdGFibGUp
OgogICAgICAgICAgICBrZGJwKCIgICAgcGFnZSBpcyBQR1RfbDFfcGFnZV90YWJsZVxuIik7CiAg
ICAgICAgICAgIGJyZWFrOwogICAgICAgIGNhc2UgUEdUX2wyX3BhZ2VfdGFibGU6CiAgICAgICAg
ICAgIGtkYnAoIiAgICBwYWdlIGlzIFBHVF9sMl9wYWdlX3RhYmxlXG4iKTsKICAgICAgICAgICAg
YnJlYWs7CiAgICAgICAgY2FzZSBQR1RfbDNfcGFnZV90YWJsZToKICAgICAgICAgICAga2RicCgi
ICAgIHBhZ2UgaXMgUEdUX2wzX3BhZ2VfdGFibGVcbiIpOwogICAgICAgICAgICBicmVhazsKICAg
ICAgICBjYXNlIFBHVF9sNF9wYWdlX3RhYmxlOgogICAgICAgICAgICBrZGJwKCIgICAgcGFnZSBp
cyBQR1RfbDRfcGFnZV90YWJsZVxuIik7CiAgICAgICAgICAgIGJyZWFrOwogICAgICAgIGNhc2Ug
UEdUX3NlZ19kZXNjX3BhZ2U6CiAgICAgICAgICAgIGtkYnAoIiAgICBwYWdlIGlzIHNlZyBkZXNj
IHBhZ2VcbiIpOwogICAgICAgICAgICBicmVhazsKICAgICAgICBjYXNlIFBHVF93cml0YWJsZV9w
YWdlOgogICAgICAgICAgICBrZGJwKCIgICAgcGFnZSBpcyB3cml0YWJsZSBwYWdlXG4iKTsKICAg
ICAgICAgICAgYnJlYWs7CiAgICAgICAgY2FzZSBQR1Rfc2hhcmVkX3BhZ2U6CiAgICAgICAgICAg
IGtkYnAoIiAgICBwYWdlIGlzIHNoYXJlZCBwYWdlXG4iKTsKICAgICAgICAgICAgYnJlYWs7CiAg
ICB9CiAgICBpZiAodHlwZV9pbmZvICYgUEdUX3Bpbm5lZCkKICAgICAgICBrZGJwKCIgICAgcGFn
ZSBpcyBwaW5uZWRcbiIpOwogICAgaWYgKHR5cGVfaW5mbyAmIFBHVF92YWxpZGF0ZWQpCiAgICAg
ICAga2RicCgiICAgIHBhZ2UgaXMgdmFsaWRhdGVkXG4iKTsKICAgIGlmICh0eXBlX2luZm8gJiBQ
R1RfcGFlX3hlbl9sMikKICAgICAgICBrZGJwKCIgICAgcGFnZSBpcyBQR1RfcGFlX3hlbl9sMlxu
Iik7CiAgICBpZiAodHlwZV9pbmZvICYgUEdUX3BhcnRpYWwpCiAgICAgICAga2RicCgiICAgIHBh
Z2UgaXMgUEdUX3BhcnRpYWxcbiIpOwogICAgaWYgKHR5cGVfaW5mbyAmIFBHVF9sb2NrZWQpCiAg
ICAgICAga2RicCgiICAgIHBhZ2UgaXMgUEdUX2xvY2tlZFxuIik7Cn0KCnN0YXRpYyB2b2lkCmtk
Yl9wcl9wZ19wZ2NfZmxkcyh1bnNpZ25lZCBsb25nIGNvdW50X2luZm8pCnsKICAgIGlmIChjb3Vu
dF9pbmZvICYgUEdDX2FsbG9jYXRlZCkKICAgICAgICBrZGJwKCIgIFBHQ19hbGxvY2F0ZWQiKTsK
ICAgIGlmIChjb3VudF9pbmZvICYgUEdDX3hlbl9oZWFwKQogICAgICAgIGtkYnAoIiAgUEdDX3hl
bl9oZWFwIik7CiAgICBpZiAoY291bnRfaW5mbyAmIFBHQ19wYWdlX3RhYmxlKQogICAgICAgIGtk
YnAoIiAgUEdDX3BhZ2VfdGFibGUiKTsKICAgIGlmIChjb3VudF9pbmZvICYgUEdDX2Jyb2tlbikK
ICAgICAgICBrZGJwKCIgIFBHQ19icm9rZW4iKTsKI2lmIFhFTl9WRVJTSU9OIDwgNCAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgIC8qIHhlbiAzLngueCAqLwogICAgaWYgKGNvdW50X2lu
Zm8gJiBQR0Nfb2ZmbGluaW5nKQogICAgICAgIGtkYnAoIiAgUEdDX29mZmxpbmluZyIpOwogICAg
aWYgKGNvdW50X2luZm8gJiBQR0Nfb2ZmbGluZWQpCiAgICAgICAga2RicCgiICBQR0Nfb2ZmbGlu
ZWQiKTsKI2Vsc2UKICAgIGlmIChjb3VudF9pbmZvICYgUEdDX3N0YXRlX2ludXNlKQogICAgICAg
IGtkYnAoIiAgUEdDX2ludXNlIik7CiAgICBpZiAoY291bnRfaW5mbyAmIFBHQ19zdGF0ZV9vZmZs
aW5pbmcpCiAgICAgICAga2RicCgiICBQR0Nfc3RhdGVfb2ZmbGluaW5nIik7CiAgICBpZiAoY291
bnRfaW5mbyAmIFBHQ19zdGF0ZV9vZmZsaW5lZCkKICAgICAgICBrZGJwKCIgIFBHQ19zdGF0ZV9v
ZmZsaW5lZCIpOwogICAgaWYgKGNvdW50X2luZm8gJiBQR0Nfc3RhdGVfZnJlZSkKICAgICAgICBr
ZGJwKCIgIFBHQ19zdGF0ZV9mcmVlIik7CiNlbmRpZgogICAga2RicCgiXG4iKTsKfQoKLyogcHJp
bnQgc3RydWN0IHBhZ2VfaW5mb3t9IGdpdmVuIHB0ciB0byBpdCBvciBhbiBtZm4KICogTk9URTog
dGhhdCBnaXZlbiBhbiBtZm4gdGhlcmUgc2VlbXMgbm8gd2F5IG9mIGtub3dpbmcgaG93IGl0J3Mg
dXNlZCwgc28KICogICAgICAgaGVyZSB3ZSBqdXN0IHByaW50IGFsbCBpbmZvIGFuZCBsZXQgdXNl
ciBkZWNpZGUgd2hhdCdzIGFwcGxpY2FibGUgKi8Kc3RhdGljIGtkYl9jcHVfY21kX3QKa2RiX3Vz
Z2ZfZHBhZ2Uodm9pZCkKewogICAga2RicCgiZHBhZ2UgbWZufHBhZ2UtcHRyIDogRGlzcGxheSBz
dHJ1Y3QgcGFnZVxuIik7CiAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKfQpzdGF0aWMga2Ri
X2NwdV9jbWRfdAprZGJfY21kZl9kcGFnZShpbnQgYXJnYywgY29uc3QgY2hhciAqKmFyZ3YsIHN0
cnVjdCBjcHVfdXNlcl9yZWdzICpyZWdzKQp7CiAgICB1bnNpZ25lZCBsb25nIHZhbDsKICAgIHN0
cnVjdCBwYWdlX2luZm8gKnBncDsKICAgIHN0cnVjdCBkb21haW4gKmRwOwoKICAgIGlmIChhcmdj
IDw9IDEgfHwgKmFyZ3ZbMV0gPT0gJz8nKSAKICAgICAgICByZXR1cm4ga2RiX3VzZ2ZfZHBhZ2Uo
KTsKCiAgICBpZiAoKGtkYl9zdHIydWxvbmcoYXJndlsxXSwgJnZhbCkgPT0gMCkgICAgICB8fAog
ICAgICAgICh2YWwgPCAgKHVsb25nKWZyYW1lX3RhYmxlICYmICFtZm5fdmFsaWQodmFsKSkpIHsK
CiAgICAgICAga2RicCgiSW52YWxpZCBhcmc6JXNcbiIsIGFyZ3ZbMV0pOwogICAgICAgIHJldHVy
biBLREJfQ1BVX01BSU5fS0RCOwogICAgfQogICAga2RicCgiUGFnZSBJbmZvOlxuIik7CiAgICBp
ZiAodmFsIDw9ICh1bG9uZylmcmFtZV90YWJsZSkgeyAgICAgICAvKiBhcmcgaXMgbWZuICovCiAg
ICAgICAgcGdwID0gbWZuX3RvX3BhZ2UodmFsKTsKICAgICAgICBrZGJwKCIgIG1mbjogJWx4IHBh
Z2VfaW5mbzolcFxuIiwgdmFsLCBwZ3ApOwogICAgfSBlbHNlIHsKICAgICAgICBwZ3AgPSAoc3Ry
dWN0IHBhZ2VfaW5mbyAqKXZhbDsgLyogYXJnIGlzIHN0cnVjdCBwYWdle30gKi8KICAgICAgICBp
ZiAocGdwIDwgZnJhbWVfdGFibGUgfHwgcGdwID49IGZyYW1lX3RhYmxlK21heF9wYWdlKSB7CiAg
ICAgICAgICAgIGtkYnAoIkludmFsaWQgcGFnZSBwdHIuIGJlbG93L2JleW9uZCBtYXhfcGFnZVxu
Iik7CiAgICAgICAgICAgIHJldHVybiBLREJfQ1BVX01BSU5fS0RCOwogICAgICAgIH0KICAgICAg
ICBrZGJwKCIgIG1mbjogJWx4IHBhZ2VfaW5mbzolcFxuIiwgcGFnZV90b19tZm4ocGdwKSwgcGdw
KTsKICAgIH0gCiAgICBrZGJwKCIgIGNvdW50X2luZm86ICUwMTZseCAgKHJlZmNudDogJXgpXG4i
LCBwZ3AtPmNvdW50X2luZm8sCiAgICAgICAgIHBncC0+Y291bnRfaW5mbyAmIFBHQ19jb3VudF9t
YXNrKTsKI2lmIFhFTl9WRVJTSU9OID4gMyB8fCBYRU5fU1VCVkVSU0lPTiA+IDMgICAgICAgICAg
ICAgLyogeGVuIDMuNC54IG9yIGxhdGVyICovCiAgICBrZGJfcHJfcGdfcGdjX2ZsZHMocGdwLT5j
b3VudF9pbmZvKTsKCiAgICBrZGJwKCJJbiB1c2UgaW5mbzpcbiIpOwogICAga2RicCgiICB0eXBl
X2luZm86JTAxNmx4XG4iLCBwZ3AtPnUuaW51c2UudHlwZV9pbmZvKTsKICAgIGtkYl9wcl9wZ19w
Z3RfZmxkcyhwZ3AtPnUuaW51c2UudHlwZV9pbmZvKTsKICAgIGRwID0gcGFnZV9nZXRfb3duZXIo
cGdwKTsKICAgIGtkYnAoIiAgZG9taWQ6JWQgKHBpY2tsZWQ6JWx4KVxuIiwgZHAgPyBkcC0+ZG9t
YWluX2lkIDogLTEsIAogICAgICAgICBwZ3AtPnYuaW51c2UuX2RvbWFpbik7CgogICAga2RicCgi
U2hhZG93IEluZm86XG4iKTsKICAgIGtkYnAoIiAgdHlwZToleCBwaW5uZWQ6JXggY291bnQ6JXhc
biIsIHBncC0+dS5zaC50eXBlLCBwZ3AtPnUuc2gucGlubmVkLAogICAgICAgICBwZ3AtPnUuc2gu
Y291bnQpOwogICAga2RicCgiICBiYWNrOiVseCAgc2hhZG93X2ZsYWdzOiV4ICBuZXh0X3NoYWRv
dzolbHhcbiIsIHBncC0+di5zaC5iYWNrLAogICAgICAgICBwZ3AtPnNoYWRvd19mbGFncywgcGdw
LT5uZXh0X3NoYWRvdyk7CgogICAga2RicCgiRnJlZSBJbmZvXG4iKTsKICAgIGtkYnAoIiAgbmVl
ZF90bGJmbHVzaDolZCBvcmRlcjolZCB0bGJmbHVzaF90aW1lc3RhbXA6JXhcbiIsCiAgICAgICAg
IHBncC0+dS5mcmVlLm5lZWRfdGxiZmx1c2gsIHBncC0+di5mcmVlLm9yZGVyLCAKICAgICAgICAg
cGdwLT50bGJmbHVzaF90aW1lc3RhbXApOwojZWxzZQogICAgaWYgKHBncC0+Y291bnRfaW5mbyAm
IFBHQ19hbGxvY2F0ZWQpICAgICAgICAgICAgLyogcGFnZSBhbGxvY2F0ZWQgKi8KICAgICAgICBr
ZGJwKCIgIFBHQ19hbGxvY2F0ZWQiKTsKICAgIGlmIChwZ3AtPmNvdW50X2luZm8gJiBQR0NfcGFn
ZV90YWJsZSkgICAgICAgICAgIC8qIHBhZ2UgdGFibGUgcGFnZSAqLwogICAgICAgIGtkYnAoIiAg
UEdDX3BhZ2VfdGFibGUiKTsKICAgIGtkYnAoIlxuIik7CiAgICBrZGJwKCIgIHBhZ2UgaXMgJXMg
eGVuIGhlYXAgcGFnZVxuIiwgaXNfeGVuX2hlYXBfcGFnZShwZ3ApID8gImEiOiJOT1QiKTsKICAg
IGtkYnAoIiAgY2FjaGVhdHRyOiV4XG4iLCAocGdwLT5jb3VudF9pbmZvPj5QR0NfY2FjaGVhdHRy
X2Jhc2UpICYgNyk7CiAgICBpZiAocGdwLT5jb3VudF9pbmZvICYgUEdDX2NvdW50X21hc2spIHsg
ICAgICAgICAvKiBwYWdlIGluIHVzZSAqLwogICAgICAgIGRwID0gcGdwLT51LmludXNlLl9kb21h
aW47ICAgICAgICAgLyogcGlja2xlZCBkb21haW4gKi8KICAgICAgICBrZGJwKCIgIHBhZ2UgaXMg
aW4gdXNlXG4iKTsKICAgICAgICBrZGJwKCIgICAgZG9taWQ6ICVkICAocGlja2xlZCBkb206JXgp
XG4iLCAKICAgICAgICAgICAgIGRwID8gKHVucGlja2xlX2RvbXB0cihkcCkpLT5kb21haW5faWQg
OiAtMSwgZHApOwogICAgICAgIGtkYnAoIiAgICB0eXBlX2luZm86ICVseFxuIiwgcGdwLT51Lmlu
dXNlLnR5cGVfaW5mbyk7CiAgICAgICAga2RiX3BydF9wZ190eXBlKHBncC0+dS5pbnVzZS50eXBl
X2luZm8pOwogICAgfSBlbHNlIHsgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgIC8qIHBhZ2UgaXMgZnJlZSAqLwogICAgICAgIGtkYnAoIiAgcGFnZSBpcyBmcmVlXG4iKTsK
ICAgICAgICBrZGJwKCIgICAgb3JkZXI6ICV4XG4iLCBwZ3AtPnUuZnJlZS5vcmRlcik7CiAgICAg
ICAga2RicCgiICAgIGNwdW1hc2s6ICVseFxuIiwgcGdwLT51LmZyZWUuY3B1bWFzay5iaXRzKTsK
ICAgIH0KICAgIGtkYnAoIiAgdGxiZmx1c2gvc2hhZG93X2ZsYWdzOiAlbHhcbiIsIHBncC0+c2hh
ZG93X2ZsYWdzKTsKI2VuZGlmCiAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKfQoKLyogZGlz
cGxheSBhc2tlZCBtc3IgdmFsdWUgKi8Kc3RhdGljIGtkYl9jcHVfY21kX3QKa2RiX3VzZ2ZfZG1z
cih2b2lkKQp7CiAgICBrZGJwKCJkbXNyIGFkZHJlc3MgOiBEaXNwbGF5IG1zciB2YWx1ZVxuIik7
CiAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKfQpzdGF0aWMga2RiX2NwdV9jbWRfdAprZGJf
Y21kZl9kbXNyKGludCBhcmdjLCBjb25zdCBjaGFyICoqYXJndiwgc3RydWN0IGNwdV91c2VyX3Jl
Z3MgKnJlZ3MpCnsKICAgIHVuc2lnbmVkIGxvbmcgYWRkciwgdmFsOwoKICAgIGlmIChhcmdjIDw9
IDEgfHwgKmFyZ3ZbMV0gPT0gJz8nKSAKICAgICAgICByZXR1cm4ga2RiX3VzZ2ZfZG1zcigpOwoK
ICAgIGlmICgoa2RiX3N0cjJ1bG9uZyhhcmd2WzFdLCAmYWRkcikgPT0gMCkpIHsKICAgICAgICBr
ZGJwKCJJbnZhbGlkIGFyZzolc1xuIiwgYXJndlsxXSk7CiAgICAgICAgcmV0dXJuIEtEQl9DUFVf
TUFJTl9LREI7CiAgICB9CiAgICByZG1zcmwoYWRkciwgdmFsKTsKICAgIGtkYnAoIm1zcjogJWx4
ICB2YWw6JWx4XG4iLCBhZGRyLCB2YWwpOwoKICAgIHJldHVybiBLREJfQ1BVX01BSU5fS0RCOwp9
CgovKiBleGVjdXRlIGNwdWlkIGZvciBnaXZlbiB2YWx1ZSAqLwpzdGF0aWMga2RiX2NwdV9jbWRf
dAprZGJfdXNnZl9jcHVpZCh2b2lkKQp7CiAgICBrZGJwKCJjcHVpZCBlYXggOiBEaXNwbGF5IGNw
dWlkIHZhbHVlIHJldHVybmVkIGluIHJheFxuIik7CiAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tE
QjsKfQpzdGF0aWMga2RiX2NwdV9jbWRfdAprZGJfY21kZl9jcHVpZChpbnQgYXJnYywgY29uc3Qg
Y2hhciAqKmFyZ3YsIHN0cnVjdCBjcHVfdXNlcl9yZWdzICpyZWdzKQp7CiAgICB1bnNpZ25lZCBs
b25nIHJheD0wLCByYng9MCwgcmN4PTAsIHJkeD0wOwoKICAgIGlmIChhcmdjIDw9IDEgfHwgKmFy
Z3ZbMV0gPT0gJz8nKSAKICAgICAgICByZXR1cm4ga2RiX3VzZ2ZfY3B1aWQoKTsKCiAgICBpZiAo
KGtkYl9zdHIydWxvbmcoYXJndlsxXSwgJnJheCkgPT0gMCkpIHsKICAgICAgICBrZGJwKCJJbnZh
bGlkIGFyZzolc1xuIiwgYXJndlsxXSk7CiAgICAgICAgcmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7
CiAgICB9CiNpZiAwCiAgICBfX2FzbV9fIF9fdm9sYXRpbGVfXyAoCiAgICAgICAgICAgIC8qICJw
dXNobCAlJXJheCAgXG4iICovCgogICAgICAgICAgICAibW92bCAlMCwgJSVyYXggIFxuIgogICAg
ICAgICAgICAiY3B1aWQgICAgICAgICAgIFxuIiAKICAgICAgICAgICAgOiAiPSZhIiAocmF4KSwg
Ij1iIiAocmJ4KSwgIj1jIiAocmN4KSwgIj1kIiAocmR4KQogICAgICAgICAgICA6ICIwIiAocmF4
KQogICAgICAgICAgICA6ICJyYXgiLCAicmJ4IiwgInJjeCIsICJyZHgiLCAibWVtb3J5Iik7CiNl
bmRpZgogICAgY3B1aWQocmF4LCAmcmF4LCAmcmJ4LCAmcmN4LCAmcmR4KTsKICAgIGtkYnAoInJh
eDogJTAxNmx4ICByYng6JTAxNmx4IHJjeDolMDE2bHggcmR4OiUwMTZseFxuIiwgcmF4LCByYngs
CiAgICAgICAgIHJjeCwgcmR4KTsKICAgIHJldHVybiBLREJfQ1BVX01BSU5fS0RCOwp9CgovKiBl
eGVjdXRlIGNwdWlkIGZvciBnaXZlbiB2YWx1ZSAqLwpzdGF0aWMga2RiX2NwdV9jbWRfdAprZGJf
dXNnZl93ZXB0KHZvaWQpCnsKICAgIGtkYnAoIndlcHQgZG9taWQgZ2ZuOiB3YWxrIGVwdCB0YWJs
ZSBmb3IgZ2l2ZW4gZG9taWQgYW5kIGdmblxuIik7CiAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tE
QjsKfQpzdGF0aWMga2RiX2NwdV9jbWRfdAprZGJfY21kZl93ZXB0KGludCBhcmdjLCBjb25zdCBj
aGFyICoqYXJndiwgc3RydWN0IGNwdV91c2VyX3JlZ3MgKnJlZ3MpCnsKICAgIHN0cnVjdCBkb21h
aW4gKmRwOwogICAgdWxvbmcgZ2ZuOwoKICAgIGlmICgoYXJnYyA+IDEgJiYgKmFyZ3ZbMV0gPT0g
Jz8nKSB8fCBhcmdjICE9IDMpCiAgICAgICAgcmV0dXJuIGtkYl91c2dmX3dlcHQoKTsKICAgIGlm
ICgoZHA9a2RiX3N0cmRvbWlkMnB0cihhcmd2WzFdLCAxKSkgJiYga2RiX3N0cjJ1bG9uZyhhcmd2
WzJdLCAmZ2ZuKSkKICAgICAgICBlcHRfd2Fsa190YWJsZShkcCwgZ2ZuKTsKICAgIGVsc2UKICAg
ICAgICBrZGJfdXNnZl93ZXB0KCk7CgogICAgcmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7Cn0KCi8q
CiAqIFNhdmUgc3ltYm9scyBpbmZvIGZvciBhIGd1ZXN0LCBkb20wIG9yIG90aGVyLi4uCiAqLwpz
dGF0aWMga2RiX2NwdV9jbWRfdAprZGJfdXNnZl9zeW0odm9pZCkKewogICBrZGJwKCJzeW0gZG9t
aWQgJmthbGxzeW1zX25hbWVzICZrYWxsc3ltc19hZGRyZXNzZXMgJmthbGxzeW1zX251bV9zeW1z
XG4iKTsKICAga2RicCgiXHQgWyZrYWxsc3ltc190b2tlbl90YWJsZV0gWyZrYWxsc3ltc190b2tl
bl9pbmRleF1cbiIpOwogICBrZGJwKCJcdHRva2VuIF90YWJsZSBhbmQgX2luZGV4IE1VU1QgYmUg
c3BlY2lmaWVkIGZvciBlbDVcbiIpOwogICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKfQpzdGF0
aWMga2RiX2NwdV9jbWRfdAprZGJfY21kZl9zeW0oaW50IGFyZ2MsIGNvbnN0IGNoYXIgKiphcmd2
LCBzdHJ1Y3QgY3B1X3VzZXJfcmVncyAqcmVncykKewogICAgdWxvbmcgbmFtZXNwLCBhZGRyYXAs
IG51bXAsIHRva3RibHAsIHRva2lkeHA7CiAgICBkb21pZF90IGRvbWlkOwoKICAgIGlmIChhcmdj
IDwgNSkgewogICAgICAgIHJldHVybiBrZGJfdXNnZl9zeW0oKTsKICAgIH0KICAgIHRva3RibHAg
PSB0b2tpZHhwID0gMDsgICAgIC8qIG9wdGlvbmFsIHBhcmFtZXRlcnMgKi8KICAgIGlmIChrZGJf
c3RyMmRvbWlkKGFyZ3ZbMV0sICZkb21pZCwgMSkgJiYKICAgICAgICBrZGJfc3RyMnVsb25nKGFy
Z3ZbMl0sICZuYW1lc3ApICAgJiYKICAgICAgICBrZGJfc3RyMnVsb25nKGFyZ3ZbM10sICZhZGRy
YXApICAgJiYKICAgICAgICBrZGJfc3RyMnVsb25nKGFyZ3ZbNF0sICZudW1wKSAgICAgJiYgCiAg
ICAgICAgKGFyZ2M9PTUgfHwgKGFyZ2M9PTcgJiYga2RiX3N0cjJ1bG9uZyhhcmd2WzVdLCAmdG9r
dGJscCkgJiYKICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBrZGJfc3RyMnVsb25nKGFy
Z3ZbNl0sICZ0b2tpZHhwKSkpKSB7CgogICAgICAgIGtkYl9zYXZfZG9tX3N5bWluZm8oZG9taWQs
IG5hbWVzcCwgYWRkcmFwLG51bXAsdG9rdGJscCx0b2tpZHhwKTsKICAgIH0gZWxzZQogICAgICAg
IGtkYl91c2dmX3N5bSgpOwogICAgcmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7Cn0KCgovKiBtb2Rz
IGlzIHRoZSBkdW1iIGFzcyAmbW9kdWxlcy4gbW9kdWxlcyBpcyBzdHJ1Y3Qge254dCwgcHJldn0s
IGFuZCBub3QgcHRyICovCnN0YXRpYyB2b2lkCmtkYl9kdW1wX2xpbnV4X21vZHVsZXMoZG9taWRf
dCBkb21pZCwgdWxvbmcgbW9kcywgdWludCBueHRvZmZzLCB1aW50IG5tb2ZmcywgCiAgICAgICAg
ICAgICAgICAgICAgICAgdWludCBjb3Jlb2ZmcykKewogICAgY29uc3QgaW50IGJ1ZnN6ID0gNTY7
CiAgICBjaGFyIGJ1ZltidWZzel07CiAgICB1aW50NjRfdCBhZGRyLCBhZGRydmFsLCAqbnh0cHRy
LCAqbW9kcHRyOwogICAgdWludCBpLCBudW0gPSA4OwoKICAgIGlmIChrZGJfZ3Vlc3RfYml0bmVz
cyhkb21pZCkgPT0gMzIpCiAgICAgICAgbnVtID0gNDsKCiAgICAvKiBmaXJzdCByZWFkIG1vZHVs
ZXN7fS5uZXh0IHB0ciAqLwogICAgaWYgKGtkYl9yZWFkX21lbShtb2RzLCAoa2RiYnl0X3QgKikm
bnh0cHRyLCBudW0sIGRvbWlkKSAhPSBudW0pIHsKICAgICAgICBrZGJwKCJFUlJPUjogQ291bGQg
bm90IHJlYWQgbmV4dCBhdCBtb2Q6JXBcbiIsICh2b2lkICopbW9kcyk7CiAgICAgICAgcmV0dXJu
OwogICAgfQoKICAgIEtEQkdQKCJtb2RzOiVwIG54dHB0cjolcCBubW9mZnM6JXggY29yZW9mZnM6
JXhcbiIsICh2b2lkICopbW9kcywgbnh0cHRyLAogICAgICAgICAgbm1vZmZzLCBjb3Jlb2Zmcyk7
CgogICAgd2hpbGUgKCh1aW50NjRfdClueHRwdHIgIT0gbW9kcykgewoKICAgICAgICBtb2RwdHIg
PSAodWludDY0X3QgKikgKCh1bG9uZylueHRwdHIgLSBueHRvZmZzKTsKCiAgICAgICAgYWRkciA9
ICh1bG9uZyltb2RwdHIgKyBjb3Jlb2ZmczsKICAgICAgICBpZiAoa2RiX3JlYWRfbWVtKGFkZHIs
IChrZGJieXRfdCAqKSZhZGRydmFsLCBudW0sIGRvbWlkKSAhPSBudW0pIHsKICAgICAgICAgICAg
a2RicCgiRVJST1I6IENvdWxkIG5vdCByZWFkIG1vZCBhZGRyIGF0IDolcFxuIiwgKHZvaWQgKilh
ZGRyKTsKICAgICAgICAgICAgcmV0dXJuOwogICAgICAgIH0KCiAgICAgICAgS0RCR1AoIm1vZHB0
cjolcCBhZGRyOiVwXG4iLCBtb2RwdHIsICh2b2lkICopYWRkcik7CiAgICAgICAgYWRkciA9ICh1
bG9uZyltb2RwdHIgKyBubW9mZnM7CiAgICAgICAgaT0wOwogICAgICAgIGRvIHsKICAgICAgICAg
ICAgaWYgKGtkYl9yZWFkX21lbShhZGRyLCAoa2RiYnl0X3QgKikmYnVmW2ldLCAxLCBkb21pZCkg
IT0gMSkgewogICAgICAgICAgICAgICAga2RicCgiRVJST1I6Q291bGQgbm90IHJlYWQgbmFtZSBj
aCBhdCBhZGRyOiVwXG4iLCAodm9pZCAqKWFkZHIpOwogICAgICAgICAgICAgICAgcmV0dXJuOwog
ICAgICAgICAgICB9CiAgICAgICAgICAgIGFkZHIrKzsKICAgICAgICB9IHdoaWxlIChidWZbaV0g
JiYgaSsrIDwgYnVmc3opOwogICAgICAgIGJ1ZltidWZzei0xXSA9ICdcMCc7CgogICAgICAgIGtk
YnAoIiUwMTZseCAlMDE2bHggJXNcbiIsIG1vZHB0ciwgYWRkcnZhbCwgYnVmKTsKCiAgICAgICAg
aWYgKGtkYl9yZWFkX21lbSgodWxvbmcpbnh0cHRyLCAoa2RiYnl0X3QgKikmbnh0cHRyLCBudW0s
IGRvbWlkKSE9bnVtKSB7CiAgICAgICAgICAgIGtkYnAoIkVSUk9SOiBDb3VsZCBub3QgcmVhZCBu
ZXh0IGF0IG1vZDolcFxuIiwgKHZvaWQgKiltb2RzKTsKICAgICAgICAgICAgcmV0dXJuOwogICAg
ICAgIH0KICAgICAgICBLREJHUCgibnh0cHRyOiVwIGFkZHI6JXBcbiIsIG54dHB0ciwgKHZvaWQg
KilhZGRyKTsKICAgIH0gCn0KCi8qIERpc3BsYXkgbW9kdWxlcyBsb2FkZWQgaW4gbGludXggZ3Vl
c3QgKi8Kc3RhdGljIGtkYl9jcHVfY21kX3QKa2RiX3VzZ2ZfbW9kKHZvaWQpCnsKICAga2RicCgi
bW9kIGRvbWlkICZtb2R1bGVzIG5leHQtb2ZmcyBuYW1lLW9mZnMgbW9kdWxlX2NvcmUtb2Zmc1xu
Iik7CiAgIGtkYnAoIlx0d2hlcmUgbmV4dC1vZmZzOiAmKChzdHJ1Y3QgbW9kdWxlICopMCktPmxp
c3QubmV4dFxuIik7CiAgIGtkYnAoIlx0bmFtZS1vZmZzOiAmKChzdHJ1Y3QgbW9kdWxlICopMCkt
Pm5hbWUgZXRjLi5cbiIpOwogICBrZGJwKCJcdERpc3BsYXlzIGFsbCBsb2FkZWQgbW9kdWxlcyBp
biB0aGUgbGludXggZ3Vlc3RcbiIpOwogICBrZGJwKCJcdEVnOiBtb2QgMCBmZmZmZmZmZjgwMzAy
NzgwIDggMHgxOCAweDE3OFxuIik7CgogICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKfQoKc3Rh
dGljIGtkYl9jcHVfY21kX3QKa2RiX2NtZGZfbW9kKGludCBhcmdjLCBjb25zdCBjaGFyICoqYXJn
diwgc3RydWN0IGNwdV91c2VyX3JlZ3MgKnJlZ3MpCnsKICAgIHVsb25nIG1vZHMsIG54dG9mZnMs
IG5tb2ZmcywgY29yZW9mZnM7CiAgICBkb21pZF90IGRvbWlkOwoKICAgIGlmIChhcmdjIDwgNikg
ewogICAgICAgIHJldHVybiBrZGJfdXNnZl9tb2QoKTsKICAgIH0KICAgIGlmIChrZGJfc3RyMmRv
bWlkKGFyZ3ZbMV0sICZkb21pZCwgMSkgJiYKICAgICAgICBrZGJfc3RyMnVsb25nKGFyZ3ZbMl0s
ICZtb2RzKSAgICAgJiYKICAgICAgICBrZGJfc3RyMnVsb25nKGFyZ3ZbM10sICZueHRvZmZzKSAg
JiYKICAgICAgICBrZGJfc3RyMnVsb25nKGFyZ3ZbNF0sICZubW9mZnMpICAgJiYKICAgICAgICBr
ZGJfc3RyMnVsb25nKGFyZ3ZbNV0sICZjb3Jlb2ZmcykpIHsKCiAgICAgICAga2RicCgibW9kcHRy
IGFkZHJlc3MgbmFtZVxuIik7CiAgICAgICAga2RiX2R1bXBfbGludXhfbW9kdWxlcyhkb21pZCwg
bW9kcywgbnh0b2Zmcywgbm1vZmZzLCBjb3Jlb2Zmcyk7CiAgICB9IGVsc2UKICAgICAgICBrZGJf
dXNnZl9tb2QoKTsKICAgIHJldHVybiBLREJfQ1BVX01BSU5fS0RCOwp9CgovKiB0b2dnbGUga2Ri
IGRlYnVnIHRyYWNlIGxldmVsICovCnN0YXRpYyBrZGJfY3B1X2NtZF90CmtkYl91c2dmX2tkYmRi
Zyh2b2lkKQp7CiAgICBrZGJwKCJrZGJkYmcgOiB0cmFjZSBpbmZvIHRvIGRlYnVnIGtkYlxuIik7
CiAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKfQpzdGF0aWMga2RiX2NwdV9jbWRfdAprZGJf
Y21kZl9rZGJkYmcoaW50IGFyZ2MsIGNvbnN0IGNoYXIgKiphcmd2LCBzdHJ1Y3QgY3B1X3VzZXJf
cmVncyAqcmVncykKewogICAgaWYgKGFyZ2MgPiAxICYmICphcmd2WzFdID09ICc/JykKICAgICAg
ICByZXR1cm4ga2RiX3VzZ2Zfa2RiZGJnKCk7CgogICAga2RiZGJnID0gKGtkYmRiZz09MykgPyAw
IDogKGtkYmRiZysxKTsKICAgIGtkYnAoImtkYmRiZyBzZXQgdG86JWRcbiIsIGtkYmRiZyk7CiAg
ICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKfQoKc3RhdGljIGtkYl9jcHVfY21kX3QKa2RiX3Vz
Z2ZfcmVib290KHZvaWQpCnsKICAgIGtkYnAoInJlYm9vdDogcmVib290IHN5c3RlbVxuIik7CiAg
ICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKfQpzdGF0aWMga2RiX2NwdV9jbWRfdAprZGJfY21k
Zl9yZWJvb3QoaW50IGFyZ2MsIGNvbnN0IGNoYXIgKiphcmd2LCBzdHJ1Y3QgY3B1X3VzZXJfcmVn
cyAqcmVncykKewogICAgaWYgKGFyZ2MgPiAxICYmICphcmd2WzFdID09ICc/JykKICAgICAgICBy
ZXR1cm4ga2RiX3VzZ2ZfcmVib290KCk7CgogICAgbWFjaGluZV9yZXN0YXJ0KDUwMCk7CiAgICBy
ZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsgICAgICAgICAgICAgIC8qIG5vdCByZWFjaGVkICovCn0K
CgpzdGF0aWMga2RiX2NwdV9jbWRfdAprZGJfdXNnZl90cmNvbih2b2lkKQp7CiAgICBrZGJwKCJ0
cmNvbjogdHVybiB1c2VyIGFkZGVkIGtkYiB0cmFjaW5nIG9uXG4iKTsKICAgIHJldHVybiBLREJf
Q1BVX01BSU5fS0RCOwp9CnN0YXRpYyBrZGJfY3B1X2NtZF90CmtkYl9jbWRmX3RyY29uKGludCBh
cmdjLCBjb25zdCBjaGFyICoqYXJndiwgc3RydWN0IGNwdV91c2VyX3JlZ3MgKnJlZ3MpCnsKICAg
IGlmIChhcmdjID4gMSAmJiAqYXJndlsxXSA9PSAnPycpCiAgICAgICAgcmV0dXJuIGtkYl91c2dm
X3RyY29uKCk7CgogICAga2RiX3RyY29uID0gMTsKICAgIGtkYnAoImtkYiB0cmFjaW5nIGlzIG5v
dyBvblxuIik7CiAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKfQoKc3RhdGljIGtkYl9jcHVf
Y21kX3QKa2RiX3VzZ2ZfdHJjb2ZmKHZvaWQpCnsKICAgIGtkYnAoInRyY29mZjogdHVybiB1c2Vy
IGFkZGVkIGtkYiB0cmFjaW5nIG9mZlxuIik7CiAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsK
fQpzdGF0aWMga2RiX2NwdV9jbWRfdAprZGJfY21kZl90cmNvZmYoaW50IGFyZ2MsIGNvbnN0IGNo
YXIgKiphcmd2LCBzdHJ1Y3QgY3B1X3VzZXJfcmVncyAqcmVncykKewogICAgaWYgKGFyZ2MgPiAx
ICYmICphcmd2WzFdID09ICc/JykKICAgICAgICByZXR1cm4ga2RiX3VzZ2ZfdHJjb2ZmKCk7Cgog
ICAga2RiX3RyY29uID0gMDsKICAgIGtkYnAoImtkYiB0cmFjaW5nIGlzIG5vdyBvZmZcbiIpOwog
ICAgcmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7Cn0KCnN0YXRpYyBrZGJfY3B1X2NtZF90CmtkYl91
c2dmX3RyY3oodm9pZCkKewogICAga2RicCgidHJjeiA6IHplcm8gZW50aXJlIHRyYWNlIGJ1ZmZl
clxuIik7CiAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKfQpzdGF0aWMga2RiX2NwdV9jbWRf
dAprZGJfY21kZl90cmN6KGludCBhcmdjLCBjb25zdCBjaGFyICoqYXJndiwgc3RydWN0IGNwdV91
c2VyX3JlZ3MgKnJlZ3MpCnsKICAgIGlmIChhcmdjID4gMSAmJiAqYXJndlsxXSA9PSAnPycpCiAg
ICAgICAgcmV0dXJuIGtkYl91c2dmX3RyY3ooKTsKCiAgICBrZGJfdHJjemVybygpOwogICAgcmV0
dXJuIEtEQl9DUFVfTUFJTl9LREI7Cn0KCnN0YXRpYyBrZGJfY3B1X2NtZF90CmtkYl91c2dmX3Ry
Y3Aodm9pZCkKewogICAga2RicCgidHJjcCA6IGdpdmUgaGludHMgdG8gZHVtcCB0cmFjZSBidWZm
ZXIgdmlhIGR3L2RkIGNvbW1hbmRcbiIpOwogICAgcmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7Cn0K
c3RhdGljIGtkYl9jcHVfY21kX3QKa2RiX2NtZGZfdHJjcChpbnQgYXJnYywgY29uc3QgY2hhciAq
KmFyZ3YsIHN0cnVjdCBjcHVfdXNlcl9yZWdzICpyZWdzKQp7CiAgICBpZiAoYXJnYyA+IDEgJiYg
KmFyZ3ZbMV0gPT0gJz8nKQogICAgICAgIHJldHVybiBrZGJfdXNnZl90cmNwKCk7CgogICAga2Ri
X3RyY3AoKTsKICAgIHJldHVybiBLREJfQ1BVX01BSU5fS0RCOwp9CgovKiBwcmludCBzb21lIGJh
c2ljIGluZm8sIGNvbnN0YW50cywgZXRjLi4gKi8Kc3RhdGljIGtkYl9jcHVfY21kX3QKa2RiX3Vz
Z2ZfaW5mbyh2b2lkKQp7CiAgICBrZGJwKCJpbmZvIDogZGlzcGxheSBiYXNpYyBpbmZvLCBjb25z
dGFudHMsIGV0Yy4uXG4iKTsKICAgIHJldHVybiBLREJfQ1BVX01BSU5fS0RCOwp9CnN0YXRpYyBr
ZGJfY3B1X2NtZF90CmtkYl9jbWRmX2luZm8oaW50IGFyZ2MsIGNvbnN0IGNoYXIgKiphcmd2LCBz
dHJ1Y3QgY3B1X3VzZXJfcmVncyAqcmVncykKewogICAgc3RydWN0IGRvbWFpbiAqZHA7CiAgICBz
dHJ1Y3QgY3B1aW5mb194ODYgKmJjZHA7CgogICAgaWYgKGFyZ2MgPiAxICYmICphcmd2WzFdID09
ICc/JykKICAgICAgICByZXR1cm4ga2RiX3VzZ2ZfaW5mbygpOwoKICAgIGtkYnAoIlZlcnNpb246
ICVkLiVkLiVzICglc0AlcykgJXNcbiIsIHhlbl9tYWpvcl92ZXJzaW9uKCksIAogICAgICAgICB4
ZW5fbWlub3JfdmVyc2lvbigpLCB4ZW5fZXh0cmFfdmVyc2lvbigpLCB4ZW5fY29tcGlsZV9ieSgp
LCAKICAgICAgICAgeGVuX2NvbXBpbGVfZG9tYWluKCksIHhlbl9jb21waWxlX2RhdGUoKSk7CiAg
ICBrZGJwKCJfX1hFTl9MQVRFU1RfSU5URVJGQUNFX1ZFUlNJT05fXyA6IDB4JXhcbiIsIAogICAg
ICAgICBfX1hFTl9MQVRFU1RfSU5URVJGQUNFX1ZFUlNJT05fXyk7CiAgICBrZGJwKCJfX1hFTl9J
TlRFUkZBQ0VfVkVSU0lPTl9fOiAweCV4XG4iLCBfX1hFTl9JTlRFUkZBQ0VfVkVSU0lPTl9fKTsK
CiAgICBiY2RwID0gJmJvb3RfY3B1X2RhdGE7CiAgICBrZGJwKCJDUFU6IChhbGwgZGVjaW1hbCki
KTsKICAgICAgICBpZiAoYmNkcC0+eDg2X3ZlbmRvciA9PSBYODZfVkVORE9SX0FNRCkKICAgICAg
ICAgICAga2RicCgiIEFNRCIpOwogICAgICAgIGVsc2UKICAgICAgICAgICAga2RicCgiIElOVEVM
Iik7CiAgICAgICAga2RicCgiIGZhbWlseTolZCBtb2RlbDolZFxuIiwgYmNkcC0+eDg2LCBiY2Rw
LT54ODZfbW9kZWwpOwogICAgICAgIGtkYnAoIiAgICAgdmVuZG9yX2lkOiUxNnMgbW9kZWxfaWQ6
JTY0c1xuIiwgYmNkcC0+eDg2X3ZlbmRvcl9pZCwKICAgICAgICAgICAgIGJjZHAtPng4Nl9tb2Rl
bF9pZCk7CiAgICAgICAga2RicCgiICAgICBjcHVpZGx2bDolZCBjYWNoZTpzejolZCBhbGlnbjol
ZFxuIiwgYmNkcC0+Y3B1aWRfbGV2ZWwsCiAgICAgICAgICAgICBiY2RwLT54ODZfY2FjaGVfc2l6
ZSwgYmNkcC0+eDg2X2NhY2hlX2FsaWdubWVudCk7CiAgICAgICAga2RicCgiICAgICBwb3dlcjol
ZCBjb3JlczogbWF4OiVkIGJvb3RlZDolZCBzaWJsaW5nczolZCBhcGljaWQ6JWRcbiIsCiAgICAg
ICAgICAgICBiY2RwLT54ODZfcG93ZXIsIGJjZHAtPng4Nl9tYXhfY29yZXMsIGJjZHAtPmJvb3Rl
ZF9jb3JlcywKICAgICAgICAgICAgIGJjZHAtPng4Nl9udW1fc2libGluZ3MsIGJjZHAtPmFwaWNp
ZCk7CiAgICAgICAga2RicCgiICAgICAiKTsKICAgICAgICBpZiAoY3B1X2hhc19hcGljKQogICAg
ICAgICAgICBrZGJwKCJfYXBpYyIpOwogICAgICAgIGlmIChjcHVfaGFzX3NlcCkKICAgICAgICAg
ICAga2RicCgifF9zZXAiKTsKICAgICAgICBpZiAoY3B1X2hhc194bW0zKQogICAgICAgICAgICBr
ZGJwKCJ8X3htbTMiKTsKICAgICAgICBpZiAoY3B1X2hhc19odCkKICAgICAgICAgICAga2RicCgi
fF9odCIpOwogICAgICAgIGlmIChjcHVfaGFzX254KQogICAgICAgICAgICBrZGJwKCJ8X254Iik7
CiAgICAgICAgaWYgKGNwdV9oYXNfY2xmbHVzaCkKICAgICAgICAgICAga2RicCgifF9jbGZsdXNo
Iik7CiAgICAgICAgaWYgKGNwdV9oYXNfcGFnZTFnYikKICAgICAgICAgICAga2RicCgifF9wYWdl
MWdiIik7CiAgICAgICAgaWYgKGNwdV9oYXNfZmZ4c3IpCiAgICAgICAgICAgIGtkYnAoInxfZmZ4
c3IiKTsKICAgICAgICBpZiAoY3B1X2hhc194MmFwaWMpCiAgICAgICAgICAgIGtkYnAoInxfeDJh
cGljIik7CiAgICBrZGJwKCJcblxuIik7CiAgICBrZGJwKCJDQzoiKTsKI2lmIGRlZmluZWQoQ09O
RklHX1g4Nl82NCkKICAgICAgICBrZGJwKCIgQ09ORklHX1g4Nl82NCIpOwojZW5kaWYKI2lmIGRl
ZmluZWQoQ09ORklHX0NPTVBBVCkKICAgICAgICBrZGJwKCIgQ09ORklHX0NPTVBBVCIpOwojZW5k
aWYKI2lmIGRlZmluZWQoQ09ORklHX1BBR0lOR19BU1NJU1RBTkNFKQogICAgICAgIGtkYnAoIiBD
T05GSUdfUEFHSU5HX0FTU0lTVEFOQ0UiKTsKI2VuZGlmCiAgICBrZGJwKCJcbiIpOwogICAga2Ri
cCgiY3B1IGhhcyBmb2xsb3dpbmcgZmVhdHVyZXM6XG4iKTsKICAgIGtkYnAoIiAgJXNcbiIsIGJv
b3RfY3B1X2hhcyhYODZfRkVBVFVSRV9UU0NfUkVMSUFCTEUpID8gCiAgICAgICAgICJYODZfRkVB
VFVSRV9UU0NfUkVMSUFCTEUiIDogIiIpOwogICAga2RicCgiICAlc1xuIiwgCiAgICAgICAgIGJv
b3RfY3B1X2hhcyhYODZfRkVBVFVSRV9DT05TVEFOVF9UU0MpPyAiWDg2X0ZFQVRVUkVfQ09OU1RB
TlRfVFNDIjoiIik7CiAgICBrZGJwKCIgICVzXG4iLCAKICAgICAgICAgYm9vdF9jcHVfaGFzKFg4
Nl9GRUFUVVJFX05PTlNUT1BfVFNDKSA/ICJYODZfRkVBVFVSRV9OT05TVE9QX1RTQyIgOiIiKTsK
ICAgIGtkYnAoIiAgJXNcbiIsIAogICAgICAgICBib290X2NwdV9oYXMoWDg2X0ZFQVRVUkVfUkRU
U0NQKSA/ICAiWDg2X0ZFQVRVUkVfUkRUU0NQIiA6ICIiKTsKICAgIGtkYnAoIiAgJXNcbiIsIGJv
b3RfY3B1X2hhcyhYODZfRkVBVFVSRV9GWFNSKSA/ICAiWDg2X0ZFQVRVUkVfRlhTUiIgOiAiIik7
CiAgICBrZGJwKCIgICVzXG4iLCBib290X2NwdV9oYXMoWDg2X0ZFQVRVUkVfQ1BVSURfRkFVTFRJ
TkcpID8gIAogICAgICAgICAiWDg2X0ZFQVRVUkVfQ1BVSURfRkFVTFRJTkciIDogIiIpOwogICAg
a2RicCgiICAlc1xuIiwgCiAgICAgICAgIGJvb3RfY3B1X2hhcyhYODZfRkVBVFVSRV9QQUdFMUdC
KSA/ICAiWDg2X0ZFQVRVUkVfUEFHRTFHQiIgOiAiIik7CiAgICBrZGJwKCIgICVzXG4iLCBib290
X2NwdV9oYXMoWDg2X0ZFQVRVUkVfTVdBSVQpID8gICJYODZfRkVBVFVSRV9NV0FJVCIgOiAiIik7
CiAgICBrZGJwKCIgICVzXG4iLCBib290X2NwdV9oYXMoWDg2X0ZFQVRVUkVfWDJBUElDKSA/ICAi
WDg2X0ZFQVRVUkVfWDJBUElDIjoiIik7CiAgICBrZGJwKCIgICVzXG4iLCBib290X2NwdV9oYXMo
WDg2X0ZFQVRVUkVfWFNBVkUpID8gICJYODZfRkVBVFVSRV9YU0FWRSI6IiIpOwogICAga2RicCgi
XG4iKTsKCiAgICBrZGJwKCJNQVhfVklSVF9DUFVTOiQlZCAgTUFYX0hWTV9WQ1BVUzokJWRcbiIs
IE1BWF9WSVJUX0NQVVMsTUFYX0hWTV9WQ1BVUyk7CiAgICBrZGJwKCJOUl9FVkVOVF9DSEFOTkVM
UzogJCVkXG4iLCBOUl9FVkVOVF9DSEFOTkVMUyk7CiAgICBrZGJwKCJOUl9FVlRDSE5fQlVDS0VU
UzogJCVkXG4iLCBOUl9FVlRDSE5fQlVDS0VUUyk7CgogICAga2RicCgiXG5Eb21haW5zIGFuZCB0
aGVpciB2Y3B1czpcbiIpOwogICAgZm9yX2VhY2hfZG9tYWluKGRwKSB7CiAgICAgICAgc3RydWN0
IHZjcHUgKnZwOwogICAgICAgIGludCBwcmludGVkPTA7CiAgICAgICAga2RicCgiICBEb21haW46
IHtpZDolZCAweCV4ICAgcHRyOiVwJXN9ICBWQ1BVczpcbiIsIAogICAgICAgICAgICAgZHAtPmRv
bWFpbl9pZCwgZHAtPmRvbWFpbl9pZCwgZHAsIGRwLT5pc19keWluZyA/ICIgRFlJTkciOiIiKTsK
ICAgICAgICBmb3IodnA9ZHAtPnZjcHVbMF07IHZwOyB2cCA9IHZwLT5uZXh0X2luX2xpc3QpIHsK
ICAgICAgICAgICAga2RicCgiICB7aWQ6JWQgcDolcCBydW5zdGF0ZTolZH0iLCB2cC0+dmNwdV9p
ZCwgdnAsIAogICAgICAgICAgICAgICAgIHZwLT5ydW5zdGF0ZS5zdGF0ZSk7CiAgICAgICAgICAg
IGlmICgrK3ByaW50ZWQgJSAyID09IDApIGtkYnAoIlxuIik7CiAgICAgICAgfQogICAgICAgIGtk
YnAoIlxuIik7CiAgICB9CiAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKfQoKc3RhdGljIGtk
Yl9jcHVfY21kX3QKa2RiX3VzZ2ZfY3VyKHZvaWQpCnsKICAgIGtkYnAoImN1ciA6IGRpc3BsYXkg
Y3VycmVudCBkb21pZCBhbmQgdmNwdVxuIik7CiAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsK
fQoKLyogQ2hlY2tpbmcgZm9yIGd1ZXN0X21vZGUoKSBub3QgZmVhc2libGUgaGVyZS4gaWYgZG9t
MC0+aGNhbGwtPmJwIGluIHhlbiwgCiAqIHRoZW4gZ19tKCkgd2lsbCBzaG93IHhlbiwgYnV0IHZj
cHUgaXMgc3RpbGwgZG9tMC4gaGVuY2UganVzdCBsb29rIGF0IAogKiBjdXJyZW50IG9ubHkgKi8K
c3RhdGljIGtkYl9jcHVfY21kX3QKa2RiX2NtZGZfY3VyKGludCBhcmdjLCBjb25zdCBjaGFyICoq
YXJndiwgc3RydWN0IGNwdV91c2VyX3JlZ3MgKnJlZ3MpCnsKICAgIGRvbWlkX3QgaWQgPSBjdXJy
ZW50LT5kb21haW4tPmRvbWFpbl9pZDsKCiAgICBpZiAoYXJnYyA+IDEgJiYgKmFyZ3ZbMV0gPT0g
Jz8nKQogICAgICAgIHJldHVybiBrZGJfdXNnZl9jdXIoKTsKCiAgICBrZGJwKCJkb21pZDogJWR7
JXB9ICVzIHZjcHU6JWQgeyVwfSAiLCBpZCwgY3VycmVudC0+ZG9tYWluLAogICAgICAgICAoaWQ9
PURPTUlEX0lETEUpID8gIihJRExFKSIgOiAiIiwgY3VycmVudC0+dmNwdV9pZCwgY3VycmVudCk7
CgogICAgLyogaWYgKGlkICE9IERPTUlEX0lETEUpIHsgKi8KICAgICAgICBpZiAoYm9vdF9jcHVf
ZGF0YS54ODZfdmVuZG9yID09IFg4Nl9WRU5ET1JfSU5URUwpIHsKICAgICAgICAgICAgdTY0IGFk
ZHIgPSAtMTsKICAgICAgICAgICAgX192bXB0cnN0KCZhZGRyKTsKICAgICAgICAgICAga2RicCgi
IFZNQ1M6IktEQkZMLCBhZGRyKTsKICAgICAgICB9CiAgICAvKiB9ICovCiAgICBrZGJwKCJcbiIp
OwogICAgcmV0dXJuIEtEQl9DUFVfTUFJTl9LREI7Cn0KCi8qIHN0dWIgdG8gcXVpY2tseSBhbmQg
ZWFzaWx5IGFkZCBhIG5ldyBjb21tYW5kICovCnN0YXRpYyBrZGJfY3B1X2NtZF90CmtkYl91c2dm
X3VzcjEodm9pZCkKewogICAga2RicCgidXNyMTogYWRkIGFueSBhcmJpdHJhcnkgY21kIHVzaW5n
IHRoaXMgaW4ga2RiX2NtZHMuY1xuIik7CiAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKfQpz
dGF0aWMga2RiX2NwdV9jbWRfdAprZGJfY21kZl91c3IxKGludCBhcmdjLCBjb25zdCBjaGFyICoq
YXJndiwgc3RydWN0IGNwdV91c2VyX3JlZ3MgKnJlZ3MpCnsKICAgIHJldHVybiBLREJfQ1BVX01B
SU5fS0RCOwp9CgpzdGF0aWMga2RiX2NwdV9jbWRfdAprZGJfdXNnZl9oKHZvaWQpCnsKICAgIGtk
YnAoImg6IGRpc3BsYXkgYWxsIGNvbW1hbmRzLiBTZWUga2RiL1JFQURNRSBmb3IgbW9yZSBpbmZv
XG4iKTsKICAgIHJldHVybiBLREJfQ1BVX01BSU5fS0RCOwp9CnN0YXRpYyBrZGJfY3B1X2NtZF90
CmtkYl9jbWRmX2goaW50IGFyZ2MsIGNvbnN0IGNoYXIgKiphcmd2LCBzdHJ1Y3QgY3B1X3VzZXJf
cmVncyAqcmVncykKewogICAga2RidGFiX3QgKnRicDsKCiAgICBrZGJwKCIgLSBjY3B1IGlzIGN1
cnJlbnQgY3B1IFxuIik7CiAgICBrZGJwKCIgLSBmb2xsb3dpbmcgYXJlIGFsd2F5cyBpbiBkZWNp
bWFsOlxuIik7CiAgICBrZGJwKCIgICAgIHZjcHUgbnVtLCBjcHUgbnVtLCBkb21pZFxuIik7CiAg
ICBrZGJwKCIgLSBvdGhlcndpc2UsIGFsbW9zdCBhbGwgbnVtYmVycyBhcmUgaW4gaGV4ICgweCBu
b3QgbmVlZGVkKVxuIik7CiAgICBrZGJwKCIgLSBvdXRwdXQ6ICQxNyBtZWFucyBkZWNpbWFsIDE3
XG4iKTsKICAgIGtkYnAoIiAtIGRvbWlkIDdmZmYoJDMyNzY3KSByZWZlcnMgdG8gaHlwZXJ2aXNv
clxuIik7CiAgICBrZGJwKCIgLSBpZiBubyBkb21pZCBiZWZvcmUgZnVuY3Rpb24gbmFtZSwgdGhl
biBpdCdzIGh5cGVydmlzb3JcbiIpOwogICAga2RicCgiIC0gZWFybHlrZGIgaW4geGVuIGdydWIg
bGluZSB0byBicmVhayBpbnRvIGtkYiBkdXJpbmcgYm9vdFxuIik7CiAgICBrZGJwKCIgLSBjb21t
YW5kID8gd2lsbCBzaG93IHRoZSBjb21tYW5kIHVzYWdlXG4iKTsKICAgIGtkYnAoIlxuIik7Cgog
ICAgZm9yKHRicD1rZGJfY21kX3RibDsgdGJwLT5rZGJfY21kX3VzZ2Y7IHRicCsrKQogICAgICAg
ICgqdGJwLT5rZGJfY21kX3VzZ2YpKCk7CiAgICByZXR1cm4gS0RCX0NQVV9NQUlOX0tEQjsKfQoK
LyogPT09PT09PT09PT09PT09PT09PT09IGNtZCB0YWJsZSBpbml0aWFsaXphdGlvbiA9PT09PT09
PT09PT09PT09PT09PT09PT09PSAqLwp2b2lkIF9faW5pdAprZGJfaW5pdF9jbWR0YWIodm9pZCkK
ewogIHN0YXRpYyBrZGJ0YWJfdCBfa2RiX2NtZF90YWJsZVtdID0gewoKICAgIHsiaW5mbyIsIGtk
Yl9jbWRmX2luZm8sIGtkYl91c2dmX2luZm8sIDEsIEtEQl9SRVBFQVRfTk9ORX0sCiAgICB7ImN1
ciIsICBrZGJfY21kZl9jdXIsIGtkYl91c2dmX2N1ciwgMSwgS0RCX1JFUEVBVF9OT05FfSwKCiAg
ICB7ImYiLCAga2RiX2NtZGZfZiwgIGtkYl91c2dmX2YsICAxLCBLREJfUkVQRUFUX05PTkV9LAog
ICAgeyJmZyIsIGtkYl9jbWRmX2ZnLCBrZGJfdXNnZl9mZywgMSwgS0RCX1JFUEVBVF9OT05FfSwK
CiAgICB7ImR3IiwgIGtkYl9jbWRmX2R3LCAga2RiX3VzZ2ZfZHcsICAxLCBLREJfUkVQRUFUX05P
X0FSR1N9LAogICAgeyJkZCIsICBrZGJfY21kZl9kZCwgIGtkYl91c2dmX2RkLCAgMSwgS0RCX1JF
UEVBVF9OT19BUkdTfSwKICAgIHsiZHdtIiwga2RiX2NtZGZfZHdtLCBrZGJfdXNnZl9kd20sIDEs
IEtEQl9SRVBFQVRfTk9fQVJHU30sCiAgICB7ImRkbSIsIGtkYl9jbWRmX2RkbSwga2RiX3VzZ2Zf
ZGRtLCAxLCBLREJfUkVQRUFUX05PX0FSR1N9LAogICAgeyJkciIsICBrZGJfY21kZl9kciwgIGtk
Yl91c2dmX2RyLCAgMSwgS0RCX1JFUEVBVF9OT05FfSwKICAgIHsiZHJnIiwga2RiX2NtZGZfZHJn
LCBrZGJfdXNnZl9kcmcsIDEsIEtEQl9SRVBFQVRfTk9ORX0sCgogICAgeyJkaXMiLCBrZGJfY21k
Zl9kaXMsICBrZGJfdXNnZl9kaXMsICAxLCBLREJfUkVQRUFUX05PX0FSR1N9LAogICAgeyJkaXNt
IixrZGJfY21kZl9kaXNtLCBrZGJfdXNnZl9kaXNtLCAxLCBLREJfUkVQRUFUX05PX0FSR1N9LAoK
ICAgIHsibXciLCBrZGJfY21kZl9tdywga2RiX3VzZ2ZfbXcsIDEsIEtEQl9SRVBFQVRfTk9ORX0s
CiAgICB7Im1kIiwga2RiX2NtZGZfbWQsIGtkYl91c2dmX21kLCAxLCBLREJfUkVQRUFUX05PTkV9
LAogICAgeyJtciIsIGtkYl9jbWRmX21yLCBrZGJfdXNnZl9tciwgMSwgS0RCX1JFUEVBVF9OT05F
fSwKCiAgICB7ImJjIiwga2RiX2NtZGZfYmMsIGtkYl91c2dmX2JjLCAwLCBLREJfUkVQRUFUX05P
TkV9LAogICAgeyJicCIsIGtkYl9jbWRmX2JwLCBrZGJfdXNnZl9icCwgMSwgS0RCX1JFUEVBVF9O
T05FfSwKICAgIHsiYnRwIiwga2RiX2NtZGZfYnRwLCBrZGJfdXNnZl9idHAsIDEsIEtEQl9SRVBF
QVRfTk9ORX0sCgogICAgeyJ3cCIsIGtkYl9jbWRmX3dwLCBrZGJfdXNnZl93cCwgMSwgS0RCX1JF
UEVBVF9OT05FfSwKICAgIHsid2MiLCBrZGJfY21kZl93Yywga2RiX3VzZ2Zfd2MsIDAsIEtEQl9S
RVBFQVRfTk9ORX0sCgogICAgeyJuaSIsIGtkYl9jbWRmX25pLCBrZGJfdXNnZl9uaSwgMCwgS0RC
X1JFUEVBVF9OT19BUkdTfSwKICAgIHsic3MiLCBrZGJfY21kZl9zcywga2RiX3VzZ2Zfc3MsIDEs
IEtEQl9SRVBFQVRfTk9fQVJHU30sCiAgICB7InNzYiIsa2RiX2NtZGZfc3NiLGtkYl91c2dmX3Nz
YiwwLCBLREJfUkVQRUFUX05PX0FSR1N9LAogICAgeyJnbyIsIGtkYl9jbWRmX2dvLCBrZGJfdXNn
Zl9nbywgMCwgS0RCX1JFUEVBVF9OT05FfSwKCiAgICB7ImNwdSIsa2RiX2NtZGZfY3B1LCBrZGJf
dXNnZl9jcHUsIDEsIEtEQl9SRVBFQVRfTk9ORX0sCiAgICB7Im5taSIsa2RiX2NtZGZfbm1pLCBr
ZGJfdXNnZl9ubWksIDEsIEtEQl9SRVBFQVRfTk9ORX0sCiAgICB7InBlcmNwdSIsa2RiX2NtZGZf
cGVyY3B1LCBrZGJfdXNnZl9wZXJjcHUsIDEsIEtEQl9SRVBFQVRfTk9ORX0sCgogICAgeyJzeW0i
LCAga2RiX2NtZGZfc3ltLCAgIGtkYl91c2dmX3N5bSwgICAxLCBLREJfUkVQRUFUX05PTkV9LAog
ICAgeyJtb2QiLCAga2RiX2NtZGZfbW9kLCAgIGtkYl91c2dmX21vZCwgICAxLCBLREJfUkVQRUFU
X05PTkV9LAoKICAgIHsidmNwdWgiLGtkYl9jbWRmX3ZjcHVoLCBrZGJfdXNnZl92Y3B1aCwgMSwg
S0RCX1JFUEVBVF9OT05FfSwKICAgIHsidmNwdSIsIGtkYl9jbWRmX3ZjcHUsICBrZGJfdXNnZl92
Y3B1LCAgMSwgS0RCX1JFUEVBVF9OT05FfSwKICAgIHsiZG9tIiwgIGtkYl9jbWRmX2RvbSwgICBr
ZGJfdXNnZl9kb20sICAgMSwgS0RCX1JFUEVBVF9OT05FfSwKCiAgICB7InNjaGVkIiwga2RiX2Nt
ZGZfc2NoZWQsIGtkYl91c2dmX3NjaGVkLCAxLCBLREJfUkVQRUFUX05PTkV9LAogICAgeyJtbXUi
LCAgIGtkYl9jbWRmX21tdSwgICBrZGJfdXNnZl9tbXUsICAgMSwgS0RCX1JFUEVBVF9OT05FfSwK
ICAgIHsicDJtIiwgICBrZGJfY21kZl9wMm0sICAga2RiX3VzZ2ZfcDJtLCAgIDEsIEtEQl9SRVBF
QVRfTk9ORX0sCiAgICB7Im0ycCIsICAga2RiX2NtZGZfbTJwLCAgIGtkYl91c2dmX20ycCwgICAx
LCBLREJfUkVQRUFUX05PTkV9LAogICAgeyJkcGFnZSIsIGtkYl9jbWRmX2RwYWdlLCBrZGJfdXNn
Zl9kcGFnZSwgMSwgS0RCX1JFUEVBVF9OT05FfSwKICAgIHsiZG1zciIsICBrZGJfY21kZl9kbXNy
LCAga2RiX3VzZ2ZfZG1zciwgMSwgS0RCX1JFUEVBVF9OT05FfSwKICAgIHsiY3B1aWQiLCAga2Ri
X2NtZGZfY3B1aWQsICBrZGJfdXNnZl9jcHVpZCwgMSwgS0RCX1JFUEVBVF9OT05FfSwKICAgIHsi
d2VwdCIsICBrZGJfY21kZl93ZXB0LCAga2RiX3VzZ2Zfd2VwdCwgMSwgS0RCX1JFUEVBVF9OT05F
fSwKCiAgICB7ImR0cnEiLCBrZGJfY21kZl9kdHJxLCAga2RiX3VzZ2ZfZHRycSwgMSwgS0RCX1JF
UEVBVF9OT05FfSwKICAgIHsiZGlkdCIsIGtkYl9jbWRmX2RpZHQsICBrZGJfdXNnZl9kaWR0LCAx
LCBLREJfUkVQRUFUX05PTkV9LAogICAgeyJkZ2R0Iiwga2RiX2NtZGZfZGdkdCwgIGtkYl91c2dm
X2RnZHQsIDEsIEtEQl9SRVBFQVRfTk9ORX0sCiAgICB7ImRpcnEiLCBrZGJfY21kZl9kaXJxLCAg
a2RiX3VzZ2ZfZGlycSwgMSwgS0RCX1JFUEVBVF9OT05FfSwKICAgIHsiZHZpdCIsIGtkYl9jbWRm
X2R2aXQsICBrZGJfdXNnZl9kdml0LCAxLCBLREJfUkVQRUFUX05PTkV9LAogICAgeyJkdm1jIiwg
a2RiX2NtZGZfZHZtYywgIGtkYl91c2dmX2R2bWMsIDEsIEtEQl9SRVBFQVRfTk9ORX0sCiAgICB7
Im1taW8iLCBrZGJfY21kZl9tbWlvLCAga2RiX3VzZ2ZfbW1pbywgMSwgS0RCX1JFUEVBVF9OT05F
fSwKCiAgICAvKiB0cmFjaW5nIHJlbGF0ZWQgY29tbWFuZHMgKi8KICAgIHsidHJjb24iLCBrZGJf
Y21kZl90cmNvbiwgIGtkYl91c2dmX3RyY29uLCAgMCwgS0RCX1JFUEVBVF9OT05FfSwKICAgIHsi
dHJjb2ZmIixrZGJfY21kZl90cmNvZmYsIGtkYl91c2dmX3RyY29mZiwgMCwgS0RCX1JFUEVBVF9O
T05FfSwKICAgIHsidHJjeiIsICBrZGJfY21kZl90cmN6LCAgIGtkYl91c2dmX3RyY3osICAgMCwg
S0RCX1JFUEVBVF9OT05FfSwKICAgIHsidHJjcCIsICBrZGJfY21kZl90cmNwLCAgIGtkYl91c2dm
X3RyY3AsICAgMSwgS0RCX1JFUEVBVF9OT05FfSwKCiAgICB7InVzcjEiLCAga2RiX2NtZGZfdXNy
MSwgICBrZGJfdXNnZl91c3IxLCAgIDEsIEtEQl9SRVBFQVRfTk9ORX0sCiAgICB7ImtkYmYiLCAg
a2RiX2NtZGZfa2RiZiwgICBrZGJfdXNnZl9rZGJmLCAgIDEsIEtEQl9SRVBFQVRfTk9ORX0sCiAg
ICB7ImtkYmRiZyIsa2RiX2NtZGZfa2RiZGJnLCBrZGJfdXNnZl9rZGJkYmcsIDEsIEtEQl9SRVBF
QVRfTk9ORX0sCiAgICB7InJlYm9vdCIsa2RiX2NtZGZfcmVib290LCBrZGJfdXNnZl9yZWJvb3Qs
IDEsIEtEQl9SRVBFQVRfTk9ORX0sCiAgICB7ImgiLCAgICAga2RiX2NtZGZfaCwgICAgICBrZGJf
dXNnZl9oLCAgICAgIDEsIEtEQl9SRVBFQVRfTk9ORX0sCgogICAgeyIiLCBOVUxMLCBOVUxMLCAw
LCAwfSwKICB9OwogICAga2RiX2NtZF90YmwgPSBfa2RiX2NtZF90YWJsZTsKICAgIHJldHVybjsK
fQoAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAGtkYi9ndWVzdC8AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAwMDAwNzc1ADAwMDI3NTYAMDAwMjc1NgAwMDAwMDAwMDAwMAAxMjAxNzcyNDYyNAAw
MTI3MDQAIDUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAdXN0YXIg
IABtcmF0aG9yAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAG1yYXRob3IAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAa2RiL2d1ZXN0L01ha2VmaWxlAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAADAwMDA2NjQAMDAwMjc1NgAwMDAyNzU2ADAwMDAwMDAwMDQxADExNzY1NDY1NTU2ADAx
NDM1NgAgMAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAB1c3RhciAg
AG1yYXRob3IAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAbXJhdGhvcgAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAKb2JqLXkgICAgICAgICAgIDo9IGtkYl9ndWVzdC5vCgoAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAGtkYi9ndWVzdC9rZGJfZ3Vlc3QuYwAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAwMDAwNjY0ADAwMDI3NTYAMDAwMjc1NgAwMDAwMDAyNTAwNgAxMTc2NTQ2NTU1NgAwMTUw
NDEAIDAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAdXN0YXIgIABt
cmF0aG9yAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAG1yYXRob3IAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAALyoKICogQ29weXJpZ2h0IChDKSAyMDA5LCBNdWtlc2ggUmF0aG9y
LCBPcmFjbGUgQ29ycC4gIEFsbCByaWdodHMgcmVzZXJ2ZWQuCiAqCiAqIFRoaXMgcHJvZ3JhbSBp
cyBmcmVlIHNvZnR3YXJlOyB5b3UgY2FuIHJlZGlzdHJpYnV0ZSBpdCBhbmQvb3IKICogbW9kaWZ5
IGl0IHVuZGVyIHRoZSB0ZXJtcyBvZiB0aGUgR05VIEdlbmVyYWwgUHVibGljCiAqIExpY2Vuc2Ug
djIgYXMgcHVibGlzaGVkIGJ5IHRoZSBGcmVlIFNvZnR3YXJlIEZvdW5kYXRpb24uCiAqCiAqIFRo
aXMgcHJvZ3JhbSBpcyBkaXN0cmlidXRlZCBpbiB0aGUgaG9wZSB0aGF0IGl0IHdpbGwgYmUgdXNl
ZnVsLAogKiBidXQgV0lUSE9VVCBBTlkgV0FSUkFOVFk7IHdpdGhvdXQgZXZlbiB0aGUgaW1wbGll
ZCB3YXJyYW50eSBvZgogKiBNRVJDSEFOVEFCSUxJVFkgb3IgRklUTkVTUyBGT1IgQSBQQVJUSUNV
TEFSIFBVUlBPU0UuICBTZWUgdGhlIEdOVQogKiBHZW5lcmFsIFB1YmxpYyBMaWNlbnNlIGZvciBt
b3JlIGRldGFpbHMuCiAqCiAqIFlvdSBzaG91bGQgaGF2ZSByZWNlaXZlZCBhIGNvcHkgb2YgdGhl
IEdOVSBHZW5lcmFsIFB1YmxpYwogKiBMaWNlbnNlIGFsb25nIHdpdGggdGhpcyBwcm9ncmFtOyBp
ZiBub3QsIHdyaXRlIHRvIHRoZQogKiBGcmVlIFNvZnR3YXJlIEZvdW5kYXRpb24sIEluYy4sIDU5
IFRlbXBsZSBQbGFjZSAtIFN1aXRlIDMzMCwKICogQm9zdG9uLCBNQSAwMjExMTAtMTMwNywgVVNB
LgogKi8KCiNpbmNsdWRlICIuLi9pbmNsdWRlL2tkYmluYy5oIgoKLyogaW5mb3JtYXRpb24gZm9y
IHN5bWJvbHMgZm9yIGEgZ3Vlc3QgKGluY2x1ZGVpbmcgZG9tIDAgKSBpcyBzYXZlZCBoZXJlICov
CnN0cnVjdCBnc3Rfc3ltaW5mbyB7ICAgICAgICAgICAvKiBndWVzdCBzeW1ib2xzIGluZm8gKi8K
ICAgIGludCAgIGRvbWlkOyAgICAgICAgICAgICAgIC8qIHdoaWNoIGRvbWFpbiAqLwogICAgaW50
ICAgYml0bmVzczsgICAgICAgICAgICAgLyogMzIgb3IgNjQgKi8KICAgIHZvaWQgKmFkZHJ0Ymxw
OyAgICAgICAgICAgIC8qIHB0ciB0byAoMzIvNjQpYWRkcmVzc2VzIHRibCAqLwogICAgdTggICAq
dG9rdGJsOyAgICAgICAgICAgICAgLyogcHRyIHRvIGthbGxzeW1zX3Rva2VuX3RhYmxlICovCiAg
ICB1MTYgICp0b2tpZHh0Ymw7ICAgICAgICAgICAvKiBwdHIgdG8ga2FsbHN5bXNfdG9rZW5faW5k
ZXggKi8KICAgIHU4ICAgKmthbGxzeW1zX25hbWVzOyAgICAgIC8qIHB0ciB0byBrYWxsc3ltc19u
YW1lcyAqLwogICAgbG9uZyAga2FsbHN5bXNfbnVtX3N5bXM7ICAgLyogcHRyIHRvIGthbGxzeW1z
X251bV9zeW1zICovCiAgICBrZGJ2YV90ICBzdGV4dDsgICAgICAgICAgICAvKiB2YWx1ZSBvZiBf
c3RleHQgaW4gZ3Vlc3QgKi8KICAgIGtkYnZhX3QgIGV0ZXh0OyAgICAgICAgICAgIC8qIHZhbHVl
IG9mIF9ldGV4dCBpbiBndWVzdCAqLwogICAga2RidmFfdCAgc2luaXR0ZXh0OyAgICAgICAgLyog
dmFsdWUgb2YgX3Npbml0dGV4dCBpbiBndWVzdCAqLwogICAga2RidmFfdCAgZWluaXR0ZXh0OyAg
ICAgICAgLyogdmFsdWUgb2YgX2Vpbml0dGV4dCBpbiBndWVzdCAqLwp9OwoKI2RlZmluZSBNQVhf
Q0FDSEUgMTYgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAvKiBjYWNoZSB1cHRvIDE2IGd1
ZXN0cyAqLwpzdHJ1Y3QgZ3N0X3N5bWluZm8gZ3N0X3N5bWluZm9hW01BWF9DQUNIRV07ICAgICAg
IC8qIGd1ZXN0IHN5bWJvbCBpbmZvIGFycmF5ICovCgpzdGF0aWMgc3RydWN0IGdzdF9zeW1pbmZv
ICoKa2RiX2dldF9zeW1pbmZvX3Nsb3Qodm9pZCkKewogICAgaW50IGk7CiAgICBmb3IgKGk9MDsg
aSA8IE1BWF9DQUNIRTsgaSsrKQogICAgICAgIGlmIChnc3Rfc3ltaW5mb2FbaV0uYWRkcnRibHAg
PT0gTlVMTCkKICAgICAgICAgICAgcmV0dXJuICgmZ3N0X3N5bWluZm9hW2ldKTsgICAgICAKCiAg
ICByZXR1cm4gTlVMTDsKfQoKc3RhdGljIHN0cnVjdCBnc3Rfc3ltaW5mbyAqCmtkYl9kb21pZDJz
eW1pbmZvcChkb21pZF90IGRvbWlkKQp7CiAgICBpbnQgaTsKICAgIGZvciAoaT0wOyBpIDwgTUFY
X0NBQ0hFOyBpKyspCiAgICAgICAgaWYgKGdzdF9zeW1pbmZvYVtpXS5kb21pZCA9PSBkb21pZCkK
ICAgICAgICAgICAgcmV0dXJuICgmZ3N0X3N5bWluZm9hW2ldKTsgICAgICAKCiAgICByZXR1cm4g
TlVMTDsKfQoKLyogY2hlY2sgaWYgYW4gYWRkcmVzcyBsb29rcyBsaWtlIHRleHQgYWRkcmVzcyBp
biBndWVzdCAqLwppbnQKa2RiX2lzX2FkZHJfZ3Vlc3RfdGV4dChrZGJ2YV90IGFkZHIsIGludCBk
b21pZCkKewogICAgc3RydWN0IGdzdF9zeW1pbmZvICpncCA9IGtkYl9kb21pZDJzeW1pbmZvcChk
b21pZCk7CgogICAgaWYgKCFncCB8fCAhZ3AtPnN0ZXh0IHx8ICFncC0+ZXRleHQpCiAgICAgICAg
cmV0dXJuIDA7CiAgICBLREJHUDEoImd1ZXN0YWRkcjogYWRkcjolbHggZG9taWQ6JWRcbiIsIGFk
ZHIsIGRvbWlkKTsKCiAgICByZXR1cm4gKCAoYWRkciA+PSBncC0+c3RleHQgJiYgYWRkciA8PSBn
cC0+ZXRleHQpIHx8CiAgICAgICAgICAgICAoYWRkciA+PSBncC0+c2luaXR0ZXh0ICYmIGFkZHIg
PD0gZ3AtPmVpbml0dGV4dCkgKTsKfQoKLyoKICogcmV0dXJuczogdmFsdWUgb2Yga2FsbHN5bXNf
YWRkcmVzc2VzW2lkeF07CiAqLwpzdGF0aWMga2RidmFfdAprZGJfcmRfZ3Vlc3RfYWRkcnRibChz
dHJ1Y3QgZ3N0X3N5bWluZm8gKmdwLCBpbnQgaWR4KQp7CiAgICBrZGJ2YV90IGFkZHIsIHJldGFk
ZHI9MDsKICAgIGludCBudW0gPSBncC0+Yml0bmVzcy84OyAgICAgICAvKiB3aGV0aGVyIDQgYnl0
ZSBvciA4IGJ5dGUgcHRycyAqLwogICAgZG9taWRfdCBpZCA9IGdwLT5kb21pZDsKCiAgICBhZGRy
ID0gKGtkYnZhX3QpKCgoY2hhciAqKWdwLT5hZGRydGJscCkgKyBpZHggKiBudW0pOwogICAgS0RC
R1AxKCJyZGd1ZXN0YWRkcnRibDphZGRyOiVseCBpZHg6JWRcbiIsIGFkZHIsIGlkeCk7CgogICAg
aWYgKGtkYl9yZWFkX21lbShhZGRyLCAoa2RiYnl0X3QgKikmcmV0YWRkcixudW0saWQpICE9IG51
bSkgewogICAgICAgIGtkYnAoIkNhbid0IHJlYWQgYWRkcnRibCBkb21pZDolZCBhdDolbHhcbiIs
IGlkLCBhZGRyKTsKICAgICAgICByZXR1cm4gMDsKICAgIH0KICAgIEtEQkdQMSgicmRndWVzdGFk
ZHJ0Ymw6ZXhpdDpyZXRhZGRyOiVseFxuIiwgcmV0YWRkcik7CiAgICByZXR1cm4gcmV0YWRkcjsK
fQoKLyogQmFzZWQgb24gZWw1IGthbGxzeW1zLmMgZmlsZS4gKi8Kc3RhdGljIHVuc2lnbmVkIGlu
dCAKa2RiX2V4cGFuZF9lbDVfc3ltKHN0cnVjdCBnc3Rfc3ltaW5mbyAqZ3AsIHVuc2lnbmVkIGlu
dCBvZmYsIGNoYXIgKnJlc3VsdCkKeyAgIAogICAgaW50IGxlbiwgc2tpcHBlZF9maXJzdCA9IDA7
CiAgICB1OCB1OGlkeCwgKnRwdHIsICpkYXRhcDsKICAgIGRvbWlkX3QgZG9taWQgPSBncC0+ZG9t
aWQ7CgogICAgKnJlc3VsdCA9ICdcMCc7CgogICAgLyogZ2V0IHRoZSBjb21wcmVzc2VkIHN5bWJv
bCBsZW5ndGggZnJvbSB0aGUgZmlyc3Qgc3ltYm9sIGJ5dGUgKi8KICAgIGRhdGFwID0gZ3AtPmth
bGxzeW1zX25hbWVzICsgb2ZmOwogICAgbGVuID0gMDsKICAgIGlmICgoa2RiX3JlYWRfbWVtKChr
ZGJ2YV90KWRhdGFwLCAoa2RiYnl0X3QgKikmbGVuLCAxLCBkb21pZCkpICE9IDEpIHsKICAgICAg
ICBLREJHUCgiZmFpbGVkIHRvIHJlYWQgZ3Vlc3QgbWVtb3J5XG4iKTsKICAgICAgICByZXR1cm4g
MDsKICAgIH0KICAgIGRhdGFwKys7CgogICAgLyogdXBkYXRlIHRoZSBvZmZzZXQgdG8gcmV0dXJu
IHRoZSBvZmZzZXQgZm9yIHRoZSBuZXh0IHN5bWJvbCBvbgogICAgICogdGhlIGNvbXByZXNzZWQg
c3RyZWFtICovCiAgICBvZmYgKz0gbGVuICsgMTsKCiAgICAvKiBmb3IgZXZlcnkgYnl0ZSBvbiB0
aGUgY29tcHJlc3NlZCBzeW1ib2wgZGF0YSwgY29weSB0aGUgdGFibGUKICAgICAqIGVudHJ5IGZv
ciB0aGF0IGJ5dGUgKi8KICAgIHdoaWxlKGxlbikgewogICAgICAgIHUxNiB1MTZpZHgsICp1MTZw
OwogICAgICAgIGlmIChrZGJfcmVhZF9tZW0oKGtkYnZhX3QpZGF0YXAsKGtkYmJ5dF90ICopJnU4
aWR4LDEsZG9taWQpIT0xKXsKICAgICAgICAgICAga2RicCgibWVtb3J5ICh1OGlkeCkgcmVhZCBl
cnJvcjolcFxuIixncC0+dG9raWR4dGJsKTsKICAgICAgICAgICAgcmV0dXJuIDA7CiAgICAgICAg
fQogICAgICAgIHUxNnAgPSB1OGlkeCArIGdwLT50b2tpZHh0Ymw7CiAgICAgICAgaWYgKGtkYl9y
ZWFkX21lbSgoa2RidmFfdCl1MTZwLChrZGJieXRfdCAqKSZ1MTZpZHgsMixkb21pZCkhPTIpewog
ICAgICAgICAgICBrZGJwKCJ0b2tpZHh0YmwgcmVhZCBlcnJvcjolcFxuIiwgdTE2cCk7CiAgICAg
ICAgICAgIHJldHVybiAwOwogICAgICAgIH0KICAgICAgICB0cHRyID0gZ3AtPnRva3RibCArIHUx
NmlkeDsKICAgICAgICBkYXRhcCsrOwogICAgICAgIGxlbi0tOwoKICAgICAgICB3aGlsZSAoKGtk
Yl9yZWFkX21lbSgoa2RidmFfdCl0cHRyLCAoa2RiYnl0X3QgKikmdThpZHgsIDEsIGRvbWlkKT09
MSkgJiYKICAgICAgICAgICAgICAgdThpZHgpIHsKCiAgICAgICAgICAgIGlmKHNraXBwZWRfZmly
c3QpIHsKICAgICAgICAgICAgICAgICpyZXN1bHQgPSB1OGlkeDsKICAgICAgICAgICAgICAgIHJl
c3VsdCsrOwogICAgICAgICAgICB9IGVsc2UKICAgICAgICAgICAgICAgIHNraXBwZWRfZmlyc3Qg
PSAxOwogICAgICAgICAgICB0cHRyKys7CiAgICAgICAgfQogICAgfQogICAgKnJlc3VsdCA9ICdc
MCc7CiAgICByZXR1cm4gb2ZmOyAgICAgICAgICAvKiByZXR1cm4gdG8gb2Zmc2V0IHRvIHRoZSBu
ZXh0IHN5bWJvbCAqLwp9CgojZGVmaW5lIEVMNF9OTUxFTiAxMjcKLyogc28gbXVjaCBwYWluLCBz
byBub3Qgc3VyZSBvZiBpdCdzIHdvcnRoIC4uIDopLi4gKi8Kc3RhdGljIGtkYnZhX3QKa2RiX2V4
cGFuZF9lbDRfc3ltKHN0cnVjdCBnc3Rfc3ltaW5mbyAqZ3AsIGludCBsb3csIGNoYXIgKnJlc3Vs
dCwgY2hhciAqc3ltcCkKeyAgIAogICAgaW50IGksIGo7CiAgICB1OCAqbm1wID0gZ3AtPmthbGxz
eW1zX25hbWVzOyAgICAgICAvKiBndWVzdCBhZGRyZXNzIHNwYWNlICovCiAgICBrZGJieXRfdCBi
eXRlLCBwcmVmaXg7CiAgICBkb21pZF90IGlkID0gZ3AtPmRvbWlkOwogICAga2RidmFfdCBhZGRy
OwoKICAgIEtEQkdQMSgiRWVsNHN5bTpubXA6JXAgbWF4aWR4OiQlZCBzeW06JXNcbiIsIG5tcCwg
bG93LCBzeW1wKTsKICAgIGZvciAoaT0wOyBpIDw9IGxvdzsgaSsrKSB7CiAgICAgICAgLyogdW5z
aWduZWQgcHJlZml4ID0gKm5hbWUrKzsgKi8KICAgICAgICBpZiAoa2RiX3JlYWRfbWVtKChrZGJ2
YV90KW5tcCwgJnByZWZpeCwgMSwgaWQpICE9IDEpIHsKICAgICAgICAgICAga2RicCgiZmFpbGVk
IHRvIHJlYWQ6JXAgZG9taWQ6JXhcbiIsIG5tcCwgaWQpOwogICAgICAgICAgICByZXR1cm4gMDsK
ICAgICAgICB9CiAgICAgICAgS0RCR1AyKCJlbDQ6aTolZCBwcmVmaXg6JXhcbiIsIGksIHByZWZp
eCk7CiAgICAgICAgbm1wKys7CiAgICAgICAgLyogc3RybmNweShuYW1lYnVmICsgcHJlZml4LCBu
YW1lLCBLU1lNX05BTUVfTEVOIC0gcHJlZml4KTsgKi8KICAgICAgICBhZGRyID0gKGxvbmcpcmVz
dWx0ICsgcHJlZml4OwogICAgICAgIGZvciAoaj0wOyBqIDwgRUw0X05NTEVOLXByZWZpeDsgaisr
KSB7CiAgICAgICAgICAgIGlmIChrZGJfcmVhZF9tZW0oKGtkYnZhX3Qpbm1wLCAmYnl0ZSwgMSwg
aWQpICE9IDEpIHsKICAgICAgICAgICAgICAgIGtkYnAoImZhaWxlZCByZWFkOiVwIGRvbWlkOiV4
XG4iLCBubXAsIGlkKTsKICAgICAgICAgICAgICAgIHJldHVybiAwOwogICAgICAgICAgICB9CiAg
ICAgICAgICAgIEtEQkdQMigiZWw0Omo6JWQgYnl0ZToleFxuIiwgaiwgYnl0ZSk7CiAgICAgICAg
ICAgICooa2RiYnl0X3QgKilhZGRyID0gYnl0ZTsKICAgICAgICAgICAgYWRkcisrOyBubXArKzsK
ICAgICAgICAgICAgaWYgKGJ5dGUgPT0gJ1wwJykKICAgICAgICAgICAgICAgIGJyZWFrOwogICAg
ICAgIH0KICAgICAgICBLREJHUDIoImVsNHN5bTppOiVkIHJlczolc1xuIiwgaSwgcmVzdWx0KTsK
ICAgICAgICBpZiAoc3ltcCAmJiBzdHJjbXAocmVzdWx0LCBzeW1wKSA9PSAwKQogICAgICAgICAg
ICByZXR1cm4oa2RiX3JkX2d1ZXN0X2FkZHJ0YmwoZ3AsIGkpKTsKCiAgICAgICAgLyoga2FsbHN5
bXMuYzogbmFtZSArPSBzdHJsZW4obmFtZSkgKyAxOyAqLwogICAgICAgIGlmIChqID09IEVMNF9O
TUxFTi1wcmVmaXggJiYgYnl0ZSAhPSAnXDAnKQogICAgICAgICAgICB3aGlsZSAoa2RiX3JlYWRf
bWVtKChrZGJ2YV90KW5tcCwgJmJ5dGUsIDEsIGlkKSAmJiBieXRlICE9ICdcMCcpCiAgICAgICAg
ICAgICAgICBubXArKzsKICAgIH0KICAgIEtEQkdQMSgiWGVsNHN5bTogbmEtZ2EtZGFcbiIpOwog
ICAgcmV0dXJuIDA7Cn0KCnN0YXRpYyB1bnNpZ25lZCBpbnQKa2RiX2dldF9lbDVfc3ltb2Zmc2V0
KHN0cnVjdCBnc3Rfc3ltaW5mbyAqZ3AsIGxvbmcgcG9zKQp7CiAgICBpbnQgaTsKICAgIHU4IGRh
dGEsICpuYW1lcDsKICAgIGRvbWlkX3QgZG9taWQgPSBncC0+ZG9taWQ7CgogICAgbmFtZXAgPSBn
cC0+a2FsbHN5bXNfbmFtZXM7CiAgICBmb3IgKGk9MDsgaSA8IHBvczsgaSsrKSB7CiAgICAgICAg
aWYgKGtkYl9yZWFkX21lbSgoa2RidmFfdCluYW1lcCwgJmRhdGEsIDEsIGRvbWlkKSAhPSAxKSB7
CiAgICAgICAgICAgIGtkYnAoIkNhbid0IHJlYWQgaWQ6JCVkIG1lbTolcFxuIiwgZG9taWQsIG5h
bWVwKTsKICAgICAgICAgICAgcmV0dXJuIDA7CiAgICAgICAgfQogICAgICAgIG5hbWVwID0gbmFt
ZXAgKyBkYXRhICsgMTsKICAgIH0KICAgIHJldHVybiBuYW1lcCAtIGdwLT5rYWxsc3ltc19uYW1l
czsKfQoKLyoKICogZm9yIGEgZ2l2ZW4gZ3Vlc3QgZG9taWQgKGRvbWlkID49IDAgJiYgPCBLREJf
SFlQRE9NSUQpLCBjb252ZXJ0IGFkZHIgdG8KICogc3ltYm9sLiBvZmZzZXQgaXMgc2V0IHRvICBh
ZGRyIC0gc3ltYm9sc3RhcnQKICovCmNoYXIgKgprZGJfZ3Vlc3RfYWRkcjJzeW0odW5zaWduZWQg
bG9uZyBhZGRyLCBkb21pZF90IGRvbWlkLCB1bG9uZyAqb2Zmc3ApCnsKICAgIHN0YXRpYyBjaGFy
IG5hbWVidWZbS1NZTV9OQU1FX0xFTisxXTsKICAgIHVuc2lnbmVkIGxvbmcgbG93LCBoaWdoLCBt
aWQ7CiAgICBzdHJ1Y3QgZ3N0X3N5bWluZm8gKmdwID0ga2RiX2RvbWlkMnN5bWluZm9wKGRvbWlk
KTsKCiAgICAqb2Zmc3AgPSAwOwogICAgaWYoIWdwIHx8IGdwLT5rYWxsc3ltc19udW1fc3ltcyA9
PSAwKQogICAgICAgIHJldHVybiAiID8/PyAiOwoKICAgIG5hbWVidWZbMF0gPSBuYW1lYnVmW0tT
WU1fTkFNRV9MRU5dID0gJ1wwJzsKICAgIGlmICgxKSB7CiAgICAgICAgLyogZG8gYSBiaW5hcnkg
c2VhcmNoIG9uIHRoZSBzb3J0ZWQga2FsbHN5bXNfYWRkcmVzc2VzIGFycmF5ICovCiAgICAgICAg
bG93ID0gMDsKICAgICAgICBoaWdoID0gZ3AtPmthbGxzeW1zX251bV9zeW1zOwoKICAgICAgICB3
aGlsZSAoaGlnaC1sb3cgPiAxKSB7CiAgICAgICAgICAgIG1pZCA9IChsb3cgKyBoaWdoKSAvIDI7
CiAgICAgICAgICAgIGlmIChrZGJfcmRfZ3Vlc3RfYWRkcnRibChncCwgbWlkKSA8PSBhZGRyKSAK
ICAgICAgICAgICAgICAgIGxvdyA9IG1pZDsKICAgICAgICAgICAgZWxzZSAKICAgICAgICAgICAg
ICAgIGhpZ2ggPSBtaWQ7CiAgICAgICAgfQogICAgICAgIC8qIEdyYWIgbmFtZSAqLwogICAgICAg
IGlmIChncC0+dG9rdGJsKSB7CiAgICAgICAgICAgIGludCBzeW1vZmYgPSBrZGJfZ2V0X2VsNV9z
eW1vZmZzZXQoZ3AsbG93KTsKICAgICAgICAgICAga2RiX2V4cGFuZF9lbDVfc3ltKGdwLCBzeW1v
ZmYsIG5hbWVidWYpOwogICAgICAgIH0gZWxzZQogICAgICAgICAgICBrZGJfZXhwYW5kX2VsNF9z
eW0oZ3AsIGxvdywgbmFtZWJ1ZiwgTlVMTCk7CiAgICAgICAgKm9mZnNwID0gYWRkciAtIGtkYl9y
ZF9ndWVzdF9hZGRydGJsKGdwLCBsb3cpOwogICAgICAgIHJldHVybiBuYW1lYnVmOwogICAgfQog
ICAgcmV0dXJuICIgPz8/PyAiOwp9CgoKLyogCiAqIHNhdmUgZ3Vlc3QgKGRvbTAgYW5kIG90aGVy
cykgc3ltYm9scyBpbmZvIDogZG9taWQgYW5kIGZvbGxvd2luZyBhZGRyZXNzZXM6CiAqICAgICAm
a2FsbHN5bXNfbmFtZXMgJmthbGxzeW1zX2FkZHJlc3NlcyAma2FsbHN5bXNfbnVtX3N5bXMgXAog
KiAgICAgJmthbGxzeW1zX3Rva2VuX3RhYmxlICZrYWxsc3ltc190b2tlbl9pbmRleAogKi8Kdm9p
ZAprZGJfc2F2X2RvbV9zeW1pbmZvKGRvbWlkX3QgZG9taWQsIGxvbmcgbmFtZXNwLCBsb25nIGFk
ZHJhcCwgbG9uZyBudW1wLAogICAgICAgICAgICAgICAgICAgIGxvbmcgdG9rdGJscCwgbG9uZyB0
b2tpZHhwKQp7CiAgICBpbnQgYnl0ZXM7CiAgICBsb25nIHZhbCA9IDA7ICAgIC8qIG11c3QgYmUg
c2V0IHRvIHplcm8gZm9yIDMyIG9uIDY0IGNhc2VzICovCiAgICBzdHJ1Y3QgZ3N0X3N5bWluZm8g
KmdwID0ga2RiX2dldF9zeW1pbmZvX3Nsb3QoKTsKCiAgICBpZiAoZ3AgPT0gTlVMTCkgewogICAg
ICAgIGtkYnAoImtkYjprZGJfc2F2X2RvbV9zeW1pbmZvKCk6VGFibGUgZnVsbC4uIHN5bWJvbHMg
bm90IHNhdmVkXG4iKTsKICAgICAgICByZXR1cm47CiAgICB9CiAgICBtZW1zZXQoZ3AsIDAsIHNp
emVvZigqZ3ApKTsKCiAgICBncC0+ZG9taWQgPSBkb21pZDsKICAgIGdwLT5iaXRuZXNzID0ga2Ri
X2d1ZXN0X2JpdG5lc3MoZG9taWQpOwogICAgZ3AtPmFkZHJ0YmxwID0gKHZvaWQgKilhZGRyYXA7
CiAgICBncC0+a2FsbHN5bXNfbmFtZXMgPSAodTggKiluYW1lc3A7CiAgICBncC0+dG9rdGJsID0g
KHU4ICopdG9rdGJscDsKICAgIGdwLT50b2tpZHh0YmwgPSAodTE2ICopdG9raWR4cDsKCiAgICBL
REJHUCgiZG9taWQ6JXggYml0bmVzczokJWQgbnVtc3ltczokJWxkIGFycmF5cDolcFxuIiwgZG9t
aWQsCiAgICAgICAgICBncC0+Yml0bmVzcywgZ3AtPmthbGxzeW1zX251bV9zeW1zLCBncC0+YWRk
cnRibHApOwoKICAgIGJ5dGVzID0gZ3AtPmJpdG5lc3MvODsKICAgIGlmIChrZGJfcmVhZF9tZW0o
bnVtcCwgKGtkYmJ5dF90ICopJnZhbCwgYnl0ZXMsIGRvbWlkKSAhPSBieXRlcykgewoKICAgICAg
ICBrZGJwKCJVbmFibGUgdG8gcmVhZCBudW1iZXIgb2Ygc3ltYm9scyBmcm9tOiVseFxuIiwgbnVt
cCk7CiAgICAgICAgbWVtc2V0KGdwLCAwLCBzaXplb2YoKmdwKSk7CiAgICAgICAgcmV0dXJuOwog
ICAgfSBlbHNlCiAgICAgICAga2RicCgiTnVtYmVyIG9mIHN5bWJvbHM6JCVsZFxuIiwgdmFsKTsK
CiAgICBncC0+a2FsbHN5bXNfbnVtX3N5bXMgPSB2YWw7CgogICAgYnl0ZXMgPSAoZ3AtPmJpdG5l
c3MvOCkgKiBncC0+a2FsbHN5bXNfbnVtX3N5bXM7CiAgICBncC0+c3RleHQgPSBrZGJfZ3Vlc3Rf
c3ltMmFkZHIoIl9zdGV4dCIsIGRvbWlkKTsKICAgIGdwLT5ldGV4dCA9IGtkYl9ndWVzdF9zeW0y
YWRkcigiX2V0ZXh0IiwgZG9taWQpOwogICAgaWYgKCFncC0+c3RleHQgfHwgIWdwLT5ldGV4dCkK
ICAgICAgICBrZGJwKCJXYXJuOiBDYW4ndCBmaW5kIHN0ZXh0L2V0ZXh0XG4iKTsKCiAgICBpZiAo
Z3AtPnRva3RibCAmJiBncC0+dG9raWR4dGJsKSB7CiAgICAgICAgZ3AtPnNpbml0dGV4dCA9IGtk
Yl9ndWVzdF9zeW0yYWRkcigiX3Npbml0dGV4dCIsIGRvbWlkKTsKICAgICAgICBncC0+ZWluaXR0
ZXh0ID0ga2RiX2d1ZXN0X3N5bTJhZGRyKCJfZWluaXR0ZXh0IiwgZG9taWQpOwogICAgICAgIGlm
ICghZ3AtPnNpbml0dGV4dCB8fCAhZ3AtPmVpbml0dGV4dCkgewogICAgICAgICAgICBrZGJwKCJX
YXJuOiBDYW4ndCBmaW5kIHNpbml0dGV4dC9laW5pdHRleHRcbiIpOwogICAgfSAKICAgIH0KICAg
IEtEQkdQMSgic3R4dDolbHggZXR4dDolbHggc2l0eHQ6JWx4IGVpdHh0OiVseFxuIiwgZ3AtPnN0
ZXh0LCBncC0+ZXRleHQsCiAgICAgICAgICAgZ3AtPnNpbml0dGV4dCwgZ3AtPmVpbml0dGV4dCk7
CiAgICBrZGJwKCJTdWNjZXNmdWxseSBzYXZlZCBzeW1ib2wgaW5mb1xuIik7Cn0KCi8qCiAqIGdp
dmVuIGEgc3ltYm9sIHN0cmluZyBmb3IgYSBndWVzdC9kb21pZCwgcmV0dXJuIGl0cyBhZGRyZXNz
CiAqLwprZGJ2YV90CmtkYl9ndWVzdF9zeW0yYWRkcihjaGFyICpzeW1wLCBkb21pZF90IGRvbWlk
KQp7CiAgICBjaGFyIG5hbWVidWZbS1NZTV9OQU1FX0xFTisxXTsKICAgIGludCBpLCBvZmY9MDsK
ICAgIHN0cnVjdCBnc3Rfc3ltaW5mbyAqZ3AgPSBrZGJfZG9taWQyc3ltaW5mb3AoZG9taWQpOwoK
ICAgIEtEQkdQKCJzeW0yYTogc3ltOiVzIGRvbWlkOiV4IG51bXN5bXM6JWxkXG4iLCBzeW1wLCBk
b21pZCwKICAgICAgICAgIGdwID8gZ3AtPmthbGxzeW1zX251bV9zeW1zOiAtMSk7CgogICAgaWYg
KCFncCkKICAgICAgICByZXR1cm4gMDsKCiAgICBpZiAoZ3AtPnRva3RibCA9PSAwIHx8IGdwLT50
b2tpZHh0YmwgPT0gMCkKICAgICAgICByZXR1cm4oa2RiX2V4cGFuZF9lbDRfc3ltKGdwLCBncC0+
a2FsbHN5bXNfbnVtX3N5bXMsIG5hbWVidWYsIHN5bXApKTsKCiAgICBmb3IgKGk9MDsgaSA8IGdw
LT5rYWxsc3ltc19udW1fc3ltczsgaSsrKSB7CiAgICAgICAgb2ZmID0ga2RiX2V4cGFuZF9lbDVf
c3ltKGdwLCBvZmYsIG5hbWVidWYpOwogICAgICAgIEtEQkdQMSgiaTolZCBuYW1lYnVmOiVzXG4i
LCBpLCBuYW1lYnVmKTsKICAgICAgICBpZiAoc3RyY21wKG5hbWVidWYsIHN5bXApID09IDApIHsK
ICAgICAgICAgICAgcmV0dXJuKGtkYl9yZF9ndWVzdF9hZGRydGJsKGdwLCBpKSk7CiAgICAgICAg
fQogICAgfQogICAgS0RCR1AoInN5bTJhOmV4aXQ6bmEtZ2EtZGFcbiIpOwogICAgcmV0dXJuIDA7
Cn0KAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABrZGIv
TWFrZWZpbGUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAMDAwMDY2NAAwMDAyNzU2ADAw
MDI3NTYAMDAwMDAwMDAxMDIAMTE3NjU0NjU1NTYAMDEzMjI1ACAwAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHVzdGFyICAAbXJhdGhvcgAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAABtcmF0aG9yAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAApvYmot
eQkJKz0ga2RibWFpbi5vIGtkYl9jbWRzLm8ga2RiX2lvLm8gCgpzdWJkaXIteSArPSB4ODYgZ3Vl
c3QKCgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel



From xen-devel-bounces@lists.xen.org Thu Aug 30 20:54:11 2012
Date: Thu, 30 Aug 2012 22:53:44 +0200
From: Sander Eikelenboom <linux@eikelenboom.it>
To: xen-devel@lists.xen.org
Message-ID: <1304893920.20120830225344@eikelenboom.it>
Subject: [Xen-devel]
	/usr/src/new/xen-unstable.hg/tools/qemu-xen-dir/fpu/softfloat-specialize.h:92:
	error: initializer element is not constant

Hi All,

I was trying to compile xen-unstable (last changeset: 25786) and ran into a compile error in qemu-xen.

After a fresh clone, running ./configure and make world gives:


  CC    hmp.o
  CC    libdis/i386-dis.o
  GEN   config-target.h
  CC    i386-softmmu/arch_init.o
  CC    i386-softmmu/cpus.o
  GEN   i386-softmmu/hmp-commands.h
  GEN   i386-softmmu/qmp-commands-old.h
  CC    i386-softmmu/monitor.o
  CC    i386-softmmu/machine.o
  CC    i386-softmmu/gdbstub.o
  CC    i386-softmmu/balloon.o
  CC    i386-softmmu/ioport.o
  CC    i386-softmmu/virtio.o
  CC    i386-softmmu/virtio-blk.o
  CC    i386-softmmu/virtio-balloon.o
  CC    i386-softmmu/virtio-net.o
  CC    i386-softmmu/virtio-serial-bus.o
  CC    i386-softmmu/vhost_net.o
  CC    i386-softmmu/9pfs/virtio-9p-device.o
  CC    i386-softmmu/kvm-stub.o
  CC    i386-softmmu/memory.o
  CC    i386-softmmu/xen-all.o
  CC    i386-softmmu/xen_machine_pv.o
  CC    i386-softmmu/xen_domainbuild.o
  CC    i386-softmmu/xen-mapcache.o
  CC    i386-softmmu/exec.o
  CC    i386-softmmu/translate-all.o
  CC    i386-softmmu/cpu-exec.o
  CC    i386-softmmu/translate.o
  CC    i386-softmmu/tcg/tcg.o
  CC    i386-softmmu/tcg/optimize.o
  CC    i386-softmmu/fpu/softfloat.o
In file included from /usr/src/new/xen-unstable.hg/tools/qemu-xen-dir/fpu/softfloat.c:60:
/usr/src/new/xen-unstable.hg/tools/qemu-xen-dir/fpu/softfloat-specialize.h:92: error: initializer element is not constant
/usr/src/new/xen-unstable.hg/tools/qemu-xen-dir/fpu/softfloat-specialize.h:107: error: initializer element is not constant
make[5]: *** [fpu/softfloat.o] Error 1
make[4]: *** [subdir-i386-softmmu] Error 2
make[4]: Leaving directory `/usr/src/new/xen-unstable.hg/tools/qemu-xen-dir-remote'
make[3]: *** [subdir-all-qemu-xen-dir] Error 2
make[3]: Leaving directory `/usr/src/new/xen-unstable.hg/tools'
make[2]: *** [subdirs-install] Error 2
make[2]: Leaving directory `/usr/src/new/xen-unstable.hg/tools'
make[1]: *** [install-tools] Error 2
make[1]: Leaving directory `/usr/src/new/xen-unstable.hg'
make: *** [world] Error 2



From xen-devel-bounces@lists.xen.org Thu Aug 30 21:17:49 2012
Date: Thu, 30 Aug 2012 23:17:16 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Sander Eikelenboom <linux@eikelenboom.it>
Cc: xen-devel@lists.xen.org
Message-ID: <20120830211716.GA24154@aepfle.de>
In-Reply-To: <1304893920.20120830225344@eikelenboom.it>
Subject: Re: [Xen-devel]
 /usr/src/new/xen-unstable.hg/tools/qemu-xen-dir/fpu/softfloat-specialize.h:92:
 error: initializer element is not constant

On Thu, Aug 30, Sander Eikelenboom wrote:

> In file included from /usr/src/new/xen-unstable.hg/tools/qemu-xen-dir/fpu/softfloat.c:60:
> /usr/src/new/xen-unstable.hg/tools/qemu-xen-dir/fpu/softfloat-specialize.h:92: error: initializer element is not constant
> /usr/src/new/xen-unstable.hg/tools/qemu-xen-dir/fpu/softfloat-specialize.h:107: error: initializer element is not constant

I have seen that too, months ago, but can't remember the solution. Did
you export CFLAGS before the build, by any chance?

Olaf


From xen-devel-bounces@lists.xen.org Thu Aug 30 21:33:09 2012
Date: Thu, 30 Aug 2012 14:32:34 -0700
From: Matt Wilson <msw@amazon.com>
To: Mukesh Rathor <mukesh.rathor@oracle.com>
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>
Message-ID: <20120830213230.GA32155@u002268147cd4502c336d.ant.amazon.com>
In-Reply-To: <20120830112323.5086d73c@mantra.us.oracle.com>
Subject: Re: [Xen-devel] [RFC PATCH 1/2]: hypervisor debugger

On Thu, Aug 30, 2012 at 11:23:23AM -0700, Mukesh Rathor wrote:
> 
> Changes to xen code for the debugger.
> 
> 
[...]  
> -SUBDIRS = xsm arch/$(TARGET_ARCH) common drivers
> +SUBDIRS = xsm arch/$(TARGET_ARCH) common drivers kdb

If the name is going to change, I think it'd be good to go ahead and
rename the subdirectory, rename XEN_KDB_CONFIG, etc.

Also, it'd be nice to use "diff -p" to show the function names as part
of the diff. There are some big hunks below that I think should be
evaluated later, after the main entry points are reviewed.

[...]
> --- a/xen/arch/x86/hvm/svm/entry.S	Thu Jun 07 19:46:57 2012 +0100
> +++ b/xen/arch/x86/hvm/svm/entry.S	Wed Aug 29 14:39:57 2012 -0700
> @@ -59,12 +59,23 @@
>          get_current(bx)
>          CLGI
>  
> +#ifdef XEN_KDB_CONFIG
> +#if defined(__x86_64__)
> +        testl $1, kdb_session_begun(%rip)
> +#else
> +        testl $1, kdb_session_begun
> +#endif
> +        jnz  .Lkdb_skip_softirq
> +#endif

Not sure if something like:

        cmpb  $0,kdb_session_begun(%rip)
UNLIKELY_START(ne, kdb_session_exists)
        jmp .Lkdb_skip_softirq
UNLIKELY_END(ne, kdb_session_exists)

is worth it in this case, since it's just a jnz.

>          mov  VCPU_processor(r(bx)),%eax
>          shl  $IRQSTAT_shift,r(ax)
>          lea  addr_of(irq_stat),r(dx)
>          testl $~0,(r(dx),r(ax),1)
>          jnz  .Lsvm_process_softirqs
>
> +#ifdef XEN_KDB_CONFIG
> +.Lkdb_skip_softirq:
> +#endif
>          testb $0, VCPU_nsvm_hap_enabled(r(bx))
>  UNLIKELY_START(nz, nsvm_hap)
>          mov  VCPU_nhvm_p2m(r(bx)),r(ax)
> diff -r 32034d1914a6 xen/arch/x86/hvm/svm/svm.c
> --- a/xen/arch/x86/hvm/svm/svm.c	Thu Jun 07 19:46:57 2012 +0100
> +++ b/xen/arch/x86/hvm/svm/svm.c	Wed Aug 29 14:39:57 2012 -0700
> @@ -2170,6 +2170,10 @@
>          break;
>  
>      case VMEXIT_EXCEPTION_DB:
> +#ifdef XEN_KDB_CONFIG
> +        if (kdb_handle_trap_entry(TRAP_debug, regs))
> +	    break;

Correct the indentation here.

> +#endif
>          if ( !v->domain->debugger_attached )
>              goto exit_and_crash;
>          domain_pause_for_debugger();
> @@ -2182,6 +2186,10 @@
>          if ( (inst_len = __get_instruction_length(v, INSTR_INT3)) == 0 )
>              break;
>          __update_guest_eip(regs, inst_len);
> +#ifdef XEN_KDB_CONFIG
> +        if (kdb_handle_trap_entry(TRAP_int3, regs))
> +            break;
> +#endif
>          current->arch.gdbsx_vcpu_event = TRAP_int3;
>          domain_pause_for_debugger();
>          break;
> diff -r 32034d1914a6 xen/arch/x86/hvm/svm/vmcb.c
> --- a/xen/arch/x86/hvm/svm/vmcb.c	Thu Jun 07 19:46:57 2012 +0100
> +++ b/xen/arch/x86/hvm/svm/vmcb.c	Wed Aug 29 14:39:57 2012 -0700
> @@ -315,6 +315,36 @@
>      register_keyhandler('v', &vmcb_dump_keyhandler);
>  }
>  
> +#if defined(XEN_KDB_CONFIG)
> +/* did == 0 : display for all HVM domains. domid 0 is never HVM.
> + *  * vid == -1 : display for all HVM VCPUs
> + *   */
> +void kdb_dump_vmcb(domid_t did, int vid)
> +{
> +    struct domain *dp;
> +    struct vcpu *vp;
> +
> +    rcu_read_lock(&domlist_read_lock);
> +    for_each_domain (dp) {
> +        if (!is_hvm_or_hyb_domain(dp) || dp->is_dying)
> +            continue;
> +        if (did != 0 && did != dp->domain_id)
> +            continue;
> +
> +        for_each_vcpu (dp, vp) {
> +            if (vid != -1 && vid != vp->vcpu_id)
> +                continue;
> +
> +            kdbp("  VMCB [domid: %d  vcpu:%d]:\n", dp->domain_id, vp->vcpu_id);
> +            svm_vmcb_dump("kdb", vp->arch.hvm_svm.vmcb);
> +            kdbp("\n");
> +        }
> +        kdbp("\n");
> +    }
> +    rcu_read_unlock(&domlist_read_lock);
> +}
> +#endif
> +
>  /*
>   * Local variables:
>   * mode: C

I think that Keir was most interested in the hairy bits of code that
wire {x,h}db into Xen. I'd save this chunk for evaluation later.

> diff -r 32034d1914a6 xen/arch/x86/hvm/vmx/entry.S
> --- a/xen/arch/x86/hvm/vmx/entry.S	Thu Jun 07 19:46:57 2012 +0100
> +++ b/xen/arch/x86/hvm/vmx/entry.S	Wed Aug 29 14:39:57 2012 -0700
> @@ -124,12 +124,23 @@
>          get_current(bx)
>          cli
>  
> +#ifdef XEN_KDB_CONFIG
> +#if defined(__x86_64__)
> +        testl $1, kdb_session_begun(%rip)
> +#else
> +        testl $1, kdb_session_begun
> +#endif
> +        jnz  .Lkdb_skip_softirq
> +#endif
>          mov  VCPU_processor(r(bx)),%eax
>          shl  $IRQSTAT_shift,r(ax)
>          lea  addr_of(irq_stat),r(dx)
>          cmpl $0,(r(dx),r(ax),1)
>          jnz  .Lvmx_process_softirqs
>  
> +#ifdef XEN_KDB_CONFIG
> +.Lkdb_skip_softirq:
> +#endif
>          testb $0xff,VCPU_vmx_emulate(r(bx))
>          jnz .Lvmx_goto_emulator
>          testb $0xff,VCPU_vmx_realmode(r(bx))
> diff -r 32034d1914a6 xen/arch/x86/hvm/vmx/vmcs.c
> --- a/xen/arch/x86/hvm/vmx/vmcs.c	Thu Jun 07 19:46:57 2012 +0100
> +++ b/xen/arch/x86/hvm/vmx/vmcs.c	Wed Aug 29 14:39:57 2012 -0700
> @@ -1117,6 +1117,13 @@
>          hvm_asid_flush_vcpu(v);
>      }
>  
> +#if defined(XEN_KDB_CONFIG)
> +    if (kdb_dr7)
> +        __vmwrite(GUEST_DR7, kdb_dr7);
> +    else
> +        __vmwrite(GUEST_DR7, 0);
> +#endif

This should just be "__vmwrite(GUEST_DR7, kdb_dr7);", since when
"if (kdb_dr7)" evaluates to false the value written is 0 either way.

>      debug_state = v->domain->debugger_attached
>                    || v->domain->arch.hvm_domain.params[HVM_PARAM_MEMORY_EVENT_INT3]
>                    || v->domain->arch.hvm_domain.params[HVM_PARAM_MEMORY_EVENT_SINGLE_STEP];
> @@ -1326,6 +1333,220 @@
>      register_keyhandler('v', &vmcs_dump_keyhandler);
>  }
>  
> +#if defined(XEN_KDB_CONFIG)
> +#define GUEST_EFER      0x2806   /* see page 23-20 */
> +#define GUEST_EFER_HIGH 0x2807   /* see page 23-20 */
> +

page 23-20 of what? Also, I'd save this chunk for later.

> +/* it's a shame we can't use vmcs_dump_vcpu(), but it does vmx_vmcs_enter which
> + * will IPI other CPUs. also, print a subset relevant to software debugging */

Well, that could be refactored.

> +static void noinline kdb_print_vmcs(struct vcpu *vp)
> +{
[...] mostly duplicated dump function snipped
> +}
> +
> +/* Flush VMCS on this cpu if it needs to: 
> + *   - Upon leaving kdb, the HVM cpu will resume in vmx_vmexit_handler() and 
> + *     do __vmreads. So, the VMCS pointer can't be left cleared.
> + *   - Doing __vmpclear will set the vmx state to 'clear', so to resume a
> + *     vmlaunch must be done and not vmresume. This means, we must clear 
> + *     arch_vmx->launched.
> + */
> +void kdb_curr_cpu_flush_vmcs(void)
> +{
> +    struct domain *dp;
> +    struct vcpu *vp;
> +    int ccpu = smp_processor_id();
> +    struct vmcs_struct *cvp = this_cpu(current_vmcs);
> +
> +    if (this_cpu(current_vmcs) == NULL)
> +        return;             /* no HVM active on this CPU */
> +
> +    kdbp("KDB:[%d] curvmcs:%lx/%lx\n", ccpu, cvp, virt_to_maddr(cvp));
> +
> +    /* looks like we got one. unfortunately, current_vmcs points to vmcs 
> +     * and not VCPU, so we gotta search the entire list... */
> +    for_each_domain (dp) {
> +        if ( !(is_hvm_or_hyb_domain(dp)) || dp->is_dying)
> +            continue;
> +        for_each_vcpu (dp, vp) {
> +            if ( vp->arch.hvm_vmx.vmcs == cvp ) {
> +                __vmpclear(virt_to_maddr(vp->arch.hvm_vmx.vmcs));
> +                __vmptrld(virt_to_maddr(vp->arch.hvm_vmx.vmcs));
> +                vp->arch.hvm_vmx.launched = 0;
> +                this_cpu(current_vmcs) = NULL;
> +                kdbp("KDB:[%d] %d:%d current_vmcs:%lx flushed\n", 
> +		     ccpu, dp->domain_id, vp->vcpu_id, cvp, virt_to_maddr(cvp));
> +            }
> +        }
> +    }
> +}
> +
> +/*
> + * domid == 0 : display for all HVM domains  (dom0 is never an HVM domain)
> + * vcpu id == -1 : display all vcpuids
> + * PreCondition: all HVM cpus (including current cpu) have flushed VMCS
> + */
> +void kdb_dump_vmcs(domid_t did, int vid)
> +{
> +    struct domain *dp;
> +    struct vcpu *vp;
> +    struct vmcs_struct  *vmcsp;
> +    u64 addr = -1;
> +
> +    ASSERT(!local_irq_is_enabled());     /* kdb should always run disabled */
> +    __vmptrst(&addr);
> +
> +    for_each_domain (dp) {
> +        if ( !(is_hvm_or_hyb_domain(dp)) || dp->is_dying)
> +            continue;
> +        if (did != 0 && did != dp->domain_id)
> +            continue;
> +
> +        for_each_vcpu (dp, vp) {
> +            if (vid != -1 && vid != vp->vcpu_id)
> +                continue;
> +
> +	    vmcsp = vp->arch.hvm_vmx.vmcs;
> +            kdbp("VMCS %lx/%lx [domid:%d (%p)  vcpu:%d (%p)]:\n", vmcsp,
> +	         virt_to_maddr(vmcsp), dp->domain_id, dp, vp->vcpu_id, vp);
> +            __vmptrld(virt_to_maddr(vmcsp));
> +            kdb_print_vmcs(vp);
> +            __vmpclear(virt_to_maddr(vmcsp));
> +            vp->arch.hvm_vmx.launched = 0;
> +        }
> +        kdbp("\n");
> +    }
> +    /* restore orig vmcs pointer for __vmreads in vmx_vmexit_handler() */
> +    if (addr && addr != (u64)-1)
> +        __vmptrld(addr);
> +}
> +#endif
>  
>  /*
>   * Local variables:
> diff -r 32034d1914a6 xen/arch/x86/hvm/vmx/vmx.c
> --- a/xen/arch/x86/hvm/vmx/vmx.c	Thu Jun 07 19:46:57 2012 +0100
> +++ b/xen/arch/x86/hvm/vmx/vmx.c	Wed Aug 29 14:39:57 2012 -0700
> @@ -2183,11 +2183,14 @@
>          printk("reason not known yet!");
>          break;
>      }
> -
> +#if defined(XEN_KDB_CONFIG)
> +    kdbp("\n************* VMCS Area **************\n");
> +    kdb_dump_vmcs(curr->domain->domain_id, (curr)->vcpu_id);
> +#else
>      printk("************* VMCS Area **************\n");
>      vmcs_dump_vcpu(curr);
>      printk("**************************************\n");
> -
> +#endif
>      domain_crash(curr->domain);
>  }
>  
> @@ -2415,6 +2418,12 @@
>              write_debugreg(6, exit_qualification | 0xffff0ff0);
>              if ( !v->domain->debugger_attached || cpu_has_monitor_trap_flag )
>                  goto exit_and_crash;
> +
> +#if defined(XEN_KDB_CONFIG)
> +            /* TRAP_debug: IP points correctly to next instr */
> +            if (kdb_handle_trap_entry(vector, regs))
> +                break;
> +#endif
>              domain_pause_for_debugger();
>              break;
>          case TRAP_int3: 
> @@ -2423,6 +2432,13 @@
>              if ( v->domain->debugger_attached )
>              {
>                  update_guest_eip(); /* Safe: INT3 */            
> +#if defined(XEN_KDB_CONFIG)
> +                /* vmcs.IP points to bp, kdb expects bp+1. Hence after the above
> +                 * update_guest_eip which updates to bp+1. works for gdbsx too 
> +                 */
> +                if (kdb_handle_trap_entry(vector, regs))
> +                    break;
> +#endif
>                  current->arch.gdbsx_vcpu_event = TRAP_int3;
>                  domain_pause_for_debugger();
>                  break;
> @@ -2707,6 +2723,10 @@
>      case EXIT_REASON_MONITOR_TRAP_FLAG:
>          v->arch.hvm_vmx.exec_control &= ~CPU_BASED_MONITOR_TRAP_FLAG;
>          vmx_update_cpu_exec_control(v);
> +#if defined(XEN_KDB_CONFIG)
> +        if (kdb_handle_trap_entry(TRAP_debug, regs))
> +            break;
> +#endif
>          if ( v->arch.hvm_vcpu.single_step ) {
>            hvm_memory_event_single_step(regs->eip);
>            if ( v->domain->debugger_attached )
> diff -r 32034d1914a6 xen/arch/x86/irq.c
> --- a/xen/arch/x86/irq.c	Thu Jun 07 19:46:57 2012 +0100
> +++ b/xen/arch/x86/irq.c	Wed Aug 29 14:39:57 2012 -0700
> @@ -2305,3 +2305,29 @@
>      return is_hvm_domain(d) && pirq &&
>             pirq->arch.hvm.emuirq != IRQ_UNBOUND; 
>  }
> +
> +#ifdef XEN_KDB_CONFIG
> +void kdb_prnt_guest_mapped_irqs(void)
> +{
> +    int irq, j;
> +    char affstr[NR_CPUS/4+NR_CPUS/32+2];    /* courtesy dump_irqs() */
> +
> +    kdbp("irq  vec  aff  type  domid:mapped-pirq pairs  (all in decimal)\n");
> +    for (irq=0; irq < nr_irqs; irq++) {
> +        irq_desc_t  *dp = irq_to_desc(irq);
> +        struct arch_irq_desc *archp = &dp->arch;
> +        irq_guest_action_t *actp = (irq_guest_action_t *)dp->action;
> +
> +        if (!dp->handler ||dp->handler==&no_irq_type || !(dp->status&IRQ_GUEST))
> +            continue;
> +
> +        cpumask_scnprintf(affstr, sizeof(affstr), dp->affinity);
> +        kdbp("[%3ld] %3d %3s %-13s ", irq, archp->vector, affstr,
> +             dp->handler->typename);
> +        for (j=0; j < actp->nr_guests; j++)
> +            kdbp("%03d:%04d ", actp->guest[j]->domain_id,
> +                 domain_irq_to_pirq(actp->guest[j], irq));
> +        kdbp("\n");
> +    }
> +}
> +#endif
> diff -r 32034d1914a6 xen/arch/x86/setup.c
> --- a/xen/arch/x86/setup.c	Thu Jun 07 19:46:57 2012 +0100
> +++ b/xen/arch/x86/setup.c	Wed Aug 29 14:39:57 2012 -0700
> @@ -47,6 +47,13 @@
>  #include <xen/cpu.h>
>  #include <asm/nmi.h>
>  
> +#ifdef XEN_KDB_CONFIG
> +#include <asm/debugger.h>
> +
> +int opt_earlykdb=0;
> +boolean_param("earlykdb", opt_earlykdb);
> +#endif
> +
>  /* opt_nosmp: If true, secondary processors are ignored. */
>  static bool_t __initdata opt_nosmp;
>  boolean_param("nosmp", opt_nosmp);
> @@ -1242,6 +1249,11 @@
>  
>      trap_init();
>  
> +#ifdef XEN_KDB_CONFIG
> +    kdb_init();
> +    if (opt_earlykdb)
> +        kdb_trap_immed(KDB_TRAP_NONFATAL);

I think you need something here that makes sure the NMI watchdog is
disabled when kdb is enabled. It'd also be nice to have an option to
keep using the NMI watchdog in a {x,h}db-enabled Xen build. You'd
just not have the option of using NMI to trigger the debugger, and
you'd need to disable the watchdog during a debugging session - right?

> +#endif
>      rcu_init();
>      
>      early_time_init();
> diff -r 32034d1914a6 xen/arch/x86/smp.c
> --- a/xen/arch/x86/smp.c	Thu Jun 07 19:46:57 2012 +0100
> +++ b/xen/arch/x86/smp.c	Wed Aug 29 14:39:57 2012 -0700
> @@ -273,7 +273,7 @@
>   * Structure and data for smp_call_function()/on_selected_cpus().
>   */
>  
> -static void __smp_call_function_interrupt(void);
> +static void __smp_call_function_interrupt(struct cpu_user_regs *regs);
>  static DEFINE_SPINLOCK(call_lock);
>  static struct call_data_struct {
>      void (*func) (void *info);
> @@ -321,7 +321,7 @@
>      if ( cpumask_test_cpu(smp_processor_id(), &call_data.selected) )
>      {
>          local_irq_disable();
> -        __smp_call_function_interrupt();
> +        __smp_call_function_interrupt(NULL);
>          local_irq_enable();
>      }
>  
> @@ -390,7 +390,7 @@
>      this_cpu(irq_count)++;
>  }
>  
> -static void __smp_call_function_interrupt(void)
> +static void __smp_call_function_interrupt(struct cpu_user_regs *regs)
>  {
>      void (*func)(void *info) = call_data.func;
>      void *info = call_data.info;
> @@ -411,6 +411,11 @@
>      {
>          mb();
>          cpumask_clear_cpu(cpu, &call_data.selected);
> +#ifdef XEN_KDB_CONFIG
> +        if (info && !strcmp(info, "XENKDB")) {           /* called from kdb */
> +                (*(void (*)(struct cpu_user_regs *, void *))func)(regs, info);
> +        } else
> +#endif

This seems like a bad overloading of semantics here. Why not introduce
a new call_data.xdb flag, add a new internal __on_selected_cpus that
takes "xdb" as an argument, and plumb it up that way?

>          (*func)(info);
>      }
>  
> @@ -421,5 +426,5 @@
>  {
>      ack_APIC_irq();
>      perfc_incr(ipis);
> -    __smp_call_function_interrupt();
> +    __smp_call_function_interrupt(regs);
>  }
>
> diff -r 32034d1914a6 xen/arch/x86/time.c
> --- a/xen/arch/x86/time.c	Thu Jun 07 19:46:57 2012 +0100
> +++ b/xen/arch/x86/time.c	Wed Aug 29 14:39:57 2012 -0700
> @@ -2007,6 +2007,46 @@
>  }
>  __initcall(setup_dump_softtsc);
>  
> +#ifdef XEN_KDB_CONFIG
> +void kdb_time_resume(int update_domains)
> +{

This bit isn't really kdb-specific; why not name it more
generically and use printk()?

> +        s_time_t now;
> +        int ccpu = smp_processor_id();
> +        struct cpu_time *t = &this_cpu(cpu_time);
> +
> +        if (!plt_src.read_counter)            /* not initialized for earlykdb */
> +                return;
> +
> +        if (update_domains) {
> +                plt_stamp = plt_src.read_counter();
> +                platform_timer_stamp = plt_stamp64;
> +                platform_time_calibration();
> +                do_settime(get_cmos_time(), 0, read_platform_stime());
> +        }
> +        if (local_irq_is_enabled())
> +                kdbp("kdb BUG: enabled in time_resume(). ccpu:%d\n", ccpu);
> +
> +        rdtscll(t->local_tsc_stamp);
> +        now = read_platform_stime();
> +        t->stime_master_stamp = now;
> +        t->stime_local_stamp  = now;
> +
> +        update_vcpu_system_time(current);
> +
> +        if (update_domains)
> +                set_timer(&calibration_timer, NOW() + EPOCH);
> +}
> +
> +void kdb_dump_time_pcpu(void)
> +{
> +    int cpu;
> +    for_each_online_cpu(cpu) {
> +        kdbp("[%d]: cpu_time: %016lx\n", cpu, &per_cpu(cpu_time, cpu));
> +        kdbp("[%d]: cpu_calibration: %016lx\n", cpu, 
> +             &per_cpu(cpu_calibration, cpu));
> +    }
> +}
> +#endif
>  /*
>   * Local variables:
>   * mode: C
> diff -r 32034d1914a6 xen/arch/x86/traps.c
> --- a/xen/arch/x86/traps.c	Thu Jun 07 19:46:57 2012 +0100
> +++ b/xen/arch/x86/traps.c	Wed Aug 29 14:39:57 2012 -0700
> @@ -225,7 +225,7 @@
>  
>  #else
>  
> -static void show_trace(struct cpu_user_regs *regs)
> +void show_trace(struct cpu_user_regs *regs)

If you're making this an exported function, the prototype should be
added to the appropriate header.

>  {
>      unsigned long *frame, next, addr, low, high;
>  
> @@ -3326,6 +3326,10 @@
>      if ( nmi_callback(regs, cpu) )
>          return;
>  
> +#ifdef XEN_KDB_CONFIG
> +    if (kdb_enabled && kdb_handle_trap_entry(TRAP_nmi, regs))
> +        return;
> +#endif
>      if ( nmi_watchdog )
>          nmi_watchdog_tick(regs);
>  
> diff -r 32034d1914a6 xen/arch/x86/x86_64/compat/entry.S
> --- a/xen/arch/x86/x86_64/compat/entry.S	Thu Jun 07 19:46:57 2012 +0100
> +++ b/xen/arch/x86/x86_64/compat/entry.S	Wed Aug 29 14:39:57 2012 -0700
> @@ -95,6 +95,10 @@
>  /* %rbx: struct vcpu */
>  ENTRY(compat_test_all_events)
>          cli                             # tests must not race interrupts
> +#ifdef XEN_KDB_CONFIG
> +        testl $1, kdb_session_begun(%rip)
> +        jnz   compat_restore_all_guest
> +#endif
>  /*compat_test_softirqs:*/
>          movl  VCPU_processor(%rbx),%eax
>          shlq  $IRQSTAT_shift,%rax
> diff -r 32034d1914a6 xen/arch/x86/x86_64/entry.S
> --- a/xen/arch/x86/x86_64/entry.S	Thu Jun 07 19:46:57 2012 +0100
> +++ b/xen/arch/x86/x86_64/entry.S	Wed Aug 29 14:39:57 2012 -0700
> @@ -184,6 +184,10 @@
>  /* %rbx: struct vcpu */
>  test_all_events:
>          cli                             # tests must not race interrupts
> +#ifdef XEN_KDB_CONFIG                   /* 64bit dom0 will resume here */
> +        testl $1, kdb_session_begun(%rip)
> +        jnz   restore_all_guest
> +#endif
>  /*test_softirqs:*/  
>          movl  VCPU_processor(%rbx),%eax
>          shl   $IRQSTAT_shift,%rax
> @@ -546,6 +550,13 @@
>  
>  ENTRY(int3)
>          pushq $0
> +#ifdef XEN_KDB_CONFIG
> +        pushq %rax
> +        GET_CPUINFO_FIELD(CPUINFO_processor_id, %rax)
> +        movq  (%rax), %rax
> +        lock  bts %rax, kdb_cpu_traps(%rip)
> +        popq  %rax
> +#endif
>          movl  $TRAP_int3,4(%rsp)
>          jmp   handle_exception
>  
> diff -r 32034d1914a6 xen/common/domain.c
> --- a/xen/common/domain.c	Thu Jun 07 19:46:57 2012 +0100
> +++ b/xen/common/domain.c	Wed Aug 29 14:39:57 2012 -0700
> @@ -530,6 +530,14 @@
>  {
>      struct vcpu *v;
>  
> +#ifdef XEN_KDB_CONFIG
> +    if (reason == SHUTDOWN_crash) {
> +        if ( IS_PRIV(d) )
> +            kdb_trap_immed(KDB_TRAP_FATAL);
> +        else
> +            kdb_trap_immed(KDB_TRAP_NONFATAL);
> +    }

I think that this behavior should be runtime definable.

> +#endif
>      spin_lock(&d->shutdown_lock);
>  
>      if ( d->shutdown_code == -1 )
> @@ -624,7 +632,9 @@
>      for_each_vcpu ( d, v )
>          vcpu_sleep_nosync(v);
>  
> -    send_global_virq(VIRQ_DEBUGGER);
> +    /* send VIRQ_DEBUGGER to guest only if gdbsx_vcpu_event is not active */
> +    if (current->arch.gdbsx_vcpu_event == 0)
> +        send_global_virq(VIRQ_DEBUGGER);
>  }
>  
>  /* Complete domain destroy after RCU readers are not holding old references. */
> diff -r 32034d1914a6 xen/common/sched_credit.c
> --- a/xen/common/sched_credit.c	Thu Jun 07 19:46:57 2012 +0100
> +++ b/xen/common/sched_credit.c	Wed Aug 29 14:39:57 2012 -0700
> @@ -1475,6 +1475,33 @@
>      printk("\n");
>  }
>  
> +#ifdef XEN_KDB_CONFIG
> +static void kdb_csched_dump(int cpu)
> +{
> +    struct csched_pcpu *pcpup = CSCHED_PCPU(cpu);
> +    struct vcpu *scurrvp = (CSCHED_VCPU(current))->vcpu;
> +    struct list_head *tmp, *runq = RUNQ(cpu);
> +
> +    kdbp("    csched_pcpu: %p\n", pcpup);
> +    kdbp("    curr csched:%p {vcpu:%p id:%d domid:%d}\n", (current)->sched_priv,
> +         scurrvp, scurrvp->vcpu_id, scurrvp->domain->domain_id);
> +    kdbp("    runq:\n");
> +
> +    /* next is top of struct, so screw stupid, ugly hard to follow macros */
> +    if (offsetof(struct csched_vcpu, runq_elem.next) != 0) {
> +        kdbp("next is not first in struct csched_vcpu. please fixme\n");
> +        return;        /* otherwise for loop will crash */
> +    }
> +    for (tmp = runq->next; tmp != runq; tmp = tmp->next) {
> +
> +        struct csched_vcpu *csp = (struct csched_vcpu *)tmp;
> +        struct vcpu *vp = csp->vcpu;
> +        kdbp("      csp:%p pri:%02d vcpu: {p:%p id:%d domid:%d}\n", csp,
> +             csp->pri, vp, vp->vcpu_id, vp->domain->domain_id);
> +    };
> +}
> +#endif

I'd think that we would want the "sched" command in {x,h}db to provide
the same output as the debug key handler. Is there any way to re-use
the generic .dump_* functions and simply have printk() redirect its
output to kdbp() while a {x,h}db session is active?

>  static void
>  csched_dump_pcpu(const struct scheduler *ops, int cpu)
>  {
> @@ -1484,6 +1511,10 @@
>      int loop;
>  #define cpustr keyhandler_scratch
>  
> +#ifdef XEN_KDB_CONFIG
> +    kdb_csched_dump(cpu);
> +    return;
> +#endif
>      spc = CSCHED_PCPU(cpu);
>      runq = &spc->runq;
>  
> diff -r 32034d1914a6 xen/common/schedule.c
> --- a/xen/common/schedule.c	Thu Jun 07 19:46:57 2012 +0100
> +++ b/xen/common/schedule.c	Wed Aug 29 14:39:57 2012 -0700
> @@ -1454,6 +1454,25 @@
>      schedule();
>  }
>  
> +#ifdef XEN_KDB_CONFIG
> +void kdb_print_sched_info(void)
> +{
> +    int cpu;
> +
> +    kdbp("Scheduler: name:%s opt_name:%s id:%d\n", ops.name, ops.opt_name,
> +         ops.sched_id);
> +    kdbp("per cpu schedule_data:\n");
> +    for_each_online_cpu(cpu) {
> +        struct schedule_data *p =  &per_cpu(schedule_data, cpu);
> +        kdbp("  cpu:%d  &(per cpu)schedule_data:%p\n", cpu, p);
> +        kdbp("         curr:%p sched_priv:%p\n", p->curr, p->sched_priv);
> +        kdbp("\n");
> +        ops.dump_cpu_state(&ops, cpu);
> +        kdbp("\n");
> +    }
> +}
> +#endif
> +
>  #ifdef CONFIG_COMPAT
>  #include "compat/schedule.c"
>  #endif
> diff -r 32034d1914a6 xen/common/symbols.c
> --- a/xen/common/symbols.c	Thu Jun 07 19:46:57 2012 +0100
> +++ b/xen/common/symbols.c	Wed Aug 29 14:39:57 2012 -0700
> @@ -168,3 +168,21 @@
>  
>      spin_unlock_irqrestore(&lock, flags);
>  }
> +
> +#ifdef XEN_KDB_CONFIG
> +/*
> + * Given a symbol, return its address.
> + */
> +unsigned long address_lookup(char *symp)
> +{
> +    int i, off = 0;
> +    char namebuf[KSYM_NAME_LEN+1];
> +
> +    for (i=0; i < symbols_num_syms; i++) {
> +        off = symbols_expand_symbol(off, namebuf);
> +        if (strcmp(namebuf, symp) == 0)                  /* found it */
> +            return symbols_address(i);
> +    }
> +    return 0;
> +}
> +#endif
> diff -r 32034d1914a6 xen/common/timer.c
> --- a/xen/common/timer.c	Thu Jun 07 19:46:57 2012 +0100
> +++ b/xen/common/timer.c	Wed Aug 29 14:39:57 2012 -0700
> @@ -643,6 +643,48 @@
>      register_keyhandler('a', &dump_timerq_keyhandler);
>  }
>  
> +#ifdef XEN_KDB_CONFIG
> +#include <xen/symbols.h>
> +void kdb_dump_timer_queues(void)
> +{
> +    struct timer  *t;
> +    struct timers *ts;
> +    unsigned long sz, offs;
> +    char buf[KSYM_NAME_LEN+1];
> +    int cpu, j;
> +    u64 tsc;
> +
> +    for_each_online_cpu( cpu )
> +    {
> +        ts = &per_cpu(timers, cpu);
> +        kdbp("CPU[%02d]:", cpu);
> +
> +        if (cpu == smp_processor_id()) {
> +            s_time_t now = NOW();
> +            rdtscll(tsc);
> +            kdbp("NOW:0x%08x%08x TSC:0x%016lx\n", (u32)(now>>32),(u32)now, tsc);
> +        } else
> +            kdbp("\n");
> +
> +        /* timers in the heap */
> +        for ( j = 1; j <= GET_HEAP_SIZE(ts->heap); j++ ) {
> +            t = ts->heap[j];
> +            kdbp("  %d: exp=0x%08x%08x fn:%s data:%p\n",
> +                 j, (u32)(t->expires>>32), (u32)t->expires,
> +                 symbols_lookup((unsigned long)t->function, &sz, &offs, buf),
> +                 t->data);
> +        }
> +        /* timers on the link list */
> +        for ( t = ts->list, j = 0; t != NULL; t = t->list_next, j++ ) {
> +            kdbp(" L%d: exp=0x%08x%08x fn:%s data:%p\n",
> +                 j, (u32)(t->expires>>32), (u32)t->expires,
> +                 symbols_lookup((unsigned long)t->function, &sz, &offs, buf),
> +                 t->data);
> +        }
> +    }
> +}
> +#endif
> +
>  /*
>   * Local variables:
>   * mode: C
> diff -r 32034d1914a6 xen/drivers/char/console.c
> --- a/xen/drivers/char/console.c	Thu Jun 07 19:46:57 2012 +0100
> +++ b/xen/drivers/char/console.c	Wed Aug 29 14:39:57 2012 -0700
> @@ -295,6 +295,21 @@
>  {
>      static int switch_code_count = 0;
>  
> +#ifdef XEN_KDB_CONFIG
> +    /* if ctrl-\ pressed and kdb handles it, return */
> +    if (kdb_enabled && c == 0x1c) {

This should at least be a named constant, and making it boot-time
configurable would be nice.

> +        if (!kdb_session_begun) {
> +            if (kdb_keyboard(regs))
> +                return;
> +        } else {
> +            kdbp("Sorry... kdb session already active.. please try again..\n");
> +            return;
> +        }
> +    }
> +    if (kdb_session_begun)      /* kdb should already be polling */
> +        return;                 /* swallow chars so they don't buffer in dom0 */
> +#endif
> +
>      if ( switch_code && (c == switch_code) )
>      {
>          /* We eat CTRL-<switch_char> in groups of 3 to switch console input. */
> @@ -710,6 +725,18 @@
>      atomic_dec(&print_everything);
>  }
>  
> +#ifdef XEN_KDB_CONFIG
> +void console_putc(char c)
> +{
> +    serial_putc(sercon_handle, c);
> +}
> +
> +int console_getc(void)
> +{
> +    return serial_getc(sercon_handle);
> +}
> +#endif
> +
>  /*
>   * printk rate limiting, lifted from Linux.
>   *
> diff -r 32034d1914a6 xen/include/asm-x86/debugger.h
> --- a/xen/include/asm-x86/debugger.h	Thu Jun 07 19:46:57 2012 +0100
> +++ b/xen/include/asm-x86/debugger.h	Wed Aug 29 14:39:57 2012 -0700
> @@ -39,7 +39,11 @@
>  #define DEBUGGER_trap_fatal(_v, _r) \
>      if ( debugger_trap_fatal(_v, _r) ) return;
>  
> -#if defined(CRASH_DEBUG)
> +#if defined(XEN_KDB_CONFIG)
> +#define debugger_trap_immediate() kdb_trap_immed(KDB_TRAP_NONFATAL)
> +#define debugger_trap_fatal(_v, _r) kdb_trap_fatal(_v, _r)
> +
> +#elif defined(CRASH_DEBUG)
>  
>  #include <xen/gdbstub.h>
>  
> @@ -70,6 +74,10 @@
>  {
>      struct vcpu *v = current;
>  
> +#ifdef XEN_KDB_CONFIG
> +    if (kdb_handle_trap_entry(vector, regs))
> +        return 1;
> +#endif
>      if ( guest_kernel_mode(v, regs) && v->domain->debugger_attached &&
>           ((vector == TRAP_int3) || (vector == TRAP_debug)) )
>      {
> diff -r 32034d1914a6 xen/include/xen/lib.h
> --- a/xen/include/xen/lib.h	Thu Jun 07 19:46:57 2012 +0100
> +++ b/xen/include/xen/lib.h	Wed Aug 29 14:39:57 2012 -0700
> @@ -116,4 +116,7 @@
>  struct cpu_user_regs;
>  void dump_execstate(struct cpu_user_regs *);
>  
> +#ifdef XEN_KDB_CONFIG
> +#include "../../kdb/include/kdb_extern.h"
> +#endif
>  #endif /* __LIB_H__ */
> diff -r 32034d1914a6 xen/include/xen/sched.h
> --- a/xen/include/xen/sched.h	Thu Jun 07 19:46:57 2012 +0100
> +++ b/xen/include/xen/sched.h	Wed Aug 29 14:39:57 2012 -0700
> @@ -576,11 +576,14 @@
>  unsigned long hypercall_create_continuation(
>      unsigned int op, const char *format, ...);
>  void hypercall_cancel_continuation(void);
> -
> +#ifdef XEN_KDB_CONFIG
> +#define hypercall_preempt_check() (0)
> +#else
>  #define hypercall_preempt_check() (unlikely(    \
>          softirq_pending(smp_processor_id()) |   \
>          local_events_need_delivery()            \
>      ))
> +#endif
>  
>  extern struct domain *domain_list;

Thanks for posting this, Mukesh. I think that further splitting of the
changes could be helpful. I wonder if that would be easier to do in a
git repository.

Matt

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 30 21:33:09 2012
Date: Thu, 30 Aug 2012 14:32:34 -0700
From: Matt Wilson <msw@amazon.com>
To: Mukesh Rathor <mukesh.rathor@oracle.com>
Message-ID: <20120830213230.GA32155@u002268147cd4502c336d.ant.amazon.com>
References: <20120830112323.5086d73c@mantra.us.oracle.com>
In-Reply-To: <20120830112323.5086d73c@mantra.us.oracle.com>
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [RFC PATCH 1/2]: hypervisor debugger

On Thu, Aug 30, 2012 at 11:23:23AM -0700, Mukesh Rathor wrote:
> 
> Changes to xen code for the debugger.
> 
> 
[...]  
> -SUBDIRS = xsm arch/$(TARGET_ARCH) common drivers
> +SUBDIRS = xsm arch/$(TARGET_ARCH) common drivers kdb

If the name is going to change, I think it'd be good to go ahead and
rename the subdirectory, rename XEN_KDB_CONFIG, etc.

Also, it'd be nice to use "diff -p" to show the function names as part
of the diff. There are some big hunks below that I think should be
evaluated later, after the main entry points are reviewed.

[...]
> --- a/xen/arch/x86/hvm/svm/entry.S	Thu Jun 07 19:46:57 2012 +0100
> +++ b/xen/arch/x86/hvm/svm/entry.S	Wed Aug 29 14:39:57 2012 -0700
> @@ -59,12 +59,23 @@
>          get_current(bx)
>          CLGI
>  
> +#ifdef XEN_KDB_CONFIG
> +#if defined(__x86_64__)
> +        testl $1, kdb_session_begun(%rip)
> +#else
> +        testl $1, kdb_session_begun
> +#endif
> +        jnz  .Lkdb_skip_softirq
> +#endif

Not sure if something like:

        cmpb  $0,kdb_session_begun(%rip)
UNLIKELY_START(ne, kdb_session_exists)
        jmp .Lkdb_skip_softirq
UNLIKELY_END(ne, kdb_session_exists)

is worth it in this case, since it's just a jnz.

>          mov  VCPU_processor(r(bx)),%eax
>          shl  $IRQSTAT_shift,r(ax)
>          lea  addr_of(irq_stat),r(dx)
>          testl $~0,(r(dx),r(ax),1)
>          jnz  .Lsvm_process_softirqs
>
> +#ifdef XEN_KDB_CONFIG
> +.Lkdb_skip_softirq:
> +#endif
>          testb $0, VCPU_nsvm_hap_enabled(r(bx))
>  UNLIKELY_START(nz, nsvm_hap)
>          mov  VCPU_nhvm_p2m(r(bx)),r(ax)
> diff -r 32034d1914a6 xen/arch/x86/hvm/svm/svm.c
> --- a/xen/arch/x86/hvm/svm/svm.c	Thu Jun 07 19:46:57 2012 +0100
> +++ b/xen/arch/x86/hvm/svm/svm.c	Wed Aug 29 14:39:57 2012 -0700
> @@ -2170,6 +2170,10 @@
>          break;
>  
>      case VMEXIT_EXCEPTION_DB:
> +#ifdef XEN_KDB_CONFIG
> +        if (kdb_handle_trap_entry(TRAP_debug, regs))
> +	    break;

Correct indentation.

> +#endif
>          if ( !v->domain->debugger_attached )
>              goto exit_and_crash;
>          domain_pause_for_debugger();
> @@ -2182,6 +2186,10 @@
>          if ( (inst_len = __get_instruction_length(v, INSTR_INT3)) == 0 )
>              break;
>          __update_guest_eip(regs, inst_len);
> +#ifdef XEN_KDB_CONFIG
> +        if (kdb_handle_trap_entry(TRAP_int3, regs))
> +            break;
> +#endif
>          current->arch.gdbsx_vcpu_event = TRAP_int3;
>          domain_pause_for_debugger();
>          break;
> diff -r 32034d1914a6 xen/arch/x86/hvm/svm/vmcb.c
> --- a/xen/arch/x86/hvm/svm/vmcb.c	Thu Jun 07 19:46:57 2012 +0100
> +++ b/xen/arch/x86/hvm/svm/vmcb.c	Wed Aug 29 14:39:57 2012 -0700
> @@ -315,6 +315,36 @@
>      register_keyhandler('v', &vmcb_dump_keyhandler);
>  }
>  
> +#if defined(XEN_KDB_CONFIG)
> +/* did == 0 : display for all HVM domains. domid 0 is never HVM.
> + * vid == -1 : display for all HVM VCPUs
> + */
> +void kdb_dump_vmcb(domid_t did, int vid)
> +{
> +    struct domain *dp;
> +    struct vcpu *vp;
> +
> +    rcu_read_lock(&domlist_read_lock);
> +    for_each_domain (dp) {
> +        if (!is_hvm_or_hyb_domain(dp) || dp->is_dying)
> +            continue;
> +        if (did != 0 && did != dp->domain_id)
> +            continue;
> +
> +        for_each_vcpu (dp, vp) {
> +            if (vid != -1 && vid != vp->vcpu_id)
> +                continue;
> +
> +            kdbp("  VMCB [domid: %d  vcpu:%d]:\n", dp->domain_id, vp->vcpu_id);
> +            svm_vmcb_dump("kdb", vp->arch.hvm_svm.vmcb);
> +            kdbp("\n");
> +        }
> +        kdbp("\n");
> +    }
> +    rcu_read_unlock(&domlist_read_lock);
> +}
> +#endif
> +
>  /*
>   * Local variables:
>   * mode: C

I think that Keir was most interested in the hairy bits of code that
wires {x,h}db in to Xen. I'd save this chunk for evaluation later.

> diff -r 32034d1914a6 xen/arch/x86/hvm/vmx/entry.S
> --- a/xen/arch/x86/hvm/vmx/entry.S	Thu Jun 07 19:46:57 2012 +0100
> +++ b/xen/arch/x86/hvm/vmx/entry.S	Wed Aug 29 14:39:57 2012 -0700
> @@ -124,12 +124,23 @@
>          get_current(bx)
>          cli
>  
> +#ifdef XEN_KDB_CONFIG
> +#if defined(__x86_64__)
> +        testl $1, kdb_session_begun(%rip)
> +#else
> +        testl $1, kdb_session_begun
> +#endif
> +        jnz  .Lkdb_skip_softirq
> +#endif
>          mov  VCPU_processor(r(bx)),%eax
>          shl  $IRQSTAT_shift,r(ax)
>          lea  addr_of(irq_stat),r(dx)
>          cmpl $0,(r(dx),r(ax),1)
>          jnz  .Lvmx_process_softirqs
>  
> +#ifdef XEN_KDB_CONFIG
> +.Lkdb_skip_softirq:
> +#endif
>          testb $0xff,VCPU_vmx_emulate(r(bx))
>          jnz .Lvmx_goto_emulator
>          testb $0xff,VCPU_vmx_realmode(r(bx))
> diff -r 32034d1914a6 xen/arch/x86/hvm/vmx/vmcs.c
> --- a/xen/arch/x86/hvm/vmx/vmcs.c	Thu Jun 07 19:46:57 2012 +0100
> +++ b/xen/arch/x86/hvm/vmx/vmcs.c	Wed Aug 29 14:39:57 2012 -0700
> @@ -1117,6 +1117,13 @@
>          hvm_asid_flush_vcpu(v);
>      }
>  
> +#if defined(XEN_KDB_CONFIG)
> +    if (kdb_dr7)
> +        __vmwrite(GUEST_DR7, kdb_dr7);
> +    else
> +        __vmwrite(GUEST_DR7, 0);
> +#endif

This should just be "__vmwrite(GUEST_DR7, kdb_dr7);", since when 
"if (kdb_dr7)" evaluates to false the value is 0.

>      debug_state = v->domain->debugger_attached
>                    || v->domain->arch.hvm_domain.params[HVM_PARAM_MEMORY_EVENT_INT3]
>                    || v->domain->arch.hvm_domain.params[HVM_PARAM_MEMORY_EVENT_SINGLE_STEP];
> @@ -1326,6 +1333,220 @@
>      register_keyhandler('v', &vmcs_dump_keyhandler);
>  }
>  
> +#if defined(XEN_KDB_CONFIG)
> +#define GUEST_EFER      0x2806   /* see page 23-20 */
> +#define GUEST_EFER_HIGH 0x2807   /* see page 23-20 */
> +

page 23-20 of what? Also, I'd save this chunk for later.

> +/* it's a shame we can't use vmcs_dump_vcpu(), but it does vmx_vmcs_enter which
> + * will IPI other CPUs. also, print a subset relevant to software debugging */

Well, that could be re-factored.

> +static void noinline kdb_print_vmcs(struct vcpu *vp)
> +{
[...] mostly duplicated dump function snipped
> +}
> +
> +/* Flush VMCS on this cpu if it needs to: 
> + *   - Upon leaving kdb, the HVM cpu will resume in vmx_vmexit_handler() and 
> + *     do __vmreads. So, the VMCS pointer can't be left cleared.
> + *   - Doing __vmpclear will set the vmx state to 'clear', so to resume a
> + *     vmlaunch must be done and not vmresume. This means, we must clear 
> + *     arch_vmx->launched.
> + */
> +void kdb_curr_cpu_flush_vmcs(void)
> +{
> +    struct domain *dp;
> +    struct vcpu *vp;
> +    int ccpu = smp_processor_id();
> +    struct vmcs_struct *cvp = this_cpu(current_vmcs);
> +
> +    if (this_cpu(current_vmcs) == NULL)
> +        return;             /* no HVM active on this CPU */
> +
> +    kdbp("KDB:[%d] curvmcs:%lx/%lx\n", ccpu, cvp, virt_to_maddr(cvp));
> +
> +    /* looks like we got one. unfortunately, current_vmcs points to vmcs 
> +     * and not VCPU, so we gotta search the entire list... */
> +    for_each_domain (dp) {
> +        if ( !(is_hvm_or_hyb_domain(dp)) || dp->is_dying)
> +            continue;
> +        for_each_vcpu (dp, vp) {
> +            if ( vp->arch.hvm_vmx.vmcs == cvp ) {
> +                __vmpclear(virt_to_maddr(vp->arch.hvm_vmx.vmcs));
> +                __vmptrld(virt_to_maddr(vp->arch.hvm_vmx.vmcs));
> +                vp->arch.hvm_vmx.launched = 0;
> +                this_cpu(current_vmcs) = NULL;
> +                kdbp("KDB:[%d] %d:%d current_vmcs:%lx flushed\n", 
> +		     ccpu, dp->domain_id, vp->vcpu_id, cvp, virt_to_maddr(cvp));
> +            }
> +        }
> +    }
> +}
> +
> +/*
> + * domid == 0 : display for all HVM domains  (dom0 is never an HVM domain)
> + * vcpu id == -1 : display all vcpuids
> + * PreCondition: all HVM cpus (including current cpu) have flushed VMCS
> + */
> +void kdb_dump_vmcs(domid_t did, int vid)
> +{
> +    struct domain *dp;
> +    struct vcpu *vp;
> +    struct vmcs_struct  *vmcsp;
> +    u64 addr = -1;
> +
> +    ASSERT(!local_irq_is_enabled());     /* kdb should always run disabled */
> +    __vmptrst(&addr);
> +
> +    for_each_domain (dp) {
> +        if ( !(is_hvm_or_hyb_domain(dp)) || dp->is_dying)
> +            continue;
> +        if (did != 0 && did != dp->domain_id)
> +            continue;
> +
> +        for_each_vcpu (dp, vp) {
> +            if (vid != -1 && vid != vp->vcpu_id)
> +                continue;
> +
> +	    vmcsp = vp->arch.hvm_vmx.vmcs;
> +            kdbp("VMCS %lx/%lx [domid:%d (%p)  vcpu:%d (%p)]:\n", vmcsp,
> +	         virt_to_maddr(vmcsp), dp->domain_id, dp, vp->vcpu_id, vp);
> +            __vmptrld(virt_to_maddr(vmcsp));
> +            kdb_print_vmcs(vp);
> +            __vmpclear(virt_to_maddr(vmcsp));
> +            vp->arch.hvm_vmx.launched = 0;
> +        }
> +        kdbp("\n");
> +    }
> +    /* restore orig vmcs pointer for __vmreads in vmx_vmexit_handler() */
> +    if (addr && addr != (u64)-1)
> +        __vmptrld(addr);
> +}
> +#endif
>  
>  /*
>   * Local variables:
> diff -r 32034d1914a6 xen/arch/x86/hvm/vmx/vmx.c
> --- a/xen/arch/x86/hvm/vmx/vmx.c	Thu Jun 07 19:46:57 2012 +0100
> +++ b/xen/arch/x86/hvm/vmx/vmx.c	Wed Aug 29 14:39:57 2012 -0700
> @@ -2183,11 +2183,14 @@
>          printk("reason not known yet!");
>          break;
>      }
> -
> +#if defined(XEN_KDB_CONFIG)
> +    kdbp("\n************* VMCS Area **************\n");
> +    kdb_dump_vmcs(curr->domain->domain_id, (curr)->vcpu_id);
> +#else
>      printk("************* VMCS Area **************\n");
>      vmcs_dump_vcpu(curr);
>      printk("**************************************\n");
> -
> +#endif
>      domain_crash(curr->domain);
>  }
>  
> @@ -2415,6 +2418,12 @@
>              write_debugreg(6, exit_qualification | 0xffff0ff0);
>              if ( !v->domain->debugger_attached || cpu_has_monitor_trap_flag )
>                  goto exit_and_crash;
> +
> +#if defined(XEN_KDB_CONFIG)
> +            /* TRAP_debug: IP points correctly to next instr */
> +            if (kdb_handle_trap_entry(vector, regs))
> +                break;
> +#endif
>              domain_pause_for_debugger();
>              break;
>          case TRAP_int3: 
> @@ -2423,6 +2432,13 @@
>              if ( v->domain->debugger_attached )
>              {
>                  update_guest_eip(); /* Safe: INT3 */            
> +#if defined(XEN_KDB_CONFIG)
> +                /* vmcs.IP points to the bp; kdb expects bp+1. Hence this is
> +                 * placed after update_guest_eip(), which updates the IP to
> +                 * bp+1. This works for gdbsx too.
> +                 */
> +                if (kdb_handle_trap_entry(vector, regs))
> +                    break;
> +#endif
>                  current->arch.gdbsx_vcpu_event = TRAP_int3;
>                  domain_pause_for_debugger();
>                  break;
> @@ -2707,6 +2723,10 @@
>      case EXIT_REASON_MONITOR_TRAP_FLAG:
>          v->arch.hvm_vmx.exec_control &= ~CPU_BASED_MONITOR_TRAP_FLAG;
>          vmx_update_cpu_exec_control(v);
> +#if defined(XEN_KDB_CONFIG)
> +        if (kdb_handle_trap_entry(TRAP_debug, regs))
> +            break;
> +#endif
>          if ( v->arch.hvm_vcpu.single_step ) {
>            hvm_memory_event_single_step(regs->eip);
>            if ( v->domain->debugger_attached )
> diff -r 32034d1914a6 xen/arch/x86/irq.c
> --- a/xen/arch/x86/irq.c	Thu Jun 07 19:46:57 2012 +0100
> +++ b/xen/arch/x86/irq.c	Wed Aug 29 14:39:57 2012 -0700
> @@ -2305,3 +2305,29 @@
>      return is_hvm_domain(d) && pirq &&
>             pirq->arch.hvm.emuirq != IRQ_UNBOUND; 
>  }
> +
> +#ifdef XEN_KDB_CONFIG
> +void kdb_prnt_guest_mapped_irqs(void)
> +{
> +    int irq, j;
> +    char affstr[NR_CPUS/4+NR_CPUS/32+2];    /* courtesy dump_irqs() */
> +
> +    kdbp("irq  vec  aff  type  domid:mapped-pirq pairs  (all in decimal)\n");
> +    for (irq=0; irq < nr_irqs; irq++) {
> +        irq_desc_t  *dp = irq_to_desc(irq);
> +        struct arch_irq_desc *archp = &dp->arch;
> +        irq_guest_action_t *actp = (irq_guest_action_t *)dp->action;
> +
> +        if (!dp->handler || dp->handler == &no_irq_type ||
> +            !(dp->status & IRQ_GUEST))
> +            continue;
> +
> +        cpumask_scnprintf(affstr, sizeof(affstr), dp->affinity);
> +        kdbp("[%3d] %3d %3s %-13s ", irq, archp->vector, affstr,
> +             dp->handler->typename);
> +        for (j=0; j < actp->nr_guests; j++)
> +            kdbp("%03d:%04d ", actp->guest[j]->domain_id,
> +                 domain_irq_to_pirq(actp->guest[j], irq));
> +        kdbp("\n");
> +    }
> +}
> +#endif
> diff -r 32034d1914a6 xen/arch/x86/setup.c
> --- a/xen/arch/x86/setup.c	Thu Jun 07 19:46:57 2012 +0100
> +++ b/xen/arch/x86/setup.c	Wed Aug 29 14:39:57 2012 -0700
> @@ -47,6 +47,13 @@
>  #include <xen/cpu.h>
>  #include <asm/nmi.h>
>  
> +#ifdef XEN_KDB_CONFIG
> +#include <asm/debugger.h>
> +
> +int opt_earlykdb=0;
> +boolean_param("earlykdb", opt_earlykdb);
> +#endif
> +
>  /* opt_nosmp: If true, secondary processors are ignored. */
>  static bool_t __initdata opt_nosmp;
>  boolean_param("nosmp", opt_nosmp);
> @@ -1242,6 +1249,11 @@
>  
>      trap_init();
>  
> +#ifdef XEN_KDB_CONFIG
> +    kdb_init();
> +    if (opt_earlykdb)
> +        kdb_trap_immed(KDB_TRAP_NONFATAL);

I think you need something here that makes sure the NMI watchdog is
disabled when kdb is enabled. It'd also be nice to have an option to
keep using the NMI watchdog in a {x,h}db-enabled Xen build. You'd
just not have the option of using NMI to trigger the debugger, and
you'd need to disable the watchdog during a debugging session - right?
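
Roughly what I have in mind - watchdog_disable()/watchdog_enable()
already exist in nmi.c, but the kdb_session_enter/leave hooks and the
wiring below are made-up names, just a sketch:

```c
#include <stdbool.h>

/* Stand-ins for Xen's watchdog_disable()/watchdog_enable(); in a real
 * build these come from xen/arch/x86/nmi.c. */
static int watchdog_running = 1;
static void watchdog_disable(void) { watchdog_running = 0; }
static void watchdog_enable(void)  { watchdog_running = 1; }

/* Hypothetical hooks: pause the NMI watchdog for the duration of a
 * debugger session so watchdog NMIs don't fire while CPUs are parked,
 * then restore whatever state the watchdog was in before. */
static bool watchdog_was_running;

void kdb_session_enter(void)
{
    watchdog_was_running = watchdog_running;
    if (watchdog_was_running)
        watchdog_disable();
}

void kdb_session_leave(void)
{
    if (watchdog_was_running)
        watchdog_enable();
}

int kdb_watchdog_state(void) { return watchdog_running; }
```

That way a build with the watchdog enabled keeps it ticking except
while a session is actually open.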

> +#endif
>      rcu_init();
>      
>      early_time_init();
> diff -r 32034d1914a6 xen/arch/x86/smp.c
> --- a/xen/arch/x86/smp.c	Thu Jun 07 19:46:57 2012 +0100
> +++ b/xen/arch/x86/smp.c	Wed Aug 29 14:39:57 2012 -0700
> @@ -273,7 +273,7 @@
>   * Structure and data for smp_call_function()/on_selected_cpus().
>   */
>  
> -static void __smp_call_function_interrupt(void);
> +static void __smp_call_function_interrupt(struct cpu_user_regs *regs);
>  static DEFINE_SPINLOCK(call_lock);
>  static struct call_data_struct {
>      void (*func) (void *info);
> @@ -321,7 +321,7 @@
>      if ( cpumask_test_cpu(smp_processor_id(), &call_data.selected) )
>      {
>          local_irq_disable();
> -        __smp_call_function_interrupt();
> +        __smp_call_function_interrupt(NULL);
>          local_irq_enable();
>      }
>  
> @@ -390,7 +390,7 @@
>      this_cpu(irq_count)++;
>  }
>  
> -static void __smp_call_function_interrupt(void)
> +static void __smp_call_function_interrupt(struct cpu_user_regs *regs)
>  {
>      void (*func)(void *info) = call_data.func;
>      void *info = call_data.info;
> @@ -411,6 +411,11 @@
>      {
>          mb();
>          cpumask_clear_cpu(cpu, &call_data.selected);
> +#ifdef XEN_KDB_CONFIG
> +        if (info && !strcmp(info, "XENKDB")) {           /* called from kdb */
> +                (*(void (*)(struct cpu_user_regs *, void *))func)(regs, info);
> +        } else
> +#endif

This seems like a bad overloading of semantics here. Why not introduce
a new call_data.xdb flag, add a new internal __on_selected_cpus that
takes "xdb" as an argument, and plumb it up that way?
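
Something like this, as a rough model - the call_data field, the
handler, and call_function_interrupt() are all invented names, not the
actual Xen API:

```c
#include <stddef.h>

struct cpu_user_regs { unsigned long rip; };

/* Hypothetical extension of call_data_struct: an explicit flag instead
 * of overloading the info pointer with a magic "XENKDB" string. */
struct call_data_struct {
    void (*func)(void *info);
    void *info;
    int from_debugger;          /* the suggested call_data.xdb flag */
};

static struct cpu_user_regs *last_regs;
static int normal_calls;

static void normal_func(void *info) { (void)info; normal_calls++; }

/* Debugger IPI handler: needs the interrupted trap frame. */
static void kdb_ipi_handler(struct cpu_user_regs *regs, void *info)
{
    (void)info;
    last_regs = regs;
}

/* What __smp_call_function_interrupt() would do with the flag in place:
 * dispatch on the flag, not on strcmp() against the info pointer. */
void call_function_interrupt(struct call_data_struct *cd,
                             struct cpu_user_regs *regs)
{
    if (cd->from_debugger)
        kdb_ipi_handler(regs, cd->info);
    else
        cd->func(cd->info);     /* normal smp_call_function semantics */
}
```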

>          (*func)(info);
>      }
>  
> @@ -421,5 +426,5 @@
>  {
>      ack_APIC_irq();
>      perfc_incr(ipis);
> -    __smp_call_function_interrupt();
> +    __smp_call_function_interrupt(regs);
>  }
>
> diff -r 32034d1914a6 xen/arch/x86/time.c
> --- a/xen/arch/x86/time.c	Thu Jun 07 19:46:57 2012 +0100
> +++ b/xen/arch/x86/time.c	Wed Aug 29 14:39:57 2012 -0700
> @@ -2007,6 +2007,46 @@
>  }
>  __initcall(setup_dump_softtsc);
>  
> +#ifdef XEN_KDB_CONFIG
> +void kdb_time_resume(int update_domains)
> +{

This bit isn't really kdb-specific; why not name it more
generically and use printk()?

> +        s_time_t now;
> +        int ccpu = smp_processor_id();
> +        struct cpu_time *t = &this_cpu(cpu_time);
> +
> +        if (!plt_src.read_counter)            /* not initialized for earlykdb */
> +                return;
> +
> +        if (update_domains) {
> +                plt_stamp = plt_src.read_counter();
> +                platform_timer_stamp = plt_stamp64;
> +                platform_time_calibration();
> +                do_settime(get_cmos_time(), 0, read_platform_stime());
> +        }
> +        if (local_irq_is_enabled())
> +                kdbp("kdb BUG: enabled in time_resume(). ccpu:%d\n", ccpu);
> +
> +        rdtscll(t->local_tsc_stamp);
> +        now = read_platform_stime();
> +        t->stime_master_stamp = now;
> +        t->stime_local_stamp  = now;
> +
> +        update_vcpu_system_time(current);
> +
> +        if (update_domains)
> +                set_timer(&calibration_timer, NOW() + EPOCH);
> +}
> +
> +void kdb_dump_time_pcpu(void)
> +{
> +    int cpu;
> +    for_each_online_cpu(cpu) {
> +        kdbp("[%d]: cpu_time: %016lx\n", cpu, &per_cpu(cpu_time, cpu));
> +        kdbp("[%d]: cpu_calibration: %016lx\n", cpu, 
> +             &per_cpu(cpu_calibration, cpu));
> +    }
> +}
> +#endif
>  /*
>   * Local variables:
>   * mode: C
> diff -r 32034d1914a6 xen/arch/x86/traps.c
> --- a/xen/arch/x86/traps.c	Thu Jun 07 19:46:57 2012 +0100
> +++ b/xen/arch/x86/traps.c	Wed Aug 29 14:39:57 2012 -0700
> @@ -225,7 +225,7 @@
>  
>  #else
>  
> -static void show_trace(struct cpu_user_regs *regs)
> +void show_trace(struct cpu_user_regs *regs)

If you're making this an exported function, the prototype should be
added to the appropriate header.

>  {
>      unsigned long *frame, next, addr, low, high;
>  
> @@ -3326,6 +3326,10 @@
>      if ( nmi_callback(regs, cpu) )
>          return;
>  
> +#ifdef XEN_KDB_CONFIG
> +    if (kdb_enabled && kdb_handle_trap_entry(TRAP_nmi, regs))
> +        return;
> +#endif
>      if ( nmi_watchdog )
>          nmi_watchdog_tick(regs);
>  
> diff -r 32034d1914a6 xen/arch/x86/x86_64/compat/entry.S
> --- a/xen/arch/x86/x86_64/compat/entry.S	Thu Jun 07 19:46:57 2012 +0100
> +++ b/xen/arch/x86/x86_64/compat/entry.S	Wed Aug 29 14:39:57 2012 -0700
> @@ -95,6 +95,10 @@
>  /* %rbx: struct vcpu */
>  ENTRY(compat_test_all_events)
>          cli                             # tests must not race interrupts
> +#ifdef XEN_KDB_CONFIG
> +        testl $1, kdb_session_begun(%rip)
> +        jnz   compat_restore_all_guest
> +#endif
>  /*compat_test_softirqs:*/
>          movl  VCPU_processor(%rbx),%eax
>          shlq  $IRQSTAT_shift,%rax
> diff -r 32034d1914a6 xen/arch/x86/x86_64/entry.S
> --- a/xen/arch/x86/x86_64/entry.S	Thu Jun 07 19:46:57 2012 +0100
> +++ b/xen/arch/x86/x86_64/entry.S	Wed Aug 29 14:39:57 2012 -0700
> @@ -184,6 +184,10 @@
>  /* %rbx: struct vcpu */
>  test_all_events:
>          cli                             # tests must not race interrupts
> +#ifdef XEN_KDB_CONFIG                   /* 64bit dom0 will resume here */
> +        testl $1, kdb_session_begun(%rip)
> +        jnz   restore_all_guest
> +#endif
>  /*test_softirqs:*/  
>          movl  VCPU_processor(%rbx),%eax
>          shl   $IRQSTAT_shift,%rax
> @@ -546,6 +550,13 @@
>  
>  ENTRY(int3)
>          pushq $0
> +#ifdef XEN_KDB_CONFIG
> +        pushq %rax
> +        GET_CPUINFO_FIELD(CPUINFO_processor_id, %rax)
> +        movq  (%rax), %rax
> +        lock  bts %rax, kdb_cpu_traps(%rip)
> +        popq  %rax
> +#endif
>          movl  $TRAP_int3,4(%rsp)
>          jmp   handle_exception
>  
> diff -r 32034d1914a6 xen/common/domain.c
> --- a/xen/common/domain.c	Thu Jun 07 19:46:57 2012 +0100
> +++ b/xen/common/domain.c	Wed Aug 29 14:39:57 2012 -0700
> @@ -530,6 +530,14 @@
>  {
>      struct vcpu *v;
>  
> +#ifdef XEN_KDB_CONFIG
> +    if (reason == SHUTDOWN_crash) {
> +        if ( IS_PRIV(d) )
> +            kdb_trap_immed(KDB_TRAP_FATAL);
> +        else
> +            kdb_trap_immed(KDB_TRAP_NONFATAL);
> +    }

I think that this behavior should be runtime definable.
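
E.g. a knob along these lines - opt_crash_debug and the helper names
are invented for illustration; in Xen it would be declared with
boolean_param() and perhaps toggled from a kdb command:

```c
/* Hypothetical runtime knob; the real thing would be
 * boolean_param("crash-debug", opt_crash_debug). */
static int opt_crash_debug;

enum kdb_trap { KDB_TRAP_NONE, KDB_TRAP_NONFATAL, KDB_TRAP_FATAL };

/* Decide what domain_shutdown(SHUTDOWN_crash) should do: enter the
 * debugger only when the knob is set, fatal only for privileged
 * domains, as in the patch's IS_PRIV() check. */
enum kdb_trap crash_debug_action(int is_priv)
{
    if (!opt_crash_debug)
        return KDB_TRAP_NONE;
    return is_priv ? KDB_TRAP_FATAL : KDB_TRAP_NONFATAL;
}

void set_crash_debug(int on) { opt_crash_debug = on; }
```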

> +#endif
>      spin_lock(&d->shutdown_lock);
>  
>      if ( d->shutdown_code == -1 )
> @@ -624,7 +632,9 @@
>      for_each_vcpu ( d, v )
>          vcpu_sleep_nosync(v);
>  
> -    send_global_virq(VIRQ_DEBUGGER);
> +    /* send VIRQ_DEBUGGER to guest only if gdbsx_vcpu_event is not active */
> +    if (current->arch.gdbsx_vcpu_event == 0)
> +        send_global_virq(VIRQ_DEBUGGER);
>  }
>  
>  /* Complete domain destroy after RCU readers are not holding old references. */
> diff -r 32034d1914a6 xen/common/sched_credit.c
> --- a/xen/common/sched_credit.c	Thu Jun 07 19:46:57 2012 +0100
> +++ b/xen/common/sched_credit.c	Wed Aug 29 14:39:57 2012 -0700
> @@ -1475,6 +1475,33 @@
>      printk("\n");
>  }
>  
> +#ifdef XEN_KDB_CONFIG
> +static void kdb_csched_dump(int cpu)
> +{
> +    struct csched_pcpu *pcpup = CSCHED_PCPU(cpu);
> +    struct vcpu *scurrvp = (CSCHED_VCPU(current))->vcpu;
> +    struct list_head *tmp, *runq = RUNQ(cpu);
> +
> +    kdbp("    csched_pcpu: %p\n", pcpup);
> +    kdbp("    curr csched:%p {vcpu:%p id:%d domid:%d}\n", (current)->sched_priv,
> +         scurrvp, scurrvp->vcpu_id, scurrvp->domain->domain_id);
> +    kdbp("    runq:\n");
> +
> +    /* runq_elem.next is first in the struct, so we can cast instead of
> +     * using the hard-to-follow container macros */
> +    if (offsetof(struct csched_vcpu, runq_elem.next) != 0) {
> +        kdbp("next is not first in struct csched_vcpu. please fixme\n");
> +        return;        /* otherwise for loop will crash */
> +    }
> +    for (tmp = runq->next; tmp != runq; tmp = tmp->next) {
> +
> +        struct csched_vcpu *csp = (struct csched_vcpu *)tmp;
> +        struct vcpu *vp = csp->vcpu;
> +        kdbp("      csp:%p pri:%02d vcpu: {p:%p id:%d domid:%d}\n", csp,
> +             csp->pri, vp, vp->vcpu_id, vp->domain->domain_id);
> +    }
> +}
> +#endif

I'd think that we would want the "sched" command in {x,h}db to provide
the same output as the debug key handler. Is there any way to re-use
generic .dump_* functions and simply have printk() redirect to use the
kdbp() for writing its output when a {x,h}db session is active?
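
A rough model of what I mean - printk writing through a replaceable
sink that an active kdb session can install, so the generic dump
handlers print through kdbp()'s polled-serial path. The sink hook does
not exist in Xen's printk today; every name here is invented:

```c
#include <stdarg.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical: printk output goes through a replaceable sink. */
typedef void (*vprint_sink_t)(const char *fmt, va_list ap);

static void console_sink(const char *fmt, va_list ap) { vprintf(fmt, ap); }
static vprint_sink_t cur_sink = console_sink;

void printk_set_sink(vprint_sink_t s) { cur_sink = s ? s : console_sink; }

void my_printk(const char *fmt, ...)
{
    va_list ap;
    va_start(ap, fmt);
    cur_sink(fmt, ap);
    va_end(ap);
}

/* kdb's sink: in Xen this would feed the polled serial line via kdbp();
 * here it just captures the formatted line so we can inspect it. */
static char kdb_buf[256];
static void kdb_sink(const char *fmt, va_list ap)
{
    vsnprintf(kdb_buf, sizeof(kdb_buf), fmt, ap);
}
const char *kdb_last_line(void) { return kdb_buf; }
void kdb_install_sink(void) { printk_set_sink(kdb_sink); }
```

With that, csched_dump_pcpu() and friends would not need parallel
kdb_*_dump variants at all.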

>  static void
>  csched_dump_pcpu(const struct scheduler *ops, int cpu)
>  {
> @@ -1484,6 +1511,10 @@
>      int loop;
>  #define cpustr keyhandler_scratch
>  
> +#ifdef XEN_KDB_CONFIG
> +    kdb_csched_dump(cpu);
> +    return;
> +#endif
>      spc = CSCHED_PCPU(cpu);
>      runq = &spc->runq;
>  
> diff -r 32034d1914a6 xen/common/schedule.c
> --- a/xen/common/schedule.c	Thu Jun 07 19:46:57 2012 +0100
> +++ b/xen/common/schedule.c	Wed Aug 29 14:39:57 2012 -0700
> @@ -1454,6 +1454,25 @@
>      schedule();
>  }
>  
> +#ifdef XEN_KDB_CONFIG
> +void kdb_print_sched_info(void)
> +{
> +    int cpu;
> +
> +    kdbp("Scheduler: name:%s opt_name:%s id:%d\n", ops.name, ops.opt_name,
> +         ops.sched_id);
> +    kdbp("per cpu schedule_data:\n");
> +    for_each_online_cpu(cpu) {
> +        struct schedule_data *p =  &per_cpu(schedule_data, cpu);
> +        kdbp("  cpu:%d  &(per cpu)schedule_data:%p\n", cpu, p);
> +        kdbp("         curr:%p sched_priv:%p\n", p->curr, p->sched_priv);
> +        kdbp("\n");
> +        ops.dump_cpu_state(&ops, cpu);
> +        kdbp("\n");
> +    }
> +}
> +#endif
> +
>  #ifdef CONFIG_COMPAT
>  #include "compat/schedule.c"
>  #endif
> diff -r 32034d1914a6 xen/common/symbols.c
> --- a/xen/common/symbols.c	Thu Jun 07 19:46:57 2012 +0100
> +++ b/xen/common/symbols.c	Wed Aug 29 14:39:57 2012 -0700
> @@ -168,3 +168,21 @@
>  
>      spin_unlock_irqrestore(&lock, flags);
>  }
> +
> +#ifdef XEN_KDB_CONFIG
> +/*
> + * Given a symbol, return its address.
> + */
> +unsigned long address_lookup(char *symp)
> +{
> +    int i, off = 0;
> +    char namebuf[KSYM_NAME_LEN+1];
> +
> +    for (i=0; i < symbols_num_syms; i++) {
> +        off = symbols_expand_symbol(off, namebuf);
> +        if (strcmp(namebuf, symp) == 0)                  /* found it */
> +            return symbols_address(i);
> +    }
> +    return 0;
> +}
> +#endif
> diff -r 32034d1914a6 xen/common/timer.c
> --- a/xen/common/timer.c	Thu Jun 07 19:46:57 2012 +0100
> +++ b/xen/common/timer.c	Wed Aug 29 14:39:57 2012 -0700
> @@ -643,6 +643,48 @@
>      register_keyhandler('a', &dump_timerq_keyhandler);
>  }
>  
> +#ifdef XEN_KDB_CONFIG
> +#include <xen/symbols.h>
> +void kdb_dump_timer_queues(void)
> +{
> +    struct timer  *t;
> +    struct timers *ts;
> +    unsigned long sz, offs;
> +    char buf[KSYM_NAME_LEN+1];
> +    int cpu, j;
> +    u64 tsc;
> +
> +    for_each_online_cpu( cpu )
> +    {
> +        ts = &per_cpu(timers, cpu);
> +        kdbp("CPU[%02d]:", cpu);
> +
> +        if (cpu == smp_processor_id()) {
> +            s_time_t now = NOW();
> +            rdtscll(tsc);
> +            kdbp("NOW:0x%08x%08x TSC:0x%016lx\n", (u32)(now>>32),(u32)now, tsc);
> +        } else
> +            kdbp("\n");
> +
> +        /* timers in the heap */
> +        for ( j = 1; j <= GET_HEAP_SIZE(ts->heap); j++ ) {
> +            t = ts->heap[j];
> +            kdbp("  %d: exp=0x%08x%08x fn:%s data:%p\n",
> +                 j, (u32)(t->expires>>32), (u32)t->expires,
> +                 symbols_lookup((unsigned long)t->function, &sz, &offs, buf),
> +                 t->data);
> +        }
> +        /* timers on the link list */
> +        for ( t = ts->list, j = 0; t != NULL; t = t->list_next, j++ ) {
> +            kdbp(" L%d: exp=0x%08x%08x fn:%s data:%p\n",
> +                 j, (u32)(t->expires>>32), (u32)t->expires,
> +                 symbols_lookup((unsigned long)t->function, &sz, &offs, buf),
> +                 t->data);
> +        }
> +    }
> +}
> +#endif
> +
>  /*
>   * Local variables:
>   * mode: C
> diff -r 32034d1914a6 xen/drivers/char/console.c
> --- a/xen/drivers/char/console.c	Thu Jun 07 19:46:57 2012 +0100
> +++ b/xen/drivers/char/console.c	Wed Aug 29 14:39:57 2012 -0700
> @@ -295,6 +295,21 @@
>  {
>      static int switch_code_count = 0;
>  
> +#ifdef XEN_KDB_CONFIG
> +    /* if ctrl-\ pressed and kdb handles it, return */
> +    if (kdb_enabled && c == 0x1c) {

This should at least be a named constant, and making it boot-time
configurable would be nice.
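
E.g. a tiny parser for a hypothetical kdb_key= boot option, defaulting
to Ctrl-\ (0x1c) as in the patch - the option name and helpers are
made up:

```c
#include <ctype.h>

/* Hypothetical boot option: accepts "^X" for a control key, or a
 * literal character. Default stays Ctrl-\ (0x1c). */
static unsigned char kdb_switch_char = 0x1c;

int parse_kdb_key(const char *s)
{
    if (s[0] == '^' && s[1] != '\0') {
        /* "^\\" -> 0x1c, "^a" -> 0x01, etc. */
        kdb_switch_char = toupper((unsigned char)s[1]) & 0x1f;
        return 0;
    }
    if (s[0] != '\0') {
        kdb_switch_char = (unsigned char)s[0];
        return 0;
    }
    return -1;      /* keep the default on an empty string */
}

unsigned char get_kdb_switch_char(void) { return kdb_switch_char; }
```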

> +        if (!kdb_session_begun) {
> +            if (kdb_keyboard(regs))
> +                return;
> +        } else {
> +            kdbp("Sorry... kdb session already active.. please try again..\n");
> +            return;
> +        }
> +    }
> +    if (kdb_session_begun)      /* kdb should already be polling */
> +        return;                 /* swallow chars so they don't buffer in dom0 */
> +#endif
> +
>      if ( switch_code && (c == switch_code) )
>      {
>          /* We eat CTRL-<switch_char> in groups of 3 to switch console input. */
> @@ -710,6 +725,18 @@
>      atomic_dec(&print_everything);
>  }
>  
> +#ifdef XEN_KDB_CONFIG
> +void console_putc(char c)
> +{
> +    serial_putc(sercon_handle, c);
> +}
> +
> +int console_getc(void)
> +{
> +    return serial_getc(sercon_handle);
> +}
> +#endif
> +
>  /*
>   * printk rate limiting, lifted from Linux.
>   *
> diff -r 32034d1914a6 xen/include/asm-x86/debugger.h
> --- a/xen/include/asm-x86/debugger.h	Thu Jun 07 19:46:57 2012 +0100
> +++ b/xen/include/asm-x86/debugger.h	Wed Aug 29 14:39:57 2012 -0700
> @@ -39,7 +39,11 @@
>  #define DEBUGGER_trap_fatal(_v, _r) \
>      if ( debugger_trap_fatal(_v, _r) ) return;
>  
> -#if defined(CRASH_DEBUG)
> +#if defined(XEN_KDB_CONFIG)
> +#define debugger_trap_immediate() kdb_trap_immed(KDB_TRAP_NONFATAL)
> +#define debugger_trap_fatal(_v, _r) kdb_trap_fatal(_v, _r)
> +
> +#elif defined(CRASH_DEBUG)
>  
>  #include <xen/gdbstub.h>
>  
> @@ -70,6 +74,10 @@
>  {
>      struct vcpu *v = current;
>  
> +#ifdef XEN_KDB_CONFIG
> +    if (kdb_handle_trap_entry(vector, regs))
> +        return 1;
> +#endif
>      if ( guest_kernel_mode(v, regs) && v->domain->debugger_attached &&
>           ((vector == TRAP_int3) || (vector == TRAP_debug)) )
>      {
> diff -r 32034d1914a6 xen/include/xen/lib.h
> --- a/xen/include/xen/lib.h	Thu Jun 07 19:46:57 2012 +0100
> +++ b/xen/include/xen/lib.h	Wed Aug 29 14:39:57 2012 -0700
> @@ -116,4 +116,7 @@
>  struct cpu_user_regs;
>  void dump_execstate(struct cpu_user_regs *);
>  
> +#ifdef XEN_KDB_CONFIG
> +#include "../../kdb/include/kdb_extern.h"
> +#endif
>  #endif /* __LIB_H__ */
> diff -r 32034d1914a6 xen/include/xen/sched.h
> --- a/xen/include/xen/sched.h	Thu Jun 07 19:46:57 2012 +0100
> +++ b/xen/include/xen/sched.h	Wed Aug 29 14:39:57 2012 -0700
> @@ -576,11 +576,14 @@
>  unsigned long hypercall_create_continuation(
>      unsigned int op, const char *format, ...);
>  void hypercall_cancel_continuation(void);
> -
> +#ifdef XEN_KDB_CONFIG
> +#define hypercall_preempt_check() (0)
> +#else
>  #define hypercall_preempt_check() (unlikely(    \
>          softirq_pending(smp_processor_id()) |   \
>          local_events_need_delivery()            \
>      ))
> +#endif
>  
>  extern struct domain *domain_list;

Thanks for posting this, Mukesh. I think that further splitting of the
changes could be helpful. I wonder if that would be easier to do in a
git repository.

Matt

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Aug 30 21:35:24 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Aug 2012 21:35:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7COD-00031V-LB; Thu, 30 Aug 2012 21:35:01 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1T7COB-00031M-MQ
	for xen-devel@lists.xen.org; Thu, 30 Aug 2012 21:34:59 +0000
Received: from [85.158.138.51:62265] by server-7.bemta-3.messagelabs.com id
	E6/EB-32000-28CDF305; Thu, 30 Aug 2012 21:34:58 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-3.tower-174.messagelabs.com!1346362498!19702539!1
X-Originating-IP: [188.40.164.121]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32219 invoked from network); 30 Aug 2012 21:34:58 -0000
Received: from static.121.164.40.188.clients.your-server.de (HELO
	smtp.eikelenboom.it) (188.40.164.121)
	by server-3.tower-174.messagelabs.com with AES256-SHA encrypted SMTP;
	30 Aug 2012 21:34:58 -0000
Received: from 26-69-ftth.onsneteindhoven.nl ([88.159.69.26]:57686
	helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.72) (envelope-from <linux@eikelenboom.it>)
	id 1T7CL3-0006e1-UU; Thu, 30 Aug 2012 23:31:46 +0200
Date: Thu, 30 Aug 2012 23:34:54 +0200
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <19610206885.20120830233454@eikelenboom.it>
To: Olaf Hering <olaf@aepfle.de>
In-Reply-To: <20120830211716.GA24154@aepfle.de>
References: <1304893920.20120830225344@eikelenboom.it>
	<20120830211716.GA24154@aepfle.de>
MIME-Version: 1.0
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel]
	/usr/src/new/xen-unstable.hg/tools/qemu-xen-dir/fpu/softfloat-specialize.h:92:
	error: initializer element is not constant
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello Olaf,

Ah yes, good point: after setting them for a kernel build, I had set them to "" instead of unsetting them completely ...

Thx !

--
Sander

Thursday, August 30, 2012, 11:17:16 PM, you wrote:

> On Thu, Aug 30, Sander Eikelenboom wrote:

>> In file included from /usr/src/new/xen-unstable.hg/tools/qemu-xen-dir/fpu/softfloat.c:60:
>> /usr/src/new/xen-unstable.hg/tools/qemu-xen-dir/fpu/softfloat-specialize.h:92: error: initializer element is not constant
>> /usr/src/new/xen-unstable.hg/tools/qemu-xen-dir/fpu/softfloat-specialize.h:107: error: initializer element is not constant

> I have seen that too months ago, but can't remember the solution.  Did
> you export CFLAGS before the build by any chance?

> Olaf




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 00:35:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 00:35:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7FCI-0004z8-TY; Fri, 31 Aug 2012 00:34:54 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <jun.nakajima@intel.com>) id 1T7FCH-0004z3-RD
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 00:34:54 +0000
X-Env-Sender: jun.nakajima@intel.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1346373287!1905672!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzM3Njgz\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17330 invoked from network); 31 Aug 2012 00:34:47 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-15.tower-27.messagelabs.com with SMTP;
	31 Aug 2012 00:34:47 -0000
Received: from mail-qa0-f52.google.com ([209.85.216.52])
	by mga09.intel.com with ESMTP/TLS/RC4-SHA; 30 Aug 2012 17:34:43 -0700
Received: by qabg14 with SMTP id g14so670783qab.11
	for <xen-devel@lists.xen.org>; Thu, 30 Aug 2012 17:34:45 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type
	:x-gm-message-state;
	bh=nWXhMOqdiSy5tONuU2YkdiiTKXCE+iCbedD+kqo85qg=;
	b=SrObtqvxNhY6nXTRAi5zngyWOmJO6GzLFgO7VwGQZHMDCvMDELzfDeQL395Q+Gi09n
	gE8ruzHjqUb2YXTG8CgGnycDc6UnCauoIymi43RcA6GBYEHl0iSTyrpmPjBSXL/omofE
	AXOC5kKG9C83SJWWnnugOC0RSKoU8ybP/aIj7TrMXZCaAgk1Y7o1lej01DJzeqqMjezk
	oWB7CdqnlgJ5REJBdtd06H6B7uvRLam+Ny8MMbD1U0Ye9cROU+0i3G9Md8++mEeSVGKA
	aXPvxYM8o7CLuESJRb8QgiGSmYwQzOGw59wWUvEhvEHqkiAgVgnXS53SA41rrH3dLlJU
	qW4A==
MIME-Version: 1.0
Received: by 10.224.175.19 with SMTP id v19mr14672622qaz.78.1346373285310;
	Thu, 30 Aug 2012 17:34:45 -0700 (PDT)
Received: by 10.229.33.138 with HTTP; Thu, 30 Aug 2012 17:34:45 -0700 (PDT)
Date: Thu, 30 Aug 2012 17:34:45 -0700
Message-ID: <CAL54oT1cktaUScYFbOCzWtKn2zzHZ=BCjeaBCKi9VbxX0jQW-g@mail.gmail.com>
From: "Nakajima, Jun" <jun.nakajima@intel.com>
To: xen-devel <xen-devel@lists.xen.org>, kvm@vger.kernel.org
X-Gm-Message-State: ALoCoQl7vPJWo2qUk5rgVbT1VIlKMJ+s1wxH8AJ3M/uc55r6eymDXNDWxXCt0dGZiL+Av4wpPMn6
Subject: [Xen-devel] SDM Updates available
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6084656290218510372=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============6084656290218510372==
Content-Type: multipart/alternative; boundary=485b397dd30967f14004c884f510

--485b397dd30967f14004c884f510
Content-Type: text/plain; charset=ISO-8859-1

It includes the new VT features for interrupt/APIC virtualization that I
mentioned at XenSummit and Linux Plumbers Conference.

http://www.intel.com/content/www/us/en/processors/architectures-software-developer-manuals.html


Enjoy.
-- 
Jun
Intel Open Source Technology Center

--485b397dd30967f14004c884f510--


--===============6084656290218510372==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6084656290218510372==--


From xen-devel-bounces@lists.xen.org Fri Aug 31 00:35:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 00:35:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7FCI-0004z8-TY; Fri, 31 Aug 2012 00:34:54 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <jun.nakajima@intel.com>) id 1T7FCH-0004z3-RD
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 00:34:54 +0000
X-Env-Sender: jun.nakajima@intel.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1346373287!1905672!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzM3Njgz\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17330 invoked from network); 31 Aug 2012 00:34:47 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-15.tower-27.messagelabs.com with SMTP;
	31 Aug 2012 00:34:47 -0000
Received: from mail-qa0-f52.google.com ([209.85.216.52])
	by mga09.intel.com with ESMTP/TLS/RC4-SHA; 30 Aug 2012 17:34:43 -0700
Received: by qabg14 with SMTP id g14so670783qab.11
	for <xen-devel@lists.xen.org>; Thu, 30 Aug 2012 17:34:45 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type
	:x-gm-message-state;
	bh=nWXhMOqdiSy5tONuU2YkdiiTKXCE+iCbedD+kqo85qg=;
	b=SrObtqvxNhY6nXTRAi5zngyWOmJO6GzLFgO7VwGQZHMDCvMDELzfDeQL395Q+Gi09n
	gE8ruzHjqUb2YXTG8CgGnycDc6UnCauoIymi43RcA6GBYEHl0iSTyrpmPjBSXL/omofE
	AXOC5kKG9C83SJWWnnugOC0RSKoU8ybP/aIj7TrMXZCaAgk1Y7o1lej01DJzeqqMjezk
	oWB7CdqnlgJ5REJBdtd06H6B7uvRLam+Ny8MMbD1U0Ye9cROU+0i3G9Md8++mEeSVGKA
	aXPvxYM8o7CLuESJRb8QgiGSmYwQzOGw59wWUvEhvEHqkiAgVgnXS53SA41rrH3dLlJU
	qW4A==
MIME-Version: 1.0
Received: by 10.224.175.19 with SMTP id v19mr14672622qaz.78.1346373285310;
	Thu, 30 Aug 2012 17:34:45 -0700 (PDT)
Received: by 10.229.33.138 with HTTP; Thu, 30 Aug 2012 17:34:45 -0700 (PDT)
Date: Thu, 30 Aug 2012 17:34:45 -0700
Message-ID: <CAL54oT1cktaUScYFbOCzWtKn2zzHZ=BCjeaBCKi9VbxX0jQW-g@mail.gmail.com>
From: "Nakajima, Jun" <jun.nakajima@intel.com>
To: xen-devel <xen-devel@lists.xen.org>, kvm@vger.kernel.org
X-Gm-Message-State: ALoCoQl7vPJWo2qUk5rgVbT1VIlKMJ+s1wxH8AJ3M/uc55r6eymDXNDWxXCt0dGZiL+Av4wpPMn6
Subject: [Xen-devel] SDM Updates available
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6084656290218510372=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============6084656290218510372==
Content-Type: multipart/alternative; boundary=485b397dd30967f14004c884f510

--485b397dd30967f14004c884f510
Content-Type: text/plain; charset=ISO-8859-1

It includes the new VT features for interrupt/APIC virtualization that I
mentioned at XenSummit and Linux Plumbers Conference.

http://www.intel.com/content/www/us/en/processors/architectures-software-developer-manuals.html


Enjoy.
-- 
Jun
Intel Open Source Technology Center

--485b397dd30967f14004c884f510
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

It includes the new VT features for interrupt/APIC virtualization that I me=
ntioned at XenSummit and Linux Plumbers Conference.<div><br></div><div><a h=
ref=3D"http://www.intel.com/content/www/us/en/processors/architectures-soft=
ware-developer-manuals.html" target=3D"_blank" style=3D"color:rgb(17,85,204=
);font-family:arial,sans-serif;font-size:12.727272033691406px;background-co=
lor:rgb(255,255,255)">http://www.intel.com/content/www/us/en/processors/arc=
hitectures-software-developer-manuals.html</a></div>
<div><br><div><div><div><br></div><div>Enjoy.</div>-- <br><div><div>Jun</di=
v><div><div>Intel Open Source Technology Center</div></div></div><br>
</div></div></div>

--485b397dd30967f14004c884f510--


--===============6084656290218510372==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6084656290218510372==--


From xen-devel-bounces@lists.xen.org Fri Aug 31 01:15:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 01:15:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7Fp0-0000bw-9B; Fri, 31 Aug 2012 01:14:54 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T7Foy-0000br-2b
	for xen-devel@lists.xensource.com; Fri, 31 Aug 2012 01:14:52 +0000
Received: from [85.158.143.35:31662] by server-1.bemta-4.messagelabs.com id
	13/AF-12504-B0010405; Fri, 31 Aug 2012 01:14:51 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-10.tower-21.messagelabs.com!1346375690!10478418!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTExNjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2075 invoked from network); 31 Aug 2012 01:14:50 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-10.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 01:14:50 -0000
X-IronPort-AV: E=Sophos;i="4.80,344,1344211200"; d="scan'208";a="14279042"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 01:14:49 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Fri, 31 Aug 2012 02:14:49 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1T7Fov-000616-BE;
	Fri, 31 Aug 2012 01:14:49 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1T7Fou-000251-PY;
	Fri, 31 Aug 2012 02:14:49 +0100
To: xen-devel@lists.xensource.com
Message-ID: <osstest-13639-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 31 Aug 2012 02:14:48 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13639: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13639 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13639/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13638
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13638
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13638
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13638

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass

version targeted for testing:
 xen                  9e5665f9f430
baseline version:
 xen                  a0b5f8102a00

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Dongxiao Xu <dongxiao.xu@intel.com>
  Jonathan Tripathy <jonnyt@abpni.co.uk>
  Keir Fraser <keir@xen.org>
  Olaf Hering <olaf@aepfle.de>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=9e5665f9f430
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable 9e5665f9f430
+ branch=xen-unstable
+ revision=9e5665f9f430
+ . cri-lock-repos
++ . cri-common
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readconfigonly();
                print $c{Repos} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=linux
+ : master
+ : tested/2.6.39.x
+ . ap-common
++ : xen@xenbits.xensource.com
++ : http://xenbits.xen.org/staging/xen-unstable.hg
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : xen@xenbits.xensource.com:git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : master
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=xen@xenbits.xensource.com:git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=xen@xenbits.xensource.com:git/qemu-upstream-unstable.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen-unstable.hg
+ hg push -r 9e5665f9f430 ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
pushing to ssh://xen@xenbits.xensource.com/HG/xen-unstable.hg
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 5 changesets with 11 changes to 9 files

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 04:12:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 04:12:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7Iaa-0001un-ME; Fri, 31 Aug 2012 04:12:12 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mmlogothetis@gmail.com>) id 1T7IaZ-0001ui-Gk
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 04:12:11 +0000
Received: from [85.158.138.51:5557] by server-3.bemta-3.messagelabs.com id
	82/B1-21322-A9930405; Fri, 31 Aug 2012 04:12:10 +0000
X-Env-Sender: mmlogothetis@gmail.com
X-Msg-Ref: server-2.tower-174.messagelabs.com!1346386326!27833125!1
X-Originating-IP: [209.85.210.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10062 invoked from network); 31 Aug 2012 04:12:08 -0000
Received: from mail-pz0-f45.google.com (HELO mail-pz0-f45.google.com)
	(209.85.210.45)
	by server-2.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 04:12:08 -0000
Received: by dadn15 with SMTP id n15so1572974dad.32
	for <xen-devel@lists.xen.org>; Thu, 30 Aug 2012 21:12:06 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:message-id:thread-topic
	:mime-version:content-type:content-transfer-encoding;
	bh=246C2nq7UKfwDm//59HzzAgZVWe0Buu2bnIAfRo8NKg=;
	b=xHaWIGHMSA+l5GIgx4c/v4Wsv4ZHFlRMHfGHhoMuWraWOlf0su7FLqeBljx7OKY+x1
	oPK0eqy2L3nB6Sc1ApBXJODSHC7JN5pDaC8W+We7GeCom8Ge5pnqjTmYtlMSb9ZJocEz
	NqUriT6GiGFyDzBMMM1MLxc7rX4ShUNjwOr1LWPWTJlsN3bUPQ2ab6AW1iKay8ezAgH3
	V8cALEdsLIxYFAPeDevne9dNBE6/B0yLYPRJFw7V4cOvDMceqwkHkSdFRAwBGObMWiPo
	MkkcSVHE6IUkkNFqZPHXPvMRMbqph+j+aVLRdpjJwgbwUwgPf/bUkXTvd+nPBVBXw0aA
	fCbA==
Received: by 10.68.227.169 with SMTP id sb9mr15196299pbc.104.1346386326099;
	Thu, 30 Aug 2012 21:12:06 -0700 (PDT)
Received: from [10.10.3.114] (0127ahost2.starwoodbroadband.com. [12.105.246.2])
	by mx.google.com with ESMTPS id gv1sm2685525pbc.38.2012.08.30.21.12.03
	(version=SSLv3 cipher=OTHER); Thu, 30 Aug 2012 21:12:04 -0700 (PDT)
User-Agent: Microsoft-MacOutlook/14.2.3.120616
Date: Thu, 30 Aug 2012 21:12:01 -0700
From: Michael Logothetis <mmlogothetis@gmail.com>
To: <xen-devel@lists.xen.org>
Message-ID: <CC6587A1.1AA01%mmlogothetis@gmail.com>
Thread-Topic: xl regression: Support backend domain ID for disks for
	compatibility with xm
Mime-version: 1.0
Subject: [Xen-devel] xl regression: Support backend domain ID for disks for
 compatibility with xm
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The current 4.2 unstable does not seem to support xl block-attach
in the same way that xm did.

The backend domain always defaults to 0 (Dom0) in:

tools/libxl/xl_cmdimpl.c

which prevents using a driver domain as a block device backend.

Looks like there is a patch from Daniel De Graaf discussed here:

http://lists.xen.org/archives/html/xen-devel/2012-08/msg00648.html

which should be included in 4.2.

--
Michael Logothetis.





_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	(version=SSLv3 cipher=OTHER); Thu, 30 Aug 2012 21:12:04 -0700 (PDT)
User-Agent: Microsoft-MacOutlook/14.2.3.120616
Date: Thu, 30 Aug 2012 21:12:01 -0700
From: Michael Logothetis <mmlogothetis@gmail.com>
To: <xen-devel@lists.xen.org>
Message-ID: <CC6587A1.1AA01%mmlogothetis@gmail.com>
Thread-Topic: xl regression: Support backend domain ID for disks for
	compatibility with xm
Mime-version: 1.0
Subject: [Xen-devel] xl regression: Support backend domain ID for disks for
 compatibility with xm
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The current 4.2-unstable tree does not appear to support xl block-attach
in the same way that xm did.

The backend domain always defaults to 0 (Dom0) in:

tools/libxl/xl_cmdimpl.c

which prevents using a driver domain as a block-device backend.

There appears to be a patch from Daniel De Graaf, discussed here:

http://lists.xen.org/archives/html/xen-devel/2012-08/msg00648.html

which should be included in 4.2.
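
For comparison, the two toolstacks express the backend domain like this.
This is an illustrative sketch, not taken from the thread: the guest name
guest1, the volume path, and the driver-domain name storagedom are made up,
and the xl form assumes the backend= disk-spec key is honoured once the
referenced patch is applied (it requires a Xen host, so it is not runnable
as-is):

```
# xm: the backend domain is the optional fifth argument to block-attach
xm block-attach guest1 phy:/dev/vg0/guest1-data xvdb w storagedom

# xl: the disk specification names the backend domain with a backend= key
# (only honoured once the patch referenced above is applied)
xl block-attach guest1 'backend=storagedom,vdev=xvdb,access=rw,target=/dev/vg0/guest1-data'

# The same key would go in a guest config file:
#   disk = [ 'backend=storagedom,vdev=xvdb,access=rw,target=/dev/vg0/guest1-data' ]
```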

--
Michael Logothetis.





_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 06:00:06 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 06:00:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7KGO-0002Zo-7L; Fri, 31 Aug 2012 05:59:28 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <maheen_butt26@yahoo.com>) id 1T7KGM-0002Zj-2N
	for xen-devel@lists.xensource.com; Fri, 31 Aug 2012 05:59:26 +0000
X-Env-Sender: maheen_butt26@yahoo.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1346392757!8879848!1
X-Originating-IP: [98.138.91.48]
X-SpamReason: No, hits=1.2 required=7.0 tests=BODY_RANDOM_LONG,
	FROM_HAS_ULINE_NUMS,ML_RADAR_SPEW_LINKS_12,ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31232 invoked from network); 31 Aug 2012 05:59:18 -0000
Received: from nm13-vm0.bullet.mail.ne1.yahoo.com (HELO
	nm13-vm0.bullet.mail.ne1.yahoo.com) (98.138.91.48)
	by server-2.tower-27.messagelabs.com with SMTP;
	31 Aug 2012 05:59:18 -0000
Received: from [98.138.90.49] by nm13.bullet.mail.ne1.yahoo.com with NNFMP;
	31 Aug 2012 05:59:17 -0000
Received: from [98.138.89.250] by tm2.bullet.mail.ne1.yahoo.com with NNFMP;
	31 Aug 2012 05:59:17 -0000
Received: from [127.0.0.1] by omp1042.mail.ne1.yahoo.com with NNFMP;
	31 Aug 2012 05:59:17 -0000
X-Yahoo-Newman-Property: ymail-3
X-Yahoo-Newman-Id: 262448.18414.bm@omp1042.mail.ne1.yahoo.com
Received: (qmail 86162 invoked by uid 60001); 31 Aug 2012 05:59:17 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024;
	t=1346392757; bh=bcVqo65ZmoSprMTKwewEfpnlIVIRnvOwq/SLFfSF+ps=;
	h=X-YMail-OSG:Received:X-Mailer:Message-ID:Date:From:Reply-To:Subject:To:MIME-Version:Content-Type:Content-Transfer-Encoding;
	b=6jwCO5Fxl6Ikx6g4Uf8Yj5efxvf08zc231bUI8MnUg5648xQps4jQRjqtkJOuU/Qux+VPqkBFf35GltltsnueadnWq2DImltTShPuzio0H3NCCSiMdaxqGWkAZVpHsAYGp76TSUmCI5hCR6aOzCbap+0+47W/0ixhmggJKPtlQc=
DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=s1024; d=yahoo.com;
	h=X-YMail-OSG:Received:X-Mailer:Message-ID:Date:From:Reply-To:Subject:To:MIME-Version:Content-Type:Content-Transfer-Encoding;
	b=Q841kOVELj6eyT51dJH52Defu1ZUbCHMjhSnTVJgafHUcy0YEEySJdZFSimvGtcmYe+Xmj1Pz8xgUOnDFx3zzunSFOdI+ldeiB6B8ATAW5i8hPdTR7f+Khr+ZEWqeasKkTdrtpTkOxsD0AzX3CYbh56fIpgmvdZsFG89GbKzyNE=;
X-YMail-OSG: 2kvTA4cVM1ku7aKWHcAh3cznyzeVqxZfYCjTMXo0kHbM9vm
	4SEO61OD.dRlH4yl4y1SDnEKlumQtr8ttvuNuCfOBatXDrP1nbKfiGnkM4gP
	VLlcbeDHsR6RoqZZhuBsxbqSPsDIG2m.ZVAZtFX6Hgl1hNQ6iY00l3VfRv6Q
	wOw5UAWj.1OvVQkKLRBeou62YQ_7f3I67oR1NUHwTIZqQXPyuEeCjU2_4qqJ
	q4UnyYtFXVfzlLYX9kO1oR.dkb9LJlvnksBf0axuTEA0oJ0UEJhwUAosN8cy
	u8DGZhQk9lFRQisU.HGnAoVXvamacxJc2274y.aUmGKmLvcTJ4NZp3l2cThW
	VsQ8NsaEz.6ud_Qc6iGSh0WZyOJG._gOmbHt4Y4LyIYxK37m5hHeKGfWK5cB
	bmyQChw6jGEhqk77dH2EoVR4qtwfYj0qgBRXKB69Zd5ujQpNLiQEslXU5tbE
	limV8zRO8sawK8TbyoU1W8QQoig--
Received: from [111.68.102.23] by web126004.mail.ne1.yahoo.com via HTTP;
	Thu, 30 Aug 2012 22:59:17 PDT
X-Mailer: YahooMailWebService/0.8.121.416
Message-ID: <1346392757.85980.YahooMailNeo@web126004.mail.ne1.yahoo.com>
Date: Thu, 30 Aug 2012 22:59:17 -0700 (PDT)
From: maheen butt <maheen_butt26@yahoo.com>
To: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
MIME-Version: 1.0
Subject: [Xen-devel] Xen port for MIPS
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: maheen butt <maheen_butt26@yahoo.com>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi all,

I want to port Xen to the MIPS64 architecture as my MS thesis. My strategy
is to work out what changes are required to turn a kernel into a
Xen-enabled kernel for x86, and then make the corresponding changes to the
MIPS portion of the code.

I have two issues here:
1) The above strategy covers the Xen-enabled kernel, but I have no clear
picture of the Xen hypervisor itself.
2) I'm currently working on the 2.6.18 xenlinux tree. I'm not using a
current kernel because newer kernels use pv_ops, and no such interface
exists for MIPS (given that I'm following x86). I know my knowledge of
pv_ops is very limited.

So can anybody guide me on the correct strategy? Can I use a newer kernel
(anything with version >= 2.6.26) and xen-unstable while still avoiding
pv_ops? We are a two-member team. Any guess how much time it will take?

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 07:02:47 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 07:02:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7LFC-0003UN-7G; Fri, 31 Aug 2012 07:02:18 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T7LFA-0003UI-6h
	for xen-devel@lists.xensource.com; Fri, 31 Aug 2012 07:02:16 +0000
Received: from [85.158.143.99:36603] by server-2.bemta-4.messagelabs.com id
	ED/EA-21239-77160405; Fri, 31 Aug 2012 07:02:15 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-216.messagelabs.com!1346396533!20731462!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTExNjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4572 invoked from network); 31 Aug 2012 07:02:13 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-6.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 07:02:13 -0000
X-IronPort-AV: E=Sophos;i="4.80,346,1344211200"; d="scan'208";a="14280957"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 07:02:13 +0000
Received: from [127.0.0.1] (10.80.16.67) by smtprelay.citrix.com
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1;
	Fri, 31 Aug 2012 08:02:13 +0100
Message-ID: <1346396532.5820.1.camel@dagon.hellion.org.uk>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Andres Lagar-Cavilla <andreslc@gridcentric.ca>
Date: Fri, 31 Aug 2012 08:02:12 +0100
In-Reply-To: <165680AD-2970-4A8D-BEC0-B02830F8A97F@gridcentric.ca>
References: <1346331492-15027-1-git-send-email-david.vrabel@citrix.com>
	<1346331492-15027-3-git-send-email-david.vrabel@citrix.com>
	<12E7F3C7-86B7-4B6B-8F53-23CCFCEF80FB@gridcentric.ca>
	<503F9D14.8000600@citrix.com>
	<165680AD-2970-4A8D-BEC0-B02830F8A97F@gridcentric.ca>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	David Vrabel <david.vrabel@citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [PATCH 2/2] xen/privcmd: add PRIVCMD_MMAPBATCH_V2
 ioctl
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-30 at 19:29 +0100, Andres Lagar-Cavilla wrote:
> On Aug 30, 2012, at 1:04 PM, David Vrabel wrote:
> > You can't access user space pages here while holding
> > current->mm->mmap_sem.  I tried this and it would sometimes deadlock in
> > the page fault handler.
> > 
> > access_ok() only checks if the pointer is in the user space virtual
> > address space - not that a valid mapping exists and is writable.  So
> > BUG_ON(__put_user()) should not be done.
> 
> Very true. Thanks for the pointer. Clearly the reason for the gather_array/traverse_pages structure.

/me has flashbacks to the mammoth debugging session which led to
http://xenbits.xen.org/hg/linux-2.6.18-xen.hg/rev/043dc7488c11.

Ian.
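
The safe pattern David describes (the reason for the gather_array/
traverse_pages structure) can be sketched as follows. This is an
illustrative, non-compilable C-style fragment, not the real privcmd code;
kbuf, uarr, nents and traverse() are hypothetical names:

```c
/* Illustrative sketch only -- not the actual privcmd implementation. */

/* 1. gather: copy the user array into kernel memory BEFORE locking.
 *    copy_from_user() may fault, which is safe here because we do not
 *    yet hold current->mm->mmap_sem. */
if (copy_from_user(kbuf, uarr, nents * sizeof(*kbuf)))
        return -EFAULT;

down_write(&current->mm->mmap_sem);
/* 2. traverse: operate only on the kernel-side copy while locked.
 *    Touching user pages here can re-enter the fault handler and
 *    deadlock on mmap_sem. */
rc = traverse(kbuf, nents);
up_write(&current->mm->mmap_sem);

/* 3. write results back only after the lock is dropped; and since
 *    access_ok() proves nothing about the mapping being present and
 *    writable, a failing copy_to_user()/__put_user() must be handled
 *    as an error, never BUG_ON()'d. */
if (copy_to_user(uarr, kbuf, nents * sizeof(*kbuf)))
        return -EFAULT;
```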



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 07:28:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 07:28:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7Le3-0003jr-GN; Fri, 31 Aug 2012 07:27:59 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yongjie.ren@intel.com>) id 1T7Le2-0003jm-2n
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 07:27:58 +0000
Received: from [85.158.143.99:10995] by server-2.bemta-4.messagelabs.com id
	4C/CD-21239-D7760405; Fri, 31 Aug 2012 07:27:57 +0000
X-Env-Sender: yongjie.ren@intel.com
X-Msg-Ref: server-8.tower-216.messagelabs.com!1346398075!17998829!1
X-Originating-IP: [143.182.124.21]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMjEgPT4gMTkxNTky\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 445 invoked from network); 31 Aug 2012 07:27:56 -0000
Received: from mga03.intel.com (HELO mga03.intel.com) (143.182.124.21)
	by server-8.tower-216.messagelabs.com with SMTP;
	31 Aug 2012 07:27:56 -0000
Received: from azsmga001.ch.intel.com ([10.2.17.19])
	by azsmga101.ch.intel.com with ESMTP; 31 Aug 2012 00:27:54 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.80,346,1344236400"; d="scan'208";a="187259749"
Received: from fmsmsx106.amr.corp.intel.com ([10.19.9.37])
	by azsmga001.ch.intel.com with ESMTP; 31 Aug 2012 00:27:54 -0700
Received: from shsmsx102.ccr.corp.intel.com (10.239.4.154) by
	FMSMSX106.amr.corp.intel.com (10.19.9.37) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Fri, 31 Aug 2012 00:27:53 -0700
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.239]) by
	SHSMSX102.ccr.corp.intel.com ([169.254.2.92]) with mapi id
	14.01.0355.002; Fri, 31 Aug 2012 15:27:52 +0800
From: "Ren, Yongjie" <yongjie.ren@intel.com>
To: 'xen-devel' <xen-devel@lists.xen.org>
Thread-Topic: Xen4.2-rc3 test result
Thread-Index: Ac2HShtAedsoaH/XR0eHCAg2EulTuA==
Date: Fri, 31 Aug 2012 07:27:51 +0000
Message-ID: <1B4B44D9196EFF41AE41FDA404FC0A1018AF31@SHSMSX101.ccr.corp.intel.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: 'Keir Fraser' <keir@xen.org>, 'Ian Campbell' <Ian.Campbell@citrix.com>,
	'Jan Beulich' <JBeulich@suse.com>,
	'Konrad Rzeszutek Wilk' <konrad.wilk@oracle.com>
Subject: [Xen-devel] Xen4.2-rc3 test result
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi All,
We ran a round of testing on Xen 4.2 RC3 (CS# 25784) with a Linux 3.5.2 dom0.
We found no new issues and verified one fixed bug.

Fixed bug (1):
1. Long stall during the guest boot process with a qcow image
  http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1821
  -- Fixed by reverting a bad commit about "O_DIRECT to open IDE block device".

The following are some of the old issues that we consider important:
1. Failure to probe the NIC driver in an HVM domU (with 3.5/3.6 Linux as dom0)
  http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1824
  -- We have already identified the offending commit in the Linux tree. Konrad will try to fix it.
2. Poor performance during guest save/restore and migration with a Linux 3.x dom0
  http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1784
3. 'xl vcpu-set' can't decrease the vCPU count of an HVM guest
  http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1822
4. After detaching a VF from a guest, shutting the guest down is very slow
  http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1812
5. Dom0 cannot be shut down before PCI detachment from a guest, or when PCI assignment conflicts
  http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1826
6. Guest hangs after resuming from S3
  http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1828
7. Dom0 S3 resume fails
  http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1707

Best Regards,
     Yongjie (Jay)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 07:37:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 07:37:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7LmV-0003y1-Lm; Fri, 31 Aug 2012 07:36:43 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T7LmT-0003xw-Pv
	for xen-devel@lists.xensource.com; Fri, 31 Aug 2012 07:36:41 +0000
Received: from [85.158.143.35:8442] by server-1.bemta-4.messagelabs.com id
	27/4F-12504-98960405; Fri, 31 Aug 2012 07:36:41 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1346398586!5095239!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTExNjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24548 invoked from network); 31 Aug 2012 07:36:27 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 07:36:27 -0000
X-IronPort-AV: E=Sophos;i="4.80,346,1344211200"; d="scan'208";a="14281451"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 07:36:06 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Fri, 31 Aug 2012 08:36:06 +0100
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1T7Llu-0000A2-7b;
	Fri, 31 Aug 2012 07:36:06 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1T7Llt-0002O0-Uz;
	Fri, 31 Aug 2012 08:36:06 +0100
To: xen-devel@lists.xensource.com
Message-ID: <osstest-13640-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 31 Aug 2012 08:36:06 +0100
MIME-Version: 1.0
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 13640: tolerable FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 13640 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/13640/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-win7-amd64  7 windows-install            fail pass in 13639

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 13639
 test-amd64-amd64-xl-qemuu-winxpsp3  9 guest-localmigrate       fail like 13639
 test-i386-i386-xl-qemuu-winxpsp3  9 guest-localmigrate         fail like 13639
 test-amd64-amd64-xl-qemuu-win7-amd64  9 guest-localmigrate     fail like 13639

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop           fail in 13639 never pass

version targeted for testing:
 xen                  9e5665f9f430
baseline version:
 xen                  9e5665f9f430

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-win-vcpus1                                   fail    
 test-amd64-i386-xl-win-vcpus1                                fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-amd64-win                                         fail    
 test-amd64-i386-win                                          fail    
 test-i386-i386-win                                           fail    
 test-amd64-amd64-xl-win                                      fail    
 test-i386-i386-xl-win                                        fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 08:02:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 08:02:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7MBO-0004b2-37; Fri, 31 Aug 2012 08:02:26 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T7MBM-0004ax-NB
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 08:02:24 +0000
Received: from [85.158.143.35:15290] by server-3.bemta-4.messagelabs.com id
	DE/B7-08232-F8F60405; Fri, 31 Aug 2012 08:02:23 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1346400137!14661893!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTExNjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31785 invoked from network); 31 Aug 2012 08:02:18 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 08:02:18 -0000
X-IronPort-AV: E=Sophos;i="4.80,346,1344211200"; d="scan'208";a="14281933"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 08:02:00 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1;
	Fri, 31 Aug 2012 09:02:00 +0100
Message-ID: <1346400119.27277.91.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Michael Logothetis <mmlogothetis@gmail.com>
Date: Fri, 31 Aug 2012 09:01:59 +0100
In-Reply-To: <CC6587A1.1AA01%mmlogothetis@gmail.com>
References: <CC6587A1.1AA01%mmlogothetis@gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] xl regression: Support backend domain ID for disks
 for compatibility with xm
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-08-31 at 05:12 +0100, Michael Logothetis wrote:
> The current 4.2 unstable does not seem to support xl block-attach
> in the same way that xm did.
> 
> The backend domain always defaults to 0 (Dom0) in:
> 
> tools/libxl/xl_cmdimpl.c
> 
> which prevents using a driver domain as a block device backend.
> 
> Looks like there is a patch from Daniel De Graaf discussed here:
> 
> http://lists.xen.org/archives/html/xen-devel/2012-08/msg00648.html
> 
> which should be included in 4.2.

As you can see from the thread we are waiting for an updated patch from
Daniel. I'll ping him on that thread.

Ian.
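[For reference, the xm command the thread is comparing against accepted an optional backend-domain argument, which current xl lacks. A rough sketch of the two invocations; all domain and device names here are hypothetical, and the xm syntax is recalled from its manual rather than verified against this tree:

```
# xm: optional backend domain as the final argument
#   xm block-attach <domain> <back-dev> <front-dev> <mode> [back-domain]
xm block-attach guest1 phy:/dev/vg/disk1 xvdb w storagedom

# xl: the backend is currently hard-wired to 0 (Dom0)
xl block-attach guest1 phy:/dev/vg/disk1 xvdb w
```
]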



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 08:04:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 08:04:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7MDQ-0004jF-Ju; Fri, 31 Aug 2012 08:04:32 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T7MDO-0004j4-T2
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 08:04:31 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1346400262!6501180!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTExNjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 476 invoked from network); 31 Aug 2012 08:04:22 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 08:04:22 -0000
X-IronPort-AV: E=Sophos;i="4.80,346,1344211200"; d="scan'208";a="14281973"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 08:04:22 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1;
	Fri, 31 Aug 2012 09:04:22 +0100
Message-ID: <1346400260.27277.93.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Date: Fri, 31 Aug 2012 09:04:20 +0100
In-Reply-To: <1344289916-11518-1-git-send-email-dgdegra@tycho.nsa.gov>
References: <1344289916-11518-1-git-send-email-dgdegra@tycho.nsa.gov>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH RFC/for-4.2?] libxl: Support backend domain
	ID for disks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Daniel,
On Mon, 2012-08-06 at 22:51 +0100, Daniel De Graaf wrote:
> Allow specification of backend domains for disks, either in the config
> file or via xl block-attach.

Were you intending to resubmit this patch for 4.2? We're pretty close to
cutting what we hope will be the final RC.

We are intending to take a slightly more liberal approach than usual to
backports for 4.2.1, to allow xl features that improve parity with xm, so
this might be a candidate for that.

Ian.
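[The quoted commit message below describes pairing a backend domain with
script=/bin/true so the hotplug script does not run in the domain where libxl
performs the attach. In a guest config file that would look roughly like this;
the backend= key and the driver-domain name 'storagedom' are assumptions based
on this thread, not the final merged syntax:

```
# Hypothetical cfg fragment: disk served by a driver domain rather than Dom0.
# script=/bin/true keeps the local block hotplug script from running.
disk = [ 'backend=storagedom,vdev=xvda,format=raw,script=/bin/true,target=/dev/vg/guest-root' ]
```
]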

> A version of this patch was submitted in October 2011 but was not
> suitable at the time because libxl did not support the "script=" option
> for disks. Now that this option exists, it is possible to specify a
> backend domain without needing to duplicate the device tree of the
> domain providing the disk in the domain using libxl; just specify
> script=/bin/true (or any more useful script) to prevent the block script
> from running in the domain using libxl.
> 
> In order to support named backend domains, as network-attach does, the
> prototype of xlu_disk_parse in libxlutil.h needs a libxl_ctx. Without
> this parameter, it would only be possible to support numeric domain
> IDs in the block device specification.
> 
> Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
> 
> ---
> 
> This patch does not include the changes to tools/libxl/libxlu_disk_l.c
> and tools/libxl/libxlu_disk_l.h because the diffs contain unrelated
> changes due to different generator versions.
> 
>  tools/libxl/libxlu_disk.c   |   3 +-
>  tools/libxl/libxlu_disk_i.h |   3 +-
>  tools/libxl/libxlu_disk_l.c | 581 ++++++++++++++++++++++----------------------
>  tools/libxl/libxlu_disk_l.h |  24 +-
>  tools/libxl/libxlu_disk_l.l |   8 +
>  tools/libxl/libxlutil.h     |   2 +-
>  tools/libxl/xl_cmdimpl.c    |   6 +-
>  7 files changed, 319 insertions(+), 308 deletions(-)
> 
> diff --git a/tools/libxl/libxlu_disk.c b/tools/libxl/libxlu_disk.c
> index 18fe386..1e6caca 100644
> --- a/tools/libxl/libxlu_disk.c
> +++ b/tools/libxl/libxlu_disk.c
> @@ -48,7 +48,7 @@ static void dpc_dispose(DiskParseContext *dpc) {
> 
>  int xlu_disk_parse(XLU_Config *cfg,
>                     int nspecs, const char *const *specs,
> -                   libxl_device_disk *disk) {
> +                   libxl_device_disk *disk, libxl_ctx *ctx) {
>      DiskParseContext dpc;
>      int i, e;
> 
> @@ -56,6 +56,7 @@ int xlu_disk_parse(XLU_Config *cfg,
>      dpc.cfg = cfg;
>      dpc.scanner = 0;
>      dpc.disk = disk;
> +    dpc.ctx = ctx;
> 
>      disk->readwrite = 1;
> 
> diff --git a/tools/libxl/libxlu_disk_i.h b/tools/libxl/libxlu_disk_i.h
> index 4fccd4a..c220bcf 100644
> --- a/tools/libxl/libxlu_disk_i.h
> +++ b/tools/libxl/libxlu_disk_i.h
> @@ -2,7 +2,7 @@
>  #define LIBXLU_DISK_I_H
> 
>  #include "libxlu_internal.h"
> -
> +#include "libxl_utils.h"
> 
>  typedef struct {
>      XLU_Config *cfg;
> @@ -12,6 +12,7 @@ typedef struct {
>      libxl_device_disk *disk;
>      int access_set, had_depr_prefix;
>      const char *spec;
> +    libxl_ctx *ctx;
>  } DiskParseContext;
> 
>  void xlu__disk_err(DiskParseContext *dpc, const char *erroneous,
> diff --git a/tools/libxl/libxlu_disk_l.c b/tools/libxl/libxlu_disk_l.c
> index 4c68034..4e17f7c 100644
> --- a/tools/libxl/libxlu_disk_l.c
> +++ b/tools/libxl/libxlu_disk_l.c
> @@ -58,6 +58,7 @@ typedef int flex_int32_t;
>  typedef unsigned char flex_uint8_t;
>  typedef unsigned short int flex_uint16_t;
>  typedef unsigned int flex_uint32_t;
> +#endif /* ! C99 */
> 
>  /* Limits of integral types. */
>  #ifndef INT8_MIN
> @@ -88,8 +89,6 @@ typedef unsigned int flex_uint32_t;
>  #define UINT32_MAX             (4294967295U)
>  #endif
> 
> -#endif /* ! C99 */
> -
>  #endif /* ! FLEXINT_H */
> 
>  #ifdef __cplusplus
> @@ -163,15 +162,7 @@ typedef void* yyscan_t;
> 
>  /* Size of default input buffer. */
>  #ifndef YY_BUF_SIZE
> -#ifdef __ia64__
> -/* On IA-64, the buffer size is 16k, not 8k.
> - * Moreover, YY_BUF_SIZE is 2*YY_READ_BUF_SIZE in the general case.
> - * Ditto for the __ia64__ case accordingly.
> - */
> -#define YY_BUF_SIZE 32768
> -#else
>  #define YY_BUF_SIZE 16384
> -#endif /* __ia64__ */
>  #endif
> 
>  /* The state buf must be large enough to hold one state per character in the main buffer.
> @@ -361,8 +352,8 @@ static void yy_fatal_error (yyconst char msg[] ,yyscan_t yyscanner );
>         *yy_cp = '\0'; \
>         yyg->yy_c_buf_p = yy_cp;
> 
> -#define YY_NUM_RULES 25
> -#define YY_END_OF_BUFFER 26
> +#define YY_NUM_RULES 26
> +#define YY_END_OF_BUFFER 27
>  /* This struct is not used in this scanner,
>     but its presence is necessary. */
>  struct yy_trans_info
> @@ -370,60 +361,61 @@ struct yy_trans_info
>         flex_int32_t yy_verify;
>         flex_int32_t yy_nxt;
>         };
> -static yyconst flex_int16_t yy_acclist[447] =
> +static yyconst flex_int16_t yy_acclist[460] =
>      {   0,
> -       24,   24,   26,   22,   23,   25, 8193,   22,   23,   25,
> -    16385, 8193,   22,   25,16385,   22,   23,   25,   23,   25,
> -       22,   23,   25,   22,   23,   25,   22,   23,   25,   22,
> -       23,   25,   22,   23,   25,   22,   23,   25,   22,   23,
> -       25,   22,   23,   25,   22,   23,   25,   22,   23,   25,
> -       22,   23,   25,   22,   23,   25,   22,   23,   25,   22,
> -       23,   25,   22,   23,   25,   24,   25,   25,   22,   22,
> -     8193,   22, 8193,   22,16385, 8193,   22, 8193,   22,   22,
> -     8213,   22,16405,   22,   22,   22,   22,   22,   22,   22,
> -       22,   22,   22,   22,   22,   22,   22,   22,   22,   22,
> -
> -       22,   22,   24, 8193,   22, 8193,   22, 8193, 8213,   22,
> -     8213,   22, 8213,   12,   22,   22,   22,   22,   22,   22,
> -       22,   22,   22,   22,   22,   22,   22,   22,   22,   22,
> -       22,   22, 8213,   22, 8213,   22, 8213,   12,   22,   17,
> -     8213,   22,16405,   22,   22,   22,   22,   22,   22,   22,
> -     8206, 8213,   22,16398,16405,   20, 8213,   22,16405,   22,
> -     8205, 8213,   22,16397,16405,   22,   22, 8208, 8213,   22,
> -    16400,16405,   22,   22,   22,   22,   17, 8213,   22,   17,
> -     8213,   22,   17,   22,   17, 8213,   22,    3,   22,   22,
> -       19, 8213,   22,16405,   22,   22, 8206, 8213,   22, 8206,
> -
> -     8213,   22, 8206,   22, 8206, 8213,   20, 8213,   22,   20,
> -     8213,   22,   20,   22,   20, 8213, 8205, 8213,   22, 8205,
> -     8213,   22, 8205,   22, 8205, 8213,   22, 8208, 8213,   22,
> -     8208, 8213,   22, 8208,   22, 8208, 8213,   22,   22,    9,
> -       22,   17, 8213,   22,   17, 8213,   22,   17, 8213,   17,
> -       22,   17,   22,    3,   22,   22,   19, 8213,   22,   19,
> -     8213,   22,   19,   22,   19, 8213,   22,   18, 8213,   22,
> -    16405, 8206, 8213,   22, 8206, 8213,   22, 8206, 8213, 8206,
> -       22, 8206,   20, 8213,   22,   20, 8213,   22,   20, 8213,
> -       20,   22,   20, 8205, 8213,   22, 8205, 8213,   22, 8205,
> -
> -     8213, 8205,   22, 8205,   22, 8208, 8213,   22, 8208, 8213,
> -       22, 8208, 8213, 8208,   22, 8208,   22,   22,    9,   12,
> -        9,    7,   22,   22,   19, 8213,   22,   19, 8213,   22,
> -       19, 8213,   19,   22,   19,    2,   18, 8213,   22,   18,
> -     8213,   22,   18,   22,   18, 8213,   10,   22,   11,    9,
> -        9,   12,    7,   12,    7,   22,    6,    2,   12,    2,
> -       18, 8213,   22,   18, 8213,   22,   18, 8213,   18,   22,
> -       18,   10,   12,   10,   15, 8213,   22,16405,   11,   12,
> -       11,    7,    7,   12,   22,    6,   12,    6,    6,   12,
> -        6,   12,    2,    2,   12,   10,   10,   12,   15, 8213,
> -
> -       22,   15, 8213,   22,   15,   22,   15, 8213,   11,   12,
> -       22,    6,    6,   12,    6,    6,   15, 8213,   22,   15,
> -     8213,   22,   15, 8213,   15,   22,   15,   22,    6,    6,
> -        8,    6,    5,    6,    8,   12,    8,    4,    6,    5,
> -        6,    8,    8,   12,    4,    6
> +       25,   25,   27,   23,   24,   26, 8193,   23,   24,   26,
> +    16385, 8193,   23,   26,16385,   23,   24,   26,   24,   26,
> +       23,   24,   26,   23,   24,   26,   23,   24,   26,   23,
> +       24,   26,   23,   24,   26,   23,   24,   26,   23,   24,
> +       26,   23,   24,   26,   23,   24,   26,   23,   24,   26,
> +       23,   24,   26,   23,   24,   26,   23,   24,   26,   23,
> +       24,   26,   23,   24,   26,   25,   26,   26,   23,   23,
> +     8193,   23, 8193,   23,16385, 8193,   23, 8193,   23,   23,
> +     8214,   23,16406,   23,   23,   23,   23,   23,   23,   23,
> +       23,   23,   23,   23,   23,   23,   23,   23,   23,   23,
> +
> +       23,   23,   25, 8193,   23, 8193,   23, 8193, 8214,   23,
> +     8214,   23, 8214,   13,   23,   23,   23,   23,   23,   23,
> +       23,   23,   23,   23,   23,   23,   23,   23,   23,   23,
> +       23,   23, 8214,   23, 8214,   23, 8214,   13,   23,   18,
> +     8214,   23,16406,   23,   23,   23,   23,   23,   23,   23,
> +     8207, 8214,   23,16399,16406,   21, 8214,   23,16406,   23,
> +     8206, 8214,   23,16398,16406,   23,   23, 8209, 8214,   23,
> +    16401,16406,   23,   23,   23,   23,   18, 8214,   23,   18,
> +     8214,   23,   18,   23,   18, 8214,   23,    3,   23,   23,
> +       20, 8214,   23,16406,   23,   23, 8207, 8214,   23, 8207,
> +
> +     8214,   23, 8207,   23, 8207, 8214,   21, 8214,   23,   21,
> +     8214,   23,   21,   23,   21, 8214, 8206, 8214,   23, 8206,
> +     8214,   23, 8206,   23, 8206, 8214,   23, 8209, 8214,   23,
> +     8209, 8214,   23, 8209,   23, 8209, 8214,   23,   23,   10,
> +       23,   18, 8214,   23,   18, 8214,   23,   18, 8214,   18,
> +       23,   18,   23,    3,   23,   23,   20, 8214,   23,   20,
> +     8214,   23,   20,   23,   20, 8214,   23,   19, 8214,   23,
> +    16406, 8207, 8214,   23, 8207, 8214,   23, 8207, 8214, 8207,
> +       23, 8207,   21, 8214,   23,   21, 8214,   23,   21, 8214,
> +       21,   23,   21, 8206, 8214,   23, 8206, 8214,   23, 8206,
> +
> +     8214, 8206,   23, 8206,   23, 8209, 8214,   23, 8209, 8214,
> +       23, 8209, 8214, 8209,   23, 8209,   23,   23,   10,   13,
> +       10,    7,   23,   23,   20, 8214,   23,   20, 8214,   23,
> +       20, 8214,   20,   23,   20,    2,   19, 8214,   23,   19,
> +     8214,   23,   19,   23,   19, 8214,   11,   23,   12,   10,
> +       10,   13,    7,   13,    7,   23,   23,    6,    2,   13,
> +        2,   19, 8214,   23,   19, 8214,   23,   19, 8214,   19,
> +       23,   19,   11,   13,   11,   16, 8214,   23,16406,   12,
> +       13,   12,    7,    7,   13,   23,   23,    6,   13,    6,
> +        6,   13,    6,   13,    2,    2,   13,   11,   11,   13,
> +
> +       16, 8214,   23,   16, 8214,   23,   16,   23,   16, 8214,
> +       12,   13,   23,   23,    6,    6,   13,    6,    6,   16,
> +     8214,   23,   16, 8214,   23,   16, 8214,   16,   23,   16,
> +       23,   23,    6,    6,   23,    8,    6,    5,    6,   23,
> +        8,   13,    8,    4,    6,    5,    6,    9,    8,    8,
> +       13,    4,    6,    9,   13,    9,    9,    9,   13
>      } ;
> 
> -static yyconst flex_int16_t yy_accept[252] =
> +static yyconst flex_int16_t yy_accept[263] =
>      {   0,
>          1,    1,    1,    2,    3,    4,    7,   12,   16,   19,
>         21,   24,   27,   30,   33,   36,   39,   42,   45,   48,
> @@ -445,14 +437,15 @@ static yyconst flex_int16_t yy_accept[252] =
>        293,  294,  297,  300,  302,  304,  305,  306,  309,  312,
>        314,  316,  317,  318,  319,  321,  322,  323,  324,  325,
>        328,  331,  333,  335,  336,  337,  340,  343,  345,  347,
> -      348,  349,  350,  351,  353,  355,  356,  357,  358,  360,
> -
> -      361,  364,  367,  369,  371,  372,  374,  375,  379,  381,
> -      382,  383,  385,  386,  388,  389,  391,  393,  394,  396,
> -      397,  399,  402,  405,  407,  409,  411,  412,  413,  415,
> -      416,  417,  420,  423,  425,  427,  428,  429,  430,  431,
> -      432,  433,  435,  437,  438,  440,  442,  443,  445,  447,
> -      447
> +      348,  349,  350,  351,  353,  355,  356,  357,  358,  359,
> +
> +      361,  362,  365,  368,  370,  372,  373,  375,  376,  380,
> +      382,  383,  384,  386,  387,  388,  390,  391,  393,  395,
> +      396,  398,  399,  401,  404,  407,  409,  411,  413,  414,
> +      415,  416,  418,  419,  420,  423,  426,  428,  430,  431,
> +      432,  433,  434,  435,  436,  437,  438,  440,  441,  443,
> +      444,  446,  448,  449,  450,  452,  454,  456,  457,  458,
> +      460,  460
>      } ;
> 
>  static yyconst flex_int32_t yy_ec[256] =
> @@ -495,83 +488,85 @@ static yyconst flex_int32_t yy_meta[34] =
>          1,    1,    1
>      } ;
> 
> -static yyconst flex_int16_t yy_base[308] =
> +static yyconst flex_int16_t yy_base[321] =
>      {   0,
> -        0,    0,  546,  538,  533,  521,   32,   35,  656,  656,
> -       44,   62,   30,   41,   50,   51,  507,   64,   47,   66,
> -       67,  499,   68,  487,   72,    0,  656,  465,  656,   87,
> -       91,    0,    0,  100,  452,  109,    0,   74,   95,   87,
> +        0,    0,  644,  632,  623,  595,   32,   35,  670,  670,
> +       44,   62,   30,   41,   50,   51,  577,   64,   47,   66,
> +       67,  565,   68,  561,   72,    0,  670,  563,  670,   87,
> +       91,    0,    0,  100,  553,  109,    0,   74,   95,   87,
>         32,   96,  105,  110,   77,   97,   40,  113,  116,  112,
>        118,  120,  121,  122,  123,  125,    0,  137,    0,    0,
> -      147,    0,    0,  449,  129,  126,  134,  143,  145,  147,
> +      147,    0,    0,  551,  129,  126,  134,  143,  145,  147,
>        148,  149,  151,  153,  156,  160,  155,  167,  162,  175,
> -      168,  159,  188,    0,    0,  656,  166,  197,  179,  185,
> -      176,  200,  435,  186,  193,  216,  225,  205,  234,  221,
> +      168,  159,  188,    0,    0,  670,  166,  197,  179,  185,
> +      176,  200,  537,  186,  193,  216,  225,  205,  234,  221,
> 
>        237,  247,  204,  230,  244,  213,  254,    0,  256,    0,
>        251,  258,  254,  279,  256,  259,  267,    0,  269,    0,
>        286,    0,  288,    0,  290,    0,  297,    0,  267,  299,
> -        0,  301,    0,  288,  297,  421,  302,  310,    0,    0,
> -        0,    0,  305,  656,  307,  319,    0,  321,    0,  322,
> +        0,  301,    0,  288,  297,  535,  302,  310,    0,    0,
> +        0,    0,  305,  670,  307,  319,    0,  321,    0,  322,
>        332,  335,    0,    0,    0,    0,  339,    0,    0,    0,
>          0,  342,    0,    0,    0,    0,  340,  349,    0,    0,
> -        0,    0,  337,  345,  420,  656,  419,  346,  350,  358,
> -        0,    0,    0,    0,  418,  360,    0,  362,    0,  417,
> -      319,  369,  416,  656,  415,  656,  276,  364,  414,  656,
> -
> -      375,    0,    0,    0,    0,  413,  656,  384,  412,    0,
> -      410,  656,  370,  409,  656,  370,  378,  408,  656,  366,
> -      656,  394,    0,  396,    0,    0,  380,  316,  656,  377,
> -      387,  398,    0,    0,    0,    0,  399,  402,  407,  271,
> -      406,  228,  200,  656,  175,  656,   77,  656,  656,  656,
> -      428,  432,  435,  439,  443,  447,  451,  455,  459,  463,
> -      467,  471,  475,  479,  483,  487,  491,  495,  499,  503,
> -      507,  511,  515,  519,  523,  527,  531,  535,  539,  543,
> -      547,  551,  555,  559,  563,  567,  571,  575,  579,  583,
> -      587,  591,  595,  599,  603,  607,  611,  615,  619,  623,
> -
> -      627,  631,  635,  639,  643,  647,  651
> +        0,    0,  337,  345,  527,  670,  519,  346,  351,  359,
> +        0,    0,    0,    0,  511,  361,    0,  363,    0,  499,
> +      319,  370,  471,  670,  464,  670,  359,  276,  367,  455,
> +
> +      670,  373,    0,    0,    0,    0,  447,  670,  383,  429,
> +        0,  428,  670,  368,  371,  425,  670,  385,  389,  422,
> +      670,  421,  670,  391,    0,  399,    0,    0,  414,  387,
> +      419,  670,  395,  400,  402,    0,    0,    0,    0,  399,
> +      403,  406,  411,  404,  417,  412,  416,  409,  316,  670,
> +      271,  670,  228,  200,  670,  670,  175,  670,   77,  670,
> +      670,  434,  438,  441,  445,  449,  453,  457,  461,  465,
> +      469,  473,  477,  481,  485,  489,  493,  497,  501,  505,
> +      509,  513,  517,  521,  525,  529,  533,  537,  541,  545,
> +      549,  553,  557,  561,  565,  569,  573,  577,  581,  585,
> +
> +      589,  593,  597,  601,  605,  609,  613,  617,  621,  625,
> +      629,  633,  637,  641,  645,  649,  653,  657,  661,  665
>      } ;
> 
> -static yyconst flex_int16_t yy_def[308] =
> +static yyconst flex_int16_t yy_def[321] =
>      {   0,
> -      250,    1,  251,  251,  250,  252,  253,  253,  250,  250,
> -      254,  254,   12,   12,   12,   12,   12,   12,   12,   12,
> -       12,   12,   12,   12,   12,  255,  250,  252,  250,  256,
> -      253,  257,  257,  258,   12,  252,  259,   12,   12,   12,
> +      261,    1,  262,  262,  261,  263,  264,  264,  261,  261,
> +      265,  265,   12,   12,   12,   12,   12,   12,   12,   12,
> +       12,   12,   12,   12,   12,  266,  261,  263,  261,  267,
> +      264,  268,  268,  269,   12,  263,  270,   12,   12,   12,
>         12,   12,   12,   12,   12,   12,   12,   12,   12,   12,
> -       12,   12,   12,   12,   12,   12,  255,  256,  257,  257,
> -      260,  261,  261,  250,   12,   12,   12,   12,   12,   12,
> +       12,   12,   12,   12,   12,   12,  266,  267,  268,  268,
> +      271,  272,  272,  261,   12,   12,   12,   12,   12,   12,
>         12,   12,   12,   12,   12,   12,   12,   12,   12,   12,
> -       12,   12,  260,  261,  261,  250,   12,  262,   12,   12,
> -       12,   12,   12,   12,   12,  263,  264,   12,  265,   12,
> -
> -       12,  266,   12,   12,   12,   12,  267,  268,  262,  268,
> -       12,   12,   12,  269,   12,   12,  270,  271,  263,  271,
> -      272,  273,  264,  273,  274,  275,  265,  275,   12,  276,
> -      277,  266,  277,   12,   12,  278,   12,  267,  268,  268,
> -      279,  279,   12,  250,   12,  280,  281,  269,  281,   12,
> -      282,  270,  271,  271,  283,  283,  272,  273,  273,  284,
> -      284,  274,  275,  275,  285,  285,   12,  276,  277,  277,
> -      286,  286,   12,   12,  287,  250,  288,   12,   12,  280,
> -      281,  281,  289,  289,  290,  291,  292,  282,  292,  293,
> -       12,  294,  287,  250,  295,  250,   12,  296,  297,  250,
> -
> -      291,  292,  292,  298,  298,  299,  250,  300,  301,  301,
> -      295,  250,   12,  302,  250,  302,  302,  297,  250,  299,
> -      250,  303,  304,  300,  304,  301,   12,  302,  250,  302,
> -      302,  303,  304,  304,  305,  305,   12,  302,  302,  306,
> -      302,  302,  307,  250,  302,  250,  307,  250,  250,    0,
> -      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
> -      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
> -      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
> -      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
> -      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
> -
> -      250,  250,  250,  250,  250,  250,  250
> +       12,   12,  271,  272,  272,  261,   12,  273,   12,   12,
> +       12,   12,   12,   12,   12,  274,  275,   12,  276,   12,
> +
> +       12,  277,   12,   12,   12,   12,  278,  279,  273,  279,
> +       12,   12,   12,  280,   12,   12,  281,  282,  274,  282,
> +      283,  284,  275,  284,  285,  286,  276,  286,   12,  287,
> +      288,  277,  288,   12,   12,  289,   12,  278,  279,  279,
> +      290,  290,   12,  261,   12,  291,  292,  280,  292,   12,
> +      293,  281,  282,  282,  294,  294,  283,  284,  284,  295,
> +      295,  285,  286,  286,  296,  296,   12,  287,  288,  288,
> +      297,  297,   12,   12,  298,  261,  299,   12,   12,  291,
> +      292,  292,  300,  300,  301,  302,  303,  293,  303,  304,
> +       12,  305,  298,  261,  306,  261,   12,   12,  307,  308,
> +
> +      261,  302,  303,  303,  309,  309,  310,  261,  311,  312,
> +      312,  306,  261,   12,   12,  313,  261,  313,  313,  308,
> +      261,  310,  261,  314,  315,  311,  315,  312,   12,   12,
> +      313,  261,  313,  313,  314,  315,  315,  316,  316,   12,
> +       12,  313,  313,   12,  317,  313,  313,   12,  318,  261,
> +      313,  261,  319,  318,  261,  261,  320,  261,  320,  261,
> +        0,  261,  261,  261,  261,  261,  261,  261,  261,  261,
> +      261,  261,  261,  261,  261,  261,  261,  261,  261,  261,
> +      261,  261,  261,  261,  261,  261,  261,  261,  261,  261,
> +      261,  261,  261,  261,  261,  261,  261,  261,  261,  261,
> +
> +      261,  261,  261,  261,  261,  261,  261,  261,  261,  261,
> +      261,  261,  261,  261,  261,  261,  261,  261,  261,  261
>      } ;
> 
> -static yyconst flex_int16_t yy_nxt[690] =
> +static yyconst flex_int16_t yy_nxt[704] =
>      {   0,
>          6,    7,    8,    9,    6,    6,    6,    6,   10,   11,
>         12,   13,   14,   15,   16,   17,   17,   18,   17,   17,
> @@ -581,7 +576,7 @@ static yyconst flex_int16_t yy_nxt[690] =
>         35,   36,   37,   73,   42,   38,   35,   49,   68,   35,
>         35,   39,   28,   28,   28,   29,   34,   43,   45,   36,
>         37,   40,   44,   35,   46,   35,   35,   35,   51,   53,
> -      244,   35,   50,   35,   55,   65,   35,   47,   56,   28,
> +      258,   35,   50,   35,   55,   65,   35,   47,   56,   28,
>         59,   48,   31,   31,   32,   60,   35,   71,   67,   33,
> 
>         28,   28,   28,   29,   35,   35,   35,   28,   37,   61,
> @@ -591,66 +586,69 @@ static yyconst flex_int16_t yy_nxt[690] =
>         59,   77,   87,   35,   76,   60,   80,   79,   81,   28,
>         84,   78,   35,   89,   35,   85,   35,   35,   35,   75,
>         35,   92,   35,   96,   35,   35,   90,   97,   35,   35,
> -       93,   35,   94,   91,   99,   35,   35,   35,  249,  100,
> +       93,   35,   94,   91,   99,   35,   35,   35,  260,  100,
>         95,  101,  102,  104,   35,   35,   98,  103,   35,  105,
>         28,   84,  111,  106,   35,   35,   85,  107,  107,   61,
> 
> -      108,  107,   35,  248,  107,  110,  112,  114,  113,   35,
> +      108,  107,   35,  250,  107,  110,  112,  114,  113,   35,
>         75,   78,   99,   35,   35,  116,  117,  117,   61,  118,
>        117,  134,   35,  117,  120,  121,  121,   61,  122,  121,
> -       35,  246,  121,  124,  125,  125,   61,  126,  125,   35,
> +       35,  258,  121,  124,  125,  125,   61,  126,  125,   35,
>        137,  125,  128,  135,  102,  129,   35,  130,  130,   61,
>        131,  130,  136,   35,  130,  133,   28,  139,   28,  141,
>         35,  144,  140,   35,  142,   35,  151,   35,   35,   28,
> -      153,   28,  155,  143,  244,  154,   35,  156,  145,  146,
> +      153,   28,  155,  143,  256,  154,   35,  156,  145,  146,
>        146,   61,  147,  146,  150,   35,  146,  149,   28,  158,
>         28,  160,   28,  163,  159,  167,  161,   35,  164,   28,
> 
> -      165,   28,  169,   28,  171,  166,   35,  170,  213,  172,
> -      177,   35,   28,  139,   35,  173,   35,  178,  140,  215,
> -      179,   28,  181,   28,  183,  174,  208,  182,   35,  184,
> +      165,   28,  169,   28,  171,  166,   35,  170,  215,  172,
> +      177,   35,   28,  139,   35,  173,   35,  178,  140,  255,
> +      179,   28,  181,   28,  183,  174,  209,  182,   35,  184,
>        185,   35,  186,  186,   61,  187,  186,   28,  153,  186,
>        189,   28,  158,  154,   28,  163,   35,  159,  190,   35,
> -      164,   28,  169,  192,   35,   35,  191,  170,  198,   35,
> -       28,  181,   28,  202,   28,  204,  182,  215,  203,  207,
> -      205,   64,  210,  229,  197,  216,  217,   28,  202,   35,
> -      215,  229,  230,  203,  222,  222,   61,  223,  222,   35,
> -      215,  222,  225,  237,  227,  231,   28,  233,   28,  235,
> -
> -       28,  233,  234,  238,  236,  215,  234,  240,   35,  215,
> -      215,  200,  229,  196,  239,  226,  221,  219,  212,  176,
> -      207,  200,  196,  194,  176,  241,  242,  245,   26,   26,
> -       26,   26,   28,   28,   28,   30,   30,   30,   30,   35,
> -       35,   35,   35,   57,  115,   57,   57,   58,   58,   58,
> -       58,   60,   86,   60,   60,   34,   34,   34,   34,   64,
> -       64,   35,   64,   83,   83,   83,   83,   85,   29,   85,
> -       85,  109,  109,  109,  109,  119,  119,  119,  119,  123,
> -      123,  123,  123,  127,  127,  127,  127,  132,  132,  132,
> -      132,  138,  138,  138,  138,  140,   54,  140,  140,  148,
> -
> -      148,  148,  148,  152,  152,  152,  152,  154,   52,  154,
> -      154,  157,  157,  157,  157,  159,   35,  159,  159,  162,
> -      162,  162,  162,  164,   29,  164,  164,  168,  168,  168,
> -      168,  170,  250,  170,  170,  175,  175,  175,  175,  142,
> -       27,  142,  142,  180,  180,  180,  180,  182,   27,  182,
> -      182,  188,  188,  188,  188,  156,  250,  156,  156,  161,
> -      250,  161,  161,  166,  250,  166,  166,  172,  250,  172,
> -      172,  193,  193,  193,  193,  195,  195,  195,  195,  184,
> -      250,  184,  184,  199,  199,  199,  199,  201,  201,  201,
> -      201,  203,  250,  203,  203,  206,  206,  206,  206,  209,
> -
> -      209,  209,  209,  211,  211,  211,  211,  214,  214,  214,
> -      214,  218,  218,  218,  218,  205,  250,  205,  205,  220,
> -      220,  220,  220,  224,  224,  224,  224,  210,  250,  210,
> -      210,  228,  228,  228,  228,  232,  232,  232,  232,  234,
> -      250,  234,  234,  236,  250,  236,  236,  243,  243,  243,
> -      243,  247,  247,  247,  247,    5,  250,  250,  250,  250,
> -      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
> -      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
> -      250,  250,  250,  250,  250,  250,  250,  250,  250
> +      164,   28,  169,  192,   35,   35,  191,  170,  197,  199,
> +       35,   28,  181,   28,  203,   28,  205,  182,   35,  204,
> +      217,  206,   64,  211,  198,   28,  203,   35,  218,  219,
> +       35,  204,  214,  224,  224,   61,  225,  224,  232,  229,
> +      224,  227,  232,   28,  236,  230,   35,  233,  217,  237,
> +
> +      241,   28,  238,  217,   28,  236,  234,  239,   35,  217,
> +      237,  245,   35,   35,  217,  217,  244,  253,   35,  252,
> +      250,  242,  217,  240,  208,  201,  248,  243,  232,  246,
> +      247,  196,  228,  251,   26,   26,   26,   26,   28,   28,
> +       28,   30,   30,   30,   30,   35,   35,   35,   35,   57,
> +      223,   57,   57,   58,   58,   58,   58,   60,  221,   60,
> +       60,   34,   34,   34,   34,   64,   64,  213,   64,   83,
> +       83,   83,   83,   85,  176,   85,   85,  109,  109,  109,
> +      109,  119,  119,  119,  119,  123,  123,  123,  123,  127,
> +      127,  127,  127,  132,  132,  132,  132,  138,  138,  138,
> +
> +      138,  140,  208,  140,  140,  148,  148,  148,  148,  152,
> +      152,  152,  152,  154,  201,  154,  154,  157,  157,  157,
> +      157,  159,  196,  159,  159,  162,  162,  162,  162,  164,
> +      194,  164,  164,  168,  168,  168,  168,  170,  176,  170,
> +      170,  175,  175,  175,  175,  142,  115,  142,  142,  180,
> +      180,  180,  180,  182,   86,  182,  182,  188,  188,  188,
> +      188,  156,   35,  156,  156,  161,   29,  161,  161,  166,
> +       54,  166,  166,  172,   52,  172,  172,  193,  193,  193,
> +      193,  195,  195,  195,  195,  184,   35,  184,  184,  200,
> +      200,  200,  200,  202,  202,  202,  202,  204,   29,  204,
> +
> +      204,  207,  207,  207,  207,  210,  210,  210,  210,  212,
> +      212,  212,  212,  216,  216,  216,  216,  220,  220,  220,
> +      220,  206,  261,  206,  206,  222,  222,  222,  222,  226,
> +      226,  226,  226,  211,   27,  211,  211,  231,  231,  231,
> +      231,  235,  235,  235,  235,  237,   27,  237,  237,  239,
> +      261,  239,  239,  249,  249,  249,  249,  254,  254,  254,
> +      254,  257,  257,  257,  257,  259,  259,  259,  259,    5,
> +      261,  261,  261,  261,  261,  261,  261,  261,  261,  261,
> +      261,  261,  261,  261,  261,  261,  261,  261,  261,  261,
> +      261,  261,  261,  261,  261,  261,  261,  261,  261,  261,
> +
> +      261,  261,  261
>      } ;
> 
> -static yyconst flex_int16_t yy_chk[690] =
> +static yyconst flex_int16_t yy_chk[704] =
>      {   0,
>          1,    1,    1,    1,    1,    1,    1,    1,    1,    1,
>          1,    1,    1,    1,    1,    1,    1,    1,    1,    1,
> @@ -660,7 +658,7 @@ static yyconst flex_int16_t yy_chk[690] =
>         14,   11,   11,   47,   14,   11,   19,   19,   41,   15,
>         16,   11,   12,   12,   12,   12,   12,   14,   16,   12,
>         12,   12,   15,   18,   16,   20,   21,   23,   21,   23,
> -      247,   25,   20,   38,   25,   38,   45,   18,   25,   30,
> +      259,   25,   20,   38,   25,   38,   45,   18,   25,   30,
>         30,   18,   31,   31,   31,   30,   40,   45,   40,   31,
> 
>         34,   34,   34,   34,   39,   42,   46,   34,   34,   36,
> @@ -670,63 +668,66 @@ static yyconst flex_int16_t yy_chk[690] =
>         58,   51,   65,   67,   50,   58,   54,   53,   54,   61,
>         61,   52,   68,   67,   69,   61,   70,   71,   72,   70,
>         73,   71,   74,   75,   77,   75,   68,   76,   82,   76,
> -       72,   79,   73,   69,   78,   87,   78,   81,  245,   79,
> +       72,   79,   73,   69,   78,   87,   78,   81,  257,   79,
>         74,   80,   80,   81,   80,   91,   77,   80,   89,   82,
>         83,   83,   89,   87,   90,   94,   83,   88,   88,   88,
> 
> -       88,   88,   95,  243,   88,   88,   90,   92,   91,   92,
> +       88,   88,   95,  254,   88,   88,   90,   92,   91,   92,
>         95,   98,   98,  103,   98,   94,   96,   96,   96,   96,
>         96,  103,  106,   96,   96,   97,   97,   97,   97,   97,
> -      100,  242,   97,   97,   99,   99,   99,   99,   99,  104,
> +      100,  253,   97,   97,   99,   99,   99,   99,   99,  104,
>        106,   99,   99,  104,  101,  100,  101,  102,  102,  102,
>        102,  102,  105,  105,  102,  102,  107,  107,  109,  109,
>        111,  112,  107,  113,  109,  115,  116,  112,  116,  117,
> -      117,  119,  119,  111,  240,  117,  129,  119,  113,  114,
> -      114,  114,  114,  114,  115,  197,  114,  114,  121,  121,
> +      117,  119,  119,  111,  251,  117,  129,  119,  113,  114,
> +      114,  114,  114,  114,  115,  198,  114,  114,  121,  121,
>        123,  123,  125,  125,  121,  129,  123,  134,  125,  127,
> 
> -      127,  130,  130,  132,  132,  127,  135,  130,  197,  132,
> -      137,  137,  138,  138,  143,  134,  145,  143,  138,  228,
> +      127,  130,  130,  132,  132,  127,  135,  130,  198,  132,
> +      137,  137,  138,  138,  143,  134,  145,  143,  138,  249,
>        145,  146,  146,  148,  148,  135,  191,  146,  191,  148,
>        150,  150,  151,  151,  151,  151,  151,  152,  152,  151,
>        151,  157,  157,  152,  162,  162,  173,  157,  167,  167,
> -      162,  168,  168,  174,  174,  178,  173,  168,  179,  179,
> -      180,  180,  186,  186,  188,  188,  180,  198,  186,  220,
> -      188,  192,  192,  216,  178,  198,  198,  201,  201,  213,
> -      230,  217,  216,  201,  208,  208,  208,  208,  208,  227,
> -      231,  208,  208,  227,  213,  217,  222,  222,  224,  224,
> -
> -      232,  232,  222,  230,  224,  238,  232,  237,  237,  241,
> -      239,  218,  214,  211,  231,  209,  206,  199,  195,  193,
> -      190,  185,  177,  175,  136,  238,  239,  241,  251,  251,
> -      251,  251,  252,  252,  252,  253,  253,  253,  253,  254,
> -      254,  254,  254,  255,   93,  255,  255,  256,  256,  256,
> -      256,  257,   64,  257,  257,  258,  258,  258,  258,  259,
> -      259,   35,  259,  260,  260,  260,  260,  261,   28,  261,
> -      261,  262,  262,  262,  262,  263,  263,  263,  263,  264,
> -      264,  264,  264,  265,  265,  265,  265,  266,  266,  266,
> -      266,  267,  267,  267,  267,  268,   24,  268,  268,  269,
> -
> -      269,  269,  269,  270,  270,  270,  270,  271,   22,  271,
> -      271,  272,  272,  272,  272,  273,   17,  273,  273,  274,
> -      274,  274,  274,  275,    6,  275,  275,  276,  276,  276,
> -      276,  277,    5,  277,  277,  278,  278,  278,  278,  279,
> -        4,  279,  279,  280,  280,  280,  280,  281,    3,  281,
> -      281,  282,  282,  282,  282,  283,    0,  283,  283,  284,
> -        0,  284,  284,  285,    0,  285,  285,  286,    0,  286,
> -      286,  287,  287,  287,  287,  288,  288,  288,  288,  289,
> -        0,  289,  289,  290,  290,  290,  290,  291,  291,  291,
> -      291,  292,    0,  292,  292,  293,  293,  293,  293,  294,
> -
> -      294,  294,  294,  295,  295,  295,  295,  296,  296,  296,
> -      296,  297,  297,  297,  297,  298,    0,  298,  298,  299,
> -      299,  299,  299,  300,  300,  300,  300,  301,    0,  301,
> -      301,  302,  302,  302,  302,  303,  303,  303,  303,  304,
> -        0,  304,  304,  305,    0,  305,  305,  306,  306,  306,
> -      306,  307,  307,  307,  307,  250,  250,  250,  250,  250,
> -      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
> -      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
> -      250,  250,  250,  250,  250,  250,  250,  250,  250
> +      162,  168,  168,  174,  174,  178,  173,  168,  178,  179,
> +      179,  180,  180,  186,  186,  188,  188,  180,  197,  186,
> +      199,  188,  192,  192,  178,  202,  202,  214,  199,  199,
> +      215,  202,  197,  209,  209,  209,  209,  209,  218,  214,
> +      209,  209,  219,  224,  224,  215,  230,  218,  233,  224,
> +
> +      230,  226,  226,  234,  235,  235,  219,  226,  240,  242,
> +      235,  241,  241,  244,  243,  246,  240,  248,  248,  247,
> +      245,  233,  231,  229,  222,  220,  244,  234,  216,  242,
> +      243,  212,  210,  246,  262,  262,  262,  262,  263,  263,
> +      263,  264,  264,  264,  264,  265,  265,  265,  265,  266,
> +      207,  266,  266,  267,  267,  267,  267,  268,  200,  268,
> +      268,  269,  269,  269,  269,  270,  270,  195,  270,  271,
> +      271,  271,  271,  272,  193,  272,  272,  273,  273,  273,
> +      273,  274,  274,  274,  274,  275,  275,  275,  275,  276,
> +      276,  276,  276,  277,  277,  277,  277,  278,  278,  278,
> +
> +      278,  279,  190,  279,  279,  280,  280,  280,  280,  281,
> +      281,  281,  281,  282,  185,  282,  282,  283,  283,  283,
> +      283,  284,  177,  284,  284,  285,  285,  285,  285,  286,
> +      175,  286,  286,  287,  287,  287,  287,  288,  136,  288,
> +      288,  289,  289,  289,  289,  290,   93,  290,  290,  291,
> +      291,  291,  291,  292,   64,  292,  292,  293,  293,  293,
> +      293,  294,   35,  294,  294,  295,   28,  295,  295,  296,
> +       24,  296,  296,  297,   22,  297,  297,  298,  298,  298,
> +      298,  299,  299,  299,  299,  300,   17,  300,  300,  301,
> +      301,  301,  301,  302,  302,  302,  302,  303,    6,  303,
> +
> +      303,  304,  304,  304,  304,  305,  305,  305,  305,  306,
> +      306,  306,  306,  307,  307,  307,  307,  308,  308,  308,
> +      308,  309,    5,  309,  309,  310,  310,  310,  310,  311,
> +      311,  311,  311,  312,    4,  312,  312,  313,  313,  313,
> +      313,  314,  314,  314,  314,  315,    3,  315,  315,  316,
> +        0,  316,  316,  317,  317,  317,  317,  318,  318,  318,
> +      318,  319,  319,  319,  319,  320,  320,  320,  320,  261,
> +      261,  261,  261,  261,  261,  261,  261,  261,  261,  261,
> +      261,  261,  261,  261,  261,  261,  261,  261,  261,  261,
> +      261,  261,  261,  261,  261,  261,  261,  261,  261,  261,
> +
> +      261,  261,  261
>      } ;
> 
>  #define YY_TRAILING_MASK 0x2000
> @@ -856,6 +857,13 @@ static void setbackendtype(DiskParseContext *dpc, const char *str) {
>      else xlu__disk_err(dpc,str,"unknown value for backendtype");
>  }
> 
> +/* Sets ->backend_domid from the string. */
> +static void setbackend(DiskParseContext *dpc, const char *str) {
> +    if (libxl_name_to_domid(dpc->ctx, str, &dpc->disk->backend_domid)) {
> +        xlu__disk_err(dpc,str,"unknown domain for backend");
> +    }
> +}
> +
>  #define DEPRECATE(usewhatinstead) /* not currently reported */
> 
>  /* Handles a vdev positional parameter which includes a devtype. */
> @@ -883,7 +891,7 @@ static int vdev_and_devtype(DiskParseContext *dpc, char *str) {
>  #define DPC ((DiskParseContext*)yyextra)
> 
> 
> -#line 887 "libxlu_disk_l.c"
> +#line 895 "libxlu_disk_l.c"
> 
>  #define INITIAL 0
>  #define LEXERR 1
> @@ -980,6 +988,10 @@ int xlu__disk_yyget_lineno (yyscan_t yyscanner );
> 
>  void xlu__disk_yyset_lineno (int line_number ,yyscan_t yyscanner );
> 
> +int xlu__disk_yyget_column  (yyscan_t yyscanner );
> +
> +void xlu__disk_yyset_column (int column_no ,yyscan_t yyscanner );
> +
>  /* Macros after this point can all be overridden by user definitions in
>   * section 1.
>   */
> @@ -1012,12 +1024,7 @@ static int input (yyscan_t yyscanner );
> 
>  /* Amount of stuff to slurp up with each read. */
>  #ifndef YY_READ_BUF_SIZE
> -#ifdef __ia64__
> -/* On IA-64, the buffer size is 16k, not 8k */
> -#define YY_READ_BUF_SIZE 16384
> -#else
>  #define YY_READ_BUF_SIZE 8192
> -#endif /* __ia64__ */
>  #endif
> 
>  /* Copy whatever the last rule matched to the standard output. */
> @@ -1036,7 +1043,7 @@ static int input (yyscan_t yyscanner );
>         if ( YY_CURRENT_BUFFER_LVALUE->yy_is_interactive ) \
>                 { \
>                 int c = '*'; \
> -               size_t n; \
> +               unsigned n; \
>                 for ( n = 0; n < max_size && \
>                              (c = getc( yyin )) != EOF && c != '\n'; ++n ) \
>                         buf[n] = (char) c; \
> @@ -1119,12 +1126,12 @@ YY_DECL
>         register int yy_act;
>      struct yyguts_t * yyg = (struct yyguts_t*)yyscanner;
> 
> -#line 155 "libxlu_disk_l.l"
> +#line 162 "libxlu_disk_l.l"
> 
> 
>   /*----- the scanner rules which do the parsing -----*/
> 
> -#line 1128 "libxlu_disk_l.c"
> +#line 1135 "libxlu_disk_l.c"
> 
>         if ( !yyg->yy_init )
>                 {
> @@ -1188,14 +1195,14 @@ yy_match:
>                         while ( yy_chk[yy_base[yy_current_state] + yy_c] != yy_current_state )
>                                 {
>                                 yy_current_state = (int) yy_def[yy_current_state];
> -                               if ( yy_current_state >= 251 )
> +                               if ( yy_current_state >= 262 )
>                                         yy_c = yy_meta[(unsigned int) yy_c];
>                                 }
>                         yy_current_state = yy_nxt[yy_base[yy_current_state] + (unsigned int) yy_c];
>                         *yyg->yy_state_ptr++ = yy_current_state;
>                         ++yy_cp;
>                         }
> -               while ( yy_current_state != 250 );
> +               while ( yy_current_state != 261 );
> 
>  yy_find_action:
>                 yy_current_state = *--yyg->yy_state_ptr;
> @@ -1245,89 +1252,95 @@ do_action:      /* This label is used only to access EOF actions. */
>  case 1:
>  /* rule 1 can match eol */
>  YY_RULE_SETUP
> -#line 159 "libxlu_disk_l.l"
> +#line 166 "libxlu_disk_l.l"
>  { /* ignore whitespace before parameters */ }
>         YY_BREAK
>  /* ordinary parameters setting enums or strings */
>  case 2:
>  /* rule 2 can match eol */
>  YY_RULE_SETUP
> -#line 163 "libxlu_disk_l.l"
> +#line 170 "libxlu_disk_l.l"
>  { STRIP(','); setformat(DPC, FROMEQUALS); }
>         YY_BREAK
>  case 3:
>  YY_RULE_SETUP
> -#line 165 "libxlu_disk_l.l"
> +#line 172 "libxlu_disk_l.l"
>  { DPC->disk->is_cdrom = 1; }
>         YY_BREAK
>  case 4:
>  YY_RULE_SETUP
> -#line 166 "libxlu_disk_l.l"
> +#line 173 "libxlu_disk_l.l"
>  { DPC->disk->is_cdrom = 1; }
>         YY_BREAK
>  case 5:
>  YY_RULE_SETUP
> -#line 167 "libxlu_disk_l.l"
> +#line 174 "libxlu_disk_l.l"
>  { DPC->disk->is_cdrom = 0; }
>         YY_BREAK
>  case 6:
>  /* rule 6 can match eol */
>  YY_RULE_SETUP
> -#line 168 "libxlu_disk_l.l"
> +#line 175 "libxlu_disk_l.l"
>  { xlu__disk_err(DPC,yytext,"unknown value for type"); }
>         YY_BREAK
>  case 7:
>  /* rule 7 can match eol */
>  YY_RULE_SETUP
> -#line 170 "libxlu_disk_l.l"
> +#line 177 "libxlu_disk_l.l"
>  { STRIP(','); setaccess(DPC, FROMEQUALS); }
>         YY_BREAK
>  case 8:
>  /* rule 8 can match eol */
>  YY_RULE_SETUP
> -#line 171 "libxlu_disk_l.l"
> +#line 178 "libxlu_disk_l.l"
>  { STRIP(','); setbackendtype(DPC,FROMEQUALS); }
>         YY_BREAK
>  case 9:
>  /* rule 9 can match eol */
>  YY_RULE_SETUP
> -#line 173 "libxlu_disk_l.l"
> -{ STRIP(','); SAVESTRING("vdev", vdev, FROMEQUALS); }
> +#line 179 "libxlu_disk_l.l"
> +{ STRIP(','); setbackend(DPC,FROMEQUALS); }
>         YY_BREAK
>  case 10:
>  /* rule 10 can match eol */
>  YY_RULE_SETUP
> -#line 174 "libxlu_disk_l.l"
> +#line 181 "libxlu_disk_l.l"
> +{ STRIP(','); SAVESTRING("vdev", vdev, FROMEQUALS); }
> +       YY_BREAK
> +case 11:
> +/* rule 11 can match eol */
> +YY_RULE_SETUP
> +#line 182 "libxlu_disk_l.l"
>  { STRIP(','); SAVESTRING("script", script, FROMEQUALS); }
>         YY_BREAK
>  /* the target magic parameter, eats the rest of the string */
> -case 11:
> +case 12:
>  YY_RULE_SETUP
> -#line 178 "libxlu_disk_l.l"
> +#line 186 "libxlu_disk_l.l"
>  { STRIP(','); SAVESTRING("target", pdev_path, FROMEQUALS); }
>         YY_BREAK
>  /* unknown parameters */
> -case 12:
> -/* rule 12 can match eol */
> +case 13:
> +/* rule 13 can match eol */
>  YY_RULE_SETUP
> -#line 182 "libxlu_disk_l.l"
> +#line 190 "libxlu_disk_l.l"
>  { xlu__disk_err(DPC,yytext,"unknown parameter"); }
>         YY_BREAK
>  /* deprecated prefixes */
>  /* the "/.*" in these patterns ensures that they count as if they
>     * matched the whole string, so these patterns take precedence */
> -case 13:
> +case 14:
>  YY_RULE_SETUP
> -#line 189 "libxlu_disk_l.l"
> +#line 197 "libxlu_disk_l.l"
>  {
>                      STRIP(':');
>                      DPC->had_depr_prefix=1; DEPRECATE("use `[format=]...,'");
>                      setformat(DPC, yytext);
>                   }
>         YY_BREAK
> -case 14:
> +case 15:
>  YY_RULE_SETUP
> -#line 195 "libxlu_disk_l.l"
> +#line 203 "libxlu_disk_l.l"
>  {
>                      char *newscript;
>                      STRIP(':');
> @@ -1341,65 +1354,65 @@ YY_RULE_SETUP
>                      free(newscript);
>                  }
>         YY_BREAK
> -case 15:
> +case 16:
>  *yy_cp = yyg->yy_hold_char; /* undo effects of setting up yytext */
>  yyg->yy_c_buf_p = yy_cp = yy_bp + 8;
>  YY_DO_BEFORE_ACTION; /* set up yytext again */
>  YY_RULE_SETUP
> -#line 208 "libxlu_disk_l.l"
> +#line 216 "libxlu_disk_l.l"
>  { DPC->had_depr_prefix=1; DEPRECATE(0); }
>         YY_BREAK
> -case 16:
> +case 17:
>  YY_RULE_SETUP
> -#line 209 "libxlu_disk_l.l"
> +#line 217 "libxlu_disk_l.l"
>  { DPC->had_depr_prefix=1; DEPRECATE(0); }
>         YY_BREAK
> -case 17:
> +case 18:
>  *yy_cp = yyg->yy_hold_char; /* undo effects of setting up yytext */
>  yyg->yy_c_buf_p = yy_cp = yy_bp + 4;
>  YY_DO_BEFORE_ACTION; /* set up yytext again */
>  YY_RULE_SETUP
> -#line 210 "libxlu_disk_l.l"
> +#line 218 "libxlu_disk_l.l"
>  { DPC->had_depr_prefix=1; DEPRECATE(0); }
>         YY_BREAK
> -case 18:
> +case 19:
>  *yy_cp = yyg->yy_hold_char; /* undo effects of setting up yytext */
>  yyg->yy_c_buf_p = yy_cp = yy_bp + 6;
>  YY_DO_BEFORE_ACTION; /* set up yytext again */
>  YY_RULE_SETUP
> -#line 211 "libxlu_disk_l.l"
> +#line 219 "libxlu_disk_l.l"
>  { DPC->had_depr_prefix=1; DEPRECATE(0); }
>         YY_BREAK
> -case 19:
> +case 20:
>  *yy_cp = yyg->yy_hold_char; /* undo effects of setting up yytext */
>  yyg->yy_c_buf_p = yy_cp = yy_bp + 5;
>  YY_DO_BEFORE_ACTION; /* set up yytext again */
>  YY_RULE_SETUP
> -#line 212 "libxlu_disk_l.l"
> +#line 220 "libxlu_disk_l.l"
>  { DPC->had_depr_prefix=1; DEPRECATE(0); }
>         YY_BREAK
> -case 20:
> +case 21:
>  *yy_cp = yyg->yy_hold_char; /* undo effects of setting up yytext */
>  yyg->yy_c_buf_p = yy_cp = yy_bp + 4;
>  YY_DO_BEFORE_ACTION; /* set up yytext again */
>  YY_RULE_SETUP
> -#line 213 "libxlu_disk_l.l"
> +#line 221 "libxlu_disk_l.l"
>  { DPC->had_depr_prefix=1; DEPRECATE(0); }
>         YY_BREAK
> -case 21:
> -/* rule 21 can match eol */
> +case 22:
> +/* rule 22 can match eol */
>  YY_RULE_SETUP
> -#line 215 "libxlu_disk_l.l"
> +#line 223 "libxlu_disk_l.l"
>  {
>                   xlu__disk_err(DPC,yytext,"unknown deprecated disk prefix");
>                   return 0;
>                 }
>         YY_BREAK
>  /* positional parameters */
> -case 22:
> -/* rule 22 can match eol */
> +case 23:
> +/* rule 23 can match eol */
>  YY_RULE_SETUP
> -#line 222 "libxlu_disk_l.l"
> +#line 230 "libxlu_disk_l.l"
>  {
>      STRIP(',');
> 
> @@ -1426,27 +1439,27 @@ YY_RULE_SETUP
>      }
>  }
>         YY_BREAK
> -case 23:
> +case 24:
>  YY_RULE_SETUP
> -#line 248 "libxlu_disk_l.l"
> +#line 256 "libxlu_disk_l.l"
>  {
>      BEGIN(LEXERR);
>      yymore();
>  }
>         YY_BREAK
> -case 24:
> +case 25:
>  YY_RULE_SETUP
> -#line 252 "libxlu_disk_l.l"
> +#line 260 "libxlu_disk_l.l"
>  {
>      xlu__disk_err(DPC,yytext,"bad disk syntax"); return 0;
>  }
>         YY_BREAK
> -case 25:
> +case 26:
>  YY_RULE_SETUP
> -#line 255 "libxlu_disk_l.l"
> +#line 263 "libxlu_disk_l.l"
>  YY_FATAL_ERROR( "flex scanner jammed" );
>         YY_BREAK
> -#line 1450 "libxlu_disk_l.c"
> +#line 1463 "libxlu_disk_l.c"
>                         case YY_STATE_EOF(INITIAL):
>                         case YY_STATE_EOF(LEXERR):
>                                 yyterminate();
> @@ -1710,7 +1723,7 @@ static int yy_get_next_buffer (yyscan_t yyscanner)
>                 while ( yy_chk[yy_base[yy_current_state] + yy_c] != yy_current_state )
>                         {
>                         yy_current_state = (int) yy_def[yy_current_state];
> -                       if ( yy_current_state >= 251 )
> +                       if ( yy_current_state >= 262 )
>                                 yy_c = yy_meta[(unsigned int) yy_c];
>                         }
>                 yy_current_state = yy_nxt[yy_base[yy_current_state] + (unsigned int) yy_c];
> @@ -1734,11 +1747,11 @@ static int yy_get_next_buffer (yyscan_t yyscanner)
>         while ( yy_chk[yy_base[yy_current_state] + yy_c] != yy_current_state )
>                 {
>                 yy_current_state = (int) yy_def[yy_current_state];
> -               if ( yy_current_state >= 251 )
> +               if ( yy_current_state >= 262 )
>                         yy_c = yy_meta[(unsigned int) yy_c];
>                 }
>         yy_current_state = yy_nxt[yy_base[yy_current_state] + (unsigned int) yy_c];
> -       yy_is_jam = (yy_current_state == 250);
> +       yy_is_jam = (yy_current_state == 261);
>         if ( ! yy_is_jam )
>                 *yyg->yy_state_ptr++ = yy_current_state;
> 
> @@ -2147,8 +2160,8 @@ YY_BUFFER_STATE xlu__disk_yy_scan_string (yyconst char * yystr , yyscan_t yyscan
> 
>  /** Setup the input buffer state to scan the given bytes. The next call to xlu__disk_yylex() will
>   * scan from a @e copy of @a bytes.
> - * @param yybytes the byte buffer to scan
> - * @param _yybytes_len the number of bytes in the buffer pointed to by @a bytes.
> + * @param bytes the byte buffer to scan
> + * @param len the number of bytes in the buffer pointed to by @a bytes.
>   * @param yyscanner The scanner object.
>   * @return the newly allocated buffer state object.
>   */
> @@ -2538,4 +2551,4 @@ void xlu__disk_yyfree (void * ptr , yyscan_t yyscanner)
> 
>  #define YYTABLES_NAME "yytables"
> 
> -#line 255 "libxlu_disk_l.l"
> +#line 263 "libxlu_disk_l.l"
> diff --git a/tools/libxl/libxlu_disk_l.h b/tools/libxl/libxlu_disk_l.h
> index de03908..247a0d7 100644
> --- a/tools/libxl/libxlu_disk_l.h
> +++ b/tools/libxl/libxlu_disk_l.h
> @@ -62,6 +62,7 @@ typedef int flex_int32_t;
>  typedef unsigned char flex_uint8_t;
>  typedef unsigned short int flex_uint16_t;
>  typedef unsigned int flex_uint32_t;
> +#endif /* ! C99 */
> 
>  /* Limits of integral types. */
>  #ifndef INT8_MIN
> @@ -92,8 +93,6 @@ typedef unsigned int flex_uint32_t;
>  #define UINT32_MAX             (4294967295U)
>  #endif
> 
> -#endif /* ! C99 */
> -
>  #endif /* ! FLEXINT_H */
> 
>  #ifdef __cplusplus
> @@ -136,15 +135,7 @@ typedef void* yyscan_t;
> 
>  /* Size of default input buffer. */
>  #ifndef YY_BUF_SIZE
> -#ifdef __ia64__
> -/* On IA-64, the buffer size is 16k, not 8k.
> - * Moreover, YY_BUF_SIZE is 2*YY_READ_BUF_SIZE in the general case.
> - * Ditto for the __ia64__ case accordingly.
> - */
> -#define YY_BUF_SIZE 32768
> -#else
>  #define YY_BUF_SIZE 16384
> -#endif /* __ia64__ */
>  #endif
> 
>  #ifndef YY_TYPEDEF_YY_BUFFER_STATE
> @@ -280,6 +271,10 @@ int xlu__disk_yyget_lineno (yyscan_t yyscanner );
> 
>  void xlu__disk_yyset_lineno (int line_number ,yyscan_t yyscanner );
> 
> +int xlu__disk_yyget_column  (yyscan_t yyscanner );
> +
> +void xlu__disk_yyset_column (int column_no ,yyscan_t yyscanner );
> +
>  /* Macros after this point can all be overridden by user definitions in
>   * section 1.
>   */
> @@ -306,12 +301,7 @@ static int yy_flex_strlen (yyconst char * ,yyscan_t yyscanner);
> 
>  /* Amount of stuff to slurp up with each read. */
>  #ifndef YY_READ_BUF_SIZE
> -#ifdef __ia64__
> -/* On IA-64, the buffer size is 16k, not 8k */
> -#define YY_READ_BUF_SIZE 16384
> -#else
>  #define YY_READ_BUF_SIZE 8192
> -#endif /* __ia64__ */
>  #endif
> 
>  /* Number of entries by which start-condition stack grows. */
> @@ -344,8 +334,8 @@ extern int xlu__disk_yylex (yyscan_t yyscanner);
>  #undef YY_DECL
>  #endif
> 
> -#line 255 "libxlu_disk_l.l"
> +#line 263 "libxlu_disk_l.l"
> 
> -#line 350 "libxlu_disk_l.h"
> +#line 340 "libxlu_disk_l.h"
>  #undef xlu__disk_yyIN_HEADER
>  #endif /* xlu__disk_yyHEADER_H */
> diff --git a/tools/libxl/libxlu_disk_l.l b/tools/libxl/libxlu_disk_l.l
> index bee16a1..6bd48e8 100644
> --- a/tools/libxl/libxlu_disk_l.l
> +++ b/tools/libxl/libxlu_disk_l.l
> @@ -113,6 +113,13 @@ static void setbackendtype(DiskParseContext *dpc, const char *str) {
>      else xlu__disk_err(dpc,str,"unknown value for backendtype");
>  }
> 
> +/* Sets ->backend_domid from the string. */
> +static void setbackend(DiskParseContext *dpc, const char *str) {
> +    if (libxl_name_to_domid(dpc->ctx, str, &dpc->disk->backend_domid)) {
> +        xlu__disk_err(dpc,str,"unknown domain for backend");
> +    }
> +}
> +
>  #define DEPRECATE(usewhatinstead) /* not currently reported */
> 
>  /* Handles a vdev positional parameter which includes a devtype. */
> @@ -169,6 +176,7 @@ devtype=[^,]*,?     { xlu__disk_err(DPC,yytext,"unknown value for type"); }
> 
>  access=[^,]*,? { STRIP(','); setaccess(DPC, FROMEQUALS); }
>  backendtype=[^,]*,? { STRIP(','); setbackendtype(DPC,FROMEQUALS); }
> +backenddomain=[^,]*,? { STRIP(','); setbackend(DPC,FROMEQUALS); }
> 
>  vdev=[^,]*,?   { STRIP(','); SAVESTRING("vdev", vdev, FROMEQUALS); }
>  script=[^,]*,? { STRIP(','); SAVESTRING("script", script, FROMEQUALS); }
> diff --git a/tools/libxl/libxlutil.h b/tools/libxl/libxlutil.h
> index 0333e55..87eb399 100644
> --- a/tools/libxl/libxlutil.h
> +++ b/tools/libxl/libxlutil.h
> @@ -72,7 +72,7 @@ const char *xlu_cfg_get_listitem(const XLU_ConfigList*, int entry);
>   */
> 
>  int xlu_disk_parse(XLU_Config *cfg, int nspecs, const char *const *specs,
> -                   libxl_device_disk *disk);
> +                   libxl_device_disk *disk, libxl_ctx *ctx);
>    /* disk must have been initialised.
>     *
>     * On error, returns errno value.  Bad strings cause EINVAL and
> diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
> index 138cd72..fd00d61 100644
> --- a/tools/libxl/xl_cmdimpl.c
> +++ b/tools/libxl/xl_cmdimpl.c
> @@ -420,7 +420,7 @@ static void parse_disk_config_multistring(XLU_Config **config,
>          if (!*config) { perror("xlu_cfg_init"); exit(-1); }
>      }
> 
> -    e = xlu_disk_parse(*config, nspecs, specs, disk);
> +    e = xlu_disk_parse(*config, nspecs, specs, disk, ctx);
>      if (e == EINVAL) exit(-1);
>      if (e) {
>          fprintf(stderr,"xlu_disk_parse failed: %s\n",strerror(errno));
> @@ -5335,7 +5335,7 @@ int main_networkdetach(int argc, char **argv)
>  int main_blockattach(int argc, char **argv)
>  {
>      int opt;
> -    uint32_t fe_domid, be_domid = 0;
> +    uint32_t fe_domid;
>      libxl_device_disk disk = { 0 };
>      XLU_Config *config = 0;
> 
> @@ -5351,8 +5351,6 @@ int main_blockattach(int argc, char **argv)
>      parse_disk_config_multistring
>          (&config, argc-optind, (const char* const*)argv + optind, &disk);
> 
> -    disk.backend_domid = be_domid;
> -
>      if (dryrun_only) {
>          char *json = libxl_device_disk_to_json(ctx, &disk);
>          printf("disk: %s\n", json);
> --
> 1.7.11.2
> 
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 08:04:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 08:04:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7MDQ-0004jF-Ju; Fri, 31 Aug 2012 08:04:32 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T7MDO-0004j4-T2
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 08:04:31 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1346400262!6501180!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTExNjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 476 invoked from network); 31 Aug 2012 08:04:22 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 08:04:22 -0000
X-IronPort-AV: E=Sophos;i="4.80,346,1344211200"; d="scan'208";a="14281973"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 08:04:22 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1;
	Fri, 31 Aug 2012 09:04:22 +0100
Message-ID: <1346400260.27277.93.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Date: Fri, 31 Aug 2012 09:04:20 +0100
In-Reply-To: <1344289916-11518-1-git-send-email-dgdegra@tycho.nsa.gov>
References: <1344289916-11518-1-git-send-email-dgdegra@tycho.nsa.gov>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH RFC/for-4.2?] libxl: Support backend domain
	ID for disks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Daniel,
On Mon, 2012-08-06 at 22:51 +0100, Daniel De Graaf wrote:
> Allow specification of backend domains for disks, either in the config
> file or via xl block-attach.

Were you intending to resubmit this patch for 4.2? We're pretty close to
cutting what we hope will be the final RC.

We are intending to take a slightly more liberal approach than usual to
backports for 4.2.1, to allow in xl features which improve parity with
xm, so this might be a candidate for that.

Ian.

> A version of this patch was submitted in October 2011 but was not
> suitable at the time because libxl did not support the "script=" option
> for disks. Now that this option exists, it is possible to
> specify a backend domain without needing to duplicate the device tree of
> the domain providing the disk in the domain using libxl; just specify
> script=/bin/true (or any more useful script) to prevent the block script
> from running in the domain using libxl.
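
[Editor's note: an illustrative disk specification combining the new
"backenddomain=" parameter with "script=". The domain name and paths
below are made-up examples, not values taken from the patch.]

```
# xl domain config fragment: attach a disk served by a driver domain.
# "storagedom" and "/dev/vg0/guest-disk" are illustrative values.
# script=/bin/true suppresses the local block script, since the backend
# domain is responsible for preparing the device.
disk = [ 'format=raw, vdev=xvda, access=rw, backenddomain=storagedom, script=/bin/true, target=/dev/vg0/guest-disk' ]
```

The same key=value string form would presumably apply on the command
line, e.g.:
xl block-attach guest format=raw,backenddomain=storagedom,script=/bin/true,vdev=xvdb,target=/dev/vg0/data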
> 
> In order to support named backend domains like network-attach, the
> prototype of xlu_disk_parse in libxlutil.h needs a libxl_ctx. Without
> this parameter, it would only be possible to support numeric domain
> IDs in the block device specification.
> 
> Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
> 
> ---
> 
> This patch does not include the changes to tools/libxl/libxlu_disk_l.c
> and tools/libxl/libxlu_disk_l.h because the diffs contain unrelated
> changes due to different generator versions.
> 
>  tools/libxl/libxlu_disk.c   |   3 +-
>  tools/libxl/libxlu_disk_i.h |   3 +-
>  tools/libxl/libxlu_disk_l.c | 581 ++++++++++++++++++++++----------------------
>  tools/libxl/libxlu_disk_l.h |  24 +-
>  tools/libxl/libxlu_disk_l.l |   8 +
>  tools/libxl/libxlutil.h     |   2 +-
>  tools/libxl/xl_cmdimpl.c    |   6 +-
>  7 files changed, 319 insertions(+), 308 deletions(-)
> 
> diff --git a/tools/libxl/libxlu_disk.c b/tools/libxl/libxlu_disk.c
> index 18fe386..1e6caca 100644
> --- a/tools/libxl/libxlu_disk.c
> +++ b/tools/libxl/libxlu_disk.c
> @@ -48,7 +48,7 @@ static void dpc_dispose(DiskParseContext *dpc) {
> 
>  int xlu_disk_parse(XLU_Config *cfg,
>                     int nspecs, const char *const *specs,
> -                   libxl_device_disk *disk) {
> +                   libxl_device_disk *disk, libxl_ctx *ctx) {
>      DiskParseContext dpc;
>      int i, e;
> 
> @@ -56,6 +56,7 @@ int xlu_disk_parse(XLU_Config *cfg,
>      dpc.cfg = cfg;
>      dpc.scanner = 0;
>      dpc.disk = disk;
> +    dpc.ctx = ctx;
> 
>      disk->readwrite = 1;
> 
> diff --git a/tools/libxl/libxlu_disk_i.h b/tools/libxl/libxlu_disk_i.h
> index 4fccd4a..c220bcf 100644
> --- a/tools/libxl/libxlu_disk_i.h
> +++ b/tools/libxl/libxlu_disk_i.h
> @@ -2,7 +2,7 @@
>  #define LIBXLU_DISK_I_H
> 
>  #include "libxlu_internal.h"
> -
> +#include "libxl_utils.h"
> 
>  typedef struct {
>      XLU_Config *cfg;
> @@ -12,6 +12,7 @@ typedef struct {
>      libxl_device_disk *disk;
>      int access_set, had_depr_prefix;
>      const char *spec;
> +    libxl_ctx *ctx;
>  } DiskParseContext;
> 
>  void xlu__disk_err(DiskParseContext *dpc, const char *erroneous,
> diff --git a/tools/libxl/libxlu_disk_l.c b/tools/libxl/libxlu_disk_l.c
> index 4c68034..4e17f7c 100644
> --- a/tools/libxl/libxlu_disk_l.c
> +++ b/tools/libxl/libxlu_disk_l.c
> @@ -58,6 +58,7 @@ typedef int flex_int32_t;
>  typedef unsigned char flex_uint8_t;
>  typedef unsigned short int flex_uint16_t;
>  typedef unsigned int flex_uint32_t;
> +#endif /* ! C99 */
> 
>  /* Limits of integral types. */
>  #ifndef INT8_MIN
> @@ -88,8 +89,6 @@ typedef unsigned int flex_uint32_t;
>  #define UINT32_MAX             (4294967295U)
>  #endif
> 
> -#endif /* ! C99 */
> -
>  #endif /* ! FLEXINT_H */
> 
>  #ifdef __cplusplus
> @@ -163,15 +162,7 @@ typedef void* yyscan_t;
> 
>  /* Size of default input buffer. */
>  #ifndef YY_BUF_SIZE
> -#ifdef __ia64__
> -/* On IA-64, the buffer size is 16k, not 8k.
> - * Moreover, YY_BUF_SIZE is 2*YY_READ_BUF_SIZE in the general case.
> - * Ditto for the __ia64__ case accordingly.
> - */
> -#define YY_BUF_SIZE 32768
> -#else
>  #define YY_BUF_SIZE 16384
> -#endif /* __ia64__ */
>  #endif
> 
>  /* The state buf must be large enough to hold one state per character in the main buffer.
> @@ -361,8 +352,8 @@ static void yy_fatal_error (yyconst char msg[] ,yyscan_t yyscanner );
>         *yy_cp = '\0'; \
>         yyg->yy_c_buf_p = yy_cp;
> 
> -#define YY_NUM_RULES 25
> -#define YY_END_OF_BUFFER 26
> +#define YY_NUM_RULES 26
> +#define YY_END_OF_BUFFER 27
>  /* This struct is not used in this scanner,
>     but its presence is necessary. */
>  struct yy_trans_info
> @@ -370,60 +361,61 @@ struct yy_trans_info
>         flex_int32_t yy_verify;
>         flex_int32_t yy_nxt;
>         };
> -static yyconst flex_int16_t yy_acclist[447] =
> +static yyconst flex_int16_t yy_acclist[460] =
>      {   0,
> -       24,   24,   26,   22,   23,   25, 8193,   22,   23,   25,
> -    16385, 8193,   22,   25,16385,   22,   23,   25,   23,   25,
> -       22,   23,   25,   22,   23,   25,   22,   23,   25,   22,
> -       23,   25,   22,   23,   25,   22,   23,   25,   22,   23,
> -       25,   22,   23,   25,   22,   23,   25,   22,   23,   25,
> -       22,   23,   25,   22,   23,   25,   22,   23,   25,   22,
> -       23,   25,   22,   23,   25,   24,   25,   25,   22,   22,
> -     8193,   22, 8193,   22,16385, 8193,   22, 8193,   22,   22,
> -     8213,   22,16405,   22,   22,   22,   22,   22,   22,   22,
> -       22,   22,   22,   22,   22,   22,   22,   22,   22,   22,
> -
> -       22,   22,   24, 8193,   22, 8193,   22, 8193, 8213,   22,
> -     8213,   22, 8213,   12,   22,   22,   22,   22,   22,   22,
> -       22,   22,   22,   22,   22,   22,   22,   22,   22,   22,
> -       22,   22, 8213,   22, 8213,   22, 8213,   12,   22,   17,
> -     8213,   22,16405,   22,   22,   22,   22,   22,   22,   22,
> -     8206, 8213,   22,16398,16405,   20, 8213,   22,16405,   22,
> -     8205, 8213,   22,16397,16405,   22,   22, 8208, 8213,   22,
> -    16400,16405,   22,   22,   22,   22,   17, 8213,   22,   17,
> -     8213,   22,   17,   22,   17, 8213,   22,    3,   22,   22,
> -       19, 8213,   22,16405,   22,   22, 8206, 8213,   22, 8206,
> -
> -     8213,   22, 8206,   22, 8206, 8213,   20, 8213,   22,   20,
> -     8213,   22,   20,   22,   20, 8213, 8205, 8213,   22, 8205,
> -     8213,   22, 8205,   22, 8205, 8213,   22, 8208, 8213,   22,
> -     8208, 8213,   22, 8208,   22, 8208, 8213,   22,   22,    9,
> -       22,   17, 8213,   22,   17, 8213,   22,   17, 8213,   17,
> -       22,   17,   22,    3,   22,   22,   19, 8213,   22,   19,
> -     8213,   22,   19,   22,   19, 8213,   22,   18, 8213,   22,
> -    16405, 8206, 8213,   22, 8206, 8213,   22, 8206, 8213, 8206,
> -       22, 8206,   20, 8213,   22,   20, 8213,   22,   20, 8213,
> -       20,   22,   20, 8205, 8213,   22, 8205, 8213,   22, 8205,
> -
> -     8213, 8205,   22, 8205,   22, 8208, 8213,   22, 8208, 8213,
> -       22, 8208, 8213, 8208,   22, 8208,   22,   22,    9,   12,
> -        9,    7,   22,   22,   19, 8213,   22,   19, 8213,   22,
> -       19, 8213,   19,   22,   19,    2,   18, 8213,   22,   18,
> -     8213,   22,   18,   22,   18, 8213,   10,   22,   11,    9,
> -        9,   12,    7,   12,    7,   22,    6,    2,   12,    2,
> -       18, 8213,   22,   18, 8213,   22,   18, 8213,   18,   22,
> -       18,   10,   12,   10,   15, 8213,   22,16405,   11,   12,
> -       11,    7,    7,   12,   22,    6,   12,    6,    6,   12,
> -        6,   12,    2,    2,   12,   10,   10,   12,   15, 8213,
> -
> -       22,   15, 8213,   22,   15,   22,   15, 8213,   11,   12,
> -       22,    6,    6,   12,    6,    6,   15, 8213,   22,   15,
> -     8213,   22,   15, 8213,   15,   22,   15,   22,    6,    6,
> -        8,    6,    5,    6,    8,   12,    8,    4,    6,    5,
> -        6,    8,    8,   12,    4,    6
> +       25,   25,   27,   23,   24,   26, 8193,   23,   24,   26,
> +    16385, 8193,   23,   26,16385,   23,   24,   26,   24,   26,
> +       23,   24,   26,   23,   24,   26,   23,   24,   26,   23,
> +       24,   26,   23,   24,   26,   23,   24,   26,   23,   24,
> +       26,   23,   24,   26,   23,   24,   26,   23,   24,   26,
> +       23,   24,   26,   23,   24,   26,   23,   24,   26,   23,
> +       24,   26,   23,   24,   26,   25,   26,   26,   23,   23,
> +     8193,   23, 8193,   23,16385, 8193,   23, 8193,   23,   23,
> +     8214,   23,16406,   23,   23,   23,   23,   23,   23,   23,
> +       23,   23,   23,   23,   23,   23,   23,   23,   23,   23,
> +
> +       23,   23,   25, 8193,   23, 8193,   23, 8193, 8214,   23,
> +     8214,   23, 8214,   13,   23,   23,   23,   23,   23,   23,
> +       23,   23,   23,   23,   23,   23,   23,   23,   23,   23,
> +       23,   23, 8214,   23, 8214,   23, 8214,   13,   23,   18,
> +     8214,   23,16406,   23,   23,   23,   23,   23,   23,   23,
> +     8207, 8214,   23,16399,16406,   21, 8214,   23,16406,   23,
> +     8206, 8214,   23,16398,16406,   23,   23, 8209, 8214,   23,
> +    16401,16406,   23,   23,   23,   23,   18, 8214,   23,   18,
> +     8214,   23,   18,   23,   18, 8214,   23,    3,   23,   23,
> +       20, 8214,   23,16406,   23,   23, 8207, 8214,   23, 8207,
> +
> +     8214,   23, 8207,   23, 8207, 8214,   21, 8214,   23,   21,
> +     8214,   23,   21,   23,   21, 8214, 8206, 8214,   23, 8206,
> +     8214,   23, 8206,   23, 8206, 8214,   23, 8209, 8214,   23,
> +     8209, 8214,   23, 8209,   23, 8209, 8214,   23,   23,   10,
> +       23,   18, 8214,   23,   18, 8214,   23,   18, 8214,   18,
> +       23,   18,   23,    3,   23,   23,   20, 8214,   23,   20,
> +     8214,   23,   20,   23,   20, 8214,   23,   19, 8214,   23,
> +    16406, 8207, 8214,   23, 8207, 8214,   23, 8207, 8214, 8207,
> +       23, 8207,   21, 8214,   23,   21, 8214,   23,   21, 8214,
> +       21,   23,   21, 8206, 8214,   23, 8206, 8214,   23, 8206,
> +
> +     8214, 8206,   23, 8206,   23, 8209, 8214,   23, 8209, 8214,
> +       23, 8209, 8214, 8209,   23, 8209,   23,   23,   10,   13,
> +       10,    7,   23,   23,   20, 8214,   23,   20, 8214,   23,
> +       20, 8214,   20,   23,   20,    2,   19, 8214,   23,   19,
> +     8214,   23,   19,   23,   19, 8214,   11,   23,   12,   10,
> +       10,   13,    7,   13,    7,   23,   23,    6,    2,   13,
> +        2,   19, 8214,   23,   19, 8214,   23,   19, 8214,   19,
> +       23,   19,   11,   13,   11,   16, 8214,   23,16406,   12,
> +       13,   12,    7,    7,   13,   23,   23,    6,   13,    6,
> +        6,   13,    6,   13,    2,    2,   13,   11,   11,   13,
> +
> +       16, 8214,   23,   16, 8214,   23,   16,   23,   16, 8214,
> +       12,   13,   23,   23,    6,    6,   13,    6,    6,   16,
> +     8214,   23,   16, 8214,   23,   16, 8214,   16,   23,   16,
> +       23,   23,    6,    6,   23,    8,    6,    5,    6,   23,
> +        8,   13,    8,    4,    6,    5,    6,    9,    8,    8,
> +       13,    4,    6,    9,   13,    9,    9,    9,   13
>      } ;
> 
> -static yyconst flex_int16_t yy_accept[252] =
> +static yyconst flex_int16_t yy_accept[263] =
>      {   0,
>          1,    1,    1,    2,    3,    4,    7,   12,   16,   19,
>         21,   24,   27,   30,   33,   36,   39,   42,   45,   48,
> @@ -445,14 +437,15 @@ static yyconst flex_int16_t yy_accept[252] =
>        293,  294,  297,  300,  302,  304,  305,  306,  309,  312,
>        314,  316,  317,  318,  319,  321,  322,  323,  324,  325,
>        328,  331,  333,  335,  336,  337,  340,  343,  345,  347,
> -      348,  349,  350,  351,  353,  355,  356,  357,  358,  360,
> -
> -      361,  364,  367,  369,  371,  372,  374,  375,  379,  381,
> -      382,  383,  385,  386,  388,  389,  391,  393,  394,  396,
> -      397,  399,  402,  405,  407,  409,  411,  412,  413,  415,
> -      416,  417,  420,  423,  425,  427,  428,  429,  430,  431,
> -      432,  433,  435,  437,  438,  440,  442,  443,  445,  447,
> -      447
> +      348,  349,  350,  351,  353,  355,  356,  357,  358,  359,
> +
> +      361,  362,  365,  368,  370,  372,  373,  375,  376,  380,
> +      382,  383,  384,  386,  387,  388,  390,  391,  393,  395,
> +      396,  398,  399,  401,  404,  407,  409,  411,  413,  414,
> +      415,  416,  418,  419,  420,  423,  426,  428,  430,  431,
> +      432,  433,  434,  435,  436,  437,  438,  440,  441,  443,
> +      444,  446,  448,  449,  450,  452,  454,  456,  457,  458,
> +      460,  460
>      } ;
> 
>  static yyconst flex_int32_t yy_ec[256] =
> @@ -495,83 +488,85 @@ static yyconst flex_int32_t yy_meta[34] =
>          1,    1,    1
>      } ;
> 
> -static yyconst flex_int16_t yy_base[308] =
> +static yyconst flex_int16_t yy_base[321] =
>      {   0,
> -        0,    0,  546,  538,  533,  521,   32,   35,  656,  656,
> -       44,   62,   30,   41,   50,   51,  507,   64,   47,   66,
> -       67,  499,   68,  487,   72,    0,  656,  465,  656,   87,
> -       91,    0,    0,  100,  452,  109,    0,   74,   95,   87,
> +        0,    0,  644,  632,  623,  595,   32,   35,  670,  670,
> +       44,   62,   30,   41,   50,   51,  577,   64,   47,   66,
> +       67,  565,   68,  561,   72,    0,  670,  563,  670,   87,
> +       91,    0,    0,  100,  553,  109,    0,   74,   95,   87,
>         32,   96,  105,  110,   77,   97,   40,  113,  116,  112,
>        118,  120,  121,  122,  123,  125,    0,  137,    0,    0,
> -      147,    0,    0,  449,  129,  126,  134,  143,  145,  147,
> +      147,    0,    0,  551,  129,  126,  134,  143,  145,  147,
>        148,  149,  151,  153,  156,  160,  155,  167,  162,  175,
> -      168,  159,  188,    0,    0,  656,  166,  197,  179,  185,
> -      176,  200,  435,  186,  193,  216,  225,  205,  234,  221,
> +      168,  159,  188,    0,    0,  670,  166,  197,  179,  185,
> +      176,  200,  537,  186,  193,  216,  225,  205,  234,  221,
> 
>        237,  247,  204,  230,  244,  213,  254,    0,  256,    0,
>        251,  258,  254,  279,  256,  259,  267,    0,  269,    0,
>        286,    0,  288,    0,  290,    0,  297,    0,  267,  299,
> -        0,  301,    0,  288,  297,  421,  302,  310,    0,    0,
> -        0,    0,  305,  656,  307,  319,    0,  321,    0,  322,
> +        0,  301,    0,  288,  297,  535,  302,  310,    0,    0,
> +        0,    0,  305,  670,  307,  319,    0,  321,    0,  322,
>        332,  335,    0,    0,    0,    0,  339,    0,    0,    0,
>          0,  342,    0,    0,    0,    0,  340,  349,    0,    0,
> -        0,    0,  337,  345,  420,  656,  419,  346,  350,  358,
> -        0,    0,    0,    0,  418,  360,    0,  362,    0,  417,
> -      319,  369,  416,  656,  415,  656,  276,  364,  414,  656,
> -
> -      375,    0,    0,    0,    0,  413,  656,  384,  412,    0,
> -      410,  656,  370,  409,  656,  370,  378,  408,  656,  366,
> -      656,  394,    0,  396,    0,    0,  380,  316,  656,  377,
> -      387,  398,    0,    0,    0,    0,  399,  402,  407,  271,
> -      406,  228,  200,  656,  175,  656,   77,  656,  656,  656,
> -      428,  432,  435,  439,  443,  447,  451,  455,  459,  463,
> -      467,  471,  475,  479,  483,  487,  491,  495,  499,  503,
> -      507,  511,  515,  519,  523,  527,  531,  535,  539,  543,
> -      547,  551,  555,  559,  563,  567,  571,  575,  579,  583,
> -      587,  591,  595,  599,  603,  607,  611,  615,  619,  623,
> -
> -      627,  631,  635,  639,  643,  647,  651
> +        0,    0,  337,  345,  527,  670,  519,  346,  351,  359,
> +        0,    0,    0,    0,  511,  361,    0,  363,    0,  499,
> +      319,  370,  471,  670,  464,  670,  359,  276,  367,  455,
> +
> +      670,  373,    0,    0,    0,    0,  447,  670,  383,  429,
> +        0,  428,  670,  368,  371,  425,  670,  385,  389,  422,
> +      670,  421,  670,  391,    0,  399,    0,    0,  414,  387,
> +      419,  670,  395,  400,  402,    0,    0,    0,    0,  399,
> +      403,  406,  411,  404,  417,  412,  416,  409,  316,  670,
> +      271,  670,  228,  200,  670,  670,  175,  670,   77,  670,
> +      670,  434,  438,  441,  445,  449,  453,  457,  461,  465,
> +      469,  473,  477,  481,  485,  489,  493,  497,  501,  505,
> +      509,  513,  517,  521,  525,  529,  533,  537,  541,  545,
> +      549,  553,  557,  561,  565,  569,  573,  577,  581,  585,
> +
> +      589,  593,  597,  601,  605,  609,  613,  617,  621,  625,
> +      629,  633,  637,  641,  645,  649,  653,  657,  661,  665
>      } ;
> 
> -static yyconst flex_int16_t yy_def[308] =
> +static yyconst flex_int16_t yy_def[321] =
>      {   0,
> -      250,    1,  251,  251,  250,  252,  253,  253,  250,  250,
> -      254,  254,   12,   12,   12,   12,   12,   12,   12,   12,
> -       12,   12,   12,   12,   12,  255,  250,  252,  250,  256,
> -      253,  257,  257,  258,   12,  252,  259,   12,   12,   12,
> +      261,    1,  262,  262,  261,  263,  264,  264,  261,  261,
> +      265,  265,   12,   12,   12,   12,   12,   12,   12,   12,
> +       12,   12,   12,   12,   12,  266,  261,  263,  261,  267,
> +      264,  268,  268,  269,   12,  263,  270,   12,   12,   12,
>         12,   12,   12,   12,   12,   12,   12,   12,   12,   12,
> -       12,   12,   12,   12,   12,   12,  255,  256,  257,  257,
> -      260,  261,  261,  250,   12,   12,   12,   12,   12,   12,
> +       12,   12,   12,   12,   12,   12,  266,  267,  268,  268,
> +      271,  272,  272,  261,   12,   12,   12,   12,   12,   12,
>         12,   12,   12,   12,   12,   12,   12,   12,   12,   12,
> -       12,   12,  260,  261,  261,  250,   12,  262,   12,   12,
> -       12,   12,   12,   12,   12,  263,  264,   12,  265,   12,
> -
> -       12,  266,   12,   12,   12,   12,  267,  268,  262,  268,
> -       12,   12,   12,  269,   12,   12,  270,  271,  263,  271,
> -      272,  273,  264,  273,  274,  275,  265,  275,   12,  276,
> -      277,  266,  277,   12,   12,  278,   12,  267,  268,  268,
> -      279,  279,   12,  250,   12,  280,  281,  269,  281,   12,
> -      282,  270,  271,  271,  283,  283,  272,  273,  273,  284,
> -      284,  274,  275,  275,  285,  285,   12,  276,  277,  277,
> -      286,  286,   12,   12,  287,  250,  288,   12,   12,  280,
> -      281,  281,  289,  289,  290,  291,  292,  282,  292,  293,
> -       12,  294,  287,  250,  295,  250,   12,  296,  297,  250,
> -
> -      291,  292,  292,  298,  298,  299,  250,  300,  301,  301,
> -      295,  250,   12,  302,  250,  302,  302,  297,  250,  299,
> -      250,  303,  304,  300,  304,  301,   12,  302,  250,  302,
> -      302,  303,  304,  304,  305,  305,   12,  302,  302,  306,
> -      302,  302,  307,  250,  302,  250,  307,  250,  250,    0,
> -      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
> -      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
> -      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
> -      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
> -      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
> -
> -      250,  250,  250,  250,  250,  250,  250
> +       12,   12,  271,  272,  272,  261,   12,  273,   12,   12,
> +       12,   12,   12,   12,   12,  274,  275,   12,  276,   12,
> +
> +       12,  277,   12,   12,   12,   12,  278,  279,  273,  279,
> +       12,   12,   12,  280,   12,   12,  281,  282,  274,  282,
> +      283,  284,  275,  284,  285,  286,  276,  286,   12,  287,
> +      288,  277,  288,   12,   12,  289,   12,  278,  279,  279,
> +      290,  290,   12,  261,   12,  291,  292,  280,  292,   12,
> +      293,  281,  282,  282,  294,  294,  283,  284,  284,  295,
> +      295,  285,  286,  286,  296,  296,   12,  287,  288,  288,
> +      297,  297,   12,   12,  298,  261,  299,   12,   12,  291,
> +      292,  292,  300,  300,  301,  302,  303,  293,  303,  304,
> +       12,  305,  298,  261,  306,  261,   12,   12,  307,  308,
> +
> +      261,  302,  303,  303,  309,  309,  310,  261,  311,  312,
> +      312,  306,  261,   12,   12,  313,  261,  313,  313,  308,
> +      261,  310,  261,  314,  315,  311,  315,  312,   12,   12,
> +      313,  261,  313,  313,  314,  315,  315,  316,  316,   12,
> +       12,  313,  313,   12,  317,  313,  313,   12,  318,  261,
> +      313,  261,  319,  318,  261,  261,  320,  261,  320,  261,
> +        0,  261,  261,  261,  261,  261,  261,  261,  261,  261,
> +      261,  261,  261,  261,  261,  261,  261,  261,  261,  261,
> +      261,  261,  261,  261,  261,  261,  261,  261,  261,  261,
> +      261,  261,  261,  261,  261,  261,  261,  261,  261,  261,
> +
> +      261,  261,  261,  261,  261,  261,  261,  261,  261,  261,
> +      261,  261,  261,  261,  261,  261,  261,  261,  261,  261
>      } ;
> 
> -static yyconst flex_int16_t yy_nxt[690] =
> +static yyconst flex_int16_t yy_nxt[704] =
>      {   0,
>          6,    7,    8,    9,    6,    6,    6,    6,   10,   11,
>         12,   13,   14,   15,   16,   17,   17,   18,   17,   17,
> @@ -581,7 +576,7 @@ static yyconst flex_int16_t yy_nxt[690] =
>         35,   36,   37,   73,   42,   38,   35,   49,   68,   35,
>         35,   39,   28,   28,   28,   29,   34,   43,   45,   36,
>         37,   40,   44,   35,   46,   35,   35,   35,   51,   53,
> -      244,   35,   50,   35,   55,   65,   35,   47,   56,   28,
> +      258,   35,   50,   35,   55,   65,   35,   47,   56,   28,
>         59,   48,   31,   31,   32,   60,   35,   71,   67,   33,
> 
>         28,   28,   28,   29,   35,   35,   35,   28,   37,   61,
> @@ -591,66 +586,69 @@ static yyconst flex_int16_t yy_nxt[690] =
>         59,   77,   87,   35,   76,   60,   80,   79,   81,   28,
>         84,   78,   35,   89,   35,   85,   35,   35,   35,   75,
>         35,   92,   35,   96,   35,   35,   90,   97,   35,   35,
> -       93,   35,   94,   91,   99,   35,   35,   35,  249,  100,
> +       93,   35,   94,   91,   99,   35,   35,   35,  260,  100,
>         95,  101,  102,  104,   35,   35,   98,  103,   35,  105,
>         28,   84,  111,  106,   35,   35,   85,  107,  107,   61,
> 
> -      108,  107,   35,  248,  107,  110,  112,  114,  113,   35,
> +      108,  107,   35,  250,  107,  110,  112,  114,  113,   35,
>         75,   78,   99,   35,   35,  116,  117,  117,   61,  118,
>        117,  134,   35,  117,  120,  121,  121,   61,  122,  121,
> -       35,  246,  121,  124,  125,  125,   61,  126,  125,   35,
> +       35,  258,  121,  124,  125,  125,   61,  126,  125,   35,
>        137,  125,  128,  135,  102,  129,   35,  130,  130,   61,
>        131,  130,  136,   35,  130,  133,   28,  139,   28,  141,
>         35,  144,  140,   35,  142,   35,  151,   35,   35,   28,
> -      153,   28,  155,  143,  244,  154,   35,  156,  145,  146,
> +      153,   28,  155,  143,  256,  154,   35,  156,  145,  146,
>        146,   61,  147,  146,  150,   35,  146,  149,   28,  158,
>         28,  160,   28,  163,  159,  167,  161,   35,  164,   28,
> 
> -      165,   28,  169,   28,  171,  166,   35,  170,  213,  172,
> -      177,   35,   28,  139,   35,  173,   35,  178,  140,  215,
> -      179,   28,  181,   28,  183,  174,  208,  182,   35,  184,
> +      165,   28,  169,   28,  171,  166,   35,  170,  215,  172,
> +      177,   35,   28,  139,   35,  173,   35,  178,  140,  255,
> +      179,   28,  181,   28,  183,  174,  209,  182,   35,  184,
>        185,   35,  186,  186,   61,  187,  186,   28,  153,  186,
>        189,   28,  158,  154,   28,  163,   35,  159,  190,   35,
> -      164,   28,  169,  192,   35,   35,  191,  170,  198,   35,
> -       28,  181,   28,  202,   28,  204,  182,  215,  203,  207,
> -      205,   64,  210,  229,  197,  216,  217,   28,  202,   35,
> -      215,  229,  230,  203,  222,  222,   61,  223,  222,   35,
> -      215,  222,  225,  237,  227,  231,   28,  233,   28,  235,
> -
> -       28,  233,  234,  238,  236,  215,  234,  240,   35,  215,
> -      215,  200,  229,  196,  239,  226,  221,  219,  212,  176,
> -      207,  200,  196,  194,  176,  241,  242,  245,   26,   26,
> -       26,   26,   28,   28,   28,   30,   30,   30,   30,   35,
> -       35,   35,   35,   57,  115,   57,   57,   58,   58,   58,
> -       58,   60,   86,   60,   60,   34,   34,   34,   34,   64,
> -       64,   35,   64,   83,   83,   83,   83,   85,   29,   85,
> -       85,  109,  109,  109,  109,  119,  119,  119,  119,  123,
> -      123,  123,  123,  127,  127,  127,  127,  132,  132,  132,
> -      132,  138,  138,  138,  138,  140,   54,  140,  140,  148,
> -
> -      148,  148,  148,  152,  152,  152,  152,  154,   52,  154,
> -      154,  157,  157,  157,  157,  159,   35,  159,  159,  162,
> -      162,  162,  162,  164,   29,  164,  164,  168,  168,  168,
> -      168,  170,  250,  170,  170,  175,  175,  175,  175,  142,
> -       27,  142,  142,  180,  180,  180,  180,  182,   27,  182,
> -      182,  188,  188,  188,  188,  156,  250,  156,  156,  161,
> -      250,  161,  161,  166,  250,  166,  166,  172,  250,  172,
> -      172,  193,  193,  193,  193,  195,  195,  195,  195,  184,
> -      250,  184,  184,  199,  199,  199,  199,  201,  201,  201,
> -      201,  203,  250,  203,  203,  206,  206,  206,  206,  209,
> -
> -      209,  209,  209,  211,  211,  211,  211,  214,  214,  214,
> -      214,  218,  218,  218,  218,  205,  250,  205,  205,  220,
> -      220,  220,  220,  224,  224,  224,  224,  210,  250,  210,
> -      210,  228,  228,  228,  228,  232,  232,  232,  232,  234,
> -      250,  234,  234,  236,  250,  236,  236,  243,  243,  243,
> -      243,  247,  247,  247,  247,    5,  250,  250,  250,  250,
> -      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
> -      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
> -      250,  250,  250,  250,  250,  250,  250,  250,  250
> +      164,   28,  169,  192,   35,   35,  191,  170,  197,  199,
> +       35,   28,  181,   28,  203,   28,  205,  182,   35,  204,
> +      217,  206,   64,  211,  198,   28,  203,   35,  218,  219,
> +       35,  204,  214,  224,  224,   61,  225,  224,  232,  229,
> +      224,  227,  232,   28,  236,  230,   35,  233,  217,  237,
> +
> +      241,   28,  238,  217,   28,  236,  234,  239,   35,  217,
> +      237,  245,   35,   35,  217,  217,  244,  253,   35,  252,
> +      250,  242,  217,  240,  208,  201,  248,  243,  232,  246,
> +      247,  196,  228,  251,   26,   26,   26,   26,   28,   28,
> +       28,   30,   30,   30,   30,   35,   35,   35,   35,   57,
> +      223,   57,   57,   58,   58,   58,   58,   60,  221,   60,
> +       60,   34,   34,   34,   34,   64,   64,  213,   64,   83,
> +       83,   83,   83,   85,  176,   85,   85,  109,  109,  109,
> +      109,  119,  119,  119,  119,  123,  123,  123,  123,  127,
> +      127,  127,  127,  132,  132,  132,  132,  138,  138,  138,
> +
> +      138,  140,  208,  140,  140,  148,  148,  148,  148,  152,
> +      152,  152,  152,  154,  201,  154,  154,  157,  157,  157,
> +      157,  159,  196,  159,  159,  162,  162,  162,  162,  164,
> +      194,  164,  164,  168,  168,  168,  168,  170,  176,  170,
> +      170,  175,  175,  175,  175,  142,  115,  142,  142,  180,
> +      180,  180,  180,  182,   86,  182,  182,  188,  188,  188,
> +      188,  156,   35,  156,  156,  161,   29,  161,  161,  166,
> +       54,  166,  166,  172,   52,  172,  172,  193,  193,  193,
> +      193,  195,  195,  195,  195,  184,   35,  184,  184,  200,
> +      200,  200,  200,  202,  202,  202,  202,  204,   29,  204,
> +
> +      204,  207,  207,  207,  207,  210,  210,  210,  210,  212,
> +      212,  212,  212,  216,  216,  216,  216,  220,  220,  220,
> +      220,  206,  261,  206,  206,  222,  222,  222,  222,  226,
> +      226,  226,  226,  211,   27,  211,  211,  231,  231,  231,
> +      231,  235,  235,  235,  235,  237,   27,  237,  237,  239,
> +      261,  239,  239,  249,  249,  249,  249,  254,  254,  254,
> +      254,  257,  257,  257,  257,  259,  259,  259,  259,    5,
> +      261,  261,  261,  261,  261,  261,  261,  261,  261,  261,
> +      261,  261,  261,  261,  261,  261,  261,  261,  261,  261,
> +      261,  261,  261,  261,  261,  261,  261,  261,  261,  261,
> +
> +      261,  261,  261
>      } ;
> 
> -static yyconst flex_int16_t yy_chk[690] =
> +static yyconst flex_int16_t yy_chk[704] =
>      {   0,
>          1,    1,    1,    1,    1,    1,    1,    1,    1,    1,
>          1,    1,    1,    1,    1,    1,    1,    1,    1,    1,
> @@ -660,7 +658,7 @@ static yyconst flex_int16_t yy_chk[690] =
>         14,   11,   11,   47,   14,   11,   19,   19,   41,   15,
>         16,   11,   12,   12,   12,   12,   12,   14,   16,   12,
>         12,   12,   15,   18,   16,   20,   21,   23,   21,   23,
> -      247,   25,   20,   38,   25,   38,   45,   18,   25,   30,
> +      259,   25,   20,   38,   25,   38,   45,   18,   25,   30,
>         30,   18,   31,   31,   31,   30,   40,   45,   40,   31,
> 
>         34,   34,   34,   34,   39,   42,   46,   34,   34,   36,
> @@ -670,63 +668,66 @@ static yyconst flex_int16_t yy_chk[690] =
>         58,   51,   65,   67,   50,   58,   54,   53,   54,   61,
>         61,   52,   68,   67,   69,   61,   70,   71,   72,   70,
>         73,   71,   74,   75,   77,   75,   68,   76,   82,   76,
> -       72,   79,   73,   69,   78,   87,   78,   81,  245,   79,
> +       72,   79,   73,   69,   78,   87,   78,   81,  257,   79,
>         74,   80,   80,   81,   80,   91,   77,   80,   89,   82,
>         83,   83,   89,   87,   90,   94,   83,   88,   88,   88,
> 
> -       88,   88,   95,  243,   88,   88,   90,   92,   91,   92,
> +       88,   88,   95,  254,   88,   88,   90,   92,   91,   92,
>         95,   98,   98,  103,   98,   94,   96,   96,   96,   96,
>         96,  103,  106,   96,   96,   97,   97,   97,   97,   97,
> -      100,  242,   97,   97,   99,   99,   99,   99,   99,  104,
> +      100,  253,   97,   97,   99,   99,   99,   99,   99,  104,
>        106,   99,   99,  104,  101,  100,  101,  102,  102,  102,
>        102,  102,  105,  105,  102,  102,  107,  107,  109,  109,
>        111,  112,  107,  113,  109,  115,  116,  112,  116,  117,
> -      117,  119,  119,  111,  240,  117,  129,  119,  113,  114,
> -      114,  114,  114,  114,  115,  197,  114,  114,  121,  121,
> +      117,  119,  119,  111,  251,  117,  129,  119,  113,  114,
> +      114,  114,  114,  114,  115,  198,  114,  114,  121,  121,
>        123,  123,  125,  125,  121,  129,  123,  134,  125,  127,
> 
> -      127,  130,  130,  132,  132,  127,  135,  130,  197,  132,
> -      137,  137,  138,  138,  143,  134,  145,  143,  138,  228,
> +      127,  130,  130,  132,  132,  127,  135,  130,  198,  132,
> +      137,  137,  138,  138,  143,  134,  145,  143,  138,  249,
>        145,  146,  146,  148,  148,  135,  191,  146,  191,  148,
>        150,  150,  151,  151,  151,  151,  151,  152,  152,  151,
>        151,  157,  157,  152,  162,  162,  173,  157,  167,  167,
> -      162,  168,  168,  174,  174,  178,  173,  168,  179,  179,
> -      180,  180,  186,  186,  188,  188,  180,  198,  186,  220,
> -      188,  192,  192,  216,  178,  198,  198,  201,  201,  213,
> -      230,  217,  216,  201,  208,  208,  208,  208,  208,  227,
> -      231,  208,  208,  227,  213,  217,  222,  222,  224,  224,
> -
> -      232,  232,  222,  230,  224,  238,  232,  237,  237,  241,
> -      239,  218,  214,  211,  231,  209,  206,  199,  195,  193,
> -      190,  185,  177,  175,  136,  238,  239,  241,  251,  251,
> -      251,  251,  252,  252,  252,  253,  253,  253,  253,  254,
> -      254,  254,  254,  255,   93,  255,  255,  256,  256,  256,
> -      256,  257,   64,  257,  257,  258,  258,  258,  258,  259,
> -      259,   35,  259,  260,  260,  260,  260,  261,   28,  261,
> -      261,  262,  262,  262,  262,  263,  263,  263,  263,  264,
> -      264,  264,  264,  265,  265,  265,  265,  266,  266,  266,
> -      266,  267,  267,  267,  267,  268,   24,  268,  268,  269,
> -
> -      269,  269,  269,  270,  270,  270,  270,  271,   22,  271,
> -      271,  272,  272,  272,  272,  273,   17,  273,  273,  274,
> -      274,  274,  274,  275,    6,  275,  275,  276,  276,  276,
> -      276,  277,    5,  277,  277,  278,  278,  278,  278,  279,
> -        4,  279,  279,  280,  280,  280,  280,  281,    3,  281,
> -      281,  282,  282,  282,  282,  283,    0,  283,  283,  284,
> -        0,  284,  284,  285,    0,  285,  285,  286,    0,  286,
> -      286,  287,  287,  287,  287,  288,  288,  288,  288,  289,
> -        0,  289,  289,  290,  290,  290,  290,  291,  291,  291,
> -      291,  292,    0,  292,  292,  293,  293,  293,  293,  294,
> -
> -      294,  294,  294,  295,  295,  295,  295,  296,  296,  296,
> -      296,  297,  297,  297,  297,  298,    0,  298,  298,  299,
> -      299,  299,  299,  300,  300,  300,  300,  301,    0,  301,
> -      301,  302,  302,  302,  302,  303,  303,  303,  303,  304,
> -        0,  304,  304,  305,    0,  305,  305,  306,  306,  306,
> -      306,  307,  307,  307,  307,  250,  250,  250,  250,  250,
> -      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
> -      250,  250,  250,  250,  250,  250,  250,  250,  250,  250,
> -      250,  250,  250,  250,  250,  250,  250,  250,  250
> +      162,  168,  168,  174,  174,  178,  173,  168,  178,  179,
> +      179,  180,  180,  186,  186,  188,  188,  180,  197,  186,
> +      199,  188,  192,  192,  178,  202,  202,  214,  199,  199,
> +      215,  202,  197,  209,  209,  209,  209,  209,  218,  214,
> +      209,  209,  219,  224,  224,  215,  230,  218,  233,  224,
> +
> +      230,  226,  226,  234,  235,  235,  219,  226,  240,  242,
> +      235,  241,  241,  244,  243,  246,  240,  248,  248,  247,
> +      245,  233,  231,  229,  222,  220,  244,  234,  216,  242,
> +      243,  212,  210,  246,  262,  262,  262,  262,  263,  263,
> +      263,  264,  264,  264,  264,  265,  265,  265,  265,  266,
> +      207,  266,  266,  267,  267,  267,  267,  268,  200,  268,
> +      268,  269,  269,  269,  269,  270,  270,  195,  270,  271,
> +      271,  271,  271,  272,  193,  272,  272,  273,  273,  273,
> +      273,  274,  274,  274,  274,  275,  275,  275,  275,  276,
> +      276,  276,  276,  277,  277,  277,  277,  278,  278,  278,
> +
> +      278,  279,  190,  279,  279,  280,  280,  280,  280,  281,
> +      281,  281,  281,  282,  185,  282,  282,  283,  283,  283,
> +      283,  284,  177,  284,  284,  285,  285,  285,  285,  286,
> +      175,  286,  286,  287,  287,  287,  287,  288,  136,  288,
> +      288,  289,  289,  289,  289,  290,   93,  290,  290,  291,
> +      291,  291,  291,  292,   64,  292,  292,  293,  293,  293,
> +      293,  294,   35,  294,  294,  295,   28,  295,  295,  296,
> +       24,  296,  296,  297,   22,  297,  297,  298,  298,  298,
> +      298,  299,  299,  299,  299,  300,   17,  300,  300,  301,
> +      301,  301,  301,  302,  302,  302,  302,  303,    6,  303,
> +
> +      303,  304,  304,  304,  304,  305,  305,  305,  305,  306,
> +      306,  306,  306,  307,  307,  307,  307,  308,  308,  308,
> +      308,  309,    5,  309,  309,  310,  310,  310,  310,  311,
> +      311,  311,  311,  312,    4,  312,  312,  313,  313,  313,
> +      313,  314,  314,  314,  314,  315,    3,  315,  315,  316,
> +        0,  316,  316,  317,  317,  317,  317,  318,  318,  318,
> +      318,  319,  319,  319,  319,  320,  320,  320,  320,  261,
> +      261,  261,  261,  261,  261,  261,  261,  261,  261,  261,
> +      261,  261,  261,  261,  261,  261,  261,  261,  261,  261,
> +      261,  261,  261,  261,  261,  261,  261,  261,  261,  261,
> +
> +      261,  261,  261
>      } ;
> 
>  #define YY_TRAILING_MASK 0x2000
> @@ -856,6 +857,13 @@ static void setbackendtype(DiskParseContext *dpc, const char *str) {
>      else xlu__disk_err(dpc,str,"unknown value for backendtype");
>  }
> 
> +/* Sets ->backend_domid from the string. */
> +static void setbackend(DiskParseContext *dpc, const char *str) {
> +    if (libxl_name_to_domid(dpc->ctx, str, &dpc->disk->backend_domid)) {
> +        xlu__disk_err(dpc,str,"unknown domain for backend");
> +    }
> +}
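
For context on what this new helper enables: once the `backend=` rule below is wired up, a guest config could route a disk through a driver domain by naming that domain directly in the disk spec. A hypothetical sketch (the domain name "driverdom" and paths are made up for illustration):

```
disk = [ 'format=raw, vdev=xvda, backend=driverdom, target=/dev/vg/guest-disk' ]
```

`setbackend()` resolves the name to a domid via `libxl_name_to_domid()` and stores it in `disk->backend_domid`, reporting "unknown domain for backend" if no such domain exists at parse time.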
> +
>  #define DEPRECATE(usewhatinstead) /* not currently reported */
> 
>  /* Handles a vdev positional parameter which includes a devtype. */
> @@ -883,7 +891,7 @@ static int vdev_and_devtype(DiskParseContext *dpc, char *str) {
>  #define DPC ((DiskParseContext*)yyextra)
> 
> 
> -#line 887 "libxlu_disk_l.c"
> +#line 895 "libxlu_disk_l.c"
> 
>  #define INITIAL 0
>  #define LEXERR 1
> @@ -980,6 +988,10 @@ int xlu__disk_yyget_lineno (yyscan_t yyscanner );
> 
>  void xlu__disk_yyset_lineno (int line_number ,yyscan_t yyscanner );
> 
> +int xlu__disk_yyget_column  (yyscan_t yyscanner );
> +
> +void xlu__disk_yyset_column (int column_no ,yyscan_t yyscanner );
> +
>  /* Macros after this point can all be overridden by user definitions in
>   * section 1.
>   */
> @@ -1012,12 +1024,7 @@ static int input (yyscan_t yyscanner );
> 
>  /* Amount of stuff to slurp up with each read. */
>  #ifndef YY_READ_BUF_SIZE
> -#ifdef __ia64__
> -/* On IA-64, the buffer size is 16k, not 8k */
> -#define YY_READ_BUF_SIZE 16384
> -#else
>  #define YY_READ_BUF_SIZE 8192
> -#endif /* __ia64__ */
>  #endif
> 
>  /* Copy whatever the last rule matched to the standard output. */
> @@ -1036,7 +1043,7 @@ static int input (yyscan_t yyscanner );
>         if ( YY_CURRENT_BUFFER_LVALUE->yy_is_interactive ) \
>                 { \
>                 int c = '*'; \
> -               size_t n; \
> +               unsigned n; \
>                 for ( n = 0; n < max_size && \
>                              (c = getc( yyin )) != EOF && c != '\n'; ++n ) \
>                         buf[n] = (char) c; \
> @@ -1119,12 +1126,12 @@ YY_DECL
>         register int yy_act;
>      struct yyguts_t * yyg = (struct yyguts_t*)yyscanner;
> 
> -#line 155 "libxlu_disk_l.l"
> +#line 162 "libxlu_disk_l.l"
> 
> 
>   /*----- the scanner rules which do the parsing -----*/
> 
> -#line 1128 "libxlu_disk_l.c"
> +#line 1135 "libxlu_disk_l.c"
> 
>         if ( !yyg->yy_init )
>                 {
> @@ -1188,14 +1195,14 @@ yy_match:
>                         while ( yy_chk[yy_base[yy_current_state] + yy_c] != yy_current_state )
>                                 {
>                                 yy_current_state = (int) yy_def[yy_current_state];
> -                               if ( yy_current_state >= 251 )
> +                               if ( yy_current_state >= 262 )
>                                         yy_c = yy_meta[(unsigned int) yy_c];
>                                 }
>                         yy_current_state = yy_nxt[yy_base[yy_current_state] + (unsigned int) yy_c];
>                         *yyg->yy_state_ptr++ = yy_current_state;
>                         ++yy_cp;
>                         }
> -               while ( yy_current_state != 250 );
> +               while ( yy_current_state != 261 );
> 
>  yy_find_action:
>                 yy_current_state = *--yyg->yy_state_ptr;
> @@ -1245,89 +1252,95 @@ do_action:      /* This label is used only to access EOF actions. */
>  case 1:
>  /* rule 1 can match eol */
>  YY_RULE_SETUP
> -#line 159 "libxlu_disk_l.l"
> +#line 166 "libxlu_disk_l.l"
>  { /* ignore whitespace before parameters */ }
>         YY_BREAK
>  /* ordinary parameters setting enums or strings */
>  case 2:
>  /* rule 2 can match eol */
>  YY_RULE_SETUP
> -#line 163 "libxlu_disk_l.l"
> +#line 170 "libxlu_disk_l.l"
>  { STRIP(','); setformat(DPC, FROMEQUALS); }
>         YY_BREAK
>  case 3:
>  YY_RULE_SETUP
> -#line 165 "libxlu_disk_l.l"
> +#line 172 "libxlu_disk_l.l"
>  { DPC->disk->is_cdrom = 1; }
>         YY_BREAK
>  case 4:
>  YY_RULE_SETUP
> -#line 166 "libxlu_disk_l.l"
> +#line 173 "libxlu_disk_l.l"
>  { DPC->disk->is_cdrom = 1; }
>         YY_BREAK
>  case 5:
>  YY_RULE_SETUP
> -#line 167 "libxlu_disk_l.l"
> +#line 174 "libxlu_disk_l.l"
>  { DPC->disk->is_cdrom = 0; }
>         YY_BREAK
>  case 6:
>  /* rule 6 can match eol */
>  YY_RULE_SETUP
> -#line 168 "libxlu_disk_l.l"
> +#line 175 "libxlu_disk_l.l"
>  { xlu__disk_err(DPC,yytext,"unknown value for type"); }
>         YY_BREAK
>  case 7:
>  /* rule 7 can match eol */
>  YY_RULE_SETUP
> -#line 170 "libxlu_disk_l.l"
> +#line 177 "libxlu_disk_l.l"
>  { STRIP(','); setaccess(DPC, FROMEQUALS); }
>         YY_BREAK
>  case 8:
>  /* rule 8 can match eol */
>  YY_RULE_SETUP
> -#line 171 "libxlu_disk_l.l"
> +#line 178 "libxlu_disk_l.l"
>  { STRIP(','); setbackendtype(DPC,FROMEQUALS); }
>         YY_BREAK
>  case 9:
>  /* rule 9 can match eol */
>  YY_RULE_SETUP
> -#line 173 "libxlu_disk_l.l"
> -{ STRIP(','); SAVESTRING("vdev", vdev, FROMEQUALS); }
> +#line 179 "libxlu_disk_l.l"
> +{ STRIP(','); setbackend(DPC,FROMEQUALS); }
>         YY_BREAK
>  case 10:
>  /* rule 10 can match eol */
>  YY_RULE_SETUP
> -#line 174 "libxlu_disk_l.l"
> +#line 181 "libxlu_disk_l.l"
> +{ STRIP(','); SAVESTRING("vdev", vdev, FROMEQUALS); }
> +       YY_BREAK
> +case 11:
> +/* rule 11 can match eol */
> +YY_RULE_SETUP
> +#line 182 "libxlu_disk_l.l"
>  { STRIP(','); SAVESTRING("script", script, FROMEQUALS); }
>         YY_BREAK
>  /* the target magic parameter, eats the rest of the string */
> -case 11:
> +case 12:
>  YY_RULE_SETUP
> -#line 178 "libxlu_disk_l.l"
> +#line 186 "libxlu_disk_l.l"
>  { STRIP(','); SAVESTRING("target", pdev_path, FROMEQUALS); }
>         YY_BREAK
>  /* unknown parameters */
> -case 12:
> -/* rule 12 can match eol */
> +case 13:
> +/* rule 13 can match eol */
>  YY_RULE_SETUP
> -#line 182 "libxlu_disk_l.l"
> +#line 190 "libxlu_disk_l.l"
>  { xlu__disk_err(DPC,yytext,"unknown parameter"); }
>         YY_BREAK
>  /* deprecated prefixes */
>  /* the "/.*" in these patterns ensures that they count as if they
>     * matched the whole string, so these patterns take precedence */
> -case 13:
> +case 14:
>  YY_RULE_SETUP
> -#line 189 "libxlu_disk_l.l"
> +#line 197 "libxlu_disk_l.l"
>  {
>                      STRIP(':');
>                      DPC->had_depr_prefix=1; DEPRECATE("use `[format=]...,'");
>                      setformat(DPC, yytext);
>                   }
>         YY_BREAK
> -case 14:
> +case 15:
>  YY_RULE_SETUP
> -#line 195 "libxlu_disk_l.l"
> +#line 203 "libxlu_disk_l.l"
>  {
>                      char *newscript;
>                      STRIP(':');
> @@ -1341,65 +1354,65 @@ YY_RULE_SETUP
>                      free(newscript);
>                  }
>         YY_BREAK
> -case 15:
> +case 16:
>  *yy_cp = yyg->yy_hold_char; /* undo effects of setting up yytext */
>  yyg->yy_c_buf_p = yy_cp = yy_bp + 8;
>  YY_DO_BEFORE_ACTION; /* set up yytext again */
>  YY_RULE_SETUP
> -#line 208 "libxlu_disk_l.l"
> +#line 216 "libxlu_disk_l.l"
>  { DPC->had_depr_prefix=1; DEPRECATE(0); }
>         YY_BREAK
> -case 16:
> +case 17:
>  YY_RULE_SETUP
> -#line 209 "libxlu_disk_l.l"
> +#line 217 "libxlu_disk_l.l"
>  { DPC->had_depr_prefix=1; DEPRECATE(0); }
>         YY_BREAK
> -case 17:
> +case 18:
>  *yy_cp = yyg->yy_hold_char; /* undo effects of setting up yytext */
>  yyg->yy_c_buf_p = yy_cp = yy_bp + 4;
>  YY_DO_BEFORE_ACTION; /* set up yytext again */
>  YY_RULE_SETUP
> -#line 210 "libxlu_disk_l.l"
> +#line 218 "libxlu_disk_l.l"
>  { DPC->had_depr_prefix=1; DEPRECATE(0); }
>         YY_BREAK
> -case 18:
> +case 19:
>  *yy_cp = yyg->yy_hold_char; /* undo effects of setting up yytext */
>  yyg->yy_c_buf_p = yy_cp = yy_bp + 6;
>  YY_DO_BEFORE_ACTION; /* set up yytext again */
>  YY_RULE_SETUP
> -#line 211 "libxlu_disk_l.l"
> +#line 219 "libxlu_disk_l.l"
>  { DPC->had_depr_prefix=1; DEPRECATE(0); }
>         YY_BREAK
> -case 19:
> +case 20:
>  *yy_cp = yyg->yy_hold_char; /* undo effects of setting up yytext */
>  yyg->yy_c_buf_p = yy_cp = yy_bp + 5;
>  YY_DO_BEFORE_ACTION; /* set up yytext again */
>  YY_RULE_SETUP
> -#line 212 "libxlu_disk_l.l"
> +#line 220 "libxlu_disk_l.l"
>  { DPC->had_depr_prefix=1; DEPRECATE(0); }
>         YY_BREAK
> -case 20:
> +case 21:
>  *yy_cp = yyg->yy_hold_char; /* undo effects of setting up yytext */
>  yyg->yy_c_buf_p = yy_cp = yy_bp + 4;
>  YY_DO_BEFORE_ACTION; /* set up yytext again */
>  YY_RULE_SETUP
> -#line 213 "libxlu_disk_l.l"
> +#line 221 "libxlu_disk_l.l"
>  { DPC->had_depr_prefix=1; DEPRECATE(0); }
>         YY_BREAK
> -case 21:
> -/* rule 21 can match eol */
> +case 22:
> +/* rule 22 can match eol */
>  YY_RULE_SETUP
> -#line 215 "libxlu_disk_l.l"
> +#line 223 "libxlu_disk_l.l"
>  {
>                   xlu__disk_err(DPC,yytext,"unknown deprecated disk prefix");
>                   return 0;
>                 }
>         YY_BREAK
>  /* positional parameters */
> -case 22:
> -/* rule 22 can match eol */
> +case 23:
> +/* rule 23 can match eol */
>  YY_RULE_SETUP
> -#line 222 "libxlu_disk_l.l"
> +#line 230 "libxlu_disk_l.l"
>  {
>      STRIP(',');
> 
> @@ -1426,27 +1439,27 @@ YY_RULE_SETUP
>      }
>  }
>         YY_BREAK
> -case 23:
> +case 24:
>  YY_RULE_SETUP
> -#line 248 "libxlu_disk_l.l"
> +#line 256 "libxlu_disk_l.l"
>  {
>      BEGIN(LEXERR);
>      yymore();
>  }
>         YY_BREAK
> -case 24:
> +case 25:
>  YY_RULE_SETUP
> -#line 252 "libxlu_disk_l.l"
> +#line 260 "libxlu_disk_l.l"
>  {
>      xlu__disk_err(DPC,yytext,"bad disk syntax"); return 0;
>  }
>         YY_BREAK
> -case 25:
> +case 26:
>  YY_RULE_SETUP
> -#line 255 "libxlu_disk_l.l"
> +#line 263 "libxlu_disk_l.l"
>  YY_FATAL_ERROR( "flex scanner jammed" );
>         YY_BREAK
> -#line 1450 "libxlu_disk_l.c"
> +#line 1463 "libxlu_disk_l.c"
>                         case YY_STATE_EOF(INITIAL):
>                         case YY_STATE_EOF(LEXERR):
>                                 yyterminate();
> @@ -1710,7 +1723,7 @@ static int yy_get_next_buffer (yyscan_t yyscanner)
>                 while ( yy_chk[yy_base[yy_current_state] + yy_c] != yy_current_state )
>                         {
>                         yy_current_state = (int) yy_def[yy_current_state];
> -                       if ( yy_current_state >= 251 )
> +                       if ( yy_current_state >= 262 )
>                                 yy_c = yy_meta[(unsigned int) yy_c];
>                         }
>                 yy_current_state = yy_nxt[yy_base[yy_current_state] + (unsigned int) yy_c];
> @@ -1734,11 +1747,11 @@ static int yy_get_next_buffer (yyscan_t yyscanner)
>         while ( yy_chk[yy_base[yy_current_state] + yy_c] != yy_current_state )
>                 {
>                 yy_current_state = (int) yy_def[yy_current_state];
> -               if ( yy_current_state >= 251 )
> +               if ( yy_current_state >= 262 )
>                         yy_c = yy_meta[(unsigned int) yy_c];
>                 }
>         yy_current_state = yy_nxt[yy_base[yy_current_state] + (unsigned int) yy_c];
> -       yy_is_jam = (yy_current_state == 250);
> +       yy_is_jam = (yy_current_state == 261);
>         if ( ! yy_is_jam )
>                 *yyg->yy_state_ptr++ = yy_current_state;
> 
> @@ -2147,8 +2160,8 @@ YY_BUFFER_STATE xlu__disk_yy_scan_string (yyconst char * yystr , yyscan_t yyscan
> 
>  /** Setup the input buffer state to scan the given bytes. The next call to xlu__disk_yylex() will
>   * scan from a @e copy of @a bytes.
> - * @param yybytes the byte buffer to scan
> - * @param _yybytes_len the number of bytes in the buffer pointed to by @a bytes.
> + * @param bytes the byte buffer to scan
> + * @param len the number of bytes in the buffer pointed to by @a bytes.
>   * @param yyscanner The scanner object.
>   * @return the newly allocated buffer state object.
>   */
> @@ -2538,4 +2551,4 @@ void xlu__disk_yyfree (void * ptr , yyscan_t yyscanner)
> 
>  #define YYTABLES_NAME "yytables"
> 
> -#line 255 "libxlu_disk_l.l"
> +#line 263 "libxlu_disk_l.l"
> diff --git a/tools/libxl/libxlu_disk_l.h b/tools/libxl/libxlu_disk_l.h
> index de03908..247a0d7 100644
> --- a/tools/libxl/libxlu_disk_l.h
> +++ b/tools/libxl/libxlu_disk_l.h
> @@ -62,6 +62,7 @@ typedef int flex_int32_t;
>  typedef unsigned char flex_uint8_t;
>  typedef unsigned short int flex_uint16_t;
>  typedef unsigned int flex_uint32_t;
> +#endif /* ! C99 */
> 
>  /* Limits of integral types. */
>  #ifndef INT8_MIN
> @@ -92,8 +93,6 @@ typedef unsigned int flex_uint32_t;
>  #define UINT32_MAX             (4294967295U)
>  #endif
> 
> -#endif /* ! C99 */
> -
>  #endif /* ! FLEXINT_H */
> 
>  #ifdef __cplusplus
> @@ -136,15 +135,7 @@ typedef void* yyscan_t;
> 
>  /* Size of default input buffer. */
>  #ifndef YY_BUF_SIZE
> -#ifdef __ia64__
> -/* On IA-64, the buffer size is 16k, not 8k.
> - * Moreover, YY_BUF_SIZE is 2*YY_READ_BUF_SIZE in the general case.
> - * Ditto for the __ia64__ case accordingly.
> - */
> -#define YY_BUF_SIZE 32768
> -#else
>  #define YY_BUF_SIZE 16384
> -#endif /* __ia64__ */
>  #endif
> 
>  #ifndef YY_TYPEDEF_YY_BUFFER_STATE
> @@ -280,6 +271,10 @@ int xlu__disk_yyget_lineno (yyscan_t yyscanner );
> 
>  void xlu__disk_yyset_lineno (int line_number ,yyscan_t yyscanner );
> 
> +int xlu__disk_yyget_column  (yyscan_t yyscanner );
> +
> +void xlu__disk_yyset_column (int column_no ,yyscan_t yyscanner );
> +
>  /* Macros after this point can all be overridden by user definitions in
>   * section 1.
>   */
> @@ -306,12 +301,7 @@ static int yy_flex_strlen (yyconst char * ,yyscan_t yyscanner);
> 
>  /* Amount of stuff to slurp up with each read. */
>  #ifndef YY_READ_BUF_SIZE
> -#ifdef __ia64__
> -/* On IA-64, the buffer size is 16k, not 8k */
> -#define YY_READ_BUF_SIZE 16384
> -#else
>  #define YY_READ_BUF_SIZE 8192
> -#endif /* __ia64__ */
>  #endif
> 
>  /* Number of entries by which start-condition stack grows. */
> @@ -344,8 +334,8 @@ extern int xlu__disk_yylex (yyscan_t yyscanner);
>  #undef YY_DECL
>  #endif
> 
> -#line 255 "libxlu_disk_l.l"
> +#line 263 "libxlu_disk_l.l"
> 
> -#line 350 "libxlu_disk_l.h"
> +#line 340 "libxlu_disk_l.h"
>  #undef xlu__disk_yyIN_HEADER
>  #endif /* xlu__disk_yyHEADER_H */
> diff --git a/tools/libxl/libxlu_disk_l.l b/tools/libxl/libxlu_disk_l.l
> index bee16a1..6bd48e8 100644
> --- a/tools/libxl/libxlu_disk_l.l
> +++ b/tools/libxl/libxlu_disk_l.l
> @@ -113,6 +113,13 @@ static void setbackendtype(DiskParseContext *dpc, const char *str) {
>      else xlu__disk_err(dpc,str,"unknown value for backendtype");
>  }
> 
> +/* Sets ->backend_domid from the string. */
> +static void setbackend(DiskParseContext *dpc, const char *str) {
> +    if (libxl_name_to_domid(dpc->ctx, str, &dpc->disk->backend_domid)) {
> +        xlu__disk_err(dpc,str,"unknown domain for backend");
> +    }
> +}
> +
>  #define DEPRECATE(usewhatinstead) /* not currently reported */
> 
>  /* Handles a vdev positional parameter which includes a devtype. */
> @@ -169,6 +176,7 @@ devtype=[^,]*,?     { xlu__disk_err(DPC,yytext,"unknown value for type"); }
> 
>  access=[^,]*,? { STRIP(','); setaccess(DPC, FROMEQUALS); }
>  backendtype=[^,]*,? { STRIP(','); setbackendtype(DPC,FROMEQUALS); }
> +backenddomain=[^,]*,? { STRIP(','); setbackend(DPC,FROMEQUALS); }
> 
>  vdev=[^,]*,?   { STRIP(','); SAVESTRING("vdev", vdev, FROMEQUALS); }
>  script=[^,]*,? { STRIP(','); SAVESTRING("script", script, FROMEQUALS); }
> diff --git a/tools/libxl/libxlutil.h b/tools/libxl/libxlutil.h
> index 0333e55..87eb399 100644
> --- a/tools/libxl/libxlutil.h
> +++ b/tools/libxl/libxlutil.h
> @@ -72,7 +72,7 @@ const char *xlu_cfg_get_listitem(const XLU_ConfigList*, int entry);
>   */
> 
>  int xlu_disk_parse(XLU_Config *cfg, int nspecs, const char *const *specs,
> -                   libxl_device_disk *disk);
> +                   libxl_device_disk *disk, libxl_ctx *ctx);
>    /* disk must have been initialised.
>     *
>     * On error, returns errno value.  Bad strings cause EINVAL and
> diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
> index 138cd72..fd00d61 100644
> --- a/tools/libxl/xl_cmdimpl.c
> +++ b/tools/libxl/xl_cmdimpl.c
> @@ -420,7 +420,7 @@ static void parse_disk_config_multistring(XLU_Config **config,
>          if (!*config) { perror("xlu_cfg_init"); exit(-1); }
>      }
> 
> -    e = xlu_disk_parse(*config, nspecs, specs, disk);
> +    e = xlu_disk_parse(*config, nspecs, specs, disk, ctx);
>      if (e == EINVAL) exit(-1);
>      if (e) {
>          fprintf(stderr,"xlu_disk_parse failed: %s\n",strerror(errno));
> @@ -5335,7 +5335,7 @@ int main_networkdetach(int argc, char **argv)
>  int main_blockattach(int argc, char **argv)
>  {
>      int opt;
> -    uint32_t fe_domid, be_domid = 0;
> +    uint32_t fe_domid;
>      libxl_device_disk disk = { 0 };
>      XLU_Config *config = 0;
> 
> @@ -5351,8 +5351,6 @@ int main_blockattach(int argc, char **argv)
>      parse_disk_config_multistring
>          (&config, argc-optind, (const char* const*)argv + optind, &disk);
> 
> -    disk.backend_domid = be_domid;
> -
>      if (dryrun_only) {
>          char *json = libxl_device_disk_to_json(ctx, &disk);
>          printf("disk: %s\n", json);
> --
> 1.7.11.2
> 
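The `backenddomain=` rule added in the patch above follows the lexer's usual `key=[^,]*,?` shape: strip the trailing comma with STRIP(','), take the value after the '=' (FROMEQUALS), and hand it to a setter such as setbackend(). A minimal self-contained model of that strip-and-dispatch step is sketched below; the helper names here are invented for illustration and are not the real libxlu macros:

```c
#include <stdlib.h>
#include <string.h>

/* Model of the lexer's STRIP(','): drop one trailing comma, if present,
 * from the matched text.  Returns a freshly malloc'd copy of the value. */
char *strip_trailing(const char *text, char c)
{
    size_t n = strlen(text);
    if (n > 0 && text[n - 1] == c)
        n--;
    char *out = malloc(n + 1);
    if (!out)
        return NULL;
    memcpy(out, text, n);
    out[n] = '\0';
    return out;
}

/* Model of FROMEQUALS: the option value starts after the first '='. */
const char *from_equals(const char *text)
{
    const char *eq = strchr(text, '=');
    return eq ? eq + 1 : text;
}
```

With these, a matched token like "backenddomain=domA," reduces to the value "domA", which the real code would then pass to libxl_name_to_domid() to fill in backend_domid.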



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 08:17:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 08:17:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7MQ4-0004y8-4S; Fri, 31 Aug 2012 08:17:36 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T7MQ2-0004y3-K3
	for xen-devel@lists.xensource.com; Fri, 31 Aug 2012 08:17:34 +0000
Received: from [85.158.143.99:40157] by server-1.bemta-4.messagelabs.com id
	2B/C2-12504-D1370405; Fri, 31 Aug 2012 08:17:33 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-216.messagelabs.com!1346401052!27459174!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTExNjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22352 invoked from network); 31 Aug 2012 08:17:32 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-5.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 08:17:32 -0000
X-IronPort-AV: E=Sophos;i="4.80,346,1344211200"; d="scan'208";a="14282237"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 08:16:52 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1;
	Fri, 31 Aug 2012 09:16:52 +0100
Message-ID: <1346401010.27277.103.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: George Dunlap <George.Dunlap@eu.citrix.com>
Date: Fri, 31 Aug 2012 09:16:50 +0100
In-Reply-To: <1345191671.30865.81.camel@zakaz.uk.xensource.com>
References: <0982bad392e4f96fb39a.1345022903@elijah>
	<1345191671.30865.81.camel@zakaz.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [PATCH] xl: Suppress spurious warning message for
 cpupool-list
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-08-17 at 09:21 +0100, Ian Campbell wrote:
> On Wed, 2012-08-15 at 10:28 +0100, George Dunlap wrote:
> > # HG changeset patch
> > # User George Dunlap <george.dunlap@eu.citrix.com>
> > # Date 1345022863 -3600
> > # Node ID 0982bad392e4f96fb39a025d6528c33be32c6c04
> > # Parent  dc56a9defa30312a46cfb6ddb578e64cfbc6bc8b
> > xl: Suppress spurious warning message for cpupool-list
> > 
> > libxl_cpupool_list() enumerates the cpupools by "probing": calling
> > cpupool_info, starting at 0 and stopping when it gets an error. However,
> > cpupool_info will print an error when the call to xc_cpupool_getinfo() fails,
> > resulting in every xl command that uses libxl_list_cpupool (such as
> > cpupool-list) printing that error message spuriously.
> [...]
> > This patch adds a "probe" argument to cpupool_info(). If set, it won't print
> > a warning if the xc_cpupool_getinfo() call fails with ENOENT.
> 
> Looking at the callers I think the existing "exact" parameter could be
> used instead of a new param -- it would be fine to fail silently on
> ENOENT iff !exact, I think.

Hi George,

were you intending to do this for 4.2?

Ian.
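
The alternative suggested above, keeping the probing loop but reusing the existing "exact" flag to decide whether an ENOENT failure is worth reporting, can be modelled with a small self-contained sketch. Everything here (mock_getinfo, MOCK_NPOOLS, the errors_logged counter standing in for libxl's logging) is invented for illustration and is not the real libxl code:

```c
#include <errno.h>

#define MOCK_NPOOLS 3   /* pretend exactly three cpupools exist */

/* Stand-in for xc_cpupool_getinfo(): succeeds for poolid < MOCK_NPOOLS,
 * otherwise fails with ENOENT, as the hypercall does when probing past
 * the last pool. */
int mock_getinfo(int poolid)
{
    if (poolid < MOCK_NPOOLS)
        return 0;
    errno = ENOENT;
    return -1;
}

/* Model of cpupool_info() with the suggested behaviour: when probing
 * (!exact), an ENOENT failure marks the end of enumeration and is not
 * reported; any other failure, or any failure on an exact lookup, is. */
int cpupool_info(int poolid, int exact, int *errors_logged)
{
    if (mock_getinfo(poolid) < 0) {
        if (exact || errno != ENOENT)
            (*errors_logged)++;   /* the real code would log here */
        return -1;
    }
    return 0;
}

/* Model of libxl_cpupool_list(): probe from 0 until the first failure. */
int list_cpupools(int *errors_logged)
{
    int n = 0;
    while (cpupool_info(n, /* exact: */ 0, errors_logged) == 0)
        n++;
    return n;
}
```

Probing with exact == 0 walks pools 0, 1, 2, ... and stops quietly at the first ENOENT, while an explicit lookup with exact == 1 still reports the failure.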


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 08:41:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 08:41:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7Mmz-0005OO-Sd; Fri, 31 Aug 2012 08:41:17 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T7Mmy-0005OB-F3
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 08:41:16 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1346402458!8912735!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTExNjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 304 invoked from network); 31 Aug 2012 08:40:58 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 08:40:58 -0000
X-IronPort-AV: E=Sophos;i="4.80,346,1344211200"; d="scan'208";a="14282707"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 08:40:58 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1;
	Fri, 31 Aug 2012 09:40:58 +0100
Message-ID: <1346402456.27277.111.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Fri, 31 Aug 2012 09:40:56 +0100
In-Reply-To: <4FB377A10200007800083FFE@nat28.tlf.novell.com>
References: <db614e92faf743e20b3f.1337096977@kodo2>
	<4FB29D6F0200007800083E81@nat28.tlf.novell.com>
	<4FB28295.8020905@eu.citrix.com>
	<4FB2A2320200007800083EB4@nat28.tlf.novell.com>
	<4FB28709.9080001@eu.citrix.com>
	<4FB377A10200007800083FFE@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] xencommons: Attempt to load blktap driver
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2012-05-16 at 08:47 +0100, Jan Beulich wrote:
> Finally, loading "blktap" here is the right thing for pvops, but
> wrong for legacy/forward ported kernels - blktap2 would be
> the right module name there afaict. Perhaps, if this really is to
> go in (temporarily), loading an alias (devname: or xen-backend:,
> though the latter appears to be missing from the pvops driver)
> would be better here?

George's patch doesn't seem to apply any more (hardly surprising).

From what you say I think we want to modprobe blktap if blktap2 didn't
exist.

blktap2 isn't actually a xenbus backend driver (since it uses blkback to
do the guest facing bit) so I don't think a xen-backend: alias is
available. I can't see any other aliases defined in the code in either
the 2.6.18-xen tree, the SLES 2.6.32.12-0.7.1 kernel (which is the
latest I happen to have to hand) or a mainline kernel. If there is
something else we should be trying please let me know.

So I intend to commit the following:

8<--------------------------------------------------

xencommons: Attempt to load blktap2 driver

Older kernels, such as those found in Debian Squeeze:
* Have bugs in handling of AIO into foreign pages
* Have blktap modules, which will cause qemu not to use AIO, but
  which are not loaded on boot.

Attempt to load blktap in xencommons, to make sure modern qemus which
use AIO will work properly on those kernels.

Signed-off-by: George Dunlap <george.dunlap@eu.citrix.com>

Prefer to load blktap2 if it exists. This is the name of the driver in
classic-Xen ports, while in mainline kernels the driver is called just
blktap.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

diff -r 1126b3079bef -r 451724678dd4 tools/hotplug/Linux/init.d/xencommons
--- a/tools/hotplug/Linux/init.d/xencommons	Fri Aug 24 12:38:18 2012 +0100
+++ b/tools/hotplug/Linux/init.d/xencommons	Fri Aug 31 09:32:37 2012 +0100
@@ -68,6 +68,7 @@ do_start () {
 	modprobe usbbk 2>/dev/null
 	modprobe pciback 2>/dev/null
 	modprobe xen-acpi-processor 2>/dev/null
+	modprobe blktap2 2>/dev/null || modprobe blktap 2>/dev/null
 	mkdir -p /var/run/xen
 
 	if ! `xenstore-read -s / >/dev/null 2>&1`



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 08:55:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 08:55:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7N0g-0005gU-DP; Fri, 31 Aug 2012 08:55:26 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T7N0f-0005gL-6d
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 08:55:25 +0000
Received: from [85.158.143.35:27745] by server-1.bemta-4.messagelabs.com id
	BE/F8-12504-CFB70405; Fri, 31 Aug 2012 08:55:24 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1346403320!13479703!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEyMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17847 invoked from network); 31 Aug 2012 08:55:20 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 08:55:20 -0000
X-IronPort-AV: E=Sophos;i="4.80,346,1344211200"; d="scan'208";a="14283027"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 08:55:20 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1;
	Fri, 31 Aug 2012 09:55:20 +0100
Message-ID: <1346403318.27277.118.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Dmitry Ivanov <vonami@gmail.com>
Date: Fri, 31 Aug 2012 09:55:18 +0100
In-Reply-To: <CALaHputr63fjY6DOiNSx5SRS6dsb4v0dZzhypSeb2M8hQmhH-g@mail.gmail.com>
References: <CALaHputr63fjY6DOiNSx5SRS6dsb4v0dZzhypSeb2M8hQmhH-g@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-users@lists.xen.org" <xen-users@lists.xen.org>,
	Roger Pau Monne <roger.pau@citrix.com>, Ian
	Jackson <Ian.Jackson@eu.citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [Xen-users] Failed to build xen-unstable with
 --disable-pythontools
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Adding xen-devel.

On Fri, 2012-08-31 at 09:37 +0100, Dmitry Ivanov wrote:
> Hello,
> 
> After configuring xen-unstable with ./configure --disable-ocamltools
> --disable-pythontools it fails to build:
> 
> make[4]: Entering directory
> `/home/vonami/sources/xen-unstable.hg/tools/include/xen-foreign'
> mkheader.py x86_32 x86_32.h
> /home/vonami/sources/xen-unstable.hg/tools/include/xen-foreign/../../../xen/include/public/arch-x86/xen-x86_32.h
> /home/vonami/sources/xen-unstable.hg/tools/include/xen-foreign/../../../xen/include/public/arch-x86/xen.h
> /home/vonami/sources/xen-unstable.hg/tools/include/xen-foreign/../../../xen/include/public/xen.h
> make[4]: mkheader.py: Command not found
> make[4]: *** [x86_32.h] Error 127
> 
> Turns out PYTHON is empty in the tools/include/xen-foreign/Makefile.
> Looks like PYTHON and PYTHONPATH are not set by tools/configure script
> when --disable-pythontools is on.

So long as Xen uses python as part of the build system (which I think it
will do for the foreseeable future) I don't think it is optional, and
therefore the --disable-pythontools option is broken as currently
implemented.

I think we should just remove this option for 4.2 and revisit being able
to disable the runtime components which use python, but not the build
time usage.

--disable-pythontools is a bit of a big hammer: as well as disabling
xend (which seems reasonable) it also throws out pygrub, which people
are more likely to want. In 4.3 we should aim for more fine-grained
control of those two options.

In the meantime I think the answer is "don't do that then". That may
well also turn out to be the answer for the 4.2.0 release at this point.

A patch to remove this option follows.

Ian.

8<--------------------------------------------------

# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1346403269 -3600
# Node ID e334eebb59c5b02edeab5dc3ed0174053676b5c6
# Parent  451724678dd474025bd3472ed802d9d01a4fa3d8
tools: remove --disable-pythontools option

This incorrectly removes the $(PYTHON) variable which is used at build
time as well as by the tools.

Remove and revisit for 4.3.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

diff -r 451724678dd4 -r e334eebb59c5 config/Tools.mk.in
--- a/config/Tools.mk.in	Fri Aug 31 09:32:37 2012 +0100
+++ b/config/Tools.mk.in	Fri Aug 31 09:54:29 2012 +0100
@@ -43,7 +43,6 @@ GIT_HTTP            := @githttp@
 XENSTAT_XENTOP      := @monitors@
 VTPM_TOOLS          := @vtpm@
 LIBXENAPI_BINDINGS  := @xenapi@
-PYTHON_TOOLS        := @pythontools@
 OCAML_TOOLS         := @ocamltools@
 CONFIG_MINITERM     := @miniterm@
 CONFIG_LOMOUNT      := @lomount@
diff -r 451724678dd4 -r e334eebb59c5 tools/Makefile
--- a/tools/Makefile	Fri Aug 31 09:32:37 2012 +0100
+++ b/tools/Makefile	Fri Aug 31 09:54:29 2012 +0100
@@ -48,8 +48,8 @@ SUBDIRS-$(CONFIG_TESTS) += tests
 
 # These don't cross-compile
 ifeq ($(XEN_COMPILE_ARCH),$(XEN_TARGET_ARCH))
-SUBDIRS-$(PYTHON_TOOLS) += python
-SUBDIRS-$(PYTHON_TOOLS) += pygrub
+SUBDIRS-y += python
+SUBDIRS-y += pygrub
 SUBDIRS-$(OCAML_TOOLS) += ocaml
 endif
 
diff -r 451724678dd4 -r e334eebb59c5 tools/configure
--- a/tools/configure	Fri Aug 31 09:32:37 2012 +0100
+++ b/tools/configure	Fri Aug 31 09:54:29 2012 +0100
@@ -665,7 +665,6 @@ ovmf
 lomount
 miniterm
 ocamltools
-pythontools
 xenapi
 vtpm
 monitors
@@ -723,7 +722,6 @@ enable_githttp
 enable_monitors
 enable_vtpm
 enable_xenapi
-enable_pythontools
 enable_ocamltools
 enable_miniterm
 enable_lomount
@@ -1384,7 +1382,6 @@ Optional Features:
   --enable-vtpm           Enable Virtual Trusted Platform Module (default is
                           DISABLED)
   --enable-xenapi         Enable Xen API Bindings (default is DISABLED)
-  --disable-pythontools   Disable Python tools (default is ENABLED)
   --disable-ocamltools    Disable Ocaml tools (default is ENABLED)
   --enable-miniterm       Enable miniterm (default is DISABLED)
   --enable-lomount        Enable lomount (default is DISABLED)
@@ -2489,29 +2486,6 @@ xenapi=$ax_cv_xenapi
 
 
 
-# Check whether --enable-pythontools was given.
-if test "${enable_pythontools+set}" = set; then :
-  enableval=$enable_pythontools;
-fi
-
-
-if test "x$enable_pythontools" = "xno"; then :
-
-    ax_cv_pythontools="n"
-
-elif test "x$enable_pythontools" = "xyes"; then :
-
-    ax_cv_pythontools="y"
-
-elif test -z $ax_cv_pythontools; then :
-
-    ax_cv_pythontools="y"
-
-fi
-pythontools=$ax_cv_pythontools
-
-
-
 # Check whether --enable-ocamltools was given.
 if test "${enable_ocamltools+set}" = set; then :
   enableval=$enable_ocamltools;
@@ -4900,6 +4874,74 @@ if test x"${BASH}" == x"no"
 then
     as_fn_error $? "Unable to find bash, please install bash" "$LINENO" 5
 fi
+if echo "$PYTHON" | grep -q "^/"; then :
+
+    PYTHONPATH=$PYTHON
+    PYTHON=`basename $PYTHONPATH`
+
+elif test -z "$PYTHON"; then :
+  PYTHON="python"
+else
+  as_fn_error $? "PYTHON specified, but is not an absolute path" "$LINENO" 5
+fi
+# Extract the first word of "$PYTHON", so it can be a program name with args.
+set dummy $PYTHON; ac_word=$2
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+$as_echo_n "checking for $ac_word... " >&6; }
+if test "${ac_cv_path_PYTHONPATH+set}" = set; then :
+  $as_echo_n "(cached) " >&6
+else
+  case $PYTHONPATH in
+  [\\/]* | ?:[\\/]*)
+  ac_cv_path_PYTHONPATH="$PYTHONPATH" # Let the user override the test with a path.
+  ;;
+  *)
+  as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+  IFS=$as_save_IFS
+  test -z "$as_dir" && as_dir=.
+    for ac_exec_ext in '' $ac_executable_extensions; do
+  if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
+    ac_cv_path_PYTHONPATH="$as_dir/$ac_word$ac_exec_ext"
+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
+    break 2
+  fi
+done
+  done
+IFS=$as_save_IFS
+
+  test -z "$ac_cv_path_PYTHONPATH" && ac_cv_path_PYTHONPATH="no"
+  ;;
+esac
+fi
+PYTHONPATH=$ac_cv_path_PYTHONPATH
+if test -n "$PYTHONPATH"; then
+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $PYTHONPATH" >&5
+$as_echo "$PYTHONPATH" >&6; }
+else
+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+fi
+
+
+if test x"${PYTHONPATH}" == x"no"
+then
+    as_fn_error $? "Unable to find $PYTHON, please install $PYTHON" "$LINENO" 5
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for python version >= 2.3 " >&5
+$as_echo_n "checking for python version >= 2.3 ... " >&6; }
+`$PYTHON -c 'import sys; sys.exit(eval("sys.version_info < (2, 3)"))'`
+if test "$?" != "0"
+then
+    python_version=`$PYTHON -V 2>&1`
+    { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+    as_fn_error $? "$python_version is too old, minimum required version is 2.3" "$LINENO" 5
+else
+    { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5
+$as_echo "yes" >&6; }
+fi
 
 ac_ext=c
 ac_cpp='$CPP $CPPFLAGS'
@@ -5298,76 +5340,6 @@ fi
 done
 
 
-if test "x$pythontools" = "xy"; then :
-
-    if echo "$PYTHON" | grep -q "^/"; then :
-
-        PYTHONPATH=$PYTHON
-        PYTHON=`basename $PYTHONPATH`
-
-elif test -z "$PYTHON"; then :
-  PYTHON="python"
-else
-  as_fn_error $? "PYTHON specified, but is not an absolute path" "$LINENO" 5
-fi
-    # Extract the first word of "$PYTHON", so it can be a program name with args.
-set dummy $PYTHON; ac_word=$2
-{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
-$as_echo_n "checking for $ac_word... " >&6; }
-if test "${ac_cv_path_PYTHONPATH+set}" = set; then :
-  $as_echo_n "(cached) " >&6
-else
-  case $PYTHONPATH in
-  [\\/]* | ?:[\\/]*)
-  ac_cv_path_PYTHONPATH="$PYTHONPATH" # Let the user override the test with a path.
-  ;;
-  *)
-  as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
-for as_dir in $PATH
-do
-  IFS=$as_save_IFS
-  test -z "$as_dir" && as_dir=.
-    for ac_exec_ext in '' $ac_executable_extensions; do
-  if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
-    ac_cv_path_PYTHONPATH="$as_dir/$ac_word$ac_exec_ext"
-    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-    break 2
-  fi
-done
-  done
-IFS=$as_save_IFS
-
-  test -z "$ac_cv_path_PYTHONPATH" && ac_cv_path_PYTHONPATH="no"
-  ;;
-esac
-fi
-PYTHONPATH=$ac_cv_path_PYTHONPATH
-if test -n "$PYTHONPATH"; then
-  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $PYTHONPATH" >&5
-$as_echo "$PYTHONPATH" >&6; }
-else
-  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
-$as_echo "no" >&6; }
-fi
-
-
-if test x"${PYTHONPATH}" == x"no"
-then
-    as_fn_error $? "Unable to find $PYTHON, please install $PYTHON" "$LINENO" 5
-fi
-    { $as_echo "$as_me:${as_lineno-$LINENO}: checking for python version >= 2.3 " >&5
-$as_echo_n "checking for python version >= 2.3 ... " >&6; }
-`$PYTHON -c 'import sys; sys.exit(eval("sys.version_info < (2, 3)"))'`
-if test "$?" != "0"
-then
-    python_version=`$PYTHON -V 2>&1`
-    { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
-$as_echo "no" >&6; }
-    as_fn_error $? "$python_version is too old, minimum required version is 2.3" "$LINENO" 5
-else
-    { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5
-$as_echo "yes" >&6; }
-fi
 
 ac_previous_cppflags=$CPPFLAGS
 ac_previous_ldflags=$LDFLAGS
@@ -5499,9 +5471,7 @@ fi
 CPPFLAGS=$ac_previous_cppflags
 LDLFAGS=$ac_previous_ldflags
 
-
-fi
- # Extract the first word of "xgettext", so it can be a program name with args.
+# Extract the first word of "xgettext", so it can be a program name with args.
 set dummy xgettext; ac_word=$2
 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
 $as_echo_n "checking for $ac_word... " >&6; }
diff -r 451724678dd4 -r e334eebb59c5 tools/configure.ac
--- a/tools/configure.ac	Fri Aug 31 09:32:37 2012 +0100
+++ b/tools/configure.ac	Fri Aug 31 09:54:29 2012 +0100
@@ -41,7 +41,6 @@ AX_ARG_DEFAULT_DISABLE([githttp], [Downl
 AX_ARG_DEFAULT_ENABLE([monitors], [Disable xenstat and xentop monitoring tools])
 AX_ARG_DEFAULT_DISABLE([vtpm], [Enable Virtual Trusted Platform Module])
 AX_ARG_DEFAULT_DISABLE([xenapi], [Enable Xen API Bindings])
-AX_ARG_DEFAULT_ENABLE([pythontools], [Disable Python tools])
 AX_ARG_DEFAULT_ENABLE([ocamltools], [Disable Ocaml tools])
 AX_ARG_DEFAULT_DISABLE([miniterm], [Enable miniterm])
 AX_ARG_DEFAULT_DISABLE([lomount], [Enable lomount])
@@ -94,17 +93,15 @@ AS_IF([test "x$ocamltools" = "xy"], [
     ])
 ])
 AX_PATH_PROG_OR_FAIL([BASH], [bash])
-AS_IF([test "x$pythontools" = "xy"], [
-    AS_IF([echo "$PYTHON" | grep -q "^/"], [
-        PYTHONPATH=$PYTHON
-        PYTHON=`basename $PYTHONPATH`
-    ],[test -z "$PYTHON"], [PYTHON="python"],
-    [AC_MSG_ERROR([PYTHON specified, but is not an absolute path])])
-    AX_PATH_PROG_OR_FAIL([PYTHONPATH], [$PYTHON])
-    AX_CHECK_PYTHON_VERSION([2], [3])
-     AX_CHECK_PYTHON_DEVEL()
- ])
- AX_PATH_PROG_OR_FAIL([XGETTEXT], [xgettext])
+AS_IF([echo "$PYTHON" | grep -q "^/"], [
+    PYTHONPATH=$PYTHON
+    PYTHON=`basename $PYTHONPATH`
+],[test -z "$PYTHON"], [PYTHON="python"],
+[AC_MSG_ERROR([PYTHON specified, but is not an absolute path])])
+AX_PATH_PROG_OR_FAIL([PYTHONPATH], [$PYTHON])
+AX_CHECK_PYTHON_VERSION([2], [3])
+ AX_CHECK_PYTHON_DEVEL()
+AX_PATH_PROG_OR_FAIL([XGETTEXT], [xgettext])
 dnl as86, ld86, bcc and iasl are only required when the host system is x86*.
 dnl "host" here means the platform on which the hypervisor and tools is
 dnl going to run, not the platform on which we are building (known as



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 08:55:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 08:55:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7N0g-0005gU-DP; Fri, 31 Aug 2012 08:55:26 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T7N0f-0005gL-6d
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 08:55:25 +0000
Received: from [85.158.143.35:27745] by server-1.bemta-4.messagelabs.com id
	BE/F8-12504-CFB70405; Fri, 31 Aug 2012 08:55:24 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1346403320!13479703!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEyMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17847 invoked from network); 31 Aug 2012 08:55:20 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 08:55:20 -0000
X-IronPort-AV: E=Sophos;i="4.80,346,1344211200"; d="scan'208";a="14283027"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 08:55:20 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1;
	Fri, 31 Aug 2012 09:55:20 +0100
Message-ID: <1346403318.27277.118.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Dmitry Ivanov <vonami@gmail.com>
Date: Fri, 31 Aug 2012 09:55:18 +0100
In-Reply-To: <CALaHputr63fjY6DOiNSx5SRS6dsb4v0dZzhypSeb2M8hQmhH-g@mail.gmail.com>
References: <CALaHputr63fjY6DOiNSx5SRS6dsb4v0dZzhypSeb2M8hQmhH-g@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-users@lists.xen.org" <xen-users@lists.xen.org>,
	Roger Pau Monne <roger.pau@citrix.com>, Ian
	Jackson <Ian.Jackson@eu.citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [Xen-users] Failed to build xen-unstable with
 --disable-pythontools
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Adding xen-devel.

On Fri, 2012-08-31 at 09:37 +0100, Dmitry Ivanov wrote:
> Hello,
> 
> After configuring xen-unstable with ./configure --disable-ocamltools
> --disable-pythontools it fails to build:
> 
> make[4]: Entering directory
> `/home/vonami/sources/xen-unstable.hg/tools/include/xen-foreign'
> mkheader.py x86_32 x86_32.h
> /home/vonami/sources/xen-unstable.hg/tools/include/xen-foreign/../../../xen/include/public/arch-x86/xen-x86_32.h
> /home/vonami/sources/xen-unstable.hg/tools/include/xen-foreign/../../../xen/include/public/arch-x86/xen.h
> /home/vonami/sources/xen-unstable.hg/tools/include/xen-foreign/../../../xen/include/public/xen.h
> make[4]: mkheader.py: Command not found
> make[4]: *** [x86_32.h] Error 127
> 
> Turns out PYTHON is empty in the tools/include/xen-foreign/Makefile.
> Looks like PYTHON and PYTHONPATH are not set by tools/configure script
> when --disable-pythontools is on.
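
For illustration, the failure mode above can be reproduced with a plain
shell sketch (the script name below is made up; nothing here touches the
real build system):

```shell
#!/bin/sh
# When $PYTHON expands to nothing, a recipe like "$(PYTHON) mkheader.py ..."
# degenerates into running the script name itself; since that name is
# neither on $PATH nor executable, the shell reports status 127
# ("Command not found"), which make surfaces as "Error 127".
PYTHON=""
status=0
$PYTHON mkheader-demo-not-on-path.py 2>/dev/null || status=$?
echo "exit status: $status"
```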

So long as Xen uses Python as part of the build system (which I think it
will do for the foreseeable future) I don't think it is optional, and
therefore --disable-pythontools is broken as currently implemented.

I think we should just remove this option for 4.2 and revisit being able
to disable the runtime components which use python, but not the build
time usage.

--disable-pythontools is a bit of a big hammer: as well as disabling
xend (which seems reasonable) it also throws out pygrub, which people
are more likely to want. In 4.3 we should aim for more fine-grained
control over those two components.

In the meantime I think the answer is "don't do that then". That may
well also turn out to be the answer for the 4.2.0 release at this point.

A patch to remove this option follows.

Ian.

8<--------------------------------------------------

# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1346403269 -3600
# Node ID e334eebb59c5b02edeab5dc3ed0174053676b5c6
# Parent  451724678dd474025bd3472ed802d9d01a4fa3d8
tools: remove --disable-pythontools option

This incorrectly removes the $(PYTHON) variable which is used at build
time as well as by the tools.

Remove and revisit for 4.3.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

diff -r 451724678dd4 -r e334eebb59c5 config/Tools.mk.in
--- a/config/Tools.mk.in	Fri Aug 31 09:32:37 2012 +0100
+++ b/config/Tools.mk.in	Fri Aug 31 09:54:29 2012 +0100
@@ -43,7 +43,6 @@ GIT_HTTP            := @githttp@
 XENSTAT_XENTOP      := @monitors@
 VTPM_TOOLS          := @vtpm@
 LIBXENAPI_BINDINGS  := @xenapi@
-PYTHON_TOOLS        := @pythontools@
 OCAML_TOOLS         := @ocamltools@
 CONFIG_MINITERM     := @miniterm@
 CONFIG_LOMOUNT      := @lomount@
diff -r 451724678dd4 -r e334eebb59c5 tools/Makefile
--- a/tools/Makefile	Fri Aug 31 09:32:37 2012 +0100
+++ b/tools/Makefile	Fri Aug 31 09:54:29 2012 +0100
@@ -48,8 +48,8 @@ SUBDIRS-$(CONFIG_TESTS) += tests
 
 # These don't cross-compile
 ifeq ($(XEN_COMPILE_ARCH),$(XEN_TARGET_ARCH))
-SUBDIRS-$(PYTHON_TOOLS) += python
-SUBDIRS-$(PYTHON_TOOLS) += pygrub
+SUBDIRS-y += python
+SUBDIRS-y += pygrub
 SUBDIRS-$(OCAML_TOOLS) += ocaml
 endif
 
diff -r 451724678dd4 -r e334eebb59c5 tools/configure
--- a/tools/configure	Fri Aug 31 09:32:37 2012 +0100
+++ b/tools/configure	Fri Aug 31 09:54:29 2012 +0100
@@ -665,7 +665,6 @@ ovmf
 lomount
 miniterm
 ocamltools
-pythontools
 xenapi
 vtpm
 monitors
@@ -723,7 +722,6 @@ enable_githttp
 enable_monitors
 enable_vtpm
 enable_xenapi
-enable_pythontools
 enable_ocamltools
 enable_miniterm
 enable_lomount
@@ -1384,7 +1382,6 @@ Optional Features:
   --enable-vtpm           Enable Virtual Trusted Platform Module (default is
                           DISABLED)
   --enable-xenapi         Enable Xen API Bindings (default is DISABLED)
-  --disable-pythontools   Disable Python tools (default is ENABLED)
   --disable-ocamltools    Disable Ocaml tools (default is ENABLED)
   --enable-miniterm       Enable miniterm (default is DISABLED)
   --enable-lomount        Enable lomount (default is DISABLED)
@@ -2489,29 +2486,6 @@ xenapi=$ax_cv_xenapi
 
 
 
-# Check whether --enable-pythontools was given.
-if test "${enable_pythontools+set}" = set; then :
-  enableval=$enable_pythontools;
-fi
-
-
-if test "x$enable_pythontools" = "xno"; then :
-
-    ax_cv_pythontools="n"
-
-elif test "x$enable_pythontools" = "xyes"; then :
-
-    ax_cv_pythontools="y"
-
-elif test -z $ax_cv_pythontools; then :
-
-    ax_cv_pythontools="y"
-
-fi
-pythontools=$ax_cv_pythontools
-
-
-
 # Check whether --enable-ocamltools was given.
 if test "${enable_ocamltools+set}" = set; then :
   enableval=$enable_ocamltools;
@@ -4900,6 +4874,74 @@ if test x"${BASH}" == x"no"
 then
     as_fn_error $? "Unable to find bash, please install bash" "$LINENO" 5
 fi
+if echo "$PYTHON" | grep -q "^/"; then :
+
+    PYTHONPATH=$PYTHON
+    PYTHON=`basename $PYTHONPATH`
+
+elif test -z "$PYTHON"; then :
+  PYTHON="python"
+else
+  as_fn_error $? "PYTHON specified, but is not an absolute path" "$LINENO" 5
+fi
+# Extract the first word of "$PYTHON", so it can be a program name with args.
+set dummy $PYTHON; ac_word=$2
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+$as_echo_n "checking for $ac_word... " >&6; }
+if test "${ac_cv_path_PYTHONPATH+set}" = set; then :
+  $as_echo_n "(cached) " >&6
+else
+  case $PYTHONPATH in
+  [\\/]* | ?:[\\/]*)
+  ac_cv_path_PYTHONPATH="$PYTHONPATH" # Let the user override the test with a path.
+  ;;
+  *)
+  as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+  IFS=$as_save_IFS
+  test -z "$as_dir" && as_dir=.
+    for ac_exec_ext in '' $ac_executable_extensions; do
+  if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
+    ac_cv_path_PYTHONPATH="$as_dir/$ac_word$ac_exec_ext"
+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
+    break 2
+  fi
+done
+  done
+IFS=$as_save_IFS
+
+  test -z "$ac_cv_path_PYTHONPATH" && ac_cv_path_PYTHONPATH="no"
+  ;;
+esac
+fi
+PYTHONPATH=$ac_cv_path_PYTHONPATH
+if test -n "$PYTHONPATH"; then
+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $PYTHONPATH" >&5
+$as_echo "$PYTHONPATH" >&6; }
+else
+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+fi
+
+
+if test x"${PYTHONPATH}" == x"no"
+then
+    as_fn_error $? "Unable to find $PYTHON, please install $PYTHON" "$LINENO" 5
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for python version >= 2.3 " >&5
+$as_echo_n "checking for python version >= 2.3 ... " >&6; }
+`$PYTHON -c 'import sys; sys.exit(eval("sys.version_info < (2, 3)"))'`
+if test "$?" != "0"
+then
+    python_version=`$PYTHON -V 2>&1`
+    { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+    as_fn_error $? "$python_version is too old, minimum required version is 2.3" "$LINENO" 5
+else
+    { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5
+$as_echo "yes" >&6; }
+fi
 
 ac_ext=c
 ac_cpp='$CPP $CPPFLAGS'
@@ -5298,76 +5340,6 @@ fi
 done
 
 
-if test "x$pythontools" = "xy"; then :
-
-    if echo "$PYTHON" | grep -q "^/"; then :
-
-        PYTHONPATH=$PYTHON
-        PYTHON=`basename $PYTHONPATH`
-
-elif test -z "$PYTHON"; then :
-  PYTHON="python"
-else
-  as_fn_error $? "PYTHON specified, but is not an absolute path" "$LINENO" 5
-fi
-    # Extract the first word of "$PYTHON", so it can be a program name with args.
-set dummy $PYTHON; ac_word=$2
-{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
-$as_echo_n "checking for $ac_word... " >&6; }
-if test "${ac_cv_path_PYTHONPATH+set}" = set; then :
-  $as_echo_n "(cached) " >&6
-else
-  case $PYTHONPATH in
-  [\\/]* | ?:[\\/]*)
-  ac_cv_path_PYTHONPATH="$PYTHONPATH" # Let the user override the test with a path.
-  ;;
-  *)
-  as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
-for as_dir in $PATH
-do
-  IFS=$as_save_IFS
-  test -z "$as_dir" && as_dir=.
-    for ac_exec_ext in '' $ac_executable_extensions; do
-  if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
-    ac_cv_path_PYTHONPATH="$as_dir/$ac_word$ac_exec_ext"
-    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-    break 2
-  fi
-done
-  done
-IFS=$as_save_IFS
-
-  test -z "$ac_cv_path_PYTHONPATH" && ac_cv_path_PYTHONPATH="no"
-  ;;
-esac
-fi
-PYTHONPATH=$ac_cv_path_PYTHONPATH
-if test -n "$PYTHONPATH"; then
-  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $PYTHONPATH" >&5
-$as_echo "$PYTHONPATH" >&6; }
-else
-  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
-$as_echo "no" >&6; }
-fi
-
-
-if test x"${PYTHONPATH}" == x"no"
-then
-    as_fn_error $? "Unable to find $PYTHON, please install $PYTHON" "$LINENO" 5
-fi
-    { $as_echo "$as_me:${as_lineno-$LINENO}: checking for python version >= 2.3 " >&5
-$as_echo_n "checking for python version >= 2.3 ... " >&6; }
-`$PYTHON -c 'import sys; sys.exit(eval("sys.version_info < (2, 3)"))'`
-if test "$?" != "0"
-then
-    python_version=`$PYTHON -V 2>&1`
-    { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
-$as_echo "no" >&6; }
-    as_fn_error $? "$python_version is too old, minimum required version is 2.3" "$LINENO" 5
-else
-    { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5
-$as_echo "yes" >&6; }
-fi
 
 ac_previous_cppflags=$CPPFLAGS
 ac_previous_ldflags=$LDFLAGS
@@ -5499,9 +5471,7 @@ fi
 CPPFLAGS=$ac_previous_cppflags
 LDLFAGS=$ac_previous_ldflags
 
-
-fi
- # Extract the first word of "xgettext", so it can be a program name with args.
+# Extract the first word of "xgettext", so it can be a program name with args.
 set dummy xgettext; ac_word=$2
 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
 $as_echo_n "checking for $ac_word... " >&6; }
diff -r 451724678dd4 -r e334eebb59c5 tools/configure.ac
--- a/tools/configure.ac	Fri Aug 31 09:32:37 2012 +0100
+++ b/tools/configure.ac	Fri Aug 31 09:54:29 2012 +0100
@@ -41,7 +41,6 @@ AX_ARG_DEFAULT_DISABLE([githttp], [Downl
 AX_ARG_DEFAULT_ENABLE([monitors], [Disable xenstat and xentop monitoring tools])
 AX_ARG_DEFAULT_DISABLE([vtpm], [Enable Virtual Trusted Platform Module])
 AX_ARG_DEFAULT_DISABLE([xenapi], [Enable Xen API Bindings])
-AX_ARG_DEFAULT_ENABLE([pythontools], [Disable Python tools])
 AX_ARG_DEFAULT_ENABLE([ocamltools], [Disable Ocaml tools])
 AX_ARG_DEFAULT_DISABLE([miniterm], [Enable miniterm])
 AX_ARG_DEFAULT_DISABLE([lomount], [Enable lomount])
@@ -94,17 +93,15 @@ AS_IF([test "x$ocamltools" = "xy"], [
     ])
 ])
 AX_PATH_PROG_OR_FAIL([BASH], [bash])
-AS_IF([test "x$pythontools" = "xy"], [
-    AS_IF([echo "$PYTHON" | grep -q "^/"], [
-        PYTHONPATH=$PYTHON
-        PYTHON=`basename $PYTHONPATH`
-    ],[test -z "$PYTHON"], [PYTHON="python"],
-    [AC_MSG_ERROR([PYTHON specified, but is not an absolute path])])
-    AX_PATH_PROG_OR_FAIL([PYTHONPATH], [$PYTHON])
-    AX_CHECK_PYTHON_VERSION([2], [3])
-     AX_CHECK_PYTHON_DEVEL()
- ])
- AX_PATH_PROG_OR_FAIL([XGETTEXT], [xgettext])
+AS_IF([echo "$PYTHON" | grep -q "^/"], [
+    PYTHONPATH=$PYTHON
+    PYTHON=`basename $PYTHONPATH`
+],[test -z "$PYTHON"], [PYTHON="python"],
+[AC_MSG_ERROR([PYTHON specified, but is not an absolute path])])
+AX_PATH_PROG_OR_FAIL([PYTHONPATH], [$PYTHON])
+AX_CHECK_PYTHON_VERSION([2], [3])
+ AX_CHECK_PYTHON_DEVEL()
+AX_PATH_PROG_OR_FAIL([XGETTEXT], [xgettext])
 dnl as86, ld86, bcc and iasl are only required when the host system is x86*.
 dnl "host" here means the platform on which the hypervisor and tools is
 dnl going to run, not the platform on which we are building (known as



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 09:04:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 09:04:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7N9Z-0006BM-9t; Fri, 31 Aug 2012 09:04:37 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T7N9Y-0006BD-49
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 09:04:36 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1346403820!1252286!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM2OTU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19096 invoked from network); 31 Aug 2012 09:03:40 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-7.tower-27.messagelabs.com with SMTP;
	31 Aug 2012 09:03:40 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 31 Aug 2012 10:03:38 +0100
Message-Id: <50409A070200007800097BA3@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 31 Aug 2012 10:03:35 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>
References: <db614e92faf743e20b3f.1337096977@kodo2>
	<4FB29D6F0200007800083E81@nat28.tlf.novell.com>
	<4FB28295.8020905@eu.citrix.com>
	<4FB2A2320200007800083EB4@nat28.tlf.novell.com>
	<4FB28709.9080001@eu.citrix.com>
	<4FB377A10200007800083FFE@nat28.tlf.novell.com>
	<1346402456.27277.111.camel@zakaz.uk.xensource.com>
In-Reply-To: <1346402456.27277.111.camel@zakaz.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] xencommons: Attempt to load blktap driver
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 31.08.12 at 10:40, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> From what you say I think we want to modprobe blktap if blktap2 didn't
> exist.
> 
> blktap2 isn't actually a xenbus backend driver (since it uses blkback to
> do the guest facing bit) so I don't think a xen-backend: alias is
> available. I can't see any other aliases defined in the code in either
> the 2.6.18-xen tree, the SLES 2.6.32.12-0.7.1 kernel (which is the
> latest I happen to have to hand) or a mainline kernel. If there is
> something else we should be trying please let me know.

There's a "devname:xen/blktap-2/control" alias in our SLE11 SP2
and newer openSUSE ones (as of 2.6.35). Whether that's fully
appropriate to be there and/or to be used as a modprobe
argument I'm not sure though.
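
The fallback under discussion can be sketched as a small decision helper
(assumptions: the control-node path is derived from the
"devname:xen/blktap-2/control" alias above, and the sketch prints its
decision rather than actually calling modprobe):

```shell
#!/bin/sh
# Decide whether the blktap1 module would need loading, based on whether
# the blktap2 control node already exists at the given path.
choose_blktap_module() {
    if [ -e "$1" ]; then
        echo "blktap2 control node present, nothing to load"
    else
        echo "would modprobe blktap"
    fi
}
# /dev/null stands in for an existing control node in this demo.
choose_blktap_module /dev/null
choose_blktap_module /definitely/absent/blktap-2/control
```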

The bad thing about the "blktap" name is that that's also the
name of the blktap1 driver in the 2.6.18 tree and its forward
ports, but I don't think there's anything we can reasonably do
about that. So I'm fine with the change you suggest from that
perspective (whether or not to use the module alias pointed out above).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 09:04:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 09:04:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7N9Z-0006BM-9t; Fri, 31 Aug 2012 09:04:37 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T7N9Y-0006BD-49
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 09:04:36 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1346403820!1252286!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM2OTU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19096 invoked from network); 31 Aug 2012 09:03:40 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-7.tower-27.messagelabs.com with SMTP;
	31 Aug 2012 09:03:40 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 31 Aug 2012 10:03:38 +0100
Message-Id: <50409A070200007800097BA3@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 31 Aug 2012 10:03:35 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>
References: <db614e92faf743e20b3f.1337096977@kodo2>
	<4FB29D6F0200007800083E81@nat28.tlf.novell.com>
	<4FB28295.8020905@eu.citrix.com>
	<4FB2A2320200007800083EB4@nat28.tlf.novell.com>
	<4FB28709.9080001@eu.citrix.com>
	<4FB377A10200007800083FFE@nat28.tlf.novell.com>
	<1346402456.27277.111.camel@zakaz.uk.xensource.com>
In-Reply-To: <1346402456.27277.111.camel@zakaz.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] xencommons: Attempt to load blktap driver
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 31.08.12 at 10:40, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> From what you say I think we want to modprobe blktap if blktap2 didn't
> exist.
> 
> blktap2 isn't actually a xenbus backend driver (since it uses blkback to
> do the guest facing bit) so I don't think a xen-backend: alias is
> available. I can't see any other aliases defined in the code in either
> the 2.6.18-xen tree, the SLES 2.6.32.12-0.7.1 kernel (which is the
> latest I happen to have to hand) or a mainline kernel. If there is
> something else we should be trying please let me know.

There's a "devname:xen/blktap-2/control" alias in our SLE11 SP2
kernels and newer openSUSE ones (as of 2.6.35). Whether it's fully
appropriate for that alias to be there, and/or for it to be used as
a modprobe argument, I'm not sure though.

The bad thing about the "blktap" name is that it's also the
name of the blktap1 driver in the 2.6.18 tree and its forward
ports, but I don't think there's anything we can reasonably do
about that. So I'm fine with the change you suggest from that
perspective (whether to also use the module alias pointed out
above is a separate question).
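In init-script terms, the fallback being discussed might look like the
sketch below. This is only an illustration, not the actual xencommons
script: the probe order and the messages are assumptions, and as noted
above, on 2.6.18-derived kernels "blktap" names the blktap1 driver.

```shell
# Illustrative fallback (assumed logic, not the real xencommons script):
# prefer the blktap2 driver and fall back to the older blktap driver.
load_blktap() {
    if modprobe blktap2 2>/dev/null; then
        echo "loaded blktap2"
    elif modprobe blktap 2>/dev/null; then
        echo "loaded blktap"
    else
        echo "no blktap driver available"
    fi
}

load_blktap
```

The devname alias mentioned above could be checked on a given kernel with
something like `modinfo -F alias blktap2`, assuming the module is installed.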

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 09:08:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 09:08:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7ND9-0006L8-UE; Fri, 31 Aug 2012 09:08:19 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T7ND8-0006Kv-8e
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 09:08:18 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1346404064!8471026!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM2OTU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30371 invoked from network); 31 Aug 2012 09:07:44 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-16.tower-27.messagelabs.com with SMTP;
	31 Aug 2012 09:07:44 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 31 Aug 2012 10:07:43 +0100
Message-Id: <50409AFC0200007800097BA6@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 31 Aug 2012 10:07:40 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: <zhenzhong.duan@oracle.com>
References: <5020C24A.3060604@oracle.com>
	<5020F003020000780009322C@nat28.tlf.novell.com>
	<502235E8.9040309@oracle.com>
	<50229B840200007800093A73@nat28.tlf.novell.com>
	<5023860E.7080908@oracle.com>
	<5023AE960200007800093DE8@nat28.tlf.novell.com>
	<502490A7.7020603@oracle.com>
	<502535280200007800094322@nat28.tlf.novell.com>
	<5028B3AB.7060705@oracle.com>
	<5028E53202000078000946B1@nat28.tlf.novell.com>
	<503DAA5F.5030306@oracle.com>
In-Reply-To: <503DAA5F.5030306@oracle.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Satish Kantheti <satish.kantheti@oracle.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Feng Jin <joe.jin@oracle.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Tim Deegan <tim@xen.org>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] kernel bootup slow issue on ovm3.1.1
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 29.08.12 at 07:36, "zhenzhong.duan" <zhenzhong.duan@oracle.com> wrote:
> 
> On 2012-08-13 17:29, Jan Beulich wrote:
>>>>> On 13.08.12 at 09:58, "zhenzhong.duan" <zhenzhong.duan@oracle.com> wrote:
>>> On 2012-08-10 22:22, Jan Beulich wrote:
>>>> Going back to your original mail, I wonder however why this
>>>> gets done at all. You said it got there via
>>>>
>>>> mtrr_aps_init()
>>>>    \->   set_mtrr()
>>>>        \->   mtrr_work_handler()
>>>>
>>>> yet this isn't done unconditionally - see the comment before
>>>> checking mtrr_aps_delayed_init. Can you find out where the
>>>> obviously necessary call(s) to set_mtrr_aps_delayed_init()
>>>> come(s) from?
>>> At bootup stage, set_mtrr_aps_delayed_init is called by
>>> native_smp_prepare_cpus.
>>> mtrr_aps_delayed_init is always set to true for Intel processors in
>>> upstream code.
>> Indeed, and that (in one form or another) has been done
>> virtually forever in Linux. I wonder why the problem wasn't
>> noticed (or looked into, if it was noticed) so far.
>>
>> As it's going to be rather difficult to convince the Linux folks
>> to change their code (plus this wouldn't help with existing
>> kernels anyway), we'll need to find a way to improve this in
>> the hypervisor.
> Is this issue improvable from the Xen side?

Yes, we're investigating options.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
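The slow-boot thread above concerns Linux's delayed MTRR initialization on
application processors during SMP bringup (mtrr_aps_init / 
set_mtrr_aps_delayed_init). On a running Linux guest, the MTRR ranges that
have actually been programmed can be inspected as sketched below; this is
Linux-specific and assumes CONFIG_MTRR is enabled so that /proc/mtrr exists.

```shell
# Print the MTRR ranges Linux has programmed, or a note when /proc/mtrr
# is unavailable (it exists only on Linux with CONFIG_MTRR enabled).
show_mtrr() {
    if [ -r /proc/mtrr ]; then
        cat /proc/mtrr
    else
        echo "/proc/mtrr not available"
    fi
}

show_mtrr
```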

From xen-devel-bounces@lists.xen.org Fri Aug 31 09:18:09 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 09:18:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7NMN-0006Xj-0A; Fri, 31 Aug 2012 09:17:51 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Christoph.Egger@amd.com>) id 1T7NML-0006Xe-0I
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 09:17:49 +0000
Received: from [85.158.143.99:23846] by server-2.bemta-4.messagelabs.com id
	0E/9F-21239-C3180405; Fri, 31 Aug 2012 09:17:48 +0000
X-Env-Sender: Christoph.Egger@amd.com
X-Msg-Ref: server-11.tower-216.messagelabs.com!1346404626!20268868!1
X-Originating-IP: [65.55.88.14]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30791 invoked from network); 31 Aug 2012 09:17:07 -0000
Received: from tx2ehsobe004.messaging.microsoft.com (HELO
	tx2outboundpool.messaging.microsoft.com) (65.55.88.14)
	by server-11.tower-216.messagelabs.com with AES128-SHA encrypted SMTP;
	31 Aug 2012 09:17:07 -0000
Received: from mail160-tx2-R.bigfish.com (10.9.14.245) by
	TX2EHSOBE015.bigfish.com (10.9.40.35) with Microsoft SMTP Server id
	14.1.225.23; Fri, 31 Aug 2012 09:17:05 +0000
Received: from mail160-tx2 (localhost [127.0.0.1])	by
	mail160-tx2-R.bigfish.com (Postfix) with ESMTP id 82EC31C00F3	for
	<xen-devel@lists.xen.org>; Fri, 31 Aug 2012 09:17:05 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:163.181.249.109; KIP:(null); UIP:(null);
	IPV:NLI; H:ausb3twp02.amd.com; RD:none; EFVD:NLI
X-SpamScore: 0
X-BigFish: VPS0(zzc85fhc85dhzz1202hzz8275bhz2dh668h839he5bhf0ah107ah34h1155h)
Received: from mail160-tx2 (localhost.localdomain [127.0.0.1]) by mail160-tx2
	(MessageSwitch) id 1346404622606045_18103;
	Fri, 31 Aug 2012 09:17:02 +0000 (UTC)
Received: from TX2EHSMHS023.bigfish.com (unknown [10.9.14.236])	by
	mail160-tx2.bigfish.com (Postfix) with ESMTP id 84A8A3E0048	for
	<xen-devel@lists.xen.org>; Fri, 31 Aug 2012 09:17:02 +0000 (UTC)
Received: from ausb3twp02.amd.com (163.181.249.109) by
	TX2EHSMHS023.bigfish.com (10.9.99.123) with Microsoft SMTP Server id
	14.1.225.23; Fri, 31 Aug 2012 09:17:00 +0000
X-WSS-ID: 0M9M4G8-02-FAB-02
X-M-MSG: 
Received: from sausexedgep01.amd.com (sausexedgep01-ext.amd.com
	[163.181.249.72])	(using TLSv1 with cipher AES128-SHA (128/128
	bits))	(No
	client certificate requested)	by ausb3twp02.amd.com (Axway MailGate
	3.8.1)
	with ESMTP id 271ABC80BE	for <xen-devel@lists.xen.org>; Fri, 31 Aug 2012
	04:16:56 -0500 (CDT)
Received: from SAUSEXDAG01.amd.com (163.181.55.1) by sausexedgep01.amd.com
	(163.181.36.54) with Microsoft SMTP Server (TLS) id 8.3.192.1;
	Fri, 31 Aug 2012 04:17:18 -0500
Received: from storexhtp02.amd.com (172.24.4.4) by sausexdag01.amd.com
	(163.181.55.1) with Microsoft SMTP Server (TLS) id 14.1.323.3;
	Fri, 31 Aug 2012 04:16:58 -0500
Received: from rhodium.osrc.amd.com (165.204.15.173) by storexhtp02.amd.com
	(172.24.4.4) with Microsoft SMTP Server id 8.3.213.0; Fri, 31 Aug 2012
	05:16:56 -0400
Message-ID: <50408105.3020502@amd.com>
Date: Fri, 31 Aug 2012 11:16:53 +0200
From: Christoph Egger <Christoph.Egger@amd.com>
User-Agent: Mozilla/5.0 (X11; NetBSD amd64;
	rv:11.0) Gecko/20120404 Thunderbird/11.0
MIME-Version: 1.0
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Content-Type: multipart/mixed; boundary="------------060908090801080904070009"
X-OriginatorOrg: amd.com
Subject: [Xen-devel] [PATCH] nestedsvm: fix interrupt handling
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--------------060908090801080904070009
Content-Type: text/plain; charset="ISO-8859-15"
Content-Transfer-Encoding: 7bit

Give the l2 guest a chance to finish the delivery of the last injected
interrupt or exception before we emulate a VMEXIT.
For example, after an NPF handled by the host there can be an interrupt
pending for the l1 guest.

Signed-off-by: Christoph Egger <Christoph.Egger@amd.com>


-- 
---to satisfy European Law for business letters:
Advanced Micro Devices GmbH
Einsteinring 24, 85689 Dornach b. Muenchen
Geschaeftsfuehrer: Alberto Bozzo
Sitz: Dornach, Gemeinde Aschheim, Landkreis Muenchen
Registergericht Muenchen, HRB Nr. 43632

--------------060908090801080904070009
Content-Type: text/plain; charset="us-ascii"; name="xen_nh_intr.diff"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="xen_nh_intr.diff"
Content-Description: xen_nh_intr.diff

diff -r a0b5f8102a00 xen/arch/x86/hvm/svm/nestedsvm.c
--- a/xen/arch/x86/hvm/svm/nestedsvm.c	Tue Aug 28 22:40:45 2012 +0100
+++ b/xen/arch/x86/hvm/svm/nestedsvm.c	Fri Aug 31 11:09:37 2012 +0200
@@ -1164,6 +1164,8 @@ enum hvm_intblk nsvm_intr_blocked(struct
         return hvm_intblk_svm_gif;
 
     if ( nestedhvm_vcpu_in_guestmode(v) ) {
+        struct vmcb_struct *n2vmcb = nv->nv_n2vmcx;
+
         if ( svm->ns_hostflags.fields.vintrmask )
             if ( !svm->ns_hostflags.fields.rflagsif )
                 return hvm_intblk_rflags_ie;
@@ -1176,6 +1178,14 @@ enum hvm_intblk nsvm_intr_blocked(struct
          */
         if ( v->arch.hvm_vcpu.hvm_io.io_state != HVMIO_none )
             return hvm_intblk_shadow;
+
+        if ( !nv->nv_vmexit_pending && n2vmcb->exitintinfo.bytes != 0 ) {
+            /* Give the l2 guest a chance to finish the delivery of
+             * the last injected interrupt or exception before we
+             * emulate a VMEXIT (e.g. VMEXIT(INTR) ).
+             */
+            return hvm_intblk_shadow;
+        }
     }
 
     if ( nv->nv_vmexit_pending ) {

--------------060908090801080904070009
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--------------060908090801080904070009--


From xen-devel-bounces@lists.xen.org Fri Aug 31 09:25:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 09:25:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7NTt-0006iu-46; Fri, 31 Aug 2012 09:25:37 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T7NTs-0006ip-9H
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 09:25:36 +0000
Received: from [85.158.139.83:58927] by server-12.bemta-5.messagelabs.com id
	E9/20-18300-F0380405; Fri, 31 Aug 2012 09:25:35 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-182.messagelabs.com!1346405133!27195440!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEyMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26405 invoked from network); 31 Aug 2012 09:25:33 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-9.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 09:25:33 -0000
X-IronPort-AV: E=Sophos;i="4.80,346,1344211200"; d="scan'208";a="14283888"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 09:25:33 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1;
	Fri, 31 Aug 2012 10:25:33 +0100
Message-ID: <1346405131.27277.131.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Matt Wilson <msw@amazon.com>
Date: Fri, 31 Aug 2012 10:25:31 +0100
In-Reply-To: <512b4e0c49f331e252ae.1346353199@u002268147cd4502c336d.ant.amazon.com>
References: <patchbomb.1346353198@u002268147cd4502c336d.ant.amazon.com>
	<512b4e0c49f331e252ae.1346353199@u002268147cd4502c336d.ant.amazon.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 1 of 2 v2] tools: check for documentation
 generation tools at configure time
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-30 at 19:59 +0100, Matt Wilson wrote:
> It is sometimes hard to discover all the optional tools that should be
> on a system to build all available Xen documentation. By checking for
> documentation generation tools at ./configure time and displaying a
> warning, Xen packagers will more easily learn about new optional build
> dependencies, like markdown, when they are introduced.
> 
> Changes since v1:
>  * require that ./configure be run before building docs
>  * remove Docs.mk and make Tools.mk the canonical location where
>    docs tools are defined (via ./configure)
>  * fold in checking for markdown_py
> 
> Signed-off-by: Matt Wilson <msw@amazon.com>
> 
> diff -r d7e4efa17fb0 -r 512b4e0c49f3 README
> --- a/README	Tue Aug 28 15:35:08 2012 -0700
> +++ b/README	Thu Aug 30 10:51:00 2012 -0700
> @@ -28,8 +28,9 @@
>  your system. For full documentation, see the Xen User Manual. If this
>  is a pre-built release then you can find the manual at:
>   dist/install/usr/share/doc/xen/pdf/user.pdf
> -If you have a source release, then 'make -C docs' will build the
> -manual at docs/pdf/user.pdf.
> +If you have a source release and the required documentation generation
> +tools, then './configure; make -C docs' will build the manual at
> +docs/pdf/user.pdf.

This document was removed in 24563:4271634e4c86; it looks like we missed
this reference. Could you nuke it as you go, please?

> diff -r d7e4efa17fb0 -r 512b4e0c49f3 tools/m4/docs_tool.m4
> --- /dev/null	Thu Jan 01 00:00:00 1970 +0000
> +++ b/tools/m4/docs_tool.m4	Thu Aug 30 10:51:00 2012 -0700
> @@ -0,0 +1,17 @@
> +AC_DEFUN([AX_DOCS_TOOL_PROG], [
> +dnl
> +    AC_ARG_VAR([$1], [Path to $2 tool])
> +    AC_PATH_PROG([$1], [$2])
> +    AS_IF([! test -x "$ac_cv_path_$1"], [
> +        AC_MSG_WARN([$2 is not available so some documentation won't be built])
> +    ])
> +])
> +
> +AC_DEFUN([AX_DOCS_TOOL_PROGS], [
> +dnl
> +    AC_ARG_VAR([$1], [Path to $2 tool])

Does this do something sensible when $2 is a space-separated list?

Do we need both PROG and PROGS variants? PROG is just a special case of
PROGS, isn't it? Although, seeing that autoconf apparently has both
variants, I guess there's a reason for that.

> +    AC_PATH_PROGS([$1], [$2])
> +    AS_IF([! test -x "$ac_cv_path_$1"], [
> +        AC_MSG_WARN([$2 is not available so some documentation won't be built])
> +    ])
> +])
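For what it's worth, the search that AC_PATH_PROGS performs over a
space-separated candidate list can be sketched in plain shell as below.
This is only an approximation of what autoconf generates, and the
candidate names are illustrative stand-ins for the real docs tools.

```shell
# Approximate the AC_PATH_PROGS search: print the absolute path of the
# first candidate found on PATH, or "not found" when none exists.
find_first() {
    for tool in "$@"; do
        if command -v "$tool" >/dev/null 2>&1; then
            command -v "$tool"
            return 0
        fi
    done
    echo "not found"
    return 1
}

MARKDOWN=$(find_first markdown markdown_py) || true
echo "MARKDOWN=$MARKDOWN"
```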



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 09:25:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 09:25:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7NTt-0006iu-46; Fri, 31 Aug 2012 09:25:37 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T7NTs-0006ip-9H
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 09:25:36 +0000
Received: from [85.158.139.83:58927] by server-12.bemta-5.messagelabs.com id
	E9/20-18300-F0380405; Fri, 31 Aug 2012 09:25:35 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-182.messagelabs.com!1346405133!27195440!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEyMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26405 invoked from network); 31 Aug 2012 09:25:33 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-9.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 09:25:33 -0000
X-IronPort-AV: E=Sophos;i="4.80,346,1344211200"; d="scan'208";a="14283888"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 09:25:33 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1;
	Fri, 31 Aug 2012 10:25:33 +0100
Message-ID: <1346405131.27277.131.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Matt Wilson <msw@amazon.com>
Date: Fri, 31 Aug 2012 10:25:31 +0100
In-Reply-To: <512b4e0c49f331e252ae.1346353199@u002268147cd4502c336d.ant.amazon.com>
References: <patchbomb.1346353198@u002268147cd4502c336d.ant.amazon.com>
	<512b4e0c49f331e252ae.1346353199@u002268147cd4502c336d.ant.amazon.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 1 of 2 v2] tools: check for documentation
 generation tools at configure time
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-30 at 19:59 +0100, Matt Wilson wrote:
> It is sometimes hard to discover all the optional tools that should be
> on a system to build all available Xen documentation. By checking for
> documentation generation tools at ./configure time and displaying a
> warning, Xen packagers will more easily learn about new optional build
> dependencies, like markdown, when they are introduced.
> 
> Changes since v1:
>  * require that ./configure be run before building docs
>  * remove Docs.mk and make Tools.mk the canonical location where
>    docs tools are defined (via ./configure)
>  * fold in checking for markdown_py
> 
> Signed-off-by: Matt Wilson <msw@amazon.com>
> 
> diff -r d7e4efa17fb0 -r 512b4e0c49f3 README
> --- a/README	Tue Aug 28 15:35:08 2012 -0700
> +++ b/README	Thu Aug 30 10:51:00 2012 -0700
> @@ -28,8 +28,9 @@
>  your system. For full documentation, see the Xen User Manual. If this
>  is a pre-built release then you can find the manual at:
>   dist/install/usr/share/doc/xen/pdf/user.pdf
> -If you have a source release, then 'make -C docs' will build the
> -manual at docs/pdf/user.pdf.
> +If you have a source release and the required documentation generation
> +tools, then './configure; make -C docs' will build the manual at
> +docs/pdf/user.pdf.

This document was removed in 24563:4271634e4c86; it looks like we missed
this reference. Could you nuke it as you go, please?

> diff -r d7e4efa17fb0 -r 512b4e0c49f3 tools/m4/docs_tool.m4
> --- /dev/null	Thu Jan 01 00:00:00 1970 +0000
> +++ b/tools/m4/docs_tool.m4	Thu Aug 30 10:51:00 2012 -0700
> @@ -0,0 +1,17 @@
> +AC_DEFUN([AX_DOCS_TOOL_PROG], [
> +dnl
> +    AC_ARG_VAR([$1], [Path to $2 tool])
> +    AC_PATH_PROG([$1], [$2])
> +    AS_IF([! test -x "$ac_cv_path_$1"], [
> +        AC_MSG_WARN([$2 is not available so some documentation won't be built])
> +    ])
> +])
> +
> +AC_DEFUN([AX_DOCS_TOOL_PROGS], [
> +dnl
> +    AC_ARG_VAR([$1], [Path to $2 tool])

Does this do something sensible when $2 is a space-separated list?

Do we need both PROG and PROGS variants? PROG is just a special case of
PROGS, isn't it? Although, seeing that AC apparently has both variants, I
guess there's a reason for that.

> +    AC_PATH_PROGS([$1], [$2])
> +    AS_IF([! test -x "$ac_cv_path_$1"], [
> +        AC_MSG_WARN([$2 is not available so some documentation won't be built])
> +    ])
> +])
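(For what it's worth, AC_PATH_PROGS does handle a space-separated list: it tries each candidate in order and sets the variable to the absolute path of the first one found on PATH. A rough shell equivalent of AC_PATH_PROGS([MARKDOWN], [markdown markdown_py]) — a sketch of the generated configure logic, not the literal code, and the candidate names are illustrative:)

```shell
# Sketch of what AC_PATH_PROGS([MARKDOWN], [markdown markdown_py]) does:
# walk the candidate list and keep the path of the first program found.
MARKDOWN=
for prog in markdown markdown_py; do
    path=$(command -v "$prog" 2>/dev/null) || continue
    MARKDOWN=$path
    break
done
if [ -z "$MARKDOWN" ]; then
    # Mirrors the AC_MSG_WARN branch in the macro above.
    echo "configure: WARNING: markdown is not available so some documentation won't be built" >&2
fi
```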



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 09:27:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 09:27:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7NVA-0006my-J7; Fri, 31 Aug 2012 09:26:56 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T7NV9-0006mq-IW
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 09:26:55 +0000
Received: from [85.158.139.83:11603] by server-9.bemta-5.messagelabs.com id
	4C/BF-20529-E5380405; Fri, 31 Aug 2012 09:26:54 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-182.messagelabs.com!1346405212!27195751!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEyMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3015 invoked from network); 31 Aug 2012 09:26:53 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-9.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 09:26:53 -0000
X-IronPort-AV: E=Sophos;i="4.80,346,1344211200"; d="scan'208";a="14283915"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 09:26:52 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1;
	Fri, 31 Aug 2012 10:26:52 +0100
Message-ID: <1346405211.27277.133.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Matt Wilson <msw@amazon.com>
Date: Fri, 31 Aug 2012 10:26:51 +0100
In-Reply-To: <651347cccff7c5619ab0.1346353200@u002268147cd4502c336d.ant.amazon.com>
References: <patchbomb.1346353198@u002268147cd4502c336d.ant.amazon.com>
	<651347cccff7c5619ab0.1346353200@u002268147cd4502c336d.ant.amazon.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 2 of 2 v2] docs: use elinks to format
 markdown-generated html to text
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-30 at 20:00 +0100, Matt Wilson wrote:
> Markdown, while easy to read and write, isn't the most consumable
> format for users reading documentation on a terminal. This patch uses
> lynx to format markdown-produced HTML into text files.

s/lynx/elinks/ in the comment.

> diff -r 512b4e0c49f3 -r 651347cccff7 config/Tools.mk.in
> --- a/config/Tools.mk.in	Thu Aug 30 10:51:00 2012 -0700
> +++ b/config/Tools.mk.in	Thu Aug 30 11:56:01 2012 -0700
> @@ -34,6 +34,8 @@
>  DOT                 := @DOT@
>  NEATO               := @NEATO@
>  MARKDOWN            := @MARKDOWN@
> +HTMLDUMP            := @HTMLDUMP@
> +HTMLDUMPFLAGS       := @HTMLDUMPFLAGS@

Does HTMLDUMP="elinks -dump" not work?
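(That style works because the variable expands unquoted at the point of use, so the flag rides along with the program name and a separate HTMLDUMPFLAGS becomes unnecessary. A minimal sketch; the file name is invented for illustration:)

```shell
# Sketch of the suggestion: fold the flag into the variable itself.
# Unquoted expansion splits it back into program + flag at the call site.
HTMLDUMP="elinks -dump"
htmlfile=doc.html            # invented file name, for illustration
echo "would run: $HTMLDUMP $htmlfile > ${htmlfile%.html}.txt"
```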

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 09:27:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 09:27:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7NVu-0006rB-1J; Fri, 31 Aug 2012 09:27:42 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T7NVs-0006qu-60
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 09:27:40 +0000
Received: from [85.158.143.99:41443] by server-3.bemta-4.messagelabs.com id
	73/D3-08232-B8380405; Fri, 31 Aug 2012 09:27:39 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-216.messagelabs.com!1346405257!27476285!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEyMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12249 invoked from network); 31 Aug 2012 09:27:37 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-5.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 09:27:37 -0000
X-IronPort-AV: E=Sophos;i="4.80,346,1344211200"; d="scan'208";a="14283934"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 09:27:37 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1;
	Fri, 31 Aug 2012 10:27:37 +0100
Message-ID: <1346405255.27277.134.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Roger Pau Monne <roger.pau@citrix.com>
Date: Fri, 31 Aug 2012 10:27:35 +0100
In-Reply-To: <50366574.1040307@citrix.com>
References: <1344968106-6765-1-git-send-email-roger.pau@citrix.com>
	<1345020888.5926.115.camel@zakaz.uk.xensource.com>
	<50366574.1040307@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] libxl: fix usage of backend parameter and
 run_hotplug_scripts
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-23 at 18:16 +0100, Roger Pau Monne wrote:
> Ok, I will probably split this in two patches, one for libxl and one
> for the parser.

Are you still intending to do this for 4.2.0? It's fast approaching...

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 09:29:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 09:29:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7NXt-000779-9K; Fri, 31 Aug 2012 09:29:45 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jiongxi.li@intel.com>) id 1T7NXr-00076s-Tc
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 09:29:44 +0000
Received: from [85.158.143.35:18376] by server-3.bemta-4.messagelabs.com id
	22/28-08232-70480405; Fri, 31 Aug 2012 09:29:43 +0000
X-Env-Sender: jiongxi.li@intel.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1346405382!12763079!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzM4MDIz\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18836 invoked from network); 31 Aug 2012 09:29:42 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-8.tower-21.messagelabs.com with SMTP;
	31 Aug 2012 09:29:42 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga102.jf.intel.com with ESMTP; 31 Aug 2012 02:29:39 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.80,346,1344236400"; d="scan'208";a="187517874"
Received: from fmsmsx105.amr.corp.intel.com ([10.19.9.36])
	by orsmga001.jf.intel.com with ESMTP; 31 Aug 2012 02:29:32 -0700
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	FMSMSX105.amr.corp.intel.com (10.19.9.36) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Fri, 31 Aug 2012 02:29:31 -0700
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.239]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.175]) with mapi id
	14.01.0355.002; Fri, 31 Aug 2012 17:29:30 +0800
From: "Li, Jiongxi" <jiongxi.li@intel.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Thread-Topic: [ PATCH 0/2] xen: enable APIC-Register Virtualization and
	Virtual-interrupt delivery
Thread-Index: Ac2HWD8DVBjcwxDjQkawu0XwnKU3mQ==
Date: Fri, 31 Aug 2012 09:29:29 +0000
Message-ID: <D9137FCD9CFF644B965863BCFBEDABB877991D@SHSMSX101.ccr.corp.intel.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Subject: [Xen-devel] [ PATCH 0/2] xen: enable APIC-Register Virtualization
 and Virtual-interrupt delivery
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The VMCS includes controls that enable the virtualization of interrupts and of the Advanced Programmable Interrupt Controller (APIC).
When these controls are in use, the processor emulates many APIC accesses, tracks the state of the virtual APIC, and delivers virtual interrupts, all in VMX non-root operation without a VM exit.
See Chapter 29 of the latest Intel SDM for details.
This series of patches enables APIC-Register Virtualization and Virtual-interrupt delivery.

PATCH 1/2: Enable APIC-Register Virtualization.

PATCH 2/2: Enable Virtual-interrupt delivery.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 09:30:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 09:30:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7NY8-0007A5-Ma; Fri, 31 Aug 2012 09:30:00 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jiongxi.li@intel.com>) id 1T7NY7-00079c-6X
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 09:29:59 +0000
Received: from [85.158.143.99:38892] by server-1.bemta-4.messagelabs.com id
	58/AD-12504-61480405; Fri, 31 Aug 2012 09:29:58 +0000
X-Env-Sender: jiongxi.li@intel.com
X-Msg-Ref: server-14.tower-216.messagelabs.com!1346405395!18172861!1
X-Originating-IP: [143.182.124.21]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMjEgPT4gMTkyMDA4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19835 invoked from network); 31 Aug 2012 09:29:55 -0000
Received: from mga03.intel.com (HELO mga03.intel.com) (143.182.124.21)
	by server-14.tower-216.messagelabs.com with SMTP;
	31 Aug 2012 09:29:55 -0000
Received: from azsmga002.ch.intel.com ([10.2.17.35])
	by azsmga101.ch.intel.com with ESMTP; 31 Aug 2012 02:29:54 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.80,346,1344236400"; d="scan'208";a="140060619"
Received: from fmsmsx108.amr.corp.intel.com ([10.19.9.228])
	by AZSMGA002.ch.intel.com with ESMTP; 31 Aug 2012 02:29:54 -0700
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	FMSMSX108.amr.corp.intel.com (10.19.9.228) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Fri, 31 Aug 2012 02:29:54 -0700
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.239]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.175]) with mapi id
	14.01.0355.002; Fri, 31 Aug 2012 17:29:52 +0800
From: "Li, Jiongxi" <jiongxi.li@intel.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Thread-Topic: [ PATCH 1/2] xen: enable APIC-Register Virtualization
Thread-Index: Ac2HWGKAUDDr53BWQZO2/Uq+OTcM5w==
Date: Fri, 31 Aug 2012 09:29:51 +0000
Message-ID: <D9137FCD9CFF644B965863BCFBEDABB8779924@SHSMSX101.ccr.corp.intel.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Subject: [Xen-devel] [ PATCH 1/2] xen: enable APIC-Register Virtualization
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

QWRkIEFQSUMgcmVnaXN0ZXIgdmlydHVhbGl6YXRpb24gc3VwcG9ydA0K44CA44CALSBBUElDIHJl
YWQgZG9lc24ndCBjYXVzZSBWTS1FeGl0DQrjgIDjgIAtIEFQSUMgd3JpdGUgYmVjb21lcyB0cmFw
LWxpa2UNCg0KU2lnbmVkLW9mZi1ieTogWWFuZyBaaGFuZyA8eWFuZy56LnpoYW5nQGludGVsLmNv
bT4NClNpZ25lZC1vZmYtYnk6IEppb25neGkgTGkgPGppb25neGkubGlAaW50ZWwuY29tPg0KDQoN
CmRpZmYgLXIgMTEyNmIzMDc5YmVmIHhlbi9hcmNoL3g4Ni9odm0vdmxhcGljLmMNCi0tLSBhL3hl
bi9hcmNoL3g4Ni9odm0vdmxhcGljLmPCoMKgwqDCoMKgIEZyaSBBdWcgMjQgMTI6Mzg6MTggMjAx
MiArMDEwMA0KKysrIGIveGVuL2FyY2gveDg2L2h2bS92bGFwaWMuY8KgwqAgVGh1IEF1ZyAzMCAy
MjozODoyNiAyMDEyICswODAwDQpAQCAtODIzLDYgKzgyMywxNiBAQCBzdGF0aWMgaW50IHZsYXBp

From xen-devel-bounces@lists.xen.org Fri Aug 31 09:30:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 09:30:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7NY8-0007A5-Ma; Fri, 31 Aug 2012 09:30:00 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jiongxi.li@intel.com>) id 1T7NY7-00079c-6X
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 09:29:59 +0000
Received: from [85.158.143.99:38892] by server-1.bemta-4.messagelabs.com id
	58/AD-12504-61480405; Fri, 31 Aug 2012 09:29:58 +0000
X-Env-Sender: jiongxi.li@intel.com
X-Msg-Ref: server-14.tower-216.messagelabs.com!1346405395!18172861!1
X-Originating-IP: [143.182.124.21]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMjEgPT4gMTkyMDA4\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19835 invoked from network); 31 Aug 2012 09:29:55 -0000
Received: from mga03.intel.com (HELO mga03.intel.com) (143.182.124.21)
	by server-14.tower-216.messagelabs.com with SMTP;
	31 Aug 2012 09:29:55 -0000
Received: from azsmga002.ch.intel.com ([10.2.17.35])
	by azsmga101.ch.intel.com with ESMTP; 31 Aug 2012 02:29:54 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.80,346,1344236400"; d="scan'208";a="140060619"
Received: from fmsmsx108.amr.corp.intel.com ([10.19.9.228])
	by AZSMGA002.ch.intel.com with ESMTP; 31 Aug 2012 02:29:54 -0700
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	FMSMSX108.amr.corp.intel.com (10.19.9.228) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Fri, 31 Aug 2012 02:29:54 -0700
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.239]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.175]) with mapi id
	14.01.0355.002; Fri, 31 Aug 2012 17:29:52 +0800
From: "Li, Jiongxi" <jiongxi.li@intel.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Thread-Topic: [ PATCH 1/2] xen: enable APIC-Register Virtualization
Thread-Index: Ac2HWGKAUDDr53BWQZO2/Uq+OTcM5w==
Date: Fri, 31 Aug 2012 09:29:51 +0000
Message-ID: <D9137FCD9CFF644B965863BCFBEDABB8779924@SHSMSX101.ccr.corp.intel.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Subject: [Xen-devel] [ PATCH 1/2] xen: enable APIC-Register Virtualization
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add APIC register virtualization support
  - APIC read doesn't cause VM-Exit
  - APIC write becomes trap-like

Signed-off-by: Yang Zhang <yang.z.zhang@intel.com>
Signed-off-by: Jiongxi Li <jiongxi.li@intel.com>


diff -r 1126b3079bef xen/arch/x86/hvm/vlapic.c
--- a/xen/arch/x86/hvm/vlapic.c      Fri Aug 24 12:38:18 2012 +0100
+++ b/xen/arch/x86/hvm/vlapic.c   Thu Aug 30 22:38:26 2012 +0800
@@ -823,6 +823,16 @@ static int vlapic_write(struct vcpu *v, 
     return rc;
}

+int vlapic_apicv_write(struct vcpu *v, unsigned int offset)
+{
+    uint32_t val = vlapic_get_reg(vcpu_vlapic(v), offset);
+
+    ASSERT(cpu_has_vmx_apic_reg_virt);
+
+    vlapic_reg_write(v, offset, val);
+    return 0;
+}
+
int hvm_x2apic_msr_write(struct vcpu *v, unsigned int msr, uint64_t msr_content)
{
     struct vlapic *vlapic = vcpu_vlapic(v);
diff -r 1126b3079bef xen/arch/x86/hvm/vmx/vmcs.c
--- a/xen/arch/x86/hvm/vmx/vmcs.c       Fri Aug 24 12:38:18 2012 +0100
+++ b/xen/arch/x86/hvm/vmx/vmcs.c    Thu Aug 30 22:38:26 2012 +0800
@@ -89,6 +89,7 @@ static void __init vmx_display_features(
     P(cpu_has_vmx_vnmi, "Virtual NMI");
     P(cpu_has_vmx_msr_bitmap, "MSR direct-access bitmap");
     P(cpu_has_vmx_unrestricted_guest, "Unrestricted Guest");
+    P(cpu_has_vmx_apic_reg_virt, "APIC Register Virtualization");
#undef P

     if ( !printed )
@@ -186,6 +187,14 @@ static int vmx_init_vmcs_config(void)
         if ( opt_unrestricted_guest_enabled )
             opt |= SECONDARY_EXEC_UNRESTRICTED_GUEST;

+        /*
+         * "APIC Register Virtualization"
+         * can be set only when "use TPR shadow" is set
+         */
+        if ( _vmx_cpu_based_exec_control & CPU_BASED_TPR_SHADOW )
+            opt |= SECONDARY_EXEC_APIC_REGISTER_VIRT;
+
+
         _vmx_secondary_exec_control = adjust_vmx_controls(
             "Secondary Exec Control", min, opt,
             MSR_IA32_VMX_PROCBASED_CTLS2, &mismatch);
diff -r 1126b3079bef xen/arch/x86/hvm/vmx/vmx.c
--- a/xen/arch/x86/hvm/vmx/vmx.c        Fri Aug 24 12:38:18 2012 +0100
+++ b/xen/arch/x86/hvm/vmx/vmx.c      Thu Aug 30 22:38:26 2012 +0800
@@ -2273,6 +2273,14 @@ static void vmx_idtv_reinject(unsigned l
     }
}

+static int vmx_handle_apic_write(void)
+{
+    unsigned long exit_qualification = __vmread(EXIT_QUALIFICATION);
+    unsigned int offset = exit_qualification & 0xfff;
+
+    return vlapic_apicv_write(current, offset);
+}
+
void vmx_vmexit_handler(struct cpu_user_regs *regs)
{
     unsigned int exit_reason, idtv_info, intr_info = 0, vector = 0;
@@ -2728,6 +2736,11 @@ void vmx_vmexit_handler(struct cpu_user_
         break;
     }

+    case EXIT_REASON_APIC_WRITE:
+        if ( vmx_handle_apic_write() )
+            hvm_inject_hw_exception(TRAP_gp_fault, 0);
+        break;
+
     case EXIT_REASON_ACCESS_GDTR_OR_IDTR:
     case EXIT_REASON_ACCESS_LDTR_OR_TR:
     case EXIT_REASON_VMX_PREEMPTION_TIMER_EXPIRED:
diff -r 1126b3079bef xen/include/asm-x86/hvm/vlapic.h
--- a/xen/include/asm-x86/hvm/vlapic.h Fri Aug 24 12:38:18 2012 +0100
+++ b/xen/include/asm-x86/hvm/vlapic.h       Thu Aug 30 22:38:26 2012 +0800
@@ -103,6 +103,8 @@ void vlapic_EOI_set(struct vlapi

 int vlapic_ipi(struct vlapic *vlapic, uint32_t icr_low, uint32_t icr_high);

+int vlapic_apicv_write(struct vcpu *v, unsigned int offset);
+
struct vlapic *vlapic_lowest_prio(
     struct domain *d, struct vlapic *source,
     int short_hand, uint8_t dest, uint8_t dest_mode);
diff -r 1126b3079bef xen/include/asm-x86/hvm/vmx/vmcs.h
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h  Fri Aug 24 12:38:18 2012 +0100
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h        Thu Aug 30 22:38:26 2012 +0800
@@ -182,6 +182,7 @@ extern u32 vmx_vmentry_control;
#define SECONDARY_EXEC_ENABLE_VPID              0x00000020
#define SECONDARY_EXEC_WBINVD_EXITING           0x00000040
#define SECONDARY_EXEC_UNRESTRICTED_GUEST       0x00000080
+#define SECONDARY_EXEC_APIC_REGISTER_VIRT       0x00000100
#define SECONDARY_EXEC_PAUSE_LOOP_EXITING       0x00000400
#define SECONDARY_EXEC_ENABLE_INVPCID           0x00001000
extern u32 vmx_secondary_exec_control;
@@ -230,6 +231,8 @@ extern bool_t cpu_has_vmx_ins_outs_instr
      SECONDARY_EXEC_UNRESTRICTED_GUEST)
#define cpu_has_vmx_ple \
     (vmx_secondary_exec_control & SECONDARY_EXEC_PAUSE_LOOP_EXITING)
+#define cpu_has_vmx_apic_reg_virt \
+    (vmx_secondary_exec_control & SECONDARY_EXEC_APIC_REGISTER_VIRT)

 /* GUEST_INTERRUPTIBILITY_INFO flags. */
#define VMX_INTR_SHADOW_STI             0x00000001
diff -r 1126b3079bef xen/include/asm-x86/hvm/vmx/vmx.h
--- a/xen/include/asm-x86/hvm/vmx/vmx.h    Fri Aug 24 12:38:18 2012 +0100
+++ b/xen/include/asm-x86/hvm/vmx/vmx.h Thu Aug 30 22:38:26 2012 +0800
@@ -129,6 +129,7 @@ void vmx_update_cpu_exec_control(struct 
 #define EXIT_REASON_INVVPID             53
#define EXIT_REASON_WBINVD              54
#define EXIT_REASON_XSETBV              55
+#define EXIT_REASON_APIC_WRITE          56
#define EXIT_REASON_INVPCID             58

 /*

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
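[Editor's illustration] With APIC-register virtualization, a guest APIC write is "trap-like": the write first completes into the virtual-APIC page, and only then does the APIC-write VM exit fire, carrying the page offset in the low 12 bits of the exit qualification. The handler reads the value back from the page and replays it through the emulation path, as vlapic_apicv_write() does in the patch above. A rough standalone model (the names here are illustrative, not Xen's):

```c
#include <assert.h>
#include <stdint.h>

/* Toy model of the 4KiB virtual-APIC page: with APIC-register
 * virtualization, a guest write lands in this page *before* the
 * trap-like APIC-write VM exit is delivered. */
static uint32_t vapic_page[4096 / 4];

/* Illustrative stand-in for the per-register emulation hook that the
 * exit handler replays the write into (vlapic_reg_write in the patch). */
static uint32_t last_emulated_offset;

static void emulate_reg_write(unsigned int offset, uint32_t val)
{
    last_emulated_offset = offset;
    vapic_page[offset / 4] = val; /* keep the model self-consistent */
}

/* Trap-like exit: the low 12 bits of the exit qualification give the
 * page offset; the value is read back from the virtual-APIC page and
 * replayed, mirroring vlapic_apicv_write(). */
static int handle_apic_write_exit(unsigned long exit_qualification)
{
    unsigned int offset = exit_qualification & 0xfff;
    uint32_t val = vapic_page[offset / 4];

    emulate_reg_write(offset, val);
    return 0; /* 0 = handled; non-zero would inject #GP, as in the patch */
}
```

Contrast with the fault-like APIC-access exit used without this feature, where the write has not yet happened and must be decoded and emulated from scratch.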

From xen-devel-bounces@lists.xen.org Fri Aug 31 09:30:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 09:30:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7NYf-0007Hg-4k; Fri, 31 Aug 2012 09:30:33 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jiongxi.li@intel.com>) id 1T7NYe-0007HM-18
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 09:30:32 +0000
Received: from [85.158.143.99:63835] by server-1.bemta-4.messagelabs.com id
	3C/9E-12504-73480405; Fri, 31 Aug 2012 09:30:31 +0000
X-Env-Sender: jiongxi.li@intel.com
X-Msg-Ref: server-2.tower-216.messagelabs.com!1346405427!22475130!1
X-Originating-IP: [192.55.52.88]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjg4ID0+IDMxNjYzNQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3723 invoked from network); 31 Aug 2012 09:30:28 -0000
Received: from mga01.intel.com (HELO mga01.intel.com) (192.55.52.88)
	by server-2.tower-216.messagelabs.com with SMTP;
	31 Aug 2012 09:30:28 -0000
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
	by fmsmga101.fm.intel.com with ESMTP; 31 Aug 2012 02:30:26 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.80,346,1344236400"; d="scan'208";a="216201080"
Received: from fmsmsx103.amr.corp.intel.com ([10.19.9.34])
	by fmsmga002.fm.intel.com with ESMTP; 31 Aug 2012 02:30:16 -0700
Received: from fmsmsx153.amr.corp.intel.com (10.19.17.7) by
	FMSMSX103.amr.corp.intel.com (10.19.9.34) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Fri, 31 Aug 2012 02:30:15 -0700
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	FMSMSX153.amr.corp.intel.com (10.19.17.7) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Fri, 31 Aug 2012 02:30:15 -0700
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.239]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.175]) with mapi id
	14.01.0355.002; Fri, 31 Aug 2012 17:30:14 +0800
From: "Li, Jiongxi" <jiongxi.li@intel.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Thread-Topic: [ PATCH 2/2] xen: enable  Virtual-interrupt delivery
Thread-Index: Ac2HWp7G2HqBx++QQP2kPrY5SPXyLg==
Date: Fri, 31 Aug 2012 09:30:13 +0000
Message-ID: <D9137FCD9CFF644B965863BCFBEDABB8779942@SHSMSX101.ccr.corp.intel.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Subject: [Xen-devel] [ PATCH 2/2] xen: enable  Virtual-interrupt delivery
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Virtual interrupt delivery lets the hardware inject vAPIC interrupts on its own, so Xen no longer has to inject them manually. This requires some special awareness in the existing interrupt injection path:
- For a pending interrupt from the vLAPIC, instead of injecting it directly, we may need to update architecture-specific indicators before resuming the guest. In particular, before returning to the guest, RVI should be updated if there are any pending bits in the IRR.
- The EOI exit bitmap controls whether an EOI write causes a VM exit; if the corresponding bit is set, a trap-like EOI-induced VM exit is triggered. The approach here is to derive the EOI exit bitmap from the TMR. A level-triggered interrupt additionally requires a hook in the vLAPIC EOI write path, so that the vIOAPIC EOI is triggered and emulated.
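[Editor's illustration] Both bookkeeping steps described above reduce to bit manipulation on fixed-width words. A minimal standalone sketch (names are illustrative, not Xen's API): 256 vectors map onto four 64-bit EOI-exit words via `vector >> 6` / `vector & 63`, and RVI occupies the low 8 bits of the guest interrupt status field.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative model of the per-vCPU EOI-exit bitmap: 256 vectors
 * spread across four 64-bit words, mirroring EOI_EXIT_BITMAP0..3. */
struct eoi_exitmap {
    uint64_t word[4];
};

/* Level-triggered vectors (TMR bit set) should cause an EOI-induced
 * VM exit; edge-triggered ones should not. */
static void eoi_exitmap_update(struct eoi_exitmap *m, uint8_t vector,
                               int level_triggered)
{
    unsigned int index = vector >> 6;   /* which 64-bit word    */
    unsigned int offset = vector & 63;  /* bit within that word */

    if (level_triggered)
        m->word[index] |= (uint64_t)1 << offset;
    else
        m->word[index] &= ~((uint64_t)1 << offset);
}

/* RVI lives in the low 8 bits of the guest interrupt status field;
 * before VM entry it is replaced with the highest pending vector. */
static uint16_t update_rvi(uint16_t guest_intr_status, uint8_t vector)
{
    return (uint16_t)((guest_intr_status & ~0xffu) | vector);
}
```

The real patch additionally tracks which of the four words changed (eoi_exitmap_changed) so that only dirty words are rewritten into the VMCS on the next entry.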

Signed-off-by: Yang Zhang <yang.z.zhang@intel.com>
Signed-off-by: Jiongxi Li <jiongxi.li@intel.com>

diff -r cb821c24ca74 xen/arch/x86/hvm/irq.c
--- a/xen/arch/x86/hvm/irq.c  Fri Aug 31 09:30:38 2012 +0800
+++ b/xen/arch/x86/hvm/irq.c         Fri Aug 31 09:49:39 2012 +0800
@@ -452,7 +452,11 @@ struct hvm_intack hvm_vcpu_ack_pending_i

 int hvm_local_events_need_delivery(struct vcpu *v)
{
-    struct hvm_intack intack = hvm_vcpu_has_pending_irq(v);
+    struct hvm_intack intack;
+
+    pt_update_irq(v);
+
+    intack = hvm_vcpu_has_pending_irq(v);

     if ( likely(intack.source == hvm_intsrc_none) )
         return 0;
diff -r cb821c24ca74 xen/arch/x86/hvm/vlapic.c
--- a/xen/arch/x86/hvm/vlapic.c      Fri Aug 31 09:30:38 2012 +0800
+++ b/xen/arch/x86/hvm/vlapic.c   Fri Aug 31 09:49:39 2012 +0800
@@ -143,7 +143,16 @@ static int vlapic_find_highest_irr(struc
int vlapic_set_irq(struct vlapic *vlapic, uint8_t vec, uint8_t trig)
{
     if ( trig )
+    {
         vlapic_set_vector(vec, &vlapic->regs->data[APIC_TMR]);
+        if ( cpu_has_vmx_virtual_intr_delivery )
+            vmx_set_eoi_exit_bitmap(vlapic_vcpu(vlapic), vec);
+    }
+    else
+    {
+        if ( cpu_has_vmx_virtual_intr_delivery )
+            vmx_clear_eoi_exit_bitmap(vlapic_vcpu(vlapic), vec);
+    }

     /* We may need to wake up target vcpu, besides set pending bit here */
     return !vlapic_test_and_set_irr(vec, vlapic);
@@ -410,6 +419,22 @@ void vlapic_EOI_set(struct vlapic *vlapi
     hvm_dpci_msi_eoi(current->domain, vector);
}

+/*
+ * When "Virtual Interrupt Delivery" is enabled, this function is used
+ * to handle EOI-induced VM exit
+ */
+void vlapic_handle_EOI_induced_exit(struct vlapic *vlapic, int vector)
+{
+    ASSERT(cpu_has_vmx_virtual_intr_delivery);
+
+    if ( vlapic_test_and_clear_vector(vector, &vlapic->regs->data[APIC_TMR]) )
+    {
+        vioapic_update_EOI(vlapic_domain(vlapic), vector);
+    }
+
+    hvm_dpci_msi_eoi(current->domain, vector);
+}
+
int vlapic_ipi(
     struct vlapic *vlapic, uint32_t icr_low, uint32_t icr_high)
{
@@ -1014,6 +1039,9 @@ int vlapic_has_pending_irq(struct vcpu *
     if ( irr == -1 )
         return -1;

+    if ( cpu_has_vmx_virtual_intr_delivery )
+        return irr;
+
     isr = vlapic_find_highest_isr(vlapic);
     isr = (isr != -1) ? isr : 0;
     if ( (isr & 0xf0) >= (irr & 0xf0) )
@@ -1026,6 +1054,9 @@ int vlapic_ack_pending_irq(struct vcpu *
{
     struct vlapic *vlapic = vcpu_vlapic(v);

+    if ( cpu_has_vmx_virtual_intr_delivery )
+        return 1;
+
    vlapic_set_vector(vector, &vlapic->regs->data[APIC_ISR]);
     vlapic_clear_irr(vector, vlapic);

diff -r cb821c24ca74 xen/arch/x86/hvm/vmx/intr.c
--- a/xen/arch/x86/hvm/vmx/intr.c Fri Aug 31 09:30:38 2012 +0800
+++ b/xen/arch/x86/hvm/vmx/intr.c       Fri Aug 31 09:49:39 2012 +0800
@@ -227,19 +227,43 @@ void vmx_intr_assist(void)
             goto out;

         intblk = hvm_interrupt_blocked(v, intack);
-        if ( intblk == hvm_intblk_tpr )
+        if ( cpu_has_vmx_virtual_intr_delivery )
         {
-            ASSERT(vlapic_enabled(vcpu_vlapic(v)));
-            ASSERT(intack.source == hvm_intsrc_lapic);
-            tpr_threshold = intack.vector >> 4;
-            goto out;
+            /* Set "Interrupt-window exiting" for ExtINT */
+            if ( (intblk != hvm_intblk_none) &&
+                 ( (intack.source == hvm_intsrc_pic) ||
+                 ( intack.source == hvm_intsrc_vector) ) )
+            {
+                enable_intr_window(v, intack);
+                goto out;
+            }
+
+            if ( __vmread(VM_ENTRY_INTR_INFO) & INTR_INFO_VALID_MASK )
+            {
+                if ( (intack.source == hvm_intsrc_pic) ||
+                     (intack.source == hvm_intsrc_nmi) ||
+                     (intack.source == hvm_intsrc_mce) )
+                    enable_intr_window(v, intack);
+
+                goto out;
+            }
         }
+        else
+        {
+            if ( intblk == hvm_intblk_tpr )
+            {
+                ASSERT(vlapic_enabled(vcpu_vlapic(v)));
+                ASSERT(intack.source == hvm_intsrc_lapic);
+                tpr_threshold = intack.vector >> 4;
+                goto out;
+            }

-        if ( (intblk != hvm_intblk_none) ||
-             (__vmread(VM_ENTRY_INTR_INFO) & INTR_INFO_VALID_MASK) )
-        {
-            enable_intr_window(v, intack);
-            goto out;
+            if ( (intblk != hvm_intblk_none) ||
+                 (__vmread(VM_ENTRY_INTR_INFO) & INTR_INFO_VALID_MASK) )
+            {
+                enable_intr_window(v, intack);
+                goto out;
+            }
         }

         intack = hvm_vcpu_ack_pending_irq(v, intack);
@@ -253,6 +277,29 @@ void vmx_intr_assist(void)
     {
         hvm_inject_hw_exception(TRAP_machine_check, HVM_DELIVER_NO_ERROR_CODE);
     }
+    else if ( cpu_has_vmx_virtual_intr_delivery &&
+              intack.source != hvm_intsrc_pic &&
+              intack.source != hvm_intsrc_vector )
+    {
+        /* We need to update the RVI field */
+        unsigned long status = __vmread(GUEST_INTR_STATUS);
+        status &= ~(unsigned long)0x0FF;
+        status |= (unsigned long)0x0FF & 
+                    intack.vector;
+        __vmwrite(GUEST_INTR_STATUS, status);
+        if (v->arch.hvm_vmx.eoi_exitmap_changed) {
+#define UPDATE_EOI_EXITMAP(v, e) {                             \
+        if (test_and_clear_bit(e, &(v).eoi_exitmap_changed)) {      \
+                __vmwrite(EOI_EXIT_BITMAP##e, (v).eoi_exit_bitmap[e]);}}
+
+                UPDATE_EOI_EXITMAP(v->arch.hvm_vmx, 0);
+                UPDATE_EOI_EXITMAP(v->arch.hvm_vmx, 1);
+                UPDATE_EOI_EXITMAP(v->arch.hvm_vmx, 2);
+                UPDATE_EOI_EXITMAP(v->arch.hvm_vmx, 3);
+        }
+
+        pt_intr_post(v, intack);
+    }
     else
     {
         HVMTRACE_2D(INJ_VIRQ, intack.vector, /*fake=*/ 0);
@@ -262,11 +309,16 @@ void vmx_intr_assist(void)

     /* Is there another IRQ to queue up behind this one? */
     intack = hvm_vcpu_has_pending_irq(v);
-    if ( unlikely(intack.source != hvm_intsrc_none) )
-        enable_intr_window(v, intack);
+    if ( !cpu_has_vmx_virtual_intr_delivery ||
+         intack.source == hvm_intsrc_pic ||
+         intack.source == hvm_intsrc_vector )
+    {
+        if ( unlikely(intack.source != hvm_intsrc_none) )
+            enable_intr_window(v, intack);
+    }

  out:
-    if ( cpu_has_vmx_tpr_shadow )
+    if ( !cpu_has_vmx_virtual_intr_delivery && cpu_has_vmx_tpr_shadow )
         __vmwrite(TPR_THRESHOLD, tpr_threshold);
}

diff -r cb821c24ca74 xen/arch/x86/hvm/vmx/vmcs.c
--- a/xen/arch/x86/hvm/vmx/vmcs.c       Fri Aug 31 09:30:38 2012 +0800
+++ b/xen/arch/x86/hvm/vmx/vmcs.c    Fri Aug 31 09:49:39 2012 +0800
@@ -90,6 +90,7 @@ static void __init vmx_display_features(
     P(cpu_has_vmx_msr_bitmap, "MSR direct-access bitmap");
     P(cpu_has_vmx_unrestricted_guest, "Unrestricted Guest");
     P(cpu_has_vmx_apic_reg_virt, "APIC Register Virtualization");
+    P(cpu_has_vmx_virtual_intr_delivery, "Virtual Interrupt Delivery");
#undef P

     if ( !printed )
@@ -188,11 +189,12 @@ static int vmx_init_vmcs_config(void)
             opt |= SECONDARY_EXEC_UNRESTRICTED_GUEST;

         /*
-         * "APIC Register Virtualization"
+         * "APIC Register Virtualization" and "Virtual Interrupt Delivery"
          * can be set only when "use TPR shadow" is set
          */
         if ( _vmx_cpu_based_exec_control & CPU_BASED_TPR_SHADOW )
-            opt |= SECONDARY_EXEC_APIC_REGISTER_VIRT;
+            opt |= SECONDARY_EXEC_APIC_REGISTER_VIRT |
+                   SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY;

 
         _vmx_secondary_exec_control = adjust_vmx_controls(
@@ -787,6 +789,22 @@ static int construct_vmcs(struct vcpu *v
     __vmwrite(IO_BITMAP_A, virt_to_maddr((char *)hvm_io_bitmap + 0));
     __vmwrite(IO_BITMAP_B, virt_to_maddr((char *)hvm_io_bitmap + PAGE_SIZE));

+    if ( cpu_has_vmx_virtual_intr_delivery )
+    {
+        /* EOI-exit bitmap */
+        v->arch.hvm_vmx.eoi_exit_bitmap[0] = (uint64_t)0;
+        __vmwrite(EOI_EXIT_BITMAP0, v->arch.hvm_vmx.eoi_exit_bitmap[0]);
+        v->arch.hvm_vmx.eoi_exit_bitmap[1] = (uint64_t)0;
+        __vmwrite(EOI_EXIT_BITMAP1, v->arch.hvm_vmx.eoi_exit_bitmap[1]);
+        v->arch.hvm_vmx.eoi_exit_bitmap[2] = (uint64_t)0;
+        __vmwrite(EOI_EXIT_BITMAP2, v->arch.hvm_vmx.eoi_exit_bitmap[2]);
+        v->arch.hvm_vmx.eoi_exit_bitmap[3] = (uint64_t)0;
+        __vmwrite(EOI_EXIT_BITMAP3, v->arch.hvm_vmx.eoi_exit_bitmap[3]);
+
+        /* Initialise Guest Interrupt Status (RVI and SVI) to 0 */
+        __vmwrite(GUEST_INTR_STATUS, 0);
+    }
+
     /* Host data selectors. */
     __vmwrite(HOST_SS_SELECTOR, __HYPERVISOR_DS);
     __vmwrite(HOST_DS_SELECTOR, __HYPERVISOR_DS);
@@ -1028,6 +1046,30 @@ int vmx_add_host_load_msr(u32 msr)
     return 0;
}

+void vmx_set_eoi_exit_bitmap(struct vcpu *v, u8 vector)
+{
+    int index, offset, changed;
+
+    index = vector >> 6; 
+    offset = vector & 63;
+    changed = !test_and_set_bit(offset,
+                  (uint64_t *)&v->arch.hvm_vmx.eoi_exit_bitmap[index]);
+    if (changed)
+        set_bit(index, &v->arch.hvm_vmx.eoi_exitmap_changed);
+}
+
+void vmx_clear_eoi_exit_bitmap(struct vcpu *v, u8 vector)
+{
+    int index, offset, changed;
+
+    index = vector >> 6; 
+    offset = vector & 63;
+    changed = test_and_clear_bit(offset,
+                  (uint64_t *)&v->arch.hvm_vmx.eoi_exit_bitmap[index]);
+    if (changed)
+        set_bit(index, &v->arch.hvm_vmx.eoi_exitmap_changed);
+}
+
int vmx_create_vmcs(struct vcpu *v)
{
     struct arch_vmx_struct *arch_vmx = &v->arch.hvm_vmx;
diff -r cb821c24ca74 xen/arch/x86/hvm/vmx/vmx.c
--- a/xen/arch/x86/hvm/vmx/vmx.c         Fri Aug 31 09:30:38 2012 +0800
+++ b/xen/arch/x86/hvm/vmx/vmx.c      Fri Aug 31 09:49:39 2012 +0800
@@ -2674,6 +2674,16 @@ void vmx_vmexit_handler(struct cpu_user_
             hvm_inject_hw_exception(TRAP_gp_fault, 0);
         break;

+    case EXIT_REASON_EOI_INDUCED:
+    {
+        int vector;
+        exit_qualification = __vmread(EXIT_QUALIFICATION);
+        vector = exit_qualification & 0xff;
+
+        vlapic_handle_EOI_induced_exit(vcpu_vlapic(current), vector);
+        break;
+    }
+
     case EXIT_REASON_IO_INSTRUCTION:
         exit_qualification = __vmread(EXIT_QUALIFICATION);
         if ( exit_qualification & 0x10 )
diff -r cb821c24ca74 xen/include/asm-x86/hvm/vlapic.h
--- a/xen/include/asm-x86/hvm/vlapic.h Fri Aug 31 09:30:38 2012 +0800
+++ b/xen/include/asm-x86/hvm/vlapic.h       Fri Aug 31 09:49:39 2012 +0800
@@ -100,6 +100,7 @@ int vlapic_accept_pic_intr(struct vcpu *
void vlapic_adjust_i8259_target(struct domain *d);

 void vlapic_EOI_set(struct vlapic *vlapic);
+void vlapic_handle_EOI_induced_exit(struct vlapic *vlapic, int vector);

 int vlapic_ipi(struct vlapic *vlapic, uint32_t icr_low, uint32_t icr_high);

diff -r cb821c24ca74 xen/include/asm-x86/hvm/vmx/vmcs.h
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h  Fri Aug 31 09:30:38 2012 +0800
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h        Fri Aug 31 09:49:39 2012 +0800
@@ -110,6 +110,9 @@ struct arch_vmx_struct {
     unsigned int         host_msr_count;
     struct vmx_msr_entry *host_msr_area;

+    uint32_t             eoi_exitmap_changed;
+    uint64_t             eoi_exit_bitmap[4];
+
     unsigned long        host_cr0;

     /* Is the guest in real mode? */
@@ -183,6 +186,7 @@ extern u32 vmx_vmentry_control;
#define SECONDARY_EXEC_WBINVD_EXITING           0x00000040
#define SECONDARY_EXEC_UNRESTRICTED_GUEST       0x00000080
#define SECONDARY_EXEC_APIC_REGISTER_VIRT       0x00000100
+#define SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY    0x00000200
#define SECONDARY_EXEC_PAUSE_LOOP_EXITING       0x00000400
#define SECONDARY_EXEC_ENABLE_INVPCID           0x00001000
extern u32 vmx_secondary_exec_control;
@@ -233,6 +237,8 @@ extern bool_t cpu_has_vmx_ins_outs_instr
     (vmx_secondary_exec_control & SECONDARY_EXEC_PAUSE_LOOP_EXITING)
#define cpu_has_vmx_apic_reg_virt \
     (vmx_secondary_exec_control & SECONDARY_EXEC_APIC_REGISTER_VIRT)
+#define cpu_has_vmx_virtual_intr_delivery \
+    (vmx_secondary_exec_control & SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY)

 /* GUEST_INTERRUPTIBILITY_INFO flags. */
#define VMX_INTR_SHADOW_STI             0x00000001
@@ -251,6 +257,7 @@ enum vmcs_field {
     GUEST_GS_SELECTOR               = 0x0000080a,
     GUEST_LDTR_SELECTOR             = 0x0000080c,
     GUEST_TR_SELECTOR               = 0x0000080e,
+    GUEST_INTR_STATUS               = 0x00000810,
     HOST_ES_SELECTOR                = 0x00000c00,
     HOST_CS_SELECTOR                = 0x00000c02,
     HOST_SS_SELECTOR                = 0x00000c04,
@@ -278,6 +285,14 @@ enum vmcs_field {
     APIC_ACCESS_ADDR_HIGH           = 0x00002015,
     EPT_POINTER                     = 0x0000201a,
     EPT_POINTER_HIGH                = 0x0000201b,
+    EOI_EXIT_BITMAP0                = 0x0000201c,
+    EOI_EXIT_BITMAP0_HIGH           = 0x0000201d,
+    EOI_EXIT_BITMAP1                = 0x0000201e,
+    EOI_EXIT_BITMAP1_HIGH           = 0x0000201f,
+    EOI_EXIT_BITMAP2                = 0x00002020,
+    EOI_EXIT_BITMAP2_HIGH           = 0x00002021,
+    EOI_EXIT_BITMAP3                = 0x00002022,
+    EOI_EXIT_BITMAP3_HIGH           = 0x00002023,
    GUEST_PHYSICAL_ADDRESS          = 0x00002400,
     GUEST_PHYSICAL_ADDRESS_HIGH     = 0x00002401,
     VMCS_LINK_POINTER               = 0x00002800,
@@ -398,6 +413,8 @@ int vmx_write_guest_msr(u32 msr, u64 val
int vmx_add_guest_msr(u32 msr);
int vmx_add_host_load_msr(u32 msr);
void vmx_vmcs_switch(struct vmcs_struct *from, struct vmcs_struct *to);
+void vmx_set_eoi_exit_bitmap(struct vcpu *v, u8 vector);
+void vmx_clear_eoi_exit_bitmap(struct vcpu *v, u8 vector);

 #endif /* ASM_X86_HVM_VMX_VMCS_H__ */

diff -r cb821c24ca74 xen/include/asm-x86/hvm/vmx/vmx.h
--- a/xen/include/asm-x86/hvm/vmx/vmx.h    Fri Aug 31 09:30:38 2012 +0800
+++ b/xen/include/asm-x86/hvm/vmx/vmx.h Fri Aug 31 09:49:39 2012 +0800
@@ -119,6 +119,7 @@ void vmx_update_cpu_exec_control(struct 
 #define EXIT_REASON_MCE_DURING_VMENTRY  41
#define EXIT_REASON_TPR_BELOW_THRESHOLD 43
#define EXIT_REASON_APIC_ACCESS         44
+#define EXIT_REASON_EOI_INDUCED         45
#define EXIT_REASON_ACCESS_GDTR_OR_IDTR 46
#define EXIT_REASON_ACCESS_LDTR_OR_TR   47
#define EXIT_REASON_EPT_VIOLATION       48

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 09:30:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 09:30:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7NYf-0007Hg-4k; Fri, 31 Aug 2012 09:30:33 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jiongxi.li@intel.com>) id 1T7NYe-0007HM-18
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 09:30:32 +0000
Received: from [85.158.143.99:63835] by server-1.bemta-4.messagelabs.com id
	3C/9E-12504-73480405; Fri, 31 Aug 2012 09:30:31 +0000
X-Env-Sender: jiongxi.li@intel.com
X-Msg-Ref: server-2.tower-216.messagelabs.com!1346405427!22475130!1
X-Originating-IP: [192.55.52.88]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjg4ID0+IDMxNjYzNQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3723 invoked from network); 31 Aug 2012 09:30:28 -0000
Received: from mga01.intel.com (HELO mga01.intel.com) (192.55.52.88)
	by server-2.tower-216.messagelabs.com with SMTP;
	31 Aug 2012 09:30:28 -0000
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
	by fmsmga101.fm.intel.com with ESMTP; 31 Aug 2012 02:30:26 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.80,346,1344236400"; d="scan'208";a="216201080"
Received: from fmsmsx103.amr.corp.intel.com ([10.19.9.34])
	by fmsmga002.fm.intel.com with ESMTP; 31 Aug 2012 02:30:16 -0700
Received: from fmsmsx153.amr.corp.intel.com (10.19.17.7) by
	FMSMSX103.amr.corp.intel.com (10.19.9.34) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Fri, 31 Aug 2012 02:30:15 -0700
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	FMSMSX153.amr.corp.intel.com (10.19.17.7) with Microsoft SMTP Server
	(TLS) id 14.1.355.2; Fri, 31 Aug 2012 02:30:15 -0700
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.239]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.175]) with mapi id
	14.01.0355.002; Fri, 31 Aug 2012 17:30:14 +0800
From: "Li, Jiongxi" <jiongxi.li@intel.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Thread-Topic: [ PATCH 2/2] xen: enable  Virtual-interrupt delivery
Thread-Index: Ac2HWp7G2HqBx++QQP2kPrY5SPXyLg==
Date: Fri, 31 Aug 2012 09:30:13 +0000
Message-ID: <D9137FCD9CFF644B965863BCFBEDABB8779942@SHSMSX101.ccr.corp.intel.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Subject: [Xen-devel] [ PATCH 2/2] xen: enable  Virtual-interrupt delivery
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

With virtual interrupt delivery, the hardware injects vAPIC interrupts on its own, so Xen no longer has to inject them manually. This needs some special awareness in the existing interrupt injection path:
For a pending interrupt from the vLAPIC, instead of injecting it directly, we may need to update architecture-specific indicators before resuming the guest.
Before returning to the guest, RVI should be updated if there are any pending IRRs.
The EOI exit bitmap controls whether an EOI write causes a VM exit. If the bit for a vector is set, a trap-like EOI-induced VM exit is triggered. The approach here is to manipulate the EOI exit bitmap based on the value of the TMR. A level-triggered IRQ requires a hook in the vLAPIC EOI write path, so that the vIOAPIC EOI is triggered and emulated.

Signed-off-by: Yang Zhang <yang.z.zhang@intel.com>
Signed-off-by: Jiongxi Li <jiongxi.li@intel.com>
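
The bookkeeping used by the vmx_set/clear_eoi_exit_bitmap helpers in the patch below (word = vector >> 6, bit = vector & 63, plus a per-word dirty mask so that only changed words are rewritten into the VMCS on the next vmentry) can be sketched in isolation. The struct and function names here are illustrative stand-ins, not the Xen ones:

```c
/* Standalone sketch (not the Xen code itself) of how the 256-entry
 * vector space maps onto four 64-bit EOI-exit bitmap words: bit
 * (vector & 63) of word (vector >> 6), with a small "changed" mask
 * recording which words must be flushed to the VMCS later. */
#include <stdint.h>

struct eoi_exitmap {
    uint64_t bitmap[4];  /* one bit per interrupt vector 0..255 */
    uint32_t changed;    /* one dirty bit per 64-bit bitmap word */
};

static void eoi_exitmap_set(struct eoi_exitmap *m, uint8_t vector)
{
    unsigned int index = vector >> 6;
    uint64_t mask = (uint64_t)1 << (vector & 63);

    if ( !(m->bitmap[index] & mask) )
    {
        m->bitmap[index] |= mask;
        m->changed |= 1u << index;  /* word needs a VMCS rewrite */
    }
}

static void eoi_exitmap_clear(struct eoi_exitmap *m, uint8_t vector)
{
    unsigned int index = vector >> 6;
    uint64_t mask = (uint64_t)1 << (vector & 63);

    if ( m->bitmap[index] & mask )
    {
        m->bitmap[index] &= ~mask;
        m->changed |= 1u << index;
    }
}
```

Note that the dirty bit is only raised when the bitmap bit actually flips, mirroring the test_and_set_bit/test_and_clear_bit pattern in the patch, so redundant set/clear calls cost no VMCS writes.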

diff -r cb821c24ca74 xen/arch/x86/hvm/irq.c
--- a/xen/arch/x86/hvm/irq.c  Fri Aug 31 09:30:38 2012 +0800
+++ b/xen/arch/x86/hvm/irq.c         Fri Aug 31 09:49:39 2012 +0800
@@ -452,7 +452,11 @@ struct hvm_intack hvm_vcpu_ack_pending_i

 int hvm_local_events_need_delivery(struct vcpu *v)
{
-    struct hvm_intack intack = hvm_vcpu_has_pending_irq(v);
+    struct hvm_intack intack;
+
+    pt_update_irq(v);
+
+    intack = hvm_vcpu_has_pending_irq(v);

     if ( likely(intack.source == hvm_intsrc_none) )
         return 0;
diff -r cb821c24ca74 xen/arch/x86/hvm/vlapic.c
--- a/xen/arch/x86/hvm/vlapic.c      Fri Aug 31 09:30:38 2012 +0800
+++ b/xen/arch/x86/hvm/vlapic.c   Fri Aug 31 09:49:39 2012 +0800
@@ -143,7 +143,16 @@ static int vlapic_find_highest_irr(struc
int vlapic_set_irq(struct vlapic *vlapic, uint8_t vec, uint8_t trig)
{
     if ( trig )
+    {
         vlapic_set_vector(vec, &vlapic->regs->data[APIC_TMR]);
+        if ( cpu_has_vmx_virtual_intr_delivery )
+            vmx_set_eoi_exit_bitmap(vlapic_vcpu(vlapic), vec);
+    }
+    else
+    {
+        if ( cpu_has_vmx_virtual_intr_delivery )
+            vmx_clear_eoi_exit_bitmap(vlapic_vcpu(vlapic), vec);
+    }

     /* We may need to wake up target vcpu, besides set pending bit here */
     return !vlapic_test_and_set_irr(vec, vlapic);
@@ -410,6 +419,22 @@ void vlapic_EOI_set(struct vlapic *vlapi
     hvm_dpci_msi_eoi(current->domain, vector);
}

+/*
+ * When "Virtual Interrupt Delivery" is enabled, this function is used
+ * to handle EOI-induced VM exit
+ */
+void vlapic_handle_EOI_induced_exit(struct vlapic *vlapic, int vector)
+{
+    ASSERT(cpu_has_vmx_virtual_intr_delivery);
+
+    if ( vlapic_test_and_clear_vector(vector, &vlapic->regs->data[APIC_TMR]) )
+    {
+        vioapic_update_EOI(vlapic_domain(vlapic), vector);
+    }
+
+    hvm_dpci_msi_eoi(current->domain, vector);
+}
+
int vlapic_ipi(
     struct vlapic *vlapic, uint32_t icr_low, uint32_t icr_high)
{
@@ -1014,6 +1039,9 @@ int vlapic_has_pending_irq(struct vcpu *
     if ( irr == -1 )
         return -1;

+    if ( cpu_has_vmx_virtual_intr_delivery )
+        return irr;
+
     isr = vlapic_find_highest_isr(vlapic);
     isr = (isr != -1) ? isr : 0;
     if ( (isr & 0xf0) >= (irr & 0xf0) )
@@ -1026,6 +1054,9 @@ int vlapic_ack_pending_irq(struct vcpu *
{
     struct vlapic *vlapic = vcpu_vlapic(v);

+    if ( cpu_has_vmx_virtual_intr_delivery )
+        return 1;
+
    vlapic_set_vector(vector, &vlapic->regs->data[APIC_ISR]);
     vlapic_clear_irr(vector, vlapic);

diff -r cb821c24ca74 xen/arch/x86/hvm/vmx/intr.c
--- a/xen/arch/x86/hvm/vmx/intr.c Fri Aug 31 09:30:38 2012 +0800
+++ b/xen/arch/x86/hvm/vmx/intr.c       Fri Aug 31 09:49:39 2012 +0800
@@ -227,19 +227,43 @@ void vmx_intr_assist(void)
             goto out;

         intblk = hvm_interrupt_blocked(v, intack);
-        if ( intblk == hvm_intblk_tpr )
+        if ( cpu_has_vmx_virtual_intr_delivery )
         {
-            ASSERT(vlapic_enabled(vcpu_vlapic(v)));
-            ASSERT(intack.source == hvm_intsrc_lapic);
-            tpr_threshold = intack.vector >> 4;
-            goto out;
+            /* Set "Interrupt-window exiting" for ExtINT */
+            if ( (intblk != hvm_intblk_none) &&
+                 ( (intack.source == hvm_intsrc_pic) ||
+                 ( intack.source == hvm_intsrc_vector) ) )
+            {
+                enable_intr_window(v, intack);
+                goto out;
+            }
+
+            if ( __vmread(VM_ENTRY_INTR_INFO) & INTR_INFO_VALID_MASK )
+            {
+                if ( (intack.source == hvm_intsrc_pic) ||
+                     (intack.source == hvm_intsrc_nmi) ||
+                     (intack.source == hvm_intsrc_mce) )
+                    enable_intr_window(v, intack);
+
+                goto out;
+            }
         }
+        else
+        {
+            if ( intblk == hvm_intblk_tpr )
+            {
+                ASSERT(vlapic_enabled(vcpu_vlapic(v)));
+                ASSERT(intack.source == hvm_intsrc_lapic);
+                tpr_threshold = intack.vector >> 4;
+                goto out;
+            }

-        if ( (intblk != hvm_intblk_none) ||
-             (__vmread(VM_ENTRY_INTR_INFO) & INTR_INFO_VALID_MASK) )
-        {
-            enable_intr_window(v, intack);
-            goto out;
+            if ( (intblk != hvm_intblk_none) ||
+                 (__vmread(VM_ENTRY_INTR_INFO) & INTR_INFO_VALID_MASK) )
+            {
+                enable_intr_window(v, intack);
+                goto out;
+            }
         }

         intack = hvm_vcpu_ack_pending_irq(v, intack);
@@ -253,6 +277,29 @@ void vmx_intr_assist(void)
     {
         hvm_inject_hw_exception(TRAP_machine_check, HVM_DELIVER_NO_ERROR_CODE);
     }
+    else if ( cpu_has_vmx_virtual_intr_delivery &&
+              intack.source != hvm_intsrc_pic &&
+              intack.source != hvm_intsrc_vector )
+    {
+        /* We need to update the RVI field. */
+        unsigned long status = __vmread(GUEST_INTR_STATUS);
+        status &= ~(unsigned long)0x0FF;
+        status |= (unsigned long)0x0FF & intack.vector;
+        __vmwrite(GUEST_INTR_STATUS, status);
+        if (v->arch.hvm_vmx.eoi_exitmap_changed) {
+#define UPDATE_EOI_EXITMAP(v, e) {                             \
+        if (test_and_clear_bit(e, &(v).eoi_exitmap_changed)) {      \
+                __vmwrite(EOI_EXIT_BITMAP##e, (v).eoi_exit_bitmap[e]);}}
+
+                UPDATE_EOI_EXITMAP(v->arch.hvm_vmx, 0);
+                UPDATE_EOI_EXITMAP(v->arch.hvm_vmx, 1);
+                UPDATE_EOI_EXITMAP(v->arch.hvm_vmx, 2);
+                UPDATE_EOI_EXITMAP(v->arch.hvm_vmx, 3);
+        }
+
+        pt_intr_post(v, intack);
+    }
     else
     {
         HVMTRACE_2D(INJ_VIRQ, intack.vector, /*fake=*/ 0);
@@ -262,11 +309,16 @@ void vmx_intr_assist(void)

     /* Is there another IRQ to queue up behind this one? */
     intack = hvm_vcpu_has_pending_irq(v);
-    if ( unlikely(intack.source != hvm_intsrc_none) )
-        enable_intr_window(v, intack);
+    if ( !cpu_has_vmx_virtual_intr_delivery ||
+         intack.source == hvm_intsrc_pic ||
+         intack.source == hvm_intsrc_vector )
+    {
+        if ( unlikely(intack.source != hvm_intsrc_none) )
+            enable_intr_window(v, intack);
+    }

  out:
-    if ( cpu_has_vmx_tpr_shadow )
+    if ( !cpu_has_vmx_virtual_intr_delivery && cpu_has_vmx_tpr_shadow )
         __vmwrite(TPR_THRESHOLD, tpr_threshold);
}

diff -r cb821c24ca74 xen/arch/x86/hvm/vmx/vmcs.c
--- a/xen/arch/x86/hvm/vmx/vmcs.c       Fri Aug 31 09:30:38 2012 +0800
+++ b/xen/arch/x86/hvm/vmx/vmcs.c    Fri Aug 31 09:49:39 2012 +0800
@@ -90,6 +90,7 @@ static void __init vmx_display_features(
     P(cpu_has_vmx_msr_bitmap, "MSR direct-access bitmap");
     P(cpu_has_vmx_unrestricted_guest, "Unrestricted Guest");
     P(cpu_has_vmx_apic_reg_virt, "APIC Register Virtualization");
+    P(cpu_has_vmx_virtual_intr_delivery, "Virtual Interrupt Delivery");
#undef P

     if ( !printed )
@@ -188,11 +189,12 @@ static int vmx_init_vmcs_config(void)
             opt |= SECONDARY_EXEC_UNRESTRICTED_GUEST;

         /*
-         * "APIC Register Virtualization"
+         * "APIC Register Virtualization" and "Virtual Interrupt Delivery"
          * can be set only when "use TPR shadow" is set
          */
         if ( _vmx_cpu_based_exec_control & CPU_BASED_TPR_SHADOW )
-            opt |= SECONDARY_EXEC_APIC_REGISTER_VIRT;
+            opt |= SECONDARY_EXEC_APIC_REGISTER_VIRT |
+                   SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY;

 
         _vmx_secondary_exec_control = adjust_vmx_controls(
@@ -787,6 +789,22 @@ static int construct_vmcs(struct vcpu *v
     __vmwrite(IO_BITMAP_A, virt_to_maddr((char *)hvm_io_bitmap + 0));
     __vmwrite(IO_BITMAP_B, virt_to_maddr((char *)hvm_io_bitmap + PAGE_SIZE));

+    if ( cpu_has_vmx_virtual_intr_delivery )
+    {
+        /* EOI-exit bitmap */
+        v->arch.hvm_vmx.eoi_exit_bitmap[0] = (uint64_t)0;
+        __vmwrite(EOI_EXIT_BITMAP0, v->arch.hvm_vmx.eoi_exit_bitmap[0]);
+        v->arch.hvm_vmx.eoi_exit_bitmap[1] = (uint64_t)0;
+        __vmwrite(EOI_EXIT_BITMAP1, v->arch.hvm_vmx.eoi_exit_bitmap[1]);
+        v->arch.hvm_vmx.eoi_exit_bitmap[2] = (uint64_t)0;
+        __vmwrite(EOI_EXIT_BITMAP2, v->arch.hvm_vmx.eoi_exit_bitmap[2]);
+        v->arch.hvm_vmx.eoi_exit_bitmap[3] = (uint64_t)0;
+        __vmwrite(EOI_EXIT_BITMAP3, v->arch.hvm_vmx.eoi_exit_bitmap[3]);
+
+        /* Initialise Guest Interrupt Status (RVI and SVI) to 0 */
+        __vmwrite(GUEST_INTR_STATUS, 0);
+    }
+
     /* Host data selectors. */
     __vmwrite(HOST_SS_SELECTOR, __HYPERVISOR_DS);
     __vmwrite(HOST_DS_SELECTOR, __HYPERVISOR_DS);
@@ -1028,6 +1046,30 @@ int vmx_add_host_load_msr(u32 msr)
     return 0;
}

+void vmx_set_eoi_exit_bitmap(struct vcpu *v, u8 vector)
+{
+    int index, offset, changed;
+
+    index = vector >> 6; 
+    offset = vector & 63;
+    changed = !test_and_set_bit(offset,
+                  (uint64_t *)&v->arch.hvm_vmx.eoi_exit_bitmap[index]);
+    if (changed)
+        set_bit(index, &v->arch.hvm_vmx.eoi_exitmap_changed);
+}
+
+void vmx_clear_eoi_exit_bitmap(struct vcpu *v, u8 vector)
+{
+    int index, offset, changed;
+
+    index = vector >> 6; 
+    offset = vector & 63;
+    changed = test_and_clear_bit(offset,
+                  (uint64_t *)&v->arch.hvm_vmx.eoi_exit_bitmap[index]);
+    if (changed)
+        set_bit(index, &v->arch.hvm_vmx.eoi_exitmap_changed);
+}
+
int vmx_create_vmcs(struct vcpu *v)
{
     struct arch_vmx_struct *arch_vmx = &v->arch.hvm_vmx;
diff -r cb821c24ca74 xen/arch/x86/hvm/vmx/vmx.c
--- a/xen/arch/x86/hvm/vmx/vmx.c         Fri Aug 31 09:30:38 2012 +0800
+++ b/xen/arch/x86/hvm/vmx/vmx.c      Fri Aug 31 09:49:39 2012 +0800
@@ -2674,6 +2674,16 @@ void vmx_vmexit_handler(struct cpu_user_
             hvm_inject_hw_exception(TRAP_gp_fault, 0);
         break;

+    case EXIT_REASON_EOI_INDUCED:
+    {
+        int vector;
+        exit_qualification = __vmread(EXIT_QUALIFICATION);
+        vector = exit_qualification & 0xff;
+
+        vlapic_handle_EOI_induced_exit(vcpu_vlapic(current), vector);
+        break;
+    }
+
     case EXIT_REASON_IO_INSTRUCTION:
         exit_qualification = __vmread(EXIT_QUALIFICATION);
         if ( exit_qualification & 0x10 )
diff -r cb821c24ca74 xen/include/asm-x86/hvm/vlapic.h
--- a/xen/include/asm-x86/hvm/vlapic.h Fri Aug 31 09:30:38 2012 +0800
+++ b/xen/include/asm-x86/hvm/vlapic.h       Fri Aug 31 09:49:39 2012 +0800
@@ -100,6 +100,7 @@ int vlapic_accept_pic_intr(struct vcpu *
void vlapic_adjust_i8259_target(struct domain *d);

 void vlapic_EOI_set(struct vlapic *vlapic);
+void vlapic_handle_EOI_induced_exit(struct vlapic *vlapic, int vector);

 int vlapic_ipi(struct vlapic *vlapic, uint32_t icr_low, uint32_t icr_high);

diff -r cb821c24ca74 xen/include/asm-x86/hvm/vmx/vmcs.h
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h  Fri Aug 31 09:30:38 2012 +0800
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h        Fri Aug 31 09:49:39 2012 +0800
@@ -110,6 +110,9 @@ struct arch_vmx_struct {
     unsigned int         host_msr_count;
     struct vmx_msr_entry *host_msr_area;

+    uint32_t             eoi_exitmap_changed;
+    uint64_t             eoi_exit_bitmap[4];
+
     unsigned long        host_cr0;

     /* Is the guest in real mode? */
@@ -183,6 +186,7 @@ extern u32 vmx_vmentry_control;
#define SECONDARY_EXEC_WBINVD_EXITING           0x00000040
#define SECONDARY_EXEC_UNRESTRICTED_GUEST       0x00000080
#define SECONDARY_EXEC_APIC_REGISTER_VIRT       0x00000100
+#define SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY    0x00000200
#define SECONDARY_EXEC_PAUSE_LOOP_EXITING       0x00000400
#define SECONDARY_EXEC_ENABLE_INVPCID           0x00001000
extern u32 vmx_secondary_exec_control;
@@ -233,6 +237,8 @@ extern bool_t cpu_has_vmx_ins_outs_instr
     (vmx_secondary_exec_control & SECONDARY_EXEC_PAUSE_LOOP_EXITING)
#define cpu_has_vmx_apic_reg_virt \
     (vmx_secondary_exec_control & SECONDARY_EXEC_APIC_REGISTER_VIRT)
+#define cpu_has_vmx_virtual_intr_delivery \
+    (vmx_secondary_exec_control & SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY)

 /* GUEST_INTERRUPTIBILITY_INFO flags. */
#define VMX_INTR_SHADOW_STI             0x00000001
@@ -251,6 +257,7 @@ enum vmcs_field {
     GUEST_GS_SELECTOR               = 0x0000080a,
     GUEST_LDTR_SELECTOR             = 0x0000080c,
     GUEST_TR_SELECTOR               = 0x0000080e,
+    GUEST_INTR_STATUS               = 0x00000810,
     HOST_ES_SELECTOR                = 0x00000c00,
     HOST_CS_SELECTOR                = 0x00000c02,
     HOST_SS_SELECTOR                = 0x00000c04,
@@ -278,6 +285,14 @@ enum vmcs_field {
     APIC_ACCESS_ADDR_HIGH           = 0x00002015,
     EPT_POINTER                     = 0x0000201a,
     EPT_POINTER_HIGH                = 0x0000201b,
+    EOI_EXIT_BITMAP0                = 0x0000201c,
+    EOI_EXIT_BITMAP0_HIGH           = 0x0000201d,
+    EOI_EXIT_BITMAP1                = 0x0000201e,
+    EOI_EXIT_BITMAP1_HIGH           = 0x0000201f,
+    EOI_EXIT_BITMAP2                = 0x00002020,
+    EOI_EXIT_BITMAP2_HIGH           = 0x00002021,
+    EOI_EXIT_BITMAP3                = 0x00002022,
+    EOI_EXIT_BITMAP3_HIGH           = 0x00002023,
    GUEST_PHYSICAL_ADDRESS          = 0x00002400,
     GUEST_PHYSICAL_ADDRESS_HIGH     = 0x00002401,
     VMCS_LINK_POINTER               = 0x00002800,
@@ -398,6 +413,8 @@ int vmx_write_guest_msr(u32 msr, u64 val
int vmx_add_guest_msr(u32 msr);
int vmx_add_host_load_msr(u32 msr);
void vmx_vmcs_switch(struct vmcs_struct *from, struct vmcs_struct *to);
+void vmx_set_eoi_exit_bitmap(struct vcpu *v, u8 vector);
+void vmx_clear_eoi_exit_bitmap(struct vcpu *v, u8 vector);

 #endif /* ASM_X86_HVM_VMX_VMCS_H__ */

diff -r cb821c24ca74 xen/include/asm-x86/hvm/vmx/vmx.h
--- a/xen/include/asm-x86/hvm/vmx/vmx.h    Fri Aug 31 09:30:38 2012 +0800
+++ b/xen/include/asm-x86/hvm/vmx/vmx.h Fri Aug 31 09:49:39 2012 +0800
@@ -119,6 +119,7 @@ void vmx_update_cpu_exec_control(struct 
 #define EXIT_REASON_MCE_DURING_VMENTRY  41
#define EXIT_REASON_TPR_BELOW_THRESHOLD 43
#define EXIT_REASON_APIC_ACCESS         44
+#define EXIT_REASON_EOI_INDUCED         45
#define EXIT_REASON_ACCESS_GDTR_OR_IDTR 46
#define EXIT_REASON_ACCESS_LDTR_OR_TR   47
#define EXIT_REASON_EPT_VIOLATION       48


From xen-devel-bounces@lists.xen.org Fri Aug 31 09:34:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 09:34:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7Nc6-0007px-Vj; Fri, 31 Aug 2012 09:34:06 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T7Nc5-0007pV-I3
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 09:34:05 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1346405636!8476333!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEyMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29778 invoked from network); 31 Aug 2012 09:33:57 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 09:33:57 -0000
X-IronPort-AV: E=Sophos;i="4.80,346,1344211200"; d="scan'208";a="14284166"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 09:33:56 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1;
	Fri, 31 Aug 2012 10:33:56 +0100
Message-ID: <1346405635.27277.137.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Fri, 31 Aug 2012 10:33:55 +0100
In-Reply-To: <50409A070200007800097BA3@nat28.tlf.novell.com>
References: <db614e92faf743e20b3f.1337096977@kodo2>
	<4FB29D6F0200007800083E81@nat28.tlf.novell.com>
	<4FB28295.8020905@eu.citrix.com>
	<4FB2A2320200007800083EB4@nat28.tlf.novell.com>
	<4FB28709.9080001@eu.citrix.com>
	<4FB377A10200007800083FFE@nat28.tlf.novell.com>
	<1346402456.27277.111.camel@zakaz.uk.xensource.com>
	<50409A070200007800097BA3@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] xencommons: Attempt to load blktap driver
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-08-31 at 10:03 +0100, Jan Beulich wrote:
> >>> On 31.08.12 at 10:40, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > From what you say I think we want to modprobe blktap if blktap2 didn't
> > exist.
> > 
> > blktap2 isn't actually a xenbus backend driver (since it uses blkback to
> > do the guest facing bit) so I don't think a xen-backend: alias is
> > available. I can't see any other aliases defined in the code in either
> > the 2.6.18-xen tree, the SLES 2.6.32.12-0.7.1 kernel (which is the
> > latest I happen to have to hand) or a mainline kernel. If there is
> > something else we should be trying please let me know.
> 
> There's a "devname:xen/blktap-2/control" alias in our SLE11 SP2
> and newer openSUSE ones (as of 2.6.35). Whether that's fully
> appropriate to be there and/or to be used as a modprobe
> argument I'm not sure though.
> 
> The bad thing about the "blktap" name is that that's also the
> name of the blktap1 driver in the 2.6.18 tree and its forward
> ports, but I don't think there's anything we can reasonably do
> about that.

I thought about that. Most kernels which have blktap1 nowadays also have
blktap2 so the number of systems where you would actually end up with
only blktap1 loaded is pretty small. It's also AFAIK reasonably harmless
other than the memory usage etc.
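
The fallback being discussed (try blktap2 first, and only modprobe the legacy "blktap" name if that fails) might look like this in an init script. This is just an illustrative sketch, not the actual xencommons change; the module names are the ones from the thread:

```shell
# Illustrative sketch of the fallback discussed above: prefer the
# blktap2 module and only fall back to the legacy "blktap" name when
# blktap2 is not available.  Not the actual xencommons patch.
load_blktap() {
    modprobe blktap2 2>/dev/null && return 0
    modprobe blktap 2>/dev/null
}
```

On kernels that renamed blktap2 to blktap this still loads the right driver, at the cost of loading blktap1 on the (now rare) systems that only ship blktap1.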

In retrospect renaming blktap2->blktap in pvops was a stupid idea (I can
say that since it was my idea...)

>  So I'm fine with the change you suggest from that
> perspective (whether to use the module alias pointed out).

Can I take that as an
Acked-by: Jan Beulich <JBeulich@suse.com>
?

I think I'll skip the alias for now.

Ian.



From xen-devel-bounces@lists.xen.org Fri Aug 31 09:37:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 09:37:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7Nez-00081V-JR; Fri, 31 Aug 2012 09:37:05 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <vonami@gmail.com>) id 1T7NWb-0006wE-IC
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 09:28:25 +0000
Received: from [85.158.138.51:59844] by server-12.bemta-3.messagelabs.com id
	59/0D-10384-8B380405; Fri, 31 Aug 2012 09:28:24 +0000
X-Env-Sender: vonami@gmail.com
X-Msg-Ref: server-15.tower-174.messagelabs.com!1346405303!26106574!1
X-Originating-IP: [209.85.212.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 430 invoked from network); 31 Aug 2012 09:28:24 -0000
Received: from mail-vb0-f45.google.com (HELO mail-vb0-f45.google.com)
	(209.85.212.45)
	by server-15.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 09:28:24 -0000
Received: by vbip1 with SMTP id p1so3637584vbi.32
	for <multiple recipients>; Fri, 31 Aug 2012 02:28:22 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=lnHOs67REv3HuYhRI28we9lh4+KjTtU5OxHY+t1ig18=;
	b=d7QukSVJyVdZ4z64j1hGRD2KVUTaTa4ctsTphDoahcSHjSBE7m64HdVyVk/eXP5rGl
	koL8c0uM3iZJhwsUdIHGYID4jP2MRufvmiodbaMwQOS+/LShUrLsJzI1c3ULh+kCxtpe
	k70fc4DeRX6hZQUswh2kuaB/XG9uDfXiR8vrRN1+Tk0ABxL/HtFc4X0LR+dDobspYYS/
	cICMJ4FwWdVc/dUu38j5P/+ozcwHuxS8BpFEzoBlkVEMSImwnxnBCFQrARv6eg50Xrgv
	rj82635IGu/MirFEFNZ/t4J2h/ATSyQ12xljeUxlUP8QZ8ss9tIhVZXX47D7XUA907h0
	0EhA==
MIME-Version: 1.0
Received: by 10.52.31.230 with SMTP id d6mr4430753vdi.87.1346405302689; Fri,
	31 Aug 2012 02:28:22 -0700 (PDT)
Received: by 10.58.59.39 with HTTP; Fri, 31 Aug 2012 02:28:22 -0700 (PDT)
In-Reply-To: <1346403318.27277.118.camel@zakaz.uk.xensource.com>
References: <CALaHputr63fjY6DOiNSx5SRS6dsb4v0dZzhypSeb2M8hQmhH-g@mail.gmail.com>
	<1346403318.27277.118.camel@zakaz.uk.xensource.com>
Date: Fri, 31 Aug 2012 13:28:22 +0400
Message-ID: <CALaHpuvSpbp-u=o6VxXqxfCVHQSwLM1y_hnZFGQoE+nRx2erTg@mail.gmail.com>
From: Dmitry Ivanov <vonami@gmail.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
X-Mailman-Approved-At: Fri, 31 Aug 2012 09:37:04 +0000
Cc: "xen-users@lists.xen.org" <xen-users@lists.xen.org>,
	Roger Pau Monne <roger.pau@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [Xen-users] Failed to build xen-unstable with
	--disable-pythontools
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Aug 31, 2012 at 12:55 PM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>
> In the meantime I think the answer is "don't do that then". That may
> well also turn out to be the answer for the 4.2.0 release at this point.
>

Thanks for the explanation. I'm cross-compiling xen for an embedded
system so I'd like to build a minimal dom0 system. Explicitly setting
PYTHON when doing ./configure seems to work fine.
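(For anyone hitting the same cross-compile issue: the workaround described
above amounts to pinning the interpreter at configure time. The interpreter
path below is an example; substitute whatever python your build host uses.)

```shell
# Sketch: force the build-host python instead of letting configure probe,
# which is what "explicitly setting PYTHON" refers to above.
./configure --disable-ocamltools PYTHON=/usr/bin/python
```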

-- 
Best regards,
Dmitry Ivanov

A: Because it breaks the logical sequence of discussion
Q: Why is top posting bad?

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 09:42:18 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 09:42:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7Njr-0008Eg-An; Fri, 31 Aug 2012 09:42:07 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T7Njq-0008EZ-Mp
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 09:42:06 +0000
Received: from [85.158.139.83:49963] by server-10.bemta-5.messagelabs.com id
	13/85-10969-DE680405; Fri, 31 Aug 2012 09:42:05 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-7.tower-182.messagelabs.com!1346406123!23930404!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEyMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26647 invoked from network); 31 Aug 2012 09:42:03 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-7.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 09:42:03 -0000
X-IronPort-AV: E=Sophos;i="4.80,346,1344211200"; d="scan'208";a="14284389"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 09:42:03 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Fri, 31 Aug 2012 10:42:03 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1T7Njm-0001Tr-R5; Fri, 31 Aug 2012 09:42:02 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1T7Njm-0007xw-MZ;
	Fri, 31 Aug 2012 10:42:02 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20544.34538.599108.990505@mariner.uk.xensource.com>
Date: Fri, 31 Aug 2012 10:42:02 +0100
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1346403318.27277.118.camel@zakaz.uk.xensource.com>
References: <CALaHputr63fjY6DOiNSx5SRS6dsb4v0dZzhypSeb2M8hQmhH-g@mail.gmail.com>
	<1346403318.27277.118.camel@zakaz.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: "xen-users@lists.xen.org" <xen-users@lists.xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Roger Pau Monne <roger.pau@citrix.com>, Dmitry Ivanov <vonami@gmail.com>
Subject: Re: [Xen-devel] [Xen-users] Failed to build xen-unstable with
 --disable-pythontools
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [Xen-users] Failed to build xen-unstable with --disable-pythontools"):
> On Fri, 2012-08-31 at 09:37 +0100, Dmitry Ivanov wrote:
> > After configuring xen-unstable with ./configure --disable-ocamltools
> > --disable-pythontools it fails to build:
...
> In the meantime I think the answer is "don't do that then". That may
> well also turn out to be the answer for the 4.2.0 release at this point.
> 
> A patch to remove this option follows.

Thanks.

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 09:43:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 09:43:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7Nkg-0008Lo-R0; Fri, 31 Aug 2012 09:42:58 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T7Nkf-0008KF-Kr
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 09:42:57 +0000
Received: from [85.158.139.83:41408] by server-12.bemta-5.messagelabs.com id
	B8/88-18300-12780405; Fri, 31 Aug 2012 09:42:57 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-182.messagelabs.com!1346406170!24007291!2
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEyMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31607 invoked from network); 31 Aug 2012 09:42:55 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-6.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 09:42:55 -0000
X-IronPort-AV: E=Sophos;i="4.80,346,1344211200"; d="scan'208";a="14284413"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 09:42:55 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1;
	Fri, 31 Aug 2012 10:42:55 +0100
Message-ID: <1346406173.27277.140.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Fri, 31 Aug 2012 10:42:53 +0100
In-Reply-To: <20543.34665.170118.455635@mariner.uk.xensource.com>
References: <20120822214709.296550@gmx.net>
	<1345703962.23624.57.camel@dagon.hellion.org.uk>
	<5035E421020000780008A601@nat28.tlf.novell.com>
	<1345707105.12501.38.camel@zakaz.uk.xensource.com>
	<1345708742.12501.48.camel@zakaz.uk.xensource.com>
	<1345710212.12501.55.camel@zakaz.uk.xensource.com>
	<20543.34665.170118.455635@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "p.d@gmx.de" <p.d@gmx.de>, Jan Beulich <jbeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] do not remove kernels or modules on
 uninstall. (Was: Re: make uninstall can delete xen-kernels)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-30 at 16:31 +0100, Ian Jackson wrote:
> Ian Campbell writes ("Re: [Xen-devel] [PATCH] do not remove kernels or modules on uninstall. (Was: Re: make uninstall can delete xen-kernels)"):
> > That broader rework is certainly a post 4.2 thing IMHO. I'm in two minds
> > about this patch as a 4.2 thing, but given that the regression happened
> > due to the switch to autoconf in 4.2 I think it might be good to take,
> > even though as a %age of what we install the delta is pretty
> > insignificant.
> 
> I'm happy with both of these for 4.2.
> 
> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

Thanks, applied with this and
Looks-good: Jan Beulich <jbeulich@suse.com>

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Fri Aug 31 09:43:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 09:43:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7Nkc-0008Ks-Dx; Fri, 31 Aug 2012 09:42:54 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T7Nka-0008KJ-Nj
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 09:42:52 +0000
Received: from [85.158.139.83:40819] by server-7.bemta-5.messagelabs.com id
	F5/06-19703-C1780405; Fri, 31 Aug 2012 09:42:52 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-182.messagelabs.com!1346406170!24007291!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEyMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30914 invoked from network); 31 Aug 2012 09:42:51 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-6.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 09:42:51 -0000
X-IronPort-AV: E=Sophos;i="4.80,346,1344211200"; d="scan'208";a="14284409"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 09:42:50 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1;
	Fri, 31 Aug 2012 10:42:50 +0100
Message-ID: <1346406169.27277.139.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Fri, 31 Aug 2012 10:42:49 +0100
In-Reply-To: <20543.34801.132186.226088@mariner.uk.xensource.com>
References: <d7e4efa17fb0b9b69c58.1346193435@u002268147cd4502c336d.ant.amazon.com>
	<20543.34801.132186.226088@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Jan Beulich <JBeulich@suse.com>, Matt Wilson <msw@amazon.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] tools: remove vestigial default_lib.m4
 macros and adjust substitutions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-30 at 16:34 +0100, Ian Jackson wrote:
> Matt Wilson writes ("[Xen-devel] [PATCH] tools: remove vestigial default_lib.m4 macros and adjust substitutions"):
> > LIB_PATH is no longer used, so the AX_DEFAULT_LIB macro is no longer
> > needed. Additionally lower case make variables are now used as
> > autoconf substitutions, which allows for more correct overrides at
> > build time.
> > 
> > I've checked the file layout in dist/install from the build made
> > before this change versus after with ./configure values of:
> >  1) ./configure (no flags provided)
> >  2) ./configure --libdir=/usr/lib/x86_64-linux-gnu (Debian style)
> >  3) ./configure --libdir='${exec_prefix}/lib' (late variable expansion)
> > 
> > Signed-off-by: Matt Wilson <msw@amazon.com>
> 
> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

The problem with this last time was
<1341478690.31696.110.camel@zakaz.uk.xensource.com>, I've confirmed that
this doesn't happen this time (list of installed files is identical
before and after).

Acked-by: Ian Campbell <ian.campbell@citrix.com>

> 
> Good for 4.2.
> 
> Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 09:43:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 09:43:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7Nkj-0008MO-7J; Fri, 31 Aug 2012 09:43:01 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T7Nkh-0008Lv-Mg
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 09:42:59 +0000
Received: from [85.158.139.83:63285] by server-5.bemta-5.messagelabs.com id
	5B/79-30514-22780405; Fri, 31 Aug 2012 09:42:58 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-182.messagelabs.com!1346406170!24007291!3
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEyMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31808 invoked from network); 31 Aug 2012 09:42:58 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
From xen-devel-bounces@lists.xen.org Fri Aug 31 09:43:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 09:43:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7Nkj-0008MO-7J; Fri, 31 Aug 2012 09:43:01 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T7Nkh-0008Lv-Mg
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 09:42:59 +0000
Received: from [85.158.139.83:63285] by server-5.bemta-5.messagelabs.com id
	5B/79-30514-22780405; Fri, 31 Aug 2012 09:42:58 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-182.messagelabs.com!1346406170!24007291!3
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEyMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31808 invoked from network); 31 Aug 2012 09:42:58 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-6.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 09:42:58 -0000
X-IronPort-AV: E=Sophos;i="4.80,346,1344211200"; d="scan'208";a="14284415"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 09:42:58 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1;
	Fri, 31 Aug 2012 10:42:58 +0100
Message-ID: <1346406176.27277.141.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Fri, 31 Aug 2012 10:42:56 +0100
In-Reply-To: <20543.33906.764835.984658@mariner.uk.xensource.com>
References: <1345740826-49830-1-git-send-email-roger.pau@citrix.com>
	<20543.33906.764835.984658@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH 0/3] hotplug/NetBSD: remaining block script
 fixes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-30 at 16:19 +0100, Ian Jackson wrote:
> Roger Pau Monne writes ("[Xen-devel] [PATCH 0/3] hotplug/NetBSD: remaining block script fixes"):
> > Remaining patches from the hotplug script series for NetBSD. Expanded 
> > with Ian Campbell recommendations.
> > 
> > The xenstore_write fix has been moved to a pre-patch, and the error 
> > function has been expanded to write the script error to hotplug-error 
> > (in a pre-patch also).
> 
> All three:
> 
> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
> 
> for 4.2.

All applied, thanks.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 09:43:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 09:43:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7Nkc-0008Ks-Dx; Fri, 31 Aug 2012 09:42:54 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T7Nka-0008KJ-Nj
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 09:42:52 +0000
Received: from [85.158.139.83:40819] by server-7.bemta-5.messagelabs.com id
	F5/06-19703-C1780405; Fri, 31 Aug 2012 09:42:52 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-182.messagelabs.com!1346406170!24007291!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEyMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30914 invoked from network); 31 Aug 2012 09:42:51 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-6.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 09:42:51 -0000
X-IronPort-AV: E=Sophos;i="4.80,346,1344211200"; d="scan'208";a="14284409"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 09:42:50 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1;
	Fri, 31 Aug 2012 10:42:50 +0100
Message-ID: <1346406169.27277.139.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Fri, 31 Aug 2012 10:42:49 +0100
In-Reply-To: <20543.34801.132186.226088@mariner.uk.xensource.com>
References: <d7e4efa17fb0b9b69c58.1346193435@u002268147cd4502c336d.ant.amazon.com>
	<20543.34801.132186.226088@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Jan Beulich <JBeulich@suse.com>, Matt Wilson <msw@amazon.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] tools: remove vestigial default_lib.m4
 macros and adjust substitutions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-30 at 16:34 +0100, Ian Jackson wrote:
> Matt Wilson writes ("[Xen-devel] [PATCH] tools: remove vestigial default_lib.m4 macros and adjust substitutions"):
> > LIB_PATH is no longer used, so the AX_DEFAULT_LIB macro is no longer
> > needed. Additionally lower case make variables are now used as
> > autoconf substitutions, which allows for more correct overrides at
> > build time.
> > 
> > I've checked the file layout in dist/install from the build made
> > before this change versus after with ./configure values of:
> >  1) ./configure (no flags provided)
> >  2) ./configure --libdir=/usr/lib/x86_64-linux-gnu (Debian style)
> >  3) ./configure --libdir='${exec_prefix}/lib' (late variable expansion)
> > 
> > Signed-off-by: Matt Wilson <msw@amazon.com>
> 
> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

The problem with this last time was
<1341478690.31696.110.camel@zakaz.uk.xensource.com>, I've confirmed that
this doesn't happen this time (list of installed files is identical
before and after).

Acked-by: Ian Campbell <ian.campbell@citrix.com>

> 
> Good for 4.2.
> 
> Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 09:43:12 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 09:43:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7Nkm-0008NE-LM; Fri, 31 Aug 2012 09:43:04 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T7Nkk-0008Me-Jr
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 09:43:02 +0000
Received: from [85.158.139.83:2069] by server-4.bemta-5.messagelabs.com id
	6B/59-23042-52780405; Fri, 31 Aug 2012 09:43:01 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-182.messagelabs.com!1346406170!24007291!4
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEyMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32023 invoked from network); 31 Aug 2012 09:43:01 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-6.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 09:43:01 -0000
X-IronPort-AV: E=Sophos;i="4.80,346,1344211200"; d="scan'208";a="14284418"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 09:43:01 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1;
	Fri, 31 Aug 2012 10:43:01 +0100
Message-ID: <1346406179.27277.142.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Fri, 31 Aug 2012 10:42:59 +0100
In-Reply-To: <20543.33810.862529.50557@mariner.uk.xensource.com>
References: <4b1f399193f5e363c2b4.1345564497@cosworth.uk.xensource.com>
	<1345711250.12501.58.camel@zakaz.uk.xensource.com>
	<20543.33810.862529.50557@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Paul Durrant <Paul.Durrant@citrix.com>, "Keir \(Xen.org\)" <keir@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Remove VM genearation ID device and
 incr_generationid from build_info [and 1 more messages]
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-30 at 16:17 +0100, Ian Jackson wrote:
> Paul Durrant writes ("[Xen-devel] [PATCH] Remove VM genearation ID device and incr_generationid from build_info"):
> > Remove VM genearation ID device and incr_generationid from build_info.
> 
> Thanks.
> 
> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
> 
> for 4.2.

Looks like Keir already took this one.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 09:43:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 09:43:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7Nkr-0008Oa-2k; Fri, 31 Aug 2012 09:43:09 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T7Nkq-0008OA-Co
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 09:43:08 +0000
Received: from [85.158.138.51:29613] by server-9.bemta-3.messagelabs.com id
	5E/1F-15390-B2780405; Fri, 31 Aug 2012 09:43:07 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-174.messagelabs.com!1346406186!19928705!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEyMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32137 invoked from network); 31 Aug 2012 09:43:06 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-12.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 09:43:06 -0000
X-IronPort-AV: E=Sophos;i="4.80,346,1344211200"; d="scan'208";a="14284424"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 09:43:06 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1;
	Fri, 31 Aug 2012 10:43:06 +0100
Message-ID: <1346406185.27277.144.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Fri, 31 Aug 2012 10:43:05 +0100
In-Reply-To: <20543.32723.578750.578966@mariner.uk.xensource.com>
References: <1809175cdc9b3a606d75.1343749000@kaos-source-31003.sea31.amazon.com>
	<20120807032453.GB4324@US-SEA-R8XVZTX>
	<1345211930.10161.33.camel@zakaz.uk.xensource.com>
	<20543.32723.578750.578966@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	"Keir \(Xen.org\)" <keir@xen.org>, Matt Wilson <msw@amazon.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH DOCDAY v3] docs: improve documentation of
 Xen command line parameters [and 1 more messages]
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-30 at 15:59 +0100, Ian Jackson wrote:
> Matt Wilson writes ("[Xen-devel] [PATCH DOCDAY v3] docs: improve documentation of Xen command line parameters"):
> > This change improves documentation for several Xen command line
> > parameters. Some of the Itanium-specific options are now removed. A
> > more thorough check should be performed to remove any other remnants.
> 
> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

Applied, thanks.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 09:43:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 09:43:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7Nkt-0008Pt-Et; Fri, 31 Aug 2012 09:43:11 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T7Nks-0008Ou-7J
	for xen-devel@lists.xensource.com; Fri, 31 Aug 2012 09:43:10 +0000
Received: from [85.158.138.51:29779] by server-12.bemta-3.messagelabs.com id
	06/AF-10384-D2780405; Fri, 31 Aug 2012 09:43:09 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-174.messagelabs.com!1346406184!27930305!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEyMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11434 invoked from network); 31 Aug 2012 09:43:04 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-9.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 09:43:04 -0000
X-IronPort-AV: E=Sophos;i="4.80,346,1344211200"; d="scan'208";a="14284420"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 09:43:04 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1;
	Fri, 31 Aug 2012 10:43:04 +0100
Message-ID: <1346406182.27277.143.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Fri, 31 Aug 2012 10:43:02 +0100
In-Reply-To: <20543.33639.161709.272449@mariner.uk.xensource.com>
References: <1345566265-11618-1-git-send-email-david.vrabel@citrix.com>
	<1345711732.12501.62.camel@zakaz.uk.xensource.com>
	<20543.33639.161709.272449@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	David Vrabel <david.vrabel@citrix.com>
Subject: Re: [Xen-devel] [PATCH] xenconsoled: clean-up after all dead domains
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-30 at 16:14 +0100, Ian Jackson wrote:
> Ian Campbell writes ("Re: [Xen-devel] [PATCH] xenconsoled: clean-up after all dead domains"):
> > On Tue, 2012-08-21 at 17:24 +0100, David Vrabel wrote:
> > > From: David Vrabel <david.vrabel@citrix.com>
> 
> Thanks, and to Ian for the review.
> 
> > Acked-by: Ian Campbell <ian.campbell@citrix.com>
> 
> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
> 
> Good for 4.2.

Applied.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 09:43:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 09:43:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7Nkv-0008S4-3j; Fri, 31 Aug 2012 09:43:13 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T7Nkt-0008PU-04
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 09:43:11 +0000
Received: from [85.158.138.51:29873] by server-3.bemta-3.messagelabs.com id
	CF/62-21322-E2780405; Fri, 31 Aug 2012 09:43:10 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-174.messagelabs.com!1346406186!19928705!2
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEyMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32355 invoked from network); 31 Aug 2012 09:43:09 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-12.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 09:43:09 -0000
X-IronPort-AV: E=Sophos;i="4.80,346,1344211200"; d="scan'208";a="14284427"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 09:43:09 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1;
	Fri, 31 Aug 2012 10:43:09 +0100
Message-ID: <1346406187.27277.145.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Fri, 31 Aug 2012 10:43:07 +0100
In-Reply-To: <20543.32784.60568.562924@mariner.uk.xensource.com>
References: <alpine.DEB.2.00.1207241956230.14506@vega-c.dur.ac.uk>
	<20120724193604.GB29124@phenom.dumpdata.com>
	<1343205815.18971.43.camel@zakaz.uk.xensource.com>
	<alpine.DEB.2.00.1208121853070.7313@vega-a.dur.ac.uk>
	<1345209224.10161.21.camel@zakaz.uk.xensource.com>
	<alpine.DEB.2.00.1208202011290.8591@procyon.dur.ac.uk>
	<1345537876.28762.136.camel@zakaz.uk.xensource.com>
	<20543.32784.60568.562924@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	M A Young <m.a.young@durham.ac.uk>
Subject: Re: [Xen-devel] [PATCH] README: Update references to PyXML to lxml
 (Was: Re: [PATCH] Re: remove dependency on PyXML from xen?)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-30 at 16:00 +0100, Ian Jackson wrote:
> Ian Campbell writes ("[Xen-devel] [PATCH] README: Update references to PyXML to lxml (Was: Re: [PATCH] Re: remove dependency on PyXML from xen?)"):
> > README: Update references to PyXML to lxml
> 
> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

Applied, thanks.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 09:44:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 09:44:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7NmS-0000hy-WC; Fri, 31 Aug 2012 09:44:48 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1T7NmR-0000hJ-Q9
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 09:44:47 +0000
Received: from [85.158.138.51:46587] by server-4.bemta-3.messagelabs.com id
	6F/BB-24831-B8780405; Fri, 31 Aug 2012 09:44:43 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-7.tower-174.messagelabs.com!1346406283!18974015!1
X-Originating-IP: [188.40.164.121]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19611 invoked from network); 31 Aug 2012 09:44:43 -0000
Received: from static.121.164.40.188.clients.your-server.de (HELO
	smtp.eikelenboom.it) (188.40.164.121)
	by server-7.tower-174.messagelabs.com with AES256-SHA encrypted SMTP;
	31 Aug 2012 09:44:43 -0000
Received: from 26-69-ftth.onsneteindhoven.nl ([88.159.69.26]:49648
	helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.72) (envelope-from <linux@eikelenboom.it>)
	id 1T7NjG-0004a4-Sh; Fri, 31 Aug 2012 11:41:31 +0200
Date: Fri, 31 Aug 2012 11:44:37 +0200
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <1803123652.20120831114437@eikelenboom.it>
To: xen-devel@lists.xen.org
MIME-Version: 1.0
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] Upgrade to xen-unstable-rc4
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi All,

It seems my upgrade to rc4 of xen-unstable went pretty smoothly ... only two things I noticed so far:

- Had to disable startup of xend by hand, perhaps a bit strange if it's deprecated?
- Messages I haven't seen in dmesg with xen-4.1.3 in combination with a 3.6-rc? kernel:

[    3.333195] tseg: 0000000000
[    3.333231] CPU: Physical Processor ID: 0
[    3.333236] CPU: Processor Core ID: 0
[    3.333240] mce: CPU supports 2 MCE banks
[    3.333258] LVT offset 0 assigned for vector 0xf9
[    3.333263] [Firmware Bug]: cpu 0, try to use APIC500 (LVT offset 0) for vector 0xf9, but the register is already in use for vector 0x0 on this cpu
[    3.333272] [Firmware Bug]: cpu 0, failed to setup threshold interrupt for bank 4, block 0 (MSR00000413=0xc008000001000000)
[    3.333281] [Firmware Bug]: cpu 0, try to use APIC500 (LVT offset 0) for vector 0xf9, but the register is already in use for vector 0x0 on this cpu
[    3.333290] [Firmware Bug]: cpu 0, failed to setup threshold interrupt for bank 4, block 1 (MSRC0000408=0xc000000001000000)
[    3.333306] [Firmware Bug]: cpu 0, try to use APIC500 (LVT offset 0) for vector 0xf9, but the register is already in use for vector 0x0 on this cpu
[    3.333315] [Firmware Bug]: cpu 0, failed to setup threshold interrupt for bank 4, block 2 (MSRC0000409=0xc000000001000000)
[    3.333331] Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
[    3.333331] Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0
[    3.333331] tlb_flushall_shift is 0xffffffff
[    3.334057] ACPI: Core revision 20120711

The kernel gives these firmware bugs for the other CPUs later on. This is on an AMD Phenom X6, but everything seems to be working, so perhaps a minor thing.

For the rest, all my domains and pci-passthrough seem to be working :-)

Thx,

--

Sander



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 09:49:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 09:49:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7Nqg-0001N8-NU; Fri, 31 Aug 2012 09:49:10 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T7Nqf-0001Ms-0P
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 09:49:09 +0000
Received: from [85.158.143.99:26367] by server-2.bemta-4.messagelabs.com id
	67/2B-21239-49880405; Fri, 31 Aug 2012 09:49:08 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-16.tower-216.messagelabs.com!1346406546!16599541!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM2OTU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17616 invoked from network); 31 Aug 2012 09:49:06 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-16.tower-216.messagelabs.com with SMTP;
	31 Aug 2012 09:49:06 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 31 Aug 2012 10:49:05 +0100
Message-Id: <5040A4AE0200007800097C43@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 31 Aug 2012 10:49:02 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>
References: <db614e92faf743e20b3f.1337096977@kodo2>
	<4FB29D6F0200007800083E81@nat28.tlf.novell.com>
	<4FB28295.8020905@eu.citrix.com>
	<4FB2A2320200007800083EB4@nat28.tlf.novell.com>
	<4FB28709.9080001@eu.citrix.com>
	<4FB377A10200007800083FFE@nat28.tlf.novell.com>
	<1346402456.27277.111.camel@zakaz.uk.xensource.com>
	<50409A070200007800097BA3@nat28.tlf.novell.com>
	<1346405635.27277.137.camel@zakaz.uk.xensource.com>
In-Reply-To: <1346405635.27277.137.camel@zakaz.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] xencommons: Attempt to load blktap driver
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 31.08.12 at 11:33, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Fri, 2012-08-31 at 10:03 +0100, Jan Beulich wrote:
>> >>> On 31.08.12 at 10:40, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>> > From what you say I think we want to modprobe blktap if blktap2 didn't
>> > exist.
>> > 
>> > blktap2 isn't actually a xenbus backend driver (since it uses blkback to
>> > do the guest facing bit) so I don't think a xen-backend: alias is
>> > available. I can't see any other aliases defined in the code in either
>> > the 2.6.18-xen tree, the SLES 2.6.32.12-0.7.1 kernel (which is the
>> > latest I happen to have to hand) or a mainline kernel. If there is
>> > something else we should be trying please let me know.
>> 
>> There's a "devname:xen/blktap-2/control" alias in our SLE11 SP2
>> and newer openSUSE ones (as of 2.6.35). Whether that's fully
>> appropriate to be there and/or to be used as a modprobe
>> argument I'm not sure though.
>> 
>> The bad thing about the "blktap" name is that that's also the
>> name of the blktap1 driver in the 2.6.18 tree and its forward
>> ports, but I don't think there's anything we can reasonably do
>> about that.
> 
> I thought about that. Most kernels which have blktap1 nowadays also have
> blktap2 so the number of systems where you would actually end up with
> only blktap1 loaded is pretty small. It's also AFAIK reasonably harmless
> other than the memory usage etc.
> 
> In retrospect renaming blktap2->blktap in pvops was a stupid idea (I can
> say that since it was my idea...)
> 
>>  So I'm fine with the change you suggest from that
>> perspective (whether to use the module alias pointed out).
> 
> Can I take that as an
> Acked-by: Jan Beulich <JBeulich@suse.com>
> ?

Yes, feel free to do so.

Jan
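[Editorial note: the fallback scheme discussed in this thread (try blktap2 first, and only fall back to "blktap", which on pvops kernels is the renamed blktap2 but on 2.6.18-derived trees is blktap1) could be sketched roughly as below. The helper name and its module-list argument are illustrative, not from any Xen script.]

```shell
#!/bin/sh
# Hedged sketch of the module-selection logic discussed above.
# pick_blktap_module is a hypothetical helper: given a space-separated
# list of module names available to the kernel, it prefers blktap2 and
# falls back to blktap, as suggested in the thread.
pick_blktap_module() {
    # $1: space-separated list of candidate module names
    case " $1 " in
        *" blktap2 "*) echo blktap2 ;;
        *" blktap "*)  echo blktap ;;
        *)             echo none ;;
    esac
}

# Usage sketch (would normally feed in the real module list and then
# run: mod=$(pick_blktap_module "..."); [ "$mod" != none ] && modprobe "$mod")
pick_blktap_module "blkback blktap2"
```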


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 09:52:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 09:52:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7Nth-0001XC-BB; Fri, 31 Aug 2012 09:52:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jonnyt@abpni.co.uk>) id 1T7Ntg-0001X4-QR
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 09:52:16 +0000
Received: from [85.158.143.35:25857] by server-2.bemta-4.messagelabs.com id
	CA/B0-21239-F4980405; Fri, 31 Aug 2012 09:52:15 +0000
X-Env-Sender: jonnyt@abpni.co.uk
X-Msg-Ref: server-11.tower-21.messagelabs.com!1346406723!11652530!1
X-Originating-IP: [109.200.19.114]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20027 invoked from network); 31 Aug 2012 09:52:12 -0000
Received: from edge1.gosport.uk.abpni.net (HELO
	mail1.gosport.uk.corp.abpni.net) (109.200.19.114)
	by server-11.tower-21.messagelabs.com with SMTP;
	31 Aug 2012 09:52:12 -0000
Received: from localhost (mail1.gosport.corp.uk.abpni.net [127.0.0.1])
	by mail1.gosport.uk.corp.abpni.net (Postfix) with ESMTP id 569795A0006
	for <xen-devel@lists.xen.org>; Fri, 31 Aug 2012 10:51:34 +0100 (BST)
X-Virus-Scanned: Debian amavisd-new at mail1.mail.gosport.corp.uk.abpni.net
Received: from mail1.gosport.uk.corp.abpni.net ([127.0.0.1])
	by localhost (mail1.mail.gosport.corp.uk.abpni.net [127.0.0.1])
	(amavisd-new, port 10024)
	with ESMTP id Gu4PH2QOkJYk for <xen-devel@lists.xen.org>;
	Fri, 31 Aug 2012 10:51:34 +0100 (BST)
Received: from mail.abpni.co.uk (unknown [10.87.17.3])
	by mail1.gosport.uk.corp.abpni.net (Postfix) with ESMTPA id F0C045A0005
	for <xen-devel@lists.xen.org>; Fri, 31 Aug 2012 10:51:33 +0100 (BST)
MIME-Version: 1.0
Date: Fri, 31 Aug 2012 10:52:02 +0100
From: Jonathan Tripathy <jonnyt@abpni.co.uk>
To: <xen-devel@lists.xen.org>
Message-ID: <b48b8a512e26a92c313b96b7e5550cde@abpni.co.uk>
X-Sender: jonnyt@abpni.co.uk
User-Agent: Roundcube Webmail/0.6
Subject: [Xen-devel] Dom0 Memory
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Everyone,

I have set dom0_mem=2048M in grub2, set dom0 minimum memory to 2048 in 
xend-config (and also disabled ballooning). However, when I do "free -m" 
in the Dom0, I only see 1567 MB total RAM (xentop reports 2048-ish).

Is this a known issue? I am aware that this occurs in the DomUs as 
well, but never with a discrepancy that big.

The kernel I'm using is 3.2.28. Xen is version 4.1.3 (although I'm 
using a hack in setup.c where I changed the order of the if block to 
give priority to the e820 memory map. I needed to do this as my 
motherboard uses UEFI. Keir made a patch for this for unstable 
recently).

Any help is appreciated.

Thanks

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 09:52:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 09:52:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7Nth-0001XC-BB; Fri, 31 Aug 2012 09:52:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jonnyt@abpni.co.uk>) id 1T7Ntg-0001X4-QR
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 09:52:16 +0000
Received: from [85.158.143.35:25857] by server-2.bemta-4.messagelabs.com id
	CA/B0-21239-F4980405; Fri, 31 Aug 2012 09:52:15 +0000
X-Env-Sender: jonnyt@abpni.co.uk
X-Msg-Ref: server-11.tower-21.messagelabs.com!1346406723!11652530!1
X-Originating-IP: [109.200.19.114]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20027 invoked from network); 31 Aug 2012 09:52:12 -0000
Received: from edge1.gosport.uk.abpni.net (HELO
	mail1.gosport.uk.corp.abpni.net) (109.200.19.114)
	by server-11.tower-21.messagelabs.com with SMTP;
	31 Aug 2012 09:52:12 -0000
Received: from localhost (mail1.gosport.corp.uk.abpni.net [127.0.0.1])
	by mail1.gosport.uk.corp.abpni.net (Postfix) with ESMTP id 569795A0006
	for <xen-devel@lists.xen.org>; Fri, 31 Aug 2012 10:51:34 +0100 (BST)
X-Virus-Scanned: Debian amavisd-new at mail1.mail.gosport.corp.uk.abpni.net
Received: from mail1.gosport.uk.corp.abpni.net ([127.0.0.1])
	by localhost (mail1.mail.gosport.corp.uk.abpni.net [127.0.0.1])
	(amavisd-new, port 10024)
	with ESMTP id Gu4PH2QOkJYk for <xen-devel@lists.xen.org>;
	Fri, 31 Aug 2012 10:51:34 +0100 (BST)
Received: from mail.abpni.co.uk (unknown [10.87.17.3])
	by mail1.gosport.uk.corp.abpni.net (Postfix) with ESMTPA id F0C045A0005
	for <xen-devel@lists.xen.org>; Fri, 31 Aug 2012 10:51:33 +0100 (BST)
MIME-Version: 1.0
Date: Fri, 31 Aug 2012 10:52:02 +0100
From: Jonathan Tripathy <jonnyt@abpni.co.uk>
To: <xen-devel@lists.xen.org>
Message-ID: <b48b8a512e26a92c313b96b7e5550cde@abpni.co.uk>
X-Sender: jonnyt@abpni.co.uk
User-Agent: Roundcube Webmail/0.6
Subject: [Xen-devel] Dom0 Memory
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Everyone,

I have set dom0_mem=2048M in grub2 and set the dom0 minimum memory to 2048 
in xend-config (and also disabled ballooning). However, when I do "free -m" 
in the Dom0, I only see 1567 MB of total RAM (xentop reports 2048ish).

Is this a known issue? I am aware that this problem occurs in the 
DomUs as well, but never with a discrepancy that big.

The kernel I'm using is 3.2.28. Xen is version 4.1.3 (although I'm 
using a hack in setup.c where I changed the order of the if block to 
give priority to the e820 memory map. I needed to do this as my 
motherboard uses UEFI. Keir made a patch for this for unstable 
recently).

Any help is appreciated.

Thanks

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 09:56:48 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 09:56:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7Nxw-0001ii-1C; Fri, 31 Aug 2012 09:56:40 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T7Nxu-0001iZ-AD
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 09:56:38 +0000
Received: from [85.158.143.35:2357] by server-3.bemta-4.messagelabs.com id
	C8/8B-08232-55A80405; Fri, 31 Aug 2012 09:56:37 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1346406997!13528670!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEyMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5005 invoked from network); 31 Aug 2012 09:56:37 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 09:56:37 -0000
X-IronPort-AV: E=Sophos;i="4.80,346,1344211200"; d="scan'208";a="14284762"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 09:55:55 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1;
	Fri, 31 Aug 2012 10:55:54 +0100
Message-ID: <1346406953.27277.149.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Sander Eikelenboom <linux@eikelenboom.it>
Date: Fri, 31 Aug 2012 10:55:53 +0100
In-Reply-To: <1803123652.20120831114437@eikelenboom.it>
References: <1803123652.20120831114437@eikelenboom.it>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Upgrade to xen-unstable-rc4
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-08-31 at 10:44 +0100, Sander Eikelenboom wrote:
> Hi All,
> 
> It seems my upgrade to rc4 of xen-unstable went pretty smoothly ... only two things I noticed so far:
> 
> - Had to disable startup of xend by hand, perhaps a bit strange if it's deprecated?

I can't for the life of me find the bit of Xen's build+install system
which actually enables the initscripts. My installation notes include an
explicit step to do this, but I build and deploy on different machines
so perhaps I miss out on part of the usual infrastructure.

Are you sure this isn't part of your packaging or surrounding scripts?
Or perhaps just left over rc?.d symlinks from before you upgraded?

> - Messages i haven't seen in dmesg with xen-4.1.3 in combination with 3.6-rc? kernel:
[...ducks...]

> For the rest, all my domains and pci-passthrough seem to be working :-)

Excellent news! Thanks for the report.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 09:58:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 09:58:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7NzK-0001oq-GN; Fri, 31 Aug 2012 09:58:06 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <stefano.panella@citrix.com>) id 1T7NzI-0001oe-Gp
	for xen-devel@lists.xensource.com; Fri, 31 Aug 2012 09:58:04 +0000
Received: from [85.158.139.83:10287] by server-11.bemta-5.messagelabs.com id
	AC/BC-24658-BAA80405; Fri, 31 Aug 2012 09:58:03 +0000
X-Env-Sender: stefano.panella@citrix.com
X-Msg-Ref: server-14.tower-182.messagelabs.com!1346407083!23615827!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEyMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27737 invoked from network); 31 Aug 2012 09:58:03 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-14.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 09:58:03 -0000
X-IronPort-AV: E=Sophos;i="4.80,346,1344211200"; d="scan'208";a="14284817"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 09:58:03 +0000
Received: from stefano-ThinkPad-T520.cam.xci-test.com (10.31.3.233) by
	LONPMAILMX01.citrite.net (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Fri, 31 Aug 2012 10:58:02 +0100
From: Stefano Panella <stefano.panella@citrix.com>
To: <linux-kernel@vger.kernel.org>, <konrad.wilk@oracle.com>,
	<xen-devel@lists.xensource.com>
Date: Fri, 31 Aug 2012 10:57:52 +0100
Message-ID: <1346407072-6405-1-git-send-email-stefano.panella@citrix.com>
X-Mailer: git-send-email 1.7.4.1
MIME-Version: 1.0
Cc: Stefano Panella <stefano.panella@citrix.com>
Subject: [Xen-devel] [PATCH 1/1] XEN: Use correct masking in
	xen_swiotlb_alloc_coherent.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

When running a 32-bit pvops dom0, if a driver tried to allocate coherent
DMA memory, the Xen swiotlb implementation could return memory beyond 4GB.

This caused, for example, broken sound on a system with 4 GB of RAM and a
64-bit capable sound card which sets the DMA mask to 64 bits.

On bare metal, and with the forward-ported xen-dom0 patches from openSUSE,
coherent DMA memory is always allocated inside the 32-bit address range by
calling dma_alloc_coherent_mask.

This patch adds the same functionality to the Xen swiotlb; it is a rebase
of the original patch from Ronny Hegewald, which never went upstream for
some reason.

The original email with the original patch is at:

http://old-list-archives.xen.org/archives/html/xen-devel/2010-02/msg00038.html

The original thread where the discussion started is at:

http://old-list-archives.xen.org/archives/html/xen-devel/2010-01/msg00928.html

Signed-off-by: Ronny Hegewald
Signed-off-by: Stefano Panella <stefano.panella@citrix.com>
---
 drivers/xen/swiotlb-xen.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 1afb4fb..4d51948 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -232,7 +232,7 @@ xen_swiotlb_alloc_coherent(struct device *hwdev, size_t size,
 		return ret;
 
 	if (hwdev && hwdev->coherent_dma_mask)
-		dma_mask = hwdev->coherent_dma_mask;
+		dma_mask = dma_alloc_coherent_mask(hwdev, flags);
 
 	phys = virt_to_phys(ret);
 	dev_addr = xen_phys_to_bus(phys);
-- 
1.7.4.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 09:58:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 09:58:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7NzR-0001pn-Rv; Fri, 31 Aug 2012 09:58:13 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1T7NzQ-0001pY-GW
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 09:58:12 +0000
Received: from [85.158.143.35:43081] by server-2.bemta-4.messagelabs.com id
	29/4B-21239-3BA80405; Fri, 31 Aug 2012 09:58:11 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1346407087!13491717!1
X-Originating-IP: [209.85.210.45]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11163 invoked from network); 31 Aug 2012 09:58:09 -0000
Received: from mail-pz0-f45.google.com (HELO mail-pz0-f45.google.com)
	(209.85.210.45)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 09:58:09 -0000
Received: by dadn15 with SMTP id n15so1739791dad.32
	for <xen-devel@lists.xen.org>; Fri, 31 Aug 2012 02:58:07 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=courxIOtKhoC0uEvqglCyE5PrRO3oSfv8fyYPyVNpxw=;
	b=X6bw/eKPr6/nhMUBkfuKy+Yt0A/fSqiePPUhv+vsKYOKW78luPOSvHSqwmHw3h+p2D
	FrSbiinmMQERnNkCr0fXT+1+F3OOPAgJ1LBZtwoF2XdgiAag5+2dDDnXyYgpHuzr//CR
	q0ThbrKydjHl4W7dJDZnSI12wkMB5dGNPg3jVNMA/J5z3OlJFtCWch+wHT7zRZx3Jorh
	PWwL4kyeuflvjLJAn+hNHhTmP3P8Qh5Qc/zdJX0iHhtADGStuOnodeAQNd5odTCPf/Te
	wmnfxo6DP2EdN+rSygTY2gvgsYAhOd0Xed7R3ComFY9c4guV5yDRZMod5lqobF7+qvDk
	HBDQ==
Received: by 10.66.77.168 with SMTP id t8mr14528714paw.28.1346407087015;
	Fri, 31 Aug 2012 02:58:07 -0700 (PDT)
Received: from [10.0.0.57] ([184.70.138.66])
	by mx.google.com with ESMTPS id ou6sm3123662pbc.9.2012.08.31.02.58.03
	(version=SSLv3 cipher=OTHER); Fri, 31 Aug 2012 02:58:06 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.32.0.111121
Date: Fri, 31 Aug 2012 10:57:59 +0100
From: Keir Fraser <keir.xen@gmail.com>
To: "Li, Jiongxi" <jiongxi.li@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Message-ID: <CC664937.3D6DA%keir.xen@gmail.com>
Thread-Topic: [Xen-devel] [ PATCH 2/2] xen: enable Virtual-interrupt delivery
Thread-Index: Ac2HWp7G2HqBx++QQP2kPrY5SPXyLgABHshJ
In-Reply-To: <D9137FCD9CFF644B965863BCFBEDABB8779942@SHSMSX101.ccr.corp.intel.com>
Mime-version: 1.0
Subject: Re: [Xen-devel] [ PATCH 2/2] xen: enable Virtual-interrupt delivery
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 31/08/2012 10:30, "Li, Jiongxi" <jiongxi.li@intel.com> wrote:

> Virtual interrupt delivery means Xen no longer has to inject vAPIC
> interrupts manually; this is fully taken care of by the hardware. It needs
> some special awareness in the existing interrupt injection path:
> For a pending interrupt from the vLAPIC, instead of direct injection, we
> may need to update architecture-specific indicators before resuming the
> guest. Before returning to the guest, RVI should be updated if there are
> any pending IRRs. The EOI exit bitmap controls whether an EOI write causes
> a VM exit; if set, a trap-like EOI-induced VM exit is triggered. The
> approach here is to manipulate the EOI exit bitmap based on the value of
> the TMR. A level-triggered IRQ requires a hook in the vLAPIC EOI write
> path, so that the vIOAPIC EOI is triggered and emulated

Thanks. A couple of quick comments below. This will need some careful review
from a couple of us, I expect.

 -- Keir

> Signed-off-by: Yang Zhang <yang.z.zhang@intel.com>
> Signed-off-by: Jiongxi Li <jiongxi.li@intel.com>
> 
> diff -r cb821c24ca74 xen/arch/x86/hvm/irq.c
> --- a/xen/arch/x86/hvm/irq.c  Fri Aug 31 09:30:38 2012 +0800
> +++ b/xen/arch/x86/hvm/irq.c         Fri Aug 31 09:49:39 2012 +0800
> @@ -452,7 +452,11 @@ struct hvm_intack hvm_vcpu_ack_pending_i
> 
>  int hvm_local_events_need_delivery(struct vcpu *v)
> {
> -    struct hvm_intack intack = hvm_vcpu_has_pending_irq(v);
> +    struct hvm_intack intack;
> +
> +    pt_update_irq(v);

Why would this change be needed for vAPIC?

> +    intack = hvm_vcpu_has_pending_irq(v);
> 
>      if ( likely(intack.source == hvm_intsrc_none) )
>          return 0;
> diff -r cb821c24ca74 xen/arch/x86/hvm/vlapic.c
> --- a/xen/arch/x86/hvm/vlapic.c      Fri Aug 31 09:30:38 2012 +0800
> +++ b/xen/arch/x86/hvm/vlapic.c   Fri Aug 31 09:49:39 2012 +0800
> @@ -143,7 +143,16 @@ static int vlapic_find_highest_irr(struc
> int vlapic_set_irq(struct vlapic *vlapic, uint8_t vec, uint8_t trig)
> {
>      if ( trig )
> +    {
>          vlapic_set_vector(vec, &vlapic->regs->data[APIC_TMR]);
> +        if ( cpu_has_vmx_virtual_intr_delivery )
> +            vmx_set_eoi_exit_bitmap(vlapic_vcpu(vlapic), vec);
> +    }
> +    else
> +    {
> +        if ( cpu_has_vmx_virtual_intr_delivery )
> +            vmx_clear_eoi_exit_bitmap(vlapic_vcpu(vlapic), vec);
> +    }
> 
>      /* We may need to wake up target vcpu, besides set pending bit here */
>      return !vlapic_test_and_set_irr(vec, vlapic);
> @@ -410,6 +419,22 @@ void vlapic_EOI_set(struct vlapic *vlapi
>      hvm_dpci_msi_eoi(current->domain, vector);
> }
> 
> +/*
> + * When "Virtual Interrupt Delivery" is enabled, this function is used
> + * to handle EOI-induced VM exit
> + */
> +void vlapic_handle_EOI_induced_exit(struct vlapic *vlapic, int vector)
> +{
> +    ASSERT(cpu_has_vmx_virtual_intr_delivery);
> +
> +    if ( vlapic_test_and_clear_vector(vector, &vlapic->regs->data[APIC_TMR])
> )
> +    {
> +        vioapic_update_EOI(vlapic_domain(vlapic), vector);
> +    }

No need for braces for single-line statement.

> +    hvm_dpci_msi_eoi(current->domain, vector);
> +}
> +
> int vlapic_ipi(
>      struct vlapic *vlapic, uint32_t icr_low, uint32_t icr_high)
> {
> @@ -1014,6 +1039,9 @@ int vlapic_has_pending_irq(struct vcpu *
>      if ( irr == -1 )
>          return -1;
> 
> +    if ( cpu_has_vmx_virtual_intr_delivery )
> +        return irr;

Why is it correct to ignore ISR here? I guess vAPIC deals with interrupt
window automatically, maybe?

>      isr = vlapic_find_highest_isr(vlapic);
>      isr = (isr != -1) ? isr : 0;
>      if ( (isr & 0xf0) >= (irr & 0xf0) )
> @@ -1026,6 +1054,9 @@ int vlapic_ack_pending_irq(struct vcpu *
> {
>      struct vlapic *vlapic = vcpu_vlapic(v);
> 
> +    if ( cpu_has_vmx_virtual_intr_delivery )
> +        return 1;
> +
>     vlapic_set_vector(vector, &vlapic->regs->data[APIC_ISR]);
>      vlapic_clear_irr(vector, vlapic);
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 09:58:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 09:58:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7NzR-0001pn-Rv; Fri, 31 Aug 2012 09:58:13 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1T7NzQ-0001pY-GW
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 09:58:12 +0000
Received: from [85.158.143.35:43081] by server-2.bemta-4.messagelabs.com id
	29/4B-21239-3BA80405; Fri, 31 Aug 2012 09:58:11 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1346407087!13491717!1
X-Originating-IP: [209.85.210.45]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11163 invoked from network); 31 Aug 2012 09:58:09 -0000
Received: from mail-pz0-f45.google.com (HELO mail-pz0-f45.google.com)
	(209.85.210.45)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 09:58:09 -0000
Received: by dadn15 with SMTP id n15so1739791dad.32
	for <xen-devel@lists.xen.org>; Fri, 31 Aug 2012 02:58:07 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=courxIOtKhoC0uEvqglCyE5PrRO3oSfv8fyYPyVNpxw=;
	b=X6bw/eKPr6/nhMUBkfuKy+Yt0A/fSqiePPUhv+vsKYOKW78luPOSvHSqwmHw3h+p2D
	FrSbiinmMQERnNkCr0fXT+1+F3OOPAgJ1LBZtwoF2XdgiAag5+2dDDnXyYgpHuzr//CR
	q0ThbrKydjHl4W7dJDZnSI12wkMB5dGNPg3jVNMA/J5z3OlJFtCWch+wHT7zRZx3Jorh
	PWwL4kyeuflvjLJAn+hNHhTmP3P8Qh5Qc/zdJX0iHhtADGStuOnodeAQNd5odTCPf/Te
	wmnfxo6DP2EdN+rSygTY2gvgsYAhOd0Xed7R3ComFY9c4guV5yDRZMod5lqobF7+qvDk
	HBDQ==
Received: by 10.66.77.168 with SMTP id t8mr14528714paw.28.1346407087015;
	Fri, 31 Aug 2012 02:58:07 -0700 (PDT)
Received: from [10.0.0.57] ([184.70.138.66])
	by mx.google.com with ESMTPS id ou6sm3123662pbc.9.2012.08.31.02.58.03
	(version=SSLv3 cipher=OTHER); Fri, 31 Aug 2012 02:58:06 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.32.0.111121
Date: Fri, 31 Aug 2012 10:57:59 +0100
From: Keir Fraser <keir.xen@gmail.com>
To: "Li, Jiongxi" <jiongxi.li@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Message-ID: <CC664937.3D6DA%keir.xen@gmail.com>
Thread-Topic: [Xen-devel] [ PATCH 2/2] xen: enable Virtual-interrupt delivery
Thread-Index: Ac2HWp7G2HqBx++QQP2kPrY5SPXyLgABHshJ
In-Reply-To: <D9137FCD9CFF644B965863BCFBEDABB8779942@SHSMSX101.ccr.corp.intel.com>
Mime-version: 1.0
Subject: Re: [Xen-devel] [ PATCH 2/2] xen: enable Virtual-interrupt delivery
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 31/08/2012 10:30, "Li, Jiongxi" <jiongxi.li@intel.com> wrote:

> Virtual interrupt delivery avoids Xen to inject vAPIC interrupts manually,
> which is fully taken care of by the hardware. This needs some special
> awareness into existing interrupr injection path:
> For pending interrupt from vLAPIC, instead of direct injection, we may need
> update architecture specific indicators before resuming to guest.
> Before returning to guest, RVI should be updated if any pending IRRs
> EOI exit bitmap controls whether an EOI write should cause VM-Exit. If set, a
> trap-like induced EOI VM-Exit is triggered. The approach here is to manipulate
> EOI exit bitmap based on value of TMR. Level triggered irq requires a hook in
> vLAPIC EOI write, so that vIOAPIC EOI is triggered and emulated

Thanks. A couple of quick comments below. This will need some careful review
from a couple of us, I expect.

 -- Keir

> Signed-off-by: Yang Zhang <yang.z.zhang@intel.com>
> Signed-off-by: Jiongxi Li <jiongxi.li@intel.com>
> 
> diff -r cb821c24ca74 xen/arch/x86/hvm/irq.c
> --- a/xen/arch/x86/hvm/irq.c  Fri Aug 31 09:30:38 2012 +0800
> +++ b/xen/arch/x86/hvm/irq.c         Fri Aug 31 09:49:39 2012 +0800
> @@ -452,7 +452,11 @@ struct hvm_intack hvm_vcpu_ack_pending_i
> 
>  int hvm_local_events_need_delivery(struct vcpu *v)
> {
> -    struct hvm_intack intack = hvm_vcpu_has_pending_irq(v);
> +    struct hvm_intack intack;
> +
> +    pt_update_irq(v);

Why would this change be needed for vAPIC?

> +    intack = hvm_vcpu_has_pending_irq(v);
> 
>      if ( likely(intack.source == hvm_intsrc_none) )
>          return 0;
> diff -r cb821c24ca74 xen/arch/x86/hvm/vlapic.c
> --- a/xen/arch/x86/hvm/vlapic.c      Fri Aug 31 09:30:38 2012 +0800
> +++ b/xen/arch/x86/hvm/vlapic.c   Fri Aug 31 09:49:39 2012 +0800
> @@ -143,7 +143,16 @@ static int vlapic_find_highest_irr(struc
> int vlapic_set_irq(struct vlapic *vlapic, uint8_t vec, uint8_t trig)
> {
>      if ( trig )
> +    {
>          vlapic_set_vector(vec, &vlapic->regs->data[APIC_TMR]);
> +        if ( cpu_has_vmx_virtual_intr_delivery )
> +            vmx_set_eoi_exit_bitmap(vlapic_vcpu(vlapic), vec);
> +    }
> +    else
> +    {
> +        if ( cpu_has_vmx_virtual_intr_delivery )
> +            vmx_clear_eoi_exit_bitmap(vlapic_vcpu(vlapic), vec);
> +    }
> 
>      /* We may need to wake up target vcpu, besides set pending bit here */
>      return !vlapic_test_and_set_irr(vec, vlapic);
> @@ -410,6 +419,22 @@ void vlapic_EOI_set(struct vlapic *vlapi
>      hvm_dpci_msi_eoi(current->domain, vector);
> }
> 
> +/*
> + * When "Virtual Interrupt Delivery" is enabled, this function is used
> + * to handle EOI-induced VM exit
> + */
> +void vlapic_handle_EOI_induced_exit(struct vlapic *vlapic, int vector)
> +{
> +    ASSERT(cpu_has_vmx_virtual_intr_delivery);
> +
> +    if ( vlapic_test_and_clear_vector(vector, &vlapic->regs->data[APIC_TMR]) )
> +    {
> +        vioapic_update_EOI(vlapic_domain(vlapic), vector);
> +    }

No need for braces around a single-line statement.

> +    hvm_dpci_msi_eoi(current->domain, vector);
> +}
> +
> int vlapic_ipi(
>      struct vlapic *vlapic, uint32_t icr_low, uint32_t icr_high)
> {
> @@ -1014,6 +1039,9 @@ int vlapic_has_pending_irq(struct vcpu *
>      if ( irr == -1 )
>          return -1;
> 
> +    if ( cpu_has_vmx_virtual_intr_delivery )
> +        return irr;

Why is it correct to ignore ISR here? I guess vAPIC deals with interrupt
window automatically, maybe?

>      isr = vlapic_find_highest_isr(vlapic);
>      isr = (isr != -1) ? isr : 0;
>      if ( (isr & 0xf0) >= (irr & 0xf0) )
> @@ -1026,6 +1054,9 @@ int vlapic_ack_pending_irq(struct vcpu *
> {
>      struct vlapic *vlapic = vcpu_vlapic(v);
> 
> +    if ( cpu_has_vmx_virtual_intr_delivery )
> +        return 1;
> +
>     vlapic_set_vector(vector, &vlapic->regs->data[APIC_ISR]);
>      vlapic_clear_irr(vector, vlapic);
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 10:02:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 10:02:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7O2z-0002CP-GN; Fri, 31 Aug 2012 10:01:53 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T7O2y-0002CE-3G
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 10:01:52 +0000
Received: from [85.158.138.51:50119] by server-12.bemta-3.messagelabs.com id
	2D/AB-10384-F8B80405; Fri, 31 Aug 2012 10:01:51 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-174.messagelabs.com!1346407310!21529169!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEyMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14879 invoked from network); 31 Aug 2012 10:01:50 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-14.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 10:01:50 -0000
X-IronPort-AV: E=Sophos;i="4.80,346,1344211200"; d="scan'208";a="14284896"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 10:01:04 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1;
	Fri, 31 Aug 2012 11:01:04 +0100
Message-ID: <1346407262.27277.151.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Dmitry Ivanov <vonami@gmail.com>
Date: Fri, 31 Aug 2012 11:01:02 +0100
In-Reply-To: <CALaHpuvSpbp-u=o6VxXqxfCVHQSwLM1y_hnZFGQoE+nRx2erTg@mail.gmail.com>
References: <CALaHputr63fjY6DOiNSx5SRS6dsb4v0dZzhypSeb2M8hQmhH-g@mail.gmail.com>
	<1346403318.27277.118.camel@zakaz.uk.xensource.com>
	<CALaHpuvSpbp-u=o6VxXqxfCVHQSwLM1y_hnZFGQoE+nRx2erTg@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-users@lists.xen.org" <xen-users@lists.xen.org>,
	Roger Pau Monne <roger.pau@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Xen-users] Failed to build xen-unstable with
 --disable-pythontools
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-08-31 at 10:28 +0100, Dmitry Ivanov wrote:
> On Fri, Aug 31, 2012 at 12:55 PM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> >
> > In the meantime I think the answer is "don't do that then". That may
> > well also turn out to be the answer for the 4.2.0 release at this point.
> >
> 
> Thanks for the explanation. I'm cross-compiling xen for an embedded
> system so I'd like to build a minimal dom0 system.

Patches for 4.3 gratefully received ;-)

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 10:13:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 10:13:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7OEJ-0002rC-Gj; Fri, 31 Aug 2012 10:13:35 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T7OEI-0002r6-Oq
	for Xen-devel@lists.xensource.com; Fri, 31 Aug 2012 10:13:34 +0000
Received: from [85.158.143.35:9496] by server-3.bemta-4.messagelabs.com id
	A8/6D-08232-E4E80405; Fri, 31 Aug 2012 10:13:34 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1346408013!14251084!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEyMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7311 invoked from network); 31 Aug 2012 10:13:33 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 10:13:33 -0000
X-IronPort-AV: E=Sophos;i="4.80,346,1344211200"; d="scan'208";a="14285140"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 10:13:32 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1;
	Fri, 31 Aug 2012 11:13:32 +0100
Message-ID: <1346408011.27277.160.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Matt Wilson <msw@amazon.com>
Date: Fri, 31 Aug 2012 11:13:31 +0100
In-Reply-To: <20120830213230.GA32155@u002268147cd4502c336d.ant.amazon.com>
References: <20120830112323.5086d73c@mantra.us.oracle.com>
	<20120830213230.GA32155@u002268147cd4502c336d.ant.amazon.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [RFC PATCH 1/2]: hypervisor debugger
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-30 at 22:32 +0100, Matt Wilson wrote:
> 
> Also, it'd be nice to use "diff -p" to show the function names as part
> of the diff. There are some big hunks below that I think should be
> evaluated later, after the main entry points are reviewed. 
If you are using Mercurial, then adding:
	[diff]
	showfunc = True
to your ~/.hgrc makes this the default.




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 10:35:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 10:35:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7OZE-00036Q-HC; Fri, 31 Aug 2012 10:35:12 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T7OZD-00036L-8I
	for Xen-devel@lists.xensource.com; Fri, 31 Aug 2012 10:35:11 +0000
Received: from [85.158.143.35:46435] by server-2.bemta-4.messagelabs.com id
	81/AD-21239-E5390405; Fri, 31 Aug 2012 10:35:10 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1346409308!14689731!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEyMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16018 invoked from network); 31 Aug 2012 10:35:09 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 10:35:09 -0000
X-IronPort-AV: E=Sophos;i="4.80,346,1344211200"; d="scan'208";a="14285758"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 10:35:08 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1;
	Fri, 31 Aug 2012 11:35:08 +0100
Message-ID: <1346409306.27277.175.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Mukesh Rathor <mukesh.rathor@oracle.com>
Date: Fri, 31 Aug 2012 11:35:06 +0100
In-Reply-To: <20120830112323.5086d73c@mantra.us.oracle.com>
References: <20120830112323.5086d73c@mantra.us.oracle.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [RFC PATCH 1/2]: hypervisor debugger
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> diff -r 32034d1914a6 xen/Rules.mk
> --- a/xen/Rules.mk      Thu Jun 07 19:46:57 2012 +0100
> +++ b/xen/Rules.mk      Wed Aug 29 14:39:57 2012 -0700
> @@ -53,6 +55,7 @@
>  CFLAGS-$(HAS_ACPI)      += -DHAS_ACPI
>  CFLAGS-$(HAS_PASSTHROUGH) += -DHAS_PASSTHROUGH
>  CFLAGS-$(frame_pointer) += -fno-omit-frame-pointer -DCONFIG_FRAME_POINTER
> +CFLAGS-$(kdb)           += -DXEN_KDB_CONFIG

Pretty much everywhere else uses CONFIG_FOO not FOO_CONFIG. Also isn't
XEN rather redundant in the context?

> 
>  ifneq ($(max_phys_cpus),)
>  CFLAGS-y                += -DMAX_PHYS_CPUS=$(max_phys_cpus)
> diff -r 32034d1914a6 xen/arch/x86/hvm/svm/entry.S
> --- a/xen/arch/x86/hvm/svm/entry.S      Thu Jun 07 19:46:57 2012 +0100
> +++ b/xen/arch/x86/hvm/svm/entry.S      Wed Aug 29 14:39:57 2012 -0700
> @@ -59,12 +59,23 @@
>          get_current(bx)
>          CLGI
> 
> +#ifdef XEN_KDB_CONFIG
> +#if defined(__x86_64__)
> +        testl $1, kdb_session_begun(%rip)
> +#else
> +        testl $1, kdb_session_begun
> +#endif

This inner #ifdef is what the addr_of macro does.

> +        jnz  .Lkdb_skip_softirq
> +#endif
>          mov  VCPU_processor(r(bx)),%eax
>          shl  $IRQSTAT_shift,r(ax)
>          lea  addr_of(irq_stat),r(dx)
>          testl $~0,(r(dx),r(ax),1)
>          jnz  .Lsvm_process_softirqs
> 
> +#ifdef XEN_KDB_CONFIG
> +.Lkdb_skip_softirq:
> +#endif

Does gas complain about unused labels? IOW can you omit the ifdef here?

Why does kdb skip soft irqs? I presume the final submission will include
a commit log which will explain this sort of thing?

>          testb $0, VCPU_nsvm_hap_enabled(r(bx))
>  UNLIKELY_START(nz, nsvm_hap)
>          mov  VCPU_nhvm_p2m(r(bx)),r(ax)
> diff -r 32034d1914a6 xen/arch/x86/hvm/svm/svm.c
> --- a/xen/arch/x86/hvm/svm/svm.c        Thu Jun 07 19:46:57 2012 +0100
> +++ b/xen/arch/x86/hvm/svm/svm.c        Wed Aug 29 14:39:57 2012 -0700
> @@ -2170,6 +2170,10 @@
>          break;
> 
>      case VMEXIT_EXCEPTION_DB:
> +#ifdef XEN_KDB_CONFIG
> +        if (kdb_handle_trap_entry(TRAP_debug, regs))
> +           break;

Defining kdb_handle_trap_entry() as a no-op when !KDB might make this and
the following similar examples tidier.
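Something along these lines (an illustrative sketch only; the real kdb prototype and return convention in the patch may differ, and `cpu_user_regs` is treated as opaque here) would let every call site drop its #ifdef:

```c
#include <assert.h>

struct cpu_user_regs;  /* opaque stand-in for the real Xen type */

/* Sketch of the stub pattern: when the debugger is compiled out, the
 * hook collapses to a constant 0, so callers need no conditional
 * compilation. */
#ifdef XEN_KDB_CONFIG
int kdb_handle_trap_entry(int vector, struct cpu_user_regs *regs);
#else
static inline int kdb_handle_trap_entry(int vector,
                                        struct cpu_user_regs *regs)
{
    (void)vector; (void)regs;
    return 0;  /* debugger not built in: never consumes the trap */
}
#endif

/* A call site then reads unconditionally: */
static int demo_db_exit(struct cpu_user_regs *regs)
{
    if ( kdb_handle_trap_entry(/* TRAP_debug */ 1, regs) )
        return 1;  /* consumed by the debugger */
    return 0;      /* normal handling continues */
}
```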

> +#endif
>          if ( !v->domain->debugger_attached )
>              goto exit_and_crash;
>          domain_pause_for_debugger();
> @@ -2182,6 +2186,10 @@
>          if ( (inst_len = __get_instruction_length(v, INSTR_INT3)) == 0 )
>              break;
>          __update_guest_eip(regs, inst_len);
> +#ifdef XEN_KDB_CONFIG
> +        if (kdb_handle_trap_entry(TRAP_int3, regs))
> +            break;
> +#endif
>          current->arch.gdbsx_vcpu_event = TRAP_int3;
>          domain_pause_for_debugger();
>          break;
> diff -r 32034d1914a6 xen/arch/x86/hvm/svm/vmcb.c
> --- a/xen/arch/x86/hvm/svm/vmcb.c       Thu Jun 07 19:46:57 2012 +0100
> +++ b/xen/arch/x86/hvm/svm/vmcb.c       Wed Aug 29 14:39:57 2012 -0700
> @@ -315,6 +315,36 @@
>      register_keyhandler('v', &vmcb_dump_keyhandler);
>  }
> 
> +#if defined(XEN_KDB_CONFIG)
> +/* did == 0 : display for all HVM domains. domid 0 is never HVM.

Ahem, I think you have a patch which kind of changes that ;-)

(I notice you have _or_hyb_domain below so I guess you've thought of
it ;-). Perhaps you could use DOMID_INVALID or something as your flag?)

> + * vid == -1 : display for all HVM VCPUs
> + */
> +void kdb_dump_vmcb(domid_t did, int vid)
> +{
> +    struct domain *dp;
> +    struct vcpu *vp;
> +
> +    rcu_read_lock(&domlist_read_lock);
> +    for_each_domain (dp) {
> +        if (!is_hvm_or_hyb_domain(dp) || dp->is_dying)
> +            continue;
> +        if (did != 0 && did != dp->domain_id)
> +            continue;
> +
> +        for_each_vcpu (dp, vp) {
> +            if (vid != -1 && vid != vp->vcpu_id)
> +                continue;
> +
> +            kdbp("  VMCB [domid: %d  vcpu:%d]:\n", dp->domain_id, vp->vcpu_id);
> +            svm_vmcb_dump("kdb", vp->arch.hvm_svm.vmcb);
> +            kdbp("\n");
> +        }
> +        kdbp("\n");
> +    }
> +    rcu_read_unlock(&domlist_read_lock);
> +}
> +#endif
> +
>  /*
>   * Local variables:
>   * mode: C
> diff -r 32034d1914a6 xen/arch/x86/hvm/vmx/entry.S
> --- a/xen/arch/x86/hvm/vmx/entry.S      Thu Jun 07 19:46:57 2012 +0100
> +++ b/xen/arch/x86/hvm/vmx/entry.S      Wed Aug 29 14:39:57 2012 -0700
> @@ -124,12 +124,23 @@
>          get_current(bx)
>          cli
> 
> +#ifdef XEN_KDB_CONFIG
> +#if defined(__x86_64__)
> +        testl $1, kdb_session_begun(%rip)
> +#else
> +        testl $1, kdb_session_begun
> +#endif
> +        jnz  .Lkdb_skip_softirq
> +#endif
>          mov  VCPU_processor(r(bx)),%eax
>          shl  $IRQSTAT_shift,r(ax)
>          lea  addr_of(irq_stat),r(dx)
>          cmpl $0,(r(dx),r(ax),1)
>          jnz  .Lvmx_process_softirqs
> 
> +#ifdef XEN_KDB_CONFIG
> +.Lkdb_skip_softirq:
> +#endif

Same comments here as to the svm version.

>          testb $0xff,VCPU_vmx_emulate(r(bx))
>          jnz .Lvmx_goto_emulator
>          testb $0xff,VCPU_vmx_realmode(r(bx))
> diff -r 32034d1914a6 xen/arch/x86/hvm/vmx/vmcs.c
> --- a/xen/arch/x86/hvm/vmx/vmcs.c       Thu Jun 07 19:46:57 2012 +0100
> +++ b/xen/arch/x86/hvm/vmx/vmcs.c       Wed Aug 29 14:39:57 2012 -0700
> @@ -1117,6 +1117,13 @@
>          hvm_asid_flush_vcpu(v);
>      }
> 
> +#if defined(XEN_KDB_CONFIG)
> +    if (kdb_dr7)
> +        __vmwrite(GUEST_DR7, kdb_dr7);
> +    else
> +        __vmwrite(GUEST_DR7, 0);

Isn't this the same as
	__vmwrite(GUEST_DR7, kdb_dr7) ?

Why does kdb maintain its own global here instead of manipulating the
guest dr7 values in the state?

> +#endif
> +
>      debug_state = v->domain->debugger_attached
>                    || v->domain->arch.hvm_domain.params[HVM_PARAM_MEMORY_EVENT_INT3]
>                    || v->domain->arch.hvm_domain.params[HVM_PARAM_MEMORY_EVENT_SINGLE_STEP];
> @@ -1326,6 +1333,220 @@
>      register_keyhandler('v', &vmcs_dump_keyhandler);
>  }
> 
> +#if defined(XEN_KDB_CONFIG)
> +#define GUEST_EFER      0x2806   /* see page 23-20 */
> +#define GUEST_EFER_HIGH 0x2807   /* see page 23-20 */

23-20 of what?

These belong with the other GUEST_* VMCS offset definitions in vmcs.h

> +
> +/* it's a shame we can't use vmcs_dump_vcpu(), but it does vmx_vmcs_enter which
> + * will IPI other CPUs. also, print a subset relevant to software debugging */
> +static void noinline kdb_print_vmcs(struct vcpu *vp)
> +{
> +    struct cpu_user_regs *regs = &vp->arch.user_regs;
> +    unsigned long long x;
> +
> +    kdbp("*** Guest State ***\n");
> +    kdbp("CR0: actual=0x%016llx, shadow=0x%016llx, gh_mask=%016llx\n",
> +         (unsigned long long)vmr(GUEST_CR0),

vmr returns an unsigned long, why the mismatched format string+ cast?

This seems to duplicate a fairly large chunk of vmcs_dump_vcpu. If kdbp
and printk can't co-exist then perhaps make a common function which
takes a printer as a function pointer?

Same goes for the SVM version I think I saw before.
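The shared-dump idea could look roughly like this (illustrative only: the field names, accessors, and `dump_fn_t` type are stand-ins, not the real Xen API; `kdbp` and `printk` would be the two backends passed in):

```c
#include <assert.h>
#include <stdarg.h>
#include <stdio.h>
#include <string.h>

/* One dump body parameterised by a printf-style callback, so kdbp and
 * printk callers can share it instead of duplicating the routine. */
typedef void (*dump_fn_t)(const char *fmt, ...);

static void dump_guest_state(dump_fn_t pr, unsigned long cr0)
{
    pr("*** Guest State ***\n");
    pr("CR0: actual=0x%016lx\n", cr0);
}

/* A capture backend standing in for kdbp/printk in this demo: */
static char demo_buf[512];
static void demo_printf(const char *fmt, ...)
{
    va_list ap;
    size_t len = strlen(demo_buf);
    va_start(ap, fmt);
    vsnprintf(demo_buf + len, sizeof(demo_buf) - len, fmt, ap);
    va_end(ap);
}
```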

> +/* Flush VMCS on this cpu if it needs to:
> + *   - Upon leaving kdb, the HVM cpu will resume in vmx_vmexit_handler() and
> + *     do __vmreads. So, the VMCS pointer can't be left cleared.
> + *   - Doing __vmpclear will set the vmx state to 'clear', so to resume a
> + *     vmlaunch must be done and not vmresume. This means, we must clear
> + *     arch_vmx->launched.
> + */
> +void kdb_curr_cpu_flush_vmcs(void)

Is this function actually kdb specific? Other than the printing it looks
like a pretty generic helper to me.

> +{
> +    struct domain *dp;
> +    struct vcpu *vp;
> +    int ccpu = smp_processor_id();
> +    struct vmcs_struct *cvp = this_cpu(current_vmcs);
> +
> +    if (this_cpu(current_vmcs) == NULL)
> +        return;             /* no HVM active on this CPU */
> +
> +    kdbp("KDB:[%d] curvmcs:%lx/%lx\n", ccpu, cvp, virt_to_maddr(cvp));
> +
> +    /* looks like we got one. unfortunately, current_vmcs points to vmcs
> +     * and not VCPU, so we gotta search the entire list... */
> +    for_each_domain (dp) {
> +        if ( !(is_hvm_or_hyb_domain(dp)) || dp->is_dying)
> +            continue;
> +        for_each_vcpu (dp, vp) {
> +            if ( vp->arch.hvm_vmx.vmcs == cvp ) {
> +                __vmpclear(virt_to_maddr(vp->arch.hvm_vmx.vmcs));
> +                __vmptrld(virt_to_maddr(vp->arch.hvm_vmx.vmcs));
> +                vp->arch.hvm_vmx.launched = 0;
> +                this_cpu(current_vmcs) = NULL;
> +                kdbp("KDB:[%d] %d:%d current_vmcs:%lx flushed\n",
> +                    ccpu, dp->domain_id, vp->vcpu_id, cvp, virt_to_maddr(cvp));
> +            }
> +        }
> +    }
> +}
> +
> +/*
> + * domid == 0 : display for all HVM domains  (dom0 is never an HVM domain)

Not any more...

> + * vcpu id == -1 : display all vcpuids
> + * PreCondition: all HVM cpus (including current cpu) have flushed VMCS
> + */
> +void kdb_dump_vmcs(domid_t did, int vid)
> +{
> +    struct domain *dp;
> +    struct vcpu *vp;
> +    struct vmcs_struct  *vmcsp;
> +    u64 addr = -1;
> +
> +    ASSERT(!local_irq_is_enabled());     /* kdb should always run disabled */
> +    __vmptrst(&addr);
> +
> +    for_each_domain (dp) {
> +        if ( !(is_hvm_or_hyb_domain(dp)) || dp->is_dying)
> +            continue;
> +        if (did != 0 && did != dp->domain_id)
> +            continue;
> +
> +        for_each_vcpu (dp, vp) {
> +            if (vid != -1 && vid != vp->vcpu_id)
> +                continue;

A lot of this scaffolding is the same as for svm. Perhaps pull it up
into a common hvm function with only the inner arch-specific bit in a
per-VMX/SVM function?
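The split I have in mind is roughly the following (a sketch with simplified stand-in structures, not the real Xen domain/vcpu lists or locking): the did==0 / vid==-1 wildcard filtering lives in one generic walker and the VMX- or SVM-specific dump is supplied as a callback.

```c
#include <assert.h>

struct demo_vcpu { int vcpu_id; };
struct demo_domain { int domain_id; int nvcpus; struct demo_vcpu *vcpus; };

typedef void (*dump_vcpu_fn)(struct demo_domain *d, struct demo_vcpu *v);

static int visited;
static void count_dump(struct demo_domain *d, struct demo_vcpu *v)
{
    (void)d; (void)v;
    visited++;  /* a real callback would dump the VMCS/VMCB here */
}

/* Common walker: did == 0 means all domains, vid == -1 means all vcpus,
 * matching the wildcard convention in the patch. */
static void hvm_dump_walk(struct demo_domain *doms, int ndoms,
                          int did, int vid, dump_vcpu_fn dump)
{
    for ( int i = 0; i < ndoms; i++ )
    {
        if ( did != 0 && did != doms[i].domain_id )
            continue;
        for ( int j = 0; j < doms[i].nvcpus; j++ )
        {
            if ( vid != -1 && vid != doms[i].vcpus[j].vcpu_id )
                continue;
            dump(&doms[i], &doms[i].vcpus[j]);
        }
    }
}
```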

> +
> +           vmcsp = vp->arch.hvm_vmx.vmcs;
> +            kdbp("VMCS %lx/%lx [domid:%d (%p)  vcpu:%d (%p)]:\n", vmcsp,
> +                virt_to_maddr(vmcsp), dp->domain_id, dp, vp->vcpu_id, vp);
> +            __vmptrld(virt_to_maddr(vmcsp));
> +            kdb_print_vmcs(vp);
> +            __vmpclear(virt_to_maddr(vmcsp));
> +            vp->arch.hvm_vmx.launched = 0;
> +        }
> +        kdbp("\n");
> +    }
> +    /* restore orig vmcs pointer for __vmreads in vmx_vmexit_handler() */
> +    if (addr && addr != (u64)-1)
> +        __vmptrld(addr);
> +}
> +#endif
> 
>  /*
>   * Local variables:
> diff -r 32034d1914a6 xen/arch/x86/hvm/vmx/vmx.c
> --- a/xen/arch/x86/hvm/vmx/vmx.c        Thu Jun 07 19:46:57 2012 +0100
> +++ b/xen/arch/x86/hvm/vmx/vmx.c        Wed Aug 29 14:39:57 2012 -0700
> @@ -2183,11 +2183,14 @@
>          printk("reason not known yet!");
>          break;
>      }
> -
> +#if defined(XEN_KDB_CONFIG)
> +    kdbp("\n************* VMCS Area **************\n");
> +    kdb_dump_vmcs(curr->domain->domain_id, (curr)->vcpu_id);
> +#else
>      printk("************* VMCS Area **************\n");
>      vmcs_dump_vcpu(curr);

Is kdb_dump_vmcs better than/different to vmcs_dump_vcpu in this
context?
>      printk("**************************************\n");
> -
> +#endif
>      domain_crash(curr->domain);
>  }
> 
> @@ -2415,6 +2418,12 @@
>              write_debugreg(6, exit_qualification | 0xffff0ff0);
>              if ( !v->domain->debugger_attached || cpu_has_monitor_trap_flag )
>                  goto exit_and_crash;
> +
> +#if defined(XEN_KDB_CONFIG)
> +            /* TRAP_debug: IP points correctly to next instr */
> +            if (kdb_handle_trap_entry(vector, regs))
> +                break;
> +#endif
>              domain_pause_for_debugger();
>              break;
>          case TRAP_int3:
> @@ -2423,6 +2432,13 @@
>              if ( v->domain->debugger_attached )
>              {
>                  update_guest_eip(); /* Safe: INT3 */
> +#if defined(XEN_KDB_CONFIG)
> +                /* vmcs.IP points to bp, but kdb expects bp+1. Hence this
> +                 * sits after the above update_guest_eip(), which moves IP
> +                 * to bp+1. Works for gdbsx too.
> +                 */
> +                if (kdb_handle_trap_entry(vector, regs))
> +                    break;
> +#endif
>                  current->arch.gdbsx_vcpu_event = TRAP_int3;
>                  domain_pause_for_debugger();
>                  break;
> @@ -2707,6 +2723,10 @@
>      case EXIT_REASON_MONITOR_TRAP_FLAG:
>          v->arch.hvm_vmx.exec_control &= ~CPU_BASED_MONITOR_TRAP_FLAG;
>          vmx_update_cpu_exec_control(v);
> +#if defined(XEN_KDB_CONFIG)
> +        if (kdb_handle_trap_entry(TRAP_debug, regs))
> +            break;
> +#endif
>          if ( v->arch.hvm_vcpu.single_step ) {
>            hvm_memory_event_single_step(regs->eip);
>            if ( v->domain->debugger_attached )
> diff -r 32034d1914a6 xen/arch/x86/irq.c
> --- a/xen/arch/x86/irq.c        Thu Jun 07 19:46:57 2012 +0100
> +++ b/xen/arch/x86/irq.c        Wed Aug 29 14:39:57 2012 -0700
> @@ -2305,3 +2305,29 @@
>      return is_hvm_domain(d) && pirq &&
>             pirq->arch.hvm.emuirq != IRQ_UNBOUND;
>  }
> +
> +#ifdef XEN_KDB_CONFIG
> +void kdb_prnt_guest_mapped_irqs(void)
> +{
> +    int irq, j;
> +    char affstr[NR_CPUS/4+NR_CPUS/32+2];    /* courtesy dump_irqs() */

Can this share some code with dump_irqs then?

I don't see this construct there though -- but that magic expression
could use some explanation.

> +
> +    kdbp("irq  vec  aff  type  domid:mapped-pirq pairs  (all in decimal)\n");
> +    for (irq=0; irq < nr_irqs; irq++) {
> +        irq_desc_t  *dp = irq_to_desc(irq);
> +        struct arch_irq_desc *archp = &dp->arch;
> +        irq_guest_action_t *actp = (irq_guest_action_t *)dp->action;
> +
> +        if (!dp->handler ||dp->handler==&no_irq_type || !(dp->status&IRQ_GUEST))
> +            continue;
> +
> +        cpumask_scnprintf(affstr, sizeof(affstr), dp->affinity);
> +        kdbp("[%3ld] %3d %3s %-13s ", irq, archp->vector, affstr,
> +             dp->handler->typename);
> +        for (j=0; j < actp->nr_guests; j++)
> +            kdbp("%03d:%04d ", actp->guest[j]->domain_id,
> +                 domain_irq_to_pirq(actp->guest[j], irq));
> +        kdbp("\n");
> +    }
> +}
> +#endif

> diff -r 32034d1914a6 xen/arch/x86/smp.c
> --- a/xen/arch/x86/smp.c        Thu Jun 07 19:46:57 2012 +0100
> +++ b/xen/arch/x86/smp.c        Wed Aug 29 14:39:57 2012 -0700
> @@ -273,7 +273,7 @@
>   * Structure and data for smp_call_function()/on_selected_cpus().
>   */
> 
> -static void __smp_call_function_interrupt(void);
> +static void __smp_call_function_interrupt(struct cpu_user_regs *regs);

I think you can just use get_irq_regs in the functions which need it
instead of adding this to every irq path?

>  static DEFINE_SPINLOCK(call_lock);
>  static struct call_data_struct {
>      void (*func) (void *info);


> diff -r 32034d1914a6 xen/common/sched_credit.c
> --- a/xen/common/sched_credit.c Thu Jun 07 19:46:57 2012 +0100
> +++ b/xen/common/sched_credit.c Wed Aug 29 14:39:57 2012 -0700
> @@ -1475,6 +1475,33 @@
>      printk("\n");
>  }
> 
> +#ifdef XEN_KDB_CONFIG
> +static void kdb_csched_dump(int cpu)
> +{
> +    struct csched_pcpu *pcpup = CSCHED_PCPU(cpu);
> +    struct vcpu *scurrvp = (CSCHED_VCPU(current))->vcpu;
> +    struct list_head *tmp, *runq = RUNQ(cpu);
> +
> +    kdbp("    csched_pcpu: %p\n", pcpup);
> +    kdbp("    curr csched:%p {vcpu:%p id:%d domid:%d}\n", (current)->sched_priv,
> +         scurrvp, scurrvp->vcpu_id, scurrvp->domain->domain_id);
> +    kdbp("    runq:\n");
> +
> +    /* next is top of struct, so screw stupid, ugly hard to follow macros */
> +    if (offsetof(struct csched_vcpu, runq_elem.next) != 0) {
> +        kdbp("next is not first in struct csched_vcpu. please fixme\n");

er, that's why the stupid macros are there!

at least this could be a build time check!

There seems to be a lot of code in this patch which adds a kdb_FOO_dump
function which duplicates the content (if not the exact layout) of an
existing FOO_dump function but using kdbp instead of printk.

I think that's a recipe for one or the other getting out of sync or
bit-rotting and should be replaced either with making printk usable
while in kdb or, if that isn't possible, by abstracting out the printing
function.

> +        return;        /* otherwise for loop will crash */
> +    }
> +    for (tmp = runq->next; tmp != runq; tmp = tmp->next) {
> +
> +        struct csched_vcpu *csp = (struct csched_vcpu *)tmp;
> +        struct vcpu *vp = csp->vcpu;
> +        kdbp("      csp:%p pri:%02d vcpu: {p:%p id:%d domid:%d}\n", csp,
> +             csp->pri, vp, vp->vcpu_id, vp->domain->domain_id);
> +    };
> +}
> +#endif
> +
>  static void
>  csched_dump_pcpu(const struct scheduler *ops, int cpu)
>  {
> @@ -1484,6 +1511,10 @@
>      int loop;
>  #define cpustr keyhandler_scratch
> 
> +#ifdef XEN_KDB_CONFIG
> +    kdb_csched_dump(cpu);
> +    return;

Return ? 

> +#endif
>      spc = CSCHED_PCPU(cpu);
>      runq = &spc->runq;
> 
> diff -r 32034d1914a6 xen/include/xen/sched.h
> --- a/xen/include/xen/sched.h   Thu Jun 07 19:46:57 2012 +0100
> +++ b/xen/include/xen/sched.h   Wed Aug 29 14:39:57 2012 -0700
> @@ -576,11 +576,14 @@
>  unsigned long hypercall_create_continuation(
>      unsigned int op, const char *format, ...);
>  void hypercall_cancel_continuation(void);
> -
> +#ifdef XEN_KDB_CONFIG
> +#define hypercall_preempt_check() (0)

Erm, really?

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 10:35:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 10:35:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7OZE-00036Q-HC; Fri, 31 Aug 2012 10:35:12 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T7OZD-00036L-8I
	for Xen-devel@lists.xensource.com; Fri, 31 Aug 2012 10:35:11 +0000
Received: from [85.158.143.35:46435] by server-2.bemta-4.messagelabs.com id
	81/AD-21239-E5390405; Fri, 31 Aug 2012 10:35:10 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1346409308!14689731!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEyMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16018 invoked from network); 31 Aug 2012 10:35:09 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 10:35:09 -0000
X-IronPort-AV: E=Sophos;i="4.80,346,1344211200"; d="scan'208";a="14285758"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 10:35:08 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1;
	Fri, 31 Aug 2012 11:35:08 +0100
Message-ID: <1346409306.27277.175.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Mukesh Rathor <mukesh.rathor@oracle.com>
Date: Fri, 31 Aug 2012 11:35:06 +0100
In-Reply-To: <20120830112323.5086d73c@mantra.us.oracle.com>
References: <20120830112323.5086d73c@mantra.us.oracle.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [RFC PATCH 1/2]: hypervisor debugger
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> diff -r 32034d1914a6 xen/Rules.mk
> --- a/xen/Rules.mk      Thu Jun 07 19:46:57 2012 +0100
> +++ b/xen/Rules.mk      Wed Aug 29 14:39:57 2012 -0700
> @@ -53,6 +55,7 @@
>  CFLAGS-$(HAS_ACPI)      += -DHAS_ACPI
>  CFLAGS-$(HAS_PASSTHROUGH) += -DHAS_PASSTHROUGH
>  CFLAGS-$(frame_pointer) += -fno-omit-frame-pointer -DCONFIG_FRAME_POINTER
> +CFLAGS-$(kdb)           += -DXEN_KDB_CONFIG

Pretty much everywhere else uses CONFIG_FOO not FOO_CONFIG. Also isn't
XEN rather redundant in this context?

> 
>  ifneq ($(max_phys_cpus),)
>  CFLAGS-y                += -DMAX_PHYS_CPUS=$(max_phys_cpus)
> diff -r 32034d1914a6 xen/arch/x86/hvm/svm/entry.S
> --- a/xen/arch/x86/hvm/svm/entry.S      Thu Jun 07 19:46:57 2012 +0100
> +++ b/xen/arch/x86/hvm/svm/entry.S      Wed Aug 29 14:39:57 2012 -0700
> @@ -59,12 +59,23 @@
>          get_current(bx)
>          CLGI
> 
> +#ifdef XEN_KDB_CONFIG
> +#if defined(__x86_64__)
> +        testl $1, kdb_session_begun(%rip)
> +#else
> +        testl $1, kdb_session_begun
> +#endif

This inner #ifdef is what the addr_of macro does.

> +        jnz  .Lkdb_skip_softirq
> +#endif
>          mov  VCPU_processor(r(bx)),%eax
>          shl  $IRQSTAT_shift,r(ax)
>          lea  addr_of(irq_stat),r(dx)
>          testl $~0,(r(dx),r(ax),1)
>          jnz  .Lsvm_process_softirqs
> 
> +#ifdef XEN_KDB_CONFIG
> +.Lkdb_skip_softirq:
> +#endif

Does gas complain about unused labels? IOW can you omit the ifdef here?

Why does kdb skip soft irqs? I presume the final submission will include
a commit log which will explain this sort of thing?

>          testb $0, VCPU_nsvm_hap_enabled(r(bx))
>  UNLIKELY_START(nz, nsvm_hap)
>          mov  VCPU_nhvm_p2m(r(bx)),r(ax)
> diff -r 32034d1914a6 xen/arch/x86/hvm/svm/svm.c
> --- a/xen/arch/x86/hvm/svm/svm.c        Thu Jun 07 19:46:57 2012 +0100
> +++ b/xen/arch/x86/hvm/svm/svm.c        Wed Aug 29 14:39:57 2012 -0700
> @@ -2170,6 +2170,10 @@
>          break;
> 
>      case VMEXIT_EXCEPTION_DB:
> +#ifdef XEN_KDB_CONFIG
> +        if (kdb_handle_trap_entry(TRAP_debug, regs))
> +           break;

defining kdb_handle_trap_entry as a nop when !KDB might make this and
the following similar examples tidier.

> +#endif
>          if ( !v->domain->debugger_attached )
>              goto exit_and_crash;
>          domain_pause_for_debugger();
> @@ -2182,6 +2186,10 @@
>          if ( (inst_len = __get_instruction_length(v, INSTR_INT3)) == 0 )
>              break;
>          __update_guest_eip(regs, inst_len);
> +#ifdef XEN_KDB_CONFIG
> +        if (kdb_handle_trap_entry(TRAP_int3, regs))
> +            break;
> +#endif
>          current->arch.gdbsx_vcpu_event = TRAP_int3;
>          domain_pause_for_debugger();
>          break;
> diff -r 32034d1914a6 xen/arch/x86/hvm/svm/vmcb.c
> --- a/xen/arch/x86/hvm/svm/vmcb.c       Thu Jun 07 19:46:57 2012 +0100
> +++ b/xen/arch/x86/hvm/svm/vmcb.c       Wed Aug 29 14:39:57 2012 -0700
> @@ -315,6 +315,36 @@
>      register_keyhandler('v', &vmcb_dump_keyhandler);
>  }
> 
> +#if defined(XEN_KDB_CONFIG)
> +/* did == 0 : display for all HVM domains. domid 0 is never HVM.

Ahem, I think you have a patch which kind of changes that ;-)

(I notice you have _or_hyb_domain below so I guess you've thought of
it ;-). Perhaps you could use DOMID_INVALID or something as your flag?

> + *  * vid == -1 : display for all HVM VCPUs
> + *   */
> +void kdb_dump_vmcb(domid_t did, int vid)
> +{
> +    struct domain *dp;
> +    struct vcpu *vp;
> +
> +    rcu_read_lock(&domlist_read_lock);
> +    for_each_domain (dp) {
> +        if (!is_hvm_or_hyb_domain(dp) || dp->is_dying)
> +            continue;
> +        if (did != 0 && did != dp->domain_id)
> +            continue;
> +
> +        for_each_vcpu (dp, vp) {
> +            if (vid != -1 && vid != vp->vcpu_id)
> +                continue;
> +
> +            kdbp("  VMCB [domid: %d  vcpu:%d]:\n", dp->domain_id, vp->vcpu_id);
> +            svm_vmcb_dump("kdb", vp->arch.hvm_svm.vmcb);
> +            kdbp("\n");
> +        }
> +        kdbp("\n");
> +    }
> +    rcu_read_unlock(&domlist_read_lock);
> +}
> +#endif
> +
>  /*
>   * Local variables:
>   * mode: C
> diff -r 32034d1914a6 xen/arch/x86/hvm/vmx/entry.S
> --- a/xen/arch/x86/hvm/vmx/entry.S      Thu Jun 07 19:46:57 2012 +0100
> +++ b/xen/arch/x86/hvm/vmx/entry.S      Wed Aug 29 14:39:57 2012 -0700
> @@ -124,12 +124,23 @@
>          get_current(bx)
>          cli
> 
> +#ifdef XEN_KDB_CONFIG
> +#if defined(__x86_64__)
> +        testl $1, kdb_session_begun(%rip)
> +#else
> +        testl $1, kdb_session_begun
> +#endif
> +        jnz  .Lkdb_skip_softirq
> +#endif
>          mov  VCPU_processor(r(bx)),%eax
>          shl  $IRQSTAT_shift,r(ax)
>          lea  addr_of(irq_stat),r(dx)
>          cmpl $0,(r(dx),r(ax),1)
>          jnz  .Lvmx_process_softirqs
> 
> +#ifdef XEN_KDB_CONFIG
> +.Lkdb_skip_softirq:
> +#endif

Same comments here as to the svm version.

>          testb $0xff,VCPU_vmx_emulate(r(bx))
>          jnz .Lvmx_goto_emulator
>          testb $0xff,VCPU_vmx_realmode(r(bx))
> diff -r 32034d1914a6 xen/arch/x86/hvm/vmx/vmcs.c
> --- a/xen/arch/x86/hvm/vmx/vmcs.c       Thu Jun 07 19:46:57 2012 +0100
> +++ b/xen/arch/x86/hvm/vmx/vmcs.c       Wed Aug 29 14:39:57 2012 -0700
> @@ -1117,6 +1117,13 @@
>          hvm_asid_flush_vcpu(v);
>      }
> 
> +#if defined(XEN_KDB_CONFIG)
> +    if (kdb_dr7)
> +        __vmwrite(GUEST_DR7, kdb_dr7);
> +    else
> +        __vmwrite(GUEST_DR7, 0);

Isn't this the same as
	__vmwrite(GUEST_DR7, kdb_dr7) ?

Why does kdb maintain its own global here instead of manipulating the
guest dr7 values in the state?

> +#endif
> +
>      debug_state = v->domain->debugger_attached
>                    || v->domain->arch.hvm_domain.params[HVM_PARAM_MEMORY_EVENT_INT3]
>                    || v->domain->arch.hvm_domain.params[HVM_PARAM_MEMORY_EVENT_SINGLE_STEP];
> @@ -1326,6 +1333,220 @@
>      register_keyhandler('v', &vmcs_dump_keyhandler);
>  }
> 
> +#if defined(XEN_KDB_CONFIG)
> +#define GUEST_EFER      0x2806   /* see page 23-20 */
> +#define GUEST_EFER_HIGH 0x2807   /* see page 23-20 */

23-20 of what?

These belong with the other GUEST_* VMCS offset definitions in vmcs.h

> +
> +/* it's a shame we can't use vmcs_dump_vcpu(), but it does vmx_vmcs_enter which
> + * will IPI other CPUs. also, print a subset relevant to software debugging */
> +static void noinline kdb_print_vmcs(struct vcpu *vp)
> +{
> +    struct cpu_user_regs *regs = &vp->arch.user_regs;
> +    unsigned long long x;
> +
> +    kdbp("*** Guest State ***\n");
> +    kdbp("CR0: actual=0x%016llx, shadow=0x%016llx, gh_mask=%016llx\n",
> +         (unsigned long long)vmr(GUEST_CR0),

vmr returns an unsigned long, why the mismatched format string + cast?

This seems to duplicate a fairly large chunk of vmcs_dump_vcpu. If kdbp
and printk can't co-exist then perhaps make a common function which
takes a printer as a function pointer?

Same goes for the SVM version I think I saw before.

> +/* Flush VMCS on this cpu if it needs to:
> + *   - Upon leaving kdb, the HVM cpu will resume in vmx_vmexit_handler() and
> + *     do __vmreads. So, the VMCS pointer can't be left cleared.
> + *   - Doing __vmpclear will set the vmx state to 'clear', so to resume a
> + *     vmlaunch must be done and not vmresume. This means, we must clear
> + *     arch_vmx->launched.
> + */
> +void kdb_curr_cpu_flush_vmcs(void)

Is this function actually kdb specific? Other than the printing it looks
like a pretty generic helper to me.

> +{
> +    struct domain *dp;
> +    struct vcpu *vp;
> +    int ccpu = smp_processor_id();
> +    struct vmcs_struct *cvp = this_cpu(current_vmcs);
> +
> +    if (this_cpu(current_vmcs) == NULL)
> +        return;             /* no HVM active on this CPU */
> +
> +    kdbp("KDB:[%d] curvmcs:%lx/%lx\n", ccpu, cvp, virt_to_maddr(cvp));
> +
> +    /* looks like we got one. unfortunately, current_vmcs points to vmcs
> +     * and not VCPU, so we gotta search the entire list... */
> +    for_each_domain (dp) {
> +        if ( !(is_hvm_or_hyb_domain(dp)) || dp->is_dying)
> +            continue;
> +        for_each_vcpu (dp, vp) {
> +            if ( vp->arch.hvm_vmx.vmcs == cvp ) {
> +                __vmpclear(virt_to_maddr(vp->arch.hvm_vmx.vmcs));
> +                __vmptrld(virt_to_maddr(vp->arch.hvm_vmx.vmcs));
> +                vp->arch.hvm_vmx.launched = 0;
> +                this_cpu(current_vmcs) = NULL;
> +                kdbp("KDB:[%d] %d:%d current_vmcs:%lx flushed\n",
> +                    ccpu, dp->domain_id, vp->vcpu_id, cvp, virt_to_maddr(cvp));
> +            }
> +        }
> +    }
> +}
> +
> +/*
> + * domid == 0 : display for all HVM domains  (dom0 is never an HVM domain)

Not any more...

> + * vcpu id == -1 : display all vcpuids
> + * PreCondition: all HVM cpus (including current cpu) have flushed VMCS
> + */
> +void kdb_dump_vmcs(domid_t did, int vid)
> +{
> +    struct domain *dp;
> +    struct vcpu *vp;
> +    struct vmcs_struct  *vmcsp;
> +    u64 addr = -1;
> +
> +    ASSERT(!local_irq_is_enabled());     /* kdb should always run disabled */
> +    __vmptrst(&addr);
> +
> +    for_each_domain (dp) {
> +        if ( !(is_hvm_or_hyb_domain(dp)) || dp->is_dying)
> +            continue;
> +        if (did != 0 && did != dp->domain_id)
> +            continue;
> +
> +        for_each_vcpu (dp, vp) {
> +            if (vid != -1 && vid != vp->vcpu_id)
> +                continue;

A lot of this scaffolding is the same as for svm. Perhaps pull it up
into a common hvm function with only the inner arch-specific bit in a
per-VMX/SVM function?

> +
> +           vmcsp = vp->arch.hvm_vmx.vmcs;
> +            kdbp("VMCS %lx/%lx [domid:%d (%p)  vcpu:%d (%p)]:\n", vmcsp,
> +                virt_to_maddr(vmcsp), dp->domain_id, dp, vp->vcpu_id, vp);
> +            __vmptrld(virt_to_maddr(vmcsp));
> +            kdb_print_vmcs(vp);
> +            __vmpclear(virt_to_maddr(vmcsp));
> +            vp->arch.hvm_vmx.launched = 0;
> +        }
> +        kdbp("\n");
> +    }
> +    /* restore orig vmcs pointer for __vmreads in vmx_vmexit_handler() */
> +    if (addr && addr != (u64)-1)
> +        __vmptrld(addr);
> +}
> +#endif
> 
>  /*
>   * Local variables:
> diff -r 32034d1914a6 xen/arch/x86/hvm/vmx/vmx.c
> --- a/xen/arch/x86/hvm/vmx/vmx.c        Thu Jun 07 19:46:57 2012 +0100
> +++ b/xen/arch/x86/hvm/vmx/vmx.c        Wed Aug 29 14:39:57 2012 -0700
> @@ -2183,11 +2183,14 @@
>          printk("reason not known yet!");
>          break;
>      }
> -
> +#if defined(XEN_KDB_CONFIG)
> +    kdbp("\n************* VMCS Area **************\n");
> +    kdb_dump_vmcs(curr->domain->domain_id, (curr)->vcpu_id);
> +#else
>      printk("************* VMCS Area **************\n");
>      vmcs_dump_vcpu(curr);

Is kdb_dump_vmcs better than/different to vmcs_dump_vcpu in this
context?
>      printk("**************************************\n");
> -
> +#endif
>      domain_crash(curr->domain);
>  }
> 
> @@ -2415,6 +2418,12 @@
>              write_debugreg(6, exit_qualification | 0xffff0ff0);
>              if ( !v->domain->debugger_attached || cpu_has_monitor_trap_flag )
>                  goto exit_and_crash;
> +
> +#if defined(XEN_KDB_CONFIG)
> +            /* TRAP_debug: IP points correctly to next instr */
> +            if (kdb_handle_trap_entry(vector, regs))
> +                break;
> +#endif
>              domain_pause_for_debugger();
>              break;
>          case TRAP_int3:
> @@ -2423,6 +2432,13 @@
>              if ( v->domain->debugger_attached )
>              {
>                  update_guest_eip(); /* Safe: INT3 */
> +#if defined(XEN_KDB_CONFIG)
> +                /* vmcs.IP points to bp, kdb expects bp+1. Hence after the above
> +                 * update_guest_eip which updates to bp+1. works for gdbsx too
> +                 */
> +                if (kdb_handle_trap_entry(vector, regs))
> +                    break;
> +#endif
>                  current->arch.gdbsx_vcpu_event = TRAP_int3;
>                  domain_pause_for_debugger();
>                  break;
> @@ -2707,6 +2723,10 @@
>      case EXIT_REASON_MONITOR_TRAP_FLAG:
>          v->arch.hvm_vmx.exec_control &= ~CPU_BASED_MONITOR_TRAP_FLAG;
>          vmx_update_cpu_exec_control(v);
> +#if defined(XEN_KDB_CONFIG)
> +        if (kdb_handle_trap_entry(TRAP_debug, regs))
> +            break;
> +#endif
>          if ( v->arch.hvm_vcpu.single_step ) {
>            hvm_memory_event_single_step(regs->eip);
>            if ( v->domain->debugger_attached )
> diff -r 32034d1914a6 xen/arch/x86/irq.c
> --- a/xen/arch/x86/irq.c        Thu Jun 07 19:46:57 2012 +0100
> +++ b/xen/arch/x86/irq.c        Wed Aug 29 14:39:57 2012 -0700
> @@ -2305,3 +2305,29 @@
>      return is_hvm_domain(d) && pirq &&
>             pirq->arch.hvm.emuirq != IRQ_UNBOUND;
>  }
> +
> +#ifdef XEN_KDB_CONFIG
> +void kdb_prnt_guest_mapped_irqs(void)
> +{
> +    int irq, j;
> +    char affstr[NR_CPUS/4+NR_CPUS/32+2];    /* courtesy dump_irqs() */

Can this share some code with dump_irqs then?

I don't see this construct there though -- but that magic expression
could use some explanation.

> +
> +    kdbp("irq  vec  aff  type  domid:mapped-pirq pairs  (all in decimal)\n");
> +    for (irq=0; irq < nr_irqs; irq++) {
> +        irq_desc_t  *dp = irq_to_desc(irq);
> +        struct arch_irq_desc *archp = &dp->arch;
> +        irq_guest_action_t *actp = (irq_guest_action_t *)dp->action;
> +
> +        if (!dp->handler ||dp->handler==&no_irq_type || !(dp->status&IRQ_GUEST))
> +            continue;
> +
> +        cpumask_scnprintf(affstr, sizeof(affstr), dp->affinity);
> +        kdbp("[%3ld] %3d %3s %-13s ", irq, archp->vector, affstr,
> +             dp->handler->typename);
> +        for (j=0; j < actp->nr_guests; j++)
> +            kdbp("%03d:%04d ", actp->guest[j]->domain_id,
> +                 domain_irq_to_pirq(actp->guest[j], irq));
> +        kdbp("\n");
> +    }
> +}
> +#endif

> diff -r 32034d1914a6 xen/arch/x86/smp.c
> --- a/xen/arch/x86/smp.c        Thu Jun 07 19:46:57 2012 +0100
> +++ b/xen/arch/x86/smp.c        Wed Aug 29 14:39:57 2012 -0700
> @@ -273,7 +273,7 @@
>   * Structure and data for smp_call_function()/on_selected_cpus().
>   */
> 
> -static void __smp_call_function_interrupt(void);
> +static void __smp_call_function_interrupt(struct cpu_user_regs *regs);

I think you can just use get_irq_regs in the functions which need it
instead of adding this to every irq path?

>  static DEFINE_SPINLOCK(call_lock);
>  static struct call_data_struct {
>      void (*func) (void *info);


> diff -r 32034d1914a6 xen/common/sched_credit.c
> --- a/xen/common/sched_credit.c Thu Jun 07 19:46:57 2012 +0100
> +++ b/xen/common/sched_credit.c Wed Aug 29 14:39:57 2012 -0700
> @@ -1475,6 +1475,33 @@
>      printk("\n");
>  }
> 
> +#ifdef XEN_KDB_CONFIG
> +static void kdb_csched_dump(int cpu)
> +{
> +    struct csched_pcpu *pcpup = CSCHED_PCPU(cpu);
> +    struct vcpu *scurrvp = (CSCHED_VCPU(current))->vcpu;
> +    struct list_head *tmp, *runq = RUNQ(cpu);
> +
> +    kdbp("    csched_pcpu: %p\n", pcpup);
> +    kdbp("    curr csched:%p {vcpu:%p id:%d domid:%d}\n", (current)->sched_priv,
> +         scurrvp, scurrvp->vcpu_id, scurrvp->domain->domain_id);
> +    kdbp("    runq:\n");
> +
> +    /* next is top of struct, so screw stupid, ugly hard to follow macros */
> +    if (offsetof(struct csched_vcpu, runq_elem.next) != 0) {
> +        kdbp("next is not first in struct csched_vcpu. please fixme\n");

er, that's why the stupid macros are there!

at least this could be a build time check!

There seems to be a lot of code in this patch which adds a kdb_FOO_dump
function which duplicates the content (if not the exact layout) of an
existing FOO_dump function but using kdbp instead of printk.

I think that's a recipe for one or the other getting out of sync or
bit-rotting and should be replaced either with making printk usable
while in kdb or, if that isn't possible, by abstracting out the printing
function.

> +        return;        /* otherwise for loop will crash */
> +    }
> +    for (tmp = runq->next; tmp != runq; tmp = tmp->next) {
> +
> +        struct csched_vcpu *csp = (struct csched_vcpu *)tmp;
> +        struct vcpu *vp = csp->vcpu;
> +        kdbp("      csp:%p pri:%02d vcpu: {p:%p id:%d domid:%d}\n", csp,
> +             csp->pri, vp, vp->vcpu_id, vp->domain->domain_id);
> +    };
> +}
> +#endif
> +
>  static void
>  csched_dump_pcpu(const struct scheduler *ops, int cpu)
>  {
> @@ -1484,6 +1511,10 @@
>      int loop;
>  #define cpustr keyhandler_scratch
> 
> +#ifdef XEN_KDB_CONFIG
> +    kdb_csched_dump(cpu);
> +    return;

Return ? 

> +#endif
>      spc = CSCHED_PCPU(cpu);
>      runq = &spc->runq;
> 
> diff -r 32034d1914a6 xen/include/xen/sched.h
> --- a/xen/include/xen/sched.h   Thu Jun 07 19:46:57 2012 +0100
> +++ b/xen/include/xen/sched.h   Wed Aug 29 14:39:57 2012 -0700
> @@ -576,11 +576,14 @@
>  unsigned long hypercall_create_continuation(
>      unsigned int op, const char *format, ...);
>  void hypercall_cancel_continuation(void);
> -
> +#ifdef XEN_KDB_CONFIG
> +#define hypercall_preempt_check() (0)

Erm, really?

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 10:36:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 10:36:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7OZn-00038r-V9; Fri, 31 Aug 2012 10:35:47 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T7OZm-00038b-8t
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 10:35:46 +0000
Received: from [85.158.139.83:57650] by server-11.bemta-5.messagelabs.com id
	AC/93-24658-18390405; Fri, 31 Aug 2012 10:35:45 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-182.messagelabs.com!1346409344!20666417!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEyMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18172 invoked from network); 31 Aug 2012 10:35:45 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-11.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 10:35:45 -0000
X-IronPort-AV: E=Sophos;i="4.80,346,1344211200"; d="scan'208";a="14285766"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 10:35:44 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1;
	Fri, 31 Aug 2012 11:35:44 +0100
Message-ID: <1346409342.27277.176.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Fri, 31 Aug 2012 11:35:42 +0100
In-Reply-To: <20544.34538.599108.990505@mariner.uk.xensource.com>
References: <CALaHputr63fjY6DOiNSx5SRS6dsb4v0dZzhypSeb2M8hQmhH-g@mail.gmail.com>
	<1346403318.27277.118.camel@zakaz.uk.xensource.com>
	<20544.34538.599108.990505@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-users@lists.xen.org" <xen-users@lists.xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Roger Pau Monne <roger.pau@citrix.com>, Dmitry Ivanov <vonami@gmail.com>
Subject: Re: [Xen-devel] [Xen-users] Failed to build xen-unstable with
 --disable-pythontools
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-08-31 at 10:42 +0100, Ian Jackson wrote:
> Ian Campbell writes ("Re: [Xen-users] Failed to build xen-unstable with --disable-pythontools"):
> > On Fri, 2012-08-31 at 09:37 +0100, Dmitry Ivanov wrote:
> > > After configuring xen-unstable with ./configure --disable-ocamltools
> > > --disable-pythontools it fails to build:
> ...
> > In the meantime I think the answer is "don't do that then". That may
> > well also turn out to be the answer for the 4.2.0 release at this point.
> > 
> > A patch to remove this option follows.
> 
> Thanks.
> 
> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

Applied, thanks.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 10:36:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 10:36:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7OZv-0003AJ-16; Fri, 31 Aug 2012 10:35:55 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <d.vrabel.98@gmail.com>) id 1T7OZt-00038p-5d
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 10:35:53 +0000
X-Env-Sender: d.vrabel.98@gmail.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1346409346!8924144!1
X-Originating-IP: [209.85.213.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4259 invoked from network); 31 Aug 2012 10:35:47 -0000
Received: from mail-yx0-f173.google.com (HELO mail-yx0-f173.google.com)
	(209.85.213.173)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 10:35:47 -0000
Received: by yenm4 with SMTP id m4so571327yen.32
	for <xen-devel@lists.xen.org>; Fri, 31 Aug 2012 03:35:45 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:date:from:user-agent:mime-version:to:cc:subject
	:references:in-reply-to:content-type:content-transfer-encoding;
	bh=L6JiHO0tdSapyjED627N3IL0HwWrwhQhRGTDEWcR/lM=;
	b=Go5TCCjvJMoPMxc2voybjK8jqP/pdDXCQP724j49aqYSyNCsPqDsCXLTrX9L8bI/W3
	dmkXbur3q8GgGdbKfn5VawEzYB0sNFYKXtmNehpGI7A0sqzXSz4XcpVX6UH2mqxqyLlU
	OcC0govLJBLwLhdYtfwlwnsmtQgyDOTl1dkOQR44on9zpUvgxAVd7VgG95S/XdPgbWIx
	zqb9s5Jq9f8EHReiITN+INfYXn7KNM5tGjYX1eprhMRWsgutcj+Q7iWrWPqX1Tx/TQJH
	9NuQto4mUoyYzOjOzfoSUdOKPuPKSlI0YpfgCVyd2b+DgIpeMvOy0LTXOhgfYzD+NV1F
	4hsg==
Received: by 10.236.76.103 with SMTP id a67mr7458818yhe.69.1346409345664;
	Fri, 31 Aug 2012 03:35:45 -0700 (PDT)
Received: from [10.80.2.76] (firewall.ctxuk.citrix.com. [62.200.22.2])
	by mx.google.com with ESMTPS id s17sm3916628anj.13.2012.08.31.03.35.44
	(version=SSLv3 cipher=OTHER); Fri, 31 Aug 2012 03:35:45 -0700 (PDT)
Message-ID: <5040937E.2020408@cantab.net>
Date: Fri, 31 Aug 2012 11:35:42 +0100
From: David Vrabel <dvrabel@cantab.net>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20120428 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Jonathan Tripathy <jonnyt@abpni.co.uk>
References: <b48b8a512e26a92c313b96b7e5550cde@abpni.co.uk>
In-Reply-To: <b48b8a512e26a92c313b96b7e5550cde@abpni.co.uk>
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Dom0 Memory
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 31/08/12 10:52, Jonathan Tripathy wrote:
> Hi Everyone,
> 
> I have set dom0_mem=2048M in grub2, set dom0 minimum memory to 2048 in
> xend-config (and also disabled ballooning). However, when I do "free -m"
> in the Dom0, I only see 1567 MB of total RAM (xentop reports 2048ish).

Does this article provide the answers?

http://blog.xen.org/index.php/2012/04/30/do%ef%bb%bfm0-memory-where-it-has-not-gone/

David

> Is this a known issue? I am aware that this problem occurs in the
> DomUs as well, but never with a discrepancy that big.
> 
> The kernel I'm using is 3.2.28. Xen is version 4.1.3 (although I'm using
> a hack in setup.c where I changed the order of the if block to give
> priority to the E820 memory map. I needed to do this as my motherboard
> uses UEFI. Keir made a patch for this for unstable recently).
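For anyone comparing notes, the boot-time setting under discussion looks roughly like this. The file path and variable name assume a Debian-style GRUB 2 install and are illustrative only:

```shell
# /etc/default/grub -- assumed Debian-style GRUB 2 location.
# Pin dom0 to 2 GiB and cap ballooning at the same value:
GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=2048M,max:2048M"
# Afterwards regenerate the boot config (e.g. run update-grub).
```

The `max:` clause keeps the hypervisor from growing dom0 later even if ballooning is re-enabled.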

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 10:36:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 10:36:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7OaL-0003Fc-Ey; Fri, 31 Aug 2012 10:36:21 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T7OaK-0003E8-44
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 10:36:20 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1346409351!1983808!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEyMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24532 invoked from network); 31 Aug 2012 10:35:52 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 10:35:52 -0000
X-IronPort-AV: E=Sophos;i="4.80,346,1344211200"; d="scan'208";a="14285770"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 10:35:51 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1;
	Fri, 31 Aug 2012 11:35:51 +0100
Message-ID: <1346409349.27277.177.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Fri, 31 Aug 2012 11:35:49 +0100
In-Reply-To: <5040A4AE0200007800097C43@nat28.tlf.novell.com>
References: <db614e92faf743e20b3f.1337096977@kodo2>
	<4FB29D6F0200007800083E81@nat28.tlf.novell.com>
	<4FB28295.8020905@eu.citrix.com>
	<4FB2A2320200007800083EB4@nat28.tlf.novell.com>
	<4FB28709.9080001@eu.citrix.com>
	<4FB377A10200007800083FFE@nat28.tlf.novell.com>
	<1346402456.27277.111.camel@zakaz.uk.xensource.com>
	<50409A070200007800097BA3@nat28.tlf.novell.com>
	<1346405635.27277.137.camel@zakaz.uk.xensource.com>
	<5040A4AE0200007800097C43@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] xencommons: Attempt to load blktap driver
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-08-31 at 10:49 +0100, Jan Beulich wrote:
> > Can I take that as an
> > Acked-by: Jan Beulich <JBeulich@suse.com>
> > ?
> 
> Yes, feel free to do so.

Thanks, applied.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 10:37:34 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 10:37:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7ObL-0003Zx-2s; Fri, 31 Aug 2012 10:37:23 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T7ObJ-0003ZS-Hr
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 10:37:21 +0000
Received: from [85.158.138.51:30907] by server-11.bemta-3.messagelabs.com id
	A3/71-30250-0E390405; Fri, 31 Aug 2012 10:37:20 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-174.messagelabs.com!1346409438!27905449!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM2OTU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29902 invoked from network); 31 Aug 2012 10:37:18 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-174.messagelabs.com with SMTP;
	31 Aug 2012 10:37:18 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 31 Aug 2012 11:37:17 +0100
Message-Id: <5040AFFA0200007800097C8F@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 31 Aug 2012 11:37:14 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "maheen butt" <maheen_butt26@yahoo.com>
References: <1346392757.85980.YahooMailNeo@web126004.mail.ne1.yahoo.com>
In-Reply-To: <1346392757.85980.YahooMailNeo@web126004.mail.ne1.yahoo.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen port for MIPS
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 31.08.12 at 07:59, maheen butt <maheen_butt26@yahoo.com> wrote:
> I want to port Xen to the MIPS64 architecture as my MS thesis. My strategy
> is to find out (somehow) what changes are required to make a kernel into a
> Xen-enabled kernel for x86, and then change the MIPS portion of the code
> accordingly.
> Now I have two issues here:
> 1) The above strategy covers the Xen-enabled kernel, but I have no clear
> picture of the Xen hypervisor itself.
> 2) I'm currently working on the 2.6.18 xenlinux. I'm not using a current
> version because newer kernels use pv_ops, and no such interface exists for
> MIPS (considering that I'm following x86). I know my knowledge of pv_ops
> is very limited.
> 
> So can anybody guide me on the correct strategy? And can I use a newer
> kernel (anything with version >= 2.6.26) and xen-unstable while avoiding
> pv_ops as well?
> There are two members in our team. Any guess how much time it will take?

According to a presentation at the summit earlier this week, such
a port already exists.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 10:49:28 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 10:49:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7Omk-0004DC-Au; Fri, 31 Aug 2012 10:49:10 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1T7Omj-0004D4-6L
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 10:49:09 +0000
Received: from [85.158.139.83:51758] by server-11.bemta-5.messagelabs.com id
	13/EA-24658-4A690405; Fri, 31 Aug 2012 10:49:08 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-6.tower-182.messagelabs.com!1346410147!24021472!1
X-Originating-IP: [188.40.164.121]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27927 invoked from network); 31 Aug 2012 10:49:08 -0000
Received: from static.121.164.40.188.clients.your-server.de (HELO
	smtp.eikelenboom.it) (188.40.164.121)
	by server-6.tower-182.messagelabs.com with AES256-SHA encrypted SMTP;
	31 Aug 2012 10:49:08 -0000
Received: from 26-69-ftth.onsneteindhoven.nl ([88.159.69.26]:50625
	helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.72) (envelope-from <linux@eikelenboom.it>)
	id 1T7Ojb-0004uE-SG
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 12:45:55 +0200
Date: Fri, 31 Aug 2012 12:49:02 +0200
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <885821548.20120831124902@eikelenboom.it>
To: xen-devel@lists.xen.org
MIME-Version: 1.0
Subject: [Xen-devel] xl / xend feature parity: Missing '-a' option for xl
	'shutdown' to shutdown all domains
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi All,

Is there any reason why xl doesn't support the '-a' option for shutdown, to shut down all domains?

--

Sander


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 10:55:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 10:55:23 +0000
From xen-devel-bounces@lists.xen.org Fri Aug 31 10:55:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 10:55:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7OsZ-0004TI-4S; Fri, 31 Aug 2012 10:55:11 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T7OsX-0004T9-TW
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 10:55:10 +0000
Received: from [85.158.138.51:3660] by server-3.bemta-3.messagelabs.com id
	B7/72-21322-C0890405; Fri, 31 Aug 2012 10:55:08 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-174.messagelabs.com!1346410508!23844027!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEyMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31161 invoked from network); 31 Aug 2012 10:55:08 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-10.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 10:55:08 -0000
X-IronPort-AV: E=Sophos;i="4.80,346,1344211200"; d="scan'208";a="14286249"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 10:55:07 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1;
	Fri, 31 Aug 2012 11:55:07 +0100
Message-ID: <1346410506.27277.179.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Sander Eikelenboom <linux@eikelenboom.it>
Date: Fri, 31 Aug 2012 11:55:06 +0100
In-Reply-To: <885821548.20120831124902@eikelenboom.it>
References: <885821548.20120831124902@eikelenboom.it>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] xl / xend feature parity: Missing '-a' option for
 xl 'shutdown' to shutdown all domains
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-08-31 at 11:49 +0100, Sander Eikelenboom wrote:
> Hi All,
> 
> Is there any reason why xl doesn't support the '-a' option for
> shutdown,  to shutdown all domains ?

I'd never heard of it for one thing ;-)

It should be a reasonably easy patch -- I can give some pointers if you
are interested.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 10:56:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 10:56:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7OtG-0004Vu-Hx; Fri, 31 Aug 2012 10:55:54 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T7OtF-0004Vo-TK
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 10:55:54 +0000
Received: from [85.158.143.35:27660] by server-2.bemta-4.messagelabs.com id
	3A/51-21239-93890405; Fri, 31 Aug 2012 10:55:53 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1346410550!12776963!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM2OTU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10979 invoked from network); 31 Aug 2012 10:55:50 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-21.messagelabs.com with SMTP;
	31 Aug 2012 10:55:50 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 31 Aug 2012 11:55:49 +0100
Message-Id: <5040B4510200007800097CBD@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 31 Aug 2012 11:55:45 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Jackson" <Ian.Jackson@eu.citrix.com>
References: <20542.14593.18652.74782@mariner.uk.xensource.com>
In-Reply-To: <20542.14593.18652.74782@mariner.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [RFC PATCH] xen: comment opaque expression in
 __page_to_virt
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 29.08.12 at 17:45, Ian Jackson <Ian.Jackson@eu.citrix.com> wrote:
> mm.h's __page_to_virt has a rather opaque expression.  Comment it.
> 
> The diff below shows the effect that the extra division and
> multiplication has on gcc's output; the "-" lines are the result of
> compiling
>     return (void *)(DIRECTMAP_VIRT_START +
>                     ((unsigned long)pg - FRAMETABLE_VIRT_START) /
>                     (sizeof(*pg) ) *
>                     (PAGE_SIZE )
>                     );
> instead.
> 
> NB that this patch is an RFC because I don't actually know whether
> what I wrote in the comment about x86 performance, and the purpose, of
> the code, is correct.  Jan, please confirm/deny/correct as
> appropriate.
> 
> Reported-By: Ian Campbell <ian.campbell@citrix.com>
> Cc: Jan Beulich <jbeulich@novell.com>
> Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
> 
> --- page_alloc.tmp.mariner.31972.s	2012-08-29 16:32:44.000000000 +0100
> +++ page_alloc.tmp.mariner.31960.s	2012-08-29 16:32:09.000000000 +0100
> @@ -5338,15 +5338,15 @@
>  # 325 "/u/iwj/work/xen-unstable-tools.hg/xen/include/asm/mm.h" 1
>  	ud2 ; ret $1303; movl $.LC31, %esp; movl $.LC41, %esp
>  # 0 "" 2
> -	.loc 10 327 0
> +	.loc 10 333 0
>  #NO_APP
> -	movl	$3, %ebx
> +	movl	$24, %ebx
>  .LVL543:
>  	movl	$0, %edx
>  	divl	%ebx
> -	addl	$8355840, %eax
> +	addl	$1044480, %eax
>  	movl	%eax, %ebx
> -	sall	$9, %ebx
> +	sall	$12, %ebx
>  .LBE737:
>  .LBE736:
>  	.loc 1 1179 0
> @@ -5368,13 +5368,13 @@
>  .LBE739:
>  .LBB741:
>  .LBB738:
> -	.loc 10 327 0
> +	.loc 10 333 0
>  	movl	$-1431655765, %edx
>  	mull	%edx
> -	shrl	%edx
> -	leal	8355840(%edx), %ebx
> +	shrl	$4, %edx
> +	leal	1044480(%edx), %ebx
>  .LVL545:
> -	sall	$9, %ebx
> +	sall	$12, %ebx
>  .LBE738:
>  .LBE741:
>  	.loc 1 1179 0
> 
> diff -r a0b5f8102a00 xen/include/asm-x86/mm.h
> --- a/xen/include/asm-x86/mm.h	Tue Aug 28 22:40:45 2012 +0100
> +++ b/xen/include/asm-x86/mm.h	Wed Aug 29 16:44:58 2012 +0100
> @@ -323,6 +323,13 @@ static inline struct page_info *__virt_t
>  static inline void *__page_to_virt(const struct page_info *pg)
>  {
>      ASSERT((unsigned long)pg - FRAMETABLE_VIRT_START < FRAMETABLE_VIRT_END);
> +    /* (sizeof(*pg) & -sizeof(*pg)) selects the LS bit of sizeof(*pg).
> +     * The division and re-multiplication arranges to do the easy part
> +     * of the division with a shift, and then puts the shifted-out
> +     * power of 2 back again in the multiplication.  This is
> +     * beneficial because with gcc (at least with 4.4.5) it generates
> +     * a division by 3 instead of a division by 8 which is faster.
> +     */

No, that's not precise. There's really not much of a win to be had
on 32-bit (division by 3 and division by 24, i.e. by
sizeof(struct page_info), should be the same speed).

The win is on x86-64, where sizeof(struct page_info) is a power
of 2, and hence the pair of shifts (right, then left) can be reduced
to a single one.

Yet (for obvious reasons) the code ought not to break anything
if the size of the structure were to change, even on x86-64;
hence it needs to be that complex (and can't be split into
separate, simpler implementations for 32-bit and 64-bit).

Jan

>      return (void *)(DIRECTMAP_VIRT_START +
>                      ((unsigned long)pg - FRAMETABLE_VIRT_START) /
>                      (sizeof(*pg) / (sizeof(*pg) & -sizeof(*pg))) *




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 11:01:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 11:01:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7Oyj-0004nl-Ev; Fri, 31 Aug 2012 11:01:33 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T7Oyi-0004ng-Ii
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 11:01:32 +0000
Received: from [85.158.143.35:46259] by server-2.bemta-4.messagelabs.com id
	97/DB-21239-B8990405; Fri, 31 Aug 2012 11:01:31 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-10.tower-21.messagelabs.com!1346410889!10548949!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM2OTU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5262 invoked from network); 31 Aug 2012 11:01:30 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-10.tower-21.messagelabs.com with SMTP;
	31 Aug 2012 11:01:30 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 31 Aug 2012 12:01:29 +0100
Message-Id: <5040B5A40200007800097CCC@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 31 Aug 2012 12:01:24 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Dan Magenheimer" <dan.magenheimer@oracle.com>
References: <CAFLBxZaGmvSYgL4DcHcSwE_0shqKC0DRbf7ab=uhFrerPCxRig@mail.gmail.com>
	<f9a8835e-95d8-4495-8c9d-4fa769913549@default>
	<503F463E.90505@cantab.net>
	<2ecc5ed5-a95e-40e4-9e00-8d1378ce1eef@default>
In-Reply-To: <2ecc5ed5-a95e-40e4-9e00-8d1378ce1eef@default>
Mime-Version: 1.0
Content-Disposition: inline
Cc: David Vrabel <dvrabel@cantab.net>,
	George Dunlap <George.Dunlap@eu.citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen 4.3 release planning proposal
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 30.08.12 at 18:11, Dan Magenheimer <dan.magenheimer@oracle.com> wrote:
> Of course, 18 months is far too long a release cycle for this approach,
> and 9 months may be too long as well.  I think a target cycle
> of 6 months with a "window" of 6 weeks would be a step in
> the right direction

I disagree. Even the months-long freeze we're having right now
is already way too long. Nor do I personally consider the
(approximately) two-weeks-out-of-ten model on the Linux side
particularly nice. Having larger development windows and shorter
stabilization periods is pretty desirable imo, all the more so
on Xen where, despite its name, -unstable normally really isn't
that unstable.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 11:04:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 11:04:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7P1Y-0004x2-1H; Fri, 31 Aug 2012 11:04:28 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T7P1W-0004wk-L2
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 11:04:26 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1346411058!8967370!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEyMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16834 invoked from network); 31 Aug 2012 11:04:20 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 11:04:20 -0000
X-IronPort-AV: E=Sophos;i="4.80,346,1344211200"; d="scan'208";a="14286499"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 11:04:05 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Fri, 31 Aug 2012 12:04:05 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1T7P1A-0002LD-TE; Fri, 31 Aug 2012 11:04:04 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1T7P1A-00085H-P6;
	Fri, 31 Aug 2012 12:04:04 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20544.39460.499127.781598@mariner.uk.xensource.com>
Date: Fri, 31 Aug 2012 12:04:04 +0100
To: <xen-devel@lists.xen.org>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: ian.campbell@eu.citrix.com
Subject: [Xen-devel] Test report: Migration from 4.1 to 4.2 works
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Migration 4.1 xend -> 4.2 xend
  OK

Migration 4.2 -> 4.1 (xend or xl)
  xend: Fails, guest ends up destroyed
  xl: Fails, xl tries to resume at sender but guest gets BUG (see below)
      This is probably a guest bug?

Migration 4.1 xend -> 4.2 xl
  Needs to be done with xl
  Stop xend on source, which leaves domain running and manipulable by xl
  xl migrate -C /etc/xen/debian.guest.osstest.cfg domain potato-beetle
  Works.

However, xl fails on config files which are missing the final
newline.  This should be fixed for 4.2.

Ian.

  xc: error: Max batch size exceeded (-18). Giving up.: Internal error
  xc: error: Error when reading batch (90 = Message too long): Internal error
  libxl: error: libxl_dom.c:313:libxl__domain_restore_common restoring domain: Resource temporarily unavailable
  cannot (re-)build domain: -3
  libxl: error: libxl.c:711:libxl_domain_destroy non-existant domain 6
  migration target: Domain creation failed (code -3).
  libxl: error: libxl_utils.c:363:libxl_read_exactly: file/stream truncated reading ready message from migration receiver stream
  libxl: info: libxl_exec.c:118:libxl_report_child_exitstatus: migration target process [15654] exited with error status 3
  Migration failed, resuming at sender.

[   37.151396] Setting capacity to 8388608
[   37.151988] Setting capacity to 8388608
[   37.172710] Setting capacity to 2048000
[   90.507105] ------------[ cut here ]------------
[   90.507105] kernel BUG at drivers/xen/events.c:1344!
[   90.507105] invalid opcode: 0000 [#1] SMP 
[   90.507105] last sysfs file: /sys/devices/virtual/net/lo/operstate
[   90.507105] Modules linked in: nbd [last unloaded: scsi_wait_scan]
[   90.507105] 
[   90.507105] Pid: 1299, comm: kstop/0 Not tainted (2.6.32.57 #1) 
[   90.507105] EIP: 0061:[<c121b9e4>] EFLAGS: 00010082 CPU: 0
[   90.507105] EIP is at xen_irq_resume+0xe3/0x2b6
[   90.507105] EAX: ffffffef EBX: 00000000 ECX: deadbeef EDX: c4c8df24
[   90.507105] ESI: 000001ff EDI: 00001ff0 EBP: c4c8df3c ESP: c4c8deec
[   90.507105]  DS: 007b ES: 007b FS: 00d8 GS: 0000 SS: 0069
[   90.507105] Process kstop/0 (pid: 1299, ti=c4c8c000 task=db980000 task.ti=c4c8c000)
[   90.507105] Stack:
[   90.507105]  c102ce92 c4c8df14 c1752288 c1752228 c1770004 00000000 c4c8df24 c102c270
[   90.507105] <0> c1771004 c4de1720 c1770004 c102c267 c1464b35 c166fed0 00000000 00000000
[   90.507105] <0> deadbeef deadbeef 00000003 c6000c14 c4c8df5c c121d011 00000000 dfc63f5c
[   90.507105] Call Trace:
[   90.507105]  [<c102ce92>] ? __xen_spin_lock+0xcb/0xdf
[   90.507105]  [<c102c270>] ? check_events+0x8/0xc
[   90.507105]  [<c102c267>] ? xen_restore_fl_direct_end+0x0/0x1
[   90.507105]  [<c1464b35>] ? _spin_unlock_irqrestore+0x40/0x43
[   90.507105]  [<c121d011>] ? xen_suspend+0x8c/0xa6
[   90.507105]  [<c1097f4f>] ? stop_cpu+0x7d/0xc9
[   90.507105]  [<c1073582>] ? worker_thread+0x15c/0x1f4
[   90.507105]  [<c1097ed2>] ? stop_cpu+0x0/0xc9
[   90.507105]  [<c107664d>] ? autoremove_wake_function+0x0/0x2f
[   90.507105]  [<c1073426>] ? worker_thread+0x0/0x1f4
[   90.507105]  [<c1076333>] ? kthread+0x5f/0x64
[   90.507105]  [<c10762d4>] ? kthread+0x0/0x64
[   90.507105]  [<c102f4d7>] ? kernel_thread_helper+0x7/0x10
[   90.507105] Code: 0f 0b eb fe 0f b7 40 08 3b 45 c4 74 04 0f 0b eb fe 8b 45 c4 8d 55 e8 89 5d ec 89 45 e8 b8 01 00 00 00 e8 69 f9 ff ff 85 c0 74 04 <0f> 0b eb fe 8b 55 f0 89 55 c0 8b 15 60 a0 7f c1 8b 4d c0 89 34 
[   90.507105] EIP: [<c121b9e4>] xen_irq_resume+0xe3/0x2b6 SS:ESP 0069:c4c8deec
[   90.507105] ---[ end trace c48e0191332db3e4 ]---
[   90.507105] ------------[ cut here ]------------
[   90.507105] WARNING: at kernel/time/timekeeping.c:260 ktime_get+0x21/0xce()
[   90.507105] Modules linked in: nbd [last unloaded: scsi_wait_scan]
[   90.507105] Pid: 0, comm: swapper Tainted: G      D    2.6.32.57 #1
[   90.507105] Call Trace:
[   90.507105]  [<c1061200>] warn_slowpath_common+0x65/0x7c
[   90.507105]  [<c107de3f>] ? ktime_get+0x21/0xce
[   90.507105]  [<c1061224>] warn_slowpath_null+0xd/0x10
[   90.507105]  [<c107de3f>] ktime_get+0x21/0xce
[   90.507105]  [<c14637fa>] ? schedule+0x82d/0x87a
[   90.507105]  [<c108226c>] tick_nohz_stop_sched_tick+0x76/0x387
[   90.507105]  [<c1082633>] ? T.504+0x1d/0x25
[   90.507105]  [<c10827c2>] ? tick_nohz_restart_sched_tick+0x187/0x18f
[   90.507105]  [<c102bb75>] ? xen_safe_halt+0x12/0x1f
[   90.507105]  [<c102dc1b>] cpu_idle+0x27/0x70
[   90.507105]  [<c1449ca1>] rest_init+0x5d/0x5f
[   90.507105]  [<c16dd85f>] start_kernel+0x315/0x31a
[   90.507105]  [<c16dd0a8>] i386_start_kernel+0x97/0x9e
[   90.507105]  [<c16e0cae>] xen_start_kernel+0x557/0x55f
[   90.507105] ---[ end trace c48e0191332db3e5 ]---

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 11:04:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 11:04:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7P1Y-0004x2-1H; Fri, 31 Aug 2012 11:04:28 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T7P1W-0004wk-L2
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 11:04:26 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1346411058!8967370!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEyMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16834 invoked from network); 31 Aug 2012 11:04:20 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 11:04:20 -0000
X-IronPort-AV: E=Sophos;i="4.80,346,1344211200"; d="scan'208";a="14286499"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 11:04:05 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Fri, 31 Aug 2012 12:04:05 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1T7P1A-0002LD-TE; Fri, 31 Aug 2012 11:04:04 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1T7P1A-00085H-P6;
	Fri, 31 Aug 2012 12:04:04 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20544.39460.499127.781598@mariner.uk.xensource.com>
Date: Fri, 31 Aug 2012 12:04:04 +0100
To: <xen-devel@lists.xen.org>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: ian.campbell@eu.citrix.com
Subject: [Xen-devel] Test report: Migration from 4.1 to 4.2 works
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Migration 4.1 xend -> 4.2 xend
  OK

Migration 4.2 -> 4.1 (xend or xl)
  xend: Fails, guest ends up destroyed
  xl: Fails, xl tries to resume at sender but guest gets BUG (see below)
      This is probably a guest bug?

Migration 4.1 xend -> 4.2 xl
  Needs to be done with xl
  Stop xend on source, which leaves domain running and manipulable by xl
  xl migrate -C /etc/xen/debian.guest.osstest.cfg domain potato-beetle
  Works.

However, xl fails on config files which are missing the final
newline.  This should be fixed for 4.2.
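A minimal shell sketch of a workaround (the temp file, its contents, and the
byte count below are illustrative, not from the report): append a trailing
newline to a config file that lacks one before feeding it to xl:

```shell
# Illustrative workaround sketch: xl 4.2 (before the fix discussed above)
# rejects config files without a final newline, so append one when missing.
cfg=$(mktemp)
printf 'name = "guest"' > "$cfg"                # 14 bytes, no final newline
tail -c 1 "$cfg" | read -r _ || echo >> "$cfg"  # read fails iff no trailing \n
size=$(wc -c < "$cfg" | tr -d ' ')              # now 15: newline was appended
rm -f "$cfg"
```

With the final newline in place, `xl migrate -C <cfg> ...` should no longer
trip over the parser.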

Ian.

  xc: error: Max batch size exceeded (-18). Giving up.: Internal error
  xc: error: Error when reading batch (90 = Message too long): Internal error
  libxl: error: libxl_dom.c:313:libxl__domain_restore_common restoring domain: Resource temporarily unavailable
  cannot (re-)build domain: -3
  libxl: error: libxl.c:711:libxl_domain_destroy non-existant domain 6
  migration target: Domain creation failed (code -3).
  libxl: error: libxl_utils.c:363:libxl_read_exactly: file/stream truncated reading ready message from migration receiver stream
  libxl: info: libxl_exec.c:118:libxl_report_child_exitstatus: migration target process [15654] exited with error status 3
  Migration failed, resuming at sender.

[   37.151396] Setting capacity to 8388608
[   37.151988] Setting capacity to 8388608
[   37.172710] Setting capacity to 2048000
[   90.507105] ------------[ cut here ]------------
[   90.507105] kernel BUG at drivers/xen/events.c:1344!
[   90.507105] invalid opcode: 0000 [#1] SMP 
[   90.507105] last sysfs file: /sys/devices/virtual/net/lo/operstate
[   90.507105] Modules linked in: nbd [last unloaded: scsi_wait_scan]
[   90.507105] 
[   90.507105] Pid: 1299, comm: kstop/0 Not tainted (2.6.32.57 #1) 
[   90.507105] EIP: 0061:[<c121b9e4>] EFLAGS: 00010082 CPU: 0
[   90.507105] EIP is at xen_irq_resume+0xe3/0x2b6
[   90.507105] EAX: ffffffef EBX: 00000000 ECX: deadbeef EDX: c4c8df24
[   90.507105] ESI: 000001ff EDI: 00001ff0 EBP: c4c8df3c ESP: c4c8deec
[   90.507105]  DS: 007b ES: 007b FS: 00d8 GS: 0000 SS: 0069
[   90.507105] Process kstop/0 (pid: 1299, ti=c4c8c000 task=db980000 task.ti=c4c8c000)
[   90.507105] Stack:
[   90.507105]  c102ce92 c4c8df14 c1752288 c1752228 c1770004 00000000 c4c8df24 c102c270
[   90.507105] <0> c1771004 c4de1720 c1770004 c102c267 c1464b35 c166fed0 00000000 00000000
[   90.507105] <0> deadbeef deadbeef 00000003 c6000c14 c4c8df5c c121d011 00000000 dfc63f5c
[   90.507105] Call Trace:
[   90.507105]  [<c102ce92>] ? __xen_spin_lock+0xcb/0xdf
[   90.507105]  [<c102c270>] ? check_events+0x8/0xc
[   90.507105]  [<c102c267>] ? xen_restore_fl_direct_end+0x0/0x1
[   90.507105]  [<c1464b35>] ? _spin_unlock_irqrestore+0x40/0x43
[   90.507105]  [<c121d011>] ? xen_suspend+0x8c/0xa6
[   90.507105]  [<c1097f4f>] ? stop_cpu+0x7d/0xc9
[   90.507105]  [<c1073582>] ? worker_thread+0x15c/0x1f4
[   90.507105]  [<c1097ed2>] ? stop_cpu+0x0/0xc9
[   90.507105]  [<c107664d>] ? autoremove_wake_function+0x0/0x2f
[   90.507105]  [<c1073426>] ? worker_thread+0x0/0x1f4
[   90.507105]  [<c1076333>] ? kthread+0x5f/0x64
[   90.507105]  [<c10762d4>] ? kthread+0x0/0x64
[   90.507105]  [<c102f4d7>] ? kernel_thread_helper+0x7/0x10
[   90.507105] Code: 0f 0b eb fe 0f b7 40 08 3b 45 c4 74 04 0f 0b eb fe 8b 45 c4 8d 55 e8 89 5d ec 89 45 e8 b8 01 00 00 00 e8 69 f9 ff ff 85 c0 74 04 <0f> 0b eb fe 8b 55 f0 89 55 c0 8b 15 60 a0 7f c1 8b 4d c0 89 34 
[   90.507105] EIP: [<c121b9e4>] xen_irq_resume+0xe3/0x2b6 SS:ESP 0069:c4c8deec
[   90.507105] ---[ end trace c48e0191332db3e4 ]---
[   90.507105] ------------[ cut here ]------------
[   90.507105] WARNING: at kernel/time/timekeeping.c:260 ktime_get+0x21/0xce()
[   90.507105] Modules linked in: nbd [last unloaded: scsi_wait_scan]
[   90.507105] Pid: 0, comm: swapper Tainted: G      D    2.6.32.57 #1
[   90.507105] Call Trace:
[   90.507105]  [<c1061200>] warn_slowpath_common+0x65/0x7c
[   90.507105]  [<c107de3f>] ? ktime_get+0x21/0xce
[   90.507105]  [<c1061224>] warn_slowpath_null+0xd/0x10
[   90.507105]  [<c107de3f>] ktime_get+0x21/0xce
[   90.507105]  [<c14637fa>] ? schedule+0x82d/0x87a
[   90.507105]  [<c108226c>] tick_nohz_stop_sched_tick+0x76/0x387
[   90.507105]  [<c1082633>] ? T.504+0x1d/0x25
[   90.507105]  [<c10827c2>] ? tick_nohz_restart_sched_tick+0x187/0x18f
[   90.507105]  [<c102bb75>] ? xen_safe_halt+0x12/0x1f
[   90.507105]  [<c102dc1b>] cpu_idle+0x27/0x70
[   90.507105]  [<c1449ca1>] rest_init+0x5d/0x5f
[   90.507105]  [<c16dd85f>] start_kernel+0x315/0x31a
[   90.507105]  [<c16dd0a8>] i386_start_kernel+0x97/0x9e
[   90.507105]  [<c16e0cae>] xen_start_kernel+0x557/0x55f
[   90.507105] ---[ end trace c48e0191332db3e5 ]---

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 11:05:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 11:05:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7P2a-00052R-G4; Fri, 31 Aug 2012 11:05:32 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T7P2Y-00052B-Fx
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 11:05:30 +0000
Received: from [85.158.139.83:13832] by server-11.bemta-5.messagelabs.com id
	4D/4B-24658-97A90405; Fri, 31 Aug 2012 11:05:29 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-182.messagelabs.com!1346411128!27721857!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM2OTU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19739 invoked from network); 31 Aug 2012 11:05:28 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-182.messagelabs.com with SMTP;
	31 Aug 2012 11:05:28 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 31 Aug 2012 12:05:27 +0100
Message-Id: <5040B6940200007800097CDE@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 31 Aug 2012 12:05:24 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Dan Magenheimer" <dan.magenheimer@oracle.com>,
	"bing" <Libing.Chen@uts.edu.au>
References: <76048170-772b-4365-87d2-00a3bf4aa5f8@default>
	<503F053D.5010001@uts.edu.au>
	<aa529a53-a56b-4215-81c2-f1fda4acedbd@default>
In-Reply-To: <aa529a53-a56b-4215-81c2-f1fda4acedbd@default>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen 4.2.0 won't boot with FC17 dom0 on Dell
 Optiplex 790, boots fine with 4.1.3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 30.08.12 at 19:04, Dan Magenheimer <dan.magenheimer@oracle.com> wrote:
>>  From: bing [mailto:Libing.Chen@uts.edu.au]
>> Sent: Thursday, August 30, 2012 12:16 AM
>> To: xen-devel@lists.xen.org 
>> Subject: Re: [Xen-devel] Xen 4.2.0 won't boot with FC17 dom0 on Dell Optiplex 
> 790, boots fine with
>> 4.1.3
>> 
>> I have a similar crash on Xen 4.2.0-rc3 with an HP DC7900: it reboots
>> right after loading the Dom0 kernel. I found a workaround by passing the
>> xsave=0 option to the Xen command line. It seems to be CPU related; the
>> same setup (same Xen, Dom0 kernel version) works fine on another Dell
>> Precision 3200.
> 
> Thanks, just saw this as I wasn't cc'ed on your email.  xsave=0
> did solve my problem also.

But that was known to be required only with certain non-upstream,
broken kernels (which got their own xsave handling wrong). Can
either of you confirm that this is a problem with a plain upstream
kernel now too (and if so, with which version(s))?
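For anyone needing the workaround in the meantime, a sketch of where xsave=0
goes in a GRUB 2 Xen entry (the menu title, paths, and versions are examples,
not from this thread); it is a hypervisor option, so it belongs on the
multiboot (Xen) line, not on the Dom0 kernel's module line:

```
menuentry 'Xen (xsave disabled)' {
    # Example entry only: paths and versions are placeholders.
    multiboot /boot/xen.gz xsave=0
    module    /boot/vmlinuz-3.5.0 root=/dev/sda1 ro
    module    /boot/initrd.img-3.5.0
}
```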

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 11:09:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 11:09:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7P6V-0005IQ-6h; Fri, 31 Aug 2012 11:09:35 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T7P6U-0005IL-Al
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 11:09:34 +0000
Received: from [85.158.139.83:27778] by server-12.bemta-5.messagelabs.com id
	86/03-18300-D6B90405; Fri, 31 Aug 2012 11:09:33 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-182.messagelabs.com!1346411372!20673255!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEyMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8219 invoked from network); 31 Aug 2012 11:09:33 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-11.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 11:09:33 -0000
X-IronPort-AV: E=Sophos;i="4.80,346,1344211200"; d="scan'208";a="14286643"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 11:09:14 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1;
	Fri, 31 Aug 2012 12:09:14 +0100
Message-ID: <1346411353.27277.183.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Fri, 31 Aug 2012 12:09:13 +0100
In-Reply-To: <20544.39460.499127.781598@mariner.uk.xensource.com>
References: <20544.39460.499127.781598@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Test report: Migration from 4.1 to 4.2 works
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-08-31 at 12:04 +0100, Ian Jackson wrote:
> Migration 4.1 xend -> 4.2 xend
>   OK
> 
> Migration 4.2 -> 4.1 (xend or xl)

We don't support this, so I'm not too concerned about the failures.

>   xend: Fails, guest ends up destroyed
>   xl: Fails, xl tries to resume at sender but guest gets BUG (see below)
>       This is probably a guest bug?
> 
> Migration 4.1 xend -> 4.2 xl
>   Needs to be done with xl
>   Stop xend on source, which leaves domain running and manipulable by xl
>   xl migrate -C /etc/xen/debian.guest.osstest.cfg domain potato-beetle
>   Works.

This is great. We should write this down somewhere. I guess
http://wiki.xen.org/wiki/XL#Upgrading_from_xend ?

> However, xl fails on config files which are missing the final
> newline.  This should be fixed for 4.2.

Since that, I suppose, involves the parser, are you going to do that?

Did you also try xl 4.1 -> 4.2? 

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 11:10:14 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 11:10:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7P6x-0005MG-P9; Fri, 31 Aug 2012 11:10:03 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T7P6w-0005M3-Mj
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 11:10:02 +0000
Received: from [85.158.138.51:40128] by server-4.bemta-3.messagelabs.com id
	03/52-24831-98B90405; Fri, 31 Aug 2012 11:10:01 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-7.tower-174.messagelabs.com!1346411401!18994667!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM2OTU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23546 invoked from network); 31 Aug 2012 11:10:01 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-7.tower-174.messagelabs.com with SMTP;
	31 Aug 2012 11:10:01 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 31 Aug 2012 12:10:00 +0100
Message-Id: <5040B7A40200007800097CEE@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 31 Aug 2012 12:09:56 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Javier Marcet" <jmarcet@gmail.com>
References: <CAAnFQG8z_ja1Wj2hX+0ZRsz5eLWr+U+7PoSiF9NePsSh2jbX4g@mail.gmail.com>
In-Reply-To: <CAAnFQG8z_ja1Wj2hX+0ZRsz5eLWr+U+7PoSiF9NePsSh2jbX4g@mail.gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Xen Devel Mailing list <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.2.0-rc4 bugs with GigaByte H77M-D3H + Core i7
 3770
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 30.08.12 at 12:43, Javier Marcet <jmarcet@gmail.com> wrote:
> [    0.358278] DMAR:[DMA Read] Request device [00:02.0] fault addr 9fac7000
> [    0.358278] DMAR:[fault reason 06] PTE Read access is not set
> [    0.358286] DRHD: handling fault status reg 2
> [    0.358288] DMAR:[DMA Read] Request device [00:02.0] fault addr 9fac7000
> [    0.358288] DMAR:[fault reason 06] PTE Read access is not set
> [    0.358291] DMAR:[DMA Read] Request device [00:02.0] fault addr 9fac7000
> [    0.358291] DMAR:[fault reason 06] PTE Read access is not set
> [    0.358307] DRHD: handling fault status reg 3
> 
> Furthermore, later on, just after enabling the IOMMU, I get this:
> 
> [    0.328564] DMAR: No ATSR found
> [    0.328580] IOMMU 1 0xfed91000: using Queued invalidation
> [    0.328582] IOMMU: Setting RMRR:
> [    0.328589] IOMMU: Setting identity map for device 0000:00:1d.0
> [0x9de36000 - 0x9de52fff]
> [    0.328606] IOMMU: Setting identity map for device 0000:00:1a.0
> [0x9de36000 - 0x9de52fff]
> [    0.328617] IOMMU: Setting identity map for device 0000:00:14.0
> [0x9de36000 - 0x9de52fff]
> [    0.328625] IOMMU: Prepare 0-16MiB unity mapping for LPC
> [    0.328630] IOMMU: Setting identity map for device 0000:00:1f.0
> [0x0 - 0xffffff]

None of these messages should appear when running under Xen,
as it's the hypervisor, not the kernel, that takes care of the
IOMMU(s). The logs you pointed to confirm this - are you sure the
above was seen when running under Xen?
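As a quick sanity check for the question above, something like the following
distinguishes where IOMMU messages should originate (the hypervisor type
string here is a stand-in value; on a live system it would come from
/sys/hypervisor/type):

```shell
# Sketch, not from this thread: under Xen the hypervisor owns the IOMMU,
# so DMAR/IOMMU setup messages belong in `xl dmesg`, not the kernel log.
hv_type="xen"             # stand-in for: hv_type=$(cat /sys/hypervisor/type)
if [ "$hv_type" = "xen" ]; then
    iommu_log="xl dmesg"  # hypervisor console log
else
    iommu_log="dmesg"     # bare metal: the kernel handles the IOMMU itself
fi
```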

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 11:14:42 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 11:14:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7PBG-0005bs-Fh; Fri, 31 Aug 2012 11:14:30 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T7PBF-0005bm-2j
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 11:14:29 +0000
Received: from [85.158.139.83:34043] by server-5.bemta-5.messagelabs.com id
	90/E8-30514-49C90405; Fri, 31 Aug 2012 11:14:28 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-182.messagelabs.com!1346411667!27816114!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEyMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27419 invoked from network); 31 Aug 2012 11:14:27 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 11:14:27 -0000
X-IronPort-AV: E=Sophos;i="4.80,346,1344211200"; d="scan'208";a="14286747"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 11:13:56 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1;
	Fri, 31 Aug 2012 12:13:56 +0100
Message-ID: <1346411635.27277.185.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Fri, 31 Aug 2012 12:13:55 +0100
In-Reply-To: <1346411353.27277.183.camel@zakaz.uk.xensource.com>
References: <20544.39460.499127.781598@mariner.uk.xensource.com>
	<1346411353.27277.183.camel@zakaz.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Test report: Migration from 4.1 to 4.2 works
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-08-31 at 12:09 +0100, Ian Campbell wrote:

Should have said in here somewhere: Thanks for doing this.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 11:16:18 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 11:16:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7PCn-0005hq-VP; Fri, 31 Aug 2012 11:16:05 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T7PCm-0005hk-Nj
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 11:16:04 +0000
Received: from [85.158.138.51:36595] by server-6.bemta-3.messagelabs.com id
	1E/72-29694-3FC90405; Fri, 31 Aug 2012 11:16:03 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-15.tower-174.messagelabs.com!1346411763!26132660!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM2OTU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21492 invoked from network); 31 Aug 2012 11:16:03 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-15.tower-174.messagelabs.com with SMTP;
	31 Aug 2012 11:16:03 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 31 Aug 2012 12:16:02 +0100
Message-Id: <5040B90F0200007800097D01@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 31 Aug 2012 12:15:59 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Mukesh Rathor" <mukesh.rathor@oracle.com>
References: <20120830112141.3e2dec75@mantra.us.oracle.com>
In-Reply-To: <20120830112141.3e2dec75@mantra.us.oracle.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [RFC PATCH 0/2]: hypervisor debugger
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 30.08.12 at 20:21, Mukesh Rathor <mukesh.rathor@oracle.com> wrote:
> As promised sending two patches after this. First is the changes to
> common code. Other is a tar file of kdb subdirectory under
> xen-unstable.hg/xen.
> 
> It seems there is enough interest that it's worth considering for
> merging into Xen. The good thing is I've developed it as I debug things,
> so it's written entirely from the perspective of a developer who did
> not have access to any other tools like JTAG etc.

I'm glad to see something like this coming along, but see below.

> BTW, I'd like to rename it from kdb to xdb or hdb in the final
> submission.

Yes, please.

> At present I've following commands:
> ...

So this is the same command-line-driven, hard-to-use interface that
Linux kernel debuggers and gdb use, with all the needed information
hardly ever visible at once on the screen because things scroll
away too fast. Unfortunately it's quite unlikely that I'll ever
find time to port the debugger I had ported to Linux a few years
back (called NLKD at the time) over to Xen, which - with its
Borland Turbo Debugger derived UI - makes all of the most important
state information available at once.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 11:17:10 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 11:17:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7PDi-0005mZ-Dc; Fri, 31 Aug 2012 11:17:02 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T7PDg-0005m3-VH
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 11:17:01 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1346411760!3798992!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEyMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19040 invoked from network); 31 Aug 2012 11:16:01 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 11:16:01 -0000
X-IronPort-AV: E=Sophos;i="4.80,346,1344211200"; d="scan'208";a="14286795"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 11:15:59 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1;
	Fri, 31 Aug 2012 12:15:59 +0100
Message-ID: <1346411758.27277.186.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Fri, 31 Aug 2012 12:15:58 +0100
In-Reply-To: <20543.39324.318867.373709@mariner.uk.xensource.com>
References: <CAFLBxZaEci0mOcDCgFX9zk=wh3z4Nf1LD5E5Fcy7Y3=ioDAM=g@mail.gmail.com>
	<1345048547.5926.245.camel@zakaz.uk.xensource.com>
	<20543.35421.80723.497439@mariner.uk.xensource.com>
	<20543.39324.318867.373709@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: George Dunlap <dunlapg@umich.edu>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [TESTDAY] xl cpupool-create segfaults if given
 invalid configuration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-30 at 17:49 +0100, Ian Jackson wrote:
> Ian Jackson writes ("Re: [Xen-devel] [TESTDAY] xl cpupool-create segfaults if given invalid configuration"):
> > I don't think this is correct.  It may happen to work with this
> > version of bison but I don't think you're allowed to assign to $3.
> 
> I think this fixes it.

It works for me.
> 
> Ian.
> 
> From: Ian Jackson <ian.jackson@eu.citrix.com>
> Subject: [PATCH] libxl: fix double free on some config parser errors
> 
> If libxlu_cfg_y.y encountered a config file error, the code generated
> by bison would sometimes _both_ run the %destructor _and_ call
> xlu__cfg_set_store for the same XLU_ConfigSetting* semantic value.
> The result would be a double free.
> 
> This appears to be because of the use of a mid-rule action.  There is
> some discussion of the problems with destructors and mid-rule action
> error handling in "(bison)Mid-Rule Actions".  This area is complex and
> best avoided.
> 
> So fix the bug by abolishing the use of a mid-rule action, which was
> in any case not necessary here.
> 
> Also while we are there rename the nonterminal rule "setting" to
> "assignment", to avoid confusion with the token type "setting", which
> had an identical name in a different namespace.  This was especially
> confusing because the nonterminal "setting" did not have "setting" as
> the type of its semantic value!  (In fact the nonterminal, now called
> "assignment", does not have a value so it does not have a value type.)
> 
> Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

I shall apply in a moment, thanks.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 11:23:47 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 11:23:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7PK3-00063Z-9u; Fri, 31 Aug 2012 11:23:35 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T7PK1-00063U-K1
	for Xen-devel@lists.xensource.com; Fri, 31 Aug 2012 11:23:33 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1346412124!3799855!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM2OTU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5629 invoked from network); 31 Aug 2012 11:22:06 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-10.tower-27.messagelabs.com with SMTP;
	31 Aug 2012 11:22:06 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 31 Aug 2012 12:22:03 +0100
Message-Id: <5040BA770200007800097D14@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 31 Aug 2012 12:21:59 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Matt Wilson" <msw@amazon.com>
References: <20120830112323.5086d73c@mantra.us.oracle.com>
	<20120830213230.GA32155@u002268147cd4502c336d.ant.amazon.com>
In-Reply-To: <20120830213230.GA32155@u002268147cd4502c336d.ant.amazon.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [RFC PATCH 1/2]: hypervisor debugger
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 30.08.12 at 23:32, Matt Wilson <msw@amazon.com> wrote:
> On Thu, Aug 30, 2012 at 11:23:23AM -0700, Mukesh Rathor wrote:
>> +#ifdef XEN_KDB_CONFIG
>> +#if defined(__x86_64__)
>> +        testl $1, kdb_session_begun(%rip)
>> +#else
>> +        testl $1, kdb_session_begun
>> +#endif
>> +        jnz  .Lkdb_skip_softirq
>> +#endif
> 
> Not sure if something like:
> 
>         cmpb  $0,kdb_session_begun(%rip)
> UNLIKELY_START(ne, kdb_session_exists)
>         jmp .Lkdb_skip_softirq
> UNLIKELY_END(ne, kdb_session_exists)
> 
> is worth it in this case, since it's just a jnz.

Clearly not, as the UNLIKELY_START() itself already expands to
a conditional branch.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 11:32:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 11:32:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7PSy-0006DH-99; Fri, 31 Aug 2012 11:32:48 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1T7PSw-0006DA-22
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 11:32:46 +0000
Received: from [85.158.138.51:21227] by server-12.bemta-3.messagelabs.com id
	9A/1D-10384-DD0A0405; Fri, 31 Aug 2012 11:32:45 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-5.tower-174.messagelabs.com!1346412763!27975807!1
X-Originating-IP: [188.40.164.121]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31379 invoked from network); 31 Aug 2012 11:32:43 -0000
Received: from static.121.164.40.188.clients.your-server.de (HELO
	smtp.eikelenboom.it) (188.40.164.121)
	by server-5.tower-174.messagelabs.com with AES256-SHA encrypted SMTP;
	31 Aug 2012 11:32:43 -0000
Received: from 26-69-ftth.onsneteindhoven.nl ([88.159.69.26]:50676
	helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.72) (envelope-from <linux@eikelenboom.it>)
	id 1T7PPm-00058m-U1; Fri, 31 Aug 2012 13:29:30 +0200
Date: Fri, 31 Aug 2012 13:32:38 +0200
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <1342800738.20120831133238@eikelenboom.it>
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1346410506.27277.179.camel@zakaz.uk.xensource.com>
References: <885821548.20120831124902@eikelenboom.it>
	<1346410506.27277.179.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] xl / xend feature parity: Missing '-a' option for
	xl 'shutdown' to shutdown all domains
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Friday, August 31, 2012, 12:55:06 PM, you wrote:

> On Fri, 2012-08-31 at 11:49 +0100, Sander Eikelenboom wrote:
>> Hi All,
>> 
>> Is there any reason why xl doesn't support the '-a' option for
>> shutdown,  to shutdown all domains ?

> I'd never heard of it for one thing ;-)

> It should be a reasonably easy patch -- I can give some pointers if you
> are interested.

> Ian.

Could give it a try, although my C skills are virtually non-existent :-)
So every pointer would be handy!


I found out some more slightly related issues/inconsistencies:

- /etc/default/xendomains contains:
  XENDOMAINS_SHUTDOWN="--halt --wait"
  XENDOMAINS_SHUTDOWN_ALL="--all --halt --wait"

  which are used by /etc/init.d/xendomains.

- man xm says:

       shutdown [OPTIONS] domain-id
           Gracefully shuts down a domain.  This coordinates with the domain OS to perform a graceful shutdown, so there is no guarantee that it will succeed, and it may take a variable length of time depending on what services must be
           shut down in the domain.  The command returns immediately after signalling the domain unless the -w flag is used.

           The behavior of what happens to a domain when it reboots is set by the on_shutdown parameter of the xmdomain.cfg file when the domain was created.

           OPTIONS

           -a  Shutdown all domains.  Often used when doing a complete shutdown of a Xen system.

           -w  Wait for the domain to complete shutdown before returning.

- man xl says:
       shutdown [OPTIONS] domain-id
           Gracefully shuts down a domain.  This coordinates with the domain OS to perform a graceful shutdown, so there is no guarantee that it will succeed, and it may take a variable length of time depending on what services must be
           shut down in the domain.

           For HVM domains this requires PV drivers to be installed in your guest OS. If PV drivers are not present but you have configured the guest OS to behave appropriately you may be able to use the -F option to trigger a power
           button press.

           The command returns immediately after signalling the domain unless the -w flag is used.

           The behavior of what happens to a domain when it reboots is set by the on_shutdown parameter of the domain configuration file when the domain was created.

           OPTIONS

           -w  Wait for the domain to complete shutdown before returning.

           -F  If the guest does not support PV shutdown control then fall back to sending an ACPI power event (equivalent to the power option to trigger).

               You should ensure that the guest is configured to behave as expected in response to this event.

- xm shutdown --help
  Usage: xm shutdown <Domain> [-waRH]

  There are two undocumented options "R" and "H"


So I have to:
     - Implement the '-a' option
     - Update docs that '-a' is supported
     - Find out what "--halt" / "-H" did .. and perhaps implement that as well
     - Change /etc/default to use the short option format, so it will work for both xm and xl

--
Sander


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 11:34:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 11:34:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7PTu-0006J9-OI; Fri, 31 Aug 2012 11:33:46 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T7PTt-0006IJ-Sb
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 11:33:46 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1346412650!3801443!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM2OTU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7352 invoked from network); 31 Aug 2012 11:30:50 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-10.tower-27.messagelabs.com with SMTP;
	31 Aug 2012 11:30:50 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 31 Aug 2012 12:30:49 +0100
Message-Id: <5040BC860200007800097D24@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 31 Aug 2012 12:30:46 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Jackson" <Ian.Jackson@eu.citrix.com>
References: <20544.39460.499127.781598@mariner.uk.xensource.com>
In-Reply-To: <20544.39460.499127.781598@mariner.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: ian.campbell@eu.citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Test report: Migration from 4.1 to 4.2 works
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 31.08.12 at 13:04, Ian Jackson <Ian.Jackson@eu.citrix.com> wrote:
> Migration 4.1 xend -> 4.2 xl
>   Needs to be done with xl
>   Stop xend on source, which leaves domain running and manipulable by xl
>   xl migrate -C /etc/xen/debian.guest.osstest.cfg domain potato-beetle
>   Works.

Is that really an acceptable approach? What if you have multiple
VMs running, and want to migrate just some of them? All the others
would remain unmanageable at least for the duration of the
migration(s). (And I also wonder if 4.1's xl is complete/stable
enough to recommend such an approach as a general mechanism.)

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 11:52:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 11:52:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7Pm6-0006b1-F6; Fri, 31 Aug 2012 11:52:34 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T7Pm5-0006aw-0b
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 11:52:33 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1346413940!2085437!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEyMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4522 invoked from network); 31 Aug 2012 11:52:21 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 11:52:21 -0000
X-IronPort-AV: E=Sophos;i="4.80,346,1344211200"; d="scan'208";a="14287545"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 11:52:20 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1;
	Fri, 31 Aug 2012 12:52:20 +0100
Message-ID: <1346413938.27277.196.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Sander Eikelenboom <linux@eikelenboom.it>
Date: Fri, 31 Aug 2012 12:52:18 +0100
In-Reply-To: <1342800738.20120831133238@eikelenboom.it>
References: <885821548.20120831124902@eikelenboom.it>
	<1346410506.27277.179.camel@zakaz.uk.xensource.com>
	<1342800738.20120831133238@eikelenboom.it>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] xl / xend feature parity: Missing '-a' option for
 xl 'shutdown' to shutdown all domains
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-08-31 at 12:32 +0100, Sander Eikelenboom wrote:
> Friday, August 31, 2012, 12:55:06 PM, you wrote:
> 
> > On Fri, 2012-08-31 at 11:49 +0100, Sander Eikelenboom wrote:
> >> Hi All,
> >> 
> >> Is there any reason why xl doesn't support the '-a' option for
> >> shutdown,  to shutdown all domains ?
> 
> > I'd never heard of it for one thing ;-)
> 
> > It should be a reasonably easy patch -- I can give some pointers if you
> > are interested.
> 
> > Ian.
> 
> Could give it a try, although my C skills are virtually non existent :-)
> So every pointer could be handy !
[...]
>      - Implement the '-a' option

In the case where -a is given you want to call libxl_list_domain, then
loop over the list and finally call libxl_dominfo_list_free.

main_list() might be a handy reference although its semantics are subtly
different (it effectively assumes -a if you don't give a domain, which you
don't want for shutdown!)

vcpulist() might also be a handy reference.
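A minimal C sketch of the loop described above, for orientation only: the function names are the ones given in this mail, but exact libxl signatures vary between Xen versions, and the dom0 skip is an added assumption, not something the mail specifies.

```c
/* Rough sketch (not the actual xl patch) of an "-a" loop:
 * list all domains, request a shutdown for each, free the list.
 * Signatures are approximate; the dom0 check is an assumption. */
#include <libxl.h>

static int shutdown_all(libxl_ctx *ctx)
{
    int nb_domain = 0, i;
    libxl_dominfo *info = libxl_list_domain(ctx, &nb_domain);

    if (!info)
        return 1;

    for (i = 0; i < nb_domain; i++) {
        if (info[i].domid == 0)
            continue;           /* never shut down dom0 from here */
        libxl_domain_shutdown(ctx, info[i].domid);
    }

    libxl_dominfo_list_free(info, nb_domain);
    return 0;
}
```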

>      - Update docs that '-a' is supported

This should be the easiest bit ;-)

You also want to update xl_cmdtable.c to include the new option in xl
help etc.

>      - Find out what "--halt" / "-H" did .. and perhaps implement that as well

tools/python/xen/xm/shutdown.py says
        gopts.opt('halt', short='H',
                  fn=set_true, default=0,
                  use='Shutdown without reboot.')

which I guess means shutdown on xm can behave like xm reboot. Later on
it does:
    if opts.vals.halt:
        return 'halt'
    elif opts.vals.reboot:
        return 'reboot'
    else:
        return 'poweroff'

i.e.
	xm shutdown -H -> "halt"
	xm shutdown -R -> "reboot"
	xm shutdown    -> "poweroff"

Linux in a guest treats "halt" and "poweroff" identically.

So I think --halt/-H is pointless and you can remove it from the
defaults.
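The halt/reboot/poweroff selection quoted from shutdown.py can be restated as a tiny standalone sketch (the helper name is hypothetical; the logic mirrors the snippet above):

```python
# Mirrors xm's shutdown-reason selection from shutdown.py:
# -H wins over -R; with neither flag the reason is "poweroff".
def shutdown_reason(halt=False, reboot=False):
    if halt:
        return 'halt'
    elif reboot:
        return 'reboot'
    else:
        return 'poweroff'

print(shutdown_reason(halt=True))    # xm shutdown -H
print(shutdown_reason(reboot=True))  # xm shutdown -R
print(shutdown_reason())             # xm shutdown
```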

>      - Change /etc/default to use the short option format, so it will work for both xm and xl

In principle it is possible for xl to support long options too (see e.g.
main_create), but changing the default would be OK for 4.2 IMHO.

Thanks for looking into this.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 11:56:40 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 11:56:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7Ppn-0006iO-61; Fri, 31 Aug 2012 11:56:23 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mathias.gaunard@ens-lyon.org>) id 1T7Ppl-0006iE-Nn
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 11:56:21 +0000
Received: from [85.158.138.51:52237] by server-11.bemta-3.messagelabs.com id
	EB/5E-30250-566A0405; Fri, 31 Aug 2012 11:56:21 +0000
X-Env-Sender: mathias.gaunard@ens-lyon.org
X-Msg-Ref: server-11.tower-174.messagelabs.com!1346414180!27901560!1
X-Originating-IP: [87.98.165.232]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27237 invoked from network); 31 Aug 2012 11:56:20 -0000
Received: from 10.mo3.mail-out.ovh.net (HELO mo3.mail-out.ovh.net)
	(87.98.165.232) by server-11.tower-174.messagelabs.com with SMTP;
	31 Aug 2012 11:56:20 -0000
Received: from mail422.ha.ovh.net (b9.ovh.net [213.186.33.59])
	by mo3.mail-out.ovh.net (Postfix) with SMTP id 60EF9FF98E1
	for <xen-devel@lists.xen.org>; Fri, 31 Aug 2012 14:03:42 +0200 (CEST)
Received: from b0.ovh.net (HELO queueout) (213.186.33.50)
	by b0.ovh.net with SMTP; 31 Aug 2012 13:56:27 +0200
Received: from vbo91-4-88-164-255-250.fbx.proxad.net (HELO ?192.168.0.4?)
	(mathias@gaunard.com@88.164.255.250)
	by ns0.ovh.net with SMTP; 31 Aug 2012 13:56:26 +0200
Message-ID: <5040A65E.6060608@ens-lyon.org>
Date: Fri, 31 Aug 2012 13:56:14 +0200
From: Mathias Gaunard <mathias.gaunard@ens-lyon.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120714 Thunderbird/14.0
MIME-Version: 1.0
To: xen-devel@lists.xen.org
X-Ovh-Mailout: 178.32.228.3 (mo3.mail-out.ovh.net)
References: <20544.39460.499127.781598@mariner.uk.xensource.com>
In-Reply-To: <20544.39460.499127.781598@mariner.uk.xensource.com>
X-Ovh-Tracer-Id: 11991396959680397863
X-Ovh-Remote: 88.164.255.250 (vbo91-4-88-164-255-250.fbx.proxad.net)
X-Ovh-Local: 213.186.33.20 (ns0.ovh.net)
X-OVH-SPAMSTATE: OK
X-OVH-SPAMSCORE: 0
X-OVH-SPAMCAUSE: gggruggvucftvghtrhhoucdtuddrfeehtddrfeelucetufdoteggodetrfcurfhrohhfihhlvgemucfqggfjnecuuegrihhlohhuthemuceftddtnecunecuhfhrohhmpeforghthhhirghsucfirghunhgrrhguuceomhgrthhhihgrshdrghgruhhnrghrugesvghnshdqlhihohhnrdhorhhgqeenucfjughrpefkfffhfgggvffufhgjtgfgsehtjegrtddtfedu
X-Spam-Check: DONE|U 0.500003/N
X-VR-SPAMSTATE: OK
X-VR-SPAMSCORE: 0
X-VR-SPAMCAUSE: gggruggvucftvghtrhhoucdtuddrfeehtddrfeelucetufdoteggodetrfcurfhrohhfihhlvgemucfqggfjnecuuegrihhlohhuthemuceftddtnecunecuhfhrohhmpeforghthhhirghsucfirghunhgrrhguuceomhgrthhhihgrshdrghgruhhnrghrugesvghnshdqlhihohhnrdhorhhgqeenucfjughrpefkfffhfgggvffufhgjtgfgsehtjegrtddtfedu
Subject: Re: [Xen-devel] Test report: Migration from 4.1 to 4.2 works
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 31/08/2012 13:04, Ian Jackson wrote:

> Migration 4.1 xend -> 4.2 xl
>    Needs to be done with xl
>    Stop xend on source, which leaves domain running and manipulable by xl
>    xl migrate -C /etc/xen/debian.guest.osstest.cfg domain potato-beetle
>    Works.

Does it work out of the box if you just choose to keep using xend and 
not move to xl?

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

In-Reply-To: <20544.39460.499127.781598@mariner.uk.xensource.com>
X-Ovh-Tracer-Id: 11991396959680397863
X-Ovh-Remote: 88.164.255.250 (vbo91-4-88-164-255-250.fbx.proxad.net)
X-Ovh-Local: 213.186.33.20 (ns0.ovh.net)
X-OVH-SPAMSTATE: OK
X-OVH-SPAMSCORE: 0
X-OVH-SPAMCAUSE: gggruggvucftvghtrhhoucdtuddrfeehtddrfeelucetufdoteggodetrfcurfhrohhfihhlvgemucfqggfjnecuuegrihhlohhuthemuceftddtnecunecuhfhrohhmpeforghthhhirghsucfirghunhgrrhguuceomhgrthhhihgrshdrghgruhhnrghrugesvghnshdqlhihohhnrdhorhhgqeenucfjughrpefkfffhfgggvffufhgjtgfgsehtjegrtddtfedu
X-Spam-Check: DONE|U 0.500003/N
X-VR-SPAMSTATE: OK
X-VR-SPAMSCORE: 0
X-VR-SPAMCAUSE: gggruggvucftvghtrhhoucdtuddrfeehtddrfeelucetufdoteggodetrfcurfhrohhfihhlvgemucfqggfjnecuuegrihhlohhuthemuceftddtnecunecuhfhrohhmpeforghthhhirghsucfirghunhgrrhguuceomhgrthhhihgrshdrghgruhhnrghrugesvghnshdqlhihohhnrdhorhhgqeenucfjughrpefkfffhfgggvffufhgjtgfgsehtjegrtddtfedu
Subject: Re: [Xen-devel] Test report: Migration from 4.1 to 4.2 works
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 31/08/2012 13:04, Ian Jackson wrote:

> Migration 4.1 xend -> 4.2 xl
>    Needs to be done with xl
>    Stop xend on source, which leaves domain running and manipulable by xl
>    xl migrate -C /etc/xen/debian.guest.osstest.cfg domain potato-beetle
>    Works.

Does it work out of the box if you just choose to keep using xend and 
not move to xl?
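[For reference, the steps Ian listed can be sketched as a script. The config path, domain name, and target host are taken from the report; the DRY_RUN guard and run() helper are illustrative additions so the commands can be previewed without a live Xen host, and /etc/init.d/xend is an assumed path for stopping xend (the report does not say how it was stopped).]

```shell
# Sketch of the "4.1 xend -> 4.2 xl" migration steps quoted above.
# DRY_RUN and run() are illustrative; /etc/init.d/xend is an assumption.
DRY_RUN=1

run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "$*"        # preview the command instead of executing it
    else
        "$@"
    fi
}

# Stop xend on the source host; the domain keeps running and becomes
# manipulable by xl.
run /etc/init.d/xend stop

# Migrate with xl, giving the receiving side the guest config via -C.
run xl migrate -C /etc/xen/debian.guest.osstest.cfg domain potato-beetle
```

[With DRY_RUN=0 the same script would execute the commands instead of echoing them.]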

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 12:03:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 12:03:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7PwZ-00072v-Ic; Fri, 31 Aug 2012 12:03:23 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T7PwY-00072q-1y
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 12:03:22 +0000
Received: from [85.158.143.35:8592] by server-2.bemta-4.messagelabs.com id
	5E/B7-21239-908A0405; Fri, 31 Aug 2012 12:03:21 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1346414600!14269908!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEyMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13720 invoked from network); 31 Aug 2012 12:03:20 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 12:03:20 -0000
X-IronPort-AV: E=Sophos;i="4.80,346,1344211200"; d="scan'208";a="14287736"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 12:03:20 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1;
	Fri, 31 Aug 2012 13:03:20 +0100
Message-ID: <1346414598.27277.197.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Mathias Gaunard <mathias.gaunard@ens-lyon.org>
Date: Fri, 31 Aug 2012 13:03:18 +0100
In-Reply-To: <5040A65E.6060608@ens-lyon.org>
References: <20544.39460.499127.781598@mariner.uk.xensource.com>
	<5040A65E.6060608@ens-lyon.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Test report: Migration from 4.1 to 4.2 works
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-08-31 at 12:56 +0100, Mathias Gaunard wrote:
> On 31/08/2012 13:04, Ian Jackson wrote:
> 
> > Migration 4.1 xend -> 4.2 xl
> >    Needs to be done with xl
> >    Stop xend on source, which leaves domain running and manipulable by xl
> >    xl migrate -C /etc/xen/debian.guest.osstest.cfg domain potato-beetle
> >    Works.
> 
> Does it work out of the box if you just choose to keep using xend and 
> not move to xl?

That was the very first line of Ian's report.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 12:09:25 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 12:09:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7Q29-0007Il-GO; Fri, 31 Aug 2012 12:09:09 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T7Q28-0007Ie-4D
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 12:09:08 +0000
Received: from [85.158.138.51:58331] by server-9.bemta-3.messagelabs.com id
	CF/7F-15390-369A0405; Fri, 31 Aug 2012 12:09:07 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-4.tower-174.messagelabs.com!1346414945!27878499!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM2OTU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6823 invoked from network); 31 Aug 2012 12:09:05 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-4.tower-174.messagelabs.com with SMTP;
	31 Aug 2012 12:09:05 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 31 Aug 2012 13:09:05 +0100
Message-Id: <5040C57C0200007800097D59@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 31 Aug 2012 13:09:00 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Sander Eikelenboom" <linux@eikelenboom.it>
References: <1803123652.20120831114437@eikelenboom.it>
In-Reply-To: <1803123652.20120831114437@eikelenboom.it>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Upgrade to xen-unstable-rc4
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 31.08.12 at 11:44, Sander Eikelenboom <linux@eikelenboom.it> wrote:
> - Messages I haven't seen in dmesg with xen-4.1.3 in combination with a 
> 3.6-rc? kernel:
> 
> [    3.333195] tseg: 0000000000
> [    3.333231] CPU: Physical Processor ID: 0
> [    3.333236] CPU: Processor Core ID: 0
> [    3.333240] mce: CPU supports 2 MCE banks
> [    3.333258] LVT offset 0 assigned for vector 0xf9
> [    3.333263] [Firmware Bug]: cpu 0, try to use APIC500 (LVT offset 0) for 
> vector 0xf9, but the register is already in use for vector 0x0 on this cpu
> [    3.333272] [Firmware Bug]: cpu 0, failed to setup threshold interrupt 
> for bank 4, block 0 (MSR00000413=0xc008000001000000)
> [    3.333281] [Firmware Bug]: cpu 0, try to use APIC500 (LVT offset 0) for 
> vector 0xf9, but the register is already in use for vector 0x0 on this cpu
> [    3.333290] [Firmware Bug]: cpu 0, failed to setup threshold interrupt 
> for bank 4, block 1 (MSRC0000408=0xc000000001000000)
> [    3.333306] [Firmware Bug]: cpu 0, try to use APIC500 (LVT offset 0) for 
> vector 0xf9, but the register is already in use for vector 0x0 on this cpu
> [    3.333315] [Firmware Bug]: cpu 0, failed to setup threshold interrupt 
> for bank 4, block 2 (MSRC0000409=0xc000000001000000)
> [    3.333331] Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
> [    3.333331] Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0
> [    3.333331] tlb_flushall_shift is 0xffffffff
> [    3.334057] ACPI: Core revision 20120711
> 
> The kernel gives these firmware bug warnings for the other CPUs later on. 
> This is on an AMD Phenom X6, but everything seems to be working, so perhaps 
> it's a minor thing.

This is the usual effect of keeping all sorts of native code enabled
in the pv-ops kernel: code that tries to control what is really being
controlled by Xen (MCE in this case).

Jan
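[As an aside, a quick way to check whether a given boot produced these warnings is to count them in a saved log. The sample log below is abbreviated from Sander's dmesg excerpt; the file name 'boot.log' is just illustrative.]

```shell
# Count the [Firmware Bug] warnings in a saved boot log.
# 'boot.log' is a hypothetical file name; the lines are abbreviated
# from the dmesg excerpt quoted above.
cat > boot.log <<'EOF'
[    3.333263] [Firmware Bug]: cpu 0, try to use APIC500 (LVT offset 0) for vector 0xf9
[    3.333272] [Firmware Bug]: cpu 0, failed to setup threshold interrupt for bank 4, block 0
[    3.333331] Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
EOF

grep -c '\[Firmware Bug\]' boot.log    # prints 2
rm -f boot.log
```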


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 12:15:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 12:15:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7Q7b-0007Sk-9J; Fri, 31 Aug 2012 12:14:47 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T7Q7a-0007Sb-AQ
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 12:14:46 +0000
Received: from [85.158.143.99:59274] by server-2.bemta-4.messagelabs.com id
	D8/8A-21239-5BAA0405; Fri, 31 Aug 2012 12:14:45 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-10.tower-216.messagelabs.com!1346415283!21452226!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM2OTU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4815 invoked from network); 31 Aug 2012 12:14:43 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-10.tower-216.messagelabs.com with SMTP;
	31 Aug 2012 12:14:43 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 31 Aug 2012 13:14:43 +0100
Message-Id: <5040C6CF0200007800097D65@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 31 Aug 2012 13:14:39 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Jiongxi Li" <jiongxi.li@intel.com>
References: <D9137FCD9CFF644B965863BCFBEDABB8779924@SHSMSX101.ccr.corp.intel.com>
In-Reply-To: <D9137FCD9CFF644B965863BCFBEDABB8779924@SHSMSX101.ccr.corp.intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [ PATCH 1/2] xen: enable APIC-Register
 Virtualization
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 31.08.12 at 11:29, "Li, Jiongxi" <jiongxi.li@intel.com> wrote:
> --- a/xen/arch/x86/hvm/vlapic.c      Fri Aug 24 12:38:18 2012 +0100
> +++ b/xen/arch/x86/hvm/vlapic.c   Thu Aug 30 22:38:26 2012 +0800
> @@ -823,6 +823,16 @@ static int vlapic_write(struct vcpu *v, 
>      return rc;
> }
> 
> +int vlapic_apicv_write(struct vcpu *v, unsigned int offset)
> +{
> +    uint32_t val = vlapic_get_reg(vcpu_vlapic(v), offset);
> +
> +    ASSERT(cpu_has_vmx_apic_reg_virt);

Given that the function and the assertion are in a common file, both
should be named without using VMX-specific terms (or moved
elsewhere).

Jan

> +
> +    vlapic_reg_write(v, offset, val);
> +    return 0;
> +}
> +



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 12:30:24 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 12:30:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7QMP-0007d4-Pz; Fri, 31 Aug 2012 12:30:05 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T7QMN-0007cz-Tr
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 12:30:04 +0000
Received: from [85.158.139.83:30130] by server-9.bemta-5.messagelabs.com id
	A2/79-20529-A4EA0405; Fri, 31 Aug 2012 12:30:02 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-182.messagelabs.com!1346416202!25188926!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEyMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6582 invoked from network); 31 Aug 2012 12:30:02 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-4.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 12:30:02 -0000
X-IronPort-AV: E=Sophos;i="4.80,347,1344211200"; d="scan'208";a="14288251"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 12:30:02 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1;
	Fri, 31 Aug 2012 13:30:02 +0100
Message-ID: <1346416200.27277.203.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Matt Wilson <msw@amazon.com>
Date: Fri, 31 Aug 2012 13:30:00 +0100
In-Reply-To: <1346405131.27277.131.camel@zakaz.uk.xensource.com>
References: <patchbomb.1346353198@u002268147cd4502c336d.ant.amazon.com>
	<512b4e0c49f331e252ae.1346353199@u002268147cd4502c336d.ant.amazon.com>
	<1346405131.27277.131.camel@zakaz.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 1 of 2 v2] tools: check for documentation
 generation tools at configure time
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-08-31 at 10:25 +0100, Ian Campbell wrote:
> > +AC_DEFUN([AX_DOCS_TOOL_PROGS], [
> > +dnl
> > +    AC_ARG_VAR([$1], [Path to $2 tool])
> 
> Does this do something sensible when $2 is a space-separated list?

Not especially:
$ ./configure --help| grep mark
   MARKDOWN    Path to markdown markdown_py tool

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-08-31 at 10:25 +0100, Ian Campbell wrote:
> > +AC_DEFUN([AX_DOCS_TOOL_PROGS], [
> > +dnl
> > +    AC_ARG_VAR([$1], [Path to $2 tool])
> 
> Does this do something sensible when $2 is a space separated list?

Not especially:
$ ./configure --help| grep mark
   MARKDOWN    Path to markdown markdown_py tool

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 12:31:39 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 12:31:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7QNg-0007hf-8J; Fri, 31 Aug 2012 12:31:24 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T7QNe-0007hT-0z
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 12:31:22 +0000
Received: from [85.158.138.51:22790] by server-3.bemta-3.messagelabs.com id
	BE/D7-21322-99EA0405; Fri, 31 Aug 2012 12:31:21 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-10.tower-174.messagelabs.com!1346416279!23864935!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM2OTU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10742 invoked from network); 31 Aug 2012 12:31:19 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-10.tower-174.messagelabs.com with SMTP;
	31 Aug 2012 12:31:19 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 31 Aug 2012 13:31:18 +0100
Message-Id: <5040CAB30200007800097D73@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 31 Aug 2012 13:31:15 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Jiongxi Li" <jiongxi.li@intel.com>
References: <D9137FCD9CFF644B965863BCFBEDABB8779942@SHSMSX101.ccr.corp.intel.com>
In-Reply-To: <D9137FCD9CFF644B965863BCFBEDABB8779942@SHSMSX101.ccr.corp.intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [ PATCH 2/2] xen: enable Virtual-interrupt delivery
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 31.08.12 at 11:30, "Li, Jiongxi" <jiongxi.li@intel.com> wrote:
> +/*
> + * When "Virtual Interrupt Delivery" is enabled, this function is used
> + * to handle EOI-induced VM exit
> + */
> +void vlapic_handle_EOI_induced_exit(struct vlapic *vlapic, int vector)
> +{
> +    ASSERT(cpu_has_vmx_virtual_intr_delivery);
> +
> +    if ( vlapic_test_and_clear_vector(vector, &vlapic->regs->data[APIC_TMR]) )

Why test_and_clear rather than just test?

> +    {
> +        vioapic_update_EOI(vlapic_domain(vlapic), vector);
> +    }
> +
> +    hvm_dpci_msi_eoi(current->domain, vector);
> +}
>...
> --- a/xen/arch/x86/hvm/vmx/intr.c Fri Aug 31 09:30:38 2012 +0800
> +++ b/xen/arch/x86/hvm/vmx/intr.c       Fri Aug 31 09:49:39 2012 +0800
> @@ -227,19 +227,43 @@ void vmx_intr_assist(void)
>              goto out;
> 
>          intblk = hvm_interrupt_blocked(v, intack);
> -        if ( intblk == hvm_intblk_tpr )
> +        if ( cpu_has_vmx_virtual_intr_delivery )
>          {
> -            ASSERT(vlapic_enabled(vcpu_vlapic(v)));
> -            ASSERT(intack.source == hvm_intsrc_lapic);
> -            tpr_threshold = intack.vector >> 4;
> -            goto out;
> +            /* Set "Interrupt-window exiting" for ExtINT */
> +            if ( (intblk != hvm_intblk_none) &&
> +                 ( (intack.source == hvm_intsrc_pic) ||
> +                 ( intack.source == hvm_intsrc_vector) ) )
> +            {
> +                enable_intr_window(v, intack);
> +                goto out;
> +            }
> +
> +            if ( __vmread(VM_ENTRY_INTR_INFO) & INTR_INFO_VALID_MASK )
> +            {
> +                if ( (intack.source == hvm_intsrc_pic) ||
> +                     (intack.source == hvm_intsrc_nmi) ||
> +                     (intack.source == hvm_intsrc_mce) )
> +                    enable_intr_window(v, intack);
> +
> +                goto out;
> +            }
>          }
> +        else
> +        {
> +            if ( intblk == hvm_intblk_tpr )
> +            {
> +                ASSERT(vlapic_enabled(vcpu_vlapic(v)));
> +                ASSERT(intack.source == hvm_intsrc_lapic);
> +                tpr_threshold = intack.vector >> 4;
> +                goto out;
> +            }
> 
> -        if ( (intblk != hvm_intblk_none) ||
> -             (__vmread(VM_ENTRY_INTR_INFO) & INTR_INFO_VALID_MASK) )
> -        {
> -            enable_intr_window(v, intack);
> -            goto out;
> +            if ( (intblk != hvm_intblk_none) ||
> +                 (__vmread(VM_ENTRY_INTR_INFO) & INTR_INFO_VALID_MASK) )
> +            {
> +                enable_intr_window(v, intack);
> +                goto out;
> +            }
>          }

If you made the above an if()/else if() series, some of the
differences would go away, making the changes easier to
review.


> 
>          intack = hvm_vcpu_ack_pending_irq(v, intack);
> @@ -253,6 +277,29 @@ void vmx_intr_assist(void)
>      {
>          hvm_inject_hw_exception(TRAP_machine_check, HVM_DELIVER_NO_ERROR_CODE);
>      }
> +    else if ( cpu_has_vmx_virtual_intr_delivery &&
> +              intack.source != hvm_intsrc_pic &&
> +              intack.source != hvm_intsrc_vector )
> +    {
> +        /* we need update the RVI field */
> +        unsigned long status = __vmread(GUEST_INTR_STATUS);
> +        status &= ~(unsigned long)0x0FF;
> +        status |= (unsigned long)0x0FF & 
> +                    intack.vector;
> +        __vmwrite(GUEST_INTR_STATUS, status);
> +        if (v->arch.hvm_vmx.eoi_exitmap_changed) {
> +#define UPDATE_EOI_EXITMAP(v, e) {                             \
> +        if (test_and_clear_bit(e, &(v).eoi_exitmap_changed)) {      \

Here and elsewhere - do you really need locked accesses?

> +                __vmwrite(EOI_EXIT_BITMAP##e, (v).eoi_exit_bitmap[e]);}}
> +
> +                UPDATE_EOI_EXITMAP(v->arch.hvm_vmx, 0);

This is not very logical: Passing just 'v' to the macro, and accessing
the full field there would be more consistent.

Furthermore, here and in other places you fail to write the upper
halves for 32-bit, yet you also don't disable the code for 32-bit
afaics.

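[Archive note: the "upper halves" point can be sketched numerically. On a 32-bit hypervisor, writing a 64-bit VMCS field through an `unsigned long` only covers the low 32 bits; the high word needs its own write (the `_HIGH` field encodings). A simulated Python split, not the Xen code:]

```python
# Sketch: splitting a 64-bit EOI-exit bitmap into the two 32-bit words
# a 32-bit hypervisor must write separately. If only the low word is
# written, any vector >= 32 recorded in the high word is silently lost.

BITS = 32
MASK = (1 << BITS) - 1

def split_vmcs64(value):
    """Split a 64-bit field value into (low, high) 32-bit words."""
    return value & MASK, (value >> BITS) & MASK

bitmap = 0x0000000180000000  # vectors 31 and 32 set
low, high = split_vmcs64(bitmap)

assert low == 0x80000000     # vector 31 lives in the low word
assert high == 0x00000001    # vector 32 is lost if high is never written
assert (high << BITS) | low == bitmap
```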
> +                UPDATE_EOI_EXITMAP(v->arch.hvm_vmx, 1);
> +                UPDATE_EOI_EXITMAP(v->arch.hvm_vmx, 2);
> +                UPDATE_EOI_EXITMAP(v->arch.hvm_vmx, 3);
> +        }
> +
> +        pt_intr_post(v, intack);
> +    }
>      else
>      {
>          HVMTRACE_2D(INJ_VIRQ, intack.vector, /*fake=*/ 0);

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 12:31:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 12:31:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7QNv-0007jA-Lr; Fri, 31 Aug 2012 12:31:39 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T7QNu-0007iS-FR
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 12:31:38 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1346416287!8958746!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEyMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17691 invoked from network); 31 Aug 2012 12:31:27 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 12:31:27 -0000
X-IronPort-AV: E=Sophos;i="4.80,347,1344211200"; d="scan'208";a="14288263"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 12:30:41 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1;
	Fri, 31 Aug 2012 13:30:41 +0100
Message-ID: <1346416240.27277.204.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Matt Wilson <msw@amazon.com>
Date: Fri, 31 Aug 2012 13:30:40 +0100
In-Reply-To: <512b4e0c49f331e252ae.1346353199@u002268147cd4502c336d.ant.amazon.com>
References: <patchbomb.1346353198@u002268147cd4502c336d.ant.amazon.com>
	<512b4e0c49f331e252ae.1346353199@u002268147cd4502c336d.ant.amazon.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 1 of 2 v2] tools: check for documentation
 generation tools at configure time
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2012-08-30 at 19:59 +0100, Matt Wilson wrote:
> It is sometimes hard to discover all the optional tools that should be
> on a system to build all available Xen documentation. By checking for
> documentation generation tools at ./configure time and displaying a
> warning, Xen packagers will more easily learn about new optional build
> dependencies, like markdown, when they are introduced.

Having remembered to rerun autogen.sh I can confirm that I get the same
set of docs as I did before on my regular build machine. I get no
warnings about missing tools so I suppose I'm building all the docs.

(note to self, when this patch goes in check that the cronjob running
http://xenbits.xen.org/docs/unstable/ still works. reminders
appreciated...)

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 12:47:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 12:47:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7QdD-00089Z-8G; Fri, 31 Aug 2012 12:47:27 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1T7QdC-00089U-15
	for xen-devel@lists.xensource.com; Fri, 31 Aug 2012 12:47:26 +0000
Received: from [85.158.143.99:42184] by server-3.bemta-4.messagelabs.com id
	B9/8E-08232-D52B0405; Fri, 31 Aug 2012 12:47:25 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-2.tower-216.messagelabs.com!1346417243!22517694!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjgzMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3609 invoked from network); 31 Aug 2012 12:47:24 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 12:47:24 -0000
X-IronPort-AV: E=Sophos;i="4.80,347,1344211200"; d="scan'208";a="36424456"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 12:47:07 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPMAILMX01.citrite.net
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1; Fri, 31 Aug 2012
	08:47:07 -0400
Message-ID: <5040B249.4000306@citrix.com>
Date: Fri, 31 Aug 2012 13:47:05 +0100
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20120428 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Stefano Panella <stefano.panella@citrix.com>
References: <1346407072-6405-1-git-send-email-stefano.panella@citrix.com>
In-Reply-To: <1346407072-6405-1-git-send-email-stefano.panella@citrix.com>
Cc: xen-devel@lists.xensource.com, linux-kernel@vger.kernel.org,
	konrad.wilk@oracle.com
Subject: Re: [Xen-devel] [PATCH 1/1] XEN: Use correct masking
	in	xen_swiotlb_alloc_coherent.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 31/08/12 10:57, Stefano Panella wrote:
> When running 32-bit pvops-dom0 and a driver tries to allocate a coherent
> DMA-memory the xen swiotlb-implementation returned memory beyond 4GB.
> 
> This caused for example not working sound on a system with 4 GB and a 64-bit
> compatible sound-card which sets the DMA-mask to 64bit.
> 
> On bare-metal and the forward-ported xen-dom0 patches from OpenSuse a coherent
> DMA-memory is always allocated inside the 32-bit address-range by calling
> dma_alloc_coherent_mask.

We should have the same behaviour under Xen as bare metal so:

Acked-By: David Vrabel <david.vrabel@citrix.com>

This does limit the DMA mask to 32-bits by passing it through an
unsigned long, which seems a bit sneaky...

Presumably the sound card is capable of handling 64 bit physical
addresses (or it would break under 64-bit kernels) so it's not clear why
this sound driver requires this restriction.

Is there a bug in the sound driver or sound subsystem where it's
truncating a dma_addr_t by assigning it to an unsigned long or similar?

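[Archive note: the truncation being suspected can be simulated. On a 32-bit kernel `unsigned long` is 32 bits while `dma_addr_t` can be 64 bits, so an assignment silently drops the upper half of a bus address. A hypothetical Python model, not the driver code:]

```python
# Model of `unsigned long x = dma_addr;` on a 32-bit kernel, where
# `unsigned long` is 32 bits and dma_addr_t may be 64 bits.

ULONG_BITS = 32  # 32-bit kernel assumption

def assign_to_ulong(dma_addr):
    """Truncate a (possibly 64-bit) bus address to a 32-bit ulong."""
    return dma_addr & ((1 << ULONG_BITS) - 1)

addr_above_4g = 0x123456789      # a bus address beyond 4GB

truncated = assign_to_ulong(addr_above_4g)
assert truncated == 0x23456789   # upper bits are silently gone
assert truncated != addr_above_4g

# Addresses below 4GB survive the round-trip, which is why restricting
# coherent allocations to the 32-bit range would mask such a driver bug.
assert assign_to_ulong(0x1234) == 0x1234
```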
> --- a/drivers/xen/swiotlb-xen.c
> +++ b/drivers/xen/swiotlb-xen.c
> @@ -232,7 +232,7 @@ xen_swiotlb_alloc_coherent(struct device *hwdev, size_t size,
>  		return ret;
>  
>  	if (hwdev && hwdev->coherent_dma_mask)
> -		dma_mask = hwdev->coherent_dma_mask;
> +		dma_mask = dma_alloc_coherent_mask(hwdev, flags);

Suggest

    if (hwdev)
        dma_mask = dma_alloc_coherent_mask(hwdev, flags)

>  	phys = virt_to_phys(ret);
>  	dev_addr = xen_phys_to_bus(phys);

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 12:47:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 12:47:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7QdD-00089Z-8G; Fri, 31 Aug 2012 12:47:27 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1T7QdC-00089U-15
	for xen-devel@lists.xensource.com; Fri, 31 Aug 2012 12:47:26 +0000
Received: from [85.158.143.99:42184] by server-3.bemta-4.messagelabs.com id
	B9/8E-08232-D52B0405; Fri, 31 Aug 2012 12:47:25 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-2.tower-216.messagelabs.com!1346417243!22517694!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjgzMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3609 invoked from network); 31 Aug 2012 12:47:24 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 12:47:24 -0000
X-IronPort-AV: E=Sophos;i="4.80,347,1344211200"; d="scan'208";a="36424456"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 12:47:07 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPMAILMX01.citrite.net
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1; Fri, 31 Aug 2012
	08:47:07 -0400
Message-ID: <5040B249.4000306@citrix.com>
Date: Fri, 31 Aug 2012 13:47:05 +0100
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20120428 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Stefano Panella <stefano.panella@citrix.com>
References: <1346407072-6405-1-git-send-email-stefano.panella@citrix.com>
In-Reply-To: <1346407072-6405-1-git-send-email-stefano.panella@citrix.com>
Cc: xen-devel@lists.xensource.com, linux-kernel@vger.kernel.org,
	konrad.wilk@oracle.com
Subject: Re: [Xen-devel] [PATCH 1/1] XEN: Use correct masking
	in	xen_swiotlb_alloc_coherent.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 31/08/12 10:57, Stefano Panella wrote:
> When running a 32-bit pvops dom0, when a driver tried to allocate coherent
> DMA memory the Xen swiotlb implementation returned memory beyond 4 GB.
> 
> This caused, for example, non-working sound on a system with 4 GB of RAM
> and a 64-bit capable sound card which sets the DMA mask to 64 bits.
> 
> On bare metal and with the forward-ported xen-dom0 patches from openSUSE,
> coherent DMA memory is always allocated inside the 32-bit address range by
> calling dma_alloc_coherent_mask.

We should have the same behaviour under Xen as on bare metal, so:

Acked-By: David Vrabel <david.vrabel@citrix.com>

This does limit the DMA mask to 32-bits by passing it through an
unsigned long, which seems a bit sneaky...

Presumably the sound card is capable of handling 64 bit physical
addresses (or it would break under 64-bit kernels) so it's not clear why
this sound driver requires this restriction.

Is there a bug in the sound driver or sound subsystem where it's
truncating a dma_addr_t by assigning it to an unsigned long or similar?

> --- a/drivers/xen/swiotlb-xen.c
> +++ b/drivers/xen/swiotlb-xen.c
> @@ -232,7 +232,7 @@ xen_swiotlb_alloc_coherent(struct device *hwdev, size_t size,
>  		return ret;
>  
>  	if (hwdev && hwdev->coherent_dma_mask)
> -		dma_mask = hwdev->coherent_dma_mask;
> +		dma_mask = dma_alloc_coherent_mask(hwdev, flags);

Suggest:

    if (hwdev)
        dma_mask = dma_alloc_coherent_mask(hwdev, flags);

>  	phys = virt_to_phys(ret);
>  	dev_addr = xen_phys_to_bus(phys);

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 13:02:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 13:02:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7QrD-0008LB-Lb; Fri, 31 Aug 2012 13:01:55 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T7QrB-0008L6-KX
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 13:01:53 +0000
Received: from [85.158.139.83:65382] by server-3.bemta-5.messagelabs.com id
	E5/09-21836-0C5B0405; Fri, 31 Aug 2012 13:01:52 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-182.messagelabs.com!1346418110!16649825!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEyMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28762 invoked from network); 31 Aug 2012 13:01:50 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 13:01:50 -0000
X-IronPort-AV: E=Sophos;i="4.80,347,1344211200"; d="scan'208";a="14289049"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 13:01:13 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1;
	Fri, 31 Aug 2012 14:01:13 +0100
Message-ID: <1346418071.27277.207.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Fri, 31 Aug 2012 14:01:11 +0100
In-Reply-To: <5040BC860200007800097D24@nat28.tlf.novell.com>
References: <20544.39460.499127.781598@mariner.uk.xensource.com>
	<5040BC860200007800097D24@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Test report: Migration from 4.1 to 4.2 works
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-08-31 at 12:30 +0100, Jan Beulich wrote:
> >>> On 31.08.12 at 13:04, Ian Jackson <Ian.Jackson@eu.citrix.com> wrote:
> > Migration 4.1 xend -> 4.2 xl
> >   Needs to be done with xl
> >   Stop xend on source, which leaves domain running and manipulable by xl
> >   xl migrate -C /etc/xen/debian.guest.osstest.cfg domain potato-beetle
> >   Works.
> 
> Is that really an acceptable approach? What if you have multiple
> VMs running, and want to migrate just part of them? All the others
> would remain unmanageable at least for the duration of the
> migration(s). (And I also wonder if 4.1's xl is complete/stable
> enough to recommend such an approach as a general mechanism.)

The alternative I suppose would be to:
      * start xend on new host (running 4.2)
      * migrate the domains you want over to the new system
      * stop xend on the new host
      * xl migrate localhost for each domain.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 13:09:19 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 13:09:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7Qy3-000093-ND; Fri, 31 Aug 2012 13:08:59 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1T7Qy2-00008y-Hr
	for xen-devel@lists.xensource.com; Fri, 31 Aug 2012 13:08:58 +0000
Received: from [85.158.143.35:33360] by server-2.bemta-4.messagelabs.com id
	BC/75-21239-967B0405; Fri, 31 Aug 2012 13:08:57 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-10.tower-21.messagelabs.com!1346418535!10569868!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzU0NjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17380 invoked from network); 31 Aug 2012 13:08:57 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 13:08:57 -0000
X-IronPort-AV: E=Sophos;i="4.80,347,1344211200"; d="scan'208";a="206776065"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 13:08:52 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPMAILMX01.citrite.net
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1; Fri, 31 Aug 2012
	09:08:51 -0400
Message-ID: <5040B762.3060402@citrix.com>
Date: Fri, 31 Aug 2012 14:08:50 +0100
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20120428 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Andres Lagar-Cavilla <andreslc@gridcentric.ca>
References: <1346331492-15027-1-git-send-email-david.vrabel@citrix.com>	<1346331492-15027-3-git-send-email-david.vrabel@citrix.com>
	<DDCE284F-9506-4EF4-88BB-CF6A04D98A2F@gridcentric.ca>
In-Reply-To: <DDCE284F-9506-4EF4-88BB-CF6A04D98A2F@gridcentric.ca>
Cc: xen-devel@lists.xensource.com, David Vrabel <david.vrabel@citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [PATCH 2/2] xen/privcmd: add PRIVCMD_MMAPBATCH_V2
 ioctl
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 30/08/12 19:32, Andres Lagar-Cavilla wrote:
> Second repost of my version, heavily based on David's. 

Doing another allocation that may be larger than a page (and its
associated additional error paths) seems to me to be a hammer to crack
the (admittedly a bit wonky) casting nut.

That said, it's up to Konrad which version he prefers.

There are also some minor improvements you could make if you respin this
patch.

> Complementary to this patch, on the xen tree I intend to add
> PRIVCMD_MMAPBATCH_*_ERROR into the libxc header files, and remove
> XEN_DOMCTL_PFINFO_PAGEDTAB from domctl.h

Yes, a good idea.  There's no correspondence between the ioctl's error
reporting values and the DOMCTL_PFINFO flags.

> commit 3f40e8d79b7e032527ee207a97499ddbc81ca12b
> Author: Andres Lagar-Cavilla <andres@lagarcavilla.org>
> Date:   Thu Aug 30 12:23:33 2012 -0400
> 
>     xen/privcmd: add PRIVCMD_MMAPBATCH_V2 ioctl
>     
>     PRIVCMD_MMAPBATCH_V2 extends PRIVCMD_MMAPBATCH with an additional
>     field for reporting the error code for every frame that could not be
>     mapped.  libxc prefers PRIVCMD_MMAPBATCH_V2 over PRIVCMD_MMAPBATCH.
>     
>     Also expand PRIVCMD_MMAPBATCH to return appropriate error-encoding top nibble
>     in the mfn array.
>     
>     Signed-off-by: David Vrabel <david.vrabel@citrix.com>
>     Signed-off-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
[...]
> -static long privcmd_ioctl_mmap_batch(void __user *udata)
> +static long privcmd_ioctl_mmap_batch(void __user *udata, int version)
>  {
>  	int ret;
> -	struct privcmd_mmapbatch m;
> +	struct privcmd_mmapbatch_v2 m;
>  	struct mm_struct *mm = current->mm;
>  	struct vm_area_struct *vma;
>  	unsigned long nr_pages;
>  	LIST_HEAD(pagelist);
> +	int *err_array;

int *err_array = NULL;

and you could avoid the additional jump label as kfree(NULL) is safe.

>  	struct mmap_batch_state state;
>  
>  	if (!xen_initial_domain())
>  		return -EPERM;
>  
> -	if (copy_from_user(&m, udata, sizeof(m)))
> -		return -EFAULT;
> +	switch (version) {
> +	case 1:
> +		if (copy_from_user(&m, udata, sizeof(struct privcmd_mmapbatch)))
> +			return -EFAULT;
> +		/* Returns per-frame error in m.arr. */
> +		m.err = NULL;
> +		if (!access_ok(VERIFY_WRITE, m.arr, m.num * sizeof(*m.arr)))
> +			return -EFAULT;
> +		break;
> +	case 2:
> +		if (copy_from_user(&m, udata, sizeof(struct privcmd_mmapbatch_v2)))
> +			return -EFAULT;
> +		/* Returns per-frame error code in m.err. */
> +		if (!access_ok(VERIFY_WRITE, m.err, m.num * (sizeof(*m.err))))
> +			return -EFAULT;
> +		break;
> +	default:
> +		return -EINVAL;
> +	}
>  
>  	nr_pages = m.num;
>  	if ((m.num <= 0) || (nr_pages > (LONG_MAX >> PAGE_SHIFT)))
>  		return -EINVAL;
>  
> -	ret = gather_array(&pagelist, m.num, sizeof(xen_pfn_t),
> -			   m.arr);
> +	ret = gather_array(&pagelist, m.num, sizeof(xen_pfn_t), m.arr);
>  
>  	if (ret || list_empty(&pagelist))
> -		goto out;
> +		goto out_no_err_array;
> +
> +	err_array = kmalloc_array(m.num, sizeof(int), GFP_KERNEL);

kcalloc() (see below).

> +	if (err_array == NULL)
> +	{

Style: if (err_array == NULL) {

> +	if (state.global_error) {
> +		int efault;
> +
> +		if (state.global_error == -ENOENT)
> +			ret = -ENOENT;
> +
> +		/* Write back errors in second pass. */
> +		state.user_mfn = (xen_pfn_t *)m.arr;
> +		state.user_err = m.err;
> +		state.err      = err_array;
> +		efault = traverse_pages(m.num, sizeof(xen_pfn_t),
> +					 &pagelist, mmap_return_errors, &state);
> +		if (efault)
> +			ret = efault;
> +	} else if (m.err)
> +		ret = __clear_user(m.err, m.num * sizeof(*m.err));

Since you have an array of errors already there's no need to iterate
through all the MFNs again for V2.  A simple copy_to_user() is
sufficient provided err_array was zeroed with kcalloc().

if (m.err)
    ret = __copy_to_user(m.err, err_array, m.num * sizeof(*m.err));
else {
    /* Write back errors in second pass. */
    state.user_mfn = (xen_pfn_t *)m.arr;
    state.user_err = m.err;
    state.err      = err_array;
    ret = traverse_pages(m.num, sizeof(xen_pfn_t),
            &pagelist, mmap_return_errors, &state);
}

if (ret == 0 && state.global_error == -ENOENT)
    ret = -ENOENT;

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 13:12:18 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 13:12:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7R0x-0000FS-9l; Fri, 31 Aug 2012 13:11:59 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T7R0v-0000F6-Ge
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 13:11:57 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1346418687!8953300!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM2OTU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18849 invoked from network); 31 Aug 2012 13:11:27 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-9.tower-27.messagelabs.com with SMTP;
	31 Aug 2012 13:11:27 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 31 Aug 2012 14:11:27 +0100
Message-Id: <5040D41A0200007800097D9D@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 31 Aug 2012 14:11:22 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>
References: <20544.39460.499127.781598@mariner.uk.xensource.com>
	<5040BC860200007800097D24@nat28.tlf.novell.com>
	<1346418071.27277.207.camel@zakaz.uk.xensource.com>
In-Reply-To: <1346418071.27277.207.camel@zakaz.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Test report: Migration from 4.1 to 4.2 works
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 31.08.12 at 15:01, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Fri, 2012-08-31 at 12:30 +0100, Jan Beulich wrote:
>> >>> On 31.08.12 at 13:04, Ian Jackson <Ian.Jackson@eu.citrix.com> wrote:
>> > Migration 4.1 xend -> 4.2 xl
>> >   Needs to be done with xl
>> >   Stop xend on source, which leaves domain running and manipulable by xl
>> >   xl migrate -C /etc/xen/debian.guest.osstest.cfg domain potato-beetle
>> >   Works.
>> 
>> Is that really an acceptable approach? What if you have multiple
>> VMs running, and want to migrate just part of them? All the others
>> would remain unmanageable at least for the duration of the
>> migration(s). (And I also wonder if 4.1's xl is complete/stable
>> enough to recommend such an approach as a general mechanism.)
> 
> The alternative I suppose would be to:
>       * start xend on new host (running 4.2)
>       * migrate the domains you want over to the new system
>       * stop xend on the new host
>       * xl migrate localhost for each domain.

So why is it that xend and xl can't talk to each other for a migration?

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 13:12:18 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 13:12:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7R0x-0000FS-9l; Fri, 31 Aug 2012 13:11:59 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with smtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T7R0v-0000F6-Ge
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 13:11:57 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1346418687!8953300!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM2OTU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18849 invoked from network); 31 Aug 2012 13:11:27 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-9.tower-27.messagelabs.com with SMTP;
	31 Aug 2012 13:11:27 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 31 Aug 2012 14:11:27 +0100
Message-Id: <5040D41A0200007800097D9D@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 31 Aug 2012 14:11:22 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>
References: <20544.39460.499127.781598@mariner.uk.xensource.com>
	<5040BC860200007800097D24@nat28.tlf.novell.com>
	<1346418071.27277.207.camel@zakaz.uk.xensource.com>
In-Reply-To: <1346418071.27277.207.camel@zakaz.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Test report: Migration from 4.1 to 4.2 works
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 31.08.12 at 15:01, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Fri, 2012-08-31 at 12:30 +0100, Jan Beulich wrote:
>> >>> On 31.08.12 at 13:04, Ian Jackson <Ian.Jackson@eu.citrix.com> wrote:
>> > Migration 4.1 xend -> 4.2 xl
>> >   Needs to be done with xl
>> >   Stop xend on source, which leaves domain running and manipulable by xl
>> >   xl migrate -C /etc/xen/debian.guest.osstest.cfg domain potato-beetle
>> >   Works.
>> 
>> Is that really an acceptable approach? What if you have multiple
>> VMs running, and want to migrate just part of them? All the others
>> would remain unmanageable at least for the duration of the
>> migration(s). (And I also wonder if 4.1's xl is complete/stable
>> enough to recommend such an approach as a general mechanism.)
> 
> The alternative I suppose would be to:
>       * start xend on new host (running 4.2)
>       * migrate the domains you want over to the new system
>       * stop xend on the new host
>       * xl migrate localhost for each domain.

So why is it that xend and xl can't talk to each other for a migration?

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 13:14:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 13:14:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7R2d-0000M8-Pg; Fri, 31 Aug 2012 13:13:43 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andreslc@gridcentric.ca>) id 1T7R2c-0000Lx-7b
	for xen-devel@lists.xensource.com; Fri, 31 Aug 2012 13:13:42 +0000
Received: from [85.158.143.35:24233] by server-1.bemta-4.messagelabs.com id
	17/40-12504-588B0405; Fri, 31 Aug 2012 13:13:41 +0000
X-Env-Sender: andreslc@gridcentric.ca
X-Msg-Ref: server-9.tower-21.messagelabs.com!1346418817!5044404!1
X-Originating-IP: [209.85.223.171]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8171 invoked from network); 31 Aug 2012 13:13:38 -0000
Received: from mail-ie0-f171.google.com (HELO mail-ie0-f171.google.com)
	(209.85.223.171)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 13:13:38 -0000
Received: by ieje14 with SMTP id e14so2121962iej.30
	for <xen-devel@lists.xensource.com>;
	Fri, 31 Aug 2012 06:13:37 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=subject:mime-version:content-type:from:in-reply-to:date:cc
	:content-transfer-encoding:message-id:references:to:x-mailer
	:x-gm-message-state;
	bh=2iWRmL9VCh59a8cIER5uuL9jIX9YgGQQm1Lcmu3OSr0=;
	b=PnrZTjOOom7fZ5O+7WLFu/Ipqw8BNNTt1FCbPJMoVWZ5owqET3W7PT9e5hEy7jVnge
	ovDIEwAfgH4ymWZV5ONJ5WJXsIBwXq2sli9BIpHx9+5rc41Mqgjc4nFbgSPgSNS14sym
	ZE4hYKwyLUt8qRmIr0Fi76UwJlS02gaBH5aTpzfa8T3LsoP6SBX1C+rVoGlHPFWi5uiA
	wGI+7OHLGSJ3mK0iIrqE2MPOvBuapMYgLucuBR+R0/rx8LLFNpwTu5bDLEqAMEpjBsIL
	c4t8f7fhZ7IlfwAsJ5CInAFE9NgJReMdZgHgPEnTgZuDYX84/KWqdCHQjfV8P9uKF7Ug
	QIgw==
Received: by 10.50.7.177 with SMTP id k17mr2741811iga.27.1346418817011;
	Fri, 31 Aug 2012 06:13:37 -0700 (PDT)
Received: from [192.168.5.251] ([206.223.182.18])
	by mx.google.com with ESMTPS id ud8sm1008533igb.4.2012.08.31.06.13.35
	(version=TLSv1/SSLv3 cipher=OTHER);
	Fri, 31 Aug 2012 06:13:36 -0700 (PDT)
Mime-Version: 1.0 (Apple Message framework v1278)
From: Andres Lagar-Cavilla <andreslc@gridcentric.ca>
In-Reply-To: <5040B762.3060402@citrix.com>
Date: Fri, 31 Aug 2012 09:13:44 -0400
Message-Id: <6000F56A-3346-482A-8250-AE21AAE3DD12@gridcentric.ca>
References: <1346331492-15027-1-git-send-email-david.vrabel@citrix.com>	<1346331492-15027-3-git-send-email-david.vrabel@citrix.com>
	<DDCE284F-9506-4EF4-88BB-CF6A04D98A2F@gridcentric.ca>
	<5040B762.3060402@citrix.com>
To: David Vrabel <david.vrabel@citrix.com>
X-Mailer: Apple Mail (2.1278)
X-Gm-Message-State: ALoCoQlGXWhtthq+vww8ycaNAK9HUYD47hreovAaFBpZQ4Ts4de2NPHLiG7G0UQhwkD/IxZ29471
Cc: Andres Lagar-Cavilla <andreslc@gridcentric.ca>,
	xen-devel@lists.xensource.com,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [PATCH 2/2] xen/privcmd: add PRIVCMD_MMAPBATCH_V2
	ioctl
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On Aug 31, 2012, at 9:08 AM, David Vrabel wrote:

> On 30/08/12 19:32, Andres Lagar-Cavilla wrote:
>> Second repost of my version, heavily based on David's. 
> 
> Doing another allocation that may be larger than a page (and its
> associated additional error paths) seems to me to be a hammer to crack
> the (admittedly a bit wonky) casting nut.
> 
> That said, it's up to Konrad which version he prefers.

Yeah absolutely.

> 
> There are also some minor improvements you could make if you respin this
> patch.
> 
>> Complementary to this patch, on the xen tree I intend to add
>> PRIVCMD_MMAPBATCH_*_ERROR into the libxc header files, and remove
>> XEN_DOMCTL_PFINFO_PAGEDTAB from domctl.h
> 
> Yes, a good idea.  There's no correspondence between the ioctl's error
> reporting values and the DOMCTL_PFINFO flags.
> 
>> commit 3f40e8d79b7e032527ee207a97499ddbc81ca12b
>> Author: Andres Lagar-Cavilla <andres@lagarcavilla.org>
>> Date:   Thu Aug 30 12:23:33 2012 -0400
>> 
>>    xen/privcmd: add PRIVCMD_MMAPBATCH_V2 ioctl
>> 
>>    PRIVCMD_MMAPBATCH_V2 extends PRIVCMD_MMAPBATCH with an additional
>>    field for reporting the error code for every frame that could not be
>>    mapped.  libxc prefers PRIVCMD_MMAPBATCH_V2 over PRIVCMD_MMAPBATCH.
>> 
>>    Also expand PRIVCMD_MMAPBATCH to return appropriate error-encoding top nibble
>>    in the mfn array.
>> 
>>    Signed-off-by: David Vrabel <david.vrabel@citrix.com>
>>    Signed-off-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
> [...]
>> -static long privcmd_ioctl_mmap_batch(void __user *udata)
>> +static long privcmd_ioctl_mmap_batch(void __user *udata, int version)
>> {
>> 	int ret;
>> -	struct privcmd_mmapbatch m;
>> +	struct privcmd_mmapbatch_v2 m;
>> 	struct mm_struct *mm = current->mm;
>> 	struct vm_area_struct *vma;
>> 	unsigned long nr_pages;
>> 	LIST_HEAD(pagelist);
>> +	int *err_array;
> 
> int *err_array = NULL;
> 
> and you could avoid the additional jump label as kfree(NULL) is safe.

Didn't know, great.

> 
>> 	struct mmap_batch_state state;
>> 
>> 	if (!xen_initial_domain())
>> 		return -EPERM;
>> 
>> -	if (copy_from_user(&m, udata, sizeof(m)))
>> -		return -EFAULT;
>> +	switch (version) {
>> +	case 1:
>> +		if (copy_from_user(&m, udata, sizeof(struct privcmd_mmapbatch)))
>> +			return -EFAULT;
>> +		/* Returns per-frame error in m.arr. */
>> +		m.err = NULL;
>> +		if (!access_ok(VERIFY_WRITE, m.arr, m.num * sizeof(*m.arr)))
>> +			return -EFAULT;
>> +		break;
>> +	case 2:
>> +		if (copy_from_user(&m, udata, sizeof(struct privcmd_mmapbatch_v2)))
>> +			return -EFAULT;
>> +		/* Returns per-frame error code in m.err. */
>> +		if (!access_ok(VERIFY_WRITE, m.err, m.num * (sizeof(*m.err))))
>> +			return -EFAULT;
>> +		break;
>> +	default:
>> +		return -EINVAL;
>> +	}
>> 
>> 	nr_pages = m.num;
>> 	if ((m.num <= 0) || (nr_pages > (LONG_MAX >> PAGE_SHIFT)))
>> 		return -EINVAL;
>> 
>> -	ret = gather_array(&pagelist, m.num, sizeof(xen_pfn_t),
>> -			   m.arr);
>> +	ret = gather_array(&pagelist, m.num, sizeof(xen_pfn_t), m.arr);
>> 
>> 	if (ret || list_empty(&pagelist))
>> -		goto out;
>> +		goto out_no_err_array;
>> +
>> +	err_array = kmalloc_array(m.num, sizeof(int), GFP_KERNEL);
> 
> kcalloc() (see below).
> 
>> +	if (err_array == NULL)
>> +	{
> 
> Style: if (err_array == NULL) {
> 
>> +	if (state.global_error) {
>> +		int efault;
>> +
>> +		if (state.global_error == -ENOENT)
>> +			ret = -ENOENT;
>> +
>> +		/* Write back errors in second pass. */
>> +		state.user_mfn = (xen_pfn_t *)m.arr;
>> +		state.user_err = m.err;
>> +		state.err      = err_array;
>> +		efault = traverse_pages(m.num, sizeof(xen_pfn_t),
>> +					 &pagelist, mmap_return_errors, &state);
>> +		if (efault)
>> +			ret = efault;
>> +	} else if (m.err)
>> +		ret = __clear_user(m.err, m.num * sizeof(*m.err));
> 
> Since you have an array of errors already there's no need to iterate
> through all the MFNs again for V2.  A simple copy_to_user() is
> sufficient provided err_array was zeroed with kcalloc().
I can use kcalloc() for extra paranoia, but the code will set all error slots,
and is guaranteed not to skip any. I +1 the shortcut you outline below.
Re-spin coming.
Andres
> 
> if (m.err)
>    ret = __copy_to_user(m.err, err_array, m.num * sizeof(*m.err));
> else {
>    /* Write back errors in second pass. */
>    state.user_mfn = (xen_pfn_t *)m.arr;
>    state.user_err = m.err;
>    state.err      = err_array;
>    ret = traverse_pages(m.num, sizeof(xen_pfn_t),
>            &pagelist, mmap_return_errors, &state);
> }
> 
> if (ret == 0 && state.global_error == -ENOENT)
>    ret = -ENOENT;
> 
> David


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 13:20:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 13:20:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7R8V-0000a9-Ja; Fri, 31 Aug 2012 13:19:47 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T7R8U-0000a4-K6
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 13:19:46 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1346419136!8519031!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEyMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29954 invoked from network); 31 Aug 2012 13:18:56 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 13:18:56 -0000
X-IronPort-AV: E=Sophos;i="4.80,347,1344211200"; d="scan'208";a="14289424"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 13:18:42 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1;
	Fri, 31 Aug 2012 14:18:42 +0100
Message-ID: <1346419120.27277.212.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Fri, 31 Aug 2012 14:18:40 +0100
In-Reply-To: <5040D41A0200007800097D9D@nat28.tlf.novell.com>
References: <20544.39460.499127.781598@mariner.uk.xensource.com>
	<5040BC860200007800097D24@nat28.tlf.novell.com>
	<1346418071.27277.207.camel@zakaz.uk.xensource.com>
	<5040D41A0200007800097D9D@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Test report: Migration from 4.1 to 4.2 works
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-08-31 at 14:11 +0100, Jan Beulich wrote:
> >>> On 31.08.12 at 15:01, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > On Fri, 2012-08-31 at 12:30 +0100, Jan Beulich wrote:
> >> >>> On 31.08.12 at 13:04, Ian Jackson <Ian.Jackson@eu.citrix.com> wrote:
> >> > Migration 4.1 xend -> 4.2 xl
> >> >   Needs to be done with xl
> >> >   Stop xend on source, which leaves domain running and manipulable by xl
> >> >   xl migrate -C /etc/xen/debian.guest.osstest.cfg domain potato-beetle
> >> >   Works.
> >> 
> >> Is that really an acceptable approach? What if you have multiple
> >> VMs running, and want to migrate just part of them? All the others
> >> would remain unmanageable at least for the duration of the
> >> migration(s). (And I also wonder if 4.1's xl is complete/stable
> >> enough to recommend such an approach as a general mechanism.)
> > 
> > The alternative I suppose would be to:
> >       * start xend on new host (running 4.2)
> >       * migrate the domains you want over to the new system
> >       * stop xend on the new host
> >       * xl migrate localhost for each domain.
> 
> So why is it that xend and xl can't talk to each other for a migration?

The wire protocol is different, and xend doesn't know how to start the
xl receive process on the other end.

I suppose we could implement xl migrate-receive-from-xend which the user
could manually run on the target. Bit late for 4.2 though.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 13:59:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 13:59:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7Rka-0000ri-1s; Fri, 31 Aug 2012 13:59:08 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T7RkY-0000rZ-GD
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 13:59:06 +0000
Received: from [85.158.143.35:31777] by server-3.bemta-4.messagelabs.com id
	B3/AA-08232-923C0405; Fri, 31 Aug 2012 13:59:05 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1346421544!14289139!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEyMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2135 invoked from network); 31 Aug 2012 13:59:05 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 13:59:05 -0000
X-IronPort-AV: E=Sophos;i="4.80,347,1344211200"; d="scan'208";a="14290535"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 13:59:04 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1;
	Fri, 31 Aug 2012 14:59:04 +0100
Message-ID: <1346421542.27277.218.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Dieter Bloms <xensource.com@bloms.de>
Date: Fri, 31 Aug 2012 14:59:02 +0100
In-Reply-To: <20120814100704.GA19704@bloms.de>
References: <1344935141.5926.6.camel@zakaz.uk.xensource.com>
	<20120814100704.GA19704@bloms.de>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-users@lists.xen.org" <xen-users@lists.xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Xen-users] Xen 4.2 TODO (io and irq parameter are
 not evaluated by xl)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-08-14 at 11:07 +0100, Dieter Bloms wrote:
> Hi,
> 
> On Tue, Aug 14, Ian Campbell wrote:
> 
> > tools, blockers:
> > 
> >     * xl compatibility with xm:
> > 
> >         * No known issues
> 
> the parameters io and irq in domU config files are not evaluated by xl,
> so it is not possible to pass through a parallel port for my printer to
> a domU when I start the domU with the xl command.
> With xm I have no issue.

Do you have a reference to the actual syntax you are using?

As usual with xend, this syntax appears to be mostly undocumented...

http://wiki.xen.org/wiki/Xen_Configuration_File_Options seems to suggest
that you can give the ioports multiple times, rather than the more usual
approach of using an array. Whereas
http://cmrg.fifthhorseman.net/wiki/xen seems to suggest that an array is
how it is done. Perhaps the xen.org wiki is just badly worded and it is
array based.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 13:59:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 13:59:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7Rka-0000ri-1s; Fri, 31 Aug 2012 13:59:08 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T7RkY-0000rZ-GD
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 13:59:06 +0000
Received: from [85.158.143.35:31777] by server-3.bemta-4.messagelabs.com id
	B3/AA-08232-923C0405; Fri, 31 Aug 2012 13:59:05 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1346421544!14289139!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEyMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2135 invoked from network); 31 Aug 2012 13:59:05 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 13:59:05 -0000
X-IronPort-AV: E=Sophos;i="4.80,347,1344211200"; d="scan'208";a="14290535"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 13:59:04 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1;
	Fri, 31 Aug 2012 14:59:04 +0100
Message-ID: <1346421542.27277.218.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Dieter Bloms <xensource.com@bloms.de>
Date: Fri, 31 Aug 2012 14:59:02 +0100
In-Reply-To: <20120814100704.GA19704@bloms.de>
References: <1344935141.5926.6.camel@zakaz.uk.xensource.com>
	<20120814100704.GA19704@bloms.de>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-users@lists.xen.org" <xen-users@lists.xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Xen-users] Xen 4.2 TODO (io and irq parameter are
 not evaluated by xl)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-08-14 at 11:07 +0100, Dieter Bloms wrote:
> Hi,
> 
> On Tue, Aug 14, Ian Campbell wrote:
> 
> > tools, blockers:
> > 
> >     * xl compatibility with xm:
> > 
> >         * No known issues
> 
> the parameters io and irq in domU config files are not evaluated by xl,
> so it is not possible to pass a parallel port through to the domU for my
> printer when I start the domU with the xl command.
> With xm I have no issue.

Do you have a reference to the actual syntax you are using?

As usual with xend this syntax appears to be mostly undocumented...

http://wiki.xen.org/wiki/Xen_Configuration_File_Options seems to suggest
that you can give the ioports option multiple times, rather than the more
usual approach of using an array, whereas
http://cmrg.fifthhorseman.net/wiki/xen suggests that an array is how it
is done. Perhaps the xen.org wiki is just badly worded and the option is
array based.

Ian.



From xen-devel-bounces@lists.xen.org Fri Aug 31 13:59:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 13:59:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7Rkt-0000t5-2A; Fri, 31 Aug 2012 13:59:27 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andreslc@gridcentric.ca>) id 1T7Rks-0000sw-9H
	for xen-devel@lists.xensource.com; Fri, 31 Aug 2012 13:59:26 +0000
Received: from [85.158.139.83:10598] by server-9.bemta-5.messagelabs.com id
	3E/0A-20529-D33C0405; Fri, 31 Aug 2012 13:59:25 +0000
X-Env-Sender: andreslc@gridcentric.ca
X-Msg-Ref: server-8.tower-182.messagelabs.com!1346421562!16661299!1
X-Originating-IP: [209.85.210.171]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6244 invoked from network); 31 Aug 2012 13:59:23 -0000
Received: from mail-iy0-f171.google.com (HELO mail-iy0-f171.google.com)
	(209.85.210.171)
	by server-8.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 13:59:23 -0000
Received: by iabz25 with SMTP id z25so6422165iab.30
	for <xen-devel@lists.xensource.com>;
	Fri, 31 Aug 2012 06:59:22 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=subject:mime-version:content-type:from:in-reply-to:date:cc
	:content-transfer-encoding:message-id:references:to:x-mailer
	:x-gm-message-state;
	bh=p4Ib1EzK5YuPugU3DwweOql7Cv/pjJRo1sQ7050r68w=;
	b=O1O7EvK+7KNahBQzmLvHKMaQDMqPJRZSE6iBS261+EZeLmr7TXOzevXkMFrx7Ufit+
	IyUq93ej2AiUWPBbQwYHSRg5hkgFQ+WuiDC7HpYaQBya78GshOOMhiRkEVPOgDW6kq5G
	8bN92pvLYLZQmFLsmJoZQm7EZPYec6o2XmtjOnI6XUhNcPOQWFIbZgwVX92o7ms1G32Q
	L6wSUupkUXP+kpbRSW3MsddTDdgDA66NsgktbCzoVgbyiw4zGXQLGvWAF37YBGYeuXE5
	wKHuvDOm+Eewb//hIBW7jl9jN08ClIDBFvYh8zTkIQ1g6t2tgSIwoGsmKzhlxwkSoPcs
	La3Q==
Received: by 10.50.187.162 with SMTP id ft2mr2906030igc.11.1346421562079;
	Fri, 31 Aug 2012 06:59:22 -0700 (PDT)
Received: from [192.168.7.210] ([206.223.182.18])
	by mx.google.com with ESMTPS id ua5sm524052igb.10.2012.08.31.06.59.20
	(version=TLSv1/SSLv3 cipher=OTHER);
	Fri, 31 Aug 2012 06:59:21 -0700 (PDT)
Mime-Version: 1.0 (Apple Message framework v1278)
From: Andres Lagar-Cavilla <andreslc@gridcentric.ca>
In-Reply-To: <1346331492-15027-3-git-send-email-david.vrabel@citrix.com>
Date: Fri, 31 Aug 2012 09:59:30 -0400
Message-Id: <C989BE9D-E520-4D04-9028-6CE6CC765E76@gridcentric.ca>
References: <1346331492-15027-1-git-send-email-david.vrabel@citrix.com>
	<1346331492-15027-3-git-send-email-david.vrabel@citrix.com>
To: David Vrabel <david.vrabel@citrix.com>
X-Mailer: Apple Mail (2.1278)
X-Gm-Message-State: ALoCoQlNVY1bo5jWkX9wMc19yMfRe3Chg5KejUYpZqphLH+TddOrjYnlf+5MQT1SGLhJGdx9fW/o
Cc: xen-devel@lists.xensource.com,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [PATCH 2/2] xen/privcmd: add PRIVCMD_MMAPBATCH_V2
	ioctl
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Re-spin of the alternative patch after David's feedback.
Thanks,
Andres

commit ab351a5cef1797935b083c2f6e72800a8949c515
Author: Andres Lagar-Cavilla <andres@lagarcavilla.org>
Date:   Thu Aug 30 12:23:33 2012 -0400

    xen/privcmd: add PRIVCMD_MMAPBATCH_V2 ioctl
    
    PRIVCMD_MMAPBATCH_V2 extends PRIVCMD_MMAPBATCH with an additional
    field for reporting the error code for every frame that could not be
    mapped.  libxc prefers PRIVCMD_MMAPBATCH_V2 over PRIVCMD_MMAPBATCH.
    
    Also expand PRIVCMD_MMAPBATCH to return an appropriate error-encoding
    top nibble in the mfn array.
    
    Signed-off-by: David Vrabel <david.vrabel@citrix.com>
    Signed-off-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
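
The top-nibble encoding described above can be sketched as a quick model.
This is an illustration of the V1 scheme, not the kernel code; note the
sketch only marks failed frames and leaves successful ones untouched.

```python
import errno

# Error markers from the V1 ABI in include/xen/privcmd.h.
PRIVCMD_MMAPBATCH_MFN_ERROR = 0xf0000000
PRIVCMD_MMAPBATCH_PAGED_ERROR = 0x80000000

def encode_v1_error(mfn, err):
    """Fold a per-frame remap result into its mfn slot, as the V1
    second pass does: -ENOENT (frame paged out) gets the paged-out
    marker, any other failure gets the generic error nibble."""
    if err == 0:
        return mfn
    marker = (PRIVCMD_MMAPBATCH_PAGED_ERROR if err == -errno.ENOENT
              else PRIVCMD_MMAPBATCH_MFN_ERROR)
    return mfn | marker
```

As the comment in the patch notes, ORing markers into the top nibble has
known limitations for 64-bit callers, which is part of the motivation for
the explicit per-frame err array in V2.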

diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
index 85226cb..5386f20 100644
--- a/drivers/xen/privcmd.c
+++ b/drivers/xen/privcmd.c
@@ -76,7 +76,7 @@ static void free_page_list(struct list_head *pages)
  */
 static int gather_array(struct list_head *pagelist,
 			unsigned nelem, size_t size,
-			void __user *data)
+			const void __user *data)
 {
 	unsigned pageidx;
 	void *pagedata;
@@ -246,61 +246,117 @@ struct mmap_batch_state {
 	domid_t domain;
 	unsigned long va;
 	struct vm_area_struct *vma;
-	int err;
-
-	xen_pfn_t __user *user;
+	/* A tristate: 
+	 *      0 for no errors
+	 *      1 if at least one error has happened (and no
+	 *          -ENOENT errors have happened)
+	 *      -ENOENT if at least 1 -ENOENT has happened.
+	 */
+	int global_error;
+	/* An array for individual errors */
+	int *err;
+
+	/* User-space mfn array to store errors in the second pass for V1. */
+	xen_pfn_t __user *user_mfn;
 };
 
 static int mmap_batch_fn(void *data, void *state)
 {
 	xen_pfn_t *mfnp = data;
 	struct mmap_batch_state *st = state;
+	int ret;
 
-	if (xen_remap_domain_mfn_range(st->vma, st->va & PAGE_MASK, *mfnp, 1,
-				       st->vma->vm_page_prot, st->domain) < 0) {
-		*mfnp |= 0xf0000000U;
-		st->err++;
+	ret = xen_remap_domain_mfn_range(st->vma, st->va & PAGE_MASK, *mfnp, 1,
+					 st->vma->vm_page_prot, st->domain);
+
+	/* Store error code for second pass. */
+	*(st->err++) = ret;
+
+	/* And see if it affects the global_error. */
+	if (ret < 0) {
+		if (ret == -ENOENT)
+			st->global_error = -ENOENT;
+		else {
+			/* Record that at least one error has happened. */
+			if (st->global_error == 0)
+				st->global_error = 1;
+		}
 	}
 	st->va += PAGE_SIZE;
 
 	return 0;
 }
 
-static int mmap_return_errors(void *data, void *state)
+static int mmap_return_errors_v1(void *data, void *state)
 {
 	xen_pfn_t *mfnp = data;
 	struct mmap_batch_state *st = state;
-
-	return put_user(*mfnp, st->user++);
+	int err = *(st->err++);
+
+	/*
+	 * V1 encodes the error codes in the 32bit top nibble of the 
+	 * mfn (with its known limitations vis-a-vis 64 bit callers).
+	 */
+	*mfnp |= (err == -ENOENT) ?
+				PRIVCMD_MMAPBATCH_PAGED_ERROR :
+				PRIVCMD_MMAPBATCH_MFN_ERROR;
+	return __put_user(*mfnp, st->user_mfn++);
 }
 
 static struct vm_operations_struct privcmd_vm_ops;
 
-static long privcmd_ioctl_mmap_batch(void __user *udata)
+static long privcmd_ioctl_mmap_batch(void __user *udata, int version)
 {
 	int ret;
-	struct privcmd_mmapbatch m;
+	struct privcmd_mmapbatch_v2 m;
 	struct mm_struct *mm = current->mm;
 	struct vm_area_struct *vma;
 	unsigned long nr_pages;
 	LIST_HEAD(pagelist);
+	int *err_array = NULL;
 	struct mmap_batch_state state;
 
 	if (!xen_initial_domain())
 		return -EPERM;
 
-	if (copy_from_user(&m, udata, sizeof(m)))
-		return -EFAULT;
+	switch (version) {
+	case 1:
+		if (copy_from_user(&m, udata, sizeof(struct privcmd_mmapbatch)))
+			return -EFAULT;
+		/* Returns per-frame error in m.arr. */
+		m.err = NULL;
+		if (!access_ok(VERIFY_WRITE, m.arr, m.num * sizeof(*m.arr)))
+			return -EFAULT;
+		break;
+	case 2:
+		if (copy_from_user(&m, udata, sizeof(struct privcmd_mmapbatch_v2)))
+			return -EFAULT;
+		/* Returns per-frame error code in m.err. */
+		if (!access_ok(VERIFY_WRITE, m.err, m.num * (sizeof(*m.err))))
+			return -EFAULT;
+		break;
+	default:
+		return -EINVAL;
+	}
 
 	nr_pages = m.num;
 	if ((m.num <= 0) || (nr_pages > (LONG_MAX >> PAGE_SHIFT)))
 		return -EINVAL;
 
-	ret = gather_array(&pagelist, m.num, sizeof(xen_pfn_t),
-			   m.arr);
+	ret = gather_array(&pagelist, m.num, sizeof(xen_pfn_t), m.arr);
+
+	if (ret)
+		goto out;
+	if (list_empty(&pagelist)) {
+		ret = -EINVAL;
+		goto out;
+    }
 
-	if (ret || list_empty(&pagelist))
+	err_array = kcalloc(m.num, sizeof(int), GFP_KERNEL);
+	if (err_array == NULL) {
+		ret = -ENOMEM;
 		goto out;
+	}
 
 	down_write(&mm->mmap_sem);
 
@@ -315,24 +371,34 @@ static long privcmd_ioctl_mmap_batch(void __user *udata)
 		goto out;
 	}
 
-	state.domain = m.dom;
-	state.vma = vma;
-	state.va = m.addr;
-	state.err = 0;
+	state.domain        = m.dom;
+	state.vma           = vma;
+	state.va            = m.addr;
+	state.global_error  = 0;
+	state.err           = err_array;
 
-	ret = traverse_pages(m.num, sizeof(xen_pfn_t),
-			     &pagelist, mmap_batch_fn, &state);
+	/* mmap_batch_fn guarantees ret == 0 */
+	BUG_ON(traverse_pages(m.num, sizeof(xen_pfn_t),
+			     &pagelist, mmap_batch_fn, &state));
 
 	up_write(&mm->mmap_sem);
 
-	if (state.err > 0) {
-		state.user = m.arr;
+	if (state.global_error && (version == 1)) {
+		/* Write back errors in second pass. */
+		state.user_mfn = (xen_pfn_t *)m.arr;
+		state.err      = err_array;
 		ret = traverse_pages(m.num, sizeof(xen_pfn_t),
-			       &pagelist,
-			       mmap_return_errors, &state);
-	}
+					 &pagelist, mmap_return_errors_v1, &state);
+	} else if (version == 2)
+		ret = __copy_to_user(m.err, err_array, m.num * sizeof(int));
+
+	/* If we have not had any EFAULT-like global errors then set the global
+	 * error to -ENOENT if necessary. */
+	if ((ret == 0) && (state.global_error == -ENOENT))
+		ret = -ENOENT;
 
 out:
+	kfree(err_array);
 	free_page_list(&pagelist);
 
 	return ret;
@@ -354,7 +420,11 @@ static long privcmd_ioctl(struct file *file,
 		break;
 
 	case IOCTL_PRIVCMD_MMAPBATCH:
-		ret = privcmd_ioctl_mmap_batch(udata);
+		ret = privcmd_ioctl_mmap_batch(udata, 1);
+		break;
+
+	case IOCTL_PRIVCMD_MMAPBATCH_V2:
+		ret = privcmd_ioctl_mmap_batch(udata, 2);
 		break;
 
 	default:
diff --git a/include/xen/privcmd.h b/include/xen/privcmd.h
index 45c1aa1..a853168 100644
--- a/include/xen/privcmd.h
+++ b/include/xen/privcmd.h
@@ -58,13 +58,33 @@ struct privcmd_mmapbatch {
 	int num;     /* number of pages to populate */
 	domid_t dom; /* target domain */
 	__u64 addr;  /* virtual address */
-	xen_pfn_t __user *arr; /* array of mfns - top nibble set on err */
+	xen_pfn_t __user *arr; /* array of mfns - or'd with
+				  PRIVCMD_MMAPBATCH_*_ERROR on err */
+};
+
+#define PRIVCMD_MMAPBATCH_MFN_ERROR     0xf0000000U
+#define PRIVCMD_MMAPBATCH_PAGED_ERROR   0x80000000U
+
+struct privcmd_mmapbatch_v2 {
+	unsigned int num; /* number of pages to populate */
+	domid_t dom;      /* target domain */
+	__u64 addr;       /* virtual address */
+	const xen_pfn_t __user *arr; /* array of mfns */
+	int __user *err;  /* array of error codes */
 };
 
 /*
  * @cmd: IOCTL_PRIVCMD_HYPERCALL
  * @arg: &privcmd_hypercall_t
  * Return: Value returned from execution of the specified hypercall.
+ *
+ * @cmd: IOCTL_PRIVCMD_MMAPBATCH_V2
+ * @arg: &struct privcmd_mmapbatch_v2
+ * Return: 0 on success (i.e., arg->err contains valid error codes for
+ * each frame).  On an error other than a failed frame remap, -1 is
+ * returned and errno is set to EINVAL, EFAULT etc.  As an exception,
+ * if the operation was otherwise successful but any frame failed with
+ * -ENOENT, then -1 is returned and errno is set to ENOENT.
  */
 #define IOCTL_PRIVCMD_HYPERCALL					\
 	_IOC(_IOC_NONE, 'P', 0, sizeof(struct privcmd_hypercall))
@@ -72,5 +92,7 @@ struct privcmd_mmapbatch {
 	_IOC(_IOC_NONE, 'P', 2, sizeof(struct privcmd_mmap))
 #define IOCTL_PRIVCMD_MMAPBATCH					\
 	_IOC(_IOC_NONE, 'P', 3, sizeof(struct privcmd_mmapbatch))
+#define IOCTL_PRIVCMD_MMAPBATCH_V2				\
+	_IOC(_IOC_NONE, 'P', 4, sizeof(struct privcmd_mmapbatch_v2))
 
 #endif /* __LINUX_PUBLIC_PRIVCMD_H__ */
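
The global_error bookkeeping in mmap_batch_fn above folds per-frame remap
results into a tristate. A minimal Python model of that rule (an
illustration, not the kernel code):

```python
import errno

def fold_global_error(frame_results):
    """Model of the global_error tristate maintained by mmap_batch_fn:
    0       -> every frame remapped cleanly
    1       -> at least one non-ENOENT failure (and no -ENOENT seen)
    -ENOENT -> at least one frame was paged out; this dominates."""
    global_error = 0
    for ret in frame_results:
        if ret < 0:
            if ret == -errno.ENOENT:
                global_error = -errno.ENOENT
            elif global_error == 0:
                global_error = 1
    return global_error
```

The final ioctl return then builds on this: if nothing EFAULT-like went
wrong, -ENOENT wins over any other per-frame failure, matching the
documented V2 contract.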

On Aug 30, 2012, at 8:58 AM, David Vrabel wrote:

> From: David Vrabel <david.vrabel@citrix.com>
> 
> PRIVCMD_MMAPBATCH_V2 extends PRIVCMD_MMAPBATCH with an additional
> field for reporting the error code for every frame that could not be
> mapped.  libxc prefers PRIVCMD_MMAPBATCH_V2 over PRIVCMD_MMAPBATCH.
> 
> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
> ---
> drivers/xen/privcmd.c |   99 +++++++++++++++++++++++++++++++++++++++---------
> include/xen/privcmd.h |   23 +++++++++++-
> 2 files changed, 102 insertions(+), 20 deletions(-)
> 
> diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
> index ccee0f1..c0e89e7 100644
> --- a/drivers/xen/privcmd.c
> +++ b/drivers/xen/privcmd.c
> @@ -76,7 +76,7 @@ static void free_page_list(struct list_head *pages)
>  */
> static int gather_array(struct list_head *pagelist,
> 			unsigned nelem, size_t size,
> -			void __user *data)
> +			const void __user *data)
> {
> 	unsigned pageidx;
> 	void *pagedata;
> @@ -248,18 +248,37 @@ struct mmap_batch_state {
> 	struct vm_area_struct *vma;
> 	int err;
> 
> -	xen_pfn_t __user *user;
> +	xen_pfn_t __user *user_mfn;
> +	int __user *user_err;
> };
> 
> static int mmap_batch_fn(void *data, void *state)
> {
> 	xen_pfn_t *mfnp = data;
> 	struct mmap_batch_state *st = state;
> +	int ret;
> 
> -	if (xen_remap_domain_mfn_range(st->vma, st->va & PAGE_MASK, *mfnp, 1,
> -				       st->vma->vm_page_prot, st->domain) < 0) {
> -		*mfnp |= 0xf0000000U;
> -		st->err++;
> +	ret = xen_remap_domain_mfn_range(st->vma, st->va & PAGE_MASK, *mfnp, 1,
> +					 st->vma->vm_page_prot, st->domain);
> +	if (ret < 0) {
> +		/*
> +		 * Error reporting is a mess but userspace relies on
> +		 * it behaving this way.
> +		 *
> +		 * V2 needs to a) return the result of each frame's
> +		 * remap; and b) return -ENOENT if any frame failed
> +		 * with -ENOENT.
> +		 *
> +		 * In this first pass the error code is saved by
> +		 * overwriting the mfn and an error is indicated in
> +		 * st->err.
> +		 *
> +		 * The second pass by mmap_return_errors() will write
> +		 * the error codes to user space and get the right
> +		 * ioctl return value.
> +		 */
> +		*(int *)mfnp = ret;
> +		st->err = ret;
> 	}
> 	st->va += PAGE_SIZE;
> 
> @@ -270,16 +289,33 @@ static int mmap_return_errors(void *data, void *state)
> {
> 	xen_pfn_t *mfnp = data;
> 	struct mmap_batch_state *st = state;
> +	int ret;
> +
> +	if (st->user_err) {
> +		int err = *(int *)mfnp;
> +
> +		if (err == -ENOENT)
> +			st->err = err;
> 
> -	return put_user(*mfnp, st->user++);
> +		return __put_user(err, st->user_err++);
> +	} else {
> +		xen_pfn_t mfn;
> +
> +		ret = __get_user(mfn, st->user_mfn);
> +		if (ret < 0)
> +			return ret;
> +
> +		mfn |= PRIVCMD_MMAPBATCH_MFN_ERROR;
> +		return __put_user(mfn, st->user_mfn++);
> +	}
> }
> 
> static struct vm_operations_struct privcmd_vm_ops;
> 
> -static long privcmd_ioctl_mmap_batch(void __user *udata)
> +static long privcmd_ioctl_mmap_batch(void __user *udata, int version)
> {
> 	int ret;
> -	struct privcmd_mmapbatch m;
> +	struct privcmd_mmapbatch_v2 m;
> 	struct mm_struct *mm = current->mm;
> 	struct vm_area_struct *vma;
> 	unsigned long nr_pages;
> @@ -289,15 +325,31 @@ static long privcmd_ioctl_mmap_batch(void __user *udata)
> 	if (!xen_initial_domain())
> 		return -EPERM;
> 
> -	if (copy_from_user(&m, udata, sizeof(m)))
> -		return -EFAULT;
> +	switch (version) {
> +	case 1:
> +		if (copy_from_user(&m, udata, sizeof(struct privcmd_mmapbatch)))
> +			return -EFAULT;
> +		/* Returns per-frame error in m.arr. */
> +		m.err = NULL;
> +		if (!access_ok(VERIFY_WRITE, m.arr, m.num * sizeof(*m.arr)))
> +			return -EFAULT;
> +		break;
> +	case 2:
> +		if (copy_from_user(&m, udata, sizeof(struct privcmd_mmapbatch_v2)))
> +			return -EFAULT;
> +		/* Returns per-frame error code in m.err. */
> +		if (!access_ok(VERIFY_WRITE, m.err, m.num * (sizeof(*m.err))))
> +			return -EFAULT;
> +		break;
> +	default:
> +		return -EINVAL;
> +	}
> 
> 	nr_pages = m.num;
> 	if ((m.num <= 0) || (nr_pages > (LONG_MAX >> PAGE_SHIFT)))
> 		return -EINVAL;
> 
> -	ret = gather_array(&pagelist, m.num, sizeof(xen_pfn_t),
> -			   m.arr);
> +	ret = gather_array(&pagelist, m.num, sizeof(xen_pfn_t), m.arr);
> 
> 	if (ret || list_empty(&pagelist))
> 		goto out;
> @@ -325,12 +377,17 @@ static long privcmd_ioctl_mmap_batch(void __user *udata)
> 
> 	up_write(&mm->mmap_sem);
> 
> -	if (state.err > 0) {
> -		state.user = m.arr;
> +	if (state.err) {
> +		state.err = 0;
> +		state.user_mfn = (xen_pfn_t *)m.arr;
> +		state.user_err = m.err;
> 		ret = traverse_pages(m.num, sizeof(xen_pfn_t),
> -			       &pagelist,
> -			       mmap_return_errors, &state);
> -	}
> +				     &pagelist,
> +				     mmap_return_errors, &state);
> +		if (ret >= 0)
> +			ret = state.err;
> +	} else if (m.err)
> +		__clear_user(m.err, m.num * sizeof(*m.err));
> 
> out:
> 	free_page_list(&pagelist);
> @@ -354,7 +411,11 @@ static long privcmd_ioctl(struct file *file,
> 		break;
> 
> 	case IOCTL_PRIVCMD_MMAPBATCH:
> -		ret = privcmd_ioctl_mmap_batch(udata);
> +		ret = privcmd_ioctl_mmap_batch(udata, 1);
> +		break;
> +
> +	case IOCTL_PRIVCMD_MMAPBATCH_V2:
> +		ret = privcmd_ioctl_mmap_batch(udata, 2);
> 		break;
> 
> 	default:
> diff --git a/include/xen/privcmd.h b/include/xen/privcmd.h
> index 17857fb..f60d75c 100644
> --- a/include/xen/privcmd.h
> +++ b/include/xen/privcmd.h
> @@ -59,13 +59,32 @@ struct privcmd_mmapbatch {
> 	int num;     /* number of pages to populate */
> 	domid_t dom; /* target domain */
> 	__u64 addr;  /* virtual address */
> -	xen_pfn_t __user *arr; /* array of mfns - top nibble set on err */
> +	xen_pfn_t __user *arr; /* array of mfns - or'd with
> +				  PRIVCMD_MMAPBATCH_MFN_ERROR on err */
> +};
> +
> +#define PRIVCMD_MMAPBATCH_MFN_ERROR 0xf0000000U
> +
> +struct privcmd_mmapbatch_v2 {
> +	unsigned int num; /* number of pages to populate */
> +	domid_t dom;      /* target domain */
> +	__u64 addr;       /* virtual address */
> +	const xen_pfn_t __user *arr; /* array of mfns */
> +	int __user *err;  /* array of error codes */
> };
> 
> /*
>  * @cmd: IOCTL_PRIVCMD_HYPERCALL
>  * @arg: &privcmd_hypercall_t
>  * Return: Value returned from execution of the specified hypercall.
> + *
> + * @cmd: IOCTL_PRIVCMD_MMAPBATCH_V2
> + * @arg: &struct privcmd_mmapbatch_v2
> + * Return: 0 on success (i.e., arg->err contains valid error codes for
> + * each frame).  On an error other than a failed frame remap, -1 is
> + * returned and errno is set to EINVAL, EFAULT etc.  As an exception,
> + * if the operation was otherwise successful but any frame failed with
> + * -ENOENT, then -1 is returned and errno is set to ENOENT.
>  */
> #define IOCTL_PRIVCMD_HYPERCALL					\
> 	_IOC(_IOC_NONE, 'P', 0, sizeof(struct privcmd_hypercall))
> @@ -73,5 +92,7 @@ struct privcmd_mmapbatch {
> 	_IOC(_IOC_NONE, 'P', 2, sizeof(struct privcmd_mmap))
> #define IOCTL_PRIVCMD_MMAPBATCH					\
> 	_IOC(_IOC_NONE, 'P', 3, sizeof(struct privcmd_mmapbatch))
> +#define IOCTL_PRIVCMD_MMAPBATCH_V2				\
> +	_IOC(_IOC_NONE, 'P', 4, sizeof(struct privcmd_mmapbatch_v2))
> 
> #endif /* __LINUX_PUBLIC_PRIVCMD_H__ */
> -- 
> 1.7.2.5
> 



From xen-devel-bounces@lists.xen.org Fri Aug 31 13:59:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 13:59:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7Rkt-0000t5-2A; Fri, 31 Aug 2012 13:59:27 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andreslc@gridcentric.ca>) id 1T7Rks-0000sw-9H
	for xen-devel@lists.xensource.com; Fri, 31 Aug 2012 13:59:26 +0000
Received: from [85.158.139.83:10598] by server-9.bemta-5.messagelabs.com id
	3E/0A-20529-D33C0405; Fri, 31 Aug 2012 13:59:25 +0000
X-Env-Sender: andreslc@gridcentric.ca
X-Msg-Ref: server-8.tower-182.messagelabs.com!1346421562!16661299!1
X-Originating-IP: [209.85.210.171]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6244 invoked from network); 31 Aug 2012 13:59:23 -0000
Received: from mail-iy0-f171.google.com (HELO mail-iy0-f171.google.com)
	(209.85.210.171)
	by server-8.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 13:59:23 -0000
Received: by iabz25 with SMTP id z25so6422165iab.30
	for <xen-devel@lists.xensource.com>;
	Fri, 31 Aug 2012 06:59:22 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=subject:mime-version:content-type:from:in-reply-to:date:cc
	:content-transfer-encoding:message-id:references:to:x-mailer
	:x-gm-message-state;
	bh=p4Ib1EzK5YuPugU3DwweOql7Cv/pjJRo1sQ7050r68w=;
	b=O1O7EvK+7KNahBQzmLvHKMaQDMqPJRZSE6iBS261+EZeLmr7TXOzevXkMFrx7Ufit+
	IyUq93ej2AiUWPBbQwYHSRg5hkgFQ+WuiDC7HpYaQBya78GshOOMhiRkEVPOgDW6kq5G
	8bN92pvLYLZQmFLsmJoZQm7EZPYec6o2XmtjOnI6XUhNcPOQWFIbZgwVX92o7ms1G32Q
	L6wSUupkUXP+kpbRSW3MsddTDdgDA66NsgktbCzoVgbyiw4zGXQLGvWAF37YBGYeuXE5
	wKHuvDOm+Eewb//hIBW7jl9jN08ClIDBFvYh8zTkIQ1g6t2tgSIwoGsmKzhlxwkSoPcs
	La3Q==
Received: by 10.50.187.162 with SMTP id ft2mr2906030igc.11.1346421562079;
	Fri, 31 Aug 2012 06:59:22 -0700 (PDT)
Received: from [192.168.7.210] ([206.223.182.18])
	by mx.google.com with ESMTPS id ua5sm524052igb.10.2012.08.31.06.59.20
	(version=TLSv1/SSLv3 cipher=OTHER);
	Fri, 31 Aug 2012 06:59:21 -0700 (PDT)
Mime-Version: 1.0 (Apple Message framework v1278)
From: Andres Lagar-Cavilla <andreslc@gridcentric.ca>
In-Reply-To: <1346331492-15027-3-git-send-email-david.vrabel@citrix.com>
Date: Fri, 31 Aug 2012 09:59:30 -0400
Message-Id: <C989BE9D-E520-4D04-9028-6CE6CC765E76@gridcentric.ca>
References: <1346331492-15027-1-git-send-email-david.vrabel@citrix.com>
	<1346331492-15027-3-git-send-email-david.vrabel@citrix.com>
To: David Vrabel <david.vrabel@citrix.com>
X-Mailer: Apple Mail (2.1278)
X-Gm-Message-State: ALoCoQlNVY1bo5jWkX9wMc19yMfRe3Chg5KejUYpZqphLH+TddOrjYnlf+5MQT1SGLhJGdx9fW/o
Cc: xen-devel@lists.xensource.com,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [PATCH 2/2] xen/privcmd: add PRIVCMD_MMAPBATCH_V2
	ioctl
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Re-spin of alternative patch after David's feedback.
Thanks
Andres

commit ab351a5cef1797935b083c2f6e72800a8949c515
Author: Andres Lagar-Cavilla <andres@lagarcavilla.org>
Date:   Thu Aug 30 12:23:33 2012 -0400

    xen/privcmd: add PRIVCMD_MMAPBATCH_V2 ioctl
    
    PRIVCMD_MMAPBATCH_V2 extends PRIVCMD_MMAPBATCH with an additional
    field for reporting the error code for every frame that could not be
    mapped.  libxc prefers PRIVCMD_MMAPBATCH_V2 over PRIVCMD_MMAPBATCH.
    
    Also expand PRIVCMD_MMAPBATCH to return appropriate error-encoding top nibble
    in the mfn array.
    
    Signed-off-by: David Vrabel <david.vrabel@citrix.com>
    Signed-off-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>

diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
index 85226cb..5386f20 100644
--- a/drivers/xen/privcmd.c
+++ b/drivers/xen/privcmd.c
@@ -76,7 +76,7 @@ static void free_page_list(struct list_head *pages)
  */
 static int gather_array(struct list_head *pagelist,
 			unsigned nelem, size_t size,
-			void __user *data)
+			const void __user *data)
 {
 	unsigned pageidx;
 	void *pagedata;
@@ -246,61 +246,117 @@ struct mmap_batch_state {
 	domid_t domain;
 	unsigned long va;
 	struct vm_area_struct *vma;
-	int err;
-
-	xen_pfn_t __user *user;
+	/* A tristate: 
+	 *      0 for no errors
+	 *      1 if at least one error has happened (and no
+	 *          -ENOENT errors have happened)
+	 *      -ENOENT if at least 1 -ENOENT has happened.
+	 */
+	int global_error;
+	/* An array for individual errors */
+	int *err;
+
+	/* User-space mfn array to store errors in the second pass for V1. */
+	xen_pfn_t __user *user_mfn;
 };
 
 static int mmap_batch_fn(void *data, void *state)
 {
 	xen_pfn_t *mfnp = data;
 	struct mmap_batch_state *st = state;
+	int ret;
 
-	if (xen_remap_domain_mfn_range(st->vma, st->va & PAGE_MASK, *mfnp, 1,
-				       st->vma->vm_page_prot, st->domain) < 0) {
-		*mfnp |= 0xf0000000U;
-		st->err++;
+	ret = xen_remap_domain_mfn_range(st->vma, st->va & PAGE_MASK, *mfnp, 1,
+					 st->vma->vm_page_prot, st->domain);
+
+	/* Store error code for second pass. */
+	*(st->err++) = ret;
+
+	/* And see if it affects the global_error. */
+	if (ret < 0) {
+		if (ret == -ENOENT)
+			st->global_error = -ENOENT;
+		else {
+			/* Record that at least one error has happened. */
+			if (st->global_error == 0)
+				st->global_error = 1;
+		}
 	}
 	st->va += PAGE_SIZE;
 
 	return 0;
 }
 
-static int mmap_return_errors(void *data, void *state)
+static int mmap_return_errors_v1(void *data, void *state)
 {
 	xen_pfn_t *mfnp = data;
 	struct mmap_batch_state *st = state;
-
-	return put_user(*mfnp, st->user++);
+	int err = *(st->err++);
+
+	/*
+	 * V1 encodes the error codes in the 32bit top nibble of the 
+	 * mfn (with its known limitations vis-a-vis 64 bit callers).
+	 */
+	*mfnp |= (err == -ENOENT) ?
+				PRIVCMD_MMAPBATCH_PAGED_ERROR :
+				PRIVCMD_MMAPBATCH_MFN_ERROR;
+	return __put_user(*mfnp, st->user_mfn++);
 }
 
 static struct vm_operations_struct privcmd_vm_ops;
 
-static long privcmd_ioctl_mmap_batch(void __user *udata)
+static long privcmd_ioctl_mmap_batch(void __user *udata, int version)
 {
 	int ret;
-	struct privcmd_mmapbatch m;
+	struct privcmd_mmapbatch_v2 m;
 	struct mm_struct *mm = current->mm;
 	struct vm_area_struct *vma;
 	unsigned long nr_pages;
 	LIST_HEAD(pagelist);
+	int *err_array = NULL;
 	struct mmap_batch_state state;
 
 	if (!xen_initial_domain())
 		return -EPERM;
 
-	if (copy_from_user(&m, udata, sizeof(m)))
-		return -EFAULT;
+	switch (version) {
+	case 1:
+		if (copy_from_user(&m, udata, sizeof(struct privcmd_mmapbatch)))
+			return -EFAULT;
+		/* Returns per-frame error in m.arr. */
+		m.err = NULL;
+		if (!access_ok(VERIFY_WRITE, m.arr, m.num * sizeof(*m.arr)))
+			return -EFAULT;
+		break;
+	case 2:
+		if (copy_from_user(&m, udata, sizeof(struct privcmd_mmapbatch_v2)))
+			return -EFAULT;
+		/* Returns per-frame error code in m.err. */
+		if (!access_ok(VERIFY_WRITE, m.err, m.num * (sizeof(*m.err))))
+			return -EFAULT;
+		break;
+	default:
+		return -EINVAL;
+	}
 
 	nr_pages = m.num;
 	if ((m.num <= 0) || (nr_pages > (LONG_MAX >> PAGE_SHIFT)))
 		return -EINVAL;
 
-	ret = gather_array(&pagelist, m.num, sizeof(xen_pfn_t),
-			   m.arr);
+	ret = gather_array(&pagelist, m.num, sizeof(xen_pfn_t), m.arr);
+
+	if (ret)
+		goto out;
+	if (list_empty(&pagelist)) {
+		ret = -EINVAL;
+		goto out;
+    }
 
-	if (ret || list_empty(&pagelist))
+	err_array = kcalloc(m.num, sizeof(int), GFP_KERNEL);
+	if (err_array == NULL) {
+		ret = -ENOMEM;
 		goto out;
+	}
 
 	down_write(&mm->mmap_sem);
 
@@ -315,24 +371,34 @@ static long privcmd_ioctl_mmap_batch(void __user *udata)
 		goto out;
 	}
 
-	state.domain = m.dom;
-	state.vma = vma;
-	state.va = m.addr;
-	state.err = 0;
+	state.domain        = m.dom;
+	state.vma           = vma;
+	state.va            = m.addr;
+	state.global_error  = 0;
+	state.err           = err_array;
 
-	ret = traverse_pages(m.num, sizeof(xen_pfn_t),
-			     &pagelist, mmap_batch_fn, &state);
+	/* mmap_batch_fn guarantees ret == 0 */
+	BUG_ON(traverse_pages(m.num, sizeof(xen_pfn_t),
+			     &pagelist, mmap_batch_fn, &state));
 
 	up_write(&mm->mmap_sem);
 
-	if (state.err > 0) {
-		state.user = m.arr;
+	if (state.global_error && (version == 1)) {
+		/* Write back errors in second pass. */
+		state.user_mfn = (xen_pfn_t *)m.arr;
+		state.err      = err_array;
 		ret = traverse_pages(m.num, sizeof(xen_pfn_t),
-			       &pagelist,
-			       mmap_return_errors, &state);
-	}
+					 &pagelist, mmap_return_errors_v1, &state);
+	} else
+		ret = __copy_to_user(m.err, err_array, m.num * sizeof(int));
+
+	/* If we have not had any EFAULT-like global errors then set the global
+	 * error to -ENOENT if necessary. */
+	if ((ret == 0) && (state.global_error == -ENOENT))
+		ret = -ENOENT;
 
 out:
+	kfree(err_array);
 	free_page_list(&pagelist);
 
 	return ret;
@@ -354,7 +420,11 @@ static long privcmd_ioctl(struct file *file,
 		break;
 
 	case IOCTL_PRIVCMD_MMAPBATCH:
-		ret = privcmd_ioctl_mmap_batch(udata);
+		ret = privcmd_ioctl_mmap_batch(udata, 1);
+		break;
+
+	case IOCTL_PRIVCMD_MMAPBATCH_V2:
+		ret = privcmd_ioctl_mmap_batch(udata, 2);
 		break;
 
 	default:
diff --git a/include/xen/privcmd.h b/include/xen/privcmd.h
index 45c1aa1..a853168 100644
--- a/include/xen/privcmd.h
+++ b/include/xen/privcmd.h
@@ -58,13 +58,33 @@ struct privcmd_mmapbatch {
 	int num;     /* number of pages to populate */
 	domid_t dom; /* target domain */
 	__u64 addr;  /* virtual address */
-	xen_pfn_t __user *arr; /* array of mfns - top nibble set on err */
+	xen_pfn_t __user *arr; /* array of mfns - or'd with
+				  PRIVCMD_MMAPBATCH_*_ERROR on err */
+};
+
+#define PRIVCMD_MMAPBATCH_MFN_ERROR     0xf0000000U
+#define PRIVCMD_MMAPBATCH_PAGED_ERROR   0x80000000U
+
+struct privcmd_mmapbatch_v2 {
+	unsigned int num; /* number of pages to populate */
+	domid_t dom;      /* target domain */
+	__u64 addr;       /* virtual address */
+	const xen_pfn_t __user *arr; /* array of mfns */
+	int __user *err;  /* array of error codes */
 };
 
 /*
  * @cmd: IOCTL_PRIVCMD_HYPERCALL
  * @arg: &privcmd_hypercall_t
  * Return: Value returned from execution of the specified hypercall.
+ *
+ * @cmd: IOCTL_PRIVCMD_MMAPBATCH_V2
+ * @arg: &struct privcmd_mmapbatch_v2
+ * Return: 0 on success (i.e., arg->err contains valid error codes for
+ * each frame).  On an error other than a failed frame remap, -1 is
+ * returned and errno is set to EINVAL, EFAULT etc.  As an exception,
+ * if the operation was otherwise successful but any frame failed with
+ * -ENOENT, then -1 is returned and errno is set to ENOENT.
  */
 #define IOCTL_PRIVCMD_HYPERCALL					\
 	_IOC(_IOC_NONE, 'P', 0, sizeof(struct privcmd_hypercall))
@@ -72,5 +92,7 @@ struct privcmd_mmapbatch {
 	_IOC(_IOC_NONE, 'P', 2, sizeof(struct privcmd_mmap))
 #define IOCTL_PRIVCMD_MMAPBATCH					\
 	_IOC(_IOC_NONE, 'P', 3, sizeof(struct privcmd_mmapbatch))
+#define IOCTL_PRIVCMD_MMAPBATCH_V2				\
+	_IOC(_IOC_NONE, 'P', 4, sizeof(struct privcmd_mmapbatch_v2))
 
 #endif /* __LINUX_PUBLIC_PRIVCMD_H__ */

On Aug 30, 2012, at 8:58 AM, David Vrabel wrote:

> From: David Vrabel <david.vrabel@citrix.com>
> 
> PRIVCMD_MMAPBATCH_V2 extends PRIVCMD_MMAPBATCH with an additional
> field for reporting the error code for every frame that could not be
> mapped.  libxc prefers PRIVCMD_MMAPBATCH_V2 over PRIVCMD_MMAPBATCH.
> 
> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
> ---
> drivers/xen/privcmd.c |   99 +++++++++++++++++++++++++++++++++++++++---------
> include/xen/privcmd.h |   23 +++++++++++-
> 2 files changed, 102 insertions(+), 20 deletions(-)
> 
> diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
> index ccee0f1..c0e89e7 100644
> --- a/drivers/xen/privcmd.c
> +++ b/drivers/xen/privcmd.c
> @@ -76,7 +76,7 @@ static void free_page_list(struct list_head *pages)
>  */
> static int gather_array(struct list_head *pagelist,
> 			unsigned nelem, size_t size,
> -			void __user *data)
> +			const void __user *data)
> {
> 	unsigned pageidx;
> 	void *pagedata;
> @@ -248,18 +248,37 @@ struct mmap_batch_state {
> 	struct vm_area_struct *vma;
> 	int err;
> 
> -	xen_pfn_t __user *user;
> +	xen_pfn_t __user *user_mfn;
> +	int __user *user_err;
> };
> 
> static int mmap_batch_fn(void *data, void *state)
> {
> 	xen_pfn_t *mfnp = data;
> 	struct mmap_batch_state *st = state;
> +	int ret;
> 
> -	if (xen_remap_domain_mfn_range(st->vma, st->va & PAGE_MASK, *mfnp, 1,
> -				       st->vma->vm_page_prot, st->domain) < 0) {
> -		*mfnp |= 0xf0000000U;
> -		st->err++;
> +	ret = xen_remap_domain_mfn_range(st->vma, st->va & PAGE_MASK, *mfnp, 1,
> +					 st->vma->vm_page_prot, st->domain);
> +	if (ret < 0) {
> +		/*
> +		 * Error reporting is a mess but userspace relies on
> +		 * it behaving this way.
> +		 *
> +		 * V2 needs to a) return the result of each frame's
> +		 * remap; and b) return -ENOENT if any frame failed
> +		 * with -ENOENT.
> +		 *
> +		 * In this first pass the error code is saved by
> +		 * overwriting the mfn and an error is indicated in
> +		 * st->err.
> +		 *
> +		 * The second pass by mmap_return_errors() will write
> +		 * the error codes to user space and get the right
> +		 * ioctl return value.
> +		 */
> +		*(int *)mfnp = ret;
> +		st->err = ret;
> 	}
> 	st->va += PAGE_SIZE;
> 
> @@ -270,16 +289,33 @@ static int mmap_return_errors(void *data, void *state)
> {
> 	xen_pfn_t *mfnp = data;
> 	struct mmap_batch_state *st = state;
> +	int ret;
> +
> +	if (st->user_err) {
> +		int err = *(int *)mfnp;
> +
> +		if (err == -ENOENT)
> +			st->err = err;
> 
> -	return put_user(*mfnp, st->user++);
> +		return __put_user(err, st->user_err++);
> +	} else {
> +		xen_pfn_t mfn;
> +
> +		ret = __get_user(mfn, st->user_mfn);
> +		if (ret < 0)
> +			return ret;
> +
> +		mfn |= PRIVCMD_MMAPBATCH_MFN_ERROR;
> +		return __put_user(mfn, st->user_mfn++);
> +	}
> }
> 
> static struct vm_operations_struct privcmd_vm_ops;
> 
> -static long privcmd_ioctl_mmap_batch(void __user *udata)
> +static long privcmd_ioctl_mmap_batch(void __user *udata, int version)
> {
> 	int ret;
> -	struct privcmd_mmapbatch m;
> +	struct privcmd_mmapbatch_v2 m;
> 	struct mm_struct *mm = current->mm;
> 	struct vm_area_struct *vma;
> 	unsigned long nr_pages;
> @@ -289,15 +325,31 @@ static long privcmd_ioctl_mmap_batch(void __user *udata)
> 	if (!xen_initial_domain())
> 		return -EPERM;
> 
> -	if (copy_from_user(&m, udata, sizeof(m)))
> -		return -EFAULT;
> +	switch (version) {
> +	case 1:
> +		if (copy_from_user(&m, udata, sizeof(struct privcmd_mmapbatch)))
> +			return -EFAULT;
> +		/* Returns per-frame error in m.arr. */
> +		m.err = NULL;
> +		if (!access_ok(VERIFY_WRITE, m.arr, m.num * sizeof(*m.arr)))
> +			return -EFAULT;
> +		break;
> +	case 2:
> +		if (copy_from_user(&m, udata, sizeof(struct privcmd_mmapbatch_v2)))
> +			return -EFAULT;
> +		/* Returns per-frame error code in m.err. */
> +		if (!access_ok(VERIFY_WRITE, m.err, m.num * (sizeof(*m.err))))
> +			return -EFAULT;
> +		break;
> +	default:
> +		return -EINVAL;
> +	}
> 
> 	nr_pages = m.num;
> 	if ((m.num <= 0) || (nr_pages > (LONG_MAX >> PAGE_SHIFT)))
> 		return -EINVAL;
> 
> -	ret = gather_array(&pagelist, m.num, sizeof(xen_pfn_t),
> -			   m.arr);
> +	ret = gather_array(&pagelist, m.num, sizeof(xen_pfn_t), m.arr);
> 
> 	if (ret || list_empty(&pagelist))
> 		goto out;
> @@ -325,12 +377,17 @@ static long privcmd_ioctl_mmap_batch(void __user *udata)
> 
> 	up_write(&mm->mmap_sem);
> 
> -	if (state.err > 0) {
> -		state.user = m.arr;
> +	if (state.err) {
> +		state.err = 0;
> +		state.user_mfn = (xen_pfn_t *)m.arr;
> +		state.user_err = m.err;
> 		ret = traverse_pages(m.num, sizeof(xen_pfn_t),
> -			       &pagelist,
> -			       mmap_return_errors, &state);
> -	}
> +				     &pagelist,
> +				     mmap_return_errors, &state);
> +		if (ret >= 0)
> +			ret = state.err;
> +	} else if (m.err)
> +		__clear_user(m.err, m.num * sizeof(*m.err));
> 
> out:
> 	free_page_list(&pagelist);
> @@ -354,7 +411,11 @@ static long privcmd_ioctl(struct file *file,
> 		break;
> 
> 	case IOCTL_PRIVCMD_MMAPBATCH:
> -		ret = privcmd_ioctl_mmap_batch(udata);
> +		ret = privcmd_ioctl_mmap_batch(udata, 1);
> +		break;
> +
> +	case IOCTL_PRIVCMD_MMAPBATCH_V2:
> +		ret = privcmd_ioctl_mmap_batch(udata, 2);
> 		break;
> 
> 	default:
> diff --git a/include/xen/privcmd.h b/include/xen/privcmd.h
> index 17857fb..f60d75c 100644
> --- a/include/xen/privcmd.h
> +++ b/include/xen/privcmd.h
> @@ -59,13 +59,32 @@ struct privcmd_mmapbatch {
> 	int num;     /* number of pages to populate */
> 	domid_t dom; /* target domain */
> 	__u64 addr;  /* virtual address */
> -	xen_pfn_t __user *arr; /* array of mfns - top nibble set on err */
> +	xen_pfn_t __user *arr; /* array of mfns - or'd with
> +				  PRIVCMD_MMAPBATCH_MFN_ERROR on err */
> +};
> +
> +#define PRIVCMD_MMAPBATCH_MFN_ERROR 0xf0000000U
> +
> +struct privcmd_mmapbatch_v2 {
> +	unsigned int num; /* number of pages to populate */
> +	domid_t dom;      /* target domain */
> +	__u64 addr;       /* virtual address */
> +	const xen_pfn_t __user *arr; /* array of mfns */
> +	int __user *err;  /* array of error codes */
> };
> 
> /*
>  * @cmd: IOCTL_PRIVCMD_HYPERCALL
>  * @arg: &privcmd_hypercall_t
>  * Return: Value returned from execution of the specified hypercall.
> + *
> + * @cmd: IOCTL_PRIVCMD_MMAPBATCH_V2
> + * @arg: &struct privcmd_mmapbatch_v2
> + * Return: 0 on success (i.e., arg->err contains valid error codes for
> + * each frame).  On an error other than a failed frame remap, -1 is
> + * returned and errno is set to EINVAL, EFAULT etc.  As an exception,
> + * if the operation was otherwise successful but any frame failed with
> + * -ENOENT, then -1 is returned and errno is set to ENOENT.
>  */
> #define IOCTL_PRIVCMD_HYPERCALL					\
> 	_IOC(_IOC_NONE, 'P', 0, sizeof(struct privcmd_hypercall))
> @@ -73,5 +92,7 @@ struct privcmd_mmapbatch {
> 	_IOC(_IOC_NONE, 'P', 2, sizeof(struct privcmd_mmap))
> #define IOCTL_PRIVCMD_MMAPBATCH					\
> 	_IOC(_IOC_NONE, 'P', 3, sizeof(struct privcmd_mmapbatch))
> +#define IOCTL_PRIVCMD_MMAPBATCH_V2				\
> +	_IOC(_IOC_NONE, 'P', 4, sizeof(struct privcmd_mmapbatch_v2))
> 
> #endif /* __LINUX_PUBLIC_PRIVCMD_H__ */
> -- 
> 1.7.2.5
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 14:32:47 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 14:32:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7SGk-0001hK-5f; Fri, 31 Aug 2012 14:32:22 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1T7SGi-0001hF-GP
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 14:32:20 +0000
Received: from [85.158.138.51:58994] by server-4.bemta-3.messagelabs.com id
	17/44-24831-0FAC0405; Fri, 31 Aug 2012 14:32:16 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-14.tower-174.messagelabs.com!1346423532!21588027!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjgzMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14320 invoked from network); 31 Aug 2012 14:32:13 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 14:32:13 -0000
X-IronPort-AV: E=Sophos;i="4.80,347,1344211200"; d="scan'208";a="36437469"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 14:32:11 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPMAILMX01.citrite.net
	(10.13.107.65) with Microsoft SMTP Server id 8.3.279.1; Fri, 31 Aug 2012
	10:32:11 -0400
Message-ID: <5040CAEA.7000600@citrix.com>
Date: Fri, 31 Aug 2012 15:32:10 +0100
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20120428 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: <andres@lagarcavilla.org>
References: <1346086287-17674-1-git-send-email-andres@lagarcavilla.org>
In-Reply-To: <1346086287-17674-1-git-send-email-andres@lagarcavilla.org>
Cc: Andres Lagar-Cavilla <andres@gridcentric.ca>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] Xen backend support for paged out grant
	targets.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 27/08/12 17:51, andres@lagarcavilla.org wrote:
> From: Andres Lagar-Cavilla <andres@lagarcavilla.org>
> 
> Since Xen-4.2, hvm domains may have portions of their memory paged out. When a
> foreign domain (such as dom0) attempts to map these frames, the map will
> initially fail. The hypervisor returns a suitable errno, and kicks an
> asynchronous page-in operation carried out by a helper. The foreign domain is
> expected to retry the mapping operation until it eventually succeeds. The
> foreign domain is not put to sleep because it could itself be the one
> running the pager assist (typical scenario for dom0).
> 
> This patch adds support for this mechanism for backend drivers using grant
> mapping and copying operations. Specifically, this covers the blkback and
> gntdev drivers (which map foreign grants), and the netback driver (which copies
> foreign grants).
> 
> * Add GNTST_eagain, already exposed by Xen, to the grant interface.
> * Add a retry method for grants that fail with GNTST_eagain (i.e. because the
>   target foreign frame is paged out).
> * Insert hooks with appropriate macro decorators in the aforementioned drivers.

I think you should implement wrappers around HYPERVISOR_grant_table_op()
and have the wrapper do the retries, instead of every backend having to
check for EAGAIN and issue the retries itself.  Similar to the
gnttab_map_grant_no_eagain() function you've already added.

Why do some operations not retry anyway?

> +void
> +gnttab_retry_eagain_gop(unsigned int cmd, void *gop, int16_t *status,
> +						const char *func)
> +{
> +	u8 delay = 1;
> +
> +	do {
> +		BUG_ON(HYPERVISOR_grant_table_op(cmd, gop, 1));
> +		if (*status == GNTST_eagain)
> +			msleep(delay++);
> +	} while ((*status == GNTST_eagain) && delay);

Terminating the loop when delay wraps is a bit subtle.  Why not make
delay unsigned and check delay <= MAX_DELAY?

Would it be sensible to ramp the delay faster?  Perhaps double each
iteration with a maximum possible delay of e.g., 256 ms.

> +#define gnttab_map_grant_no_eagain(_gop)                                    \
> +do {                                                                        \
> +    if ( HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, (_gop), 1))      \
> +        BUG();                                                              \
> +    if ((_gop)->status == GNTST_eagain)                                     \
> +        gnttab_retry_eagain_map((_gop));                                    \
> +} while(0)

Inline functions, please.

David


From xen-devel-bounces@lists.xen.org Fri Aug 31 14:36:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 14:36:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7SKR-0001sM-QZ; Fri, 31 Aug 2012 14:36:11 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T7SKP-0001sB-Ki
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 14:36:10 +0000
Received: from [85.158.138.51:44319] by server-4.bemta-3.messagelabs.com id
	47/8A-24831-8DBC0405; Fri, 31 Aug 2012 14:36:08 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-7.tower-174.messagelabs.com!1346423757!19039563!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEyMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12689 invoked from network); 31 Aug 2012 14:35:58 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-7.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 14:35:58 -0000
X-IronPort-AV: E=Sophos;i="4.80,347,1344211200"; d="scan'208";a="14291373"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 14:35:57 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Fri, 31 Aug 2012 15:35:57 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1T7SKD-0004Dd-CE; Fri, 31 Aug 2012 14:35:57 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1T7SKD-0007j6-5e;
	Fri, 31 Aug 2012 15:35:57 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20544.52169.126229.82141@mariner.uk.xensource.com>
Date: Fri, 31 Aug 2012 15:35:53 +0100
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1346411353.27277.183.camel@zakaz.uk.xensource.com>
References: <20544.39460.499127.781598@mariner.uk.xensource.com>
	<1346411353.27277.183.camel@zakaz.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Test report: Migration from 4.1 to 4.2 works
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: Test report: Migration from 4.1 to 4.2 works"):
> On Fri, 2012-08-31 at 12:04 +0100, Ian Jackson wrote:
> > However, xl fails on config files which are missing the final
> > newline.  This should be fixed for 4.2.
> 
> Since that I suppose involves the parser are you going to do that?

Below.

> Did you also try xl 4.1 -> 4.2? 

Yes.  Although that's just the same as 4.1+xend -> 4.2+xl really.  But
I did try migrating a domain created with xl as well as one created
with xm.


From: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [PATCH] libxl: Tolerate xl config files missing trailing newline

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>

---
 tools/libxl/libxlu_cfg_y.c |  154 +++++++++++++++++++++++---------------------
 tools/libxl/libxlu_cfg_y.y |   12 ++-
 2 files changed, 88 insertions(+), 78 deletions(-)

diff --git a/tools/libxl/libxlu_cfg_y.c b/tools/libxl/libxlu_cfg_y.c
index 5214386..218933e 100644
--- a/tools/libxl/libxlu_cfg_y.c
+++ b/tools/libxl/libxlu_cfg_y.c
@@ -373,18 +373,18 @@ union yyalloc
 #endif
 
 /* YYFINAL -- State number of the termination state.  */
-#define YYFINAL  2
+#define YYFINAL  3
 /* YYLAST -- Last index in YYTABLE.  */
-#define YYLAST   23
+#define YYLAST   24
 
 /* YYNTOKENS -- Number of terminals.  */
 #define YYNTOKENS  12
 /* YYNNTS -- Number of nonterminals.  */
-#define YYNNTS  9
+#define YYNNTS  11
 /* YYNRULES -- Number of rules.  */
-#define YYNRULES  19
+#define YYNRULES  22
 /* YYNRULES -- Number of states.  */
-#define YYNSTATES  28
+#define YYNSTATES  30
 
 /* YYTRANSLATE(YYLEX) -- Bison symbol number corresponding to YYLEX.  */
 #define YYUNDEFTOK  2
@@ -430,26 +430,28 @@ static const yytype_uint8 yytranslate[] =
    YYRHS.  */
 static const yytype_uint8 yyprhs[] =
 {
-       0,     0,     3,     4,     7,    12,    14,    17,    19,    21,
-      23,    28,    30,    32,    33,    35,    39,    42,    48,    49
+       0,     0,     3,     5,     8,     9,    12,    15,    17,    20,
+      24,    26,    28,    30,    35,    37,    39,    40,    42,    46,
+      49,    55,    56
 };
 
 /* YYRHS -- A `-1'-separated list of the rules' RHS.  */
 static const yytype_int8 yyrhs[] =
 {
-      13,     0,    -1,    -1,    13,    14,    -1,     3,     7,    16,
-      15,    -1,    15,    -1,     1,     6,    -1,     6,    -1,     8,
-      -1,    17,    -1,     9,    20,    18,    10,    -1,     4,    -1,
-       5,    -1,    -1,    19,    -1,    19,    11,    20,    -1,    17,
-      20,    -1,    19,    11,    20,    17,    20,    -1,    -1,    20,
-       6,    -1
+      13,     0,    -1,    14,    -1,    14,    16,    -1,    -1,    14,
+      15,    -1,    16,    17,    -1,    17,    -1,     1,     6,    -1,
+       3,     7,    18,    -1,     6,    -1,     8,    -1,    19,    -1,
+       9,    22,    20,    10,    -1,     4,    -1,     5,    -1,    -1,
+      21,    -1,    21,    11,    22,    -1,    19,    22,    -1,    21,
+      11,    22,    19,    22,    -1,    -1,    22,     6,    -1
 };
 
 /* YYRLINE[YYN] -- source line where rule number YYN was defined.  */
 static const yytype_uint8 yyrline[] =
 {
-       0,    47,    47,    48,    50,    52,    53,    55,    56,    58,
-      59,    61,    62,    64,    65,    66,    68,    69,    71,    73
+       0,    47,    47,    48,    50,    51,    53,    54,    55,    57,
+      59,    60,    62,    63,    65,    66,    68,    69,    70,    72,
+      73,    75,    77
 };
 #endif
 
@@ -459,8 +461,8 @@ static const yytype_uint8 yyrline[] =
 static const char *const yytname[] =
 {
   "$end", "error", "$undefined", "IDENT", "STRING", "NUMBER", "NEWLINE",
-  "'='", "';'", "'['", "']'", "','", "$accept", "file", "assignment",
-  "endstmt", "value", "atom", "valuelist", "values", "nlok", 0
+  "'='", "';'", "'['", "']'", "','", "$accept", "file", "stmts", "stmt",
+  "assignment", "endstmt", "value", "atom", "valuelist", "values", "nlok", 0
 };
 #endif
 
@@ -477,15 +479,17 @@ static const yytype_uint16 yytoknum[] =
 /* YYR1[YYN] -- Symbol number of symbol that rule YYN derives.  */
 static const yytype_uint8 yyr1[] =
 {
-       0,    12,    13,    13,    14,    14,    14,    15,    15,    16,
-      16,    17,    17,    18,    18,    18,    19,    19,    20,    20
+       0,    12,    13,    13,    14,    14,    15,    15,    15,    16,
+      17,    17,    18,    18,    19,    19,    20,    20,    20,    21,
+      21,    22,    22
 };
 
 /* YYR2[YYN] -- Number of symbols composing right hand side of rule YYN.  */
 static const yytype_uint8 yyr2[] =
 {
-       0,     2,     0,     2,     4,     1,     2,     1,     1,     1,
-       4,     1,     1,     0,     1,     3,     2,     5,     0,     2
+       0,     2,     1,     2,     0,     2,     2,     1,     2,     3,
+       1,     1,     1,     4,     1,     1,     0,     1,     3,     2,
+       5,     0,     2
 };
 
 /* YYDEFACT[STATE-NAME] -- Default rule to reduce with in state
@@ -493,59 +497,61 @@ static const yytype_uint8 yyr2[] =
    means the default is an error.  */
 static const yytype_uint8 yydefact[] =
 {
-       2,     0,     1,     0,     0,     7,     8,     3,     5,     6,
-       0,    11,    12,    18,     0,     9,    13,     4,    19,    18,
-       0,    14,    16,    10,    18,    15,    18,    17
+       4,     0,     0,     1,     0,     0,    10,    11,     5,     3,
+       7,     8,     0,     6,    14,    15,    21,     9,    12,    16,
+      22,    21,     0,    17,    19,    13,    21,    18,    21,    20
 };
 
 /* YYDEFGOTO[NTERM-NUM].  */
 static const yytype_int8 yydefgoto[] =
 {
-      -1,     1,     7,     8,    14,    15,    20,    21,    16
+      -1,     1,     2,     8,     9,    10,    17,    18,    22,    23,
+      19
 };
 
 /* YYPACT[STATE-NUM] -- Index in YYTABLE of the portion describing
    STATE-NUM.  */
-#define YYPACT_NINF -17
+#define YYPACT_NINF -18
 static const yytype_int8 yypact[] =
 {
-     -17,     2,   -17,    -5,    -3,   -17,   -17,   -17,   -17,   -17,
-      10,   -17,   -17,   -17,    14,   -17,    12,   -17,   -17,   -17,
-      11,    -4,     6,   -17,   -17,    12,   -17,     6
+     -18,     4,     0,   -18,    -1,     6,   -18,   -18,   -18,     3,
+     -18,   -18,    11,   -18,   -18,   -18,   -18,   -18,   -18,    13,
+     -18,   -18,    12,    10,    17,   -18,   -18,    13,   -18,    17
 };
 
 /* YYPGOTO[NTERM-NUM].  */
 static const yytype_int8 yypgoto[] =
 {
-     -17,   -17,   -17,     9,   -17,   -16,   -17,   -17,   -13
+     -18,   -18,   -18,   -18,   -18,    15,   -18,   -17,   -18,   -18,
+     -14
 };
 
 /* YYTABLE[YYPACT[STATE-NUM]].  What to do in state STATE-NUM.  If
    positive, shift that token.  If negative, reduce the rule which
    number is the opposite.  If zero, do what YYDEFACT says.
    If YYTABLE_NINF, syntax error.  */
-#define YYTABLE_NINF -1
-static const yytype_uint8 yytable[] =
+#define YYTABLE_NINF -3
+static const yytype_int8 yytable[] =
 {
-      19,     9,     2,     3,    10,     4,    22,    24,     5,    26,
-       6,    25,    18,    27,    11,    12,    11,    12,    18,    13,
-       5,    23,     6,    17
+      -2,     4,    21,     5,     3,    11,     6,    24,     7,     6,
+      28,     7,    27,    12,    29,    14,    15,    14,    15,    20,
+      16,    26,    25,    20,    13
 };
 
 static const yytype_uint8 yycheck[] =
 {
-      16,     6,     0,     1,     7,     3,    19,    11,     6,    25,
-       8,    24,     6,    26,     4,     5,     4,     5,     6,     9,
-       6,    10,     8,    14
+       0,     1,    19,     3,     0,     6,     6,    21,     8,     6,
+      27,     8,    26,     7,    28,     4,     5,     4,     5,     6,
+       9,    11,    10,     6,     9
 };
 
 /* YYSTOS[STATE-NUM] -- The (internal number of the) accessing
    symbol of state STATE-NUM.  */
 static const yytype_uint8 yystos[] =
 {
-       0,    13,     0,     1,     3,     6,     8,    14,    15,     6,
-       7,     4,     5,     9,    16,    17,    20,    15,     6,    17,
-      18,    19,    20,    10,    11,    20,    17,    20
+       0,    13,    14,     0,     1,     3,     6,     8,    15,    16,
+      17,     6,     7,    17,     4,     5,     9,    18,    19,    22,
+       6,    19,    20,    21,    22,    10,    11,    22,    19,    22
 };
 
 #define yyerrok		(yyerrstatus = 0)
@@ -1077,7 +1083,7 @@ yydestruct (yymsg, yytype, yyvaluep, yylocationp, ctx)
 	{ free((yyvaluep->string)); };
 
 /* Line 1000 of yacc.c  */
-#line 1081 "libxlu_cfg_y.c"
+#line 1087 "libxlu_cfg_y.c"
 	break;
       case 4: /* "STRING" */
 
@@ -1086,7 +1092,7 @@ yydestruct (yymsg, yytype, yyvaluep, yylocationp, ctx)
 	{ free((yyvaluep->string)); };
 
 /* Line 1000 of yacc.c  */
-#line 1090 "libxlu_cfg_y.c"
+#line 1096 "libxlu_cfg_y.c"
 	break;
       case 5: /* "NUMBER" */
 
@@ -1095,43 +1101,43 @@ yydestruct (yymsg, yytype, yyvaluep, yylocationp, ctx)
 	{ free((yyvaluep->string)); };
 
 /* Line 1000 of yacc.c  */
-#line 1099 "libxlu_cfg_y.c"
+#line 1105 "libxlu_cfg_y.c"
 	break;
-      case 16: /* "value" */
+      case 18: /* "value" */
 
 /* Line 1000 of yacc.c  */
 #line 43 "libxlu_cfg_y.y"
 	{ xlu__cfg_set_free((yyvaluep->setting)); };
 
 /* Line 1000 of yacc.c  */
-#line 1108 "libxlu_cfg_y.c"
+#line 1114 "libxlu_cfg_y.c"
 	break;
-      case 17: /* "atom" */
+      case 19: /* "atom" */
 
 /* Line 1000 of yacc.c  */
 #line 40 "libxlu_cfg_y.y"
 	{ free((yyvaluep->string)); };
 
 /* Line 1000 of yacc.c  */
-#line 1117 "libxlu_cfg_y.c"
+#line 1123 "libxlu_cfg_y.c"
 	break;
-      case 18: /* "valuelist" */
+      case 20: /* "valuelist" */
 
 /* Line 1000 of yacc.c  */
 #line 43 "libxlu_cfg_y.y"
 	{ xlu__cfg_set_free((yyvaluep->setting)); };
 
 /* Line 1000 of yacc.c  */
-#line 1126 "libxlu_cfg_y.c"
+#line 1132 "libxlu_cfg_y.c"
 	break;
-      case 19: /* "values" */
+      case 21: /* "values" */
 
 /* Line 1000 of yacc.c  */
 #line 43 "libxlu_cfg_y.y"
 	{ xlu__cfg_set_free((yyvaluep->setting)); };
 
 /* Line 1000 of yacc.c  */
-#line 1135 "libxlu_cfg_y.c"
+#line 1141 "libxlu_cfg_y.c"
 	break;
 
       default:
@@ -1459,80 +1465,80 @@ yyreduce:
   YY_REDUCE_PRINT (yyn);
   switch (yyn)
     {
-        case 4:
+        case 9:
 
 /* Line 1455 of yacc.c  */
-#line 51 "libxlu_cfg_y.y"
-    { xlu__cfg_set_store(ctx,(yyvsp[(1) - (4)].string),(yyvsp[(3) - (4)].setting),(yylsp[(3) - (4)]).first_line); ;}
+#line 57 "libxlu_cfg_y.y"
+    { xlu__cfg_set_store(ctx,(yyvsp[(1) - (3)].string),(yyvsp[(3) - (3)].setting),(yylsp[(3) - (3)]).first_line); ;}
     break;
 
-  case 9:
+  case 12:
 
 /* Line 1455 of yacc.c  */
-#line 58 "libxlu_cfg_y.y"
+#line 62 "libxlu_cfg_y.y"
     { (yyval.setting)= xlu__cfg_set_mk(ctx,1,(yyvsp[(1) - (1)].string)); ;}
     break;
 
-  case 10:
+  case 13:
 
 /* Line 1455 of yacc.c  */
-#line 59 "libxlu_cfg_y.y"
+#line 63 "libxlu_cfg_y.y"
     { (yyval.setting)= (yyvsp[(3) - (4)].setting); ;}
     break;
 
-  case 11:
+  case 14:
 
 /* Line 1455 of yacc.c  */
-#line 61 "libxlu_cfg_y.y"
+#line 65 "libxlu_cfg_y.y"
     { (yyval.string)= (yyvsp[(1) - (1)].string); ;}
     break;
 
-  case 12:
+  case 15:
 
 /* Line 1455 of yacc.c  */
-#line 62 "libxlu_cfg_y.y"
+#line 66 "libxlu_cfg_y.y"
     { (yyval.string)= (yyvsp[(1) - (1)].string); ;}
     break;
 
-  case 13:
+  case 16:
 
 /* Line 1455 of yacc.c  */
-#line 64 "libxlu_cfg_y.y"
+#line 68 "libxlu_cfg_y.y"
     { (yyval.setting)= xlu__cfg_set_mk(ctx,0,0); ;}
     break;
 
-  case 14:
+  case 17:
 
 /* Line 1455 of yacc.c  */
-#line 65 "libxlu_cfg_y.y"
+#line 69 "libxlu_cfg_y.y"
     { (yyval.setting)= (yyvsp[(1) - (1)].setting); ;}
     break;
 
-  case 15:
+  case 18:
 
 /* Line 1455 of yacc.c  */
-#line 66 "libxlu_cfg_y.y"
+#line 70 "libxlu_cfg_y.y"
     { (yyval.setting)= (yyvsp[(1) - (3)].setting); ;}
     break;
 
-  case 16:
+  case 19:
 
 /* Line 1455 of yacc.c  */
-#line 68 "libxlu_cfg_y.y"
+#line 72 "libxlu_cfg_y.y"
     { (yyval.setting)= xlu__cfg_set_mk(ctx,2,(yyvsp[(1) - (2)].string)); ;}
     break;
 
-  case 17:
+  case 20:
 
 /* Line 1455 of yacc.c  */
-#line 69 "libxlu_cfg_y.y"
+#line 73 "libxlu_cfg_y.y"
     { xlu__cfg_set_add(ctx,(yyvsp[(1) - (5)].setting),(yyvsp[(4) - (5)].string)); (yyval.setting)= (yyvsp[(1) - (5)].setting); ;}
     break;
 
 
 
 /* Line 1455 of yacc.c  */
-#line 1536 "libxlu_cfg_y.c"
+#line 1542 "libxlu_cfg_y.c"
       default: break;
     }
   YY_SYMBOL_PRINT ("-> $$ =", yyr1[yyn], &yyval, &yyloc);
diff --git a/tools/libxl/libxlu_cfg_y.y b/tools/libxl/libxlu_cfg_y.y
index 29aedca..aa9f787 100644
--- a/tools/libxl/libxlu_cfg_y.y
+++ b/tools/libxl/libxlu_cfg_y.y
@@ -44,14 +44,18 @@
 
 %%
 
-file: /* empty */
- |     file assignment
+file:  stmts
+ |     stmts assignment
 
-assignment: IDENT '=' value endstmt
-                            { xlu__cfg_set_store(ctx,$1,$3,@3.first_line); }
+stmts:  /* empty */
+ |      stmts stmt
+
+stmt:   assignment endstmt
  |      endstmt
  |      error NEWLINE
 
+assignment: IDENT '=' value { xlu__cfg_set_store(ctx,$1,$3,@3.first_line); }
+
 endstmt: NEWLINE
  |      ';'
 
-- 
tg: (9153666..) t/xen/xl.cfg.no-final-newline-ok (depends on: t/xen/xl.cfg.mem-fix)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 14:36:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 14:36:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7SKR-0001sM-QZ; Fri, 31 Aug 2012 14:36:11 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T7SKP-0001sB-Ki
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 14:36:10 +0000
Received: from [85.158.138.51:44319] by server-4.bemta-3.messagelabs.com id
	47/8A-24831-8DBC0405; Fri, 31 Aug 2012 14:36:08 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-7.tower-174.messagelabs.com!1346423757!19039563!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEyMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12689 invoked from network); 31 Aug 2012 14:35:58 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-7.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 14:35:58 -0000
X-IronPort-AV: E=Sophos;i="4.80,347,1344211200"; d="scan'208";a="14291373"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 14:35:57 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Fri, 31 Aug 2012 15:35:57 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1T7SKD-0004Dd-CE; Fri, 31 Aug 2012 14:35:57 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1T7SKD-0007j6-5e;
	Fri, 31 Aug 2012 15:35:57 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20544.52169.126229.82141@mariner.uk.xensource.com>
Date: Fri, 31 Aug 2012 15:35:53 +0100
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1346411353.27277.183.camel@zakaz.uk.xensource.com>
References: <20544.39460.499127.781598@mariner.uk.xensource.com>
	<1346411353.27277.183.camel@zakaz.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Test report: Migration from 4.1 to 4.2 works
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: Test report: Migration from 4.1 to 4.2 works"):
> On Fri, 2012-08-31 at 12:04 +0100, Ian Jackson wrote:
> > However, xl fails on config files which are missing the final
> > newline.  This should be fixed for 4.2.
> 
> Since that I suppose involves the parser are you going to do that?

Below.

> Did you also try xl 4.1 -> 4.2? 

Yes.  Although that's just the same as 4.1+xend -> 4.2+xl really.  But
I did try migrating a domain created with xl as well as one created
with xm.

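[Editorial note: the patch below shows the grammar change only as +/- diff
fragments; as a consolidated outline, the resulting grammar reads roughly as
follows. This is a paraphrase for readability, not the literal patch text.]

```
file:  stmts
 |     stmts assignment      /* final assignment may omit its endstmt,
                                so a file need not end with a newline */

stmts: /* empty */
 |     stmts stmt

stmt:  assignment endstmt
 |     endstmt
 |     error NEWLINE

assignment: IDENT '=' value  /* action runs here, not after endstmt */

endstmt: NEWLINE
 |     ';'
```

The key move is splitting `assignment` from its terminator: previously
`assignment` required a trailing `endstmt`, so `IDENT '=' value<EOF>` was a
syntax error; now `file` explicitly accepts a trailing unterminated assignment.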

From: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [PATCH] libxl: Tolerate xl config files missing trailing newline

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>

---
 tools/libxl/libxlu_cfg_y.c |  154 +++++++++++++++++++++++---------------------
 tools/libxl/libxlu_cfg_y.y |   12 ++-
 2 files changed, 88 insertions(+), 78 deletions(-)

diff --git a/tools/libxl/libxlu_cfg_y.c b/tools/libxl/libxlu_cfg_y.c
index 5214386..218933e 100644
--- a/tools/libxl/libxlu_cfg_y.c
+++ b/tools/libxl/libxlu_cfg_y.c
@@ -373,18 +373,18 @@ union yyalloc
 #endif
 
 /* YYFINAL -- State number of the termination state.  */
-#define YYFINAL  2
+#define YYFINAL  3
 /* YYLAST -- Last index in YYTABLE.  */
-#define YYLAST   23
+#define YYLAST   24
 
 /* YYNTOKENS -- Number of terminals.  */
 #define YYNTOKENS  12
 /* YYNNTS -- Number of nonterminals.  */
-#define YYNNTS  9
+#define YYNNTS  11
 /* YYNRULES -- Number of rules.  */
-#define YYNRULES  19
+#define YYNRULES  22
 /* YYNRULES -- Number of states.  */
-#define YYNSTATES  28
+#define YYNSTATES  30
 
 /* YYTRANSLATE(YYLEX) -- Bison symbol number corresponding to YYLEX.  */
 #define YYUNDEFTOK  2
@@ -430,26 +430,28 @@ static const yytype_uint8 yytranslate[] =
    YYRHS.  */
 static const yytype_uint8 yyprhs[] =
 {
-       0,     0,     3,     4,     7,    12,    14,    17,    19,    21,
-      23,    28,    30,    32,    33,    35,    39,    42,    48,    49
+       0,     0,     3,     5,     8,     9,    12,    15,    17,    20,
+      24,    26,    28,    30,    35,    37,    39,    40,    42,    46,
+      49,    55,    56
 };
 
 /* YYRHS -- A `-1'-separated list of the rules' RHS.  */
 static const yytype_int8 yyrhs[] =
 {
-      13,     0,    -1,    -1,    13,    14,    -1,     3,     7,    16,
-      15,    -1,    15,    -1,     1,     6,    -1,     6,    -1,     8,
-      -1,    17,    -1,     9,    20,    18,    10,    -1,     4,    -1,
-       5,    -1,    -1,    19,    -1,    19,    11,    20,    -1,    17,
-      20,    -1,    19,    11,    20,    17,    20,    -1,    -1,    20,
-       6,    -1
+      13,     0,    -1,    14,    -1,    14,    16,    -1,    -1,    14,
+      15,    -1,    16,    17,    -1,    17,    -1,     1,     6,    -1,
+       3,     7,    18,    -1,     6,    -1,     8,    -1,    19,    -1,
+       9,    22,    20,    10,    -1,     4,    -1,     5,    -1,    -1,
+      21,    -1,    21,    11,    22,    -1,    19,    22,    -1,    21,
+      11,    22,    19,    22,    -1,    -1,    22,     6,    -1
 };
 
 /* YYRLINE[YYN] -- source line where rule number YYN was defined.  */
 static const yytype_uint8 yyrline[] =
 {
-       0,    47,    47,    48,    50,    52,    53,    55,    56,    58,
-      59,    61,    62,    64,    65,    66,    68,    69,    71,    73
+       0,    47,    47,    48,    50,    51,    53,    54,    55,    57,
+      59,    60,    62,    63,    65,    66,    68,    69,    70,    72,
+      73,    75,    77
 };
 #endif
 
@@ -459,8 +461,8 @@ static const yytype_uint8 yyrline[] =
 static const char *const yytname[] =
 {
   "$end", "error", "$undefined", "IDENT", "STRING", "NUMBER", "NEWLINE",
-  "'='", "';'", "'['", "']'", "','", "$accept", "file", "assignment",
-  "endstmt", "value", "atom", "valuelist", "values", "nlok", 0
+  "'='", "';'", "'['", "']'", "','", "$accept", "file", "stmts", "stmt",
+  "assignment", "endstmt", "value", "atom", "valuelist", "values", "nlok", 0
 };
 #endif
 
@@ -477,15 +479,17 @@ static const yytype_uint16 yytoknum[] =
 /* YYR1[YYN] -- Symbol number of symbol that rule YYN derives.  */
 static const yytype_uint8 yyr1[] =
 {
-       0,    12,    13,    13,    14,    14,    14,    15,    15,    16,
-      16,    17,    17,    18,    18,    18,    19,    19,    20,    20
+       0,    12,    13,    13,    14,    14,    15,    15,    15,    16,
+      17,    17,    18,    18,    19,    19,    20,    20,    20,    21,
+      21,    22,    22
 };
 
 /* YYR2[YYN] -- Number of symbols composing right hand side of rule YYN.  */
 static const yytype_uint8 yyr2[] =
 {
-       0,     2,     0,     2,     4,     1,     2,     1,     1,     1,
-       4,     1,     1,     0,     1,     3,     2,     5,     0,     2
+       0,     2,     1,     2,     0,     2,     2,     1,     2,     3,
+       1,     1,     1,     4,     1,     1,     0,     1,     3,     2,
+       5,     0,     2
 };
 
 /* YYDEFACT[STATE-NAME] -- Default rule to reduce with in state
@@ -493,59 +497,61 @@ static const yytype_uint8 yyr2[] =
    means the default is an error.  */
 static const yytype_uint8 yydefact[] =
 {
-       2,     0,     1,     0,     0,     7,     8,     3,     5,     6,
-       0,    11,    12,    18,     0,     9,    13,     4,    19,    18,
-       0,    14,    16,    10,    18,    15,    18,    17
+       4,     0,     0,     1,     0,     0,    10,    11,     5,     3,
+       7,     8,     0,     6,    14,    15,    21,     9,    12,    16,
+      22,    21,     0,    17,    19,    13,    21,    18,    21,    20
 };
 
 /* YYDEFGOTO[NTERM-NUM].  */
 static const yytype_int8 yydefgoto[] =
 {
-      -1,     1,     7,     8,    14,    15,    20,    21,    16
+      -1,     1,     2,     8,     9,    10,    17,    18,    22,    23,
+      19
 };
 
 /* YYPACT[STATE-NUM] -- Index in YYTABLE of the portion describing
    STATE-NUM.  */
-#define YYPACT_NINF -17
+#define YYPACT_NINF -18
 static const yytype_int8 yypact[] =
 {
-     -17,     2,   -17,    -5,    -3,   -17,   -17,   -17,   -17,   -17,
-      10,   -17,   -17,   -17,    14,   -17,    12,   -17,   -17,   -17,
-      11,    -4,     6,   -17,   -17,    12,   -17,     6
+     -18,     4,     0,   -18,    -1,     6,   -18,   -18,   -18,     3,
+     -18,   -18,    11,   -18,   -18,   -18,   -18,   -18,   -18,    13,
+     -18,   -18,    12,    10,    17,   -18,   -18,    13,   -18,    17
 };
 
 /* YYPGOTO[NTERM-NUM].  */
 static const yytype_int8 yypgoto[] =
 {
-     -17,   -17,   -17,     9,   -17,   -16,   -17,   -17,   -13
+     -18,   -18,   -18,   -18,   -18,    15,   -18,   -17,   -18,   -18,
+     -14
 };
 
 /* YYTABLE[YYPACT[STATE-NUM]].  What to do in state STATE-NUM.  If
    positive, shift that token.  If negative, reduce the rule which
    number is the opposite.  If zero, do what YYDEFACT says.
    If YYTABLE_NINF, syntax error.  */
-#define YYTABLE_NINF -1
-static const yytype_uint8 yytable[] =
+#define YYTABLE_NINF -3
+static const yytype_int8 yytable[] =
 {
-      19,     9,     2,     3,    10,     4,    22,    24,     5,    26,
-       6,    25,    18,    27,    11,    12,    11,    12,    18,    13,
-       5,    23,     6,    17
+      -2,     4,    21,     5,     3,    11,     6,    24,     7,     6,
+      28,     7,    27,    12,    29,    14,    15,    14,    15,    20,
+      16,    26,    25,    20,    13
 };
 
 static const yytype_uint8 yycheck[] =
 {
-      16,     6,     0,     1,     7,     3,    19,    11,     6,    25,
-       8,    24,     6,    26,     4,     5,     4,     5,     6,     9,
-       6,    10,     8,    14
+       0,     1,    19,     3,     0,     6,     6,    21,     8,     6,
+      27,     8,    26,     7,    28,     4,     5,     4,     5,     6,
+       9,    11,    10,     6,     9
 };
 
 /* YYSTOS[STATE-NUM] -- The (internal number of the) accessing
    symbol of state STATE-NUM.  */
 static const yytype_uint8 yystos[] =
 {
-       0,    13,     0,     1,     3,     6,     8,    14,    15,     6,
-       7,     4,     5,     9,    16,    17,    20,    15,     6,    17,
-      18,    19,    20,    10,    11,    20,    17,    20
+       0,    13,    14,     0,     1,     3,     6,     8,    15,    16,
+      17,     6,     7,    17,     4,     5,     9,    18,    19,    22,
+       6,    19,    20,    21,    22,    10,    11,    22,    19,    22
 };
 
 #define yyerrok		(yyerrstatus = 0)
@@ -1077,7 +1083,7 @@ yydestruct (yymsg, yytype, yyvaluep, yylocationp, ctx)
 	{ free((yyvaluep->string)); };
 
 /* Line 1000 of yacc.c  */
-#line 1081 "libxlu_cfg_y.c"
+#line 1087 "libxlu_cfg_y.c"
 	break;
       case 4: /* "STRING" */
 
@@ -1086,7 +1092,7 @@ yydestruct (yymsg, yytype, yyvaluep, yylocationp, ctx)
 	{ free((yyvaluep->string)); };
 
 /* Line 1000 of yacc.c  */
-#line 1090 "libxlu_cfg_y.c"
+#line 1096 "libxlu_cfg_y.c"
 	break;
       case 5: /* "NUMBER" */
 
@@ -1095,43 +1101,43 @@ yydestruct (yymsg, yytype, yyvaluep, yylocationp, ctx)
 	{ free((yyvaluep->string)); };
 
 /* Line 1000 of yacc.c  */
-#line 1099 "libxlu_cfg_y.c"
+#line 1105 "libxlu_cfg_y.c"
 	break;
-      case 16: /* "value" */
+      case 18: /* "value" */
 
 /* Line 1000 of yacc.c  */
 #line 43 "libxlu_cfg_y.y"
 	{ xlu__cfg_set_free((yyvaluep->setting)); };
 
 /* Line 1000 of yacc.c  */
-#line 1108 "libxlu_cfg_y.c"
+#line 1114 "libxlu_cfg_y.c"
 	break;
-      case 17: /* "atom" */
+      case 19: /* "atom" */
 
 /* Line 1000 of yacc.c  */
 #line 40 "libxlu_cfg_y.y"
 	{ free((yyvaluep->string)); };
 
 /* Line 1000 of yacc.c  */
-#line 1117 "libxlu_cfg_y.c"
+#line 1123 "libxlu_cfg_y.c"
 	break;
-      case 18: /* "valuelist" */
+      case 20: /* "valuelist" */
 
 /* Line 1000 of yacc.c  */
 #line 43 "libxlu_cfg_y.y"
 	{ xlu__cfg_set_free((yyvaluep->setting)); };
 
 /* Line 1000 of yacc.c  */
-#line 1126 "libxlu_cfg_y.c"
+#line 1132 "libxlu_cfg_y.c"
 	break;
-      case 19: /* "values" */
+      case 21: /* "values" */
 
 /* Line 1000 of yacc.c  */
 #line 43 "libxlu_cfg_y.y"
 	{ xlu__cfg_set_free((yyvaluep->setting)); };
 
 /* Line 1000 of yacc.c  */
-#line 1135 "libxlu_cfg_y.c"
+#line 1141 "libxlu_cfg_y.c"
 	break;
 
       default:
@@ -1459,80 +1465,80 @@ yyreduce:
   YY_REDUCE_PRINT (yyn);
   switch (yyn)
     {
-        case 4:
+        case 9:
 
 /* Line 1455 of yacc.c  */
-#line 51 "libxlu_cfg_y.y"
-    { xlu__cfg_set_store(ctx,(yyvsp[(1) - (4)].string),(yyvsp[(3) - (4)].setting),(yylsp[(3) - (4)]).first_line); ;}
+#line 57 "libxlu_cfg_y.y"
+    { xlu__cfg_set_store(ctx,(yyvsp[(1) - (3)].string),(yyvsp[(3) - (3)].setting),(yylsp[(3) - (3)]).first_line); ;}
     break;
 
-  case 9:
+  case 12:
 
 /* Line 1455 of yacc.c  */
-#line 58 "libxlu_cfg_y.y"
+#line 62 "libxlu_cfg_y.y"
     { (yyval.setting)= xlu__cfg_set_mk(ctx,1,(yyvsp[(1) - (1)].string)); ;}
     break;
 
-  case 10:
+  case 13:
 
 /* Line 1455 of yacc.c  */
-#line 59 "libxlu_cfg_y.y"
+#line 63 "libxlu_cfg_y.y"
     { (yyval.setting)= (yyvsp[(3) - (4)].setting); ;}
     break;
 
-  case 11:
+  case 14:
 
 /* Line 1455 of yacc.c  */
-#line 61 "libxlu_cfg_y.y"
+#line 65 "libxlu_cfg_y.y"
     { (yyval.string)= (yyvsp[(1) - (1)].string); ;}
     break;
 
-  case 12:
+  case 15:
 
 /* Line 1455 of yacc.c  */
-#line 62 "libxlu_cfg_y.y"
+#line 66 "libxlu_cfg_y.y"
     { (yyval.string)= (yyvsp[(1) - (1)].string); ;}
     break;
 
-  case 13:
+  case 16:
 
 /* Line 1455 of yacc.c  */
-#line 64 "libxlu_cfg_y.y"
+#line 68 "libxlu_cfg_y.y"
     { (yyval.setting)= xlu__cfg_set_mk(ctx,0,0); ;}
     break;
 
-  case 14:
+  case 17:
 
 /* Line 1455 of yacc.c  */
-#line 65 "libxlu_cfg_y.y"
+#line 69 "libxlu_cfg_y.y"
     { (yyval.setting)= (yyvsp[(1) - (1)].setting); ;}
     break;
 
-  case 15:
+  case 18:
 
 /* Line 1455 of yacc.c  */
-#line 66 "libxlu_cfg_y.y"
+#line 70 "libxlu_cfg_y.y"
     { (yyval.setting)= (yyvsp[(1) - (3)].setting); ;}
     break;
 
-  case 16:
+  case 19:
 
 /* Line 1455 of yacc.c  */
-#line 68 "libxlu_cfg_y.y"
+#line 72 "libxlu_cfg_y.y"
     { (yyval.setting)= xlu__cfg_set_mk(ctx,2,(yyvsp[(1) - (2)].string)); ;}
     break;
 
-  case 17:
+  case 20:
 
 /* Line 1455 of yacc.c  */
-#line 69 "libxlu_cfg_y.y"
+#line 73 "libxlu_cfg_y.y"
     { xlu__cfg_set_add(ctx,(yyvsp[(1) - (5)].setting),(yyvsp[(4) - (5)].string)); (yyval.setting)= (yyvsp[(1) - (5)].setting); ;}
     break;
 
 
 
 /* Line 1455 of yacc.c  */
-#line 1536 "libxlu_cfg_y.c"
+#line 1542 "libxlu_cfg_y.c"
       default: break;
     }
   YY_SYMBOL_PRINT ("-> $$ =", yyr1[yyn], &yyval, &yyloc);
diff --git a/tools/libxl/libxlu_cfg_y.y b/tools/libxl/libxlu_cfg_y.y
index 29aedca..aa9f787 100644
--- a/tools/libxl/libxlu_cfg_y.y
+++ b/tools/libxl/libxlu_cfg_y.y
@@ -44,14 +44,18 @@
 
 %%
 
-file: /* empty */
- |     file assignment
+file:  stmts
+ |     stmts assignment
 
-assignment: IDENT '=' value endstmt
-                            { xlu__cfg_set_store(ctx,$1,$3,@3.first_line); }
+stmts:  /* empty */
+ |      stmts stmt
+
+stmt:   assignment endstmt
  |      endstmt
  |      error NEWLINE
 
+assignment: IDENT '=' value { xlu__cfg_set_store(ctx,$1,$3,@3.first_line); }
+
 endstmt: NEWLINE
  |      ';'
 
-- 
tg: (9153666..) t/xen/xl.cfg.no-final-newline-ok (depends on: t/xen/xl.cfg.mem-fix)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 14:37:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 14:37:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7SL0-0001ut-8L; Fri, 31 Aug 2012 14:36:46 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T7SKz-0001ui-99
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 14:36:45 +0000
Received: from [85.158.143.35:42121] by server-2.bemta-4.messagelabs.com id
	C8/35-21239-CFBC0405; Fri, 31 Aug 2012 14:36:44 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1346423802!12814819!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEyMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26470 invoked from network); 31 Aug 2012 14:36:43 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 14:36:43 -0000
X-IronPort-AV: E=Sophos;i="4.80,347,1344211200"; d="scan'208";a="14291387"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 14:36:42 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Fri, 31 Aug 2012 15:36:42 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1T7SKv-0004Dw-U6; Fri, 31 Aug 2012 14:36:41 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1T7SKv-0007jD-Qc;
	Fri, 31 Aug 2012 15:36:41 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20544.52217.717066.788945@mariner.uk.xensource.com>
Date: Fri, 31 Aug 2012 15:36:41 +0100
To: Jan Beulich <JBeulich@suse.com>
In-Reply-To: <5040B4510200007800097CBD@nat28.tlf.novell.com>
References: <20542.14593.18652.74782@mariner.uk.xensource.com>
	<5040B4510200007800097CBD@nat28.tlf.novell.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [RFC PATCH] xen: comment opaque expression in
	__page_to_virt
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jan Beulich writes ("Re: [RFC PATCH] xen: comment opaque expression in __page_to_virt"):
> No, that's not precise. There's really not much of a win to be had
> on 32-bit (division by 3 and division by 24 (sizeof(struct page_info))
> should be the same in speed).
> 
> The win is on x86-64, where sizeof(struct page_info) is a power
> of 2, and hence the pair of shifts (right, then left) can be reduced
> to a single one.
> 
> Yet (for obvious reasons) the code ought to not break anything
> if even on x86-64 the size of the structure would change, hence
> it needs to be that complex (and can't be broken into separate,
> simpler implementations for 32- and 64-bits).

Thanks.  Do you want to post a revised version of my patch or shall I
do so ?  (If so please confirm that I should put your s-o-b on it for
your wording above.)

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 14:38:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 14:38:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7SML-000228-O5; Fri, 31 Aug 2012 14:38:09 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T7SMK-00021u-7n
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 14:38:08 +0000
Received: from [85.158.139.83:44967] by server-9.bemta-5.messagelabs.com id
	18/AC-20529-F4CC0405; Fri, 31 Aug 2012 14:38:07 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-7.tower-182.messagelabs.com!1346423885!23990730!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEyMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3114 invoked from network); 31 Aug 2012 14:38:05 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-7.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 14:38:05 -0000
X-IronPort-AV: E=Sophos;i="4.80,347,1344211200"; d="scan'208";a="14291422"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 14:38:05 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Fri, 31 Aug 2012 15:38:05 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1T7SMG-0004ER-TI; Fri, 31 Aug 2012 14:38:04 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1T7SMG-0007jL-Pd;
	Fri, 31 Aug 2012 15:38:04 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20544.52300.699375.749320@mariner.uk.xensource.com>
Date: Fri, 31 Aug 2012 15:38:04 +0100
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1346418071.27277.207.camel@zakaz.uk.xensource.com>
References: <20544.39460.499127.781598@mariner.uk.xensource.com>
	<5040BC860200007800097D24@nat28.tlf.novell.com>
	<1346418071.27277.207.camel@zakaz.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: Jan Beulich <JBeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Test report: Migration from 4.1 to 4.2 works
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [Xen-devel] Test report: Migration from 4.1 to 4.2 works"):
> The alternative I suppose would be to:
>       * start xend on new host (running 4.2)
>       * migrate the domains you want over to the new system
>       * stop xend on the new host

This would work.

>       * xl migrate localhost for each domain.

This is only necessary to get the domain config file stashed away for
the _next_ save or migration.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 14:39:47 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 14:39:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7SNj-0002Bl-7T; Fri, 31 Aug 2012 14:39:35 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T7SNh-0002BR-9w
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 14:39:33 +0000
Received: from [85.158.138.51:63872] by server-10.bemta-3.messagelabs.com id
	05/29-10411-4ACC0405; Fri, 31 Aug 2012 14:39:32 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-8.tower-174.messagelabs.com!1346423970!27896126!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEyMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14363 invoked from network); 31 Aug 2012 14:39:30 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 14:39:30 -0000
X-IronPort-AV: E=Sophos;i="4.80,347,1344211200"; d="scan'208";a="14291439"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 14:38:55 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Fri, 31 Aug 2012 15:38:56 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1T7SN5-0004Em-Gi	for xen-devel@lists.xen.org;
	Fri, 31 Aug 2012 14:38:55 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1T7SN5-0007k2-Cs	for
	xen-devel@lists.xen.org; Fri, 31 Aug 2012 15:38:55 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20544.52351.298163.138907@mariner.uk.xensource.com>
Date: Fri, 31 Aug 2012 15:38:55 +0100
To: xen-devel@lists.xen.org
X-Mailer: VM 7.19 under Emacs 21.4.1
Subject: [Xen-devel] [PATCH] libxl: fix api check Makefile
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Touch the libxl.api-ok stamp file, and unconditionally put in place
the new _libxl.api-for-check.  This avoids needlessly rerunning the
preprocessor on libxl.h each time we call "make".

Ensure that _libxl.api-for-check gets the CFLAGS used for xl, so that
if it is asked for in a standalone make run it can find xentoollog.h.

Remove *.api-ok on clean.

Also fix .gitignore.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>

---
 .gitignore           |    2 +-
 tools/libxl/Makefile |    8 +++++---
 2 files changed, 6 insertions(+), 4 deletions(-)

diff --git a/.gitignore b/.gitignore
index 084ec62..2d56e70 100644
--- a/.gitignore
+++ b/.gitignore
@@ -189,7 +189,7 @@ tools/libxl/xl
 tools/libxl/testenum
 tools/libxl/testenum.c
 tools/libxl/tmp.*
-tools/libxl/libxl.api-for-check
+tools/libxl/_libxl.api-for-check
 tools/libaio/src/*.ol
 tools/libaio/src/*.os
 tools/misc/cpuperf/cpuperf-perfcntr
diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
index 22c4881..a9d9ec6 100644
--- a/tools/libxl/Makefile
+++ b/tools/libxl/Makefile
@@ -85,7 +85,8 @@ $(LIBXLU_OBJS): CFLAGS += $(CFLAGS_libxenctrl) # For xentoollog.h
 CLIENTS = xl testidl libxl-save-helper
 
 XL_OBJS = xl.o xl_cmdimpl.o xl_cmdtable.o xl_sxp.o
-$(XL_OBJS): CFLAGS += $(CFLAGS_libxenctrl) # For xentoollog.h
+$(XL_OBJS) _libxl.api-for-check: \
+            CFLAGS += $(CFLAGS_libxenctrl) # For xentoollog.h
 $(XL_OBJS): CFLAGS += $(CFLAGS_libxenlight)
 $(XL_OBJS): CFLAGS += -include $(XEN_ROOT)/tools/config.h # libxl_json.h needs it.
 
@@ -116,12 +117,13 @@ $(eval $(genpath-target))
 
 libxl.api-ok: check-libxl-api-rules _libxl.api-for-check
 	$(PERL) $^
+	touch $@
 
 _%.api-for-check: %.h
 	$(CC) $(CPPFLAGS) $(CFLAGS) $(CFLAGS_$*.o) -c -E $< $(APPEND_CFLAGS) \
 		-DLIBXL_EXTERNAL_CALLERS_ONLY=LIBXL_EXTERNAL_CALLERS_ONLY \
 		>$@.new
-	$(call move-if-changed,$@.new,$@)
+	mv -f $@.new $@
 
 _paths.h: genpath
 	sed -e "s/\([^=]*\)=\(.*\)/#define \1 \2/g" $@.tmp >$@.2.tmp
@@ -211,7 +213,7 @@ install: all
 clean:
 	$(RM) -f _*.h *.o *.so* *.a $(CLIENTS) $(DEPS)
 	$(RM) -f _*.c *.pyc _paths.*.tmp _*.api-for-check
-	$(RM) -f testidl.c.new testidl.c
+	$(RM) -f testidl.c.new testidl.c *.api-ok
 
 distclean: clean
 
-- 
tg: (f86a047..) t/xen/xl.api-check-makefile (depends on: t/xen/xl.cfg.no-final-newline-ok)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 14:42:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 14:42:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7SQK-0002Ve-55; Fri, 31 Aug 2012 14:42:16 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T7SQI-0002VO-Nu
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 14:42:14 +0000
Received: from [85.158.143.35:65245] by server-1.bemta-4.messagelabs.com id
	3B/21-12504-44DC0405; Fri, 31 Aug 2012 14:42:12 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1346424110!12818331!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEyMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6228 invoked from network); 31 Aug 2012 14:41:52 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 14:41:52 -0000
X-IronPort-AV: E=Sophos;i="4.80,347,1344211200"; d="scan'208";a="14291501"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 14:41:50 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Fri, 31 Aug 2012 15:41:51 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1T7SPu-0004GS-Il; Fri, 31 Aug 2012 14:41:50 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1T7SPu-0007kI-Ep;
	Fri, 31 Aug 2012 15:41:50 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20544.52526.335228.777436@mariner.uk.xensource.com>
Date: Fri, 31 Aug 2012 15:41:50 +0100
To: Jan Beulich <JBeulich@suse.com>
In-Reply-To: <5040BC860200007800097D24@nat28.tlf.novell.com>
References: <20544.39460.499127.781598@mariner.uk.xensource.com>
	<5040BC860200007800097D24@nat28.tlf.novell.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Test report: Migration from 4.1 to 4.2 works
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jan Beulich writes ("Re: [Xen-devel] Test report: Migration from 4.1 to 4.2 works"):
> On 31.08.12 at 13:04, Ian Jackson <Ian.Jackson@eu.citrix.com> wrote:
> > Migration 4.1 xend -> 4.2 xl
> >   Needs to be done with xl
> >   Stop xend on source, which leaves domain running and manipulable by xl
> >   xl migrate -C /etc/xen/debian.guest.osstest.cfg domain potato-beetle
> >   Works.
> 
> Is that really an acceptable approach? What if you have multiple
> VMs running, and want to migrate just part of them? All the other
> would remain unmanageable at least for the duration of the
> migration(s).

This is true but in the usual case you'll be wanting to migrate them
all as part of an infrastructure upgrade to 4.2.

> (And I also wonder if 4.1's xl is complete/stable
> enough to recommend such an approach as a general mechanism.)

That is perhaps a worry but I'm pleased to be able to report that it
does work :-).

Thinking about it, I didn't try an HVM domain.  I will see if I can
manage that but probably not today.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 14:45:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 14:45:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7STe-0002h5-P7; Fri, 31 Aug 2012 14:45:42 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andreslc@gridcentric.ca>) id 1T7STd-0002gs-5K
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 14:45:41 +0000
Received: from [85.158.138.51:53544] by server-4.bemta-3.messagelabs.com id
	77/A9-24831-21EC0405; Fri, 31 Aug 2012 14:45:38 +0000
X-Env-Sender: andreslc@gridcentric.ca
X-Msg-Ref: server-14.tower-174.messagelabs.com!1346424335!21590317!1
X-Originating-IP: [209.85.223.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7059 invoked from network); 31 Aug 2012 14:45:36 -0000
Received: from mail-ie0-f173.google.com (HELO mail-ie0-f173.google.com)
	(209.85.223.173)
	by server-14.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 14:45:36 -0000
Received: by iebc10 with SMTP id c10so2150087ieb.32
	for <xen-devel@lists.xen.org>; Fri, 31 Aug 2012 07:45:34 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=subject:mime-version:content-type:from:in-reply-to:date:cc
	:content-transfer-encoding:message-id:references:to:x-mailer
	:x-gm-message-state;
	bh=RQkCfAVnw45poWayoW7hRjS3/4M+4s5VGGfe3CQAOco=;
	b=bdvvgwqvAoroUfZaQUKY9bUliWCzM7cB9j/itw10eNrXmJoMlwIWu7oOSoMISffTRX
	Qa1+A7F/aEXGXwLXk7v+unrgOnF4e5eiL424jhNqIVtdPOBjCZ05qng8T3tfceoUCVgy
	gLfRTvanCPMZzl86ATY6USzhFfAxUpJYjctDA1xwFfhvFUUhfA+rRh5KB8vkYuUKQhmX
	tNNMgniRs0OcWf+dCNcIl3X7v5G2tHGaToIDtk+FBD8sXez+2V2jooRktOnMR4KDLk84
	LF8gpvlYaioPlUPXEIQ3Yhuwj763O/R8UrbxKjZjXcg2lGBCgUTYAXTy+7oJs/afrM/C
	Fpdw==
Received: by 10.50.152.243 with SMTP id vb19mr3210663igb.4.1346424334542;
	Fri, 31 Aug 2012 07:45:34 -0700 (PDT)
Received: from [192.168.7.210] ([206.223.182.18])
	by mx.google.com with ESMTPS id wn5sm653304igc.7.2012.08.31.07.45.33
	(version=TLSv1/SSLv3 cipher=OTHER);
	Fri, 31 Aug 2012 07:45:33 -0700 (PDT)
Mime-Version: 1.0 (Apple Message framework v1278)
From: Andres Lagar-Cavilla <andreslc@gridcentric.ca>
In-Reply-To: <5040CAEA.7000600@citrix.com>
Date: Fri, 31 Aug 2012 10:45:43 -0400
Message-Id: <160CC375-2682-4CBF-B1EC-06A9F3E49A40@gridcentric.ca>
References: <1346086287-17674-1-git-send-email-andres@lagarcavilla.org>
	<5040CAEA.7000600@citrix.com>
To: David Vrabel <david.vrabel@citrix.com>
X-Mailer: Apple Mail (2.1278)
X-Gm-Message-State: ALoCoQnzdKwC0zCyfa2Tt7ix0H5C/7mC8ASutUKGHXSr++Y/rzjnbQS8TXRZetb6Za0MYalZ+ukJ
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, andres@lagarcavilla.org,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] Xen backend support for paged out grant
	targets.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On Aug 31, 2012, at 10:32 AM, David Vrabel wrote:

> On 27/08/12 17:51, andres@lagarcavilla.org wrote:
>> From: Andres Lagar-Cavilla <andres@lagarcavilla.org>
>> 
>> Since Xen-4.2, hvm domains may have portions of their memory paged out. When a
>> foreign domain (such as dom0) attempts to map these frames, the map will
>> initially fail. The hypervisor returns a suitable errno, and kicks an
>> asynchronous page-in operation carried out by a helper. The foreign domain is
>> expected to retry the mapping operation until it eventually succeeds. The
>> foreign domain is not put to sleep because it could itself be the one running the
>> pager assist (the typical scenario for dom0).
>> 
>> This patch adds support for this mechanism for backend drivers using grant
>> mapping and copying operations. Specifically, this covers the blkback and
>> gntdev drivers (which map foreign grants), and the netback driver (which copies
>> foreign grants).
>> 
>> * Add GNTST_eagain, already exposed by Xen, to the grant interface.
>> * Add a retry method for grants that fail with GNTST_eagain (i.e. because the
>>  target foreign frame is paged out).
>> * Insert hooks with appropriate macro decorators in the aforementioned drivers.
> 
> I think you should implement wrappers around HYPERVISOR_grant_table_op()
> and have the wrapper do the retries instead of every backend having to
> check for EAGAIN and issue the retries itself. Similar to the
> gnttab_map_grant_no_eagain() function you've already added.
> 
> Why do some operations not retry anyway?

All operations retry. The reason I could not make it as elegant as you suggest is that grant operations are submitted in batches and their statuses are later checked individually elsewhere. This is the case for netback. Note that both blkback and gntdev use a more linear structure with the gnttab_map_refs helper, which allows me to hide all the retry gore from those drivers in grant table code. Likewise for xenbus ring mapping.

In summary, outside of core grant table code, only the netback driver needs to check explicitly for retries, due to its batch-copy-delayed-per-slot-check structure.

> 
>> +void
>> +gnttab_retry_eagain_gop(unsigned int cmd, void *gop, int16_t *status,
>> +						const char *func)
>> +{
>> +	u8 delay = 1;
>> +
>> +	do {
>> +		BUG_ON(HYPERVISOR_grant_table_op(cmd, gop, 1));
>> +		if (*status == GNTST_eagain)
>> +			msleep(delay++);
>> +	} while ((*status == GNTST_eagain) && delay);
> 
> Terminating the loop when delay wraps is a bit subtle.  Why not make
> delay unsigned and check delay <= MAX_DELAY?
Good idea (MAX_DELAY == 256). I'd like to get Konrad's feedback before a re-spin.

> 
> Would it be sensible to ramp the delay faster?  Perhaps double each
> iteration with a maximum possible delay of e.g., 256 ms.
Generally speaking, we've never seen more than three retries. I am open to changing the algorithm, but there is a significant chance it won't matter at all.
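For reference, the doubling ramp suggested above could look like the following sketch. This is illustrative only: next_delay and total_backoff_ms are hypothetical helpers, not part of the patch under review, and the 256 ms cap is the example value from the review comment.

```c
/* Illustrative sketch of an exponential retry delay, doubling each
 * iteration and capped at 256 ms. Not the patch's actual code. */
static unsigned int next_delay(unsigned int delay_ms)
{
    delay_ms *= 2;
    return delay_ms > 256 ? 256 : delay_ms;
}

/* Total time slept across n retries, starting from a 1 ms delay. */
static unsigned int total_backoff_ms(unsigned int retries)
{
    unsigned int delay_ms = 1, total = 0;

    while (retries--) {
        total += delay_ms;              /* stand-in for msleep(delay_ms) */
        delay_ms = next_delay(delay_ms);
    }
    return total;
}
```

With the observed worst case of three retries, the ramp only ever sleeps 1 + 2 + 4 = 7 ms in total, which supports the point that the choice of algorithm barely matters here.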

> 
>> +#define gnttab_map_grant_no_eagain(_gop)                                    \
>> +do {                                                                        \
>> +    if ( HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, (_gop), 1))      \
>> +        BUG();                                                              \
>> +    if ((_gop)->status == GNTST_eagain)                                     \
>> +        gnttab_retry_eagain_map((_gop));                                    \
>> +} while(0)
> 
> Inline functions, please.

I want to retain the calling context for debugging: if things go wrong, we eventually print __func__.
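A minimal illustration of that point (hypothetical names, not from the patch): __func__ inside a macro body expands at the call site, so diagnostics name the real caller, whereas inside a function it always names that function.

```c
#include <string.h>

/* __func__ expands where the token textually appears: at the call site
 * for a macro, inside the callee for a function. Names are hypothetical. */
#define WHERE_MACRO() (__func__)

static const char *where_func(void)
{
    return __func__;            /* always "where_func" */
}

static const char *caller_macro(void) { return WHERE_MACRO(); }
static const char *caller_func(void)  { return where_func(); }
```

Here caller_macro() reports "caller_macro", while caller_func() can only ever report "where_func".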

Thanks, great feedback
Andres

> 
> David


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 14:53:26 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 14:53:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7Sat-0002rV-MK; Fri, 31 Aug 2012 14:53:11 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T7Sar-0002rQ-E5
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 14:53:09 +0000
Received: from [85.158.139.83:44321] by server-8.bemta-5.messagelabs.com id
	AE/1F-17085-4DFC0405; Fri, 31 Aug 2012 14:53:08 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-6.tower-182.messagelabs.com!1346424786!24068817!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM2OTU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1500 invoked from network); 31 Aug 2012 14:53:06 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-6.tower-182.messagelabs.com with SMTP;
	31 Aug 2012 14:53:06 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 31 Aug 2012 15:53:06 +0100
Message-Id: <5040EBED0200007800097E2B@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 31 Aug 2012 15:53:01 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Jackson" <Ian.Jackson@eu.citrix.com>
References: <20542.14593.18652.74782@mariner.uk.xensource.com>
	<5040B4510200007800097CBD@nat28.tlf.novell.com>
	<20544.52217.717066.788945@mariner.uk.xensource.com>
In-Reply-To: <20544.52217.717066.788945@mariner.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: [Xen-devel] [PATCH] Re: [RFC PATCH] xen: comment opaque expression
 in __page_to_virt
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 31.08.12 at 16:36, Ian Jackson <Ian.Jackson@eu.citrix.com> wrote:
> Jan Beulich writes ("Re: [RFC PATCH] xen: comment opaque expression in 
> __page_to_virt"):
>> No, that's not precise. There's really not much of a win to be had
>> on 32-bit (division by 3 and division by 24 (sizeof(struct page_info))
>> should be the same in speed).
>> 
>> The win is on x86-64, where sizeof(struct page_info) is a power
>> of 2, and hence the pair of shifts (right, then left) can be reduced
>> to a single one.
>> 
>> Yet (for obvious reasons) the code ought not to break anything
>> even if, on x86-64, the size of the structure were to change, hence
>> it needs to be that complex (and can't be broken into separate,
>> simpler implementations for 32- and 64-bits).
> 
> Thanks.  Do you want to post a revised version of my patch, or shall I
> do so?  (If so, please confirm that I should put your s-o-b on it for
> your wording above.)

x86: comment opaque expression in __page_to_virt()

mm.h's __page_to_virt() has a rather opaque expression. Comment it.

Reported-By: Ian Campbell <ian.campbell@citrix.com>
Suggested-by: Ian Jackson <ian.jackson@eu.citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- 2012-08-08.orig/xen/include/asm-x86/mm.h	2012-06-20 17:34:02.000000000 +0200
+++ 2012-08-08/xen/include/asm-x86/mm.h	2012-08-31 16:50:50.000000000 +0200
@@ -323,6 +323,12 @@ static inline struct page_info *__virt_t
 static inline void *__page_to_virt(const struct page_info *pg)
 {
     ASSERT((unsigned long)pg - FRAMETABLE_VIRT_START < FRAMETABLE_VIRT_END);
+    /*
+     * (sizeof(*pg) & -sizeof(*pg)) selects the LS bit of sizeof(*pg). The
+     * division and re-multiplication avoids one shift when sizeof(*pg) is a
+     * power of two (otherwise there would be a right shift followed by a
+     * left shift, which the compiler can't know it can fold into one).
+     */
     return (void *)(DIRECTMAP_VIRT_START +
                     ((unsigned long)pg - FRAMETABLE_VIRT_START) /
                     (sizeof(*pg) / (sizeof(*pg) & -sizeof(*pg))) *
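To unpack the new comment: n & -n isolates the least-significant set bit of n, so when sizeof(*pg) is a power of two the divisor collapses to 1 and the division disappears entirely. A standalone sketch, with hypothetical helper names:

```c
#include <stdint.h>

/* n & -n isolates the least-significant set bit (two's complement). */
static uint64_t lsbit(uint64_t n)
{
    return n & (0 - n);
}

/* The divisor __page_to_virt() uses: sizeof(*pg) with its power-of-two
 * factor split out. For a power-of-two size (the x86-64 case) this is 1,
 * so the division vanishes; for size 24 (the 32-bit case) it is 3. */
static uint64_t odd_factor(uint64_t sz)
{
    return sz / lsbit(sz);
}
```

So dividing by odd_factor(sz) and later multiplying by the remaining power-of-two factor lets the compiler fold the two shifts into one, exactly as the quoted discussion describes.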




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 14:55:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 14:55:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7SdM-0002yc-8d; Fri, 31 Aug 2012 14:55:44 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1T7SdL-0002yX-3K
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 14:55:43 +0000
Received: from [85.158.139.83:37514] by server-12.bemta-5.messagelabs.com id
	3E/F4-18300-E60D0405; Fri, 31 Aug 2012 14:55:42 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-13.tower-182.messagelabs.com!1346424938!27402938!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjgzMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30701 invoked from network); 31 Aug 2012 14:55:40 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 14:55:40 -0000
X-IronPort-AV: E=Sophos;i="4.80,347,1344211200"; d="scan'208";a="36439990"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 14:55:26 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.279.1;
	Fri, 31 Aug 2012 10:55:26 -0400
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1T7Sd3-0005ow-Ux;
	Fri, 31 Aug 2012 15:55:25 +0100
Message-ID: <5040D05D.8090808@citrix.com>
Date: Fri, 31 Aug 2012 15:55:25 +0100
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120714 Thunderbird/14.0
MIME-Version: 1.0
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
X-Enigmail-Version: 1.4.4
Content-Type: multipart/mixed; boundary="------------010608020500030806070102"
Cc: Ian Campbell <Ian.Campbell@eu.citrix.com>
Subject: [Xen-devel] docs/command line: Clarify the behavior with invalid
	input.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--------------010608020500030806070102
Content-Type: text/plain; charset="ISO-8859-1"
Content-Transfer-Encoding: 7bit

A colleague asked me these questions after reading the HTML page, so I
guess others may wish to know.

-- 
Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
T: +44 (0)1223 225 900, http://www.citrix.com


--------------010608020500030806070102
Content-Type: text/x-patch; name="docs-cmdline.patch"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="docs-cmdline.patch"

# HG changeset patch
# Parent 1126b3079bef37e1bb5a97b90c14a51d4e1c91c3
docs/command line: Clarify the behavior with invalid input.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

diff -r 1126b3079bef docs/misc/xen-command-line.markdown
--- a/docs/misc/xen-command-line.markdown
+++ b/docs/misc/xen-command-line.markdown
@@ -11,7 +11,8 @@ Hypervisor.
 ## Types of parameter
 
 Most parameters take the form `option=value`.  Different options on
-the command line should be space delimited.
+the command line should be space delimited.  All options are case
+sensitive, as are all values unless explicitly noted.
 
 ### Boolean (`<boolean>`)
 
@@ -68,6 +69,39 @@ Some options take a comma separated list
 Some parameters act as combinations of the above, most commonly a mix
 of Boolean and String.  These are noted in the relevant sections.
 
+## Unexpected input
+
+This information describes the behaviour with unexpected or malformed
+input, for reference purposes.  It is not expected to be used in any
+situation, nor relied upon.
+
+### Boolean
+
+* `<values>` other than those listed will invert the option, so take
+  its non-default value.
+
+* A `no-` prefix can be stacked with an explicit `=<value>`.
+ * A `no-<option>=0` will cancel, so the option takes its default
+   value.
+ * A `no-<option>=1` will invert the option, so take its non-default
+   value.
+
+### Integer
+
+Input which cannot be converted to a valid number will result in the
+parameter being set to 0.  Strings which start as a valid integer but
+contain invalid characters as a suffix will have the suffix ignored.
+
+### Size
+
+Unrecognised suffixes will be ignored, and the default suffix will be
+used.
+
+### Strings, Lists and Combinations
+
+Depending on implementation, strings may be truncated or ignored
+altogether, as if the option were not specified in the first place.
+
 ## Parameter details
 
 ### acpi

--------------010608020500030806070102
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--------------010608020500030806070102--
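The three boolean bullets in the patch above can be modelled as a small truth table. This is a hypothetical model for illustration only, not Xen's actual command-line parser; effective_bool, no_prefix, and defval are invented names.

```c
#include <string.h>
#include <stddef.h>

/* Hypothetical model of the "unexpected input" boolean rules described
 * in the patch: a no- prefix inverts, =0 inverts again (cancelling the
 * prefix), and any unrecognised value inverts once. Returns the value
 * the option ends up with, given its default defval. */
static int effective_bool(int defval, int no_prefix, const char *val)
{
    int inv = 0;

    if (no_prefix)
        inv ^= 1;               /* no- prefix inverts */
    if (val != NULL && strcmp(val, "0") == 0)
        inv ^= 1;               /* =0 inverts again, cancelling no- */
    else if (val != NULL && strcmp(val, "1") != 0)
        inv ^= 1;               /* an unrecognised value inverts */

    return inv ? !defval : defval;
}
```

Under this model, `option=bogus` yields the non-default value, `no-option=0` cancels back to the default, and `no-option=1` yields the non-default value, matching the three bullets.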


From xen-devel-bounces@lists.xen.org Fri Aug 31 14:55:56 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 14:55:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7SdM-0002yc-8d; Fri, 31 Aug 2012 14:55:44 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1T7SdL-0002yX-3K
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 14:55:43 +0000
Received: from [85.158.139.83:37514] by server-12.bemta-5.messagelabs.com id
	3E/F4-18300-E60D0405; Fri, 31 Aug 2012 14:55:42 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-13.tower-182.messagelabs.com!1346424938!27402938!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjgzMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30701 invoked from network); 31 Aug 2012 14:55:40 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 14:55:40 -0000
X-IronPort-AV: E=Sophos;i="4.80,347,1344211200"; d="scan'208";a="36439990"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 14:55:26 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.279.1;
	Fri, 31 Aug 2012 10:55:26 -0400
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1T7Sd3-0005ow-Ux;
	Fri, 31 Aug 2012 15:55:25 +0100
Message-ID: <5040D05D.8090808@citrix.com>
Date: Fri, 31 Aug 2012 15:55:25 +0100
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120714 Thunderbird/14.0
MIME-Version: 1.0
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
X-Enigmail-Version: 1.4.4
Content-Type: multipart/mixed; boundary="------------010608020500030806070102"
Cc: Ian Campbell <Ian.Campbell@eu.citrix.com>
Subject: [Xen-devel] docs/command line: Clarify the behavior with invalid
	input.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--------------010608020500030806070102
Content-Type: text/plain; charset="ISO-8859-1"
Content-Transfer-Encoding: 7bit

A college asked me these questions after reading the html page, so I
guess others may wish to know.

-- 
Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
T: +44 (0)1223 225 900, http://www.citrix.com


--------------010608020500030806070102
Content-Type: text/x-patch; name="docs-cmdline.patch"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="docs-cmdline.patch"

# HG changeset patch
# Parent 1126b3079bef37e1bb5a97b90c14a51d4e1c91c3
docs/command line: Clarify the behavior with invalid input.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

diff -r 1126b3079bef docs/misc/xen-command-line.markdown
--- a/docs/misc/xen-command-line.markdown
+++ b/docs/misc/xen-command-line.markdown
@@ -11,7 +11,8 @@ Hypervisor.
 ## Types of parameter
 
 Most parameters take the form `option=value`.  Different options on
-the command line should be space delimited.
+the command line should be space delimited.  All options are case
+sensitive, as are all values unless explicitly noted.
 
 ### Boolean (`<boolean>`)
 
@@ -68,6 +69,39 @@ Some options take a comma separated list
 Some parameters act as combinations of the above, most commonly a mix
 of Boolean and String.  These are noted in the relevant sections.
 
+## Unexpected input
+
+This information describes the behaviour with unexpected or malformed
+input, for reference purposes.  It is not expected to be used in any
+situation, nor relied upon.
+
+### Boolean
+
+* `<values>` other than those listed will invert the option, so take
+  its non-default value.
+
+* A `no-` prefix can be stacked with an explicit `=<value>`.
+ * A `no-<option>=0` will cancel, so the option takes its default
+   value.
+ * A `no-<option>=1` will invert the option, so take its non-default
+   value.
+
+### Integer
+
+Input which cannot be converted to a valid number will result in the
+parameter being set to 0.  Strings which start as a valid integer but
+contain invalid characters as a suffix will have the suffix ignored.
+
+### Size
+
+Unrecognised suffixes will be ignored, and the default suffix will be
+used.
+
+### Strings, Lists and Combinations
+
+Depending on the implementation, strings may be truncated or ignored
+altogether, as if the option had not been specified in the first place.
+
 ## Parameter details
 
 ### acpi

--------------010608020500030806070102
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--------------010608020500030806070102--
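[Editor's note: the unexpected-input rules documented in the patch above can be modelled as a short sketch. This is illustrative Python, not Xen's actual C parser; the function names and accepted value spellings are invented for this example, and the exact meaning of "default value" for stacked `no-` prefixes is as documented above.]

```python
# Illustrative model of the documented unexpected-input behaviour.
# NOT Xen's parser; it only mirrors the rules described in the patch.

def parse_boolean(option, default=False):
    """A 'no-' prefix flips the parsed value, so it can stack with
    an explicit =<value>; unrecognised values invert the option."""
    name, sep, value = option.partition("=")
    negate = name.startswith("no-")
    if not sep or value in ("1", "yes", "true", "on"):
        v = True
    elif value in ("0", "no", "false", "off"):
        v = False
    else:
        v = not default  # unrecognised value: take the non-default value
    return (not v) if negate else v

def parse_integer(arg):
    """Invalid input yields 0; a valid leading integer with a trailing
    invalid suffix has the suffix ignored."""
    digits = ""
    for ch in arg:
        if ch.isdigit() or (ch == "-" and not digits):
            digits += ch
        else:
            break
    try:
        return int(digits)
    except ValueError:
        return 0

def parse_size(arg, default_unit=1):
    """Recognised size suffixes scale the value; unrecognised suffixes
    are ignored and the default suffix is used instead."""
    units = {"k": 1 << 10, "m": 1 << 20, "g": 1 << 30}
    num = ""
    for ch in arg:
        if ch.isdigit():
            num += ch
        else:
            break
    if not num:
        return 0
    suffix = arg[len(num):].lower()
    return int(num) * units.get(suffix, default_unit)
```

For instance, `parse_boolean("no-opt=0")` flips back to the enabled value, and `parse_integer("123abc")` ignores the `abc` suffix.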


From xen-devel-bounces@lists.xen.org Fri Aug 31 14:59:53 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 14:59:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7ShA-0003A5-UW; Fri, 31 Aug 2012 14:59:40 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T7Sh9-0003A0-QV
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 14:59:40 +0000
Received: from [85.158.143.35:38618] by server-1.bemta-4.messagelabs.com id
	8B/1A-12504-B51D0405; Fri, 31 Aug 2012 14:59:39 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1346425178!16088262!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEyMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18304 invoked from network); 31 Aug 2012 14:59:38 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 14:59:38 -0000
X-IronPort-AV: E=Sophos;i="4.80,347,1344211200"; d="scan'208";a="14292163"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 14:58:06 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1;
	Fri, 31 Aug 2012 15:58:06 +0100
Message-ID: <1346425084.27277.220.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Fri, 31 Aug 2012 15:58:04 +0100
In-Reply-To: <20544.52351.298163.138907@mariner.uk.xensource.com>
References: <20544.52351.298163.138907@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] libxl: fix api check Makefile
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-08-31 at 15:38 +0100, Ian Jackson wrote:
> Touch the libxl.api-ok stamp file, and unconditionally put in place
> the new _libxl.api-for-check.  This avoids needlessly rerunning the
> preprocessor on libxl.h each time we call "make".
> 
> Ensure that _libxl.api-for-check gets the CFLAGS used for xl, so that
> if it is asked for in a standalone make run it can find xentoollog.h.
> 
> Remove *.api-ok on clean.
> 
> Also fix .gitignore.
> 
> Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
> 
> ---
>  .gitignore           |    2 +-
>  tools/libxl/Makefile |    8 +++++---
>  2 files changed, 6 insertions(+), 4 deletions(-)
> 
> diff --git a/.gitignore b/.gitignore
> index 084ec62..2d56e70 100644
> --- a/.gitignore
> +++ b/.gitignore
> @@ -189,7 +189,7 @@ tools/libxl/xl
>  tools/libxl/testenum
>  tools/libxl/testenum.c
>  tools/libxl/tmp.*
> -tools/libxl/libxl.api-for-check
> +tools/libxl/_libxl.api-for-check
>  tools/libaio/src/*.ol
>  tools/libaio/src/*.os
>  tools/misc/cpuperf/cpuperf-perfcntr
> diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
> index 22c4881..a9d9ec6 100644
> --- a/tools/libxl/Makefile
> +++ b/tools/libxl/Makefile
> @@ -85,7 +85,8 @@ $(LIBXLU_OBJS): CFLAGS += $(CFLAGS_libxenctrl) # For xentoollog.h
>  CLIENTS = xl testidl libxl-save-helper
>  
>  XL_OBJS = xl.o xl_cmdimpl.o xl_cmdtable.o xl_sxp.o
> -$(XL_OBJS): CFLAGS += $(CFLAGS_libxenctrl) # For xentoollog.h
> +$(XL_OBJS) _libxl.api-for-check: \
> +            CFLAGS += $(CFLAGS_libxenctrl) # For xentoollog.h
>  $(XL_OBJS): CFLAGS += $(CFLAGS_libxenlight)
>  $(XL_OBJS): CFLAGS += -include $(XEN_ROOT)/tools/config.h # libxl_json.h needs it.
>  
> @@ -116,12 +117,13 @@ $(eval $(genpath-target))
>  
>  libxl.api-ok: check-libxl-api-rules _libxl.api-for-check
>  	$(PERL) $^
> +	touch $@

libxl.api-ok needs to either go in .*ignore or start with an _.

Otherwise this looks good:
Acked-by: Ian Campbell <ian.campbell@citrix.com>

>  
>  _%.api-for-check: %.h
>  	$(CC) $(CPPFLAGS) $(CFLAGS) $(CFLAGS_$*.o) -c -E $< $(APPEND_CFLAGS) \
>  		-DLIBXL_EXTERNAL_CALLERS_ONLY=LIBXL_EXTERNAL_CALLERS_ONLY \
>  		>$@.new
> -	$(call move-if-changed,$@.new,$@)
> +	mv -f $@.new $@
>  
>  _paths.h: genpath
>  	sed -e "s/\([^=]*\)=\(.*\)/#define \1 \2/g" $@.tmp >$@.2.tmp
> @@ -211,7 +213,7 @@ install: all
>  clean:
>  	$(RM) -f _*.h *.o *.so* *.a $(CLIENTS) $(DEPS)
>  	$(RM) -f _*.c *.pyc _paths.*.tmp _*.api-for-check
> -	$(RM) -f testidl.c.new testidl.c
> +	$(RM) -f testidl.c.new testidl.c *.api-ok
>  
>  distclean: clean
>  



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
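[Editor's note: the `libxl.api-ok` rule discussed above is an instance of the classic make stamp-file pattern: run a check, then `touch` a stamp so the check is skipped until a dependency changes. A minimal Python model of that freshness logic follows; the file names and helper names are hypothetical, not part of the Xen build.]

```python
import os

def stamp_is_fresh(stamp, deps):
    """True if the stamp exists and is at least as new as every
    dependency, i.e. the recorded check does not need rerunning."""
    if not os.path.exists(stamp):
        return False
    stamp_mtime = os.path.getmtime(stamp)
    return all(os.path.getmtime(d) <= stamp_mtime for d in deps)

def run_check_if_needed(stamp, deps, check):
    """Model of 'libxl.api-ok: deps ; run check ; touch stamp'."""
    if stamp_is_fresh(stamp, deps):
        return False            # up to date: make would skip the rule
    check()                     # would raise on failure, like $(PERL) $^
    with open(stamp, "w"):      # 'touch $@' records the successful check
        pass
    return True
```

This also shows why the `touch $@` line matters: without it the stamp never becomes newer than its dependencies, so the check reruns on every invocation.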

From xen-devel-bounces@lists.xen.org Fri Aug 31 14:59:57 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 14:59:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7ShE-0003AR-BO; Fri, 31 Aug 2012 14:59:44 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T7ShC-0003AF-Bh
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 14:59:42 +0000
Received: from [85.158.143.35:38742] by server-2.bemta-4.messagelabs.com id
	AE/27-21239-D51D0405; Fri, 31 Aug 2012 14:59:41 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1346425178!16088262!2
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEyMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18416 invoked from network); 31 Aug 2012 14:59:41 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 14:59:41 -0000
X-IronPort-AV: E=Sophos;i="4.80,347,1344211200"; d="scan'208";a="14292168"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 14:58:20 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1;
	Fri, 31 Aug 2012 15:58:19 +0100
Message-ID: <1346425097.27277.221.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Fri, 31 Aug 2012 15:58:17 +0100
In-Reply-To: <20544.52300.699375.749320@mariner.uk.xensource.com>
References: <20544.39460.499127.781598@mariner.uk.xensource.com>
	<5040BC860200007800097D24@nat28.tlf.novell.com>
	<1346418071.27277.207.camel@zakaz.uk.xensource.com>
	<20544.52300.699375.749320@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Jan Beulich <JBeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Test report: Migration from 4.1 to 4.2 works
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-08-31 at 15:38 +0100, Ian Jackson wrote:
> Ian Campbell writes ("Re: [Xen-devel] Test report: Migration from 4.1 to 4.2 works"):
> > The alternative I suppose would be to:
> >       * start xend on new host (running 4.2)
> >       * migrate the domains you want over to the new system
> >       * stop xend on the new host
> 
> This would work.
> 
> >       * xl migrate localhost for each domain.
> 
> This is only necessary to get the domain config file stashed away for
> the _next_ save or migration.

and general consistency + reduction of later surprises?

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
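[Editor's note: the upgrade flow sketched in this thread (xend-driven migration to the new 4.2 host, then a localhost `xl migrate` per domain so xl stashes a config for the next save or migration) can be written down as a small command plan. This is an illustrative sketch only; the host and domain names, and the use of `service` to control xend, are assumptions.]

```python
def upgrade_migration_plan(domains, new_host):
    """Build the command sequence discussed above.  Each entry is an
    argv list; comments note which host would run it."""
    plan = [["service", "xend", "start"]]                    # on the new host
    plan += [["xm", "migrate", d, new_host]                  # on the old host
             for d in domains]
    plan.append(["service", "xend", "stop"])                 # on the new host
    plan += [["xl", "migrate", d, "localhost"]               # on the new host
             for d in domains]
    return plan
```

The final localhost migrations are the step Ian Jackson notes is only needed so that xl records a domain config for each guest.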

From xen-devel-bounces@lists.xen.org Fri Aug 31 15:00:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 15:00:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7Shm-0003HK-PE; Fri, 31 Aug 2012 15:00:18 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dan.magenheimer@oracle.com>) id 1T7Shl-0003H4-84
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 15:00:17 +0000
Received: from [85.158.138.51:49172] by server-10.bemta-3.messagelabs.com id
	42/21-10411-081D0405; Fri, 31 Aug 2012 15:00:16 +0000
X-Env-Sender: dan.magenheimer@oracle.com
X-Msg-Ref: server-15.tower-174.messagelabs.com!1346425211!26181786!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNzI0MTQ0\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 743 invoked from network); 31 Aug 2012 15:00:13 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-15.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 31 Aug 2012 15:00:13 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7VF09c0030024
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 31 Aug 2012 15:00:10 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7VF09n9004902
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 31 Aug 2012 15:00:09 GMT
Received: from abhmt102.oracle.com (abhmt102.oracle.com [141.146.116.54])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7VF08vM004810; Fri, 31 Aug 2012 10:00:08 -0500
MIME-Version: 1.0
Message-ID: <4fed3a7c-df88-496c-800b-bcc8f723606e@default>
Date: Fri, 31 Aug 2012 07:59:30 -0700 (PDT)
From: Dan Magenheimer <dan.magenheimer@oracle.com>
To: Jan Beulich <JBeulich@suse.com>, bing <Libing.Chen@uts.edu.au>
References: <76048170-772b-4365-87d2-00a3bf4aa5f8@default>
	<503F053D.5010001@uts.edu.au>
	<aa529a53-a56b-4215-81c2-f1fda4acedbd@default>
	<5040B6940200007800097CDE@nat28.tlf.novell.com>
In-Reply-To: <5040B6940200007800097CDE@nat28.tlf.novell.com>
X-Priority: 3
X-Mailer: Oracle Beehive Extensions for Outlook 2.0.1.7  (607090) [OL
	12.0.6661.5003 (x86)]
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen 4.2.0 won't boot with FC17 dom0 on Dell
 Optiplex 790, boots fine with 4.1.3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> From: Jan Beulich [mailto:JBeulich@suse.com]
> Subject: Re: [Xen-devel] Xen 4.2.0 won't boot with FC17 dom0 on Dell Optiplex 790, boots fine with
> 4.1.3
> 
> >>> On 30.08.12 at 19:04, Dan Magenheimer <dan.magenheimer@oracle.com> wrote:
> >>  From: bing [mailto:Libing.Chen@uts.edu.au]
> >> Sent: Thursday, August 30, 2012 12:16 AM
> >> To: xen-devel@lists.xen.org
> >> Subject: Re: [Xen-devel] Xen 4.2.0 won't boot with FC17 dom0 on Dell Optiplex
> > 790, boots fine with
> >> 4.1.3
> >>
> >> I have a similar crash on Xen 4.2.0-rc3 with HP DC7900, reboot right
> >> after loading Dom0 kernel. I found a workaround by pass xsave=0 option
> >> to the xen kernel. It seems to be CPU related, same setup (same xen,
> >> Dom0 kernel version) works fine on another Dell Precision 3200.
> >
> > Thanks, just saw this as I wasn't cc'ed on your email.  xsave=0
> > did solve my problem also.
> 
> But that was known to be required only with certain, non-upstream
> broken kernels (which got their own xsave handling wrong). Can
> either of you confirm this is a problem with a plain upstream kernel
> now too (and if so, which version(s))?

Konrad told me this was a known problem on Fedora so I didn't
try an upstream kernel.

Dan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 15:01:42 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 15:01:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7Siw-0003Rn-Di; Fri, 31 Aug 2012 15:01:30 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>)
	id 1T7Siu-0003Qq-MD; Fri, 31 Aug 2012 15:01:28 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1346425280!8986924!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEyMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11982 invoked from network); 31 Aug 2012 15:01:21 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 15:01:21 -0000
X-IronPort-AV: E=Sophos;i="4.80,347,1344211200"; d="scan'208";a="14292262"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 15:01:19 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1;
	Fri, 31 Aug 2012 16:01:19 +0100
Message-ID: <1346425278.27277.224.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Dieter Bloms <xensource.com@bloms.de>
Date: Fri, 31 Aug 2012 16:01:18 +0100
In-Reply-To: <1346421542.27277.218.camel@zakaz.uk.xensource.com>
References: <1344935141.5926.6.camel@zakaz.uk.xensource.com>
	<20120814100704.GA19704@bloms.de>
	<1346421542.27277.218.camel@zakaz.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Xen-users] Xen 4.2 TODO (io and irq parameter are
 not evaluated by xl)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

User list to BCC now that we are talking patches etc.

On Fri, 2012-08-31 at 14:59 +0100, Ian Campbell wrote:
> On Tue, 2012-08-14 at 11:07 +0100, Dieter Bloms wrote:
> > Hi,
> > 
> > On Tue, Aug 14, Ian Campbell wrote:
> > 
> > > tools, blockers:
> > > 
> > >     * xl compatibility with xm:
> > > 
> > >         * No known issues
> > 
> > the parameter io and irq in domU config files are not evaluated by xl.
> > So it is not possible to passthrough a parallel port for my printer to
> > domU when I start the domU with xl command.
> > With xm I have no issue.
> 
> Do you have a reference to the actual syntax you are using?

Never mind, I went with the syntax described on the fifthhorseman wiki;
the other one isn't useful in python (since only the last assignment
would take effect), so it won't be what xm is parsing.

Does this work for you? I only tested by observing the sets of
ports/irqs which are allowed for the domain.

I don't think this can break anything other than the new options
themselves so I think this patch could be a candidate for 4.2.0.
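
For reference, a domU config fragment using the syntax under discussion
would look like the following (a hypothetical example; the port ranges
and IRQ numbers are illustrative, matching a parallel port and com2):

```
# Illustrative xl/xm config fragment (values are examples only):
# grant the guest the parallel port's I/O range and IRQ, plus com2.
ioports = [ "378-37a", "2f8-2ff" ]
irqs    = [ 7, 3 ]
```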

8<---------------------------------------

# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1346424882 -3600
# Node ID d6b418e9e52579a004797781c343734afc244389
# Parent  ccbee5bcb31b72706497725381f4e6836b9df657
libxl/xl: implement support for guest ioport and irq permissions.

This is useful for passing legacy ISA devices (e.g. com ports,
parallel ports) to guests.

Supported syntax is as described in
http://cmrg.fifthhorseman.net/wiki/xen#grantingaccesstoserialhardwaretoadomU

I tested this using Xen's 'q' key handler which prints out the I/O
port and IRQ ranges allowed for each domain. e.g.:

(XEN) Rangesets belonging to domain 31:
(XEN)     I/O Ports  { 2e8-2ef, 2f8-2ff }
(XEN)     Interrupts { 3, 5-6 }

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

diff -r ccbee5bcb31b -r d6b418e9e525 docs/man/xl.cfg.pod.5
--- a/docs/man/xl.cfg.pod.5	Fri Aug 31 12:03:55 2012 +0100
+++ b/docs/man/xl.cfg.pod.5	Fri Aug 31 15:54:42 2012 +0100
@@ -402,6 +402,30 @@ for more information on the "permissive"
 
 =back
 
+=item B<ioports=[ "IOPORT_RANGE", "IOPORT_RANGE", ... ]>
+
+=over 4
+
+Allow a guest to access specific legacy I/O ports. Each B<IOPORT_RANGE>
+is given in hexadecimal and may be either a span, e.g. C<2f8-2ff>
+(inclusive), or a single I/O port, e.g. C<2f8>.
+
+It is recommended to use this option only for trusted VMs under
+administrator control.
+
+=back
+
+=item B<irqs=[ NUMBER, NUMBER, ... ]>
+
+=over 4
+
+Allow a guest to access specific physical IRQs.
+
+It is recommended to use this option only for trusted VMs under
+administrator control.
+
+=back
+
 =head2 Paravirtualised (PV) Guest Specific Options
 
 The following options apply only to Paravirtual guests.
diff -r ccbee5bcb31b -r d6b418e9e525 tools/libxl/libxl_create.c
--- a/tools/libxl/libxl_create.c	Fri Aug 31 12:03:55 2012 +0100
+++ b/tools/libxl/libxl_create.c	Fri Aug 31 15:54:42 2012 +0100
@@ -933,6 +933,36 @@ static void domcreate_launch_dm(libxl__e
         LOG(ERROR, "unable to add disk devices");
         goto error_out;
     }
+
+    for (i = 0; i < d_config->b_info.num_ioports; i++) {
+        libxl_ioport_range *io = &d_config->b_info.ioports[i];
+
+        LOG(DEBUG, "dom%d ioports %"PRIx32"-%"PRIx32,
+            domid, io->first, io->first + io->number - 1);
+
+        ret = xc_domain_ioport_permission(CTX->xch, domid,
+                                          io->first, io->number, 1);
+        if (ret < 0) {
+            LOGE(ERROR,
+                 "failed give dom%d access to ioports %"PRIx32"-%"PRIx32,
+                 domid, io->first, io->first + io->number - 1);
+            ret = ERROR_FAIL;
+        }
+    }
+
+    for (i = 0; i < d_config->b_info.num_irqs; i++) {
+        uint32_t irq = d_config->b_info.irqs[i];
+
+        LOG(DEBUG, "dom%d irq %"PRIx32, domid, irq);
+
+        ret = xc_domain_irq_permission(CTX->xch, domid, irq, 1);
+        if (ret < 0) {
+            LOGE(ERROR,
+                 "failed give dom%d access to irq %"PRId32, domid, irq);
+            ret = ERROR_FAIL;
+        }
+    }
+
     for (i = 0; i < d_config->num_nics; i++) {
         /* We have to init the nic here, because we still haven't
          * called libxl_device_nic_add at this point, but qemu needs
diff -r ccbee5bcb31b -r d6b418e9e525 tools/libxl/libxl_types.idl
--- a/tools/libxl/libxl_types.idl	Fri Aug 31 12:03:55 2012 +0100
+++ b/tools/libxl/libxl_types.idl	Fri Aug 31 15:54:42 2012 +0100
@@ -135,6 +135,11 @@ libxl_vga_interface_type = Enumeration("
 # Complex libxl types
 #
 
+libxl_ioport_range = Struct("ioport_range", [
+    ("first", uint32),
+    ("number", uint32),
+    ])
+
 libxl_vga_interface_info = Struct("vga_interface_info", [
     ("kind",    libxl_vga_interface_type),
     ])
@@ -277,6 +282,9 @@ libxl_domain_build_info = Struct("domain
     #  parameters for all type of scheduler
     ("sched_params",     libxl_domain_sched_params),
 
+    ("ioports",          Array(libxl_ioport_range, "num_ioports")),
+    ("irqs",             Array(uint32, "num_irqs")),
+
     ("u", KeyedUnion(None, libxl_domain_type, "type",
                 [("hvm", Struct(None, [("firmware",         string),
                                        ("bios",             libxl_bios_type),
diff -r ccbee5bcb31b -r d6b418e9e525 tools/libxl/xl_cmdimpl.c
--- a/tools/libxl/xl_cmdimpl.c	Fri Aug 31 12:03:55 2012 +0100
+++ b/tools/libxl/xl_cmdimpl.c	Fri Aug 31 15:54:42 2012 +0100
@@ -573,10 +573,12 @@ static void parse_config_data(const char
     long l;
     XLU_Config *config;
     XLU_ConfigList *cpus, *vbds, *nics, *pcis, *cvfbs, *cpuids;
+    XLU_ConfigList *ioports, *irqs;
+    int num_ioports, num_irqs;
     int pci_power_mgmt = 0;
     int pci_msitranslate = 0;
     int pci_permissive = 0;
-    int e;
+    int i, e;
 
     libxl_domain_create_info *c_info = &d_config->c_info;
     libxl_domain_build_info *b_info = &d_config->b_info;
@@ -919,6 +921,61 @@ static void parse_config_data(const char
         abort();
     }
 
+    if (!xlu_cfg_get_list(config, "ioports", &ioports, &num_ioports, 0)) {
+        b_info->num_ioports = num_ioports;
+        b_info->ioports = calloc(num_ioports, sizeof(*b_info->ioports));
+        for (i = 0; i < num_ioports; i++) {
+            const char *buf2;
+            char *ep;
+            uint32_t s, e;
+            buf = xlu_cfg_get_listitem (ioports, i);
+            if (!buf) {
+                fprintf(stderr,
+                        "xl: Unable to get element #%d in ioport list\n", i);
+                exit(1);
+            }
+            s = e = strtoul(buf, &ep, 16);
+            if (ep == buf) {
+                fprintf(stderr, "xl: Invalid argument parsing ioport: %s\n",
+                        buf);
+                exit(1);
+            }
+            if (*ep == '-') {
+                buf2 = ep + 1;
+                e = strtoul(buf2, &ep, 16);
+                if (ep == buf2 || s > e) {
+                    fprintf(stderr,
+                            "xl: Invalid argument parsion ioport: %s\n", buf);
+                    exit(1);
+                }
+            }
+            b_info->ioports[i].first = s;
+            b_info->ioports[i].number = e - s + 1;
+        }
+    }
+
+    if (!xlu_cfg_get_list(config, "irqs", &irqs, &num_irqs, 0)) {
+        b_info->num_irqs = num_irqs;
+        b_info->irqs = calloc(num_irqs, sizeof(*b_info->irqs));
+        for (i = 0; i < num_irqs; i++) {
+            char *ep;
+            uint32_t irq;
+            buf = xlu_cfg_get_listitem (irqs, i);
+            if (!buf) {
+                fprintf(stderr,
+                        "xl: Unable to get element %d in irq list\n", i);
+                exit(1);
+            }
+            irq = strtoul(buf, &ep, 10);
+            if (ep == buf) {
+                fprintf(stderr,
+                        "xl: Invalid argument parsing irq: %s\n", buf);
+                exit(1);
+            }
+            b_info->irqs[i] = irq;
+        }
+    }
+
     if (!xlu_cfg_get_list (config, "disk", &vbds, 0, 0)) {
         d_config->num_disks = 0;
         d_config->disks = NULL;



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Fri Aug 31 15:07:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 15:07:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7So7-0003t6-AD; Fri, 31 Aug 2012 15:06:51 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T7So5-0003sy-EC
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 15:06:49 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1346425602!8987558!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEyMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29782 invoked from network); 31 Aug 2012 15:06:42 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 15:06:42 -0000
X-IronPort-AV: E=Sophos;i="4.80,347,1344211200"; d="scan'208";a="14292433"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 15:06:41 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1;
	Fri, 31 Aug 2012 16:06:41 +0100
Message-ID: <1346425600.27277.227.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Date: Fri, 31 Aug 2012 16:06:40 +0100
In-Reply-To: <5040D05D.8090808@citrix.com>
References: <5040D05D.8090808@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] docs/command line: Clarify the behavior with
	invalid input.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> @@ -68,6 +69,39 @@ Some options take a comma separated list
>  Some parameters act as combinations of the above, most commonly a mix
>  of Boolean and String.  These are noted in the relevant sections.
>  
> +## Unexpected input
> +
> +This information describes the behaviour with unexpected or malformed
> +input, for reference purposes.  It is not expected to be used in any
> +situation, nor relied upon.

We should document most of these as explicitly having Undefined
Behaviour (i.e. it might eat your dog). Writing down what the code
happens to do in the face of a nonsensical option isn't that useful.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Fri Aug 31 15:07:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 15:07:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7SoL-0003uB-Mz; Fri, 31 Aug 2012 15:07:05 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T7SoK-0003tX-1k
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 15:07:04 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1346425602!8987558!2
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEyMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30182 invoked from network); 31 Aug 2012 15:06:47 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 15:06:47 -0000
X-IronPort-AV: E=Sophos;i="4.80,347,1344211200"; d="scan'208";a="14292458"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 15:06:47 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Fri, 31 Aug 2012 16:06:47 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1T7So3-0004Vx-1k; Fri, 31 Aug 2012 15:06:47 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1T7So2-0007ox-Rx;
	Fri, 31 Aug 2012 16:06:46 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20544.54022.744391.866200@mariner.uk.xensource.com>
Date: Fri, 31 Aug 2012 16:06:46 +0100
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1346425084.27277.220.camel@zakaz.uk.xensource.com>
References: <20544.52351.298163.138907@mariner.uk.xensource.com>
	<1346425084.27277.220.camel@zakaz.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] libxl: fix api check Makefile
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [Xen-devel] [PATCH] libxl: fix api check Makefile"):
> On Fri, 2012-08-31 at 15:38 +0100, Ian Jackson wrote:
> > @@ -116,12 +117,13 @@ $(eval $(genpath-target))
From xen-devel-bounces@lists.xen.org Fri Aug 31 15:07:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 15:07:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7SoL-0003uB-Mz; Fri, 31 Aug 2012 15:07:05 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T7SoK-0003tX-1k
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 15:07:04 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1346425602!8987558!2
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEyMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30182 invoked from network); 31 Aug 2012 15:06:47 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 15:06:47 -0000
X-IronPort-AV: E=Sophos;i="4.80,347,1344211200"; d="scan'208";a="14292458"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 15:06:47 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Fri, 31 Aug 2012 16:06:47 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1T7So3-0004Vx-1k; Fri, 31 Aug 2012 15:06:47 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1T7So2-0007ox-Rx;
	Fri, 31 Aug 2012 16:06:46 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20544.54022.744391.866200@mariner.uk.xensource.com>
Date: Fri, 31 Aug 2012 16:06:46 +0100
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1346425084.27277.220.camel@zakaz.uk.xensource.com>
References: <20544.52351.298163.138907@mariner.uk.xensource.com>
	<1346425084.27277.220.camel@zakaz.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] libxl: fix api check Makefile
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [Xen-devel] [PATCH] libxl: fix api check Makefile"):
> On Fri, 2012-08-31 at 15:38 +0100, Ian Jackson wrote:
> > @@ -116,12 +117,13 @@ $(eval $(genpath-target))
> >  libxl.api-ok: check-libxl-api-rules _libxl.api-for-check
> >  	$(PERL) $^
> > +	touch $@
> 
> libxl.api-ok needs to either go in .*ignore or start with an _.

Duh, don't know how I missed that.

> Otherwise this looks good:
> Acked-by: Ian Campbell <ian.campbell@citrix.com>

From: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [PATCH v2] libxl: fix api check Makefile

Touch the libxl.api-ok stamp file, and unconditionally put in place
the new _libxl.api-for-check.  This avoids needlessly rerunning the
preprocessor on libxl.h each time we call "make".

Ensure that _libxl.api-for-check gets the CFLAGS used for xl, so that
if it is asked for in a standalone make run it can find xentoollog.h.

Remove *.api-ok on clean.

Also fix .gitignore.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>

--
v2: add api-ok to gitignore

---
 .gitignore           |    3 ++-
 tools/libxl/Makefile |    8 +++++---
 2 files changed, 7 insertions(+), 4 deletions(-)

diff --git a/.gitignore b/.gitignore
index 084ec62..776e4b2 100644
--- a/.gitignore
+++ b/.gitignore
@@ -189,7 +189,8 @@ tools/libxl/xl
 tools/libxl/testenum
 tools/libxl/testenum.c
 tools/libxl/tmp.*
-tools/libxl/libxl.api-for-check
+tools/libxl/_libxl.api-for-check
+tools/libxl/*.api-ok
 tools/libaio/src/*.ol
 tools/libaio/src/*.os
 tools/misc/cpuperf/cpuperf-perfcntr
diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
index 22c4881..a9d9ec6 100644
--- a/tools/libxl/Makefile
+++ b/tools/libxl/Makefile
@@ -85,7 +85,8 @@ $(LIBXLU_OBJS): CFLAGS += $(CFLAGS_libxenctrl) # For xentoollog.h
 CLIENTS = xl testidl libxl-save-helper
 
 XL_OBJS = xl.o xl_cmdimpl.o xl_cmdtable.o xl_sxp.o
-$(XL_OBJS): CFLAGS += $(CFLAGS_libxenctrl) # For xentoollog.h
+$(XL_OBJS) _libxl.api-for-check: \
+            CFLAGS += $(CFLAGS_libxenctrl) # For xentoollog.h
 $(XL_OBJS): CFLAGS += $(CFLAGS_libxenlight)
 $(XL_OBJS): CFLAGS += -include $(XEN_ROOT)/tools/config.h # libxl_json.h needs it.
 
@@ -116,12 +117,13 @@ $(eval $(genpath-target))
 
 libxl.api-ok: check-libxl-api-rules _libxl.api-for-check
 	$(PERL) $^
+	touch $@
 
 _%.api-for-check: %.h
 	$(CC) $(CPPFLAGS) $(CFLAGS) $(CFLAGS_$*.o) -c -E $< $(APPEND_CFLAGS) \
 		-DLIBXL_EXTERNAL_CALLERS_ONLY=LIBXL_EXTERNAL_CALLERS_ONLY \
 		>$@.new
-	$(call move-if-changed,$@.new,$@)
+	mv -f $@.new $@
 
 _paths.h: genpath
 	sed -e "s/\([^=]*\)=\(.*\)/#define \1 \2/g" $@.tmp >$@.2.tmp
@@ -211,7 +213,7 @@ install: all
 clean:
 	$(RM) -f _*.h *.o *.so* *.a $(CLIENTS) $(DEPS)
 	$(RM) -f _*.c *.pyc _paths.*.tmp _*.api-for-check
-	$(RM) -f testidl.c.new testidl.c
+	$(RM) -f testidl.c.new testidl.c *.api-ok
 
 distclean: clean
 
-- 
tg: (f86a047..) t/xen/xl.api-check-makefile (depends on: t/xen/xl.cfg.no-final-newline-ok)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 15:08:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 15:08:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7Spz-00042c-7l; Fri, 31 Aug 2012 15:08:47 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T7Spy-00042V-GH
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 15:08:46 +0000
Received: from [85.158.143.35:39448] by server-3.bemta-4.messagelabs.com id
	A5/5F-08232-D73D0405; Fri, 31 Aug 2012 15:08:45 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1346425725!11660598!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEyMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17526 invoked from network); 31 Aug 2012 15:08:45 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 15:08:45 -0000
X-IronPort-AV: E=Sophos;i="4.80,347,1344211200"; d="scan'208";a="14292519"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 15:08:45 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1;
	Fri, 31 Aug 2012 16:08:45 +0100
Message-ID: <1346425723.27277.228.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Fri, 31 Aug 2012 16:08:43 +0100
In-Reply-To: <20544.54022.744391.866200@mariner.uk.xensource.com>
References: <20544.52351.298163.138907@mariner.uk.xensource.com>
	<1346425084.27277.220.camel@zakaz.uk.xensource.com>
	<20544.54022.744391.866200@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] libxl: fix api check Makefile
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-08-31 at 16:06 +0100, Ian Jackson wrote:
> Ian Campbell writes ("Re: [Xen-devel] [PATCH] libxl: fix api check Makefile"):
> > On Fri, 2012-08-31 at 15:38 +0100, Ian Jackson wrote:
> > > @@ -116,12 +117,13 @@ $(eval $(genpath-target))
> > >  libxl.api-ok: check-libxl-api-rules _libxl.api-for-check
> > >  	$(PERL) $^
> > > +	touch $@
> > 
> > libxl.api-ok needs to either go in .*ignore or start with an _.
> 
> Duh, don't know how I missed that.
> 
> > Otherwise this looks good:
> > Acked-by: Ian Campbell <ian.campbell@citrix.com>
> 
> From: Ian Jackson <ian.jackson@eu.citrix.com>
> Subject: [PATCH v2] libxl: fix api check Makefile
> 
> Touch the libxl.api-ok stamp file, and unconditionally put in place
> the new _libxl.api-for-check.  This avoids needlessly rerunning the
> preprocessor on libxl.h each time we call "make".
> 
> Ensure that _libxl.api-for-check gets the CFLAGS used for xl, so that
> if it is asked for in a standalone make run it can find xentoollog.h.
> 
> Remove *.api-ok on clean.
> 
> Also fix .gitignore.
> 
> Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
> Acked-by: Ian Campbell <ian.campbell@citrix.com>
> 
> --
> v2: add api-ok to gitignore

Needs to be in hgignore too. Sorry!



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 15:12:29 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 15:12:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7StM-0004I6-Rw; Fri, 31 Aug 2012 15:12:16 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T7StL-0004Hx-NX
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 15:12:15 +0000
Received: from [85.158.138.51:26553] by server-2.bemta-3.messagelabs.com id
	2A/1A-04862-E44D0405; Fri, 31 Aug 2012 15:12:14 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-174.messagelabs.com!1346425934!20003340!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEyMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12070 invoked from network); 31 Aug 2012 15:12:14 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-12.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 15:12:14 -0000
X-IronPort-AV: E=Sophos;i="4.80,347,1344211200"; d="scan'208";a="14292600"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 15:12:14 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1;
	Fri, 31 Aug 2012 16:12:14 +0100
Message-ID: <1346425932.27277.230.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: xen-devel <xen-devel@lists.xen.org>
Date: Fri, 31 Aug 2012 16:12:12 +0100
In-Reply-To: <1346148361.9975.3.camel@zakaz.uk.xensource.com>
References: <1346148361.9975.3.camel@zakaz.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "george.dunlap@eu.citrix.com" <george.dunlap@eu.citrix.com>,
	Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [Xen-users] Xen 4.2 TODO / Release Plan
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2012-08-28 at 11:06 +0100, Ian Campbell wrote:

> hypervisor, nice to have:
> 
>     * [BUG(?)] Under certain conditions, the p2m_pod_sweep code will
>       stop halfway through searching, causing a guest to crash even if
>       there was zeroed memory available.  This is NOT a regression
>       from 4.1, and is a very rare case, so probably shouldn't be a
>       blocker.  (In fact, I'd be open to the idea that it should wait
>       until after the release to get more testing.)
>       	    (George Dunlap)

>     * fix high change rate to CMOS RTC periodic interrupt causing
>       guest wall clock time to lag (possible fix outlined, needs to be
>       put in patch form and thoroughly reviewed/tested for unwanted
>       side effects, Jan Beulich)

I have a feeling one or both of these might be fixed already, is that
true?

>     * S3 regression(s?) reported by Ben Guthro (Ben & Jan Beulich)

AIUI this one isn't though.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 15:13:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 15:13:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7Suj-0004Ng-B8; Fri, 31 Aug 2012 15:13:41 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T7Suh-0004NY-Kh
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 15:13:39 +0000
Received: from [85.158.143.35:12958] by server-3.bemta-4.messagelabs.com id
	08/96-08232-3A4D0405; Fri, 31 Aug 2012 15:13:39 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1346426018!16175429!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEyMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1863 invoked from network); 31 Aug 2012 15:13:38 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 15:13:38 -0000
X-IronPort-AV: E=Sophos;i="4.80,347,1344211200"; d="scan'208";a="14292640"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 15:13:31 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Fri, 31 Aug 2012 16:13:31 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1T7SuZ-0004Z4-5j; Fri, 31 Aug 2012 15:13:31 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1T7SuZ-0007pg-1S;
	Fri, 31 Aug 2012 16:13:31 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20544.54424.775548.66532@mariner.uk.xensource.com>
Date: Fri, 31 Aug 2012 16:13:28 +0100
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1346425278.27277.224.camel@zakaz.uk.xensource.com>
References: <1344935141.5926.6.camel@zakaz.uk.xensource.com>
	<20120814100704.GA19704@bloms.de>
	<1346421542.27277.218.camel@zakaz.uk.xensource.com>
	<1346425278.27277.224.camel@zakaz.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: Dieter Bloms <xensource.com@bloms.de>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Xen-users] Xen 4.2 TODO (io and irq parameter are
 not evaluated by xl)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [Xen-users] Xen 4.2 TODO (io and irq parameter are not evaluated by xl)"):
> libxl/xl: implement support for guest ioport and irq permissions.

Most of this looks good, but:

...                                       ("bios",             libxl_bios_type),
> +            buf = xlu_cfg_get_listitem (ioports, i);
> +            if (!buf) {
> +                fprintf(stderr,
> +                        "xl: Unable to get element #%d in ioport list\n", i);
> +                exit(1);
> +            }
> +            s = e = strtoul(buf, &ep, 16);
> +            if (ep == buf) {
> +                fprintf(stderr, "xl: Invalid argument parsing ioport: %s\n",
> +                        buf);
> +                exit(1);
> +            }
> +            if (*ep == '-') {

This code fails to properly handle (reject)
   - (*ep!=0 && *ep!='-')
   - value > LONG_MAX
   - INT_MAX < value <= LONG_MAX
   - *ep2!=0

> +            irq = strtoul(buf, &ep, 10);

Likewise.
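
[Editor's note, not part of the original mail: as a sketch of the rejected cases listed above. `parse_ioport` is a hypothetical helper, not the actual xl code; it shows one way a strtoul-based parser can reject trailing junk, ERANGE overflow, and values that fit an unsigned long but not an int. A full range parser would apply the same checks to the number after the '-' (the *ep2 case).]

```c
#include <errno.h>
#include <limits.h>
#include <stdlib.h>

/* Hypothetical helper: parse one hex ioport value.
 * Returns the value (>= 0) on success, -1 on any malformed input.
 * A '-' after the digits is tolerated because it introduces a range. */
static long parse_ioport(const char *buf)
{
    char *ep;
    unsigned long v;

    errno = 0;                       /* strtoul only sets errno on overflow */
    v = strtoul(buf, &ep, 16);
    if (ep == buf)                   /* no digits consumed at all */
        return -1;
    if (errno == ERANGE)             /* value > ULONG_MAX: overflowed */
        return -1;
    if (v > INT_MAX)                 /* fits unsigned long but not int */
        return -1;
    if (*ep != '\0' && *ep != '-')   /* trailing garbage after the number */
        return -1;
    return (long)v;
}
```

Note that errno must be cleared before the call, since strtoul leaves it untouched on success; the INT_MAX comparison also catches negative inputs, which strtoul silently negates into large unsigned values.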

I take it we're not worrying about missing malloc failure checks in
xl.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 15:13:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 15:13:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7Suj-0004Ng-B8; Fri, 31 Aug 2012 15:13:41 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T7Suh-0004NY-Kh
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 15:13:39 +0000
Received: from [85.158.143.35:12958] by server-3.bemta-4.messagelabs.com id
	08/96-08232-3A4D0405; Fri, 31 Aug 2012 15:13:39 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1346426018!16175429!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEyMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1863 invoked from network); 31 Aug 2012 15:13:38 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 15:13:38 -0000
X-IronPort-AV: E=Sophos;i="4.80,347,1344211200"; d="scan'208";a="14292640"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 15:13:31 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Fri, 31 Aug 2012 16:13:31 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1T7SuZ-0004Z4-5j; Fri, 31 Aug 2012 15:13:31 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1T7SuZ-0007pg-1S;
	Fri, 31 Aug 2012 16:13:31 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20544.54424.775548.66532@mariner.uk.xensource.com>
Date: Fri, 31 Aug 2012 16:13:28 +0100
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1346425278.27277.224.camel@zakaz.uk.xensource.com>
References: <1344935141.5926.6.camel@zakaz.uk.xensource.com>
	<20120814100704.GA19704@bloms.de>
	<1346421542.27277.218.camel@zakaz.uk.xensource.com>
	<1346425278.27277.224.camel@zakaz.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: Dieter Bloms <xensource.com@bloms.de>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Xen-users] Xen 4.2 TODO (io and irq parameter are
 not evaluated by xl)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [Xen-users] Xen 4.2 TODO (io and irq parameter are not evaluated by xl)"):
> libxl/xl: implement support for guest ioport and irq permissions.

Most of this looks good, but:

...                                       ("bios",             libxl_bios_type),
> +            buf = xlu_cfg_get_listitem (ioports, i);
> +            if (!buf) {
> +                fprintf(stderr,
> +                        "xl: Unable to get element #%d in ioport list\n", i);
> +                exit(1);
> +            }
> +            s = e = strtoul(buf, &ep, 16);
> +            if (ep == buf) {
> +                fprintf(stderr, "xl: Invalid argument parsing ioport: %s\n",
> +                        buf);
> +                exit(1);
> +            }
> +            if (*ep == '-') {

This code fails to properly handle (reject)
   - (*ep!=0 && *ep!='-')
   - value > LONG_MAX
   - INT_MAX < value <= LONG_MAX
   - *ep2!=0

> +            irq = strtoul(buf, &ep, 10);

Likewise.
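For reference, a minimal sketch of the stricter parsing being asked for. This is not the actual xl code; parse_ioport_range is a hypothetical helper, shown only to illustrate rejecting trailing junk and out-of-range values in one place:

```c
#include <errno.h>
#include <limits.h>
#include <stdlib.h>

/* Parse "START" or "START-END" as a hex I/O port range.
 * Rejects: no digits, trailing junk after START or END, and any
 * value above INT_MAX (a single check that also covers overflow,
 * since strtoul saturates at ULONG_MAX with errno == ERANGE).
 * Returns 0 on success, -1 on any parse error. */
static int parse_ioport_range(const char *buf, int *s, int *e)
{
    char *ep, *ep2;
    unsigned long v, v2;

    errno = 0;
    v = strtoul(buf, &ep, 16);
    if (ep == buf || errno == ERANGE || v > INT_MAX)
        return -1;                      /* no digits, or out of range */
    if (*ep == '-') {
        errno = 0;
        v2 = strtoul(ep + 1, &ep2, 16);
        if (ep2 == ep + 1 || errno == ERANGE || v2 > INT_MAX || *ep2 != 0)
            return -1;                  /* bad END, or trailing junk */
    } else if (*ep != 0) {
        return -1;                      /* trailing junk after START */
    } else {
        v2 = v;                         /* single port: START == END */
    }
    *s = v;
    *e = v2;
    return 0;
}
```

Note that the single `v > INT_MAX` test (combined with the ERANGE check) covers both the "value > LONG_MAX" and "INT_MAX < value <= LONG_MAX" cases above, and `*ep2 != 0` covers the last one.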

I take it we're not worrying about missing malloc failure checks in
xl.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 15:15:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 15:15:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7Sw2-0004VJ-Vs; Fri, 31 Aug 2012 15:15:02 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T7Sw0-0004Uv-Ir
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 15:15:00 +0000
Received: from [85.158.138.51:52372] by server-11.bemta-3.messagelabs.com id
	A5/C4-30250-3F4D0405; Fri, 31 Aug 2012 15:14:59 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-4.tower-174.messagelabs.com!1346426098!27919693!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEyMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4580 invoked from network); 31 Aug 2012 15:14:59 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-4.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 15:14:59 -0000
X-IronPort-AV: E=Sophos;i="4.80,347,1344211200"; d="scan'208";a="14292675"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 15:14:58 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Fri, 31 Aug 2012 16:14:58 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1T7Svy-0004Zd-Dz; Fri, 31 Aug 2012 15:14:58 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1T7Svy-0007ry-Ab;
	Fri, 31 Aug 2012 16:14:58 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20544.54514.22038.727353@mariner.uk.xensource.com>
Date: Fri, 31 Aug 2012 16:14:58 +0100
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1346425723.27277.228.camel@zakaz.uk.xensource.com>
References: <20544.52351.298163.138907@mariner.uk.xensource.com>
	<1346425084.27277.220.camel@zakaz.uk.xensource.com>
	<20544.54022.744391.866200@mariner.uk.xensource.com>
	<1346425723.27277.228.camel@zakaz.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] libxl: fix api check Makefile
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [Xen-devel] [PATCH] libxl: fix api check Makefile"):
> Needs to be in hgignore too. Sorry!

I was just fixing what was in front of my nose, but fine:

Ian.

From: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [PATCH v3] libxl: fix api check Makefile

Touch the libxl.api-ok stamp file, and unconditionally put in place
the new _libxl.api-for-check.  This avoids needlessly rerunning the
preprocessor on libxl.h each time we call "make".

Ensure that _libxl.api-for-check gets the CFLAGS used for xl, so that
if it is asked for in a standalone make run it can find xentoollog.h.

Remove *.api-ok on clean.

Also fix .gitignore.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>

--
v3: add api-ok to hgignore
v2: add api-ok to gitignore

---
 .gitignore           |    3 ++-
 .hgignore            |    1 +
 tools/libxl/Makefile |    8 +++++---
 3 files changed, 8 insertions(+), 4 deletions(-)

diff --git a/.gitignore b/.gitignore
index 084ec62..776e4b2 100644
--- a/.gitignore
+++ b/.gitignore
@@ -189,7 +189,8 @@ tools/libxl/xl
 tools/libxl/testenum
 tools/libxl/testenum.c
 tools/libxl/tmp.*
-tools/libxl/libxl.api-for-check
+tools/libxl/_libxl.api-for-check
+tools/libxl/*.api-ok
 tools/libaio/src/*.ol
 tools/libaio/src/*.os
 tools/misc/cpuperf/cpuperf-perfcntr
diff --git a/.hgignore b/.hgignore
index 5ef6838..141809e 100644
--- a/.hgignore
+++ b/.hgignore
@@ -188,6 +188,7 @@
 ^tools/libxl/tmp\..*$
 ^tools/libxl/.*\.new$
 ^tools/libxl/_libxl\.api-for-check
+^tools/libxl/libxl\.api-ok
 ^tools/libvchan/vchan-node[12]$
 ^tools/libaio/src/.*\.ol$
 ^tools/libaio/src/.*\.os$
diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
index 22c4881..a9d9ec6 100644
--- a/tools/libxl/Makefile
+++ b/tools/libxl/Makefile
@@ -85,7 +85,8 @@ $(LIBXLU_OBJS): CFLAGS += $(CFLAGS_libxenctrl) # For xentoollog.h
 CLIENTS = xl testidl libxl-save-helper
 
 XL_OBJS = xl.o xl_cmdimpl.o xl_cmdtable.o xl_sxp.o
-$(XL_OBJS): CFLAGS += $(CFLAGS_libxenctrl) # For xentoollog.h
+$(XL_OBJS) _libxl.api-for-check: \
+            CFLAGS += $(CFLAGS_libxenctrl) # For xentoollog.h
 $(XL_OBJS): CFLAGS += $(CFLAGS_libxenlight)
 $(XL_OBJS): CFLAGS += -include $(XEN_ROOT)/tools/config.h # libxl_json.h needs it.
 
@@ -116,12 +117,13 @@ $(eval $(genpath-target))
 
 libxl.api-ok: check-libxl-api-rules _libxl.api-for-check
 	$(PERL) $^
+	touch $@
 
 _%.api-for-check: %.h
 	$(CC) $(CPPFLAGS) $(CFLAGS) $(CFLAGS_$*.o) -c -E $< $(APPEND_CFLAGS) \
 		-DLIBXL_EXTERNAL_CALLERS_ONLY=LIBXL_EXTERNAL_CALLERS_ONLY \
 		>$@.new
-	$(call move-if-changed,$@.new,$@)
+	mv -f $@.new $@
 
 _paths.h: genpath
 	sed -e "s/\([^=]*\)=\(.*\)/#define \1 \2/g" $@.tmp >$@.2.tmp
@@ -211,7 +213,7 @@ install: all
 clean:
 	$(RM) -f _*.h *.o *.so* *.a $(CLIENTS) $(DEPS)
 	$(RM) -f _*.c *.pyc _paths.*.tmp _*.api-for-check
-	$(RM) -f testidl.c.new testidl.c
+	$(RM) -f testidl.c.new testidl.c *.api-ok
 
 distclean: clean
 
-- 
tg: (f86a047..) t/xen/xl.api-check-makefile (depends on: t/xen/xl.cfg.no-final-newline-ok)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 15:19:12 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 15:19:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7Szr-0004kr-LD; Fri, 31 Aug 2012 15:18:59 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T7Szq-0004ke-73
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 15:18:58 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1346426330!8896752!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEyMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3256 invoked from network); 31 Aug 2012 15:18:50 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 15:18:50 -0000
X-IronPort-AV: E=Sophos;i="4.80,347,1344211200"; d="scan'208";a="14292746"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 15:18:50 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1;
	Fri, 31 Aug 2012 16:18:50 +0100
Message-ID: <1346426328.27277.234.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "Ren, Yongjie" <yongjie.ren@intel.com>
Date: Fri, 31 Aug 2012 16:18:48 +0100
In-Reply-To: <1B4B44D9196EFF41AE41FDA404FC0A1018AF31@SHSMSX101.ccr.corp.intel.com>
References: <1B4B44D9196EFF41AE41FDA404FC0A1018AF31@SHSMSX101.ccr.corp.intel.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: 'Konrad Rzeszutek Wilk' <konrad.wilk@oracle.com>,
	"Keir \(Xen.org\)" <keir@xen.org>, 'Jan Beulich' <JBeulich@suse.com>,
	'xen-devel' <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen4.2-rc3 test result
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-08-31 at 08:27 +0100, Ren, Yongjie wrote:
> 3. 'xl vcpu-set' can't decrease the vCPU number of a HVM guest
>   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1822

I've only just noticed that this has changed from the previous
description which was "vcpu-set doesn't take effect on guest".

Have we ever supported HVM guest CPU remove? I thought not.

http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1822#c3 seems to
describe the behaviour I would expect.

If this is supposed to be an existing feature then is this a regression
with xl vs xm or from 4.1 to 4.2?

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 15:24:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 15:24:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7T4f-0004uc-Gd; Fri, 31 Aug 2012 15:23:57 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T7T4e-0004uO-0I
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 15:23:56 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1346426630!8553205!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEyMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5706 invoked from network); 31 Aug 2012 15:23:50 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 15:23:50 -0000
X-IronPort-AV: E=Sophos;i="4.80,347,1344211200"; d="scan'208";a="14292879"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 15:23:41 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1;
	Fri, 31 Aug 2012 16:23:41 +0100
Message-ID: <1346426619.27277.237.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Fri, 31 Aug 2012 16:23:39 +0100
In-Reply-To: <20544.54424.775548.66532@mariner.uk.xensource.com>
References: <1344935141.5926.6.camel@zakaz.uk.xensource.com>
	<20120814100704.GA19704@bloms.de>
	<1346421542.27277.218.camel@zakaz.uk.xensource.com>
	<1346425278.27277.224.camel@zakaz.uk.xensource.com>
	<20544.54424.775548.66532@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Dieter Bloms <xensource.com@bloms.de>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Xen-users] Xen 4.2 TODO (io and irq parameter are
 not evaluated by xl)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-08-31 at 16:13 +0100, Ian Jackson wrote:
> Ian Campbell writes ("Re: [Xen-users] Xen 4.2 TODO (io and irq parameter are not evaluated by xl)"):
> > libxl/xl: implement support for guest ioport and irq permissions.
> 
> Most of this looks good, but:
> 
> ...                                       ("bios",             libxl_bios_type),
> > +            buf = xlu_cfg_get_listitem (ioports, i);
> > +            if (!buf) {
> > +                fprintf(stderr,
> > +                        "xl: Unable to get element #%d in ioport list\n", i);
> > +                exit(1);
> > +            }
> > +            s = e = strtoul(buf, &ep, 16);
> > +            if (ep == buf) {
> > +                fprintf(stderr, "xl: Invalid argument parsing ioport: %s\n",
> > +                        buf);
> > +                exit(1);
> > +            }
> > +            if (*ep == '-') {
> 
> This code fails to properly handle (reject)
>    - (*ep!=0 && *ep!='-')

Oops, will fix.

>    - value > LONG_MAX
>    - INT_MAX < value <= LONG_MAX

These all get checked inside the (eventual) hypercall. Or were you
thinking of something else?

BTW the types should be unsigned all the way down, unless I've screwed
something up.

>    - *ep2!=0

Will fix.

> 
> > +            irq = strtoul(buf, &ep, 10);
> 
> Likewise.
> 
> I take it we're not worrying about missing malloc failure checks in
> xl.

Seems like we do elsewhere, so I might as well fix this.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 15:33:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 15:33:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7TDf-00055W-IC; Fri, 31 Aug 2012 15:33:15 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T7TDe-00055R-A9
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 15:33:14 +0000
Received: from [85.158.139.83:6077] by server-6.bemta-5.messagelabs.com id
	42/62-21336-939D0405; Fri, 31 Aug 2012 15:33:13 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-182.messagelabs.com!1346427192!23680697!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEyMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30443 invoked from network); 31 Aug 2012 15:33:12 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-14.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 15:33:12 -0000
X-IronPort-AV: E=Sophos;i="4.80,347,1344211200"; d="scan'208";a="14293112"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 15:32:12 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1;
	Fri, 31 Aug 2012 16:32:12 +0100
Message-ID: <1346427131.27277.238.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Fri, 31 Aug 2012 16:32:11 +0100
In-Reply-To: <20544.54514.22038.727353@mariner.uk.xensource.com>
References: <20544.52351.298163.138907@mariner.uk.xensource.com>
	<1346425084.27277.220.camel@zakaz.uk.xensource.com>
	<20544.54022.744391.866200@mariner.uk.xensource.com>
	<1346425723.27277.228.camel@zakaz.uk.xensource.com>
	<20544.54514.22038.727353@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] libxl: fix api check Makefile
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-08-31 at 16:14 +0100, Ian Jackson wrote:
> Ian Campbell writes ("Re: [Xen-devel] [PATCH] libxl: fix api check Makefile"):
> > Needs to be in hgignore too. Sorry!
> 
> I was just fixing what was in front of my nose, but fine:
> 
> Ian.
> 
> From: Ian Jackson <ian.jackson@eu.citrix.com>
> Subject: [PATCH v3] libxl: fix api check Makefile
> 
> Touch the libxl.api-ok stamp file, and unconditionally put in place
> the new _libxl.api-for-check.  This avoids needlessly rerunning the
> preprocessor on libxl.h each time we call "make".
> 
> Ensure that _libxl.api-for-check gets the CFLAGS used for xl, so that
> if it is asked for in a standalone make run it can find xentoollog.h.
> 
> Remove *.api-ok on clean.
> 
> Also fix .gitignore.
> 
> Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

> 
> --
> v3: add api-ok to hgignore
> v2: add api-ok to gitignore
> 
> ---
>  .gitignore           |    3 ++-
>  .hgignore            |    1 +
>  tools/libxl/Makefile |    8 +++++---
>  3 files changed, 8 insertions(+), 4 deletions(-)
> 
> diff --git a/.gitignore b/.gitignore
> index 084ec62..776e4b2 100644
> --- a/.gitignore
> +++ b/.gitignore
> @@ -189,7 +189,8 @@ tools/libxl/xl
>  tools/libxl/testenum
>  tools/libxl/testenum.c
>  tools/libxl/tmp.*
> -tools/libxl/libxl.api-for-check
> +tools/libxl/_libxl.api-for-check
> +tools/libxl/*.api-ok
>  tools/libaio/src/*.ol
>  tools/libaio/src/*.os
>  tools/misc/cpuperf/cpuperf-perfcntr
> diff --git a/.hgignore b/.hgignore
> index 5ef6838..141809e 100644
> --- a/.hgignore
> +++ b/.hgignore
> @@ -188,6 +188,7 @@
>  ^tools/libxl/tmp\..*$
>  ^tools/libxl/.*\.new$
>  ^tools/libxl/_libxl\.api-for-check
> +^tools/libxl/libxl\.api-ok
>  ^tools/libvchan/vchan-node[12]$
>  ^tools/libaio/src/.*\.ol$
>  ^tools/libaio/src/.*\.os$
> diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
> index 22c4881..a9d9ec6 100644
> --- a/tools/libxl/Makefile
> +++ b/tools/libxl/Makefile
> @@ -85,7 +85,8 @@ $(LIBXLU_OBJS): CFLAGS += $(CFLAGS_libxenctrl) # For xentoollog.h
>  CLIENTS = xl testidl libxl-save-helper
>  
>  XL_OBJS = xl.o xl_cmdimpl.o xl_cmdtable.o xl_sxp.o
> -$(XL_OBJS): CFLAGS += $(CFLAGS_libxenctrl) # For xentoollog.h
> +$(XL_OBJS) _libxl.api-for-check: \
> +            CFLAGS += $(CFLAGS_libxenctrl) # For xentoollog.h
>  $(XL_OBJS): CFLAGS += $(CFLAGS_libxenlight)
>  $(XL_OBJS): CFLAGS += -include $(XEN_ROOT)/tools/config.h # libxl_json.h needs it.
>  
> @@ -116,12 +117,13 @@ $(eval $(genpath-target))
>  
>  libxl.api-ok: check-libxl-api-rules _libxl.api-for-check
>  	$(PERL) $^
> +	touch $@
>  
>  _%.api-for-check: %.h
>  	$(CC) $(CPPFLAGS) $(CFLAGS) $(CFLAGS_$*.o) -c -E $< $(APPEND_CFLAGS) \
>  		-DLIBXL_EXTERNAL_CALLERS_ONLY=LIBXL_EXTERNAL_CALLERS_ONLY \
>  		>$@.new
> -	$(call move-if-changed,$@.new,$@)
> +	mv -f $@.new $@
>  
>  _paths.h: genpath
>  	sed -e "s/\([^=]*\)=\(.*\)/#define \1 \2/g" $@.tmp >$@.2.tmp
> @@ -211,7 +213,7 @@ install: all
>  clean:
>  	$(RM) -f _*.h *.o *.so* *.a $(CLIENTS) $(DEPS)
>  	$(RM) -f _*.c *.pyc _paths.*.tmp _*.api-for-check
> -	$(RM) -f testidl.c.new testidl.c
> +	$(RM) -f testidl.c.new testidl.c *.api-ok
>  
>  distclean: clean
>  



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 15:36:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 15:36:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7TGw-0005EG-5V; Fri, 31 Aug 2012 15:36:38 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T7TGu-0005E9-RG
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 15:36:36 +0000
Received: from [85.158.143.35:40849] by server-1.bemta-4.messagelabs.com id
	14/1E-12504-30AD0405; Fri, 31 Aug 2012 15:36:35 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1346427389!13583664!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM2OTU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18392 invoked from network); 31 Aug 2012 15:36:35 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-16.tower-21.messagelabs.com with SMTP;
	31 Aug 2012 15:36:35 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 31 Aug 2012 16:36:28 +0100
Message-Id: <5040F61A0200007800097E86@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 31 Aug 2012 16:36:26 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>
References: <5040D05D.8090808@citrix.com>
In-Reply-To: <5040D05D.8090808@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Ian Campbell <Ian.Campbell@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] docs/command line: Clarify the behavior with
 invalid input.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 31.08.12 at 16:55, Andrew Cooper <andrew.cooper3@citrix.com> wrote:

I don't think we should specifically document the behavior for
unexpected values; instead, the behavior should simply be
"undefined".

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 15:40:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 15:40:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7TKr-0005RM-RG; Fri, 31 Aug 2012 15:40:41 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T7TKr-0005RF-2h
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 15:40:41 +0000
Received: from [85.158.143.35:54962] by server-1.bemta-4.messagelabs.com id
	81/53-12504-8FAD0405; Fri, 31 Aug 2012 15:40:40 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1346427639!16093843!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM2OTU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24536 invoked from network); 31 Aug 2012 15:40:39 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-6.tower-21.messagelabs.com with SMTP;
	31 Aug 2012 15:40:39 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 31 Aug 2012 16:40:39 +0100
Message-Id: <5040F7140200007800097E91@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 31 Aug 2012 16:40:36 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>
References: <1344935141.5926.6.camel@zakaz.uk.xensource.com>
	<20120814100704.GA19704@bloms.de>
	<1346421542.27277.218.camel@zakaz.uk.xensource.com>
	<1346425278.27277.224.camel@zakaz.uk.xensource.com>
In-Reply-To: <1346425278.27277.224.camel@zakaz.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Dieter Bloms <xensource.com@bloms.de>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Xen-users] Xen 4.2 TODO (io and irq parameter are
 not evaluated by xl)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 31.08.12 at 17:01, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> --- a/docs/man/xl.cfg.pod.5	Fri Aug 31 12:03:55 2012 +0100
> +++ b/docs/man/xl.cfg.pod.5	Fri Aug 31 15:54:42 2012 +0100
> @@ -402,6 +402,30 @@ for more information on the "permissive"
>  
>  =back
>  
> +=item B<ioports=[ "IOPORT_RANGE", "IOPORT_RANGE", ... ]>

Is this really with quotes, and requiring an array?

> +
> +=over 4
> +
> +Allow a guest to access specific legacy I/O ports. Each B<IOPORT_RANGE>
> +is given in hexadecimal and may be either a span, e.g. C<2f8-2ff>
> +(inclusive), or a single I/O port, e.g. C<2f8>.
> +
> +It is recommended to use this option only for trusted VMs under
> +administrator control.
> +
> +=back
> +
> +=item B<irqs=[ NUMBER, NUMBER, ... ]>

Similarly here - is this really requiring an array? I ask because
I had to look at this just last week for a colleague, and what
we got out of inspection of examples/code was that a simple
number (and a simple range without quotes above) are
permitted too.
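
For illustration only, a hypothetical guest config fragment showing both
variants (not verified against the parser; the array form is what the
quoted documentation describes, the simple form is what Jan reports being
accepted in practice):

```
# Array form, as documented:
ioports = [ "2f8-2ff", "3f8" ]
irqs    = [ 4, 7 ]

# Simple form, reportedly also accepted:
irqs = 7
```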

Jan

> +
> +=over 4
> +
> +Allow a guest to access specific physical IRQs.
> +
> +It is recommended to use this option only for trusted VMs under
> +administrator control.
> +
> +=back
> +
>  =head2 Paravirtualised (PV) Guest Specific Options
>  
>  The following options apply only to Paravirtual guests.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 15:42:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 15:42:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7TMc-0005XR-BR; Fri, 31 Aug 2012 15:42:30 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andres.lagarcavilla@gmail.com>) id 1T7TMb-0005XK-Be
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 15:42:29 +0000
Received: from [85.158.143.35:61489] by server-3.bemta-4.messagelabs.com id
	CE/6E-08232-46BD0405; Fri, 31 Aug 2012 15:42:28 +0000
X-Env-Sender: andres.lagarcavilla@gmail.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1346427745!13584552!1
X-Originating-IP: [209.85.223.173]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3035 invoked from network); 31 Aug 2012 15:42:26 -0000
Received: from mail-ie0-f173.google.com (HELO mail-ie0-f173.google.com)
	(209.85.223.173)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 15:42:26 -0000
Received: by iebc10 with SMTP id c10so2205226ieb.32
	for <xen-devel@lists.xen.org>; Fri, 31 Aug 2012 08:42:25 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=subject:mime-version:content-type:from:in-reply-to:date:cc
	:content-transfer-encoding:message-id:references:to:x-mailer;
	bh=9kvwoJQlLKveRw526+8UmFAN4orfjPQkN6AMxYv292I=;
	b=H0E1Xu1Av5ltZXA5fgSVqSN4XQuYUG60ZlZ1Vq+f065At8tifSjRSIcYHyhy3KZKNT
	Dz/ZwWqBt/08BAimccZtIh4aFGGL0YtgwfWZKB/IZaw3DzUQ4Hl5v4TP9dBkgR3/ohsN
	cwBkY7qiKt+oZ3rXB2z9Jya+iYJD23vo/TO4/dKLFCx7V1QfxSJNsMcUSYtYDz2UbZR1
	0yE+rFbJfnOjWs4+C9FgM7HNEpjhJp6pEL+oEBg/GgwN6p2K1BalZwag1C82jWFCoUYo
	WTfkdvoIjBeNUwjaFrSuQebLxD3xplB5p7DDVxWwqMeKX7nzmoNImp81YmuLMTGoOT+6
	oxKQ==
Received: by 10.50.40.133 with SMTP id x5mr3341147igk.69.1346427745021;
	Fri, 31 Aug 2012 08:42:25 -0700 (PDT)
Received: from [192.168.7.210] ([206.223.182.18])
	by mx.google.com with ESMTPS id wg9sm873556igb.0.2012.08.31.08.42.22
	(version=TLSv1/SSLv3 cipher=OTHER);
	Fri, 31 Aug 2012 08:42:24 -0700 (PDT)
Mime-Version: 1.0 (Apple Message framework v1278)
From: Andres Lagar-Cavilla <andres.lagarcavilla@gmail.com>
In-Reply-To: <160CC375-2682-4CBF-B1EC-06A9F3E49A40@gridcentric.ca>
Date: Fri, 31 Aug 2012 11:42:32 -0400
Message-Id: <A976B10C-58BE-4660-89E9-A8F85CAB5F19@gmail.com>
References: <1346086287-17674-1-git-send-email-andres@lagarcavilla.org>
	<5040CAEA.7000600@citrix.com>
	<160CC375-2682-4CBF-B1EC-06A9F3E49A40@gridcentric.ca>
To: Andres Lagar-Cavilla <andreslc@gridcentric.ca>
X-Mailer: Apple Mail (2.1278)
Cc: xen-devel@lists.xen.org, David Vrabel <david.vrabel@citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, andres@lagarcavilla.org
Subject: Re: [Xen-devel] [PATCH] Xen backend support for paged out grant
	targets.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Actually acted upon your feedback ipso facto:

commit d5fab912caa1f0cf6be0a6773f502d3417a207b6
Author: Andres Lagar-Cavilla <andres@lagarcavilla.org>
Date:   Sun Aug 26 09:45:57 2012 -0400

    Xen backend support for paged out grant targets.
    
    Since Xen-4.2, hvm domains may have portions of their memory paged out. When a
    foreign domain (such as dom0) attempts to map these frames, the map will
    initially fail. The hypervisor returns a suitable errno, and kicks an
    asynchronous page-in operation carried out by a helper. The foreign domain is
    expected to retry the mapping operation until it eventually succeeds. The
    foreign domain is not put to sleep because it could itself be the one running
    the pager assist (a typical scenario for dom0).
    
    This patch adds support for this mechanism for backend drivers using grant
    mapping and copying operations. Specifically, this covers the blkback and
    gntdev drivers (which map foreign grants), and the netback driver (which copies
    foreign grants).
    
    * Add GNTST_eagain, already exposed by Xen, to the grant interface.
    * Add a retry method for grants that fail with GNTST_eagain (i.e. because the
      target foreign frame is paged out).
    * Insert hooks with appropriate macro decorators in the aforementioned drivers.
    
    The retry loop is only invoked if the grant operation status is GNTST_eagain.
    It is guaranteed to leave a final status code different from GNTST_eagain. Any
    other status code results in execution identical to before.
    
    The retry loop performs up to 255 attempts with increasing time intervals over
    a roughly 32 second period. It uses msleep to yield while waiting for the next
    retry.
    
    V2 after feedback from David Vrabel:
    * Explicit MAX_DELAY instead of wrap-around delay into zero
    * Abstract GNTST_eagain check into core grant table code for netback module.
    
    Signed-off-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>

diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index 682633b..5610fd8 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -635,9 +635,7 @@ static void xen_netbk_rx_action(struct xen_netbk *netbk)
 		return;
 
 	BUG_ON(npo.copy_prod > ARRAY_SIZE(netbk->grant_copy_op));
-	ret = HYPERVISOR_grant_table_op(GNTTABOP_copy, &netbk->grant_copy_op,
-					npo.copy_prod);
-	BUG_ON(ret != 0);
+	gnttab_batch_copy_no_eagain(netbk->grant_copy_op, npo.copy_prod);
 
 	while ((skb = __skb_dequeue(&rxq)) != NULL) {
 		sco = (struct skb_cb_overlay *)skb->cb;
@@ -1460,18 +1458,15 @@ static void xen_netbk_tx_submit(struct xen_netbk *netbk)
 static void xen_netbk_tx_action(struct xen_netbk *netbk)
 {
 	unsigned nr_gops;
-	int ret;
 
 	nr_gops = xen_netbk_tx_build_gops(netbk);
 
 	if (nr_gops == 0)
 		return;
-	ret = HYPERVISOR_grant_table_op(GNTTABOP_copy,
-					netbk->tx_copy_ops, nr_gops);
-	BUG_ON(ret);
 
-	xen_netbk_tx_submit(netbk);
+	gnttab_batch_copy_no_eagain(netbk->tx_copy_ops, nr_gops);
 
+	xen_netbk_tx_submit(netbk);
 }
 
 static void xen_netbk_idx_release(struct xen_netbk *netbk, u16 pending_idx)
diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index eea81cf..96543b2 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -38,6 +38,7 @@
 #include <linux/vmalloc.h>
 #include <linux/uaccess.h>
 #include <linux/io.h>
+#include <linux/delay.h>
 #include <linux/hardirq.h>
 
 #include <xen/xen.h>
@@ -823,6 +824,26 @@ unsigned int gnttab_max_grant_frames(void)
 }
 EXPORT_SYMBOL_GPL(gnttab_max_grant_frames);
 
+#define MAX_DELAY 256
+void
+gnttab_retry_eagain_gop(unsigned int cmd, void *gop, int16_t *status,
+						const char *func)
+{
+	unsigned delay = 1;
+
+	do {
+		BUG_ON(HYPERVISOR_grant_table_op(cmd, gop, 1));
+		if (*status == GNTST_eagain)
+			msleep(delay++);
+	} while ((*status == GNTST_eagain) && (delay < MAX_DELAY));
+
+	if (delay >= MAX_DELAY) {
+		printk(KERN_ERR "%s: %s eagain grant\n", func, current->comm);
+		*status = GNTST_bad_page;
+	}
+}
+EXPORT_SYMBOL_GPL(gnttab_retry_eagain_gop);
+
 int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
 		    struct gnttab_map_grant_ref *kmap_ops,
 		    struct page **pages, unsigned int count)
@@ -836,6 +857,11 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
 	if (ret)
 		return ret;
 
+	/* Retry eagain maps */
+	for (i = 0; i < count; i++)
+		if (map_ops[i].status == GNTST_eagain)
+			gnttab_retry_eagain_map(map_ops + i);
+
 	if (xen_feature(XENFEAT_auto_translated_physmap))
 		return ret;
 
diff --git a/drivers/xen/xenbus/xenbus_client.c b/drivers/xen/xenbus/xenbus_client.c
index b3e146e..749f6a3 100644
--- a/drivers/xen/xenbus/xenbus_client.c
+++ b/drivers/xen/xenbus/xenbus_client.c
@@ -490,8 +490,7 @@ static int xenbus_map_ring_valloc_pv(struct xenbus_device *dev,
 
 	op.host_addr = arbitrary_virt_to_machine(pte).maddr;
 
-	if (HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, &op, 1))
-		BUG();
+	gnttab_map_grant_no_eagain(&op);
 
 	if (op.status != GNTST_okay) {
 		free_vm_area(area);
@@ -572,8 +571,7 @@ int xenbus_map_ring(struct xenbus_device *dev, int gnt_ref,
 	gnttab_set_map_op(&op, (unsigned long)vaddr, GNTMAP_host_map, gnt_ref,
 			  dev->otherend_id);
 
-	if (HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, &op, 1))
-		BUG();
+	gnttab_map_grant_no_eagain(&op);
 
 	if (op.status != GNTST_okay) {
 		xenbus_dev_fatal(dev, op.status,
diff --git a/include/xen/grant_table.h b/include/xen/grant_table.h
index 11e27c3..2fecfab 100644
--- a/include/xen/grant_table.h
+++ b/include/xen/grant_table.h
@@ -43,6 +43,7 @@
 #include <xen/interface/grant_table.h>
 
 #include <asm/xen/hypervisor.h>
+#include <asm/xen/hypercall.h>
 
 #include <xen/features.h>
 
@@ -183,6 +184,43 @@ unsigned int gnttab_max_grant_frames(void);
 
 #define gnttab_map_vaddr(map) ((void *)(map.host_virt_addr))
 
+/* Retry a grant map/copy operation when the hypervisor returns GNTST_eagain.
+ * This is typically due to paged out target frames.
+ * Generic entry-point, use macro decorators below for specific grant
+ * operations.
+ * Will retry with delays of 1, 2, ..., 255 ms, i.e. up to 255 times over ~32s.
+ * Return value in *status guaranteed to no longer be GNTST_eagain. */
+void gnttab_retry_eagain_gop(unsigned int cmd, void *gop, int16_t *status,
+                             const char *func);
+
+#define gnttab_retry_eagain_map(_gop)                       \
+    gnttab_retry_eagain_gop(GNTTABOP_map_grant_ref, (_gop), \
+                            &(_gop)->status, __func__)
+
+#define gnttab_retry_eagain_copy(_gop)                  \
+    gnttab_retry_eagain_gop(GNTTABOP_copy, (_gop),      \
+                            &(_gop)->status, __func__)
+
+#define gnttab_map_grant_no_eagain(_gop)                                    \
+do {                                                                        \
+    if (HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, (_gop), 1))       \
+        BUG();                                                              \
+    if ((_gop)->status == GNTST_eagain)                                     \
+        gnttab_retry_eagain_map((_gop));                                    \
+} while(0)
+
+static inline void
+gnttab_batch_copy_no_eagain(struct gnttab_copy *batch, unsigned count)
+{
+    unsigned i;
+    struct gnttab_copy *op;
+
+    BUG_ON(HYPERVISOR_grant_table_op(GNTTABOP_copy, batch, count));
+    for (i = 0, op = batch; i < count; i++, op++)
+        if (op->status == GNTST_eagain)
+            gnttab_retry_eagain_copy(op);
+}
+
 int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
 		    struct gnttab_map_grant_ref *kmap_ops,
 		    struct page **pages, unsigned int count);
diff --git a/include/xen/interface/grant_table.h b/include/xen/interface/grant_table.h
index 7da811b..66cb734 100644
--- a/include/xen/interface/grant_table.h
+++ b/include/xen/interface/grant_table.h
@@ -520,6 +520,7 @@ DEFINE_GUEST_HANDLE_STRUCT(gnttab_get_version);
 #define GNTST_permission_denied (-8) /* Not enough privilege for operation.  */
 #define GNTST_bad_page         (-9) /* Specified page was invalid for op.    */
 #define GNTST_bad_copy_arg    (-10) /* copy arguments cross page boundary */
+#define GNTST_eagain          (-12) /* Retry.                                */
 
 #define GNTTABOP_error_msgs {                   \
     "okay",                                     \
@@ -533,6 +534,7 @@ DEFINE_GUEST_HANDLE_STRUCT(gnttab_get_version);
     "permission denied",                        \
     "bad page",                                 \
-    "copy arguments cross page boundary"        \
+    "copy arguments cross page boundary",       \
+    "retry"                                     \
 }
 
 #endif /* __XEN_PUBLIC_GRANT_TABLE_H__ */


On Aug 31, 2012, at 10:45 AM, Andres Lagar-Cavilla wrote:

> 
> On Aug 31, 2012, at 10:32 AM, David Vrabel wrote:
> 
>> On 27/08/12 17:51, andres@lagarcavilla.org wrote:
>>> From: Andres Lagar-Cavilla <andres@lagarcavilla.org>
>>> 
>>> Since Xen-4.2, hvm domains may have portions of their memory paged out. When a
>>> foreign domain (such as dom0) attempts to map these frames, the map will
>>> initially fail. The hypervisor returns a suitable errno, and kicks an
>>> asynchronous page-in operation carried out by a helper. The foreign domain is
>>> expected to retry the mapping operation until it eventually succeeds. The
>>> foreign domain is not put to sleep because itself could be the one running the
>>> pager assist (typical scenario for dom0).
>>> 
>>> This patch adds support for this mechanism for backend drivers using grant
>>> mapping and copying operations. Specifically, this covers the blkback and
>>> gntdev drivers (which map foreign grants), and the netback driver (which copies
>>> foreign grants).
>>> 
>>> * Add GNTST_eagain, already exposed by Xen, to the grant interface.
>>> * Add a retry method for grants that fail with GNTST_eagain (i.e. because the
>>> target foreign frame is paged out).
>>> * Insert hooks with appropriate macro decorators in the aforementioned drivers.
>> 
>> I think you should implement wrappers around HYPERVISOR_grant_table_op()
>> and have the wrapper do the retries, instead of every backend having to
>> check for EAGAIN and issue the retries itself. Similar to the
>> gnttab_map_grant_no_eagain() function you've already added.
>> 
>> Why do some operations not retry anyway?
> 
> All operations retry. The reason I could not make it as elegant as you suggest is that grant operations are submitted in batches, and their statuses are checked individually later, elsewhere. This is the case for netback. Note that both blkback and gntdev use a more linear structure with the gnttab_map_refs helper, which allows me to hide all the retry gore from those drivers inside the grant table code. Likewise for xenbus ring mapping.
> 
> In summary, outside of core grant table code, only the netback driver needs to check explicitly for retries, due to its batch-copy-delayed-per-slot-check structure.
> 
>> 
>>> +void
>>> +gnttab_retry_eagain_gop(unsigned int cmd, void *gop, int16_t *status,
>>> +						const char *func)
>>> +{
>>> +	u8 delay = 1;
>>> +
>>> +	do {
>>> +		BUG_ON(HYPERVISOR_grant_table_op(cmd, gop, 1));
>>> +		if (*status == GNTST_eagain)
>>> +			msleep(delay++);
>>> +	} while ((*status == GNTST_eagain) && delay);
>> 
>> Terminating the loop when delay wraps is a bit subtle.  Why not make
>> delay unsigned and check delay <= MAX_DELAY?
> Good idea (MAX_DELAY == 256). I'd like to get Konrad's feedback before a re-spin.
> 
>> 
>> Would it be sensible to ramp the delay faster?  Perhaps double each
>> iteration with a maximum possible delay of e.g., 256 ms.
> Generally speaking, we've never seen more than three retries. I am open to changing the algorithm, but there is a significant possibility it won't matter at all.
> 
>> 
>>> +#define gnttab_map_grant_no_eagain(_gop)                                    \
>>> +do {                                                                        \
>>> +    if ( HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, (_gop), 1))      \
>>> +        BUG();                                                              \
>>> +    if ((_gop)->status == GNTST_eagain)                                     \
>>> +        gnttab_retry_eagain_map((_gop));                                    \
>>> +} while(0)
>> 
>> Inline functions, please.
> 
> I want to retain the original context for debugging. Eventually we print __func__ if things go wrong.
> 
> Thanks, great feedback
> Andres
> 
>> 
>> David
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 15:43:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 15:43:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7TNS-0005dN-UA; Fri, 31 Aug 2012 15:43:22 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T7TNR-0005d5-0Z
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 15:43:21 +0000
Received: from [85.158.143.35:5633] by server-2.bemta-4.messagelabs.com id
	0A/C4-21239-89BD0405; Fri, 31 Aug 2012 15:43:20 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1346427796!5176882!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM2OTU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30298 invoked from network); 31 Aug 2012 15:43:16 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-21.messagelabs.com with SMTP;
	31 Aug 2012 15:43:16 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 31 Aug 2012 16:43:15 +0100
Message-Id: <5040F7B00200007800097E94@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 31 Aug 2012 16:43:12 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>
References: <1346148361.9975.3.camel@zakaz.uk.xensource.com>
	<1346425932.27277.230.camel@zakaz.uk.xensource.com>
In-Reply-To: <1346425932.27277.230.camel@zakaz.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "george.dunlap@eu.citrix.com" <george.dunlap@eu.citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
From xen-devel-bounces@lists.xen.org Fri Aug 31 15:43:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 15:43:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7TNS-0005dN-UA; Fri, 31 Aug 2012 15:43:22 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T7TNR-0005d5-0Z
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 15:43:21 +0000
Received: from [85.158.143.35:5633] by server-2.bemta-4.messagelabs.com id
	0A/C4-21239-89BD0405; Fri, 31 Aug 2012 15:43:20 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1346427796!5176882!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM2OTU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30298 invoked from network); 31 Aug 2012 15:43:16 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-21.messagelabs.com with SMTP;
	31 Aug 2012 15:43:16 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 31 Aug 2012 16:43:15 +0100
Message-Id: <5040F7B00200007800097E94@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 31 Aug 2012 16:43:12 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>
References: <1346148361.9975.3.camel@zakaz.uk.xensource.com>
	<1346425932.27277.230.camel@zakaz.uk.xensource.com>
In-Reply-To: <1346425932.27277.230.camel@zakaz.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "george.dunlap@eu.citrix.com" <george.dunlap@eu.citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Xen-users] Xen 4.2 TODO / Release Plan
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 31.08.12 at 17:12, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Tue, 2012-08-28 at 11:06 +0100, Ian Campbell wrote:
> 
>> hypervisor, nice to have:
>> 
>>     * [BUG(?)] Under certain conditions, the p2m_pod_sweep code will
>>       stop halfway through searching, causing a guest to crash even if
>>       there was zeroed memory available.  This is NOT a regression
>>       from 4.1, and is a very rare case, so probably shouldn't be a
>>       blocker.  (In fact, I'd be open to the idea that it should wait
>>       until after the release to get more testing.)
>>       	    (George Dunlap)
> 
>>     * fix high change rate to CMOS RTC periodic interrupt causing
>>       guest wall clock time to lag (possible fix outlined, needs to be
>>       put in patch form and thoroughly reviewed/tested for unwanted
>>       side effects, Jan Beulich)
> 
> I have a feeling one or both of these might be fixed already, is that
> true?

No, neither is afaict. I broke up the patch for the second and will
submit just the core bits (hopefully still today, else early on
Monday) to address the reported problem, leaving the rest of the
fixes for post-4.2.

>>     * S3 regression(s?) reported by Ben Guthro (Ben & Jan Beulich)
> 
> AIUI this one isn't though.

Correct.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 15:51:02 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 15:51:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7TUf-0005vf-Rf; Fri, 31 Aug 2012 15:50:49 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1T7TUf-0005vZ-70
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 15:50:49 +0000
Received: from [85.158.139.83:3798] by server-3.bemta-5.messagelabs.com id
	B5/FC-21836-85DD0405; Fri, 31 Aug 2012 15:50:48 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-10.tower-182.messagelabs.com!1346428246!28228844!1
X-Originating-IP: [209.85.160.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11222 invoked from network); 31 Aug 2012 15:50:47 -0000
Received: from mail-pb0-f45.google.com (HELO mail-pb0-f45.google.com)
	(209.85.160.45)
	by server-10.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 15:50:47 -0000
Received: by pbbjt11 with SMTP id jt11so4925427pbb.32
	for <xen-devel@lists.xen.org>; Fri, 31 Aug 2012 08:50:45 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:cc:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=e+rvnuyXqlmmGHIT9dzgEbqNDK8G8N5gRYAlKuPuTQo=;
	b=U/T9ImMj/OXF6bKzJTPW1p91Hbeq9ZubiqWOZoy1CVpVE8UUQbRmwAzf7BWcOg4Sv1
	VV/O3OzQfwiUjOblQ2V7rAxZ/HGGZiyY1SisFapRbSR61HDAAJSRORljfc6ulvysPv8r
	IoZw8XQrIxgc6RqyDt4saB5O5E/8QgjFtCAA4kKtAfVCKgPd7RkNSzdz0erCdvI+os8o
	Sy2LJvLaGqGQiLCVQeqqcHXmDGJoXiEFw2Rvk4GdQnyIupk7DQteu0VolS1uyhADljc+
	F+pXeCC6JXxxG7MgezoWVBBKmFdLXoEK7Ynuv9jHShl9D38fam4FCb4BqwX1hpUiQzOE
	FOtg==
Received: by 10.66.78.195 with SMTP id d3mr15947202pax.17.1346428245394;
	Fri, 31 Aug 2012 08:50:45 -0700 (PDT)
Received: from [192.168.0.197] (S0106001b116a048a.vc.shawcable.net.
	[24.86.10.22])
	by mx.google.com with ESMTPS id vd4sm3673966pbc.41.2012.08.31.08.50.41
	(version=SSLv3 cipher=OTHER); Fri, 31 Aug 2012 08:50:44 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.32.0.111121
Date: Fri, 31 Aug 2012 16:50:35 +0100
From: Keir Fraser <keir.xen@gmail.com>
To: Jan Beulich <JBeulich@suse.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>
Message-ID: <CC669BDB.3D815%keir.xen@gmail.com>
Thread-Topic: [Xen-devel] [PATCH] Re: [RFC PATCH] xen: comment opaque
	expression in __page_to_virt
Thread-Index: Ac2HkFvf15AB4bCnUkO8Xquu0Wz9Zg==
In-Reply-To: <5040EBED0200007800097E2B@nat28.tlf.novell.com>
Mime-version: 1.0
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Re: [RFC PATCH] xen: comment opaque
 expression in __page_to_virt
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 31/08/2012 15:53, "Jan Beulich" <JBeulich@suse.com> wrote:

>>>> On 31.08.12 at 16:36, Ian Jackson <Ian.Jackson@eu.citrix.com> wrote:
>> Jan Beulich writes ("Re: [RFC PATCH] xen: comment opaque expression in
>> __page_to_virt"):
>>> No, that's not precise. There's really not much of a win to be had
>>> on 32-bit (division by 3 and division by 24 (sizeof(struct page_info))
>>> should be the same in speed).
>>> 
>>> The win is on x86-64, where sizeof(struct page_info) is a power
>>> of 2, and hence the pair of shifts (right, then left) can be reduced
>>> to a single one.
>>> 
>>> Yet (for obvious reasons) the code ought to not break anything
>>> if even on x86-64 the size of the structure would change, hence
>>> it needs to be that complex (and can't be broken into separate,
>>> simpler implementations for 32- and 64-bits).
>> 
>> Thanks.  Do you want to post a revised version of my patch or shall I
>> do so ?  (If so please confirm that I should put your s-o-b on it for
>> your wording above.)
> 
> x86: comment opaque expression in __page_to_virt()
> 
> mm.h's __page_to_virt() has a rather opaque expression. Comment it.
> 
> Reported-By: Ian Campbell <ian.campbell@citrix.com>
> Suggested-by: Ian Jackson <ian.jackson@eu.citrix.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Keir Fraser <keir@xen.org>

> --- 2012-08-08.orig/xen/include/asm-x86/mm.h 2012-06-20 17:34:02.000000000
> +0200
> +++ 2012-08-08/xen/include/asm-x86/mm.h 2012-08-31 16:50:50.000000000 +0200
> @@ -323,6 +323,12 @@ static inline struct page_info *__virt_t
>  static inline void *__page_to_virt(const struct page_info *pg)
>  {
>      ASSERT((unsigned long)pg - FRAMETABLE_VIRT_START < FRAMETABLE_VIRT_END);
> +    /*
> +     * (sizeof(*pg) & -sizeof(*pg)) selects the LS bit of sizeof(*pg). The
> +     * division and re-multiplication avoids one shift when sizeof(*pg) is a
> +     * power of two (otherwise there would be a right shift followed by a
> +     * left shift, which the compiler can't know it can fold into one).
> +     */
>      return (void *)(DIRECTMAP_VIRT_START +
>                      ((unsigned long)pg - FRAMETABLE_VIRT_START) /
>                      (sizeof(*pg) / (sizeof(*pg) & -sizeof(*pg))) *
> 
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 15:51:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 15:51:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7TVV-0005yj-8q; Fri, 31 Aug 2012 15:51:41 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T7TVU-0005yJ-GT
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 15:51:40 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1346428294!8557067!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEyMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31224 invoked from network); 31 Aug 2012 15:51:34 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 15:51:34 -0000
X-IronPort-AV: E=Sophos;i="4.80,347,1344211200"; d="scan'208";a="14293650"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 15:51:34 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1;
	Fri, 31 Aug 2012 16:51:34 +0100
Message-ID: <1346428292.27277.243.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Fri, 31 Aug 2012 16:51:32 +0100
In-Reply-To: <5040F7140200007800097E91@nat28.tlf.novell.com>
References: <1344935141.5926.6.camel@zakaz.uk.xensource.com>
	<20120814100704.GA19704@bloms.de>
	<1346421542.27277.218.camel@zakaz.uk.xensource.com>
	<1346425278.27277.224.camel@zakaz.uk.xensource.com>
	<5040F7140200007800097E91@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Dieter Bloms <xensource.com@bloms.de>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Xen-users] Xen 4.2 TODO (io and irq parameter are
 not evaluated by xl)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-08-31 at 16:40 +0100, Jan Beulich wrote:
> >>> On 31.08.12 at 17:01, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > --- a/docs/man/xl.cfg.pod.5	Fri Aug 31 12:03:55 2012 +0100
> > +++ b/docs/man/xl.cfg.pod.5	Fri Aug 31 15:54:42 2012 +0100
> > @@ -402,6 +402,30 @@ for more information on the "permissive"
> >  
> >  =back
> >  
> > +=item B<ioports=[ "IOPORT_RANGE", "IOPORT_RANGE", ... ]>
> 
> Is this really with quotes, and requiring an array?

I was mostly just following
http://cmrg.fifthhorseman.net/wiki/xen#grantingaccesstoserialhardwaretoadomU
which suggested that this is the xm syntax too.

> > +
> > +=over 4
> > +
> > +Allow a guest to access specific legacy I/O ports. Each B<IOPORT_RANGE>
> > +is given in hexadecimal and may be either a span e.g. C<2f8-2ff>
> > +(inclusive) or a single I/O port C<2f8>.
> > +
> > +It is recommended to use this option only for trusted VMs under
> > +administrator control.
> > +
> > +=back
> > +
> > +=item B<irqs=[ NUMBER, NUMBER, ... ]>
> 
> Similarly here - is this really requiring an array? I ask because
> I had to look at this just last week for a colleague, and what
> we got out of inspection of examples/code was that a simple
> number (and a simple range without quotes above) are
> permitted too.

I had a look in create.py and opts.py and didn't see that, I suppose I
missed it.

I could implement support for either an array or a simple string/number
but it would complicate the code quite a bit. Is it really worth it?

Ian.


> 
> Jan
> 
> > +
> > +=over 4
> > +
> > +Allow a guest to access specific physical IRQs.
> > +
> > +It is recommended to use this option only for trusted VMs under
> > +administrator control.
> > +
> > +=back
> > +
> >  =head2 Paravirtualised (PV) Guest Specific Options
> >  
> >  The following options apply only to Paravirtual guests.
> 
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
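[Editorial note: a minimal sketch of the syntax under discussion, assuming the quoted man-page hunk is accurate — whether the quotes and array form are strictly required is exactly the open question in this thread, and the values below are purely illustrative.]

```
# xl domain config fragment (hypothetical guest)
ioports = [ "2f8-2ff", "3f8" ]   # hexadecimal; a span (inclusive) or a single port
irqs    = [ 3, 4 ]               # physical IRQ numbers
```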

From xen-devel-bounces@lists.xen.org Fri Aug 31 15:53:27 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 15:53:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7TX4-00068D-Oy; Fri, 31 Aug 2012 15:53:18 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T7TX3-000681-BY
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 15:53:17 +0000
Received: from [85.158.143.35:3094] by server-1.bemta-4.messagelabs.com id
	DF/62-12504-CEDD0405; Fri, 31 Aug 2012 15:53:16 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1346428394!11666717!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEyMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31074 invoked from network); 31 Aug 2012 15:53:14 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 15:53:14 -0000
X-IronPort-AV: E=Sophos;i="4.80,347,1344211200"; d="scan'208";a="14293692"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 15:52:41 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1;
	Fri, 31 Aug 2012 16:52:41 +0100
Message-ID: <1346428359.27277.244.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Fri, 31 Aug 2012 16:52:39 +0100
In-Reply-To: <5040F7B00200007800097E94@nat28.tlf.novell.com>
References: <1346148361.9975.3.camel@zakaz.uk.xensource.com>
	<1346425932.27277.230.camel@zakaz.uk.xensource.com>
	<5040F7B00200007800097E94@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Xen-users] Xen 4.2 TODO / Release Plan
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-08-31 at 16:43 +0100, Jan Beulich wrote:
> >>> On 31.08.12 at 17:12, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > On Tue, 2012-08-28 at 11:06 +0100, Ian Campbell wrote:
> > 
> >> hypervisor, nice to have:
> >> 
> >>     * [BUG(?)] Under certain conditions, the p2m_pod_sweep code will
> >>       stop halfway through searching, causing a guest to crash even if
> >>       there was zeroed memory available.  This is NOT a regression
> >>       from 4.1, and is a very rare case, so probably shouldn't be a
> >>       blocker.  (In fact, I'd be open to the idea that it should wait
> >>       until after the release to get more testing.)
> >>       	    (George Dunlap)
> > 
> >>     * fix high change rate to CMOS RTC periodic interrupt causing
> >>       guest wall clock time to lag (possible fix outlined, needs to be
> >>       put in patch form and thoroughly reviewed/tested for unwanted
> >>       side effects, Jan Beulich)
> > 
> > I have a feeling one or both of these might be fixed already, is that
> > true?
> 
> No, neither is afaict.

OK, thanks for confirming.

>  I broke up the patch for the second and will
> submit just the core bits (hopefully still today, else early on
> Monday) to address the reported problem, leaving the rest of the
> fixes for post-4.2.

OK, I'll mark it as DONE on Monday with a note about deferring some of
it to 4.3.

Ian

> 
> >>     * S3 regression(s?) reported by Ben Guthro (Ben & Jan Beulich)
> > 
> > AIUI this one isn't though.
> 
> Correct.
> 
> Jan
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-08-31 at 16:43 +0100, Jan Beulich wrote:
> >>> On 31.08.12 at 17:12, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > On Tue, 2012-08-28 at 11:06 +0100, Ian Campbell wrote:
> > 
> >> hypervisor, nice to have:
> >> 
> >>     * [BUG(?)] Under certain conditions, the p2m_pod_sweep code will
> >>       stop halfway through searching, causing a guest to crash even if
> >>       there was zeroed memory available.  This is NOT a regression
> >>       from 4.1, and is a very rare case, so probably shouldn't be a
> >>       blocker.  (In fact, I'd be open to the idea that it should wait
> >>       until after the release to get more testing.)
> >>       	    (George Dunlap)
> > 
> >>     * fix high change rate to CMOS RTC periodic interrupt causing
> >>       guest wall clock time to lag (possible fix outlined, needs to be
> >>       put in patch form and thoroughly reviewed/tested for unwanted
> >>       side effects, Jan Beulich)
> > 
> > I have a feeling one or both of these might be fixed already, is that
> > true?
> 
> No, neither is afaict.

OK, thanks for confirming.

>  I broke up the patch for the second and will
> submit just the core bits (hopefully still today, else early on
> Monday) to address the reported problem, leaving the rest of the
> fixes for post-4.2.

OK, I'll mark it as DONE on Monday with a note about deferring some of
it to 4.3.

Ian

> 
> >>     * S3 regression(s?) reported by Ben Guthro (Ben & Jan Beulich)
> > 
> > AIUI this one isn't though.
> 
> Correct.
> 
> Jan
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 15:56:01 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 15:56:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7TZS-0006Jd-Al; Fri, 31 Aug 2012 15:55:46 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T7TZP-0006JP-Vc
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 15:55:44 +0000
Received: from [85.158.143.99:11652] by server-3.bemta-4.messagelabs.com id
	AC/FE-08232-F7ED0405; Fri, 31 Aug 2012 15:55:43 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-216.messagelabs.com!1346428541!18255815!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzU0NjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9922 invoked from network); 31 Aug 2012 15:55:42 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 15:55:42 -0000
X-IronPort-AV: E=Sophos;i="4.80,347,1344211200"; d="scan'208";a="206797996"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 15:55:40 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.279.1;
	Fri, 31 Aug 2012 11:55:40 -0400
Received: from cosworth.uk.xensource.com ([10.80.16.52] ident=ianc)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1T7TZM-0006nb-83;
	Fri, 31 Aug 2012 16:55:40 +0100
MIME-Version: 1.0
X-Mercurial-Node: ddde6c2c45de8e60518aafa077f0f3867ff68e17
Message-ID: <ddde6c2c45de8e60518a.1346428539@cosworth.uk.xensource.com>
User-Agent: Mercurial-patchbomb/1.6.4
Date: Fri, 31 Aug 2012 16:55:39 +0100
From: Ian Campbell <ian.campbell@citrix.com>
To: xen-devel@lists.xen.org
Cc: Dieter Bloms <xensource.com@bloms.de>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: [Xen-devel] [PATCH V2] libxl/xl: implement support for guest ioport
 and irq permissions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1346428441 -3600
# Node ID ddde6c2c45de8e60518aafa077f0f3867ff68e17
# Parent  ccbee5bcb31b72706497725381f4e6836b9df657
libxl/xl: implement support for guest ioport and irq permissions.

This is useful for passing legacy ISA devices (e.g. com ports,
parallel ports) to guests.

Supported syntax is as described in
http://cmrg.fifthhorseman.net/wiki/xen#grantingaccesstoserialhardwaretoadomU

I tested this using Xen's 'q' key handler which prints out the I/O
port and IRQ ranges allowed for each domain. e.g.:

(XEN) Rangesets belonging to domain 31:
(XEN)     I/O Ports  { 2e8-2ef, 2f8-2ff }
(XEN)     Interrupts { 3, 5-6 }
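
For illustration (an editor's sketch, not part of the submitted patch), the
guest config entries that produce the rangesets above would look roughly
like this, following the syntax documented in the xl.cfg.pod.5 hunk below:

```
ioports = [ "2e8-2ef", "2f8-2ff" ]
irqs = [ 3, 5, 6 ]
```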

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
Handle additional error conditions:
- (*ep!=0 && *ep!='-')
- *ep2!=0
- calloc failure
- values which are too big for a uint32_t
s/parsion/parsing/

diff -r ccbee5bcb31b -r ddde6c2c45de docs/man/xl.cfg.pod.5
--- a/docs/man/xl.cfg.pod.5	Fri Aug 31 12:03:55 2012 +0100
+++ b/docs/man/xl.cfg.pod.5	Fri Aug 31 16:54:01 2012 +0100
@@ -402,6 +402,30 @@ for more information on the "permissive"
 
 =back
 
+=item B<ioports=[ "IOPORT_RANGE", "IOPORT_RANGE", ... ]>
+
+=over 4
+
+Allow a guest to access specific legacy I/O ports. Each B<IOPORT_RANGE>
+is given in hexadecimal and may either be a span, e.g. C<2f8-2ff>
+(inclusive), or a single I/O port, e.g. C<2f8>.
+
+It is recommended to use this option only for trusted VMs under
+administrator control.
+
+=back
+
+=item B<irqs=[ NUMBER, NUMBER, ... ]>
+
+=over 4
+
+Allow a guest to access specific physical IRQs.
+
+It is recommended to use this option only for trusted VMs under
+administrator control.
+
+=back
+
 =head2 Paravirtualised (PV) Guest Specific Options
 
 The following options apply only to Paravirtual guests.
diff -r ccbee5bcb31b -r ddde6c2c45de tools/libxl/libxl_create.c
--- a/tools/libxl/libxl_create.c	Fri Aug 31 12:03:55 2012 +0100
+++ b/tools/libxl/libxl_create.c	Fri Aug 31 16:54:01 2012 +0100
@@ -933,6 +933,36 @@ static void domcreate_launch_dm(libxl__e
         LOG(ERROR, "unable to add disk devices");
         goto error_out;
     }
+
+    for (i = 0; i < d_config->b_info.num_ioports; i++) {
+        libxl_ioport_range *io = &d_config->b_info.ioports[i];
+
+        LOG(DEBUG, "dom%d ioports %"PRIx32"-%"PRIx32,
+            domid, io->first, io->first + io->number - 1);
+
+        ret = xc_domain_ioport_permission(CTX->xch, domid,
+                                          io->first, io->number, 1);
+        if (ret < 0) {
+            LOGE(ERROR,
+                 "failed to give dom%d access to ioports %"PRIx32"-%"PRIx32,
+                 domid, io->first, io->first + io->number - 1);
+            ret = ERROR_FAIL;
+        }
+    }
+
+    for (i = 0; i < d_config->b_info.num_irqs; i++) {
+        uint32_t irq = d_config->b_info.irqs[i];
+
+        LOG(DEBUG, "dom%d irq %"PRIu32, domid, irq);
+
+        ret = xc_domain_irq_permission(CTX->xch, domid, irq, 1);
+        if (ret < 0) {
+            LOGE(ERROR,
+                 "failed to give dom%d access to irq %"PRIu32, domid, irq);
+            ret = ERROR_FAIL;
+        }
+    }
+
     for (i = 0; i < d_config->num_nics; i++) {
         /* We have to init the nic here, because we still haven't
          * called libxl_device_nic_add at this point, but qemu needs
diff -r ccbee5bcb31b -r ddde6c2c45de tools/libxl/libxl_types.idl
--- a/tools/libxl/libxl_types.idl	Fri Aug 31 12:03:55 2012 +0100
+++ b/tools/libxl/libxl_types.idl	Fri Aug 31 16:54:01 2012 +0100
@@ -135,6 +135,11 @@ libxl_vga_interface_type = Enumeration("
 # Complex libxl types
 #
 
+libxl_ioport_range = Struct("ioport_range", [
+    ("first", uint32),
+    ("number", uint32),
+    ])
+
 libxl_vga_interface_info = Struct("vga_interface_info", [
     ("kind",    libxl_vga_interface_type),
     ])
@@ -277,6 +282,9 @@ libxl_domain_build_info = Struct("domain
     #  parameters for all type of scheduler
     ("sched_params",     libxl_domain_sched_params),
 
+    ("ioports",          Array(libxl_ioport_range, "num_ioports")),
+    ("irqs",             Array(uint32, "num_irqs")),
+    
     ("u", KeyedUnion(None, libxl_domain_type, "type",
                 [("hvm", Struct(None, [("firmware",         string),
                                        ("bios",             libxl_bios_type),
diff -r ccbee5bcb31b -r ddde6c2c45de tools/libxl/xl_cmdimpl.c
--- a/tools/libxl/xl_cmdimpl.c	Fri Aug 31 12:03:55 2012 +0100
+++ b/tools/libxl/xl_cmdimpl.c	Fri Aug 31 16:54:01 2012 +0100
@@ -573,10 +573,12 @@ static void parse_config_data(const char
     long l;
     XLU_Config *config;
     XLU_ConfigList *cpus, *vbds, *nics, *pcis, *cvfbs, *cpuids;
+    XLU_ConfigList *ioports, *irqs;
+    int num_ioports, num_irqs;
     int pci_power_mgmt = 0;
     int pci_msitranslate = 0;
     int pci_permissive = 0;
-    int e;
+    int i, e;
 
     libxl_domain_create_info *c_info = &d_config->c_info;
     libxl_domain_build_info *b_info = &d_config->b_info;
@@ -919,6 +921,89 @@ static void parse_config_data(const char
         abort();
     }
 
+    if (!xlu_cfg_get_list(config, "ioports", &ioports, &num_ioports, 0)) {
+        b_info->num_ioports = num_ioports;
+        b_info->ioports = calloc(num_ioports, sizeof(*b_info->ioports));
+        if (b_info->ioports == NULL) {
+            fprintf(stderr, "unable to allocate memory for ioports\n");
+            exit(-1);
+        }
+
+        for (i = 0; i < num_ioports; i++) {
+            const char *buf2;
+            char *ep;
+            uint32_t s, e;
+            unsigned long ul;
+
+            buf = xlu_cfg_get_listitem (ioports, i);
+            if (!buf) {
+                fprintf(stderr,
+                        "xl: Unable to get element #%d in ioport list\n", i);
+                exit(1);
+            }
+            ul = strtoul(buf, &ep, 16);
+            if (ep == buf) {
+                fprintf(stderr, "xl: Invalid argument parsing ioport: %s\n",
+                        buf);
+                exit(1);
+            }
+            if (ul > UINT32_MAX) {
+                fprintf(stderr, "xl: ioport %lx too big\n", ul);
+                exit(1);
+            }
+            s = e = ul;
+
+            if (*ep == '-') {
+                buf2 = ep + 1;
+                ul = strtoul(buf2, &ep, 16);
+                if (ep == buf2 || *ep != '\0' || s > ul) {
+                    fprintf(stderr,
+                            "xl: Invalid argument parsing ioport: %s\n", buf);
+                    exit(1);
+                }
+                if (ul > UINT32_MAX) {
+                    fprintf(stderr, "xl: ioport %lx too big\n", ul);
+                    exit(1);
+                }
+                e = ul;
+            } else if ( *ep != '\0' )
+                fprintf(stderr,
+                        "xl: Invalid argument parsing ioport: %s\n", buf);
+            b_info->ioports[i].first = s;
+            b_info->ioports[i].number = e - s + 1;
+        }
+    }
+
+    if (!xlu_cfg_get_list(config, "irqs", &irqs, &num_irqs, 0)) {
+        b_info->num_irqs = num_irqs;
+        b_info->irqs = calloc(num_irqs, sizeof(*b_info->irqs));
+        if (b_info->irqs == NULL) {
+            fprintf(stderr, "unable to allocate memory for irqs\n");
+            exit(-1);
+        }
+        for (i = 0; i < num_irqs; i++) {
+            char *ep;
+            unsigned long ul;
+            buf = xlu_cfg_get_listitem (irqs, i);
+            if (!buf) {
+                fprintf(stderr,
+                        "xl: Unable to get element %d in irq list\n", i);
+                exit(1);
+            }
+            ul = strtoul(buf, &ep, 10);
+            if (ep == buf) {
+                fprintf(stderr,
+                        "xl: Invalid argument parsing irq: %s\n", buf);
+                exit(1);
+            }
+            if (ul > UINT32_MAX) {
+                fprintf(stderr, "xl: irq %lx too big\n", ul);
+                exit(1);
+            }
+            b_info->irqs[i] = ul;
+        }
+    }
+
     if (!xlu_cfg_get_list (config, "disk", &vbds, 0, 0)) {
         d_config->num_disks = 0;
         d_config->disks = NULL;

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 15:57:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 15:57:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7Tb6-0006S0-R1; Fri, 31 Aug 2012 15:57:28 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T7Tb5-0006Rp-9m
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 15:57:27 +0000
Received: from [85.158.143.35:21611] by server-3.bemta-4.messagelabs.com id
	4C/E0-08232-6EED0405; Fri, 31 Aug 2012 15:57:26 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1346428646!16096232!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEyMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11393 invoked from network); 31 Aug 2012 15:57:26 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 15:57:26 -0000
X-IronPort-AV: E=Sophos;i="4.80,347,1344211200"; d="scan'208";a="14293840"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 15:57:26 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Fri, 31 Aug 2012 16:57:26 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1T7Tb3-0004uP-KG; Fri, 31 Aug 2012 15:57:25 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1T7Tb3-0007vm-Gu;
	Fri, 31 Aug 2012 16:57:25 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20544.57061.422450.821411@mariner.uk.xensource.com>
Date: Fri, 31 Aug 2012 16:57:25 +0100
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1346426619.27277.237.camel@zakaz.uk.xensource.com>
References: <1344935141.5926.6.camel@zakaz.uk.xensource.com>
	<20120814100704.GA19704@bloms.de>
	<1346421542.27277.218.camel@zakaz.uk.xensource.com>
	<1346425278.27277.224.camel@zakaz.uk.xensource.com>
	<20544.54424.775548.66532@mariner.uk.xensource.com>
	<1346426619.27277.237.camel@zakaz.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: Dieter Bloms <xensource.com@bloms.de>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Xen-users] Xen 4.2 TODO (io and irq parameter are
 not evaluated by xl)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [Xen-users] Xen 4.2 TODO (io and irq parameter are not evaluated by xl)"):
> On Fri, 2012-08-31 at 16:13 +0100, Ian Jackson wrote:
> > This code fails to properly handle (reject)
> >    - (*ep!=0 && *ep!='-')
> 
> Oops, will fix.
> 
> >    - value > LONG_MAX
> >    - INT_MAX < value <= LONG_MAX
> 
> These all get checked inside the (eventual) hypercall. Or were you
> thinking of something else?

Suppose buf contains "1100000055\0".

If a long is 32-bit, strtoul will return ULONG_MAX (0xffffffffUL)
setting errno to ERANGE.  Converting that to a 32-bit signed int will
do something implementation-defined (C99 6.3.1.3(3)) - in reality,
give -1.  Relying on this being rejected later seems poor practice.

If a long is 64-bit and an int 32-bit, strtoul will return
0x1100000055UL.  Converting that to a 32-bit int will again do
something implementation-defined - in reality, give 0x55.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

MIME-Version: 1.0
Message-ID: <20544.57061.422450.821411@mariner.uk.xensource.com>
Date: Fri, 31 Aug 2012 16:57:25 +0100
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1346426619.27277.237.camel@zakaz.uk.xensource.com>
References: <1344935141.5926.6.camel@zakaz.uk.xensource.com>
	<20120814100704.GA19704@bloms.de>
	<1346421542.27277.218.camel@zakaz.uk.xensource.com>
	<1346425278.27277.224.camel@zakaz.uk.xensource.com>
	<20544.54424.775548.66532@mariner.uk.xensource.com>
	<1346426619.27277.237.camel@zakaz.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: Dieter Bloms <xensource.com@bloms.de>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Xen-users] Xen 4.2 TODO (io and irq parameter are
 not evaluated by xl)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [Xen-users] Xen 4.2 TODO (io and irq parameter are not evaluated by xl)"):
> On Fri, 2012-08-31 at 16:13 +0100, Ian Jackson wrote:
> > This code fails to properly handle (reject)
> >    - (*ep!=0 && *ep!='-')
> 
> Oops, will fix.
> 
> >    - value > LONG_MAX
> >    - INT_MAX < value <= LONG_MAX
> 
> These all get checked inside the (eventual) hypercall. Or were you
> thinking of something else?

Suppose buf contains "1100000055\0".

If a long is 32-bit, strtoul will return ULONG_MAX (0xffffffffUL)
setting errno to ERANGE.  Converting that to a 32-bit signed int will
do something implementation-defined (C99 6.3.1.3(3)) - in reality,
give -1.  Relying on this being rejected later seems poor practice.

If a long is 64-bit and an int 32-bit, strtoul will return
0x1100000055UL.  Converting that to a 32-bit int will again do
something implementation-defined - in reality, give 0x55.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 15:58:54 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 15:58:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7TcH-0006a5-Go; Fri, 31 Aug 2012 15:58:41 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T7TcF-0006Zr-L4
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 15:58:40 +0000
Received: from [85.158.139.83:41315] by server-2.bemta-5.messagelabs.com id
	66/68-11456-E2FD0405; Fri, 31 Aug 2012 15:58:38 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-13.tower-182.messagelabs.com!1346428717!27413019!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM2OTU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20921 invoked from network); 31 Aug 2012 15:58:37 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-13.tower-182.messagelabs.com with SMTP;
	31 Aug 2012 15:58:37 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 31 Aug 2012 16:58:36 +0100
Message-Id: <5040FB490200007800097ED2@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 31 Aug 2012 16:58:33 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "Charles Arnold" <CARNOLD@suse.com>,"Kirk Allan" <KALLAN@suse.com>
References: <1344935141.5926.6.camel@zakaz.uk.xensource.com>
	<20120814100704.GA19704@bloms.de>
	<1346421542.27277.218.camel@zakaz.uk.xensource.com>
	<1346425278.27277.224.camel@zakaz.uk.xensource.com>
	<5040F7140200007800097E91@nat28.tlf.novell.com>
	<1346428292.27277.243.camel@zakaz.uk.xensource.com>
In-Reply-To: <1346428292.27277.243.camel@zakaz.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Dieter Bloms <xensource.com@bloms.de>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Xen-users] Xen 4.2 TODO (io and irq parameter are
 not evaluated by xl)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 31.08.12 at 17:51, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Fri, 2012-08-31 at 16:40 +0100, Jan Beulich wrote:
>> >>> On 31.08.12 at 17:01, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>> > --- a/docs/man/xl.cfg.pod.5	Fri Aug 31 12:03:55 2012 +0100
>> > +++ b/docs/man/xl.cfg.pod.5	Fri Aug 31 15:54:42 2012 +0100
>> > @@ -402,6 +402,30 @@ for more information on the "permissive"
>> >  
>> >  =back
>> >  
>> > +=item B<ioports=[ "IOPORT_RANGE", "IOPORT_RANGE", ... ]>
>> 
>> Is this really with quotes, and requiring an array?
> 
> I was mostly just following
> http://cmrg.fifthhorseman.net/wiki/xen#grantingaccesstoserialhardwaretoadomU 
> which suggested that this is the xm syntax too.
> 
>> > +
>> > +=over 4
>> > +
>> > +Allow the guest to access specific legacy I/O ports. Each B<IOPORT_RANGE>
>> > +is given in hexadecimal and may either be a span, e.g. C<2f8-2ff>
>> > +(inclusive), or a single I/O port, e.g. C<2f8>.
>> > +
>> > +It is recommended to use this option only for trusted VMs under
>> > +administrator control.
>> > +
>> > +=back
>> > +
>> > +=item B<irqs=[ NUMBER, NUMBER, ... ]>
>> 
>> Similarly here - is this really requiring an array? I ask because
>> I had to look at this just last week for a colleague, and what
>> we got out of inspection of examples/code was that a simple
>> number (and a simple range without quotes above) are
>> permitted too.
> 
> I had a look in create.py and opts.py and didn't see that, I suppose I
> missed it.
> 
> I could implement support for either an array or a simple string/number
> but it would complicate the code quite a bit. Is it really worth it?

Charles, Kirk, could you comment here?

Thanks, Jan
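For context, the array form being documented corresponds to an xl domain
config fragment like the following (values illustrative only):

```
# xl.cfg fragment: quoted hex ranges for ioports, plain numbers for irqs
ioports = [ "2f8-2ff", "3f8" ]
irqs    = [ 3, 4 ]
```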


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 16:01:35 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 16:01:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7Tes-0007Bk-2O; Fri, 31 Aug 2012 16:01:22 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1T7Tep-0007BW-SB
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 16:01:20 +0000
Received: from [85.158.139.83:59422] by server-12.bemta-5.messagelabs.com id
	27/C7-18300-FCFD0405; Fri, 31 Aug 2012 16:01:19 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-5.tower-182.messagelabs.com!1346428877!27740187!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDM2OTU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2017 invoked from network); 31 Aug 2012 16:01:17 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-5.tower-182.messagelabs.com with SMTP;
	31 Aug 2012 16:01:17 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 31 Aug 2012 17:01:17 +0100
Message-Id: <5040FBEA0200007800097EE5@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.0 
Date: Fri, 31 Aug 2012 17:01:14 +0100
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part7D4C60DA.0__="
Subject: [Xen-devel] [PATCH,
	v3] x86/HVM: RTC periodic timer emulation adjustments
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part7D4C60DA.0__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

- don't call rtc_timer_update() on REG_A writes when the value didn't
  change (doing the call always was reported to cause wall clock time
  lagging with the JVM running on Windows)
- don't call rtc_timer_update() on REG_B writes when RTC_PIE didn't
  change

Signed-off-by: Jan Beulich <jbeulich@suse.com>

---
v3: Break out just this change from the previously submitted much
    larger patch. The rest of that one is now planned to go in only
    after 4.2.

--- a/xen/arch/x86/hvm/rtc.c
+++ b/xen/arch/x86/hvm/rtc.c
@@ -365,6 +365,7 @@ static int rtc_ioport_write(void *opaque
 {
     RTCState *s =3D opaque;
     struct domain *d =3D vrtc_domain(s);
+    uint32_t orig;
=20
     spin_lock(&s->lock);
=20
@@ -382,6 +383,7 @@ static int rtc_ioport_write(void *opaque
         return 0;
     }
=20
+    orig =3D s->hw.cmos_data[s->hw.cmos_index];
     switch ( s->hw.cmos_index )
     {
     case RTC_SECONDS_ALARM:
@@ -405,9 +407,9 @@ static int rtc_ioport_write(void *opaque
         break;
     case RTC_REG_A:
         /* UIP bit is read only */
-        s->hw.cmos_data[RTC_REG_A] =3D (data & ~RTC_UIP) |
-            (s->hw.cmos_data[RTC_REG_A] & RTC_UIP);
-        rtc_timer_update(s);
+        s->hw.cmos_data[RTC_REG_A] =3D (data & ~RTC_UIP) | (orig & =
RTC_UIP);
+        if ( (data ^ orig) & (RTC_RATE_SELECT | RTC_DIV_CTL) )
+            rtc_timer_update(s);
         break;
     case RTC_REG_B:
         if ( data & RTC_SET )
@@ -436,7 +438,8 @@ static int rtc_ioport_write(void *opaque
                 hvm_isa_irq_assert(d, RTC_IRQ);
             }
         s->hw.cmos_data[RTC_REG_B] =3D data;
-        rtc_timer_update(s);
+        if ( (data ^ orig) & RTC_PIE )
+            rtc_timer_update(s);
         check_update_timer(s);
         alarm_timer_update(s);
         break;




--=__Part7D4C60DA.0__=
Content-Type: text/plain; name="x86-hvm-rtc-periodic.patch"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment; filename="x86-hvm-rtc-periodic.patch"

x86/HVM: RTC periodic timer emulation adjustments=0A=0A- don't call =
rtc_timer_update() on REG_A writes when the value didn't=0A  change (doing =
the call always was reported to cause wall clock time=0A  lagging with the =
JVM running on Windows)=0A- don't call rtc_timer_update() on REG_B writes =
when RTC_PIE didn't=0A  change=0A=0ASigned-off-by: Jan Beulich <jbeulich@su=
se.com>=0A=0A---=0Av3: Break out just this change from the previously =
submitted much=0A    larger patch. The rest of that one is now planned to =
go in only=0A    after 4.2.=0A=0A--- a/xen/arch/x86/hvm/rtc.c=0A+++ =
b/xen/arch/x86/hvm/rtc.c=0A@@ -365,6 +365,7 @@ static int rtc_ioport_write(=
void *opaque=0A {=0A     RTCState *s =3D opaque;=0A     struct domain *d =
=3D vrtc_domain(s);=0A+    uint32_t orig;=0A =0A     spin_lock(&s->lock);=
=0A =0A@@ -382,6 +383,7 @@ static int rtc_ioport_write(void *opaque=0A     =
    return 0;=0A     }=0A =0A+    orig =3D s->hw.cmos_data[s->hw.cmos_index=
];=0A     switch ( s->hw.cmos_index )=0A     {=0A     case RTC_SECONDS_ALAR=
M:=0A@@ -405,9 +407,9 @@ static int rtc_ioport_write(void *opaque=0A       =
  break;=0A     case RTC_REG_A:=0A         /* UIP bit is read only */=0A-  =
      s->hw.cmos_data[RTC_REG_A] =3D (data & ~RTC_UIP) |=0A-            =
(s->hw.cmos_data[RTC_REG_A] & RTC_UIP);=0A-        rtc_timer_update(s);=0A+=
        s->hw.cmos_data[RTC_REG_A] =3D (data & ~RTC_UIP) | (orig & =
RTC_UIP);=0A+        if ( (data ^ orig) & (RTC_RATE_SELECT | RTC_DIV_CTL) =
)=0A+            rtc_timer_update(s);=0A         break;=0A     case =
RTC_REG_B:=0A         if ( data & RTC_SET )=0A@@ -436,7 +438,8 @@ static =
int rtc_ioport_write(void *opaque=0A                 hvm_isa_irq_assert(d, =
RTC_IRQ);=0A             }=0A         s->hw.cmos_data[RTC_REG_B] =3D =
data;=0A-        rtc_timer_update(s);=0A+        if ( (data ^ orig) & =
RTC_PIE )=0A+            rtc_timer_update(s);=0A         check_update_timer=
(s);=0A         alarm_timer_update(s);=0A         break;=0A
--=__Part7D4C60DA.0__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part7D4C60DA.0__=--


From xen-devel-bounces@lists.xen.org Fri Aug 31 16:01:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 16:01:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7Tes-0007Bt-Em; Fri, 31 Aug 2012 16:01:22 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T7Teq-0007Bd-M4
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 16:01:20 +0000
Received: from [85.158.143.99:39954] by server-2.bemta-4.messagelabs.com id
	4B/3B-21239-0DFD0405; Fri, 31 Aug 2012 16:01:20 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-10.tower-216.messagelabs.com!1346428879!21498575!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEyMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18700 invoked from network); 31 Aug 2012 16:01:19 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-10.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 16:01:19 -0000
X-IronPort-AV: E=Sophos;i="4.80,347,1344211200"; d="scan'208";a="14293902"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 16:01:13 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Fri, 31 Aug 2012 17:01:13 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1T7Tej-0004vp-4q; Fri, 31 Aug 2012 16:01:13 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1T7Tej-0007wE-0l;
	Fri, 31 Aug 2012 17:01:13 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20544.57287.833420.508349@mariner.uk.xensource.com>
Date: Fri, 31 Aug 2012 17:01:11 +0100
To: Ian Campbell <ian.campbell@citrix.com>
In-Reply-To: <ddde6c2c45de8e60518a.1346428539@cosworth.uk.xensource.com>
References: <ddde6c2c45de8e60518a.1346428539@cosworth.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: Dieter Bloms <xensource.com@bloms.de>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH V2] libxl/xl: implement support for guest
 ioport and irq permissions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("[PATCH V2] libxl/xl: implement support for guest ioport and irq permissions"):
> libxl/xl: implement support for guest ioport and irq permissions.
...> 
> -    int e;
> +    int i, e;
...
> +            ul = strtoul(buf, &ep, 16);
> +            if (ep == buf) {
> +                fprintf(stderr, "xl: Invalid argument parsing ioport: %s\n",
> +                        buf);
> +                exit(1);
> +            }
> +            if (ul > UINT32_MAX) {
> +                fprintf(stderr, "xl: ioport %lx too big\n", ul);
> +                exit(1);
> +            }

1. If long and int are the same size, this still mishandles
"1100000055\0", passing -1 to libxl.

2. You compare to UINT32_MAX and then assign to a signed integer.
That seems odd.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 16:01:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 16:01:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7Tes-0007Bt-Em; Fri, 31 Aug 2012 16:01:22 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T7Teq-0007Bd-M4
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 16:01:20 +0000
Received: from [85.158.143.99:39954] by server-2.bemta-4.messagelabs.com id
	4B/3B-21239-0DFD0405; Fri, 31 Aug 2012 16:01:20 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-10.tower-216.messagelabs.com!1346428879!21498575!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEyMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18700 invoked from network); 31 Aug 2012 16:01:19 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-10.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 16:01:19 -0000
X-IronPort-AV: E=Sophos;i="4.80,347,1344211200"; d="scan'208";a="14293902"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 16:01:13 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Fri, 31 Aug 2012 17:01:13 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1T7Tej-0004vp-4q; Fri, 31 Aug 2012 16:01:13 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1T7Tej-0007wE-0l;
	Fri, 31 Aug 2012 17:01:13 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20544.57287.833420.508349@mariner.uk.xensource.com>
Date: Fri, 31 Aug 2012 17:01:11 +0100
To: Ian Campbell <ian.campbell@citrix.com>
In-Reply-To: <ddde6c2c45de8e60518a.1346428539@cosworth.uk.xensource.com>
References: <ddde6c2c45de8e60518a.1346428539@cosworth.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: Dieter Bloms <xensource.com@bloms.de>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH V2] libxl/xl: implement support for guest
 iooprt and irq permissions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("[PATCH V2] libxl/xl: implement support for guest iooprt and irq permissions"):
> libxl/xl: implement support for guest iooprt and irq permissions.
...> 
> -    int e;
> +    int i, e;
...
> +            ul = strtoul(buf, &ep, 16);
> +            if (ep == buf) {
> +                fprintf(stderr, "xl: Invalid argument parsing ioport: %s\n",
> +                        buf);
> +                exit(1);
> +            }
> +            if (ul > UINT32_MAX) {
> +                fprintf(stderr, "xl: ioport %lx too big\n", ul);
> +                exit(1);
> +            }

1. If long and int are the same size, this still mishandles
"1100000055\0", passing -1 to libxl.

2. You compare to UINT32_MAX and then assign to a signed integer.
That seems odd.
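A minimal sketch of a parse that rejects both cases, assuming the target is a uint32_t port value (names are illustrative, not the actual xl code):

```c
#include <errno.h>
#include <stdint.h>
#include <stdlib.h>

/* Illustrative sketch, not the xl implementation: parse a hex ioport
 * into a uint32_t, rejecting overflow on both 32- and 64-bit longs.
 * On a 32-bit long, strtoul clamps to ULONG_MAX and sets ERANGE, so
 * checking errno catches what `ul > UINT32_MAX` alone cannot. */
static int parse_ioport(const char *buf, uint32_t *out)
{
    char *ep;
    unsigned long ul;

    errno = 0;
    ul = strtoul(buf, &ep, 16);
    if (ep == buf || (*ep != '\0' && *ep != '-'))
        return -1;                  /* no digits, or trailing garbage */
    if (errno == ERANGE || ul > UINT32_MAX)
        return -1;                  /* out of range for a 32-bit port */
    *out = (uint32_t)ul;
    return 0;
}
```

With this shape, "1100000055" is rejected on both ABIs: via ERANGE when long is 32-bit, and via the UINT32_MAX comparison when it is 64-bit.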

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 16:02:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 16:02:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7TfO-0007I5-Sx; Fri, 31 Aug 2012 16:01:54 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T7TfM-0007HN-Vy
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 16:01:53 +0000
Received: from [85.158.138.51:44511] by server-5.bemta-3.messagelabs.com id
	EC/AD-13133-0FFD0405; Fri, 31 Aug 2012 16:01:52 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-174.messagelabs.com!1346428911!27910889!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEyMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17907 invoked from network); 31 Aug 2012 16:01:51 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-8.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 16:01:51 -0000
X-IronPort-AV: E=Sophos;i="4.80,347,1344211200"; d="scan'208";a="14293913"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 16:01:51 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1;
	Fri, 31 Aug 2012 17:01:51 +0100
Message-ID: <1346428909.27277.246.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Fri, 31 Aug 2012 17:01:49 +0100
In-Reply-To: <20544.57061.422450.821411@mariner.uk.xensource.com>
References: <1344935141.5926.6.camel@zakaz.uk.xensource.com>
	<20120814100704.GA19704@bloms.de>
	<1346421542.27277.218.camel@zakaz.uk.xensource.com>
	<1346425278.27277.224.camel@zakaz.uk.xensource.com>
	<20544.54424.775548.66532@mariner.uk.xensource.com>
	<1346426619.27277.237.camel@zakaz.uk.xensource.com>
	<20544.57061.422450.821411@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Dieter Bloms <xensource.com@bloms.de>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Xen-users] Xen 4.2 TODO (io and irq parameter are
 not evaluated by xl)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-08-31 at 16:57 +0100, Ian Jackson wrote:
> Ian Campbell writes ("Re: [Xen-users] Xen 4.2 TODO (io and irq parameter are not evaluated by xl)"):
> > On Fri, 2012-08-31 at 16:13 +0100, Ian Jackson wrote:
> > > This code fails to properly handle (reject)
> > >    - (*ep!=0 && *ep!='-')
> > 
> > Oops, will fix.
> > 
> > >    - value > LONG_MAX
> > >    - INT_MAX < value <= LONG_MAX
> > 
> > These all get checked inside the (eventual) hypercall. Or were you
> > thinking of something else?
> 
> Suppose buf contains "1100000055\0".
> 
> If a long is 32-bit, strtoul will return ULONG_MAX (0xffffffffUL)
> setting errno to ERANGE.  Converting that to a 32-bit signed int will
> do something implementation-defined (C99 6.3.1.3(3)) - in reality,
> give -1.  Relying on this being rejected later seems poor practice.

The target variable here is an unsigned 32-bit int though.

> If a long is 64-bit and an int 32-bit, strtoul will return
> 0x1100000055UL.  Converting that to a 32-bit int will again do
> something implementation-defined - in reality, give 0x55.

I've added checks for this and posted as V2.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 16:04:41 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 16:04:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7Thq-0007af-FO; Fri, 31 Aug 2012 16:04:26 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T7Thp-0007aM-0D
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 16:04:25 +0000
Received: from [85.158.143.35:3477] by server-1.bemta-4.messagelabs.com id
	DF/31-12504-880E0405; Fri, 31 Aug 2012 16:04:24 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1346429063!5682927!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEyMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5816 invoked from network); 31 Aug 2012 16:04:23 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 16:04:23 -0000
X-IronPort-AV: E=Sophos;i="4.80,347,1344211200"; d="scan'208";a="14293953"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 16:04:22 +0000
Received: from [10.80.2.42] (10.80.2.42) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1;
	Fri, 31 Aug 2012 17:04:23 +0100
Message-ID: <1346429061.27277.248.camel@zakaz.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Fri, 31 Aug 2012 17:04:21 +0100
In-Reply-To: <20544.57287.833420.508349@mariner.uk.xensource.com>
References: <ddde6c2c45de8e60518a.1346428539@cosworth.uk.xensource.com>
	<20544.57287.833420.508349@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Dieter Bloms <xensource.com@bloms.de>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH V2] libxl/xl: implement support for guest
 iooprt and irq permissions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-08-31 at 17:01 +0100, Ian Jackson wrote:
> Ian Campbell writes ("[PATCH V2] libxl/xl: implement support for guest iooprt and irq permissions"):
> > libxl/xl: implement support for guest iooprt and irq permissions.
> ...> 
> > -    int e;
> > +    int i, e;
> ...
> > +            ul = strtoul(buf, &ep, 16);
> > +            if (ep == buf) {
> > +                fprintf(stderr, "xl: Invalid argument parsing ioport: %s\n",
> > +                        buf);
> > +                exit(1);
> > +            }
> > +            if (ul > UINT32_MAX) {
> > +                fprintf(stderr, "xl: ioport %lx too big\n", ul);
> > +                exit(1);
> > +            }
> 
> 1. If long and int are the same size, this still mishandles
> "1100000055\0", passing -1 to libxl.
> 
> 2. You compare to UINT32_MAX and then assign to a signed integer.
> That seems odd.

Everything here is signed now.

Oh, I see the confusion, I've unwittingly shadowed the e you've quoted
with an uint32_t in this scope. I'll change the variable name since that
obviously not helpful.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 16:11:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 16:11:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7ToF-0007wM-GF; Fri, 31 Aug 2012 16:11:03 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1T7ToE-0007wF-5L
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 16:11:02 +0000
Received: from [85.158.138.51:9014] by server-3.bemta-3.messagelabs.com id
	15/D4-21322-512E0405; Fri, 31 Aug 2012 16:11:01 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-12.tower-174.messagelabs.com!1346429458!20012617!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjgzMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3499 invoked from network); 31 Aug 2012 16:10:59 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 16:10:59 -0000
X-IronPort-AV: E=Sophos;i="4.80,347,1344211200"; d="scan'208";a="36451362"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 16:10:57 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPMAILMX02.citrite.net
	(10.13.107.66) with Microsoft SMTP Server id 8.3.279.1; Fri, 31 Aug 2012
	12:10:57 -0400
Message-ID: <5040E210.2060702@citrix.com>
Date: Fri, 31 Aug 2012 17:10:56 +0100
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20120428 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Andres Lagar-Cavilla <andres.lagarcavilla@gmail.com>
References: <1346086287-17674-1-git-send-email-andres@lagarcavilla.org>
	<5040CAEA.7000600@citrix.com>
	<160CC375-2682-4CBF-B1EC-06A9F3E49A40@gridcentric.ca>
	<A976B10C-58BE-4660-89E9-A8F85CAB5F19@gmail.com>
In-Reply-To: <A976B10C-58BE-4660-89E9-A8F85CAB5F19@gmail.com>
Cc: Andres Lagar-Cavilla <andreslc@gridcentric.ca>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"andres@lagarcavilla.org" <andres@lagarcavilla.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Xen backend support for paged out grant
	targets.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 31/08/12 16:42, Andres Lagar-Cavilla wrote:
> Actually acted upon your feedback ipso facto:
> 
> commit d5fab912caa1f0cf6be0a6773f502d3417a207b6
> Author: Andres Lagar-Cavilla <andres@lagarcavilla.org>
> Date:   Sun Aug 26 09:45:57 2012 -0400
> 
>     Xen backend support for paged out grant targets.

This looks mostly fine except for the #define instead of inline functions.

> +#define gnttab_map_grant_no_eagain(_gop)                                    \

This name tripped me up previously, as I read it as:

gnttab_map_grant_no_[retries_for]_eagain().

Perhaps gnttab_map_grant_with_retries() ? Or similar?

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 16:16:30 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 16:16:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7TtH-00086C-8u; Fri, 31 Aug 2012 16:16:15 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T7TtF-000865-9J
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 16:16:13 +0000
Received: from [85.158.143.99:47208] by server-1.bemta-4.messagelabs.com id
	4F/8E-12504-C43E0405; Fri, 31 Aug 2012 16:16:12 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-216.messagelabs.com!1346429769!16681035!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzU0NjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9801 invoked from network); 31 Aug 2012 16:16:10 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 16:16:10 -0000
X-IronPort-AV: E=Sophos;i="4.80,347,1344211200"; d="scan'208";a="206800380"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 16:16:08 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.279.1;
	Fri, 31 Aug 2012 12:16:08 -0400
Received: from cosworth.uk.xensource.com ([10.80.16.52] ident=ianc)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<ian.campbell@citrix.com>)	id 1T7Tt9-00078Q-VA;
	Fri, 31 Aug 2012 17:16:07 +0100
MIME-Version: 1.0
X-Mercurial-Node: f9a7d8c439f9aa47b665cb7c443b40cdadffe0b0
Message-ID: <f9a7d8c439f9aa47b665.1346429767@cosworth.uk.xensource.com>
User-Agent: Mercurial-patchbomb/1.6.4
Date: Fri, 31 Aug 2012 17:16:07 +0100
From: Ian Campbell <ian.campbell@citrix.com>
To: xen-devel@lists.xen.org
Cc: Dieter Bloms <xensource.com@bloms.de>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: [Xen-devel] [PATCH V3] libxl/xl: implement support for guest ioport
 and irq permissions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1346429764 -3600
# Node ID f9a7d8c439f9aa47b665cb7c443b40cdadffe0b0
# Parent  ccbee5bcb31b72706497725381f4e6836b9df657
libxl/xl: implement support for guest ioport and irq permissions.

This is useful for passing legacy ISA devices (e.g. com ports,
parallel ports) to guests.

Supported syntax is as described in
http://cmrg.fifthhorseman.net/wiki/xen#grantingaccesstoserialhardwaretoadomU

I tested this using Xen's 'q' key handler which prints out the I/O
port and IRQ ranges allowed for each domain. e.g.:

(XEN) Rangesets belonging to domain 31:
(XEN)     I/O Ports  { 2e8-2ef, 2f8-2ff }
(XEN)     Interrupts { 3, 5-6 }

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
v3:
- rename variable s and e to start and end since there is already a
  variable called e in the enclosing scope.
- Reject UINT32_MAX as a value too since strtoul returns ULONG_MAX and
  sets errno=ERANGE under some circumstances. UINT32_MAX-1 is still
  much larger than any possible ioport or irq.

v2:
Handle additional error conditions:
- (*ep!=0 && *ep!='-')
- *ep2!=0
- calloc failure
- values which are too big for a uint32_t
s/parsion/parsing/

diff -r ccbee5bcb31b -r f9a7d8c439f9 docs/man/xl.cfg.pod.5
--- a/docs/man/xl.cfg.pod.5	Fri Aug 31 12:03:55 2012 +0100
+++ b/docs/man/xl.cfg.pod.5	Fri Aug 31 17:16:04 2012 +0100
@@ -402,6 +402,30 @@ for more information on the "permissive"
 
 =back
 
+=item B<ioports=[ "IOPORT_RANGE", "IOPORT_RANGE", ... ]>
+
+=over 4
+
+Allow guest to access specific legacy I/O ports. Each B<IOPORT_RANGE>
+is given in hexadecimal and may be either a span, e.g. C<2f8-2ff>
+(inclusive), or a single I/O port C<2f8>.
+
+It is recommended to use this option only for trusted VMs under
+administrator control.
+
+=back
+
+=item B<irqs=[ NUMBER, NUMBER, ... ]>
+
+=over 4
+
+Allow a guest to access specific physical IRQs.
+
+It is recommended to use this option only for trusted VMs under
+administrator control.
+
+=back
+
 =head2 Paravirtualised (PV) Guest Specific Options
 
 The following options apply only to Paravirtual guests.
diff -r ccbee5bcb31b -r f9a7d8c439f9 tools/libxl/libxl_create.c
--- a/tools/libxl/libxl_create.c	Fri Aug 31 12:03:55 2012 +0100
+++ b/tools/libxl/libxl_create.c	Fri Aug 31 17:16:04 2012 +0100
@@ -933,6 +933,36 @@ static void domcreate_launch_dm(libxl__e
         LOG(ERROR, "unable to add disk devices");
         goto error_out;
     }
+
+    for (i = 0; i < d_config->b_info.num_ioports; i++) {
+        libxl_ioport_range *io = &d_config->b_info.ioports[i];
+
+        LOG(DEBUG, "dom%d ioports %"PRIx32"-%"PRIx32,
+            domid, io->first, io->first + io->number - 1);
+
+        ret = xc_domain_ioport_permission(CTX->xch, domid,
+                                          io->first, io->number, 1);
+        if (ret < 0) {
+            LOGE(ERROR,
+                 "failed to give dom%d access to ioports %"PRIx32"-%"PRIx32,
+                 domid, io->first, io->first + io->number - 1);
+            ret = ERROR_FAIL;
+        }
+    }
+
+    for (i = 0; i < d_config->b_info.num_irqs; i++) {
+        uint32_t irq = d_config->b_info.irqs[i];
+
+        LOG(DEBUG, "dom%d irq %"PRIx32, domid, irq);
+
+        ret = xc_domain_irq_permission(CTX->xch, domid, irq, 1);
+        if (ret < 0) {
+            LOGE(ERROR,
+                 "failed to give dom%d access to irq %"PRId32, domid, irq);
+            ret = ERROR_FAIL;
+        }
+    }
+
     for (i = 0; i < d_config->num_nics; i++) {
         /* We have to init the nic here, because we still haven't
          * called libxl_device_nic_add at this point, but qemu needs
diff -r ccbee5bcb31b -r f9a7d8c439f9 tools/libxl/libxl_types.idl
--- a/tools/libxl/libxl_types.idl	Fri Aug 31 12:03:55 2012 +0100
+++ b/tools/libxl/libxl_types.idl	Fri Aug 31 17:16:04 2012 +0100
@@ -135,6 +135,11 @@ libxl_vga_interface_type = Enumeration("
 # Complex libxl types
 #
 
+libxl_ioport_range = Struct("ioport_range", [
+    ("first", uint32),
+    ("number", uint32),
+    ])
+
 libxl_vga_interface_info = Struct("vga_interface_info", [
     ("kind",    libxl_vga_interface_type),
     ])
@@ -277,6 +282,9 @@ libxl_domain_build_info = Struct("domain
     #  parameters for all type of scheduler
     ("sched_params",     libxl_domain_sched_params),
 
+    ("ioports",          Array(libxl_ioport_range, "num_ioports")),
+    ("irqs",             Array(uint32, "num_irqs")),
+    
     ("u", KeyedUnion(None, libxl_domain_type, "type",
                 [("hvm", Struct(None, [("firmware",         string),
                                        ("bios",             libxl_bios_type),
diff -r ccbee5bcb31b -r f9a7d8c439f9 tools/libxl/xl_cmdimpl.c
--- a/tools/libxl/xl_cmdimpl.c	Fri Aug 31 12:03:55 2012 +0100
+++ b/tools/libxl/xl_cmdimpl.c	Fri Aug 31 17:16:04 2012 +0100
@@ -573,10 +573,12 @@ static void parse_config_data(const char
     long l;
     XLU_Config *config;
     XLU_ConfigList *cpus, *vbds, *nics, *pcis, *cvfbs, *cpuids;
+    XLU_ConfigList *ioports, *irqs;
+    int num_ioports, num_irqs;
     int pci_power_mgmt = 0;
     int pci_msitranslate = 0;
     int pci_permissive = 0;
-    int e;
+    int i, e;
 
     libxl_domain_create_info *c_info = &d_config->c_info;
     libxl_domain_build_info *b_info = &d_config->b_info;
@@ -919,6 +921,89 @@ static void parse_config_data(const char
         abort();
     }
 
+    if (!xlu_cfg_get_list(config, "ioports", &ioports, &num_ioports, 0)) {
+        b_info->num_ioports = num_ioports;
+        b_info->ioports = calloc(num_ioports, sizeof(*b_info->ioports));
+        if (b_info->ioports == NULL) {
+            fprintf(stderr, "unable to allocate memory for ioports\n");
+            exit(-1);
+        }
+
+        for (i = 0; i < num_ioports; i++) {
+            const char *buf2;
+            char *ep;
+            uint32_t start, end;
+            unsigned long ul;
+
+            buf = xlu_cfg_get_listitem (ioports, i);
+            if (!buf) {
+                fprintf(stderr,
+                        "xl: Unable to get element #%d in ioport list\n", i);
+                exit(1);
+            }
+            ul = strtoul(buf, &ep, 16);
+            if (ep == buf) {
+                fprintf(stderr, "xl: Invalid argument parsing ioport: %s\n",
+                        buf);
+                exit(1);
+            }
+            if (ul >= UINT32_MAX) {
+                fprintf(stderr, "xl: ioport %lx too big\n", ul);
+                exit(1);
+            }
+            start = end = ul;
+
+            if (*ep == '-') {
+                buf2 = ep + 1;
+                ul = strtoul(buf2, &ep, 16);
+                if (ep == buf2 || *ep != '\0' || ul < start) {
+                    fprintf(stderr,
+                            "xl: Invalid argument parsing ioport: %s\n", buf);
+                    exit(1);
+                }
+                if (ul >= UINT32_MAX) {
+                    fprintf(stderr, "xl: ioport %lx too big\n", ul);
+                    exit(1);
+                }
+                end = ul;
+            } else if (*ep != '\0') {
+                fprintf(stderr,
+                        "xl: Invalid argument parsing ioport: %s\n", buf);
+                exit(1);
+            }
+            b_info->ioports[i].first = start;
+            b_info->ioports[i].number = end - start + 1;
+        }
+    }
+
+    if (!xlu_cfg_get_list(config, "irqs", &irqs, &num_irqs, 0)) {
+        b_info->num_irqs = num_irqs;
+        b_info->irqs = calloc(num_irqs, sizeof(*b_info->irqs));
+        if (b_info->irqs == NULL) {
+            fprintf(stderr, "unable to allocate memory for irqs\n");
+            exit(-1);
+        }
+        for (i = 0; i < num_irqs; i++) {
+            char *ep;
+            unsigned long ul;
+            buf = xlu_cfg_get_listitem (irqs, i);
+            if (!buf) {
+                fprintf(stderr,
+                        "xl: Unable to get element %d in irq list\n", i);
+                exit(1);
+            }
+            ul = strtoul(buf, &ep, 10);
+            if (ep == buf) {
+                fprintf(stderr,
+                        "xl: Invalid argument parsing irq: %s\n", buf);
+                exit(1);
+            }
+            if (ul >= UINT32_MAX) {
+                fprintf(stderr, "xl: irq %lx too big\n", ul);
+                exit(1);
+            }
+            b_info->irqs[i] = ul;
+        }
+    }
+
     if (!xlu_cfg_get_list (config, "disk", &vbds, 0, 0)) {
         d_config->num_disks = 0;
         d_config->disks = NULL;

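For reference, a config fragment exercising both options might look like
the following (the port and IRQ values match the 'q' debug-key output
quoted in the commit message; the comment text is illustrative):

```
# xl guest config fragment: grant the guest two legacy serial port
# ranges (hex) and three physical IRQs (decimal)
ioports = [ "2e8-2ef", "2f8-2ff" ]
irqs    = [ 3, 5, 6 ]
```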
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 16:40:47 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 16:40:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7UGk-0008Nf-Lo; Fri, 31 Aug 2012 16:40:30 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T7UGi-0008NJ-Sj
	for xen-devel@lists.xensource.com; Fri, 31 Aug 2012 16:40:29 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1346431219!1336449!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNzI0MTQ0\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25222 invoked from network); 31 Aug 2012 16:40:21 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-7.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 31 Aug 2012 16:40:21 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7VGeHlH015604
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 31 Aug 2012 16:40:18 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7VGeGa5025972
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 31 Aug 2012 16:40:17 GMT
Received: from abhmt116.oracle.com (abhmt116.oracle.com [141.146.116.68])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7VGeGES016166; Fri, 31 Aug 2012 11:40:16 -0500
Received: from localhost.localdomain (/38.96.16.75)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 31 Aug 2012 09:40:16 -0700
Date: Fri, 31 Aug 2012 12:40:11 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: David Vrabel <david.vrabel@citrix.com>
Message-ID: <20120831164010.GA18929@localhost.localdomain>
References: <1346407072-6405-1-git-send-email-stefano.panella@citrix.com>
	<5040B249.4000306@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <5040B249.4000306@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: xen-devel@lists.xensource.com, Stefano Panella <stefano.panella@citrix.com>,
	linux-kernel@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH 1/1] XEN: Use correct masking in
 xen_swiotlb_alloc_coherent.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Aug 31, 2012 at 01:47:05PM +0100, David Vrabel wrote:
> On 31/08/12 10:57, Stefano Panella wrote:
> > When running 32-bit pvops-dom0 and a driver tries to allocate a coherent
> > DMA-memory the xen swiotlb-implementation returned memory beyond 4GB.
> > 
> > This caused, for example, non-working sound on a system with 4 GB and a
> > 64-bit compatible sound card which sets the DMA mask to 64 bits.
> > 
> > On bare-metal and the forward-ported xen-dom0 patches from OpenSuse a coherent
> > DMA-memory is always allocated inside the 32-bit address-range by calling
> > dma_alloc_coherent_mask.
> 
> We should have the same behaviour under Xen as bare metal so:
> 
> Acked-By: David Vrabel <david.vrabel@citrix.com>
> 
> This does limit the DMA mask to 32-bits by passing it through an
> unsigned long, which seems a bit sneaky...

so is the issue that we are not casting it from 'u64' to 'u32'
(unsigned long) on 32-bit?

> 
> Presumably the sound card is capable of handling 64 bit physical
> addresses (or it would break under 64-bit kernels) so it's not clear why
> this sound driver requires this restriction.
> 
> Is there a bug in the sound driver or sound subsystem where it's
> truncating a dma_addr_t by assigning it to an unsigned long or similar?
> 
> > --- a/drivers/xen/swiotlb-xen.c
> > +++ b/drivers/xen/swiotlb-xen.c
> > @@ -232,7 +232,7 @@ xen_swiotlb_alloc_coherent(struct device *hwdev, size_t size,
> >  		return ret;
> >  
> >  	if (hwdev && hwdev->coherent_dma_mask)
> > -		dma_mask = hwdev->coherent_dma_mask;
> > +		dma_mask = dma_alloc_coherent_mask(hwdev, flags);
> 
> Suggest
> 
>     if (hwdev)
>         dma_mask = dma_alloc_coherent_mask(hwdev, flags)

Isn't that code just doing this:
static inline unsigned long dma_alloc_coherent_mask(struct device *dev,
                                                    gfp_t gfp)
{
        unsigned long dma_mask = 0;

        dma_mask = dev->coherent_dma_mask;
        if (!dma_mask)
                dma_mask = (gfp & GFP_DMA) ?
                        DMA_BIT_MASK(24) : DMA_BIT_MASK(32);

        return dma_mask;
}

and in our code, the dma_mask by default is DMA_BIT_MASK(32):

u64 dma_mask = DMA_BIT_MASK(32);

So what am I missing?

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 16:40:47 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 16:40:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7UGk-0008Nf-Lo; Fri, 31 Aug 2012 16:40:30 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T7UGi-0008NJ-Sj
	for xen-devel@lists.xensource.com; Fri, 31 Aug 2012 16:40:29 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1346431219!1336449!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNzI0MTQ0\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25222 invoked from network); 31 Aug 2012 16:40:21 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-7.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 31 Aug 2012 16:40:21 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7VGeHlH015604
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 31 Aug 2012 16:40:18 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7VGeGa5025972
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 31 Aug 2012 16:40:17 GMT
Received: from abhmt116.oracle.com (abhmt116.oracle.com [141.146.116.68])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7VGeGES016166; Fri, 31 Aug 2012 11:40:16 -0500
Received: from localhost.localdomain (/38.96.16.75)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 31 Aug 2012 09:40:16 -0700
Date: Fri, 31 Aug 2012 12:40:11 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: David Vrabel <david.vrabel@citrix.com>
Message-ID: <20120831164010.GA18929@localhost.localdomain>
References: <1346407072-6405-1-git-send-email-stefano.panella@citrix.com>
	<5040B249.4000306@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <5040B249.4000306@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: xen-devel@lists.xensource.com, Stefano Panella <stefano.panella@citrix.com>,
	linux-kernel@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH 1/1] XEN: Use correct masking in
 xen_swiotlb_alloc_coherent.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Aug 31, 2012 at 01:47:05PM +0100, David Vrabel wrote:
> On 31/08/12 10:57, Stefano Panella wrote:
> > When running 32-bit pvops-dom0 and a driver tries to allocate a coherent
> > DMA-memory the xen swiotlb-implementation returned memory beyond 4GB.
> > 
> > This caused for example not working sound on a system with 4 GB and a 64-bit
> > compatible sound-card with sets the DMA-mask to 64bit.
> > 
> > On bare-metal and the forward-ported xen-dom0 patches from OpenSuse a coherent
> > DMA-memory is always allocated inside the 32-bit address-range by calling
> > dma_alloc_coherent_mask.
> 
> We should have the same behaviour under Xen as bare metal so:
> 
> Acked-By: David Vrabel <david.vrabel@citrix.com>
> 
> This does limit the DMA mask to 32 bits by passing it through an
> unsigned long, which seems a bit sneaky...

So is the issue that we are not casting it from 'u64' to 'u32'
(unsigned long) on 32-bit?

> 
> Presumably the sound card is capable of handling 64 bit physical
> addresses (or it would break under 64-bit kernels) so it's not clear why
> this sound driver requires this restriction.
> 
> Is there a bug in the sound driver or sound subsystem where it's
> truncating a dma_addr_t by assigning it to an unsigned long or similar?
> 
> > --- a/drivers/xen/swiotlb-xen.c
> > +++ b/drivers/xen/swiotlb-xen.c
> > @@ -232,7 +232,7 @@ xen_swiotlb_alloc_coherent(struct device *hwdev, size_t size,
> >  		return ret;
> >  
> >  	if (hwdev && hwdev->coherent_dma_mask)
> > -		dma_mask = hwdev->coherent_dma_mask;
> > +		dma_mask = dma_alloc_coherent_mask(hwdev, flags);
> 
> Suggest
> 
>     if (hwdev)
>         dma_mask = dma_alloc_coherent_mask(hwdev, flags)

Isn't that code just doing this:
static inline unsigned long dma_alloc_coherent_mask(struct device *dev,
                                                    gfp_t gfp)
{
        unsigned long dma_mask = 0;

        dma_mask = dev->coherent_dma_mask;
        if (!dma_mask)
                dma_mask = (gfp & GFP_DMA) ?
                           DMA_BIT_MASK(24) : DMA_BIT_MASK(32);

        return dma_mask;
}

and in our code, the dma_mask by default is DMA_BIT_MASK(32):

u64 dma_mask = DMA_BIT_MASK(32);

So what am I missing?

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 16:50:20 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 16:50:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7UPp-00007u-O1; Fri, 31 Aug 2012 16:49:53 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1T7UPo-00007p-LA
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 16:49:52 +0000
Received: from [85.158.138.51:38416] by server-9.bemta-3.messagelabs.com id
	21/FE-15390-F2BE0405; Fri, 31 Aug 2012 16:49:51 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-4.tower-174.messagelabs.com!1346431787!27934129!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjgzMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31852 invoked from network); 31 Aug 2012 16:49:49 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 16:49:49 -0000
X-IronPort-AV: E=Sophos;i="4.80,348,1344211200"; d="scan'208";a="36456018"
Received: from ftlpmailmx02.citrite.net ([10.13.107.66])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 16:49:32 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.66) with Microsoft SMTP Server id 8.3.279.1;
	Fri, 31 Aug 2012 12:49:32 -0400
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1T7UPT-0007cd-TX;
	Fri, 31 Aug 2012 17:49:31 +0100
Message-ID: <5040EB1B.80004@citrix.com>
Date: Fri, 31 Aug 2012 17:49:31 +0100
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:14.0) Gecko/20120714 Thunderbird/14.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <5040D05D.8090808@citrix.com>
	<5040F61A0200007800097E86@nat28.tlf.novell.com>
In-Reply-To: <5040F61A0200007800097E86@nat28.tlf.novell.com>
X-Enigmail-Version: 1.4.4
Content-Type: multipart/mixed; boundary="------------050006010605040901060001"
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] (v2) docs/command line: Clarify the behavior with
 invalid input.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--------------050006010605040901060001
Content-Type: text/plain; charset="ISO-8859-1"
Content-Transfer-Encoding: 7bit

On 31/08/12 16:36, Jan Beulich wrote:
>>>> On 31.08.12 at 16:55, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> I don't think we should specifically document the behavior for
> unexpected values; instead, the behavior should simply be
> "undefined".
>
> Jan
>

Yes ok.  v2 attached.

-- 
Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
T: +44 (0)1223 225 900, http://www.citrix.com


--------------050006010605040901060001
Content-Type: text/x-patch; name="docs-cmdline.patch"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="docs-cmdline.patch"

# HG changeset patch
# Parent 993922337bc1ceaac1054c31ba683af74e737cc1
docs/command line: Clarify the behavior with invalid input.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

--
v2: State that invalid input is undefined, rather than describing the
current behaviour.

diff -r 993922337bc1 docs/misc/xen-command-line.markdown
--- a/docs/misc/xen-command-line.markdown
+++ b/docs/misc/xen-command-line.markdown
@@ -11,7 +11,8 @@ Hypervisor.
 ## Types of parameter
 
 Most parameters take the form `option=value`.  Different options on
-the command line should be space delimited.
+the command line should be space delimited.  All options are case
+sensitive, as are all values unless explicitly noted.
 
 ### Boolean (`<boolean>`)
 
@@ -35,6 +36,9 @@ Disable x2apic support (if present)
 Enable synchronous console mode
 > `sync_console`
 
+Explicitly specifying any value other than those listed above is
+undefined, as is stacking a `no-` prefix with an explicit value.
+
 ### Integer (`<integer>`)
 
 An integer parameter will default to decimal and may be prefixed with
@@ -42,6 +46,9 @@ a `-` for negative numbers.  Alternative
 used by prefixing the number with `0x`, or an octal number may be used
 if a leading `0` is present.
 
+Providing a string which does not validly convert to an integer is
+undefined.
+
 ### Size (`<size>`)
 
 A size parameter may be any integer, with a size suffix
@@ -51,7 +58,8 @@ A size parameter may be any integer, wit
 * `K` or `k`: Kilo (2^10)
 * `B` or `b`: Bytes
 
-Without a size suffix, the default will be kilo.
+Without a size suffix, the default will be kilo.  Providing a suffix
+other than those listed above is undefined.
 
 ### String
 

--------------050006010605040901060001
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--------------050006010605040901060001--


From xen-devel-bounces@lists.xen.org Fri Aug 31 16:50:58 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 16:50:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7UQd-0000Aa-5j; Fri, 31 Aug 2012 16:50:43 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1T7UQc-0000AR-8v
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 16:50:42 +0000
Received: from [85.158.139.83:7822] by server-5.bemta-5.messagelabs.com id
	ED/06-30514-16BE0405; Fri, 31 Aug 2012 16:50:41 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-5.tower-182.messagelabs.com!1346431838!27745862!1
X-Originating-IP: [209.85.216.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1982 invoked from network); 31 Aug 2012 16:50:40 -0000
Received: from mail-qc0-f173.google.com (HELO mail-qc0-f173.google.com)
	(209.85.216.173)
	by server-5.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 16:50:40 -0000
Received: by qcab12 with SMTP id b12so2805104qca.32
	for <xen-devel@lists.xen.org>; Fri, 31 Aug 2012 09:50:38 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=WNtxK7pg6BoB79EHd9PxmSYzukRHMSh3H9l6+qLteNw=;
	b=SIIRzq14TsXWhxEy3Z+Le5Bx9HVMQbnrCcRM1IJaA0nsbkSl0SSPcJa53zMzIo6ieL
	3eb/5UAde9SB2pP30HQfUORt/E/nOkuBJ00Pm3BNYT+BSI7MPDyBbNdAuhaga0b2lArG
	ckwkpsjaunAB6lXEL5S9xSq/XEgClMFBoGnXCAIts78S4f5GWeosPn+KC0ePj1MCXqqc
	r+xu4p08dkfL0JyJfTgSWumPtucrD/zQ/vyIxhjx3xR/OqfXMAilLkknCjvyNi5oJAsz
	mrrlTGnVPZ1c2TkNlCuipNrGGUW87ptCbK82vq4WezrfCEGeF6yz2C90PrIy5CEshFDE
	xGBw==
Received: by 10.224.196.132 with SMTP id eg4mr19135682qab.93.1346431838372;
	Fri, 31 Aug 2012 09:50:38 -0700 (PDT)
Received: from [10.254.74.233] ([38.108.87.20])
	by mx.google.com with ESMTPS id m4sm5894328qak.6.2012.08.31.09.50.34
	(version=SSLv3 cipher=OTHER); Fri, 31 Aug 2012 09:50:36 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.32.0.111121
Date: Fri, 31 Aug 2012 17:50:31 +0100
From: Keir Fraser <keir.xen@gmail.com>
To: Jan Beulich <JBeulich@suse.com>,
	xen-devel <xen-devel@lists.xen.org>
Message-ID: <CC66A9E7.3D875%keir.xen@gmail.com>
Thread-Topic: [Xen-devel] [PATCH, v3] x86/HVM: RTC periodic timer emulation
	adjustments
Thread-Index: Ac2HmLtBL5S4b9jj6E6tpq5coDkR1Q==
In-Reply-To: <5040FBEA0200007800097EE5@nat28.tlf.novell.com>
Mime-version: 1.0
Subject: Re: [Xen-devel] [PATCH,
 v3] x86/HVM: RTC periodic timer emulation adjustments
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 31/08/2012 17:01, "Jan Beulich" <JBeulich@suse.com> wrote:

> - don't call rtc_timer_update() on REG_A writes when the value didn't
>   change (always doing the call was reported to cause wall clock time
>   to lag with the JVM running on Windows)
> - don't call rtc_timer_update() on REG_B writes when RTC_PIE didn't
>   change
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

One comment below in-line.

> ---
> v3: Break out just this change from the previously submitted much
>     larger patch. The rest of that one is now planned to go in only
>     after 4.2.
> 
> --- a/xen/arch/x86/hvm/rtc.c
> +++ b/xen/arch/x86/hvm/rtc.c
> @@ -365,6 +365,7 @@ static int rtc_ioport_write(void *opaque
>  {
>      RTCState *s = opaque;
>      struct domain *d = vrtc_domain(s);
> +    uint32_t orig;
>  
>      spin_lock(&s->lock);
>  
> @@ -382,6 +383,7 @@ static int rtc_ioport_write(void *opaque
>          return 0;
>      }
>  
> +    orig = s->hw.cmos_data[s->hw.cmos_index];
>      switch ( s->hw.cmos_index )
>      {
>      case RTC_SECONDS_ALARM:
> @@ -405,9 +407,9 @@ static int rtc_ioport_write(void *opaque
>          break;
>      case RTC_REG_A:
>          /* UIP bit is read only */
> -        s->hw.cmos_data[RTC_REG_A] = (data & ~RTC_UIP) |
> -            (s->hw.cmos_data[RTC_REG_A] & RTC_UIP);
> -        rtc_timer_update(s);
> +        s->hw.cmos_data[RTC_REG_A] = (data & ~RTC_UIP) | (orig & RTC_UIP);
> +        if ( (data ^ orig) & (RTC_RATE_SELECT | RTC_DIV_CTL) )

Please change to 'if ( (data ^ orig) & ~RTC_UIP )'. It is shorter and
matches the style of the immediately preceding line.

Once you make this change:
Acked-by: Keir Fraser <keir@xen.org>

 -- Keir

> +            rtc_timer_update(s);
>          break;
>      case RTC_REG_B:
>          if ( data & RTC_SET )
> @@ -436,7 +438,8 @@ static int rtc_ioport_write(void *opaque
>                  hvm_isa_irq_assert(d, RTC_IRQ);
>              }
>          s->hw.cmos_data[RTC_REG_B] = data;
> -        rtc_timer_update(s);
> +        if ( (data ^ orig) & RTC_PIE )
> +            rtc_timer_update(s);
>          check_update_timer(s);
>          alarm_timer_update(s);
>          break;
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 16:55:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 16:55:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7UVB-0000PW-TW; Fri, 31 Aug 2012 16:55:25 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T7UVA-0000PL-1R
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 16:55:24 +0000
Received: from [85.158.139.83:28653] by server-11.bemta-5.messagelabs.com id
	70/D7-24658-B7CE0405; Fri, 31 Aug 2012 16:55:23 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-5.tower-182.messagelabs.com!1346432105!27746329!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNzI0MTQ0\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11942 invoked from network); 31 Aug 2012 16:55:06 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-5.tower-182.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 31 Aug 2012 16:55:06 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7VGt0d3030473
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 31 Aug 2012 16:55:01 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7VGsxSv010321
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 31 Aug 2012 16:55:00 GMT
Received: from abhmt119.oracle.com (abhmt119.oracle.com [141.146.116.71])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7VGsxh4026513; Fri, 31 Aug 2012 11:54:59 -0500
Received: from localhost.localdomain (/38.96.16.75)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 31 Aug 2012 09:54:58 -0700
Date: Fri, 31 Aug 2012 12:54:55 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Andres Lagar-Cavilla <andres.lagarcavilla@gmail.com>
Message-ID: <20120831165454.GE18929@localhost.localdomain>
References: <1346086287-17674-1-git-send-email-andres@lagarcavilla.org>
	<5040CAEA.7000600@citrix.com>
	<160CC375-2682-4CBF-B1EC-06A9F3E49A40@gridcentric.ca>
	<A976B10C-58BE-4660-89E9-A8F85CAB5F19@gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <A976B10C-58BE-4660-89E9-A8F85CAB5F19@gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Andres Lagar-Cavilla <andreslc@gridcentric.ca>, xen-devel@lists.xen.org,
	David Vrabel <david.vrabel@citrix.com>, andres@lagarcavilla.org
Subject: Re: [Xen-devel] [PATCH] Xen backend support for paged out grant
 targets.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Aug 31, 2012 at 11:42:32AM -0400, Andres Lagar-Cavilla wrote:
> Actually acted upon your feedback ipso facto:
> 
> commit d5fab912caa1f0cf6be0a6773f502d3417a207b6
> Author: Andres Lagar-Cavilla <andres@lagarcavilla.org>
> Date:   Sun Aug 26 09:45:57 2012 -0400
> 
>     Xen backend support for paged out grant targets.
>     
>     Since Xen-4.2, hvm domains may have portions of their memory paged out. When a
>     foreign domain (such as dom0) attempts to map these frames, the map will
>     initially fail. The hypervisor returns a suitable errno, and kicks an
>     asynchronous page-in operation carried out by a helper. The foreign domain is
>     expected to retry the mapping operation until it eventually succeeds. The
>     foreign domain is not put to sleep because it could itself be the one running the
>     pager assist (typical scenario for dom0).
>     
>     This patch adds support for this mechanism for backend drivers using grant
>     mapping and copying operations. Specifically, this covers the blkback and
>     gntdev drivers (which map foreign grants), and the netback driver (which copies
>     foreign grants).
>     
>     * Add GNTST_eagain, already exposed by Xen, to the grant interface.
>     * Add a retry method for grants that fail with GNTST_eagain (i.e. because the
>       target foreign frame is paged out).
>     * Insert hooks with appropriate macro decorators in the aforementioned drivers.
>     
>     The retry loop is only invoked if the grant operation status is GNTST_eagain.
>     It guarantees to leave a new status code different from GNTST_eagain. Any other
>     status code results in identical code execution as before.
>     
>     The retry loop performs 256 attempts with increasing time intervals through a
>     32 second period. It uses msleep to yield while waiting for the next retry.
>     


Would it make sense to yield to other processes (that is, call schedule())?
Or perhaps run this from a workqueue?

I mean, the 'msleep' just looks like a hack... 32 seconds of doing
'msleep' on a 1-VCPU dom0 could trigger the watchdog, I think?

>     V2 after feedback from David Vrabel:
>     * Explicit MAX_DELAY instead of wrap-around delay into zero
>     * Abstract GNTST_eagain check into core grant table code for netback module.
>     
>     Signed-off-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
> 
> diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
> index 682633b..5610fd8 100644
> --- a/drivers/net/xen-netback/netback.c
> +++ b/drivers/net/xen-netback/netback.c
> @@ -635,9 +635,7 @@ static void xen_netbk_rx_action(struct xen_netbk *netbk)
>  		return;
>  
>  	BUG_ON(npo.copy_prod > ARRAY_SIZE(netbk->grant_copy_op));
> -	ret = HYPERVISOR_grant_table_op(GNTTABOP_copy, &netbk->grant_copy_op,
> -					npo.copy_prod);
> -	BUG_ON(ret != 0);
> +	gnttab_batch_copy_no_eagain(netbk->grant_copy_op, npo.copy_prod);
>  
>  	while ((skb = __skb_dequeue(&rxq)) != NULL) {
>  		sco = (struct skb_cb_overlay *)skb->cb;
> @@ -1460,18 +1458,15 @@ static void xen_netbk_tx_submit(struct xen_netbk *netbk)
>  static void xen_netbk_tx_action(struct xen_netbk *netbk)
>  {
>  	unsigned nr_gops;
> -	int ret;
>  
>  	nr_gops = xen_netbk_tx_build_gops(netbk);
>  
>  	if (nr_gops == 0)
>  		return;
> -	ret = HYPERVISOR_grant_table_op(GNTTABOP_copy,
> -					netbk->tx_copy_ops, nr_gops);
> -	BUG_ON(ret);
>  
> -	xen_netbk_tx_submit(netbk);
> +	gnttab_batch_copy_no_eagain(netbk->tx_copy_ops, nr_gops);
>  
> +	xen_netbk_tx_submit(netbk);
>  }
>  
>  static void xen_netbk_idx_release(struct xen_netbk *netbk, u16 pending_idx)
> diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
> index eea81cf..96543b2 100644
> --- a/drivers/xen/grant-table.c
> +++ b/drivers/xen/grant-table.c
> @@ -38,6 +38,7 @@
>  #include <linux/vmalloc.h>
>  #include <linux/uaccess.h>
>  #include <linux/io.h>
> +#include <linux/delay.h>
>  #include <linux/hardirq.h>
>  
>  #include <xen/xen.h>
> @@ -823,6 +824,26 @@ unsigned int gnttab_max_grant_frames(void)
>  }
>  EXPORT_SYMBOL_GPL(gnttab_max_grant_frames);
>  
> +#define MAX_DELAY 256
> +void
> +gnttab_retry_eagain_gop(unsigned int cmd, void *gop, int16_t *status,
> +						const char *func)
> +{
> +	unsigned delay = 1;
> +
> +	do {
> +		BUG_ON(HYPERVISOR_grant_table_op(cmd, gop, 1));
> +		if (*status == GNTST_eagain)
> +			msleep(delay++);
> +	} while ((*status == GNTST_eagain) && (delay < MAX_DELAY));
> +
> +	if (delay >= MAX_DELAY) {
> +		printk(KERN_ERR "%s: %s eagain grant\n", func, current->comm);
> +		*status = GNTST_bad_page;
> +	}
> +}
> +EXPORT_SYMBOL_GPL(gnttab_retry_eagain_gop);
> +
>  int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>  		    struct gnttab_map_grant_ref *kmap_ops,
>  		    struct page **pages, unsigned int count)
> @@ -836,6 +857,11 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>  	if (ret)
>  		return ret;
>  
> +	/* Retry eagain maps */
> +	for (i = 0; i < count; i++)
> +		if (map_ops[i].status == GNTST_eagain)
> +			gnttab_retry_eagain_map(map_ops + i);
> +
>  	if (xen_feature(XENFEAT_auto_translated_physmap))
>  		return ret;
>  
> diff --git a/drivers/xen/xenbus/xenbus_client.c b/drivers/xen/xenbus/xenbus_client.c
> index b3e146e..749f6a3 100644
> --- a/drivers/xen/xenbus/xenbus_client.c
> +++ b/drivers/xen/xenbus/xenbus_client.c
> @@ -490,8 +490,7 @@ static int xenbus_map_ring_valloc_pv(struct xenbus_device *dev,
>  
>  	op.host_addr = arbitrary_virt_to_machine(pte).maddr;
>  
> -	if (HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, &op, 1))
> -		BUG();
> +	gnttab_map_grant_no_eagain(&op);
>  
>  	if (op.status != GNTST_okay) {
>  		free_vm_area(area);
> @@ -572,8 +571,7 @@ int xenbus_map_ring(struct xenbus_device *dev, int gnt_ref,
>  	gnttab_set_map_op(&op, (unsigned long)vaddr, GNTMAP_host_map, gnt_ref,
>  			  dev->otherend_id);
>  
> -	if (HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, &op, 1))
> -		BUG();
> +	gnttab_map_grant_no_eagain(&op);
>  
>  	if (op.status != GNTST_okay) {
>  		xenbus_dev_fatal(dev, op.status,
> diff --git a/include/xen/grant_table.h b/include/xen/grant_table.h
> index 11e27c3..2fecfab 100644
> --- a/include/xen/grant_table.h
> +++ b/include/xen/grant_table.h
> @@ -43,6 +43,7 @@
>  #include <xen/interface/grant_table.h>
>  
>  #include <asm/xen/hypervisor.h>
> +#include <asm/xen/hypercall.h>
>  
>  #include <xen/features.h>
>  
> @@ -183,6 +184,43 @@ unsigned int gnttab_max_grant_frames(void);
>  
>  #define gnttab_map_vaddr(map) ((void *)(map.host_virt_addr))
>  
> +/* Retry a grant map/copy operation when the hypervisor returns GNTST_eagain.
> + * This is typically due to paged out target frames.
> + * Generic entry-point, use macro decorators below for specific grant
> + * operations.
> + * Will retry for 1, 2, ... 255 ms, i.e. 256 times during 32 seconds.
> + * Return value in *status guaranteed to no longer be GNTST_eagain. */
> +void gnttab_retry_eagain_gop(unsigned int cmd, void *gop, int16_t *status,
> +                             const char *func);
> +
> +#define gnttab_retry_eagain_map(_gop)                       \
> +    gnttab_retry_eagain_gop(GNTTABOP_map_grant_ref, (_gop), \
> +                            &(_gop)->status, __func__)
> +
> +#define gnttab_retry_eagain_copy(_gop)                  \
> +    gnttab_retry_eagain_gop(GNTTABOP_copy, (_gop),      \
> +                            &(_gop)->status, __func__)
> +
> +#define gnttab_map_grant_no_eagain(_gop)                                    \
> +do {                                                                        \
> +    if (HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, (_gop), 1))       \
> +        BUG();                                                              \
> +    if ((_gop)->status == GNTST_eagain)                                     \
> +        gnttab_retry_eagain_map((_gop));                                    \
> +} while(0)
> +
> +static inline void
> +gnttab_batch_copy_no_eagain(struct gnttab_copy *batch, unsigned count)
> +{
> +    unsigned i;
> +    struct gnttab_copy *op;
> +
> +    BUG_ON(HYPERVISOR_grant_table_op(GNTTABOP_copy, batch, count));
> +    for (i = 0, op = batch; i < count; i++, op++)
> +        if (op->status == GNTST_eagain)
> +            gnttab_retry_eagain_copy(op);
> +}
> +
>  int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>  		    struct gnttab_map_grant_ref *kmap_ops,
>  		    struct page **pages, unsigned int count);
> diff --git a/include/xen/interface/grant_table.h b/include/xen/interface/grant_table.h
> index 7da811b..66cb734 100644
> --- a/include/xen/interface/grant_table.h
> +++ b/include/xen/interface/grant_table.h
> @@ -520,6 +520,7 @@ DEFINE_GUEST_HANDLE_STRUCT(gnttab_get_version);
>  #define GNTST_permission_denied (-8) /* Not enough privilege for operation.  */
>  #define GNTST_bad_page         (-9) /* Specified page was invalid for op.    */
>  #define GNTST_bad_copy_arg    (-10) /* copy arguments cross page boundary */
> +#define GNTST_eagain          (-12) /* Retry.                                */
>  
>  #define GNTTABOP_error_msgs {                   \
>      "okay",                                     \
> @@ -533,6 +534,7 @@ DEFINE_GUEST_HANDLE_STRUCT(gnttab_get_version);
>      "permission denied",                        \
>      "bad page",                                 \
>      "copy arguments cross page boundary"        \
> +    "retry"                                     \
>  }
>  
>  #endif /* __XEN_PUBLIC_GRANT_TABLE_H__ */
> 
> 
> On Aug 31, 2012, at 10:45 AM, Andres Lagar-Cavilla wrote:
> 
> > 
> > On Aug 31, 2012, at 10:32 AM, David Vrabel wrote:
> > 
> >> On 27/08/12 17:51, andres@lagarcavilla.org wrote:
> >>> From: Andres Lagar-Cavilla <andres@lagarcavilla.org>
> >>> 
> >>> Since Xen-4.2, hvm domains may have portions of their memory paged out. When a
> >>> foreign domain (such as dom0) attempts to map these frames, the map will
> >>> initially fail. The hypervisor returns a suitable errno, and kicks an
> >>> asynchronous page-in operation carried out by a helper. The foreign domain is
> >>> expected to retry the mapping operation until it eventually succeeds. The
> >>> foreign domain is not put to sleep because itself could be the one running the
> >>> pager assist (typical scenario for dom0).
> >>> 
> >>> This patch adds support for this mechanism for backend drivers using grant
> >>> mapping and copying operations. Specifically, this covers the blkback and
> >>> gntdev drivers (which map foregin grants), and the netback driver (which copies
> >>> foreign grants).
> >>> 
> >>> * Add GNTST_eagain, already exposed by Xen, to the grant interface.
> >>> * Add a retry method for grants that fail with GNTST_eagain (i.e. because the
> >>> target foregin frame is paged out).
> >>> * Insert hooks with appropriate macro decorators in the aforementioned drivers.
> >> 
> >> I think you should implement wrappers around HYPERVISOR_grant_table_op()
> >> have have the wrapper do the retries instead of every backend having to
> >> check for EAGAIN and issue the retries itself. Similar to the
> >> gnttab_map_grant_no_eagain() function you've already added.
> >> 
> >> Why do some operations not retry anyway?
> > 
> > All operations retry. The reason why I could not make it as elegant as you suggest is because grant operations are submitted in batches and their status(es?) later checked individually elsewhere. This is the case for netback. Note that both blkback and gntdev use a more linear structure with the gnttab_map_refs helper, which allows me to hide all the retry gore from those drivers into grant table code. Likewise for xenbus ring mapping.
> > 
> > In summary, outside of core grant table code, only the netback driver needs to check explicitly for retries, due to its batch-copy-delayed-per-slot-check structure.
> > 
> >> 
> >>> +void
> >>> +gnttab_retry_eagain_gop(unsigned int cmd, void *gop, int16_t *status,
> >>> +						const char *func)
> >>> +{
> >>> +	u8 delay = 1;
> >>> +
> >>> +	do {
> >>> +		BUG_ON(HYPERVISOR_grant_table_op(cmd, gop, 1));
> >>> +		if (*status == GNTST_eagain)
> >>> +			msleep(delay++);
> >>> +	} while ((*status == GNTST_eagain) && delay);
> >> 
> >> Terminating the loop when delay wraps is a bit subtle.  Why not make
> >> delay unsigned and check delay <= MAX_DELAY?
> > Good idea (MAX_DELAY == 256). I'd like to get Konrad's feedback before a re-spin.
> > 
> >> 
> >> Would it be sensible to ramp the delay faster?  Perhaps double each
> >> iteration with a maximum possible delay of e.g., 256 ms.
> > Generally speaking we've never seen past three retries. I am open to changing the algorithm but there is a significant possibility it won't matter at all.
> > 
> >> 
> >>> +#define gnttab_map_grant_no_eagain(_gop)                                    \
> >>> +do {                                                                        \
> >>> +    if ( HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, (_gop), 1))      \
> >>> +        BUG();                                                              \
> >>> +    if ((_gop)->status == GNTST_eagain)                                     \
> >>> +        gnttab_retry_eagain_map((_gop));                                    \
> >>> +} while(0)
> >> 
> >> Inline functions, please.
> > 
> > I want to retain the original context for debugging. Eventually we print __func__ if things go wrong.
> > 
> > Thanks, great feedback
> > Andres
> > 
> >> 
> >> David
> > 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 16:55:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 16:55:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7UVB-0000PW-TW; Fri, 31 Aug 2012 16:55:25 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T7UVA-0000PL-1R
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 16:55:24 +0000
Received: from [85.158.139.83:28653] by server-11.bemta-5.messagelabs.com id
	70/D7-24658-B7CE0405; Fri, 31 Aug 2012 16:55:23 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-5.tower-182.messagelabs.com!1346432105!27746329!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNzI0MTQ0\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11942 invoked from network); 31 Aug 2012 16:55:06 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-5.tower-182.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 31 Aug 2012 16:55:06 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7VGt0d3030473
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 31 Aug 2012 16:55:01 GMT
Received: from acsmt358.oracle.com (acsmt358.oracle.com [141.146.40.158])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7VGsxSv010321
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 31 Aug 2012 16:55:00 GMT
Received: from abhmt119.oracle.com (abhmt119.oracle.com [141.146.116.71])
	by acsmt358.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7VGsxh4026513; Fri, 31 Aug 2012 11:54:59 -0500
Received: from localhost.localdomain (/38.96.16.75)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 31 Aug 2012 09:54:58 -0700
Date: Fri, 31 Aug 2012 12:54:55 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Andres Lagar-Cavilla <andres.lagarcavilla@gmail.com>
Message-ID: <20120831165454.GE18929@localhost.localdomain>
References: <1346086287-17674-1-git-send-email-andres@lagarcavilla.org>
	<5040CAEA.7000600@citrix.com>
	<160CC375-2682-4CBF-B1EC-06A9F3E49A40@gridcentric.ca>
	<A976B10C-58BE-4660-89E9-A8F85CAB5F19@gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <A976B10C-58BE-4660-89E9-A8F85CAB5F19@gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Andres Lagar-Cavilla <andreslc@gridcentric.ca>, xen-devel@lists.xen.org,
	David Vrabel <david.vrabel@citrix.com>, andres@lagarcavilla.org
Subject: Re: [Xen-devel] [PATCH] Xen backend support for paged out grant
 targets.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Aug 31, 2012 at 11:42:32AM -0400, Andres Lagar-Cavilla wrote:
> Actually, I acted upon your feedback right away:
> 
> commit d5fab912caa1f0cf6be0a6773f502d3417a207b6
> Author: Andres Lagar-Cavilla <andres@lagarcavilla.org>
> Date:   Sun Aug 26 09:45:57 2012 -0400
> 
>     Xen backend support for paged out grant targets.
>     
>     Since Xen-4.2, hvm domains may have portions of their memory paged out. When a
>     foreign domain (such as dom0) attempts to map these frames, the map will
>     initially fail. The hypervisor returns a suitable errno, and kicks an
>     asynchronous page-in operation carried out by a helper. The foreign domain is
>     expected to retry the mapping operation until it eventually succeeds. The
>     foreign domain is not put to sleep because it could itself be the one running
>     the pager assist (the typical scenario for dom0).
>     
>     This patch adds support for this mechanism for backend drivers using grant
>     mapping and copying operations. Specifically, this covers the blkback and
>     gntdev drivers (which map foreign grants), and the netback driver (which copies
>     foreign grants).
>     
>     * Add GNTST_eagain, already exposed by Xen, to the grant interface.
>     * Add a retry method for grants that fail with GNTST_eagain (i.e. because the
>       target foreign frame is paged out).
>     * Insert hooks with appropriate macro decorators in the aforementioned drivers.
>     
>     The retry loop is invoked only if the grant operation status is GNTST_eagain.
>     It is guaranteed to leave a final status code different from GNTST_eagain. Any
>     other status code results in the same code path as before.
>     
>     The retry loop performs up to 256 attempts with increasing delays spread over
>     a 32-second period. It uses msleep to yield while waiting for the next retry.
>     


Would it make sense to yield to other processes (i.e., call schedule())? Or
perhaps run this from a workqueue?

I mean, the 'msleep' just looks like a hack... 32 seconds of doing
'msleep' on a 1-VCPU dom0 could trigger the watchdog, I think?

>     V2 after feedback from David Vrabel:
>     * Explicit MAX_DELAY instead of wrap-around delay into zero
>     * Abstract GNTST_eagain check into core grant table code for netback module.
>     
>     Signed-off-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
> 
> diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
> index 682633b..5610fd8 100644
> --- a/drivers/net/xen-netback/netback.c
> +++ b/drivers/net/xen-netback/netback.c
> @@ -635,9 +635,7 @@ static void xen_netbk_rx_action(struct xen_netbk *netbk)
>  		return;
>  
>  	BUG_ON(npo.copy_prod > ARRAY_SIZE(netbk->grant_copy_op));
> -	ret = HYPERVISOR_grant_table_op(GNTTABOP_copy, &netbk->grant_copy_op,
> -					npo.copy_prod);
> -	BUG_ON(ret != 0);
> +	gnttab_batch_copy_no_eagain(netbk->grant_copy_op, npo.copy_prod);
>  
>  	while ((skb = __skb_dequeue(&rxq)) != NULL) {
>  		sco = (struct skb_cb_overlay *)skb->cb;
> @@ -1460,18 +1458,15 @@ static void xen_netbk_tx_submit(struct xen_netbk *netbk)
>  static void xen_netbk_tx_action(struct xen_netbk *netbk)
>  {
>  	unsigned nr_gops;
> -	int ret;
>  
>  	nr_gops = xen_netbk_tx_build_gops(netbk);
>  
>  	if (nr_gops == 0)
>  		return;
> -	ret = HYPERVISOR_grant_table_op(GNTTABOP_copy,
> -					netbk->tx_copy_ops, nr_gops);
> -	BUG_ON(ret);
>  
> -	xen_netbk_tx_submit(netbk);
> +	gnttab_batch_copy_no_eagain(netbk->tx_copy_ops, nr_gops);
>  
> +	xen_netbk_tx_submit(netbk);
>  }
>  
>  static void xen_netbk_idx_release(struct xen_netbk *netbk, u16 pending_idx)
> diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
> index eea81cf..96543b2 100644
> --- a/drivers/xen/grant-table.c
> +++ b/drivers/xen/grant-table.c
> @@ -38,6 +38,7 @@
>  #include <linux/vmalloc.h>
>  #include <linux/uaccess.h>
>  #include <linux/io.h>
> +#include <linux/delay.h>
>  #include <linux/hardirq.h>
>  
>  #include <xen/xen.h>
> @@ -823,6 +824,26 @@ unsigned int gnttab_max_grant_frames(void)
>  }
>  EXPORT_SYMBOL_GPL(gnttab_max_grant_frames);
>  
> +#define MAX_DELAY 256
> +void
> +gnttab_retry_eagain_gop(unsigned int cmd, void *gop, int16_t *status,
> +						const char *func)
> +{
> +	unsigned delay = 1;
> +
> +	do {
> +		BUG_ON(HYPERVISOR_grant_table_op(cmd, gop, 1));
> +		if (*status == GNTST_eagain)
> +			msleep(delay++);
> +	} while ((*status == GNTST_eagain) && (delay < MAX_DELAY));
> +
> +	if (delay >= MAX_DELAY) {
> +		printk(KERN_ERR "%s: %s eagain grant\n", func, current->comm);
> +		*status = GNTST_bad_page;
> +	}
> +}
> +EXPORT_SYMBOL_GPL(gnttab_retry_eagain_gop);
> +
>  int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>  		    struct gnttab_map_grant_ref *kmap_ops,
>  		    struct page **pages, unsigned int count)
> @@ -836,6 +857,11 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>  	if (ret)
>  		return ret;
>  
> +	/* Retry eagain maps */
> +	for (i = 0; i < count; i++)
> +		if (map_ops[i].status == GNTST_eagain)
> +			gnttab_retry_eagain_map(map_ops + i);
> +
>  	if (xen_feature(XENFEAT_auto_translated_physmap))
>  		return ret;
>  
> diff --git a/drivers/xen/xenbus/xenbus_client.c b/drivers/xen/xenbus/xenbus_client.c
> index b3e146e..749f6a3 100644
> --- a/drivers/xen/xenbus/xenbus_client.c
> +++ b/drivers/xen/xenbus/xenbus_client.c
> @@ -490,8 +490,7 @@ static int xenbus_map_ring_valloc_pv(struct xenbus_device *dev,
>  
>  	op.host_addr = arbitrary_virt_to_machine(pte).maddr;
>  
> -	if (HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, &op, 1))
> -		BUG();
> +	gnttab_map_grant_no_eagain(&op);
>  
>  	if (op.status != GNTST_okay) {
>  		free_vm_area(area);
> @@ -572,8 +571,7 @@ int xenbus_map_ring(struct xenbus_device *dev, int gnt_ref,
>  	gnttab_set_map_op(&op, (unsigned long)vaddr, GNTMAP_host_map, gnt_ref,
>  			  dev->otherend_id);
>  
> -	if (HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, &op, 1))
> -		BUG();
> +	gnttab_map_grant_no_eagain(&op);
>  
>  	if (op.status != GNTST_okay) {
>  		xenbus_dev_fatal(dev, op.status,
> diff --git a/include/xen/grant_table.h b/include/xen/grant_table.h
> index 11e27c3..2fecfab 100644
> --- a/include/xen/grant_table.h
> +++ b/include/xen/grant_table.h
> @@ -43,6 +43,7 @@
>  #include <xen/interface/grant_table.h>
>  
>  #include <asm/xen/hypervisor.h>
> +#include <asm/xen/hypercall.h>
>  
>  #include <xen/features.h>
>  
> @@ -183,6 +184,43 @@ unsigned int gnttab_max_grant_frames(void);
>  
>  #define gnttab_map_vaddr(map) ((void *)(map.host_virt_addr))
>  
> +/* Retry a grant map/copy operation when the hypervisor returns GNTST_eagain.
> + * This is typically due to paged out target frames.
> + * Generic entry-point, use macro decorators below for specific grant
> + * operations.
> + * Will retry with delays of 1, 2, ..., 255 ms, i.e. up to 256 attempts over
> + * Return value in *status guaranteed to no longer be GNTST_eagain. */
> +void gnttab_retry_eagain_gop(unsigned int cmd, void *gop, int16_t *status,
> +                             const char *func);
> +
> +#define gnttab_retry_eagain_map(_gop)                       \
> +    gnttab_retry_eagain_gop(GNTTABOP_map_grant_ref, (_gop), \
> +                            &(_gop)->status, __func__)
> +
> +#define gnttab_retry_eagain_copy(_gop)                  \
> +    gnttab_retry_eagain_gop(GNTTABOP_copy, (_gop),      \
> +                            &(_gop)->status, __func__)
> +
> +#define gnttab_map_grant_no_eagain(_gop)                                    \
> +do {                                                                        \
> +    if (HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, (_gop), 1))       \
> +        BUG();                                                              \
> +    if ((_gop)->status == GNTST_eagain)                                     \
> +        gnttab_retry_eagain_map((_gop));                                    \
> +} while(0)
> +
> +static inline void
> +gnttab_batch_copy_no_eagain(struct gnttab_copy *batch, unsigned count)
> +{
> +    unsigned i;
> +    struct gnttab_copy *op;
> +
> +    BUG_ON(HYPERVISOR_grant_table_op(GNTTABOP_copy, batch, count));
> +    for (i = 0, op = batch; i < count; i++, op++)
> +        if (op->status == GNTST_eagain)
> +            gnttab_retry_eagain_copy(op);
> +}
> +
>  int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>  		    struct gnttab_map_grant_ref *kmap_ops,
>  		    struct page **pages, unsigned int count);
> diff --git a/include/xen/interface/grant_table.h b/include/xen/interface/grant_table.h
> index 7da811b..66cb734 100644
> --- a/include/xen/interface/grant_table.h
> +++ b/include/xen/interface/grant_table.h
> @@ -520,6 +520,7 @@ DEFINE_GUEST_HANDLE_STRUCT(gnttab_get_version);
>  #define GNTST_permission_denied (-8) /* Not enough privilege for operation.  */
>  #define GNTST_bad_page         (-9) /* Specified page was invalid for op.    */
>  #define GNTST_bad_copy_arg    (-10) /* copy arguments cross page boundary */
> +#define GNTST_eagain          (-12) /* Retry.                                */
>  
>  #define GNTTABOP_error_msgs {                   \
>      "okay",                                     \
> @@ -533,6 +534,7 @@ DEFINE_GUEST_HANDLE_STRUCT(gnttab_get_version);
>      "permission denied",                        \
>      "bad page",                                 \
>      "copy arguments cross page boundary"        \
> +    , "retry"                                   \
>  }
>  
>  #endif /* __XEN_PUBLIC_GRANT_TABLE_H__ */
> 
> 
> On Aug 31, 2012, at 10:45 AM, Andres Lagar-Cavilla wrote:
> 
> > 
> > On Aug 31, 2012, at 10:32 AM, David Vrabel wrote:
> > 
> >> On 27/08/12 17:51, andres@lagarcavilla.org wrote:
> >>> From: Andres Lagar-Cavilla <andres@lagarcavilla.org>
> >>> 
> >>> Since Xen-4.2, hvm domains may have portions of their memory paged out. When a
> >>> foreign domain (such as dom0) attempts to map these frames, the map will
> >>> initially fail. The hypervisor returns a suitable errno, and kicks an
> >>> asynchronous page-in operation carried out by a helper. The foreign domain is
> >>> expected to retry the mapping operation until it eventually succeeds. The
> >>> foreign domain is not put to sleep because it could itself be the one running
> >>> the pager assist (the typical scenario for dom0).
> >>> 
> >>> This patch adds support for this mechanism for backend drivers using grant
> >>> mapping and copying operations. Specifically, this covers the blkback and
> >>> gntdev drivers (which map foreign grants), and the netback driver (which copies
> >>> foreign grants).
> >>> 
> >>> * Add GNTST_eagain, already exposed by Xen, to the grant interface.
> >>> * Add a retry method for grants that fail with GNTST_eagain (i.e. because the
> >>> target foreign frame is paged out).
> >>> * Insert hooks with appropriate macro decorators in the aforementioned drivers.
> >> 
> >> I think you should implement wrappers around HYPERVISOR_grant_table_op()
> >> and have the wrapper do the retries instead of every backend having to
> >> check for EAGAIN and issue the retries itself. Similar to the
> >> gnttab_map_grant_no_eagain() function you've already added.
> >> 
> >> Why do some operations not retry anyway?
> > 
> > All operations retry. The reason I could not make it as elegant as you suggest is that grant operations are submitted in batches and their statuses are later checked individually elsewhere. This is the case for netback. Note that both blkback and gntdev use a more linear structure with the gnttab_map_refs helper, which allows me to hide all the retry gore from those drivers in grant table code. Likewise for xenbus ring mapping.
> > 
> > In summary, outside of core grant table code, only the netback driver needs to check explicitly for retries, due to its batch-copy-delayed-per-slot-check structure.
> > 
> >> 
> >>> +void
> >>> +gnttab_retry_eagain_gop(unsigned int cmd, void *gop, int16_t *status,
> >>> +						const char *func)
> >>> +{
> >>> +	u8 delay = 1;
> >>> +
> >>> +	do {
> >>> +		BUG_ON(HYPERVISOR_grant_table_op(cmd, gop, 1));
> >>> +		if (*status == GNTST_eagain)
> >>> +			msleep(delay++);
> >>> +	} while ((*status == GNTST_eagain) && delay);
> >> 
> >> Terminating the loop when delay wraps is a bit subtle.  Why not make
> >> delay unsigned and check delay <= MAX_DELAY?
> > Good idea (MAX_DELAY == 256). I'd like to get Konrad's feedback before a re-spin.
> > 
> >> 
> >> Would it be sensible to ramp the delay faster?  Perhaps double each
> >> iteration with a maximum possible delay of e.g., 256 ms.
> > Generally speaking, we've never seen more than three retries. I am open to changing the algorithm, but there is a significant possibility it won't matter at all.
> > 
> >> 
> >>> +#define gnttab_map_grant_no_eagain(_gop)                                    \
> >>> +do {                                                                        \
> >>> +    if ( HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, (_gop), 1))      \
> >>> +        BUG();                                                              \
> >>> +    if ((_gop)->status == GNTST_eagain)                                     \
> >>> +        gnttab_retry_eagain_map((_gop));                                    \
> >>> +} while(0)
> >> 
> >> Inline functions, please.
> > 
> > I want to retain the original context for debugging. Eventually we print __func__ if things go wrong.
> > 
> > Thanks, great feedback
> > Andres
> > 
> >> 
> >> David
> > 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 16:56:39 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 16:56:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7UWB-0000U3-H3; Fri, 31 Aug 2012 16:56:27 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>) id 1T7UWA-0000Tq-HN
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 16:56:26 +0000
Received: from [85.158.143.99:16467] by server-3.bemta-4.messagelabs.com id
	61/1E-08232-9BCE0405; Fri, 31 Aug 2012 16:56:25 +0000
X-Env-Sender: Ian.Jackson@eu.citrix.com
X-Msg-Ref: server-14.tower-216.messagelabs.com!1346432184!18264530!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEyMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32628 invoked from network); 31 Aug 2012 16:56:24 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-14.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 16:56:24 -0000
X-IronPort-AV: E=Sophos;i="4.80,348,1344211200"; d="scan'208";a="14294897"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 16:56:23 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.30.203.162) with Microsoft SMTP Server id
	8.3.279.1; Fri, 31 Aug 2012 17:56:23 +0100
Received: from mariner.cam.xci-test.com	([10.80.2.22]
	helo=mariner.uk.xensource.com ident=Debian-exim)	by
	norwich.cam.xci-test.com
	with esmtp (Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1T7UW7-0005JQ-FN; Fri, 31 Aug 2012 16:56:23 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.72)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1T7UW7-00081D-Bk;
	Fri, 31 Aug 2012 17:56:23 +0100
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <20544.60599.275302.789129@mariner.uk.xensource.com>
Date: Fri, 31 Aug 2012 17:56:23 +0100
To: Ian Campbell <ian.campbell@citrix.com>
In-Reply-To: <f9a7d8c439f9aa47b665.1346429767@cosworth.uk.xensource.com>
References: <f9a7d8c439f9aa47b665.1346429767@cosworth.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.2.1 (i486-pc-linux-gnu)
Cc: Dieter Bloms <xensource.com@bloms.de>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH V3] libxl/xl: implement support for guest
 ioport and irq permissions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("[PATCH V3] libxl/xl: implement support for guest ioport and irq permissions"):
> libxl/xl: implement support for guest ioport and irq permissions.

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

For 4.2.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 17:22:23 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 17:22:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7Uuy-0000sl-VW; Fri, 31 Aug 2012 17:22:04 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <kallan@suse.com>) id 1T7Uov-0000rr-LN
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 17:15:49 +0000
Received: from [85.158.139.83:55091] by server-12.bemta-5.messagelabs.com id
	3D/C3-18300-441F0405; Fri, 31 Aug 2012 17:15:48 +0000
X-Env-Sender: kallan@suse.com
X-Msg-Ref: server-6.tower-182.messagelabs.com!1346433347!24088724!1
X-Originating-IP: [137.65.248.74]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18721 invoked from network); 31 Aug 2012 17:15:47 -0000
Received: from novprvoes0310.provo.novell.com (HELO
	novprvoes0310.provo.novell.com) (137.65.248.74)
	by server-6.tower-182.messagelabs.com with SMTP;
	31 Aug 2012 17:15:47 -0000
Received: from INET-PRV-MTA by novprvoes0310.provo.novell.com
	with Novell_GroupWise; Fri, 31 Aug 2012 11:15:46 -0600
Message-Id: <50409CE002000076000B3B44@novprvoes0310.provo.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Fri, 31 Aug 2012 11:15:44 -0600
From: "Kirk Allan" <kallan@suse.com>
To: "Charles Arnold" <CARNOLD@suse.com>,
 "Jan Beulich" <JBeulich@suse.com>
References: <1344935141.5926.6.camel@zakaz.uk.xensource.com>
	<20120814100704.GA19704@bloms.de>
	<1346421542.27277.218.camel@zakaz.uk.xensource.com>
	<1346425278.27277.224.camel@zakaz.uk.xensource.com>
	<5040F7140200007800097E91@nat28.tlf.novell.com>
	<1346428292.27277.243.camel@zakaz.uk.xensource.com>
	<5040FB490200007800097ED2@nat28.tlf.novell.com>
In-Reply-To: <5040FB490200007800097ED2@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Disposition: inline
X-Mailman-Approved-At: Fri, 31 Aug 2012 17:22:03 +0000
Cc: Dieter Bloms <xensource.com@bloms.de>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Xen-users] Xen 4.2 TODO (io and irq parameter are
 not evaluated by xl)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



>>> On 8/31/2012 at 09:58 AM, in message
<5040FB490200007800097ED2@nat28.tlf.novell.com>, "Jan Beulich"
<JBeulich@suse.com> wrote: 
>>>> On 31.08.12 at 17:51, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>> On Fri, 2012-08-31 at 16:40 +0100, Jan Beulich wrote:
>>> >>> On 31.08.12 at 17:01, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>>> > --- a/docs/man/xl.cfg.pod.5	Fri Aug 31 12:03:55 2012 +0100
>>> > +++ b/docs/man/xl.cfg.pod.5	Fri Aug 31 15:54:42 2012 +0100
>>> > @@ -402,6 +402,30 @@ for more information on the "permissive"
>>> >  
>>> >  =back
>>> >  
>>> > +=item B<ioports=[ "IOPORT_RANGE", "IOPORT_RANGE", ... ]>
>>> 
>>> Is this really with quotes, and requiring an array?
>> 
>> I was mostly just following
>> http://cmrg.fifthhorseman.net/wiki/xen#grantingaccesstoserialhardwaretoadomU 
>> which suggested that this is the xm syntax too.
>> 
>>> > +
>>> > +=over 4
>>> > +
>>> > +Allow guest to access specific legacy I/O ports. Each B<IOPORT_RANGE>
>>> > +is given in hexadecimal and may be either a span, e.g. C<2f8-2ff>
>>> > +(inclusive) or a single I/O port C<2f8>.
>>> > +
>>> > +It is recommended to use this option only for trusted VMs under
>>> > +administrator control.
>>> > +
>>> > +=back
>>> > +
>>> > +=item B<irqs=[ NUMBER, NUMBER, ... ]>
>>> 
>>> Similarly here - is this really requiring an array? I ask because
>>> I had to look at this just last week for a colleague, and what
>>> we got out of inspection of examples/code was that a simple
>>> number (and a simple range without quotes above) are
>>> permitted too.
>> 
>> I had a look in create.py and opts.py and didn't see that, I suppose I
>> missed it.
>> 
>> I could implement support for either an array or a simple string/number
>> but it would complicate the code quite a bit. Is it really worth it?
> 
> Charles, Kirk, could you comment here?

In one of my Windows VM config files, I was able to get the VM to boot using ioports=['3f8-3ff'].  My goal was to do serial debugging of the Windows VM.  I also added irq=[4] to the config file.  However, I was not able to actually get a debug session to work.  The physical machine running windbg received a string from the VM, which gave me hope that it was working, but it never received further data, so the VM eventually booted without being attached to the debugger.
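
[Editor's note: for readers following along, the syntax under discussion corresponds to an xl.cfg fragment along these lines. This is a sketch assembled from the documentation quoted earlier in the thread, not taken from the patch itself; the COM1 values are illustrative, and note the documented option name is "irqs" whereas the message above uses "irq".]

```
# Illustrative xl guest config fragment (sketch; values are examples only):
# grant the guest direct access to the COM1 legacy I/O port range and its IRQ.
ioports = [ "3f8-3ff" ]
irqs    = [ 4 ]
```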

> 
> Thanks, Jan
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 17:24:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 17:24:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7UxE-0000y7-Hg; Fri, 31 Aug 2012 17:24:24 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ketuzsezr@gmail.com>) id 1T7UxC-0000y0-CZ
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 17:24:22 +0000
Received: from [85.158.143.99:56886] by server-2.bemta-4.messagelabs.com id
	9F/17-21239-543F0405; Fri, 31 Aug 2012 17:24:21 +0000
X-Env-Sender: ketuzsezr@gmail.com
X-Msg-Ref: server-4.tower-216.messagelabs.com!1346433859!22617213!1
X-Originating-IP: [209.85.160.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21048 invoked from network); 31 Aug 2012 17:24:20 -0000
Received: from mail-pb0-f45.google.com (HELO mail-pb0-f45.google.com)
	(209.85.160.45)
	by server-4.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 17:24:20 -0000
Received: by pbbjt11 with SMTP id jt11so5057098pbb.32
	for <xen-devel@lists.xen.org>; Fri, 31 Aug 2012 10:24:18 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:date:from:to:cc:subject:message-id:references:mime-version
	:content-type:content-disposition:in-reply-to:user-agent;
	bh=1nEKobH6ZOYmKrlqkNQBRlniEKL0cZZBVrU29oKY+8Y=;
	b=gGflz39y1u6M9Uf5OTuNE6261F/WT5wSe7trkSBdNspIAYQAI7tnCYFKCZCa3kLr/7
	MqDgwlCeQIXGLy9uIkrUVga/rf2H17qmDDSCmND+YYRrby1i4a3EikamKR59TLBLvvwg
	ZRdHyLfwbMGKt9LQvVffRkRqUPzhIlYlpQwZyD/YmhmjTJ7/M6Ds19dYljvvCPSwzoWm
	K698GA6489jdvOAg8nBFXRFM+TIXnY1YKwyOcMjfWgU9LcBY0KyuyQ1qU5hwiKxmJxGi
	YW4tXuIB6Lyb5E5f0YgGMhao7UmhdLp16R/nTAClRbpXemV9A6q8GJDdsz/oUl5KBzsN
	nwzQ==
Received: by 10.66.83.129 with SMTP id q1mr16466940pay.4.1346433858558;
	Fri, 31 Aug 2012 10:24:18 -0700 (PDT)
Received: from localhost.localdomain ([38.96.16.75])
	by mx.google.com with ESMTPS id wf7sm3825100pbc.34.2012.08.31.10.24.17
	(version=TLSv1/SSLv3 cipher=OTHER);
	Fri, 31 Aug 2012 10:24:18 -0700 (PDT)
Date: Fri, 31 Aug 2012 13:24:11 -0400
From: Konrad Rzeszutek Wilk <konrad@kernel.org>
To: "Ren, Yongjie" <yongjie.ren@intel.com>
Message-ID: <20120831172410.GA19756@localhost.localdomain>
References: <1B4B44D9196EFF41AE41FDA404FC0A1018AF31@SHSMSX101.ccr.corp.intel.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1B4B44D9196EFF41AE41FDA404FC0A1018AF31@SHSMSX101.ccr.corp.intel.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: 'Konrad Rzeszutek Wilk' <konrad.wilk@oracle.com>,
	'Keir Fraser' <keir@xen.org>, 'Ian Campbell' <Ian.Campbell@citrix.com>,
	'Jan Beulich' <JBeulich@suse.com>, 'xen-devel' <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen4.2-rc3 test result
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Aug 31, 2012 at 07:27:51AM +0000, Ren, Yongjie wrote:
> Hi All,
> We did a round testing for Xen 4.2 RC3 (CS# 25784) with Linux 3.5.2 dom0.
> We found no new issue, and verified 1 fixed bug.
> 
> Fixed bug (1):
> 1. long stop during the guest boot process with qcow image
>   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1821
>   -- Fixed by reverting a bad commit about "O_DIRECT to open IDE block device".
> 
> The following are some of the old issues which we guess are something important.
> Some of the old issues:
> 1. Fail to probe NIC driver to HVM domU (with 3.5/3.6 Linux as Dom0)
>   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1824
>   -- We already know the corrupt commit in Linux tree. Konrad will try to fix it.
> 2. Poor performance when do guest save/restore and migration with linux 3.x dom0
>   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1784
> 3. 'xl vcpu-set' can't decrease the vCPU number of a HVM guest
>   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1822
> 4. after detaching a VF from a guest, shutdown the guest is very slow
>   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1812
> 5. Dom0 cannot be shutdown before PCI detachment from guest and when pci assignment conflicts
>   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1826

Um, so you are assigning the same VF to two guests. I am surprised that
the tools even allowed you to do that. Was 'xm' allowing you to do that?

> 6. Guest hang after resuming from S3
>   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1828

Jan posted some patches to fix that. Can you test with an up-to-date
guest? (so not RHEL6U1 which does not have the fix).

> 7. Dom0 S3 resume fails
>   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1707

Yeah, that one is mine. Have some patches for that I will post soonish.
> 
> Best Regards,
>      Yongjie (Jay)
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 17:26:49 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 17:26:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7UzH-00015i-2P; Fri, 31 Aug 2012 17:26:31 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1T7UzG-00015U-5z
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 17:26:30 +0000
Received: from [85.158.139.83:37138] by server-4.bemta-5.messagelabs.com id
	6A/81-23042-5C3F0405; Fri, 31 Aug 2012 17:26:29 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-3.tower-182.messagelabs.com!1346433988!27953623!1
X-Originating-IP: [74.125.83.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8639 invoked from network); 31 Aug 2012 17:26:28 -0000
Received: from mail-ee0-f45.google.com (HELO mail-ee0-f45.google.com)
	(74.125.83.45)
	by server-3.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 17:26:28 -0000
Received: by eeke53 with SMTP id e53so1401194eek.32
	for <multiple recipients>; Fri, 31 Aug 2012 10:26:28 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=r8xbzAGi+jIt3k0kY5FJ1I4LGJCWhIH1biogZ+aL79E=;
	b=ePS65koididVMzGpZVXjiHu12Y4K55CePX/tlQWgJMfXerbjMtK1aBKyZ4HyBIFSBZ
	C8Z43x8n51TuO87pcQS/il5AWKThM5syO377FQcq52Ks1shlvk7zEXZJqUeEmThFYtjF
	y8EW2PqWBAXu2Nantkj/2nJdNmae68kviY/Dqcaztu7jzDx0Qw2fJPVhCHSgRuOASnd5
	OwgngV8UnvRVrVaLWozFt1aOj5/Xee74+CeJLQFuYm+oBoCXV5vg4oK47rxqp1PqS0Mf
	NzTkyL95OtNaf4KscvxbAQTKpMurtObm6pPIhY1c9SgIyYrPKGibYVnJq5EKj8hO5xqA
	CVlw==
MIME-Version: 1.0
Received: by 10.14.212.72 with SMTP id x48mr12073950eeo.40.1346433988195; Fri,
	31 Aug 2012 10:26:28 -0700 (PDT)
Received: by 10.14.215.195 with HTTP; Fri, 31 Aug 2012 10:26:28 -0700 (PDT)
In-Reply-To: <1346148361.9975.3.camel@zakaz.uk.xensource.com>
References: <1346148361.9975.3.camel@zakaz.uk.xensource.com>
Date: Fri, 31 Aug 2012 10:26:28 -0700
X-Google-Sender-Auth: ZlKrnhto9uQQhHo_2v64CdayJ-Q
Message-ID: <CAFLBxZZ7sBw-aKkF88jVkC5qbQetJEA2L_iPJ1iBXJ1RNDpjcw@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: xen-users <xen-users@lists.xen.org>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.2 TODO / Release Plan
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Aug 28, 2012 at 3:06 AM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> hypervisor, nice to have:
>
>     * [BUG(?)] Under certain conditions, the p2m_pod_sweep code will
>       stop halfway through searching, causing a guest to crash even if
>       there was zeroed memory available.  This is NOT a regression
>       from 4.1, and is a very rare case, so probably shouldn't be a
>       blocker.  (In fact, I'd be open to the idea that it should wait
>       until after the release to get more testing.)
>             (George Dunlap)

I probably won't get a chance to work on this until I get back
mid-September, so this shouldn't be a blocker.

 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 17:29:44 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 17:29:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7V29-0001Q7-9e; Fri, 31 Aug 2012 17:29:29 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ketuzsezr@gmail.com>) id 1T7V27-0001PJ-KT
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 17:29:27 +0000
X-Env-Sender: ketuzsezr@gmail.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1346434159!9005717!1
X-Originating-IP: [209.85.160.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21192 invoked from network); 31 Aug 2012 17:29:21 -0000
Received: from mail-pb0-f45.google.com (HELO mail-pb0-f45.google.com)
	(209.85.160.45)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 17:29:21 -0000
Received: by pbbjt11 with SMTP id jt11so5064003pbb.32
	for <xen-devel@lists.xen.org>; Fri, 31 Aug 2012 10:29:19 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:date:from:to:cc:subject:message-id:references:mime-version
	:content-type:content-disposition:in-reply-to:user-agent;
	bh=yZ99978xpmPAr1syyFtIonRgO3tqKQ/KE1PqrU0FNrg=;
	b=gmKX3XbvH2nB4Q5EO8t6nOd1Az4m8BJPTYnUON2hRACQOQvLezOJs5qXkdyUiUElBm
	W0F1qedqcIyPaEZFBhkIqXEHNe65XHc1hQM2xVDhsBE0R7D2dDrP9ljvOqBT2RuRRTnx
	YPkvq2cmzCq2CG5DGdHLlOQxsluJC1UARm8PD4fmHVwVF8cTHYZPs4jKSqcABKIbVYg2
	ihBlYLi1LWGiBskF+t84tvGNRGm03fc8d+aN2nZVSKuRLHaDcqgd+EM1+E4FDiM8kQkg
	BCKx0JtmTttTrrD2H6VAPqel3lQFLLLMq7u/Jvg/TO8aCpwwe4u9DpowND9JJla2hjfx
	eNSA==
Received: by 10.68.227.169 with SMTP id sb9mr18522054pbc.104.1346434159366;
	Fri, 31 Aug 2012 10:29:19 -0700 (PDT)
Received: from localhost.localdomain ([38.96.16.75])
	by mx.google.com with ESMTPS id hr1sm3838248pbc.23.2012.08.31.10.29.13
	(version=TLSv1/SSLv3 cipher=OTHER);
	Fri, 31 Aug 2012 10:29:13 -0700 (PDT)
Date: Fri, 31 Aug 2012 13:29:11 -0400
From: Konrad Rzeszutek Wilk <konrad@kernel.org>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20120831172910.GB19756@localhost.localdomain>
References: <1803123652.20120831114437@eikelenboom.it>
	<1346406953.27277.149.camel@zakaz.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1346406953.27277.149.camel@zakaz.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: Sander Eikelenboom <linux@eikelenboom.it>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] Upgrade to xen-unstable-rc4
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> > - Messages i haven't seen in dmesg with xen-4.1.3 in combination with 3.6-rc? kernel:
> [...ducks...]

I think that can easily be fixed. I seem to be seeing those too.
> 
> > For the rest, all my domains and pci-passthrough seem to be working :-)

Oh nice. My AMD machine reboots when I try HVM and PCI passthrough :-(
I haven't actually done any git bisection yet or tried to get serial
output.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 17:33:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 17:33:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7V5Q-0001jG-2Y; Fri, 31 Aug 2012 17:32:52 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1T7V5O-0001iE-K8
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 17:32:50 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-8.tower-27.messagelabs.com!1346434364!9006068!1
X-Originating-IP: [188.40.164.121]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29467 invoked from network); 31 Aug 2012 17:32:44 -0000
Received: from static.121.164.40.188.clients.your-server.de (HELO
	smtp.eikelenboom.it) (188.40.164.121)
	by server-8.tower-27.messagelabs.com with AES256-SHA encrypted SMTP;
	31 Aug 2012 17:32:44 -0000
Received: from 26-69-ftth.onsneteindhoven.nl ([88.159.69.26]:55166
	helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.72) (envelope-from <linux@eikelenboom.it>)
	id 1T7V28-0007D3-Ed; Fri, 31 Aug 2012 19:29:28 +0200
Date: Fri, 31 Aug 2012 19:32:35 +0200
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <1365913956.20120831193235@eikelenboom.it>
To: Konrad Rzeszutek Wilk <konrad@kernel.org>
In-Reply-To: <20120831172910.GB19756@localhost.localdomain>
References: <1803123652.20120831114437@eikelenboom.it>
	<1346406953.27277.149.camel@zakaz.uk.xensource.com>
	<20120831172910.GB19756@localhost.localdomain>
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] Upgrade to xen-unstable-rc4
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Friday, August 31, 2012, 7:29:11 PM, you wrote:

>> > - Messages i haven't seen in dmesg with xen-4.1.3 in combination with 3.6-rc? kernel:
>> [...ducks...]

> I think that can easily be fixed. I seem to be seeing those too.
>> 
>> > For the rest, all my domains and pci-passthrough seem to be working :-)

> Oh nice. My AMD machine reboots when I tried HVM and PCI passthrough :-(
> Hadn't yet actually done any git bisection or tried to get a serial
> output.

I'm mostly using PV and PCI passthrough; I haven't tried HVM and PCI passthrough yet, but I will sometime this weekend.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 17:33:15 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 17:33:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7V5b-0001kt-Fy; Fri, 31 Aug 2012 17:33:03 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ketuzsezr@gmail.com>) id 1T7V5a-0001k3-Gx
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 17:33:02 +0000
Received: from [85.158.139.83:61060] by server-4.bemta-5.messagelabs.com id
	8C/78-23042-D45F0405; Fri, 31 Aug 2012 17:33:01 +0000
X-Env-Sender: ketuzsezr@gmail.com
X-Msg-Ref: server-5.tower-182.messagelabs.com!1346434356!27750358!1
X-Originating-IP: [209.85.160.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10013 invoked from network); 31 Aug 2012 17:32:38 -0000
Received: from mail-pb0-f45.google.com (HELO mail-pb0-f45.google.com)
	(209.85.160.45)
	by server-5.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 17:32:38 -0000
Received: by pbbjt11 with SMTP id jt11so5068522pbb.32
	for <xen-devel@lists.xen.org>; Fri, 31 Aug 2012 10:32:36 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:date:from:to:cc:subject:message-id:references:mime-version
	:content-type:content-disposition:in-reply-to:user-agent;
	bh=XKBQmm9A/BEBGyr88WfVquyjXzXX3KGqqpYO0omjeq8=;
	b=zl6oo+3Z/FnZDyllQQPBiTq1b0YQURDOjtc+S5vGOkyOewhxQ1h1vmMW5KXnqYXtF9
	e0/aC8bsdPl4PIQe4JC83FaVBif9bZ4TrUgjdWgWBOkNZl649cdEB7HMOXyehxf9VafX
	v3pkSstiukEgi8y0ZKAWEZelEmrQZ2FtMlqXo66g9jUSagfwGjzF+fWBPqMBpDOezOVC
	bUUz3sJUzmDG/ZqogCqiRxrFoHPkgqGobGSUQ6WPyIC+42J5p9HyIJsfO/ulxOaUGg0y
	FpDI/b/k3JlxlFKklPgA7rsEx3SfZbQ4sRvuX8sJtCJVReOUA4cOE3z1jFxgdYsiOURQ
	GhcA==
Received: by 10.68.136.102 with SMTP id pz6mr18284594pbb.160.1346434355914;
	Fri, 31 Aug 2012 10:32:35 -0700 (PDT)
Received: from localhost.localdomain ([38.96.16.75])
	by mx.google.com with ESMTPS id jz10sm3846304pbc.8.2012.08.31.10.32.35
	(version=TLSv1/SSLv3 cipher=OTHER);
	Fri, 31 Aug 2012 10:32:35 -0700 (PDT)
Date: Fri, 31 Aug 2012 13:32:33 -0400
From: Konrad Rzeszutek Wilk <konrad@kernel.org>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20120831173232.GC19756@localhost.localdomain>
References: <76048170-772b-4365-87d2-00a3bf4aa5f8@default>
	<503F053D.5010001@uts.edu.au>
	<aa529a53-a56b-4215-81c2-f1fda4acedbd@default>
	<5040B6940200007800097CDE@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <5040B6940200007800097CDE@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: Dan Magenheimer <dan.magenheimer@oracle.com>, bing <Libing.Chen@uts.edu.au>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen 4.2.0 won't boot with FC17 dom0 on Dell
 Optiplex 790, boots fine with 4.1.3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Aug 31, 2012 at 12:05:24PM +0100, Jan Beulich wrote:
> >>> On 30.08.12 at 19:04, Dan Magenheimer <dan.magenheimer@oracle.com> wrote:
> >>  From: bing [mailto:Libing.Chen@uts.edu.au]
> >> Sent: Thursday, August 30, 2012 12:16 AM
> >> To: xen-devel@lists.xen.org 
> >> Subject: Re: [Xen-devel] Xen 4.2.0 won't boot with FC17 dom0 on Dell Optiplex 
> > 790, boots fine with
> >> 4.1.3
> >> 
> >> I have a similar crash on Xen 4.2.0-rc3 with HP DC7900, reboot right
> >> after loading Dom0 kernel. I found a workaround by pass xsave=0 option
> >> to the xen kernel. It seems to be CPU related, same setup (same xen,
> >> Dom0 kernel version) works fine on another Dell Precision 3200.
> > 
> > Thanks, just saw this as I wasn't cc'ed on your email.  xsave=0
> > did solve my problem also.
> 
> But that was known to be required only with certain, non-upstream
> broken kernels (which got their own xsave handling wrong). Can
> either of you confirm this is a problem with a plain upstream kernel
> now too (and if so, which version(s))?

I can confirm. I have a brand new AMD machine that goes belly-up with a
Fedora stock kernel, while a vanilla kernel (v3.5 or v3.6-rc2) works
just fine.
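[Editorial note: for reference, the xsave=0 workaround mentioned above goes on the hypervisor's (multiboot) command line, not on the dom0 kernel line. The GRUB 2 stanza below is an illustrative sketch only; the paths, kernel versions, and menu entry title are placeholders to adjust for your system.]

```
# Illustrative GRUB 2 menu entry -- paths and versions are placeholders.
menuentry 'Xen (xsave disabled workaround)' {
    multiboot /boot/xen.gz xsave=0
    module /boot/vmlinuz-3.5.0 root=/dev/sda1 ro
    module /boot/initrd.img-3.5.0
}
```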

> 
> Jan
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 17:36:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 17:36:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7V8J-00023C-3K; Fri, 31 Aug 2012 17:35:51 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T7V8H-000234-NK
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 17:35:49 +0000
Received: from [85.158.139.83:12512] by server-4.bemta-5.messagelabs.com id
	F8/1B-23042-5F5F0405; Fri, 31 Aug 2012 17:35:49 +0000
X-Env-Sender: Ian.Campbell@citrix.com
From xen-devel-bounces@lists.xen.org Fri Aug 31 17:36:04 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 17:36:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7V8J-00023C-3K; Fri, 31 Aug 2012 17:35:51 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1T7V8H-000234-NK
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 17:35:49 +0000
Received: from [85.158.139.83:12512] by server-4.bemta-5.messagelabs.com id
	F8/1B-23042-5F5F0405; Fri, 31 Aug 2012 17:35:49 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-182.messagelabs.com!1346434548!27880477!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEyMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11562 invoked from network); 31 Aug 2012 17:35:48 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-15.tower-182.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 17:35:48 -0000
X-IronPort-AV: E=Sophos;i="4.80,348,1344211200"; d="scan'208";a="14295337"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 17:35:47 +0000
Received: from [127.0.0.1] (10.80.16.67) by smtprelay.citrix.com
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1;
	Fri, 31 Aug 2012 18:35:47 +0100
Message-ID: <1346434547.5820.10.camel@dagon.hellion.org.uk>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Kirk Allan <kallan@suse.com>
Date: Fri, 31 Aug 2012 18:35:47 +0100
In-Reply-To: <50409CE002000076000B3B44@novprvoes0310.provo.novell.com>
References: <1344935141.5926.6.camel@zakaz.uk.xensource.com>
	<20120814100704.GA19704@bloms.de>
	<1346421542.27277.218.camel@zakaz.uk.xensource.com>
	<1346425278.27277.224.camel@zakaz.uk.xensource.com>
	<5040F7140200007800097E91@nat28.tlf.novell.com>
	<1346428292.27277.243.camel@zakaz.uk.xensource.com>
	<5040FB490200007800097ED2@nat28.tlf.novell.com>
	<50409CE002000076000B3B44@novprvoes0310.provo.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.3-1 
MIME-Version: 1.0
Cc: Charles Arnold <CARNOLD@suse.com>, Dieter Bloms <xensource.com@bloms.de>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>, Jan Beulich <JBeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Xen-users] Xen 4.2 TODO (io and irq parameter are
 not evaluated by xl)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2012-08-31 at 18:15 +0100, Kirk Allan wrote:

> > Charles, Kirk, could you comment here?
> 
> In one of my Windows vm config files, I was able to get the vm to
> boot using ioports=['3f8-3ff'].  My goal was to do serial debugging of
> the Windows vm.  I also added irq=[4] to the config file.  However, I
> was not able to actually get a debug session to work.  The physical
> machine running windbg received a string from the vm which gave me
> hope that it was working, but then it never received further data so
> the vm eventually booted without being attached to the debugger.

Thanks, the question was whether it would be useful to implement the
	ioports = '3f8-3ff'
	irq = 4
syntax as well as the
	ioports = ['3f8-3ff']
	irq = [4]
but it looks like you are actually using the array version anyway?

I think I'd rather avoid implementing both options unless there is a
strong reason to do so.
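
For illustration, here is a sketch of how the array form above would sit in a
guest config fragment (xl config files use Python-style assignments; the
values are the ones Kirk reported using for COM1 serial debugging):

```python
# Sketch of an xl guest config fragment using the list ("array") syntax
# discussed in this thread; values are Kirk's COM1 serial-debugging setup.
ioports = ['3f8-3ff']   # pass I/O ports 0x3f8-0x3ff (COM1) through to the guest
irq = [4]               # pass IRQ 4 (the COM1 interrupt) through to the guest
```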

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 17:37:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 17:37:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7V9H-0002Au-N5; Fri, 31 Aug 2012 17:36:51 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1T7V9G-0002AY-DA
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 17:36:50 +0000
Received: from [85.158.138.51:5729] by server-10.bemta-3.messagelabs.com id
	80/50-10411-136F0405; Fri, 31 Aug 2012 17:36:49 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-15.tower-174.messagelabs.com!1346434607!26203331!1
X-Originating-IP: [209.85.215.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26687 invoked from network); 31 Aug 2012 17:36:47 -0000
Received: from mail-ey0-f173.google.com (HELO mail-ey0-f173.google.com)
	(209.85.215.173)
	by server-15.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 17:36:47 -0000
Received: by eaac13 with SMTP id c13so1052481eaa.32
	for <multiple recipients>; Fri, 31 Aug 2012 10:36:47 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date
	:x-google-sender-auth:message-id:subject:from:to:cc:content-type;
	bh=q51Q4++Pv82o+B4hP4S2sjfqFZUphxaUmRN7QrkNNQg=;
	b=ZFTNv6Yv3SVRQT/KHRba8oUSqhjo8zsVam4/zEknWyJ5XOESZm3G0VvITNHQ8WtWgD
	fFKbKK4On7VsGQFYMseU6SkH8umWXgkg35NP8mu41iph+vQlu8pLvNVgdU9AhVkyns3N
	CnGr+IHKrVFt5e1+5h1QHTyPo0c3CbDSUGatk8CML+vHrfTRSq2b4dbEMosSxXnZcFKG
	3oQAUFRZxsd8Vje8yi2Z6EeAc5PSB7NvRtQgt+L1FkDqlpNzUhUKeQ0lhTU97tOhBzHY
	3Wvnk+IrIDPebJOAb1zDiEsTdUtby5D4o6yqkaXtJQ808eC/Izi+/v8i5LQvTnJM/50N
	eJDQ==
MIME-Version: 1.0
Received: by 10.14.4.201 with SMTP id 49mr12322430eej.0.1346434607117; Fri, 31
	Aug 2012 10:36:47 -0700 (PDT)
Received: by 10.14.215.195 with HTTP; Fri, 31 Aug 2012 10:36:47 -0700 (PDT)
In-Reply-To: <1346148361.9975.3.camel@zakaz.uk.xensource.com>
References: <1346148361.9975.3.camel@zakaz.uk.xensource.com>
Date: Fri, 31 Aug 2012 10:36:47 -0700
X-Google-Sender-Auth: eXCPuX8E2BwbNF-z7uB_BSHY_1U
Message-ID: <CAFLBxZaDmZ8bTQzZ_CpQpTros9WFK7Q96v-1Y3zXXWLdciMXTw@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: xen-users <xen-users@lists.xen.org>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.2 TODO / Release Plan
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Aug 28, 2012 at 3:06 AM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>     * [BUG] qemu-traditional has 50% cpu utilization on an idle
>       Windows system if USB is enabled. Not 100% clear whether this is
>       Xen or qemu.  George Dunlap is performing initial
>       investigations.

So it's hard to get directly comparable results, but I think that
early indications are that the biggest chunk of this is due to the
extra syscall overhead for a 64-bit dom0.  Data points are:
1. Ubuntu 12.04, 64-bit, pvops Ubuntu kernel, Xen 4.2-rc2, older AMD
system: qemu uses 50% on an idle system
2. XenServer built with Xen-4.2; (32-bit 2.6.32 dom0), Nehalem system:
qemu uses 2% on an idle system
3. Debian wheezy with the squeeze 2.6.32 32-bit kernel, older AMD
system: qemu uses 10% on an idle system

Looking at the traces, it seems that on the AMD box there were just a
whole lot more USB-related IO accesses than on the Nehalem system.  #2
had far fewer USB-related accesses than #1, but #3 had about twice as
many as #1.  So it seems likely to be a combination of something
weird that the USB driver in the guest is doing under AMD, and the
extra overhead of a 64-bit kernel.

So I think this is probably OK to take off the blocker list (although
it's probably something we want to look into further).

 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 17:39:59 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 17:39:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7VC9-0002a4-VF; Fri, 31 Aug 2012 17:39:49 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ketuzsezr@gmail.com>) id 1T7VC8-0002Zm-Pu
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 17:39:48 +0000
Received: from [85.158.143.35:62133] by server-2.bemta-4.messagelabs.com id
	4E/B0-21239-3E6F0405; Fri, 31 Aug 2012 17:39:47 +0000
X-Env-Sender: ketuzsezr@gmail.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1346434785!12841521!1
X-Originating-IP: [209.85.210.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6429 invoked from network); 31 Aug 2012 17:39:47 -0000
Received: from mail-pz0-f45.google.com (HELO mail-pz0-f45.google.com)
	(209.85.210.45)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 17:39:47 -0000
Received: by dadn15 with SMTP id n15so1984786dad.32
	for <xen-devel@lists.xen.org>; Fri, 31 Aug 2012 10:39:45 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:date:from:to:cc:subject:message-id:references:mime-version
	:content-type:content-disposition:in-reply-to:user-agent;
	bh=rP0z7w3IJvcFKKfLX0Ds1o36KiK7DhCI9kE39kDsSkk=;
	b=HKseVKfY0PxxtdO5QfjApyOZZVZP5d2DY+Hygezbl77twbUG9N1NNvA1w8mE3iE4zh
	fKsgFaiUcdHdgbtxVlj8Q49rc2dwxq0UpSxwCErBFGN9y7IyibPmpe+JFV71ClTtlTMH
	nLmXuROivrAOlXWkf8O0BTboAbfbXQ72bJLu45SQEhZYYa6A+nFpe1PM46Jbuf0urIBs
	FVV6gGqmt7OoFoKT9eiTaYI4O5EpRjkH6hGDZAaai5F7wHx4vsTOxdumXoZmJBI6MWxt
	U0nIiOdp1wWuvbvTtkLfWd1PtYmhh6dw2jNPq1okUIvTNAKGTeNZ1lqzb2uw8CoSTJHW
	u8mQ==
Received: by 10.68.241.99 with SMTP id wh3mr18803766pbc.16.1346434785057;
	Fri, 31 Aug 2012 10:39:45 -0700 (PDT)
Received: from localhost.localdomain ([38.96.16.75])
	by mx.google.com with ESMTPS id pj8sm3842886pbb.60.2012.08.31.10.39.44
	(version=TLSv1/SSLv3 cipher=OTHER);
	Fri, 31 Aug 2012 10:39:44 -0700 (PDT)
Date: Fri, 31 Aug 2012 13:39:42 -0400
From: Konrad Rzeszutek Wilk <konrad@kernel.org>
To: Sander Eikelenboom <linux@eikelenboom.it>
Message-ID: <20120831173941.GD19756@localhost.localdomain>
References: <1803123652.20120831114437@eikelenboom.it>
	<1346406953.27277.149.camel@zakaz.uk.xensource.com>
	<20120831172910.GB19756@localhost.localdomain>
	<1365913956.20120831193235@eikelenboom.it>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1365913956.20120831193235@eikelenboom.it>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Upgrade to xen-unstable-rc4
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Aug 31, 2012 at 07:32:35PM +0200, Sander Eikelenboom wrote:
> 
> Friday, August 31, 2012, 7:29:11 PM, you wrote:
> 
> >> > - Messages i haven't seen in dmesg with xen-4.1.3 in combination with 3.6-rc? kernel:
> >> [...ducks...]
> 
> > I think that can easily be fixed. I seem to be seeing those too.
> >> 
> >> > For the rest, all my domains and pci-passthrough seem to be working :-)
> 
> > Oh nice. My AMD machine reboots when I tried HVM and PCI passthrough :-(
> > Hadn't yet actually done any git bisection or tried to get a serial
> > output.
> 
> I'm mostly using PV and PCI passthrough, haven't tried HVM and PCI passthrough yet, but will sometime this weekend.

For v3.5 and later there is a bug in the Linux code (introduced by me)
which makes MSI in HVM PCI passthrough guests not work. You would need
to revert cd9db80e5257682a7f7ab245a2459648b3c8d268.

> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 17:42:52 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 17:42:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7VEw-0002o9-HT; Fri, 31 Aug 2012 17:42:42 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ketuzsezr@gmail.com>) id 1T7VEv-0002nr-Hz
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 17:42:41 +0000
Received: from [85.158.143.99:40973] by server-1.bemta-4.messagelabs.com id
	FA/43-12504-097F0405; Fri, 31 Aug 2012 17:42:40 +0000
X-Env-Sender: ketuzsezr@gmail.com
X-Msg-Ref: server-11.tower-216.messagelabs.com!1346434958!20367963!1
X-Originating-IP: [209.85.160.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6135 invoked from network); 31 Aug 2012 17:42:40 -0000
Received: from mail-pb0-f45.google.com (HELO mail-pb0-f45.google.com)
	(209.85.160.45)
	by server-11.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 17:42:40 -0000
Received: by pbbjt11 with SMTP id jt11so5082687pbb.32
	for <multiple recipients>; Fri, 31 Aug 2012 10:42:38 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:date:from:to:cc:subject:message-id:references:mime-version
	:content-type:content-disposition:in-reply-to:user-agent;
	bh=J5i9I/2685r/7kpFUQaqWb7PB5/mOnQ8xpyf6zs8IXk=;
	b=osyL3ZKbWXG2D6fAbh5Obdj8DfqKSGyE6X+5MAXt47EV1IHbI66bzaklbeTLoK17O2
	Z/2QgvGp1aRV4SEDyjRlCh4xvj/5d3DsKqu6tMCc0dbyUmRlNIb3bFXSfvHz4vwM5CMj
	oEEeTKBmqLhUY1PKprGjtUI65eS6i0r1cZvxSPjeDLNjmCRJhpkQPiiywIolDRR5JQR1
	8uUlYXjwOR9Jy1uSzgTWsiEIPoKvx+LOJelVdrQ8rg0U3uWC9/BxH2GQD4SDx6qduxsV
	w/dCuDTF45c098mM5vTsVj4ndfDHchzhGYT5kWEtu21jgR/H2Z1sZ1lT6TzBrb+/6/+t
	b2KA==
Received: by 10.66.75.225 with SMTP id f1mr16412297paw.35.1346434957981;
	Fri, 31 Aug 2012 10:42:37 -0700 (PDT)
Received: from localhost.localdomain ([38.96.16.75])
	by mx.google.com with ESMTPS id wh7sm3855496pbc.33.2012.08.31.10.42.37
	(version=TLSv1/SSLv3 cipher=OTHER);
	Fri, 31 Aug 2012 10:42:37 -0700 (PDT)
Date: Fri, 31 Aug 2012 13:42:35 -0400
From: Konrad Rzeszutek Wilk <konrad@kernel.org>
To: George Dunlap <George.Dunlap@eu.citrix.com>
Message-ID: <20120831174234.GE19756@localhost.localdomain>
References: <1346148361.9975.3.camel@zakaz.uk.xensource.com>
	<CAFLBxZaDmZ8bTQzZ_CpQpTros9WFK7Q96v-1Y3zXXWLdciMXTw@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAFLBxZaDmZ8bTQzZ_CpQpTros9WFK7Q96v-1Y3zXXWLdciMXTw@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: xen-users <xen-users@lists.xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.2 TODO / Release Plan
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Aug 31, 2012 at 10:36:47AM -0700, George Dunlap wrote:
> On Tue, Aug 28, 2012 at 3:06 AM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> >     * [BUG] qemu-traditional has 50% cpu utilization on an idle
> >       Windows system if USB is enabled. Not 100% clear whether this is
> >       Xen or qemu.  George Dunlap is performing initial
> >       investigations.
> 
> So it's hard to get directly comparable results, but I think that
> early indications are that the biggest chunk of this is due to the
> extra syscall overhead for a 64-bit dom0.  Data points are:
> 1. Ubuntu 12.04, 64-bit, pvops Ubuntu kernel, Xen 4.2-rc2, older AMD
> system: qemu uses 50% on an idle system

So what happens if you run with a 32-bit dom0? What is the kernel
version? There were some issues with extra traps being done due to the
cpuidle running (which it should not).

> 2. XenServer built with Xen-4.2; (32-bit 2.6.32 dom0), Nehalem system:
> qemu uses 2% on an idle system
> 3. Debian wheezy with the squeeze 2.6.32 32-bit kernel, older AMD
> system: qemu uses 10% on an idle system

Can you try booting with 'nohz=off'? What does 'perf top' (you need to
run v3.4 or later) give you?

> 
> Looking at the traces, it seems that on the AMD box there were just a
> whole lot more USB-related IO accesses than on the Nehalem system.  #2
> had far fewer USB-related accesses than #1, but #3 had about twice as
> many as #1.  So it seems likely to be a combination of something
> weird that the USB driver in the guest is doing under AMD, and the
> extra overhead of a 64-bit kernel.
> 
> So I think this is probably OK to take off the blocker list (although
> it's probably something we want to look into further).
> 
>  -George
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAFLBxZaDmZ8bTQzZ_CpQpTros9WFK7Q96v-1Y3zXXWLdciMXTw@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: xen-users <xen-users@lists.xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.2 TODO / Release Plan
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Aug 31, 2012 at 10:36:47AM -0700, George Dunlap wrote:
> On Tue, Aug 28, 2012 at 3:06 AM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> >     * [BUG] qemu-traditional has 50% cpu utilization on an idle
> >       Windows system if USB is enabled. Not 100% clear whether this is
> >       Xen or qemu.  George Dunlap is performing initial
> >       investigations.
> 
> So it's hard to get directly comparable results, but I think that
> early indications are that the biggest chunk of this is due to the
> extra syscall overhead for a 64-bit dom0.  Data points are:
> 1. Ubuntu 12.04, 64-bit, pvops Ubuntu kernel, Xen 4.2-rc2, older AMD
> system: qemu uses 50% on an idle system

So what happens if you run with a 32-bit dom0? What is the kernel
version? There were some issues with extra traps caused by cpuidle
running (which it should not be).

> 2. XenServer built with Xen-4.2 (32-bit 2.6.32 dom0), Nehalem system:
> qemu uses 2% on an idle system
> 3. Debian wheezy with the squeeze 2.6.32 32-bit kernel, older AMD
> system: qemu uses 10% on an idle system

Can you try booting with 'nohz=off'? What does 'perf top' (you need to
be running v3.4 or later) give you?
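A sketch of how that could be tried on a dom0 using GRUB 2 (the file path
and variable name are standard GRUB 2 conventions, not something given in
this thread):

```
# /etc/default/grub -- add nohz=off to the dom0 kernel command line,
# then regenerate the config (e.g. update-grub) and reboot:
GRUB_CMDLINE_LINUX_DEFAULT="nohz=off"
```

After rebooting, 'perf top' run as root (with a v3.4+ kernel and matching
perf) should show where the cycles are going.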

> 
> Looking at the traces, it seems that on the AMD box there were just a
> whole lot more USB-related IO accesses than on the Nehalem system.  #2
> had far fewer USB-related accesses than #1, but #3 had about twice as
> many as #1.  So it seems likely to be a combination of something
> weird that the USB driver in the guest is doing under AMD, and the
> extra overhead of a 64-bit kernel.
> 
> So I think this is probably OK to take off the blocker list (although
> it's probably something we want to look into further).
> 
>  -George
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 17:56:38 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 17:56:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7VS9-0003nr-Ej; Fri, 31 Aug 2012 17:56:21 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1T7VS8-0003nm-6V
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 17:56:20 +0000
Received: from [85.158.138.51:5695] by server-4.bemta-3.messagelabs.com id
	9B/6B-24831-3CAF0405; Fri, 31 Aug 2012 17:56:19 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-5.tower-174.messagelabs.com!1346435778!28044878!1
X-Originating-IP: [188.40.164.121]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19667 invoked from network); 31 Aug 2012 17:56:18 -0000
Received: from static.121.164.40.188.clients.your-server.de (HELO
	smtp.eikelenboom.it) (188.40.164.121)
	by server-5.tower-174.messagelabs.com with AES256-SHA encrypted SMTP;
	31 Aug 2012 17:56:18 -0000
Received: from 26-69-ftth.onsneteindhoven.nl ([88.159.69.26]:55367
	helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.72) (envelope-from <linux@eikelenboom.it>)
	id 1T7VOu-0007K8-Ia; Fri, 31 Aug 2012 19:53:00 +0200
Date: Fri, 31 Aug 2012 19:56:07 +0200
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <364328646.20120831195607@eikelenboom.it>
To: Konrad Rzeszutek Wilk <konrad@kernel.org>
In-Reply-To: <20120831173941.GD19756@localhost.localdomain>
References: <1803123652.20120831114437@eikelenboom.it>
	<1346406953.27277.149.camel@zakaz.uk.xensource.com>
	<20120831172910.GB19756@localhost.localdomain>
	<1365913956.20120831193235@eikelenboom.it>
	<20120831173941.GD19756@localhost.localdomain>
MIME-Version: 1.0
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Upgrade to xen-unstable-rc4
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello Konrad,

Friday, August 31, 2012, 7:39:42 PM, you wrote:

> On Fri, Aug 31, 2012 at 07:32:35PM +0200, Sander Eikelenboom wrote:
>> 
>> Friday, August 31, 2012, 7:29:11 PM, you wrote:
>> 
>> >> > - Messages i haven't seen in dmesg with xen-4.1.3 in combination with 3.6-rc? kernel:
>> >> [...ducks...]
>> 
>> > I think that can easily be fixed. I seem to be seeing those too.
>> >> 
>> >> > For the rest, all my domains and pci-passthrough seem to be working :-)
>> 
>> > Oh nice. My AMD machine reboots when I tried HVM and PCI passthrough :-(
>> > Hadn't yet actually done any git bisection or tried to get a serial
>> > output.
>> 
>> I'm mostly using PV and PCI passthrough, haven't tried HVM and PCI passthrough yet, but will sometime this weekend.

> For v3.5 and later there is a bug in the Linux code (introduced by me)
> which makes MSI in HVM PCI passthrough guests not work. You would need
> to revert cd9db80e5257682a7f7ab245a2459648b3c8d268.
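The revert itself, run from inside the guest kernel's source tree, would be
the usual:

```
git revert cd9db80e5257682a7f7ab245a2459648b3c8d268
```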

OK.
I found a problem with PCI passthrough to a PV guest as well; it seems to be IOMMU-related. I never saw IO_PAGE_FAULTs on 4.1.x:

xl dmesg:

(XEN) [2012-08-31 09:46:16] AMD-Vi: IO_PAGE_FAULT: domain = 14, device id = 0x0700, fault address = 0xa99bfee0
(XEN) [2012-08-31 09:46:16] AMD-Vi: IO_PAGE_FAULT: domain = 14, device id = 0x0700, fault address = 0xa99bff00
(XEN) [2012-08-31 09:46:16] AMD-Vi: IO_PAGE_FAULT: domain = 14, device id = 0x0700, fault address = 0xa99bff40
(XEN) [2012-08-31 09:46:16] AMD-Vi: IO_PAGE_FAULT: domain = 14, device id = 0x0700, fault address = 0xa99bff60
(XEN) [2012-08-31 09:46:16] AMD-Vi: IO_PAGE_FAULT: domain = 14, device id = 0x0700, fault address = 0xa99bffc0
(XEN) [2012-08-31 09:46:16] AMD-Vi: IO_PAGE_FAULT: domain = 14, device id = 0x0700, fault address = 0xa99bff80
(XEN) [2012-08-31 09:46:16] AMD-Vi: IO_PAGE_FAULT: domain = 14, device id = 0x0700, fault address = 0xa99bffe0
(XEN) [2012-08-31 09:56:20] grant_table.c:254:d0 Increased maptrack size to 2 frames
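Faults like these can be eyeballed, but a tiny script makes the pattern
obvious: all of the faulting addresses above land in the same 4 KiB page.
A sketch (the regex and the grouping are mine, not from the thread):

```python
import re
from collections import defaultdict

# Match AMD-Vi fault lines as printed by `xl dmesg`.
FAULT_RE = re.compile(
    r"AMD-Vi: IO_PAGE_FAULT: domain = (\d+), "
    r"device id = (0x[0-9a-fA-F]+), fault address = (0x[0-9a-fA-F]+)"
)

def fault_pages(lines):
    """Group faulting addresses by (domain, device id), reduced to 4 KiB page frames."""
    pages = defaultdict(set)
    for line in lines:
        m = FAULT_RE.search(line)
        if m:
            dom, dev, addr = m.group(1), m.group(2), int(m.group(3), 16)
            pages[(dom, dev)].add(addr & ~0xFFF)  # page frame of the fault
    return pages

log = [
    "(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 14, device id = 0x0700, fault address = 0xa99bfee0",
    "(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 14, device id = 0x0700, fault address = 0xa99bff00",
    "(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 14, device id = 0x0700, fault address = 0xa99bffe0",
]
print(fault_pages(log))  # all faults fall in page 0xa99bf000
```

If every fault sits in a single page just past a mapped buffer, that points
at a DMA mapping one page too short rather than a wild pointer.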


>> 
>> 
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel
>> 




-- 
Best regards,
 Sander                            mailto:linux@eikelenboom.it


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 18:00:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 18:00:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7VVe-0003wR-2n; Fri, 31 Aug 2012 17:59:58 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <kallan@suse.com>) id 1T7VVc-0003wJ-QD
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 17:59:57 +0000
Received: from [85.158.139.83:36838] by server-9.bemta-5.messagelabs.com id
	73/27-20529-C9BF0405; Fri, 31 Aug 2012 17:59:56 +0000
X-Env-Sender: kallan@suse.com
X-Msg-Ref: server-16.tower-182.messagelabs.com!1346435993!20620376!1
X-Originating-IP: [137.65.248.74]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10072 invoked from network); 31 Aug 2012 17:59:53 -0000
Received: from novprvoes0310.provo.novell.com (HELO
	novprvoes0310.provo.novell.com) (137.65.248.74)
	by server-16.tower-182.messagelabs.com with SMTP;
	31 Aug 2012 17:59:53 -0000
Received: from INET-PRV-MTA by novprvoes0310.provo.novell.com
	with Novell_GroupWise; Fri, 31 Aug 2012 11:59:52 -0600
Message-Id: <5040A73502000076000B3B82@novprvoes0310.provo.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.1 
Date: Fri, 31 Aug 2012 11:59:49 -0600
From: "Kirk Allan" <kallan@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>
References: <1344935141.5926.6.camel@zakaz.uk.xensource.com>
	<20120814100704.GA19704@bloms.de>
	<1346421542.27277.218.camel@zakaz.uk.xensource.com>
	<1346425278.27277.224.camel@zakaz.uk.xensource.com>
	<5040F7140200007800097E91@nat28.tlf.novell.com>
	<1346428292.27277.243.camel@zakaz.uk.xensource.com>
	<5040FB490200007800097ED2@nat28.tlf.novell.com>
	<50409CE002000076000B3B44@novprvoes0310.provo.novell.com>
	<1346434547.5820.10.camel@dagon.hellion.org.uk>
In-Reply-To: <1346434547.5820.10.camel@dagon.hellion.org.uk>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Charles Arnold <CARNOLD@suse.com>, DieterBloms <xensource.com@bloms.de>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>, Jan Beulich <JBeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Xen-users] Xen 4.2 TODO (io and irq parameter are
 not evaluated by xl)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



>>> On 8/31/2012 at 11:35 AM, in message
<1346434547.5820.10.camel@dagon.hellion.org.uk>, Ian Campbell
<Ian.Campbell@citrix.com> wrote: 
> On Fri, 2012-08-31 at 18:15 +0100, Kirk Allan wrote:
> 
>> > Charles, Kirk, could you comment here?
>> 
>> In one of my Windows VM config files, I was able to get the VM to
>> boot using ioports=['3f8-3ff'].  My goal was to do serial debugging of
>> the Windows vm.  I also added irq=[4] to the config file.  However, I
>> was not able to actually get a debug session to work.  The physical
>> machine running windbg received a string from the vm which gave me
>> hope that it was working, but then it never received further data so
>> the vm eventually booted without being attached to the debugger.
> 
> Thanks, the question was whether it would be useful to implement the
> 	ioports = '3f8-3ff'
> 	irq = 4
> syntax as well as the
> 	ioports = ['3f8-3ff']
> 	irq = [4]
> but it looks like you are actually using the array version anyway?

I first looked at this last week.  I found references to both formats, so I tried both.  Only the ioports = ['3f8-3ff'] and irq = [4] syntax allowed a VM to boot.
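For reference, the form that booted, written out as a config-file fragment
(all surrounding settings omitted):

```
# xl guest config fragment -- the list syntax reported to work:
ioports = [ '3f8-3ff' ]
irq = [ 4 ]
```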

> 
> I think I'd rather avoid implementing both options unless there is a
> strong reason to do so.

I don't have a strong reason to support both forms.

> 
> Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 18:02:03 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 18:02:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7VXP-00046z-J7; Fri, 31 Aug 2012 18:01:47 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dan.magenheimer@oracle.com>) id 1T7VXN-00046f-Qj
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 18:01:46 +0000
X-Env-Sender: dan.magenheimer@oracle.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1346436098!2204022!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNzI0MTQ0\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22505 invoked from network); 31 Aug 2012 18:01:39 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-11.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 31 Aug 2012 18:01:39 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7VI0QYv030558
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 31 Aug 2012 18:00:27 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7VI0Qwn020765
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 31 Aug 2012 18:00:26 GMT
Received: from abhmt102.oracle.com (abhmt102.oracle.com [141.146.116.54])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7VI0QI1032292; Fri, 31 Aug 2012 13:00:26 -0500
MIME-Version: 1.0
Message-ID: <01f7a5c7-1b78-4768-81b1-0be0a193dde0@default>
Date: Fri, 31 Aug 2012 10:59:48 -0700 (PDT)
From: Dan Magenheimer <dan.magenheimer@oracle.com>
To: Jan Beulich <JBeulich@suse.com>
References: <CAFLBxZaGmvSYgL4DcHcSwE_0shqKC0DRbf7ab=uhFrerPCxRig@mail.gmail.com>
	<f9a8835e-95d8-4495-8c9d-4fa769913549@default>
	<503F463E.90505@cantab.net>
	<2ecc5ed5-a95e-40e4-9e00-8d1378ce1eef@default>
	<5040B5A40200007800097CCC@nat28.tlf.novell.com>
In-Reply-To: <5040B5A40200007800097CCC@nat28.tlf.novell.com>
X-Priority: 3
X-Mailer: Oracle Beehive Extensions for Outlook 2.0.1.7  (607090) [OL
	12.0.6661.5003 (x86)]
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: David Vrabel <dvrabel@cantab.net>,
	George Dunlap <George.Dunlap@eu.citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen 4.3 release planning proposal
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: Friday, August 31, 2012 5:01 AM
> To: Dan Magenheimer
> Cc: David Vrabel; George Dunlap; xen-devel@lists.xen.org
> Subject: Re: [Xen-devel] Xen 4.3 release planning proposal
> 
> >>> On 30.08.12 at 18:11, Dan Magenheimer <dan.magenheimer@oracle.com> wrote:
> > Of course, 18 months is far too long a release cycle for this approach,
> > and 9 months may be too long as well.  I think a target cycle
> > of 6 months with a "window" of 6 weeks would be a step in
> > the right direction
> 
> I disagree. Even the few months of freeze we're having right
> now is already way too long. Nor do I personally consider the
> two-weeks-out-of-ten (approximately) model on the Linux side
> very nice. Having larger development windows and shorter
> stabilization periods is pretty desirable IMO, all the more on Xen
> where, despite its name, -unstable normally isn't really that
> unstable.

I apparently still haven't made my point clear.

With the current Xen model, there is a functionality freeze for
MONTHS during the rc cycles.  This guarantees a stampede
when the freeze ends, which almost certainly leads to a long
period of instability in xen-unstable.  I agree that "-unstable
really isn't that unstable" but IMHO that's mostly because of the
very long release cycle.

With the Linux model, the stampede still occurs but it gets
sorted out in linux-next and the cream that rises to the top
is merged into the next release at the next (brief) window.

(Did you know that Linus now mostly refuses any new functionality
that wasn't already in linux-next at least for a week or so
before the window?)

So on Linux the "development window" is 100% _minus_ "the window",
i.e. the stabilization period and the "development window"
happen concurrently and the only time new functionality cannot
be taken is during the "window" (during which linux-next is
unavailable).

While the root cause of the difference between Xen and Linux
release cycles may indeed be that Linux has more developers,
I think everyone agrees the current Xen model (18 months since
last release) is broken, so it may be worth re-examining the
process rather than just saying "we'll try to do better this
time and maybe get it down to 9 months"... even though that
was the goal last time and it didn't work.  See classic
definition of insanity.

Just my opinion though...

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: Friday, August 31, 2012 5:01 AM
> To: Dan Magenheimer
> Cc: David Vrabel; George Dunlap; xen-devel@lists.xen.org
> Subject: Re: [Xen-devel] Xen 4.3 release planning proposal
> 
> >>> On 30.08.12 at 18:11, Dan Magenheimer <dan.magenheimer@oracle.com> wrote:
> > Of course, 18 months is far too long a release cycle for this approach,
> > and 9 months may be too long as well.  I think a target cycle
> > of 6 months with a "window" of 6 weeks would be a step in
> > the right direction
> 
> I disagree. Even the few months of freeze we're having right
> now are already way too long. Nor do I personally consider the
> two-weeks-out-of-ten (approximately) model on the Linux side
> particularly nice. Having larger development windows and shorter
> stabilization periods is pretty desirable imo, all the more on Xen
> where, despite its name, -unstable normally really isn't that
> unstable.

I apparently still haven't made my point clear.

With the current Xen model, there is a functionality freeze for
MONTHS during the rc cycles.  This guarantees a stampede
when the freeze ends, which in turn almost certainly leads to a long
period of instability in xen-unstable.  I agree that "-unstable
really isn't that unstable" but IMHO that's mostly because of the
very long release cycle.

With the Linux model, the stampede still occurs but it gets
sorted out in linux-next and the cream that rises to the top
is merged into the next release at the next (brief) window.

(Did you know that Linus now mostly refuses any new functionality
that wasn't already in linux-next at least for a week or so
before the window?)

So on Linux the "development window" is 100% _minus_ "the window",
i.e. the stabilization period and the "development window"
happen concurrently and the only time new functionality cannot
be taken is during the "window" (during which linux-next is
unavailable).
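The arithmetic above is easy to sketch; the cycle lengths below are illustrative assumptions for the two models being compared, not measured Xen or Linux figures:

```python
# Toy model of how much of a release cycle can accept new functionality.
# All numbers are illustrative assumptions, not actual Xen/Linux figures.
def open_fraction(cycle_weeks, closed_weeks):
    """Fraction of the cycle during which new features can still land."""
    return (cycle_weeks - closed_weeks) / cycle_weeks

# Linux-style: only the ~2-week merge window is closed, since development
# continues in linux-next during the -rc stabilization period.
linux = open_fraction(10, 2)
# Freeze-style: a long rc freeze closes the tree for months at a time,
# e.g. a 26-week freeze inside a 78-week (18-month) cycle.
frozen = open_fraction(78, 26)
print(f"linux-style: {linux:.2f}, freeze-style: {frozen:.2f}")
```

The point is not the exact numbers but that overlapping stabilization with development keeps the tree open most of the time.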

While the root cause of the difference between Xen and Linux
release cycles may indeed be that Linux has more developers,
I think everyone agrees the current Xen model (18 months since
last release) is broken, so it may be worth re-examining the
process rather than just saying "we'll try to do better this
time and maybe get it down to 9 months"... even though that
was the goal last time and it didn't work.  See classic
definition of insanity.

Just my opinion though...

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 18:23:08 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 18:23:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7Vro-0004SW-H7; Fri, 31 Aug 2012 18:22:52 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1T7Vro-0004SR-1h
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 18:22:52 +0000
Received: from [85.158.143.35:10019] by server-1.bemta-4.messagelabs.com id
	F8/4A-12504-BF001405; Fri, 31 Aug 2012 18:22:51 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1346437370!12841522!1
X-Originating-IP: [62.200.22.115]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMjAwLjIyLjExNSA9PiAxMTEyMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16428 invoked from network); 31 Aug 2012 18:22:50 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (62.200.22.115)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 18:22:50 -0000
X-IronPort-AV: E=Sophos;i="4.80,348,1344211200"; d="scan'208";a="14295909"
Received: from lonpmailmx01.citrite.net ([10.30.203.162])
	by LONPIPO01.EU.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 18:22:50 +0000
Received: from Roger-2.local (10.31.3.230) by LONPMAILMX01.citrite.net
	(10.30.203.162) with Microsoft SMTP Server id 8.3.279.1;
	Fri, 31 Aug 2012 19:22:50 +0100
Message-ID: <504100F5.40707@citrix.com>
Date: Fri, 31 Aug 2012 11:22:45 -0700
From: Roger Pau Monne <roger.pau@citrix.com>
User-Agent: Postbox 3.0.5 (Macintosh/20120826)
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1344968106-6765-1-git-send-email-roger.pau@citrix.com>
	<1345020888.5926.115.camel@zakaz.uk.xensource.com>
	<50366574.1040307@citrix.com>
	<1346405255.27277.134.camel@zakaz.uk.xensource.com>
In-Reply-To: <1346405255.27277.134.camel@zakaz.uk.xensource.com>
X-Enigmail-Version: 1.2.2
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] libxl: fix usage of backend parameter and
 run_hotplug_scripts
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell wrote:
> On Thu, 2012-08-23 at 18:16 +0100, Roger Pau Monne wrote:
>> Ok, I will probably split this in two patches, one for libxl and one
>> for the parser.
> 
> Are you still intending to do this for 4.2.0? It's fast approaching...

Yes, sorry for the delay. I will try to submit a patch for it on Tuesday.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 18:24:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 18:24:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7VtN-0004Z8-8v; Fri, 31 Aug 2012 18:24:29 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <artnapor@yahoo.com>) id 1T7Vrw-0004TC-3h
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 18:23:00 +0000
Received: from [85.158.138.51:24731] by server-7.bemta-3.messagelabs.com id
	2D/BD-32000-30101405; Fri, 31 Aug 2012 18:22:59 +0000
X-Env-Sender: artnapor@yahoo.com
X-Msg-Ref: server-4.tower-174.messagelabs.com!1346437376!27943900!1
X-Originating-IP: [98.138.91.71]
X-SpamReason: No, hits=1.1 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_40_50, HTML_MESSAGE, ML_RADAR_SPEW_LINKS_12, ML_RADAR_SPEW_LINKS_14,
	ML_RADAR_SPEW_LINKS_6,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 412 invoked from network); 31 Aug 2012 18:22:56 -0000
Received: from nm6-vm1.bullet.mail.ne1.yahoo.com (HELO
	nm6-vm1.bullet.mail.ne1.yahoo.com) (98.138.91.71)
	by server-4.tower-174.messagelabs.com with SMTP;
	31 Aug 2012 18:22:56 -0000
Received: from [98.138.90.54] by nm6.bullet.mail.ne1.yahoo.com with NNFMP;
	31 Aug 2012 18:22:55 -0000
Received: from [98.138.88.236] by tm7.bullet.mail.ne1.yahoo.com with NNFMP;
	31 Aug 2012 18:22:55 -0000
Received: from [127.0.0.1] by omp1036.mail.ne1.yahoo.com with NNFMP;
	31 Aug 2012 18:22:55 -0000
X-Yahoo-Newman-Property: ymail-3
X-Yahoo-Newman-Id: 712616.17711.bm@omp1036.mail.ne1.yahoo.com
Received: (qmail 62681 invoked by uid 60001); 31 Aug 2012 18:22:55 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024;
	t=1346437375; bh=iMb34Kz4tgn94o+YOa0zNzMNaXP1PujevIDavh16UV4=;
	h=X-YMail-OSG:Received:X-Mailer:References:Message-ID:Date:From:Reply-To:Subject:To:In-Reply-To:MIME-Version:Content-Type;
	b=W59EbQ2Eddsn5Vgl/+E9mfY4csrDBqqLw1LKzlki7JmIkdlKtcH+07xdNY9AkQ6KuyCyPqGULZVoOXZYt8zExSYRk19XlklJEQnbksBhEFmiD4HJUMORQcTSSno1gxGf9nEYhwIHIZJDrdj5OPu8UlDBmgDPX4IsxIG8h78naz8=
DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=s1024; d=yahoo.com;
	h=X-YMail-OSG:Received:X-Mailer:References:Message-ID:Date:From:Reply-To:Subject:To:In-Reply-To:MIME-Version:Content-Type;
	b=dHYmbZEXAVHJnosc8gXUJ730Fh0G/+gtLysT13OZROwDhKO3BoD4rSkU3lv6BDAb8C8TZQlXRDm+mDqwBthgq30AlIx5zUvim0TGIk3V2ePd8089SXwX1y3qZSy4+BHG63L0GifELOhxq6+09V4lkH/hKGXWFsVlmlmjbAJ3ELU=;
X-YMail-OSG: mKAWIAwVM1m4Ih_41SXcIqbivA5KJTiUSJo5NzPgGpvSI0m
	40utfY5yvZzjfR6t2AyVggIaVVWvFZArQICfgeFlXtGzaWvbnFgN8qiJCAst
	s72zg_76mgIQJqmahh7NOruM.Wfxf.5B4_nAzP3EklWM7eS7HNIEE3CKY.3L
	ZqaZ4hK0KdJdMd_TvS32s62Y58SpmEtRM..E.p1Qtq405YlrdoRZkgF.4nQh
	4rCoQ._ZJLRtiBRnsDwUgcJE6kw.Udt6rKDXp6b.W3MTRN.BQtGpT9qYsX3r
	_snEqRtkAuaorrP.mI5CO5sYDiUGPe5zo46x1R1h2yvdGJWgmlVzUeurVuAs
	qQk7KJRPkSbPTJlIRpNYRbKOkEoWCuNv4wPQEcJ.xpLy75n5qDv6hDTvqMXr
	ip_PG0.7NK0VpmXJmtQLZUCuCpdoYevXifWvmeHBkv0qy8BOb8AUK_OqV5kc
	5oQdx1NTDo.vI2S6qwZprL.vKU9sRcEpp8K780ClbgDXFa3N7CoFDvaYnLbi
	0FLuk47v07sq3TDYnL751p1DlvZadxcpNX6uEzbessezEnnNdk4BgiIyLLlc
	CUlT_nEh.MQ--
Received: from [50.58.96.2] by web121001.mail.ne1.yahoo.com via HTTP;
	Fri, 31 Aug 2012 11:22:55 PDT
X-Mailer: YahooMailWebService/0.8.121.416
References: <1346343064.91089.YahooMailNeo@web121001.mail.ne1.yahoo.com>
	<831D55AF5A11D64C9B4B43F59EEBF720A31BCC42AB@FTLPMAILBOX02.citrite.net>
Message-ID: <1346437375.61994.YahooMailNeo@web121001.mail.ne1.yahoo.com>
Date: Fri, 31 Aug 2012 11:22:55 -0700 (PDT)
From: Art Napor <artnapor@yahoo.com>
To: Ross Philipson <Ross.Philipson@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
In-Reply-To: <831D55AF5A11D64C9B4B43F59EEBF720A31BCC42AB@FTLPMAILBOX02.citrite.net>
MIME-Version: 1.0
X-Mailman-Approved-At: Fri, 31 Aug 2012 18:24:28 +0000
Subject: Re: [Xen-devel] [PATCH v3 01/04] HVM firmware passthrough HVM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: Art Napor <artnapor@yahoo.com>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2376478597639085005=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2376478597639085005==
Content-Type: multipart/alternative; boundary="344044665-1831219007-1346437375=:61994"

--344044665-1831219007-1346437375=:61994
Content-Type: text/plain; charset=us-ascii

Thanks Ross,

I also came across the earlier smbios passthrough patch series. I'm looking to pass the DMI block from Dom0 to the DomU. Your earlier smbios patch series applied cleanly and built against 4.1.2, but the Dom0 smbios data didn't seem to make it into the HVM DomU. 

Are there any version or other changeset limitations that would prevent the patches from being manually applied to 4.1? 


http://lists.xen.org/archives/html/xen-devel/2012-02/msg01754.html
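One generic way to answer that question empirically is to dry-run each patch of the series against the 4.1 tree before attempting a manual backport. The snippet below is a self-contained toy: the tree and patch are stand-ins, not the real Xen checkout or smbios series.

```shell
# Self-contained sketch: dry-run a patch against a tree to see whether it
# applies before attempting a manual backport.  The tree and patch here are
# toy stand-ins for a real Xen 4.1 checkout and the smbios series.
workdir=$(mktemp -d)
mkdir -p "$workdir/tree"
printf 'line one\nline two\n' > "$workdir/tree/file.c"
cat > "$workdir/0001-example.patch" <<'EOF'
--- a/file.c
+++ b/file.c
@@ -1,2 +1,2 @@
 line one
-line two
+line two changed
EOF
cd "$workdir/tree"
# --dry-run reports success or failure without modifying the tree
if patch -p1 --dry-run < ../0001-example.patch >/dev/null 2>&1; then
    verdict="applies cleanly"
else
    verdict="needs manual backport"
fi
echo "$verdict"
```

Against a real 4.1 tree you would loop this over every file of the saved series and backport by hand whatever `--dry-run` rejects.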



________________________________
 From: Ross Philipson <Ross.Philipson@citrix.com>
To: Art Napor <artnapor@yahoo.com>; "xen-devel@lists.xen.org" <xen-devel@lists.xen.org> 
Sent: Thursday, August 30, 2012 4:32 PM
Subject: RE: [Xen-devel]  [PATCH v3 01/04] HVM firmware passthrough HVM
 
> Is there a patch series that applies to Xen 4.1.2 for this feature? 
> 
> 
> http://markmail.org/message/ipmyqtuaepe7d7iy

I missed the cut for 4.2 so the series is targeted at 4.3. There are no patches to support any earlier versions.

Thanks,
Ross
--344044665-1831219007-1346437375=:61994
Content-Type: text/html; charset=us-ascii

<html><body><div style="color:#000; background-color:#fff; font-family:times new roman, new york, times, serif;font-size:14pt">Thanks Ross,<br><br>I also came across the earlier smbios passthrough patch series. I'm looking to pass the DMI block from Dom0 to the DomU. Your earlier smbios patch series applied cleanly and built against 4.1.2, but the Dom0 smbios data didn't seem to make it into the HVM DomU. <br><br>Are there any version or other changeset limitations that would prevent the patches from being manually applied to 4.1? <br><div><br><span></span></div><div style="color: rgb(0, 0, 0); font-size: 18.6667px; font-family: times new roman,new york,times,serif; background-color: transparent; font-style: normal;"><span>http://lists.xen.org/archives/html/xen-devel/2012-02/msg01754.html<br></span></div><div><br></div>  <div style="font-family: times new roman, new york, times, serif; font-size: 14pt;"> <div style="font-family: times
 new roman, new york, times, serif; font-size: 12pt;"> <div dir="ltr"> <font size="2" face="Arial"> <hr size="1">  <b><span style="font-weight:bold;">From:</span></b> Ross Philipson &lt;Ross.Philipson@citrix.com&gt;<br> <b><span style="font-weight: bold;">To:</span></b> Art Napor &lt;artnapor@yahoo.com&gt;; "xen-devel@lists.xen.org" &lt;xen-devel@lists.xen.org&gt; <br> <b><span style="font-weight: bold;">Sent:</span></b> Thursday, August 30, 2012 4:32 PM<br> <b><span style="font-weight: bold;">Subject:</span></b> RE: [Xen-devel]  [PATCH v3 01/04] HVM firmware passthrough HVM<br> </font> </div> <br>
&gt; Is there a patch series that applies to Xen 4.1.2 for this feature? <br>&gt; <br>&gt; <br>&gt; http://markmail.org/message/ipmyqtuaepe7d7iy<br><br>I missed the cut for 4.2 so the series is targeted at 4.3. There are no patches to support any earlier versions.<br><br>Thanks,<br>Ross<br><br><br> </div> </div>  </div></body></html>
--344044665-1831219007-1346437375=:61994--


--===============2376478597639085005==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2376478597639085005==--


From xen-devel-bounces@lists.xen.org Fri Aug 31 18:53:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 18:53:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7WLD-0004tG-SZ; Fri, 31 Aug 2012 18:53:15 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <daniel.lezcano@linaro.org>) id 1T7WLC-0004tB-61
	for xen-devel@lists.xensource.com; Fri, 31 Aug 2012 18:53:14 +0000
Received: from [85.158.143.35:11406] by server-2.bemta-4.messagelabs.com id
	A8/C9-21239-91801405; Fri, 31 Aug 2012 18:53:13 +0000
X-Env-Sender: daniel.lezcano@linaro.org
X-Msg-Ref: server-15.tower-21.messagelabs.com!1346439189!14325487!1
X-Originating-IP: [209.85.160.43]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
From xen-devel-bounces@lists.xen.org Fri Aug 31 18:53:36 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 18:53:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7WLD-0004tG-SZ; Fri, 31 Aug 2012 18:53:15 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <daniel.lezcano@linaro.org>) id 1T7WLC-0004tB-61
	for xen-devel@lists.xensource.com; Fri, 31 Aug 2012 18:53:14 +0000
Received: from [85.158.143.35:11406] by server-2.bemta-4.messagelabs.com id
	A8/C9-21239-91801405; Fri, 31 Aug 2012 18:53:13 +0000
X-Env-Sender: daniel.lezcano@linaro.org
X-Msg-Ref: server-15.tower-21.messagelabs.com!1346439189!14325487!1
X-Originating-IP: [209.85.160.43]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26477 invoked from network); 31 Aug 2012 18:53:10 -0000
Received: from mail-pb0-f43.google.com (HELO mail-pb0-f43.google.com)
	(209.85.160.43)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 18:53:10 -0000
Received: by pbbrq2 with SMTP id rq2so6013321pbb.30
	for <xen-devel@lists.xensource.com>;
	Fri, 31 Aug 2012 11:53:08 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=message-id:date:from:user-agent:mime-version:to:cc:subject
	:references:in-reply-to:content-type:content-transfer-encoding
	:x-gm-message-state;
	bh=7k72qwkf1M1moY9MkzIuFPZvHsgMN/nQkpykOCoQC0E=;
	b=QB5JgD7uI4NKo8rUNoxbkyvWJF8q2rvlXT6HaE+BsXRobmh6C4ao9SiSJk0G467l/s
	V85/+GG/NR6oXAMaJKVEDPHHmuR+x/th9nidmAuBxSHWJxeV9PDSTZb+UQilEFXLLqki
	ePHfzM9w1wtG/GNYhnNIslG+dbllgBVKef95g2KUl80SEbLQJuoJVhqYqwfwGo52gY3t
	zuME0AbKRV2l/TAXf8zbfd68nO1HS06s8Vajbf0Z122tFBt1S27ngt+agcfmXrix8SDv
	9FmlSrClnqaEwIC6TtXh/1ez/vQhRGjew5GAJ97zLL6TSBolhjEkMoB6v/Cqy4YRH463
	uFOA==
Received: by 10.68.197.9 with SMTP id iq9mr19318035pbc.17.1346439188710;
	Fri, 31 Aug 2012 11:53:08 -0700 (PDT)
Received: from [10.11.9.9] ([38.96.16.75])
	by mx.google.com with ESMTPS id oa5sm3965288pbb.14.2012.08.31.11.53.06
	(version=SSLv3 cipher=OTHER); Fri, 31 Aug 2012 11:53:07 -0700 (PDT)
Message-ID: <50410811.70500@linaro.org>
Date: Fri, 31 Aug 2012 20:53:05 +0200
From: Daniel Lezcano <daniel.lezcano@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:12.0) Gecko/20120430 Thunderbird/12.0.1
MIME-Version: 1.0
To: rjw@sisk.pl
References: <1343164349-28550-1-git-send-email-daniel.lezcano@linaro.org>
	<20120724210629.GA1149@phenom.dumpdata.com>
In-Reply-To: <20120724210629.GA1149@phenom.dumpdata.com>
X-Gm-Message-State: ALoCoQns/2hAN/JgAWZZ8+/9Z9++R8roHjG96DLpgWshmSFSFAFRL6rm26tNEQWduj1A+HPJqPrh
Cc: xen-devel@lists.xensource.com, linaro-dev@lists.linaro.org,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, linux-pm@vger.kernel.org,
	Daniel Lezcano <daniel.lezcano@linaro.org>,
	linux-acpi@vger.kernel.org, patches@linaro.org, lenb@kernel.org
Subject: Re: [Xen-devel] [PATCH] acpi : remove power from acpi_processor_cx
	structure
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

T24gMDcvMjQvMjAxMiAxMTowNiBQTSwgS29ucmFkIFJ6ZXN6dXRlayBXaWxrIHdyb3RlOgo+IE9u
IFR1ZSwgSnVsIDI0LCAyMDEyIGF0IDExOjEyOjI5UE0gKzAyMDAsIERhbmllbCBMZXpjYW5vIHdy
b3RlOgo+PiBSZW1vdmUgdGhlIHBvd2VyIGZpZWxkIGFzIGl0IGlzIG5vdCB1c2VkLgo+Pgo+PiBT
aWduZWQtb2ZmLWJ5OiBEYW5pZWwgTGV6Y2FubyA8ZGFuaWVsLmxlemNhbm9AbGluYXJvLm9yZz4K
Pj4gQ2M6IEtvbnJhZCBSemVzenV0ZWsgV2lsayA8a29ucmFkLndpbGtAb3JhY2xlLmNvbT4KPiBB
Y2tlZC4KCkhpIFJhZmFlbCwKCkkgZGlkIG5vdCBzZWUgdGhpcyBwYXRjaCBnb2luZyBpbi4gSXMg
aXQgcG9zc2libGUgdG8gbWVyZ2UgaXQgPwoKVGhhbmtzIGluIGFkdmFuY2UKLS0gRGFuaWVsCgo+
PiAtLS0KPj4gIGRyaXZlcnMvYWNwaS9wcm9jZXNzb3JfaWRsZS5jICAgIHwgICAgMiAtLQo+PiAg
ZHJpdmVycy94ZW4veGVuLWFjcGktcHJvY2Vzc29yLmMgfCAgICAxIC0KPj4gIGluY2x1ZGUvYWNw
aS9wcm9jZXNzb3IuaCAgICAgICAgIHwgICAgMSAtCj4+ICAzIGZpbGVzIGNoYW5nZWQsIDAgaW5z
ZXJ0aW9ucygrKSwgNCBkZWxldGlvbnMoLSkKPj4KPj4gZGlmZiAtLWdpdCBhL2RyaXZlcnMvYWNw
aS9wcm9jZXNzb3JfaWRsZS5jIGIvZHJpdmVycy9hY3BpL3Byb2Nlc3Nvcl9pZGxlLmMKPj4gaW5k
ZXggZTU4OWMxOS4uOTA1ODJmYiAxMDA2NDQKPj4gLS0tIGEvZHJpdmVycy9hY3BpL3Byb2Nlc3Nv
cl9pZGxlLmMKPj4gKysrIGIvZHJpdmVycy9hY3BpL3Byb2Nlc3Nvcl9pZGxlLmMKPj4gQEAgLTQ4
Myw4ICs0ODMsNiBAQCBzdGF0aWMgaW50IGFjcGlfcHJvY2Vzc29yX2dldF9wb3dlcl9pbmZvX2Nz
dChzdHJ1Y3QgYWNwaV9wcm9jZXNzb3IgKnByKQo+PiAgCQlpZiAob2JqLT50eXBlICE9IEFDUElf
VFlQRV9JTlRFR0VSKQo+PiAgCQkJY29udGludWU7Cj4+ICAKPj4gLQkJY3gucG93ZXIgPSBvYmot
PmludGVnZXIudmFsdWU7Cj4+IC0KPj4gIAkJY3VycmVudF9jb3VudCsrOwo+PiAgCQltZW1jcHko
Jihwci0+cG93ZXIuc3RhdGVzW2N1cnJlbnRfY291bnRdKSwgJmN4LCBzaXplb2YoY3gpKTsKPj4g
IAo+PiBkaWZmIC0tZ2l0IGEvZHJpdmVycy94ZW4veGVuLWFjcGktcHJvY2Vzc29yLmMgYi9kcml2
ZXJzL3hlbi94ZW4tYWNwaS1wcm9jZXNzb3IuYwo+PiBpbmRleCA3ZmYyNTY5Li43ZWY5YzFkIDEw
MDY0NAo+PiAtLS0gYS9kcml2ZXJzL3hlbi94ZW4tYWNwaS1wcm9jZXNzb3IuYwo+PiArKysgYi9k
cml2ZXJzL3hlbi94ZW4tYWNwaS1wcm9jZXNzb3IuYwo+PiBAQCAtOTgsNyArOTgsNiBAQCBzdGF0
aWMgaW50IHB1c2hfY3h4X3RvX2h5cGVydmlzb3Ioc3RydWN0IGFjcGlfcHJvY2Vzc29yICpfcHIp
Cj4+ICAKPj4gIAkJZHN0X2N4LT50eXBlID0gY3gtPnR5cGU7Cj4+ICAJCWRzdF9jeC0+bGF0ZW5j
eSA9IGN4LT5sYXRlbmN5Owo+PiAtCQlkc3RfY3gtPnBvd2VyID0gY3gtPnBvd2VyOwo+PiAgCj4+
ICAJCWRzdF9jeC0+ZHBjbnQgPSAwOwo+PiAgCQlzZXRfeGVuX2d1ZXN0X2hhbmRsZShkc3RfY3gt
PmRwLCBOVUxMKTsKPj4gZGlmZiAtLWdpdCBhL2luY2x1ZGUvYWNwaS9wcm9jZXNzb3IuaCBiL2lu
Y2x1ZGUvYWNwaS9wcm9jZXNzb3IuaAo+PiBpbmRleCA2NGVjNjQ0Li5kYjQyN2ZhIDEwMDY0NAo+
PiAtLS0gYS9pbmNsdWRlL2FjcGkvcHJvY2Vzc29yLmgKPj4gKysrIGIvaW5jbHVkZS9hY3BpL3By
b2Nlc3Nvci5oCj4+IEBAIC01OSw3ICs1OSw2IEBAIHN0cnVjdCBhY3BpX3Byb2Nlc3Nvcl9jeCB7
Cj4+ICAJdTggZW50cnlfbWV0aG9kOwo+PiAgCXU4IGluZGV4Owo+PiAgCXUzMiBsYXRlbmN5Owo+
PiAtCXUzMiBwb3dlcjsKPj4gIAl1OCBibV9zdHNfc2tpcDsKPj4gIAljaGFyIGRlc2NbQUNQSV9D
WF9ERVNDX0xFTl07Cj4+ICB9Owo+PiAtLSAKPj4gMS43LjUuNAo+IF9fX19fX19fX19fX19fX19f
X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fCj4gbGluYXJvLWRldiBtYWlsaW5nIGxpc3QK
PiBsaW5hcm8tZGV2QGxpc3RzLmxpbmFyby5vcmcKPiBodHRwOi8vbGlzdHMubGluYXJvLm9yZy9t
YWlsbWFuL2xpc3RpbmZvL2xpbmFyby1kZXYKPgoKCi0tIAogPGh0dHA6Ly93d3cubGluYXJvLm9y
Zy8+IExpbmFyby5vcmcg4pSCIE9wZW4gc291cmNlIHNvZnR3YXJlIGZvciBBUk0gU29DcwoKRm9s
bG93IExpbmFybzogIDxodHRwOi8vd3d3LmZhY2Vib29rLmNvbS9wYWdlcy9MaW5hcm8+IEZhY2Vi
b29rIHwKPGh0dHA6Ly90d2l0dGVyLmNvbS8jIS9saW5hcm9vcmc+IFR3aXR0ZXIgfAo8aHR0cDov
L3d3dy5saW5hcm8ub3JnL2xpbmFyby1ibG9nLz4gQmxvZwoKCgpfX19fX19fX19fX19fX19fX19f
X19fX19fX19fX19fX19fX19fX19fX19fX19fXwpYZW4tZGV2ZWwgbWFpbGluZyBsaXN0Clhlbi1k
ZXZlbEBsaXN0cy54ZW4ub3JnCmh0dHA6Ly9saXN0cy54ZW4ub3JnL3hlbi1kZXZlbAo=

From xen-devel-bounces@lists.xen.org Fri Aug 31 19:33:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 19:33:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7Wy3-0005Et-G3; Fri, 31 Aug 2012 19:33:23 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andreslc@gridcentric.ca>) id 1T7Wy1-0005Eo-OQ
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 19:33:22 +0000
Received: from [85.158.138.51:40754] by server-10.bemta-3.messagelabs.com id
	F3/EE-10411-08111405; Fri, 31 Aug 2012 19:33:20 +0000
X-Env-Sender: andreslc@gridcentric.ca
X-Msg-Ref: server-6.tower-174.messagelabs.com!1346441597!19944610!1
X-Originating-IP: [209.85.220.173]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_20_30,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8894 invoked from network); 31 Aug 2012 19:33:18 -0000
Received: from mail-vc0-f173.google.com (HELO mail-vc0-f173.google.com)
	(209.85.220.173)
	by server-6.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 19:33:18 -0000
Received: by vcbgb23 with SMTP id gb23so4526128vcb.32
	for <xen-devel@lists.xen.org>; Fri, 31 Aug 2012 12:33:17 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type:x-gm-message-state;
	bh=PeMUy1TE8XTBIzWMAaMxcAqcSg3maQ8HeXeUZY++xsA=;
	b=a9t6cUVJC3CV0At1k8xtzCdQnGtIMCcKqjVYpi6UXOnCutQjFQkpMOO+Xl0RLSCeop
	/3JBZfayflUffMQIjQGP2dvxYIXXNlL1ODGhmro6uc3kWErvj9xgAT+wRfBo17hTh1EK
	Ib2sDC3X0p0gwDzXWOKnOrL2PD3R862iypCFN1FW5ANX/OFfmh4LR/TxR1UAQiaphzSw
	rG37o8EIgUSddHYymJu+RgP6Uif3SvThmJj44VlHmKOEB37hLJNE8M5E1SXrPrwqzePC
	cWZlWqLR4rAKnLK3qGJp63DlwwhR8Gl0UPHMnnWUfl14n6qZzofRCMWJiYQf4CKrybWI
	2W8g==
MIME-Version: 1.0
Received: by 10.58.249.195 with SMTP id yw3mr6855144vec.43.1346441597349; Fri,
	31 Aug 2012 12:33:17 -0700 (PDT)
Received: by 10.58.70.52 with HTTP; Fri, 31 Aug 2012 12:33:17 -0700 (PDT)
Received: by 10.58.70.52 with HTTP; Fri, 31 Aug 2012 12:33:17 -0700 (PDT)
In-Reply-To: <20120831165454.GE18929@localhost.localdomain>
References: <1346086287-17674-1-git-send-email-andres@lagarcavilla.org>
	<5040CAEA.7000600@citrix.com>
	<160CC375-2682-4CBF-B1EC-06A9F3E49A40@gridcentric.ca>
	<A976B10C-58BE-4660-89E9-A8F85CAB5F19@gmail.com>
	<20120831165454.GE18929@localhost.localdomain>
Date: Fri, 31 Aug 2012 15:33:17 -0400
Message-ID: <CAO=PTzoNfS8+DFXUM2+E9FaZSCJwTeRjecCXJwy_+CFyxGYpUQ@mail.gmail.com>
From: Andres Lagar-Cavilla <andreslc@gridcentric.ca>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
X-Gm-Message-State: ALoCoQmn25jVnL7nXe40LGNR9ShO4Tys3MaArtA+cc9urT8JXJw63OyhE6kNzBKaDDkkjgDSImi2
Cc: andres.lagarcavilla@gmail.com, xen-devel@lists.xen.org,
	Andres Lagar-Cavilla <andres@lagarcavilla.org>,
	David Vrabel <david.vrabel@citrix.com>
Subject: Re: [Xen-devel] [PATCH] Xen backend support for paged out grant
	targets.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3267962755280235207=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3267962755280235207==
Content-Type: multipart/alternative; boundary=047d7b86f3aa1ef11704c894ddee

--047d7b86f3aa1ef11704c894ddee
Content-Type: text/plain; charset=ISO-8859-1

But msleep will wind up calling schedule(). We definitely cannot afford to
pin down a dom0 vcpu when the pager itself is in dom0.

IIUC...

Andres
On Aug 31, 2012 12:55 PM, "Konrad Rzeszutek Wilk" <konrad.wilk@oracle.com>
wrote:

> On Fri, Aug 31, 2012 at 11:42:32AM -0400, Andres Lagar-Cavilla wrote:
> > Actually acted upon your feedback ipso facto:
> >
> > commit d5fab912caa1f0cf6be0a6773f502d3417a207b6
> > Author: Andres Lagar-Cavilla <andres@lagarcavilla.org>
> > Date:   Sun Aug 26 09:45:57 2012 -0400
> >
> >     Xen backend support for paged out grant targets.
> >
> >     Since Xen-4.2, hvm domains may have portions of their memory paged
> out. When a
> >     foreign domain (such as dom0) attempts to map these frames, the map
> will
> >     initially fail. The hypervisor returns a suitable errno, and kicks an
> >     asynchronous page-in operation carried out by a helper. The foreign
> domain is
> >     expected to retry the mapping operation until it eventually
> succeeds. The
> >     foreign domain is not put to sleep because it could itself be the one
> running the
> >     pager assist (typical scenario for dom0).
> >
> >     This patch adds support for this mechanism for backend drivers using
> grant
> >     mapping and copying operations. Specifically, this covers the
> blkback and
> >     gntdev drivers (which map foreign grants), and the netback driver
> (which copies
> >     foreign grants).
> >
> >     * Add GNTST_eagain, already exposed by Xen, to the grant interface.
> >     * Add a retry method for grants that fail with GNTST_eagain (i.e.
> because the
> >       target foreign frame is paged out).
> >     * Insert hooks with appropriate macro decorators in the
> aforementioned drivers.
> >
> >     The retry loop is only invoked if the grant operation status is
> GNTST_eagain.
> >     It guarantees to leave a new status code different from
> GNTST_eagain. Any other
> >     status code results in identical code execution as before.
> >
> >     The retry loop performs 256 attempts with increasing time intervals
> through a
> >     32 second period. It uses msleep to yield while waiting for the next
> retry.
> >
>
>
> Would it make sense to yield to other processes (so call schedule)? Or
> perhaps have this in a workqueue ?
>
> I mean the 'msleep' just looks like a hack... 32 seconds of doing
> 'msleep' on 1VCPU dom0 could trigger the watchdog I think?
>
> >     V2 after feedback from David Vrabel:
> >     * Explicit MAX_DELAY instead of wrap-around delay into zero
> >     * Abstract GNTST_eagain check into core grant table code for netback
> module.
> >
> >     Signed-off-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
> >
> > diff --git a/drivers/net/xen-netback/netback.c
> b/drivers/net/xen-netback/netback.c
> > index 682633b..5610fd8 100644
> > --- a/drivers/net/xen-netback/netback.c
> > +++ b/drivers/net/xen-netback/netback.c
> > @@ -635,9 +635,7 @@ static void xen_netbk_rx_action(struct xen_netbk
> *netbk)
> >               return;
> >
> >       BUG_ON(npo.copy_prod > ARRAY_SIZE(netbk->grant_copy_op));
> > -     ret = HYPERVISOR_grant_table_op(GNTTABOP_copy,
> &netbk->grant_copy_op,
> > -                                     npo.copy_prod);
> > -     BUG_ON(ret != 0);
> > +     gnttab_batch_copy_no_eagain(netbk->grant_copy_op, npo.copy_prod);
> >
> >       while ((skb = __skb_dequeue(&rxq)) != NULL) {
> >               sco = (struct skb_cb_overlay *)skb->cb;
> > @@ -1460,18 +1458,15 @@ static void xen_netbk_tx_submit(struct xen_netbk
> *netbk)
> >  static void xen_netbk_tx_action(struct xen_netbk *netbk)
> >  {
> >       unsigned nr_gops;
> > -     int ret;
> >
> >       nr_gops = xen_netbk_tx_build_gops(netbk);
> >
> >       if (nr_gops == 0)
> >               return;
> > -     ret = HYPERVISOR_grant_table_op(GNTTABOP_copy,
> > -                                     netbk->tx_copy_ops, nr_gops);
> > -     BUG_ON(ret);
> >
> > -     xen_netbk_tx_submit(netbk);
> > +     gnttab_batch_copy_no_eagain(netbk->tx_copy_ops, nr_gops);
> >
> > +     xen_netbk_tx_submit(netbk);
> >  }
> >
> >  static void xen_netbk_idx_release(struct xen_netbk *netbk, u16
> pending_idx)
> > diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
> > index eea81cf..96543b2 100644
> > --- a/drivers/xen/grant-table.c
> > +++ b/drivers/xen/grant-table.c
> > @@ -38,6 +38,7 @@
> >  #include <linux/vmalloc.h>
> >  #include <linux/uaccess.h>
> >  #include <linux/io.h>
> > +#include <linux/delay.h>
> >  #include <linux/hardirq.h>
> >
> >  #include <xen/xen.h>
> > @@ -823,6 +824,26 @@ unsigned int gnttab_max_grant_frames(void)
> >  }
> >  EXPORT_SYMBOL_GPL(gnttab_max_grant_frames);
> >
> > +#define MAX_DELAY 256
> > +void
> > +gnttab_retry_eagain_gop(unsigned int cmd, void *gop, int16_t *status,
> > +                                             const char *func)
> > +{
> > +     unsigned delay = 1;
> > +
> > +     do {
> > +             BUG_ON(HYPERVISOR_grant_table_op(cmd, gop, 1));
> > +             if (*status == GNTST_eagain)
> > +                     msleep(delay++);
> > +     } while ((*status == GNTST_eagain) && (delay < MAX_DELAY));
> > +
> > +     if (delay >= MAX_DELAY) {
> > +             printk(KERN_ERR "%s: %s eagain grant\n", func, current->comm);
> > +             *status = GNTST_bad_page;
> > +     }
> > +}
> > +EXPORT_SYMBOL_GPL(gnttab_retry_eagain_gop);
> > +
> >  int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
> >                   struct gnttab_map_grant_ref *kmap_ops,
> >                   struct page **pages, unsigned int count)
> > @@ -836,6 +857,11 @@ int gnttab_map_refs(struct gnttab_map_grant_ref
> *map_ops,
> >       if (ret)
> >               return ret;
> >
> > +     /* Retry eagain maps */
> > +     for (i = 0; i < count; i++)
> > +             if (map_ops[i].status == GNTST_eagain)
> > +                     gnttab_retry_eagain_map(map_ops + i);
> > +
> >       if (xen_feature(XENFEAT_auto_translated_physmap))
> >               return ret;
> >
> > diff --git a/drivers/xen/xenbus/xenbus_client.c
> b/drivers/xen/xenbus/xenbus_client.c
> > index b3e146e..749f6a3 100644
> > --- a/drivers/xen/xenbus/xenbus_client.c
> > +++ b/drivers/xen/xenbus/xenbus_client.c
> > @@ -490,8 +490,7 @@ static int xenbus_map_ring_valloc_pv(struct
> xenbus_device *dev,
> >
> >       op.host_addr = arbitrary_virt_to_machine(pte).maddr;
> >
> > -     if (HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, &op, 1))
> > -             BUG();
> > +     gnttab_map_grant_no_eagain(&op);
> >
> >       if (op.status != GNTST_okay) {
> >               free_vm_area(area);
> > @@ -572,8 +571,7 @@ int xenbus_map_ring(struct xenbus_device *dev, int
> gnt_ref,
> >       gnttab_set_map_op(&op, (unsigned long)vaddr, GNTMAP_host_map,
> gnt_ref,
> >                         dev->otherend_id);
> >
> > -     if (HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, &op, 1))
> > -             BUG();
> > +     gnttab_map_grant_no_eagain(&op);
> >
> >       if (op.status != GNTST_okay) {
> >               xenbus_dev_fatal(dev, op.status,
> > diff --git a/include/xen/grant_table.h b/include/xen/grant_table.h
> > index 11e27c3..2fecfab 100644
> > --- a/include/xen/grant_table.h
> > +++ b/include/xen/grant_table.h
> > @@ -43,6 +43,7 @@
> >  #include <xen/interface/grant_table.h>
> >
> >  #include <asm/xen/hypervisor.h>
> > +#include <asm/xen/hypercall.h>
> >
> >  #include <xen/features.h>
> >
> > @@ -183,6 +184,43 @@ unsigned int gnttab_max_grant_frames(void);
> >
> >  #define gnttab_map_vaddr(map) ((void *)(map.host_virt_addr))
> >
> > +/* Retry a grant map/copy operation when the hypervisor returns GNTST_eagain.
> > + * This is typically due to paged out target frames.
> > + * Generic entry-point, use macro decorators below for specific grant
> > + * operations.
> > + * Will retry for 1, 2, ... 255 ms, i.e. 256 times during 32 seconds.
> > + * Return value in *status guaranteed to no longer be GNTST_eagain. */
> > +void gnttab_retry_eagain_gop(unsigned int cmd, void *gop, int16_t *status,
> > +                             const char *func);
> > +
> > +#define gnttab_retry_eagain_map(_gop)                       \
> > +    gnttab_retry_eagain_gop(GNTTABOP_map_grant_ref, (_gop), \
> > +                            &(_gop)->status, __func__)
> > +
> > +#define gnttab_retry_eagain_copy(_gop)                  \
> > +    gnttab_retry_eagain_gop(GNTTABOP_copy, (_gop),      \
> > +                            &(_gop)->status, __func__)
> > +
> > +#define gnttab_map_grant_no_eagain(_gop)                              \
> > +do {                                                                  \
> > +    if (HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, (_gop), 1)) \
> > +        BUG();                                                        \
> > +    if ((_gop)->status == GNTST_eagain)                               \
> > +        gnttab_retry_eagain_map((_gop));                              \
> > +} while(0)
> > +
> > +static inline void
> > +gnttab_batch_copy_no_eagain(struct gnttab_copy *batch, unsigned count)
> > +{
> > +    unsigned i;
> > +    struct gnttab_copy *op;
> > +
> > +    BUG_ON(HYPERVISOR_grant_table_op(GNTTABOP_copy, batch, count));
> > +    for (i = 0, op = batch; i < count; i++, op++)
> > +        if (op->status == GNTST_eagain)
> > +            gnttab_retry_eagain_copy(op);
> > +}
> > +
> >  int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
> >                   struct gnttab_map_grant_ref *kmap_ops,
> >                   struct page **pages, unsigned int count);
> > diff --git a/include/xen/interface/grant_table.h
> b/include/xen/interface/grant_table.h
> > index 7da811b..66cb734 100644
> > --- a/include/xen/interface/grant_table.h
> > +++ b/include/xen/interface/grant_table.h
> > @@ -520,6 +520,7 @@ DEFINE_GUEST_HANDLE_STRUCT(gnttab_get_version);
> >  #define GNTST_permission_denied (-8) /* Not enough privilege for operation.  */
> >  #define GNTST_bad_page         (-9) /* Specified page was invalid for op.    */
> >  #define GNTST_bad_copy_arg    (-10) /* copy arguments cross page boundary */
> > +#define GNTST_eagain          (-12) /* Retry.                               */
> >
> >  #define GNTTABOP_error_msgs {                   \
> >      "okay",                                     \
> > @@ -533,6 +534,7 @@ DEFINE_GUEST_HANDLE_STRUCT(gnttab_get_version);
> >      "permission denied",                        \
> >      "bad page",                                 \
> >      "copy arguments cross page boundary"        \
> > +    "retry"                                     \
> >  }
> >
> >  #endif /* __XEN_PUBLIC_GRANT_TABLE_H__ */
> >
> >
> > On Aug 31, 2012, at 10:45 AM, Andres Lagar-Cavilla wrote:
> >
> > >
> > > On Aug 31, 2012, at 10:32 AM, David Vrabel wrote:
> > >
> > >> On 27/08/12 17:51, andres@lagarcavilla.org wrote:
> > >>> From: Andres Lagar-Cavilla <andres@lagarcavilla.org>
> > >>>
> > >>> Since Xen-4.2, hvm domains may have portions of their memory paged
> out. When a
> > >>> foreign domain (such as dom0) attempts to map these frames, the map
> will
> > >>> initially fail. The hypervisor returns a suitable errno, and kicks an
> > >>> asynchronous page-in operation carried out by a helper. The foreign
> domain is
> > >>> expected to retry the mapping operation until it eventually
> succeeds. The
> > >>> foreign domain is not put to sleep because it could itself be the one
> running the
> > >>> pager assist (typical scenario for dom0).
> > >>>
> > >>> This patch adds support for this mechanism for backend drivers using
> grant
> > >>> mapping and copying operations. Specifically, this covers the
> blkback and
> > >>> gntdev drivers (which map foreign grants), and the netback driver
> (which copies
> > >>> foreign grants).
> > >>>
> > >>> * Add GNTST_eagain, already exposed by Xen, to the grant interface.
> > >>> * Add a retry method for grants that fail with GNTST_eagain (i.e.
> because the
> > >>> target foreign frame is paged out).
> > >>> * Insert hooks with appropriate macro decorators in the
> aforementioned drivers.
> > >>
> > >> I think you should implement wrappers around
> HYPERVISOR_grant_table_op()
> > >> have have the wrapper do the retries instead of every backend having
> to
> > >> check for EAGAIN and issue the retries itself. Similar to the
> > >> gnttab_map_grant_no_eagain() function you've already added.
> > >>
> > >> Why do some operations not retry anyway?
> > >
> > > All operations retry. The reason why I could not make it as elegant as
> you suggest is because grant operations are submitted in batches and their
> status(es?) later checked individually elsewhere. This is the case for
> netback. Note that both blkback and gntdev use a more linear structure with
> the gnttab_map_refs helper, which allows me to hide all the retry gore from
> those drivers into grant table code. Likewise for xenbus ring mapping.
> > >
> > > In summary, outside of core grant table code, only the netback driver
> needs to check explicitly for retries, due to its
> batch-copy-delayed-per-slot-check structure.
> > >
> > >>
> > >>> +void
> > >>> +gnttab_retry_eagain_gop(unsigned int cmd, void *gop, int16_t *status,
> > >>> +                                         const char *func)
> > >>> +{
> > >>> + u8 delay = 1;
> > >>> +
> > >>> + do {
> > >>> +         BUG_ON(HYPERVISOR_grant_table_op(cmd, gop, 1));
> > >>> +         if (*status == GNTST_eagain)
> > >>> +                 msleep(delay++);
> > >>> + } while ((*status == GNTST_eagain) && delay);
> > >>
> > >> Terminating the loop when delay wraps is a bit subtle.  Why not make
> > >> delay unsigned and check delay <= MAX_DELAY?
> > > Good idea (MAX_DELAY == 256). I'd like to get Konrad's feedback before
> a re-spin.
> > >
> > >>
> > >> Would it be sensible to ramp the delay faster?  Perhaps double each
> > >> iteration with a maximum possible delay of e.g., 256 ms.
> > > Generally speaking we've never seen past three retries. I am open to
> changing the algorithm but there is a significant possibility it won't
> matter at all.
> > >
> > >>
> > >>> +#define gnttab_map_grant_no_eagain(_gop)                               \
> > >>> +do {                                                                   \
> > >>> +    if ( HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, (_gop), 1)) \
> > >>> +        BUG();                                                         \
> > >>> +    if ((_gop)->status == GNTST_eagain)                                \
> > >>> +        gnttab_retry_eagain_map((_gop));                               \
> > >>> +} while(0)
> > >>
> > >> Inline functions, please.
> > >
> > > I want to retain the original context for debugging. Eventually we
> print __func__ if things go wrong.
> > >
> > > Thanks, great feedback
> > > Andres
> > >
> > >>
> > >> David
> > >
> >
>

--047d7b86f3aa1ef11704c894ddee
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

<p dir=3D"ltr">But msleep will wind up calling schedule(). We definitely ca=
nnot afford to pin down a dom0 vcpu when the pager itself is in dom0.</p>
<p dir=3D"ltr">IIUC...</p>
<p dir=3D"ltr">Andres</p>
<div class=3D"gmail_quote">On Aug 31, 2012 12:55 PM, &quot;Konrad Rzeszutek=
 Wilk&quot; &lt;<a href=3D"mailto:konrad.wilk@oracle.com">konrad.wilk@oracl=
e.com</a>&gt; wrote:<br type=3D"attribution"><blockquote class=3D"gmail_quo=
te" style=3D"margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"=
>
On Fri, Aug 31, 2012 at 11:42:32AM -0400, Andres Lagar-Cavilla wrote:<br>
&gt; Actually acted upon your feedback ipso facto:<br>
&gt;<br>
&gt; commit d5fab912caa1f0cf6be0a6773f502d3417a207b6<br>
&gt; Author: Andres Lagar-Cavilla &lt;<a href=3D"mailto:andres@lagarcavilla=
.org">andres@lagarcavilla.org</a>&gt;<br>
&gt; Date: =A0 Sun Aug 26 09:45:57 2012 -0400<br>
&gt;<br>
&gt; =A0 =A0 Xen backend support for paged out grant targets.<br>
&gt;<br>
&gt; =A0 =A0 Since Xen-4.2, hvm domains may have portions of their memory p=
aged out. When a<br>
&gt; =A0 =A0 foreign domain (such as dom0) attempts to map these frames, th=
e map will<br>
&gt; =A0 =A0 initially fail. The hypervisor returns a suitable errno, and k=
icks an<br>
&gt; =A0 =A0 asynchronous page-in operation carried out by a helper. The fo=
reign domain is<br>
&gt; =A0 =A0 expected to retry the mapping operation until it eventually su=
cceeds. The<br>
&gt; =A0 =A0 foreign domain is not put to sleep because it could itself be the=
 one running the<br>
&gt; =A0 =A0 pager assist (typical scenario for dom0).<br>
&gt;<br>
&gt; =A0 =A0 This patch adds support for this mechanism for backend drivers=
 using grant<br>
&gt; =A0 =A0 mapping and copying operations. Specifically, this covers the =
blkback and<br>
&gt; =A0 =A0 gntdev drivers (which map foreign grants), and the netback dri=
ver (which copies<br>
&gt; =A0 =A0 foreign grants).<br>
&gt;<br>
&gt; =A0 =A0 * Add GNTST_eagain, already exposed by Xen, to the grant inter=
face.<br>
&gt; =A0 =A0 * Add a retry method for grants that fail with GNTST_eagain (i=
.e. because the<br>
&gt; =A0 =A0 =A0 target foregin frame is paged out).<br>
&gt; =A0 =A0 * Insert hooks with appropriate macro decorators in the aforem=
entioned drivers.<br>
&gt;<br>
&gt; =A0 =A0 The retry loop is only invoked if the grant operation status i=
s GNTST_eagain.<br>
&gt; =A0 =A0 It guarantees to leave a new status code different from GNTST_=
eagain. Any other<br>
&gt; =A0 =A0 status code results in identical code execution as before.<br>
&gt;<br>
&gt; =A0 =A0 The retry loop performs 256 attempts with increasing time inte=
rvals through a<br>
&gt; =A0 =A0 32 second period. It uses msleep to yield while waiting for th=
e next retry.<br>
&gt;<br>
<br>
<br>
Would it make sense to yield to other processes (so call schedule)? Or<br>
perhaps have this in a workqueue ?<br>
<br>
I mean the &#39;msleep&#39; just looks like a hack. .. 32 seconds of doing<=
br>
&#39;msleep&#39; on 1VCPU dom0 could trigger the watchdog I think?<br>
<br>
&gt; =A0 =A0 V2 after feedback from David Vrabel:<br>
&gt; =A0 =A0 * Explicit MAX_DELAY instead of wrap-around delay into zero<br=
>
&gt; =A0 =A0 * Abstract GNTST_eagain check into core grant table code for n=
etback module.<br>
&gt;<br>
&gt; =A0 =A0 Signed-off-by: Andres Lagar-Cavilla &lt;<a href=3D"mailto:andr=
es@lagarcavilla.org">andres@lagarcavilla.org</a>&gt;<br>
&gt;<br>
&gt; diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netba=
ck/netback.c<br>
&gt; index 682633b..5610fd8 100644<br>
&gt; --- a/drivers/net/xen-netback/netback.c<br>
&gt; +++ b/drivers/net/xen-netback/netback.c<br>
&gt; @@ -635,9 +635,7 @@ static void xen_netbk_rx_action(struct xen_netbk *=
netbk)<br>
&gt; =A0 =A0 =A0 =A0 =A0 =A0 =A0 return;<br>
&gt;<br>
&gt; =A0 =A0 =A0 BUG_ON(npo.copy_prod &gt; ARRAY_SIZE(netbk-&gt;grant_copy_=
op));<br>
&gt; - =A0 =A0 ret =3D HYPERVISOR_grant_table_op(GNTTABOP_copy, &amp;netbk-=
&gt;grant_copy_op,<br>
&gt; - =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =
=A0 npo.copy_prod);<br>
&gt; - =A0 =A0 BUG_ON(ret !=3D 0);<br>
&gt; + =A0 =A0 gnttab_batch_copy_no_eagain(netbk-&gt;grant_copy_op, npo.cop=
y_prod);<br>
&gt;<br>
&gt; =A0 =A0 =A0 while ((skb =3D __skb_dequeue(&amp;rxq)) !=3D NULL) {<br>
&gt; =A0 =A0 =A0 =A0 =A0 =A0 =A0 sco =3D (struct skb_cb_overlay *)skb-&gt;c=
b;<br>
&gt; @@ -1460,18 +1458,15 @@ static void xen_netbk_tx_submit(struct xen_net=
bk *netbk)<br>
&gt; =A0static void xen_netbk_tx_action(struct xen_netbk *netbk)<br>
&gt; =A0{<br>
&gt; =A0 =A0 =A0 unsigned nr_gops;<br>
&gt; - =A0 =A0 int ret;<br>
&gt;<br>
&gt; =A0 =A0 =A0 nr_gops =3D xen_netbk_tx_build_gops(netbk);<br>
&gt;<br>
&gt; =A0 =A0 =A0 if (nr_gops =3D=3D 0)<br>
&gt; =A0 =A0 =A0 =A0 =A0 =A0 =A0 return;<br>
&gt; - =A0 =A0 ret =3D HYPERVISOR_grant_table_op(GNTTABOP_copy,<br>
&gt; - =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =
=A0 netbk-&gt;tx_copy_ops, nr_gops);<br>
&gt; - =A0 =A0 BUG_ON(ret);<br>
&gt;<br>
&gt; - =A0 =A0 xen_netbk_tx_submit(netbk);<br>
&gt; + =A0 =A0 gnttab_batch_copy_no_eagain(netbk-&gt;tx_copy_ops, nr_gops);=
<br>
&gt;<br>
&gt; + =A0 =A0 xen_netbk_tx_submit(netbk);<br>
&gt; =A0}<br>
&gt;<br>
&gt; =A0static void xen_netbk_idx_release(struct xen_netbk *netbk, u16 pend=
ing_idx)<br>
&gt; diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c<br>
&gt; index eea81cf..96543b2 100644<br>
&gt; --- a/drivers/xen/grant-table.c<br>
&gt; +++ b/drivers/xen/grant-table.c<br>
&gt; @@ -38,6 +38,7 @@<br>
&gt; =A0#include &lt;linux/vmalloc.h&gt;<br>
&gt; =A0#include &lt;linux/uaccess.h&gt;<br>
&gt; =A0#include &lt;linux/io.h&gt;<br>
&gt; +#include &lt;linux/delay.h&gt;<br>
&gt; =A0#include &lt;linux/hardirq.h&gt;<br>
&gt;<br>
&gt; =A0#include &lt;xen/xen.h&gt;<br>
&gt; @@ -823,6 +824,26 @@ unsigned int gnttab_max_grant_frames(void)<br>
&gt; =A0}<br>
&gt; =A0EXPORT_SYMBOL_GPL(gnttab_max_grant_frames);<br>
&gt;<br>
&gt; +#define MAX_DELAY 256<br>
&gt; +void<br>
&gt; +gnttab_retry_eagain_gop(unsigned int cmd, void *gop, int16_t *status,=
<br>
&gt; + =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =
=A0 =A0 =A0 =A0 =A0 const char *func)<br>
&gt; +{<br>
&gt; + =A0 =A0 unsigned delay =3D 1;<br>
&gt; +<br>
&gt; + =A0 =A0 do {<br>
&gt; + =A0 =A0 =A0 =A0 =A0 =A0 BUG_ON(HYPERVISOR_grant_table_op(cmd, gop, 1=
));<br>
&gt; + =A0 =A0 =A0 =A0 =A0 =A0 if (*status =3D=3D GNTST_eagain)<br>
&gt; + =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 msleep(delay++);<br>
&gt; + =A0 =A0 } while ((*status =3D=3D GNTST_eagain) &amp;&amp; (delay &lt=
; MAX_DELAY));<br>
&gt; +<br>
&gt; + =A0 =A0 if (delay &gt;=3D MAX_DELAY) {<br>
&gt; + =A0 =A0 =A0 =A0 =A0 =A0 printk(KERN_ERR &quot;%s: %s eagain grant\n&=
quot;, func, current-&gt;comm);<br>
&gt; + =A0 =A0 =A0 =A0 =A0 =A0 *status =3D GNTST_bad_page;<br>
&gt; + =A0 =A0 }<br>
&gt; +}<br>
&gt; +EXPORT_SYMBOL_GPL(gnttab_retry_eagain_gop);<br>
&gt; +<br>
&gt; =A0int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,<br>
&gt; =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 struct gnttab_map_grant_ref *kmap_=
ops,<br>
&gt; =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 struct page **pages, unsigned int =
count)<br>
&gt; @@ -836,6 +857,11 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *=
map_ops,<br>
&gt; =A0 =A0 =A0 if (ret)<br>
&gt; =A0 =A0 =A0 =A0 =A0 =A0 =A0 return ret;<br>
&gt;<br>
&gt; + =A0 =A0 /* Retry eagain maps */<br>
&gt; + =A0 =A0 for (i =3D 0; i &lt; count; i++)<br>
&gt; + =A0 =A0 =A0 =A0 =A0 =A0 if (map_ops[i].status =3D=3D GNTST_eagain)<b=
r>
&gt; + =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 gnttab_retry_eagain_map(map_=
ops + i);<br>
&gt; +<br>
&gt; =A0 =A0 =A0 if (xen_feature(XENFEAT_auto_translated_physmap))<br>
&gt; =A0 =A0 =A0 =A0 =A0 =A0 =A0 return ret;<br>
&gt;<br>
&gt; diff --git a/drivers/xen/xenbus/xenbus_client.c b/drivers/xen/xenbus/x=
enbus_client.c<br>
&gt; index b3e146e..749f6a3 100644<br>
&gt; --- a/drivers/xen/xenbus/xenbus_client.c<br>
&gt; +++ b/drivers/xen/xenbus/xenbus_client.c<br>
&gt; @@ -490,8 +490,7 @@ static int xenbus_map_ring_valloc_pv(struct xenbus=
_device *dev,<br>
&gt;<br>
&gt; =A0 =A0 =A0 op.host_addr =3D arbitrary_virt_to_machine(pte).maddr;<br>
&gt;<br>
&gt; - =A0 =A0 if (HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, &amp;o=
p, 1))<br>
&gt; - =A0 =A0 =A0 =A0 =A0 =A0 BUG();<br>
&gt; + =A0 =A0 gnttab_map_grant_no_eagain(&amp;op);<br>
&gt;<br>
&gt; =A0 =A0 =A0 if (op.status !=3D GNTST_okay) {<br>
&gt; =A0 =A0 =A0 =A0 =A0 =A0 =A0 free_vm_area(area);<br>
&gt; @@ -572,8 +571,7 @@ int xenbus_map_ring(struct xenbus_device *dev, int=
 gnt_ref,<br>
&gt; =A0 =A0 =A0 gnttab_set_map_op(&amp;op, (unsigned long)vaddr, GNTMAP_ho=
st_map, gnt_ref,<br>
&gt; =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 dev-&gt;otherend_id);<=
br>
&gt;<br>
&gt; - =A0 =A0 if (HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, &amp;o=
p, 1))<br>
&gt; - =A0 =A0 =A0 =A0 =A0 =A0 BUG();<br>
&gt; + =A0 =A0 gnttab_map_grant_no_eagain(&amp;op);<br>
&gt;<br>
&gt; =A0 =A0 =A0 if (op.status !=3D GNTST_okay) {<br>
&gt; =A0 =A0 =A0 =A0 =A0 =A0 =A0 xenbus_dev_fatal(dev, op.status,<br>
&gt; diff --git a/include/xen/grant_table.h b/include/xen/grant_table.h<br>
&gt; index 11e27c3..2fecfab 100644<br>
&gt; --- a/include/xen/grant_table.h<br>
&gt; +++ b/include/xen/grant_table.h<br>
&gt; @@ -43,6 +43,7 @@<br>
&gt; =A0#include &lt;xen/interface/grant_table.h&gt;<br>
&gt;<br>
&gt; =A0#include &lt;asm/xen/hypervisor.h&gt;<br>
&gt; +#include &lt;asm/xen/hypercall.h&gt;<br>
&gt;<br>
&gt; =A0#include &lt;xen/features.h&gt;<br>
&gt;<br>
&gt; @@ -183,6 +184,43 @@ unsigned int gnttab_max_grant_frames(void);<br>
&gt;<br>
&gt; =A0#define gnttab_map_vaddr(map) ((void *)(map.host_virt_addr))<br>
&gt;<br>
&gt; +/* Retry a grant map/copy operation when the hypervisor returns GNTST=
_eagain.<br>
&gt; + * This is typically due to paged out target frames.<br>
&gt; + * Generic entry-point, use macro decorators below for specific grant=
<br>
&gt; + * operations.<br>
&gt; + * Will retry for 1, 2, ... 255 ms, i.e. 256 times during 32 seconds.=
<br>
&gt; + * Return value in *status guaranteed to no longer be GNTST_eagain. *=
/<br>
&gt; +void gnttab_retry_eagain_gop(unsigned int cmd, void *gop, int16_t *st=
atus,<br>
&gt; + =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 const char *=
func);<br>
&gt; +<br>
&gt; +#define gnttab_retry_eagain_map(_gop) =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0=
 =A0 =A0 =A0 \<br>
&gt; + =A0 =A0gnttab_retry_eagain_gop(GNTTABOP_map_grant_ref, (_gop), \<br>
&gt; + =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0&amp;(_gop)-&=
gt;status, __func__)<br>
&gt; +<br>
&gt; +#define gnttab_retry_eagain_copy(_gop) =A0 =A0 =A0 =A0 =A0 =A0 =A0 =
=A0 =A0\<br>
&gt; + =A0 =A0gnttab_retry_eagain_gop(GNTTABOP_copy, (_gop), =A0 =A0 =A0\<b=
r>
&gt; + =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0&amp;(_gop)-&=
gt;status, __func__)<br>
&gt; +<br>
&gt; +#define gnttab_map_grant_no_eagain(_gop) =A0 =A0 =A0 =A0 =A0 =A0 =A0 =
=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0\<br>
&gt; +do { =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =
=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0=
 =A0\<br>
&gt; + =A0 =A0if (HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, (_gop),=
 1)) =A0 =A0 =A0 \<br>
&gt; + =A0 =A0 =A0 =A0BUG(); =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =
=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0=
 =A0\<br>
&gt; + =A0 =A0if ((_gop)-&gt;status =3D=3D GNTST_eagain) =A0 =A0 =A0 =A0 =
=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 \<br>
&gt; + =A0 =A0 =A0 =A0gnttab_retry_eagain_map((_gop)); =A0 =A0 =A0 =A0 =A0 =
=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0\<br>
&gt; +} while(0)<br>
&gt; +<br>
&gt; +static inline void<br>
&gt; +gnttab_batch_copy_no_eagain(struct gnttab_copy *batch, unsigned count=
)<br>
&gt; +{<br>
&gt; + =A0 =A0unsigned i;<br>
&gt; + =A0 =A0struct gnttab_copy *op;<br>
&gt; +<br>
&gt; + =A0 =A0BUG_ON(HYPERVISOR_grant_table_op(GNTTABOP_copy, batch, count)=
);<br>
&gt; + =A0 =A0for (i =3D 0, op =3D batch; i &lt; count; i++, op++)<br>
&gt; + =A0 =A0 =A0 =A0if (op-&gt;status =3D=3D GNTST_eagain)<br>
&gt; + =A0 =A0 =A0 =A0 =A0 =A0gnttab_retry_eagain_copy(op);<br>
&gt; +}<br>
&gt; +<br>
&gt; =A0int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,<br>
&gt; =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 struct gnttab_map_grant_ref *kmap_=
ops,<br>
&gt; =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 struct page **pages, unsigned int =
count);<br>
&gt; diff --git a/include/xen/interface/grant_table.h b/include/xen/interfa=
ce/grant_table.h<br>
&gt; index 7da811b..66cb734 100644<br>
&gt; --- a/include/xen/interface/grant_table.h<br>
&gt; +++ b/include/xen/interface/grant_table.h<br>
&gt; @@ -520,6 +520,7 @@ DEFINE_GUEST_HANDLE_STRUCT(gnttab_get_version);<br=
>
&gt; =A0#define GNTST_permission_denied (-8) /* Not enough privilege for op=
eration. =A0*/<br>
&gt; =A0#define GNTST_bad_page =A0 =A0 =A0 =A0 (-9) /* Specified page was i=
nvalid for op. =A0 =A0*/<br>
&gt; =A0#define GNTST_bad_copy_arg =A0 =A0(-10) /* copy arguments cross pag=
e boundary */<br>
&gt; +#define GNTST_eagain =A0 =A0 =A0 =A0 =A0(-12) /* Retry. =A0 =A0 =A0 =
=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0*/<br>
&gt;<br>
&gt; =A0#define GNTTABOP_error_msgs { =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 \=
<br>
&gt; =A0 =A0 =A0&quot;okay&quot;, =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =
=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 \<br>
&gt; @@ -533,6 +534,7 @@ DEFINE_GUEST_HANDLE_STRUCT(gnttab_get_version);<br=
>
&gt; =A0 =A0 =A0&quot;permission denied&quot;, =A0 =A0 =A0 =A0 =A0 =A0 =A0 =
=A0 =A0 =A0 =A0 =A0\<br>
&gt; =A0 =A0 =A0&quot;bad page&quot;, =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =
=A0 =A0 =A0 =A0 =A0 =A0 =A0 \<br>
&gt; =A0 =A0 =A0&quot;copy arguments cross page boundary&quot; =A0 =A0 =A0 =
=A0\<br>
&gt; + =A0 =A0&quot;retry&quot; =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0=
 =A0 =A0 =A0 =A0 =A0 =A0 =A0 \<br>
&gt; =A0}<br>
&gt;<br>
&gt; =A0#endif /* __XEN_PUBLIC_GRANT_TABLE_H__ */<br>
&gt;<br>
&gt;<br>
&gt; On Aug 31, 2012, at 10:45 AM, Andres Lagar-Cavilla wrote:<br>
&gt;<br>
&gt; &gt;<br>
&gt; &gt; On Aug 31, 2012, at 10:32 AM, David Vrabel wrote:<br>
&gt; &gt;<br>
&gt; &gt;&gt; On 27/08/12 17:51, <a href=3D"mailto:andres@lagarcavilla.org"=
>andres@lagarcavilla.org</a> wrote:<br>
&gt; &gt;&gt;&gt; From: Andres Lagar-Cavilla &lt;<a href=3D"mailto:andres@l=
agarcavilla.org">andres@lagarcavilla.org</a>&gt;<br>
&gt; &gt;&gt;&gt;<br>
&gt; &gt;&gt;&gt; Since Xen-4.2, hvm domains may have portions of their mem=
ory paged out. When a<br>
&gt; &gt;&gt;&gt; foreign domain (such as dom0) attempts to map these frame=
s, the map will<br>
&gt; &gt;&gt;&gt; initially fail. The hypervisor returns a suitable errno, =
and kicks an<br>
&gt; &gt;&gt;&gt; asynchronous page-in operation carried out by a helper. T=
he foreign domain is<br>
&gt; &gt;&gt;&gt; expected to retry the mapping operation until it eventual=
ly succeeds. The<br>
&gt; &gt;&gt;&gt; foreign domain is not put to sleep because itself could b=
e the one running the<br>
&gt; &gt;&gt;&gt; pager assist (typical scenario for dom0).<br>
&gt; &gt;&gt;&gt;<br>
&gt; &gt;&gt;&gt; This patch adds support for this mechanism for backend dr=
ivers using grant<br>
&gt; &gt;&gt;&gt; mapping and copying operations. Specifically, this covers=
 the blkback and<br>
&gt; &gt;&gt;&gt; gntdev drivers (which map foregin grants), and the netbac=
k driver (which copies<br>
&gt; &gt;&gt;&gt; foreign grants).<br>
&gt; &gt;&gt;&gt;<br>
&gt; &gt;&gt;&gt; * Add GNTST_eagain, already exposed by Xen, to the grant =
interface.<br>
&gt; &gt;&gt;&gt; * Add a retry method for grants that fail with GNTST_eaga=
in (i.e. because the<br>
&gt; &gt;&gt;&gt; target foregin frame is paged out).<br>
&gt; &gt;&gt;&gt; * Insert hooks with appropriate macro decorators in the a=
forementioned drivers.<br>
&gt; &gt;&gt;<br>
&gt; &gt;&gt; I think you should implement wrappers around HYPERVISOR_grant=
_table_op()<br>
&gt; &gt;&gt; have have the wrapper do the retries instead of every backend=
 having to<br>
&gt; &gt;&gt; check for EAGAIN and issue the retries itself. Similar to the=
<br>
&gt; &gt;&gt; gnttab_map_grant_no_eagain() function you&#39;ve already adde=
d.<br>
&gt; &gt;&gt;<br>
&gt; &gt;&gt; Why do some operations not retry anyway?<br>
&gt; &gt;<br>
&gt; &gt; All operations retry. The reason why I could not make it as elega=
nt as you suggest is because grant operations are submitted in batches and =
their status(es?) later checked individually elsewhere. This is the case fo=
r netback. Note that both blkback and gntdev use a more linear structure wi=
th the gnttab_map_refs helper, which allows me to hide all the retry gore f=
rom those drivers into grant table code. Likewise for xenbus ring mapping.<=
br>

&gt; &gt;<br>
&gt; &gt; In summary, outside of core grant table code, only the netback dr=
iver needs to check explicitly for retries, due to its batch-copy-delayed-p=
er-slot-check structure.<br>
&gt; &gt;<br>
&gt; &gt;&gt;<br>
&gt; &gt;&gt;&gt; +void<br>
&gt; &gt;&gt;&gt; +gnttab_retry_eagain_gop(unsigned int cmd, void *gop, int=
16_t *status,<br>
&gt; &gt;&gt;&gt; + =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0=
 =A0 =A0 =A0 =A0 =A0 =A0 const char *func)<br>
&gt; &gt;&gt;&gt; +{<br>
&gt; &gt;&gt;&gt; + u8 delay =3D 1;<br>
&gt; &gt;&gt;&gt; +<br>
&gt; &gt;&gt;&gt; + do {<br>
&gt; &gt;&gt;&gt; + =A0 =A0 =A0 =A0 BUG_ON(HYPERVISOR_grant_table_op(cmd, g=
op, 1));<br>
&gt; &gt;&gt;&gt; + =A0 =A0 =A0 =A0 if (*status =3D=3D GNTST_eagain)<br>
&gt; &gt;&gt;&gt; + =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 msleep(delay++);<br>
&gt; &gt;&gt;&gt; + } while ((*status =3D=3D GNTST_eagain) &amp;&amp; delay=
);<br>
&gt; &gt;&gt;<br>
&gt; &gt;&gt; Terminating the loop when delay wraps is a bit subtle. =A0Why=
 not make<br>
&gt; &gt;&gt; delay unsigned and check delay &lt;=3D MAX_DELAY?<br>
&gt; &gt; Good idea (MAX_DELAY =3D=3D 256). I&#39;d like to get Konrad&#39;=
s feedback before a re-spin.<br>
&gt; &gt;<br>
&gt; &gt;&gt;<br>
&gt; &gt;&gt; Would it be sensible to ramp the delay faster? =A0Perhaps dou=
ble each<br>
&gt; &gt;&gt; iteration with a maximum possible delay of e.g., 256 ms.<br>
&gt; &gt; Generally speaking we&#39;ve never seen past three retries. I am =
open to changing the algorithm but there is a significant possibility it wo=
n&#39;t matter at all.<br>
&gt; &gt;<br>
&gt; &gt;&gt;<br>
&gt; &gt;&gt;&gt; +#define gnttab_map_grant_no_eagain(_gop) =A0 =A0 =A0 =A0=
 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0\<br>
&gt; &gt;&gt;&gt; +do { =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0=
 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =
=A0 =A0 =A0 =A0 =A0\<br>
&gt; &gt;&gt;&gt; + =A0 =A0if ( HYPERVISOR_grant_table_op(GNTTABOP_map_gran=
t_ref, (_gop), 1)) =A0 =A0 =A0\<br>
&gt; &gt;&gt;&gt; + =A0 =A0 =A0 =A0BUG(); =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =
=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0=
 =A0 =A0 =A0 =A0\<br>
&gt; &gt;&gt;&gt; + =A0 =A0if ((_gop)-&gt;status =3D=3D GNTST_eagain) =A0 =
=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 \<br>
&gt; &gt;&gt;&gt; + =A0 =A0 =A0 =A0gnttab_retry_eagain_map((_gop)); =A0 =A0=
 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0\<br>
&gt; &gt;&gt;&gt; +} while(0)<br>
&gt; &gt;&gt;<br>
&gt; &gt;&gt; Inline functions, please.<br>
&gt; &gt;<br>
&gt; &gt; I want to retain the original context for debugging. Eventually w=
e print __func__ if things go wrong.<br>
&gt; &gt;<br>
&gt; &gt; Thanks, great feedback<br>
&gt; &gt; Andres<br>
&gt; &gt;<br>
&gt; &gt;&gt;<br>
&gt; &gt;&gt; David<br>
&gt; &gt;<br>
&gt;<br>
</blockquote></div>

--047d7b86f3aa1ef11704c894ddee--


--===============3267962755280235207==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3267962755280235207==--


From xen-devel-bounces@lists.xen.org Fri Aug 31 19:33:50 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 19:33:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7Wy3-0005Et-G3; Fri, 31 Aug 2012 19:33:23 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andreslc@gridcentric.ca>) id 1T7Wy1-0005Eo-OQ
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 19:33:22 +0000
Received: from [85.158.138.51:40754] by server-10.bemta-3.messagelabs.com id
	F3/EE-10411-08111405; Fri, 31 Aug 2012 19:33:20 +0000
X-Env-Sender: andreslc@gridcentric.ca
X-Msg-Ref: server-6.tower-174.messagelabs.com!1346441597!19944610!1
X-Originating-IP: [209.85.220.173]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_20_30,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8894 invoked from network); 31 Aug 2012 19:33:18 -0000
Received: from mail-vc0-f173.google.com (HELO mail-vc0-f173.google.com)
	(209.85.220.173)
	by server-6.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 19:33:18 -0000
Received: by vcbgb23 with SMTP id gb23so4526128vcb.32
	for <xen-devel@lists.xen.org>; Fri, 31 Aug 2012 12:33:17 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=google.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type:x-gm-message-state;
	bh=PeMUy1TE8XTBIzWMAaMxcAqcSg3maQ8HeXeUZY++xsA=;
	b=a9t6cUVJC3CV0At1k8xtzCdQnGtIMCcKqjVYpi6UXOnCutQjFQkpMOO+Xl0RLSCeop
	/3JBZfayflUffMQIjQGP2dvxYIXXNlL1ODGhmro6uc3kWErvj9xgAT+wRfBo17hTh1EK
	Ib2sDC3X0p0gwDzXWOKnOrL2PD3R862iypCFN1FW5ANX/OFfmh4LR/TxR1UAQiaphzSw
	rG37o8EIgUSddHYymJu+RgP6Uif3SvThmJj44VlHmKOEB37hLJNE8M5E1SXrPrwqzePC
	cWZlWqLR4rAKnLK3qGJp63DlwwhR8Gl0UPHMnnWUfl14n6qZzofRCMWJiYQf4CKrybWI
	2W8g==
MIME-Version: 1.0
Received: by 10.58.249.195 with SMTP id yw3mr6855144vec.43.1346441597349; Fri,
	31 Aug 2012 12:33:17 -0700 (PDT)
Received: by 10.58.70.52 with HTTP; Fri, 31 Aug 2012 12:33:17 -0700 (PDT)
Received: by 10.58.70.52 with HTTP; Fri, 31 Aug 2012 12:33:17 -0700 (PDT)
In-Reply-To: <20120831165454.GE18929@localhost.localdomain>
References: <1346086287-17674-1-git-send-email-andres@lagarcavilla.org>
	<5040CAEA.7000600@citrix.com>
	<160CC375-2682-4CBF-B1EC-06A9F3E49A40@gridcentric.ca>
	<A976B10C-58BE-4660-89E9-A8F85CAB5F19@gmail.com>
	<20120831165454.GE18929@localhost.localdomain>
Date: Fri, 31 Aug 2012 15:33:17 -0400
Message-ID: <CAO=PTzoNfS8+DFXUM2+E9FaZSCJwTeRjecCXJwy_+CFyxGYpUQ@mail.gmail.com>
From: Andres Lagar-Cavilla <andreslc@gridcentric.ca>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
X-Gm-Message-State: ALoCoQmn25jVnL7nXe40LGNR9ShO4Tys3MaArtA+cc9urT8JXJw63OyhE6kNzBKaDDkkjgDSImi2
Cc: andres.lagarcavilla@gmail.com, xen-devel@lists.xen.org,
	Andres Lagar-Cavilla <andres@lagarcavilla.org>,
	David Vrabel <david.vrabel@citrix.com>
Subject: Re: [Xen-devel] [PATCH] Xen backend support for paged out grant
	targets.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3267962755280235207=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3267962755280235207==
Content-Type: multipart/alternative; boundary=047d7b86f3aa1ef11704c894ddee

--047d7b86f3aa1ef11704c894ddee
Content-Type: text/plain; charset=ISO-8859-1

But msleep will wind up calling schedule(). We definitely cannot afford to
pin down a dom0 vcpu when the pager itself is in dom0.

IIUC...

Andres
On Aug 31, 2012 12:55 PM, "Konrad Rzeszutek Wilk" <konrad.wilk@oracle.com>
wrote:

> On Fri, Aug 31, 2012 at 11:42:32AM -0400, Andres Lagar-Cavilla wrote:
> > Actually acted upon your feedback ipso facto:
> >
> > commit d5fab912caa1f0cf6be0a6773f502d3417a207b6
> > Author: Andres Lagar-Cavilla <andres@lagarcavilla.org>
> > Date:   Sun Aug 26 09:45:57 2012 -0400
> >
> >     Xen backend support for paged out grant targets.
> >
> >     Since Xen-4.2, hvm domains may have portions of their memory paged
> out. When a
> >     foreign domain (such as dom0) attempts to map these frames, the map
> will
> >     initially fail. The hypervisor returns a suitable errno, and kicks an
> >     asynchronous page-in operation carried out by a helper. The foreign
> domain is
> >     expected to retry the mapping operation until it eventually
> succeeds. The
> >     foreign domain is not put to sleep because it could itself be the one
> running the
> >     pager assist (typical scenario for dom0).
> >
> >     This patch adds support for this mechanism for backend drivers using
> grant
> >     mapping and copying operations. Specifically, this covers the
> blkback and
> >     gntdev drivers (which map foreign grants), and the netback driver
> (which copies
> >     foreign grants).
> >
> >     * Add GNTST_eagain, already exposed by Xen, to the grant interface.
> >     * Add a retry method for grants that fail with GNTST_eagain (i.e.
> because the
> >       target foreign frame is paged out).
> >     * Insert hooks with appropriate macro decorators in the
> aforementioned drivers.
> >
> >     The retry loop is only invoked if the grant operation status is
> GNTST_eagain.
> >     It guarantees to leave a new status code different from
> GNTST_eagain. Any other
> >     status code results in identical code execution as before.
> >
> >     The retry loop performs 256 attempts with increasing time intervals
> through a
> >     32 second period. It uses msleep to yield while waiting for the next
> retry.
> >
>
>
> Would it make sense to yield to other processes (so call schedule)? Or
> perhaps have this in a workqueue ?
>
> I mean, the 'msleep' just looks like a hack... 32 seconds of doing
> 'msleep' on a 1-VCPU dom0 could trigger the watchdog, I think?
>
> >     V2 after feedback from David Vrabel:
> >     * Explicit MAX_DELAY instead of wrap-around delay into zero
> >     * Abstract GNTST_eagain check into core grant table code for netback
> module.
> >
> >     Signed-off-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
> >
> > diff --git a/drivers/net/xen-netback/netback.c
> b/drivers/net/xen-netback/netback.c
> > index 682633b..5610fd8 100644
> > --- a/drivers/net/xen-netback/netback.c
> > +++ b/drivers/net/xen-netback/netback.c
> > @@ -635,9 +635,7 @@ static void xen_netbk_rx_action(struct xen_netbk
> *netbk)
> >               return;
> >
> >       BUG_ON(npo.copy_prod > ARRAY_SIZE(netbk->grant_copy_op));
> > -     ret = HYPERVISOR_grant_table_op(GNTTABOP_copy,
> &netbk->grant_copy_op,
> > -                                     npo.copy_prod);
> > -     BUG_ON(ret != 0);
> > +     gnttab_batch_copy_no_eagain(netbk->grant_copy_op, npo.copy_prod);
> >
> >       while ((skb = __skb_dequeue(&rxq)) != NULL) {
> >               sco = (struct skb_cb_overlay *)skb->cb;
> > @@ -1460,18 +1458,15 @@ static void xen_netbk_tx_submit(struct xen_netbk
> *netbk)
> >  static void xen_netbk_tx_action(struct xen_netbk *netbk)
> >  {
> >       unsigned nr_gops;
> > -     int ret;
> >
> >       nr_gops = xen_netbk_tx_build_gops(netbk);
> >
> >       if (nr_gops == 0)
> >               return;
> > -     ret = HYPERVISOR_grant_table_op(GNTTABOP_copy,
> > -                                     netbk->tx_copy_ops, nr_gops);
> > -     BUG_ON(ret);
> >
> > -     xen_netbk_tx_submit(netbk);
> > +     gnttab_batch_copy_no_eagain(netbk->tx_copy_ops, nr_gops);
> >
> > +     xen_netbk_tx_submit(netbk);
> >  }
> >
> >  static void xen_netbk_idx_release(struct xen_netbk *netbk, u16
> pending_idx)
> > diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
> > index eea81cf..96543b2 100644
> > --- a/drivers/xen/grant-table.c
> > +++ b/drivers/xen/grant-table.c
> > @@ -38,6 +38,7 @@
> >  #include <linux/vmalloc.h>
> >  #include <linux/uaccess.h>
> >  #include <linux/io.h>
> > +#include <linux/delay.h>
> >  #include <linux/hardirq.h>
> >
> >  #include <xen/xen.h>
> > @@ -823,6 +824,26 @@ unsigned int gnttab_max_grant_frames(void)
> >  }
> >  EXPORT_SYMBOL_GPL(gnttab_max_grant_frames);
> >
> > +#define MAX_DELAY 256
> > +void
> > +gnttab_retry_eagain_gop(unsigned int cmd, void *gop, int16_t *status,
> > +                                             const char *func)
> > +{
> > +     unsigned delay = 1;
> > +
> > +     do {
> > +             BUG_ON(HYPERVISOR_grant_table_op(cmd, gop, 1));
> > +             if (*status == GNTST_eagain)
> > +                     msleep(delay++);
> > +     } while ((*status == GNTST_eagain) && (delay < MAX_DELAY));
> > +
> > +     if (delay >= MAX_DELAY) {
> > +             printk(KERN_ERR "%s: %s eagain grant\n", func,
> current->comm);
> > +             *status = GNTST_bad_page;
> > +     }
> > +}
> > +EXPORT_SYMBOL_GPL(gnttab_retry_eagain_gop);
> > +
> >  int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
> >                   struct gnttab_map_grant_ref *kmap_ops,
> >                   struct page **pages, unsigned int count)
> > @@ -836,6 +857,11 @@ int gnttab_map_refs(struct gnttab_map_grant_ref
> *map_ops,
> >       if (ret)
> >               return ret;
> >
> > +     /* Retry eagain maps */
> > +     for (i = 0; i < count; i++)
> > +             if (map_ops[i].status == GNTST_eagain)
> > +                     gnttab_retry_eagain_map(map_ops + i);
> > +
> >       if (xen_feature(XENFEAT_auto_translated_physmap))
> >               return ret;
> >
> > diff --git a/drivers/xen/xenbus/xenbus_client.c
> b/drivers/xen/xenbus/xenbus_client.c
> > index b3e146e..749f6a3 100644
> > --- a/drivers/xen/xenbus/xenbus_client.c
> > +++ b/drivers/xen/xenbus/xenbus_client.c
> > @@ -490,8 +490,7 @@ static int xenbus_map_ring_valloc_pv(struct
> xenbus_device *dev,
> >
> >       op.host_addr = arbitrary_virt_to_machine(pte).maddr;
> >
> > -     if (HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, &op, 1))
> > -             BUG();
> > +     gnttab_map_grant_no_eagain(&op);
> >
> >       if (op.status != GNTST_okay) {
> >               free_vm_area(area);
> > @@ -572,8 +571,7 @@ int xenbus_map_ring(struct xenbus_device *dev, int
> gnt_ref,
> >       gnttab_set_map_op(&op, (unsigned long)vaddr, GNTMAP_host_map,
> gnt_ref,
> >                         dev->otherend_id);
> >
> > -     if (HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, &op, 1))
> > -             BUG();
> > +     gnttab_map_grant_no_eagain(&op);
> >
> >       if (op.status != GNTST_okay) {
> >               xenbus_dev_fatal(dev, op.status,
> > diff --git a/include/xen/grant_table.h b/include/xen/grant_table.h
> > index 11e27c3..2fecfab 100644
> > --- a/include/xen/grant_table.h
> > +++ b/include/xen/grant_table.h
> > @@ -43,6 +43,7 @@
> >  #include <xen/interface/grant_table.h>
> >
> >  #include <asm/xen/hypervisor.h>
> > +#include <asm/xen/hypercall.h>
> >
> >  #include <xen/features.h>
> >
> > @@ -183,6 +184,43 @@ unsigned int gnttab_max_grant_frames(void);
> >
> >  #define gnttab_map_vaddr(map) ((void *)(map.host_virt_addr))
> >
> > +/* Retry a grant map/copy operation when the hypervisor returns
> GNTST_eagain.
> > + * This is typically due to paged out target frames.
> > + * Generic entry-point, use macro decorators below for specific grant
> > + * operations.
> > + * Will retry for 1, 2, ... 255 ms, i.e. 256 times during 32 seconds.
> > + * Return value in *status guaranteed to no longer be GNTST_eagain. */
> > +void gnttab_retry_eagain_gop(unsigned int cmd, void *gop, int16_t
> *status,
> > +                             const char *func);
> > +
> > +#define gnttab_retry_eagain_map(_gop)                       \
> > +    gnttab_retry_eagain_gop(GNTTABOP_map_grant_ref, (_gop), \
> > +                            &(_gop)->status, __func__)
> > +
> > +#define gnttab_retry_eagain_copy(_gop)                  \
> > +    gnttab_retry_eagain_gop(GNTTABOP_copy, (_gop),      \
> > +                            &(_gop)->status, __func__)
> > +
> > +#define gnttab_map_grant_no_eagain(_gop)
>      \
> > +do {
>      \
> > +    if (HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, (_gop), 1))
>     \
> > +        BUG();
>      \
> > +    if ((_gop)->status == GNTST_eagain)
>     \
> > +        gnttab_retry_eagain_map((_gop));
>      \
> > +} while(0)
> > +
> > +static inline void
> > +gnttab_batch_copy_no_eagain(struct gnttab_copy *batch, unsigned count)
> > +{
> > +    unsigned i;
> > +    struct gnttab_copy *op;
> > +
> > +    BUG_ON(HYPERVISOR_grant_table_op(GNTTABOP_copy, batch, count));
> > +    for (i = 0, op = batch; i < count; i++, op++)
> > +        if (op->status == GNTST_eagain)
> > +            gnttab_retry_eagain_copy(op);
> > +}
> > +
> >  int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
> >                   struct gnttab_map_grant_ref *kmap_ops,
> >                   struct page **pages, unsigned int count);
> > diff --git a/include/xen/interface/grant_table.h b/include/xen/interface/grant_table.h
> > index 7da811b..66cb734 100644
> > --- a/include/xen/interface/grant_table.h
> > +++ b/include/xen/interface/grant_table.h
> > @@ -520,6 +520,7 @@ DEFINE_GUEST_HANDLE_STRUCT(gnttab_get_version);
> >  #define GNTST_permission_denied (-8) /* Not enough privilege for operation.  */
> >  #define GNTST_bad_page         (-9) /* Specified page was invalid for op.    */
> >  #define GNTST_bad_copy_arg    (-10) /* copy arguments cross page boundary */
> > +#define GNTST_eagain          (-12) /* Retry.                                */
> >
> >  #define GNTTABOP_error_msgs {                   \
> >      "okay",                                     \
> > @@ -533,6 +534,7 @@ DEFINE_GUEST_HANDLE_STRUCT(gnttab_get_version);
> >      "permission denied",                        \
> >      "bad page",                                 \
> >      "copy arguments cross page boundary"        \
> > +    "retry"                                     \
> >  }
> >
> >  #endif /* __XEN_PUBLIC_GRANT_TABLE_H__ */
> >
> >
> > On Aug 31, 2012, at 10:45 AM, Andres Lagar-Cavilla wrote:
> >
> > >
> > > On Aug 31, 2012, at 10:32 AM, David Vrabel wrote:
> > >
> > >> On 27/08/12 17:51, andres@lagarcavilla.org wrote:
> > >>> From: Andres Lagar-Cavilla <andres@lagarcavilla.org>
> > >>>
> > >>> Since Xen-4.2, hvm domains may have portions of their memory paged
> > >>> out. When a foreign domain (such as dom0) attempts to map these
> > >>> frames, the map will initially fail. The hypervisor returns a
> > >>> suitable errno, and kicks an asynchronous page-in operation carried
> > >>> out by a helper. The foreign domain is expected to retry the mapping
> > >>> operation until it eventually succeeds. The foreign domain is not
> > >>> put to sleep, because it could itself be the one running the pager
> > >>> assist (typical scenario for dom0).
> > >>>
> > >>> This patch adds support for this mechanism for backend drivers
> > >>> using grant mapping and copying operations. Specifically, this
> > >>> covers the blkback and gntdev drivers (which map foreign grants),
> > >>> and the netback driver (which copies foreign grants).
> > >>>
> > >>> * Add GNTST_eagain, already exposed by Xen, to the grant interface.
> > >>> * Add a retry method for grants that fail with GNTST_eagain (i.e.
> > >>>   because the target foreign frame is paged out).
> > >>> * Insert hooks with appropriate macro decorators in the
> > >>>   aforementioned drivers.
> > >>
> > >> I think you should implement wrappers around HYPERVISOR_grant_table_op()
> > >> and have the wrapper do the retries, instead of every backend having to
> > >> check for EAGAIN and issue the retries itself. Similar to the
> > >> gnttab_map_grant_no_eagain() function you've already added.
> > >>
> > >> Why do some operations not retry anyway?
> > >
> > > All operations retry. The reason why I could not make it as elegant as
> > > you suggest is that grant operations are submitted in batches and their
> > > statuses are later checked individually elsewhere. This is the case for
> > > netback. Note that both blkback and gntdev use a more linear structure
> > > with the gnttab_map_refs helper, which allows me to hide all the retry
> > > gore from those drivers into grant table code. Likewise for xenbus ring
> > > mapping.
> > >
> > > In summary, outside of core grant table code, only the netback driver
> > > needs to check explicitly for retries, due to its
> > > batch-copy-delayed-per-slot-check structure.
> > >
> > >>
> > >>> +void
> > >>> +gnttab_retry_eagain_gop(unsigned int cmd, void *gop, int16_t *status,
> > >>> +                        const char *func)
> > >>> +{
> > >>> + u8 delay = 1;
> > >>> +
> > >>> + do {
> > >>> +         BUG_ON(HYPERVISOR_grant_table_op(cmd, gop, 1));
> > >>> +         if (*status == GNTST_eagain)
> > >>> +                 msleep(delay++);
> > >>> + } while ((*status == GNTST_eagain) && delay);
> > >>
> > >> Terminating the loop when delay wraps is a bit subtle.  Why not make
> > >> delay unsigned and check delay <= MAX_DELAY?
> > > Good idea (MAX_DELAY == 256). I'd like to get Konrad's feedback before
> > > a re-spin.
> > >
> > >>
> > >> Would it be sensible to ramp the delay faster?  Perhaps double each
> > >> iteration with a maximum possible delay of e.g., 256 ms.
> > > Generally speaking we've never seen past three retries. I am open to
> > > changing the algorithm, but there is a significant possibility it
> > > won't matter at all.
> > >
> > >>
> > >>> +#define gnttab_map_grant_no_eagain(_gop)                              \
> > >>> +do {                                                                  \
> > >>> +    if ( HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, (_gop), 1)) \
> > >>> +        BUG();                                                        \
> > >>> +    if ((_gop)->status == GNTST_eagain)                               \
> > >>> +        gnttab_retry_eagain_map((_gop));                              \
> > >>> +} while(0)
> > >>
> > >> Inline functions, please.
> > >
> > > I want to retain the original context for debugging. Eventually we
> > > print __func__ if things go wrong.
> > >
> > > Thanks, great feedback
> > > Andres
> > >
> > >>
> > >> David
> > >
> >
>
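For readers skimming the thread, the retry discipline being debated reduces to the following userspace sketch. The hypercall and msleep() are mocked (`mock_grant_op`, `attempts_until_ok`, and `total_sleep_ms` are illustrative names, not kernel symbols); only the shape of the loop matches the posted V2 gnttab_retry_eagain_gop().

```c
#include <assert.h>
#include <stdint.h>

#define GNTST_okay      0
#define GNTST_eagain  (-12)
#define MAX_DELAY     256   /* as in the V2 patch */

static int attempts_until_ok;    /* mock knob: EAGAINs returned before success */
static unsigned total_sleep_ms;  /* accumulated simulated msleep() time */

/* Stand-in for HYPERVISOR_grant_table_op(): the real hypercall reports
 * GNTST_eagain in the op's status field while the frame is paged out. */
static void mock_grant_op(int16_t *status)
{
    *status = (attempts_until_ok-- > 0) ? GNTST_eagain : GNTST_okay;
}

/* Same control flow as the V2 gnttab_retry_eagain_gop(): linear
 * 1, 2, 3, ... ms backoff, giving up once delay reaches MAX_DELAY. */
static int16_t retry_eagain(void)
{
    int16_t status;
    unsigned delay = 1;

    do {
        mock_grant_op(&status);
        if (status == GNTST_eagain)
            total_sleep_ms += delay++;   /* stands in for msleep(delay++) */
    } while (status == GNTST_eagain && delay < MAX_DELAY);

    return status;
}
```

The worst case sleeps for delays of 1 through 255 ms, i.e. 255*256/2 = 32640 ms, which is where the "32 seconds" in the patch comment comes from.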

But msleep will wind up calling schedule(). We definitely cannot afford to
pin down a dom0 vcpu when the pager itself is in dom0.

Iiuc....

Andres

On Aug 31, 2012 12:55 PM, "Konrad Rzeszutek Wilk" <konrad.wilk@oracle.com> wrote:
> On Fri, Aug 31, 2012 at 11:42:32AM -0400, Andres Lagar-Cavilla wrote:
> > Actually acted upon your feedback ipso facto:
> >
> > commit d5fab912caa1f0cf6be0a6773f502d3417a207b6
> > Author: Andres Lagar-Cavilla <andres@lagarcavilla.org>
> > Date:   Sun Aug 26 09:45:57 2012 -0400
> >
> >     Xen backend support for paged out grant targets.
> >
> >     Since Xen-4.2, hvm domains may have portions of their memory paged out. When a
> >     foreign domain (such as dom0) attempts to map these frames, the map will
> >     initially fail. The hypervisor returns a suitable errno, and kicks an
> >     asynchronous page-in operation carried out by a helper. The foreign domain is
> >     expected to retry the mapping operation until it eventually succeeds. The
> >     foreign domain is not put to sleep, because it could itself be the one running
> >     the pager assist (typical scenario for dom0).
> >
> >     This patch adds support for this mechanism for backend drivers using grant
> >     mapping and copying operations. Specifically, this covers the blkback and
> >     gntdev drivers (which map foreign grants), and the netback driver (which copies
> >     foreign grants).
> >
> >     * Add GNTST_eagain, already exposed by Xen, to the grant interface.
> >     * Add a retry method for grants that fail with GNTST_eagain (i.e. because the
> >       target foreign frame is paged out).
> >     * Insert hooks with appropriate macro decorators in the aforementioned drivers.
> >
> >     The retry loop is only invoked if the grant operation status is GNTST_eagain.
> >     It guarantees to leave a new status code different from GNTST_eagain. Any other
> >     status code results in identical code execution as before.
> >
> >     The retry loop performs 256 attempts with increasing time intervals through a
> >     32 second period. It uses msleep to yield while waiting for the next retry.
> >
>
> Would it make sense to yield to other processes (so call schedule)? Or
> perhaps have this in a workqueue?
>
> I mean the 'msleep' just looks like a hack... 32 seconds of doing
> 'msleep' on a 1-VCPU dom0 could trigger the watchdog I think?
>
> >     V2 after feedback from David Vrabel:
> >     * Explicit MAX_DELAY instead of wrap-around delay into zero
> >     * Abstract GNTST_eagain check into core grant table code for netback module.
> >
> >     Signed-off-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
> >
> > diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
> > index 682633b..5610fd8 100644
> > --- a/drivers/net/xen-netback/netback.c
> > +++ b/drivers/net/xen-netback/netback.c
> > @@ -635,9 +635,7 @@ static void xen_netbk_rx_action(struct xen_netbk *netbk)
> >               return;
> >
> >       BUG_ON(npo.copy_prod > ARRAY_SIZE(netbk->grant_copy_op));
> > -     ret = HYPERVISOR_grant_table_op(GNTTABOP_copy, &netbk->grant_copy_op,
> > -                                     npo.copy_prod);
> > -     BUG_ON(ret != 0);
> > +     gnttab_batch_copy_no_eagain(netbk->grant_copy_op, npo.copy_prod);
> >
> >       while ((skb = __skb_dequeue(&rxq)) != NULL) {
> >               sco = (struct skb_cb_overlay *)skb->cb;
> > @@ -1460,18 +1458,15 @@ static void xen_netbk_tx_submit(struct xen_netbk *netbk)
> >  static void xen_netbk_tx_action(struct xen_netbk *netbk)
> >  {
> >       unsigned nr_gops;
> > -     int ret;
> >
> >       nr_gops = xen_netbk_tx_build_gops(netbk);
> >
> >       if (nr_gops == 0)
> >               return;
> > -     ret = HYPERVISOR_grant_table_op(GNTTABOP_copy,
> > -                                     netbk->tx_copy_ops, nr_gops);
> > -     BUG_ON(ret);
> >
> > -     xen_netbk_tx_submit(netbk);
> > +     gnttab_batch_copy_no_eagain(netbk->tx_copy_ops, nr_gops);
> >
> > +     xen_netbk_tx_submit(netbk);
> >  }
> >
> >  static void xen_netbk_idx_release(struct xen_netbk *netbk, u16 pending_idx)
> > diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
> > index eea81cf..96543b2 100644
> > --- a/drivers/xen/grant-table.c
> > +++ b/drivers/xen/grant-table.c
> > @@ -38,6 +38,7 @@
> >  #include <linux/vmalloc.h>
> >  #include <linux/uaccess.h>
> >  #include <linux/io.h>
> > +#include <linux/delay.h>
> >  #include <linux/hardirq.h>
> >
> >  #include <xen/xen.h>
> > @@ -823,6 +824,26 @@ unsigned int gnttab_max_grant_frames(void)
> >  }
> >  EXPORT_SYMBOL_GPL(gnttab_max_grant_frames);
> >
> > +#define MAX_DELAY 256
> > +void
> > +gnttab_retry_eagain_gop(unsigned int cmd, void *gop, int16_t *status,
> > +                        const char *func)
> > +{
> > +     unsigned delay = 1;
> > +
> > +     do {
> > +             BUG_ON(HYPERVISOR_grant_table_op(cmd, gop, 1));
> > +             if (*status == GNTST_eagain)
> > +                     msleep(delay++);
> > +     } while ((*status == GNTST_eagain) && (delay < MAX_DELAY));
> > +
> > +     if (delay >= MAX_DELAY) {
> > +             printk(KERN_ERR "%s: %s eagain grant\n", func, current->comm);
> > +             *status = GNTST_bad_page;
> > +     }
> > +}
> > +EXPORT_SYMBOL_GPL(gnttab_retry_eagain_gop);
> > +
> >  int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
> >                     struct gnttab_map_grant_ref *kmap_ops,
> >                     struct page **pages, unsigned int count)
> > @@ -836,6 +857,11 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
> >       if (ret)
> >               return ret;
> >
> > +     /* Retry eagain maps */
> > +     for (i = 0; i < count; i++)
> > +             if (map_ops[i].status == GNTST_eagain)
> > +                     gnttab_retry_eagain_map(map_ops + i);
> > +
> >       if (xen_feature(XENFEAT_auto_translated_physmap))
> >               return ret;
> >
> > diff --git a/drivers/xen/xenbus/xenbus_client.c b/drivers/xen/xenbus/xenbus_client.c
> > index b3e146e..749f6a3 100644
> > --- a/drivers/xen/xenbus/xenbus_client.c
> > +++ b/drivers/xen/xenbus/xenbus_client.c
> > @@ -490,8 +490,7 @@ static int xenbus_map_ring_valloc_pv(struct xenbus_device *dev,
> >
> >       op.host_addr = arbitrary_virt_to_machine(pte).maddr;
> >
> > -     if (HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, &op, 1))
> > -             BUG();
> > +     gnttab_map_grant_no_eagain(&op);
> >
> >       if (op.status != GNTST_okay) {
> >               free_vm_area(area);
> > @@ -572,8 +571,7 @@ int xenbus_map_ring(struct xenbus_device *dev, int gnt_ref,
> >       gnttab_set_map_op(&op, (unsigned long)vaddr, GNTMAP_host_map, gnt_ref,
> >                         dev->otherend_id);
> >
> > -     if (HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, &op, 1))
> > -             BUG();
> > +     gnttab_map_grant_no_eagain(&op);
> >
> >       if (op.status != GNTST_okay) {
> >               xenbus_dev_fatal(dev, op.status,
> > diff --git a/include/xen/grant_table.h b/include/xen/grant_table.h
> > index 11e27c3..2fecfab 100644
> > --- a/include/xen/grant_table.h
> > +++ b/include/xen/grant_table.h
> > @@ -43,6 +43,7 @@
> >  #include <xen/interface/grant_table.h>
> >
> >  #include <asm/xen/hypervisor.h>
> > +#include <asm/xen/hypercall.h>
> >
> >  #include <xen/features.h>
> >
> > @@ -183,6 +184,43 @@ unsigned int gnttab_max_grant_frames(void);
> >
> >  #define gnttab_map_vaddr(map) ((void *)(map.host_virt_addr))
> >
> > +/* Retry a grant map/copy operation when the hypervisor returns GNTST_eagain.
> > + * This is typically due to paged out target frames.
> > + * Generic entry-point, use macro decorators below for specific grant
> > + * operations.
> > + * Will retry for 1, 2, ... 255 ms, i.e. 256 times during 32 seconds.
> > + * Return value in *status guaranteed to no longer be GNTST_eagain. */
> > +void gnttab_retry_eagain_gop(unsigned int cmd, void *gop, int16_t *status,
> > +                             const char *func);
> > +
> > +#define gnttab_retry_eagain_map(_gop)                       \
> > +    gnttab_retry_eagain_gop(GNTTABOP_map_grant_ref, (_gop), \
> > +                            &(_gop)->status, __func__)
> > +
> > +#define gnttab_retry_eagain_copy(_gop)                  \
> > +    gnttab_retry_eagain_gop(GNTTABOP_copy, (_gop),      \
> > +                            &(_gop)->status, __func__)
> > +
> > +#define gnttab_map_grant_no_eagain(_gop)                              \
> > +do {                                                                  \
> > +    if (HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, (_gop), 1)) \
> > +        BUG();                                                        \
> > +    if ((_gop)->status == GNTST_eagain)                               \
> > +        gnttab_retry_eagain_map((_gop));                              \
> > +} while(0)
> > +
> > +static inline void
> > +gnttab_batch_copy_no_eagain(struct gnttab_copy *batch, unsigned count)
> > +{
> > +    unsigned i;
> > +    struct gnttab_copy *op;
> > +
> > +    BUG_ON(HYPERVISOR_grant_table_op(GNTTABOP_copy, batch, count));
> > +    for (i = 0, op = batch; i < count; i++, op++)
> > +        if (op->status == GNTST_eagain)
> > +            gnttab_retry_eagain_copy(op);
> > +}
> > +
> >  int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
> >                     struct gnttab_map_grant_ref *kmap_ops,
> >                     struct page **pages, unsigned int count);
> > diff --git a/include/xen/interface/grant_table.h b/include/xen/interface/grant_table.h
> > index 7da811b..66cb734 100644
> > --- a/include/xen/interface/grant_table.h
> > +++ b/include/xen/interface/grant_table.h
> > @@ -520,6 +520,7 @@ DEFINE_GUEST_HANDLE_STRUCT(gnttab_get_version);
> >  #define GNTST_permission_denied (-8) /* Not enough privilege for operation.  */
> >  #define GNTST_bad_page         (-9) /* Specified page was invalid for op.    */
> >  #define GNTST_bad_copy_arg    (-10) /* copy arguments cross page boundary */
> > +#define GNTST_eagain          (-12) /* Retry.                                */
> >
> >  #define GNTTABOP_error_msgs {                   \
> >      "okay",                                     \
> > @@ -533,6 +534,7 @@ DEFINE_GUEST_HANDLE_STRUCT(gnttab_get_version);
> >      "permission denied",                        \
> >      "bad page",                                 \
> >      "copy arguments cross page boundary"        \
> > +    "retry"                                     \
> >  }
> >
> >  #endif /* __XEN_PUBLIC_GRANT_TABLE_H__ */
> >
> >
> > On Aug 31, 2012, at 10:45 AM, Andres Lagar-Cavilla wrote:
> > [...]

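The batch pattern Andres describes for netback — submit the whole batch once, then rescan and retry only the slots whose status came back GNTST_eagain — can be sketched as below. This is a userspace illustration, not kernel code: an `int16_t` array stands in for the status fields of a `struct gnttab_copy` batch, and `retry_one` is a hypothetical stand-in for the per-op gnttab_retry_eagain_copy() that simply simulates a successful retry.

```c
#include <assert.h>
#include <stdint.h>

#define GNTST_okay      0
#define GNTST_eagain  (-12)

/* Stand-in for gnttab_retry_eagain_copy(): in the kernel this loops with
 * msleep() until the status is no longer GNTST_eagain; here it just
 * records the retry and reports success. */
static void retry_one(int16_t *status, unsigned *retries)
{
    (*retries)++;
    *status = GNTST_okay;
}

/* Shape of gnttab_batch_copy_no_eagain(): after the single batched
 * hypercall (assumed already issued by the caller), rescan the batch and
 * retry only the EAGAIN slots. */
static void batch_retry_eagain(int16_t *status, unsigned count, unsigned *retries)
{
    unsigned i;

    for (i = 0; i < count; i++)
        if (status[i] == GNTST_eagain)
            retry_one(&status[i], retries);
}
```

The point of the design is that the common case (no paged-out frames) costs one extra linear scan and zero extra hypercalls; only the rare EAGAIN slots pay for single-op retries.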

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel



From xen-devel-bounces@lists.xen.org Fri Aug 31 19:38:18 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 19:38:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7X2Q-0005On-CP; Fri, 31 Aug 2012 19:37:54 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dieter@bloms.de>) id 1T7X2N-0005Of-4u
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 19:37:53 +0000
Received: from [85.158.139.83:63487] by server-5.bemta-5.messagelabs.com id
	0C/BB-30514-E8211405; Fri, 31 Aug 2012 19:37:50 +0000
X-Env-Sender: dieter@bloms.de
X-Msg-Ref: server-3.tower-182.messagelabs.com!1346441868!27965002!1
X-Originating-IP: [84.200.248.35]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7455 invoked from network); 31 Aug 2012 19:37:48 -0000
Received: from smtp.bloms.de (HELO smtp.bloms.de) (84.200.248.35)
	by server-3.tower-182.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 31 Aug 2012 19:37:48 -0000
Received: from smtp.bloms.de (localhost [127.0.0.1])
	by smtp.bloms.de (Postfix) with ESMTP id 71AB11C140D5;
	Fri, 31 Aug 2012 21:37:47 +0200 (CEST)
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed; d=bloms.de; h=date:from:to
	:cc:subject:message-id:references:mime-version:content-type
	:in-reply-to; s=selector1; bh=tiAvst1z86HR9Fany52SrXwonBc=; b=XT
	+3Ni0waq9vK0Sdi0lb7ISWBrd78Q250XECmRu6+bti4wEQv727xtVDCPzNoQhpuZ
	igAP4hMAfXr0vfQM+BcDerf0CzLcqdnQxm+r4yD5yektNhMNSq4asg1DReXcDMG6
	wTgU+j86TECkt6V9HUHfzeIw+th1uYteU90Ku3MWYN7jcBKHg0Sp/vzOV6AQh6hk
	Wo0j04iNWXlz6m3M7fLd2/2K+cjJbQ6V8x/TPv6/Np3gaZVAK1C5zwmaM3BP0yBb
	/ubj+aTIfCeNgtlJvratrrCCsJMuwtlvt4q7CeTOVaKaKO4/eJgtj0+aIKJrCHCS
	4IHO0CpDDAjsYcS1DhMdj1qS5sYEm94u9tj7NnQd82+8qbTTfMKS+Pc9jlJhCTOv
	UY2Mww8UxXF9loxo5Q4tgujMZN0y079d32dImm/B4OhshoeZop0CWMf//8LwZUlr
	Is00qJJ4OOxw/0a+YJ/SOVDSqmELuheAJJxmeaU09vBorkZITgnMsipP0QVTZnMk
	qUpD99X3koG5sgITQ2hWra+9DfTimc9KNNxf0lpVYS9J/kgq3L/JRYKp0uqTjGzm
	20SDxraSWsoRw1gYm3sg0CJ3R6NnLLqPu/KQWMOPZISbW+H1aIO/YC1A72SfZ42I
	j+qwKt2SZ9rZ0QorKL1cUbjjorh6HO2Hg3Axp2o4w=
Received: by smtp.bloms.de (Postfix, from userid 1000)
	id 5CCD81C140E9; Fri, 31 Aug 2012 21:37:47 +0200 (CEST)
Date: Fri, 31 Aug 2012 21:37:47 +0200
From: Dieter Bloms <dieter@bloms.de>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20120831193746.GA13009@bloms.de>
References: <1344935141.5926.6.camel@zakaz.uk.xensource.com>
	<20120814100704.GA19704@bloms.de>
	<1346421542.27277.218.camel@zakaz.uk.xensource.com>
	<1346425278.27277.224.camel@zakaz.uk.xensource.com>
	<5040F7140200007800097E91@nat28.tlf.novell.com>
	<1346428292.27277.243.camel@zakaz.uk.xensource.com>
	<5040FB490200007800097ED2@nat28.tlf.novell.com>
	<50409CE002000076000B3B44@novprvoes0310.provo.novell.com>
	<1346434547.5820.10.camel@dagon.hellion.org.uk>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1346434547.5820.10.camel@dagon.hellion.org.uk>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: Charles Arnold <CARNOLD@suse.com>, Kirk Allan <kallan@suse.com>,
	Dieter Bloms <xensource.com@bloms.de>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [Xen-users] Xen 4.2 TODO (io and irq parameter are
 not evaluated by xl)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi,

Ian, thank you for implementing this io and irq support.

On Fri, Aug 31, Ian Campbell wrote:

> On Fri, 2012-08-31 at 18:15 +0100, Kirk Allan wrote:
> 
> > > Charles, Kirk, could you comment here?
> > 
> > In one of my Windows vm config files, I was able to get the vm to
> > boot using ioports=['3f8-3ff'].  My goal was to do serial debugging of
> > the Windows vm.  I also added irq=[4] to the config file.  However, I
> > was not able to actually get a debug session to work.  The physical
> > machine running windbg received a string from the vm which gave me
> > hope that it was working, but then it never received further data so
> > the vm eventually booted without being attached to the debugger.
> 
> Thanks, the question was whether it would be useful to implement the
> 	ioports = '3f8-3ff'
> 	irq = 4
> syntax as well as the
> 	ioports = ['3f8-3ff']
> 	irq = [4]
> but it looks like you are actually using the array version anyway?

I use this syntax with xm:

ioports=['0378-037a']
irq=[5]

and it works well.


-- 
Best regards

  Dieter

--
I do not get viruses because I do not use MS software.
If you use Outlook then please do not put my email address in your
address-book so that WHEN you get a virus it won't use my address in the
From field.


Subject: Re: [Xen-devel] [Xen-users] Xen 4.2 TODO (io and irq parameter are
 not evaluated by xl)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi,

Ian, thank you for implementing this io and irq support.

On Fri, Aug 31, Ian Campbell wrote:

> On Fri, 2012-08-31 at 18:15 +0100, Kirk Allan wrote:
> 
> > > Charles, Kirk, could you comment here?
> > 
> > In one of my Windows vm config files, I was able to get the vm to
> > boot using ioports=['3f8-3ff'].  My goal was to do serial debugging of
> > the Windows vm.  I also added irq=[4] to the config file.  However, I
> > was not able to actually get a debug session to work.  The physical
> > machine running windbg received a string from the vm which gave me
> > hope that it was working, but then it never received further data so
> > the vm eventually booted without being attached to the debugger.
> 
> Thanks, the question was whether it would be useful to implement the
> 	ioports = '3f8-3ff'
> 	irq = 4
> syntax as well as the
> 	ioports = ['3f8-3ff']
> 	irq = [4]
> but it looks like you are actually using the array version anyway?

I use this syntax with xm:

ioports=['0378-037a']
irq=[5]

and it works well.
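For context, a minimal guest config fragment using the list syntax discussed above might look like this (the name and memory values here are placeholders, not taken from the thread):

```python
# Hypothetical xm/xl guest config fragment -- name and memory are stand-ins.
name    = "debuggee"
memory  = 1024
# Pass the first parallel port's I/O range and its IRQ through to the guest.
# The list form can take several entries, e.g. ['0378-037a', '03f8-03ff'].
ioports = ['0378-037a']
irq     = [5]
```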


-- 
Best regards

  Dieter

--
I do not get viruses because I do not use MS software.
If you use Outlook then please do not put my email address in your
address-book so that WHEN you get a virus it won't use my address in the
From field.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 19:48:43 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 19:48:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7XCY-0005a8-Ft; Fri, 31 Aug 2012 19:48:22 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dan.magenheimer@oracle.com>) id 1T7XCX-0005a3-Eq
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 19:48:21 +0000
Received: from [85.158.138.51:35771] by server-12.bemta-3.messagelabs.com id
	9A/B0-10384-40511405; Fri, 31 Aug 2012 19:48:20 +0000
X-Env-Sender: dan.magenheimer@oracle.com
X-Msg-Ref: server-12.tower-174.messagelabs.com!1346442498!20034767!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDczMTQzMA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5490 invoked from network); 31 Aug 2012 19:48:20 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-12.tower-174.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 31 Aug 2012 19:48:20 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7VJm1Wi029360
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 31 Aug 2012 19:48:02 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7VJm0QV003159
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 31 Aug 2012 19:48:01 GMT
Received: from abhmt117.oracle.com (abhmt117.oracle.com [141.146.116.69])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7VJm08T027625; Fri, 31 Aug 2012 14:48:00 -0500
MIME-Version: 1.0
Message-ID: <16d6e254-acff-4212-b5ed-0b3dd5b40ea5@default>
Date: Fri, 31 Aug 2012 12:47:21 -0700 (PDT)
From: Dan Magenheimer <dan.magenheimer@oracle.com>
To: keir@xen.org, xen-devel@lists.xen.org
X-Priority: 3
X-Mailer: Oracle Beehive Extensions for Outlook 2.0.1.7  (607090) [OL
	12.0.6661.5003 (x86)]
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: Zhenzhong Duan <zhenzhong.duan@oracle.com>,
	Konrad Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] preemption and locking: why joined at the hip?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Tracking down a tmem problem in 4.2.0-rcN that crashes the
hypervisor, I've discovered a 4.2 changeset that forces
a preempt_enable/disable for every lock/unlock.

Tmem has dynamically allocated "objects" that contain a
lock.  The lock is held when the object is destroyed.
No reason to unlock something that's about to be destroyed.
But with the preempt_enable/disable in the generic locking code,
and the fact that do_softirq ASSERTs that preempt_count
must be zero, a crash occurs.

While I'm suitably embarrassed that tmem has not yet
been tested with any recent -unstable, and I note that the
workaround is simple (forcing an unlock before destroying the
object containing the held lock), I have to ask whether
this change is really a good idea or just unnecessary
babysitting.

Dan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 19:59:26 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 19:59:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7XMu-0005jv-Kh; Fri, 31 Aug 2012 19:59:04 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dan.magenheimer@oracle.com>) id 1T7XMs-0005jq-VJ
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 19:59:03 +0000
Received: from [85.158.143.99:15797] by server-3.bemta-4.messagelabs.com id
	AB/98-08232-68711405; Fri, 31 Aug 2012 19:59:02 +0000
X-Env-Sender: dan.magenheimer@oracle.com
X-Msg-Ref: server-8.tower-216.messagelabs.com!1346443139!18136159!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNzI0MTQ0\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3458 invoked from network); 31 Aug 2012 19:59:00 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-8.tower-216.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 31 Aug 2012 19:59:00 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7VJww6k007714
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 31 Aug 2012 19:58:59 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7VJwvnp007844
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 31 Aug 2012 19:58:57 GMT
Received: from abhmt117.oracle.com (abhmt117.oracle.com [141.146.116.69])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7VJwvkm009916; Fri, 31 Aug 2012 14:58:57 -0500
MIME-Version: 1.0
Message-ID: <76edd759-ceca-4fb5-b411-1e598137afd8@default>
Date: Fri, 31 Aug 2012 12:58:18 -0700 (PDT)
From: Dan Magenheimer <dan.magenheimer@oracle.com>
To: xen-devel@lists.xen.org
X-Priority: 3
X-Mailer: Oracle Beehive Extensions for Outlook 2.0.1.7  (607090) [OL
	12.0.6661.5003 (x86)]
Content-Type: multipart/mixed;
	boundary="__1346443137219609740abhmt117.oracle.com"
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: keir@xen.org, Zhenzhong Duan <zhenzhong.duan@oracle.com>,
	Konrad Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH] [FOR 4.2] tmem: add matching unlock for an
 about-to-be-destroyed object
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--__1346443137219609740abhmt117.oracle.com
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: quoted-printable

(Guidance appreciated if the Xen patch submittal process
has changed and I need to do more than send this email.)

A 4.2 changeset forces a preempt_disable/enable with
every lock/unlock.

Tmem has dynamically allocated "objects" that contain a
lock.  The lock is held when the object is destroyed.
No reason to unlock something that's about to be destroyed!
But with the preempt_enable/disable in the generic locking code,
and the fact that do_softirq ASSERTs that preempt_count
must be zero, a crash occurs soon after any object is
destroyed.

So force lock to be released before destroying objects.

Signed-off-by: Dan Magenheimer <dan.magenheimer@oracle.com>

diff -r 1967c7c290eb xen/common/tmem.c
--- a/xen/common/tmem.c	Wed Feb 09 12:03:09 2011 +0000
+++ b/xen/common/tmem.c	Fri Aug 31 13:49:51 2012 -0600
@@ -957,6 +957,7 @@
     /* use no_rebalance only if all objects are being destroyed anyway */
     if ( !no_rebalance )
         rb_erase(&obj->rb_tree_node,&pool->obj_rb_root[oid_hash(&old_oid)]);
+    tmem_spin_unlock(&obj->obj_spinlock);
     tmem_free(obj,sizeof(obj_t),pool);
 }


--__1346443137219609740abhmt117.oracle.com
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--__1346443137219609740abhmt117.oracle.com--


From xen-devel-bounces@lists.xen.org Fri Aug 31 20:08:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 20:08:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7XVn-00061I-M6; Fri, 31 Aug 2012 20:08:15 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1T7XVm-00061D-Qt
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 20:08:15 +0000
Received: from [85.158.143.35:33323] by server-3.bemta-4.messagelabs.com id
	AC/9D-08232-EA911405; Fri, 31 Aug 2012 20:08:14 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1346443690!5093695!1
X-Originating-IP: [209.85.216.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2646 invoked from network); 31 Aug 2012 20:08:11 -0000
Received: from mail-qa0-f45.google.com (HELO mail-qa0-f45.google.com)
	(209.85.216.45)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 20:08:11 -0000
Received: by qadc10 with SMTP id c10so1364552qad.11
	for <xen-devel@lists.xen.org>; Fri, 31 Aug 2012 13:08:10 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:cc:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=VTqsfML6e3vyBfDEBexXd0DUwxpSrHuBin2Uul3VoFY=;
	b=dYWAK90XWRO3xk+5nibtLjheUvRU3d4aNvPGHej680Z72wsnjFlLdfNgun8dUbkBlV
	ebc3cfG2si8xTVBAXugoW8Yn4zvn+VvRC3UD7Hf8FJaH5GoPhp8vdV1oK8iwF9u629nu
	CceFU4PBWpJ5LajOITLDxp8g1yuFyorUe6LsOAf8iof8gVPVCLwhVvJwPwKMVrZ4JNiU
	vYjN8MDtxUYd/TzL6e2AmrkanzXsYVmodYsGpa9829dtW8fe2xmJkhgndmlsKEiWhhQq
	1L6+oM2lvI9JFd1yE5aAKQcQ6Raot0VZ99gV9Ed2UNI1mz6XGfEYUHP7rtiN8v93mbr/
	xSaQ==
Received: by 10.229.135.7 with SMTP id l7mr5593580qct.110.1346443689991;
	Fri, 31 Aug 2012 13:08:09 -0700 (PDT)
Received: from [10.254.74.233] ([38.108.87.20])
	by mx.google.com with ESMTPS id da18sm6501898qab.14.2012.08.31.13.08.07
	(version=SSLv3 cipher=OTHER); Fri, 31 Aug 2012 13:08:08 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.32.0.111121
Date: Fri, 31 Aug 2012 21:08:02 +0100
From: Keir Fraser <keir.xen@gmail.com>
To: Dan Magenheimer <dan.magenheimer@oracle.com>,
	<xen-devel@lists.xen.org>
Message-ID: <CC66D832.3D8B3%keir.xen@gmail.com>
Thread-Topic: preemption and locking: why joined at the hip?
Thread-Index: Ac2HtFMAgE5ka2NF30GPFMTCDZyllg==
In-Reply-To: <16d6e254-acff-4212-b5ed-0b3dd5b40ea5@default>
Mime-version: 1.0
Cc: Zhenzhong Duan <zhenzhong.duan@oracle.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] preemption and locking: why joined at the hip?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 31/08/2012 20:47, "Dan Magenheimer" <dan.magenheimer@oracle.com> wrote:

> Tracking down a tmem problem in 4.2.0-rcN that crashes the
> hypervisor, I've discovered a 4.2 changeset that forces
> a preemption_enable/disable for every lock/unlock.
> 
> Tmem has dynamically allocated "objects" that contain a
> lock.  The lock is held when the object is destroyed.
> No reason to unlock something that's about to be destroyed.
> But with the preempt_enable/disable in the generic locking code,
> and the fact that do_softirq ASSERTs that preempt_count
> must be zero, a crash occurs.
> 
> While I'm suitably embarrassed that tmem has not yet
> been tested with any recent -unstable, and I note that the
> workaround is simple (forcing an unlock before destroying the
> object containing the held lock), I have to ask if
> this change is really a good idea or is it unnecessary
> babysitting?

It is to prevent a vcpu from sleeping (e.g. on a waitqueue) while holding
spinlocks. If this were to happen, the possibility of deadlock is obvious.
Hence it provides handy belt-and-braces sanity checking for this situation.

So just clean up after yourself and only destroy locks that are not locked.
;) I'm not clear why you'd be holding the lock during object destruction
anyway -- if anyone else could be spinning on the lock, it would not be safe
to free the lock.

 -- Keir

> Dan



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 20:08:33 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 20:08:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7XVn-00061I-M6; Fri, 31 Aug 2012 20:08:15 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1T7XVm-00061D-Qt
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 20:08:15 +0000
Received: from [85.158.143.35:33323] by server-3.bemta-4.messagelabs.com id
	AC/9D-08232-EA911405; Fri, 31 Aug 2012 20:08:14 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1346443690!5093695!1
X-Originating-IP: [209.85.216.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2646 invoked from network); 31 Aug 2012 20:08:11 -0000
Received: from mail-qa0-f45.google.com (HELO mail-qa0-f45.google.com)
	(209.85.216.45)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 20:08:11 -0000
Received: by qadc10 with SMTP id c10so1364552qad.11
	for <xen-devel@lists.xen.org>; Fri, 31 Aug 2012 13:08:10 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:cc:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=VTqsfML6e3vyBfDEBexXd0DUwxpSrHuBin2Uul3VoFY=;
	b=dYWAK90XWRO3xk+5nibtLjheUvRU3d4aNvPGHej680Z72wsnjFlLdfNgun8dUbkBlV
	ebc3cfG2si8xTVBAXugoW8Yn4zvn+VvRC3UD7Hf8FJaH5GoPhp8vdV1oK8iwF9u629nu
	CceFU4PBWpJ5LajOITLDxp8g1yuFyorUe6LsOAf8iof8gVPVCLwhVvJwPwKMVrZ4JNiU
	vYjN8MDtxUYd/TzL6e2AmrkanzXsYVmodYsGpa9829dtW8fe2xmJkhgndmlsKEiWhhQq
	1L6+oM2lvI9JFd1yE5aAKQcQ6Raot0VZ99gV9Ed2UNI1mz6XGfEYUHP7rtiN8v93mbr/
	xSaQ==
Received: by 10.229.135.7 with SMTP id l7mr5593580qct.110.1346443689991;
	Fri, 31 Aug 2012 13:08:09 -0700 (PDT)
Received: from [10.254.74.233] ([38.108.87.20])
	by mx.google.com with ESMTPS id da18sm6501898qab.14.2012.08.31.13.08.07
	(version=SSLv3 cipher=OTHER); Fri, 31 Aug 2012 13:08:08 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.32.0.111121
Date: Fri, 31 Aug 2012 21:08:02 +0100
From: Keir Fraser <keir.xen@gmail.com>
To: Dan Magenheimer <dan.magenheimer@oracle.com>,
	<xen-devel@lists.xen.org>
Message-ID: <CC66D832.3D8B3%keir.xen@gmail.com>
Thread-Topic: preemption and locking: why joined at the hip?
Thread-Index: Ac2HtFMAgE5ka2NF30GPFMTCDZyllg==
In-Reply-To: <16d6e254-acff-4212-b5ed-0b3dd5b40ea5@default>
Mime-version: 1.0
Cc: Zhenzhong Duan <zhenzhong.duan@oracle.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] preemption and locking: why joined at the hip?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 31/08/2012 20:47, "Dan Magenheimer" <dan.magenheimer@oracle.com> wrote:

> Tracking down a tmem problem in 4.2.0-rcN that crashes the
> hypervisor, I've discovered a 4.2 changeset that forces
> a preemption_enable/disable for every lock/unlock.
> 
> Tmem has dynamically allocated "objects" that contain a
> lock.  The lock is held when the object is destroyed.
> No reason to unlock something that's about to be destroyed.
> But with the preempt_enable/disable in the generic locking code,
> and the fact that do_softirq ASSERTs that preempt_count
> must be zero, a crash occurs.
> 
> While I'm suitably embarrassed that tmem has not yet
> been tested with any recent -unstable, and I note that the
> workaround is simple (forcing an unlock before destroying the
> object containing the held lock), I have to ask if
> this change is really a good idea or is it unnecessary
> babysitting?

It is to prevent a vcpu from sleeping (e.g. on a waitqueue) while holding
spinlocks. If that were to happen, the possibility of deadlock is obvious.
Hence it provides handy belt-and-braces sanity checking for this situation.

So just clean up after yourself and only destroy locks that are not locked.
;) I'm not clear why you'd be holding the lock during object destruction
anyway -- if anyone else could be spinning on the lock, it would not be safe
to free the lock.

 -- Keir

> Dan



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 20:09:34 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 20:09:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7XWn-00064d-42; Fri, 31 Aug 2012 20:09:17 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1T7XWm-00064S-32
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 20:09:16 +0000
Received: from [85.158.138.51:43507] by server-8.bemta-3.messagelabs.com id
	BC/09-24700-BE911405; Fri, 31 Aug 2012 20:09:15 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-15.tower-174.messagelabs.com!1346443753!26216679!1
X-Originating-IP: [209.85.216.52]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1229 invoked from network); 31 Aug 2012 20:09:14 -0000
Received: from mail-qa0-f52.google.com (HELO mail-qa0-f52.google.com)
	(209.85.216.52)
	by server-15.tower-174.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 20:09:14 -0000
Received: by qabg14 with SMTP id g14so1370075qab.11
	for <xen-devel@lists.xen.org>; Fri, 31 Aug 2012 13:09:13 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:cc:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=5LQ3ArST+J6jRG7GM7mUMw1qIjehT9TULKhwaX7YcW4=;
	b=feGpRtySOnhc6LrqZx/4bJuxLEwdfGHbSOTS8XCU7Y04IlBn4A1J/jZOIe+Z5BAB4q
	6ILoMYhoLHosPnPORcMZL+4IFA/VRC+shJCpAv+EXgT3jhxB4ssckMZ1oJ9wMByo+WpL
	wZ4ehOyHeO+8Mx//RuKxpvgi72ZRu8/ZK39UIpdIgR8kci3v/SDwyOwXxyCT6CCscywx
	uXvD3P5lqSRTFRsOMLkBRQUfuB4gDvVKY1c4ZPpNkVBweXj0GESNE4eHKaby2gJ7V5Qr
	lDa2MaJ/KUniqyH7oFIdEaDLkB5Q+Kc/dD/3g4UxCZ7RqGpPM9Pb4sSnd8dt3A/39Scd
	eVxQ==
Received: by 10.224.181.193 with SMTP id bz1mr20560839qab.64.1346443753451;
	Fri, 31 Aug 2012 13:09:13 -0700 (PDT)
Received: from [10.254.74.233] ([38.108.87.20])
	by mx.google.com with ESMTPS id ez6sm6504652qab.17.2012.08.31.13.09.05
	(version=SSLv3 cipher=OTHER); Fri, 31 Aug 2012 13:09:12 -0700 (PDT)
User-Agent: Microsoft-Entourage/12.32.0.111121
Date: Fri, 31 Aug 2012 21:08:58 +0100
From: Keir Fraser <keir.xen@gmail.com>
To: Dan Magenheimer <dan.magenheimer@oracle.com>,
	<xen-devel@lists.xen.org>
Message-ID: <CC66D86A.3D8B4%keir.xen@gmail.com>
Thread-Topic: [PATCH] [FOR 4.2] tmem: add matching unlock for an
	about-to-be-destroyed object
Thread-Index: Ac2HtHRh+Q+0Bu89z0qBWtB0t/8F6A==
In-Reply-To: <76edd759-ceca-4fb5-b411-1e598137afd8@default>
Mime-version: 1.0
Cc: Zhenzhong Duan <zhenzhong.duan@oracle.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [Xen-devel] [PATCH] [FOR 4.2] tmem: add matching unlock for an
 about-to-be-destroyed object
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 31/08/2012 20:58, "Dan Magenheimer" <dan.magenheimer@oracle.com> wrote:

> (Guidance appreciated if the Xen patch submittal process
> has changed and I need to do more than send this email.)

This is fine, thanks!

 -- Keir

> A 4.2 changeset forces a preempt_disable/enable with
> every lock/unlock.
> 
> Tmem has dynamically allocated "objects" that contain a
> lock.  The lock is held when the object is destroyed.
> No reason to unlock something that's about to be destroyed!
> But with the preempt_enable/disable in the generic locking code,
> and the fact that do_softirq ASSERTs that preempt_count
> must be zero, a crash occurs soon after any object is
> destroyed.
> 
> So force lock to be released before destroying objects.
> 
> Signed-off-by: Dan Magenheimer <dan.magenheimer@oracle.com>
> 
> diff -r 1967c7c290eb xen/common/tmem.c
> --- a/xen/common/tmem.c Wed Feb 09 12:03:09 2011 +0000
> +++ b/xen/common/tmem.c Fri Aug 31 13:49:51 2012 -0600
> @@ -957,6 +957,7 @@
>      /* use no_rebalance only if all objects are being destroyed anyway */
>      if ( !no_rebalance )
>          rb_erase(&obj->rb_tree_node,&pool->obj_rb_root[oid_hash(&old_oid)]);
> +    tmem_spin_unlock(&obj->obj_spinlock);
>      tmem_free(obj,sizeof(obj_t),pool);
>  }
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 20:37:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 20:37:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7XxL-0006Po-Ha; Fri, 31 Aug 2012 20:36:43 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ross.Philipson@citrix.com>) id 1T7XxJ-0006Pj-GZ
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 20:36:41 +0000
Received: from [85.158.143.35:52036] by server-1.bemta-4.messagelabs.com id
	9D/DC-12504-85021405; Fri, 31 Aug 2012 20:36:40 +0000
X-Env-Sender: Ross.Philipson@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1346445396!13612732!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzU0NjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19894 invoked from network); 31 Aug 2012 20:36:38 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 20:36:38 -0000
X-IronPort-AV: E=Sophos;i="4.80,349,1344211200"; d="scan'208";a="206826803"
Received: from ftlpmailmx01.citrite.net ([10.13.107.65])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 20:36:36 +0000
Received: from FTLPMAILBOX02.citrite.net ([10.13.98.210]) by
	FTLPMAILMX01.citrite.net ([10.13.107.65]) with mapi; Fri, 31 Aug 2012
	16:36:36 -0400
From: Ross Philipson <Ross.Philipson@citrix.com>
To: Art Napor <artnapor@yahoo.com>, "xen-devel@lists.xen.org"
	<xen-devel@lists.xen.org>
Date: Fri, 31 Aug 2012 16:36:38 -0400
Thread-Topic: [Xen-devel]  [PATCH v3 01/04] HVM firmware passthrough HVM
Thread-Index: Ac2HpbUpcrU25+ijTjKWFaI4HLC9dQAEfg0w
Message-ID: <831D55AF5A11D64C9B4B43F59EEBF720A31BCC442E@FTLPMAILBOX02.citrite.net>
References: <1346343064.91089.YahooMailNeo@web121001.mail.ne1.yahoo.com>
	<831D55AF5A11D64C9B4B43F59EEBF720A31BCC42AB@FTLPMAILBOX02.citrite.net>
	<1346437375.61994.YahooMailNeo@web121001.mail.ne1.yahoo.com>
In-Reply-To: <1346437375.61994.YahooMailNeo@web121001.mail.ne1.yahoo.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
MIME-Version: 1.0
Subject: Re: [Xen-devel] [PATCH v3 01/04] HVM firmware passthrough HVM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> I also came across the earlier smbios passthrough patch series. I'm looking to pass the DMI block from the Dom0 to the DomU. Your earlier smbios patch series applied cleanly and built against 4.1.2, but the Dom0 smbios data didn't seem to make it into the HVM DomU. 

> Are there any version or other changeset limitations that would prevent the patches from being manually applied to 4.1? 

> http://lists.xen.org/archives/html/xen-devel/2012-02/msg01754.html

Well, there were some tool stack changes (in libxl) that happened right around the time I was making the patches, so I had to change things to match. In an older branch you might have to change the way the BIOS blobs get sent into the domain-building code.

The rest of the code to load the BIOS bits into the new guest's memory, and the changes to hvmloader to process them, should apply cleanly, I think.

Note that the last patch set for this (v3) does support passing in both SMBIOS structures and ACPI tables so you can use that patch set.

Thanks
Ross



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 21:10:17 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 21:10:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7YTP-0006hk-7n; Fri, 31 Aug 2012 21:09:51 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dan.magenheimer@oracle.com>) id 1T7YTN-0006he-K2
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 21:09:49 +0000
X-Env-Sender: dan.magenheimer@oracle.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1346447380!2219517!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNzI1MzU5\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16511 invoked from network); 31 Aug 2012 21:09:41 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-11.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 31 Aug 2012 21:09:41 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7VL9TeZ001609
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 31 Aug 2012 21:09:30 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7VL9S4F019637
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 31 Aug 2012 21:09:29 GMT
Received: from abhmt117.oracle.com (abhmt117.oracle.com [141.146.116.69])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7VL9ShV018376; Fri, 31 Aug 2012 16:09:28 -0500
MIME-Version: 1.0
Message-ID: <e927526f-b096-43da-a3b1-57d84daea825@default>
Date: Fri, 31 Aug 2012 14:08:49 -0700 (PDT)
From: Dan Magenheimer <dan.magenheimer@oracle.com>
To: =?iso-8859-1?B?UGFzaSBL5HJra+RpbmVu?= <pasik@iki.fi>,
	xen-devel@lists.xen.org
X-Priority: 3
X-Mailer: Oracle Beehive Extensions for Outlook 2.0.1.7  (607090) [OL
	12.0.6661.5003 (x86)]
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: Konrad Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] xend/xm on 4.1/4.2 on Fedora (FC17)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Is there a how-to for starting/running xm/xend on Fedora (FC17)?
Is it different for Xen 4.1 and 4.2?

I did find this:
http://wiki.xen.org/wiki/Xen_Common_Problems#Starting_xend_fails.3F 
but it doesn't seem to help.  And this:
http://wiki.xen.org/wiki/Fedora_Host_Installation 
only addresses xl.

I expect I need to do something manually to start xencommons or
something like that but obvious things don't seem to work,
and I'm not a FC17 expert at all.

I am able to use xl in general... I need to run xm instead for
some specific testing (on both 4.1 and 4.2).

Thanks for any help and sorry for the newbie-ish question!

Dan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 21:14:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 21:14:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7YXA-0006ow-Sx; Fri, 31 Aug 2012 21:13:44 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T7YX9-0006oq-AU
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 21:13:43 +0000
Received: from [85.158.139.83:36547] by server-2.bemta-5.messagelabs.com id
	F7/1C-11456-60921405; Fri, 31 Aug 2012 21:13:42 +0000
From xen-devel-bounces@lists.xen.org Fri Aug 31 21:14:11 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 21:14:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7YXA-0006ow-Sx; Fri, 31 Aug 2012 21:13:44 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T7YX9-0006oq-AU
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 21:13:43 +0000
Received: from [85.158.139.83:36547] by server-2.bemta-5.messagelabs.com id
	F7/1C-11456-60921405; Fri, 31 Aug 2012 21:13:42 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-8.tower-182.messagelabs.com!1346447620!16710292!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDczMTQzMA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19454 invoked from network); 31 Aug 2012 21:13:41 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-8.tower-182.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 31 Aug 2012 21:13:41 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7VLDToN003577
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 31 Aug 2012 21:13:29 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7VLDSoV017285
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 31 Aug 2012 21:13:28 GMT
Received: from abhmt112.oracle.com (abhmt112.oracle.com [141.146.116.64])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7VLDRk6011666; Fri, 31 Aug 2012 16:13:27 -0500
Received: from localhost.localdomain (/38.96.16.75)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 31 Aug 2012 14:13:27 -0700
Date: Fri, 31 Aug 2012 17:13:25 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Dan Magenheimer <dan.magenheimer@oracle.com>
Message-ID: <20120831211325.GB20594@localhost.localdomain>
References: <e927526f-b096-43da-a3b1-57d84daea825@default>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <e927526f-b096-43da-a3b1-57d84daea825@default>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] xend/xm on 4.1/4.2 on Fedora (FC17)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Aug 31, 2012 at 02:08:49PM -0700, Dan Magenheimer wrote:
> Is there a how-to for starting/running xm/xend on Fedora (FC17)?
> Is it different for Xen 4.1 and 4.2?
> 
> I did find this:
> http://wiki.xen.org/wiki/Xen_Common_Problems#Starting_xend_fails.3F 
> but it doesn't seem to help.  And this:
> http://wiki.xen.org/wiki/Fedora_Host_Installation 
> only addresses xl.
> 
> I expect I need to do something manually to start xencommons or
> something like that but obvious things don't seem to work,

How are you running this? Does it work when you boot up? Or does it stop
working after you restart xend a couple of times?

> and I'm not a FC17 expert at all.

service xend start

But you also need to enable it via systemd if it wasn't already enabled.
The syntax is something like (see
http://fedoraproject.org/wiki/SysVinit_to_Systemd_Cheatsheet)

systemctl enable xend.service

(though it might not be called xend but something else).
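
A minimal sketch of the steps being described, assuming the unit really is
called xend.service (the name may differ per Fedora packaging, so check
first):

```shell
# Find the actual unit name first -- it may not be "xend.service"
# on a given Fedora build of Xen.
systemctl list-unit-files | grep -i xen

# Enable at boot and start now (unit name below is an assumption):
systemctl enable xend.service
systemctl start xend.service

# SysV-style equivalent that still works through the compat layer:
service xend start
```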
> 
> I am able to use xl in general... I need to run xm instead for
> some specific testing (on both 4.1 and 4.2).
> 
> Thanks for any help and sorry for the newbie-ish question!
> 
> Dan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 21:15:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 21:15:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7YYL-0006vI-Gg; Fri, 31 Aug 2012 21:14:57 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T7YYJ-0006vB-VL
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 21:14:56 +0000
Received: from [85.158.143.35:55345] by server-3.bemta-4.messagelabs.com id
	8B/49-08232-F4921405; Fri, 31 Aug 2012 21:14:55 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1346447693!16124643!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDczMTQzMA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9825 invoked from network); 31 Aug 2012 21:14:54 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-6.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 31 Aug 2012 21:14:54 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7VLEkRT004380
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 31 Aug 2012 21:14:47 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7VLEieG025576
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 31 Aug 2012 21:14:46 GMT
Received: from abhmt118.oracle.com (abhmt118.oracle.com [141.146.116.70])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7VLEiT5021212; Fri, 31 Aug 2012 16:14:44 -0500
Received: from localhost.localdomain (/38.96.16.75)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 31 Aug 2012 14:14:44 -0700
Date: Fri, 31 Aug 2012 17:14:42 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Andres Lagar-Cavilla <andreslc@gridcentric.ca>
Message-ID: <20120831211441.GC20594@localhost.localdomain>
References: <1346086287-17674-1-git-send-email-andres@lagarcavilla.org>
	<5040CAEA.7000600@citrix.com>
	<160CC375-2682-4CBF-B1EC-06A9F3E49A40@gridcentric.ca>
	<A976B10C-58BE-4660-89E9-A8F85CAB5F19@gmail.com>
	<20120831165454.GE18929@localhost.localdomain>
	<CAO=PTzoNfS8+DFXUM2+E9FaZSCJwTeRjecCXJwy_+CFyxGYpUQ@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAO=PTzoNfS8+DFXUM2+E9FaZSCJwTeRjecCXJwy_+CFyxGYpUQ@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: andres.lagarcavilla@gmail.com, xen-devel@lists.xen.org,
	Andres Lagar-Cavilla <andres@lagarcavilla.org>,
	David Vrabel <david.vrabel@citrix.com>
Subject: Re: [Xen-devel] [PATCH] Xen backend support for paged out grant
 targets.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Aug 31, 2012 at 03:33:17PM -0400, Andres Lagar-Cavilla wrote:
> But msleep will wind up calling schedule(). We definitely cannot afford to
> pin down a dom0 vcpu when the pager itself is in dom0.

Duh! Somehow I was thinking it just did a dumb loop. Maybe I am thinking
of a different type of early XYZsleep variant.

> 
> Iiuc....
> 
> Andres
> On Aug 31, 2012 12:55 PM, "Konrad Rzeszutek Wilk" <konrad.wilk@oracle.com>
> wrote:
> 
> > On Fri, Aug 31, 2012 at 11:42:32AM -0400, Andres Lagar-Cavilla wrote:
> > > Actually acted upon your feedback ipso facto:
> > >
> > > commit d5fab912caa1f0cf6be0a6773f502d3417a207b6
> > > Author: Andres Lagar-Cavilla <andres@lagarcavilla.org>
> > > Date:   Sun Aug 26 09:45:57 2012 -0400
> > >
> > >     Xen backend support for paged out grant targets.
> > >
> > >     Since Xen-4.2, hvm domains may have portions of their memory paged
> > out. When a
> > >     foreign domain (such as dom0) attempts to map these frames, the map
> > will
> > >     initially fail. The hypervisor returns a suitable errno, and kicks an
> > >     asynchronous page-in operation carried out by a helper. The foreign
> > domain is
> > >     expected to retry the mapping operation until it eventually
> > succeeds. The
> > >     foreign domain is not put to sleep because it itself could be the one
> > running the
> > >     pager assist (typical scenario for dom0).
> > >
> > >     This patch adds support for this mechanism for backend drivers using
> > grant
> > >     mapping and copying operations. Specifically, this covers the
> > blkback and
> > >     gntdev drivers (which map foreign grants), and the netback driver
> > (which copies
> > >     foreign grants).
> > >
> > >     * Add GNTST_eagain, already exposed by Xen, to the grant interface.
> > >     * Add a retry method for grants that fail with GNTST_eagain (i.e.
> > because the
> > >       target foreign frame is paged out).
> > >     * Insert hooks with appropriate macro decorators in the
> > aforementioned drivers.
> > >
> > >     The retry loop is only invoked if the grant operation status is
> > GNTST_eagain.
> > >     It guarantees to leave a new status code different from
> > GNTST_eagain. Any other
> > >     status code results in identical code execution as before.
> > >
> > >     The retry loop performs 256 attempts with increasing time intervals
> > through a
> > >     32 second period. It uses msleep to yield while waiting for the next
> > retry.
> > >
> >
> >
> > Would it make sense to yield to other processes (so, call schedule())? Or
> > perhaps have this in a workqueue?
> >
> > I mean the 'msleep' just looks like a hack... 32 seconds of doing
> > 'msleep' on a 1-VCPU dom0 could trigger the watchdog, I think?
> >
> > >     V2 after feedback from David Vrabel:
> > >     * Explicit MAX_DELAY instead of wrap-around delay into zero
> > >     * Abstract GNTST_eagain check into core grant table code for netback
> > module.
> > >
> > >     Signed-off-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
> > >
> > > diff --git a/drivers/net/xen-netback/netback.c
> > b/drivers/net/xen-netback/netback.c
> > > index 682633b..5610fd8 100644
> > > --- a/drivers/net/xen-netback/netback.c
> > > +++ b/drivers/net/xen-netback/netback.c
> > > @@ -635,9 +635,7 @@ static void xen_netbk_rx_action(struct xen_netbk
> > *netbk)
> > >               return;
> > >
> > >       BUG_ON(npo.copy_prod > ARRAY_SIZE(netbk->grant_copy_op));
> > > -     ret = HYPERVISOR_grant_table_op(GNTTABOP_copy,
> > &netbk->grant_copy_op,
> > > -                                     npo.copy_prod);
> > > -     BUG_ON(ret != 0);
> > > +     gnttab_batch_copy_no_eagain(netbk->grant_copy_op, npo.copy_prod);
> > >
> > >       while ((skb = __skb_dequeue(&rxq)) != NULL) {
> > >               sco = (struct skb_cb_overlay *)skb->cb;
> > > @@ -1460,18 +1458,15 @@ static void xen_netbk_tx_submit(struct xen_netbk
> > *netbk)
> > >  static void xen_netbk_tx_action(struct xen_netbk *netbk)
> > >  {
> > >       unsigned nr_gops;
> > > -     int ret;
> > >
> > >       nr_gops = xen_netbk_tx_build_gops(netbk);
> > >
> > >       if (nr_gops == 0)
> > >               return;
> > > -     ret = HYPERVISOR_grant_table_op(GNTTABOP_copy,
> > > -                                     netbk->tx_copy_ops, nr_gops);
> > > -     BUG_ON(ret);
> > >
> > > -     xen_netbk_tx_submit(netbk);
> > > +     gnttab_batch_copy_no_eagain(netbk->tx_copy_ops, nr_gops);
> > >
> > > +     xen_netbk_tx_submit(netbk);
> > >  }
> > >
> > >  static void xen_netbk_idx_release(struct xen_netbk *netbk, u16
> > pending_idx)
> > > diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
> > > index eea81cf..96543b2 100644
> > > --- a/drivers/xen/grant-table.c
> > > +++ b/drivers/xen/grant-table.c
> > > @@ -38,6 +38,7 @@
> > >  #include <linux/vmalloc.h>
> > >  #include <linux/uaccess.h>
> > >  #include <linux/io.h>
> > > +#include <linux/delay.h>
> > >  #include <linux/hardirq.h>
> > >
> > >  #include <xen/xen.h>
> > > @@ -823,6 +824,26 @@ unsigned int gnttab_max_grant_frames(void)
> > >  }
> > >  EXPORT_SYMBOL_GPL(gnttab_max_grant_frames);
> > >
> > > +#define MAX_DELAY 256
> > > +void
> > > +gnttab_retry_eagain_gop(unsigned int cmd, void *gop, int16_t *status,
> > > +                                             const char *func)
> > > +{
> > > +     unsigned delay = 1;
> > > +
> > > +     do {
> > > +             BUG_ON(HYPERVISOR_grant_table_op(cmd, gop, 1));
> > > +             if (*status == GNTST_eagain)
> > > +                     msleep(delay++);
> > > +     } while ((*status == GNTST_eagain) && (delay < MAX_DELAY));
> > > +
> > > +     if (delay >= MAX_DELAY) {
> > > +             printk(KERN_ERR "%s: %s eagain grant\n", func,
> > current->comm);
> > > +             *status = GNTST_bad_page;
> > > +     }
> > > +}
> > > +EXPORT_SYMBOL_GPL(gnttab_retry_eagain_gop);
> > > +
> > >  int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
> > >                   struct gnttab_map_grant_ref *kmap_ops,
> > >                   struct page **pages, unsigned int count)
> > > @@ -836,6 +857,11 @@ int gnttab_map_refs(struct gnttab_map_grant_ref
> > *map_ops,
> > >       if (ret)
> > >               return ret;
> > >
> > > +     /* Retry eagain maps */
> > > +     for (i = 0; i < count; i++)
> > > +             if (map_ops[i].status == GNTST_eagain)
> > > +                     gnttab_retry_eagain_map(map_ops + i);
> > > +
> > >       if (xen_feature(XENFEAT_auto_translated_physmap))
> > >               return ret;
> > >
> > > diff --git a/drivers/xen/xenbus/xenbus_client.c
> > b/drivers/xen/xenbus/xenbus_client.c
> > > index b3e146e..749f6a3 100644
> > > --- a/drivers/xen/xenbus/xenbus_client.c
> > > +++ b/drivers/xen/xenbus/xenbus_client.c
> > > @@ -490,8 +490,7 @@ static int xenbus_map_ring_valloc_pv(struct
> > xenbus_device *dev,
> > >
> > >       op.host_addr = arbitrary_virt_to_machine(pte).maddr;
> > >
> > > -     if (HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, &op, 1))
> > > -             BUG();
> > > +     gnttab_map_grant_no_eagain(&op);
> > >
> > >       if (op.status != GNTST_okay) {
> > >               free_vm_area(area);
> > > @@ -572,8 +571,7 @@ int xenbus_map_ring(struct xenbus_device *dev, int
> > gnt_ref,
> > >       gnttab_set_map_op(&op, (unsigned long)vaddr, GNTMAP_host_map,
> > gnt_ref,
> > >                         dev->otherend_id);
> > >
> > > -     if (HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, &op, 1))
> > > -             BUG();
> > > +     gnttab_map_grant_no_eagain(&op);
> > >
> > >       if (op.status != GNTST_okay) {
> > >               xenbus_dev_fatal(dev, op.status,
> > > diff --git a/include/xen/grant_table.h b/include/xen/grant_table.h
> > > index 11e27c3..2fecfab 100644
> > > --- a/include/xen/grant_table.h
> > > +++ b/include/xen/grant_table.h
> > > @@ -43,6 +43,7 @@
> > >  #include <xen/interface/grant_table.h>
> > >
> > >  #include <asm/xen/hypervisor.h>
> > > +#include <asm/xen/hypercall.h>
> > >
> > >  #include <xen/features.h>
> > >
> > > @@ -183,6 +184,43 @@ unsigned int gnttab_max_grant_frames(void);
> > >
> > >  #define gnttab_map_vaddr(map) ((void *)(map.host_virt_addr))
> > >
> > > +/* Retry a grant map/copy operation when the hypervisor returns
> > GNTST_eagain.
> > > + * This is typically due to paged out target frames.
> > > + * Generic entry-point, use macro decorators below for specific grant
> > > + * operations.
> > > + * Will retry for 1, 2, ... 255 ms, i.e. 256 times during 32 seconds.
> > > + * Return value in *status guaranteed to no longer be GNTST_eagain. */
> > > +void gnttab_retry_eagain_gop(unsigned int cmd, void *gop, int16_t
> > *status,
> > > +                             const char *func);
> > > +
> > > +#define gnttab_retry_eagain_map(_gop)                       \
> > > +    gnttab_retry_eagain_gop(GNTTABOP_map_grant_ref, (_gop), \
> > > +                            &(_gop)->status, __func__)
> > > +
> > > +#define gnttab_retry_eagain_copy(_gop)                  \
> > > +    gnttab_retry_eagain_gop(GNTTABOP_copy, (_gop),      \
> > > +                            &(_gop)->status, __func__)
> > > +
> > > +#define gnttab_map_grant_no_eagain(_gop)
> >      \
> > > +do {
> >      \
> > > +    if (HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, (_gop), 1))
> >     \
> > > +        BUG();
> >      \
> > > +    if ((_gop)->status == GNTST_eagain)
> >     \
> > > +        gnttab_retry_eagain_map((_gop));
> >      \
> > > +} while(0)
> > > +
> > > +static inline void
> > > +gnttab_batch_copy_no_eagain(struct gnttab_copy *batch, unsigned count)
> > > +{
> > > +    unsigned i;
> > > +    struct gnttab_copy *op;
> > > +
> > > +    BUG_ON(HYPERVISOR_grant_table_op(GNTTABOP_copy, batch, count));
> > > +    for (i = 0, op = batch; i < count; i++, op++)
> > > +        if (op->status == GNTST_eagain)
> > > +            gnttab_retry_eagain_copy(op);
> > > +}
> > > +
> > >  int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
> > >                   struct gnttab_map_grant_ref *kmap_ops,
> > >                   struct page **pages, unsigned int count);
> > > diff --git a/include/xen/interface/grant_table.h
> > b/include/xen/interface/grant_table.h
> > > index 7da811b..66cb734 100644
> > > --- a/include/xen/interface/grant_table.h
> > > +++ b/include/xen/interface/grant_table.h
> > > @@ -520,6 +520,7 @@ DEFINE_GUEST_HANDLE_STRUCT(gnttab_get_version);
> > >  #define GNTST_permission_denied (-8) /* Not enough privilege for
> > operation.  */
> > >  #define GNTST_bad_page         (-9) /* Specified page was invalid for
> > op.    */
> > >  #define GNTST_bad_copy_arg    (-10) /* copy arguments cross page
> > boundary */
> > > +#define GNTST_eagain          (-12) /* Retry.
> >      */
> > >
> > >  #define GNTTABOP_error_msgs {                   \
> > >      "okay",                                     \
> > > @@ -533,6 +534,7 @@ DEFINE_GUEST_HANDLE_STRUCT(gnttab_get_version);
> > >      "permission denied",                        \
> > >      "bad page",                                 \
> > >      "copy arguments cross page boundary"        \
> > > +    "retry"                                     \
> > >  }
> > >
> > >  #endif /* __XEN_PUBLIC_GRANT_TABLE_H__ */
> > >
> > >
> > > On Aug 31, 2012, at 10:45 AM, Andres Lagar-Cavilla wrote:
> > >
> > > >
> > > > On Aug 31, 2012, at 10:32 AM, David Vrabel wrote:
> > > >
> > > >> On 27/08/12 17:51, andres@lagarcavilla.org wrote:
> > > >>> From: Andres Lagar-Cavilla <andres@lagarcavilla.org>
> > > >>>
> > > >>> Since Xen-4.2, hvm domains may have portions of their memory paged
> > out. When a
> > > >>> foreign domain (such as dom0) attempts to map these frames, the map
> > will
> > > >>> initially fail. The hypervisor returns a suitable errno, and kicks an
> > > >>> asynchronous page-in operation carried out by a helper. The foreign
> > domain is
> > > >>> expected to retry the mapping operation until it eventually
> > succeeds. The
> > > >>> foreign domain is not put to sleep because it itself could be the one
> > running the
> > > >>> pager assist (typical scenario for dom0).
> > > >>>
> > > >>> This patch adds support for this mechanism for backend drivers using
> > grant
> > > >>> mapping and copying operations. Specifically, this covers the
> > blkback and
> > > >>> gntdev drivers (which map foreign grants), and the netback driver
> > (which copies
> > > >>> foreign grants).
> > > >>>
> > > >>> * Add GNTST_eagain, already exposed by Xen, to the grant interface.
> > > >>> * Add a retry method for grants that fail with GNTST_eagain (i.e.
> > because the
> > > >>> target foreign frame is paged out).
> > > >>> * Insert hooks with appropriate macro decorators in the
> > aforementioned drivers.
> > > >>
> > > >> I think you should implement wrappers around
> > HYPERVISOR_grant_table_op()
> > > >> and have the wrapper do the retries instead of every backend having
> > to
> > > >> check for EAGAIN and issue the retries itself. Similar to the
> > > >> gnttab_map_grant_no_eagain() function you've already added.
> > > >>
> > > >> Why do some operations not retry anyway?
> > > >
> > > > All operations retry. The reason why I could not make it as elegant as
> > you suggest is because grant operations are submitted in batches and their
> > status(es?) later checked individually elsewhere. This is the case for
> > netback. Note that both blkback and gntdev use a more linear structure with
> > the gnttab_map_refs helper, which allows me to hide all the retry gore from
> > those drivers into grant table code. Likewise for xenbus ring mapping.
> > > >
> > > > In summary, outside of core grant table code, only the netback driver
> > needs to check explicitly for retries, due to its
> > batch-copy-delayed-per-slot-check structure.
> > > >
> > > >>
> > > >>> +void
> > > >>> +gnttab_retry_eagain_gop(unsigned int cmd, void *gop, int16_t
> > *status,
> > > >>> +                                         const char *func)
> > > >>> +{
> > > >>> + u8 delay = 1;
> > > >>> +
> > > >>> + do {
> > > >>> +         BUG_ON(HYPERVISOR_grant_table_op(cmd, gop, 1));
> > > >>> +         if (*status == GNTST_eagain)
> > > >>> +                 msleep(delay++);
> > > >>> + } while ((*status == GNTST_eagain) && delay);
> > > >>
> > > >> Terminating the loop when delay wraps is a bit subtle.  Why not make
> > > >> delay unsigned and check delay <= MAX_DELAY?
> > > > Good idea (MAX_DELAY == 256). I'd like to get Konrad's feedback before
> > a re-spin.
> > > >
> > > >>
> > > >> Would it be sensible to ramp the delay faster?  Perhaps double each
> > > >> iteration with a maximum possible delay of e.g., 256 ms.
> > > > Generally speaking we've never seen past three retries. I am open to
> > changing the algorithm but there is a significant possibility it won't
> > matter at all.
> > > >
> > > >>
> > > >>> +#define gnttab_map_grant_no_eagain(_gop)
> >          \
> > > >>> +do {
> >          \
> > > >>> +    if ( HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, (_gop),
> > 1))      \
> > > >>> +        BUG();
> >          \
> > > >>> +    if ((_gop)->status == GNTST_eagain)
> >         \
> > > >>> +        gnttab_retry_eagain_map((_gop));
> >          \
> > > >>> +} while(0)
> > > >>
> > > >> Inline functions, please.
> > > >
> > > > I want to retain the original context for debugging. Eventually we
> > print __func__ if things go wrong.
> > > >
> > > > Thanks, great feedback
> > > > Andres
> > > >
> > > >>
> > > >> David
> > > >
> > >
> >

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 21:15:16 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 21:15:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7YYL-0006vI-Gg; Fri, 31 Aug 2012 21:14:57 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T7YYJ-0006vB-VL
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 21:14:56 +0000
Received: from [85.158.143.35:55345] by server-3.bemta-4.messagelabs.com id
	8B/49-08232-F4921405; Fri, 31 Aug 2012 21:14:55 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1346447693!16124643!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDczMTQzMA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9825 invoked from network); 31 Aug 2012 21:14:54 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-6.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 31 Aug 2012 21:14:54 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7VLEkRT004380
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 31 Aug 2012 21:14:47 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7VLEieG025576
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 31 Aug 2012 21:14:46 GMT
Received: from abhmt118.oracle.com (abhmt118.oracle.com [141.146.116.70])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7VLEiT5021212; Fri, 31 Aug 2012 16:14:44 -0500
Received: from localhost.localdomain (/38.96.16.75)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 31 Aug 2012 14:14:44 -0700
Date: Fri, 31 Aug 2012 17:14:42 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Andres Lagar-Cavilla <andreslc@gridcentric.ca>
Message-ID: <20120831211441.GC20594@localhost.localdomain>
References: <1346086287-17674-1-git-send-email-andres@lagarcavilla.org>
	<5040CAEA.7000600@citrix.com>
	<160CC375-2682-4CBF-B1EC-06A9F3E49A40@gridcentric.ca>
	<A976B10C-58BE-4660-89E9-A8F85CAB5F19@gmail.com>
	<20120831165454.GE18929@localhost.localdomain>
	<CAO=PTzoNfS8+DFXUM2+E9FaZSCJwTeRjecCXJwy_+CFyxGYpUQ@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAO=PTzoNfS8+DFXUM2+E9FaZSCJwTeRjecCXJwy_+CFyxGYpUQ@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: andres.lagarcavilla@gmail.com, xen-devel@lists.xen.org,
	Andres Lagar-Cavilla <andres@lagarcavilla.org>,
	David Vrabel <david.vrabel@citrix.com>
Subject: Re: [Xen-devel] [PATCH] Xen backend support for paged out grant
 targets.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Aug 31, 2012 at 03:33:17PM -0400, Andres Lagar-Cavilla wrote:
> But msleep will wind up calling schedule(). We definitely cannot afford to
> pin down a dom0 vcpu when the pager itself is in dom0.

Duh! Somehow I was thinking it just do a dumb loop. Maybe I am thinking
about a different type of early XYZsleep variant.

> 
> Iiuc....
> 
> Andres
> On Aug 31, 2012 12:55 PM, "Konrad Rzeszutek Wilk" <konrad.wilk@oracle.com>
> wrote:
> 
> > On Fri, Aug 31, 2012 at 11:42:32AM -0400, Andres Lagar-Cavilla wrote:
> > > Actually acted upon your feedback ipso facto:
> > >
> > > commit d5fab912caa1f0cf6be0a6773f502d3417a207b6
> > > Author: Andres Lagar-Cavilla <andres@lagarcavilla.org>
> > > Date:   Sun Aug 26 09:45:57 2012 -0400
> > >
> > >     Xen backend support for paged out grant targets.
> > >
> > >     Since Xen-4.2, hvm domains may have portions of their memory paged
> > out. When a
> > >     foreign domain (such as dom0) attempts to map these frames, the map
> > will
> > >     initially fail. The hypervisor returns a suitable errno, and kicks an
> > >     asynchronous page-in operation carried out by a helper. The foreign
> > domain is
> > >     expected to retry the mapping operation until it eventually
> > succeeds. The
> > >     foreign domain is not put to sleep because itself could be the one
> > running the
> > >     pager assist (typical scenario for dom0).
> > >
> > >     This patch adds support for this mechanism for backend drivers using
> > grant
> > >     mapping and copying operations. Specifically, this covers the
> > blkback and
> > >     gntdev drivers (which map foregin grants), and the netback driver
> > (which copies
> > >     foreign grants).
> > >
> > >     * Add GNTST_eagain, already exposed by Xen, to the grant interface.
> > >     * Add a retry method for grants that fail with GNTST_eagain (i.e.
> > because the
> > >       target foregin frame is paged out).
> > >     * Insert hooks with appropriate macro decorators in the
> > aforementioned drivers.
> > >
> > >     The retry loop is only invoked if the grant operation status is
> > GNTST_eagain.
> > >     It guarantees to leave a new status code different from
> > GNTST_eagain. Any other
> > >     status code results in identical code execution as before.
> > >
> > >     The retry loop performs 256 attempts with increasing time intervals
> > through a
> > >     32 second period. It uses msleep to yield while waiting for the next
> > retry.
> > >
> >
> >
> > Would it make sense to yield to other processes (so call schedule)? Or
> > perhaps have this in a workqueue ?
> >
> > I mean, the 'msleep' just looks like a hack... 32 seconds of doing
> > 'msleep' on a 1-VCPU dom0 could trigger the watchdog, I think?
> >
> > >     V2 after feedback from David Vrabel:
> > >     * Explicit MAX_DELAY instead of wrap-around delay into zero
> > >     * Abstract GNTST_eagain check into core grant table code for netback
> > module.
> > >
> > >     Signed-off-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
> > >
> > > diff --git a/drivers/net/xen-netback/netback.c
> > b/drivers/net/xen-netback/netback.c
> > > index 682633b..5610fd8 100644
> > > --- a/drivers/net/xen-netback/netback.c
> > > +++ b/drivers/net/xen-netback/netback.c
> > > @@ -635,9 +635,7 @@ static void xen_netbk_rx_action(struct xen_netbk
> > *netbk)
> > >               return;
> > >
> > >       BUG_ON(npo.copy_prod > ARRAY_SIZE(netbk->grant_copy_op));
> > > -     ret = HYPERVISOR_grant_table_op(GNTTABOP_copy,
> > &netbk->grant_copy_op,
> > > -                                     npo.copy_prod);
> > > -     BUG_ON(ret != 0);
> > > +     gnttab_batch_copy_no_eagain(netbk->grant_copy_op, npo.copy_prod);
> > >
> > >       while ((skb = __skb_dequeue(&rxq)) != NULL) {
> > >               sco = (struct skb_cb_overlay *)skb->cb;
> > > @@ -1460,18 +1458,15 @@ static void xen_netbk_tx_submit(struct xen_netbk
> > *netbk)
> > >  static void xen_netbk_tx_action(struct xen_netbk *netbk)
> > >  {
> > >       unsigned nr_gops;
> > > -     int ret;
> > >
> > >       nr_gops = xen_netbk_tx_build_gops(netbk);
> > >
> > >       if (nr_gops == 0)
> > >               return;
> > > -     ret = HYPERVISOR_grant_table_op(GNTTABOP_copy,
> > > -                                     netbk->tx_copy_ops, nr_gops);
> > > -     BUG_ON(ret);
> > >
> > > -     xen_netbk_tx_submit(netbk);
> > > +     gnttab_batch_copy_no_eagain(netbk->tx_copy_ops, nr_gops);
> > >
> > > +     xen_netbk_tx_submit(netbk);
> > >  }
> > >
> > >  static void xen_netbk_idx_release(struct xen_netbk *netbk, u16
> > pending_idx)
> > > diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
> > > index eea81cf..96543b2 100644
> > > --- a/drivers/xen/grant-table.c
> > > +++ b/drivers/xen/grant-table.c
> > > @@ -38,6 +38,7 @@
> > >  #include <linux/vmalloc.h>
> > >  #include <linux/uaccess.h>
> > >  #include <linux/io.h>
> > > +#include <linux/delay.h>
> > >  #include <linux/hardirq.h>
> > >
> > >  #include <xen/xen.h>
> > > @@ -823,6 +824,26 @@ unsigned int gnttab_max_grant_frames(void)
> > >  }
> > >  EXPORT_SYMBOL_GPL(gnttab_max_grant_frames);
> > >
> > > +#define MAX_DELAY 256
> > > +void
> > > +gnttab_retry_eagain_gop(unsigned int cmd, void *gop, int16_t *status,
> > > +                                             const char *func)
> > > +{
> > > +     unsigned delay = 1;
> > > +
> > > +     do {
> > > +             BUG_ON(HYPERVISOR_grant_table_op(cmd, gop, 1));
> > > +             if (*status == GNTST_eagain)
> > > +                     msleep(delay++);
> > > +     } while ((*status == GNTST_eagain) && (delay < MAX_DELAY));
> > > +
> > > +     if (delay >= MAX_DELAY) {
> > > +             printk(KERN_ERR "%s: %s eagain grant\n", func,
> > current->comm);
> > > +             *status = GNTST_bad_page;
> > > +     }
> > > +}
> > > +EXPORT_SYMBOL_GPL(gnttab_retry_eagain_gop);
> > > +
> > >  int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
> > >                   struct gnttab_map_grant_ref *kmap_ops,
> > >                   struct page **pages, unsigned int count)
> > > @@ -836,6 +857,11 @@ int gnttab_map_refs(struct gnttab_map_grant_ref
> > *map_ops,
> > >       if (ret)
> > >               return ret;
> > >
> > > +     /* Retry eagain maps */
> > > +     for (i = 0; i < count; i++)
> > > +             if (map_ops[i].status == GNTST_eagain)
> > > +                     gnttab_retry_eagain_map(map_ops + i);
> > > +
> > >       if (xen_feature(XENFEAT_auto_translated_physmap))
> > >               return ret;
> > >
> > > diff --git a/drivers/xen/xenbus/xenbus_client.c
> > b/drivers/xen/xenbus/xenbus_client.c
> > > index b3e146e..749f6a3 100644
> > > --- a/drivers/xen/xenbus/xenbus_client.c
> > > +++ b/drivers/xen/xenbus/xenbus_client.c
> > > @@ -490,8 +490,7 @@ static int xenbus_map_ring_valloc_pv(struct
> > xenbus_device *dev,
> > >
> > >       op.host_addr = arbitrary_virt_to_machine(pte).maddr;
> > >
> > > -     if (HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, &op, 1))
> > > -             BUG();
> > > +     gnttab_map_grant_no_eagain(&op);
> > >
> > >       if (op.status != GNTST_okay) {
> > >               free_vm_area(area);
> > > @@ -572,8 +571,7 @@ int xenbus_map_ring(struct xenbus_device *dev, int
> > gnt_ref,
> > >       gnttab_set_map_op(&op, (unsigned long)vaddr, GNTMAP_host_map,
> > gnt_ref,
> > >                         dev->otherend_id);
> > >
> > > -     if (HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, &op, 1))
> > > -             BUG();
> > > +     gnttab_map_grant_no_eagain(&op);
> > >
> > >       if (op.status != GNTST_okay) {
> > >               xenbus_dev_fatal(dev, op.status,
> > > diff --git a/include/xen/grant_table.h b/include/xen/grant_table.h
> > > index 11e27c3..2fecfab 100644
> > > --- a/include/xen/grant_table.h
> > > +++ b/include/xen/grant_table.h
> > > @@ -43,6 +43,7 @@
> > >  #include <xen/interface/grant_table.h>
> > >
> > >  #include <asm/xen/hypervisor.h>
> > > +#include <asm/xen/hypercall.h>
> > >
> > >  #include <xen/features.h>
> > >
> > > @@ -183,6 +184,43 @@ unsigned int gnttab_max_grant_frames(void);
> > >
> > >  #define gnttab_map_vaddr(map) ((void *)(map.host_virt_addr))
> > >
> > > +/* Retry a grant map/copy operation when the hypervisor returns
> > GNTST_eagain.
> > > + * This is typically due to paged out target frames.
> > > + * Generic entry-point, use macro decorators below for specific grant
> > > + * operations.
> > > + * Will retry for 1, 2, ... 255 ms, i.e. 256 times during 32 seconds.
> > > + * Return value in *status guaranteed to no longer be GNTST_eagain. */
> > > +void gnttab_retry_eagain_gop(unsigned int cmd, void *gop, int16_t
> > *status,
> > > +                             const char *func);
> > > +
> > > +#define gnttab_retry_eagain_map(_gop)                       \
> > > +    gnttab_retry_eagain_gop(GNTTABOP_map_grant_ref, (_gop), \
> > > +                            &(_gop)->status, __func__)
> > > +
> > > +#define gnttab_retry_eagain_copy(_gop)                  \
> > > +    gnttab_retry_eagain_gop(GNTTABOP_copy, (_gop),      \
> > > +                            &(_gop)->status, __func__)
> > > +
> > > +#define gnttab_map_grant_no_eagain(_gop)
> >      \
> > > +do {
> >      \
> > > +    if (HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, (_gop), 1))
> >     \
> > > +        BUG();
> >      \
> > > +    if ((_gop)->status == GNTST_eagain)
> >     \
> > > +        gnttab_retry_eagain_map((_gop));
> >      \
> > > +} while(0)
> > > +
> > > +static inline void
> > > +gnttab_batch_copy_no_eagain(struct gnttab_copy *batch, unsigned count)
> > > +{
> > > +    unsigned i;
> > > +    struct gnttab_copy *op;
> > > +
> > > +    BUG_ON(HYPERVISOR_grant_table_op(GNTTABOP_copy, batch, count));
> > > +    for (i = 0, op = batch; i < count; i++, op++)
> > > +        if (op->status == GNTST_eagain)
> > > +            gnttab_retry_eagain_copy(op);
> > > +}
> > > +
> > >  int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
> > >                   struct gnttab_map_grant_ref *kmap_ops,
> > >                   struct page **pages, unsigned int count);
> > > diff --git a/include/xen/interface/grant_table.h
> > b/include/xen/interface/grant_table.h
> > > index 7da811b..66cb734 100644
> > > --- a/include/xen/interface/grant_table.h
> > > +++ b/include/xen/interface/grant_table.h
> > > @@ -520,6 +520,7 @@ DEFINE_GUEST_HANDLE_STRUCT(gnttab_get_version);
> > >  #define GNTST_permission_denied (-8) /* Not enough privilege for
> > operation.  */
> > >  #define GNTST_bad_page         (-9) /* Specified page was invalid for
> > op.    */
> > >  #define GNTST_bad_copy_arg    (-10) /* copy arguments cross page
> > boundary */
> > > +#define GNTST_eagain          (-12) /* Retry.
> >      */
> > >
> > >  #define GNTTABOP_error_msgs {                   \
> > >      "okay",                                     \
> > > @@ -533,6 +534,7 @@ DEFINE_GUEST_HANDLE_STRUCT(gnttab_get_version);
> > >      "permission denied",                        \
> > >      "bad page",                                 \
> > > -    "copy arguments cross page boundary"        \
> > > +    "copy arguments cross page boundary",       \
> > > +    "retry"                                     \
> > >  }
> > >
> > >  #endif /* __XEN_PUBLIC_GRANT_TABLE_H__ */
> > >
> > >
> > > On Aug 31, 2012, at 10:45 AM, Andres Lagar-Cavilla wrote:
> > >
> > > >
> > > > On Aug 31, 2012, at 10:32 AM, David Vrabel wrote:
> > > >
> > > >> On 27/08/12 17:51, andres@lagarcavilla.org wrote:
> > > >>> From: Andres Lagar-Cavilla <andres@lagarcavilla.org>
> > > >>>
> > > >>> Since Xen-4.2, hvm domains may have portions of their memory paged
> > out. When a
> > > >>> foreign domain (such as dom0) attempts to map these frames, the map
> > will
> > > >>> initially fail. The hypervisor returns a suitable errno, and kicks an
> > > >>> asynchronous page-in operation carried out by a helper. The foreign
> > domain is
> > > >>> expected to retry the mapping operation until it eventually
> > succeeds. The
> > > >>> foreign domain is not put to sleep because it could itself be the one
> > running the
> > > >>> pager assist (typical scenario for dom0).
> > > >>>
> > > >>> This patch adds support for this mechanism for backend drivers using
> > grant
> > > >>> mapping and copying operations. Specifically, this covers the
> > blkback and
> > > >>> gntdev drivers (which map foreign grants), and the netback driver
> > (which copies
> > > >>> foreign grants).
> > > >>>
> > > >>> * Add GNTST_eagain, already exposed by Xen, to the grant interface.
> > > >>> * Add a retry method for grants that fail with GNTST_eagain (i.e.
> > because the
> > > >>> target foreign frame is paged out).
> > > >>> * Insert hooks with appropriate macro decorators in the
> > aforementioned drivers.
> > > >>
> > > >> I think you should implement wrappers around
> > HYPERVISOR_grant_table_op()
> > > >> have have the wrapper do the retries instead of every backend having
> > to
> > > >> check for EAGAIN and issue the retries itself. Similar to the
> > > >> gnttab_map_grant_no_eagain() function you've already added.
> > > >>
> > > >> Why do some operations not retry anyway?
> > > >
> > > > All operations retry. The reason I could not make it as elegant as
> > you suggest is that grant operations are submitted in batches and their
> > status(es?) later checked individually elsewhere. This is the case for
> > netback. Note that both blkback and gntdev use a more linear structure with
> > the gnttab_map_refs helper, which allows me to hide all the retry gore from
> > those drivers into grant table code. Likewise for xenbus ring mapping.
> > > >
> > > > In summary, outside of core grant table code, only the netback driver
> > needs to check explicitly for retries, due to its
> > batch-copy-delayed-per-slot-check structure.
> > > >
> > > >>
> > > >>> +void
> > > >>> +gnttab_retry_eagain_gop(unsigned int cmd, void *gop, int16_t
> > *status,
> > > >>> +                                         const char *func)
> > > >>> +{
> > > >>> + u8 delay = 1;
> > > >>> +
> > > >>> + do {
> > > >>> +         BUG_ON(HYPERVISOR_grant_table_op(cmd, gop, 1));
> > > >>> +         if (*status == GNTST_eagain)
> > > >>> +                 msleep(delay++);
> > > >>> + } while ((*status == GNTST_eagain) && delay);
> > > >>
> > > >> Terminating the loop when delay wraps is a bit subtle.  Why not make
> > > >> delay unsigned and check delay <= MAX_DELAY?
> > > > Good idea (MAX_DELAY == 256). I'd like to get Konrad's feedback before
> > a re-spin.
> > > >
> > > >>
> > > >> Would it be sensible to ramp the delay faster?  Perhaps double each
> > > >> iteration with a maximum possible delay of e.g., 256 ms.
> > > > Generally speaking we've never seen past three retries. I am open to
> > changing the algorithm but there is a significant possibility it won't
> > matter at all.
> > > >
> > > >>
> > > >>> +#define gnttab_map_grant_no_eagain(_gop)
> >          \
> > > >>> +do {
> >          \
> > > >>> +    if ( HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, (_gop),
> > 1))      \
> > > >>> +        BUG();
> >          \
> > > >>> +    if ((_gop)->status == GNTST_eagain)
> >         \
> > > >>> +        gnttab_retry_eagain_map((_gop));
> >          \
> > > >>> +} while(0)
> > > >>
> > > >> Inline functions, please.
> > > >
> > > > I want to retain the original context for debugging. Eventually we
> > print __func__ if things go wrong.
> > > >
> > > > Thanks, great feedback
> > > > Andres
> > > >
> > > >>
> > > >> David
> > > >
> > >
> >

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 21:17:31 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 21:17:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7YaW-00074h-1o; Fri, 31 Aug 2012 21:17:12 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dan.magenheimer@oracle.com>) id 1T7YaU-00074X-0T
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 21:17:10 +0000
Received: from [85.158.143.99:2111] by server-2.bemta-4.messagelabs.com id
	E8/2C-21239-5D921405; Fri, 31 Aug 2012 21:17:09 +0000
X-Env-Sender: dan.magenheimer@oracle.com
X-Msg-Ref: server-16.tower-216.messagelabs.com!1346447827!16710339!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDczMTQzMA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27385 invoked from network); 31 Aug 2012 21:17:08 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-16.tower-216.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 31 Aug 2012 21:17:08 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7VLGu63006144
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 31 Aug 2012 21:16:57 GMT
Received: from acsmt357.oracle.com (acsmt357.oracle.com [141.146.40.157])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7VLGuA5018596
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 31 Aug 2012 21:16:56 GMT
Received: from abhmt117.oracle.com (abhmt117.oracle.com [141.146.116.69])
	by acsmt357.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7VLGuwv013629; Fri, 31 Aug 2012 16:16:56 -0500
MIME-Version: 1.0
Message-ID: <da3e1ce8-0fcf-4c6f-88f9-cea859fe9ec1@default>
Date: Fri, 31 Aug 2012 14:16:18 -0700 (PDT)
From: Dan Magenheimer <dan.magenheimer@oracle.com>
To: Konrad Wilk <konrad.wilk@oracle.com>
References: <e927526f-b096-43da-a3b1-57d84daea825@default>
	<20120831211325.GB20594@localhost.localdomain>
In-Reply-To: <20120831211325.GB20594@localhost.localdomain>
X-Priority: 3
X-Mailer: Oracle Beehive Extensions for Outlook 2.0.1.7  (607090) [OL
	12.0.6661.5003 (x86)]
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] xend/xm on 4.1/4.2 on Fedora (FC17)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> From: Konrad Rzeszutek Wilk
> Subject: Re: xend/xm on 4.1/4.2 on Fedora (FC17)
> 
> On Fri, Aug 31, 2012 at 02:08:49PM -0700, Dan Magenheimer wrote:
> > Is there a how-to for starting/running xm/xend on Fedora (FC17)?
> > Is it different for Xen 4.1 and 4.2?
> >
> > I did find this:
> > http://wiki.xen.org/wiki/Xen_Common_Problems#Starting_xend_fails.3F
> > but it doesn't seem to help.  And this:
> > http://wiki.xen.org/wiki/Fedora_Host_Installation
> > only addresses xl.
> >
> > I expect I need to do something manually to start xencommons or
> > something like that but obvious things don't seem to work,
> 
> How are you running this? When you boot up does it work? Or is this not
> working after you restart xend a couple of times?
> 
> > and I'm not a FC17 expert at all.
> 
> service xend start
> 
> But you also need to enable it if it wasn't enabled using systemd.
> The syntax was something like (look at
> http://fedoraproject.org/wiki/SysVinit_to_Systemd_Cheatsheet)
> 
> systemctl enable xend.service
> 
> (though it might not be called xend but something else).

That was one of the obvious things I tried, but it fails to start :-/

Dan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 21:45:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 21:45:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7Z1o-0007P7-G3; Fri, 31 Aug 2012 21:45:24 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1T7Z1m-0007P2-E1
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 21:45:22 +0000
Received: from [85.158.143.35:46307] by server-2.bemta-4.messagelabs.com id
	E8/25-21239-17031405; Fri, 31 Aug 2012 21:45:21 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-4.tower-21.messagelabs.com!1346449517!5712015!1
X-Originating-IP: [188.40.164.121]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30321 invoked from network); 31 Aug 2012 21:45:18 -0000
Received: from static.121.164.40.188.clients.your-server.de (HELO
	smtp.eikelenboom.it) (188.40.164.121)
	by server-4.tower-21.messagelabs.com with AES256-SHA encrypted SMTP;
	31 Aug 2012 21:45:18 -0000
Received: from 26-69-ftth.onsneteindhoven.nl ([88.159.69.26]:50422
	helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.72) (envelope-from <linux@eikelenboom.it>)
	id 1T7Yyb-0008OC-IH; Fri, 31 Aug 2012 23:42:05 +0200
Date: Fri, 31 Aug 2012 23:45:12 +0200
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <647712821.20120831234512@eikelenboom.it>
To: Santosh Jodh <santosh.jodh@citrix.com>, wei.wang2@amd.com
MIME-Version: 1.0
Content-Type: multipart/mixed;
 boundary="----------01015A0E4152274D9"
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: [Xen-devel] Using debug-key 'o:  Dump IOMMU p2m table,
	locks up machine
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

------------01015A0E4152274D9
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit


I was trying to use the 'o' debug key to make a bug report about an "AMD-Vi: IO_PAGE_FAULT".

The result:
- When using "xl debug-keys o", the machine seems to be in an infinite loop; I can hardly log in, and it eventually ends in a kernel RCU stall and a complete lockup.
- When using the serial console, I get an infinite stream of "gfn:  mfn: " lines; meanwhile, on the normal console, the S-ATA devices start to give errors.

So either option trashes the machine; other debug keys work fine.

The machine has an 890FX chipset and an AMD Phenom X6 processor.

The xl dmesg output from boot, along with output from some other debug keys, is attached.

--

Sander
------------01015A0E4152274D9
Content-Type: text/plain;
 name="xl-dmesg.txt"
Content-transfer-encoding: base64
Content-Disposition: attachment;
 filename="xl-dmesg.txt"

[base64-encoded attachment "xl-dmesg.txt": Xen 4.2.0-rc4-pre boot log (xl dmesg output); encoded content omitted]
ZyBnbG9iYWwgdmVjdG9yIG1hcA0KKFhFTikgSS9PIHZpcnR1YWxpc2F0aW9uIGVuYWJsZWQN
CihYRU4pICAtIERvbTAgbW9kZTogUmVsYXhlZA0KKFhFTikgR2V0dGluZyBWRVJTSU9OOiA4
MDA1MDAxMA0KKFhFTikgR2V0dGluZyBWRVJTSU9OOiA4MDA1MDAxMA0KKFhFTikgR2V0dGlu
ZyBJRDogMA0KKFhFTikgR2V0dGluZyBMVlQwOiA3MDANCihYRU4pIEdldHRpbmcgTFZUMTog
NDAwDQooWEVOKSBlbmFibGVkIEV4dElOVCBvbiBDUFUjMA0KKFhFTikgRVNSIHZhbHVlIGJl
Zm9yZSBlbmFibGluZyB2ZWN0b3I6IDB4MDAwMDAwMDQgIGFmdGVyOiAweDAwMDAwMDAwDQoo
WEVOKSBFTkFCTElORyBJTy1BUElDIElSUXMNCihYRU4pICAtPiBVc2luZyBuZXcgQUNLIG1l
dGhvZA0KKFhFTikgaW5pdCBJT19BUElDIElSUXMNCihYRU4pICBJTy1BUElDIChhcGljaWQt
cGluKSA2LTAsIDYtMTYsIDYtMTcsIDYtMTgsIDYtMTksIDYtMjAsIDYtMjEsIDYtMjIsIDYt
MjMsIDctMCwgNy0xLCA3LTIsIDctMywgNy00LCA3LTUsIDctNiwgNy03LCA3LTgsIDctOSwg
Ny0xMCwgNy0xMSwgNy0xMiwgNy0xMywgNy0xNCwgNy0xNSwgNy0xNiwgNy0xNywgNy0xOCwg
Ny0xOSwgNy0yMCwgNy0yMSwgNy0yMiwgNy0yMywgNy0yNCwgNy0yNSwgNy0yNiwgNy0yNywg
Ny0yOCwgNy0yOSwgNy0zMCwgNy0zMSBub3QgY29ubmVjdGVkLg0KKFhFTikgLi5USU1FUjog
dmVjdG9yPTB4RjAgYXBpYzE9MCBwaW4xPTIgYXBpYzI9LTEgcGluMj0tMQ0KKFhFTikgbnVt
YmVyIG9mIE1QIElSUSBzb3VyY2VzOiAxNS4NCihYRU4pIG51bWJlciBvZiBJTy1BUElDICM2
IHJlZ2lzdGVyczogMjQuDQooWEVOKSBudW1iZXIgb2YgSU8tQVBJQyAjNyByZWdpc3RlcnM6
IDMyLg0KKFhFTikgdGVzdGluZyB0aGUgSU8gQVBJQy4uLi4uLi4uLi4uLi4uLi4uLi4uLi4u
DQooWEVOKSBJTyBBUElDICM2Li4uLi4uDQooWEVOKSAuLi4uIHJlZ2lzdGVyICMwMDogMDYw
MDAwMDANCihYRU4pIC4uLi4uLi4gICAgOiBwaHlzaWNhbCBBUElDIGlkOiAwNg0KKFhFTikg
Li4uLi4uLiAgICA6IERlbGl2ZXJ5IFR5cGU6IDANCihYRU4pIC4uLi4uLi4gICAgOiBMVFMg
ICAgICAgICAgOiAwDQooWEVOKSAuLi4uIHJlZ2lzdGVyICMwMTogMDAxNzgwMjENCihYRU4p
IC4uLi4uLi4gICAgIDogbWF4IHJlZGlyZWN0aW9uIGVudHJpZXM6IDAwMTcNCihYRU4pIC4u
Li4uLi4gICAgIDogUFJRIGltcGxlbWVudGVkOiAxDQooWEVOKSAuLi4uLi4uICAgICA6IElP
IEFQSUMgdmVyc2lvbjogMDAyMQ0KKFhFTikgLi4uLiByZWdpc3RlciAjMDI6IDA2MDAwMDAw
DQooWEVOKSAuLi4uLi4uICAgICA6IGFyYml0cmF0aW9uOiAwNg0KKFhFTikgLi4uLiByZWdp
c3RlciAjMDM6IDA3MDAwMDAwDQooWEVOKSAuLi4uLi4uICAgICA6IEJvb3QgRFQgICAgOiAw
DQooWEVOKSAuLi4uIElSUSByZWRpcmVjdGlvbiB0YWJsZToNCihYRU4pICBOUiBMb2cgUGh5
IE1hc2sgVHJpZyBJUlIgUG9sIFN0YXQgRGVzdCBEZWxpIFZlY3Q6ICAgDQooWEVOKSAgMDAg
MDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAwMA0KKFhFTikgIDAx
IDAwMSAwMSAgMCAgICAwICAgIDAgICAwICAgMCAgICAxICAgIDEgICAgMzANCihYRU4pICAw
MiAwMDEgMDEgIDAgICAgMCAgICAwICAgMCAgIDAgICAgMSAgICAxICAgIEYwDQooWEVOKSAg
MDMgMDAxIDAxICAwICAgIDAgICAgMCAgIDAgICAwICAgIDEgICAgMSAgICAzOA0KKFhFTikg
IDA0IDAwMSAwMSAgMCAgICAwICAgIDAgICAwICAgMCAgICAxICAgIDEgICAgRjENCihYRU4p
ICAwNSAwMDEgMDEgIDAgICAgMCAgICAwICAgMCAgIDAgICAgMSAgICAxICAgIDQwDQooWEVO
KSAgMDYgMDAxIDAxICAwICAgIDAgICAgMCAgIDAgICAwICAgIDEgICAgMSAgICA0OA0KKFhF
TikgIDA3IDAwMSAwMSAgMCAgICAwICAgIDAgICAwICAgMCAgICAxICAgIDEgICAgNTANCihY
RU4pICAwOCAwMDEgMDEgIDAgICAgMCAgICAwICAgMCAgIDAgICAgMSAgICAxICAgIDU4DQoo
WEVOKSAgMDkgMDAxIDAxICAxICAgIDEgICAgMCAgIDEgICAwICAgIDEgICAgMSAgICA2MA0K
KFhFTikgIDBhIDAwMSAwMSAgMCAgICAwICAgIDAgICAwICAgMCAgICAxICAgIDEgICAgNjgN
CihYRU4pICAwYiAwMDEgMDEgIDAgICAgMCAgICAwICAgMCAgIDAgICAgMSAgICAxICAgIDcw
DQooWEVOKSAgMGMgMDAxIDAxICAwICAgIDAgICAgMCAgIDAgICAwICAgIDEgICAgMSAgICA3
OA0KKFhFTikgIDBkIDAwMSAwMSAgMCAgICAwICAgIDAgICAwICAgMCAgICAxICAgIDEgICAg
ODgNCihYRU4pICAwZSAwMDEgMDEgIDAgICAgMCAgICAwICAgMCAgIDAgICAgMSAgICAxICAg
IDkwDQooWEVOKSAgMGYgMDAxIDAxICAwICAgIDAgICAgMCAgIDAgICAwICAgIDEgICAgMSAg
ICA5OA0KKFhFTikgIDEwIDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAgICAwICAgIDAg
ICAgMDANCihYRU4pICAxMSAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAw
ICAgIDAwDQooWEVOKSAgMTIgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAg
MCAgICAwMA0KKFhFTikgIDEzIDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAgICAwICAg
IDAgICAgMDANCihYRU4pICAxNCAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAg
ICAwICAgIDAwDQooWEVOKSAgMTUgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAg
ICAgMCAgICAwMA0KKFhFTikgIDE2IDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAgICAw
ICAgIDAgICAgMDANCihYRU4pICAxNyAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAg
MCAgICAwICAgIDAwDQooWEVOKSBJTyBBUElDICM3Li4uLi4uDQooWEVOKSAuLi4uIHJlZ2lz
dGVyICMwMDogMDcwMDAwMDANCihYRU4pIC4uLi4uLi4gICAgOiBwaHlzaWNhbCBBUElDIGlk
OiAwNw0KKFhFTikgLi4uLi4uLiAgICA6IERlbGl2ZXJ5IFR5cGU6IDANCihYRU4pIC4uLi4u
Li4gICAgOiBMVFMgICAgICAgICAgOiAwDQooWEVOKSAuLi4uIHJlZ2lzdGVyICMwMTogMDAx
RjgwMjENCihYRU4pIC4uLi4uLi4gICAgIDogbWF4IHJlZGlyZWN0aW9uIGVudHJpZXM6IDAw
MUYNCihYRU4pIC4uLi4uLi4gICAgIDogUFJRIGltcGxlbWVudGVkOiAxDQooWEVOKSAuLi4u
Li4uICAgICA6IElPIEFQSUMgdmVyc2lvbjogMDAyMQ0KKFhFTikgLi4uLiByZWdpc3RlciAj
MDI6IDAwMDAwMDAwDQooWEVOKSAuLi4uLi4uICAgICA6IGFyYml0cmF0aW9uOiAwMA0KKFhF
TikgLi4uLiBJUlEgcmVkaXJlY3Rpb24gdGFibGU6DQooWEVOKSAgTlIgTG9nIFBoeSBNYXNr
IFRyaWcgSVJSIFBvbCBTdGF0IERlc3QgRGVsaSBWZWN0OiAgIA0KKFhFTikgIDAwIDAwMCAw
MCAgMSAgICAwICAgIDAgICAwICAgMCAgICAwICAgIDAgICAgMDANCihYRU4pICAwMSAwMDAg
MDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAwICAgIDAwDQooWEVOKSAgMDIgMDAw
IDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAwMA0KKFhFTikgIDAzIDAw
MCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAgICAwICAgIDAgICAgMDANCihYRU4pICAwNCAw
MDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAwICAgIDAwDQooWEVOKSAgMDUg
MDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAwMA0KKFhFTikgIDA2
IDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAgICAwICAgIDAgICAgMDANCihYRU4pICAw
NyAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAwICAgIDAwDQooWEVOKSAg
MDggMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAwMA0KKFhFTikg
IDA5IDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAgICAwICAgIDAgICAgMDANCihYRU4p
ICAwYSAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAwICAgIDAwDQooWEVO
KSAgMGIgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAwMA0KKFhF
TikgIDBjIDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAgICAwICAgIDAgICAgMDANCihY
RU4pICAwZCAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAwICAgIDAwDQoo
WEVOKSAgMGUgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAwMA0K
KFhFTikgIDBmIDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAgICAwICAgIDAgICAgMDAN
CihYRU4pICAxMCAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAwICAgIDAw
DQooWEVOKSAgMTEgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAw
MA0KKFhFTikgIDEyIDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAgICAwICAgIDAgICAg
MDANCihYRU4pICAxMyAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAwICAg
IDAwDQooWEVOKSAgMTQgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAgMCAg
ICAwMA0KKFhFTikgIDE1IDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAgICAwICAgIDAg
ICAgMDANCihYRU4pICAxNiAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAw
ICAgIDAwDQooWEVOKSAgMTcgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAg
MCAgICAwMA0KKFhFTikgIDE4IDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAgICAwICAg
IDAgICAgMDANCihYRU4pICAxOSAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAg
ICAwICAgIDAwDQooWEVOKSAgMWEgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAg
ICAgMCAgICAwMA0KKFhFTikgIDFiIDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAgICAw
ICAgIDAgICAgMDANCihYRU4pICAxYyAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAg
MCAgICAwICAgIDAwDQooWEVOKSAgMWQgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAg
IDAgICAgMCAgICAwMA0KKFhFTikgIDFlIDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAg
ICAwICAgIDAgICAgMDANCihYRU4pICAxZiAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAg
ICAgMCAgICAwICAgIDAwDQooWEVOKSBVc2luZyB2ZWN0b3ItYmFzZWQgaW5kZXhpbmcNCihY
RU4pIElSUSB0byBwaW4gbWFwcGluZ3M6DQooWEVOKSBJUlEyNDAgLT4gMDoyDQooWEVOKSBJ
UlE0OCAtPiAwOjENCihYRU4pIElSUTU2IC0+IDA6Mw0KKFhFTikgSVJRMjQxIC0+IDA6NA0K
KFhFTikgSVJRNjQgLT4gMDo1DQooWEVOKSBJUlE3MiAtPiAwOjYNCihYRU4pIElSUTgwIC0+
IDA6Nw0KKFhFTikgSVJRODggLT4gMDo4DQooWEVOKSBJUlE5NiAtPiAwOjkNCihYRU4pIElS
UTEwNCAtPiAwOjEwDQooWEVOKSBJUlExMTIgLT4gMDoxMQ0KKFhFTikgSVJRMTIwIC0+IDA6
MTINCihYRU4pIElSUTEzNiAtPiAwOjEzDQooWEVOKSBJUlExNDQgLT4gMDoxNA0KKFhFTikg
SVJRMTUyIC0+IDA6MTUNCihYRU4pIC4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4u
Li4uLiBkb25lLg0KKFhFTikgVXNpbmcgbG9jYWwgQVBJQyB0aW1lciBpbnRlcnJ1cHRzLg0K
KFhFTikgY2FsaWJyYXRpbmcgQVBJQyB0aW1lciAuLi4NCihYRU4pIC4uLi4uIENQVSBjbG9j
ayBzcGVlZCBpcyAzMjAwLjE2MDQgTUh6Lg0KKFhFTikgLi4uLi4gaG9zdCBidXMgY2xvY2sg
c3BlZWQgaXMgMjAwLjAwOTggTUh6Lg0KKFhFTikgLi4uLi4gYnVzX3NjYWxlID0gMHgwMDAw
Q0NENw0KKFhFTikgWzIwMTItMDgtMzEgMjA6NTI6MjVdIFBsYXRmb3JtIHRpbWVyIGlzIDE0
LjMxOE1IeiBIUEVUDQooWEVOKSBbMjAxMi0wOC0zMSAyMDo1MjoyNV0gQWxsb2NhdGVkIGNv
bnNvbGUgcmluZyBvZiA2NCBLaUIuDQooWEVOKSBbMjAxMi0wOC0zMSAyMDo1MjoyNV0gSFZN
OiBBU0lEcyBlbmFibGVkLg0KKFhFTikgWzIwMTItMDgtMzEgMjA6NTI6MjVdIFNWTTogU3Vw
cG9ydGVkIGFkdmFuY2VkIGZlYXR1cmVzOg0KKFhFTikgWzIwMTItMDgtMzEgMjA6NTI6MjVd
ICAtIE5lc3RlZCBQYWdlIFRhYmxlcyAoTlBUKQ0KKFhFTikgWzIwMTItMDgtMzEgMjA6NTI6
MjVdICAtIExhc3QgQnJhbmNoIFJlY29yZCAoTEJSKSBWaXJ0dWFsaXNhdGlvbg0KKFhFTikg
WzIwMTItMDgtMzEgMjA6NTI6MjVdICAtIE5leHQtUklQIFNhdmVkIG9uICNWTUVYSVQNCihY
RU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI1XSAgLSBQYXVzZS1JbnRlcmNlcHQgRmlsdGVyDQoo
WEVOKSBbMjAxMi0wOC0zMSAyMDo1MjoyNV0gSFZNOiBTVk0gZW5hYmxlZA0KKFhFTikgWzIw
MTItMDgtMzEgMjA6NTI6MjVdIEhWTTogSGFyZHdhcmUgQXNzaXN0ZWQgUGFnaW5nIChIQVAp
IGRldGVjdGVkDQooWEVOKSBbMjAxMi0wOC0zMSAyMDo1MjoyNV0gSFZNOiBIQVAgcGFnZSBz
aXplczogNGtCLCAyTUIsIDFHQg0KKFhFTikgWzIwMTItMDgtMzEgMjA6NTI6MjRdIG1hc2tl
ZCBFeHRJTlQgb24gQ1BVIzENCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI1XSBtaWNyb2Nv
ZGU6IGNvbGxlY3RfY3B1X2luZm86IHBhdGNoX2lkPTB4MTAwMDBiZg0KKFhFTikgWzIwMTIt
MDgtMzEgMjA6NTI6MjRdIG1hc2tlZCBFeHRJTlQgb24gQ1BVIzINCihYRU4pIFsyMDEyLTA4
LTMxIDIwOjUyOjI1XSBtaWNyb2NvZGU6IGNvbGxlY3RfY3B1X2luZm86IHBhdGNoX2lkPTB4
MTAwMDBiZg0KKFhFTikgWzIwMTItMDgtMzEgMjA6NTI6MjRdIG1hc2tlZCBFeHRJTlQgb24g
Q1BVIzMNCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI1XSBtaWNyb2NvZGU6IGNvbGxlY3Rf
Y3B1X2luZm86IHBhdGNoX2lkPTB4MTAwMDBiZg0KKFhFTikgWzIwMTItMDgtMzEgMjA6NTI6
MjRdIG1hc2tlZCBFeHRJTlQgb24gQ1BVIzQNCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI1
XSBtaWNyb2NvZGU6IGNvbGxlY3RfY3B1X2luZm86IHBhdGNoX2lkPTB4MTAwMDBiZg0KKFhF
TikgWzIwMTItMDgtMzEgMjA6NTI6MjRdIG1hc2tlZCBFeHRJTlQgb24gQ1BVIzUNCihYRU4p
IFsyMDEyLTA4LTMxIDIwOjUyOjI1XSBCcm91Z2h0IHVwIDYgQ1BVcw0KKFhFTikgWzIwMTIt
MDgtMzEgMjA6NTI6MjVdIG1pY3JvY29kZTogY29sbGVjdF9jcHVfaW5mbzogcGF0Y2hfaWQ9
MHgxMDAwMGJmDQooWEVOKSBbMjAxMi0wOC0zMSAyMDo1MjoyNV0gSFBFVCdzIE1TSSBtb2Rl
IGhhc24ndCBiZWVuIHN1cHBvcnRlZCB3aGVuIEludGVycnVwdCBSZW1hcHBpbmcgaXMgZW5h
YmxlZC4NCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI1XSBBQ1BJIHNsZWVwIG1vZGVzOiBT
Mw0KKFhFTikgWzIwMTItMDgtMzEgMjA6NTI6MjVdIE1DQTogVXNlIGh3IHRocmVzaG9sZGlu
ZyB0byBhZGp1c3QgcG9sbGluZyBmcmVxdWVuY3kNCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUy
OjI1XSBtY2hlY2tfcG9sbDogTWFjaGluZSBjaGVjayBwb2xsaW5nIHRpbWVyIHN0YXJ0ZWQu
DQooWEVOKSBbMjAxMi0wOC0zMSAyMDo1MjoyNV0gWGVub3Byb2ZpbGU6IEZhaWxlZCB0byBz
ZXR1cCBJQlMgTFZUIG9mZnNldCwgSUJTQ1RMID0gMHhmZmZmZmZmZg0KKFhFTikgWzIwMTIt
MDgtMzEgMjA6NTI6MjVdICoqKiBMT0FESU5HIERPTUFJTiAwICoqKg0KKFhFTikgWzIwMTIt
MDgtMzEgMjA6NTI6MjVdIGVsZl9wYXJzZV9iaW5hcnk6IHBoZHI6IHBhZGRyPTB4MTAwMDAw
MCBtZW1zej0weGI2ODAwMA0KKFhFTikgWzIwMTItMDgtMzEgMjA6NTI6MjVdIGVsZl9wYXJz
ZV9iaW5hcnk6IHBoZHI6IHBhZGRyPTB4MWMwMDAwMCBtZW1zej0weGQ4MGU4DQooWEVOKSBb
MjAxMi0wOC0zMSAyMDo1MjoyNV0gZWxmX3BhcnNlX2JpbmFyeTogcGhkcjogcGFkZHI9MHgx
Y2Q5MDAwIG1lbXN6PTB4MTNjMDANCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI1XSBlbGZf
cGFyc2VfYmluYXJ5OiBwaGRyOiBwYWRkcj0weDFjZWQwMDAgbWVtc3o9MHhkZTQwMDANCihY
RU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI1XSBlbGZfcGFyc2VfYmluYXJ5OiBtZW1vcnk6IDB4
MTAwMDAwMCAtPiAweDJhZDEwMDANCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI1XSBlbGZf
eGVuX3BhcnNlX25vdGU6IEdVRVNUX09TID0gImxpbnV4Ig0KKFhFTikgWzIwMTItMDgtMzEg
MjA6NTI6MjVdIGVsZl94ZW5fcGFyc2Vfbm90ZTogR1VFU1RfVkVSU0lPTiA9ICIyLjYiDQoo
WEVOKSBbMjAxMi0wOC0zMSAyMDo1MjoyNV0gZWxmX3hlbl9wYXJzZV9ub3RlOiBYRU5fVkVS
U0lPTiA9ICJ4ZW4tMy4wIg0KKFhFTikgWzIwMTItMDgtMzEgMjA6NTI6MjVdIGVsZl94ZW5f
cGFyc2Vfbm90ZTogVklSVF9CQVNFID0gMHhmZmZmZmZmZjgwMDAwMDAwDQooWEVOKSBbMjAx
Mi0wOC0zMSAyMDo1MjoyNV0gZWxmX3hlbl9wYXJzZV9ub3RlOiBFTlRSWSA9IDB4ZmZmZmZm
ZmY4MWNlZDIxMA0KKFhFTikgWzIwMTItMDgtMzEgMjA6NTI6MjVdIGVsZl94ZW5fcGFyc2Vf
bm90ZTogSFlQRVJDQUxMX1BBR0UgPSAweGZmZmZmZmZmODEwMDEwMDANCihYRU4pIFsyMDEy
LTA4LTMxIDIwOjUyOjI1XSBlbGZfeGVuX3BhcnNlX25vdGU6IEZFQVRVUkVTID0gIiF3cml0
YWJsZV9wYWdlX3RhYmxlc3xwYWVfcGdkaXJfYWJvdmVfNGdiIg0KKFhFTikgWzIwMTItMDgt
MzEgMjA6NTI6MjVdIGVsZl94ZW5fcGFyc2Vfbm90ZTogUEFFX01PREUgPSAieWVzIg0KKFhF
TikgWzIwMTItMDgtMzEgMjA6NTI6MjVdIGVsZl94ZW5fcGFyc2Vfbm90ZTogTE9BREVSID0g
ImdlbmVyaWMiDQooWEVOKSBbMjAxMi0wOC0zMSAyMDo1MjoyNV0gZWxmX3hlbl9wYXJzZV9u
b3RlOiB1bmtub3duIHhlbiBlbGYgbm90ZSAoMHhkKQ0KKFhFTikgWzIwMTItMDgtMzEgMjA6
NTI6MjVdIGVsZl94ZW5fcGFyc2Vfbm90ZTogU1VTUEVORF9DQU5DRUwgPSAweDENCihYRU4p
IFsyMDEyLTA4LTMxIDIwOjUyOjI1XSBlbGZfeGVuX3BhcnNlX25vdGU6IEhWX1NUQVJUX0xP
VyA9IDB4ZmZmZjgwMDAwMDAwMDAwMA0KKFhFTikgWzIwMTItMDgtMzEgMjA6NTI6MjVdIGVs
Zl94ZW5fcGFyc2Vfbm90ZTogUEFERFJfT0ZGU0VUID0gMHgwDQooWEVOKSBbMjAxMi0wOC0z
MSAyMDo1MjoyNV0gZWxmX3hlbl9hZGRyX2NhbGNfY2hlY2s6IGFkZHJlc3NlczoNCihYRU4p
IFsyMDEyLTA4LTMxIDIwOjUyOjI1XSAgICAgdmlydF9iYXNlICAgICAgICA9IDB4ZmZmZmZm
ZmY4MDAwMDAwMA0KKFhFTikgWzIwMTItMDgtMzEgMjA6NTI6MjVdICAgICBlbGZfcGFkZHJf
b2Zmc2V0ID0gMHgwDQooWEVOKSBbMjAxMi0wOC0zMSAyMDo1MjoyNV0gICAgIHZpcnRfb2Zm
c2V0ICAgICAgPSAweGZmZmZmZmZmODAwMDAwMDANCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUy
OjI1XSAgICAgdmlydF9rc3RhcnQgICAgICA9IDB4ZmZmZmZmZmY4MTAwMDAwMA0KKFhFTikg
WzIwMTItMDgtMzEgMjA6NTI6MjVdICAgICB2aXJ0X2tlbmQgICAgICAgID0gMHhmZmZmZmZm
ZjgyYWQxMDAwDQooWEVOKSBbMjAxMi0wOC0zMSAyMDo1MjoyNV0gICAgIHZpcnRfZW50cnkg
ICAgICAgPSAweGZmZmZmZmZmODFjZWQyMTANCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI1
XSAgICAgcDJtX2Jhc2UgICAgICAgICA9IDB4ZmZmZmZmZmZmZmZmZmZmZg0KKFhFTikgWzIw
MTItMDgtMzEgMjA6NTI6MjVdICBYZW4gIGtlcm5lbDogNjQtYml0LCBsc2IsIGNvbXBhdDMy
DQooWEVOKSBbMjAxMi0wOC0zMSAyMDo1MjoyNV0gIERvbTAga2VybmVsOiA2NC1iaXQsIFBB
RSwgbHNiLCBwYWRkciAweDEwMDAwMDAgLT4gMHgyYWQxMDAwDQooWEVOKSBbMjAxMi0wOC0z
MSAyMDo1MjoyNV0gUEhZU0lDQUwgTUVNT1JZIEFSUkFOR0VNRU5UOg0KKFhFTikgWzIwMTIt
MDgtMzEgMjA6NTI6MjVdICBEb20wIGFsbG9jLjogICAwMDAwMDAwMjQwMDAwMDAwLT4wMDAw
MDAwMjQ0MDAwMDAwICgyNDI0MjggcGFnZXMgdG8gYmUgYWxsb2NhdGVkKQ0KKFhFTikgWzIw
MTItMDgtMzEgMjA6NTI6MjVdICBJbml0LiByYW1kaXNrOiAwMDAwMDAwMjRmMmZjMDAwLT4w
MDAwMDAwMjRmZmZmODAwDQooWEVOKSBbMjAxMi0wOC0zMSAyMDo1MjoyNV0gVklSVFVBTCBN
RU1PUlkgQVJSQU5HRU1FTlQ6DQooWEVOKSBbMjAxMi0wOC0zMSAyMDo1MjoyNV0gIExvYWRl
ZCBrZXJuZWw6IGZmZmZmZmZmODEwMDAwMDAtPmZmZmZmZmZmODJhZDEwMDANCihYRU4pIFsy
MDEyLTA4LTMxIDIwOjUyOjI1XSAgSW5pdC4gcmFtZGlzazogZmZmZmZmZmY4MmFkMTAwMC0+
ZmZmZmZmZmY4MzdkNDgwMA0KKFhFTikgWzIwMTItMDgtMzEgMjA6NTI6MjVdICBQaHlzLU1h
Y2ggbWFwOiBmZmZmZmZmZjgzN2Q1MDAwLT5mZmZmZmZmZjgzOWQ1MDAwDQooWEVOKSBbMjAx
Mi0wOC0zMSAyMDo1MjoyNV0gIFN0YXJ0IGluZm86ICAgIGZmZmZmZmZmODM5ZDUwMDAtPmZm
ZmZmZmZmODM5ZDU0YjQNCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI1XSAgUGFnZSB0YWJs
ZXM6ICAgZmZmZmZmZmY4MzlkNjAwMC0+ZmZmZmZmZmY4MzlmNzAwMA0KKFhFTikgWzIwMTIt
MDgtMzEgMjA6NTI6MjVdICBCb290IHN0YWNrOiAgICBmZmZmZmZmZjgzOWY3MDAwLT5mZmZm
ZmZmZjgzOWY4MDAwDQooWEVOKSBbMjAxMi0wOC0zMSAyMDo1MjoyNV0gIFRPVEFMOiAgICAg
ICAgIGZmZmZmZmZmODAwMDAwMDAtPmZmZmZmZmZmODNjMDAwMDANCihYRU4pIFsyMDEyLTA4
LTMxIDIwOjUyOjI1XSAgRU5UUlkgQUREUkVTUzogZmZmZmZmZmY4MWNlZDIxMA0KKFhFTikg
WzIwMTItMDgtMzEgMjA6NTI6MjVdIERvbTAgaGFzIG1heGltdW0gNiBWQ1BVcw0KKFhFTikg
WzIwMTItMDgtMzEgMjA6NTI6MjVdIGVsZl9sb2FkX2JpbmFyeTogcGhkciAwIGF0IDB4ZmZm
ZmZmZmY4MTAwMDAwMCAtPiAweGZmZmZmZmZmODFiNjgwMDANCihYRU4pIFsyMDEyLTA4LTMx
IDIwOjUyOjI1XSBlbGZfbG9hZF9iaW5hcnk6IHBoZHIgMSBhdCAweGZmZmZmZmZmODFjMDAw
MDAgLT4gMHhmZmZmZmZmZjgxY2Q4MGU4DQooWEVOKSBbMjAxMi0wOC0zMSAyMDo1MjoyNV0g
ZWxmX2xvYWRfYmluYXJ5OiBwaGRyIDIgYXQgMHhmZmZmZmZmZjgxY2Q5MDAwIC0+IDB4ZmZm
ZmZmZmY4MWNlY2MwMA0KKFhFTikgWzIwMTItMDgtMzEgMjA6NTI6MjVdIGVsZl9sb2FkX2Jp
bmFyeTogcGhkciAzIGF0IDB4ZmZmZmZmZmY4MWNlZDAwMCAtPiAweGZmZmZmZmZmODFkODUw
MDANCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI2XSBTY3J1YmJpbmcgRnJlZSBSQU06IC4u
Li4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4u
Li4uLi4uLi4uLi4uLi4uZG9uZS4NCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI3XSBJbml0
aWFsIGxvdyBtZW1vcnkgdmlycSB0aHJlc2hvbGQgc2V0IGF0IDB4NDAwMCBwYWdlcy4NCihY
RU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI3XSBTdGQuIExvZ2xldmVsOiBBbGwNCihYRU4pIFsy
MDEyLTA4LTMxIDIwOjUyOjI3XSBHdWVzdCBMb2dsZXZlbDogQWxsDQooWEVOKSBbMjAxMi0w
OC0zMSAyMDo1MjoyN10gWGVuIGlzIHJlbGlucXVpc2hpbmcgVkdBIGNvbnNvbGUuDQooWEVO
KSBbMjAxMi0wOC0zMSAyMDo1MjoyOF0gKioqIFNlcmlhbCBpbnB1dCAtPiBET00wICh0eXBl
ICdDVFJMLWEnIHRocmVlIHRpbWVzIHRvIHN3aXRjaCBpbnB1dCB0byBYZW4pDQooWEVOKSBb
MjAxMi0wOC0zMSAyMDo1MjoyOF0gRnJlZWQgMjU2a0IgaW5pdCBtZW1vcnkuDQooWEVOKSBb
MjAxMi0wOC0zMSAyMDo1MjoyOF0gSU9BUElDWzBdOiBTZXQgUENJIHJvdXRpbmcgZW50cnkg
KDYtOSAtPiAweDYwIC0+IElSUSA5IE1vZGU6MSBBY3RpdmU6MSkNCihYRU4pIFsyMDEyLTA4
LTMxIDIwOjUyOjI4XSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAw
MDAwMDAwMGMwMDAwNDA4IGZyb20gMHhjMDAwMDAwMDAxMDAwMDAwIHRvIDB4YzAwODAwMDAw
MTAwMDAwMC4NCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSB0cmFwcy5jOjI1ODQ6ZDAg
RG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDAwNDA5IGZyb20gMHhjMDAwMDAw
MDAxMDAwMDAwIHRvIDB4YzAwODAwMDAwMTAwMDAwMC4NCihYRU4pIFsyMDEyLTA4LTMxIDIw
OjUyOjI4XSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAw
MGMwMDEwMDA0IGZyb20gMHgwMDAwZmZmMmI5OWIxMmJlIHRvIDB4MDAwMDAwMDAwMDAwYWJj
ZC4NCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWlu
IGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDAwNDA4IGZyb20gMHhjMDAwMDAwMDAxMDAw
MDAwIHRvIDB4YzAwODAwMDAwMTAwMDAwMC4NCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4
XSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDAw
NDA5IGZyb20gMHhjMDAwMDAwMDAxMDAwMDAwIHRvIDB4YzAwODAwMDAwMTAwMDAwMC4NCihY
RU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVt
cHRlZCBXUk1TUiAwMDAwMDAwMGMwMDAwNDA4IGZyb20gMHhjMDAwMDAwMDAxMDAwMDAwIHRv
IDB4YzAwODAwMDAwMTAwMDAwMC4NCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSB0cmFw
cy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDAwNDA5IGZy
b20gMHhjMDAwMDAwMDAxMDAwMDAwIHRvIDB4YzAwODAwMDAwMTAwMDAwMC4NCihYRU4pIFsy
MDEyLTA4LTMxIDIwOjUyOjI4XSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBX
Uk1TUiAwMDAwMDAwMGMwMDAwNDA4IGZyb20gMHhjMDAwMDAwMDAxMDAwMDAwIHRvIDB4YzAw
ODAwMDAwMTAwMDAwMC4NCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSB0cmFwcy5jOjI1
ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDAwNDA5IGZyb20gMHhj
MDAwMDAwMDAxMDAwMDAwIHRvIDB4YzAwODAwMDAwMTAwMDAwMC4NCihYRU4pIFsyMDEyLTA4
LTMxIDIwOjUyOjI4XSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAw
MDAwMDAwMGMwMDAwNDA4IGZyb20gMHhjMDAwMDAwMDAxMDAwMDAwIHRvIDB4YzAwODAwMDAw
MTAwMDAwMC4NCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSB0cmFwcy5jOjI1ODQ6ZDAg
RG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDAwNDA5IGZyb20gMHhjMDAwMDAw
MDAxMDAwMDAwIHRvIDB4YzAwODAwMDAwMTAwMDAwMC4NCihYRU4pIFsyMDEyLTA4LTMxIDIw
OjUyOjI4XSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAw
MGMwMDAwNDA4IGZyb20gMHhjMDAwMDAwMDAxMDAwMDAwIHRvIDB4YzAwODAwMDAwMTAwMDAw
MC4NCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWlu
IGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDAwNDA5IGZyb20gMHhjMDAwMDAwMDAxMDAw
MDAwIHRvIDB4YzAwODAwMDAwMTAwMDAwMC4NCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4
XSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjAwLjANCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUy
OjI4XSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjAwLjINCihYRU4pIFsyMDEyLTA4LTMxIDIw
OjUyOjI4XSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjAyLjANCihYRU4pIFsyMDEyLTA4LTMx
IDIwOjUyOjI4XSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjAzLjANCihYRU4pIFsyMDEyLTA4
LTMxIDIwOjUyOjI4XSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjA1LjANCihYRU4pIFsyMDEy
LTA4LTMxIDIwOjUyOjI4XSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjA2LjANCihYRU4pIFsy
MDEyLTA4LTMxIDIwOjUyOjI4XSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjBhLjANCihYRU4p
IFsyMDEyLTA4LTMxIDIwOjUyOjI4XSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjBiLjANCihY
RU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjBjLjAN
CihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjBk
LjANCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSBQQ0kgYWRkIGRldmljZSAwMDAwOjAw
OjExLjANCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSBQQ0kgYWRkIGRldmljZSAwMDAw
OjAwOjEyLjANCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSBQQ0kgYWRkIGRldmljZSAw
MDAwOjAwOjEyLjINCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSBQQ0kgYWRkIGRldmlj
ZSAwMDAwOjAwOjEzLjANCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSBQQ0kgYWRkIGRl
dmljZSAwMDAwOjAwOjEzLjINCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSBQQ0kgYWRk
IGRldmljZSAwMDAwOjAwOjE0LjANCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSBQQ0kg
YWRkIGRldmljZSAwMDAwOjAwOjE0LjMNCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSBQ
Q0kgYWRkIGRldmljZSAwMDAwOjAwOjE0LjQNCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4
XSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjE0LjUNCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUy
OjI4XSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjE1LjANCihYRU4pIFsyMDEyLTA4LTMxIDIw
OjUyOjI4XSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjE2LjANCihYRU4pIFsyMDEyLTA4LTMx
IDIwOjUyOjI4XSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjE2LjINCihYRU4pIFsyMDEyLTA4
LTMxIDIwOjUyOjI4XSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjE4LjANCihYRU4pIFsyMDEy
LTA4LTMxIDIwOjUyOjI4XSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjE4LjENCihYRU4pIFsy
MDEyLTA4LTMxIDIwOjUyOjI4XSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjE4LjINCihYRU4p
IFsyMDEyLTA4LTMxIDIwOjUyOjI4XSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjE4LjMNCihY
RU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjE4LjQN
CihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSBQQ0kgYWRkIGRldmljZSAwMDAwOjBiOjAw
LjANCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSBQQ0kgYWRkIGRldmljZSAwMDAwOjBh
OjAwLjANCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSBQQ0kgYWRkIGRldmljZSAwMDAw
OjBhOjAwLjENCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSBQQ0kgYWRkIGRldmljZSAw
MDAwOjBhOjAwLjINCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSBQQ0kgYWRkIGRldmlj
ZSAwMDAwOjBhOjAwLjMNCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSBQQ0kgYWRkIGRl
dmljZSAwMDAwOjBhOjAwLjQNCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSBQQ0kgYWRk
IGRldmljZSAwMDAwOjBhOjAwLjUNCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSBQQ0kg
YWRkIGRldmljZSAwMDAwOjBhOjAwLjYNCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSBQ
Q0kgYWRkIGRldmljZSAwMDAwOjBhOjAwLjcNCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4
XSBQQ0kgYWRkIGRldmljZSAwMDAwOjA5OjAwLjANCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUy
OjI4XSBQQ0kgYWRkIGRldmljZSAwMDAwOjA4OjAwLjANCihYRU4pIFsyMDEyLTA4LTMxIDIw
OjUyOjI4XSBQQ0kgYWRkIGRldmljZSAwMDAwOjA3OjAwLjANCihYRU4pIFsyMDEyLTA4LTMx
IDIwOjUyOjI4XSBQQ0kgYWRkIGRldmljZSAwMDAwOjA2OjAwLjANCihYRU4pIFsyMDEyLTA4
LTMxIDIwOjUyOjI4XSBQQ0kgYWRkIGRldmljZSAwMDAwOjA2OjAwLjENCihYRU4pIFsyMDEy
LTA4LTMxIDIwOjUyOjI4XSBQQ0kgYWRkIGRldmljZSAwMDAwOjA1OjAwLjANCihYRU4pIFsy
MDEyLTA4LTMxIDIwOjUyOjI4XSBQQ0kgYWRkIGRldmljZSAwMDAwOjA0OjAwLjANCihYRU4p
IFsyMDEyLTA4LTMxIDIwOjUyOjI4XSBQQ0kgYWRkIGRldmljZSAwMDAwOjA0OjAwLjENCihY
RU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSBQQ0kgYWRkIGRldmljZSAwMDAwOjA0OjAwLjIN
CihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSBQQ0kgYWRkIGRldmljZSAwMDAwOjA0OjAw
LjMNCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSBQQ0kgYWRkIGRldmljZSAwMDAwOjA0
OjAwLjQNCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSBQQ0kgYWRkIGRldmljZSAwMDAw
OjA0OjAwLjUNCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSBQQ0kgYWRkIGRldmljZSAw
MDAwOjA0OjAwLjYNCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSBQQ0kgYWRkIGRldmlj
ZSAwMDAwOjA0OjAwLjcNCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSBQQ0kgYWRkIGRl
dmljZSAwMDAwOjAzOjA2LjANCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSBJT0FQSUNb
MF06IFNldCBQQ0kgcm91dGluZyBlbnRyeSAoNi04IC0+IDB4NTggLT4gSVJRIDggTW9kZTow
IEFjdGl2ZTowKQ0KKFhFTikgWzIwMTItMDgtMzEgMjA6NTI6MjhdIElPQVBJQ1swXTogU2V0
IFBDSSByb3V0aW5nIGVudHJ5ICg2LTEzIC0+IDB4ODggLT4gSVJRIDEzIE1vZGU6MCBBY3Rp
dmU6MCkNCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSBJT0FQSUNbMV06IFNldCBQQ0kg
cm91dGluZyBlbnRyeSAoNy0yOCAtPiAweGEwIC0+IElSUSA1MiBNb2RlOjEgQWN0aXZlOjEp
DQooWEVOKSBbMjAxMi0wOC0zMSAyMDo1MjoyOF0gSU9BUElDWzFdOiBTZXQgUENJIHJvdXRp
bmcgZW50cnkgKDctMjkgLT4gMHhhOCAtPiBJUlEgNTMgTW9kZToxIEFjdGl2ZToxKQ0KKFhF
TikgWzIwMTItMDgtMzEgMjA6NTI6MjhdIElPQVBJQ1sxXTogU2V0IFBDSSByb3V0aW5nIGVu
dHJ5ICg3LTMwIC0+IDB4YjAgLT4gSVJRIDU0IE1vZGU6MSBBY3RpdmU6MSkNCihYRU4pIFsy
MDEyLTA4LTMxIDIwOjUyOjI4XSBJT0FQSUNbMF06IFNldCBQQ0kgcm91dGluZyBlbnRyeSAo
Ni0xNiAtPiAweGI4IC0+IElSUSAxNiBNb2RlOjEgQWN0aXZlOjEpDQooWEVOKSBbMjAxMi0w
OC0zMSAyMDo1MjoyOF0gSU9BUElDWzBdOiBTZXQgUENJIHJvdXRpbmcgZW50cnkgKDYtMTgg
LT4gMHhjMCAtPiBJUlEgMTggTW9kZToxIEFjdGl2ZToxKQ0KKFhFTikgWzIwMTItMDgtMzEg
MjA6NTI6MjhdIElPQVBJQ1swXTogU2V0IFBDSSByb3V0aW5nIGVudHJ5ICg2LTE3IC0+IDB4
YzggLT4gSVJRIDE3IE1vZGU6MSBBY3RpdmU6MSkNCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUy
OjI4XSBJT0FQSUNbMV06IFNldCBQQ0kgcm91dGluZyBlbnRyeSAoNy00IC0+IDB4ZDAgLT4g
SVJRIDI4IE1vZGU6MSBBY3RpdmU6MSkNCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSBJ
T0FQSUNbMV06IFNldCBQQ0kgcm91dGluZyBlbnRyeSAoNy01IC0+IDB4ZDggLT4gSVJRIDI5
IE1vZGU6MSBBY3RpdmU6MSkNCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSBJT0FQSUNb
MV06IFNldCBQQ0kgcm91dGluZyBlbnRyeSAoNy02IC0+IDB4MjEgLT4gSVJRIDMwIE1vZGU6
MSBBY3RpdmU6MSkNCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSBJT0FQSUNbMV06IFNl
dCBQQ0kgcm91dGluZyBlbnRyeSAoNy03IC0+IDB4MjkgLT4gSVJRIDMxIE1vZGU6MSBBY3Rp
dmU6MSkNCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI5XSBJT0FQSUNbMV06IFNldCBQQ0kg
cm91dGluZyBlbnRyeSAoNy0xNiAtPiAweDMxIC0+IElSUSA0MCBNb2RlOjEgQWN0aXZlOjEp
DQooWEVOKSBbMjAxMi0wOC0zMSAyMDo1MjoyOV0gSU9BUElDWzFdOiBTZXQgUENJIHJvdXRp
bmcgZW50cnkgKDctMTcgLT4gMHgzOSAtPiBJUlEgNDEgTW9kZToxIEFjdGl2ZToxKQ0KKFhF
TikgWzIwMTItMDgtMzEgMjA6NTI6MjldIElPQVBJQ1sxXTogU2V0IFBDSSByb3V0aW5nIGVu
dHJ5ICg3LTE4IC0+IDB4NDEgLT4gSVJRIDQyIE1vZGU6MSBBY3RpdmU6MSkNCihYRU4pIFsy
MDEyLTA4LTMxIDIwOjUyOjI5XSBJT0FQSUNbMV06IFNldCBQQ0kgcm91dGluZyBlbnRyeSAo
Ny0xOSAtPiAweDQ5IC0+IElSUSA0MyBNb2RlOjEgQWN0aXZlOjEpDQooWEVOKSBbMjAxMi0w
OC0zMSAyMDo1MjoyOV0gSU9BUElDWzBdOiBTZXQgUENJIHJvdXRpbmcgZW50cnkgKDYtMjIg
LT4gMHg5OSAtPiBJUlEgMjIgTW9kZToxIEFjdGl2ZToxKQ0KKFhFTikgWzIwMTItMDgtMzEg
MjA6NTI6MjldIElPQVBJQ1sxXTogU2V0IFBDSSByb3V0aW5nIGVudHJ5ICg3LTEyIC0+IDB4
YTEgLT4gSVJRIDM2IE1vZGU6MSBBY3RpdmU6MSkNCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUy
OjI5XSBJT0FQSUNbMV06IFNldCBQQ0kgcm91dGluZyBlbnRyeSAoNy0yMyAtPiAweGE5IC0+
IElSUSA0NyBNb2RlOjEgQWN0aXZlOjEpDQooWEVOKSBbMjAxMi0wOC0zMSAyMDo1MjoyOV0g
SU9BUElDWzBdOiBTZXQgUENJIHJvdXRpbmcgZW50cnkgKDYtMTkgLT4gMHhiMSAtPiBJUlEg
MTkgTW9kZToxIEFjdGl2ZToxKQ0KKFhFTikgWzIwMTItMDgtMzEgMjA6NTI6MzBdIElPQVBJ
Q1sxXTogU2V0IFBDSSByb3V0aW5nIGVudHJ5ICg3LTIyIC0+IDB4YzEgLT4gSVJRIDQ2IE1v
ZGU6MSBBY3RpdmU6MSkNCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjMwXSBJT0FQSUNbMV06
IFNldCBQQ0kgcm91dGluZyBlbnRyeSAoNy0yNyAtPiAweGQxIC0+IElSUSA1MSBNb2RlOjEg
QWN0aXZlOjEpDQooWEVOKSBbMjAxMi0wOC0zMSAyMDo1MjozMl0gSU9BUElDWzFdOiBTZXQg
UENJIHJvdXRpbmcgZW50cnkgKDctOSAtPiAweDIyIC0+IElSUSAzMyBNb2RlOjEgQWN0aXZl
OjEpDQooWEVOKSBbMjAxMi0wOC0zMSAyMDo1MzoyN10gdHJhcHMuYzoyNTg0OmQxIERvbWFp
biBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwNCBmcm9tIDB4MDAwMDRlYjQwZDlj
Nzk1YSB0byAweDAwMDAwMDAwMDAwMGFiY2QuDQooWEVOKSBbMjAxMi0wOC0zMSAyMDo1Mzoz
M10gdHJhcHMuYzoyNTg0OmQyIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAx
MDAwNCBmcm9tIDB4MDAwMDgxOTdlMDBiNjA5OSB0byAweDAwMDAwMDAwMDAwMGFiY2QuDQoo
WEVOKSBbMjAxMi0wOC0zMSAyMDo1MzozOV0gdHJhcHMuYzoyNTg0OmQzIERvbWFpbiBhdHRl
bXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwNCBmcm9tIDB4MDAwMDYwMzRkNzBkYTE2NSB0
byAweDAwMDAwMDAwMDAwMGFiY2QuDQooWEVOKSBbMjAxMi0wOC0zMSAyMDo1Mzo0N10gdHJh
cHMuYzoyNTg0OmQ0IERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwNCBm
cm9tIDB4MDAwMDYwMzRkNzBkYTE2NSB0byAweDAwMDAwMDAwMDAwMGFiY2QuDQooWEVOKSBb
MjAxMi0wOC0zMSAyMDo1Mzo1M10gdHJhcHMuYzoyNTg0OmQ1IERvbWFpbiBhdHRlbXB0ZWQg
V1JNU1IgMDAwMDAwMDBjMDAxMDAwNCBmcm9tIDB4MDAwMDgxOTdlMDBiNjA5OSB0byAweDAw
MDAwMDAwMDAwMGFiY2QuDQooWEVOKSBbMjAxMi0wOC0zMSAyMDo1Mzo1OV0gdHJhcHMuYzoy
NTg0OmQ2IERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwNCBmcm9tIDB4
MDAwMDgxOTdlMDBiNjA5OSB0byAweDAwMDAwMDAwMDAwMGFiY2QuDQooWEVOKSBbMjAxMi0w
OC0zMSAyMDo1NDowNV0gdHJhcHMuYzoyNTg0OmQ3IERvbWFpbiBhdHRlbXB0ZWQgV1JNU1Ig
MDAwMDAwMDBjMDAxMDAwNCBmcm9tIDB4MDAwMDgxOTdlMDBiNjA5OSB0byAweDAwMDAwMDAw
MDAwMGFiY2QuDQooWEVOKSBbMjAxMi0wOC0zMSAyMDo1NDoxMF0gdHJhcHMuYzoyNTg0OmQ4
IERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwNCBmcm9tIDB4MDAwMDYw
MzRkNzBkYTE2NSB0byAweDAwMDAwMDAwMDAwMGFiY2QuDQooWEVOKSBbMjAxMi0wOC0zMSAy
MDo1NDoxNl0gdHJhcHMuYzoyNTg0OmQ5IERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAw
MDBjMDAxMDAwNCBmcm9tIDB4MDAwMDRlYjQwZDljNzk1YSB0byAweDAwMDAwMDAwMDAwMGFi
Y2QuDQooWEVOKSBbMjAxMi0wOC0zMSAyMDo1NDoyM10gdHJhcHMuYzoyNTg0OmQxMCBEb21h
aW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDQgZnJvbSAweDAwMDA4MTk3ZTAw
YjYwOTkgdG8gMHgwMDAwMDAwMDAwMDBhYmNkLg0KKFhFTikgWzIwMTItMDgtMzEgMjA6NTQ6
MjldIHRyYXBzLmM6MjU4NDpkMTEgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMw
MDEwMDA0IGZyb20gMHgwMDAwODE5N2UwMGI2MDk5IHRvIDB4MDAwMDAwMDAwMDAwYWJjZC4N
CihYRU4pIFsyMDEyLTA4LTMxIDIwOjU0OjM2XSB0cmFwcy5jOjI1ODQ6ZDEyIERvbWFpbiBh
dHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwNCBmcm9tIDB4MDAwMDRlYjQwZDljNzk1
YSB0byAweDAwMDAwMDAwMDAwMGFiY2QuDQooWEVOKSBbMjAxMi0wOC0zMSAyMDo1NDo0Ml0g
dHJhcHMuYzoyNTg0OmQxMyBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAw
MDQgZnJvbSAweDAwMDA2MDM0ZDcwZGExNjUgdG8gMHgwMDAwMDAwMDAwMDBhYmNkLg0KKFhF
TikgWzIwMTItMDgtMzEgMjA6NTQ6NDldIHRyYXBzLmM6MjU4NDpkMTQgRG9tYWluIGF0dGVt
cHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDA0IGZyb20gMHgwMDAwODY2YjgwMmI0MTY3IHRv
IDB4MDAwMDAwMDAwMDAwYWJjZC4NCihYRU4pIFsyMDEyLTA4LTMxIDIwOjU0OjU2XSB0cmFw
cy5jOjI1ODQ6ZDE1IERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwNCBm
cm9tIDB4MDAwMDg2NmI4MDJiNDE2NyB0byAweDAwMDAwMDAwMDAwMGFiY2QuDQooWEVOKSBb
MjAxMi0wOC0zMSAyMTowMjoyNF0gJ2gnIHByZXNzZWQgLT4gc2hvd2luZyBpbnN0YWxsZWQg
aGFuZGxlcnMNCihYRU4pIFsyMDEyLTA4LTMxIDIxOjAyOjI0XSAga2V5ICclJyAoYXNjaWkg
JzI1JykgPT4gdHJhcCB0byB4ZW5kYmcNCihYRU4pIFsyMDEyLTA4LTMxIDIxOjAyOjI0XSAg
a2V5ICcqJyAoYXNjaWkgJzJhJykgPT4gcHJpbnQgYWxsIGRpYWdub3N0aWNzDQooWEVOKSBb
MjAxMi0wOC0zMSAyMTowMjoyNF0gIGtleSAnMCcgKGFzY2lpICczMCcpID0+IGR1bXAgRG9t
MCByZWdpc3RlcnMNCihYRU4pIFsyMDEyLTA4LTMxIDIxOjAyOjI0XSAga2V5ICdBJyAoYXNj
aWkgJzQxJykgPT4gdG9nZ2xlIGFsdGVybmF0aXZlIGtleSBoYW5kbGluZw0KKFhFTikgWzIw
MTItMDgtMzEgMjE6MDI6MjRdICBrZXkgJ0gnIChhc2NpaSAnNDgnKSA9PiBkdW1wIGhlYXAg
aW5mbw0KKFhFTikgWzIwMTItMDgtMzEgMjE6MDI6MjRdICBrZXkgJ0knIChhc2NpaSAnNDkn
KSA9PiBkdW1wIEhWTSBpcnEgaW5mbw0KKFhFTikgWzIwMTItMDgtMzEgMjE6MDI6MjRdICBr
ZXkgJ00nIChhc2NpaSAnNGQnKSA9PiBkdW1wIE1TSSBzdGF0ZQ0KKFhFTikgWzIwMTItMDgt
MzEgMjE6MDI6MjRdICBrZXkgJ04nIChhc2NpaSAnNGUnKSA9PiB0cmlnZ2VyIGFuIE5NSQ0K
KFhFTikgWzIwMTItMDgtMzEgMjE6MDI6MjRdICBrZXkgJ08nIChhc2NpaSAnNGYnKSA9PiB0
b2dnbGUgc2hhZG93IGF1ZGl0cw0KKFhFTikgWzIwMTItMDgtMzEgMjE6MDI6MjRdICBrZXkg
J1EnIChhc2NpaSAnNTEnKSA9PiBkdW1wIFBDSSBkZXZpY2VzDQooWEVOKSBbMjAxMi0wOC0z
MSAyMTowMjoyNF0gIGtleSAnUicgKGFzY2lpICc1MicpID0+IHJlYm9vdCBtYWNoaW5lDQoo
WEVOKSBbMjAxMi0wOC0zMSAyMTowMjoyNF0gIGtleSAnUycgKGFzY2lpICc1MycpID0+IHJl
c2V0IHNoYWRvdyBwYWdldGFibGVzDQooWEVOKSBbMjAxMi0wOC0zMSAyMTowMjoyNF0gIGtl
eSAnYScgKGFzY2lpICc2MScpID0+IGR1bXAgdGltZXIgcXVldWVzDQooWEVOKSBbMjAxMi0w
OC0zMSAyMTowMjoyNF0gIGtleSAnYycgKGFzY2lpICc2MycpID0+IGR1bXAgQUNQSSBDeCBz
dHJ1Y3R1cmVzDQooWEVOKSBbMjAxMi0wOC0zMSAyMTowMjoyNF0gIGtleSAnZCcgKGFzY2lp
ICc2NCcpID0+IGR1bXAgcmVnaXN0ZXJzDQooWEVOKSBbMjAxMi0wOC0zMSAyMTowMjoyNF0g
IGtleSAnZScgKGFzY2lpICc2NScpID0+IGR1bXAgZXZ0Y2huIGluZm8NCihYRU4pIFsyMDEy
LTA4LTMxIDIxOjAyOjI0XSAga2V5ICdnJyAoYXNjaWkgJzY3JykgPT4gcHJpbnQgZ3JhbnQg
dGFibGUgdXNhZ2UNCihYRU4pIFsyMDEyLTA4LTMxIDIxOjAyOjI0XSAga2V5ICdoJyAoYXNj
aWkgJzY4JykgPT4gc2hvdyB0aGlzIG1lc3NhZ2UNCihYRU4pIFsyMDEyLTA4LTMxIDIxOjAy
OjI0XSAga2V5ICdpJyAoYXNjaWkgJzY5JykgPT4gZHVtcCBpbnRlcnJ1cHQgYmluZGluZ3MN
CihYRU4pIFsyMDEyLTA4LTMxIDIxOjAyOjI0XSAga2V5ICdtJyAoYXNjaWkgJzZkJykgPT4g
bWVtb3J5IGluZm8NCihYRU4pIFsyMDEyLTA4LTMxIDIxOjAyOjI0XSAga2V5ICduJyAoYXNj
aWkgJzZlJykgPT4gTk1JIHN0YXRpc3RpY3MNCihYRU4pIFsyMDEyLTA4LTMxIDIxOjAyOjI0
XSAga2V5ICdvJyAoYXNjaWkgJzZmJykgPT4gZHVtcCBpb21tdSBwMm0gdGFibGUNCihYRU4p
IFsyMDEyLTA4LTMxIDIxOjAyOjI0XSAga2V5ICdxJyAoYXNjaWkgJzcxJykgPT4gZHVtcCBk
b21haW4gKGFuZCBndWVzdCBkZWJ1ZykgaW5mbw0KKFhFTikgWzIwMTItMDgtMzEgMjE6MDI6
MjRdICBrZXkgJ3InIChhc2NpaSAnNzInKSA9PiBkdW1wIHJ1biBxdWV1ZXMNCihYRU4pIFsy
MDEyLTA4LTMxIDIxOjAyOjI0XSAga2V5ICdzJyAoYXNjaWkgJzczJykgPT4gZHVtcCBzb2Z0
dHNjIHN0YXRzDQooWEVOKSBbMjAxMi0wOC0zMSAyMTowMjoyNF0gIGtleSAndCcgKGFzY2lp
ICc3NCcpID0+IGRpc3BsYXkgbXVsdGktY3B1IGNsb2NrIGluZm8NCihYRU4pIFsyMDEyLTA4
LTMxIDIxOjAyOjI0XSAga2V5ICd1JyAoYXNjaWkgJzc1JykgPT4gZHVtcCBudW1hIGluZm8N
CihYRU4pIFsyMDEyLTA4LTMxIDIxOjAyOjI0XSAga2V5ICd2JyAoYXNjaWkgJzc2JykgPT4g
ZHVtcCBBTUQtViBWTUNCcw0KKFhFTikgWzIwMTItMDgtMzEgMjE6MDI6MjRdICBrZXkgJ3on
IChhc2NpaSAnN2EnKSA9PiBwcmludCBpb2FwaWMgaW5mbw0KKFhFTikgWzIwMTItMDgtMzEg
MjE6MDI6NDJdIEd1ZXN0IGludGVycnVwdCBpbmZvcm1hdGlvbjoNCihYRU4pIFsyMDEyLTA4
LTMxIDIxOjAyOjQyXSAgICBJUlE6ICAgMCBhZmZpbml0eTowMSB2ZWM6ZjAgdHlwZT1JTy1B
UElDLWVkZ2UgICAgc3RhdHVzPTAwMDAwMDAwIG1hcHBlZCwgdW5ib3VuZA0KKFhFTikgWzIw
MTItMDgtMzEgMjE6MDI6NDJdICAgIElSUTogICAxIGFmZmluaXR5OjAxIHZlYzozMCB0eXBl
PUlPLUFQSUMtZWRnZSAgICBzdGF0dXM9MDAwMDAwMzQgaW4tZmxpZ2h0PTAgZG9tYWluLWxp
c3Q9MDogIDEoLS0tLSksDQooWEVOKSBbMjAxMi0wOC0zMSAyMTowMjo0Ml0gICAgSVJROiAg
IDIgYWZmaW5pdHk6M2YgdmVjOmUyIHR5cGU9WFQtUElDICAgICAgICAgIHN0YXR1cz0wMDAw
MDAwMCBtYXBwZWQsIHVuYm91bmQNCihYRU4pIFsyMDEyLTA4LTMxIDIxOjAyOjQyXSAgICBJ
UlE6ICAgMyBhZmZpbml0eTowMSB2ZWM6MzggdHlwZT1JTy1BUElDLWVkZ2UgICAgc3RhdHVz
PTAwMDAwMDAyIG1hcHBlZCwgdW5ib3VuZA0KKFhFTikgWzIwMTItMDgtMzEgMjE6MDI6NDJd
ICAgIElSUTogICA0IGFmZmluaXR5OjAxIHZlYzpmMSB0eXBlPUlPLUFQSUMtZWRnZSAgICBz
dGF0dXM9MDAwMDAwMDAgbWFwcGVkLCB1bmJvdW5kDQooWEVOKSBbMjAxMi0wOC0zMSAyMTow
Mjo0Ml0gICAgSVJROiAgIDUgYWZmaW5pdHk6MDEgdmVjOjQwIHR5cGU9SU8tQVBJQy1lZGdl
ICAgIHN0YXR1cz0wMDAwMDAwMiBtYXBwZWQsIHVuYm91bmQNCihYRU4pIFsyMDEyLTA4LTMx
IDIxOjAyOjQyXSAgICBJUlE6ICAgNiBhZmZpbml0eTowMSB2ZWM6NDggdHlwZT1JTy1BUElD
LWVkZ2UgICAgc3RhdHVzPTAwMDAwMDAyIG1hcHBlZCwgdW5ib3VuZA0KKFhFTikgWzIwMTIt
MDgtMzEgMjE6MDI6NDJdICAgIElSUTogICA3IGFmZmluaXR5OjAxIHZlYzo1MCB0eXBlPUlP
LUFQSUMtZWRnZSAgICBzdGF0dXM9MDAwMDAwMDIgbWFwcGVkLCB1bmJvdW5kDQooWEVOKSBb
MjAxMi0wOC0zMSAyMTowMjo0Ml0gICAgSVJROiAgIDggYWZmaW5pdHk6MDEgdmVjOjU4IHR5
cGU9SU8tQVBJQy1lZGdlICAgIHN0YXR1cz0wMDAwMDAzMCBpbi1mbGlnaHQ9MCBkb21haW4t
bGlzdD0wOiAgOCgtLS0tKSwNCihYRU4pIFsyMDEyLTA4LTMxIDIxOjAyOjQyXSAgICBJUlE6
ICAgOSBhZmZpbml0eTowMSB2ZWM6NjAgdHlwZT1JTy1BUElDLWxldmVsICAgc3RhdHVzPTAw
MDAwMDMwIGluLWZsaWdodD0wIGRvbWFpbi1saXN0PTA6ICA5KC0tLS0pLA0KKFhFTikgWzIw
MTItMDgtMzEgMjE6MDI6NDJdICAgIElSUTogIDEwIGFmZmluaXR5OjAxIHZlYzo2OCB0eXBl
PUlPLUFQSUMtZWRnZSAgICBzdGF0dXM9MDAwMDAwMDIgbWFwcGVkLCB1bmJvdW5kDQooWEVO
KSBbMjAxMi0wOC0zMSAyMTowMjo0Ml0gICAgSVJROiAgMTEgYWZmaW5pdHk6MDEgdmVjOjcw
IHR5cGU9SU8tQVBJQy1lZGdlICAgIHN0YXR1cz0wMDAwMDAwMiBtYXBwZWQsIHVuYm91bmQN
CihYRU4pIFsyMDEyLTA4LTMxIDIxOjAyOjQyXSAgICBJUlE6ICAxMiBhZmZpbml0eTowMSB2
ZWM6NzggdHlwZT1JTy1BUElDLWVkZ2UgICAgc3RhdHVzPTAwMDAwMDMwIGluLWZsaWdodD0w
IGRvbWFpbi1saXN0PTA6IDEyKC0tLS0pLA0KKFhFTikgWzIwMTItMDgtMzEgMjE6MDI6NDJd
ICAgIElSUTogIDEzIGFmZmluaXR5OjNmIHZlYzo4OCB0eXBlPUlPLUFQSUMtZWRnZSAgICBz
dGF0dXM9MDAwMDAwMDIgbWFwcGVkLCB1bmJvdW5kDQooWEVOKSBbMjAxMi0wOC0zMSAyMTow
Mjo0Ml0gICAgSVJROiAgMTQgYWZmaW5pdHk6MDEgdmVjOjkwIHR5cGU9SU8tQVBJQy1lZGdl
ICAgIHN0YXR1cz0wMDAwMDAwMiBtYXBwZWQsIHVuYm91bmQNCihYRU4pIFsyMDEyLTA4LTMx
IDIxOjAyOjQyXSAgICBJUlE6ICAxNSBhZmZpbml0eTowMSB2ZWM6OTggdHlwZT1JTy1BUElD
LWVkZ2UgICAgc3RhdHVzPTAwMDAwMDAyIG1hcHBlZCwgdW5ib3VuZA0KKFhFTikgWzIwMTIt
MDgtMzEgMjE6MDI6NDJdICAgIElSUTogIDE2IGFmZmluaXR5OjNmIHZlYzpiOCB0eXBlPUlP
LUFQSUMtbGV2ZWwgICBzdGF0dXM9MDAwMDAwMDIgbWFwcGVkLCB1bmJvdW5kDQooWEVOKSBb
MjAxMi0wOC0zMSAyMTowMjo0Ml0gICAgSVJROiAgMTcgYWZmaW5pdHk6MDEgdmVjOmM4IHR5
cGU9SU8tQVBJQy1sZXZlbCAgIHN0YXR1cz0wMDAwMDAzMCBpbi1mbGlnaHQ9MCBkb21haW4t
bGlzdD0wOiAxNygtLS0tKSwNCihYRU4pIFsyMDEyLTA4LTMxIDIxOjAyOjQyXSAgICBJUlE6
ICAxOCBhZmZpbml0eTowMSB2ZWM6YzAgdHlwZT1JTy1BUElDLWxldmVsICAgc3RhdHVzPTAw
MDAwMDMwIGluLWZsaWdodD0wIGRvbWFpbi1saXN0PTA6IDE4KC0tLS0pLA0KKFhFTikgWzIw
MTItMDgtMzEgMjE6MDI6NDJdICAgIElSUTogIDE5IGFmZmluaXR5OjNmIHZlYzpiMSB0eXBl
PUlPLUFQSUMtbGV2ZWwgICBzdGF0dXM9MDAwMDAwMDIgbWFwcGVkLCB1bmJvdW5kDQooWEVO
KSBbMjAxMi0wOC0zMSAyMTowMjo0Ml0gICAgSVJROiAgMjIgYWZmaW5pdHk6MDEgdmVjOjk5
IHR5cGU9SU8tQVBJQy1sZXZlbCAgIHN0YXR1cz0wMDAwMDAzMCBpbi1mbGlnaHQ9MCBkb21h
aW4tbGlzdD0wOiAyMigtLS0tKSwxMTogMjIoLS0tLSksDQooWEVOKSBbMjAxMi0wOC0zMSAy
MTowMjo0Ml0gICAgSVJROiAgMjggYWZmaW5pdHk6MDEgdmVjOmQwIHR5cGU9SU8tQVBJQy1s
ZXZlbCAgIHN0YXR1cz0wMDAwMDAzMCBpbi1mbGlnaHQ9MCBkb21haW4tbGlzdD0wOiAyOCgt
LS0tKSwxNDogMjgoLS0tLSksDQooWEVOKSBbMjAxMi0wOC0zMSAyMTowMjo0Ml0gICAgSVJR
OiAgMjkgYWZmaW5pdHk6MjAgdmVjOmQ4IHR5cGU9SU8tQVBJQy1sZXZlbCAgIHN0YXR1cz0w
MDAwMDAzMCBpbi1mbGlnaHQ9MCBkb21haW4tbGlzdD0wOiAyOSgtLS0tKSwxNDogMjkoLS0t
LSksDQooWEVOKSBbMjAxMi0wOC0zMSAyMTowMjo0Ml0gICAgSVJROiAgMzAgYWZmaW5pdHk6
MDQgdmVjOjIxIHR5cGU9SU8tQVBJQy1sZXZlbCAgIHN0YXR1cz0wMDAwMDAzMCBpbi1mbGln
aHQ9MCBkb21haW4tbGlzdD0wOiAzMCgtLS0tKSwxNDogMzAoLS0tLSksDQooWEVOKSBbMjAx
Mi0wOC0zMSAyMTowMjo0Ml0gICAgSVJROiAgMzEgYWZmaW5pdHk6MDEgdmVjOjI5IHR5cGU9
SU8tQVBJQy1sZXZlbCAgIHN0YXR1cz0wMDAwMDAzMCBpbi1mbGlnaHQ9MCBkb21haW4tbGlz
dD0wOiAzMSgtLS0tKSwxNDogMzEoLS0tLSksDQooWEVOKSBbMjAxMi0wOC0zMSAyMTowMjo0
Ml0gICAgSVJROiAgMzMgYWZmaW5pdHk6M2YgdmVjOjIyIHR5cGU9SU8tQVBJQy1sZXZlbCAg
IHN0YXR1cz0wMDAwMDAwMiBtYXBwZWQsIHVuYm91bmQNCihYRU4pIFsyMDEyLTA4LTMxIDIx
OjAyOjQyXSAgICBJUlE6ICAzNiBhZmZpbml0eTozZiB2ZWM6YTEgdHlwZT1JTy1BUElDLWxl
dmVsICAgc3RhdHVzPTAwMDAwMDAyIG1hcHBlZCwgdW5ib3VuZA0KKFhFTikgWzIwMTItMDgt
MzEgMjE6MDI6NDJdICAgIElSUTogIDQwIGFmZmluaXR5OjNmIHZlYzozMSB0eXBlPUlPLUFQ
SUMtbGV2ZWwgICBzdGF0dXM9MDAwMDAwMDIgbWFwcGVkLCB1bmJvdW5kDQooWEVOKSBbMjAx
Mi0wOC0zMSAyMTowMjo0Ml0gICAgSVJROiAgNDEgYWZmaW5pdHk6M2YgdmVjOjM5IHR5cGU9
SU8tQVBJQy1sZXZlbCAgIHN0YXR1cz0wMDAwMDAwMiBtYXBwZWQsIHVuYm91bmQNCihYRU4p
IFsyMDEyLTA4LTMxIDIxOjAyOjQyXSAgICBJUlE6ICA0MiBhZmZpbml0eTozZiB2ZWM6NDEg
dHlwZT1JTy1BUElDLWxldmVsICAgc3RhdHVzPTAwMDAwMDAyIG1hcHBlZCwgdW5ib3VuZA0K
KFhFTikgWzIwMTItMDgtMzEgMjE6MDI6NDJdICAgIElSUTogIDQzIGFmZmluaXR5OjNmIHZl
Yzo0OSB0eXBlPUlPLUFQSUMtbGV2ZWwgICBzdGF0dXM9MDAwMDAwMDIgbWFwcGVkLCB1bmJv
dW5kDQooWEVOKSBbMjAxMi0wOC0zMSAyMTowMjo0Ml0gICAgSVJROiAgNDYgYWZmaW5pdHk6
M2YgdmVjOmMxIHR5cGU9SU8tQVBJQy1sZXZlbCAgIHN0YXR1cz0wMDAwMDAwMiBtYXBwZWQs
IHVuYm91bmQNCihYRU4pIFsyMDEyLTA4LTMxIDIxOjAyOjQyXSAgICBJUlE6ICA0NyBhZmZp
bml0eTowNCB2ZWM6YTkgdHlwZT1JTy1BUElDLWxldmVsICAgc3RhdHVzPTAwMDAwMDMwIGlu
LWZsaWdodD0wIGRvbWFpbi1saXN0PTA6IDQ3KC0tLS0pLDE0OiA0NygtLS0tKSwNCihYRU4p
IFsyMDEyLTA4LTMxIDIxOjAyOjQyXSAgICBJUlE6ICA1MSBhZmZpbml0eTozZiB2ZWM6ZDEg
dHlwZT1JTy1BUElDLWxldmVsICAgc3RhdHVzPTAwMDAwMDAyIG1hcHBlZCwgdW5ib3VuZA0K
KFhFTikgWzIwMTItMDgtMzEgMjE6MDI6NDJdICAgIElSUTogIDUyIGFmZmluaXR5OjNmIHZl
YzphMCB0eXBlPUlPLUFQSUMtbGV2ZWwgICBzdGF0dXM9MDAwMDAwMDIgbWFwcGVkLCB1bmJv
dW5kDQooWEVOKSBbMjAxMi0wOC0zMSAyMTowMjo0Ml0gICAgSVJROiAgNTMgYWZmaW5pdHk6
M2YgdmVjOmE4IHR5cGU9SU8tQVBJQy1sZXZlbCAgIHN0YXR1cz0wMDAwMDAwMiBtYXBwZWQs
IHVuYm91bmQNCihYRU4pIFsyMDEyLTA4LTMxIDIxOjAyOjQyXSAgICBJUlE6ICA1NCBhZmZp
bml0eTozZiB2ZWM6YjAgdHlwZT1JTy1BUElDLWxldmVsICAgc3RhdHVzPTAwMDAwMDAyIG1h
cHBlZCwgdW5ib3VuZA0KKFhFTikgWzIwMTItMDgtMzEgMjE6MDI6NDJdICAgIElSUTogIDU2
IGFmZmluaXR5OjAxIHZlYzoyOCB0eXBlPUFNRC1JT01NVS1NU0kgICBzdGF0dXM9MDAwMDAw
MDAgbWFwcGVkLCB1bmJvdW5kDQooWEVOKSBbMjAxMi0wOC0zMSAyMTowMjo0Ml0gICAgSVJR
OiAgNTcgYWZmaW5pdHk6M2YgdmVjOjUxIHR5cGU9UENJLU1TSSAgICAgICAgIHN0YXR1cz0w
MDAwMDAwMiBtYXBwZWQsIHVuYm91bmQNCihYRU4pIFsyMDEyLTA4LTMxIDIxOjAyOjQyXSAg
ICBJUlE6ICA1OCBhZmZpbml0eTozZiB2ZWM6NTkgdHlwZT1QQ0ktTVNJICAgICAgICAgc3Rh
dHVzPTAwMDAwMDAyIG1hcHBlZCwgdW5ib3VuZA0KKFhFTikgWzIwMTItMDgtMzEgMjE6MDI6
NDJdICAgIElSUTogIDU5IGFmZmluaXR5OjNmIHZlYzo2MSB0eXBlPVBDSS1NU0kgICAgICAg
ICBzdGF0dXM9MDAwMDAwMDIgbWFwcGVkLCB1bmJvdW5kDQooWEVOKSBbMjAxMi0wOC0zMSAy
MTowMjo0Ml0gICAgSVJROiAgNjAgYWZmaW5pdHk6M2YgdmVjOjY5IHR5cGU9UENJLU1TSSAg
ICAgICAgIHN0YXR1cz0wMDAwMDAwMiBtYXBwZWQsIHVuYm91bmQNCihYRU4pIFsyMDEyLTA4
LTMxIDIxOjAyOjQyXSAgICBJUlE6ICA2MSBhZmZpbml0eTozZiB2ZWM6NzEgdHlwZT1QQ0kt
TVNJICAgICAgICAgc3RhdHVzPTAwMDAwMDAyIG1hcHBlZCwgdW5ib3VuZA0KKFhFTikgWzIw
MTItMDgtMzEgMjE6MDI6NDJdICAgIElSUTogIDYyIGFmZmluaXR5OjNmIHZlYzo3OSB0eXBl
PVBDSS1NU0kgICAgICAgICBzdGF0dXM9MDAwMDAwMDIgbWFwcGVkLCB1bmJvdW5kDQooWEVO
KSBbMjAxMi0wOC0zMSAyMTowMjo0Ml0gICAgSVJROiAgNjMgYWZmaW5pdHk6M2YgdmVjOjgx
IHR5cGU9UENJLU1TSSAgICAgICAgIHN0YXR1cz0wMDAwMDAwMiBtYXBwZWQsIHVuYm91bmQN
CihYRU4pIFsyMDEyLTA4LTMxIDIxOjAyOjQyXSAgICBJUlE6ICA2NCBhZmZpbml0eTozZiB2
ZWM6ODkgdHlwZT1QQ0ktTVNJICAgICAgICAgc3RhdHVzPTAwMDAwMDAyIG1hcHBlZCwgdW5i
b3VuZA0KKFhFTikgWzIwMTItMDgtMzEgMjE6MDI6NDJdICAgIElSUTogIDY1IGFmZmluaXR5
OjNmIHZlYzo5MSB0eXBlPVBDSS1NU0kgICAgICAgICBzdGF0dXM9MDAwMDAwMDIgbWFwcGVk
LCB1bmJvdW5kDQooWEVOKSBbMjAxMi0wOC0zMSAyMTowMjo0Ml0gICAgSVJROiAgNjYgYWZm
aW5pdHk6MDEgdmVjOmI5IHR5cGU9UENJLU1TSSAgICAgICAgIHN0YXR1cz0wMDAwMDAxMCBp
bi1mbGlnaHQ9MCBkb21haW4tbGlzdD0wOjMwMigtLS0tKSwNCihYRU4pIFsyMDEyLTA4LTMx
IDIxOjAyOjQyXSAgICBJUlE6ICA2NyBhZmZpbml0eTowMSB2ZWM6YzkgdHlwZT1QQ0ktTVNJ
ICAgICAgICAgc3RhdHVzPTAwMDAwMDEwIGluLWZsaWdodD0wIGRvbWFpbi1saXN0PTA6MzAx
KC0tLS0pLA0KKFhFTikgWzIwMTItMDgtMzEgMjE6MDI6NDJdICAgIElSUTogIDY4IGFmZmlu
aXR5OjAxIHZlYzpkOSB0eXBlPVBDSS1NU0kgICAgICAgICBzdGF0dXM9MDAwMDAwMTAgaW4t
ZmxpZ2h0PTAgZG9tYWluLWxpc3Q9MDozMDAoLS0tLSksDQooWEVOKSBbMjAxMi0wOC0zMSAy
MTowMjo0Ml0gICAgSVJROiAgNjkgYWZmaW5pdHk6MDEgdmVjOjJhIHR5cGU9UENJLU1TSSAg
ICAgICAgIHN0YXR1cz0wMDAwMDAzMCBpbi1mbGlnaHQ9MCBkb21haW4tbGlzdD0wOjI5OSgt
LS0tKSwNCihYRU4pIFsyMDEyLTA4LTMxIDIxOjAyOjQyXSBJTy1BUElDIGludGVycnVwdCBp
bmZvcm1hdGlvbjoNCihYRU4pIFsyMDEyLTA4LTMxIDIxOjAyOjQyXSAgICAgSVJRICAwIFZl
YzI0MDoNCihYRU4pIFsyMDEyLTA4LTMxIDIxOjAyOjQyXSAgICAgICBBcGljIDB4MDAsIFBp
biAgMjogdmVjPWYwIGRlbGl2ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0dXM9MCBwb2xhcml0eT0w
IGlycj0wIHRyaWc9RSBtYXNrPTAgZGVzdF9pZDoxDQooWEVOKSBbMjAxMi0wOC0zMSAyMTow
Mjo0Ml0gICAgIElSUSAgMSBWZWMgNDg6DQooWEVOKSBbMjAxMi0wOC0zMSAyMTowMjo0Ml0g
ICAgICAgQXBpYyAweDAwLCBQaW4gIDE6IHZlYz0zMCBkZWxpdmVyeT1Mb1ByaSBkZXN0PUwg
c3RhdHVzPTAgcG9sYXJpdHk9MCBpcnI9MCB0cmlnPUUgbWFzaz0wIGRlc3RfaWQ6MQ0KKFhF
TikgWzIwMTItMDgtMzEgMjE6MDI6NDJdICAgICBJUlEgIDMgVmVjIDU2Og0KKFhFTikgWzIw
MTItMDgtMzEgMjE6MDI6NDJdICAgICAgIEFwaWMgMHgwMCwgUGluICAzOiB2ZWM9MzggZGVs
aXZlcnk9TG9QcmkgZGVzdD1MIHN0YXR1cz0wIHBvbGFyaXR5PTAgaXJyPTAgdHJpZz1FIG1h
c2s9MCBkZXN0X2lkOjENCihYRU4pIFsyMDEyLTA4LTMxIDIxOjAyOjQyXSAgICAgSVJRICA0
IFZlYzI0MToNCihYRU4pIFsyMDEyLTA4LTMxIDIxOjAyOjQyXSAgICAgICBBcGljIDB4MDAs
IFBpbiAgNDogdmVjPWYxIGRlbGl2ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0dXM9MCBwb2xhcml0
eT0wIGlycj0wIHRyaWc9RSBtYXNrPTAgZGVzdF9pZDoxDQooWEVOKSBbMjAxMi0wOC0zMSAy
MTowMjo0Ml0gICAgIElSUSAgNSBWZWMgNjQ6DQooWEVOKSBbMjAxMi0wOC0zMSAyMTowMjo0
Ml0gICAgICAgQXBpYyAweDAwLCBQaW4gIDU6IHZlYz00MCBkZWxpdmVyeT1Mb1ByaSBkZXN0
PUwgc3RhdHVzPTAgcG9sYXJpdHk9MCBpcnI9MCB0cmlnPUUgbWFzaz0wIGRlc3RfaWQ6MQ0K
KFhFTikgWzIwMTItMDgtMzEgMjE6MDI6NDJdICAgICBJUlEgIDYgVmVjIDcyOg0KKFhFTikg
WzIwMTItMDgtMzEgMjE6MDI6NDJdICAgICAgIEFwaWMgMHgwMCwgUGluICA2OiB2ZWM9NDgg
ZGVsaXZlcnk9TG9QcmkgZGVzdD1MIHN0YXR1cz0wIHBvbGFyaXR5PTAgaXJyPTAgdHJpZz1F
IG1hc2s9MCBkZXN0X2lkOjENCihYRU4pIFsyMDEyLTA4LTMxIDIxOjAyOjQyXSAgICAgSVJR
ICA3IFZlYyA4MDoNCihYRU4pIFsyMDEyLTA4LTMxIDIxOjAyOjQyXSAgICAgICBBcGljIDB4
MDAsIFBpbiAgNzogdmVjPTUwIGRlbGl2ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0dXM9MCBwb2xh
cml0eT0wIGlycj0wIHRyaWc9RSBtYXNrPTAgZGVzdF9pZDoxDQooWEVOKSBbMjAxMi0wOC0z
MSAyMTowMjo0Ml0gICAgIElSUSAgOCBWZWMgODg6DQooWEVOKSBbMjAxMi0wOC0zMSAyMTow
Mjo0Ml0gICAgICAgQXBpYyAweDAwLCBQaW4gIDg6IHZlYz01OCBkZWxpdmVyeT1Mb1ByaSBk
ZXN0PUwgc3RhdHVzPTAgcG9sYXJpdHk9MCBpcnI9MCB0cmlnPUUgbWFzaz0wIGRlc3RfaWQ6
MQ0KKFhFTikgWzIwMTItMDgtMzEgMjE6MDI6NDJdICAgICBJUlEgIDkgVmVjIDk2Og0KKFhF
TikgWzIwMTItMDgtMzEgMjE6MDI6NDJdICAgICAgIEFwaWMgMHgwMCwgUGluICA5OiB2ZWM9
NjAgZGVsaXZlcnk9TG9QcmkgZGVzdD1MIHN0YXR1cz0wIHBvbGFyaXR5PTEgaXJyPTAgdHJp
Zz1MIG1hc2s9MCBkZXN0X2lkOjENCihYRU4pIFsyMDEyLTA4LTMxIDIxOjAyOjQyXSAgICAg
SVJRIDEwIFZlYzEwNDoNCihYRU4pIFsyMDEyLTA4LTMxIDIxOjAyOjQyXSAgICAgICBBcGlj
IDB4MDAsIFBpbiAxMDogdmVjPTY4IGRlbGl2ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0dXM9MCBw
b2xhcml0eT0wIGlycj0wIHRyaWc9RSBtYXNrPTAgZGVzdF9pZDoxDQooWEVOKSBbMjAxMi0w
OC0zMSAyMTowMjo0Ml0gICAgIElSUSAxMSBWZWMxMTI6DQooWEVOKSBbMjAxMi0wOC0zMSAy
MTowMjo0Ml0gICAgICAgQXBpYyAweDAwLCBQaW4gMTE6IHZlYz03MCBkZWxpdmVyeT1Mb1By
aSBkZXN0PUwgc3RhdHVzPTAgcG9sYXJpdHk9MCBpcnI9MCB0cmlnPUUgbWFzaz0wIGRlc3Rf
aWQ6MQ0KKFhFTikgWzIwMTItMDgtMzEgMjE6MDI6NDJdICAgICBJUlEgMTIgVmVjMTIwOg0K
KFhFTikgWzIwMTItMDgtMzEgMjE6MDI6NDJdICAgICAgIEFwaWMgMHgwMCwgUGluIDEyOiB2
ZWM9NzggZGVsaXZlcnk9TG9QcmkgZGVzdD1MIHN0YXR1cz0wIHBvbGFyaXR5PTAgaXJyPTAg
dHJpZz1FIG1hc2s9MCBkZXN0X2lkOjENCihYRU4pIFsyMDEyLTA4LTMxIDIxOjAyOjQyXSAg
ICAgSVJRIDEzIFZlYzEzNjoNCihYRU4pIFsyMDEyLTA4LTMxIDIxOjAyOjQyXSAgICAgICBB
cGljIDB4MDAsIFBpbiAxMzogdmVjPTg4IGRlbGl2ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0dXM9
MCBwb2xhcml0eT0wIGlycj0wIHRyaWc9RSBtYXNrPTEgZGVzdF9pZDo2Mw0KKFhFTikgWzIw
MTItMDgtMzEgMjE6MDI6NDJdICAgICBJUlEgMTQgVmVjMTQ0Og0KKFhFTikgWzIwMTItMDgt
MzEgMjE6MDI6NDJdICAgICAgIEFwaWMgMHgwMCwgUGluIDE0OiB2ZWM9OTAgZGVsaXZlcnk9
TG9QcmkgZGVzdD1MIHN0YXR1cz0wIHBvbGFyaXR5PTAgaXJyPTAgdHJpZz1FIG1hc2s9MCBk
ZXN0X2lkOjENCihYRU4pIFsyMDEyLTA4LTMxIDIxOjAyOjQyXSAgICAgSVJRIDE1IFZlYzE1
MjoNCihYRU4pIFsyMDEyLTA4LTMxIDIxOjAyOjQyXSAgICAgICBBcGljIDB4MDAsIFBpbiAx
NTogdmVjPTk4IGRlbGl2ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0dXM9MCBwb2xhcml0eT0wIGly
cj0wIHRyaWc9RSBtYXNrPTAgZGVzdF9pZDoxDQooWEVOKSBbMjAxMi0wOC0zMSAyMTowMjo0
Ml0gICAgIElSUSAxNiBWZWMxODQ6DQooWEVOKSBbMjAxMi0wOC0zMSAyMTowMjo0Ml0gICAg
ICAgQXBpYyAweDAwLCBQaW4gMTY6IHZlYz1iOCBkZWxpdmVyeT1Mb1ByaSBkZXN0PUwgc3Rh
dHVzPTAgcG9sYXJpdHk9MSBpcnI9MCB0cmlnPUwgbWFzaz0xIGRlc3RfaWQ6NjMNCihYRU4p
IFsyMDEyLTA4LTMxIDIxOjAyOjQyXSAgICAgSVJRIDE3IFZlYzIwMDoNCihYRU4pIFsyMDEy
LTA4LTMxIDIxOjAyOjQyXSAgICAgICBBcGljIDB4MDAsIFBpbiAxNzogdmVjPWM4IGRlbGl2
ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0dXM9MCBwb2xhcml0eT0xIGlycj0wIHRyaWc9TCBtYXNr
PTAgZGVzdF9pZDoxDQooWEVOKSBbMjAxMi0wOC0zMSAyMTowMjo0Ml0gICAgIElSUSAxOCBW
ZWMxOTI6DQooWEVOKSBbMjAxMi0wOC0zMSAyMTowMjo0Ml0gICAgICAgQXBpYyAweDAwLCBQ
aW4gMTg6IHZlYz1jMCBkZWxpdmVyeT1Mb1ByaSBkZXN0PUwgc3RhdHVzPTAgcG9sYXJpdHk9
MSBpcnI9MCB0cmlnPUwgbWFzaz0wIGRlc3RfaWQ6MQ0KKFhFTikgWzIwMTItMDgtMzEgMjE6
MDI6NDJdICAgICBJUlEgMTkgVmVjMTc3Og0KKFhFTikgWzIwMTItMDgtMzEgMjE6MDI6NDJd
ICAgICAgIEFwaWMgMHgwMCwgUGluIDE5OiB2ZWM9YjEgZGVsaXZlcnk9TG9QcmkgZGVzdD1M
IHN0YXR1cz0wIHBvbGFyaXR5PTEgaXJyPTAgdHJpZz1MIG1hc2s9MSBkZXN0X2lkOjYzDQoo
WEVOKSBbMjAxMi0wOC0zMSAyMTowMjo0Ml0gICAgIElSUSAyMiBWZWMxNTM6DQooWEVOKSBb
MjAxMi0wOC0zMSAyMTowMjo0Ml0gICAgICAgQXBpYyAweDAwLCBQaW4gMjI6IHZlYz05OSBk
ZWxpdmVyeT1Mb1ByaSBkZXN0PUwgc3RhdHVzPTAgcG9sYXJpdHk9MSBpcnI9MCB0cmlnPUwg
bWFzaz0wIGRlc3RfaWQ6MQ0KKFhFTikgWzIwMTItMDgtMzEgMjE6MDI6NDJdICAgICBJUlEg
MjggVmVjMjA4Og0KKFhFTikgWzIwMTItMDgtMzEgMjE6MDI6NDJdICAgICAgIEFwaWMgMHgw
MSwgUGluICA0OiB2ZWM9ZDAgZGVsaXZlcnk9TG9QcmkgZGVzdD1MIHN0YXR1cz0wIHBvbGFy
aXR5PTEgaXJyPTAgdHJpZz1MIG1hc2s9MCBkZXN0X2lkOjENCihYRU4pIFsyMDEyLTA4LTMx
IDIxOjAyOjQyXSAgICAgSVJRIDI5IFZlYzIxNjoNCihYRU4pIFsyMDEyLTA4LTMxIDIxOjAy
OjQyXSAgICAgICBBcGljIDB4MDEsIFBpbiAgNTogdmVjPWQ4IGRlbGl2ZXJ5PUxvUHJpIGRl
c3Q9TCBzdGF0dXM9MCBwb2xhcml0eT0xIGlycj0wIHRyaWc9TCBtYXNrPTAgZGVzdF9pZDoz
Mg0KKFhFTikgWzIwMTItMDgtMzEgMjE6MDI6NDJdICAgICBJUlEgMzAgVmVjIDMzOg0KKFhF
TikgWzIwMTItMDgtMzEgMjE6MDI6NDJdICAgICAgIEFwaWMgMHgwMSwgUGluICA2OiB2ZWM9
MjEgZGVsaXZlcnk9TG9QcmkgZGVzdD1MIHN0YXR1cz0wIHBvbGFyaXR5PTEgaXJyPTAgdHJp
Zz1MIG1hc2s9MCBkZXN0X2lkOjQNCihYRU4pIFsyMDEyLTA4LTMxIDIxOjAyOjQyXSAgICAg
SVJRIDMxIFZlYyA0MToNCihYRU4pIFsyMDEyLTA4LTMxIDIxOjAyOjQyXSAgICAgICBBcGlj
IDB4MDEsIFBpbiAgNzogdmVjPTI5IGRlbGl2ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0dXM9MCBw
b2xhcml0eT0xIGlycj0wIHRyaWc9TCBtYXNrPTAgZGVzdF9pZDoxDQooWEVOKSBbMjAxMi0w
OC0zMSAyMTowMjo0Ml0gICAgIElSUSAzMyBWZWMgMzQ6DQooWEVOKSBbMjAxMi0wOC0zMSAy
MTowMjo0Ml0gICAgICAgQXBpYyAweDAxLCBQaW4gIDk6IHZlYz0yMiBkZWxpdmVyeT1Mb1By
aSBkZXN0PUwgc3RhdHVzPTAgcG9sYXJpdHk9MSBpcnI9MCB0cmlnPUwgbWFzaz0xIGRlc3Rf
aWQ6NjMNCihYRU4pIFsyMDEyLTA4LTMxIDIxOjAyOjQyXSAgICAgSVJRIDM2IFZlYzE2MToN
CihYRU4pIFsyMDEyLTA4LTMxIDIxOjAyOjQyXSAgICAgICBBcGljIDB4MDEsIFBpbiAxMjog
dmVjPWExIGRlbGl2ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0dXM9MCBwb2xhcml0eT0xIGlycj0w
IHRyaWc9TCBtYXNrPTEgZGVzdF9pZDo2Mw0KKFhFTikgWzIwMTItMDgtMzEgMjE6MDI6NDJd
ICAgICBJUlEgNDAgVmVjIDQ5Og0KKFhFTikgWzIwMTItMDgtMzEgMjE6MDI6NDJdICAgICAg
IEFwaWMgMHgwMSwgUGluIDE2OiB2ZWM9MzEgZGVsaXZlcnk9TG9QcmkgZGVzdD1MIHN0YXR1
cz0wIHBvbGFyaXR5PTEgaXJyPTAgdHJpZz1MIG1hc2s9MSBkZXN0X2lkOjYzDQooWEVOKSBb
MjAxMi0wOC0zMSAyMTowMjo0Ml0gICAgIElSUSA0MSBWZWMgNTc6DQooWEVOKSBbMjAxMi0w
OC0zMSAyMTowMjo0Ml0gICAgICAgQXBpYyAweDAxLCBQaW4gMTc6IHZlYz0zOSBkZWxpdmVy
eT1Mb1ByaSBkZXN0PUwgc3RhdHVzPTAgcG9sYXJpdHk9MSBpcnI9MCB0cmlnPUwgbWFzaz0x
IGRlc3RfaWQ6NjMNCihYRU4pIFsyMDEyLTA4LTMxIDIxOjAyOjQyXSAgICAgSVJRIDQyIFZl
YyA2NToNCihYRU4pIFsyMDEyLTA4LTMxIDIxOjAyOjQyXSAgICAgICBBcGljIDB4MDEsIFBp
biAxODogdmVjPTQxIGRlbGl2ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0dXM9MCBwb2xhcml0eT0x
IGlycj0wIHRyaWc9TCBtYXNrPTEgZGVzdF9pZDo2Mw0KKFhFTikgWzIwMTItMDgtMzEgMjE6
MDI6NDJdICAgICBJUlEgNDMgVmVjIDczOg0KKFhFTikgWzIwMTItMDgtMzEgMjE6MDI6NDJd
ICAgICAgIEFwaWMgMHgwMSwgUGluIDE5OiB2ZWM9NDkgZGVsaXZlcnk9TG9QcmkgZGVzdD1M
IHN0YXR1cz0wIHBvbGFyaXR5PTEgaXJyPTAgdHJpZz1MIG1hc2s9MSBkZXN0X2lkOjYzDQoo
WEVOKSBbMjAxMi0wOC0zMSAyMTowMjo0Ml0gICAgIElSUSA0NiBWZWMxOTM6DQooWEVOKSBb
MjAxMi0wOC0zMSAyMTowMjo0Ml0gICAgICAgQXBpYyAweDAxLCBQaW4gMjI6IHZlYz1jMSBk
ZWxpdmVyeT1Mb1ByaSBkZXN0PUwgc3RhdHVzPTAgcG9sYXJpdHk9MSBpcnI9MCB0cmlnPUwg
bWFzaz0xIGRlc3RfaWQ6NjMNCihYRU4pIFsyMDEyLTA4LTMxIDIxOjAyOjQyXSAgICAgSVJR
IDQ3IFZlYzE2OToNCihYRU4pIFsyMDEyLTA4LTMxIDIxOjAyOjQyXSAgICAgICBBcGljIDB4
MDEsIFBpbiAyMzogdmVjPWE5IGRlbGl2ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0dXM9MCBwb2xh
cml0eT0xIGlycj0wIHRyaWc9TCBtYXNrPTAgZGVzdF9pZDo0DQooWEVOKSBbMjAxMi0wOC0z
MSAyMTowMjo0Ml0gICAgIElSUSA1MSBWZWMyMDk6DQooWEVOKSBbMjAxMi0wOC0zMSAyMTow
Mjo0Ml0gICAgICAgQXBpYyAweDAxLCBQaW4gMjc6IHZlYz1kMSBkZWxpdmVyeT1Mb1ByaSBk
ZXN0PUwgc3RhdHVzPTAgcG9sYXJpdHk9MSBpcnI9MCB0cmlnPUwgbWFzaz0xIGRlc3RfaWQ6
NjMNCihYRU4pIFsyMDEyLTA4LTMxIDIxOjAyOjQyXSAgICAgSVJRIDUyIFZlYzE2MDoNCihY
RU4pIFsyMDEyLTA4LTMxIDIxOjAyOjQyXSAgICAgICBBcGljIDB4MDEsIFBpbiAyODogdmVj
PWEwIGRlbGl2ZXJ5PUxvUHJpIGRlc3Q9TCBzdGF0dXM9MCBwb2xhcml0eT0xIGlycj0wIHRy
aWc9TCBtYXNrPTEgZGVzdF9pZDo2Mw0KKFhFTikgWzIwMTItMDgtMzEgMjE6MDI6NDJdICAg
ICBJUlEgNTMgVmVjMTY4Og0KKFhFTikgWzIwMTItMDgtMzEgMjE6MDI6NDJdICAgICAgIEFw
aWMgMHgwMSwgUGluIDI5OiB2ZWM9YTggZGVsaXZlcnk9TG9QcmkgZGVzdD1MIHN0YXR1cz0w
IHBvbGFyaXR5PTEgaXJyPTAgdHJpZz1MIG1hc2s9MSBkZXN0X2lkOjYzDQooWEVOKSBbMjAx
Mi0wOC0zMSAyMTowMjo0Ml0gICAgIElSUSA1NCBWZWMxNzY6DQooWEVOKSBbMjAxMi0wOC0z
MSAyMTowMjo0Ml0gICAgICAgQXBpYyAweDAxLCBQaW4gMzA6IHZlYz1iMCBkZWxpdmVyeT1M
b1ByaSBkZXN0PUwgc3RhdHVzPTAgcG9sYXJpdHk9MSBpcnI9MCB0cmlnPUwgbWFzaz0xIGRl
c3RfaWQ6NjMNCihYRU4pIFsyMDEyLTA4LTMxIDIxOjExOjU3XSAqKioqKioqKioqKiBWTUNC
IEFyZWFzICoqKioqKioqKioqKioqDQooWEVOKSBbMjAxMi0wOC0zMSAyMToxMTo1N10gKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioNCg==
------------01015A0E4152274D9
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

------------01015A0E4152274D9--



From xen-devel-bounces@lists.xen.org Fri Aug 31 21:45:45 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 21:45:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7Z1o-0007P7-G3; Fri, 31 Aug 2012 21:45:24 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1T7Z1m-0007P2-E1
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 21:45:22 +0000
Received: from [85.158.143.35:46307] by server-2.bemta-4.messagelabs.com id
	E8/25-21239-17031405; Fri, 31 Aug 2012 21:45:21 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-4.tower-21.messagelabs.com!1346449517!5712015!1
X-Originating-IP: [188.40.164.121]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30321 invoked from network); 31 Aug 2012 21:45:18 -0000
Received: from static.121.164.40.188.clients.your-server.de (HELO
	smtp.eikelenboom.it) (188.40.164.121)
	by server-4.tower-21.messagelabs.com with AES256-SHA encrypted SMTP;
	31 Aug 2012 21:45:18 -0000
Received: from 26-69-ftth.onsneteindhoven.nl ([88.159.69.26]:50422
	helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.72) (envelope-from <linux@eikelenboom.it>)
	id 1T7Yyb-0008OC-IH; Fri, 31 Aug 2012 23:42:05 +0200
Date: Fri, 31 Aug 2012 23:45:12 +0200
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <647712821.20120831234512@eikelenboom.it>
To: Santosh Jodh <santosh.jodh@citrix.com>, wei.wang2@amd.com
MIME-Version: 1.0
Content-Type: multipart/mixed;
 boundary="----------01015A0E4152274D9"
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: [Xen-devel] Using debug-key 'o:  Dump IOMMU p2m table,
	locks up machine
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

------------01015A0E4152274D9
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit


I was trying to use the 'o' debug key to gather information for a bug report about an "AMD-Vi: IO_PAGE_FAULT".

The result:
- When using "xl debug-keys o", the machine seems to enter an infinite loop; I can hardly log in, and it eventually ends in a kernel RCU stall and a complete lockup.
- When using the serial console, I get an endless stream of "gfn:  mfn: " lines, while on the normal console the S-ATA devices start reporting errors.

So either way the machine is trashed; the other debug keys work fine.

The machine has an AMD 890FX chipset and an AMD Phenom X6 processor.

xl dmesg output from bootup, plus output from some other debug keys, is attached.

--

Sander
------------01015A0E4152274D9
Content-Type: text/plain;
 name="xl-dmesg.txt"
Content-transfer-encoding: base64
Content-Disposition: attachment;
 filename="xl-dmesg.txt"

IF9fICBfXyAgICAgICAgICAgIF8gIF8gICAgX19fXyAgICBfX18gICAgICAgICAgICAgIF8g
IF8gICAgICAgICAgICAgICAgICAgICANCiBcIFwvIC9fX18gXyBfXyAgIHwgfHwgfCAgfF9f
XyBcICAvIF8gXCAgICBfIF9fIF9fX3wgfHwgfCAgICAgXyBfXyAgXyBfXyBfX18gDQogIFwg
IC8vIF8gXCAnXyBcICB8IHx8IHxfICAgX18pIHx8IHwgfCB8X198ICdfXy8gX198IHx8IHxf
IF9ffCAnXyBcfCAnX18vIF8gXA0KICAvICBcICBfXy8gfCB8IHwgfF9fICAgX3wgLyBfXy8g
fCB8X3wgfF9ffCB8IHwgKF9ffF9fICAgX3xfX3wgfF8pIHwgfCB8ICBfXy8NCiAvXy9cX1xf
X198X3wgfF98ICAgIHxffChfKV9fX19fKF8pX19fLyAgIHxffCAgXF9fX3wgIHxffCAgICB8
IC5fXy98X3wgIFxfX198DQogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgfF98ICAgICAgICAgICAgIA0KKFhFTikgWGVuIHZl
cnNpb24gNC4yLjAtcmM0LXByZSAocm9vdEBkeW5kbnMub3JnKSAoZ2NjIChEZWJpYW4gNC40
LjUtOCkgNC40LjUpIFRodSBBdWcgMzAgMjM6NTg6MDMgQ0VTVCAyMDEyDQooWEVOKSBMYXRl
c3QgQ2hhbmdlU2V0OiBUdWUgQXVnIDI4IDIyOjQwOjQ1IDIwMTIgKzAxMDAgMjU3ODY6YTBi
NWY4MTAyYTAwDQooWEVOKSBCb290bG9hZGVyOiBHUlVCIDEuOTgrMjAxMDA4MDQtMTQrc3F1
ZWV6ZTENCihYRU4pIENvbW1hbmQgbGluZTogZG9tMF9tZW09MTAyNE0gbG9nbHZsPWFsbCBs
b2dsdmxfZ3Vlc3Q9YWxsIGNvbnNvbGVfdGltZXN0YW1wcyB2Z2E9Z2Z4LTEyODB4MTAyNHgz
MiBjcHVpZGxlIGNwdWZyZXE9eGVuIG5vcmVib290IGRlYnVnIGxhcGljPWRlYnVnIGFwaWNf
dmVyYm9zaXR5PWRlYnVnIGFwaWM9ZGVidWcgYWNwaV9lbmZvcmNlX3Jlc291cmNlcz1sYXgg
aW9tbXU9b24sdmVyYm9zZSxhbWQtaW9tbXUtZGVidWcgY29tMT0zODQwMCw4bjEgY29uc29s
ZT12Z2EsY29tMQ0KKFhFTikgVmlkZW8gaW5mb3JtYXRpb246DQooWEVOKSAgVkdBIGlzIGdy
YXBoaWNzIG1vZGUgMTI4MHgxMDI0LCAzMiBicHANCihYRU4pICBWQkUvRERDIG1ldGhvZHM6
IFYyOyBFRElEIHRyYW5zZmVyIHRpbWU6IDEgc2Vjb25kcw0KKFhFTikgRGlzYyBpbmZvcm1h
dGlvbjoNCihYRU4pICBGb3VuZCAyIE1CUiBzaWduYXR1cmVzDQooWEVOKSAgRm91bmQgMiBF
REQgaW5mb3JtYXRpb24gc3RydWN0dXJlcw0KKFhFTikgWGVuLWU4MjAgUkFNIG1hcDoNCihY
RU4pICAwMDAwMDAwMDAwMDAwMDAwIC0gMDAwMDAwMDAwMDA5YjAwMCAodXNhYmxlKQ0KKFhF
TikgIDAwMDAwMDAwMDAwOWIwMDAgLSAwMDAwMDAwMDAwMGEwMDAwIChyZXNlcnZlZCkNCihY
RU4pICAwMDAwMDAwMDAwMGU0MDAwIC0gMDAwMDAwMDAwMDEwMDAwMCAocmVzZXJ2ZWQpDQoo
WEVOKSAgMDAwMDAwMDAwMDEwMDAwMCAtIDAwMDAwMDAwYWZlOTAwMDAgKHVzYWJsZSkNCihY
RU4pICAwMDAwMDAwMGFmZTkwMDAwIC0gMDAwMDAwMDBhZmU5ZTAwMCAoQUNQSSBkYXRhKQ0K
KFhFTikgIDAwMDAwMDAwYWZlOWUwMDAgLSAwMDAwMDAwMGFmZWUwMDAwIChBQ1BJIE5WUykN
CihYRU4pICAwMDAwMDAwMGFmZWUwMDAwIC0gMDAwMDAwMDBhZmYwMDAwMCAocmVzZXJ2ZWQp
DQooWEVOKSAgMDAwMDAwMDBmZmUwMDAwMCAtIDAwMDAwMDAxMDAwMDAwMDAgKHJlc2VydmVk
KQ0KKFhFTikgIDAwMDAwMDAxMDAwMDAwMDAgLSAwMDAwMDAwMjUwMDAwMDAwICh1c2FibGUp
DQooWEVOKSBBQ1BJOiBSU0RQIDAwMEZCMTIwLCAwMDE0IChyMCBBQ1BJQU0pDQooWEVOKSBB
Q1BJOiBSU0RUIEFGRTkwMDAwLCAwMDQ4IChyMSBNU0kgICAgT0VNU0xJQyAgMjAxMDA2MjIg
TVNGVCAgICAgICA5NykNCihYRU4pIEFDUEk6IEZBQ1AgQUZFOTAyMDAsIDAwODQgKHIxIDc2
NDBNUyBBNzY0MDEwMCAyMDEwMDYyMiBNU0ZUICAgICAgIDk3KQ0KKFhFTikgQUNQSTogRFNE
VCBBRkU5MDVFMCwgOTQ0OSAocjEgIEE3NjQwIEE3NjQwMTAwICAgICAgMTAwIElOVEwgMjAw
NTExMTcpDQooWEVOKSBBQ1BJOiBGQUNTIEFGRTlFMDAwLCAwMDQwDQooWEVOKSBBQ1BJOiBB
UElDIEFGRTkwMzkwLCAwMDg4IChyMSA3NjQwTVMgQTc2NDAxMDAgMjAxMDA2MjIgTVNGVCAg
ICAgICA5NykNCihYRU4pIEFDUEk6IE1DRkcgQUZFOTA0MjAsIDAwM0MgKHIxIDc2NDBNUyBP
RU1NQ0ZHICAyMDEwMDYyMiBNU0ZUICAgICAgIDk3KQ0KKFhFTikgQUNQSTogU0xJQyBBRkU5
MDQ2MCwgMDE3NiAocjEgTVNJICAgIE9FTVNMSUMgIDIwMTAwNjIyIE1TRlQgICAgICAgOTcp
DQooWEVOKSBBQ1BJOiBPRU1CIEFGRTlFMDQwLCAwMDcyIChyMSA3NjQwTVMgQTc2NDAxMDAg
MjAxMDA2MjIgTVNGVCAgICAgICA5NykNCihYRU4pIEFDUEk6IFNSQVQgQUZFOUE1RTAsIDAx
MDggKHIzIEFNRCAgICBGQU1fRl8xMCAgICAgICAgMiBBTUQgICAgICAgICAxKQ0KKFhFTikg
QUNQSTogSFBFVCBBRkU5QTZGMCwgMDAzOCAocjEgNzY0ME1TIE9FTUhQRVQgIDIwMTAwNjIy
IE1TRlQgICAgICAgOTcpDQooWEVOKSBBQ1BJOiBJVlJTIEFGRTlBNzMwLCAwMEY4IChyMSAg
QU1EICAgICBSRDg5MFMgICAyMDIwMzEgQU1EICAgICAgICAgMCkNCihYRU4pIEFDUEk6IFNT
RFQgQUZFOUE4MzAsIDBEQTQgKHIxIEEgTSBJICBQT1dFUk5PVyAgICAgICAgMSBBTUQgICAg
ICAgICAxKQ0KKFhFTikgU3lzdGVtIFJBTTogODE5ME1CICg4Mzg2NzMya0IpDQooWEVOKSBT
UkFUOiBQWE0gMCAtPiBBUElDIDAgLT4gTm9kZSAwDQooWEVOKSBTUkFUOiBQWE0gMCAtPiBB
UElDIDEgLT4gTm9kZSAwDQooWEVOKSBTUkFUOiBQWE0gMCAtPiBBUElDIDIgLT4gTm9kZSAw
DQooWEVOKSBTUkFUOiBQWE0gMCAtPiBBUElDIDMgLT4gTm9kZSAwDQooWEVOKSBTUkFUOiBQ
WE0gMCAtPiBBUElDIDQgLT4gTm9kZSAwDQooWEVOKSBTUkFUOiBQWE0gMCAtPiBBUElDIDUg
LT4gTm9kZSAwDQooWEVOKSBTUkFUOiBOb2RlIDAgUFhNIDAgMC1hMDAwMA0KKFhFTikgU1JB
VDogTm9kZSAwIFBYTSAwIDEwMDAwMC1iMDAwMDAwMA0KKFhFTikgU1JBVDogTm9kZSAwIFBY
TSAwIDEwMDAwMDAwMC0yNTAwMDAwMDANCihYRU4pIE5VTUE6IEFsbG9jYXRlZCBtZW1ub2Rl
bWFwIGZyb20gMjRkYmFjMDAwIC0gMjRkYmFmMDAwDQooWEVOKSBOVU1BOiBVc2luZyA4IGZv
ciB0aGUgaGFzaCBzaGlmdC4NCihYRU4pIERvbWFpbiBoZWFwIGluaXRpYWxpc2VkDQooWEVO
KSB2ZXNhZmI6IGZyYW1lYnVmZmVyIGF0IDB4ZmIwMDAwMDAsIG1hcHBlZCB0byAweGZmZmY4
MmMwMDAwMDAwMDAsIHVzaW5nIDYxNDRrLCB0b3RhbCAxNDMzNmsNCihYRU4pIHZlc2FmYjog
bW9kZSBpcyAxMjgweDEwMjR4MzIsIGxpbmVsZW5ndGg9NTEyMCwgZm9udCA4eDE2DQooWEVO
KSB2ZXNhZmI6IFRydWVjb2xvcjogc2l6ZT04Ojg6ODo4LCBzaGlmdD0yNDoxNjo4OjANCihY
RU4pIGZvdW5kIFNNUCBNUC10YWJsZSBhdCAwMDBmZjc4MA0KKFhFTikgRE1JIHByZXNlbnQu
DQooWEVOKSBBUElDIGJvb3Qgc3RhdGUgaXMgJ3hhcGljJw0KKFhFTikgVXNpbmcgQVBJQyBk
cml2ZXIgZGVmYXVsdA0KKFhFTikgQUNQSTogUE0tVGltZXIgSU8gUG9ydDogMHg4MDgNCihY
RU4pIEFDUEk6IEFDUEkgU0xFRVAgSU5GTzogcG0xeF9jbnRbODA0LDBdLCBwbTF4X2V2dFs4
MDAsMF0NCihYRU4pIEFDUEk6ICAgICAgICAgICAgICAgICAgd2FrZXVwX3ZlY1thZmU5ZTAw
Y10sIHZlY19zaXplWzIwXQ0KKFhFTikgQUNQSTogTG9jYWwgQVBJQyBhZGRyZXNzIDB4ZmVl
MDAwMDANCihYRU4pIEFDUEk6IExBUElDIChhY3BpX2lkWzB4MDFdIGxhcGljX2lkWzB4MDBd
IGVuYWJsZWQpDQooWEVOKSBQcm9jZXNzb3IgIzAgMDoxMCBBUElDIHZlcnNpb24gMTYNCihY
RU4pIEFDUEk6IExBUElDIChhY3BpX2lkWzB4MDJdIGxhcGljX2lkWzB4MDFdIGVuYWJsZWQp
DQooWEVOKSBQcm9jZXNzb3IgIzEgMDoxMCBBUElDIHZlcnNpb24gMTYNCihYRU4pIEFDUEk6
IExBUElDIChhY3BpX2lkWzB4MDNdIGxhcGljX2lkWzB4MDJdIGVuYWJsZWQpDQooWEVOKSBQ
cm9jZXNzb3IgIzIgMDoxMCBBUElDIHZlcnNpb24gMTYNCihYRU4pIEFDUEk6IExBUElDIChh
Y3BpX2lkWzB4MDRdIGxhcGljX2lkWzB4MDNdIGVuYWJsZWQpDQooWEVOKSBQcm9jZXNzb3Ig
IzMgMDoxMCBBUElDIHZlcnNpb24gMTYNCihYRU4pIEFDUEk6IExBUElDIChhY3BpX2lkWzB4
MDVdIGxhcGljX2lkWzB4MDRdIGVuYWJsZWQpDQooWEVOKSBQcm9jZXNzb3IgIzQgMDoxMCBB
UElDIHZlcnNpb24gMTYNCihYRU4pIEFDUEk6IExBUElDIChhY3BpX2lkWzB4MDZdIGxhcGlj
X2lkWzB4MDVdIGVuYWJsZWQpDQooWEVOKSBQcm9jZXNzb3IgIzUgMDoxMCBBUElDIHZlcnNp
b24gMTYNCihYRU4pIEFDUEk6IElPQVBJQyAoaWRbMHgwNl0gYWRkcmVzc1sweGZlYzAwMDAw
XSBnc2lfYmFzZVswXSkNCihYRU4pIElPQVBJQ1swXTogYXBpY19pZCA2LCB2ZXJzaW9uIDMz
LCBhZGRyZXNzIDB4ZmVjMDAwMDAsIEdTSSAwLTIzDQooWEVOKSBBQ1BJOiBJT0FQSUMgKGlk
WzB4MDddIGFkZHJlc3NbMHhmZWMyMDAwMF0gZ3NpX2Jhc2VbMjRdKQ0KKFhFTikgSU9BUElD
WzFdOiBhcGljX2lkIDcsIHZlcnNpb24gMzMsIGFkZHJlc3MgMHhmZWMyMDAwMCwgR1NJIDI0
LTU1DQooWEVOKSBBQ1BJOiBJTlRfU1JDX09WUiAoYnVzIDAgYnVzX2lycSAwIGdsb2JhbF9p
cnEgMiBkZmwgZGZsKQ0KKFhFTikgQUNQSTogSU5UX1NSQ19PVlIgKGJ1cyAwIGJ1c19pcnEg
OSBnbG9iYWxfaXJxIDkgbG93IGxldmVsKQ0KKFhFTikgQUNQSTogSVJRMCB1c2VkIGJ5IG92
ZXJyaWRlLg0KKFhFTikgQUNQSTogSVJRMiB1c2VkIGJ5IG92ZXJyaWRlLg0KKFhFTikgQUNQ
STogSVJROSB1c2VkIGJ5IG92ZXJyaWRlLg0KKFhFTikgRW5hYmxpbmcgQVBJQyBtb2RlOiAg
RmxhdC4gIFVzaW5nIDIgSS9PIEFQSUNzDQooWEVOKSBBQ1BJOiBIUEVUIGlkOiAweDgzMDAg
YmFzZTogMHhmZWQwMDAwMA0KKFhFTikgVGFibGUgaXMgbm90IGZvdW5kIQ0KKFhFTikgVXNp
bmcgQUNQSSAoTUFEVCkgZm9yIFNNUCBjb25maWd1cmF0aW9uIGluZm9ybWF0aW9uDQooWEVO
KSBTTVA6IEFsbG93aW5nIDYgQ1BVcyAoMCBob3RwbHVnIENQVXMpDQooWEVOKSBtYXBwZWQg
QVBJQyB0byBmZmZmODJjM2ZmZGZlMDAwIChmZWUwMDAwMCkNCihYRU4pIG1hcHBlZCBJT0FQ
SUMgdG8gZmZmZjgyYzNmZmRmZDAwMCAoZmVjMDAwMDApDQooWEVOKSBtYXBwZWQgSU9BUElD
IHRvIGZmZmY4MmMzZmZkZmMwMDAgKGZlYzIwMDAwKQ0KKFhFTikgSVJRIGxpbWl0czogNTYg
R1NJLCAxMTEyIE1TSS9NU0ktWA0KKFhFTikgVXNpbmcgc2NoZWR1bGVyOiBTTVAgQ3JlZGl0
IFNjaGVkdWxlciAoY3JlZGl0KQ0KKFhFTikgRGV0ZWN0ZWQgMzIwMC4yMTYgTUh6IHByb2Nl
c3Nvci4NCihYRU4pIEluaXRpbmcgbWVtb3J5IHNoYXJpbmcuDQooWEVOKSBBTUQgRmFtMTBo
IG1hY2hpbmUgY2hlY2sgcmVwb3J0aW5nIGVuYWJsZWQNCihYRU4pIFBDSTogTUNGRyBjb25m
aWd1cmF0aW9uIDA6IGJhc2UgZTAwMDAwMDAgc2VnbWVudCAwMDAwIGJ1c2VzIDAwIC0gZmYN
CihYRU4pIFBDSTogTm90IHVzaW5nIE1DRkcgZm9yIHNlZ21lbnQgMDAwMCBidXMgMDAtZmYN
CihYRU4pIEFNRC1WaTogSU9NTVUgMCBFbmFibGVkLg0KKFhFTikgQU1ELVZpOiBFbmFibGlu
ZyBnbG9iYWwgdmVjdG9yIG1hcA0KKFhFTikgSS9PIHZpcnR1YWxpc2F0aW9uIGVuYWJsZWQN
CihYRU4pICAtIERvbTAgbW9kZTogUmVsYXhlZA0KKFhFTikgR2V0dGluZyBWRVJTSU9OOiA4
MDA1MDAxMA0KKFhFTikgR2V0dGluZyBWRVJTSU9OOiA4MDA1MDAxMA0KKFhFTikgR2V0dGlu
ZyBJRDogMA0KKFhFTikgR2V0dGluZyBMVlQwOiA3MDANCihYRU4pIEdldHRpbmcgTFZUMTog
NDAwDQooWEVOKSBlbmFibGVkIEV4dElOVCBvbiBDUFUjMA0KKFhFTikgRVNSIHZhbHVlIGJl
Zm9yZSBlbmFibGluZyB2ZWN0b3I6IDB4MDAwMDAwMDQgIGFmdGVyOiAweDAwMDAwMDAwDQoo
WEVOKSBFTkFCTElORyBJTy1BUElDIElSUXMNCihYRU4pICAtPiBVc2luZyBuZXcgQUNLIG1l
dGhvZA0KKFhFTikgaW5pdCBJT19BUElDIElSUXMNCihYRU4pICBJTy1BUElDIChhcGljaWQt
cGluKSA2LTAsIDYtMTYsIDYtMTcsIDYtMTgsIDYtMTksIDYtMjAsIDYtMjEsIDYtMjIsIDYt
MjMsIDctMCwgNy0xLCA3LTIsIDctMywgNy00LCA3LTUsIDctNiwgNy03LCA3LTgsIDctOSwg
Ny0xMCwgNy0xMSwgNy0xMiwgNy0xMywgNy0xNCwgNy0xNSwgNy0xNiwgNy0xNywgNy0xOCwg
Ny0xOSwgNy0yMCwgNy0yMSwgNy0yMiwgNy0yMywgNy0yNCwgNy0yNSwgNy0yNiwgNy0yNywg
Ny0yOCwgNy0yOSwgNy0zMCwgNy0zMSBub3QgY29ubmVjdGVkLg0KKFhFTikgLi5USU1FUjog
dmVjdG9yPTB4RjAgYXBpYzE9MCBwaW4xPTIgYXBpYzI9LTEgcGluMj0tMQ0KKFhFTikgbnVt
YmVyIG9mIE1QIElSUSBzb3VyY2VzOiAxNS4NCihYRU4pIG51bWJlciBvZiBJTy1BUElDICM2
IHJlZ2lzdGVyczogMjQuDQooWEVOKSBudW1iZXIgb2YgSU8tQVBJQyAjNyByZWdpc3RlcnM6
IDMyLg0KKFhFTikgdGVzdGluZyB0aGUgSU8gQVBJQy4uLi4uLi4uLi4uLi4uLi4uLi4uLi4u
DQooWEVOKSBJTyBBUElDICM2Li4uLi4uDQooWEVOKSAuLi4uIHJlZ2lzdGVyICMwMDogMDYw
MDAwMDANCihYRU4pIC4uLi4uLi4gICAgOiBwaHlzaWNhbCBBUElDIGlkOiAwNg0KKFhFTikg
Li4uLi4uLiAgICA6IERlbGl2ZXJ5IFR5cGU6IDANCihYRU4pIC4uLi4uLi4gICAgOiBMVFMg
ICAgICAgICAgOiAwDQooWEVOKSAuLi4uIHJlZ2lzdGVyICMwMTogMDAxNzgwMjENCihYRU4p
IC4uLi4uLi4gICAgIDogbWF4IHJlZGlyZWN0aW9uIGVudHJpZXM6IDAwMTcNCihYRU4pIC4u
Li4uLi4gICAgIDogUFJRIGltcGxlbWVudGVkOiAxDQooWEVOKSAuLi4uLi4uICAgICA6IElP
IEFQSUMgdmVyc2lvbjogMDAyMQ0KKFhFTikgLi4uLiByZWdpc3RlciAjMDI6IDA2MDAwMDAw
DQooWEVOKSAuLi4uLi4uICAgICA6IGFyYml0cmF0aW9uOiAwNg0KKFhFTikgLi4uLiByZWdp
c3RlciAjMDM6IDA3MDAwMDAwDQooWEVOKSAuLi4uLi4uICAgICA6IEJvb3QgRFQgICAgOiAw
DQooWEVOKSAuLi4uIElSUSByZWRpcmVjdGlvbiB0YWJsZToNCihYRU4pICBOUiBMb2cgUGh5
IE1hc2sgVHJpZyBJUlIgUG9sIFN0YXQgRGVzdCBEZWxpIFZlY3Q6ICAgDQooWEVOKSAgMDAg
MDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAwMA0KKFhFTikgIDAx
IDAwMSAwMSAgMCAgICAwICAgIDAgICAwICAgMCAgICAxICAgIDEgICAgMzANCihYRU4pICAw
MiAwMDEgMDEgIDAgICAgMCAgICAwICAgMCAgIDAgICAgMSAgICAxICAgIEYwDQooWEVOKSAg
MDMgMDAxIDAxICAwICAgIDAgICAgMCAgIDAgICAwICAgIDEgICAgMSAgICAzOA0KKFhFTikg
IDA0IDAwMSAwMSAgMCAgICAwICAgIDAgICAwICAgMCAgICAxICAgIDEgICAgRjENCihYRU4p
ICAwNSAwMDEgMDEgIDAgICAgMCAgICAwICAgMCAgIDAgICAgMSAgICAxICAgIDQwDQooWEVO
KSAgMDYgMDAxIDAxICAwICAgIDAgICAgMCAgIDAgICAwICAgIDEgICAgMSAgICA0OA0KKFhF
TikgIDA3IDAwMSAwMSAgMCAgICAwICAgIDAgICAwICAgMCAgICAxICAgIDEgICAgNTANCihY
RU4pICAwOCAwMDEgMDEgIDAgICAgMCAgICAwICAgMCAgIDAgICAgMSAgICAxICAgIDU4DQoo
WEVOKSAgMDkgMDAxIDAxICAxICAgIDEgICAgMCAgIDEgICAwICAgIDEgICAgMSAgICA2MA0K
KFhFTikgIDBhIDAwMSAwMSAgMCAgICAwICAgIDAgICAwICAgMCAgICAxICAgIDEgICAgNjgN
CihYRU4pICAwYiAwMDEgMDEgIDAgICAgMCAgICAwICAgMCAgIDAgICAgMSAgICAxICAgIDcw
DQooWEVOKSAgMGMgMDAxIDAxICAwICAgIDAgICAgMCAgIDAgICAwICAgIDEgICAgMSAgICA3
OA0KKFhFTikgIDBkIDAwMSAwMSAgMCAgICAwICAgIDAgICAwICAgMCAgICAxICAgIDEgICAg
ODgNCihYRU4pICAwZSAwMDEgMDEgIDAgICAgMCAgICAwICAgMCAgIDAgICAgMSAgICAxICAg
IDkwDQooWEVOKSAgMGYgMDAxIDAxICAwICAgIDAgICAgMCAgIDAgICAwICAgIDEgICAgMSAg
ICA5OA0KKFhFTikgIDEwIDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAgICAwICAgIDAg
ICAgMDANCihYRU4pICAxMSAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAw
ICAgIDAwDQooWEVOKSAgMTIgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAg
MCAgICAwMA0KKFhFTikgIDEzIDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAgICAwICAg
IDAgICAgMDANCihYRU4pICAxNCAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAg
ICAwICAgIDAwDQooWEVOKSAgMTUgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAg
ICAgMCAgICAwMA0KKFhFTikgIDE2IDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAgICAw
ICAgIDAgICAgMDANCihYRU4pICAxNyAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAg
MCAgICAwICAgIDAwDQooWEVOKSBJTyBBUElDICM3Li4uLi4uDQooWEVOKSAuLi4uIHJlZ2lz
dGVyICMwMDogMDcwMDAwMDANCihYRU4pIC4uLi4uLi4gICAgOiBwaHlzaWNhbCBBUElDIGlk
OiAwNw0KKFhFTikgLi4uLi4uLiAgICA6IERlbGl2ZXJ5IFR5cGU6IDANCihYRU4pIC4uLi4u
Li4gICAgOiBMVFMgICAgICAgICAgOiAwDQooWEVOKSAuLi4uIHJlZ2lzdGVyICMwMTogMDAx
RjgwMjENCihYRU4pIC4uLi4uLi4gICAgIDogbWF4IHJlZGlyZWN0aW9uIGVudHJpZXM6IDAw
MUYNCihYRU4pIC4uLi4uLi4gICAgIDogUFJRIGltcGxlbWVudGVkOiAxDQooWEVOKSAuLi4u
Li4uICAgICA6IElPIEFQSUMgdmVyc2lvbjogMDAyMQ0KKFhFTikgLi4uLiByZWdpc3RlciAj
MDI6IDAwMDAwMDAwDQooWEVOKSAuLi4uLi4uICAgICA6IGFyYml0cmF0aW9uOiAwMA0KKFhF
TikgLi4uLiBJUlEgcmVkaXJlY3Rpb24gdGFibGU6DQooWEVOKSAgTlIgTG9nIFBoeSBNYXNr
IFRyaWcgSVJSIFBvbCBTdGF0IERlc3QgRGVsaSBWZWN0OiAgIA0KKFhFTikgIDAwIDAwMCAw
MCAgMSAgICAwICAgIDAgICAwICAgMCAgICAwICAgIDAgICAgMDANCihYRU4pICAwMSAwMDAg
MDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAwICAgIDAwDQooWEVOKSAgMDIgMDAw
IDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAwMA0KKFhFTikgIDAzIDAw
MCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAgICAwICAgIDAgICAgMDANCihYRU4pICAwNCAw
MDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAwICAgIDAwDQooWEVOKSAgMDUg
MDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAwMA0KKFhFTikgIDA2
IDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAgICAwICAgIDAgICAgMDANCihYRU4pICAw
NyAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAwICAgIDAwDQooWEVOKSAg
MDggMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAwMA0KKFhFTikg
IDA5IDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAgICAwICAgIDAgICAgMDANCihYRU4p
ICAwYSAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAwICAgIDAwDQooWEVO
KSAgMGIgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAwMA0KKFhF
TikgIDBjIDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAgICAwICAgIDAgICAgMDANCihY
RU4pICAwZCAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAwICAgIDAwDQoo
WEVOKSAgMGUgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAwMA0K
KFhFTikgIDBmIDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAgICAwICAgIDAgICAgMDAN
CihYRU4pICAxMCAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAwICAgIDAw
DQooWEVOKSAgMTEgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAw
MA0KKFhFTikgIDEyIDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAgICAwICAgIDAgICAg
MDANCihYRU4pICAxMyAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAwICAg
IDAwDQooWEVOKSAgMTQgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAgMCAg
ICAwMA0KKFhFTikgIDE1IDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAgICAwICAgIDAg
ICAgMDANCihYRU4pICAxNiAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAw
ICAgIDAwDQooWEVOKSAgMTcgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAg
MCAgICAwMA0KKFhFTikgIDE4IDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAgICAwICAg
IDAgICAgMDANCihYRU4pICAxOSAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAg
ICAwICAgIDAwDQooWEVOKSAgMWEgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAg
ICAgMCAgICAwMA0KKFhFTikgIDFiIDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAgICAw
ICAgIDAgICAgMDANCihYRU4pICAxYyAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAg
MCAgICAwICAgIDAwDQooWEVOKSAgMWQgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAg
IDAgICAgMCAgICAwMA0KKFhFTikgIDFlIDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAg
ICAwICAgIDAgICAgMDANCihYRU4pICAxZiAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAg
ICAgMCAgICAwICAgIDAwDQooWEVOKSBVc2luZyB2ZWN0b3ItYmFzZWQgaW5kZXhpbmcNCihY
RU4pIElSUSB0byBwaW4gbWFwcGluZ3M6DQooWEVOKSBJUlEyNDAgLT4gMDoyDQooWEVOKSBJ
UlE0OCAtPiAwOjENCihYRU4pIElSUTU2IC0+IDA6Mw0KKFhFTikgSVJRMjQxIC0+IDA6NA0K
KFhFTikgSVJRNjQgLT4gMDo1DQooWEVOKSBJUlE3MiAtPiAwOjYNCihYRU4pIElSUTgwIC0+
IDA6Nw0KKFhFTikgSVJRODggLT4gMDo4DQooWEVOKSBJUlE5NiAtPiAwOjkNCihYRU4pIElS
UTEwNCAtPiAwOjEwDQooWEVOKSBJUlExMTIgLT4gMDoxMQ0KKFhFTikgSVJRMTIwIC0+IDA6
MTINCihYRU4pIElSUTEzNiAtPiAwOjEzDQooWEVOKSBJUlExNDQgLT4gMDoxNA0KKFhFTikg
SVJRMTUyIC0+IDA6MTUNCihYRU4pIC4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4u
Li4uLiBkb25lLg0KKFhFTikgVXNpbmcgbG9jYWwgQVBJQyB0aW1lciBpbnRlcnJ1cHRzLg0K
KFhFTikgY2FsaWJyYXRpbmcgQVBJQyB0aW1lciAuLi4NCihYRU4pIC4uLi4uIENQVSBjbG9j
ayBzcGVlZCBpcyAzMjAwLjE2MDQgTUh6Lg0KKFhFTikgLi4uLi4gaG9zdCBidXMgY2xvY2sg
c3BlZWQgaXMgMjAwLjAwOTggTUh6Lg0KKFhFTikgLi4uLi4gYnVzX3NjYWxlID0gMHgwMDAw
Q0NENw0KKFhFTikgWzIwMTItMDgtMzEgMjA6NTI6MjVdIFBsYXRmb3JtIHRpbWVyIGlzIDE0
LjMxOE1IeiBIUEVUDQooWEVOKSBbMjAxMi0wOC0zMSAyMDo1MjoyNV0gQWxsb2NhdGVkIGNv
bnNvbGUgcmluZyBvZiA2NCBLaUIuDQooWEVOKSBbMjAxMi0wOC0zMSAyMDo1MjoyNV0gSFZN
OiBBU0lEcyBlbmFibGVkLg0KKFhFTikgWzIwMTItMDgtMzEgMjA6NTI6MjVdIFNWTTogU3Vw
cG9ydGVkIGFkdmFuY2VkIGZlYXR1cmVzOg0KKFhFTikgWzIwMTItMDgtMzEgMjA6NTI6MjVd
ICAtIE5lc3RlZCBQYWdlIFRhYmxlcyAoTlBUKQ0KKFhFTikgWzIwMTItMDgtMzEgMjA6NTI6
MjVdICAtIExhc3QgQnJhbmNoIFJlY29yZCAoTEJSKSBWaXJ0dWFsaXNhdGlvbg0KKFhFTikg
WzIwMTItMDgtMzEgMjA6NTI6MjVdICAtIE5leHQtUklQIFNhdmVkIG9uICNWTUVYSVQNCihY
RU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI1XSAgLSBQYXVzZS1JbnRlcmNlcHQgRmlsdGVyDQoo
WEVOKSBbMjAxMi0wOC0zMSAyMDo1MjoyNV0gSFZNOiBTVk0gZW5hYmxlZA0KKFhFTikgWzIw
MTItMDgtMzEgMjA6NTI6MjVdIEhWTTogSGFyZHdhcmUgQXNzaXN0ZWQgUGFnaW5nIChIQVAp
IGRldGVjdGVkDQooWEVOKSBbMjAxMi0wOC0zMSAyMDo1MjoyNV0gSFZNOiBIQVAgcGFnZSBz
aXplczogNGtCLCAyTUIsIDFHQg0KKFhFTikgWzIwMTItMDgtMzEgMjA6NTI6MjRdIG1hc2tl
ZCBFeHRJTlQgb24gQ1BVIzENCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI1XSBtaWNyb2Nv
ZGU6IGNvbGxlY3RfY3B1X2luZm86IHBhdGNoX2lkPTB4MTAwMDBiZg0KKFhFTikgWzIwMTIt
MDgtMzEgMjA6NTI6MjRdIG1hc2tlZCBFeHRJTlQgb24gQ1BVIzINCihYRU4pIFsyMDEyLTA4
LTMxIDIwOjUyOjI1XSBtaWNyb2NvZGU6IGNvbGxlY3RfY3B1X2luZm86IHBhdGNoX2lkPTB4
MTAwMDBiZg0KKFhFTikgWzIwMTItMDgtMzEgMjA6NTI6MjRdIG1hc2tlZCBFeHRJTlQgb24g
Q1BVIzMNCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI1XSBtaWNyb2NvZGU6IGNvbGxlY3Rf
Y3B1X2luZm86IHBhdGNoX2lkPTB4MTAwMDBiZg0KKFhFTikgWzIwMTItMDgtMzEgMjA6NTI6
MjRdIG1hc2tlZCBFeHRJTlQgb24gQ1BVIzQNCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI1
XSBtaWNyb2NvZGU6IGNvbGxlY3RfY3B1X2luZm86IHBhdGNoX2lkPTB4MTAwMDBiZg0KKFhF
TikgWzIwMTItMDgtMzEgMjA6NTI6MjRdIG1hc2tlZCBFeHRJTlQgb24gQ1BVIzUNCihYRU4p
IFsyMDEyLTA4LTMxIDIwOjUyOjI1XSBCcm91Z2h0IHVwIDYgQ1BVcw0KKFhFTikgWzIwMTIt
MDgtMzEgMjA6NTI6MjVdIG1pY3JvY29kZTogY29sbGVjdF9jcHVfaW5mbzogcGF0Y2hfaWQ9
MHgxMDAwMGJmDQooWEVOKSBbMjAxMi0wOC0zMSAyMDo1MjoyNV0gSFBFVCdzIE1TSSBtb2Rl
IGhhc24ndCBiZWVuIHN1cHBvcnRlZCB3aGVuIEludGVycnVwdCBSZW1hcHBpbmcgaXMgZW5h
YmxlZC4NCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI1XSBBQ1BJIHNsZWVwIG1vZGVzOiBT
Mw0KKFhFTikgWzIwMTItMDgtMzEgMjA6NTI6MjVdIE1DQTogVXNlIGh3IHRocmVzaG9sZGlu
ZyB0byBhZGp1c3QgcG9sbGluZyBmcmVxdWVuY3kNCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUy
OjI1XSBtY2hlY2tfcG9sbDogTWFjaGluZSBjaGVjayBwb2xsaW5nIHRpbWVyIHN0YXJ0ZWQu
DQooWEVOKSBbMjAxMi0wOC0zMSAyMDo1MjoyNV0gWGVub3Byb2ZpbGU6IEZhaWxlZCB0byBz
ZXR1cCBJQlMgTFZUIG9mZnNldCwgSUJTQ1RMID0gMHhmZmZmZmZmZg0KKFhFTikgWzIwMTIt
MDgtMzEgMjA6NTI6MjVdICoqKiBMT0FESU5HIERPTUFJTiAwICoqKg0KKFhFTikgWzIwMTIt
MDgtMzEgMjA6NTI6MjVdIGVsZl9wYXJzZV9iaW5hcnk6IHBoZHI6IHBhZGRyPTB4MTAwMDAw
MCBtZW1zej0weGI2ODAwMA0KKFhFTikgWzIwMTItMDgtMzEgMjA6NTI6MjVdIGVsZl9wYXJz
ZV9iaW5hcnk6IHBoZHI6IHBhZGRyPTB4MWMwMDAwMCBtZW1zej0weGQ4MGU4DQooWEVOKSBb
MjAxMi0wOC0zMSAyMDo1MjoyNV0gZWxmX3BhcnNlX2JpbmFyeTogcGhkcjogcGFkZHI9MHgx
Y2Q5MDAwIG1lbXN6PTB4MTNjMDANCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI1XSBlbGZf
cGFyc2VfYmluYXJ5OiBwaGRyOiBwYWRkcj0weDFjZWQwMDAgbWVtc3o9MHhkZTQwMDANCihY
RU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI1XSBlbGZfcGFyc2VfYmluYXJ5OiBtZW1vcnk6IDB4
MTAwMDAwMCAtPiAweDJhZDEwMDANCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI1XSBlbGZf
eGVuX3BhcnNlX25vdGU6IEdVRVNUX09TID0gImxpbnV4Ig0KKFhFTikgWzIwMTItMDgtMzEg
MjA6NTI6MjVdIGVsZl94ZW5fcGFyc2Vfbm90ZTogR1VFU1RfVkVSU0lPTiA9ICIyLjYiDQoo
WEVOKSBbMjAxMi0wOC0zMSAyMDo1MjoyNV0gZWxmX3hlbl9wYXJzZV9ub3RlOiBYRU5fVkVS
U0lPTiA9ICJ4ZW4tMy4wIg0KKFhFTikgWzIwMTItMDgtMzEgMjA6NTI6MjVdIGVsZl94ZW5f
cGFyc2Vfbm90ZTogVklSVF9CQVNFID0gMHhmZmZmZmZmZjgwMDAwMDAwDQooWEVOKSBbMjAx
Mi0wOC0zMSAyMDo1MjoyNV0gZWxmX3hlbl9wYXJzZV9ub3RlOiBFTlRSWSA9IDB4ZmZmZmZm
ZmY4MWNlZDIxMA0KKFhFTikgWzIwMTItMDgtMzEgMjA6NTI6MjVdIGVsZl94ZW5fcGFyc2Vf
bm90ZTogSFlQRVJDQUxMX1BBR0UgPSAweGZmZmZmZmZmODEwMDEwMDANCihYRU4pIFsyMDEy
LTA4LTMxIDIwOjUyOjI1XSBlbGZfeGVuX3BhcnNlX25vdGU6IEZFQVRVUkVTID0gIiF3cml0
YWJsZV9wYWdlX3RhYmxlc3xwYWVfcGdkaXJfYWJvdmVfNGdiIg0KKFhFTikgWzIwMTItMDgt
MzEgMjA6NTI6MjVdIGVsZl94ZW5fcGFyc2Vfbm90ZTogUEFFX01PREUgPSAieWVzIg0KKFhF
TikgWzIwMTItMDgtMzEgMjA6NTI6MjVdIGVsZl94ZW5fcGFyc2Vfbm90ZTogTE9BREVSID0g
ImdlbmVyaWMiDQooWEVOKSBbMjAxMi0wOC0zMSAyMDo1MjoyNV0gZWxmX3hlbl9wYXJzZV9u
b3RlOiB1bmtub3duIHhlbiBlbGYgbm90ZSAoMHhkKQ0KKFhFTikgWzIwMTItMDgtMzEgMjA6
NTI6MjVdIGVsZl94ZW5fcGFyc2Vfbm90ZTogU1VTUEVORF9DQU5DRUwgPSAweDENCihYRU4p
IFsyMDEyLTA4LTMxIDIwOjUyOjI1XSBlbGZfeGVuX3BhcnNlX25vdGU6IEhWX1NUQVJUX0xP
VyA9IDB4ZmZmZjgwMDAwMDAwMDAwMA0KKFhFTikgWzIwMTItMDgtMzEgMjA6NTI6MjVdIGVs
Zl94ZW5fcGFyc2Vfbm90ZTogUEFERFJfT0ZGU0VUID0gMHgwDQooWEVOKSBbMjAxMi0wOC0z
MSAyMDo1MjoyNV0gZWxmX3hlbl9hZGRyX2NhbGNfY2hlY2s6IGFkZHJlc3NlczoNCihYRU4p
IFsyMDEyLTA4LTMxIDIwOjUyOjI1XSAgICAgdmlydF9iYXNlICAgICAgICA9IDB4ZmZmZmZm
ZmY4MDAwMDAwMA0KKFhFTikgWzIwMTItMDgtMzEgMjA6NTI6MjVdICAgICBlbGZfcGFkZHJf
b2Zmc2V0ID0gMHgwDQooWEVOKSBbMjAxMi0wOC0zMSAyMDo1MjoyNV0gICAgIHZpcnRfb2Zm
c2V0ICAgICAgPSAweGZmZmZmZmZmODAwMDAwMDANCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUy
OjI1XSAgICAgdmlydF9rc3RhcnQgICAgICA9IDB4ZmZmZmZmZmY4MTAwMDAwMA0KKFhFTikg
WzIwMTItMDgtMzEgMjA6NTI6MjVdICAgICB2aXJ0X2tlbmQgICAgICAgID0gMHhmZmZmZmZm
ZjgyYWQxMDAwDQooWEVOKSBbMjAxMi0wOC0zMSAyMDo1MjoyNV0gICAgIHZpcnRfZW50cnkg
ICAgICAgPSAweGZmZmZmZmZmODFjZWQyMTANCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI1
XSAgICAgcDJtX2Jhc2UgICAgICAgICA9IDB4ZmZmZmZmZmZmZmZmZmZmZg0KKFhFTikgWzIw
MTItMDgtMzEgMjA6NTI6MjVdICBYZW4gIGtlcm5lbDogNjQtYml0LCBsc2IsIGNvbXBhdDMy
DQooWEVOKSBbMjAxMi0wOC0zMSAyMDo1MjoyNV0gIERvbTAga2VybmVsOiA2NC1iaXQsIFBB
RSwgbHNiLCBwYWRkciAweDEwMDAwMDAgLT4gMHgyYWQxMDAwDQooWEVOKSBbMjAxMi0wOC0z
MSAyMDo1MjoyNV0gUEhZU0lDQUwgTUVNT1JZIEFSUkFOR0VNRU5UOg0KKFhFTikgWzIwMTIt
MDgtMzEgMjA6NTI6MjVdICBEb20wIGFsbG9jLjogICAwMDAwMDAwMjQwMDAwMDAwLT4wMDAw
MDAwMjQ0MDAwMDAwICgyNDI0MjggcGFnZXMgdG8gYmUgYWxsb2NhdGVkKQ0KKFhFTikgWzIw
MTItMDgtMzEgMjA6NTI6MjVdICBJbml0LiByYW1kaXNrOiAwMDAwMDAwMjRmMmZjMDAwLT4w
MDAwMDAwMjRmZmZmODAwDQooWEVOKSBbMjAxMi0wOC0zMSAyMDo1MjoyNV0gVklSVFVBTCBN
RU1PUlkgQVJSQU5HRU1FTlQ6DQooWEVOKSBbMjAxMi0wOC0zMSAyMDo1MjoyNV0gIExvYWRl
ZCBrZXJuZWw6IGZmZmZmZmZmODEwMDAwMDAtPmZmZmZmZmZmODJhZDEwMDANCihYRU4pIFsy
MDEyLTA4LTMxIDIwOjUyOjI1XSAgSW5pdC4gcmFtZGlzazogZmZmZmZmZmY4MmFkMTAwMC0+
ZmZmZmZmZmY4MzdkNDgwMA0KKFhFTikgWzIwMTItMDgtMzEgMjA6NTI6MjVdICBQaHlzLU1h
Y2ggbWFwOiBmZmZmZmZmZjgzN2Q1MDAwLT5mZmZmZmZmZjgzOWQ1MDAwDQooWEVOKSBbMjAx
Mi0wOC0zMSAyMDo1MjoyNV0gIFN0YXJ0IGluZm86ICAgIGZmZmZmZmZmODM5ZDUwMDAtPmZm
ZmZmZmZmODM5ZDU0YjQNCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI1XSAgUGFnZSB0YWJs
ZXM6ICAgZmZmZmZmZmY4MzlkNjAwMC0+ZmZmZmZmZmY4MzlmNzAwMA0KKFhFTikgWzIwMTIt
MDgtMzEgMjA6NTI6MjVdICBCb290IHN0YWNrOiAgICBmZmZmZmZmZjgzOWY3MDAwLT5mZmZm
ZmZmZjgzOWY4MDAwDQooWEVOKSBbMjAxMi0wOC0zMSAyMDo1MjoyNV0gIFRPVEFMOiAgICAg
ICAgIGZmZmZmZmZmODAwMDAwMDAtPmZmZmZmZmZmODNjMDAwMDANCihYRU4pIFsyMDEyLTA4
LTMxIDIwOjUyOjI1XSAgRU5UUlkgQUREUkVTUzogZmZmZmZmZmY4MWNlZDIxMA0KKFhFTikg
WzIwMTItMDgtMzEgMjA6NTI6MjVdIERvbTAgaGFzIG1heGltdW0gNiBWQ1BVcw0KKFhFTikg
WzIwMTItMDgtMzEgMjA6NTI6MjVdIGVsZl9sb2FkX2JpbmFyeTogcGhkciAwIGF0IDB4ZmZm
ZmZmZmY4MTAwMDAwMCAtPiAweGZmZmZmZmZmODFiNjgwMDANCihYRU4pIFsyMDEyLTA4LTMx
IDIwOjUyOjI1XSBlbGZfbG9hZF9iaW5hcnk6IHBoZHIgMSBhdCAweGZmZmZmZmZmODFjMDAw
MDAgLT4gMHhmZmZmZmZmZjgxY2Q4MGU4DQooWEVOKSBbMjAxMi0wOC0zMSAyMDo1MjoyNV0g
ZWxmX2xvYWRfYmluYXJ5OiBwaGRyIDIgYXQgMHhmZmZmZmZmZjgxY2Q5MDAwIC0+IDB4ZmZm
ZmZmZmY4MWNlY2MwMA0KKFhFTikgWzIwMTItMDgtMzEgMjA6NTI6MjVdIGVsZl9sb2FkX2Jp
bmFyeTogcGhkciAzIGF0IDB4ZmZmZmZmZmY4MWNlZDAwMCAtPiAweGZmZmZmZmZmODFkODUw
MDANCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI2XSBTY3J1YmJpbmcgRnJlZSBSQU06IC4u
Li4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4u
Li4uLi4uLi4uLi4uLi4uZG9uZS4NCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI3XSBJbml0
aWFsIGxvdyBtZW1vcnkgdmlycSB0aHJlc2hvbGQgc2V0IGF0IDB4NDAwMCBwYWdlcy4NCihY
RU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI3XSBTdGQuIExvZ2xldmVsOiBBbGwNCihYRU4pIFsy
MDEyLTA4LTMxIDIwOjUyOjI3XSBHdWVzdCBMb2dsZXZlbDogQWxsDQooWEVOKSBbMjAxMi0w
OC0zMSAyMDo1MjoyN10gWGVuIGlzIHJlbGlucXVpc2hpbmcgVkdBIGNvbnNvbGUuDQooWEVO
KSBbMjAxMi0wOC0zMSAyMDo1MjoyOF0gKioqIFNlcmlhbCBpbnB1dCAtPiBET00wICh0eXBl
ICdDVFJMLWEnIHRocmVlIHRpbWVzIHRvIHN3aXRjaCBpbnB1dCB0byBYZW4pDQooWEVOKSBb
MjAxMi0wOC0zMSAyMDo1MjoyOF0gRnJlZWQgMjU2a0IgaW5pdCBtZW1vcnkuDQooWEVOKSBb
MjAxMi0wOC0zMSAyMDo1MjoyOF0gSU9BUElDWzBdOiBTZXQgUENJIHJvdXRpbmcgZW50cnkg
KDYtOSAtPiAweDYwIC0+IElSUSA5IE1vZGU6MSBBY3RpdmU6MSkNCihYRU4pIFsyMDEyLTA4
LTMxIDIwOjUyOjI4XSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAw
MDAwMDAwMGMwMDAwNDA4IGZyb20gMHhjMDAwMDAwMDAxMDAwMDAwIHRvIDB4YzAwODAwMDAw
MTAwMDAwMC4NCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSB0cmFwcy5jOjI1ODQ6ZDAg
RG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDAwNDA5IGZyb20gMHhjMDAwMDAw
MDAxMDAwMDAwIHRvIDB4YzAwODAwMDAwMTAwMDAwMC4NCihYRU4pIFsyMDEyLTA4LTMxIDIw
OjUyOjI4XSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAw
MGMwMDEwMDA0IGZyb20gMHgwMDAwZmZmMmI5OWIxMmJlIHRvIDB4MDAwMDAwMDAwMDAwYWJj
ZC4NCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWlu
IGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDAwNDA4IGZyb20gMHhjMDAwMDAwMDAxMDAw
MDAwIHRvIDB4YzAwODAwMDAwMTAwMDAwMC4NCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4
XSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDAw
NDA5IGZyb20gMHhjMDAwMDAwMDAxMDAwMDAwIHRvIDB4YzAwODAwMDAwMTAwMDAwMC4NCihY
RU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVt
cHRlZCBXUk1TUiAwMDAwMDAwMGMwMDAwNDA4IGZyb20gMHhjMDAwMDAwMDAxMDAwMDAwIHRv
IDB4YzAwODAwMDAwMTAwMDAwMC4NCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSB0cmFw
cy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDAwNDA5IGZy
b20gMHhjMDAwMDAwMDAxMDAwMDAwIHRvIDB4YzAwODAwMDAwMTAwMDAwMC4NCihYRU4pIFsy
MDEyLTA4LTMxIDIwOjUyOjI4XSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBX
Uk1TUiAwMDAwMDAwMGMwMDAwNDA4IGZyb20gMHhjMDAwMDAwMDAxMDAwMDAwIHRvIDB4YzAw
ODAwMDAwMTAwMDAwMC4NCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSB0cmFwcy5jOjI1
ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDAwNDA5IGZyb20gMHhj
MDAwMDAwMDAxMDAwMDAwIHRvIDB4YzAwODAwMDAwMTAwMDAwMC4NCihYRU4pIFsyMDEyLTA4
LTMxIDIwOjUyOjI4XSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAw
MDAwMDAwMGMwMDAwNDA4IGZyb20gMHhjMDAwMDAwMDAxMDAwMDAwIHRvIDB4YzAwODAwMDAw
MTAwMDAwMC4NCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSB0cmFwcy5jOjI1ODQ6ZDAg
RG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDAwNDA5IGZyb20gMHhjMDAwMDAw
MDAxMDAwMDAwIHRvIDB4YzAwODAwMDAwMTAwMDAwMC4NCihYRU4pIFsyMDEyLTA4LTMxIDIw
OjUyOjI4XSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAw
MGMwMDAwNDA4IGZyb20gMHhjMDAwMDAwMDAxMDAwMDAwIHRvIDB4YzAwODAwMDAwMTAwMDAw
MC4NCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSB0cmFwcy5jOjI1ODQ6ZDAgRG9tYWlu
IGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMwMDAwNDA5IGZyb20gMHhjMDAwMDAwMDAxMDAw
MDAwIHRvIDB4YzAwODAwMDAwMTAwMDAwMC4NCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4
XSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjAwLjANCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUy
OjI4XSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjAwLjINCihYRU4pIFsyMDEyLTA4LTMxIDIw
OjUyOjI4XSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjAyLjANCihYRU4pIFsyMDEyLTA4LTMx
IDIwOjUyOjI4XSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjAzLjANCihYRU4pIFsyMDEyLTA4
LTMxIDIwOjUyOjI4XSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjA1LjANCihYRU4pIFsyMDEy
LTA4LTMxIDIwOjUyOjI4XSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjA2LjANCihYRU4pIFsy
MDEyLTA4LTMxIDIwOjUyOjI4XSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjBhLjANCihYRU4p
IFsyMDEyLTA4LTMxIDIwOjUyOjI4XSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjBiLjANCihY
RU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjBjLjAN
CihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjBk
LjANCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSBQQ0kgYWRkIGRldmljZSAwMDAwOjAw
OjExLjANCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSBQQ0kgYWRkIGRldmljZSAwMDAw
OjAwOjEyLjANCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSBQQ0kgYWRkIGRldmljZSAw
MDAwOjAwOjEyLjINCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSBQQ0kgYWRkIGRldmlj
ZSAwMDAwOjAwOjEzLjANCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSBQQ0kgYWRkIGRl
dmljZSAwMDAwOjAwOjEzLjINCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSBQQ0kgYWRk
IGRldmljZSAwMDAwOjAwOjE0LjANCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSBQQ0kg
YWRkIGRldmljZSAwMDAwOjAwOjE0LjMNCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSBQ
Q0kgYWRkIGRldmljZSAwMDAwOjAwOjE0LjQNCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4
XSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjE0LjUNCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUy
OjI4XSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjE1LjANCihYRU4pIFsyMDEyLTA4LTMxIDIw
OjUyOjI4XSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjE2LjANCihYRU4pIFsyMDEyLTA4LTMx
IDIwOjUyOjI4XSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjE2LjINCihYRU4pIFsyMDEyLTA4
LTMxIDIwOjUyOjI4XSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjE4LjANCihYRU4pIFsyMDEy
LTA4LTMxIDIwOjUyOjI4XSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjE4LjENCihYRU4pIFsy
MDEyLTA4LTMxIDIwOjUyOjI4XSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjE4LjINCihYRU4p
IFsyMDEyLTA4LTMxIDIwOjUyOjI4XSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjE4LjMNCihY
RU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjE4LjQN
CihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSBQQ0kgYWRkIGRldmljZSAwMDAwOjBiOjAw
LjANCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSBQQ0kgYWRkIGRldmljZSAwMDAwOjBh
OjAwLjANCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSBQQ0kgYWRkIGRldmljZSAwMDAw
OjBhOjAwLjENCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSBQQ0kgYWRkIGRldmljZSAw
MDAwOjBhOjAwLjINCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSBQQ0kgYWRkIGRldmlj
ZSAwMDAwOjBhOjAwLjMNCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSBQQ0kgYWRkIGRl
dmljZSAwMDAwOjBhOjAwLjQNCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSBQQ0kgYWRk
IGRldmljZSAwMDAwOjBhOjAwLjUNCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSBQQ0kg
YWRkIGRldmljZSAwMDAwOjBhOjAwLjYNCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSBQ
Q0kgYWRkIGRldmljZSAwMDAwOjBhOjAwLjcNCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4
XSBQQ0kgYWRkIGRldmljZSAwMDAwOjA5OjAwLjANCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUy
OjI4XSBQQ0kgYWRkIGRldmljZSAwMDAwOjA4OjAwLjANCihYRU4pIFsyMDEyLTA4LTMxIDIw
OjUyOjI4XSBQQ0kgYWRkIGRldmljZSAwMDAwOjA3OjAwLjANCihYRU4pIFsyMDEyLTA4LTMx
IDIwOjUyOjI4XSBQQ0kgYWRkIGRldmljZSAwMDAwOjA2OjAwLjANCihYRU4pIFsyMDEyLTA4
LTMxIDIwOjUyOjI4XSBQQ0kgYWRkIGRldmljZSAwMDAwOjA2OjAwLjENCihYRU4pIFsyMDEy
LTA4LTMxIDIwOjUyOjI4XSBQQ0kgYWRkIGRldmljZSAwMDAwOjA1OjAwLjANCihYRU4pIFsy
MDEyLTA4LTMxIDIwOjUyOjI4XSBQQ0kgYWRkIGRldmljZSAwMDAwOjA0OjAwLjANCihYRU4p
IFsyMDEyLTA4LTMxIDIwOjUyOjI4XSBQQ0kgYWRkIGRldmljZSAwMDAwOjA0OjAwLjENCihY
RU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSBQQ0kgYWRkIGRldmljZSAwMDAwOjA0OjAwLjIN
CihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSBQQ0kgYWRkIGRldmljZSAwMDAwOjA0OjAw
LjMNCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSBQQ0kgYWRkIGRldmljZSAwMDAwOjA0
OjAwLjQNCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSBQQ0kgYWRkIGRldmljZSAwMDAw
OjA0OjAwLjUNCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSBQQ0kgYWRkIGRldmljZSAw
MDAwOjA0OjAwLjYNCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSBQQ0kgYWRkIGRldmlj
ZSAwMDAwOjA0OjAwLjcNCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSBQQ0kgYWRkIGRl
dmljZSAwMDAwOjAzOjA2LjANCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSBJT0FQSUNb
MF06IFNldCBQQ0kgcm91dGluZyBlbnRyeSAoNi04IC0+IDB4NTggLT4gSVJRIDggTW9kZTow
IEFjdGl2ZTowKQ0KKFhFTikgWzIwMTItMDgtMzEgMjA6NTI6MjhdIElPQVBJQ1swXTogU2V0
IFBDSSByb3V0aW5nIGVudHJ5ICg2LTEzIC0+IDB4ODggLT4gSVJRIDEzIE1vZGU6MCBBY3Rp
dmU6MCkNCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSBJT0FQSUNbMV06IFNldCBQQ0kg
cm91dGluZyBlbnRyeSAoNy0yOCAtPiAweGEwIC0+IElSUSA1MiBNb2RlOjEgQWN0aXZlOjEp
DQooWEVOKSBbMjAxMi0wOC0zMSAyMDo1MjoyOF0gSU9BUElDWzFdOiBTZXQgUENJIHJvdXRp
bmcgZW50cnkgKDctMjkgLT4gMHhhOCAtPiBJUlEgNTMgTW9kZToxIEFjdGl2ZToxKQ0KKFhF
TikgWzIwMTItMDgtMzEgMjA6NTI6MjhdIElPQVBJQ1sxXTogU2V0IFBDSSByb3V0aW5nIGVu
dHJ5ICg3LTMwIC0+IDB4YjAgLT4gSVJRIDU0IE1vZGU6MSBBY3RpdmU6MSkNCihYRU4pIFsy
MDEyLTA4LTMxIDIwOjUyOjI4XSBJT0FQSUNbMF06IFNldCBQQ0kgcm91dGluZyBlbnRyeSAo
Ni0xNiAtPiAweGI4IC0+IElSUSAxNiBNb2RlOjEgQWN0aXZlOjEpDQooWEVOKSBbMjAxMi0w
OC0zMSAyMDo1MjoyOF0gSU9BUElDWzBdOiBTZXQgUENJIHJvdXRpbmcgZW50cnkgKDYtMTgg
LT4gMHhjMCAtPiBJUlEgMTggTW9kZToxIEFjdGl2ZToxKQ0KKFhFTikgWzIwMTItMDgtMzEg
MjA6NTI6MjhdIElPQVBJQ1swXTogU2V0IFBDSSByb3V0aW5nIGVudHJ5ICg2LTE3IC0+IDB4
YzggLT4gSVJRIDE3IE1vZGU6MSBBY3RpdmU6MSkNCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUy
OjI4XSBJT0FQSUNbMV06IFNldCBQQ0kgcm91dGluZyBlbnRyeSAoNy00IC0+IDB4ZDAgLT4g
SVJRIDI4IE1vZGU6MSBBY3RpdmU6MSkNCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSBJ
T0FQSUNbMV06IFNldCBQQ0kgcm91dGluZyBlbnRyeSAoNy01IC0+IDB4ZDggLT4gSVJRIDI5
IE1vZGU6MSBBY3RpdmU6MSkNCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSBJT0FQSUNb
MV06IFNldCBQQ0kgcm91dGluZyBlbnRyeSAoNy02IC0+IDB4MjEgLT4gSVJRIDMwIE1vZGU6
MSBBY3RpdmU6MSkNCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI4XSBJT0FQSUNbMV06IFNl
dCBQQ0kgcm91dGluZyBlbnRyeSAoNy03IC0+IDB4MjkgLT4gSVJRIDMxIE1vZGU6MSBBY3Rp
dmU6MSkNCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjI5XSBJT0FQSUNbMV06IFNldCBQQ0kg
cm91dGluZyBlbnRyeSAoNy0xNiAtPiAweDMxIC0+IElSUSA0MCBNb2RlOjEgQWN0aXZlOjEp
DQooWEVOKSBbMjAxMi0wOC0zMSAyMDo1MjoyOV0gSU9BUElDWzFdOiBTZXQgUENJIHJvdXRp
bmcgZW50cnkgKDctMTcgLT4gMHgzOSAtPiBJUlEgNDEgTW9kZToxIEFjdGl2ZToxKQ0KKFhF
TikgWzIwMTItMDgtMzEgMjA6NTI6MjldIElPQVBJQ1sxXTogU2V0IFBDSSByb3V0aW5nIGVu
dHJ5ICg3LTE4IC0+IDB4NDEgLT4gSVJRIDQyIE1vZGU6MSBBY3RpdmU6MSkNCihYRU4pIFsy
MDEyLTA4LTMxIDIwOjUyOjI5XSBJT0FQSUNbMV06IFNldCBQQ0kgcm91dGluZyBlbnRyeSAo
Ny0xOSAtPiAweDQ5IC0+IElSUSA0MyBNb2RlOjEgQWN0aXZlOjEpDQooWEVOKSBbMjAxMi0w
OC0zMSAyMDo1MjoyOV0gSU9BUElDWzBdOiBTZXQgUENJIHJvdXRpbmcgZW50cnkgKDYtMjIg
LT4gMHg5OSAtPiBJUlEgMjIgTW9kZToxIEFjdGl2ZToxKQ0KKFhFTikgWzIwMTItMDgtMzEg
MjA6NTI6MjldIElPQVBJQ1sxXTogU2V0IFBDSSByb3V0aW5nIGVudHJ5ICg3LTEyIC0+IDB4
YTEgLT4gSVJRIDM2IE1vZGU6MSBBY3RpdmU6MSkNCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUy
OjI5XSBJT0FQSUNbMV06IFNldCBQQ0kgcm91dGluZyBlbnRyeSAoNy0yMyAtPiAweGE5IC0+
IElSUSA0NyBNb2RlOjEgQWN0aXZlOjEpDQooWEVOKSBbMjAxMi0wOC0zMSAyMDo1MjoyOV0g
SU9BUElDWzBdOiBTZXQgUENJIHJvdXRpbmcgZW50cnkgKDYtMTkgLT4gMHhiMSAtPiBJUlEg
MTkgTW9kZToxIEFjdGl2ZToxKQ0KKFhFTikgWzIwMTItMDgtMzEgMjA6NTI6MzBdIElPQVBJ
Q1sxXTogU2V0IFBDSSByb3V0aW5nIGVudHJ5ICg3LTIyIC0+IDB4YzEgLT4gSVJRIDQ2IE1v
ZGU6MSBBY3RpdmU6MSkNCihYRU4pIFsyMDEyLTA4LTMxIDIwOjUyOjMwXSBJT0FQSUNbMV06
IFNldCBQQ0kgcm91dGluZyBlbnRyeSAoNy0yNyAtPiAweGQxIC0+IElSUSA1MSBNb2RlOjEg
QWN0aXZlOjEpDQooWEVOKSBbMjAxMi0wOC0zMSAyMDo1MjozMl0gSU9BUElDWzFdOiBTZXQg
UENJIHJvdXRpbmcgZW50cnkgKDctOSAtPiAweDIyIC0+IElSUSAzMyBNb2RlOjEgQWN0aXZl
OjEpDQooWEVOKSBbMjAxMi0wOC0zMSAyMDo1MzoyN10gdHJhcHMuYzoyNTg0OmQxIERvbWFp
biBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwNCBmcm9tIDB4MDAwMDRlYjQwZDlj
Nzk1YSB0byAweDAwMDAwMDAwMDAwMGFiY2QuDQooWEVOKSBbMjAxMi0wOC0zMSAyMDo1Mzoz
M10gdHJhcHMuYzoyNTg0OmQyIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAx
MDAwNCBmcm9tIDB4MDAwMDgxOTdlMDBiNjA5OSB0byAweDAwMDAwMDAwMDAwMGFiY2QuDQoo
WEVOKSBbMjAxMi0wOC0zMSAyMDo1MzozOV0gdHJhcHMuYzoyNTg0OmQzIERvbWFpbiBhdHRl
bXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwNCBmcm9tIDB4MDAwMDYwMzRkNzBkYTE2NSB0
byAweDAwMDAwMDAwMDAwMGFiY2QuDQooWEVOKSBbMjAxMi0wOC0zMSAyMDo1Mzo0N10gdHJh
cHMuYzoyNTg0OmQ0IERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwNCBm
cm9tIDB4MDAwMDYwMzRkNzBkYTE2NSB0byAweDAwMDAwMDAwMDAwMGFiY2QuDQooWEVOKSBb
MjAxMi0wOC0zMSAyMDo1Mzo1M10gdHJhcHMuYzoyNTg0OmQ1IERvbWFpbiBhdHRlbXB0ZWQg
V1JNU1IgMDAwMDAwMDBjMDAxMDAwNCBmcm9tIDB4MDAwMDgxOTdlMDBiNjA5OSB0byAweDAw
MDAwMDAwMDAwMGFiY2QuDQooWEVOKSBbMjAxMi0wOC0zMSAyMDo1Mzo1OV0gdHJhcHMuYzoy
NTg0OmQ2IERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwNCBmcm9tIDB4
MDAwMDgxOTdlMDBiNjA5OSB0byAweDAwMDAwMDAwMDAwMGFiY2QuDQooWEVOKSBbMjAxMi0w
OC0zMSAyMDo1NDowNV0gdHJhcHMuYzoyNTg0OmQ3IERvbWFpbiBhdHRlbXB0ZWQgV1JNU1Ig
MDAwMDAwMDBjMDAxMDAwNCBmcm9tIDB4MDAwMDgxOTdlMDBiNjA5OSB0byAweDAwMDAwMDAw
MDAwMGFiY2QuDQooWEVOKSBbMjAxMi0wOC0zMSAyMDo1NDoxMF0gdHJhcHMuYzoyNTg0OmQ4
IERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwNCBmcm9tIDB4MDAwMDYw
MzRkNzBkYTE2NSB0byAweDAwMDAwMDAwMDAwMGFiY2QuDQooWEVOKSBbMjAxMi0wOC0zMSAy
MDo1NDoxNl0gdHJhcHMuYzoyNTg0OmQ5IERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAw
MDBjMDAxMDAwNCBmcm9tIDB4MDAwMDRlYjQwZDljNzk1YSB0byAweDAwMDAwMDAwMDAwMGFi
Y2QuDQooWEVOKSBbMjAxMi0wOC0zMSAyMDo1NDoyM10gdHJhcHMuYzoyNTg0OmQxMCBEb21h
aW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDQgZnJvbSAweDAwMDA4MTk3ZTAw
YjYwOTkgdG8gMHgwMDAwMDAwMDAwMDBhYmNkLg0KKFhFTikgWzIwMTItMDgtMzEgMjA6NTQ6
MjldIHRyYXBzLmM6MjU4NDpkMTEgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMGMw
MDEwMDA0IGZyb20gMHgwMDAwODE5N2UwMGI2MDk5IHRvIDB4MDAwMDAwMDAwMDAwYWJjZC4N
CihYRU4pIFsyMDEyLTA4LTMxIDIwOjU0OjM2XSB0cmFwcy5jOjI1ODQ6ZDEyIERvbWFpbiBh
dHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwNCBmcm9tIDB4MDAwMDRlYjQwZDljNzk1
YSB0byAweDAwMDAwMDAwMDAwMGFiY2QuDQooWEVOKSBbMjAxMi0wOC0zMSAyMDo1NDo0Ml0g
dHJhcHMuYzoyNTg0OmQxMyBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAw
MDQgZnJvbSAweDAwMDA2MDM0ZDcwZGExNjUgdG8gMHgwMDAwMDAwMDAwMDBhYmNkLg0KKFhF
TikgWzIwMTItMDgtMzEgMjA6NTQ6NDldIHRyYXBzLmM6MjU4NDpkMTQgRG9tYWluIGF0dGVt
cHRlZCBXUk1TUiAwMDAwMDAwMGMwMDEwMDA0IGZyb20gMHgwMDAwODY2YjgwMmI0MTY3IHRv
IDB4MDAwMDAwMDAwMDAwYWJjZC4NCihYRU4pIFsyMDEyLTA4LTMxIDIwOjU0OjU2XSB0cmFw
cy5jOjI1ODQ6ZDE1IERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwNCBm
cm9tIDB4MDAwMDg2NmI4MDJiNDE2NyB0byAweDAwMDAwMDAwMDAwMGFiY2QuDQooWEVOKSBb
MjAxMi0wOC0zMSAyMTowMjoyNF0gJ2gnIHByZXNzZWQgLT4gc2hvd2luZyBpbnN0YWxsZWQg
aGFuZGxlcnMNCihYRU4pIFsyMDEyLTA4LTMxIDIxOjAyOjI0XSAga2V5ICclJyAoYXNjaWkg
JzI1JykgPT4gdHJhcCB0byB4ZW5kYmcNCihYRU4pIFsyMDEyLTA4LTMxIDIxOjAyOjI0XSAg
a2V5ICcqJyAoYXNjaWkgJzJhJykgPT4gcHJpbnQgYWxsIGRpYWdub3N0aWNzDQooWEVOKSBb
MjAxMi0wOC0zMSAyMTowMjoyNF0gIGtleSAnMCcgKGFzY2lpICczMCcpID0+IGR1bXAgRG9t
MCByZWdpc3RlcnMNCihYRU4pIFsyMDEyLTA4LTMxIDIxOjAyOjI0XSAga2V5ICdBJyAoYXNj
aWkgJzQxJykgPT4gdG9nZ2xlIGFsdGVybmF0aXZlIGtleSBoYW5kbGluZw0KKFhFTikgWzIw
MTItMDgtMzEgMjE6MDI6MjRdICBrZXkgJ0gnIChhc2NpaSAnNDgnKSA9PiBkdW1wIGhlYXAg
aW5mbw0KKFhFTikgWzIwMTItMDgtMzEgMjE6MDI6MjRdICBrZXkgJ0knIChhc2NpaSAnNDkn
KSA9PiBkdW1wIEhWTSBpcnEgaW5mbw0KKFhFTikgWzIwMTItMDgtMzEgMjE6MDI6MjRdICBr
ZXkgJ00nIChhc2NpaSAnNGQnKSA9PiBkdW1wIE1TSSBzdGF0ZQ0KKFhFTikgWzIwMTItMDgt
MzEgMjE6MDI6MjRdICBrZXkgJ04nIChhc2NpaSAnNGUnKSA9PiB0cmlnZ2VyIGFuIE5NSQ0K
KFhFTikgWzIwMTItMDgtMzEgMjE6MDI6MjRdICBrZXkgJ08nIChhc2NpaSAnNGYnKSA9PiB0
b2dnbGUgc2hhZG93IGF1ZGl0cw0KKFhFTikgWzIwMTItMDgtMzEgMjE6MDI6MjRdICBrZXkg
J1EnIChhc2NpaSAnNTEnKSA9PiBkdW1wIFBDSSBkZXZpY2VzDQooWEVOKSBbMjAxMi0wOC0z
MSAyMTowMjoyNF0gIGtleSAnUicgKGFzY2lpICc1MicpID0+IHJlYm9vdCBtYWNoaW5lDQoo
WEVOKSBbMjAxMi0wOC0zMSAyMTowMjoyNF0gIGtleSAnUycgKGFzY2lpICc1MycpID0+IHJl
c2V0IHNoYWRvdyBwYWdldGFibGVzDQooWEVOKSBbMjAxMi0wOC0zMSAyMTowMjoyNF0gIGtl
eSAnYScgKGFzY2lpICc2MScpID0+IGR1bXAgdGltZXIgcXVldWVzDQooWEVOKSBbMjAxMi0w
OC0zMSAyMTowMjoyNF0gIGtleSAnYycgKGFzY2lpICc2MycpID0+IGR1bXAgQUNQSSBDeCBz
dHJ1Y3R1cmVzDQooWEVOKSBbMjAxMi0wOC0zMSAyMTowMjoyNF0gIGtleSAnZCcgKGFzY2lp
ICc2NCcpID0+IGR1bXAgcmVnaXN0ZXJzDQooWEVOKSBbMjAxMi0wOC0zMSAyMTowMjoyNF0g
IGtleSAnZScgKGFzY2lpICc2NScpID0+IGR1bXAgZXZ0Y2huIGluZm8NCihYRU4pIFsyMDEy
LTA4LTMxIDIxOjAyOjI0XSAga2V5ICdnJyAoYXNjaWkgJzY3JykgPT4gcHJpbnQgZ3JhbnQg
dGFibGUgdXNhZ2UNCihYRU4pIFsyMDEyLTA4LTMxIDIxOjAyOjI0XSAga2V5ICdoJyAoYXNj
aWkgJzY4JykgPT4gc2hvdyB0aGlzIG1lc3NhZ2UNCihYRU4pIFsyMDEyLTA4LTMxIDIxOjAy
OjI0XSAga2V5ICdpJyAoYXNjaWkgJzY5JykgPT4gZHVtcCBpbnRlcnJ1cHQgYmluZGluZ3MN
CihYRU4pIFsyMDEyLTA4LTMxIDIxOjAyOjI0XSAga2V5ICdtJyAoYXNjaWkgJzZkJykgPT4g
bWVtb3J5IGluZm8NCihYRU4pIFsyMDEyLTA4LTMxIDIxOjAyOjI0XSAga2V5ICduJyAoYXNj
aWkgJzZlJykgPT4gTk1JIHN0YXRpc3RpY3MNCihYRU4pIFsyMDEyLTA4LTMxIDIxOjAyOjI0
XSAga2V5ICdvJyAoYXNjaWkgJzZmJykgPT4gZHVtcCBpb21tdSBwMm0gdGFibGUNCihYRU4p
IFsyMDEyLTA4LTMxIDIxOjAyOjI0XSAga2V5ICdxJyAoYXNjaWkgJzcxJykgPT4gZHVtcCBk
b21haW4gKGFuZCBndWVzdCBkZWJ1ZykgaW5mbw0KKFhFTikgWzIwMTItMDgtMzEgMjE6MDI6
MjRdICBrZXkgJ3InIChhc2NpaSAnNzInKSA9PiBkdW1wIHJ1biBxdWV1ZXMNCihYRU4pIFsy
MDEyLTA4LTMxIDIxOjAyOjI0XSAga2V5ICdzJyAoYXNjaWkgJzczJykgPT4gZHVtcCBzb2Z0
dHNjIHN0YXRzDQooWEVOKSBbMjAxMi0wOC0zMSAyMTowMjoyNF0gIGtleSAndCcgKGFzY2lp
ICc3NCcpID0+IGRpc3BsYXkgbXVsdGktY3B1IGNsb2NrIGluZm8NCihYRU4pIFsyMDEyLTA4
LTMxIDIxOjAyOjI0XSAga2V5ICd1JyAoYXNjaWkgJzc1JykgPT4gZHVtcCBudW1hIGluZm8N
CihYRU4pIFsyMDEyLTA4LTMxIDIxOjAyOjI0XSAga2V5ICd2JyAoYXNjaWkgJzc2JykgPT4g
ZHVtcCBBTUQtViBWTUNCcw0KKFhFTikgWzIwMTItMDgtMzEgMjE6MDI6MjRdICBrZXkgJ3on
IChhc2NpaSAnN2EnKSA9PiBwcmludCBpb2FwaWMgaW5mbw0KKFhFTikgWzIwMTItMDgtMzEg
MjE6MDI6NDJdIEd1ZXN0IGludGVycnVwdCBpbmZvcm1hdGlvbjoNCihYRU4pIFsyMDEyLTA4
LTMxIDIxOjAyOjQyXSAgICBJUlE6ICAgMCBhZmZpbml0eTowMSB2ZWM6ZjAgdHlwZT1JTy1B
UElDLWVkZ2UgICAgc3RhdHVzPTAwMDAwMDAwIG1hcHBlZCwgdW5ib3VuZA0KKFhFTikgWzIw
MTItMDgtMzEgMjE6MDI6NDJdICAgIElSUTogICAxIGFmZmluaXR5OjAxIHZlYzozMCB0eXBl
PUlPLUFQSUMtZWRnZSAgICBzdGF0dXM9MDAwMDAwMzQgaW4tZmxpZ2h0PTAgZG9tYWluLWxp
c3Q9MDogIDEoLS0tLSksDQooWEVOKSBbMjAxMi0wOC0zMSAyMTowMjo0Ml0gICAgSVJROiAg
IDIgYWZmaW5pdHk6M2YgdmVjOmUyIHR5cGU9WFQtUElDICAgICAgICAgIHN0YXR1cz0wMDAw
MDAwMCBtYXBwZWQsIHVuYm91bmQNCihYRU4pIFsyMDEyLTA4LTMxIDIxOjAyOjQyXSAgICBJ
UlE6ICAgMyBhZmZpbml0eTowMSB2ZWM6MzggdHlwZT1JTy1BUElDLWVkZ2UgICAgc3RhdHVz
PTAwMDAwMDAyIG1hcHBlZCwgdW5ib3VuZA0KKFhFTikgWzIwMTItMDgtMzEgMjE6MDI6NDJd
ICAgIElSUTogICA0IGFmZmluaXR5OjAxIHZlYzpmMSB0eXBlPUlPLUFQSUMtZWRnZSAgICBz
dGF0dXM9MDAwMDAwMDAgbWFwcGVkLCB1bmJvdW5kDQooWEVOKSBbMjAxMi0wOC0zMSAyMTow
Mjo0Ml0gICAgSVJROiAgIDUgYWZmaW5pdHk6MDEgdmVjOjQwIHR5cGU9SU8tQVBJQy1lZGdl
ICAgIHN0YXR1cz0wMDAwMDAwMiBtYXBwZWQsIHVuYm91bmQNCihYRU4pIFsyMDEyLTA4LTMx
IDIxOjAyOjQyXSAgICBJUlE6ICAgNiBhZmZpbml0eTowMSB2ZWM6NDggdHlwZT1JTy1BUElD
LWVkZ2UgICAgc3RhdHVzPTAwMDAwMDAyIG1hcHBlZCwgdW5ib3VuZA0KKFhFTikgWzIwMTIt
MDgtMzEgMjE6MDI6NDJdICAgIElSUTogICA3IGFmZmluaXR5OjAxIHZlYzo1MCB0eXBlPUlP
LUFQSUMtZWRnZSAgICBzdGF0dXM9MDAwMDAwMDIgbWFwcGVkLCB1bmJvdW5kDQooWEVOKSBb
[Base64-encoded attachment (continuation): Xen hypervisor serial log of 2012-08-31 21:02:42 — per-IRQ state (affinity, vector, type=IO-APIC-edge/level, PCI-MSI, AMD-IOMMU-MSI, status, in-flight, domain-list), followed by the "IO-APIC interrupt information" dump (per-pin vec/delivery/polarity/trig/mask/dest_id), ending with the start of the "VMCB Areas" dump at 21:11:57. Raw base64 omitted.]
------------01015A0E4152274D9
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

------------01015A0E4152274D9--



From xen-devel-bounces@lists.xen.org Fri Aug 31 22:22:46 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 22:22:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7ZbZ-0007jO-Nj; Fri, 31 Aug 2012 22:22:21 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1T7ZbY-0007jJ-6J
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 22:22:20 +0000
Received: from [85.158.143.35:46451] by server-1.bemta-4.messagelabs.com id
	33/B1-12504-B1931405; Fri, 31 Aug 2012 22:22:19 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1346451737!14340337!1
X-Originating-IP: [141.146.126.227]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuMjI3ID0+IDczMjYyNw==\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26439 invoked from network); 31 Aug 2012 22:22:18 -0000
Received: from acsinet15.oracle.com (HELO acsinet15.oracle.com)
	(141.146.126.227)
	by server-15.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 31 Aug 2012 22:22:18 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by acsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7VMM4bP020415
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 31 Aug 2012 22:22:05 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7VMM3TN023183
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 31 Aug 2012 22:22:04 GMT
Received: from abhmt115.oracle.com (abhmt115.oracle.com [141.146.116.67])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7VMM3QE024510; Fri, 31 Aug 2012 17:22:03 -0500
Received: from localhost.localdomain (/38.96.16.75)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 31 Aug 2012 15:22:03 -0700
Date: Fri, 31 Aug 2012 18:21:57 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Dan Magenheimer <dan.magenheimer@oracle.com>
Message-ID: <20120831222157.GA21232@localhost.localdomain>
References: <e927526f-b096-43da-a3b1-57d84daea825@default>
	<20120831211325.GB20594@localhost.localdomain>
	<da3e1ce8-0fcf-4c6f-88f9-cea859fe9ec1@default>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <da3e1ce8-0fcf-4c6f-88f9-cea859fe9ec1@default>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] xend/xm on 4.1/4.2 on Fedora (FC17)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Aug 31, 2012 at 02:16:18PM -0700, Dan Magenheimer wrote:
> > From: Konrad Rzeszutek Wilk
> > Subject: Re: xend/xm on 4.1/4.2 on Fedora (FC17)
> > 
> > On Fri, Aug 31, 2012 at 02:08:49PM -0700, Dan Magenheimer wrote:
> > > Is there a how-to for starting/running xm/xend on Fedora (FC17)?
> > > Is it different for Xen 4.1 and 4.2?
> > >
> > > I did find this:
> > > http://wiki.xen.org/wiki/Xen_Common_Problems#Starting_xend_fails.3F
> > > but it doesn't seem to help.  And this:
> > > http://wiki.xen.org/wiki/Fedora_Host_Installation
> > > only addresses xl.
> > >
> > > I expect I need to do something manually to start xencommons or
> > > something like that, but the obvious things don't seem to work,
> > 
> > How are you running this? When you boot up, does it work? Or is this not
> > working after you restart xend a couple of times?
> > 
> > > and I'm not a FC17 expert at all.
> > 
> > service xend start
> > 
> > But you also need to enable it via systemd if it wasn't already enabled.
> > The syntax was something like (look at
> > http://fedoraproject.org/wiki/SysVinit_to_Systemd_Cheatsheet)
> > 
> > systemctl enable xend.service
> > 
> > (though it might not be called xend but something else).
> 
> That was one of the obvious things I tried, but it fails to start :-/

Are you running in graphical mode? If so, see if there are some weird
SELinux warnings.
> 
> Dan
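For reference, the SysVinit-to-systemd transition discussed above maps to commands along these lines. This is a sketch only: the unit names below are assumptions, since FC17 may install the Xen daemons under different names.

```shell
# Sketch only -- unit names are assumptions; list what the Xen packages
# actually installed before relying on them:
systemctl list-unit-files | grep -i xen

# xend needs the daemons started by xencommons (xenstored, xenconsoled):
systemctl enable xencommons.service
systemctl start xencommons.service

# Then enable and start xend itself, if a unit by that name exists:
systemctl enable xend.service
systemctl start xend.service

# When a unit fails to start, systemd records why:
systemctl status xend.service
journalctl -u xend.service
```

On FC17 the SysVinit-style `service xend start` is forwarded to systemctl, so both spellings should behave the same once the unit exists.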


From xen-devel-bounces@lists.xen.org Fri Aug 31 22:25:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 22:25:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7Ze7-0007or-9H; Fri, 31 Aug 2012 22:24:59 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Santosh.Jodh@citrix.com>) id 1T7Ze5-0007oj-1p
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 22:24:57 +0000
Received: from [85.158.143.99:50160] by server-1.bemta-4.messagelabs.com id
	FB/82-12504-8B931405; Fri, 31 Aug 2012 22:24:56 +0000
X-Env-Sender: Santosh.Jodh@citrix.com
X-Msg-Ref: server-2.tower-216.messagelabs.com!1346451891!22594600!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzU0NjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32714 invoked from network); 31 Aug 2012 22:24:54 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 22:24:54 -0000
X-IronPort-AV: E=Sophos;i="4.80,349,1344211200"; d="scan'208";a="206834383"
Received: from sjcpmailmx01.citrite.net ([10.216.14.74])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 22:24:50 +0000
Received: from SJCPMAILBOX01.citrite.net ([10.216.4.72]) by
	SJCPMAILMX01.citrite.net ([10.216.14.74]) with mapi; Fri, 31 Aug 2012
	15:24:50 -0700
From: Santosh Jodh <Santosh.Jodh@citrix.com>
To: Sander Eikelenboom <linux@eikelenboom.it>, "wei.wang2@amd.com"
	<wei.wang2@amd.com>
Date: Fri, 31 Aug 2012 15:24:32 -0700
Thread-Topic: Using debug-key 'o:  Dump IOMMU p2m table, locks up machine
Thread-Index: Ac2HweyIv5hYCE9zTGOx/ZCh1l3JGQAAw+qw
Message-ID: <7914B38A4445B34AA16EB9F1352942F1012F0F6F4AE2@SJCPMAILBOX01.citrite.net>
References: <647712821.20120831234512@eikelenboom.it>
In-Reply-To: <647712821.20120831234512@eikelenboom.it>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Using debug-key 'o:  Dump IOMMU p2m table,
	locks up machine
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Depending on how many VMs you have and the size of the IOMMU p2m table, it can take a while. It should not be infinite though. 

How many VMs do you have running?

Can you please send the serial output when you press 'o'?

Santosh

> -----Original Message-----
> From: Sander Eikelenboom [mailto:linux@eikelenboom.it]
> Sent: Friday, August 31, 2012 2:45 PM
> To: Santosh Jodh; wei.wang2@amd.com
> Cc: xen-devel@lists.xen.org
> Subject: Using debug-key 'o: Dump IOMMU p2m table, locks up machine
> 
> 
> I was trying to use the 'o' debug key to make a bug report about an "AMD-Vi:
> IO_PAGE_FAULT".
> 
> The result:
> - When using "xl debug-keys o", the machine seems stuck in an infinite loop; I
> can hardly log in, eventually resulting in a kernel RCU stall and complete lockup.
> - When using the serial console: I get an infinite stream of "gfn:  mfn: " lines,
> meanwhile on the normal console the S-ATA devices are starting to give errors.
> 
> So either option trashes the machine; the other debug-keys work fine.
> 
> The machine has an AMD 890FX chipset and an AMD Phenom X6 processor.
> 
> xl dmesg with bootup and output from some other debug-keys is attached.
> 
> --
> 
> Sander
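As a sketch of how the requested serial output could be captured from dom0 without a physical serial line, assuming the hypervisor console ring is large enough to hold the dump:

```shell
# Trigger the IOMMU p2m dump ('o') and read back the hypervisor
# console ring; both are standard xl subcommands.
xl debug-keys o
xl dmesg > iommu-p2m-dump.txt

# With a real serial console instead, capture on the remote end, e.g.:
#   screen /dev/ttyS0 115200
```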


From xen-devel-bounces@lists.xen.org Fri Aug 31 22:25:21 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 22:25:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7Ze7-0007or-9H; Fri, 31 Aug 2012 22:24:59 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Santosh.Jodh@citrix.com>) id 1T7Ze5-0007oj-1p
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 22:24:57 +0000
Received: from [85.158.143.99:50160] by server-1.bemta-4.messagelabs.com id
	FB/82-12504-8B931405; Fri, 31 Aug 2012 22:24:56 +0000
X-Env-Sender: Santosh.Jodh@citrix.com
X-Msg-Ref: server-2.tower-216.messagelabs.com!1346451891!22594600!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAyNzU0NjM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32714 invoked from network); 31 Aug 2012 22:24:54 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 22:24:54 -0000
X-IronPort-AV: E=Sophos;i="4.80,349,1344211200"; d="scan'208";a="206834383"
Received: from sjcpmailmx01.citrite.net ([10.216.14.74])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 22:24:50 +0000
Received: from SJCPMAILBOX01.citrite.net ([10.216.4.72]) by
	SJCPMAILMX01.citrite.net ([10.216.14.74]) with mapi; Fri, 31 Aug 2012
	15:24:50 -0700
From: Santosh Jodh <Santosh.Jodh@citrix.com>
To: Sander Eikelenboom <linux@eikelenboom.it>, "wei.wang2@amd.com"
	<wei.wang2@amd.com>
Date: Fri, 31 Aug 2012 15:24:32 -0700
Thread-Topic: Using debug-key 'o:  Dump IOMMU p2m table, locks up machine
Thread-Index: Ac2HweyIv5hYCE9zTGOx/ZCh1l3JGQAAw+qw
Message-ID: <7914B38A4445B34AA16EB9F1352942F1012F0F6F4AE2@SJCPMAILBOX01.citrite.net>
References: <647712821.20120831234512@eikelenboom.it>
In-Reply-To: <647712821.20120831234512@eikelenboom.it>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Using debug-key 'o:  Dump IOMMU p2m table,
	locks up machine
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Depending on how many VMs you have and the size of the IOMMU p2m table, it can take a while. It should not be infinite though. 

How many VMs do you have running?

Can you please send the serial output when you press 'o'?

Santosh

> -----Original Message-----
> From: Sander Eikelenboom [mailto:linux@eikelenboom.it]
> Sent: Friday, August 31, 2012 2:45 PM
> To: Santosh Jodh; wei.wang2@amd.com
> Cc: xen-devel@lists.xen.org
> Subject: Using debug-key 'o: Dump IOMMU p2m table, locks up machine
> 
> 
> I was trying to use the 'o' debug key to make a bug report about an "AMD-Vi:
> IO_PAGE_FAULT".
> 
> The result:
> - When using "xl debug-keys o", the machine seems stuck in an infinite loop; I can
> hardly log in, eventually resulting in a kernel RCU stall and complete lockup.
> - When using the serial console: I get an infinite stream of "gfn:  mfn: " lines;
> meanwhile, on the normal console, S-ATA devices are starting to give errors.
> 
> So either option trashes the machine; other debug-keys work fine.
> 
> The machine has an 890FX chipset and an AMD Phenom X6 processor.
> 
> xl dmesg with bootup and output from some other debug-keys is attached.
> 
> --
> 
> Sander

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 22:35:13 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 22:35:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7Znh-00085y-Bx; Fri, 31 Aug 2012 22:34:53 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dan.magenheimer@oracle.com>) id 1T7Zng-00085q-9w
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 22:34:52 +0000
X-Env-Sender: dan.magenheimer@oracle.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1346452484!9030623!1
X-Originating-IP: [148.87.113.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQ4Ljg3LjExMy4xMTcgPT4gNzI1MzU5\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5797 invoked from network); 31 Aug 2012 22:34:45 -0000
Received: from rcsinet15.oracle.com (HELO rcsinet15.oracle.com)
	(148.87.113.117)
	by server-8.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 31 Aug 2012 22:34:45 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by rcsinet15.oracle.com (Sentrion-MTA-4.2.2/Sentrion-MTA-4.2.2) with
	ESMTP id q7VMYXKl031335
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 31 Aug 2012 22:34:34 GMT
Received: from acsmt356.oracle.com (acsmt356.oracle.com [141.146.40.156])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	q7VMYWOJ020769
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 31 Aug 2012 22:34:33 GMT
Received: from abhmt117.oracle.com (abhmt117.oracle.com [141.146.116.69])
	by acsmt356.oracle.com (8.12.11.20060308/8.12.11) with ESMTP id
	q7VMYWE6030910; Fri, 31 Aug 2012 17:34:32 -0500
MIME-Version: 1.0
Message-ID: <6dc36215-ef4a-4677-b0d5-7913a5ebb5f3@default>
Date: Fri, 31 Aug 2012 15:33:53 -0700 (PDT)
From: Dan Magenheimer <dan.magenheimer@oracle.com>
To: Konrad Wilk <konrad.wilk@oracle.com>
References: <e927526f-b096-43da-a3b1-57d84daea825@default>
	<20120831211325.GB20594@localhost.localdomain>
	<da3e1ce8-0fcf-4c6f-88f9-cea859fe9ec1@default>
	<20120831222157.GA21232@localhost.localdomain>
In-Reply-To: <20120831222157.GA21232@localhost.localdomain>
X-Priority: 3
X-Mailer: Oracle Beehive Extensions for Outlook 2.0.1.7  (607090) [OL
	12.0.6661.5003 (x86)]
Content-Type: multipart/mixed;
	boundary="__1346452472172612698abhmt117.oracle.com"
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] xend/xm on 4.1/4.2 on Fedora (FC17)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--__1346452472172612698abhmt117.oracle.com
Content-Type: text/plain; charset=iso-8859-1
Content-Transfer-Encoding: quoted-printable

> From: Konrad Rzeszutek Wilk
> Sent: Friday, August 31, 2012 4:22 PM
> To: Dan Magenheimer
> Cc: Pasi Kärkkäinen; xen-devel@lists.xen.org
> Subject: Re: xend/xm on 4.1/4.2 on Fedora (FC17)
> 
> On Fri, Aug 31, 2012 at 02:16:18PM -0700, Dan Magenheimer wrote:
> > > From: Konrad Rzeszutek Wilk
> > > Subject: Re: xend/xm on 4.1/4.2 on Fedora (FC17)
> > >
> > > On Fri, Aug 31, 2012 at 02:08:49PM -0700, Dan Magenheimer wrote:
> > > > Is there a how-to for starting/running xm/xend on Fedora (FC17)?
> > > > Is it different for Xen 4.1 and 4.2?
> > > >
> > > > I did find this:
> > > > http://wiki.xen.org/wiki/Xen_Common_Problems#Starting_xend_fails.3F
> > > > but it doesn't seem to help.  And this:
> > > > http://wiki.xen.org/wiki/Fedora_Host_Installation
> > > > only addresses xl.
> > > >
> > > > I expect I need to do something manually to start xencommons or
> > > > something like that but obvious things don't seem to work,
> > >
> > > How are you running this? When you boot up does it work? Or is this not
> > > working after you restart xend a couple of times?
> > >
> > > > and I'm not a FC17 expert at all.
> > >
> > > service xend start
> > >
> > > But you also need to enable it if it wasn't enabled using systemd.
> > > The syntax was something like (look at
> > > http://fedoraproject.org/wiki/SysVinit_to_Systemd_Cheatsheet)
> > >
> > > systemctl enable xend.service
> > >
> > > (though it might not be called xend but something else).
> >
> > That was one of the obvious things I tried, but it fails to start :-/
> 
> Are you running in graphical mode? If so see if there are some weird
> SELinux warnings.

SELinux is disabled.  But yes, I am booting in graphical mode.

Hmmm... manually running "/usr/sbin/xend start" seems to work though.
I guess that is all I need as I can start it in /etc/rc.d/rc.local.

Thanks for the help!
Dan

--__1346452472172612698abhmt117.oracle.com
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--__1346452472172612698abhmt117.oracle.com--


From xen-devel-bounces@lists.xen.org Fri Aug 31 22:58:32 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 22:58:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7aAC-0008HA-Vl; Fri, 31 Aug 2012 22:58:08 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Santosh.Jodh@citrix.com>) id 1T7aAB-0008H5-KM
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 22:58:07 +0000
Received: from [85.158.143.35:41071] by server-1.bemta-4.messagelabs.com id
	22/8C-12504-E7141405; Fri, 31 Aug 2012 22:58:06 +0000
X-Env-Sender: Santosh.Jodh@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1346453883!13621648!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjc1NTc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4582 invoked from network); 31 Aug 2012 22:58:04 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 22:58:04 -0000
X-IronPort-AV: E=Sophos;i="4.80,349,1344211200"; d="scan'208";a="36489112"
Received: from sjcpmailmx01.citrite.net ([10.216.14.74])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 22:58:02 +0000
Received: from SJCPMAILBOX01.citrite.net ([10.216.4.72]) by
	SJCPMAILMX01.citrite.net ([10.216.14.74]) with mapi; Fri, 31 Aug 2012
	15:58:02 -0700
From: Santosh Jodh <Santosh.Jodh@citrix.com>
To: Sander Eikelenboom <linux@eikelenboom.it>
Date: Fri, 31 Aug 2012 15:57:45 -0700
Thread-Topic: Using debug-key 'o:  Dump IOMMU p2m table, locks up machine
Thread-Index: Ac2HygBemrKmdBKQSXeAP3wUbiv4UQAATdbA
Message-ID: <7914B38A4445B34AA16EB9F1352942F1012F0F6F4B01@SJCPMAILBOX01.citrite.net>
References: <647712821.20120831234512@eikelenboom.it>
	<7914B38A4445B34AA16EB9F1352942F1012F0F6F4AE2@SJCPMAILBOX01.citrite.net>
	<723041396.20120901004249@eikelenboom.it>
In-Reply-To: <723041396.20120901004249@eikelenboom.it>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
MIME-Version: 1.0
Cc: "wei.wang2@amd.com" <wei.wang2@amd.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Using debug-key 'o:  Dump IOMMU p2m table,
	locks up machine
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The dump should complete - would be curious to see how long it takes on serial console. What baudrate is the console running at?

The code does allow processing of pending softirqs quite frequently. I am not sure why you are still seeing SATA errors.
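The pattern described here, a long-running dump loop that periodically drains pending softirqs so the rest of the system is not starved, can be sketched as below. This is an illustrative Python model, not the actual Xen code; process_pending_softirqs() and the yield interval are stand-ins.

```python
# Sketch of the "yield to softirqs every N iterations" pattern used by
# long-running dump loops. process_pending_softirqs() is a stub for the
# hypervisor call; here it only counts how often the loop would yield.

YIELD_INTERVAL = 128  # illustrative; the real interval is implementation-defined

def process_pending_softirqs(state):
    state["yields"] += 1

def dump_table(entries, state):
    """Walk a (gfn, mfn) table, yielding to softirqs periodically."""
    lines = []
    for i, (gfn, mfn) in enumerate(entries):
        if i % YIELD_INTERVAL == 0:
            process_pending_softirqs(state)
        lines.append("gfn: %#x mfn: %#x" % (gfn, mfn))
    return lines

state = {"yields": 0}
table = [(n, n) for n in range(1024)]  # identity-mapped toy table
out = dump_table(table, state)
print(len(out), state["yields"])  # -> 1024 8
```

Even with such yields in the dump loop itself, the serial line stays saturated for the whole dump, which may be what the guest's SATA timeouts are reacting to.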

> -----Original Message-----
> From: Sander Eikelenboom [mailto:linux@eikelenboom.it]
> Sent: Friday, August 31, 2012 3:43 PM
> To: Santosh Jodh
> Cc: wei.wang2@amd.com; xen-devel@lists.xen.org
> Subject: Re: Using debug-key 'o: Dump IOMMU p2m table, locks up machine
> 
> 
> Saturday, September 1, 2012, 12:24:32 AM, you wrote:
> 
> > Depending on how many VMs you have and the size of the IOMMU p2m
> table, it can take a while. It should not be infinite though.
> 
> > How many VMs do you have running?
> 
> 15
> 
> > Can you please send the serial output when you press 'o'?
> 
> Attached; toward the end you will see the S-ATA errors coming through while the
> dump still runs.
> This is not a complete dump, only a few minutes' worth, after which I did a hard
> reset.
> 
> > Santosh
> 
> >> -----Original Message-----
> >> From: Sander Eikelenboom [mailto:linux@eikelenboom.it]
> >> Sent: Friday, August 31, 2012 2:45 PM
> >> To: Santosh Jodh; wei.wang2@amd.com
> >> Cc: xen-devel@lists.xen.org
> >> Subject: Using debug-key 'o: Dump IOMMU p2m table, locks up machine
> >>
> >>
> >> I was trying to use the 'o' debug key to make a bug report about an "AMD-
> Vi:
> >> IO_PAGE_FAULT".
> >>
> >> The result:
> >> - When using "xl debug-keys o", the machine seems stuck in an infinite loop;
> >> I can hardly log in, eventually resulting in a kernel RCU stall and complete
> lockup.
> >> - When using the serial console: I get an infinite stream of "gfn:  mfn: "
> >> lines; meanwhile, on the normal console, S-ATA devices are starting to
> give errors.
> >>
> >> So either option trashes the machine; other debug-keys work fine.
> >>
> >> The machine has an 890FX chipset and an AMD Phenom X6 processor.
> >>
> >> xl dmesg with bootup and output from some other debug-keys is
> attached.
> >>
> >> --
> >>
> >> Sander
> 
> 
> 
> 
> --
> Best regards,
>  Sander                            mailto:linux@eikelenboom.it

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 23:17:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 23:17:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7aS6-00006D-MS; Fri, 31 Aug 2012 23:16:38 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1T7aS4-000068-Ci
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 23:16:36 +0000
Received: from [85.158.143.35:9958] by server-1.bemta-4.messagelabs.com id
	CD/B2-12504-3D541405; Fri, 31 Aug 2012 23:16:35 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-6.tower-21.messagelabs.com!1346454991!16131778!1
X-Originating-IP: [188.40.164.121]
X-SpamReason: No, hits=0.0 required=7.0 tests=
From xen-devel-bounces@lists.xen.org Fri Aug 31 23:17:00 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 23:17:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7aS6-00006D-MS; Fri, 31 Aug 2012 23:16:38 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1T7aS4-000068-Ci
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 23:16:36 +0000
Received: from [85.158.143.35:9958] by server-1.bemta-4.messagelabs.com id
	CD/B2-12504-3D541405; Fri, 31 Aug 2012 23:16:35 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-6.tower-21.messagelabs.com!1346454991!16131778!1
X-Originating-IP: [188.40.164.121]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12937 invoked from network); 31 Aug 2012 23:16:31 -0000
Received: from static.121.164.40.188.clients.your-server.de (HELO
	smtp.eikelenboom.it) (188.40.164.121)
	by server-6.tower-21.messagelabs.com with AES256-SHA encrypted SMTP;
	31 Aug 2012 23:16:31 -0000
Received: from 26-69-ftth.onsneteindhoven.nl ([88.159.69.26]:51342
	helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:32)
	(Exim 4.72) (envelope-from <linux@eikelenboom.it>)
	id 1T7aOt-0000Rc-90; Sat, 01 Sep 2012 01:13:19 +0200
Date: Sat, 1 Sep 2012 01:16:25 +0200
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <1377403931.20120901011625@eikelenboom.it>
To: Santosh Jodh <Santosh.Jodh@citrix.com>
In-Reply-To: <7914B38A4445B34AA16EB9F1352942F1012F0F6F4B01@SJCPMAILBOX01.citrite.net>
References: <647712821.20120831234512@eikelenboom.it>
	<7914B38A4445B34AA16EB9F1352942F1012F0F6F4AE2@SJCPMAILBOX01.citrite.net>
	<723041396.20120901004249@eikelenboom.it>
	<7914B38A4445B34AA16EB9F1352942F1012F0F6F4B01@SJCPMAILBOX01.citrite.net>
MIME-Version: 1.0
Cc: "wei.wang2@amd.com" <wei.wang2@amd.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Using debug-key 'o:  Dump IOMMU p2m table,
	locks up machine
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Saturday, September 1, 2012, 12:57:45 AM, you wrote:

> The dump should complete - would be curious to see how long it takes on serial console. What baudrate is the console running at?

I think it would take ages; this part seems to cover only a fraction of the first of the 3 PV guests which have devices passed through.
The console is running at 38400 baud.
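
For a rough sense of scale (a back-of-the-envelope estimate, not a measurement from this dump): at 38400 baud a serial line moves at most ~3840 characters per second, so even one modest guest takes a long time if every page gets its own line. The line length and guest size below are illustrative assumptions:

```python
# Rough estimate of how long a per-page IOMMU p2m dump takes over serial.
# All numbers here are illustrative assumptions, not measurements.
baud = 38400
chars_per_sec = baud / 10          # 8N1 framing: ~10 bits on the wire per character
line_len = 25                      # assumed length of one "gfn: x mfn: y" line
pages = 1 << 18                    # a 1 GB guest: 262144 4K pages

seconds = pages * line_len / chars_per_sec
print(f"{seconds / 60:.0f} minutes")   # ~28 minutes for a single 1 GB guest
```

With three passed-through guests (plus dom0's table), a dump running for "ages" is consistent with this arithmetic.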

And I wonder how valuable the information is: gfn == mfn on every line, incrementing by 1.
Perhaps a more compact way of getting at the interesting data would be handy?
Or is this the intended output?

> The code does allow processing of pending softirqs quite frequently. I am not sure why you are still seeing SATA errors.

The machine is completely unresponsive in every way.

And using it with "xl debug-keys o" is never going to work, I guess, since the flood of information is far larger than what the "xl dmesg" buffer keeps?



>> -----Original Message-----
>> From: Sander Eikelenboom [mailto:linux@eikelenboom.it]
>> Sent: Friday, August 31, 2012 3:43 PM
>> To: Santosh Jodh
>> Cc: wei.wang2@amd.com; xen-devel@lists.xen.org
>> Subject: Re: Using debug-key 'o: Dump IOMMU p2m table, locks up machine
>> 
>> 
>> Saturday, September 1, 2012, 12:24:32 AM, you wrote:
>> 
>> > Depending on how many VMs you have and the size of the IOMMU p2m
>> table, it can take a while. It should not be infinite though.
>> 
>> > How many VMs do you have running?
>> 
>> 15
>> 
>> > Can you please send the serial output when you press 'o'?
>> 
>> Attached, to the end you will see the s-ata errors coming through while the
>> dump still runs.
>> This is not a complete dump, only a few minutes after which i did a hard
>> reset.
>> 
>> > Santosh
>> 
>> >> -----Original Message-----
>> >> From: Sander Eikelenboom [mailto:linux@eikelenboom.it]
>> >> Sent: Friday, August 31, 2012 2:45 PM
>> >> To: Santosh Jodh; wei.wang2@amd.com
>> >> Cc: xen-devel@lists.xen.org
>> >> Subject: Using debug-key 'o: Dump IOMMU p2m table, locks up machine
>> >>
>> >>
>> >> I was trying to use the 'o' debug key to make a bug report about an "AMD-
>> Vi:
>> >> IO_PAGE_FAULT".
>> >>
>> >> The result:
>> >> - When using "xl debug-keys o", the machine seems in a infinite loop,
>> >> can hardly login, eventually resulting in a kernel RCU stall and complete
>> lockup.
>> >> - When using serial console: I get a infinite stream of "gfn:  mfn: "
>> >> lines, mean while on the normal console, S-ATA devices are starting to
>> give errors.
>> >>
>> >> So either option trashes the machine, other debug-keys work fine.
>> >>
>> >> Machine has a 890-fx chipset and AMD phenom x6 proc.
>> >>
>> >> xl dmesg with bootup and output from some other debug-keys is
>> attached.
>> >>
>> >> --
>> >>
>> >> Sander
>> 
>> 
>> 
>> 
>> --
>> Best regards,
>>  Sander                            mailto:linux@eikelenboom.it




-- 
Best regards,
 Sander                            mailto:linux@eikelenboom.it


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 23:44:05 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 23:44:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7asI-0000M9-VZ; Fri, 31 Aug 2012 23:43:42 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ketuzsezr@gmail.com>) id 1T7asH-0000M4-JS
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 23:43:41 +0000
Received: from [85.158.143.99:33003] by server-2.bemta-4.messagelabs.com id
	F4/5B-21239-C2C41405; Fri, 31 Aug 2012 23:43:40 +0000
X-Env-Sender: ketuzsezr@gmail.com
X-Msg-Ref: server-7.tower-216.messagelabs.com!1346456618!24558467!1
X-Originating-IP: [209.85.160.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29755 invoked from network); 31 Aug 2012 23:43:39 -0000
Received: from mail-pb0-f45.google.com (HELO mail-pb0-f45.google.com)
	(209.85.160.45)
	by server-7.tower-216.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 23:43:39 -0000
Received: by pbbjt11 with SMTP id jt11so5443020pbb.32
	for <xen-devel@lists.xen.org>; Fri, 31 Aug 2012 16:43:37 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:date:from:to:cc:subject:message-id:references:mime-version
	:content-type:content-disposition:in-reply-to:user-agent;
	bh=vVvsRlhzH9TusBWmu0ubeEVzB75/DD1zltnW0XpxdYA=;
	b=kVROLJZNubA1IG8xbyqFlIDmGnTeXnFCH1eecYhkoGJp5fO2DDZwqtqTalMGslT92f
	g+urHvKmGYkO8ZCQvwThDt7sccDI1hPgQdL+DPCePFld+oNbUaK04locC5FI9+eRSI40
	2r4OOyVRFrFEfCkY5PNgRwIVm3sZZXhTEeh+gJc42j9gJGsVVfagU+PuGjGGxTMU/T53
	xsk5ZpleIJ7hYLQvdLj3LwPnaL57Ku9BKi1Nta2NMHV6AL2WsLV1nZVW1DCiVr+v8H5o
	7+TGGSEO6xn+sJ7w4wqeDEAwE6AeHpnNI727Ndl/pqHt+sRAQrNXE81JLRGch7uNvkDw
	Ccrg==
Received: by 10.68.240.236 with SMTP id wd12mr20829315pbc.83.1346456617315;
	Fri, 31 Aug 2012 16:43:37 -0700 (PDT)
Received: from localhost.localdomain ([38.96.16.75])
	by mx.google.com with ESMTPS id mr2sm4386942pbb.16.2012.08.31.16.43.36
	(version=TLSv1/SSLv3 cipher=OTHER);
	Fri, 31 Aug 2012 16:43:37 -0700 (PDT)
Date: Fri, 31 Aug 2012 19:43:29 -0400
From: Konrad Rzeszutek Wilk <konrad@kernel.org>
To: Keir Fraser <keir.xen@gmail.com>
Message-ID: <20120831234222.GA22460@localhost.localdomain>
References: <20120828124255.GA32452@aepfle.de>
	<CC6287A7.3D199%keir.xen@gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CC6287A7.3D199%keir.xen@gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] fixed location of share info page in HVM guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Aug 28, 2012 at 02:35:19PM +0100, Keir Fraser wrote:
> On 28/08/2012 13:42, "Olaf Hering" <olaf@aepfle.de> wrote:
> 
> > On Tue, Aug 28, Keir Fraser wrote:
> > 
> >> Okay, that was a bit too clever, trying to hide between IOAPIC and LAPIC
> >> pages. How about a bit lower in memory -- FE700000-FE7FFFFF?
> >> 
> >> Everything in range FC000000-FFFFFFFF should already be marked
> >> E820_RESERVED. You can test that, and also see
> >> tools/firmware/hvmloader/e820.c:build_e820_table() (and note that
> >> RESERVED_MEMBASE == FC000000).
> > 
> > Yes, FC000000-FFFFFFFF has already an E820_RESERVED entry. Within that
> > range the kernel finds the IOAPIC, LAPIC and the HPET, perhaps because
> > they are listed in the ACPI table or because they are found by other
> > ways.
> 
> Yes they are all listed in various ACPI tables.
> 
> > To make the location of the of the shared pages configurable from the
> > tools, does tools/firmware/hvmloader/acpi/dsdt.asl have a way to
> > describe such special region? Maybe the kernel parses that table early

Isn't there also a magic string for said structure? If so, we should also
check for the magic string to make sure we are mapping the proper
"thing." It can be done similarly to how the iBFT or EBDA is found.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Aug 31 23:59:18 2012
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Aug 2012 23:59:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1T7b6w-0000aE-7f; Fri, 31 Aug 2012 23:58:50 +0000
Received: from mail27.messagelabs.com ([193.109.254.147])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Santosh.Jodh@citrix.com>) id 1T7b6u-0000a6-0d
	for xen-devel@lists.xen.org; Fri, 31 Aug 2012 23:58:48 +0000
X-Env-Sender: Santosh.Jodh@citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1346457518!2083090!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAxNjc1NTc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.6.1.3; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15166 invoked from network); 31 Aug 2012 23:58:40 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Aug 2012 23:58:40 -0000
X-IronPort-AV: E=Sophos;i="4.80,351,1344211200"; d="scan'208";a="36491518"
Received: from sjcpmailmx01.citrite.net ([10.216.14.74])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/RC4-MD5;
	31 Aug 2012 23:58:38 +0000
Received: from SJCPMAILBOX01.citrite.net ([10.216.4.72]) by
	SJCPMAILMX01.citrite.net ([10.216.14.74]) with mapi; Fri, 31 Aug 2012
	16:58:37 -0700
From: Santosh Jodh <Santosh.Jodh@citrix.com>
To: Sander Eikelenboom <linux@eikelenboom.it>
Date: Fri, 31 Aug 2012 16:58:20 -0700
Thread-Topic: Using debug-key 'o:  Dump IOMMU p2m table, locks up machine
Thread-Index: Ac2Hzqq+LRXFiknsRxabJDC8y7W61AAALreg
Message-ID: <7914B38A4445B34AA16EB9F1352942F1012F0F6F4B3C@SJCPMAILBOX01.citrite.net>
References: <647712821.20120831234512@eikelenboom.it>
	<7914B38A4445B34AA16EB9F1352942F1012F0F6F4AE2@SJCPMAILBOX01.citrite.net>
	<723041396.20120901004249@eikelenboom.it>
	<7914B38A4445B34AA16EB9F1352942F1012F0F6F4B01@SJCPMAILBOX01.citrite.net>
	<1377403931.20120901011625@eikelenboom.it>
In-Reply-To: <1377403931.20120901011625@eikelenboom.it>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
MIME-Version: 1.0
Cc: "wei.wang2@amd.com" <wei.wang2@amd.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Using debug-key 'o:  Dump IOMMU p2m table,
	locks up machine
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

A 1:1 gfn-to-mfn mapping is not the common case. In the general case, it is hard to say how much the output would shrink by dumping contiguous ranges instead of individual pfns.
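
To illustrate the idea being discussed: a dump can coalesce runs where the gfn-to-mfn offset stays constant and print one line per run, so a fully 1:1 region collapses to a single line while scattered mappings still get one line each. This is only a hypothetical sketch of the approach, not the actual Xen dump code (which walks hardware IOMMU page tables in C):

```python
# Sketch: coalesce a sorted gfn -> mfn table into runs with a constant
# gfn/mfn offset. Illustrative only; not the real IOMMU p2m dumper.

def coalesce(mappings):
    """mappings: sorted list of (gfn, mfn). Yields (gfn_start, gfn_end, mfn_start)."""
    run = None
    for gfn, mfn in mappings:
        if run and gfn == run[1] + 1 and mfn - gfn == run[2] - run[0]:
            run = (run[0], gfn, run[2])        # same offset: extend the current run
        else:
            if run:
                yield run                      # offset changed or gap: emit the run
            run = (gfn, gfn, mfn)
    if run:
        yield run

# A 256-page identity-mapped region plus two relocated pages.
table = [(g, g) for g in range(0x100)] + [(0x200, 0x5200), (0x201, 0x5201)]
for start, end, mfn in coalesce(table):
    print(f"gfn {start:#x}-{end:#x} -> mfn {mfn:#x}")
```

On this input the 256 identity-mapped pages print as one line, so for the reported "gfn == mfn at an increment of 1" case the output would shrink drastically; a pathological fully-scattered table would still print one line per page.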

> -----Original Message-----
> From: Sander Eikelenboom [mailto:linux@eikelenboom.it]
> Sent: Friday, August 31, 2012 4:16 PM
> To: Santosh Jodh
> Cc: wei.wang2@amd.com; xen-devel@lists.xen.org
> Subject: Re: Using debug-key 'o: Dump IOMMU p2m table, locks up machine
> 
> 
> Saturday, September 1, 2012, 12:57:45 AM, you wrote:
> 
> > The dump should complete - would be curious to see how long it takes on
> serial console. What baudrate is the console running at?
> 
> I think for ages, this part seems only to cover a bit of the first of 3 pv guests
> which have devices passed through.
> 38400
> 
> And i wonder if the information is very valuable, gfn == mfn for every line ...
> at an increment of 1 ...
> Perhaps a uhmmm more compact way of getting the interesting data would
> be handy ?
> Or is this the intended output ?
> 
> > The code does allow processing of pending softirqs quite frequently. I am
> not sure why you are still seeing SATA errors.
> 
> The machine is completely unresponsive in every way.
> 
> And using it with "xl debug-keys o" is never going to work i guess, since the
> information flood is far larger than "xl dmesg" keeps ?
> 
> 
> 
> >> -----Original Message-----
> >> From: Sander Eikelenboom [mailto:linux@eikelenboom.it]
> >> Sent: Friday, August 31, 2012 3:43 PM
> >> To: Santosh Jodh
> >> Cc: wei.wang2@amd.com; xen-devel@lists.xen.org
> >> Subject: Re: Using debug-key 'o: Dump IOMMU p2m table, locks up
> >> machine
> >>
> >>
> >> Saturday, September 1, 2012, 12:24:32 AM, you wrote:
> >>
> >> > Depending on how many VMs you have and the size of the IOMMU
> p2m
> >> table, it can take a while. It should not be infinite though.
> >>
> >> > How many VMs do you have running?
> >>
> >> 15
> >>
> >> > Can you please send the serial output when you press 'o'?
> >>
> >> Attached, to the end you will see the s-ata errors coming through
> >> while the dump still runs.
> >> This is not a complete dump, only a few minutes after which i did a
> >> hard reset.
> >>
> >> > Santosh
> >>
> >> >> -----Original Message-----
> >> >> From: Sander Eikelenboom [mailto:linux@eikelenboom.it]
> >> >> Sent: Friday, August 31, 2012 2:45 PM
> >> >> To: Santosh Jodh; wei.wang2@amd.com
> >> >> Cc: xen-devel@lists.xen.org
> >> >> Subject: Using debug-key 'o: Dump IOMMU p2m table, locks up
> >> >> machine
> >> >>
> >> >>
> >> >> I was trying to use the 'o' debug key to make a bug report about
> >> >> an "AMD-
> >> Vi:
> >> >> IO_PAGE_FAULT".
> >> >>
> >> >> The result:
> >> >> - When using "xl debug-keys o", the machine seems in a infinite
> >> >> loop, can hardly login, eventually resulting in a kernel RCU stall
> >> >> and complete
> >> lockup.
> >> >> - When using serial console: I get a infinite stream of "gfn:  mfn: "
> >> >> lines, mean while on the normal console, S-ATA devices are
> >> >> starting to
> >> give errors.
> >> >>
> >> >> So either option trashes the machine, other debug-keys work fine.
> >> >>
> >> >> Machine has a 890-fx chipset and AMD phenom x6 proc.
> >> >>
> >> >> xl dmesg with bootup and output from some other debug-keys is
> >> attached.
> >> >>
> >> >> --
> >> >>
> >> >> Sander
> >>
> >>
> >>
> >>
> >> --
> >> Best regards,
> >>  Sander                            mailto:linux@eikelenboom.it
> 
> 
> 
> 
> --
> Best regards,
>  Sander                            mailto:linux@eikelenboom.it


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

